Nvidia at the inflection point: a cleaner earnings lens meets the next AI spending wave

Nvidia (NVDA) is moving to include stock-based compensation expense in its non-GAAP results starting this quarter, a reporting shift that lands as CEO Jensen Huang highlights an “agentic AI inflection point” and points to a much larger runway for data center spending through 2030.
What happens when Nvidia changes what it counts in non-GAAP earnings?
Nvidia has announced plans to “clean up” its quarterly earnings reports, centered on how stock-based compensation is treated. Colette Kress, Nvidia’s chief financial officer, said: “Starting this quarter, we will be including stock-based compensation expense in our non-GAAP results. Stock-based compensation is a foundational component of our compensation program to attract and retain world-class talent.”
The practical change is straightforward: stock-based compensation will be treated as an expense in the company’s non-GAAP presentation and deducted from the bottom line, rather than added back as it was under the prior approach. The company framed the move as part of an effort to improve how its quarterly results are presented.
The shift aligns with a long-standing critique articulated by Warren Buffett in his 2018 letter to Berkshire Hathaway shareholders. Buffett wrote: “Managements sometimes assert that their company’s stock-based compensation shouldn’t be counted as an expense. (What else could it be – a gift from shareholders?)”
As described in the announcement, stock-based compensation involves paying employees in stock rather than cash, with implications for the income statement, cash flow statement, and balance sheet. By pulling that cost into non-GAAP expense, Nvidia is narrowing the gap between adjusted results and the economic reality of compensating employees with equity.
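To make the mechanics concrete, here is a minimal Python sketch with hypothetical figures (not Nvidia’s reported numbers) showing how adding stock-based compensation (SBC) back inflates adjusted income under the old-style presentation, and how expensing it narrows the gap:

```python
# Hypothetical illustration of the non-GAAP reporting change.
# All dollar figures are invented for the example, in billions.

def non_gaap_income(gaap_income: float, sbc: float, other_adjustments: float,
                    include_sbc: bool) -> float:
    """Rebuild non-GAAP income from GAAP income plus add-backs.

    Old presentation: SBC is excluded as an expense (added back).
    New presentation: SBC stays deducted like any other cost.
    """
    adjusted = gaap_income + other_adjustments
    if not include_sbc:
        adjusted += sbc  # old style: SBC added back, lifting adjusted income
    return adjusted

gaap, sbc, other = 10.0, 1.2, 0.3  # hypothetical quarter

old_style = non_gaap_income(gaap, sbc, other, include_sbc=False)  # 11.5
new_style = non_gaap_income(gaap, sbc, other, include_sbc=True)   # 10.3

print(f"Non-GAAP (SBC excluded): ${old_style:.1f}B")
print(f"Non-GAAP (SBC expensed): ${new_style:.1f}B")
print(f"Gap closed by the change: ${old_style - new_style:.1f}B")
```

The point of the sketch is simply that the larger a company’s equity compensation, the more the old-style add-back flattered adjusted results; expensing SBC removes that flattery.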
What if investor focus shifts from headline growth to the durability of AI infrastructure spending?
Within the AI trade, Nvidia has been closely associated with rising demand for compute since AI investing became a mainstream theme following OpenAI’s release of ChatGPT in late 2022. The stock has advanced 1,100% since January 2023, while shares have added 1% in the past six months.
In February, during the company’s fourth-quarter earnings call, Jensen Huang addressed concerns about whether AI spending is sustainable and whether Nvidia can hold its leadership position in AI infrastructure. “Compute demand is growing exponentially — the agentic AI inflection point has arrived,” Huang told analysts.
He connected that inflection to a shift toward more complex reasoning models, which are more compute-intensive and therefore require more GPUs for training and inference. The same theme was reinforced by JPMorgan strategist Stephanie Aliaga, who described a deeper platform transition: “Beneath the near trillion-dollar headlines is a real computing platform shift decades in the making that is reshaping industries and business models.”
Huang also pointed to what he described as a subsequent phase: “The wave that we’re seeing now is the agentic AI inflection and the next inflection beyond that is physical AI, where we take AI and these agentic systems into physical applications.” He called that “a giant opportunity.” In this framing, demand is not only about today’s generative AI rollout, but about a progression in use cases that increases the compute burden over time.
What happens when the data center budget expands toward 2030?
Huang expects data center spending to reach $3 trillion to $4 trillion annually by 2030. For context, the top five hyperscalers are forecast to spend $700 billion on capital expenditures in 2026, and the same discussion noted that total capex might be somewhere around $1 trillion this year. From that base, the market would triple or quadruple by the end of the decade, which over a four-year horizon implies compound annual growth of roughly 32% to 41%, as checked in the sketch below.
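The growth arithmetic can be verified directly. This short Python sketch assumes a four-year horizon (roughly this year through 2030) and the figures as stated, about $1 trillion now scaling to $3 trillion to $4 trillion annually:

```python
# Implied compound annual growth rate (CAGR) from the cited capex figures.
# Assumption: four years from ~$1T total this year to $3T-$4T by 2030.

def implied_cagr(start: float, end: float, years: int) -> float:
    """CAGR implied by growing from start to end over the given years."""
    return (end / start) ** (1 / years) - 1

start_capex = 1.0            # $ trillions, approximate total this year
low_2030, high_2030 = 3.0, 4.0
years = 4

low = implied_cagr(start_capex, low_2030, years)    # ~31.6%/yr (tripling)
high = implied_cagr(start_capex, high_2030, years)  # ~41.4%/yr (quadrupling)

print(f"Implied annual growth: {low:.1%} to {high:.1%}")
# Consistent, after rounding, with the 32%-41% range cited above.
```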
How that spending translates into opportunity depends on what portion flows to accelerated computing and networking. McKinsey & Company estimates that data center GPUs and networking equipment account for over 50% of data center spending, with Bernstein and TD Cowen reported to have made similar estimates. At that share, a $3 trillion to $4 trillion annual market would put roughly $1.5 trillion to $2 trillion a year in play for GPUs and networking. Nvidia is described as the dominant supplier in both markets, which underpins the argument that the company could have a multitrillion-dollar opportunity in the data center segment alone.
Huang also emphasized an efficiency-focused metric for cloud platforms: inference tokens per watt, described as performance per unit of power consumed. Tokens were defined as the fundamental unit of data processed by AI models during training and inference to enable predictions, content generation, and reasoning. The same explanation noted that token length varies by language and that, in English, a token is roughly equivalent to three-quarters of a word, per OpenAI. The implication is that cloud profitability is closely tied to how many tokens can be processed or generated per watt, making performance-per-power a key commercial lever as inference scales.
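As a rough illustration of the metric, the sketch below computes inference throughput per watt and the electricity cost alone of generating a million tokens. The serving throughput, power draw, and power price are hypothetical, not vendor-reported figures:

```python
# Tokens-per-watt as a serving-efficiency ratio, with illustrative numbers.
# Strictly, this is throughput (tokens/s) divided by power draw (watts).

def tokens_per_watt(tokens_per_second: float, power_watts: float) -> float:
    """Inference throughput per watt of power drawn while serving."""
    return tokens_per_second / power_watts

def cost_per_million_tokens(power_watts: float, tokens_per_second: float,
                            usd_per_kwh: float) -> float:
    """Electricity cost of generating one million tokens."""
    seconds = 1_000_000 / tokens_per_second
    kwh = power_watts * seconds / 3_600_000  # watt-seconds -> kWh
    return kwh * usd_per_kwh

# Hypothetical serving node: 10,000 tokens/s at 5,000 W, $0.08/kWh power.
tps, watts, rate = 10_000.0, 5_000.0, 0.08

print(f"Efficiency: {tokens_per_watt(tps, watts):.2f} tokens/s per watt")
print(f"Energy cost: ${cost_per_million_tokens(watts, tps, rate):.4f} per 1M tokens")
```

Under these assumed numbers the energy cost works out to about a penny per million tokens, which is why doubling tokens per watt translates directly into serving margin for a cloud platform.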
For El-Balad.com readers tracking what comes next, the notable intersection is this: Nvidia is tightening its earnings presentation around stock-based compensation at the same time management is asking markets to focus on a longer arc of AI compute demand, from agentic AI today toward physical AI applications, and a larger data center spending envelope through 2030. The reporting change may reduce room for debate around adjusted profitability, while the spending narrative sets the backdrop for how investors may evaluate the durability of demand.