On February 25, 2026, the global leader in AI chips, NVIDIA (NVDA), released its Q4 FY2026 (ending January 25, 2026) and full-year financial reports: revenue, profit, and data center income nearly all exceeded expectations, with the guidance for the next quarter also being raised. According to the traditional “performance drives stock price” logic, such earnings reports usually imply an upward trend.
However, the market responded differently. The day after the earnings release, NVDA’s stock price fell about 5.46%, erasing roughly $260 billion in market value in a single day by broad estimates. This sharp divergence between strong fundamentals and a weak stock price is not really a question of “earnings authenticity”; rather, the market’s valuation weight is shifting from “current-quarter profits” to “growth duration, capital-expenditure slope, and structural risk.”
The earnings themselves: how strong were they?
According to NVIDIA’s official disclosures, the key data for FY2026 Q4 and the full year are as follows:
· Q4 Revenue: $68.127 billion, up 73% YoY, up 20% QoQ
· Q4 Data Center Revenue: $62.3 billion, up 75% YoY, up 22% QoQ, continuing record highs
· Q4 GAAP Net Profit: $42.96 billion; Non-GAAP Net Profit: $39.55 billion
· Full Year Revenue: $215.938 billion, up 65% YoY
· Full Year GAAP Net Profit: $120.067 billion
· Next quarter (Q1 FY2027) guidance: revenue around $78 billion (±2%)
These figures imply two things: first, the demand for AI infrastructure remains in a strong expansion phase; second, NVIDIA’s revenue structure is increasingly concentrated on the “data center engine.”
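As a quick sanity check, the prior-period revenue bases implied by the reported growth rates, and the sequential growth implied by the guidance midpoint, can be backed out of the disclosed figures. This is rough arithmetic only; the back-derived bases are estimates, not filed figures:

```python
# Back-of-the-envelope check on NVIDIA's disclosed FY2026 Q4 figures.
# All values in $B; prior-period bases are back-derived from the
# reported growth rates, not taken from a filing (estimates only).
q4_revenue = 68.127      # "up 73% YoY, up 20% QoQ"
q1_guide_mid = 78.0      # Q1 FY2027 guidance midpoint

implied_yoy_base = q4_revenue / 1.73   # implied Q4 FY2025 revenue
implied_qoq_base = q4_revenue / 1.20   # implied Q3 FY2026 revenue
guided_qoq_growth = q1_guide_mid / q4_revenue - 1

print(f"Implied Q4 FY2025 revenue: ${implied_yoy_base:.1f}B")
print(f"Implied Q3 FY2026 revenue: ${implied_qoq_base:.1f}B")
print(f"Guided sequential growth: {guided_qoq_growth:.1%}")  # ~14.5%
```

Note that the guidance midpoint implies roughly 14.5% sequential growth, down from the 20% just reported, which is consistent with the deceleration question the market is asking.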
Strength turning into single-point risk: Over-reliance on data centers
The most eye-catching aspect of the earnings report, and also the most sensitive market point, is that Q4 data center revenue reached $62.3 billion out of total revenue of $68.1 billion, accounting for about 91.5%. This means NVIDIA has almost fully bet its growth on the “AI capital expenditure cycle”—the more cloud providers, sovereign nations, and large enterprises invest in computing power, the more NVIDIA resembles a high-growth machine; once capital expenditure shifts from expansion to consolidation, volatility will be amplified accordingly.
Meanwhile, even strong growth in non-data-center businesses offers little effective hedging: automotive, gaming, and professional visualization are simply not on the same scale. Automotive revenue, for example, was about $604 million in the quarter, far too small to cushion cyclical swings in data center demand. In a bull market this structure reads as “highly focused efficiency”; at sentiment turning points it can quickly become a “single-engine dependency” discount.
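The scale mismatch is easy to quantify from the two segment figures the report discloses (illustrative arithmetic only; the remaining non-data-center split is not broken out in the article, so only these two lines are used):

```python
# Rough scale comparison across segments, Q4 FY2026 figures in $B.
# Only data center and automotive are disclosed in the article;
# this is illustrative arithmetic, not a full segment breakdown.
q4_total = 68.127
q4_data_center = 62.3
q4_automotive = 0.604

dc_share = q4_data_center / q4_total           # share of total revenue
scale_ratio = q4_data_center / q4_automotive   # relative segment size

print(f"Data center share of revenue: {dc_share:.1%}")        # ~91.4%
print(f"Data center vs automotive scale: {scale_ratio:.0f}x")  # ~103x
```

At roughly 100x the size of automotive, the data center segment cannot be meaningfully hedged by the smaller businesses in any near-term scenario.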
Rising customer concentration: the throttle in a few hands
The market often summarizes NVIDIA’s customer structure as “more than half of revenue from the top five cloud providers.” Sales concentration rose further in FY2026, with two customers together accounting for 36% of sales. The conclusion is straightforward: NVIDIA’s outsized growth is deeply tied to a handful of mega-clients.
This binding acts as a double-edged sword:
· Upturn: Faster expansion of top clients means more “tax” for NVIDIA
· Downturn: If top clients slow capital expenditure, NVIDIA’s orders and valuation will face pressure
· A subtler risk is the shift in bargaining power: once clients begin systematically backing second suppliers or developing alternatives in-house, NVIDIA’s “monopoly premium” gets squeezed into a mere “leadership premium.”
The market’s decline after the earnings report largely reflects an early discounting of the combined risks of “growth concentration + bargaining power shift.”
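A crude way to see why the market discounts this concentration risk: with two customers at 36% of sales, even moderate cuts by just those two translate into material revenue hits. The cut percentages below are hypothetical scenarios for illustration, not forecasts:

```python
# Toy sensitivity: total revenue impact if the two largest customers
# (36% of FY2026 sales, per the disclosure) cut their purchases.
# The cut percentages are hypothetical scenarios, not forecasts.
top_two_share = 0.36

for cut in (0.10, 0.25, 0.50):
    revenue_hit = top_two_share * cut
    print(f"Top-two spend cut of {cut:.0%} -> "
          f"total revenue hit of {revenue_hit:.1%}")
```

A 25% pullback by just two customers would, in this stylized view, shave roughly 9% off total revenue before any second-order effects on pricing or other buyers.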
Why does “beating expectations” turn negative? Valuation logic shifts from the quarter to the long run
NVIDIA’s multiple quarters of beating expectations have gradually eroded the marginal surprise of “exceeding expectations.” Before the earnings, capital markets had fully priced in the “strong earnings” through positions and derivatives structures, leading to a typical trading outcome: no matter how strong the report, as long as it doesn’t introduce “new growth beyond existing narratives,” profit-taking is likely.
This pattern is what traders call “selling the news.” When the market is already pricing growth into 2027 and beyond, the earnings report’s primary job is not to show that the current quarter can keep blowing out numbers, but to show how long growth can be sustained, in what structure, and under what competitive conditions. Without that longer-term certainty, the seemingly paradoxical combination of “strong fundamentals, weak stock price” can occur.
Is the “AI bubble” the wrong question? It’s really a repricing of capital expenditure and credit
The “AI bubble” is often misunderstood as “AI has no value.” A more accurate view is: AI’s value is undeniable, but the timing mismatch between investment and returns is being seriously priced in.
Cloud providers’ AI capital expenditure keeps climbing steeply, yet monetization is still catching up to the investment. Under high interest rates or profit pressure, the market naturally asks: when will such massive compute investment translate into sustainable profits? If, in the near term, the spending remains “cost without profit” and the capex slope flattens, the valuation center of upstream compute suppliers will be re-rated.
This is not unfamiliar in the crypto industry cycle: infrastructure expansion often precedes application realization. When “supply expansion” outpaces “demand realization,” prices and valuations become highly sensitive to sentiment. AI is at a similar stage, but this time, the “accounts” are not on-chain but in the financial reports of cloud providers and semiconductor giants.
The real threat of competition: Not “someone can make GPUs,” but “customers don’t want to buy from just one”
For a long time, NVIDIA has relied on GPU leadership, CUDA ecosystem, and system solutions as moats. But the key change in the competitive landscape is not a single company’s breakthrough, but a structural shift on the customer side—introducing second suppliers, developing in-house chips, and replacing single-card procurement with system-level solutions.
AMD × Meta: Institutionalizing the second-supplier strategy
Meta’s long-term, high-value cooperation with AMD is not primarily about shifting market share overnight; it sends a signal: mega-clients are using real orders to underwrite alternative solutions and reduce dependence on a single supplier. The direct consequence is that NVIDIA’s bargaining power in future negotiations erodes at the margin, compressing its valuation premium.
The era of inference: shifting from “training” to “cost and latency”
The AI industry’s center of gravity is shifting from cost-insensitive training to cost-sensitive inference. Inference competes on throughput, latency, energy consumption, and unit cost, which opens the door to more specialized new-architecture players. NVIDIA is responding by bringing in inference-related technology and teams (e.g., licensing and integrating with inference chip company Groq), a sign that competition in the inference era has expanded from “chip performance” to “full-stack system efficiency.”
NVIDIA’s second curve: From cloud computing to physical-world operating systems
Viewing NVIDIA solely as “a GPU seller” underestimates its strategic depth. Around this earnings cycle, NVIDIA has been actively pushing platforms for autonomous driving, robotics, industrial simulation, and other “physical AI” directions, and has released open-source capabilities for autonomous-driving inference and safety verification (e.g., Alpamayo). Although these businesses contribute little revenue in the short term, they point in a clear direction: upgrading from “selling tools” to “providing operating-system-level foundations” that lock customers into platforms and ecosystems.
Once this platformization succeeds, NVIDIA’s growth duration will no longer be solely determined by cloud capital expenditure but will increasingly depend on industry digitalization, industrial robotics, and autonomous driving—longer-cycle demands. But before this second curve truly scales, the market will still primarily price based on “data center single engine + capex cycle assets.”
Key variables for 2026: The valuation hinges on three curves, not just one income statement
The core determinant of NVIDIA’s valuation center in 2026 is not “can growth continue,” but “how long can growth be maintained and in what structure.” The market will focus on three verifiable curves:
· Cloud providers’ capital expenditure slope: will it accelerate or decelerate at the margin?
· Inference revenue structure and system penetration: can the shift from “selling GPUs” to “selling complete systems (network interconnects, software stacks, platform tools)” keep raising stickiness and customer value?
· Speed of second-supplier and in-house penetration: the faster alternatives move from pilots to scaled procurement, the more NVIDIA’s premium valuation gets compressed.
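The “duration test” intuition can be sketched with a toy discounted-cash-flow model: holding the growth rate fixed, value is highly sensitive to how many years that growth lasts. Every input below (40% high growth, 3% terminal growth, 10% discount rate, 20-year horizon) is hypothetical and chosen only for illustration:

```python
# Toy "duration test": present value of a cash-flow stream whose high
# growth lasts N years before fading to a terminal rate. All inputs
# are hypothetical, chosen only to show how sensitive value is to
# growth *duration* rather than to the growth rate itself.
def toy_value(growth_years, high_growth=0.40, terminal_growth=0.03,
              discount=0.10, horizon=20, cf=1.0):
    value = 0.0
    for year in range(1, horizon + 1):
        g = high_growth if year <= growth_years else terminal_growth
        cf *= 1 + g                         # grow the cash flow
        value += cf / (1 + discount) ** year  # discount back to today
    return value

base = toy_value(growth_years=2)
for n in (2, 4, 6):
    print(f"{n} years of high growth -> relative value "
          f"{toy_value(n) / base:.2f}x")
```

In this stylized setup, stretching the high-growth phase from two years to six roughly doubles the discounted value, which is why the market now prices duration more heavily than any single quarter’s beat.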
Conclusion: The earnings report confirms that the myth of computing power endures, but valuation is entering a “duration test”
This report proves that the AI infrastructure boom continues, and NVIDIA remains the most powerful cash flow machine for compute power. But the stock decline reminds the market: when “blowout” becomes routine, valuation logic shifts from growth rate to sustainability, from profits to growth duration, from monopoly premiums to competitive landscape.
Post-earnings adjustments are not necessarily signs of a fundamental reversal but rather a shift in valuation focus. NVIDIA remains strong, but the real test is: how long can growth last, and can the structure become more stable?
This answer will determine NVIDIA’s valuation boundary in 2026 and influence the risk appetite for AI assets.
NVIDIA's strongest financial report in history, why did it lead to an epic crash? An article to understand NVDA's "Computing Power Finance"
Author: 137Labs