Nvidia GTC 2026: Jensen Huang Unveils Vera Rubin Chips and the Next Era of AI Computing

Today, March 16, Jensen Huang takes the stage at SAP Center in San Jose with 30,000 people from 190 countries watching, many more online. The message will be familiar: Nvidia has built the infrastructure that makes the AI future possible. But what’s different this time is the scale at which that future is materializing, and the degree to which the line between possibility and obligation has blurred.

When Huang speaks, the entire AI sector listens. Not because he’s charismatic, though he is. Not because Nvidia dominates GPU markets, though it does. But because Nvidia has effectively become the arms dealer to the AI revolution, and GTC is where the next phase of that revolution gets its marching orders. Every major tech company building AI infrastructure has already made its bets. Today, Huang is signaling what comes next.

The centerpiece of this year’s keynote is the official launch of the Vera Rubin platform, the successor to Blackwell architecture. This is the product line that will power the next generation of AI workloads, from large-scale model training to inference at scale to something Nvidia is calling “physical AI,” a term that encompasses everything from robotics to autonomous systems. The hardware specs matter less than the signal it sends: the AI infrastructure buildout, already at unprecedented scale, is entering a new phase.

The Capex Cycle That Defines The Decade

Meta is guiding $135 billion in capital expenditure. OpenAI is in conversations for a $500 billion data center fund. Amazon, Google, and Microsoft are committing at similar magnitudes. This is not discretionary spending. This is the defining capital expenditure cycle of the decade, and Nvidia is the only company whose hardware is truly non-negotiable in these architectures.

Huang has teased “chips the world has never seen before.” Analysts believe this is the Rubin Ultra, a chip that represents a meaningful jump in capability from what came before. The specifics of compute density or memory architecture will dominate the technical discourse for weeks. What matters now is the political economy: whoever has access to the most advanced chips first, wins. And Nvidia controls when and to whom those chips become available.

One Gigawatt Is Not A Pilot Program

Consider the partnership announced this week between Nvidia and Thinking Machines Lab, committing to deploy at least one gigawatt of Vera Rubin systems. One gigawatt. That is not a pilot program. That is industrial-scale infrastructure, designed to power the kind of AI applications that don’t yet exist but which everyone assumes will arrive once this capacity comes online. The logic is familiar from any major infrastructure build: build it and the demand will follow.
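To make that scale concrete, here is a back-of-envelope sketch of what one gigawatt could mean in accelerator terms. The per-rack power draw, GPUs-per-rack count, and facility overhead below are illustrative assumptions, not figures from Nvidia or this article:

```python
# Back-of-envelope: how many accelerators might one gigawatt power?
# All figures below are assumptions for illustration, not published specs.
GIGAWATT = 1_000_000_000   # total facility power, in watts
RACK_POWER_W = 120_000     # assumed ~120 kW per rack-scale AI system
GPUS_PER_RACK = 72         # assumed GPU count per rack-scale system
OVERHEAD = 1.3             # assumed PUE-style overhead (cooling, networking)

it_power = GIGAWATT / OVERHEAD          # power actually available to compute
racks = it_power / RACK_POWER_W         # number of rack-scale systems
gpus = racks * GPUS_PER_RACK            # total accelerators

print(f"~{racks:,.0f} racks, ~{gpus:,.0f} GPUs")
```

Under these assumptions the deployment lands in the hundreds of thousands of accelerators, which is why one gigawatt reads as an industrial commitment rather than a pilot.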

Then there’s the Meta-Nebius partnership. Twenty-seven billion dollars, announced alongside GTC, with explicit commitments to the “first large-scale Vera Rubin deployments.” This is Meta, which has already announced plans to build “AI factories,” now securing the hardware necessary to actually build them. The fact that this deal came together now, synchronized with GTC, is not coincidence. This is the market coordinating around Nvidia’s roadmap.

The CPU Takes Center Stage

One detail worth noting: this year, the CPU is taking center stage alongside the GPU. This might seem marginal. It is not. The conversation around AI infrastructure has long been dominated by GPU performance metrics. But as models scale and inference becomes the dominant workload, the CPU and memory subsystems matter more. Nvidia is signaling that it has made progress on this bottleneck, and that everyone needs to pay attention to how inference costs scale.

The conference runs through March 19. By then, the architecture for the next eighteen months of AI infrastructure spending will have been essentially set. Companies will be updating their capex projections. Investors will be repricing stocks based on what they heard. Competitors will be calculating how far behind Nvidia they actually are. This is not just a tech announcement. This is a market-setting event.

The Moat That Keeps Getting Wider

What’s striking about Nvidia’s position is how durable it has become. Three years ago, there were serious conversations about whether other chip makers could displace Nvidia. Today, those conversations have largely ended. Not because the competition disappeared, but because the market has decided that first-mover advantage in the GPU race is insurmountable. By the time competitors catch up on raw performance, Nvidia is already two generations ahead. By the time they match performance, Nvidia has moved into software and systems integration. By the time they match that, the customer relationships have already solidified.

The real story here is not technology. It is power. Nvidia has positioned itself at a chokepoint in the AI supply chain that is nearly impossible to dislodge. Every large company building AI infrastructure has to reckon with Nvidia’s roadmap, Nvidia’s capacity constraints, Nvidia’s geopolitical restrictions on chip exports. The architecture of the global AI buildout is, in essence, Nvidia’s architecture.

The Questions Nobody Asks On Stage

This raises uncomfortable questions that Nvidia will never address in a keynote. If Nvidia controls the hardware, do they also control what gets built on top of it? If they control access to chips, do they control who wins and who loses in the AI race? If they can restrict exports, have they become a tool of foreign policy? These are not Huang’s problem to solve on stage. But they’re the subtext underneath every announcement.

The practical implication is simpler: anyone making major decisions about AI infrastructure in the next eighteen months is essentially making decisions based on Nvidia’s bets. If Vera Rubin is as significant as the market is pricing in, there will be a massive allocation of capital toward Vera Rubin systems. If Rubin Ultra performs as expected, there will be another round of frenzied spending to secure access to it. The companies that secure early access will move faster. The companies that don’t will face months of delays and capacity constraints.

This is why GTC matters. This is why Huang’s keynote sets the tone for the entire sector. Because Nvidia doesn’t just make the picks and shovels of the AI gold rush. Nvidia decides where the gold rush happens next.

The architecture Huang unveils today will shape how hundreds of billions of dollars get deployed. It will determine which companies can scale their AI operations and which face capacity constraints. It will affect hiring, R&D spending, and corporate strategy across the entire tech industry. One three-hour keynote, and the map of the AI infrastructure buildout gets redrawn.

By the time the conference ends on March 19, the decision will have been made. Everyone knows it will be. The question is only how definitively Huang makes it, and what new capabilities he dangles in front of the companies competing for access. Today, we get those answers. And by tomorrow, every major tech company will be recalculating its AI infrastructure roadmap around whatever Nvidia just announced.

This is the market at work. Not efficient. Not optimal. But remarkably effective at concentrating power in the hands of whoever controls the critical resource. Right now, that’s Nvidia. The question is whether anyone can dislodge them before this cycle plays out.