Nvidia Drops $20 Billion on Groq in a Deal That Isn’t Technically a Deal

Jensen Huang just pulled off one of the most expensive non-acquisitions in tech history. On Christmas Eve, while most of America was wrapping presents, Nvidia quietly agreed to pay $20 billion for Groq’s inference technology, hire most of its leadership team, and take essentially all of its assets. But don’t call it an acquisition. Nvidia is very clear about that.

The deal, which Groq characterized as a “non-exclusive licensing agreement,” represents Nvidia’s largest transaction ever and highlights just how aggressively the chip giant is moving to lock down the AI inference market before competitors can gain a foothold.

The Art of the Non-Acquisition Acquisition

Here’s what Nvidia is getting for its $20 billion: Groq founder Jonathan Ross, President Sunny Madra, and most of the company’s engineering talent. It’s also licensing Groq’s Language Processing Unit technology and taking ownership of nearly all physical assets. What it’s not getting, technically, is the company itself. Groq will continue to operate “independently” under new CEO Simon Edwards, running its cloud business with a skeleton crew.

If this structure sounds familiar, it’s because Big Tech has turned the “acqui-hire plus licensing deal” into a preferred method for skirting antitrust scrutiny. Microsoft used a similar playbook when it effectively absorbed Inflection AI for $653 million in 2024, bringing on co-founders Mustafa Suleyman and Karén Simonyan without triggering a formal Hart-Scott-Rodino review.

Bernstein analyst Stacy Rasgon didn’t mince words about what’s happening here. “Antitrust would seem to be the primary risk, though structuring the deal as a non-exclusive license may keep the fiction of competition alive,” he wrote in a note to clients.

Why Groq Matters to Nvidia’s Future

Jonathan Ross isn’t just another startup CEO. At Google, he led development of the first-generation Tensor Processing Unit, the chip that gave the search giant an alternative to Nvidia’s GPUs for AI workloads. When he left in 2016 to found Groq, he took eight of the original ten TPU engineers with him.

The result was the LPU, a chip architecture that takes a radically different approach to AI inference. While Nvidia’s GPUs excel at training AI models, Groq’s processors were purpose-built for inference, the phase where trained models generate actual outputs. Groq claims the LPU delivers up to 18x faster inference than traditional GPUs while consuming one-tenth the energy for certain workloads.

That performance advantage comes from a key architectural choice: Groq’s chips use on-chip SRAM instead of external high-bandwidth memory, enabling deterministic, sub-millisecond latency that GPU architectures struggle to match. The tradeoff is limited memory capacity. Running a large model like Llama 70B requires racks of Groq processors, but for applications where speed trumps scale, the LPU has found a devoted following among the more than 2 million developers using GroqCloud.
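
The capacity math behind that tradeoff is straightforward to sketch. The back-of-the-envelope calculation below is not from the article: it assumes Groq’s published figure of roughly 230 MB of on-chip SRAM per first-generation chip, and 16-bit (FP16) weights.

```python
# Rough capacity math: why a 70B-parameter model needs racks of LPUs.
# Assumptions (not from the article): ~230 MB on-die SRAM per first-gen
# GroqChip (Groq's published spec) and FP16 (2-byte) weights.

params = 70e9               # Llama 70B parameter count
bytes_per_param = 2         # FP16 storage
sram_per_chip_gb = 0.230    # ~230 MB of SRAM per chip

weights_gb = params * bytes_per_param / 1e9        # ~140 GB of weights
chips_for_weights = weights_gb / sram_per_chip_gb  # weights alone, no KV cache

print(f"Model weights: {weights_gb:.0f} GB")
print(f"Chips needed just to hold them: {chips_for_weights:.0f}")  # ~609
```

Hundreds of chips before accounting for activations or a KV cache is what drives those rack-scale deployments, while the same 140 GB of weights fits on a pair of 80 GB HBM GPUs. That is the capacity-versus-latency tradeoff in concrete terms.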

The Inference Land Grab

Nvidia’s move reflects a broader industry recognition that AI’s center of gravity is shifting. The training phase, where Nvidia’s GPUs have dominated for years, is giving way to an explosion of inference workloads as companies deploy AI at scale. Huang has estimated that inference demand could grow a billionfold in the coming years.

Bank of America analyst Vivek Arya sees the deal as an acknowledgment that “while GPU dominated AI training, the rapid shift towards inference could require more specialized chips.” He envisions future Nvidia systems where GPUs and LPUs coexist within the same rack, connected via NVLink.

The $20 billion price tag, nearly triple Groq’s $6.9 billion September valuation, looks expensive on paper. But Rasgon notes it amounts to “pocket change for NVDA given their current $61 billion cash balance and $4.6 trillion market capitalization.” That works out to roughly 82 cents per Nvidia share.
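
For scale, the per-share figure follows directly from the article’s own numbers. A quick sanity check, with the share count (which the article never states) backed out as an assumption:

```python
# Sanity-checking the "82 cents per share" figure. The share count is an
# assumption (~24.4B NVDA shares), implied by $20B / $0.82 per share.

deal_usd = 20e9
market_cap_usd = 4.6e12
shares_outstanding = 24.4e9  # assumed, not stated in the article

print(f"Per share: ${deal_usd / shares_outstanding:.2f}")       # ~$0.82
print(f"Share of market cap: {deal_usd / market_cap_usd:.2%}")  # ~0.43%
```

Less than half a percent of Nvidia’s market value, which is Rasgon’s “pocket change” point in concrete terms.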

The Regulatory Question Nobody’s Asking

What’s remarkable about this transaction isn’t just its size but its speed. The deal came together within months of Groq’s September fundraising round, according to Alex Davis, CEO of lead investor Disruptive. Nvidia hasn’t issued a press release or regulatory filing. As Rasgon put it, “They’re so big now that they can do a $20 billion deal on Christmas Eve with no press release and nobody bats an eye.”

Whether regulators will eventually scrutinize the arrangement remains to be seen. The FTC has signaled interest in examining these quasi-acquisition structures, but Nvidia’s careful framing as a licensing deal may complicate any challenge. The non-exclusive nature of the agreement theoretically allows Groq to license its technology to competitors, though with Ross and his core team now at Nvidia, the practical value of that option is questionable.

What Happens Next

Nvidia plans to integrate Groq’s low-latency processors into what Huang calls the “NVIDIA AI factory architecture,” extending the platform to serve a broader range of inference and real-time workloads. The company’s first opportunity to discuss the deal publicly will be at its earnings call in late January.

For the broader AI chip market, the message is clear. Nvidia isn’t content to dominate training while competitors carve out niches in inference. It’s using its massive cash pile to absorb potential threats before they can mature into genuine challengers. The question is whether “non-exclusive licensing agreements” will continue to fly under the regulatory radar as these deals scale into the tens of billions.

Groq investors, including Chamath Palihapitiya’s Social Capital, are walking away with a substantial return on a nine-year bet. Palihapitiya had called Ross “a technical genius of biblical proportions” when backing the company in 2016. The $20 billion payout validates that assessment while raising uncomfortable questions about whether the AI chip industry is consolidating too quickly for any rival to keep pace with Nvidia.
