I do not understand this move by Nvidia. Are they afraid of being outcompeted by this startup in their core competence of building chips for AI? They may be eliminating a competitor for now, but this move will immediately cause many more AI chip startups to get founded.
They're not eliminating a competitor, they're (effectively) acquiring a competitor. Nvidia's GPUs are great for training, and not bad for inference, but the custom chips are better for inference and Nvidia's worried about losing customers. Nvidia will no doubt sell custom Groq-like chips for inference now.
Groq is a Series E hardware startup founded in 2016. It took them this long to become a potential threat; I'm not sure they're even an actual threat.
Even if this purchase causes 100 new hardware startups to be funded tomorrow, Nvidia is perfectly fine with that. Let's see how many survive 5 years down the line.
The play is that 10x faster inference leads to 100x demand, give or take, which isn't a bad assumption at all if you ask me. The problem is actually fitting a good model onto hardware that fast.
Legit feels like Nvidia just buying out competition to maintain their position and power in the industry. I sincerely hope they fall flat on their face.
> Legit feels like Nvidia just buying out competition to maintain their position and power
Well, I mean, isn't that exactly what they should be doing? (I'm not talking about whether or not it benefits society; this is more along the lines of how they're incentivized.)
Put yourself in their shoes. If you had all that cash, and you're hearing people talk of an "AI Bubble" on a daily basis, and you want to try and ensure that you ride the wave without ever crashing... the only rational thing to do is use the money to try and cover all your bases. This means buying competitors and it also means diversifying a little bit.
Dunno, I thought AGI would make everything obsolete and it was just around the corner? It looks rather like it's dawning on everyone that transformers won't bring salvation. It's a show of weakness.
The bottleneck in training and inference isn't matmul, and once a chip isn't a kindergarten toy you don't go from FPGA to tape-out by clicking a button. For local memory he's going to have to learn to either stack DRAM (not "3000 lines of Verilog", and it requires a supply chain that OpenAI just destroyed) or diffuse block RAM / SRAM like Groq, which is astronomically expensive bit for bit and torpedoes yields, compounding the issue. Then comes interconnect.
Look dude, this guy failed his Twitter internship and is not about to take on Jensen Huang. He isn't some young guy anymore, and this isn't 200x; he's not about to have another iPhone / Sony moment.
There's this curious experience of people bringing up geohot / tinygrad and you can tell they've been sold into a personality cult.
I don't mean that pejoratively, I apologize for the bluntness. It's just I've been dealing with his nonsense since iPhone OS 1.0 x jailbreaking, and I hate seeing people taken advantage of.
(Nvidia x Macs x Thunderbolt has been a thing for years and years and years, well before geohot.) (The tweet is a non sequitur beyond bog-standard geohot tells: an odd obsession with LoC, and we're 2 years away from Changing The Game, just like we were 2 years ago.)
There are others as well but NVidia is aggressive when it comes to punishing companies willing to buy non-NVidia products. As a result, they prefer to remain under the radar, at least until they have enough market leverage to be more widely known.
I just stopped my Groq API. Sad to see competition being eaten up by shitty Nvidia. I like their products but Jensen is an absolute mfer with deceitful marketing.
Imagine a pharma with a weight loss drug that isn't approved yet; it's either worth $20B (if approved) or zero (if not approved).
Now imagine the LPUv2 ASIC. If it works it's worth $20B and if it doesn't it's zero. If investors think LPUv2 has a 1/3 chance of success they would buy in at $7B. Then the chip boots up and... look at that.
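That pricing logic is just a one-line expected-value calculation. A minimal sketch, using only the hypothetical numbers from the comment above (1/3 odds, binary $20B-or-zero payoff):

```python
# Binary-outcome pricing: fair value = P(success) * payoff.
# Numbers are the hypothetical ones from the comment above.
p_success = 1 / 3          # assumed odds the LPUv2 ASIC works
payoff_bn = 20.0           # worth ~$20bn if it works, $0 otherwise
fair_value_bn = p_success * payoff_bn

print(f"~${fair_value_bn:.1f}bn")  # ~$6.7bn
```

Which lands close to the ~$6.9bn valuation from Groq's round three months ago, quoted elsewhere in the thread.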
They almost certainly plan to invest in the technology. One of the biggest threats to Nvidia is people developing AI-centric ASICs before they get there. Yes, Google has their TPUs and there are others around, but it's early on.
In some ways, it's not about eliminating a competitor. It's about eliminating all the competitors. Nvidia can use its resources to push AI ASICs farther faster than others, potentially cutting off a whole host of competitors that threaten their business. Nvidia has the hardware and software talent, the money, and the market position to give their AI ASICs an advantage. They know if they don't lean into ASICs that someone else will and their gravy train will end. So they almost certainly won't be abandoning the technology.
Groq was targeting a part of the stack where CUDA was weakest: guaranteed inference time at a lower cost per token at scale. This was in response to more than just goog's TPUs; they were also one of the few realistic alternative paths OAI had with those wafers.
Maybe the EU, or individual states suing under their own antitrust laws, will stop this. It seems pretty clearly anti-competitive, and probably a prelude to these overvalued companies using their stock to gobble up any possible competitor and consolidate even more.
For Nvidia, increasing the membership of their money-multiplying mutual investment ring is more important than the value of the deals. It's about involving more capital and people, making their grift too big to fail, and keeping the stock numbers up. Nvidia has the ability to promise large amounts of money like this in announcements, but I haven't read about any of them actually having money or goods change hands yet.
No, it doesn't really matter if they pay in cash or stock. If you think NVDA has room to run you're welcome to use your buyout money to buy NVDA on the open market.
Will be interesting technically to see what develops from this. NVLink? Full CUDA feels maybe doubtful but who knows. Nvidia CUDA Tile feels like more of a maybe, their new much more explicit way of making workloads.
This does feel a bit sad for sure; it's worrying whether this might hold Groq and innovation back. Reciprocally, it's perhaps kind of cool to see Groq get a massive funding boost and help from a very experienced chip-making peer. It feels like an enviable position somewhat, even with the long-term consequences being so hazy. From the outside, yes, it looks like Nvidia solidifying its iron grasp over a market with very limited competitive suppliers, but this could help Groq, and maybe it's not on the terms we think we want right now, but it could be very cool to see.
I really hope the rest of the market can see what's happening, broadly, with Nvidia forming partnerships all over the place: NVLink with Intel, NVLink with Amazon's Trainium... There's much more to the ecosystem, but just connecting the chips smartly is a huge task and is core to interoperation. And for all we've heard of CXL, UltraAccelerator Link (UALink), and UltraEthernet (UET), it feels like very few major players are taking it seriously enough to just integrate these new interconnects and make them awesome. They remain incredibly expensive, not commonly used, lacking broad industry adoption, and reserved for very expensive systems. There's a huge existential risk here that (lack of) interconnect will destroy competitors' ability to get their good chips well integrated and used. The rest of the market needs clearer alarm bells going off, and needs to be making sure good interconnect is available on way more chips, getting it into everyone's hands ASAP, not just big customers', so that adoption and Linux-nerd-type folks can start building stacks that open up the future. The market risks getting left behind if NVLink is built in everywhere and the various other fabrics never become commonplace.
Can't wait for the abuse of the word Grok to die (bet none of these techbros even read the book). There was even an AI company that made a product called "Sophon". Talk about an overinflated sense of self-worth.
I like the Wright Brothers; they called the first plane "Flyer".
> Groq is expected to alert its investors about the deal later on Wednesday. While the acquisition includes all of Groq’s assets, its nascent Groq cloud business is not part of the transaction, said Davis.
Wait, what? How is the cloud business supposed to run if Nvidia is acquiring the rights to the hardware?
It isn't, and the other companies that offer cloud AI that Nvidia has invested in can carry on happy they have one less competitor.
This is how business works in the 21st century - once one company has a dominant position and a massive warchest they can just buy any business that has any potential of disrupting their revenue. It's literally the thesis Peter Thiel sets out in Zero To One. It works really well for that one business.
That works fine with office buildings and stuff where a company is redistributing its risk profile, but not when the company it’s selling to has every incentive to kill the asset as a competitor.
I don't know specifically, but I think they're referring to the current USA administration's posture of approving anything, or pardoning anyone, in exchange for some cryptocurrency or similar big favour.
> Groq raised $750 million at a valuation of about $6.9 billion three months ago. Investors in the round included Blackrock and Neuberger Berman, as well as Samsung, Cisco, Altimeter and 1789 Capital, where Donald Trump Jr. is a partner.
this is genuinely sad, groq had really fast inference and was a legit alternative architecture to nvidia's dominance. feels like we're watching consolidation kill innovation in real time. really hoping regulators actually look at this one but not holding my breath
> Groq raised $750 million at a valuation of about $6.9 billion three months ago. Investors in the round included Blackrock and Neuberger Berman, as well as Samsung, Cisco, Altimeter and 1789 Capital, where Donald Trump Jr. is a partner.
Makes it very hard not to think of this as a way to give money to the current administration.
I know, this sounds conspiracy theory grade, but 20b is too much for groq.
Uh oh, not good that a major Nvidia competitor with genuine alternative technology will no longer be competing... Chances this tech gets killed post-acquisition?
I think it’s pretty obvious at this point that Nvidia’s architecture has reached scaling limits: the power demands of their latest chips have Microsoft investing in nuclear fusion. Similar to Intel in both the pre-Core days and their more recent chips, they need an actual new architecture to move forward. As it sits, there’s no path to profitability for the buyers of these chips given the cost and capabilities of the current LLM architectures, and this is obvious enough that even Nvidia has to realize it’s existential for them.
If Groq’s architecture can actually change the economics of inference and training sufficient to bring the costs in line with the actual, not speculative, benefits of LLMs, this may not be a buy-and-kill for Nvidia but something closer to Apple’s acquisition of P.A. Semi, which made the A- and M- class chips possible.
(Mind you, in Intel’s case they had to have their clocks cleaned by AMD a couple times to get them to see, but I think we’re further past the point of diminishing returns with Nvidia - I think they’re far enough past when the economics turned against them that Reality is their competition now.)
No path to profitability for the people using their products for their putative purpose, which seems like it might affect Nvidia’s bottom line at some point. Clarified.
Not good. This shouldn't be allowed. What would be better is if Groq and Cerebras combined, and maybe other companies invested in them to help them scale. Why would the major cloud providers not lobby against this?
Usually antitrust is for consumers, but here I think companies like Microsoft and AWS would be the biggest beneficiaries of having more AI chip competition.
It's a non-exclusive deal.
No reason for antitrust action whatsoever.
Groq is absolutely tiny. I don't think antitrust is an issue here.
$20 billion is tiny?
WhatsApp was a tiny team
That's the sales price of the company. Their marketshare, I imagine, is absolutely miniscule.
Is nowadays
Groq press release: https://groq.com/newsroom/groq-and-nvidia-enter-non-exclusiv...
> Today, Groq announced that it has entered into a non-exclusive licensing agreement with Nvidia for Groq’s inference technology. The agreement reflects a shared focus on expanding access to high-performance, low cost inference.
> As part of this agreement, Jonathan Ross, Groq’s Founder, Sunny Madra, Groq’s President, and other members of the Groq team will join Nvidia to help advance and scale the licensed technology.
> Groq will continue to operate as an independent company with Simon Edwards stepping into the role of Chief Executive Officer.
> GroqCloud will continue to operate without interruption.
Another example of the growing trend of buying out key parts of a company to avoid any actual acquisition?
I wonder if equity holding employees get anything from the deal or indeed if all the investors will be seeing a return from this?
So it is not structured as an acquisition to avoid antitrust scrutiny, but effectively it probably is.
Indeed, as justincormack comments: ”It is not structured as an outright acquisition to avoid US Gov't anti trust scrutiny, but effectively it probably is”. “Non-exclusive”? Ummmm, yeah, right, sure. You can probably bet there is a private understanding that Groq will no longer offer its top-of-the-line technology to competitors of Nvidia. Some may see this as a clever sleight-of-hand attempt for Nvidia to maintain its perceived dominance and lead in GPU/TPU development. “Non-exclusive” does not in any form or fashion spell out that all of Nvidia's competitors can and will obtain the very top, cutting-edge Groq technology that Nvidia will obtain . . .
Yes I'm sure that "non exclusive" partnership is exactly that, wink wink!
Are they buying them to try and slow down open source models and protect the massive amounts of money they make from OpenAI, Anthropic, Meta, etc.?
It's quite obvious that open source models are catching up to closed source models very fast; they're about 3-4 months behind right now. And yeah, they're trained on Nvidia chips, but as the open source models become more usable and closer to the closed source models, they will eat into Nvidia's profit, as these companies aren't spending tens of billions of dollars on chips to train and run inference. These are smaller models trained on fewer GPUs, and they're performing as well as the previous OpenAI and Anthropic models.
So obviously open source models are a direct threat to Nvidia, and the only thing open source models struggle at is scaling inference. This is where Groq and Cerebras come into the picture: they provide the fastest inference for open source models, which makes them even more usable than the SOTA models.
Maybe I'm way off on this.
I'd say that it's probably not a play against open source, but more trying to remove/change the bottlenecks in the current chip production cycle. Nvidia likely doesn't care who wins, they just want to sell their chips. They literally can't make enough to meet current demand. If they split off the inference business (and now own one of the only purchasable alternatives) they can spin up more production.
That said, it's completely anti-competitive. Nvidia could design an inference chip themselves, but instead they are locking down one of the only real independents. But... nobody was saying Groq was making any real money. This might just be a rescue mission.
Shy of an algo breakthrough, open source isn't going to catch up with SOTA; their main trick for model improvement is distilling the SOTA models. That's why they have perpetually been "right behind".
They don't need to catch up. They just need to be good enough and fast as fuck. The vast majority of useful LLM tasks have nothing to do with how smart the models are.
GPT-5 models have been the most useless models out of any released this year despite being SOTA, and it's because they're slow as fuck.
For coding I don’t use any of the previous gen models anymore.
Ideally I would have both fast and SOTA; if I would have to pick one I’d go with SOTA.
There's a report by OpenRouter on what folks tend to pay for; it generally is SOTA in the coding domain. Folks are still paying a premium for them today.
There is a question if there is a bar where coding models are “good enough”; for myself I always want smarter / SOTA.
GPT 5 Codex is great - the best coding model around except maybe for Opus.
I'd like more speed but prefer more quality than more speed.
Confused. Is ‘fuck’ fast or slow? Or both at the same time? Is there a sort of quantum superposition of fuck?
It's an intensifier
Bullseye.
NVIDIA release some of the best open source models around.
Almost all open source models are trained and mostly run on NVIDIA hardware.
Open source is great for NVIDIA. They want more open source, not less.
Commoditize your complement is business 101.
Then why are they spending $20 billion to handicap an inference company that's giving open source models a major advantage over closed source models?
They need to vertically integrate the entire stack or they die. All of the big players are already making plans for their own chips/hardware. They see everyone else competing for the exact same vendor’s chips and need to diversify.
Idk; cheaper inference seems to be a huge industry secret, and providing the best inference tech that only works with Nvidia seems like a good plan. Making Nvidia the absolute king of compute against AWS/AMD/Intel seems like a no-brainer.
How does this work considering the Nemotron models?
I don’t see how this isn’t anti trust but knowing this political climate, this deal will go through.
https://www.reuters.com/business/nvidias-huang-joins-tech-ti...
Good of them to make a list themselves, isn't it? It'll be useful in the future.
As useful as it was before this administration, when big tech was sucking up to whomever was running the country (e.g. “macho man” Zuck was getting ready to tattoo DEI on his forehead a couple of years ago)? Or will it just now be magically useful?
You miss my point. This is a list of people engaging in something flat-out corrupt. The ballroom is an inherently corrupt project.
It will prove to be simple corruption.
Why is that relevant if there is no one willing to prosecute and convict?
it is completely irrelevant but people still waste internet bandwidth with nonsense :)
What's the punishment for corruption (especially when you have 100s of billions of dollars), I wonder…
If justice is served it'll be knocked down by the next admin, if it is ever built.
An American Oligarchy.
It’ll go through. It’s not an acquisition, it’s a licensing deal. Same end result, but it lets them run around the usual regulatory approvals for acquisitions.
I believe this news kind of helps Cerebras, as Groq and Cerebras are the only two companies working extensively in this space.
I feel as if Nvidia is eating up even companies which I thought had genuine potential, or anything related to the AI industry, whether profitable or not.
Nvidia's trying its best to take all major players and consolidate into one big entity from top to bottom.
The problem with this approach, imo, is that long term, Nvidia's stock is extremely overvalued; it's still a bubble, which will burst and take Nvidia down first and foremost.
The issue is that when Nvidia falls, it will take literally the whole industry with it from top to bottom, even those companies which I thought could survive an AI burst. Long term, I feel like it will have really bad impacts if Nvidia continues to gobble up every company.
I am pretty sure Nvidia might be looking at Cerebras too, and if they offer them a shit ton of money, Cerebras gets bought. I genuinely believe that Nvidia has invested in literally all pockets of hardware-related investment for AI. And when OpenAI is unable to pay Nvidia, I feel like it can all come crashing down, since this whole cycle is only possible via external investor money.
I got curious about how many wheelbarrows of cash $20bn actually is.
Two ways to think about it: weight vs volume.
By weight (assuming all $100 bills):
$20,000,000,000 / $100 = 200,000,000 bills
Each bill is roughly 1g, so total mass is ~200,000 kg
A typical builder’s wheelbarrow can take about 100 kg before it becomes unmanageable
200,000 kg total / 100 kg per wheelbarrow ≈ 2,000 wheelbarrows (weight limit)
By volume:
A $100 bill is ~6.14" × 2.61" × 0.11 mm, which comes out to about 102 cm³ per bill
200,000,000 bills × 102 cm³ ≈ 20,400 m³ of cash
A standard wheelbarrow holds around 0.08 m³ (80 litres)
20,400 m³ total / 0.08 m³ per wheelbarrow ≈ 255,000 wheelbarrows (volume limit)
So,
About 2,000 wheelbarrows if you only care about weight
About 255,000 wheelbarrows if you actually have to fit the cash in
So the limiting factor isn’t how heavy the money is; it’s that the physical volume of the cash is absurd. At this scale, $20bn in $100s is effectively a warehouse, not a stack.
Your volume of a single bill is a bit off.
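Indeed. A quick sketch of the corrected arithmetic, assuming the standard US note dimensions of roughly 156 mm × 66 mm × 0.11 mm (the ~102 cm³ figure above looks like the bill's face area, ~103 cm², mistaken for its volume):

```python
# Recompute the wheelbarrow estimate with a corrected per-bill volume.
BILL_L_MM, BILL_W_MM, BILL_T_MM = 155.96, 66.29, 0.109  # approx. US note size
BILLS = 20_000_000_000 // 100                           # 200 million $100 bills

bill_cm3 = BILL_L_MM * BILL_W_MM * BILL_T_MM / 1000     # mm^3 -> cm^3, ~1.13
total_m3 = BILLS * bill_cm3 / 1_000_000                 # cm^3 -> m^3, ~225

barrows_by_volume = total_m3 / 0.08                     # 80 litres per load
barrows_by_weight = BILLS * 1 / 1000 / 100              # ~1 g/bill, 100 kg/load

print(f"~{bill_cm3:.2f} cm^3 per bill")                 # ~1.13, not ~102
print(f"~{total_m3:.0f} m^3, ~{barrows_by_volume:,.0f} barrows by volume")
print(f"~{barrows_by_weight:,.0f} barrows by weight")
```

With the corrected per-bill volume, the weight and volume limits land in the same ballpark (~2,000 vs ~2,800 wheelbarrows), so the "warehouse, not a stack" conclusion overstates it: ~225 m³ is more like a large room than a warehouse.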
Headline is incorrect.
NVIDIA isn't buying Groq.
It's a non exclusive deal for inference tech. Or am I reading it incorrectly?
There is also the movement of some key people to Nvidia, which is pretty significant: "As part of this agreement, Jonathan Ross, Groq’s Founder, Sunny Madra, Groq’s President, and other members of the Groq team will join Nvidia to help advance and scale the licensed technology."
Curious how the execs can honor their fiduciary duty to shareholders, assuming only they(?) get the $20 billion while the company is left headless and leaking all of its intellectual property to Nvidia.
Non-exclusive licensing and hiring the team.
> As part of this agreement, Jonathan Ross, Groq’s Founder, Sunny Madra, Groq’s President, and other members of the Groq team will join Nvidia to help advance and scale the licensed technology.
The price is 40x their target revenue. That's twice the price to revenue multiplier applied to Anthropic in their most recent funding round, and really really hard to portray as a good deal.
I don't think it really helps Nvidia's competitive position. The serious competition to Nvidia is coming from Google's TPU, Amazon's Trainium, AMD's Instinct, and to a much lesser extent Intel's ARC.
Groq's recent investors got back a ~3x multiple and may now invest in one of Nvidia's other competitors instead.
The only thing I can think of here is that OpenAI’s DRAM land grab is going to stack on a non-NV target and NV needs to hedge with an SRAM design that’s on the market NOW. Otherwise, I can’t see how NV couldn’t eat Groq’s lunch in one development cycle; it’s not like NV can’t attach a TPU to some SRAM and an interconnect. Either that or Groq closed a deep enough book to scare them, but 40x is a lot of scared.
That's an interesting take, it's plausible Nvidia wants to have an SRAM based product, but I am struggling to see why they would pay $20bn to have one /right now/. Even if DRAM prices make Groq's approach more economical, Nvidia can develop a competitive product before Groq could take any real market share.
Exactly. The only way this makes sense to me is if the board needed this product in <1 cycle. Which makes no sense for a market player like NV who already have the PDK, volume, and literally everything else in the universe. But here it is, so there is clearly a factor I have not considered :)
And Cerebras
Also Tsavorite Scalable Intelligence; their architecture seems to cover the broadest use cases and is compatible with CUDA.
is this an ad? do you work there?
I follow the chip industry on a daily basis and have never heard of them...
Given that Groq is basically the TPU spin-out Google should have done years ago, it shows what a valuable asset TPUs are inside Google.
They really should spin that out, though!
I do not understand this move by Nvidia. Are they afraid of being outcompeted by this startup in their core competence of building chips for AI? They may be eliminating a competitor for now, but this move will immediately cause many more AI chip startups to get founded.
They're not eliminating a competitor, they're (effectively) acquiring a competitor. Nvidia's GPUs are great for training, and not bad for inference, but the custom chips are better for inference and Nvidia's worried about losing customers. Nvidia will no doubt sell custom Groq-like chips for inference now.
groq is a series E hardware startup founded in 2016. It took them this long to be a potential threat, I'm not sure they are even an actual threat.
Even if this purchase causes 100 new hardware startups to be funded tomorrow, nVidia is perfectly fine with that. Let's see how many survive 5 years down the line
the play is that 10x faster inference leads to 100x demand, give or take, which isn't a bad assumption at all if you ask me. the problem is actually fitting a good model onto hardware that fast.
Legit feels like Nvidia just buying out competition to maintain their position and power in the industry. I sincerely hope they fall flat on their face.
> Legit feels like Nvidia just buying out competition to maintain their position and power
Well, I mean, isn't that exactly what they should be doing? (I'm not talking about whether or not it benefits society; this is more along the lines of how they're incentivized.)
Put yourself in their shoes. If you had all that cash, and you're hearing people talk of an "AI Bubble" on a daily basis, and you want to try and ensure that you ride the wave without ever crashing... the only rational thing to do is use the money to try and cover all your bases. This means buying competitors and it also means diversifying a little bit.
No one is claiming that it's a bad move.
It's just an anti-competitive move that could be very bad for the consumer as it makes the inference market less competitive.
Dunno, I thought AGI would make everything obsolete and was just around the corner? It looks more like it's dawning on everyone that transformers won't bring salvation. It's a show of weakness.
which is exactly what a business should do.
it's not like Nvidia doesn't invest a ton into R&D, but hey, they have the cash, why not use it? like a good business.
In a normal world, this is where Nvidia gets trust busted. But that's long behind us now.
Stuff like tinygrad will change this. Geohot already made nvidia run on macs via thunderbolt.
Also: https://x.com/__tinygrad__/status/1983469817895198783
The bottleneck in training and inference isn’t matmul, and once a chip isn’t a kindergarten toy you don’t go from FPGA to tape out by clicking a button. For local memory he’s going to have to learn to either stack DRAM (not “3000 lines of verilog” and requires a supply chain which openai just destroyed) or diffuse block RAM / SRAM like Groq which is astronomically expensive bit for bit and torpedoes yields, compounding the issue. Then comes interconnect.
The main point is that it will not be an nvidia’s monopoly for too long.
This guy has the greatest dunning-kruger of all time. Lots of smoke and mirrors.
He’s not delusional: https://x.com/__tinygrad__/status/1983476594850283820
However, I would say you are wrong about it being only smoke
Look dude, this guy failed his Twitter internship and is not about to take on Jensen Huang. This isn't some young guy anymore, and this isn't the 2000s where he's about to have another iPhone / Sony moment.
It is peak delulu.
Edit: His whole blog is 'hot take #n'. Not even anything serious. Basically podcast bro level stuff. https://geohot.github.io/blog/jekyll/update/2025/12/22/the-o...
And where do you think he’s wrong in that post?
There's this curious experience of people bringing up geohot / tinygrad and you can tell they've been sold into a personality cult.
I don't mean that pejoratively, I apologize for the bluntness. It's just I've been dealing with his nonsense since iPhone OS 1.0 x jailbreaking, and I hate seeing people taken advantage of.
(nvidia x macs x thunderbolt has been a thing for years and years and years, well before geohot) (the tweet is a non sequitur beyond bog-standard geohot tells: an odd obsession with LoC, and we're 2 years away from Changing The Game, just like we were 2 years ago)
Can you show any other thing that runs nvidia gpu under m-series macs?
Who cares? Nobody is building large scale inference services with macs.
Because this is exactly a demonstration of abstraction: the same stuff allows direct GPU communication, so even the Mac/Nvidia thing is possible.
It is not tied to nvidia as well.
This is the power of tinygrad
Damn. Was hoping Groq and Cerebras would give the giants a run for their money.
There are others as well but NVidia is aggressive when it comes to punishing companies willing to buy non-NVidia products. As a result, they prefer to remain under the radar, at least until they have enough market leverage to be more widely known.
There is still Modular
And China.
It would be interesting if it turned out that Chinese competition was the only thing that kept this market working!
Which China players are doing inference hardware? As indeed that is a good space for them.
Huawei
I just stopped my Groq API. Sad to see competition being eaten up by shitty Nvidia. I like their products but Jensen is an absolute mfer with deceitful marketing.
I literally said “oh no” out loud when I read the headline.
was the API good
fast but furiously expensive
> Groq will continue to operate as an independent company with Simon Edwards stepping into the role of Chief Executive Officer.
Kind of feel bad for Simon Edwards, lol. I wonder what the plan is for the future of Groq.
He has to keep the company in business for 2 more years
I would assume this is a very well paid position, and with basically nothing to do.
I remember when Google acquired YouTube in 2006 for $1.65 billion in stock.
Media said it was crazy back then; well, I think this sounds a lot crazier, but hindsight is 20/20.
I thought the Skype deal was worse: it already sucked back then.
Related on the business side, and from the last two years:
AI Chip Startup Groq Raises $750M at $6.9B Valuation - https://news.ycombinator.com/item?id=45276985 - Sept 2025 (5 comments)
Groq Raises $640M to Meet Soaring Demand for Fast AI Inference - https://news.ycombinator.com/item?id=41162875 - Aug 2024 (34 comments)
AI chip startup Groq lands $640M to challenge Nvidia - https://news.ycombinator.com/item?id=41162463 - Aug 2024 (12 comments)
Groq CEO: 'We No Longer Sell Hardware' - https://news.ycombinator.com/item?id=39964590 - April 2024 (149 comments)
From $6.9b to 20 in a few months, not bad…
Almost as good as forking VSCode, impressive.
There was an AMA here last year https://news.ycombinator.com/item?id=39429047
I hope non executives and founders get something for their equity.
This doesn't make much sense. In September, Groq was valued at $7B; how is it that four months later it is being bought for $20B?
Can someone with better understanding dumb this down for me please?
It only has to be overvalued by a lower multiple than NVidia, not undervalued.
Imagine a pharma with a weight loss drug that isn't approved yet; it's either worth $20B (if approved) or zero (if not approved).
Now imagine the LPUv2 ASIC. If it works it's worth $20B and if it doesn't it's zero. If investors think LPUv2 has a 1/3 chance of success they would buy in at $7B. Then the chip boots up and... look at that.
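The expected-value arithmetic behind that toy model, as a sketch (the 1/3 probability and the binary $20B-or-zero payoff are purely illustrative numbers from the comment above):

```python
# Toy binary-outcome valuation: the asset is worth `payoff` if the chip
# works and zero if it doesn't, so fair value is probability x payoff.
p_success = 1 / 3
payoff = 20e9                          # $20B if LPUv2 works, $0 otherwise

expected_value = p_success * payoff    # ~$6.7B, close to the $6.9B round

print(f"${expected_value / 1e9:.1f}B")
```

Once the chip "boots up", p_success jumps toward 1 and the fair value jumps toward the full $20B, which is the point of the analogy.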
Or it's just a massive bubble.
eli5 lpu please
Groq calls their inference chips “Language Processing Unit”: https://groq.com/blog/the-groq-lpu-explained
It's an AI blackhole https://www.youtube.com/watch?v=21e5GZF3yx0
Trump Jr. entered at $7 billion. In the meantime Nvidia got permission to sell GPUs to China.
All-In pundit Palihapitiya is invested in Groq as well. It is going well for friends of David Sacks.
Hopefully they plan to invest in the technology and not just eliminate a competitor.
They almost certainly plan to invest in the technology. One of the biggest threats to Nvidia is people developing AI-centric ASICs before they get there. Yes, Google has their TPUs and there are others around, but it's early on.
In some ways, it's not about eliminating a competitor. It's about eliminating all the competitors. Nvidia can use its resources to push AI ASICs farther faster than others, potentially cutting off a whole host of competitors that threaten their business. Nvidia has the hardware and software talent, the money, and the market position to give their AI ASICs an advantage. They know if they don't lean into ASICs that someone else will and their gravy train will end. So they almost certainly won't be abandoning the technology.
But that doesn't mean that it'll be good for us.
groq was targeting a part of the stack where cuda was weakest: guaranteed inference time at a lower cost per token at scale. This was in response to more than just goog's tpus, they were also one of the few realistic alternative paths oai had with those wafers.
Maybe the EU or individual states suing under their own antitrust laws will stop this; it seems pretty clearly anti-competitive, and probably a prelude to these over-valued companies using their stock to gobble up any possible competitor and consolidate even more.
Chamath is a bonafide scammer, but he makes good investments(gets good returns for himself)
He is good at scamming others
nVidia is being scammed here? Seems unlikely…
For Nvidia, increasing the number of participants in their money-multiplying mutual-investment ring is more important than the value of the deals. It's about involving more capital and people, making their grift too big to fail, and keeping the stock numbers up. Nvidia has the ability to promise large amounts of money like this in announcements, but I haven't read about any of them actually having money or goods change hands yet.
I do not consider this good news; I had hopes Anthropic or so would buy them, not the main AI hardware people.
is this in response to the threat from Google tpu
I could not figure out if "cash" means literally cash or figuratively cash in the sense of "no trade in shares".
Will there be a truck full of paper money or not?
Implies they will pay cash value to equity holders as opposed to issuing NVDA shares.
(Electronically)
Is this to somehow screw the employees with RSUs or what?
No, it doesn't really matter if they pay in cash or stock. If you think NVDA has room to run you're welcome to use your buyout money to buy NVDA on the open market.
Well, this isn't framed as a buyout/takeover, so I was curious how existing RSUs would be cashed out?
This deal is framed as an IP transfer and talent transfer without owning the full company. Probably to skirt antitrust, among other things.
It's the latter. They'll send a wire.
What are the other top AI silicon vendors?
Graphcore
Tenstorrent
SambaNova
Rivos
Will be interesting technically to see what develops from this. NVLink? Full CUDA feels doubtful, but who knows. Nvidia's CUDA Tile, their new, much more explicit way of expressing workloads, feels like more of a maybe.
This does feel a bit sad for sure, worrying whether this might hold Groq and innovation back. Reciprocally, it's perhaps kind of cool to see Groq get a massive funding boost and help from a very experienced chip-making peer. It feels like an enviable position in some ways, even with the long-term consequences being so hazy. From the outside, yes, it looks like Nvidia solidifying its iron grasp over a market with very limited competitive suppliers, but this could help Groq, and maybe it's not on the terms we think we want right now, but it could be very cool to see.
I really hope the rest of the market can see what's happening, broadly, with Nvidia forming partnerships all over the place: NVLink with Intel, NVLink with Amazon's Trainium... there's much more to the ecosystem, but just connecting the chips smartly is a huge task and is core to interoperation. And for all we've heard of CXL, Ultra Accelerator Link (UALink), and Ultra Ethernet (UET), it feels like very few major players are taking it seriously enough to just integrate these new interconnects and make them awesome. They remain incredibly expensive and not commonly used, lacking broad industry adoption, and reserved for very expensive systems: there's a huge existential risk here that (lack of) interconnect will destroy competitors' ability to get their good chips well integrated and used. The rest of the market needs clearer alarm bells going off, and needs to make sure good interconnect is available on way more chips and gets into everyone's hands ASAP, not just big customers', so that adoption and Linux-nerd-type folks can start building stacks that open up the future. The market risks getting left behind if NVLink is built in everywhere and the various other fabrics never become commonplace.
Great choice and what a great deal.
Quite obvious that Groq would get acquired. [0]
[0] https://news.ycombinator.com/item?id=39438820
"2.7M Developers and Teams"
So, about $7,400 each? Seems pricey, even assuming all of them still use it every week/month.
Can't wait for the abuse of the word Grok to die (bet none of these techbros even read the book). There was even an AI company that made a product called "Sophon". Talk about an overinflated sense of self-worth.
I like the Wright Brothers; they called the first plane "Flyer".
> Groq is expected to alert its investors about the deal later on Wednesday. While the acquisition includes all of Groq’s assets, its nascent Groq cloud business is not part of the transaction, said Davis.
Wait, what? How is the cloud business supposed to run if Nvidia is acquiring the rights to the hardware?
It isn't, and the other companies that offer cloud AI that Nvidia has invested in can carry on happy they have one less competitor.
This is how business works in the 21st century - once one company has a dominant position and a massive warchest they can just buy any business that has any potential of disrupting their revenue. It's literally the thesis Peter Thiel sets out in Zero To One. It works really well for that one business.
That's the neat trick - it isn't...
Sell the asset and then lease it from the buyer.
That works fine with office buildings and stuff where a company is redistributing its risk profile, but not when the company it’s selling to has every incentive to kill the asset as a competitor.
From the press release, Nvidia now has a non exclusive license to the hardware.
Groq will continue to operate as an independent company with Simon Edwards stepping into the role of Chief Executive Officer.
GroqCloud will continue to operate without interruption.
How can this pass antitrust regulation?
There is no "antitrust regulation" in the US in 2025. (Until 2029)
States are "not allowed" to regulate AI companies.
There also weren't any antitrust regulations before, let's not kid ourselves.
There was an attempt under Lina Khan.
Care to give more details?
I don't know specifically, but I think they're referring to the current USA administration's posture of approving anything, or pardoning anyone, in exchange for some cryptocurrency or similar big favour.
https://www.bbc.co.uk/news/articles/crmddnge9yro
> Groq raised $750 million at a valuation of about $6.9 billion three months ago. Investors in the round included Blackrock and Neuberger Berman, as well as Samsung, Cisco, Altimeter and 1789 Capital, where Donald Trump Jr. is a partner.
They made Jimmy Carter sell his peanut farm…
I doubt Nvidia will be regulated in their home jurisdiction. America tends to protect its cash cows, for better or worse.
this is genuinely sad, groq had really fast inference and was a legit alternative architecture to nvidia's dominance. feels like we're watching consolidation kill innovation in real time. really hoping regulators actually look at this one but not holding my breath
Is any part of this because Google has the TPU and Groq has the LPU?
There's definitely a narrative that ASICs/TPUs/LPUs are more efficient than GPUs and thus Nvidia "needs" an ASIC. Whether this is true is debated.
This is the most blatant buy-the-competition move I've ever seen...
Well that sucks.
Following the age old playbook of monopolies. https://www.arte.tv/en/videos/103517-001-A/capitalism-in-ame... (Use a vpn if outside EUR)
A free market is a regulated market. Otherwise you will end up with monopolies and a dead market.
From the comment below:
> Groq raised $750 million at a valuation of about $6.9 billion three months ago. Investors in the round included Blackrock and Neuberger Berman, as well as Samsung, Cisco, Altimeter and 1789 Capital, where Donald Trump Jr. is a partner.
Makes it very hard not to think of this as a way to give money to the current administration. I know, this sounds conspiracy-theory grade, but $20b is too much for Groq.
Uh oh, not good that a major Nvidia competitor with genuine alternative technology will no longer be competing... Chances this tech gets killed post-acquisition?
It may be more likely that Nvidia sells the LPUv2 at a price that doesn't threaten Rubin.
Zero. Non-zero only if someone says something deemed “woke”.
Nvidia. Please stop. Just stop it already.
They should have bought nbis
Put Groq and Nvidia execs in prison, blatant anti-trust.
Next they would acquire and kill Cerebras. I hate every part of Nvidia
That's great, but LLMs are still not generating revenue.
I pay $20/mo for Gemini, so they're generating at least that much in revenue!
All depends on how much it costs them to service your $20/month sub in OPEX and how much it cost them in capex to buy and maintain that hardware.
They’re generating tons of revenue, just not necessarily profits
They are generating revenue, profit is the dubious thing.
The absolute best case I can make for this:
I think it’s pretty obvious at this point that Nvidia’s architecture has reached scaling limits: the power demands of their latest chips have Microsoft investing in nuclear fusion. Similar to Intel in both the pre-Core days and their more recent chips, they need an actual new architecture to move forward. As it sits, there’s no path to profitability for the buyers of these chips given the cost and capabilities of the current LLM architectures, and this is obvious enough that even Nvidia has to realize it’s existential for them.
If Groq’s architecture can actually change the economics of inference and training sufficient to bring the costs in line with the actual, not speculative, benefits of LLMs, this may not be a buy-and-kill for Nvidia but something closer to Apple’s acquisition of P.A. Semi, which made the A- and M- class chips possible.
(Mind you, in Intel’s case they had to have their clocks cleaned by AMD a couple times to get them to see, but I think we’re further past the point of diminishing returns with Nvidia - I think they’re far enough past when the economics turned against them that Reality is their competition now.)
NVIDIA and "no path to profitability" don't belong in the same zip code.
I read it as path to profitability for the AI companies buying Nvidia's chips.
No path to profitability for the people using their products for their putative purpose, which seems like it might affect Nvidia’s bottom line at some point. Clarified.
> there’s no path to profitability for the buyers of these chips given the cost and capabilities of the current LLM architectures
Didn't Anthropic say inference is already profitable?