Anthropic signs chip deals with Google and Broadcom worth hundreds of billions (3.5GW of capacity)
Posted by sr_local@reddit | hardware | View on Reddit | 23 comments
Anthropic will spend hundreds of billions of dollars on Google’s chips and cloud services in a push to secure critical computing resources as surging demand for the company’s tools propels its annualised revenue to $30bn.
The AI lab said on Monday it has committed to use “multiple gigawatts” of capacity from Google’s TPU, a rival chip to Nvidia’s dominant GPU, and the search giant’s cloud services.
Around 3.5GW of capacity on Google’s hardware will come through a partnership with chipmaker Broadcom, starting from next year, according to a separate filing on Monday.
In all, the deal would give Anthropic access to close to 5GW in new computing capacity over the coming years, according to a person with knowledge of the terms.
The hardware and infrastructure required to develop a single gigawatt of capacity — roughly equivalent to the power output of a nuclear reactor — is estimated to cost from $35bn-$50bn, with the bulk of that spent on chips. That suggests the lossmaking start-up’s commitment could run to hundreds of billions of dollars.
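A quick back-of-envelope check of that estimate, sketched in Python using only the figures quoted in the article above:

```python
# Back-of-envelope check of the FT estimate, using only the figures quoted above.
cost_per_gw_low = 35e9    # $35bn per gigawatt (low end)
cost_per_gw_high = 50e9   # $50bn per gigawatt (high end)
capacity_gw = 5           # "close to 5GW in new computing capacity"

low = capacity_gw * cost_per_gw_low
high = capacity_gw * cost_per_gw_high
print(f"Implied commitment: ${low / 1e9:.0f}bn to ${high / 1e9:.0f}bn")
# -> Implied commitment: $175bn to $250bn, i.e. "hundreds of billions of dollars"
```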
WJMazepas@reddit
Do they have 30 billion dollars? Or is it Saudi money going crazy?
CallMePyro@reddit
Could also exchange the compute for equity. Like OAI and MSFT
Vb_33@reddit
The money is always investment money
PerspicaciousGoshawk@reddit
Is there a reason these projects started being measured in energy consumption rather than compute? Is it just because it's easier to understand for more of the population?
If anything decent comes out of this psychodrama, I really hope a genuine milestone can be achieved in non-fossil fuel energy. I'm not expecting fusion, but just a notch on the way there would be real cool.
MinutePair7585@reddit
Because power usage is the actual limiting factor in building datacenters currently. Chips and servers aren't the critical path: transformers, circuit breakers and gas turbines are.
Turnip-itup@reddit
Usually it's easier to measure projects against energy consumed, because that's usually the limiting factor for determining the cooling, data center design, etc. Compute, by contrast, makes it difficult to compare different deployments.
Techhead7890@reddit
Yeah, when I had the same thought as the commenter, one of the replies explained that measuring datacenters by power use is apparently more common, and processing power is more of a metric for specialist supercomputers.
That being said, who the heck has 3.5GW of raw input power to put into such a place, or will any time soon? Apparently the whole US grid is 1280GW at the moment, and this would be 0.27% of the whole thing. Google in 2016 reportedly bought 2.6GW of renewable capacity for everything they had built at the time. Even if GPUs are much more power intensive, most prior datacenters are in the 100MW range.
That said, apparently a lot of tech companies are planning big datacenters at similar or greater GW scale (OpenAI signing a deal for 25GW of chips), and the author's estimates say AWS, Google and Meta have already been running 400MW new builds in the past 5 years or so. So depending on construction timeframes, maybe these numbers won't be too exotic in the next few years.
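A minimal sketch verifying the arithmetic in that comment, assuming only the figures it quotes:

```python
# Sanity check of the figures in the comment above.
new_capacity_gw = 3.5      # capacity from the Broadcom partnership
us_grid_gw = 1280          # quoted US grid capacity
typical_dc_mw = 100        # "most prior datacenters are in the 100MW range"

grid_share_pct = new_capacity_gw / us_grid_gw * 100
equivalent_dcs = new_capacity_gw * 1000 / typical_dc_mw

print(f"Share of US grid capacity: {grid_share_pct:.2f}%")    # 0.27%
print(f"Equivalent 100MW datacenters: {equivalent_dcs:.0f}")  # 35
```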
WHY_DO_I_SHOUT@reddit
Heck, 3.5GW would be almost a quarter of peak power consumption of the country I'm in (Finland).
SourceScope@reddit
I think it's because AI datacenters cause an energy crisis?
So roughly 3.5 nuclear power plants?
tecedu@reddit
Yes, and also these are homogeneous units, i.e. there's not that much difference in what's in the rack in a datacentre, so you can guesstimate the number of devices. Plus, depending on the cooling, you can cram more GPUs in for the same power.
CrowdGoesWildWoooo@reddit
Correct. You just want a scale where people can at least comprehend.
Think of it like saying something is 3 bananas long. Most people don't know the exact length of a banana, but they have a rough visualization of how long a banana is, so if I say that, you would at least have an idea of the size.
EloquentPinguin@reddit
I think because power usage is a real physical thing and, for projects this large, one of the most significant infrastructure burdens.
Compute numbers depend on a lot of factors. If you Jensen-math the peak compute, you get something 10x higher than in real workloads.
While I think both are fine, and compute is more interesting, the shift to talking about gigawatts just demonstrates that this is a new and important constraint and challenge in these projects.
CapeChill@reddit
You're right, plus at gigawatt scale the clusters have a theoretical compute vs. an actual one. Like you say, power is power, so there's no argument about how that compute was calculated.
sr_local@reddit (OP)
Probably because these are optimized custom ASICs and their efficiency (compute per watt) is higher than that of typical chips like Nvidia GPUs, so they don't want to disclose overall compute power.
They report only the energy consumption, which is what's fundamental for calculating cooling, infrastructure, and grid requirements.
RealPjotr@reddit
But it's hell comparing numbers, because you need to know what generation of chips they use, what interconnects, where it's built (Dubai vs the Nordics!), etc. It's not really a comparable number; all Middle East DCs use more power. Performance would be a much better measurement, and non-computer people would learn.
PerspicaciousGoshawk@reddit
Pretty much. It would be good to have a standardised compute-per-kWh figure, measured at standard room temperature, coupled with a figure for how much power the site uses.
I know that won't fit in a headline but some of us still do read the articles. I swear, there are dozens of us!
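A minimal sketch of what such a standardised metric could look like, expressed here as compute per watt since power draw is what the thread discusses; the efficiency value is a hypothetical placeholder, not a number from the thread:

```python
# Hypothetical "standardised compute per watt" metric, as suggested above.
# The efficiency value is an illustrative placeholder, not a measured figure.
def site_compute_flops(site_power_w: float, flops_per_watt: float) -> float:
    """Sustained site-wide FLOP/s, given total power draw and a standardised
    FLOPS-per-watt rating measured under fixed conditions (e.g. room temperature)."""
    return site_power_w * flops_per_watt

# Example: a 3.5GW site at an assumed 1 TFLOPS/W sustained efficiency.
flops = site_compute_flops(site_power_w=3.5e9, flops_per_watt=1e12)
print(f"{flops / 1e18:.0f} EFLOP/s")  # 3500 EFLOP/s
```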
kiwibonga@reddit
More capacity for the people paying $200/month, and not free hits to hook the free users, I'm sure.
phate_exe@reddit
Did they specify whether these are actual, signed contracts or are we talking about non-binding letters of intent again?
Vushivushi@reddit
https://investors.broadcom.com/static-files/c906d370-921b-4bc2-bb7b-57877dfcf1ae
Material event which Broadcom had to file an 8-K for.
There's a long-term agreement (LTA) between Google and Broadcom running to 2031 for TPUs, networking and other components.
The deal with Anthropic was an existing 1 GW for 2026 which Broadcom had already expected to grow to >3GW in 2027. The announcement confirms that they are now working towards procuring a total of 3.5 GW for 2027, but how that plays out depends on Anthropic's continued growth and everyone's ability to procure capacity and financing.
Anthropic's current growth trajectory supports this new capacity, but things can always change.
That said, Broadcom rarely talks about opportunities they aren't confident about. It's Broadcom that has to secure chip and packaging capacity, so they don't talk about customers that aren't ramping.
pwreit2042@reddit
Google are going to dominate AI like they dominated Search. No other company is anywhere near their moat. Apple is paying them $1B a year to use their AI, Meta will be committing billions to use TPUs and helping make the tools easier to work with, Anthropic is paying shit tonnes. All this is improving Google's own tech and enticing others to pay Google.
The worst thing is, Google doesn't even need the money; they could pay this off from their search business. It's scary how much power Google has right now. Google will be the first to reach ASI, I think, unless China does it first.
theholylancer@reddit
So how much of this is, as a % of spending, vs Nvidia chips?
And are they also looking at Meta's / Amazon's chips?
I'm wondering whether this is just a diversification play, or a full swap over to Google's chips.
III-V@reddit
They could use it. Tired of running out of responses after like 5 prompts. If I paid for it, I'd still be getting cut off way too fast.
Fusifufu@reddit
Given their rapid growth, that seems very necessary. The combination of ever more demand for AI and modern AI approaches being ever more token-intensive (longer reasoning, agent teams, etc.) makes it seem like even with all the investments, the companies will be compute-constrained for some time.