Follow the CAPEX: Triangulating NVIDIA

By Charles Fitzgerald


Let’s compare NVIDIA’s Data Center segment revenue to the CAPEX spending of the hyperclouds (Amazon, Google, Microsoft, henceforth AGM) plus Meta (AGMM collectively) over the last couple of years. We’d like to know how much of NVIDIA’s revenue comes from the hyperclouds (and honorary hypercloud sidekick Meta) and how much of hypercloud CAPEX is going to NVIDIA for AI infrastructure.

Above we see the AI liftoff in NVIDIA’s Data Center business in the last year. ChatGPT launched five quarters ago (November 30, 2022) while the NVIDIA H100 shipped “in the fall” of 2022.

NVIDIA’s fiscal year ends in January, but below we will compare their quarters with the overlapping calendar quarters of their big customers, so remember that NVIDIA’s revenues run a month later than the hyperclouds’.

The chart above compares NVIDIA’s Data Center revenue with the all-up, corporate-level CAPEX spend of AGM (so it includes Amazon’s #bonkers pandemic spending on logistics infrastructure). NVIDIA’s total Data Center revenue hit 49% of AGM all-up CAPEX in Q4.

Now we compare NVIDIA’s Data Center revenue with AGM’s estimated spending on data center CAPEX (based on proprietary Platformonomics analysis, aka, in proper analyst tradition, a guess). NVIDIA Data Center revenue is at 81% of that estimated AGM data center spending. It is hard to imagine NVIDIA capturing even half of hypercloud data center CAPEX, so that suggests NVIDIA is spreading the GPUs across lots of other customers.

Next we add Meta to the all-up corporate spend picture. I don’t usually track Meta as they don’t have a cloud (and likely won’t due to a recurring tendency to pull the rug out from under developers, which is a bad look for aspiring platforms), but they do spend a lot on CAPEX1 and more recently have gone all-in on GPUs (though it is really unclear what they are doing with them beyond training Llama — perhaps LLM-powered search?). That takes NVIDIA Data Center revenue down to 40% of AGMM CAPEX in Q4.

Finally, we add our proprietary guess of data center infrastructure spending by AGM plus Meta. NVIDIA Data Center revenue is about 65% of that estimate. Again, that suggests NVIDIA is spreading the GPUs well beyond the hyperclouds and Meta.
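Since the underlying data center estimates are proprietary (i.e., guesses), one way to sanity-check the four ratios above is to back out the implied denominators from NVIDIA’s reported Q4 FY24 Data Center revenue of roughly $18.4 billion. A minimal sketch, with that one figure assumed from NVIDIA’s reporting and the percentages taken from the charts:

```python
# Back out the implied CAPEX denominators from the stated Q4 ratios.
# Assumes NVIDIA Q4 FY24 Data Center revenue of ~$18.4B (quarter ending January 2024).
nvidia_dc_q4 = 18.4  # $B

ratios = {
    "AGM all-up corporate CAPEX": 0.49,
    "AGM estimated data center CAPEX": 0.81,
    "AGMM all-up corporate CAPEX": 0.40,
    "AGMM estimated data center CAPEX": 0.65,
}

for label, share in ratios.items():
    implied = nvidia_dc_q4 / share
    print(f"{label}: implied ~${implied:.0f}B in Q4")
```

The implied ~$38 billion of all-up AGM CAPEX is in the same ballpark as what Amazon, Google, and Microsoft actually reported for the quarter, which is a reasonable sanity check on the chart.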

None of the above tells us anything about what the hyperclouds and Meta are spending, collectively or individually, on AI infrastructure. NVIDIA obviously has a lot of other customers (enterprises, universities, nation states, et al.) and is over-allocating scarce supply to boutique GPU clouds2 (e.g. CoreWeave, Lambda Labs, Oracle) in hopes of building leverage over the hyperclouds (who happen to be building their own competitive silicon).

NVIDIA did say in their conference call:

“In the fourth quarter, large cloud providers represented more than half of our data center revenue, supporting both internal workloads and external public cloud customers.”

So over $9 billion of their Q4 Data Center revenue went to the hyperclouds (and they may generously include other not-so-hyper providers in that bucket, i.e. what we call “clowns”).
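For reference, the $9+ billion figure is just “more than half” of NVIDIA’s roughly $18.4 billion in Q4 Data Center revenue; a trivial sketch, with that total assumed from their reporting:

```python
# "More than half" of NVIDIA's Q4 FY24 Data Center revenue (~$18.4B, per their reporting).
nvidia_dc_q4 = 18.4  # $B
print(f"Large cloud providers: more than ${nvidia_dc_q4 / 2:.1f}B")  # more than $9.2B
```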

Microsoft and NVIDIA

Much more interesting is what NVIDIA says about customer concentration in their SEC filings. Our assumption is that Microsoft is NVIDIA’s largest customer. From these disclosures, we can construct the following:

Microsoft was likely responsible for 19% of NVIDIA’s revenue in FY24 and 22% in the fourth quarter. This suggests Microsoft spent $11.58 billion with NVIDIA in NVIDIA’s FY24. Microsoft also buys (a few) GPUs for Surface devices, so it isn’t entirely AI infrastructure, but it is close.

And if Microsoft spent almost $5 billion with NVIDIA in Q4, that leaves ~$4 billion of the $9 billion-plus bucket to split among Amazon, Google, and perhaps other “large cloud providers”. It is unclear if Meta is in NVIDIA’s “large cloud provider” category, but if so, it would suggest Amazon and Google are getting very small allocations.
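A quick sketch of that arithmetic, assuming NVIDIA’s reported totals of roughly $60.9 billion in FY24 revenue and $22.1 billion in Q4 revenue (both taken from their FY24 reporting and treated as approximate):

```python
# Microsoft's implied spend, from NVIDIA's customer-concentration disclosures.
# Assumes NVIDIA reported ~$60.9B of FY24 revenue and ~$22.1B of Q4 FY24 revenue.
nvidia_rev_fy24 = 60.9   # $B
nvidia_rev_q4 = 22.1     # $B

msft_fy24 = 0.19 * nvidia_rev_fy24   # ~$11.6B, i.e. the $11.58 billion above
msft_q4 = 0.22 * nvidia_rev_q4       # ~$4.9B ("almost $5 billion")

# The "large cloud providers" bucket was more than half of ~$18.4B in Q4 Data Center revenue.
large_cloud_q4 = 18.4 / 2            # > $9.2B
leftover = large_cloud_q4 - msft_q4  # ~$4B+ left for Amazon, Google, and perhaps others

print(f"Microsoft FY24 ~${msft_fy24:.1f}B, Q4 ~${msft_q4:.1f}B, "
      f"leaving ~${leftover:.1f}B+ of the Q4 bucket")
```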

A second customer gets called out in Q3, accounting for 13% of revenue that quarter and 10% of revenue for the first three quarters of the year ($2.4B and $3.9B respectively). This is probably Meta (it is too early for Amazon, who took a long time to make peace with NVIDIA, and Google is dividing its investment between NVIDIA and its own TPUs). Customer two didn’t warrant a breakout in any other quarter.
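The customer-two figures check out against NVIDIA’s reported quarterly revenue; a sketch, assuming roughly $7.2B, $13.5B, and $18.1B of revenue in the first three quarters of FY24:

```python
# Sanity check on the second disclosed customer (probably Meta).
# Assumes NVIDIA reported roughly $7.2B, $13.5B, and $18.1B of revenue in Q1-Q3 FY24.
q1, q2, q3 = 7.2, 13.5, 18.1  # $B

customer2_q3 = 0.13 * q3                # ~$2.4B
customer2_9mo = 0.10 * (q1 + q2 + q3)   # ~$3.9B

print(f"Customer two: ~${customer2_q3:.1f}B in Q3, "
      f"~${customer2_9mo:.1f}B over the first three quarters")
```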

If we switch to Microsoft’s calendar (so the NVIDIA numbers are off by a month), we can look at Microsoft’s spend with NVIDIA vs. Microsoft’s total CAPEX. It peaks at 43% in Q4! The mad scramble for GPUs is accelerating!
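A rough version of that ratio, assuming Microsoft’s calendar Q4 2023 CAPEX (including finance leases) was about $11.5 billion; that figure is an approximation from their filings, so treat the result as directional:

```python
# Microsoft's estimated Q4 spend with NVIDIA as a share of Microsoft's calendar Q4 2023 CAPEX.
# Assumes ~22% of NVIDIA's ~$22.1B Q4 revenue went to Microsoft and that Microsoft's Q4 CAPEX
# (including finance leases) was roughly $11.5B; both figures are approximations.
msft_nvidia_q4 = 0.22 * 22.1  # ~$4.9B
msft_capex_q4 = 11.5          # $B
print(f"NVIDIA share of Microsoft CAPEX: "
      f"{msft_nvidia_q4 / msft_capex_q4:.0%}")  # ~42%, near the chart's 43% peak
```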

$11.58 billion tops my prediction, from just a couple of weeks ago, of how much Microsoft spent on generative AI infrastructure:

Microsoft is reputed to be the largest customer for both NVIDIA (Q4 2022, Q2 2023, 2H 2023) and AMD (plus is doing its own AI silicon). The abrupt 4% bump in CAPEX as a percentage of revenue, after a steady 13-14% for years prior, is the best proxy for the incremental AI spending in 2023. That suggests an incremental AI-driven spend of about $9 billion, or 22% of overall CAPEX.

That means CAPEX intensity for the rest of Microsoft’s (non-AI) cloud infrastructure is actually declining, even as overall Azure cloud revenue grew by 30% (of which they attributed 6 points to AI). Either they’re stretching the infrastructure/not investing as far ahead of demand or there really is something to the extensions of server lifetimes (which I still dismiss as accounting hijinks downstream from raw CAPEX investments).
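To make the quoted back-of-envelope explicit: a four-point jump in CAPEX intensity on roughly $227 billion of calendar-2023 revenue is about $9 billion, which is a bit over a fifth of roughly $41 billion in total calendar-2023 CAPEX. Those revenue and CAPEX bases are approximations from Microsoft’s filings; a sketch:

```python
# The incremental-AI-CAPEX estimate from the quoted passage.
# Assumes roughly $227B of Microsoft calendar-2023 revenue and roughly $41B of total
# calendar-2023 CAPEX (including finance leases); both should be checked against the filings.
msft_rev_2023 = 227.0    # $B
msft_capex_2023 = 41.0   # $B
intensity_bump = 0.04    # the ~4-point jump from a steady 13-14% of revenue

incremental_ai = intensity_bump * msft_rev_2023
print(f"Incremental AI CAPEX: ~${incremental_ai:.0f}B "
      f"({incremental_ai / msft_capex_2023:.0%} of total CAPEX)")
```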

Microsoft’s $11.58 billion spend translates at list prices to about 385,000 H100s (on top of whatever they bought in 2022). Presumably you also get a unit discount when you write a three-comma check. There is also some (required) networking gear in there3, but that just muddles a nice big number.
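Working backwards from the stated numbers, $11.58 billion spread over roughly 385,000 units implies about $30,000 per H100; a sketch of that back-out:

```python
# Implied per-unit price from the stated spend and unit count.
msft_spend = 11.58e9   # dollars
h100_units = 385_000
print(f"Implied price per H100: ~${msft_spend / h100_units:,.0f}")  # roughly $30,000
```

That is consistent with the list prices commonly cited for the H100, before any volume discount.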

NVIDIA CAPEX

NVIDIA’s own CAPEX is measly, just $1.07 billion for the fiscal year, and down 42% from the prior year. Less than 2% of revenue. Fabless indeed.
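For completeness, NVIDIA’s own CAPEX intensity, assuming FY24 revenue of roughly $60.9 billion (from their reporting):

```python
# NVIDIA's own CAPEX as a share of revenue, FY24.
nvidia_capex_fy24 = 1.07   # $B
nvidia_rev_fy24 = 60.9     # $B, approximate reported total
print(f"CAPEX as a share of revenue: "
      f"{nvidia_capex_fy24 / nvidia_rev_fy24:.1%}")  # ~1.8%, i.e. less than 2%
```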

What have I missed here?

  1. Seventh largest in the CAPEX Extended Universe I track, just behind TSMC. ↩︎
  2. Who may be deadbeats? ↩︎
  3. In the conference call, NVIDIA said a couple things about networking which is also in the Data Center segment and complements the GPUs in AI infrastructure: “Compute revenue grew more than 5x and networking revenue tripled from last year.” and “Networking exceeded a $13 billion annualized revenue run rate. Our end-to-end networking solutions define modern AI data centers. Our InfiniBand solutions grew more than 5x year on year.” [$13 billion run rate equates to $3.25 billion in Q4, which is about 18% of NVIDIA’s Data Center business]. ↩︎

3 responses

  1. Nice to see you doing some real work for a change Charles 🙂

    From the NVIDIA earnings call: “In the fourth quarter, large cloud providers represented more than half of our data center revenue, supporting both internal workloads and external public cloud customers.”

    And…

    “Consumer internet companies have been early adopters of AI and represent one of our largest customer categories.”

    Your guess is probably pretty accurate.

  2. Charles Fitzgerald

    “Real work” is such a fascinating concept…

  3. […] has an extremely high market share. And as Jesse Felder pointed out (based on an article from Platformonomics), Nvidia’s total data center revenue was 40% of the entire capital expenditures of […]
