In their recent earnings call, Amazon kinda blew the doors off industry analyst (motto: “we’ll be wrong, then take it out on your stock”) projections for their capex spend.
Specifically, analysts pulled some numbers out of their… hat, and decided that Amazon would end up spending $150 billion on capex for 2026. Amazon then proclaimed that it was going to be a lot closer to $200 billion (“no worries, you only missed by the GDP of Croatia”), and the industry spent the next two business weeks just beating the absolute stuffing out of their stock for it. How badly? Shares fell 11% after hours, then kept falling for nine straight sessions – the longest losing streak since 2006 – erasing more than $450 billion in market value. That’s more than the entire market cap of most companies that analysts are supposedly experts at evaluating.
Folks more retail-attuned than I have made a lot of hay about how it’s a repeat of Amazon’s overinvestment in their “shipping underpants to customers” business during the pandemic, but I disagree.
A minute or two after I called it “tilting at an AI windmill” on the internet, Andy Jassy said on the record that this “isn’t some sort of quixotic top-line grab,” which is exactly the kind of statement one makes when things are going super, super well.
But is it the death knell that the industry paints it as? I think not.
The demand is real
On the earnings call, Jassy said, “If you look at the capital we’re spending and intend to spend this year, it’s predominantly in AWS. Some of it is for our core workloads, which are non-AI workloads, because they’re growing at a faster rate than we anticipated. But most of it is in AI, and we just have a lot of growth and a lot of demand.” AWS CEO Matt Garman went further in an interview: “Even with all of this investment, my best estimation is we will be capacity constrained for the next couple of years. We will sell every single server and every single bit and we will wish that we had more.” That’s not “we hope to find customers for all these GPUs, perhaps Santa Claus will deliver them,” but rather “we literally cannot build data centers fast enough to meet the screaming demand.”
If you’ll recall, among other things like “being obnoxious to giant companies on the internet” and “leading a crusade against NAT Gateway pricing,” I help very large companies negotiate their AWS contracts. To be more pointed, I’m well positioned to call out nonsense when I see it, like the time AWS claimed its staff spent a lot of time optimizing customer bills.
In this case, what I’m seeing is not “AWS pushing GPUs to customers.” It’s “their customers are taking all that AWS will give them and then clamoring for more.”
In other words, I can confirm these claims. AWS isn’t getting ahead of its skis here; customers are legitimately asking for all the GPUs they can get their hands on. When AWS can’t deliver, they’re forced to look elsewhere. It’s not going to win me friends in some circles, but my customers don’t particularly want to do business with neo-clouds; they’re being forced to do so since they’re the only game in town that can get them the hardware. In the fullness of time, this is likely to correct.
But this absolutely doesn’t mean that AWS is out of the woods.
The OpenAI problem
Last year, AWS and OpenAI signed a $38 billion deal, which is notable for a couple of reasons. First, it’s the smallest hyperscaler deal that OpenAI has signed, but the largest contract I’ve ever seen AWS announce. Second, their press release explicitly calls out Nvidia GPUs and not AWS’s own Trainium chips (motto: “we’ll give you more of the stuff you want if we can use you as a reference customer for these; please, we’re begging you”), which is a strong indicator of the lack of serious interest from leading AI labs in AWS’s homebrew silicon.
Then there’s the part where Amazon is reportedly in talks to invest up to $50 billion in OpenAI – on top of the existing $8 billion it has in Anthropic, for whom it literally built an $11 billion data center. Nothing says “coherent strategy” quite like bankrolling two companies whose entire business model is to eat each other’s lunch.
AWS is a serious company that signs serious contracts, whereas from all outward appearances OpenAI sends their corporate buyer out to dinner with vendors, where this person gets absolutely plastered and then proceeds to sign contracts they have no clear path toward being able to fulfill. A failure like that wouldn’t stay contained to one company; a hypothetical collapse is going to look a lot like contagion, with everyone from Nvidia on down taking the hit. What does AWS do when suddenly a swath of its customers not only aren’t clamoring for GPUs, but aren’t paying their bills?
The secret of the hyperscalers
If you take a look at the three hyperscalers, they all share a critical survival trait: a wildly profitable business that propped up their cloud divisions long enough for them to become viable in their own right. If AWS customers start defaulting, Amazon will still make money shipping dog toys and underpants to punters. Google will still be selling ads against search results and turning off beloved services. Microsoft will still be a law firm in a trench coat, playing stupid games with licensing. The neo-clouds, on the other hand, are one-trick ponies.
This is also where the “it’s just like the pandemic warehouse overbuild” comparison falls apart. When Amazon overbuilt fulfillment capacity in 2020 and 2021, the warehouses were still useful – they just had too many of them. GPUs and AI-optimized data centers are a very different kind of asset. If the AI music stops and you’re sitting on hundreds of billions of dollars of specialized infrastructure, you can’t exactly repurpose it to ship underpants faster. The best-case scenario is a painful write-down; the worst case is that you’ve built the world’s most expensive space heaters.
The actual bet
So here’s where I land. The $200 billion isn’t insane, and it isn’t a death knell. AWS has a $244 billion backlog (up 40% year over year), is growing at 24% on a $142 billion annual run rate, and is monetizing capacity as fast as they can bolt it to a rack and flip the circuit breaker. The demand is demonstrably real, and the analysts who projected $150 billion were, to use a technical term, wrong.
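For scale, here’s a back-of-envelope sketch using those reported figures. The inputs are the numbers above; everything derived from them is simple illustrative arithmetic, not guidance from Amazon:

```python
# Back-of-envelope math on AWS's reported figures (all in billions of USD).
backlog = 244      # contracted backlog, up 40% year over year
run_rate = 142     # annualized revenue run rate
growth = 0.24      # year-over-year growth rate
capex_2026 = 200   # Amazon's projected 2026 capex, "predominantly" AWS

# The backlog alone covers roughly 1.7 years of revenue at today's run rate.
backlog_coverage_years = backlog / run_rate

# Next year's run rate if the current growth rate simply holds.
projected_run_rate = run_rate * (1 + growth)

print(f"Backlog coverage: {backlog_coverage_years:.1f} years")
print(f"Projected run rate: ${projected_run_rate:.0f}B")
print(f"Capex as a share of current run rate: {capex_2026 / run_rate:.0%}")
```

The striking ratio is the last one: the planned capex is roughly 141% of AWS’s current annual revenue run rate, which is the kind of bet you only make when you believe the backlog and growth numbers.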
But that demand is real right now. The entire thesis rests on AI workloads continuing to grow at a pace that justifies half a trillion dollars in combined capex from just four companies in a single year. If you believe that AI is going to continue its current trajectory – that enterprises will keep finding ways to convert GPU cycles into business value – then Amazon is making the right call. The company has a track record of making bets that Wall Street hates for eighteen months and that then look brilliant in retrospect. See: AWS itself, Prime, the fulfillment network, absolutely not the Fire Phone, etc.
If you believe that we’re somewhere in the “irrational exuberance” phase of AI adoption, then Amazon is building the most expensive monument to hubris since Meta spent $46 billion trying to convince the world it wanted to attend meetings as a legless cartoon avatar.
My money, for what it’s worth, is somewhere in between. The demand is real. The contracts are signed. But the gap between “every enterprise is experimenting with AI” and “every enterprise is running production AI workloads at scale” is a chasm that $200 billion worth of GPUs can’t bridge on its own. Amazon will almost certainly be fine – they have the underpants business to fall back on. It’s everyone else in the supply chain who should be losing sleep. ®