AI CapEx Now Hinges on Deus ex Machina
We are making great strides on AI adoption, but new investment is largely AGI-focused. This is surprising, since AI’s luminaries have been walking back their AGI timelines. And future science may be at risk.
When I published AI’s $600B Question a year ago, the question was whether AI could ever generate enough revenue to justify the money being put in the ground.
Now, with talk of 100 GW or even 250 GW of energy buildouts for AI data centers, we’ve reached a new high-water mark. AI’s $600B Question seems quaint.
One thing has become clear: Nothing short of AGI will be enough to justify the investments now being proposed for the coming decade.
This is happening even as AI’s potential is being realized: ChatGPT has continued its epic rise to north of $12B in run-rate revenue, Anthropic’s meteoric climb has taken it past $5B in run-rate revenue, and a new club of companies is scaling quickly from $0 to $100M in revenue.
There’s a version of the world — and this is the version that Microsoft and Amazon increasingly seem to be pursuing — where the next frontier is AI adoption. The models have proven themselves to be great, and now it’s time to monetize these investments and drive a world-changing technology evolution.
But that point of view is by no means widespread. Outside of these two giants, a debt-fueled “second push” is happening. Labs are taking all their profits and capital and plowing them right back into new data centers. A new breed of companies — namely Oracle, Meta, and CoreWeave — is going all in, no holds barred. Given the scale of these investments, the only objective that can explain this strategy is AGI.
What’s surprising to me is that this doubling-down on CapEx is happening even as the dream of AGI seems to be cooling off. Two things have happened. First, new model progress has tapered off, despite much larger training clusters. Second, and likely as a consequence, AI luminaries have started to walk back their AGI timelines. In December, Ilya Sutskever said that pre-training is dead. In June, Sam Altman said that AGI will be more of a “gentle singularity.” And that same month, Andrej Karpathy forecast “a decade of agents” rather than “AGI in 2027.”
Imagine there is some market-implied AGI probability, which is a function of total data center CapEx. And then imagine there is a science-implied AGI probability, which is a function of first-principles reasoning from scientists. These curves are diverging. The limitations of the Internet as a dataset, the limitations of continual learning, and the evidence of GPT-5’s incremental progress have led many of AI’s leading thinkers to update their timelines for AGI. And yet at the same time, we’re spending as though exactly the opposite evidence has come in.
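One crude way to make “market-implied” concrete: if investors are roughly risk-neutral and the spend pays off only conditional on AGI, the CapEx itself implies a floor on the probability they must be assigning. A minimal sketch, with hypothetical CapEx and payoff figures:

```python
# Break-even reading of a "market-implied" AGI probability.
# If the spend pays off only conditional on AGI, risk-neutral break-even
# requires p * payoff >= capex, i.e. p >= capex / payoff.
def market_implied_agi_probability(capex: float, agi_payoff: float) -> float:
    """Minimum AGI probability at which the spend breaks even."""
    return capex / agi_payoff

# Hypothetical numbers: $1T of cumulative AI CapEx against a $10T AGI payoff.
print(market_implied_agi_probability(1e12, 10e12))  # 0.1
```

On this reading, every new data center raises the probability the market is implicitly assigning to AGI, even as the science-implied estimate moves the other way.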
Here’s something even more surprising: The crazy FOMO around LLM scaling right now may actually reduce the probability of achieving AGI in the medium term (ten to twenty years). Why?
- The big labs are starving PhD programs of talent; the financial packages are almost explicitly designed to make staying in a PhD program untenable.
- The incentives inside these organizations drive short-term thinking, on the order of one to three years.
- Corporate politics tends to favor in-vogue, consensus ideas over more radical, unpopular ones: the very kinds of ideas that can take years to prove worthwhile, and on which scientific breakthroughs often depend.
As a result, more and more resources are being poured into a single AI approach — scale up LLMs and use reinforcement learning to tune them. If AGI turns out to be a science problem rather than an engineering problem, then this level of concentration will have been a mistake.
If new compute investments aren’t getting us closer to AGI, then what’s the point? One argument is that compute is the commodity of the future, and that stockpiling this resource is likely to be valuable, regardless. Setting aside the issue of depreciation, which makes this argument tenuous at best, the bigger question becomes how long financial markets will be willing to underwrite such stockpiling, and whether investors even understand that this is what they are doing. My sense is that while researchers are increasingly uncertain about how compute translates into capability improvements, Wall Street hasn’t fully woken up to this.
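To see why depreciation makes the stockpiling argument tenuous, consider a toy straight-line schedule; the useful-life figures and fleet size below are assumptions, not reported numbers:

```python
# Toy straight-line depreciation of a stockpiled accelerator fleet.
# The useful life is an assumption; shorter lives make stockpiling look worse.
def remaining_value(initial_cost: float, years: float, useful_life: float) -> float:
    """Book value remaining after `years` under straight-line depreciation."""
    return max(0.0, initial_cost * (1.0 - years / useful_life))

fleet = 100e9  # hypothetical $100B of GPUs
for life in (4.0, 6.0):
    print(f"{life:.0f}-year life, value after 3 years: "
          f"${remaining_value(fleet, 3.0, life) / 1e9:.0f}B")
# 4-year life, value after 3 years: $25B
# 6-year life, value after 3 years: $50B
```

If accelerators are worth a fraction of their cost within a few years, a stockpile looks less like a commodity reserve and more like a rapidly melting asset.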
This gap between researcher uncertainty and market confidence is likely to matter, because AI has evolved from a Silicon Valley R&D effort into an economy-wide bet. 2025 has become “The Year of the Data Center,” with abstract commitments translating into concrete reality: construction is underway and power plants are being developed. The Economist, The Atlantic and many others have reported on the GDP impact of this compute boom, and the potential adverse consequences (especially for regional economies) if these investments turn out to be misguided.
Deus ex machina (“god from the machine”) refers to a plot device where an unsolvable problem is suddenly resolved by an unexpected intervention. In the context of AGI, Deus ex machina now has a double meaning: the literal creation of a new godlike intelligence, but also the only path forward to resolve an increasingly tricky ROI problem.


In your interview with Goldman Sachs, you made an observation about the flow of dollars in the AI universe: "One observation that a lot of people have made is, if a dollar comes in at the top, Nvidia keeps $1.20 today. So Nvidia is capturing a lot of the value in the supply chain today." I'm not following... how is NVDA capturing $1.20 out of every $1? Is there leverage? Is this based on an FV vs. NPV calculation? Thanks in advance.
Global gross wage costs (not including non-wage labor costs) are roughly $55T vs. global GDP of $110T. Let's assume that AI agents (which are not AGI) lead to some monetary gain that is a combination of a one-time productivity boost (PB) and a wage cost reduction (WCR). Total monetary gain would then be $55T * WCR% + $110T * PB%. Obviously the best case is all gain coming from PB: for example, a 5% boost on $110T is $5.5T of monetary gain with no change in wage costs. Let's take a rather arbitrary base case: 1% WCR and a one-time 3% PB on average. That leads to a total monetary gain of $55T * 1% + $110T * 3%, roughly $3.85T annually. I assume no growth in wage costs or GDP; any growth would make the gain even bigger. I assume a one-time PB, which is a worse case than a compounding PB and makes the gain appear smaller. And I assume that not all jobs will be affected by AI, which is why I put only 3% PB on the aggregate.
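In code, the same back-of-envelope (wage and GDP bases as above; the scenario percentages are the same arbitrary assumptions):

```python
# Back-of-envelope annual monetary gain from AI agents, per the model above.
GLOBAL_WAGES = 55e12   # global gross wage costs, USD
GLOBAL_GDP = 110e12    # global GDP, USD

def annual_gain(wcr: float, pb: float) -> float:
    """Wage-cost reduction applies to the wage base; productivity boost to GDP."""
    return GLOBAL_WAGES * wcr + GLOBAL_GDP * pb

print(f"best case: ${annual_gain(0.00, 0.05) / 1e12:.2f}T")  # $5.50T
print(f"base case: ${annual_gain(0.01, 0.03) / 1e12:.2f}T")  # $3.85T
```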
The model is rather simplified, maybe oversimplified, but it illustrates that if AI agents are even remotely useful and permeate the service economy, the annual monetary gains will be in the trillions. I fail to see why this is not a real possibility, and why it would not justify even $1T of annual spending over the next 5-10 years.