The numbers coming out of Wall Street’s most influential research shop tell a story that Silicon Valley would rather not hear.

In two separate reports published in April, Goldman Sachs analysts examined the great AI infrastructure build-out from opposite ends of the telescope — one team studying how much the machine will cost to build, another studying whether the machine is actually working — and arrived at a rare institutional moment: two wings of a single firm arguing, simultaneously, that the machine costs more than anyone knows and produces less than anyone admits.

Notably, it is not the first time Goldman has said something like this. James Covello, the firm’s head of global equity research, has been one of Wall Street’s most prominent and consistent AI skeptics since he co-authored the original “Too Much Spend, Too Little Benefit?” report in June 2024 — a piece that landed like a thunderclap precisely because it came from inside one of the institutions most deeply enmeshed in financing the boom it was questioning. Goldman advises hyperscalers, underwrites chip company offerings, and sits at the table with the companies building the very infrastructure Covello was interrogating.

Two years later, Covello is back with an update. He was wrong about some things, he acknowledges. But on the central question — whether the spending is producing commensurate returns — he has only gotten more convinced.

Trillion-dollar bill

Start with the cost. The Goldman Sachs Global Institute, which is not part of the bank’s research arm, issued a report titled “Tracking Trillions,” projecting roughly $7.6 trillion in cumulative AI capital expenditure between 2026 and 2031, covering chips, data centers, and power infrastructure. Annual spending is expected to more than double over that period, from $765 billion this year to $1.6 trillion by 2031.

Those figures, the report is careful to note, are not forecasts. They are baseline estimates, and extremely sensitive ones at that. Change a single assumption about how quickly AI chips become obsolete, and cumulative spending swings by hundreds of billions of dollars. Build the next generation of data centers at $19 million per megawatt instead of $15 million, and total data center costs balloon by more than $500 billion over the projection period. That sensitivity is the report’s central message: the $4 trillion to $8 trillion figures that have “featured prominently in recent market commentary” are “far more conditional than they appear.”
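The $500 billion swing can be checked with back-of-envelope arithmetic. The sketch below is illustrative, not from the report; the roughly 125,000 MW of capacity is simply what the report’s own numbers imply (a $4 million-per-megawatt cost difference producing a $500 billion swing).

```python
# Back-of-envelope sensitivity check. The capacity figure is inferred
# from the report's quoted numbers, not stated in it.
def total_cost_usd(capacity_mw: float, cost_per_mw_usd: float) -> float:
    """Total build cost for a given capacity at a given per-megawatt cost."""
    return capacity_mw * cost_per_mw_usd

CAPACITY_MW = 125_000  # ~125 GW, implied by $500B swing / $4M-per-MW difference

low = total_cost_usd(CAPACITY_MW, 15_000_000)   # $15M per megawatt
high = total_cost_usd(CAPACITY_MW, 19_000_000)  # $19M per megawatt

swing = high - low
print(f"Cost swing: ${swing / 1e9:.0f} billion")  # → Cost swing: $500 billion
```

A $4 million-per-megawatt change, roughly a 25% move in unit cost, is all it takes to shift the total by half a trillion dollars, which is why the report treats its figures as conditional baselines rather than forecasts.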

The physical reality underlying those numbers is staggering in its own right. Today’s leading AI systems pack 72 processors into a single rack, connected by hundreds of thousands of kilometers of cabling. The facilities housing them require industrial-scale liquid cooling, dedicated power delivery, and redundancy systems that didn’t exist in conventional data center design a decade ago. A standard cloud data center from the 2010s might have been built at $10 million per megawatt. The next generation of AI-optimized facilities costs $15 million to $20 million, and some facilities built just two years ago are already considered insufficiently equipped for the chips being manufactured today.

What is the return on investment?

Then there’s Covello’s perspective.

Covello writes that he spent two years tracking what all that investment is actually producing for the companies deploying it. His findings do not make for comfortable reading in the boardrooms of companies that have staked their technology roadmaps on artificial intelligence.

Covello cited the influential MIT Labs report, as reported by Fortune, which found that despite $30 billion to $40 billion in enterprise investment in generative AI, 95% of organizations were getting zero return on their AI pilots. A 2025 EY survey found that 99% of companies in its sample reported financial losses due to AI-related risks, with an average loss of $4.4 million per company. A Wall Street Journal survey found a yawning gap between what C-suites say AI is doing for productivity and what workers on the ground actually report. One AI hiring startup tested frontier AI agents on 480 workplace tasks commonly performed by bankers, consultants, and lawyers. Every agent failed to complete a majority of the tasks.

“56% of Americans say they use AI,” the report quotes one research firm saying, “yet 85% of the workforce does not have a value-driving AI use case.”

IT budgets, rather than shrinking as executives promised shareholders, are growing. Gartner projects global IT spending to rise from $5 trillion in 2024 to $6.15 trillion in 2026. The cost savings have not materialized. Harvard Business Review research cited in the report found that AI-generated errors — what researchers are calling “workslop” — cost a 10,000-person organization more than $9 million annually in lost productivity. Far from efficiencies, AI appears to be generating new headaches and expenses in many cases.

Nvidia: the AI economy’s big winner

Somewhere between the two reports lies what may be the defining structural problem of the AI era: almost none of the money flowing into the AI ecosystem is being captured by the companies deploying it. Nearly all of it is flowing to Nvidia.

“Tracking Trillions” anchors its entire baseline model to Nvidia’s forward revenue estimates, noting that the chip giant accounts for roughly 75% of total compute spend at gross margins of approximately 75%, far above any competitor. That framing essentially acknowledges that the AI economy as currently constructed is a revenue model for one company.

Covello is less diplomatic about what this means. Semiconductor companies, he writes, “are supposed to thrive when their customers thrive [but] in this cycle, the chip companies are thriving at the expense of everyone above them in the chain.” Since the launch of ChatGPT, Nvidia’s net income has grown roughly 20x. The hyperscalers — Microsoft, Amazon, Google, Meta — have seen far more modest gains and enterprises and model companies have been losing money. “Something has to change with this dynamic,” Covello writes, “either the companies higher in the chain need to start earning a return on investment or they will eventually need to spend less on the chips that are powering this build.”

FOMO is the motivator

And yet the spending continues. Which brings us to perhaps the most remarkable finding from Goldman: the engine driving the fifth industrial revolution does not appear to be a rational capital allocation process. It is insecurity, if not outright fear.

Covello had explicitly predicted in his original 2024 report that if hyperscaler stocks underperformed the market for a sustained period, those companies would cut their AI capital expenditure. The opposite has happened. Microsoft, Amazon, Google, and Meta have dramatically increased their spending on AI infrastructure even as their stocks have lagged the S&P 500. Hyperscalers have burned through all their free cash flow from operations and are now issuing debt to fund the build-out. Data center debt issuance doubled to $182 billion in 2025 alone.

Covello’s diagnosis: “FOMO has proven a stronger incentive than poor stock performance as hyperscalers have prioritized being involved in the AI arms race over their current shareholders.”

The phrase “arms race” is doing a lot of work in that sentence. Arms races, by definition, are not about winning — they are about not losing. No hyperscaler CEO is racing to build data centers because the ROI spreadsheet demands it. They are racing because the cost of being wrong about AI — of sitting it out and watching a competitor transform the industry — feels existentially higher than the cost of burning through cash on infrastructure that may never fully pay for itself. It is a very human insecurity driving what may be the largest coordinated capital deployment in corporate history.

The “Tracking Trillions” report captures the supply-side version of the same dynamic. When physical bottlenecks — power interconnection queues, transformer shortages, specialized labor constraints — slow data center deployment, companies don’t scale back their ambitions. They work around the constraints, building behind-the-meter power generation, duplicating capacity, absorbing inefficiency rather than reconsidering the underlying bet. “Elongation,” the report calls it: the buildout stretches, costs rise, and the gap between capital committed and capacity online widens — but the commitment itself holds, because no one wants to be the company that blinked.

The big risk in an elongation scenario, the institute argues, is that bottlenecks prove severe or persistent enough to shift the buildout narrative. When enough projects slip at once, the focus suddenly shifts to whether they can actually succeed within the timeline. “At that point, elongation begins to function as a feedback loop, one in which supply-side friction introduces demand-side doubt, potentially leading to deferred or downsized investment plans.” That being said, the institute finds the current environment closer to the base case than the stress case, “though the buffer is not wide.”

It hardly needs to be said that insecurity and fear are the stuff bubbles are built on. Jitters over an AI bubble in 2025 appeared to deflate with the successful release of Google’s new Gemini model and the growing influence of Anthropic’s Claude, but Goldman’s point is that we are not out of the woods yet.

The jobs didn’t disappear

One area where Goldman explicitly revises its prior pessimism is jobs — though not in the direction AI boosters would prefer.

The firm’s macro team found that while AI has measurably reduced hiring in substitution-heavy occupations — telephone operators, insurance claims clerks, billing processors — it has modestly increased employment in augmentation-heavy fields like engineering and operations management. The net drag: roughly 16,000 jobs per month, and a 0.1 percentage point bump to the unemployment rate. Goldman’s baseline projects that AI could ultimately displace 6% to 7% of jobs as adoption broadens over the next decade — meaningful, but nowhere near the “AI will replace 50% of jobs” headlines that have dominated public discourse.

Those headlines “will most likely persist,” Covello notes drily, “as that drives clicks and views.”

The job finding is, in its own way, a symptom of the same broader problem: AI has been most effective at the margins, augmenting what workers already do rather than replacing them wholesale or generating the sweeping productivity gains that would justify the spending. Consumer adoption has been spectacular — Goldman acknowledges it was too conservative on this front, noting that generative AI reached roughly 53% adoption within three years, faster than the personal computer or the internet at comparable stages. But 95% of those users are on free tiers. The consumer enthusiasm has not translated into enterprise economics.

Something has to give

Goldman’s investment conclusion — go long hyperscalers, underweight semis — is a quiet repudiation of the trade that has defined markets for the last two years. The picks-and-shovels play, in other words, may be over. Covello echoed Fortune contributor Jeffrey Sonnenfeld, Lester Crown Professor of Leadership Practice at the Yale School of Management and founder of the Yale Chief Executive Leadership Institute, who recently argued that “data infrastructure” is the key differentiator for AI scale going forward.

What actually needs to happen, per Covello, is more mundane than most AI coverage would suggest. Data needs to be structured properly — many AI agents today are being built on top of siloed, misaligned databases that make good outputs impossible. Workloads need to be orchestrated so that expensive frontier models aren’t being deployed to answer questions a cheaper model could handle. Small language models, fine-tuned on domain-specific data, need to keep displacing the large general-purpose models that dominate headlines but often underperform in practice.
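The workload-orchestration idea can be sketched in a few lines. The routing heuristic and model-tier names below are hypothetical illustrations of the principle, not anything described in the Goldman reports:

```python
# Minimal sketch of workload orchestration: send a request to a cheap,
# domain-tuned model unless a crude heuristic flags it as complex enough
# to justify an expensive frontier model. The heuristic and tier names
# are illustrative assumptions.
def estimate_complexity(prompt: str) -> float:
    """Crude proxy: longer, multi-part prompts score higher (0.0–1.0)."""
    steps = prompt.count("?") + prompt.count("\n")
    return min(1.0, len(prompt) / 2000 + 0.1 * steps)

def route(prompt: str, threshold: float = 0.5) -> str:
    """Return which model tier should handle the prompt."""
    if estimate_complexity(prompt) >= threshold:
        return "frontier-model"  # expensive, general-purpose
    return "small-model"         # cheap, fine-tuned on domain data

print(route("What is our refund policy?"))  # → small-model
```

Even a rough router like this captures the cost logic Covello describes: most enterprise queries are simple enough for the cheap tier, so defaulting everything to a frontier model pays premium prices for no marginal benefit.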

The logic is asymmetric. If enterprise ROI eventually materializes, hyperscaler stocks — currently priced with deep skepticism baked in — have significant room to run. If ROI continues to disappoint, hyperscalers will cut capex and see a cash-flow-relief rally regardless. The semiconductors, by contrast, are priced for a world in which the arms race never ends, and the returns never arrive — a world that, per Goldman’s own analysis, cannot persist indefinitely.

[This report has been updated to clarify that the Goldman Sachs Global Institute is not a part of the bank’s research arm and is not issuing forecasts of any kind, and to modify the headline accordingly.]
