I. Introduction
2025 was the year “compute” broke into the mainstream. At the start of the year, the trillion-dollar compute buildout whispered about in San Francisco was still invisible to most of the world. On January 21, 2025, that changed: the $500 billion Stargate project was announced, and the race for the AI future began in earnest.
As we enter 2026, it is obvious that we will not stop building for the AI future anytime soon. In hindsight, we can see that the narratives questioning the buildout, from Chinese competition to slowing model progress, were mere blips. More than a third of economic growth in 2025 came from AI, and that share is the smallest it will be for a long time. Directionally, America is about as reliant on AI as Saudi Arabia is on oil.
One school of thought goes: given how important AI is, it seems obvious that we will need some ability to hedge against it – to protect against a world where AI underdelivers, or overdelivers, on its promises. I call the process of developing this ability to hedge against AI “financialization”.
Yet Silicon Valley thinks otherwise: everyone is fully long AI, and will deal with unexpected shocks as they come. Google and Microsoft had no plans to hedge against memory prices, and similarly, it seems that no one in the Valley has plans to hedge against compute prices (yet).
On the one hand, Wall Street thinking might have you believe that the AI bubble is about to pop, and that the economy is bearing significant risk. Yet there seems to be no way to “hedge” the asset underlying the buildout: compute, and more specifically, the price of a compute-hour. To hedge at the scale needed, one requires a vibrant futures and derivatives market where one can easily make bets on the future price of compute,1 and we are far away from having one.2 What gives?
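To make the mechanics of such a hedge concrete, here is a toy sketch of how a buyer of compute could lock in a price with a futures contract, if one existed. Every number (the hours, the prices, the contract itself) is an illustrative assumption, not something from this piece.

```python
# Toy sketch of a long hedge on the price of compute ($/GPU-hour).
# The futures contract here is hypothetical: no such exchange-traded
# instrument exists today, and all numbers are illustrative.

def hedged_cost(spot_at_delivery: float,
                locked_futures_price: float,
                gpu_hours: float) -> float:
    """Buy the compute at spot, but settle the futures position in cash:
    the gain/loss on the futures offsets the move in spot, so the
    effective cost is (roughly) the locked-in futures price."""
    physical_cost = spot_at_delivery * gpu_hours
    futures_pnl = (spot_at_delivery - locked_futures_price) * gpu_hours
    return physical_cost - futures_pnl  # == locked_futures_price * gpu_hours

if __name__ == "__main__":
    hours = 1_000_000          # planned GPU-hours next quarter (made up)
    locked = 2.40              # hypothetical futures price, $/GPU-hour
    for spot in (1.80, 2.40, 3.20):
        print(f"spot ${spot:.2f}/hr -> effective cost ${hedged_cost(spot, locked, hours):,.0f}")
```

Whatever the realised spot price, the effective cost stays at the locked-in $2.4M – which is exactly the kind of protection neither side of today's compute market can currently buy at scale.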
II. Liquid Gold
In many ways, compute markets today mirror the pre-1970s oil markets. The most important way in which this comparison holds is in the structure of the firms. The oil majors (often called the “seven sisters”) were vertically integrated behemoths. They owned virtually all crude production, refining capacity, tanker fleets, and final distribution to consumers and enterprises.3
[Chart: The Seven Sisters' vertical integration, c. 1949–50. Denominators vary: pipelines and refining exclude the Communist bloc and North America; crude production excludes only the US; the tanker fleet is global and privately owned only. The pattern is consistent regardless: the majors controlled the full value chain. Sources: OIES Insight 68 (2020); McNally (2020), citing the U.S. FTC's 'International Petroleum Cartel'.]
Consolidation and integration in the oil business were driven by its boom-and-bust nature. Crude oil prices fluctuated heavily due to the discovery or depletion of oil reserves. The only way to maintain profits in periods prone to overproduction (say, the discovery of a new reserve) was to cartelize, to artificially reduce production of oil. Sectors downstream of crude oil production, like refining and transportation, valued price stability, so they were willing to collude too. These downstream sectors were already concentrated (due to natural economies of scale), so it was easier to apply market pressure through them.
The (unstable) equilibrium for the oil industry, therefore, was for refiners, railroads/pipelines, and large producers to collude and limit crude oil production. This was done by restricting refineries from procuring oil from non-allied producers, or by securing preferential rates from railroads/pipelines for transporting allied oil. This collusion eventually turned into formal vertical integration. Standard Oil ruthlessly fought for the scale, consolidation, and vertical integration it achieved in the United States. Internationally, this was much easier: there were often government-licensed monopolies.
In a similar way, the hyperscalers and frontier labs are horizontally consolidated and vertically integrated across chips, datacenter infrastructure, models, and the sale of products to enterprises and consumers.
[Chart: Hyperscaler & frontier-lab concentration, c. 2024–25. Data-centre capacity: share of the US under-construction pipeline already pre-leased (CBRE). LLM token generation: proprietary-model share on OpenRouter, Nov 2024–Nov 2025 average. Semiconductor spend: proxy calculated as cloud/shared-infrastructure share (IDC, 84.1%) × big-3 cloud concentration (Synergy, 63%).]
Like crude oil production of old, the barriers to entry for building datacenters are relatively low – both activities come down to access to land, to the key input (oil reserves then, energy supply now), and to the downstream buyers (refineries then, hyperscalers/frontier labs now). Therefore, we have seen many competitive neoclouds – Crusoe, CoreWeave, Fluidstack, Nebius, etc. – emerge, even if the hyperscalers continue to command a significant share of datacenter ownership.
[Chart: Data-centre capacity by ownership – global live and planned capacity (MW). 'Big Tech' = Amazon, Microsoft, Alphabet, Meta. 'New Players' = companies with zero live capacity today. 'Finance' = financial services, insurance, or banking (BICS). 'The Rest' = all other owners with some live capacity. Bloomberg compiled the top 100 owners by live and planned MW, mapped each to its ultimate parent, and manually categorised them; the review covered ~73% of global live and ~74% of planned capacity. Source: Bloomberg News analysis of DC Byte data.]
However, the sectors downstream of datacenters, like developing frontier models, are very concentrated. As has been repeatedly pointed out, scaling laws mean that training future models will require contracted yearly datacenter capacity on the order of hundreds of billions of dollars, which forces consolidation: there are only so many 12-figure-plus projects that capital can fund. Therefore, at a minimum, we are in a many-to-few market, where many datacenter providers sell to a few frontier labs.
Markets may not be necessary at all, given the consolidation and vertical integration between chip providers, datacenters, and frontier labs.4 Google already has its own chip programme; OpenAI, Meta, and xAI all seem to be working on one, and Amazon’s chip programme has increasingly become Anthropic’s. Moreover, the hyperscalers, Meta, and xAI already build and operate their own datacenters, while OpenAI and Anthropic increasingly work with Oracle and Fluidstack respectively. All this means that there are very clear, vertically-integrated Anthropic, OpenAI, Google, xAI, and Meta ecosystems developing.5
This vertical integration means that most buying and selling of the commodity happens internally, removing the influence of market prices that might otherwise cause volatility. Even in a more open, many-to-few market, the overwhelming pricing power one side commands sets up a price-maker/price-taker dynamic that leads to stable prices, since stability is in the interest of the price-maker – volatility simply makes it harder to plan for the future.
This can be seen very clearly in oil, where Standard Oil’s monopoly on refining significantly reduced volatility in crude oil prices:
[Chart: US crude oil prices, 1859–1933 – annual min–max spread ($/barrel, nominal). Bars show the spread between the lowest and highest monthly price each year; percentages are the average annual min-to-max swing within each era. Source: McNally, Crude Volatility (2017); data originally from Derrick's Hand Book of Petroleum.]
But it can also be seen in compute:
[Chart: H100 SXM rental price, forecast vs. realised – 1-month rental ($/hr).]
| Forecast error | Apr 24 | May 24 | Jun 24 | Jul 24 | Aug 24 | Sep 24 | Oct 24 | Nov 24 | Dec 24 | Jan 25 | Feb 25 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| % actual vs forecast | +0.3% | +1.1% | +2.3% | +2.4% | +1.0% | -0.4% | -0.2% | -2.4% | -2.9% | -2.1% | -1.8% |
Forecast produced April 2024. Errors remained within ±3% over the entire horizon — remarkably tight for a market this new. Source: SemiAnalysis AI Cloud TCO Model.
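To be explicit about what those percentages mean: they are the signed relative error of the realised rental price against the April 2024 forecast. A minimal sketch of the calculation, with placeholder $/hr values (the chart's underlying prices are not reproduced here):

```python
# How the forecast-error percentages above are computed: the signed relative
# error of the realised 1-month rental price against the April 2024 forecast.
# The $/hr values below are placeholders, not SemiAnalysis data.

def forecast_error_pct(actual: float, forecast: float) -> float:
    """Positive when the realised price came in above the forecast."""
    return (actual - forecast) / forecast * 100

# e.g. a hypothetical month forecast at $2.50/hr that realised at $2.44/hr:
print(f"{forecast_error_pct(actual=2.44, forecast=2.50):+.1f}%")  # -> -2.4%
```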
On the supply side, compute is a far less volatile business than oil. The cost of chips/hardware, the largest cost for a datacenter, has mostly been constant and predictable – everything in the semiconductor supply chain has long lead times, so barring black swan events like COVID-19, supply is mostly predictable in the short term. Electricity prices are the other potentially volatile line item, though they constitute less than 10% of the total cost of ownership of a datacenter. The demand side is harder to forecast.6 In any case, many neoclouds are protected from demand fluctuation by long-term leases or backstops and guarantees.
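As a rough illustration of why electricity stays under 10% of a datacenter's total cost of ownership while hardware dominates, here is a toy TCO sketch. Every figure in it is an assumption chosen only to show the shape of the calculation, not a number from this piece or from SemiAnalysis.

```python
# Toy total-cost-of-ownership sketch for one GPU server over its useful life.
# All inputs are illustrative assumptions; the point is the shape of the
# breakdown (hardware dominates, electricity is a single-digit share),
# not the specific values.

HOURS_PER_YEAR = 8760

def tco_shares(server_capex: float,        # GPU server cost, $ (chips dominate this)
               facility_capex: float,      # allocated shell, cooling, networking, $
               avg_power_kw: float,        # average draw incl. cooling overhead, kW
               price_per_kwh: float,       # electricity price, $/kWh
               lifetime_years: float) -> dict:
    electricity = avg_power_kw * HOURS_PER_YEAR * lifetime_years * price_per_kwh
    total = server_capex + facility_capex + electricity
    return {
        "hardware": server_capex / total,
        "facility & other": facility_capex / total,
        "electricity": electricity / total,
    }

if __name__ == "__main__":
    shares = tco_shares(server_capex=250_000, facility_capex=60_000,
                        avg_power_kw=8.0, price_per_kwh=0.08, lifetime_years=5)
    for item, share in shares.items():
        print(f"{item:>16}: {share:.0%}")   # electricity lands around 8% here
```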
III. Will Compute Financialize?
Oil is perhaps the most financialized commodity today. It has a vibrant futures and derivatives market, and all sorts of stakeholders – from hedge funds to airlines – use it to speculate or hedge. Today, it is the most traded commodity by derivatives turnover. Yet pre-1970s, oil worked much like compute does today – prices were administered by suppliers rather than determined in real time through market forces.
What caused financialization? Predominantly, it was the oil shocks in the 1970s.7 In 1971–1973, a cartel of (mostly) Middle Eastern countries, known as OPEC, pressured oil producers in the region to increase the administered prices. This decoupled the incentives driving crude oil production from those driving refining – before, both were set by the seven sisters. OPEC sensed that it held leverage over the vertically integrated oil producers in the region, and so took the opportunity to increase its members’ profits – the countries enjoyed a percentage of the total proceeds of oil from the region. Exacerbating the price increases was an oil embargo imposed on the United States during the Yom Kippur War, causing chaos in the markets.
Speculation and panic over the price increases and embargoes caused volatility in oil demand, which meant that the administered prices were out of sync with the true, market-clearing price. Therefore, the rarely-used spot markets – open marketplaces where prices are set by real-time transactions – became the better gauge of the true market price. The spot market price often ran higher than administered prices in this period of volatility, incentivizing OPEC to move its administered prices toward the spot prices. This lent spot prices some degree of legitimacy.
Spot markets became even more credible by the 1980s. The 1979 Iranian Revolution and the 1980 Iran–Iraq war caused a sudden disruption in oil supply, causing panic and speculation. Japan in particular was badly affected, as it sourced 20% of its oil from Iran, and was left scrambling to buy oil from wherever it could – the easiest way was through the spot markets. With spot markets once again running hot, they gained even more legitimacy. The pattern repeated with a supply increase from non-OPEC producers in the mid-1980s, effectively ending administered rates and causing a shift toward spot prices and markets. Deliveries of oil often could not be done in real time (transportation and storage are expensive), so people ended up using futures/derivatives – agreements to deliver oil at some price at some future date. This completed the process of financialization.
What did we learn from this? Roughly, for spot markets to form, one needs (a) many independent buyers and many independent sellers (otherwise, incentives favour cartelization or dealing with buyers/sellers individually), and (b) price uncertainty (otherwise, long-term fixed-price deals are better).
For reasons discussed earlier, compute is likely to have neither of these two: compute is trending toward a many-to-few or a few-to-few market, and for structural reasons, it is far more stable than oil.
Given this, it seems like we will not have complete financialization of compute: administered prices will continue.
IV. Final Reflections
Throughout history, all financialized commodities have traced a similar path: volatility, cartelization, and financialization. Volatility is intrinsic to any commodity, whether it is variability in agricultural harvests or the discovery of new oil and mineral deposits. Boom-and-bust prices are bad for business, so the industry cartelizes and fixes prices. Often, this is an unstable equilibrium – companies can decide to defect from the fixed price, or the incentives holding the fixed price together break down. Ultimately, this leads to the formation of spot markets and eventually exchanges for futures and derivatives. One can look at rice in Japan in the 18th century, agriculture in the American Midwest in the 19th century, or oil in the 20th century, as we did.
Must compute be subject to the same cycle? Those betting on the financialization of compute are betting on the stagnation of compute. This is true in two ways:
Betting on the financialization of compute is a bet on scaling laws failing. Scaling laws force consolidation, so as long as they hold, we cannot have the many independent buyers and sellers needed for the financialization of compute. Financialization is therefore, at a minimum, a bet that the current vector driving improvements in AI – scale – will fail.
Betting on the financialization of compute is a bet on the paradigm of compute stagnating. To financialize, a whole legal apparatus needs to be built. Contracts need to be written, and what “compute” itself means has to be defined. At a minimum, this is a bet that “compute” today is broadly similar to “compute” in a couple of years. Building the infrastructure to trade Nvidia GPUs is implicitly a bet that we will continue trading Nvidia GPUs, as opposed to, say, Etched Sohus or quantum computers. In a sense, “compute” needs to be tamed to fit into legal boxes and regulations.
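To make the “legal boxes” point concrete, below is a sketch of the kind of contract specification a compute futures exchange would have to pin down. The fields and values are hypothetical – no standardised contract of this kind exists today – but notice how even a minimal spec hard-codes today's hardware into the underlying.

```python
# Hypothetical sketch of what a compute futures contract would need to define
# before "compute" could trade like oil. Every field and value is illustrative;
# no standardised contract of this kind exists today.

from dataclasses import dataclass

@dataclass(frozen=True)
class ComputeFuturesSpec:
    underlying: str        # what "compute" means: chip, form factor, server config
    unit: str              # the deliverable quantity per contract
    delivery_window: str   # when the capacity must be made available
    location: str          # where: region, facility tier, interconnect constraints
    quality_terms: str     # uptime, networking, attached storage, etc.
    settlement: str        # physical delivery vs. cash settlement against an index
    margin_terms: str      # collateral the clearinghouse requires from both sides

EXAMPLE = ComputeFuturesSpec(
    underlying="NVIDIA H100 SXM, 8-GPU server equivalent",
    unit="1,000 GPU-hours",
    delivery_window="any 30-day window in Q3 2026",
    location="US-West, Tier 3+ facility",
    quality_terms=">= 99% uptime, 3.2 Tbps InfiniBand-class fabric",
    settlement="cash-settled against a published $/GPU-hour index",
    margin_terms="initial and variation margin set by the clearinghouse",
)
```

If the paradigm shifts – a different accelerator, a different unit of account for intelligence – every field above has to be renegotiated, which is precisely the taming problem.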
Financialization also seems to have a “we can never go back” quality. Market forces become far too entrenched in the sector, making financialization irreversible. OPEC spent much of the mid-1980s trying to defend its administered prices – it simply couldn’t, despite the might of entire nation-states. Foreign exchange rates seem to be another example. If some vector of progress requires us to undo financialization, it seems almost impossible to do.
The main “vision of the future” that leads to the financialization of compute is the agent-economy vision: an economy of independent agents transacting with each other and sourcing their own compute. One can imagine this as a million goal-directed Clawdbots running on Mac Minis, transacting with each other through stablecoins. Maybe they keep themselves alive by procuring compute on the fly, or procure it to run copies of themselves.
This might even be a compelling vision (see: Conway). But to bet on this vision being the dominant one is to bet on most compute – or at least intelligence – being run in a decentralized manner. If the vision is taken in its most radical sense, who trains and develops new models in that world? Maybe we can decentralize training runs, but who determines the research direction?8 Absent some new, convincing answer to this question, this seems to be a bet on stagnation.
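If that vision did dominate, the procurement loop for such an agent might look like the toy sketch below. The spot-market quote, the budget, and the settlement step are all hypothetical stand-ins, not a real marketplace or API.

```python
# Toy sketch of the agent-economy procurement loop described above: an agent
# keeps itself alive by buying compute off a spot market, step by step.
# The quote source, prices, and budget are hypothetical; nothing here
# corresponds to a real marketplace API.

import random

def spot_quote() -> float:
    """Stand-in for querying a hypothetical compute spot market ($/GPU-hour)."""
    return round(random.uniform(1.5, 3.5), 2)

def run_agent(budget_usd: float, gpu_hours_per_step: float, max_price: float) -> int:
    """Run until the agent can no longer afford compute; return steps completed."""
    steps = 0
    while True:
        price = spot_quote()
        if price > max_price:
            continue                     # too expensive: wait for a cheaper quote
        cost = price * gpu_hours_per_step
        if cost > budget_usd:
            return steps                 # can't afford another step: wind down
        budget_usd -= cost               # settle the purchase (stablecoins, in the vision)
        steps += 1

if __name__ == "__main__":
    print("steps survived:", run_agent(budget_usd=100.0,
                                       gpu_hours_per_step=2.0,
                                       max_price=2.5))
```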
Personally, I think we will have an agent economy and it will be a part of the future – just not that most global compute, or even a significant minority, will be allocated to it. We might have niche spot markets, and perhaps even brokered swaps on compute prices, but compute as a whole will not financialize. We will not be waking up each morning and tracking some centralized index to see the price of compute, as we do with the S&P 500.
Finally, this brings me back to the question which began this piece: do we need to hedge compute? In other words, do we need to financialize compute? The answer is probably that it's unnecessary and impractical.
As Dario Amodei said on the Dwarkesh podcast:
“I could buy a trillion dollars of compute that starts at the end of 2027. And if my revenue is not a trillion dollars, if it's even... 800 billion. There's no force on earth. There's no hedge on earth that could stop me from going bankrupt if I buy that much compute.”
Footnotes
1. This is technically possible to do without exchange-traded futures and derivatives, say through bilateral/OTC deals with investment banks or brokers. However, this presents issues like counterparty risk, increased friction, and limited liquidity. Moreover, real-time and transparent price discovery of the underlying commodity (i.e. compute) is needed to make hedges feasible, which again typically requires exchange/marketplace-based solutions. Therefore, any practical solution that can meaningfully scale to cover the large size of compute markets will, as far as I know, likely require exchange-traded futures. ↩
2. There are many startups directly working toward this vision. The clearest match is Ornn, though others like Shadeform, Prime Intellect, SF Compute, and OneChronos all sell a “marketplace” or “exchange” based solution to varying degrees. However, a complete, full-stack implementation of “financialization” still seems far away, as none have a full solution going from spot markets and settlements to actually having a futures exchange with margin provisions. ↩
3. There are two major exceptions to the dominance of the seven sisters: North America, and the Communist bloc. In the latter case, as the economy was state-controlled, there was no scope for the seven sisters to enter. The fragmentation of the North American market was a result of aggressive antitrust enforcement in the wake of Standard Oil, “common-use” of pipelines, and fragmentation of mineral rights among many owners. Moreover, the advantages the seven sisters held through their control of oil tankers were less useful in North America, where pipelines were preferred. The dominance of the seven sisters began to wane internationally from the 1950s as Latin American and Middle Eastern countries began to nationalize or exert greater control over their crude oil production. ↩
4. There are some other interesting dynamics worth mentioning here. Frontier labs and large neoclouds enjoy rebates/discounts on Nvidia chips through the equity stakes, funding, and backstops that Nvidia provides. For example, Nvidia has agreed to buy all of CoreWeave’s spare capacity. This is a way of preserving market share for Nvidia. It is a type of economy of scale, similar to what Standard Oil provided to its preferred oil drillers through the Treaty of Titusville. However, as the frontier labs/hyperscalers take on Nvidia, this avenue might disappear – neoclouds will lose more of their independence as they rely more on frontier labs for their chips. ↩
5. A common thread in the public markets has been the “Nvidia ecosystem” vs. the “Google ecosystem”. This can be generalized to other leaders/frontier labs. ↩
6. The one big exception here has been memory/HBM/NAND prices, which have skyrocketed for 2026 and potentially 2027. However, this doesn’t seem to have impacted neoclouds significantly, as their end-consumers (frontier labs) appear to be price-inelastic and have absorbed all the cost increases. Some other tail-risks may exist – say, a transformer ASIC like Etched’s Sohu being dramatically better than Nvidia for inference, or some algorithmic breakthrough making models far, far more efficient. But on the whole, there is no intrinsic volatility in the business as there is with oil and its contingency on the discovery/depletion of reserves. ↩
7. Some other reasons included the deregulation of spot/futures markets for oil in the United States in the late 1970s, as well as increasing standardization/benchmarking for oil, which came with the formation of OPEC. ↩
8. If the answer is agents recursively self-improving, this probably opens a whole can of worms with respect to a fast takeoff within the frontier labs. To my non-AI-researcher eye, this answer implies that the base intelligence of these agents (scaled over many copies/instances) is capable of recursive self-improvement – and if so, the labs have even more compute to scale it with. Why won’t this recursive self-improvement be captured by the labs? Will the labs even release a model with the base intelligence capable of recursive self-improvement? Many questions to ponder. ↩