Mapping AI’s Next 24 Months
Laying out the map
This week a widely read synthesis of four essays presented a single, joined‑up view of what will determine AI’s trajectory over the next 18–24 months. The document treats the coming period as a tight window of tests: will firms turn pilots into scaled value? Will grids and supply chains keep pace with the compute appetite? Can markets and governments adapt to exponential change?
The answer, for now, is mixed. The technology stack is exploding in diversity—large frontier models, permissively licensed open weights, small models built for phones and robots—while the physical and institutional systems that let companies and countries deploy AI at scale are under acute stress. Those stresses define the likely winners and losers of 2026.
Enterprise adoption and the productivity inflection
Adoption is already widespread: surveys show a large majority of organisations use AI in at least one function. Yet only a minority report clear, measurable value creation today. The pattern looks like previous general‑purpose technologies: a handful of early leaders—banks, software platforms and a few cloud‑native firms—capture big gains first while the majority rewire processes, governance and skills.
What makes the next 12–18 months critical is the shape of that diffusion. Several large enterprises now publish concrete returns from multi‑year programmes: they reorganised data access, built internal platforms and ran the hard integration work before reaping benefits. If these case studies multiply, a classic adoption inflection could arrive in 2026, shifting AI from pilot stage to broad productivity growth.
But there are countervailing dynamics. Employees already reach for consumer AI on personal devices; informal use can accelerate adoption but also creates governance and security holes. Boards are betting heavily—most firms increased AI budgets in the past year and plan further increases—which means expectations for visible ROI will be high and the political pressure on CIOs intense.
Revenue, usage and the token economy
Commercial revenues for generative AI have surged. Estimates put the sector at tens of billions of dollars and growing at rates comparable to early cloud adoption. API‑driven services are the fastest segment: businesses pay for compute and model access, and the rise of agentic, multi‑step workflows has driven per‑user token consumption far above simple chat interactions.
That combination of broadening adoption and heavier workloads can lift revenues even while most companies are still only beginning to operationalise these capabilities. Buyers will increasingly manage costs with model routing and hybrid architectures, but providers who own the high‑capacity inference path stand to capture the most value.
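To make the routing idea concrete, here is a minimal sketch of a cost‑aware model router. Everything in it is illustrative: the model names, the per‑token prices, the character‑based token estimate and the complexity heuristic are hypothetical assumptions, not figures from the essays discussed above.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    usd_per_1k_tokens: float  # hypothetical blended input/output price


# Hypothetical tiers: a cheap small model and an expensive frontier model.
SMALL = Model("small-onprem", 0.0002)
FRONTIER = Model("frontier-api", 0.0150)


def estimate_tokens(prompt: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(prompt) // 4)


def route(prompt: str, agentic_steps: int = 1) -> Model:
    """Send short, single-step requests to the cheap model;
    long or multi-step (agentic) work to the frontier model."""
    if agentic_steps > 1 or estimate_tokens(prompt) > 500:
        return FRONTIER
    return SMALL


def estimated_cost(prompt: str, agentic_steps: int = 1) -> float:
    """Agentic workflows multiply token consumption per user request,
    which is why they dominate per-user spend."""
    model = route(prompt, agentic_steps)
    tokens = estimate_tokens(prompt) * agentic_steps
    return tokens * model.usd_per_1k_tokens / 1000
```

A production router would also weigh latency, privacy constraints and per‑model quality, which is where the hybrid architectures mentioned above come in; the point of the sketch is only that the same request can carry very different costs depending on where it lands.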
Energy and the physical scaling wall
Among the constraints called out most emphatically is energy. Building a data centre is fast compared with the decade‑plus lead times that many grid and interconnection projects need. In several important markets interconnection queues stretch for years, forcing data‑centre builders to consider behind‑the‑meter generation, dedicated hydrogen or gas peakers, or new solar-plus‑storage projects.
The practical consequence is that compute capacity will increasingly chase available, resilient energy rather than simply low latency to customers. Regions that can expand generation quickly, whether with stranded assets or rapidly deployable renewables, will attract large clusters; regions with slow grid reform risk losing out. The upshot is a physical geography of compute rather than an abstract market: energy availability will shape where the largest AI installations land.
Hardware wars and the GPU question
The chip supply story matters again. Competition between incumbents and challengers in AI accelerators is sharpening: new GPU families and purpose‑built accelerators are central to cloud offerings and sovereign strategies. One major competitor has shipped accelerators that pressure the market leader on price and performance, and large cloud customers are already hedging their bets across suppliers.
That matters because the depreciation and replacement cycle for accelerators determines capital intensity and strategic timing. If high‑end accelerators remain productive for many years, replacement cycles ease and capital pressure falls; if demand for training and inference outstrips supply, prices and margins shift and smaller players get squeezed. Server chassis, cooling and supply‑chain integration, recently consolidated through acquisitions, are now part of platform competition, not just semiconductor roadmaps.
Model diversity: open, small and national
Model supply is no longer a two‑player game. In 2025 several frontier models launched alongside a flourishing open‑weight ecosystem and a maturing class of small, on‑device models. The practical result is choice: organisations can pick closed, cloud‑hosted frontier models; locally hosted open weights; or lightweight models tuned for latency, privacy and offline operation.
This diversification has three immediate effects. First, it lowers entry barriers for companies that need on‑prem or private inference. Second, it decentralises innovation, letting academic labs and smaller vendors contribute breakthroughs without massive training budgets. Third, it complicates governance: different models come with different failure modes, licensing terms and geopolitical associations.
Sovereign stacks and geopolitical fragmentation
Policy and capital are rearranging the global stack. Nations and blocs increasingly treat compute and model capabilities as strategic infrastructure to be stewarded, not only regulated. Large states and wealthy sub‑national investors are funding regional clusters, and alliances are forming around preferred hardware and software suppliers.
The likely intermediate outcome is a splintered landscape of US‑aligned, China‑aligned and non‑aligned stacks. Mid‑sized nations face a difficult choice: adopt a foreign stack and accept the dependencies it brings, or build expensive domestic capabilities and risk lagging in economic adoption. Multinational pooling and minilateral consortia are one plausible mitigation, but political friction and commercial incentives will make coordination hard.
Trust, utility and the social bargain
There is a growing social tension: adoption and familiarity are surging even as public trust softens. Large shares of the public report worry about AI's effects even as they rely on AI tools for coding, writing and decision support. This creates a brittle social bargain: utility today in exchange for exposure to unknown risks tomorrow.
How institutions, platforms and regulators manage that bargain will shape uptake. Transparency, verifiable guardrails, and realistic communications about capabilities and limits will influence whether regulation becomes enabling or prohibitive. Absent credible governance, fragmentation and public backlash could slow diffusion and concentrate value in those who can operate with the fewest constraints.
What to watch over the next 24 months
Several concrete markers will decide whether the period becomes an inflection or a pause. First, enterprise ROI signals: do a meaningful share of large firms publish measurable productivity gains beyond isolated case studies? Second, energy and interconnection reforms: can grid operators and permitting regimes cut lead times where clusters are planned? Third, hardware supply: do accelerators remain scarce or does production scale? Fourth, the spread of open weights and locally deployable models: do they materially displace cloud dependency for regulated and privacy‑sensitive workloads?
Finally, geopolitical moves matter: new multilateral compute projects, sovereign investments, and export controls could redraw who has practical access to frontier capabilities. The interplay of economics, physics and policy will determine whether the next two years lock in a few dominant platforms or broaden the technological commons.
In short, the coming 18–24 months are not merely another product cycle: they are a test of whether organisations, grids and governments can adapt to the pressure points that rapid AI progress has already exposed. Observers and decision makers should watch the three linked systems—technology, infrastructure, and institutions—because success requires all three to scale together.