Here's the thing about AI right now: it's hungry. Really hungry. Not just for better algorithms or more training data, but for raw compute power. Actual electricity and hardware and cooling systems. And we're running out of normal places to put data centers.
So hyperscalers are getting weird with it. They're launching satellites into orbit. They're building new nuclear reactors. They're pumping billions into liquid cooling. They're even fighting over water in the desert.
I went deep on the wild infrastructure bets shaping 2026. Some of these feel like science fiction. All of them are real money being spent right now.
TL;DR
- AI demand is exploding: $700 billion in hyperscaler spending in 2026 alone, with 75% going to AI infrastructure
- There's no single solution: Companies are hedging across orbital, nuclear, liquid cooling, desert solar, and sovereign European capacity
- The big winners are being decided now: The infrastructure choices made in 2026 will lock in competitive advantages (or disadvantages) for years
1. Orbital Data Centers (Starcloud)
What It Is
Imagine a data center floating in space. That's Starcloud. The company plans to launch 88,000 satellites into orbit carrying GPU hardware, all networked together as a distributed compute cloud. It sounds insane because it kind of is.
Who's Behind It
Starcloud is based in Redmond, Washington and just hit unicorn status. The $170 million Series A was led by Benchmark and EQT Ventures, with follow-on funding from Macquarie Capital, NFX, and a bunch of other serious money. The board includes former Boeing CEO Dennis Muilenburg and retired Air Force General Stephen Wilson. This isn't some scrappy startup bet—it's backed by people who actually know how to build large-scale infrastructure.
How Much Money
$170 million in this round. The company has raised $200 million total and hit a $1.1 billion valuation. That was the fastest any Y Combinator company has ever reached unicorn status, just 17 months after demo day.
Why It Might Work
In November 2025, Starcloud actually did it. They partnered with SpaceX and launched Starcloud-1, a satellite carrying an Nvidia H100 chip. It processed AI workloads in orbit without blowing up or catching fire. They even trained a large language model in space for the first time ever.
Later in 2026, Starcloud-2 launches. This one has 100 times more power generation and carries Nvidia's Blackwell B200 chip, which is the most powerful AI chip in the world right now. It will run actual customer workloads.
The bet is simple: terrestrial data center capacity can't scale fast enough to meet AI demand. Space has infinite real estate and cold temperatures. Why not put compute there?
Why It Might Not
Latency is the obvious killer. Your inference model in orbit still has to talk to Earth, and radio signals only go so fast. There's also the tiny problem that orbital data centers have never run at scale before. One satellite is cool. Eighty-eight thousand? That's uncharted territory.
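How bad is the light-travel delay, really? A back-of-envelope sketch below, with the satellite altitude as an illustrative assumption (not a Starcloud specification) and ignoring ground-network and inter-satellite hops, which add more:

```python
# Round-trip radio latency to a satellite directly overhead.
# Altitudes are illustrative assumptions, not Starcloud specs.

C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def round_trip_ms(altitude_km: float) -> float:
    """Round-trip light-travel time straight up and back, in milliseconds."""
    return 2 * altitude_km / C_KM_S * 1000

leo_ms = round_trip_ms(550)     # a typical LEO altitude: a few milliseconds
geo_ms = round_trip_ms(35_786)  # geostationary orbit: hundreds of milliseconds
```

At LEO altitudes the raw physics is only a few milliseconds, which is why batch and non-real-time workloads are plausible; the real latency cost comes from routing, ground stations, and satellite handoffs on top of that.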
There's also regulatory soup. Space is crowded now. Starlink already has thousands of satellites up there. The FCC and international regulators will have opinions.
And then there's the reliability question. Space is hard. Really hard. One solar flare or debris collision could take out millions of dollars' worth of hardware. Insurance companies are going to hate this business model.
2. Nuclear-Powered Compute
What It Is
Instead of building data centers next to coal plants or hydroelectric dams, companies are going full nuclear. It's not new technology—nuclear has powered data centers indirectly for decades. But now it's becoming a deliberate strategy to solve the power problem.
Who's Behind It
Everyone. Meta signed three deals in January 2026 securing up to 6.6 gigawatts of nuclear power over 20 years. Google partnered with Kairos Power on 500 megawatts of molten salt reactor capacity. Amazon is spreading bets across multiple SMR (small modular reactor) vendors.
Microsoft already cut a 20-year power purchase agreement with Constellation Energy to restart Three Mile Island Unit 1. Yes, that Three Mile Island. The one that had a meltdown in 1979. They're restarting it for data centers.
How Much Money
Meta's deal is worth billions but isn't fully public. Google's commitment to Kairos is significant enough that Kairos received U.S. Nuclear Regulatory Commission approval to build two demonstration reactors in Oak Ridge, Tennessee. These aren't small bets.
Why It Might Work
Nuclear plants generate massive amounts of power 24/7, no weather dependency. One gigawatt from a nuclear plant can power a massive AI data center cluster indefinitely. The math is simple: if you can get nuclear online, you solve your power problem for 20+ years.
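To make "the math is simple" concrete, here is a rough sizing of what one gigawatt buys. The per-accelerator draw and PUE figures are illustrative assumptions, not vendor numbers:

```python
# Rough sizing: how much AI hardware one gigawatt of generation supports.
# Per-GPU draw and PUE are illustrative assumptions, not vendor figures.

plant_w = 1e9        # 1 GW of nuclear generation
pue = 1.2            # power usage effectiveness: facility overhead multiplier
gpu_w = 1_000        # ~1 kW per accelerator, all-in at the server level
rack_w = 100_000     # ~100 kW per modern AI rack

it_w = plant_w / pue             # power left for IT load after cooling etc.
gpus = int(it_w // gpu_w)        # on the order of 800,000+ accelerators
racks = int(it_w // rack_w)      # on the order of 8,000+ racks
```

Even with conservative overhead assumptions, a single gigawatt supports a cluster in the high hundreds of thousands of accelerators, which is why these 20-year deals are worth signing.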
Plus, there's no better answer right now. Solar and wind are intermittent. Fossil fuels have environmental costs. Grid power is increasingly constrained. Nuclear is the only 24/7 power source that scales.
Meta's deal gets them immediate power from three existing reactors plus new capacity coming online by 2032. That's a guaranteed supply while their models train and inference scales.
Why It Might Not
NuScale Power, the only fully certified SMR design in the U.S., has been a disaster. Cost overruns, schedule delays, investor confidence crumbling. The Nevada project they were supposed to build? Dead.
Building new nuclear plants takes forever and costs way more than anyone expects. The first plants won't be online until 2030 at the earliest, and probably later. Meta needs power now, which is why they also grabbed existing reactor capacity.
There's also political risk. Elections change. New administrations can block projects. One regulatory setback and a multibillion-dollar bet evaporates.
And let's be real: public opinion on nuclear is mixed. Even though it's the best zero-carbon option, people get nervous. Protests can delay construction for years.
3. Liquid Cooling at Scale
What It Is
Modern GPUs run insanely hot. We're talking 100+ kilowatts per rack. Air cooling can't cut it anymore. Liquid cooling—pumping liquid directly to the chips or immersing them in liquid—solves the thermal problem and reduces power consumption.
This isn't new technology either. What's new is scale and integration into the core AI infrastructure.
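The 100 kW rack figure above implies serious plumbing. A quick heat-balance sketch shows the coolant flow one rack needs; the temperature rise across the rack is an assumed value, not any vendor's spec:

```python
# How much water does a 100 kW rack need? Classic heat balance:
#   Q = m_dot * c_p * dT  ->  m_dot = Q / (c_p * dT)
# The coolant temperature rise is an illustrative assumption.

rack_w = 100_000     # 100 kW of heat to remove
cp_water = 4186      # specific heat of water, J/(kg*K)
delta_t_k = 10       # coolant temperature rise across the rack, K (assumed)

kg_per_s = rack_w / (cp_water * delta_t_k)   # ~2.4 kg/s of water
liters_per_min = kg_per_s * 60               # ~140+ L/min (1 kg of water ~ 1 L)
```

Well over a hundred liters per minute, per rack, continuously. Multiply by thousands of racks and the pumps, loops, and leak detection discussed below stop being optional.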
Who's Behind It
Trane Technologies (a massive HVAC and cooling company) just acquired LiquidStack, the leading liquid cooling player. That signals how serious this is getting. We're not talking about startups anymore—we're talking about industrial companies making strategic acquisitions.
Every major hyperscaler is either building or integrating liquid cooling systems. Direct-to-chip cooling, immersion cooling, CDU-based systems—there are multiple approaches, but the direction is clear.
How Much Money
The data center liquid cooling market was worth $6.65 billion in 2025 and is forecast to grow at roughly 20% CAGR through 2033. These aren't massive numbers yet, but the growth rate is striking.
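Compounding the 2025 market size at that growth rate gives a feel for the trajectory (a straight extrapolation, not a forecast of my own):

```python
# Projecting the liquid-cooling market: $6.65B in 2025 compounding at ~20%/yr.
# Pure compound-growth extrapolation from the figures in the text.

base_b = 6.65   # USD billions, 2025
cagr = 0.20

def projected(year: int) -> float:
    """Market size in USD billions, assuming constant CAGR from 2025."""
    return base_b * (1 + cagr) ** (year - 2025)

size_2029 = projected(2029)   # roughly doubles by 2029
size_2033 = projected(2033)   # more than quadruples by 2033
```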
Individual companies spend billions integrating these systems, but it's not a separate budget item. It's baked into the $700 billion overall infrastructure spend.
Why It Might Work
Liquid cooling reduces power consumption by 15-25% compared to air cooling. In a hyperscale data center running on a billion-dollar annual power bill, that's massive savings. It also lets you run hotter chips at higher utilization, so you get more work per unit of hardware.
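Putting numbers on "massive savings": at the stated 15-25% reduction on a billion-dollar power bill, even an expensive retrofit pays for itself quickly. The retrofit cost below is a hypothetical figure for illustration:

```python
# What a 15-25% cooling-related power reduction is worth on a $1B/yr bill.
# The retrofit capex is a hypothetical assumption for illustration.

annual_power_bill = 1_000_000_000   # USD, from the text above
low, high = 0.15, 0.25              # stated savings range

savings_low = annual_power_bill * low     # $150M/year at the low end
savings_high = annual_power_bill * high   # $250M/year at the high end

retrofit_cost = 400_000_000               # hypothetical retrofit capex (assumed)
payback_years = retrofit_cost / savings_low  # under 3 years even at the low end
```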
Plus, denser packing. Liquid cooling lets you squeeze more compute into the same physical space. That matters when you're fighting for real estate.
The cooling market is growing because it actually works. The data points are solid. The ROI is clear.
Why It Might Not
Complexity. Liquid cooling systems are harder to maintain and design than air cooling. You're introducing pumps, coolant loops, leak detection. More things to break.
Reliability concerns exist. One pump failure in a closed-loop cooling system can shut down an entire rack in minutes. You need redundancy, monitoring, quick reaction teams.
Coolant is also a consumable cost. You're not just buying the hardware once—you're buying coolant forever. In a 20-year data center lifespan, that adds up.
And standardization hasn't happened yet. Different vendors have different approaches. Lock-in is real.
4. Desert Data Centers
What It Is
Arizona and Utah have become data center havens. Why? Land is cheap, power is abundant (especially solar), and there's something appealing about building in wide-open spaces. The downside? It's a desert. Hot. Dry. Low water availability.
Companies are building data centers there anyway, just betting they can solve the cooling problem with technology.
Who's Behind It
Microsoft, Google, Meta, and every other hyperscaler with Arizona/Utah infrastructure. There are over 150 data centers in Arizona alone. This isn't theoretical—it's already the dominant regional play.
Microsoft specifically committed to only building zero-water data centers in Arizona going forward. Other companies are experimenting with misting systems, evaporative cooling, and heat recovery systems.
How Much Money
Billions. Arizona alone has attracted massive capital. These aren't speculative projects—they're live operations powering production workloads. The capital is already deployed.
Why It Might Work
Real estate costs in Arizona and Utah are a fraction of what you'd pay in Silicon Valley, Seattle, or Boston. Labor costs are lower. Power is reliable and renewable in many cases.
Cooling is solvable with technology. Indirect evaporative cooling works without adding moisture. Misting systems can drop ambient temperatures by 20 degrees. Heat recovery captures waste energy. It's engineering, not magic.
And there's infrastructure built up. Highways, power grids, fiber routes. You're not starting from zero.
Why It Might Not
Water is the real problem. As climate change progresses, Arizona gets drier. Nevada is already in a water crisis. If you're building a data center that uses massive amounts of water for cooling, you're in an untenable position long-term.
That's why Microsoft went zero-water. But zero-water cooling is still more expensive than traditional cooling. You're paying a cost premium to operate in a resource-constrained area.
Heat stress is also real. When ambient temperatures hit 115+ degrees Fahrenheit (which happens every summer), your cooling systems are working at maximum capacity. One heatwave and your infrastructure is at risk.
Plus, environmental politics. Communities are starting to push back on data centers consuming local resources. That opposition can kill projects.
5. Sovereign European AI
What It Is
Mistral AI is France's answer to OpenAI and Anthropic. But instead of relying on US cloud providers, they're building their own data center infrastructure in Europe. The bet: European sovereign compute capacity becomes a strategic asset and a business advantage.
Who's Behind It
Mistral AI, founded by alumni of Meta and Google DeepMind. The company raised $830 million in debt financing from seven European banks to build a facility near Paris. The list of lenders reads like European finance royalty: BNP Paribas, Crédit Agricole, HSBC, Bpifrance, and others.
The facility is being built by Eclairion, a French data center operator.
How Much Money
$830 million for this single facility. It will house 13,800 Nvidia GB300 GPUs, deliver 44 megawatts of power, and be operational by June 2026.
But it's not just this one facility. Mistral also announced a 1.2 billion euro investment in Sweden through EcoDataCenter, targeting 200 megawatts of total European capacity by the end of 2027. This is a massive, multi-country push.
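A quick sanity check on the Paris figures: 44 MW across 13,800 GB300-class GPUs implies a plausible per-GPU power budget once you account for networking, storage, and cooling overhead. The PUE here is an illustrative assumption:

```python
# Sanity-checking the Paris facility: 44 MW across 13,800 GPUs.
# PUE is an illustrative assumption, not a published figure.

site_w = 44_000_000   # 44 MW total facility power
gpus = 13_800         # GB300-class accelerators

w_per_gpu_all_in = site_w / gpus           # ~3.2 kW per GPU, facility-wide
pue = 1.2                                  # assumed facility overhead
w_per_gpu_it = w_per_gpu_all_in / pue      # ~2.7 kW of actual IT draw per GPU
```

Roughly 3 kW per accelerator all-in is in the right range for a current-generation AI cluster, so the headline numbers hang together.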
Why It Might Work
Europe is building regulatory frameworks around AI. The EU AI Act is real. If you can train and run models entirely in Europe, you avoid some compliance headaches. Data sovereignty matters to governments and enterprises.
It's also a geopolitical play. Europe doesn't want to be entirely dependent on US compute infrastructure for its AI development. Mistral is positioning itself as the European AI company with European infrastructure.
Plus, there's first-mover advantage. If Mistral becomes the go-to for European AI training and deployment, they have a defensible market.
Energy is also good in Europe. They have nuclear power, hydroelectric, and renewable capacity. The facility near Paris has solid power economics.
Why It Might Not
Building dedicated infrastructure is expensive. The opex and capex are higher than renting cloud capacity. Mistral has to bet that proprietary infrastructure justifies that cost.
Competition with hyperscalers is brutal. Google, Meta, Amazon—they're all building in Europe too. Why would an AI company choose Mistral's infrastructure over Google Cloud or AWS?
Vendor lock-in is real. Once you've trained models on Mistral's hardware, porting to something else is painful. You're betting Mistral survives and remains competitive. That's a risk.
And there's the catch-22: Mistral's advantage is being European, but most of the cutting-edge AI research and tooling comes from the US. Relying entirely on European infrastructure might mean giving up some access to latest developments.
Comparison Chart
- Orbital (Starcloud): $200M raised, $1.1B valuation; Starcloud-2 launches in 2026; biggest risks are latency and reliability at scale
- Nuclear: Meta at 6.6 GW over 20 years, Google at 500 MW with Kairos, Microsoft restarting Three Mile Island; biggest risk is build timelines (new capacity 2030+)
- Liquid cooling: $6.65B market in 2025, ~20% CAGR; biggest risks are operational complexity and vendor lock-in
- Desert (Arizona/Utah): 150+ data centers in Arizona alone; biggest risks are water scarcity and heat stress
- Sovereign Europe (Mistral): $830M Paris facility plus €1.2B in Sweden, 200 MW of European capacity by end of 2027; biggest risk is competing head-on with hyperscalers
What This All Means
The obvious pattern: no single approach solves the AI infrastructure problem. Hyperscalers are hedging across all five of these bets simultaneously.
Meta is building nuclear while also expanding desert capacity while also exploring alternatives. Google is partnering on both nuclear SMRs and supporting Mistral development (through various channels). Amazon is spreading bets across everything.
Why? Because the demand is so massive and unpredictable that betting on one approach is insane. If orbital doesn't work out, you've still got nuclear. If nuclear faces regulatory delays, you've got desert solar and liquid cooling. It's portfolio management for infrastructure.
The $700 billion being spent in 2026 isn't going to one bucket. It's distributed across all these bets, plus acquisition of existing capacity, plus incremental improvements to existing data centers.
FAQ
Q: Will orbital data centers actually happen?
Probably, but not at scale in 2026. Starcloud-2 launching in 2026 is a real milestone, but moving meaningful compute to orbit takes years. The real question is latency—can you live with the light-travel delay? For some workloads (batch processing, non-real-time inference) the answer is yes. For others, no.
Q: Isn't nuclear too slow to build?
Yes. That's why Meta and Google grabbed existing reactor capacity and signed long-term agreements. New SMRs won't contribute until 2030+. They're betting on a combo of immediate restarts (Three Mile Island) and future capacity (Google's Kairos deal).
Q: How does liquid cooling fit into this?
It's not a replacement for other strategies—it's a complement. Liquid cooling lets existing data centers handle higher GPU density and power consumption. It buys you time while you build new capacity. It's also way cheaper than building entirely new facilities.
Q: Why does Europe matter?
Because 30% of AI research and 25% of data happen in Europe. If you can't train models in Europe, you're dependent on US infrastructure. Mistral's bet is that European sovereignty becomes a selling point, both for the company and for European enterprises.
Q: Which of these will actually win?
All of them. The market is big enough. Orbital handles some niche use cases. Nuclear provides baseload power. Desert facilities run inference at scale. Liquid cooling optimizes existing capacity. Mistral captures European demand. They're not competing—they're complementary solutions to different problems.
The Real Take
AI demand is growing faster than physical infrastructure can scale. That's forcing innovation in places that haven't seen real competition in decades: power generation, cooling systems, satellite networks, and international data governance.
The companies winning right now are the ones making bets across multiple approaches. They're not betting on one solution—they're building a portfolio of bets and executing simultaneously.
In 2027, we'll see which bets paid off and which look like expensive mistakes. My guess? All of them will contribute something, even if some look silly in retrospect. That's what happens when you're trying to solve an unprecedented problem.
The infrastructure decisions being made in March 2026 will shape the compute landscape for 20 years. That's why everyone is swinging so hard.
Stay in the Loop
Subscribe to the CodeBrainery newsletter for weekly updates on infrastructure, AI, and the tools building the future.

