By Mani Vadari, President, Modern Grid Solutions
Last month’s WoMM explored the utility perspective on data center growth—why planning timelines are long, why reliability can’t be rushed, and how speculative load requests can distort grid investment. This month, we’re flipping the lens. What does this surge in demand look like from the developer’s side? What drives their urgency, and how do they navigate the complex ecosystem of siting, permitting, and procurement? By examining both viewpoints, we aim to foster better collaboration, clearer expectations, and smarter infrastructure decisions.
Let’s start with the obvious: Data centers are not new. But as more applications move to the cloud, data centers now serve as the cloud itself, anchoring today’s digital infrastructure. They house the computing power behind everything from processing to storage, and they do it faster, better, and cheaper than on-premises alternatives. Recently, they have also become the new frontier of artificial intelligence and machine learning. This frontier isn’t just about innovation; it’s about competitive advantage. Major companies worldwide aren’t just building facilities; they’re building the future and, in doing so, trying to differentiate themselves, leverage AI to improve productivity, and deliver better products and services.
And the race is on. Thanks to tech giants like Amazon, Microsoft, Google, Oracle, Apple, Meta, and Nvidia, the U.S. currently leads in cloud and AI dominance. But this is not only a global race; it’s a race against time. There are no second-place winners. Countries like Saudi Arabia and China are aggressively courting data center investment, offering speed, incentives, and infrastructure. In the time it takes to energize a facility in the U.S., other countries offer the opportunity to launch, scale, and capture market share much faster.
At its core, a data center is a facility that houses servers, storage systems, and networking equipment. But not all data centers are created equal:
AI data centers are unique. During training (learning mode), they consume enormous amounts of power, requiring hundreds to thousands of megawatt-hours. Once models are deployed (operating mode), consumption drops, and the flexibility to manage demand increases significantly. This variability is critical, and it’s often misunderstood by utilities accustomed to designing systems for peak consumption.
Before a single server is installed, developers are deep in analysis, navigating a maze of technical, regulatory, and logistical hurdles that extend far beyond electricity.
Electricity is just one piece, but it’s often the bottleneck. Building generation and the required transmission lines to deliver the energy can take several years and cost developers millions. That’s why developers talk to multiple utilities. It’s not about gaming the system; it’s about hedging risk in a high-stakes race. Developers are also exploring alternative power sources, such as fuel cells, battery storage, and small modular reactors (SMRs). Companies like Bloom Energy, NuScale, GE Vernova, Oklo, and others are rushing to supply data center energy demands.
While cost is always a concern, the biggest constraint for developers is time. Once construction starts, a data center can reach operation in under 15 months, and speed to market is a competitive advantage. If a hyperscale AI facility takes 36 months to energize in the U.S. but less than 18 months in India or China, that’s not just a delay. It’s a lost opportunity.
Yes, there’s hyperbole about U.S. competitiveness. But the risk is real. If utilities can’t meet energy requirement timelines or if local constraints hinder data center development, operators will go where power is available. That’s not a threat; it’s a business reality.
Speed isn’t simply a metric. It moves markets. And in this race, every month counts. This urgency is reshaping corporate priorities. CEOs are stepping aside to focus on AI strategy, delegating day-to-day operations to others. And this future runs on electricity: lots of it, and fast. For developers, the challenge isn’t just securing power, telecom, and water; it’s securing them on their timeline.
Here’s the part utilities need to hear: developers don’t always need gigawatts on day one. Many are willing to start with a few hundred megawatts and ramp up over time. That gives utilities time to plan, secure approvals, and build infrastructure. It gives communities time to engage, and developers time to prove demand.
This flexibility is especially important for AI data centers, which operate in distinct phases. During training, GPUs and AI accelerators draw enormous amounts of power, often pushing toward their thermal design limits. But during communication or inference phases, power usage drops dramatically. These swings aren’t technical quirks. They can ripple across the data center and the grid, risking instability or mechanical failure if not properly managed.
This is not the problem of one data center or another; it is industry-wide. To resolve this, developers are advocating for co-design — aligning software, hardware, and infrastructure to ensure AI systems remain scalable and power-aware. Techniques like staggered scheduling, asynchronous training, and overlapping compute and communication can help mitigate power spikes without compromising performance.
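To make the staggered-scheduling idea concrete, here is a minimal, purely illustrative sketch. It models each training job as a square-wave load (a high-power compute phase followed by a lower-power communication phase) and compares the aggregate peak when jobs spike in lockstep versus when their phases are staggered. All numbers (job count, phase lengths, wattages) are hypothetical and chosen only to show the shape of the effect.

```python
# Illustrative sketch, not from the article: staggering the compute phases
# of identical AI training jobs lowers the aggregate peak power draw.

def job_power(t, offset, period=10, compute_len=7, high=1200.0, low=300.0):
    """Square-wave power draw for one job: 'high' watts during the compute
    phase, 'low' watts during the communication phase, repeating each period."""
    phase = (t - offset) % period
    return high if phase < compute_len else low

def aggregate_peak(offsets, horizon=100):
    """Peak of the summed power profile across all jobs over the horizon."""
    return max(sum(job_power(t, off) for off in offsets) for t in range(horizon))

n_jobs = 8
synchronized = [0] * n_jobs                             # all jobs spike together
staggered = [i * 10 // n_jobs for i in range(n_jobs)]   # starts spread over one period

print(f"synchronized peak: {aggregate_peak(synchronized):.0f} W")
print(f"staggered peak:    {aggregate_peak(staggered):.0f} W")
```

In this toy model, synchronization forces the facility (and the utility) to provision for every job drawing its maximum simultaneously; staggering trades nothing in total work done but flattens the combined profile, which is the intuition behind the scheduling techniques named above.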
As one recent paper notes, “Power swings visible at the rack, data center, and grid levels risk grid instability and mechanical failure.”ii The stakes are high, and addressing them requires trust. If utilities treat every request as speculative or transactional, they risk missing real opportunities to grow demand. Developers aren’t asking for shortcuts; they’re asking for clarity, flexibility, and partnership.
Utilities are also right to be cautious. Reliability matters. But the relationship with data center developers and operators needs to evolve from transactional to collaborative. Both sides face pressure to deliver, and both have valid concerns. But alignment starts with understanding.
Developers aren’t asking utilities to bend the rules of the grid. They’re asking for a handshake to build it together—one that acknowledges risk, manages supply chains, and adapts quickly. That kind of partnership requires trust, transparency, and shared commitment.
To move from pressure to partnership, we offer these critical calls to action:
We’re at the dawn of the AI revolution, still only scratching the surface of the ‘art of the possible’. Consider the rapid evolution of AI chips: Nvidia’s GPU power consumption surged from 250W to 700W per chip in just two years. Their upcoming Blackwell generation boosts power consumption even further, with the B200 consuming up to 1,200W, and the GB200 expected to consume a staggering 2,700W.
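The per-chip figures quoted above compound quickly at the facility level. The back-of-the-envelope sketch below uses only the wattages from the paragraph; the eight-chip server size is an illustrative assumption, not a figure from the article.

```python
# Quick arithmetic on the per-chip power figures quoted above.
# The 8-chip server configuration is a hypothetical example.
generations = [
    ("two generations ago", 250),   # W per chip
    ("previous generation", 700),
    ("B200", 1200),
    ("GB200", 2700),
]

baseline = generations[0][1]
chips_per_server = 8  # assumption for illustration

for name, watts in generations:
    growth = watts / baseline
    server_kw = chips_per_server * watts / 1000
    print(f"{name}: {watts} W/chip ({growth:.1f}x baseline), "
          f"{server_kw:.1f} kW per {chips_per_server}-chip server")
```

Under these assumptions, a single server’s draw grows from 2 kW to over 21 kW across the generations listed, more than a tenfold increase per chip, which is why chip roadmaps now drive grid planning conversations.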
The call to action is happening now. Strategic, methodical planning is essential to ensure we’re ready, not just for what’s next, but for what’s coming fast. This article, together with the September issue of WoMM, explores how utilities and data center developers/operators both face immense pressure to deliver, and both have valid concerns. But alignment starts with understanding. By unpacking the realities on each side, we hope to spark more productive conversations, accelerate responsible buildouts, and ensure that the future of digital infrastructure is powered by trust, not just transmission.
i Scott Guthrie (Microsoft Executive Vice President, Cloud + AI), “Inside the world’s most powerful AI datacenter,” The Official Microsoft Blog, Sep. 18, 2025. https://blogs.microsoft.com/blog/2025/09/18/inside-the-worlds-most-powerful-ai-datacenter/
ii Multiple authors, “Power Stabilization for AI Training Datacenters,” Aug. 2025. arXiv:2508.14318v2. https://arxiv.org/abs/2508.14318v2
iii https://www.opencompute.org/
iv Beth Kindig, “AI Power Consumption: Rapidly Becoming Mission Critical,” Forbes.com, June 20, 2024. https://www.forbes.com/sites/bethkindig/2024/06/20/ai-power-consumption-rapidly-becoming-mission-critical/
©2024 Modern Grid Solutions