How AI Power Demands and Policy Rollbacks Revived Coal

Simon Haidegger sits down with Desiree Sainthrope, a legal expert in global compliance whose work straddles energy regulation, power markets, and the fast-evolving footprint of AI infrastructure. She explains why data centers are reshaping utility planning, how federal rollbacks and emergency orders intersect with reliability law, and what this all means for health and climate. We cover the chain from interconnection queues to coal unit lifelines, dig into the legal underpinnings of national security exemptions, and explore what transparent metrics and enforceable standards could look like in an era where AI can mean keeping plants online through 2039.

The article links AI’s power surge to delaying retirements at 15 coal plants and 30 units. Can you walk us through how data center requests reach utilities and then grid operators? What load forecasts or interconnection queues show this shift? Which metrics best reveal the tipping point?

A data center developer typically starts with a pre-application meeting at the utility, shares indicative load—often hundreds of megawatts for a hyperscale campus—and requests capacity and a target in-service date. That triggers a distribution or transmission feasibility review and, if transmission is implicated, entry into the regional interconnection queue. In PJM, those clusters are now full of “large load” requests associated with AI, matching the story that one-third of delayed coal retirements are aimed at serving Data Center Alley. You see the shift first in updated load forecasts—Dominion’s service territory shows an 85 percent increase over 15 years—and then in unit commitment models where coal units slated to retire are re-flagged for reliability must-run or extended operation. The tipping points show up in a trio of metrics: year-ahead reserve margins falling below planning targets, non-wires alternatives failing local deliverability tests at critical substations, and a surge of time-sensitive, high-MW interconnection requests that overwhelm near-term transmission upgrade windows.
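
To make that tipping point concrete, here is a minimal sketch, using entirely hypothetical capacity and load figures, of the kind of year-ahead reserve-margin screen planners run as large-load interconnection requests stack up.

```python
# Illustrative sketch (hypothetical numbers): flag when the year-ahead
# reserve margin falls below a planning target as large-load requests are added.

def reserve_margin(installed_capacity_mw: float, peak_load_mw: float) -> float:
    """Planning reserve margin as a fraction of the forecast peak load."""
    return (installed_capacity_mw - peak_load_mw) / peak_load_mw

PLANNING_TARGET = 0.15          # hypothetical 15% planning reserve target
capacity_mw = 28_000            # hypothetical zonal installed capacity
base_peak_mw = 23_000           # hypothetical forecast peak before new load

# Hypothetical queue of large-load (data center) requests, in MW.
large_load_queue_mw = [300, 450, 600, 250, 500]

peak_mw = base_peak_mw
for request_mw in large_load_queue_mw:
    peak_mw += request_mw
    margin = reserve_margin(capacity_mw, peak_mw)
    flag = "BELOW TARGET" if margin < PLANNING_TARGET else "ok"
    print(f"peak {peak_mw:,.0f} MW -> reserve margin {margin:.1%} ({flag})")
```

In practice this screen runs inside far richer resource-adequacy models, but the arithmetic of a queue pushing the margin below target is the same.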

We saw more than 500 coal plants retire from 2010 to 2019, yet nearly 70 plants have now received mercury and soot reprieves past 2027. How did we swing from retirements to extensions? What policy steps or market signals drove that turn? Which timelines or docket filings best illustrate it?

The first era was defined by fuel economics, older unit heat rates, and a suite of standards that tightened costs—so retirements stacked quickly. The current swing adds two forces: rapid, concentrated load from AI and data centers, and federal moves to relax or delay rules on mercury, soot, greenhouse gases, and wastewater. Utilities cite expected rollbacks and the operational urgency of serving new load to justify extensions, while grid operators flag local reliability gaps that can’t be solved before new generation and transmission arrive. Look at integrated resource plans filed early this year—one utility pushed coal conversions to gas out to 2039—and emergency orders stretching oil and coal plants through February. Dockets that pair IRPs with petitions for environmental compliance flexibility are a tell: they show schedule conflicts between construction lead times and the 2027 deadlines, and they document the pivot from planned retirements to multi-year interim operation.

Dominion projects an 85% demand jump in Virginia over 15 years, largely for Data Center Alley. How do planners quantify that growth day by day and season by season? What scenarios did PJM reject or accept? Which specific substation or transmission constraints are most binding?

Planners start with customer-specific load shapes—AI campuses have high, flat baseload with spikes tied to training cycles—then overlay seasonal weather sensitivity for cooling. They run peak forecasts for summer and winter and coincident peaks by zone, convert those into hourly shapes in production cost models, and test N-1 and N-1-1 contingencies. PJM’s acceptance shows up in its long-term forecast acknowledging the 85 percent rise, and in its requests that plants in neighboring states delay retirements; rejections are implicit where projects can’t clear deliverability under near-term transmission limits. The binding constraints are the high-voltage interfaces that feed Data Center Alley and the large substations closest to campus clusters; when those hit thermal or voltage limits, you see redispatch costs jump and retirement delays rationalized as reliability bridges.
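
The deliverability point can be illustrated with a toy N-1 screen for a single load pocket. The element ratings and the area peak below are hypothetical, but the logic (remove one supply element, then check whether the rest still covers the forecast peak) mirrors the core of the test.

```python
# Toy N-1 deliverability screen for a load pocket (hypothetical ratings):
# for each single supply-element outage, check whether the remaining firm
# capacity still covers the forecast area peak.

supply_elements_mw = {          # hypothetical transformer/line ratings
    "500/230 kV transformer A": 1_200,
    "500/230 kV transformer B": 1_200,
    "230 kV line from the east": 800,
    "local generation": 400,
}
area_peak_mw = 2_500            # hypothetical forecast peak incl. new campuses

total_mw = sum(supply_elements_mw.values())
for name, rating_mw in supply_elements_mw.items():
    remaining_mw = total_mw - rating_mw        # element out of service
    verdict = "passes" if remaining_mw >= area_peak_mw else "FAILS"
    print(f"N-1 loss of {name}: {remaining_mw} MW available -> "
          f"{verdict} against {area_peak_mw} MW peak")
```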

Hyperscale sites can top 200,000 square feet and use as much power as 50,000 homes. How does a single campus translate into hourly load curves? What commissioning sequence ramps their draw? Which behind-the-meter strategies—storage, on-site generation, heat reuse—actually move the needle?

Commissioning typically phases from auxiliary loads to partial IT halls, then to full racks; each wave steps up auxiliary cooling and power distribution gear, so the curve ratchets rather than glides. Hourly, you get a persistent baseload with small troughs in the early morning and modest peaks tied to training jobs; the load factor is unusually high. Storage helps when paired with firm commitments—hundreds of megawatt-hours can shave peaks and reduce local transformer stress—but it rarely erases the baseload. On-site generation can matter if it’s truly dispatchable and permitted at scale; otherwise it’s a backup. Heat reuse is promising for district heating, but siting rarely co-locates with ready thermal offtakers, so it’s more an efficiency story than a grid relief valve. The biggest system impacts come from load management agreements that allow curtailment or shifting of non-latency-critical training windows.
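
A rough sketch of that shape, with hypothetical campus and battery parameters, shows both why the load factor is so high and why a few hundred megawatt-hours of storage shaves peaks without erasing the baseload.

```python
# Minimal sketch (hypothetical shapes): a high-load-factor campus profile
# with a training-driven afternoon bump, plus simple battery peak shaving.

import math

CAMPUS_BASE_MW = 300            # hypothetical flat IT + cooling baseload
TRAINING_BUMP_MW = 40           # hypothetical bump from training jobs
BATTERY_MWH = 160               # hypothetical storage energy
BATTERY_MW = 40                 # hypothetical storage power
SHAVE_THRESHOLD_MW = 310        # discharge whenever load exceeds this level

load = [CAMPUS_BASE_MW + TRAINING_BUMP_MW * max(0.0, math.sin((h - 10) * math.pi / 12))
        for h in range(24)]     # small trough overnight, peak mid-afternoon

energy_left = BATTERY_MWH
net = []
for mw in load:
    discharge = min(max(mw - SHAVE_THRESHOLD_MW, 0.0), BATTERY_MW, energy_left)
    energy_left -= discharge
    net.append(mw - discharge)

print(f"raw peak {max(load):.0f} MW, shaved peak {max(net):.0f} MW, "
      f"load factor {sum(load) / (24 * max(load)):.2f}")
```

Even in this toy, the battery runs out before the afternoon plateau does, which is the point: storage trims the peak and local transformer stress, but the persistent baseload remains for the grid to serve.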

Frontier Group says delayed retirements at plants like Bowen and Scherer will keep coal burning through 2039. What steps inside a utility IRP shift a planned gas conversion back to coal? Which cost stacks or reliability metrics tip that decision? How do fuel contracts lock it in?

Inside the IRP, the pivot happens when planners update the load forecast, re-run capacity expansion with tightened near-term reliability constraints, and test compliance pathways under relaxed emissions limits. The cost stack that tips the scale combines avoided near-term capital for conversion, higher market energy prices during strained hours, and the reliability value of having units already synchronized to the grid. Metrics that push the decision include declining reserve margins, local deliverability failures, and modeled loss-of-load expectation creeping above planning thresholds. Fuel contracts reinforce it: multi-year coal supply and transportation agreements with take-or-pay provisions make short-term coal burn cheaper than idling, especially when rules are delayed and monitoring obligations are lighter.
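
The take-or-pay point is easiest to see in a toy comparison. Every contract term below is hypothetical, but it shows why burning through the contractual minimum can beat idling once the fuel payment is owed either way.

```python
# Hedged toy comparison (hypothetical contract terms): a take-or-pay coal
# supply contract can make near-term burn look cheaper than idling.

TAKE_OR_PAY_TONS = 1_000_000    # hypothetical annual minimum purchase
CONTRACT_PRICE = 55.0           # $/ton, hypothetical delivered coal price
VARIABLE_OM = 6.0               # $/MWh, hypothetical non-fuel variable O&M
HEAT_RATE = 10.5                # MMBtu/MWh, hypothetical unit heat rate
COAL_HEAT_CONTENT = 20.0        # MMBtu/ton, hypothetical
MARKET_PRICE = 45.0             # $/MWh, hypothetical average energy price

def annual_margin(tons_burned: float) -> float:
    """Net revenue, given that the take-or-pay minimum is owed either way."""
    mwh = tons_burned * COAL_HEAT_CONTENT / HEAT_RATE
    fuel_cost = max(tons_burned, TAKE_OR_PAY_TONS) * CONTRACT_PRICE
    return mwh * (MARKET_PRICE - VARIABLE_OM) - fuel_cost

for tons in (0, 500_000, 1_000_000):
    print(f"burn {tons:>9,} tons -> annual margin ${annual_margin(tons):,.0f}")
```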

EPA’s rollbacks target mercury, particulate, and greenhouse rules, even exploring a national security exemption. How is that exemption documented and justified? What historical precedents exist for power plants? Which legal deadlines and comment records might shape a court’s view?

The national security exemption is invoked through agency findings that certain facilities must continue operating to support critical infrastructure—here, framed around data centers and AI. The record includes operator affidavits, grid reliability assessments, and cross-references to grid operator warnings. For power plants, this is novel; emergency reliability orders have precedent, but using a national security rationale to delay air toxics compliance is unprecedented in practice. Courts will scrutinize whether the administrative record ties the exemption to specific, time-bound reliability needs, whether less-polluting alternatives were evaluated, and how the agency responded to comments noting health impacts. The timeline pressure—rules that would bite in 2027 versus multi-year construction for replacement capacity—will be weighed against statutory health mandates.

Two Georgia plants were tied to thousands of deaths over 20 years, with Bowen linked to 7,500. How do researchers attribute mortality to a specific plant? What dispersion models, emissions inventories, and exposure pathways matter most? Which uncertainties change policy relevance, and which don’t?

Researchers build plant-level emissions inventories—sulfur dioxide, nitrogen oxides, primary particulates—and run dispersion and chemical transport models to estimate downwind fine particulate concentrations. They overlay population data and baseline health statistics, then apply concentration-response functions from the peer-reviewed literature to estimate attributable mortality. Stack heights, plume rise, and prevailing winds determine exposure footprints, while secondary formation of particulates from gaseous precursors is central. Uncertainties in local exposure and individual susceptibility exist, but the directionality is robust; policy relevance hinges on the magnitude and persistence of exposure, and here the counts—thousands over two decades—clear that bar.
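
The core arithmetic is the standard health-impact function: attributable deaths equal the baseline mortality rate times the exposed population times (1 - exp(-beta * delta PM2.5)). A minimal sketch, with hypothetical exposure bins and a hypothetical relative risk rather than any study's actual inputs, looks like this.

```python
# Sketch of the standard health-impact calculation (hypothetical inputs):
# deaths = baseline rate * exposed population * (1 - exp(-beta * dPM2.5)).

import math

RR_PER_10_UG = 1.06             # hypothetical relative risk per 10 ug/m3 PM2.5
beta = math.log(RR_PER_10_UG) / 10.0

def attributable_deaths(delta_pm25: float, population: int, baseline_rate: float) -> float:
    """Annual deaths attributable to a plant's modeled PM2.5 increment."""
    return baseline_rate * population * (1.0 - math.exp(-beta * delta_pm25))

# Hypothetical downwind exposure bins from a dispersion model run.
exposure_bins = [
    (0.8, 2_000_000),           # (ug/m3 increment, exposed population)
    (0.3, 5_000_000),
    (0.1, 10_000_000),
]
BASELINE_MORTALITY = 0.008      # hypothetical annual all-cause mortality rate

total = sum(attributable_deaths(d, pop, BASELINE_MORTALITY) for d, pop in exposure_bins)
print(f"estimated attributable deaths per year: {total:.0f}")
```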

The UC study says training one large model can exceed 10,000 LA–NY round trips in pollution, with $20 billion in annual health costs by 2030. How do they convert compute and cooling into emissions and dollars? Which assumptions drive results? What updated figures would you expect now?

They start from known hardware efficiency to estimate the training run’s IT energy, gross it up for cooling and facility overhead using typical data center power usage effectiveness (PUE) ranges, then map that energy to grid emissions by location and time. Emissions are monetized with health impact factors grounded in established epidemiology. The big assumptions are data center load factors, the share of power from high-emitting plants, and the trajectory of pollution standards. Given delayed coal retirements and relaxed rules, the original numbers are conservative; today I’d expect higher marginal emissions per megawatt-hour, making the $20 billion annual health cost by 2030 plausibly an underestimate.
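
A stripped-down version of that chain, with hypothetical hardware, PUE, and grid-intensity values rather than the study's inputs, makes the conversion explicit.

```python
# Minimal sketch (hypothetical values): translating a training run into
# energy, emissions, and a monetized health cost along the chain above.

GPU_COUNT = 10_000              # hypothetical accelerator count
GPU_POWER_KW = 0.7              # hypothetical average draw per accelerator
TRAINING_DAYS = 90              # hypothetical run length
PUE = 1.2                       # hypothetical power usage effectiveness
GRID_CO2_KG_PER_KWH = 0.45      # hypothetical marginal grid intensity
PM25_G_PER_KWH = 0.03           # hypothetical fine-particulate precursor rate
HEALTH_COST_PER_KG_PM25 = 100.0 # hypothetical monetized damage, $/kg

it_energy_kwh = GPU_COUNT * GPU_POWER_KW * TRAINING_DAYS * 24
facility_energy_kwh = it_energy_kwh * PUE
co2_tonnes = facility_energy_kwh * GRID_CO2_KG_PER_KWH / 1_000
pm25_kg = facility_energy_kwh * PM25_G_PER_KWH / 1_000
health_cost = pm25_kg * HEALTH_COST_PER_KG_PM25

print(f"facility energy: {facility_energy_kwh / 1e6:.1f} GWh")
print(f"CO2: {co2_tonnes:,.0f} t, PM2.5 precursors: {pm25_kg:,.0f} kg")
print(f"monetized health cost: ${health_cost:,.0f}")
```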

EPA cites a 90% mercury cut by 2021 under the 2012 rule, yet dozens of plants now get flexibility. How should communities interpret that gap? What stack-monitoring or CEMS challenges are real versus tactical? Which low-cost fixes could keep reductions on track anyway?

Communities should see the gap as a policy choice, not an engineering impossibility; the 2012 framework proved mercury could fall dramatically. Monitoring challenges—interference, calibration drift—are real but manageable; claiming systemic inaccuracy to justify broad exemptions looks tactical when previous compliance was achieved across the fleet. Low-cost fixes include optimizing sorbent injection, tightening maintenance on baghouses and scrubbers, and enhancing operational setpoints to limit mercury re-emission. Transparency—publishing continuous emissions data—creates accountability and can sustain reductions even while formal limits are weakened.

DOE orders kept Eddystone and J.H. Campbell online to avoid “resource adequacy” problems. Step by step, how does DOE evaluate and issue those orders? Which grid reliability metrics trigger action—reserve margins, N-1-1 contingencies, or voltage stability? What alternatives were weighed and rejected?

The process begins with a reliability alert: the grid operator or utility flags that a planned retirement will impair resource adequacy. DOE compiles the technical basis—reserve margin forecasts, contingency analyses, voltage and thermal studies—and consults with the operator on near-term mitigation options. If alternatives—short-term demand response, mobile generation, accelerated upgrades—can’t bridge the gap, DOE issues a time-limited order tying continued operation to reliability needs. Triggers include falling reserve margins, failed N-1-1 tests on critical corridors, and local voltage stability risks that can’t be solved with redispatch. The orders extended Eddystone and a Michigan coal plant through February because the short list of alternatives couldn’t be deployed in time.

Georgia Power expects “large load” projects to need 51,000+ MW through the mid-2030s, yet some sites weren’t firm. How should regulators vet speculative load? What thresholds—deposits, interconnection milestones, or building permits—separate hype from real demand? Which safeguards prevent stranded costs?

Regulators should tier load credibility. Require meaningful deposits and non-refundable milestones at interconnection stages, proof of land control and local permits, and construction starts for substations. Differentiate letters of interest from executed service agreements. To prevent stranded costs, tie cost recovery to demonstrated load materialization, include off-ramps in rate plans, and require utilities to prioritize resources with flexible redeployment—shorter-term PPAs and modular upgrades—over irreversible commitments.

Microsoft backed Georgia Power’s delay while pledging carbon neutrality by 2030. What binding procurement or siting standards should big tech adopt to square those goals? Which contract terms—hourly matching, deliverability, additionality—really change plant dispatch? What enforcement keeps claims honest?

To square the circle, companies should adopt hourly carbon matching in the same grid region, require deliverable contracts that actually reduce marginal emissions, and prioritize projects that are additional—built because of their procurements. Contract terms should include hourly settlement, curtailment rights aligned with grid needs, and penalties for shortfalls. Siting standards ought to favor zones where new load can be served without triggering coal extensions, and require investment in local transmission or storage. Enforcement comes from independent verification and public disclosure: if a firm supports a coal delay, it should show offsetting hourly reductions that prevent net emissions increases, or its neutrality claim should be marked non-compliant.
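
Hourly matching is simple to state in code. The consumption and contracted clean-generation profiles below are hypothetical, but they show how a portfolio that looks fully covered on an annual basis can still leave roughly half of the hours unmatched, which is what plant dispatch actually responds to.

```python
# Hedged sketch of hourly matching (hypothetical profiles): compare facility
# consumption against contracted clean generation in the same region, hour
# by hour, and report the unmatched share that must be closed or disclosed.

consumption_mwh = [300] * 24                       # hypothetical flat campus load
# Hypothetical contracted solar + wind delivered in-region, by hour.
clean_mwh = [60] * 6 + [150, 350, 550, 700, 800, 850,
             850, 800, 700, 550, 350, 150] + [60] * 6

matched = [min(c, g) for c, g in zip(consumption_mwh, clean_mwh)]

annual_share = sum(clean_mwh) / sum(consumption_mwh)
hourly_share = sum(matched) / sum(consumption_mwh)
worst_gap = max(c - m for c, m in zip(consumption_mwh, matched))

print(f"annual matched share: {annual_share:.0%}")   # looks covered on paper
print(f"hourly matched share: {hourly_share:.0%}")   # what dispatch actually sees
print(f"worst unmatched hour: {worst_gap} MWh")
```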

One PJM oil plant and a Mississippi coal plant are filling Georgia’s gap, shifting pollution across borders. How do air sheds and prevailing winds complicate “out-of-state” thinking? What monitoring or health data reveal downstream effects? Which interstate policies could address this leakage?

Air doesn’t check state lines; prevailing winds and regional chemistry move pollutants across borders. Downwind monitors and health registries pick up elevated fine particulates and ozone precursors, and epidemiological studies link hospitalizations and mortality to those exposures. The idea that “the pollution’s not in Georgia” falls apart once you map plume trajectories and secondary formation. Interstate solutions include transport provisions and regional planning that factor imported generation’s emissions into approval decisions, so a state’s choice to buy energy doesn’t externalize health costs to neighbors.

EPA proposes weaker water rules, including delaying bromide limits tied to bladder cancer. What’s the pathway from plant wastewater to human exposure? Which treatment technologies and timelines exist today? How can regulators quantify avoided cases and set interim guardrails without grid risk?

Wastewater carrying bromide is discharged to rivers, forms disinfection byproducts downstream in drinking water systems, and elevates bladder cancer risk. Proven treatments—upgraded wastewater controls—exist, and the earlier rule projected avoiding 100 cases annually. Regulators can quantify cases avoided using plant-specific discharge rates and downstream population served by surface water systems. Interim guardrails include prioritizing high-risk dischargers, seasonal limits when flows are low, and contingency supply plans, all while phasing compliance to match reliability timelines rather than abandoning protections.

Communities near Plant Scherer reported coal ash concerns and soot on homes. What best practices protect neighbors right now—well testing, filtration, setbacks, or ash pond remediation? Which steps should utilities fund first? What indicators would you track to show real progress within a year?

Start with free, repeated well testing for all nearby households, provide point-of-use filtration where contaminants are found, and accelerate ash pond closure with groundwater interception. Fund siding and HVAC cleaning, seal air leaks, and establish buffer zones for dust control. Prioritize the measures that immediately reduce exposure: safe drinking water, dust suppression, and transparent data. Track indicators quarterly—well contaminants trending down, airborne particulates at fenceline monitors declining, and completed remediation milestones—to demonstrate tangible progress within a year.

If you had to meet AI demand without extending coal, what’s the step-by-step plan for the next 36 months? Which mix—efficiency, load shifting, storage, uprates, fast-track gas, or advanced nukes—scales fastest? What permitting, workforce, and supply chain moves unlock it?

Months 0–6: lock in binding curtailment agreements for training loads, deploy demand-side efficiency at campuses, and procure fast-deploy storage to shave peaks. Months 6–18: uprate existing gas and renewables, add modular storage, and accelerate interconnections that clear deliverability. Months 18–36: bring online fast-track gas with stringent emissions controls and sunset dates, while long-lead projects continue; advanced nukes are beyond 36 months for most sites. Permitting needs streamlined but transparent paths for grid upgrades; workforce efforts focus on electricians, linemen, and commissioning teams; and supply chains prioritize transformers and switchgear to relieve the bottlenecks that currently force coal to fill the gap.

EPA’s Lee Zeldin framed coal as supporting “AI dominance” and reliable power. How would you measure “reliability” in this debate—loss-of-load hours, EUE, or black start capability? Which non-coal options can match those metrics today? What case studies should guide decisions?

Reliability should be measured with loss-of-load hours and expected unserved energy because they capture customer impact, supplemented by local voltage stability and black start adequacy. Non-coal options can meet these metrics when portfolios combine fast-start gas, storage, targeted transmission upgrades, and binding demand-side programs; the key is dispatchability and deliverability, not the fuel itself. Recent emergency orders underscore that the issue is timing and location—where capacity and wires aren’t ready, risks rise—but that’s a planning problem. Case studies where operators used storage and targeted upgrades to avoid outages should guide decisions, rather than defaulting to decades-long coal extensions.
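
For readers who want those definitions pinned down, here is a toy calculation of loss-of-load hours and expected unserved energy from simulated hourly data; the load statistics and outage behavior are hypothetical, not any operator's actual figures.

```python
# Toy sketch (hypothetical hourly data): loss-of-load hours (LOLH) and
# expected unserved energy (EUE) from a year of load and capacity samples.

import random

random.seed(0)
HOURS = 8760
# Hypothetical hourly load (MW) and available capacity with occasional forced outages.
loads = [2_000 + random.gauss(0, 150) for _ in range(HOURS)]
capacity = [2_500 if random.random() > 0.05 else 2_150 for _ in range(HOURS)]

shortfalls = [max(load - cap, 0.0) for load, cap in zip(loads, capacity)]
lolh = sum(1 for s in shortfalls if s > 0)          # hours with any unserved load
eue_mwh = sum(shortfalls)                           # energy not served over the year

print(f"LOLH: {lolh} h/yr, EUE: {eue_mwh:,.0f} MWh/yr")
```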

For transparency, what metrics should be public for every data center: hourly load, water use, backup fuel burn, and emissions? How would you standardize reporting without exposing trade secrets? Which agencies or ISOs should host the data, and what penalties ensure accuracy?

Publish hourly electricity draw, on-site generation and fuel burn, water withdrawal and consumption, and emissions by pollutant, all at facility level with a reasonable lag—say monthly—to protect competitive operations. Standardize definitions across utilities and grid operators, and require third-party verification. ISOs and state utility commissions should host the data alongside emissions inventories. Penalties should include fines for misreporting and conditions in interconnection agreements that tie compliance to continued service priority, creating a real incentive to be accurate.
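
One way to picture such a standard is a per-facility monthly record. The schema below is purely illustrative, not an existing reporting format; the field names are assumptions for the sake of the sketch.

```python
# Hypothetical reporting schema sketch for the facility-level metrics above;
# field names are illustrative, not an existing standard.

from dataclasses import dataclass, asdict
import json

@dataclass
class FacilityMonthlyReport:
    facility_id: str
    month: str                      # "YYYY-MM"
    hourly_load_mwh: list[float]    # metered draw, one value per hour
    onsite_generation_mwh: float
    backup_fuel_liters: float
    water_withdrawal_m3: float
    water_consumption_m3: float
    emissions_kg: dict[str, float]  # e.g. {"CO2": ..., "NOx": ..., "PM2.5": ...}
    verifier: str                   # third-party attestation

report = FacilityMonthlyReport(
    facility_id="campus-001",
    month="2025-06",
    hourly_load_mwh=[310.0] * 720,
    onsite_generation_mwh=1_200.0,
    backup_fuel_liters=8_500.0,
    water_withdrawal_m3=95_000.0,
    water_consumption_m3=60_000.0,
    emissions_kg={"CO2": 4.1e6, "NOx": 900.0, "PM2.5": 120.0},
    verifier="independent-auditor-llc",
)
print(json.dumps(asdict(report))[:120] + "...")
```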

Do you have any advice for our readers?

Treat electricity demand growth as real but not destiny. Ask your utility and regulators to show, in plain numbers, how many hours of reliability risk exist and which non-coal measures close the gap by when. Support transparency—hourly data, verified emissions—and insist that exemptions be narrow, time-limited, and paired with concrete milestones for cleaner replacements. Most of all, remember that every delay choice has a health ledger attached; when an extension runs through 2039, that’s not just a planning line—it’s years of air people breathe.
