The bear case in four numbers
The contrarian framing, attributed to Paul DeVries of CreditSights in a February 2026 interview cataloged in our research journal, is straightforward. Add up the firm large-load interconnect commitments that US utilities have already booked or are actively processing. Adjust for realistic power usage effectiveness (PUE) — the ratio of total facility power to IT power — so the number represents actual draw at the meter, not nameplate IT capacity. The result is approximately 110 GW of firm incremental load by 2030.
Then estimate incremental 2030 demand. Start with the long-term growth rate of US electric demand (roughly flat to 1 percent per year over the last two decades). Add the electrification of transport and building heat, which is real but slow-moving. Add the specific incremental signal from data centers. Net the number against retirements of existing load — heavy industry, some legacy commercial. The bear estimate lands near 50 GW of net incremental demand by 2030.
The ratio is roughly two to one: twice as much contracted firm load commitment as realistic net demand growth. Forward ERCOT power curves through 2030 are soft on the generation side. Forward Henry Hub gas curves are soft on the fuel side. If all of that committed load were actually going to show up, neither forward curve would be soft. The curves are saying in prices what the analyst is saying in words: the headline demand numbers are overstated relative to what operators will actually pay for.
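To make the arithmetic auditable, here is a minimal sketch of the ratio. Only the 110 GW and 50 GW endpoints come from the interview; the nameplate capacity and PUE below are our own illustrative assumptions, chosen so the math lands where DeVries's does.

```python
# Back-of-envelope reconstruction of the DeVries two-to-one ratio.
# Only the ~110 GW and ~50 GW endpoints come from the interview;
# the nameplate figure and PUE below are illustrative assumptions.

PUE = 1.25                # assumed facility-to-IT power ratio
nameplate_it_gw = 88      # assumed booked IT capacity (chosen so the math lands at ~110 GW)

firm_load_at_meter_gw = nameplate_it_gw * PUE    # actual draw at the meter, not nameplate
net_incremental_demand_gw = 50                   # bear estimate of net 2030 demand growth

ratio = firm_load_at_meter_gw / net_incremental_demand_gw
print(f"firm commitments at meter: {firm_load_at_meter_gw:.0f} GW")
print(f"commitments / realistic demand: {ratio:.1f}x")   # ~2.2x, the "two-to-one" in the text
```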
A cleaner version of the same point comes from ERCOT. The Texas ISO has received interconnection requests totaling roughly 410 GW. McKinsey's realistic 2030 US firm-power envelope sits near 100 GW — with a global AI power envelope under 220 GW. Somewhere between the 410 GW ERCOT request book and the 100 GW McKinsey envelope lives a 3-to-4x gap between what developers are filing and what the market will actually build. One Southeast utility CEO quoted in the journal reports having built 7.3 GW over the last century while fielding 14 GW of active data center applications this year. The double-counting problem is not subtle — multiple utility executives describe seeing five interconnection requests for every real customer.
Why NIPSCO's GenCo is the structural tell
Northern Indiana Public Service Company (NIPSCO) is not the loudest utility in the data center story. It doesn't have a Dominion-sized pipeline or a Georgia Power-sized political profile. But its corporate structure — running a dedicated GenCo subsidiary outside the regulated utility to serve incremental data center load — is the structural signal that matters.
The NIPSCO GenCo structure works like this. Incremental data center load gets served by a generation fleet held inside the GenCo, not inside the regulated utility. The GenCo's capital structure is funded by a combination of NIPSCO equity and project finance, not rate base. Recovery of the GenCo's costs comes from the data center load under long-term contracts, not from ratepayer bills. If the data center load evaporates for any reason — a customer walks, a campus cancels, AI demand cools — the ratepayers are protected because the asset was never in rate base.
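A stylized sketch of that risk allocation, with every dollar figure hypothetical (NIPSCO has not published these numbers):

```python
# Stylized allocation of GenCo cost recovery if contracted data center
# load shrinks. All figures are hypothetical illustrations.

annual_genco_cost = 400e6         # assumed annual revenue requirement of the GenCo fleet
contracted_share_remaining = 0.6  # fraction of contracted load still paying (0.6 = 40% walked)

contract_revenue = annual_genco_cost * contracted_share_remaining
shortfall = annual_genco_cost - contract_revenue

# Under the GenCo structure the shortfall lands on GenCo equity and
# project-finance lenders; under a rate-base structure it would land
# on ratepayers.
ratepayer_exposure = 0.0
investor_exposure = shortfall
print(f"shortfall: ${shortfall/1e6:.0f}M -> investors: ${investor_exposure/1e6:.0f}M, "
      f"ratepayers: ${ratepayer_exposure/1e6:.0f}M")
```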
This is a novel structure in US regulated utility practice. It trades away some of the typical ratepayer socialization that utilities use to de-risk large investments, in exchange for protecting ratepayers if the demand doesn't materialize. A utility that believed its headline 10 GW data center pipeline was going to fully materialize would have no reason to do this. A utility that thought there was a realistic chance of walk-away, amendment, or shrinkage would.
Other utilities are watching. Indiana Michigan Power (I&M) has deployed what may be the cleanest large-load tariff template in the market: an 80 percent take-or-pay minimum, exit fees, collateral posting, a clean-transition commitment, and a demand-response commitment. Every one of those gates is the same kind of ratepayer-protection hedge. The Georgia Power stipulated agreement with the Georgia PSC includes large-load contract terms that protect ratepayers from data center walk-away. Virginia's SB 253 / HB 1393 gives the State Corporation Commission the authority to judge whether on-site generation for data centers serves the public interest, which is itself a form of demand hedge. The 15-year minimum-bill contract regime spreading across Southeast and Mid-Atlantic utilities is the generalization: if you are going to build generation for data centers, contract hard enough to protect the ratepayer if the load doesn't show. DSIRE now tracks 34 states and 60 new large-load tariffs on this pattern.
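The take-or-pay gate is the easiest of these to make concrete. A sketch with illustrative numbers of our own (I&M's actual rates and contract sizes are not in the journal):

```python
# Minimum-bill math under an 80% take-or-pay large-load tariff.
# Contract size and rate are illustrative assumptions.

contracted_mw = 300            # assumed contract demand
take_or_pay_floor = 0.80       # from the I&M tariff template
rate_per_mwh = 70.0            # assumed all-in rate, $/MWh
hours_per_year = 8760

actual_load_factor = 0.30      # suppose the customer ramps slowly or shrinks

billed_load_factor = max(actual_load_factor, take_or_pay_floor)
annual_bill = contracted_mw * billed_load_factor * hours_per_year * rate_per_mwh

print(f"billed at {billed_load_factor:.0%} of contract demand: ${annual_bill/1e6:.0f}M/yr")
# Even at 30% actual utilization, the customer pays as if at 80%,
# so the utility and its ratepayers are insulated from the shortfall.
```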
The Stanford / GridCARE reframe: is the constraint physical or doctrinal?
The most productive reframe we found in the second pass of the journal came from a Stanford Sustainability Accelerator analysis of transmission utilization across the Western Interconnection, authored by Rajanie Prabha and Liang Min and advised by Ram Rajagopal. The study (published as a public StoryMap, with a peer-reviewed version expected in early 2026) ran a full AC power-flow N-1 sweep on roughly 13,000 transmission lines at 100 kV and above, 28,000 buses, and 1,800 transformers, using the 2025 heavy-summer peak case with reliability reserve margin included in the numerator. The headline California numbers: 31.8 percent average utilization at 115 kV, 37.3 percent at 230 kV, 14.2 percent at 345 kV, and 30.8 percent at 500 kV, with transformers loaded more heavily than lines fleet-wide. Of roughly 1,145 California 230 kV lines, only a small subset operate near their post-contingency thermal limits. GridCARE (a Stanford-spinout company co-founded by Rajagopal, Min, Amit Narayan, and Arun Majumdar, launched publicly in May 2025) is the commercial vehicle building on this analysis.
The Stanford data is not consistent with an unqualified 'we are out of power' narrative. It is consistent with a different framing: the aggregate Western Interconnection has meaningful physical headroom, but that headroom is unevenly distributed, and the planning doctrine is designed around firm, year-round, peak-day availability — so flexible capacity that physically exists cannot be contracted as firm capacity. The underutilization is partly doctrinal.
The important caveat the Stanford authors themselves flag, and that honest reporting has to surface: the peak case reflects roughly one percent of annual hours, and the binding constraint for any specific new load is not the fleet average but the specific post-contingency limit at the specific circuits serving the specific POI. Fleet average utilization near 30 percent coexists with a small subset of circuits that are consistently loaded — and those circuits are often exactly the ones serving the most attractive data center sites. 'The grid is 70 percent empty' is as misleading as 'we are out of grid.' The honest read is that average headroom does not map cleanly to siteable headroom.
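The gap between average and siteable headroom is easy to demonstrate. A toy sketch (the circuit loadings are invented for illustration, not drawn from the Stanford dataset):

```python
# Why a ~30-40% fleet-average utilization coexists with fully loaded
# circuits at attractive sites. Loadings are invented for illustration.

post_contingency_loading = {    # N-1 loading as a fraction of thermal limit
    "rural_230kV_a": 0.12,
    "rural_230kV_b": 0.18,
    "rural_230kV_c": 0.22,
    "suburban_230kV": 0.35,
    "data_center_corridor_230kV": 0.97,   # the circuit a new campus actually needs
}

fleet_avg = sum(post_contingency_loading.values()) / len(post_contingency_loading)
print(f"fleet average utilization: {fleet_avg:.0%}")        # ~37%

# Siteable headroom is set by the specific circuit serving the POI:
site_circuit = "data_center_corridor_230kV"
site_headroom = 1.0 - post_contingency_loading[site_circuit]
print(f"headroom at the POI: {site_headroom:.0%}")          # ~3%
```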
The substantive counter-argument from the power systems engineering community lives in NERC's July 2025 whitepaper 'Characteristics and Risks of Emerging Large Loads,' the NERC Level 2 Alert on large loads, and ERCOT's August 2025 Large Load Stability whitepaper. NERC's argument is not that the grid is full. It is that the binding constraints for large-load interconnection are (1) rapid load-step dynamics that can destabilize frequency and voltage during benign contingencies, (2) the specific contingency at the specific POI rather than fleet-average thermal headroom, and (3) whether a 'curtailable' load can actually be curtailed on the timescale the stability problem requires. A flexible load that cannot trip off in under a second during a voltage excursion does not help with the stability problem even if it helps with the thermal headroom problem. These are real engineering constraints, not planning-doctrine artifacts.
The Portland General Electric Hillsboro project (announced October 2025) is the cleanest case study of the flexible-contract approach, and it is instructive but narrower than the headline implies. PGE, working with GridCARE's DeFlex methodology, identified 80 MW of near-term capacity unlockable by 2026 and a pathway toward 400 MW by 2029. Aligned Data Centers is building a 31 MW battery as the anchor flexibility asset. The early math from PGE: 1 GW of flexibly connected load at current rates represents roughly $142 million in new utility revenue, which PGE models as supporting either a 5 percent rate reduction for existing customers or $1.3 billion of grid hardening funded without raising rates. That is a meaningful affordability story for flexible-service structures.
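The PGE figures back out to a simple implied rate and a simple capitalization. A sketch under one assumption of ours: that the $142 million is annual revenue at near-full uptime.

```python
# Back-of-envelope on PGE's Hillsboro math. Assumes (our reading, not
# PGE's statement) that the $142M revenue figure is annual and that the
# flexible load runs near-continuously between curtailment windows.

flexible_load_gw = 1.0
annual_revenue = 142e6
energy_mwh = flexible_load_gw * 1000 * 8760          # ~8.76 TWh/yr at full uptime

implied_rate = annual_revenue / energy_mwh           # ~$16/MWh of incremental revenue
print(f"implied incremental rate: ${implied_rate:.1f}/MWh")

grid_hardening = 1.3e9
years_capitalized = grid_hardening / annual_revenue  # ~9 years of the new revenue
print(f"$1.3B hardening ~ {years_capitalized:.1f} years of the new revenue stream")
```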
But Hillsboro is a pilot project, not a filed generally-available tariff. Coverage in Canary Media, Utility Dive, and the American Public Power Association describes it as a utility-led program using PGE's existing authorities and bilateral arrangements with specific customers — Aligned as the anchor, not a standing tariff any new entrant can opt into. We have not located an Oregon PUC docket for a filed 'flexible-connect tariff' under that name. Operators should treat Hillsboro as proof-of-concept that the regulatory approach works in at least one utility's discretion, not as a repeatable tariff that can be assumed available elsewhere.
The implication for the demand debate is meaningful but conditional. If a material portion of the apparent 2-to-1 oversupply in the DeVries numbers is actually projects waiting for 'firm' capacity that could accept flexible service terms, then the bear case is softer than it looks. But operators whose load profiles cannot genuinely curtail at the speed NERC's stability concerns require do not benefit from the flexible-service reframe — they are still in the firm-service queue. The market is not 2x oversupplied in a simple sense; it is oversupplied on firm terms, potentially undersupplied on flexible terms, and the split between the two depends entirely on whether the load is actually flexible. Whether the bull or bear case is right depends on which contracting regime wins and how much of the load can realistically meet the flexibility bar.
The Meta-PIMCO walk-away clause
Meta's 2025 Louisiana deal with PIMCO is one of the more creative data center financing structures we have seen. The deal finances a large campus with off-balance-sheet capital, structured so that Meta's reported capital expenditure does not fully reflect the true economic investment. In exchange, Meta reportedly has optionality to walk away from the arrangement after roughly five years — specifically, the structure does not obligate Meta to a full-term commitment the way a traditional PPA or tax lease would.
If you believe the Meta AI buildout is going to compound indefinitely, a walk-away clause is unnecessary and expensive. If you believe there is a realistic scenario in which Meta's compute needs look very different in 2030 than they do in 2026 — because of efficiency improvements, because of architectural changes to transformer training, because of a shift from training compute to inference compute, because of a competitive reshaping of the market — then the walk-away clause is cheap insurance.
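A toy expected-value framing of why the clause is cheap insurance, with every number hypothetical:

```python
# Why a walk-away clause is "cheap insurance." All numbers hypothetical.

p_demand_shift = 0.25        # assumed probability compute needs look very different by 2030
remaining_commitment = 8e9   # assumed PV of years-6+ obligations avoidable by walking
option_cost = 0.6e9          # assumed extra financing cost paid for the optionality

expected_value_of_option = p_demand_shift * remaining_commitment - option_cost
print(f"EV of walk-away right: ${expected_value_of_option/1e9:.1f}B")
# Positive for any p_demand_shift above option_cost / remaining_commitment (~7.5%).
# The clause only looks expensive if you put near-zero weight on the shift.
```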
The existence of the walk-away clause tells us something about how Meta models its own demand. It tells us that the option value of flexibility is significant. And it tells us that Meta is not willing to lock in permanent commitments at today's terms the way a committed supply-side bull would expect.
We are not arguing that Meta is bearish on AI. We are arguing that Meta is hedging a downside that the bullish public commentary does not price. If the biggest AI buildout on earth writes walk-away clauses into its capital structure, the bear case is worth taking seriously.
What the bull case has right — and the new structural risk it raises
We are not throwing out the bull case. Several of its structural elements are correct and load-bearing.
Training compute demand is accelerating. Inference compute follows a separately shaped growth curve and is projected to exceed training compute in aggregate megawatts by 2028. A bear case on training does not automatically translate into a bear case on inference — and Gaw's framing in the 2026 industry conversations is that the 10 kW-per-rack inference cloud could be structurally undersupplied even while training is overbuilt. Regional capacity shortages are real — Northern Virginia, Phoenix, Columbus, Dallas, Atlanta, and the Pacific Northwest can run tight even in a nationally balanced market. And the empirical market signal from DigitalBridge's first-round renewals — 86 bps of churn against a 22 percent rent uplift — is not the footprint of a bearish market. CoreWeave's NPV per dollar of capex remains in the 15-20 cent range. The 2025 hyperscaler capex print was roughly $350 billion; 2026 estimates are reaching $650 to $700 billion.
But even in the bull case, there is a structural risk the public discourse under-prices: the risk of stranded physical assets. Kate Gordon framed it sharply in an early 2026 interview: the dotcom bust left intangible wreckage (Pets.com brand assets, unused dotcom domains). The data center bust, if it happens, leaves physical wreckage — enormous purpose-built structures on constrained industrial land, with transmission tie-ins and cooling infrastructure designed for a specific rack density and thermal envelope. Stranded data center buildings are not a 'dead mall' with easy commercial conversion. They are a specialized industrial building on a site whose zoning and interconnection were negotiated for a single use.
The stranded-building risk is distinct from the GPU obsolescence risk. GPU obsolescence is a depreciation problem that hyperscalers model tightly. Stranded building risk is a land-use problem that is harder to hedge. If a Microsoft or a Meta decides in 2030 that a specific campus is no longer fit-for-purpose because the compute mix has shifted, the building is very hard to repurpose on the same timeline as the decision.
The other bull-case complication comes from Kate Gordon's nuanced reading: we may be overbuilding in the worst possible way. New gas plants lock in 30 years of fossil capacity. Coal plants are getting life extensions. Renewable projects are being displaced in interconnection queues by speculative large-load requests — the LBNL paper on queue crowding-out is the academic source here. A 2-to-1 oversupply that locks in fossil fuel infrastructure is structurally worse than a 2-to-1 oversupply that stays clean. Even if the nominal bear case is wrong, the compositional bear case — right total capacity, wrong mix — may be the more uncomfortable scenario.
How to hedge each side of the argument
The operator who assumes the bull case unconditionally loses optionality. The operator who assumes the bear case unconditionally misses the buildout. The operators we find most thoughtful are hedging both sides deliberately, and the hedge shows up in four places.
First, contract structure. Deal structures that preserve walk-away optionality — tax equity structures with early-exit provisions, PPAs with market-repricing clauses, land leases with call options — are pricing bear-case risk explicitly. We have seen this in Meta's Louisiana deal, in Microsoft's structure for its Mount Pleasant, Wisconsin campus, and in at least three of the large public mining company AI contracts. The contract-level hedge is small relative to headline deal sizes but significant as downside protection.
Second, portfolio geography. Operators building exclusively in Northern Virginia are betting that the Northern Virginia supply-demand balance stays tight. Operators spreading across Texas, Phoenix, and the Midwest are hedging regional demand divergence. The geographic hedge is the clearest signal of bull-vs-bear thinking we can observe from the outside.
Third, phased buildout. The operators with the most aggressive public plans have the most phased execution. A 5 GW announcement lands as 5 × 1 GW phases, each with its own commercial operation date, its own construction decision gate, and its own option to pause if the demand curve softens. A phased buildout is a bear-case hedge on a bull-case announcement.
Fourth, flexible interconnection. The GridCARE / PGE Hillsboro pattern is the most important bear-case hedge we have seen because it also works as a bull-case accelerator. A project that takes flexible service terms accepts downside if it gets curtailed during peaks, but it gets interconnected years faster than a firm-service equivalent. If the bear case is right, the project only curtails during a slow period anyway. If the bull case is right, the project was interconnected early and captured years of uptime a firm project could not. The flexible-service structure dominates the firm-service structure on expected value in almost every demand scenario.
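That expected-value claim can be made concrete with a two-scenario toy model (all parameters are ours, for illustration only):

```python
# Flexible vs. firm interconnection across bull/bear demand scenarios.
# All parameters are illustrative assumptions.

p_bull = 0.5
value_per_year = 100.0           # normalized revenue per year of operation
horizon = 10                     # years after the firm project would energize

# Flexible service: online ~3 years sooner, loses ~5% of hours to
# curtailment in the bull case (peaks only bind when the market is tight).
flex_years_head_start = 3
curtailment_haircut_bull = 0.05

flex_bull = (horizon + flex_years_head_start) * value_per_year * (1 - curtailment_haircut_bull)
flex_bear = (horizon + flex_years_head_start) * value_per_year   # slack market: no curtailment
firm_bull = horizon * value_per_year
firm_bear = horizon * value_per_year

ev_flex = p_bull * flex_bull + (1 - p_bull) * flex_bear
ev_firm = p_bull * firm_bull + (1 - p_bull) * firm_bear
print(f"EV flexible: {ev_flex:.0f}  vs  EV firm: {ev_firm:.0f}")  # flexible dominates here
```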
What this means for Cliffcenter
If the bear case is directionally correct, three things change about our product roadmap. Queue Intelligence keeps its value, but the value proposition shifts from 'help you shorten the queue' to 'help you price the queue against the realistic demand curve.' BTM Workflow becomes more important, not less — behind-the-meter deals are precisely the structures that let a hyperscaler contract for generation without locking in a utility-ratepayer obligation. Site Intelligence needs a new scoring dimension: demand-sensitivity, measuring how much of the local interconnect pipeline is speculative versus contracted, how much of the local generation fleet depends on data center revenue, and how much tail-risk exposure a site has to a national demand softening.
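A hedged sketch of what the demand-sensitivity dimension could look like (field names and weights are illustrative, not a shipped Cliffcenter spec):

```python
# Sketch of a demand-sensitivity score for a candidate site. Field
# names and weights are illustrative, not a shipped Cliffcenter spec.
from dataclasses import dataclass

@dataclass
class SiteDemandProfile:
    speculative_queue_share: float   # fraction of local interconnect pipeline not under contract
    dc_revenue_dependence: float     # fraction of local generation revenue tied to data centers
    tail_risk_exposure: float        # modeled loss under a national demand-softening scenario

def demand_sensitivity(site: SiteDemandProfile) -> float:
    """Higher score = more exposed to the bear case. Weights are assumptions."""
    return (0.4 * site.speculative_queue_share
            + 0.3 * site.dc_revenue_dependence
            + 0.3 * site.tail_risk_exposure)

site = SiteDemandProfile(speculative_queue_share=0.75,
                         dc_revenue_dependence=0.40,
                         tail_risk_exposure=0.55)
print(f"demand sensitivity: {demand_sensitivity(site):.3f}")  # higher = more bear-case exposure
```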
We are also adding a flexible-service scoring layer. For each candidate site, we will track whether the serving utility has run a flexible-connect pilot or a filed tariff, the terms of any bilateral arrangement operators have reached, whether the load profile is genuinely curtailable at NERC-stability timescales, and what the realistic unlock would be. The PGE Hillsboro pilot is the case study we are building against — proof-of-concept that the regulatory approach works in one utility's discretion, but not a standing tariff that new entrants can assume is available. Replicability depends on the specific utility, the specific load profile, and the specific POI.
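And a companion sketch of the flexible-service screen (the function name and sub-second threshold are illustrative; the threshold echoes NERC's concern about loads that cannot trip in under a second):

```python
# Sketch of the flexible-service screen. Names and the sub-second
# curtailment threshold are illustrative; the stability concern about
# very fast load response follows NERC's July 2025 whitepaper.

def flexible_service_viable(utility_has_flex_program: bool,
                            bilateral_terms_available: bool,
                            curtailment_latency_s: float,
                            unlockable_mw: float) -> bool:
    path_exists = utility_has_flex_program or bilateral_terms_available
    fast_enough = curtailment_latency_s < 1.0   # can the load trip during a voltage excursion?
    return path_exists and fast_enough and unlockable_mw > 0

# A Hillsboro-like case: utility-led pilot, battery-backed load that can shed fast.
print(flexible_service_viable(utility_has_flex_program=True,
                              bilateral_terms_available=True,
                              curtailment_latency_s=0.5,
                              unlockable_mw=80))   # True
```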
We will publish the bear case. Our research desk treats contrarian analysis as a hypothesis worth engaging, not a threat to route around. If we are serving operators in both directions, we owe them a public version of the argument so they can debate it with their own capital allocators. This brief is that first iteration.