In January 2025, the President of the United States stood in the White House and announced a $500 billion AI infrastructure project. That same month, a Chinese lab revealed a model that matched GPT-4, trained for $5.6 million. By November, the White House signed a Manhattan Project-style executive order unifying 17 National Labs around artificial intelligence. We are not in a tech boom. We are in a mobilization.
Most people see ChatGPT answering questions about their taxes. What they don't see is the 9-layer physical industrial chain underneath it - shale wells in Pennsylvania, $20 billion chip factories in Arizona, a Memphis datacenter built in 122 days housing half a million GPUs, and coding agents generating billions in revenue by writing software autonomously. The "AI revolution" is not primarily a software story. It is a thermodynamic, logistical, and geopolitical one.
Leopold Aschenbrenner's mid-2024 paper Situational Awareness mapped this chain before most of it existed. What he described as the likely trajectory - trillion-dollar infrastructure buildouts, government takeover of AI as a national security priority, a compute arms race with China - is now simply the news. What follows is my attempt to lay out the full industrial reality as it stands in early 2026: a layer-by-layer breakdown of everything that has to go right for AGI to happen, and everything that could shatter the whole stack if it goes wrong.
The Stack at a Glance
| # | Layer | Key Metrics | Players |
|---|---|---|---|
| 1 | Energy | 134 GW by 2030 · ~27% US grid | Utilities, shale, nuclear |
| 2 | Datacenters | $500B Stargate · 5 GW UAE cluster | AWS, Google, MSFT, xAI |
| 3 | Chip Manufacturing | $20B+ gigafabs · TSMC $122B rev | TSMC, Intel, Samsung |
| 4 | Hardware Design | Blackwell → Rubin · every hyperscaler has custom silicon | Nvidia, AMD, Google, all hyperscalers |
| 5 | Capital | $650B+/yr 2026 · $500B Stargate | MSFT, Meta, Alphabet, SoftBank |
| 6 | AI Labs | 4+ OOMs · reasoning models · DeepSeek shock | OpenAI, DeepMind, Anthropic, xAI, DeepSeek |
| 7 | Government | Genesis Mission · NEPA reform · Anthropic DoD contract | US Gov, DoD, NSA |
| 8 | AI Products | Agents shipping · Claude Code $2.5B ARR | OpenAI, Anthropic, Google, Meta |
| 9 | End Consumer | 2.7% productivity growth · 90% firms no impact yet | Global economy |
Each layer depends on the one below - cut any link and the stack collapses.
The "Order of Magnitude" Framing
Before the layers, a quick conceptual anchor.
One Order of Magnitude (OOM) = 10x. Two OOMs = 100x. Three OOMs = 1,000x. AI research has repeatedly shown that each multiplicative increase in training compute yields a predictable improvement in model capability - this is the empirical basis of the "scaling hypothesis."
The core claim of scaling theory is that intelligence is a physical property: if you scale compute (and algorithms improve), you get smarter models. This is not a metaphor. It means that running the race to AGI is fundamentally about whether you can physically build, power, and connect more compute than anyone else.
The trendline shows roughly 4 OOMs of effective compute gained per decade - combining hardware advances and algorithmic efficiency. That sounds abstract until you translate it to real infrastructure: each OOM requires roughly 10x more power, more chips, more capital, and more industrial coordination. That is what this chain is about.
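The OOM arithmetic above is simple enough to sketch directly. A minimal illustration, assuming the article's figure of roughly 4 OOMs of effective compute gained per decade (the rate and function name are illustrative, not from a specific source):

```python
# Convert elapsed years into an effective-compute multiplier,
# assuming ~4 OOMs of effective compute gained per decade.
OOMS_PER_DECADE = 4.0

def effective_compute_multiplier(years: float) -> float:
    """Multiplier on effective compute after `years` at the assumed trend."""
    ooms = OOMS_PER_DECADE * (years / 10.0)
    return 10.0 ** ooms

# One decade at ~4 OOMs/decade is a 10,000x gain in effective compute.
print(effective_compute_multiplier(10))  # 10000.0
print(effective_compute_multiplier(5))   # 100.0 -> ~100x in five years
```

The point of the sketch is the compounding: half the time does not buy half the gain, it buys the square root of it.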
Layer 1 - Energy: The Foundation of Compute
Power has replaced silicon as the binding constraint of AI. The race to AGI is, at its most fundamental level, a battle for electricity.
The numbers are hard to absorb at first:
| Year | US DC Power Demand (451 Research) | % of US Grid |
|---|---|---|
| 2026 | ~75.8 GW | ~15% |
| 2028 | ~105 GW | ~21% |
| 2030 | ~134.4 GW | ~27% |
The entire current US grid serves an average load of roughly 450–500 GW. Datacenter demand alone is projected to consume more power than the entire state of California by 2030.
This is not a gradual increase. It is an industrial shock. Nearly doubling datacenter power demand in four years means the grid must add capacity equivalent to dozens of large power plants annually. No prior civilian industry has ever scaled like this.
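The grid-share percentages in the table are easy to verify as a back-of-envelope check, assuming the ~500 GW average-load baseline used above (the baseline constant is the only assumption):

```python
# Back-of-envelope check of the datacenter demand table,
# assuming an average US grid load of ~500 GW.
US_GRID_GW = 500.0

demand_gw = {2026: 75.8, 2028: 105.0, 2030: 134.4}

for year, gw in demand_gw.items():
    share = gw / US_GRID_GW
    print(f"{year}: {gw} GW -> {share:.0%} of grid")
# 2026 -> 15%, 2028 -> 21%, 2030 -> 27%
```

The same arithmetic also shows the growth rate: 134.4 GW in 2030 against 75.8 GW in 2026 is a ~77% increase in four years.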
Securing that wattage requires construction effort not seen since the post-war era. For the natural gas segment alone - just one of the required sources - estimates suggest approximately 1,200 new wells, 40 rigs deployed across the Marcellus and Utica shale formations, and $100 billion in new power plant capital expenditure. The permitting bottleneck, however, is finally being addressed: the Trump administration gutted NEPA regulations via a CEQ Final Rule in January 2026, issued an executive order for expedited DC permitting in July 2025, and the "One Big Beautiful Bill" introduced a 180-day review option. Whether implementation matches ambition remains to be seen.
The nuclear renaissance is real. The DOE issued a $1B loan to Constellation Energy to restart Three Mile Island Unit 1, anchored by a long-term Microsoft power purchase agreement; staffing is 80% complete and the restart has been moved up to 2027. Meta signed 6.6 GW of nuclear deals across Vistra, TerraPower, Oklo, and Constellation. Google acquired Intersect Power for $4.75B and signed an SMR deal with Kairos Power. Amazon's original approach - acquiring a 1 GW datacenter campus directly adjacent to Talen Energy's Susquehanna nuclear plant - was rejected by FERC three times (November 2024, April 2025, February 2026). Amazon restructured the deal as a front-of-meter supply arrangement, a template other hyperscalers are now copying.
Meanwhile, China has 3,400 GW of installed electrical capacity with roughly 400 GW of spare capacity projected by 2030, and electricity costs less than half of US rates. The energy race is also a geopolitical race, and while the US is finally addressing its permitting paralysis, China's structural advantage in cheap power remains formidable.
Layer 2 - Datacenters: The Industrial Cathedral
The modern AGI datacenter is no longer a building full of servers. It is what I'd call an Industrial Cathedral - a thermodynamic system managing the heat and data flow of millions of interconnected chips.
At the $100B cluster scale, the cost breakdown looks roughly like this:
| Component | Approximate Share of Cluster Cost |
|---|---|
| Chips (GPUs/accelerators) | 40–50% |
| InfiniBand networking | ~13% |
| Power, cooling, buildings | 37–47% |
That networking line is worth pausing on. Nvidia's InfiniBand - which enables low-latency communication between GPUs so they can function as a single coherent system - represents 13% of total cluster capex on its own. For a $100B cluster, that is $13B in cables and switches alone.
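The cost table above translates to dollar figures mechanically. A minimal sketch using midpoints of the article's ranges (the midpoint choices are my assumption, not vendor data):

```python
# Illustrative dollar split of a $100B cluster using the article's
# approximate shares (midpoints chosen for the ranged components).
CLUSTER_COST_B = 100.0  # total cluster cost, in $B

shares = {
    "chips (GPUs/accelerators)": 0.45,      # midpoint of 40-50%
    "InfiniBand networking": 0.13,          # ~13% of capex
    "power, cooling, buildings": 0.42,      # midpoint of 37-47%
}

for component, share in shares.items():
    print(f"{component}: ${CLUSTER_COST_B * share:.0f}B")
# InfiniBand networking: $13B
```

Scaling the same shares to a hypothetical $500B complex puts networking alone at $65B - larger than most countries' annual defense budgets.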
The geography of compute has gone from rumor to reality. The Stargate UAE campus - confirmed in Abu Dhabi with 5 GW ultimate capacity and Phase 1 (200 MW) coming online Q3 2026 - is the first mega-cluster outside US soil. Domestically, Project Stargate's Abilene TX Phase 1 became operational in September 2025, with 5 more sites announced and roughly 7 GW planned across the complex. xAI's Colossus facility in Memphis was built in 122 days, now runs at approximately 2 GW with 555,000 GPUs, and has a roadmap to 1 million.
Combined 2026 hyperscaler capex is projected at $660–690B, with roughly 75% directed at AI infrastructure. This creates a seizure risk that is genuinely existential: a training cluster worth $100B+ is a more expensive strategic asset than the International Space Station. The Stargate UAE facility alone represents a geopolitical asset that any regional conflict could compromise.
For this reason, the most critical AGI training will still have to happen on allied soil, under physical protection that approaches state-level security. These facilities are no longer company assets. They are critical national infrastructure.
Layer 3 - Chip Manufacturing: The Global Choke Point
The world's leading-edge semiconductor capacity is essentially concentrated in one company, on one island, 110 miles from mainland China.
Every serious GPU - Nvidia H100, B200, GB200, AMD MI300X, MI350 - is built at TSMC. The dependency has not diminished, but the geographic diversification has begun. TSMC Arizona Fab 1 is operational on 4nm (since 2025), Fab 2 is in equipment install as of Q3 2026, and Fab 3 is under construction. TSMC posted $122B in FY2025 revenue (+31.6%) and committed $52–56B in capex for 2026. But a new Gigafab still costs $20B+ and takes 5+ years, and the world's AI chip manufacturing capacity still needs to roughly quadruple by 2030.
Beyond wafer production, the secondary bottleneck remains: advanced packaging. CoWoS (Chip on Wafer on Substrate) capacity is still sold out through 2026–2027, with TSMC ramping from 35K to 130K wafers per month. This is the physical process that stacks High Bandwidth Memory onto logic chips - without it, even fabricated wafers cannot become functional AI accelerators.
The competitive landscape has sharpened. Intel secured $7.86B in CHIPS Act funding plus an $8.9B government equity stake, has 18A in production, and landed the Microsoft Maia 2 contract. Samsung is running above 80% utilization, targeting 2nm in H2 2026, and won a $16.5B contract from Tesla.
The cold war dimension is intensifying. China's SMIC has confirmed 5nm-class production (N+3 process) in the Kirin 9030 chip and has started 3nm GAA R&D, though yields remain around 33%. CXMT is targeting HBM3 mass production in H1 2026, still roughly 3 generations behind SK Hynix and Samsung. A Chinese EUV prototype built by ex-ASML engineers is reportedly under testing. Export controls have shifted under Trump: H200 sales to China moved to case-by-case review with a 25% revenue share condition. The controls are slowing China down - but the gap is narrowing, not widening.
If TSMC's Taiwan operations are disrupted - by blockade, natural disaster, or conflict - the AGI stack still collapses. The Arizona fabs buy time, not independence.
TSM - Quarterly revenue and net income, showing TSMC's accelerating growth driven by AI chip demand.
Layer 4 - Hardware Design: The Architecture of Reasoning
Hardware design has shifted from "build a faster processor" to "build a machine for one specific cognitive task." The dominant architecture is the Transformer, and the dominant chip ecosystem is Nvidia's.
The GPU deployment trajectory tells the story:
| Era | Hardware | Scale |
|---|---|---|
| GPT-4 training (2023) | A100/H100 | 10,000–40,000 |
| Blackwell era (2025–26) | B200/GB200 | Millions deployed |
| Rubin (H2 2026) | R200 | Next-gen architecture |
| 2030 goal | Mixed fleet | ~100,000,000 GPU-equivalents |
Nvidia posted $215.9B in FY2026 revenue (+65%) and guided Q1 FY2027 at $78B. The product cadence has accelerated: B200 → B300 (Blackwell Ultra) → Rubin R200 (H2 2026) → Rubin Ultra (2027). Going from tens of thousands to 100 million GPUs still means every upstream layer must scale proportionally.
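The jump from the GPT-4 fleet to the 2030 goal can be stated in OOM terms, tying this table back to the framing at the top of the article. A rough calculation, where the GPT-4 fleet size is my midpoint assumption from the table's 10,000–40,000 range:

```python
import math

# OOM distance between the GPT-4-era training fleet and the 2030 goal.
gpt4_era_gpus = 25_000        # assumed midpoint of 10k-40k
goal_gpus = 100_000_000       # ~100M GPU-equivalents by 2030

ooms = math.log10(goal_gpus / gpt4_era_gpus)
print(f"{ooms:.1f} OOMs of fleet growth")  # ~3.6 OOMs
```

Roughly 3.6 OOMs of fleet growth in about seven years - which is why every upstream layer, from CoWoS packaging to gigawatts of power, has to scale in lockstep.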
But the structural shift since the original version of this article is that every major hyperscaler now has custom silicon in production. AMD launched MI350 in June 2025, and OpenAI selected AMD for 6 GW of MI450 capacity. Google's TPU v6 (Trillium) reached general availability. Amazon's Trainium3 (3nm) entered early production in early 2026. Meta's MTIA roadmap runs on a 6-month cadence from MTIA 300 to MTIA 500. Microsoft announced Maia 200 in January 2026. Nvidia's CUDA moat remains deep - switching costs are enormous - but the era of total dependency is ending. The question is no longer whether alternatives exist, but whether they can scale fast enough to absorb the demand that TSMC's CoWoS bottleneck cannot.
The key design challenge at this layer remains interconnect density: making a million chips act as one coherent brain. The faster chips get, the more the bottleneck shifts to how information flows between chips rather than how fast each chip computes individually.
NVDA - Quarterly revenue breakdown by product segment, illustrating the datacenter GPU explosion.
AMD - Quarterly revenue breakdown by product segment, showing the ramp of its AI accelerator business.
Layer 5 - Capital: The Trillion-Dollar Fuel
AGI development has moved past venture capital. It is now operating at sovereign scale.
The spend projections are no longer projections - we are living in them:
| Year | Estimated Global AI Infrastructure Spend |
|---|---|
| 2024 (actual) | ~$250B |
| 2025 (actual) | ~$400B |
| 2026 (projected) | ~$650–700B |
| 2030 (projected) | ~$8T/year |
For reference: the International Space Station cost approximately $150B over 30 years. Project Stargate - not rumored at $100B but officially announced at $500B - has Phase 1 operational. Meta is building the Prometheus supercluster at $115–135B/year in capex. SoftBank has committed $64.6B to OpenAI alone, with 60% of its total assets now ASI-oriented. Big tech collectively issued $100B in bonds in early 2026 specifically to fund AI infrastructure.
Sovereign wealth funds have entered at scale. Saudi Arabia's HUMAIN signed a $10B deal with Google targeting 6 GW of AI compute. The UAE's MGX acquired Aligned Data Centers for $40B. This is no longer Silicon Valley venture capital - it is petrodollar infrastructure investment.
These are not R&D budgets. They are infrastructure buildouts on the scale of national rail systems or interstate highways - except they're being done by a mix of private companies and sovereign states in a matter of years.
This creates a stark dependency: Labs need Hyperscalers for compute. Hyperscalers need Hardware Designers for chips. The entire chain runs on the belief that the ROI will materialize - that the intelligence produced will be worth the capital invested. The early warning signs are sobering: MIT Media Lab found 95% of AI projects deliver zero return, and an NBER study found 90% of firms report no measurable impact. Yet there has been no capex pullback - every major player is accelerating spend into 2027.
If the bet pays off, the Hyperscalers and their sovereign backers become the industrial titans of the 21st century. If it doesn't, we are looking at the largest coordinated capital misallocation in history - and the magnitude has grown 3x since I first wrote this section.
I think the bet is right. But the downside risk is worth naming.
NVDA - Stock price with earnings overlays since 2022, visualizing the market's repricing of AI compute demand.
Layer 6 - AI Labs: The Frontiers of Algorithmic Progress
AI Labs - OpenAI, Google DeepMind, Anthropic, Meta AI, xAI, and increasingly DeepSeek - are the teams translating raw compute into model weights. Their work is measured in Effective Compute, a combination of physical hardware scaled and algorithmic efficiency gained. Historically, algorithmic progress has contributed roughly 0.5 OOMs per year, meaning that every two years or so, the same level of intelligence can be achieved with roughly 10x less compute. This is why models keep getting cheaper even as hardware gets more expensive.
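That efficiency rate can be made concrete. A minimal sketch, assuming the ~0.5 OOMs/year figure quoted above holds (the rate and function name are illustrative):

```python
# Fraction of today's compute needed to reach a fixed capability level
# after `years`, assuming ~0.5 OOMs/year of algorithmic efficiency gains.
ALGO_OOMS_PER_YEAR = 0.5

def compute_cost_factor(years: float) -> float:
    """Relative compute cost of a fixed capability after `years`."""
    return 10.0 ** (-ALGO_OOMS_PER_YEAR * years)

print(compute_cost_factor(2))  # 0.1  -> same model, ~10x cheaper in 2 years
print(compute_cost_factor(4))  # 0.01 -> ~100x cheaper in 4 years
```

This is also the arithmetic behind the DeepSeek shock discussed below: a fast follower riding the efficiency curve pays a small fraction of the frontier's training bill.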
The frontier has moved fast. GPT-5 shipped in August 2025. Claude Opus 4.6 in February 2026. Gemini 3.1 in February 2026. Llama 4 in April 2025. Grok 4.1 in November 2025. OpenAI restructured as a Public Benefit Corporation and reached an $840B valuation with $25B in annualized revenue. Anthropic hit $380B at $14B annualized revenue.
Then came the DeepSeek shock. In January 2025, DeepSeek-R1 matched GPT-4 and o1 performance at a claimed training cost of $5.6M - wiping $600B off Nvidia's market cap in a single day. The implications are still reverberating: if a small Chinese lab can match frontier capabilities at 1/1000th the cost, what exactly is the moat?
The most important paradigm shift since the original version of this article is reasoning models. What Aschenbrenner called "unhobbling" - unlocking capabilities the raw model has but doesn't deploy by default - has crystallized into a concrete product category:
- RLHF: Still foundational for making models useful and conversational.
- Chain-of-Thought and Scaffolding: Now standard in production systems.
- System II Reasoning: No longer theoretical. OpenAI's o3, DeepSeek-R1, and Claude's extended thinking are concrete examples - models that deliberate, backtrack, and synthesize over minutes rather than milliseconds. The transition from "autopilot" to "deliberation" is happening.
The open-source dynamics have also shifted. DeepSeek publishes under MIT license. Llama 4 is open-weights. Grok 3 was planned for open-source release. The frontier is no longer cleanly divided between "closed" and "open" - it is a spectrum, and the closed labs' lead is measured in months, not years.
The security situation has moved from "alarming" to "confirmed threat." In September 2025, Anthropic reported that Chinese cyber-operators had used AI models to target approximately 30 companies - described as "the first documented large-scale cyberattack executed without substantial human intervention". Model weights remain the crown jewels, and the attack surface has expanded as more labs deploy more capable systems.
Stealing the weights is equivalent to photocopying a $100B+ asset. Once copied, every advantage derived from the capital and compute that produced it is instantly shared. And DeepSeek proved that even partial knowledge transfer can bootstrap a competitive system at a fraction of the cost.
Layer 7 - Government: "The Project" and National Security
"The Project" is no longer a prediction. It is policy.
Trump revoked Biden's AI safety executive order on Day 1 of his second term (January 2025). What replaced it is more consequential: the "Genesis Mission" executive order (November 2025), a Manhattan Project-style AI initiative unifying 17 National Labs under a single directive to accelerate US AI supremacy. This is, almost to the letter, what Aschenbrenner predicted in Situational Awareness - and it arrived faster than even he suggested.
Stargate was announced at the White House. Anthropic signed a $200M Pentagon contract in July 2025 - the first AI lab operating on the DoD's classified network. The US AI Safety Institute was renamed CAISI and shifted from regulation to standards under Trump. The line between "private AI development" and "national security program" has dissolved.
The three pillars Aschenbrenner outlined are materializing:
- Weight Security: Anthropic's DoD contract puts frontier model weights behind classified-network protections. The September 2025 cyberattack disclosure made the urgency undeniable.
- Supply Chain Sovereignty: NEPA reform is underway (see Layer 1). The "One Big Beautiful Bill" and expedited DC permitting EOs represent the regulatory "War Room" that was previously impossible.
- Superalignment and Control: The EU AI Act's prohibited practices took effect February 2025, with full applicability in August 2026. The UK renamed its AI Safety Institute to the "AI Security Institute" and has been the most active regulatory body internationally.
China's response: the "AI Plus Action Plan" (August 2025) targeting 90% AI penetration across the economy by 2030. Export controls have loosened under Trump - H200 sales to China shifted to case-by-case review - making the geopolitical calculus more ambiguous than the clean "containment" narrative suggests.
The analogy to the Manhattan Project is no longer hypothetical. We are watching it happen.
Layer 8 - AI Products: From Chatbots to Autonomous Agents
The delivery mechanism for AGI intelligence is the product layer. We have moved from simple text completion (GPT-2) to sophisticated chatbots (GPT-4, Claude, Llama) and are now in the early Autonomous Agent era - not as a prediction, but as shipping product.
The arc, updated with reality:
- 2020–2022: Language models that complete text
- 2023–2024: Chatbots with memory, tools, and APIs
- 2025–2026: Agents that operate computers, write and execute code, and run multi-step projects autonomously - and generate real revenue doing it
Claude Code - Anthropic's coding agent - hit $2.5B in annualized revenue. Cursor crossed $500M ARR. GitHub Copilot launched Agent Mode. Devin shipped as a commercial product. OpenAI released Operator (web agent) and Codex (async coding). Google launched Project Mariner. The "direction" I described in the original version of this article is no longer directional - it is the primary revenue driver for multiple companies.
The distinction between "tool" and "colleague" has blurred exactly as predicted. These agents join your codebase, read your documentation, submit pull requests, operate browsers, and run multi-step projects with minimal supervision.
The integration schlep - authentication, compliance, API rate limits, legacy systems - is being solved, not by one clean abstraction, but by brute-force agent capability. Models that can operate a computer can navigate legacy systems the same way humans do: by clicking through them. This is inelegant but effective, and it means the deployment bottleneck is shifting from "can the agent do the task?" to "can the organization trust the agent to do the task?"
When that trust is established at scale, the economic effect will not be gradual.
Layer 9 - End Consumer: The Cognitive Labor Revolution
The economic endgame of the entire chain is the automation of cognitive labor. And the early data is - honestly - mixed.
US productivity grew 2.7% in 2025, double the decade average. AI revenue is real and scaling: OpenAI at $25B, Anthropic at $14B, total enterprise AI spending at $37B in 2025 (3x year-over-year). But a Fortune/CEO survey in February 2026 found that 90% of firms report no measurable AI impact on operations. The Yale Budget Lab documented roughly 55,000 US layoffs where AI was cited as a factor, but found no macro-level disruption to employment. AI's direct contribution to total factor productivity was measured at a mere 0.01 percentage points.
The "100 million Alec Radfords" framing still holds as the theoretical endgame. Alec Radford is one of the key researchers behind GPT; the framing asks you to imagine 100 million researchers of that caliber, running at 100x human speed, working on AI research full-time. But we are clearly in the gap between "the technology works" and "the economy has reorganized around it."
The macro projections, updated with current reality:
| Metric | 2025 Reality | Post-AGI Deployment (Projected) |
|---|---|---|
| Annual GDP growth | ~2.7% (US) | ~30%+ |
| AI's TFP contribution | 0.01 pp | Dominant driver |
| Enterprise AI revenue | $37B | Trillions |
| Firms with measurable impact | ~10% | Near-universal |
Aschenbrenner describes the end state as "Factorio-world": a self-replicating system where AI directs robots to build more factories, directed by superintelligent researchers, expanding the physical world's capacity faster than any human-governed economy ever has. The feedback loop hasn't started yet - but the capital commitment across Layers 1–8 suggests the bet is being placed that it will.
The Full Dependency Matrix
| # | Layer | Primary Players | Key 2030 Metric | Upstream Dependency | Primary Risk |
|---|---|---|---|---|---|
| 1 | Energy | Utilities, Shale, Nuclear | 134 GW / ~27% US Grid | Regulatory (NEPA reform underway) | Grid scaling / implementation lag |
| 2 | Datacenters | Microsoft, AWS, Google, xAI | $500B+ clusters · 5 GW facilities | Power (Layer 1) | Seizure / geopolitical exposure (UAE) |
| 3 | Chip Manufacturing | TSMC, Intel, Samsung | 4x capacity / $52B+ annual capex | Wafers, CoWoS packaging | CCP blockade / narrowing China gap |
| 4 | Hardware Design | Nvidia, AMD, Google, all hyperscalers | 100M GPUs · custom silicon universal | Fabrication (Layer 3) | Interconnect density / CoWoS bottleneck |
| 5 | Capital | Microsoft, Meta, Alphabet, SoftBank, Sovereign Wealth | $650B+ → $8T sovereign capex | Hardware (Layer 4) | ROI failure / 90% zero-impact risk |
| 6 | AI Labs | OpenAI, DeepMind, Anthropic, xAI, DeepSeek | 4+ OOMs · reasoning models | Compute (Layer 5) | Weight exfiltration / DeepSeek-style leapfrog |
| 7 | Government | US Gov (Genesis Mission), DoD, NSA | Manhattan Project-style initiative active | Labs (Layer 6) | Export control loosening / regulatory incoherence |
| 8 | AI Products | Anthropic, OpenAI, Google, Meta | Agent revenue at scale ($2.5B+ ARR) | Unhobbling (Layer 6) | Organizational trust gap |
| 9 | End Consumer | Global economy | 2.7% productivity → feedback loop | Reliability (Layer 8) | 90% firms no impact / societal instability |
The Knife-Edge: A Personal Take
The thing that strikes me most about this chain is not the scale - it's that the fragility argument has gotten stronger even as the buildout has accelerated.
Every layer depends on the one below it. Cut the power and the datacenters go dark. Disrupt TSMC and every GPU roadmap collapses. Fail to secure model weights and the $650B+ of annual upstream investment can be replicated by adversaries - and DeepSeek proved that even partial knowledge transfer, at $5.6M, can bootstrap a competitive system. Each of these risks is confirmed, not theoretical: Taiwan is genuinely contested, NEPA reform is signed but not yet implemented, and foreign intelligence services are confirmed operating against US AI companies.
The new risk that didn't exist when I first wrote this piece is the efficiency attack on the capital thesis. If a Chinese lab can match frontier models at 1/1000th the cost, the trillion-dollar infrastructure buildout looks less like a moat and more like a target. The Western strategy assumes that scale is the advantage - but DeepSeek suggests that algorithmic efficiency can substitute for brute-force compute. The chain doesn't just need to hold together; it needs to produce returns that justify the investment before a cheaper path renders it obsolete.
What Aschenbrenner calls "situational awareness" is the ability to see the whole chain simultaneously - not just the product, not just the chip, not just the policy, but the thermodynamic and logistical reality of what it takes to build a thinking machine. Most people, including most investors and policymakers, are optimizing for one layer while being blind to three others.
I don't think AGI is inevitable. I think it's likely, conditional on this chain holding together - which is a much harder condition than it looks.
What I'm confident about is this: if you want to understand what is happening in the economy over the next decade, you cannot start with the chatbot. You have to start with the shale fields.
Further Reading
- Situational Awareness by Leopold Aschenbrenner - situational-awareness.ai
This article is for informational and educational purposes only. Nothing in this article constitutes investment advice, financial advice, or a recommendation to buy or sell any security. All data, figures, and projections are sourced from publicly available information and may be incomplete or outdated. Investing involves risk, including the possible loss of principal. Always conduct your own research and consult a licensed financial advisor before making investment decisions.
Sources
- OpenAI - Announcing the Stargate Project (Jan 2025)
- White House - Genesis Mission Executive Order (Nov 2025)
- ANS/CNBC - FERC denies Talen-Amazon nuclear datacenter agreement (Nov 2024, Apr 2025, Feb 2026)
- Bloomberg - Hyperscaler AI capex tracker (2026)
- 451 Research - US Datacenter Power Demand Forecast (2025)
- White House - Executive Order on Expedited Datacenter Permitting; CEQ NEPA Final Rule (Jul 2025, Jan 2026)
- DOE - $1B Loan to Constellation Energy for TMI Unit 1 Restart (Nov 2025)
- Meta - Nuclear Energy Agreements (2025)
- Google - Intersect Power Acquisition and Kairos Power SMR Deal (2025)
- IEA - China Electricity Market Report (2025)
- Reuters - Stargate UAE Abu Dhabi Campus (2025)
- xAI - Colossus Supercomputer Announcement (2025)
- TSMC - Arizona Fab Updates (2025–2026)
- TSMC - FY2025 Annual Report (2026)
- DigiTimes - CoWoS Capacity Ramp (2025)
- Intel - CHIPS Act Award and 18A Production Update (2025)
- Samsung - 2nm Roadmap and Tesla Contract (2025)
- TechInsights - SMIC N+3 Process Analysis (2025)
- CXMT - HBM3 Production Timeline (2025)
- Financial Times - Chinese EUV Prototype Development (2025)
- Reuters - Trump Administration Export Control Revisions (2025)
- Nvidia - FY2026 Annual Report (2026)
- AMD - MI350 Launch; OpenAI MI450 Selection (Jun 2025)
- Amazon - Trainium3 Production Update (2026)
- Meta - MTIA Custom Silicon Roadmap (2025)
- Microsoft - Maia 200 Announcement (Jan 2026)
- Meta - FY2026 Capex Guidance (2026)
- SoftBank - OpenAI Investment and ASI Strategy (2025)
- Financial Times - Big Tech $100B Bond Issuance for AI (2026)
- Google - HUMAIN Saudi Arabia Partnership (2025)
- MGX - Aligned Data Centers Acquisition (2025)
- MIT Media Lab - AI ROI Study; NBER - AI Impact on Firms (2025)
- Model release dates: GPT-5 (Aug 2025), Claude Opus 4.6 (Feb 2026), Gemini 3.1 (Feb 2026), Llama 4 (Apr 2025), Grok 4.1 (Nov 2025)
- OpenAI - PBC Restructuring and Valuation (2025–2026)
- Anthropic - Funding Round and Revenue (2026)
- DeepSeek - R1 Technical Report; Bloomberg - Nvidia $600B Market Cap Drop (Jan 2025)
- Anthropic - Threat Intelligence Report on AI-Enabled Cyberattacks (Sep 2025)
- White House - Revocation of Biden AI Executive Order (Jan 2025)
- Anthropic - $200M Pentagon Contract (Jul 2025)
- NIST - AISI Renamed to CAISI (2025)
- EU - AI Act Implementation Timeline (2025)
- China State Council - AI Plus Action Plan (Aug 2025)
- Anthropic - Claude Code Revenue Milestone (2026)
- Agent products: Claude Code, Cursor, GitHub Copilot Agent Mode, Devin, OpenAI Operator/Codex, Google Project Mariner (2025–2026)
- BLS - US Productivity Growth 2025 (2026)
- Statista/IDC - Enterprise AI Revenue 2025 (2026)
- Fortune/CEO Survey - AI Impact Assessment (Feb 2026)
- Yale Budget Lab - AI and Employment Study (2025)