AI labs have become de facto defense contractors, with OpenAI going deep into classified work while Anthropic is punished for drawing ethical red lines, even as drones and wiper malware start hitting cloud and enterprise infrastructure directly.
Money and regulation are racing ahead of real productivity: frontier labs and GPU vendors are drowning in capital, governments are wiring identity and surveillance into the stack, and big employers are cutting or reshaping white-collar jobs on AI narratives that macro data still doesn’t justify.
Key Events
/OpenAI raised $110B and agreed to deploy its models on the U.S. Department of War's classified network.
/Anthropic closed a $30B Series G at a $380B valuation and was designated a Pentagon 'supply-chain risk' after refusing to drop AI safeguards.
/California passed a law forcing all operating systems, including Linux and SteamOS, to implement age verification and share user age brackets with apps starting in 2027.
/Amazon's AI coding agent deleted a live AWS production environment, causing a 13-hour outage and prompting a new rule requiring senior engineer approval for AI-assisted changes.
/Iranian drone strikes directly hit AWS data centers in the UAE and Bahrain, knocking cloud services offline and making U.S. tech infrastructure an explicit wartime target.
Report
Frontier AI has crossed the line into being core warfighting infrastructure. OpenAI took $110B in new capital to run models inside the U.S. Department of War, while the Pentagon moved to blacklist Anthropic and Iranian drones lit up AWS data centers in the Gulf.
the ai–military split
OpenAI agreed to deploy its models on the U.S. Department of War’s classified network, with Sam Altman acknowledging the company cannot control how the Pentagon uses its AI.
In parallel, Anthropic built a custom Claude model for the Pentagon that is 1–2 generations ahead of its consumer system, but then refused demands for unfettered access, autonomous weapons, and mass surveillance, prompting a 'supply‑chain risk' label and threats to invoke the Defense Production Act to strip safeguards.
Trump ordered all federal agencies, including Treasury, to stop using Anthropic’s products after it drew a red line against lethal autonomous weapons and domestic mass surveillance. xAI took the opposite tack, signing an agreement to put its Grok model into classified systems as the Pentagon threatens to blacklist labs that maintain strict safety rules.
On the demand side, Claude briefly overtook ChatGPT in the U.S. App Store as a 'cancel ChatGPT' wave and a 295% spike in ChatGPT uninstalls followed OpenAI’s military deal, even as around 3 million personnel adopted the GenAI.mil platform built on tools from OpenAI, Google, and xAI.
the capital stack and possible bubble
OpenAI raised $110B in new funding and now carries an implied valuation near $840B. It is planning roughly $600B of compute investment by 2030 and expects around $110B in additional cash burn on the way.
Anthropic closed a $30B Series G that values it at about $380B and is reportedly nearing a $20B annual revenue run-rate after adding $5B in just a few weeks.
SoftBank put $30B into OpenAI and is seeking a $40B loan to support the bet, while Nvidia walked back a mooted $100B check in favor of a $30B stake and a separate $26B program to build open‑weight models.
Hyperscaler capex is projected to reach about $770B in 2026, even as Goldman Sachs says AI added 'basically zero' to U.S. GDP last year and over 80% of companies report no productivity gains from their AI spending.
Downstream, Oracle is carrying over $100B of debt and planning up to 30,000 layoffs to fund AI data centers while walking away from its planned Stargate expansion with OpenAI, and Amazon has shed roughly $450B in market value amid concern that AI overbuild and service degradation are hitting margins and customer loyalty.
identity, age verification, and the end of anonymity
California’s AB 1043 now requires every operating system, including Linux and SteamOS, to collect a user’s age at account setup and share their age bracket with any app that asks, with enforcement slated for January 1, 2027.
Similar OS-level age‑assurance bills are moving in New York, Colorado, Illinois, Brazil, and over 25 U.S. states, often based on two model templates that standardize requirements.
The FTC has openly admitted that age‑verification schemes violate children’s privacy law but is choosing not to enforce that conflict, while Meta, Google, and Snap backed the template that pushes age checks down into the OS layer.
Discord’s rollout of AI‑driven age and ID checks provoked enough backlash and cancellations that the company cut ties with a verification vendor linked to Peter Thiel and is reworking the system with more human review.
In Washington, Congress is actively considering abolishing the right to be anonymous online at the same time that NSA surveillance under Section 702 and commercial age‑verification tools are normalizing mass data collection under a 'for the children' banner.
infra hits: compute, outages, and kinetic attacks
Amazon’s Kiro AI coding agent deleted a live AWS production environment and caused a 13‑hour outage, one of multiple AI‑caused incidents that led the company to require senior engineers to sign off on any AI‑assisted changes.
Claude itself suffered major outages that took Anthropic’s flagship service offline for hours, reinforcing how dependent enterprises have become on a tiny set of AI SaaS endpoints.
Separately, Iranian drones directly struck AWS data centers in the UAE and Bahrain, setting facilities ablaze and knocking regional cloud services offline in what the Pentagon called a historic first for military attacks on commercial cloud infra.
Iran’s Revolutionary Guard has explicitly named American tech giants like Amazon, Google, and Microsoft as targets, while Russia is providing Iran with targeting assistance and U.S. air defenses are widely viewed as ill‑prepared for swarms of cheap drones.
On the supply side, Qatar’s helium shutdown has yanked roughly 30% of global supply for chip fabs, Western Digital says its hard drives are 'pretty much sold out' through 2026 as AI data centers hoard capacity, and analysts warn that an AI‑driven memory squeeze could push many consumer electronics makers into bankruptcy by 2026.
Domestic politics are starting to bite as well: Bernie Sanders is pushing legislation for a national moratorium on new AI data centers over existential‑risk and pollution concerns, even as reports tie conventional data‑center air pollution to elevated lung disease and mortality.
labor, automation, and the productivity mirage
Goldman Sachs estimates AI contributed 'basically zero' to U.S. economic growth last year, and more than 80% of companies report no productivity gains from their AI investments so far.
Yet firms are restructuring aggressively around the narrative: Jack Dorsey’s Block cut about 40% of staff citing AI, Atlassian is shedding roughly 1,600 roles in an AI pivot, and Oracle is planning up to 30,000 layoffs to fund GPU‑heavy data centers.
Amazon projects eliminating 600,000 future U.S. jobs via warehouse robots and automation, while its own services have already suffered outages and misconfigurations from AI agents like Kiro deleting production systems.
At the same time, IBM is tripling entry‑level developer hiring after running into hard limits on AI in real workflows, and thousands of CEOs say AI has had no measurable impact on employment or productivity in their firms.
Workers hear a different story: Microsoft’s AI leadership openly predicts that most white‑collar tasks can be automated within 12–18 months, Andrew Yang forecasts millions of white‑collar job losses on a similar timeline, and polls show negative sentiment toward AI running roughly two‑to‑one over positive views as UBI and 'universal basic equity' debates move from theory into protest slogans.
What This Means
The frontier of AI has snapped into a three-way tension between defense-aligned labs, overstretched capital and infra, and a labor market bracing for automation without seeing real productivity, while governments quietly wire identity and surveillance into the stack. The live decision is which of those forces you treat as tailwinds to harness versus systemic risks to hedge against.
On Watch
/Yann LeCun’s AMI Labs raised $1.03B to pursue JEPA-based world-model AI with persistent memory, signaling serious capital rotation toward non-LLM architectures that could undercut today’s chatbot-centric moats.
/An alleged theft of Social Security data for 500M Americans by a former DOGE employee could force a rethink of U.S. identity infrastructure and trigger unprecedented credential reissuance if fully confirmed.
/Bernie Sanders’ push for a national moratorium on new AI data centers, alongside emerging evidence that data-center air pollution is linked to lung disease and death, is an early test of how far environmental and existential-risk politics can actually constrain AI infra build-out.
Interesting
/Microsoft is set to ditch OpenAI for its AI development, indicating a shift in strategic partnerships.
/China has quietly provided AI agents to 1 billion people, largely unnoticed by the West, showcasing its expansive reach in AI technology.
/China's control over 50% of the world's AI researchers poses significant implications for Western technological dominance and innovation.
/DeepSeek has blocked Nvidia and AMD from accessing its new AI model, granting early access to Huawei instead.
/A New York bill aims to prohibit AI from answering questions in critical fields like medicine and law, reflecting growing regulatory concerns.
We processed 10,000+ comments and posts to generate this report.
AI-generated content. Verify critical information independently.
/OpenAI raised $110B and agreed to deploy its models on the U.S. Department of War's classified network.
/Anthropic closed a $30B Series G at a $380B valuation and was designated a Pentagon 'supply-chain risk' after refusing to drop AI safeguards.
/California passed a law forcing all operating systems, including Linux and SteamOS, to implement age verification and share user age brackets with apps starting in 2027.
/Amazon's AI coding agent deleted a live AWS production environment, causing a 13-hour outage and prompting a new rule requiring senior engineer approval for AI-assisted changes.
/Iranian drone strikes directly hit AWS data centers in the UAE and Bahrain, knocking cloud services offline and making U.S. tech infrastructure an explicit wartime target.
On Watch
/Yann LeCun’s AMI Labs raised $1.03B to pursue JEPA-based world-model AI with persistent memory, signaling serious capital rotation toward non-LLM architectures that could undercut today’s chatbot-centric moats.
/An alleged theft of Social Security data for 500M Americans by a former DOGE employee could force a rethink of U.S. identity infrastructure and trigger unprecedented credential reissuance if fully confirmed.
/Bernie Sanders’ push for a national moratorium on new AI data centers, alongside emerging evidence that data-center air pollution is linked to lung disease and death, is an early test of how far environmental and existential-risk politics can actually constrain AI infra build-out.
Interesting
/Microsoft is set to ditch OpenAI for its AI development, indicating a shift in strategic partnerships.
/China has quietly provided AI agents to 1 billion people, largely unnoticed by the West, showcasing its expansive reach in AI technology.
/China's control over 50% of the world's AI researchers poses significant implications for Western technological dominance and innovation.
/DeepSeek has blocked Nvidia and AMD from accessing its new AI model, granting early access to Huawei instead.
/A New York bill aims to prohibit AI from answering questions in critical fields like medicine and law, reflecting growing regulatory concerns.