TL;DR
Power in AI is concentrating into a few chokepoints: Nvidia for hardware, a small set of shifting cloud/model alliances, and defense-linked platforms like Palantir.
At the same time, regulators and customers are starting to punish fake compliance and low-quality 'AI slop,' so the real risk is getting trapped on the wrong side of those bottlenecks.
Key Events
Report
AI is consolidating around a few infrastructure chokepoints: Nvidia on the hardware side and a shifting OpenAI–Amazon–Microsoft triangle on the cloud and model side.
At the same time, defense, compliance, and 'AI slop' backlash are starting to decide which deployments actually scale and which stall out.
Nvidia is set to sell 1 million AI chips to Amazon by 2027 as part of a cloud deal and is projecting $1 trillion in AI chip demand by the same date, implying a long hardware capex super‑cycle.
It is moving beyond GPUs with its 88‑core Vera CPU for agentic AI and the Rubin accelerator packing 336 billion transistors and 288 GB of HBM4, while Micron has begun high‑volume HBM4 production with 2.3x bandwidth and 20% better power efficiency for this stack.
The arrest of Supermicro’s co‑founder for allegedly smuggling $2.5 billion of Nvidia GPUs to China, and the subsequent stock hit, showed how aggressively the US will police access to this hardware.
On the fab side, Belgium’s imec has secured ASML’s rare EXE:5200 High‑NA EUV tool for sub‑2nm chips while others openly discuss creating ASML and TSMC competitors to escape these chokepoints.
Jensen Huang is simultaneously selling a vision of 'physical AI'—robots and embodied agents—as a $50 trillion market with 100 AI workers per human by 2036, tying Nvidia’s roadmap directly to real‑world automation.
The Pentagon is making Palantir’s Maven AI a formal program of record and plans to embed Palantir AI as a core system across US military operations, including target identification and strike planning.
The US Army separately awarded Anduril a contract worth up to $20 billion, while the Pentagon is planning for AI companies to train on classified data, signalling that frontier‑model vendors will be pulled directly into the national‑security stack.
Domestically, DHS is contracting AI firms to surveil Americans and the FBI is buying commercial location data, blurring the line between ad‑tech and intelligence collection.
Congress and the Pentagon have already labelled Anthropic a 'supply chain risk' over its staffing and access policies even as retired judges and Microsoft‑aligned military brass file briefs in its support, which shows how politicised AI vendor selection has become.
xAI’s access to classified networks is under Senate scrutiny after Grok produced explicit images of minors, adding reputational and legal risk on top of technical performance.
OpenAI has signed a reported $50 billion deal to run on Amazon while Microsoft considers suing over an alleged breach of its exclusive cloud rights, turning its two largest partners into litigants.
OpenAI plans to double headcount to 8,000 by 2026, pivot harder toward enterprise coding tools and business users, and even introduce ads for ChatGPT’s free and Go tiers, suggesting pressure to turn usage into cash quickly.
Google, meanwhile, is testing a dedicated Gemini app for Mac, leaning into national‑security contracts, and using AI to replace headlines and summarise search results, moves that have already knocked Figma’s stock down ~11–12% and angered both users and publishers.
Meta is shutting down Horizon Worlds after spending around $80–85 billion on the metaverse and is now looking at licensing Google’s Gemini after its own AI trials disappointed, underlining how even giants are retreating from owning the full stack.
Microsoft is simultaneously backing away from putting Copilot everywhere in Windows 11 after user backlash, even as Outlook, Exchange Online, and 365 outages remind enterprises that these AI‑first platforms are still brittle.
YC‑backed Delve raised $32 million selling automated SOC 2 and ISO reports only for a leaked spreadsheet to show it was templating auditor conclusions before reviewing evidence and relying on overseas certification mills, affecting at least 494 companies.
This lands just as the EU AI Act forces high‑risk AI systems into real compliance by August 2026 and as insiders describe much of current practice as 'compliance theater' that prioritises optics over genuine security.
Courts are also starting to treat algorithms like regulated products: a federal judge has allowed legal challenges to Workday’s AI hiring tools over alleged age discrimination, while Arizona has filed criminal gambling charges against Kalshi for election‑outcome markets.
In the enterprise, roughly 80% of AI projects are reportedly failing due to poor integration and weak stakeholder engagement, 55% of companies that replaced staff with AI agents now regret it, and 'AI‑first engineering' initiatives are hitting performance and cost walls that require senior developers to unwind.
On the consumer side, YouTube is asking viewers to flag 'AI slop,' gamers are savaging Nvidia’s DLSS 5 for uglifying art, and polls show Americans increasingly see AI as a driver of wealth inequality, all of which makes low‑quality AI deployments a brand and political liability, not just a UX issue.
What This Means
AI is consolidating into a few politicised chokepoints—compute, cloud alliances, and defense‑aligned vendors—so every big AI bet now doubles as a geopolitical and regulatory position.
On Watch
Interesting
We processed 10,000+ comments and posts to generate this report.
AI-generated content. Verify critical information independently.