Cloud incumbents are the ones actually minting money from AI right now: Microsoft and Google are rapidly scaling paid AI workloads, while OpenAI's own projections show consumer subscriptions collapsing. Companies are invoking 'AI transformation' to justify large layoffs even though compute is still pricier than most workers, and regulators have started to hit surveillance-style pricing, child data practices, and in-car monetization.
The live tension is between riding hyperscaler-driven AI growth and the growing political, labor, and infrastructure blowback around how that growth is being engineered.
Key Events
/MIT found AI automation is cost-effective in only 23% of jobs, with humans cheaper in 77%.
/Microsoft's AI business grew 123% year-over-year, with over 20M paid Copilot users.
/OpenAI projects ChatGPT Plus subscriptions falling from 44M in 2025 to 9M in 2026.
/Maryland became the first U.S. state to ban surveillance pricing in grocery stores.
/GM deployed Google Gemini into about 4M vehicles, drawing privacy backlash and an expected FTC consent order over data sharing.
Report
Capital is finally diverging from AI hype. The real action this quarter is in who captures AI dollars, what the unit economics look like, and how fast regulation and infrastructure constraints bite.
Who Is Actually Making Money on AI
Microsoft's AI business grew 123% year-over-year, with over 20M paid Copilot users already in market. Accenture is rolling Copilot to 743k employees, locking a whole consulting workforce into Microsoft's AI stack.
Google reported 19% overall revenue growth in Q1 2026. Google Cloud revenue grew 63%, and Gemini enterprise token consumption rose 60%, showing rapid ramp in paid AI workloads.
By contrast, OpenAI expects ChatGPT Plus subs to drop from 44M in 2025 to 9M in 2026, undercutting the pure-subscription thesis.
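The scale of that projected decline is easier to see as revenue. A minimal sketch: the subscriber counts come from the report above, but the $20/month Plus price and flat 12-month annualization are assumptions for illustration.

```python
# Rough annualized revenue implied by the projected Plus subscriber decline.
# Subscriber counts are from the report; the $20/month price and flat
# annualization are assumptions, not OpenAI disclosures.
PRICE_PER_MONTH = 20  # USD, assumed constant

def annual_revenue_billions(subscribers_millions: float) -> float:
    """Annualized subscription revenue in billions of USD."""
    return subscribers_millions * 1e6 * PRICE_PER_MONTH * 12 / 1e9

rev_2025 = annual_revenue_billions(44)  # ~$10.6B
rev_2026 = annual_revenue_billions(9)   # ~$2.2B
print(f"2025: ${rev_2025:.2f}B, 2026: ${rev_2026:.2f}B, "
      f"drop: {1 - rev_2026 / rev_2025:.0%}")
```

Under those assumptions, the projection implies roughly an 80% drop in annualized Plus revenue, which is why it undercuts the pure-subscription thesis so sharply.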
Microsoft runs OpenAI models on its own infrastructure without profit-sharing, while OpenAI is now also on AWS, underscoring how value is concentrating with hyperscalers rather than labs.
AI Unit Economics vs. the Layoff Wave
Nvidia’s own VP of applied deep learning says AI compute currently costs more than human workers for most tasks. An MIT study quantified it: automation is economically viable in about 23% of jobs, with humans cheaper in 77%.
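The comparison behind that 23% figure reduces to a per-task break-even test. Here is a hedged sketch; the hourly rates and task times below are illustrative assumptions, not figures from the MIT study.

```python
# Break-even test: is automating a task cheaper than paying a human?
# All inputs are illustrative assumptions, not figures from the MIT study.

def automation_is_cheaper(human_hourly_cost: float,
                          compute_hourly_cost: float,
                          human_hours_per_task: float,
                          compute_hours_per_task: float) -> bool:
    """True when compute cost per task undercuts human cost per task."""
    human_cost = human_hourly_cost * human_hours_per_task
    ai_cost = compute_hourly_cost * compute_hours_per_task
    return ai_cost < human_cost

# Hypothetical task: a human needs 1h at $40/h, while the AI pipeline needs
# 2h of GPU time at $30/h once retries and verification overhead are counted.
print(automation_is_cheaper(40, 30, 1.0, 2.0))  # False: $60 compute vs $40 labor
```

The point of the sketch is that retry and verification overhead, not raw model speed, is often what pushes the compute side above the human side for the majority of tasks.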
Despite that, about 92,000 tech workers are on track to be laid off in 2026 as firms invoke AI and automation to justify cuts. Meta is eliminating around 8,000 roles while leaving 6,000 unfilled and shifting spend from payroll into capex, even as employees describe a 'dead and depressing' culture focused on shipping AI fast.
Cognizant is firing 4,000 staff while hiring 20,000 junior 'freshers' for its AI transformation, and KPMG is cutting 4% of its U.S. advisory workforce amid weak demand.
At the same time, researchers warn that LLM compute demand is environmentally and economically unsustainable with current architectures, especially as adoption accelerates.
Surveillance as a Shrinking Moat
Maryland just became the first U.S. state to ban surveillance pricing in grocery stores, directly targeting dynamic pricing based on individual shopper data.
Commentary on Hacker News and Reddit framed the practice as discriminatory and consumer-harming, not just clever revenue management.
In Europe, the EU is rushing out an age-verification app for minors, but the system was reportedly hacked soon after launch, feeding concerns that these tools are more about data collection and censorship than safety.
Meta has been formally charged for failing to keep under‑13s off Facebook and Instagram, putting youth safety and data practices under EU law rather than just PR pressure.
Against that backdrop, GM pushed Google Gemini into about 4M connected vehicles while facing a data‑sharing controversy and likely FTC consent order, and many buyers now say they will avoid GM over privacy and reliability fears.
Broader discussions around law enforcement and online age verification frame these moves as steps toward a surveillance state, not isolated glitches in implementation.
Compute, Power, and the Emerging Capacity Squeeze
Roughly half of planned U.S. data center builds have been delayed or canceled, signaling real friction in adding power and rack space just as AI demand spikes.
Nvidia has poured around $740B into AI‑focused infrastructure and is optimizing its stack for real‑time inference workloads, making GPU and power access the real choke points.
Its new RTX PRO 6000 Blackwell outperforms the H100 in token processing, and the Nemotron 3 Nano Omni 30B model claims 9.2x higher efficiency than similar models, tightening Nvidia’s grip on high‑end inference.
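Throughput claims like these matter because they translate directly into inference cost per token. A minimal sketch of that conversion; the GPU hourly prices and token rates below are hypothetical placeholders, not measured numbers for these chips.

```python
# Cost per million output tokens from GPU rental price and sustained
# throughput. Prices and token rates are hypothetical placeholders.

def cost_per_million_tokens(gpu_dollars_per_hour: float,
                            tokens_per_second: float) -> float:
    """Dollars per 1M tokens at a given hourly price and throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_dollars_per_hour / tokens_per_hour * 1e6

# Example: a $3/h GPU sustaining 1,500 tok/s vs a $2/h GPU at 500 tok/s.
fast = cost_per_million_tokens(3.0, 1500)  # ~$0.56 per 1M tokens
slow = cost_per_million_tokens(2.0, 500)   # ~$1.11 per 1M tokens
print(f"fast: ${fast:.2f}/1M tok, slow: ${slow:.2f}/1M tok")
```

Under these placeholder numbers, a chip that is 3x faster at 1.5x the price halves the cost per token, which is why per-chip efficiency gains compound into pricing power at the inference layer.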
U.S. drone spending has jumped from $225M to $55B, pointing to a step‑change in defense demand for autonomous systems. The USDA alone signed a $300M AI contract with Palantir to safeguard the food supply, adding government load on already tight infrastructure.
A counter-trend is the rise of efficient open and small models: IBM's Granite 4.1 (30B, 8B, 3B) under Apache 2.0 and Tencent's compact translation models that beat Google Translate, giving enterprises cheaper, more sovereign options for many workloads.
What This Means
Economic power in AI is consolidating with infrastructure and suite owners under tightening regulatory and capacity constraints, while many firms are cutting human labor faster than AI can justify on unit economics. The real decision is how much exposure to take on hyperscaler‑centric, surveillance‑sensitive, compute‑scarce AI bets versus slower, cheaper, more sovereign approaches that may lag in capability but carry less structural and political risk.
On Watch
/Figure AI’s shift to producing about one humanoid robot per hour, a 24x increase in capacity, is an early indicator that embodied robotics may be moving from demo to industrial scale.
/Apple shelving the cheaper Vision Air headset, disbanding the Vision Products Group, and reallocating teams to Siri and smart glasses signals a quiet retreat from heavy XR toward AI on existing devices and lighter wearables.
/China blocking U.S. capital from its AI sector and freezing new robotaxi licenses, while the U.S. weighs banning Chinese high-tech cars, points to a sharper tech and auto bifurcation that could force region-specific stacks.
Interesting
/Musk's lawsuit against OpenAI could reshape U.S. charitable-giving law and the structures available to future nonprofits.
/Google's TPU 8t shows a 170-180% gain in training cost-performance, indicating significant advancements in AI training efficiency.
/Despite export controls, some Chinese companies have trained advanced AI models using non-Nvidia hardware, indicating a shift in the competitive landscape.
/Security researchers have discovered malicious skills that can covertly turn AI agents into tools for third parties, highlighting security vulnerabilities.
/The U.S. plans to share intelligence regarding China's alleged industrial-scale AI model distillation.
We processed 10,000+ comments and posts to generate this report.
AI-generated content. Verify critical information independently.