AI is turning into a capital‑intensive infrastructure and statecraft game: governments are picking winners, hyperscalers are hoarding chips, and labs are selling guaranteed yields for model access.
At the same time, cheap Chinese models, mixed developer productivity, and shaky agent security are eroding the mystique of generic AI capability and shifting the real leverage to where you sit in the stack.
Key Events
/OpenAI is offering private‑equity firms a guaranteed minimum return of 17.5% for early access to unreleased AI models.
/Tesla and SpaceX disclosed a combined compute demand of 1 terawatt and are building chip fabs in Austin under the $20B Terafab project.
/AWS plans to purchase 1 million Nvidia GPUs to expand its AI infrastructure.
/The FCC moved to ban imports of new foreign‑made consumer routers into the U.S., citing national security risks.
/The Pentagon decided to adopt Palantir AI as a core system for U.S. military operations.
Report
OpenAI is now selling a 17.5% guaranteed return to private‑equity firms in exchange for early access to unreleased models. At the same time, Tesla and SpaceX say their combined compute demand will hit 1 terawatt and are building chip fabs in Austin to feed it.
The compute land grab
Nvidia is redesigning its next‑gen Feynman chips because TSMC cannot supply enough A16 capacity, signalling that the bottleneck is the fab, not demand.
AWS plans to buy 1 million Nvidia GPUs to bulk up its AI cloud, effectively pre‑empting capacity for others. Tesla calls its Terafab chip project in Austin "civilisation‑level big", and its cars alone may soon need up to 300 GB of RAM each.
Intel is ramping a GPU‑capable fab in Arizona and has Panther Lake running in Dell XPS machines, pitching U.S. chip sovereignty as a selling point.
Meanwhile, Amazon is pushing its Trainium accelerators, which labs like Anthropic and OpenAI are already adopting, as a way to cut reliance on Nvidia.
Defense as an AI kingmaker
The Pentagon has decided to make Palantir AI a core system for U.S. military operations, effectively elevating it to defense operating system status.
Palantir has also secured access to sensitive data from the UK's Financial Conduct Authority, extending its reach into financial regulation. Critics note Palantir often orchestrates third‑party models rather than building its own, yet still becomes the gatekeeper for state data and workflows.
In parallel, Anthropic is suing the Pentagon over a supply‑chain risk label that restricts its models in surveillance and warfare, a move Senator Elizabeth Warren has called retaliatory.
Google is running Gemini agents on the dark web for threat intelligence, and the EU's NIS2 regime is hardening cybersecurity requirements, together defining a distinct, heavily regulated AI‑for‑security market.
China's low‑cost AI + EV stack
Xiaomi's MiMo‑V2‑Flash leads open‑source benchmarks like SWE‑Bench while charging $0.10 per million input tokens, undercutting Western rivals on both quality and price.
Its MiMo‑V2‑Pro model offers a 1‑million‑token context window at $1 per million input tokens and $3 per million output tokens, yet is reported to approach top‑tier proprietary models.
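To make those prices concrete, here is a back‑of‑envelope cost sketch using the per‑million‑token rates reported above; the request sizes are illustrative assumptions, not figures from the report.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price: float = 1.0, out_price: float = 3.0) -> float:
    """Dollar cost of one request at per-million-token prices
    (defaults: the reported MiMo-V2-Pro rates of $1 in / $3 out)."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# A hypothetical 100k-token context with a 2k-token answer:
print(round(request_cost(100_000, 2_000), 4))  # 0.106
```

At these rates, even near‑context‑window requests cost pennies, which is the substance of the undercutting claim.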
Chinese open‑source models like Qwen 3.5 are also topping local‑model benchmarks, and U.S. advisers now warn that China's open‑source dominance threatens American AI leadership.
New Energy Vehicles already look set to exceed 50% of new passenger car sales in China by late 2025, while U.S. consumers complain about the lack of comparable affordable options.
The AI talent and productivity contradiction
Software development job postings are up about 15% since mid‑2025 even as headlines predict AI will kill dev jobs. At the same time, 93% of developers now use AI tools in their work.
One study found that experienced developers were 19% slower when using these tools. Anthropic reported that developers using AI scored 17% lower on code comprehension tests.
Teams report spending around a quarter of their time fixing AI‑generated code, a measurable velocity tax. Yet Anthropic estimates that 75% of programming tasks are already handled by AI, and some hiring processes now reject candidates who do not use LLMs in their take‑home assignments.
Meanwhile, Meta and Amazon have each cut roughly 16,000 roles citing AI efficiencies, while Salesforce's Marc Benioff has frozen new engineering hires on the expectation that AI coding agents will fill the gap.
Agents, slop, and the trust gap
An AI‑generated TikTok persona picked up more than 3 million followers in just over a week, showing how fast synthetic content can capture attention.
In music, one fraudster used about 1,000 bots streaming AI‑generated songs to game royalty systems. The scheme netted around $8 million before being caught, turning AI content plus bots into a pure financial attack.
Even Nvidia's CEO is publicly distancing himself from what he calls AI slop, acknowledging the backlash against low‑quality synthetic content.
Under the hood, agents are becoming powerful: Andrej Karpathy's research agent ran 700 experiments in a couple of days. AI agents now build more dashboard components than humans in some systems, flipping who actually writes the interface layer.
But a scan of 15,923 MCP servers found widespread problems with their security posture. In that dataset, 36% of servers received a failing grade.
The same scan identified 757 servers leaking API tokens. It also confirmed 42 AI skills as malicious, making clear how brittle the current agent ecosystem is.
On top of that, 98% of MCP tool descriptions fail to give agents adequate guidance, so many of these systems are effectively piloting with blurred instruments.
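As a rough illustration of what "inadequate guidance" means in practice, the sketch below contrasts a vague MCP‑style tool definition with an explicit one; the tool names, descriptions, and the adequacy heuristic are all hypothetical, not drawn from the cited scan.

```python
# Hypothetical MCP-style tool definitions (name / description / inputSchema
# are the fields real MCP tools expose; the contents here are invented).

vague_tool = {
    "name": "query",
    "description": "Runs a query.",  # no inputs, limits, or failure modes
    "inputSchema": {"type": "object"},
}

explicit_tool = {
    "name": "query_orders",
    "description": (
        "Read-only SQL SELECT against the orders database. "
        "Accepts one statement up to 4 KB; returns at most 500 rows "
        "as JSON; errors on any non-SELECT input."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {"sql": {"type": "string", "maxLength": 4096}},
        "required": ["sql"],
    },
}

def has_adequate_guidance(tool: dict) -> bool:
    """Crude heuristic: a usable description is non-trivial in length
    and the schema actually declares its input properties."""
    desc = tool.get("description", "")
    schema = tool.get("inputSchema", {})
    return len(desc) >= 60 and bool(schema.get("properties"))

print(has_adequate_guidance(vague_tool))     # False
print(has_adequate_guidance(explicit_tool))  # True
```

An agent reading the first definition has no idea what inputs are legal or what side effects to expect; the second states both, which is the kind of guidance most scanned servers reportedly lack.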
What This Means
The pattern across these stories is that AI is hardening into a capital‑hungry, state‑inflected infrastructure layer at the top, while cheap Chinese models and brittle agent ecosystems commoditize generic capability and expose real trust gaps. The live decision space is less about buying another model and more about where you want to sit on that stack—owning scarce compute and regulated rails, or riding on others' platforms with their politics and failure modes baked in.
On Watch
/Over a dozen chatbot harm cases have been consolidated in California courts, setting up a test of how much liability AI vendors carry for downstream misuse and hallucinations.
/Apple is rolling out ads in Apple Maps just as a California bill targeting Big Tech self‑preferencing gains steam, putting Apple's premium brand and platform control on a collision course with regulators.
/Anthropic's models are reportedly slower on Amazon Bedrock than on competing platforms even as Amazon promotes Trainium and Bedrock as core AI infrastructure, a performance gap that could reshape which clouds win AI‑native workloads.
Interesting
/The competitive landscape in AI is intensifying, with Google and xAI emerging as dominant players, overshadowing competitors like OpenAI.
/Economic pressures are significant for AI companies, with predictions of potential commercial failures due to harsh market conditions.
/The ban on foreign-made routers is part of a larger regulatory effort by the FCC to address national security concerns, reflecting a growing trend in tech regulation.
/Concerns about data privacy are driving companies to migrate their tech stacks from the US to the EU, seeking a more favorable regulatory environment.
/The Ranking Engineer Agent (REA) automates experimentation for Meta's ads ranking, modifying ranking functions and running A/B tests.
We processed 10,000+ comments and posts to generate this report.
AI-generated content. Verify critical information independently.