AI Daily Roundup: December 10, 2025
Google is now selling its TPUv7 chips directly to external customers for the first time, including a landmark deal with Anthropic for up to 1 million units. NVIDIA has released the GB200 NVL72 system, which enables mixture-of-experts AI models to run 10x faster using 72 Blackwell GPUs. Mistral launched Devstral 2, a 123-billion-parameter coding model, and made a smaller variant available for free. This roundup covers the key developments in AI infrastructure, enterprise deployment, and model releases shaping the industry.
TL;DR
- Google is selling TPUv7 chips directly to external customers, including a deal with Anthropic for up to 1 million chips.
- NVIDIA’s GB200 NVL72 system enables mixture-of-experts AI models to run 10x faster using 72 Blackwell GPUs and a unified NVLink interconnect.
- Mistral launched Devstral 2, a 123-billion parameter coding model, and Devstral Small 2 at 24B parameters, both available for free via API and Hugging Face for a limited time.
- Cursor, the AI coding assistant, reached $1 billion in annualized revenue in November 2025 and raised $2.3 billion at a $29.3 billion valuation.
📰 Top Stories
Google is testing AI-powered article overviews on select publications’ Google News pages as part of a pilot program. Participating publishers include Der Spiegel, El País, The Guardian, and The Washington Post. Google is providing direct payments to offset potential traffic declines.
Google’s TPUv7, used to train frontier models like Gemini 3 and Claude 4.5 Opus, is now sold directly to external customers, including a landmark deal with Anthropic for up to 1 million chips. The chip is optimized for large-scale AI training and supports PyTorch.
NVIDIA’s GB200 NVL72 rack-scale system enables mixture-of-experts (MoE) AI models to run 10x faster by leveraging 72 Blackwell GPUs, high-bandwidth memory, and a unified NVLink interconnect. The system reduces memory pressure and accelerates expert communication across GPUs.
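The MoE design the NVL72 targets routes each token through only a few of the model's experts, which is why expert-to-expert communication dominates at scale. A minimal toy sketch of top-k MoE routing in NumPy (all shapes, the top-k of 2, and the dense per-expert matmuls are illustrative assumptions, not NVIDIA's or any production implementation):

```python
# Toy sketch of mixture-of-experts (MoE) top-k routing.
# Illustrative only: shapes and top_k are assumptions, not a real model config.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d_model, n_experts, top_k = 8, 16, 4, 2

# Router scores every expert per token; only the top_k experts actually run.
router_w = rng.normal(size=(d_model, n_experts))
expert_w = rng.normal(size=(n_experts, d_model, d_model))

tokens = rng.normal(size=(n_tokens, d_model))
logits = tokens @ router_w                        # (n_tokens, n_experts)
top = np.argsort(logits, axis=1)[:, -top_k:]      # chosen experts per token

# Softmax over only the selected experts' logits -> mixing weights.
sel = np.take_along_axis(logits, top, axis=1)
gates = np.exp(sel - sel.max(axis=1, keepdims=True))
gates /= gates.sum(axis=1, keepdims=True)

out = np.zeros_like(tokens)
for t in range(n_tokens):
    for j, e in enumerate(top[t]):
        # Only top_k of n_experts matmuls run per token: this sparsity is why
        # MoE activates a fraction of total parameters on each forward pass.
        out[t] += gates[t, j] * (tokens[t] @ expert_w[e])

print(out.shape)  # (8, 16)
```

When experts are sharded across GPUs, each routed token implies cross-GPU traffic, which is the communication pattern a single large NVLink domain is meant to speed up.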
Mistral launched Devstral 2, a 123-billion parameter coding model, and Devstral Small 2 at 24B parameters, both available for free via API and Hugging Face for a limited time. The release includes Vibe CLI, a terminal-native agent.
Cursor, the AI coding assistant, reached $1 billion in annualized revenue in November 2025 and raised $2.3 billion at a $29.3 billion valuation. The company is expanding features, including in-house LLMs, cost-management tools, and agentic functions.
OpenAI hired Denise Dresser, CEO of Slack since 2023, as its chief revenue officer. She will oversee OpenAI’s global revenue strategy and help businesses integrate AI into operations.
🏢 Company Announcements
NVIDIA’s GPU platforms are driving a shift from CPU-based to GPU-accelerated computing across AI, science, and supercomputing. The company’s CUDA-X and integrated software libraries deliver performance and energy efficiency gains.
NVIDIA also detailed how the GB200 NVL72 resolves scaling bottlenecks in MoE models: keeping all 72 Blackwell GPUs in a single NVLink domain with high-bandwidth memory reduces memory pressure and accelerates expert-to-expert communication.
Meta has introduced Zoomer, an automated debugging and optimization platform for AI workloads, which delivers performance insights across training and inference tasks. Zoomer operates at scale across Meta’s GPU infrastructure.
Meta’s Generative Ads Recommendation Model (GEM) has been launched on Facebook and Instagram, resulting in a 5% increase in ad conversions on Instagram and a 3% increase on Facebook Feed in Q2. Improvements to GEM’s architecture in Q3 doubled the performance benefit from adding data and compute.
NVIDIA and Mistral AI have announced the Mistral 3 family of open-source multilingual and multimodal models, including the Mistral Large 3 MoE model with 41B active parameters out of 675B total, meaning roughly 6% of the weights are active per token. The models are optimized for NVIDIA platforms and available across cloud, data center, and edge devices.
📈 This Week in AI
- Google’s TPUv7 is now available for direct sale to external customers, marking a shift in AI hardware access.
- NVIDIA’s GB200 NVL72 system enables 10x faster inference for MoE models, accelerating deployment of large-scale AI.
- Mistral’s Devstral 2 offers a high-performance coding model, with an open-source, laptop-friendly lightweight variant.
- Cursor’s $2.3B funding and $1B revenue milestone signal strong enterprise adoption of AI coding tools.
🚀 What Shipped This Week
- Devstral 2 – 123B parameter coding model with Vibe CLI.
- Devstral Small 2 – 24B parameter version, free via API and Hugging Face.
- TPUv7 for external customers – Available through direct sales, including to Anthropic.
- GB200 NVL72 – Rack-scale system for MoE model deployment.
📚 Further Reading
- How Google’s TPUs are reshaping the economics of large-scale AI
- NVIDIA’s mixture-of-experts scaling breakthrough
- Mistral’s open model strategy
- Meta’s Zoomer platform for AI optimization
🔗 Sources
- How Google’s TPUs are reshaping the economics of large-scale AI | VentureBeat
- 3 Ways NVIDIA Is Powering the Industrial Revolution
- Google is testing AI-powered article overviews on select publications’ Google News pages | TechCrunch
- Mistral launches powerful Devstral 2 coding model including open source, laptop-friendly version | VentureBeat
- Why Cursor’s CEO believes OpenAI, Anthropic competition won’t crush his startup | TechCrunch
- OpenAI hires Slack’s CEO as its chief revenue officer
- Mixture of Experts Powers the Most Intelligent Frontier AI Models, Runs 10x Faster on NVIDIA Blackwell NVL72
- Zoomer: Powering AI Performance at Meta’s Scale Through Intelligent Debugging and Optimization
- Meta’s Generative Ads Model (GEM): The Central Brain Accelerating Ads Recommendation AI Innovation
- NVIDIA Partners With Mistral AI to Accelerate New Family of Open Models