AI Daily Roundup: December 7, 2025

TL;DR

  • GPTZero detected 50+ verified hallucinations in 300 ICLR 2026 conference submissions, including fabricated authors and false citations, despite some papers receiving average peer review ratings as high as 8/10.
  • Berkeley AI Research deployed 100 reinforcement learning-controlled autonomous vehicles on a highway to reduce stop-and-go traffic waves, cutting fuel consumption and improving flow using local sensor data and fast simulations.
  • PLAID, a multimodal generative model from Berkeley AI Research, generates valid protein sequences and 3D structures by leveraging the latent space of ESMFold, trained only on sequence data.
  • LinkBERT, developed by Stanford AI Lab, improves language model pretraining by incorporating document-level links, enhancing contextual understanding in models such as BERT and the GPT series.

🏢 Company Announcements

OpenAI took an ownership stake in Thrive Holdings to accelerate enterprise AI adoption, embedding frontier research and engineering directly into accounting and IT services to boost speed and accuracy.

NVIDIA announced that mixture-of-experts (MoE) architecture is now standard for frontier AI models, enabling higher intelligence and efficiency by activating only relevant model experts per token. The GB200 NVL72 system delivers a 10x performance increase for MoE models like Kimi K2 Thinking compared to HGX H200.
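
To make the routing idea concrete, here is a minimal, illustrative top-k MoE layer in PyTorch. It is a sketch of the general technique, not NVIDIA's or Kimi K2's implementation, and it omits production concerns such as load balancing, capacity limits, and expert parallelism.

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Toy MoE layer: each token is processed by only its top-k experts."""

    def __init__(self, d_model: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # gating scores per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        gates = self.router(x).softmax(dim=-1)           # (tokens, n_experts)
        weights, idx = gates.topk(self.k, dim=-1)        # keep only top-k experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

layer = TopKMoE(d_model=64)
tokens = torch.randn(16, 64)
print(layer(tokens).shape)  # torch.Size([16, 64])
```

Because only k of the n experts run per token, total parameters (and hence capacity) can grow much faster than per-token compute, which is the efficiency argument behind MoE frontier models.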

Meta introduced Zoomer, an automated debugging and optimization platform for AI workloads, which provides performance insights across training and inference at scale. Zoomer has reduced training time and improved QPS, becoming the de facto tool for AI performance optimization across Meta’s infrastructure.

Meta launched its Generative Ads Recommendation Model (GEM) across Facebook and Instagram. GEM increased ad conversions by 5% on Instagram and 3% on Facebook Feed in Q2, using architectural innovations and enhanced training infrastructure to improve ad performance across the funnel.


📰 Top Stories

iPadOS 26.2 will restore the ability to drag and drop apps from the dock into Split View and Slide Over, returning the simpler multitasking experience that was removed in iPadOS 26. The update also restores gesture-based activation of these features, which had been missing since the initial release of iPadOS 26.

GPTZero identified 50+ verified hallucinations in 300 ICLR 2026 conference submissions, including fabricated authors and false citations. These hallucinations went undetected by the 3–5 peer reviewers assigned to each paper, even though some submissions received average ratings as high as 8/10.


🔬 Research & Papers

LinkBERT, developed by Stanford AI Lab, improves language model pretraining by incorporating document-level links, enhancing contextual understanding in models such as BERT and the GPT series. The method leverages hyperlinks between documents to strengthen semantic representations during training.
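
As a rough illustration of how link-aware pretraining data might be built, the sketch below pairs an anchor segment with a contiguous, hyperlinked, or random segment and labels the relation, in the spirit of LinkBERT's document relation prediction. The helper names and three-way labeling are simplified stand-ins, not the official implementation.

```python
import random

def make_pretraining_pair(doc_segments, link_graph, doc_id, i):
    """Return (segment_a, segment_b, relation_label) for one training example."""
    seg_a = doc_segments[doc_id][i]
    choice = random.choice(["contiguous", "linked", "random"])
    if choice == "contiguous" and i + 1 < len(doc_segments[doc_id]):
        seg_b = doc_segments[doc_id][i + 1]        # next segment, same document
        label = 0                                  # contiguous
    elif choice == "linked" and link_graph.get(doc_id):
        target = random.choice(link_graph[doc_id])  # document reachable by hyperlink
        seg_b = random.choice(doc_segments[target])
        label = 1                                  # linked
    else:
        other = random.choice([d for d in doc_segments if d != doc_id])
        seg_b = random.choice(doc_segments[other])  # unrelated document
        label = 2                                  # random
    return seg_a, seg_b, label

docs = {"a": ["a0", "a1"], "b": ["b0"], "c": ["c0"]}
links = {"a": ["b"]}
print(make_pretraining_pair(docs, links, "a", 0))
```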

PLAID, a multimodal generative model from Berkeley AI Research, generates valid protein sequences and 3D structures, using only sequence data for training. It leverages the latent space of ESMFold, a successor to AlphaFold2, to generate biologically plausible protein folds.
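
At a high level the generation recipe is: sample in the learned latent space, then decode the latent to both a sequence and a structure. The toy loop below sketches that flow with a stand-in denoiser and placeholder decoders; it is not PLAID's actual model or ESMFold's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise(z, t):
    # Stand-in for a learned denoiser that would predict noise from (z, t).
    return 0.9 * z

def sample_latent(dim=128, steps=50):
    z = rng.standard_normal(dim)                      # start from pure noise
    for t in reversed(range(steps)):
        z = denoise(z, t)                             # one reverse-diffusion step
        if t > 0:
            z = z + 0.05 * rng.standard_normal(dim)   # small stochastic kick
    return z

z = sample_latent()
# A real system would decode z with learned heads, e.g.
# sequence = sequence_decoder(z); structure = structure_head(z)
print(z.shape)  # (128,)
```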

Researchers from Berkeley AI Research deployed 100 reinforcement learning-controlled autonomous vehicles into rush-hour highway traffic to smooth stop-and-go waves, reduce fuel consumption, and improve traffic flow. The AVs used local sensor data and were trained in fast, data-driven simulations to optimize energy efficiency, safety, and driving comfort.
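
As a hedged sketch of what "optimizing energy efficiency, safety, and driving comfort" could look like as an RL objective, the toy reward below rewards throughput while penalizing harsh accelerations, short following gaps, and jerky driving. The terms and weights are illustrative assumptions, not BAIR's published reward.

```python
def reward(speed, accel, gap, min_gap=5.0,
           w_energy=1.0, w_safety=10.0, w_comfort=0.5):
    """Toy per-step reward for a wave-smoothing AV controller."""
    energy_penalty = accel ** 2                   # smooth driving burns less fuel
    safety_penalty = max(0.0, min_gap - gap)      # triggers only when gap too small
    comfort_penalty = abs(accel)                  # jerky driving is uncomfortable
    return speed - (w_energy * energy_penalty
                    + w_safety * safety_penalty
                    + w_comfort * comfort_penalty)

print(reward(speed=28.0, accel=0.2, gap=12.0))    # favors steady, safe flow
```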


📧 From the Experts

Why We Think discusses how models can improve reasoning by using test-time compute, such as chain-of-thought (CoT) reasoning, which allows models to perform more computation on complex problems. The post reviews concepts like latent variable modeling, token-based thinking, and reinforcement learning to enhance reasoning accuracy.
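
One concrete recipe in this test-time-compute family is self-consistency: sample several chain-of-thought rollouts and majority-vote the final answer. The sketch below stubs out the model call to show just the voting logic; the stub and its canned answers are placeholders, not the post's code.

```python
from collections import Counter
import random

def sample_cot_answer(question: str) -> str:
    # Stand-in for one stochastic chain-of-thought rollout from a model.
    return random.choice(["42", "42", "41"])      # noisy but biased toward "42"

def self_consistency(question: str, n_samples: int = 16) -> str:
    answers = [sample_cot_answer(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]  # majority answer wins

print(self_consistency("What is 6 * 7?"))
```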

Large Transformer Model Inference Optimization outlines methods to reduce memory footprint, computation complexity, and inference latency in large transformer models. Techniques include distillation, quantization, pruning, and architectural optimizations.
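
To ground one of those techniques, here is a minimal post-training int8 weight quantization round-trip (symmetric, per-tensor). Production systems typically quantize per-channel and calibrate activations; this shows only the core idea of trading precision for a 4x smaller memory footprint.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    scale = np.abs(w).max() / 127.0               # map max |w| onto the int8 range
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print(np.abs(w - dequantize(q, s)).max())         # small reconstruction error
```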

A from-scratch tour of Bitcoin in Python describes creating a Bitcoin transaction from scratch using Python, focusing on generating a cryptographic identity using the secp256k1 elliptic curve. The guide defines curve parameters, generator point, and order, then generates a private key from a fixed byte string for reproducibility.
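
The post's opening steps translate to a few lines of Python: pin down secp256k1's published constants, then reduce a fixed byte string modulo the group order to get a reproducible private key. The seed string below is a placeholder, not the one used in the post.

```python
# secp256k1: y^2 = x^3 + 7 over the prime field F_p (standard published values)
p = 2**256 - 2**32 - 977                       # field prime
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # order of G
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,  # generator x
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)  # generator y

# Derive a private key from a fixed byte string so the walkthrough is
# reproducible. Placeholder seed; never use a fixed seed for real funds.
secret = b"a reproducible demo seed"
private_key = int.from_bytes(secret, "big") % n
assert 1 <= private_key < n
print(hex(private_key))
```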


🔗 Sources

This roundup was compiled from 13 verified items across 97+ sources.