Bubble's Brain - 2025-12-18

AI Daily Brief

Summary

Adobe sued for allegedly using pirated books to train AI models, raising copyright compliance concerns.
Amap (AutoNavi) partnered with Xiaomi, Rokid and others to launch smart wearable solutions, while Alibaba's Qwen app integrated Amap to improve travel experiences.
Tencent restructured its AI architecture to strengthen full-stack capabilities; Volcano Engine's Doubao large model usage grew dramatically.
OpenAI opened ChatGPT app submissions; MiniMax and Zhipu AI compete to become the first large-model public company.
Google released the free Gemini 3 Flash model; Meta launched the multimodal audio separation model SAM Audio.

Today’s AI News

  1. Adobe sued for allegedly using pirated books to train AI model, SlimLM caught in copyright controversy: Oregon writer Elizabeth Lyon filed a class-action lawsuit against Adobe, alleging that the SlimPajama open-source dataset used to train its SlimLM language model contains the Books3 dataset, which comprises large numbers of pirated books, and that Adobe thereby used copyrighted works by the plaintiff and others without authorization. The lawsuit highlights the copyright tracing and compliance challenges the AI industry faces when using open-source training data.

  2. Amap Open Platform partners with Xiaomi, Rokid and other major hardware makers to define a new “proactive service” standard for smart wearables: Amap Open Platform released a “smart wearable solution” that integrates its travel data, high-precision navigation engine, and “Teacher Xiao Gao” AI spatial agent to provide proactive travel reminders and safety services for smartwatches and smart glasses. Hardware partners include Xiaomi, Honor, and Rokid, positioning smart wearables as a new entry point for smart travel.

  3. Qwen App fully integrates Amap, AI empowers the “en route” travel experience: Alibaba’s Tongyi Qwen App has fully integrated Amap data, deeply fusing AI with location services. New features let users query nearby service locations and get “en route” suggestions, plan charging stops automatically for electric vehicle owners, and generate compliant routes based on traffic restriction policies, marking a leap from information queries to scenario-based decision-making.

  4. Tencent comprehensively restructures AI architecture: three core departments established, former OpenAI researcher Yao Shunyu leads large model infrastructure: Tencent announced a deep restructuring of its AI R&D system, establishing three new core departments (AI Infra, AI Data, and Data Computing Platform) to strengthen full-stack capabilities spanning computing power, data, and models, marking an upgrade to a systematic “infrastructure + engineering + scenario implementation” strategy. Former OpenAI researcher Yao Shunyu has been appointed Chief AI Scientist, leading both the AI Infra and Large Language Model departments. Meanwhile, Tencent’s Hunyuan large model is iterating rapidly and has launched the first openly accessible real-time world model, “Hunyuan World Model 1.5.”

  5. Volcano Engine reveals Doubao large model “achievements,” 417× growth marks the era of AI mass production: At the Volcano Engine FORCE conference, President Tan Dai revealed that as of December 2025, Doubao’s daily token usage has exceeded 50 trillion, representing 417× growth since its first release in May 2024. Currently, over 100 companies on the Volcano Engine platform have daily usage exceeding 1 trillion tokens, signaling that large model applications have moved from “small-scale pilots” to an “industrial mass production” stage.
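As a quick sanity check on the figures above (assuming the reported 417× compares current daily token usage to usage at Doubao’s May 2024 release), the implied launch-time baseline can be back-calculated:

```python
# Sanity check on Volcano Engine's reported Doubao figures (values from the article).
current_daily_tokens = 50e12   # reported: >50 trillion tokens/day (Dec 2025)
growth_factor = 417            # reported growth since first release (May 2024)

# Implied daily usage at launch, if the 417x multiplier is applied to this baseline:
baseline = current_daily_tokens / growth_factor
print(f"Implied May 2024 baseline: {baseline:.3g} tokens/day")  # ~1.2e11, i.e. ~120 billion
```

This back-of-the-envelope figure of roughly 120 billion tokens per day at launch is an inference from the two reported numbers, not a value stated in the source.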

  6. OpenAI officially announces: developers can submit apps to ChatGPT: OpenAI has opened ChatGPT app submissions to developers worldwide, marking ChatGPT’s evolution from a “chatbot” into a “super app platform.” After review, submitted apps will appear in the new ChatGPT app directory, where users can complete specific tasks directly through conversation. The first batch of apps is expected to launch in early 2026.

  7. Race for the first large-model public company: MiniMax and Zhipu AI both pass HKEX listing hearings: China’s two leading AI large-model companies, MiniMax and Zhipu AI, both passed Hong Kong Stock Exchange listing hearings around December 17, potentially setting a record for the fastest approval under the new “filing system” for mainland companies listing in Hong Kong. The two are competing to become the “world’s first large-model public company,” and their IPOs mark the official opening of a new capital track focused on AGI foundational models.

  8. Google plays its trump card: Gemini 3 Flash free to use, performance surpassing Pro?: Google recently released Gemini 3 Flash, its new lightweight flagship model. The model has become the default engine for Google’s AI search mode and the Gemini app, with significant improvements in speed, cost, and performance: it runs 3× faster than the previous generation at significantly lower inference cost, and it scored 78% on the SWE-Bench Verified coding benchmark, surpassing even the higher-end flagship Gemini 3 Pro. The free release aims to drive large-scale practical deployment of AI through extreme cost-effectiveness.

  9. Gemini web interface integrates Opal, enabling “vibe coding” in natural language: Google recently announced that Opal, its “vibe coding” tool, is now integrated into the Gemini web interface, allowing ordinary users without programming knowledge to build AI-driven mini-apps from natural language descriptions alone. To improve the experience, Gemini also includes an intuitive flowchart editor that lets users fine-tune application logic by dragging and dropping steps. The move aims to make Gemini the world’s easiest-to-use AI app creation platform.

  10. Meta releases SAM Audio, the world’s first unified multimodal audio separation model: Meta released SAM Audio, which it calls the world’s first unified multimodal audio separation model, supporting one-click extraction of target sounds from mixed audio using text, visual, or time-segment prompts. The technology is built on Meta’s self-developed Perception Encoder audio-visual engine, and Meta has also open-sourced evaluation benchmarks and quality assessment tools, which are expected to push audio processing into a new era of interactivity.