Bubble's Brain - 2026-01-03


AI Daily Brief

Summary

TSMC said its 2nm node will enter volume production in Q4 2025, using GAA nanosheet transistors to boost performance and cut power. KAN author Liu Ziming argued that scaling-law-driven model growth has limits and proposed a "structuralist AI" path that prioritizes structure over scale. Industry forecasts point to continual-learning breakthroughs in 2026, with fully automated programming possible around 2030.
Jiukun Investment open-sourced a code-model series that performs strongly on SWE-Bench Verified. A Tesla FSD cross-country drive achieved zero disengagements, seen as a key milestone. Geoffrey Hinton said scaling laws are not over and that the data bottleneck can be eased with AI-generated training data.

Today’s AI News

  1. TSMC said its 2nm process will enter mass production in Q4 2025, marking the start of the 2nm era. The N2 node uses GAA nanosheet transistors and delivers 10% to 15% higher performance at the same power, or 25% to 30% lower power at the same speed, compared with N3E. TSMC is ramping two new fabs in Kaohsiung and Hsinchu to serve high-end smartphone, AI, and HPC chips. Samsung began shipping GAA at 3nm in 2022, while Intel plans GAA plus backside power delivery at its 18A node, signaling a renewed semiconductor race.

  2. KAN author Liu Ziming said scaling-law-driven LLM development has fundamental limits. He argued that simply increasing compute and data is a brute-force approach that will run into constraints on energy and high-quality data. He proposed a "structuralist AI" path that prioritizes structure over scale, aiming for models that discover and exploit compressibility and internal regularities in data. Liu said abstraction and emergent structure will be the core challenges on the way to AGI. (A toy sketch of the KAN idea appears after this list.)

  3. Industry analysts expect 2026 to be a key year for continual learning, enabling models to improve without forgetting prior knowledge (one classic technique for this is sketched after the list). A team of former OpenAI researchers projected fully automated programming around 2030 and put a roughly 25% chance on a one-year jump to ASI once automation accelerates AI R&D. A Nature outlook also argues that by 2050 AI systems could drive Nobel-level science through fully automated research workflows such as "dark laboratories."

  4. Jiukun Investment open-sourced the IQuest-Coder-V1 code-model series. The 40B version reportedly scored 81.4% on SWE-Bench Verified, ahead of Claude Opus 4.5 and GPT-5.2. The series comes in 7B, 14B, and 40B sizes, emphasizes engineering practicality and long-context support, and uses a multi-stage "code flow" training strategy that learns from code evolution. The models can reportedly run on a single consumer GPU such as an RTX 3090 or 4090 (a rough capacity check follows the list).

  5. A Tesla Model 3 running FSD v14.2 completed a cross-country trip with zero disengagements. Owner David Moss drove about 4,397 kilometers from Los Angeles to South Carolina in roughly two days and 20 hours, crossing 24 states and handling city streets, highways, night driving, and complex weather. The system also managed charging stops. The trip is positioned as a milestone for Tesla’s end-to-end neural-network approach and as fulfilling a decade-old promise from Elon Musk.

  6. Geoffrey Hinton said scaling laws are not over and that the key remaining challenge is data. He argued the data bottleneck can be addressed with AI-generated training data, similar to AlphaGo-style self-play (a toy version of this verify-and-keep loop is sketched below). This contrasts with Ilya Sutskever, who has argued that scaling alone will not deliver a fundamental breakthrough and that new paradigms such as reasoning and agents are needed. Yann LeCun has also questioned pure scaling. Analysts note that current paradigms are impactful but that AGI likely requires further research breakthroughs.
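
As background for item 2, here is a minimal, illustrative KAN-style layer in PyTorch: learnable univariate functions sit on the edges instead of fixed activations at the nodes. This is not Liu's implementation; the published KAN uses B-spline bases, while this toy version uses Gaussian RBF bases to keep the code short.

```python
import torch
import torch.nn as nn

class ToyKANLayer(nn.Module):
    """One KAN-style layer: each edge (i -> j) carries its own learnable
    1-D function, here a weighted sum of fixed Gaussian RBF bases."""
    def __init__(self, in_dim, out_dim, num_basis=8, grid_range=(-2.0, 2.0)):
        super().__init__()
        self.register_buffer("centers", torch.linspace(*grid_range, num_basis))
        self.width = (grid_range[1] - grid_range[0]) / num_basis
        # One coefficient per (output, input, basis) triple.
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, num_basis))

    def forward(self, x):                      # x: (batch, in_dim)
        # Evaluate every basis function at every input coordinate.
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)
        # phi: (batch, in_dim, num_basis); sum edge functions into each output.
        return torch.einsum("bik,oik->bo", phi, self.coef)

# Compose layers: outputs are sums and compositions of learned 1-D functions.
model = nn.Sequential(ToyKANLayer(2, 5), ToyKANLayer(5, 1))
print(model(torch.randn(16, 2)).shape)         # torch.Size([16, 1])
```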
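
On item 3's "improve without forgetting" goal, one long-standing continual-learning technique is elastic weight consolidation (EWC), which penalizes changes to parameters that mattered for earlier tasks. The sketch below is a generic illustration of EWC, not a method taken from the cited forecasts.

```python
import torch

def ewc_penalty(model, anchor_params, fisher, lam=1000.0):
    """Quadratic penalty pulling weights toward their values after task A,
    weighted by each parameter's (diagonal) Fisher information estimate."""
    loss = torch.tensor(0.0)
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - anchor_params[name]) ** 2).sum()
    return 0.5 * lam * loss

# During task B, add the penalty to the ordinary loss:
#   total = task_b_loss + ewc_penalty(model, anchor_params, fisher)
# anchor_params: a copy of the weights after finishing task A;
# fisher: squared gradients of the task-A log-likelihood, averaged over data.
net = torch.nn.Linear(3, 2)
anchor = {n: p.detach().clone() for n, p in net.named_parameters()}
fish = {n: torch.ones_like(p) for n, p in net.named_parameters()}  # dummy Fisher
print(ewc_penalty(net, anchor, fish))  # zero before any task-B update: tensor(0.)
```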
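
On item 4's single-GPU claim, a quick capacity check: 40B parameters at 4-bit precision is about 40e9 × 0.5 bytes ≈ 20 GB of weights, which fits in the 24 GB of a 3090/4090 only with quantization (leaving some headroom for the KV cache). The snippet below shows the standard Hugging Face transformers + bitsandbytes 4-bit loading pattern; the model ID is hypothetical, since the article gives no repository name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "IQuest-Coder-V1-40B"  # hypothetical; not a confirmed repo name

# Store weights in 4-bit; run matmuls in bf16 for quality.
quant = BitsAndBytesConfig(load_in_4bit=True,
                           bnb_4bit_compute_dtype=torch.bfloat16)

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, quantization_config=quant, device_map="auto")

prompt = "def quicksort(arr):"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```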
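
Finally, for item 6, the sketch below illustrates the generic shape of the self-generated-data loop Hinton alludes to: a model proposes answers, an automatic verifier filters them, and only verified pairs join the training set. The toy "task" is integer addition, chosen purely so the example runs; all names here are placeholders, not a real pipeline.

```python
import random

def propose(problem):
    """Stand-in for model generation: answers, occasionally wrong."""
    a, b = problem
    return a + b + random.choice([0, 0, 0, 1])

def verify(problem, answer):
    """Stand-in for an automatic checker (unit tests, game outcome, ...)."""
    a, b = problem
    return answer == a + b

dataset = []
for _ in range(100):
    prob = (random.randint(0, 9), random.randint(0, 9))
    ans = propose(prob)
    if verify(prob, ans):              # keep only verified samples
        dataset.append((prob, ans))

# The verified pairs would then feed back into training: the AlphaGo-style
# bootstrap of generation -> verification -> learning.
print(f"kept {len(dataset)} verified pairs out of 100 generated")
```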