Chrome On Autopilot; Scale Is (Not) All You Need; "Adam" Robot?

Today's AI Outlook: ☀️

Chrome Quietly Becomes An AI Agent

After a year of flashy AI-first browsers that mostly failed to dent user habits, Google is taking a far more pragmatic path. Instead of asking people to switch browsers, it is turning Google Chrome itself into an agent. New Gemini upgrades add agentic browsing, image generation, and a persistent AI sidebar that works across tabs, apps, and tasks.

Chrome’s new Auto Browse mode can open sites in its own sandboxed tab, click through pages, compare products and complete multi-step workflows. Crucially, it pauses before sensitive actions like payments.
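For the technically curious, the control flow behind this kind of agent is simpler than it sounds. Here is a minimal, illustrative sketch in Python, not Google's implementation: a loop that observes the page, asks a model for the next action, and refuses to move past sensitive steps like payments without human confirmation. All names (auto_browse, propose_next_action, browser.snapshot, and so on) are hypothetical stand-ins.

```python
# Illustrative sketch of an agentic-browsing loop (not Chrome's actual code).
# The `model` and `browser` objects are hypothetical interfaces.

SENSITIVE_ACTIONS = {"submit_payment", "enter_credentials", "delete_account"}

def auto_browse(goal: str, model, browser, max_steps: int = 20) -> None:
    """Run a goal-driven browsing loop, pausing before high-risk actions."""
    for _ in range(max_steps):
        page_state = browser.snapshot()   # URL, visible text, clickable elements
        action = model.propose_next_action(goal, page_state)
        # e.g. {"type": "click", "target": "Add to cart"}

        if action["type"] == "done":
            print("Goal reached:", action.get("summary", ""))
            return

        if action["type"] in SENSITIVE_ACTIONS:
            # Hand control back to the user before anything irreversible.
            approved = input(f"About to {action['type']}. Continue? [y/N] ").lower() == "y"
            if not approved:
                print("Paused by user.")
                return

        browser.execute(action)           # click, type, navigate, etc.

    print("Stopped after reaching the step limit.")
```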

Gemini now lives permanently in the sidebar, letting users ask questions about whatever is on-screen, pull context from Gmail or Calendar, and synthesize information across open tabs. Google is also layering in browser-native image generation and what it calls Personal Intelligence, a more personalized answer system tied into your Google ecosystem.

Why it matters

AI browsers have been plentiful. Adoption has not. Chrome’s dominance gives Google a distribution advantage no startup can replicate. Rather than compete on novelty, Google is normalizing agentic behavior inside the most familiar piece of software on the internet. This is how agentic AI actually spreads: invisibly, by default, and without asking permission.

The Deets

  • Auto Browse operates in a separate tab and halts before high-risk actions
  • Gemini’s sidebar persists across sessions and tabs
  • Built-in image generation removes the need for external tools
  • Deep integration with Gmail, Calendar, and other Google apps

Key takeaway

The browser wars may be over. The agent wars just moved inside Chrome.

🧩 Jargon Buster - Agentic browsing: When an AI system doesn’t just suggest actions but directly carries them out across websites, step by step.


⚡ Power Plays

DeepMind Opens The Genetic Black Box With AlphaGenome

Google DeepMind has released the full research paper and model weights for AlphaGenome, an AI system designed to understand what mutations in DNA actually do. The model can scan up to one million letters of genetic code at once and predict how a single mutation affects 11 biological processes, even when the mutation sits far away from the impacted gene.

Originally unveiled last summer, AlphaGenome is now fully open to researchers via published weights, an API, and a peer-reviewed paper in Nature. In testing, it identified leukemia-linked mutations that took human researchers years to connect.
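Conceptually, variant-effect prediction with a sequence model comes down to comparing the model's outputs on the reference DNA sequence against the same sequence with the mutation applied. The sketch below illustrates that general idea with a hypothetical predictor; it is not the AlphaGenome API, and `GenomeModel.predict`, `apply_variant`, and the track names are assumptions for illustration only.

```python
# Conceptual sketch of variant-effect scoring with a sequence-to-function model.
# The model object and its `predict` method are hypothetical stand-ins.

def apply_variant(sequence: str, position: int, alt_base: str) -> str:
    """Return the sequence with a single-base substitution at `position`."""
    return sequence[:position] + alt_base + sequence[position + 1:]

def variant_effect(model, sequence: str, position: int, alt_base: str) -> dict:
    """Score a mutation as the change in predicted activity across output tracks."""
    ref_tracks = model.predict(sequence)          # e.g. {"gene_expression": 0.82, "splicing": 0.10, ...}
    alt_tracks = model.predict(apply_variant(sequence, position, alt_base))
    return {name: alt_tracks[name] - ref_tracks[name] for name in ref_tracks}

# A large shift in any track flags the variant for closer inspection,
# even when the affected gene lies far from the mutation site.
```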

Why it matters

After AlphaFold cracked protein structure, genetics became the next frontier. Roughly 98% of human DNA does not code for proteins and remains poorly understood. AlphaGenome does not solve that problem, but it gives scientists a working map and a powerful shortcut.

The Deets

  • Trained on massive genomic datasets
  • Predicts long-range mutation effects
  • Weights and API are freely available for research

Key takeaway

Open models plus biology may be the fastest path from AI hype to human impact.

🧩 Jargon Buster - Non-coding DNA: Parts of the genome that don’t make proteins but still influence how genes turn on and off.


🧠 Research & Models

AI Labs Bet Against “Just Scale It”

Two new AI labs are attracting serious money by rejecting the idea that bigger models trained on more data automatically lead to AGI. Flapping Airplanes raised $180M at a $1.5B valuation, while former OpenAI researcher Jerry Tworek is reportedly seeking up to $1B for his new venture, Core Automation.

Flapping Airplanes wants to train human-level intelligence without “ingesting half the internet,” leaning on tighter datasets and novel learning techniques. Tworek’s plan centers on continual learning, building a single system called Ceres that adapts from real-world experience instead of static training runs.
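To make the contrast concrete, here is a toy Python sketch of the difference between a frozen model and one that keeps learning after deployment. It is purely illustrative and makes no claims about how Ceres actually works.

```python
# Toy contrast between a frozen model and a continually updating one.
# Nothing here reflects Ceres internals; it only illustrates the concept.

class FrozenModel:
    """Knowledge fixed at training time; deployment never changes the weights."""
    def __init__(self, weights):
        self.weights = weights

    def predict(self, x):
        return self.weights * x

class ContinualModel:
    """Keeps adjusting from each real-world interaction after deployment."""
    def __init__(self, weights, learning_rate=0.01):
        self.weights = weights
        self.learning_rate = learning_rate

    def predict(self, x):
        return self.weights * x

    def update(self, x, observed):
        # Simple online gradient step on squared error from one new experience.
        error = self.predict(x) - observed
        self.weights -= self.learning_rate * error * x

model = ContinualModel(weights=0.5)
for x, observed in [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]:   # a stream of experience
    model.update(x, observed)   # the weights drift toward the true relationship over time
```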

Why it matters

As frontier labs burn through more than $200B chasing scale, investors are funding contrarian bets that intelligence might come from structure, memory, and interaction with the world, not just bigger GPUs.

The Deets

  • Flapping Airplanes lists Andrej Karpathy and Jeff Dean as advisors
  • Core Automation targets factory automation first, then broader autonomy
  • Both reject one-shot training as the endgame

Key takeaway

The smartest money is hedging against the idea that scale alone wins.

🧩 Jargon Buster - Continual learning: An AI system’s ability to keep learning after deployment instead of freezing its knowledge at training time.


🤖 Robotics & Embodied AI

Helix 02 Looks Like The First True Embodied Foundation Model

Figure AI has released Helix 02, a single embodied model that controls perception, language, balance, and motion across an entire humanoid body. In one widely shared clip, the robot shuts an oven door and instinctively adds a small foot kick to stabilize itself. That adjustment was not preprogrammed. It appears to be learned, situational, and governed by the same unified control policy as everything else.

This is not a stack of systems glued together. Helix 02 drives dozens of joints, processes vision input, and runs its control timing inside a single model. The robot is not executing a task script. It is behaving.
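A rough way to picture "one model for the whole body": a single policy network takes camera features, joint readings, and an instruction embedding, and emits a command for every joint in one forward pass. The schematic below is an assumption-laden sketch (made-up dimensions and joint count, plain PyTorch), not Figure's Helix architecture; stability moves like the foot kick would emerge from how such a policy is trained, not from a separate stabilizer module.

```python
# Schematic of a unified embodied policy: one network maps all observations
# to all joint commands. Purely illustrative; not Figure's Helix architecture.
import torch
import torch.nn as nn

class UnifiedPolicy(nn.Module):
    def __init__(self, vision_dim=512, proprio_dim=64, text_dim=128, num_joints=40):
        super().__init__()
        # One trunk consumes every modality; there is no separate planner,
        # balance controller, or exception-handling module.
        self.trunk = nn.Sequential(
            nn.Linear(vision_dim + proprio_dim + text_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, 1024),
            nn.ReLU(),
        )
        self.joint_head = nn.Linear(1024, num_joints)   # one target per joint

    def forward(self, vision_feat, proprio, instruction_feat):
        obs = torch.cat([vision_feat, proprio, instruction_feat], dim=-1)
        return self.joint_head(self.trunk(obs))

policy = UnifiedPolicy()
joints = policy(torch.randn(1, 512), torch.randn(1, 64), torch.randn(1, 128))
print(joints.shape)   # torch.Size([1, 40]): one forward pass commands the whole body
```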

Why it matters

Humanoid robots have historically relied on layered architectures with planners, controllers, heuristics, and exception handling stitched together. Helix 02 collapses that entire stack. Training shifts from tuning components to shaping behavior. Testing shifts from unit validation to observing how the system holds itself together under real-world uncertainty.

The Deets

  • One model governs balance, movement, vision and intent
  • Stability adjustments emerge from learning, not rules
  • Behavior is validated holistically, not module by module

Key takeaway

This looks like the moment embodied intelligence becomes a general model, not a collection of tricks. Expect faster capability gains and new classes of failure as balance and intent are learned together.

🧩 Jargon Buster - Embodied foundation model: A single AI model that jointly controls perception, reasoning, and physical action across an entire robot body.


🛠️ Tools & Products

Moltbot Shows How Fast Agents Learn

The Rundown published a hands-on guide for Moltbot, also known as the wildly popular Clawdbot, an experimental open-source autonomous agent. Within three days, the bot reportedly learned to generate images, respond to emails, message on Telegram, and deploy websites on its own.

Installation requires an API key, a clean machine, and a willingness to experiment. The team strongly warns against running Moltbot on a primary computer due to security risks.

Why it matters

This is a raw look at agentic AI without guardrails. It shows both how quickly autonomy compounds and why safety remains the bottleneck.

The Deets

  • Runs locally via terminal install
  • Supports image generation and messaging
  • Connects to external services like Telegram

⚡ Quick Hits

  • Anthropic is reportedly raising $20B at a $350B valuation, after demand hit six times its initial target
  • Google added Agentic Vision to Gemini 3 Flash, boosting visual task accuracy by up to 10%
  • Mistral upgraded its vibe coding agent with subagents and workflow skills
  • China approved purchases of more than 400,000 Nvidia H200s, easing an AI chip bottleneck

🧰 Tools of the Day

  • Agent Composer: Enterprise-grade agent builder designed for high-stakes engineering workflows
  • DeepSeek OCR 2: State-of-the-art document reading and text extraction
  • SERA: AI2’s new open-source family of coding agents
  • Z-Image: Alibaba Tongyi’s full open-source image generation model

Today’s Sources: The Rundown AI, There’s An AI For That, AI Breakfast, Robotics Herald
