Trained On Brains; Vacuum, Spy; Robots Run 24/7

Today's AI Outlook: 🌤️

The First Brain LLM Has Arrived

Startup Zyphra just released ZUNA, which it calls the world’s first large-scale foundation model trained specifically on EEG brain data with open weights and tooling.

The model clocks in at 380M parameters and was trained on roughly 2 million channel hours of neural recordings. Instead of treating brain data like a quirky side project, ZUNA treats neural signals as a full-blown pretraining domain.

That is a meaningful shift. Large language models learned from neatly tokenized internet text with grammar, structure and shared formatting. EEG signals are the opposite. They are noisy, device-dependent and collected under inconsistent protocols. There is no universal tokenization scheme for brainwaves, no common grammar of thought and far less data density than the web. ZUNA is trying to generalize across that chaos.
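To make the tokenization gap concrete: one common workaround in signal modeling is to slice raw multi-channel recordings into fixed-length, per-channel normalized windows that play the role tokens play for text. This is purely an illustration of the general technique, not ZUNA's documented preprocessing, and every name and parameter below is made up:

```python
import numpy as np

def eeg_windows(signal, window, stride):
    """Slice a (channels, samples) EEG array into overlapping windows.

    Each window is z-scored per channel, a rough stand-in for the
    tokenizer an LLM would use on text. Illustrative only; ZUNA's
    actual pipeline is not described in the source.
    """
    channels, samples = signal.shape
    out = []
    for start in range(0, samples - window + 1, stride):
        w = signal[:, start:start + window]
        # Normalize per channel to blunt device-dependent amplitude scales.
        w = (w - w.mean(axis=1, keepdims=True)) / (w.std(axis=1, keepdims=True) + 1e-8)
        out.append(w)
    return np.stack(out)  # (num_windows, channels, window)

# Hypothetical 4-channel recording: 1,000 samples, 250-sample windows, 50% overlap.
rng = np.random.default_rng(0)
tokens = eeg_windows(rng.standard_normal((4, 1000)), window=250, stride=125)
print(tokens.shape)  # (7, 4, 250)
```

Even this toy version shows why the domain is hard: window length, stride and normalization are all modeling choices that web text never forces on you.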

Why it matters

Language models digitized human knowledge. Brain models aim to digitize human states. If ZUNA or successors scale, it means a new interface layer between cognition and machines, one that could eventually plug directly into assistive tech, diagnostics or adaptive computing systems.

The defensibility angle is just as important. Scraping the open web is messy and legally fraught. Brain datasets are harder to assemble and standardize, which makes them both technically challenging and strategically valuable.

The Deets

  • 380M parameters
  • ~2M channel hours of EEG data
  • Open weights and tooling
  • Trained across heterogeneous protocols and hardware setups

ZUNA positions neural data as a foundation layer, not a niche experiment.

Key takeaway

We are watching the early formation of a cognition-to-compute layer. Text was the first interface. Brain signals could be the next.

🧩 Jargon Buster - EEG: Electroencephalography, a method of recording electrical activity in the brain using sensors placed on the scalp.


⚔️ Power Plays

Anthropic accused Chinese labs DeepSeek, MiniMax and Moonshot AI of running large-scale “distillation” campaigns against Claude, claiming 16M+ exchanges across 24,000 fake accounts. According to Anthropic, MiniMax alone accounted for more than 13M exchanges and pivoted to target a new Claude release within 24 hours of detection.

The allegations echo similar concerns raised by OpenAI to U.S. lawmakers weeks earlier. At the same time, usage data from OpenRouter shows MiniMax and Kimi API calls surging, with Kimi reportedly logging more usage in 20 days than in its entire prior year.

Why it matters

The fight is shifting from scraping training data to cloning model behavior via APIs. Instead of copying the internet, competitors can query frontier systems millions of times, capture step-by-step reasoning and compress the outputs into cheaper replicas.
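The mechanics are simple enough to sketch in a few lines. In this toy setup (everything here is hypothetical: a linear "teacher" stands in for a frontier model's API, and the "student" never sees ground-truth labels, only the teacher's output distributions):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# "Teacher": a fixed linear classifier standing in for a frontier model's API.
W_teacher = rng.standard_normal((5, 3))
queries = rng.standard_normal((2000, 5))        # harvested prompts
soft_labels = softmax(queries @ W_teacher)      # captured API outputs

# "Student": a replica trained only on the teacher's outputs, never on real labels.
W_student = np.zeros((5, 3))
for _ in range(500):
    probs = softmax(queries @ W_student)
    # Gradient of cross-entropy between student and teacher distributions.
    grad = queries.T @ (probs - soft_labels) / len(queries)
    W_student -= 0.5 * grad

agreement = np.mean(
    softmax(queries @ W_student).argmax(1) == soft_labels.argmax(1)
)
print(f"student matches teacher on {agreement:.0%} of queries")
```

Scale the same loop up to millions of queries against a real API and you get the extraction pattern Anthropic is describing: the student inherits the teacher's behavior without ever touching its training data.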

When API volumes spike that quickly, incumbents assume extraction rather than organic growth. Monitoring and forensics start to replace dataset audits. The moat becomes preventing your outputs from becoming someone else's inputs.
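At its crudest, that kind of monitoring is just flagging per-account volumes far above a rolling baseline. A toy z-score detector (not any provider's actual forensics; the data and threshold are invented):

```python
import statistics

def flag_spikes(daily_calls, threshold=4.0):
    """Flag days whose API call volume sits far above the trailing baseline.

    Toy z-score detector; real provider forensics would also weigh
    prompt patterns, account linkage and timing. Data is hypothetical.
    """
    flagged = []
    for i in range(7, len(daily_calls)):
        baseline = daily_calls[i - 7:i]            # trailing one-week window
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline) or 1.0  # guard a flat baseline
        if (daily_calls[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady organic usage, then a sudden extraction-scale burst on day 10.
calls = [1000, 1100, 950, 1050, 980, 1020, 1040, 990, 1010, 1005, 250000]
print(flag_spikes(calls))  # [10]
```

The hard part in practice is not the arithmetic but attribution: 24,000 accounts each staying under a per-account threshold look organic until you correlate them.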

The Deets

  • 16M+ Claude exchanges allegedly used for distillation
  • 24K fake accounts cited
  • DeepSeek reportedly prompted for step-by-step reasoning and politically sensitive rewrites
  • Anthropic calling for industry and government coordination

Key takeaway

AI competition is becoming a game of output defense. Expect tighter API controls, better anomaly detection and more geopolitical heat around model access.

🧩 Jargon Buster - Distillation: A training technique where a smaller model learns to mimic the outputs of a larger, more capable model.


Stargate Hits The Gravity Of Capital Markets

The much-hyped Stargate joint venture between OpenAI, Oracle and SoftBank has effectively stalled. The project, pitched as a multi-hundred-billion-dollar effort to build up to 10 gigawatts of AI compute capacity in the U.S., has not meaningfully staffed up or begun large-scale construction.

Negotiations over control, site ownership and financing reportedly dragged through marathon sessions in Tokyo. Lenders pushed back on risk, and OpenAI has quietly stepped back from building its own facilities for now.

Why it matters

Stargate was supposed to anchor long-term training capacity and reduce reliance on hyperscalers. Without active buildout, OpenAI remains dependent on external cloud providers.

Capital markets appear cautious about underwriting speculative AI infrastructure at extreme scale. That caution consolidates power with existing cloud giants in the near term.

The Deets

  • Target: ~10 GW of compute
  • Multi-hundred-billion-dollar ambition
  • Financing and governance disputes slowed progress
  • No active large-scale construction underway

Key takeaway

Infrastructure independence sounds good in a keynote. It is harder in a term sheet.

🧩 Jargon Buster - Gigawatt-scale compute: Massive data center capacity measured in billions of watts, typically associated with hyperscale AI training clusters.


🤖 Robots & Physical AI

Figure 03 Goes 24/7 (Update)

Figure AI CEO Brett Adcock revealed that seven Figure 03 humanoids are now operating autonomously around the clock at the company’s Sunnyvale headquarters. Powered by Helix 02, the robots self-dock, wirelessly charge at 2 kW and swap units without human oversight.

This is not a lab demo. It is continuous, unattended operation inside a real facility.

Why it matters

Running 24/7 autonomy is a different beast than executing a choreographed demo. Persistent uptime requires robust navigation, power management and failure recovery. It also signals that humanoids are inching toward practical deployment scenarios.

The Deets

  • 7 humanoids
  • 24/7 autonomous operation
  • 2 kW wireless charging
  • Self-docking and automated unit swaps

Key takeaway

Humanoids are graduating from viral clips to operational metrics.

🧩 Jargon Buster - Helix 02: Figure’s internal AI stack powering perception, planning and control for its humanoid robots.


7,000 Robot Vacuums, One Backend Flaw

A software engineer who reverse-engineered his DJI robot vacuum with AI tools reportedly accessed nearly 7,000 devices across 24 countries, exposing live camera feeds, audio, maps and location data. He disclosed the vulnerability publicly, and DJI issued patches on Feb. 8 and 10.

Why it matters

As robots become connected endpoints, their attack surface grows. A vacuum is no longer just a vacuum. It is a mobile sensor platform inside private homes.

The Deets

  • ~7,000 devices exposed
  • 24 countries affected
  • Access to video, audio and mapping data
  • Patches released within days

Key takeaway

Physical AI security is now a consumer privacy issue, not just an enterprise one.

🧩 Jargon Buster - Backend vulnerability: A flaw in server-side systems that can expose connected devices or user data if improperly secured.


🏢 Enterprise Moves

OpenAI Courts The Consulting Class

OpenAI signed multi-year deals with consulting giants McKinsey, BCG, Accenture and Capgemini under its new “Frontier Alliance” push. The goal is to help enterprises integrate AI agents into existing tech stacks.

These firms are building certified teams to deploy OpenAI systems alongside corporate workflows, with Accenture already training staff for enterprise AI rollouts.

Why it matters

The companies that AI threatens to disrupt are now being hired to implement it. The integration gap between frontier models and messy enterprise systems is real, and consultants see billable hours in bridging it.

The Deets

  • Multi-year partnerships
  • Certified AI deployment teams
  • Focus on integrating agents into corporate systems

Key takeaway

Frontier models are impressive. Plugging them into SAP is the real job.

🧩 Jargon Buster - Enterprise integration: The process of embedding new software systems into existing corporate tools, workflows and data environments.


⚡ Quick Hits

  • Amazon plans to invest $12B in AI data centers in Louisiana and cover local energy and water infrastructure costs.
  • xAI reached a Pentagon deal to deploy Grok in classified systems under an “all lawful use” standard.
  • IBM shares fell nearly 13% after Anthropic said Claude Code can automate COBOL modernization.
  • Google launched a free Gemini AI training program for 6M U.S. educators.
  • HII signed an MOU with Path Robotics to bring AI welding models into shipyards, targeting a 15% capacity boost in 2026.

🛠️ Tools Of The Day

  • Gamma: Turn a Claude-generated outline into a full slide deck with AI editing and export to PowerPoint or PDF.
  • HeyGen: Generate polished AI videos from simple prompts, no camera required.
  • Wispr Flow: AI dictation tool now available on Android.
  • MuseMail.ai: Create on-brand marketing emails from a single prompt.

Today’s Sources: AI Secret, The Rundown AI, Robotics Herald
