xAI's Reboot; Pokémon No!; AI Used To Bite Dog's Cancer

Today's AI Outlook: 🌤️

Musk’s xAI Reset Comes With Much Less Patience

xAI is in the middle of a serious internal reset after Elon Musk concluded the company was not built correctly and needs to be rebuilt from the ground up.

That judgment is arriving with consequences. Two more co-founders, Zihang Dai and Guodong Zhang, are out, bringing the total number of departed original co-founders to nine out of 11. That leaves only Manuel Kroiss and Ross Nordeen from the original founding crew still alongside Musk.

This looks like a blunt-force attempt to close the gap in one of AI’s hottest battlegrounds: coding. The Rundown reports that Zhang led Grok Code and reported directly to Musk, and that Musk blamed him for Grok’s coding shortcomings before his departure. AI Secret adds that Tesla and SpaceX managers have been brought in to audit teams and review work, which gives the whole thing the vibe of a founder deciding the startup playbook is too slow and sending in operators with clipboards.

Why it matters

Coding is no longer a side quest. It is becoming the control layer for agents, robotics, automation and software infrastructure. If Grok cannot keep pace in coding, xAI risks falling behind not just in chatbot bragging rights, but in the more lucrative and strategic race to power software agents that actually do things. That urgency is even sharper if Musk is trying to steer the company toward a massive future public listing while also promising it can catch OpenAI, Anthropic, and Google.

The Deets

  • Musk said xAI “was not built right” and is being “rebuilt from the foundations up.”
  • Nine of 11 original co-founders have now left.
  • Guodong Zhang, who led Grok Code, reportedly took the blame for coding shortfalls before exiting.
  • xAI has already gone shopping for outside talent, hiring senior Cursor leaders Andrew Milich and Jason Ginsberg.
  • AI Secret says Musk sees coding as foundational to broader ambitions around agents, robots, Tesla automation systems, and large-scale AI services.

Key takeaway

xAI is acting like a company that thinks it missed a turn and is now trying to reverse at highway speed.

🧩 Jargon Buster - Frontier model: A top-tier AI model competing at the leading edge of capabilities such as reasoning, coding, and multimodal tasks.


⚔️ Power Plays

OpenClaw Gets Its First Purpose-Built Model

One of the more telling stories today is not about a flashy benchmark or a chatbot with a better personality. It is about infrastructure. AI Secret reports that Z.ai released GLM-5-Turbo, describing it as the first large language model built specifically for the OpenClaw agent framework. That matters because agent systems do not fail in glamorous ways. They fail because call No. 27 returns malformed JSON and the whole workflow faceplants into a webhook.

The broader point is that agent workloads are forcing a different kind of model design. Traditional frontier models are often optimized for conversation, coding demos, or benchmark theater. But agents need consistency across long chains of tool use, API calls, and workflow execution. In that environment, reliability starts to matter as much as raw intelligence. Sometimes more.
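That failure mode is easy to sketch. The loop below is purely illustrative; `call_model` and its canned reply are hypothetical stand-ins, not part of any real OpenClaw or GLM API. The point is that every step in a long agent chain needs validation and retry, because one unparseable reply otherwise ends the run:

```python
import json

def call_model(prompt: str) -> str:
    """Stand-in for a model API call; returns a raw string reply."""
    # Hypothetical canned reply for illustration only.
    return '{"tool": "search", "args": {"query": "GLM-5-Turbo"}}'

def run_step(prompt: str, retries: int = 2) -> dict:
    """Parse a model reply as JSON, retrying on malformed output.

    Agent frameworks chain dozens of calls like this; without
    validation, one bad reply collapses the whole workflow.
    """
    for attempt in range(retries + 1):
        raw = call_model(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            # Ask the model to correct itself on the next attempt.
            prompt = f"Return valid JSON only. Previous reply: {raw}"
    raise RuntimeError(f"step failed after {retries + 1} attempts")

step = run_step("Pick a tool for: summarize GLM-5-Turbo coverage")
print(step["tool"])  # → search
```

A model tuned for agent work is, in effect, one that makes the `except` branch fire as rarely as possible across 30 to 40 consecutive steps.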

Why it matters

If AI is shifting from chat assistant to operating layer, then models built for agents could become their own important category. That would be a meaningful turn in the market: fewer models trying to be charming polymaths, more models trying to quietly not break the plumbing.

The Deets

  • GLM-5-Turbo is positioned as the first model designed specifically for OpenClaw.
  • AI Secret says OpenClaw tasks can involve 30 to 40 model calls across tools, APIs, and pipelines.
  • In those workflows, one bad response can collapse the entire chain.
  • The model is framed less as a benchmark flex and more as an attempt to improve persistent, long-session agent reliability.

Key takeaway

The next wave of AI competition may be less about who sounds smartest in a prompt box and more about who can survive 40 consecutive tool calls without turning into soup.

🧩 Jargon Buster - Agent framework: The software layer that lets an AI model use tools, call APIs, follow steps, and complete multi-stage tasks instead of only replying in chat.


Pokémon Go Used You To Map The World For AI

For years, Pokémon Go looked like a harmlessly brilliant trade: players got a free game, Niantic got a global obsession, and entire city blocks got flooded with adults speed-walking toward virtual creatures at 11 p.m. What players apparently also built, almost by accident, was a massive data engine for spatial AI.

According to the report, Pokémon Go users helped generate more than 30 billion real-world images through the game’s AR scanning features. Those scans captured streets, landmarks, storefronts, monuments, and public spaces, often from multiple angles and under different real-world conditions. Over time, that created something far more valuable than gameplay data: a dense, geotagged visual map of the physical world.

Why it matters

Spatial AI systems need enormous amounts of real-world visual data to understand where things are, how environments change, and how to move through them reliably. That is especially important in dense urban environments, where GPS can get messy and robotic systems need more precise visual anchors.

The bigger point is that consumer apps are increasingly doubling as infrastructure for future AI systems. Players thought they were helping a game recognize a PokéStop. In practice, they may have been helping train systems that could support robots, delivery machines, and visual navigation tools. Free entertainment has a funny habit of invoicing users in data.

The Deets

  • Pokémon Go players reportedly helped collect 30B+ real-world images.
  • The scans came from AR mapping tasks inside the game.
  • Roughly 140 million players contributed visual data over time.
  • The images were geotagged and captured the same places across different:
    • weather conditions
    • lighting conditions
    • times of day
  • That dataset now reportedly supports visual positioning systems for navigation without relying fully on GPS.

Key takeaway

Pokémon Go was not just a hit mobile game. It may also have been one of the most effective large-scale data collection systems for spatial AI ever disguised as fun.

🧩 Jargon Buster - Spatial AI: AI systems that understand and navigate the physical world using visual, location, and environmental data rather than just text or code.


Claude, Nemotron, TADA Join Daily Tool Parade

Today’s tools list reads like a snapshot of where AI product design is heading: bigger context windows, more multimodal output, and less tolerance for janky production results. The Rundown highlights Nemotron 3 Super, Nvidia’s 120B reasoning model with a 1M-token context window; Claude, now able to create charts and diagrams in chat; and TADA from Hume, a text-to-speech system that syncs text and audio for what The Rundown calls no-hallucination speech.

Why it matters

The center of gravity is moving from “neat demo” to “usable system.” Better chart generation, larger context handling, stronger speech alignment and more dependable agent behavior all point in the same direction: AI products are being pushed toward work that people actually need to trust.

The Deets

  • Nemotron 3 Super: Nvidia reasoning model with 1M-token context.
  • Claude: now creates charts and diagrams directly in chat.
  • TADA: Hume’s TTS model designed to keep text and spoken output aligned.
  • Crafting for Agents also appears in The Rundown as a sponsored enterprise coding-agent tool focused on closed-loop validation.

Key takeaway

The market is rewarding tools that reduce friction, reduce breakage, and reduce the number of tabs you need open to get one thing done.

🧩 Jargon Buster - Context window: The amount of text, code, or other input an AI model can consider at one time while generating a response.


💰 Funding & Startups

Robotics Money Cannon Still Set to “On”

According to Robotics Herald, Sunday Robotics raised $165M in a Series B led by Coatue at a $1.15B valuation, while Mind Robotics, a Rivian spinout led by RJ Scaringe, raised $500M in a Series A led by Accel and Andreessen Horowitz at a $2B valuation.

Both companies are framing themselves around real deployment rather than viral demo energy. Sunday Robotics wants to scale its wheeled home robot, Memo, and build out its “data flywheel,” while Mind Robotics is targeting industrial automation with a full-stack platform of AI models, robots, and deployment infrastructure.

Why it matters

The money says investors still believe the path from AI software to embodied systems is worth betting on, especially when the pitch is tied to deployment and manufacturing rather than humanoid backflips for the timeline.

The Deets

  • Sunday Robotics: $165M Series B, $1.15B valuation.
  • Mind Robotics: $500M Series A, $2B valuation.
  • Sunday plans beta household shipments for later in 2026.
  • Mind is building industrial automation systems for manufacturing environments.

Key takeaway

Capital is still chasing robotics, but the language is shifting from spectacle to operations. Investors appear to want fewer robot teasers and more working systems in the field.

🧩 Jargon Buster - Data flywheel: A loop where product use creates more data, which improves the system, which attracts more use, which creates even more data.


🔬 Research & Models

A Dog, A Tumor And A Very 2026 Cancer Workflow

Image: Paul Conyngham / The Australian

One of the most striking stories today is also the most human. The Rundown reports that Sydney AI consultant Paul Conyngham built a custom mRNA cancer vaccine for his rescue dog Rosie after she was diagnosed with mast cell cancer in 2024 and given months to live.

The workflow sounds like something that should have required a startup, a lab team, and a grant application. Instead, it involved ChatGPT, Grok, DeepMind’s AlphaFold, 350 GB of tumor data, and help from UNSW’s RNA Institute.

Conyngham reportedly used ChatGPT to map the research, paid $3K for genomic sequencing, used AlphaFold to model Rosie's mutations, and then worked with the UNSW RNA Institute to turn the design into a custom vaccine. He said the final vaccine construct was designed by Grok. The result: one tumor shrank significantly after a December injection, while work continues on a second vaccine for non-responding tumors.

Why it matters

This is not a tidy story about AI replacing medicine. It is a story about AI lowering the barrier to navigating complex scientific workflows. That has huge implications. In the best case, it broadens access to research and personalization. In the worst case, it raises serious questions about validation, oversight, and how much experimentation can happen outside traditional systems.

The Deets

  • Rosie was diagnosed with mast cell cancer in 2024.
  • Conyngham used ChatGPT for research mapping.
  • He paid $3K for genomic sequencing.
  • AlphaFold was used to model mutations.
  • UNSW’s RNA Institute helped produce the vaccine.
  • The final vaccine construct was reportedly designed by Grok.

Key takeaway

AI is turning more people into capable scientific operators. That is powerful, hopeful, and a little terrifying, which is usually how you know the technology is getting real.

🧩 Jargon Buster - mRNA vaccine: A treatment that uses messenger RNA to instruct cells to produce a specific protein, helping the body trigger an immune response.


⚡ Quick Hits

  • ByteDance’s Seedance 2.0 global launch has reportedly been paused after copyright backlash from Hollywood.
  • Meta is reportedly considering layoffs of more than 20% of its workforce as it tries to offset massive AI spending. The Rundown pegs planned AI infrastructure spending at $600B.
  • Anthropic is offering Free, Pro, Max, and Team subscribers double Claude usage limits during off-peak hours from March 13 to March 27.
  • Sam Altman said AI could eventually be delivered like a utility, with users billed for the amount of intelligence they consume.
  • Andrej Karpathy used AI to analyze U.S. job exposure to automation and found higher-paid white-collar roles appear especially vulnerable.
  • Google and Accel’s India AI accelerator selected five startups from 4,000 applications, while steering clear of “AI wrapper” companies.
  • A Florida man reportedly used ChatGPT to help sell his home, handling pricing, marketing, scheduling, and contracts, and closed in five days while saving 3% in agent fees.
  • Musk said Tesla’s Terafab semiconductor manufacturing facility is launching in a week to build custom silicon chips.

🧰 Tools of the Day

Claude - Anthropic’s assistant now creates charts and diagrams in chat, which is exactly the kind of upgrade that sounds small until it saves you from exporting data into three other apps.

GLM-5-Turbo for OpenClaw - A notable release because it is built for agent execution, not just chatting. If OpenClaw keeps gaining traction, reliability-first models like this could become a much bigger category.

TADA - Hume’s text-to-speech model aims to keep text and audio tightly aligned, reducing the weird little mismatches that make synthetic speech sound confident and wrong at the same time.

Nemotron 3 Super - Nvidia’s 120B reasoning model with a 1M-token context window is aimed squarely at people who think “more context” is not a luxury but a basic human right.


Today’s Sources: The Rundown AI, AI Secret, Robotics Herald

Jamie Larson