Apple Anoints Gemini; Claude in Your Files; Robot Learns from World Models
Today's AI Outlook: 🌤️
Apple Chooses Gemini After AI Model Shootout
After months of internal testing, Apple has officially signed a multi-year deal to make Google's Gemini the default intelligence layer behind the next generation of Siri. Rival models from OpenAI and Anthropic were evaluated, but Gemini won the bake-off. ChatGPT stays optional, but Gemini becomes the brain.
This wasn’t a vibes-based decision. Apple tested scale, latency, multimodality, reliability, and real-world performance, with billions of devices as the bar. Gemini already powers Android assistants and Samsung Galaxy AI. Apple picked the model that already lives at planetary scale, even if it belongs to a longtime rival.
Why it matters
This is Apple publicly admitting that AI infrastructure maturity beats ideological purity. Outsourcing Siri’s intelligence to Google is a massive strategic shift, and a huge validation of Gemini’s rise. It also quietly reframes Apple’s ChatGPT partnership as a feature, not a foundation.
The Deets
- Gemini becomes the default model powering Siri’s intelligence layer
- ChatGPT remains available as an opt-in
- Apple keeps on-device AI and Private Cloud Compute for sensitive tasks
- Bloomberg previously reported Apple could be paying roughly $1B per year
- Google briefly crossed $4T in market cap after the announcement
Key takeaway
Gemini didn’t just win a contract. It won the most consequential AI platform deal yet, positioning Google as the dominant mobile intelligence provider across ecosystems.
🧩 Jargon Buster - Multimodality: An AI’s ability to understand and generate across text, images, audio and video in a single model.
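For a concrete picture, here's a schematic of a single multimodal request in Python. This is a hypothetical payload, not any vendor's actual API; the model name and file paths are placeholders.

```python
# Schematic multimodal request (hypothetical payload, not a real vendor API):
# a single model call mixes text, an image, and audio in one shot.
request = {
    "model": "example-multimodal-model",  # placeholder model name
    "inputs": [
        {"type": "text",  "content": "What happens in this clip, and what is said?"},
        {"type": "image", "path": "frame_001.png"},  # placeholder file names
        {"type": "audio", "path": "clip_001.wav"},
    ],
}
# One forward pass reasons over all modalities together, instead of chaining
# separate speech-to-text, vision, and language models.
```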
Sources: The Rundown AI, There’s An AI For That
⚡ Power Plays
Nvidia And Lilly Turn Drug Discovery Into A Compute Race

Nvidia and Eli Lilly are spending up to $1B over five years to build a permanent AI drug discovery lab in the Bay Area. The partnership pairs Nvidia’s BioNeMo platform and early Vera Rubin GPUs with Lilly’s robotic wet labs.
Why it matters
Drug discovery already takes 10–15 years and billions of dollars per approved drug. This partnership shifts the bottleneck from lab throughput to compute access and model quality. GPU allocation becomes a competitive weapon.
The Deets
- Vera Rubin accelerators promise ~5× Blackwell performance
- Robotic labs feed real-world data directly into foundation models
- Pharma R&D, manufacturing, and even commercial ops get tied to AI roadmaps
Key takeaway
Pharma just admitted the quiet part out loud: pipeline strategy now depends on compute strategy.
🧩 Jargon Buster - Foundation Model: A large, general-purpose AI trained on massive datasets and adapted to many tasks.
Sources: AI Secret, The Rundown AI
🧰 Tools & Products
Claude Cowork = Models Absorbing The Agent Layer
Anthropic launched Claude Cowork in research preview, turning Claude into a persistent, local workplace agent. It has direct file access, task queues, memory, and long-running context, all inside a macOS app.
Why it matters
The old stack assumed models at the bottom and agents on top. Cowork collapses that stack. When the model ships with native execution and permissions, standalone agent startups lose their moat fast.
The Deets
- Operates inside a designated folder on your Mac
- Integrates with tools like Notion and Asana
- Multiple tasks run asynchronously
- Initially available to Max-tier users
Key takeaway
2026 is shaping up as the year the model becomes the agent.
🧩 Jargon Buster - Agentic AI: Systems that can plan, execute tasks and use tools autonomously over time.
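As a rough illustration of that loop (a minimal sketch, not Anthropic's actual Cowork internals; the folder path and tool names are made up), an agent repeatedly asks the model for the next action, runs it inside a sandboxed folder, and feeds the result back until the task is done.

```python
# Minimal agent loop sketch (illustrative only; folder path and tools are
# hypothetical, not Anthropic's actual Cowork implementation).
from pathlib import Path

SANDBOX = Path("./cowork_folder")  # the designated folder the agent may touch

def propose_action(task: str, history: list[str]) -> dict:
    """Stand-in for a model call that decides the next tool to use."""
    if not history:
        return {"tool": "list_files", "args": {}}
    return {"tool": "done", "args": {"summary": f"Finished: {task}"}}

def run_tool(action: dict) -> str:
    """Execute only whitelisted tools, and only inside the sandbox."""
    if action["tool"] == "list_files":
        files = [p.name for p in SANDBOX.glob("*")] if SANDBOX.exists() else []
        return ", ".join(files) or "(empty)"
    return action["args"].get("summary", "")

def run_agent(task: str) -> str:
    history: list[str] = []
    for _ in range(10):         # step cap so a confused agent can't loop forever
        action = propose_action(task, history)
        result = run_tool(action)
        history.append(result)  # results become context for the next step
        if action["tool"] == "done":
            return result
    return "step limit reached"

print(run_agent("summarize the files in my project folder"))
```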
Sources: Anthropic, The Rundown AI
🚀 Research & Models
World Models: Humanoids Learn From The Internet, Not Humans
1X Technologies updated its NEO humanoid with a new World Model that learns physical tasks directly from internet-scale video, ditching teleoperation and scripted demos.
Why it matters
Human-operated training data is slow and expensive. Internet video is effectively unlimited. If this holds up in messy real homes, humanoid training economics change overnight.
The Deets
- Tasks learned via voice or text prompts
- No environment-specific programming
- Targets home and semi-structured spaces
Key takeaway
Video-trained robots may become the default path to practical humanoids.
🧩 Jargon Buster - World Model: An internal representation an AI uses to predict how actions affect the physical world.
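A toy sketch of the idea (the "learned" dynamics below are hand-coded stand-ins; nothing here reflects 1X's actual NEO system): the robot imagines the outcome of each candidate action with its world model and picks the one that lands closest to the goal.

```python
# Toy world-model planning sketch (hand-coded dynamics stand in for a model
# learned from video; this is not 1X's actual system).
def predict_next(state: float, action: float) -> float:
    """World model: predict the next state from the current state and an action."""
    return state + 0.5 * action  # pretend physics distilled from video

def plan(state: float, goal: float, candidates=(-1.0, 0.0, 1.0)) -> float:
    """Pick the action whose *imagined* outcome lands closest to the goal."""
    return min(candidates, key=lambda a: abs(predict_next(state, a) - goal))

state, goal = 0.0, 2.0
for step in range(5):
    action = plan(state, goal)
    state = predict_next(state, action)  # in reality: act, then observe the world
    print(f"step {step}: action={action:+.1f} -> state={state:.1f}")
```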
Sources: Robotics Herald
⚡ Quick Hits
- OpenAI reportedly acquired health records startup Torch for about $100M.
- Meta created Meta Compute to centralize AI infrastructure.
- JD Sports launches AI shopping via Copilot, Gemini, and ChatGPT.
- The Pentagon plans to deploy Grok alongside Google AI across its networks.
🛠️ Tools Of The Day
- MuseMail.ai – Create on-brand emails from a single prompt.
- Adfynx – Chat with your Meta ads data and get a report in 1 minute.
- KaraVideo – One hub for all major AI video models.
- CopyOwl – One-click deep research agent for any topic.
Today’s Sources: The Rundown AI, There’s An AI For That, Robotics Herald