Sora Grounded; New Interface To AGI; McDonald's Robot
Today's AI Outlook: 🌤️
OpenAI Cuts Sora Loose, Focuses On Spud
OpenAI has officially moved to wind down Sora across its video products, including the app and API, in a sharp signal that the company is narrowing its focus. Internally, the product had reportedly become a resource drain, with leadership choosing to redirect compute and attention toward “Spud,” a major upcoming model Sam Altman says is coming within weeks.
This looks like a company-wide reshuffle around what matters most now: core models, coding, enterprise positioning and whatever OpenAI thinks will matter in the escalating fight with Anthropic. Sora once looked like a crown jewel, hitting No. 1 on the App Store and even helping land a major Disney partnership. Now it is being treated like an expensive detour.

Why it matters
The retreat from Sora says a lot about where value is concentrating in AI. Video may be flashy, but the money, usage, and enterprise gravity are increasingly in coding, agents, and foundational model capability. AI Secret frames that shift even more bluntly: Anthropic’s Claude Code has surged in developer usage and is increasingly seen as winning the coding layer, while OpenAI appears to be consolidating after spreading itself across too many fronts.
No more side quests, indeed.
The Deets
- Altman reportedly told staff OpenAI would wind down all video products tied to Sora.
- The freed-up compute will go to Spud, which Altman says could “accelerate the economy.”
- Sora head Bill Peebles said the team will now pursue world simulation for robotics, with an eye toward “automating the physical economy.”
Key takeaway
OpenAI is done treating video like a centerpiece. The company is reallocating compute toward the workloads that look stickier, more lucrative, and more strategic: core models, coding, agents, and enterprise infrastructure.
🧩 Jargon Buster - Compute: The processing power used to train or run AI models. In practice, it is the expensive fuel that determines what an AI company can build, ship, and scale.
Apple’s Siri Reboot Looks Like a Last, Necessary Swing
Apple is reportedly testing a standalone Siri app and a new chatbot-style experience called “Ask Siri,” with both expected to surface at WWDC on June 8 as part of iOS 27 and macOS 27. The update would give Siri a dedicated home, a redesigned interface, and more contextual awareness across messages, emails, and notes.
Apple appears to be repositioning Siri from a rigid voice-command layer into something more like a modern AI assistant that users can either type to or talk to. After the lukewarm reception to Apple Intelligence, that is less a product refresh and more a public attempt to prove Siri still belongs in the conversation.
Why it matters
The assistant market has become brutally unforgiving. Users now compare everything to ChatGPT and Claude, not to the old version of Siri. Apple no longer has the luxury of shipping an AI experience that feels bolted on, undercooked, or limited to narrow commands.
The bigger issue is platform control. If Apple cannot make Siri genuinely useful, it risks ceding the everyday AI relationship on the iPhone to someone else. That would be a strategic own-goal of Olympic caliber.
The Deets
- Bloomberg’s Mark Gurman says Apple is testing a standalone Siri app.
- The new experience would support both typed and spoken requests.
- Siri would reportedly pull context from iMessages, emails and notes.
- The assistant may also be able to execute actions inside third-party apps.
- Apple is branding the new experience Ask Siri.
- The planned debut is tied to WWDC on June 8 and the broader iOS 27 rollout.
Key takeaway
Apple needs Siri to feel like an actual assistant, not a legacy feature with better lighting. WWDC now carries real stakes.
🧩 Jargon Buster - Context awareness: An AI assistant’s ability to use information from your apps, messages, or prior interactions to give more relevant responses and complete more useful actions.
🏛️ Power Plays
Norway’s Sovereign Fund Lets AI Start Touching the Money
Norway’s $2.1T sovereign wealth fund is preparing to let AI systems make limited investment decisions under human supervision.
About half of the fund’s 700 employees already use Claude to build internal tools for research, risk monitoring, and deal preparation. Full autonomy is still off the table, but the direction is clear: AI is moving from analyst assistant to a more active role in capital allocation.
Why it matters
When one of the world’s largest pools of capital starts giving AI even partial decision-making authority, that changes the conversation. This is no longer about drafting memos faster or summarizing earnings calls. It is about whether AI can help decide where trillions of dollars flow.
The fund tracks roughly 7,000 companies globally, which makes AI attractive for scale alone. It can widen coverage, speed analysis, and shift humans into higher-leverage oversight roles. That is the promise, anyway. The risk is that “human supervision” becomes one of those phrases that sounds more robust in a slide deck than in practice.
The Deets
- Norway’s sovereign wealth fund manages about $2.1T.
- Roughly half of its 700 employees already use Claude internally.
- Current use cases include research, risk monitoring, and deal prep.
- Full autonomy is still considered premature.
Key takeaway
AI is inching from assistant to allocator. Once it touches investment decisions, even in limited ways, the economic implications get much bigger very quickly.
🧩 Jargon Buster - Human in the loop: A system design where AI can assist or recommend actions, but a person still reviews, approves, or overrides key decisions.
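In code terms, the pattern is simple: the model proposes, a person disposes. Here is a minimal sketch, where the function names, the threshold, and the trade format are all illustrative assumptions rather than anything the fund has described:

```python
# Minimal human-in-the-loop gate. A sketch only: the function names,
# threshold, and trade format are illustrative assumptions, not the
# fund's actual system.
def execute_trade(proposal: str, risk_score: float, approve) -> str:
    """The AI proposes; a human must sign off above a risk threshold."""
    if risk_score > 0.3:                 # arbitrary illustrative threshold
        if not approve(proposal):        # blocking human review step
            return f"rejected by reviewer: {proposal}"
    return f"executed: {proposal}"

# Usage: the recommendation only runs after a person approves it.
print(execute_trade("buy 1000 EQNR", risk_score=0.7, approve=lambda p: True))
```

The weak point flagged above is exactly the approve callback: if reviewers rubber-stamp everything, the loop exists on paper only.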
💸 Funding & Startups
Brett Adcock Drops $100M On 'New AGI Interface'
Brett Adcock, founder of Figure AI, has launched Hark, a new startup aimed at building a “new interface to AGI” through personalized AI and dedicated hardware. He reportedly put $100M of his own money into the company after eight months in stealth.
Hark is building a family of devices for personal and home use, with systems designed to think like you and, in Adcock’s framing, sometimes ahead of you. The team is already 45 people deep, pulling talent from Apple, Google, Meta, and Tesla, and hardware design is being led by Abidur Chowdhury, previously tied to the iPhone Air.
Why it matters
AI hardware has had a rough run. The ambition is always cinematic. The reality often looks like a pre-order page and a support headache. But Hark enters with more credibility than most because Adcock already has hardware-adjacent AI experience through Figure, plus a team stacked with people who know how to ship consumer-grade products.
The timing also matters. As AI assistants become more embedded in everyday work and home routines, there is a stronger case that a new device category could emerge. That does not mean it will. It just means this is one of the more serious attempts.
The Deets
- Hark spent 8 months in stealth.
- Adcock personally invested $100M.
- The company plans a family of AI devices for individuals and the home.
- Hark has signed a deal for thousands of NVIDIA B200 GPUs, arriving in April.
- The first AI models and software are planned for this summer.
Key takeaway
Most AI hardware bets still look risky, but Hark has the money, team, and ambition to make the category interesting again.
🧩 Jargon Buster - Stealth startup: A company operating quietly before launch, usually to build product, recruit talent, or avoid tipping off competitors too early.
🧪 Research & Models
Robot World Models Are Getting Smaller, Faster, A Lot More Practical
JEPA are finally easy to train end-to-end without any tricks!
Excited to introduce LeWorldModel: a stable, end-to-end JEPA that learns world models directly from pixels, no heuristics.
15M params, 1 GPU, and full planning <1 second.
📑: https://t.co/cpTzgvbTS0
— Lucas Maes (@lucasmaes_) March 23, 2026
A team led by Yann LeCun alongside researchers from Mila, NYU, and Samsung introduced LeWorldModel, a compact JEPA-based architecture built to train robots directly from raw visual input. The model uses just 15M parameters, runs on a single GPU, and still delivers strong robotic planning performance.
That matters because a lot of robotics AI still gets discussed through the lens of giant language models. This work points in a different direction: compact, perception-driven systems designed for real-world action rather than chatbot theatrics.
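For a sense of what "JEPA" means in practice, here is a minimal sketch in PyTorch: encode frames into compact latents, then train a predictor to hit the next frame's latent rather than its pixels. Every module, size, and the stop-gradient below are illustrative assumptions, not LeWorldModel's actual recipe (which the authors say needs none of the usual anti-collapse tricks):

```python
# Minimal JEPA-style world-model sketch. Illustrative only: every module,
# size, and the stop-gradient below are assumptions, not LeWorldModel's
# published recipe.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps raw pixels to a compact latent state."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    """Predicts the next latent state from the current latent plus an action."""
    def __init__(self, latent_dim: int = 128, action_dim: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))

encoder, predictor = Encoder(), Predictor()

# One training step: predict the *embedding* of the next frame, never the pixels.
obs, next_obs = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
action = torch.randn(8, 4)

z_pred = predictor(encoder(obs), action)
z_target = encoder(next_obs).detach()  # common anti-collapse choice; an assumption here
loss = nn.functional.mse_loss(z_pred, z_target)
loss.backward()
```

The appeal for robotics is that the loss lives in a small latent space, so a 15M-parameter model can learn dynamics without ever paying for pixel-level generation.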
Why it matters
The practical future of robotics may depend less on ever-larger general models and more on architectures built specifically for perception, planning, and control. Smaller models that run efficiently are far easier to deploy in actual robots, where latency, reliability, and hardware constraints are not optional details.
This also lines up with OpenAI’s internal pivot from Sora toward world simulation for robotics. Different camps are converging on the same broad idea: AI that understands and acts in the physical world may be worth more than AI that simply generates media.
The Deets
- LeWorldModel is JEPA-based and trained from raw visual input.
- It can run on a single GPU.
- The model delivers fast planning and strong robotic task performance (a sketch of latent-space planning follows this list).
- The work points toward more compact and deployable robot intelligence systems.
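And here is roughly what "planning" means once such a model exists: sample candidate actions, roll each forward in latent space, and keep whichever lands closest to the goal. Again a generic sketch under assumed names and sizes, not the paper's actual planner:

```python
# Planning-by-search with a learned world model. A generic sketch, not the
# paper's planner: roll candidate actions forward in latent space and keep
# whichever lands closest to the goal. Sizes and names are assumptions.
import torch

latent_dim, action_dim, n_candidates = 128, 4, 256
predictor = torch.nn.Linear(latent_dim + action_dim, latent_dim)  # stand-in dynamics model

z_now = torch.randn(latent_dim)    # current state embedding
z_goal = torch.randn(latent_dim)   # goal state embedding

actions = torch.randn(n_candidates, action_dim)                   # sampled candidates
inputs = torch.cat([z_now.expand(n_candidates, -1), actions], dim=-1)
z_next = predictor(inputs)                                        # predicted next latents
best = actions[(z_next - z_goal).norm(dim=-1).argmin()]           # closest to goal wins
print("chosen action:", best)
```

Because everything happens in a small latent space rather than pixel space, scoring hundreds of candidates in well under a second on one GPU is plausible.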
Key takeaway
Robotics may not be won by the loudest model. It may be won by the one that can actually run.
🧩 Jargon Buster - World model: An AI system’s internal representation of how the physical world works, helping it predict outcomes, plan actions, and adapt to new situations.
From Warehouses To McDonald's, Robots Are Becoming Weirdly Capable
Several robotics stories today point to the same trend: robots are getting better at coordinating, adapting, and functioning in messy real-world environments.
Otto Group is deploying AI coordination across warehouses using NVIDIA Omniverse and a custom Coordinated Autonomy Layer, aiming to manage fleets of robots through a unified system with human oversight.
Meanwhile, researchers built a bio-inspired odor-tracking robot that can still follow smells after losing one of its two sensors, modeled on silkworm moth behavior.
And on the humanoid side, a team from Tsinghua University, Peking University, and Galbot developed the LATENT framework, allowing the Unitree G1 robot to sustain multi-shot tennis rallies with humans, handling incoming balls above 15 m/s after simulation-based training.
Why it matters
The common thread is resilience. Real-world robotics has always been less about clean demos and more about what happens when sensors fail, layouts change, or a fast-moving object arrives at an inconvenient angle. These projects all show robots inching toward more adaptive behavior.
The Deets
- Otto is using NVIDIA Omniverse and a digital twin setup to improve warehouse robot efficiency and reduce stoppages.
- The smell-tracking robot maintained near-identical performance even after losing one sensor.
- The Unitree G1, using LATENT, learned to perform stable real-world tennis rallies with humans.
- McDonald’s in Shanghai also piloted Keenon Robotics humanoids for greeting customers, assisting with ordering, and delivering meals.
Key takeaway
Robots are getting better at the annoying, unpredictable parts of the physical world. That is usually when progress starts to count.
🧩 Jargon Buster - Digital twin: A virtual replica of a real-world environment or system used to simulate, test and optimize operations before deployment in the real world.
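A toy illustration of the idea, with entirely made-up numbers and a deliberately crude model (Otto's actual Omniverse setup is far richer): score candidate configurations in the replica, and only the winner touches the real warehouse.

```python
# Toy digital twin. All numbers and the model itself are made-up
# assumptions, not Otto's or NVIDIA's system.
import random

def simulated_stoppages(route_buffer_m: float, n_crossings: int = 10_000) -> int:
    """Count stops triggered when two robot paths cross closer than the buffer."""
    random.seed(0)  # deterministic twin runs keep configs comparable
    return sum(random.uniform(0, 5) < route_buffer_m for _ in range(n_crossings))

# Try routing buffers in simulation first; deploy the best trade-off for real.
results = {b: simulated_stoppages(b) for b in (0.5, 1.0, 2.0)}
print(results)  # a wider safety buffer means more stops; the twin quantifies the trade-off
```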
⚡ Quick Hits
- Token budgets are starting to look like compensation. AI Secret reports that some companies increasingly treat model usage as part of employee productivity and leverage, with annual inference costs reaching $100,000 per person in some cases.
- Claude Code vs. OpenClaw is becoming a control fight, not just a product fight. The tension is between managed convenience on one side and privacy, ownership, and isolation on the other.
- OpenAI’s org chart is shifting along with its product priorities, including reported changes around safety oversight and a renamed AGI Deployment division.
🛠️ Tools of the Day
Unwrap Customer Intelligence: A tool focused on turning unstructured customer feedback into product insights, which is exactly the sort of thing product teams claim they already do in spreadsheets.
Claude Code: Its new auto mode adds a more hands-free coding permission system, another sign that Anthropic is smoothing the path from assistant to active operator.
Dynamic Workers: Cloudflare’s sandbox for running AI agent code at scale looks aimed squarely at teams that want safer, more manageable execution environments for agents.
Figma: The new MCP bridge lets designers work directly on canvas through agentic tools, pushing Figma further into AI-native workflow territory.
Today’s Sources: The Rundown AI, AI Secret, Robotics Herald