Need Design? Just Google It; Grok Imagine Is Tops; Models Building Models
Today's AI Outlook: 🌤️
Google Goes All In On Vibe Design

Google’s overhaul of Stitch is the clearest sign yet that the AI industry is coming for the design stack the same way it came for coding.
Stitch is no longer just a UI generator that spits out a screen mockup and calls it a day. It is being positioned as an AI-native product creation surface, where users can describe an idea in plain language, talk to the tool by voice, explore multiple directions at once, and end up with a clickable prototype instead of a static design file.
That matters because Google is not just pitching “faster mockups,” it's pitching the collapse of the old relay race between product managers, designers, and frontend engineers.
The new Stitch workflow pulls together briefs, images, code, voice edits, style rules and prototyping into one continuous loop. AI Secret framed this as the removal of design as a standalone phase.
The Rundown gave the mechanics: an infinite canvas, an agent manager that can juggle multiple design paths, instant prototyping, and a new DESIGN.md format to carry design rules into coding tools. Put that together and “vibe design” starts to sound less like branding fluff and more like Google trying to plant a flag in post-Figma product development.

Why it matters
This is a serious workflow attack on the software toolchain. If product teams can move from rough intent to working flows in minutes, the cost of iteration drops hard, and the handoff friction between design and development starts to look like legacy baggage.
That does not mean designers disappear. It does mean their job shifts further toward taste, systems thinking, critique, and decision-making, while AI handles the first 80% of production grunt work.
“Taste” is the new catchword humans reach for in every category AI absorbs. The human moat, we’re told, is taste. Well, we’ll see.
Still to come: Google fusing Stitch with AI Studio, its vibe-coding tool, as a way to own a much larger part of the product build stack.
Today, we’re evolving @StitchbyGoogle from @GoogleLabs into an AI design canvas that transforms natural language prompts into production-ready front-end code.
— Google AI (@GoogleAI) March 18, 2026
Some highlights from what’s new:
1. A complete redesign of the Stitch UI, which can now ingest multimodal references (text… pic.twitter.com/Ua0XbwLKyO
The Deets
- Stitch now runs on an infinite canvas
- Users can feed it images, code, or written briefs
- A preview voice mode allows live edits mid-conversation
- Instant prototyping turns static screens into interactive flows
- Stitch can auto-generate likely next screens in a UI journey
- A new DESIGN.md format helps teams move design rules between Stitch and coding tools
- AI Secret says the product now behaves more like a persistent product creation system than a standalone design app
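The DESIGN.md idea is the most concrete piece of this, and it is easiest to grasp with an example. Google has not published the format's full schema in these reports, so the file below is a hypothetical sketch of the concept: design rules written as a portable markdown file that a coding tool can ingest alongside the code.

```markdown
<!-- DESIGN.md (hypothetical sketch; Google's actual schema may differ) -->
# Design Rules

## Tokens
- Primary color: #1A73E8
- Corner radius: 8px
- Type scale: 14 / 16 / 24

## Components
- Buttons: filled for primary actions, outlined for secondary
- Forms: inline validation, never modal error dialogs

## Voice
- Microcopy in sentence case, no exclamation points
```

The point is portability: the same file rides along from Stitch into a coding agent, so generated front-end code inherits the rules instead of reinventing them on every prompt.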
Key takeaway
Google is trying to make product creation feel like prompting, iterating and shipping inside one surface. “Vibe coding” was the appetizer. “Vibe design” might be the meal.
🧩 Jargon Buster - Infinite canvas: A workspace that does not box you into a single artboard or page, letting you explore multiple ideas, flows, and versions in one open visual environment.
🏛️ Power Plays
OpenAI’s Cloud Triangle Is Starting To Look Like A Legal Pretzel
The Microsoft-OpenAI relationship has already spent months radiating “group project gone wrong” energy, and now a new wrinkle could push it into court.
According to The Rundown, Microsoft is reportedly considering legal action if OpenAI’s new AWS arrangement crosses the lines of its existing Azure agreement. The conflict centers on Frontier, OpenAI’s enterprise agent platform, and a much broader cloud commitment that reportedly ties OpenAI to massive AWS spending.
The broader picture is simple enough: OpenAI wants more infrastructure flexibility, Amazon wants more of the hottest AI workload on Earth, and Microsoft would very much like the startup it helped supercharge not to wander too far from Azure with the crown jewels. The problem is that “more flexible partnership” and “exclusive commercial rights” tend to become enemies the moment the invoices get large enough.
Why it matters
Cloud is no longer back-office plumbing. It is strategy, leverage and, increasingly, a legal weapon. The companies fighting over model access are also fighting over where enterprise AI lives, who gets paid for inference, and who controls developer distribution. A courtroom would simply be the least elegant possible product roadmap.
The Deets
- Microsoft is reportedly weighing legal action over OpenAI’s newer AWS deal
- The dispute is tied to Frontier, OpenAI’s enterprise agent platform
- The broader agreement reportedly includes $138B in cloud spending to AWS
- Microsoft had already loosened its exclusive hosting arrangement in October
- But The Rundown says a clause still routes developer access to OpenAI models through Azure
- One source told the Financial Times that Microsoft would sue if the contract is breached
Key takeaway
The AI stack is consolidating into fewer giant relationships, which means every contract is now a pressure cooker. OpenAI’s cloud diversification may be strategically sensible, but it also makes every old clause newly expensive.
🧩 Jargon Buster - Exclusive hosting: A contractual setup where one cloud provider gets special rights to run or distribute a company’s technology, often limiting where models can be served or sold.
🧰 Tools & Products
The Knowledge Layer Fight Has Entered The Chat
AI Secret surfaced a quieter but potentially important fight around how agents learn before they act.
Andrew Ng’s Context Hub is framed as an open-source CLI and MCP layer designed to give agents curated documentation and reduce API hallucinations. The subtext is where things get juicy: if OpenClaw’s ecosystem is built around executable skills and packaged workflows, a doc-driven layer that sits upstream could reshape where power sits in the agent stack.
This is not just a nerdy tooling squabble for people who voluntarily read protocol docs at breakfast, but a battle over whether agents should be guided by handcrafted skills or by a standardized knowledge interface that tells them what tools are, how they work, and when to use them. That sounds abstract until you realize the winner gets to shape how developers build the next generation of agent workflows.
Why it matters
Whoever owns the layer between knowing and doing gets a lot of influence. If Context Hub becomes a default knowledge bridge, it could shift agent development away from custom workflow packaging and toward more composable, documentation-centered systems.
The Deets
- Andrew Ng introduced Context Hub
- AI Secret describes it as an open-source CLI and MCP layer
- Its goal is to reduce API hallucinations by feeding agents curated documentation
- AI Secret contrasts it with OpenClaw’s skill-based ecosystem
- The tension is between skill-driven execution and doc-driven composition
Key takeaway
The next platform war may not be over the best model. It may be over the interface that tells models how the world works.
🧩 Jargon Buster - MCP layer: A middleware-style layer that helps AI systems connect to tools, services, or structured context in a standardized way.
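The doc-driven pattern is easier to see in miniature. The sketch below is not Context Hub's actual API; the registry and function names are invented for illustration. It just shows the core move: an agent looks up curated documentation for a tool before calling it, so the API shape comes from vetted docs rather than the model's memory.

```python
# Minimal sketch of a doc-driven knowledge layer for agents.
# NOT Context Hub's actual API: CURATED_DOCS and get_context
# are hypothetical names for illustration only.

CURATED_DOCS = {
    "payments.charge": (
        "POST /v1/charges: amount is in minor units, currency is ISO 4217, "
        "and an Idempotency-Key header is required for safe retries."
    ),
    "search.web": (
        "GET /search?q=...: returns at most 10 results, "
        "rate limited to 1 request per second."
    ),
}

def get_context(tool_name: str) -> str:
    """Return curated docs for a tool, or a refusal hint if none exist.

    Injecting this into the prompt before the agent calls the tool is
    the whole idea: the call is grounded in vetted documentation
    instead of the model's guess, which is where hallucinated
    parameters usually come from.
    """
    return CURATED_DOCS.get(
        tool_name,
        f"No curated docs for '{tool_name}'; confirm before calling it.",
    )

context = get_context("payments.charge")
```

Skills package what to do; a knowledge layer like this packages what is true about the tools, and the two compose differently as the agent stack grows.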
Grok Imagine Flips The Script On AI Video Pricing
AI Secret flagged Grok Imagine as a major new entrant in AI video, noting it topped a user-voting arena across video generation, image-to-video, and editing, while undercutting competitors on price.
The more important claim was not prestige but economics: if Grok can generate video for about $4 per minute, compared with roughly $12 for Veo and close to $30 for Sora, the market changes fast.
Video generation has spent a lot of time feeling like a premium demo category: beautiful, impressive, slightly too expensive, and usually accompanied by a founder thread about the future of cinema. Lower prices change the math for ads, content pipelines, synthetic data creation, and agentic workflows that need cheap media generation at scale. Once the price drops, experimentation spikes. Once experimentation spikes, people find uses no demo day ever predicted.
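Using the reported per-minute estimates (third-party figures, not official price sheets), the economics are easy to sanity-check:

```python
# Back-of-envelope cost comparison using the per-minute estimates
# reported above. These are third-party figures, not official pricing.

PRICE_PER_MINUTE = {"Grok Imagine": 4.0, "Veo": 12.0, "Sora": 30.0}

def campaign_cost(minutes: float) -> dict:
    """Total generation cost for `minutes` of video on each model."""
    return {model: round(rate * minutes, 2)
            for model, rate in PRICE_PER_MINUTE.items()}

# A 100-minute content pipeline: $400 vs $1,200 vs $3,000.
costs = campaign_cost(100)
```

At those rates Grok comes out 3x cheaper than Veo and 7.5x cheaper than Sora, which is the kind of gap that turns "interesting demo" budgets into "run it on everything" budgets.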
Why it matters
AI video is starting to behave less like a boutique creative tool and more like compute infrastructure. That is bad news for margins and good news for adoption.
The Deets
- AI Secret says Grok Imagine ranked first in a user-voting model arena
- It reportedly led in video generation, image-to-video, and editing
- Estimated cost was about $4 per minute
- AI Secret compared that with roughly $12 for Veo and nearly $30 for Sora
- Faster turnaround was also part of the pitch
Key takeaway
When video gets cheap enough, it stops being a special effect and starts becoming a standard feature.
🧩 Jargon Buster - Image-to-video: A model capability that turns a still image into moving footage, often by generating motion, camera movement, and scene continuity.
💸 Funding & Startups
Microsoft Scoops Up Cove’s Team, Not Just Its Product
Microsoft acquired the full team behind Cove, a collaborative AI interface startup. Cove said its “ideas will live on” at Microsoft, which is startup-speak for “the logo may be gone, but the roadmap got adopted by a giant.”
This kind of move says a lot about where big tech still thinks the leverage is. Models matter, yes, but interface design, collaboration mechanics, and workflow orchestration are where companies can still differentiate.
Every major platform company wants AI that feels native to work, not bolted on like a chatbot taped to the side of Excel.
Why it matters
The market is maturing from “who has the best raw model” to “who can turn capability into usable work.” That makes interface talent newly valuable.
The Deets
- Microsoft acquired the team behind Cove
- Cove focused on a collaborative AI interface
- The company said its ideas would continue inside Microsoft
Key takeaway
Big tech is still shopping for the picks and shovels of the agent era, especially if they make AI easier to use with other humans in the loop.
🧩 Jargon Buster - Acqui-hire: When a larger company buys a startup mainly to bring its team and talent in-house rather than just to own its product.
🧪 Research & Models
MiniMax Built A Model That Helped Build Itself

The Rundown’s most interesting model story was MiniMax M2.7, which the company describes as its first model to participate deeply in its own development.
That phrase deserves a raised eyebrow, but the underlying claim is real enough to pay attention to: earlier versions of the model were reportedly used to write training code, generate improvement routines, and run autonomous testing loops that fed into later versions.
This is the kind of story that sounds like marketing right up until the benchmark numbers show up. The Rundown says M2.7 went through 100-plus autonomous improvement cycles, delivered a 30% accuracy boost on internal benchmarks, and posted coding scores of 56.2% on SWE-Pro and 55.6% on VIBE-Pro, putting it in the neighborhood of top Western systems for agentic engineering tasks.
The important signal is not whether every number survives scrutiny. It is that more labs are openly talking about models improving the pipeline that produces the next model.
Why it matters
Self-improving systems are moving from lab rumor to product narrative. If models can materially help with their own training, eval design, debugging, and tool optimization, development cycles get tighter and cheaper. The loop speeds up. Humanity gets more capable models. Everyone also gets a little less sleep.
The Deets
- MiniMax M2.7 reportedly helped write training and improvement code
- The model ran 100-plus autonomous analysis and testing cycles
- MiniMax claims a 30% internal accuracy improvement
- Coding scores included 56.2% on SWE-Pro and 55.6% on VIBE-Pro
- The Rundown positioned it near top Western models in agentic coding tasks
Key takeaway
The next generation of AI may not just be trained by humans. It may be co-developed by the models already in the lab.
🧩 Jargon Buster - Autonomous improvement loop: A process where an AI system reviews errors, proposes changes, tests those changes, and feeds the results back into future improvement without constant human intervention.
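The pattern behind the jargon is simple to sketch. The toy loop below is not MiniMax's pipeline; the evaluate and propose steps are stand-ins. It just shows the shape: propose a change, test it, keep it only if it scores better, and feed the winner into the next cycle.

```python
import random

# Toy sketch of an autonomous improvement loop. NOT MiniMax's actual
# pipeline: evaluate() and propose() are stand-ins for a real
# benchmark harness and a model-generated code change.

random.seed(0)  # deterministic for the sketch

def evaluate(config: dict) -> float:
    """Toy benchmark: higher is better, best at lr == 0.003."""
    return -abs(config["lr"] - 0.003)

def propose(config: dict) -> dict:
    """Stand-in for a model-suggested tweak: nudge a hyperparameter."""
    return {**config, "lr": config["lr"] + random.uniform(-0.001, 0.001)}

def improvement_loop(config: dict, cycles: int = 100) -> dict:
    """Keep a candidate only if it scores better, then iterate on it."""
    best, best_score = config, evaluate(config)
    for _ in range(cycles):
        candidate = propose(best)
        score = evaluate(candidate)
        if score > best_score:  # accepted changes feed the next cycle
            best, best_score = candidate, score
    return best

tuned = improvement_loop({"lr": 0.01}, cycles=100)
```

The claims in the story amount to running this shape at scale, with the model itself writing the propose step and large parts of the evaluate step.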
Rakuten’s Model Release Isn’t Entirely Its Own
Japan’s Rakuten presented Rakuten AI 3.0 as a major domestic model release, but developers quickly found references tying it to DeepSeek V3. AI Secret’s core argument is not that building on open models is wrong. It is that failing to clearly disclose that lineage is how trust in the open ecosystem starts to crack.
Open-weight model development has created a fast-moving remix culture where labs adapt, fine-tune, and commercialize each other’s work. That system only functions if license obligations are treated as rules rather than optional vibes. Once big companies get sloppy about attribution, the people upstream get much more interested in lawyers.
Why it matters
The open model economy runs on reuse, but it also runs on disclosure. If attribution norms weaken, the ecosystem gets more restrictive in self-defense.
The Deets
- AI Secret says Rakuten AI 3.0 was framed as a domestic model launch
- Developers reportedly found direct links to DeepSeek V3 in the configuration
- DeepSeek V3 is under Apache 2.0, which allows reuse with attribution
Key takeaway
Open-source AI moves fast because people can build on each other’s work. That bargain falls apart when attribution gets treated like optional paperwork.
🧩 Jargon Buster - Apache 2.0: A permissive open-source license that allows reuse and modification, usually as long as attribution and license terms are preserved.
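For reference, the compliance bar here is low. The snippet below is a hypothetical NOTICE excerpt, not taken from Rakuten's release, showing what clear lineage disclosure under Apache 2.0 can look like:

```markdown
<!-- Hypothetical NOTICE excerpt; not from Rakuten's actual release -->
This model is a derivative of DeepSeek V3,
licensed under the Apache License, Version 2.0.
A copy of the license is provided in LICENSE.
Modifications: <summarize fine-tuning and other changes here>
```

A few lines of honest lineage is all the license asks for, which is exactly why skipping it reads as a choice rather than an oversight.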
⚡ Quick Hits
- Perplexity brought its Comet browser to iOS with AI-assisted search, deep research, and cross-device task features.
- Meta’s v23 update for smart glasses adds snow sports tracking, expanded live translation, and more continuous voice interactions.
- Midjourney previewed its new V8 image model, with early reactions split over quality improvements.
- Xiaomi released MiMo-V2-Pro, which AI Secret said performed strongly on agent-focused tasks and OpenClaw usage.
- The Pentagon pushed back in court against Anthropic, arguing its safety limits make it an unacceptable wartime risk.
- Tempo, backed by Stripe, launched its mainnet with infrastructure for AI agents to autonomously request and complete payments.
- Andrej Karpathy removed an AI job exposure analysis after it was widely misread, though the debate around white-collar disruption did not exactly get quieter.
- A report cited by AI Secret said a rogue AI agent at Meta exposed sensitive data, which is not the kind of product demo anyone wants.
- Trevor Milton, pardoned Nikola founder and eternal magnet for improbable headlines, is reportedly raising $1B for AI-powered autonomous jets.
Today’s Sources: The Rundown AI, AI Secret