Anthropic's Dilemma; OpenAI vs Cribbers; 'Adult' Dolls Go Mainstream

Today's AI Outlook: 🌤️

Anthropic Is Worth $380B, at War With the Pentagon, and Apparently Irreplaceable

Two stories collided this week to paint a picture of a company operating from a position of extraordinary leverage and extraordinary tension. Anthropic closed a $30B fundraise that more than doubled its valuation to $380B, with Claude Code revenue running above $2.5B annually and total annualized revenue hitting $14B. Enterprise subscriptions have quadrupled. And over 90% of users on the OpenClaw platform are independently choosing Claude Opus 4.6 as their preferred model for running agents, a signal of genuine performance preference rather than contractual bundling.

At the same time, the Pentagon is reportedly "close" to cutting ties with Anthropic and designating the company a "supply chain risk", a label usually reserved for foreign adversaries like Chinese telecom firms. The dispute centers on Anthropic's refusal to grant the military unrestricted access to Claude for "all lawful purposes."

Anthropic has said it is open to loosening some terms but will not budge on two red lines: mass surveillance of American citizens and fully autonomous weapons systems. Defense officials have called the stance "ideological."

Claude, meanwhile, is currently the only AI model operating inside the Pentagon's classified systems, and was reportedly used via a Palantir deployment during the January 3 military operation that captured Venezuelan leader Nicolas Maduro.

Why it matters

The supply chain risk designation, if applied, would force every U.S. defense contractor to certify it does not use Claude in its workflows. Given that Anthropic says eight of the ten largest U.S. companies use its models, the collateral damage could be staggering.

The Deets

  • The $30B raise pushes Anthropic's valuation to $380B, more than double its previous round
  • Claude Code alone generates over $2.5B in annualized revenue
  • CEO Dario Amodei has separately warned the industry is "YOLOing" on infrastructure, cautioning that a one-year revenue miscalculation could bankrupt firms chasing trillion-dollar compute projects
  • Anthropic is building its own proprietary data center network with facilities in Louisiana, Texas, and New York, targeting 1M TPUs by late 2026

Key takeaway

Anthropic finds itself in one of the stranger positions in tech history: simultaneously the most valuable private AI company on Earth and the one most at risk of being blacklisted by its own government.

🧩 Jargon Buster - Supply chain risk designation: A federal label that flags a company as a potential threat to the defense supply chain. Once applied, every contractor doing business with the Pentagon must prove it does not rely on that company's products, effectively blacklisting the company from the defense ecosystem.


⚡ Power Plays

OpenAI Tells Congress That DeepSeek Is Running an AI Chop Shop

OpenAI escalated its feud with Chinese rival DeepSeek by sending a formal memo to the House Select Committee on China accusing the Hangzhou-based lab of systematically extracting outputs from top American AI models to train its own systems.

This is not a vague complaint about competitive copying. OpenAI claims it detected "new, obfuscated methods" designed to evade its defenses, including accounts linked to DeepSeek employees using third-party routers to mask their access to frontier models and programmatically harvesting outputs for distillation at scale.

The timing is notable. OpenAI suspects DeepSeek may be preparing a major product announcement during the Lunar New Year celebrations, echoing the surprise rollout of its R1 model last year that briefly rattled U.S. markets.

Rep. John Moolenaar, chair of the House China committee, responded to the memo by calling the alleged behavior "part of the CCP's playbook: steal, copy, and kill." DeepSeek has not commented.

Why it matters

If distillation at this scale is proven and normalized, the economics of frontier AI development invert. Labs spending tens of billions on compute, data and energy could find their competitive moat drained through API access.

The Deets

  • OpenAI claims DeepSeek employees developed code to access U.S. AI models through obfuscated third-party routers
  • Distillation involves using a powerful model's outputs to train a smaller, cheaper model, effectively transferring knowledge without paying for the original training
  • OpenAI warns that distilled models often lose safety guardrails, enabling misuse in areas like biology and chemistry
  • DeepSeek's chatbot has been shown to censor results on topics sensitive to the Chinese government, including Taiwan and Tiananmen Square

Key takeaway

OpenAI is not just filing a complaint. It is trying to frame distillation as a national security issue before Congress. If lawmakers agree, the regulatory implications could reshape how API access works across the entire industry.

🧩 Jargon Buster - Distillation: A technique where a smaller AI model learns by studying the outputs of a larger, more powerful one. Think of it as a student copying the smart kid's homework, except the homework cost billions of dollars to produce.
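
For the technically curious, the mechanics fit in a few lines. Below is a minimal sketch of a distillation loop in PyTorch: the model sizes are toys, the random tensors stand in for real training data, and this is the generic textbook technique, not anyone's actual pipeline. The student is trained to match the teacher's softened output distribution:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: real distillation pairs a frontier LLM with a smaller student.
teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution so it carries more signal

for step in range(100):
    x = torch.randn(32, 128)  # stand-in for real prompts/inputs
    with torch.no_grad():
        teacher_logits = teacher(x)  # the part allegedly harvested via API access
    student_logits = student(x)
    # Train the student to match the teacher's softened output distribution.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The expensive part is the teacher's outputs. If those can be harvested at scale through an API, as OpenAI alleges, the student inherits billions of dollars of training signal for the price of the API calls.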


Alibaba's Qwen-3.5 Arrives With 397B Parameters And A Point To Prove

Alibaba's Qwen team released Qwen3.5-397B-A17B, an open-weight vision-language model that uses a sparse Mixture-of-Experts architecture to activate only 17B of its 397B total parameters per query. The result is a model that competes with GPT-5.2, Claude Opus 4.5, and Gemini 3 Pro across math, reasoning, coding, and vision benchmarks while being dramatically cheaper and faster to run.

Alibaba claims the model is 60% cheaper and delivers 8x higher throughput than its predecessor Qwen3-Max, and it scored 91.3 on AIME 2026 (competitive math), outpacing several proprietary competitors.

The release is part of a broader Chinese AI offensive this week. Coming on the heels of ByteDance's Seed 2.0 launch, which undercut Western pricing by a factor of 10, Qwen-3.5 adds open weights under an Apache 2.0 license to the mix, meaning anyone can download, modify, and deploy the model without restriction.

The model supports a 1M-token context window in its hosted variant and was designed from the ground up for agentic workflows, with native vision capabilities trained through early fusion on trillions of text and image tokens simultaneously.

Why it matters

Two major Chinese labs dropped frontier-competitive models in the same week, both at a fraction of Western pricing. Qwen-3.5 being fully open-weight under Apache 2.0 adds a dimension ByteDance's Seed 2.0 does not: any company, researcher, or developer can run it on their own infrastructure with zero licensing restrictions. The competitive pressure on Western labs is now coming from multiple directions at once, and the price floor is falling fast.

The Deets

  • 397B total parameters, only 17B active per query via sparse MoE architecture
  • Scores 94.9 on MMLU-Redux, 88.4 on GPQA Diamond (graduate-level reasoning), and 83.6 on LiveCodeBench v6
  • Still trails Claude in code generation and Gemini in long-tail knowledge, per Alibaba's own benchmarks
  • Underperforms Western competitors on hallucination rates, a notable gap for enterprise deployment

Key takeaway

The frontier AI race is no longer a two-horse contest between OpenAI and Anthropic. Chinese labs are releasing models that match or beat Western performance at dramatically lower cost, and doing it with open licenses that make the technology available to everyone.

🧩 Jargon Buster - Mixture-of-Experts (MoE): An AI architecture that splits the model into many specialized "expert" sub-networks but only activates a small fraction of them for each query.
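
To see why only a sliver of the parameters fire per query, here is a minimal top-k MoE layer in PyTorch. The dimensions, expert count, and top-2 routing are toy choices, and this is the generic textbook pattern, not Qwen's actual architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Toy MoE layer: each token is routed to its top-k experts only."""

    def __init__(self, dim=64, num_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # scores every expert per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])
        self.k = k

    def forward(self, x):  # x: (tokens, dim)
        weights, idx = self.router(x).topk(self.k, dim=-1)  # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)                # normalize the selected scores
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e        # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

moe = SparseMoE()
print(moe(torch.randn(16, 64)).shape)  # torch.Size([16, 64]); only 2 of 8 experts ran per token
```

Every expert's weights still have to sit in memory, but each token pays the compute cost of only its top two experts. Qwen-3.5's 17B-active-of-397B ratio is the same trick at much larger scale.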


IBM Discovers Replacing Humans With AI Still Requires... Humans

After years of signaling that AI would replace thousands of back-office roles, IBM is reversing course and tripling its entry-level hiring. The company had paused recruitment and targeted roughly 7,800 administrative positions for automation. Now HR leadership says many of those junior roles are being redesigned, not eliminated, because AI tools failed to fully replace the human oversight and coordination those positions provided.

Why it matters

For two years, CFO models across Corporate America were built around 20% to 30% labor efficiency gains from generative AI in support and operations. IBM's pivot suggests the actual bottleneck is not task execution but supervision and judgment. If AI output requires constant human review, labor costs do not disappear. They shift from doing the work to checking the work.

The Deets

  • IBM originally targeted 7,800 administrative roles for AI automation
  • The company is now tripling entry-level hiring for redesigned positions
  • The new roles focus on managing and reviewing AI-generated outputs rather than performing the original tasks manually
  • The reversal suggests enterprise AI is, in practice, augmentation rather than substitution

Key takeaway

The "AI replaces all the junior people" thesis just hit its first major corporate reality check. The humans may be coming back, but their job descriptions have changed.

🧩 Jargon Buster - Augmentation vs. substitution: Two competing theories about how AI affects jobs. Substitution means AI replaces the worker entirely. Augmentation means AI handles parts of the work while the human manages, reviews, and makes final decisions.


🛠️ Tools & Products

OpenAI Built A Panic Room Inside ChatGPT

OpenAI introduced Lockdown Mode in ChatGPT, a new optional security setting that deterministically disables tools and capabilities an attacker could exploit through prompt injection, the technique where malicious instructions hidden in web pages or documents trick an AI into leaking data or taking unauthorized actions.

When enabled, Lockdown Mode restricts web browsing to cached content only, ensuring no live network requests leave OpenAI's environment. It also disables image rendering, Deep Research, Agent Mode, and network access for Canvas-generated code.

Alongside Lockdown Mode, OpenAI is rolling out standardized "Elevated Risk" labels across ChatGPT, Atlas, and Codex to flag features that may introduce security risks. The labels explain what changes, what risks exist, and when elevated access is appropriate. As OpenAI strengthens safeguards for specific features, it plans to remove the labels once it determines the risks have been sufficiently mitigated.

Why it matters

This is OpenAI publicly acknowledging that AI agents capable of browsing, connecting to apps, and executing tasks are fundamentally different security beasts than simple chatbots. "Hard blocks," deterministic restrictions that cannot be bypassed through clever prompting, may be the only reliable defense against prompt injection as agents become more powerful. The fact that Lockdown Mode disables some of ChatGPT's most useful features is the tradeoff: maximum security means minimum capability.
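
OpenAI has not published Lockdown Mode's internals, but the concept of a hard block is easy to sketch: the gate lives in ordinary application code around the model, not in the model itself. Everything below, function and tool names included, is a hypothetical illustration in Python, not OpenAI's implementation:

```python
# Hypothetical sketch of a deterministic tool gate; not OpenAI's implementation.
LOCKDOWN_DISABLED = {
    "image_render", "deep_research", "agent_mode", "canvas_network", "file_download",
}

def run_tool(tool_name: str, args: dict) -> dict:
    """Stub executor for the sketch; a real system would invoke the actual tool here."""
    return {"ok": f"ran {tool_name}"}

def dispatch_tool(tool_name: str, args: dict, lockdown: bool, whitelist: set | None = None) -> dict:
    """Gate a model-issued tool call in plain application code.

    This check runs after the model produces a tool call but before anything
    executes. Because no model output can alter this code path, a
    prompt-injected instruction cannot talk its way past it.
    """
    whitelist = whitelist or set()
    if lockdown and tool_name in LOCKDOWN_DISABLED and tool_name not in whitelist:
        return {"error": f"{tool_name} is disabled by Lockdown Mode"}
    if lockdown and tool_name == "web_browse":
        args = {**args, "cache_only": True}  # serve cached pages only, no live requests
    return run_tool(tool_name, args)

print(dispatch_tool("deep_research", {}, lockdown=True))
# {'error': 'deep_research is disabled by Lockdown Mode'}
```

The design point: the model can be fooled, the dispatcher cannot. The check fires on every tool call regardless of how cleverly the prompt argues otherwise.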

The Deets

  • Available now for ChatGPT Enterprise, Edu, Healthcare, and Teachers editions
  • Consumer availability planned in the coming months
  • Workspace admins can enable it via role-based controls and whitelist specific apps that remain accessible
  • Lockdown Mode disables: image rendering, Deep Research, Agent Mode, Canvas network access, and file downloads

Key takeaway

As AI tools gain more autonomy and more access to external systems, the security conversation is shifting from "how do we make the model safer" to "how do we build deterministic walls around the features most likely to be exploited." Lockdown Mode is the first major product acknowledgment of that shift.

🧩 Jargon Buster - Prompt injection: An attack where someone hides malicious instructions inside content an AI reads, like a web page, email, or document, to trick the AI into doing something it should not, such as leaking private data or visiting a harmful link.


🔬 Research & Models

Scientists Built A Fly's Eye For Robots (Oh, And It Can Smell)

Researchers at the Chinese Academy of Sciences built a 1.5-millimeter artificial compound eye inspired by fruit flies that combines 180-degree vision with chemical gas detection in a single integrated sensor, publishing their results in Nature Communications.

Using femtosecond laser two-photon polymerization, the team packed 1,027 visual units into a curved structure, then added an inkjet-printed chemical array that changes color when it detects hazardous gases. The combined "bio-CE system" was mounted on a micro robot that successfully detected moving objects and avoided obstacles in lab tests.

By fusing vision and chemical sensing into a single module, this design cuts payload and simplifies system architecture in ways that matter for real-world deployment in collapsed buildings, industrial tunnels, or toxic environments.

Why it matters

This is hardware-level multimodal fusion at extreme miniature scale. If the image resolution and chemical response speed improve in future iterations, micro robotics design could shift toward integrated sensing blocks, enabling smaller, cheaper, and more autonomous machines that can safely operate in environments too dangerous for humans.

The Deets

  • The compound eye is just 1.5 millimeters in diameter
  • Contains 1,027 visual units delivering 180-degree vision
  • Chemical array changes color to detect hazardous gases
  • Successfully tested on a micro robot performing object detection and obstacle avoidance

Key takeaway

The next generation of rescue and inspection robots will see and smell through the same device, and it will be smaller than your fingernail.

🧩 Jargon Buster - Compound eye: An eye structure made of hundreds or thousands of tiny individual visual units, like those found in insects. It provides a wide field of view in an extremely small package, which is exactly what micro robots need.


This Robot Cannot Die (Its Neighbors Will Not Let It)

Researchers at EPFL (Swiss Federal Institute of Technology) built Mori3, a modular origami robot where every unit shares energy, data, and perception with its neighbors. When one module completely loses power, sensing, and communication, the surrounding modules effectively revive it by redistributing resources locally.

Traditional multi-agent robots become more fragile as you add modules, because more parts mean more failure points. EPFL demonstrated that partial redundancy does not solve this problem. Only full resource sharing reversed the reliability curve, turning additional complexity into additional resilience rather than additional risk.
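
A toy probability model (ours, not EPFL's math) shows why the curve flips. Treat each module as failing independently with probability p: an isolated design needs every module alive, while full sharing needs only one survivor to revive the rest. A quick Python simulation makes the two curves concrete:

```python
import random

def isolated_reliability(n, p, trials=50_000):
    """No sharing: one dead module takes the whole robot down with it."""
    return sum(all(random.random() > p for _ in range(n)) for _ in range(trials)) / trials

def shared_reliability(n, p, trials=50_000):
    """Full sharing: any surviving module can revive its dead neighbors."""
    return sum(any(random.random() > p for _ in range(n)) for _ in range(trials)) / trials

p = 0.05  # made-up per-module failure probability
for n in (3, 10, 30):
    print(f"n={n:2d}  isolated={isolated_reliability(n, p):.3f}  shared={shared_reliability(n, p):.3f}")
# Isolated reliability falls as modules are added; shared reliability climbs toward 1.
```

In closed form, the isolated design survives with probability (1 - p)^n, which shrinks as n grows, while the fully shared design survives with probability 1 - p^n, which climbs toward certainty. That sign flip is the reversed reliability curve, in miniature.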

Why it matters

If this architecture scales to larger swarms, reliability stops being a tradeoff against complexity. Distributed robots could operate in hostile environments with graceful degradation instead of catastrophic collapse, which would reshape how systems are engineered for space exploration, disaster response, military operations, and autonomous infrastructure maintenance.

The Deets

  • Mori3 uses a modular origami design where each unit is structurally identical
  • Every module shares energy, data, and perception with its neighbors rather than isolating resources
  • A "dead" core module was revived by its neighbors in lab demonstrations

Key takeaway

Most robots fail like a chain: one weak link and everything stops. Mori3 fails like a starfish: lose a piece and the rest keeps going.

🧩 Jargon Buster - Graceful degradation: When a system loses a component but continues to function at reduced capacity instead of failing entirely. The opposite of what happens when your phone screen cracks and everything becomes unusable.


From Sex Dolls to Front Desks: Realbotix's Unlikely Pivot

Realbotix, the robotics company that grew out of the RealDoll adult companion business, is making a play for the mainstream. The company is repositioning its humanoid robots for hotels, casinos, retail, and healthcare, pairing the same lifelike silicone skin, modular interchangeable faces, and camera-embedded robotic eyes with large language models to run front desks and hold autonomous conversations with guests and customers.

Why it matters

The adult industry quietly incubated the most believable humanoid form factors in robotics, and now AI is giving that hardware a new narrative and new buyers. Realbotix's robots can simulate warmth and social nuance that conventional concierge bots were never built to handle, an asymmetry that is difficult to replicate because no mainstream investor would have funded the research that created it.

The Deets

  • Realbotix robots feature modular, interchangeable faces and camera-embedded robotic eyes for natural interaction
  • The company is targeting hotels, casinos, retail, and healthcare as initial verticals
  • LLM integration enables autonomous, unscripted conversations in public-facing roles
  • The embodiment quality comes from years of adult-market R&D that most service robotics companies could never justify

Key takeaway

The first scalable social robots may not come from a Stanford lab or a SoftBank portfolio company. They may come from an industry no one in polite company wants to credit.

🧩 Jargon Buster - Embodiment: How physically realistic and human-like a robot looks and moves. The more convincing the embodiment, the more naturally humans interact with it, which is critical for customer-facing roles.


⚡ Quick Hits

  • Meta patented a social networking system that uses AI trained on a user's interaction data to simulate their responses when they are on a long break or even deceased. Digital afterlife, brought to you by the company that gave us Farmville notifications. (The Rundown AI)
  • India kicked off its AI Impact Summit, hosting Sam Altman, Sundar Pichai, and Dario Amodei. Altman and Amodei each confirmed India is now the second-largest market for ChatGPT and Claude, respectively, and Amodei announced Anthropic's new Bengaluru office. (The Rundown AI)
  • Ireland's Data Protection Commission is investigating xAI's Grok over concerns it can generate sexualized images of women and children, following similar probes in the UK and EU. (The Rundown AI)
  • SpaceX (and xAI) will reportedly compete in the Pentagon's $100M contest to produce voice-controlled, autonomous drone swarming technology. (The Rundown AI)
  • ElevenLabs launched "ElevenLabs for Government" to help public sector agencies deploy secure, multilingual voice and chat AI. (The Rundown AI)
  • Airbnb is integrating large language models into search and support to build an AI-native travel experience. (AI Secret)
  • Google is rolling out Gemini-powered audio summaries in Docs, letting users hear condensed overviews of long documents. (AI Secret)
  • ServiceNow acquired Pyramid Analytics to boost its semantic and analytics capabilities for AI-driven workflows. (AI Secret)
  • ByteDance's Seedance 2.0 continues to draw Hollywood backlash over alleged copyright and likeness infringement in its video generation outputs. (AI Secret)

🧰 Tools of the Day

  • MyClaw.Host - Deploy OpenClaw multi-agent setups on a VPS in under 60 seconds with zero server configuration. For anyone who wants always-on agents without keeping a laptop open.
  • Comp AI - Unblock $1M+ deals with AI-powered compliance. Get SOC 2, ISO 27001, HIPAA, and GDPR ready in roughly 10 hours, hands-off.
  • Remio - Auto-captures your files, meetings, and browsing into a searchable knowledge base you can chat with, now with mobile apps and email sync.
  • AI Blaze - Respond to emails, rewrite text, and correct spelling with AI shortcuts that work across every website.
  • Raccoon AI - Connects to your Gmail, calendar, and documents so you can drop in context and let it handle tasks across all of them at once.
  • EVY - An AI co-creator that works inside any app to write, edit, record meetings, and turn voice notes into polished content.
  • Lunair - A text-to-video tool for landing pages that creates custom, on-brand explainer videos with consistent characters and no stock footage.
  • Kaloria - Tracks calories by photo with AI that recognizes over 100,000 foods, plus a coach with customizable personalities.

Today's Sources: The Rundown AI, AI Secret, TAAFT, Robotics Herald
