Meta Glasses Capture You Nude? GPT-5.4 Agent Ready; Cheat App Fudged Revenue 🙄

Today's AI Outlook: 🌥️

GPT-5.4 Arrives, Starts Using Computers Like A Human

OpenAI rolled out GPT-5.4, its newest flagship model, and early benchmarks suggest the company is leaning hard into agent-style AI that can operate software environments rather than just produce text. The release landed only two days after GPT-5.3 Instant became the default chat model, signaling an unusually rapid release cadence from OpenAI.

The model introduces improvements across reasoning, coding, science, and math, but the most interesting shift is how it performs in real desktop environments. GPT-5.4 scored 75% on the OSWorld-V benchmark, which tests how well an AI can navigate and complete tasks on an operating system. That score beats the human baseline of 72.4% and is roughly double the score GPT-5.2 achieved. The model also includes a 1M-token context window and a new “x-high” reasoning mode designed for multi-hour agent workflows.

Independent testing also points in the same direction. One hands-on test running GPT-5.4 inside OpenClaw automation workflows found the model capable of reasoning, coding, browsing, and interpreting screenshots in a single loop. Performance landed slightly above Claude Sonnet 4.6, with particularly strong reliability across multi-step tasks.

Why it matters

The frontier of AI is shifting from chat interfaces to autonomous software operators. If models can reliably use computers the same way employees do, companies can automate entire workflows without building specialized integrations.

In practical terms, GPT-5.4 looks less like a chatbot upgrade and more like a digital worker capable of running software tools on its own.

The Deets

• 75% score on OSWorld-V, beating the human baseline

• 1M token context window for long projects and research

• 83% success rate vs professionals on GDPval job benchmark

• New “x-high reasoning” mode for long agent workflows

• Early tests show strong performance in OpenClaw automation systems

Key takeaway

GPT-5.4 may not be the biggest intelligence leap yet, but it could become the most practical model for running AI agents inside real software environments.

🧩 Jargon Buster - Agent execution: When an AI system completes tasks autonomously across multiple tools and applications instead of only generating responses.
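For readers curious what "agent execution" looks like under the hood, here is a minimal sketch of the loop most agent harnesses implement: the model proposes an action, the harness runs it, and the observation feeds the next step. All names here are illustrative, not OpenAI's or any vendor's real API.

```python
# Minimal agent-execution loop: the model proposes a tool call,
# the harness executes it, and the result is appended to history
# so the model can decide the next step.

def run_agent(model, tools, goal, max_steps=10):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        action = model(history)  # e.g. {"tool": "click", "args": {...}}
        if action["tool"] == "done":
            return action["args"].get("result")
        observation = tools[action["tool"]](**action["args"])
        history.append({"role": "tool", "content": str(observation)})
    return None  # step budget exhausted without finishing
```

Real systems add screenshots, retries, and safety checks around this skeleton, but the observe-act cycle is the core of what benchmarks like OSWorld-V measure.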


Meta’s Smart Glasses Spark Privacy Backlash

Investigations by Swedish newspapers revealed that data-labeling contract workers in Nairobi, Kenya, who review footage captured by Meta’s AI-powered Ray-Ban smart glasses, reported seeing bathroom visits, nudity, private household moments, and sensitive information in the clips. Although Meta attempts to blur footage before sending it to human reviewers, annotators say the safeguards often fail, leaving identifiable details such as faces, homes, and bank cards visible.

Scale magnifies the issue. Meta reportedly sold over 7 million pairs of smart glasses in 2025, meaning millions of wearable cameras are now capturing everyday life and occasionally feeding those recordings into AI training systems.

Why it matters

AI assistants are moving beyond phones and computers into always-on wearable devices. When those devices include cameras and microphones, the boundary between user interaction and surveillance becomes blurry.

Smart glasses are likely just the beginning. Similar assistants are already appearing in earbuds, pins, and other wearable hardware, creating a growing pipeline of real-world data used to train AI models.

The Deets

• Human reviewers located in Kenya’s AI annotation industry

• Workers reported seeing nudity and intimate personal moments

• 7M+ Meta smart glasses sold in 2025

• Blurring tools reportedly fail to fully anonymize footage

Key takeaway

The biggest data source for next-generation AI models may be the real world captured by wearable devices.

🧩 Jargon Buster - Data annotation: The process of labeling images, audio, or video so machine learning systems can learn patterns from real-world data.
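Concretely, an annotation is just a structured record a human reviewer attaches to a raw clip so a model can learn from it. A toy sketch (field names are illustrative, not Meta's pipeline):

```python
# Toy data-annotation record: a reviewer attaches labels to a clip.
# In production pipelines these records carry timestamps, bounding
# boxes, and quality-control metadata as well.

def annotate(clip_id, labels, reviewer):
    return {
        "clip": clip_id,
        "labels": sorted(labels),  # normalized for consistency
        "reviewer": reviewer,
    }

sample = annotate("clip_0042", {"indoor", "person"}, "annotator_7")
```

The privacy concern in the story is exactly this step: a human must look at the raw footage to produce the labels.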


⚡ Power Plays

Netflix Buys Ben Affleck’s AI Filmmaking Startup

Netflix has acquired InterPositive, an AI film-production startup founded by Ben Affleck in 2022. The acquisition brings the company’s 16-person team into Netflix and places Affleck in a senior adviser role.

InterPositive focuses on AI tools that improve production workflows rather than generating movies from scratch. The system trains models on a film’s own footage and then performs tasks such as relighting scenes, swapping backgrounds, and fixing continuity errors during post-production.

Affleck has been outspoken about his skepticism toward generative scripts, arguing that most AI models lack real storytelling knowledge. His company instead targets the expensive and time-consuming technical work that happens after filming.

Why it matters

Hollywood’s relationship with AI has been tense since the writers’ and actors’ strikes. Tools that enhance production instead of replacing creators could become the industry’s most widely accepted AI applications.

The Deets

• Netflix acquires InterPositive AI startup

• 16 employees join Netflix with the deal

• Technology focuses on editing and post-production tasks

• Models trained on existing production footage

Key takeaway

The biggest AI disruption in Hollywood may arrive quietly inside post-production pipelines, not as AI-generated movies.

🧩 Jargon Buster - Post-production: The stage of filmmaking where footage is edited, enhanced, and assembled into the final movie.


🚀 Funding & Startups

Shocker: Cheating Tools Startup Admits Cheating Its Metrics

Cluely, a startup that built viral attention for software designed to secretly assist users during video interviews, has admitted that a widely cited $7M annual recurring revenue figure was fabricated.

CEO Roy Lee acknowledged the claim on X, initially describing it as an offhand response during a cold call. Later evidence showed the interview was actually arranged by the company’s PR team, raising questions about how the number spread.

Cluely later shared Stripe screenshots suggesting the company’s real revenue was several million dollars lower than the original claim.

Why it matters

Cluely raised $5.3M in seed funding and $15M in Series A while building a brand around “cheating tools.” The irony is difficult to miss. A company that marketed software designed to help users cheat during interviews ended up inflating its own growth narrative.

The Deets

• CEO admitted $7M ARR claim was fabricated

• Company raised $20M+ across seed and Series A

• Revenue evidence suggests significantly lower figures

Key takeaway

In the AI startup boom, attention is easy to manufacture but credibility still matters.

🧩 Jargon Buster - ARR (Annual Recurring Revenue): A startup metric estimating predictable yearly subscription revenue.
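The standard convention is to annualize monthly recurring revenue: ARR = MRR × 12. A quick sketch with made-up figures, for illustration only:

```python
# ARR annualizes monthly recurring revenue: ARR = MRR * 12.
# The dollar amounts here are invented for illustration.

def arr_from_mrr(mrr: float) -> float:
    return mrr * 12

# A startup booking $250K/month in subscriptions has $3M ARR,
# well short of a claimed $7M.
print(arr_from_mrr(250_000))
```

Because ARR is a projection rather than audited revenue, it is one of the easiest startup metrics to inflate, which is what made Cluely's claim hard to check from the outside.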


🔬 Research & Robotics

AI Chips That Compute With Light

Researchers at Xidian University in China demonstrated reinforcement learning on photonic neuromorphic chips, a type of AI hardware that processes neural computations entirely with light.

Earlier optical AI systems still relied on electronic circuits for nonlinear operations, creating performance bottlenecks. The new design performs both linear and nonlinear computations optically, enabling a fully photonic neural network.

Tests on reinforcement-learning tasks like CartPole and inverted pendulum control showed only small performance drops compared with traditional software models.

Why it matters

Photonic AI chips could dramatically reduce energy consumption and increase speed in future AI systems. Light travels faster than electrical signals and produces far less heat, which could transform AI data centers and robotics hardware.

The Deets

• Neural computations performed entirely with light

• Removes electronic bottlenecks in photonic AI systems

• Demonstrated using reinforcement learning tasks

Key takeaway

Future AI processors may rely on photons instead of electrons, potentially making AI systems faster and more energy efficient.

🧩 Jargon Buster - Photonic computing: A computing method that processes information using light rather than electrical signals.


Tesla’s Bot Dreams Bigger Than Its Cars

Elon Musk is now pitching Tesla’s humanoid robot ambitions as something even bigger than a factory helper.

In comments reported by PYMNTS on March 5, Musk said Tesla could eventually build humanoid robots with artificial general intelligence, or AGI, and suggested the company may be the first to do it in a humanoid, real-world form. This fits neatly into Tesla’s broader campaign to position itself not just as an EV company, but as an AI, autonomy, and robotics company.

The timing is notable because Tesla is reportedly preparing to introduce Optimus Gen 3 in Q1 2026 (tick tock, Elon), with this version aimed at large-scale manufacturing rather than flashy demo duty.

The plan, according to the report, is to first deploy Optimus inside Tesla factories, where the company can gather data, improve the hardware and tune the machine-learning systems in a controlled setting. In other words, Tesla wants the robot to learn on the factory floor before it starts pitching itself as the mechanical answer to everyday labor.

Why it matters

This is Tesla trying to turn Optimus from a moonshot into an earthbound product roadmap. The bigger message is not just that Tesla wants a robot. It is that Tesla sees humanoid robotics as the next platform shift, one that could sit alongside autonomous vehicles as a major future revenue engine.

Still, the robotics industry is wrestling with the basics, including dexterity, reliability, and consistent real-world performance.

There is also a gap between the story companies tell about humanoid robots and the reality on the ground. Many robots still struggle to produce dependable productivity gains in workplaces. That means Tesla is selling two futures at once: a near-term one where Optimus does repetitive industrial work, and a long-term one where the robot becomes a general-purpose machine with intelligence approaching or exceeding human capability. That is a very long bridge to cross.

The Deets

• Musk said Tesla will be one of the companies to develop AGI

• He added Tesla will probably be the first to make it in humanoid or “atom-shaping” form

• Tesla is expected to introduce Optimus Gen 3 in Q1 2026

• Gen 3 is reportedly designed as the first version built for scale manufacturing

Key takeaway

Tesla is trying to make Optimus sound less like a side project and more like the company’s next act.

🧩 Jargon Buster - AGI: A form of AI that can handle most intellectual tasks at human level or beyond, rather than being limited to narrow jobs like driving, coding, or summarizing text.


⚡ Quick Hits

• Engineers developed PipeINEER, a mouse-sized robot capable of traveling 6 km through CERN collider pipes to inspect infrastructure.

• Google released an open-source Workspace CLI with 40+ built-in agent skills for AI automation platforms.

• AWS launched Amazon Connect Health, an AI agent platform designed to automate administrative tasks for healthcare providers.

• Agility Robotics rebranded to “Agility” as the company expands into broader automation services.


🧰 Tools of the Day

Manus - An AI workflow tool that converts structured documents like investment memos or reports into polished slide decks, exporting directly to Google Slides, PDF, or PowerPoint.

Aident AI Beta 2 - A plain-English automation platform with 1,000+ integrations, MCP updates, and centralized monitoring for managing AI workflows.

Coursekit - A no-code platform that creates 24/7 AI tutors from a course URL, allowing educators to deploy branded AI teaching assistants.


Today’s Sources: AI Secret, The Rundown AI, Robotics Herald
