May 6 brought three very different AI stories: a self-improvement feature from Anthropic, a privacy controversy from Google, and medical AI data that will be hard to ignore.
Anthropic "Dreaming" — Agents That Self-Improve Between Sessions
Announced at Anthropic's San Francisco developer conference and reported by Reuters, "Dreaming" is a research preview feature built into Anthropic's agent management software; it is not yet generally available. Between active sessions, a Dreaming-enabled agent reviews its completed work, identifies patterns in past outputs, and autonomously updates its preference files and stored context. The goal is incremental self-improvement without user intervention. Anthropic also announced wider availability of its task delegation feature, which lets a primary agent break a complex task into components and assign them to specialist subagents.
The timing of the Dreaming announcement, one day after the financial agents launch, underlines Anthropic's strategy at its San Francisco conference: position Claude as infrastructure for long-running, autonomous enterprise workflows rather than an interactive chat tool.
Chrome Is Installing a 4GB AI Model Without Explicit Consent
Security and privacy researchers confirmed this week that Google Chrome is silently installing a 4GB AI model on user devices via a background update, without explicit user notification or opt-in. The model, described as a "nano" model bundled with the browser, has drawn concern from privacy advocates and from enterprise IT teams managing Chrome deployments at scale. As of May 6, Google has not issued a formal statement on an opt-out mechanism.
Harvard Trial: OpenAI o1 Outperforms Triage Doctors
A clinical trial published this week reports that OpenAI's o1 model correctly diagnosed 67% of emergency room patients, compared with 50–55% accuracy for triage doctors in the same study. The trial was conducted at a Harvard-affiliated hospital. The finding adds to a growing body of clinical AI evidence, though the researchers note that deployment at scale raises questions about liability, workflow integration, and over-reliance on model outputs in time-critical settings.
For full context on the week's AI announcements, visit the May 2026 AI News Hub.