Level 1 — Absolute Beginner
Anthropic is a company that makes an AI called Claude. Claude can write, read, and answer questions for people.
Now Anthropic has a new idea. It is called dreaming. AI agents can look back at old work when they are not busy.
While they look back, the agents see what they did wrong. They also see what they did right. Then they can do better next time.
One company tried this idea. The agents got six times better at finishing their work. Anthropic says the idea is still new.
- AI: computer programs that can think and learn
- agent: an AI helper that can do tasks for people
- dreaming: in this story, when an AI looks back at its old work
- session: one time of using a program from start to end
- mistake: something done in the wrong way
- memory: what a person or computer remembers
- company: a group of people who work together to make or sell things
- better: of a higher quality than before
Level 2 — Elementary
Anthropic, the maker of the Claude AI, has shown a new feature for its AI agents at Code with Claude, its yearly developer event in San Francisco. The feature is called dreaming.
Most AI agents forget what happened in their past sessions. Dreaming changes that. The system runs in the background and looks back at all the work an agent has done.
It picks out things the agent did wrong, finds patterns in repeated tasks, and saves useful notes about what worked. The next time the agent gets a similar job, it can use those notes.
Anthropic showed an example from a customer called Harvey, which makes AI for lawyers. With dreaming turned on, Harvey's agents finished about six times more tasks than before. Anthropic calls the feature a research preview, which means it is still being tested.
- feature: a part or function of a product
- developer event: a meeting where companies show new tools to programmers
- background: happening quietly while other things are going on
- pattern: something that happens again and again in the same way
- notes: short pieces of information written down to remember
- customer: a person or company that pays to use something
- task: a piece of work to be done
- research preview: an early version that is being tested before a full release
Level 3 — Intermediate
Anthropic used its second annual Code with Claude developer conference in San Francisco this week to unveil dreaming, an unusual new capability for its Claude Managed Agents. The idea borrows loosely from neuroscience: between live tasks, an agent reviews its own previous sessions and consolidates what it has learned, much as the human brain is thought to do during sleep.
Technically, dreaming is a scheduled background process. It scans an agent's stored memory and past transcripts, merges duplicate entries, removes outdated facts, and tags recurring patterns such as a tool that often produces a particular kind of error or a workflow that several agents converge on independently.
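Anthropic has not published implementation details, but the consolidation pass described above — merge duplicates, drop stale facts, tag recurring patterns — can be sketched in a few lines. Everything here (the memory-record keys, the thresholds, the `dream` function itself) is a hypothetical illustration, not Anthropic's actual code.

```python
from collections import Counter
from datetime import datetime, timedelta

def dream(memories, now, max_age_days=90, pattern_threshold=3):
    """One hypothetical consolidation pass over an agent's stored memories."""
    # Merge duplicate entries: keep only the most recently seen copy of each fact.
    latest = {}
    for m in memories:
        if m["fact"] not in latest or m["seen"] > latest[m["fact"]]["seen"]:
            latest[m["fact"]] = m
    # Remove outdated facts that have not been seen recently.
    cutoff = now - timedelta(days=max_age_days)
    fresh = [m for m in latest.values() if m["seen"] >= cutoff]
    # Tag recurring patterns, e.g. a tool that keeps producing related errors.
    tag_counts = Counter(m["tag"] for m in fresh if m.get("tag"))
    patterns = {tag for tag, n in tag_counts.items() if n >= pattern_threshold}
    return fresh, patterns
```

On sample data, three distinct failure notes sharing one tool tag would survive deduplication and pruning, and the shared tag would be promoted to a recurring pattern for the agent to consult later.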
The promise is significant. Anthropic says that one early customer, the legal-AI start-up Harvey, used dreaming to teach its agents about quirky filetype workarounds and law-firm-specific tool patterns. In Harvey's tests, the completion rate for end-to-end tasks rose roughly six-fold after the feature was switched on.
Dreaming was launched as a research preview, meaning it is available for limited use and may change before general release. Two related features announced at the same event — outcomes-based memory and multi-agent orchestration — are already in public beta as part of Managed Agents.
- unveil: to show something publicly for the first time
- consolidate: to combine pieces into a single, stronger whole
- scheduled: planned to happen at a particular time
- transcript: a written record of something that was said or done
- workflow: the sequence of steps used to complete a task
- completion rate: the share of tasks that are finished successfully
- research preview: an early test version made available to a small group
- orchestration: the coordinated arrangement of several parts working together
Level 4 — Advanced
At Anthropic's second Code with Claude developer conference in San Francisco this week, the company introduced a research-preview capability for its Managed Agents called dreaming — a borrowed metaphor that turns out to be more than marketing. Inspired by the consolidation phase of mammalian sleep, dreaming runs as a scheduled background process between live agent sessions, mining accumulated transcripts and memory stores for the kinds of patterns no individual run could surface on its own: recurring tool failures, idiosyncratic file-format pitfalls, latent preferences shared across a team of agents working for the same customer.
The technical contribution is less a single algorithm than an architectural choice. Most contemporary agentic systems try to learn within a session, either through chain-of-thought reflection or through ephemeral scratchpad memory that is wiped at the end of each task. Dreaming is built around the opposite intuition. It assumes the most valuable lessons emerge across sessions and over time, and it grants the agent the offline compute to find them. Memories are merged, contradictions resolved, stale entries pruned, and recurring failure modes promoted into durable rules that the agent will consult before similar tasks in the future.
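The cross-session half of that intuition — a failure mode only becomes a durable rule once it recurs in several independent sessions — can also be sketched. The function name, log shape, and threshold below are illustrative assumptions; the source describes the behavior only at this level of generality.

```python
from collections import Counter

def promote_rules(session_logs, min_sessions=3):
    """Hypothetical sketch: promote failure modes that recur across
    sessions into durable rules the agent consults before similar tasks."""
    seen_in = Counter()
    for log in session_logs:
        # Count distinct sessions, not raw occurrences: a failure repeated
        # five times in one session is weaker evidence than one failure
        # seen in five separate sessions.
        for mode in set(log["failures"]):
            seen_in[mode] += 1
    return [f"before using {mode}: check known workaround"
            for mode, n in seen_in.items() if n >= min_sessions]
```

A mode seen in three of four sessions would be promoted; one seen in a single session, however often, would not — matching the claim that the valuable lessons emerge across sessions rather than within them.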
Early customer data, while plainly cherry-picked for the keynote, hints at non-trivial gains. Harvey, the legal-AI start-up that has emerged as one of Anthropic's flagship enterprise references, reported a roughly sixfold rise in end-to-end task completion after dreaming was enabled, attributing the jump to agents finally remembering filetype workarounds and firm-specific quirks rather than re-discovering them on every run. Anthropic is pairing the research preview with two more conventional features now entering public beta — outcomes-based memory, which biases retention toward what actually solved a task, and multi-agent orchestration, a control plane for fleets of cooperating Claude agents.
What makes dreaming intellectually interesting is how openly it concedes the limits of in-context learning. Bigger context windows, more sophisticated retrieval, and richer scratchpads have been the dominant strategies for giving agents the appearance of memory; dreaming bets that genuine self-improvement requires real consolidation — periodic, costly, and somewhat opaque — analogous to what biological brains spend a third of their lives doing. Whether the analogy holds, or whether dreaming will simply become another tool in the agent-developer's toolkit, will depend on how broadly customers see the same multiplicative gains Harvey reports once the preview opens up.
- consolidation: the process of strengthening and unifying scattered information
- idiosyncratic: peculiar to one individual or situation
- ephemeral: lasting only a very short time
- intuition: an underlying idea or instinct used to guide a design