OpenClaw Dreaming: Machines Start Dreaming While Humans Lose Sleep

OpenClaw introduced Dreaming, a memory consolidation system modeled on light sleep, deep sleep, and REM to help agents retain signal and discard noise.

Long-term memory has always been a weak point for large models. As context grows, so does the burden of managing it: an agent may appear to remember everything, yet grow steadily worse at judging what matters and what should be forgotten.

On April 5, OpenClaw introduced an experimental feature called Dreaming. It is not just a catchy label. It is a background memory-management system modeled on human sleep, designed to help agents wake up with cleaner and more useful memory.

01 A sleep-based pipeline for memory consolidation

Dreaming does more than index data. It breaks memory processing into three stages that mirror different functions of human sleep.

Light Sleep: the system scans recent conversations and retrieval traces, removes duplication, and builds a candidate list. At this stage, it only buffers information and does not modify the core memory file MEMORY.md.

Deep Sleep: the system applies stricter filters to identify durable information. Only entries that pass thresholds for score, recall count, and distinct query count move forward. Before writing anything, it checks the latest logs again to remove stale content. The final result is appended to MEMORY.md, while a deep-sleep summary is written to DREAMS.md.

REM: after memory consolidation, the system looks for hidden links across recent behavior traces. It extracts patterns and reflective summaries, then stores them in a dedicated REM section to help the agent respond with better structure and broader context.

Dreaming also produces a human-readable dream journal. Once enough material accumulates, a background sub-agent calls the default model and appends a short natural-language entry to DREAMS.md.
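The three stages can be sketched as a simple pipeline. This is a minimal illustration, not OpenClaw's actual implementation: the function names, the `MemoryItem` fields, and the deep-sleep thresholds (`min_score`, `min_recalls`, `min_queries`) are all assumptions chosen to mirror the description above, and a real REM stage would use a model to cluster and summarize rather than a placeholder.

```python
# Hypothetical sketch of Dreaming's three stages. Names and thresholds
# are illustrative only, not OpenClaw's real code.
from dataclasses import dataclass


@dataclass
class MemoryItem:
    text: str
    score: float = 0.0         # weighted score (see section 02)
    recall_count: int = 0      # how often the item was recalled
    distinct_queries: int = 0  # how many different prompts surfaced it


def light_sleep(traces: list[str]) -> list[MemoryItem]:
    """Scan recent traces, drop duplicates, build a candidate buffer.
    MEMORY.md is not touched at this stage."""
    seen: set[str] = set()
    candidates: list[MemoryItem] = []
    for trace in traces:
        key = trace.strip().lower()
        if key not in seen:
            seen.add(key)
            candidates.append(MemoryItem(text=trace.strip()))
    return candidates


def deep_sleep(candidates: list[MemoryItem],
               min_score: float = 0.5,
               min_recalls: int = 2,
               min_queries: int = 2) -> list[MemoryItem]:
    """Keep only durable entries; in the real system these would be
    appended to MEMORY.md after a final staleness check."""
    return [c for c in candidates
            if c.score >= min_score
            and c.recall_count >= min_recalls
            and c.distinct_queries >= min_queries]


def rem(kept: list[MemoryItem]) -> list[str]:
    """Derive cross-item patterns for a dedicated REM section.
    Placeholder: a real system would summarize with a model."""
    return [f"pattern: {item.text}" for item in kept]
```

The key structural point the stages enforce: light sleep only buffers, deep sleep is the sole writer of durable memory, and REM operates on what survived rather than on raw traces.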

02 A scoring system for deciding what deserves to stay

The real point of Dreaming is not just organizing memory, but filtering it. Instead of keeping everything, OpenClaw uses a weighted scoring model to decide what belongs in long-term storage.

The six dimensions are:

  • Relevance (30%): how useful the information is when retrieved.
  • Frequency (24%): how often the item appears in short-term signals.
  • Query diversity (15%): whether it shows up across different prompts and contexts.
  • Recency (15%): whether the information is still fresh and actionable.
  • Integration (10%): whether it remains stable across multiple days.
  • Concept richness (6%): how dense and connected its concept graph is.

In practice, this means the system tries to keep information that is repeated, useful, current, and broadly applicable, while letting lower-value noise fade away.
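The six weights above sum to exactly 1.0, so the score can be read as a weighted average. A minimal sketch of such a scorer follows; the weights come from the article, but the assumption that each feature is normalized to [0, 1] is mine, and `memory_score` is a hypothetical name, not OpenClaw's API.

```python
# Illustrative weighted scorer. Weights follow the article's six
# dimensions; the [0, 1] feature scaling is an assumption.
WEIGHTS = {
    "relevance": 0.30,        # usefulness when retrieved
    "frequency": 0.24,        # appearances in short-term signals
    "query_diversity": 0.15,  # seen across different prompts/contexts
    "recency": 0.15,          # still fresh and actionable
    "integration": 0.10,      # stable across multiple days
    "concept_richness": 0.06, # density of its concept graph
}


def memory_score(features: dict[str, float]) -> float:
    """Each feature is assumed normalized to [0, 1]; missing features
    count as 0. Because the weights sum to 1, the result is in [0, 1]."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)
```

Under this scheme, an item that is highly relevant but never repeated tops out at 0.30, while repetition across diverse, recent queries is what pushes an item over a retention threshold.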

03 Why it reminds people of Claude’s “dreaming” approach

Some developers have noted that Dreaming resembles the automated dreaming logic described in leaked Claude Code material around the KAIROS system. Older approaches that repeatedly rewrote the entire MEMORY.md could become messy over time. By splitting the flow into light sleep, deep sleep, and REM, Dreaming makes the pipeline more explicit: consolidate first, preserve next, and derive higher-level patterns last.

Others have highlighted the neuroscience angle. Terms like Dreaming, Light Sleep, Deep Sleep, and REM are not random branding. They directly borrow from human models of sleep-based memory consolidation.

OpenClaw already uses files like IDENTITY.md, USER.md, and HEARTBEAT.md to preserve identity, user context, and continuity. DREAMS.md fills in the missing piece: deciding which memories are actually worth keeping.

04 The most ironic part: machines dream, humans stay awake

The value of Dreaming is not that AI remembers everything. It is that AI learns to review short-term traces, extract patterns, and discard noise. A strong agent should not behave like a dumb storage device. It should become better over time at understanding a user’s preferences, recurring goals, and long-term context.

From an engineering perspective, the most interesting part is that the system is not presented as a mystical black box. It is a structured backend process with stages, thresholds, reflection, and forgetting rules. That makes AI memory feel less like uncontrolled context bloat and more like a designed system.

That is also what makes the whole thing feel ironic. We are spending enormous effort teaching machines how to dream, while many people are losing sleep over being replaced by those same increasingly capable systems.
