<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Multi-Agent on KnightLi Blog</title>
        <link>https://www.knightli.com/en/tags/multi-agent/</link>
        <description>Recent content in Multi-Agent on KnightLi Blog</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en</language>
        <lastBuildDate>Mon, 27 Apr 2026 08:19:02 +0800</lastBuildDate><atom:link href="https://www.knightli.com/en/tags/multi-agent/index.xml" rel="self" type="application/rss+xml" /><item>
        <title>Ralph and Multi-Agent Collaboration: How to Keep AI Working Reliably Over Long Tasks</title>
        <link>https://www.knightli.com/en/2026/04/27/ralph-multi-agent-long-running-ai-workflows/</link>
        <pubDate>Mon, 27 Apr 2026 08:19:02 +0800</pubDate>
        
        <guid>https://www.knightli.com/en/2026/04/27/ralph-multi-agent-long-running-ai-workflows/</guid>
<description>&lt;p&gt;If you have been using coding agents lately, you have probably run into a very practical question: &lt;strong&gt;AI can work, sure, but how do you keep it working for hours without drifting, forgetting requirements, or redoing the same work?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;That is the real question behind many discussions around &lt;code&gt;Ralph&lt;/code&gt; and multi-agent collaboration. The point is not simply to compare which model is stronger. The more useful question is this: &lt;strong&gt;how do you design a workflow that lets AI stay stable during long tasks?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If you break the problem down, there are usually two main routes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;Ralph&lt;/code&gt; approach: keep starting fresh sessions and connect context through the filesystem&lt;/li&gt;
&lt;li&gt;The multi-agent approach: let a lead agent coordinate while worker agents split the execution&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Put more simply, the question is not &amp;ldquo;which model is more powerful,&amp;rdquo; but &amp;ldquo;how do you organize AI so it behaves more like a small team that can keep delivering?&amp;rdquo;&lt;/p&gt;
&lt;h2 id=&#34;01-why-long-tasks-go-off-the-rails&#34;&gt;01 Why Long Tasks Go Off the Rails
&lt;/h2&gt;&lt;p&gt;In short tasks, many problems stay hidden. You give an instruction, the model reads a few files, changes a few lines, and the job is done.&lt;/p&gt;
&lt;p&gt;Once the task gets longer, the common failure modes start to pile up:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Conversations grow longer and context starts to bloat&lt;/li&gt;
&lt;li&gt;Earlier requirements get squeezed out by newer information&lt;/li&gt;
&lt;li&gt;One agent has to plan, implement, and test at the same time&lt;/li&gt;
&lt;li&gt;Without a clear acceptance step, &amp;ldquo;it is done&amp;rdquo; often just means &amp;ldquo;it says it is done&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So when AI runs for a long time, the real challenge is often not single-shot model quality. It is &lt;strong&gt;task slicing, state handoff, role separation, and feedback loops&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id=&#34;02-the-ralph-approach-break-long-tasks-into-short-rounds&#34;&gt;02 The Ralph Approach: Break Long Tasks into Short Rounds
&lt;/h2&gt;&lt;p&gt;&lt;code&gt;Ralph&lt;/code&gt; is a good fit when the main problem is dirty, overloaded context.&lt;/p&gt;
&lt;p&gt;Its core pattern is straightforward:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Keep launching new agent sessions in a loop&lt;/li&gt;
&lt;li&gt;Let each round handle only one sufficiently small task&lt;/li&gt;
&lt;li&gt;Store cross-round state in files instead of forcing everything into one conversation&lt;/li&gt;
&lt;/ul&gt;
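&lt;p&gt;The loop above can be sketched in a few lines. Everything here is illustrative: the file names, the &lt;code&gt;agent_cmd&lt;/code&gt; command, and its &lt;code&gt;-p&lt;/code&gt; prompt flag are stand-ins, not the interface of any specific tool:&lt;/p&gt;

```python
from pathlib import Path
import subprocess

def next_task(tasks_file):
    """Return the first task not marked DONE, or None when all are finished."""
    for line in Path(tasks_file).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("DONE"):
            return line
    return None

def run_round(task, progress_file, agent_cmd):
    """One fresh session: the prompt carries only the current task plus the
    accumulated notes file, never the previous conversation history."""
    notes = Path(progress_file).read_text() if Path(progress_file).exists() else ""
    prompt = f"Task: {task}\n\nNotes from earlier rounds:\n{notes}"
    # Hypothetical CLI invocation; swap in your real coding-agent command.
    return subprocess.run([agent_cmd, "-p", prompt], capture_output=True, text=True)

def ralph_loop(tasks_file, progress_file, agent_cmd, max_rounds=50):
    """Outer loop: keep launching fresh sessions until the task list is empty."""
    for _ in range(max_rounds):
        task = next_task(tasks_file)
        if task is None:
            break
        run_round(task, progress_file, agent_cmd)
```

&lt;p&gt;The key property is that each call to &lt;code&gt;run_round&lt;/code&gt; starts from zero conversation state; only the task line and the notes file cross round boundaries.&lt;/p&gt;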
&lt;p&gt;The benefit is immediate: every round starts with fresh context, so the session stays more focused and is less likely to get dragged down by old history.&lt;/p&gt;
&lt;p&gt;If you have already looked at &lt;code&gt;Ralph&lt;/code&gt;-style projects, the structure will feel familiar:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Current tasks live in structured files&lt;/li&gt;
&lt;li&gt;Intermediate learnings go into progress files&lt;/li&gt;
&lt;li&gt;Code changes stay in git history&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In other words, &lt;code&gt;Ralph&lt;/code&gt; does not try to make one agent remember everything forever. It externalizes memory on purpose so the session itself can stay lighter.&lt;/p&gt;
&lt;p&gt;This kind of setup works especially well when:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The work can already be split into small stories&lt;/li&gt;
&lt;li&gt;Each story can fit inside one context window&lt;/li&gt;
&lt;li&gt;The project already has tests, typecheck, or other checks&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It is a solution to the problem of &lt;strong&gt;how to keep AI moving forward one round at a time&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id=&#34;03-the-multi-agent-approach-split-the-work-one-agent-cannot-handle-alone&#34;&gt;03 The Multi-Agent Approach: Split the Work One Agent Cannot Handle Alone
&lt;/h2&gt;&lt;p&gt;The other route is multi-agent collaboration.&lt;/p&gt;
&lt;p&gt;In this kind of workflow design, the more promising pattern is usually this: the lead agent should not do all the work directly. Instead, it coordinates while other agents handle development, testing, checking, and acceptance.&lt;/p&gt;
&lt;p&gt;That differs from &lt;code&gt;Ralph&lt;/code&gt; in an important way:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Ralph&lt;/code&gt; feels more like serial iteration&lt;/li&gt;
&lt;li&gt;Multi-agent work feels more like parallel division of labor&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When the task naturally contains different roles, multi-agent collaboration becomes easier to use. For example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;One agent breaks down the task and writes the execution plan&lt;/li&gt;
&lt;li&gt;One agent implements the actual change&lt;/li&gt;
&lt;li&gt;One agent tests and validates the result&lt;/li&gt;
&lt;li&gt;One agent checks whether the result still matches the original goal&lt;/li&gt;
&lt;/ul&gt;
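&lt;p&gt;A minimal sketch of that role split, with a stubbed &lt;code&gt;call_agent&lt;/code&gt; standing in for a real model or CLI call (the role names and prompts are illustrative, not a fixed protocol):&lt;/p&gt;

```python
# Stub: in a real setup this would dispatch to a model or agent CLI.
def call_agent(role, prompt):
    return f"[{role}] handled: {prompt}"

def run_feature(goal):
    """Lead-agent coordination: each role sees only its own stage input,
    and the stages hand their outputs forward explicitly."""
    plan = call_agent("planner", f"Break down and plan: {goal}")
    change = call_agent("implementer", f"Implement according to: {plan}")
    report = call_agent("tester", f"Test and validate: {change}")
    verdict = call_agent(
        "acceptor",
        f"Does the result still match the goal {goal!r}? Evidence: {report}",
    )
    return verdict
```

&lt;p&gt;Note that the lead code never touches implementation detail itself; it only routes stage outputs, which is the role separation the text describes.&lt;/p&gt;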
&lt;p&gt;The point is not to open more windows for the sake of it. The real value is role separation. Tasks that used to be piled onto one agent can now be split into clearer stages.&lt;/p&gt;
&lt;p&gt;Once the role boundaries are clear, several problems become lighter:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The agent writing the code does not have to be the one reviewing it&lt;/li&gt;
&lt;li&gt;The testing side does not have to reconstruct the full requirement every time&lt;/li&gt;
&lt;li&gt;The lead agent is less likely to drown in implementation detail&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is a solution to the problem of &lt;strong&gt;how to make AI cooperate more like a small team&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id=&#34;04-the-real-key-is-not-parallelism-but-task-design&#34;&gt;04 The Real Key Is Not Parallelism, but Task Design
&lt;/h2&gt;&lt;p&gt;Whether you choose &lt;code&gt;Ralph&lt;/code&gt; or multi-agent collaboration, the easiest thing to underestimate is this: &lt;strong&gt;workflow design matters more than opening more agents.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If the task split is wrong, adding more agents only parallelizes the confusion.&lt;/p&gt;
&lt;p&gt;A more stable breakdown usually has a few traits:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;One task maps to one clear objective&lt;/li&gt;
&lt;li&gt;One role owns one category of output&lt;/li&gt;
&lt;li&gt;Every round has a clear done condition&lt;/li&gt;
&lt;li&gt;The output of one round can be consumed directly by the next&lt;/li&gt;
&lt;/ul&gt;
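&lt;p&gt;One way to make those traits concrete is a small task record that forces each of them to be written down; the field names and example values below are hypothetical:&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class Task:
    objective: str       # one task maps to one clear objective
    owner_role: str      # one role owns one category of output
    done_condition: str  # a checkable statement of when the round is finished
    output_path: str     # the artifact the next round consumes directly

login_task = Task(
    objective="Add a login endpoint returning a session token",
    owner_role="implementer",
    done_condition="POST /login with valid credentials returns 200 and a token",
    output_path="notes/login-implementation.md",
)
```

&lt;p&gt;If any field is hard to fill in, that is usually a sign the task is still too vague to hand to an agent.&lt;/p&gt;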
&lt;p&gt;For example, instead of giving AI one giant instruction like &amp;ldquo;build the whole feature,&amp;rdquo; a steadier structure is often:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Break out requirements and boundaries first&lt;/li&gt;
&lt;li&gt;Then split implementation&lt;/li&gt;
&lt;li&gt;Then split testing&lt;/li&gt;
&lt;li&gt;Then make acceptance its own step&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The advantage is that when something goes wrong, it becomes easier to tell whether the problem sits in understanding, implementation, testing, or delivery criteria.&lt;/p&gt;
&lt;h2 id=&#34;05-why-acceptance-matters-so-much&#34;&gt;05 Why Acceptance Matters So Much
&lt;/h2&gt;&lt;p&gt;Many AI workflows fail not because the earlier steps produced nothing, but because the final step lacked a genuinely independent confirmation pass.&lt;/p&gt;
&lt;p&gt;In long tasks, there is often a wide gap between &amp;ldquo;a result was produced&amp;rdquo; and &amp;ldquo;the result is actually usable.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;So one especially important direction is to separate development from acceptance. Even without a complex process, it is worth asking at least these questions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Did it really complete the original task?&lt;/li&gt;
&lt;li&gt;Did it only patch the surface without fixing the root cause?&lt;/li&gt;
&lt;li&gt;Did testing cover only the happy path?&lt;/li&gt;
&lt;li&gt;Did the upstream requirement get silently changed along the way?&lt;/li&gt;
&lt;/ul&gt;
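&lt;p&gt;Those questions are easy to turn into an explicit checklist that the acceptance agent must answer rather than skip. The structure below is one possible sketch, not a prescribed format:&lt;/p&gt;

```python
ACCEPTANCE_QUESTIONS = [
    "Does the change complete the original task, not a reinterpreted one?",
    "Does it fix the root cause rather than patching the surface?",
    "Do the tests cover failure paths, not only the happy path?",
    "Is the upstream requirement unchanged from the original wording?",
]

def accept(answers):
    """Independent acceptance pass: every question must be answered yes.
    `answers` maps each question to a boolean from the reviewing agent;
    a missing answer counts as a failure, never as a pass."""
    failed = [q for q in ACCEPTANCE_QUESTIONS if not answers.get(q, False)]
    return (len(failed) == 0, failed)
```

&lt;p&gt;The default-to-failure behavior matters: an acceptance step that passes silently on missing evidence is exactly the layer this section warns about.&lt;/p&gt;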
&lt;p&gt;Without that layer, AI can easily keep declaring success inside a long workflow.&lt;/p&gt;
&lt;h2 id=&#34;06-how-to-choose-between-the-two&#34;&gt;06 How to Choose Between the Two
&lt;/h2&gt;&lt;p&gt;If you want a fast rule of thumb:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If your main pain is context bloat and long-session drift, start with &lt;code&gt;Ralph&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;If your main pain is one agent wearing too many hats, start with multi-agent collaboration&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;More specifically:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Ralph&lt;/code&gt; fits work that is clear, granular, and easy to move forward round by round&lt;/li&gt;
&lt;li&gt;Multi-agent collaboration fits work with strong role boundaries and a need for parallelism and cross-checking&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In practice, these two approaches are not always competitors. A mature setup often combines them:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use a &lt;code&gt;Ralph&lt;/code&gt;-style outer loop to push the larger task forward&lt;/li&gt;
&lt;li&gt;Use multi-agent collaboration inside each round for research, implementation, testing, and acceptance&lt;/li&gt;
&lt;/ul&gt;
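&lt;p&gt;A skeleton of that combination, with a stubbed &lt;code&gt;call_agent&lt;/code&gt; in place of real sessions (the names and stages are illustrative):&lt;/p&gt;

```python
# Stub: stands in for dispatching to a real role agent.
def call_agent(role, prompt):
    return f"[{role}] {prompt}"

def inner_round(task):
    """Inside one round, roles split the work and cross-check it."""
    plan = call_agent("researcher", f"scope: {task}")
    change = call_agent("implementer", plan)
    report = call_agent("tester", change)
    verdict = call_agent("acceptor", report)
    return verdict.startswith("[acceptor]")

def outer_loop(tasks):
    """Ralph-style outer loop: one fresh inner round per small task."""
    done = []
    for task in tasks:
        if inner_round(task):
            done.append(task)
    return done
```

&lt;p&gt;The outer loop owns long-horizon progress; the inner round owns role separation. Neither layer has to solve the other layer&amp;rsquo;s problem.&lt;/p&gt;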
&lt;p&gt;That gives you both better control over long context and better collaboration inside a single round.&lt;/p&gt;
&lt;h2 id=&#34;07-one-sentence-summary&#34;&gt;07 One-Sentence Summary
&lt;/h2&gt;&lt;p&gt;What makes these approaches worth studying is not that they recommend &lt;code&gt;Ralph&lt;/code&gt; or multi-agent collaboration in isolation. It is that they make one practical truth very clear: &lt;strong&gt;keeping AI stable over long tasks depends less on the model itself and more on whether you designed context, tasks, roles, and acceptance well.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If you are already asking &lt;code&gt;Claude Code&lt;/code&gt;, &lt;code&gt;Codex&lt;/code&gt;, or other coding agents to handle longer real-world tasks, this kind of workflow thinking is often more valuable than simply switching to a stronger model.&lt;/p&gt;
</description>
        </item>
        
    </channel>
</rss>
