If you have been following open source AI agent tools lately, HKUDS/OpenHarness is a project worth watching. It is not just another chat wrapper. Instead, it extracts the infrastructure layer needed for a runnable, extensible, and governable agent into a standalone open source Agent Harness.
According to the official README, OpenHarness provides a lightweight but fairly complete set of agent capabilities, including tool calling, skill loading, memory, permission governance, and multi-agent coordination. The bundled ohmo is the personal AI assistant application built on top of that foundation.
01 What Is OpenHarness
You can think of OpenHarness as the runtime layer that gives a foundation model hands, memory, and boundaries.
A model may already be good at reasoning and generation, but if you want it to function as a long-running agent, it usually still needs these surrounding capabilities:
- Calling tools instead of only producing text
- Reading and writing files, executing commands, and using search and web access
- Preserving context and memory across long sessions
- Applying permission controls to risky actions
- Splitting larger tasks across multiple sub-agents in parallel
The goal of OpenHarness is to turn that engineering layer around the model into a clear, open source, inspectable Python implementation. It is closer to an agent operating substrate than to a single model experience or a single chat interface.
02 The Project’s Basic Functions
Based on the current GitHub homepage and README, OpenHarness centers on the following capability areas.
1. Agent Loop
This is the core execution loop that lets an agent keep working over multiple steps. The official highlights include:
- Streaming tool-calling loops
- API retries with exponential backoff
- Parallel tool execution
- Token accounting and cost tracking
The practical point is that the agent is not limited to a one-shot response. It can observe, reason, call tools, read results, and continue iterating within the same task.
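Conceptually, such a loop can be sketched in a few lines of Python. Everything below is an illustrative assumption, not OpenHarness's actual API: call_model and execute_tool stand in for the model backend and tool runtime, and the message shape is a generic chat format.

```python
import time

def run_agent_loop(call_model, execute_tool, task, max_steps=10):
    """Minimal observe-reason-act loop with exponential-backoff retries.

    `call_model` and `execute_tool` are placeholders, not OpenHarness's
    real interfaces.
    """
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        # Retry the model call with exponential backoff on transient errors.
        for attempt in range(4):
            try:
                reply = call_model(messages)
                break
            except ConnectionError:
                time.sleep(2 ** attempt)
        else:
            raise RuntimeError("model API kept failing")
        messages.append(reply)
        if not reply.get("tool_calls"):
            return reply["content"]  # no more tool calls: final answer
        # Feed every tool result back into the context and keep iterating.
        for call in reply["tool_calls"]:
            result = execute_tool(call["name"], call["args"])
            messages.append({"role": "tool", "content": result})
    raise RuntimeError("step budget exhausted")
```

The key property is the outer loop: the model's output is not the end of the interaction but the input to the next step, which is exactly what makes multi-step work possible.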
2. Tools, Skills, and Plugins
OpenHarness puts serious effort into the tool layer. The project page says it already includes built-in tools for files, shell, search, web access, and MCP, and it supports on-demand loading of Markdown skill files.
Its value is not only that it has many tools, but that the composition model is fairly open:
- You can use built-in tools directly
- You can load skills for a specific task
- You can extend hooks, skills, and agents through plugins
- It is compatible with the anthropics/skills ecosystem and related plugins
If you want to turn repeated workflows into reusable capabilities rather than re-describing them in prompts every time, this layer is especially useful.
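To make the "Markdown skill file" idea concrete, here is a minimal sketch of a skill loader. The front-matter format shown (a `---`-delimited header of key: value lines) is an assumption for illustration; the exact skill format OpenHarness expects may differ.

```python
from pathlib import Path

def load_skill(path):
    """Parse a Markdown skill file into (metadata, body).

    Assumes a simple `---` front-matter header of `key: value` lines;
    this is a hypothetical format, not necessarily the real one.
    """
    text = Path(path).read_text(encoding="utf-8")
    meta, body = {}, text
    if text.startswith("---"):
        header, _, body = text[3:].partition("---")
        for line in header.strip().splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, body.strip()
```

The payoff of this pattern is that a repeated workflow lives in one versioned file: the metadata tells the agent when the skill applies, and the body is injected into context only when needed.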
3. Context and Memory
This is one of the more important differentiators in OpenHarness. The official keywords include:
- CLAUDE.md discovery and injection
- Automatic context compression
- Persistent memory through MEMORY.md
- Session recovery and history continuation
That means it is not only reacting to the current input. It is designed to preserve project conventions, historical tasks, and long-term preferences, making the agent better suited for ongoing work instead of always starting from scratch.
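The mechanics behind file-based memory are simple to sketch. The MEMORY.md filename comes from the README, but the append-only note format and the injection helper below are illustrative assumptions:

```python
from pathlib import Path

def append_memory(note, memory_file="MEMORY.md"):
    """Append a long-term note so future sessions can re-inject it.

    The append-only bullet format here is an assumption for illustration.
    """
    path = Path(memory_file)
    existing = path.read_text(encoding="utf-8") if path.exists() else ""
    path.write_text(existing + f"- {note}\n", encoding="utf-8")

def inject_memory(system_prompt, memory_file="MEMORY.md"):
    """At session start, prepend persisted memory to the system prompt."""
    path = Path(memory_file)
    if not path.exists():
        return system_prompt
    memory = path.read_text(encoding="utf-8")
    return f"{system_prompt}\n\n# Long-term memory\n{memory}"
```

The point is that memory survives the process: notes written in one session are plain files on disk, so the next session starts with them already in context.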
4. Permission Governance and Safety Boundaries
Once an agent starts interacting with the filesystem, terminal, and network, governance becomes critical. OpenHarness provides:
- Multiple permission modes
- Rule controls based on paths and commands
- PreToolUse/PostToolUse hooks
- Interactive approval prompts
In other words, it is not only about enabling the agent to do things. It also defines which things can be done directly and which ones should require confirmation first.
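A rule list based on paths and commands typically reduces to first-match-wins glob matching. The rule syntax below is a hypothetical sketch, not OpenHarness's actual configuration format:

```python
import fnmatch

# Illustrative rule set; OpenHarness's real rule syntax may differ.
# First matching rule wins.
RULES = [
    {"tool": "shell", "pattern": "rm *",   "action": "ask"},
    {"tool": "file",  "pattern": "/etc/*", "action": "deny"},
    {"tool": "*",     "pattern": "*",      "action": "allow"},
]

def check_permission(tool, target, rules=RULES):
    """Return 'allow', 'deny', or 'ask' for a proposed tool action."""
    for rule in rules:
        if (fnmatch.fnmatch(tool, rule["tool"])
                and fnmatch.fnmatch(target, rule["pattern"])):
            return rule["action"]
    return "ask"  # no rule matched: default to interactive approval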
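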
5. Multi-Agent Coordination
OpenHarness also supports delegating work to sub-agents. The currently public materials mention capabilities such as:
- Sub-agent creation and delegation
- Team registration and task management
- Background task lifecycle management
For more complex work, this means it can move beyond a single serial agent and attempt parallel collaboration.
6. Multi-Provider Workflows
OpenHarness does not treat providers as mere API labels. It abstracts them as workflow + profile combinations. According to the README, current directions include:
- Claude / Anthropic-compatible
- OpenAI-compatible
- Codex Subscription
- GitHub Copilot
- Compatible backends such as Moonshot(Kimi), GLM, and MiniMax
That makes it feel more like a multi-model, multi-entry agent runtime framework rather than something tied to a single vendor.
7. React TUI and Non-Interactive Mode
OpenHarness ships with a terminal UI. Running oh opens a React/Ink TUI, and the official README says it supports:
- A command picker
- Permission confirmation
- Model switching
- Provider switching
- Session recovery
If you do not want to enter an interactive interface, you can also use non-interactive mode to run a task once and return the result as standard output, JSON, or streaming JSON, which is helpful for scripting and automation.
03 What Is ohmo
If OpenHarness is the infrastructure layer, ohmo is the personal agent application built on top of it.
The project homepage is very clear about its positioning: it is not just a generic chatbot, but a personal assistant that can keep working across long conversations. The official description says it can interact with you through channels such as Feishu, Slack, Telegram, and Discord, and carry out tasks like:
- forking a branch
- writing code
- running tests
- opening a PR
The README also highlights that ohmo can run on top of your existing Claude Code or Codex subscription, so it does not necessarily require you to provision a new API key. For people already using those subscriptions, that lowers the barrier considerably.
04 What Scenarios It Fits
From the currently public capabilities, OpenHarness is a strong fit for people who:
- Want to study what a production-grade agent is actually made of
- Want to build an extensible open source agent runtime of their own
- Want tools, skills, memory, permissions, and multi-agent coordination in one framework
- Do not want to be locked into a single model vendor or client form factor
- Want to build vertical agents or personal assistants on top of an existing architecture
If your goal is simply to find a finished assistant that can chat right away, OpenHarness itself may not be the lightest option. But if you care more about agent infrastructure, engineering control, and long-term extensibility, it is a very worthwhile project to study.
05 A Quick Way to Understand Its Positioning
In one sentence:
OpenHarness turns foundation models into agents that can actually execute work, while ohmo packages that capability into a personal assistant that can keep working with you over time.
You can also think of it as two layers:
- OpenHarness: an open source Agent Harness, essentially the infrastructure layer
- ohmo: a personal-agent app built on top of that infrastructure
As of April 12, 2026, the GitHub homepage shows the project had already advanced to v0.1.6 (April 10, 2026), with continued emphasis on automatic context compression, MCP transport support, the React TUI, and runtime stability for multi-agent workflows. That suggests it is still evolving quickly, but its direction is already quite clear.
References
- GitHub repository: https://github.com/HKUDS/OpenHarness
- English README: https://github.com/HKUDS/OpenHarness/blob/main/README.md
- Chinese README: https://github.com/HKUDS/OpenHarness/blob/main/README.zh-CN.md