cc-haha is a project built around a modified Claude Code workflow. Its full repository name is NanmiCoder/cc-haha. The project page says plainly that it is based on Claude Code source code leaked from the Anthropic npm registry on 2026-03-31, and that its current main form is a desktop Claude Code workbench.
Project URL: https://github.com/NanmiCoder/cc-haha
There are two important points in that description.
First, it is not Anthropic’s official Claude Code. The README also states that the original source code copyright belongs to Anthropic and that the project is only for learning and research.
Second, its focus is no longer just “run a Claude Code CLI locally.” Judging from the README and the latest release, cc-haha is more like a desktop app that brings Claude Code sessions, projects, permissions, diffs, Computer Use, remote access, and model provider configuration into one place.
What problem is it trying to solve?
Claude Code is originally terminal-oriented. Sessions, command execution, permission prompts, file edits, and context switching all happen in the terminal. That works for people who are comfortable with CLI tools, but long-term use exposes a few rough edges:
- Multiple projects and sessions are hard to manage side by side.
- To see what files the AI changed, you often need to switch to Git or an editor.
- Permission approvals, command execution, and file diffs are spread across different surfaces.
- Remote viewing from a phone or another device requires extra setup.
- Connecting non-Anthropic models requires dealing with protocol compatibility.
cc-haha tries to package these pieces into a graphical workbench. It is not just a skin for Claude Code; it moves session management and local development flow control into the desktop app.
Desktop workbench: from terminal to control center
According to the README, the cc-haha desktop app brings these capabilities into a macOS / Windows app:
- Multi-session workbench: manage tasks with tabs, project switching, terminal entry points, and session history.
- Branch / Worktree launch: choose a repository branch for a new session and decide whether to use the current worktree or an isolated Worktree.
- Right-side code changes panel: view modified files, added and removed lines, and workspace status while chatting.
- Visualized code edits: inspect AI edits, diffs, and execution steps.
- Permission and approval flow: review dangerous commands, tool calls, and AI questions in the desktop app.
- Multiple model providers: supports Anthropic-compatible APIs, third-party models, WebSearch fallback, and local configuration.
- H5 remote access: use a one-time token to connect to the current desktop session from a phone or another device.
- IM integration: use Telegram, Feishu, WeChat, or DingTalk to chat remotely, switch projects, and approve permissions.
- Scheduled tasks and token usage: create scheduled tasks and view local token usage trends.
These features make it closer to an “AI coding workbench” than a simple command-line replacement. It tries to put the common surfaces of AI coding into one place: chat, file changes, permissions, projects, remote access, and model configuration.
Installation and startup
Most users should download the desktop installer from Releases.
The README describes the desktop install flow as:
- Go to GitHub Releases and download the macOS or Windows installer.
- On first launch, configure the model provider, API key, and default model in the desktop settings.
- If macOS says the app cannot be opened, follow the installation guide to handle Gatekeeper permissions.
The latest release page shows that v0.2.6 was published on 2026-05-13. That version mainly focuses on restoring secure H5 mobile access, desktop session management, file mention search, and desktop UX polish.
If you want to start the CLI from source, the README documents a Bun-based launch flow; the exact commands are in the repository. That path is better for people who want to debug the lower-level CLI or server, or to build on their own changes. For normal use, the desktop app is more direct.
What changed in v0.2.6
The main point of v0.2.6 is that H5/LAN access was pulled back from a temporary open state into an explicit enablement and token pairing model.
Notable changes include:
- H5/LAN access must be explicitly enabled locally.
- QR links carry a one-time visible token.
- Remote APIs, proxies, and WebSockets are no longer exposed without protection.
- Settings now has a separate H5 Access page.
- The desktop sidebar gained batch management for selecting and deleting sessions.
- Desktop file mention search became git-first, respects ignore rules, and reduces noise from `node_modules` and build output.
- A pure white theme was added, and bugs such as long URLs breaking chat layout and draft leakage across tabs were fixed.
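The one-time token pairing described in the release notes can be illustrated with a small sketch: issue a short-lived token, embed it in the QR link, and consume it on first use. This is a hypothetical illustration of the pattern, not cc-haha's actual implementation; all names here are invented.

```python
import secrets
import time

# Hypothetical one-time pairing-token store: each token is short-lived
# and is consumed on first use, as in a QR pairing flow.
class PairingTokens:
    def __init__(self, ttl_seconds=120):
        self.ttl = ttl_seconds
        self._tokens = {}  # token -> expiry timestamp

    def issue(self):
        token = secrets.token_urlsafe(16)
        self._tokens[token] = time.time() + self.ttl
        return token  # would be embedded in the QR link

    def redeem(self, token):
        expiry = self._tokens.pop(token, None)  # single use: always removed
        return expiry is not None and time.time() < expiry
```

A redeemed token cannot be reused, which is what keeps a leaked QR screenshot from becoming a standing credential.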
This shows the project has moved beyond “it runs” and is now filling in the safety boundaries and daily UX details that a desktop product needs.
The H5 access part deserves special care. The author explicitly notes in the release that H5 is a browser access entry for individuals or trusted teams, not a public multi-tenant login system. In practice, it should not be treated as an internet-facing SaaS admin console.
Computer Use: letting the Agent operate the desktop
Another important selling point of cc-haha is Computer Use.
The project docs say this feature is a heavily modified version of the Computer Use implementation in the leaked Claude Code source. The official implementation depends on Anthropic’s private native modules, such as @ant/computer-use-swift and @ant/computer-use-input, which are not publicly available. cc-haha replaces the low-level operation layer with a Python bridge using public libraries such as pyautogui, mss, and pyobjc.
Computer Use supports operations such as:
- Screenshot: `screenshot`, `zoom`
- Mouse: click, drag, move, scroll, and read cursor position
- Keyboard: type text, press keys, hold keys
- Applications: open applications, switch displays
- Permissions: request app access, list granted applications
- Clipboard: read and write clipboard content
- Other: wait, batch operations
Its workflow is a “screenshot - analyze - act” loop:
- The model receives a user request.
- It calls `screenshot` to capture the screen.
- The model uses vision to identify buttons, input fields, and coordinates.
- It calls click, typing, or application tools.
- It screenshots again to confirm the result, then continues.
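The loop above boils down to a tool-dispatch cycle. A minimal sketch, with the real pyautogui/mss calls replaced by stubs (everything here is illustrative, not cc-haha's actual bridge code):

```python
# Stub tool handlers; a real bridge would call mss for screenshots and
# pyautogui for mouse and keyboard input.
def screenshot():
    return {"type": "image", "note": "stubbed screen capture"}

def click(x, y):
    return {"ok": True, "action": f"click({x}, {y})"}

def type_text(text):
    return {"ok": True, "action": f"type({text!r})"}

TOOLS = {"screenshot": screenshot, "click": click, "type": type_text}

def run_tool(call):
    """Dispatch one model-issued tool call, e.g. {"name": "click", "args": {...}}."""
    return TOOLS[call["name"]](**call.get("args", {}))

# One iteration of the loop: capture, (model analyzes), act, capture again.
before = run_tool({"name": "screenshot"})
result = run_tool({"name": "click", "args": {"x": 120, "y": 48}})
after = run_tool({"name": "screenshot"})
```

The model sits outside this loop: it receives each screenshot, decides the next tool call, and the bridge merely executes and reports back.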
From the docs, the fully supported platform is mainly macOS, including Apple Silicon and Intel. Windows / Linux are theoretically possible, but the pyobjc app-management parts need platform-specific replacements and are not fully adapted yet.
Runtime requirements include:
- Bun >= 1.1.0
- Python >= 3.8
- macOS Accessibility permission
- macOS Screen Recording permission
This kind of feature is powerful, but it also raises permission risk. When letting AI operate desktop apps, it is better to authorize only the applications that are clearly needed and avoid leaving sensitive content open in unrelated windows.
Multi-model access through an Anthropic-compatible layer
cc-haha still communicates using the Anthropic Messages API protocol. The project docs recommend using LiteLLM as a protocol conversion proxy.
The basic structure is:
```
cc-haha --(Anthropic Messages API)--> LiteLLM proxy --(OpenAI Chat Completions, etc.)--> OpenAI / DeepSeek / Ollama / ...
```
In other words, cc-haha sends Anthropic Messages API requests, LiteLLM converts them to formats such as OpenAI Chat Completions, and then forwards them to OpenAI, DeepSeek, Ollama, or other model services.
The docs install LiteLLM with pip (the standard proxy install):

```shell
pip install 'litellm[proxy]'
```
Then you can configure OpenAI, DeepSeek, Ollama, and other models in litellm_config.yaml. After the proxy starts, set these values in .env or ~/.claude/settings.json:
```shell
# Typical values for pointing an Anthropic-protocol client at a local
# LiteLLM proxy (port 4000 is LiteLLM's default; adjust to your setup):
ANTHROPIC_BASE_URL=http://localhost:4000
ANTHROPIC_AUTH_TOKEN=sk-your-litellm-key
```
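A minimal litellm_config.yaml for this setup might look like the following sketch. Model names and key references are placeholders; check the LiteLLM documentation for the exact schema.

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: deepseek-chat
    litellm_params:
      model: deepseek/deepseek-chat
      api_key: os.environ/DEEPSEEK_API_KEY
  - model_name: local-llama
    litellm_params:
      model: ollama/llama3

litellm_settings:
  drop_params: true   # drop Anthropic-only params that OpenAI-style APIs reject
```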
There are a few practical caveats:
- `drop_params: true` is important, because Anthropic parameters such as `thinking` and `cache_control` do not exist in the OpenAI API.
- Extended Thinking is an Anthropic-specific feature and is unavailable with third-party models.
- Prompt Caching will not work in the Anthropic-native way.
- Tool calls must be converted from Anthropic `tool_use` to OpenAI function calling, so complex tool use may have compatibility issues.
- Small local Ollama models may not handle this tool-heavy workflow reliably.
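The tool-call conversion mentioned above can be shown with a small function. This is a schematic of the kind of mapping a proxy layer performs, not LiteLLM's actual code: Anthropic sends tool input as a JSON object, while OpenAI expects the arguments as a JSON-encoded string.

```python
import json

def tool_use_to_openai(block):
    """Convert one Anthropic tool_use content block into an OpenAI-style tool call."""
    return {
        "id": block["id"],
        "type": "function",
        "function": {
            "name": block["name"],
            # OpenAI function calling carries arguments as a JSON string
            "arguments": json.dumps(block["input"]),
        },
    }

anthropic_block = {
    "type": "tool_use",
    "id": "toolu_123",
    "name": "read_file",
    "input": {"path": "src/main.ts"},
}
openai_call = tool_use_to_openai(anthropic_block)
```

The reverse direction (tool results back into Anthropic `tool_result` blocks) has to be handled too, which is where subtle incompatibilities with complex tool use tend to surface.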
So multi-model access can work, but that does not mean every model will feel the same. cc-haha still demands strong tool use, code understanding, and long-context ability from the model.
Who is it for?
cc-haha is better suited for:
- People already familiar with Claude Code who want desktop session management.
- Users who often work across multiple repositories, branches, and AI sessions.
- People who want to inspect AI file changes, diffs, and workspace status in a side panel.
- Users who want to experiment with Computer Use and let an Agent operate desktop apps.
- People who want to connect OpenAI, DeepSeek, Ollama, or other models through an Anthropic-compatible protocol.
- Users who need phone or IM-based remote viewing and permission approval.
It is less suitable for:
- Users who only want the stable official Claude Code experience.
- People who cannot accept the leaked-source background and copyright uncertainty.
- Users who do not want to grant high system permissions to local tools.
- Teams that need enterprise compliance, auditability, and official support.
- Users unfamiliar with API keys, proxies, model compatibility, and local service configuration.
Risks and boundaries
An article like this cannot talk only about features; it also has to talk about risk.
The origin of cc-haha means it is not an ordinary community reimplementation. The README clearly states that it is based on leaked Claude Code source code and that the original source belongs to Anthropic. This creates uncertainty around copyright, compliance, and long-term maintenance.
Computer Use, H5 remote access, IM integration, and local permission approval are also high-permission capabilities. The more convenient they are, the more clearly boundaries need to be defined:
- Do not expose H5 access on untrusted networks.
- Do not treat the token as a long-term public login credential.
- Do not grant the Agent access to unrelated sensitive applications.
- Do not casually use it in production or company compliance environments.
- Do not expose third-party model proxy settings or API keys in public repositories.
If your goal is to study AI coding tool architecture, desktop workflows, and Computer Use implementation, it is a useful reference. If you want to put it into a long-term production workflow, evaluate legal, permission, security, and maintenance risks first.
Summary
The most interesting thing about cc-haha is not whether it can replicate Claude Code. It is that it pushes Claude Code-style AI coding tools toward a desktop workbench form.
Sessions, projects, Worktree, diffs, permissions, remote access, Computer Use, model providers, scheduled tasks, and token usage are all brought into one desktop experience. That suggests the next step for AI coding tools is not only stronger models, but also a more complete workflow interface.
But its boundaries are also clear: it is not an official Anthropic product, it has a sensitive source-code background, and its high-permission features require caution. A better way to view it is as a project for observing where AI coding tools may evolve, not as a careless replacement for official Claude Code.
References
- GitHub repository: https://github.com/NanmiCoder/cc-haha
- Latest release: https://github.com/NanmiCoder/cc-haha/releases/tag/v0.2.6
- Computer Use documentation: https://github.com/NanmiCoder/cc-haha/blob/main/docs/computer-use.md
- Third-party model documentation: https://github.com/NanmiCoder/cc-haha/blob/main/docs/guide/third-party-models.md