CCX is an AI API proxy and protocol-conversion gateway. It puts Claude Messages, OpenAI Chat Completions, OpenAI Images, Codex Responses, and Gemini API behind one service entry point, while also providing a web management UI for configuring channels, keys, model mappings, priorities, failover, and traffic monitoring.
If you use Claude, OpenAI, Gemini, and Codex at the same time, or maintain multiple upstream services compatible with OpenAI API, CCX is valuable because it gives you one entry point and one management layer. Clients connect to a single service address; CCX decides which upstream channel should handle each request.
Project: https://github.com/BenedictKing/ccx
What problem does CCX solve?
When multiple AI APIs are used together, several problems appear quickly:
- Each provider has different paths, authentication, and request formats.
- One class of models may have multiple upstreams, requiring manual switching of base URL and API key.
- When a key or channel fails, the client usually does not automatically switch to a backup channel.
- In team use, it is hard to centrally manage model allowlists, proxies, custom headers, and request logs.
- When Claude, Gemini, OpenAI Chat, image APIs, and Codex Responses all need to coexist, configuration becomes scattered.
CCX’s approach is to consolidate these differences into a proxy layer. Frontend tools, scripts, or business services call CCX; CCX then routes the request to a suitable upstream based on API type, model, channel status, priority, and health.
Supported endpoints
CCX exposes one backend entry point; the default port is 3000. The main paths cover the five supported protocols: Claude Messages, OpenAI Chat Completions, OpenAI Images, Codex Responses, and the Gemini API.
In other words, CCX does not proxy only one protocol. It manages common AI APIs as separate channel types: Messages, Chat, Responses, Gemini, and Images. Different protocols do not share the same health state or log space, which matters when troubleshooting.
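As a quick smoke test against the proxy entry point, a minimal sketch assuming CCX mirrors the OpenAI Chat Completions path and bearer-token auth (the path, model name, and key are placeholders, not confirmed routes):

```shell
# Hedged sketch: assumes CCX forwards the standard OpenAI Chat Completions
# path and accepts the proxy key as a bearer token; adjust to the real routes.
curl -s http://localhost:3000/v1/chat/completions \
  -H "Authorization: Bearer $PROXY_ACCESS_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "ping"}]}'
```

A successful response here confirms that authentication and channel routing are working before pointing real clients at the gateway.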
Architecture overview
CCX uses a Go backend and Vue 3 frontend. The frontend build is embedded into the backend binary, so it can be deployed on a single port: the same service provides the Web UI, management API, and proxy API.
A request roughly follows this path: client request → access-key authentication → protocol handler → scheduler (channel selection) → provider (upstream request and response handling) → protocol converter where needed → response back to the client.
The main modules can be understood as follows:
- handlers: receive requests for different protocols and management operations.
- providers: wrap upstream API request and response handling.
- converters: handle protocol conversion for scenarios such as Responses.
- scheduler: choose channels based on priority, promotion period, health state, circuit breaker state, and trace affinity.
- metrics: record request counts, success rate, latency, logs, and circuit breaker state.
- config: maintain runtime configuration, with hot reload and backup support.
The design is not about forcing every API into one format. It proxies each protocol type separately, while unifying management, scheduling, logging, and authentication.
CCX vs CodexBridge
CCX and CodexBridge are both related to Codex and OpenAI-compatible APIs, but they solve different problems.
CodexBridge is more like a dedicated Codex bridge. Its main goal is to wrap Codex CLI/SDK as an OpenAI-compatible /v1/chat/completions service, so OpenWebUI, Cherry Studio, scripts, or other OpenAI-compatible clients can call local Codex. In short, CodexBridge focuses on exposing Codex.
CCX is more like a unified AI API gateway. It does not only handle Codex Responses; it also supports Claude Messages, OpenAI Chat, OpenAI Images, and Gemini API, with a web management UI, channel priority, failover, log monitoring, and multi-key management. In short, CCX focuses on managing multiple models and providers together.
Quick comparison:
| Item | CodexBridge | CCX |
|---|---|---|
| Core positioning | Local Codex bridge | Multi-protocol AI API gateway |
| Main goal | Turn Codex into an OpenAI-compatible endpoint | Manage Claude, OpenAI, Gemini, Codex, and other channels together |
| Management UI | None; focuses on the API service itself | Provides a web management UI |
| Multi-channel scheduling | Not the focus | Supports channel priority, failover, and log monitoring |
| Best fit | Local or single-service Codex calls | Teams, multiple keys, multiple providers, multiple protocols |
If you only want to connect Codex to OpenWebUI or Cherry Studio, CodexBridge is more direct. If you want to manage Codex, Claude, Gemini, DeepSeek, Qwen, Kimi, and other upstreams together, CCX is a better fit.
Quick deployment
The simplest way is to download the prebuilt binary and create a .env file in the same directory.
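A minimal sketch of the setup, assuming the variable names described in the environment-variable notes below (the values are placeholders and must be replaced):

```shell
# Create a minimal .env next to the CCX binary; both keys are placeholders
# and should be replaced with long random values.
cat > .env <<'EOF'
PORT=3000
PROXY_ACCESS_KEY=replace-with-a-long-random-proxy-key
ADMIN_ACCESS_KEY=replace-with-a-different-admin-key
EOF
```

Then start the binary from the same directory (for example `./ccx` on Linux; the binary name is an assumption) and it reads the .env on startup.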
After startup, open http://localhost:3000 (or the port you configured) in a browser to reach the Web UI.
If localhost does not work from WSL, Docker, PowerShell, or another Windows environment, use the Windows host’s LAN IPv4 address instead, for example http://192.168.x.x:3000.
By default, CCX listens on the configured port on all network interfaces, so access control matters if the service is reachable from the LAN.
Docker deployment
Docker is suitable for long-running service deployment.
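A hedged docker run sketch; the image name and in-container config path are assumptions drawn from the article, so check the repository’s own Docker instructions for the real values:

```shell
# Sketch only: image name and /app/.config path are assumptions.
# The .config mount persists runtime configuration across container recreation.
docker run -d \
  --name ccx \
  -p 3000:3000 \
  --env-file .env \
  -v "$(pwd)/.config:/app/.config" \
  ghcr.io/benedictking/ccx:latest
```

Mounting .config to the host matters here: as noted below, it holds the runtime configuration and persistent data.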
If the repository includes a docker-compose.yml, you can instead run docker compose up -d from the repository root.
For automatic updates, run a Watchtower container alongside CCX so new images are pulled and redeployed automatically.
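One common pattern, using Watchtower’s official containrrr/watchtower image and assuming the CCX container is named ccx:

```shell
# Watchtower watches the Docker socket and recreates the listed container
# when a newer image is available; "ccx" is the assumed container name.
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  ccx
```

Scoping Watchtower to the ccx container (instead of all containers) keeps unrelated services from being restarted unexpectedly.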
After deployment, .config stores runtime configuration and persistent data. Mount it to the host to avoid losing configuration when the container is recreated.
Running from source
For development or custom builds, clone the repository and build from source.
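A typical flow for a Go backend with an embedded Vue 3 frontend might look like the following; the frontend directory name (web) and build commands are assumptions, so prefer the repository’s own instructions:

```shell
git clone https://github.com/BenedictKing/ccx.git
cd ccx

# Build the Vue 3 frontend so it can be embedded into the Go binary
# (the "web" directory name is an assumption).
cd web && npm install && npm run build && cd ..

# Build the backend; the frontend build is embedded at compile time.
go build -o ccx .
```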
For frontend-only development, run the Vue dev server against a running backend (for example, npm run dev in the frontend directory).
For backend-only development, run the Go backend directly (for example, go run . from the repository root).
Key environment variables
A minimal usable configuration usually includes PROXY_ACCESS_KEY and ADMIN_ACCESS_KEY, plus ENABLE_WEB_UI, REQUEST_TIMEOUT, and LOG_LEVEL where the defaults do not fit.
Notes:
- PROXY_ACCESS_KEY is used for the proxy API and must be changed.
- ADMIN_ACCESS_KEY is used for the Web UI and /api/*; it should be separate from the proxy key.
- ENABLE_WEB_UI controls whether the management UI is enabled.
- REQUEST_TIMEOUT controls the request timeout; increase it for long-context or image tasks.
- LOG_LEVEL controls log verbosity; production usually uses info or warn.
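Put together, a .env reflecting these notes might look like this (values are examples only; generate your own keys):

```shell
# Example values only; the REQUEST_TIMEOUT unit is not confirmed here,
# so check ENVIRONMENT.md before relying on it.
PORT=3000
PROXY_ACCESS_KEY=long-random-proxy-key      # auth for the proxy API
ADMIN_ACCESS_KEY=different-long-admin-key   # auth for the Web UI and /api/*
ENABLE_WEB_UI=true                          # enable the management UI
REQUEST_TIMEOUT=300                         # raise for long-context or image tasks
LOG_LEVEL=info                              # info or warn in production
```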
To limit request body size, check the corresponding size-limit setting in the environment-variable reference (ENVIRONMENT.md).
Image editing, base64 images, and multimodal requests can all increase request body size.
Channel orchestration and failover
The CCX management UI can configure multiple channels, with options such as:
- Upstream service type.
- API key or multi-key rotation.
- Proxy address.
- Custom request headers.
- Model allowlist.
- Route prefix.
- Priority.
- Health checks and circuit-breaker recovery.
Scheduling considers channel state, priority, promotion period, trace affinity, circuit-breaker state, and available keys. In simple terms:
- Under normal conditions, higher-priority channels are used first.
- If one channel fails, CCX can fail over to a backup channel.
- Circuit breaking avoids repeatedly hitting an obviously unavailable upstream.
- Trace affinity tries to keep related sessions on suitable channels.
These features are useful when you have multiple keys, providers, or regional upstreams. For personal lightweight use, you can also configure only one channel and use CCX as a proxy layer with a Web UI.
Logs and monitoring
CCX provides channel metrics and request logs, including:
- Request volume.
- Success rate.
- Failure rate.
- Average latency.
- Historical data by model.
- Channel status and circuit-breaker state.
For production, use relatively conservative logging: for example, LOG_LEVEL=info (or warn), with full request and response body logging disabled.
This keeps basic request information while avoiding full response content in logs. You can temporarily enable more detailed logs for troubleshooting, but restore the safer configuration afterward, especially in production.
Security recommendations
CCX is a proxy gateway and stores upstream API keys, so deployment should not stop at “it runs.” At minimum:
- Do not use a default or short PROXY_ACCESS_KEY.
- Set a separate ADMIN_ACCESS_KEY.
- Do not expose the Web UI directly to the public internet.
- If public access is required, place it behind a reverse proxy, VPN, access control, or SSO.
- Do not commit .env, .config, or log files to Git.
- Do not keep full request and response body logging enabled in production.
Random keys can be generated with standard command-line tools.
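For example, assuming openssl is installed, this produces a 64-character hex key suitable for either access key:

```shell
# Generate a 32-byte (64 hex character) random key for PROXY_ACCESS_KEY
# or ADMIN_ACCESS_KEY.
openssl rand -hex 32
```

Run it twice so the proxy key and the admin key are different values.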
Who should use it?
CCX is better suited to these scenarios:
- Maintaining Claude, OpenAI, Gemini, Codex, or image APIs at the same time.
- Having multiple API keys that need rotation, routing, and failover.
- Managing upstream channels through a Web UI instead of editing config files manually.
- Observing success rate, latency, and logs for each channel.
- Providing one unified AI API entry point for a team.
If you only call one model occasionally on your own machine, the official SDK or a single OpenAI-compatible proxy is simpler. CCX’s advantage is multi-channel, multi-protocol, unified operation.
Summary
CCX is an AI API gateway, not a client for one specific model. It puts Claude Messages, OpenAI Chat, OpenAI Images, Codex Responses, and Gemini into one proxy layer, with channel orchestration, failover, logs, monitoring, and a Web management UI.
For individuals, it reduces the trouble of switching API addresses and keys. For teams or long-running services, it is closer to a lightweight AI gateway. Before production use, the important work is not only configuring models, but also securing keys, the management entry point, logging levels, channel priority, and failover strategy.
References
- GitHub project: https://github.com/BenedictKing/ccx
- Architecture notes: https://github.com/BenedictKing/ccx/blob/main/ARCHITECTURE.md
- Environment variables: https://github.com/BenedictKing/ccx/blob/main/ENVIRONMENT.md