How Can Other Clients Use Codex? OpenAI-Compatible APIs and the CodexBridge Approach

CodexBridge wraps Codex CLI/SDK as an OpenAI-compatible chat API, allowing OpenWebUI, Cherry Studio, curl, and other clients to call local Codex through /v1/chat/completions. This article explains its use cases, deployment, sessions, multimodal input, structured output, and common configuration.

CodexBridge is a local bridge for exposing Codex CLI/SDK as an OpenAI-compatible HTTP service. With it, Codex no longer has to live only in the terminal. OpenWebUI, Cherry Studio, scripts, automation systems, or any client that supports OpenAI Chat Completions can call it.

The two core endpoints are /v1/chat/completions and /v1/models. The former handles conversations and supports both normal and SSE streaming responses. The latter lets clients discover models in the same way they read an OpenAI-style model list. For tools that already support OpenAI APIs, this usually means changing only the base URL, API key, and model name.

Project: https://github.com/begonia599/CodexBridge

What it is useful for

CodexBridge is useful when you want to plug Codex into existing AI clients or workflows. For example:

  • Select Codex directly in OpenWebUI or Cherry Studio.
  • Call local Codex from curl, Python, Node.js, or other scripts.
  • Let one frontend connect to OpenAI, Ollama, other compatible APIs, and Codex at the same time.
  • Keep Codex’s local threads, sandbox, working directory, and approval behavior.
  • Provide a unified /v1/chat/completions endpoint for internal tools.

It is not a new LLM, and it is not a full replacement for Codex CLI. More precisely, it is an adapter layer: Codex remains the upstream engine, while the bridge converts OpenAI-style requests into conversation input that Codex can handle.

Basic requirements

You need:

  • Node.js 18 or later.
  • Codex CLI installed and logged in.
  • npm (pnpm or yarn also work).

Basic source deployment:

git clone https://github.com/begonia599/CodexBridge
cd CodexBridge
npm install
cp .env.example .env
cp .env .env.local

Then edit .env or .env.local to set the API key, default model, working directory, sandbox mode, network access, and related options.

Start the HTTP service:

npm run codex:server

The default port is 8080, and it can be changed with PORT. After startup, the service exposes:

GET /health
POST /v1/chat/completions
GET /v1/models

CLI conversation mode

Besides the HTTP service, CodexBridge also includes a lightweight CLI:

npm run codex:chat

You can type natural-language messages directly. Two useful commands are:

  • /reset: create a new Codex thread.
  • /exit: exit the CLI.

The current thread ID is stored in .codex_thread.json. If this file still exists the next time the CLI starts, the previous conversation can continue.

HTTP example

A minimal request looks like this:

curl http://localhost:8080/v1/chat/completions \
  -H "content-type: application/json" \
  -H "authorization: Bearer 123321" \
  -d '{"model":"gpt-5-codex:medium","session_id":"demo","messages":[{"role":"user","content":"ls"}]}'

Key points:

  • The token in authorization must match CODEX_BRIDGE_API_KEY.
  • model can include reasoning effort, such as gpt-5-codex:medium or gpt-5-codex:high.
  • session_id binds the request to a conversation and allows reuse of the same Codex thread.
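The model-plus-effort convention in names like gpt-5-codex:medium can be illustrated with a small parser. This is a sketch of the naming convention only, not the bridge's actual code, and the default effort used here is an assumption:

```python
def parse_model(model: str, default_effort: str = "medium"):
    """Split a model name like 'gpt-5-codex:high' into
    (model, reasoning_effort). The default effort when no
    suffix is given is an assumption for illustration."""
    name, sep, effort = model.partition(":")
    return name, (effort if sep else default_effort)
```

With this convention, gpt-5-codex:high selects the same underlying model as gpt-5-codex but asks Codex for a higher reasoning effort.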

For streaming output, add stream: true:

curl -N http://localhost:8080/v1/chat/completions \
  -H "content-type: application/json" \
  -H "authorization: Bearer 123321" \
  -d '{"model":"gpt-5-codex:high","session_id":"stream","stream":true,"messages":[{"role":"user","content":"Explain step by step how to create a Node.js project"}]}'

For clients that support OpenAI streaming responses, this feels much closer to a normal chat experience.
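On the client side, a stream: true response arrives as OpenAI-style SSE events. A minimal reader for such a stream might look like the sketch below; the chunk fields follow the standard OpenAI Chat Completions streaming format and do not depend on CodexBridge internals:

```python
import json

def collect_stream_text(sse_lines):
    """Accumulate assistant text from OpenAI-style SSE lines.
    Each event looks like 'data: {...}' and the stream ends
    with the sentinel 'data: [DONE]'."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)
```

In a real client you would iterate over the HTTP response line by line instead of a list, but the parsing logic is the same.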

How sessions are persisted

Session mapping is one of CodexBridge’s important features. A request can pass a session ID through these fields:

  • session_id
  • conversation_id
  • thread_id
  • user

It can also be passed through request headers:

  • x-session-id
  • session-id
  • x-conversation-id
  • x-thread-id
  • x-user-id
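Taken together, the lookup across these fields can be sketched as follows. The precedence shown (body fields first, then headers) is an assumption for illustration, not a guarantee about the bridge's internal order:

```python
BODY_FIELDS = ("session_id", "conversation_id", "thread_id", "user")
HEADER_FIELDS = ("x-session-id", "session-id", "x-conversation-id",
                 "x-thread-id", "x-user-id")

def resolve_session_id(body: dict, headers: dict):
    """Return the first session identifier found, checking the
    documented body fields, then the documented headers
    (header names compared case-insensitively)."""
    lower = {k.lower(): v for k, v in headers.items()}
    for field in BODY_FIELDS:
        if body.get(field):
            return body[field]
    for field in HEADER_FIELDS:
        if lower.get(field):
            return lower[field]
    return None
```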

For production use, enable:

CODEX_REQUIRE_SESSION_ID=true

This requires every request to include a session ID, preventing different users or chat windows from being mixed into the same temporary context. The bridge-side mapping is saved in .codex_threads.json. Deleting this file resets the bridge mapping, while Codex’s own threads remain under ~/.codex/sessions.

If CODEX_REQUIRE_SESSION_ID=false and the request provides no session ID, the bridge expands the current messages into one-off input for Codex. This is fine for temporary calls, but not for long-running conversations.

Multimodal input

CodexBridge supports OpenAI-style content blocks and converts images into Codex-compatible local_image input.

Remote images can be written as:

{
  "type": "image_url",
  "image_url": {
    "url": "https://example.com/demo.png"
  }
}

Local images can be written as:

{
  "type": "local_image",
  "path": "./images/demo.png"
}

Remote resources are downloaded into a temporary directory and cleaned up after the turn. In real use, watch the request body size, especially when sending base64 images. You may need to increase CODEX_JSON_LIMIT.
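The conversion the bridge performs can be pictured roughly like this. It is a simplified sketch: the real implementation downloads remote URLs into a temporary directory and cleans up afterwards, which is abstracted here behind a caller-supplied download function:

```python
def to_local_image(block: dict, download) -> dict:
    """Normalize an OpenAI-style content block into Codex's
    local_image form. `download` fetches a URL and returns a
    local file path (stand-in for the temp-dir handling)."""
    if block.get("type") == "local_image":
        return block  # already in Codex form
    if block.get("type") == "image_url":
        url = block["image_url"]["url"]
        return {"type": "local_image", "path": download(url)}
    raise ValueError(f"unsupported block type: {block.get('type')}")
```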

Structured output

If the client supports response_format, CodexBridge can map it to Codex’s outputSchema. This is useful when you want Codex to return a fixed JSON structure, such as a check result, summary, classification result, or automation report.

A minimal example:

{
  "model": "gpt-5-codex",
  "session_id": "lint",
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "lint_report",
      "schema": {
        "type": "object",
        "properties": {
          "summary": { "type": "string" },
          "status": {
            "type": "string",
            "enum": ["ok", "action_required"]
          }
        },
        "required": ["summary", "status"],
        "additionalProperties": false
      }
    }
  },
  "messages": [
    {
      "role": "user",
      "content": "Check lint issues under src/ and return the result as JSON"
    }
  ]
}

A response_format of type "json_schema" must include a schema; otherwise the service returns 400.
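That validation rule can be expressed as a small check. This is a sketch of the documented behavior, not the bridge's source:

```python
def validate_response_format(rf):
    """Return (ok, error) for a response_format object, mirroring
    the documented rule: json_schema must carry a nested schema."""
    if rf is None:
        return True, None  # response_format is optional
    if rf.get("type") == "json_schema":
        js = rf.get("json_schema") or {}
        if "schema" not in js:
            return False, (400, "json_schema response_format must include a schema")
    return True, None
```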

Key environment variables

Common configuration can be grouped as follows.

Service and authentication:

PORT=8080
CODEX_BRIDGE_API_KEY=123321
CODEX_JSON_LIMIT=10mb

Default model:

CODEX_MODEL=gpt-5-codex
CODEX_REASONING=medium

Codex runtime:

CODEX_WORKDIR=
CODEX_SANDBOX_MODE=read-only
CODEX_APPROVAL_POLICY=never
CODEX_SKIP_GIT_CHECK=true

Network access:

CODEX_NETWORK_ACCESS=false
CODEX_WEB_SEARCH=false

If the service is only used for frontend chat, keeping network access off by default is safer. Enable these switches only when Codex clearly needs to run curl, git clone, or web search.

Docker and one-line scripts

The project also provides Docker deployment for long-running service use:

docker compose up -d
docker compose logs -f codexbridge

It also provides a Linux install script:

curl -fsSL https://raw.githubusercontent.com/begonia599/CodexBridge/master/scripts/install.sh | bash

The script installs dependencies, clones or updates the repository, copies .env.example, and starts the service with Docker Compose. It requires sudo, so it is best suited to a clean server. If the machine already has a complex Node.js, Docker, or Codex setup, read the script before running it.

Common issues

Request returns 413

The request body exceeded the configured size limit, usually because of base64-encoded images. Increase:

CODEX_JSON_LIMIT=20mb
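Base64 inflates binary data by roughly 4/3, so the limit has to cover the encoded size rather than the original file size. A quick way to estimate it:

```python
import math

def base64_size(raw_bytes: int) -> int:
    """Size in bytes of the base64 encoding of raw_bytes:
    4 output characters per 3 input bytes, with padding."""
    return 4 * math.ceil(raw_bytes / 3)

# A 10 MB image grows to about 13.3 MB once base64-encoded,
# already over a 10mb JSON limit before any JSON overhead.
encoded = base64_size(10 * 1024 * 1024)
```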

API key is rejected

Check that the request header includes:

Authorization: Bearer <your CODEX_BRIDGE_API_KEY>

or use x-api-key.
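The two accepted forms can be sketched as a single check. This is illustrative only; using hmac.compare_digest for the comparison is simply good practice against timing attacks in any real deployment:

```python
import hmac

def is_authorized(headers: dict, api_key: str) -> bool:
    """Accept either 'Authorization: Bearer <key>' or 'x-api-key: <key>',
    comparing header names case-insensitively."""
    lower = {k.lower(): v for k, v in headers.items()}
    auth = lower.get("authorization", "")
    if auth.startswith("Bearer "):
        return hmac.compare_digest(auth[len("Bearer "):], api_key)
    return hmac.compare_digest(lower.get("x-api-key", ""), api_key)
```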

Codex reports a Git repository restriction

If the working directory is not a trusted repository, Codex may trigger a check. Use this only in an environment you trust:

CODEX_SKIP_GIT_CHECK=true

Reset conversations

The bridge mapping lives in .codex_threads.json, while Codex’s own threads live in ~/.codex/sessions. Stop the service and delete the corresponding files or directories to reset them.

Recommendations

For local testing, start with the default API key and the read-only sandbox. After OpenWebUI, Cherry Studio, or scripts can call the service normally, gradually adjust CODEX_WORKDIR, CODEX_SANDBOX_MODE, CODEX_NETWORK_ACCESS, and CODEX_APPROVAL_POLICY.

For multi-user use, do at least three things:

  • Require session_id.
  • Change the default API key.
  • Clearly limit the working directory and sandbox permissions.

CodexBridge is valuable not because it is complex, but because it places Codex inside the existing OpenAI-compatible ecosystem. If a client can change its base URL, it can treat Codex like a normal chat model while still retaining Codex’s local threads, sandbox, and tool behavior.
