Connect OpenClaw to Local Gemma 4: Complete Setup Guide

From starting the Ollama API service to configuring OpenClaw, this guide walks you through connecting a locally deployed Gemma 4 model to OpenClaw end to end, using Ollama as the bridge.

If you have not deployed Gemma 4 locally through Ollama yet, set that up first, then come back to this guide.

Step 1: Start the Ollama API Service

Start Ollama first:

ollama serve

Then verify the API quickly with:

curl http://localhost:11434/api/generate -d '{
  "model": "gemma4:12b",
  "prompt": "Hello"
}'

If you get a model response, your local API is ready.
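Note that `/api/generate` streams its reply by default: the response body is newline-delimited JSON, one object per chunk, each carrying a `response` text fragment, with a final object marked `"done": true`. A minimal sketch of stitching such a stream back into plain text, using only the Python standard library (the sample lines below are illustrative, not captured output):

```python
import json

def join_stream(ndjson_lines):
    """Concatenate the 'response' fragments of an Ollama streaming reply."""
    parts = []
    for line in ndjson_lines:
        if not line.strip():
            continue
        obj = json.loads(line)
        parts.append(obj.get("response", ""))
        if obj.get("done"):
            break
    return "".join(parts)

# Illustrative sample of the shape the API streams back:
sample = [
    '{"model":"gemma4:12b","response":"Hel","done":false}',
    '{"model":"gemma4:12b","response":"lo!","done":false}',
    '{"model":"gemma4:12b","response":"","done":true}',
]
print(join_stream(sample))  # Hello!
```

If you prefer a single JSON object instead of a stream, add `"stream": false` to the curl payload above.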

Step 2: Configure OpenClaw to Use Ollama

The OpenClaw config file is usually located at:

~/.openclaw/config.yaml

Edit config.yaml and add a local model entry under models:

models:
  # Your existing model config...

  gemma4-local:
    provider: ollama
    base_url: http://localhost:11434
    model: gemma4:12b
    timeout: 120s
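Before restarting OpenClaw, it can save a round trip to sanity-check that the entry has every field it needs. A minimal sketch of such a check (the `validate_entry` helper and the required-field list are assumptions for illustration, not part of OpenClaw itself):

```python
# Hypothetical sanity check for a model entry like the one above;
# OpenClaw does not ship this helper.
REQUIRED = ("provider", "base_url", "model")

def validate_entry(entry):
    """Return the list of required fields missing from a model entry."""
    return [field for field in REQUIRED if field not in entry]

gemma4_local = {
    "provider": "ollama",
    "base_url": "http://localhost:11434",
    "model": "gemma4:12b",
    "timeout": "120s",
}
print(validate_entry(gemma4_local))  # [] means nothing is missing
```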

Step 3: Set Default Model (Optional)

If you want Gemma 4 as the default model:

default_model: gemma4-local

Step 4: Restart and Verify OpenClaw

Restart OpenClaw:

openclaw restart

List available models:

openclaw models list

Run a quick chat test:

openclaw chat --model gemma4-local "Hello"

If the chat returns a response, OpenClaw is successfully connected to your local Gemma 4.

Common Troubleshooting

  • connection refused: make sure ollama serve is running.
  • model not found: verify the model name with ollama list (for example, gemma4:12b).
  • timeout: raise the timeout value in config.yaml and test with a smaller model first.
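The first two symptoms usually reduce to one question: is anything listening on port 11434? A minimal sketch of that check, using only the Python standard library:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if port_open("localhost", 11434):
        print("Ollama port is reachable")
    else:
        print("Nothing listening on 11434 -- start `ollama serve` first")
```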