This guide shows how to connect OpenClaw to a local Gemma 4 model through Ollama.
If you have not deployed Gemma 4 locally yet, start here:
Step 1: Start the Ollama API Service
Start Ollama first:

```shell
ollama serve
```
Then verify the API quickly with a non-streaming generate request:

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "gemma4:12b",
  "prompt": "Hello",
  "stream": false
}'
```
If you get a model response, your local API is ready.
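If you prefer to script the same check, here is a minimal Python sketch against Ollama's standard `/api/generate` endpoint. The endpoint URL and JSON fields are Ollama's documented non-streaming interface; the `gemma4:12b` tag follows the model naming used elsewhere in this guide.

```python
import json
import urllib.request

# Ollama's default generate endpoint on a local install.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body for a non-streaming /api/generate call."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def extract_response(raw: bytes) -> str:
    """Pull the generated text out of a non-streaming /api/generate reply."""
    return json.loads(raw)["response"]

def ask_ollama(model: str, prompt: str, timeout: float = 60.0) -> str:
    """Send one prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return extract_response(resp.read())

# Example (requires a running server): print(ask_ollama("gemma4:12b", "Hello"))
```

Keeping `stream` set to `false` makes the reply a single JSON object, which is easier to parse in a quick smoke test than Ollama's default line-by-line streaming format.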
Step 2: Configure OpenClaw to Use Ollama
The OpenClaw config file is usually located at:

```
~/.openclaw/config.yaml
```

(The exact path varies by installation; if the file is not there, check where your OpenClaw install keeps its configuration.)
Edit config.yaml and add a local model entry under models. The field names below are illustrative; match them to the schema of your OpenClaw version:

```yaml
models:
  gemma4-local:
    provider: ollama
    base_url: http://localhost:11434
    model: gemma4:12b
```
Step 3: Set Default Model (Optional)
If you want Gemma 4 as the default model, point the default at the entry you just added (key name is illustrative):

```yaml
default_model: gemma4-local
```
Step 4: Restart and Verify OpenClaw
Restart OpenClaw. The exact command depends on how you run it; for a CLI install it might look like:

```shell
openclaw restart
```
List available models on the Ollama side to confirm the model is present:

```shell
ollama list
```
Run a quick chat test (the command shown is illustrative; use your OpenClaw chat entry point):

```shell
openclaw chat "Hello, are you running locally?"
```
If the chat returns normally, OpenClaw is successfully connected to local Gemma 4.
Common Troubleshooting
- `connection refused`: make sure `ollama serve` is running.
- Model not found: check the model name with `ollama list` (for example `gemma4:12b`).
- Timeout: increase `timeout` and test a smaller model first.
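If you want to script these checks, a small probe that distinguishes "connection refused" from a timeout can be sketched like this. The host and port are Ollama's defaults; the hint strings mirror the troubleshooting items above.

```python
import socket

def probe(host: str = "localhost", port: int = 11434, timeout: float = 2.0) -> str:
    """Try a TCP connection to the Ollama port and return a troubleshooting hint."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "ok: Ollama port is open"
    except ConnectionRefusedError:
        # Port closed: the server process is not listening.
        return "connection refused: start the server with 'ollama serve'"
    except OSError:
        # Covers timeouts and unreachable hosts.
        return "timeout: check the host, port, and firewall settings"
```

A plain TCP probe only tells you the server is listening; if it reports "ok" but requests still fail, the problem is usually the model name or a too-small `timeout`.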