
OpenClaw LLM Setup: What Successful Builders Always Configure

Three fields in gateway.yaml connect OpenClaw to any LLM — but the builders whose agents actually stay up configure five. Here's the complete llm section, including fallback routing and connection testing, so your setup survives a provider outage.

J. Donovan, Technical Writer
Jan 20, 2025
Key Takeaways
  • The llm section in gateway.yaml controls every LLM interaction across all your agents — get it right once and everything downstream works
  • Three required fields: provider, model, and api_key — missing any one causes the gateway to refuse to start
  • Add a fallback block with a secondary provider so agent conversations survive individual provider outages automatically
  • Run openclaw test-llm after every config change to verify the connection before putting agents in front of users
  • Any OpenAI-compatible endpoint — including Ollama running locally — works by setting provider: openai and overriding base_url

Builders who get the LLM configuration wrong spend hours debugging agent failures that have nothing to do with their agent logic. The gateway.yaml llm section is small — but every field matters. Set it up correctly the first time, add a fallback, and your agents keep running even when your primary provider has a bad day.

The LLM Section in gateway.yaml

Every OpenClaw deployment has one gateway.yaml file. Inside that file, the llm block defines which language model powers your agents. This is not per-agent configuration — it's the system-wide default. Every agent routes its LLM calls through this configuration unless you override it at the agent level.
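For most deployments the gateway-level default is all you need. If you do run a mixed fleet, a per-agent override might look like the sketch below — note that the nested llm block inside an agent definition is an assumed schema for illustration, not confirmed OpenClaw syntax; check your version's agent configuration reference.

```yaml
# Hypothetical per-agent override. The agents block and nested llm
# schema shown here are assumptions for illustration only.
agents:
  support-bot:
    llm:
      provider: anthropic
      model: claude-3-5-sonnet-20241022
      api_key: ${ANTHROPIC_API_KEY}
```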

Here's the minimum viable configuration that gets OpenClaw talking to an LLM:

llm:
  provider: openai
  model: gpt-4o
  api_key: sk-your-openai-api-key-here

That's it for the basics. The gateway loads this on startup, initializes the LLM client, and makes it available to every agent that handles a conversation. Three fields, and your entire agent fleet has a brain.

But three lines is the minimum, not the optimum. We'll get to the full production-ready configuration in a moment — first, understand exactly what each field does.

Provider and Model Fields Explained

The provider field tells OpenClaw which API client to initialize. It maps directly to a supported integration. As of early 2025, the supported values are:

  • openai — GPT-4o, GPT-4-turbo, GPT-3.5-turbo, and any OpenAI-compatible endpoint
  • anthropic — Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
  • groq — Llama 3.3 70B, Mixtral 8x7B, Gemma 2 9B via Groq Cloud
  • gemini — Gemini 1.5 Pro, Gemini 1.5 Flash via Google AI Studio or Vertex AI
  • openrouter — Unified access to 100+ models via a single API key
  • xai — Grok-2, Grok-2-mini via xAI's API
  • minimax — abab6.5, abab6.5s multimodal models

The model field must exactly match the model identifier the provider expects. This is where most people make their first mistake.

Model name must match exactly
OpenClaw passes the model name directly to the provider API without transformation. If you write gpt4o instead of gpt-4o, the API returns a 404 on every request and your agents fail without an obvious error in the conversation itself. Check the provider's documentation for the exact model identifier string.

Correct model names for common configurations:

# OpenAI
model: gpt-4o
model: gpt-4-turbo
model: gpt-3.5-turbo

# Anthropic
model: claude-3-5-sonnet-20241022
model: claude-3-opus-20240229
model: claude-3-haiku-20240307

# Groq
model: llama-3.3-70b-versatile
model: mixtral-8x7b-32768

# Google Gemini
model: gemini-1.5-pro
model: gemini-1.5-flash

API Key Configuration

The api_key field accepts the key directly as a string, or you can reference an environment variable. Hardcoding the key in gateway.yaml works for development but is a security liability in production — the key ends up in version control.

The environment variable approach is cleaner:

llm:
  provider: openai
  model: gpt-4o
  api_key: ${OPENAI_API_KEY}

Set the variable in your shell before starting the gateway:

export OPENAI_API_KEY=sk-your-key-here
openclaw start

Or use a .env file if your deployment method supports it. The gateway reads environment variables at startup and substitutes them into the config. The key never touches disk in plaintext inside your config file.
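The substitution itself is plain string interpolation. As an illustration of the pattern (not OpenClaw's actual implementation), a minimal ${VAR} expander over a config string might look like this:

```python
import os
import re

def expand_env_vars(text: str) -> str:
    """Replace ${VAR} placeholders with environment variable values.

    Raises KeyError if a referenced variable is unset, so a missing
    API key fails loudly at startup instead of silently at request time.
    """
    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name} is not set")
        return os.environ[name]

    return re.sub(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}", repl, text)

# Demo with a throwaway value:
os.environ["OPENAI_API_KEY"] = "sk-demo"
print(expand_env_vars("llm:\n  api_key: ${OPENAI_API_KEY}"))
```

Failing loudly on a missing variable is the design choice worth copying: a startup crash is far cheaper to debug than agents running with an empty key.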

Use different keys per environment
Keep separate API keys for development, staging, and production. This lets you set spending limits per environment, rotate keys independently, and immediately revoke a compromised development key without affecting production agents.

Switching Between Providers

Switching providers is a three-field change in gateway.yaml: update provider and model, swap in the new api_key, then restart the gateway. Every agent immediately routes to the new provider on the next conversation.
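For example, moving the fleet from OpenAI to Groq (using the provider and model values listed earlier) comes down to this — the GROQ_API_KEY variable name is my placeholder, use whatever your environment defines:

```yaml
llm:
  provider: groq
  model: llama-3.3-70b-versatile
  api_key: ${GROQ_API_KEY}   # placeholder env var name
```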

Here's where most teams get tripped up: different providers have different message format expectations. OpenClaw handles format conversion internally — you don't need to change your agent prompts or tool definitions when switching providers. The abstraction layer takes care of translating your agent's instructions into whatever format the new provider expects.

The one thing that doesn't automatically transfer is capability. If you switch from GPT-4o to a model that doesn't support function calling, your tool-using agents will break. Check provider capability before switching in production.
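One way to make that check explicit is a small capability table your deploy script consults before applying a switch. The table below is illustrative only — the per-model entries are examples, not an authoritative support matrix; verify actual support against each provider's documentation.

```python
# Illustrative capability table. Entries are examples, not an
# authoritative support matrix -- confirm against provider docs.
CAPABILITIES = {
    "gpt-4o": {"function_calling", "vision", "streaming"},
    "claude-3-5-sonnet-20241022": {"function_calling", "vision", "streaming"},
    "llama-3.3-70b-versatile": {"function_calling", "streaming"},
}

def safe_to_switch(new_model: str, required: set) -> bool:
    """Return True only if the new model covers every required capability."""
    return required <= CAPABILITIES.get(new_model, set())

# A vision-dependent agent fleet cannot safely move to this model:
print(safe_to_switch("llama-3.3-70b-versatile", {"function_calling", "vision"}))  # False
```

Gating the config change on a check like this turns "we forgot the new model has no vision support" from a production incident into a failed deploy step.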

Fallback Configuration

This is the configuration most guides skip. Add a fallback block and your agents keep running when your primary provider has an outage — which happens more often than providers admit.

llm:
  provider: anthropic
  model: claude-3-5-sonnet-20241022
  api_key: ${ANTHROPIC_API_KEY}
  fallback:
    provider: openai
    model: gpt-4o
    api_key: ${OPENAI_API_KEY}
  timeout: 30
  retry_attempts: 2

When the primary provider returns a 5xx error or times out, OpenClaw automatically retries with the fallback provider. The conversation continues. Your users see no interruption. The gateway logs the fallback event so you can monitor provider reliability over time.

The timeout field (in seconds) controls how long OpenClaw waits for the LLM before declaring a failure and trying the fallback. Thirty seconds is a reasonable default — some complex multi-turn conversations need more time, but thirty catches most real outages without keeping users waiting too long.

retry_attempts controls how many times OpenClaw retries the primary provider before switching to the fallback. Two retries is usually right — more than that and you're just adding latency on a provider that's clearly having problems.
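The behavior those two fields describe amounts to a retry-then-fallback loop. The sketch below mimics the documented logic — try the primary, retry it retry_attempts times on failure, then take one shot at the fallback. It is an illustration of the pattern, not OpenClaw's internals.

```python
class ProviderError(Exception):
    """Stand-in for a 5xx response or timeout from a provider."""

def complete_with_fallback(primary, fallback, prompt, retry_attempts=2):
    """Try the primary provider, retrying on failure, then fall back.

    `primary` and `fallback` are callables that take a prompt and
    return a response string, raising ProviderError on 5xx/timeout.
    """
    for _attempt in range(1 + retry_attempts):  # initial try + retries
        try:
            return primary(prompt)
        except ProviderError:
            continue
    # Primary exhausted: one attempt against the fallback provider.
    return fallback(prompt)

# Simulate a primary outage with a healthy fallback:
def down(_prompt):
    raise ProviderError("503 from primary")

def healthy(prompt):
    return f"fallback answered: {prompt}"

print(complete_with_fallback(down, healthy, "hello"))  # fallback answered: hello
```

Note that with retry_attempts: 2 the primary is attempted three times in total (the initial call plus two retries) before the fallback sees any traffic, which matches the latency trade-off described above.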

Testing the Connection

After every gateway.yaml change, run the built-in connection test before restarting with live traffic:

openclaw test-llm

This command sends a minimal test prompt to your configured provider and prints the response. A successful test looks like this:

Testing LLM connection...
Provider: anthropic
Model: claude-3-5-sonnet-20241022
Status: ✓ Connected
Response time: 1.2s
Test response: "Hello! I'm ready to assist your agents."

If you see an error, the output tells you exactly what went wrong — invalid API key, wrong model name, network timeout, or insufficient credits. Fix the specific issue, run the test again, and only then restart the gateway.

A passing test only covers the primary provider, though. The failures that actually take agents down tend to appear after the basic connection works — and they cluster around a handful of recurring mistakes.

Common Mistakes

  • Wrong model identifier format — Anthropic model names include the version date (claude-3-5-sonnet-20241022), not just the model name. Copy the exact string from the provider's API documentation.
  • Not testing after every change — changing the model field without running openclaw test-llm means the first real user conversation discovers the misconfiguration.
  • Hardcoding API keys in gateway.yaml — keys in config files end up in git history. Use environment variables or a secrets manager from day one.
  • Skipping the fallback block — every major provider has had multi-hour outages in early 2025. Without a fallback, those outages take down your agents completely.
  • Setting timeout too low — a 5-second timeout causes false failures on legitimate complex requests. Start at 30 seconds and tune down based on observed latency.
  • Not checking capability before switching providers — function calling, vision, and streaming have different support levels across providers. Verify before switching in production.

Frequently Asked Questions

Where do I configure the LLM in OpenClaw?

Configure your LLM in the llm section of gateway.yaml. You need at minimum three fields: provider, model, and api_key. The gateway loads this on startup and uses it for every agent conversation in the system.

Can I switch LLM providers without restarting OpenClaw?

No — the gateway reads gateway.yaml only at startup. To switch providers, update the provider and model fields (and the api_key), then restart the gateway. The change applies to all conversations after the restart.

What providers does OpenClaw support in gateway.yaml?

OpenClaw supports openai, anthropic, groq, gemini, openrouter, xai, and minimax as named provider values. Any OpenAI-compatible API can also be used by setting provider: openai and overriding base_url.

How do I set up LLM fallback in OpenClaw?

Add a fallback block under the llm section with a secondary provider and model. If the primary provider returns an error or times out, OpenClaw automatically retries using the fallback configuration, keeping your agents running during provider outages.

How do I test that my LLM connection is working?

Run openclaw test-llm from the CLI after configuring gateway.yaml. This sends a test prompt to your configured provider and prints the response. If you see a reply, the connection works. If not, the output identifies the specific error — wrong key, bad model name, or network issue.

Can I use a locally running model like Ollama with OpenClaw?

Yes. Set provider: openai, model: your-local-model-name, api_key: ollama, and base_url: http://localhost:11434/v1. Ollama exposes an OpenAI-compatible API, so OpenClaw treats it identically to any other OpenAI endpoint.
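Putting those four fields together, the full llm block for a local Ollama instance looks like this — the model name llama3.2 is a placeholder for whatever model you have pulled locally:

```yaml
llm:
  provider: openai            # Ollama exposes an OpenAI-compatible API
  model: llama3.2             # placeholder: use your pulled model's name
  api_key: ollama             # Ollama ignores the key, but the field is required
  base_url: http://localhost:11434/v1
```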


J. Donovan has documented LLM integrations across a dozen production OpenClaw deployments, from single-provider hobby projects to multi-provider enterprise setups with automatic failover. Specializes in making gateway configuration approachable without sacrificing accuracy.
