Most builders spend days juggling separate provider accounts before someone tells them about OpenRouter. 200+ models, one billing dashboard, one key — and it connects to OpenClaw in under fifteen minutes once you know the exact two things to configure. The mistake most people make is stopping at the API key and forgetting the base URL. Here's the full picture.
## Why OpenRouter Changes the Model Access Equation
Managing multiple API keys across Anthropic, OpenAI, Google, Mistral, and Cohere is an operational tax. Every provider has its own dashboard, its own rate limit system, and its own billing cycle. When your OpenClaw agent needs to fall back from one model to another, you're re-wiring authentication for every transition.
OpenRouter solves this at the infrastructure layer. It acts as a unified gateway that speaks the OpenAI API spec — which is exactly what OpenClaw expects. You authenticate once with an OpenRouter API key, and OpenRouter handles the routing, authentication, and cost tracking to the underlying providers on your behalf. Your OpenClaw config never touches a provider key directly.
Here's what we've seen consistently across builder setups: teams that integrate OpenRouter early spend far less time on model-switching logistics because the switch is a single string change. Teams that hard-code a single direct provider spend hours re-wiring configs when that provider raises prices or hits an outage.
The other benefit that's underappreciated: OpenRouter exposes free-tier models. Llama 3.1 8B and several Mistral variants are available at zero cost. For prototyping, you can validate your entire agent logic without spending anything, then swap to a production model with one config line change.
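That one-line swap looks like this in the provider block (model strings shown are illustrative; confirm exact names in the OpenRouter model directory):

```yaml
# Prototyping — free tier, zero cost:
model: meta-llama/llama-3.1-8b-instruct:free

# Production — same config, one line changed:
# model: anthropic/claude-3.5-sonnet
```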
OpenRouter marks these with a `:free` suffix on model strings, like `meta-llama/llama-3.1-8b-instruct:free`. Point your OpenClaw config at one of these during initial setup to confirm the connection works before switching to a paid model.

## What You Need Before You Start
Three things need to be in place. Every incomplete setup I've seen was missing at least one of these.
- OpenClaw installed and responding — run `openclaw --version` to confirm. If this fails, resolve your install before touching provider config.
- An OpenRouter account — free to create, takes two minutes at openrouter.ai.
- An OpenRouter API key — generated from the OpenRouter dashboard under the Keys section. It starts with `sk-or-v1-`.
That's the full prerequisites list. No Anthropic account. No OpenAI account. No Google Cloud setup. OpenRouter handles all of those provider relationships — you're just a customer of OpenRouter.
One thing to know before your first real run: OpenRouter uses credit-based billing. You top up credits, and each model call deducts from that balance based on token usage. Free models deduct nothing. Paid model rates are listed per-million-tokens on each model's OpenRouter directory page. Load at least $5 in credits before running any agent that touches paid models — this prevents zero-balance blocks that look like authentication failures.
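Per-million-token pricing makes the cost math easy to sanity-check before you load credits. A minimal sketch (the rates below are placeholders; read real rates from each model's OpenRouter directory page):

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_rate_per_m: float, output_rate_per_m: float) -> float:
    """Estimate the cost of one call given per-million-token rates."""
    return (input_tokens / 1_000_000) * input_rate_per_m \
         + (output_tokens / 1_000_000) * output_rate_per_m

# Hypothetical rates: $3/M input tokens, $15/M output tokens
cost = estimate_cost_usd(2_000, 500, 3.0, 15.0)
print(f"${cost:.4f}")  # cost of one 2k-in / 500-out call at these rates
```

Running numbers like these against your expected session length tells you whether $5 of credits is enough headroom for your first real run.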
## Step-by-Step: The Exact Config That Works
Two things to set: a base URL and an API key in your provider block. Here is the complete working configuration:

```yaml
# openclaw.config.yaml
provider:
  name: openrouter
  base_url: https://openrouter.ai/api/v1
  api_key: ${OPENROUTER_API_KEY}
  model: anthropic/claude-3.5-sonnet
  # OpenRouter recommends these headers for usage attribution
  extra_headers:
    HTTP-Referer: https://yourdomain.com
    X-Title: Your Agent Name
```
Then export your key as an environment variable. Never hardcode it in the config file:
```bash
export OPENROUTER_API_KEY=sk-or-v1-xxxxxxxxxxxxxxxxxxxxxxxx
```
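If any of your own tooling loads this key programmatically, failing fast on a missing variable turns a confusing downstream 401 into an obvious local error. A minimal sketch (the variable name matches the config above; the prefix check is a sanity check, not real validation):

```python
import os
import sys

def load_openrouter_key() -> str:
    """Read the OpenRouter key from the environment and fail fast if absent."""
    key = os.environ.get("OPENROUTER_API_KEY", "")
    if not key:
        sys.exit("OPENROUTER_API_KEY is not set — did you forget `export`?")
    if not key.startswith("sk-or-v1-"):
        print("warning: key does not start with sk-or-v1- — is this an OpenRouter key?")
    return key
```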
Start your OpenClaw session and send a test prompt. A valid response confirms the connection. Common error codes and what they mean:
- 401 — wrong key or key not loaded into the environment. Confirm `echo $OPENROUTER_API_KEY` returns your key before starting OpenClaw.
- 404 — base URL is missing, wrong, or has a trailing slash. Use exactly `https://openrouter.ai/api/v1`.
- 402 — zero credits on a paid model. Load credits in the OpenRouter dashboard and retry.
- 429 — rate limit hit. Add retry logic to your agent config (covered below).
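If you script your test runs, the same decode table can live in a small helper so a failed run prints the likely fix instead of a bare status code (a sketch; the messages simply mirror the list above):

```python
# Map HTTP status codes from OpenRouter to the likely fix.
DIAGNOSES = {
    401: "Wrong or unloaded key — check `echo $OPENROUTER_API_KEY`.",
    404: "Bad base URL or model string — use https://openrouter.ai/api/v1, no trailing slash.",
    402: "Zero credits on a paid model — top up in the OpenRouter dashboard.",
    429: "Rate limit hit — enable retry/backoff in the agent config.",
}

def diagnose(status_code: int) -> str:
    """Return a human-readable hint for a failed OpenRouter request."""
    return DIAGNOSES.get(status_code, f"Unexpected status {status_code} — check the OpenRouter status page.")

print(diagnose(402))
```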
The model string format is provider/model-name. This is the one place OpenRouter differs from direct provider config. Instead of writing claude-3-5-sonnet-20241022, you write anthropic/claude-3.5-sonnet. Every model's OpenRouter directory page shows the exact string — always check there rather than guessing.
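A quick format check catches this most common 404 before any request is sent. A sketch assuming only the documented `provider/model-name` shape with an optional variant suffix such as `:free` (the regex is an approximation, not OpenRouter's official grammar):

```python
import re

# provider/model-name, optionally with a variant suffix such as :free
MODEL_RE = re.compile(r"^[a-z0-9-]+/[a-z0-9.-]+(:[a-z-]+)?$")

def looks_like_openrouter_model(model: str) -> bool:
    """True if the string follows the provider/model-name convention."""
    return bool(MODEL_RE.match(model))

assert looks_like_openrouter_model("anthropic/claude-3.5-sonnet")
assert looks_like_openrouter_model("meta-llama/llama-3.1-8b-instruct:free")
assert not looks_like_openrouter_model("claude-3-5-sonnet-20241022")  # direct-provider name
```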
If `api_key: sk-or-v1-xxx` is hardcoded in your config file and that file is committed to a git repository, your OpenRouter credits are exposed. Always use environment variable references. Add your config file to `.gitignore` if it contains any sensitive values, or use a dedicated secrets manager.

## Choosing the Right Model for Your Agent Task
With 200+ models available, the instinct is to default to the most capable one for everything. That's the wrong approach — and it's expensive. Match model capability to task complexity.
| Task Type | Recommended Model String | Cost Tier |
|---|---|---|
| Multi-step agentic reasoning | anthropic/claude-3.5-sonnet | $$ |
| Code generation and review | openai/gpt-4o | $$ |
| Summarization and classification | mistralai/mistral-medium | $ |
| Prototyping and logic validation | meta-llama/llama-3.1-8b-instruct:free | Free |
| High-volume batch operations | openai/gpt-4o-mini | $ |
The cost difference matters at scale. GPT-4o costs roughly 20x more than GPT-4o Mini with comparable accuracy on classification and summarization tasks. A 500-turn production agent session on GPT-4o runs approximately $8. The same session on GPT-4o Mini runs under $0.50. For tasks that don't require frontier reasoning, the cheaper model is the right choice.
We'll cover fallback configuration in the next section — but understand that OpenRouter makes model fallback trivial. Define a model list in your OpenClaw config, and the agent automatically advances to the next entry on failure. This protects production runs from single-provider outages.
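The control flow behind that fallback is just an ordered list walked on failure. A minimal sketch with a stand-in `call_model` function (this illustrates the pattern only; it is not OpenClaw's actual fallback implementation):

```python
def run_with_fallback(prompt, models, call_model):
    """Try each model in order; return the first successful response."""
    last_error = None
    for model in models:
        try:
            return call_model(model, prompt)
        except RuntimeError as err:  # stand-in for a provider/transport error
            last_error = err
    raise RuntimeError(f"all models failed: {last_error}")

# Usage with a fake caller that simulates the primary model being down:
def fake_call(model, prompt):
    if model == "anthropic/claude-3.5-sonnet":
        raise RuntimeError("provider outage")
    return f"{model} answered"

result = run_with_fallback("hi", ["anthropic/claude-3.5-sonnet", "openai/gpt-4o"], fake_call)
print(result)  # → openai/gpt-4o answered
```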
## Rate Limits and Cost Control That Holds in Production
OpenRouter's free account tier caps at 20 requests per minute. That works for light testing. The moment you run a real agentic workflow with tool use or recursive planning steps, you'll hit it within seconds.
Configure retry and backoff logic before your first real run:
```yaml
# openclaw.config.yaml — rate limit handling
agent:
  retry_on_rate_limit: true
  retry_max_attempts: 3
  retry_delay_seconds: 5
  retry_backoff_multiplier: 2.0
```
This tells OpenClaw to wait 5 seconds on the first 429, 10 seconds on the second, and 20 seconds on the third before surfacing a hard failure. For most workflows, this absorbs burst traffic without user-visible interruptions.
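The schedule described above (5 s, then 10 s, then 20 s) is plain exponential backoff. A sketch of the same math, useful for sanity-checking your own config values before a run:

```python
def backoff_delays(initial: float, multiplier: float, attempts: int):
    """Delay before retry attempt n: initial * multiplier**n."""
    return [initial * multiplier ** n for n in range(attempts)]

print(backoff_delays(5, 2.0, 3))  # → [5.0, 10.0, 20.0]
```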
For cost control, set a daily spending limit directly in the OpenRouter dashboard — not in OpenClaw. The dashboard limit is enforced server-side. Even a runaway loop can't exceed it. A $5 daily cap is reasonable for development; raise it deliberately when you move to production.
Sound familiar? You start an agent test, walk away, and come back to find $30 in usage from a loop that never exited. Setting the daily limit before any multi-step run is the single highest-leverage habit you can build with OpenRouter.
## Five Mistakes That Break the Connection
These are the exact errors that show up in builder support threads, in order of frequency.
**Wrong model string format.** Writing `claude-3-5-sonnet-20241022` instead of `anthropic/claude-3.5-sonnet` returns a 404. OpenRouter has its own model naming convention using the `provider/model-name` format. Always check the OpenRouter model directory for the exact string before running.
**Missing base URL.** Setting the API key but forgetting the base URL causes OpenClaw to send requests to the OpenAI endpoint, which rejects your OpenRouter key with a 401. The base URL is not optional — it is what points OpenClaw at OpenRouter's servers.
**Trailing slash on the base URL.** `https://openrouter.ai/api/v1/` with a trailing slash breaks routing on several OpenClaw versions. Use exactly `https://openrouter.ai/api/v1`.
**Environment variable not exported.** Setting `OPENROUTER_API_KEY=sk-or-v1-xxx` without the `export` keyword means child processes — including OpenClaw — can't read the variable. Always use `export OPENROUTER_API_KEY=...`, and restart your terminal session if you're uncertain whether the variable is loaded.
**Zero credits on a paid model request.** Free models still work with a zero credit balance. Paid models return a 402 that looks identical to a config error. If you've confirmed the config is correct but requests still fail, check your OpenRouter credit balance first.
## Frequently Asked Questions
### Does OpenRouter work with OpenClaw out of the box?
OpenRouter works with OpenClaw with just two config changes: a base URL pointing to https://openrouter.ai/api/v1 and an API key environment variable. No plugin, extension, or additional dependency is needed. Once those two values are in place, all 200+ models in the OpenRouter directory become accessible from your OpenClaw agent.
### Which OpenRouter models work best with OpenClaw agents?
For agentic, multi-step tasks as of early 2025, Claude 3.5 Sonnet and GPT-4o via OpenRouter deliver the strongest performance. For cost-efficient iteration, Mistral Medium handles most reasoning tasks reliably. Sub-7B models generally struggle with structured tool call output and should be avoided for any workflow that depends on consistent JSON formatting.
### Can I switch models mid-session in OpenClaw with OpenRouter?
Model selection is resolved at session start. To switch models, update the model string in your config file and restart the OpenClaw session. Hot-swapping mid-session is not natively supported, but session restarts can be scripted via the OpenClaw CLI, making automated model rotation straightforward for testing pipelines.
### Will using OpenRouter cost more than direct API access?
OpenRouter adds a markup of roughly 5–10% above direct provider rates for paid models. For most builders, the operational simplicity of one key, one dashboard, and instant model-switching more than offsets this. Free-tier models via OpenRouter cost nothing, which further reduces the effective overhead for projects that prototype heavily before committing to a model.
### How do I handle rate limits from OpenRouter in OpenClaw?
Set retry_on_rate_limit: true in your OpenClaw agent config alongside an exponential backoff starting at 2–5 seconds. The free OpenRouter tier defaults to 20 RPM. For production agents that exceed this, upgrade your OpenRouter account tier. Upgrading raises the cap significantly and unlocks priority routing on high-demand models.
### Is my API key safe when using OpenRouter with OpenClaw?
OpenRouter holds your underlying provider credentials server-side. OpenClaw only ever stores your OpenRouter API key locally. If you suspect your OpenRouter key has been exposed, rotate it immediately from the OpenRouter dashboard — existing provider credentials remain safe. Never commit any API key to version control; always reference it through an environment variable.
You now have the exact config block, the common error codes and their fixes, a model selection framework, and the rate limit setup that protects production runs. That covers everything from first connection to scaling confidently.
With OpenRouter in place, your next agent can start on a free model to validate logic, then promote to a production model with a single line change — without touching authentication.
Create your OpenRouter account, generate your API key, and paste the config block above into your openclaw.config.yaml. The whole process takes under 8 minutes and costs nothing to start.