Models & Providers
Every AI model OpenClaw supports — local and cloud. Ollama, OpenAI, Claude, Gemini, Groq, DeepSeek, and more. Pick the right model, then configure it correctly.
Local Models

01. OpenClaw + Ollama: Run Private Local AI Agents for Free
The complete setup for connecting OpenClaw to Ollama — model selection, performance tuning, and making local inference fast enough to actually use daily.

02. OpenClaw Ollama Local Model: Step-by-Step Setup That Works
Pull a model, point OpenClaw at it, run your first local agent — the exact commands and config that get you from zero to running in under 20 minutes.

03. OpenClaw Local LLM: The Complete Off-Grid AI Agent Setup
Everything you need to know about running OpenClaw with fully local models — hardware requirements, model options, privacy benefits, and speed trade-offs.

04. OpenClaw + LM Studio: Build Local Agents Without Cloud Costs
LM Studio gives you a GUI for local models — how to run it alongside OpenClaw, expose the local server, and configure your agent to use it.

05. OpenClaw LLM Setup: What Successful Builders Always Configure
The LLM config options most builders miss — context window, temperature, system prompt injection, and the settings that dramatically change agent quality.
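The local guides above all reduce to the same core wiring: Ollama exposes an OpenAI-compatible server at `http://localhost:11434/v1`, and OpenClaw just needs to be pointed at it. A minimal sketch of what that provider entry might look like (the field names here are illustrative assumptions, not OpenClaw's documented schema; check the guides for the exact keys):

```json
{
  "provider": "ollama",
  "baseUrl": "http://localhost:11434/v1",
  "model": "llama3.1",
  "apiKey": "ollama"
}
```

Pull the model first with `ollama pull llama3.1` (any locally pulled model name can be substituted); the API key is a placeholder, since Ollama does not authenticate local requests.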
Cloud Providers

01. OpenClaw + OpenAI: The Proven GPT Integration Guide [2024]
Connect OpenClaw to the OpenAI API — API key setup, model selection (GPT-4o vs GPT-4o-mini), cost controls, and the config that avoids runaway spend.

02. OpenClaw + Claude: Supercharge Your Agents With Anthropic AI
Use Claude Sonnet or Haiku as the brain behind your OpenClaw agent — API key setup, model selection, and the agentic use cases where Claude outperforms GPT.

03. OpenClaw + Gemini: Unlock Google AI in Your Agent Pipelines
Connect OpenClaw to Gemini 1.5 Pro or Flash — API key, model config, multimodal capabilities, and when Gemini's long context window gives you an edge.

04. OpenClaw + Groq: The Fastest Inference Setup for Your Agents
Groq delivers 10x faster inference than standard cloud APIs — how to connect OpenClaw, which models are available, and when speed matters more than capability.

05. OpenClaw + DeepSeek: Run the Cheapest Capable Model Right
DeepSeek V3 and R1 are among the most cost-effective models available — connect them to OpenClaw and get near-GPT-4 quality at a fraction of the price.

06. OpenClaw + OpenRouter: One Key to Rule Every AI Model
OpenRouter gives you access to 100+ models through a single API — how to connect it to OpenClaw and build agents that can switch models mid-task.

07. OpenClaw + Grok: Connect xAI to Your Agents in Minutes
Grok's real-time access and X integration make it useful for specific agent tasks — API setup, which Grok model to choose, and when it outperforms alternatives.

08. OpenClaw + Kimi: The Long-Context Model Nobody's Talking About
Kimi supports 128k context at low cost — ideal for document-heavy agent tasks. Full setup guide for connecting it to OpenClaw and getting the best results.
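Several of the providers above expose OpenAI-compatible endpoints, so the wiring tends to look the same with only the base URL, key, and model slug changing. A rough sketch using OpenRouter's documented base URL (the field names are illustrative assumptions, not OpenClaw's actual config schema):

```json
{
  "provider": "openrouter",
  "baseUrl": "https://openrouter.ai/api/v1",
  "model": "deepseek/deepseek-chat",
  "apiKey": "${OPENROUTER_API_KEY}"
}
```

Keep the key in an environment variable rather than in the config file itself; the model slug is one of OpenRouter's published identifiers and can be swapped for any model your key has access to.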
API Keys & Auth

01. OpenClaw Gemini API Key: The Exact Setup Steps That Work
Get a Gemini API key from Google AI Studio, configure it in OpenClaw, and verify the connection — the complete step-by-step with current screenshots.

02. OpenClaw Grok API Key: Connect xAI in Under 3 Minutes
Get your xAI API key, add it to OpenClaw, and verify Grok is responding correctly — a fast guide with no filler for builders who just want it working.

03. OpenClaw Codex OAuth: The Auth Setup Most Guides Get Wrong
Connecting OpenClaw to OpenAI Codex requires OAuth, not a plain API key — the correct flow, token scopes, and common mistakes that block the connection.
Recommendations

01. OpenClaw Recommended Model: What Top Builders Actually Use
Not every model works equally well for agent tasks — the data on which models perform best in OpenClaw for reasoning, tool use, speed, and cost per task.