- Every OpenClaw skill is defined by a SKILL.md file — the manifest controls name, commands, parameters, and handler invocation
- Handlers can be written in any language: Python, Node.js, Bash, or any binary in your system PATH
- Parameters flow from agent intent to handler as environment variables or CLI arguments — define them explicitly in the manifest
- Use `openclaw plugins install ./my-skill` for local installs and `openclaw plugins publish` to share on ClaWHub
- Hot-reload means no restart required — OpenClaw picks up new skills from the skills directory automatically
Build even one custom skill and the difference is immediate: the skill system is that direct. You define what a skill does, how it's invoked, and what it returns — the agent handles the rest. This guide walks through every piece of that process using a real-world example.
How the OpenClaw Skill System Works
Skills in OpenClaw are self-contained capability packages. Each skill lives in its own directory and contains a SKILL.md manifest file that defines everything the agent needs to know: what the skill does, how to invoke it, what parameters it accepts, and what output format to expect.
When an agent receives a user message, OpenClaw's intent parser scans registered skills for command matches. If the user's message matches a skill command pattern, the agent invokes the skill handler with any extracted parameters. The handler executes, returns output, and the agent incorporates that output into its response.
This is fundamentally different from prompt engineering. You're not asking the LLM to simulate a capability — you're giving the agent a real executable that produces deterministic results.
Prompt instructions tell the agent how to behave. Skills give the agent new abilities. Use prompt instructions for tone and reasoning style. Use skills for anything that requires external data, deterministic computation, or system interaction.
The SKILL.md Manifest Format
The manifest is where everything starts. Create a directory for your skill and add a SKILL.md file. The frontmatter block at the top defines the machine-readable metadata. The body below it serves as documentation and context for the agent's reasoning about when to invoke the skill.
```markdown
---
name: weather-lookup
version: 1.0.0
description: Fetch current weather and forecasts for any location
author: T. Chen
commands:
  - weather
  - get weather
  - what is the weather
parameters:
  - name: location
    type: string
    required: true
    description: City name or postal code to look up
  - name: units
    type: string
    required: false
    default: metric
    description: Temperature units (metric or imperial)
handler: python handler.py
output: text
---

## Weather Lookup Skill

Use this skill when the user asks about current weather conditions,
forecasts, or temperature for any location. Extract the location
from the user's message and pass it as the `location` parameter.
If the user specifies Fahrenheit or imperial units, set units to
`imperial`. Default to metric otherwise.
```
The frontmatter fields that matter most are `commands` (trigger phrases), `parameters` (what you extract from user intent), and `handler` (the executable to run). The body text is context for the LLM — write it the same way you'd write a system prompt instruction.
Key Manifest Fields
| Field | Required | Purpose |
|---|---|---|
| `name` | Yes | Unique identifier for the skill (kebab-case) |
| `version` | Yes | Semantic version string |
| `commands` | Yes | Trigger phrases that invoke this skill |
| `parameters` | No | Named inputs the agent extracts from the user message |
| `handler` | Yes | Command to execute when the skill is invoked |
| `output` | No | Output format: text, json, markdown (default: text) |
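Given only the required fields in the table, a minimal valid manifest could be as small as the sketch below. The skill name and handler path here are placeholders, not part of any published skill:

```yaml
---
name: hello-world        # kebab-case identifier
version: 0.1.0           # semantic version string
commands:
  - say hello
handler: ./handler.sh    # any executable; parameters and output are optional
---
```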
Writing Your Handler
The handler is any executable that reads parameters from environment variables and writes output to stdout. OpenClaw injects parameters as `SKILL_PARAM_[NAME]` environment variables before invoking the handler. Whatever the handler writes to stdout becomes the skill's return value.
```python
#!/usr/bin/env python3
# handler.py — weather-lookup skill handler
import json
import os
import sys
import urllib.parse
import urllib.request

location = os.environ.get('SKILL_PARAM_LOCATION', '')
units = os.environ.get('SKILL_PARAM_UNITS', 'metric')

if not location:
    # Error text goes to stdout so the agent can surface it to the user
    print("Error: location parameter is required")
    sys.exit(1)

# Call weather API (URL-encode the query so multi-word cities work)
api_key = os.environ.get('WEATHER_API_KEY', '')
query = urllib.parse.urlencode({'q': location, 'units': units, 'appid': api_key})
url = f"https://api.openweathermap.org/data/2.5/weather?{query}"

try:
    with urllib.request.urlopen(url) as response:
        data = json.loads(response.read())
    temp = data['main']['temp']
    desc = data['weather'][0]['description']
    unit_symbol = '°C' if units == 'metric' else '°F'
    print(f"Current weather in {location}: {desc}, {temp}{unit_symbol}")
except Exception as e:
    print(f"Failed to fetch weather: {e}")
    sys.exit(1)
```
This is a complete handler. No framework, no SDK, no special imports. The simplicity is intentional — any tool you already know how to write can become an OpenClaw skill handler with minimal changes.
For API keys and secrets, use environment variables defined in your agent's config rather than hardcoding them. The handler inherits the agent's environment at invocation time.
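One way to honor that in a handler is to fail fast when a secret is missing rather than sending an unauthenticated request. A small sketch, using the same exit-code convention as the handler above; the `require_env` helper is illustrative, not part of any OpenClaw SDK:

```python
import os
import sys

def require_env(name: str) -> str:
    """Return a required environment variable, or exit non-zero with a message."""
    value = os.environ.get(name, "")
    if not value:
        # The stdout message is what the agent surfaces on a non-zero exit
        print(f"Error: {name} is not configured")
        sys.exit(1)
    return value

# In the weather handler, this would replace the bare .get() lookup:
# api_key = require_env("WEATHER_API_KEY")
```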
Installing and Testing Your Skill
Once your skill directory contains a valid SKILL.md and a handler script, install it locally.
```shell
openclaw plugins install ./weather-lookup
```
OpenClaw validates the manifest, registers the skill, and makes it available to all agents immediately. No restart required — as of early 2025, hot-reload is enabled by default for skills installed from local directories.
Test the skill in isolation before connecting it to an agent.
```shell
openclaw skills test weather-lookup --param location="Tokyo" --param units="metric"
```
This runs the handler directly with the provided parameters and shows raw stdout output plus any errors. It's faster than spinning up an agent conversation every time you iterate on the handler logic.
On Unix systems, run `chmod +x handler.py` before installing if your handler command invokes the script directly (e.g. `handler: ./handler.py`). OpenClaw invokes the handler command as written — if the file isn't executable, the skill will fail silently during agent invocation. The test command will surface this error; running in an agent conversation may not.
Parameters, Configuration, and Output Formats
Parameters are the interface between the agent's language understanding and your handler's logic. Define them carefully — the agent uses your parameter descriptions to extract the right values from natural language.
Required parameters with no default will cause the agent to ask the user for clarification before invoking the skill. That's intentional behavior. Optional parameters with defaults give the handler sensible fallbacks when the user doesn't specify.
For output format, choose json if the agent needs to reason over structured data before responding to the user. Use text or markdown when the handler output goes directly into the agent's response. JSON output lets the agent extract specific fields and combine them with other context.
Here's where most developers get it wrong: they write handlers that output verbose debug information mixed with the actual result. The agent treats all stdout as the skill output. Keep stdout clean — only output what you want the agent to use.
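A handler that respects that split keeps the structured result on stdout and pushes diagnostics to stderr. A sketch, assuming `output: json` in the manifest; the payload field names are illustrative:

```python
import json
import os
import sys

def debug(msg: str) -> None:
    # stderr never becomes skill output, so it is safe for diagnostics
    print(msg, file=sys.stderr)

def emit(payload: dict) -> None:
    # stdout carries only the structured result the agent will parse
    print(json.dumps(payload))

location = os.environ.get("SKILL_PARAM_LOCATION", "")
debug(f"resolved location parameter: {location!r}")
emit({"location": location, "status": "ok"})
```

The agent sees a single clean JSON object; everything routed through `debug` stays in the logs.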
Common Skill Development Mistakes
- Writing to stdout for debugging — any print statement in your handler becomes part of the skill output the agent sees. Use stderr for debug logs: `print("debug", file=sys.stderr)`.
- Not handling missing parameters — even required parameters can arrive empty if the intent parser misfires. Always validate inputs at the start of your handler and exit with a meaningful error message.
- Making commands too specific — if your command list only includes exact phrases, the agent won't invoke the skill for natural variations. Include 3–5 command phrases covering different ways users express the same intent.
- Ignoring exit codes — OpenClaw treats a non-zero exit code as a skill failure. Return exit code 0 on success, non-zero on error. The agent will surface the stdout error message to the user if the exit code is non-zero.
- Hardcoding secrets in the handler — use environment variables. Store API keys in the agent's config or your system environment, not in the handler script that you might accidentally publish.
Frequently Asked Questions
What is the minimum file structure for an OpenClaw skill?
Every skill needs at least a SKILL.md file with the manifest header and a handler definition. The SKILL.md contains frontmatter (name, version, description, commands) and a body describing how the handler executes. Additional config or script files are optional but recommended for complex skills.
Can I write OpenClaw skills in any programming language?
OpenClaw skill handlers can call any executable available in the system PATH — Python, Node.js, Bash, Go binaries, and more. The skill manifest defines which command the handler invokes. The language choice is yours as long as the binary is reachable from the OpenClaw runtime environment.
How do I pass parameters from the agent to my custom skill?
Parameters are passed as environment variables named `SKILL_PARAM_[NAME]`. The agent parses user intent, extracts named parameters matching your manifest definitions, and injects them into the handler environment automatically before invocation.
Do I need to restart OpenClaw after installing a new skill?
No restart is required. OpenClaw watches the skills directory and hot-reloads new or updated skill manifests automatically. If your skill modifies the agent configuration file (`agent.yaml`), you may need to reload that specific agent using the reload command.
How do I debug a skill that isn't working?
Run `openclaw skills test [skill-name]` to execute the skill in isolation with verbose output. Check gateway logs for handler invocation errors. Ensure the handler command is executable and all dependencies are installed. Missing dependencies or incorrect paths are the most common failure causes.
Can I share my custom skill with other OpenClaw users?
Yes. Package your skill directory and publish it to ClaWHub using `openclaw plugins publish`. Add a README.md, version tag, and license before publishing. ClaWHub makes your skill discoverable to the entire OpenClaw community and handles versioning automatically.
T. Chen has built custom OpenClaw skill libraries for production deployments across logistics, fintech, and SaaS companies. Specializes in designing skill architectures that keep agent behavior predictable at scale, with a focus on parameter validation and handler reliability.