Micro-agent core. The smallest possible thing that other useful things can be built on top of.
OpenVole is a microkernel AI agent framework. It provides the agent loop and the plugin contract, and nothing else. Everything useful (reasoning, memory, tools, channels, integrations) is a Paw or a Skill built by the community.
A fresh OpenVole installation has zero tools, zero skills, zero opinions. This is by design.
```
mkdir my-agent && cd my-agent
npm init -y
npm install openvole
npx vole init
npx vole paw add @openvole/paw-brain
npx vole paw add @openvole/paw-memory
npx vole paw add @openvole/paw-dashboard
```
Edit `vole.config.json`:
```json
{
  "brain": "@openvole/paw-brain",
  "paws": [
    {
      "name": "@openvole/paw-brain",
      "allow": {
        "network": ["*"],
        "env": ["BRAIN_PROVIDER", "BRAIN_API_KEY", "BRAIN_MODEL",
                "GEMINI_API_KEY", "GEMINI_MODEL",
                "ANTHROPIC_API_KEY", "ANTHROPIC_MODEL",
                "OPENAI_API_KEY", "OPENAI_MODEL",
                "OLLAMA_HOST", "OLLAMA_MODEL"]
      }
    },
    {
      "name": "@openvole/paw-memory",
      "allow": { "env": ["VOLE_MEMORY_DIR"] }
    },
    {
      "name": "@openvole/paw-dashboard",
      "allow": { "listen": [3001], "env": ["VOLE_DASHBOARD_PORT"] }
    }
  ],
  "skills": [],
  "loop": { "maxIterations": 25, "maxContextTokens": 128000 }
}
```
Create `.env`:
```
BRAIN_PROVIDER=gemini
GEMINI_API_KEY=your-api-key
```
Run:
```
npx vole start
```
Tip: for easier access, install globally with `npm install -g openvole`, then use `vole` directly instead of `npx vole`.
Or use a preset:
```
# Basic (Brain + Memory + Dashboard)
curl -fsSL https://raw.githubusercontent.com/openvole/openvole/main/presets/basic.sh | bash
# With Telegram
curl -fsSL https://raw.githubusercontent.com/openvole/openvole/main/presets/telegram.sh | bash
# Everything
curl -fsSL https://raw.githubusercontent.com/openvole/openvole/main/presets/full.sh | bash
```
vole start (CLI)
```
                 readline prompt (vole>)
                           |
                           v
┌───────────────────────────────────────────────────────────────┐
│                          VoleEngine                           │
│                                                               │
│   Tool Registry ──── Skill Registry ──── Paw Registry         │
│        |                   |                  |               │
│   ┌──────────────────────────────────────────────────┐        │
│   │             Agent Loop (per task)                │        │
│   │                                                  │        │
│   │   BOOTSTRAP → PERCEIVE → COMPACT → THINK         │        │
│   │       |          |          |        |           │        │
│   │   Load data   Enrich    Compress   Brain         │        │
│   │   + VoleNet   context   old msgs   plans         │        │
│   │                                                  │        │
│   │   → ACT → OBSERVE → loop                         │        │
│   │      |       |                                   │        │
│   │   Execute  Record                                │        │
│   │   tools    results                               │        │
│   │                                                  │        │
│   │            Context Budget Manager                │        │
│   └──────────────────────────────────────────────────┘        │
│                                                               │
│   Task Queue ── Scheduler ── Message Bus ── Cost Tracker      │
│                                                               │
│   VoleNet (optional distributed networking)                   │
│    ├── WebSocket + Ed25519 Auth                               │
│    ├── Remote Tools + Load Balancing                          │
│    └── Memory/Session Sync + Leader Election                  │
│                                                               │
└──────┬──────────┬──────────┬──────────┬───────────────────────┘
       |          |          |          |
  [Brain Paw] [Channel]   [Tools]  [In-Process]
   paw-brain   Telegram   Browser   Compact
   (unified)   Slack      Shell     Memory
               Discord    Database  Session
                          MCP       Dashboard
```
Official Paws across four categories: Brain, Channel, Tool, and Infrastructure.
The only thing OpenVole does natively:
Bootstrap → Perceive → Compact → Think → Act → Observe → loop
| Phase | What happens |
|---|---|
| Bootstrap | Load memory, session, VoleNet context (once per task) |
| Perceive | Paws inject dynamic context (time, calendar, unread messages) |
| Compact | Compress old messages when threshold hit (frees context space) |
| Think | Context budget trim → Brain Paw calls LLM → returns plan |
| Act | Execute tool calls (local or remote via VoleNet). Rate limits enforced. |
| Observe | Record results, update memory, sync to VoleNet peers |
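The six phases above can be sketched as a minimal loop. This is an illustrative sketch only, not OpenVole's actual code; `brain` and `tools` are hypothetical stand-ins for the Brain Paw and the tool registry, and Perceive/Compact are elided:

```javascript
// Illustrative agent loop: THINK asks the brain for a plan, ACT executes a
// tool, OBSERVE records the result for the next iteration.
function runTask(task, brain, tools, maxIterations = 25) {
  const messages = [{ role: "user", content: task }]; // BOOTSTRAP: seed context
  for (let i = 0; i < maxIterations; i++) {
    // THINK: the brain returns either a final answer or a tool call
    const plan = brain(messages);
    if (plan.done) return plan.answer;
    // ACT: execute the requested tool
    const result = tools[plan.tool](plan.args);
    // OBSERVE: record the result so the next THINK sees it
    messages.push({ role: "tool", content: String(result) });
  }
  return null; // maxIterations hit without a final answer
}
```

The real loop also trims context, compacts old messages, and enforces rate limits, but the control flow has this shape.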
Paws are tool providers. They connect OpenVole to the outside world: APIs, databases, browsers, messaging platforms. Each Paw runs in an isolated subprocess with capability-based permissions.
```
npx vole paw add @openvole/paw-telegram
```
Skills are behavioral recipes. A skill is a folder with a SKILL.md file: no code, no build step. Compatible with ClawHub (13,000+ skills).
```
npx vole clawhub install summarize
```
```
---
name: summarize
description: "Summarize text, articles, or documents"
---
When asked to summarize content:
1. Identify the key points
2. Condense into 3-5 bullet points
...
```
The Brain is a Paw; the core is LLM-ignorant. Use `@openvole/paw-brain`, a single unified brain paw that supports all providers:
- Anthropic Claude → `BRAIN_PROVIDER=anthropic`
- OpenAI → `BRAIN_PROVIDER=openai`
- Google Gemini → `BRAIN_PROVIDER=gemini`
- xAI Grok → `BRAIN_PROVIDER=xai`
- Ollama (local) → `BRAIN_PROVIDER=ollama`
Auto-detects provider from available API keys if BRAIN_PROVIDER is not set. Provider-specific env vars (e.g. GEMINI_API_KEY) take precedence over generic BRAIN_API_KEY.
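The auto-detection rule can be sketched as a simple lookup. The detection *order* below is an assumption for illustration (the source only says detection happens when `BRAIN_PROVIDER` is unset):

```javascript
// Hypothetical sketch of provider auto-detection from environment variables.
// The priority order here is illustrative, not OpenVole's documented order.
function detectProvider(env) {
  if (env.BRAIN_PROVIDER) return env.BRAIN_PROVIDER; // explicit setting wins
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  if (env.OPENAI_API_KEY) return "openai";
  if (env.GEMINI_API_KEY) return "gemini";
  if (env.XAI_API_KEY) return "xai";
  if (env.OLLAMA_HOST) return "ollama";
  return null; // no provider configured
}
```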
Legacy single-provider paws (`paw-ollama`, `paw-claude`, `paw-openai`, `paw-gemini`, `paw-xai`) are deprecated but still available.
| Tool | Purpose |
|---|---|
| `discover_tools` | Search available tools by intent (BM25 ranking) |
| `schedule_task` | Brain creates recurring tasks at runtime |
| `cancel_schedule` / `list_schedules` | Manage schedules (persistent across restarts) |
| `skill_read` | Load skill instructions on demand |
| `heartbeat_read` / `heartbeat_write` | Manage recurring jobs |
| `workspace_write` / `workspace_read` | Read/write agent scratch space |
| `vault_store` / `vault_get` / `vault_list` | Encrypted key-value store |
| `web_fetch` | Lightweight URL fetching (GET/POST with headers, body) |
| `spawn_agent` | Spawn sub-agent with named profile and tool restrictions |
| `spawn_remote_agent` | Delegate task to a remote VoleNet peer |
| `list_instances` / `get_remote_result` | Query VoleNet peers and remote task status |
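`discover_tools` ranks candidates with BM25. For intuition, here is a toy BM25 scorer over tool descriptions; this is a generic textbook implementation, not the engine's code:

```javascript
// Toy BM25: rank documents (tool descriptions) against a query string.
// Returns document indices, best match first. k1/b are standard defaults.
function bm25Rank(query, docs, k1 = 1.5, b = 0.75) {
  const tok = (s) => s.toLowerCase().split(/\W+/).filter(Boolean);
  const corpus = docs.map(tok);
  const avgLen = corpus.reduce((s, d) => s + d.length, 0) / corpus.length;
  const scores = corpus.map((doc) => {
    let score = 0;
    for (const term of tok(query)) {
      const df = corpus.filter((d) => d.includes(term)).length; // doc frequency
      if (df === 0) continue;
      const idf = Math.log(1 + (corpus.length - df + 0.5) / (df + 0.5));
      const tf = doc.filter((t) => t === term).length; // term frequency
      score += (idf * tf * (k1 + 1)) /
               (tf + k1 * (1 - b + (b * doc.length) / avgLen));
    }
    return score;
  });
  return scores.map((s, i) => [i, s]).sort((a, b2) => b2[1] - a[1]).map(([i]) => i);
}
```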
Periodic wake-up: the Brain checks HEARTBEAT.md and decides what to do. No user input needed. Uses cron expressions:
```json
{ "heartbeat": { "enabled": true, "cron": "*/30 * * * *" } }
```
Brain-created schedules use cron expressions and are stored in `.openvole/schedules.json`, surviving restarts. The heartbeat is recreated from config on each startup (`intervalMinutes` is auto-converted to cron).
```
"0 13 * * *"    → daily at 1 PM UTC
"*/30 * * * *"  → every 30 minutes
"0 9 * * 1"     → every Monday at 9 AM
```
Persistent memory with daily logs scoped by task source:
```
.openvole/paws/paw-memory/
├── MEMORY.md      # Shared long-term facts
├── user/          # CLI session logs
├── paw/           # Telegram/Slack logs
└── heartbeat/     # Heartbeat logs
```
Conversation continuity across messages. Auto-expiring transcripts per session ID. Session data lives in .openvole/paws/paw-session/, organized by session ID (e.g., cli:default/, telegram:123/).
Bridge 1000+ community MCP servers into the tool registry via paw-mcp. MCP tools appear alongside Paw tools; the Brain doesn't know the difference.
- MCP tools are auto-discovered at runtime as MCP servers connect
- Late tool registration: tools appear after the engine starts, not at boot
- MCP config lives in `.openvole/paws/paw-mcp/servers.json` (not in the installed package)
Example `.openvole/paws/paw-mcp/servers.json`:
```json
{
  "servers": [
    {
      "name": "resend",
      "command": "npx",
      "args": ["-y", "resend-mcp"],
      "env": { "RESEND_API_KEY": "$RESEND_API_KEY" }
    }
  ]
}
```
Any Paw can discover and register tools at runtime using the register_tools mechanism, not just MCP. Tools registered this way appear in the tool registry like any other tool. This is a generic capability of the engine, not an MCP-specific feature.
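The `"$RESEND_API_KEY"` value in the example above implies that `$NAME` references are expanded from the parent environment before the server is spawned. A hypothetical helper for that expansion (the function name and exact semantics are assumptions, not OpenVole's API):

```javascript
// Expand "$NAME" values in a server config's env block from the parent
// process environment. Literal values pass through unchanged.
function expandEnv(env, source = process.env) {
  const out = {};
  for (const [key, value] of Object.entries(env)) {
    out[key] = value.startsWith("$") ? source[value.slice(1)] ?? "" : value;
  }
  return out;
}
```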
Each Paw has its own local config directory at `.openvole/paws/<name>/`. The installed npm package stays immutable; all user configuration lives in the local paw directory.
```
.openvole/paws/
├── paw-memory/    # memory data (MEMORY.md, daily logs)
├── paw-session/   # session transcripts
└── paw-mcp/       # MCP config (servers.json)
```
Example: paw-mcp reads its servers.json from .openvole/paws/paw-mcp/, not from node_modules/.
Per-task LLM cost estimation with provider pricing tables. Tracks input/output tokens and estimates USD cost per call. Supports auto-detection of local Ollama (free) vs cloud models.
Configure in `loop`:
- `costTracking`: `"auto"` (default) | `"enabled"` | `"disabled"`
- `costAlertThreshold`: warn when a task exceeds $X (e.g. `0.50`)
Tasks support priority levels (urgent, normal, low) and dependencies (dependsOn: [taskId]). Urgent tasks jump the queue. Dependent tasks wait until all prerequisites complete.
Spawn specialized sub-agents with named profiles, tool restrictions, and context passing:
```json
{
  "agents": {
    "researcher": {
      "role": "Research Agent",
      "allowTools": ["web_fetch", "memory_search", "workspace_write"],
      "maxIterations": 15
    }
  }
}
```
Sub-agents support 2-level depth, `wait_for_agents` for parallel coordination, and `agent:completed` events for async notification.
When an embedding provider is available (Ollama, OpenAI, or Gemini), memory_search uses hybrid retrieval, combining BM25 keyword matching with vector semantic similarity via Reciprocal Rank Fusion. It falls back to keyword-only search when no embeddings are configured. The vector index is disposable; markdown files remain the source of truth.
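Reciprocal Rank Fusion itself is simple: each ranking contributes `1/(k + rank)` per document, and the sums are re-sorted. A generic sketch (the standard algorithm, not OpenVole-specific code; `k = 60` is the constant commonly used in the RRF literature):

```javascript
// Merge multiple rankings (arrays of doc IDs, best first) into one fused
// ranking via Reciprocal Rank Fusion.
function rrf(rankings, k = 60) {
  const scores = new Map();
  for (const ranking of rankings) {
    ranking.forEach((docId, rank) => {
      scores.set(docId, (scores.get(docId) || 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1]) // highest fused score first
    .map(([id]) => id);
}
```

A document ranked well by both the keyword list and the vector list accumulates score from each, which is why hybrid retrieval favors results the two methods agree on.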
Optional container-level isolation for paw subprocesses (stronger than the default Node.js `--permission` sandbox):
```json
{
  "security": {
    "docker": {
      "enabled": true,
      "image": "node:20-slim",
      "memory": "512m",
      "network": "none"
    }
  }
}
```
Connect multiple OpenVole instances across machines. Remote tools, shared memory, brain sharing: all authenticated with Ed25519 signatures.
```json
{
  "net": {
    "enabled": true,
    "instanceName": "coordinator",
    "role": "coordinator",
    "port": 9700,
    "peers": [
      { "url": "http://worker:9701", "trust": "full" }
    ],
    "share": { "tools": true, "memory": true }
  }
}
```
Key capabilities:
- Remote tool execution: tools on remote peers appear in the local registry, transparent to the Brain
- Peer-specific targeting: `us-monitor/shell_exec` vs `eu-monitor/shell_exec` when multiple peers share a tool
- Brain sharing: brainless workers delegate thinking to a coordinator (`brainSource: "remote"`)
- Memory sync: write propagation and cross-peer search
- Session sync: shared conversations across devices
- Leader election: automatic failover, heartbeat coordination
- Load balancing: tasks route to the least-loaded peer
- 8 architecture patterns: from single-brain + workers to autonomous swarms
```
vole net init my-instance          # Generate Ed25519 identity
vole net show-key                  # Share public key
vole net trust "vole-ed25519 ..."  # Trust a peer
vole net status                    # Show network status
```
See the VoleNet documentation for architecture patterns and setup guide.
OpenVole's own skill registry. Search, install, and publish skills via CLI:
```
vole skill search summarize      # Search VoleHub
vole skill install summarize     # Install with SHA-256 verification
vole skill publish ./my-skill    # Prepare for publishing
```
Prevent runaway costs with configurable limits on LLM calls, tool executions, and task enqueue rates.
Per-source tool filtering restricts what Telegram users can trigger:
```json
{ "toolProfiles": { "paw": { "deny": ["shell_exec", "fs_write"] } } }
```
Customize agent behavior with optional markdown files in `.openvole/`:
| File | Purpose |
|---|---|
| `BRAIN.md` | Custom system prompt; overrides the default system prompt entirely |
| `SOUL.md` | Agent personality and tone |
| `USER.md` | User profile and preferences |
| `AGENT.md` | Operating rules and constraints |
The Brain Paw loads these on startup, and the core builds the system prompt from them.
Agent scratch space at `.openvole/workspace/` for files, drafts, API docs, and downloaded content. Protected against path traversal. Tools: `workspace_write`, `workspace_read`, `workspace_list`, `workspace_delete`.
Encrypted key-value store at `.openvole/vault.json`:
- AES-256-GCM encryption when `VOLE_VAULT_KEY` is set
- Write-once semantics: prevents hallucination overwrites
- Metadata support: attach service, handle, URL context to entries
- `vault_list` never exposes values
Lightweight URL fetching via the `web_fetch` core tool: GET/POST with custom headers and body. No browser Paw is needed for simple HTTP requests.
Core manages the full context pipeline via ContextBudgetManager:
- Token-aware budget: estimates tokens (4 chars/token for text, 2 chars/token for JSON) and calculates a budget breakdown per iteration
- Priority-based trimming: old tool results, old errors, and old messages are trimmed in order; the first user message and the last 2 brain responses are never trimmed
- Compaction trigger: at 75% token usage, runs `paw-compact` to summarize old messages
- System prompt: built by core from BRAIN.md + identity files + skills + tools + memory (static-first ordering for provider prompt caching)
- Image handling: extracts base64 images from tool results and passes them to the Brain as provider-native image blocks
- Stuck loop detection: 3-tier escalation (warn at 5, dampen at 10, circuit breaker at 15 identical tool calls)
Configure via `loop.maxContextTokens` (default: 128000) and `loop.responseReserve` (default: 4000).
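The character-based token estimate described above is a cheap heuristic, directly from the stated ratios. A sketch (illustrative; the function name is hypothetical):

```javascript
// Estimate token count from character count: ~4 chars/token for plain text,
// ~2 chars/token for JSON (denser punctuation and quoting).
function estimateTokens(content) {
  const isJson = typeof content !== "string";
  const text = isJson ? JSON.stringify(content) : content;
  return Math.ceil(text.length / (isJson ? 2 : 4));
}
```

Heuristics like this overestimate slightly for JSON, which is the safe direction for budget enforcement.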
Real-time web UI powered by paw-dashboard, another Paw you install like any other. It shows paws, tools, skills, tasks, schedules, and live events.
```
npx vole paw add @openvole/paw-dashboard
```
All paws live in PawHub and are installed via npm.
| Paw | Purpose |
|---|---|
| `paw-brain` | Unified multi-provider brain (Anthropic, OpenAI, Gemini, xAI, Ollama) |
| `paw-ollama` | (deprecated) Local LLM via Ollama |
| `paw-claude` | (deprecated) Anthropic Claude models |
| `paw-openai` | (deprecated) OpenAI models |
| `paw-gemini` | (deprecated) Google Gemini models |
| `paw-xai` | (deprecated) xAI Grok models |
| Paw | Purpose |
|---|---|
| `paw-telegram` | Telegram bot channel |
| `paw-slack` | Slack bot channel |
| `paw-discord` | Discord bot channel |
| `paw-whatsapp` | WhatsApp bot channel |
| `paw-msteams` | Microsoft Teams channel |
| `paw-voice-call` | Voice calls via Twilio (inbound + outbound) |
| Paw | Purpose |
|---|---|
| `paw-browser` | Browser automation (Puppeteer) |
| `paw-shell` | Shell command execution |
| `paw-filesystem` | File system operations |
| `paw-mcp` | MCP server bridge |
| `paw-email` | Email sending (SMTP/IMAP) |
| `paw-resend` | Email via Resend API |
| `paw-github` | GitHub integration |
| `paw-calendar` | Google Calendar integration |
| `paw-tts` | Text-to-speech (ElevenLabs, OpenAI) |
| `paw-stt` | Speech-to-text (OpenAI Whisper) |
| `paw-computer` | Desktop automation (mouse, keyboard, screen) |
| `paw-database` | PostgreSQL, MySQL, SQLite queries |
| `paw-scraper` | Structured web data extraction |
| `paw-pdf` | Read, merge, split PDFs |
| `paw-image` | Resize, crop, watermark, compress images |
| `paw-social` | Twitter/X and LinkedIn posting |
| Paw | Purpose |
|---|---|
| `paw-memory` | Persistent memory with source isolation + hybrid semantic/keyword search |
| `paw-session` | Session/conversation management |
| `paw-compact` | Context compaction: heuristic (default) + optional LLM summarization |
| `paw-dashboard` | Real-time web dashboard |
Install from npm:
```
npx vole paw add @openvole/paw-telegram
npx vole paw add @openvole/paw-browser
```
```
npx vole init                            # Initialize project
npx vole start                           # Start agent (interactive)
npx vole run "summarize my emails"       # Single task (headless: no dashboard/channels)
npx vole paw add @openvole/paw-telegram  # Install a Paw
npx vole paw list                        # List loaded Paws
npx vole skill create email-triage       # Create a local skill
npx vole skill search summarize          # Search VoleHub
npx vole skill install summarize         # Install from VoleHub
npx vole clawhub install summarize       # Install from ClawHub
npx vole clawhub search email            # Search ClawHub
npx vole tool list                       # List all tools
npx vole tool call list_schedules        # Call a tool directly (no Brain)
```
Enabled by default. Every subprocess Paw runs with Node.js permission model restrictions:
- Read access: the Paw's own package directory, project root, `.openvole/`, `node_modules/`, OS temp directory, parent directories (for module resolution)
- Write access: `.openvole/paws/<paw-name>/` (the paw's own data directory), OS temp directory
- Network: blocked by default; allowed when the paw has `network` or `listen` permissions granted
- Child processes: blocked by default; allowed only when the user grants `childProcess: true` in config
- Additional paths: grant via `allow.filesystem` in paw config or `security.allowedPaths` globally
- Opt-out: set `security.sandboxFilesystem: false` to disable (not recommended)
```json
{
  "security": {
    "sandboxFilesystem": true,
    "allowedPaths": ["/home/user/projects"]
  },
  "paws": [
    {
      "name": "@openvole/paw-shell",
      "allow": {
        "filesystem": ["./"],
        "env": ["VOLE_SHELL_ALLOWED_DIRS"],
        "childProcess": true
      }
    }
  ]
}
```
Paws that need child process access (paw-shell, paw-browser, paw-mcp) must have it explicitly granted; this prevents arbitrary code execution from untrusted paws.
Important: non-Node child processes (shell commands, Chrome, etc.) are not restricted by the filesystem sandbox, because Node's permission model only applies to Node processes. Granting `childProcess: true` effectively gives the paw unrestricted filesystem access through spawned commands. Only grant it to paws you trust.
If your Paw spawns external processes (`child_process.exec()`, `spawn()`, launching binaries such as Puppeteer spawning Chrome, or starting server processes such as MCP servers), users will need to grant `childProcess: true` in their config for your Paw. Document this in your Paw's README so users know to add it. Paws that only make HTTP requests, read/write files, or communicate over IPC do not need it.
Every Paw declares what it needs in its manifest. The user grants permissions in config. Effective permissions are the intersection: a Paw can only access what it requested AND what the user approved.
| Layer | What it controls |
|---|---|
| `network` | Outbound network domains |
| `listen` | Port binding |
| `filesystem` | File/directory access paths |
| `env` | Environment variables passed to subprocess |
| `childProcess` | Ability to spawn child processes |
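The intersection rule for effective permissions can be sketched in one line per capability list (illustrative only; how OpenVole actually merges nested allow-lists per layer is not specified here):

```javascript
// Effective grant = what the Paw requested ∩ what the user approved.
// Works on flat capability lists, e.g. domain lists or env-var names.
function effectivePermissions(requested, approved) {
  return requested.filter((cap) => approved.includes(cap));
}
```

The important property: neither side can unilaterally widen access. A Paw that requests nothing gets nothing even if the user approves everything, and vice versa.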
| Concern | Approach |
|---|---|
| Paw isolation | Subprocess sandbox with Node.js --permission flags |
| Credentials | Each Paw owns its secrets; core never sees them |
| Runaway agent | maxIterations + rate limiting + confirmBeforeAct |
| Channel safety | Tool profiles restrict which tools each task source can use |
| Vault | AES-256-GCM encryption, write-once semantics |
A single `vole.config.json`: plain JSON, no imports:
```json
{
  "brain": "@openvole/paw-brain",
  "paws": ["@openvole/paw-brain", "@openvole/paw-memory"],
  "skills": ["clawhub/summarize"],
  "loop": { "maxIterations": 25, "maxContextTokens": 128000 },
  "heartbeat": { "enabled": false, "cron": "*/30 * * * *" },
  "toolProfiles": { "paw": { "deny": ["shell_exec"] } }
}
```
OpenVole loads OpenClaw skills natively: same SKILL.md format, same `metadata.openclaw.requires` fields. Install directly from ClawHub:
```
npx vole clawhub install summarize
```
```
.openvole/
├── paws/
│   ├── paw-memory/          # memory data
│   │   ├── MEMORY.md
│   │   └── user/, paw/, heartbeat/
│   ├── paw-session/         # session transcripts
│   │   └── cli:default/, telegram:123/
│   ├── paw-brain/           # brain paw data
│   │   └── BRAIN.md         # system prompt (scaffolded on first run)
│   └── paw-mcp/             # MCP config
│       └── servers.json
├── workspace/               # agent scratch space
├── skills/                  # local and clawhub skills
├── vault.json               # encrypted key-value store
├── schedules.json           # persistent cron schedules
├── SOUL.md                  # agent personality
├── USER.md                  # user profile
├── AGENT.md                 # operating rules
└── HEARTBEAT.md             # recurring job definitions
```
Both are open-source AI agent frameworks. Different philosophies, many shared concepts.
| | OpenVole | OpenClaw |
|---|---|---|
| Philosophy | Microkernel: empty core, everything is a plugin | Batteries-included: 25 built-in tools |
| Core size | ~60KB | ~8MB |
| Skills | SKILL.md (same format, compatible) | SKILL.md |
| Skill marketplace | ClawHub-compatible (`vole clawhub install`) | ClawHub (13K+ skills) |
| Skill loading | Progressive on-demand | Progressive on-demand |
| Brain/LLM | External Paw; core is LLM-ignorant | Configurable provider in core |
| Brain options | Unified paw-brain (Ollama, Claude, OpenAI, Gemini, xAI) | Multi-provider with fallback chains |
| Heartbeat | HEARTBEAT.md + cron | HEARTBEAT.md + cron |
| Memory | Source-isolated (user/paw/heartbeat scoped) | Shared (no source isolation) |
| Identity files | BRAIN.md, SOUL.md, USER.md, AGENT.md | SOUL.md, USER.md, AGENTS.md |
| MCP support | Via Paw with auto-discovery + late registration | Native in core |
| Channels | 6 (Telegram, Slack, Discord, WhatsApp, Teams, Voice Call) | 20+ (WhatsApp, iMessage, Signal, etc.) |
| Plugin isolation | Node.js permission sandbox (default on) + capability permissions | Optional Docker sandbox |
| Tool profiles | Per-source deny/allow lists | Channel sandboxing |
| Scheduling | Cron-based, persistent, Brain-initiated | Cron + heartbeat |
| Sessions | Per-session transcripts with auto-expiry | Built-in session keys |
| Vault | AES-256 encrypted, write-once, metadata | N/A (env vars) |
| Dashboard | Real-time web UI | Gateway web UI |
| CLI | `vole` (start/run/tool call/clawhub/skill) | `openclaw` |
| Config | Single JSON file | Single JSON file |
OpenVole is a newborn: a tiny vole just getting started. We share the same skill format, the same heartbeat pattern, and the same MCP ecosystem as OpenClaw. Skills written for one work on the other.
We're building something small, modular, and community-driven. If you like the microkernel approach, where every piece is a Paw you can swap, extend, or build yourself, come join us. Try it out, build a Paw, write a Skill, break things, and help this little vole grow.
See VOLECONTEXT.md for how OpenVole builds, enriches, and compresses context, including the lifecycle hooks (bootstrap, perceive, compact, think, act, observe) and how paws inject data into the Brain's prompt.
We welcome contributions! See CONTRIBUTING.md for guidelines.
For contributing Paws, see the PawHub CONTRIBUTING.md.
If it connects to something, it's a Paw. If it describes behavior, it's a Skill. If the agent calls it, it's a Tool. If it's none of these, it probably doesn't belong in OpenVole.