Compile and run LLM agent skills across heterogeneous models and harnesses
SkVM is a compilation and runtime system that makes LLM agent skills portable across heterogeneous models and harnesses. It has four major parts:
- Profiling — measure a model+harness against pre-defined primitive capabilities
- AOT-Compilation — compile a skill through multiple passes in the AOT compiler
- JIT-Optimization — improve runtime speed (JIT-boost) and skill content (JIT-optimize)
- Benchmark — evaluate original, compiled, and optimized skills across tasks, conditions, and models
Reference: SkVM: Revisiting Language VM for Skills across Heterogenous LLMs and Harnesses — https://arxiv.org/abs/2604.03088
- 2026-05 — New claude-code adapter (drives the claude -p CLI). Note: heavy headless usage may hit account rate limits or usage-terms issues.
- 2026-05 — Upload and optimize a skill with SkVM in your browser: SkVM website.
# curl one-liner (macOS / Linux)
curl -fsSL https://skillvm.ai/install.sh | sh
# or via npm (any platform with Node ≥ 18; postinstall fetches the matching binary)
npm i -g @ipads-skvm/skvm

The installer drops a standalone binary at ~/.local/share/skvm/bin/skvm (symlinked into ~/.local/bin/skvm) and bundles a private, isolated headless agent runtime used internally by skvm jit-optimize — it is fully self-contained and does not touch any agent or CLI you may have installed globally.
Agent-facing skills ship inside the install. Copy them into your agent harness's skills directory to teach it how to drive skvm:
# OpenClaw
cp -r ~/.local/share/skvm/skills/skvm-jit ~/.openclaw/workspace/skills/
cp -r ~/.local/share/skvm/skills/skvm-general ~/.openclaw/workspace/skills/
# Hermes Agent
cp -r ~/.local/share/skvm/skills/skvm-jit ~/.hermes/skills/
cp -r ~/.local/share/skvm/skills/skvm-general ~/.hermes/skills/
# pi Agent
mkdir -p ~/.pi/agent/skills
cp -r ~/.local/share/skvm/skills/skvm-jit ~/.pi/agent/skills/
cp -r ~/.local/share/skvm/skills/skvm-general ~/.pi/agent/skills/

- skvm-jit — post-task JIT optimization skill for submitting conversation logs to skvm jit-optimize
- skvm-general — drives profile/aot-compile/bench/proposals on behalf of a user
Configure your agent harness, provider, and API key via the interactive wizard:
skvm config init

This writes $SKVM_CACHE/skvm.config.json (default ~/.skvm/skvm.config.json). For non-interactive setups, see docs/providers.md.
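For orientation, a config produced by the wizard might look roughly like the sketch below. The field names here are illustrative assumptions only, not the actual schema; docs/providers.md is the authoritative reference.

```json
{
  "adapter": "bare-agent",
  "provider": "anthropic",
  "model": "anthropic/claude-sonnet-4.6",
  "apiKeyEnv": "ANTHROPIC_API_KEY"
}
```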
Adapter config mode. External harnesses run in one of two modes: managed|native (default managed):
- managed — skvm provisions a fresh, minimal config inside the sandbox (e.g. a new openclaw agent, or a generated hermes config.yaml). Clean baseline, no reliance on host state.
- native — skvm clones the user's existing harness config (e.g. an openclaw source agent under ~/.openclaw/agents/<name>, or a hermes active profile). Requires the harness to be set up locally first.
Model id format. Model parameters on the CLI take the form <provider>/<model-id> — the leading <provider> selects the model service provider, while <model-id> is how that provider refers to the model. For OpenRouter that's three segments, e.g. openrouter/qwen/qwen3.5-35b-a3b; for the Anthropic native API it's two, e.g. anthropic/claude-sonnet-4.6.
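The split rule can be sketched as a pair of shell helpers (hypothetical names, for illustration only, not part of skvm): the provider is everything before the first /, and the model id is everything after it, even when the model id itself contains slashes.

```shell
# Illustrative helpers: split <provider>/<model-id> at the FIRST slash.
provider_of() { printf '%s\n' "${1%%/*}"; }
model_id_of() { printf '%s\n' "${1#*/}"; }

provider_of openrouter/qwen/qwen3.5-35b-a3b   # -> openrouter
model_id_of openrouter/qwen/qwen3.5-35b-a3b   # -> qwen/qwen3.5-35b-a3b
provider_of anthropic/claude-sonnet-4.6       # -> anthropic
```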
Writes a target capability profile to ~/.skvm/profiles/.
If your target model + adapter pair is already covered by the pre-built profiles shipped in skvm-data/profiles/, you can copy the cached result into your local profile cache and skip skvm profile entirely:
mkdir -p ~/.skvm/profiles
cp -R skvm-data/profiles/. ~/.skvm/profiles/

skvm profile \
--adapter=bare-agent \
--model=<provider>/<model-id>
# e.g. --model=anthropic/claude-sonnet-4.6
# e.g. --model=openrouter/qwen/qwen3.5-35b-a3b

With the default --concurrency=1, this example typically takes about 20 minutes for one full run. If you want it to finish faster, increase --concurrency to profile more primitives in parallel.
The compiler rewrites the skill to match the target's capabilities. A cached profile for the same --model + --adapter pair must exist (run skvm profile first, or use skvm pipeline which profiles automatically).
skvm aot-compile \
--skill=path/to/skill-dir \
--model=<provider>/<model-id> \
--adapter=bare-agent \
--pass=1 \
--compiler-model=<provider>/<model-id>

Compiled variants are written under ~/.skvm/proposals/aot-compile/<adapter>/<safeModel>/<skillName>/<passTag>/ by default.
The optimizer LLM derives tasks from the skill itself, then loops edit → rerun → score.
By default, synthetic mode generates 2 training tasks and 1 held-out test task.
skvm jit-optimize \
--skill=path/to/skill-dir \
--task-source=synthetic \
--task-concurrency=3 \
--target-adapter=bare-agent \
--optimizer-model=<provider>/<model-id> \
--rounds=1 \
--target-model=<provider>/<model-id>

Results are written under ~/.skvm/proposals/jit-optimize/<adapter>/<safeTargetModel>/<skillName>/<timestamp>/ by default.
No rerun, just diagnose and edit. Good for post-mortems and for the skvm-jit post-task optimization hook.
skvm jit-optimize \
--skill=path/to/skill-dir \
--task-source=log \
--target-adapter=bare-agent \
--logs=path/to/session.jsonl \
--optimizer-model=<provider>/<model-id> \
--target-model=<provider>/<model-id>

skvm proposals list # CLI listing
skvm proposals show <id> # CLI detail view
skvm proposals accept <id> # CLI accept
skvm proposals serve # Web review UI

SkVM keeps all runtime artifacts — cached profiles, proposal trees, bench and compile logs — under a single cache root:
~/.skvm/
├── profiles/ # Cached target capability profiles
├── log/ # Profile, compile, bench, and runtime logs
└── proposals/ # AOT-compile, jit-boost, jit-optimize outputs
The cache is user-global and shared across every directory you invoke skvm from, so profiles cached in one project are reused everywhere. Override the location via:
- --skvm-cache=<path> flag (one-off)
- SKVM_CACHE env var (persistent), e.g. export SKVM_CACHE=/mnt/fast/skvm
Individual subdirectories can also be pointed elsewhere with SKVM_PROFILES_DIR, SKVM_LOGS_DIR, and SKVM_PROPOSALS_DIR.
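For example, the overrides can be combined in a shell profile (the paths below are illustrative, not defaults):

```shell
# Move the whole cache root to a fast disk...
export SKVM_CACHE=/mnt/fast/skvm
# ...while redirecting just the profile cache, independently of the root.
export SKVM_PROFILES_DIR=/mnt/fast/skvm-profiles
```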
The benchmark skills, tasks, and pre-built profiles live in a separate Git submodule (SJTU-IPADS/SkVM-data). Clone it if you plan to run skvm bench and want to use the bundled skills/tasks directly:
git submodule update --init # or: git clone --recurse-submodules

This populates the skvm-data/ directory:
skvm-data/
├── skills/ # 108 skill directories (each contains a SKILL.md)
├── tasks/ # 216 task directories (each contains a task.json)
└── profiles/ # Pre-built target capability profiles
├── bare-agent/
└── openclaw/
skvm bench resolves skills and tasks from skvm-data/ by default. Override the location via:
- --skvm-data-dir=<path> flag (one-off)
- SKVM_DATA_DIR env var (persistent)
Commands that take an explicit --skill=<path> or --task=<path> do not need the submodule — they work with any directory on disk.
skvm-data/profiles/ already includes pre-built profiles for some model + adapter combinations. If the pair you need is already there, copy skvm-data/profiles/ into your profile cache directory (default: ~/.skvm/profiles/, or SKVM_PROFILES_DIR if set) and you can skip running skvm profile for that target. See the profile list in the profiling section above for the currently bundled combinations.
mkdir -p ~/.skvm/profiles
cp -R skvm-data/profiles/. ~/.skvm/profiles/

- docs/usage.md — full command reference: profile, aot-compile, run, bench, jit-optimize, proposals, and more
- docs/architecture.md — subsystem map, data flow, and on-disk layout
- docs/grade-protocols.md — grader protocol reference for custom grade.py task graders
- Paper: https://arxiv.org/abs/2604.03088
If you use SkVM in your research, please cite:
@article{chen2026skvm,
title={SkVM: Revisiting Language VM for Skills across Heterogenous LLMs and Harnesses},
author={Chen, Le and Feng, Erhu and Xia, Yubin and Chen, Haibo},
journal={arXiv preprint arXiv:2604.03088},
year={2026}
}