SkVM

Write Once, Run Everywhere Efficiently.

SkVM does for agent skills what the JVM does for bytecode and TVM does for tensor programs: profile target capabilities, AOT-compile portable skill variants, and optimize execution online from runtime traces.

Upload a Skill, Optimize with SkVM

The Problem

Skills Shouldn't Break
When You Switch Models and Harnesses

LLM agent skills are tightly coupled to specific models. Moving between models — or even model versions — means rewriting prompts by hand. That doesn't scale.

Skills Break Across Models

Different models have different capability profiles. A skill requiring strong code generation fails silently on models that can't reliably edit files or parse structured output.

Manual Tuning Doesn't Scale

N skills × M models = N×M manual adaptations. Every model update invalidates previous prompt engineering work. The combinatorial explosion makes this untenable.

No Runtime Learning

Static prompts never improve from execution. Repeated code patterns aren't cached. There's no feedback loop from production runs back into skill optimization.

How It Works

Three Stages. Full Automation.

SkVM takes a raw skill and a target model, then profiles, AoT-compiles, and optimizes the execution path end to end.

Stage 01 — Profile

Map Model Capabilities

SkVM profiles each model against 26 primitive capabilities — code generation, tool use, reasoning, instruction following, and more. The current site snapshot is backed by real latest.json capability profiles collected from multiple model and harness combinations.



Reading the latest model and harness primitive capability profiles.
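As an illustration of what a capability profile enables, here is a minimal Python sketch. The per-primitive pass rates and the exact schema of latest.json are assumptions for illustration, not the real format; primitive IDs follow the gen.* / tool.* / reason.* / follow.* namespaces from the catalog.

```python
# Hypothetical profiler output: primitive ID -> measured pass rate.
# (The real latest.json schema may differ; values here are made up.)
profile = {
    "gen.code.python": 0.92,
    "gen.code.refactor": 0.61,
    "tool.file.edit": 0.55,
    "reason.multi_step": 0.78,
    "follow.format.json": 0.97,
}

def weakest(profile, k=2):
    """Primitives most likely to need compile-time compensation."""
    return sorted(profile, key=profile.get)[:k]

print(weakest(profile))  # ['tool.file.edit', 'gen.code.refactor']
```

A downstream compiler pass can consume exactly this kind of ranking to decide where a skill needs scaffolding.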

Stage 02 — AoT Compile

Compile for Capability, Environment, and Parallelism

The 3-pass AOT compiler analyzes what the skill needs versus what the model provides, then transforms the skill to compensate. Gap analysis, environment binding, and concurrency extraction — all automated.

Pass 1 — Gap analysis: SCR extraction → gap detection → substitution & compensation
Pass 2 — Env binding: dependency manifest → presence check → env-bind script
Pass 3 — Parallelism: workflow decomposition → DAG → DLP / ILP / TLP parallelism
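The core of pass 1 can be sketched in a few lines of Python. The profile format, requirement thresholds, and compensation strings below are illustrative assumptions, not SkVM's actual API.

```python
# Pass-1 gap analysis sketch: compare the primitives a skill requires
# against the model's measured scores, and attach a compensation for
# each gap. All names and thresholds here are hypothetical.
profile = {"gen.code.python": 0.92, "tool.file.edit": 0.55}
requirements = {"gen.code.python": 0.80, "tool.file.edit": 0.70}

COMPENSATIONS = {
    # e.g. replace free-form file edits with a stricter patch-based protocol
    "tool.file.edit": "substitute: emit unified diffs, apply via patch tool",
}

def plan_compensations(profile, requirements):
    """Map each under-provided primitive to a compensating transform."""
    plan = {}
    for prim, need in requirements.items():
        if profile.get(prim, 0.0) < need:
            plan[prim] = COMPENSATIONS.get(prim, "prepend: step-by-step scaffold")
    return plan

print(plan_compensations(profile, requirements))
# {'tool.file.edit': 'substitute: emit unified diffs, apply via patch tool'}
```

Passes 2 and 3 follow the same shape: a declarative manifest in, a concrete transform (env-bind script, parallel DAG) out.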
Stage 03 — Optimize

Learn at Runtime

JIT-boost solidifies repetitive code patterns to bypass LLM calls entirely. JIT-optimize rewrites skill content based on execution traces. Autotune closes the loop with self-contained evaluation cycles.

code solidification · skill rewriting · autotune loop
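The solidification idea can be shown with a toy cache. Assume a trace is a (prompt, generated code) pair: once the same prompt has produced stable output often enough, replay the cached code instead of calling the LLM. The threshold and keying below are illustrative, not SkVM's actual policy.

```python
# JIT-boost-style solidification sketch (illustrative thresholds/keys).
from collections import Counter

SOLIDIFY_AFTER = 3
seen = Counter()
solidified = {}

def run(prompt, call_llm):
    if prompt in solidified:            # hot path: no LLM call
        return solidified[prompt]
    code = call_llm(prompt)
    seen[prompt] += 1
    if seen[prompt] >= SOLIDIFY_AFTER:  # pattern is stable; cache it
        solidified[prompt] = code
    return code

calls = 0
def fake_llm(prompt):
    global calls
    calls += 1
    return "print('parsed CSV')"

for _ in range(5):
    run("parse this CSV", fake_llm)
print(calls)  # 3: the last two runs bypass the LLM entirely
```

A real implementation would also have to invalidate the cache when inputs drift, which is where the trace-driven autotune loop comes in.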
Results

Measured Across Models

Benchmark scores before and after SkVM optimization. Weaker models see the largest gains.

# Profile a model + harness
skvm profile --model=openrouter/qwen/qwen3-30b-a3b-instruct-2507 --adapter=bare-agent

# AOT-compile a skill against that profile
skvm aot-compile --skill=path/to/your/SKILL.md --model=openrouter/qwen/qwen3.5-35b-a3b

# Autotune the skill with synthetic tasks
skvm jit-optimize --skill=path/to/skill-dir --task-source=synthetic \
  --optimizer-model=xty/glm-5.1 --target-model=xty/glm-5.1
Architecture

Under the Hood

A modular pipeline connecting profiler, compiler, and runtime through well-defined interfaces.

Profiler: 26 microbenchmarks · L3→L1 TCP
AOT Compiler: 3-pass · gap → env → DAG
Variant Runtime + Agent: JIT hooks · adapter interface
Primitive Catalog: gen.* · tool.* · reason.* · follow.*
Skill Regenerator: insert · replace · prepend · add
JIT-Boost: solidification
JIT-Optimize: skill rewriting
LLM Provider: Anthropic · OpenRouter
Compiled Skill Artifacts: variants · manifests · proposals
Agent Adapters: BareAgent · OpenCode · OpenClaw
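The adapter seam between the variant runtime and a harness can be sketched as a small interface. The method names and signatures below are hypothetical, not SkVM's actual adapter API; they only show how one runtime can target BareAgent, OpenCode, or OpenClaw uniformly.

```python
# Illustrative adapter interface sketch (names are hypothetical).
from abc import ABC, abstractmethod

class AgentAdapter(ABC):
    """Uniform surface the variant runtime targets; one subclass per harness."""
    @abstractmethod
    def run(self, skill_text: str, task: str) -> str: ...

class BareAgentAdapter(AgentAdapter):
    def run(self, skill_text: str, task: str) -> str:
        # A real adapter would drive the harness; this just echoes the wiring.
        return f"bare-agent<<{task}>> with skill({len(skill_text)} chars)"

adapter: AgentAdapter = BareAgentAdapter()
print(adapter.run("# SKILL.md contents", "summarize repo"))
```

Keeping the runtime on one side of this interface is what lets compiled skill artifacts stay harness-agnostic.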
Open Source · Research-Backed

Read the Paper.
Explore the Code.

For a more detailed introduction to SkVM, please refer to the paper and the open-source implementation.