Write Once, Run Everywhere Efficiently.
SkVM virtualizes agent skills the way JVM virtualizes bytecode and TVM virtualizes tensor programs: profile target capabilities, AOT-compile portable skill variants, and optimize execution online from runtime traces.
Upload a Skill, Optimize with SkVM
Skills Shouldn't Break
When You Switch Models and Harnesses
LLM agent skills are tightly coupled to specific models. Moving between models — or even model versions — means rewriting prompts by hand. That doesn't scale.
Skills Break Across Models
Different models have different capability profiles. A skill requiring strong code generation fails silently on models that can't reliably edit files or parse structured output.
Manual Tuning Doesn't Scale
N skills × M models = N×M manual adaptations. Every model update invalidates previous prompt engineering work. The combinatorial explosion makes this untenable.
No Runtime Learning
Static prompts never improve from execution. Repeated code patterns aren't cached. There's no feedback loop from production runs back into skill optimization.
Three Stages. Full Automation.
SkVM takes a raw skill and a target model, then profiles, AoT-compiles, and optimizes the execution path end to end.
Map Model Capabilities
SkVM profiles each model against 26 primitive capabilities — code generation, tool use, reasoning, instruction following, and more. The current site snapshot is backed by real latest.json capability profiles collected from multiple model and harness combinations.
Reading the latest model and harness primitive capability profiles.
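To make the profiling idea concrete, here is a minimal sketch of what a capability profile and a gap check could look like. The class, capability names, and scores are illustrative assumptions, not SkVM's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class CapabilityProfile:
    """Hypothetical capability profile: one model, scores over primitives."""
    model: str
    scores: dict = field(default_factory=dict)  # capability -> score in [0, 1]

    def gaps(self, required: dict) -> dict:
        """Capabilities where the model's score falls below the skill's need."""
        return {cap: round(need - self.scores.get(cap, 0.0), 3)
                for cap, need in required.items()
                if self.scores.get(cap, 0.0) < need}

# Illustrative numbers only: a skill needing strong code generation and
# structured output, run against a model that is weak at both.
profile = CapabilityProfile(
    model="example/small-model",
    scores={"code_generation": 0.55, "tool_use": 0.80, "structured_output": 0.40},
)
skill_needs = {"code_generation": 0.70, "structured_output": 0.60, "tool_use": 0.50}
print(profile.gaps(skill_needs))  # tool_use is strong enough; the other two are gaps
```

The gap map is exactly the input the compile stage needs: it tells the compiler which capabilities to compensate for.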
Compile for Capability, Environment, and Parallelism
The 3-pass AOT compiler analyzes what the skill needs versus what the model provides, then transforms the skill to compensate. Gap analysis, environment binding, and concurrency extraction — all automated.
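The three passes above can be sketched as a small function pipeline. This is a hedged illustration of the idea, not SkVM's implementation; every function name and data field here is an assumption.

```python
def gap_analysis(skill: dict, profile: dict) -> dict:
    """Pass 1: find capabilities the target model lacks, add compensations."""
    weak = [c for c, need in skill["requires"].items() if profile.get(c, 0.0) < need]
    return {**skill, "compensations": [f"scaffold {c} step by step" for c in weak]}

def environment_binding(skill: dict, env: dict) -> dict:
    """Pass 2: bind placeholders in the skill body to the concrete environment."""
    return {**skill, "body": skill["body"].format(**env)}

def concurrency_extraction(skill: dict) -> dict:
    """Pass 3: group steps into waves whose dependencies are already done."""
    deps = skill.get("deps", {})
    remaining, done, waves = list(skill.get("steps", [])), set(), []
    while remaining:
        wave = [s for s in remaining if set(deps.get(s, ())) <= done]
        if not wave:
            raise ValueError("dependency cycle")
        waves.append(wave)
        done |= set(wave)
        remaining = [s for s in remaining if s not in done]
    return {**skill, "parallel_groups": waves}

# A toy skill compiled against a model too weak at code generation.
raw = {
    "requires": {"code_generation": 0.7},
    "body": "Edit files with {editor}.",
    "steps": ["fetch", "lint", "test"],
    "deps": {"test": ["lint"]},
}
compiled = concurrency_extraction(
    environment_binding(gap_analysis(raw, {"code_generation": 0.5}), {"editor": "sed"})
)
```

In this sketch, `fetch` and `lint` land in one parallel wave while `test` waits on `lint`, which is the kind of structure concurrency extraction is meant to expose.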
Learn at Runtime
JIT-boost solidifies repetitive code patterns to bypass LLM calls entirely. JIT-optimize rewrites skill content based on execution traces. Autotune closes the loop with self-contained evaluation cycles.
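The JIT-boost idea, caching a repeated pattern so later executions skip the model call, can be sketched like this. The class name, threshold, and cache policy are illustrative assumptions, not SkVM's runtime.

```python
from collections import Counter

class JitBoost:
    """Hypothetical sketch: solidify repeated patterns to bypass LLM calls."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.hits = Counter()
        self.solid = {}  # pattern -> solidified result

    def run(self, pattern: str, llm_call):
        if pattern in self.solid:          # hot path: no LLM call at all
            return self.solid[pattern]
        self.hits[pattern] += 1
        result = llm_call(pattern)         # cold path: still pay for the model
        if self.hits[pattern] >= self.threshold:
            self.solid[pattern] = result   # seen often enough: solidify it
        return result

# Demo with a fake model call that records how often it is invoked.
calls = []
def fake_llm(pattern):
    calls.append(pattern)
    return f"code for {pattern}"

jit = JitBoost(threshold=2)
for _ in range(5):
    jit.run("parse-csv", fake_llm)        # 5 executions, only 2 model calls
```

After the second identical execution the pattern is served from the cache, which is the feedback loop the static-prompt setup lacks.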
Measured Across Models
Benchmark scores before and after SkVM optimization. Weaker models see the largest gains.
# Profile a model + harness
skvm profile --model=openrouter/qwen/qwen3-30b-a3b-instruct-2507 --adapter=bare-agent
# AOT-compile a skill against that profile
skvm aot-compile --skill=path/to/your/SKILL.md --model=openrouter/qwen/qwen3-30b-a3b-instruct-2507
# Autotune the skill with synthetic tasks
skvm jit-optimize --skill=path/to/skill-dir --task-source=synthetic \
  --optimizer-model=xty/glm-5.1 --target-model=xty/glm-5.1
Under the Hood
A modular pipeline connecting profiler, compiler, and runtime through well-defined interfaces.
Read the Paper.
Explore the Code.
For a more detailed introduction to SkVM, please refer to the paper and the open-source implementation.