Lands your AI agent at the right code with fewer turns, fewer tokens, and fewer breakages.
A local intelligence layer that sits between your AI agent and your codebase —
indexes every call, remembers every decision, and gets sharper the longer you use it.
One global install, then unerr install <agent> per repo. No account. No keys.
The problem
The agent isn't stupid. It's flying blind.
Watch any AI coding session for ten minutes and you'll see the same loop.
Reads 30 files to find one function
Burns the context window before it writes a line.
Edits something with 40 callers
Never knows it just broke three services.
Re-derives conventions you taught yesterday
And this morning. And an hour ago.
Forgets the entire session
The moment the context window closes.
Every one of these has the same root cause: no persistent memory of your code, your team's style, or its own past mistakes.
unerr is that memory.
Inside unerr
Five live panes. Every claim is a tool call your agent just made.
Open the dashboard the moment unerr is running. Each pane is backed by an append-only ledger — the same store your agent reads from over MCP.
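The ledger model the panes read from can be pictured with a minimal sketch; the class, field names, and event kinds here are assumptions for illustration, not unerr's actual schema:

```python
import time

class Ledger:
    """Append-only event log: records are only ever appended, never mutated.
    Hypothetical sketch; unerr's real store layout is not documented here."""
    def __init__(self):
        self._events = []

    def append(self, kind, payload):
        event = {"seq": len(self._events), "ts": time.time(),
                 "kind": kind, "payload": payload}
        self._events.append(event)
        return event["seq"]

    def replay(self, kind=None):
        # Readers (dashboard panes, MCP tools) derive their views
        # by replaying events, never by editing them.
        return [e for e in self._events if kind is None or e["kind"] == kind]

ledger = Ledger()
ledger.append("tool_call", {"tool": "get_entity", "entity": "parseConfig"})
ledger.append("fact", {"text": "use snake_case for module names"})
assert len(ledger.replay("tool_call")) == 1
```

Because the store is append-only, every dashboard claim can be traced back to the exact event (tool call, fact, marker) that produced it.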
Code Intelligence
Call graph, fan-in/out chokepoints, cross-module surprise links, and a risk grade per file — so the agent stops reading 30 files to find one function.
Project Memory
Conventions, anti-patterns, and decisions — persisted across sessions with decay-adjusted confidence and reinforcement counts.
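Decay-adjusted confidence with reinforcement can be sketched as follows; the exponential-decay formula and the 30-day half-life are assumptions chosen for illustration, not unerr's documented parameters:

```python
import math

def adjusted_confidence(base, age_days, reinforcements, half_life_days=30.0):
    """Confidence decays exponentially with age; each reinforcement
    (the convention being re-confirmed in a later session) slows the decay."""
    effective_half_life = half_life_days * (1 + reinforcements)
    decay = math.exp(-math.log(2) * age_days / effective_half_life)
    return base * decay

fresh = adjusted_confidence(0.9, age_days=0, reinforcements=0)
stale = adjusted_confidence(0.9, age_days=60, reinforcements=0)
reinforced = adjusted_confidence(0.9, age_days=60, reinforcements=5)
assert fresh > reinforced > stale
```

The point of the shape: an old convention the agent keeps re-confirming stays trusted, while one recorded once and never seen again fades out of the injected context.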
Token Trace
Aggregate savings across every session, broken down by mechanism: graph, file_read, shell, dedup, format. Every claim is a tool call.
Reasoning Quality
Four-pillar score: cleaner context, fewer wasted turns, fewer breakages, persistent memory. Shows the load-bearing rate per fact and convention.
Activity
Turn-grouped timeline with a 30-day heatmap. Each row is one burst of agent work — intent → tools → outcome — stitched into a coherent narrative.
And what the agent feels, even when you don't look.
Four background mechanisms that don't have a pane — but show up as fewer turns, fewer breakages, and lower context cost.
Targeted file reads
file_read({ entity: "fnName" }) returns just that function plus relevant conventions — never the whole 2,000-line file.
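A minimal sketch of an entity-targeted read, using Python's `ast` module to return one function's span instead of the whole file. unerr's real file_read also injects conventions and facts; that part is omitted here:

```python
import ast

def file_read(source, entity):
    """Return just the named function's source, not the whole file."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) \
                and node.name == entity:
            return ast.get_source_segment(source, node)
    return None  # entity not found in this file

big_file = "def helper():\n    return 1\n\ndef target(x):\n    return x * 2\n"
assert file_read(big_file, "target") == "def target(x):\n    return x * 2"
```

The agent pays tokens for one function body rather than a 2,000-line file, which is where most of the context savings come from.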
Shell compression
11 strategies, 645+ command classifiers. Overall 93% compression (2 MB → 138 KB) across real-world benchmarks.
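The classifier-then-strategy pipeline might look like the sketch below; the two classifiers and strategies shown are hypothetical stand-ins for unerr's 645+ classifiers and 11 strategies:

```python
import re

# Hypothetical classifier table: map a command pattern to a strategy name.
CLASSIFIERS = [
    (re.compile(r"^(npm|pnpm|yarn) (install|i)\b"), "summarize_install"),
    (re.compile(r"^git log\b"), "truncate_log"),
]

def compress(command, output, max_lines=5):
    for pattern, strategy in CLASSIFIERS:
        if pattern.match(command):
            lines = output.splitlines()
            if strategy == "summarize_install":
                # Keep only warnings/errors plus the final summary line.
                kept = [l for l in lines if "warn" in l or "error" in l]
                return "\n".join(kept + lines[-1:])
            if strategy == "truncate_log":
                return "\n".join(lines[:max_lines])
    return output  # unknown command: pass through untouched

raw = "\n".join(["added pkg %d" % i for i in range(200)]
                + ["added 200 packages in 4s"])
compact = compress("npm install", raw)
assert len(compact) < len(raw) // 10
```

Classifying the command first is what makes the compression safe: output from an unrecognized command is never mangled, only passed through.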
Convention awareness
Naming, structure, and import patterns auto-detected and injected into the agent's context every turn.
Local-first
Two processes, one local DB. Zero network calls. No API keys. Your code never leaves the machine.
Comparison
Each peer owns a layer. unerr connects them.
Code intelligence, persistent memory, and drift prevention — behind one local MCP server.
of an agent's tokens are tool output — mostly file reads. unerr intercepts before the read. (JetBrains, NeurIPS 2025.)
LLM calls per query in Free tier. Facts, conventions, drift signals are algorithmic.
turns before agents revert to built-in Read / Grep / Glob without drift prevention.
| Capability | unerr Free, OSS | Graphify ~47K | Serena ~23K | claude-mem ~75K | RTK ~40K |
|---|---|---|---|---|---|
| Code intelligence | | | | | |
| Pre-hoc file-read intercept | Partial | | | | |
| Convention auto-detection | | | | | |
| Drift / staleness signals | | | | | |
| Memory & continuity | | | | | |
| Persistent across sessions | | | | | |
| Per-repo isolation | — | | | | |
| Runtime | | | | | |
| Zero LLM in core | | | | | |
| Keeps MCP tools in active rotation | | | | | |
Hover any row label for detail. ✓ ships in Free tier · Partial = limited coverage · — not applicable.
A note on honesty. unerr is the new entrant — fewer stars, heavier install than brew install. The moat compounds: day-30 agents have graph state, learned conventions, and accumulated guardrails that day-1 agents don't.
Quick start
From zero to a smarter agent in under a minute.
Three explicit steps. No accounts, no API keys, no external dependencies.
1. Install globally
Single global install. One command, works across all your repos.
2. Verify your environment
Ensures unerr is available in all terminal sessions — not just the one you installed from. Detects PATH issues (common with nvm, fnm, volta, pnpm) and offers to fix them automatically.
3. Choose your mode
Standalone runs one process per repo — start it manually. Daemon mode registers at boot and auto-manages all your repos with a unified dashboard at localhost:9847.
Standalone (single repo · simple)
One unerr process per repo. You start it manually. Good for single-project workflows.

Daemon Mode (multi-repo · recommended)
A single unerrd supervisor manages all repos. Starts at login, spawns per-repo processes on demand — IDE auto-connects.
After unerr install, restart your AI coding session for unerr to take effect. Need a different agent? Run unerr install --show-instructions <agent> — 6 agents are fully integrated.
MCP surface
19 graph-aware tools. One MCP server.
Every tool returns sub-5ms responses with inline ur| signals for drift, blast-radius warnings, and circuit-breaker halts.
Graph Intelligence
8 tools
- get_entity — entity signature, body, callers, callees, risk
- get_file — all entities in a file with risk summary
- get_references — callers (blast radius) or callees (dependencies)
- get_imports — import graph for a file
- search_code — graph-ranked full-text search
- get_conventions — naming, structure, import patterns
- get_critical_nodes — high fan-in/fan-out chokepoints
- get_cross_boundary_links — unexpected cross-module deps
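For intuition, the blast radius that get_references reports is a transitive walk over the reversed call graph. A sketch with a toy graph (the graph and entity names are hypothetical):

```python
from collections import defaultdict

# Toy call graph: edges point from caller to callee.
calls = {
    "handler": ["validate", "save"],
    "cron_job": ["save"],
    "save": ["serialize"],
}

def blast_radius(entity):
    """Everything that transitively calls `entity`: the set of functions
    an edit to `entity` could break."""
    reverse = defaultdict(set)
    for caller, callees in calls.items():
        for callee in callees:
            reverse[callee].add(caller)
    seen, stack = set(), [entity]
    while stack:
        for caller in reverse[stack.pop()]:
            if caller not in seen:
                seen.add(caller)
                stack.append(caller)
    return seen

assert blast_radius("serialize") == {"save", "handler", "cron_job"}
```

This is why editing something with 40 callers stops being a silent hazard: the reverse walk surfaces every affected caller before the edit lands.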
Structural Analysis
3 tools
- get_project_stats — entity counts, risk distribution, health grade
- file_connections — imports + co-change correlations
- get_test_coverage — direct + transitive tests for any entity
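Co-change correlation, one input to file_connections, can be approximated by counting how often two files appear in the same commit. Raw pair counts are the simplest signal; how unerr actually weights them is not documented here:

```python
from itertools import combinations
from collections import Counter

def co_change_counts(commits):
    """Count, for every file pair, how many commits touched both files."""
    pairs = Counter()
    for files in commits:
        for a, b in combinations(sorted(set(files)), 2):
            pairs[(a, b)] += 1
    return pairs

history = [
    ["api.py", "schema.py"],
    ["api.py", "schema.py", "docs.md"],
    ["docs.md"],
]
counts = co_change_counts(history)
assert counts[("api.py", "schema.py")] == 2
```

Files that keep changing together but share no import edge are exactly the "surprise links" the Code Intelligence pane calls out.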
File Protocol
2 tools
- file_read — context-aware read; auto-injects conventions and facts
- file_outline — file structure without reading the body
Persistent Memory
2 tools
- record_fact — persist a convention, decision, or anti-pattern
- recall_facts — hierarchical scope + decay-adjusted confidence
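Hierarchical scope means a fact recorded at the repo root applies everywhere, while one scoped to a directory applies only beneath it. A sketch of the resolution order; the scope semantics here are an assumption, not documented behavior:

```python
def recall_facts(facts, path):
    """Return facts whose scope covers `path`, most specific scope first."""
    matches = [f for f in facts
               if f["scope"] == ""                                  # repo root
               or path.startswith(f["scope"].rstrip("/") + "/")     # subtree
               or path == f["scope"]]                               # exact file
    return sorted(matches, key=lambda f: len(f["scope"]), reverse=True)

facts = [
    {"scope": "", "text": "prefer pure functions"},
    {"scope": "src/api", "text": "handlers return Result, never throw"},
]
hits = recall_facts(facts, "src/api/users.py")
assert [f["scope"] for f in hits] == ["src/api", ""]
```

Ordering by specificity lets a directory-local convention override a repo-wide one when both apply.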
Session Narrative
4 tools
- mark_intent — one-sentence task start; becomes the turn title
- mark_decision — records a chosen approach + alternatives
- mark_blocker — flags an unresolved obstacle
- mark_resolution — resolves a prior blocker by marker_id
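The four markers form a small append-only narrative. A sketch of how mark_resolution might link back to a blocker via marker_id; the id scheme and field names are assumptions:

```python
import itertools

class SessionNarrative:
    def __init__(self):
        self._ids = itertools.count(1)
        self.markers = []

    def _mark(self, kind, text, **extra):
        marker = {"marker_id": next(self._ids), "kind": kind,
                  "text": text, **extra}
        self.markers.append(marker)
        return marker["marker_id"]

    def mark_intent(self, text):
        return self._mark("intent", text)

    def mark_decision(self, text):
        return self._mark("decision", text)

    def mark_blocker(self, text):
        return self._mark("blocker", text, resolved=False)

    def mark_resolution(self, marker_id, text):
        # Close the referenced blocker, then append the resolution itself.
        for m in self.markers:
            if m["marker_id"] == marker_id and m["kind"] == "blocker":
                m["resolved"] = True
        return self._mark("resolution", text, resolves=marker_id)

s = SessionNarrative()
s.mark_intent("migrate auth to tokens")
blocker = s.mark_blocker("legacy sessions still in prod")
s.mark_resolution(blocker, "added dual-read shim")
assert s.markers[1]["resolved"] is True
```

Because every marker carries an id, the Activity pane can stitch intent → tools → outcome into one turn even when the resolution arrives many tool calls later.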
Every response includes _meta (latency, risk level, drift status). Inline ur|<tag> signals surface high-priority guidance directly in the response body.
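A sketch of the response envelope and signal extraction; beyond the latency, risk level, and drift status named above, the field values shown are placeholders:

```python
import re
import time

def with_meta(handler, payload):
    """Wrap a tool handler so every response carries a _meta block."""
    start = time.perf_counter()
    body = handler(payload)
    return {
        "body": body,
        "_meta": {
            "latency_ms": round((time.perf_counter() - start) * 1000, 2),
            "risk": "medium",   # placeholder; real value comes from the graph
            "drift": "fresh",   # placeholder
        },
    }

def extract_signals(body):
    """Pull inline ur|<tag> signals out of a response body."""
    return re.findall(r"ur\|(\w+)", body)

resp = with_meta(lambda p: "42 callers found\nur|blast_radius review callers first", {})
assert extract_signals(resp["body"]) == ["blast_radius"]
assert "latency_ms" in resp["_meta"]
```

Putting signals inline in the body, rather than only in _meta, is what lets agents that ignore metadata still see the high-priority guidance.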