Now live — open CLI · v1

Lands your AI agent at the right code in
fewer turns, tokens, & breakages.

A local intelligence layer that sits between your AI agent and your codebase — indexes every call, remembers every decision, and gets sharper the longer you use it.

One global install, then unerr install <agent> per repo. No account. No keys.

Node ≥ 20 · MCP · Local-first, no cloud · ELv2
localhost:9315 — the unerr live dashboard: token savings, reasoning quality, codebase map, and project memory

The problem

The agent isn't stupid. It's flying blind.

Watch any AI coding session for ten minutes and you'll see the same loop.

  • Reads 30 files to find one function

    Burns the context window before it writes a line.

  • Edits something with 40 callers

    Never knows it just broke three services.

  • Re-derives conventions you taught yesterday

    And this morning. And an hour ago.

  • Forgets the entire session

    The moment the context window closes.

Every one of these has the same root cause: no persistent memory of your code, your team's style, or its own past mistakes.

unerr is that memory.

Inside unerr

Five live panes. Every claim is a tool call your agent just made.

Open the dashboard the moment unerr is running. Each pane is backed by an append-only ledger — the same store your agent reads from over MCP.

Code Intelligence

Call graph, fan-in/out chokepoints, cross-module surprise links, and a risk grade per file — so the agent stops reading 30 files to find one function.

get_entity · get_references · <5ms
Project Memory

Conventions, anti-patterns, and decisions — persisted across sessions with decay-adjusted confidence and reinforcement counts.

record_fact · recall_facts · cross-session
Token Trace

Aggregate savings across every session, broken down by mechanism: graph, file_read, shell, dedup, format. Every claim is a tool call.

per-turn ledger · compounding multiplier
Reasoning Quality

Four-pillar score: cleaner context, fewer wasted turns, fewer breakages, persistent memory. Shows the load-bearing rate per fact and convention.

5-turn outcome window · verdicts
Activity

Turn-grouped timeline with a 30-day heatmap. Each row is one burst of agent work — intent → tools → outcome — stitched into a coherent narrative.

intent stitching · 30-day heatmap

And what the agent feels, even when you don't look.

Four background mechanisms that don't have a pane — but show up as fewer turns, fewer breakages, and lower context cost.

Targeted file reads

file_read({ entity: "fnName" }) returns just that function plus relevant conventions — never the whole 2,000-line file.
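As a rough mental model of what an entity-scoped read buys you (unerr actually resolves entities through its code graph; the naive brace-matching scan, file contents, and function names below are all made up for illustration):

```typescript
// Illustrative sketch only: return just the named function's span,
// not the whole file. unerr does this via its graph; this toy version
// scans the source text for `function <name>` and matches braces.
function extractEntity(source: string, name: string): string | null {
  const start = source.indexOf(`function ${name}`);
  if (start === -1) return null;
  let depth = 0;
  let seenBrace = false;
  for (let i = start; i < source.length; i++) {
    if (source[i] === "{") { depth++; seenBrace = true; }
    if (source[i] === "}") depth--;
    if (seenBrace && depth === 0) return source.slice(start, i + 1);
  }
  return null;
}

// A stand-in for a 2,000-line file the agent would otherwise read whole.
const file = `
function helper() { return 1; }
function fnName() {
  const x = { a: 1 };
  return x.a;
}
function other() { return 3; }
`;

const body = extractEntity(file, "fnName");
console.log(body); // only fnName's body, none of its neighbors
```

The agent pays tokens for one function instead of the file around it, which is where most of the context-window savings come from.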

Shell compression

11 strategies, 645+ command classifiers. Overall 93% compression (2 MB → 138 KB) across real-world benchmarks.

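One of the simplest of those strategies is dedup. A minimal sketch (the function name and the fake build log are invented; unerr's real classifiers are far more specific):

```typescript
// Illustrative sketch only: collapse consecutive duplicate lines in
// shell output, one cheap way tool output shrinks dramatically.
function dedupLines(output: string): string {
  const lines = output.split("\n");
  const out: string[] = [];
  let run = 0;
  for (let i = 0; i < lines.length; i++) {
    if (i > 0 && lines[i] === lines[i - 1]) { run++; continue; }
    if (run > 0) out.push(`  … (${run} identical lines collapsed)`);
    out.push(lines[i]);
    run = 0;
  }
  if (run > 0) out.push(`  … (${run} identical lines collapsed)`);
  return out.join("\n");
}

// A noisy build log: one warning repeated 500 times.
const raw = ["build ok", ...Array(500).fill("warning: deprecated API"), "done"].join("\n");
const compressed = dedupLines(raw);
const ratio = 1 - compressed.length / raw.length;
console.log(ratio > 0.9); // most of the repeated output is gone
```

On pathological-but-common output like this, a single strategy already lands in the same ballpark as the quoted 93% (2 MB → 138 KB is a 93.3% reduction).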

Convention awareness

Naming, structure, and import patterns auto-detected and injected into the agent's context every turn.
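One way such auto-detection can work, sketched with invented helper names (unerr's actual detectors are not published here): tally identifier styles across the codebase and surface the dominant one.

```typescript
// Illustrative sketch only: classify identifiers by naming style and
// report the majority convention, which can then be injected into the
// agent's context each turn.
type Style = "camelCase" | "snake_case" | "other";

function styleOf(name: string): Style {
  if (/^[a-z]+(?:[A-Z][a-z0-9]*)+$/.test(name)) return "camelCase";
  if (/^[a-z]+(?:_[a-z0-9]+)+$/.test(name)) return "snake_case";
  return "other";
}

function dominantStyle(identifiers: string[]): Style {
  const counts = new Map<Style, number>();
  for (const id of identifiers) {
    const s = styleOf(id);
    counts.set(s, (counts.get(s) ?? 0) + 1);
  }
  let best: Style = "other";
  let bestCount = -1;
  counts.forEach((c, s) => { if (c > bestCount) { best = s; bestCount = c; } });
  return best;
}

console.log(dominantStyle(["getUser", "fetchOrders", "parse_csv", "loadConfig"]));
// → "camelCase"
```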

Local-first

Two processes, one local DB. Zero network calls. No API keys. Your code never leaves the machine.

<5ms graph queries
93% avg shell compression
19 MCP tools
0 network calls

Comparison

Each peer owns a layer. unerr connects them.

Code intelligence, persistent memory, and drift prevention — behind one local MCP server.

~84%

of an agent's tokens are tool output — mostly file reads. unerr intercepts before the read. (JetBrains, NeurIPS 2025.)

0

LLM calls per query in the Free tier. Facts, conventions, and drift signals are computed algorithmically.

3–5

turns before agents revert to built-in Read / Grep / Glob without drift prevention.

Capability comparison: unerr (Free, OSS) vs. Graphify (~47K), Serena (~23K), claude-mem (~75K), RTK (~40K).

Code intelligence
  • Pre-hoc file-read intercept (Partial)
  • Convention auto-detection
  • Drift / staleness signals

Memory & continuity
  • Persistent across sessions
  • Per-repo isolation

Runtime
  • Zero LLM in core
  • Keeps MCP tools in active rotation

Hover any row label for detail. ✓ = ships in Free tier · Partial = limited coverage · — = not applicable.

A note on honesty. unerr is the new entrant — fewer stars, heavier install than brew install. The moat compounds: day-30 agents have graph state, learned conventions, and accumulated guardrails that day-1 agents don't.

Quick start

From zero to a smarter agent in under a minute.

Three explicit steps. No accounts, no API keys, no external dependencies.

  1. Install globally

    Single global install. One command, works across all your repos.

  2. Verify your environment

    Ensures unerr is available in all terminal sessions — not just the one you installed from. Detects PATH issues (common with nvm, fnm, volta, pnpm) and offers to fix them automatically.

  3. Choose your mode

    Standalone runs one process per repo — start it manually. Daemon mode registers at boot and auto-manages all your repos with a unified dashboard at localhost:9847.

    Standalone (single repo · simple)

    One unerr process per repo. You start it manually. Good for single-project workflows.

    Daemon Mode (multi-repo · recommended)

    A single unerrd supervisor manages all repos. Starts at login, spawns per-repo processes on demand — IDE auto-connects.

After unerr install, restart your AI coding session for unerr to take effect. Need a different agent? Run unerr install --show-instructions <agent>; 6 agents are fully integrated.

MCP surface

19 graph-aware tools. One MCP server.

Every tool returns sub-5ms responses with inline ur| signals for drift, blast-radius warnings, and circuit-breaker halts.

Graph Intelligence (8 tools)

  • get_entity: entity signature, body, callers, callees, risk
  • get_file: all entities in a file with risk summary
  • get_references: callers (blast radius) or callees (dependencies)
  • get_imports: import graph for a file
  • search_code: graph-ranked full-text search
  • get_conventions: naming, structure, import patterns
  • get_critical_nodes: high fan-in/fan-out chokepoints
  • get_cross_boundary_links: unexpected cross-module deps
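To make "chokepoint" concrete, here is one plausible framing of what a get_critical_nodes-style query surfaces (the scoring rule, function names, and toy call graph are all assumptions, not unerr's published algorithm): rank entities by fan-in × fan-out, so functions that many callers depend on and that themselves fan out widely float to the top.

```typescript
// Illustrative sketch only: chokepoint = high fan-in AND high fan-out.
// Edges are (caller, callee) pairs in a toy call graph.
const calls: Array<[string, string]> = [
  ["handler", "parseConfig"], ["worker", "parseConfig"], ["cli", "parseConfig"],
  ["parseConfig", "readFile"], ["parseConfig", "validate"],
  ["handler", "log"],
];

function chokepoints(edges: Array<[string, string]>): string[] {
  const fanIn = new Map<string, number>();
  const fanOut = new Map<string, number>();
  for (const [from, to] of edges) {
    fanOut.set(from, (fanOut.get(from) ?? 0) + 1);
    fanIn.set(to, (fanIn.get(to) ?? 0) + 1);
  }
  const nodes = new Set(edges.flat());
  return Array.from(nodes)
    .map((n) => ({ n, score: (fanIn.get(n) ?? 0) * (fanOut.get(n) ?? 0) }))
    .filter((x) => x.score > 0)          // needs both callers and callees
    .sort((a, b) => b.score - a.score)
    .map((x) => x.n);
}

console.log(chokepoints(calls)); // → ["parseConfig"] (3 callers × 2 callees)
```

Editing a node like parseConfig without checking get_references first is exactly the "40 callers" failure mode described above.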

Structural Analysis (3 tools)

  • get_project_stats: entity counts, risk distribution, health grade
  • file_connections: imports + co-change correlations
  • get_test_coverage: direct + transitive tests for any entity
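"Co-change correlation" has a standard cheap formulation, sketched here with an invented commit history (unerr's exact metric is not specified in this page): Jaccard similarity over the sets of commits two files appear in.

```typescript
// Illustrative sketch only: each inner array is the set of files
// touched by one commit; co-change = |commits touching both| /
// |commits touching either|.
const commits: string[][] = [
  ["api.ts", "api.test.ts"],
  ["api.ts", "api.test.ts", "types.ts"],
  ["types.ts"],
  ["api.ts", "api.test.ts"],
];

function coChange(a: string, b: string, history: string[][]): number {
  const together = history.filter((c) => c.includes(a) && c.includes(b)).length;
  const either = history.filter((c) => c.includes(a) || c.includes(b)).length;
  return either === 0 ? 0 : together / either;
}

console.log(coChange("api.ts", "api.test.ts", commits)); // → 1 (always edited together)
console.log(coChange("api.ts", "types.ts", commits));    // → 0.25 (loosely coupled)
```

A score near 1 is a hint the agent should open both files together even when no import edge connects them.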

File Protocol (2 tools)

  • file_read: context-aware read that auto-injects conventions and facts
  • file_outline: file structure without reading the body

Persistent Memory (2 tools)

  • record_fact: persist a convention, decision, or anti-pattern
  • recall_facts: hierarchical scope + decay-adjusted confidence
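unerr doesn't publish its scoring formula, but "decay-adjusted confidence with reinforcement counts" has a natural shape, sketched here as an assumption: exponential decay by age, with each reinforcement stretching the half-life so well-used facts fade more slowly.

```typescript
// Illustrative sketch only: one plausible decay-adjusted confidence.
// All parameter choices (half-life of 30 days, linear stretching) are
// invented for the example, not unerr's actual model.
function adjustedConfidence(
  base: number,            // confidence when recorded (0..1)
  ageDays: number,         // days since last reinforcement
  reinforcements: number,  // times the fact was re-confirmed
  halfLifeDays = 30,
): number {
  const effectiveHalfLife = halfLifeDays * (1 + reinforcements);
  return base * Math.pow(0.5, ageDays / effectiveHalfLife);
}

const fresh = adjustedConfidence(0.9, 1, 0);                // barely decayed
const stale = adjustedConfidence(0.9, 120, 0);              // mostly forgotten
const staleButReinforced = adjustedConfidence(0.9, 120, 5); // kept alive by use
console.log(stale < staleButReinforced && staleButReinforced < fresh); // → true
```

The practical effect: a convention recorded once four months ago stops outranking one the team re-confirms every week.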

Session Narrative (4 tools)

  • mark_intent: one-sentence task start; becomes the turn title
  • mark_decision: records a chosen approach + alternatives
  • mark_blocker: flags an unresolved obstacle
  • mark_resolution: resolves a prior blocker by marker_id
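The four markers above can be stitched into a turn narrative roughly like this (the Marker type, stitch function, and sample session are all hypothetical; only the marker kinds and the marker_id linkage come from the tool list):

```typescript
// Illustrative sketch only: intent opens a turn, decisions and blockers
// attach to it, and a resolution closes the matching blocker by id.
type Marker =
  | { kind: "intent"; text: string }
  | { kind: "decision"; text: string }
  | { kind: "blocker"; id: string; text: string }
  | { kind: "resolution"; marker_id: string };

function stitch(markers: Marker[]): string {
  const open = new Map<string, string>(); // unresolved blockers by id
  const parts: string[] = [];
  for (const m of markers) {
    if (m.kind === "intent") parts.push(`Turn: ${m.text}`);
    if (m.kind === "decision") parts.push(`chose ${m.text}`);
    if (m.kind === "blocker") { open.set(m.id, m.text); parts.push(`blocked: ${m.text}`); }
    if (m.kind === "resolution") {
      parts.push(`resolved: ${open.get(m.marker_id) ?? m.marker_id}`);
      open.delete(m.marker_id);
    }
  }
  return parts.join(" → ");
}

const story = stitch([
  { kind: "intent", text: "migrate auth to tokens" },
  { kind: "blocker", id: "b1", text: "legacy sessions" },
  { kind: "resolution", marker_id: "b1" },
]);
console.log(story);
// → "Turn: migrate auth to tokens → blocked: legacy sessions → resolved: legacy sessions"
```

That stitched string is the kind of "intent → tools → outcome" row the Activity pane renders.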

Every response includes _meta (latency, risk level, drift status). Inline ur|<tag> signals surface high-priority guidance directly in the response body.

Get started

Stop watching your agent read 30 files to find one function.

One install. Per repo. Zero accounts. Your code never leaves the machine.

Fully local · No account · No cloud · Free under ELv2