# lean-coverage

Proof coverage and definition coverage analysis for Lean 4 projects.
lean-coverage answers two questions:
- "What percentage of my project's theorems and axioms are fully proven?" (proof completeness)
- "Which definitions have at least one theorem about them?" (definition coverage)
Lean's `sorry` tactic lets you close a proof obligation without actually proving it, a useful placeholder during development. But tracking how many `sorry`s remain in a project is surprisingly hard:
- `#print axioms` works one declaration at a time, impractical for a whole project
- `grep sorry` catches comments and strings, and misses transitive dependencies
- Mathlib's approach is binary: `sorry` is forbidden on master. No metric is needed when the only acceptable value is 100%
For projects using `sorry`-driven development (sketching out structure first, then filling in proofs over time), there's no way to track progress toward full verification as a number.
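For instance, a hypothetical file in this style (names are illustrative) might look like:

```lean
def double (n : Nat) : Nat := n * 2

-- Fully proven: no sorry in its dependency closure
theorem double_zero : double 0 = 0 := rfl

-- Sketched first: the statement is in place, the proof comes later
theorem double_add (m n : Nat) : double (m + n) = double m + double n := by
  sorry
```

Here one of two theorems is proven, and there is no standard way to surface that 50% figure, or watch it climb, across an entire project.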
Existing ecosystem tools serve different purposes:
| Tool | What it does | What it doesn't do |
|---|---|---|
| SorryDB | Indexes sorry locations across all public Lean repos for AI benchmarking | Produce per-project coverage metrics |
| LeanDepViz | Visualizes declaration dependencies, checks policies | Aggregate into a single coverage number |
| lean4checker | Replays environment through the kernel to catch metaprogramming exploits | Track sorry usage |
| Mathlib linters | Enforce style and correctness rules | Count or report sorry usage (they skip sorry declarations) |
lean-coverage fills this gap: a quantitative proof coverage metric with per-module breakdowns, multiple output formats, and CI integration — analogous to code coverage in software testing.
## Features

- Report what fraction of theorems and axioms are free of `sorry` (and optionally other escape hatches like `native_decide` or user axioms)
- Report which `def`s are referenced by at least one theorem's type (definition coverage)
- Per-module and per-declaration breakdowns
- Terminal, JSON, and one-line output formats
- CI integration with configurable coverage thresholds
- Works with any Lean 4 project without source modifications
- Loads compiled `.olean` files — no re-elaboration needed
## Installation

Add to your `lakefile.lean`:

```lean
require «lean-coverage» from git
  "https://github.com/didmar/lean-coverage" @ "main"
```

Then run:

```sh
lake update
lake exe lean-coverage
```

Or point at a project directory:

```sh
lake exe lean-coverage --project /path/to/my-lean-project
```

## Example output

```
lean-coverage report: my-project

Module                 Total  Proven  Sorry  Proof Cov  Def Cov
──────────────────────────────────────────────────────────────────────────────────
MyProject.Core            10      10      0     100.0%    80.0%
MyProject.Proofs           8       6      2      75.0%   100.0%
MyProject.Utils            0       0      0     100.0%     0.0%
──────────────────────────────────────────────────────────────────────────────────
TOTAL                     18      16      2      88.9%    62.5%

Declarations using sorry (2):
  - MyProject.Proofs.lemma_foo (theorem)
  - MyProject.Proofs.lemma_bar (theorem)

Definition coverage: 5/8 definitions referenced by theorems (62.5%)
Uncovered definitions (3):
  - MyProject.Utils.helper (MyProject.Utils)
  - MyProject.Utils.format (MyProject.Utils)
  - MyProject.Utils.parse (MyProject.Utils)
```
```sh
lake exe lean-coverage --format json
```

Produces machine-readable output with full declaration details, suitable for dashboards or further processing.
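As a rough illustration of the shape: only the `summary.coverage` and `summary.defCoverage` fields are confirmed by the badge workflow later in this README; treat everything else about the schema as an assumption.

```json
{
  "summary": {
    "coverage": "88.9%",
    "defCoverage": "62.5%"
  }
}
```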
The `oneline` format condenses the report into a single line:

```
lean-coverage: 16/18 declarations proven (88.9%) | 2 sorry, 0 trusted, 0 axiomatic | 5/8 defs covered (62.5%)
```
## CI integration

```yaml
- name: Check proof coverage
  run: lake exe lean-coverage --format oneline --min-coverage 1.0
```

Exit codes:

- `0` — coverage meets or exceeds the threshold
- `1` — coverage below the threshold
## Coverage badges

You can add dynamic shields.io badges to your README that update automatically on each push.
- Create a public GitHub Gist with a dummy file (any content). Note the gist ID from the URL.

- Create a classic Personal Access Token: click "Generate new token (classic)", give it a name (e.g. `coverage-badge`), check only the `gist` scope, and generate it. Copy the token — it won't be shown again.

- Add the token as a repo secret: go to your repo's Settings > Secrets and variables > Actions > New repository secret, and name it `GIST_SECRET`.

- Add this workflow to `.github/workflows/coverage-badge.yml`:

```yaml
name: Coverage Badge

on:
  push:
    branches: [main]

jobs:
  update-badge:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Lean and build
        uses: leanprover/lean-action@v1
        with:
          auto-config: false
          build: true
      - name: Run lean-coverage
        run: |
          OUTPUT=$(lake exe lean-coverage --format json --include defs)
          PROOF=$(echo "$OUTPUT" | jq -r '.summary.coverage')
          DEF=$(echo "$OUTPUT" | jq -r '.summary.defCoverage')
          echo "PROOF_COV=$PROOF" >> "$GITHUB_ENV"
          echo "DEF_COV=$DEF" >> "$GITHUB_ENV"
          PROOF_NUM=$(echo "$PROOF" | tr -d '%')
          DEF_NUM=$(echo "$DEF" | tr -d '%')
          echo "PROOF_COV_NUM=$PROOF_NUM" >> "$GITHUB_ENV"
          echo "DEF_COV_NUM=$DEF_NUM" >> "$GITHUB_ENV"
      - name: Compute badge colors
        run: |
          for METRIC in PROOF DEF; do
            if [ "$METRIC" = "PROOF" ]; then
              VAL=${{ env.PROOF_COV_NUM }}
            else
              VAL=${{ env.DEF_COV_NUM }}
            fi
            if [ "$(echo "$VAL >= 80" | bc -l)" -eq 1 ]; then
              COLOR=brightgreen
            elif [ "$(echo "$VAL >= 60" | bc -l)" -eq 1 ]; then
              COLOR=yellow
            elif [ "$(echo "$VAL >= 40" | bc -l)" -eq 1 ]; then
              COLOR=orange
            else
              COLOR=red
            fi
            echo "${METRIC}_COLOR=$COLOR" >> "$GITHUB_ENV"
          done
      - name: Update proof coverage badge
        uses: schneegans/dynamic-badges-action@v1.7.0
        with:
          auth: ${{ secrets.GIST_SECRET }}
          gistID: <YOUR_GIST_ID>
          filename: lean-coverage-proof.json
          label: proof coverage
          message: ${{ env.PROOF_COV }}
          color: ${{ env.PROOF_COLOR }}
      - name: Update def coverage badge
        uses: schneegans/dynamic-badges-action@v1.7.0
        with:
          auth: ${{ secrets.GIST_SECRET }}
          gistID: <YOUR_GIST_ID>
          filename: lean-coverage-def.json
          label: def coverage
          message: ${{ env.DEF_COV }}
          color: ${{ env.DEF_COLOR }}
```

Replace `<YOUR_GIST_ID>` with the hex ID from your gist URL. Remove `--include defs` if you only want the proof coverage badge.
- Add the badges to your README:

  ```markdown
  ![Proof Coverage](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/<USER>/<GIST_ID>/raw/lean-coverage-proof.json)
  ![Def Coverage](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/<USER>/<GIST_ID>/raw/lean-coverage-def.json)
  ```
Badges update within ~5 minutes of a push to main (shields.io caching).
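If you need to debug the setup, the file that `dynamic-badges-action` writes to the gist follows the shields.io endpoint-badge schema, roughly like this (values shown are illustrative):

```json
{
  "schemaVersion": 1,
  "label": "proof coverage",
  "message": "88.9%",
  "color": "brightgreen"
}
```

Fetching the gist's raw URL in a browser and checking it against this shape is a quick way to confirm the workflow ran.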
## Flags

| Flag | Description |
|---|---|
| `--format <terminal\|json\|oneline>` | Output format (default: `terminal`) |
| `--min-coverage <0.0-1.0>` | Minimum coverage threshold; exit `1` if below |
| `--include <defs,instances,opaques>` | Count additional declaration types beyond theorems |
| `--strict` | Classify declarations depending on non-standard user axioms as axiomatic (by default they count as proven) |
| `--module <pattern>` | Only analyze modules matching glob patterns (comma-separated) |
| `--exclude-module <pattern>` | Exclude modules matching glob patterns |
| `--project <path>` | Path to target project (default: current directory) |
## How it works

- Loads the project's compiled environment from `.olean` files via `Lean.importModules`
- Iterates over all constants, filtering to project-local declarations only (excludes stdlib, Mathlib, and other dependencies)
- Filters out compiler-generated internals (eliminators, match lemmas, etc.)
- For each analyzed declaration, computes the transitive axiom closure via `collectAxioms`
- Classifies each declaration as proven or sorry (depends on `sorryAx`); with `--strict`, declarations depending on user-declared axioms are classified as axiomatic instead of proven
- Aggregates results by module and outputs the report
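The classification step can be sketched in a few lines of Lean metaprogramming. This is a simplified illustration under stated assumptions, not the tool's actual source: it assumes `Lean.collectAxioms` (the machinery behind `#print axioms`) and runs in `CoreM`, whereas the real tool first loads the project's `.olean` files with `importModules`.

```lean
import Lean

open Lean

/-- Simplified sketch: count the theorems in the current environment
    whose transitive axiom closure is free of `sorryAx`. -/
def countProven : CoreM (Nat × Nat) := do
  let env ← getEnv
  let mut total := 0
  let mut proven := 0
  for (name, info) in env.constants.toList do
    -- only theorems, skipping compiler-generated internals
    if (info matches .thmInfo _) && !name.isInternal then
      total := total + 1
      -- transitive axiom closure, as computed by #print axioms
      let axioms ← collectAxioms name
      unless axioms.contains ``sorryAx do
        proven := proven + 1
  return (proven, total)
```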
The standard axioms (`propext`, `Classical.choice`, `Quot.sound`) are excluded from the declaration count, and they do not affect the verification status of declarations that depend on them.
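To illustrate the distinction, consider a hypothetical file (names are illustrative):

```lean
-- A user-declared axiom: part of the trusted base, not proven
axiom todo_spec : ∀ n : Nat, n + 0 = n

-- Depends only on standard axioms: counts as proven in both modes
theorem uses_classical (p : Prop) : p ∨ ¬p := Classical.em p

-- Depends on the user axiom: proven by default, axiomatic under --strict
theorem relies_on_axiom (n : Nat) : n + 0 = n := todo_spec n
```

Here `uses_classical` is always proven, while `relies_on_axiom` would be reclassified as axiomatic when `--strict` is passed.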
## Definition coverage

A `def` can be "proven" (no `sorry`) while having zero theorems about it — meaning nobody has verified any property of it. Definition coverage is a complementary metric that answers: which definitions are actually referenced by theorem statements?
- Collects all project `def`s (excluding instances, projections, and compiler-generated definitions)
- Scans all theorem type expressions (using `Lean.Expr.foldConsts`) to find which `def`s are referenced
- Reports uncovered definitions — those not mentioned in any theorem's type
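The scan above can be sketched as follows — a simplified illustration, not the tool's source, assuming only `Lean.Expr.foldConsts` and the standard environment API:

```lean
import Lean

open Lean

/-- Simplified sketch: collect the names of `def`s mentioned in any
    theorem's statement in the current environment. -/
def coveredDefs : CoreM NameSet := do
  let env ← getEnv
  let mut covered : NameSet := NameSet.empty
  for (_, info) in env.constants.toList do
    if info matches .thmInfo _ then
      -- foldConsts visits every constant referenced by the theorem's type
      covered := info.type.foldConsts covered fun n acc =>
        if env.find? n matches some (.defnInfo _) then acc.insert n else acc
  return covered
```

Uncovered definitions are then just the project's `def`s minus this set.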
Note: Ephemeral commands like `#check` are not counted — the tool analyzes compiled `.olean` environments, which contain only persistent declarations. Source-level `#check` support may be added in a future version.