dybala-21/rune

RUNE

A local-first AI agent that learns from experience.

Every task makes it smarter. Your data stays on your machine.

Quick Start · How It Works · Features · Architecture

Python 3.13+ · MIT License


─── rune ──────────────────────────────────────────
  Terminal Agent · claude-sonnet · 318 episodes learned

❯ Fix the authentication bug in api/auth.py

  ┃  ◇ file_read api/auth.py  ✓
  ┃  ◆ file_edit api/auth.py  ✓
  ┃  ▸ bash ruff check .  ✓

✓ done — steps 1 — tools 3 — tokens 12k

Why RUNE

Most coding agents make the same mistake twice in two days. They start every session from a blank slate — no memory of what worked, no memory of what failed, no learned conventions.

RUNE remembers. Every task is recorded as an episode, scored +1 or -1. Similar future tasks pull past episodes into context. Repeated failures auto-generate prevention rules. The agent measurably improves at tasks it has done before.

First time: "Fix lint in src/auth.py"
  ↓ tools: file_read → file_edit → bash(ruff)
  ↓ outcome: ruff still failing - missed a stale import after edit
  ↓ utility: -1
  ↓ rule learned: verify_before_complete

Fifth time (similar task in src/users.py)
  ↓ past episodes injected into context
  ↓ tools: file_read → file_edit → file_read (verify) → bash(ruff)
  ↓ outcome: passed first try
  ↓ utility: +1
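
The episode loop above can be sketched in a few lines. This is a minimal illustration, not RUNE's implementation: the `Episode` and `EpisodeStore` names are hypothetical, and real recall uses vector similarity (FAISS, per the architecture section) rather than the word-overlap stand-in used here.

```python
from dataclasses import dataclass


@dataclass
class Episode:
    task: str
    tools: list[str]
    outcome: str
    utility: int  # +1 success, -1 failure


class EpisodeStore:
    def __init__(self):
        self.episodes: list[Episode] = []

    def record(self, episode: Episode) -> None:
        self.episodes.append(episode)

    def recall(self, task: str, k: int = 3) -> list[Episode]:
        # Stand-in similarity: count shared words between task descriptions.
        words = set(task.lower().split())
        ranked = sorted(
            self.episodes,
            key=lambda e: len(words & set(e.task.lower().split())),
            reverse=True,
        )
        return ranked[:k]
```

With the two episodes from the example recorded, a new "Fix lint in src/users.py" task would rank the earlier lint episode first and pull it into context.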

Quick Start

# Install
curl -LsSf https://raw.githubusercontent.com/dybala-21/rune/main/install.sh | sh

# Set any LLM provider key
rune env set OPENAI_API_KEY sk-...

# Run
rune

Works with OpenAI, Anthropic, Gemini, Grok, Mistral, DeepSeek, Cohere, Azure, Ollama, and 130+ providers via LiteLLM. Switch models with a single flag or config change:

rune --model claude-sonnet-4-6 --provider anthropic
rune --model gpt-4o --provider openai
rune --model gemini-2.5-flash --provider vertex_ai
rune                                    # interactive TUI
rune --message "explain the auth flow"  # one-shot
rune web                                # web UI
rune voice                              # voice mode (STT/TTS)

How It Works

It remembers what worked

RUNE records every task as an episode scored +1 (success) or -1 (failure). Next session, similar tasks pull from past experience. Repeated failures auto-generate prevention rules.

Past Experience (auto-injected into context)
  ✅ Fixed lint with ruff check (utility: +1)
  ⚠️ web_fetch on namu.wiki → 403 (utility: -1)

Learned Rules
  verify_before_edit: re-read file before editing to avoid stale content
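
Rendering recalled episodes into a context block like the one above might look like this. A hedged sketch: the function name and dict shape are assumptions for illustration, not RUNE's actual API.

```python
def build_experience_block(episodes: list[dict]) -> str:
    """Render past episodes as a 'Past Experience' context block.

    Hypothetical format: each episode is a dict with 'summary' and
    'utility' keys; successes and failures get different markers.
    """
    lines = ["Past Experience (auto-injected into context)"]
    for ep in episodes:
        mark = "OK" if ep["utility"] > 0 else "WARN"
        lines.append(f"  [{mark}] {ep['summary']} (utility: {ep['utility']:+d})")
    return "\n".join(lines)
```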

It earns your trust

Approve the same action multiple times and RUNE promotes it to auto-execute. Revert once and it demotes back. High-risk commands (sudo, rm -rf) stay manual no matter what.
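
The promote/demote cycle can be sketched as a counter with a high-risk blocklist. The threshold of 3 and the class name are assumptions; RUNE's actual promotion policy may differ.

```python
HIGH_RISK = ("sudo", "rm -rf")  # always require manual approval
PROMOTE_AFTER = 3  # assumed threshold, not RUNE's documented value


class AutonomyTracker:
    def __init__(self):
        self.approvals: dict[str, int] = {}
        self.auto: set[str] = set()

    def approve(self, action: str) -> None:
        if any(pattern in action for pattern in HIGH_RISK):
            return  # high-risk commands never get promoted
        self.approvals[action] = self.approvals.get(action, 0) + 1
        if self.approvals[action] >= PROMOTE_AFTER:
            self.auto.add(action)

    def revert(self, action: str) -> None:
        # a single revert demotes the action back to manual approval
        self.auto.discard(action)
        self.approvals[action] = 0

    def is_auto(self, action: str) -> bool:
        return action in self.auto
```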

It proves its work

An Evidence Gate checks the agent actually read files, wrote changes, and ran tests. A Quality Gate catches hollow answers. If evidence is missing, the task isn't marked complete; the agent keeps working.

It recovers what it forgot

Long sessions hit token limits and old messages get compacted away. RUNE saves originals before deletion and automatically re-injects them when the context becomes relevant again.

Steps 1-14: web_search → web_fetch × 3 (research phase)
Step 15:    Token budget 80% → old messages compacted
Step 16:    file_write → phase transition detected
            → auto-recall: research findings injected back

Three signals trigger automatic recall: phase transition (research → implementation), stall recovery (2+ steps with no progress), and completion gate blocked. No manual memory_search needed — works even with weaker models that miss explicit recall.

It asks before acting

Every file write, every shell command goes through Guardian — 43 risk patterns with workspace sandboxing.
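
A Guardian-style check might combine pattern matching with a path sandbox. The two patterns below are illustrative stand-ins for RUNE's 43; the function shape is an assumption.

```python
import os
import re

# Illustrative patterns only; RUNE ships 43 of these.
RISK_PATTERNS = [
    (re.compile(r"\brm\s+-rf\b"), "recursive delete"),
    (re.compile(r"\bsudo\b"), "privilege escalation"),
]


def guardian_check(command: str, path: str, workspace: str) -> tuple[bool, str]:
    """Return (allowed, reason); writes must stay inside the workspace."""
    for pattern, label in RISK_PATTERNS:
        if pattern.search(command):
            return False, f"high-risk: {label}"
    resolved = os.path.realpath(path)
    root = os.path.realpath(workspace)
    if not resolved.startswith(root + os.sep):
        return False, "outside workspace sandbox"
    return True, "ok"
```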

Your memory is a file

~/.rune/memory/
├── MEMORY.md          # your knowledge — edit freely
├── learned.md         # auto-extracted facts + rules
├── daily/
│   └── 2026-03-22.md  # what happened today
├── compacted/         # auto-saved context before rollover
└── user-profile.md    # preferences

Open in any editor. Delete a line to make it forget.

Features

Tools

Files read, write, edit, delete, list, search
Execution bash (Guardian-validated), service management
Browser Playwright headless — navigate, observe, click, extract, screenshot
Web search, fetch
Code project map, definitions, references, impact analysis (tree-sitter)
Memory multi-source search (facts + episodes + vectors), save
Voice STT/TTS with multi-provider auto-detection
MCP stdio, SSE, HTTP transports — web UI for server management

Multi-Agent

Complex goals are decomposed into subtasks with dependency tracking:

╭──────┬───────────────────────────────────┬────────────────╮
│  ✓   │ Scan for security vulnerabilities │     researcher │
│  ✓   │ Fix XSS in login.py               │       executor │
│  ✓   │ Fix SQLi in query.py              │       executor │
│  ✓   │ Write security report             │       executor │
╰──────┴───────────────────────────────────┴────────────────╯
  ✓ 4/4 · 12.3s
  • 4 roles: Researcher, Planner, Executor, Communicator — each with scoped tool access
  • Independent subtasks run in parallel; dependent ones wait for upstream results
  • Read-only tools run concurrently (up to 5), write tools stay serial
  • Research findings can spawn follow-up tasks at runtime (dynamic DAG expansion)
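
The dependency scheduling described above can be sketched as a small async DAG runner: tasks whose dependencies are all done run concurrently in waves. The task names mirror the table; the function itself is a minimal illustration, not RUNE's scheduler.

```python
import asyncio


async def run_dag(tasks: dict[str, list[str]], run) -> list[str]:
    """Run subtasks respecting dependencies.

    `tasks` maps a subtask name to its dependency names; `run(name)` is an
    async coroutine executing one subtask. Independent subtasks in the same
    wave run via asyncio.gather; dependents wait for upstream results.
    """
    done: set[str] = set()
    order: list[str] = []
    pending = dict(tasks)
    while pending:
        ready = [t for t, deps in pending.items() if all(d in done for d in deps)]
        if not ready:
            raise ValueError("dependency cycle")
        await asyncio.gather(*(run(t) for t in ready))
        for t in ready:
            done.add(t)
            order.append(t)
            del pending[t]
    return order
```

For the table above, the scan runs first, both fixes run in parallel once it finishes, and the report waits for both fixes.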

Browser Extension

RUNE can control your real Chrome browser via the RUNE Browser Bridge extension. This is separate from Playwright headless — it lets RUNE interact with your actual browser session (logged-in sites, cookies, etc).

# 1. Extract extension (auto-runs during install)
rune browser setup

# 2. Load in Chrome
#    Open chrome://extensions → Enable Developer mode → Load unpacked
#    Select: ~/.rune/extension/rune-browser-bridge/

# 3. Done — RUNE auto-connects when it needs the browser

The extension auto-discovers RUNE's relay server on localhost:19222-19231. No manual connection needed — when RUNE requests a browser action, the extension connects automatically.

To check connection status: rune browser status

Multi-Channel

Same agent, same memory, anywhere:

Channel          Setup
Terminal (TUI)   rune
Web UI           rune web
Telegram         rune env set RUNE_TELEGRAM_TOKEN <token>
Discord          rune env set RUNE_DISCORD_TOKEN <token>
Slack            rune env set RUNE_SLACK_BOT_TOKEN <token>

Self-Improving

Episode memory        Every task scored +1/-1, recalled for similar future tasks
Autonomy promotion    Repeatedly approved actions auto-execute; reverts demote back
Context rehydration   Compacted context auto-recovered on phase transition or stall
Behavior prediction   N-gram tool sequence prediction
Time-slot patterns    Learns your activity by time of day for proactive suggestions
Rule learning         Repeated failures generate prevention rules via LLM
Proactive engine      Watches patterns, suggests actions, learns from dismissals

/learned in the TUI shows everything RUNE has learned.

Architecture

                        ┌─────────────────────┐
                        │   LLM Providers     │
                        │  OpenAI · Anthropic │
                        │  Gemini · Ollama    │
                        │  130+ via LiteLLM   │
                        └─────────┬───────────┘
                                  │
╔═════════════════════════════════╪════════════════════════════════╗
║  ┌──────────────────────────────┴─────────────────────────────┐  ║
║  │ INTERFACE                                                  │  ║
║  │  TUI · Web · Voice · Telegram · Discord · Slack            │  ║
║  └────────────────────────────┬───────────────────────────────┘  ║
║                               ▼                                  ║
║  ┌────────────────────────────────────────────────────────────┐  ║
║  │ AGENT CORE                                                 │  ║
║  │  Agent Loop ── Tools ── Skills ── MCP ── Multi-Agent       │  ║
║  │       │                                                    │  ║
║  │  Guardian ── Evidence Gate ── Quality Gate ── Autonomy     │  ║
║  └────────────────────────┬───────────────────────────────────┘  ║
║                           ▼                                      ║
║  ┌────────────────────────────────────────────────────────────┐  ║
║  │ MEMORY & LEARNING                                          │  ║
║  │  Episodes (utility scoring)  ·  Rule Learner               │  ║
║  │  Behavior Predictor          ·  Proactive Engine           │  ║
║  │  FAISS vectors + markdown    ·  Code Graph (tree-sitter)   │  ║
║  └────────────────────────────────────────────────────────────┘  ║
╚══════════════════════════════════════════════════════════════════╝

LLM Configuration

# ~/.rune/config.yaml — any one key is enough

openai_api_key: "sk-..."
anthropic_api_key: "sk-ant-..."
gemini_api_key: "AIza..."                          # Google AI Studio

# Vertex AI (service account):
google_credentials_file: "~/.rune/google-creds.json"
# project_id auto-detected from credentials file

CLI

rune                              # interactive TUI
rune --message "..."              # single prompt
rune --model <model>              # specify model
rune web                          # web UI + MCP management
rune voice                        # voice mode

rune memory show                  # view memory
rune memory search <query>        # search
rune memory edit                  # open in $EDITOR
rune memory stats                 # usage stats

rune env set KEY value            # store API keys
rune self update                  # update from GitHub
rune self status                  # version info

Development

git clone https://github.com/dybala-21/rune.git && cd rune
uv sync --extra dev
uv run rune                       # run from source
uv run pytest                     # tests
uv run ruff check .               # lint

License

MIT — See LICENSE.
