Orb is a multi-agent coding runtime with a daemon, terminal UI, browser dashboard, persisted run traces, topology selection, and per-node model allocation.
It is built around a simple idea: treat coordination as a runtime problem, not just a prompt problem. Orb chooses a topology, classifies the task, assigns models per node, runs the graph, and records enough telemetry to inspect what happened afterward.
- Runs coding tasks through explicit topologies such as `triad`, `dual-review`, and `hierarchy`
- Classifies tasks before execution and records the chosen topology, routing reason, and classifier model
- Assigns models per node instead of forcing one model across the whole run
- Exposes a live TUI and dashboard backed by the same daemon
- Streams incremental dashboard activity as nodes work, including node-local activity cards, message flow, and payload/context details for `send_message` events
- Persists session-aware traces for replay, inspection, and future routing work
- Supports local and cloud providers, including `vmlx`, `openai-codex`, `ollama`, and `anthropic`
- Stores GraphRAG memory in Chroma-backed topology/cluster stores
Out of the box, Orb currently defaults to:
- `vmlx`: enabled
- `openai-codex`: enabled
- `ollama`: disabled
- `anthropic`: disabled
This default mix gives Orb one local provider path and one cloud provider path without requiring all providers to be configured.
Provider settings live in `~/.orb/config.json`.
Orb now expects provider and model selection to come from config and provider catalog data. Runtime paths should not hardcode model IDs or inline fallback defaults.
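For reference, a minimal config might look roughly like this. The field names below are illustrative guesses, not Orb's actual schema; inspect your own `~/.orb/config.json` for the real shape. The enabled/disabled mix matches the defaults above, and the `base_url` matches the documented `vmlx` default endpoint.

```json
{
  "providers": {
    "vmlx": { "enabled": true, "base_url": "http://localhost:1234/v1" },
    "openai-codex": { "enabled": true },
    "ollama": { "enabled": false },
    "anthropic": { "enabled": false }
  },
  "default_models": {
    "vmlx": "your-local-model-id"
  }
}
```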
Prerequisites:

- Python 3.11+
- `git`
- one or more reachable model providers
- optional: Conda if you want an isolated env the same way the repo examples use
Clone Orb:

```bash
git clone <your-orb-repo-url>
cd orb
```

Create an environment and install the package:

```bash
conda create -n orb python=3.12 -y
conda activate orb
pip install -e .
orb onboard
```

For local development, install the test extras too:

```bash
pip install -e ".[dev]"
```

`orb onboard` helps with initial auth and common setup.
Depending on the providers you want to use:
- `vmlx` expects a local OpenAI-compatible endpoint, defaulting to `http://localhost:1234/v1`
- `openai-codex` uses your OpenAI/Codex credentials
- `anthropic` uses your Anthropic credentials
- `ollama` expects a reachable Ollama server
You can also configure auth directly:
```bash
orb auth openai
orb auth anthropic
```

Typical first-run setups:
Use the current defaults, with local `vmlx` plus cloud `openai-codex`:

```bash
orb onboard
orb daemon start
orb tui
```

Use only local inference:

```bash
orb daemon start
orb tui --topology auto
```

Use only cloud inference:

```bash
orb auth openai
orb daemon start
orb tui --connect http://127.0.0.1:8080
```

Start the daemon:

```bash
orb daemon start
```

Attach the TUI:

```bash
orb tui
```

Open the dashboard:

```bash
orb dashboard
```

Stop the daemon:

```bash
orb daemon stop
```

By default, the daemon listens on `http://0.0.0.0:8080`.
Recommended startup:
```bash
orb daemon start --host 0.0.0.0 --port 8080
```

If you want a different port:

```bash
orb daemon start --host 0.0.0.0 --port 5000
orb tui --port 5000
orb dashboard --connect http://127.0.0.1:5000
```

You can also start work immediately from the client:

```bash
orb tui --port 5000 "fix the failing tests"
orb dashboard --connect http://127.0.0.1:5000 "review the current diff"
```

Run `orb --help` for full usage. Current top-level commands:

- `orb auth`
- `orb logs`
- `orb config`
- `orb models`
- `orb onboard`
- `orb trace`
- `orb topologies`
- `orb tui`
- `orb dashboard`
- `orb daemon`
Useful global flags:
- `--model MODEL`: pin a model
- `--local-only`: restrict to local providers
- `--cloud-only`: restrict to cloud providers
- `--budget N`: set a global message budget
- `--timeout N`: set timeout in seconds
- `--connect URL`: attach the TUI or dashboard to an existing daemon
Orb ships with three bundled topologies:
- `triad`: coordinator, coder, reviewer, tester
- `dual-review`: stronger correctness/review shape
- `hierarchy`: broader planning and execution shape
You can request one explicitly:
```bash
orb tui --topology triad
orb tui --topology dual-review
orb tui --topology hierarchy
```

Or let Orb choose automatically:

```bash
orb tui --topology auto
```

Topologies are defined as explicit runtime graphs, not hardcoded branching scattered through the codebase.
Before execution, Orb performs a topology-classification step.
That classification currently:
- chooses a task type
- selects a topology
- records a routing reason
- returns candidate topologies
- records which model performed the classification
The classifier is behind a dedicated runtime interface, so the current provider-backed lightweight classifier can later be replaced by a trained in-house routing model without changing the rest of the runtime orchestration.
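As a sketch of that boundary, any object satisfying a small protocol can serve as the classifier. The names and fields below are assumptions for illustration, not Orb's actual API; they mirror the classification outputs listed above.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Classification:
    task_type: str          # e.g. "coding", "review"
    topology: str           # the selected topology, e.g. "triad"
    reason: str             # recorded routing reason
    candidates: list[str]   # other topologies that were considered
    classifier_model: str   # which model produced this classification


class TopologyClassifier(Protocol):
    def classify(self, task: str) -> Classification: ...


class KeywordClassifier:
    """Toy stand-in: a provider-backed classifier or a trained
    routing model would satisfy the same protocol."""

    def classify(self, task: str) -> Classification:
        topology = "dual-review" if "review" in task.lower() else "triad"
        return Classification(
            task_type="coding",
            topology=topology,
            reason=f"keyword heuristic chose {topology}",
            candidates=["triad", "dual-review", "hierarchy"],
            classifier_model="keyword-stub",
        )
```

Because the orchestration code depends only on the protocol, swapping the provider-backed classifier for a trained router is a change at construction time, not in the graph runtime.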
In the UI you can now see:
- the classifier model used for routing
- the chosen topology
- the planned model for each agent card/node
- routing metadata in trace detail views
After topology selection, Orb assigns models per node rather than using one model for the whole graph.
This allocation considers:
- provider availability
- enabled/disabled models from config
- task and node complexity
- node role/category
- explicit model pins
The dashboard surfaces those planned assignments before the run and the active model IDs as the run progresses.
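In rough pseudocode terms, the allocation pass reduces to a per-node lookup in which explicit pins take precedence. The function and argument names below are illustrative, not Orb's internals.

```python
def allocate_models(nodes, catalog, pins, complexity):
    """Pick a model per node from enabled catalog entries.

    nodes: {node_name: role}, e.g. {"coder": "implementation"}
    catalog: {role: [model_ids]} already restricted to enabled
        providers and enabled models from config
    pins: {node_name: model_id} explicit user overrides
    complexity: "low" | "high" -- high complexity takes the first
        (strongest) catalog entry, low takes the last (cheapest)
    """
    plan = {}
    for node, role in nodes.items():
        if node in pins:
            # Explicit pin always wins over automatic selection.
            plan[node] = pins[node]
            continue
        options = catalog.get(role) or []
        if not options:
            # Fail loudly rather than falling back to a hardcoded model.
            raise LookupError(f"no enabled model for role {role!r}")
        plan[node] = options[0] if complexity == "high" else options[-1]
    return plan
```

The resulting plan is what the dashboard can render as "planned" assignments before the first node starts working.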
The dashboard is event-driven and is intended to show the run as it happens.
It now:
- renders planning state as soon as topology and per-node model allocation are known
- keeps activity feed cards ordered by event elapsed time
- shows which node each activity card came from
- preserves live websocket updates instead of replacing them with a later bulk snapshot
- reattaches more reliably after refresh to the current run/session
- shows structured activity details in both the main feed and node detail panel
For `send_message` activity cards, the dashboard can show:
- destination node
- payload content
- context slice
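As a purely illustrative example, such a card might be backed by an event of roughly this shape. The field names are hypothetical, not Orb's actual wire format:

```json
{
  "event": "send_message",
  "node": "coordinator",
  "to": "coder",
  "payload": "Implement the fix described in the plan.",
  "context": ["relevant plan excerpt", "failing test output"]
}
```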
The daemon also writes more descriptive run and dashboard event logs to `~/.orb/run.log`.
Orb supports four provider families:
- `vmlx`
- `openai-codex`
- `ollama`
- `anthropic`
Model catalogs can be inspected and refreshed through:
```bash
orb models
```

Provider selection and model defaults are controlled in `~/.orb/config.json`.
The runtime resolves provider/model choices from:
- configured `default_models`
- enabled catalog entries refreshed for each provider
- enabled configured models
If no valid configured model exists for a selected provider/tier, Orb should fail explicitly instead of silently choosing a hardcoded fallback model.
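A hedged sketch of that contract (function and key names are illustrative, not Orb's actual resolution code):

```python
def resolve_model(config: dict, provider: str, tier: str) -> str:
    """Return the configured model for provider/tier, or fail loudly.

    There is deliberately no hardcoded fallback: a missing entry is a
    configuration error, not something to paper over silently.
    """
    model = config.get("default_models", {}).get(provider, {}).get(tier)
    if not model:
        raise ValueError(
            f"no configured model for provider={provider!r} tier={tier!r}; "
            "refusing to fall back to a hardcoded default"
        )
    return model
```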
Examples:
```bash
orb --cloud-only "plan a refactor"
orb --local-only "summarize this module"
orb --model gpt-5.4-mini "build a CLI with tests"
```

The dashboard is not just a run viewer. It also exposes persisted run traces and session-aware history.
Orb records:
- topology choice
- task type
- routing mode and reason
- classifier model
- per-agent models
- stage timing
- token usage
- retries
- verifier and override events
- final outcome
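Put together, a persisted trace might look roughly like this. The keys below are illustrative only, shown to convey which of the recorded fields land in one document; the real schema lives in the trace files themselves:

```json
{
  "run_id": "run-0001",
  "topology": "triad",
  "task_type": "coding",
  "routing": { "mode": "auto", "reason": "multi-file change", "classifier_model": "classifier-model-id" },
  "agent_models": { "coordinator": "model-a", "coder": "model-b", "reviewer": "model-c", "tester": "model-d" },
  "token_usage": { "input": 12400, "output": 3100 },
  "retries": 0,
  "outcome": "success"
}
```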
Useful trace commands:
```bash
orb trace latest
orb trace latest --json
orb trace list --current-session
orb trace tail --current-session
orb trace show <run_id>
```

Trace files are stored under `.orb/traces/`.
You can create or edit user-defined topology YAML:
```bash
orb topologies init
```

That copies a sample file to `~/.orb/topologies.yaml`.
Orb hot-reloads topology definitions in the dashboard/runtime flow.
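A user-defined entry might look roughly like the following. The node/edge field names here are assumptions for illustration; the sample file copied by `orb topologies init` shows the real schema:

```yaml
topologies:
  solo-review:
    description: "one coder with a single reviewer pass"
    nodes:
      - name: coder
        role: implementation
      - name: reviewer
        role: review
    edges:
      - [coder, reviewer]
```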
Orb persists structured memory into Chroma-backed stores organized by topology and cluster.
Example shape:

```yaml
persist_base: "~/.orb/chroma"
clusters:
  implementation:
    agents: [coordinator, coder]
  review:
    agents: [reviewer, tester]
```

Recent work also optimized ephemeral Chroma stores so short-lived runs use a lighter embedding path, reducing write/query latency in tests and local iteration.
To inspect local Chroma data:
```bash
chroma run --path ~/.orb/chroma --port 8001
npx chromadb-admin
```

Repository layout:

```
orb/
├── agent/        # agent runtime, tools, compaction, prompting
├── cli/          # CLI entrypoints, daemon management, auth, config, TUI
├── llm/          # provider integrations and model typing
├── memory/       # GraphRAG and Chroma-backed memory backends
├── messaging/    # message bus, channels, message types
├── orchestrator/ # orchestrator runtime
├── runtime/      # graph runtime, classifier, session and trace plumbing
├── topologies/   # bundled topology definitions, schema, loader, factory
├── tracing/      # run trace schema and persistence helpers
web/
├── bridge.py     # runtime events -> dashboard state
├── server.py     # API, websocket, dashboard, trace admin endpoints
└── static/       # browser UI
```
Run tests:
```bash
pytest -q
```

Useful targeted suites:

```bash
pytest -q tests/test_cli_main.py
pytest -q tests/test_server_api.py
pytest -q tests/test_run_trace.py
pytest -q tests/test_server_events.py
```

Orb currently has:
- a daemon-backed TUI and dashboard
- persisted session-aware traces
- explicit topologies
- provider-backed topology classification
- per-node model allocation
- configurable provider catalogs/defaults
- dashboard visibility into routing and model choices
The next major layer is execution control: budget enforcement, timeout/fan-out policy, and controller-driven early stop/escalation.
GNU GPL v3.0. Copyright (C) 2026 Souranil Sen.