Chain. Build. Run. All offline. LAO is how developers bend AI to their will—no cloud, no compromise.
LAO is a cross-platform desktop tool for chaining local AI models and plugins into powerful, agentic workflows. It supports prompt-driven orchestration, visual DAG editing, and full offline execution.
- Modular plugin system (Rust, local-first)
- Offline DAG engine (retries, caching, lifecycle hooks)
- Prompt-driven agentic workflows (LLM-powered, system prompt file)
- Visual workflow builder (UI, YAML export, node/edge display)
- CLI (run, validate, prompt, validate-prompts, plugin list)
- Prompt library (Markdown + JSON, for validation/fine-tuning)
- Test harness for prompt validation
- End-to-end “Run” from UI (execute and show logs/results)
- UI streaming run with real-time step events and parallel execution option
- Node/edge editing in UI (drag, connect, edit)
- Plugin explainability (`lao explain plugin <name>`)
- Conditional/branching steps
- Plugin marketplace/discovery
- Live workflow status/logs in UI
- Multi-modal input (files, voice, etc.)
- Installer/distribution polish
```sh
# Set up the UI
cd ui/lao-ui
npm install
npm run tauri dev
```
```sh
# Run the CLI
cargo run --bin lao-cli run workflows/test.yaml
cargo run --bin lao-cli prompt "Summarize this audio and tag action items"
cargo run --bin lao-cli validate-prompts
```
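The workflow file passed to `lao-cli run` is YAML, but its schema is not reproduced in this README. The sketch below is purely illustrative: the field names, step ids, and plugin names are assumptions, not LAO's actual format.

```yaml
# Hypothetical workflow sketch -- field names and plugin names
# are assumptions, not LAO's documented schema.
name: summarize-audio
steps:
  - id: transcribe
    plugin: whisper
    input: meeting.mp3
  - id: summarize
    plugin: llm
    depends_on: [transcribe]   # DAG edge: run after `transcribe`
    prompt: "Summarize this transcript and tag action items"
```

See `docs/workflows.md` for the real schema.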
LAO can generate and execute workflows from natural-language prompts using a local LLM (Ollama). The system prompt is editable at `core/prompt_dispatcher/prompt/system_prompt.txt`.
Example:

```sh
lao prompt "Refactor this Python file and add comments"
```
- Prompts and expected workflows: `core/prompt_dispatcher/prompt/prompt_library.md` and `.json`
- Validate with: `cargo run --bin lao-cli validate-prompts`
- Add new prompts to improve LLM output and test new plugins
- Add new plugins by implementing the `LaoPlugin` trait, building as a `cdylib`, and placing the resulting library in the `plugins/` directory
- Expose a C ABI function named `plugin_entry_point` that returns a `Box<dyn LaoPlugin>`
- Add prompt/workflow pairs to the prompt library for validation and LLM tuning
- See `docs/plugins.md` and `docs/workflows.md` for details
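The plugin steps above can be sketched in Rust. The real `LaoPlugin` trait lives in the LAO core crates and its method set is not shown in this README, so the trait definition below is a hypothetical stand-in; only the `plugin_entry_point` name, the `Box<dyn LaoPlugin>` return type, and the `cdylib` packaging follow the text.

```rust
// Hypothetical stand-in for LAO's plugin trait; the real trait is
// defined by the LAO core crates, and this method set is an assumption.
pub trait LaoPlugin {
    /// Name used for discovery (e.g. by `lao plugin list`).
    fn name(&self) -> &str;
    /// Run one workflow step: take the step's input, return its output.
    fn run(&self, input: &str) -> String;
}

/// Example plugin that simply echoes its input back.
pub struct EchoPlugin;

impl LaoPlugin for EchoPlugin {
    fn name(&self) -> &str {
        "echo"
    }
    fn run(&self, input: &str) -> String {
        input.to_string()
    }
}

/// C ABI entry point that the host loads from the compiled `cdylib`.
/// Per the steps above, it must be named `plugin_entry_point` and
/// return a `Box<dyn LaoPlugin>`.
#[no_mangle]
pub extern "C" fn plugin_entry_point() -> Box<dyn LaoPlugin> {
    Box::new(EchoPlugin)
}
```

Build with `crate-type = ["cdylib"]` in your `Cargo.toml` and drop the resulting library into `plugins/`.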
- Architecture: `docs/architecture.md`
- Plugins: `docs/plugins.md`
- Workflows: `docs/workflows.md`
- CLI: `docs/cli.md`
- Observability: `docs/observability.md`
Cloud is optional. Intelligence is modular. Agents are composable.
LAO is how devs build AI workflows with total control.
No tokens. No latency. No lock-in.
Let’s define the category—one plugin at a time.