
⚡️ LAO: Local AI Workflow Orchestrator

License: MIT · Made with Rust · Powered by Tauri · Runs Offline

Chain. Build. Run. All offline. LAO is how developers bend AI to their will—no cloud, no compromise.


🧠 What is LAO?

LAO is a cross-platform desktop tool for chaining local AI models and plugins into powerful, agentic workflows. It supports prompt-driven orchestration, visual DAG editing, and full offline execution.


✨ Features

  • Modular plugin system (Rust, local-first)
  • Offline DAG engine (retries, caching, lifecycle hooks)
  • Prompt-driven agentic workflows (LLM-powered, system prompt file)
  • Visual workflow builder (UI, YAML export, node/edge display)
  • CLI (run, validate, prompt, validate-prompts, plugin list)
  • Prompt library (Markdown + JSON, for validation/fine-tuning)
  • Test harness for prompt validation
  • End-to-end “Run” from UI (execute and show logs/results)
  • UI streaming run with real-time step events and parallel execution option
  • Node/edge editing in UI (drag, connect, edit)
  • Plugin explainability (lao explain plugin <name>)
  • Conditional/branching steps
  • Plugin marketplace/discovery
  • Live workflow status/logs in UI
  • Multi-modal input (files, voice, etc.)
  • Installer/distribution polish

🚀 Quickstart

```sh
# Set up the UI
cd ui/lao-ui
npm install
npm run tauri dev
```

```sh
# Run the CLI (from the repository root)
cargo run --bin lao-cli run workflows/test.yaml
cargo run --bin lao-cli prompt "Summarize this audio and tag action items"
cargo run --bin lao-cli validate-prompts
```

🧩 Prompt-Driven Workflows

LAO can generate and execute workflows from natural language prompts using a local LLM (Ollama). The system prompt is editable at core/prompt_dispatcher/prompt/system_prompt.txt.

Example:

```sh
lao prompt "Refactor this Python file and add comments"
```
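
For illustration, a prompt like this might expand into a small YAML workflow along these lines (the plugin and field names here are illustrative assumptions, not the project's exact schema; see docs/workflows.md for the real format):

```yaml
# Hypothetical workflow generated from the prompt above.
workflow: refactor-python
steps:
  - id: refactor
    plugin: code-refactor
    input: main.py
  - id: comment
    plugin: comment-generator
    depends_on: [refactor]   # runs after the refactor step completes
```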

📚 Prompt Library & Validation

  • Prompts and expected workflows: core/prompt_dispatcher/prompt/prompt_library.md and .json
  • Validate with: cargo run --bin lao-cli validate-prompts
  • Add new prompts to improve LLM output and test new plugins
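
As a sketch, a prompt/workflow pair in the JSON library might look like the following (the field names are illustrative assumptions; check core/prompt_dispatcher/prompt/prompt_library.json for the actual structure):

```json
{
  "prompt": "Summarize this audio and tag action items",
  "expected_workflow": {
    "steps": [
      { "plugin": "transcribe" },
      { "plugin": "summarize" },
      { "plugin": "extract-action-items" }
    ]
  }
}
```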

🛠️ Contributing Plugins & Prompts

  • Add new plugins by implementing the LaoPlugin trait, building as a cdylib, and placing the resulting library in the plugins/ directory
  • Expose a C ABI function named plugin_entry_point that returns a Box<dyn LaoPlugin>
  • Add prompt/workflow pairs to the prompt library for validation and LLM tuning
  • See docs/plugins.md and docs/workflows.md for details
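
The steps above can be sketched in Rust. The real `LaoPlugin` trait lives in the LAO core crates; the `name`/`run` signatures below are illustrative assumptions, as is the `EchoPlugin` example, but the exported `plugin_entry_point` symbol matches the convention described above:

```rust
// Minimal sketch of a LAO plugin. Assumes a trait shape like this one;
// the actual trait is defined in the LAO core crates.
pub trait LaoPlugin {
    fn name(&self) -> &str;
    fn run(&self, input: &str) -> Result<String, String>;
}

// A trivial example plugin (hypothetical) that echoes its input.
pub struct EchoPlugin;

impl LaoPlugin for EchoPlugin {
    fn name(&self) -> &str {
        "echo"
    }

    fn run(&self, input: &str) -> Result<String, String> {
        Ok(format!("echo: {input}"))
    }
}

// Entry point the host loads from the cdylib. A boxed trait object is not
// strictly FFI-safe, so we silence the lint here; the real project may use
// a safer handoff.
#[allow(improper_ctypes_definitions)]
#[no_mangle]
pub extern "C" fn plugin_entry_point() -> Box<dyn LaoPlugin> {
    Box::new(EchoPlugin)
}
```

To build as a loadable library, the plugin crate's Cargo.toml would need `crate-type = ["cdylib"]` under `[lib]`, and the resulting artifact goes in plugins/.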

📄 Documentation

  • Architecture: docs/architecture.md
  • Plugins: docs/plugins.md
  • Workflows: docs/workflows.md
  • CLI: docs/cli.md
  • Observability: docs/observability.md

🌌 Manifesto

Cloud is optional. Intelligence is modular. Agents are composable.
LAO is how devs build AI workflows with total control.
No tokens. No latency. No lock-in.

Let’s define the category—one plugin at a time.
