OrKa lets you define AI workflows in YAML files instead of writing complex Python code. You describe what you want - like "search memory, then ask an AI, then save the result" - and OrKa handles the execution.
Think of it as a streamlined, open-source alternative to CrewAI or LangChain, but with a focus on:
- YAML configuration instead of code
- Built-in memory that remembers and forgets intelligently
- Local LLM support for privacy
- Simple setup with Docker
Instead of writing Python code like this:

```python
# Complex Python orchestration code
memory_results = search_memory(query)
if not memory_results:
    web_results = search_web(query)
    answer = llm.generate(web_results + query)
else:
    answer = llm.generate(memory_results + query)
save_to_memory(query, answer)
```

you write a YAML file like this:
```yaml
orchestrator:
  id: simple-qa
  agents: [memory_search, web_search, answer, memory_store]

agents:
  - id: memory_search
    type: memory
    operation: read
    prompt: "Find: {{ input }}"
  - id: web_search
    type: search
    prompt: "Search: {{ input }}"
  - id: answer
    type: local_llm
    model: llama3.2
    prompt: "Answer based on: {{ previous_outputs }}"
  - id: memory_store
    type: memory
    operation: write
    prompt: "Store: {{ input }} -> {{ previous_outputs.answer }}"
```

```bash
# Install OrKa
pip install orka-reasoning

# Start Redis (for memory)
orka-start

# Open the memory TUI
orka memory watch

# Run a workflow
orka run my-workflow.yml "What is machine learning?"
```

OrKa provides several agent types you can use in your workflows:
- `memory` - Read from or write to persistent memory
- `local_llm` - Use local models (Ollama, LM Studio)
- `openai-*` - Use OpenAI models
- `search` - Web search
- `router` - Conditional branching
- `fork`/`join` - Parallel processing
- `loop` - Iterative workflows
- `plan_validator` - Validate and critique proposed execution paths
- `graph_scout` - [BETA] Find the best path for workflow execution
OrKa includes a memory system that:
- Stores conversations and facts
- Searches semantically (finds related content, not just exact matches)
- Automatically forgets old, unimportant information
- Uses Redis for fast retrieval
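For example, a memory-writing agent can carry decay settings so routine entries expire while important ones persist. A minimal sketch; the `decay` keys below are illustrative, not the exact schema, so check the Memory System docs for the real key names:

```yaml
- id: memory_store
  type: memory
  operation: write
  prompt: "Store: {{ input }}"
  # Illustrative decay settings; exact key names may differ
  decay:
    enabled: true
    short_term_hours: 2    # expire routine entries quickly
    long_term_hours: 168   # keep important entries around for a week
```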
When you run `orka run workflow.yml "input"`, OrKa:
- Reads your YAML configuration
- Creates the agents you defined
- Runs them in the order you specified
- Passes outputs between agents
- Returns the final result
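Conceptually, those steps reduce to a small loop. This is a toy Python sketch, not OrKa's actual implementation; the agent names and callables here are made up for illustration:

```python
# Minimal sketch of a sequential orchestrator: each agent receives the
# original input plus the outputs of every agent that ran before it.
def run_workflow(agents, input_text):
    previous_outputs = {}
    for agent_id, agent_fn in agents:
        previous_outputs[agent_id] = agent_fn(input_text, previous_outputs)
    # the last agent's output is the workflow's final result
    return previous_outputs[agents[-1][0]]

# Two toy "agents": one transforms the input, the next builds on its output
agents = [
    ("shout", lambda text, prev: text.upper()),
    ("answer", lambda text, prev: prev["shout"] + "!"),
]
print(run_workflow(agents, "hello"))  # HELLO!
```

Real OrKa adds routing, parallel fork/join, and memory on top of this, but the output-passing contract (`previous_outputs` keyed by agent id) is the core idea.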
OrKa works with local models through:
- Ollama - `ollama pull llama3.2`, then use `provider: ollama`
- LM Studio - Point to your local API endpoint
- Any OpenAI-compatible API
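Putting that together, a local-LLM agent pointed at Ollama might look like the sketch below, based on the `provider: ollama` hint above (verify field names against the Agent Types reference):

```yaml
- id: local_answer
  type: local_llm
  provider: ollama          # pairs with `ollama pull llama3.2`
  model: llama3.2
  prompt: "Answer briefly: {{ input }}"
```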
NEW: Comprehensive Documentation for Every Agent, Node & Tool
Detailed documentation for all agent types, control flow nodes, and tools:
- 7 LLM Agents - OpenAI, Local LLM, Binary, Classification, Validation, PlanValidator
- 2 Memory Agents - Reader & Writer with 100x faster HNSW indexing
- 6 Control Flow Nodes - Router, Fork/Join, Loop, Failover, GraphScout
- 2 Search Tools - DuckDuckGo, RAG
Each with working examples, parameters, best practices, and troubleshooting!
```yaml
# Check memory first, search the web if nothing is found
agents:
  - id: check_memory
    type: memory
    operation: read
  - id: binary_agent
    type: local_llm
    prompt: |
      Given this memory {{ get_agent_response('check_memory') }} and this input {{ input }},
      is a web search required?
      Answer only with 'true' or 'false'.
  - id: route_decision
    type: router
    decision_key: 'binary_agent'
    routing_map:
      "true": [web_search, answer_from_web]
      "false": [answer_from_memory]
```

```yaml
# Analyze sentiment and toxicity simultaneously
agents:
  - id: parallel_analysis
    type: fork
    targets:
      - [sentiment_analyzer]
      - [toxicity_checker]
  - id: combine_results
    group: parallel_analysis
    type: join
```

```yaml
# Keep improving until the quality threshold is met
agents:
  - id: improvement_loop
    type: loop
    max_loops: 5
    score_threshold: 0.85
    internal_workflow:
      agents: [analyzer, scorer]
```

| Feature | OrKa | LangChain | CrewAI |
|---|---|---|---|
| Configuration | YAML files | Python code | Python code |
| Memory | Built-in with decay | External/manual | External/manual |
| Local LLMs | First-class support | Via adapters | Limited |
| Parallel execution | Native fork/join | Manual threading | Agent-based |
| Learning | Automatic memory management | Manual | Manual |
```bash
# Copy example
cp examples/simple_memory_preset_demo.yml my-qa.yml

# Run it
orka run my-qa.yml "What is artificial intelligence?"
```

```bash
# Copy example
cp examples/person_routing_with_search.yml web-qa.yml

# Run it
orka run web-qa.yml "Latest news about quantum computing"
```

```bash
# Start Ollama
ollama pull llama3.2

# Copy example
cp examples/multi_model_local_llm_evaluation.yml local-chat.yml

# Run it
orka run local-chat.yml "Explain machine learning simply"
```

Agent & Node Reference Index
Complete 1-to-1 documentation for every agent, node, and tool with examples, parameters, and best practices.
- Getting Started Guide - Detailed setup and first workflows
- Agent Types - All available agent types and configurations
- Memory System - How memory works and configuration
- YAML Configuration - Complete YAML reference
- Examples - 15+ ready-to-use workflow templates
- GitHub Issues - Bug reports and feature requests
- Documentation - Full documentation
- Examples - Working examples you can copy and modify
We welcome contributions! See CONTRIBUTING.md for guidelines.
Apache 2.0 License - see LICENSE for details.