OrKa - AI Agent Orchestration

Orchestrator Kit for Agentic Reasoning - OrKa is a modular AI orchestration system that transforms Large Language Models (LLMs) into composable agents capable of reasoning, fact-checking, and constructing answers with transparent traceability.

What OrKa Does

OrKa lets you define AI workflows in YAML files instead of writing complex Python code. You describe what you want - like "search memory, then ask an AI, then save the result" - and OrKa handles the execution.

Think of it as a streamlined, open-source alternative to CrewAI or LangChain, but with a focus on:

  • YAML configuration instead of code
  • Built-in memory that remembers and forgets intelligently
  • Local LLM support for privacy
  • Simple setup with Docker

Basic Example

Instead of writing Python code like this:

# Hand-rolled orchestration logic (illustrative pseudocode)
memory_results = search_memory(query)
if not memory_results:
    web_results = search_web(query)
    answer = llm.generate(web_results + query)
else:
    answer = llm.generate(memory_results + query)
save_to_memory(query, answer)

You write a YAML file like this:

orchestrator:
  id: simple-qa
  agents: [memory_search, web_search, answer, memory_store]

agents:
  - id: memory_search
    type: memory
    operation: read
    prompt: "Find: {{ input }}"
    
  - id: web_search  
    type: search
    prompt: "Search: {{ input }}"
    
  - id: answer
    type: local_llm
    model: llama3.2
    prompt: "Answer based on: {{ previous_outputs }}"
    
  - id: memory_store
    type: memory
    operation: write
    prompt: "Store: {{ input }} -> {{ previous_outputs.answer }}"

Installation

# Install OrKa
pip install orka-reasoning

# Start Redis (for memory)
orka-start

# Watch memory activity in the terminal UI
orka memory watch

# Run a workflow
orka run my-workflow.yml "What is machine learning?"

How It Works

1. Agent Types

OrKa provides several agent types you can use in your workflows (a short declaration sketch follows the list):

  • memory - Read from or write to persistent memory
  • local_llm - Use local models (Ollama, LM Studio)
  • openai-* - Use OpenAI models
  • search - Web search
  • router - Conditional branching
  • fork/join - Parallel processing
  • loop - Iterative workflows
  • plan_validator - Validate and critique proposed execution paths
  • graph_scout - [BETA] Find best path for workflow execution
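
To give a feel for the declaration syntax, here is a minimal sketch that chains a search tool into an OpenAI agent. The name openai-answer is a hypothetical stand-in for one of the openai-* agents; check the agent reference for the exact identifiers.

agents:
  - id: web_search
    type: search                    # DuckDuckGo-backed web search
    prompt: "Search: {{ input }}"

  - id: answer
    type: openai-answer             # hypothetical openai-* agent name
    prompt: "Answer using: {{ previous_outputs.web_search }}"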

2. Memory System

OrKa includes a memory system (configuration sketched below) that:

  • Stores conversations and facts
  • Searches semantically (finds related content, not just exact matches)
  • Automatically forgets old, unimportant information
  • Uses Redis for fast retrieval
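
As a configuration sketch, the memory writer below adds decay hints to what it stores. The decay keys (enabled, short_term_hours, long_term_hours) are assumptions for illustration, not the canonical schema; see the memory agent reference for the real parameters.

agents:
  - id: memory_store
    type: memory
    operation: write
    prompt: "Store: {{ input }} -> {{ previous_outputs.answer }}"
    # Assumed decay settings - key names are illustrative only
    decay:
      enabled: true
      short_term_hours: 2      # routine entries fade within hours
      long_term_hours: 168     # important entries persist for a week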

3. Workflow Execution

When you run orka run workflow.yml "input", OrKa:

  1. Reads your YAML configuration
  2. Creates the agents you defined
  3. Runs them in the order you specified
  4. Passes outputs between agents (sketched below)
  5. Returns the final result
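
Step 4 is the glue: each agent's output is exposed to later agents through the previous_outputs template variable, keyed by agent id, as in the basic example above.

agents:
  - id: summarize
    type: local_llm
    model: llama3.2
    prompt: "Summarize: {{ input }}"

  - id: critique
    type: local_llm
    model: llama3.2
    # Reference the earlier agent's result by its id
    prompt: "Critique this summary: {{ previous_outputs.summarize }}"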

4. Local LLM Support

OrKa works with local models through the following options (sketched below):

  • Ollama - ollama pull llama3.2 then use provider: ollama
  • LM Studio - Point to your local API endpoint
  • Any OpenAI-compatible API
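
A minimal sketch of both options follows. The Ollama provider value comes from the list above; the LM Studio keys (provider: lm_studio, model_url) are assumptions, so verify them against the local LLM documentation.

agents:
  - id: ollama_answer
    type: local_llm
    provider: ollama                      # from the list above
    model: llama3.2
    prompt: "Answer: {{ input }}"

  - id: studio_answer
    type: local_llm
    provider: lm_studio                   # assumed provider name
    model_url: http://localhost:1234/v1   # LM Studio's default local endpoint
    prompt: "Answer: {{ input }}"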

πŸ“š Complete Agent & Node Reference

🎯 NEW: Comprehensive Documentation for Every Agent, Node & Tool β†’

Detailed documentation for all agent types, control flow nodes, and tools:

  • πŸ€– 7 LLM Agents - OpenAI, Local LLM, Binary, Classification, Validation, PlanValidator
  • πŸ’Ύ 2 Memory Agents - Reader & Writer with 100x faster HNSW indexing
  • πŸ”€ 6 Control Flow Nodes - Router, Fork/Join, Loop, Failover, GraphScout
  • πŸ”§ 2 Search Tools - DuckDuckGo, RAG

Each with working examples, parameters, best practices, and troubleshooting!


Common Patterns

Memory-First Q&A

# Check memory first, search web if nothing found
agents:
  - id: check_memory
    type: memory
    operation: read

  - id: binary_agent
    type: local_llm
    model: llama3.2
    prompt: |
      Given this memory: {{ get_agent_response('check_memory') }}
      and this input: {{ input }}
      Is a web search required?
      Answer only with 'true' or 'false'.

  - id: route_decision
    type: router
    decision_key: 'binary_agent'
    routing_map:
      "true": [web_search, answer_from_web]
      "false": [answer_from_memory]

Parallel Processing

# Analyze sentiment and toxicity simultaneously
agents:
  - id: parallel_analysis
    type: fork
    targets:
      - [sentiment_analyzer]
      - [toxicity_checker]
      
  - id: combine_results
    group: parallel_analysis
    type: join

Iterative Improvement

# Keep improving until quality threshold met
agents:
  - id: improvement_loop
    type: loop
    max_loops: 5
    score_threshold: 0.85
    internal_workflow:
      agents: [analyzer, scorer]

Comparison to Alternatives

Feature            | OrKa                        | LangChain        | CrewAI
-------------------|-----------------------------|------------------|----------------
Configuration      | YAML files                  | Python code      | Python code
Memory             | Built-in with decay         | External/manual  | External/manual
Local LLMs         | First-class support         | Via adapters     | Limited
Parallel execution | Native fork/join            | Manual threading | Agent-based
Learning           | Automatic memory management | Manual           | Manual

Quick Start Examples

1. Simple Q&A with Memory

# Copy example
cp examples/simple_memory_preset_demo.yml my-qa.yml

# Run it
orka run my-qa.yml "What is artificial intelligence?"

2. Web Search + Memory

# Copy example  
cp examples/person_routing_with_search.yml web-qa.yml

# Run it
orka run web-qa.yml "Latest news about quantum computing"

3. Local LLM Chat

# Start Ollama
ollama pull llama3.2

# Copy example
cp examples/multi_model_local_llm_evaluation.yml local-chat.yml

# Run it
orka run local-chat.yml "Explain machine learning simply"

Documentation

Complete 1-to-1 documentation for every agent, node, and tool with examples, parameters, and best practices.

Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

License

Apache 2.0 License - see LICENSE for details.
