
OrKa – Orchestrator Kit for Agentic Reasoning

OrKa is a modular, transparent AI cognition framework for orchestrating agent-based reasoning workflows using LLMs. Designed from the ground up for traceability, modularity, and introspective control, OrKa enables the construction of composable reasoning systems inspired by cognitive science — without sacrificing the power of modern foundation models.

💡 Built for AGI researchers, explainability advocates, and developers who need more than just chains and wrappers.


🧠 What is OrKa?

OrKa is a lightweight SDK and orchestration runtime for defining cognitive pipelines using YAML files and pluggable AI agents.

It shifts AI orchestration away from monolithic prompting and hard-coded chains toward a distributed cognition model, where each step is handled by a distinct reasoning agent: classifiers, validators, searchers, and more.

Every step is logged, inspectable, and overridable. OrKa doesn't hide the process — it lets you observe cognition as it unfolds.
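
To make that concrete, a minimal pipeline could be sketched roughly like this. The keys below are illustrative placeholders, not the canonical OrKa schema; see Getting Started and example.yaml for the real format:

```yaml
# Illustrative sketch only: field names are hypothetical, not the documented OrKa schema.
orchestrator:
  id: fact_check_demo
  # Agents run as distinct reasoning steps, each logged to the trace.
  agents: [classify_topic, web_search, validate_answer]

agents:
  - id: classify_topic
    type: classification
    prompt: "What kind of question is this? {{ input }}"
  - id: web_search
    type: duckduckgo
    prompt: "{{ input }}"
  - id: validate_answer
    type: binary
    prompt: "Is the gathered evidence enough to answer the question? {{ input }}"
```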


🔧 Core Concepts

  • Orchestrators manage control flow between agents.
  • Agents are stateless modules that perform tasks like:
    • Binary decision making
    • Classification
    • Web search
    • Validation
    • Routing (conditional branching)
  • Redis Streams or (soon) Kafka enable async message passing and logging.
  • YAML configuration defines the entire reasoning graph declaratively (see the routing sketch after this list).
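
As a rough sketch of how conditional branching could be expressed in such a config (again, the router fields here are hypothetical placeholders, not the documented schema):

```yaml
# Hypothetical router sketch: option names are illustrative, not the real schema.
agents:
  - id: needs_search
    type: binary
    prompt: "Does answering '{{ input }}' require fresh web results?"
  - id: route_next
    type: router
    params:
      # Branch on the output of an earlier agent.
      decision_key: needs_search
      routing_map:
        "true": [web_search, validate_answer]
        "false": [validate_answer]
```

The idea is that the router reads an earlier agent's output and selects which downstream agents run next, and, like every other step, the branch taken is written to the trace.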

📦 Features

✅ Pluggable agent system
✅ LLM-backed binary/classification/chain-of-thought agents
✅ Google/DuckDuckGo search fallback
✅ Full Redis-based trace logging
✅ Kafka-ready memory layer (planned)
✅ Compatible with local or remote LLMs (via LiteLLM)
✅ Roadmap for visual orchestration via Tiamat-GenAI


✨ Why OrKa?

Unlike LangChain, CrewAI, or Flowise, OrKa is not just a wrapper for LLM APIs. It is:

  • 🧱 Modular by default — Agents are testable, swappable, and never forced to be black boxes.
  • 🕸 Traceable and introspectable — Every thought path is logged.
  • 🔄 Designed for cognitive experimentation — Emergent behaviors, agent conflict resolution, and agent memory are on the roadmap.

🚀 Getting Started

Check out Getting Started for setup, installation, and your first YAML orchestration.

Want to see how it all works?
Jump into example.yaml and try running:

python test_run.py

🛣 Roadmap

🔜 Kafka-backed memory agent

🔜 Graph-based visualization of traces

🔜 External plugin registry for custom agents

🔜 Meta-agents (self-reflection + re-routing)

🤝 Collaboration & Philosophy

OrKa is an open research framework as much as it is a dev tool. It's built on the belief that AI cognition should be explainable, inspectable, and composable — not opaque.

This project welcomes collaboration with:

  • AGI labs
  • Cognitive scientists
  • ML/LLM engineers
  • Open-source tinkerers and thinkers


Built by Marco Somma, multipotentialite, AI engineer, and builder of cognitive tools that think out loud.
