Production-Ready AI Agent Framework
Build autonomous AI agents that think, reason, and execute complex tasks.
Quick Start • Features • Examples • Configuration • Documentation
Table of Contents (click to expand)
- 🌍 The OmniRexFlora AI Ecosystem
- 🎯 What is OmniCoreAgent?
- ⚡ Quick Start
- 🏗️ Architecture Overview
- 🤖 OmniCoreAgent – The Heart of the Framework
- 🧠 Multi-Tier Memory System
- 📡 Event System
- 🔌 Built-in MCP Client
- 🛠️ Local Tools System
- 🧩 Agent Skills System
- 💾 Memory Tool Backend
- 👥 Sub-Agents System
- 🔄 Background Agents
- 🔗 Workflow Agents
- 🔧 Advanced Tool Use (BM25)
- 📊 Production Observability & Metrics
- 🛡️ Prompt Injection Guardrails
- 🌐 Universal Model Support
OmniCoreAgent is part of a complete "Operating System for AI Agents" – three powerful tools that work together:
═══════════════════════════════════════════════════════════════════════
                     🌍 OmniRexFlora AI Ecosystem
                "The Operating System for AI Agents"
═══════════════════════════════════════════════════════════════════════

  🧠 OmniMemory               🤖 OmniCoreAgent            ⚡ OmniDaemon
  The Brain                   The Worker                  The Runtime

  • Self-evolving memory      • Agent building            • Event-driven
  • Dual-agent synthesis      • Tool orchestration          execution
  • Conflict resolution       • Multi-backend             • Production
  • Composite scoring         • Workflow agents             deployment
                                                          • Framework-agnostic
  github.com/                 ★ YOU ARE HERE ★
  omnirexflora-labs/                                      github.com/
  omnimemory                                              omnirexflora-labs/
                                                          OmniDaemon
| Tool | Role | Description |
|---|---|---|
| 🧠 OmniMemory | The Brain | Self-evolving memory with dual-agent synthesis & conflict resolution |
| 🤖 OmniCoreAgent | The Worker | Agent building, tool orchestration, multi-backend flexibility |
| ⚡ OmniDaemon | The Runtime | Event-driven execution, production deployment, framework-agnostic |
💡 Like how Linux runs applications, OmniRexFlora runs AI agents – reliably, at scale, in production.
OmniCoreAgent is a production-ready Python framework for building autonomous AI agents that:
| Capability | Description |
|---|---|
| 🤖 Think & Reason | Not just chatbots – agents that plan multi-step workflows |
| 🛠️ Use Tools | Connect to APIs, databases, files, MCP servers, with Advanced Tool Use |
| 🧠 Remember Context | Multi-tier memory: Redis, PostgreSQL, MongoDB, SQLite |
| 🔗 Orchestrate Workflows | Sequential, Parallel, and Router agents |
| 🚀 Run in Production | Monitoring, observability, error handling built-in |
| 🔌 Plug & Play | Switch backends at runtime (Redis → MongoDB → PostgreSQL) |
# Using uv (recommended)
uv add omnicoreagent
# Or with pip
pip install omnicoreagent

Set your API key:

echo "LLM_API_KEY=your_openai_api_key_here" > .env

Run your first agent:

import asyncio
from omnicoreagent import OmniCoreAgent
async def main():
    agent = OmniCoreAgent(
        name="my_agent",
        system_instruction="You are a helpful assistant.",
        model_config={"provider": "openai", "model": "gpt-4o"}
    )
    result = await agent.run("Hello, world!")
    print(result['response'])
    await agent.cleanup()

if __name__ == "__main__":
    asyncio.run(main())

✅ That's it! You just built an AI agent with session management, memory persistence, event streaming, and error handling.
🚨 Common Errors & Fixes

| Error | Fix |
|---|---|
| `Invalid API key` | Check `.env` file: `LLM_API_KEY=sk-...` (no quotes) |
| `ModuleNotFoundError` | Run: `pip install omnicoreagent` |
| `Event loop is closed` | Use `asyncio.run(main())` |
OmniCoreAgent Framework
├── 🤖 Core Agent System
│   ├── OmniCoreAgent (Main Class)
│   ├── ReactAgent (Reasoning Engine)
│   └── Tool Orchestration
│
├── 🧠 Memory System (5 Backends)
│   ├── InMemoryStore (Fast Dev)
│   ├── RedisMemoryStore (Production)
│   ├── DatabaseMemory (PostgreSQL/MySQL/SQLite)
│   └── MongoDBMemory (Document Storage)
│
├── 📡 Event System
│   ├── InMemoryEventStore (Development)
│   └── RedisStreamEventStore (Production)
│
├── 🛠️ Tool System
│   ├── Local Tools Registry
│   ├── MCP Integration
│   ├── Advanced Tool Use (BM25)
│   └── Memory Tool Backend
│
├── 🔄 Background Agents
│   └── Autonomous Scheduled Tasks
│
├── 🔗 Workflow Agents
│   ├── SequentialAgent
│   ├── ParallelAgent
│   └── RouterAgent
│
├── 🧩 Agent Skills System
│   ├── SkillManager (Discovery)
│   ├── Multi-language Script Dispatcher
│   └── agentskills.io Spec Alignment
│
└── 🔌 Built-in MCP Client
    ├── stdio, SSE, HTTP transports
    └── OAuth & Bearer auth
from omnicoreagent import OmniCoreAgent, ToolRegistry, MemoryRouter, EventRouter
# Basic Agent
agent = OmniCoreAgent(
    name="assistant",
    system_instruction="You are a helpful assistant.",
    model_config={"provider": "openai", "model": "gpt-4o"}
)

# Production Agent with All Features
agent = OmniCoreAgent(
    name="production_agent",
    system_instruction="You are a production agent.",
    model_config={"provider": "openai", "model": "gpt-4o"},
    local_tools=tool_registry,
    mcp_tools=[...],
    memory_router=MemoryRouter("redis"),
    event_router=EventRouter("redis_stream"),
    agent_config={
        "max_steps": 20,
        "enable_advanced_tool_use": True,
        "enable_agent_skills": True,
        "memory_tool_backend": "local",
        "guardrail_config": {"strict_mode": True}  # Enable Safety Guardrails
    }
)
# Key Methods
await agent.run(query) # Execute task
await agent.run(query, session_id="user_1") # With session context
await agent.connect_mcp_servers() # Connect MCP tools
await agent.list_all_available_tools() # List all tools
await agent.swith_memory_store("mongodb") # Switch backend at runtime!
await agent.get_session_history(session_id) # Retrieve conversation history
await agent.clear_session_history(session_id) # Clear history (session_id optional, clears all if None)
await agent.get_events(session_id) # Get event history
await agent.get_memory_store_type() # Get current memory router type
await agent.cleanup() # Clean up resources and remove the agent and the config
await agent.cleanup_mcp_servers() # Clean up MCP servers without removing the agent and the config
await agent.get_metrics()                        # Get cumulative usage (tokens, requests, time)

> **Tip:** Each `agent.run()` call now returns a `metric` field containing fine-grained usage for that specific request.
💡 When to Use: OmniCoreAgent is your go-to for any AI task – from simple Q&A to complex multi-step workflows. Start here for any agent project.
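For multi-turn conversations, pass a stable `session_id` into each `run()` call. A minimal sketch using the methods listed above (the `user_1` id is illustrative):

result = await agent.run("My name is Ada.", session_id="user_1")
result = await agent.run("What's my name?", session_id="user_1")  # Answers from the session context

history = await agent.get_session_history("user_1")  # Inspect the stored conversation
await agent.clear_session_history("user_1")          # Reset just this session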
5 backends with runtime switching – start with Redis, switch to MongoDB, then PostgreSQL, all on the fly!
from omnicoreagent import OmniCoreAgent, MemoryRouter
# Start with Redis
agent = OmniCoreAgent(
    name="my_agent",
    memory_router=MemoryRouter("redis"),
    model_config={"provider": "openai", "model": "gpt-4o"}
)

# Switch at runtime – no restart needed!
agent.swith_memory_store("mongodb")    # Switch to MongoDB
agent.swith_memory_store("database")   # Switch to PostgreSQL/MySQL/SQLite
agent.swith_memory_store("in_memory")  # Switch to in-memory
agent.swith_memory_store("redis")      # Back to Redis

| Backend | Use Case | Environment Variable |
|---|---|---|
| `in_memory` | Fast development | None |
| `redis` | Production persistence | `REDIS_URL` |
| `database` | PostgreSQL/MySQL/SQLite | `DATABASE_URL` |
| `mongodb` | Document storage | `MONGODB_URI` |

💡 When to Use: Use `in_memory` for development/testing, `redis` for production with fast access, `database` for SQL-based systems, `mongodb` for document-heavy applications.
Real-time event streaming with runtime switching:
from omnicoreagent import EventRouter
# Start with in-memory
agent = OmniCoreAgent(
    event_router=EventRouter("in_memory"),
    ...
)

# Switch to Redis Streams for production
agent.switch_event_store("redis_stream")
agent.get_event_store_type()  # Get current event router type

# Stream events in real-time
async for event in agent.stream_events(session_id):
    print(f"{event.type}: {event.payload}")

Event Types: `user_message`, `agent_message`, `tool_call_started`, `tool_call_result`, `final_answer`, `agent_thought`, `sub_agent_started`, `sub_agent_error`, `sub_agent_result`
💡 When to Use: Enable events when you need real-time monitoring, debugging, or building UIs that show agent progress. Essential for production observability.
Connect to any MCP-compatible service with support for multiple transport protocols and authentication methods.
1. `stdio` – Local MCP servers (process communication)

{
    "name": "filesystem",
    "transport_type": "stdio",
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"]
}

2. `streamable_http` – Remote servers with HTTP streaming
# With Bearer Token
{
    "name": "github",
    "transport_type": "streamable_http",
    "url": "http://localhost:8080/mcp",
    "headers": {
        "Authorization": "Bearer your-token"  # optional
    },
    "timeout": 60  # optional
}

# With OAuth 2.0 (auto-starts callback server on localhost:3000)
{
    "name": "oauth_server",
    "transport_type": "streamable_http",
    "auth": {
        "method": "oauth"
    },
    "url": "http://localhost:8000/mcp"
}

3. `sse` – Server-Sent Events
{
    "name": "sse_server",
    "transport_type": "sse",
    "url": "http://localhost:3000/sse",
    "headers": {
        "Authorization": "Bearer token"  # optional
    },
    "timeout": 60,  # optional
    "sse_read_timeout": 120  # optional
}

A complete multi-transport example:

agent = OmniCoreAgent(
name="multi_mcp_agent",
system_instruction="You have access to filesystem, GitHub, and live data.",
model_config={"provider": "openai", "model": "gpt-4o"},
mcp_tools=[
# 1. stdio - Local filesystem
{
"name": "filesystem",
"transport_type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/home"]
},
# 2. streamable_http - Remote API (supports Bearer token or OAuth)
{
"name": "github",
"transport_type": "streamable_http",
"url": "http://localhost:8080/mcp",
"headers": {"Authorization": "Bearer github-token"},
"timeout": 60
},
# 3. sse - Real-time streaming
{
"name": "live_data",
"transport_type": "sse",
"url": "http://localhost:3000/sse",
"headers": {"Authorization": "Bearer token"},
"sse_read_timeout": 120
}
]
)
await agent.connect_mcp_servers()
tools = await agent.list_all_available_tools() # All MCP + local tools
result = await agent.run("List all Python files and get latest commits")| Transport | Use Case | Auth Methods |
|---|---|---|
stdio |
Local MCP servers, CLI tools | None (local process) |
streamable_http |
Remote APIs, cloud services | Bearer token, OAuth 2.0 |
sse |
Real-time data, streaming | Bearer token, custom headers |
π‘ When to Use: Use MCP when you need to connect to external tools and services. Choose
stdiofor local CLI tools,streamable_httpfor REST APIs, andssefor real-time streaming data.
Register any Python function as an AI tool:
from omnicoreagent import ToolRegistry
tools = ToolRegistry()
@tools.register_tool("get_weather")
def get_weather(city: str) -> str:
"""Get weather for a city."""
return f"Weather in {city}: Sunny, 25Β°C"
@tools.register_tool("calculate_area")
def calculate_area(length: float, width: float) -> str:
"""Calculate rectangle area."""
return f"Area: {length * width} square units"
agent = OmniCoreAgent(
name="tool_agent",
local_tools=tools, # Your custom tools!
...
)π‘ When to Use: Use Local Tools when you need custom business logic, internal APIs, or any Python functionality that isn't available via MCP servers.
OmniCoreAgent supports the Agent Skills specification – self-contained capability packages that provide specialized knowledge, executable scripts, and documentation.
agent_config = {
    "enable_agent_skills": True  # Enable discovery and tools for skills
}

Key Concepts:
- **Discovery:** Agents automatically discover skills installed in `.agents/skills/[skill-name]`.
- **Activation (`SKILL.md`):** Agents are instructed to read the "Activation Document" first to understand how to use the skill's specific capabilities.
- **Polyglot Execution:** The `run_skill_script` tool handles scripts in Python, JavaScript/Node, TypeScript, Ruby, Perl, and Shell (bash/sh).
Directory Structure:
.agents/skills/my-skill-name/
├── SKILL.md      # The "Activation" document (instructions + metadata)
├── scripts/      # Multi-language executable scripts
├── references/   # Deep-dive documentation
└── assets/       # Templates, examples, and resources
Skill Tools:
- `read_skill_file(skill_name, file_path)`: Access any file within a skill (start with `SKILL.md`).
- `run_skill_script(skill_name, script_name, args?)`: Execute bundled scripts with automatic interpreter detection.
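A minimal end-to-end sketch, assuming a hypothetical `pdf-tools` skill installed under `.agents/skills/pdf-tools/` (the skill name and task are illustrative):

agent = OmniCoreAgent(
    name="skill_agent",
    system_instruction="Use installed skills when relevant.",
    model_config={"provider": "openai", "model": "gpt-4o"},
    agent_config={"enable_agent_skills": True}
)

# The agent reads .agents/skills/pdf-tools/SKILL.md via read_skill_file,
# then executes bundled scripts via run_skill_script as needed
result = await agent.run("Use the pdf-tools skill to extract text from report.pdf")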
📖 Learn More: To learn how to create your own agent skills, visit agentskills.io.
A file-based persistent storage system that gives your agent a local workspace to save and manage files during long-running tasks. Files are stored in a ./memories/ directory with safe concurrent access and path traversal protection.
agent_config = {
    "memory_tool_backend": "local"  # Enable file-based memory
}

# Agent automatically gets these tools:
# - memory_view: View/list files in memory directory
# - memory_create_update: Create new files or append/overwrite existing ones
# - memory_str_replace: Find and replace text within files
# - memory_insert: Insert text at specific line numbers
# - memory_delete: Delete files from memory
# - memory_rename: Rename or move files
# - memory_clear_all: Clear entire memory directory

How It Works:
- Files stored in `./memories/` directory (auto-created)
- Thread-safe with file locking for concurrent access
- Path traversal protection for security
- Persists across agent restarts
Use Cases:
| Use Case | Description |
|---|---|
| Long-running workflows | Save progress as agent works through complex tasks |
| Resumable tasks | Continue where you left off after interruption |
| Multi-step planning | Agent can save plans, execute, and update |
| Code generation | Save code incrementally, run tests, iterate |
| Data processing | Store intermediate results between steps |
Example: A code generation agent can save its plan to memory, write code incrementally, run tests, and resume if interrupted.
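A minimal sketch of the backend in action – the task wording is illustrative; the agent decides when to call the `memory_*` tools:

agent = OmniCoreAgent(
    name="research_agent",
    system_instruction="You are a research assistant. Save your plan and findings to memory as you work.",
    model_config={"provider": "openai", "model": "gpt-4o"},
    agent_config={"memory_tool_backend": "local"}  # Files persist in ./memories/
)

await agent.run("Research Python async frameworks and save your plan to plan.md first.")
# Later (even after a restart) the agent can pick up where it left off:
await agent.run("Continue the research using the plan saved in memory.")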
Delegate tasks to specialized child agents:
weather_agent = OmniCoreAgent(name="weather_agent", ...)
filesystem_agent = OmniCoreAgent(name="filesystem_agent", mcp_tools=MCP_TOOLS, ...)
parent_agent = OmniCoreAgent(
    name="parent_agent",
    sub_agents=[weather_agent, filesystem_agent],
    ...
)

💡 When to Use: Use Sub-Agents when you have specialized agents (e.g., weather, code, data) and want a parent agent to delegate tasks intelligently. Great for building modular, reusable agent architectures.
Autonomous agents that run on schedule:
from omnicoreagent import BackgroundAgentService, MemoryRouter, EventRouter
bg_service = BackgroundAgentService(
    MemoryRouter("redis"),
    EventRouter("redis_stream")
)
bg_service.start_manager()

agent_config = {
    "agent_id": "system_monitor",
    "system_instruction": "Monitor system resources.",
    "model_config": {"provider": "openai", "model": "gpt-4o-mini"},
    "interval": 300,  # Run every 5 minutes
    "task_config": {
        "query": "Monitor CPU and alert if > 80%",
        "max_retries": 2
    }
}

await bg_service.create(agent_config)
bg_service.start_agent("system_monitor")

Management: `start_agent()`, `pause_agent()`, `resume_agent()`, `stop_agent()`, `get_agent_status()`
💡 When to Use: Perfect for scheduled tasks like system monitoring, periodic reports, data syncing, or any automation that runs independently without user interaction.
Orchestrate multiple agents for complex tasks:
from omnicoreagent import SequentialAgent, ParallelAgent, RouterAgent
# Sequential: Chain agents step-by-step
seq_agent = SequentialAgent(sub_agents=[agent1, agent2, agent3])
result = await seq_agent.run(initial_task="Analyze and report")
# Parallel: Run agents concurrently
par_agent = ParallelAgent(sub_agents=[agent1, agent2, agent3])
results = await par_agent.run(agent_tasks={
    "analyzer": "Analyze data",
    "processor": "Process results"
})

# Router: Intelligent task routing
router = RouterAgent(
    sub_agents=[code_agent, data_agent, research_agent],
    model_config={"provider": "openai", "model": "gpt-4o"}
)
result = await router.run(task="Find and summarize AI research")

💡 When to Use:
- SequentialAgent: When tasks depend on each other (output of one → input of next)
- ParallelAgent: When tasks are independent and can run simultaneously for speed
- RouterAgent: When you need intelligent task routing to specialized agents
Automatically discover relevant tools at runtime using BM25 lexical search:
agent_config = {
    "enable_advanced_tool_use": True  # Enable BM25 retrieval
}

How It Works:
- All MCP tools loaded into in-memory registry
- BM25 index built over tool names, descriptions, parameters
- User task used as search query
- Top 5 relevant tools dynamically injected
Benefits: Scales to 1000+ tools, zero network I/O, deterministic, container-friendly.
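To make the retrieval step concrete, here is a standalone sketch using the `rank_bm25` package – an illustration of the technique only, not OmniCoreAgent's internal implementation; the toy tool descriptions are made up:

# pip install rank-bm25
from rank_bm25 import BM25Okapi

# Toy registry: one "name + description" document per tool (illustrative)
tool_docs = [
    "read_file read the contents of a file from disk",
    "git_log show the latest commits in a repository",
    "http_get fetch a url and return the response body",
]
bm25 = BM25Okapi([doc.split() for doc in tool_docs])

# The user task acts as the search query; top-ranked tools get injected into the prompt
task = "get the latest commits"
print(bm25.get_top_n(task.split(), tool_docs, n=2))  # git_log ranks first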
💡 When to Use: Enable when you have many MCP tools (10+) and want the agent to automatically discover the right tools for each task without manual selection.
OmniCoreAgent tracks every token, request, and millisecond. Each run() returns a metric object, and you can get cumulative stats anytime.
result = await agent.run("Analyze this data")
print(f"Request Tokens: {result['metric'].request_tokens}")
print(f"Time Taken: {result['metric'].total_time:.2f}s")
# Get aggregated metrics for the agent's lifecycle
stats = await agent.get_metrics()
print(f"Avg Response Time: {stats['average_time']:.2f}s")Monitor and optimize your agents with deep traces:
# Add to .env
OPIK_API_KEY=your_opik_api_key
OPIK_WORKSPACE=your_workspace

What's Tracked: LLM call performance, tool execution traces, memory operations, agent workflow, bottlenecks.
Agent Execution Trace:
└── agent_execution: 4.6s
    ├── tools_registry_retrieval: 0.02s ✅
    ├── memory_retrieval_step: 0.08s ✅
    ├── llm_call: 4.5s ⚠️ (bottleneck!)
    └── action_execution: 0.03s ✅
💡 When to Use: Essential for production. Use Metrics for cost/performance monitoring, and Opik for identifying bottlenecks and debugging complex agent logic.
Protect your agents against malicious inputs, jailbreaks, and instruction overrides before they reach the LLM.
agent_config = {
    "guardrail_config": {
        "strict_mode": True,              # Block all suspicious inputs
        "sensitivity": 0.85,              # 0.0 to 1.0 (higher = more sensitive)
        "enable_pattern_matching": True,
        "enable_heuristic_analysis": True
    }
}

agent = OmniCoreAgent(..., agent_config=agent_config)

# If a threat is detected:
# result['response'] -> "I'm sorry, but I cannot process this request due to safety concerns..."
# result['guardrail_result'] -> Full metadata about the detected threat

Key Protections:
- Instruction Overrides: "Ignore previous instructions..."
- Jailbreaks: DAN mode, roleplay escapes, etc.
- Toxicity & Abuse: Built-in pattern recognition.
- Payload Splitting: Detects fragmented attack attempts.
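A minimal handling sketch based on the result fields shown above; it assumes `guardrail_result` is empty when the input is clean, and `log_security_event` stands in for your own handler:

result = await agent.run(user_input)

if result.get('guardrail_result'):                   # Threat detected and blocked
    log_security_event(result['guardrail_result'])   # Your own handler (illustrative)
else:
    print(result['response'])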
| Parameter | Type | Default | Description |
|---|---|---|---|
| `strict_mode` | `bool` | `False` | When `True`, any detection (even low confidence) blocks the request. |
| `sensitivity` | `float` | `1.0` | Scaling factor for threat scores (0.0 to 1.0). Higher = more sensitive. |
| `max_input_length` | `int` | `10000` | Maximum allowed query length before blocking. |
| `enable_encoding_detection` | `bool` | `True` | Detects base64, hex, and other obfuscation attempts. |
| `enable_heuristic_analysis` | `bool` | `True` | Analyzes prompt structure for typical attack patterns. |
| `enable_sequential_analysis` | `bool` | `True` | Checks for phased attacks across multiple tokens. |
| `enable_entropy_analysis` | `bool` | `True` | Detects high-entropy payloads common in injections. |
| `allowlist_patterns` | `list` | `[]` | List of regex patterns that bypass safety checks. |
| `blocklist_patterns` | `list` | `[]` | Custom regex patterns to always block. |
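For example, a stricter configuration mixing the defaults with custom patterns – the regexes are illustrative only and should be tuned per application:

agent_config = {
    "guardrail_config": {
        "strict_mode": True,
        "max_input_length": 4000,
        "blocklist_patterns": [r"(?i)ignore (all|previous) instructions"],  # illustrative
        "allowlist_patterns": [r"^INTERNAL_BATCH_JOB:"]                     # illustrative
    }
}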
💡 When to Use: Always enable in user-facing applications to prevent prompt injection attacks and ensure agent reliability.
Model-agnostic through LiteLLM – use any provider:
# OpenAI
model_config = {"provider": "openai", "model": "gpt-4o"}
# Anthropic
model_config = {"provider": "anthropic", "model": "claude-3-5-sonnet-20241022"}
# Groq (Ultra-fast)
model_config = {"provider": "groq", "model": "llama-3.1-8b-instant"}
# Ollama (Local)
model_config = {"provider": "ollama", "model": "llama3.1:8b", "ollama_host": "http://localhost:11434"}
# OpenRouter (200+ models)
model_config = {"provider": "openrouter", "model": "anthropic/claude-3.5-sonnet"}
# Mistral AI
model_config = {"provider": "mistral", "model": "mistral-7b-instruct"}

# DeepSeek
model_config = {"provider": "deepseek", "model": "deepseek-chat"}

# Google Gemini
model_config = {"provider": "google", "model": "gemini-2.0-flash-exp"}

# Azure OpenAI
model_config = {"provider": "azure_openai", "model": "gpt-4o"}

Supported: OpenAI, Anthropic, Google Gemini, Groq, DeepSeek, Mistral, Azure OpenAI, OpenRouter, Ollama
💡 When to Use: Switch providers based on your needs – use cheaper models (Groq, DeepSeek) for simple tasks, powerful models (GPT-4o, Claude) for complex reasoning, and local models (Ollama) for privacy-sensitive applications.
CLI examples:

python examples/cli/basic.py            # Simple introduction
python examples/cli/run_omni_agent.py   # All features demo

Custom agents:

python examples/custom_agents/e_commerce_personal_shopper_agent.py
python examples/custom_agents/flightBooking_agent.py
python examples/custom_agents/real_time_customer_support_agent.py

Workflow agents:

python examples/workflow_agents/sequential_agent.py
python examples/workflow_agents/parallel_agent.py
python examples/workflow_agents/router_agent.py

| Example | Description | Location |
|---|---|---|
| DevOps Copilot | Safe bash execution, rate limiting, Prometheus metrics | `examples/devops_copilot_agent/` |
| Deep Code Agent | Sandbox execution, memory backend, code analysis | `examples/deep_code_agent/` |
# Required
LLM_API_KEY=your_api_key
# Optional: Memory backends
REDIS_URL=redis://localhost:6379/0
DATABASE_URL=postgresql://user:pass@localhost:5432/db
MONGODB_URI=mongodb://localhost:27017/omnicoreagent
# Optional: Observability
OPIK_API_KEY=your_opik_key
OPIK_WORKSPACE=your_workspace

Agent configuration:

agent_config = {
    "max_steps": 15,                   # Max reasoning steps
    "tool_call_timeout": 30,           # Tool timeout (seconds)
    "request_limit": 0,                # 0 = unlimited
    "total_tokens_limit": 0,           # 0 = unlimited
    "memory_config": {"mode": "sliding_window", "value": 10000},
    "enable_advanced_tool_use": True,  # BM25 tool retrieval
    "enable_agent_skills": True,       # Specialized packaged skills
    "memory_tool_backend": "local"     # Persistent working memory
}

Model configuration:

model_config = {
    "provider": "openai",
    "model": "gpt-4o",
    "temperature": 0.7,
    "max_tokens": 2000,
    "top_p": 0.95
}

📚 Additional Model Configurations
# Azure OpenAI
model_config = {
    "provider": "azure_openai",
    "model": "gpt-4",
    "azure_endpoint": "https://your-resource.openai.azure.com",
    "azure_api_version": "2024-02-01"
}
# Ollama (Local)
model_config = {
    "provider": "ollama",
    "model": "llama3.1:8b",
    "ollama_host": "http://localhost:11434"
}

Development setup:

# Clone
git clone https://github.com/omnirexflora-labs/omnicoreagent.git
cd omnicoreagent
# Setup
uv venv && source .venv/bin/activate
uv sync --dev
# Test
pytest tests/ -v
pytest tests/ --cov=src --cov-report=term-missing

| Error | Fix |
|---|---|
| `Invalid API key` | Check `.env`: `LLM_API_KEY=your_key` |
| `ModuleNotFoundError` | `pip install omnicoreagent` |
| `Redis connection failed` | Start Redis or use `MemoryRouter("in_memory")` |
| `MCP connection refused` | Ensure MCP server is running |
🔍 More Troubleshooting
OAuth Server Starts: Normal when using "auth": {"method": "oauth"}. Remove if not needed.
Debug Mode: agent = OmniCoreAgent(..., debug=True)
Help: Check GitHub Issues
# Fork & clone
git clone https://github.com/omnirexflora-labs/omnicoreagent.git
# Setup
uv venv && source .venv/bin/activate
uv sync --dev
pre-commit install
# Submit PRSee CONTRIBUTING.md for guidelines.
MIT License – see LICENSE
Created by Abiola Adeshina
- GitHub: @Abiorh001
- X (Twitter): @abiorhmangana
- Email: abiolaadedayo1993@gmail.com
| Project | Description |
|---|---|
| 🧠 OmniMemory | Self-evolving memory for autonomous agents |
| 🤖 OmniCoreAgent | Production-ready AI agent framework (this project) |
| ⚡ OmniDaemon | Event-driven runtime engine for AI agents |
Built on: LiteLLM, FastAPI, Redis, Opik, Pydantic, APScheduler
Building the future of production-ready AI agent frameworks
⭐ Star us on GitHub • 🐛 Report Bug • 💡 Request Feature • 📖 Documentation