A lightweight, intuitive Python framework for building AI agents and multi-agent systems
MiniGen is a lightweight, Python-native framework designed to help you learn and build AI agents without getting lost in boilerplate code. It's built for simplicity, experimentation, and understanding the core concepts that make AI agents work.
- 🎯 Simple & Intuitive: Minimal boilerplate, maximum functionality
- 🔧 Tool Integration: Easily extend agents with custom capabilities
- 🔗 Workflow Orchestration: Chain prompts and coordinate multiple agents
- ⚡ Parallel Execution: Run agents concurrently for better performance
- 🎭 Multi-Agent Networks: Build complex systems with intelligent routing
- 🔌 Provider Agnostic: Works with OpenAI, Anthropic, and other LLM providers
pip install minigen
Or install from source:
git clone https://github.com/devdezzies/minigen.git
cd minigen
pip install -e .
Create a .env file in your project root:
# Required: Your API key
OPENAI_API_KEY="your-api-key-here"
# Optional: Specify your preferred model
DEFAULT_MODEL="gpt-4"
# Optional: For non-OpenAI providers
BASE_URL="https://api.anthropic.com" # Example for Claude
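MiniGen is expected to pick these settings up from the environment. If you want to confirm they are visible to Python before a run, an optional check with the python-dotenv package looks like this; nothing here is required, it is only an illustration:

import os
from dotenv import load_dotenv  # optional helper, not part of MiniGen

load_dotenv()  # read .env from the current working directory

print("API key set:", bool(os.getenv("OPENAI_API_KEY")))
print("Model:", os.getenv("DEFAULT_MODEL", "provider default"))
print("Base URL:", os.getenv("BASE_URL", "provider default"))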
from minigen import Agent
# Create a specialized agent
pirate_agent = Agent(
    name="Captain Sarcasm",
    system_prompt="You are a sarcastic pirate captain. Answer all questions with sarcasm and pirate slang."
)
# Start chatting
response = pirate_agent.chat("How does a computer work?")
print(f"[{pirate_agent.name}]: {response}")
Think of an Agent as a specialized AI personality with a specific role. Each agent has:
- A name for identification
- A system prompt that defines its behavior and expertise
- Optional tools for extended capabilities
- Memory of the conversation context (see the sketch just below)
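Because that context lives on the agent, follow-up messages can refer to earlier turns. A minimal sketch using only the Agent API shown above (the replies will of course vary by model):

from minigen import Agent

tutor = Agent(
    name="Tutor",
    system_prompt="You explain programming concepts in plain language."
)

# The first turn establishes some context...
print(tutor.chat("Explain what a Python list is."))

# ...and the follow-up relies on the agent remembering that turn.
print(tutor.chat("Now show me one more way to use it."))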
Tools are Python functions that agents can call to interact with the external world:
from minigen import Agent, tool
@tool(description="Convert temperature from Celsius to Fahrenheit")
def celsius_to_fahrenheit(celsius: float) -> float:
return (celsius * 9/5) + 32
@tool(description="Get current weather for a city")
def get_weather(city: str) -> str:
return f"The weather in {city} is sunny and 25°C"
# Create agent with tools
weather_agent = Agent(
    name="Weather Assistant",
    system_prompt="You help users with weather information and temperature conversions.",
    tools=[celsius_to_fahrenheit, get_weather]
)
response = weather_agent.chat("What's the weather in London and convert 25°C to Fahrenheit?")
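A note on the decorator: the description string is most likely what the model sees when deciding whether (and how) to call a tool, so keep it short and specific, and give parameters clear names and type hints as in the two functions above.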
Chains execute a sequence of prompts, where each step builds on the previous output:
from minigen import Agent, Chain
agent = Agent()
research_chain = Chain(agent=agent, verbose=True)
research_chain \
    .add_step("Generate a comprehensive technical explanation of {input}") \
    .add_step("Simplify the following explanation for a beginner: {input}") \
    .add_step("Create a practical example to illustrate: {input}")
result = research_chain.run("machine learning")
print(result)
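Each {input} placeholder is filled with the previous step's output; the first step receives the argument passed to run() ("machine learning" here). With verbose=True the chain should also print each intermediate result, which makes it easier to tune the individual prompts.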
Agent Networks coordinate multiple specialized agents to solve complex problems:
from minigen import Agent, AgentNetwork
# Create specialized agents
planner = Agent(
    name="Planner",
    system_prompt="You excel at breaking down complex tasks into actionable steps."
)

researcher = Agent(
    name="Researcher",
    system_prompt="You find accurate information and cite sources."
)

writer = Agent(
    name="Writer",
    system_prompt="You create well-structured, engaging content."
)
# Build the network
network = AgentNetwork()
network.add_node(planner)
network.add_node(researcher)
network.add_node(writer)
# Set up intelligent routing
from minigen import create_llm_router
router = create_llm_router(network.nodes)
network.set_router(router)
network.set_entry_point("Planner")
# Execute the workflow
result = network.run(
    "Write a blog post about the benefits of renewable energy",
    max_rounds=8
)
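After each agent responds, the router inspects the shared conversation state and either names the next agent to run or ends the workflow; max_rounds caps how many of these hand-offs a single run will make, so a routing loop cannot run forever. The custom router example further below shows the exact contract.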
from minigen import Agent, tool
import ast
@tool(description="Analyze Python code for potential issues")
def analyze_code(code: str) -> str:
    try:
        ast.parse(code)
        return "Code syntax is valid. Ready for detailed review."
    except SyntaxError as e:
        return f"Syntax error found: {e}"
code_reviewer = Agent(
    name="Code Reviewer",
    system_prompt="""You are an expert Python developer who reviews code for:
    - Best practices and conventions (PEP 8)
    - Performance optimizations
    - Security vulnerabilities
    - Code maintainability
    Provide specific, actionable feedback.""",
    tools=[analyze_code]
)
# Review some code
code_to_review = '''
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)
'''
review = code_reviewer.chat(f"Please review this code:\n{code_to_review}")
print(review)
from minigen import Agent, AgentNetwork, create_llm_router
# Define specialized agents
fact_checker = Agent(
    name="FactChecker",
    system_prompt="You verify information accuracy and find reliable sources."
)

content_creator = Agent(
    name="ContentCreator",
    system_prompt="You create engaging, well-structured content based on verified facts."
)

editor = Agent(
    name="Editor",
    system_prompt="You polish content for clarity, flow, and professionalism."
)
# Create the pipeline
pipeline = AgentNetwork()
pipeline.add_node(fact_checker)
pipeline.add_node(content_creator)
pipeline.add_node(editor)
router = create_llm_router(pipeline.nodes)
pipeline.set_router(router)
pipeline.set_entry_point("FactChecker")
# Generate content
final_content = pipeline.run(
    "Create an article about the environmental impact of electric vehicles",
    max_rounds=6
)
from minigen import Agent, AgentNetwork, Parallel, create_llm_router
# Create domain experts
tech_expert = Agent(
    name="TechExpert",
    system_prompt="You analyze technology trends and innovations."
)

market_expert = Agent(
    name="MarketExpert",
    system_prompt="You analyze market conditions and business implications."
)

synthesizer = Agent(
    name="Synthesizer",
    system_prompt="You combine different perspectives into comprehensive insights."
)

# Set up parallel processing
parallel_analysis = Parallel(
    name="ParallelAnalysis",
    agent_names=["TechExpert", "MarketExpert"]
)
network = AgentNetwork()
network.add_node(tech_expert)
network.add_node(market_expert)
network.add_node(synthesizer)
network.add_node(parallel_analysis)
router = create_llm_router(network.nodes)
network.set_router(router)
network.set_entry_point("ParallelAnalysis")
# Analyze from multiple perspectives
analysis = network.run(
    "Analyze the potential impact of quantum computing on the cryptocurrency market",
    max_rounds=5
)
from minigen import AgentNetwork, NetworkState
from typing import Optional
def custom_router(state: NetworkState) -> Optional[str]:
    last_message = state.messages[-1]

    # Route based on content keywords
    content = last_message.get('content', '').lower()

    if 'research' in content:
        return "Researcher"
    elif 'write' in content or 'draft' in content:
        return "Writer"
    elif 'review' in content or 'edit' in content:
        return "Editor"
    elif 'done' in content or 'complete' in content:
        return None  # End the workflow

    return "Planner"  # Default fallback
network.set_router(custom_router)
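Custom rules and the built-in LLM router can also be combined. The sketch below assumes the callable returned by create_llm_router has the same (state) -> Optional[str] signature that set_router expects of a custom router; cheap keyword checks handle the unambiguous cases and everything else falls through to the LLM:

from typing import Optional
from minigen import NetworkState, create_llm_router

llm_router = create_llm_router(network.nodes)

def hybrid_router(state: NetworkState) -> Optional[str]:
    content = state.messages[-1].get('content', '').lower()

    # Terminate or route on unambiguous keywords first...
    if 'done' in content or 'complete' in content:
        return None
    if 'research' in content:
        return "Researcher"

    # ...and let the LLM router handle everything else.
    return llm_router(state)

network.set_router(hybrid_router)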
Parameters (a combined example follows the list):
- name: Agent identifier
- system_prompt: Instructions that define agent behavior
- tools: List of functions the agent can call
- model: LLM model to use (defaults to environment setting)
- max_retries: Maximum retry attempts for failed requests
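Putting the parameters together, a sketch (the values are placeholders; celsius_to_fahrenheit is the tool defined earlier):

from minigen import Agent

assistant = Agent(
    name="Assistant",                        # identifier, also used for routing in networks
    system_prompt="You answer questions clearly and concisely.",
    tools=[celsius_to_fahrenheit],           # functions decorated with @tool
    model="gpt-4",                           # overrides DEFAULT_MODEL from .env
    max_retries=3                            # retry failed requests up to 3 times
)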
We welcome contributions! Please see our Contributing Guide for details.
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ by the MiniGen community