A production-ready decentralized multi-agent AI system that uses the Nostr protocol for intelligent collaborative discussions and debates.
Nostr AI Swarm allows you to create intelligent agent swarms that engage in structured debates, discussions, and collaborative problem-solving. Agents communicate through the Nostr protocol, creating a decentralized and transparent conversation system.
- Web-based Control Panel - Intuitive UI for managing everything
- Predefined Role Templates - Ready-to-use agent personalities
- Scenario Management - Create, edit, and run debate scenarios
- Real-time Conversation Display - Watch agent discussions live
- Nostr Protocol - Decentralized, censorship-resistant communication
- Local LLM Support - Uses Ollama for AI inference
- Multiple Conversation Modes - Sequential, parallel, debate, supervised debate
- Smart Reply Strategy - Intelligent parent message selection
- Supervisor Synthesis - Automatic debate summarization
- Python 3.8+
- Ollama - Local LLM inference engine
  ```bash
  # Install Ollama from https://ollama.ai
  # Pull a model (recommended):
  ollama pull qwen2.5:14b
  ```
- Nostr Relay (optional, defaults to localhost:7447)
  ```bash
  # You can run a local relay or use public relays
  # For local development:
  docker run -p 7447:7447 scsibug/nostr-rs-relay
  ```
- Clone the repository
  ```bash
  git clone <your-repo-url>
  cd swarm
  ```
- Install dependencies
  ```bash
  pip install -r requirements.txt
  ```
- Start the web application
  ```bash
  python backend.py
  ```
- Open your browser
  ```
  http://localhost:8000
  ```
That's it! You're ready to create and run agent swarms.
Roles define agent personalities and behaviors. The system comes with 7 predefined roles:
| Role | Description | Best For |
|---|---|---|
| Technical Expert | Implementation-focused, analyzes feasibility | Technical discussions, architecture reviews |
| Creative Thinker | Generates novel ideas and approaches | Brainstorming, innovation |
| Critical Analyst | Challenges assumptions, finds flaws | Quality assurance, risk analysis |
| Practical Advisor | Focuses on real-world constraints | Business decisions, resource planning |
| Visionary Strategist | Long-term thinking, strategic impact | Strategic planning, future vision |
| Analytical Researcher | Data-driven, evidence-based analysis | Research, data analysis |
| Supervisor/Moderator | Synthesizes discussions, draws conclusions | Debate moderation, synthesis |
- Go to the Roles tab
- Click + New Role
- Fill in:
- Name: Display name for the agent
- Role Type: Base personality type
- System Prompt: Detailed instructions for the agent
- Debate Stance: balanced, supportive, or challenging
- Personality Traits: Keywords describing behavior
- Knowledge Base: Domain expertise areas
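Custom roles are stored as JSON templates in the `roles/` directory. A minimal sketch of what such a template might look like, with field names inferred from the form labels above (the actual schema may differ — check an existing file in `roles/`):

```json
{
  "name": "Security Auditor",
  "role_type": "critical",
  "system_prompt": "You are a security auditor. Probe every proposal for attack surfaces, data-handling risks, and failure modes.",
  "debate_stance": "challenging",
  "personality_traits": ["skeptical", "thorough", "direct"],
  "knowledge_base": ["application security", "threat modeling"]
}
```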
Scenarios define what agents will discuss and how they'll interact.
The system includes 4 ready-to-use scenarios:
- AI Safety Debate - Discussing AI safety challenges
- Startup Idea Evaluation - Analyzing a business idea
- Code Architecture Review - Reviewing technical decisions
- Climate Tech Innovation - Exploring climate solutions
- Go to the Scenarios tab
- Click + New Scenario
- Configure:
- Name & Description: Identify your scenario
- Topic: The question or prompt for discussion
- Conversation Mode:
  - Supervised Debate - Agents debate, supervisor summarizes (recommended)
  - Free Debate - Dynamic freeform discussion
  - Sequential - Agents take turns in order
  - Parallel - All agents respond simultaneously
- Max Replies: How many messages before supervisor synthesis
- Agents: Add agents with specific roles
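Scenarios are saved as JSON files in the `scenarios/` directory. A hypothetical sketch with field names mirroring the options above (the real schema may differ — compare with the bundled scenarios):

```json
{
  "name": "API Design Review",
  "description": "Agents review a proposed public API",
  "topic": "Should the new endpoints be REST or GraphQL?",
  "conversation_mode": "supervised_debate",
  "max_replies": 12,
  "agents": [
    {"name": "Technical Expert", "role": "technical"},
    {"name": "Critical Analyst", "role": "critical"},
    {"name": "Moderator", "role": "supervisor"}
  ]
}
```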
- Go to the Swarm Control tab
- Select a scenario from the dropdown
- Configure:
- Relay URLs: Nostr relay addresses (comma-separated)
  - Local: `ws://localhost:7447`
  - Public examples: `wss://relay.damus.io,wss://relay.nostr.band`
- Ollama URL: Your Ollama instance (default: `http://localhost:11434`)
- Click Start Swarm
- Switch to the Live Conversation tab to watch the discussion
The Live Conversation tab shows:
- Real-time messages from all agents
- Agent names and timestamps
- Conversation flow and thread structure
The Swarm Status panel shows:
- Current status (idle, running, completed, error)
- Active scenario name
- Number of active agents
- Message count
**Supervised Debate** (recommended)
- Agents engage in freeform debate
- Supervisor automatically synthesizes when the reply threshold is reached
- Best for: Decision-making, consensus-building

**Free Debate**
- Agents respond to each other dynamically
- No automatic synthesis
- Best for: Exploration, brainstorming

**Sequential**
- Agents take turns in a defined order
- Structured and predictable
- Best for: Presentations, formal discussions

**Parallel**
- All agents respond simultaneously to each prompt
- Fast information gathering
- Best for: Quick surveys, initial reactions
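The supervised-debate flow can be sketched in plain Python. This is a conceptual illustration of the loop described above (agents reply until the threshold, then the supervisor synthesizes), not the library's actual implementation; all names are hypothetical:

```python
import random

def run_supervised_debate(agents, supervisor, topic, max_replies):
    """Conceptual sketch: free-form agent turns until max_replies,
    then one supervisor synthesis pass over the whole thread."""
    transcript = [("topic", topic)]
    while len(transcript) - 1 < max_replies:
        agent = random.choice(agents)          # dynamic turn-taking
        reply = f"{agent} responds to: {transcript[-1][1]}"
        transcript.append((agent, reply))
    # Threshold reached: supervisor reads the thread and concludes
    summary = f"{supervisor} synthesizes {len(transcript) - 1} replies"
    transcript.append((supervisor, summary))
    return transcript

log = run_supervised_debate(["Expert", "Analyst"], "Moderator", "Is X safe?", 4)
```

In free-debate mode the same loop would simply run without the final synthesis step.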
Each agent can be customized with:
- System Prompt: Core instructions and personality
- Personality Traits: Keywords that influence behavior
- Knowledge Base: Domain expertise
- Debate Stance:
  - balanced - Objective and fair
  - supportive - Builds on ideas
  - challenging - Questions and probes
- Model Override: Use different LLM per agent (optional)
Default model: qwen2.5:14b
Supported models (any Ollama model):
- `qwen2.5:14b` - Balanced performance
- `deepseek-r1:8b` - Fast reasoning
- `llama3.1:70b` - High quality (requires GPU)
- `phi4:14b` - Efficient alternative
Configure in:
- Scenario level (applies to all agents)
- Agent level (override for specific agent)
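For instance, a scenario file might set a default model and override it for a single agent (illustrative field names; check your scenario JSON files for the actual schema):

```json
{
  "model": "qwen2.5:14b",
  "agents": [
    {"name": "Deep Reasoner", "role": "analytical", "model": "deepseek-r1:8b"}
  ]
}
```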
```
┌─────────────────────────────────────────┐
│ Web UI (Browser)                        │
│  - Role Management                      │
│  - Scenario Editor                      │
│  - Swarm Control                        │
│  - Real-time Display                    │
└─────────────┬───────────────────────────┘
              │ HTTP/WebSocket
┌─────────────▼───────────────────────────┐
│ FastAPI Backend                         │
│  - REST API (roles, scenarios)          │
│  - Swarm Manager                        │
│  - WebSocket Server                     │
└─────────────┬───────────────────────────┘
              │
┌─────────────▼───────────────────────────┐
│ Nostr Swarm Core                        │
│  - Agent Orchestration                  │
│  - LLM Integration (Ollama)             │
│  - Nostr Protocol (nostr-sdk)           │
└─────────────┬───────────────────────────┘
              │
      ┌───────┴────────┐
      ▼                ▼
┌──────────┐    ┌──────────────┐
│  Ollama  │    │ Nostr Relay  │
│  (LLM)   │    │  (Messages)  │
└──────────┘    └──────────────┘
```
- `GET /api/roles` - List all roles
- `GET /api/roles/{id}` - Get specific role
- `POST /api/roles` - Create role
- `PUT /api/roles/{id}` - Update role
- `DELETE /api/roles/{id}` - Delete role
- `GET /api/scenarios` - List all scenarios
- `GET /api/scenarios/{id}` - Get specific scenario
- `POST /api/scenarios` - Create scenario
- `PUT /api/scenarios/{id}` - Update scenario
- `DELETE /api/scenarios/{id}` - Delete scenario
- `POST /api/swarm/start` - Start swarm
- `POST /api/swarm/stop` - Stop swarm
- `GET /api/swarm/status` - Get status
- `ws://localhost:8000/ws/nostr` - Real-time event stream
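The REST endpoints can be driven from a script with only the standard library. A minimal sketch — the endpoint paths come from the list above, but the request-body field names are assumptions based on the UI fields (check `backend.py` for the actual schema):

```python
import json
from urllib import request

BASE_URL = "http://localhost:8000"

def build_start_payload(scenario_id, relay_urls, ollama_url):
    """Body for POST /api/swarm/start (field names are illustrative)."""
    return {
        "scenario_id": scenario_id,
        "relay_urls": relay_urls,
        "ollama_url": ollama_url,
    }

def post(path, payload):
    """Minimal JSON POST helper against the FastAPI backend."""
    req = request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires the backend to be running):
# post("/api/swarm/start",
#      build_start_payload("ai_safety_debate",
#                          ["ws://localhost:7447"],
#                          "http://localhost:11434"))
```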
You can also run scenarios from the command line:
```bash
python run_json_scenario.py scenarios/ai_safety_debate.json \
    --relay ws://localhost:7447 \
    --config default \
    --verify
```

Or use the library directly from Python:

```python
import asyncio

from nostr_swarm import (
    AgentConfig,
    AgentRole,
    ConversationMode,
    ModelConfig,
    create_swarm,
)

async def main():
    # Create the swarm
    swarm = create_swarm(
        relay_urls=["ws://localhost:7447"],
        mode=ConversationMode.SUPERVISED_DEBATE,
    )

    # Add an agent
    agent_config = AgentConfig(
        name="Technical Expert",
        role=AgentRole.TECHNICAL,
        system_prompt="You are a technical expert...",
        nsec="<generated-key>",
        model_config=ModelConfig(),
    )
    await swarm.add_agent(agent_config)
    await swarm.start()

    # Run the debate against an existing root event
    root_event_id = "<your-root-event>"
    await swarm.run_supervised_debate(root_event_id)

    await swarm.stop()

asyncio.run(main())
```

Swarm won't start
- Check Ollama is running: `ollama list`
- Verify relay connection: Try a public relay
- Check browser console for errors
Agents not responding
- Verify the model is downloaded: `ollama pull qwen2.5:14b`
- Check Ollama logs for errors
- Ensure sufficient system resources
WebSocket disconnects
- Check firewall settings
- Verify relay is accessible
- Try reconnecting (automatic after 5s)
Slow performance
- Use smaller models (phi4:14b, qwen2.5:7b)
- Reduce number of agents
- Increase timeout settings
```
swarm/
├── backend.py              # FastAPI web server
├── nostr_swarm.py          # Core swarm library
├── run_json_scenario.py    # CLI runner
├── requirements.txt        # Python dependencies
├── static/                 # Frontend files
│   ├── index.html
│   ├── styles.css
│   └── app.js
├── roles/                  # Role templates (JSON)
├── scenarios/              # Scenarios (JSON)
└── conversation_memory/    # Saved conversations (auto-generated)
```
Contributions welcome! Areas for improvement:
- Additional role templates
- Example scenarios
- UI enhancements
- Performance optimizations
- Documentation
[Your License Here]
Built with:
- FastAPI - Web framework
- Ollama - Local LLM inference
- nostr-sdk - Nostr protocol
- Nostr Protocol - Decentralized communication
- Documentation: This README
- Issues: [GitHub Issues]
- Discussions: [GitHub Discussions]
Happy Swarming!