A three-part system that captures Discord conversations, analyzes them for themes, and displays real-time insights.
```
Discord → Part 1 (Bot) → Redis → Part 2 (Intelligence) → Redis → Part 3 (Dashboard)
               ↓                          ↓                              ↓
         Audio + Text               Theme Analysis                 Web Interface
```
- Docker and Docker Compose
- Discord Bot Token
- OpenAI API Key (optional; required only if using OpenAI for transcription/LLM)
```bash
# Clone and setup
git clone <repository>
cd collective-intelligence-platform

# Copy environment template
cp .env.example .env

# Edit .env with your credentials
nano .env

# Build and start all services
docker-compose up -d

# View logs
docker-compose logs -f

# Access dashboard
open http://localhost:8000
```
- File: `transcription_bot.py`
- Function: Captures voice and text from Discord
- Output: Sends to `transcription_queue` in Redis
- Providers: Local Whisper, OpenAI API, or Ollama
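The bot's handoff to Redis can be sketched with `redis-py`. This is a minimal illustration, not code from `transcription_bot.py`; the helper names `build_transcription_message` and `publish_transcription` are hypothetical, and only the queue name and message format come from this README:

```python
import json
from datetime import datetime, timezone

TRANSCRIPTION_QUEUE = "transcription_queue"  # queue name from the README

def build_transcription_message(msg_type, content, username, channel):
    """Build a message matching the {type, content, username, channel, timestamp} format."""
    return {
        "type": msg_type,  # e.g. "voice_transcription"
        "content": content,
        "username": username,
        "channel": channel,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def publish_transcription(client, message):
    """Push the JSON-encoded message onto the Redis list used as a queue."""
    client.rpush(TRANSCRIPTION_QUEUE, json.dumps(message))
```

With a live connection this would be called as `publish_transcription(redis.Redis(), build_transcription_message(...))`.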
- File: `intelligence_processor.py`
- Function: Analyzes messages for themes using LangChain
- Input: Reads from `transcription_queue`
- Output: Sends to `intelligence_queue`
- Features: Theme identification, bridge concepts, synthesis
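The read-analyze-write cycle can be sketched as a polling loop. `drain_queue` and `build_insight` are illustrative names (not the processor's actual API), and the LangChain analysis step is deliberately elided:

```python
import json

def drain_queue(client, queue_name, max_items=100):
    """Pop up to max_items JSON messages from a Redis list and decode them."""
    messages = []
    for _ in range(max_items):
        raw = client.lpop(queue_name)
        if raw is None:  # queue is empty
            break
        messages.append(json.loads(raw))
    return messages

def build_insight(themes, bridge_concepts, summary):
    """Assemble the {themes, bridge_concepts, summary} payload for intelligence_queue."""
    return {
        "themes": themes,
        "bridge_concepts": bridge_concepts,
        "summary": summary,
    }
```

In the real processor, the drained messages would be fed to the LLM every `SYNTHESIS_INTERVAL` seconds and the resulting insight pushed to `intelligence_queue`.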
- File: `dashboard.py`
- Function: Web interface for viewing insights
- Input: Reads from `intelligence_queue`
- Features: Live updates, theme visualization, Discord notifications
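The dashboard's rendering of a theme can be sketched as a pure formatting helper. `format_theme` is a hypothetical name for illustration; the real dashboard pushes this data over WebSocket rather than returning strings:

```python
def format_theme(theme):
    """Render one theme dict from intelligence_queue as a display line."""
    confidence_pct = round(theme.get("confidence", 0.0) * 100)
    keywords = ", ".join(theme.get("keywords", []))
    return f'{theme["title"]} ({confidence_pct}%): {keywords}'
```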
| Command | Description |
|---|---|
| `!join` | Join your voice channel |
| `!leave` | Leave voice channel |
| `!status` | Check bot status |
| `!provider [name]` | Switch transcription provider |
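The parsing of these `!`-prefixed commands can be sketched independently of discord.py; `parse_command` is an illustrative helper, not the bot's actual implementation:

```python
def parse_command(text):
    """Split a '!command arg1 arg2' message into (name, args); None if not a command."""
    if not text.startswith("!"):
        return None
    parts = text[1:].split()
    if not parts:  # a bare "!" is not a command
        return None
    return parts[0].lower(), parts[1:]
```

In the bot, the returned name would be dispatched to the matching handler (join, leave, status, provider).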
```bash
# Test Part 1 (Discord Bot)
python test_suite.py discord

# Test Part 2 (Intelligence)
python test_suite.py intelligence

# Test Part 3 (Dashboard)
python test_suite.py dashboard

# Test Complete Pipeline
python test_suite.py pipeline
```
All configuration is done through the `.env` file:

```bash
# Discord
DISCORD_BOT_TOKEN=your_token_here

# OpenAI (optional)
OPENAI_API_KEY=your_key_here

# Transcription
TRANSCRIPTION_PROVIDER=local   # local, openai, ollama
WHISPER_MODEL=base             # tiny, base, small, medium, large

# Intelligence Processing
LLM_PROVIDER=openai            # openai, local
EMBEDDING_PROVIDER=openai      # openai, local
SYNTHESIS_INTERVAL=30          # seconds
MAX_THEMES=5

# Dashboard (optional)
DISCORD_WEBHOOK_URL=your_webhook_url
```
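A minimal sketch of how a service might read these settings, assuming the values have been exported into the process environment (e.g. by docker-compose or python-dotenv). `load_settings` is an illustrative helper; the defaults mirror the template above:

```python
import os

def load_settings():
    """Read pipeline settings from the environment, with the template's defaults."""
    return {
        "transcription_provider": os.environ.get("TRANSCRIPTION_PROVIDER", "local"),
        "whisper_model": os.environ.get("WHISPER_MODEL", "base"),
        "llm_provider": os.environ.get("LLM_PROVIDER", "openai"),
        "synthesis_interval": int(os.environ.get("SYNTHESIS_INTERVAL", "30")),
        "max_themes": int(os.environ.get("MAX_THEMES", "5")),
    }
```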
- Dashboard: http://localhost:8000
- API Health: http://localhost:8000/api/health
- Logs: `docker-compose logs -f [service-name]`
1. Discord → Redis
   - Text messages and voice transcriptions
   - Format: `{type, content, username, channel, timestamp}`
2. Redis → Intelligence
   - Processes every 30 seconds
   - Identifies themes and connections
   - Format: `{themes, bridge_concepts, summary}`
3. Redis → Dashboard
   - Real-time WebSocket updates
   - REST API for historical data
From Discord Bot:

```json
{
  "type": "voice_transcription",
  "content": "I think we should use AI for code reviews",
  "username": "Alice",
  "channel": "dev-talk",
  "timestamp": "2024-01-20T10:30:00Z"
}
```

From Intelligence Processor:

```json
{
  "themes": [
    {
      "title": "AI in Development",
      "description": "Discussion about AI tools for coding",
      "keywords": ["AI", "code review", "automation"],
      "confidence": 0.85
    }
  ],
  "bridge_concepts": [...],
  "summary": "Active discussion about integrating AI into development workflows"
}
```
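A consumer can defensively validate these payloads before processing them. `validate_transcription` is an illustrative check, not part of the shipped code; the required field names come from the message format above:

```python
REQUIRED_FIELDS = ("type", "content", "username", "channel", "timestamp")

def validate_transcription(message):
    """Return the list of missing fields for a transcription message (empty if valid)."""
    return [field for field in REQUIRED_FIELDS if field not in message]
```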
- Check that the bot has voice permissions in Discord
- Verify the `!join` command was run while in a voice channel
- Check that `!status` shows recording
- Verify the transcription provider is configured
- Check logs: `docker-compose logs discord-bot`
- Ensure there are enough messages (minimum of 5)
- Check intelligence processor logs
- Verify OpenAI API key if using OpenAI
- Check the WebSocket connection in the browser console
- Verify Redis is running: `docker-compose ps`
- Check the dashboard logs
- Reduce the Whisper model size for faster transcription
- Increase `SYNTHESIS_INTERVAL` to reduce processing frequency
- Use local embeddings to avoid API costs
- Adjust `MESSAGE_BUFFER_SIZE` based on activity level
```bash
# Run services individually for development
docker-compose up redis
python transcription_bot.py
python intelligence_processor.py
python dashboard.py

# Or use development mode
docker-compose -f docker-compose.dev.yml up
```
MIT