A modern, AI-powered question-answering system specialized in Decentralized Finance (DeFi) topics. Built with LangGraph orchestration, FastAPI, and React, featuring real-time streaming responses and advanced semantic search capabilities.
You can try the live app here (initial loading may take a while). Note that the Fly.io trial ends on July 19th, so the link will no longer be available after that date.
- Intelligent Q&A: Semantic search over a curated DeFi knowledge base
- Real-time Streaming: Word-by-word response streaming via WebSocket
- High Accuracy: OpenAI embeddings with similarity-based retrieval (see the sketch after this list)
- Confidence Scoring: Each answer includes a confidence percentage
- DeFi Specialized: Covers lending, DEXs, yield farming, staking, and more
- Concurrent Handling: Support for multiple simultaneous users
- Session Management: Isolated user sessions with automatic cleanup
- Monitoring & Metrics: Comprehensive logging and Prometheus metrics
- Error Resilience: Robust error handling with user-friendly messages
- Production Ready: Docker containerization and cloud deployment configs
- Rate Limiting: Configurable request rate limiting for API protection
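The retrieval behind the High Accuracy feature can be pictured with a short, hypothetical sketch. The model name, function names, and corpus handling below are illustrative assumptions, not the repository's actual code:

```python
# Hypothetical sketch of similarity-based retrieval over a Q&A corpus.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def top_k(question: str, corpus: list[str], corpus_vecs: np.ndarray, k: int = 3):
    """Return the k most similar corpus entries with cosine-similarity scores."""
    q = embed([question])[0]
    sims = (corpus_vecs @ q) / (np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(q))
    best = np.argsort(sims)[::-1][:k]
    return [(corpus[i], float(sims[i])) for i in best]
```

A cosine score from such a lookup is also a natural source for the per-answer confidence percentage.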
- Python 3.11+
- Node.js 18+
- OpenAI API Key
```bash
git clone https://github.com/yourusername/Chat_bot.git
cd Chat_bot

# Backend setup
cd backend
python -m venv env
source env/bin/activate  # On Windows: env\Scripts\activate
pip install -r requirements.txt

# Frontend setup
cd ../frontend
npm install
```

Create a `.env` file in the backend directory:
```bash
cp backend/environment_template.txt backend/.env
```

Edit `.env` with your settings:
```bash
OPENAI_API_KEY=your_openai_api_key_here
ENVIRONMENT=development
HOST=127.0.0.1
PORT=8000
```
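As a minimal sketch of how the backend might consume these values at startup (assuming python-dotenv; the actual configuration code in `backend/` may differ):

```python
# Hypothetical settings loading; the repository's actual config code may differ.
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads backend/.env into the process environment

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]          # required, no default
ENVIRONMENT = os.getenv("ENVIRONMENT", "development")  # optional, with defaults
HOST = os.getenv("HOST", "127.0.0.1")
PORT = int(os.getenv("PORT", "8000"))
```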
```bash
# Terminal 1: Start Backend
cd backend
python -m uvicorn main:app --reload

# Terminal 2: Start Frontend
cd frontend
npm start
```

- Frontend: http://localhost:3000
- API Documentation: http://localhost:8000/docs
- Monitoring Dashboard: http://localhost:8000/dashboard
```mermaid
graph TB
    UI[React Frontend] --> WS[WebSocket Connection]
    UI --> HTTP[HTTP API]
    WS --> MAIN[FastAPI Backend]
    HTTP --> MAIN
    MAIN --> LG[LangGraph Agent]
    MAIN --> SM[Session Manager]
    MAIN --> MON[Monitoring System]
    LG --> EMB[Embedding Service]
    LG --> CACHE[Cache Manager]
    LG --> DS[Dataset Loader]
    EMB --> OPENAI[OpenAI API]
    CACHE --> FILES[File System]
    DS --> JSON[DeFi Dataset]
```
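To make the LangGraph Agent box concrete, here is a hypothetical wiring of a two-node retrieve-then-answer graph; the node names, state fields, and bodies are illustrative assumptions, not the repository's implementation:

```python
# Hypothetical two-node LangGraph agent matching the diagram's shape.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class QAState(TypedDict):
    question: str
    context: list[str]
    answer: str
    confidence: float

def retrieve(state: QAState) -> dict:
    # Embed the question and fetch the nearest Q&A pairs
    # (the embedding service + cache manager in the diagram).
    return {"context": ["...retrieved passages..."], "confidence": 0.87}

def answer(state: QAState) -> dict:
    # Compose the final answer from the retrieved context.
    return {"answer": "...generated answer..."}

builder = StateGraph(QAState)
builder.add_node("retrieve", retrieve)
builder.add_node("answer", answer)
builder.set_entry_point("retrieve")
builder.add_edge("retrieve", "answer")
builder.add_edge("answer", END)
graph = builder.compile()

result = graph.invoke({"question": "What is impermanent loss?"})
```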
- Ask Questions: Type DeFi-related questions in the input field
- Real-time Responses: Watch answers stream in word-by-word
- Confidence Scores: See how confident the AI is in each answer
- Connection Status: Monitor your WebSocket connection status
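For a feel of the streaming flow described above, a hypothetical client might look like the following; the endpoint path and JSON message schema are assumptions, not the documented API:

```python
# Hypothetical streaming client; endpoint path and message schema are assumed.
import asyncio
import json
import websockets  # pip install websockets

async def ask(question: str) -> None:
    async with websockets.connect("ws://localhost:8000/ws") as ws:
        await ws.send(json.dumps({"question": question}))
        async for raw in ws:
            msg = json.loads(raw)
            if msg.get("type") == "token":
                print(msg["word"], end=" ", flush=True)  # word-by-word stream
            elif msg.get("type") == "done":
                print(f"\n(confidence: {msg.get('confidence', 0):.0%})")
                break

asyncio.run(ask("What is impermanent loss in liquidity provision?"))
```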
- "What is impermanent loss in liquidity provision?"
- "How does Compound's interest rate model work?"
- "What are the risks of yield farming?"
- "How do flash loans work on Aave?"
- "What is the difference between APR and APY in DeFi?"
```bash
# Build and run with Docker
cd backend
docker build -t defi-qa-bot .
docker run -p 8000:8000 --env-file .env defi-qa-bot
```

```bash
# Copy environment template
cp docker.env.example .env
# Edit .env with your OPENAI_API_KEY

# Start all services
docker-compose up --build -d

# Access application
# Frontend: http://localhost
# Backend: http://localhost:8000
```

This application is currently deployed on Fly.io due to specific technical requirements. You can access the live demo at: https://defi-qa-frontend.fly.dev/
Important: This application cannot be deployed on Vercel due to the 250MB serverless function limit. The combination of:
- Large ML/AI dependencies (OpenAI, LangGraph, sentence transformers)
- Extensive NLP libraries and models
- The DeFi dataset and embeddings cache
- FastAPI and associated dependencies

results in a bundle size that exceeds Vercel's 250MB unzipped limit for serverless functions. While a `vercel.json` configuration file exists in the repository, deployment will fail with a "Serverless Function has exceeded the unzipped maximum size" error.
Fly.io was chosen as the deployment platform because:
- No function size limits: Supports applications with large dependencies
- Persistent storage: Better handling of cache files and datasets
- Container-based: Full Docker support for complex Python applications
- WebSocket support: Native support for real-time features
- Global edge network: Fast performance worldwide
| Platform | Configuration File | Status | Notes |
|---|---|---|---|
| Fly.io | `fly.toml` | ✅ Recommended | Currently deployed |
| Heroku | `Procfile` | ✅ Compatible | Large slug size |
| Railway | `railway.toml` | ✅ Compatible | Good Docker support |
| Render | `render.yaml` | ✅ Compatible | Blueprint deployment |
| Vercel | `vercel.json` | ❌ Not Compatible | 250MB limit exceeded |
```bash
# Install Fly CLI
curl -L https://fly.io/install.sh | sh

# Deploy the application
fly deploy

# Set environment variables
fly secrets set OPENAI_API_KEY=your-key-here
```

Production environment variables:

```bash
OPENAI_API_KEY=your-openai-key
ENVIRONMENT=production
DEBUG=false
ALLOWED_ORIGINS=https://your-domain.com
```

```
Chat_bot/
├── backend/              # FastAPI backend (see backend/README.md)
│   ├── main.py           # Application entry point
│   ├── agents/           # LangGraph agents
│   ├── services/         # Business logic
│   ├── infrastructure/   # Monitoring, logging
│   └── tests/            # Test suite
├── frontend/             # React frontend (see frontend/README.md)
│   └── src/
├── data/                 # DeFi Q&A dataset
└── cache/                # Cached embeddings
```
```bash
# Backend tests
cd backend
python -m pytest tests/ -v

# Integration tests
python test_integration.py
python test_api.py
```
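A test in the backend suite might resemble the following hypothetical example (the endpoint and file layout are assumed):

```python
# Hypothetical backend test; actual tests live under backend/tests/.
from fastapi.testclient import TestClient
from main import app  # the FastAPI app from backend/main.py

client = TestClient(app)

def test_health_endpoint():
    resp = client.get("/health")
    assert resp.status_code == 200
```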
```bash
# Backend development mode
cd backend
python -m uvicorn main:app --reload --host 0.0.0.0 --port 8000

# Frontend development mode
cd frontend
npm start
```

Access the monitoring dashboard at `/dashboard` for:
- Health Score: Overall system health (0-100)
- Performance Metrics: Response times and request counts
- System Resources: Memory and CPU usage
- WebSocket Activity: Active connections and message flow
- Error Tracking: Real-time error monitoring
- Health Check: `GET /health`
- API Documentation: `GET /docs`
- Metrics: `GET /metrics` (Prometheus format)
- Dashboard: `GET /dashboard`
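A quick, hypothetical smoke test against a locally running backend:

```python
# Hypothetical smoke test; assumes the backend is running on localhost:8000.
import requests

BASE = "http://localhost:8000"

assert requests.get(f"{BASE}/health").status_code == 200
metrics = requests.get(f"{BASE}/metrics").text
print(metrics.splitlines()[:5])  # first few Prometheus-format lines
```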
- Response Time: < 2 seconds average for semantic search
- Concurrent Users: Tested with 50+ simultaneous connections
- Throughput: 200+ requests per minute per instance
- Memory Usage: ~200MB baseline, scales with concurrent sessions
- Cache Hit Rate: 80%+ for repeated questions
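The cache hit rate above comes from reusing embeddings across repeated questions. Below is a minimal sketch of such a file-backed cache, assuming a hash-keyed JSON layout; the repository's actual cache format is not documented here:

```python
# Hypothetical file-backed embedding cache; layout and hashing are assumptions.
import hashlib
import json
import os

CACHE_DIR = "cache"

def _path(text: str) -> str:
    key = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return os.path.join(CACHE_DIR, f"{key}.json")

def get_or_compute(text: str, compute):
    path = _path(text)
    if os.path.exists(path):            # cache hit: skip the embedding API call
        with open(path) as f:
            return json.load(f)
    vec = compute(text)                 # cache miss: compute and persist
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "w") as f:
        json.dump(vec, f)
    return vec
```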
We welcome contributions! Here's how to get started:
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Follow the development setup in Quick Start
- Make your changes and add tests
- Run the test suite: `python -m pytest`
- Commit with conventional commits: `git commit -m "feat: add amazing feature"`
- Push and create a Pull Request
- Expand Dataset: Add more DeFi Q&A pairs
- Improve AI: Enhance semantic search accuracy
- UI/UX: Improve frontend design and user experience
- Analytics: Add more monitoring and analytics features
- Integrations: Add support for more data sources
- Backend Documentation: See `backend/README.md`
- Frontend Documentation: See `frontend/README.md`
- Issues: Report bugs via GitHub Issues
- Discussions: Join our GitHub Discussions for questions
This project is licensed under the MIT License - see the LICENSE file for details.
- OpenAI: For providing the GPT and embedding models
- LangGraph: For the excellent graph-based agent framework
- FastAPI: For the high-performance async web framework
- React: For the modern frontend framework
Built with ❤️ for the DeFi community