A modern, interactive AI chatbot built with Flask and the Mistral-7B model. It features a beautiful web interface with real-time messaging, typing indicators, and conversation history.
- 🤖 **AI-Powered Responses**: Uses the Mistral-7B model for intelligent conversations
- 💬 **Real-time Messaging**: Instant message exchange with typing indicators
- 🎨 **Modern UI**: Beautiful, responsive design with animations
- 📱 **Mobile Friendly**: Works on all devices
- 💾 **Conversation History**: Saves and loads chat history
- 📤 **Export Chat**: Export conversations as JSON files
- 🗑️ **Clear Chat**: Clear conversation history
- ⌨️ **Keyboard Shortcuts**: Ctrl+Enter to send, Escape to close modals
- 📏 **Auto-resize Input**: Textarea automatically resizes as you type
- ⚡ **Fast & Lightweight**: Optimized for performance
The chatbot features a modern interface with:
- Gradient header with bot avatar and status
- Message bubbles with user/bot avatars
- Typing indicators
- Auto-resizing input field
- Action buttons for clearing and exporting chat
- Python 3.8 or higher
- At least 8GB RAM (for model loading)
- Mistral-7B model file
1. **Clone or download the project**

   ```bash
   git clone https://github.com/iremteker/chatbot.git
   cd chatbot
   ```

2. **Create a virtual environment**

   ```bash
   python -m venv venv
   # On Windows
   venv\Scripts\activate
   # On macOS/Linux
   source venv/bin/activate
   ```

3. **Install dependencies**

   ```bash
   pip install -r requirements.txt
   ```

4. **Download the Mistral model**

   Download the Mistral-7B model file and place it in the `models/` directory:

   ```
   models/
   └── mistral-7b-instruct-v0.1.Q4_K_M.gguf
   ```

   You can download it from Hugging Face or other sources.

5. **Run the application**

   ```bash
   python app.py
   ```

6. **Open in browser**

   Navigate to `http://localhost:5000` in your web browser.
- Type your message in the input field
- Press Enter or click the send button
- Wait for the AI response
- Continue the conversation
- **Enter**: Send message
- **Shift + Enter**: New line in input
- **Ctrl/Cmd + Enter**: Send message
- **Escape**: Close modals
- **Clear Chat**: Click the trash icon to clear conversation history
- **Export Chat**: Click the download icon to export chat as JSON
- **Auto-scroll**: Chat automatically scrolls to the latest messages
Send a message and get an AI response.

**Request:**

```json
{
  "message": "Hello, how are you?"
}
```

**Response:**

```json
{
  "success": true,
  "response": "Hello! I'm doing well, thank you for asking. How can I help you today?",
  "message_id": "uuid-string"
}
```

Get conversation history for the current session.
**Response:**

```json
{
  "messages": [
    {
      "id": "uuid",
      "type": "user",
      "message": "Hello",
      "timestamp": "2024-01-01T12:00:00Z"
    }
  ]
}
```

Clear conversation history for the current session.

Check server health status.
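Client code can consume the response shape shown above like this (a sketch: the field names are taken from the examples, and how the raw JSON is obtained depends on which route `app.py` registers):

```python
import json

def parse_chat_response(raw: str) -> str:
    """Extract the bot reply from a chat response like the one above."""
    data = json.loads(raw)
    if not data.get("success"):
        raise RuntimeError("chat request failed")
    return data["response"]

# Example using the response shape documented above:
sample = '{"success": true, "response": "Hello!", "message_id": "uuid-string"}'
print(parse_chat_response(sample))  # → Hello!
```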
Edit `main.py` to adjust model parameters:

```python
llm = Llama(
    model_path="models/mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    n_ctx=2048,       # Context window size
    n_threads=4,      # Number of CPU threads
    n_gpu_layers=0,   # GPU layers (0 for CPU only)
    verbose=False,    # Verbose logging
)
```

Edit `app.py` to change server settings:

```python
app.secret_key = 'your-secret-key-here'  # Change for production
app.run(debug=True, host='0.0.0.0', port=5000)
```

```
chatbot/
├── app.py               # Flask application
├── main.py              # AI model and response generation
├── requirements.txt     # Python dependencies
├── README.md            # This file
├── models/
│   └── mistral-7b-instruct-v0.1.Q4_K_M.gguf  # AI model
├── static/
│   ├── style.css        # CSS styles
│   └── script.js        # JavaScript functionality
└── templates/
    └── index.html       # HTML template
```
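One detail worth noting about `main.py`: Mistral-7B-Instruct expects its `[INST]` prompt template. A minimal sketch of wrapping a user message before passing it to the `Llama` instance (the helper name here is illustrative, not necessarily what `main.py` uses):

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the Mistral-Instruct [INST] template."""
    return f"<s>[INST] {user_message.strip()} [/INST]"

# The resulting string is what gets passed to the model, e.g.:
#   output = llm(build_prompt("Hello!"), max_tokens=256)
#   reply = output["choices"][0]["text"]
print(build_prompt("Hello, how are you?"))
```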
- Ensure the model file exists in the `models/` directory
- Check that you have enough RAM (8GB+ recommended)
- Try reducing `n_threads` if you have limited CPU cores

- Reduce `n_ctx` (context window) for faster responses
- Use fewer `n_threads` if the CPU is overloaded
- Consider using GPU acceleration if available

- Close other applications to free up RAM
- Restart the application if memory usage is high
- Consider using a smaller model variant
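The tuning advice above maps directly onto the `Llama` constructor parameters in `main.py` (the values here are illustrative, not recommendations for every machine):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    n_ctx=1024,       # Smaller context window: faster responses, less RAM
    n_threads=2,      # Fewer threads if the CPU is overloaded
    n_gpu_layers=35,  # Offload layers if llama-cpp-python was built with GPU support
)
```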
- Modify `app.py` for new API endpoints
- Update `main.py` for model improvements
- Edit `templates/index.html` for UI changes
- Modify `static/style.css` for styling
- Update `static/script.js` for frontend functionality
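For example, adding a new API endpoint to `app.py` follows the usual Flask pattern (the route and payload here are hypothetical, not part of the existing app):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/model-info", methods=["GET"])
def model_info():
    # Hypothetical endpoint reporting which model file is configured.
    return jsonify({"model": "mistral-7b-instruct-v0.1.Q4_K_M.gguf"})
```

In the real app you would register the route on the existing `app` object in `app.py` rather than creating a new one.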
- Test on different browsers and devices
- Verify mobile responsiveness
- Check API endpoints with tools like Postman
This project is open source and available under the MIT License.
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
If you encounter any issues:
- Check the troubleshooting section
- Review the console for error messages
- Ensure all dependencies are installed
- Verify the model file is correctly placed
Note: This chatbot uses the Mistral-7B model which requires significant computational resources. For production use, consider using cloud services or more optimized model variants.