Backend for the Emotion Detection application built with FastAPI and MongoDB.
- User authentication with JWT tokens
- Real-time updates with WebSockets
- Emotion tracking and storage
- RESTful API for emotion data access
- MongoDB for efficient document storage
- AI-powered emotion detection from images
Authentication
- POST `/api/auth/register` - Register a new user
- POST `/api/auth/login` - Login and get JWT token
- GET `/api/auth/me` - Get current user profile
- POST `/api/auth/logout` - Logout (clears auth cookies)
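The JWT tokens issued by the login endpoint are signed with the `SECRET_KEY` from the environment. As an illustration of the mechanics (not the project's actual code, which likely uses a library such as python-jose), here is a minimal stdlib-only sketch of HS256 token creation:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # URL-safe base64 without padding, as JWT requires
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: str) -> str:
    # An HS256-signed token has three dot-separated parts: header.payload.signature
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

# Hypothetical claims; expiry mirrors ACCESS_TOKEN_EXPIRE_MINUTES from the .env setup
token = make_jwt({"sub": "user@example.com", "exp": int(time.time()) + 1440 * 60}, "your_secret_key")
```

The server can verify a token by recomputing the signature over the first two parts and comparing it to the third.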
Users
- GET `/api/users/me` - Get current user profile
- GET `/api/users/{user_id}` - Get user by ID
- PATCH `/api/users/me/emotion` - Update current user's emotion
Emotions
- GET `/api/emotions/` - Get emotions for current user
- POST `/api/emotions/` - Create a new emotion entry
- GET `/api/emotions/{emotion_id}` - Get specific emotion entry
- DELETE `/api/emotions/{emotion_id}` - Delete an emotion entry
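A minimal client sketch for the emotion endpoints above, using only the standard library. The request body fields (`emotion`, `confidence`) are an assumption based on the detection response shape shown later in this README:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # local dev server from the setup section

def auth_headers(token: str) -> dict:
    # All emotion endpoints require a Bearer token from /api/auth/login
    return {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

def create_emotion(token: str, emotion: str, confidence: float) -> dict:
    # POST /api/emotions/ with a JSON body; field names are assumed
    req = urllib.request.Request(
        f"{BASE_URL}/api/emotions/",
        data=json.dumps({"emotion": emotion, "confidence": confidence}).encode(),
        headers=auth_headers(token),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running server and a valid token):
# create_emotion("your-jwt-token", "happy", 0.92)
```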
WebSockets
- WebSocket `/api/ws` - Real-time emotion updates
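A sketch of consuming the `/api/ws` endpoint with the third-party `websockets` package. The message shape (`type`, `user_id`, `emotion` fields) is a hypothetical example, not the documented protocol — see `websocket_README.md` for the real details:

```python
import asyncio
import json

def make_emotion_message(user_id: str, emotion: str) -> str:
    # Hypothetical message format for an emotion update broadcast
    return json.dumps({"type": "emotion_update", "user_id": user_id, "emotion": emotion})

async def listen(url: str = "ws://localhost:8000/api/ws"):
    # Requires `pip install websockets`; imported lazily so the helpers above
    # work without it
    import websockets
    async with websockets.connect(url) as ws:
        async for raw in ws:
            print(json.loads(raw))

# Example (requires a running server):
# asyncio.run(listen())
```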
Emotion Detection (AI-powered)
- POST `/api/emotion-detection/detect` - Detect emotions in an uploaded image (authenticated)
- POST `/api/emotion-detection/detect-anonymous` - Detect emotions without authentication
- Python 3.9+
- MongoDB database (local or MongoDB Atlas)
1. Clone the repository

   ```bash
   git clone https://github.com/yourusername/emotion-detection-backend.git
   cd emotion-detection-backend
   ```

2. Create a virtual environment

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

4. Set up environment variables: create a `.env` file in the root directory with:

   ```
   DATABASE_URL=mongodb+srv://username:password@cluster.mongodb.net/dbname
   SECRET_KEY=your_secret_key
   ACCESS_TOKEN_EXPIRE_MINUTES=1440
   ```

5. Run the application

   ```bash
   uvicorn app.main:app --reload
   ```

6. Access the API documentation at http://localhost:8000/docs
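The `.env` values above are read at startup. As an illustration of what that loading step amounts to (the project may use pydantic-settings or python-dotenv instead; this stdlib sketch is only for orientation):

```python
import os

def load_settings(env=os.environ) -> dict:
    # Keys mirror the .env file from the setup step above;
    # DATABASE_URL and SECRET_KEY are required, the expiry has a default
    return {
        "database_url": env["DATABASE_URL"],
        "secret_key": env["SECRET_KEY"],
        "access_token_expire_minutes": int(env.get("ACCESS_TOKEN_EXPIRE_MINUTES", "1440")),
    }
```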
This application is configured for deployment on Render.
- `render.yaml` - Render configuration for the web service and database
- `build.sh` - Build script to install dependencies and run migrations
- `main.py` - Entry point for the application
- `Procfile` - Alternative startup command specification
- Fork this repository to your GitHub account
- Create a new Render account or log in
- Click "New" and select "Blueprint" from your Render dashboard
- Connect your GitHub account and select your fork of this repository
- Render will automatically detect the `render.yaml` file and set up your services
- Provide the required environment variables:
  - `DATABASE_URL` - MongoDB connection string (Render can provide a MongoDB instance)
  - `SECRET_KEY` - Secret key for JWT token signing
  - `ACCESS_TOKEN_EXPIRE_MINUTES` - Token expiration time in minutes (e.g., 1440 for 24 hours)
- Deploy your application
See `websocket_README.md` for WebSocket implementation details.
The application uses MongoDB for data storage. In production, it's recommended to use MongoDB Atlas.
The API includes AI-powered emotion detection based on the Emotion Recognition model.
- Upload an image with one or more faces
- The API detects faces using OpenCV
- Each face is processed through a pre-trained deep learning model
- The API returns the detected emotion and confidence level
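The steps above can be sketched as follows. `detect_faces` assumes OpenCV's stock frontal-face Haar cascade, and `top_emotion` assumes the model emits one probability per label in the order listed further down — both are illustrative, not the project's actual code:

```python
# Label order is an assumption; see the supported-emotions list below
EMOTIONS = ["angry", "disgust", "scared", "happy", "sad", "surprised", "neutral"]

def top_emotion(probabilities):
    # Pair each label with its score and keep the highest-confidence one
    label, confidence = max(zip(EMOTIONS, probabilities), key=lambda pair: pair[1])
    return label, round(confidence, 2)

def detect_faces(image_bgr):
    # Face detection via OpenCV's bundled Haar cascade
    # (requires `pip install opencv-python`; imported lazily)
    import cv2
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Returns a list of (x, y, w, h) face rectangles
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```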
```
POST /api/emotion-detection/detect-anonymous
Content-Type: multipart/form-data

file: [binary image data]
```
```json
{
  "emotion": "happy",
  "confidence": 0.92
}
```

If no face is detected, you'll get:

```json
{
  "emotion": "no_face",
  "confidence": 0.0
}
```

Use the included `test_emotion_detection.py` script to test the API:
```bash
python test_emotion_detection.py path/to/your/image.jpg
```

- angry
- disgust
- scared
- happy
- sad
- surprised
- neutral
Run the test suite with:

```bash
pytest
```

Format the code with:

```bash
black app
```

This project is licensed under the MIT License - see the LICENSE file for details.