
🎭 Lightweight Offline Emotion Detection System

A complete offline emotion detection system built on scikit-learn instead of TensorFlow, making it easier to install and deploy. It offers real-time camera detection, image-upload analysis, and custom model training.

Demo: see docs/demo.gif in the repository.

🎯 Features

  • 🚀 Easy Installation: No TensorFlow dependencies - just scikit-learn
  • 📹 Real-time Detection: Live camera emotion detection
  • 🖼️ Image Upload: Batch processing of uploaded images
  • 🧠 Custom Training: Train your own models with the FER2013 dataset
  • ⚡ High Performance: 70-90% confidence scores with enhanced feature extraction
  • 🔧 Multiple Models: RandomForest, SVM, and GradientBoosting options
  • 📊 Advanced Features: LBP texture, gradient analysis, and facial region focus

📊 Supported Emotions

ID | Emotion  | Emoji
---|----------|------
 0 | Angry    | 😠
 1 | Disgust  | 🤢
 2 | Fear     | 😨
 3 | Happy    | 😊
 4 | Sad      | 😢
 5 | Surprise | 😲
 6 | Neutral  | 😐
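
In code, this mapping is just a small lookup table. A sketch of what it might look like (the variable name is illustrative, not necessarily the one used in emotion_app.py):

# Label-to-emotion mapping matching the table above (name is illustrative)
EMOTIONS = {
    0: "Angry", 1: "Disgust", 2: "Fear", 3: "Happy",
    4: "Sad", 5: "Surprise", 6: "Neutral",
}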

🛠️ Installation

Prerequisites

  • Python 3.8 or higher
  • Webcam (for real-time detection)

Quick Setup

# Clone the repository
git clone https://github.com/ancur4u/Emotion_Detector.git
cd Emotion_Detector

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run the application
streamlit run emotion_app.py

🚀 Quick Start

Option 1: Use Pre-trained Model

  1. Download a pre-trained model (if available) and place it as emotion_model.pkl
  2. Run the app: streamlit run emotion_app.py
  3. Go to "Real-time Detection" and start detecting emotions!
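
Loading a saved model is a one-liner. The sketch below assumes the .pkl file was written with Python's pickle module; if the trainer saved it with joblib, swap in joblib.load instead:

# Minimal sketch: load a pre-trained model from disk (assumes pickle format)
import pickle

with open("emotion_model.pkl", "rb") as f:
    model = pickle.load(f)

print(type(model))  # quick sanity check of what was loaded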

Option 2: Train Your Own Model

  1. Download the FER2013 dataset from Kaggle
  2. Convert to CSV format (if needed)
  3. Train a model:
    python improved_trainer.py path/to/fer2013.csv 35000
  4. Copy the generated model: cp improved_emotion_model_*.pkl emotion_model.pkl
  5. Run the app: streamlit run emotion_app.py

📁 Project Structure

Emotion_Detector/
├── emotion_app.py              # Main Streamlit application
├── improved_trainer.py         # Enhanced model trainer
├── requirements.txt            # Python dependencies
├── README.md                  # This file
├── docs/                      # Documentation and images
│   ├── demo.gif
│   ├── architecture.png
│   └── results/
├── models/                    # Saved models directory
│   └── .gitkeep
├── data/                      # Dataset directory
│   └── .gitkeep
└── tests/                     # Unit tests
    └── test_emotion_detector.py

🔧 Technical Details

Enhanced Feature Extraction

Our system uses a sophisticated feature extraction pipeline:

  1. Raw Pixel Values (2304 features)
    • Normalized 48x48 grayscale pixels
  2. Facial Region Analysis (4 features)
    • Eye region statistics (mean, std)
    • Mouth region statistics (mean, std)
  3. Edge Detection (1 feature)
    • Canny edge density for facial structure
  4. Gradient Features (2 features)
    • Sobel gradient magnitude statistics
  5. Local Binary Pattern (LBP) (32 features)
    • Texture analysis for facial patterns

Total: 2343 enhanced features
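
For concreteness, here is a minimal sketch of such a pipeline. The region slices, Canny thresholds, and LBP settings are illustrative assumptions, not the exact values used in emotion_app.py:

import cv2
import numpy as np
from skimage.feature import local_binary_pattern  # requires scikit-image

def extract_features(img48):
    """Sketch: 48x48 uint8 grayscale face -> 2343-dim feature vector."""
    parts = [img48.flatten() / 255.0]                      # 2304 raw pixels

    eyes, mouth = img48[8:24, :], img48[30:46, :]          # rough regions (assumed)
    parts.append([eyes.mean(), eyes.std(),
                  mouth.mean(), mouth.std()])              # 4 region stats

    edges = cv2.Canny(img48, 100, 200)
    parts.append([np.count_nonzero(edges) / edges.size])   # 1 edge density

    gx = cv2.Sobel(img48, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(img48, cv2.CV_64F, 0, 1)
    mag = np.hypot(gx, gy)
    parts.append([mag.mean(), mag.std()])                  # 2 gradient stats

    lbp = local_binary_pattern(img48, P=8, R=1)            # values in [0, 255]
    hist, _ = np.histogram(lbp, bins=32, range=(0, 256), density=True)
    parts.append(hist)                                     # 32 texture bins

    return np.concatenate([np.asarray(p, dtype=np.float64) for p in parts])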

Model Architecture

Input Image (48x48) 
    ↓
Enhanced Feature Extraction (2343 features)
    ↓
StandardScaler Normalization
    ↓
PCA Dimensionality Reduction (150 components)
    ↓
GradientBoosting/RandomForest/SVM Classifier
    ↓
Emotion Prediction + Confidence Score
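
In scikit-learn terms, this stack maps directly onto a standard Pipeline. A minimal sketch (the parameter values follow the diagram above; everything else uses library defaults):

from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Scale features -> project onto 150 PCA components -> classify
pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=150),
    GradientBoostingClassifier(random_state=42),
)

# Usage: pipeline.fit(X_train, y_train)
# The confidence score is the max of pipeline.predict_proba(X) per sample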

Performance Metrics

Model            | Accuracy | Avg Confidence | Training Time
-----------------|----------|----------------|--------------
GradientBoosting | 75-85%   | 80-90%         | 20-40 min
RandomForest     | 70-80%   | 70-85%         | 10-20 min
SVM              | 70-75%   | 75-85%         | 15-30 min

📖 Usage Guide

Real-time Detection

  1. Open the app and go to "Real-time Detection"
  2. Load your model using "Load Pre-trained Model"
  3. Enable "Show debug information" for detailed insights
  4. Use "Take a picture" to capture and analyze emotions
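
Under the hood, the "Take a picture" step corresponds to Streamlit's camera input widget. A sketch of the capture-and-decode step, with the surrounding app logic omitted:

import cv2
import numpy as np
import streamlit as st

picture = st.camera_input("Take a picture")
if picture is not None:
    # Decode the captured bytes into an OpenCV BGR image
    buf = np.frombuffer(picture.getvalue(), dtype=np.uint8)
    frame = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    st.image(frame, channels="BGR")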

Image Upload

  1. Go to "Image Upload" page
  2. Upload an image (JPG, JPEG, PNG)
  3. View detected faces and emotion predictions
  4. Download annotated results
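
Face detection on an uploaded image typically uses OpenCV's bundled Haar cascade. The snippet below is a sketch of that step; the input path is hypothetical, and the commented prediction lines assume a model like the pipeline sketched earlier:

import cv2

frame = cv2.imread("photo.jpg")  # hypothetical input path

# OpenCV ships Haar cascades with the package
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
    # features = extract_features(face)      # see the feature sketch above
    # label = model.predict([features])[0]   # emotion ID 0-6
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)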

Model Training

  1. Go to "Model Training" page
  2. Upload FER2013 dataset or specify local file path
  3. Configure training parameters:
    • Model type (GradientBoosting recommended)
    • PCA components (150 recommended)
    • Sample size for faster training
  4. Start training and monitor progress
  5. Evaluate results with confusion matrix and classification report

🎯 Training Your Own Model

Step 1: Prepare Dataset

# Download FER2013 from Kaggle
# Ensure CSV format with 'pixels' and 'emotion' columns
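
In FER2013's CSV format, the pixels column stores each 48x48 face as 2304 space-separated grayscale values. A minimal loading sketch using the column names described above:

import numpy as np
import pandas as pd

df = pd.read_csv("fer2013.csv")
# Each row: an emotion label (0-6) plus 2304 space-separated pixel values
X = np.stack([
    np.array(p.split(), dtype=np.uint8).reshape(48, 48)
    for p in df["pixels"]
])
y = df["emotion"].to_numpy()
print(X.shape, y.shape)  # (n_samples, 48, 48), (n_samples,)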

Step 2: Train Model

# Full dataset (28k samples, ~40 min)
python improved_trainer.py fer2013.csv

# Smaller sample for testing (15k samples, ~20 min)
python improved_trainer.py fer2013.csv 15000

# Quick test (5k samples, ~5 min)
python improved_trainer.py fer2013.csv 5000

Step 3: Model Selection

When prompted, choose:

  • GradientBoosting (1) - Best accuracy, slower training
  • RandomForest (2) - Good balance, faster training
  • SVM (3) - Good for small datasets

Step 4: Deploy Model

# Copy trained model
cp improved_emotion_model_*.pkl emotion_model.pkl

# Test in Streamlit
streamlit run emotion_app.py

🔬 Advanced Configuration

Feature Extraction Customization

Modify enhanced_feature_extraction() in emotion_app.py. The HOG example below uses scikit-image's hog function (install scikit-image first); the original snippet left the HOG helper undefined:

import numpy as np
from skimage.feature import hog  # HOG implementation from scikit-image

def enhanced_feature_extraction(self, img):
    # Add your custom features here
    features = []

    # Example: add HOG features (8 orientations over 8x8 pixel cells)
    hog_features = hog(img, orientations=8, pixels_per_cell=(8, 8),
                       cells_per_block=(1, 1))
    features.extend(hog_features)

    return np.array(features)
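
Note: adding features changes the vector length, so a model trained with the original 2343-feature pipeline will raise the "Feature mismatch error" described under Troubleshooting. Retrain after changing this function.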

Model Hyperparameter Tuning

Modify training parameters in improved_trainer.py:

# GradientBoosting tuning
from sklearn.ensemble import GradientBoostingClassifier

model = GradientBoostingClassifier(
    n_estimators=300,        # More trees
    learning_rate=0.05,      # Slower learning
    max_depth=8,             # Deeper trees
    subsample=0.8,           # Stochastic boosting: sample 80% of rows per tree
    random_state=42
)
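
To choose among such settings systematically, scikit-learn's grid search works out of the box. The grid below is an illustrative assumption, not the project's tuned values:

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {                      # illustrative search space
    "n_estimators": [100, 300],
    "learning_rate": [0.05, 0.1],
    "max_depth": [4, 8],
}
search = GridSearchCV(GradientBoostingClassifier(random_state=42),
                      param_grid, cv=3, n_jobs=-1)
# search.fit(X_train, y_train); then inspect search.best_params_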

🐛 Troubleshooting

Common Issues

  1. "Feature mismatch error"

    • Ensure you're using the enhanced feature extraction
    • Check that model and app versions match
  2. Low confidence scores

    • Train with more data
    • Use GradientBoosting instead of RandomForest
    • Ensure good lighting in images
  3. "No faces detected"

    • Improve lighting conditions
    • Ensure face is clearly visible
    • Try different camera angles
  4. Training takes too long

    • Reduce sample size: python improved_trainer.py dataset.csv 10000
    • Use RandomForest instead of GradientBoosting
    • Reduce PCA components to 100

Performance Optimization

# For faster inference, cap BLAS/OpenMP thread counts
export OMP_NUM_THREADS=4
export OPENBLAS_NUM_THREADS=4

# For reproducible runs (fixes Python's hash randomization)
export PYTHONHASHSEED=0

🤝 Contributing

We welcome contributions! Please see our Contributing Guidelines.

Development Setup

# Clone and setup
git clone https://github.com/yourusername/Emotion_Detector.git
cd Emotion_Detector

# Install development dependencies
pip install -r requirements-dev.txt

# Run tests
python -m pytest tests/

# Run linting
flake8 emotion_app.py improved_trainer.py

Adding New Features

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature-name
  3. Make your changes and add tests
  4. Run tests: python -m pytest
  5. Submit a pull request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • FER2013 Dataset: Original emotion dataset from Kaggle
  • scikit-learn: For providing excellent machine learning tools
  • OpenCV: For computer vision capabilities
  • Streamlit: For the amazing web app framework

📞 Support

🚀 What's Next?

  • Real-time video emotion tracking
  • Emotion intensity scoring
  • Multi-face emotion analysis
  • Mobile app version
  • REST API for external integration
  • Docker containerization
  • Pre-trained model zoo

⭐ Star this repository if it helped you!

Made with ❤️ by Ankur Parashar
