🚀 Pre-process, annotate, evaluate, and train your Affect Computing (e.g., Multimodal Emotion Recognition, Sentiment Analysis) datasets ALL within MER-Factory! (LangGraph Based Agent Workflow)
Cognitive Robotics University Exam Project
[MM 2025] The official implementation code for "VAEmo: Efficient Representation Learning for Visual-Audio Emotion with Knowledge Injection"
This emotion recognition app analyzes text, facial expressions, and speech to detect emotions. Designed for self-awareness and mental well-being, it provides personalized insights and recommendations.
😎 Awesome lists about Speech Emotion Recognition
A Unimodal Valence-Arousal Driven Contrastive Learning Framework for Multimodal Multi-Label Emotion Recognition (ACM MM 2024 oral)
Scientific Reports - Open access - Published: 14 February 2025
A collection of datasets for the purpose of emotion recognition/detection in speech.
This repository provides the official implementation of MMA-DFER, a multimodal (audiovisual) emotion recognition method, from the paper "MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild".
This API utilizes a pre-trained model for emotion recognition from audio files. It accepts audio files as input, processes them using the pre-trained model, and returns the predicted emotion along with the confidence score. The API leverages the FastAPI framework for easy development and deployment.
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
SERVER: Multi-modal Speech Emotion Recognition using Transformer-based and Vision-based Embeddings
Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on IEMOCAP dataset)
Published in Springer Multimedia Tools and Applications Journal.
The repo contains an audio emotion detection model, a facial emotion detection model, and a combined model that fuses both to predict emotions from video
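A common way to combine unimodal models like these is late fusion: each model emits a probability distribution over the same emotion classes, and the combined prediction averages them. Below is a minimal, dependency-free sketch of that idea; the label set, function names, and the 50/50 weighting are illustrative assumptions, not taken from any of the repositories listed here.

```python
# Hypothetical late-fusion sketch: average per-class probabilities
# from an audio model and a facial model, then take the argmax.

EMOTIONS = ["angry", "happy", "neutral", "sad"]  # assumed label set

def fuse_predictions(audio_probs, face_probs, audio_weight=0.5):
    """Weighted average of per-class probabilities from two modalities."""
    face_weight = 1.0 - audio_weight
    return [audio_weight * a + face_weight * f
            for a, f in zip(audio_probs, face_probs)]

def predict_emotion(audio_probs, face_probs):
    """Return the emotion label with the highest fused probability."""
    fused = fuse_predictions(audio_probs, face_probs)
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]
```

In practice the weights are often tuned on a validation set, since one modality (e.g. audio under noise, or faces under occlusion) may be less reliable than the other.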
audio-text multimodal emotion recognition model which is robust to missing data
Emotion recognition from Speech & Text using different heterogeneous ensemble learning methods
All experiments were done to classify multimodal data.
A Tensorflow implementation of Speech Emotion Recognition using Audio signals and Text Data
A survey of deep multimodal emotion recognition.