MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on IEMOCAP dataset)
A collection of datasets for the purpose of emotion recognition/detection in speech.
Human Emotion Understanding using multimodal dataset.
The repo contains an audio emotion detection model, a facial emotion detection model, and a combined model that fuses the two to predict emotions from video.
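A minimal late-fusion sketch of that idea, assuming both models output probability vectors over a shared emotion label set; the label names, weighting, and function name are illustrative, not taken from the repository:

```python
import numpy as np

# Illustrative label set shared by both models (an assumption, not the repo's).
EMOTIONS = ["angry", "happy", "neutral", "sad"]

def fuse_predictions(audio_probs: np.ndarray,
                     face_probs: np.ndarray,
                     audio_weight: float = 0.5) -> str:
    """Weighted average of the audio and facial probability vectors."""
    combined = audio_weight * audio_probs + (1.0 - audio_weight) * face_probs
    return EMOTIONS[int(np.argmax(combined))]

# Audio leans "sad", the face model leans "neutral"; the fused prediction is
# whichever class has the highest averaged probability.
print(fuse_predictions(np.array([0.10, 0.10, 0.30, 0.50]),
                       np.array([0.05, 0.15, 0.60, 0.20])))
```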
The code for our IEEE ACCESS (2020) paper Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion.
The code for our INTERSPEECH 2020 paper - Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion Recognition
Official implementation of the paper "MSAF: Multimodal Split Attention Fusion"
🚀 Pre-process, annotate, evaluate, and train your Affect Computing (e.g., Multimodal Emotion Recognition, Sentiment Analysis) datasets ALL within MER-Factory! (LangGraph Based Agent Workflow)
😎 Awesome lists about Speech Emotion Recognition
The official implementation of the paper MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild, a multimodal (audiovisual) emotion recognition method.
A survey of deep multimodal emotion recognition.
SERVER: Multi-modal Speech Emotion Recognition using Transformer-based and Vision-based Embeddings
Emotion recognition from Speech & Text using different heterogeneous ensemble learning methods
A Tensorflow implementation of Speech Emotion Recognition using Audio signals and Text Data
This API utilizes a pre-trained model for emotion recognition from audio files. It accepts audio files as input, processes them using the pre-trained model, and returns the predicted emotion along with the confidence score. The API leverages the FastAPI framework for easy development and deployment.
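A minimal sketch of an endpoint along those lines, assuming a hypothetical `predict()` wrapper around the pre-trained model; the route, field names, and helper are illustrative rather than the repository's actual API:

```python
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

def predict(audio_bytes: bytes) -> tuple[str, float]:
    # Placeholder for the pre-trained model: a real implementation would
    # extract audio features and run inference to get (label, confidence).
    return "neutral", 0.0

@app.post("/predict")
async def predict_emotion(file: UploadFile = File(...)):
    audio_bytes = await file.read()           # uploaded audio file contents
    label, confidence = predict(audio_bytes)
    return {"emotion": label, "confidence": confidence}
```

Assuming the file is saved as `main.py`, running `uvicorn main:app` and POSTing an audio file to `/predict` would return a JSON body with the predicted emotion and its confidence score.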
All experiments were done to classify multimodal data.
audio-text multimodal emotion recognition model which is robust to missing data
A Unimodal Valence-Arousal Driven Contrastive Learning Framework for Multimodal Multi-Label Emotion Recognition (ACM MM 2024 oral)