- FPT University
- Ho Chi Minh City, Vietnam
- https://nhut-ngnn.github.io/
- in/minhnhutngnn
Stars
MMER
The definitive Web UI for local AI, with powerful features and easy setup.
The official implementation of paper DAAL: Dual Ambiguity in Active Learning for Object Detection with YOLOE
Agent Reinforcement Trainer: train multi-step agents for real-world tasks using GRPO. Give your agents on-the-job training. Reinforcement learning for Qwen2.5, Qwen3, Llama, and more!
[APNOMS'25] - "CemoBAM: Advancing Multimodal Emotion Recognition through Heterogeneous Graph Networks and Cross-Modal Attention Mechanisms" by Nhut Minh Nguyen, Thu Thuy Le, Trung Thanh Nguyen, Tai…
Semi-supervised Learning for Speech Emotion Recognition On Federated Learning using Multiview Pseudo-Labeling
How Powerful are Graph Neural Networks?
[ACL 2025 Industry Track (Oral)] Sentiment Reasoning for Healthcare
Zero-Shot Audio-Visual Compound Expression Recognition Method based on Emotion Probability Fusion
Turns Data and AI algorithms into production-ready web applications in no time.
GCNet, official pytorch implementation of our paper "GCNet: Graph Completion Network for Incomplete Multimodal Learning in Conversation"
Implementation of the table detection and table structure recognition deep learning model described in the paper "ClusterTabNet: Supervised clustering method for table detection and table structure…
MTL-TabNet: Multi-task Learning based Model for Image-based Table Recognition
Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning
Official implementation of the paper "FourierGNN: Rethinking Multivariate Time Series Forecasting from a Pure Graph Perspective"
SignboardLayout: Towards Understanding the Logical Layout of Scene Text in Signboard Images
A real-time Multimodal Emotion Recognition web app for text, sound, and video inputs
Project focused on enhancing the quality of low-fidelity endoscopy images using Generative Adversarial Networks (GANs) implemented in PyTorch.
Modality-Transferable-MER, multimodal emotion recognition model with zero-shot and few-shot abilities.
aita-lab / FleSER
Forked from nhut-ngnn/FleSER
"FleSER: Multimodal Emotion Recognition via Dynamic Fuzzy Membership and Attention Fusion" by Nhut Minh Nguyen, Trung Minh Nguyen, Trung Thanh Nguyen, Phuong-Nam Tran, Truong Pham, Linh Le, Alice O…
Revisiting Multimodal Emotion Recognition in Conversation from the Perspective of Graph Spectrum
ChunkFormer: Masked Chunking Conformer For Long-Form Speech Transcription
The code repository for NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition".
This repository provides implementation for the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data".
source code for "Towards Speaker-Unknown Emotion Recognition in Conversation via Progressive Contrastive Deep Supervision"
This project contains the code for BF-GCN. The paper has been accepted by IEEE Transactions on Neural Networks and Learning Systems.
LibEER: A Comprehensive Benchmark and Algorithm Library for EEG-based Emotion Recognition
Recognizing emotions is crucial for the development of artificial intelligence in various fields. This project explores the application of quantum models to emotion recognition from electroencepha…