Correspondence to:
- Wenxin Xu (wxxv@stu.xidian.edu.cn)
- Hexin Jiang (hxjiang@stu.xidian.edu.cn)
- Xuefeng Liang (xliang@xidian.edu.cn)
Leveraging Knowledge of Modality Experts for Incomplete Multimodal Learning
Wenxin Xu, Hexin Jiang, Xuefeng Liang
ACM Multimedia, 2024. 🔥Nominated for Best Paper and further honored with the 🔥Honourable Mention Award!
```bibtex
@inproceedings{xu2024leveraging,
  title={Leveraging Knowledge of Modality Experts for Incomplete Multimodal Learning},
  author={Xu, Wenxin and Jiang, Hexin and others},
  booktitle={ACM Multimedia 2024},
  year={2024}
}
```
- Python 3.8
- CUDA 11.6
- pytorch == 1.12.0
- torchvision == 0.13.0
- numpy == 1.24.3
(see requirements.txt for more details)
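A minimal environment-setup sketch based on the versions listed above. The use of conda and the environment name `MoMKE` are assumptions for illustration; adapt the commands to your own setup.

```shell
# Hypothetical setup sketch; assumes conda and a CUDA 11.6-capable driver.
conda create -n MoMKE python=3.8 -y
conda activate MoMKE
# Pin the versions listed above (PyTorch 1.12.0 / torchvision 0.13.0 / numpy 1.24.3).
pip install torch==1.12.0 torchvision==0.13.0 numpy==1.24.3
# Remaining dependencies come from the repository's requirements file.
pip install -r requirements.txt
```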
To ensure consistency with previous work, we followed the feature preprocessing module in GCNet.
For your convenience, we list the feature download links they provide below.
| Dataset | Task | Download link |
|---|---|---|
| IEMOCAP | Emotion Recognition | link |
| CMU-MOSI | Sentiment Analysis | link |
| CMU-MOSEI | Sentiment Analysis | link |
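Once downloaded, the preprocessed features can typically be deserialized with Python's standard `pickle` module. The snippet below is a minimal, hypothetical sketch: the file name and the modality keys (`audio`, `text`, `video`) are illustrative assumptions, not the actual GCNet feature format — inspect the downloaded files for their real structure.

```python
import pickle

def load_features(path):
    """Deserialize a pickled feature file into a Python object.

    The keys used in the demo below ("audio", "text", "video") are
    assumptions for illustration; check the downloaded GCNet feature
    files for their actual layout.
    """
    with open(path, "rb") as f:
        return pickle.load(f)

if __name__ == "__main__":
    # Round-trip a toy feature dict as a stand-in for a downloaded file.
    toy = {"audio": [[0.1, 0.2]], "text": [[0.3]], "video": [[0.4]]}
    with open("toy_features.pkl", "wb") as f:
        pickle.dump(toy, f)
    feats = load_features("toy_features.pkl")
    print(sorted(feats.keys()))
```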
```shell
cd MoMKE
sh run_MoMKE_cmumosi.sh
sh run_MoMKE_cmumosei.sh
sh run_MoMKE_iemocap4.sh
```

This work was supported by the research funding of GuangZhou DaYou Network Technology Co., Ltd.