MoMKE

Code for "Leveraging Knowledge of Modality Experts for Incomplete Multimodal Learning", accepted by ACM Multimedia 2024.

Correspondence to:

Paper

Leveraging Knowledge of Modality Experts for Incomplete Multimodal Learning
Wenxin Xu, Hexin Jiang, Xuefeng Liang
ACM Multimedia, 2024. 🔥Best Paper Nomination, further honored with the 🔥Honourable Mention Award!

@inproceedings{xu2024leveraging,
  title={Leveraging Knowledge of Modality Experts for Incomplete Multimodal Learning},
  author={Xu, Wenxin and Jiang, Hexin and others},
  booktitle={ACM Multimedia 2024},
  year={2024}
}

Usage

Prerequisites

  • Python 3.8
  • CUDA 11.6
  • pytorch == 1.12.0
  • torchvision == 0.13.0
  • numpy == 1.24.3

(see requirements.txt for more details)
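For reference, a minimal environment setup might look like the sketch below (the environment name "momke" and the use of conda are assumptions; requirements.txt remains the authoritative dependency list):

# create and activate a fresh environment ("momke" is a placeholder name)
conda create -n momke python=3.8
conda activate momke

# install PyTorch/torchvision builds matching CUDA 11.6, plus the pinned numpy
pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
pip install numpy==1.24.3

# install the remaining dependencies
pip install -r requirements.txt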

Dataset

To ensure consistency with previous work, we followed the feature preprocessing module in GCNet.

For your convenience, we list the feature links they provide below.

Dataset     Task                  Download link
IEMOCAP     Emotion Recognition   link
CMU-MOSI    Sentiment Analysis    link
CMU-MOSEI   Sentiment Analysis    link

Run MoMKE

cd MoMKE

# CMU-MOSI
sh run_MoMKE_cmumosi.sh

# CMU-MOSEI
sh run_MoMKE_cmumosei.sh

# IEMOCAP (4-class)
sh run_MoMKE_iemocap4.sh

Acknowledgement

This work was supported by research funding from GuangZhou DaYou Network Technology Co., Ltd.
