CN-Celeb-AV
A multi-genre audio-visual person recognition dataset
CN-Celeb-AV is a multi-genre audio-visual person recognition dataset covering 11 real-world genres,
collected from multiple Chinese open media sources.

1,136 speakers

CN-Celeb-AV contains speech from 1,136 Chinese celebrities.

419,000+ utterances

CN-Celeb-AV covers multiple genres of speech, including entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement.

660+ hours

CN-Celeb-AV consists of both full-modality and partial-modality challenges, which reflect the scenarios of most real-world applications.


Dev-F: 689 speakers

A development set with full-modality information, containing both audio and visual information.

Eval-F: 197 speakers

An evaluation set with full-modality information, containing both audio and visual information.

Eval-P: 250 speakers

An evaluation set with partial-modality information, containing segments whose audio or visual information is corrupted or entirely missing.

Download

The dataset consists of three subsets, Dev-F, Eval-F and Eval-P. For each subset, we provide video and audio files and speaker meta-data. There is no overlap among the three subsets. Dev-F contains more than 93,000 segments from 689 Chinese celebrities, Eval-F contains more than 17,000 segments from 197 Chinese celebrities, and Eval-P contains more than 307,900 segments from 250 Chinese celebrities.
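Once downloaded, the three subsets can be indexed by speaker for experiments. The sketch below assumes a hypothetical layout of the form <subset>/<speaker_id>/<segment_file>; the actual archive layout and file names may differ, so adapt the path parsing to the released data.

```python
from collections import defaultdict
from pathlib import PurePosixPath


def group_segments(paths):
    """Group segment file paths by (subset, speaker ID).

    Assumes a hypothetical layout <subset>/<speaker_id>/<segment>,
    e.g. "Dev-F/id00012/entertainment-01-001.mp4". The real archive
    may use a different structure; adjust the parsing accordingly.
    """
    index = defaultdict(list)
    for p in paths:
        subset, speaker, segment = PurePosixPath(p).parts[:3]
        index[(subset, speaker)].append(segment)
    return index


# Toy example with made-up file names, for illustration only.
paths = [
    "Dev-F/id00012/entertainment-01-001.mp4",
    "Dev-F/id00012/singing-02-003.mp4",
    "Eval-P/id00987/vlog-01-002.mp4",
]
index = group_segments(paths)
print(len(index[("Dev-F", "id00012")]))  # number of segments for one Dev-F speaker
```

Keeping the three subsets separated at indexing time matters because there is no speaker overlap among them: models should be developed on Dev-F and evaluated only on Eval-F and Eval-P.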

License

All the resources contained in the dataset are free for research institutes and individuals. The copyright remains with the original owners of the audio/video.
No commercial usage is permitted.
Please register and log in to the CN-Celeb system, and then submit the data license to request the data.

Publications

Publications based on this dataset are welcome to cite the following paper:

Lantian Li, Xiaolou Li, Haoyu Jiang, Chen Chen, Ruihai Hou, Dong Wang*

CN-Celeb-AV: A Multi-Genre Audio-Visual Dataset for Person Recognition, INTERSPEECH, 2023.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 62171250.