Semantics Consistent Adversarial Cross-Modal Retrieval

R Xuan, W Ou, Q Zhou, Y Cao, H Yang, X Xiong, F Ruan
Cognitive Internet of Things: Frameworks, Tools and Applications, Springer, 2020
Abstract
Cross-modal retrieval returns relevant results from other modalities given a query from one modality. The main challenge of cross-modal retrieval is the “heterogeneity gap” amongst modalities: different modalities have different distributions and representations, so their similarity cannot be measured directly. In this paper, we propose a semantics consistent adversarial cross-modal retrieval approach, which learns a semantics consistent representation for different modalities with the same semantic category. Specifically, we encourage the class centers of different modalities with the same semantic label to be as close as possible, and also minimize the distances between samples and the same-label class centers from different modalities. Comprehensive experiments are conducted on the Wikipedia dataset, and the results show the efficiency and effectiveness of our approach in cross-modal retrieval.
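To make the center-alignment idea in the abstract concrete, the sketch below illustrates one plausible form of such a loss: per-class centers are computed in a shared space for each modality, centers of the same class are pulled together across modalities, and each sample is pulled toward the same-class center computed from the other modality. This is a minimal illustration under assumed names and shapes (semantic_consistency_loss, img_feat, txt_feat), not the authors' implementation, and it omits the adversarial component entirely.

```python
# Hedged sketch of a class-center alignment loss for cross-modal features.
# All names and the exact formulation are illustrative assumptions.
import torch

def semantic_consistency_loss(img_feat, txt_feat, labels, num_classes):
    """img_feat, txt_feat: (N, d) common-space features; labels: (N,) class ids."""
    d = img_feat.size(1)
    img_centers = img_feat.new_zeros(num_classes, d)
    txt_centers = txt_feat.new_zeros(num_classes, d)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            img_centers[c] = img_feat[mask].mean(dim=0)
            txt_centers[c] = txt_feat[mask].mean(dim=0)
    # (1) Centers of the same semantic class from different modalities
    #     should be as close as possible.
    center_loss = (img_centers - txt_centers).pow(2).sum(dim=1).mean()
    # (2) Each sample should be close to the same-class center of the
    #     other modality.
    sample_loss = ((img_feat - txt_centers[labels]).pow(2).sum(dim=1).mean()
                   + (txt_feat - img_centers[labels]).pow(2).sum(dim=1).mean())
    return center_loss + sample_loss

# Toy usage with random features (8 samples, 16-d common space, 3 classes).
img = torch.randn(8, 16)
txt = torch.randn(8, 16)
y = torch.randint(0, 3, (8,))
loss = semantic_consistency_loss(img, txt, y, num_classes=3)
```

In a full system, a term of this kind would typically be combined with a retrieval or adversarial objective and minimized jointly over the modality-specific encoders.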