Relation-Aggregated Cross-Graph Correlation Learning for Fine-Grained Image–Text Retrieval

Shu-Juan Peng, Yi He, Xin Liu*, Yiu-ming Cheung, Xing Xu, Zhen Cui

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

5 Citations (Scopus)

Abstract

Fine-grained image–text retrieval has been a hot research topic in bridging vision and language, and its main challenge is learning the semantic correspondence across different modalities. Existing methods mainly focus on learning the global semantic correspondence or intramodal relation correspondence in separate data representations, but rarely consider the intermodal relations that interactively provide complementary hints for fine-grained semantic correlation learning. To address this issue, we propose a relation-aggregated cross-graph (RACG) model to explicitly learn the fine-grained semantic correspondence by aggregating both intramodal and intermodal relations, which can be well utilized to guide the feature correspondence learning process. More specifically, we first build semantic-embedded graphs to explore both fine-grained objects and their relations in different media types, which aim not only to characterize the object appearance in each modality but also to capture the intrinsic relation information that differentiates intramodal discrepancies. Then, a cross-graph relation encoder is newly designed to explore the intermodal relations across different modalities, which can mutually boost the cross-modal correlations to learn more precise intermodal dependencies. Besides, a feature reconstruction module and multihead similarity alignment are efficiently leveraged to optimize the node-level semantic correspondence, whereby the relation-aggregated cross-modal embeddings between image and text are discriminatively obtained to benefit various image–text retrieval tasks with high retrieval performance. Extensive experiments on benchmark datasets quantitatively and qualitatively verify the advantages of the proposed framework for fine-grained image–text retrieval and show its competitive performance with the state of the art.
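As an illustration only (not the authors' implementation), the core idea of aggregating intramodal graph relations and intermodal cross-attention over region/word nodes can be sketched in a few lines of NumPy. All function names, the single-step message passing, and the scaled dot-product form of the intermodal relation are simplifying assumptions for exposition:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intramodal_aggregate(nodes, adj):
    # one step of graph message passing within a modality:
    # each node mixes in neighbour features via a row-normalized
    # adjacency with self-loops (a stand-in for the semantic-embedded graph)
    a = adj + np.eye(adj.shape[0])
    a = a / a.sum(axis=1, keepdims=True)
    return a @ nodes

def cross_graph_encode(img_nodes, txt_nodes):
    # intermodal relation: attention from each image node over all
    # text nodes (and vice versa), followed by residual aggregation,
    # so each modality's embedding is refined by the other
    d = img_nodes.shape[1]
    attn_i2t = softmax(img_nodes @ txt_nodes.T / np.sqrt(d))
    attn_t2i = softmax(txt_nodes @ img_nodes.T / np.sqrt(d))
    img_out = img_nodes + attn_i2t @ txt_nodes
    txt_out = txt_nodes + attn_t2i @ img_nodes
    return img_out, txt_out
```

In this toy form, `intramodal_aggregate` differentiates objects by their within-modality relations, while `cross_graph_encode` lets the image and text graphs mutually refine each other; the paper's full model additionally learns these mappings and aligns node-level similarities with a multihead alignment objective.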
Original language: English
Pages (from-to): 2194–2207
Number of pages: 14
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 35
Issue number: 2
Early online date: 13 Jul 2022
DOIs
Publication status: Published - Feb 2024

Scopus Subject Areas

  • Software
  • Artificial Intelligence
  • Computer Networks and Communications
  • Computer Science Applications

User-Defined Keywords

  • Cross-graph relation encoder
  • fine-grained correspondence
  • image–text retrieval
  • intermodal relation
