Polarized message-passing in graph neural networks

Tiantian He, Yang Liu*, Yew Soon Ong, Xiaohu Wu, Xin Luo

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

2 Citations (Scopus)

Abstract

In this paper, we present polarized message-passing (PMP), a novel paradigm for designing message-passing graph neural networks (GNNs). In contrast to existing methods, PMP exploits both node-node similarity and dissimilarity to acquire dual sources of messages from neighbors. The messages are then coalesced, enabling GNNs to learn expressive representations from sparse but strongly correlated neighbors. Three novel GNNs based on the PMP paradigm, namely the PMP graph convolutional network (PMP-GCN), the PMP graph attention network (PMP-GAT), and the PMP graph PageRank network (PMP-GPN), are proposed to perform various downstream tasks. Theoretical analysis is also conducted to verify the high expressiveness of the proposed PMP-based GNNs. In addition, an empirical study of five learning tasks on 12 real-world datasets is conducted to validate the performance of PMP-GCN, PMP-GAT, and PMP-GPN. The proposed PMP-GCN, PMP-GAT, and PMP-GPN outperform numerous strong message-passing GNNs across all five learning tasks, demonstrating the effectiveness of the proposed PMP paradigm.
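The core idea of the abstract (two message streams, one driven by neighbor similarity and one by dissimilarity, coalesced into a single representation) can be illustrated with a minimal sketch. This is not the paper's actual formulation: the function `pmp_layer`, the use of signed cosine similarity to split the two streams, and the simple additive coalescing are all illustrative assumptions.

```python
import numpy as np

def pmp_layer(X, A, Ws, Wd, eps=1e-8):
    """Toy polarized message-passing layer (illustrative sketch only).

    X:      (n, d) node feature matrix
    A:      (n, n) binary adjacency matrix
    Ws, Wd: (d, h) weights for the similarity / dissimilarity streams
    """
    # Cosine similarity between every pair of nodes, in [-1, 1].
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)
    S = Xn @ Xn.T
    # "Polarize": the positive part of S drives the similarity stream,
    # the negated negative part drives the dissimilarity stream;
    # both are masked to actual graph neighbors.
    S_pos = np.maximum(S, 0.0) * A
    S_neg = np.maximum(-S, 0.0) * A
    # Row-normalize each stream so messages are weighted neighbor averages.
    def row_norm(M):
        return M / (M.sum(axis=1, keepdims=True) + eps)
    msg_sim = row_norm(S_pos) @ X @ Ws   # messages from similar neighbors
    msg_dis = row_norm(S_neg) @ X @ Wd   # messages from dissimilar neighbors
    # Coalesce the dual message sources and apply a nonlinearity.
    return np.maximum(msg_sim + msg_dis, 0.0)
```

In this sketch, a plain GCN would keep only the single stream `row_norm(A) @ X @ W`; the polarized variant learns separate transforms for agreeing and disagreeing neighbors before combining them.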

Original language: English
Article number: 104129
Number of pages: 24
Journal: Artificial Intelligence
Volume: 331
DOIs
Publication status: Published - Jun 2024

Scopus Subject Areas

  • Language and Linguistics
  • Linguistics and Language
  • Artificial Intelligence

User-Defined Keywords

  • Graph analysis
  • Graph neural networks
  • Message-passing graph neural networks
  • Representation learning
