InsGNN: Interpretable spatio-temporal graph neural networks via information bottleneck

Hui Fang, Haishuai Wang*, Yang Gao, Yonggang Zhang, Jiajun Bu, Bo Han, Hui Lin

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

Spatio-temporal graph neural networks (STGNNs) have garnered considerable attention for their promising performance across various applications. While existing models have made notable progress in exploring the interpretability of graph neural networks (GNNs), the interpretability of STGNNs is constrained by their complex spatio-temporal correlations. In this paper, we introduce a novel approach named INterpretable Spatio-temporal Graph Neural Network (InsGNN), which aims to elucidate the predictive process of STGNNs by identifying key components. To achieve this objective, two critical challenges must be addressed: (1) incorporating temporal interpretability within high-dimensional time features and (2) identifying invariant causal subgraphs for structural interpretability. To tackle these challenges, InsGNN first integrates a lightweight prototype matching module, in which high-dimensional sequences are represented by low-dimensional knowledge vectors. These knowledge vectors reveal the mapping between time features and prototypes, elucidating how each prototype influences the final temporal embedding. Furthermore, to enhance structural interpretability, InsGNN incorporates a subgraph extraction module equipped with a learnable structural mask, which selects invariant causal substructures (including nodes and relevant edges) correlated with the labels. Following the principles of the information bottleneck, InsGNN minimizes the amount of information used in both the temporal and spatial dimensions to derive knowledge vectors and invariant causal subgraphs, while preserving interpretability and prediction performance. Extensive experiments on real-world datasets demonstrate that InsGNN generates explainable predictions and significantly outperforms baseline methods.
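To make the abstract's mechanics concrete, the sketch below illustrates the three ideas it describes: mapping a high-dimensional temporal embedding to a low-dimensional knowledge vector via prototype similarity, selecting a soft causal subgraph with a learnable edge mask, and an information-bottleneck-style penalty that compresses both. This is a minimal NumPy illustration, not the authors' implementation; the function names, the softmax-similarity matching, and the specific form of the penalty are assumptions for exposition only.

```python
import numpy as np

rng = np.random.default_rng(0)

def prototype_match(h, prototypes, temperature=1.0):
    """Map a high-dimensional temporal embedding h (d,) to a low-dimensional
    knowledge vector: softmax over similarities to each prototype (k, d).
    (Illustrative stand-in for the paper's prototype matching module.)"""
    sims = prototypes @ h / temperature        # (k,) similarity scores
    z = np.exp(sims - sims.max())              # numerically stable softmax
    return z / z.sum()                         # knowledge vector, sums to 1

def edge_mask(edge_logits):
    """Learnable structural mask (sketch): sigmoid of per-edge logits yields
    soft selection weights; edges with weight near 0 are effectively pruned
    from the explanatory subgraph."""
    return 1.0 / (1.0 + np.exp(-edge_logits))

def ib_penalty(knowledge, mask, beta=0.1):
    """Information-bottleneck-style regularizer (assumed form): entropy of
    the knowledge vector plus the mean mask size, scaled by beta. Minimizing
    it limits the information retained in both temporal and spatial views."""
    ent = -np.sum(knowledge * np.log(knowledge + 1e-12))
    return beta * (ent + mask.mean())

# Toy example: an 8-dim temporal embedding, 4 prototypes, 5 candidate edges.
h = rng.normal(size=8)
P = rng.normal(size=(4, 8))
k_vec = prototype_match(h, P)
m = edge_mask(rng.normal(size=5))
penalty = ib_penalty(k_vec, m)
```

In a full model, the prototype matrix and edge logits would be trained jointly with the predictor, and the penalty would be added to the task loss so that interpretability (sparse masks, peaked knowledge vectors) trades off against accuracy through the coefficient beta.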
Original language: English
Article number: 102997
Number of pages: 13
Journal: Information Fusion
Volume: 119
Early online date: 6 Feb 2025
DOIs
Publication status: E-pub ahead of print - 6 Feb 2025

User-Defined Keywords

  • Graph neural network
  • Information bottleneck
  • Interpretability
  • Spatio-temporal graph
