How Interpretable Are Interpretable Graph Neural Networks?

Yongqiang Chen, Yatao Bian*, Bo Han, James Cheng

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

Abstract

Interpretable graph neural networks (XGNNs) are widely adopted in scientific applications involving graph-structured data. Existing XGNNs predominantly use attention-based mechanisms to learn edge or node importance, extract an interpretable subgraph, and make predictions with it. However, the representational properties and limitations of these methods remain inadequately explored. In this work, we present a theoretical framework that formulates interpretable subgraph learning via the multilinear extension of the subgraph distribution, coined the subgraph multilinear extension (SubMT). Extracting the desired interpretable subgraph requires an accurate approximation of SubMT, yet we find that existing XGNNs can exhibit a large gap in fitting SubMT. This approximation failure, in turn, degrades the interpretability of the extracted subgraphs. To mitigate the issue, we design a new XGNN architecture, Graph Multilinear neT (GMT), which is provably more powerful at approximating SubMT. We empirically validate our theoretical findings on a number of graph classification benchmarks. The results show that GMT outperforms the state of the art by up to 10% in both interpretability and generalizability across 12 regular and geometric graph benchmarks.
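
For readers unfamiliar with the construction the abstract refers to, the multilinear extension of a set function is a standard notion; below is a minimal LaTeX sketch of how such an extension applies to a distribution over subgraphs. The notation (a set function f, edge set E, and per-edge inclusion probabilities x_e) is illustrative only and is not taken from the paper's exact definition of SubMT.

% Multilinear extension of a set function f defined on subsets of an edge set E.
% Each edge e is included independently with probability x_e; the extension equals
% the expected value of f over the resulting random subgraph S.
\[
  F(x) \;=\; \sum_{S \subseteq E} f(S) \prod_{e \in S} x_e \prod_{e \in E \setminus S} \bigl(1 - x_e\bigr)
  \;=\; \mathbb{E}_{S \sim x}\bigl[\, f(S) \,\bigr].
\]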

Original language: English
Title of host publication: Proceedings of the 41st International Conference on Machine Learning, ICML 2024
Editors: Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, Felix Berkenkamp
Publisher: ML Research Press
Pages: 6413-6456
Number of pages: 44
Publication status: Published - 21 Jul 2024
Event: 41st International Conference on Machine Learning, ICML 2024 - Vienna, Austria
Duration: 21 Jul 2024 - 27 Jul 2024
https://icml.cc/
https://openreview.net/group?id=ICML.cc/2024/Conference#tab-accept-oral
https://proceedings.mlr.press/v235/

Publication series

Name: Proceedings of the International Conference on Machine Learning
Name: Proceedings of Machine Learning Research
Publisher: ML Research Press
Volume: 235
ISSN (Print): 2640-3498

Conference

Conference: 41st International Conference on Machine Learning, ICML 2024
Country/Territory: Austria
City: Vienna
Period: 21/07/24 - 27/07/24
