LRP2A: Layer-wise Relevance Propagation based Adversarial attacking for Graph Neural Networks

Li Liu, Yong Du, Ye Wang, William K. Cheung*, Youmin Zhang, Qun Liu, Guoyin Wang

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

2 Citations (Scopus)


Graph Neural Networks (GNNs) are widely utilized for graph data mining, attributable to their powerful feature representation ability. Yet, they are prone to adversarial attacks with only slight perturbations of input data, limiting their applicability to critical applications. Vulnerability analysis of GNNs is thus essential if more robust models are to be developed. To this end, a Layer-wise Relevance Propagation based Adversarial attacking (LRP2A) model is proposed. Specifically, to facilitate applying LRP to the "black-box" victim model, we train a surrogate model based on a sophisticated re-weighting network. The LRP algorithm is then leveraged to unravel the "contributions" of individual nodes to the downstream classification task. Furthermore, the graph adversarial attacking algorithm is intentionally designed to be both interpretable and efficient. Experimental results demonstrate the effectiveness of the proposed attacking model on GNNs for node classification. Additionally, the adoption of LRP2A makes the choice of adversarial attacking strategies on the GNN interpretable, which in turn yields deeper insights into the GNN's vulnerability.
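The core idea of "unraveling node contributions" via LRP can be illustrated with a minimal sketch. The following is illustrative only, not the paper's method: it applies the standard LRP epsilon rule to a toy two-layer GCN-style surrogate (the paper's re-weighting surrogate network and exact LRP variant differ), redistributing a target node's logit back onto all input nodes to obtain per-node relevance scores. All weights, the adjacency matrix, and helper names (`lrp_right`, `lrp_left`) are assumptions for illustration.

```python
# Illustrative sketch: LRP epsilon-rule relevance on a toy 2-layer GCN-style
# surrogate. NOT the paper's LRP2A implementation; architecture and names
# are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes with self-loops, symmetrically normalized adjacency.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt

X = rng.random((4, 3))    # node features (illustrative)
W1 = rng.random((3, 5))   # layer-1 weights
W2 = rng.random((5, 2))   # layer-2 weights (2 classes)

# Forward pass of the surrogate: Z1 = A_hat X W1, H1 = ReLU(Z1), Z2 = A_hat H1 W2.
Z1 = A_hat @ X @ W1
H1 = np.maximum(Z1, 0)
Z2 = A_hat @ H1 @ W2

def lrp_right(inp, W, R, eps=1e-6):
    """Epsilon rule for Z = inp @ W (feature transform):
    redistribute output relevance R to inp, proportional to contributions."""
    z = inp @ W
    s = R / (z + eps * np.sign(z))
    return inp * (s @ W.T)

def lrp_left(A, H, R, eps=1e-6):
    """Epsilon rule for Z = A @ H (neighborhood aggregation):
    redistribute relevance across neighbors via A."""
    z = A @ H
    s = R / (z + eps * np.sign(z))
    return H * (A.T @ s)

# Seed relevance with the predicted-class logit of one target node.
target = 2
R_out = np.zeros_like(Z2)
R_out[target, Z2[target].argmax()] = Z2[target].max()

# Backward relevance pass (ReLU passes relevance through unchanged in LRP).
R_AH1 = lrp_right(A_hat @ H1, W2, R_out)  # through W2
R_H1 = lrp_left(A_hat, H1, R_AH1)         # through aggregation
R_AX = lrp_right(A_hat @ X, W1, R_H1)     # through W1
R_X = lrp_left(A_hat, X, R_AX)            # relevance on input features

# Per-node contribution to the target node's prediction; an attacker would
# perturb edges/features of the highest-relevance nodes first.
node_relevance = R_X.sum(axis=1)
```

The epsilon rule approximately conserves relevance at each layer, so `node_relevance` sums (up to the stabilizer `eps`) to the target logit, giving a conserved "budget" of contribution that an attack strategy can rank nodes by.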

Original language: English
Article number: 109830
Journal: Knowledge-Based Systems
Publication status: Published - 28 Nov 2022

Scopus Subject Areas

  • Software
  • Management Information Systems
  • Information Systems and Management
  • Artificial Intelligence

User-Defined Keywords

  • Adversarial attacks
  • Graph Neural Networks
  • Layer-wise relevance propagation
