TY - JOUR
T1 - LRP2A
T2 - Layer-wise Relevance Propagation based Adversarial attacking for Graph Neural Networks
AU - Liu, Li
AU - Du, Yong
AU - Wang, Ye
AU - Cheung, William K.
AU - Zhang, Youmin
AU - Liu, Qun
AU - Wang, Guoyin
N1 - Funding Information:
The work is partially supported by the National Natural Science Foundation of China (61936001, 61806031), in part by the Natural Science Foundation of Chongqing, China (cstc2019jcyj-cxttX0002, cstc2020jcyj-msxmX0943), in part by the Key Cooperation Project of Chongqing Municipal Education Commission, China (HZ2021008), in part by the Science and Technology Research Program of Chongqing Municipal Education Commission, China (KJQN202100629, KJQN202001901), and in part by the Doctoral Innovation Talent Program of Chongqing University of Posts and Telecommunications, China (BYJS202118). This work was partially done while Li Liu was working at Hong Kong Baptist University, China, supported by the Hong Kong Scholars Program (XJ2020054).
Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2022/11/28
Y1 - 2022/11/28
N2 - Graph Neural Networks (GNNs) are widely utilized for graph data mining, owing to their powerful feature representation ability. Yet, they are prone to adversarial attacks with only slight perturbations of the input data, which limits their applicability in critical applications. Vulnerability analysis of GNNs is thus essential if more robust models are to be developed. To this end, a Layer-wise Relevance Propagation based Adversarial attacking (LRP2A) model is proposed. Specifically, to facilitate applying LRP to the “black-box” victim model, we train a surrogate model based on a sophisticated re-weighting network. The LRP algorithm is then leveraged to unravel the “contributions” among the nodes in the downstream classification task. Furthermore, the graph adversarial attacking algorithm is intentionally designed to be both interpretable and efficient. Experimental results demonstrate the effectiveness of the proposed attacking model on GNNs for node classification. Additionally, the adoption of LRP2A makes the choice of adversarial attacking strategies on the GNN interpretable, which in turn yields deeper insights into the GNN's vulnerability.
AB - Graph Neural Networks (GNNs) are widely utilized for graph data mining, owing to their powerful feature representation ability. Yet, they are prone to adversarial attacks with only slight perturbations of the input data, which limits their applicability in critical applications. Vulnerability analysis of GNNs is thus essential if more robust models are to be developed. To this end, a Layer-wise Relevance Propagation based Adversarial attacking (LRP2A) model is proposed. Specifically, to facilitate applying LRP to the “black-box” victim model, we train a surrogate model based on a sophisticated re-weighting network. The LRP algorithm is then leveraged to unravel the “contributions” among the nodes in the downstream classification task. Furthermore, the graph adversarial attacking algorithm is intentionally designed to be both interpretable and efficient. Experimental results demonstrate the effectiveness of the proposed attacking model on GNNs for node classification. Additionally, the adoption of LRP2A makes the choice of adversarial attacking strategies on the GNN interpretable, which in turn yields deeper insights into the GNN's vulnerability.
KW - Adversarial attacks
KW - Graph Neural Networks
KW - Layer-wise relevance propagation
UR - http://www.scopus.com/inward/record.url?scp=85138089774&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2022.109830
DO - 10.1016/j.knosys.2022.109830
M3 - Journal article
AN - SCOPUS:85138089774
SN - 0950-7051
VL - 256
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 109830
ER -