TY - JOUR
T1 - Towards explaining graph neural networks via preserving prediction ranking and structural dependency
AU - Zhang, Youmin
AU - Cheung, William K.
AU - Liu, Qun
AU - Wang, Guoyin
AU - Yang, Lili
AU - Liu, Li
N1 - The work is partially supported by the National Natural Science Foundation of China (61936001, 62221005, 61806031), in part by the Natural Science Foundation of Chongqing, China (cstc2019jcyj-cxttX0002, cstc2020jcyj-msxmX0943), in part by the Key Cooperation Project of the Chongqing Municipal Education Commission (HZ2021008), and in part by the Doctoral Innovation Talent Program of Chongqing University of Posts and Telecommunications, China (BYJS202118). This work was partially done while Li Liu was at Hong Kong Baptist University, supported by the Hong Kong Scholars Program (XJ2020054).
Publisher Copyright:
© 2023 Elsevier Ltd
PY - 2024/3
Y1 - 2024/3
N2 - Graph Neural Networks (GNNs) have demonstrated their efficacy in representing graph-structured data, but their lack of explainability hinders their applicability to critical tasks. Existing GNN explainers fail to consider the prediction ranking consistency between the original graph and the explanation, which is critical for preserving the fidelity of the explainer. Moreover, the structural dependency in the graph, reflecting the distinctive learning schema of the model, is ignored in current GNN explainers. To this end, we propose a NeuralSort-based Plackett-Luce model to guide the parameter learning of the explainer via a differentiable ranking loss, ensuring the explainer's fidelity to the GNNs. Additionally, a graph transformation schema that explicitly models the edge dependency is proposed for constructing the mask generator. By integrating these strategies, we propose a novel framework for explaining GNNs in a faithful manner. Through comprehensive experiments on both node classification and graph classification using the BA-Shapes, BA-Community, Graph-Twitter, and Graph-SST5 datasets, the proposed framework achieves improvements of 149.67%, 51.43%, 40.747%, and 28.87%, respectively, over state-of-the-art explainers in terms of fidelity to the GNNs. Data and code are available at https://github.com/ymzhang0103/RDPExplainer.
AB - Graph Neural Networks (GNNs) have demonstrated their efficacy in representing graph-structured data, but their lack of explainability hinders their applicability to critical tasks. Existing GNN explainers fail to consider the prediction ranking consistency between the original graph and the explanation, which is critical for preserving the fidelity of the explainer. Moreover, the structural dependency in the graph, reflecting the distinctive learning schema of the model, is ignored in current GNN explainers. To this end, we propose a NeuralSort-based Plackett-Luce model to guide the parameter learning of the explainer via a differentiable ranking loss, ensuring the explainer's fidelity to the GNNs. Additionally, a graph transformation schema that explicitly models the edge dependency is proposed for constructing the mask generator. By integrating these strategies, we propose a novel framework for explaining GNNs in a faithful manner. Through comprehensive experiments on both node classification and graph classification using the BA-Shapes, BA-Community, Graph-Twitter, and Graph-SST5 datasets, the proposed framework achieves improvements of 149.67%, 51.43%, 40.747%, and 28.87%, respectively, over state-of-the-art explainers in terms of fidelity to the GNNs. Data and code are available at https://github.com/ymzhang0103/RDPExplainer.
KW - Explainable machine learning
KW - Graph neural network
KW - Prediction ranking
KW - Structural dependency
UR - http://www.scopus.com/inward/record.url?scp=85178016053&partnerID=8YFLogxK
U2 - 10.1016/j.ipm.2023.103571
DO - 10.1016/j.ipm.2023.103571
M3 - Journal article
AN - SCOPUS:85178016053
SN - 0306-4573
VL - 61
JO - Information Processing and Management
JF - Information Processing and Management
IS - 2
M1 - 103571
ER -