TY - JOUR
T1 - GEAR: Learning graph neural network explainer via adjusting gradients
AU - Zhang, Youmin
AU - Liu, Qun
AU - Wang, Guoyin
AU - Cheung, William K.
AU - Liu, Li
N1 - This work was partially supported by the National Natural Science Foundation of China (61936001, 62221005, 61806031), Natural Science Foundation of Chongqing, China (cstc2019jcyj-cxttX0002, cstc2020jcyj-msxmX0943), Key Cooperation Project of Chongqing Municipal Education Commission, China (HZ2021008), and Doctoral Innovation Talent Program of Chongqing University of Posts and Telecommunications, China (BYJS202118).
Publisher Copyright:
© 2024 Elsevier B.V.
PY - 2024/10/25
Y1 - 2024/10/25
AB - Graph neural network (GNN) explainers aim to elucidate the prediction behavior of GNNs, facilitating their wide adoption in high-stakes tasks. Current approaches typically define multiple objective functions to construct comprehensive and accurate explainers. However, optimizing GNN explainers with multiple objective functions is challenging because the gradients of these objectives may conflict, which can lead to suboptimal solutions. To eliminate such conflicts and enhance explainer optimization, we introduce GEAR, a novel framework that adjusts gradients when optimizing the GNN explainer. Specifically, we define comprehensive objectives from multiple perspectives that are crucial for optimizing GNN explainers, including fidelity, sparsity, counterfactual reasoning, and connectivity. We then determine the dominant gradient and the angle threshold of conflicting gradients from a geometric perspective. More importantly, we propose a simple yet effective gradient adjuster that refines the gradients during explainer optimization. Experiments on synthetic and real-world datasets demonstrate the effectiveness of the proposed GEAR. In addition, state-of-the-art explainers equipped with the proposed gradient adjuster outperform their counterparts without it.
KW - Explainable machine learning
KW - Gradient adjustment
KW - Graph neural network
KW - Multi-objective optimization
UR - http://www.scopus.com/inward/record.url?scp=85201255314&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2024.112368
DO - 10.1016/j.knosys.2024.112368
M3 - Journal article
AN - SCOPUS:85201255314
SN - 0950-7051
VL - 302
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 112368
ER -