Stability and generalization of graph convolutional networks in eigen-domains

Michael K. Ng, Andy Yip*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

Graph Convolutional Networks (GCNs) have been shown to be very effective in exploiting pairwise relationships among samples, and they have been successfully applied to a variety of machine learning problems in practice. In many applications, GCNs are constructed with more than one layer, yet their generalization and stability analyses remain limited. The main aim of this paper is to analyze GCNs with two layers. The formulation is based on transductive semi-supervised learning, and the filtering is performed in the eigen-domain of the graph. We establish the uniform stability of the neural network and the convergence of its generalization gap to zero. The analysis of the two-layer GCN is more involved than that of the single-layer case and requires new estimates of several quantities associated with the network. Beyond confirming the usefulness of GCNs, the analysis sheds light on the design of the network, for instance, on how the data should be scaled to achieve uniform stability of the learning process. Experimental results on benchmark datasets are presented to illustrate the theory.
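To make the setting concrete, below is a minimal sketch of a two-layer GCN whose graph filter is defined on the eigenvalues of the symmetric normalized Laplacian, i.e., filtering in the eigen-domain. This is an illustration, not the paper's exact formulation: the problem sizes, the random graph, the 1/sqrt(d) weight scaling, and the low-pass filter g(lambda) = 1 - lambda/2 are all assumptions made for the example.

```python
# Minimal sketch (assumptions noted above) of a two-layer spectral GCN.
import numpy as np

rng = np.random.default_rng(0)
n, d, h, c = 8, 5, 4, 3          # nodes, input dim, hidden dim, classes (assumed sizes)

# Random undirected graph and symmetric normalized Laplacian
# L = I - D^{-1/2} A D^{-1/2}.
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = A + A.T
deg = A.sum(axis=1).clip(min=1)
L = np.eye(n) - A / np.sqrt(np.outer(deg, deg))

# Eigen-decomposition: the filter acts on the spectrum (the "eigen-domain").
lam, U = np.linalg.eigh(L)

def spectral_filter(lam):
    # Illustrative low-pass filter on the Laplacian eigenvalues (an assumption).
    return 1.0 - 0.5 * lam

G = U @ np.diag(spectral_filter(lam)) @ U.T   # filtered graph operator

X = rng.standard_normal((n, d))               # node features
W1 = rng.standard_normal((d, h)) / np.sqrt(d) # layer-1 weights, scaled
W2 = rng.standard_normal((h, c)) / np.sqrt(h) # layer-2 weights, scaled

# Two-layer forward pass: filter, mix features, nonlinearity, then repeat.
H1 = np.maximum(G @ X @ W1, 0.0)              # ReLU hidden layer
logits = G @ H1 @ W2
print(logits.shape)                           # (n, c): one score vector per node
```

In this spectral form, the filter simply reweights the Laplacian eigenvalues before features are mixed layer by layer. The explicit weight scaling above is in the spirit of the abstract's observation that how quantities are scaled matters for the uniform stability of the learning process.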

Original language: English
Pages (from-to): 819-840
Number of pages: 22
Journal: Analysis and Applications
Volume: 21
Issue number: 3
Early online date: 15 Feb 2023
DOIs
Publication status: Published - May 2023

Scopus Subject Areas

  • Analysis
  • Applied Mathematics

User-Defined Keywords

  • eigenvalues
  • generalization guarantees
  • graph convolutional neural networks
  • stability
