Abstract
Graph Convolutional Networks (GCNs) have proven very effective at exploiting pairwise relationships among samples and have been applied successfully to a wide range of machine learning problems. In many applications, the GCN involves more than one layer, yet analyses of the generalization and stability of such networks are limited. The main aim of this paper is to analyze GCNs with two layers. The formulation is based on transductive semi-supervised learning, and the graph filtering is performed in the eigen-domain. We establish the uniform stability of the network and the convergence of its generalization gap to zero. The analysis of the two-layer GCN is more involved than the single-layer case and requires new estimates of several quantities associated with the network. Beyond confirming the usefulness of GCNs, the analysis sheds light on the design of the network, for instance, how the data should be scaled so that the learning process is uniformly stable. Experimental results on benchmark datasets are presented to illustrate the theory.
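As a rough illustration of the kind of architecture the paper studies, the following is a minimal sketch of a two-layer GCN whose graph filtering acts in the eigen-domain of the normalized graph Laplacian. It is not the paper's exact construction: the spectral response `g`, the ReLU activation, and the row-normalization of the features `X` (one simple form of data scaling) are illustrative assumptions.

```python
import numpy as np

def eigen_domain_filter(A):
    """Build a spectral filter g(L) from the eigendecomposition of the
    symmetrically normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(A.shape[0]) - (d_inv_sqrt[:, None] * A) * d_inv_sqrt[None, :]
    lam, U = np.linalg.eigh(L)        # eigenvalues of L lie in [0, 2]
    g = 1.0 - lam / 2.0               # an illustrative low-pass response g(lambda)
    return (U * g) @ U.T              # filter matrix U diag(g) U^T

def two_layer_gcn(A, X, W1, W2):
    """Forward pass Z = g(L) relu(g(L) X W1) W2 of a two-layer GCN."""
    # Row-normalize the features: one simple data-scaling choice, in the
    # spirit of the scaling conditions discussed in the abstract.
    X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    G = eigen_domain_filter(A)
    H = np.maximum(G @ X @ W1, 0.0)   # first graph convolution + ReLU
    return G @ H @ W2                 # second graph convolution (output scores)

# Example on a small random symmetric adjacency matrix.
rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T        # undirected graph, no self-loops
X = rng.standard_normal((6, 5))
Z = two_layer_gcn(A, X, rng.standard_normal((5, 8)), rng.standard_normal((8, 3)))
```

Uniform stability here is meant in the usual algorithmic-stability sense: changing a single training example should perturb the learned predictor by at most a quantity that vanishes as the number of labeled samples grows, which in turn drives the generalization gap to zero. Both the scaling of the features and the spectral norm of the filter typically enter such bounds, which is why the data-scaling remark in the abstract matters.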
| Original language | English |
| --- | --- |
| Pages (from-to) | 819-840 |
| Number of pages | 22 |
| Journal | Analysis and Applications |
| Volume | 21 |
| Issue number | 3 |
| Early online date | 15 Feb 2023 |
| Publication status | Published - May 2023 |
Scopus Subject Areas
- Analysis
- Applied Mathematics
User-Defined Keywords
- eigenvalues
- generalization guarantees
- Graph convolutional neural networks
- stability