Abstract
Hypergraphs can represent higher-order relations among objects. Traditional hypergraph neural networks rely on a node-edge-node transform, which incurs high computational cost and long running times. This article proposes a new sampling technique for learning with hypergraph neural networks. The core idea is a layer-wise sampling scheme over nodes and hyperedges that approximates the original hypergraph convolution. We rewrite hypergraph convolution as a double integral and leverage Monte Carlo estimation to obtain a discrete, consistent estimator. We further apply importance sampling and, under mild assumptions, derive feasible probability mass functions for both nodes and hyperedges that reduce the estimator's variance. Notably, the proposed sampling technique makes large-scale hypergraph learning tractable, which is not feasible with traditional hypergraph neural networks. Experimental results demonstrate that our proposed model strikes a good balance between running time and prediction accuracy.
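To make the approach concrete, the following is a minimal sketch of the estimator the abstract describes, written against the standard hypergraph convolution of Feng et al. (HGNN). The notation and the factorized sampling distributions q and p below are illustrative assumptions, not the paper's exact derivation.

```latex
% Standard hypergraph convolution (HGNN form); H is the incidence matrix,
% W the diagonal hyperedge-weight matrix, D_v and D_e the node- and
% hyperedge-degree matrices, Theta^{(l)} the learnable weights.
X^{(l+1)} = \sigma\!\left( D_v^{-1/2} H W D_e^{-1} H^{\top} D_v^{-1/2}
            X^{(l)} \Theta^{(l)} \right)

% For a target node v, the aggregation (before applying Theta and sigma)
% is a node-edge-node double sum -- the "double integral" of the abstract:
h_v = \sum_{e \in E} \sum_{u \in V}
      \frac{H_{ve}\, w_e\, H_{ue}}{\sqrt{d(v)}\,\delta(e)\,\sqrt{d(u)}}\, x_u

% Layer-wise Monte Carlo estimator with importance sampling: draw
% hyperedges e_1,...,e_m ~ q and nodes u_1,...,u_n ~ p independently,
% then reweight each term by 1/(q(e_j) p(u_i)). The sample average is
% unbiased for the double sum, hence consistent as m, n grow.
\hat{h}_v = \frac{1}{mn} \sum_{j=1}^{m} \sum_{i=1}^{n}
      \frac{H_{v e_j}\, w_{e_j}\, H_{u_i e_j}}
           {\sqrt{d(v)}\,\delta(e_j)\,\sqrt{d(u_i)}\, q(e_j)\, p(u_i)}\, x_{u_i}
```

The factorized choice of q and p shown here is only one plausible instantiation; the article's contribution is deriving feasible, variance-reducing mass functions for both nodes and hyperedges under its stated assumptions.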
Original language | English
---|---
Article number | ART184
Number of pages | 26
Journal | ACM Transactions on Knowledge Discovery from Data
Volume | 18
Issue number | 8
DOIs | 
Publication status | Published - 26 Jul 2024
Scopus Subject Areas
- Computer Science (all)
User-Defined Keywords
- hypergraph neural networks
- hypergraph sampling
- importance sampling
- large-scale hypergraph learning
- variance reduction