FastHGNN: A New Sampling Technique for Learning with Hypergraph Neural Networks

Fengcheng Lu, Michael Kwok Po Ng*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

Hypergraphs can represent higher-order relations among objects. Traditional hypergraph neural networks involve a node-edge-node transform, leading to high computational cost and long running times. The main aim of this article is to propose a new sampling technique for learning with hypergraph neural networks. The core idea is to design a layer-wise sampling scheme for nodes and hyperedges to approximate the original hypergraph convolution. We rewrite hypergraph convolution in the form of a double integral and leverage Monte Carlo approximation to obtain a discrete and consistent estimator. In addition, we use importance sampling and derive feasible probability mass functions for both nodes and hyperedges that, under some assumptions, reduce the variance of the estimator. Notably, the proposed sampling technique allows us to handle large-scale hypergraph learning, which is not feasible with traditional hypergraph neural networks. Experimental results demonstrate that our proposed model keeps a good balance between running time and prediction accuracy.
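To make the idea concrete, below is a minimal NumPy sketch of the general approach the abstract describes: the exact node-edge-node hypergraph convolution is replaced by an unbiased Monte Carlo estimator that samples a subset of hyperedges with an importance distribution. The incidence matrix, sizes, and the squared-column-norm sampling distribution here are illustrative assumptions, not the paper's exact probability mass functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hypergraph: incidence matrix H (nodes x hyperedges) and node features X.
# These sizes and the random hypergraph are assumptions for illustration.
n_nodes, n_edges, n_feats = 100, 40, 8
H = (rng.random((n_nodes, n_edges)) < 0.1).astype(float)
X = rng.standard_normal((n_nodes, n_feats))

# Degree normalizations as in standard hypergraph convolution.
Dv = np.maximum(H.sum(axis=1), 1.0)      # node degrees
De = np.maximum(H.sum(axis=0), 1.0)      # hyperedge degrees
Hn = H / np.sqrt(Dv)[:, None]            # D_v^{-1/2} H

# Exact node-edge-node transform: D_v^{-1/2} H D_e^{-1} H^T D_v^{-1/2} X
exact = Hn @ ((Hn.T @ X) / De[:, None])

# Layer-wise importance sampling over hyperedges: draw s hyperedges with
# probability proportional to squared column norms (an illustrative choice),
# then reweight each sampled term by 1/(s * q) so the estimator is unbiased.
s = 15
q = (Hn ** 2).sum(axis=0)
q = q / q.sum()
idx = rng.choice(n_edges, size=s, replace=True, p=q)
scale = 1.0 / (s * q[idx])
approx = (Hn[:, idx] * scale) @ ((Hn[:, idx].T @ X) / De[idx][:, None])

err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error with {s}/{n_edges} sampled hyperedges: {err:.3f}")
```

Only the sampled columns of the incidence matrix are touched, so the per-layer cost scales with the number of sampled hyperedges rather than with all of them, which is what makes large-scale hypergraphs tractable.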

Original language: English
Article number: ART184
Number of pages: 26
Journal: ACM Transactions on Knowledge Discovery from Data
Volume: 18
Issue number: 8
DOIs
Publication status: Published - 26 Jul 2024

Scopus Subject Areas

  • Computer Science (all)

User-Defined Keywords

  • hypergraph neural networks
  • hypergraph sampling
  • importance sampling
  • large-scale hypergraph learning
  • variance reduction
