Realization of Spatial Sparseness by Deep ReLU Nets with Massive Data

Charles K. Chui, Shao Bo Lin*, Bo Zhang, Ding Xuan Zhou

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

17 Citations (Scopus)

Abstract

The great success of deep learning poses urgent challenges for understanding its working mechanism and rationality. Depth, structure, and massive data are recognized as three key ingredients of deep learning. Most recent theoretical studies of deep learning focus on the necessity and advantages of the depth and structure of neural networks. In this article, we aim at a rigorous verification of the importance of massive data in demonstrating the superior performance of deep learning. In particular, we prove that massive data are necessary for realizing spatial sparseness, and that deep nets are crucial tools for making full use of massive data in such an application. These findings explain why deep learning achieves great success in the era of big data, even though deep nets and numerous network structures were proposed at least 20 years ago.
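The paper's constructions are not reproduced on this page; as a purely illustrative sketch of what "spatial sparseness" means for a deep ReLU net, the following (hypothetical, numpy-based) two-hidden-layer network outputs 1 on a small cube and exactly 0 outside a slightly larger one, so it responds only on a localized region of the input space:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def trapezoid(t):
    # 1-D ReLU trapezoid: equals 1 on [-1, 1], 0 outside [-2, 2],
    # and is piecewise linear in between (first hidden layer).
    return relu(t + 2) - relu(t + 1) - relu(t - 1) + relu(t - 2)

def bump(x):
    # x: input vector of shape (d,). A second ReLU layer combines the
    # coordinate-wise trapezoids: the output is 1 on the cube [-1, 1]^d
    # and exactly 0 outside [-2, 2]^d -- a spatially sparse response.
    d = x.shape[0]
    return relu(np.sum(trapezoid(x)) - (d - 1))
```

For example, `bump(np.zeros(3))` fires with value 1, while any input with a coordinate outside `[-2, 2]` yields exactly 0. A shallow net with the same activation cannot vanish identically off a bounded set this way, which is one intuition behind the depth requirement discussed in the abstract.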

Original language: English
Pages (from-to): 229-243
Number of pages: 15
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 33
Issue number: 1
DOIs
Publication status: Published - Jan 2022

Scopus Subject Areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

User-Defined Keywords

  • Deep nets
  • learning theory
  • massive data
  • spatial sparseness
