Deep neural networks for rotation-invariance approximation and learning

Charles Kam-Tai CHUI, Shao Bo Lin*, Ding Xuan Zhou

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

24 Citations (Scopus)

Abstract

Based on a tree architecture, the objective of this paper is to design deep neural networks with two or more hidden layers (called deep nets) for the realization of radial functions, so as to enable rotational invariance for near-optimal function approximation in an arbitrarily high-dimensional Euclidean space. It is shown that deep nets perform much better than shallow nets (with only one hidden layer) in terms of approximation accuracy and learning capability. In particular, for learning radial functions, a near-optimal learning rate can be achieved by deep nets but not by shallow nets. Our results illustrate the necessity of depth in neural network design for the realization of rotation-invariant target functions.
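As a rough illustration of the architectural idea (not the paper's explicit construction or its approximation rates), the following NumPy sketch shows a network that first extracts the rotation-invariant radial feature r = ‖x‖ and then applies a one-hidden-layer net to that scalar. Here the radial feature is computed exactly for clarity; in the paper's setting, an initial group of hidden layers approximates it, which is precisely where the extra depth is used. All weights and sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical weights: a one-hidden-layer net acting on the scalar radius,
# mimicking the idea that a deep net can (i) extract the radial feature and
# (ii) approximate the univariate profile g in f(x) = g(||x||).
W1 = rng.normal(size=(16, 1))
b1 = rng.normal(size=16)
W2 = rng.normal(size=(1, 16))
b2 = rng.normal(size=1)

def radial_net(x):
    r = np.linalg.norm(x, keepdims=True)   # rotation-invariant feature r = ||x||
    h = relu(W1 @ r + b1)                  # hidden layer on the radius
    return (W2 @ h + b2).item()

# Rotation-invariance check: the output is unchanged under any orthogonal Q.
d = 5
x = rng.normal(size=d)
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthogonal matrix
print(radial_net(x), radial_net(Q @ x))        # equal up to float error
```

By construction, the network's output depends on x only through ‖x‖, so any rotation (or reflection) of the input leaves it unchanged; a shallow net with generic ridge units has no such built-in invariance.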

Original language: English
Pages (from-to): 737-772
Number of pages: 36
Journal: Analysis and Applications
Volume: 17
Issue number: 5
DOIs
Publication status: Published - 1 Sept 2019

Scopus Subject Areas

  • Analysis
  • Applied Mathematics

User-Defined Keywords

  • Deep nets
  • learning theory
  • radial-basis functions
  • rotation-invariance
