Generalization ability of fractional polynomial models

Yunwen Lei*, Lixin Ding, Yiming Ding

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

1 Citation (Scopus)

Abstract

In this paper, the problem of learning the functional dependency between input and output variables from scattered data using fractional polynomial models (FPM) is investigated. Estimation error bounds are obtained by calculating the pseudo-dimension of FPM, which is shown to equal that of sparse polynomial models (SPM). A linear decay of the approximation error is established for a class of target functions that is dense in the space of continuous functions. We derive a structural risk analogous to the Schwarz criterion and demonstrate theoretically that the model minimizing this structural risk achieves a favorable balance between estimation and approximation errors. An empirical model selection comparison is also performed to justify the use of this structural risk in selecting the optimal complexity index from the data. We show that the construction of FPM can be addressed efficiently by the variable projection method. Furthermore, our empirical study indicates that FPM attains better generalization performance than SPM and cubic splines.
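To make the variable projection idea mentioned in the abstract concrete, the following is a minimal sketch (not the paper's implementation): in a fractional polynomial model y ≈ Σⱼ cⱼ x^{pⱼ}, the coefficients cⱼ enter linearly, so for each candidate set of fractional exponents they can be eliminated by ordinary least squares, leaving a search over the exponents alone. The exponent grid, model size, and function names here are illustrative assumptions, not taken from the paper.

```python
import itertools
import numpy as np

def fit_fpm(x, y, k=2, grid=(-1.0, -0.5, 0.5, 1.0, 1.5, 2.0)):
    """Fit a k-term fractional polynomial y ~ sum_j c_j * x**p_j on x > 0.

    Variable projection: for each candidate exponent tuple p, the linear
    coefficients c are eliminated by least squares, so the search runs
    over the (nonlinear) exponents only. Here that search is a simple
    grid enumeration for illustration.
    """
    best = None
    for p in itertools.combinations(grid, k):
        A = np.power.outer(x, p)                    # design matrix: columns x**p_j
        c, *_ = np.linalg.lstsq(A, y, rcond=None)   # project out the linear part
        rss = np.sum((A @ c - y) ** 2)              # residual sum of squares
        if best is None or rss < best[0]:
            best = (rss, p, c)
    return best[1], best[2]

# Usage: recover y = 2*x**0.5 + 3*x**1.5 from noiseless samples on x > 0.
x = np.linspace(0.1, 4.0, 50)
y = 2 * x**0.5 + 3 * x**1.5
p, c = fit_fpm(x, y)
# p == (0.5, 1.5), c close to [2, 3]
```

Because the coefficients are solved in closed form at each step, the effective search space has the same dimension as the number of exponents, which is what makes the construction efficient.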

Original language: English
Pages (from-to): 59-73
Number of pages: 15
Journal: Neural Networks
Volume: 49
Early online date: 1 Oct 2013
DOIs
Publication status: Published - Jan 2014

Scopus Subject Areas

  • Cognitive Neuroscience
  • Artificial Intelligence

User-Defined Keywords

  • Approximation theory
  • Fractional polynomial
  • Learning algorithm
  • Learning theory
  • Model selection
