A divide-and-conquer learning approach to radial basis function networks

Yiu Ming Cheung*, Rong Bo Huang

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

5 Citations (Scopus)

Abstract

This paper presents a new divide-and-conquer based learning approach to radial basis function (RBF) networks, in which a conventional RBF network is divided into several RBF sub-networks. Each of them individually takes an input sub-space as its input. The original network's output then becomes a linear combination of the sub-networks' outputs, with the coefficients adaptively learned together with the system parameters of each sub-network. Since this approach reduces the structural complexity of an RBF network by describing a high-dimensional modelling problem via several low-dimensional ones, the network's learning speed is considerably improved as a whole while maintaining comparable generalization capability. Empirical studies have shown its outstanding performance in forecasting two real time series as well as synthetic data. Furthermore, we have found that the performance of this approach generally varies with different decompositions of the network's input and hidden layer. We therefore further explore the decomposition rule, with the results verified by experiments.
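To make the architecture in the abstract concrete, below is a minimal sketch (not the authors' code) of the divide-and-conquer RBF idea: the input space is split into sub-spaces, each handled by a small RBF sub-network, and the overall output is a learned linear combination of the sub-network outputs. The Gaussian basis functions, the gradient-descent updates, and all class and variable names are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a divide-and-conquer RBF network: input dimensions are split into
# sub-spaces, each modelled by an RBF sub-network, and the network output is a
# linear combination of sub-network outputs with adaptively learned coefficients.
import numpy as np

class RBFSubNet:
    """RBF sub-network operating on one input sub-space."""
    def __init__(self, in_dim, n_hidden, rng):
        self.centers = rng.normal(size=(n_hidden, in_dim))
        self.widths = np.ones(n_hidden)
        self.weights = rng.normal(scale=0.1, size=n_hidden)

    def forward(self, x_sub):
        # Gaussian basis responses for this sub-space input
        d2 = np.sum((x_sub - self.centers) ** 2, axis=1)
        self.phi = np.exp(-d2 / (2.0 * self.widths ** 2))
        return self.weights @ self.phi

class DivideAndConquerRBF:
    def __init__(self, dim_splits, n_hidden_per_subnet, seed=0):
        rng = np.random.default_rng(seed)
        self.splits = dim_splits  # list of index arrays, one per sub-space
        self.subnets = [RBFSubNet(len(s), n_hidden_per_subnet, rng)
                        for s in dim_splits]
        self.alpha = np.ones(len(dim_splits)) / len(dim_splits)

    def forward(self, x):
        self.sub_out = np.array([net.forward(x[s])
                                 for net, s in zip(self.subnets, self.splits)])
        return self.alpha @ self.sub_out

    def update(self, x, target, lr=0.01):
        # One stochastic-gradient step on squared error; the combination
        # coefficients alpha are adapted together with each sub-network's
        # output weights, mirroring the joint learning described above.
        err = self.forward(x) - target
        for k, net in enumerate(self.subnets):
            net.weights -= lr * err * self.alpha[k] * net.phi
        self.alpha -= lr * err * self.sub_out

# Example: a 4-dimensional input split into two 2-dimensional sub-spaces.
model = DivideAndConquerRBF([np.array([0, 1]), np.array([2, 3])],
                            n_hidden_per_subnet=5)
x, y = np.random.default_rng(1).normal(size=4), 0.7
model.update(x, y)
```

The choice of how the dimensions are partitioned (here, simply the first two and last two) is exactly the decomposition question the paper investigates, since different input and hidden-layer decompositions can give different performance.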

Original language: English
Pages (from-to): 189-206
Number of pages: 18
Journal: Neural Processing Letters
Volume: 21
Issue number: 3
DOIs
Publication status: Published - Jun 2005

Scopus Subject Areas

  • Software
  • General Neuroscience
  • Computer Networks and Communications
  • Artificial Intelligence

User-Defined Keywords

  • Divide and conquer learning
  • Hidden-layer decomposition
  • Input decomposition
  • Radial basis function network
  • Recurrent radial basis function network
