TY - CONF
T1 - A new recurrent radial basis function network
AU - Cheung, Yiu Ming
N1 - Funding Information:
The work described in this paper was supported by the Faculty Research Grant of Hong Kong Baptist University with Project Number: FRG/02-03/1-06
Publisher Copyright:
© 2002 Nanyang Technological University.
PY - 2002/11
Y1 - 2002/11
N2 - Cheung and Xu (2001) have presented a dual structural recurrent radial basis function (RBF) network that accounts for the different scales in the net's inputs and outputs. However, such a network implies that the underlying functional relationship between the net's inputs and outputs is linearly separable, which may not hold in practice. In this paper, we therefore propose a new recurrent RBF network. It takes the net's input and the past outputs as an augmented input, in analogy with Billings and Fung (1995), but introduces a scale tuner into the net's hidden layer to balance the different scales between inputs and outputs. This network adaptively learns the parameters in the hidden layer together with those in the output layer. We implement this network using a variant of the extended normalized RBF (Cheung and Xu (2001)), with its hidden units learned by the rival penalization controlled competitive learning algorithm. Experiments have shown the outstanding performance of the proposed network in recursive function estimation.
AB - Cheung and Xu (2001) have presented a dual structural recurrent radial basis function (RBF) network that accounts for the different scales in the net's inputs and outputs. However, such a network implies that the underlying functional relationship between the net's inputs and outputs is linearly separable, which may not hold in practice. In this paper, we therefore propose a new recurrent RBF network. It takes the net's input and the past outputs as an augmented input, in analogy with Billings and Fung (1995), but introduces a scale tuner into the net's hidden layer to balance the different scales between inputs and outputs. This network adaptively learns the parameters in the hidden layer together with those in the output layer. We implement this network using a variant of the extended normalized RBF (Cheung and Xu (2001)), with its hidden units learned by the rival penalization controlled competitive learning algorithm. Experiments have shown the outstanding performance of the proposed network in recursive function estimation.
UR - http://www.scopus.com/inward/record.url?scp=84964515060&partnerID=8YFLogxK
U2 - 10.1109/ICONIP.2002.1198217
DO - 10.1109/ICONIP.2002.1198217
M3 - Conference proceeding
AN - SCOPUS:84964515060
VL - 2
T3 - Proceedings of the International Conference on Neural Information Processing
SP - 1032
EP - 1036
BT - Proceedings of the 9th International Conference on Neural Information Processing
A2 - Wang, Lipo
A2 - Rajapakse, Jagath C.
A2 - Fukushima, Kunihiko
A2 - Lee, Soo Young
A2 - Yao, Xin
PB - IEEE
T2 - 9th International Conference on Neural Information Processing, ICONIP 2002
Y2 - 18 November 2002 through 22 November 2002
ER -