TY - JOUR
T1 - Rival-model penalized self-organizing map
AU - CHEUNG, Yiu Ming
AU - Law, Lap Tak
N1 - Funding Information:
Manuscript received September 5, 2004; revised June 20, 2006. This work was supported by the Research Grant Council of the Hong Kong SAR, China, under Project HKBU 2156/04E and Project HKBU 210306, and by the Faculty Research Grant of Hong Kong Baptist University, Hong Kong, under Project FRG/05-06/II-42.
PY - 2007/1
Y1 - 2007/1
N2 - As a typical data visualization technique, the self-organizing map (SOM) has been extensively applied to data clustering, image analysis, dimension reduction, and so forth. A conventional adaptive SOM requires an appropriate learning rate whose value decreases monotonically over time to ensure the convergence of the map, while remaining large enough for the map to gradually learn the data topology; otherwise, the SOM's performance may seriously deteriorate. In general, it is nontrivial to choose an appropriate monotonically decreasing function for such a learning rate. In this letter, we therefore propose a novel rival-model penalized self-organizing map (RPSOM) learning algorithm that, for each input, adaptively chooses several rivals of the best-matching unit (BMU) and penalizes their associated models, i.e., those parametric real vectors with the same dimension as the input vectors, pushing them slightly away from the input. Compared to existing methods, the RPSOM utilizes a constant learning rate, circumventing the awkward selection of a monotonically decreasing function for the learning rate, yet still achieves robust results. Numerical experiments have shown the efficacy of our algorithm.
AB - As a typical data visualization technique, the self-organizing map (SOM) has been extensively applied to data clustering, image analysis, dimension reduction, and so forth. A conventional adaptive SOM requires an appropriate learning rate whose value decreases monotonically over time to ensure the convergence of the map, while remaining large enough for the map to gradually learn the data topology; otherwise, the SOM's performance may seriously deteriorate. In general, it is nontrivial to choose an appropriate monotonically decreasing function for such a learning rate. In this letter, we therefore propose a novel rival-model penalized self-organizing map (RPSOM) learning algorithm that, for each input, adaptively chooses several rivals of the best-matching unit (BMU) and penalizes their associated models, i.e., those parametric real vectors with the same dimension as the input vectors, pushing them slightly away from the input. Compared to existing methods, the RPSOM utilizes a constant learning rate, circumventing the awkward selection of a monotonically decreasing function for the learning rate, yet still achieves robust results. Numerical experiments have shown the efficacy of our algorithm.
KW - Constant learning rate
KW - Rival-model penalized self-organizing map (RPSOM)
KW - Self-organizing map (SOM)
UR - http://www.scopus.com/inward/record.url?scp=33846050871&partnerID=8YFLogxK
U2 - 10.1109/TNN.2006.885039
DO - 10.1109/TNN.2006.885039
M3 - Journal article
C2 - 17278479
AN - SCOPUS:33846050871
SN - 1045-9227
VL - 18
SP - 289
EP - 295
JO - IEEE Transactions on Neural Networks
JF - IEEE Transactions on Neural Networks
IS - 1
ER -