TY - JOUR
T1 - Maximum weighted likelihood via rival penalized EM for density mixture clustering with automatic model selection
AU - Cheung, Yiu Ming
N1 - Funding Information:
The author would like to sincerely thank the editor and three anonymous reviewers for their valuable comments and insightful suggestions. The author also thanks Mr. Lap-tak Law for conducting the RPEM experiment on the color image segmentation problem. This work was supported by the Research Grant Council of Hong Kong SAR under Project HKBU 2156/04E.
PY - 2005/6
Y1 - 2005/6
N2 - The Expectation-Maximization (EM) algorithm [10] has been extensively used in density mixture clustering problems, but it is unable to perform model selection automatically. This paper therefore proposes to learn the model parameters by maximizing a weighted likelihood. Under a specific weight design, we derive a Rival Penalized Expectation-Maximization (RPEM) algorithm, which makes the components in a density mixture compete with each other at each time step. Not only are the parameters of the winning component updated to adapt to an input, but the parameters of all rival components are also penalized with a strength proportional to the corresponding posterior density probabilities. Compared to the EM algorithm [10], the RPEM is able to fade out redundant densities from a density mixture during the learning process. Hence, it can automatically select an appropriate number of densities in density mixture clustering. We experimentally demonstrate its outstanding performance on Gaussian mixtures and a color image segmentation problem. Moreover, a simplified version of RPEM generalizes our recently proposed RPCCL algorithm [8] so that it is applicable to elliptical clusters as well, with any input proportion. Compared to the existing heuristic RPCL [25] and its variants, this generalized RPCCL (G-RPCCL) circumvents the difficult preselection of the so-called delearning rate. Additionally, a special setting of the G-RPCCL not only degenerates to RPCL and its Type A variant, but also gives guidance on choosing an appropriate delearning rate for them. Subsequently, we propose stochastic versions of RPCL and its Type A variant, respectively, in which the difficult problem of selecting the delearning rate is circumvented in a novel way. The experiments show the promising results of this stochastic implementation. © 2005 IEEE.
KW - Cluster number
KW - Generalized rival penalization controlled competitive learning
KW - Maximum weighted likelihood
KW - Rival penalized expectation-maximization algorithm
KW - Stochastic implementation
UR - http://www.scopus.com/inward/record.url?scp=20844432428&partnerID=8YFLogxK
U2 - 10.1109/TKDE.2005.97
DO - 10.1109/TKDE.2005.97
M3 - Journal article
AN - SCOPUS:20844432428
SN - 1041-4347
VL - 17
SP - 750
EP - 761
JO - IEEE Transactions on Knowledge and Data Engineering
JF - IEEE Transactions on Knowledge and Data Engineering
IS - 6
ER -