TY - JOUR
T1 - Regularization parameter selection in minimum volume hyperspectral unmixing
AU - Zhuang, Lina
AU - Lin, Chia-Hsiang
AU - Figueiredo, Mário A. T.
AU - Bioucas-Dias, José M.
N1 - Funding Information:
Manuscript received June 5, 2018; revised October 13, 2018, February 19, 2019, and June 5, 2019; accepted June 26, 2019. Date of publication August 14, 2019; date of current version November 25, 2019. This work was supported in part by the European Union’s Seventh Framework Programme (FP7-PEOPLE-2013-ITN) under Grant 607290 SpaRTaN, in part by the Portuguese Foundation for Science and Technology/Ministry of Education and Science (FCT/MEC) through National Funds, in part by the European Regional Development Fund (FEDER), within the Portugal 2020 (PT-2020) Partnership Agreement under Project UID/EEA/50008/2019, in part by the Young Scholar Fellowship Program (Einstein Program) of Ministry of Science and Technology (MOST), Taiwan, under Grant MOST107-2636-E-006-006, and in part by the Higher Education Sprout Project of Ministry of Education (MOE) to the Headquarters of University Advancement at National Cheng Kung University (NCKU). (Corresponding author: Lina Zhuang.) L. Zhuang, M. A. T. Figueiredo, and J. M. Bioucas-Dias are with the Instituto de Telecomunicações, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisbon, Portugal (e-mail: [email protected]; [email protected]; [email protected]).
PY - 2019/12
Y1 - 2019/12
N2 - Linear hyperspectral unmixing (HU) aims at factoring the observation matrix into an endmember matrix and an abundance matrix. Linear HU via variational minimum volume (MV) regularization has recently received considerable attention in the remote sensing and machine learning areas, mainly owing to its robustness against the absence of pure pixels. We place several popular linear HU formulations under a unifying framework, involving a data-fitting term and an MV-based regularization term, and solve the resulting problem via nonconvex optimization. Since the data-fitting term tends to expand the simplex enclosing the measured spectra (reducing the data-fitting error) while the MV term tends to shrink it, it is critical to strike a balance between the two. To the best of our knowledge, existing methods find such a balance by tuning the regularization parameter manually, which is of little value in unsupervised scenarios. In this paper, we select the regularization parameter automatically by exploiting the fact that an excessively large parameter overshrinks the volume of the simplex defined by the endmembers, leaving many data points outside the simplex and hence inducing a large data-fitting error, whereas a sufficiently small parameter yields a large simplex and a very small data-fitting error. Roughly speaking, the transition occurs when the simplex still encloses the data cloud but there are data points on all of its facets. These observations are systematically formulated to find the transition point, which in turn yields a good parameter. The competitiveness of the proposed selection criterion is illustrated with simulated and real data.
KW - Craig criterion
KW - hyperspectral images (HSIs)
KW - nonconvex optimization
KW - spectral unmixing
UR - http://www.scopus.com/inward/record.url?scp=85075667290&partnerID=8YFLogxK
U2 - 10.1109/TGRS.2019.2929776
DO - 10.1109/TGRS.2019.2929776
M3 - Journal article
AN - SCOPUS:85075667290
SN - 0196-2892
VL - 57
SP - 9858
EP - 9877
JO - IEEE Transactions on Geoscience and Remote Sensing
JF - IEEE Transactions on Geoscience and Remote Sensing
IS - 12
M1 - 8798985
ER -