Abstract
This paper proposes the concept of a lip motion password (simply called lip-password hereinafter), which combines a password embedded in the lip movement with the underlying characteristics of the lip motion itself. It provides double security to a visual speaker verification system: the speaker is verified by both the private password information and the underlying behavioral biometrics of lip motion simultaneously. Accordingly, a target speaker saying the wrong password, or an impostor who knows the correct password, will be detected and rejected. To this end, we present a multi-boosted hidden Markov model (HMM) learning approach to such a system. Initially, we extract a group of representative visual features to characterize each lip frame. Then, an effective lip motion segmentation algorithm segments the lip-password sequence into a small set of distinguishable subunits. Subsequently, we integrate HMMs with a boosting learning framework, together with a random subspace method and a data sharing scheme, to formulate a precise decision boundary for subunit verification with high discriminative power. Finally, the lip-password, whether or not spoken by the target speaker with the pre-registered password, is verified based on all the subunit verification results learned from the multi-boosted HMMs. Experimental results show that the proposed approach performs favorably compared with state-of-the-art methods.
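As a rough illustration of the verification logic the abstract describes (segment the utterance into subunits, score each subunit with a boosted ensemble of models, and accept only if every subunit passes), the following is a minimal Python sketch. The `ToyModel` Gaussian-distance scorer, the ensemble weights, and the threshold are all hypothetical stand-ins for the paper's per-subunit boosted HMMs; nothing here reproduces the actual trained method.

```python
class ToyModel:
    """Hypothetical stand-in for one weak learner (the paper uses HMMs).

    Scores a subunit feature sequence by a crude log-likelihood proxy:
    the negative mean squared distance of the frames to a learned mean.
    """

    def __init__(self, mean):
        self.mean = mean

    def log_likelihood(self, seq):
        return -sum((x - self.mean) ** 2 for x in seq) / len(seq)


def verify_lip_password(subunits, ensembles, threshold=-1.0):
    """Accept only if every subunit passes its boosted ensemble.

    `subunits[i]` is the feature sequence of the i-th segmented subunit;
    `ensembles[i]` is a list of (model, weight) pairs whose weighted
    average score mimics a boosted combination of weak learners.
    The threshold value is illustrative, not from the paper.
    """
    for seq, ensemble in zip(subunits, ensembles):
        total_w = sum(w for _, w in ensemble)
        score = sum(w * m.log_likelihood(seq) for m, w in ensemble) / total_w
        if score < threshold:
            # Either the password content or the speaker's lip-motion
            # characteristics do not match this subunit's model.
            return False
    return True


# Fabricated enrollment: two subunits, each with its own small ensemble.
ensembles = [
    [(ToyModel(0.0), 0.6), (ToyModel(0.1), 0.4)],
    [(ToyModel(1.0), 1.0)],
]
genuine = [[0.05, -0.02, 0.1], [0.9, 1.1]]   # close to the enrolled models
impostor = [[2.0, 2.2, 1.9], [0.9, 1.1]]     # first subunit deviates

print(verify_lip_password(genuine, ensembles))   # True
print(verify_lip_password(impostor, ensembles))  # False
```

Note the all-subunits-must-pass rule: a single failed subunit rejects the utterance, which reflects the abstract's requirement that both the password content and the speaker's lip-motion biometrics be correct.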
Original language | English |
---|---|
Article number | 6675773 |
Pages (from-to) | 233-246 |
Number of pages | 14 |
Journal | IEEE Transactions on Information Forensics and Security |
Volume | 9 |
Issue number | 2 |
DOIs | |
Publication status | Published - Feb 2014 |
Scopus Subject Areas
- Safety, Risk, Reliability and Quality
- Computer Networks and Communications
User-Defined Keywords
- data sharing scheme
- lip motion segmentation
- lip-password
- multi-boosted HMMs
- random subspace method