This paper presents a multi-boosted Hidden Markov Model (HMM) approach to lip-password-based speaker verification, where the lip password is the password embedded in the lip motion, and the speaker is verified by both the password content and the underlying characteristics of the lip motion. As a result, a target speaker saying the wrong password, as well as an impostor who knows the correct password, will be rejected. To this end, we first propose an effective lip motion segmentation algorithm that segments the password sequence into a small set of discrete subunits. We then integrate HMMs into a boosting learning framework, combined with the random subspace method (RSM) and a data sharing scheme (DSS), to model each segmented subunit discriminatively, so that a precise decision boundary is formulated for subunit verification. Finally, the speaker is verified by combining the verification results of all subunits obtained from the multi-boosted HMMs. Experiments show promising results.
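The decision stage described above (per-subunit verification by a boosted ensemble, followed by aggregation over all subunits) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weak verifiers stand in for HMMs trained on random feature subspaces (RSM) with data sharing (DSS), and the names `verify_subunit`, `verify_speaker`, and the `scorers`/`alphas` structure are assumptions introduced here for clarity.

```python
def verify_subunit(features, scorers, alphas, threshold=0.0):
    """Weighted vote of boosted weak verifiers for one password subunit.

    Each scorer returns +1 (accept) or -1 (reject). In the paper these
    would be HMMs trained on random feature subspaces; here they are
    stand-in callables (an assumption for illustration).
    """
    score = sum(a * s(features) for s, a in zip(scorers, alphas))
    return score > threshold


def verify_speaker(subunit_features, subunit_ensembles):
    """Accept the speaker only if every subunit passes its boosted ensemble."""
    return all(
        verify_subunit(feats, scorers, alphas)
        for feats, (scorers, alphas) in zip(subunit_features, subunit_ensembles)
    )


# Toy demo: each "scorer" simply thresholds the mean of a feature vector.
def make_scorer(t):
    return lambda f: 1 if sum(f) / len(f) > t else -1

ensembles = [([make_scorer(0.3), make_scorer(0.5)], [0.6, 0.4])] * 2
genuine = [[0.7, 0.8], [0.9, 0.6]]   # both subunits score above thresholds
impostor = [[0.1, 0.2], [0.2, 0.1]]  # both subunits score below thresholds
print(verify_speaker(genuine, ensembles))   # True
print(verify_speaker(impostor, ensembles))  # False
```

The all-subunits-must-pass rule shown here is one plausible aggregation; the paper combines the subunit results without specifying the exact fusion rule in the abstract.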