Lip-password offers a promising solution for speaker verification (Liu and Cheung 2014). Despite its potential, related studies remain scarce, largely due to the lack of public datasets. Furthermore, previous works in this field generally demand substantial numbers of training and negative samples, which impedes their practical application. This paper therefore collects a lip-password dataset and proposes a novel few-shot lip-password-based speaker verification model that can be deployed effectively in real-world scenarios, since only a small amount of data is required for training. Specifically, guided by an analysis of lip-password features, a down-sampling strategy is presented to generate more training samples. To compensate for the information loss caused by this strategy, a few-shot model consisting of a global model and a local model is designed to verify the global and local information of the lip-password simultaneously. Speaker identity is confirmed only if both verification stages are passed. The efficacy of the proposed method is demonstrated on the newly collected dataset.
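The two-stage decision rule described above can be sketched as follows. This is an illustrative outline, not the authors' implementation: the function name, score inputs, and thresholds are hypothetical placeholders standing in for the outputs of the global and local few-shot models.

```python
def verify_speaker(global_score: float, local_score: float,
                   global_thresh: float = 0.5,
                   local_thresh: float = 0.5) -> bool:
    """Accept the claimed identity only when BOTH stages pass.

    `global_score` and `local_score` stand in for the similarity
    outputs of the (hypothetical) global and local models; the
    threshold values are placeholders, not taken from the paper.
    """
    # Global stage: overall lip-password match must succeed.
    if global_score < global_thresh:
        return False
    # Local stage: fine-grained (e.g., per-segment) match must also succeed.
    if local_score < local_thresh:
        return False
    return True
```

The conjunctive design means a forged utterance must fool both verifiers, which is stricter than either stage alone.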