Multi-modal speaker recognition has received considerable attention in recent years due to growing security demands in real-world applications. In this paper, we present an efficient audiovisual speaker recognition method that fuses face and audio via multi-modal correlated neural networks. In our approach, the facial features learned by convolutional neural networks are made compatible with audio features at a high level, so that the heterogeneous multi-modal features can be learned automatically. Accordingly, we propose a correlated neural network that fuses the face and audio modalities at different levels such that the speaker's identity can be reliably identified. The experimental results show that our proposed multi-modal speaker recognition approach outperforms either single modality, and that feature-level fusion yields comparable and even better results than decision-level fusion.
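The two fusion levels contrasted above can be illustrated with a minimal sketch. This is not the paper's implementation: the embedding dimensions, the concatenation for feature-level fusion, and the weighted score average for decision-level fusion are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-extracted embeddings (dimensions are illustrative,
# not taken from the paper): a face embedding from a CNN and an
# audio embedding from a speaker model.
face_emb = rng.standard_normal(128)
audio_emb = rng.standard_normal(64)

def feature_level_fusion(face, audio):
    """Feature-level fusion: concatenate the two modality embeddings
    into one joint vector, which a single classifier then scores."""
    return np.concatenate([face, audio])

def decision_level_fusion(face_score, audio_score, w=0.5):
    """Decision-level fusion: each modality is scored by its own
    classifier, and the scores are combined (here, a weighted mean)."""
    return w * face_score + (1.0 - w) * audio_score

joint = feature_level_fusion(face_emb, audio_emb)
print(joint.shape)          # joint vector has 128 + 64 = 192 dimensions
fused = decision_level_fusion(0.8, 0.4)
print(round(fused, 3))
```

In the feature-level case a single model sees both modalities and can learn cross-modal correlations; in the decision-level case each modality is modeled independently and only the final scores interact.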