Hashing is an efficient approximate nearest neighbor search method and has been widely adopted for large-scale multimedia retrieval. While supervised learning dominates data-dependent hashing, deep unsupervised hashing methods can learn non-linear transformations that map multimedia inputs to binary codes without label information. Most existing deep unsupervised hashing methods rely on a quadratic constraint to minimize the difference between the compact representations and the target binary codes, which inevitably causes severe information loss. In this paper, we propose a novel deep unsupervised hashing method called DeepQuan. DeepQuan employs a deep autoencoder network, in which the encoder learns compact representations and the decoder preserves the manifold structure of the data. In contrast to existing unsupervised methods, DeepQuan learns the binary codes by minimizing the quantization error through product quantization. Furthermore, a weighted triplet loss is proposed to avoid trivial solutions and poor generalization. Extensive experiments on standard benchmark datasets show that DeepQuan outperforms state-of-the-art unsupervised hashing methods on image retrieval tasks.
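The quantization mechanism named above, product quantization, can be illustrated with a minimal NumPy sketch: each vector is split into subvectors, a small k-means codebook is trained per subspace, and the compact code is the tuple of nearest-centroid indices, so the quantization error is the reconstruction error of this code. This is an illustrative sketch of classical product quantization, not the DeepQuan model itself; the function names (`train_pq_codebooks`, `pq_encode`, `pq_decode`) are hypothetical.

```python
import numpy as np

def train_pq_codebooks(X, num_subvectors, num_centroids, iters=20, seed=0):
    """Train one k-means codebook per subvector (minimal PQ sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    sub_d = d // num_subvectors
    codebooks = []
    for m in range(num_subvectors):
        sub = X[:, m * sub_d:(m + 1) * sub_d]
        # initialize centroids from random training points
        centroids = sub[rng.choice(n, num_centroids, replace=False)]
        for _ in range(iters):
            # assign each subvector to its nearest centroid
            dists = ((sub[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
            assign = dists.argmin(1)
            # update each centroid as the mean of its cluster
            for k in range(num_centroids):
                pts = sub[assign == k]
                if len(pts):
                    centroids[k] = pts.mean(0)
        codebooks.append(centroids)
    return codebooks

def pq_encode(X, codebooks):
    """Encode each vector as a tuple of centroid indices (the compact code)."""
    sub_d = X.shape[1] // len(codebooks)
    codes = np.empty((X.shape[0], len(codebooks)), dtype=np.int64)
    for m, C in enumerate(codebooks):
        sub = X[:, m * sub_d:(m + 1) * sub_d]
        dists = ((sub[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        codes[:, m] = dists.argmin(1)
    return codes

def pq_decode(codes, codebooks):
    """Reconstruct vectors from codes; the quantization error for x is
    ||x - pq_decode(pq_encode(x))||^2."""
    return np.hstack([codebooks[m][codes[:, m]] for m in range(len(codebooks))])
```

With `num_subvectors` subspaces of `num_centroids` centroids each, a vector is stored in `num_subvectors * log2(num_centroids)` bits while distances remain approximately preserved, which is what makes quantization-based codes attractive compared to thresholding real-valued representations.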