The partially observable Markov decision process (POMDP) is a commonly adopted framework for modeling planning problems in which agents act in a stochastic environment. Obtaining the optimal POMDP policy for large-scale problems is known to be intractable, with the high dimensionality of the belief state being one of the major causes. Compression approaches have recently shown promise in tackling this curse of dimensionality. In this paper, a novel value-directed belief compression technique is proposed, together with clustering of belief states to further reduce the underlying computational complexity. We first cluster sampled belief states into disjoint partitions and then apply a non-negative matrix factorization (NMF) based projection to each belief state cluster for dimension reduction. The optimal policy is then computed using a point-based value iteration algorithm defined in the low-dimensional projected belief state space. The proposed algorithm has been evaluated on a synthesized navigation problem; solutions of quality comparable to those of the original POMDP were obtained at a much lower computational cost.
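The cluster-then-compress step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the belief samples, cluster count, and projection rank are all invented here, standard k-means stands in for whatever clustering the paper uses, and plain NMF stands in for the value-directed variant.

```python
# Hypothetical sketch: cluster sampled belief states, then apply an
# NMF-based projection to each cluster for dimension reduction.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

n_states, n_beliefs = 50, 200  # illustrative sizes, not from the paper
beliefs = rng.random((n_beliefs, n_states))
beliefs /= beliefs.sum(axis=1, keepdims=True)  # each row is a belief (sums to 1)

n_clusters, n_components = 4, 5  # assumed hyperparameters
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(beliefs)

compressed = {}
for c in range(n_clusters):
    cluster = beliefs[labels == c]
    # NMF keeps both factors non-negative, so reconstructed beliefs remain
    # valid probability vectors up to renormalization.
    nmf = NMF(n_components=n_components, init="nndsvda", max_iter=500, random_state=0)
    W = nmf.fit_transform(cluster)      # low-dimensional coordinates per belief
    compressed[c] = (W, nmf.components_)  # components_: cluster-specific basis

# Each belief is now represented by n_components numbers instead of n_states;
# point-based value iteration would then operate on these projected coordinates.
```

Keeping a separate NMF basis per cluster is what lets a small rank suffice: beliefs within one partition are similar, so they are well approximated by a few non-negative basis vectors.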