While extractive summarization is an important approach to the NLP task of text summarization, redundancy in the generated extractive summaries remains a persistent problem. Previous works usually fix the length of the output summary to a constant, which may be appropriate for some documents but too long for others. At the same time, although extractive summarization offers high readability because it directly selects sentences from the document, the unimportant parts within those sentences are selected as well. Both scenarios introduce redundancy into extractive summaries. To address this problem, we propose LenC, a length control framework for extractive summarization built as a two-stage pipeline. We first use a pretrained BERT-based summarizer to select units smaller than sentences (i.e., elementary discourse units, EDUs), discarding the insignificant parts of each sentence. Then a portable length controller prunes the output summary to an appropriate length; it can be attached to any extractive summarizer. Experiments show that the proposed model outperforms state-of-the-art baselines and successfully reduces redundancy in extractive summaries.