Abstract
The real-time deployment of bidirectional encoder representations from transformers (BERT) is limited by its slow inference, caused by its large number of parameters. Recently, the multi-exit architecture has garnered scholarly attention for its ability to trade off performance against efficiency. However, its early exits suffer a considerable performance reduction compared to the final classifier. To accelerate inference with minimal loss of performance, we propose a novel training paradigm for multi-exit BERT that operates at two levels: training samples and intermediate features. At the training-sample level, we leverage curriculum learning to guide the training process and improve the generalization capacity of the model. At the intermediate-feature level, we employ layer-wise distillation from shallow to deep layers to resolve the performance deterioration of the early exits. Experimental results on benchmark datasets for textual entailment and answer selection demonstrate that the proposed training paradigm is effective and achieves state-of-the-art results. Furthermore, layer-wise distillation can completely replace vanilla distillation and delivers superior performance on the textual entailment datasets.
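To make the layer-wise distillation idea concrete, the following is a minimal sketch (not the paper's implementation) of one plausible form of the loss: each early-exit classifier is distilled from the logits of the next deeper exit, so knowledge flows layer by layer toward the shallow exits. All function names, the temperature value, and the toy logits are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of logits (numerically stable).
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q) between two discrete probability distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def layerwise_distillation_loss(exit_logits, temperature=2.0):
    """Hypothetical layer-wise distillation objective for a multi-exit model.

    `exit_logits[k]` holds the logits of the exit classifier at layer k,
    ordered shallow to deep; the last entry is the final classifier.
    Each shallow exit (student) is distilled from the next deeper exit
    (teacher) with temperature-softened targets.
    """
    loss = 0.0
    for shallow, deep in zip(exit_logits, exit_logits[1:]):
        student = softmax(shallow, temperature)
        teacher = softmax(deep, temperature)
        # Scale by T^2, as is conventional in knowledge distillation.
        loss += (temperature ** 2) * kl_divergence(teacher, student)
    return loss

# Toy logits for three exits of a 3-class task (shallow -> deep).
logits = [[0.2, 0.1, 0.0], [1.0, 0.2, -0.3], [2.0, 0.1, -0.5]]
print(round(layerwise_distillation_loss(logits), 4))
```

The loss is zero only when every exit already matches its deeper neighbor, so minimizing it pushes the shallow classifiers toward the behavior of the final one, which is the failure mode the abstract targets.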
| Field | Value |
| --- | --- |
| Original language | English |
| Pages (from-to) | 395-413 |
| Number of pages | 19 |
| Journal | International Journal of Software Engineering and Knowledge Engineering |
| Volume | 33 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - Mar 2023 |
Scopus Subject Areas
- Software
- Computer Networks and Communications
- Computer Graphics and Computer-Aided Design
- Artificial Intelligence
User-Defined Keywords
- curriculum learning
- knowledge distillation
- multi-exit architecture