Accelerating Multi-Exit BERT Inference via Curriculum Learning and Knowledge Distillation

Shengwei Gu, Xiangfeng Luo*, Xinzhi Wang*, Yike Guo

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

The real-time deployment of bidirectional encoder representations from transformers (BERT) is limited by slow inference caused by its large number of parameters. Recently, the multi-exit architecture has garnered scholarly attention for its ability to trade off performance against efficiency. However, its early exits suffer a considerable performance reduction compared to the final classifier. To accelerate inference with minimal loss of performance, we propose a novel training paradigm for multi-exit BERT that operates at two levels: training samples and intermediate features. At the training-sample level, we leverage curriculum learning to guide the training process and improve the generalization capacity of the model. At the intermediate-feature level, we employ layer-wise distillation, applied from shallow to deep layers, to mitigate the performance deterioration of early exits. Experimental results on benchmark datasets for textual entailment and answer selection demonstrate that the proposed training paradigm is effective and achieves state-of-the-art results. Furthermore, layer-wise distillation can completely replace vanilla distillation and delivers superior performance on textual entailment datasets.
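The two mechanisms the abstract describes can be illustrated with a minimal sketch: entropy-based early exiting (each internal classifier stops inference once it is confident enough) and a layer-wise distillation loss summed over shallow-to-deep intermediate features. All function names and the entropy threshold below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def entropy(p):
    """Shannon entropy (in nats) of a probability vector."""
    return float(-np.sum(p * np.log(p + 1e-12)))

def early_exit_predict(exit_logits, threshold=0.3):
    """Stop at the first exit whose predictive entropy falls below the
    threshold; fall back to the final classifier otherwise.
    Returns (predicted_class, exit_index)."""
    for i, logits in enumerate(exit_logits):
        p = softmax(np.asarray(logits, dtype=float))
        if entropy(p) < threshold or i == len(exit_logits) - 1:
            return int(np.argmax(p)), i

def layerwise_distill_loss(student_feats, teacher_feats):
    """Layer-wise distillation objective: one MSE term per matched
    (shallow-to-deep) pair of intermediate feature vectors."""
    return sum(float(np.mean((np.asarray(s, dtype=float)
                              - np.asarray(t, dtype=float)) ** 2))
               for s, t in zip(student_feats, teacher_feats))

# An uncertain shallow exit defers; a confident deeper exit stops inference.
pred, layer = early_exit_predict([[0.1, 0.2], [5.0, 0.0], [6.0, 0.0]])
```

In this sketch, the first exit's near-uniform distribution has high entropy, so the sample is passed onward until a confident exit halts computation; easy inputs therefore leave early while hard ones pay for the full depth, which is the performance/efficiency trade-off the multi-exit design targets.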

Original language: English
Pages (from-to): 395-413
Number of pages: 19
Journal: International Journal of Software Engineering and Knowledge Engineering
Volume: 33
Issue number: 3
Publication status: Published - Mar 2023

Scopus Subject Areas

  • Software
  • Computer Networks and Communications
  • Computer Graphics and Computer-Aided Design
  • Artificial Intelligence

User-Defined Keywords

  • curriculum learning
  • knowledge distillation
  • multi-exit architecture

