Interpreting and Improving Large Language Models in Arithmetic Calculation

Wei Zhang, Chaoqun Wan, Yonggang Zhang*, Yiu-ming Cheung, Xinmei Tian, Xu Shen*, Jieping Ye

*Corresponding authors for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

Abstract

Large language models (LLMs) have demonstrated remarkable potential across numerous applications and have shown an emergent ability to tackle complex reasoning tasks, such as mathematical computation. However, even for the simplest arithmetic calculations, the intrinsic mechanisms behind LLMs remain mysterious, making it challenging to ensure reliability. In this work, we delve into uncovering a specific mechanism by which LLMs execute calculations. Through comprehensive experiments, we find that LLMs frequently rely on a small fraction (<5%) of attention heads, which play a pivotal role in focusing on operands and operators during calculation. The information from these operands is then processed through multi-layer perceptrons (MLPs), progressively leading to the final solution. These pivotal heads/MLPs, though identified on a specific dataset, transfer across different datasets and even distinct tasks. This insight prompted us to investigate whether selectively fine-tuning these essential heads/MLPs can boost the LLMs' computational performance. We empirically find that such precise tuning yields notable improvements in mathematical prowess without compromising performance on non-mathematical tasks. Our work serves as a preliminary exploration of the arithmetic calculation abilities inherent in LLMs, laying a foundation for revealing the mechanisms behind more intricate mathematical tasks.
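The selective fine-tuning idea described above can be sketched as a parameter-selection step: given the names of all model parameters, keep only those belonging to the identified pivotal attention heads and MLP layers trainable, and freeze everything else. The sketch below is a minimal illustration under assumed parameter-naming conventions (e.g. `layers.<layer>.attn.head.<head>.*` and `layers.<layer>.mlp.*`); the actual layer/head indices and naming scheme in the paper's experiments may differ.

```python
def trainable_param_names(param_names, pivotal_heads, pivotal_mlps):
    """Select the subset of parameter names to fine-tune.

    param_names:   iterable of dotted parameter names (naming scheme assumed).
    pivotal_heads: set of (layer, head) pairs -- the small fraction of heads
                   that focus on operands/operators during calculation.
    pivotal_mlps:  set of layer indices whose MLPs process operand information.
    """
    selected = []
    for name in param_names:
        parts = name.split(".")
        # e.g. "layers.12.attn.head.3.q_proj" -> layer 12, head 3
        if "attn" in parts:
            layer, head = int(parts[1]), int(parts[4])
            if (layer, head) in pivotal_heads:
                selected.append(name)
        # e.g. "layers.14.mlp.fc_in" -> layer 14
        elif "mlp" in parts:
            if int(parts[1]) in pivotal_mlps:
                selected.append(name)
    return selected


names = [
    "layers.12.attn.head.3.q_proj",
    "layers.12.attn.head.0.q_proj",
    "layers.14.mlp.fc_in",
    "layers.2.mlp.fc_in",
]
# Only the pivotal head (12, 3) and the pivotal MLP at layer 14 stay trainable.
print(trainable_param_names(names, {(12, 3)}, {14}))
```

In a framework such as PyTorch, the returned names would then be used to set `requires_grad=True` on exactly those parameters before running the optimizer, leaving the rest of the model frozen.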
Original language: English
Title of host publication: Proceedings of the 41st International Conference on Machine Learning, ICML 2024
Editors: Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, Felix Berkenkamp
Publisher: ML Research Press
Pages: 59932-59950
Number of pages: 19
Publication status: Published - 21 Jul 2024
Event: 41st International Conference on Machine Learning, ICML 2024 - Vienna, Austria
Duration: 21 Jul 2024 - 27 Jul 2024
https://icml.cc/
https://openreview.net/group?id=ICML.cc/2024/Conference#tab-accept-oral
https://proceedings.mlr.press/v235/

Publication series

Name: Proceedings of the International Conference on Machine Learning
Name: Proceedings of Machine Learning Research
Volume: 235
ISSN (Print): 2640-3498

Conference

Conference: 41st International Conference on Machine Learning, ICML 2024
Country/Territory: Austria
City: Vienna
Period: 21/07/24 - 27/07/24

Scopus Subject Areas

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Statistics and Probability
