On the Language of Thoughts in Large Language Models

Chenxi Liu, Yongqiang Chen, Tongliang Liu, James Cheng, Bo Han*, Kun Zhang

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

Abstract

System 2 reasoning is one of the defining characteristics of intelligence, requiring slow and logical thinking. Humans conduct System 2 reasoning via the language of thoughts, which organizes the reasoning process as a causal sequence of mental language, or thoughts. Recently, it has been observed that System 2 reasoning can be elicited from Large Language Models (LLMs) pre-trained on large-scale natural language. However, in this work, we show that there is a significant gap between the modeling of language and the modeling of thoughts. Because language is primarily a tool for humans to share knowledge and thinking, modeling human language can easily cause LLMs to absorb language biases that deviate from the chain of thoughts in the mind. Furthermore, we show that these biases mislead the elicitation of "thoughts" in LLMs into focusing on only a biased part of the premise. To this end, we propose a new prompting technique, termed Language-of-Thoughts (LoT), to demonstrate and alleviate this gap. Instead of directly eliciting a chain of thoughts from partial information, LoT instructs LLMs to adjust the order and the tokens used for expressing all the relevant information. We show that this simple strategy significantly reduces language modeling biases in LLMs and improves their performance across a variety of reasoning tasks.
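The abstract describes the LoT prompt only at a high level. As a rough illustration of the idea, a LoT-style instruction could be attached to a query before sending it to a model, in contrast to a plain chain-of-thought instruction. The wording of LOT_INSTRUCTION below and the build_prompt helper are assumptions made for illustration, not the paper's actual prompt; a minimal Python sketch:

    # Hypothetical sketch of a Language-of-Thoughts (LoT) style prompt wrapper.
    # The paper defines the actual LoT instruction; the wording below is an
    # illustrative assumption, as is the build_prompt helper.

    COT_INSTRUCTION = "Let's think step by step."

    # Assumed paraphrase of the LoT idea: have the model first restate all
    # relevant information, adjusting its order and wording, before reasoning,
    # so that reasoning does not start from a biased subset of the premise.
    LOT_INSTRUCTION = (
        "Before answering, restate all information relevant to the question, "
        "reordering and rephrasing it as needed so that nothing is omitted. "
        "Then reason step by step from the restated information."
    )

    def build_prompt(question: str, instruction: str) -> str:
        """Append a reasoning instruction to a raw question."""
        return f"{question}\n\n{instruction}"

    if __name__ == "__main__":
        question = (
            "If it rains, the match is cancelled. The match was not cancelled. "
            "Did it rain?"
        )
        print(build_prompt(question, LOT_INSTRUCTION))

Under this sketch, swapping LOT_INSTRUCTION for COT_INSTRUCTION reproduces the standard chain-of-thought baseline the paper compares against.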
Original language: English
Title of host publication: ICLR 2025 Workshop on Reasoning and Planning for Large Language Models
Publisher: International Conference on Learning Representations
Pages: 1-28
Number of pages: 28
Publication status: Published - 27 Apr 2025
Event: ICLR 2025 Workshop on Reasoning and Planning for Large Language Models, Singapore
Duration: 27 Apr 2025 → 27 Apr 2025
https://openreview.net/group?id=ICLR.cc/2025/Workshop/LLM_Reason_and_Plan#tab-accept
