TY - GEN
T1 - Eliciting Causal Abilities in Large Language Models for Reasoning Tasks
AU - Wang, Yajing
AU - Luo, Zongwei
AU - Wang, Jingzhe
AU - Zhou, Zhanke
AU - Chen, Yongqiang
AU - Han, Bo
N1 - We thank the anonymous reviewers for their insightful comments. This work is supported by the Beijing Normal University Zhuhai Startup Fund - Research on Artificial Intelligence Computing Models and Applications; the Beijing Normal University Zhuhai Teaching Reform Project - Online and Offline Course on Artificial Intelligence and Ethics; the Ministry of Education Supply and Demand Matching Employment-Education Integration Project with Hikvision and Beijing Normal University at Zhuhai, and with Hikvision and BNU-HKBU United International College; and the Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science. ZKZ and BH were supported by Guangdong Basic and Applied Basic Research Foundation Nos. 2022A1515011652 and 2024A1515012399, NSFC General Program No. 62376235, HKBU Faculty Niche Research Areas No. RC-FNRA-IG/22-23/SCI/04, and the HKBU CSD Departmental Incentive Scheme.
Publisher Copyright:
Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2025/4/11
Y1 - 2025/4/11
N2 - Prompt optimization automatically refines prompting expressions, unlocking the full potential of LLMs in downstream tasks. However, current prompt optimization methods are costly to train and lack sufficient interpretability. This paper proposes enhancing LLMs’ reasoning performance by eliciting their causal inference ability from prompting instructions to correct answers. Specifically, we introduce the Self-Causal Instruction Enhancement (SCIE) method, which enables LLMs to generate high-quality, low-quantity observational data, then estimates the causal effect based on these data, and ultimately generates instructions with the optimized causal effect. In SCIE, the instructions are treated as the treatment, and textual features are used to process natural language, establishing causal relationships through treatments between instructions and downstream tasks. Additionally, we propose applying Object-Relational (OR) principles, where the uncovered causal relationships are treated as the inheritable class across task objects, ensuring low-cost reusability. Extensive experiments demonstrate that our method effectively generates instructions that enhance reasoning performance with reduced training cost of prompts, leveraging interpretable textual features to provide actionable insights.
AB - Prompt optimization automatically refines prompting expressions, unlocking the full potential of LLMs in downstream tasks. However, current prompt optimization methods are costly to train and lack sufficient interpretability. This paper proposes enhancing LLMs’ reasoning performance by eliciting their causal inference ability from prompting instructions to correct answers. Specifically, we introduce the Self-Causal Instruction Enhancement (SCIE) method, which enables LLMs to generate high-quality, low-quantity observational data, then estimates the causal effect based on these data, and ultimately generates instructions with the optimized causal effect. In SCIE, the instructions are treated as the treatment, and textual features are used to process natural language, establishing causal relationships through treatments between instructions and downstream tasks. Additionally, we propose applying Object-Relational (OR) principles, where the uncovered causal relationships are treated as the inheritable class across task objects, ensuring low-cost reusability. Extensive experiments demonstrate that our method effectively generates instructions that enhance reasoning performance with reduced training cost of prompts, leveraging interpretable textual features to provide actionable insights.
UR - http://www.scopus.com/inward/record.url?scp=105004004376&partnerID=8YFLogxK
U2 - 10.1609/aaai.v39i14.33669
DO - 10.1609/aaai.v39i14.33669
M3 - Conference proceeding
AN - SCOPUS:105004004376
T3 - Proceedings of the AAAI Conference on Artificial Intelligence
SP - 15212
EP - 15220
BT - Proceedings of the 39th AAAI Conference on Artificial Intelligence, AAAI 2025
A2 - Walsh, Toby
A2 - Shah, Julie
A2 - Kolter, Zico
PB - AAAI Press
T2 - 39th AAAI Conference on Artificial Intelligence, AAAI 2025
Y2 - 25 February 2025 through 4 March 2025
ER -