SHARP: Unlocking Interactive Hallucination via Stance Transfer in Role-Playing LLMs

  • Chuyi Kong
  • Ziyang Luo
  • Hongzhan Lin
  • Zhiyuan Fan
  • Yaxin Fan
  • Yuxi Sun
  • Jing Ma (corresponding author)

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

Abstract

The advanced role-playing capabilities of Large Language Models (LLMs) have enabled rich interactive scenarios, yet existing research in social interactions neglects hallucination while struggling with poor generalizability and implicit character fidelity judgments. To bridge this gap, motivated by human behaviour, we introduce a generalizable and explicit paradigm for uncovering interactive patterns of LLMs across diverse worldviews. Specifically, we first define interactive hallucination through stance transfer, then construct SHARP, a benchmark built by extracting relations from commonsense knowledge graphs and utilizing LLMs’ inherent hallucination properties to simulate multi-role interactions. Extensive experiments confirm our paradigm’s effectiveness and stability, examine the factors that influence these metrics, and challenge conventional hallucination mitigation solutions. More broadly, our work reveals a fundamental limitation in popular post-training methods for role-playing LLMs: the tendency to obscure knowledge beneath style, resulting in monotonous yet human-like behaviors—interactive hallucination.
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: ACL 2025
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Place of publication: Vienna
Publisher: Association for Computational Linguistics (ACL)
Pages: 839–866
Number of pages: 28
ISBN (electronic): 9798891762565
DOIs
Publication status: Published – 27 Jul 2025
Event: 63rd Annual Meeting of the Association for Computational Linguistics, ACL 2025 – Austria Center Vienna, Vienna, Austria
Duration: 27 Jul 2025 – 1 Aug 2025
https://2025.aclweb.org/ (Conference Website)
https://docs.google.com/spreadsheets/d/1O-n3HPvv8vY0L_kjyP5AtRTcWWjqLk2deCYtrMgCGw4/edit?usp=drive_link (Conference Program)
https://aclanthology.org/events/acl-2025/ (Conference Proceedings)

Publication series

Name: Findings of the Association for Computational Linguistics
Publisher: Association for Computational Linguistics

Conference

Conference: 63rd Annual Meeting of the Association for Computational Linguistics, ACL 2025
Country/Territory: Austria
City: Vienna
Period: 27/07/25 – 1/08/25
