Strategies for the Analysis and Elimination of Hallucinations in Artificial Intelligence Generated Medical Knowledge

Fengxian Chen, Yan Li, Yaolong Chen, Zhaoxiang Bian, La Duo, Qingguo Zhou*, Lu Zhang*, ADVANCED Working Group

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

The application of artificial intelligence (AI) in healthcare has become increasingly widespread, showing significant potential in assisting with diagnosis and treatment. However, generative AI (GAI) models often produce "hallucinations" (plausible but factually incorrect or unsubstantiated outputs) that threaten clinical decision-making and patient safety. This article systematically analyzes the causes of hallucinations across the data, training, and inference dimensions and proposes multi-dimensional strategies to mitigate them. Our findings support three critical conclusions: (1) technical optimization through knowledge graphs and multi-stage training significantly reduces hallucinations; (2) clinical integration through expert feedback loops and multidisciplinary workflows enhances output reliability; and (3) robust evaluation systems that combine adversarial testing with real-world validation substantially improve factual accuracy in clinical settings. Together, these strategies underscore the importance of harmonizing technical advances with clinical governance to develop trustworthy, patient-centric AI systems.
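As a purely illustrative sketch (not taken from the article), the knowledge-graph grounding strategy mentioned in the abstract can be reduced to checking each claim a model generates against a curated set of verified medical facts and flagging anything unsupported as a potential hallucination. All names and triples below are hypothetical.

# Minimal sketch of knowledge-graph grounding: flag generated
# (subject, relation, object) claims that have no support in a
# curated fact set. Data and names are hypothetical examples.

KNOWLEDGE_GRAPH = {
    ("metformin", "treats", "type 2 diabetes"),
    ("amoxicillin", "treats", "bacterial pneumonia"),
}

def check_claims(generated_triples):
    """Split a model's claims into supported and unsupported lists."""
    supported, unsupported = [], []
    for triple in generated_triples:
        (supported if triple in KNOWLEDGE_GRAPH else unsupported).append(triple)
    return supported, unsupported

if __name__ == "__main__":
    model_output = [
        ("metformin", "treats", "type 2 diabetes"),      # grounded claim
        ("metformin", "treats", "bacterial pneumonia"),  # unsupported: flag for review
    ]
    ok, flagged = check_claims(model_output)
    print("Supported claims:", ok)
    print("Flagged as potential hallucinations:", flagged)

In practice, such a check could sit in front of the expert feedback loop the abstract describes: flagged claims would be routed to clinicians for review rather than shown directly to patients.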

Original language: English
Article number: e70075
Journal: Journal of Evidence-Based Medicine
Volume: 18
Issue number: 3
DOIs
Publication status: Published - 22 Sept 2025

User-Defined Keywords

  • assisted diagnosis and treatment
  • evaluation system
  • generative artificial intelligence
  • multi-stage training
