Comparative analysis of the performance of the large language models DeepSeek-V3, DeepSeek-R1, OpenAI o3-mini and OpenAI o3-mini high in urology

Zijun Yan, Ke Qin Fan, Qi Zhang, Xinyan Wu, Yuquan Chen, Xinyu Wu, Ting Yu, Ning Su, Yan Zou, Hao Chi*, Liangjing Xia*, Qiang Cao*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

3 Citations (Scopus)

Abstract

Objectives: We sought to compare how DeepSeek‑V3, DeepSeek‑R1, OpenAI o3‑mini, and OpenAI o3‑mini high handle urological questions, especially in areas such as benign prostatic enlargement, urinary stones, infections, and guideline updates. The intent was to identify how these text‑generation models might aid clinical practice without overlooking potential gaps in accuracy.

Methods: A set of 34 routinely asked questions plus 25 queries based on newly revised guidelines was assembled. Six board‑certified urologists independently scored each system’s replies on a five‑point scale. Questions scoring below a set threshold were reintroduced to the same system, accompanied by critiques, to gauge self‑correction. Statistical analyses focused on total scores, the percentage of excellent ratings, and improvements after iterative prompting.

Results: Across all 59 queries (34 general plus 25 guideline-based), OpenAI o3-mini high recorded the highest median total score (22 [20–24]), significantly outperforming DeepSeek-R1, DeepSeek-V3 and OpenAI o3-mini (all pair-wise p < 0.01). DeepSeek-R1’s accuracy approached that of o3-mini high in patient-counseling items, where their excellent-answer rates were 49% and 57%, respectively. DeepSeek‑V3 achieved solid baseline correctness but made fewer successful corrections on subsequent attempts. Although OpenAI o3‑mini initially produced more concise responses, it showed a surprisingly strong capacity to revise earlier errors.

Conclusion: OpenAI o3‑mini high, followed by DeepSeek‑R1, provided the most reliable answers for modern urological concerns, whereas DeepSeek‑V3 exhibited limited adaptability during re‑evaluation. Despite often briefer replies, OpenAI o3‑mini outperformed DeepSeek‑V3 in self‑correction. These findings indicate that, when reviewed by a clinician, o3-mini high can serve as a rapid second-opinion tool for outpatient counseling and protocol updates, whereas DeepSeek-R1 may provide a cost-effective alternative in resource-limited settings.

Original language: English
Article number: 416
Number of pages: 10
Journal: World Journal of Urology
Volume: 43
Issue number: 1
Early online date: 7 Jul 2025
DOIs
Publication status: E-pub ahead of print - 7 Jul 2025

User-Defined Keywords

  • Clinical guidelines
  • Large language models
  • Performance evaluation
  • Self‑correction capacity
  • Urology
