
The paradox of agency in psychotherapy: How people with mental distress experience support from generative AI chatbots and human therapists

  • Xiaolu Dai
  • Ling Li Leng*
  • Yiyan Liu
  • Yu-Te Huang
  • Daniel Fu Keung Wong

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

Background: The rapid advancement of Large Language Models has sparked heated debate over whether Generative Artificial Intelligence (AI) chatbots can serve as “digital therapists” capable of providing therapeutic support. While much of this discussion focuses on AI’s lack of agency, understood as the absence of mental states, consciousness, autonomy, and intentionality, empirical research on users’ real-world experiences remains limited.

Objective: This study explores how individuals with mental distress experience support from both generative AI chatbots and human psychotherapy in natural and unguided contexts, with a focus on how perceptions of agency shape therapeutic experiences. By drawing on participants’ dual exposure, the study seeks to contribute to the ongoing debate about “AI therapists” by clarifying the role of agency in therapeutic change.

Methods: Sixteen adults who had sought mental health support from both human therapists and ChatGPT participated in semi-structured interviews, during which they shared and compared their experiences with each type of interaction. Transcripts were analyzed using reflexive thematic analysis.

Results: Three themes captured participants’ perceptions of ChatGPT relative to human therapists: (1) encouraging open and authentic self-disclosure but limiting deep exploration; (2) the myth of relationship: caring, acceptance, and understanding; (3) fostering therapeutic change: the promise and pitfalls of data-driven solutions. We propose a conceptual model that illustrates how differences in agency status between AI chatbots and human therapists shape the distinct ways they support individuals with mental distress, with agency functioning as both a strength and a limitation for therapeutic engagement.

Conclusion: Given that agency functions as a double-edged sword in therapeutic interactions, future mental health services should consider integrated care models that combine the non-agential advantages of AI chatbots with the agentic qualities of human therapists. Rather than anthropomorphizing AI chatbots, their non-agential features—such as responsiveness, absence of intentions, objectivity, and disembodiment—should be strategically leveraged to complement specific functions in human-delivered psychotherapy. At the same time, practitioners should maximize the benefits of their agentic qualities while remaining cautious of the risks. The findings should be interpreted with caution as the sample consisted mainly of young, well-educated Chinese participants from a collectivist cultural context, which may limit transferability to other populations, particularly those from individualistic cultures with different mental health literacy levels, stigma patterns, and therapeutic norms.
Original language: English
Article number: 49
Number of pages: 16
Journal: BMC Psychiatry
Volume: 26
Issue number: 1
Early online date: 12 Dec 2025
DOIs
Publication status: E-pub ahead of print - 12 Dec 2025

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 3 - Good Health and Well-being

User-Defined Keywords

  • generative artificial intelligence
  • psychotherapy
  • mental distress
  • anthropomorphizing
  • agency
