Abstract
Generative artificial intelligence such as ChatGPT paints a promising picture of enhancing students’ learning in higher education. However, little is known about whether university students actually adopt ChatGPT for learning enhancement or misuse it for disburdenment. This study employed a qualitative descriptive research approach to investigate whether such misuse occurs, through an examination of the adoption journeys of 20 university students in Hong Kong. Our findings show that only a few interviewees did not use ChatGPT for disburdenment, while the majority allowed the cognitive artifact to liberate them from the burdens of learning. Moreover, reports from misusers show more pronounced automation bias and perception of AI agency than those from non-misusers. Among the misusers who are aware of the problem of AI hallucination, some exhibit a paradoxical adoption behavior: they continue to rely on ChatGPT despite noticing that cross-checking its output may induce new burdens. Such a decision calls into question the reasons for their continued reliance on ChatGPT. Our findings suggest that this may be associated with the students’ perception of ChatGPT as agentic, a factor known to positively shape users’ affect-based trust in AI. This study illuminates how actual cases of misuse and problematic adoption behavior unfold, offering useful directions for educating students in the sensible, responsible, and effective use of AI in the higher education context.
| Original language | English |
|---|---|
| Article number | 11 |
| Number of pages | 20 |
| Journal | AI and Ethics |
| Volume | 6 |
| Issue number | 1 |
| Early online date | 1 Dec 2025 |
| DOIs | |
| Publication status | E-pub ahead of print - 1 Dec 2025 |
User-Defined Keywords
- ChatGPT
- Automation bias
- Trust in AI
- Misuse
- AI agency
- AI literacy