TY - JOUR
T1 - The way you assess matters
T2 - User interaction design of survey chatbots for mental health
AU - Jin, Yucheng
AU - Chen, Li
AU - Zhao, Xianglin
AU - Cai, Wanling
N1 - This work was supported by Hong Kong Research Grants Council (RGC) GRF project (RGC/HKBU12201620), Hong Kong Baptist University IG-FNRA project (RC-FNRA-IG/21-22/SCI/01), and Hong Kong Baptist University Start-up Grant (RC-STARTUP/21-22/23).
Publisher Copyright:
© 2024 Elsevier Ltd
PY - 2024/9
Y1 - 2024/9
AB - The global pandemic has pushed human society into a mental health crisis, prompting the development of various chatbots to supplement the limited mental health workforce. Several organizations have employed mental health survey chatbots to assess the public's mental status. These survey chatbots typically ask closed-ended questions (Closed-EQs) to assess specific psychological issues such as anxiety, depression, and loneliness, followed by open-ended questions (Open-EQs) for deeper insights. While Open-EQs are naturally presented conversationally in a survey chatbot, Closed-EQs can be delivered either as embedded forms or within the conversation, and the length of the questionnaire varies with the psychological assessment. This study investigates how the interaction style of Closed-EQs and the questionnaire length affect user perceptions of survey credibility, enjoyment, and self-awareness, as well as the quality and self-disclosure of users' responses to subsequent Open-EQs in a survey chatbot. We conducted a 2 (interaction style: form-based vs. conversation-based) × 3 (questionnaire length: short vs. medium vs. long) between-subjects study (N=213) with a loneliness survey chatbot. The results indicate that form-based interaction significantly enhances the perceived credibility of the assessment, thereby improving response quality and self-disclosure in subsequent Open-EQs and fostering self-awareness. We discuss the implications of our findings for the interaction design of psychological assessments in survey chatbots for mental health.
KW - Chatbots
KW - Loneliness
KW - Mental health
KW - Open-ended questions
KW - Psychological assessment
KW - Self-disclosure
KW - Survey design
UR - http://www.scopus.com/inward/record.url?scp=85194772637&partnerID=8YFLogxK
U2 - 10.1016/j.ijhcs.2024.103290
DO - 10.1016/j.ijhcs.2024.103290
M3 - Journal article
AN - SCOPUS:85194772637
SN - 1071-5819
VL - 189
JO - International Journal of Human-Computer Studies
JF - International Journal of Human-Computer Studies
M1 - 103290
ER -