A Case Study on Measuring AI Assistant Competence in Narrative Interviews

Chitat Chan*, Yunmeng Zhao

*Corresponding author for this work

    Research output: Contribution to journal › Journal article

    Abstract

    Background: Researchers are leading the development of AI designed to conduct interviews. These developments imply that AI's role is expanding from mere data analysis to becoming a tool for social researchers to interact with and comprehend their subjects. Yet, academic discussions have not addressed the potential impacts of AI on narrative interviews. In narrative interviews, data collection is a collaborative effort: the interviewer also helps explore and shape the interviewee's story. A competent narrative interviewer must display critical skills, such as maintaining a specific questioning order, showing empathy, and helping participants delve into and construct their own stories.

    Methods: This case study configured an OpenAI Assistant on WhatsApp to conduct narrative interviews with a human participant. The participant shared the same story in two distinct conversations: first, following a standard cycle and answering questions earnestly, and second, deliberately sidetracking the assistant from the main interview path as instructed by the researcher, to test how well the metrics could reflect the deliberate differences between the two conversations. The AI's performance was evaluated through conversation analysis and specific narrative indicators, focusing on its adherence to the interview structure, empathy, narrative coherence, complexity, and support for human participant agency. The study sought to answer these questions: 1) How can the proposed metrics help us, as social researchers without a technical background, understand the quality of the AI-driven interviews in this study? 2) What do these findings contribute to our discussion on using AI in narrative interviews for social research? 3) What further research could these results inspire?

    Results: The findings show to what extent the AI maintained structure and adaptability in conversations, illustrating its potential to support personalized, flexible narrative interviews based on specific needs.

    Conclusions: These results suggest that social researchers without a technical background can use observation-based metrics to gauge how well an AI assistant conducts narrative interviews. They also prompt reflection on AI's role in narrative interviews and spark further research.
    Original language: English
    Article number: 601
    Number of pages: 17
    Journal: F1000Research
    Volume: 13
    DOIs
    Publication status: Published - 7 Jun 2024

    Scopus Subject Areas

    • Social Sciences(all)

    User-Defined Keywords

    • Artificial Intelligence
    • Narrative Inquiry
    • Qualitative Research
    • WhatsApp Interviews
    • Conversational AI
    • Prompt Engineering
    • Digital Research Methodologies

