Pre-service Language Teachers’ Task-specific Large Language Model Prompting Practices

Benjamin Luke Moorhouse*, Tsz Ying Ho, Chenze Wu, Yuwei Wan

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

1 Citation (Scopus)

Abstract

Since the emergence of ChatGPT, a type of large language model (LLM), there has been interest in how these tools can support language teachers’ professional practices and assist them with professional tasks, e.g., lesson planning. The current study explored how pre-service language teachers interacted with LLMs to improve lesson plans, and the knowledge and skills involved in these task-specific prompting practices. Data were collected from 25 pre-service teachers enrolled in a Master of Education in English Language Teaching programme at a Hong Kong university. Analysis was performed on their submitted assignments, which included a revised lesson plan, a typed pedagogical rationale for the modifications, the logs of their interactions with the LLMs, and reflections on the use of LLMs. The findings revealed a three-stage decision-making process among the teachers when interacting with LLMs to improve lesson plans: task identification, iterative prompting, and task implementation. Our findings also suggested that to engage effectively with LLMs, teachers need pedagogical content knowledge, knowledge of LLMs, and prompting skills. This study has implications for teacher professional development in enhancing prompting practices and the effective use of LLMs for accomplishing professional tasks.
Original language: English
Pages (from-to): 1-22
Number of pages: 22
Journal: RELC Journal
DOIs
Publication status: E-pub ahead of print - 16 Feb 2025

User-Defined Keywords

  • Generative AI
  • LLMs
  • language teachers
  • lesson planning
  • prompt literacy
