TY - JOUR
T1 - Processing Written Language in Video Games
T2 - An Eye-Tracking Study on Subtitled Instructions
AU - Lan, Haiting
AU - Liao, Sixin
AU - Kruger, Jan-Louis
AU - Richardson, Michael J.
N1 - Publisher copyright:
© 2025 by the authors. Licensee MDPI, Basel, Switzerland.
PY - 2025/10
Y1 - 2025/10
N2 - Written language is a common component of the multimodal representations that help players construct meaning and guide actions in video games. However, how players process texts in video games remains underexplored. To address this, the current exploratory eye-tracking study examined how players processed subtitled instructions and the resultant game performance. Sixty-four participants were recruited to play a video game set in a foggy desert, where they were guided by subtitled instructions to locate, corral, and contain robot agents (targets). These instructions were manipulated into three modalities: visual-only (with subtitled instructions only), auditory-only (with spoken instructions only), and visual–auditory (with both subtitled and spoken instructions). The instructions were addressed either to participants (as relevant subtitles) or to their AI teammates (as irrelevant subtitles). Subtitle-level eye-movement results showed that participants primarily focused on the relevant subtitles, as evidenced by more fixations and higher dwell time percentages. Moreover, the word-level results indicate that participants showed lower skipping rates, more fixations, and higher dwell time percentages on words loaded with immediate action-related information, especially in the absence of audio. No significant differences were found in player performance across conditions. The findings of this study contribute to a better understanding of subtitle processing in video games and, more broadly, of text processing in multimedia contexts. Implications for future research on digital literacy and computer-mediated text processing are discussed.
KW - Attentional allocation
KW - Eye movements
KW - Subtitle processing
KW - Text processing
KW - Video games
UR - https://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=hkbuirimsintegration2023&SrcAuth=WosAPI&KeyUT=WOS:001601690700001&DestLinkType=FullRecord&DestApp=WOS_CPL
U2 - 10.3390/jemr18050044
DO - 10.3390/jemr18050044
M3 - Journal article
C2 - 40989220
SN - 1995-8692
VL - 18
JO - Journal of Eye Movement Research
JF - Journal of Eye Movement Research
IS - 5
M1 - 44
ER -