A study of danmu: Detecting emotional coherence in music videos through synchronized EEG analysis

  • Yuqing Liu
  • Bu Zhong
  • Jiaxuan Wang
  • Yao Song*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

A novel approach is essential to assess viewers' emotional responses to online music videos, as the emotional coherence between perceived and induced reactions has not been thoroughly explored. This research investigates the relationship between perceived and induced emotional responses to music videos through a multimodal framework that integrates electroencephalography (EEG) analysis with natural language processing to examine danmu—user-generated scrolling marquee comments synchronized to specific playback times. Employing a time-synchronized methodology, our deep learning model predicted continuous emotional scores from EEG signals based on danmu sentiment. The findings revealed a similarity of over 80% between the two forms of induced emotional data—EEG-derived emotion curves and danmu sentiment curves—across five music videos. We explored periods of divergence by contrasting peak emotional responses during the climaxes of the music, highlighting the significant influence of the multimodal sentiment tone on the alignment between neurophysiological and behavioral emotional trajectories. This study uncovers the coherence between emotion curves derived from EEG and danmu data—a methodology that notably diverges from the traditional reliance on self-reports or surveys. The partial consistency observed between perceived and induced emotions, along with the effects of emotional valence and arousal on brain-behavior synchronization, underscores the shared nature of emotions elicited by music videos. Contributing factors include the diversity of emotional experiences and expressions among individuals, as well as the intrinsic rhythmicity of music videos, both of which enhance emotional elicitation.
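The per-video comparison described above—aligning an EEG-derived emotion curve with a time-synchronized danmu sentiment curve and scoring their similarity—could be sketched along the following lines. This is an illustrative assumption, not the authors' actual pipeline: the function names, the resampling to a common timeline, and the choice of similarity metric (1 minus the mean absolute difference of min-max-normalized curves, so identical shapes score 1.0) are all ours; the paper's exact metric is not specified in this abstract.

```python
import numpy as np

def curve_similarity(eeg_curve, danmu_curve, n_points=100):
    """Score the alignment of two continuous emotion curves.

    Both curves are resampled onto a shared [0, 1] timeline and
    min-max normalized, so only their shapes are compared, not
    their absolute scales.  Returns a value in [0, 1], where 1.0
    means the normalized curves coincide exactly.  (Hypothetical
    metric for illustration only.)
    """
    t = np.linspace(0.0, 1.0, n_points)

    def prep(curve):
        y = np.asarray(curve, dtype=float)
        # Resample to the common timeline by linear interpolation.
        y = np.interp(t, np.linspace(0.0, 1.0, len(y)), y)
        span = y.max() - y.min()
        # Min-max normalize; a flat curve maps to all zeros.
        return (y - y.min()) / span if span > 0 else np.zeros_like(y)

    a, b = prep(eeg_curve), prep(danmu_curve)
    return 1.0 - float(np.mean(np.abs(a - b)))

# Curves with the same shape but different scales score 1.0:
curve_similarity([0, 1, 2, 3], [0, 10, 20, 30])  # → 1.0
```

Under a shape-based metric like this, the paper's reported "over 80% similarity" would correspond to a score above 0.8 for each of the five music videos.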
Original language: English
Article number: 108803
Number of pages: 14
Journal: Computers in Human Behavior
Volume: 174
Early online date: 22 Sept 2025
DOIs
Publication status: Published - Jan 2026

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 9 - Industry, Innovation, and Infrastructure

User-Defined Keywords

  • EEG
  • Deep learning
  • Emotion coherence
  • Music videos
  • Multimodal analysis
  • User comments
