Abstract
A novel approach is essential to assess viewers' emotional responses to online music videos, as the emotional coherence between perceived and induced reactions has not been thoroughly explored. This research investigates the relationship between perceived and induced emotional responses to music videos through a unique multimodal framework that integrates electroencephalography (EEG) analysis with natural language processing to examine danmu, user-generated scrolling marquee comments synchronized to specific playback times. Employing a time-synchronized methodology, we used a deep learning model to predict continuous emotional scores from EEG signals based on danmu sentiment. The findings revealed over 80% similarity between the two forms of induced emotional data, EEG-derived emotion curves and danmu sentiment curves, across five music videos. We explored periods of divergence by contrasting peak emotional responses during the climaxes of the music, highlighting the significant influence of multimodal sentiment tone on the alignment between neurophysiological and behavioral emotional trajectories. This study uncovers the coherence between emotion curves derived from EEG and danmu data, a methodology that diverges notably from the traditional reliance on self-reports or surveys. The partial consistency observed between perceived and induced emotions, along with the effects of emotional valence and arousal on brain-behavior synchronization, underscores the shared nature of emotions elicited by music videos. Contributing factors include the diversity of emotional experiences and expressions among individuals, as well as the intrinsic rhythmicity within music videos, both of which enhance emotional elicitation.
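The abstract does not specify how the similarity between the two emotion curves is computed. As a minimal sketch of the kind of time-synchronized comparison it describes, the following Python example aligns an irregularly timed danmu sentiment stream with a fixed-rate EEG-derived emotion curve and scores their agreement with a correlation-based metric. The function names, the linear interpolation scheme, the synthetic data, and the Pearson-correlation similarity are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def resample_to(times_src, values_src, times_target):
    """Linearly interpolate an irregular sentiment curve onto the EEG timeline."""
    return np.interp(times_target, times_src, values_src)

def curve_similarity(eeg_curve, danmu_curve):
    """Pearson correlation mapped to [0, 1] as a similarity score.
    (Illustrative choice; the paper does not state its metric here.)"""
    r = np.corrcoef(eeg_curve, danmu_curve)[0, 1]
    return (r + 1) / 2

# Hypothetical example: a 3-minute video with 1 Hz EEG-derived emotion scores.
eeg_t = np.arange(0.0, 180.0, 1.0)          # seconds
eeg_curve = np.sin(2 * np.pi * eeg_t / 60)  # placeholder emotion trajectory

# Danmu arrive at irregular playback times, each with a sentiment in [-1, 1].
danmu_t = np.sort(np.random.uniform(0.0, 180.0, 300))
danmu_s = np.sin(2 * np.pi * danmu_t / 60) + 0.2 * np.random.randn(300)

danmu_curve = resample_to(danmu_t, danmu_s, eeg_t)
print(f"similarity: {curve_similarity(eeg_curve, danmu_curve):.2f}")
```

In practice, the danmu stream would typically be smoothed (for example with a sliding window over comment sentiments) before comparison, and a lag-tolerant measure such as dynamic time warping could replace plain correlation; both choices are left out of this sketch for brevity.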
| Field | Value |
|---|---|
| Original language | English |
| Article number | 108803 |
| Number of pages | 14 |
| Journal | Computers in Human Behavior |
| Volume | 174 |
| Early online date | 22 Sept 2025 |
| DOIs | |
| Publication status | Published - Jan 2026 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs):
- SDG 9: Industry, Innovation, and Infrastructure
User-Defined Keywords
- EEG
- Deep learning
- Emotion coherence
- Music videos
- Multimodal analysis
- User comments