Abstract
We present several interrelated technical and empirical contributions to the problem of emotion-based music recommendation and show how they can be applied in a possible usage scenario. The contributions are:

1. a new three-dimensional resonance-arousal-valence (RAV) model for representing the emotion expressed in music, together with methods for automatically classifying a piece of music in terms of this model by applying robust regression to musical/acoustic features;
2. methods for predicting a listener's emotional state, on the assumption that this state is determined entirely by the sequence of pieces of music recently listened to, using conditional random fields and taking into account the decay of emotion intensity over time; and
3. a method for selecting a ranked list of pieces of music that match a particular emotional state, using an iterative minimization method.

A series of experiments yields information about the validity of our operationalizations of these contributions. Throughout the article, we refer to an illustrative usage scenario in which all of these contributions can be exploited. In this scenario it is assumed that (1) the listener's emotional state is determined entirely by the music that he or she has been listening to and (2) the listener wants to hear additional music that matches his or her current emotional state. The contributions are intended to be useful in a variety of other scenarios as well.
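To make the usage scenario concrete, here is a minimal sketch of the recommendation loop it describes: each track carries a vector in the three-dimensional resonance-arousal-valence space, the listener's state is estimated from recently played tracks with older tracks decayed, and candidates are ranked by closeness to that state. This is an illustration only; the function names, the exponential-decay blend, and the Euclidean-distance ranking are assumptions for the sketch, standing in for the paper's conditional random fields and iterative minimization method.

```python
import math

# Hypothetical setup: every track is annotated with a (resonance, arousal,
# valence) triple, each dimension scaled to [0, 1].

def listener_state(history, half_life=2.0):
    """Estimate the listener's emotional state as a weighted blend of the RAV
    vectors of recently played tracks (newest first), with exponential decay
    so that older tracks contribute less. A simplified stand-in for the
    paper's CRF-based prediction."""
    weights = [0.5 ** (i / half_life) for i in range(len(history))]
    total = sum(weights)
    return tuple(
        sum(w * track[d] for w, track in zip(weights, history)) / total
        for d in range(3)
    )

def rank_candidates(state, candidates):
    """Rank candidate tracks by Euclidean distance of their RAV vector to the
    current state; the nearest track comes first."""
    return sorted(candidates, key=lambda rav: math.dist(state, rav))

# Usage: two recently heard tracks (newest first) and two candidates.
recent = [(0.8, 0.7, 0.6), (0.4, 0.3, 0.5)]
state = listener_state(recent)
playlist = rank_candidates(state, [(0.1, 0.2, 0.9), (0.7, 0.6, 0.6)])
```

In this toy run the candidate `(0.7, 0.6, 0.6)` ranks first, since it lies closest to the decayed blend of the two recent tracks.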
| Original language | English |
| --- | --- |
| Article number | 4 |
| Journal | ACM Transactions on Interactive Intelligent Systems |
| Volume | 5 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 1 Mar 2015 |
Scopus Subject Areas
- Human-Computer Interaction
- Artificial Intelligence
User-Defined Keywords
- Affective computing
- Conditional random fields
- Emotional state
- Music emotion recognition
- Music recommendation
- Musical emotion