TY - GEN
T1 - Music retrieval in joint emotion space using audio features and emotional tags
AU - Deng, James J.
AU - Leung, C. H. C.
N1 - Copyright:
Copyright 2014 Elsevier B.V., All rights reserved.
PY - 2013
Y1 - 2013
N2 - Emotion-based music retrieval provides a natural and humanized way to help people experience music. In this paper, we utilize the three-dimensional Resonance-Arousal-Valence emotion model to represent the emotions evoked by music, and establish the relationship between acoustic features and their emotional impact based on this model. We also consider emotional tag features for music, and represent acoustic features and emotional tag features jointly in a low-dimensional embedding space for music emotion; this joint emotion space is optimized by minimizing the joint loss of acoustic features and emotional tag features through dimensionality reduction. Finally, we construct a unified framework for music retrieval in the joint emotion space by means of query-by-music, query-by-tag, or both, and employ our proposed ranking algorithm to return an optimized ranked list with the highest emotional similarity. Experimental results show that the joint emotion space and unified framework produce satisfactory results for emotion-based music retrieval.
KW - Audio features
KW - Dimensionality reduction
KW - Emotional tag
KW - Music emotion
KW - Music retrieval
KW - Ranking
UR - http://www.scopus.com/inward/record.url?scp=84892846199&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-35725-1_48
DO - 10.1007/978-3-642-35725-1_48
M3 - Conference proceeding
AN - SCOPUS:84892846199
SN - 9783642357244
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 524
EP - 534
BT - Advances in Multimedia Modeling - 19th International Conference, MMM 2013, Proceedings
T2 - 19th International Conference on Advances in Multimedia Modeling, MMM 2013
Y2 - 7 January 2013 through 9 January 2013
ER -