Linguistic analysis for emotion recognition: a case of Chinese speakers

Carlo Schirru, Shahla Simin, Paolo Mengoni*, Alfredo Milani

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

1 Citation (Scopus)

Abstract

This study investigates the acoustic features associated with emotions expressed when pronouncing English and Mandarin Chinese words, and presents several emotion recognition experiments. To this end, sound recordings from 91 speakers were analyzed. In the test experiment, a linguistic data set was used to examine which acoustic features are most important for representing emotion during signal acquisition, segmentation, construction, and encoding. Words, syllables, phonemes (vowels and consonants), stress, and frequency tones were taken into consideration. The emotions considered in the experiment were neutral, happy, and sad. Differences in time duration, F0 frequency, and intensity level (dB) were used in conjunction with unsupervised and supervised machine learning approaches for emotion recognition.
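
As an illustration of the kind of pipeline the abstract describes, below is a minimal sketch of extracting duration, F0, and intensity (dB) features and feeding them to a supervised classifier and an unsupervised clustering step. It assumes librosa and scikit-learn; the SVM/k-means choice, feature aggregation, and the `wav_paths`/`labels` inputs are illustrative assumptions, not the authors' exact method or data set.

```python
# Sketch: duration / F0 / intensity features + supervised (SVM) and
# unsupervised (k-means) emotion recognition over neutral / happy / sad.
# Assumes librosa and scikit-learn; inputs are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.cluster import KMeans


def acoustic_features(path):
    """Return [duration (s), mean F0 (Hz), mean intensity (dB)] for one recording."""
    y, sr = librosa.load(path, sr=None)
    duration = librosa.get_duration(y=y, sr=sr)
    # F0 contour via probabilistic YIN; unvoiced frames come back as NaN.
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C7"), sr=sr)
    mean_f0 = float(np.nanmean(f0)) if np.any(~np.isnan(f0)) else 0.0
    # Frame-wise RMS energy converted to dB, then averaged.
    intensity_db = librosa.amplitude_to_db(librosa.feature.rms(y=y), ref=np.max)
    return [duration, mean_f0, float(intensity_db.mean())]


def recognize_emotions(wav_paths, labels):
    """Supervised and unsupervised recognition from the three acoustic features.

    wav_paths: list of audio file paths; labels: matching "neutral"/"happy"/"sad" tags.
    """
    X = StandardScaler().fit_transform([acoustic_features(p) for p in wav_paths])

    # Supervised: hold out a test split and train an SVM on the emotion labels.
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)

    # Unsupervised: cluster the same features into three groups.
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    return clf.score(X_te, y_te), clusters
```

Duration, mean F0, and mean intensity give only one three-dimensional vector per recording; a fuller reproduction would operate at the word, syllable, or phoneme level as the abstract indicates.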
Original language: English
Pages (from-to): 417-432
Number of pages: 16
Journal: International Journal of Speech Technology
Volume: 26
Issue number: 2
Early online date: 18 Mar 2023
DOIs
Publication status: Published - Jul 2023

Scopus Subject Areas

  • Artificial Intelligence
  • Software
  • Language and Linguistics
  • Human-Computer Interaction
  • Computer Vision and Pattern Recognition
  • Linguistics and Language

