Learning Discriminative Features using Center Loss and Reconstruction as Regularizer for Speech Emotion Recognition
Proceedings of IJCAI 2019 3rd Workshop on Artificial Intelligence in Affective Computing, Proceedings of Machine Learning Research.

This paper proposes a Convolutional Neural Network (CNN) inspired by Multitask Learning (MTL) and based on speech features trained under the joint supervision of softmax loss and center loss, a powerful metric learning strategy, for the recognition of emotion in speech. Speech features such as spectrograms and Mel-frequency Cepstral Coefficients (MFCCs) help retain emotion-related low-level characteristics in speech. We experimented with several Deep Neural Network (DNN) architectures that take speech features as input and trained them under both softmax and center loss, which resulted in highly discriminative features well suited to Speech Emotion Recognition (SER). Our networks also gain a regularizing effect by simultaneously performing the auxiliary task of reconstructing the input speech features. This sharing of representations among related tasks enables our networks to generalize better on the original task of SER. Some of our proposed networks contain far fewer parameters than state-of-the-art architectures. We used the University of Southern California's Interactive Emotional Motion Capture (USC-IEMOCAP) database in this work. Our best performing model achieves a 3.1% improvement in overall accuracy and a 5.3% improvement in class accuracy compared to existing state-of-the-art methods.
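The joint supervision described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the functions, the example values, and the weighting factor `lam` are assumptions, and center loss follows the standard formulation of half the squared distance between a deep feature and its class center.

```python
import math

def softmax_cross_entropy(logits, label):
    """Numerically stable softmax cross-entropy for a single sample."""
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[label]

def center_loss(feature, centers, label):
    """Half the squared Euclidean distance between a deep feature
    vector and the center of its ground-truth class."""
    return 0.5 * sum((f - c) ** 2 for f, c in zip(feature, centers[label]))

def joint_loss(logits, feature, centers, label, lam=0.003):
    """Joint supervision: L = L_softmax + lambda * L_center.
    `lam` balances discriminative power against intra-class compactness
    (the value here is illustrative, not from the paper)."""
    return softmax_cross_entropy(logits, label) + lam * center_loss(feature, centers, label)
```

In training, the class centers themselves are also updated (typically toward the running mean of each class's features), which pulls same-emotion features together while the softmax term keeps classes separable.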
Acoustic Features and Neural Representations for Categorical Emotion Recognition from Speech
Aaron Keesing, Yun Sing Koh, Michael Witbrock

Many features have been proposed for use in speech emotion recognition, from signal processing features to bag-of-audio-words (BoAW) models, but they have not been directly compared across a large number of speech corpora. We propose a full factorial design to compare speech processing features, BoAW and neural representations in a categorical emotion classification problem for each dataset, using speaker-independent cross-validation with diverse classifiers. Results show statistically significant differences between features and between classifiers, with large effect sizes between features. Standard acoustic feature sets still perform competitively to neural representations, while neural representations have a larger range of performance, and BoAW features lie in the middle. The best-performing neural representations were wav2vec and VGGish, with wav2vec performing best out of all tested features. We conclude that standard acoustic feature sets are still very useful baselines for emotional classification, but high-quality neural speech representations can outperform them.

Cite as: Keesing, A., Koh, Y.S., Witbrock, M. (2021) Acoustic Features and Neural Representations for Categorical Emotion Recognition from Speech. Interspeech 2021, 3415-3419, doi: 10.21437/Interspeech.
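The speaker-independent cross-validation mentioned above means that no speaker's utterances appear in both the training and test partitions of any fold. A minimal sketch of such a split, assuming utterances are given as `(utterance_id, speaker_id)` pairs (the function name and greedy balancing strategy are illustrative, not from the paper):

```python
from collections import defaultdict

def speaker_independent_folds(samples, n_folds=5):
    """Split (utterance, speaker) pairs into folds so that each speaker's
    utterances all land in exactly one fold (leave-speakers-out CV)."""
    by_speaker = defaultdict(list)
    for utt, spk in samples:
        by_speaker[spk].append(utt)
    folds = [[] for _ in range(n_folds)]
    # Greedily assign speakers with the most utterances first,
    # always into the currently smallest fold, to balance fold sizes.
    for spk, utts in sorted(by_speaker.items(), key=lambda kv: -len(kv[1])):
        min(folds, key=len).extend(utts)
    return folds
```

Evaluating each fold against a model trained on the remaining folds then measures generalization to unseen speakers, which is the harder and more realistic setting for emotion recognition.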