Bimodal Speech Emotion Recognition using Fused Intra and Cross Modality Features

Samuel Kakuba, Dong Seog Han

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Interactive speech between two or more interlocutors involves both text and acoustic modalities. These modalities carry intra- and cross-modality relationships at different time intervals which, if modeled well, can provide emotionally rich cues for robust and accurate prediction of emotional states. This calls for models that capture both long- and short-term dependencies between the current, previous, and future time steps using multimodal approaches. Moreover, it is important to contextualize the interactive speech in order to accurately infer the emotional state. Researchers often combine recurrent and/or convolutional neural networks with attention mechanisms for this purpose. In this paper, we propose a deep learning-based bimodal speech emotion recognition (DLBER) model that uses multi-level fusion to learn intra- and cross-modality feature representations. The proposed DLBER model uses transformer encoders to model the intra-modality features, which are combined at the first-level fusion in the local feature learning block (LFLB). We also use self-attentive bidirectional LSTM layers to further extract intra-modality features before the second-level fusion, enabling progressive learning of the cross-modality features. The resultant feature representation is fed into another self-attentive bidirectional LSTM layer in the global feature learning block (GFLB). The interactive emotional dyadic motion capture (IEMOCAP) dataset was used to evaluate the performance of the proposed DLBER model, which achieves an F1 score of 72.93% and an accuracy of 74.05%.
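
For illustration, the multi-level fusion described in the abstract can be sketched in PyTorch. This is a minimal sketch under stated assumptions, not the authors' implementation: the feature dimensions, the use of concatenation as the fusion operator, the additive attention pooling, the layer counts, and the assumption that both modalities are aligned to the same number of time steps are all illustrative choices; the four output classes mirror the common four-class IEMOCAP setup.

import torch
import torch.nn as nn

class SelfAttentiveBiLSTM(nn.Module):
    """Bidirectional LSTM followed by additive self-attention over time steps."""
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.bilstm = nn.LSTM(input_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)   # one scalar score per time step

    def forward(self, x):                          # x: (B, T, input_dim)
        h, _ = self.bilstm(x)                      # (B, T, 2*hidden_dim)
        w = torch.softmax(self.attn(h), dim=1)     # attention weights (B, T, 1)
        return w * h, (w * h).sum(dim=1)           # attended sequence, pooled vector

class DLBERSketch(nn.Module):
    """Illustrative DLBER-style model; all sizes are assumptions."""
    def __init__(self, text_dim=300, audio_dim=40, d_model=128, n_classes=4):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        # Transformer encoders model intra-modality features (LFLB input)
        self.text_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), 2)
        self.audio_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), 2)
        # Self-attentive BiLSTMs extract further intra-modality features
        self.text_lstm = SelfAttentiveBiLSTM(d_model, d_model // 2)
        self.audio_lstm = SelfAttentiveBiLSTM(d_model, d_model // 2)
        # GFLB: self-attentive BiLSTM over the second-level fused sequence
        self.gflb = SelfAttentiveBiLSTM(4 * d_model, d_model)
        self.classifier = nn.Linear(2 * d_model, n_classes)

    def forward(self, text, audio):   # both assumed aligned to the same T steps
        t = self.text_enc(self.text_proj(text))     # intra-modality (B, T, d)
        a = self.audio_enc(self.audio_proj(audio))  # intra-modality (B, T, d)
        fused1 = torch.cat([t, a], dim=-1)          # first-level fusion (B, T, 2d)
        t_seq, _ = self.text_lstm(t)                # deeper intra-modality features
        a_seq, _ = self.audio_lstm(a)
        fused2 = torch.cat([fused1, t_seq, a_seq], dim=-1)  # second-level fusion
        _, g = self.gflb(fused2)                    # global pooled representation
        return self.classifier(g)                   # emotion logits

model = DLBERSketch()
text = torch.randn(8, 50, 300)    # e.g. word embeddings, 50 aligned steps
audio = torch.randn(8, 50, 40)    # e.g. MFCC frames, same 50 steps
logits = model(text, audio)       # (8, 4) scores over four emotion classes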

Original language: English
Title of host publication: ICUFN 2023 - 14th International Conference on Ubiquitous and Future Networks
Publisher: IEEE Computer Society
Pages: 109-113
Number of pages: 5
ISBN (Electronic): 9798350335385
DOIs
State: Published - 2023
Event: 14th International Conference on Ubiquitous and Future Networks, ICUFN 2023 - Paris, France
Duration: 4 Jul 2023 – 7 Jul 2023

Publication series

Name: International Conference on Ubiquitous and Future Networks, ICUFN
Volume: 2023-July
ISSN (Print): 2165-8528
ISSN (Electronic): 2165-8536

Conference

Conference: 14th International Conference on Ubiquitous and Future Networks, ICUFN 2023
Country/Territory: France
City: Paris
Period: 4/07/23 – 7/07/23

Keywords

  • emotion recognition
  • fusion
  • inter-modality features
  • intra-modality features
