Spectral Salt-and-Pepper Patch Masking for Self-Supervised Speech Representation Learning

June Woo Kim, Hoon Chung, Ho Young Jung

Research output: Contribution to journal › Article › peer-review

Abstract

Recent advanced systems in the speech recognition domain use large Transformer neural networks pretrained on massive amounts of speech data. Methods in deep learning are frequently shared across domains, and the Transformer model is used effectively in both the speech and image domains. In this paper, we introduce a novel masking method for self-supervised speech representation learning based on the salt-and-pepper (S&P) mask, which is commonly used in computer vision. The proposed scheme contaminates the input speech spectrum with consecutive quadrilateral-shaped S&P patches placed at random. Furthermore, we modify the standard S&P mask to make it appropriate for the speech domain. To validate the effect of the proposed spectral S&P patch masking on self-supervised representation learning, we conduct pretraining and downstream experiments in two languages, English and Korean. Specifically, we pretrain a speech representation model on each dataset and evaluate the pretrained models, both as feature extractors and under fine-tuning, on a variety of downstream tasks. The experimental results clearly show that the proposed spectral S&P patch masking is effective for various downstream tasks when combined with conventional masking methods.
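The abstract describes contaminating the input spectrogram with randomly placed quadrilateral patches of salt-and-pepper noise. As a rough illustration of that idea (not the paper's exact algorithm — the patch sizes, counts, and the paper's speech-specific modification of the S&P mask are not specified in the abstract, so all parameters below are assumptions), one could sketch it as:

```python
import numpy as np

def sp_patch_mask(spec, num_patches=4, patch_freq=16, patch_time=20,
                  density=0.5, rng=None):
    """Illustrative sketch: apply salt-and-pepper noise inside random
    rectangular patches of a (freq, time) spectrogram.

    Hypothetical parameters; the paper's actual patch geometry and
    noise density are not given in the abstract.
    """
    rng = np.random.default_rng(rng)
    out = spec.copy()
    # Use the spectrogram's own extremes as "pepper" (min) and "salt" (max).
    lo, hi = spec.min(), spec.max()
    F, T = spec.shape
    for _ in range(num_patches):
        # Pick the top-left corner of a random quadrilateral patch.
        f0 = int(rng.integers(0, max(1, F - patch_freq)))
        t0 = int(rng.integers(0, max(1, T - patch_time)))
        patch = out[f0:f0 + patch_freq, t0:t0 + patch_time]
        # Within the patch, flip a `density` fraction of bins to lo/hi,
        # split evenly between pepper and salt.
        noise = rng.random(patch.shape)
        patch[noise < density / 2] = lo                          # pepper
        patch[(noise >= density / 2) & (noise < density)] = hi   # salt
    return out
```

In a self-supervised setup, the model would then be trained to reconstruct or predict the clean spectrum (or its representation) from the contaminated input, in combination with conventional time/frequency masking.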

Original language: English
Article number: 3418
Journal: Mathematics
Volume: 11
Issue number: 15
DOIs
State: Published - Aug 2023

Keywords

  • salt-and-pepper masking
  • self-supervised learning
  • spectrum patch masking
  • speech representation learning
