Convolutional neural network based audio event classification

Minkyu Lim, Donghyun Lee, Hosung Park, Yoseb Kang, Junseok Oh, Jeong Sik Park, Gil Jin Jang, Ji Hwan Kim

Research output: Contribution to journal › Article › peer-review

46 Scopus citations

Abstract

This paper proposes an audio event classification method based on convolutional neural networks (CNNs). CNNs have a strong ability to distinguish complex shapes in images. The proposed system uses audio features as an input image to a CNN. Mel-scale filter bank features are extracted from each frame and concatenated over 40 consecutive frames, and the concatenated frames are treated as an input image. The output layer of the CNN generates probabilities for each audio event (e.g. dog bark, siren, forest). The event probabilities for all images in an audio segment are accumulated, and the audio event with the highest accumulated probability is taken as the classification result. The proposed method classified thirty audio events with an accuracy of 81.5% on the UrbanSound8K, BBC Sound FX, DCASE2016, and FREESOUND datasets.
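The paper's implementation is not reproduced on this page; the following is a minimal sketch of the pipeline the abstract describes, assuming a 40-band mel filter bank, non-overlapping 40-frame windows, and a small PyTorch CNN. EventCNN, classify_segment, and all layer sizes and hop settings are hypothetical illustrations, not the authors' published architecture.

    # Hypothetical sketch of the abstract's pipeline: mel features -> 40-frame
    # "images" -> CNN event probabilities -> accumulation over the segment.
    import numpy as np
    import librosa
    import torch
    import torch.nn as nn

    N_MELS = 40      # mel filter bank size (assumed)
    N_FRAMES = 40    # consecutive frames per input image (from the abstract)
    N_EVENTS = 30    # thirty audio event classes (from the abstract)

    class EventCNN(nn.Module):
        """Small CNN over 40x40 mel 'images'; layer sizes are illustrative."""
        def __init__(self, n_events=N_EVENTS):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # 40x40 input -> two 2x2 poolings -> 32 channels of 10x10
            self.classifier = nn.Linear(32 * 10 * 10, n_events)

        def forward(self, x):  # x: (batch, 1, N_MELS, N_FRAMES)
            h = self.features(x).flatten(1)
            # Output layer produces per-event probabilities
            return torch.softmax(self.classifier(h), dim=1)

    def classify_segment(signal, sr, model):
        """Accumulate per-image event probabilities over a segment, then argmax."""
        mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=N_MELS)
        logmel = librosa.power_to_db(mel)  # shape: (N_MELS, total_frames)
        # Slice the feature matrix into non-overlapping 40-frame images
        # (the abstract does not state the hop; non-overlap is an assumption).
        images = [logmel[:, i:i + N_FRAMES]
                  for i in range(0, logmel.shape[1] - N_FRAMES + 1, N_FRAMES)]
        x = torch.tensor(np.stack(images), dtype=torch.float32).unsqueeze(1)
        with torch.no_grad():
            probs = model(x)  # (n_images, N_EVENTS)
        # Event with the highest accumulated probability wins
        return int(probs.sum(dim=0).argmax())

The segment-level decision here is the argmax of the summed per-image softmax outputs, which matches the accumulation rule stated in the abstract.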

Original language: English
Pages (from-to): 2748-2760
Number of pages: 13
Journal: KSII Transactions on Internet and Information Systems
Volume: 12
Issue number: 6
DOIs
State: Published - Jun 2018

Keywords

  • Audio event classification
  • Convolutional neural networks
  • Deep learning
