TY - JOUR
T1 - Development of an auditory emotion recognition function using psychoacoustic parameters based on the International Affective Digitized Sounds
AU - Choi, Youngimm
AU - Lee, Sungjun
AU - Jung, Sung Soo
AU - Choi, In Mook
AU - Park, Yon Kyu
AU - Kim, Chobok
N1 - Publisher Copyright:
© 2014, Psychonomic Society, Inc.
PY - 2015/12/1
Y1 - 2015/12/1
AB - The purpose of this study was to develop an auditory emotion recognition function that can determine the emotion evoked by sounds encountered in everyday environments. Sound stimuli were selected from the International Affective Digitized Sounds (IADS-2), a standardized database of sounds intended to evoke emotion, and four psychoacoustic parameters (loudness, sharpness, roughness, and fluctuation strength) were extracted from each sound. In addition, 140 college students rated the sounds on an emotion adjective scale measuring three basic emotions (happiness, sadness, and negativity). A discriminant analysis predicting basic emotions from the psychoacoustic parameters yielded a discriminant function with an overall discriminant accuracy of 88.9% on the training data. To validate this function, the same four psychoacoustic parameters were extracted from 46 sound stimuli drawn from another database and entered into the discriminant function, which produced an overall discriminant accuracy of 63.04%. These findings suggest that daily-life sounds, beyond voice and music, can be used in a human–machine interface.
KW - Auditory emotion recognition
KW - Emotion recognition
KW - Emotional adjectives
KW - IADS-2
KW - Psychoacoustic parameters
UR - http://www.scopus.com/inward/record.url?scp=84947039298&partnerID=8YFLogxK
U2 - 10.3758/s13428-014-0525-4
DO - 10.3758/s13428-014-0525-4
M3 - Article
C2 - 25319038
AN - SCOPUS:84947039298
SN - 1554-351X
VL - 47
SP - 1076
EP - 1084
JO - Behavior Research Methods
JF - Behavior Research Methods
IS - 4
ER -