TY - GEN
T1 - Feature Vector Extraction Technique for Facial Emotion Recognition Using Facial Landmarks
AU - Poulose, Alwin
AU - Kim, Jung Hwan
AU - Han, Dong Seog
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - The facial emotion recognition (FER) system classifies the driver's emotions, and these results are crucial for the autonomous driving system (ADS). The ADS effectively utilizes the features from FER and increases safety by preventing road accidents. In FER, the system classifies the driver's emotions into categories such as happy, sad, angry, surprise, disgust, fear, and neutral. These emotions indicate the driver's mental condition, and the driver's current mental status provides valuable information for predicting the occurrence of road accidents. Conventional FER systems use raw facial image pixel values as their input, and these pixel values provide a limited number of features for training the model. This limited feature set degrades the performance of the system and leads to a higher classification error. To address this problem in conventional FER systems, we propose a feature vector extraction technique that combines the facial image pixel values with the facial landmarks; the deep learning model uses these combined features as its input. Our experiments and results show that the proposed feature vector extraction-based FER approach reduces the classification error for emotion recognition and enhances the performance of the system. The proposed FER approach achieved a classification accuracy of 99.96% and a model loss of 0.095 with the ResNet architecture.
AB - The facial emotion recognition (FER) system classifies the driver's emotions, and these results are crucial for the autonomous driving system (ADS). The ADS effectively utilizes the features from FER and increases safety by preventing road accidents. In FER, the system classifies the driver's emotions into categories such as happy, sad, angry, surprise, disgust, fear, and neutral. These emotions indicate the driver's mental condition, and the driver's current mental status provides valuable information for predicting the occurrence of road accidents. Conventional FER systems use raw facial image pixel values as their input, and these pixel values provide a limited number of features for training the model. This limited feature set degrades the performance of the system and leads to a higher classification error. To address this problem in conventional FER systems, we propose a feature vector extraction technique that combines the facial image pixel values with the facial landmarks; the deep learning model uses these combined features as its input. Our experiments and results show that the proposed feature vector extraction-based FER approach reduces the classification error for emotion recognition and enhances the performance of the system. The proposed FER approach achieved a classification accuracy of 99.96% and a model loss of 0.095 with the ResNet architecture.
KW - autonomous driving system (ADS)
KW - facial emotion recognition (FER)
KW - facial keypoints detection
KW - facial landmark detection
KW - feature vector extraction
UR - http://www.scopus.com/inward/record.url?scp=85122013949&partnerID=8YFLogxK
U2 - 10.1109/ICTC52510.2021.9620798
DO - 10.1109/ICTC52510.2021.9620798
M3 - Conference contribution
AN - SCOPUS:85122013949
T3 - International Conference on ICT Convergence
SP - 1072
EP - 1076
BT - ICTC 2021 - 12th International Conference on ICT Convergence
PB - IEEE Computer Society
T2 - 12th International Conference on Information and Communication Technology Convergence, ICTC 2021
Y2 - 20 October 2021 through 22 October 2021
ER -