A pre-trained model vs dedicated convolutional neural networks for emotion recognition

Asmaa Yaseen Nawaf, Wesam M. Jasim

Abstract


Facial expression recognition (FER) is one of the most important methods influencing human-machine interaction (HMI). In this paper, a comparison is made between two models: a model built from scratch and trained only on the FER+ dataset, and the VGG16 model, which was pre-trained on a large dataset of diverse images and then fine-tuned on the FER+ dataset. The FER+ dataset was augmented before being used to train both proposed models. The models were further evaluated (extra validation) on images collected from the internet in order to determine the better model for identifying human emotions, with the Dlib detector and the OpenCV library used for face detection. The results show that the proposed emotion recognition convolutional neural network (ERCNN) model, dedicated to identifying human emotions, significantly outperformed the pre-trained model in terms of accuracy, speed, and performance, achieving 87.133% on the public test set and 82.648% on the private test set, compared with 71.685% and 67.338%, respectively, for the proposed fine-tuned VGG16 model.
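The abstract describes fine-tuning a pre-trained VGG16 on the augmented FER+ dataset and using the Dlib detector with OpenCV to locate faces during the extra validation on internet images. As a rough illustration of that workflow, the minimal Python sketch below shows one way such a pipeline could be wired together; the frozen backbone, the 256-unit dense head, the 8-class FER+ output, the 48x48 input size, and the helper names (build_finetuned_vgg16, detect_and_classify) are assumptions for illustration, not the authors' reported ERCNN or VGG16 configuration.

```python
# Hypothetical sketch: fine-tuning ImageNet-pretrained VGG16 for FER+ emotion
# recognition and detecting faces with Dlib/OpenCV before classification.
# Layer sizes, class count, and preprocessing are illustrative assumptions.
import cv2
import dlib
import numpy as np
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 8          # FER+ emotion labels (assumption)
INPUT_SIZE = (48, 48)    # FER+ image resolution

def build_finetuned_vgg16():
    """VGG16 pre-trained on ImageNet, re-headed for emotion classification."""
    base = VGG16(weights="imagenet", include_top=False,
                 input_shape=(INPUT_SIZE[0], INPUT_SIZE[1], 3))
    base.trainable = False                      # freeze the convolutional backbone
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = Model(base.input, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def detect_and_classify(image_path, model):
    """Detect faces with Dlib, crop them with OpenCV, and classify each one."""
    detector = dlib.get_frontal_face_detector()
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    predictions = []
    for rect in detector(gray, 1):
        face = gray[max(rect.top(), 0):rect.bottom(),
                    max(rect.left(), 0):rect.right()]
        face = cv2.resize(face, INPUT_SIZE)
        face = cv2.cvtColor(face, cv2.COLOR_GRAY2RGB) / 255.0   # 3 channels for VGG16
        predictions.append(model.predict(face[np.newaxis, ...]))
    return predictions

model = build_finetuned_vgg16()
# model.fit(augmented_ferplus_images, labels, ...)  # training data not shown here
```

Freezing the convolutional backbone and retraining only a new classification head is one common way to realize the "pre-trained, then retrained on FER+" step described for the VGG16 model; the dedicated ERCNN model, by contrast, is trained from scratch on FER+ alone.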

Keywords


Convolutional neural networks; Deep learning; Facial expression recognition; FER+ dataset; VGG16 pre-trained model


DOI: http://doi.org/10.11591/ijece.v13i1.pp1123-1133

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

International Journal of Electrical and Computer Engineering (IJECE)
p-ISSN 2088-8708, e-ISSN 2722-2578

This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).