Performance analysis of deep unified model for facial expression recognition using convolution neural network

Kavita Kavita, Rajender Singh Chhillar


Facial expression recognition has attracted substantial attention in computer vision, driven by the need for robust and accurate models that can decipher human emotions from facial images. This paper presents a performance analysis of a novel hybrid model that combines the strengths of the residual network (ResNet) and dense network (DenseNet) architectures, applied after image preprocessing, for facial expression recognition. The proposed hybrid model capitalizes on the complementary characteristics of ResNet's residual connections and DenseNet's densely connected blocks to enhance the model's capacity to extract discriminative features from facial images. This research evaluates the hybrid model's performance and conducts a comprehensive benchmark against established convolution neural network (CNN) models for facial expression recognition. The analysis covers key aspects of model performance, including classification accuracy and adaptability, on the LFW dataset for the facial expressions Anger, Fear, Happy, Disgust, Sad, Surprise, and Neutral. The results show that the proposed hybrid model is consistently more accurate and computationally more efficient than existing models. This performance analysis illuminates the hybrid model's potential to advance facial expression recognition research.
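The combination described above can be illustrated with a minimal PyTorch sketch: one ResNet-style residual block (skip connection) feeding one DenseNet-style dense block (feature-map concatenation), followed by a 7-way classifier for the listed expressions. All layer sizes, block counts, and names here are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """ResNet-style block: output = F(x) + x (skip connection)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv2(self.relu(self.conv1(x))) + x)

class DenseBlock(nn.Module):
    """DenseNet-style block: each layer sees all previous feature maps."""
    def __init__(self, in_channels, growth, layers=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_channels + i * growth, growth, 3, padding=1)
            for i in range(layers)
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        for conv in self.convs:
            x = torch.cat([x, self.relu(conv(x))], dim=1)
        return x

class HybridExpressionNet(nn.Module):
    """Hypothetical hybrid: residual block, then dense block, then classifier."""
    def __init__(self, num_classes=7):  # 7 expression classes
        super().__init__()
        self.stem = nn.Conv2d(3, 16, 3, padding=1)
        self.res = ResidualBlock(16)
        self.dense = DenseBlock(16, growth=8, layers=3)  # 16 + 3*8 = 40 channels
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(40, num_classes)

    def forward(self, x):
        x = self.dense(self.res(self.stem(x)))
        return self.fc(self.pool(x).flatten(1))

model = HybridExpressionNet()
logits = model(torch.randn(2, 3, 48, 48))  # batch of two 48x48 face crops
print(logits.shape)  # torch.Size([2, 7]) — one score per expression class
```

The residual block preserves gradient flow through the skip connection, while the dense block reuses earlier feature maps via concatenation; stacking them is one plausible way to get both behaviors in a single feature extractor.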


Convolution neural network; Deep learning; Facial expression; LFW dataset; Preprocessing

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

International Journal of Electrical and Computer Engineering (IJECE)
p-ISSN 2088-8708, e-ISSN 2722-2578

This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).