A novel hybrid generation technique of facial expressions using fine-tuning and auxiliary condition generative adversarial network
Abstract
Facial expression generation continues to attract researchers and scientists worldwide. Facial expressions are an excellent way of transmitting one's emotions or intentions to others, and they have been extensively studied in areas such as driver safety, human-computer interaction (HCI), deception detection, health care, and monitoring. Facial expression generation starts from a single neutral image and produces a sequence of facial expression images, which are combined into a video. Previous methods generated facial expression images of the same person; however, they still suffer from low accuracy and poor image quality. This article addresses this problem with a novel hybrid model for facial expression video generation that uses fine-tuning and conditional generative model architectures to optimize the model's parameters. Results indicate that the proposed approach significantly improves expression generation for the same person. The proposed method can reliably and accurately generate facial expressions, with a testing accuracy of 98.7% and a training accuracy of 99.9%.
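The abstract describes conditioning a generative adversarial network on a target expression. As a minimal sketch of how such conditioning is typically wired (the shapes, the `NOISE_DIM` value, and the expression list are illustrative assumptions, not details from the article), the generator's input can be formed by concatenating a latent noise vector with a one-hot expression label:

```python
import numpy as np

# Hypothetical expression classes; the FERG-style set of seven is assumed here.
EXPRESSIONS = ["neutral", "joy", "anger", "sadness", "surprise", "fear", "disgust"]
NOISE_DIM = 64  # assumed latent dimension, not specified in the article

def one_hot(label: str) -> np.ndarray:
    """Encode a target expression as a one-hot condition vector."""
    vec = np.zeros(len(EXPRESSIONS))
    vec[EXPRESSIONS.index(label)] = 1.0
    return vec

def generator_input(label: str, rng: np.random.Generator) -> np.ndarray:
    """Concatenate latent noise with the condition vector (cGAN-style conditioning)."""
    z = rng.standard_normal(NOISE_DIM)
    return np.concatenate([z, one_hot(label)])

rng = np.random.default_rng(0)
x = generator_input("joy", rng)
print(x.shape)  # (71,): 64 noise dims + 7 expression classes
```

Sampling one such input per target expression, and feeding each through the (trained) generator, would yield the per-expression frames that the article combines into a video.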
Keywords
Condition generative adversarial networks; Convolution neural network; Deep learning; Discriminator; FERG dataset; Generator; Hyperparameter tuning model
Full Text: PDF
DOI: http://doi.org/10.11591/ijece.v15i3.pp3418-3428
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
International Journal of Electrical and Computer Engineering (IJECE)
p-ISSN 2088-8708, e-ISSN 2722-2578
This journal is published by the Institute of Advanced Engineering and Science (IAES).