Fast discrimination of fake video manipulation
Abstract
Artificial intelligence techniques have made deepfakes possible: one person’s face is replaced with another’s (usually a public figure’s), making the latter appear to do or say things they never did. Contributing to a solution for video credibility has therefore become a critical goal, which we address in this paper. Our approach exploits the visible artifacts (blur inconsistencies) generated by the manipulation process. We analyze focus quality and its ability to reveal these artifacts. The focus measure operators considered here belong to the image Laplacian and image gradient groups; they are very fast to compute and do not require a large training dataset. The results show that i) the values of the Laplacian-group operators may be either lower or higher in the fake video than in the real video, depending on the quality of the fake, so they cannot be used for deepfake detection, and ii) the gradient-based measure (GRA7) decreases in the fake video in all cases, whether the fake is of high or low quality, and can therefore help detect deepfakes.
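The following is a minimal sketch, not the authors’ implementation, of how frame-level focus measures from the two families named in the abstract can be computed with OpenCV. The variance-of-Laplacian and Tenengrad formulas used here are common representatives of the Laplacian and gradient groups; the paper’s specific GRA7 variant may use a different formula, and the function and variable names below are illustrative assumptions.

```python
# Sketch of per-frame focus measures (Laplacian group and gradient group).
# Assumptions: variance of Laplacian stands in for the Laplacian group,
# Tenengrad stands in for the gradient group; neither is claimed to be
# the exact operator (e.g., GRA7) evaluated in the paper.
import cv2
import numpy as np


def laplacian_focus(gray: np.ndarray) -> float:
    """Laplacian-group measure: variance of the Laplacian response."""
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    return float(lap.var())


def gradient_focus(gray: np.ndarray) -> float:
    """Gradient-group measure: mean squared Sobel gradient magnitude (Tenengrad)."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx ** 2 + gy ** 2))


def video_focus_profile(path: str):
    """Compute both focus scores for every frame of a video file."""
    cap = cv2.VideoCapture(path)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        scores.append((laplacian_focus(gray), gradient_focus(gray)))
    cap.release()
    return scores
```

In a comparison such as the one described in the abstract, these scores would be computed for a real clip and its manipulated counterpart; the reported behavior is that the gradient-based score drops in the fake video regardless of its quality, while the Laplacian-based score may move in either direction.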
Keywords
deepfake; focus measures; multimedia forensics; video manipulation; visible artifacts
Full Text: PDF
DOI: http://doi.org/10.11591/ijece.v12i3.pp2582-2587
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
International Journal of Electrical and Computer Engineering (IJECE)
p-ISSN 2088-8708, e-ISSN 2722-2578
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).