Aim: The purpose of the article is to present the hypothesis that exploiting discrepancies in audiovisual materials can significantly increase the effectiveness of detecting various types of deepfakes and related threats. To verify this hypothesis, the authors propose a new method that reveals inconsistencies both across multiple modalities simultaneously and within each modality separately, enabling authentic public speaking videos to be effectively distinguished from altered ones. A minimal sketch of the decision logic implied by this formulation follows.
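The sketch below illustrates how per-modality (intra-modal) anomaly scores and a cross-modal inconsistency score could be combined into a single authentic/fake decision. The function name, score ranges, and weighting are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fuse_inconsistency_scores(video_score: float, audio_score: float,
                              cross_modal_score: float,
                              weights=(0.35, 0.25, 0.40),
                              threshold: float = 0.5) -> bool:
    """Combine intra-modal and cross-modal inconsistency scores.

    video_score / audio_score: anomaly scores computed within each
    modality separately (hypothetical outputs of per-modality models).
    cross_modal_score: audio-visual mismatch score computed jointly.
    All scores are assumed to lie in [0, 1]; the weights and the 0.5
    threshold are placeholder values.
    Returns True if the clip is classified as a deepfake.
    """
    fused = float(np.dot(weights, [video_score, audio_score, cross_modal_score]))
    return fused >= threshold
```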

Project and methods: The proposed approach integrates audio and visual signals in a fine-grained manner and then performs binary classification based on calculated adjustments to the classification results of each modality. The method was tested with various network architectures, in particular Capsule networks for deep anomaly detection and the Swin Transformer for image classification. Pre-processing included frame extraction and face detection using the MTCNN algorithm, as well as conversion of audio to mel spectrograms, which better reflect human auditory perception. The technique was evaluated on multimodal deepfake datasets, namely FakeAVCeleb and TMC, along with a custom dataset containing 4,700 recordings, and showed high performance in identifying deepfake threats across a range of test scenarios. A sketch of the pre-processing steps is given below.
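The following is an illustrative reconstruction of the described pre-processing pipeline, assuming OpenCV for frame extraction, the facenet-pytorch implementation of MTCNN for face detection, and librosa for mel-spectrogram conversion; the sampling stride, crop size, and spectrogram parameters are assumptions, not the authors' settings:

```python
import cv2
import librosa
import numpy as np
from PIL import Image
from facenet_pytorch import MTCNN

mtcnn = MTCNN(image_size=224)  # face detector that also crops the detected face

def extract_faces(video_path: str, every_nth: int = 5):
    """Read frames from a video and return MTCNN-cropped face tensors."""
    cap = cv2.VideoCapture(video_path)
    faces, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth == 0:
            # OpenCV decodes to BGR; MTCNN expects an RGB image.
            rgb = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            face = mtcnn(rgb)  # returns None if no face is detected
            if face is not None:
                faces.append(face)
        idx += 1
    cap.release()
    return faces

def mel_spectrogram(audio_path: str, sr: int = 16000, n_mels: int = 80):
    """Convert audio to a log-mel spectrogram (perceptually scaled frequencies)."""
    y, sr = librosa.load(audio_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)
```

The mel scale spaces frequency bins according to human pitch perception, which is why it is preferred over a raw linear-frequency spectrogram for speech-based deepfake analysis.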

Results: The method proposed by the authors achieved higher AUC and accuracy than the reference methods, confirming its effectiveness in the analysis of multimodal artefacts. The test results confirm that it reliably detects modified videos in a variety of test scenarios, which can be considered an advance over existing deepfake detection techniques. The results also highlight the adaptability of the method across different feature extraction network architectures.
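For reference, AUC and accuracy are standard binary-classification metrics; a minimal sketch of how they are typically computed with scikit-learn is shown below (the labels and scores are placeholder values, not results from the article):

```python
from sklearn.metrics import roc_auc_score, accuracy_score

# Hypothetical ground-truth labels (1 = deepfake) and model scores in [0, 1].
y_true = [0, 0, 1, 1, 1, 0]
y_score = [0.10, 0.35, 0.80, 0.65, 0.92, 0.40]

auc = roc_auc_score(y_true, y_score)                       # threshold-free ranking quality
acc = accuracy_score(y_true, [s >= 0.5 for s in y_score])  # at a 0.5 decision threshold
print(f"AUC = {auc:.3f}, accuracy = {acc:.3f}")
```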

Conclusions: The presented method of audiovisual deepfake detection exploits fine-grained inconsistencies in multimodal features to determine whether material is authentic or synthetic. It stands out for its ability to expose inconsistencies across different types of deepfakes and, within each individual modality, to effectively distinguish authentic content from manipulated counterparts. Its adaptability has been confirmed by successful application with various feature extraction network architectures, and its effectiveness has been demonstrated in rigorous tests on two different audiovisual deepfake datasets.

Keywords: audio-video stream analysis, deepfake threat detection, public speech analysis

Type of article: original scientific article

