Title How Good is ChatGPT at Audiovisual Deepfake Detection: A Comparative Study of ChatGPT, AI Models and Human Perception
Author 彭彥璁
Shahzad, Sahibzada Adil; Hashmi, Ammarah; Peng, Yan-Tsung; Tsao, Yu; Wang, Hsin-Min
Contributor Department of Computer Science
Keywords LLM; ChatGPT; Deepfake; Audiovisual deepfake; Multi-modality; Video forensics; Forgery detection
Date 2025-06
Uploaded 12-Mar-2026 15:07:56 (UTC+8)
Abstract Multimodal deepfakes involving audiovisual manipulations are a growing threat because they are difficult to detect with the naked eye or with unimodal deep learning-based forgery detection methods. Audiovisual forensic models, while more capable than unimodal models, require large training datasets and are computationally expensive for training and inference. Furthermore, these models lack interpretability and often do not generalize well to unseen manipulations. In this study, we examine the detection capabilities of a large language model (LLM), namely ChatGPT, in identifying and accounting for possible visual and auditory artifacts and manipulations in audiovisual deepfake content. Extensive experiments are conducted on videos from a benchmark multimodal deepfake dataset to evaluate the detection performance of ChatGPT and compare it with that of state-of-the-art multimodal forensic models and human observers. Experimental results demonstrate the importance of domain knowledge and prompt engineering for video forgery detection tasks using LLMs. Unlike approaches based on end-to-end learning, ChatGPT can account for spatial and spatiotemporal artifacts and inconsistencies that may exist within or across modalities. Additionally, we discuss the limitations of ChatGPT for multimedia forensic tasks.
Relation APSIPA Transactions on Signal and Information Processing, Vol.14, No.1
Type article
DOI https://doi.org/10.1561/116.20250004
URI https://ah.lib.nccu.edu.tw/item?item_id=181642