dc.contributor | Department of Computer Science | |
dc.creator (Author) | 廖文宏 (Liao, Wen-Hung) | |
dc.creator (Author) | Liao, Wen-Hung;Khan, Sarwar;Chen, Jun-Cheng;Chen, Chu-Song | |
dc.date (Date) | 2024-01 | |
dc.date.accessioned | 7-Jan-2025 09:36:39 (UTC+8) | - |
dc.date.available | 7-Jan-2025 09:36:39 (UTC+8) | - |
dc.date.issued (Upload time) | 7-Jan-2025 09:36:39 (UTC+8) | - |
dc.identifier.uri (URI) | https://nccur.lib.nccu.edu.tw/handle/140.119/155076 | - |
dc.description.abstract (Abstract) | Deepfake technology has raised concerns about the authenticity of digital content, necessitating the development of effective detection methods. However, the widespread availability of deepfakes has given rise to a new challenge in the form of adversarial attacks: adversaries can manipulate deepfake videos with small, imperceptible perturbations that deceive detection models into producing incorrect outputs. To tackle this critical issue, we introduce Adversarial Feature Similarity Learning (AFSL), which integrates three fundamental deep feature learning paradigms. First, by optimizing the similarity between samples and weight vectors, our approach distinguishes between real and fake instances. Second, we maximize the similarity between adversarially perturbed examples and their unperturbed counterparts, regardless of whether they are real or fake. Finally, we introduce a regularization technique that maximizes the dissimilarity between real and fake samples, ensuring a clear separation between the two categories. In extensive experiments on popular deepfake datasets, including FaceForensics++, FaceShifter, and DeeperForensics, the proposed method significantly outperforms standard adversarial-training-based defense methods, demonstrating the effectiveness of our approach in protecting deepfake detectors from adversarial attacks. | |
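To make the three loss components described in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: it assumes cosine similarity as the feature-similarity measure, and the function, parameter, and weight names (afsl_loss, class_weights, lambda_adv, lambda_sep) are hypothetical. In practice, feat_adv would be produced by an adversarial attack on the detector (e.g., a PGD-style perturbation of the input video frames).

```python
import torch
import torch.nn.functional as F

def afsl_loss(feat_clean, feat_adv, labels, class_weights,
              lambda_adv=1.0, lambda_sep=1.0):
    """Illustrative AFSL-style objective with three terms:
    (1) similarity between sample features and real/fake weight vectors,
    (2) similarity between adversarially perturbed and clean features,
    (3) dissimilarity between real and fake features (regularizer).
    Term weights and names are assumptions, not taken from the paper.
    feat_clean, feat_adv: (B, D) features; labels: (B,) with 0=real, 1=fake;
    class_weights: (2, D) learnable weight vectors.
    """
    # (1) Classification term: cosine-similarity logits against the
    #     class weight vectors, trained with cross-entropy.
    logits = F.normalize(feat_clean, dim=1) @ F.normalize(class_weights, dim=1).t()
    cls_loss = F.cross_entropy(logits, labels)

    # (2) Robustness term: pull each adversarially perturbed feature
    #     toward its unperturbed counterpart (maximize cosine similarity).
    adv_sim = F.cosine_similarity(feat_adv, feat_clean, dim=1)
    adv_loss = (1.0 - adv_sim).mean()

    # (3) Separation regularizer: push the mean real and mean fake
    #     features apart (assumes the batch contains both classes).
    real_mean = feat_clean[labels == 0].mean(dim=0)
    fake_mean = feat_clean[labels == 1].mean(dim=0)
    sep_loss = F.cosine_similarity(real_mean, fake_mean, dim=0).clamp(min=0.0)

    return cls_loss + lambda_adv * adv_loss + lambda_sep * sep_loss
```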
dc.format.extent | 108 bytes | - |
dc.format.mimetype | text/html | - |
dc.relation (Relation) | Proceedings of the 30th International Conference on Multimedia Modeling, pp. 503-516, University of Amsterdam | |
dc.subject (Keywords) | Adversarial attack; Adversarial training; Deepfake video detection; Forgery detector | |
dc.title (Title) | Adversarially Robust Deepfake Detection via Adversarial Feature Similarity Learning | |
dc.type (Type) | conference | |
dc.identifier.doi (DOI) | 10.1007/978-3-031-53311-2_37 | |
dc.doi.uri (DOI) | https://doi.org/10.1007/978-3-031-53311-2_37 | |