政大學術集成 (NCCU Academic Hub)


Please use this identifier to cite or link to this item: https://ah.nccu.edu.tw/handle/140.119/137243


Title: Virtual Adversarial Active Learning
Authors: 游勤葑
Yu, Chin-Feng
Pao, Hsing-Kuo
Contributors: 資碩計一
Keywords: Active Learning;Adversarial Examples;Virtual Adversarial Training;Adversarial Training
Date: 2020-12
Issue Date: 2021-09-22 13:33:32 (UTC+8)
Abstract: In traditional active learning, one of the most well-known strategies is to select the most uncertain data for annotation. By doing so, we acquire as much as we can from the labeling oracle, so that once the informative labeled data are added, training in the next run is much more effective than in the current one. This strategy, however, may not be suitable now that deep learning has become one of the dominant modeling techniques. Deep learning is notorious for failing to remain effective in an adversarial environment, and we often see sparsity in the deep learning training space, which yields results with low confidence. Moreover, since adversarial inputs can fool deep learners, we need an active learning strategy that can cope with these difficulties. We propose a novel active learning strategy based on Virtual Adversarial Training (VAT) and the computation of local distributional roughness (LDR). Instead of selecting the data closest to the decision boundaries, we select the data located where the posterior probability surface is sufficiently rough. The proposed strategy, called Virtual Adversarial Active Learning (VAAL), helps us find the data with a rough posterior surface and, thanks to the active learning framework, reshape the model toward a smooth posterior distribution output. Moreover, we prefer labeled data that carry enough confidence once they are annotated by an oracle. In VAAL, VAT serves not only as a regularization term but also as a means to effectively and actively choose valuable samples for labeling. Experimental results show that the proposed VAAL strategy guides a convolutional network model to converge efficiently on several well-known datasets.
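
The VAT-based selection step described in the abstract can be illustrated with a short sketch. This is a minimal, hypothetical Python/PyTorch example, not the authors' released code: it assumes a classifier `model` that returns logits, and the function names (`local_distributional_roughness`, `select_for_labeling`), the single power-iteration step, and the hyper-parameters `xi`, `eps`, and `budget` are illustrative assumptions rather than the paper's exact settings.

    # Illustrative sketch of a VAT-style acquisition step (assumptions noted above).
    import torch
    import torch.nn.functional as F

    def local_distributional_roughness(model, x, xi=1e-6, eps=1.0):
        """Approximate LDR(x): KL(p(y|x) || p(y|x + r_vadv)), with r_vadv found by
        one power-iteration step as in Virtual Adversarial Training."""
        model.eval()
        with torch.no_grad():
            p = F.softmax(model(x), dim=1)                     # reference posterior p(y|x)

        d = torch.randn_like(x)                                # random perturbation direction
        d = xi * d / d.flatten(1).norm(dim=1).view(-1, *[1] * (x.dim() - 1))
        d.requires_grad_(True)

        p_hat = F.log_softmax(model(x + d), dim=1)
        kl = F.kl_div(p_hat, p, reduction="batchmean")
        grad = torch.autograd.grad(kl, d)[0]                   # direction of steepest roughness

        with torch.no_grad():
            denom = grad.flatten(1).norm(dim=1).view(-1, *[1] * (x.dim() - 1)).clamp_min(1e-12)
            r_vadv = eps * grad / denom                        # virtual adversarial perturbation
            p_adv = F.log_softmax(model(x + r_vadv), dim=1)
            # per-sample KL: a high value means a rough posterior surface around x
            return (p * (p.clamp_min(1e-12).log() - p_adv)).sum(dim=1)

    def select_for_labeling(model, unlabeled_loader, budget):
        """Rank the unlabeled pool by roughness and return the indices to annotate."""
        scores, index = [], []
        for idx, x in unlabeled_loader:                        # assumed to yield (index, batch) pairs
            scores.append(local_distributional_roughness(model, x))
            index.append(idx)
        scores, index = torch.cat(scores), torch.cat(index)
        top = scores.topk(budget).indices
        return index[top]

The selected samples would then be sent to the oracle for annotation, and the same VAT perturbation can double as the regularization term during the subsequent training round, as the abstract notes.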
Relation: 2020 IEEE International Conference on Big Data (Big Data), Georgia Institute of Technology, USA / Univ. of Illinois at Urbana-Champaign, USA, pp. 5323-5331
Data Type: conference
DOI link: https://doi.org/10.1109/BigData50022.2020.9378021
Appears in Collections: [Executive Master Program of Computer Science of NCCU] Proceedings

Files in This Item:

File: 24.pdf (1168 KB, Adobe PDF)


All items in 學術集成 are protected by copyright, with all rights reserved.

