Academic Output - Proceedings

Title: Virtual Adversarial Active Learning
Authors: 游勤葑 (Yu, Chin-Feng); Pao, Hsing-Kuo
Contributor: 資碩計一
Keywords: Active Learning; Adversarial Examples; Virtual Adversarial Training; Adversarial Training
Date: 2020-12
Uploaded: 22-Sep-2021 13:33:32 (UTC+8)
Abstract: In traditional active learning, one of the best-known strategies is to select the most uncertain data for annotation. By doing so, we obtain as much as possible from the labeling oracle, so that once the informative labeled data are added, training in the next round is far more effective than in the current one. The strategy, however, may not be suitable now that deep learning has become one of the dominant modeling techniques. Deep learning is notorious for failing to remain effective in adversarial environments, and the sparsity of its training space often yields predictions with low confidence. Given that adversarial inputs can fool deep learners, we need an active learning strategy that copes with these difficulties. We propose a novel active learning strategy based on Virtual Adversarial Training (VAT) and the computation of local distributional roughness (LDR). Instead of selecting the data closest to the decision boundary, we select data located where the posterior probability surface is sufficiently rough. The proposed strategy, called Virtual Adversarial Active Learning (VAAL), helps us find data in rough regions of that surface and, through the active learning framework, reshape the model toward a smooth posterior distribution output. Moreover, we prefer to label data that gain enough confidence once they are annotated by the oracle. In VAAL, VAT serves not only as a regularization term but also as an effective way to actively choose valuable samples for labeling. Experimental results show that the proposed VAAL strategy guides convolutional network models to converge efficiently on several well-known datasets. (A minimal code sketch of the roughness-based scoring described here follows the record below.)
Relation: 2020 IEEE International Conference on Big Data (Big Data), Georgia Institute of Technology, USA; University of Illinois at Urbana-Champaign, USA, pp. 5323-5331
Type: conference
DOI: https://doi.org/10.1109/BigData50022.2020.9378021
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/137243
Format: application/pdf (1,196,449 bytes)
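The abstract above describes scoring unlabeled samples by how rough the model's posterior is in a small neighborhood of each point, measured with a virtual adversarial perturbation. The sketch below is only a minimal illustration of that idea, not the authors' implementation: it follows the standard one-step power-iteration recipe from Virtual Adversarial Training (Miyato et al.) in PyTorch; the names vat_roughness, model, and unlabeled_batch, and the hyperparameters xi and eps, are hypothetical, and the paper's exact LDR definition may differ.

import torch
import torch.nn.functional as F

def _l2_normalize(d):
    # Flatten each sample, normalize to unit L2 norm, restore the original shape.
    flat = d.flatten(1)
    norm = flat.norm(dim=1, keepdim=True) + 1e-12
    return (flat / norm).view_as(d)

def vat_roughness(model, x, xi=1e-6, eps=2.0):
    # Score each sample by KL(p(y|x) || p(y|x + r_adv)), where r_adv is a
    # one-step virtual adversarial perturbation; a large score suggests the
    # posterior surface is locally rough around x.
    model.eval()
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)               # reference posterior p(y|x)

    d = _l2_normalize(torch.randn_like(x)).requires_grad_(True)
    log_q = F.log_softmax(model(x + xi * d), dim=1)  # prediction under a tiny probe
    kl = F.kl_div(log_q, p, reduction="batchmean")
    grad = torch.autograd.grad(kl, d)[0]             # direction of steepest KL increase
    r_adv = eps * _l2_normalize(grad.detach())       # virtual adversarial perturbation

    with torch.no_grad():
        log_q_adv = F.log_softmax(model(x + r_adv), dim=1)
        # Per-sample KL divergence between the clean and perturbed posteriors.
        scores = (p * (p.clamp_min(1e-12).log() - log_q_adv)).sum(dim=1)
    return scores

# Usage (assumed): score an unlabeled pool and send the roughest samples to the oracle.
# scores = vat_roughness(model, unlabeled_batch)
# query_indices = scores.topk(k=32).indices

In this sketch the same perturbation machinery could also supply the VAT regularization term mentioned in the abstract during training; here it is used only for query selection.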