Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/137243
dc.contributor: 資碩計一
dc.creator: 游勤葑
dc.creator: Yu, Chin-Feng
dc.creator: Pao, Hsing-Kuo
dc.date: 2020-12
dc.date.accessioned: 2021-09-22T05:33:32Z
dc.date.available: 2021-09-22T05:33:32Z
dc.date.issued: 2021-09-22T05:33:32Z
dc.identifier.uri: http://nccur.lib.nccu.edu.tw/handle/140.119/137243
dc.description.abstract: In traditional active learning, one of the best-known strategies is to select the most uncertain data for annotation. In this way we extract as much as possible from the labeling oracle, so that once the informative labeled data are added to the training set, the next training round is much more effective than the current one. This strategy, however, may not be suitable now that deep learning has become one of the dominant modeling techniques. Deep learning is notorious for losing effectiveness in adversarial environments, and the sparsity of its training space often yields low-confidence predictions. With adversarial inputs crafted to fool deep learners, we need an active learning strategy that can cope with these difficulties. We propose a novel active learning strategy based on Virtual Adversarial Training (VAT) and the computation of local distributional roughness (LDR). Instead of selecting the data closest to the decision boundary, we select the data located where the posterior-probability surface is sufficiently rough. The proposed strategy, called Virtual Adversarial Active Learning (VAAL), finds data lying on rough regions of that surface and, through the active learning framework, reshapes the model toward a smooth posterior output. Moreover, we prefer to label data whose annotations from the oracle carry enough confidence. In VAAL, VAT serves not only as a regularization term but also as the mechanism for effectively and actively choosing valuable samples for labeling. Experimental results show that the proposed VAAL strategy guides a convolutional network model to converge efficiently on several well-known datasets.
dc.format.extent: 1196449 bytes
dc.format.mimetype: application/pdf
dc.relation: 2020 IEEE International Conference on Big Data (Big Data), Georgia Institute of Technology, USA; Univ of Illinois at Urbana-Champaign, USA, pp. 5323-5331
dc.subject: Active Learning; Adversarial Examples; Virtual Adversarial Training; Adversarial Training
dc.title: Virtual Adversarial Active Learning
dc.type: conference
dc.identifier.doi: 10.1109/BigData50022.2020.9378021
dc.doi.uri: https://doi.org/10.1109/BigData50022.2020.9378021
item.grantfulltext: restricted
item.cerifentitytype: Publications
item.openairetype: conference
item.openairecristype: http://purl.org/coar/resource_type/c_18cf
item.fulltext: With Fulltext
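
Note on the selection step described in the abstract: VAAL scores unlabeled points by how rough the posterior surface is around them, where the roughness is probed with a VAT-style virtual adversarial perturbation, and sends the roughest points to the oracle. The following is a minimal, illustrative PyTorch sketch of that idea only, not the paper's implementation; the function names (vat_roughness, select_for_labeling), the hyperparameters xi, eps, power_iters, the labeling budget, and a loader yielding (index, image) batches are all assumptions made here for the example.

import torch
import torch.nn.functional as F

def _l2_normalize(d):
    # Normalize each per-sample perturbation to unit L2 norm.
    d_flat = d.view(d.size(0), -1)
    norm = d_flat.norm(dim=1, keepdim=True) + 1e-8
    return (d_flat / norm).view_as(d)

@torch.enable_grad()
def vat_roughness(model, x, xi=1e-6, eps=2.0, power_iters=1):
    # Approximate the local roughness of p(y|x): the KL divergence between
    # the prediction at x and at x + r_adv, where r_adv is the virtual
    # adversarial direction found by power iteration (as in VAT).
    model.eval()
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)          # reference prediction p(y|x)

    d = _l2_normalize(torch.randn_like(x))      # random initial direction
    for _ in range(power_iters):
        d.requires_grad_(True)
        p_hat = F.log_softmax(model(x + xi * d), dim=1)
        adv_dist = F.kl_div(p_hat, p, reduction='batchmean')
        grad, = torch.autograd.grad(adv_dist, d)
        d = _l2_normalize(grad.detach())

    with torch.no_grad():
        p_adv = F.log_softmax(model(x + eps * d), dim=1)
        # Per-sample KL(p(y|x) || p(y|x + r_adv)) used as the roughness score.
        ldr = (p * (p.clamp_min(1e-8).log() - p_adv)).sum(dim=1)
    return ldr

def select_for_labeling(model, unlabeled_loader, budget=100):
    # Score every unlabeled sample and return the indices of the `budget`
    # samples whose local posterior surface is roughest.
    scores, indices = [], []
    for idx, x in unlabeled_loader:             # assumed to yield (index, image) pairs
        scores.append(vat_roughness(model, x))
        indices.append(idx)
    scores = torch.cat(scores)
    indices = torch.cat(indices)
    top = scores.topk(min(budget, scores.numel())).indices
    return indices[top]

In such a sketch, the returned indices would be sent to the oracle for annotation and the model retrained on the enlarged labeled set; the paper additionally uses the VAT term as a regularizer during training, which this fragment does not show.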
Appears in Collections: Conference Papers (會議論文)
Files in This Item:
24.pdf, 1.17 MB, Adobe PDF
