dc.contributor.advisor | 廖文宏 | zh_TW |
dc.contributor.advisor | Liao, Wen Hung | en_US |
dc.contributor.author (Authors) | 廖奕齊 | zh_TW |
dc.contributor.author (Authors) | Liao, I Chi | en_US |
dc.creator (Author) | 廖奕齊 | zh_TW |
dc.creator (Author) | Liao, I Chi | en_US |
dc.date (Date) | 2006 | en_US |
dc.date.accessioned | 17-Sep-2009 14:10:03 (UTC+8) | - |
dc.date.available | 17-Sep-2009 14:10:03 (UTC+8) | - |
dc.date.issued (Upload time) | 17-Sep-2009 14:10:03 (UTC+8) | - |
dc.identifier (Other Identifiers) | G0947530342 | en_US |
dc.identifier.uri (URI) | https://nccur.lib.nccu.edu.tw/handle/140.119/32736 | - |
dc.description (Description) | Master's | zh_TW |
dc.description (Description) | National Chengchi University | zh_TW |
dc.description (Description) | Department of Computer Science | zh_TW |
dc.description (Description) | 94753034 | zh_TW |
dc.description (Description) | 95 | zh_TW |
dc.description.abstract (Abstract) | As the abuse of automated programs grows increasingly widespread, the ability to distinguish humans from machines becomes ever more important. However, the textual-image-based CAPTCHAs now in wide use have already been broken. In this thesis we propose a simple yet effective human-machine discrimination mechanism based on exchanging non-overlapping blocks of an image: in a few simple steps it generates test images that humans can pass easily but that automated programs find difficult to analyze. We also test the robustness of this mechanism from several angles, analyze in detail the choice of parameters and the image database it uses, and finally design eye-tracker experiments to compare the gaze paths associated with different test types. | zh_TW |
dc.description.abstract (Abstract) | The need to tell humans and machines apart has surged due to the abuse of automated 'bots'. However, several textual-image-based CAPTCHAs have been defeated recently. In this thesis, we propose a simple yet effective visual CAPTCHA test that exchanges the content of non-overlapping regions in an image. In a few simple steps, the algorithm produces a discrimination mechanism that is difficult for machines to analyze but easy for humans to pass. We have tested the robustness of the proposed method by exploring different ways to attack this CAPTCHA and the corresponding counter-attack measures. Additionally, we have carried out an in-depth analysis of the choice of parameters and of the image database. Finally, eye-tracking experiments have been conducted to examine and compare the gaze paths for different visual tasks. | en_US |
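The block-exchange scheme described in the abstract can be sketched in a few lines. The following is a minimal illustration only, not the thesis implementation; the function names, the nested-list pixel representation, and the fixed square block size are assumptions made for the example:

```python
import random

def swap_blocks(image, r1, c1, r2, c2, size):
    """Swap two size x size blocks of `image` in place.
    `image` is a list of rows (lists) of pixel values."""
    # Blocks are non-overlapping when they are separated by at
    # least `size` rows or at least `size` columns.
    assert abs(r1 - r2) >= size or abs(c1 - c2) >= size, "blocks overlap"
    for dr in range(size):
        for dc in range(size):
            image[r1 + dr][c1 + dc], image[r2 + dr][c2 + dc] = \
                image[r2 + dr][c2 + dc], image[r1 + dr][c1 + dc]

def random_block_exchange(image, size, rng=None):
    """Pick two random non-overlapping blocks, exchange them, and
    return their top-left corners so the test can be graded."""
    rng = rng or random.Random()
    h, w = len(image), len(image[0])
    while True:
        r1, c1 = rng.randrange(h - size + 1), rng.randrange(w - size + 1)
        r2, c2 = rng.randrange(h - size + 1), rng.randrange(w - size + 1)
        if abs(r1 - r2) >= size or abs(c1 - c2) >= size:
            swap_blocks(image, r1, c1, r2, c2, size)
            return (r1, c1), (r2, c2)

# Example: swap the top-left and centre 2x2 blocks of an 8x8 gradient image.
img = [[r * 8 + c for c in range(8)] for r in range(8)]
swap_blocks(img, 0, 0, 4, 4, 2)
print(img[0][0], img[4][4])  # → 36 0 (the corner pixels traded places)
```

Grading then amounts to checking whether the user identifies the exchanged regions; the thesis additionally constrains the separation distance and dissimilarity of the exchanged regions, which this sketch omits.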
dc.description.tableofcontents | Chapter 1 Introduction
1.1 Research background and motivation
1.2 Related work
1.3 About this thesis
Chapter 2 A CAPTCHA mechanism based on image content
2.1 Test generation
2.1.1 Selection of the exchanged regions
2.1.2 Choice of the image database
2.2 Attack methods
2.2.1 Random guess
2.2.2 Collect and match
2.2.3 Image segmentation
2.2.4 Saliency Toolbox
Chapter 3 Parameter settings
3.1 Experiment content
3.1.1 Materials: face images
3.1.2 Scenery and texture images
3.1.3 Experimental design
3.2 Experimental results
3.2.1 Face images
3.2.1.1 Relation between separation distance and time
3.2.1.2 Relation between dissimilarity and time
3.2.2 Scenery and texture images
3.2.2.1 Relation between separation distance and time
3.2.2.2 Relation between dissimilarity and time
3.2.3 Results of the image-segmentation attack
3.2.4 Saliency Toolbox results
3.3 Summary
Chapter 4 Eye-tracker experiments
4.1 Purpose
4.2 Method
4.2.1 Materials
4.2.2 Procedure
4.2.3 Results and discussion
Chapter 5 Conclusion
References | zh_TW |
dc.format.extent | 90834 bytes | - |
dc.format.extent | 88413 bytes | - |
dc.format.extent | 108784 bytes | - |
dc.format.extent | 135887 bytes | - |
dc.format.extent | 446812 bytes | - |
dc.format.extent | 958401 bytes | - |
dc.format.extent | 2932304 bytes | - |
dc.format.extent | 1685275 bytes | - |
dc.format.extent | 87290 bytes | - |
dc.format.extent | 111874 bytes | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.language.iso | en_US | - |
dc.source.uri (Source) | http://thesis.lib.nccu.edu.tw/record/#G0947530342 | en_US |
dc.subject (Keywords) | Human-machine discrimination | zh_TW |
dc.subject (Keywords) | CAPTCHA | en_US |
dc.title (Title) | An Automated CAPTCHA Mechanism Based on Image Content | zh_TW |
dc.title (Title) | A CAPTCHA Mechanism by Exchanging Image Blocks | en_US |
dc.type (Type) | thesis | en |
dc.relation.reference (References) | 【1】 von Ahn, L., Blum, M., Hopper, N. J. and Langford, J., "CAPTCHA: Telling Humans and Computers Apart (Automatically)", Advances in Cryptology, Eurocrypt '03, Vol. 2656 of Lecture Notes in Computer Science, pp. 294-311, 2003. | zh_TW |
dc.relation.reference (References) | 【2】 Turing, A. M., "Computing Machinery and Intelligence", Mind, Vol. 59, pp. 433-460, 1950. | zh_TW |
dc.relation.reference (References) | 【3】 Liao, W. H. and Chang, C. C., "Embedding Information within Dynamic Visual Patterns", Proceedings of the IEEE International Conference on Multimedia and Expo, May 2004. | zh_TW |
dc.relation.reference (References) | 【4】 張繼志, "An Automated CAPTCHA Mechanism Based on Texture Images" (植基於質感圖像之自動化人機區分機制), Master's thesis, Department of Computer Science, National Chengchi University, 2005. | zh_TW |
dc.relation.reference (References) | 【5】 Rusu, A. and Govindaraju, V., "Handwritten CAPTCHA: Using the Difference in the Abilities of Humans and Machines in Reading Handwritten Words", Proceedings of the 9th International Workshop on Frontiers in Handwriting Recognition, Sept. 2004. | zh_TW |
dc.relation.reference (References) | 【6】 Misra, D. and Gaj, K., "Face Recognition CAPTCHAs", Proceedings of the Advanced International Conference on Telecommunications and International Conference on Internet and Web Applications and Services, 2006. | zh_TW |
dc.relation.reference (References) | 【7】 Chakrabarti, S. and Singhal, M., "Password-Based Authentication: Preventing Dictionary Attacks", IEEE Computer, June 2007. | zh_TW |
dc.relation.reference (References) | 【8】 http://research.microsoft.com/asirra/ | zh_TW |
dc.relation.reference (References) | 【9】 http://www.w3.org/TR/turingtest/ | zh_TW |
dc.relation.reference (References) | 【10】 Moscovitch, M., Winocur, G. and Behrmann, M., "What Is Special about Face Recognition? Nineteen Experiments on a Person with Visual Object Agnosia and Dyslexia but Normal Face Recognition", Journal of Cognitive Neuroscience, Vol. 9, pp. 555-604, 1997. | zh_TW |
dc.relation.reference (References) | 【11】 Martin, D., Fowlkes, C. and Malik, J., "Learning to Detect Natural Image Boundaries Using Local Brightness, Color and Texture Cues", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, No. 5, pp. 530-549, 2004. | zh_TW |
dc.relation.reference (References) | 【12】 http://www.seas.upenn.edu/~timothee/ | zh_TW |
dc.relation.reference (References) | 【13】 Walther, D. and Koch, C., "Modeling Attention to Salient Proto-objects", Neural Networks, Vol. 19, pp. 1395-1407, 2006 (http://www.saliencytoolbox.net). | zh_TW |
dc.relation.reference (References) | 【14】 Nordstrøm, M. M., Larsen, M., Sierakowski, J. and Stegmann, M. B., "The IMM Face Database - An Annotated Dataset of 240 Face Images", Informatics and Mathematical Modelling, Technical University of Denmark, May 2004. | zh_TW |