Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/54364
DC Field | Value | Language
dc.contributor.advisor | 廖文宏; 陳聖智 | zh_TW
dc.contributor.advisor | Liao, Wen Hung; Chen, Sheng Chih | en_US
dc.contributor.author | 黃政明 | zh_TW
dc.contributor.author | Huang, Cheng Ming | en_US
dc.creator | 黃政明 | zh_TW
dc.creator | Huang, Cheng Ming | en_US
dc.date | 2011 | en_US
dc.date.accessioned | 2012-10-30T02:48:42Z | -
dc.date.available | 2012-10-30T02:48:42Z | -
dc.date.issued | 2012-10-30T02:48:42Z | -
dc.identifier | G0098462011 | en_US
dc.identifier.uri | http://nccur.lib.nccu.edu.tw/handle/140.119/54364 | -
dc.description | 碩士 (Master's degree) | zh_TW
dc.description | 國立政治大學 (National Chengchi University) | zh_TW
dc.description | 數位內容碩士學位學程 (Master's Program in Digital Content) | zh_TW
dc.description | 98462011 | zh_TW
dc.description | 100 | zh_TW
dc.description.abstract | 智慧型手機的用途已從語音溝通延伸轉變為多功能導向的生活工具。目前多數的智慧型手機均具備攝影鏡頭,而此模組更已被公認為基本的標準配備。使用者透過手機,可以輕易且自然地拍攝感興趣的物體、景色或文字等,並且建立屬於自己的影像資料庫。在眾多的手機軟體中,旅遊類的程式是其中一種常見整合內容與多項感測模組的應用實例。在行動平台上,設計一個影像辨識系統服務可以大幅地協助遊客們在旅途中去瞭解、認識知名的地標、建築物、或別具意義的物體與文字等。然而行動平台上的可用資源是有限的,因此想要在行動平台上開發有效率的影像辨識系統,是頗具挑戰性的任務;如何在準確率與計算成本之間取得最佳的平衡點,往往是行動平台上開發影像辨識技術的最重要課題。根據上述的目標,本研究擬於行動平台上設計、開發行動影像搜尋與智慧型文字辨識系統。具體而言,我們將在影像搜尋上整合兩個全域的特徵描述子,並針對印刷與手寫字體開發智慧型文字辨識系統。實驗結果顯示,在行動影像搜尋與文字辨識的效能測試部分,前三名的辨識率皆可達到80%。 | zh_TW
dc.description.abstract | The role of smart phones has extended from simple voice communication to multi-purpose applications. Smart phones equipped with miniaturized image-capturing modules are now considered standard. Users can easily take pictures of objects, scenes, or text of interest and build their own image databases. Travel apps are one example that takes advantage of the array of sensors on the device. A mobile image search engine can bring much convenience to tourists who want to retrieve information about specific landmarks, buildings, or other objects. However, devising an effective image recognition system for smart phones is a challenging task due to the complexity of image search and pattern recognition algorithms. Image recognition techniques that strike a balance between accuracy and efficiency need to be developed to cope with the limited resources of mobile platforms. Toward this goal, this thesis designs effective mobile visual search and intelligent character recognition systems for mobile platforms. Specifically, we propose two global feature descriptors for efficient image search, and we develop an intelligent character recognition engine that handles both printed and handwritten text. Experimental results show that top-3 accuracy reaches 80% in both the visual search and intelligent character recognition tasks. | en_US
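The abstract describes retrieval with global feature descriptors and reports top-3 accuracy. As a loosely illustrative sketch only (this is not the thesis's actual Weighted Gist or Average ENN descriptors; the grid-pooled gradient-energy descriptor below is an assumption for demonstration), a global descriptor with top-k nearest-neighbor matching might look like:

```python
import numpy as np

def global_descriptor(img, grid=4):
    """Coarse global descriptor: mean gradient magnitude pooled over a
    grid x grid spatial layout, L2-normalized. A stand-in for Gist-style
    holistic features, not the descriptors proposed in the thesis."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    h, w = mag.shape
    # Crop so both dimensions divide evenly into grid cells, then block-pool.
    mag = mag[: h - h % grid, : w - w % grid]
    cells = mag.reshape(grid, mag.shape[0] // grid, grid, mag.shape[1] // grid)
    desc = cells.mean(axis=(1, 3)).ravel()
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

def top_k(query, database, k=3):
    """Indices of the k database descriptors closest to the query (L2)."""
    dists = np.linalg.norm(database - query, axis=1)
    return np.argsort(dists)[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    imgs = rng.random((5, 32, 32))  # synthetic stand-in "photos"
    db = np.vstack([global_descriptor(im) for im in imgs])
    print(top_k(global_descriptor(imgs[2]), db))  # image 2 ranks first
```

A "top-3 hit" in this setting means the correct landmark appears among the three indices returned by `top_k`, which is the accuracy criterion the abstract reports.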
dc.description.tableofcontents (zh_TW):
1. Introduction  1
2. Related Work  6
2.1. Visual Search  6
2.1.1. Content-based Image Retrieval on PC  6
2.1.2. Mobile Image Search  7
2.1.3. Current Mobile Image Search Apps  9
2.2. Intelligent Character Recognition  12
2.2.1. Printed Character Recognition  12
2.2.2. Handwritten Character Recognition  13
2.2.3. Mobile Character Recognition and Apps  15
3. Case Studies  18
3.1. HuayuNavi  18
3.2. iConference  21
4. Proposed Methodology  25
4.1. System Flowchart  25
4.2. Image Descriptors  28
4.2.1. Weighted Gist Descriptor  29
4.2.2. Average ENN Descriptor  32
4.2.3. Information Fusion  33
4.3. Intelligent Character Recognition  34
4.3.1. Feature Extraction  34
4.3.2. Recognition  35
5. Performance Evaluation  38
5.1. Data Collection  38
5.1.1. Visual Search Dataset  38
5.1.1.1. Field Study  38
5.1.1.2. Taiwan Landmark Image Set  40
5.1.2. Intelligent Character Recognition Dataset  41
5.2. Experimental Results  43
5.2.1. Visual Search  43
5.2.1.1. Different Parameters in ENN  45
5.2.1.2. Comparison of Individual and Hybrid Approaches  47
5.2.2. Intelligent Character Recognition  49
5.2.3. Routing Implementation on Mobile Platform  49
6. Comparative Analysis  51
6.1. Benchmark Database: Oxford Buildings Dataset  51
6.2. Experiments on Oxford Buildings Dataset  52
6.3. Other Approaches on the Oxford Dataset  53
7. Conclusion and Future Work  55
8. References  57
dc.language.iso | en_US | -
dc.source.uri | http://thesis.lib.nccu.edu.tw/record/#G0098462011 | en_US
dc.subject | 行動裝置 | zh_TW
dc.subject | 地標辨識 | zh_TW
dc.subject | 智慧型文字辨識 | zh_TW
dc.subject | mobile devices | en_US
dc.subject | landmark photo matching | en_US
dc.subject | intelligent character recognition | en_US
dc.title | 以圖文辨識為基礎的旅遊路線規劃輔助工具 | zh_TW
dc.title | Tour Planning Using Landmark Photo Matching and Intelligent Character Recognition | en_US
dc.type | thesis | en
dc.relation.reference (zh_TW):
[1] F. Corp. (April 2012). FunTrip 旅遊手札 (FunTrip Travel Notes). Available: http://www.facebook.com/funtrip.tw
[2] Tourism Bureau, Ministry of Transportation and Communications, R.O.C. (Taiwan). Available: http://admin.taiwan.net.tw/indexc.asp
[3] Taipei City Government. (June 2011). Taipei-Free. Available: http://www.tpe-free.taipei.gov.tw/TPE/
[4] UDN. (2012/04/15). 一機在手 跟著「旅遊雲」玩遍全世界 (One phone in hand: tour the world with the "travel cloud"). Available: http://mag.udn.com/mag/digital/storypage.jsp?f_ART_ID=383884
[5] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, pp. 91-110, 2004.
[6] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," Computer Vision–ECCV 2006, pp. 404-417, 2006.
[7] L. Juan and O. Gwun, "A comparison of SIFT, PCA-SIFT and SURF," International Journal of Image Processing, vol. 3, pp. 143-152, 2009.
[8] V. Chandrasekhar, S. S. Tsai, G. Takacs, D. M. Chen, N. M. Cheung, Y. Reznik, R. Vedantham, R. Grzeszczuk, and B. Girod, "Low latency image retrieval with embedded compressed histogram of gradient descriptors."
[9] V. Chandrasekhar, D. M. Chen, A. Lin, G. Takacs, S. S. Tsai, N. M. Cheung, Y. Reznik, R. Grzeszczuk, and B. Girod, "Comparison of local feature descriptors for mobile visual search," 2010, pp. 3885-3888.
[10] Y. Cao, H. Zhang, Y. Gao, X. Xu, and J. Guo, "Matching image with multiple local features," 2010.
[11] D. Nister and H. Stewenius, "Scalable recognition with a vocabulary tree," 2006, pp. 2161-2168.
[12] S. S. Tsai, D. Chen, G. Takacs, V. Chandrasekhar, R. Vedantham, R. Grzeszczuk, and B. Girod, "Fast geometric re-ranking for image-based retrieval," 2010, pp. 1029-1032.
[13] S. S. Tsai, D. Chen, V. Chandrasekhar, G. Takacs, N. M. Cheung, R. Vedantham, R. Grzeszczuk, and B. Girod, "Mobile product recognition," 2010, pp. 1587-1590.
[14] S. S. Tsai, D. Chen, J. P. Singh, and B. Girod, "Rate-efficient, real-time CD cover recognition on a camera-phone," 2008, pp. 1023-1024.
[15] D. Chen, S. Tsai, C. H. Hsu, J. P. Singh, and B. Girod, "Mobile augmented reality for books on a shelf," 2011, pp. 1-6.
[16] S. S. Tsai, H. Chen, D. Chen, R. Vedantham, R. Grzeszczuk, and B. Girod, "Mobile visual search using image and text features."
[17] Google Inc. (2009). Google Goggles. Available: http://www.google.com/mobile/goggles/#text
[18] Amazon. (2011). Flow Powered by Amazon. Available: http://itunes.apple.com/us/app/flow-powered-by-amazon/id474664425?mt=8
[19] L. Earnest, "Machine reading of cursive script," in Proc. IFIP Congress, Amsterdam, 1963, pp. 462-466.
[20] R. Casey and G. Nagy, "Automatic recognition of machine printed Chinese characters," IEEE Transactions on Electronic Computers, 1966.
[21] J. Liu, "Real Time Chinese Handwriting Recognition," E.E., MIT, Cambridge, 1966.
[22] WorldCard. Worldictionary. Available: http://worldcard.penpowerinc.com/product.asp?sn=300
[23] Pleco Software Inc. Pleco. Available: http://www.pleco.com/
[24] F. Corp. (2010). HuayuNavi. Available: http://funwish.net/huayunavi/
[25] J. H. Kuo, C. M. Huang, W. H. Liao, and C. C. Huang, "HuayuNavi: a mobile Chinese learning application based on intelligent character recognition," Edutainment Technologies. Educational Games and Virtual Reality/Augmented Reality Applications, pp. 346-354, 2011.
[26] MIME Lab, National University of Singapore. iConference: social networking in a conference using mobile augmented reality technology. Available: http://www.mimelab.com/content/
[27] C. M. Huang, W. H. Liao, and S. C. Chen, "Mobile tour planning using landmark photo matching and intelligent character recognition," American Journal of Engineering and Technology Research, vol. 11, 2011.
[28] W. H. Liao, "A framework for attention-based personal photo manager," Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 2128-232, 2009.
[29] A. Oliva and A. Torralba, "Modeling the shape of the scene: A holistic representation of the spatial envelope," International Journal of Computer Vision, vol. 42, pp. 145-175, 2001.
[30] D. Parkhurst, K. Law, and E. Niebur, "Modeling the role of salience in the allocation of overt visual attention," Vision Research, vol. 42, pp. 107-123, 2002.
[31] J. Harel, C. Koch, and P. Perona, "Graph-based visual saliency," Advances in Neural Information Processing Systems, vol. 19, p. 545, 2007.
[32] J. Dong, A. Krzyżak, and C. Y. Suen, "An improved handwritten Chinese character recognition system using support vector machine," Pattern Recognition Letters, vol. 26, pp. 1849-1856, 2005.
[33] H. T. Lin, C. J. Lin, and R. C. Weng, "A note on Platt's probabilistic outputs for support vector machines," Machine Learning, vol. 68, pp. 267-276, 2007.
[34] D. M. Chen, G. Baatz, K. Koser, S. S. Tsai, R. Vedantham, T. Pylvanainen, K. Roimela, X. Chen, J. Bach, and M. Pollefeys, "City-scale landmark identification on mobile devices," 2011, pp. 737-744.
[35] O. Pele and M. Werman. (2010). The Quadratic-Chi Histogram Distance Family. Available: http://www.seas.upenn.edu/~ofirpele/QC/
[36] M. Klinkigt and K. Kise, "Local configuration of SIFT-like features by a shape context," 2010, pp. 11-15.
[37] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman, "Object retrieval with large vocabularies and fast spatial matching," 2007, pp. 1-8.
[38] H. Jégou, M. Douze, and C. Schmid, "Improving bag-of-features for large scale image search," International Journal of Computer Vision, vol. 87, pp. 316-336, 2010.
item.openairecristype | http://purl.org/coar/resource_type/c_46ec | -
item.fulltext | With Fulltext | -
item.openairetype | thesis | -
item.cerifentitytype | Publications | -
item.grantfulltext | restricted | -
item.languageiso639-1 | en_US | -
Appears in Collections: 學位論文 (Theses)
Files in This Item:
File | Size | Format
201101.pdf | 7.68 MB | Adobe PDF | View/Open
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.