Title Design and Implementation of a Metadata Annotation Tool for Images (照片詮釋資料標記工具之設計與製作)
Author Chen, Yu Yu (陳昱宇)
Advisor Chen, Kung (陳恭)
Keywords Metadata (詮釋資料); Annotation (標記); Images (照片)
Date 2013
Uploaded 25-Aug-2014 15:22:22 (UTC+8)
Abstract (translated from the Chinese original) Digital photography has advanced rapidly; after taking photos, users rarely print them anymore, instead keeping them on computers or in online albums, so the old habit of collecting prints into physical albums has become obsolete. As time passes, however, the large volume of accumulated digital photos makes searching difficult. Adding annotations to photos is an obvious way to ease later searching, but today's photo annotation tools focus narrowly on face recognition and lack support for annotating a photo's overall content, such as the people, events, times, places, and objects involved.
This thesis implements a digital photo annotation tool that lets users attach various kinds of metadata to photos, using that metadata both as a basis for managing photos effectively and as a means of searching them. The implemented tool divides annotations into two categories: photo background and photo content. A background annotation describes the photo as a whole, such as when and where it was taken; content annotations describe elements within the photo, so multiple tags can be attached to the people, events, and objects it contains. Each annotation is assigned a type for classification, and at search time these types also serve as search conditions to improve precision. As implemented, the tool lets users add and delete photo metadata and uses that metadata to help them find the right photos.
As digital photos taken with cell phones and cameras have become ubiquitous, people nowadays seldom print photos; they keep them on their own computers instead. Yet as the volume of photos grows, finding a specific photo becomes difficult. Conceivably, one can add annotations to digital photos to facilitate searching. However, most photo annotation tools focus on identifying people; little support is provided for annotating events, places, and times.
This thesis presents a metadata annotation tool that enables users to add arbitrary annotations to digital photos to facilitate photo management and search. Users attach metadata to photos and then search for photos by that metadata. We classify annotations into two categories: background annotations and content annotations. A background annotation specifies information about the whole photo, such as its date and location. By contrast, a content annotation specifies information about the contents of a photo, and more than one content annotation can be associated with a photo; for example, we can attach one annotation to each person appearing in a photo. Every annotation includes a type field that classifies it into one of four categories: who, which, when, and where. These types help users manage photos and search for them. A few other facilities also make annotations easy to manage.
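The annotation model described in the abstract — background versus content annotations, a type field drawn from who/which/when/where, and type-filtered search — can be sketched as follows. This is a minimal illustration under assumed names, not the thesis's actual implementation; the `Annotation`, `Photo`, and `search` identifiers are invented for the example.

```python
from dataclasses import dataclass, field

# The four annotation types named in the abstract.
TYPES = {"who", "which", "when", "where"}

@dataclass
class Annotation:
    type: str   # one of TYPES
    value: str  # e.g. "Alice", "graduation ceremony"

    def __post_init__(self):
        if self.type not in TYPES:
            raise ValueError(f"unknown annotation type: {self.type}")

@dataclass
class Photo:
    name: str
    background: list = field(default_factory=list)  # about the whole photo
    content: list = field(default_factory=list)     # about elements in it

def search(photos, keyword, ann_type=None):
    """Return photos with an annotation matching the keyword.

    If ann_type is given, only annotations of that type are considered,
    mirroring the abstract's use of types as extra search conditions.
    """
    result = []
    for photo in photos:
        for ann in photo.background + photo.content:
            if ann_type is not None and ann.type != ann_type:
                continue
            if keyword.lower() in ann.value.lower():
                result.append(photo)
                break  # one match is enough for this photo
    return result
```

For instance, a photo with a background annotation ("where", "Taipei") and a content annotation ("who", "Alice") is found by `search(photos, "alice")`, but not by `search(photos, "taipei", ann_type="who")`, since the type filter excludes the location annotation.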
參考文獻 [1] Amazon Mechanical Turk: http://aws.amazon.com/cn/documentation/mturk/, Accessed on January 27, 2014.
[2]T,Gotz. and O,Suhre. Design and implementation of the UIMA Common Analysis System. in04 IBM SYSTEM JOURNAL, pp.476-489
[3]Apache UIMA:http://uima.apache.org/, Accessed on January 27, 2014.
[4] Lux,M. Caliph & Emir:MPEG-7 Photo Annotation and Retrieval, in 09 Proceedings of the 17th ACM international conference on Multimedia,pp925-926
[5]Sarvas,R., .User-centric Metadata for Mobile Photos, In Proc. of MobiSys
2004. ACM Press, New York, NY, 2004, 33-35.
[6] Kustanowitz,J. and Sheiderman,B . Motive Annotation for Personal Digital Photo
Libraries:Lowering Barriers While Raising Incentives , in Proceeding
CHI’07 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems


[7] OKFN-Annotator:http://annotatorjs.org/, Accessed on January 27, 2014.
[8] Catherine,C., and Palo,A. Annotation: from paper books to the digital library.In proc. ACM international conference on Digital libraries 1997,pp 131-140
[9] Daren C. Crowd sourcing as a Model for Problem Solving, in 08 Sage Publications The International Journal of Research into New Media Technologies, pp 75-89
[10] jQuery Image Annotation:http://flipbit.co.uk/jquery-image-annotation.html, Accessed on January 27, 2014.
[11] Manjunath, B.S., Salembier,P. and Sikora, T. Introduction to MPEG-7, Wiley 2002
[12]Wilhem, A., Takhteyev, Y., Sarvas, R. Van House, N., and David, M. Photo Annotation on a Camera Phone. In Proc. of CHI2004, ACM Press, 2004, pp1403-1406
[13]Munnelly, G., Hampson, C., Ferro, N.,and Conlan, O. The FAST-CAT: Empowering Cultural Heritage Annotations. In Proc. Digital Humanities 2013, University of Nebraska, Lincoln 2013, pp. 320-322.
[14] 林宸均,「網路使用者圖像標記行為初探-以Flickr圖像標籤為例」,國立
台東
大學教育學系(所)教學科技碩士班
[15] Sara,S. L. Some Issues in the Indexing of Images. Journal of the American Society for Information Science 45, no. 8 (1994): 583-88 .
[16] Matthew,P. Gaps in Keywords: A study into the ‘semantic gap’ between images
and keywords in users of the Witt Library, Courtauld Institute of Art, in partial
fulfilment of the requirements for the degree of MSc in Information Science,2007 PP.17-23
[17] Diamond, R. M. The development of a retrieval system for 35mm slides
utilized in art and humanities instruction: Final report. ED031 925.
[18] V. Gudivada, V.V. Raghavan, Content-based image retrieval systems, IEEE
Comput. 28 (9) (1995) 18–22.
Description Master's thesis
National Chengchi University
Department of Computer Science
100753033
Academic year 102
Source http://thesis.lib.nccu.edu.tw/record/#G1007530331
Type thesis
URI http://nccur.lib.nccu.edu.tw/handle/140.119/69231
Table of Contents Introduction
1.1 Preface
1.2 Research background
1.2.1 Crowdsourcing
1.2.2 Text annotation
1.2.3 Image content
1.3 Motivation and objectives
1.4 Research results
1.5 Thesis organization
Overview and Review of Related Work
2.1 UIMA (Unstructured Information Management Architecture)
2.2 Facebook and Picasa photo tagging
2.3 Caliph & Emir
2.4 Mobile Media Metadata system
2.5 OKFN (Open Knowledge Foundation) Annotator
2.6 Flickr
Research Method and Architecture
3.1 System architecture
3.2 System design concepts
3.3 System design components
3.3.1 Album editing
3.3.2 Photo annotation
3.3.3 Annotation types
3.3.4 Permission control
3.3.5 Input
3.3.6 Output
3.3.6.1 Overview page
3.3.6.2 Detailed annotation page
3.3.7 Date picker
System Implementation and Evaluation
4.1 Usage scenarios
4.2 System implementation
4.2.1 Design and implementation of photo upload
4.2.2 Photo metadata types
4.2.3 Metadata format
4.2.4 Storage and transfer of photo metadata
4.2.5 Permission implementation
4.2.6 Presentation of search results
4.3 Evaluation
Conclusion and Future Work
5.1 Conclusion
5.2 Future work
References
Appendix
Format application/pdf (1170791 bytes)
Language en_US