Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/95268
DC Field [Language]: Value
dc.contributor.advisor [zh_TW]: 沈錳坤
dc.contributor.advisor [en_US]: Shan, Man-Kwan
dc.contributor.author [zh_TW]: 黃詰仁
dc.contributor.author [en_US]: Huang, Chieh-Jen
dc.creator [zh_TW]: 黃詰仁
dc.creator [en_US]: Huang, Chieh-Jen
dc.date [en_US]: 2009
dc.date.accessioned: 2016-05-09T07:29:10Z
dc.date.available: 2016-05-09T07:29:10Z
dc.date.issued: 2016-05-09T07:29:10Z
dc.identifier [en_US]: G0096753011
dc.identifier.uri: http://nccur.lib.nccu.edu.tw/handle/140.119/95268
dc.description [zh_TW]: 碩士 (Master's)
dc.description [zh_TW]: 國立政治大學 (National Chengchi University)
dc.description [zh_TW]: 資訊科學學系 (Department of Computer Science)
dc.description [zh_TW]: 96753011
dc.description.abstract [zh_TW]: Images appear everywhere in today's world, in newspapers and magazines, on websites, and in children's picture books, where they deepen readers' impressions of the text. To most readers, such images are often more appealing than the words around them; in particular, presenting the text of a fairy tale visually makes it far more likely to capture children's attention.
This thesis therefore studies visualization techniques that transform the text of a fairy tale into images. Drawing on characteristics of fairy tales such as their narrative structure and characters, we divide a tale into segments according to its plot, extract keywords that represent the theme of each segment and of the whole story, and use Web image search engines to obtain an initial image set. Finally, a suitable image is selected for each segment to achieve the visualization. Experimental results show that the proposed technique reconstructs the narrative structure of fairy tales with an accuracy of about 70%.
dc.description.abstract [en_US]: Stories are often accompanied by images that reinforce their effect. In particular, most fairy tales written for children are illustrated with images that attract children's interest.
This thesis focuses on story visualization technology that transforms the text of a fairy tale into a series of visual images. The proposed visualization technology is built on the narrative structure of fairy tales. First, the input fairy tale is divided into segments according to the plot of the story. Then, global keywords for the whole story and segment keywords for each segment are extracted. In addition, expanded keywords that are important but infrequent in each segment are discovered. These three types of keywords are fed into a Web image search engine to find an initial image set. Finally, the proposed system filters irrelevant images out of the initial image set and selects a representative image for each segment. Experiments show that the proposed method achieves 70% accuracy in reconstructing the narrative structure of fairy tales.
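The abstract above describes a multi-stage pipeline (story segmentation, keyword extraction, Web image search, and representative-image selection). The sketch below shows one minimal way such a pipeline could be wired together; all function names (segment_story, extract_keywords, visualize_story), the frequency-based keyword heuristic, and the blank-line segmentation are illustrative assumptions, not the implementation described in the thesis.

```python
# Illustrative sketch of a VizStory-style pipeline, under the assumptions stated above.
from collections import Counter
from dataclasses import dataclass


@dataclass
class SegmentVisualization:
    segment: str      # text of one plot segment
    keywords: list    # segment keywords plus global keywords used as the query
    image_url: str    # representative image chosen for the segment


def segment_story(story: str) -> list[str]:
    """Split the tale into plot segments (stand-in for a TextTiling-like method)."""
    return [block.strip() for block in story.split("\n\n") if block.strip()]


def extract_keywords(text: str, top_k: int = 5) -> list[str]:
    """Pick the most frequent non-trivial words (stand-in for a TF-IDF-style scorer)."""
    words = [w.lower().strip(".,!?\"'") for w in text.split()]
    counts = Counter(w for w in words if len(w) > 3)
    return [w for w, _ in counts.most_common(top_k)]


def visualize_story(story, search_images, select_representative):
    """search_images(query) -> candidate URLs; select_representative(urls, keywords) -> one URL."""
    global_keywords = extract_keywords(story)
    results = []
    for segment in segment_story(story):
        query = extract_keywords(segment) + global_keywords  # expanded keywords would be added here
        candidates = search_images(" ".join(query))          # initial image set from a Web search engine
        best = select_representative(candidates, query)      # filter irrelevant images, keep one
        results.append(SegmentVisualization(segment, query, best))
    return results
```

Passing search_images and select_representative in as callables keeps the sketch independent of any particular Web image search API or relevance-filtering method.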
dc.description.tableofcontents [zh_TW]:
Abstract i
Table of Contents v
List of Figures vii
List of Tables ix
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Thesis Organization 3
Chapter 2 Related Work 4
2.1 Research on Fairy Tales 4
2.2 Related Work 9
Chapter 3 Research Methods and Procedures 14
3.1 Story Segmentation 14
3.2 Keyword Extraction 18
3.2.1 Global Keyword Extraction 19
3.2.2 Segment Keyword Extraction 22
3.2.3 Expanded Keyword Extraction 23
3.3 Keyword-Based Image Search 29
3.3.1 Query Formation 29
3.3.2 Representative Images 35
3.4 Visualization 43
Chapter 4 Experimental Results and Evaluation 45
4.1 Experimental Design 45
4.2 Experimental Results 48
Chapter 5 Conclusion and Future Work 53
References 54
Appendices 58
Appendix 1: Text of "Little Snow-white" 58
Appendix 2: Text of "The Wise Little Girl" 66
dc.source.uri [en_US]: http://thesis.lib.nccu.edu.tw/record/#G0096753011
dc.subject [zh_TW]: 童話 (fairy tales)
dc.subject [zh_TW]: 視覺化 (visualization)
dc.subject [zh_TW]: 數位敘事 (digital narrative)
dc.subject [zh_TW]: 影像檢索 (image retrieval)
dc.title [zh_TW]: VizStory:視覺化數位童話故事
dc.title [en_US]: VizStory: Visualization of Digital Narrative for Fairy Tales
dc.type [en_US]: thesis
dc.relation.reference [zh_TW]:
[1] 林文寶、徐守濤、蔡尚志與陳正治,兒童文學,國立空中大學,1993。
[2] 林逸君與徐雅惠,傳統童話圖畫書與顛覆性童話圖畫書表現手法之比較研究-以「三隻小豬」為例,國立台中教育大學碩士論文,2003。
[3] 林守為,兒童文學,五南圖書出版公司,1988。
[4] 陳正治,童話寫作研究,五南圖書出版公司,1990。
[5] 傅林統,兒童文學的思想與技巧,富春文化事業股份有限公司,1990。
[6] 蔡尚志,童話創作的原理與技巧,五南圖書出版公司,1996。
[7] A. Z. Broder, D. Carmel, M. Herscovici, A. Soffer, and J. Zien, “Efficient Query Evaluation using a Two-level Retrieval Process,” Proc. of ACM International Conference on Information and Knowledge Management CIKM, 2003.
[8] R. Cilibrasi and P. M. B. Vitanyi, “The Google Similarity Distance,” IEEE Transactions on Knowledge and Data Engineering, Vol. 19, No. 3, 2007.
[9] B. Coyne and R. Sproat, “WordsEye: An Automatic Text-to-Scene Conversion System,” Proc. of ACM International Conference on Computer Graphics and Interactive Techniques SIGGRAPH, 2001.
[10] R. Datta, D. Joshi, J. Li, and J. Z. Wang, “Image Retrieval: Ideas, Influences, and Trends of the New Age,” ACM Computing Surveys, Vol. 40, No. 1, 2008.
[11] R. Datta, J. Li, and J. Z. Wang, “Content-based Image Retrieval: Approaches and Trends of the New Age,” Proc. of the ACM International Workshop on Multimedia Information Retrieval MIR, 2005.
[12] E. Frank, G. W. Paynter, I. H. Witten, C. Gutwin, and C. G. Nevill-Manning, “Domain-specific Keyphrase Extraction,” Proc. of the International Joint Conference on Artificial Intelligence, 1999.
[13] A. B. Goldberg, X. Zhu, C. R. Dyer, M. Eldawy, and L. Heng, “Easy as ABC? Facilitating Pictorial Communication via Semantically Enhanced Layout,” Proc. of Conference on Computational Natural Language Learning CoNLL, 2008.
[14] M. A. Hearst, “TextTiling: Segmenting Text into Multi-paragraph Subtopic Passages,” Computational Linguistics, Vol. 23, No. 1, 1997.
[15] Y. Itabashi and Y. Masunaga, “Correlating Scenes as Series of Document Sentences with Images – Taking The Tale of Genji as an Example,” Proc. of IEEE International Conference on Data Engineering ICDE, 2005.
[16] R. Johansson, A. Berglund, M. Danielsson, and P. Nugues, “Automatic Text-to-Scene Conversion in the Traffic Accident Domain,” Proc. of International Joint Conference on Artificial Intelligence IJCAI, 2005.
[17] D. Joshi, J. Z. Wang, and J. Li, “The Story Picturing Engine – A System for Automatic Text Illustration,” ACM Transactions on Multimedia Computing, Communications and Applications, Vol. 2, No. 1, 2006.
[18] D. Joshi, J. Z. Wang, and J. Li, “The Story Picturing Engine: Finding Elite Images to Illustrate a Story Using Mutual Reinforcement,” Proc. of ACM International Workshop on Multimedia Information Retrieval MIR, 2004.
[19] D. Kauchak and F. Chen, “Feature-Based Segmentation of Narrative Documents,” Proc. of ACL Workshop on Feature Engineering for Machine Learning in Natural Language Processing, 2005.
[20] H. Kozima and T. Furugori, “Segmenting Narrative Text into Coherent Scenes,” Literary and Linguistic Computing, Vol. 9, No. 1, 1993.
[21] D. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, Vol. 60, No. 2, 2004.
[22] M. de Marneffe, B. MacCartney, and C. D. Manning, “Generating Typed Dependency Parses from Phrase Structure Parses,” Proc. of the International Conference on Language Resources and Evaluation LREC, 2006.
[23] O. Medelyan and I. H. Witten, “Thesaurus Based Automatic Keyphrase Indexing,” Proc. of the Joint Conference on Digital Libraries JCDL, 2006.
[24] R. Mihalcea and B. Leong, “Toward Communicating Simple Sentences Using Pictorial Representations,” Proc. of the Conference of the Association for Machine Translation in the Americas AMTA, 2005.
[25] G. Miller, R. Beckwith, C. Fellbaum, and K. Miller, “Introduction to WordNet: An On-line Lexical Database,” International Journal of Lexicography, Vol. 3, No. 4, 1990.
[26] S. Osiński and D. Weiss, “A Concept-Driven Algorithm for Clustering Search Results,” IEEE Intelligent Systems, Vol. 20, No. 3, 2005.
[27] S. Osiński and D. Weiss, “Lingo: Search Results Clustering Algorithm Based on Singular Value Decomposition,” Proc. of International Joint Conference on Intelligent Information Systems IIS, 2004.
[28] J. Y. Pan, H. J. Yang, C. Faloutsos, and P. Duygulu, “Automatic Multimedia Cross-modal Correlation Discovery,” Proc. of ACM International Conference on Knowledge Discovery and Data Mining SIGKDD, 2004.
[29] J. Y. Pan, H. J. Yang, P. Duygulu, and C. Faloutsos, “Automatic Image Captioning,” Proc. of IEEE International Conference on Multimedia and Expo ICME, 2004.
[30] J. Y. Pan, H. J. Yang, C. Faloutsos, and P. Duygulu, “GCap: Graph-based Automatic Image Captioning,” Proc. of International Workshop on Multimedia Data and Document Engineering, 2004.
[31] J. Y. Pan, H. J. Yang, C. Faloutsos, and P. Duygulu, “Cross-modal Correlation Mining using Graph Algorithm,” in X. Zhu and I. Davidson (Eds.), Knowledge Discovery and Data Mining: Challenges and Realities with Real World Data, Idea Group, Inc., Hershey, PA, 2006.
[32] I. H. Witten, G. W. Paynter, E. Frank, C. Gutwin, and C. G. Nevill-Manning, “KEA: Practical Automatic Keyphrase Extraction,” Proc. of the ACM Conference on Digital Libraries DL, 1999.
[33] L. Wu, X.-S. Hua, N. Yu, W.-Y. Ma, and S. Li, “Flickr Distance,” Proc. of ACM International Conference on Multimedia MM, 2008.
[34] X. Zhu, A. B. Goldberg, M. Eldawy, C. R. Dyer, and B. Strock, “A Text-to-Picture Synthesis System for Augmenting Communication,” Proc. of the AAAI Conference on Artificial Intelligence AAAI-07, 2007.
item.grantfulltext: open
item.openairecristype: http://purl.org/coar/resource_type/c_46ec
item.openairetype: thesis
item.cerifentitytype: Publications
item.fulltext: With Fulltext
Appears in Collections: 學位論文 (Theses)
Files in This Item:
index.html (115 B, HTML)