Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/126581
DC Field  Value  Language
dc.contributor.advisor  李蔡彥  zh_TW
dc.contributor.advisor  Li, Tsai-Yen  en_US
dc.contributor.author  王玟璇  zh_TW
dc.contributor.author  Wang, Wen-Hsuan  en_US
dc.creator  王玟璇  zh_TW
dc.creator  Wang, Wen-Hsuan  en_US
dc.date  2019  en_US
dc.date.accessioned  2019-10-03T09:17:57Z
dc.date.available  2019-10-03T09:17:57Z
dc.date.issued  2019-10-03T09:17:57Z
dc.identifier  G0105753004  en_US
dc.identifier.uri  http://nccur.lib.nccu.edu.tw/handle/140.119/126581
dc.description  Master's  zh_TW
dc.description  National Chengchi University  zh_TW
dc.description  Department of Computer Science  zh_TW
dc.description  105753004  zh_TW
dc.description.abstract  In recent years, more and more industries have adopted virtual reality technology for applications such as workplace training simulation and gaming entertainment. In most of these applications, however, players interact with objects and NPCs in the environment through controller buttons or predefined action menus; once the player selects an action, the NPC's response is usually only a canned animation or a simple voice or text output.
We believe this kind of interaction does not let players truly immerse themselves in the virtual world. We therefore propose using wearable motion-capture devices so that players can provide natural body movements as input, and parameterizing the virtual characters' animation modules so that the same module can produce more varied output as its parameters change. The same characters and scenes can then lead to different plot developments and animation feedback depending on how the player interacts with them.
We implemented a system in which the player provides body movements through wearable devices; the system parses the movements, decides how to animate the player's character, and determines whether any NPC interaction events are triggered, steering the story toward different endings according to the course of the interaction. In our experiment we compared two input media, wearable devices and VIVE controllers; after the experience, participants filled out questionnaires and were interviewed. Analysis of the results confirmed that the interaction methods we designed are intuitive and smooth, and that participants wanted to try different story paths, demonstrating the replay value of our system.  zh_TW
dc.description.abstract  In recent years, more and more industries and companies have devoted themselves to the development of virtual reality applications such as work training and entertainment. However, most of them use traditional user interfaces such as buttons or predefined action sequences to interact with virtual agents. When a player has chosen her movement, the responses from NPCs are usually fixed animations, voice, or text outputs.
We think this kind of interaction does not allow players to immerse themselves in a virtual world easily. Instead, we suggest using wearable devices to capture the player's gestures and use her natural movements as inputs. In addition, we make the animation module of the virtual character parameterizable in order to deliver appropriate, flexible, and diversified responses. We hope that players can experience different story plots and perceive responsive animation feedback when they interact with the virtual world.
We have implemented an interactive storytelling system which captures and interprets the user's body actions through wearable devices. The system decides how to perform the player character's animations accordingly. The storyline is adjusted whenever an NPC interaction is activated, leading to different story experiences. We conducted a user study that compared our wearable devices with a traditional controller. The participants evaluated the system by filling out questionnaires and were interviewed after the experiment. The experimental results reveal that the interaction methods we designed are intuitive and easy to use compared to the controller. In addition, users are willing to play with the system multiple times, which confirms the replay value of our interactive storytelling system.  en_US
dc.description.tableofcontents  Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 Research Goals 2
1.3 Contributions 4
1.4 Thesis Organization 5
Chapter 2 Related Work 6
2.1 Natural Motion-Sensing Input 6
2.2 Rich and Realistic Animation 7
2.3 Animation Scripting Languages 9
2.4 Interactive Storytelling 10
2.5 Immersion 11
2.6 Summary 12
Chapter 3 Body Action Language Parsing 13
Chapter 4 System Architecture 18
4.1 Devices Used by the System 18
4.1.1 Display Device 18
4.1.2 Interaction Device 19
4.1.3 Adjustments for Using Both Devices Together 20
4.2 System Architecture 20
4.3 Action Parsing Module 22
4.3.1 Direct Detection Input Mode 22
4.3.2 Command Input Mode 24
4.3.3 Forced Playback Mode 26
4.3.4 Interaction Scripts 26
4.4 Story Management Module 29
4.5 Animation Management Module 30
4.5.1 Animation Module Parameterization 30
4.5.2 Action Scheduling 31
4.5.3 Interactive Animation 33
4.6 Hint System 39
4.7 Subtitle Design for the VIVE Headset 40
Chapter 5 Experiment Design and Result Analysis 41
5.1 Experiment Goals and Participants 41
5.2 Experiment Procedure and Example Story 42
5.2.1 Experiment Procedure 42
5.2.2 VIVE Controller Button Mapping 43
5.2.3 Example Story 43
5.2.4 Questionnaire Design 53
5.2.5 Participant Records 53
5.3 Experiment Results and Analysis 53
5.3.1 Questionnaire Analysis 54
5.3.2 Participant Record Analysis 61
Chapter 6 Conclusion and Future Work 63
6.1 Conclusions 63
6.2 Future Goals 63
References 65
Appendix 1 Story Script of the Experiment 67
Appendix 2 Interaction Script of the Experiment 68
Appendix 3 Informed Consent Form 69
Appendix 4 Experiment Procedure Instructions 70
Appendix 5 Questionnaire 71  zh_TW
dc.format.extent  3256017 bytes
dc.format.mimetype  application/pdf
dc.source.uri  http://thesis.lib.nccu.edu.tw/record/#G0105753004  en_US
dc.subject  Interactive Storytelling  zh_TW
dc.subject  Virtual Reality  zh_TW
dc.subject  Wearable Devices  zh_TW
dc.subject  Computer Animation  zh_TW
dc.title  Design of a Mechanism for Interacting with Virtual Characters via Wearable Devices in 3D Interactive Storytelling  zh_TW
dc.title  Using Wearable Devices to Interact with Virtual Agents in 3D Interactive Storytelling  en_US
dc.type  thesis  en_US
dc.relation.reference  [1] F. Kistler, D. Sollfrank, N. Bee, E. André, "Full Body Gestures Enhancing a Game Book for Interactive Story Telling," in International Conference on Interactive Digital Storytelling, 2011, pp.207-218.
[2] C. Mousas, C.-N. Anagnostopoulos, "Performance-Driven Hybrid Full-Body Character Control for Navigation and Interaction in Virtual Environments," 3D Research, 8(2), Article No. 124, 2017.
[3] H. Rhodin, J. Tompkin, K. I. Kim, E. de Aguiar, H. Pfister, H.-P. Seidel, C. Theobalt, "Generalizing Wave Gestures from Sparse Examples for Real-time Character Control," in Proceedings of ACM SIGGRAPH Asia 2015, 34(6), Article No. 181, 2015.
[4] D. Thalmann, "Motion Modeling: Can We Get Rid of Motion Capture?," in International Workshop on Motion in Games, 2008, pp.121-131.
[5] S. Tonneau, R. A. Al-Ashqar, J. Pettré, T. Komura, N. Mansard, "Character contact re-positioning under large environment deformation," in Proceedings of the 37th Annual Conference of the European Association for Computer Graphics, 2016, pp.127-138.
[6] A. Shoulson, N. Marshak, M. Kapadia, N. I. Badler, "ADAPT: The Agent Development and Prototyping Testbed," IEEE Transactions on Visualization and Computer Graphics, 20(7), 2014, pp.1035-1047.
[7] M. Kapadia, X. Xu, M. Nitti, M. Kallmann, S. Coros, R. W. Sumner, M. H. Gross, "PRECISION: Precomputing Environment Semantics for Contact-Rich Character Animation," in Proceedings of the 20th ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2016, pp.29-37.
[8] C. Mousas, "Towards Developing an Easy-To-Use Scripting Environment for Animating Virtual Characters," arXiv preprint arXiv:1702.03246, 2017.
[9] 楊奇珍, "A 3D Interactive Storytelling System with Motion-Based Participation in Narrative," Master's thesis, Department of Computer Science, National Chengchi University, 2015.
[10] M. Kipp, A. Heloir, M. Schroder, P. Gebhard, "Realizing Multimodal Behavior," in International Conference on Intelligent Virtual Agents, 2010, pp.57-63.
[11] J. Funge, X. Tu, D. Terzopoulos, "Cognitive modeling: knowledge, reasoning and planning for intelligent characters," in Computer Graphics and Interactive Techniques, 1999, pp.29-38.
[12] 梁芳綺, "Design of an Intelligent Co-Creation Platform for Interactive Storytelling," Master's thesis, Department of Computer Science, National Chengchi University, 2015.
[13] 蘇雅雯, "A Study on Generating Immersive Interactive Animation in Interactive Storytelling," Master's thesis, Department of Computer Science, National Chengchi University, 2017.
[14] E. Brown, P. Cairns, "A grounded investigation of game immersion," in Extended Abstracts on Human Factors in Computing Systems, 2004, pp.1297-1300.
[15] C. Jennett, A. L. Cox, P. Cairns, S. Dhoparee, A. Epps, T. Tijs, and A. Walton, "Measuring and defining the experience of immersion in games," International Journal of Human-Computer Studies, 66(9):641-661, 2008.  zh_TW
dc.identifier.doi  10.6814/NCCU201901172  en_US
item.openairecristype  http://purl.org/coar/resource_type/c_46ec
item.fulltext  With Fulltext
item.grantfulltext  open
item.openairetype  thesis
item.cerifentitytype  Publications
Appears in Collections: Theses and Dissertations
Files in This Item:
File  Size  Format
300401.pdf  3.18 MB  Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.