Title: 3D互動敘事中以穿戴式裝置與虛擬角色互動之機制設計
Using Wearable Devices to Interact with Virtual Agents in 3D Interactive Storytelling
Author: 王玟璇 (Wang, Wen-Hsuan)
Contributors: 李蔡彥 (Li, Tsai-Yen); 王玟璇 (Wang, Wen-Hsuan)
Keywords: Interactive Storytelling; Virtual Reality; Wearable Devices; Computer Animation
Date: 2019
Upload time: 3-Oct-2019 17:17:57 (UTC+8)
Abstract: In recent years, more and more industries and companies have devoted themselves to the development of Virtual Reality applications such as workplace training and entertainment. However, most of them use traditional user interfaces such as controller buttons or predefined action options to interact with objects and NPCs in the environment. When a player has chosen her movement, the responses from NPCs are usually canned animations or plain voice or text outputs.
We think this kind of interaction does not allow players to immerse themselves in the virtual world easily. Instead, we propose using wearable motion-capture devices so that the player's natural movements serve as inputs. In addition, we parameterize the animation module of the virtual characters so that it can deliver more diverse outputs as its parameters vary, and so that the same characters and scenes unfold into different story plots and animation feedback depending on how the player interacts with them.
We have implemented an interactive storytelling system that captures and interprets the user's body actions through wearable devices. The system decides how to play the player character's animation accordingly and checks whether any NPC interaction events are triggered; different interactions lead to different endings. We conducted a user study comparing two input media, the wearable devices and the VIVE controllers. Participants filled in questionnaires and were interviewed after the experiment. The results show that the interaction methods we designed are intuitive and fluent compared with the controllers, and that participants wanted to try different story paths, which confirms the replay value of our interactive storytelling system.
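The mechanism the abstract describes, in which a recognized body gesture may trigger an NPC interaction event and branch the storyline toward a different ending, can be sketched minimally as follows. This is an illustrative reconstruction, not the thesis implementation: all names (`StoryNode`, `StoryManager`, `on_gesture`, the gesture labels) are hypothetical, and real gesture recognition from wearable sensors is replaced by a string label.

```python
from dataclasses import dataclass, field

@dataclass
class StoryNode:
    """One point in the branching story graph."""
    name: str
    # Maps a recognized gesture label to the name of the next story node.
    transitions: dict = field(default_factory=dict)

class StoryManager:
    """Advances the storyline when a parsed gesture triggers an NPC event."""

    def __init__(self, nodes, start):
        self.nodes = {n.name: n for n in nodes}
        self.current = start

    def on_gesture(self, gesture: str) -> str:
        # If the gesture matches an interaction event at the current node,
        # follow the branch; otherwise the story stays where it is.
        node = self.nodes[self.current]
        if gesture in node.transitions:
            self.current = node.transitions[gesture]
        return self.current

# A toy two-branch story: waving at the NPC leads to one ending,
# ignoring the NPC leads to another.
nodes = [
    StoryNode("meet_npc", {"wave": "npc_greets", "ignore": "npc_leaves"}),
    StoryNode("npc_greets"),
    StoryNode("npc_leaves"),
]
story = StoryManager(nodes, start="meet_npc")
print(story.on_gesture("wave"))  # the wave gesture branches the story
```

In the actual system the gesture label would come from the motion parsing module interpreting wearable-device data, and each transition would also select parameters for the responsive NPC animation.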
References:
[1] F. Kistler, D. Sollfrank, N. Bee, E. André, "Full Body Gestures Enhancing a Game Book for Interactive Story Telling," in International Conference on Interactive Digital Storytelling, 2011, pp. 207-218.
[2] C. Mousas, C.-N. Anagnostopoulos, "Performance-Driven Hybrid Full-Body Character Control for Navigation and Interaction in Virtual Environments," 3D Research, 8(2), Article No. 124, 2017.
[3] H. Rhodin, J. Tompkin, K. I. Kim, E. de Aguiar, H. Pfister, H.-P. Seidel, C. Theobalt, "Generalizing Wave Gestures from Sparse Examples for Real-time Character Control," in Proceedings of ACM SIGGRAPH Asia 2015, 34(6), Article No. 181, 2015.
[4] D. Thalmann, "Motion Modeling: Can We Get Rid of Motion Capture?," in International Workshop on Motion in Games, 2008, pp. 121-131.
[5] S. Tonneau, R. A. Al-Ashqar, J. Pettré, T. Komura, N. Mansard, "Character Contact Re-positioning under Large Environment Deformation," in Proceedings of the 37th Annual Conference of the European Association for Computer Graphics, 2016, pp. 127-138.
[6] A. Shoulson, N. Marshak, M. Kapadia, N. I. Badler, "ADAPT: The Agent Development and Prototyping Testbed," IEEE Transactions on Visualization and Computer Graphics, 20(7), 2014, pp. 1035-1047.
[7] M. Kapadia, X. Xu, M. Nitti, M. Kallmann, S. Coros, R. W. Sumner, M. H. Gross, "PRECISION: Precomputing Environment Semantics for Contact-Rich Character Animation," in Proceedings of the 20th ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2016, pp. 29-37.
[8] C. Mousas, "Towards Developing an Easy-To-Use Scripting Environment for Animating Virtual Characters," arXiv preprint arXiv:1702.03246, 2017.
[9] 楊奇珍, "以體感方式參與敘事的3D互動敘事系統," Master's thesis, Department of Computer Science, National Chengchi University, 2015.
[10] M. Kipp, A. Heloir, M. Schröder, P. Gebhard, "Realizing Multimodal Behavior," in International Conference on Intelligent Virtual Agents, 2010, pp. 57-63.
[11] J. Funge, X. Tu, D. Terzopoulos, "Cognitive Modeling: Knowledge, Reasoning and Planning for Intelligent Characters," in Computer Graphics and Interactive Techniques, 1999, pp. 29-38.
[12] 梁芳綺, "互動敘事中智慧型共同創作平台設計," Master's thesis, Department of Computer Science, National Chengchi University, 2015.
[13] 蘇雅雯, "互動敘事中具沉浸感之互動動畫產生研究," Master's thesis, Department of Computer Science, National Chengchi University, 2017.
[14] E. Brown, P. Cairns, "A Grounded Investigation of Game Immersion," in Extended Abstracts on Human Factors in Computing Systems, 2004, pp. 1297-1300.
[15] C. Jennett, A. L. Cox, P. Cairns, S. Dhoparee, A. Epps, T. Tijs, A. Walton, "Measuring and Defining the Experience of Immersion in Games," International Journal of Human-Computer Studies, 66(9):641-661, 2008.
Description: Master's thesis
National Chengchi University
Department of Computer Science
105753004
Source: http://thesis.lib.nccu.edu.tw/record/#G0105753004
Type: thesis
dc.identifier (Other Identifiers) G0105753004
dc.identifier.uri (URI) http://nccur.lib.nccu.edu.tw/handle/140.119/126581
dc.description.tableofcontents Chapter 1 Introduction
1.1 Research Motivation
1.2 Research Goals
1.3 Contributions
1.4 Organization of This Thesis
Chapter 2 Related Work
2.1 Natural Motion-Sensing Input
2.2 Rich and Realistic Animation
2.3 Animation Scripting Languages
2.4 Interactive Storytelling
2.5 Immersion
2.6 Summary
Chapter 3 Body-Motion Language Parsing
Chapter 4 System Architecture
4.1 Devices Used by the System
4.1.1 The Display Device
4.1.2 The Interaction Devices
4.1.3 Adjustments for Using the Two Devices Together
4.2 System Architecture
4.3 Motion Parsing Module
4.3.1 Direct-Detection Input Mode
4.3.2 Command Input Mode
4.3.3 Forced-Playback Mode
4.3.4 Interaction Scripts
4.4 Story Management Module
4.5 Animation Management Module
4.5.1 Parameterizing the Animation Module
4.5.2 Action Scheduling
4.5.3 Interactive Animation
4.6 Hint System
4.7 Subtitle Design for the VIVE Headset
Chapter 5 Experiment Design and Result Analysis
5.1 Experiment Goals and Participants
5.2 Procedure and Example Story
5.2.1 Procedure
5.2.2 VIVE Controller Button Mapping
5.2.3 Example Story
5.2.4 Questionnaire Design
5.2.5 Participant Records
5.3 Results and Analysis
5.3.1 Questionnaire Analysis
5.3.2 Participant Record Analysis
Chapter 6 Conclusion and Future Work
6.1 Conclusions
6.2 Future Work
References
Appendix 1 Story Script Used in the Experiment
Appendix 2 Interaction Script Used in the Experiment
Appendix 3 Consent Form
Appendix 4 Procedure Instructions
Appendix 5 Questionnaire
dc.format.extent 3256017 bytes
dc.format.mimetype application/pdf
dc.identifier.doi (DOI) 10.6814/NCCU201901172