dc.contributor | APSIPA | en_US |
dc.contributor | Department of Computer Science, National Chengchi University | en_US |
dc.creator (Author) | 李蔡彥 | zh_TW |
dc.creator (Author) | Li, Tsai-Yen | - |
dc.date (Date) | 2009-10 | en_US |
dc.date.accessioned | 27-May-2010 16:49:22 (UTC+8) | - |
dc.date.available | 27-May-2010 16:49:22 (UTC+8) | - |
dc.date.issued (Upload time) | 27-May-2010 16:49:22 (UTC+8) | - |
dc.identifier.uri (URI) | http://nccur.lib.nccu.edu.tw/handle/140.119/39727 | - |
dc.description.abstract (Abstract) | In this paper, we propose using procedural animation of a human character to enhance the interpretation of music. The system consists of a procedural motion generator that produces expressive motions according to features extracted from the input music, and uses Dynamic Programming (DP) to divide a piece of music into segments for further planning of character animations. Much music-related animation research in the literature composes new animations by reconstructing and modifying existing motions. In this work, we analyze the relationship between music and motion, and then use procedural animation to automatically generate expressive motions for the upper body of a human character to interpret music. Our experiments show that the system can generate appropriate motions for music of different styles and allows the user to adjust system parameters to suit their visual preferences. | - |
dc.language | en-US | en_US |
dc.language.iso | en_US | - |
dc.relation (Relation) | Proceedings of 2009 APSIPA Annual Summit and Conference | en_US |
dc.title (Title) | Automatic Generation of Character Animations Expressing Music Features | en_US |
dc.type (Type) | conference | en |
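
The abstract above mentions using dynamic programming to segment a piece of music into segments before planning character motions. The paper itself does not specify the cost function or features, so the following is only a minimal illustrative sketch, assuming a 1-D feature curve (e.g., loudness per frame) and a within-segment variance cost plus a fixed penalty per segment; all names and parameters here are hypothetical and not taken from the paper.

```python
# Illustrative DP segmentation sketch (assumed cost model, not the authors' method):
# choose boundaries of a 1-D feature sequence that minimize within-segment
# variance plus a fixed penalty per segment.
import numpy as np

def segment_cost(prefix, prefix_sq, i, j):
    """Sum of squared deviations of features[i:j] from their mean."""
    n = j - i
    s = prefix[j] - prefix[i]
    sq = prefix_sq[j] - prefix_sq[i]
    return sq - (s * s) / n

def dp_segment(features, boundary_penalty=1.0):
    """Return optimal segment boundaries [0, b1, ..., len(features)]."""
    n = len(features)
    prefix = np.concatenate(([0.0], np.cumsum(features)))
    prefix_sq = np.concatenate(([0.0], np.cumsum(np.square(features))))
    best = np.full(n + 1, np.inf)   # best[j]: minimum cost of features[:j]
    best[0] = 0.0
    back = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + segment_cost(prefix, prefix_sq, i, j) + boundary_penalty
            if c < best[j]:
                best[j] = c
                back[j] = i
    # Recover boundaries by walking the back-pointers.
    bounds = [n]
    while bounds[-1] > 0:
        bounds.append(back[bounds[-1]])
    return bounds[::-1]

if __name__ == "__main__":
    # Toy "loudness" curve: two quiet passages around a loud one.
    feats = np.concatenate([np.full(30, 0.2), np.full(40, 0.9), np.full(30, 0.3)])
    feats += 0.05 * np.random.default_rng(0).standard_normal(feats.size)
    print(dp_segment(feats, boundary_penalty=0.5))  # boundaries near 30 and 70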