
Title: 能表達音樂特徵的人體動畫自動產生機制
Title (English): Automatic Generation of Human Animation for Expressing Music Features
Author: Loi, Ka Chon (雷嘉駿)
Advisor: Li, Tsai Yen (李蔡彥)
Keywords: human animation; virtual environment; music features (人體動畫、虛擬環境、音樂特徵)
Date: 2007
Deposited: 17-Sep-2009 14:04:22 (UTC+8)
Abstract: In recent years, advances in computing power have led to the wide application of 3D virtual environments. In this thesis, we combine character animation with music so that human animation can be used to interpret music in a virtual environment. We design an intelligent motion generator that gives a virtual character the ability to express music features, so that its motions change according to the music it "hears." Because human auditory memory is short, the system automatically extracts music features, segments the music into several segments, and plans motions for each segment independently before generating the animation. In previous work relating animation to music, many motions were generated by modifying or recombining clips from a motion database. In this work, we instead analyze the relationship between music and motion and use procedural animation to automatically generate varied and appropriate interpretive motions. Our experiments show that the system accepts general LOA1 human models and MIDI music as inputs; furthermore, by adjusting system parameters, it can generate animations of different styles to match different user preferences and music genres.
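The abstract describes a pipeline that first splits the music into short segments and then plans motion for each segment independently. As a minimal illustrative sketch (not the thesis implementation — the function and parameter names here are hypothetical), a melody given as (pitch, onset, duration) tuples in beats could be segmented at long inter-onset gaps, with a cap on segment length so each phrase stays short enough to be interpreted by one motion:

```python
# Hypothetical sketch of melody segmentation before per-segment motion
# planning. A note is a (pitch, onset_beats, duration_beats) tuple.

def segment_melody(notes, gap_threshold=1.0, max_len=8):
    """Split a note list into phrases at long inter-onset gaps,
    also capping phrase length (both thresholds are assumptions)."""
    segments, current = [], []
    for note in notes:
        if current:
            _, prev_onset, prev_dur = current[-1]
            gap = note[1] - (prev_onset + prev_dur)  # rest before this note
            if gap >= gap_threshold or len(current) >= max_len:
                segments.append(current)
                current = []
        current.append(note)
    if current:
        segments.append(current)
    return segments

melody = [(60, 0.0, 1.0), (62, 1.0, 1.0), (64, 2.0, 1.0),
          (65, 4.5, 0.5), (67, 5.0, 1.0)]
# The 1.5-beat rest after the third note splits the melody into two phrases.
print(segment_melody(melody))
```

In the actual system, each such segment would then be handed to the motion planner, which places keyframes on beat points and interpolates trajectories between them.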
References
[1] M. Cardle, L. Barthe, S. Brooks, and P. Robinson, “Music Driven Motion Editing: Local Motion Transformations Guided by Music Analysis,” in Proc. of the Eurographics UK Conference, 2002.
[2] P.F. Chen, and T.Y. Li, “Generating Humanoid Lower-Body Motions with Real-time Planning,” in Proc. of 2002 Computer Graphics Workshop, 2002.
[3] G. Cooper and L.B. Meyer, The Rhythmic Structure of Music, Chicago: University of Chicago Press, 1960.
[4] R. DeLone, “Aspects of Twentieth-Century Music,” Englewood Cliffs, New Jersey: Prentice-Hall, Chap. 4, pages 270-301, 1975.
[5] W.J. Dowling, “Scale and Contour: Two components of a theory of memory for melodies,” Psychological Review, 1978.
[6] R.O. Gjerdingen, “Apparent Motion in Music?,” Music Perception, Volume 11, pages 335-370, 1994.
[7] R.I. Godøy, E. Haga, and A.R. Jensenius, “Playing ‘Air Instruments’: Mimicry of Sound-producing Gestures by Novices and Experts,” in Gesture in Human-Computer Interaction and Simulation: 6th International Gesture Workshop, 2005.
[8] Humanoid Animation Working Group (H-Anim). http://www.h-anim.org
[9] L. Kovar, M. Gleicher, and F. Pighin, “Motion Graphs,” in Proc. of ACM SIGGRAPH 2002, 2002.
[10] C.L. Krumhansl, “Cognitive Foundations of Musical Pitch,” Psychology of Music, Volume 20, pages 180-185, 1992.
[11] R. Laban and L. Ullmann, Mastery of Movement, Princeton Book Company Publishers, 1960.
[12] E.W. Large, and J.F. Kolen, “Resonance and the perception of musical meter,” Connection Science, Volume 6, pages 177-208, 1994.
[13] H.C. Lee, and I.K. Lee, “Automatic Synchronization of Background Music and Motion in Computer Animation,” Computer Graphics Forum, Volume 24, pages 353-362, 2005.
[14] F. Lerdahl and R. Jackendoff, A Generative Theory of Tonal Music, Cambridge: MIT Press, 1983.
[15] M.Y. Liao, J.F. Liao, and T.Y. Li, “An Extensible Scripting Language for Interactive Animation in a Speech-Enabled Virtual Environment,” in Proc. of the IEEE International Conference on Multimedia and Expo, 2004.
[16] M. Mancini, and G. Castellano, “Real-time analysis and synthesis of emotional gesture expressivity,” in Proc. of the Doctoral Consortium of 2nd International Conference on Affective Computing and Intelligent Interaction, 2007.
[17] S. Mishra and J.K. Hahn, “Mapping Motion to Sound and Music in Computer Animation and VE,” in Proc. of Pacific Graphics ’95, 1995.
[18] F. Multon, L. France, M.P. Cani-Gascuel, and G. Debunne, “Computer Animation of Human Walking: a Survey,” Journal of Visualization and Computer Animation, 1999.
[19] J. Nakamura, T. Kaku, T. Noma, and S. Yoshida, “Automatic Background Music Generation Based on Actors’ Emotion and Motions,” in Proc. of Pacific Graphics, 1993.
[20] S. Oore and Y. Akiyama, “Learning to Synthesize Arm Motion to Music by Example,” in Proc. of the 14th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, 2006.
[21] Rick Parent, Computer Animation: Algorithms and Techniques, Morgan Kaufmann Publishers, 2005.
[22] Robert Rowe, Interactive Music Systems, Cambridge: MIT Press, 1993.
[23] T. Shiratori, A. Nakazawa, and K. Ikeuchi, “Detecting dance motion structure through music analysis,” in Proc. of IEEE Int’l Conf. on Automatic Face and Gesture Recognition, 2004.
[24] T. Shiratori, A. Nakazawa, and K. Ikeuchi, “Dancing-to-Music Character Animation,” in Computer Graphics Forum, Volume 25, pages 449-458, 2006.
[25] I. Shmulevich, O. Yli-Harja, E. Coyle, D.J. Povel, and K. Lemström, “Perceptual Issues in Music Pattern Recognition: Complexity of Rhythm and Key Finding,” in Proc. of AISB Symposium on Musical Creativity, 2001.
[26] M. Sung, L. Kovar, and M. Gleicher, “Fast and accurate goal-directed motion synthesis for crowds,” in Proc. of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation, 2005.
[27] IKAN (Inverse Kinematics Using Analytical Methods). http://cg.cis.upenn.edu/hms/software/ikan/ikan.html
[28] L. Torresani, P. Hackney, and C. Bregler, “Learning Motion Style Synthesis from Perceptual Observations,” in Advances in Neural Information Processing Systems, 2006.
[29] A.L. Uitdenbogerd, and J. Zobel, “Manipulation of music for melody matching,” in Proc. of ACM International Multimedia Conference, 1998.
[30] B. Vines, M.M. Wanderley, R. Nuzzo, D. Levitin, and C. Krumhansl, “Performance Gestures of Musicians: What Structural and Emotional Information do they Convey?,” Gesture-Based Communication in Human-Computer Interaction, Volume 2915/2004, pages 468-478, 2004.
[31] D.J. Wiley, and J.K. Hahn, “Interpolation Synthesis of Articulated Figure Motion,” IEEE Computer Graphics and Applications, Volume 17, pages 39-45, 1997.
Description: Master's thesis
National Chengchi University
Department of Computer Science
95753006
96
Source: http://thesis.lib.nccu.edu.tw/record/#G0095753006
Type: thesis
URI: https://nccur.lib.nccu.edu.tw/handle/140.119/32693
Table of Contents
Chapter 1 Introduction 1
1.1 Overview 1
1.2 Research Objectives 3
1.3 Contributions 4
1.4 Thesis Organization 5
Chapter 2 Related Work 6
2.1 Producing Human Animation 6
2.2 Music Analysis 8
2.2.1 Music Cognition 8
2.2.2 Music Representation 9
2.3 Related Work Combining Music and Animation 10
Chapter 3 System Overview 12
3.1 System Flow and Architecture 12
3.2 Components 13
3.2.1 Music Analysis Component 13
3.2.2 Motion Planning Component 14
3.2.3 Animation Generation Component 14
Chapter 4 Melody Segmentation under Mechanism Constraints 15
4.1 The MIDI Representation 15
4.2 Characteristics of Melody 16
4.3 Problem Definition 18
4.3.1 Definition of the Configuration Space 18
4.3.2 Motion Constraints 18
4.4 The Melody Segmenter 19
4.4.1 Objective Function of the Melody Segmenter 19
4.4.2 Melody Segmentation Algorithm 21
4.5 Experimental Validation and Discussion 22
Chapter 5 Planning Expressive Motion Trajectories 33
5.1 Music Feature Model 33
5.2 Keyframe Generation 36
5.3 Generating Trajectories between Keyframes 38
5.3.1 Non-Beat Points 38
5.3.2 Beat Points 39
5.4 Generating Velocity Functions between Keyframes 40
5.5 Hand Trajectory Planning Algorithm 42
5.6 Trajectory Planning for Other Body Parts 44
5.6.1 Left-Hand Trajectory Planning 44
5.6.2 Body and Head Motion Planning 47
Chapter 6 Generating Avatar Motions 49
6.1 Inverse Kinematics 49
6.2 Frame Generation 50
6.2.1 Interpolation Methods 50
6.2.2 Generating Frames 52
Chapter 7 Experimental Results and Discussion 53
7.1 System Interface 53
7.2 Motion Expressiveness Tests 54
7.3 System Generality Tests 56
7.3.1 Experiment 1: Different Human Models 56
7.3.2 Experiment 2: Different Types of Music Files 59
7.4 Tests on Generating Animations of Different Styles 62
Chapter 8 Conclusions and Future Work 66
8.1 Conclusions 66
8.2 Future Work 66
References 68
Appendix A MIDI Message Specifications 72
Appendix B Bézier Curves 75