Title: 吉他和弦把位音訊特徵萃取與辨識系統研究 (Guitar Chord Fret Position Audio Feature Extraction and Recognition System)
Author: Chuang, Chun Chung (莊淳中)
Advisor: Tsaih, Rua Huan (蔡瑞煌)
Keywords: music information retrieval; chord recognition; pitch class profile; Mel-frequency cepstral coefficients; support vector machine; guitar
Date: 2016
Uploaded: 22-Aug-2016 10:45:00 (UTC+8)

Abstract: Chords play an important role in modern music: they form the foundation of a piece and can convey a wide variety of auditory impressions. The guitar is an instrument well suited to playing chords; by selecting notes on the fretboard, pressing the strings, and then plucking or strumming, a performer can produce many different chords. A chord recognition system combines music theory with computing power to identify the chords that occur in an audio signal. Such systems have been studied extensively in the music information retrieval field and many applications have been developed, but earlier systems usually recognize only the chord name. For a guitarist, however, the fret position at which a chord is pressed produces differences in timbre and harmony. Drawing on the related literature, this study therefore implements a system and observes that two audio features, the pitch class profile and Mel-frequency cepstral coefficients, combined with supervised machine learning using support vector machines, can recognize guitar chord fret positions, with the aim of capturing the higher-level musical meaning of timbre and harmony behind guitar music.
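As a rough illustration of the pipeline the abstract describes, the sketch below computes a Fujishima-style pitch class profile (chroma vector) from a single audio frame and trains a scikit-learn support vector machine (scikit-learn appears in the reference list) on placeholder feature vectors. The frame length, kernel settings, and synthetic training data are assumptions made for demonstration only and do not reflect the thesis's actual configuration; in the study, each feature vector would combine the pitch class profile and MFCCs extracted from a recorded guitar chord.

import numpy as np
from sklearn.svm import SVC

SR = 44100      # sampling rate; CD-quality audio as discussed in reference [17]
N_FFT = 4096    # analysis frame length (assumed value, not the thesis's setting)

def pitch_class_profile(frame, sr=SR, f_ref=440.0):
    # Fold the magnitude spectrum of one frame into 12 pitch classes
    # (a Fujishima-style PCP / chroma vector, see reference [10]).
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    pcp = np.zeros(12)
    for mag, f in zip(spectrum[1:], freqs[1:]):        # skip the DC bin
        pc = int(round(12 * np.log2(f / f_ref))) % 12  # map frequency to a pitch class
        pcp[pc] += mag ** 2
    return pcp / (np.linalg.norm(pcp) + 1e-12)         # normalized 12-dimensional profile

# Example: the PCP of a synthetic 440 Hz (A) tone concentrates its energy in one bin.
t = np.arange(N_FFT) / SR
print(pitch_class_profile(np.sin(2 * np.pi * 440.0 * t)))

# Each training row would be a 12-dim PCP concatenated with an MFCC vector extracted
# from one strummed chord; random placeholders stand in for the recorded data set here.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 25))                # e.g. 12 PCP + 13 MFCC dimensions
y_train = rng.integers(0, 4, size=40)              # hypothetical fret-position labels
clf = SVC(kernel="rbf", C=10.0, gamma="scale")     # SVM classifier via scikit-learn [25]
clf.fit(X_train, y_train)
print(clf.predict(X_train[:5]))                    # predicted fret-position classes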
References:
[1] Baniya, B. K., Ghimire, D., and Lee, J., "Automatic Music Genre Classification Using Timbral Texture and Rhythmic Content Features," ICACT TACT, (3:3), 2014.
[2] Bharucha, J., and Krumhansl, C. L., "The representation of harmonic structure in music: Hierarchies of stability as a function of context," Cognition, 13, pp. 63-102, 1983.
[3] Corrigall, K. A., and Schellenberg, E. G., Handbook of Psychology of Emotions: Recent Theoretical Perspectives and Novel Empirical Findings, Nova, Canada, pp. 299-326.
[4] Chien, H. C., Essentials of Guitar (4th ed.), OverTop Music, Taiwan, 2004 (Chinese version).
[5] Casey, M. A., Veltkamp, R., Goto, M., Leman, M., Rhodes, C., and Slaney, M., "Content-Based Music Information Retrieval: Current Directions and Future Challenges," Proc. of the IEEE, (96:4), April 2008.
[6] Chuan, C. H., and Chew, E., "Audio onset detection using machine learning techniques: the effect and applicability of key and tempo information," Computer Science Department Technical Report, University of Southern California, 2008.
[7] Davis, S. B., and Mermelstein, P., "Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences," IEEE Transactions on ASSP, (28:4), pp. 357-366, 1980.
[8] Dosenbach, K., Fohl, W., and Meisel, A., "Identification of individual guitar sound by support vector machines," Proc. of the 11th Int. Conference on Digital Audio Effects, 2008.
[9] Dixon, S., "Onset Detection Revisited," Proc. of the 9th International Conference on Digital Audio Effects, 2006.
[10] Fujishima, T., "Real time chord recognition of musical sound: A system using Common Lisp Music," ICMC, pp. 464-467, 1999.
[11] Fohl, W., Turkalj, I., and Meisel, A., "A Feature Relevance Study for Guitar Tone Classification," Proc. of the 13th ISMIR, 2013.
[12] Gomez, E., Tonal Description of Music Audio Signals, Ph.D. thesis, UPF Barcelona, 2006.
[13] Hrybyk, A., and Kim, Y. E., "Combined audio and video analysis for guitar chord identification," Proc. of the 11th ISMIR, pp. 159-164, 2010.
[14] Lee, J. H., "Supervised Learning for Guitar Chord Voicing Identification Aided by the Use of MIDI Pickups," 2013.
[15] Lee, K., and Slaney, M., "Automatic Chord Recognition from Audio Using an HMM with Supervised Learning," Proc. of the 7th ISMIR, 2006.
[16] Liu, J., and Xie, L., "SVM-Based Automatic Classification of Musical Instruments," International Conference on Intelligent Computation Technology and Automation, 2010.
[17] McFadden, A., "Why 44.1 kHz? Why not 48 kHz?," CD-Recordable FAQ, March 2016 (available online at http://stason.org/TULARC/pc/cd-recordable/2-35-Why-44-1KHz-Why-not-48KHz.html).
[18] Mitra, S. K., Digital Signal Processing: A Computer-Based Approach (3rd ed.), 2006.
[19] Oudre, L., Grenier, Y., and Févotte, C., "Chord Recognition by Fitting Rescaled Chroma Vectors to Chord Templates," IEEE Transactions on Audio, Speech and Language Processing, (19:7), pp. 2222-2233, 2011.
[20] Pan, S. W., Guitar Chord Encyclopedia (8th ed.), Vision Quest, Taiwan, 2013 (Chinese version).
[21] PyMIR, https://github.com/jsawruk/pymir.
[22] Stark, A. M., and Plumbley, M. D., "Real-time Chord Recognition for Live Performance," ICMC, 2009.
[23] Sheh, A., and Ellis, D. P., "Chord segmentation and recognition using EM-trained hidden Markov models," ISMIR, 2003.
[24] Shepard, R. N., "Structural representations of musical pitch," in Deutsch, D. (Ed.), The Psychology of Music (1st ed.), Swets & Zeitlinger, 1982.
[25] scikit-learn, http://scikit-learn.org/.
[26] Spark 1.6.1 MLlib Logistic Regression, http://spark.apache.org/docs/latest/ml-classification-regression.html#logistic-regression.
[27] Tzanetakis, G., Music Data Mining: An Introduction, pp. 44-46, 52.
[28] Zhang, X., and Ras, Z., "Discriminant feature analysis for music timbre recognition," ECML/PKDD Third International Workshop on Mining Complex Data (MCD 2007), pp. 59-70.

Description: Master's thesis
National Chengchi University (國立政治大學)
Department of Management Information Systems (資訊管理學系)
Student ID: 103356018
Source: http://thesis.lib.nccu.edu.tw/record/#G0103356018
Type: thesis
Identifier: G0103356018
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/100461
Format: 2057654 bytes (application/pdf)

Table of Contents:
Chapter 1 Introduction
  Background and Motivation
Chapter 2 Literature Review
  2.1 Music Information Retrieval
  2.2 Levels for Content Description
  2.3 Audio Signal Processing
    2.3.1 Terms
    2.3.2 Framing
    2.3.3 Spectral Analysis
  2.4 Chord Recognition
  2.5 Timbre Recognition
    2.5.1 Mel-Frequency Cepstral Coefficients
  2.6 Onset Detection
    2.6.1 Spectral Flux
  2.7 The Applications of to Timbre Recognition
  2.8 Basic Musical Theory
    2.8.1 Frequency, Pitch and Pitch Class
    2.8.2 Interval
    2.8.3 Scale
    2.8.4 Chord
  2.9 Acoustic Guitar
    2.9.1 Guitar Chord
Chapter 3 Experiment
  3.1 Data Collection
  3.2 System Architecture
    3.2.1 Preprocessing
    3.2.2 Model Training and Testing
    3.2.3 Testing in Noisy Conditions
Chapter 4 Evaluation
  4.1 Comparison of Chord Type Classification
  4.2 Comparison of Fret Position Classification
Chapter 5 Discussion and Conclusion
  5.1 Conclusion
  5.2 Limitation and Future Work
References