Academic Output – Degree Theses


Title 結合局部特徵序列的影片背景音樂推薦機制
Background Music Recommendation for Video by Incorporating Temporal Sequence of Local Features
Author Lin, Ting Wei (林鼎崴)
Advisor Shan, Man-Kwan (沈錳坤)
Keywords automatic video soundtrack; data mining; correlation model; background music recommendation
Date 2017
Uploaded 13-Sep-2017 14:47:48 (UTC+8)
Abstract With the spread of handheld devices and the rise of social networks, anyone can shoot a video at any time and share it online. A user-generated video, however, loses much of its appeal without a soundtrack: beyond the visual impression, background music adds an auditory dimension that helps viewers grasp the emotion of the video and become immersed in it. Research on background music recommendation follows two main approaches, the emotion-mediated approach and the correlation-based approach. We adopt the correlation-based approach, using correlation modeling to discover the relationship between video features and music features. Since existing correlation-based studies consider only global features, this thesis proposes local features, which use a temporal sequence to capture fine-grained variation within a video, and integrates them with global features into the correlation modeling. The MLSA, CFA, CCA, KCCA, DCCA, PLS, and PLSR algorithms are applied to discover the correlation and produce a ranking list of recommended background music. The experiments compare the recommendation accuracy of these algorithms and contrast the accuracy obtained with global versus local features.
Background music plays an important role in making user-generated video more colorful and attractive. One line of current research on automatic background music recommendation is the correlation-based approach, in which a correlation model between visual and music features is learned from training data and used to recommend background music for a query video. Because existing correlation-based approaches consider only global features, in this work we propose integrating the temporal sequence of local features, along with global features, into the correlation modeling process. The local features are derived from segmented audiovisual clips and represent the local variation of features; their temporal sequence is transformed and then incorporated into the correlation model. Cross-Modal Factor Analysis, Multiple-type Latent Semantic Analysis, Canonical Correlation Analysis, Kernel Canonical Correlation Analysis, Deep Canonical Correlation Analysis, Partial Least Squares, and Partial Least Squares Regression are investigated for correlation modeling, which recommends background music in ranked order. In the experiments, we first compare the results of global features only, local features only, and combined global and local features across the algorithms, and then compare the results for different numbers of clips and Fourier coefficients.
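The pipeline described in the abstract (per-clip local feature sequences compressed by a truncated discrete Fourier transform, then correlated with music features to produce a ranking list) can be illustrated with a minimal NumPy sketch. Linear CCA stands in here for the seven correlation models studied; all array shapes, function names, and the synthetic data are illustrative assumptions, not the thesis's actual features or implementation.

```python
import numpy as np

def dft_descriptor(seq, n_coeffs):
    """Compress a per-clip temporal sequence (n_clips, d) into a fixed-size
    vector: keep the magnitudes of the first n_coeffs DFT coefficients of
    each feature dimension (cf. the 'Discrete Fourier Transform' step)."""
    spectrum = np.fft.rfft(seq, axis=0)            # (n_clips//2 + 1, d)
    return np.abs(spectrum[:n_coeffs]).ravel()     # (n_coeffs * d,)

def fit_cca(X, Y, k, reg=1e-3):
    """Plain linear CCA: whiten each modality, then SVD the whitened
    cross-covariance. Returns projections Wx, Wy into a shared
    k-dimensional correlation space."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])  # ridge term for stability
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    Ax = np.linalg.inv(np.linalg.cholesky(Cxx))     # whitener: Ax @ Cxx @ Ax.T = I
    Ay = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, _, Vt = np.linalg.svd(Ax @ Cxy @ Ay.T)
    return Ax.T @ U[:, :k], Ay.T @ Vt[:k].T

def recommend(video_feat, music_feats, Wx, Wy):
    """Rank candidate music by cosine similarity to the query video in the
    shared correlation space (the recommendation ranking list)."""
    q = video_feat @ Wx
    M = music_feats @ Wy
    sims = M @ q / (np.linalg.norm(M, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)                        # best match first

# Toy demo: a shared latent factor drives both modalities, so CCA can
# recover a correlated subspace from synthetic training pairs.
rng = np.random.default_rng(0)
n = 40
Z = rng.normal(size=(n, 2))                                        # latent "content"
X = Z @ rng.normal(size=(2, 12)) + 0.1 * rng.normal(size=(n, 12))  # video features
Y = Z @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(n, 10))  # music features
Wx, Wy = fit_cca(X, Y, k=2)
ranking = recommend(X[0] - X.mean(axis=0), Y - Y.mean(axis=0), Wx, Wy)
```

In the thesis the query vector would concatenate global features with DFT descriptors of the local-feature sequence; here the synthetic `X` simply plays that role, and KCCA/DCCA/PLS variants would replace `fit_cca` while leaving the ranking step unchanged.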
References [1] Y. Altun, I. Tsochantaridis, and T. Hofmann, Hidden Markov support vector machines. International Conference on Machine Learning, 2003.
[2] G. Andrew, R. Arora, J. A. Bilmes, and K. Livescu, Deep canonical correlation analysis. International Conference on Machine Learning (ICML), 2013.
[3] M. Cristani, A. Pesarin, C. Drioli, V. Murino, A. Rodà, M. Grapulin, and N. Sebe, Toward an automatically generated soundtrack from low-level cross-modal correlations for automotive scenarios. Proceedings of the 18th ACM International Conference on Multimedia, 2010.
[4] S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman, Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6), 1990.
[5] A. Hanjalic and L. Q. Xu, Affective video content representation and modeling. IEEE Transactions on Multimedia, 7(1), 2005.
[6] D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor, Canonical correlation analysis: An overview with application to learning methods. Neural Computation, 16(12), 2004.
[7] H. Wold, Path models with latent variables: The NIPALS approach. In H. M. Blalock et al., editors, Quantitative Sociology: International Perspectives on Mathematical and Statistical Model Building, 1975.
[8] G. E. Hinton, S. Osindero, and Y. W. Teh, A fast learning algorithm for deep belief nets. Neural Computation, 18(7), 2006.
[9] F. F. Kuo, M. F. Chiang, M. K. Shan, and S. Y. Lee, Emotion-based music recommendation by association discovery from film music. Proceedings of the 13th annual ACM International Conference on Multimedia, 2005.
[10] F. F. Kuo, M. K. Shan, and S. Y. Lee, Background music recommendation for video based on multimodal latent semantic analysis. 2013 IEEE International Conference on Multimedia and Expo, 2013.
[11] D. Li, N. Dimitrova, M. Li, and I. K. Sethi, Multimedia content processing through cross-modal association. Proceedings of the 11th ACM International Conference on Multimedia, 2003.
[12] J. C. Lin, W. L. Wei, and H. M. Wang, EMV-matchmaker: Emotional temporal course modeling and matching for automatic music video generation. Proceedings of the 23rd ACM International Conference on Multimedia, 2015.
[13] J. C. Lin, W. L. Wei, and H. M. Wang, DEMV-matchmaker: Emotional temporal course representation and deep similarity matching for automatic music video generation. IEEE International Conference on Acoustics, Speech and Signal Processing, 2016.
[14] L. Lu, D. Liu, and H. J. Zhang, Automatic mood detection and tracking of music audio signals. IEEE Transactions on Audio, Speech, and Language Processing, 14(1), 2006.
[15] L. R. Rabiner and B. Gold, Theory and application of digital signal processing. Englewood Cliffs, NJ, Prentice-Hall, Inc., 1975.
[16] R. Rosipal and N. Krämer, Overview and recent advances in partial least squares. In Subspace, Latent Structure and Feature Selection, Springer, 34-51, 2006.
[17] E. M. Schmidt and Y. E. Kim, Prediction of time-varying musical mood distributions from audio. Proceedings of the 11th International Society for Music Information Retrieval Conference, 2010.
[18] R. R. Shah, Y. Yu, and R. Zimmermann, Advisor: Personalized video soundtrack recommendation by late fusion with heuristic rankings. Proceedings of the 22nd ACM International Conference on Multimedia, 2014.
[19] R. R. Shah, Y. Yu, and R. Zimmermann, User preference-aware music video generation based on modeling scene moods. Proceedings of the 5th ACM Multimedia Systems Conference (MMSys), 2014.
[20] H. Su, F. F. Kuo, C. H. Chiu, Y. J. Chou, and M. K. Shan, MediaEval 2013: Soundtrack selection for commercials based on content correlation modeling. MediaEval Benchmarking Initiative for Multimedia Evaluation, 2013.
[21] R. E. Thayer, The biopsychology of mood and arousal. Oxford University Press, 1990.
[22] H. Tong, C. Faloutsos, and J. Y. Pan, Fast random walk with restart and its applications. Proceedings of the 6th IEEE International Conference on Data Mining (ICDM), 2006.
[23] J. C. Wang, Y. H. Yang, I. H. Jhuo, Y. Y. Lin, and H. M. Wang, The acoustic-visual emotion Gaussians model for automatic generation of music video. Proceedings of the 20th ACM International Conference on Multimedia, 2012.
[24] X. Wang, J. T. Sun, Z. Chen, and C. Zhai, Latent semantic analysis for multiple-type interrelated data objects. Proceedings of the 29th annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2006.
[25] Y. Yu, Z. Shen, and R. Zimmermann, Automatic music soundtrack generation for outdoor videos from contextual sensor information. Proceedings of the 20th ACM International Conference on Multimedia, 2012.
Description Master's thesis
National Chengchi University
Department of Computer Science
103753008
Source http://thesis.lib.nccu.edu.tw/record/#G0103753008
Type thesis
dc.identifier.uri (URI) http://nccur.lib.nccu.edu.tw/handle/140.119/112677
dc.description.tableofcontents Chapter 1 Introduction 1
Chapter 2 Related Work 6
2.1 Emotion-mediated Approach 6
2.2 Correlation-based Approach 9
Chapter 3 Methodology 11
3.1 System Architecture 11
3.2 Video and Music Features 12
3.2.1 Video Features 12
3.2.2 Music Features 13
3.3 Global Features 15
3.4 Local Features 16
3.4.1 Temporal Sequence 17
3.4.2 Discrete Fourier Transform 18
3.5 Feature Matrix 19
3.6 Correlation Modeling 20
3.6.1 Multiple-type Latent Semantic Analysis (MLSA) 20
3.6.2 Cross-modal Factor Analysis (CFA) 22
3.6.3 Canonical Correlation Analysis 23
3.6.3.1 Linear Canonical Correlation Analysis (CCA) 24
3.6.3.2 Kernel Canonical Correlation Analysis (KCCA) 26
3.6.3.3 Deep Canonical Correlation Analysis (DCCA) 28
3.6.4 Partial Least Square (PLS) 29
3.6.5 Partial Least Square Regression (PLSR) 31
3.7 Recommendation 32
Chapter 4 Experimental Design 35
4.1 Data Collection 35
4.2 Experimental Procedure 35
4.3 Evaluation 36
4.4 Experimental Results 37
4.4.1 Comparison of Global Features, Local Features, and Global + Local Features across Algorithms 37
4.4.2 Comparison across Different Numbers of Clips and Fourier Coefficients 42
Chapter 5 Conclusion and Future Work 49
References 51
dc.format.extent 3900630 bytes
dc.format.mimetype application/pdf