Academic Output - Conference Papers


Title Background music recommendation for video based on multimodal latent semantic analysis
Authors Kuo, F.-F.; Shan, Man-Kwan (沈錳坤); Lee, S.-Y.
Contributor Department of Computer Science
Keywords Audio-visual features; Automatic video editing; Background musics; Co-occurrence relationships; Content correlations; Correlation modeling; Detection algorithm; Latent Semantic Analysis; Exhibitions; Semantics; Websites
Date 2013-07
Upload time 26-May-2015 18:28:11 (UTC+8)
Abstract Automatic video editing is receiving increasing attention as digital camera technology advances and social media sites such as YouTube and Flickr become popular. Background music selection is one of the key elements in making the generated video attractive. In this work, we propose a framework for background music recommendation based on multimodal latent semantic analysis between video and music. The videos and accompanying background music are collected from YouTube, and videos with low musicality are filtered out by a musicality detection algorithm. The co-occurrence relationships between audio-visual features are derived for multimodal latent semantic analysis. Then, given a video, a ranked list of recommended music can be derived from the correlation model. In addition, we propose an algorithm for music beat and video shot alignment to calculate the alignability of the recommended music and video. The final recommendation list combines both content correlation and alignability. Experiments show that the proposed method achieves promising results. © 2013 IEEE.
Relation Proceedings - IEEE International Conference on Multimedia and Expo, 2013, Article number 6607444, 2013 IEEE International Conference on Multimedia and Expo, ICME 2013; San Jose, CA; United States; 15 July 2013 to 19 July 2013; Category number CFP13ICM-ART; Code 100169
Data type conference
DOI http://dx.doi.org/10.1109/ICME.2013.6607444
URI http://nccur.lib.nccu.edu.tw/handle/140.119/75325
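The abstract above describes a multimodal latent semantic analysis that links the visual features of a video with the audio features of its background music. Below is a minimal, illustrative sketch of that idea in NumPy, under loose assumptions: it presumes the visual and audio streams have already been quantized into "word" histograms, uses a truncated SVD of the visual-audio co-occurrence matrix as the latent model, and ranks candidate tracks by their correlation with the query video in that latent space. All names (visual_hist, audio_hist, recommend), vocabulary sizes, and the latent rank are hypothetical placeholders, not the authors' actual configuration; the musicality filter and the beat/shot alignability step from the paper are not modeled here.

```python
# Minimal sketch of multimodal latent semantic analysis for background music
# recommendation, using only NumPy. Feature extraction, musicality filtering,
# and beat/shot alignability are out of scope; shapes and rank are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Assume each training video has been quantized into a histogram of V visual
# words, and its background music into a histogram of A audio words.
V, A, N = 50, 40, 200              # vocabulary sizes and number of training pairs (assumed)
visual_hist = rng.random((N, V))   # placeholder for real visual-word histograms
audio_hist = rng.random((N, A))    # placeholder for real audio-word histograms

# Co-occurrence matrix between visual and audio words, accumulated over the
# corpus: entry (i, j) grows when visual word i and audio word j appear in the
# same video / background-music pair.
C = visual_hist.T @ audio_hist     # shape (V, A)

# Latent semantic analysis: a truncated SVD of the co-occurrence matrix gives a
# shared k-dimensional latent space linking the two modalities.
k = 10
U, s, Vt = np.linalg.svd(C, full_matrices=False)
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

def recommend(query_visual_hist, music_library_audio_hists, top_n=5):
    """Rank candidate tracks by their low-rank co-occurrence score with the
    query video (content correlation only; alignability is not modeled)."""
    q = (query_visual_hist @ U_k) * s_k        # video side, weighted by singular values
    m = music_library_audio_hists @ Vt_k.T     # music side, projected into the same space
    # Normalize so histogram mass does not dominate the ranking.
    m = m / (np.linalg.norm(m, axis=1, keepdims=True) + 1e-12)
    scores = m @ (q / (np.linalg.norm(q) + 1e-12))
    order = np.argsort(scores)[::-1][:top_n]
    return order, scores[order]

# Toy usage: recommend tracks from a small candidate library for a new video.
library = rng.random((30, A))
top_idx, top_scores = recommend(rng.random(V), library)
print("recommended track indices:", top_idx)
print("correlation scores:", np.round(top_scores, 3))
```

In the full framework described in the abstract, this content-correlation ranking would then be combined with a beat/shot alignability score to produce the final recommendation list.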