Academic Output - Conference Paper

Title Incorporating Semantic Knowledge for Visual Lifelog Activity Recognition
Author 黃瀚萱
Huang, Hen-Hsen
Contributor Department of Computer Science
Date 2020-06
Upload Time 4-Jun-2021 14:39:02 (UTC+8)
Abstract The advance in wearable technology has made lifelogging more feasible and more popular. Visual lifelogs collected by wearable cameras capture every single detail of an individual's life experience, offering a promising data source for deeper lifestyle analysis and better memory recall assistance. However, building a system for organizing and accessing visual lifelogs is a challenging task due to the semantic gap between visual data and semantic descriptions of life events. In this paper, we introduce semantic knowledge to reduce such a semantic gap for daily activity recognition and lifestyle understanding. We incorporate the semantic knowledge derived from external resources to enrich the training data for the proposed supervised learning model. Experimental results show that incorporating external semantic knowledge is beneficial for improving the performance of recognizing life events.
Relation Proceedings of the 2020 International Conference on Multimedia Retrieval (ICMR ’20), Association for Computing Machinery, pp. 450-456
Type conference
DOI https://doi.org/10.1145/3372278.3390700
URI http://nccur.lib.nccu.edu.tw/handle/140.119/135522
Format application/pdf (1,207,500 bytes)
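
The abstract describes the approach only at a high level: visual features from lifelog images are enriched with semantic knowledge drawn from external resources before a supervised activity recognizer is trained. The sketch below is one illustrative reading of that idea, not the authors' implementation; the feature dimensions, the synthetic placeholder data, and the scikit-learn classifier are all assumptions made purely for demonstration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_images, n_activities = 500, 6
visual_dim = 128    # stand-in for features from a pretrained image encoder
semantic_dim = 50   # stand-in for concept embeddings from an external resource

# Placeholder visual features extracted from lifelog images.
visual_feats = rng.normal(size=(n_images, visual_dim))

# Placeholder semantic features, e.g. averaged embeddings of concepts
# ("desk", "keyboard", "coffee") associated with each image via an external
# knowledge resource; purely synthetic here.
semantic_feats = rng.normal(size=(n_images, semantic_dim))

# Placeholder daily-activity labels (e.g. working, commuting, eating).
labels = rng.integers(0, n_activities, size=n_images)

# "Enriched" representation: visual features concatenated with the externally
# derived semantic features before supervised training.
enriched = np.concatenate([visual_feats, semantic_feats], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    enriched, labels, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

With real data, the visual features would come from an image encoder and the semantic features from whatever external resource supplies the knowledge; the sketch only illustrates combining the two into a single training representation for an activity classifier.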