Title Estimation and evaluation of body language in a presentation scenario using RGBD data
Author 廖文宏
Wu, Yi Chieh
Liao, Wenhung
Contributor Department of Computer Science
Keywords Animation; Classification (of information); Computational linguistics; Intelligent systems; Personnel training; Audio processing; Body language; Evaluation results; Facial Expressions; Kinect sensors; performance evaluation; Presentation style; Skeletal joints; Intelligent control
Date 2015
Upload Time 10-Aug-2017 15:16:38 (UTC+8)
Abstract In this research, we capture body movements, facial expressions, and voice data of subjects in a presentation scenario using an RGBD-capable Kinect sensor. The acquired videos were assessed by a group of reviewers to indicate their preferences/aversions to the presentation style. We denote the two classes of ruling as Period of Like (POL) and Period of Dislike (POD), respectively. We then employ three types of image features, namely animation units (AUs), skeletal joints, and 3D face vertices, to analyze the consistency of the evaluation results, as well as the ability to classify unseen footage based on the training data supplied by 35 evaluators. Finally, we develop a prototype program to help users identify their strengths/weaknesses during their presentations so that they can improve their skills accordingly. © 2015 The authors and IOS Press. All rights reserved.
Relation Frontiers in Artificial Intelligence and Applications, 274, 1213-1222
International Computer Symposium, ICS 2014; Taichung, Taiwan; 12 December 2014 to 14 December 2014; Code 111725
Type conference
DOI http://dx.doi.org/10.3233/978-1-61499-484-8-1213
dc.contributor 資訊科學系 zh_TW
dc.creator (Author) 廖文宏 zh_TW
dc.creator (Author) Wu, Yi Chieh en_US
dc.creator (Author) Liao, Wenhung en_US
dc.date (Date) 2015 en_US
dc.date.accessioned 10-Aug-2017 15:16:38 (UTC+8)
dc.date.available 10-Aug-2017 15:16:38 (UTC+8)
dc.date.issued (Upload Time) 10-Aug-2017 15:16:38 (UTC+8)
dc.identifier.uri (URI) http://nccur.lib.nccu.edu.tw/handle/140.119/111901
dc.description.abstract (Abstract) In this research, we capture body movements, facial expressions, and voice data of subjects in a presentation scenario using an RGBD-capable Kinect sensor. The acquired videos were assessed by a group of reviewers to indicate their preferences/aversions to the presentation style. We denote the two classes of ruling as Period of Like (POL) and Period of Dislike (POD), respectively. We then employ three types of image features, namely animation units (AUs), skeletal joints, and 3D face vertices, to analyze the consistency of the evaluation results, as well as the ability to classify unseen footage based on the training data supplied by 35 evaluators. Finally, we develop a prototype program to help users identify their strengths/weaknesses during their presentations so that they can improve their skills accordingly. © 2015 The authors and IOS Press. All rights reserved. en_US
dc.format.extent 214 bytes
dc.format.mimetype text/html
dc.relation (Relation) Frontiers in Artificial Intelligence and Applications, 274, 1213-1222 en_US
dc.relation (Relation) International Computer Symposium, ICS 2014; Taichung, Taiwan; 12 December 2014 to 14 December 2014; Code 111725 en_US
dc.subject (Keywords) Animation; Classification (of information); Computational linguistics; Intelligent systems; Personnel training; Audio processing; Body language; Evaluation results; Facial Expressions; Kinect sensors; performance evaluation; Presentation style; Skeletal joints; Intelligent control en_US
dc.title (Title) Estimation and evaluation of body language in a presentation scenario using RGBD data en_US
dc.type (Type) conference
dc.identifier.doi (DOI) 10.3233/978-1-61499-484-8-1213
dc.doi.uri (DOI) http://dx.doi.org/10.3233/978-1-61499-484-8-1213
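
The abstract above describes a pipeline that extracts animation units, skeletal joints, and 3D face vertices from Kinect recordings and classifies footage as POL or POD using evaluator-supplied training data. Below is a minimal, hypothetical Python sketch of such a frame-level classification step; the file names, feature layout, and the random-forest classifier are illustrative assumptions, not the data format or classifier reported in the paper.

# Hypothetical sketch (not the authors' implementation): classify presentation
# frames as Period of Like (POL) vs. Period of Dislike (POD) from pre-extracted
# Kinect features (animation units, skeletal joints, 3D face vertices).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Assumed inputs: one row of concatenated features per frame, and a 0/1 label
# per frame aggregated from the evaluators' POL/POD annotations (file names are hypothetical).
X = np.load("presentation_features.npy")   # shape (n_frames, n_features)
y = np.load("pol_pod_labels.npy")          # shape (n_frames,), 1 = POL, 0 = POD

# Hold out part of the footage to mimic classifying unseen recordings.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print("Held-out POL/POD accuracy:", accuracy_score(y_test, clf.predict(X_test)))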