Publications: Conference Papers
Title: A Web-enabled Talking Head System Based on a Single Face Image
Authors: Hung, Cheng-Sheng; Huang, Chien-Feng; Lee, Guo-Shi; Lin, I-Chen; Ouhyoung, Ming
Date: 1999
Date uploaded: 2-Oct-2017 15:31:51 (UTC+8)
Abstract: In this paper, a lifelike talking head system is proposed. The talking head, which is driven by speaker-independent speech recognition, requires only a single face image to synthesize lifelike facial expressions. The proposed system uses speech recognition engines to obtain utterances and their corresponding time stamps in the speech data. Associated facial expressions can be fetched from an expression pool, and the synthetic facial expression can then be synchronized with the speech. When applied to the Internet, our talking head system can serve as a vivid web-site introducer and requires only 100 Kbytes per minute plus an additional face image (about 40 Kbytes in CIF format, 24-bit color, JPEG compression). The system can synthesize facial animation at more than 30 frames per second on a Pentium II 266 MHz PC.
Relation: TANet'99 conference paper; C3 Web Technology
Type: conference
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/113347
Format: application/pdf, 1,452,955 bytes
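The abstract outlines a pipeline in which a speech recognition engine returns utterances with time stamps, matching expressions are fetched from a pre-built expression pool, and the resulting frames are kept synchronized with the audio. The paper itself is not reproduced in this record, so the sketch below is only one plausible way to organize such a frame scheduler; the names UtteranceSegment, ExpressionFrame, schedule_frames, and the example labels are hypothetical and are not taken from the system described in the paper.

```python
# Minimal sketch, assuming the recognizer yields labeled, time-stamped
# segments and the expression pool is a mapping from labels to pre-rendered
# face images derived from the single input photograph.
# All names here are illustrative, not from the paper.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class UtteranceSegment:
    """One recognized unit with its time span in the speech data (seconds)."""
    label: str        # e.g. a phoneme/viseme class reported by the recognizer
    start: float
    end: float


@dataclass
class ExpressionFrame:
    """A frame of the output animation."""
    image_id: str     # key into the pre-computed expression pool
    timestamp: float  # when to show this frame, relative to audio start


def schedule_frames(segments: List[UtteranceSegment],
                    expression_pool: Dict[str, str],
                    fps: int = 30) -> List[ExpressionFrame]:
    """Assign an expression from the pool to every output frame.

    Frames are generated on a fixed clock (fps); each frame takes the
    expression of whichever utterance segment covers its timestamp, which is
    what keeps the synthetic facial expression in sync with the speech.
    """
    if not segments:
        return []
    frames: List[ExpressionFrame] = []
    n_frames = int(segments[-1].end * fps)
    seg_idx = 0
    for i in range(n_frames):
        t = i / fps
        # Advance to the segment that covers time t.
        while seg_idx < len(segments) - 1 and t >= segments[seg_idx].end:
            seg_idx += 1
        label = segments[seg_idx].label
        image_id = expression_pool.get(label, expression_pool.get("neutral", ""))
        frames.append(ExpressionFrame(image_id=image_id, timestamp=t))
    return frames


if __name__ == "__main__":
    # Toy example: two recognized segments and a tiny expression pool.
    segments = [UtteranceSegment("ah", 0.0, 0.4), UtteranceSegment("oo", 0.4, 0.8)]
    pool = {"ah": "mouth_open.jpg", "oo": "mouth_round.jpg", "neutral": "neutral.jpg"}
    for frame in schedule_frames(segments, pool, fps=10)[:5]:
        print(frame)
```

A full implementation would additionally need the image-warping step that turns the single face photograph into the pooled expression images, and an audio player whose clock drives the frame timestamps; the sketch assumes both already exist.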