Please use this identifier to cite or link to this item: https://ah.nccu.edu.tw/handle/140.119/113347


Title: A Web-enabled Talking Head System Based on a Single Face Image
Authors: Hung, Cheng-Sheng
Huang, Chien-Feng
Lee, Guo-Shi
Lin, I-Chen
Ouhyoung, Ming
Date: 1999
Issue Date: 2017-10-02 15:31:51 (UTC+8)
Abstract: In this paper, a lifelike talking head system is proposed. The talking head, which is driven by speaker-independent speech recognition, requires only a single face image to synthesize lifelike facial expressions.
The proposed system uses speech recognition engines to obtain utterances and their corresponding time stamps from the speech data. The associated facial expressions can then be fetched from an expression pool, and the synthetic facial expression synchronized with the speech.
When applied to the Internet, our talking head system can serve as a vivid website introducer, and requires only 100 Kbytes/minute plus one additional face image (about 40 Kbytes in CIF format, 24-bit color, JPEG compression). The system can synthesize facial animation at more than 30 frames/sec on a Pentium II 266 MHz PC.
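The abstract describes a timestamp-driven pipeline: the recognizer emits utterances with time stamps, the matching expression is fetched from an expression pool, and playback stays in sync with the speech. A minimal sketch of that lookup, with hypothetical utterance data and expression names (the actual recognition engine and expression set are not specified in the paper), might look like:

```python
# Hypothetical sketch of the timestamp-driven expression lookup described
# in the abstract: recognized utterances carry start-time stamps, and the
# active expression is fetched from a pool so animation stays synchronized
# with speech. All names and data below are illustrative assumptions.

from bisect import bisect_right

# (start_time_sec, expression_id) pairs, as a hypothetical recognizer output.
UTTERANCES = [(0.0, "neutral"), (0.4, "mouth_open"), (0.9, "smile"), (1.5, "neutral")]

# Expression pool mapping an id to its (placeholder) facial-expression data.
EXPRESSION_POOL = {
    "neutral": "neutral frame parameters",
    "mouth_open": "open-mouth frame parameters",
    "smile": "smile frame parameters",
}

def expression_at(t: float) -> str:
    """Return the pooled expression active at playback time t (seconds)."""
    starts = [start for start, _ in UTTERANCES]
    idx = max(bisect_right(starts, t) - 1, 0)  # last utterance started by time t
    return EXPRESSION_POOL[UTTERANCES[idx][1]]

# At the reported 30 frames/sec, frame k is rendered at time k / 30.
frames = [expression_at(k / 30) for k in range(60)]  # two seconds of animation
```

The binary search over utterance start times keeps per-frame lookup cheap, which is consistent with the reported real-time rate on late-1990s hardware.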
Relation: TANet'99 conference paper
C3 Web Technology
Data Type: conference
Appears in Collections: [TANET Taiwan Academic Network Conference] Conference Papers

Files in This Item:

File: 045.pdf
Size: 1418 Kb
Format: Adobe PDF


All items in 學術集成 are protected by copyright, with all rights reserved.

