Title | 基於方向性邊緣特徵之即時物件偵測與追蹤 Real-Time Object Detection and Tracking using Directional Edge Maps |
Creator | 王財得 Wang, Tsai-Te |
Contributor | 廖文宏 Liao, Wen-Hung 王財得 Wang, Tsai-Te |
Key Words | Object Detection; Directional Edge Maps; Face Detection; Facial Expression Recognition; AdaBoost |
Date | 2007 |
Date Issued | 19-Sep-2009 12:10:47 (UTC+8) |
Summary | Rapid and robust detection and tracking of objects is a challenging problem in computer vision research. Techniques such as artificial neural networks, support vector machines and Bayesian networks have been developed to enable interactive vision-based applications. In this thesis, we tackle this issue by devising a novel feature descriptor named directional edge maps (DEM). When combined with a modified AdaBoost training algorithm, the proposed descriptor produces effective results in many object detection and recognition tasks. We have applied the newly developed method to two important object recognition problems, namely face detection and facial expression recognition. The DEM-based methodology conceived in this thesis is capable of detecting faces from multiple views. To test the efficacy of our face detection mechanism, we performed a comparative analysis with the Viola and Jones algorithm using the Carnegie Mellon University face database. The recall and precision of our approach are 79% and 90%, respectively, compared to 81% and 77% for the Viola and Jones algorithm. Our algorithm is also more efficient, requiring only 82 ms (compared to 132 ms for Viola and Jones) to process a 512x384 image. To achieve robust facial expression recognition, we combined component-based methods with action-unit model-based approaches. The component-based method is mainly used to locate important facial features and track their deformations; the action-unit model-based approach is then employed to carry out expression recognition. The accuracy of classifying the different emotion types is as follows: happiness 83.6%, sadness 72.7%, surprise 80%, and anger 78.1%. It turns out that anger and sadness are more difficult to distinguish, whereas happiness and surprise have higher recognition rates. |
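The recall and precision figures quoted in the comparison above follow the standard detection-metric definitions. A quick illustration (the counts below are hypothetical, chosen only to reproduce the reported rates, and are not taken from the thesis):

```python
def precision_recall(tp, fp, fn):
    """Standard detection metrics (illustrative helper, not from the thesis)."""
    precision = tp / (tp + fp)   # fraction of reported detections that are real faces
    recall = tp / (tp + fn)      # fraction of real faces that were detected
    return precision, recall

# Hypothetical counts that mirror roughly 90% precision / 79% recall
p, r = precision_recall(tp=79, fp=9, fn=21)
```

Note that recall and precision trade off against each other, which is why the thesis reports both: the proposed method trades a little recall (79% vs. 81%) for substantially higher precision (90% vs. 77%).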
Description | Master's thesis, National Chengchi University, Department of Computer Science, 95971003, academic year 96 |
Source | http://thesis.lib.nccu.edu.tw/record/#G0095971003 |
Type | thesis |
dc.contributor.advisor | 廖文宏 | zh_TW |
dc.contributor.advisor | Liao, Wen-Hung | en_US |
dc.contributor.author (作者) | 王財得 | zh_TW |
dc.contributor.author (作者) | Wang, Tsai-Te | en_US |
dc.creator (作者) | 王財得 | zh_TW |
dc.creator (作者) | Wang, Tsai-Te | en_US |
dc.date (日期) | 2007 | en_US |
dc.date.accessioned | 19-Sep-2009 12:10:47 (UTC+8) | -
dc.date.available | 19-Sep-2009 12:10:47 (UTC+8) | -
dc.date.issued (上傳時間) | 19-Sep-2009 12:10:47 (UTC+8) | -
dc.identifier (其他 識別碼) | G0095971003 | en_US |
dc.identifier.uri (URI) | https://nccur.lib.nccu.edu.tw/handle/140.119/37112 | - |
dc.description (描述) | 碩士 | zh_TW |
dc.description (描述) | 國立政治大學 | zh_TW |
dc.description (描述) | 資訊科學學系 | zh_TW |
dc.description (描述) | 95971003 | zh_TW |
dc.description (描述) | 96 | zh_TW |
dc.description.abstract (摘要) | 在電腦視覺的研究之中,有關物件的偵測與追蹤應用在速度及可靠性上的追求一直是相當具有挑戰性的問題,而現階段發展以視覺為基礎互動式的應用,所使用到技術諸如:類神經網路、SVM及貝氏網路等。 本論文中我們持續深入此領域,並提出及發展一個方向性邊緣特徵集(DEM)與修正後的AdaBoost訓練演算法相互結合,期能有效提高物件偵測與識別的速度及準確性,在實際驗證中,我們將之應用於多種角度之人臉偵測,以及臉部表情識別等兩個主要問題之上;在人臉偵測的應用中,我們使用CMU的臉部資料庫並與Viola & Jones方法進行分析比較,在準確率上,我們的方法擁有79% 的recall及90% 的precision,而Viola & Jones的方法則分別為81%及77%;在運算速度上,同樣處理512x384的影像,相較於Viola & Jones需時132ms,我們提出的方法則有較佳的82ms。 此外,於表情識別的應用中,我們結合運用Component-based及Action-unit model 兩種方法。前者的優勢在於提供臉部細節特徵的定位及追蹤變化,後者主要功用則為進行情緒表情的分類。我們對於四種不同情緒表情的辨識準確度如下:高興(83.6%)、傷心(72.7%)、驚訝(80%)、生氣(78.1%)。在實驗中,可以發現生氣及傷心兩種情緒較難區分,而高興與驚訝則較易識別。 | zh_TW
dc.description.abstract (摘要) | Rapid and robust detection and tracking of objects is a challenging problem in computer vision research. Techniques such as artificial neural networks, support vector machines and Bayesian networks have been developed to enable interactive vision-based applications. In this thesis, we tackle this issue by devising a novel feature descriptor named directional edge maps (DEM). When combined with a modified AdaBoost training algorithm, the proposed descriptor produces effective results in many object detection and recognition tasks. We have applied the newly developed method to two important object recognition problems, namely face detection and facial expression recognition. The DEM-based methodology conceived in this thesis is capable of detecting faces from multiple views. To test the efficacy of our face detection mechanism, we performed a comparative analysis with the Viola and Jones algorithm using the Carnegie Mellon University face database. The recall and precision of our approach are 79% and 90%, respectively, compared to 81% and 77% for the Viola and Jones algorithm. Our algorithm is also more efficient, requiring only 82 ms (compared to 132 ms for Viola and Jones) to process a 512x384 image. To achieve robust facial expression recognition, we combined component-based methods with action-unit model-based approaches. The component-based method is mainly used to locate important facial features and track their deformations; the action-unit model-based approach is then employed to carry out expression recognition. The accuracy of classifying the different emotion types is as follows: happiness 83.6%, sadness 72.7%, surprise 80%, and anger 78.1%. It turns out that anger and sadness are more difficult to distinguish, whereas happiness and surprise have higher recognition rates. | en_US
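The abstract pairs DEM features with a modified AdaBoost training algorithm. The thesis's specific modification is not reproduced here, but the generic discrete AdaBoost loop it builds on can be sketched as follows (threshold stumps on toy 1-D data; every name and parameter below is an illustrative assumption, not the thesis's implementation):

```python
import math

def adaboost(X, y, n_rounds=10):
    """Minimal discrete AdaBoost with threshold stumps on 1-D data.

    Generic sketch only: each round picks the stump with the lowest
    weighted error, weights it by alpha, and boosts the weights of
    misclassified samples."""
    n = len(X)
    w = [1.0 / n] * n              # uniform sample weights
    ensemble = []                  # list of (alpha, threshold, polarity)
    for _ in range(n_rounds):
        best = None
        # exhaustively pick the weighted-error-minimizing stump
        for thr in X:
            for pol in (1, -1):
                err = sum(wi for wi, xi, yi in zip(w, X, y)
                          if (pol if xi >= thr else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = max(err, 1e-10)      # avoid log(0) on a perfect stump
        if err >= 0.5:             # no weak learner beats chance: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, pol))
        # re-weight: increase weight of misclassified samples, then normalize
        w = [wi * math.exp(-alpha * yi * (pol if xi >= thr else -pol))
             for wi, xi, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted majority vote of the trained stumps."""
    score = sum(a * (p if x >= t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

# Toy usage: linearly separable 1-D data
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [-1, -1, -1, 1, 1, 1]
ens = adaboost(X, y, n_rounds=3)
```

In the thesis's setting the weak learners would instead be classifiers over DEM feature responses rather than raw 1-D thresholds.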
dc.description.tableofcontents | Chapter 1 Introduction
1.1 Research background and motivation
1.1.1 Multi-view face detection
1.1.2 Facial expression recognition system
1.1.3 Robot vision system applications
1.2 Main contributions
1.3 Thesis organization
Chapter 2 Feature Extraction and Object Recognition
2.1 Related feature extraction methods
2.1.1 Edge detection
2.1.2 Local Binary Pattern (LBP) feature extraction [7]
2.1.3 Haar-like features
2.2 Directional Edge Maps (DEM) feature extraction
2.3 Related object recognition methods and improvements
2.3.1 Support Vector Machine (SVM)
2.3.2 Neural network
2.3.3 AdaBoost algorithm
2.3.4 Modified AdaBoost algorithm
2.4 Architecture of the DEM-based object recognition system
Chapter 3 Multi-View Face Detection System
3.1 Related face detection and localization techniques
3.1.1 Color-space methods
3.1.2 Haar-like features with AdaBoost [5][9]
3.1.3 Neural-network methods
3.2 Multi-view face detection system architecture and methods
3.2.1 System architecture
3.2.2 Subsystem description
3.3 Accelerated detection methods
3.3.1 Algorithmic speedup
3.3.2 Fast filtering computation
3.3.3 Region-statistics fast filtering criteria
3.3.4 Region-statistics filtering results
3.3.5 Fast-test performance evaluation
3.3.6 CPU instruction-set acceleration
3.3.7 MMX-based fast DEM computation
3.4 Collection of face training samples
3.5 Comparison with Viola & Jones
3.6 Single-image face detection results
Chapter 4 Facial Expression Recognition System
4.1 Related work on facial expression recognition
4.1.1 Active Appearance Models (AAM) feature localization
4.1.2 Facial Action Coding System
4.1.3 Comparison of related work
4.2 Expression recognition system architecture and methods
4.3 Expression database collection
4.4 Component-based expression recognition
4.4.1 Proportion-based localization of expression features
4.4.2 DEM-based localization of facial components
4.4.3 Fine mouth localization
4.4.4 Fine eye localization with DEM
4.5 Precise face localization
4.5.1 Binarization
4.5.2 Locating face boundaries
4.5.3 Precise face localization results
4.5.4 Expression recognition rules from facial features
4.6 Action-unit model implementation
4.6.1 Action-unit model simplification
4.6.2 Action-unit classification
4.6.3 Action-unit model normalization
4.7 Image processing to augment action-unit samples
4.7.1 Image processing results
4.8 Training samples for expression-unit experiments
4.9 Expression recognition results and analysis
4.9.1 JAFFE database results
4.9.2 Yale database results
4.9.3 Results on self-collected photos
4.9.4 Multi-person expression recognition results
4.9.5 Results on frames collected from movies
4.9.6 Real-time video results
Chapter 5 Conclusion and Future Work
References | zh_TW
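Chapter 2 of the thesis defines the DEM descriptor itself. The exact formulation is only in the thesis, but the general idea of orientation-binned edge maps can be sketched as follows (Sobel gradients, four orientation bins, and the magnitude threshold are assumptions for illustration, not the thesis's parameters):

```python
import math

def directional_edge_maps(img, n_bins=4, mag_thresh=32.0):
    """Toy orientation-binned edge maps (generic sketch, NOT the thesis's
    exact DEM definition): compute Sobel gradients, then mark each
    sufficiently strong edge pixel in one of n_bins orientation maps."""
    h, w = len(img), len(img[0])
    maps = [[[0] * w for _ in range(h)] for _ in range(n_bins)]
    for yy in range(1, h - 1):
        for xx in range(1, w - 1):
            # 3x3 Sobel responses
            gx = (img[yy-1][xx+1] + 2*img[yy][xx+1] + img[yy+1][xx+1]
                  - img[yy-1][xx-1] - 2*img[yy][xx-1] - img[yy+1][xx-1])
            gy = (img[yy+1][xx-1] + 2*img[yy+1][xx] + img[yy+1][xx+1]
                  - img[yy-1][xx-1] - 2*img[yy-1][xx] - img[yy-1][xx+1])
            mag = math.hypot(gx, gy)
            if mag < mag_thresh:
                continue                               # weak edge: skip
            theta = math.atan2(gy, gx) % math.pi       # orientation in [0, pi)
            b = min(int(theta / (math.pi / n_bins)), n_bins - 1)
            maps[b][yy][xx] = 1
    return maps

# Toy usage: a vertical step edge lands in the 0-radian orientation map
img = [[0] * 4 + [255] * 4 for _ in range(8)]
maps = directional_edge_maps(img)
```

Per-orientation binary maps like these are cheap to compute and scan, which is consistent with the real-time emphasis of the thesis.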
dc.format.extent | 145630 bytes | - |
dc.format.extent | 145886 bytes | - |
dc.format.extent | 149747 bytes | - |
dc.format.extent | 53080 bytes | - |
dc.format.extent | 160704 bytes | - |
dc.format.extent | 174250 bytes | - |
dc.format.extent | 375609 bytes | - |
dc.format.extent | 1466025 bytes | - |
dc.format.extent | 4354548 bytes | - |
dc.format.extent | 148860 bytes | - |
dc.format.extent | 158580 bytes | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.language.iso | en_US | - |
dc.source.uri (資料來源) | http://thesis.lib.nccu.edu.tw/record/#G0095971003 | en_US |
dc.subject (關鍵詞) | 物件偵測 | zh_TW |
dc.subject (關鍵詞) | 方向性邊緣特徵 | zh_TW |
dc.subject (關鍵詞) | 人臉偵測 | zh_TW |
dc.subject (關鍵詞) | 表情識別 | zh_TW |
dc.subject (關鍵詞) | Object Detection | en_US |
dc.subject (關鍵詞) | Directional Edge Maps | en_US |
dc.subject (關鍵詞) | Face Detection | en_US |
dc.subject (關鍵詞) | Facial Expression Recognition | en_US |
dc.subject (關鍵詞) | AdaBoost | en_US |
dc.title (題名) | 基於方向性邊緣特徵之即時物件偵測與追蹤 | zh_TW |
dc.title (題名) | Real-Time Object Detection and Tracking using Directional Edge Maps | en_US |
dc.type (資料類型) | thesis | en |
dc.relation.reference (參考文獻) | [1] Y. Freund and R. E. Schapire, “Experiments with a new boosting algorithm”, Proceedings of the 13th International Conference on Machine Learning, pp. 148-156, Morgan Kaufmann, 1996. | zh_TW
dc.relation.reference (參考文獻) | [2] Y. Freund. “Boosting a weak learning algorithm by majority”, Information and Computation, September 1995. | zh_TW |
dc.relation.reference (參考文獻) | [3] Y. Freund, R. Iyer, R.E. Schapire and Y. Singer. “An Efficient Boosting Algorithm for Combining Preferences”, Proceedings of International Conference on Machine Learning, 1998. | zh_TW |
dc.relation.reference (參考文獻) | [4] Y. Freund and R. E. Schapire, “A Short Introduction to Boosting”, Journal of Japanese Society for Artificial Intelligence, Vol. 14(5), pp. 771-780, 1999. | zh_TW
dc.relation.reference (參考文獻) | [5] P. Viola, and M. Jones, “Rapid Object Detection using a Boosted Cascade of Simple Features”, Proceedings of Computer Vision and Pattern Recognition , Vol.1, pp.I-511~I-518, Dec. 2001. | zh_TW |
dc.relation.reference (參考文獻) | [6] T. Ojala and M. Pietikainen, “Unsupervised Texture Segmentation Using Feature Distributions”, Pattern Recognition, Vol.32, pp. 477-486, 1999. | zh_TW |
dc.relation.reference (參考文獻) | [7] T. Ojala and M. Pietikainen, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns” IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 24, pp. 971–987, 2002. | zh_TW |
dc.relation.reference (參考文獻) | [8] 紀煜豪, 不同光源環境下的即時膚色辨識 (Real-time skin color detection under different lighting conditions), Master's thesis, 國立政治大學資訊科學研究所 (Institute of Computer Science, National Chengchi University), 2007. | zh_TW
dc.relation.reference (參考文獻) | [9] P. Viola and M. Jones, “Robust Real-Time Face Detection”, Proceedings of Computer Vision and Pattern Recognition, Vol.2, pp.747-765 , July 11, 2003 | zh_TW |
dc.relation.reference (參考文獻) | [10] H. Rowley, S. Baluja, and T. Kanade, “Neural Network-based Face Detection”, Proceeding of Computer Vision and Pattern Recognition, pp. 963-963, June 1998. | zh_TW |
dc.relation.reference (參考文獻) | [11] M. Pontil and A. Verri, "Support Vector Machines for 3D Object Recognition" Transactions on Pattern Analysis and Machine Intelligence, Vol.20(6), pp.637-646, 1998. | zh_TW |
dc.relation.reference (參考文獻) | [12] J. Pearl and S. Russell, "Bayesian Networks", in M. A. Arbib (ed.), Handbook of Brain Theory and Neural Networks, pp. 157-160, 2003. | zh_TW
dc.relation.reference (參考文獻) | [13] http://face-and-emotion.com/dataface/facs/new_version.jsp | zh_TW |
dc.relation.reference (參考文獻) | [14] P. Ekman and W. V. Friesen, “Constants Across Cultures in the Face and Emotion”, Journal of personality and Social Psychology, pp.124-129, 1971. | zh_TW |
dc.relation.reference (參考文獻) | [15] G. Littlewort, M. Bartlett, I. Fasel, J. Susskind and Javier Movellan. ”Dynamics of Facial Expression Extracted Automatically from Video”, Proceeding of Computer Vision and Pattern Recognition, pp.80, 2004. | zh_TW |
dc.relation.reference (參考文獻) | [16] S.C. Tai, K.C. Chung. ”Automatic Facial Expression Recognition System Using Neural Networks”, Proceedings of TENCON, pp.1~4, 2007. | zh_TW |
dc.relation.reference (參考文獻) | [17] M. P. Loh, Y. Wong and C. Wong, “Facial Expression Recognition for E-learning Systems using Gabor Wavelet Neural Network”, Proceedings of International Conference on Advanced Learning Technologies, pp.523~525, 2006. | zh_TW |
dc.relation.reference (參考文獻) | [18] J. Y. Chang and J. Chen, “Automated Facial Expression Recognition System Using Neural Networks”, Journal of the Chinese Institute of Engineers, 2001. | zh_TW |
dc.relation.reference (參考文獻) | [19] F. Tang and B. Deng, “Facial Expression Recognition using AAM and Local Facial Features”, Proceedings of Interdisciplinary Center for Neural Computation, pp.632~635, 2007. | zh_TW |
dc.relation.reference (參考文獻) | [20] W. Liu and Z. Wang, “Facial Expression Recognition Based on Fusion of Multiple Gabor Features“, Proceeding of Computer Vision and Pattern Recognition, Vol.3, pp.536~539, 2006. | zh_TW |
dc.relation.reference (參考文獻) | [21] C. Shan, S. Gong and P. W. McOwan, “Recognizing Facial Expressions at Low Resolution”, Proceedings of Advanced Video and Signal Based Surveillance, pp. 330-335, 2005. | zh_TW
dc.relation.reference (參考文獻) | [22] Q. Xu, P. Zhang, W. Pei, L. Yang and Z. He, “An Automatic Facial Expression Recognition Approach Based on Confusion-Crossed Support Vector Machine Tree”, Proceedings of International Conference on Acoustics, Speech, and Signal Processing, Vol.1, pp.I-625~I-628, 2007. | zh_TW |
dc.relation.reference (參考文獻) | [23] I. Kotsia, N. Nikolaidis and I. Pitas, ”Facial Expression Recognition in Videos Using a Novel Multi-Class Support Vector Machines Variant”. Proceedings of International Conference on Acoustics, Speech, and Signal Processing, Vol.2, pp.II-585~II-588, 2007. | zh_TW |
dc.relation.reference (參考文獻) | [24] J. Tang, Z. Youwei, “Facial Expression Recognition Based on Motion Information Between Frames”, Proceedings of 8th International Conference on Signal Processing, vol.3, pp.16-20, 2006. | zh_TW |
dc.relation.reference (參考文獻) | [25] A. Kanaujia, D. Metaxas, “Recognizing Facial Expressions by Tracking Feature Shapes”, Proceedings of Computer Vision and Pattern Recognition, Vol.2, pp.33-38, 2006. | zh_TW |
dc.relation.reference (參考文獻) | [26] S. Jung, D. H. Kim, K. Ho and M. J. Chung, “Efficient Rectangle Feature Extraction for Real-time Facial Expression Recognition based on AdaBoost”, Proceedings of Intelligent Robots and Systems, pp.1941-1946, Aug. 2005. | zh_TW |
dc.relation.reference (參考文獻) | [27] H. Deng, J. Zhu, M. R. Lyu and I. King, “Two-stage Multi-class AdaBoost for Facial Expression Recognition”, Proceedings of International Joint Conference on Neural Networks, pp. 3005-3010, 2007. | zh_TW |
dc.relation.reference (參考文獻) | [28] P. S. Aleksic and A. K. Katsaggelos, “Automatic Facial Expression Recognition Using Facial Animation Parameters and Multistream HMMs”, Transactions on Information Forensics and Security, Vol.1, Issue:1, pp.3~11, 2006. | zh_TW |
dc.relation.reference (參考文獻) | [29] Q. X. Xu and J. Wei, “Application of Wavelet Energy Feature in Facial Expression Recognition”, Proceedings of IEEE International Workshop on Anti-counterfeiting Security, Identification, pp.169~174, 2007. | zh_TW |
dc.relation.reference (參考文獻) | [30] K. S. Cho, Y. Kim and Yang-Bok Lee, “Real-time Expression Recognition System using Active Appearance Model and EFM”, Proceedings of Computational Intelligence and Security, Vol.1, pp. 747-750, 2006. | zh_TW
dc.relation.reference (參考文獻) | [31] F. De la Torre, J. Campoy, Z. Ambadar and J. F. Cohn, “Temporal Segmentation of Facial Behavior”, Proceedings of International Conference on Computer Vision, pp. 1-8, 2007. | zh_TW
dc.relation.reference (參考文獻) | [32] X. Feng, B. Lv and Z. Li, “Automatic Facial Expression Recognition Using Both Local and Global Information”, Proceedings of Chinese Control Conference, pp.1878~1881, 2006. | zh_TW |
dc.relation.reference (參考文獻) | [33] G. Zhou, Y. Zhan and J. Zhang, “Facial Expression Recognition Based on Selective Feature Extraction”, Proceedings of Intelligent Systems Design and Applications, Vol.2, pp.412~417, 2006. | zh_TW |
dc.relation.reference (參考文獻) | [34] F. Dornaika, F. Davoine, “Facial Expression Recognition using Auto-regressive Models”, Proceeding of Computer Vision and Pattern Recognition, Vol.2, pp.520-523, 2006. | zh_TW |
dc.relation.reference (參考文獻) | [35] M. A. Turk, A. P. Pentland, “Face Recognition Using Eigenfaces” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-591, 1991. | zh_TW |
dc.relation.reference (參考文獻) | [36] T. F. Cootes, G. J. Edwards, and C. J. Taylor. ”Active Appearance Models” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.23, Issue:6, pp.681-685, 2001. | zh_TW |
dc.relation.reference (參考文獻) | [37] T. F. Cootes, C. J. Taylor, D. H. Cooper and J. Graham, ”Training models of shape from sets of examples”, Proceedings of British Machine Vision Conference, pp. 266–275, 1992. | zh_TW |
dc.relation.reference (參考文獻) | [38] S. Baker, R. Gross, and I. Matthews, “Lucas-Kanade 20 Years On: A Unifying Framework: Part 3” tech. report Carnegie Mellon University, 2003. | zh_TW |