dc.contributor.advisor | 蔡子傑 | zh_TW |
dc.contributor.advisor | Tsai, Tzu Chieh | en_US |
dc.contributor.author (Authors) | 陳建家 | zh_TW |
dc.contributor.author (Authors) | Chen, Jian Jia | en_US |
dc.creator (Author) | 陳建家 | zh_TW |
dc.creator (Author) | Chen, Jian Jia | en_US |
dc.date (Date) | 2008 | en_US |
dc.date.accessioned | 17-Sep-2009 14:05:43 (UTC+8) | - |
dc.date.available | 17-Sep-2009 14:05:43 (UTC+8) | - |
dc.date.issued (Upload Time) | 17-Sep-2009 14:05:43 (UTC+8) | - |
dc.identifier (Other Identifiers) | G0095753038 | en_US |
dc.identifier.uri (URI) | https://nccur.lib.nccu.edu.tw/handle/140.119/32703 | - |
dc.description (Description) | Master's | zh_TW |
dc.description (Description) | National Chengchi University | zh_TW |
dc.description (Description) | Department of Computer Science | zh_TW |
dc.description (Description) | 95753038 | zh_TW |
dc.description (Description) | 97 | zh_TW |
dc.description.abstract (Abstract) | A thoughtful smart living environment must be able to provide appropriate services under different emotional states, so we aim to develop an emotion recognition system that observes internal emotional states through changes in externally sensed physiological data. First, we adopted the International Affective Picture System (IAPS) and a dimensional analysis approach; through the manipulation of psychological experiments, we collected physiological measurements and subjective ratings of emotional arousal and valence from 20 subjects. We propose an emotion recognition learning algorithm that extracts the features of each emotion through cross-validation training and personalizes those emotion features with real-time test data; an evaluation of the learning trend shows a clear improvement in accuracy. Second, we further applied the concept of transformation between dimensional and discrete emotions to validate the subjects' ratings. Compared with the experimental results of related work, our arousal and valence recognition rates in the dimensional analysis are higher, and our discrete-emotion validation also achieves clear discrimination. Most importantly, the system we implemented carries wireless physiological sensors, making it more mobile in use and able to reflect emotions in real time to provide on-line smart services. | zh_TW |
dc.description.abstract (Abstract) | A smart living environment should be able to provide thoughtful services by considering different emotional states. The goal of our research is to develop an emotion recognition system that can detect internal emotional states from variations in external physiological data. First, we applied the dimensional analysis approach and adopted the IAPS (International Affective Picture System) to manipulate psychological experiments. We collected physiological data and subjective ratings of arousal and valence from 20 subjects. We propose an emotion recognition learning algorithm that extracts the pattern of each emotion through cross-validation training and can further adapt by learning from personalized testing data. We measured the learning trend of each subject; the recognition rate shows incremental improvement. Furthermore, we adopted a dimensional-to-discrete emotion transformation concept to validate the subjective ratings. Compared with the experimental results of related work, our system performs better in both the dimensional and discrete analyses. Most importantly, the system is implemented with wireless physiological sensors for mobile usage, and it can reflect emotional states in real time in order to provide on-line smart services. | en_US |
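The abstract describes a transform from dimensional (arousal/valence) ratings to discrete emotion categories via probability density functions (Sections 3.3.1-3.3.2 of the thesis). A minimal sketch of that idea in Python, assuming one independent Gaussian per axis for each emotion; the emotion set and the means and standard deviations below are illustrative placeholders, not the IAPS-derived statistics used in the thesis:

```python
import math

# Illustrative placeholder statistics on a 1-9 rating scale, NOT the
# IAPS-derived values from the thesis:
# emotion -> ((valence_mean, valence_std), (arousal_mean, arousal_std))
EMOTIONS = {
    "happy":    ((7.5, 1.0), (6.0, 1.5)),
    "sad":      ((2.5, 1.2), (4.0, 1.5)),
    "surprise": ((6.0, 1.5), (7.0, 1.2)),
    "neutral":  ((5.0, 1.0), (3.0, 1.0)),
}

def gaussian_pdf(x, mean, std):
    """Probability density of N(mean, std^2) at x."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def to_discrete(valence, arousal):
    """Map a dimensional rating to the discrete emotion with the highest
    joint density, treating valence and arousal as independent."""
    def score(stats):
        (v_mean, v_std), (a_mean, a_std) = stats
        return gaussian_pdf(valence, v_mean, v_std) * gaussian_pdf(arousal, a_mean, a_std)
    return max(EMOTIONS, key=lambda e: score(EMOTIONS[e]))

print(to_discrete(7.8, 6.2))  # a high-valence, high-arousal rating
```

Under these placeholder statistics, a high-valence, high-arousal rating falls under the emotion whose Gaussians assign it the largest joint density.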
dc.description.tableofcontents | CHAPTER 1 Introduction
1.1. Background
1.1.1. Emotion
1.1.2. Physiological Signals
1.1.3. Physiological Pattern
1.2. Motivation
1.3. Organization
CHAPTER 2 Related Work
2.1. Pattern Recognition
2.1.1. Feature Selection
2.1.2. Fisher Linear Discriminant Analysis
2.1.3. Support Vector Machine
2.2. Emotion Induction
CHAPTER 3 On-line Emotion Recognition Algorithm
3.1. Training Phase
3.1.1. Physiological Data
3.1.2. Feature Extraction
3.1.3. Cross Validation
3.1.4. Remove Ambiguous Data
3.1.5. Threshold
3.2. Testing Phase
3.2.1. Training Model
3.2.2. Model Rebuild
3.2.3. Learning Trend (k)
3.3. Dimensional Emotion to Discrete Emotion
3.3.1. International Affective Picture System (IAPS)
3.3.2. Probability Density Function
CHAPTER 4 Recognition System Implementation
4.1. System Structure
4.1.1. Training Instances
4.1.2. The Application Program Interface (API)
4.2. Recognizing the Incoming Instances
CHAPTER 5 Experimental Evaluation
5.1. Experimental Setup
5.2. Experiment Results
5.2.1. Results of Recognition Rate and Learning Rate on Valence
5.2.2. Results of Recognition Rate and Learning Rate on Arousal
5.2.3. Result of Dimensional Emotion to Discrete Emotion
CHAPTER 6 Conclusions
References

LIST OF TABLES
Table 3-1: The mean and standard deviation of valence for discrete emotion states from IAPS.
Table 3-2: Mean and standard deviation for valence and arousal from IAPS.
Table 5-1: Experimental setup.
Table 5-2: Comparison of recognition in different valence.
Table 5-3: Comparison of recognition in different arousal.
Table 5-4: Statistic parameters of arousal and valence.

LIST OF FIGURES
Figure 1-1: Physiological signal devices.
Figure 1-2: ECG (Electrocardiogram).
Figure 1-3: BVP (Blood Volume Pulse).
Figure 1-4: EMG (Electromyogram).
Figure 1-5: SC or SCL (Skin Conductivity).
Figure 1-6: RESP (Respiration).
Figure 2-1: Features data set.
Figure 3-1: Proposed algorithm.
Figure 3-2: Physiological sensors: (a) Respiration (RESP), (b) MULTI (BVP, SCL, TEMP), (c) EMG (EMG1, EMG2).
Figure 3-3: The training phase.
Figure 3-4: Removing the farthest instance from the center of gravity.
Figure 3-5: The default model modification.
Figure 3-6: The default model modification detail.
Figure 3-7: An example of k.
Figure 3-8: Emotion model.
Figure 3-9: IAPS average rating distribution.
Figure 3-10: Discrete emotion states distribution.
Figure 3-11: The probability distribution of surprise for arousal and valence.
Figure 3-12: The probability distribution of sad for arousal and valence.
Figure 4-1: System implementation.
Figure 4-2: Application flow.
Figure 4-3: System implementation blocks.
Figure 5-1: IAPS database.
Figure 5-2: Subjective rating table: Class (a), Valence (b), Arousal (c).
Figure 5-3: System server GUI.
Figure 5-4: Objective learning rate of valence.
Figure 5-5: Subjective learning rate of valence.
Figure 5-6: Comparison of learning rates of valence.
Figure 5-7: Recognition of each valence.
Figure 5-8: Subjective learning rate of arousal.
Figure 5-9: Objective learning rate of arousal.
Figure 5-10: Comparison of learning rates of arousal.
Figure 5-11: Recognition rate of each arousal.
Figure 5-12: Dimensional to discrete emotion. | zh_TW |
dc.format.extent | 98711 bytes | - |
dc.format.extent | 63760 bytes | - |
dc.format.extent | 118799 bytes | - |
dc.format.extent | 125243 bytes | - |
dc.format.extent | 322419 bytes | - |
dc.format.extent | 534579 bytes | - |
dc.format.extent | 785487 bytes | - |
dc.format.extent | 281718 bytes | - |
dc.format.extent | 377546 bytes | - |
dc.format.extent | 64463 bytes | - |
dc.format.extent | 86503 bytes | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.format.mimetype | application/pdf | - |
dc.language.iso | en_US | - |
dc.source.uri (Source) | http://thesis.lib.nccu.edu.tw/record/#G0095753038 | en_US |
dc.subject (Keywords) | emotion | zh_TW |
dc.subject (Keywords) | physiological sensor | zh_TW |
dc.subject (Keywords) | algorithm | zh_TW |
dc.subject (Keywords) | real-time recognition | zh_TW |
dc.subject (Keywords) | smart living environment | zh_TW |
dc.subject (Keywords) | emotion | en_US |
dc.subject (Keywords) | physiological sensors | en_US |
dc.subject (Keywords) | algorithm | en_US |
dc.subject (Keywords) | on-line recognition | en_US |
dc.subject (Keywords) | smart environment | en_US |
dc.title (Title) | On-line Emotion Recognition System Using Physiological Sensing Data | zh_TW |
dc.title (Title) | On-line Emotion Recognition System by Physiological Signals | en_US |
dc.type (Type) | thesis | en |
dc.relation.reference (References) | [1] J. LeDoux, The Emotional Brain. New York: Simon & Schuster, 1996. | zh_TW |
dc.relation.reference (References) | [2] P. Salovey and J. D. Mayer, "Emotional intelligence," Imagination, Cognition and Personality, vol. 9, no. 3, pp. 185-211, 1990. | zh_TW |
dc.relation.reference (References) | [3] D. Goleman, Emotional Intelligence. New York: Bantam Books, 1995. | zh_TW |
dc.relation.reference (References) | [4] K. R. Scherer, "Speech and emotional states," ch. 10 in J. K. Darby, Ed., Speech Evaluation in Psychiatry, pp. 189-220. Grune and Stratton, Inc., 1981. | zh_TW |
dc.relation.reference (References) | [5] Y. Yacoob and L. S. Davis, "Recognizing human facial expressions from long image sequences using optical flow," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 6, pp. 636-642, June 1996. | zh_TW |
dc.relation.reference (References) | [6] I. A. Essa and A. Pentland, "Facial expression recognition using a dynamic model and motion energy," in International Conference on Computer Vision, pages 360-367, Cambridge, MA, 1995. IEEE Computer Society. | zh_TW |
dc.relation.reference (References) | [7] R. W. Picard, Affective Computing. The MIT Press, Cambridge, MA, 1997. | zh_TW |
dc.relation.reference (References) | [8] C. M. Jones and T. Troen, "Biometric valence and arousal recognition," in Proceedings of the 2007 Conference of the Computer-Human Interaction Special Interest Group (CHISIG) of Australia on Computer-Human Interaction, ACM International Conference Proceeding Series, vol. 251. | zh_TW |
dc.relation.reference (References) | [9] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, 1995. | zh_TW |
dc.relation.reference (References) | [10] H. C. Peng, F. Long, and C. Ding, "Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1226-1238, 2005. | zh_TW |
dc.relation.reference (References) | [11] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis. Wiley-Interscience, 1978. | zh_TW |
dc.relation.reference (References) | [12] K. H. Kim, S. W. Bang, and S. R. Kim, "Emotion recognition system using short-term monitoring of physiological signals," Med Biol Eng Comput, vol. 42, 2004. | zh_TW |
dc.relation.reference (References) | [13] P. J. Lang, M. M. Bradley, and B. N. Cuthbert, International Affective Picture System (IAPS): Technical Manual and Affective Ratings, Center for Research in Psychophysiology, University of Florida, 1999. | zh_TW |
dc.relation.reference (References) | [14] R. Kohavi, "A study of cross-validation and bootstrap for accuracy estimation and model selection," in Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, vol. 2, pp. 1137-1143. Morgan Kaufmann, San Mateo, 1995. | zh_TW |
dc.relation.reference (References) | [15] J. Chang, Y. Luo, and K. Su, "GPSM: a generalized probabilistic semantic model for ambiguity resolution," in Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics (Newark, Delaware, June 28 - July 02, 1992), pp. 177-184. Association for Computational Linguistics, Morristown, NJ. | zh_TW |
dc.relation.reference (References) | [16] P. A. Devijver and J. Kittler, Pattern Recognition: A Statistical Approach, Prentice-Hall, London, 1982. | zh_TW |
dc.relation.reference (References) | [17] 徐世平, "Application of Physiological Signal Monitoring in Smart Living Space," Master's thesis, Jun. 2008. | zh_TW |