
Title 以一致性進行特徵選取
Feature Selection Based On Kappa Statistics
Author 沈妤芳
Shen, Yu-Fang
Contributors 周珮婷; 林怡伶
沈妤芳
Shen, Yu-Fang
Keywords Machine learning
Feature selection
Kappa
Consistency
Date 2018
Upload time 29-Aug-2018 15:47:32 (UTC+8)
Abstract This study focuses on feature selection. Classification is a central task in machine learning, and many existing methods approach it by removing redundant features: eliminating those that are relatively unimportant with respect to one another while retaining variables that differ substantially in relevance. Here, a classifier is fitted to each individual feature, the Kappa statistic computed on the classification results is used as an indicator of the consistency between features, and features whose classification results agree are grouped together to form the selected subset. Compared with other feature selection methods such as Random Forest, ReliefF, mRMR, and an algorithm built on Symmetric Uncertainty, the proposed approach performs well in terms of both accuracy and the size of the selected variable subset.
Feature selection plays an important role in supervised learning by eliminating irrelevant features and improving classification results. The current study proposed a feature selection method based on Kappa statistics to select consistent features. An SVM was used as a single-variable classifier, and the Kappa statistic was computed from the fitted results as an indicator of the relationship between features. The proposed method was compared with other methods such as Random Forest, ReliefF, mRMR, and a Symmetric Uncertainty-based method. The results showed that the proposed method can effectively select important features and achieve stable prediction performance.
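The record does not include the thesis code, so the following is only an illustrative sketch of the core idea described in the abstract: fit one SVM per feature and score the pairwise agreement between the single-feature classifiers with Cohen's kappa. The dataset, train/test split, SVM settings, and the exact grouping rule are assumptions made for illustration, not the author's implementation.

```python
# Illustrative sketch: pairwise kappa between single-feature SVM classifiers.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Fit one SVM per feature and collect its test-set predictions.
preds = []
for j in range(X.shape[1]):
    clf = SVC().fit(X_train[:, [j]], y_train)
    preds.append(clf.predict(X_test[:, [j]]))

# Pairwise Cohen's kappa between single-feature classifiers:
# a high kappa means the two features lead to consistent predictions,
# so they could be treated as one group during selection.
for i in range(len(preds)):
    for j in range(i + 1, len(preds)):
        k = cohen_kappa_score(preds[i], preds[j])
        print(f"features {i} and {j}: kappa = {k:.2f}")
```

Features with high mutual kappa classify samples consistently, so keeping one representative per high-agreement group is one plausible way to shrink the variable subset while preserving accuracy.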
References Acid, S., De Campos, L.M., & Fernandez, M. (2011). Minimum redundancy maximum relevancy versus score-based methods for learning Markov boundaries. 2011 11th International Conference on Intelligent Systems Design and Applications, pp. 619-623.
     Breiman, L. (2001). Random Forests. Machine Learning, 45(1), pp. 5-32.
     Carletta, J. (1996). Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22(2), pp. 249-254.
     Cohen, J. (1960). A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1), pp. 37-46.
     Durgabai, R.P.L. (2014). Feature Selection using ReliefF Algorithm. International Journal of Advanced Research in Computer and Communication Engineering, 3(10).
     Fushing, H., & McAssey, M. (2010). Time, temperature, and data cloud geometry. Physical Review E, 82, 061110.
     Genuer, R., Poggi, J.M., & Tuleau-Malot, C. (2010). Variable selection using Random Forests. Pattern Recognition Letters, 31(14), pp. 2225-2236.
     Kononenko, I., & Robnik-Sikonja, M. (2003). Theoretical and Empirical Analysis of ReliefF and RReliefF. Machine Learning, 53, pp. 23-69.
     Kononenko, I., Robnik-Sikonja, M., & Pompe, U. (1996). ReliefF for estimation and discretization of attributes in classification, regression and ILP problems. Artificial Intelligence: Methodology, Systems, Applications: Proceedings of AIMSA'96, pp. 31-40.
     McHugh, M. (2012). Interrater reliability: The kappa statistic. Biochemia Medica, 22(3), pp. 276-282.
     Peng, H., Long, F., & Ding, C. (2005). Feature Selection Based on Mutual Information: Criteria of Max-Dependency, Max-Relevance, and Min-Redundancy. IEEE Transactions On Pattern Analysis And Machine Intelligence, 27(8), pp. 1226-1238.
     Sim, J., & Wright, C. (2005). The Kappa Statistic in Reliability Studies: Use, Interpretation, and Sample Size Requirements. Physical Therapy, 85, pp. 257-268.
     Singh, B., Kushwaha, N., & Vyas, O.P. (2014). A Feature Subset Selection Technique for High Dimensional Data Using Symmetric Uncertainty. Journal of Data Analysis and Information Processing, 2, pp. 95-105. doi:http://dx.doi.org/10.4236/jdaip.2014.24012
Description Master's thesis
National Chengchi University (國立政治大學)
Department of Statistics (統計學系)
105354014
Source http://thesis.lib.nccu.edu.tw/record/#G0105354014
Data type thesis
dc.contributor.advisor 周珮婷; 林怡伶zh_TW
dc.contributor.author (Authors) 沈妤芳zh_TW
dc.contributor.author (Authors) Shen, Yu-Fangen_US
dc.creator (Author) 沈妤芳zh_TW
dc.creator (Author) Shen, Yu-Fangen_US
dc.date (Date) 2018en_US
dc.date.accessioned 29-Aug-2018 15:47:32 (UTC+8)
dc.date.available 29-Aug-2018 15:47:32 (UTC+8)
dc.date.issued (Upload time) 29-Aug-2018 15:47:32 (UTC+8)
dc.identifier (Other Identifiers) G0105354014en_US
dc.identifier.uri (URI) http://nccur.lib.nccu.edu.tw/handle/140.119/119716
dc.description (Description) Master's thesiszh_TW
dc.description (Description) National Chengchi University (國立政治大學)zh_TW
dc.description (Description) Department of Statistics (統計學系)zh_TW
dc.description (Description) 105354014zh_TW
dc.description.abstract (Abstract) This study focuses on feature selection. Classification is a central task in machine learning, and many existing methods approach it by removing redundant features: eliminating those that are relatively unimportant with respect to one another while retaining variables that differ substantially in relevance. Here, a classifier is fitted to each individual feature, the Kappa statistic computed on the classification results is used as an indicator of the consistency between features, and features whose classification results agree are grouped together to form the selected subset. Compared with other feature selection methods such as Random Forest, ReliefF, mRMR, and an algorithm built on Symmetric Uncertainty, the proposed approach performs well in terms of both accuracy and the size of the selected variable subset.zh_TW
dc.description.abstract (Abstract) Feature selection plays an important role in supervised learning by eliminating irrelevant features and improving classification results. The current study proposed a feature selection method based on Kappa statistics to select consistent features. An SVM was used as a single-variable classifier, and the Kappa statistic was computed from the fitted results as an indicator of the relationship between features. The proposed method was compared with other methods such as Random Forest, ReliefF, mRMR, and a Symmetric Uncertainty-based method. The results showed that the proposed method can effectively select important features and achieve stable prediction performance.en_US
dc.description.tableofcontents Abstract (Chinese) I
     Abstract (English) II
     Table of Contents III
     List of Tables IV
     List of Figures V
     Chapter 1 Introduction 1
     Chapter 2 Literature Review 2
     Chapter 3 Methodology 6
     Section 1 Dataset Description 6
     Section 2 Methods 7
     Section 3 Research Procedure 11
     Chapter 4 Results 16
     Chapter 5 Conclusions and Suggestions 24
     Section 1 Conclusions 24
     Section 2 Limitations and Suggestions 26
     References 27zh_TW
dc.source.uri (Source) http://thesis.lib.nccu.edu.tw/record/#G0105354014en_US
dc.subject (Keywords) Machine learningen_US
dc.subject (Keywords) Feature selectionen_US
dc.subject (Keywords) Kappaen_US
dc.subject (Keywords) Consistencyen_US
dc.title (Title) 以一致性進行特徵選取zh_TW
dc.title (Title) Feature Selection Based On Kappa Statisticsen_US
dc.type (Data type) thesisen_US
dc.relation.reference (References) Acid, S., De Campos, L.M., & Fernandez, M. (2011). Minimum redundancy maximum relevancy versus score-based methods for learning Markov boundaries. 2011 11th International Conference on Intelligent Systems Design and Applications, pp. 619-623.
     Breiman, L. (2001). Random Forests. Machine Learning, 45(1), pp. 5-32.
     Carletta, J. (1996). Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22(2), pp. 249-254.
     Cohen, J. (1960). A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1), pp. 37-46.
     Durgabai, R.P.L. (2014). Feature Selection using ReliefF Algorithm. International Journal of Advanced Research in Computer and Communication Engineering, 3(10).
     Fushing, H., & McAssey, M. (2010). Time, temperature, and data cloud geometry. Physical Review E, 82, 061110.
     Genuer, R., Poggi, J.M., & Tuleau-Malot, C. (2010). Variable selection using Random Forests. Pattern Recognition Letters, 31(14), pp. 2225-2236.
     Kononenko, I., & Robnik-Sikonja, M. (2003). Theoretical and Empirical Analysis of ReliefF and RReliefF. Machine Learning, 53, pp. 23-69.
     Kononenko, I., Robnik-Sikonja, M., & Pompe, U. (1996). ReliefF for estimation and discretization of attributes in classification, regression and ILP problems. Artificial Intelligence: Methodology, Systems, Applications: Proceedings of AIMSA'96, pp. 31-40.
     McHugh, M. (2012). Interrater reliability: The kappa statistic. Biochemia Medica, 22(3), pp. 276-282.
     Peng, H., Long, F., & Ding, C. (2005). Feature Selection Based on Mutual Information: Criteria of Max-Dependency, Max-Relevance, and Min-Redundancy. IEEE Transactions On Pattern Analysis And Machine Intelligence, 27(8), pp. 1226-1238.
     Sim, J., & Wright, C. (2005). The Kappa Statistic in Reliability Studies: Use, Interpretation, and Sample Size Requirements. Physical Therapy, 85, pp. 257-268.
     Singh, B., Kushwaha, N., & Vyas, O.P. (2014). A Feature Subset Selection Technique for High Dimensional Data Using Symmetric Uncertainty. Journal of Data Analysis and Information Processing, 2, pp. 95-105. doi:http://dx.doi.org/10.4236/jdaip.2014.24012zh_TW
dc.identifier.doi (DOI) 10.6814/THE.NCCU.STAT.016.2018.B03