Academic Output - Periodical Articles

Citation Information

Title: A Resistant Learning Procedure for Coping with Outliers
Authors: 鄭宗記 (Cheng, Tsung-Chi); 蔡瑞煌 (Tsaih, Rua-Huan)
Contributor: Department of Statistics (統計系)
Keywords: Resistant learning; Outliers; Single-hidden layer feed-forward neural networks; Smallest trimmed sum of squared residuals principle; Deletion diagnostics
Date: May 2010
Uploaded: 6-Nov-2014 18:22:06 (UTC+8)
Abstract: In the context of resistant learning, outliers are observations that lie far from the fitting function, which is deduced from a subset of the given observations and whose form is adaptable during the process. This study presents a resistant learning procedure for coping with outliers via a single-hidden layer feed-forward neural network (SLFN). The smallest trimmed sum of squared residuals principle guides the proposed procedure, and its key mechanisms are: an analysis mechanism that excludes any potential outliers at early stages of the process, a modeling mechanism that deduces enough hidden nodes to fit the reference observations, an estimating mechanism that tunes the associated weights of the SLFN, and a deletion diagnostics mechanism that checks whether the resulting SLFN is stable. The lake data set is used to demonstrate the resistant-learning performance of the proposed procedure. (An illustrative sketch of the trimming idea appears after this record.)
Relation: Annals of Mathematics and Artificial Intelligence, 57(2), 161-180
Type: article
DOI: http://dx.doi.org/10.1007/s10472-010-9183-0
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/71192
Format: application/pdf (604734 bytes)
Language: en_US
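
The abstract describes a procedure guided by the smallest trimmed sum of squared residuals: only the observations with the smallest squared residuals drive the fit, so gross outliers cannot distort the deduced function. The code below is a minimal, hedged sketch of that trimming idea only, not the authors' procedure: it fits a fixed-size single-hidden layer feed-forward network by plain gradient descent on a trimmed loss, and omits the paper's adaptive hidden-node growth, weight-tuning, and deletion-diagnostics mechanisms. The names and constants (n_hidden, keep_fraction, the learning rate, the toy data) are illustrative assumptions.

```python
# A minimal sketch of the "smallest trimmed sum of squared residuals" idea,
# NOT the paper's procedure: the SLFN here has a fixed number of hidden nodes
# and is trained by plain gradient descent on a trimmed loss, whereas the paper
# grows hidden nodes, tunes weights, and runs deletion diagnostics. All names
# and constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def trimmed_sse(residuals, keep_fraction=0.8):
    """Sum of the smallest keep_fraction of squared residuals; the largest
    residuals (potential outliers) are excluded from the criterion."""
    sq = np.sort(np.asarray(residuals) ** 2)
    k = int(np.ceil(keep_fraction * sq.size))
    return sq[:k].sum()

# Toy data: a smooth curve with a few gross outliers injected.
X = np.linspace(-2.0, 2.0, 60).reshape(-1, 1)
y = np.sin(2.0 * X[:, 0]) + 0.05 * rng.standard_normal(60)
y[::15] += 4.0                                   # the injected outliers

# Fixed-size single-hidden layer feed-forward network (SLFN) with tanh units.
n_hidden = 8
W1 = 0.5 * rng.standard_normal((1, n_hidden)); b1 = np.zeros(n_hidden)
w2 = 0.5 * rng.standard_normal(n_hidden);      b2 = 0.0

lr, keep_fraction = 0.05, 0.8
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                     # hidden activations
    pred = H @ w2 + b2                           # network output
    r = pred - y
    # Trimming: only the observations with the smallest squared residuals
    # contribute gradients, so outliers do not pull the fit toward them.
    k = int(np.ceil(keep_fraction * r.size))
    keep = np.argsort(r ** 2)[:k]
    mask = np.zeros_like(r); mask[keep] = 1.0
    g = 2.0 * r * mask / k                       # gradient of the trimmed mean of squared residuals
    gH = np.outer(g, w2) * (1.0 - H ** 2)        # back-propagate through tanh
    W1 -= lr * (X.T @ gH);  b1 -= lr * gH.sum(axis=0)
    w2 -= lr * (H.T @ g);   b2 -= lr * g.sum()

print("trimmed SSE after fitting:", trimmed_sse(np.tanh(X @ W1 + b1) @ w2 + b2 - y))
```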