dc.contributor | 統計系 | en_US |
dc.creator (作者) | 鄭宗記;蔡瑞煌 | - |
dc.creator (作者) | Tsaih,Rua-Huan; Cheng,Tsung-Chi | - |
dc.date (日期) | 2010.05 | en_US |
dc.date.accessioned | 6-Nov-2014 18:22:06 (UTC+8) | - |
dc.date.available | 6-Nov-2014 18:22:06 (UTC+8) | - |
dc.date.issued (上傳時間) | 6-Nov-2014 18:22:06 (UTC+8) | - |
dc.identifier.uri (URI) | http://nccur.lib.nccu.edu.tw/handle/140.119/71192 | - |
dc.description.abstract (摘要) | In the context of resistant learning, outliers are observations that lie far from the fitting function, which is deduced from a subset of the given observations and whose form is adaptable during the process. This study presents a resistant learning procedure for coping with outliers via a single-hidden layer feed-forward neural network (SLFN). The smallest trimmed sum of squared residuals principle is adopted as the guidance of the proposed procedure, whose key mechanisms are: an analysis mechanism that excludes any potential outliers at early stages of the process, a modeling mechanism that deduces enough hidden nodes to fit the reference observations, an estimating mechanism that tunes the associated weights of the SLFN, and a deletion diagnostics mechanism that checks whether the resulting SLFN is stable. The lake data set is used to demonstrate the resistant-learning performance of the proposed procedure. | en_US |
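The smallest trimmed sum of squared residuals principle mentioned in the abstract can be illustrated with a minimal sketch: instead of summing all squared residuals, only the smallest ones are kept, so the largest residuals (potential outliers) do not influence the criterion. This is an illustrative Python example, not the authors' implementation; the function name `trimmed_sse` and the choice of `n_keep` are assumptions for demonstration.

```python
import numpy as np

def trimmed_sse(residuals, n_keep):
    """Trimmed sum of squared residuals: sum the n_keep smallest
    squared residuals, discarding the largest ones so that
    potential outliers do not dominate the fitting criterion."""
    sq = np.sort(np.square(np.asarray(residuals, dtype=float)))
    return float(np.sum(sq[:n_keep]))

# A gross outlier (residual 10.0) is trimmed away, so the
# criterion reflects only the well-fitted observations.
value = trimmed_sse([0.1, -0.2, 10.0], n_keep=2)
```

Here the ordinary sum of squared residuals would be dominated by the outlier term 10.0² = 100, while the trimmed criterion keeps only 0.1² + 0.2² = 0.05, which is why the principle is resistant to outliers.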
dc.format.extent | 604734 bytes | - |
dc.format.mimetype | application/pdf | - |
dc.language.iso | en_US | - |
dc.relation (關聯) | Annals of Mathematics and Artificial Intelligence, 57(2), 161-180 | en_US |
dc.subject (關鍵詞) | Resistant learning; Outliers; Single-hidden layer feed-forward neural networks; Smallest trimmed sum of squared residuals principle; Deletion diagnostics | - |
dc.title (題名) | A Resistant Learning Procedure for Coping with Outliers | en_US |
dc.type (資料類型) | article | en |
dc.identifier.doi (DOI) | 10.1007/s10472-010-9183-0 | en_US |
dc.doi.uri (DOI) | http://dx.doi.org/10.1007/s10472-010-9183-0 | en_US |