Academic Output - Periodical Articles

Title: Early stopping of L2Boosting
Author(s): Chang, Yuan-Chin Ivan; Huang, Yufen; Huang, Yu-Pai
Contributor: Department of Statistics
Date: 2010-12
Date uploaded: 23-Dec-2014 15:19:20 (UTC+8)
Abstract: It is well known that boosting-like algorithms, such as AdaBoost and many of its modifications, may over-fit the training data when the number of boosting iterations becomes large. Consequently, how to stop a boosting algorithm at an appropriate iteration has been a longstanding problem over the past decade (see Meir and Rätsch, 2003). Bühlmann and Yu (2005) applied model selection criteria to estimate the stopping iteration for L2Boosting, but their approach still requires computing all boosting iterations under consideration on the training data. The main purpose of this paper is therefore to study an early stopping rule for L2Boosting during the training stage, in order to achieve a substantial computational saving. The proposed method is based on change point detection applied to the values of model selection criteria during the training stage. The method is also extended to two-class classification problems, which are very common in medical and bioinformatics applications. A simulation study and a real data example are provided to illustrate these approaches, and comparisons are made with LogitBoost.
Relation: Computational Statistics and Data Analysis, 54(10), 2203-2213
Type: article
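
The abstract above describes stopping L2Boosting by monitoring model selection criteria during training and detecting a change point in their values, rather than running all candidate iterations. The following is a minimal Python sketch of that idea, assuming componentwise least-squares L2Boosting and a corrected AIC with degrees of freedom taken as the trace of the boosting operator (in the spirit of Bühlmann and Yu, 2005). The paper's change point detector is replaced here by a simple "no improvement for a fixed number of iterations" heuristic, and the function and parameter names (l2boost_early_stop, nu, patience) are illustrative, not from the paper.

import numpy as np

def l2boost_early_stop(X, y, nu=0.1, max_iter=500, patience=10):
    # Minimal sketch: componentwise L2Boosting with an early-stopping
    # heuristic (stop once the corrected AIC has not improved for
    # `patience` iterations).  This plateau rule stands in for the
    # change point detection described in the paper; it is not the
    # authors' exact procedure.
    n, p = X.shape
    Xc = X - X.mean(axis=0)              # centre the predictors
    f = np.full(n, y.mean())             # start from the constant fit
    r = y - f                            # current residuals
    B = np.full((n, n), 1.0 / n)         # boosting operator (hat matrix) of the constant fit
    col_norm2 = (Xc ** 2).sum(axis=0)    # squared column norms
    coefs = np.zeros(p)
    best_aicc, best_iter, wait = np.inf, 0, 0

    for m in range(1, max_iter + 1):
        # Componentwise least squares: regress the residuals on each predictor
        # separately and keep the one with the smallest residual sum of squares.
        betas = Xc.T @ r / col_norm2
        sse = ((r[:, None] - Xc * betas) ** 2).sum(axis=0)
        j = int(np.argmin(sse))

        # Shrunken update of the fit, residuals, coefficients and boosting operator.
        f += nu * betas[j] * Xc[:, j]
        r = y - f
        coefs[j] += nu * betas[j]
        Hj = np.outer(Xc[:, j], Xc[:, j]) / col_norm2[j]
        B = B + nu * Hj @ (np.eye(n) - B)

        # Corrected AIC with degrees of freedom = trace of the boosting operator.
        df = np.trace(B)
        sigma2 = np.mean(r ** 2)
        aicc = np.log(sigma2) + (1.0 + df / n) / (1.0 - (df + 2.0) / n)

        if aicc < best_aicc:
            best_aicc, best_iter, wait = aicc, m, 0
        else:
            wait += 1
            if wait >= patience:         # criterion has stopped improving: stop early
                break

    return coefs, y.mean(), best_iter

Calling l2boost_early_stop(X, y) returns the accumulated coefficients for the centred predictors, the intercept (the response mean), and the iteration at which the criterion last improved, so the boosting run can be cut off well before max_iter.
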
dc.contributor Department of Statistics
dc.creator (Author) Chang, Yuan-Chin Ivan; Huang, Yufen; Huang, Yu-Pai
dc.date (Date) 2010-12
dc.date.accessioned 23-Dec-2014 15:19:20 (UTC+8)
dc.date.available 23-Dec-2014 15:19:20 (UTC+8)
dc.date.issued (Date uploaded) 23-Dec-2014 15:19:20 (UTC+8)
dc.identifier.uri (URI) http://nccur.lib.nccu.edu.tw/handle/140.119/72224
dc.format.extent 553281 bytes
dc.format.mimetype application/pdf
dc.language.iso en_US
dc.relation (Relation) Computational Statistics and Data Analysis, 54(10), 2203-2213
dc.title (Title) Early stopping of L2Boosting
dc.type (Type) article