Academic Output - Journal Article

Title An attention algorithm for solving large scale structured L0-norm penalty estimation problems
Authors 顏佑銘
Yen, Yu-Min
Yen, Tso-Jung
Contributor Department of International Business (國貿系)
Keywords Blockwise coordinate descent algorithms; Model selection; Nonconvex optimization; Proximal operators; Randomized algorithms
Date 2021-01
Date Uploaded 27-December-2022 10:57:04 (UTC+8)
Abstract Advances in technology have enabled researchers to collect large amounts of data with many covariates. Because of their high volume (large n) and high variety (large p), such big data pose great challenges for model estimation. In this paper, we focus on the algorithmic aspect of these challenges. We propose a numerical procedure for solving large scale regression estimation problems involving a structured l0-norm penalty function. This procedure blends the ideas of randomization, blockwise coordinate descent algorithms, and a closed-form representation of the proximal operator of the structured l0-norm penalty function. In particular, it adopts an "attention" mechanism that exploits the iteration errors to build a sampling distribution for selecting regression coefficients to update. A simulation study shows that the proposed procedure is competitive with other sparse estimation algorithms in terms of runtime and statistical accuracy when both the sample size and the number of covariates are large.
Relation Japanese Journal of Statistics and Data Science, Vol. 4, pp. 345-371
Type article
DOI https://doi.org/10.1007/s42081-020-00101-z
URI http://nccur.lib.nccu.edu.tw/handle/140.119/142865
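The abstract describes the algorithm only at a high level, so the following is a minimal illustrative sketch of the general idea in Python: randomized blockwise coordinate descent in which blocks are sampled from an "attention" distribution built from the current iteration errors, with a hard-thresholding proximal step for an l0 penalty. It is not the authors' implementation; the function names (attention_bcd_l0, hard_threshold), the block partition, the gradient-norm attention weights, the step sizes, and the use of an elementwise rather than structured l0 proximal operator are all assumptions made for illustration.

import numpy as np

def hard_threshold(v, t, lam):
    # Proximal operator of lam * ||b||_0 with step size t: entries whose
    # magnitude falls below sqrt(2 * t * lam) are set exactly to zero.
    out = v.copy()
    out[np.abs(v) <= np.sqrt(2.0 * t * lam)] = 0.0
    return out

def attention_bcd_l0(X, y, lam, n_blocks=10, max_iter=500, eps=1e-3, seed=0):
    # Randomized blockwise coordinate descent for
    #   0.5/n * ||y - X b||^2 + lam * ||b||_0  (elementwise l0, for illustration).
    # Blocks are sampled with probability proportional to their current
    # gradient norm (an "attention" weight built from iteration errors);
    # the selected block takes a proximal gradient step with hard thresholding.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    blocks = np.array_split(np.arange(p), n_blocks)
    # Per-block step sizes from block Lipschitz constants of the quadratic loss.
    steps = [1.0 / max(np.linalg.norm(X[:, b], 2) ** 2 / n, 1e-12) for b in blocks]
    resid = y - X @ beta
    for _ in range(max_iter):
        # Attention weights: blockwise (negative) gradient norms plus a small
        # floor so every block keeps a nonzero selection probability.
        grads = [X[:, b].T @ resid / n for b in blocks]
        w = np.array([np.linalg.norm(g) for g in grads]) + eps
        k = rng.choice(n_blocks, p=w / w.sum())
        b, g, t = blocks[k], grads[k], steps[k]
        new_b = hard_threshold(beta[b] + t * g, t, lam)
        resid -= X[:, b] @ (new_b - beta[b])   # keep the residual consistent
        beta[b] = new_b
    return beta

# Usage on synthetic data (illustrative only).
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 200))
beta_true = np.zeros(200)
beta_true[:5] = 3.0
y = X @ beta_true + 0.5 * rng.standard_normal(500)
beta_hat = attention_bcd_l0(X, y, lam=0.05, n_blocks=20)

In this sketch the attention weights concentrate updates on blocks whose coefficients are most strongly correlated with the current residual, which mirrors the abstract's idea of using iteration errors to guide sampling; the paper itself treats a structured l0 penalty with a closed-form proximal operator, which the elementwise hard threshold above only approximates.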