Title: 貝式模型平均法在預測分析之應用 (The Application of Bayesian Model Averaging for Predictive Analysis)
Author: 郭東穎
Advisor: 翁久幸
Keywords: Bayesian model averaging; predictive analysis; logistic regression; Occam's window; Laplace approximation
Date: 2013
Uploaded: 21-Jul-2014 15:36:29 (UTC+8)
Abstract: Logistic regression is a classical model for binary classification problems. In practice, a single model is usually chosen by a selection procedure such as stepwise selection. Relying on one final model, however, raises difficulties: it ignores model uncertainty, and when several candidate models perform similarly on the selection criterion it is hard to justify choosing any one of them. In this thesis we apply Bayesian Model Averaging (BMA) to take this uncertainty into account and to improve predictive performance. BMA considers a set of candidate models and averages their predictions, using the posterior probability of each model as its weight. Occam's window and the Laplace approximation are employed to make the otherwise heavy computation tractable and more efficient. Finally, we carry out an empirical analysis of the CARVANA vehicle auction data, using techniques including under-sampling and cross-validation to choose the decision cut point, and compare models built by BMA, by stepwise selection, and without selection (the full model). Empirically, BMA performs better on F-measure and precision, while the stepwise model performs better on recall, showing that BMA not only accounts for model uncertainty but can also improve predictive precision in some situations.
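For reference, the quantities the abstract relies on have standard forms in the BMA literature; the notation below is generic and not taken from the thesis.

```latex
% Posterior-weighted average of a quantity of interest \Delta over models M_1,...,M_K,
% with posterior model probabilities as weights:
\[
  p(\Delta \mid D) = \sum_{k=1}^{K} p(\Delta \mid M_k, D)\, p(M_k \mid D),
  \qquad
  p(M_k \mid D) = \frac{p(D \mid M_k)\, p(M_k)}{\sum_{l=1}^{K} p(D \mid M_l)\, p(M_l)}.
\]
% Occam's window keeps only models not too far below the best one (C = 20 is a common choice):
\[
  \mathcal{A} = \Bigl\{ M_k : \frac{\max_l p(M_l \mid D)}{p(M_k \mid D)} \le C \Bigr\}.
\]
% A common Laplace-based (BIC-type) approximation to the integrated likelihood of a model
% with d_k parameters, maximum likelihood estimate \hat\theta_k, and sample size n:
\[
  \log p(D \mid M_k) \approx \log p\bigl(D \mid \hat\theta_k, M_k\bigr) - \tfrac{d_k}{2} \log n.
\]
% Evaluation metrics used in the comparison (F-measure in its usual F1 form):
\[
  \text{precision} = \frac{TP}{TP+FP}, \qquad
  \text{recall} = \frac{TP}{TP+FN}, \qquad
  F = \frac{2\,\text{precision}\cdot\text{recall}}{\text{precision} + \text{recall}}.
\]
```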
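To make the workflow concrete, here is a minimal, hypothetical Python sketch of BIC-approximated BMA for logistic regression with an Occam's window screen, followed by precision/recall/F-measure evaluation at a fixed cut point. It is not the thesis code: the statsmodels/scikit-learn stack, the exhaustive enumeration of small predictor subsets, the uniform model prior, the 0.5 cut point, and all names are illustrative assumptions.

```python
# Hypothetical sketch of BIC-approximated Bayesian model averaging for logistic
# regression with an Occam's window screen; not the thesis implementation.
from itertools import combinations

import numpy as np
import statsmodels.api as sm
from sklearn.metrics import precision_score, recall_score, f1_score


def fit_candidates(X, y, max_size=3):
    """Fit a logistic regression for every predictor subset up to max_size
    (X is assumed to be a pandas DataFrame, y a binary Series)."""
    models = []
    for k in range(1, max_size + 1):
        for subset in combinations(X.columns, k):
            design = sm.add_constant(X[list(subset)])
            fit = sm.Logit(y, design).fit(disp=0)
            models.append((subset, fit))
    return models


def bma_weights(models, C=20.0):
    """Approximate posterior model probabilities from BIC (uniform model
    prior), then apply Occam's window: zero out models whose posterior odds
    against the best model exceed C."""
    bic = np.array([fit.bic for _, fit in models])
    w = np.exp(-0.5 * (bic - bic.min()))   # proportional to exp(-BIC/2)
    w[w < w.max() / C] = 0.0               # Occam's window
    return w / w.sum()


def bma_predict(models, weights, X_new):
    """Posterior-weighted average of the per-model predicted probabilities."""
    total = np.zeros(len(X_new))
    for (subset, fit), w in zip(models, weights):
        if w > 0.0:
            design = sm.add_constant(X_new[list(subset)], has_constant="add")
            total += w * np.asarray(fit.predict(design))
    return total


# Usage sketch (X_train, y_train, X_test, y_test assumed to exist):
# models = fit_candidates(X_train, y_train)
# weights = bma_weights(models)
# prob = bma_predict(models, weights, X_test)
# pred = (prob >= 0.5).astype(int)   # the thesis picks the cut point by cross-validation
# print(precision_score(y_test, pred), recall_score(y_test, pred), f1_score(y_test, pred))
```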
Degree: Master's
University: 國立政治大學 (National Chengchi University)
Department: 統計研究所 (Graduate Institute of Statistics)
Student ID: 101354007
Academic year: 102
Source: http://thesis.lib.nccu.edu.tw/record/#G0101354007
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/67587
Type: thesis
Table of Contents:
Chinese Abstract
Abstract
Chapter 1  Introduction
Chapter 2  Analysis Method
  2.1  Bayesian Model Averaging Method
  2.2  Logistic Regression
  2.3  Assessment Review
Chapter 3  Practical Application
  3.1  The Application of the CARVANA Case
  3.2  Data Preprocessing
  3.3  Data Partition and Sampling
  3.4  Decision-Making Cut Point
  3.5  Bayesian Model Averaging Analysis
  3.6  Predictive Performance
Chapter 4  Concluding Remarks
References