Title: Neural Network and Machine Learning to Construct P2P Lending Credit Score Model: A Case of Lending Club
Author: Yang, Li-Kai (楊立楷)
Advisors: 林士貴, 蔡瑞煌
Keywords: P2P lending; Credit score; Machine learning; Neural network; Feature engineering; Lending Club
Date: 2019
Uploaded: 7-Aug-2019 16:14:20 (UTC+8)
Abstract
This study applies seven machine learning and neural network methods, namely logistic regression, support vector machine, decision tree, random forest, XGBoost, LightGBM, and an artificial neural network, to construct credit score models for P2P loans. For each method, the best set of hyperparameters is found by cross-validation; training time and test performance are then computed, and the results are comprehensively compared to identify the credit score model best suited to practical application.
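The cross-validated hyperparameter search described above can be sketched as follows. This is a minimal illustration on synthetic data using scikit-learn's `GridSearchCV`; the dataset, the grid, and the single model shown here are stand-ins, not the thesis's actual search spaces, and the same pattern applies to the SVM, tree ensembles, and neural network:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the loan data: features X, default label y.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# 5-fold cross-validated grid search over a small hyperparameter grid,
# scored by AUC; best_params_ holds the winning combination.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
    scoring="roc_auc",
)
search.fit(X, y)
print(search.best_params_)  # e.g. the C value with the best CV AUC
```

Repeating this search per method, each with its own grid, yields the per-model optimum that the later test-set comparison is based on.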
The dataset consists of the public P2P loan data from the Lending Club website. The data are first cleaned using feature-engineering concepts. Then, to identify significant default factors, XGBoost is pre-trained once on all the data to obtain feature importances, and the most important features are selected as the default factors used in modeling.
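The pre-training step for selecting default factors can be sketched like this. It is a minimal sketch on synthetic data; scikit-learn's `GradientBoostingClassifier` stands in for XGBoost, and the cutoff `k` is an illustrative choice, not the thesis's value:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for the cleaned Lending Club table.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

# Pre-train a gradient-boosted tree model on all data
# (XGBoost in the thesis; sklearn's GBDT here).
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Rank features by importance and keep the top k as default factors.
k = 5
top_idx = np.argsort(model.feature_importances_)[::-1][:k]
X_selected = X[:, top_idx]  # reduced feature matrix for modeling
print(top_idx, X_selected.shape)
```

The reduced matrix `X_selected` is then what each candidate model is trained on, so every method sees the same default factors.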
After comprehensive comparison, the GBDT methods, XGBoost and LightGBM, prove most appropriate for constructing a P2P credit score model, outperforming both the neural network and logistic regression, the method traditionally used in credit scoring; among all methods, XGBoost performs best.
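The comparison protocol itself reduces to fitting each tuned model on a training split and scoring it on a held-out test split. A minimal sketch with two of the seven methods on synthetic data (it illustrates the protocol only and does not reproduce the thesis's result that GBDT wins):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the Lending Club data.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

# Fit each candidate on the training split, compare test AUC.
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "gbdt": GradientBoostingClassifier(random_state=0),
}
aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(aucs)
```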
References
Chinese references
1. Central Bank of the Republic of China (Taiwan) (2018), “The Development Experience of P2P Lending in Major Countries and Lessons Drawn.” Reference material for the press conference following the Board of Directors meeting, 27 September 2018.
2. 朱君亞 (2018), “An Early-Warning Model for Financial Stress Events: A Comparison of Artificial Neural Networks, Support Vector Machines, and Logistic Regression.” Master's thesis, Department of Money and Banking, National Chengchi University.
3. 陳勃文 (2018), “Applying Machine Learning to P2P Lending Credit Risk Models: A Case of Lending Club.” Master's thesis, Department of Money and Banking, National Chengchi University.
English references
1. Aldrich, J. H., & Nelson, F. D. (1984), Quantitative Applications in the Social Sciences: Linear Probability, Logit, and Probit Models. Thousand Oaks, CA: SAGE Publications.
2. Alexander, V. E. & Clifford, C. C. (1996), Categorical Variables in Developmental Research: Methods of Analysis. Elsevier.
3. Arya, S., Eckel C. & Wichman C. (2013), “Anatomy of the Credit Score.” Journal of Economic Behavior & Organization, Vol. 95, 175-185.
4. Baesens, B., Gestel, T. V., Stepanova M. & Poel, D. V. D. (2004), “Neural Network Survival Analysis for Personal Loan Data.” Journal of the Operational Research Society, Vol. 56, 1089-1098.
5. Bishop, C. M. (2006), Pattern Recognition and Machine Learning. Springer.
6. Bolton, C. (2009), Logistic Regression and its Application in Credit Scoring. University of Pretoria.
7. Breiman, L. (1996), “Bagging Predictors.” Machine Learning, Vol. 24, No. 2, 123-140.
8. Breiman, L. (2001). “Random Forests.” Machine Learning, Vol. 45, No. 1, 5-32.
9. Breiman, L., Friedman, J., Stone, C. J. & Olshen, R. A. (1984), Classification and Regression Trees., Taylor & Francis.
10. Brown, M., Grundy, M., Lin, D., Cristianini, N., Sugnet, C., Furey, T., Ares, M. & Haussler, D. (1999), “Knowledge-Base Analysis of Microarray Gene Expression Data Using Support Vector Machines.” Technical Report, University of California in Santa Cruz.
11. Chen, T. Q. & Guestrin, C. (2016), “XGBoost: A Scalable Tree Boosting System.” KDD '16: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785-794.
12. Crouhy, M., Galai D. & Mark R. (2014), The Essentials of Risk Management 2nd Edition. McGraw-Hill.
13. Cybenko, G. (1989), “Approximation by Superpositions of a Sigmoidal Function.” Mathematics of Control, Signals and Systems, Vol. 2, No. 4, 303-314.
14. Dietterich, T. G. (2000), “An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization.” Machine Learning, Vol. 40, No. 2, 139-157.
15. Duchi, J., Hazan, E. & Singer, Y. (2011), “Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.” Journal of Machine Learning Research, Vol. 12, 2121–2159.
16. Elrahman, S. M. A & Abraham, A. (2013), “A Review of Class Imbalance Problem.” Journal of Network and Innovative Computing, Vol. 1, 332-340.
17. Everett, C. R. (2015). “Group Membership, Relationship Banking and Loan Default Risk: the Case of Online Social Lending.” Banking and Finance Review, Vol. 7, No. 2, 15-54.
18. Fletcher, R. (1981), “A Nonlinear Programming Problem in Statistics.” SIAM Journal on Scientific and Statistical Computing, Vol. 2, No. 3, 257-267.
19. Friedman, J. H. (2001), “Greedy Function Approximation: A Gradient Boosting Machine.” The Annals of Statistics, Vol. 29, No. 5, 1189-1232.
20. Genuer, R., Poggi, J. M. & Tuleau-Malot, C. (2010), “Variable selection Using Random Forests.” Pattern Recognition Letters, Vol. 31, No. 14, 2225-2236
21. Glorot, X. & Bengio, Y. (2010), “Understanding the Difficulty of Training Deep Feedforward Neural Networks.” Journal of Machine Learning Research, Vol. 9, 249-256
22. Guyon, I. & ElNoeeff, A. (2003), “An Introduction to Variable and Feature Selection.” The Journal of Machine Learning Research, Vol. 3, 1157-1182.
23. He, K. M., Zhang, X. Y., Ren, S. Q. & Sun, J. (2015), “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification.” Arxiv.
24. Ho, T. K. (1995). “Random Decision Forest.” Proceeding of the 3rd International Conference on Document Analysis and Recognition, 278-282.
25. Ho, T. K. (1998). "The Random Subspace Method for Constructing Decision Forests." IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 20, No. 8, 832-844.
26. Hochreiter, S., Bengio, Y., Frasconi, P. & Schmidhuber, J. (2001), “Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies.” In A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press.
27. Hsu, C. W., Chang, C. C. & Lin, C. J. (2003), “A Practical Guide to Support Vector Classification.” Technical Report, Department of Computer Science and Information Engineering, National Taiwan University, 1-12.
28. Ioffe, S. & Szegedy, C. (2015), “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.” Arxiv.
29. Iyer, R., Khwaja, A. I., Luttmer, E. F., & Shue, K. (2009), “Screening in New Credit Markets: Can Individual Lenders Infer Borrower Creditworthiness in Peer-to-Peer Lending?” AFA 2011 Denver Meetings Paper.
30. Kang, H. (2013), “The Prevention and Handling of the Missing Data.” Korean Journal of Anesthesiology, Vol. 64, No. 5, 402-406.
31. Ke, G. L., Meng, Q., Finley, T., Wang, T. F., Chen, W., Ma, W. D., Ye, Q. W. & Liu, T. Y. (2017), “LightGBM: A highly Efficient Gradient Boosting Decision Tree.” Neural Information Processing Systems, 3149-3157.
32. Keogh, E. & Mueen, A. (2017), “Curse of Dimensionality.” Encyclopedia of Machine Learning and Data Mining, Springer, Boston, MA.
33. Kingma, D. P. & Ba, J. L. (2015), “Adam: a Method for Stochastic Optimization.” International Conference on Learning Representations, 1–13.
34. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012), “Imagenet Classification with Deep Convolutional Neural Networks.” Advances in Neural Information Processing Systems, 1097-1105.
35. Lantz, B. (2013), Machine Learning with R. Packt Publishing Limited.
36. Lin, H. T & Lin, C. J. (2003), “A Study on Sigmoid Kernels for SVM and the Training of Non-PSD Kernels by SMO-type Methods.” Technical Report, Department of Computer Science & Information Engineering, National Taiwan University.
37. Lu, L., Shin, Y. J., Su, Y. H. & Karniadakis, G. E. (2019), “Dying ReLU and Initialization: Theory and Numerical Examples.” Arxiv.
38. Maas, A. L., Hannun, A. Y. & Ng, A. Y. (2013), “Rectifier Nonlinearities Improve Neural Network Acoustic Models.” ICML Workshop on Deep Learning for Audio, Speech, and Language Processing.
39. Madasamy, K. & Ramaswami, M. (2017), “Data Imbalance and Classifiers: Impact and Solutions from a Big Data Perspective.” International Journal of Computational Intelligence Research, Vol. 13, No. 9, 2267-2281.
40. McCulloch, W. S. & Pitts, W. (1943), “A Logical Calculus of the Ideas Immanent in Nervous Activity.” The Bulletin of Mathematical Biophysics, Vol. 5, No. 4, 115-133.
41. Mester, L. J. (1997), “What’s the Point of Credit Scoring?” Business Review, No. 3, 3-16.
42. Mijwel, M. M. (2018), “Artificial Neural Networks Advantages and Disadvantages.”
43. Mills, K. G. & McCarthy, B. (2016), “The State of Small Business Lending: Innovation and Technology and the Implications for Regulation.” HBS Working Paper No. 17-042.
44. Milne, A. & Parboteeah, P. (2016) “The Business Models and Economics of Peer-to-Peer Lending.” ECRI Research Report, No. 17.
45. Mountcastle, V. B. (1957), “Modality and Topographic Properties of Single Neurons of Cat's Somatic Sensory Cortex.” Journal of Neurophysiology, Vol. 20, 408-434.
46. Ng, A. Y. (2004), “Feature Selection, L1 vs. L2 Regularization, and Rotational Invariance.” ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning, 78-85.
47. Ohlson, J. A. (1980), “Financial Ratios and the Probabilistic Prediction of Bankruptcy.” Journal of Accounting Research, Vol. 18, No. 1, 109-131.
48. Patro, S. G. K. & Sahu, K. K. (2015), “Normalization: A Preprocessing Stage.” ArXiv.
49. Pontil, M. & Verri, A. (1998), “Support Vector Machines for 3D Object Recognition.” IEEE Transaction On PAMI, Vol. 20, 637-646.
50. Qian, N. (1999), “On the Momentum Term in Gradient Descent Learning Algorithms.” Neural Networks : The Official Journal of the International Neural Network Society, Vol. 12, No.1, 145–151
51. Quinlan, J. R. (1987), “Simplifying Decision Trees.” International Journal of Man-Machine Studies, Vol. 27, No. 3, 221-234.
52. Quinlan, J. R. (1993), C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers Inc. San Francisco, CA, USA.
53. Raina, R., Madhavan, A. & Ng, A. Y. (2009), "Large-Scale Deep Unsupervised Learning Using Graphics Processors.” Proceedings of the 26th International Conference on Machine Learning.
54. Rajan, U., Seru, A. & Vig, V. (2015), “The Failure of Models that Predict Failure: Distance, Incentives, and Defaults.” Journal of Financial Economics, Vol. 115, No. 2, 237-260.
55. Rosenblatt, F. (1958), “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain.” Psychological Review, Vol. 65, No. 6, 386-408.
56. Ruder, S. (2017), “An Overview of Gradient Descent Optimization Algorithms.” Arxiv.
57. Rumelhart, D. E., Hinton, G. E. & Williams, R. J. (1986), “Learning Representations by Back-Propagating Errors.” Nature, Vol. 323, 533-536.
58. Samitsu, A. (2017), “The Structure of P2P Lending and Legal Arrangements: Focusing on P2P Lending Regulation in the UK.” IMES Discussion Paper Series, No. 17-J-3.
59. Serrano-Cinca, C., Gutierrez-Nieto, B., & López-Palacios, L. (2015), “Determinants of Default in P2P Lending.” PloS One, Vol. 10, No. 10, e0139427.
60. Shannon, C. (1948), “A Mathematical Theory of Communication.” The Bell System Technical Journal, Vol. 27, No. 3, 379-423.
61. Shelke, M. S., Deshmukh, P. R. & Shandilya, V. K. (2017), “A Review on Imbalanced Data Handling using Undersampling and Oversampling Technique.” International Journal of Recent Trends in Engineering and Research.
62. Singh, S. & Gupta, P. (2014), “Comparative Study Id3, Cart and C4.5 Decision Tree Algorithm: A Survey.” International Journal of Advanced Information Science and Technology (IJAIST), Vol.3, No.7.
63. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. (2014), “Dropout: A Simple Way to Prevent Neural Networks from Overfitting.” Journal of Machine Learning Research, Vol. 15, 1929-1958.
64. Thomas, L. C. (2000), “A Survey of Credit and Behavioural Scoring: Forecasting Financial Risk of Lending to Consumers.” International Journal of Forecasting, Vol. 16, No. 2, 149-172.
65. Tieleman, T. & Hinton, G. (2012), “Lecture 6.5 - RMSProp.” COURSERA: Neural Networks for Machine Learning. Technical report.
66. Wang, Z., Cui, P., Li, F. T., Chang, E. & Yang, S. Q. (2014), “A Data-Driven Study of Image Feature Extraction and Fusion.” Information Sciences, Vol. 281, 536-558.
Description: Master's thesis
National Chengchi University
Department of Money and Banking
1063520292
Source: http://thesis.lib.nccu.edu.tw/record/#G1063520292
Type: thesis
Identifier: G1063520292
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/124747
Table of contents
Chapter 1 Introduction 1
Section 1 Research background and motivation 1
Section 2 Research objectives 3
Chapter 2 Literature review 4
Section 1 Credit default factors 4
Section 2 Logistic regression 5
Section 3 Support vector machine 6
Section 4 Decision tree 6
Section 5 Random forest 7
Section 6 XGBoost 7
Section 7 LightGBM 8
Section 8 Artificial neural network 8
Chapter 3 Methodology 9
Section 1 Logistic regression 9
Section 2 Support vector machine 10
Section 3 Decision tree 14
Section 4 Random forest 16
Section 5 XGBoost 17
Section 6 LightGBM 22
Section 7 Artificial neural network 23
Section 8 Model evaluation metrics 34
Chapter 4 Data description 36
Section 1 Data sources 36
Section 2 Data cleaning 37
Chapter 5 Empirical results 44
Section 1 Grid search for optimal hyperparameters 44
Section 2 Comprehensive comparison of test performance 49
Chapter 6 Conclusions and future work 52
Section 1 Conclusions 52
Section 2 Future work 53
References 54
Appendix 1 60
Appendix 2 72
Format: 2,359,602 bytes, application/pdf
DOI: 10.6814/NCCU201900106