Academic Output: Degree Thesis
Title: 強記暨軟化整合演算法:以ReLU激發函數與二元輸入/輸出為例
(The Cramming, Softening and Integrating Learning Algorithm with ReLU activation function for Binary Input/Output Problems)
Author: Tsai, Yu-Han (蔡羽涵)
Advisors: Tsaih, Rua-Huan (蔡瑞煌); Hsiao, Shun-Wen (蕭舜文)
Keywords: Cramming, Softening and Integrating; adaptive neural network; graphics processing unit (GPU); ReLU; TensorFlow
Date: 2019
Uploaded: 7-Aug-2019 16:06:51 (UTC+8)
Abstract:
Few artificial neural network studies simultaneously address the challenges of (1) systematically adjusting the number of hidden-layer nodes used during the learning process, (2) adopting the ReLU activation function instead of the tanh function for fast learning, and (3) guaranteeing that all training data are learned. This study addresses these challenges by deriving the CSI (Cramming, Softening and Integrating) learning algorithm for single-hidden-layer feed-forward neural networks (SLFNs) with the ReLU activation function and binary input/output, together with its technical justification. To verify the proposed learning algorithm, this study conducts an empirical experiment using the SPECT heart diagnosis data set from the UCI Machine Learning Repository. The learning algorithm is implemented with TensorFlow and runs on a GPU.

References:
[1] I. C. Yeh and C. H. Lien, "The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients," Expert Systems with Applications, vol. 36(2), pp. 2473-2480, 2009.
[2] J. de Jesús Rubio, E. Lughofer, J. A. Meda-Campaña, L. A. Páramo, J. F. Novoa, and J. Pacheco, "Neural network updating via argument Kalman filter for modeling of Takagi-Sugeno fuzzy models," Journal of Intelligent & Fuzzy Systems, vol. 35(2), pp. 2585-2596, 2018.
[3] X. L. Meng, F. G. Shi, and J. C. Yao, "An inequality approach for evaluating decision making units with a fuzzy output," Journal of Intelligent & Fuzzy Systems, vol. 34(1), pp. 459-465, 2018.
[4] J. de Jesús Rubio, "Stable Kalman filter and neural network for the chaotic systems identification," Journal of the Franklin Institute, vol. 354(16), pp. 7444-7462, 2017.
[5] M. Y. Cheng, D. Prayogo, and Y. W. Wu, "Prediction of permanent deformation in asphalt pavements using a novel symbiotic organisms search-least squares support vector regression," Neural Computing and Applications, 2018.
[6] J. de Jesús Rubio, "SOFMLS: online self-organizing fuzzy modified least-squares network," IEEE Transactions on Fuzzy Systems, vol. 17(6), pp. 1296-1309, 2009.
[7] X. M. Zhang and Q. L. Han, "State estimation for static neural networks with time-varying delays based on an improved reciprocally convex inequality," IEEE Transactions on Neural Networks and Learning Systems, vol. 29(4), pp. 1376-1381, 2018.
[8] V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807-814, 2010.
[9] L. Ma and K. Khorasani, "A new strategy for adaptively constructing multilayer feedforward neural networks," Neurocomputing, vol. 51, pp. 361-385, 2003.
[10] R. R. Tsaih, "An explanation of reasoning neural networks," Mathematical and Computer Modelling, vol. 28(2), pp. 37-44, 1998.
[11] E. Watanabe and H. Shimizu, "Algorithm for pruning hidden nodes in multi-layered neural network for binary pattern classification problem," Proceedings of the 1993 International Joint Conference on Neural Networks I, pp. 327-330, 1993.
[12] Y. Q. Chen, D. W. Thomas, and M. S. Nixon, "Generating-shrinking algorithm for learning arbitrary classification," Neural Networks, vol. 7(9), pp. 1477-1489, 1994.
[13] M. Mezard and J. P. Nadal, "Learning in feedforward layered networks: The tiling algorithm," Journal of Physics A: Mathematical and General, vol. 22(12), p. 2191, 1989.
[14] S. E. Fahlman and C. Lebiere, "The cascade-correlation learning architecture," Advances in Neural Information Processing Systems, pp. 524-532, 1990.
[15] M. Frean, "The upstart algorithm: A method for constructing and training feedforward neural networks," Neural Computation, vol. 2(2), pp. 198-209, 1990.
[16] R. R. Tsaih, "The softening learning procedure," Mathematical and Computer Modelling, vol. 18(8), pp. 61-64, 1993.
[17] R. H. Tsaih and T. C. Cheng, "A resistant learning procedure for coping with outliers," Annals of Mathematics and Artificial Intelligence, vol. 57(2), pp. 161-180, 2009.
[18] R. H. Tsaih, B. S. Kuo, T. H. Lin, and C. C. Hsu, "The use of big data analytics to predict the foreign exchange rate based on public media: A machine-learning experiment," IT Professional, vol. 20(2), pp. 34-41, 2018.
[19] L. A. Kurgan, K. J. Cios, R. Tadeusiewicz, M. Ogiela, and L. S. Goodenday, "Knowledge discovery approach to automated cardiac SPECT diagnosis," Artificial Intelligence in Medicine, vol. 23(2), pp. 149-169, 2001.
[20] D. Dua and E. Karra Taniskidou, UCI Machine Learning Repository, http://archive.ics.uci.edu/ml, Irvine, CA: University of California, School of Information and Computer Science, 2017.
[21] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521(7553), pp. 436-444, 2015.
[22] K. Hara, D. Saito, and H. Shouno, "Analysis of function of rectified linear unit used in deep learning," 2015 International Joint Conference on Neural Networks (IJCNN), pp. 1-8, 2015.
[23] B. Xu, N. Wang, T. Chen, and M. Li, "Empirical evaluation of rectified activations in convolutional network," arXiv preprint, arXiv:1505.00853, 2015.
[24] S. Y. Huang, J. W. Lin, and R. H. Tsaih, "Outlier detection in the concept drifting environment," 2016 International Joint Conference on Neural Networks (IJCNN), pp. 31-37, 2016.
[25] M. Abadi, P. Barham, et al., "TensorFlow: A system for large-scale machine learning," 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265-283, 2016.
[26] M. Abadi, A. Agarwal, et al., "TensorFlow: Large-scale machine learning on heterogeneous distributed systems," arXiv preprint, arXiv:1603.04467, 2016.
[27] S. Ruder, "An overview of gradient descent optimization algorithms," arXiv preprint, arXiv:1609.04747, 2016.
[28] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, "SMOTE: Synthetic minority over-sampling technique," Journal of Artificial Intelligence Research, vol. 16, pp. 321-357, 2002.
[29] X. Y. Liu, J. Wu, and Z. H. Zhou, "Exploratory undersampling for class-imbalance learning," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39(2), pp. 539-550, 2009.

Degree: Master's (碩士)
Institution: National Chengchi University (國立政治大學)
Department: Department of Management Information Systems (資訊管理學系)
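The record does not include the CSI algorithm itself, but the network family the abstract names — a single-hidden-layer feed-forward network (SLFN) with ReLU hidden nodes and one output node, applied to binary input/output — can be sketched in a few lines. The following pure-Python forward pass is an editor's illustration, not code from the thesis; the hand-set weights realize XOR, a classic binary mapping that no network without a hidden layer can represent.

```python
def relu(z):
    """Rectified linear unit: relu(z) = max(0, z)."""
    return max(0.0, z)

def slfn(x, hidden_weights, hidden_biases, output_weights, output_bias):
    """Forward pass of a single-hidden-layer feed-forward network (SLFN)
    with ReLU hidden nodes and one linear output node."""
    hidden = [relu(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(hidden_weights, hidden_biases)]
    return sum(w * h for w, h in zip(output_weights, hidden)) + output_bias

# A two-hidden-node ReLU SLFN realizing XOR:
#   output = relu(x1 + x2) - 2 * relu(x1 + x2 - 1)
W_h = [[1.0, 1.0], [1.0, 1.0]]   # hidden-layer weights
b_h = [0.0, -1.0]                # hidden-layer biases
w_o = [1.0, -2.0]                # output weights
b_o = 0.0                        # output bias

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, slfn(x, W_h, b_h, w_o, b_o))
```

The second hidden node fires only when both inputs are active, letting the output cancel the doubled contribution — the kind of representational role hidden ReLU nodes play in the networks this thesis studies.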
Student ID: 106356018
Source: http://thesis.lib.nccu.edu.tw/record/#G0106356018
Type: thesis
Identifier: G0106356018
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/124710
DOI: 10.6814/NCCU201900582
Format: 1234128 bytes, application/pdf

Table of Contents:
Abstract (Chinese) 1
Abstract 2
Figure Index 4
Table Index 5
1. Introduction 6
2. Literature Review 9
2.1 Rectified Linear Unit (ReLU) 9
2.2 The Single-hidden Layer Feed-forward Neural Networks (SLFN) with one output node 10
2.3 The Back-Propagation Learning Algorithm associated with SLFN 11
2.4 The Adaptive Single-hidden Layer Feed-forward Neural Networks (ASLFN) 14
2.5 Least Trimmed Squares (LTS) Principle 14
2.6 TensorFlow 15
2.7 Cardiac Single Proton Emission Computed Tomography (SPECT) Heart Diagnosis Data Set 16
3. The Proposed CSI Learning Algorithm and Its Technical Justification 18
4. Experimental Design 29
5. The Performance of the Proposed CSI Learning Algorithm 32
5.1 Evaluate the Efficiency of Four Versions 32
5.2 Total Amount of Adopted Hidden Nodes of Four Versions 34
5.3 The Occurrence Percentages of Step 4, Step 6.1 and Step 6.2 of Four Versions 35
5.4 Evaluate the Cramming Mechanism of Four Versions 37
5.5 Evaluate the Softening and Integrating Mechanisms of Four Versions 39
5.6 Evaluate the Performance of Four Versions 44
6. Conclusion and Future Work 46
Reference 48
Appendix 52
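The table of contents refers to a cramming mechanism that guarantees learning all training data, but the record does not spell out its rule. As a hedged illustration of how such a guarantee can be obtained in principle for binary inputs — a classical constructive device, not necessarily the thesis's construction — one can add a ReLU hidden node that activates only on one specific binary vector, so its output weight can correct that single case without disturbing any other training case:

```python
from itertools import product

def relu(z):
    return max(0.0, z)

def make_indicator_node(x_star):
    """Build ReLU-node weights that fire (value 1) exactly on the binary
    vector x_star: +1 weight where x_star has a 1, -1 where it has a 0,
    and a bias so any flipped bit drives the pre-activation to <= 0."""
    weights = [1.0 if xi == 1 else -1.0 for xi in x_star]
    bias = 1.0 - sum(x_star)
    return weights, bias

def node_value(x, weights, bias):
    return relu(sum(w * xi for w, xi in zip(weights, x)) + bias)

# "Cramming" a still-misclassified case: since the new node is 0 on every
# other binary input, adding (target - current_output) times its activation
# to the network output fixes this one case exactly.
x_star = [1, 0, 1, 1]
w, b = make_indicator_node(x_star)
for x in product([0, 1], repeat=4):
    print(x, node_value(list(x), w, b))  # fires only at x_star
```

Hidden-node growth of this kind trades network size for a learning guarantee, which is presumably why the thesis pairs it with softening and integrating mechanisms that consolidate the grown network.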