Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/119155
DC Field  Value  Language
dc.contributor.advisor  蔡瑞煌  zh_TW
dc.contributor.advisor  Tsaih, Rua-Huan  en_US
dc.contributor.author  余艾玨  zh_TW
dc.contributor.author  Yu, Ai-Chueh  en_US
dc.creator  余艾玨  zh_TW
dc.creator  Yu, Ai-Chueh  en_US
dc.date  2018  en_US
dc.date.accessioned  2018-08-02T08:15:32Z  -
dc.date.available  2018-08-02T08:15:32Z  -
dc.date.issued  2018-08-02T08:15:32Z  -
dc.identifier  G0105356017  en_US
dc.identifier.uri  http://nccur.lib.nccu.edu.tw/handle/140.119/119155  -
dc.description  碩士 (Master's)  zh_TW
dc.description  國立政治大學 (National Chengchi University)  zh_TW
dc.description  資訊管理學系 (Department of Management Information Systems)  zh_TW
dc.description  105356017  zh_TW
dc.description.abstract  In recent years, artificial intelligence has played an important role in machine learning applications, and artificial neural networks (ANN) have become one of the most useful methods compared with statistical approaches to big data analytics. To cope with time series data and outliers in a dynamic environment, Wu (2017) proposed a mechanism for data cleaning and machine learning, and the experimental results showed that the proposed mechanism is effective at both tasks. Wu (2017) implemented the RLEM via a single-hidden-layer backpropagation neural network. This study refines the mechanism in two ways: first, a regularization term is added to the RLEM loss function to avoid overfitting; second, the RLEM is revised and implemented via the updated version of TensorFlow.  zh_TW
dc.description.abstract  In recent years, artificial intelligence (AI) has played an important role in the application of machine learning, and artificial neural networks (ANN) serve as one of the most useful methods, compared with statistical methods, for big data analytics. To cope with time series data that may exhibit concept drift and contain outliers, Wu (2017) derived a mechanism for effective data cleaning and machine learning; the experimental results showed that the proposed mechanism is promising on both counts. Wu (2017) implemented the resistant learning with envelope module (RLEM) via adaptive single-hidden-layer feed-forward neural networks (SLFN). This research adds a regularization term to the loss function to prevent overfitting and refines the RLEM to improve the accuracy of the predicted carry trade returns; the refined mechanism is implemented via the updated version of TensorFlow (see the illustrative sketch after the record below).  en_US
dc.description.tableofcontents  zh_TW
Abstract 3
Figure Index 5
Table Index 6
1 Introduction 7
1.1 Background 7
1.2 Motivation 8
1.3 Objective 9
2 Literature Review 10
2.1 Regularization 10
2.2 Gradient descent optimization algorithms 11
Backpropagation 12
2.3 GPU and Tensorflow 14
2.4 The resistant learning with envelope module 15
2.5 The mechanism for data cleaning and machine learning 19
3 Experiment Design 24
3.1 Data description 24
3.2 Experiment design 25
4 Experiment results 29
5 Conclusion and future work 33
Reference 35
dc.source.uri  http://thesis.lib.nccu.edu.tw/record/#G0105356017  en_US
dc.subject  人工神經網路  zh_TW
dc.subject  正規化  zh_TW
dc.subject  單一隱藏層倒傳遞神經網路  zh_TW
dc.subject  Artificial neural networks  en_US
dc.subject  Regularization  en_US
dc.subject  Single-hidden layer feed-forward neural networks  en_US
dc.subject  Resistant learning with envelope module  en_US
dc.title  優化資料清理與機器學習的機制  zh_TW
dc.title  The refined mechanism for data cleaning and machine learning  en_US
dc.type  thesis  en_US
dc.relation.reference  zh_TW
1. Android Authority (2018), “Artificial intelligence vs machine learning: what’s the difference?”, available at https://www.androidauthority.com/artificial-intelligence-vs-machine-learning-832331/ (accessed 5 March 2018)
2. J. Cao, Y. Pang, X. Li, J. Liang (2018), “Randomly translational activation inspired by the input distributions of ReLU,” Neurocomputing (275), pp. 859-868
3. D. A. Clevert, T. Unterthiner, S. Hochreiter (2016), “Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs),” published as a conference paper at ICLR
4. Educational Research Techniques (2016), “Black Box Method-Artificial Neural Networks”, available at https://educationalresearchtechniques.com/2016/07/06/black-box-method-artificial-neural-networks/ (accessed 5 March 2018)
5. Enhance Data Science (2017), “Machine Learning Explained: Regularization”, available at http://enhancedatascience.com/2017/07/04/machine-learning-explained-regularization/ (accessed 5 March 2018)
6. I. Goodfellow, Y. Bengio, A. Courville (2016), “Deep Learning,” The MIT Press
7. S. Y. Huang, J. W. Lin, and R. H. Tsaih (2016), “Outlier Detection in the Concept Drifting Environment,” In: Proceedings of the International Joint Conference on Neural Networks (IJCNN), pp. 31-37
8. S. Y. Huang, F. Yu, R. H. Tsaih, and Y. Huang (2014), “Resistant learning on the envelope bulk for identifying anomalous patterns,” In: Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), pp. 3303-3310
9. Investopedia, “Carry Trade”, available at https://www.investopedia.com/terms/c/carry-trade.asp-0 (accessed 20 March 2018)
10. Ò. Jordà and A. M. Taylor (2012), “The carry trade and fundamentals: Nothing to fear but FEER itself,” Journal of International Economics, vol. 88, pp. 74-90
11. F. F. Li, J. Johnson, S. Yeung (2017), “Convolutional Neural Networks for Visual Recognition,” Stanford University School of Engineering, available at http://cs231n.stanford.edu/ (accessed 5 March 2018)
12. J. D. Olden, M. K. Joy, R. G. Death (2004), “An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data,” Ecological Modelling (178:3), pp. 389-397
13. Quora (2013), “Differences between L1 and L2 as Loss Function and Regularization”, available at http://www.chioka.in/differences-between-l1-and-l2-as-loss-function-and-regularization/ (accessed 5 March 2018)
14. S. Ruder (2016), “An overview of gradient descent optimization algorithms”, available at http://ruder.io/optimizing-gradient-descent/index.html#adam (accessed 5 March 2018)
15. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov (2014), “Dropout: A Simple Way to Prevent Neural Networks from Overfitting,” Journal of Machine Learning Research (15), pp. 1929-1958
16. The Theory of Everything (2017), “Understanding Activation Functions in Neural Networks”, available at https://medium.com/the-theory-of-everything/understanding-activation-functions-in-neural-networks-9491262884e0 (accessed 5 March 2018)
17. Towards Data Science (2017), “Types of Optimization Algorithms used in Neural Networks and Ways to Optimize Gradient Descent”, available at https://towardsdatascience.com/types-of-optimization-algorithms-used-in-neural-networks-and-ways-to-optimize-gradient-95ae5d39529f (accessed 5 March 2018)
18. R. H. Tsaih and T. C. Cheng (2009), “A resistant learning procedure for coping with outliers,” Annals of Mathematics and Artificial Intelligence (57:2), pp. 161-180
19. J. V. Tu (1996), “Advantages and disadvantages of using artificial neural networks versus logistic regression for predicting medical outcomes,” Journal of Clinical Epidemiology (49:11), pp. 1225-1231
20. F. Y. Tzeng, K. L. Ma (2005), “Opening the Black Box — Data Driven Visualization of Neural Networks,” Visualization, IEEE
21. L. Wan, M. Zeiler, S. Zhang, Y. L. Cun, R. Fergus (2013), “Regularization of Neural Networks using DropConnect,” Proceedings of the 30th International Conference on Machine Learning, PMLR (28:3), pp. 1058-1066
22. J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, Y. Ma (2009), “Robust face recognition via sparse representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence (31:1), pp. 210-227
23. J. Wu (2017), “Application of Machine Learning to Predicting the Returns of Carry Trade,” Unpublished Master's Thesis, National Chengchi University, Taipei
24. S. N. Zeng, J. P. Gou, L. M. Deng (2017), “An antinoise sparse representation method for robust face recognition via joint l1 and l2 regularization,” Expert Systems with Applications (82), pp. 1-9
dc.identifier.doi  10.6814/THE.NCCU.MIS.011.2018.A05  -
item.openairecristype  http://purl.org/coar/resource_type/c_46ec  -
item.cerifentitytype  Publications  -
item.fulltext  With Fulltext  -
item.openairetype  thesis  -
item.grantfulltext  open  -
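
The abstract above describes adding a regularization term to the loss function of a single-hidden-layer network and implementing the result in TensorFlow. The following Python snippet is a minimal sketch of that idea only, not the thesis's RLEM code: it builds a single-hidden-layer feed-forward network with the current tf.keras API and folds an L2 penalty into the loss. The layer sizes, penalty weight, activation, and toy data are illustrative assumptions, not values taken from the thesis.

    import numpy as np
    import tensorflow as tf

    # Assumed, illustrative sizes: 4 input features, 8 hidden nodes, 1 output.
    l2 = tf.keras.regularizers.l2(1e-4)  # assumed penalty weight (lambda)
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="tanh",
                              kernel_regularizer=l2,
                              input_shape=(4,)),  # the single hidden layer
        tf.keras.layers.Dense(1, kernel_regularizer=l2),
    ])
    # compile() adds the L2 terms collected in model.losses to the MSE
    # objective, i.e. loss = MSE + lambda * sum(w^2), which penalizes
    # large weights and so discourages overfitting.
    model.compile(optimizer="adam", loss="mse")

    x = np.random.randn(64, 4).astype("float32")  # toy inputs
    y = np.random.randn(64, 1).astype("float32")  # toy regression targets
    model.fit(x, y, epochs=5, verbose=0)

A regression loss (MSE) is used here because the thesis predicts carry trade returns; the activation and optimizer shown are common defaults, and the record does not state which ones Wu (2017) or this thesis actually used.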
Appears in Collections: 學位論文 (Theses)
Files in This Item:
File  Size  Format
601701.pdf  2.09 MB  Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.