Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/125805
DC Field | Value | Language
dc.contributor.advisor | 陳樹衡 | zh_TW
dc.contributor.author | 莊彥哲 | zh_TW
dc.contributor.author | Chuang, Yan-Che | en_US
dc.creator | 莊彥哲 | zh_TW
dc.creator | Chuang, Yan-Che | en_US
dc.date | 2019 | en_US
dc.date.accessioned | 2019-09-05T09:07:54Z | -
dc.date.available | 2019-09-05T09:07:54Z | -
dc.date.issued | 2019-09-05T09:07:54Z | -
dc.identifier | G0105258033 | en_US
dc.identifier.uri | http://nccur.lib.nccu.edu.tw/handle/140.119/125805 | -
dc.description | Master's | zh_TW
dc.description | National Chengchi University | zh_TW
dc.description | Department of Economics | zh_TW
dc.description | 105258033 | zh_TW
dc.description.abstract | Our main goal in this thesis is to apply artificial intelligence to forecasting the direction of financial time series, and to use it in practice for day trading in the futures market. We build on a deep learning model, the Long Short-Term Memory network (LSTM), combined with Empirical Mode Decomposition (EMD), which decomposes minute-frequency intraday futures data into meaningful frequency components. Forecasting financial time series has never been a simple task, mainly because such series are non-stationary and serially correlated. We therefore combine EMD, which is designed to decompose a time series into multiple independent components, each with a meaningful frequency, with the LSTM network, whose long-term memory and write, forget, and output gates make it well suited to sequential data. We backtest the model's outputs on historical data, compute model performance and strategy performance, and finally compare against a traditional machine learning algorithm, the K-Nearest Neighbor classifier (KNN); in this thesis, deep learning (models with hidden layers and many neurons) is distinguished from traditional statistical machine learning methods. Through our experiments we identify the best prediction window lengths for EMD combined with LSTM and with KNN, and we show that EMD does effectively improve short- and medium-term trend prediction for financial time series, and that under the same data processing the deep learning model LSTM clearly outperforms the traditional machine learning method KNN. | zh_TW
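The EMD step described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the thesis's actual code: it uses linear-interpolated envelopes (classic EMD per Huang et al. uses cubic splines), a fixed number of sifting iterations, and hypothetical function names (`emd`, `_sift`, `_envelope`). The key property it demonstrates is that the IMFs plus the residue sum back to the original signal.

```python
import numpy as np

def _envelope(x, idx):
    """Envelope through the extrema at positions idx.
    (Linear interpolation; classic EMD uses cubic splines.)"""
    if len(idx) < 2:                     # not enough extrema to span the signal
        return np.full(len(x), x.mean())
    return np.interp(np.arange(len(x)), idx, x[idx])

def _sift(x, n_iter=10):
    """Extract one IMF: repeatedly subtract the mean of the upper/lower envelopes."""
    h = x.copy()
    for _ in range(n_iter):
        d = np.diff(h)
        maxima = np.where((np.hstack([d, -1.0]) < 0) & (np.hstack([1.0, d]) > 0))[0]
        minima = np.where((np.hstack([d, 1.0]) > 0) & (np.hstack([-1.0, d]) < 0))[0]
        if len(maxima) + len(minima) < 4:  # too few extrema to form envelopes
            break
        mean_env = 0.5 * (_envelope(h, maxima) + _envelope(h, minima))
        h = h - mean_env
    return h

def emd(x, max_imfs=4):
    """Decompose x into IMFs plus a residue; their sum recovers x."""
    residue, imfs = x.astype(float), []
    for _ in range(max_imfs):
        imf = _sift(residue)
        imfs.append(imf)
        residue = residue - imf
    return np.array(imfs), residue

# Toy "price" series: slow trend + fast oscillation + noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
price = 0.5 * t + 0.1 * np.sin(2 * np.pi * 40 * t) + 0.02 * rng.standard_normal(500)
imfs, residue = emd(price)
reconstructed = imfs.sum(axis=0) + residue
print(np.allclose(reconstructed, price))  # decomposition is exact by construction
```

In the thesis's pipeline, each window of minute-frequency data would be decomposed this way and the resulting components fed to the LSTM (or KNN) in place of the raw series.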
dc.description.tableofcontents | zh_TW
  Abstract
  1. Introduction
    1.1 Research Motivation
    1.2 Contributions of This Thesis
    1.3 Organization of This Thesis
  2. Research Background and Literature Review
    2.1 Overview of Machine Learning Models for Classification
    2.2 Overview of Deep Learning Models for Classification
    2.3 Literature Review
  3. Methodology
    3.1 Data Processing
      3.1.1 Overview of Taiwan Index Futures Data
      3.1.2 Dataset Splitting and Labeling
    3.2 Machine Learning Algorithm: K-Nearest Neighbor (KNN)
    3.3 Deep Learning Algorithm: Long Short-Term Memory (LSTM)
      3.3.1 Recurrent Neural Networks (RNN)
      3.3.2 Long Short-Term Memory (LSTM)
      3.3.3 The LSTM Architecture Used in This Thesis
  4. Experiments
    4.1 Experimental Design
    4.2 Evaluation Metrics
      4.2.1 Model Performance Metrics
      4.2.2 Trading Performance Metrics
    4.3 Experimental Results
      4.3.1 Cross-Comparison of Model Performance across Models and Data Processing Methods
      4.3.2 Cross-Comparison of Trading Performance across Models and Data Processing Methods
  5. Conclusion and Outlook
    5.1 Conclusion
    5.2 Outlook
  References
dc.format.extent | 1730713 bytes | -
dc.format.mimetype | application/pdf | -
dc.source.uri | http://thesis.lib.nccu.edu.tw/record/#G0105258033 | en_US
dc.subject | Artificial Intelligence | zh_TW
dc.subject | Deep Learning | zh_TW
dc.subject | Neural Networks | zh_TW
dc.subject | Long Short-Term Memory Model | zh_TW
dc.subject | Recurrent Neural Networks | zh_TW
dc.subject | Machine Learning | zh_TW
dc.subject | K-Nearest Neighbor Algorithm | zh_TW
dc.subject | Empirical Mode Decomposition | zh_TW
dc.subject | Intraday Data | zh_TW
dc.subject | Financial Time Series Trend Prediction | zh_TW
dc.subject | Day Trading | zh_TW
dc.title | Application of Deep Learning to Taiwan Index Futures: Long Short-Term Memory Neural Network Modeling Based on Empirical Mode Decomposition | zh_TW
dc.title | Application of Deep Learning in Taiwan Index Futures: Long Short-Term Memory Neural Network Modeling Based on Empirical Mode Decomposition | en_US
dc.type | thesis | en_US
dc.relation.reference | zh_TW
  [1] Bengio, Y., Simard, P., & Frasconi, P. (1994). "Learning long-term dependencies with gradient descent is difficult." IEEE Transactions on Neural Networks, 5(2), 157-166.
  [2] Cover, T., & Hart, P. (1967). "Nearest neighbor pattern classification." IEEE Transactions on Information Theory, 13(1), 21-27. doi:10.1109/TIT.1967.1053964
  [3] Cox, D. R. (1958). "The regression analysis of binary sequences." Journal of the Royal Statistical Society: Series B, 20, 215-242.
  [4] Kingma, D. P., & Ba, J. (2014). "Adam: A method for stochastic optimization." International Conference on Learning Representations.
  [5] Doering, J., Fairbank, M., & Markose, S. (2017). "Convolutional neural networks applied to high-frequency market microstructure forecasting." pp. 31-36. doi:10.1109/CEEC.2017.8101595
  [6] Fisher, R. A. (1936). "The use of multiple measurements in taxonomic problems." Annals of Eugenics, 7, 179-188.
  [7] Gode, D. K., & Sunder, S. (1993). "Allocative efficiency of markets with zero-intelligence traders: Market as a partial substitute for individual rationality." Journal of Political Economy, 101(1), 119-137.
  [8] Hochreiter, S., & Schmidhuber, J. (1997). "Long short-term memory." Neural Computation, 9(8), 1735-1780.
  [9] Huang, N. E., Shen, Z., Long, S. R., et al. (1998). "The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis." Proceedings of the Royal Society of London A, 454.
  [10] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). "ImageNet classification with deep convolutional neural networks." Neural Information Processing Systems, 25. doi:10.1145/3065386
  [11] Le, Q. V., Jaitly, N., & Hinton, G. E. (2015). "A simple way to initialize recurrent networks of rectified linear units." arXiv:1504.00941v2 [cs.NE].
  [12] Li, E. (2018). "LSTM neural network models for market movement prediction" (Dissertation). Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231627
  [13] Lipton, Z. C., Berkowitz, J., & Elkan, C. (2015). "A critical review of recurrent neural networks for sequence learning." arXiv:1506.00019v4 [cs.LG].
  [14] Ioffe, S., & Szegedy, C. (2015). "Batch normalization: Accelerating deep network training by reducing internal covariate shift." arXiv:1502.03167v3 [cs.LG].
  [15] McCulloch, W. S., & Pitts, W. (1943). "A logical calculus of the ideas immanent in nervous activity." Bulletin of Mathematical Biology, 52, 99-115.
  [16] Navon, A., & Keller, Y. (2017). "Financial time series prediction using deep learning." arXiv:1711.04174v1 [eess.SP].
  [17] Rosenblatt, F. (1958). "The perceptron: A probabilistic model for information storage and organization in the brain." Psychological Review, 65(6), 386-408.
  [18] Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). "Learning representations by back-propagating errors." Nature, 323(6088), 533-536. doi:10.1038/323533a0
  [19] Subha, M. V., & Nambi, S. T. (2012). "Classification of stock index movement using k-nearest neighbours (k-NN) algorithm." WSEAS Transactions on Information Science and Applications, 9, 261-270.
  [20] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). "Dropout: A simple way to prevent neural networks from overfitting." Journal of Machine Learning Research, 15, 1929-1958.
  [21] Teixeira, L. A., & Oliveira, A. L. (2010). "A method for automatic stock trading combining technical analysis and nearest neighbor classification." Expert Systems with Applications, 37, 6885-6890.
  [22] Williams, R. J. (1989). "Complexity of exact gradient computation algorithms for recurrent neural networks." Technical Report NU-CCS-89-27. Boston: Northeastern University, College of Computer Science.
  [23] Zhang, B. (2018). "Foreign exchange rates forecasting with an EMD-LSTM neural networks model." Journal of Physics: Conference Series, 1053, 012005. doi:10.1088/1742-6596/1053/1/012005
  [24] Zheng, H., Yuan, J., & Chen, L. (2017). "Short-term load forecasting using EMD-LSTM neural networks with a XGBoost algorithm for feature importance evaluation." Energies, 10, 1168. doi:10.3390/en10081168
dc.identifier.doi | 10.6814/NCCU201900955 | en_US
item.openairetype | thesis | -
item.fulltext | With Fulltext | -
item.grantfulltext | embargo_20240819 | -
item.openairecristype | http://purl.org/coar/resource_type/c_46ec | -
item.cerifentitytype | Publications | -
Appears in Collections: Theses and Dissertations
Files in This Item:
File | Size | Format
803301.pdf | 1.69 MB | Adobe PDF | View/Open
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.