Academic Output - Theses
Title: 利用SVM模型判斷股票資料的隨機性成分 (Using SVM Model to Classify the Random Components of Stock Data)
Author: 賴彥儒 (Lai, Yan-Ru)
Advisor: 曾正男 (Tzeng, Jeng-Nan)
Keywords: Forecasting model; Artificial neural network; Long short-term memory; Machine learning; Support vector machine; EEMD (ensemble empirical mode decomposition)
Date: 2021
Uploaded: 4-Aug-2021 15:40:23 (UTC+8)
Abstract: The purpose of this study is to classify stock data over a fixed period as either functional behavior or random noise. To teach the model what functional behavior and random noise look like, we simulated stock data with three kinds of mathematical models, using signal-processing techniques such as EEMD to extract the parameters those models require from real stock data. We used a support vector machine (SVM) and a deep learning model with long short-term memory (LSTM) for classification. Our results show that when models trained on the simulated data are applied to real data, the resulting classifications are statistically significant at the significance level alpha = 0.05.
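As an illustration of this setup, the following is a minimal sketch, not the thesis's code: windows drawn from three simple "functional" families (seasonal, exponential, and polynomial, echoing the simulation models named in the table of contents below) are classified against pure noise with an RBF-kernel SVM, and held-out accuracy is checked against chance with a binomial test. The window length, model parameters, and test statistic are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import binomtest
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
T = 64  # window length (assumed, not the thesis's value)

def functional_window():
    """One simulated 'functional' window: seasonal, exponential, or polynomial."""
    t = np.linspace(0, 1, T)
    kind = rng.integers(3)
    if kind == 0:                                    # seasonal movement
        x = np.sin(2 * np.pi * rng.uniform(1, 4) * t)
    elif kind == 1:                                  # exponential trend
        x = np.exp(rng.uniform(-1.0, 1.0) * t)
    else:                                            # polynomial trend
        x = np.polyval(rng.uniform(-1.0, 1.0, 3), t)
    return x + 0.1 * rng.standard_normal(T)          # small additive noise

X = np.vstack([functional_window() for _ in range(500)]
              + [rng.standard_normal(T) for _ in range(500)])
y = np.array([1] * 500 + [0] * 500)                  # 1 = functional, 0 = noise

# Standardize each window so the SVM classifies shape rather than scale.
X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-9)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)

# Toy stand-in for the significance check: is held-out accuracy better than
# chance at alpha = 0.05? (The thesis's actual test statistic may differ.)
k = int((clf.predict(X_te) == y_te).sum())
print("p-value:", binomtest(k, n=len(y_te), p=0.5, alternative="greater").pvalue)
```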
References:
[1] Abien Fred Agarap. Deep learning using rectified linear units (ReLU). arXiv preprint arXiv:1803.08375, 2018.
[2] Léon Bottou. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade, pages 421–436. Springer, 2012.
[3] Léon Bottou, Frank E. Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. SIAM Review, 60(2):223–311, 2018.
[4] Chris Chatfield and Mohammad Yar. Holt-Winters forecasting: some practical issues. Journal of the Royal Statistical Society: Series D (The Statistician), 37(2):129–140, 1988.
[5] J. X. Chen. The evolution of computing: AlphaGo. Computing in Science & Engineering, 18(4):4–7, 2016.
[6] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Science & Business Media, 2009.
[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification, 2015.
[8] Mikael Henaff, Arthur Szlam, and Yann LeCun. Recurrent orthogonal networks and long-memory tasks. arXiv preprint arXiv:1602.06662, 2016.
[9] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[10] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[11] Chih-Wei Hsu, Chih-Chung Chang, Chih-Jen Lin, et al. A practical guide to support vector classification, 2003.
[12] Norden E. Huang. Hilbert-Huang Transform and Its Applications, volume 16. World Scientific, 2014.
[13] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456. PMLR, 2015.
[14] Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. An Introduction to Statistical Learning, volume 112. Springer, 2013.
[15] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25:1097–1105, 2012.
[16] Guohui Li, Zhichao Yang, and Hong Yang. Noise reduction method of underwater acoustic signals based on uniform phase empirical mode decomposition, amplitude-aware permutation entropy, and Pearson correlation coefficient. Entropy, 20(12), 2018.
[17] K.-R. Müller, Sebastian Mika, Gunnar Rätsch, Koji Tsuda, and Bernhard Schölkopf. An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12(2):181–201, 2001.
[18] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
[19] Mike Schuster and Kuldip K. Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681, 1997.
[20] Ohad Shamir and Tong Zhang. Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes. In International Conference on Machine Learning, pages 71–79. PMLR, 2013.
[21] Alex J. Smola and Bernhard Schölkopf. A tutorial on support vector regression. Statistics and Computing, 14(3):199–222, 2004.
[22] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[23] Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. On orthogonality and learning recurrent networks with long term dependencies. In International Conference on Machine Learning, pages 3570–3578. PMLR, 2017.
[24] Xing Wan. Influence of feature scaling on convergence of gradient iterative algorithm. In Journal of Physics: Conference Series, volume 1213, page 032021. IOP Publishing, 2019.
[25] Zhaohua Wu and Norden E. Huang. Ensemble empirical mode decomposition: a noise-assisted data analysis method. Advances in Adaptive Data Analysis, 1(01):1–41, 2009.
Description: Master's thesis
National Chengchi University
Department of Applied Mathematics
Student ID: 109751005
Source: http://thesis.lib.nccu.edu.tw/record/#G0109751005
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/136486
DOI: 10.6814/NCCU202100699
Type: thesis
Format: application/pdf (564,919 bytes)
Table of Contents:
1 Introduction
2 Support Vector Machine
2.1 Hard Margin
2.2 Soft Margin
2.3 The Dual Optimization Problem
2.4 Kernel Method
3 Deep Learning and Neural Networks
3.1 Neuron and Neural Networks
3.2 Activation Function
3.3 Loss Function
3.4 Gradient Descent and Back-propagation
3.4.1 Gradient Descent
3.4.2 Back-propagation
3.5 Overfitting, Dropout and Batch Normalization
3.5.1 Overfitting
3.5.2 Dropout
3.5.3 Batch Normalization
4 Recurrent Neural Networks
4.1 Simple Recurrent Neural Network
4.2 Long Short-Term Memory (LSTM)
5 Data Simulation
5.1 Seasonal Movement Model
5.2 Exponential Model
5.3 Polynomial Model
6 Experiments and Results
6.1 Data Transformation
6.2 Prediction Model
6.2.1 LSTM
6.2.2 SVM
6.3 Model Performance
6.4 Test on Real World Data
7 Conclusion and Discussion
Appendix A
Bibliography
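The abstract credits EEMD (ensemble empirical mode decomposition [25]) with extracting the simulation parameters from real stock data. As a rough sketch of what such a decomposition looks like in practice, here is a hypothetical example assuming the third-party PyEMD package (published on PyPI as EMD-signal); it is not the thesis's code, and the signal, trial count, and IMF handling are all illustrative.

```python
import numpy as np
from PyEMD import EEMD  # assumed third-party package: `pip install EMD-signal`

# A toy price-like signal: slow trend plus a seasonal component plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
signal = (np.exp(0.5 * t)
          + 0.3 * np.sin(2 * np.pi * 8 * t)
          + 0.2 * rng.standard_normal(t.size))

# Decompose into intrinsic mode functions (IMFs), highest frequency first.
eemd = EEMD(trials=100)      # number of noise-assisted ensemble members (assumed)
imfs = eemd.eemd(signal, t)  # array of shape (n_imfs, len(signal))

# The noisiest content concentrates in the first IMFs, while the last rows
# carry the trend. Statistics of these components (amplitudes, frequencies,
# trend slope) are the kind of parameters the simulation models could be
# fitted from; the thesis's actual extraction procedure may differ.
print(imfs.shape)
print("residual trend range:", imfs[-1].min(), imfs[-1].max())
```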