Title: 利用SVM模型判斷股票資料的隨機性成分 (Using SVM Model to Classify the Random Components of Stock Data)
Author: Lai, Yan-Ru (賴彥儒)
Contributors: Tzeng, Jeng-Nan (曾正男, advisor); Lai, Yan-Ru (賴彥儒)
Keywords: Forecasting model
Artificial neural network
Long short-term memory
Machine learning
Support vector machine
Ensemble empirical mode decomposition (EEMD)
Date: 2021
Upload time: 4-Aug-2021 15:40:23 (UTC+8)
Abstract: The purpose of this study is to classify stock data over a fixed period as either functional behavior or random noise. To teach the model what constitutes functional behavior and what constitutes random noise, we simulated stock data with three kinds of mathematical models, estimating the models' parameters from real stock data with signal-processing techniques such as EEMD. We used a support vector machine (SVM) and a deep learning model with long short-term memory (LSTM) to perform the classification. Our results show that when the models trained on simulated data are used to make predictions on actual data, the resulting classifications are statistically significantly different at the significance level alpha = 0.05.
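As a minimal sketch of the classification idea in the abstract (not the thesis's actual pipeline: the window length, the three toy generators, and every parameter value below are illustrative assumptions that only loosely mirror the thesis's seasonal, exponential, and polynomial simulation models), one can label fixed-length windows as functional behavior or random noise and train an SVM on them:

    # Minimal sketch, assuming scikit-learn; not the thesis's code.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    N, L = 2000, 64          # number of windows, window length (assumed)
    t = np.linspace(0, 1, L)

    def functional_window():
        """One window of 'functional behavior' plus a little noise."""
        kind = rng.integers(3)
        if kind == 0:        # seasonal movement
            x = np.sin(2 * np.pi * rng.uniform(1, 5) * t)
        elif kind == 1:      # exponential trend
            x = np.exp(rng.uniform(0.5, 2.0) * t)
        else:                # polynomial trend
            x = np.polyval(rng.normal(size=3), t)
        return x + 0.1 * rng.normal(size=L)

    labels = rng.integers(0, 2, N)   # 1 = functional behavior, 0 = pure noise
    X = np.array([functional_window() if y else rng.normal(size=L)
                  for y in labels])

    # Standardize features, fit an RBF-kernel SVM, and check held-out accuracy.
    Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.25,
                                          random_state=0)
    scaler = StandardScaler().fit(Xtr)
    clf = SVC(kernel="rbf", C=1.0).fit(scaler.transform(Xtr), ytr)
    print("held-out accuracy:", clf.score(scaler.transform(Xte), yte))

The thesis instead derives its simulation parameters from real stock data (via EEMD) and compares the SVM against an LSTM classifier; this sketch only shows the train-on-simulated, test-on-held-out shape of the experiment.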
References: [1] Abien Fred Agarap. Deep learning using rectified linear units (ReLU). arXiv preprint arXiv:1803.08375, 2018.
[2] Léon Bottou. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade, pages 421–436. Springer, 2012.
[3] Léon Bottou, Frank E. Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. SIAM Review, 60(2):223–311, 2018.
[4] Chris Chatfield and Mohammad Yar. Holt-Winters forecasting: some practical issues. Journal of the Royal Statistical Society: Series D (The Statistician), 37(2):129–140, 1988.
[5] J. X. Chen. The evolution of computing: AlphaGo. Computing in Science & Engineering, 18(4):4–7, 2016.
[6] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Science & Business Media, 2009.
[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification, 2015.
[8] Mikael Henaff, Arthur Szlam, and Yann LeCun. Recurrent orthogonal networks and long-memory tasks. arXiv preprint arXiv:1602.06662, 2016.
[9] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[10] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[11] Chih-Wei Hsu, Chih-Chung Chang, Chih-Jen Lin, et al. A practical guide to support vector classification, 2003.
[12] Norden E. Huang. Hilbert-Huang Transform and Its Applications, volume 16. World Scientific, 2014.
[13] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456. PMLR, 2015.
[14] Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. An Introduction to Statistical Learning, volume 112. Springer, 2013.
[15] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25:1097–1105, 2012.
[16] Guohui Li, Zhichao Yang, and Hong Yang. Noise reduction method of underwater acoustic signals based on uniform phase empirical mode decomposition, amplitude-aware permutation entropy, and Pearson correlation coefficient. Entropy, 20(12), 2018.
[17] K.-R. Müller, Sebastian Mika, Gunnar Rätsch, Koji Tsuda, and Bernhard Schölkopf. An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12(2):181–201, 2001.
[18] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
[19] Mike Schuster and Kuldip K. Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681, 1997.
[20] Ohad Shamir and Tong Zhang. Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes. In International Conference on Machine Learning, pages 71–79. PMLR, 2013.
[21] Alex J. Smola and Bernhard Schölkopf. A tutorial on support vector regression. Statistics and Computing, 14(3):199–222, 2004.
[22] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[23] Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. On orthogonality and learning recurrent networks with long term dependencies. In International Conference on Machine Learning, pages 3570–3578. PMLR, 2017.
[24] Xing Wan. Influence of feature scaling on convergence of gradient iterative algorithm. In Journal of Physics: Conference Series, volume 1213, page 032021. IOP Publishing, 2019.
[25] Zhaohua Wu and Norden E. Huang. Ensemble empirical mode decomposition: a noise-assisted data analysis method. Advances in Adaptive Data Analysis, 1(01):1–41, 2009.
Description: Master's thesis
National Chengchi University
Department of Applied Mathematics
109751005
Source: http://thesis.lib.nccu.edu.tw/record/#G0109751005
Data type: thesis
Other identifiers: G0109751005
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/136486
Table of contents: 1 Introduction 1
2 Support Vector Machine 3
2.1 Hard Margin 4
2.2 Soft Margin 5
2.3 The Dual Optimization Problem 6
2.4 Kernel Method 7
3 Deep Learning and Neural Networks 9
3.1 Neuron and Neural Networks 10
3.2 Activation Function 10
3.3 Loss Function 13
3.4 Gradient Descent and Back-propagation 14
3.4.1 Gradient Descent 14
3.4.2 Back-propagation 16
3.5 Overfitting, Dropout and Batch Normalization 17
3.5.1 Overfitting 17
3.5.2 Dropout 18
3.5.3 Batch Normalization 20
4 Recurrent Neural Networks 21
4.1 Simple Recurrent Neural Network 21
4.2 Long Short-Term Memory (LSTM) 23
5 Data Simulation 27
5.1 Seasonal Movement Model 28
5.2 Exponential Model 30
5.3 Polynomial Model 31
6 Experiments and Results 32
6.1 Data Transformation 32
6.2 Prediction Model 33
6.2.1 LSTM 33
6.2.2 SVM 33
6.3 Model Performance 34
6.4 Test on Real World Data 37
7 Conclusion and Discussion 39
Appendix A 40
Bibliography 41
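Chapter 5's simulation models take their parameters from real stock data via ensemble empirical mode decomposition (reference [25]). As a rough illustration of that decomposition step only, here is a sketch assuming the third-party PyEMD package (PyPI: EMD-signal), not any code from the thesis:

    # Illustrative only: decompose a toy "price" series with EEMD,
    # assuming the PyEMD package (pip install EMD-signal). The thesis's
    # actual parameter-extraction procedure is not reproduced here.
    import numpy as np
    from PyEMD import EEMD

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 256)
    # Toy stand-in for a stock series: exponential trend + seasonality + noise.
    price = (np.exp(0.5 * t) + 0.3 * np.sin(2 * np.pi * 8 * t)
             + 0.1 * rng.normal(size=t.size))

    eemd = EEMD(trials=100)       # ensemble size for the noise-assisted EMD
    imfs = eemd.eemd(price, t)    # intrinsic mode functions, fast to slow
    residue = price - imfs.sum(axis=0)
    print(f"{imfs.shape[0]} IMFs extracted; the residue carries the slow trend")
    # Slow IMFs and the residue can then suggest trend/seasonal parameters
    # for simulation models like those in Chapter 5.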
Format: application/pdf, 564919 bytes
DOI: 10.6814/NCCU202100699