Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/136486
DC Field  Value  Language
dc.contributor.advisor曾正男zh_TW
dc.contributor.advisorTzeng, Jeng-Nanen_US
dc.contributor.author賴彥儒zh_TW
dc.contributor.authorLai, Yan-Ruen_US
dc.creator賴彥儒zh_TW
dc.creatorLai, Yan-Ruen_US
dc.date2021en_US
dc.date.accessioned2021-08-04T07:40:23Z-
dc.date.available2021-08-04T07:40:23Z-
dc.date.issued2021-08-04T07:40:23Z-
dc.identifierG0109751005en_US
dc.identifier.urihttp://nccur.lib.nccu.edu.tw/handle/140.119/136486-
dc.description碩士zh_TW
dc.description國立政治大學zh_TW
dc.description應用數學系zh_TW
dc.description109751005zh_TW
dc.description.abstract該研究的目的是對股票的資料進行分類,以判斷在一段時間內的資料為函數行為或隨機噪音。為了訓練該模型什麼是函數行為和什麼是隨機噪音,我們用三種數學模型對股票資料進行了模擬,並利用訊號處理的技巧從真實股票資料中找出建立數學模型所需要的參數。我們使用支持向量機(SVM)和具有長期短期記憶(LSTM)的深度學習模型進行分類。我們的結果表明,由我們的模擬數據訓練的模型使用在實際數據的預測結果,在顯著水準alpha = 0.05下,我們的分類在統計上有顯著差異。zh_TW
dc.description.abstractThe purpose of this study was to classify stock price data over a fixed period as either functional behavior or random noise. To teach the model what constitutes functional behavior and what constitutes random noise, we simulated stock data with three kinds of mathematical models; the parameters of these models were estimated from real stock data using signal-processing techniques such as EEMD. We used a support vector machine (SVM) and a deep learning model with long short-term memory (LSTM) for classification. Our results show that when models trained on our simulated data are applied to real data, the resulting classification is statistically significant at the significance level alpha = 0.05.en_US
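The pipeline the abstract describes — simulate labeled windows of "functional behavior" versus "random noise," then train a classifier to tell them apart — can be sketched roughly as follows. This is an illustrative toy version, not the thesis's actual code: the window length, the seasonal-style test signal, the noise levels, and the use of scikit-learn's `SVC` are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_window(functional, n=64):
    """Simulate one fixed-length window of 'stock-like' data."""
    t = np.linspace(0.0, 1.0, n)
    if functional:
        # seasonal-movement-style signal plus a small noise term
        return np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal(n)
    # pure random-noise window
    return rng.standard_normal(n)

# Build a balanced, labeled dataset of simulated windows
X = np.array([make_window(i % 2 == 0) for i in range(400)])
y = np.array([i % 2 == 0 for i in range(400)], dtype=int)

# Train an RBF-kernel SVM on part of the data, evaluate on the rest
clf = SVC(kernel="rbf").fit(X[:300], y[:300])
acc = clf.score(X[300:], y[300:])
print(f"held-out accuracy: {acc:.2f}")
```

In the thesis itself the simulation parameters are not hand-picked as above but estimated from real stock data via EEMD, and an LSTM network is trained alongside the SVM on the same windows.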
dc.description.tableofcontents1 Introduction 1
2 Support Vector Machine 3
2.1 Hard Margin 4
2.2 Soft Margin 5
2.3 The Dual Optimization Problem 6
2.4 Kernel Method 7
3 Deep Learning and Neural Networks 9
3.1 Neuron and Neural Networks 10
3.2 Activation Function 10
3.3 Loss Function 13
3.4 Gradient Descent and Back-propagation 14
3.4.1 Gradient Descent 14
3.4.2 Back-propagation 16
3.5 Overfitting, Dropout and Batch Normalization 17
3.5.1 Overfitting 17
3.5.2 Dropout 18
3.5.3 Batch Normalization 20
4 Recurrent Neural Networks 21
4.1 Simple Recurrent Neural Network 21
4.2 Long Short-Term Memory (LSTM) 23
5 Data Simulation 27
5.1 Seasonal Movement Model 28
5.2 Exponential Model 30
5.3 Polynomial Model 31
6 Experiments and Results 32
6.1 Data Transformation 32
6.2 Prediction Model 33
6.2.1 LSTM 33
6.2.2 SVM 33
6.3 Model Performance 34
6.4 Test on Real World Data 37
7 Conclusion and Discussion 39
Appendix A 40
Bibliography 41zh_TW
dc.format.extent564919 bytes-
dc.format.mimetypeapplication/pdf-
dc.source.urihttp://thesis.lib.nccu.edu.tw/record/#G0109751005en_US
dc.subject預測模型zh_TW
dc.subject類神經網路zh_TW
dc.subject長短期記憶模型zh_TW
dc.subject機器學習zh_TW
dc.subject支持向量機zh_TW
dc.subject總體經驗模態分解zh_TW
dc.subjectForecasting modelen_US
dc.subjectArtificial Neural Networken_US
dc.subjectLong short-term memoryen_US
dc.subjectMachine learningen_US
dc.subjectSupport vector machineen_US
dc.subjectEEMDen_US
dc.title利用SVM模型判斷股票資料的隨機性成分zh_TW
dc.titleUsing SVM Model to Classify the Random Components of Stock Dataen_US
dc.typethesisen_US
dc.relation.reference[1] Abien Fred Agarap. Deep learning using rectified linear units (ReLU). arXiv preprint arXiv:1803.08375, 2018.
[2] Léon Bottou. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade, pages 421–436. Springer, 2012.
[3] Léon Bottou, Frank E. Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. SIAM Review, 60(2):223–311, 2018.
[4] Chris Chatfield and Mohammad Yar. Holt-Winters forecasting: some practical issues. Journal of the Royal Statistical Society: Series D (The Statistician), 37(2):129–140, 1988.
[5] J. X. Chen. The evolution of computing: AlphaGo. Computing in Science & Engineering, 18(4):4–7, 2016.
[6] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Science & Business Media, 2009.
[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification, 2015.
[8] Mikael Henaff, Arthur Szlam, and Yann LeCun. Recurrent orthogonal networks and long-memory tasks. arXiv preprint arXiv:1602.06662, 2016.
[9] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[10] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[11] Chih-Wei Hsu, Chih-Chung Chang, Chih-Jen Lin, et al. A practical guide to support vector classification, 2003.
[12] Norden Eh Huang. Hilbert-Huang Transform and Its Applications, volume 16. World Scientific, 2014.
[13] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456. PMLR, 2015.
[14] Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. An Introduction to Statistical Learning, volume 112. Springer, 2013.
[15] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25:1097–1105, 2012.
[16] Guohui Li, Zhichao Yang, and Hong Yang. Noise reduction method of underwater acoustic signals based on uniform phase empirical mode decomposition, amplitude-aware permutation entropy, and Pearson correlation coefficient. Entropy, 20(12), 2018.
[17] K.-R. Müller, Sebastian Mika, Gunnar Rätsch, Koji Tsuda, and Bernhard Schölkopf. An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12(2):181–201, 2001.
[18] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
[19] Mike Schuster and Kuldip K. Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681, 1997.
[20] Ohad Shamir and Tong Zhang. Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes. In International Conference on Machine Learning, pages 71–79. PMLR, 2013.
[21] Alex J. Smola and Bernhard Schölkopf. A tutorial on support vector regression. Statistics and Computing, 14(3):199–222, 2004.
[22] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[23] Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. On orthogonality and learning recurrent networks with long term dependencies. In International Conference on Machine Learning, pages 3570–3578. PMLR, 2017.
[24] Xing Wan. Influence of feature scaling on convergence of gradient iterative algorithm. In Journal of Physics: Conference Series, volume 1213, page 032021. IOP Publishing, 2019.
[25] Zhaohua Wu and Norden E. Huang. Ensemble empirical mode decomposition: a noise-assisted data analysis method. Advances in Adaptive Data Analysis, 1(01):1–41, 2009.zh_TW
dc.identifier.doi10.6814/NCCU202100699en_US
item.fulltextWith Fulltext-
item.grantfulltextrestricted-
item.openairetypethesis-
item.openairecristypehttp://purl.org/coar/resource_type/c_46ec-
item.cerifentitytypePublications-
Appears in Collections: 學位論文 (Theses)
Files in This Item:
File  Description  Size  Format
100501.pdf    551.68 kB  Adobe PDF  View/Open