Title 應用深度學習於股票走勢分析-以台灣市場為例
Applying Deep Learning to Predict the Trend of Stock in Taiwan
Author 周家民 (Zhou, Jia-Min)
Advisor 蔡炎龍 (Tsai, Yen-Lung)
Keywords Deep Learning (深度學習); Neural Network (神經網路, NN); Convolutional Neural Network (卷積神經網路, CNN); Long Short-Term Memory (長短期記憶, LSTM); Stock Trend Forecast (股票趨勢預測); Market Simulation (市場模擬)
Date 2022
Uploaded 1-Aug-2022 18:12:42 (UTC+8)
Abstract
In this paper, we combine existing NN, CNN, and LSTM architectures into a more complex merged model, and introduce a new preprocessing method for technical indicators that converts them into new indicators through preset thresholds or other conditions. We also adopt newer techniques such as LeakyReLU and Nadam to make the model easier to train. Given the same inputs, the merged model substantially outperforms the individual models and far exceeds the naive baseline prediction. Adding the preprocessed indicators further raised the accuracy of the merged model and the LSTM model by 4.13% and 8.54%, respectively.
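The abstract does not spell out the exact transformation rules, but a minimal sketch of the idea, converting a raw technical indicator into a discrete signal via preset thresholds, might look like the following (the choice of RSI and the 30/70 cutoffs are illustrative assumptions, not values taken from the thesis):

```python
import numpy as np

def rsi(closes, period=14):
    """Relative Strength Index of a closing-price series (simple-average form)."""
    deltas = np.diff(closes)
    gains = np.where(deltas > 0, deltas, 0.0)
    losses = np.where(deltas < 0, -deltas, 0.0)
    avg_gain = gains[-period:].mean()
    avg_loss = losses[-period:].mean()
    if avg_loss == 0:          # no down moves in the window
        return 100.0
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

def discretize_rsi(value, low=30.0, high=70.0):
    """Map a raw RSI value to a ternary indicator via preset thresholds:
    1 = oversold (potential buy), -1 = overbought (potential sell), 0 = neutral.
    The thresholds here are conventional textbook values, not the thesis's."""
    if value < low:
        return 1
    if value > high:
        return -1
    return 0
```

Feeding such discretized indicators to a model, instead of the raw values, is one plausible reading of "transformed into new indicators by preset thresholds or some conditions."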
Beyond pure prediction, we also propose a simple trading strategy that acts on the model's predictions only when they exceed a preset confidence threshold. After deducting brokerage fees and transaction tax, the strategy earns a maximum return of about 7%.
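A rough sketch of such a threshold-gated strategy is below. The 0.1425% brokerage fee and 0.3% securities transaction tax are Taiwan's standard rates; the one-day holding period, confidence threshold, price series, and predicted probabilities are all hypothetical, since the abstract does not detail the strategy:

```python
FEE = 0.001425   # Taiwan brokerage fee, charged on both buy and sell
TAX = 0.003      # Taiwan securities transaction tax, charged on sell

def simulate(prices, up_probs, threshold=0.6, capital=1.0):
    """Buy at today's close when the model's predicted up-probability
    exceeds the threshold; sell at the next close. Fees and tax deducted."""
    for today in range(len(prices) - 1):
        if up_probs[today] > threshold:
            shares = capital * (1 - FEE) / prices[today]
            capital = shares * prices[today + 1] * (1 - FEE - TAX)
    return capital

# Hypothetical example: one confident correct call, then one abstention.
prices = [100.0, 103.0, 102.0]
up_probs = [0.8, 0.4]
final = simulate(prices, up_probs)
```

The gating threshold trades frequency for precision: raising it means fewer trades, each backed by a more confident prediction, which is presumably how the pre-set threshold in the thesis "achieves better results."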
References [1] Kunihiko Fukushima and Sei Miyake. Neocognitron: A self-organizing neural network model for a mechanism of visual pattern recognition. In Competition and Cooperation in Neural Nets, pages 267–285. Springer, 1982.
[2] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[3] David H. Hubel. Single unit activity in striate cortex of unrestrained cats. The Journal of Physiology, 147(2):226, 1959.
[4] David H. Hubel and Torsten N. Wiesel. Receptive fields of single neurones in the cat's striate cortex. The Journal of Physiology, 148(3):574, 1959.
[5] Warren S. McCulloch and Walter Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115–133, 1943.
[6] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[7] Bing Xu, Naiyan Wang, Tianqi Chen, and Mu Li. Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853, 2015.
Description Master's thesis
National Chengchi University (國立政治大學)
Department of Applied Mathematics (應用數學系)
108751018
Source http://thesis.lib.nccu.edu.tw/record/#G0108751018
Type thesis
Identifier G0108751018
URI http://nccur.lib.nccu.edu.tw/handle/140.119/141180
Table of Contents
Acknowledgements
Chinese Abstract
Abstract
1 Introduction
2 Deep Learning
2.1 Neural Networks
2.2 Fully Connected Neural Networks
2.3 Activation Function
2.4 Loss Function
2.5 Gradient Descent Method
2.6 Dropout
2.7 L2 Regularization
3 Convolutional Neural Network
4 Long Short-Term Memory
4.1 Recurrent Neural Network
4.2 LSTM
5 Prediction System
5.1 Data Set
5.2 Data Preprocessing
5.2.1 Technical Indicators
5.2.2 Rolling Window
5.3 Model Settings
5.4 Model Structure
5.5 Metrics
5.6 Result
6 Market Simulation
6.1 Strategy
6.2 Result
7 Conclusion
Bibliography
Format application/pdf (1067705 bytes)
DOI 10.6814/NCCU202200774