Title: Double Deep Q-Network in Automated Stock Trading (應用深度雙Q網路於股票自動交易系統)
Author: Huang, Kuan-Chi (黃冠棋)
Advisor: 蔡炎龍
Keywords: Deep reinforcement learning (深度強化學習); Neural network (神經網路); Q-learning (Q學習); DDQN (深度雙Q網路); Stock trading (股票交易)
Date: 2021
Uploaded: 10-Feb-2022 13:07:06 (UTC+8)
Abstract: In this thesis, we combine deep learning with reinforcement learning to train an automated stock trading system. We build a deep convolutional network and a fully connected network to predict the Q-values of the actions, and we use the DDQN algorithm to update the action values. Each trading day, the system takes the previous 10 days of stock data as input, predicts the trend of the stock, and trades so as to maximize profit.

DDQN is a deep reinforcement learning model that improves on DQN: by introducing a target network and modifying the loss function, it avoids DQN's overestimation problem and achieves better performance. In our experiments we obtain good results, showing that DDQN is effective for automated trading systems.
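The mechanism the abstract describes is the Double DQN update: the online network selects the greedy next action while a separate, periodically updated target network evaluates it, which curbs the overestimation seen in plain DQN. The following is a minimal PyTorch-style sketch of that target computation; the QNetwork architecture, the 10-feature state (one value per day of the 10-day window), the three-action (buy/hold/sell) space, and all hyperparameters are illustrative assumptions, not the author's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical fully connected Q-network; layer sizes, the 10-feature state,
# and the 3-action space (buy / hold / sell) are assumptions for illustration.
class QNetwork(nn.Module):
    def __init__(self, n_features: int = 10, n_actions: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def ddqn_loss(online, target, batch, gamma=0.99):
    """Double DQN TD loss: the online net selects the next action,
    the frozen target net evaluates it, which reduces overestimation."""
    states, actions, rewards, next_states, dones = batch
    q_sa = online(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_actions = online(next_states).argmax(dim=1, keepdim=True)   # selection: online network
        next_q = target(next_states).gather(1, next_actions).squeeze(1)  # evaluation: target network
        td_target = rewards + gamma * (1.0 - dones) * next_q
    return F.mse_loss(q_sa, td_target)
```

Periodically copying the online weights into the target network, e.g. target.load_state_dict(online.state_dict()), would complete the scheme.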
References:
[1] Fei-Ting Chen. Convolutional deep Q-learning for ETF automated trading system, 2017.
[2] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Machine learning basics. Deep Learning, 1(7):98–164, 2016.
[3] Robert Hecht-Nielsen. Theory of the backpropagation neural network. In Neural Networks for Perception, pages 65–93. Elsevier, 1992.
[4] Yu-Ping Huang. A comparison of deep reinforcement learning models: The case of stock automated trading system, 2021.
[5] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[6] Moshe Leshno, Vladimir Ya Lin, Allan Pinkus, and Shimon Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Networks, 6(6):861–867, 1993.
[7] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[8] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[9] Jerome H. Saltzer, David P. Reed, and David D. Clark. End-to-end arguments in system design. ACM Transactions on Computer Systems (TOCS), 2(4):277–288, 1984.
[10] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[11] David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In International Conference on Machine Learning, pages 387–395. PMLR, 2014.
[12] Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, pages 1057–1063, 2000.
[13] Hado van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016.
[14] Christopher John Cornish Hellaby Watkins. Learning from Delayed Rewards. PhD thesis, King's College, Cambridge, 1989.
[15] Bayya Yegnanarayana. Artificial Neural Networks. PHI Learning Pvt. Ltd., 2009.
Description: Master's thesis, National Chengchi University, Department of Applied Mathematics, 107751007
Source: http://thesis.lib.nccu.edu.tw/record/#G0107751007
Type: thesis
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/138942
Table of Contents:
Acknowledgements (致謝) i
Chinese Abstract (中文摘要) ii
Abstract iii
Contents iv
List of Figures vi
1 Introduction 1
2 Deep Learning 2
2.1 Neurons and Neural Networks 3
2.2 Activation Function 4
2.3 Loss Function 6
2.4 Gradient Descent Method 7
3 Convolutional Neural Network (CNN) 9
4 Reinforcement Learning 12
4.1 Introduction 12
4.2 Markov Decision Processes 14
4.3 Monte Carlo Method and Temporal Difference 16
4.4 Q-Learning 17
5 Deep Reinforcement Learning 18
5.1 Deep Q-Learning Network (DQN) 18
5.2 Policy Gradient 21
6 Automated Trading System 24
6.1 Dataset Preparation 24
6.2 Trading System Settlement 25
6.3 Initial Parameter Settlement 26
6.4 Neural Network 27
6.4.1 CNN in DDQN 27
6.4.2 Fully-Connected Network in DDQN 28
6.5 Result 28
7 Conclusion 30
Bibliography 31
Format: application/pdf, 1585222 bytes
DOI: 10.6814/NCCU202200014