Title: 具可調整風險偏好之深度強化學習資產配置系統
(A Deep Reinforcement Learning Portfolio Management System with Adjustable Risk Preference)
Author: Chang, Tain-Tzu (張天慈)
Advisor: Hu, Yuh-Jong (胡毓忠)
Keywords: Deep Reinforcement Learning; Reinforcement Learning; Portfolio Management System; Risk Preference; Asset Allocation; OpenAI Gym
Date: 2021
Uploaded: 2-Sep-2021 18:17:46 (UTC+8)
Abstract:
We introduced a DRL-based portfolio management system with adjustable risk preference. By adjusting a threshold parameter, the system can produce portfolios that match different investors' risk preferences. The experimental results show that, in most cases, our system outperformed the constant rebalanced portfolio (CRP) in terms of maximum drawdown (MDD) and compound annual growth rate (CAGR). The same approach has the potential to apply to other investor preferences, such as the investor views used in the Black-Litterman model.

References:
[1] Black, F., and Litterman, R. Global portfolio optimization. Financial Analysts Journal 48, 5 (1992), 28–43.
[2] Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. OpenAI Gym. arXiv preprint arXiv:1606.01540 (2016).
[3] Chang, E. Why you likely have too many mutual funds or ETFs, Sep 2016.
[4] Cogneau, P., and Hübner, G. The 101 ways to measure portfolio performance. Available at SSRN 1326076 (2009).
[5] Fischer, T., and Krauss, C. Deep learning with long short-term memory networks for financial market predictions. European Journal of Operational Research 270, 2 (2018), 654–669.
[6] Fujimoto, S., Hoof, H., and Meger, D. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning (2018), PMLR, pp. 1587–1596.
[7] Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor, 2018.
[8] Johansen, A., and Sornette, D. Stock market crashes are outliers. The European Physical Journal B - Condensed Matter and Complex Systems 1, 2 (1998), 141–143.
[9] Johansen, A., and Sornette, D. Large stock market price drawdowns are outliers. Journal of Risk 4 (2002), 69–110.
[10] Kahneman, D., and Tversky, A. An analysis of decision under risk. Econometrica 36 (2000).
[11] Krauss, C., Do, X. A., and Huck, N. Deep neural networks, gradient-boosted trees, random forests: Statistical arbitrage on the S&P 500. European Journal of Operational Research 259, 2 (2017), 689–702.
[12] Levine, S., Finn, C., Darrell, T., and Abbeel, P. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research 17, 1 (2016), 1334–1373.
[13] Magdon-Ismail, M., and Atiya, A. F. Maximum drawdown. Risk Magazine 17, 10 (2004), 99–102.
[14] Markowitz, H. Portfolio selection. The Journal of Finance 7, 1 (1952), 77–91.
[15] Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602 (2013).
[16] Moody, J., and Wu, L. Optimization of trading systems and portfolios. In Proceedings of the IEEE/IAFE 1997 Computational Intelligence for Financial Engineering (CIFEr) (1997), pp. 300–307.
[17] Moody, J., and Saffell, M. Learning to trade via direct reinforcement. IEEE Transactions on Neural Networks 12, 4 (2001), 875–889.
[18] Moody, J., Wu, L., Liao, Y., and Saffell, M. Performance functions and reinforcement learning for trading systems and portfolios. Journal of Forecasting 17, 5-6 (1998), 441–470.
[19] Schulman, J., Levine, S., Abbeel, P., Jordan, M., and Moritz, P. Trust region policy optimization. In International Conference on Machine Learning (2015), PMLR, pp. 1889–1897.
[20] Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017).
[21] Sharpe, W. F. The Sharpe ratio. The Journal of Portfolio Management 21, 1 (1994), 49–58.
[22] Silver, D., Lever, G., Heess, N., Degris, T., Wierstra, D., and Riedmiller, M. Deterministic policy gradient algorithms. In International Conference on Machine Learning (2014), PMLR, pp. 387–395.
[23] Statman, M. How many stocks make a diversified portfolio? Journal of Financial and Quantitative Analysis (1987), 353–363.
[24] Tversky, A., and Kahneman, D. Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty 5, 4 (Oct 1992), 297–323.
[25] Willenbrock, S. Diversification return, portfolio rebalancing, and the commodity return puzzle. Financial Analysts Journal 67, 4 (2011), 42–49.

Description: Master's thesis
National Chengchi University
In-service Master's Program, Department of Computer Science
Student ID: 108971001
Identifier: G0108971001
Source: http://thesis.lib.nccu.edu.tw/record/#G0108971001
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/137166
DOI: 10.6814/NCCU202101177
Type: thesis
Format: application/pdf (1837055 bytes)
Table of Contents:
Acknowledgements; 摘要 (Chinese Abstract); Abstract; Contents; List of Figures; List of Tables
1 Introduction (1.1 Objective; 1.2 Portfolio Management System; 1.3 Supervised Learning Forecast System; 1.4 RL-based Portfolio Management System)
2 Related Work (2.1 Modern Portfolio Theory (MPT); 2.2 Reinforcement Learning with Sharpe Ratio; 2.3 Reinforcement Learning with Sterling Ratio; 2.4 Comparison)
3 System Design (3.1 Overview; 3.2 Investment Universe; 3.3 Market Features; 3.4 Investor Preference)
4 Deep Reinforcement Learning Model (4.1 Overview; 4.2 Reinforcement Learning (RL); 4.3 Value Optimization vs Policy Optimization; 4.4 On-policy vs Off-policy; 4.5 Deterministic Policy vs Stochastic Policy; 4.6 Comparison; 4.7 Soft Actor-Critic: 4.7.1 Actor-Critic, 4.7.2 Entropy Regularization)
5 Trading Environment (5.1 Feature Extractor; 5.2 Portfolio Builder; 5.3 Trading System; 5.4 Reward Provider)
6 Methodology (6.1 Data Collection and Processing; 6.2 Holding and Transaction Costs; 6.3 Hyperparameters; 6.4 Experiment: 6.4.1 System Parameter, 6.4.2 Trading Frequency, 6.4.3 Validation Period, 6.4.4 Baseline, 6.4.5 Result)
7 Conclusions and Extensions (7.1 Conclusions; 7.2 Extensions)
Appendices: A Selection of ETFs; B Market Features; References
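The abstract compares the system against a constant rebalanced portfolio (CRP) on maximum drawdown (MDD) and compound annual growth rate (CAGR). As an illustrative sketch only (not the thesis's code, and with hypothetical weights and returns), these metrics and the CRP baseline can be computed as:

```python
# Illustrative sketch: the MDD and CAGR metrics the abstract cites, plus a
# constant rebalanced portfolio (CRP) baseline that rebalances back to fixed
# weights every period. All numbers below are hypothetical.

def max_drawdown(values):
    """Largest peak-to-trough decline of a portfolio value series, as a fraction."""
    peak = values[0]
    mdd = 0.0
    for v in values:
        peak = max(peak, v)
        mdd = max(mdd, (peak - v) / peak)
    return mdd

def cagr(values, years):
    """Compound annual growth rate over the holding period."""
    return (values[-1] / values[0]) ** (1.0 / years) - 1.0

def crp_values(asset_returns, weights, start=1.0):
    """CRP value series: each period, grow by the weighted simple returns,
    i.e. rebalance to the same fixed weights at every step."""
    values = [start]
    for period in asset_returns:
        growth = sum(w * (1.0 + r) for w, r in zip(weights, period))
        values.append(values[-1] * growth)
    return values

# Hypothetical two-asset example: 60/40 weights, four periods of returns.
returns = [(0.05, 0.01), (-0.10, 0.02), (0.03, 0.00), (0.07, 0.01)]
values = crp_values(returns, weights=(0.6, 0.4))
print(round(max_drawdown(values), 4), round(cagr(values, years=1.0), 4))
```

In this framing, the thesis's claim is that the DRL agent's value series yields a lower `max_drawdown` and higher `cagr` than the CRP series in most tested cases.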

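The keywords list OpenAI Gym, whose reset()/step() environment convention the thesis's trading environment presumably follows. The sketch below shows only that interface shape; the environment name, observation, and the threshold-penalized reward are hypothetical illustrations of how a risk-preference threshold could enter the reward, not the thesis's actual design:

```python
# Hypothetical Gym-style trading environment skeleton. Follows the OpenAI Gym
# reset()/step(action) convention without importing gym, so it is self-contained.
class TradingEnv:
    def __init__(self, returns, drawdown_threshold=0.1):
        self.returns = returns               # per-step returns of one risky asset
        self.threshold = drawdown_threshold  # investor risk-preference parameter
        self.reset()

    def reset(self):
        self.t = 0
        self.value = 1.0   # portfolio value, starts at 1
        self.peak = 1.0    # running peak, for drawdown tracking
        return self._obs()

    def _obs(self):
        return (self.t, self.value)

    def step(self, weight):
        # action: fraction of wealth in the risky asset, remainder in cash
        r = weight * self.returns[self.t]
        self.value *= 1.0 + r
        self.peak = max(self.peak, self.value)
        drawdown = (self.peak - self.value) / self.peak
        # reward: per-step return, penalized when drawdown exceeds the threshold
        reward = r - max(0.0, drawdown - self.threshold)
        self.t += 1
        done = self.t >= len(self.returns)
        return self._obs(), reward, done, {}

# Usage: roll out a fixed 50% allocation over three hypothetical periods.
env = TradingEnv([0.02, -0.05, 0.01], drawdown_threshold=0.02)
obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(0.5)
```

Lowering `drawdown_threshold` makes the penalty bind earlier, which is one plausible way a single parameter could tilt a learned policy toward more conservative allocations.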