Title 深度強化學習歷程的視覺化分析以Procgen Benchmark環境為例
The Learning Process of Deep Reinforcement Learning with Visualization Analysis Based on Procgen Benchmark Environment
Author 黃亦晨
HUANG, Yi-Chen
Contributors 紀明德
Chi, Ming-Te
黃亦晨
HUANG, Yi-Chen
Keywords 深度強化學習 / Deep reinforcement learning
視覺化 / Visualization
Procgen Benchmark
Date 2023
Date uploaded 9-Mar-2023 18:37:03 (UTC+8)
Abstract 強化學習結合了深度學習,也就是深度強化學習—透過傳送遊戲中的畫面給電腦,讓電腦選擇動作,再根據電腦的選擇給予獎勵或是懲罰。在 Procgen Benchmark 環境中,使用者可以自訂環境數量,能夠提供幾乎完全隨機的遊戲環境,降低發生擬合過度的情形,使模型的訓練更為完善。
由於神經網路黑盒子的特性,我們只能透過模型的輸出判斷模型是否完善,但無法知道模型是透過什麼方式來做決策,因此需要視覺化的工具輔助觀察模型行為。首先我們呈現了模型相關資訊的圖表,再來使用了基於擾動生成的顯著圖來觀察模型關注區域,此外我們將神經網路中隱藏層的數據提取後使用 T-SNE 降維,並用 K-means 演算法做分群,最後將上述功能整合成一個視覺分析介面,幫助使用者理解模型決策過程,同時也能觀察模型的決策是否與我們預期中相符。透過使用者報告,受試者認為我們設計的獎勵機制優於對照組,驗證了模型良好的性能,此外,受試者也能透過視覺化分析介面觀察出電腦使用的通關策略以及人類玩家通關策略的差異,驗證了視覺化分析介面的有效性。
Deep reinforcement learning combines deep learning with reinforcement learning: game frames are sent to the computer, the computer chooses an action, and it is then rewarded or punished based on its choice. The Procgen Benchmark environment, in which users can customize the number of environments, provides almost completely randomized game levels, reducing overfitting and making model training more robust.
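The following is a minimal, hypothetical sketch (not the thesis's code) of how a Procgen environment with a configurable number of levels is typically created through the public procgen package and the Gym API; the game choice ("coinrun") and all parameter values are illustrative assumptions.

import gym

# num_levels controls how many distinct procedurally generated levels the
# agent can encounter during training; num_levels=0 means an unbounded set
# of levels, which is what makes overfitting to a few fixed layouts unlikely.
env = gym.make(
    "procgen:procgen-coinrun-v0",   # illustrative game, not necessarily the one studied
    num_levels=0,
    start_level=0,
    distribution_mode="easy",
)

obs = env.reset()
for _ in range(10):
    # A random policy stands in for the trained agent in this sketch.
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()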
Because of the black-box nature of neural networks, we can judge whether a model performs well only from its output; we cannot tell how the model makes its decisions. Visualization tools are therefore needed to help observe model behavior. First, we present charts of model-related information. Next, we use perturbation-based saliency maps to observe the regions the model attends to. In addition, we extract activations from a hidden layer of the network, reduce their dimensionality with t-SNE, and cluster them with the K-means algorithm. Finally, we integrate these functions into a visual analysis interface that helps users understand the model's decision-making process and check whether its decisions match our expectations. A user study verifies the model's performance: participants judged our designed reward mechanism to be better than the control group's. Participants could also use the interface to observe the differences between the computer's and human players' level-clearing strategies, validating the effectiveness of the visual analysis interface.
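As a rough, hypothetical illustration of the two analysis steps named above (again, not the thesis's code), the sketch below first computes a perturbation-based saliency score in the spirit of Greydanus et al. [5], blurring one image patch at a time and measuring how much the policy output changes, and then projects hidden-layer activations with t-SNE and clusters them with K-means via scikit-learn. The policy callable, patch size, activation shapes, and cluster count are all assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

def perturbation_saliency(policy, frame, patch=8, sigma=3):
    # Blur one patch at a time; score each patch by how much the perturbation
    # changes the policy output (a larger change means the region mattered more).
    base = policy(frame)
    h, w = frame.shape[:2]
    blurred = gaussian_filter(frame.astype(np.float32), sigma=(sigma, sigma, 0))
    scores = np.zeros((h // patch, w // patch))
    for i in range(0, (h // patch) * patch, patch):
        for j in range(0, (w // patch) * patch, patch):
            perturbed = frame.astype(np.float32).copy()
            perturbed[i:i + patch, j:j + patch] = blurred[i:i + patch, j:j + patch]
            scores[i // patch, j // patch] = np.abs(policy(perturbed) - base).sum()
    return scores

def embed_and_cluster(hidden, n_clusters=5):
    # Project hidden-layer activations (one row per game frame) to 2-D with
    # t-SNE, then group the projected states with K-means.
    embedding = TSNE(n_components=2, perplexity=30, init="pca",
                     random_state=0).fit_transform(hidden)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(embedding)
    return embedding, labels

# Placeholder usage: random data stands in for real activations and a real policy.
dummy_policy = lambda x: np.ones(15) / 15.0                     # uniform action distribution
frame = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)  # Procgen frames are 64x64 RGB
saliency = perturbation_saliency(dummy_policy, frame)
embedding, labels = embed_and_cluster(np.random.rand(300, 256))
print(saliency.shape, embedding.shape, np.bincount(labels))

Whether clustering is applied to the 2-D projection or to the raw activations, and how many clusters are used, are design choices the abstract does not specify; the sketch clusters the projection purely for illustration.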
References [1] Cheng, S., Li, X., Shan, G., Niu, B., Wang, Y., & Luo, M. (2022). ACMViz: a visual analytics approach to understand DRL-based autonomous control model. Journal of Visualization, 1-16.
[2] Cobbe, K., Hesse, C., Hilton, J., & Schulman, J. (2020, November). Leveraging procedural generation to benchmark reinforcement learning. In International Conference on Machine Learning (pp. 2048-2056). PMLR.
[3] Cobbe, K., Klimov, O., Hesse, C., Kim, T., & Schulman, J. (2019, May). Quantifying generalization in reinforcement learning. In International Conference on Machine Learning (pp. 1282-1289). PMLR.
[4] Deshpande, S., Eysenbach, B., & Schneider, J. (2020). Interactive visualization for debugging RL. arXiv preprint arXiv:2008.07331.
[5] Greydanus, S., Koul, A., Dodge, J., & Fern, A. (2018, July). Visualizing and understanding Atari agents. In International Conference on Machine Learning (pp. 1792-1801). PMLR.
[6] Hilton, J., Cammarata, N., Carter, S., Goh, G., & Olah, C. (2020). Understanding RL vision. Distill, 5(11), e29.
[7] Joo, H. T., & Kim, K. J. (2019, August). Visualization of deep reinforcement learning using Grad-CAM: How AI plays Atari games? In 2019 IEEE Conference on Games (CoG) (pp. 1-2). IEEE.
[8] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., ... & Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533.
[9] Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
[10] Mott, A., Zoran, D., Chrzanowski, M., Wierstra, D., & Jimenez Rezende, D. (2019). Towards interpretable reinforcement learning using attention augmented agents. Advances in Neural Information Processing Systems, 32.
[11] Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
[12] Van der Maaten, L., & Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11).
[13] Wang, J., Zhang, W., Yang, H., Yeh, C. C. M., & Wang, L. (2021). Visual analytics for RNN-based deep reinforcement learning. IEEE Transactions on Visualization and Computer Graphics, 28(12), 4141-4155.
[14] Zeiler, M. D., & Fergus, R. (2014). Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I (pp. 818-833). Springer International Publishing.
Description Master's thesis
National Chengchi University
Department of Computer Science
109753107
Source http://thesis.lib.nccu.edu.tw/record/#G0109753107
Type thesis
URI http://nccur.lib.nccu.edu.tw/handle/140.119/143834
Table of contents
Chapter 1 Introduction 0
1.1 Motivation and Objectives 0
1.2 Problem Description 1
1.3 Contributions 2
Chapter 2 Related Work 3
2.1 Deep Reinforcement Learning 3
2.2 Feature Visualization 4
2.3 Visualization of Reinforcement Learning Models 5
Chapter 3 Methods and Procedure 8
3.1 Procgen Benchmark Environment 8
3.2 System Architecture 9
3.3 Model Architecture 9
3.4 Reward Adjustment and Data Collection 10
3.5 Saliency Maps 13
3.6 t-SNE and K-means Clustering 14
3.7 Human Player Game Data 15
Chapter 4 Visualization Design 16
4.1 Visualization Interface Design Requirements 16
4.2 Training Overview 17
4.3 Exploration of Selected Levels 19
4.5 Visualization Interface Help Features 25
Chapter 5 Visual Analysis Results 26
5.1 System and Training Environment 26
5.2 Visual Analysis of Initial Training Results and Reward Adjustment 26
5.3 Explaining the Downward Action 29
5.4 Regions the Model Attends To 31
5.5 The Model's Level-Clearing Strategies 36
5.6 Differences Between Human and Computer Strategies 42
Chapter 6 User Study 44
6.1 User Study of the Interface and Model Comparison 44
6.2 Questionnaire Questions and Results 46
Chapter 7 Conclusion and Future Work 53
References 54
Format application/pdf, 14412821 bytes