Please use this identifier to cite or link to this item: https://ah.nccu.edu.tw/handle/140.119/137165


Title: 大數據分析於GPU平台之效能評估:以影像辨識為例
Evaluation of Big Data Analytical Performance on GPU Platforms: Computer Vision as an Example
Authors: 曾豐源
Tseng, Feng-Yuan
Contributors: 胡毓忠
Hu, Yuh-Jong
曾豐源
Tseng, Feng-Yuan
Keywords: 大數據分析
深度學習
ImageNet
NVIDIA
GPU
NVIDIA DGX A100
NVIDIA DGX Station
Big Data Analysis
Deep Learning
ImageNet
NVIDIA
GPU
NVIDIA DGX A100
NVIDIA DGX Station
Date: 2021
Issue Date: 2021-09-02 18:17:33 (UTC+8)
Abstract: 本研究以ImageNet Large Scale Visual Recognition Challenge (ILSVRC)作為資料集,結合ResNet50深度學習模型,從企業角度為出發點,比較不同的GPU運算環境在AI 大數據分析流程中,探討硬體效能及性價比。本研究以政大電算中心私有雲NVIDIA DGX A100、NVIDIA DGX Station,以及Desktop Computer三種GPU運算環境進行效能測試,並且利用系統監控技術,取得各流程中硬體資源的使用情況,並分析總體效能。因此實驗結果顯示,NVIDIA DGX A100在訓練階段能夠減少模型訓練時間,而在上線階段Desktop Computer其性價比優於NVIDIA DGX A100和NVIDIA DGX Station。
This research adopts the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) as its data set and combines it with the ResNet50 deep learning model to compare, from an enterprise's perspective, the performance and cost-effectiveness of hardware across different GPU computing environments throughout the AI big data analysis process. Performance tests are conducted in three GPU computing environments: NVIDIA DGX A100 and NVIDIA DGX Station, hosted as two separate private clouds owned by the NCCU Computer Center, and a typical desktop computer. We use system monitoring to obtain the usage of hardware resources at each stage of the analysis process and to examine overall performance. The results show that the NVIDIA DGX A100 reduces the time needed for model training during the training phase, while the desktop computer is more cost-effective than the NVIDIA DGX A100 and NVIDIA DGX Station during the online phase.
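The price/performance comparison the abstract describes can be sketched as a small calculation: training throughput (images per second over one ILSVRC epoch) divided by hardware cost. This is a minimal illustrative sketch only; the epoch times and prices below are placeholder assumptions, not the thesis's measurements, and the `Platform` type is hypothetical.

```python
# Hedged sketch of a throughput-per-dollar comparison across GPU platforms.
# All epoch times and prices are illustrative placeholders, NOT measurements
# from the thesis.
from dataclasses import dataclass

IMAGES_PER_EPOCH = 1_281_167  # ILSVRC-2012 training-set size


@dataclass
class Platform:
    name: str
    epoch_seconds: float  # wall-clock time for one training epoch (assumed)
    price_usd: float      # acquisition cost of the platform (assumed)


def throughput(p: Platform) -> float:
    """Images processed per second during training."""
    return IMAGES_PER_EPOCH / p.epoch_seconds


def cost_effectiveness(p: Platform) -> float:
    """Throughput per dollar: higher means better price/performance."""
    return throughput(p) / p.price_usd


platforms = [
    Platform("NVIDIA DGX A100", epoch_seconds=600.0, price_usd=199_000.0),
    Platform("NVIDIA DGX Station", epoch_seconds=1_800.0, price_usd=49_000.0),
    Platform("Desktop Computer", epoch_seconds=5_400.0, price_usd=3_000.0),
]

for p in sorted(platforms, key=cost_effectiveness, reverse=True):
    print(f"{p.name}: {throughput(p):.0f} img/s, "
          f"{cost_effectiveness(p):.4f} img/s per USD")
```

With these placeholder numbers the ranking matches the abstract's qualitative conclusion: the fastest platform (DGX A100) is not the most cost-effective one, because its price grows faster than its throughput advantage.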
Reference: [1] NVIDIA DALI documentation. https://docs.nvidia.com/deeplearning/dali/user-guide/docs/. [Online; accessed 30 May 2021].
[2] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (2009), IEEE, pp. 248–255.
[3] He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (2016), pp. 770–778.
[4] Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25 (2012), 1097–1105.
[5] Lawrence, J., Malmsten, J., Rybka, A., et al. Comparing TensorFlow deep learning performance using CPUs, GPUs, local PCs and cloud.
[6] Lin, C.-Y., Pai, H.-Y., and Chou, J. Comparison between bare-metal, container and VM using TensorFlow image classification benchmarks for deep learning cloud platform. In CLOSER (2018), pp. 376–383.
[7] Mattson, P., Cheng, C., Coleman, C., et al. MLPerf training benchmark, 2020.
[8] Reddi, V. J., Cheng, C., Kanter, D., et al. MLPerf inference benchmark, 2020.
[9] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. Going deeper with convolutions, 2014.
[10] Wikipedia contributors. Huang's law — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Huang%27s_law&oldid=996423603, 2020. [Online; accessed 27 January 2021].
[11] Wikipedia contributors. ImageNet — Wikipedia, the free encyclopedia, 2021. [Online; accessed 26 May 2021].
[12] Wikipedia contributors. Kubernetes — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Kubernetes&oldid=1024839217, 2021. [Online; accessed 28 May 2021].
Description: Master's thesis
National Chengchi University
In-service Master's Program, Department of Computer Science
107971025
Source URI: http://thesis.lib.nccu.edu.tw/record/#G0107971025
Data Type: thesis
Appears in Collections: [In-service Master's Program, Department of Computer Science] Theses

Files in This Item:

File: 102501.pdf (2230 KB, Adobe PDF)


All items in 學術集成 (the NCCU Academic Hub) are protected by copyright, with all rights reserved.

