dc.contributor.advisor | 廖文宏 | zh_TW |
dc.contributor.advisor | Liao, Wen-Hung | en_US |
dc.contributor.author (Authors) | 吳信賢 | zh_TW |
dc.contributor.author (Authors) | Wu, Shin-Shian | en_US |
dc.creator (作者) | 吳信賢 | zh_TW |
dc.creator (作者) | Wu, Shin-Shian | en_US |
dc.date (日期) | 2018 | en_US |
dc.date.accessioned | 6-Aug-2018 18:23:57 (UTC+8) | - |
dc.date.available | 6-Aug-2018 18:23:57 (UTC+8) | - |
dc.date.issued (上傳時間) | 6-Aug-2018 18:23:57 (UTC+8) | - |
dc.identifier (Other Identifiers) | G0104971022 | en_US |
dc.identifier.uri (URI) | http://nccur.lib.nccu.edu.tw/handle/140.119/119236 | - |
dc.description (描述) | 碩士 | zh_TW |
dc.description (描述) | 國立政治大學 | zh_TW |
dc.description (描述) | 資訊科學系碩士在職專班 | zh_TW |
dc.description (描述) | 104971022 | zh_TW |
dc.description.abstract (摘要) | 本論文試圖探究如何在少量資料時利用深度學習的方法以衛照影像來進行軍民艦船的判斷,並從圖中偵測出所判別的船艦的位置。本研究使用的方法除了透過深度學習以及遷移學習(Transfer Learning) 的概念來訓練模組外,鑑於部分分類資料較少也必須用資料增強(Data Augmentation)的方式生成部分資料並加入訓練集,藉以提高整體船體偵測的準確率。經過不同模型測試與參數調校後,最佳實驗結果得出軍艦AP為0.816、民船AP為0.908,整體mAP為0.862,期許這樣的成果可進一步發展為可輔助軍事判圖人員進行相關作業之系統,提升整體作業效率。未來以此為基礎希望可以發展更細部的軍事設施判斷模組,進而可以投入其餘判圖種類之應用,完善其判圖系統。 | zh_TW |
dc.description.abstract (摘要) | The objective of this thesis is to develop methods for detecting and recognizing civilian boats and warships in satellite images using deep learning approaches when only a limited amount of data is available. The concept of transfer learning is employed to take advantage of existing models. Owing to the restricted availability of data in certain categories, this thesis also uses data augmentation techniques to generate additional samples for the training sets, improving the overall accuracy of ship detection. After extensive model selection and parameter fine-tuning, the average precision (AP) for warships and civilian boats reached 0.816 and 0.908 respectively, with an overall mAP of 0.862. The developed framework is ready to be incorporated into a semi-automatic system that assists military image analysts and improves the efficiency of image detection and interpretation. This thesis is expected to lay the groundwork for more precise military facility detection models, thereby improving the efficacy of future military image interpretation systems. | en_US
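The abstract combines two techniques: transfer learning, which reuses a network pre-trained on a large dataset such as ImageNet, and data augmentation, which synthesizes additional training samples for the under-represented class. The sketch below shows, in minimal form, how the two ideas fit together in Keras (one of the tools listed in the references). It is an illustrative classification-style example only, not the thesis's actual detection pipeline; the class count, image size, layer sizes and dataset variables (train_ds, val_ds) are assumptions.

# Minimal sketch: transfer learning + data augmentation with Keras (illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 2  # assumed: warship vs. civilian boat (patch-level classification sketch)

# Augmentation: flips, rotations and zooms are common for overhead imagery
# because object orientation in satellite views is arbitrary.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.25),
    layers.RandomZoom(0.1),
])

# Transfer learning: reuse an ImageNet-pre-trained backbone and train only a small head.
# (In practice, inputs should also go through resnet50.preprocess_input.)
backbone = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = False  # freeze pre-trained weights for the first training stage

model = models.Sequential([
    augment,
    backbone,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # train_ds/val_ds are assumed tf.data pipelines

Freezing the backbone first and fine-tuning only the head is the usual way to avoid overfitting when, as here, the labeled dataset is small.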
dc.description.tableofcontents | 第一章 緒論 1
1.1 研究動機 1
1.2 論文架構 3
第二章 相關研究 4
2.1 文獻探討 4
第三章 研究方法 17
3.1 基本構想 17
3.2 前期研究 18
3.2.1 港口資料、衛星影像資料抓取 18
3.2.2 研究分析處理架構、圖片前處理 22
3.2.3 遷移學習的構思 26
3.3 研究架構設計 29
3.3.1 問題陳述 29
3.3.2 研究架構 30
3.4 目標設定 31
第四章 研究過程與結果分析 32
4.1 研究過程 32
4.1.1 演算法選擇 32
4.1.2 測試評估階段 35
4.1.3 資料增強階段 38
4.1.4 模組優化階段 40
4.2 可用性分析 41
4.2.1 最後模組辨識率以及成果 41
4.2.2 各模組辨識率比較 45
4.2.3 自動化判圖系統建置 46
4.3 成果分析以及探討 47
第五章 結論與未來研究方向 54
5.1 結論 54
5.2 未來研究方向 54
參考文獻 57 | zh_TW
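The mAP of 0.862 reported in the abstract is the mean of the two per-class average precisions, 0.816 (warship) and 0.908 (civilian boat). As a reference for how such numbers are obtained, the following is a short sketch of a PASCAL VOC-style AP computation, assuming each detection has already been matched against ground truth (e.g. with an IoU threshold of 0.5); the function and variable names are illustrative, not taken from the thesis.

# Sketch of average precision (AP) and mAP; inputs are assumed, not thesis data.
import numpy as np

def average_precision(scores, is_true_positive, num_ground_truth):
    """All-point-interpolated AP: area under the precision-recall curve."""
    order = np.argsort(-np.asarray(scores))                 # rank detections by confidence
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    recall = cum_tp / num_ground_truth
    # make precision monotonically non-increasing, then integrate over recall
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))

# mAP is the mean of the per-class APs; with the reported values:
print(np.mean([0.816, 0.908]))   # -> 0.862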
dc.format.extent | 3287995 bytes | - |
dc.format.mimetype | application/pdf | - |
dc.source.uri (資料來源) | http://thesis.lib.nccu.edu.tw/record/#G0104971022 | en_US |
dc.subject (關鍵詞) | 深度學習 | zh_TW |
dc.subject (關鍵詞) | 物體偵測 | zh_TW |
dc.subject (關鍵詞) | 遷移學習 | zh_TW |
dc.subject (關鍵詞) | 資料增強 | zh_TW |
dc.subject (關鍵詞) | 衛照圖資分析 | zh_TW |
dc.subject (關鍵詞) | Deep learning | en_US |
dc.subject (關鍵詞) | Object detection | en_US |
dc.subject (關鍵詞) | Transfer learning | en_US |
dc.subject (關鍵詞) | Data augmentation | en_US |
dc.subject (關鍵詞) | Satellite image | en_US |
dc.title (題名) | 基於深度學習框架之衛照圖船艦識別 | zh_TW |
dc.title (題名) | Detection of Civilian Boat and War Ship in Satellite Images with Deep Learning Framework | en_US |
dc.type (資料類型) | thesis | en_US |
dc.relation.reference (參考文獻) | [1] ImageNet Large Scale Visual Recognition Challenge, from http://www.image-net.org/challenges/LSVRC/
[2] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "ImageNet classification with deep convolutional neural networks." Advances in Neural Information Processing Systems. 2012.
[3] The PASCAL Visual Object Classes, from http://host.robots.ox.ac.uk/pascal/VOC/
[4] ImageNet dataset, from http://image-net.org/
[5] COCO dataset, from http://cocodataset.org/#home
[6] Ren, Shaoqing, et al. "Faster R-CNN: Towards real-time object detection with region proposal networks." Advances in Neural Information Processing Systems. 2015.
[7] He, Kaiming, et al. "Mask R-CNN." Computer Vision (ICCV), 2017 IEEE International Conference on. IEEE, 2017.
[8] Liu, Wei, et al. "SSD: Single shot multibox detector." European Conference on Computer Vision. Springer, Cham, 2016.
[9] Redmon, Joseph, et al. "You only look once: Unified, real-time object detection." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
[10] Lin, Tsung-Yi, et al. "Focal loss for dense object detection." arXiv preprint arXiv:1708.02002 (2017).
[11] Girshick, Ross. "Fast R-CNN." arXiv preprint arXiv:1504.08083 (2015).
[12] Redmon, Joseph, and Ali Farhadi. "YOLO9000: Better, faster, stronger." arXiv preprint (2017).
[13] YOLO v2, from https://www.youtube.com/watch?time_continue=3&v=VOC3huqHrss
[14] Ma, Zhong, et al. "Satellite imagery classification based on deep convolution network." Int. J. Comput. Autom. Control Inf. Eng 10 (2016): 1055-1059.
[15] Albert, Adrian, Jasleen Kaur, and Marta C. Gonzalez. "Using convolutional networks and satellite imagery to identify patterns in urban environments at a large scale." Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2017.
[16] Van Etten, Adam. "You Only Look Twice: Rapid multi-scale object detection in satellite imagery." arXiv preprint arXiv:1805.09512 (2018).
[17] You Only Look Twice — Multi-Scale Object Detection in Satellite Imagery With Convolutional Neural Networks, from https://medium.com/the-downlinq/you-only-look-twice-multi-scale-object-detection-in-satellite-imagery-with-convolutional-neural-38dad1cf7571
[18] Google Earth, from https://earth.google.com/web/
[19] ArcGIS Earth, from http://www.esri.com/software/arcgis-earth
[20] ESRI, from https://www.esri.com/en-us/home
[21] labelImg, from https://github.com/tzutalin/labelImg
[22] TensorFlow, from https://www.tensorflow.org/
[23] Keras, from https://github.com/keras-team/keras
[24] Pan, Sinno Jialin, and Qiang Yang. "A survey on transfer learning." IEEE Transactions on Knowledge and Data Engineering 22.10 (2010): 1345-1359.
[25] Huang, Jonathan, et al. "Speed/accuracy trade-offs for modern convolutional object detectors." IEEE CVPR. 2017.
[26] Chen, Liang-Chieh, et al. "Rethinking atrous convolution for semantic image segmentation." arXiv preprint arXiv:1706.05587 (2017).
[27] Goodfellow, Ian, et al. "Generative adversarial nets." Advances in Neural Information Processing Systems. 2014. | zh_TW
dc.identifier.doi (DOI) | 10.6814/THE.NCCU.EMCS.004.2018.B02 | - |