Title: 基於深度學習之印刷電路板缺陷偵測 (Detection of PCB Defects Using Deep Learning Approach)
Author: Chang, Chia-Wei (張家瑋)
Advisor: Liao, Wen-Hung (廖文宏)
Keywords: Deep learning; Object detection; PCB (printed circuit board); Defect detection; Manual visual inspection; DIP (dual in-line package)
Date: 2021
Uploaded: 2-Sep-2021 18:17:04 (UTC+8)
Abstract:
The objective of this thesis is to explore the feasibility of replacing the manual visual inspection used on the factory production line with artificial intelligence (AI). The research method is based on a deep-learning object detection framework. Six printed circuit board (PCB) defects common in the DIP stage of the production process were identified as the main targets of the detection task. Because images of PCB defects are difficult to collect and label, this research experiments with different proportions and combinations of data to arrive at a robust model, which is then implemented and integrated into a DIP AI system consisting of three components: an Inference Module, a Data Server, and a Training System. The experimental results demonstrate that this DIP AI system can successfully detect PCB defects and help operators move quickly to the next repair stage, thereby improving the factory production process.

References:
[1] J. Redmon and A. Farhadi, "YOLOv3: An Incremental Improvement", 2018.
[2] L. Jiao, F. Zhang, F. Liu, S. Yang, L. Li, Z. Feng, and R. Qu, "A Survey of Deep Learning-based Object Detection", 2019.
[3] J. R. R. Uijlings, K. E. A. van de Sande, T. Gevers, and A. W. M. Smeulders, "Selective Search for Object Recognition", 2012.
[4] C. L. Zitnick and P. Dollár, "Edge Boxes: Locating Object Proposals from Edges", 2014.
[5] P. Hu and D. Ramanan, "Finding Tiny Faces", 2017.
[6] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "SSD: Single Shot MultiBox Detector", 2016.
[7] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection", 2015.
[8] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal Loss for Dense Object Detection", 2017.
[9] O. Simecek, TRI Marketing Dept., "TRI White Paper: Introduction to Automated Optical Inspection (AOI) Technology, Ver. 2.0.0".
[10] 賴威豪 and 曾紹崟, "Defect Classification of Printed Circuit Board Based on Deep Convolutional Neural Networks", 2017.
[11] P. Wei, C. Liu, M. Liu, Y. Gao, and H. Liu, "CNN-based Reference Comparison Method for Classifying Bare PCB Defects", ACAIT, 2018.
[12] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "YOLOv4: Optimal Speed and Accuracy of Object Detection", 2020.

Degree: Master's
Institution: 國立政治大學 (National Chengchi University)
Department: In-service Master's Program, Department of Computer Science (資訊科學系碩士在職專班)
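The abstract's experiments with different proportions and combinations of data amount to re-splitting a fixed image pool at several train/validation ratios and comparing the resulting models. The following is a generic, hypothetical sketch of such a deterministic split (not code from the thesis; the file names and ratios are invented for illustration):

```python
import random

def split_dataset(items, train_ratio, seed=0):
    """Shuffle deterministically and split into train/validation subsets."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_ratio)
    return shuffled[:n_train], shuffled[n_train:]

# Compare several train/validation proportions over the same image pool
pool = [f"pcb_{i:04d}.jpg" for i in range(100)]
for ratio in (0.6, 0.7, 0.8):
    train, val = split_dataset(pool, ratio)
    print(ratio, len(train), len(val))
```

Fixing the seed makes each proportion reproducible, so differences in detection results can be attributed to the split ratio rather than to which images happened to land in the training set.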
Student ID: 107971013
Source: http://thesis.lib.nccu.edu.tw/record/#G0107971013
URI: http://nccur.lib.nccu.edu.tw/handle/140.119/137164
DOI: 10.6814/NCCU202101352
Type: thesis
Format: application/pdf, 3932813 bytes
Table of Contents:
Chapter 1 Introduction
 1.1 Research Motivation
 1.2 Thesis Organization
Chapter 2 Related Work
 2.1 Literature Review
 2.2 Problem Discussion
 2.3 Research on Industrial Inspection
Chapter 3 Research Method
 3.1 Basic Concept
 3.2 Preliminary Study
  3.2.1 Data Survey
  3.2.2 Data Collection
  3.2.3 Pre-trained Models
 3.3 Research Architecture Design
  3.3.1 First-Stage Architecture
  3.3.2 Second-Stage Architecture
 3.4 Goal Setting
Chapter 4 Research Process and Analysis of Results
 4.1 Research Process
  4.1.1 Training Dataset Collection Stage
  4.1.2 Object Detection Algorithm Training Stage
  4.1.3 Object Detection Algorithm Testing Stage
 4.2 Analysis Items
  4.2.1 Dataset Proportion Analysis
  4.2.2 Dataset Combination Analysis
  4.2.3 Data Augmentation Analysis
 4.3 Usability Analysis
Chapter 5 Applications of the Research Results
 5.1 Applications Based on the Research Results
 5.2 Application Examples
Chapter 6 Conclusion and Future Research Directions
 6.1 Conclusion
 6.2 Future Research Directions
References
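Section 4.2.3 of the outline analyzes data augmentation, which for object detection data must transform the bounding-box labels together with the image: flipping a PCB photo without mirroring its defect boxes would corrupt the training labels. A minimal generic sketch of this idea for a horizontal flip (not code from the thesis):

```python
def hflip_boxes(boxes, img_w):
    """Mirror (x_min, y_min, x_max, y_max) pixel boxes for a horizontally flipped image of width img_w."""
    return [(img_w - x2, y1, img_w - x1, y2) for (x1, y1, x2, y2) in boxes]

# A defect box near the left edge moves to the right edge after flipping
print(hflip_boxes([(10, 5, 30, 25)], img_w=100))  # → [(70, 5, 90, 25)]
```

The y-coordinates are unchanged because the flip is horizontal; the x-coordinates swap roles (the old right edge becomes the new left edge), which keeps x_min < x_max in the output.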
