Title 多無人機複合視角救難目標物搜索定位研究
Research on search and rescue localization with multi-UAVs compound perspectives
Author 陳佩瑩 (Chen, Pei-Ying)
Advisor 劉吉軒 (Liu, Ji-Xuan)
Keywords Drone; Multi-UAV compound perspectives; Cooperative communication; Object detection; Disaster relief; Computer vision assistance; Path planning; Area search; Autonomous flight control; Target localization
Date 2023
Uploaded 1-Sep-2023 15:39:13 (UTC+8)
Abstract
Drones have been applied ever more widely in recent years, and future demand and applications are considerable. Most current applications rely on a single UAV; as the technology matures and missions grow more complex, multiple or swarming UAVs that divide the work will be needed to carry out specific tasks and to realize the technology's full potential. This study proposes a cooperative multi-UAV approach: the UAVs share task assignment and path planning to search for and detect a target, then combine their poses to estimate the target's position from a compound perspective. Using several camera viewpoints improves the accuracy and reliability of target localization, and multiple UAVs also raise search efficiency, so a specific target inside a search and rescue area can be found and reported in a shorter time, helping future rescue units locate people trapped by disasters of various kinds. The imagery streamed back by the UAVs both documents the current situation on the ground and feeds target detection, and the target's aerial position is computed from the views taken at each UAV's camera angle. The multi-view target localization algorithm proposed here effectively reduces localization error: the compound perspective assembled from the individual UAVs, together with pose adjustment, compensates for per-view error, and averaging the computed positions improves accuracy.

Drones excel at tasks that are impossible, hazardous, or tedious for people on the ground, or that require blanket surveillance. Accordingly, in this study several UAVs fly planned paths over the search area and perform target detection autonomously. When one UAV detects the target, it broadcasts a preliminary position estimate and summons the other UAVs in the search area; they converge and photograph the target from fixed points, forming a compound perspective. After each computes its localization, the averaged position estimate is returned to the ground control station. The UAVs thus link up autonomously, gather information in parallel, and complete target localization on the spot, reducing manual search effort, shortening search time, and improving the reliability and accuracy of localization. The compound perspective and autonomous cooperation of multiple UAVs successfully supplement the disaster-relief capacity that manual search or a single UAV alone cannot provide.
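To make the pipeline concrete, here are two minimal Python sketches of the steps the abstract describes. Neither comes from the thesis itself: the rectangular search area, the flat-ground assumption, the pinhole camera model, and every function name below are illustrative assumptions. The first sketch generates a lawnmower-style coverage path over one UAV's assigned sub-area, a common pattern for area search:

```python
def boustrophedon_path(x_min, x_max, y_min, y_max, spacing):
    """Lawnmower-style waypoints covering a rectangular sub-area (local ENU, metres)."""
    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        # Sweep one row, alternating direction so consecutive rows stay connected.
        row = [(x_min, y), (x_max, y)]
        waypoints.extend(row if left_to_right else row[::-1])
        left_to_right = not left_to_right
        y += spacing
    return waypoints
```

The second sketch shows one plausible reading of compound-perspective localization: each UAV back-projects the centre of its detected bounding box into a viewing ray, intersects that ray with the ground plane, and the per-UAV estimates are then averaged, matching the abstract's use of a computed positioning average:

```python
import numpy as np

def ground_intersection(cam_pos, R_wc, pixel, K):
    """Estimate the target position from a single UAV view.

    cam_pos: (3,) camera position in local ENU metres (z = height above ground)
    R_wc:    (3,3) rotation taking camera-frame vectors into the ENU frame
    pixel:   (u, v) centre of the detected bounding box
    K:       (3,3) pinhole camera intrinsic matrix
    Assumes flat ground at z = 0 (an illustrative simplification).
    """
    # Back-project the pixel into a ray in the camera frame, then rotate to ENU.
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_enu = R_wc @ ray_cam
    # Intersect cam_pos + t * ray_enu with the ground plane z = 0.
    t = -cam_pos[2] / ray_enu[2]
    return cam_pos + t * ray_enu

def compound_estimate(views):
    """Fuse per-UAV estimates by averaging; views is a list of
    (cam_pos, R_wc, pixel, K) tuples, one per UAV."""
    estimates = [ground_intersection(*v) for v in views]
    return np.mean(np.stack(estimates), axis=0)
```

If the per-view errors are roughly independent, averaging N estimates shrinks the random error component by about a factor of sqrt(N), which is one intuition for why a compound perspective can outperform any single viewpoint.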
References
[1] Yao, P., Xie, Z., & Ren, P. (2017). Optimal UAV route planning for coverage search of stationary target in river. IEEE Transactions on Control Systems Technology, 27(2), 822-829.
[2] Lee, P. Y., & Lee, J. B. A simple guide to lawn mowing. Recreational Mathematics Magazine, 8(14), 73-90.
[3] No author. Retrieved April 20, 2023, from the Coast Guard legal database (海巡法規查詢系統): 「海岸巡防機關執行海上救難作業程序準則」 (Operational procedure guidelines for maritime rescue by coast guard authorities).
[4] No author. Retrieved April 20, 2023, from the Coast Guard legal database (海巡法規查詢系統): 「海岸巡防機關執行海上救難作業程序準則-附件」 (annex to the above guidelines).
[5] Galceran, E., & Carreras, M. (2013). A survey on coverage path planning for robotics. Robotics and Autonomous Systems, 61(12), 1258-1276.
[6] Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016). SSD: Single shot multibox detector. In Computer Vision - ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part I (pp. 21-37). Springer International Publishing.
[7] Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 580-587).
[8] Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems, 28.
[9] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 779-788).
[10] Redmon, J., & Farhadi, A. (2017). YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 7263-7271).
[11] Redmon, J., & Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767.
[12] Bochkovskiy, A., Wang, C. Y., & Liao, H. Y. M. (2020). YOLOv4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.
[13] Sambolek, S., & Ivasic-Kos, M. (2021). Automatic person detection in search and rescue operations using deep CNN detectors. IEEE Access, 9, 37905-37922.
[14] Cai, Z., & Vasconcelos, N. (2018). Cascade R-CNN: Delving into high quality object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 6154-6162).
[15] Lin, T. Y., Goyal, P., Girshick, R., He, K., & Dollár, P. (2017). Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2980-2988).
[16] Zhan, W., Sun, C., Wang, M., She, J., Zhang, Y., Zhang, Z., & Sun, Y. (2022). An improved YOLOv5 real-time detection method for small objects captured by UAV. Soft Computing, 26, 361-373.
[17] Sruthi, M. S., Poovathingal, M. J., Nandana, V. N., Lakshmi, S., Samshad, M., & Sudeesh, V. S. (2021, October). YOLOv5 based open-source UAV for human detection during search and rescue (SAR). In 2021 International Conference on Advances in Computing and Communications (ICACC) (pp. 1-6). IEEE.
[18] Li, X., He, B., Ding, K., Guo, W., Huang, B., & Wu, L. (2022). Wide-area and real-time object search system of UAV. Remote Sensing, 14(5), 1234.
[19] Drake, S. P. (2002). Converting GPS coordinates [phi, lambda, h] to navigation coordinates (ENU).
[20] Xiang, H., & Tian, L. (2011). Method for automatic georeferencing aerial remote sensing (RS) images from an unmanned aerial vehicle (UAV) platform. Biosystems Engineering, 108(2), 104-113.
[21] 蔡宇融 (2017). 多無人機協作以協助搜尋與救援行動 (Multi-UAV cooperation to assist search and rescue operations). Master's thesis, Department of Computer Science and Information Engineering, National Taiwan Normal University.
[22] 劉又誠 (2022). 多無人機協作拍攝之路徑規劃系統 (A path planning system for cooperative multi-UAV filming). Master's thesis, Department of Computer Science, National Chengchi University.
[23] Lin, B., Wu, L., & Niu, Y. (2022). End-to-end vision-based cooperative target geo-localization for multiple micro UAVs. Journal of Intelligent & Robotic Systems, 106(1), 13.
[24] Xu, C., Yin, C., Huang, D., Han, W., & Wang, D. (2021). 3D target localization based on multi-unmanned aerial vehicle cooperation. Measurement and Control, 54(5-6), 895-907.
[25] 洪瑋胤 (2018). 基於人工智慧之無人飛行機目標量測 (AI-based target measurement for unmanned aerial vehicles). Master's thesis, Graduate Institute of Automation and Control, National Taiwan University of Science and Technology.
[26] Bhat, A., Aoki, S., & Rajkumar, R. (2018). Tools and methodologies for autonomous driving systems. Proceedings of the IEEE, 106(9), 1700-1716.
[27] Shah, S., Dey, D., Lovett, C., & Kapoor, A. (2018). AirSim: High-fidelity visual and physical simulation for autonomous vehicles. In Field and Service Robotics: Results of the 11th International Conference (pp. 621-635). Springer International Publishing.
[28] Galceran, E., & Carreras, M. (2013). A survey on coverage path planning for robotics. Robotics and Autonomous Systems, 61(12), 1258-1276.
[29] Ahmadzadeh, A., Keller, J., Pappas, G., Jadbabaie, A., & Kumar, V. (2008). An optimization-based approach to time-critical cooperative surveillance and coverage with UAVs. In Experimental Robotics: The 10th International Symposium on Experimental Robotics (pp. 491-500). Springer Berlin Heidelberg.
[30] Choset, H. (2001). Coverage for robotics - A survey of recent results. Annals of Mathematics and Artificial Intelligence, 31, 113-126.
[31] Shafiee, M. J., Chywl, B., Li, F., & Wong, A. (2017). Fast YOLO: A fast you only look once system for real-time embedded object detection in video. arXiv preprint arXiv:1709.05943.
[32] Jocher, G., Chaurasia, A., Stoken, A., Borovec, J., Kwon, Y., Michael, K., ... & Jain, M. (2022). ultralytics/yolov5: v7.0 - YOLOv5 SOTA realtime instance segmentation. Zenodo.
[33] Xu, X., Zhang, X., & Zhang, T. (2022). Lite-YOLOv5: A lightweight deep learning detector for on-board ship detection in large-scene Sentinel-1 SAR images. Remote Sensing, 14(4), 1018.
[34] Drake, S. P. (2002). Converting GPS coordinates [phi, lambda, h] to navigation coordinates (ENU).
[35] Smrcka, D., Baca, T., Nascimento, T., & Saska, M. (2021, June). Admittance force-based UAV-wall stabilization and press exertion for documentation and inspection of historical buildings. In 2021 International Conference on Unmanned Aircraft Systems (ICUAS) (pp. 552-559). IEEE.
Description Master's thesis
National Chengchi University
In-service Master's Program in Computer Science
108971012
Source http://thesis.lib.nccu.edu.tw/record/#G0108971012
Type thesis
Identifier G0108971012
URI http://nccur.lib.nccu.edu.tw/handle/140.119/147094
Table of Contents
Chapter 1 Introduction
1.1 Research Background
1.2 Research Motivation and Objectives
1.3 Thesis Organization
1.4 Research Results and Contributions
Chapter 2 Literature Review
2.1 Search Area Allocation and Path Planning
2.2 Object Detection
2.3 Geolocation
2.4 Multi-UAV Cooperative Localization
Chapter 3 Multi-UAV Cooperative Target Search and Localization Model
3.1 Research Architecture
3.2 Autonomous Search Area Allocation and Path Planning Module
3.3 Target Detection Model
3.4 Target Localization and Multi-UAV Cooperation Module
Chapter 4 Experimental Design and Results Analysis
4.1 Experimental Design
4.2 Evaluation Metrics
4.3 Experimental Results and Analysis
4.3.1 Varying Flight Altitude
4.3.2 Varying Camera Angle
4.3.3 Varying the Target's Actual Position
4.4 Summary
Chapter 5 Conclusions and Future Work
5.1 Conclusions
5.2 Future Work
References
Format application/pdf, 3104069 bytes