Academic Output - Theses

Title Autonomous Multi-UAVs Collaborative Inspection based on Visual Navigation (基於視覺導航之多無人機自主巡檢任務協作)
Author Hsieh, Hung-Wei (謝鴻偉)
Advisor Liu, Jyi-Shane (劉吉軒)
Keywords UAV
multi-UAV system
visual navigation
formation control
collaborative visual SLAM
semantic segmentation
Date 2023
Uploaded 1-Sep-2023 15:23:19 (UTC+8)
Abstract With the maturation and falling cost of Unmanned Aerial Vehicle (UAV) technology, UAVs are now applied in many fields, such as goods delivery, precision agriculture, and facility inspection. Compared with a single UAV, multiple UAVs can further improve task efficiency and carry out more complex missions. When a single UAV inspects a ground field and encounters wide roads, rivers, or bridges, it must fly at a higher altitude to cover the entire target; however, this increases the distance to the target and reduces the target's resolution, making further analysis of the collected images more difficult. Flying at a lower altitude yields clearer target images but cannot cover the entire target, which reduces inspection efficiency. Employing multiple UAVs for inspection allows flight at lower altitudes while expanding the field of view, retaining both inspection efficiency and clear target images. This study proposes a multi-UAV autonomous inspection method based on visual navigation. It uses image information captured by the UAVs' cameras to localize the vehicles and determine the position of the inspection target, allowing multiple UAVs to autonomously follow the target and fly in formation so as to cover different side views simultaneously. The system architecture comprises a collaborative visual localization module, a line detection module, and a flight control module: a collaborative visual SLAM method estimates the positions of the UAVs, semantic segmentation detects the inspection target and generates follow lines, and this information is used to produce flight control commands for line following and formation flying. The target inspection field in this study is a river. Experiments were conducted in both simulated and real river environments to examine the differences between the two and to validate the feasibility of the proposed system. The stability and performance of line following and formation flying under various scenarios were analyzed, and images from the inspection process are presented to demonstrate the practical value of this research.
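To make the pipeline described in the abstract concrete, here is a minimal sketch, in Python (the language of the Olympe controller library cited in [50]), of how a follow line might be extracted from a river segmentation mask and turned into line-following and formation commands. This is an illustration under stated assumptions, not the thesis's actual implementation: the function names, control gains, and the fixed body-frame offset are hypothetical.

# Sketch only: assumes `mask` is a binary river mask (H x W) produced by a
# U-Net-style semantic segmentation model; gains and offsets are illustrative.
import numpy as np

def fit_follow_line(mask):
    """Fit a follow line x = a*y + b through the river's per-row centers."""
    ys, xs = np.nonzero(mask)              # pixel coordinates of river pixels
    if ys.size == 0:
        raise ValueError("no river pixels detected")
    rows = np.unique(ys)
    centers = np.array([xs[ys == r].mean() for r in rows])  # mean column per row
    a, b = np.polyfit(rows, centers, deg=1)  # least-squares slope and intercept
    return a, b

def line_following_command(mask, k_lat=0.004, k_yaw=0.8):
    """Map follow-line geometry to (lateral velocity, yaw rate) commands.

    Lateral error: the line's offset from the image center at the bottom row.
    Heading error: the line's slope (0 when the river runs straight ahead).
    """
    h, w = mask.shape
    a, b = fit_follow_line(mask)
    lateral_error = (a * (h - 1) + b) - w / 2.0  # pixels; >0 means river lies right
    heading_error = np.arctan(a)                 # radians
    return k_lat * lateral_error, -k_yaw * heading_error

def follower_setpoint(leader_pos, leader_yaw, offset_body=(-3.0, 4.0)):
    """Leader-follower formation: hold a fixed body-frame offset (back, side)
    from the leader, rotated into the world frame estimated by the SLAM map."""
    dx, dy = offset_body
    c, s = np.cos(leader_yaw), np.sin(leader_yaw)
    return np.asarray(leader_pos) + np.array([c * dx - s * dy, s * dx + c * dy])

The split mirrors the record's three-module architecture: perception reduces each frame to a compact line parameterization, and the controllers consume only that line plus the shared pose estimates, so each UAV can run the same code with a different formation offset.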
References [1] A. N. Wilson, A. Kumar, A. Jha, and L. R. Cenkeramaddi, “Embedded sensors, communication technologies, computing platforms and machine learning for UAVs: A review,” IEEE Sensors Journal, vol. 22, no. 3, pp. 1807–1826, 2022.
[2] H. Shakhatreh, A. H. Sawalmeh, A. Al-Fuqaha, Z. Dou, E. Almaita, I. Khalil, N. S. Othman, A. Khreishah, and M. Guizani, “Unmanned aerial vehicles (UAVs): A survey on civil applications and key research challenges,” IEEE Access, vol. 7, pp. 48572–48634, 2019.
[3] S. Jordan, J. Moore, S. E. Hovet, J. Box, J. Perry, K. Kirsche, D. Lewis, and Z. T. H. Tse, “State-of-the-art technologies for UAV inspections,” IET Radar, Sonar & Navigation, vol. 12, pp. 151–164, 2018.
[4] K. Máthé and L. Buşoniu, “Vision and control for UAVs: A survey of general methods and of inexpensive platforms for infrastructure inspection,” Sensors, vol. 15, no. 7, pp. 14887–14916, 2015.
[5] G. Skorobogatov, C. Barrado, and E. Salamí, “Multiple UAV systems: A survey,” Unmanned Systems, vol. 8, no. 2, pp. 149–169, 2020.
[6] J. Han, Y. Xu, L. Di, and Y. Chen, “Low-cost multi-UAV technologies for contour mapping of nuclear radiation field,” Journal of Intelligent & Robotic Systems, vol. 70, pp. 401–410, 2013.
[7] J. Scherer, S. Yahyanejad, S. Hayat, E. Yanmaz, T. Andre, A. Khan, V. Vukadinovic, C. Bettstetter, H. Hellwagner, and B. Rinner, “An autonomous multi-UAV system for search and rescue,” in Proceedings of the First Workshop on Micro Aerial Vehicle Networks, Systems, and Applications for Civilian Use, pp. 33–38, 2015.
[8] P. Petráček, V. Krátký, and M. Saska, “Dronument: System for reliable deployment of micro aerial vehicles in dark areas of large historical monuments,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 2078–2085, 2020.
[9] A. Alcántara, J. Capitán, A. Torres-González, R. Cunha, and A. Ollero, “Autonomous execution of cinematographic shots with multiple drones,” IEEE Access, vol. 8, pp. 201300–201316, 2020.
[10] M. Mozaffari, W. Saad, M. Bennis, and M. Debbah, “Efficient deployment of multiple unmanned aerial vehicles for optimal wireless coverage,” IEEE Communications Letters, vol. 20, no. 8, pp. 1647–1650, 2016.
[11] J. Huang, G. Tian, J. Zhang, and Y. Chen, “On unmanned aerial vehicles light show systems: Algorithms, software and hardware,” Applied Sciences, vol. 11, no. 16, p. 7687, 2021.
[12] C. Deng, S. Wang, Z. Huang, Z. Tan, and J. Liu, “Unmanned aerial vehicles for power line inspection: A cooperative way in platforms and communications,” J. Commun., vol. 9, no. 9, pp. 687–692, 2014.
[13] V. T. Hoang, M. D. Phung, T. H. Dinh, and Q. P. Ha, “System architecture for real-time surface inspection using multiple UAVs,” IEEE Systems Journal, vol. 14, no. 2, pp. 2925–2936, 2020.
[14] R. Shakeri, M. A. Al-Garadi, A. Badawy, A. Mohamed, T. Khattab, A. K. Al-Ali, K. A. Harras, and M. Guizani, “Design challenges of multi-UAV systems in cyber-physical applications: A comprehensive survey and future directions,” IEEE Communications Surveys & Tutorials, vol. 21, no. 4, pp. 3340–3385, 2019.
[15] Y. Lyu, Q. Pan, Y. Zhang, C. Zhao, H. Zhu, T. Tang, and L. Liu, “Simultaneously multi-UAV mapping and control with visual servoing,” in 2015 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 125–131, 2015.
[16] X. Meng, W. Wang, and B. Leong, “SkyStitch: A cooperative multi-UAV-based real-time video surveillance system with stitching,” in Proceedings of the 23rd ACM International Conference on Multimedia, pp. 261–270, 2015.
[17] M. Saska, J. Chudoba, L. Přeučil, J. Thomas, G. Loianno, A. Třešňák, V. Vonásek, and V. Kumar, “Autonomous deployment of swarms of micro-aerial vehicles in cooperative surveillance,” in 2014 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 584–595, 2014.
[18] A. Couturier and M. A. Akhloufi, “A review on absolute visual localization for UAV,” Robotics and Autonomous Systems, vol. 135, p. 103666, 2021.
[19] F. Zimmermann, C. Eling, L. Klingbeil, and H. Kuhlmann, “Precise positioning of UAVs - dealing with challenging RTK-GPS measurement conditions during automated UAV flights,” ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences, vol. 4, 2017.
[20] Y. Lu, Z. Xue, G.-S. Xia, and L. Zhang, “A survey on vision-based UAV navigation,” Geo-spatial Information Science, vol. 21, no. 1, pp. 21–32, 2018.
[21] D. Zou, P. Tan, and W. Yu, “Collaborative visual SLAM for multiple agents: A brief survey,” Virtual Reality & Intelligent Hardware, vol. 1, no. 5, pp. 461–482, 2019.
[22] T. Taketomi, H. Uchiyama, and S. Ikeda, “Visual SLAM algorithms: A survey from 2010 to 2016,” IPSJ Transactions on Computer Vision and Applications, vol. 9, no. 1, pp. 1–11, 2017.
[23] A. Macario Barros, M. Michel, Y. Moline, G. Corre, and F. Carrel, “A comprehensive survey of visual SLAM algorithms,” Robotics, vol. 11, no. 1, p. 24, 2022.
[24] J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-scale direct monocular SLAM,” in Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part II, pp. 834–849, Springer, 2014.
[25] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, “ORB-SLAM: A versatile and accurate monocular SLAM system,” IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015.
[26] D. Zou and P. Tan, “CoSLAM: Collaborative visual SLAM in dynamic environments,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 2, pp. 354–366, 2013.
[27] L. Riazuelo, J. Civera, and J. Montiel, “C2TAM: A cloud framework for cooperative tracking and mapping,” Robotics and Autonomous Systems, vol. 62, no. 4, pp. 401–413, 2014.
[28] G. Klein and D. Murray, “Parallel tracking and mapping for small AR workspaces,” in 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 225–234, 2007.
[29] P. Schmuck and M. Chli, “CCM-SLAM: Robust and efficient centralized collaborative monocular simultaneous localization and mapping for robotic teams,” Journal of Field Robotics, vol. 36, no. 4, pp. 763–781, 2019.
[30] R. Mur-Artal and J. D. Tardós, “ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras,” IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.
[31] N. Cao and A. F. Lynch, “Inner-outer loop control with constraints for rotary-wing UAVs,” in 2015 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 294–302, 2015.
[32] W. Giernacki, P. Kozierski, J. Michalski, M. Retinger, R. Madonski, and P. Campoy, “Bebop 2 quadrotor as a platform for research and education in robotics and control engineering,” in 2020 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 1733–1741, 2020.
[33] H. Lim, J. Park, D. Lee, and H. Kim, “Build your own quadrotor: Open-source projects on unmanned aerial vehicles,” IEEE Robotics & Automation Magazine, vol. 19, no. 3, pp. 33–45, 2012.
[34] H. T. Nguyen, T. V. Quyen, C. V. Nguyen, A. M. Le, H. T. Tran, and M. T. Nguyen, “Control algorithms for UAVs: A comprehensive survey,” EAI Endorsed Transactions on Industrial Networks and Intelligent Systems, vol. 7, no. 23, pp. e5–e5, 2020.
[35] H. T. Do, H. T. Hua, M. T. Nguyen, C. V. Nguyen, H. T. Nguyen, H. T. Nguyen, and N. T. Nguyen, “Formation control algorithms for multiple UAVs: A comprehensive survey,” EAI Endorsed Transactions on Industrial Networks and Intelligent Systems, vol. 8, no. 27, pp. e3–e3, 2021.
[36] K. A. Ghamry and Y. Zhang, “Formation control of multiple quadrotors based on leader-follower method,” in 2015 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 1037–1042, 2015.
[37] S. Kim and Y. Kim, “Three dimensional optimum controller for multiple uav formation flight using behavior-based decentralized approach,” in 2007 International Conference on Control, Automation and Systems, pp. 1387–1392, 2007.
[38] Q. Chen, Y. Wang, and Y. Lu, “Formation control for UAVs based on the virtual structure idea and nonlinear guidance logic,” in 2021 6th International Conference on Automation, Control and Robotics Engineering (CACRE), pp. 135–139, 2021.
[39] D. Galvez-López and J. D. Tardos, “Bags of binary words for fast place recognition in image sequences,” IEEE Transactions on Robotics, vol. 28, no. 5, pp. 1188–1197, 2012.
[40] L. Nieto-Hernández, A. A. Gómez-Casasola, and H. Rodríguez-Cortés, “Monocular SLAM position scale estimation for quadrotor autonomous navigation,” in 2019 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 1359–1364, 2019.
[41] A. Gómez-Casasola and H. Rodríguez-Cortés, “Scale factor estimation for quadrotor monocular-vision positioning algorithms,” Sensors, vol. 22, no. 20, p. 8048, 2022.
[42] D. Gehrig, M. Göttgens, B. Paden, and E. Frazzoli, “Scale-corrected monocular SLAM for the AR.Drone 2.0,” 2017.
[43] G. S., M. P. M.M., U. Verma, and R. M. Pai, “Semantic segmentation of UAV aerial videos using convolutional neural networks,” in 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE), pp. 21–27, 2019.
[44] U. Verma, A. Chauhan, M. P. M.M., and R. Pai, “DeepRivWidth: Deep learning based semantic segmentation approach for river identification and width measurement in SAR images of coastal Karnataka,” Computers & Geosciences, vol. 154, p. 104805, 2021.
[45] Y. Huang, G. Lee, R. Soong, and J. Liu, “Real-time vision-based river detection and lateral shot following for autonomous UAVs,” in 2020 IEEE International Conference on Real-time Computing and Robotics (RCAR), pp. 421–426, 2020.
[46] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III, pp. 234–241, Springer, 2015.
[47] L. Lopez-Fuentes, C. Rossi, and H. Skinnemoen, “River segmentation for flood monitoring,” in 2017 IEEE International Conference on Big Data (Big Data), pp. 3746–3749, 2017.
[48] Parrot, “Parrot ANAFI | Professional drone camera 4K HDR.” Available at https://www.parrot.com/us/drones/anafi.
[49] Parrot, “Parrot ANAFI | Professional thermal drones - 4K camera.” Available at https://www.parrot.com/en/drones/anafi-thermal.
[50] Parrot-Developers, “Olympe - Python controller library for Parrot drones.” Available at https://github.com/Parrot-Developers/olympe.
[51] Parrot, “What is Parrot Sphinx.” Available at https://developer.parrot.com/docs/sphinx/index.html.
Description Master's thesis
National Chengchi University
Department of Computer Science
109753155
Source http://thesis.lib.nccu.edu.tw/record/#G0109753155
URI http://nccur.lib.nccu.edu.tw/handle/140.119/147027
Type thesis
Table of Contents Chapter 1 Introduction
1.1 Research Background
1.2 Research Motivation and Objectives
1.3 Research Results and Contributions
1.4 Thesis Organization
Chapter 2 Literature Review
2.1 Simultaneous Localization and Mapping (SLAM)
2.1.1 Visual SLAM (VSLAM)
2.1.2 Collaborative Visual SLAM
2.2 Multi-UAV Formation Control
2.2.1 UAV Control
2.2.2 Formation Control Strategies
Chapter 3 Multi-UAV Autonomous Inspection Method
3.1 System Architecture
3.1.1 UAVs
3.1.2 Ground Station
3.2 Collaborative Visual Localization Module
3.2.1 CCM-SLAM
3.2.2 Map Merging Strategy
3.2.3 Real-Scale Estimation
3.3 Line Detection Module
3.3.1 Semantic Segmentation Model
3.4 Flight Control Module
3.4.1 Following Control
3.4.2 Formation Control
Chapter 4 Experimental Results and Analysis
4.1 Experimental Setup
4.1.1 Hardware Setup
4.1.2 Software Setup
4.2 Experimental Procedure and Evaluation Methods
4.2.1 Experimental Procedure
4.2.2 Evaluation Methods
4.3 Simulation Environment Experiments
4.3.1 Experimental Field
4.3.2 Analysis of Experimental Results
4.4 Real Environment Experiments
4.4.1 Experimental Field
4.4.2 Analysis of Experimental Results
4.5 Summary
Chapter 5 Conclusions and Future Work
5.1 Conclusions
5.2 Future Work
References