Title: 無人機於建築物周圍指定區域之視覺導航降落方法 (Visual Navigation for UAV Landing on Accessory Building Floor)
Author: Liu, Hsiao-Che (劉効哲)
Advisor: Liu, Jyi-Shane (劉吉軒)
Keywords: UAV; decision control; behavior tree; image/target feature point recognition; visual navigation
Date: 2020
Uploaded: 1-Feb-2021 14:10:34 (UTC+8)

Abstract: In recent years, UAVs have moved beyond purely military applications and become increasingly common in everyday life, with many fields combining UAV technology to develop autonomous capabilities. For example, Wing, the drone subsidiary of Google's parent company Alphabet, became the first drone delivery company in the United States, applying destination detection and landing-site search techniques to actual cargo delivery; Amazon equips its delivery drones with sensing devices plus standard and infrared cameras to analyze the surrounding environment, developing drones capable of long-range flight. In most real-world UAV missions, landing is a critical step, especially in cargo transport and delivery: a delivery succeeds only when the drone lands on, or hovers at low altitude over, the target landing point. For precise landing requirements, vision-based navigation offers high reliability and accuracy. In this thesis, we present research on precise visual navigation for autonomous landing on an accessory platform adjacent to a building. We combine several state-of-the-art vision-based methods, develop additional functional components, control the decision logic through a behavior tree, and integrate the vision modules with the UAV's flight navigation control, yielding a practical autonomous navigation system for precise landing near buildings. Initial real-world experiments show that navigating by vision achieves a high success rate in precise landing.
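This record contains no code, but the abstract describes recognizing the landmark building through vision-based feature matching, and the table of contents below names SIFT with RANSAC for that step (Section 3.2.1). The following is a minimal, hypothetical sketch of such a matching stage, assuming OpenCV; the file paths, Lowe-ratio threshold, and inlier cutoff are illustrative assumptions, not values from the thesis:

```python
# Hypothetical sketch only: SIFT keypoint matching with RANSAC outlier
# rejection, in the spirit of the step named in Section 3.2.1. Paths and
# thresholds are assumptions, not the thesis's actual values.
import cv2
import numpy as np

def match_landmark(reference_path, frame_path, min_inliers=15):
    """Return (homography, inlier_count) if the stored landmark image is
    found in the camera frame, otherwise None."""
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref, None)
    kp_frm, des_frm = sift.detectAndCompute(frame, None)
    if des_ref is None or des_frm is None:
        return None

    # Lowe's ratio test on 2-nearest-neighbour matches filters ambiguous ones.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des_ref, des_frm, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < 4:  # a homography needs at least 4 correspondences
        return None

    src = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_frm[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC fits the homography while discarding outlier correspondences.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    inliers = int(mask.sum()) if mask is not None else 0
    return (H, inliers) if inliers >= min_inliers else None
```

RANSAC serves here to reject mismatched keypoints, so a landmark is only declared present when enough geometrically consistent matches survive.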
Description: Master's thesis, National Chengchi University (國立政治大學), Department of Computer Science (資訊科學系)
Student ID: 107753028
Source: http://thesis.lib.nccu.edu.tw/record/#G0107753028
Type: thesis
DOI: 10.6814/NCCU202100033

Table of Contents:
Chapter 1: Introduction
  1.1 Research Background
  1.2 Motivation and Objectives
  1.3 Thesis Organization
  1.4 Results and Contributions
Chapter 2: Literature Review
  2.1 The Landing Problem
  2.2 Visual Navigation
    2.2.1 Feature Point Matching
    2.2.2 ORB-SLAM
  2.3 Behavior Trees
Chapter 3: Technical Framework and Modules
  3.1 The Landing Task
    3.1.1 Identifying the Landmark Building
    3.1.2 Localizing the UAV to Establish a Safe Flight Path
    3.1.3 Locking onto the Target and Approaching
    3.1.4 Descending and Landing
  3.2 Feature Point Matching of Landmark Building Images
    3.2.1 SIFT + RANSAC (RANdom SAmple Consensus)
    3.2.2 Target Image Feature Matching Method
    3.2.3 Feature Matching Results
  3.3 Mapping ORB-SLAM Coordinates to the Real World
  3.4 Navigation Control
  3.5 Error Recovery Mechanism
  3.6 Constructing the Behavior Tree Decision Logic
Chapter 4: Experimental Design and Results Analysis
  4.1 Experimental Design
    4.1.1 Evaluation Metrics
  4.2 Results and Analysis
    4.2.1 Landing Based on Close-Range Image Feature Matching
    4.2.2 Close-Range Feature Matching with Marker Recognition
    4.2.3 Multiple Visual Navigation Cues: ORB-SLAM Error Recovery
    4.2.4 Effects of Lighting on Feature Detection
  4.3 Summary
Chapter 5: Conclusions and Future Work
  5.1 Conclusions
  5.2 Future Work
References
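Sections 2.3 and 3.6 indicate that a behavior tree drives the decision logic tying the vision modules to flight navigation control. The thesis's actual tree is not reproduced in this record; the following self-contained sketch only illustrates the mechanism, with hypothetical node names and stand-in condition/action functions:

```python
# Minimal behavior-tree skeleton, hypothetical: shows how Sequence/Fallback
# nodes could order the landing phases the abstract describes. The leaf
# callables are stand-ins, not the thesis's implementation.
from typing import Callable, List

SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

class Node:
    def tick(self) -> str:
        raise NotImplementedError

class Leaf(Node):
    def __init__(self, name: str, fn: Callable[[], str]):
        self.name, self.fn = name, fn
    def tick(self) -> str:
        return self.fn()

class Sequence(Node):
    """Runs children in order; stops at the first non-SUCCESS child."""
    def __init__(self, children: List[Node]):
        self.children = children
    def tick(self) -> str:
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

class Fallback(Node):
    """Tries children in order; stops at the first non-FAILURE child."""
    def __init__(self, children: List[Node]):
        self.children = children
    def tick(self) -> str:
        for child in self.children:
            status = child.tick()
            if status != FAILURE:
                return status
        return FAILURE

# Hypothetical landing tree: recognize the landmark, approach, then land;
# if visual matching fails, fall back to an error-recovery behavior (cf. 3.5).
landing_tree = Sequence([
    Fallback([
        Leaf("match_landmark", lambda: SUCCESS),  # stand-in for SIFT+RANSAC check
        Leaf("recover_pose", lambda: RUNNING),    # stand-in for ORB-SLAM recovery
    ]),
    Leaf("approach_target", lambda: SUCCESS),     # stand-in for navigation control
    Leaf("descend_and_land", lambda: SUCCESS),
])

if __name__ == "__main__":
    print(landing_tree.tick())  # ticks the tree once; prints SUCCESS here
```

The Sequence node enforces the phase ordering the abstract implies (recognize, approach, land), while the Fallback node gives a natural attachment point for the error-recovery mechanism named in Section 3.5.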