題名 以深度學習萃取高解析度無人機正射影像之農地重劃區現況資訊
Extracting Terrain Detail Information from High Resolution UAV Orthoimages of Farm Land Readjustment Area Using Deep Learning
作者 汪知馨
Wang, Chih-Hsin
貢獻者 邱式鴻
Chio, Shih-Hong
汪知馨
Wang, Chih-Hsin
關鍵詞 地籍測量
現況測量
深度學習
影像分割
遷移學習
Cadastral Survey
Detail Survey
Deep Learning
Image Segmentation
Transfer Learning
日期 2022
上傳時間 2-Sep-2022 15:21:10 (UTC+8)
摘要 為改善農地重劃區圖、地不符之問題,乃透過現況測量將所測得的土地使用現況與數化完成的地籍圖進行套疊分析。目前執行現況測量時多以地面測量方法為之,然此種測量方式耗費大量人力、時間,且其測量成果通常較為局部,難以測得全域之現況資訊。相較於傳統地面測量,近年來航空攝影測量逐漸被應用於地籍測量相關領域中,其中無人機以其低成本、快速產製高解析度正射影像等特性,彌補傳統地面測量耗時、耗力的不足,惟該方法仍以人工方式獲取現況資訊。
在影像分割領域,深度學習已取得較傳統影像處理更佳的表現,其透過卷積自動學習影像特徵,能夠在短時間內完成影像分割,具有自動化、高效率等優點。為此本研究採用ResU-net協助萃取高解析度正射影像中農地重劃區全域的現況資訊,並分析經過後處理的萃取成果應用於地籍測量相關作業之可行性。於模型訓練方面,除了使用高解析度正射影像外,另加入密匹配產製之數值高程資訊,且為了使模型有抗不同解析度的能力,嘗試使用不同高解析度的宜蘭正射影像作為模型的輸入資料,透過實驗比較網路的訓練成果。
研究成果顯示在標籤資料涵蓋高程變化處時,加入高程資訊能些微提升模型的偵測能力。本研究使用宜蘭資料訓練的模型權重作為初始權重,再使用部分宜蘭與台中區域資料進行遷移學習,實驗成果證實使用遷移學習可提升效率,模型於宜蘭測試資料的精度F Score達0.73;台中測試資料的F Score達到0.86。另外,本研究計算地測現況點與深度學習萃取成果的平面位置較差,以《地籍測量實施規則》第73條規定進行分析,經統計得約80%資料符合規定,顯示應用深度學習搭配高解析度正射影像協助萃取農地重劃區現況資訊有其可行性。
The results of detail surveys are overlaid on digitized graphic cadastral maps to resolve inconsistencies between the current land situation and the cadastral maps. At present, detail data are mostly surveyed with theodolites and satellite positioning instruments, which is time-consuming and labor-intensive; moreover, the results usually cover only local areas, so global terrain detail data cannot be obtained. In recent years, aerial photogrammetry has been applied to cadastral surveying, and UAVs in particular are increasingly used as a low-cost, efficient means of acquiring high-resolution data, bridging the gap between slow but accurate field surveys and fast conventional aerial surveys. However, the detail information in these workflows is still extracted manually.
In image segmentation, deep learning has achieved higher accuracy than conventional image processing methods: stacked convolutional layers learn image features automatically and perform segmentation in a short time, making the process automatic and efficient. This study therefore uses ResU-net to assist in extracting global terrain detail information from high-resolution UAV orthoimages of farmland readjustment areas and evaluates the feasibility of applying the post-processed results to detail surveys. In addition to the high-resolution orthoimages, a digital surface model (DSM) generated by dense matching was used as model input. To make the model robust to different resolutions, Yilan orthoimages and DSMs with different ground sampling distances were used as training data, and the training results of the networks were compared.
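To make the input design above concrete, the following is a minimal sketch, assuming PyTorch and NumPy (neither framework is named in this record): it stacks a dense-matching DSM band onto an RGB orthoimage tile as a four-channel input and passes it through one residual convolution block of the kind used in ResU-net-style encoders. The channel sizes, min-max DSM scaling, and layer arrangement are illustrative assumptions, not the thesis's actual configuration.

```python
# Hypothetical sketch: 4-channel (RGB + DSM) input tile and one residual block,
# loosely in the spirit of a ResU-net encoder. Channel sizes are assumptions.
import numpy as np
import torch
import torch.nn as nn

def make_input_tile(rgb: np.ndarray, dsm: np.ndarray) -> torch.Tensor:
    """Stack an RGB orthoimage tile (H, W, 3) with a DSM tile (H, W) into a
    (1, 4, H, W) tensor; the min-max DSM scaling is an illustrative assumption."""
    rgb = rgb.astype(np.float32) / 255.0
    dsm = dsm.astype(np.float32)
    dsm = (dsm - dsm.min()) / (dsm.max() - dsm.min() + 1e-6)
    tile = np.concatenate([rgb, dsm[..., None]], axis=-1)        # (H, W, 4)
    return torch.from_numpy(tile).permute(2, 0, 1).unsqueeze(0)  # (1, 4, H, W)

class ResidualBlock(nn.Module):
    """Two Conv-BN(-ReLU) layers with an identity/1x1 shortcut (He et al., 2016)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.body(x) + self.skip(x))

# Dummy 256 x 256 tile, just to show the shapes flowing through one block.
x = make_input_tile(np.zeros((256, 256, 3), np.uint8), np.zeros((256, 256), np.float32))
y = ResidualBlock(4, 64)(x)   # -> (1, 64, 256, 256)
```

In a full ResU-net-style network, blocks like this would sit inside the U-Net encoder-decoder with skip connections; only the extra DSM channel at the input distinguishes it from a plain RGB setup.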
The results show that when the label data cover areas of elevation change, adding the dense-matching DSM slightly improves detection accuracy. After the model was trained on the Yilan data, part of the Yilan data and part of the Taichung data were used to fine-tune it, and the experiments confirm that transfer learning shortens training time. The F score on the Yilan test data reached 0.73, and on the Taichung test data 0.86. After post-processing the detection results, the planar position differences between detail points from ground surveys and those extracted by deep learning were computed and analyzed against Article 73 of the Rules for Implementation of Cadastral Surveys; about 80% of the points met the accuracy requirement, demonstrating the feasibility of using deep learning with high-resolution UAV orthoimages to assist in extracting global terrain detail information for farmland readjustment areas.
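The two quantitative checks mentioned above can be illustrated with a short, hedged sketch: a pixel-wise F score computed from precision and recall, and a planar position difference compared against a tolerance. The `tolerance_m` value and the counts in the example call are placeholders, since the abstract does not quote the numerical limits of Article 73 or the underlying confusion-matrix counts.

```python
# Hedged sketch of the two checks reported above; tolerance_m is a placeholder,
# not the actual Article 73 limit, which is not quoted in this abstract.
import math

def f_score(tp: int, fp: int, fn: int) -> float:
    """F1 = 2PR / (P + R) from pixel-wise true positives, false positives, false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def within_tolerance(p_survey, p_extracted, tolerance_m=0.20):
    """Planar position difference between a ground-surveyed detail point and the
    corresponding point extracted by deep learning, compared with a tolerance in metres."""
    dx = p_extracted[0] - p_survey[0]
    dy = p_extracted[1] - p_survey[1]
    return math.hypot(dx, dy) <= tolerance_m

print(round(f_score(tp=730, fp=200, fn=340), 2))        # ~0.73 with these illustrative counts
print(within_tolerance((2500.10, 2700.30), (2500.22, 2700.25)))
```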
描述 碩士
國立政治大學
地政學系
109257031
資料來源 http://thesis.lib.nccu.edu.tw/record/#G0109257031
資料類型 thesis
URI http://nccur.lib.nccu.edu.tw/handle/140.119/141716
目錄 謝誌
摘要
Abstract
目錄
圖目錄
表目錄
第一章 緒論
第一節 研究背景與動機
第二節 研究目的
第三節 研究架構
第二章 文獻回顧
第一節 現況測量
第二節 深度學習
第三章 研究方法
第一節 研究區域
第二節 研究資料與處理工具
第三節 研究流程
第四節 研究方法及理論基礎
第四章 實驗成果分析與討論
第一節 實驗資料
第二節 深度學習網路訓練
第五章 結論與建議
第一節 結論
第二節 建議
參考文獻
DOI 10.6814/NCCU202201144