Please use this identifier to cite or link to this item: https://ah.lib.nccu.edu.tw/handle/140.119/100407
DC Field | Value | Language
dc.creator黃金聰;陳思翰zh_TW
dc.creatorHwang, Jin-Tsong;Chen, Szu-Han
dc.date2013-05
dc.date.accessioned2016-08-18T03:57:12Z-
dc.date.available2016-08-18T03:57:12Z-
dc.date.issued2016-08-18T03:57:12Z-
dc.identifier.urihttp://nccur.lib.nccu.edu.tw/handle/140.119/100407-
dc.description.abstractSeveral software packages that use web-based cloud computing technology for 3D model reconstruction have been released and made freely available; among them, Photosynth, first released by Microsoft in 2008, is the most commonly used. The software applies Scale-Invariant Feature Transform (SIFT) and Structure from Motion (SfM) techniques to extract and match features from images of the same scene taken from different positions, reconstructs the camera station positions and the 3D object-space coordinates of the photographed object, and then presents the 3D scene in a web browser. Because it is simple to operate and computes quickly, users can easily build virtual-reality scenes with consumer-grade digital cameras, and the resulting objects can be embedded in web pages to share with other users. This study uses empirical data to explore the positioning accuracy attainable when reconstructing object-space coordinates from multiple images with the Photosynth approach. Through a designed experiment, the data are analyzed against the 3D coordinates of target points surveyed with a total station theodolite, and the results of the proposed method are compared with those of other spatial-information reconstruction methods such as close-range photogrammetry and Lidar, in order to understand the 3D positioning accuracy of the method as well as its limitations and potential problems. The proposed method photographs a test area with pre-deployed control targets using an ordinary consumer digital camera and uses the RMSE of the differences from the known target coordinates as the accuracy index. The experiments show that, with appropriate control of the imaging factors, the 3D accuracy at the check points is about ±0.027 m, ±0.065 m, and ±0.012 m, respectively. Building 3D models with accuracy suitable for practical applications records spatial appearance economically, conveniently, and reliably, which will benefit building preservation work and offers an alternative approach for future 3D model construction.
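The abstract describes extracting and matching SIFT features across photos taken from different camera positions before SfM recovers the camera stations and object coordinates. Photosynth's internal pipeline is not public, so the following is only a minimal sketch of that matching step using OpenCV; the image file names are placeholders, not data from the paper.

# Minimal sketch of SIFT feature extraction and matching between two
# overlapping photos, using OpenCV. This only illustrates the matching step
# described in the abstract, not Photosynth's actual implementation.
import cv2

img1 = cv2.imread("station_A.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
img2 = cv2.imread("station_B.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)  # keypoints + 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to keep distinctive matches only.
matcher = cv2.BFMatcher(cv2.NORM_L2)
raw_matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in raw_matches if m.distance < 0.75 * n.distance]

print(f"{len(good)} tentative correspondences available for SfM reconstruction")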
dc.description.abstractMany 3D model reconstruction software packages based on cloud computing technology have been released recently. Photosynth, released as free software by Microsoft in 2008, is the earliest of these and is based on the Scale-Invariant Feature Transform (SIFT) and Structure from Motion (SfM). SIFT provides feature point extraction that is invariant to scale, rotation, perspective, and illumination, so feature points can be extracted and matched automatically. SfM then recovers the camera stations and object-space coordinates from a sequence of overlapping photographs. Photosynth can therefore analyze digital photographs and generate a 3D point cloud of the photographed object. In this paper, accuracy is assessed using the RMSE of the point clouds generated by Photosynth, Lidar, and close-range photogrammetry against empirical reference data. The results indicate that, under well-controlled imaging conditions, the 3D positioning accuracy of the Photosynth approach is about ±0.027 m, ±0.065 m, and ±0.012 m, respectively. These results are compared with those of close-range photogrammetry and Lidar, and the limitations and potential problems of the method are identified.
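The abstracts state that accuracy was assessed with the RMSE of point-cloud check-point coordinates against reference coordinates surveyed with a total station. A minimal sketch of that per-axis RMSE index in Python follows; the array names and numbers are illustrative assumptions, not the paper's data.

# Hedged sketch of the accuracy index described above: RMSE of the differences
# between reconstructed check-point coordinates and total-station reference
# coordinates, computed separately for each axis.
import numpy as np

def per_axis_rmse(reconstructed: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Return the RMSE along X, Y, Z for n check points (both arrays shape (n, 3))."""
    diff = reconstructed - reference            # coordinate differences per point
    return np.sqrt(np.mean(diff ** 2, axis=0))  # one RMSE value per axis

# Example with made-up coordinates in metres; the paper itself reports roughly
# ±0.027 m, ±0.065 m and ±0.012 m for its check points.
recon = np.array([[1.012, 2.054, 0.998], [3.021, 1.962, 2.010]])
ref   = np.array([[1.000, 2.000, 1.000], [3.000, 2.000, 2.000]])
print(per_axis_rmse(recon, ref))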
dc.format.extent1762467 bytes-
dc.format.mimetypeapplication/pdf-
dc.relation臺灣土地研究, 16(1), 81-101
dc.relationJournal of Taiwan Land Research
dc.subject近景攝影測量; 光達; 點雲
dc.subjectClose-range Photogrammetry; Lidar; Point Cloud
dc.title利用多重影像產生之點雲的精度評估zh_TW
dc.title.alternativeAccuracy Assessment of Point Cloud Generated by Multiple Images
dc.typearticle
item.fulltextWith Fulltext-
item.openairetypearticle-
item.cerifentitytypePublications-
item.grantfulltextopen-
item.openairecristypehttp://purl.org/coar/resource_type/c_18cf-
Appears in Collections: Journal Articles
Files in This Item:
File: 16(1)-81-101.pdf | Size: 1.72 MB | Format: Adobe PDF