Title page for etd-0610117-155210
Title
應用影像特徵匹配於水下拖曳式載具定位研究
Positioning of Underwater Towed Vehicles from Image Feature Matching
Department
Year, semester
Language
Degree
Number of pages
96
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2017-06-16
Date of Submission
2017-07-12
Keywords
FITS, Attitude calibration, SIFT, Towed vehicle, Radial distortion, Feature-based positioning algorithm
Statistics
The thesis/dissertation has been browsed 5676 times and downloaded 70 times.
Chinese Abstract (translated)
Inertial and acoustic navigation are the most common underwater positioning methods. However, a high-precision inertial navigation system is extremely expensive, and its integrated drift error grows rapidly over time, while the performance of acoustic positioning systems is degraded by the temporal and spatial variability of sound speed in the water column, low update rates, high latency, and multipath effects. Optical imagery, by contrast, offers high resolution, high frame rates, and low cost, and a camera is standard equipment on almost every underwater vehicle; if the camera records images of the seafloor surface, the displacement of the vehicle can be estimated by extracting and matching optical image features. This study therefore develops an underwater positioning algorithm based on extracting and matching seafloor image features. It begins with image calibration, including radial distortion correction and image-plane attitude correction. The Scale Invariant Feature Transform (SIFT) algorithm is then used to extract and match feature points in consecutive frames, yielding the feature-point displacement between two adjacent frames. Finally, the image displacement is converted from pixel units to real-world scale (metres or centimetres) to estimate the actual displacement of the vehicle. Seafloor images captured by the Fiber-optical Instrumentation Towed System (FITS), a deep-towed vehicle developed by the Institute of Undersea Technology at National Sun Yat-sen University, are used for feature matching and for estimating the FITS trajectory. The estimated trajectory is compared with the trajectories from the DVL and ROVINS underwater positioning systems carried by FITS, in order to evaluate the positioning performance of the seafloor image feature algorithm.
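The radial distortion correction mentioned in the abstract is commonly performed with a polynomial (Brown) lens model. The following is a minimal sketch of that idea, not the thesis's actual calibration code; the principal point and distortion coefficients used here are hypothetical illustration values, not the thesis's calibrated parameters.

```python
# Sketch of radial (lens) distortion correction using the Brown polynomial
# model: p_u = c + (p_d - c) * (1 + k1*r^2 + k2*r^4), where c is the
# principal point and r is the distorted radius from c.
# All parameters (cx, cy, k1, k2) are hypothetical, for illustration only.

def undistort_point(x_d, y_d, cx, cy, k1, k2):
    """Map a distorted pixel (x_d, y_d) to its corrected position."""
    dx, dy = x_d - cx, y_d - cy            # offset from principal point
    r2 = dx * dx + dy * dy                 # squared radius
    scale = 1.0 + k1 * r2 + k2 * r2 * r2   # radial scaling factor
    return cx + dx * scale, cy + dy * scale

# A point at the principal point is unchanged; with positive coefficients,
# off-centre points are pushed radially outward (pincushion-style correction).
center = undistort_point(320.0, 240.0, 320.0, 240.0, 1e-7, 1e-13)
corner = undistort_point(600.0, 440.0, 320.0, 240.0, 1e-7, 1e-13)
```

In practice the coefficients would come from a checkerboard calibration of the FITS camera; the sketch only shows how a calibrated model is applied per pixel.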
Abstract
Underwater navigation is usually achieved with inertial or acoustic sensors. However, a high-precision inertial navigation system (INS) is quite expensive, and its main drawback is position drift that accumulates over time. As for acoustic navigation systems, their performance is limited by sound-speed variation in the water column, high latency, low refresh rates, and multipath effects. In addition to inertial and acoustic sensors, optical sensors have great potential as navigation tools for underwater vehicles. Considering that a video camera is standard equipment on almost every underwater vehicle, it is easy to collect seafloor video while a vehicle conducts a seafloor survey. With its high resolution and high frame rate, seafloor video can support accurate positioning of an underwater vehicle by detecting and matching image features. Therefore, in this study we developed a feature-based positioning algorithm for estimating the displacement of underwater vehicles. The algorithm consists of four steps: radial distortion calibration of the image, attitude calibration of the image plane, the scale-invariant feature transform (SIFT) descriptor, and scale transform from the image to the physical world. To evaluate the performance of the algorithm, analysis was carried out on seafloor videos collected off southwestern Taiwan using the Fiber-optical Instrumentation Towed System (FITS), a deep-towed vehicle developed by National Sun Yat-sen University that is also equipped with an INS and a Doppler velocity log (DVL) for navigation. INS and DVL measurements were collected while performing the seafloor imaging survey. Using the developed algorithm, seafloor images were extracted from the video, and features were detected and matched to estimate the vehicle displacement. The performance of the feature-based positioning algorithm was then evaluated by comparing the estimated vehicle displacements with the INS and DVL measurements.
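The last two stages of the pipeline described above, outlier rejection of matched-feature displacements with a median-flow-style filter and pixel-to-metre conversion, can be sketched as follows. This is a minimal illustration assuming a flat seafloor, a pinhole camera at known altitude, and hypothetical function names and threshold values; it is not the thesis's actual implementation.

```python
# Sketch of (a) median-flow-style outlier rejection over matched-feature
# displacement vectors and (b) pixel-to-metre conversion under a pinhole
# camera at altitude h above a flat seafloor: d_metres = d_pixels * h / f.
# Function names and the deviation threshold are illustrative assumptions.
from statistics import median

def filter_median_flow(displacements, max_dev=3.0):
    """Keep displacement vectors close to the per-axis median flow (pixels)."""
    mx = median(dx for dx, dy in displacements)
    my = median(dy for dx, dy in displacements)
    return [(dx, dy) for dx, dy in displacements
            if abs(dx - mx) <= max_dev and abs(dy - my) <= max_dev]

def pixels_to_metres(d_px, altitude_m, focal_px):
    """Pinhole model: ground displacement = pixel displacement * h / f."""
    return d_px * altitude_m / focal_px

# Matched-feature displacements between two frames; one gross mismatch.
flows = [(10.1, 2.0), (9.8, 1.9), (10.3, 2.2), (55.0, -30.0)]
good = filter_median_flow(flows)         # mismatch (55.0, -30.0) is rejected
dx = median(v[0] for v in good)          # consensus x-displacement in pixels
dist = pixels_to_metres(dx, altitude_m=3.0, focal_px=1500.0)  # metres
```

The median-based filter is robust to a minority of wrong SIFT matches because a single gross mismatch barely moves the median, whereas it would badly skew a mean.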
Table of Contents
Acknowledgments
Abstract (Chinese)
Abstract
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
1.1  Preface
1.2  Motivation
1.3  Related Work
1.4  Objectives
Chapter 2  Positioning Algorithm Based on Seafloor Image Feature Matching
2.1  Lens Distortion Calibration
2.2  Image-Plane Attitude Calibration
2.3  SIFT Feature Extraction
2.3.1  Scale-Space Extrema Detection
2.3.2  Keypoint Localization
2.3.3  Orientation Assignment
2.3.4  Keypoint Descriptor
2.4  Feature Matching
2.4.1  Minimum Euclidean Distance
2.4.2  Median Flow Filter
Chapter 3  Image Calibration and Scale Transform
3.1  Lens Distortion Calibration Experiment
3.2  Image Attitude Calibration Experiment
3.3  Image Scale Transform
Chapter 4  Positioning Performance Evaluation
4.1  Effect of the Hessian Threshold
4.2  Improving Feature Matching with the Median Flow Filter
4.3  Image Positioning Performance Analysis
4.4  Positioning Performance after Camera Yaw-Angle Correction
Chapter 5  Discussion and Conclusions
5.1  Discussion
5.2  Conclusions
References
Fulltext
This electronic full text is licensed only for individual, non-profit retrieval, reading, and printing for the purpose of academic research. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: user-defined availability date
Available:
Campus: available
Off-campus: available


Printed copies
Availability information for printed copies is relatively complete for academic year 102 (2013) and later. To inquire about printed theses from academic year 101 or earlier, please contact the printed-thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Availability: available
