博碩士論文 etd-0915113-150130 詳細資訊
Title page for etd-0915113-150130
論文名稱
Title
應用影像直線偵測於ROV水下查驗
Line Detection in ROV Video Image for Underwater Inspection
系所名稱
Department
畢業學年期
Year, semester
語文別
Language
學位類別
Degree
頁數
Number of pages
84
研究生
Author
指導教授
Advisor
召集委員
Convenor
口試委員
Advisory Committee
口試日期
Date of Exam
2013-10-03
繳交日期
Date of Submission
2013-10-15
關鍵字
Keywords
水下無人遙控載具、邊緣偵測、直線偵測、霍夫轉換
Hough transform, line detection, edge detection, ROV
統計
Statistics
本論文已被瀏覽 5710 次,被下載 809 次。
The thesis/dissertation has been browsed 5710 times, has been downloaded 809 times.
中文摘要 Chinese Abstract
Remotely operated vehicles (ROVs) have developed rapidly in recent years, replacing most underwater operations that previously had to be carried out by divers or manned submersibles; they are now widely used in marine scientific exploration and research, military defense, maintenance and inspection of ocean engineering facilities, and underwater rescue. Although ROV systems and application technologies have been developed domestically, they have not yet been introduced on a large scale into underwater construction inspection work. In response to the expected growth in demand for underwater construction supervision and inspection services in Taiwan, this study investigates ROV-based underwater supervision and inspection techniques for harbor engineering and proposes feasible solutions to problems encountered during sea trials. For underwater inspection of harbor works, currents and strong waves striking coastal breakwaters easily disturb the ROV's motion stability and reduce the efficiency of inspection operations. To address this, this study developed the remotely operated vehicle ROV Auroree and implemented auto-heading, auto-altitude, and auto-depth control to reduce the influence of currents and waves on its motion. In addition, for work areas that are broad and turbid, an appropriate guidance method is usually adopted to improve inspection efficiency. In traditional diver-based inspection, nylon ropes are laid on the seabed to mark the inspection path, and divers film along the ropes. When an ROV replaces the diver for underwater inspection, an automatic rope-following function is needed to assist the operator in steering the ROV along the nylon rope while capturing underwater video. Therefore, taking the development of an automatic nylon-rope-following function as its premise, this study develops underwater image-processing techniques for the ROV, with the goal of raising the detection success rate of the rope (the target object). The image-processing procedure established in this study consists of three stages: color space transformation, edge detection, and detection of straight-line geometric features. At each stage, the performance of various image-processing methods is examined, and a procedure suitable for underwater inspection work is established accordingly. The results show that the Cb value in the YCbCr color space can effectively distinguish the yellow nylon-rope target from the background underwater, and that the Canny edge detector locates edges accurately while producing simplified edge contours. Furthermore, the Probabilistic Hough Transform overcomes the heavy computational cost of the standard Hough Transform, finding straight-line geometric features quickly and accurately.
Abstract
Remotely operated vehicles (ROVs) have flourished in recent years, replacing divers and manned submersibles in a wide range of underwater operations. Currently, ROVs are used worldwide for a variety of underwater tasks, including scientific research, mine hunting, offshore engineering, structural inspection, heavy construction work, and wreck recovery. In Taiwan, ROV technology is well known in academic circles but rare in the offshore industries. Even though the use of ROVs is expected to become increasingly attractive in the offshore industries as they become more affordable, several challenges must be solved to increase their operating efficiency. When operating ROVs for underwater inspection of offshore construction, one of the major problems is the disturbance caused by surface waves and currents. Another problem is limited underwater visibility. Although nylon ropes are usually laid on the seafloor for divers or ROVs to follow when conducting inspections, maneuvering an ROV for underwater pipeline/cable tracking requires experienced operators and continuous attention. Good pipeline/cable tracking control of an ROV is therefore desired to relieve the operators' burden and increase inspection efficiency. Accordingly, we designed and fabricated the ROV Auroree and implemented a proportional-derivative (PD) controller on it to provide the auto-pilot functions of auto-heading, auto-depth, and auto-altitude. In addition, since image-based visual servo control of an ROV for the pipeline/cable tracking problem can improve the efficiency of underwater inspection, image-processing methods for line detection are developed in this study. The proposed image-processing algorithm is divided into three stages: color space transformation, edge detection, and line detection.
At each stage of image processing, the performance of different methods is evaluated, and a procedure suitable for underwater nylon-rope tracking is established accordingly. The evaluation results indicate that, in the YCbCr color space, the yellow nylon rope contrasts strongly with the background in blue chrominance (Cb), which makes it effective for extracting the rope from the background. Then, in the edge-detection stage, applying the Canny detector to the Cb component of the underwater image is found to yield sharp, thin edges of the nylon rope. Moreover, in the line-detection stage, we found that the Probabilistic Hough Transform (PHT) is more computationally efficient than the standard Hough Transform for acquiring the line segments of the nylon rope in images.
目次 Table of Contents
1 Introduction 1
1.1 Preface 1
1.2 Motivation and Objectives 2
1.3 Literature Review 6
1.4 Thesis Organization 7
2 The ROV Auroree System 9
2.1 Preface 9
2.2 Sensors and Peripherals 11
2.3 Communication System 13
2.4 Electronic Control and Software Design 15
3 Auto-Heading/Auto-Altitude/Auto-Depth Control 17
3.1 Control Architecture 17
3.2 Experiment Planning 19
3.3 Experimental Results 20
4 Image Line-Detection Algorithm 27
4.1 Preface 27
4.2 Color Space Transformation 28
4.3 Edge Detection 33
4.3.1 Gradient Methods 33
4.3.2 Canny Edge Detection 35
4.4 Line Detection 39
4.4.1 Hough Transform 39
4.4.2 Probabilistic Hough Transform 43
5 Analysis of Line-Detection Results 45
5.1 Color Space Transformation 46
5.2 Edge Detection 49
5.3 Line Detection 57
6 Discussion and Conclusions 60
Appendix A Worked Example of the Image-Processing Procedure 63
A.1 Color Space Transformation 63
A.2 Edge Detection 64
A.3 Line Detection 67
References 67
電子全文 Fulltext
This electronic full text is licensed only for searching, reading, and printing by users for personal, non-profit academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization, so as to avoid infringement.
論文使用權限 Thesis access permission: 校內校外完全公開 unrestricted
開放時間 Available:
校內 Campus: 已公開 available
校外 Off-campus: 已公開 available


紙本論文 Printed copies
Public-access information for printed copies is relatively complete for academic year 102 and later. To inquire about the public-access status of printed copies from academic year 101 or earlier, please contact the printed-thesis service counter of the Office of Library and Information Services. We apologize for any inconvenience.
開放時間 Available: 已公開 available
