Title page for etd-0330118-130335
Title: A Posture Measurement Approach for Robot Arms by RGB-D Cameras
Title (Chinese): 使用RGB-D攝影機實現機器手臂姿態量測
Department:
Year, semester:
Language:
Degree:
Number of pages: 103
Author:
Advisor:
Convenor:
Advisory Committee:
Date of Exam: 2018-05-18
Date of Submission: 2018-06-21
Keywords: robot arm, 3D depth sensor, Point Cloud Library, Gaussian noise filtering, random sample consensus, cylindrical model
Statistics: This thesis has been viewed 5648 times and downloaded 0 times.
中文摘要 Chinese Abstract
In recent years, robot arms have brought great convenience to industry. However, most robot arms produced today lack automatic collision-avoidance detection, and as robot arm usage grows and humans work alongside robot arms ever more closely, safety becomes an increasingly important issue. This thesis therefore proposes a geometric shape recognition method, based on external observation, that finds the cylindrical parameters of point cloud data whose geometric structure is incomplete. Taking the Universal Robots 5 (UR5) collaborative robot as the research target, the method measures the cylindrical parameters of three of the five lower UR5 links and then computes the three corresponding joint angles. The method divides into two parts: image noise filtering and image processing. First, two 3D depth sensors, together with the Point Cloud Library (PCL), each capture the posture of the five lower UR5 links, and the two captured images are merged. In the first part, the merged image undergoes point cloud clustering and Gaussian noise filtering to separate the point cloud data of the three links from the original image. In the second part, the random sample consensus (RANSAC) algorithm finds the point cloud data consistent with an ideal cylindrical model to obtain its cylindrical parameters, and the three corresponding joint angles are computed from the cylindrical parameters of the three links. Finally, the feasibility of the method is verified against the three joint angles displayed by the UR5 system, achieving the goal of posture measurement and supporting future research on automatic collision-avoidance detection.
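
The merging-and-filtering stage described above maps naturally onto PCL primitives. The following is a minimal C++ sketch, not the thesis's implementation: the extrinsic transform T_sensor2_to_sensor1 is a hypothetical calibration input (camera-to-camera calibration is treated in reference [25] below), and PCL's StatisticalOutlierRemoval is shown as a generic stand-in for the Gaussian noise filtering of Section 3.2.3.

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/transforms.h>
#include <pcl/filters/statistical_outlier_removal.h>

// Merge the clouds from two depth sensors into one frame, then drop noisy points.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr
mergeAndDenoise(const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& cloud1,
                const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& cloud2,
                const Eigen::Affine3f& T_sensor2_to_sensor1)  // assumed extrinsics
{
  // Express the second sensor's cloud in the first sensor's frame and concatenate.
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr merged(new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::transformPointCloud(*cloud2, *merged, T_sensor2_to_sensor1);
  *merged += *cloud1;

  // Statistical outlier removal: discard points whose mean distance to their
  // 50 nearest neighbors is more than 1 standard deviation above the average.
  pcl::StatisticalOutlierRemoval<pcl::PointXYZRGB> sor;
  sor.setInputCloud(merged);
  sor.setMeanK(50);
  sor.setStddevMulThresh(1.0);
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZRGB>);
  sor.filter(*filtered);
  return filtered;
}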
Abstract
In recent years, robot arms have brought many conveniences to industry. However, most robot arms currently in production have no function for automatic collision detection, and as robot arm applications increase and humans work alongside robot arms more densely, safety becomes an increasingly important issue. The objective of this thesis is therefore to use external observation to propose a geometric shape recognition method that can find cylindrical parameters in point cloud data containing incomplete geometric shapes. The thesis takes the Universal Robots 5 (UR5) as its research target, measuring the cylindrical parameters of three UR5 links and then computing the three joint angles corresponding to those links. The content divides into two parts: image noise filtering and image processing. At the beginning, images with point cloud data covering the posture of the five lower UR5 links are captured by two 3D depth sensors and processed with the Point Cloud Library (PCL); the images captured by the two sensors are then combined into one image. The first part separates the point cloud data from the combined image and applies Gaussian noise filtering to obtain the point cloud data of the three links. The second part uses the random sample consensus (RANSAC) algorithm to segment the point cloud data consistent with an ideal cylindrical model and to obtain its cylindrical parameters; the three joint angles corresponding to the UR5 links are then computed from these cylindrical parameters. Finally, the feasibility of the method is verified by comparing against the three joint angles displayed by the UR5 system, achieving the purpose of posture measurement and benefiting future research on automatic collision detection.
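
For the cylinder-fitting stage, PCL ships a standard RANSAC pipeline (see the tutorial cited as [21] in the references). The sketch below fits a cylinder to one link's cloud and returns the unit axis direction; the thresholds, iteration count, and radius limits are illustrative assumptions, not the thesis's tuned values. The angle between two adjacent link axes then follows from their dot product, theta = acos(a1 · a2) for unit vectors a1 and a2.

#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/sac_segmentation.h>

// Fit a RANSAC cylinder to one link's (possibly incomplete) point cloud and
// return the unit direction of the fitted axis.
Eigen::Vector3f fitLinkAxis(const pcl::PointCloud<pcl::PointXYZ>::Ptr& link_cloud)
{
  // Cylinder fitting in PCL is normal-constrained, so estimate surface normals first.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setSearchMethod(tree);
  ne.setInputCloud(link_cloud);
  ne.setKSearch(50);
  ne.compute(*normals);

  // RANSAC cylinder model: coefficients 0-2 are a point on the axis,
  // 3-5 the axis direction, 6 the radius.
  pcl::SACSegmentationFromNormals<pcl::PointXYZ, pcl::Normal> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_CYLINDER);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setNormalDistanceWeight(0.1);
  seg.setMaxIterations(10000);
  seg.setDistanceThreshold(0.01);   // 1 cm inlier band (illustrative)
  seg.setRadiusLimits(0.02, 0.08);  // assumed plausible link radii in meters
  seg.setInputCloud(link_cloud);
  seg.setInputNormals(normals);

  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coeff(new pcl::ModelCoefficients);
  seg.segment(*inliers, *coeff);

  return Eigen::Vector3f(coeff->values[3], coeff->values[4],
                         coeff->values[5]).normalized();
}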
目次 Table of Contents
Thesis Certification i
Thesis Public Release Authorization ii
Acknowledgments iv
Abstract (Chinese) v
Abstract vi
Table of Contents viii
List of Figures xi
List of Tables xv
List of Symbols xvi
Chapter 1 Introduction 1
1.1 Motivation and Objectives 1
1.2 Thesis Organization 2
Chapter 2 Research Background 3
2.1 Microsoft Xbox One Kinect Sensor 3
2.2 Microsoft Xbox One Kinect Adapter 5
2.3 Collaborative Robots 5
2.4 Point Cloud Library 7
2.5 Point Cloud Data Format 8
2.6 Random Sample Consensus 10
2.7 Model Segmentation 12
2.7.1 Plane Model Segmentation 12
2.7.2 Cylinder Model Segmentation 14
2.7.3 Plane and Cylinder Model Segmentation 19
2.8 System Environment and Software Interface 23
Chapter 3 Research Methods 24
3.1 Point Cloud Image Merging 28
3.2 Point Cloud Noise Filtering 30
3.2.1 Spatial Coordinate Filtering 31
3.2.2 Image Color Clustering 32
3.2.3 Gaussian Noise Filtering 35
3.3 Point Cloud Image Processing 42
3.3.1 Point Cloud Normal Vector Computation 43
3.3.2 Link Principal-Axis Direction Verification 48
3.3.3 Principal-Axis Compensation Method 52
3.3.4 Per-Joint Angle Computation 53
3.3.5 Included-Angle Proof 56
Chapter 4 Simulations, Experiments, and Discussion 59
4.1 Simulation Experiments 59
4.1.1 Experimental Environment 60
4.1.2 Experimental Method 62
4.1.3 Experimental Results 63
4.2 Physical Robot Experiments 68
4.2.1 Experimental Environment 68
4.2.2 Experimental Method 70
4.2.3 Experimental Results 73
Chapter 5 Conclusions and Future Work 80
5.1 Conclusions 80
5.2 Future Work 81
References 82
參考文獻 References
[1] S. Rogge and C. Hentschel, “A Multi-Depth Camera Capture System for Point
Cloud Library,” IEEE Fourth International Conference on Consumer Electronics
Berlin (ICCE-Berlin), pp. 50-54, 2014.
[2] R. B. Rusu and S. Cousins, “3D is here: Point Cloud Library (PCL),” IEEE
International Conference on Robotics and Automation, pp. 1-4, 2011.
[3] L. Yang, L. Zhang, H. Dong, A. Alelaiwi and A. E. Saddik, “Evaluating and
Improving the Depth Accuracy of Kinect for Windows v2,” IEEE Sensors Journal,
Volume 15, Issue 8, pp. 4275-4285, 2015.
[4] D. Pagliari and L. Pinto, “Calibration of Kinect for Xbox One and Comparison
between the Two Generations of Microsoft Sensors,” MDPI Sensors, Volume 15,
pp. 27569-27589, 2015.
[5] K. S. Hwang and Q. H. Yang, “Posture Imitation and Balance Learning for
Humanoid Robots,” master's thesis, National Sun Yat-sen University, November
2015.
[6] T. Sugiura, “Introduction to Kinect v2,” only available online:
https://www.slideshare.net/SugiuraTsukasa/kinect-v2-introduction-and-tutorial [Accessed 2 April 2018].
[7] M. Valoriani and C. Giorio, “Develop Store Apps with Kinect for Windows v2,”
only available online:
https://www.slideshare.net/MatteoValoriani/develop-store-apps-with-kinect-forwindowsv2-150601152707lva1app6891 [Accessed 2 April 2018].
[8] Kinect Developers, “Xbox Kinect Adapter,” only available online:
http://www.xbox.com/zh-TW/xbox-one/accessories/kinect/kinect-adapter
[Accessed 2 April 2018].
[9] Kinect Developers, “Kinect for Windows SDK 2.0,” only available online:
https://www.microsoft.com/en-us/download/details.aspx?id=44561 [Accessed 14
May 2017].
[10] UR5 Developers, “Universal Robots User Manual,” only available online:
https://www.universal-robots.com/media/8704/ur5_user_manual_gb.pdf [Accessed
5 April 2017].
[11] UR5 Developers, “UR5 - A Highly Flexible Robot Arm,” only available online:
https://www.universal-robots.com/products/ur5-robot/ [Accessed 3 April 2018].
[12] UR5 Developers, “UR5,” only available online:
http://rva-tw.com/products_view.php?PID=16&sn=17 [Accessed 3 April 2018].
[13] P. M. Kebria, S. Al-wais, H. Abdi and S. Nahavandi, “Kinematic and Dynamic Modelling of UR5 Manipulator,” IEEE International Conference on Systems, Man,
and Cybernetics (SMC), pp. 004229-004234, October 2016.
[14] UR5 Developers, “Actual Center of Mass for Robot - 17264,” only available
online:
https://www.universal-robots.com/how-tos-and-faqs/faq/ur-faq/actual-center-of-mass-for-robot-17264/ [Accessed 3 April 2018].
[15] PCL Developers, “Point Cloud Library (PCL),” only available online:
http://pointclouds.org/ [Accessed 17 February 2017].
[16] PCL Developers, “What Is PCL?,” only available online:
http://pointclouds.org/about/ [Accessed 3 April 2018].
[17] PCL Developers, “The PCD (Point Cloud Data) file format,” only available online:
http://ns50.pointclouds.org/documentation/tutorials/pcd_file_format.php#pcd-file-format [Accessed 18 February 2017].
[18] M. A. Fischler and R. C. Bolles, “Random Sample Consensus: A Paradigm for
Model Fitting with Applications to Image Analysis and Automated Cartography,”
Communications of the ACM, Volume 24, Number 6, pp. 381-395, June 1981.
[19] K. G. Derpanis, “Overview of the RANSAC Algorithm,” Version 1.2, York
University, May 2010.
[20] PCL Developers, “Plane Model Segmentation,” only available online:
http://ns50.pointclouds.org/documentation/tutorials/planar_segmentation.php#planar-segmentation [Accessed 20 February 2017].
[21] PCL Developers, “Cylinder Model Segmentation,” only available online:
http://pointclouds.org/documentation/tutorials/cylinder_segmentation.php#cylinder-segmentation [Accessed 20 February 2017].
[22] G. Arvanitis, A. S. Lalos, K. Moustakas and N. Fakotakis, “Real-Time Removing
of Outliers and Noise in 3D Point Clouds Applied in Robotic Applications,”
University of Patras, August 2017.
[23] R. B. Rusu, Z. C. Marton, N. Blodow, M. Dolha and M. Beetz, “Towards 3D Point
Cloud Based Object Maps for Household Environments,” Robotics and
Autonomous Systems Journal, pp. 927-941, 2008.
[24] PCL Developers, “Estimating Surface Normals in a PointCloud,” only available
online:
http://pointclouds.org/documentation/tutorials/normal_estimation.php#normal-estimation [Accessed 24 May 2018].
[25] J. Miseikis, K. Glette, O. J. Elle and J. Torresen, “Automatic Calibration of a Robot
Manipulator and Multi 3D Camera System,” IEEE/SICE International Symposium
on System Integration (SII), pp. 735-741, December 2016.
電子全文 Fulltext
The electronic full text is licensed for personal, non-profit searching, reading, and printing for academic research purposes only. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: user-defined embargo period
Available:
Campus: permanently not available
Off-campus: permanently not available


紙本論文 Printed copies
Information on the public availability of printed copies is relatively complete from academic year 102 (2013-2014) onward. To check the availability of printed theses from academic year 101 or earlier, please contact the printed thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Available: permanently not available
