博碩士論文 etd-0113114-143306 詳細資訊
Title page for etd-0113114-143306
論文名稱
Title
人形機器人基於模仿之動作記憶與重現
Memorization and Replay of Humanoid Robot Actions by Emulation
系所名稱
Department
畢業學年期
Year, semester
語文別
Language
學位類別
Degree
頁數
Number of pages
102
研究生
Author
指導教授
Advisor
召集委員
Convenor
口試委員
Advisory Committee
口試日期
Date of Exam
2014-01-28
繳交日期
Date of Submission
2014-02-13
關鍵字
Keywords
Kinect、動作模仿、動作記憶、關鍵姿態、人形機器人、動作重現
Action Replay, Key-Pose, Humanoid Robot, Action Emulation, Kinect, Action Memorization
統計
Statistics
本論文已被瀏覽 5743 次,被下載 76 次。
The thesis/dissertation has been viewed 5743 times and downloaded 76 times.
中文摘要
With the rapid advance of technology, the various industries are developing at an astonishing pace, and the allocation of manpower has become correspondingly precious and important. As machines gain ever more functions and can handle increasingly complex tasks, the replacement of manpower by machines is becoming a trend.
The purpose of this thesis is to enable a humanoid robot to memorize different actions through imitation and to replay the memorized actions. The system applies the concept of motion sensing: the Xbox 360 Kinect serves as the motion-sensing controller, and DARwIn-OP serves as the humanoid robot. For action emulation, the system computes joint angles from the human skeleton joint coordinates captured by the Kinect, converts the angles by formula into motor parameters usable by DARwIn-OP, and transmits the motor parameters for each joint to DARwIn-OP over a wireless network; DARwIn-OP's built-in functions then let it imitate the user's actions in real time. For action replay, using the raw data causes the robot to jitter during playback, so we propose a key-pose identification algorithm and a grouping algorithm that reduce the number of data points while minimizing the error relative to the raw data, solving the jitter problem and making the actions smoother. Finally, for action memorization, this thesis proposes a formula for computing action similarity; the computed value lets DARwIn-OP judge whether the same action already exists in its database, reducing the storage space required.
The Kinect application of this system is implemented in the OpenNI and NITE environment. NITE is middleware that provides a skeleton-tracking algorithm to identify the user and draw the skeleton, while OpenNI reads the data captured by the Kinect; the system then processes these data. The thesis is demonstrated in a video posted on YouTube; search for the keywords “Memorization and Replay of Humanoid Robot Actions by Emulation”.
Abstract
As technology advances, the various industries are developing at an amazing rate, and the allocation of manpower has become correspondingly precious and important. Machines have become more and more capable and can handle many complex tasks, so it is a trend for machines to replace manpower.
The purpose of this thesis is to make a humanoid robot memorize different actions by means of emulation and then replay them. The system applies the concept of motion sensing technology: the Xbox 360 Kinect is used as the motion-sensing input device, and DARwIn-OP is used as the humanoid robot. In action emulation, the Kinect captures the coordinates of the human skeleton joints, which are then converted into angles. The angles are transformed by a formula into motor counts that DARwIn-OP can use. The motor counts corresponding to each joint are transferred to DARwIn-OP over a wireless network (Wi-Fi), and a built-in function of DARwIn-OP then enables the humanoid robot to imitate the user's actions in real time. In action replay, a jittering problem occurs when the raw data are replayed directly. To make the actions smoother, we propose a key-pose identification algorithm and a grouping algorithm that reduce the data size while minimizing the error relative to the original data. Finally, in action memorization, we propose a similarity-rate formula; the computed value allows DARwIn-OP to determine whether the same action already exists in its database, so the data storage space can be reduced.
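The angle-to-motor-count conversion described above can be sketched as follows. This is an illustrative sketch, not the thesis's exact formula: it assumes a Dynamixel-style servo with 4096 positions per revolution (as on the MX-28 actuators used in DARwIn-OP) and a zero pose at count 2048; the function name, per-joint offsets, and sign conventions are assumptions, as they are robot- and joint-specific.

```python
def angle_to_motor_count(angle_deg, center=2048, counts_per_rev=4096):
    """Map a joint angle in degrees to a Dynamixel-style motor count.

    Assumes a servo with `counts_per_rev` positions per 360-degree turn
    and a zero pose at `center`. Real per-joint offsets and signs depend
    on the robot's kinematic conventions.
    """
    count = center + round(angle_deg * counts_per_rev / 360.0)
    # Clamp to the servo's valid goal-position range.
    return max(0, min(counts_per_rev - 1, count))
```

For example, a 90-degree joint angle maps to count 3072 (one quarter turn above center) under these assumptions.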
The Kinect application programs of this system run in the OpenNI and NITE environments. NITE is the middleware that provides the skeleton-tracking algorithms to identify the user and draw the skeleton, and OpenNI reads the data captured by the Kinect; the system then processes these data. The system is demonstrated in a video on YouTube; search for “Memorization and Replay of Humanoid Robot Actions by Emulation”.
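As a rough illustration of the key-pose idea described in the abstract, the sketch below keeps a frame only when some joint has moved past a threshold since the last kept pose. This is a simplified stand-in, not the thesis's actual key-pose identification or grouping algorithm (which minimizes error against the original data); the function name and threshold value are invented for illustration.

```python
def select_key_poses(frames, threshold_deg=10.0):
    """Reduce a motion sequence to key poses.

    `frames` is a list of per-frame joint-angle lists. A frame is kept
    only if at least one joint differs from the last kept frame by more
    than `threshold_deg` degrees, thinning the data and suppressing the
    small frame-to-frame changes that cause jitter on replay.
    """
    if not frames:
        return []
    key_poses = [frames[0]]          # always keep the starting pose
    for frame in frames[1:]:
        last = key_poses[-1]
        if max(abs(a - b) for a, b in zip(frame, last)) > threshold_deg:
            key_poses.append(frame)
    return key_poses
```

A larger threshold yields fewer, coarser key poses; the thesis's algorithms instead choose the reduced set so that the error against the raw trajectory is minimized.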
目次 Table of Contents
摘要 .................................................................................................................................. v
Abstract ............................................................................................................................ vi
Table of Contents ........................................................................................................... viii
List of Figures .................................................................................................................. xi
List of Tables ................................................................................................................. xiv
I. INTRODUCTION .................................................................................................... 1
1.1 Motivation .................................................................................................... 1
1.2 Objective....................................................................................................... 2
1.3 Organization of thesis ................................................................................... 3
II. BACKGROUND ...................................................................................................... 4
2.1. Motion Capture ............................................................................................. 4
2.2. OpenNI ....................................................................................................... 10
2.3. Humanoid Robot ........................................................................................ 13
2.4. Communication .......................................................................................... 15
2.5. Hierarchical clustering ................................................................................ 16
2.6. Longest Common Subsequence (LCS) ........................................................ 18
III. SYSTEM ANALYSIS AND DESIGN ........................................................... 20
3.1. System Analysis .......................................................................................... 20
3.1.1. 5W1H ............................................................................................. 20
3.1.2. Functional and Non-Functional Requirements ............................... 21
3.1.3. Use Case ......................................................................................... 23
3.2. System Architecture .................................................................................... 25
3.3. PC Side ....................................................................................................... 27
3.3.1. Skeleton Tracking Algorithm ......................................................... 27
3.3.2. Joint Angle Calculation and Motor Count Transformation ............... 29
3.3.3. Key-Pose Identification Algorithm ................................................. 35
3.3.4. Grouping Algorithm ....................................................................... 39
3.3.5 Communication .............................................................................. 42
3.4. DARwIn-OP Side ....................................................................................... 43
3.4.1. Action Emulation ............................................................................ 43
3.4.2. Action Replay ................................................................................. 45
3.4.3. Action Memorization ...................................................................... 49
IV. IMPLEMENTATION ..................................................................................... 52
4.1. Development Environment ......................................................................... 52
4.2. Implementation of Skeleton Tracking Algorithm ....................................... 54
4.3. Implementation of Joint Angle Calculation and Motor Count Transformation ... 56
4.4. Implementation of Key-Pose Identification Algorithm .............................. 61
4.5. Implementation of Grouping Algorithm ..................................................... 76
4.6. Implementation of Action Emulation, Replay and Memorization ............. 79
V. CONCLUSION AND FUTURE WORK ............................................................... 84
5.1. Conclusion .................................................................................................. 84
5.2. Future Work ................................................................................................ 84
REFERENCE ................................................................................................................. 87
參考文獻 References
[1]. Y. Sakagami, R. Watanabe, C. Aoyama, S. Matsunaga, N. Higaki and K. Fujimura, “The intelligent ASIMO: system overview and integration,” IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 2478-2483, 2002.
[2]. A. V. Gormly, “Lifespan Human Development,” Gau Lih Book Co., 2005.
[3]. 林永祥, “SWOT analysis of the development trends of optical motion capture systems,” Proceedings of the 3rd Conference on Sport Science and Leisure Recreation Management, pp. 467-473, 2010.
[4]. J. Preis, M. Kessel, M. Werner and C. Linnhoff-Popien, “Gait Recognition with Kinect,” Ludwig-Maximilians University.
[5]. C.H. Liu and K.T. Song, “A new approach to map joining for depth-augmented visual SLAM,” 9th Asian Control Conference (ASCC), pp. 1-6, Jun. 23-26, 2013.
[6]. H.B. Suay and S. Chernova, “Humanoid robot control using depth camera,” 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), p. 401, Mar. 8-11, 2011.
[7]. J. Cao. (2014). Kinect for Windows [Online]. Available: http://msdn.microsoft.com/zh-tw/hh367958.aspx
[8]. H. Ku. (2011, Jan 19). OpenNI/Kinect [Online]. Available: http://kheresy.wordpress.com/2011/01/19/openni_1st/
[9]. X.Q. Tang and P. Zhu, “Hierarchical Clustering Problems and Analysis of Fuzzy Proximity Relation on Granular Space,” IEEE Transactions on Fuzzy Systems, pp. 814-824, Oct. 2013.
[10]. (2014, Jan 16). Longest common subsequence problem [Online]. Available: http://en.wikipedia.org/wiki/Longest_common_subsequence_problem
[11]. R.R. Igorevich, E.P. Ismoilovich and D. Min, “Behavioral synchronization of human and humanoid robot,” 8th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), pp. 655-660, Nov. 23-26, 2011.
[12]. I.I. Itauma, H. Kivrak and H. Kose, “Gesture imitation using machine learning techniques,” 20th Signal Processing and Communications Applications Conference (SIU), pp. 1-4, Apr. 18-20, 2012.
[13]. K. Pan, “Implementation of robot arm controller on embedded system,” National Chung Cheng University, 2010.
[14]. V.V. Nguyen and J.H. Lee, “Full-Body Imitation of Human Motions with Kinect and Heterogeneous Kinematic Structure of Humanoid Robot,” IEEE/SICE International Symposium on System Integration (SII), Fukuoka, pp. 93-98, 2012.
[15]. D.Y. Chen, H.Y.M. Liao, H.R. Tyan and C.W. Lin, “Automatic key posture selection for human behavior analysis,” IEEE 7th Workshop on Multimedia Signal Processing, pp. 1-4, Oct. 30, 2005.
[16]. Y. Chen, Q. Wu, X. He, C. Du and J. Yang, “Extracting key postures in a human action video,” IEEE 10th Workshop on Multimedia Signal Processing, pp. 569-573, Oct. 8-10, 2008.
[17]. J. Assa, Y. Caspi and D. Cohen-Or, “Action Synopsis: Pose Selection and Illustration,” ACM Transactions on Graphics (TOG), pp. 667-676, Jul. 2005.
[18]. Y.A. Zheng, “Humanoid Robot Behavior Emulation and Representation,” National Sun Yat-sen University, 2012.
[19]. P.R.S.D. Silva, T. Matsumoto, S.G. Lambacher, A.P. Madurapperuma, S. Herath and M. Higashi, “The Extraction of Symbolic Postures to Transfer Social Cues into Robot,” Intelligent and Biosensors, Vernon S. Somerset (Ed.), pp. 147-163, Jan. 2010.
[20]. S. Lallee, S. Lemaignan, A. Lenz, C. Melhuish, L. Natale, S. Skachek, T. van Der Zant, F. Warneken and P.F. Dominey, “Towards a platform-independent cooperative human-robot interaction system: I. Perception,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4444-4451, Oct. 18-22, 2010.
[21]. S. Lallee, U. Pattacini, J.D. Boucher, S. Lemaignan, A. Lenz, C. Melhuish, L. Natale, S. Skachek, K. Hamann, J. Steinwender, E.A. Sisbot, G. Metta, R. Alami, M. Warnier, J. Guitton, F. Warneken and P.F. Dominey, “Towards a platform-independent cooperative human-robot interaction system: II. Perception, execution and imitation of goal directed actions,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2895-2902, Sep. 25-30, 2011.
[22]. S. Lallee, U. Pattacini, S. Lemaignan, A. Lenz, C. Melhuish, L. Natale, S. Skachek, K. Hamann, J. Steinwender, E.A. Sisbot, G. Metta, R. Alami, M. Warnier, J. Guitton, F. Warneken, P.F. Dominey and T. Pipe, “Towards a platform-independent cooperative human-robot interaction system: III. An Architecture for Learning and Executing Actions and Shared Plans,” IEEE Transactions on Autonomous Mental Development, pp. 239-253, Sep. 10, 2012.
電子全文 Fulltext
This electronic full text is licensed to users only for personal, non-profit searching, reading, and printing for the purpose of academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization, so as to avoid infringement.
論文使用權限 Thesis access permission: 自定論文開放時間 user-defined release date
開放時間 Available:
校內 Campus: 已公開 available
校外 Off-campus: 已公開 available


紙本論文 Printed copies
Public-availability information for printed theses is relatively complete from academic year 102 (2013-2014) onward. To check the availability of printed theses from academic year 101 or earlier, please contact the printed-thesis service counter of the Office of Library and Information Services. We apologize for any inconvenience.
開放時間 Available: 已公開 available
