Thesis etd-0912112-163212: Detailed Record
Title: 多樣化機器人劇本之轉譯器之開發 (The Development of Screenplay Interpreter for Multi-morphic Robots)
Department:
Year, semester:
Language:
Degree:
Number of pages: 82
Author:
Advisor:
Convenor:
Advisory Committee:
Date of Exam: 2012-08-28
Date of Submission: 2012-09-12
Keywords: XML, Multi-morphic Robots, NAO, Screenplay Interpreter
Statistics
The thesis/dissertation has been browsed 5745 times and downloaded 806 times.
Chinese Abstract
The idea of controlling robots through a robot control platform has been popular for a long time; however, presenting the platform in a friendlier way is necessary if such a product is to enter everyday family life. The robotic theater performance production platform is a robot platform that combines cloud computing, an operation interface, and a theater (screenplay) interpreter. Users can connect to the operation interface from any internet-capable device to edit and design screenplays; once a screenplay is finished, the screenplay interpreter lets the robots perform along with it. During editing, different robot models can be designated to act, and every robot performance delivers a message with a preset meaning. The whole platform is divided into the operation interface and the screenplay interpreter. This thesis provides the screenplay interpreter, implements the integration of the NAO (pronounced "now") humanoid robot into it, and completes the integration of the interpreter with the operation interface, so that users can edit screenplays for NAO and DARwIn-OP (Dynamic Anthropomorphic Robot with Intelligence - Open Platform) through a human-friendly interface, and the two robots then act out the screenplay. The system is demonstrated in a "do-as-I-do" video posted on YouTube: http://www.youtube.com/watch?v=v8ErTOgAQSo.
Abstract
Controlling robots through a robotic platform is a long-standing idea; however, making such a platform friendlier to use is necessary if the product is to reach ordinary households. The screenplay-based performance platform (SBPP) for robotic puppet shows proposed in this thesis is a robotic platform composed of cloud computing, a user interface (UI), and a screenplay interpreter (SI). Users can connect to the UI ubiquitously, from any internet-capable device, to edit and set up a screenplay. After setup, the SI makes the robots perform as designed; during editing, the user can designate different robot models to act, and each robot's performance delivers a message with a preset meaning. The SBPP project is divided into two modules: the UI and the SI for multi-morphic robots. The work of this thesis covers mainly the screenplay interpreter for a variety of robot models, the interface between the interpreter and the humanoid robot NAO (pronounced "now"), and the integration of the UI and SI. The result gives users a friendly UI for editing scenarios for NAO and DARwIn-OP (Dynamic Anthropomorphic Robot with Intelligence - Open Platform), so that both robots can act out a scene as the screenplay describes. The system is demonstrated with a "do-as-I-do" play, recorded in a video on YouTube: http://www.youtube.com/watch?v=v8ErTOgAQSo.
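To make the screenplay/interpreter split concrete, the sketch below shows one way an XML screenplay could be parsed and dispatched to robot-specific executors. It is an illustration only, not the thesis implementation: the tag and attribute names (screenplay, scene, action, robot, type), the behavior names (wave, bow), and the robot address nao.local are hypothetical; ALTextToSpeech.say and ALBehaviorManager.runBehavior are standard NAOqi calls, but the actual screenplay schema and executor design are the ones described in Chapters III and IV.

```python
# Minimal sketch of a screenplay interpreter: parse an XML screenplay and
# dispatch each action to a robot-specific executor. Element and attribute
# names below are hypothetical, not the thesis's actual schema.
import xml.etree.ElementTree as ET

SCREENPLAY = """
<screenplay title="do-as-I-do">
  <scene id="1">
    <action robot="NAO"       type="say"      text="Hello, I am NAO."/>
    <action robot="NAO"       type="behavior" name="wave"/>
    <action robot="DARwIn-OP" type="behavior" name="bow"/>
  </scene>
</screenplay>
"""

class PrintExecutor:
    """Fallback executor: only logs what the robot would do."""
    def __init__(self, robot):
        self.robot = robot
    def run(self, action):
        print("[%s] %s %s" % (self.robot, action.get("type"),
                              action.get("text") or action.get("name")))

class NaoExecutor(PrintExecutor):
    """Executor for NAO; uses the NAOqi SDK when it is available."""
    def __init__(self, ip="nao.local", port=9559):   # hypothetical address
        PrintExecutor.__init__(self, "NAO")
        try:
            from naoqi import ALProxy                # NAOqi Python SDK
            self.tts = ALProxy("ALTextToSpeech", ip, port)
            self.behavior = ALProxy("ALBehaviorManager", ip, port)
        except ImportError:
            self.tts = self.behavior = None          # dry-run mode
    def run(self, action):
        if self.tts is None:
            return PrintExecutor.run(self, action)
        if action.get("type") == "say":
            self.tts.say(action.get("text"))
        elif action.get("type") == "behavior":
            self.behavior.runBehavior(action.get("name"))

def interpret(xml_text, executors):
    """Parser half of the SI: walk the screenplay and hand each action
    to the executor registered for its robot."""
    root = ET.fromstring(xml_text)
    for scene in root.findall("scene"):
        for action in scene.findall("action"):
            robot = action.get("robot")
            executors.get(robot, PrintExecutor(robot)).run(action)

if __name__ == "__main__":
    interpret(SCREENPLAY, {"NAO": NaoExecutor()})
```

Without the naoqi SDK installed, the script simply prints the dispatched actions, which keeps the parser half testable apart from any physical robot.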
Table of Contents
I. Motivation and Plan 1
1.1 Motivation 1
1.2 Objective 2
1.3 Organization of thesis 2
II. Background 4
2.1 Common defects in general robotic platforms 4
2.2 Robot 4
2.2.1 Comparison of robots 5
2.2.2 NAO robot 6
2.2.3 NAO web page 7
2.2.4 Comparison of module design 8
2.2.5 NAOqi 10
2.2.6 Module 14
2.3 OpenCV and face detection 17
III. System Analysis and Design 18
3.1 System architecture 18
3.2 System analysis 19
3.2.1 5W1H 19
3.2.2 Problem statement and requirements elicitation 21
3.2.3 Pose, motion and behavior 22
3.2.4 Considerations for importing a robot 23
3.2.5 Functional and Non-functional requirements 24
3.2.6 Use case analysis 26
3.3 System Design 29
3.3.1 The composition of screenplay interpreter 29
3.3.2 Screenplay 30
3.3.3 Parser 33
3.3.4 Executor 36
3.4 NAO 40
3.4.1 Module design of NAO 40
3.4.2 Speed intensity of module execution 41
3.4.3 NAO_executor 42
3.5 The integration of screenplay interpreter and UI 42
3.6 Comparison of a typical robotic platform and SBPP 43
3.7 Event-driven control 45
IV. Implementation 48
4.1 Implementation of screenplay interpreter 48
4.1.1 Implementation of parser 48
4.1.2 Implementation of NAO_executor 50
4.1.3 The execution time in SI 51
4.2 Implementation of robot’s motion and behavior 51
4.2.1 NAO OS 51
4.2.2 Setting in NAO web page 52
4.2.3 SDK and IP assignment 53
4.2.4 Motion module design 57
4.2.5 Implementation of behavior module 58
4.2.6 Implementation of NAO speech 59
4.2.7 Implementation of combining SI and NAO 59
4.3 Implementation of integrating the SI and UI 60
4.4 Implementation of event-driven control 60
4.4.1 OpenCV and face detection 61
4.4.2 NAO and face detection 62
V. Achievement and future work 64
5.1 Achievement 64
5.2 Improvement 64
5.3 Future work 65
References 67
References
[1] T.-L. Kuo, "Intelligent Robotic Puppet Developments," National Chung Cheng University, 2010.
[2] C.-C. An, "The Authoring Tool and Performance Platform for Robotic Puppet Show," National Chung Cheng University, 2011.
[3] J.-C. Huang, "The design and realization of interactive biped robots (I): gesture recognition," National Central University, 2008.
[4] J.-S. Li, "An Authoring Platform for Screenplay of Robotic Puppet Shows," National Chung Cheng University, 2012.
[5] S. Shamsuddin, L.-I. Ismail, H. Yussof, Z.-N. Ismarrubie, S. Bahari, H. Hashim and A. Jaffar, "Humanoid robot NAO: Review of control and motion exploration," IEEE International Conference on Control System, Computing and Engineering (ICCSCE), pp. 511-516, Nov. 25-27, 2011.
[6] S. Shamsuddin, H. Yussof, L. Ismail, F.-A. Hanapiah, S. Mohamed, H.-A. Piah and N.-I. Zahari, "Initial response of autistic children in human-robot interaction therapy with humanoid robot NAO," International Colloquium on Signal Processing and its Applications (CSPA), pp. 188-193, Mar. 23-25, 2012.
[7] P. Smolar, J. Tuharsky, Z. Fedor, M. Vircikova and P. Sincak, "Development of cognitive capabilities for robot Nao in Center for Intelligent Technologies in Kosice," International Conference on Cognitive Infocommunications (CogInfoCom), pp. 1-5, July 7-9, 2011.
[8] M. Akhtaruzzaman and A.-A. Shafie, "Evolution of Humanoid Robot and contribution of various countries in advancing the research and development of the platform," Conference on Control Automation and Systems (ICCAS), pp. 2021-1028, Oct. 27-30, 2010.
[9] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 8-14, 2001.
[10] C.-Y. Lin, "Face Detection using Haar-like Features and the AdaBoost Algorithm in a Cascade of Classifiers," National University of Tainan, 2008.
[11] P. Viola and M. Jones, "Robust real-time face detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2004.
[12] P. Viola and M. Jones, "Robust Real-time Object Detection," Cambridge Research Laboratory, Technical Report Series, 2001.
[13] J.-Y. Wu, "The design and realization of interactive biped robots (II): basic motions' control of biped robots," National Central University, 2008.
[14] E. T. Ray, Learning XML, 1st ed., Beijing: O'Reilly, 2001.
[15] 鳥哥 (VBird), 鳥哥的Linux私房菜:基礎學習篇 (VBird's Linux notes: basic learning), 1st ed., Taipei: 碁峰資訊, 2010.
[16] C. Ramsley, M. Fugere, R. Pawson, C. Rich and D. O'Donnell, "A Simple Intensity-Based Drama Manager," Proceedings of the Third Joint Conference on Interactive Digital Storytelling (ICIDS '10), pp. 204-209, 2010.
[17] R. Volpe, I.-A.-D. Nesnas, T. Estlin, D. Mutz, R. Petras and H. Das, "CLARAty: Coupled Layer Architecture for Robotic Autonomy," JPL Technical Report D-19975, Dec. 2000.
[18] R. Volpe, I.-A.-D. Nesnas, T. Estlin, D. Mutz, R. Petras and H. Das, "The CLARAty Architecture for Robotic Autonomy," Proceedings of the 2001 IEEE Aerospace Conference, Big Sky, Montana, Mar. 10-17, 2001.
[19] I.-A.-D. Nesnas, R. Volpe, T. Estlin, H. Das, R. Petras and D. Mutz, "Toward Developing Reusable Software Components for Robotic Applications," Proceedings of the International Conference on Intelligent Robots and Systems (IROS), Maui, Hawaii, Oct. 29 - Nov. 3, 2001.
[20] T. Estlin, R. Volpe, I.-A.-D. Nesnas, D. Mutz, F. Fisher, B. Engelhardt and S. Chien, "Decision-Making in a Robotic Architecture for Autonomy," Proceedings of the 6th International Symposium on Artificial Intelligence, Robotics, and Automation in Space (i-SAIRAS), Montreal, Canada, June 18-21, 2001.
[21] F. Yamaoka, T. Kanda, H. Ishiguro and N. Hagita, "Interacting with a human or a humanoid robot?," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2007), pp. 2685-2691, Oct. 29 - Nov. 2, 2007.
[22] K. Sangseung, K. Jaehong and S. Joochan, "A robot platform for children and its action control," International Conference on Control, Automation and Systems (ICCAS 2008), pp. 2095-2098, Oct. 14-17, 2008.
[23] L. Ismail, "Face detection technique of Humanoid Robot NAO for application in robotic assistive therapy," Conference on Control System, Computing and Engineering (ICCSCE), pp. 517-521, Nov. 25-27, 2011.
Fulltext
This electronic fulltext is licensed only for personal, non-profit searching, reading, and printing for the purpose of academic research. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: unrestricted (fully open on and off campus)
Available:
Campus: available
Off-campus: available


Printed copies
Availability information for printed copies is relatively complete only for academic year 102 (2013-14) and later. To inquire about printed copies from academic year 101 (2012-13) or earlier, please contact the printed thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Availability: available
