Master's/Doctoral Thesis etd-1124114-165835: Detailed Record
Title page for etd-1124114-165835
Title
Automatic Derivation of Motion Platform Control Parameters from Video
Department
Year, semester
Language
Degree
Number of pages
38
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2015-01-06
Date of Submission
2015-01-09
Keywords
motion platform, virtual reality, image sequence, video segmentation, motion platform control parameters
Statistics
The thesis/dissertation has been browsed 5,640 times and downloaded 54 times.
Chinese Abstract
Virtual reality systems are widely used in entertainment. By combining seamless video playback with the motion mechanism of a motion platform, users can enjoy a more immersive experience. A video is organized hierarchically into frames, shots, clusters, and scenes. During shooting, the cameraman typically relies on personal experience to vary the camera angle, lens, and zoom factor, capturing footage from different viewpoints. To screen such a video in a theater equipped with a motion platform, however, a technician must currently inspect the footage frame by frame, infer the original camera angles and trajectories, and manually tune the platform's motion accordingly. This reverse-engineering process is time-consuming, labor-intensive, and expensive.
In view of these shortcomings, this study applies image processing techniques to analyze the footage automatically and derive motion platform parameters from frames captured under continuous camera movement. Shot-change detection first segments the video into scenes, clusters, and shots. Once the video is partitioned into homogeneous shots, the method derives the camera shooting parameters within each shot, as well as the motion trajectory and speed between shots. From these it generates the translational control parameters of the platform, namely heave (up-down), sway (left-right), and surge (forward-backward), together with the rotational parameters pitch, roll, and yaw. Experimental results show that when a video is shot with a single type of camera motion, the proposed method analyzes the shooting motion parameters quite accurately. The derived parameters are finally passed to the motion platform control API of Brogent Technologies, Inc. to drive the corresponding platform movement. The goal is to replace most of the manual tuning with fast, accurate, automatically generated motion parameters, lower costs, complete the technology-transfer goal of an integrated development environment for theatrical motion platforms, and substantially strengthen Brogent Technologies' international competitiveness.
Abstract
Virtual reality, coupled with a motion platform, provides an immersive experience to users. The video played has to be synchronized seamlessly with the movement of the motion platform. A video is composed of a hierarchy of frames, shots, clusters, and scenes. During the shooting of a video, the cameraman manipulates the camera through different angles, lenses, and zoom coefficients based on personal experience. Conversely, given the video, a technician is required to observe the scene changes visually and manually adjust the movement of the motion platform to match the video scenario. This reverse-engineering process is both labor-intensive and time-consuming.
In light of the aforementioned pitfalls, this study employs image processing techniques to automatically derive motion platform control parameters from video. Through the segmentation of the video into scenes, clusters, shots, and frames, camera manipulation, e.g., zoom, pan, tilt, and displacement, is derived on a frame-by-frame basis. These parameters are then fed into the existing Brogent Technologies, Inc. API to control the corresponding movement of the motion platform, including heave, sway, surge, pitch, roll, and yaw. The automatic derivation of motion platform control parameters from video can expedite the control of motion platforms in a fast-changing environment and lower operating costs. A highly efficient, automatic theatrical motion platform system can thus be constructed, increasing the competitiveness of Brogent Technologies, Inc. in the field of high-end virtual reality.
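The segmentation described above starts from shot-change detection, and Section 3.1.1 of the thesis lists histogram intersection [19] as one cue. A minimal sketch, assuming normalized per-frame color histograms are already available; the function names and threshold value are illustrative assumptions, not the thesis's tuned settings:

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two L1-normalized color histograms
    (Swain & Ballard). Returns a value in [0, 1]; 1 means
    identical color distributions."""
    return float(np.minimum(h1, h2).sum())

def detect_shot_boundaries(histograms, threshold=0.5):
    """Mark a shot boundary at every frame index whose histogram
    is less similar than `threshold` to the previous frame's."""
    boundaries = []
    for i in range(1, len(histograms)):
        if histogram_intersection(histograms[i - 1], histograms[i]) < threshold:
            boundaries.append(i)
    return boundaries
```

Histogram intersection is attractive here because it is cheap per frame and insensitive to small object motion within a shot, though a single global threshold can miss gradual transitions, which is why the thesis pairs it with a graph partition model (Section 3.1.2).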
Table of Contents
Thesis Approval Form ...................................... i
Chinese Abstract ......................................... ii
English Abstract ........................................ iii
List of Figures ........................................... v
Chapter 1 Introduction .................................... 1
1.1 Background and Objectives ............................. 1
1.2 Camera Movement Techniques in Filming ................. 1
1.3 The Brogent Technologies Motion Platform .............. 2
1.4 Research Overview ..................................... 4
Chapter 2 Related Work .................................... 5
2.1 Scene Segmentation .................................... 5
2.2 Analysis of Shooting Parameters and Trajectories ..... 10
Chapter 3 Methodology .................................... 15
3.1 Scene Segmentation ................................... 15
3.1.1 Histogram Intersection ............................. 15
3.1.2 Graph Partition Model .............................. 16
3.2 Motion Parameter Estimation .......................... 18
3.2.1 Pseudo-Inverse Matrix .............................. 20
3.3 Derivation of Motion Platform Parameters ............. 21
Chapter 4 Experimental Results ........................... 22
Chapter 5 Conclusions and Future Work .................... 29
References ............................................... 30
References
[1] G. Saposnik, M. Mamdani, M. Bayley, K. E. Thorpe, J. Hall, L. G. Cohen, and R.
Teasell, "Effectiveness of Virtual Reality Exercises in STroke Rehabilitation
(EVREST): rationale, design, and protocol of a pilot randomized clinical trial
assessing the Wii gaming system," Int. J. Stroke, vol. 5, no. 1, pp. 47-51, 2010.
[2] T.-S. Lan, M.-G. Her, and K.-S. Hsu, "Virtual Reality Application for Direct Drive Robot with Force Feedback," Int. J. Adv. Manuf. Technol., vol. 21, no. 1, pp. 66-71, 2003.
[3] M.-G. Her, K.-S. Hsu, and W.-S. Yu, "Analysis and Design of a Haptic Control System: Virtual Reality Approach," Int. J. Adv. Manuf. Technol., vol. 19, pp. 743-751, 2002.
[4] M. Nierenberger, M. Poncelet, S. Pattofatto, A. Hamouche, B. Raka, and J. M. Virely, "Multiaxial Testing of Materials Using a Stewart Platform: Case Study of the Nooru-Mohamed Test," Exp. Tech., 2012.
[5] Brogent Technologies, Stewart Platform Specification. Available: http://www.brogent.com/tw
[6] V. B. Thakar, C. B. Desai, and S. K. Hadia, "Video Retrieval: An Adaptive Novel Feature Based Approach for Movies," IJIGSP, vol. 5, no. 3, pp. 26-35, 2013.
[7] Z.-N. Li, X. Zhong, and M. S. Drew, “Spatial-temporal joint probability images for
video segmentation,” Pattern Recognit., vol. 35, no. 9, pp. 1847-1867, 2002.
[8] S.-J. Kang, S.-I. Cho, S. Yoo, and Y.-H. Kim, "Scene Change Detection Using Multiple Histograms for Motion-Compensated Frame Rate Up-Conversion," J. Display Technol., vol. 8, no. 3, pp. 121-126, 2012.
[9] D. Lelescu and D. Schonfeld, "Statistical sequential analysis for real-time video
scene change detection on compressed multimedia bitstream," IEEE Trans.
Multimedia, vol. 5, no. 1, pp. 106-117, 2003.
[10] X. Bai and G. Sapiro, "Geodesic matting: A framework for fast interactive image
and video segmentation and matting," Int. J. Comput. Vis., vol. 82, no. 2, pp.
113-132, 2009.
[11] H. C. Liao, M. H. Pan, H. W. Hwang, M. C. Chang, and P. C. Chen, "An
Automatic Calibration Method Based On Feature Point Matching For The
Cooperation Of Wide-Angle And Pan-Tilt-Zoom Cameras," Inf. Technol. Control,
vol. 40, no. 1, pp. 41-47, 2011.
[12] F. Dufaux and J. Konrad, “Efficient, robust and fast global motion estimation for
video coding,” IEEE Trans. Image Process., vol. 9, no. 3, pp. 497-501, Mar. 2000.
[13] P. Zheng, J. B. Lu, and D.-R. Zhou, "Efficient Algorithm of Acquiring Shot
Motion Features," Wuhan University Journal (Natural Science Edition), vol.3, no.
016, 2004.
[14] N. Kondo, U. Ahmad, M. Monta, and H. Murase, "Machine vision based quality evaluation of Iyokan orange fruit using neural networks," Comput. Electron. Agr., vol. 29, pp. 135-147, 2000.
[15] H. C. Shih, "Multi-level video segmentation using visual semantic units," in Proc.
IEEE ICCE, Jan. 2013.
[16] J. H. Yuan, H. Y. Wang, L. Xiao, W. J. Zheng, J. M. Li, F. Z. Lin, and B. Zhang, "A Formal Study of Shot Boundary Detection," IEEE Trans. Circuits Syst. Video Technol., vol. 17, pp. 168-186, Feb. 2007.
[17] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," IJCV, vol. 60, no. 2, pp. 91-110, 2004.
[18] 高志邦, "Video Scene Change Detection Based on Support Vector Classification," Master's thesis, Department of Computer Science and Engineering, National Sun Yat-sen University, 2005.
[19] M. Swain and D. Ballard, "Color indexing," IJCV, vol. 7, no. 1, pp. 11-32, 1991.
Fulltext
This electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: user-defined release time
Available:
Campus: available
Off-campus: available


Printed copies
Public-access information for printed theses is relatively complete from academic year 102 onward. To inquire about the public-access status of printed theses from academic year 101 or earlier, please contact the printed thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Available: available
