Master's/Doctoral thesis etd-0901111-161035: detailed record
Title page for etd-0901111-161035
Title: Low-Cost Design of a 3D Stereo Synthesizer Using Depth-Image-Based Rendering (基於深度影像繪圖法之低成本立體合成器硬體設計)
Department:
Year, semester:
Language:
Degree:
Number of pages: 65
Author:
Advisor:
Convenor:
Advisory Committee:
Date of Exam: 2011-07-25
Date of Submission: 2011-09-01
Keywords: depth information, 3D stereoscopic image generation, Depth Image Based Rendering (DIBR)
Statistics: The thesis/dissertation has been browsed 5677 times and downloaded 428 times.
中文摘要 Chinese Abstract
As 3D display technology advances, stereoscopic display applications are becoming increasingly widespread. In this thesis, we propose a low-cost stereoscopic image rendering hardware design based on depth-image-based rendering (DIBR).
Because the virtual-view images rendered by the DIBR algorithm exhibit visible artifacts, previous studies have proposed various methods to remove them, of which pre-smoothing the depth map is currently the most common. However, although pre-processing the source depth map effectively handles the hole (disocclusion) problem of DIBR, it also weakens the stereoscopic effect. To avoid degrading the stereoscopic effect and introducing other artifacts, we abandon pre-processing that merely smooths the depth map and instead enhance the depth values along foreground edges, ensuring that object edges in the virtual images remain intact.
To eliminate artifacts completely, previous hardware designs have mostly been large in area, computationally complex, and power-hungry. We therefore propose a computation-free horizontal background-mirroring method to fill holes so that the synthesized picture looks natural, together with a new algorithm that simplifies the DIBR flow: while shifting the image according to the depth map, it simultaneously detects whether holes occur, counts them, and fills them immediately using the valid pixels around each hole. This largely eliminates the power that standard DIBR spends searching memory for holes in a separate hole-filling pass after warping.
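For context on the warping step mentioned above: in standard DIBR the horizontal pixel shift is derived from the 8-bit depth value through the camera geometry. A common parameterization, given here only as a reference sketch (the thesis may use a different or table-based mapping, cf. the look-up table method in Section 4.2), is
\[
  x_v \;=\; x \mp \frac{f\,t_x}{2Z},
  \qquad
  \frac{1}{Z} \;=\; \frac{v}{255}\left(\frac{1}{Z_\mathrm{near}} - \frac{1}{Z_\mathrm{far}}\right) + \frac{1}{Z_\mathrm{far}},
\]
where x is the source column, x_v the warped column in the virtual view, f the focal length, t_x the baseline between the two virtual cameras, v the 8-bit depth-map value, and Z_near, Z_far the depth clipping planes; the upper and lower signs give the right and left views, respectively.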
Abstract
In this thesis, we propose a low-cost stereoscopic image generation hardware design using the depth-image-based rendering (DIBR) method. Because the DIBR algorithm produces unfavorable artifacts, researchers have developed various algorithms to handle the problem, the most common of which is to smooth the depth map before rendering. However, pre-processing the depth map usually generates other artifacts and can even degrade the perceived 3D quality. To avoid these defects, we present a method that modifies the disparity at object edges so that the edges of foreground objects in the synthesized virtual images look more natural. In contrast to the high computational complexity and power consumption of previous designs, we propose a method that fills holes with the mirrored background pixel values next to them. Furthermore, unlike previous DIBR methods, which usually consist of two phases, image warping and hole filling, this thesis presents a new DIBR algorithm that combines image warping and hole filling in a single phase, greatly reducing total computation time and power consumption. Experimental results show that the proposed design generates more natural virtual images for different view angles with shorter computation latency.
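To make the single-phase idea concrete, the following minimal software sketch folds image warping, hole detection and counting, and horizontal background mirroring into one scanline pass. The function name, the linear depth-to-shift mapping, the shift direction, and the max_shift parameter are all assumptions for illustration; the thesis realizes this flow as dedicated hardware rather than as this code.

import numpy as np

def synthesize_right_view(color, depth, max_shift=16):
    """Warp a color image to a virtual right view using its depth map and fill
    disocclusion holes in the same scanline pass by mirroring the background
    pixels found immediately to the right of each hole run.

    color: (H, W, 3) uint8 image; depth: (H, W) uint8 map, 255 = nearest.
    max_shift and the linear depth-to-shift mapping are illustrative assumptions.
    """
    h, w = depth.shape
    out = np.zeros_like(color)
    zbuf = np.full((h, w), -1, dtype=np.int32)   # depth of the pixel written at each target

    for y in range(h):
        # Image warping: nearer pixels (larger depth value) shift farther left.
        for x in range(w):
            d = int(depth[y, x])
            xt = x - (d * max_shift) // 255      # assumed shift direction for a right view
            if 0 <= xt < w and d > zbuf[y, xt]:  # simple z-test: the nearer pixel wins
                out[y, xt] = color[y, x]
                zbuf[y, xt] = d

        # Hole filling in the same pass: count consecutive uncovered pixels and,
        # at the first valid pixel after the run (the background side), mirror the
        # pixels on its right back into the hole positions.
        holes = 0
        for x in range(w):
            if zbuf[y, x] < 0:
                holes += 1
            elif holes:
                for k in range(1, holes + 1):
                    src = min(x + k - 1, w - 1)
                    if zbuf[y, src] < 0:         # mirrored source is itself a hole:
                        src = x                  # fall back to the boundary pixel
                    out[y, x - k] = out[y, src]
                holes = 0
        # Holes touching the right image border are simply left black in this sketch.
    return out

Calling synthesize_right_view(img, dmap) with an H×W×3 uint8 image and an H×W uint8 depth map (255 = nearest) yields the virtual right view; because holes are filled as soon as each scanline's warped pixels are known, no second pass over memory is needed to locate them.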
目次 Table of Contents
Chapter 1  Introduction
1.1  Motivation
1.2  Thesis Organization
1.3  Contributions
Chapter 2  Background and Related Work
2.1  Human Stereoscopic Perception
2.2  Stereoscopic Displays
2.2.1  Principle of Parallax
2.2.2  Stereoscopic Display Technologies
2.2.3  Multi-view Issues
2.3  Background Knowledge
2.3.1  Depth-Image-Based Rendering
2.3.2  Image Warping
2.3.3  Hole Filling
2.4  Related Work
Chapter 3  Algorithm and Architecture Design
3.1  Architecture Design
3.2  Edge Enhancement
3.3  Horizontal Background Mirroring
3.4  Design of the Combined Image-Warping and Hole-Filling Algorithm
Chapter 4  Hardware Design and Implementation
4.1  Overall Architecture
4.2  Look-up Table Method
4.3  Hardware Implementation of Edge Enhancement
4.4  Hardware Implementation of Horizontal Background Mirroring
Chapter 5  Verification and Analysis of Results
5.1  Algorithm and Hardware Verification
5.2  Synthesis Results
5.3  Results and Comparison
Chapter 6  Conclusions and Future Work
6.1  Conclusions
6.2  Future Work
References
Appendix
電子全文 Fulltext
This electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: user-defined release time
Available:
Campus: available
Off-campus: available


紙本論文 Printed copies
Public-access information for printed copies is relatively complete only from academic year 102 onward. To inquire about printed copies from academic year 101 or earlier, please contact the printed thesis service counter of the Office of Library and Information Services. We apologize for any inconvenience.
Available: available
