Title page for etd-0611116-213632 (electronic thesis/dissertation detailed record)
論文名稱 Title
高、低解析度手機雙鏡頭之影像融合
Image fusion of smart phone equipped with high- and low-resolution dual cameras
系所名稱 Department
畢業學年期 Year, semester
語文別 Language
學位類別 Degree
頁數 Number of pages
54
研究生 Author
指導教授 Advisor
召集委員 Convenor
口試委員 Advisory Committee
口試日期 Date of Exam
2016-07-12
繳交日期 Date of Submission
2016-07-12
關鍵字 Keywords
multiscale technique, dual lens, image fusion, fusion rule
統計 Statistics
The thesis/dissertation has been browsed 5,636 times and downloaded 15 times.
中文摘要 Chinese Abstract
Digital imaging resolution has grown from roughly 300K pixels in early devices to a common 18M pixels today. Constrained by the thin, lightweight form factor of smartphones, large image sensors cannot be mounted; the small sensing area limits the amount of incident light. Raising the resolution on the same sensing area captures more detail, but shrinks the light-collecting area of each pixel, amplifying noise and reducing both light intake and sensitivity. The result is imagery that is too dark, blurred, and low in contrast, unable to resolve subtle brightness variations, with poor overall quality. Lengthening the shutter time or raising the ISO to admit more light introduces motion blur and heavy noise, further degrading the image. Improving image quality with a single-lens configuration has therefore become difficult. Dual-lens smartphones break the old single-lens constraint: two lenses with different spatial resolutions, apertures, or focal lengths can automatically be paired with suitable shutter and parameter settings for each scene, yielding two images captured at the same moment but with different spatial resolutions, apertures, or focal lengths, each retaining its own advantages. Image fusion then integrates the strengths of both into a single image.
This study uses high- and low-resolution dual lenses combined with wavelet-based image fusion to improve imaging quality under all lighting conditions. Images are first captured through the high- and low-resolution lenses; fusing a high- and a low-resolution image of the same instant and scene exploits the characteristics of both lenses. The source images are transformed by the wavelet transform from the spatial domain into the frequency domain and decomposed into sub-bands, with fusion rules designed per band. In the low-frequency band, source-image quality is assessed by combining over-/under-exposure analysis, detail texture, noise level, and the just noticeable difference; in the high-frequency bands, the regional-maximum rule preserves image variations. With per-band fusion rules and estimated fusion weights, the advantageous information of each band is merged: the low noise and low motion blur of the low-resolution lens are injected into the high-resolution image while its detail is preserved, and in low light the low-resolution lens captures the faint light in the scene to supply the brightness information the high-resolution image loses, recovering dark-region detail. Finally, the result is inverse-transformed back to image data, completing a reliable and fast fusion. By integrating redundant information, image fusion improves accuracy, reliability, and imaging quality, producing a composite image with richer, more correct information. The technique maximizes the strengths of both lenses and compensates for their individual deficiencies in a single image: under normal lighting it retains high-resolution detail, reduces the motion blur and noise caused by longer shutter times and higher ISO, and brightens dark regions; under low lighting it effectively raises the brightness lost to insufficient light and avoids under-exposure and blur. Experimental results show that, compared with Debevec's HDR method and Zhang's image fusion method, the proposed approach more effectively increases image brightness, captures dynamic scenes, preserves image detail, and improves imaging quality.
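The pipeline described above (wavelet decomposition, a quality-weighted rule for the low-frequency band, a maximum-magnitude rule for the high-frequency bands, then inverse transform) can be sketched roughly as follows. This is a minimal illustration, not the thesis's implementation: a one-level Haar transform stands in for the wavelet used, and the fixed weight `w_hi` stands in for the exposure/texture/noise/JND-derived weight.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar decomposition: returns (LL, LH, HL, HH) sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 4.0,   # LL: local average (low frequency)
            (a + b - c - d) / 4.0,   # LH: horizontal detail
            (a - b + c - d) / 4.0,   # HL: vertical detail
            (a - b - c + d) / 4.0)   # HH: diagonal detail

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d: rebuild the image from the four sub-bands."""
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse(img_hi, img_lo, w_hi=0.5):
    """Fuse two registered grayscale images of equal, even size.
    Low band: weighted average (w_hi is a placeholder for the
    quality-based weight).  High bands: keep, per coefficient,
    whichever source has the larger magnitude, preserving detail."""
    ch, cl = haar2d(img_hi), haar2d(img_lo)
    ll = w_hi * ch[0] + (1.0 - w_hi) * cl[0]
    high = [np.where(np.abs(h) >= np.abs(l), h, l)
            for h, l in zip(ch[1:], cl[1:])]
    return ihaar2d(ll, *high)
```

Because the Haar pair is perfectly invertible, fusing an image with itself reproduces the input exactly; with distinct inputs, the low band blends brightness while the max rule keeps the sharper edges of either source.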
Abstract
The resolution of digital imaging has increased from 300K pixels in early devices to 18M pixels today. Limited by the thin, lightweight casing of smartphones, only small image sensors can be mounted, so the amount of light collected on the sensing area is fairly restricted. Increasing the resolution acquires more image detail, but the smaller per-pixel sensing area amplifies noise and reduces both the total incident light and the sensitivity. All of this leads to dark, blurred, low-contrast images. Attempts to increase the exposure time or ISO inadvertently cause motion blur or a large amount of noise, severely degrading image quality. In light of these limitations of the single-lens setup, a smartphone equipped with dual lenses is a promising alternative. With two lenses of different spatial resolution, aperture, or focal length, each lens can be operated independently with its own parameter set to acquire two images taken at the same time, each with its corresponding prominent features. These prominent features can then be adaptively integrated through image fusion to form one image of better quality.
This project employs a multi-resolution technique to fuse two images captured through low- and high-resolution dual lenses, rendering a more consistent image across lighting environments. By integrating the low noise level and low motion blur of the low-resolution image with the good image detail of its high-resolution counterpart, image fusion preserves the prominent features of both images and compensates for the inherent weakness of each type of lens. The image detail acquired through the high-resolution lens and the luminance information captured by the low-resolution one are combined to produce a more desirable composite image, even in a low-lighting scene. The acquired images are transformed to the frequency domain by a multiscale transformation, and different fusion rules are applied per frequency band to adaptively combine the coefficients. The low-frequency regions are evaluated for overall luminance to detect under- or over-exposure, along with texture, noise, and the just noticeable difference (JND); the fusion weights are derived from these measures. In the high-frequency regions, the regional maximum is selected to preserve subtle variations. Based on the fusion rules and weights, the features prominent in each frequency band are combined and then inverse-transformed back to the spatial domain. Through this integration of redundant information, image accuracy, consistency, and quality can be substantially improved: the drawbacks inherent in each type of lens are compensated by adopting the prominent features of the counterpart, maintaining image detail and light sensitivity while decreasing motion blur and noise, especially in low-lighting environments, and enhancing the detail and contrast of dark regions within an image.
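The low-frequency rule above weighs each source by several quality measures. As a hedged illustration of just the exposure part, the sketch below uses a Gaussian "well-exposedness" score around mid-gray, in the spirit of Mertens et al.'s exposure fusion [12]; the function names and the sigma value are illustrative assumptions, not the thesis's exact formulation, which also folds in texture, noise, and JND terms.

```python
import numpy as np

def well_exposedness(band, sigma=0.2):
    """Gaussian closeness to mid-gray for intensities in [0, 1]:
    pixels near 0 (under-exposed) or 1 (over-exposed) score low."""
    return np.exp(-((band - 0.5) ** 2) / (2.0 * sigma ** 2))

def low_band_weights(band_hi, band_lo):
    """Per-pixel fusion weights for the two low-frequency bands,
    normalized so the two weights sum to one at every pixel."""
    w_hi = well_exposedness(band_hi)
    w_lo = well_exposedness(band_lo)
    total = w_hi + w_lo + 1e-12   # guard against division by zero
    return w_hi / total, w_lo / total
```

In a dark scene the better-exposed low band of the low-resolution lens receives the larger weight at each pixel, which is the compensation behaviour the abstract describes.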
Keywords: dual lens, image fusion, fusion rule, multiscale technique
目次 Table of Contents
Thesis certification …………………………………………………… i
Thesis authorization for public release …………………………… ii
Acknowledgements ……………………………………………………… iii
Chinese abstract ……………………………………………………… iv
English abstract ……………………………………………………… v
Table of contents …………………………………………………… vi
List of figures ……………………………………………………… viii
List of tables ………………………………………………………… xi
Chapter 1  Introduction ……………………………………………… 1
 1.1 Background ……………………………………………………… 1
 1.2 Objectives ……………………………………………………… 4
 1.3 Research overview ……………………………………………… 5
Chapter 2  Related work ……………………………………………… 7
 2.1 Image fusion …………………………………………………… 7
  2.1.1 Existing applications of image fusion ………………… 7
  2.1.2 Spatial-domain and frequency-domain fusion methods … 9
  2.1.3 Wavelet transform methods ………………………………… 10
 2.2 Dual-lens cameras ……………………………………………… 12
  2.2.1 Limitations of single-lens cameras ……………………… 12
  2.2.2 Evolution of dual-lens phone cameras …………………… 13
Chapter 3  Methodology ……………………………………………… 20
 3.1 Image acquisition ……………………………………………… 20
 3.2 Image rectification …………………………………………… 21
 3.3 Image fusion …………………………………………………… 23
  3.3.1 Fusion rules ………………………………………………… 24
  3.3.2 Fusion weights ……………………………………………… 25
Chapter 4  Experimental results …………………………………… 28
 4.1 Evaluation methods …………………………………………… 28
 4.2 Low-light images ……………………………………………… 29
  4.2.1 Outdoor low-light images ………………………………… 29
  4.2.2 Indoor low-light images …………………………………… 31
 4.3 Dynamic images ………………………………………………… 33
 4.4 Image quality …………………………………………………… 35
Chapter 5  Conclusion and future work …………………………… 38
References ……………………………………………………………… 39
參考文獻 References
[1] Albert Theuwissen, "CMOS image sensors: State-of-the-art and future perspectives," Proc. 37th European Solid-State Device Research Conference (ESSDERC 2007), pp. 21-27, 2007.
[2] B. S. Carlson, "Comparison of modern CCD and CMOS image sensor technologies and systems for low resolution imaging," Proc. IEEE Sensors, vol. 1, pp. 171-176, 2002.
[3] H. Abe, "Device technologies for high quality and smaller pixel in CCD and CMOS image sensors," International Electron Devices Meeting, 2004. Technical Digest, Dec. 2004.
[4] H. Rhodes, et al., "The Mass Production of BSI CMOS Image Sensors," International Image Sensor Workshop, pp. 27-32, 2009.
[5] Orazio Gallo, Alejandro Troccoli, Jun Hu, Kari Pulli, Jan Kautz, "Locally non-rigid registration for mobile HDR photography," 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 48-55, 2015.
[6] Gal Shabtay, Ephraim Goldenberg, “Thin dual-aperture zoom digital camera,” WIPO Patent WO2014083489 A1, April 23, 2015.
[7] S. Zhang, S. Yau, "Absolute phase-assisted three-dimensional data registration for a dual-camera structured light system," J. Applied Optics., 47, pp. 3134-3142, 2008.
[8] N. Blanc, T. Oggier, G. Gruener, J. Weingarten, A. Codourey and P. Seitz, "Miniaturized smart cameras for 3D-imaging in real-time," Proc. IEEE Sensors 2004, pp. 471-474, 2004.
[9] Hajime Nagahara, Akira Hoshikawa, Tomohiro Shigemoto, Yoshio Iwai, Masahiko Yachida, Hiroyuki Tanaka, "Dual-sensor camera for acquiring image sequences with different spatio-temporal resolution," AVSS, pp. 450-455, 2005.
[10] P.J. Burt, R.J. Kolczynski, "Enhanced image capture through fusion,” Proceedings of the Fourth International Conference on Computer Vision, pp. 173-182, 1993.
[11] A. R. Várkonyi-Kóczy, A. Rövid, T. Hashimoto, "Gradient-Based Synthesized Multiple Exposure Time Color HDR Image," IEEE Trans. Instrum. Meas., vol. 57, no. 8, pp. 1779-1785, 2008.
[12] T. Mertens, J. Kautz, F. Van Reeth, "Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography," Computer Graphics Forum, vol. 28, no. 1, pp. 161-171, 2009.
[13] Wei Zhang, Wai-Kuen Cham, “Gradient-directed composition of multi-exposure images,” Proc. IEEE CVPR, pp. 530-536, 2010.
[14] Seyed Abolfazl Valizadeh, Hassan Ghassemian, "Remote sensing image fusion using combining IHS and Curvelet transform," Telecommunications (IST), 2012 Sixth International Symposium on, pp. 1181-1189, 2012.
[15] Ujwala Patil, Uma Mudengudi, "Image fusion using hierarchical PCA," Image Information Processing (ICIIP), 2011 International Conference on, pp. 1-9, 2011.
[16] Mohamed R. Metwalli, Ayman H. Nasr, Osama S. Farag Allah, S. El-Rabaie, "Image fusion based on principal component analysis and high-pass filter," Computer Engineering & Systems, 2009. ICCES 2009. International Conference on, pp. 63-70, 2009.
[17] Jianbing Shen, Ying Zhao, Shuicheng Yan, Xuelong Li, "Exposure Fusion Using Boosting Laplacian Pyramid," IEEE Transactions on Image Processing, pp. 1579-1590, 2014.
[18] C.Y. Wen, J.K. Chen, "Multi-resolution image fusion technique and its application to forensic science," Forensic Science International, pp. 217-232, 2004.
[19] Minsu Choi, Jinsang Kim, Won-Kyung Cho, Yunmo Chung, "Low complexity image rectification for multi-view video coding," 2012 IEEE International Symposium on Circuits and Systems, pp. 381-384, 2012.
[20] Nikolaos Stamatopoulos, Basilis Gatos, Ioannis Pratikakis, Stavros J. Perantonis, "Goal-Oriented Rectification of Camera-Based Document Images," IEEE Transactions on Image Processing, pp. 910-920, 2011.
[21] M. A. Berbar, Menoufia Univ Egypt, S. F. Gaber, N. A. Ismail, "Image fusion using multi-decomposition levels of discrete wavelet transform," Visual Information Engineering, 2003. VIE 2003. International Conference on, pp. 294-297, 2003.
[22] P. Romaniak, L. Janowski, M. Leszczuk, Z. Papir, "A no reference metric for the quality assessment of videos affected by exposure distortion," 2011 IEEE International Conference on Multimedia and Expo, pp. 1-6, 2011.
[23] X. Yang, W. Lin, Z. Lu, E. P. Ong and S. Yao, "Motion-compensated residue pre-processing in video coding based on just-noticeable-distortion profile," IEEE Trans. Circuits Syst. Video Technol., vol. 15, no. 6, pp. 742-752, 2005.
[24] A. Liu, W. Lin, M. Paul, C. Deng and F. Zhang, "Just noticeable difference for images with decomposition model for separating edge and textured regions," IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 11, pp. 1648-1652, 2010.
[25] R. J. Safrenek, J. D. Johnson, "A perceptually tuned sub-band image coder with image dependent quantization and postquantization data compression," Proc. IEEE Int. Conf. Acoustics Speech and Signal Processing., pp. 1945-1948, 1989.
[26] Paul E. Debevec, Jitendra Malik, "Recovering High Dynamic Range Radiance Maps from Photographs," Proc. SIGGRAPH '97, pp. 369-378, 1997.
[27] Bin Zhang, "Study on image fusion based on different fusion rules of wavelet transform," 2010 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE), vol. 3, pp. 649-663, 2010.
電子全文 Fulltext
The electronic full text is licensed only for individual, non-profit retrieval, reading, and printing for academic research. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
論文使用權限 Thesis access permission: user-defined release date (自定論文開放時間)
開放時間 Available:
校內 Campus: 已公開 available
校外 Off-campus: 已公開 available


紙本論文 Printed copies
Availability information for printed copies is relatively complete for academic year 102 and later. To inquire about the availability of printed theses from academic year 101 or earlier, please contact the printed thesis service counter of the Office of Library and Information Services. We apologize for any inconvenience.
開放時間 Available: 已公開 available
