博碩士論文 etd-0811117-180509 詳細資訊
Title page for etd-0811117-180509
論文名稱
Title
高效率之無參考視訊容誤測試方法
An Efficient No-Reference Error-Tolerability Test Method for Videos
系所名稱
Department
畢業學年期
Year, semester
語文別
Language
學位類別
Degree
頁數
Number of pages
74
研究生
Author
指導教授
Advisor
召集委員
Convenor
口試委員
Advisory Committee
口試日期
Date of Exam
2017-07-26
繳交日期
Date of Submission
2017-09-11
關鍵字
Keywords
錯誤視訊、容誤、視訊品質評估、視訊處理電路、無參考品質評估方法
video processing circuits, video quality evaluation, no-reference quality assessment, erroneous video, error-tolerance
統計
Statistics
本論文已被瀏覽 5665 次，被下載 11 次。
The thesis/dissertation has been browsed 5665 times, has been downloaded 11 times.
中文摘要 Chinese Abstract
As technology advances, the concepts of the Internet of Things and home security have become increasingly popular, and people place growing importance on the level of security that surveillance systems can provide. The video-processing circuits inside a surveillance system may develop internal errors due to circuit aging or process defects, making target objects in the surveillance video hard to identify and thereby creating security loopholes. Moreover, a surveillance system usually contains many video sources; if maintenance staff had to be dispatched whenever any single video showed a problem, the maintenance cost could be enormous. Fortunately, errors in video-processing circuits do not necessarily make the surveillance system fail: if an error is not severe, the target objects in the surveillance video are still very likely to be identifiable, and the error can be tolerated. How to evaluate the reliability of video signals is therefore a pressing problem; solving it helps extend the service life of video-processing circuits, and thereby of the surveillance system, while reducing maintenance costs.
In this thesis, we propose a test method that evaluates the error-tolerability of videos in a no-reference manner. In most video applications, especially surveillance videos or videos captured by sensors, no original correct video is available for reference, which makes testing error-tolerability very difficult. A no-reference test method lets us judge the quality of a target video from its error characteristics alone, without the original reference video. Although several no-reference video quality assessment methods exist in the literature, they mainly target noise produced during transmission or video compression and do not evaluate the error characteristics produced when the video-processing circuit itself is faulty. We believe this thesis is the first to propose a no-reference test solution targeting circuit errors.
To detect the error characteristics of erroneous videos, we perform detailed simulations of all possible internal circuit errors, generate all erroneous videos that such errors could produce, and carefully analyze their video quality to assess their acceptability. Based on the analysis results, we propose an efficient error-tolerability test method that can determine the acceptability of a target video in a short time. Comparison with previous related methods shows that our method achieves an accuracy above 90%, whereas previous methods reach only about 80%. Moreover, our computation time is about one third of that of previous methods.
Abstract
As the IoT (Internet of Things) and home security become increasingly common with advancing technology, the level of security that surveillance systems can provide receives more and more attention. However, aging effects or process defects in the video-processing circuits of surveillance systems may introduce errors that make the objects of interest difficult to identify, raising security concerns. Moreover, surveillance systems usually have a large number of video sources; if maintenance staff must check every source manually to decide whether repair or replacement is required, the incurred cost may be unaffordable. Fortunately, video errors do not necessarily make the surveillance system fail: if an error is insignificant, the objects are still likely to be identifiable, and the error is therefore acceptable. How to evaluate the acceptability of video signals is thus a critical issue, as it helps extend the lifetime of the video-processing circuits, and thereby of the surveillance system, while reducing maintenance costs.
In this thesis, we propose a no-reference error-tolerability test method for videos. In most video applications, especially surveillance videos or videos captured by sensors, the reference (golden) video is unavailable, which makes an error-tolerability test process difficult to implement. A no-reference test method allows us to evaluate video quality by analyzing only characteristics of the video itself, without any reference video. Several no-reference video quality assessment methods have been reported in the literature, but they focus only on noise introduced during transmission or compression of video signals; no existing work targets errors caused by problems inside the video-processing circuits themselves, such as aging. We believe this thesis is the first to propose a no-reference error-tolerability test method for erroneous circuits.
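To make the distinction concrete, the following is an illustrative sketch only, not code from the thesis: a full-reference score such as PSNR cannot even be computed without the golden frame, whereas a no-reference cue must be derived from the test frame alone. The function names and the saturated-pixel cue are assumptions for illustration, assuming 8-bit grayscale frames stored as NumPy arrays.

import numpy as np

def psnr(reference, test):
    # Full-reference score: cannot be computed without the golden frame.
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def saturated_pixel_ratio(test):
    # No-reference cue from the test frame alone: fraction of pixels stuck
    # at the extreme values 0 or 255 (a hypothetical example cue, not the
    # metric actually used in the thesis).
    return np.count_nonzero((test == 0) | (test == 255)) / test.size

In a surveillance setting only the second kind of measure is usable, since the camera's "correct" output is never observed.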
To evaluate the error-tolerability of erroneous videos, we simulate all possible circuit-induced artifacts and generate the corresponding erroneous videos. We then analyze the characteristics of these videos together with their quality and investigate the relationship between the two. Based on this analysis, we propose an efficient error-tolerability test method that can accurately evaluate the acceptability of errors in a short time. Compared with previous work, the proposed method achieves more than 90% test accuracy, whereas the previous method reaches only about 80%. In addition, our method runs about three times faster than the previous method.
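Chapter 4 details the edge detection and extreme value detection steps actually used; the sketch below only indicates the general shape of such a per-frame acceptability check. The function names, thresholds, and combination rule are illustrative assumptions, not the reported design.

import numpy as np

def edge_density(frame, grad_threshold=40.0):
    # Crude gradient-based edge map; an illustrative stand-in for the
    # edge detection method of Section 4.1.1, not the actual design.
    f = frame.astype(np.float64)
    gx = np.abs(np.diff(f, axis=1))   # horizontal gradients, shape (H, W-1)
    gy = np.abs(np.diff(f, axis=0))   # vertical gradients,   shape (H-1, W)
    edges = (gx[:-1, :] > grad_threshold) | (gy[:, :-1] > grad_threshold)
    return np.count_nonzero(edges) / edges.size

def is_frame_acceptable(frame, max_extreme_ratio=0.02, max_edge_density=0.25):
    # Toy acceptability rule with assumed thresholds: flag frames with too
    # many saturated pixels or an abnormally dense edge map, both typical
    # symptoms of circuit-induced artifacts.
    extreme = np.count_nonzero((frame == 0) | (frame == 255)) / frame.size
    return extreme <= max_extreme_ratio and edge_density(frame) <= max_edge_density

Because both cues are computed directly from the decoded frame, such a check can run online without storing any reference video.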
目次 Table of Contents
Thesis Approval Form i
Acknowledgments ii
Chinese Abstract iii
Abstract iv
Table of Contents v
List of Figures vii
List of Tables ix
Chapter 1 Introduction 1
1.1 Research Background and Motivation 1
1.2 Research Contributions 1
1.3 Thesis Outline 2
Chapter 2 Research Background and Literature Review 3
2.1 The H.264 Video Compression Standard and Codec 3
2.2 Video Quality Assessment Methods 4
2.2.1 Types of Video Quality Assessment 4
2.2.2 Peak Signal-to-Noise Ratio (PSNR) 6
2.2.3 Structural Similarity Index (SSIM) 6
2.3 Error-Tolerance [1] 8
Chapter 3 Analysis of Erroneous Videos 10
3.1 Characteristics of Noise and Erroneous Images 10
3.2 Erroneous Video Generation Flow 15
3.3 JM Encoder Parameter Settings 17
3.4 Error Injection 18
3.5 Fast Forward MPEG (FFmpeg) Multimedia Processing Software 21
3.6 Analysis of Erroneous Videos 22
Chapter 4 No-Reference Error-Tolerability Test Method and Experimental Results 23
4.1 No-Reference Error-Tolerability Test Method 23
4.1.1 Edge Detection Method 23
4.1.2 Extreme Value Detection Method 30
4.1.3 Error-Tolerance and Experiment Flowcharts 33
4.2 Analysis of the Method's Error-Tolerability and Experimental Parameters 37
4.3 Experimental Results 44
4.3.1 Experimental Parameters 44
4.3.2 Experimental Results 46
4.4 Comparison of Experimental Results 56
Chapter 5 Hardware Implementation and Cost Analysis 58
5.1 Hardware for the Video Quality Test Method 58
5.2 Hardware Cost Analysis 60
Chapter 6 Conclusion 61
Chapter 7 References 62
參考文獻 References
[1] M. A. Breuer, S. K. Gupta and T. M. Mak, “Defect and error-tolerance in the presence of massive numbers of defects,” IEEE Design & Test of Computers, vol. 21, no. 3, pp. 216-227, 2004.
[2] Y. Deng, Q. Yang, J. Lu, N. Liu, Y. Qiao and Y. Sun, “A hybrid no-reference blockiness metric for H.264 standard,” IEEE International Conference on Control and Automation (ICCA), pp. 1367-1371, 2013.
[3] N. D. Narvekar and L. J. Karam, “A no-reference image blur metric based on the cumulative probability of blur detection (CPBD),” IEEE Transactions on Image Processing, vol. 20, no. 9, pp. 2678-2683, 2011.
[4] T. Wiegand, G. J. Sullivan, G. Bjøntegaard and A. Luthra, “Overview of the H.264/AVC video coding standard,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 7, pp. 560-576, 2003.
[5] M. Shahid, A. Rossholm, B. Lovstrom and H. J. Zepernick, “No-reference image and video quality assessment: a classification and review of recent approaches,” EURASIP Journal on Image and Video Processing, vol. 40, no. 1, pp. 1-32, 2014.
[6] S. Winkler, “Video quality measurement standards – current status and trends,” 7th International Conference on Information and Communication Systems, pp. 848-852, 2009.
[7] S. Chikkerur, V. Sundaram, M. Reisslein and L. J. Karam, “Objective video quality assessment methods: a classification, review, and performance comparison,” IEEE Transactions on Broadcasting, vol. 57, no. 2, pp. 165-182, 2011.
[8] K. Seshadrinathan, R. Soundararajan, A. C. Bovik and L. K. Cormack, “Study of subjective and objective quality assessment of video,” IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1427-1441, 2010.
[9] T. Y. Hsieh and Y. H. Peng, “Filtering-based error-tolerability evaluation of image processing circuits,” Proceedings of the International On-Line Testing Symposium (IOLTS), pp. 132-137, 2015.
[10] A. Rehman and Z. Wang, “Reduced-reference image quality assessment by structural similarity estimation,” IEEE Transactions on Image Processing, vol. 21, no. 8, pp. 3378-3389, 2012.
[11] P. Marziliano, F. Dufaux, S. Winkler and T. Ebrahimi, “A no-reference perceptual blur metric,” Proceedings of the International Conference on Image Processing, vol. 3, pp. 57-60, 2002.
[12] C. Chen and J. A. Bloom, “A blind reference-free blockiness measure,” Proceedings of the Pacific Rim Conference on Advances in Multimedia Information Processing, pp. 112-123, 2010.
[13] Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.
[14] R. Ferzli and L. J. Karam, “A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB),” IEEE Transactions on Image Processing, vol. 18, no. 4, pp. 717-728, 2009.
[15] D. Jayaraman, A. Mittal, A. K. Moorthy and A. C. Bovik, “Objective quality assessment of multiply distorted images,” Proceedings of the Asilomar Conference on Signals, Systems and Computers, pp. 1693-1697, 2012.
[16] Joint Model reference software for H.264/AVC (JM 19.0) website. Accessed on September 11, 2017. [Online]. Available: http://iphome.hhi.de/suehring/tml/.
[17] NOVA, free, open-source H.264/AVC baseline decoder website. Accessed on September 11, 2017. [Online]. Available: http://opencores.org/project,nova.
[18] K. Xu, Power-Efficient Design Methodology for Video Decoding, Ph.D. dissertation, 2007.
[19] YUV video sequence website. Accessed on September 11, 2017. [Online]. Available: http://trace.eas.asu.edu/yuv/index.html.
[20] FFmpeg, free, open-source multimedia framework website. Accessed on September 11, 2017. [Online]. Available: https://ffmpeg.org/.
[21] C. Y. Lien, C. C. Huang, P. Y. Chen and Y. F. Lin, “An efficient denoising architecture for removal of impulse noise in images,” IEEE Transactions on Computers, vol. 62, no. 4, pp. 631-643, 2013.
[22] CPBD tool website. Accessed on September 11, 2017. [Online]. Available: https://ivulab.engineering.asu.edu/software/cpbd/.
電子全文 Fulltext
The electronic full text is licensed only for personal, non-profit searching, reading, and printing for academic research purposes. Please comply with the relevant provisions of the Copyright Act of the Republic of China and do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
論文使用權限 Thesis access permission: user-defined availability period
開放時間 Available:
校內 Campus: 已公開 available
校外 Off-campus: 已公開 available


紙本論文 Printed copies
Public access information for printed theses is relatively complete from academic year 102 onward. To inquire about the availability of printed theses from academic year 101 or earlier, please contact the printed thesis service counter of the Office of Library and Information Services. We apologize for any inconvenience.
開放時間 Available: 已公開 available
