Title page for etd-0623110-100526
Title
多攝影機協同人物追蹤之即時沉浸式監視系統
Multi-camera Human Tracking on Realtime 3D Immersive Surveillance System
Department
Year, semester
Language
Degree
Number of pages
78
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2010-06-14
Date of Submission
2010-06-23
Keywords
Salient object detection, Nighttime image fusion, Multi-camera human tracking
Statistics
The thesis/dissertation has been browsed 5659 times and downloaded 1811 times.
中文摘要 Chinese Abstract
Conventional surveillance systems display several shrunken camera views on a single split screen, labeling each sub-view with its scene name so that locations can be identified when an event occurs; the user can observe different parts of the monitored area, or the same part from different viewpoints. As surveillance systems become widespread, however, the number and density of installed cameras keep rising. Given the space occupied by displays and their number and cost, a split-screen system cannot cope with deployments of hundreds of cameras over large areas such as campuses, city intersections, and the corridors of every floor of a building, and it suffers from four major defects: low correlation between the split views, difficulty in tracking sudden events, low resolution of the surveillance video, and difficulty in maintaining global awareness. To remedy these defects, the immersive surveillance system for total situational awareness uses computer graphics techniques to construct 3D building models on top of 2D satellite images. Users can define the layout of each floor and the corresponding camera positions to build floor plans; building exteriors and monitored scenes are covered with 2D static textures that are dynamically updated from the correspondence between the 2D video and the object models, completing the 3D scene. Users can also patrol along a self-defined path at a fixed frequency, dynamically selecting the best camera view.
To track sudden events in each scene effectively, this thesis builds on the existing immersive surveillance system with three lines of research. 1. Foreground detection: the video returned by each camera is converted into an image sequence; for every image a background is computed by a pixel-stability-based background update algorithm, and the foreground is obtained by filtering each frame against this background to extract salient moving objects. 2. Nighttime image enhancement: based on fuzzy theory, image luminance is converted into a fuzzy matrix so that dark regions the human eye can barely see are enhanced while the original saturation is preserved; salient moving objects are then extracted, pasted onto the daytime background, divided into ceiling, floor, and wall, and textured into the 3D scene. Extracting moving objects at night improves overall surveillance safety. 3. Multi-camera human tracking: connected component labeling filters out small foreground fragments and records the information of each remaining block; people are then identified from the RGB component percentages and the corresponding position information within each block, and tracked through five states (Enter, Leave, Match, Occlusion, Fraction). Each person's actual trajectory within a camera's field of view is drawn in the constructed 3D dynamic scene; finally, the correlations between the moving objects extracted by different cameras are analyzed, and the cameras are fused for cooperative detection, achieving real-time people monitoring over the whole scene.
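The abstract only names the pixel-stability-based background update; it does not spell the algorithm out. The following Python sketch shows one plausible reading of the idea: a per-pixel stability counter gates how quickly a pixel is blended into the background model, so a briefly passing person does not corrupt it. The thresholds `diff_thresh` and `stable_thresh` and the blend rate `alpha` are illustrative assumptions, not the thesis's values.

```python
import numpy as np

def update_background(bg, frame, stable_count,
                      diff_thresh=25, stable_thresh=30, alpha=0.05):
    """One step of a pixel-stability-style background update (sketch).

    bg, frame      : uint8 grayscale images of the same shape
    stable_count   : int array counting consecutive stable observations
    Returns the updated background, updated counters, and a binary
    foreground mask (255 = moving pixel).
    """
    # A pixel is "stable" when it stays close to the current background.
    diff = np.abs(frame.astype(np.int16) - bg.astype(np.int16))
    stable = diff < diff_thresh
    stable_count = np.where(stable, stable_count + 1, 0)

    # Only pixels that have been stable long enough blend into the model.
    blend = stable_count >= stable_thresh
    bg = np.where(blend, (1 - alpha) * bg + alpha * frame, bg).astype(np.uint8)

    # Everything that is not stable is reported as foreground.
    foreground = (~stable).astype(np.uint8) * 255
    return bg, stable_count, foreground
```

Calling this once per frame yields the foreground mask that the salient-object-detection step would then refine.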
Abstract
Conventional surveillance systems present video from multiple cameras on a single split-screen display. Such a display allows the user to observe different parts of the scene, or the same part from different viewpoints, and each video is usually identified by a fixed textual annotation shown under the corresponding segment. As the number of installed cameras and the extent of the monitored area grow, however, the split-screen approach no longer provides an intuitive correspondence between the images acquired and the areas under surveillance. It has four inherent flaws: low correlation between split views, difficulty in tracking new activities, low resolution of the surveillance video, and difficulty in maintaining total surveillance. To remedy these defects, the "Immersive Surveillance for Total Situational Awareness" system uses computer graphics techniques to construct 3D building models on 2D satellite images. Users build floor plans by defining the layout of each floor or building and the position of each camera; this information is combined into a 3D surveillance scene, and the images acquired by the cameras are pasted onto the constructed model to give an intuitive visual presentation. Users can also walk through the scene at a fixed frequency along a self-defined patrol path to perform virtual surveillance.
Multi-camera Human Tracking on Realtime 3D Immersive Surveillance System builds on "Immersive Surveillance for Total Situational Awareness" with three components. 1. Salient object detection: the system converts the video from each camera into an image sequence and computes a background model for each image with a pixel-stability-based background update algorithm; foreground pixels are then obtained by filtering each frame against this background. 2. Nighttime image fusion: a fuzzy enhancement method brightens the dark areas of nighttime images while preserving the saturation information; the salient-object-detection step then extracts the moving objects, and the fusion results are divided into three parts (wall, ceiling, and floor) and pasted as textures onto the corresponding parts of the 3D scene. 3. Multi-camera human tracking: connected component labeling filters out small areas and saves each block's information; RGB-weight percentage information in each block, together with a five-state status model (Enter, Leave, Match, Occlusion, Fraction), is used to draw the trajectory of each person in every camera's field of view on the 3D surveillance scene. Finally, the cameras are fused together to complete real-time multi-camera people tracking. As a result, every person can be tracked in the 3D immersive surveillance system without watching each of thousands of camera views.
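As an illustration of the foreground-cleanup step described above, here is a minimal connected-component-labeling sketch in Python (BFS, 4-connectivity) that discards fragments below an area threshold. The `min_area` value and the recorded block fields (`area`, `bbox`) are hypothetical stand-ins for the per-block information the thesis records, not its exact data.

```python
from collections import deque

def label_and_filter(mask, min_area=20):
    """Label 4-connected foreground components and drop small fragments.

    mask : 2D list of 0/1 foreground values.
    Returns (labels, blocks): a label image, and per-label info for every
    component whose area is at least min_area.
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    blocks = {}
    next_label = 1
    for y in range(h):
        for x in range(w):
            if mask[y][x] and labels[y][x] == 0:
                # Flood-fill this component with a BFS.
                q = deque([(y, x)])
                labels[y][x] = next_label
                pixels = []
                while q:
                    cy, cx = q.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and labels[ny][nx] == 0:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                if len(pixels) >= min_area:
                    ys = [p[0] for p in pixels]
                    xs = [p[1] for p in pixels]
                    blocks[next_label] = {
                        "area": len(pixels),
                        "bbox": (min(ys), min(xs), max(ys), max(xs)),
                    }
                else:
                    # Erase fragments that are too small to be a person.
                    for py, px in pixels:
                        labels[py][px] = 0
                next_label += 1
    return labels, blocks
```

The surviving blocks are what a tracker would then match across frames (and across cameras) using appearance features such as the RGB percentages mentioned in the abstract.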
目次 Table of Contents
Abstract ................................ iii
Table of Contents ....................... vi
List of Figures ......................... vii
Chapter 1: Introduction ................. 1
Chapter 2: Related Work ................. 11
Chapter 3: Theoretical Background ....... 27
Chapter 4: Methods ...................... 33
Chapter 5: Experimental Results ......... 46
Chapter 6: Conclusions and Future Work .. 58
References .............................. 60
參考文獻 References
[1] Juan Zhu, Yong-ping Kong, “A Fast Method for Building and Updating Background Model,” IEEE Conf. on ISA’09, Pages 1-4, May 23-24, 2009.
[2] T.K. Kim, J.H. Im, and J.K. Paik, “Video Object Segmentation and Its Salient Motion Detection using Adaptive Background Generation,” Electronics Letters, Vol. 45, No. 11, Pages 542-543, May 21, 2009.
[3] Shan Li, Moon-Chuen Lee, “An Efficient Spatio-temporal Attention Model and Its Application to Shot Matching,” IEEE Trans. on Circuits and System for Video Technology, Vol. 17, No. 10, Oct., 2007.
[4] D. Roqueiro, V. A. Petrushin, “Counting People using Video Cameras,” Intl. Journal of Parallel, Emergent and Distributed Systems, Vol. 22, No. 3, Pages 193-209, Jan., 2007.
[5] Yeon-sung Choi, Piao Zaijun, Sun-woo Kim, Tae-hun Kim, and Chun-bae Park, “Salient Motion Information Detection Technique using Weighted Subtraction Image and Motion Vector,” ICHIT’06, Vol. 1, Pages 263-269, Nov. 9-11, 2006.
[6] Ying-Li Tian, Arun Hampapur, “Robust Salient Motion Detection with Complex Background for Real-time Video Surveillance,” Intl. Conf. on WACV/MOTIONS'05, Vol. 2, Pages 30-35, Jan., 2005.
[7] Sen-Ching S. Cheung, Chandrika Kamath, “Robust Techniques for Background Subtraction in Urban Traffic Video,” Proc. of Video Communications and Image Processing, Pages 881-892, Jan., 2004.
[8] George V. Paul, Glenn J. Beach, and Charles J. Cohen, “A Realtime Object Tracking System using a Color Camera,” IEEE Conf. on Applied Imagery Pattern Recognition Workshop 30th, Pages 137-142, Oct. 10-12, 2001.
[9] Laurent Itti, Christof Koch, and Ernst Niebur, “A Model of Saliency-Based Visual Attention for Rapid Scene Analysis,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 20, No. 11, Pages 1254-1259, Nov., 1998.
[10] Alexis M. Tourapis, “Enhanced Predictive Zonal Search for Single and Multiple Frame Motion Estimation,’’ Proc. of SPIE Visual Communications and Image Processing, Vol. 4671, Pages 1069-1079, Jan., 2002.
[11] Ramesh Raskar, Adrian Ilie, and Jingyi Yu, “Image Fusion for Context Enhancement and Video Surrealism,” The 3rd International Symposium on Non-Photorealistic Animation and Rendering, Pages 747-757, 2004.
[12] Jing Li, Stan Z.Li, Quan Pan, and Tao Yang, “Illumination and Motion-Based Video Enhancement for Night Surveillance,” 2nd Joint IEEE Intl. Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, Pages 169-175, Oct. 15-16, 2005.
[13] Oliver Rockinger, “Image Sequence Fusion using a Shift Invariant Wavelet Transform,” IEEE Intl. Conf. on Image Processing, Pages 288-291, Oct. 26-29, 1997.
[14] Yin Chen, Rick S. Blum, “Experimental Tests of Image Fusion for Night Vision,” 7th Intl. Conf. on Information Fusion, Pages 488-498, July 25-28, 2005.
[15] Liangrui Tang, Jing Zhang, and Bing Qi, “An Improved Fuzzy Image Enhancement Algorithm,” 5th Intl. Conf. on Fuzzy Systems and Knowledge Discovery, Pages 186-189, 2008.
[16] Madasu Hanmandlu, Devendra Jha, “An Optimal Fuzzy System for Color Image Enhancement,” IEEE Trans. on Image Processing, Vol. 15, No. 10, Pages 2956-2966, Oct., 2006.
[17] Jiman Kim, Daijin Kim, “Probabilistic Camera Hand-off for Visual Surveillance,” Intl. Conf. on Distributed Smart Cameras, Pages 1-8, Sept. 7-11, 2008.
[18] Tao Yang, Francine Chen, Don Kimber, and Jim Vaughan, “Robust People Detection and Tracking in a Multi-camera Indoor Visual Surveillance System,” Intl. Conf. on Multimedia and Expo, Pages 675-678, July 2-5, 2007.
[19] Francois Fleuret, Jerome Berclaz, Richard Lengagne, and Pascal Fua, “Multi-camera People Tracking with a Probabilistic Occupancy Map,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 30, No. 2, Pages 267-282, 2008.
[20] Mohamed Dahmane, Jean Meunier, “Real-time Video Surveillance with Self-organizing Maps,” Proc. of the 2nd Canadian Conference on Computer and Robot Vision, Pages 136-143, May 9-11, 2005.
[21] L. Fuentes, S. Velastin, “People Tracking in Surveillance Applications,” Proc. of the 2nd IEEE Workshop on Performance Evaluation of Tracking and Surveillance, Vol. 24, Pages 1165-1171, Nov. 1, 2006.
[22] Wei Jyh Heng, King N Ngan, “Digital Video Transition Analysis and Detection,” World Scientific Publishing Co. Pte. Ltd., ISBN 978-9812381859, Jan. 1, 2003.
[23] Gian Luca Foresti, “Object Recognition and Tracking for Remote Video Surveillance,” IEEE Trans. on Circuits and Systems for Video Technology, Vol. 9, Pages 1045-1062, Oct., 1999.
[24] Weidong Zhang, Feng Chen, Wenli Xu, and Enwei Zhang, “Real-time Video Intelligent Surveillance System,” Intl. Conf. on Multimedia and Expo, Pages 1021-1024, July 9-12, 2006.
[25] Collins R T, Lipton A J, and Kanade T, “A System for Video Surveillance and Monitoring,” Robotics Institute, Carnegie Mellon University, Pittsburgh, 2000.
[26] Liansheng Zhuang, Ketan Tang, Nenghai Yu, and Yangchun Qian, “Fast Salient Object Detection Based on Segments,” Intl. Conf. on Measuring Technology and Mechatronics Automation, Vol. 1, Pages 469-472, 2009.
[27] Patricia P. Wang, Wei Zhang, Jianguo Li, and Yimin Zhang, “Realtime Detection of Salient Moving Object: A Multi-core Solution,” Intl. Conf. on Acoustics, Speech and Signal Processing, Pages 1481-1484, 2008.
[28] L. Zhu, J.N. Hwang, and H.Y. Cheng, “Tracking of Multiple Objects Across Multiple Cameras with Overlapping and Non-overlapping Views,” IEEE Intl. Symposium on Circuits and Systems, Taipei, Taiwan, May, 2009.
[29] Eden A., Uyttendaele M., and Szeliski R., “Seamless Image Stitching of Scenes with Large Motions and Exposure Differences,” Proc. of CVPR, Vol. 3, Pages 2498-2505, 2006.
[30] C.W. Chang, J.J. Tang, and Z. Lee, “An Adaptive Surveillance System Platform for Cooperative Object Detection and Tracking in Multiple Cameras,” Intl. Conf. on Open Source, Taipei, Taiwan, 2007.
[31] I. Haritaoglu, D. Harwood, and L.S. Davis, “Real-time Surveillance of People and their Activities,” IEEE Trans. Pattern Anal. Mach. Intell., Vol. 22, Pages 809–830, Aug., 2000.
[32] C.R. Wren, A. Azarbayejani, T. Darrell, and A. Pentland, “Real-time Tracking of the Human Body,” IEEE Trans. Pattern Anal. Mach. Intell., Vol. 19, No. 7, Pages 780-785, 1997.
[33] N. Rajagopalan, R. Chellappa, and N.T. Koterba, “Background Learning for Robust Face Recognition With PCA in the Presence of Clutter,” IEEE Trans. Image Processing, Vol. 14, No. 6, Pages 832-843, 2005.
[34] A. Elgammal, D. Harwood, and L. Davis, “Non-parametric Model for Background Subtraction,” Proc. of ICCV '99 FRAME-RATE Workshop, 1999.
[35] C. Stauffer, E. Grimson, “Adaptive Background Mixture Models for Real-time Tracking,” Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, Pages 246-252, 1999.
[36] K. Kim, T.H. Chalidabhongse, D. Harwood, and L.S. Davis, “Real-time Foreground-background Segmentation using Codebook Model,” Real-time Imaging, Pages 172-185, 2005.
[37] M. Heikkila, M. Pietikainen, “A Texture-based Method for Modeling the Background and Detecting Moving Objects,” IEEE Trans. Pattern Anal. Mach. Intell., Vol. 28, No. 4, Pages 657-662, 2006.
電子全文 Fulltext
The electronic full text is authorized only for individual, non-profit searching, reading, and printing for the purpose of academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
論文使用權限 Thesis access permission: withheld; open both on- and off-campus one year after submission
開放時間 Available:
Campus: available
Off-campus: available


紙本論文 Printed copies
Availability information for printed copies is relatively complete from academic year 102 (2013) onward. To check the availability of printed theses from academic year 101 or earlier, please contact the printed-thesis service counter of the Office of Library and Information Services. We apologize for any inconvenience.
開放時間 Available: available
