Title page for etd-0719114-230542

Title: Object Recognition System Design in Regions of Interest Based on AdaBoost Algorithm
Department:
Year, semester:
Language:
Degree:
Number of pages: 97
Author:
Advisor:
Convenor:
Advisory Committee:
Date of Exam: 2014-07-28
Date of Submission: 2015-01-15
Keywords: image processing, neural network, system integration, AdaBoost, OpenCV
Statistics: This thesis/dissertation has been browsed 5635 times and downloaded 35 times.
Chinese Abstract
In recent years, vehicle safety has become a prominent topic in modern automotive engineering, and many academic institutions and automakers have joined in developing related vehicle technologies. This research uses a laser range finder (LRF) and a camera to build an object recognition system for regions of interest based on the AdaBoost algorithm. Through the laser range finder, the system measures the distance information of the environment in front of the vehicle accurately and in real time, and from it computes the object center, object length, and number of object points, allowing the system to detect in real time whether an obstacle is present in the region of interest (ROI) in front of the vehicle. If an obstacle is detected, the system sounds an alert to warn the driver; at the same time, the AdaBoost (Adaptive Boosting) algorithm classifies the obstacle ahead as a pedestrian or a vehicle. Because the laser range finder suffers from occlusion and data loss, this research integrates the laser range finder and the camera to perform object detection and object recognition: when either sensor fails to detect, a decision mechanism switches detection to the other sensor, compensating for the weaknesses of a single sensor. On the camera side, Shi-Tomasi corner detection from the OpenCV 1.0 development kit extracts object corners within the image's region of interest as feature points, and the Lucas-Kanade optical flow method locates the obstacle in the image. On the laser range finder side, the distance data are split into line segments and clustered; if an obstacle is detected in the region of interest, the system computes the object's center point as a feature point and obtains the object's distance and position. A back-propagation neural network (BPNN) converts the distance information into image coordinates, and the conversion error is analyzed statistically. Finally, a decision mechanism integrates the laser range finder and the camera, fusing the information from the two sensors; the detection results are passed to the AdaBoost classifier to recognize the object ahead as a pedestrian or a vehicle, and new learning samples are recorded to update the training set, improving the system's robustness in recognizing new samples.
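The camera-side step named above (Shi-Tomasi corners restricted to a region of interest, then Lucas-Kanade tracking) can be sketched as follows. This is a minimal illustration using the modern OpenCV Python bindings rather than the OpenCV 1.0 C API used in the thesis; the camera index, ROI rectangle, and detector parameters are assumptions for illustration, not values from the thesis.

```python
# Minimal sketch: Shi-Tomasi corners inside an ROI, tracked with pyramidal
# Lucas-Kanade optical flow (modern cv2 API; parameters are assumed values).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                     # hypothetical camera source
roi = (100, 200, 440, 280)                    # assumed ROI: x, y, width, height

ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

x, y, w, h = roi
mask = np.zeros_like(prev_gray)
mask[y:y + h, x:x + w] = 255                  # restrict feature search to the ROI

# Shi-Tomasi corner detection: extract feature points inside the ROI.
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                   qualityLevel=0.01, minDistance=7, mask=mask)

while True:
    ok, frame = cap.read()
    if not ok or prev_pts is None or len(prev_pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pyramidal Lucas-Kanade optical flow: track the corners into the new frame.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    good_new = next_pts[status.flatten() == 1]

    for px, py in good_new.reshape(-1, 2):
        cv2.circle(frame, (int(px), int(py)), 3, (0, 255, 0), -1)
    cv2.imshow("tracked obstacle features", frame)
    if cv2.waitKey(30) & 0xFF == 27:          # Esc to quit
        break

    prev_gray, prev_pts = gray, good_new.reshape(-1, 1, 2)

cap.release()
cv2.destroyAllWindows()
```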
Abstract
In recent years, vehicle safety has become an important issue in modern automotive technology. This research proposes an object recognition system based on the AdaBoost algorithm that integrates a laser range finder (LRF) and a camera. The system measures the distance information of the environment in front of the vehicle accurately and in real time, and from it computes the center, length, and number of points of each object. With this information, the system monitors the region of interest (ROI) in front of the vehicle; if an obstacle appears, an alarm warns the driver and, at the same time, the AdaBoost algorithm classifies the obstacle as a pedestrian or a vehicle. To compensate for occlusion and data loss on the laser range finder, this research integrates the laser range finder and the camera for object detection and object recognition. When one of the two sensors fails, the system's decision mechanism switches to the other sensor to detect the object, compensating for the shortcomings of a single sensor. On the camera side, Shi-Tomasi corner detection from the OpenCV 1.0 development kit extracts corners within the region of interest of the image as feature points, and the Lucas-Kanade optical flow method tracks the obstacle's position in the image. A back-propagation neural network (BPNN) converts the distance information obtained from the laser range finder into image coordinates, and the conversion error is evaluated statistically. Finally, the AdaBoost classifier distinguishes pedestrians from vehicles among the detected objects, and new learning samples are recorded to update the initial training set, enhancing the robustness of the recognition.
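To make the boosting step concrete, here is a minimal, self-contained sketch of binary AdaBoost with decision-stump weak learners, the general scheme the classifier above is built on. The synthetic feature vectors and labels are placeholders: in the thesis the inputs would be features of the detected obstacles and the labels would be pedestrian versus vehicle.

```python
# Minimal sketch of binary AdaBoost with decision stumps (assumed synthetic data).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                       # hypothetical feature vectors
y = np.where(X[:, 0] + 0.5 * X[:, 2] > 0, 1, -1)    # labels in {-1, +1}

def fit_stump(X, y, w):
    """Pick the single-feature threshold that minimizes the weighted error."""
    best = (None, None, 1, np.inf)                  # feature, threshold, polarity, error
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for polarity in (1, -1):
                pred = np.where(polarity * (X[:, j] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, polarity, err)
    return best

def stump_predict(X, j, thr, polarity):
    return np.where(polarity * (X[:, j] - thr) > 0, 1, -1)

w = np.full(len(y), 1.0 / len(y))                   # uniform sample weights
ensemble = []
for t in range(20):                                 # 20 boosting rounds
    j, thr, polarity, err = fit_stump(X, y, w)
    err = max(err, 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)           # weak-learner weight
    pred = stump_predict(X, j, thr, polarity)
    w *= np.exp(-alpha * y * pred)                  # re-weight: focus on mistakes
    w /= w.sum()
    ensemble.append((alpha, j, thr, polarity))

def adaboost_predict(X):
    score = sum(a * stump_predict(X, j, thr, p) for a, j, thr, p in ensemble)
    return np.where(score > 0, 1, -1)

print("training accuracy:", (adaboost_predict(X) == y).mean())
```

New samples collected at run time, as described above, would simply be appended to X and y before the boosting rounds are re-run to update the classifier.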
Table of Contents
Thesis Approval Form i
Acknowledgments ii
Abstract (Chinese) iii
Abstract iv
Table of Contents v
List of Figures vii
List of Tables xii
Chapter 1 Introduction 1
1-1 Research Motivation 1
1-2 Literature Review 3
1-3 Main Contributions 8
1-4 Thesis Organization 8
Chapter 2 System Overview 9
2-1 Object Recognition System in Regions of Interest Based on the AdaBoost Algorithm 9
2-2 Three Main System Functions 10
2-3 System Architecture and Flow 11
Chapter 3 System Implementation 13
3-1 Experimental Platform 13
3-2 Laser Range Finder Detection System 14
3-2-1 Laser Range Finder 14
3-2-2 Preprocessing 16
3-2-3 Moving Object Detection by Center Point 19
3-3 Webcam Detection System 20
3-3-1 Webcam 20
3-3-2 Preprocessing 22
3-3-3 Corner Detection 26
3-3-4 Moving Object Detection by Optical Flow 28
3-4 Integration of the Laser Range Finder and the Webcam 29
3-4-1 Sample Collection 29
3-4-2 Linear Transformation Matrix 32
3-4-3 Surface Fitting Equation 34
3-4-4 Radial Basis Function Neural Network 35
3-4-5 Back-Propagation Neural Network 38
3-4-6 Algorithm Comparison 40
3-4-7 Error Statistics 41
3-4-8 Decision Mechanism 44
3-5 Object Recognition System Based on the AdaBoost Algorithm 45
3-5-1 Initial Training Samples 45
3-5-2 Object Recognition System 46
3-5-3 Classifier Update 50
Chapter 4 Experimental Results 51
4-1 Experimental Scenarios 51
4-2 Laser Range Finder Detection System 52
4-3 Webcam Detection System 55
4-4 Integrated Laser Range Finder and Webcam System 58
4-5 Object Recognition System with the AdaBoost Algorithm 68
Chapter 5 Conclusions and Future Work 76
5-1 Conclusions 76
5-2 Future Work 77
References 78
Fulltext
This electronic fulltext is licensed only for personal, non-profit searching, reading, and printing for academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: user-defined availability date
Available:
Campus: available
Off-campus: available


Printed copies
Public-access information for printed theses is relatively complete from academic year 102 onward. To check the access status of printed theses from academic year 101 or earlier, please contact the printed thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Availability: available
