Title page for etd-0725117-151943
Title
Development of a Forward Collision Warning System Using Monovision and Radar Fusion
Department
Year, semester
Language
Degree
Number of pages
94
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2017-07-19
Date of Submission
2017-08-31
Keywords
particle filter, object recognition, histogram of oriented gradients, sensor fusion, neural network, support vector machine
Statistics
The thesis/dissertation has been browsed 5744 times and has been downloaded 26 times.
Abstract (Chinese)
This thesis develops a sensor fusion technique using a millimeter-wave radar and an on-board camera, and builds a forward collision warning system on top of it. The complementary characteristics of the two sensors compensate for scenarios in which a single sensor fails and raise the detection rate for objects ahead of the vehicle. Based on two warning indicators, time to collision (TTC) and post-encroachment time (PET), the system alerts the driver at the appropriate moment.
The system is developed on a parallel sensor fusion architecture. The radar detects objects ahead, while a particle filter algorithm removes non-object noise and tracks the targets in front. The vision recognition system identifies objects ahead with a two-layer classifier, where the object classes are pedestrians, motorcycles, and cars. The first-layer classifier uses a Haar-like algorithm to obtain candidate regions quickly, and the second-layer classifier verifies those candidates using image gradient features with a support vector machine (SVM). A radial basis function neural network (RBFNN) learns the mapping between radar coordinates and image coordinates. Finally, a neural network judges whether the objects detected by the radar system and the vision system are the same object, and the output of the sensor with the higher confidence is taken as the system result.
To verify the performance of the algorithms, the test scenarios cover urban roads under three weather conditions: daytime, nighttime, and rain. Because the camera is easily affected by lighting and by raindrops adhering to the lens, the detection rate of the vision system drops in those cases, and the radar effectively compensates for the camera's failures. In on-road tests with the experimental vehicle, the sensor fusion system achieved a detection rate of 90.5% with a false alarm rate of only 0.6%.
Abstract
A forward collision warning system based on sensor fusion of a millimeter-wave (MMW) radar and a camera is proposed. The proposed fusion system compensates for the limitations of each individual sensor and increases the detection rate for objects in front of the vehicle. Furthermore, time to collision (TTC) and post-encroachment time (PET) are calculated so that necessary emergency warnings can be issued to the driver.
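The two warning indicators named above can be sketched as follows. This is a minimal illustration of the standard TTC and PET definitions, not the thesis implementation; the function names and threshold values are assumptions.

```python
# Minimal sketch of the two warning indicators (TTC, PET).
# Thresholds and names are illustrative assumptions, not values from the thesis.

def time_to_collision(range_m, closing_speed_mps):
    """TTC: time until impact if the closing speed stays constant.

    range_m: distance to the lead object in meters.
    closing_speed_mps: ego speed minus lead-object speed (m/s); > 0 means closing.
    """
    if closing_speed_mps <= 0.0:        # not closing: no collision predicted
        return float("inf")
    return range_m / closing_speed_mps

def post_encroachment_time(t_first_leaves_s, t_second_arrives_s):
    """PET: gap between the first road user leaving a conflict zone
    and the second road user arriving at it."""
    return t_second_arrives_s - t_first_leaves_s

def should_warn(ttc_s, pet_s, ttc_threshold_s=2.5, pet_threshold_s=1.0):
    """Warn when either indicator falls below its (assumed) threshold."""
    return ttc_s < ttc_threshold_s or pet_s < pet_threshold_s

# Example: lead object 30 m ahead, closing at 15 m/s -> TTC = 2.0 s -> warn.
print(should_warn(time_to_collision(30.0, 15.0), post_encroachment_time(0.0, 3.0)))
```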
The proposed system is based on a parallel architecture. First, the radar detects the objects in front of the vehicle, and a particle filter reduces radar false alarms while tracking the targets. Second, the vision recognition system detects objects with a two-layer classifier: the first stage extracts candidate regions with a Haar-like algorithm, and the second stage verifies them using image gradient features and a support vector machine (SVM) classifier. For spatial alignment, a radial basis function neural network (RBFNN) learns the mapping between the range information of the millimeter-wave radar and the coordinate information in the image. Finally, a neural network performs object association, and the output of the sensor with the higher confidence is chosen as the system output.
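The radar-to-image coordinate alignment can be illustrated with a tiny RBF-network forward pass. The centers, width, and weights below are made-up placeholders; in the thesis they would be learned from calibration data, and the real network maps to full image coordinates rather than a single column.

```python
import math

# Toy forward pass of a (normalized) radial-basis-function network mapping a
# radar measurement (x: lateral offset, z: range, in meters) to an image
# column u (pixels). All parameters are illustrative placeholders, not the
# calibrated values from the thesis.

CENTERS = [(-2.0, 10.0), (0.0, 10.0), (2.0, 10.0)]   # hidden-unit centers (x, z)
SIGMA = 2.0                                          # shared Gaussian width
WEIGHTS = [200.0, 320.0, 440.0]                      # output-layer weights (pixels)

def rbf_forward(x, z):
    """Normalized weighted sum of Gaussian activations over the hidden units."""
    num = den = 0.0
    for (cx, cz), w in zip(CENTERS, WEIGHTS):
        phi = math.exp(-((x - cx) ** 2 + (z - cz) ** 2) / (2.0 * SIGMA ** 2))
        num += w * phi
        den += phi
    return num / den

# A radar target straight ahead activates the middle unit most strongly, so
# the predicted column lands at the middle unit's weight by symmetry.
print(round(rbf_forward(0.0, 10.0), 1))   # -> 320.0
```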
Finally, three different conditions (daytime, nighttime, and rainy days) are selected to demonstrate the performance of the proposed algorithm. The radar can effectively compensate for camera failures caused by insufficient light at night, raindrops on the lens in rainy weather, and similar conditions. Compared with recent literature, the detection rate and the false alarm rate of the proposed system are about 90.5% and 0.6%, respectively.
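The reported rates correspond to standard counts of true and false detections; a minimal sketch with made-up counts (the numbers below are not the thesis's experimental data):

```python
# Detection rate and false-alarm rate from raw counts.
# The counts used below are illustrative only, not results from the thesis.

def detection_rate(true_positives, false_negatives):
    """Fraction of actual front objects that the system detected."""
    return true_positives / (true_positives + false_negatives)

def false_alarm_rate(false_positives, total_detections):
    """Fraction of reported detections that were not real objects."""
    return false_positives / total_detections

tp, fn, fp = 90, 10, 1                    # illustrative counts only
print(f"detection rate:   {detection_rate(tp, fn):.1%}")
print(f"false alarm rate: {false_alarm_rate(fp, tp + fp):.1%}")
```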
Table of Contents
Abstract (Chinese) i
Abstract iv
Table of Contents v
List of Figures vii
List of Tables xi
Chapter 1 Introduction 1
1-1 Motivation 1
1-2 Literature Review 3
1-3 Main Contributions 8
1-4 Thesis Organization 8
Chapter 2 System Overview 9
2-1 Collision Warning Systems 9
2-2 System Functions 10
2-3 System Architecture 10
2-3-1 Millimeter-Wave Radar Detection System 10
2-3-2 Vision Recognition System 11
2-3-3 Sensor Fusion System 11
2-3-4 Collision Warning System 11
Chapter 3 System Implementation 13
3-1 Experimental Platform 13
3-1-1 Test Vehicle Platform 13
3-1-2 Computing Platform 14
3-2 Radar Detection System 15
3-2-1 Millimeter-Wave Radar 15
3-2-2 Radar Detection System Architecture 18
3-2-3 Clustering Algorithm 20
3-2-4 Particle Filter 22
3-3 Vision Recognition System 31
3-3-1 Vehicle Camera 31
3-3-2 Vision Recognition System Architecture 32
3-3-3 Vision Recognition Algorithms 33
3-3-4 Machine Learning Algorithms 40
3-4 Sensor Fusion System 44
3-4-1 Coordinate Transformation 44
3-4-2 Object Association 48
3-4-3 Decision Mechanism 49
3-5 Collision Warning System 50
3-5-1 Image-Based Distance Estimation 50
3-5-2 Warning Indicators 51
Chapter 4 Experimental Results 54
4-1 Test Scenarios 54
4-2 Radar Detection System 56
4-3 Vision Recognition System 62
4-4 Sensor Fusion System 68
4-5 Collision Warning System 74
Chapter 5 Conclusions and Future Work 76
5-1 Conclusions 76
5-2 Future Work 76
References 77
Fulltext
This electronic full text is licensed only for personal, non-profit searching, reading, and printing for the purpose of academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: user-defined release date
Available:
Campus: available
Off-campus: available


Printed copies
Public-access information for printed theses is relatively complete from academic year 102 (2013) onward. To inquire about printed theses from academic year 101 or earlier, please contact the printed-thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Available: available
