Master's/Doctoral Thesis etd-0521116-192334: Details
Title page for etd-0521116-192334
Title
使用支援向量機之花卉影像分類方法
Flower Image Classification Based on the Support Vector Machine
Department
Year, semester
Language
Degree
Number of pages
57
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2016-06-16
Date of Submission
2016-06-23
Keywords
flower images, support vector machine, speeded-up robust features, HSV color space, histogram of oriented gradients, local binary pattern, local ternary pattern
Statistics
This thesis/dissertation has been browsed 5756 times and downloaded 467 times.
Chinese Abstract
This thesis proposes a flower image classification method based on the support vector machine. Our method consists of three stages: (1) segment the flower image into the flower region and the background; (2) extract several feature sets from the flower image; (3) train a classification model with the support vector machine using various combinations of the feature sets. The feature sets used in this thesis include color features and texture features. The experiments use the 102-category flower image dataset [14]. The experimental results show that the best accuracy of our method is 67.66%, while the best accuracy of Nilsback and Zisserman's method [15] is 72.8%. Although the same dataset is used, our accuracy falls short; we believe the difference may be caused by the experimental parameters.
Abstract
This thesis proposes a method for flower image classification based on the support vector machine (SVM). There are three main stages in our method: (1) perform segmentation on the input flower image and remove the background; (2) extract several feature sets from the image; (3) train the classification model with an SVM using various combinations of the feature sets. The feature sets include color features and texture features. The experimental dataset is the 102-category flower dataset [14]. The experimental results show that the best accuracy of our method is 67.66%, whereas Nilsback and Zisserman's method [15] achieves a best accuracy of 72.8% on the same dataset. We believe the difference may be due to the experimental parameters.
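
To make the three-stage pipeline described above concrete, the following is a minimal sketch in Python, assuming scikit-image and scikit-learn are available. The helper names (segment_flower, extract_features, train_classifier), the Otsu-on-saturation segmentation rule, the histogram bin counts, the LBP parameters, and the SVM settings are all illustrative assumptions; they are not taken from the thesis.

import numpy as np
from skimage.color import rgb2hsv, rgb2gray
from skimage.feature import local_binary_pattern
from skimage.filters import threshold_otsu
from skimage.util import img_as_ubyte
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def segment_flower(rgb):
    # Stage 1 (crude stand-in): Otsu's threshold on the saturation channel,
    # assuming the flower is more saturated than the background.
    sat = rgb2hsv(rgb)[..., 1]
    return sat > threshold_otsu(sat)


def extract_features(rgb, mask):
    # Stage 2: HSV color histograms over the foreground pixels plus a
    # uniform LBP texture histogram (illustrative bin counts and parameters).
    hsv = rgb2hsv(rgb)
    parts = []
    for ch in range(3):
        hist, _ = np.histogram(hsv[..., ch][mask], bins=16, range=(0.0, 1.0))
        parts.append(hist / max(hist.sum(), 1))
    gray = img_as_ubyte(rgb2gray(rgb))
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp[mask], bins=10, range=(0, 10))
    parts.append(lbp_hist / max(lbp_hist.sum(), 1))
    return np.concatenate(parts)


def train_classifier(images, labels):
    # Stage 3: train a multi-class SVM (RBF kernel here; kernel, C, and gamma
    # are assumptions, not the values used in the thesis).
    X = np.array([extract_features(img, segment_flower(img)) for img in images])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X, labels)
    return clf


# Hypothetical usage:
#   clf = train_classifier(train_images, train_labels)
#   feats = [extract_features(im, segment_flower(im)) for im in test_images]
#   predictions = clf.predict(feats)

In the thesis itself the feature step is richer (HSV and CIELab color statistics, uniform LBP, LTP, HOG, and SURF descriptors quantized with a bag of visual words, per Chapter 3), so this sketch only mirrors the overall flow of the method.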
Table of Contents
VERIFICATION FORM . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
THESIS AUTHORIZATION FORM . . . . . . . . . . . . . . . . . . . . iii
THANKS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
CHINESE ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
ENGLISH ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter 2. Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.1 Color Space Transformation . . . . . . . . . . . . . . . . . . . . . . . 3
2.1.1 Hue Saturation Value Color Space . . . . . . . . . . . . . . . . 4
2.1.2 CIELab Color Space . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Otsu's Thresholding Method . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 Uniform Local Binary Pattern . . . . . . . . . . . . . . . . . . . . . . 8
2.4 Local Ternary Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.5 Histogram of Oriented Gradients . . . . . . . . . . . . . . . . . . . . 11
2.6 Speeded up Robust Features . . . . . . . . . . . . . . . . . . . . . . . 13
2.7 Support Vector Machine . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.8 Li's Method for the Flower Recognition . . . . . . . . . . . . . . . . . 15
Chapter 3. The Proposed Algorithm . . . . . . . . . . . . . . . . . . . . 17
3.1 Image Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2 Feature Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2.1 Color Features . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2.2 Texture Features . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3 Bag of Visual Words . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.4 Data Classification . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Chapter 4. Experimental Results . . . . . . . . . . . . . . . . . . . . . . 32
Chapter 5. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
BIBLIOGRAPHY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
References
[1] D. Agnew, “Efficient use of the Hessian matrix for circuit optimization," IEEE Transactions on Circuits and Systems, Vol. 25, pp. 600-608, 1978.
[2] A. Angelova, S. Zhu, and Y. Lin, “Image segmentation for large-scale subcategory flower recognition," Proceedings of IEEE Workshop on Applications of Computer Vision, Tampa, FL, USA, pp. 39-45, 2013.
[3] H. Bay, A. Ess, and T. Tuytelaars, “SURF: Speeded up robust features," Computer Vision and Image Understanding, Vol. 110, pp. 346-359, 2008.
[4] S. Belongie and J. Malik, “Matching with shape contexts," Proceedings of IEEE Workshop on Content-based Access of Image and Video Libraries, Hilton Head Island, SC, USA, pp. 264-269, 2000.
[5] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, Vol. 2, No. 27, pp. 1-27, 2011.
[6] S.-Y. Cho, “Content-based structural recognition for flower image classification," Proceeding of IEEE Conference on Industrial Electronics and Applications, Singapore, pp. 541-546, 2012.
[7] C. Connolly and T. Fleiss, “A study of efficiency and accuracy in the transformation from RGB to CIELAB color space," IEEE Transactions on Image Processing, Vol. 6, No. 7, pp. 1046-1048, 1997.
[8] M. D. Fairchild, Color Appearance Models. Addison-Wesley, 2005.
[9] R.-E. Fan, P.-H. Chen, and C.-J. Lin, “Working set selection using second order information for training SVM," Journal of Machine Learning Research, Vol. 6, pp. 1889-1918, 2005.
[10] R. Gonzalez and R. E. Woods, Digital Image Processing. Prentice Hall Press, 2002.
[11] L. Li and Y. Qiao, “Flower image retrieval with category attributes," Proceedings of IEEE International Conference on Information Science and Technology, Shenzhen, China, pp. 505-808, 2014.
[12] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110, 2004.
[13] C. E. Meek, “An efficient method for analysing ionospheric drifts data," Journal of Atmospheric and Terrestrial Physics, Vol. 42, No. 9, pp. 2392-2396, 1980.
[14] M.-E. Nilsback and A. Zisserman, “17 category flower dataset," 2006. http://www.robots.ox.ac.uk/~vgg/data/flowers/17/.
[15] M.-E. Nilsback and A. Zisserman, “Automated flower classification over a large number of classes," Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, Bhubaneswar, India, pp. 722-729, 2008.
[16] Nilsjohan, “The principle of the CIELAB colour space. CC BY-SA 4.0," Aug. 2014. https://sv.wikipedia.org/wiki/CIELAB.
[17] T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 7, pp. 971-987, 2002.
[18] C. A. Poynton, Digital Video and HDTV: Algorithms and Interfaces. Morgan Kaufmann, 2003.
[19] SharkD, “HSV color solid cylinder. CC BY-SA 3.0," Dec. 2015. https://commons.wikimedia.org/wiki/User:SharkD.
[20] C. Shu, X. Ding, and C. Fang, “Histogram of the oriented gradient for face recognition," Tsinghua Science and Technology, Vol. 16, No. 2, pp. 216-224, 2011.
[21] M. A. Stricker and A. Dimai, “Color indexing with weak spatial constraints," Proceedings of SPIE, Vol. 2670 (Storage and Retrieval for Still Image and Video Databases IV), San Jose, CA, USA, pp. 29-40, Mar. 1996.
[22] X. Tan and B. Triggs, “Enhanced local texture feature sets for face recognition under difficult lighting conditions,” IEEE Transactions on Image Processing, Vol. 19, No. 6, pp. 1635-1650, 2010.
[23] E. Weisstein, Fourier Transform Gaussian. MathWorld, Dec. 2013.
[24] S. Xu, T. Fang, D. Li, and S. Wang, “Object classification of aerial images with bag-of-visual words," IEEE Geoscience and Remote Sensing Letters, Vol. 7, pp. 366-370, 2010.
[25] J.-H. Xue and D. Titterington, “t-Tests, F-tests and Otsu's methods for image thresholding,” IEEE Transactions on Image Processing, Vol. 20, No. 8, pp. 2392-2396, 2011.
[26] H. M. Zawbaa, M. Abbass, S. H. Basha, M. Hazman, and A. E. Hassenian, “An automatic flower classification approach using machine learning algorithms," Proceedings of International Conference on Advances in Computing, Communications and Informatics, New Delhi, India, pp. 895-901, 2014.
[27] H. Zhang, A. C. Berg, M. Maire, and J. Malik, “SVM-KNN discriminative nearest neighbor classification for visual category recognition," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2, pp. 2126-2136, 2006.
[28] L. Zhu, H. Jin, R. Zheng, and X. Feng, “Weighting scheme for image retrieval based on bag-of-visual-words," IET Image Processing, Vol. 8, pp. 509-518, 2014.
Fulltext
This electronic full text is licensed to users only for personal, non-commercial searching, reading, and printing for the purpose of academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization, so as to avoid infringement.
Thesis access permission: user-defined release time
Available:
Campus: available
Off-campus: available


Printed copies
Public-access information for printed theses is relatively complete for academic year 102 (ROC calendar) and later. To check the availability of printed theses from academic year 101 and earlier, please contact the printed-thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Available: available
