Title page for etd-0326118-050537
Title
Enhancement of Flower Classification with the Profile Feature
Department
Year, semester
Language
Degree
Number of pages
50
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2018-04-19
Date of Submission
2018-04-29
Keywords
inception-v3 model, Adaboost, flower classification, SVM, texture feature, color feature, segmentation, feature extraction
Statistics
The thesis/dissertation has been viewed 5,699 times and downloaded 103 times.
Chinese Abstract
In this thesis, we propose a machine learning method for classifying flowers. Our method consists of three steps. First, we use the GrabCut segmentation method to remove the background from each flower image. Second, we extract features, including color features and texture features. Finally, we train various SVM and Adaboost models with different feature combinations. Our experiments use the Oxford-102 flower dataset. The proposed profile feature improves accuracy by about 2% in the SVM model and by about 3% in our ensemble model. However, when Inception-v3 is added to our model, the profile feature improves accuracy by only about 1%. Among all the feature and classifier combinations in our experiments, the best accuracy is 83.57%.
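The feature-extraction step above includes shape information, and the table of contents lists Hu's moments (Section 2.3) among the features. The thesis's own profile feature is not defined on this page, so as a minimal, self-contained sketch of the general idea, the first two Hu moment invariants can be computed from a binary foreground mask (such as the one GrabCut produces); the function name `hu_like_invariants` is ours, not the thesis's:

```python
import numpy as np

def hu_like_invariants(mask):
    """First two of Hu's moment invariants from a binary foreground mask."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                      # zeroth moment = foreground area
    cx, cy = xs.mean(), ys.mean()      # centroid

    def eta(p, q):
        # Normalized central moment eta_pq = mu_pq / m00^(1 + (p+q)/2)
        mu = ((xs - cx) ** p * (ys - cy) ** q).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

# Synthetic masks: two filled squares at different scales. Because the
# invariants are scale-normalized, both squares yield almost the same phi1.
small = np.zeros((20, 20)); small[5:10, 5:10] = 1
large = np.zeros((80, 80)); large[10:50, 10:50] = 1
print(hu_like_invariants(small)[0], hu_like_invariants(large)[0])
```

The small residual difference between the two values comes only from pixel discretization; on continuous shapes the invariants agree exactly under translation and scaling.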
Abstract
In this thesis, we propose a machine-learning method for flower classification. There are three steps in our method. The process begins with segmenting the flower images and removing their backgrounds; we use the GrabCut approach for segmentation because it performs well. Then, we extract features from the foreground, including color features and texture features. Finally, we train SVM (support vector machine) and Adaboost models with several feature combinations. The experimental material comes from the Oxford-102 category flower dataset. Our proposed feature, the profile feature, improves accuracy by about 2% in the SVM model and by about 3% in our ensemble model. However, if we combine the Inception-v3 model into ours, the profile feature improves accuracy by only about 1%. Our best result, 83.57% accuracy, is obtained by aggregating several classification models, and it outperforms all methods that do not use deep learning.
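The extract-features-then-train-SVM stage of the pipeline can be sketched compactly. This is an illustrative toy, not the thesis's actual configuration: `color_histogram` and `fake_image` are our own names, the data is synthetic (the thesis uses segmented Oxford-102 images), and scikit-learn's `SVC` stands in for the LIBSVM models cited in the bibliography:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def color_histogram(image, bins=8):
    """Per-channel intensity histogram, L1-normalized, as a simple color feature."""
    feats = []
    for c in range(image.shape[2]):
        hist, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

# Synthetic stand-ins for segmented flower images:
# class 0 is reddish (boosted channel 0), class 1 is bluish (boosted channel 2).
rng = np.random.default_rng(0)
def fake_image(reddish):
    img = rng.integers(0, 80, size=(32, 32, 3))
    img[..., 0 if reddish else 2] += 150
    return img

X = np.array([color_histogram(fake_image(i % 2 == 0)) for i in range(40)])
y = np.array([i % 2 for i in range(40)])

# Standardize features, then fit an RBF-kernel SVM, as is conventional.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))
```

In the same spirit, texture descriptors (SIFT/SURF) and the profile feature would simply be concatenated onto the histogram vector before training, which is what "several feature combinations" refers to above.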
Table of Contents
VERIFICATION FORM … i
THESIS AUTHORIZATION FORM … iii
THANKS … iv
CHINESE ABSTRACT … v
ENGLISH ABSTRACT … vi
LIST OF FIGURES … ix
LIST OF TABLES … x
Chapter 1. Introduction … 1
Chapter 2. Preliminaries … 5
2.1 Color Feature … 5
2.1.1 Hue Saturation Value Color Model … 5
2.1.2 CIELAB Color Model … 6
2.2 Texture Feature … 7
2.2.1 Scale-Invariant Feature Transform … 7
2.2.2 Speeded-Up Robust Features … 11
2.3 Hu's Moments … 13
2.4 The GrabCut Method … 15
2.5 Classification Models … 15
2.5.1 Support Vector Machine … 16
2.5.2 Adaboost … 17
2.5.3 Random Forest … 18
2.5.4 TensorFlow Inception-v3 Model … 19
Chapter 3. The Proposed Algorithm … 21
3.1 Image Segmentation … 21
3.2 Feature Extraction … 22
3.2.1 Feature Size … 22
3.2.2 Profile Feature … 24
Chapter 4. Experimental Results … 26
Chapter 5. Conclusion … 34
BIBLIOGRAPHY … 35
References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. J. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Józefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. G. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. A. Tucker, V. Vanhoucke, V. Vasudevan, F. B. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: Large-scale machine learning on heterogeneous distributed systems,” Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation, Savannah, GA, USA, pp. 265–283, USENIX Association, 2016.
[2] D. Agnew, “Efficient use of the Hessian matrix for circuit optimization,” IEEE Transactions on Circuits and Systems, Vol. 25, pp. 600–608, 1978.
[3] A. Angelova, S. Zhu, and Y. Lin, “Image segmentation for large-scale subcategory flower recognition,” Proceedings of 2013 IEEE Workshop on Applications of Computer Vision, Tampa, FL, USA, pp. 39–45, 2013.
[4] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “SURF: Speeded up robust features,” Computer Vision and Image Understanding, Vol. 110, No. 3, pp. 346–359, 2008.
[5] Y. Boykov and G. Funka-Lea, “Graph cuts and efficient N-D image segmentation,” International Journal of Computer Vision, Vol. 70, No. 2, pp. 109–131, Nov. 2006.
[6] Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 11, pp. 1222–1239, Nov. 2001.
[7] L. Breiman, “Random forests,” Machine Learning, Vol. 45, No. 1, pp. 5–32, Oct. 2001.
[8] Y. Chai, V. Lempitsky, and A. Zisserman, “BiCoS: A bi-level co-segmentation method for image classification,” Proceedings of 2011 IEEE International Conference on Computer Vision, Barcelona, Spain, pp. 2579–2586, Nov. 2011.
[9] C. C. Chang and C. J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, Vol. 2, No. 3, pp. 27:1–27:27, 2011.
[10] C. Connolly and T. Fleiss, “A study of efficiency and accuracy in the transformation from RGB to CIELAB color space,” IEEE Transactions on Image Processing, Vol. 6, No. 7, pp. 1046–1048, July 1997.
[11] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, Vol. 20, No. 3, pp. 273–297, Sept. 1995.
[12] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1 of CVPR ’05, Washington, DC, USA, pp. 886–893, IEEE Computer Society, 2005.
[13] M. D. Fairchild, Color Appearance Models. Boston, USA: Addison-Wesley, 2005.
[14] R. E. Fan, K. W. Chang, C. J. Hsieh, X. R. Wang, and C. J. Lin, “LIBLINEAR: A library for large linear classification,” The Journal of Machine Learning Research, Vol. 9, pp. 1871–1874, June 2008.
[15] Y. Freund and R. E. Schapire, “Experiments with a new boosting algorithm,” Proceedings of 13th International Conference on Machine Learning, San Francisco, USA, pp. 148–156, Morgan Kaufmann, 1996.
[16] Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” Journal of Computer and System Sciences, Vol. 55, No. 1, pp. 119–139, 1997.
[17] R. Gonzalez and R. E. Woods, Digital Image Processing. Prentice Hall Press, 2002.
[18] T. K. Ho, “Random decision forests,” Proceedings of the 3rd International Conference on Document Analysis and Recognition, Vol. 1 of ICDAR ’95, Washington, DC, USA, pp. 278–282, IEEE Computer Society, 1995.
[19] M. K. Hu, “Visual pattern recognition by moment invariants,” IRE Transactions on Information Theory, Vol. 8, pp. 179–187, 1962.
[20] S. Ito and S. Kubota, “Object classification using heterogeneous co-occurrence features,” Proceedings of the 11th European Conference on Computer Vision: Part II, ECCV ’10, Berlin, Heidelberg, pp. 209–222, Springer-Verlag, 2010.
[21] Itseez, “Open source computer vision library,” 2015, https://github.com/itseez/opencv.
[22] M. Kearns, “Thoughts on hypothesis boosting.” Machine Learning class project, Dec. 1988.
[23] Y. Liu, F. Tang, D. Zhou, Y. Meng, and W. Dong, “Flower classification via convolutional neural network,” Proceedings of 2016 IEEE International Conference on Functional-Structural Plant Growth Modeling, Simulation, Visualization and Applications, Qingdao, China, pp. 110–116, Nov. 2016.
[24] D. G. Lowe, “Object recognition from local scale-invariant features,” Proceedings of the 7th IEEE International Conference on Computer Vision, Vol. 2, Kerkyra, Greece, pp. 1150–1157, 1999.
[25] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, Vol. 60, pp. 91–110, 2004.
[26] M. E. Nilsback, An Automatic Visual Flora: Segmentation and Classification of Flower Images. PhD thesis, University of Oxford, Oxford, England, UK, 2009.
[27] M. E. Nilsback and A. Zisserman, “102 category flower dataset,” 2008, http://www.robots.ox.ac.uk/~vgg/data/flowers/102/.
[28] M. E. Nilsback and A. Zisserman, “Automated flower classification over a large number of classes,” Proceedings of the 6th Indian Conference on Computer Vision, Graphics and Image Processing, Bhubaneswar, India, pp. 722–729, 2008.
[29] C. Rother, V. Kolmogorov, and A. Blake, “GrabCut: Interactive foreground extraction using iterated graph cuts,” Proceedings of ACM SIGGRAPH 2004, Vol. 23, Los Angeles, CA, USA, pp. 309–314, Aug. 2004.
[30] R. E. Schapire, “A brief introduction to boosting,” Proceedings of the 16th International Joint Conference on Artificial Intelligence, Vol. 2 of IJCAI’99, San Francisco, CA, USA, pp. 1401–1406, Morgan Kaufmann Publishers Inc., 1999.
[31] R. E. Schapire, Explaining AdaBoost, pp. 37–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.
[32] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, pp. 2818–2826, 2016.
[33] P. E. Utgoff, “Incremental induction of decision trees,” Machine Learning, Vol. 4, No. 2, pp. 161–186, Nov. 1989.
[34] X. Xia, C. Xu, and B. Nan, “Inception-v3 for flower classification,” Proceedings of 2017 2nd IEEE International Conference on Image, Vision and Computing, Chengdu, China, pp. 783–787, 2017.
[35] D. Yoo, S. Park, J. Y. Lee, and I. S. Kweon, “Multi-scale pyramid pooling for deep convolutional representation,” Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Boston, MA, USA, pp. 71–80, June 2015.
Fulltext
This electronic full text is licensed only for personal, non-profit retrieval, reading, and printing by users for the purpose of academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization, so as to avoid infringement.
Thesis access permission: user-defined availability period
Available:
Campus: available
Off-campus: available


Printed copies
Availability information for printed theses is relatively complete from academic year 102 (ROC calendar) onward. To inquire about the availability of printed theses from academic year 101 or earlier, please contact the printed-thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Available: available
