博碩士論文 etd-0803118-143818 詳細資訊
Title page for etd-0803118-143818
論文名稱 Title
生成協作網路應用於大腸內視鏡息肉/腫瘤偵測
A Generative Collaborative Network for Detection of Polyps in Endoscopic Images
系所名稱 Department
畢業學年期 Year, semester
語文別 Language
學位類別 Degree
頁數 Number of pages
62
研究生 Author
指導教授 Advisor
召集委員 Convenor
口試委員 Advisory Committee
口試日期 Date of Exam
2018-08-27
繳交日期 Date of Submission
2018-09-03
關鍵字 Keywords
類神經網路、卷積神經網路、轉置卷積神經網路、雙網路、大腸息肉偵測
Polyps Detection, Double Network, Transpose Convolution Neural Network, Neural Network, Convolution Neural Network
統計 Statistics
本論文已被瀏覽 5664 次,被下載 0 次。
The thesis/dissertation has been viewed 5664 times and downloaded 0 times.
中文摘要 Chinese Abstract
Cancer is the leading cause of death in Taiwan, and colorectal cancer has ranked first among the ten most common cancers nine times, so its impact can no longer be underestimated. The earlier its signs are detected, the greater the prospect of a cure, and essentially all colorectal tumors develop from colorectal polyps; in other words, the earlier polyps are found, the better future disease can be prevented.
This thesis therefore proposes a neural-network-based architecture for detecting polyps in colonoscopy images. The proposed architecture, named the Generative Collaborative Network, consists of two subnetworks, a Generator and a Collaborator, each of which is in turn built from a convolutional neural network and a transpose convolutional neural network. The network learns from the training data how to extract polyp features and then maps them back onto the colonoscopy image to mark the polyps.
The colonoscopy images used in this thesis come from three databases: CVC-ClinicDB, ETIS-LaribPolypDB, and CVC-EndoSceneStill. A colonoscopy image is fed into the Generator to produce a predicted polyp-marking map, which is then fed into the Collaborator to obtain the final polyp-marking map. Experimental results show that the proposed method performs best. The thesis concludes with a comparison between the results of the proposed method and those of other methods, as well as a set of clinical test results.
Abstract
Cancer is the leading cause of death in Taiwan, and colorectal cancer has ranked first among the top ten cancers several times over the years, so its effects cannot be ignored. The sooner the signs of colorectal cancer are detected, the greater the prospect of a cure.
Therefore, we propose a neural-network-based architecture for detecting polyps in colonoscopy images. The proposed architecture, named the Generative Collaborative Network, consists of two subnetworks: a Generator and a Collaborator. Both subnetworks are built from a combination of convolutional and transpose convolutional neural networks. The entire network learns how to extract polyp features from the training data and then maps them back onto the endoscopic images, where the polyps are marked.
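This record does not spell out the exact layer configuration of the two subnetworks, so the sketch below is only an illustrative assumption of how a Generator/Collaborator pair built from convolution, transpose-convolution, and batch-normalization layers (the components named in the abstract and table of contents) might be wired up; the framework (PyTorch), class names, channel widths, and kernel sizes are all hypothetical and not taken from the thesis.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder subnet: convolutions extract polyp features and
    transpose convolutions map them back to an image-sized marking map."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.encoder = nn.Sequential(            # convolutional feature extraction
            nn.Conv2d(in_ch, 32, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(            # transpose-convolutional up-sampling
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),                        # per-pixel polyp probability
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class Collaborator(Generator):
    """Second subnet with the same conv / transpose-conv structure; it takes
    the Generator's predicted marking map and refines it into the final map."""
    def __init__(self):
        super().__init__(in_ch=1)                # 1-channel predicted map as input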
The data used in this thesis come from three databases: CVC-ClinicDB, ETIS-LaribPolypDB, and CVC-EndoSceneStill. An endoscopic image is fed into the Generator to produce a predicted polyp-marking map, which is then fed into the Collaborator to obtain the final polyp-marking map. Experimental results indicate that the proposed method performs best. At the end of the thesis, a comparison between the results of the proposed method and those of other methods is presented, along with a set of clinical trial results.
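To make the two-stage flow described above concrete (image into the Generator, predicted marking map into the Collaborator, final marking map out), here is a minimal inference sketch reusing the hypothetical classes from the previous block; the input resolution and the 0.5 binarization threshold are likewise assumptions, not values reported in the thesis.

# Two-stage inference as described in the abstract: the endoscopic image is
# fed into the Generator, and the predicted marking map is then fed into the
# Collaborator to obtain the final polyp-marking map.
generator, collaborator = Generator(), Collaborator()
generator.eval()
collaborator.eval()

with torch.no_grad():
    image = torch.rand(1, 3, 256, 256)         # placeholder RGB colonoscopy frame
    predicted_map = generator(image)           # Generator output: predicted marking map
    final_map = collaborator(predicted_map)    # Collaborator output: final marking map
    polyp_mask = (final_map > 0.5).float()     # binarize (0.5 is an assumed threshold)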
目次 Table of Contents
Thesis Verification Letter i
Chinese Abstract iii
Abstract iv
Table of Contents vi
List of Figures viii
List of Tables x
Chapter 1 Introduction 1
1-1 Research Motivation 1
1-2 Thesis Organization 2
Chapter 2 Literature Review 3
2-1 Artificial Neural Network 3
2-2 Convolutional Neural Network 7
2-2-1 Convolution Layer 8
2-2-2 Pooling Layer 10
2-2-3 Fully Connected Layer 11
2-3 Transpose Convolutional Neural Network 11
2-4 Batch Normalization 12
Chapter 3 Methodology 15
3-1 Generative Collaborative Networks 15
3-1-1 Overview 15
3-1-2 Internal Network Structure 16
3-2 Collaborative Property 21
3-3 Overall Algorithm 24
Chapter 4 Experiments 25
4-1 Experimental Environment and Samples 25
4-1-1 Environment 25
4-1-2 Samples 26
4-2 Experiment Description 27
4-3 Experimental Results 30
4-3-1 With Polyps 31
4-3-2 Without Polyps 40
4-3-3 Demonstration 43
Chapter 5 Conclusions and Future Work 48
5-1 Conclusions 48
5-2 Future Work 48
References 49
參考文獻 References
[1] Y. Bengio, A. Courville, and P. Vincent, “Representation Learning: A Review and New Perspectives,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8): p. 1798-1828, 2013.
[2] J. Schmidhuber, “Deep learning in neural networks: An overview,” Neural Networks, 61: p. 85-117, 2015.
[3] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, 521(7553): p. 436-444, 2015.
[4] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning internal representations by error propagation,” Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, MIT Press, p. 318-362, 1986.
[5] K. Fukushima, “Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position,” Biological Cybernetics, 36: p. 193-202, 1980.
[6] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Backpropagation Applied to Handwritten Zip Code Recognition,” Neural Computation, 1(4): p. 541-551, 1989.
[7] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, 86(11): p. 2278-2324, 1998.
[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Communications of the ACM, 60(6): p. 84-90, 2017.
[9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” Advances in Neural Information Processing Systems, Vol. 27, Curran Associates Inc, p. 2672-2680, 2014.
[10] J. Zhao, M. Mathieu, and Y. LeCun, “Energy-based Generative Adversarial Network,” International Conference on Learning Representations, 2017.
[11] S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” 2015.
[12] J. Bernal, F. J. Sanchez, G. Fernandez-Esparrach, D. Gil, C. Rodriguez, and F. Vilarino, “WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians,” Computerized Medical Imaging and Graphics, 43: p. 99-111, 2015.
[13] J. Silva, A. Histace, O. Romain, X. Dray, B. Granado, and P. Marteau, “Towards embedded detection of polyps in videocolonoscopy and WCE images for early diagnosis of colorectal cancer,” International Journal of Computer Assisted Radiology and Surgery, Springer Verlag: Germany, p. 283-293, 2013.
[14] D. Vázquez, J. Bernal, F. Javier Sánchez, G. Fernández-Esparrach, A. López, A. Romero, M. Drozdzal, and A. Courville, “A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images,” 2016.
[15] V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12): p. 2481-2495, 2017.
[16] V. Dumoulin and F. Visin, “A guide to convolution arithmetic for deep learning,” 2016.
電子全文 Fulltext
This electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization, so as to avoid infringement.
論文使用權限 Thesis access permission: 自定論文開放時間 user-defined release date
開放時間 Available:
校內 Campus: 已公開 available
校外 Off-campus: 已公開 available


紙本論文 Printed copies
Public-availability information for printed theses is relatively complete from the 102 academic year (2013-14) onward. To inquire about the availability of printed theses from the 101 academic year and earlier, please contact the printed-thesis service counter of the Office of Library and Information Services. We apologize for any inconvenience.
開放時間 Available: 已公開 available
