Electronic thesis/dissertation record: etd-0101107-182709
Title
SVM-based Robust Template Design of Cellular Neural Networks and Primary Study of Wilcoxon Learning Machines
Department
Year, semester
Language
Degree
Number of pages
95
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2006-12-29
Date of Submission
2007-01-01
Keywords
Cellular Neural Networks, Support Vector Machines, Wilcoxon Learning Machines
Statistics
This thesis/dissertation has been viewed 5741 times and downloaded 2801 times.
Abstract (Chinese)
This thesis consists of two parts. The first part investigates the problem of robust template decomposition, with restricted parameters, for cellular neural networks (CNNs) implementing an arbitrary given Boolean function. The second part presents preliminary results on some novel Wilcoxon learning machines.
The first part concerns robust template design for CNNs. A linearly separable Boolean function can be realized by a suitable uncoupled CNN. For such Boolean functions, we adopt the geometric margin from machine learning theory as a robustness index of an uncoupled CNN; a linear classifier with maximal geometric margin, designed via support vector machines, then yields the most robust template for the uncoupled CNN. We establish some general properties of robust templates for CNNs without parameter restrictions, and give a complete characterization of robust templates for CNNs with parameter restrictions. Furthermore, for an arbitrary given Boolean function, we propose a generalized CFC algorithm that produces a sequence of robust uncoupled CNN templates realizing the given function.
The second part proposes a new class of Wilcoxon learning machines. The main motivation comes from the Wilcoxon approach to linear regression problems in statistics: Wilcoxon linear regressors are highly robust against outliers. We extend this approach to nonlinear machine learning, developing Wilcoxon neural networks, Wilcoxon radial basis function networks, Wilcoxon fuzzy neural networks, and kernel-based Wilcoxon regressors.
Abstract
This thesis is divided into two parts. In the first part, the general problem of robust template decomposition with restricted weights for cellular neural networks (CNNs) implementing an arbitrary Boolean function is investigated. In the second part, a preliminary study of the novel Wilcoxon learning machines is presented.
In the first part of the thesis, on robust CNN template design, the geometric margin of a linear classifier with respect to a training data set, a notion borrowed from machine learning theory, is used to define the robustness of an uncoupled CNN implementing a linearly separable Boolean function. Consequently, the so-called maximal margin classifiers can be devised via support vector machines (SVMs) to provide the most robust template design for uncoupled CNNs implementing linearly separable Boolean functions. Some general properties of robust CNNs with or without restricted weights are discussed, and all robust CNNs with restricted weights are characterized. For an arbitrarily given Boolean function, we propose a generalized version of the well-known CFC algorithm to find a sequence of robust uncoupled CNNs implementing the given Boolean function.
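The margin-as-robustness notion is straightforward to compute directly. The sketch below is purely illustrative (the three-input AND function and both templates are hypothetical examples, not taken from the thesis): it measures the geometric margin of two linear threshold templates that realize the same linearly separable Boolean function, and the larger margin identifies the more robust choice of template parameters.

```python
from itertools import product
from math import sqrt

def geometric_margin(w, b, X, y):
    """Smallest signed distance from any training point to the
    hyperplane w.x + b = 0 -- the robustness measure of the template."""
    norm = sqrt(sum(wi * wi for wi in w))
    return min(yi * (sum(wi * xi for wi, xi in zip(w, x)) + b)
               for x, yi in zip(X, y)) / norm

# Logical AND of three {-1,+1} inputs: output +1 only when all inputs are +1.
X = list(product([-1, 1], repeat=3))
y = [1 if all(xi == 1 for xi in x) else -1 for x in X]

# Two templates realizing the same Boolean function:
m_centered = geometric_margin((1, 1, 1), -2.0, X, y)  # maximal-margin bias
m_biased = geometric_margin((1, 1, 1), -2.5, X, y)    # still correct, less robust

print(m_centered, m_biased)  # the first margin is strictly larger
```

Both templates classify all eight inputs correctly, but the first leaves more room for parameter perturbation; an SVM solver would recover exactly this kind of maximal-margin parameter choice automatically.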
In the second part of the thesis, we investigate the novel Wilcoxon learning machines (WLMs). The invention of these learning machines was motivated by the Wilcoxon approach to linear regression problems in statistics; as is well known, the resulting linear regressors are quite robust against outliers. The WLMs investigated in this thesis include the Wilcoxon Neural Network (WNN), the Wilcoxon Generalized Radial Basis Function Network (WGRBFN), the Wilcoxon Fuzzy Neural Network (WFNN), and the Kernel-based Wilcoxon Regressor (KWR).
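The robustness of the Wilcoxon approach comes from replacing the sum of squared residuals with a rank-score-weighted pseudo-norm. The sketch below is an illustration under the standard Wilcoxon score function, not code from the thesis: it fits a slope by brute-force search and shows the Wilcoxon fit ignoring a gross outlier that badly skews the least-squares fit. The intercept is omitted because the Wilcoxon norm is invariant to constant shifts of the residual vector.

```python
from math import sqrt

def wilcoxon_norm(res):
    """Wilcoxon pseudo-norm: each residual is weighted by the score
    a(i) = sqrt(12) * (i/(n+1) - 1/2) determined by its rank i."""
    n = len(res)
    return sum(sqrt(12.0) * ((i + 1) / (n + 1) - 0.5) * e
               for i, e in enumerate(sorted(res)))

def sum_sq(res):
    return sum(e * e for e in res)

# Noise-free line y = 2x, except one gross outlier at x = 8.
xs = list(range(1, 9))
ys = [2 * x for x in xs]
ys[-1] = 60  # the uncorrupted value would be 16

def fit_slope(cost):
    # Brute-force search over candidate slopes; no intercept is needed
    # since the Wilcoxon norm ignores constant shifts of the residuals.
    slopes = [s / 100.0 for s in range(0, 601)]
    return min(slopes, key=lambda s: cost([y - s * x for x, y in zip(xs, ys)]))

w_slope = fit_slope(wilcoxon_norm)  # stays at the true slope 2
ls_slope = fit_slope(sum_sq)        # dragged upward by the outlier
print(w_slope, ls_slope)
```

Because the score attached to the outlier's residual is bounded, its leverage on the fit is bounded as well, whereas the squared-error criterion lets a single large residual dominate; this is the mechanism the thesis carries over from linear regression to WNNs, WGRBFNs, WFNNs, and KWRs.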
目次 Table of Contents
誌謝 i
摘要 ii
ABSTRACT iii
LIST OF FIGURES AND TABLES iv
GLOSSARY OF SYMBOLS vi

CHAPTER 1 INTRODUCTION 1

1.1 Motivation 1
1.2 Brief Sketch of the Contents 5

CHAPTER 2 CELLULAR NEURAL NETWORKS 7

2.1 Basic Concepts 7
2.2 Uncoupled Cellular Neural Networks 9
2.3 Binary Steady State Output Formula for Binary Inputs 9

CHAPTER 3 SUPPORT VECTOR MACHINES 11

3.1 Linear Classification 11
3.2 Maximal Margin Classifiers 12
3.3 Linear Support Vector Classifiers 15
3.4 Linear Regression 19
3.5 Linear Support Vector Regressors 21
3.6 Kernels 27
3.7 Kernel-based Support Vector Machines 28

CHAPTER 4 ROBUST DECOMPOSITION OF AN ARBITRARY BOOLEAN FUNCTION 32

4.1 Linearly Separable Boolean Functions 32
4.2 Some General Properties of Robust Templates 35
4.3 Restricted-weight Templates 38
4.4 Decomposition Algorithm 43
4.5 Illustrative Examples 47

CHAPTER 5 WILCOXON LEARNING MACHINES 56

5.1 Wilcoxon Norms 56
5.2 Wilcoxon Neural Networks 59
5.3 Wilcoxon Generalized Radial Basis Function Networks 64
5.4 Wilcoxon Fuzzy Neural Networks 66
5.5 Kernel-based Wilcoxon Regressors 68
5.6 Illustrative Examples 70

CHAPTER 6 CONCLUSION AND DISCUSSION 78

REFERENCES 82
Fulltext
This electronic fulltext is licensed for personal, non-profit academic research use only, including searching, reading, and printing. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: withheld; released to both campus and off-campus users one year after submission
Available:
Campus: available
Off-campus: available


Printed copies
Public-availability information for printed theses is relatively complete from academic year 102 onward. To inquire about printed theses from academic year 101 or earlier, please contact the printed-thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Available: available
