Master's/Doctoral Thesis etd-0908104-190413: Detailed Record
Title page for etd-0908104-190413
Title
A Study on Classification Problems Using a Neuro-Fuzzy Approach
A Neuro-Fuzzy Approach for Classification
Department
Year, semester
Language
Degree
Number of pages
53
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2004-07-24
Date of Submission
2004-09-08
Keywords
neuro-fuzzy, fuzzy neural network, TSK model, fuzzy rule, gradient descent, similarity measure, singular value decomposition
Statistics
This thesis/dissertation has been viewed 5779 times and downloaded 0 times.
Chinese Abstract (translated)
We propose a fuzzy neural network classification system that can accurately assign unseen data to the correct class. The system is built on TSK-type fuzzy rules and operates in two stages. The first stage is a merge-based clustering algorithm that establishes the initial structure of the system. Training data are presented one instance at a time, i.e., incrementally; after the initial clustering, similar clusters are merged, with similarity judged by input-similarity, output-similarity, and output-variance tests. Each cluster is characterized by the mean and deviation of a Gaussian function, and each cluster corresponds to one IF-THEN fuzzy rule. The second stage is a hybrid learning algorithm that tunes the system parameters toward their optimal values; it combines a recursive SVD-based least squares estimator with the gradient descent method. Our approach offers several advantages: the initial clustering and merging of training data take similarity in both the input space and the output space into account; the membership functions properly reflect the distribution of the training data; and redundant clusters are merged, reducing the adverse effect of the input order. Moreover, when new training data arrive, only the merged clusters need to be reconsidered.
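The incremental, merge-based clustering stage described in the abstract can be sketched roughly as follows. This is an illustrative assumption of how such a stage might look, not the thesis's actual algorithm: the `Cluster` class, the threshold `rho`, and the running-mean update are all invented for the sketch, and the deviation update and the output-similarity/output-variance tests are omitted for brevity.

```python
import math

class Cluster:
    """A cluster described by per-dimension Gaussian means/deviations and a class label."""
    def __init__(self, x, y, init_dev=0.1):
        self.n = 1
        self.mean = list(x)               # per-dimension Gaussian means
        self.dev = [init_dev] * len(x)    # per-dimension deviations (kept fixed here)
        self.label = y                    # class label carried by the cluster

    def input_similarity(self, x):
        # Product of 1-D Gaussian memberships over all input dimensions.
        s = 1.0
        for xi, m, d in zip(x, self.mean, self.dev):
            s *= math.exp(-((xi - m) / d) ** 2)
        return s

    def add(self, x):
        # Incrementally update the running means (deviation update omitted).
        self.n += 1
        self.mean = [m + (xi - m) / self.n for m, xi in zip(self.mean, x)]

def cluster_incrementally(data, rho=0.3):
    """Feed training pairs one at a time; absorb into the most similar
    same-class cluster if it passes the input-similarity test, else open a new one."""
    clusters = []
    for x, y in data:
        best = max(clusters, key=lambda c: c.input_similarity(x), default=None)
        if best is not None and best.label == y and best.input_similarity(x) >= rho:
            best.add(x)                       # passes the test: absorb the point
        else:
            clusters.append(Cluster(x, y))    # otherwise start a new cluster
    return clusters

data = [([0.1, 0.2], 0), ([0.12, 0.21], 0), ([0.9, 0.8], 1)]
print(len(cluster_incrementally(data)))  # → 2: the two nearby points share a cluster
```

The point of the sketch is the control flow: clusters are never built in batch, so when new training data arrive, only the existing (merged) clusters need to be reconsidered.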
Abstract
We develop a neuro-fuzzy network technique to extract TSK-type fuzzy rules from a given set of input-output data for classification problems. Fuzzy clusters are generated incrementally from the training data set, and similar clusters are merged dynamically through input-similarity, output-similarity, and output-variance tests. The associated membership functions are defined by statistical means and deviations. Each cluster corresponds to a fuzzy IF-THEN rule, and the obtained rules can be further refined by a fuzzy neural network with a hybrid learning algorithm which combines a recursive SVD-based least squares estimator and the gradient descent method. The proposed technique has several advantages. The information about input and output data subspaces is considered simultaneously for cluster generation and merging. Membership functions closely match and properly describe the real distribution of the training data points. Redundant clusters are combined, and the sensitivity to the input order of the training data is reduced. In addition, generation of the whole set of clusters from scratch can be avoided when new training data are considered.
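The TSK-type inference the rules feed into can be illustrated with a minimal sketch: each rule has Gaussian antecedents (a mean and deviation per input) and a linear consequent, and the network output is the firing-strength-weighted average of the consequents. The rule parameters below are made-up illustrations, not values from the thesis.

```python
import math

def firing_strength(x, means, devs):
    # Product of Gaussian memberships over the input dimensions.
    s = 1.0
    for xi, m, d in zip(x, means, devs):
        s *= math.exp(-((xi - m) / d) ** 2)
    return s

def tsk_output(x, rules):
    """rules: list of (means, devs, a) where a = [a0, a1, ..., an]
    gives the linear consequent y = a0 + a1*x1 + ... + an*xn."""
    num = den = 0.0
    for means, devs, a in rules:
        w = firing_strength(x, means, devs)
        y = a[0] + sum(ai * xi for ai, xi in zip(a[1:], x))
        num += w * y
        den += w
    return num / den if den > 0 else 0.0

rules = [
    ([0.0, 0.0], [0.5, 0.5], [0.0, 1.0, 0.0]),  # near the origin: y ≈ x1
    ([1.0, 1.0], [0.5, 0.5], [1.0, 0.0, 0.0]),  # near (1, 1): y ≈ 1
]
print(round(tsk_output([0.0, 0.0], rules), 3))  # → 0.0 (first rule dominates)
```

In the hybrid learning stage described above, the linear consequent weights are the part a recursive SVD-based least squares estimator can solve for, while the Gaussian means and deviations are the part gradient descent adjusts.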
Table of Contents

Abstract (Chinese) 1
Abstract 2

Chapter 1 Introduction 4

Chapter 2 Merge-Based Fuzzy Clustering 9
2.1 Data Partitioning 10
2.2 Cluster Merging 15

Chapter 3 Fuzzy Rules and Neuro-Fuzzy Construction 20

Chapter 4 Parameter Optimization with Hybrid Learning 23
4.1 Recursive SVD-Based Least Squares Estimation 25
4.2 Gradient Descent Algorithm 27

Chapter 5 Experimental Results 30
5.1 Experiment 1: Comparison of Input Orders 31
5.2 Experiment 2: Comparison of Similarity Measures 35
5.3 Experiment 3: High-Dimensional Problems 40
5.3.1 GAS 40
5.3.2 STOCK 41
5.3.3 HOUSING 42

Chapter 6 Conclusion 44

References 45
References
[1] L. A. Zadeh, “Fuzzy sets,” Inform. and Contr., vol. 8, pp. 338–353, 1965.
[2] L. M. Fu, “Discrete Probability Estimation for Classification Using Certainty-Factor-Based Neural Networks,” IEEE Trans. Neural Networks, vol. 11, no. 2, pp. 415-422, 2000.
[3] X. Jin, D. Srinivasan, and R. L. Cheu, “Classification of Freeway Traffic Patterns for Incident Detection Using Constructive Probabilistic Neural Networks,” IEEE Trans. Neural Networks, vol. 12, no. 5, pp. 1173-1187, 2001.
[4] H.-J. Huang and C.-N. Hsu, “Bayesian Classification for Data From the Same Unknown Class,” IEEE Trans. Syst., Man, Cybern. B, vol. 32, no. 2, pp. 137-145, 2002.
[5] R. S. Lynch and P. K. Willett, “Bayesian Classification and Feature Reduction Using Uniform Dirichlet Priors,” IEEE Trans. Syst., Man, Cybern. B, vol. 33, no. 3, pp. 448-464, 2003.
[6] T. Hoya, “On the Capability of Accommodating New Classes Within Probabilistic Neural Networks,” IEEE Trans. Neural Networks, vol. 14, no. 2, pp. 450-453, 2003.
[7] Z. J. Wang and P. Willett, “Joint Segmentation and Classification of Time Series Using Class-Specific Features,” IEEE Trans. Syst., Man, Cybern. B, vol. 34, no. 2, pp. 1056-1067, 2004.
[8] G.-C. Alicia, C.-S. Jesús, A.-R. Rocío, and R. F.-V. Aníbal, “Local Estimation of Posterior Class Probabilities to Minimize Classification Errors,” IEEE Trans. Neural Networks, vol. 15, no. 2, pp. 309-317, 2004.
[9] N. Kwak and C.-H. Choi, “Input Feature Selection for Classification Problems,” IEEE Trans. Neural Networks, vol. 13, no. 1, pp. 143-159, 2002.
[10] H. Roubos and M. Setnes, “Compact and transparent fuzzy models and classifiers through iterative complexity reduction,” IEEE Trans. Fuzzy Syst., vol. 9, pp. 516–524, Aug. 2001.
[11] H. Ishibuchi, T. Nakashima, and T. Murata, “Three-objective geneticsbased machine learning for linguistic rule extraction,” Information Sci., vol. 136, no. 1–4, pp. 109–133, Aug. 2001.
[12] H. Ishibuchi and T. Yamamoto, “Fuzzy rule selection by data mining criteria and genetic algorithms,” in Proc. Genetic Evolutionary Computat. Conf. (GECCO-2002), 2002, pp. 399–406.
[13] L. Castillo, A. Gonzalez, and R. Perez, “Including a simplicity criterion in the selection of the best rule in a genetic fuzzy learning algorithm,” Fuzzy Sets Syst., vol. 120, no. 2, pp. 309–321, June 2001.
[14] J. Casillas, M. J. Del Jesus, and F. Herrera, “Genetic feature selection in a fuzzy rule-based classification system learning process for high-dimensional problems,” Information Sci., vol. 136, pp. 135–157, Aug. 2001.
[15] J. Yen, “Fuzzy logic—a modern perspective,” IEEE Trans. Knowledge Data Eng., vol. 11, pp. 153–165, Jan.–Feb. 1999.
[16] C. Z. Janikow, “A genetic algorithm for optimizing fuzzy decision trees,” Information Sci., vol. 89, no. 3–4, pp. 275–296, Mar. 1996.
[17] S. Abe and M.-S. Lan, “A method for fuzzy rules extraction directly from numerical data and its application to pattern classification,” IEEE Trans. Fuzzy Syst., vol. 3, pp. 18–28, Feb. 1995.
[18] S. Abe and R. Thawonmas, “A fuzzy classifier with ellipsoidal regions,” IEEE Trans. Fuzzy Syst., vol. 5, pp. 358–368, Aug. 1997.
[19] V. Uebele, S. Abe, and M.-S. Lan, “A neural-network-based fuzzy classifier,” IEEE Trans. Syst., Man, Cybern., vol. 25, pp. 353–361, Feb. 1995.
[20] N. Kwak and C.-H. Choi, “Input Feature Selection for Classification Problems,” IEEE Trans. Neural Networks, vol. 13, no. 1, pp. 143-159, 2002.
[21] Y.-T. Elad, and Gideon F. Inbar, “Feature Selection for the Classification of Movements From Single Movement-Related Potentials,” IEEE Trans. Neural Syst. Rehabilitation Engineering, vol. 10, no. 3, pp. 170-177, 2002.
[22] M. Bressan and J. Vitria, “On the Selection and Classification of Independent Features,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 10, pp. 1312-1317, 2003.
[23] D. Chakraborty and N. R. Pal, “A Neuro-Fuzzy Scheme for Simultaneous Feature Selection and Fuzzy Rule-Based Classification,” IEEE Trans. Neural Networks, vol. 15, no. 1, pp. 110-123, 2004.
[24] M. Setnes and H. Roubos, “GA-Fuzzy Modeling and Classification: Complexity and Performance,” IEEE Trans. Fuzzy Syst., vol. 8, no. 5, pp. 509-522, 2000.
[25] R. Palaniappan, P. Raveendran, and S. Omatu, “VEP Optimal Channel Selection Using Genetic Algorithm for Neural Network Classification of Alcoholics,” IEEE Trans. Neural Networks, vol. 13, no. 2, pp. 486-491, 2002.
[26] P. K. Simpson, “Fuzzy min-max neural networks-Part 1: Classification,” IEEE Trans. Neural Networks, vol. 3, no. 5, pp. 776-786, 1992.
[27] S. Seo, M. Bode, and K. Obermayer, “Soft Nearest Prototype Classification,” IEEE Trans. Neural Networks, vol. 14, no. 2, pp. 390-398, 2003.
[28] J. Peng, D. R. Heisterkamp, and H. K. Dai, “Adaptive Quasiconformal Kernel Nearest Neighbor Classification,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 656-661, 2004.
[29] N. R. Pal and S. Ghosh, “Some Classification Algorithms Integrating Dempster–Shafer Theory of Evidence with the Rank Nearest Neighbor Rules,” IEEE Trans. Syst., Man, Cybern. A, vol. 31, no. 1, pp. 59-66, 2001.
[30] H.-L. Tsai and S.-J. Lee, “Entropy-Based Generation of Supervised Neural Networks for Classification of Structured Patterns,” IEEE Trans. Neural Networks, vol. 15, no. 2, pp. 283-297, 2004.
[31] A. Porta, S. Guzzetti, N. Montano, R. Furlan, M. Pagani, A. Malliani, and S. Cerutti, “Entropy, Entropy Rate, and Pattern Classification as Tools to Typify Complexity in Short Heart Period Variability Series,” IEEE Trans. Biomedical Engineering, vol. 48, no. 11, pp. 1282-1291, 2001.
[32] H.-M. Lee, C.-M. Chen, J.-M. Chen, and Y.-L. Jou, “An Efficient Fuzzy Classifier with Feature Selection Based on Fuzzy Entropy,” IEEE Trans. Syst., Man, Cybern. B, vol. 31, no. 3, pp. 426-432, 2001.
[33] T. Sudkamp, A. Knapp, and J. Knapp, “Model generation by domain refinement and rule reduction,” IEEE Trans. Syst., Man, Cybern. B, vol. 33, no. 1, pp. 45–55, 2003.
[34] L.-X. Wang, “Training of fuzzy logic systems using nearest neighborhood clustering,”in Proc. IEEE Int. Conf. Fuzzy Syst., (San Francisco, CA, USA), pp. 13–17, Apr. 1993.
[35] M. W. Berry, “Large-scale sparse singular value computations,” Int. J. Supercomputer Appl., vol. 6, no. 1, pp. 13–49, 1992.
[36] G. H. Golub and C. F. Van Loan, Matrix Computations. Baltimore, MD: The Johns Hopkins University Press, 1996.
[37] M. G. Bello, “Enhanced training algorithms, and integrated training/architecture selection for multilayer perceptron networks,” IEEE Trans. Neural Networks, vol. 3, no. 6, pp. 864–875, 1992.
[38] M. Gori and A. Tesi, “On the problem of local minima in backpropagation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 1, pp. 76–86, 1992.
[39] A. Bjorck, Numerical Methods for Least Squares Problems. Philadelphia, PA: Society for Industrial and Applied Mathematics, 1996.
[40] T. Takagi and M. Sugeno, “Fuzzy identification of systems and its application to modeling and control,” IEEE Trans. Syst., Man, Cybern., vol. 15, no. 1, pp. 116–132, 1985.
[41] S.-J. Lee and C.-S. Ouyang, “A neuro-fuzzy system modeling with self-constructing rule generation and hybrid SVD-based learning,” IEEE Trans. Fuzzy Syst., vol. 11, no. 3, pp. 341–353, 2003.
[42] J. S. Wang and C. S. G. Lee, “Self-adaptive neuro-fuzzy inference systems for classification applications,” IEEE Trans. Fuzzy Syst., vol. 10, no. 6, pp. 790–802, 2002.
[43] J. Yen and R. Langari, Fuzzy Logic – Intelligence, Control, and Information, NJ: Prentice Hall, 1999.
[44] D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing (Two Volumes), Cambridge, MA: MIT Press, 1986.
[45] S. Haykin, Neural Networks – A Comprehensive Foundation. NJ: Prentice-Hall, 1999.
[46] J. Yen, L. Wang, and C. W. Gillespie, “Improving the interpretability of TSK fuzzy models by combining global learning and local learning,” IEEE Trans. Fuzzy Syst., vol. 6, no. 4, pp. 530–537, 1998.
[47] C.-F. Juang, “A TSK-type recurrent fuzzy network for dynamic systems processing by neural network and genetic algorithms,” IEEE Trans. Fuzzy Syst., vol. 10, no. 2, pp. 155–170, 2002.
[48] C.-F. Juang and C.-T. Lin, “An on-line self-constructing neural fuzzy inference network and its applications,” IEEE Trans. Fuzzy Syst., vol. 6, no. 1, pp. 12–32, 1998.
[49] M. Setnes, R. Babuska, U. Kaymak, and H. R. v. N. Lemke, “Similarity measures in fuzzy rule base simplification,” IEEE Trans. Syst., Man, Cybern. B, vol. 28, no. 3, pp. 376–386, 1998.
[50] Y. Lin, G. A. Cunningham III, and S. V. Coggeshall, “Using fuzzy partitions to create fuzzy systems from input-output data and set the initial weights in a fuzzy neural network,” IEEE Trans. Fuzzy Syst., vol. 5, no. 4, pp. 614–621, 1997.
[51] G. E. P. Box and G. M. Jenkins, Time Series Analysis, Forecasting and Control. San Francisco, CA: Holden Day, 1970.
[52] M. Sugeno and T. Yasukawa, “A fuzzy-logic-based approach to qualitative modeling,” IEEE Trans. Fuzzy Syst., vol. 1, no. 1, pp. 7–31, 1993.
[53] C. L. Blake and C. J. Merz, “UCI repository of machine learning databases,” http://www.ics.uci.edu/∼mlearn/MLRepository.html, 1998.
Fulltext
This electronic fulltext is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: not available on campus or off campus
Available:
Campus: not available (permanently)
Off-campus: not available (permanently)


Printed copies
Public-access information for printed theses is relatively complete from academic year 102 onward. To look up the availability of printed theses from academic year 101 or earlier, please contact the printed-thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Available: available
