Master's/Doctoral Thesis etd-0618109-174414: Detailed Record
Title: 加法幅基函數網路之研究 (Study on Additive Generalized Radial Basis Function Networks)
Department:
Year, semester:
Language:
Degree:
Number of pages: 69
Author:
Advisor:
Convenor:
Advisory Committee:
Date of Exam: 2009-05-29
Date of Submission: 2009-06-18
Keywords: Generalized Radial Basis Function Network, Additive Model, Additive Generalized Radial Basis Function Network
Statistics: This thesis/dissertation has been viewed 5763 times and downloaded 1751 times.
Chinese Abstract
For general nonlinear regression problems, this thesis proposes a new learning machine: the additive generalized radial basis function network. This learning machine combines the radial basis function networks commonly used in general machine learning problems with the additive models used in semiparametric regression.
In statistical regression theory, the additive model is a good compromise between the linear model and the nonparametric model. To handle more general data, we embed the additive model into the output layer of the generalized radial basis function network, forming the additive generalized radial basis function network. Simple weight-updating rules are also derived in this thesis.
Several examples are provided to compare the simulation results of the classical generalized radial basis function networks and the proposed additive generalized radial basis function networks. The simulation results show that, with proper selection of the hidden nodes and of the bandwidth of the kernel smoother in the additive output layer, the additive generalized radial basis function networks perform better than the classical ones. Furthermore, for the given learning problems, we find that, at the same level of accuracy, the additive generalized radial basis function networks usually require fewer hidden nodes than the classical generalized radial basis function networks.
Abstract
In this thesis, we propose a new class of learning models, the additive generalized radial basis function networks (AGRBFNs), for general nonlinear regression problems. This class of learning machines combines the generalized radial basis function networks (GRBFNs) commonly used in machine learning with the additive models (AMs) frequently encountered in semiparametric regression. In statistical regression theory, the AM is a good compromise between the linear model and the fully nonparametric model. To obtain a more general network structure that can handle more general data sets, the AMs are embedded in the output layer of the GRBFNs to form the AGRBFNs. Simple weight-updating rules based on incremental gradient descent are derived. Several illustrative examples are provided to compare the performance of the classical GRBFNs with that of the proposed AGRBFNs. Simulation results show that, with proper selection of the hidden nodes and of the bandwidth of the kernel smoother used in the additive output layer, AGRBFNs can give better fits than the classical GRBFNs. Furthermore, for a given learning problem, AGRBFNs usually need fewer hidden nodes than GRBFNs for the same level of accuracy.
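The abstract names the two main ingredients of the proposed method: a GRBFN trained by incremental gradient descent, and a kernel smoother (the univariate Nadaraya-Watson estimator covered in Chapter 3) acting in an additive output layer. The following minimal Python sketch is not the thesis code; it only illustrates these two building blocks on a toy 1-D regression problem, and the number of hidden nodes, the centers and widths, the learning rate, and the bandwidth are all illustrative assumptions. The full AGRBFN would replace the linear output layer with an additive model fitted by backfitting, which this sketch does not implement.

```python
# Sketch of the two ingredients named in the abstract (not the thesis code):
# (1) a Gaussian RBF network trained by incremental gradient descent;
# (2) a univariate Nadaraya-Watson kernel smoother, the kind of kernel
#     smoother used in an additive output layer.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(2*pi*x) + noise (illustrative)
X = rng.uniform(0.0, 1.0, size=200)
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(200)

# Gaussian RBF hidden layer; centers fixed on a grid here,
# though the thesis discusses subset selection by K-means (Ch. 2.3).
M = 10                                  # number of hidden nodes (assumed)
centers = np.linspace(0.0, 1.0, M)
width = 0.1                             # common Gaussian width (assumed)

def hidden(x):
    """Hidden-layer outputs phi_j(x) = exp(-(x - c_j)^2 / (2*width^2))."""
    return np.exp(-(x - centers) ** 2 / (2 * width ** 2))

# Incremental (per-sample) gradient descent on the linear output weights.
w = np.zeros(M)
b = 0.0
eta = 0.05                              # learning rate (assumed)
for epoch in range(200):
    for i in rng.permutation(len(X)):
        phi = hidden(X[i])
        err = (w @ phi + b) - y[i]      # prediction error on one sample
        w -= eta * err * phi            # gradient step on weights
        b -= eta * err                  # gradient step on bias

# Univariate Nadaraya-Watson estimator with a Gaussian kernel.
def nadaraya_watson(x0, xs, ys, h):
    """Kernel-weighted local average: sum_i K((x0-xi)/h)*yi / sum_i K(.)."""
    k = np.exp(-0.5 * ((x0 - xs) / h) ** 2)
    return (k @ ys) / k.sum()

grid = np.linspace(0.0, 1.0, 5)
rbf_fit = np.array([w @ hidden(g) + b for g in grid])
nw_fit = np.array([nadaraya_watson(g, X, y, h=0.05) for g in grid])
print("RBF fit:", np.round(rbf_fit, 2))
print("NW fit :", np.round(nw_fit, 2))
print("truth  :", np.round(np.sin(2 * np.pi * grid), 2))
```

On this toy problem both pieces recover the sine trend; the comparison of interest in the thesis is between the classical GRBFN and the AGRBFN at matched numbers of hidden nodes and kernel bandwidths.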
Table of Contents
Acknowledgements i
Chinese Abstract ii
Abstract iii
List of Figures and Tables iv
Glossary of Symbols vi
Glossary of Abbreviations vii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Brief Sketch of the Contents 4
Chapter 2 Generalized Radial Basis Function Networks 7
2.1 Introduction 8
2.2 Training by Incremental Gradient Descent Algorithm 11
2.3 Subset Selection by K-means Algorithm 13
Chapter 3 Additive Models 16
3.1 Introduction 17
3.2 Backfitting Method for Additive Models 20
3.3 Univariate Nadaraya-Watson Estimators 21
3.4 Degrees of Freedom of a Smoother 25
Chapter 4 Additive Generalized Radial Basis Function Networks 27
4.1 Introduction 28
4.2 Training by Steepest Descent Algorithm 31
4.3 Bandwidth Selection 34
4.4 Degrees of Freedom 36
Chapter 5 Illustrative Examples 38
Chapter 6 Conclusion and Discussion 53
References 56
Fulltext
This electronic full text is licensed only for individual, non-commercial retrieval, reading, and printing for the purpose of academic research. Please observe the relevant provisions of the Copyright Act of the Republic of China, and do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: withheld for one year, then open both on- and off-campus
Available:
Campus: available
Off-campus: available


Printed copies
Availability information for printed theses is relatively complete from academic year 102 (2013-2014) onward. For the availability of printed theses from academic year 101 or earlier, please contact the printed-thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Available: available
