Title page for etd-0719106-155600
Title
Application of Least Squares Support Vector Machines in Image Coding
Department
Year, semester
Language
Degree
Number of pages
82
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2006-06-17
Date of Submission
2006-07-19
Keywords
LS-SVR, LS-SVM, image coding, support vector, SMO, two-layer LS-SVR
Statistics
The thesis/dissertation has been browsed 5739 times and downloaded 1270 times.
Abstract
In this thesis, the least squares support vector machine for regression (LS-SVR) is applied to image coding. First, we propose five simple algorithms for solving the LS-SVR problem. For linear regression, two simple Widrow-Hoff-like algorithms, one in primal form and one in dual form, are proposed. The dual form of the algorithm is then generalized to kernel-based nonlinear LS-SVR. The elegant and powerful two-parameter sequential minimal optimization (2PSMO) and three-parameter sequential minimal optimization (3PSMO) algorithms are presented in detail. A predictive function obtained from the LS-SVR is used to approximate the gray levels of an image. After pruning, only a small subset of the training data, called the support vectors, needs to be stored. Experimental results on seven image blocks show that the LS-SVR with a Gaussian kernel is more suitable than the LS-SVR with a Mahalanobis kernel built from a covariance matrix. A two-layer LS-SVR is proposed to select the machine parameters of the LS-SVR. Before training the outer LS-SVR, feature extraction is used to reduce the input dimensionality. Experimental results on three whole images show that, for the Lena and Baboon images, the two-layer LS-SVR with reduced dimensionality achieves a higher PSNR than the two-layer LS-SVR without dimensionality reduction, while for the F16 image the two PSNR values are almost the same.
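To make the pipeline in the abstract concrete, the following sketch (assuming NumPy; all function names, parameter values, and the random toy block are illustrative, not taken from the thesis) fits an LS-SVR with a Gaussian kernel to the gray levels of a small block by solving the dual KKT linear system, prunes to a small set of support vectors in the spirit of the pruning step mentioned above, and scores the reconstruction in PSNR. The thesis solves the same problem with the 2PSMO and 3PSMO algorithms, which are not reproduced here.

```python
# A minimal LS-SVR sketch in NumPy, for orientation only. It is NOT the
# thesis implementation: the dual KKT system is solved with a dense linear
# solver rather than with the 2PSMO/3PSMO algorithms of Chapter Two, and
# the toy 8x8 block and parameter values below are illustrative assumptions.
import numpy as np

def gaussian_kernel(A, B, sigma):
    """Gaussian (RBF) kernel matrix k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma, sigma):
    """Solve the LS-SVR dual system [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = gaussian_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]                       # dual coefficients alpha, bias b

def lssvr_predict(X_train, alpha, b, X_new, sigma):
    """Predictive function f(x) = sum_i alpha_i k(x_i, x) + b."""
    return gaussian_kernel(X_new, X_train, sigma) @ alpha + b

# Toy run: regress gray levels of a random 8x8 block on (row, col) coordinates,
# keep the 16 training points with largest |alpha| as "support vectors",
# refit on them, and report reconstruction quality in PSNR.
rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
coords = np.array([(i, j) for i in range(8) for j in range(8)], dtype=float)
gray = block.ravel()

alpha, b = lssvr_fit(coords, gray, gamma=100.0, sigma=2.0)
sv = np.argsort(np.abs(alpha))[-16:]             # indices of retained support vectors
alpha_sv, b_sv = lssvr_fit(coords[sv], gray[sv], gamma=100.0, sigma=2.0)
recon = lssvr_predict(coords[sv], alpha_sv, b_sv, coords, sigma=2.0)

mse = np.mean((gray - recon) ** 2)
psnr = 10.0 * np.log10(255.0 ** 2 / max(mse, 1e-12))   # PSNR for 8-bit gray levels
print(f"PSNR with 16 of 64 pixels kept as support vectors: {psnr:.2f} dB")
```

Solving the dense (n+1)-by-(n+1) system directly is only practical for small blocks; SMO-style decomposition methods such as 2PSMO and 3PSMO exist precisely to avoid forming and factoring this system when the training set is large.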
Table of Contents
ACKNOWLEDGEMENTS ii
CHINESE ABSTRACT iii
ABSTRACT iv
GLOSSARY OF SYMBOLS v
LIST OF FIGURES vi
LIST OF TABLES viii

CHAPTER ONE INTRODUCTION 1

1.1 Motivation 1
1.2 Brief Sketch of the Contents 4

CHAPTER TWO LEAST SQUARES SUPPORT VECTOR MACHINES 6

2.1 Least Square Linear Regressors 6
2.2 Least Squares Support Vector Machines for Linear Regression 7
2.3 Widrow-Hoff-like Algorithms 9
2.4 Widrow-Hoff-like Algorithm for Kernel-based LS-SVR 11
2.5 Dual Optimization Problem 13
2.6 Two-parameter SMO Algorithm 16
2.7 Three-parameter SMO Algorithm 20
2.8 An Illustrative Example 26

CHAPTER THREE IMAGE ENCODING BASED ON LEAST SQUARES SUPPORT VECTOR REGRESSION 28

3.1 Encoding and Decoding Processes 28
3.1.1 Image Format and Representation 28
3.1.2 Encoding and Decoding Processes 29
3.2 Model Selection 30
3.2.1 Comparison between Gaussian and Mahalanobis Kernel Functions 30
3.2.2 Coding Process with Two-layer LS-SVR 51
3.3 Experimental Results 61

CHAPTER FOUR CONCLUSION 67

REFERENCES 69
Fulltext
This electronic fulltext is licensed to users only for personal, non-profit retrieval, reading, and printing for academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: open on and off campus (unrestricted)
Available:
Campus: available
Off-campus: available


Printed copies
Information on the public availability of printed copies is relatively complete for academic year 102 (2013) and later. To inquire about printed copies from academic year 101 or earlier, please contact the printed thesis service counter of the Office of Library and Information Services. We apologize for any inconvenience.
Available: available
