Title page for etd-0722109-203723
Title
兩個自組織映射圖網路的變形及其在影像量化與壓縮上的應用
Two Variants of Self-Organizing Map and Their Applications in Image Quantization and Compression
Department
Year, semester
Language
Degree
Number of pages
84
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2009-07-08
Date of Submission
2009-07-22
Keywords
vector quantization, partial distortion theorem, color quantization, edge preserving, self-organizing map, adaptive learning
Statistics
The thesis/dissertation has been browsed 5724 times and downloaded 1385 times.
Abstract (Chinese)
The self-organizing map (SOM) is an unsupervised learning algorithm that has been successfully applied in many fields. One of its strengths is that it can process data incrementally. Over the past few decades, many variants of the SOM have been applied across numerous domains. This dissertation proposes two new SOM algorithms, each applied to a different kind of image compression.

The first algorithm is a SOM that adapts to the training-sample size and the network parameters, applied to color quantization of color images. The shrinking schedule of the neighborhood function, the parameter of the minimax-distortion winner search, and the stopping criterion all vary with the sample size. Building on this sample-size adaptive SOM, we replace the conventional convergence test with a criterion based on the sampling ratio, which effectively speeds up the learning process. Experimental results show that the proposed sample-size adaptive SOM achieves better reconstructed-image quality, and smaller variation in that quality, under different network parameters and input image sizes.

The second algorithm is a new classified SOM for edge-preserving image compression that uses subcodebooks of dynamic size and a dynamically weighted learning rate. The subcodebook sizes are estimated and adjusted dynamically based on the partial distortion theorem, which alleviates poor neuron utilization and balances the partial distortions of the neurons. The proposed dynamically weighted learning rate updates a neuron efficiently no matter how large the weight is. Experimental results show that, compared with conventional algorithms, the proposed method yields better reconstruction quality for edge blocks, a codebook with more spread-out codevectors, and lower computational complexity.
Abstract
The self-organizing map (SOM) is an unsupervised learning algorithm which has been successfully applied in a wide variety of domains. One advantage of the SOM is its incremental nature, which allows it to handle data on the fly. Over the last several decades, many variants of the SOM have been used across application domains. In this dissertation, two new SOM algorithms are developed for image quantization and compression.
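The incremental property mentioned above can be seen in the standard online SOM update rule, where each input vector immediately pulls the winning neuron and its topological neighbors toward it. A minimal Python sketch follows; it illustrates the generic SOM, not the dissertation's algorithms, and the 1-D grid, decay schedules, and Gaussian neighborhood are illustrative assumptions:

```python
import numpy as np

def som_train(data, n_neurons=16, sweeps=10, eta0=0.5, sigma0=2.0, seed=0):
    """Minimal 1-D SOM trained incrementally, one sample at a time."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook with randomly chosen training samples.
    neurons = data[rng.choice(len(data), n_neurons, replace=False)].astype(float)
    grid = np.arange(n_neurons)  # 1-D topology: neighbor distance = index distance
    for t in range(sweeps):
        eta = eta0 * (1 - t / sweeps)                 # decaying learning rate
        sigma = max(sigma0 * (1 - t / sweeps), 0.5)   # shrinking neighborhood width
        for x in data[rng.permutation(len(data))]:
            w = np.argmin(((neurons - x) ** 2).sum(axis=1))     # winner search
            h = np.exp(-((grid - w) ** 2) / (2 * sigma ** 2))   # neighborhood function
            neurons += (eta * h)[:, None] * (x - neurons)       # incremental update
    return neurons
```

Because each sample updates the map immediately, the algorithm never needs the whole data set in memory at once, which is what makes SOM attractive for on-the-fly quantization.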

The first algorithm is a sample-size adaptive SOM for color quantization of images, designed to adapt to variations in network parameters and training-sample size. The sweep size of the neighborhood function is modulated by the size of the training data. In addition, a minimax distortion principle, also modulated by the training-sample size, is used to search for the winning neuron. Based on the sample-size adaptive self-organizing map, we use the sampling ratio of the training data, rather than the conventional weight change between adjacent sweeps, as the stopping criterion. As a result, the learning process is significantly sped up. Experimental results show that the proposed sample-size adaptive SOM achieves much better PSNR quality and smaller PSNR variation under various combinations of network parameters and image sizes.
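The minimax-distortion winner search can be pictured as follows: each neuron carries an accumulated partial distortion, and the winner is the neuron whose assignment keeps the largest partial distortion as small as possible. The toy sketch below is one illustrative reading of that principle, not the dissertation's exact rule; the squared-Euclidean distortion and the in-place accumulator `D` are assumptions:

```python
import numpy as np

def minimax_winner(x, codebook, D):
    """Winner search under a minimax partial distortion principle.

    D[j] is the distortion accumulated so far by neuron j. For each
    candidate j we ask: if x were assigned to j, what would the largest
    partial distortion become?  The winner minimizes that worst case.
    """
    d = ((codebook - x) ** 2).sum(axis=1)   # distortion of x vs. each codeword
    n = len(codebook)
    cand = np.empty(n)
    for j in range(n):
        others = np.delete(D, j).max() if n > 1 else -np.inf
        cand[j] = max(others, D[j] + d[j])  # worst partial distortion if x goes to j
    w = int(np.argmin(cand))
    D[w] += d[w]                            # accumulate the winner's share
    return w
```

Compared with a plain nearest-neighbor search, this rule steers inputs away from neurons that have already accumulated large distortion, pushing the codebook toward an equi-distortion state.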

The second algorithm is a novel classified SOM method for edge-preserving quantization of images using adaptive subcodebooks and a weighted learning rate. The subcodebook sizes of the two classes are adjusted automatically during training based on modified partial distortions that can be estimated incrementally. The proposed weighted learning rate updates the neuron efficiently no matter how large the weighting factor is. Experimental results show that the proposed classified SOM method achieves better quality of reconstructed edge blocks and a more spread-out codebook, and incurs significantly less computational cost than the competing methods.
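To see why a weighted learning rate needs care: naively multiplying the rate eta by an edge weight w gives eta*w, which can exceed 1 and overshoot the input vector. A bounded form such as the one sketched below stays below 1 for any weight. The formula is an illustrative assumption of ours, not necessarily the dissertation's exact rule:

```python
def weighted_rate(eta, w):
    """Bounded weighted learning rate (illustrative formula).

    Equals eta at w = 1, grows monotonically with the weight w, and
    approaches (but never exceeds) 1 as w grows, so the update below
    can never overshoot the input vector.
    """
    return (w * eta) / (1.0 + (w - 1.0) * eta)

def weighted_update(neuron, x, eta, w):
    """Move the neuron toward input x by the bounded weighted rate."""
    a = weighted_rate(eta, w)
    return [n + a * (xi - n) for n, xi in zip(neuron, x)]
```

With this form, edge blocks can be given arbitrarily large weights without destabilizing the training, which is the practical point of a weighted learning rate that works "no matter how large the weighting factor is."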
目次 Table of Contents
Inner Cover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
論文審定書. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
論文授權書. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
中文摘要. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Glossary of Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Self-Organizing Map . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 Color Image Quantization . . . . . . . . . . . . . . . . . . . . 3
1.2.2 Edge Preserving Vector Quantization . . . . . . . . . . . . . . 4
1.3 The Contributions of This Dissertation . . . . . . . . . . . . . . . . . 5
1.4 The Organization of This Dissertation . . . . . . . . . . . . . . . . . 6
2 Preliminaries and Related Work . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1 Definition of Vector Quantization . . . . . . . . . . . . . . . . . . . . 7
2.2 Algorithms for Codebook Design . . . . . . . . . . . . . . . . . . . . 8
2.2.1 Batch Learning . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.2 Incremental Learning . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.3 Intermediate Method . . . . . . . . . . . . . . . . . . . . . . . 13
2.3 Optimal Principle for Vector Quantization . . . . . . . . . . . . . . . 13
2.3.1 Equi-probable Principle . . . . . . . . . . . . . . . . . . . . . 14
2.3.2 Equi-distortion Principle . . . . . . . . . . . . . . . . . . . . . 15
3 Sample-Size Adaptive Self-organizing Map for Color Images Quantization. 17
3.1 Sample Size Adaptive SOM . . . . . . . . . . . . . . . . . . . . . . . 18
3.1.1 Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.1.2 Global Butterfly Permutation . . . . . . . . . . . . . . . . . . 20
3.1.3 Winner Search with Minimax Partial Distortion . . . . . . . . 21
3.1.4 Updating of Winner and Its Neighborhoods . . . . . . . . . . 22
3.2 Performance Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2.1 Performance Metrics . . . . . . . . . . . . . . . . . . . . . . . 25
3.2.2 Results and Analyses . . . . . . . . . . . . . . . . . . . . . . . 27
3.3 Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4 Classified Self-Organizing Map with Adaptive Subcodebook for Edge Preserving Vector Quantization . . . . . . . . . . . . . . . . . . . . . . . . 37
4.1 Optimal Vector Quantizer . . . . . . . . . . . . . . . . . . . . . . . . 37
4.2 Edge Preserving Vector Quantizer . . . . . . . . . . . . . . . . . . . . 38
4.3 Classified SOM with Adaptive Subcodebook . . . . . . . . . . . . . . 40
4.3.1 Subcodebook Initialization . . . . . . . . . . . . . . . . . . . . 42
4.3.2 Subcodebook Search . . . . . . . . . . . . . . . . . . . . . . . 43
4.3.3 Weighted Winner Update . . . . . . . . . . . . . . . . . . . . 43
4.3.4 Subcodebook Rearrangement . . . . . . . . . . . . . . . . . . 45
4.4 Performance Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.4.1 Performance Metrics . . . . . . . . . . . . . . . . . . . . . . . 52
4.4.2 Results and Discussions . . . . . . . . . . . . . . . . . . . . . 53
5 Conclusions and Future Work . . . . . . . . . . . . . . . . . . . . . . . . . 60
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
References
Ahalt, S., Krishnamurthy, A. K., Chen, P., and Melton, D. E. (1990). Competitive learning algorithms for vector quantization. Neural Networks, 3(3):277–290.
Anderberg, M. (1973). Cluster analysis for applications. New York: Academic Press, Inc.
Bermejo, S. and Cabestany, J. (2002). The effect of finite sample size on on-line k-means. Neurocomputing, 48(1):511–539.
Bezdek, J. (1981). Pattern recognition with fuzzy objective function algorithms. Plenum Press, New York.
Chang, C.-H., Xu, P., Xiao, R., and Srikanthan, T. (2005). New adaptive color quantization method based on self-organizing maps. IEEE Transactions on Neural Networks, 16(1):237–249.
Chen, O., Sheu, B. J., and Fang, W. (1994). Image compression using self-organization networks. IEEE Transactions on Circuits and Systems for Video Technology, 4(5):480–489.
Chou, C.-H., Su, M.-C., and Lai, E. (2004). A new cluster validity measure and its application to image compression. Pattern Analysis & Applications, 7(2):205–220.
DeSieno, D. (1988). Adding a conscience to competitive learning. IEEE International Conference on Neural Networks, 1:117–124.
Frigui, H. and Krishnapuram, R. (1999). A robust competitive clustering algorithm with applications in computer vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5):450–465.
Fukunaga, K. (1990). Introduction to statistical pattern recognition. Academic Press Professional, 2nd edition.
Gersho, A. (1979). Asymptotically optimal block quantization. IEEE Transactions on Information Theory, 25(4):373–380.
Gersho, A. and Gray, R. (1992). Vector quantization and signal compression. Norwell, MA: Kluwer Academic Publishers.
Gray, R. (1984). Vector quantization. IEEE Acoust., Speech, Signal Processing Mag., pages 4–29.
Grossberg, S. (1976). Adaptive pattern classification and universal recoding: I. parallel development and coding of neural feature detectors. Biol. Cybern., 23(3):121–134.
Haykin, S. (1994). Neural networks: a comprehensive foundation. Macmillan College Publishing Company, New York.
Heckbert, P. (1982). Color image quantization for frame buffer display. ACM SIGGRAPH Computer Graphics, 16(3):297–307.
Henstock, P. and Chelberg, D. (1996). Automatic gradient threshold determination for edge detection. IEEE Trans. Image Processing, 5(5):784–787.
Hertz, J., Krogh, A., and Palmer, R. (1991). Introduction to the theory of neural computation. Reading, MA: Addison-Wesley.
Hofmann, T. and Buhmann, J. (1997). Pairwise data clustering by deterministic annealing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(1):1–14.
Jain, A., Duin, R., and Mao, J. (2000). Statistical pattern recognition: a review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1):4–37.
Jain, A., Murty, M., and Flynn, P. (1999). Data clustering: a review. ACM Computing Surveys, 31(3):264–323.
Jiang, J. (1999). Image compression with neural networks - a survey. Signal Process. Image Commun., 14(9):737–760.
Kangas, J., Kohonen, T., and Laaksonen, J. (1990). Variants of self-organizing maps. IEEE Transactions on Neural Networks, 1(1):93–99.
Kim, Y. and Ra, J. (1995). Adaptive learning method in self-organizing map for edge preserving vector quantization. IEEE Transactions on Neural Networks, 6(1):278–280.
Kiviluoto, K. (1996). Topology preservation in self-organizing maps. IEEE International Conference on Neural Networks, 1:294–299.
Kohonen, T. (1988). Self-organization and associative memory. New York: Springer-Verlag.
Kohonen, T. (1990). The self-organizing map. Proc. IEEE, 78:1464–1480.
Krishnamurthy, A., Ahalt, S., Melton, D., and Chen, P. (1990). Neural networks for vector quantization of speech and images. IEEE Journal on Selected Areas in Communications, 8(8):1449–1457.
Krishnapuram, R. and Keller, J. (1993). A possibilistic approach to clustering. IEEE Transactions of Fuzzy Systems, 1(2):98–110.
Linde, Y., Buzo, A., and Gray, R. (1980). An algorithm for vector quantizer design. IEEE Transactions on Communications, 28(1):84–94.
Liu, Z.-Q., Glickman, M., and Zhang, Y.-J. (2000). Soft-competitive learning paradigms, Soft Computing and Human-Centered Machines. New York: Springer-Verlag.
Lo, Z. and Bavarian, B. E. (1991). On the rate of convergence in topology preserving neural networks. Biol. Cybern., 65(1):55–63.
MacQueen, J. (1967). Some methods for classification and analysis of multivariate observations. Proceedings of 5-th Berkeley Symposium on Mathematical Statistics and Probability, 1:281–297.
Mulier, F. and Cherkassky, V. (1995). Statistical analysis of self-organization. Neural Networks, 8(5):717–727.
Nakajima, T., Takizawa, H., Kobayashi, H., and Nakamura, T. (1998). Kohonen learning with a mechanism, the law of the jungle, capable of dealing with nonstationary probability distribution functions. IEICE Transactions on Information and Systems, E81-D(6):584–591.
Pal, N., Bezdek, J., and Tsao, E. (1993). Generalized clustering networks and Kohonen's self-organizing scheme. IEEE Transactions on Neural Networks, 4(4):549–557.
Park, D.-C. and Woo, Y.-J. (2001). Weighted centroid neural network for edge preserving image compression. IEEE Transactions on Neural Networks, 12(5):1134–1146.
Patane, G. and Russo, M. (2001). The enhanced LBG algorithm. Neural Netw., 14(9):1219–1237.
Pei, S.-C. and Lo, Y.-S. (1998). Color image compression and limited display using self-organization Kohonen map. IEEE Transactions on Circuits and Systems for Video Technology, 8(2):191–205.
Ramamurthi, B. and Gersho, A. (1986). Classified vector quantization of images. IEEE Transactions on Communications, 34(11):1105–1115.
Rasmussen, E. (1992). Information retrieval: data structures and algorithms. Prentice-Hall, Inc.
Riskin, E., Lookabaugh, T., Chou, P., and Gray, R. (1990). Variable rate vector quantization for medical image compression. IEEE Trans. Medical Imaging, 9(3):290–298.
Baker, R. L. (1984). Vector quantization of digital images. Ph.D. dissertation, Stanford Univ., Stanford, CA.
Saad, D. (1998). On-line learning in neural networks. Cambridge: Cambridge University Press.
Sano, K., Takagi, C., Egawa, R., Suzuki, K., and Nakamura, T. (2004). A systolic memory architecture for fast codebook design based on MMPDCL algorithm. Proceedings ITCC 2004 International Conference on Information Technology: Coding and Computing, 1:572–578.
Sano, K., Takagi, C., and Nakamura, T. (2005). Systolic computational memory approach to high-speed codebook design. Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, pages 334–339.
Sarle, W. (1997). Neural network faq, part 1 of 7. Introduction, periodic posting to the Usenet newsgroup comp.ai.neural-nets.
Sarle, W. (2000). Neural network faq, part 2 of 7. Introduction, periodic posting to the Usenet newsgroup comp.ai.neural-nets.
Swayne, D., Lang, D., Buja, A., and Cook, D. (2003). GGobi: evolving from XGobi into an extensible framework for interactive data visualization. Computational Statistics & Data Analysis, 43(4):423–444.
Ueda, N. and Nakano, R. (1994). A new competitive learning approach based on an equidistortion principle for designing optimal vector quantizers. Neural Networks, 7(8):1211–1227.
Wang, C.-H., Lee, C.-N., and Hsieh, C.-H. (2007). Sample-size adaptive self-organization map for color images quantization. Pattern Recognition Letters, 28(13):1616–1629.
Wu, K.-L. and Yang, M.-S. (2006). Alternative learning vector quantization. Pattern Recognition, 39(3):351–362.
Wu, X. and Zhang, K. (1991). A better tree-structured vector quantizer. Data Compression Conference, DCC ’91, pages 392–401.
Xu, L., Krzyzak, A., and Oja, E. (1993). Rival penalized competitive learning for clustering analysis, rbf net, and curve detection. IEEE Transactions on Neural Networks, 4(4):636–649.
Zhang, Y.-J. and Liu, Z.-Q. (2002). Self-splitting competitive learning: a new on-line clustering paradigm. IEEE Transactions on Neural Networks, 13(2):369–380.
Zhu, C. and Po, L.-M. (1998). Minimax partial distortion competitive learning for optimal codebook design. IEEE Transactions on Image Processing, 7(10):1400–1409.
Fulltext
The electronic full text is licensed only for individual, non-commercial retrieval, reading, and printing for the purpose of academic research. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: unrestricted (fully open on and off campus)
Available:
Campus: available
Off-campus: available


Printed copies
Availability information for printed copies is relatively complete from academic year 102 (ROC calendar) onward. To check the availability of printed copies from academic year 101 or earlier, please contact the printed-thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Available: available
