Master's/Doctoral Thesis etd-1130114-120105: Detailed Information
Title page for etd-1130114-120105
Title
極端學習機的最佳化研究 (A Study on the Optimization of Extreme Learning Machines)
Optimizing Extreme Learning Machines for Supervised Learning Applications
Department
Year, semester
Language
Degree
Number of pages
85
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2014-12-10
Date of Submission
2014-12-30
Keywords
incremental learning, machine learning, feature clustering, feature extraction, correlation coefficient, mutual information, extreme learning machine, least-squares support vector machine
Statistics
The thesis/dissertation has been browsed 5742 times and downloaded 49 times.
Chinese Abstract
The research in this thesis consists of two parts. The first part proposes a machine learning-based feature extraction method for regression problems. The second part improves on the Equality Constrained-Optimization-based Extreme Learning Machine (C-ELM) by proposing an incremental version, the Incremental Equality Constrained-Optimization-based Extreme Learning Machine (IC-ELM).
When dealing with regression or classification problems, a large number of input dimensions can degrade prediction accuracy, and many methods have been proposed to alleviate this problem. Most of them, however, target classification; dimensionality reduction methods for regression are comparatively scarce. Moreover, many existing methods are statistical in nature and require computing a covariance matrix and then its eigenvalues and eigenvectors, which makes the reduction process time-consuming. We therefore propose a machine learning-based dimensionality reduction method for regression problems.
Given a set of historical data, the features, or predictor vectors, are grouped into clusters such that the predictor vectors within each cluster are similar to one another. The user does not need to specify the number of clusters in advance; the predictor vectors are clustered automatically according to the characteristics of the data. Each extracted feature is then a weighted combination of the predictor vectors in one cluster, so the dimensionality of the original data is greatly reduced and, because each extracted feature is a weighted combination of the original predictor vectors, the characteristics of the data are preserved. Computation of covariance matrices is also avoided. Finally, experiments on real-world data sets validate the efficiency of the proposed method.
The Equality Constrained-Optimization-based Extreme Learning Machine (C-ELM), proposed by Huang et al., is a model in which the input weights and the biases of the neurons in the hidden layer are generated randomly; only the output weights need to be computed. As with ordinary neural networks, using C-ELM requires deciding the number of neurons in the hidden layer (hidden nodes) in advance, and when the resulting model performs poorly, one must keep testing by trial and error until a good result is obtained. Because trial and error is time-consuming and cumbersome, we propose an incremental C-ELM, abbreviated IC-ELM. IC-ELM can add hidden nodes automatically, one at a time or several at a time, and the output weights are updated automatically as the number of hidden nodes changes; unlike C-ELM, the output weights do not have to be recomputed from scratch every time the number of hidden nodes changes. The process of adding hidden nodes stops once a pre-defined criterion is satisfied. Experimental results confirm that the proposed IC-ELM is much faster than C-ELM while achieving comparable performance.
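Based on the C-ELM formulation of Huang et al. that this abstract builds on, the output weights are typically obtained in closed form as follows (a sketch in LaTeX notation; the symbols used in the thesis itself may differ):

\boldsymbol{\beta} = \left( \frac{\mathbf{I}}{C} + \mathbf{H}^{\top}\mathbf{H} \right)^{-1} \mathbf{H}^{\top}\mathbf{T}

where \mathbf{H} is the N-by-L hidden-layer output matrix produced by the randomly assigned input weights and biases, \mathbf{T} is the N-by-m target matrix, and C is a regularization constant. The point of IC-ELM, as described above, is to update \boldsymbol{\beta} when new hidden-node columns are appended to \mathbf{H} rather than recomputing this inverse from scratch.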
Abstract
This thesis is divided into two parts. The first part presents a machine learning-based feature extraction method for regression problems. The second part presents an incremental learning method, IC-ELM, for the equality constrained-optimization-based extreme learning machine (C-ELM).
One of the issues encountered in classification and regression is the inefficiency caused by a large number of dimensions or features in the input space. Many approaches have been proposed to handle this issue by reducing the number of dimensions associated with the underlying data set, and statistical methods have prevailed in this area. However, dimensionality reduction has received less attention for regression than for classification. Moreover, most existing methods involve computation with covariance matrices, resulting in an inefficient reduction process. In this thesis, we propose a machine learning-based dimensionality reduction approach for regression problems. Given a set of historical data, the predictor vectors involved are grouped into a number of clusters such that the instances included in the same cluster are similar to one another. The user need not specify the number of clusters in advance; the clusters are created incrementally and their number is determined automatically. Finally, one feature is extracted from each cluster by a weighted combination of the instances contained in it, so the dimensionality of the original data set is reduced. Since all the original features contribute to the making of the extracted features, the characteristics of the original data set are substantially retained. Also, the computation with covariance matrices is avoided, and thus efficiency is maintained. Experimental results on real-world data sets validate the effectiveness of the proposed approach.
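As a concrete illustration of the clustering-based extraction described above, the following Python sketch groups original features incrementally and forms one extracted feature per cluster as a weighted combination of its members. The incremental clustering rule, the cosine similarity measure, the threshold sim_threshold, and the correlation-based weighting are simplifying assumptions made here for illustration; they are not the thesis's exact self-constructing algorithm.

import numpy as np

def extract_features(X, y, sim_threshold=0.8):
    """X: (n_samples, n_features) predictor matrix; y: (n_samples,) regression targets.
    Each original feature (column) is assigned to a cluster incrementally; the number
    of clusters is not fixed in advance. One extracted feature is produced per cluster."""
    n_samples, n_features = X.shape
    clusters = []                                # each cluster is a list of feature indices
    for j in range(n_features):
        fj = X[:, j]
        best, best_sim = None, -1.0
        for k, members in enumerate(clusters):
            center = X[:, members].mean(axis=1)  # cluster center over member features
            sim = np.dot(fj, center) / (np.linalg.norm(fj) * np.linalg.norm(center) + 1e-12)
            if sim > best_sim:
                best, best_sim = k, sim
        if best is None or best_sim < sim_threshold:
            clusters.append([j])                 # open a new cluster for a dissimilar feature
        else:
            clusters[best].append(j)
    # One extracted feature per cluster: a weighted combination of its member features,
    # weighted here by |correlation with the target| (an assumed weighting scheme).
    Z = np.zeros((n_samples, len(clusters)))
    for k, members in enumerate(clusters):
        w = np.array([abs(np.corrcoef(X[:, m], y)[0, 1]) for m in members])
        w = w / (w.sum() + 1e-12)
        Z[:, k] = X[:, members] @ w
    return Z, clusters

Because every extracted column of Z is a weighted combination of original features, the stored clusters and weights can be reused to transform unseen query instances, and no covariance matrix is ever formed.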
The Equality Constrained-Optimization-based Extreme Learning Machine, abbreviated here as C-ELM, was proposed by Huang et al. Its input weights and the biases of the neurons in the hidden layer are randomly assigned; only the output weights are determined analytically. When C-ELM is used as a prediction model, the number of neurons in the hidden layer, referred to here as hidden nodes, must be decided in advance, just as with conventional neural networks. When the performance of the model is unsatisfactory, trial and error is needed to reach a satisfactory result. Since trial and error is inefficient, we propose an incremental version of C-ELM, called IC-ELM. IC-ELM can add hidden nodes one by one or group by group, and the output weights are updated automatically when the number of hidden nodes changes; unlike C-ELM, it does not require recomputing the output weights from scratch each time the number of hidden nodes changes. The adding procedure stops when a pre-defined threshold is satisfied. Experimental results show that IC-ELM is faster than C-ELM and achieves similar performance.
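The following Python sketch illustrates the idea. The closed-form output weights follow the C-ELM solution of Huang et al. noted after the Chinese abstract above, while the incremental step reuses the stored inverse through the standard block-inverse (Schur complement) identity; the exact IC-ELM update derived in the thesis may differ in detail, and the sigmoid activation, uniform random initialization, and regularization constant C are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def random_hidden_layer(X, n_nodes):
    """Randomly assigned input weights and biases, as in (C-)ELM."""
    d = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(d, n_nodes))
    b = rng.uniform(-1.0, 1.0, size=n_nodes)
    return sigmoid(X @ W + b), (W, b)

def celm_train(H, T, C=1.0):
    """C-ELM closed form (N >= L case): beta = (I/C + H^T H)^(-1) H^T T."""
    L = H.shape[1]
    K_inv = np.linalg.inv(np.eye(L) / C + H.T @ H)   # keep this inverse for later updates
    return K_inv @ (H.T @ T), K_inv

def add_hidden_nodes(H, H1, T, K_inv, C=1.0):
    """Append new hidden-node outputs H1 and update beta by reusing
    K_inv = (I/C + H^T H)^(-1) via the block-inverse (Schur complement) identity,
    instead of inverting the enlarged matrix from scratch."""
    B = H.T @ H1
    D = np.eye(H1.shape[1]) / C + H1.T @ H1
    S_inv = np.linalg.inv(D - B.T @ K_inv @ B)        # inverse of the Schur complement
    top_left = K_inv + K_inv @ B @ S_inv @ B.T @ K_inv
    top_right = -K_inv @ B @ S_inv
    K_inv_new = np.block([[top_left, top_right],
                          [top_right.T, S_inv]])
    H_new = np.hstack([H, H1])
    return K_inv_new @ (H_new.T @ T), K_inv_new, H_new

In a full growth loop, add_hidden_nodes would be called repeatedly, one node or a group of nodes at a time, until a pre-defined error threshold is met, mirroring the stopping condition described above.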
Table of Contents
Acknowledgments iii
Chinese Abstract iv
Abstract vi
Table of Contents viii
List of Figures xi
List of Tables xii
Chapter 1 Introduction 1
1.1 Research Background 1
1.1.1 Feature Reduction 2
1.1.2 Extreme Learning Machines 4
1.2 Problem Description 6
1.2.1 Dimensionality Reduction for Regression Problems 6
1.2.2 Incremental Equality Constrained-Optimization-Based ELM 6
1.3 Thesis Organization 7
Chapter 2 Literature Review 8
2.1 Feature Clustering 8
2.2 Extreme Learning Machines 8
2.3 Incremental Learning 10
2.3.1 I-ELM 11
2.3.2 EM-ELM 12
2.4 Equality Constrained-Optimization-Based Extreme Learning Machine 13
Chapter 3 A Dimensionality Reduction Method for Regression Problems 17
3.1 Overview of the Proposed Method 17
3.2 Self-Constructing Clustering 18
3.3 Feature Extraction 20
3.4 Transformation of the Original Data Set and Queries 23
3.5 An Illustrative Example 24
Chapter 4 Incremental Equality Constrained-Optimization-Based Extreme Learning Machine 27
Chapter 5 Experimental Results 35
5.1 Experiments on Dimensionality Reduction for Regression Problems 35
5.1.1 Overview of the Data Sets 36
5.1.2 Accuracy Comparison among Different Methods 37
5.1.3 Running-Time Comparison among Different Methods 43
5.1.4 Accuracy Comparison among Different Weighting Schemes 45
5.1.5 Related Issues 47
5.2 Experiments on IC-ELM 51
5.2.1 Randomly Generated Matrices 52
5.2.2 Real-World Data Sets 53
5.2.3 Different Numbers of Hidden Nodes 56
5.2.4 IC-ELM with the First Solution 58
5.2.5 The Dimensionality Reduction Method with IC-ELM 60
Chapter 6 Conclusion and Future Work 62
References 63
References
[1] J. Han, M. Kamber, and J. Pei. Data Mining: Concepts and Techniques, 3rd edition. Morgan Kaufmann Publishers, 2011.
[2] D. Hand, H. Mannila, and P. Smyth. Principles of Data Mining. The MIT Press, 2001.
[3] D. D. Lewis. Feature Selection and Feature Extraction for Text Categorization. In Proceedings Workshop on Speech and Natural Language, pp. 212-217, 1992.
[4] F. Li and C. Sminchisescu. Feature Selection in Kernel Regression via L1 Regularization. In Proceedings 26th International Conference on Machine Learning, 2009.
[5] S. Maldonado and R. Weber. Feature Selection for Support Vector Regression via Kernel Penalization. In Proceedings International Joint Conference on Neural Networks, pp.1-7, 2010.
[6] J.-A. Ting, A. D. Souza, S. Vijaykumar, and S. Schaal. Efficient Learning and Feature Selection in High-Dimensional Regression. Neural Computation, vol. 22, pp. 831-886, 2010.
[7] M. Hall. Correlation-Based Feature Selection for Machine Learning. Ph.D. Thesis, University of Waikato, 1999.
[8] R. Battiti. Using Mutual Information for Selecting Features in Supervised Neural Net Learning. IEEE Transactions on Neural Networks, vol. 5, no. 4, pp. 537-550, 1994.
[9] P. L. Carmona, J. M. Sotoca, F. Pla, F. K. H. Phoa, and J. B. Dias. Feature Selection in Regression Tasks Using Conditional Mutual Information. In Proceedings 5th Iberian Conference on Pattern Recognition and Image Analysis, pp. 224-231, 2011.
[10] B. Frénay, G. Doquire, and M. Verleysen. Is Mutual Information Adequate for Feature Selection in Regression? Neural Networks, vol. 48, pp. 1-7, 2013.
[11] H. Peng, F. Long, and C. Ding. Feature Selection Based on Mutual Information Criteria of Max-Dependency, Max-Relevance, and Min-Redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1226-1238, 2005.
[12] F. Rossi, A. Lendasse, D. Francois, V. Wertz, and M. Verleysen. Mutual Information for the Selection of Relevant Variables in Spectrometric Nonlinear Modeling. Chemometrics and Intelligent Laboratory Systems, vol. 80, pp. 215-226, 2006.
[13] O. Valenzuela, I. Rojas, L. J. Herrera, A. Guillén, F. Rojas, L. Marquez, and M. Pasadas. Feature Selection Using Mutual Information and Neural Networks. Monografias del Seminario Matemático García de Galdeano, vol. 33, pp. 331-340, 2006.
[14] Y. Xu, G. Jones, J.-T. Li, B. Wang, and C.-M. Sun. A Study on Mutual Information-Based Feature Selection for Text Categorization. Journal of Computational Information Systems, vol. 3, no. 3, pp. 1007-1012, 2007.
[15] G. Doquire and M. Verleysen. A Graph Laplacian Based Approach to Semi-Supervised Feature Selection for Regression Problems. Neurocomputing, vol. 121, pp. 5-13, 2013.
[16] X. He, D. Cai, and P. Niyogi. Laplacian Score for Feature Selection. Advances in Neural Information Processing Systems 18, The MIT Press, 2005.
[17] X. He, M. Ji, C. Zhang, and H. Bao. A Variance Minimization Criterion to Feature Selection Using Laplacian Regularization. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 10, pp. 2013-2025, 2011.
[18] A. Kyrillidis and V. Cevher. Combinatorial Selection and Least Absolute Shrinkage via the CLASH Algorithm. In Proceedings IEEE International Symposium on Information Theory, pp. 2216-2220, 2012.
[19] D. Paul, E. Bair, T. Hastie, and R. Tibshirani. “Preconditioning” for Feature Selection and Regression in High-Dimensional Problems. The Annals of Statistics, vol. 36, no. 4, pp. 1595-1618, 2008.
[20] R. Tibshirani. Regression Shrinkage and Selection via the LASSO. Journal of the Royal Statistical Society, Series B, vol. 58, no. 1, pp. 267-288, 1996.
[21] R. Tibshirani. Regression Shrinkage and Selection via the LASSO: a Retrospective. Journal of the Royal Statistical Society, Series B, vol. 73, no. 3, pp. 273-282, 2011.
[22] J. Yan, B. Zhang, N. Liu, S. Yan, Q. Cheng, W. Fan, Q. Yang, W. Xi, and Z. Chen. Effective and Efficient Dimensionality Reduction for Large-Scale and Streaming Data Preprocessing. IEEE Transactions on Knowledge and Data Engineering, vol. 18, no. 3, pp. 320-333, 2006.
[23] I. T. Jolliffe. Principal Component Analysis. Springer-Verlag, 1986.
[24] S. T. Roweis and L. K. Saul. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science, vol. 290, no. 5500, pp. 2323-2326, 2000.
[25] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A Global Geometric Framework for Nonlinear Dimensionality Reduction. Science, vol. 290, no. 5500, pp. 2319-2323, 2000.
[26] A. M. Martinez and A. C. Kak. PCA versus LDA. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 228-233, 2001.
[27] K.-C. Li. Sliced Inverse Regression for Dimension Reduction. Journal of the American Statistical Association, vol. 86, no. 414, pp. 316-327, 1991.
[28] R. D. Cook and S. Weisberg. Sliced Inverse Regression for Dimension Reduction: Comment. Journal of the American Statistical Association, vol. 86, no. 414, pp. 328-332, 1991.
[29] K.-C. Li. On Principal Hessian Directions for Data Visualization and Dimension Reduction: Another Application of Stein’s Lemma. Journal of the American Statistical Association, vol. 87, no. 420, pp. 1025-1039, 1992.
[30] N. Kwak and J.-W. Lee. Feature Extraction Based on Subspace Methods for Regression Problems. Neurocomputing, vol. 73, pp. 1740-1751, 2010.
[31] F. Pereira, N. Tishby, and L. Lee. Distributional Clustering of English Words. In Proceedings 31st Annual Meeting of the ACL, pp. 183-190, 1993.
[32] L. D. Baker and A. McCallum. Distributional Clustering of Words for Text Classification. In Proceedings 21st Annual International ACM SIGIR, pp. 96-103, 1998.
[33] R. Bekkerman, R. El-Yaniv, N. Tishby, and Y. Winter. Distributional Word Clusters vs. Words for Text Categorization. Journal of Machine Learning Research, vol. 3, pp. 1183-1208, 2003.
[34] M. C. Dalmau and O. W. M. Flórez. Experimental Results of the Signal Processing Approach to Distributional Clustering of Terms on the Reuters-21578 Collection. In Proceedings 29th European Conference on IR Research, pp. 678-681, 2007.
[35] I. S. Dhillon, S. Mallela, and R. Kumar. A Divisive Information-Theoretic Feature Clustering Algorithm for Text Classification. Journal of Machine Learning Research, vol. 3, pp. 1265-1287, 2003.
[36] J.-Y. Jiang and S.-J. Lee. A Weight-Based Feature Extraction Approach for Text Classification. In Proceedings 2nd International Conference on Innovative Computing, Information and Control, 2007.
[37] J.-Y. Jiang, R.-J. Liou, and S.-J. Lee. A Fuzzy Self-Constructing Feature Clustering Algorithm for Text Classification. IEEE Transactions on Knowledge and Data Engineering, vol. 23, no. 3, pp. 335-349, 2011.
[38] K. Pearson. Notes on the History of Correlation. Biometrika, vol. 13, no. 1, pp. 25-45, 1920.
[39] H.-H. Hsu and C.-W. Hsieh. Feature Selection via Correlation Coefficient Clustering. Journal of Software, vol. 5, no. 12, pp. 1371-1377, 2010.
[40] M. T. Hagan, H. B. Demuth, and M. H. Beale. Neural Network Design. PWS Publishing Company, 2002.
[41] A. Hart. Using Neural Networks for Classification Tasks—Some Experiments on Data Sets and Practical Advice. The Journal of the Operational Research Society, vol. 43, no. 3, pp. 215-226, 1992.
[42] B. Cheng and D. M. Titterington. Neural Networks: A Review from a Statistical Perspective. Statistical Science, vol. 9, no. 1, pp. 2-30, 1994.
[43] B. D. Ripley. Neural Networks and Related Methods for Classification. Journal of the Royal Statistical Society. Series B (Methodological), vol. 56, no. 3, pp. 409-456, 1994.
[44] T.-Y. Kwok and D.-Y. Yeung. Constructive Algorithms for Structure Learning in Feedforward Neural Networks for Regression Problems. IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 630-645, 1997.
[45] E. W. M. Lee, C. P. Lim, R. K. K. Yuen, and S. M. Lo. A Hybrid Neural Network Model for Noisy Data Regression. IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics, vol. 34, no. 2, pp. 951-960, 2004.
[46] D. F. Specht. A General Regression Neural Network. IEEE Transactions on Neural Networks, vol. 2, no. 6, pp. 568-576, 1991.
[47] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning Representations by Back-propagating Errors. Nature, vol. 323, pp. 533-536, 1986.
[48] M. T. Hagan and M. B. Menhaj. Training Feedforward Networks with the Marquardt Algorithm. IEEE Transactions on Neural Networks, vol. 5, no. 6, pp. 989-993, 1994.
[49] C. Charalambous. Conjugate Gradient Algorithm for Efficient Training of Artificial Neural Networks. IEE Proceedings G, vol. 139, pp. 301-310, 1992.
[50] T. P. Vogl, J. K. Mangis, A. K. Rigler, W. T. Zink, and D. L. Alkon. Accelerating the Convergence of the Back-propagation Method. Biological Cybernetics, vol. 59, no. 4-5, pp. 257-263, 1988.
[51] R. A. Jacobs. Increased Rates of Convergence through Learning Rate Adaptation. Neural Networks, vol. 1, no. 4, pp. 295-307, 1988.
[52] C. Cortes and V. Vapnik. Support-Vector Networks. Machine Learning, vol. 20, no. 3, pp. 273-297, 1995.
[53] J. A. K. Suykens and J. Vandewalle. Least Squares Support Vector Machine Classifiers. Neural Processing Letters, vol. 9, no. 3, pp. 293-300, 1999.
[54] H. Drucker, C. J. Burges, L. Kaufman, A. Smola, and V. Vapnik. Support Vector Regression Machines. Advances in Neural Information Processing Systems 9, M. Mozer, M. Jordan, and T. Petsche, Eds. Cambridge, MA: MIT Press, pp. 155-161, 1997.
[55] C.-W. Hsu and C.-J. Lin. A Comparison of Methods for Multiclass Support Vector Machines. IEEE Transactions on Neural Networks, vol. 13, no. 2, pp. 415-425, 2002.
[56] J. A. K. Suykens, J. De Brabanter, L. Lukas, and J. Vandewalle. Weighted Least Squares Support Vector Machines: Robustness and Sparse Approximation. Neurocomputing, vol. 48, no. 1-4, pp. 85-105, 2002.
[57] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew. Extreme Learning Machine: Theory and Applications. Neurocomputing, vol. 70, no. 1-3, pp. 489-501, 2006.
[58] G.-B. Huang, X. Ding, and H. Zhou. Optimization Method Based Extreme Learning Machine for Classification. Neurocomputing, vol. 74, no. 1-3, pp. 155-163, 2010.
[59] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang. Extreme Learning Machine for Regression and Multiclass Classification. IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics, vol. 42, no. 2, pp. 513-529, 2012.
[60] G.-B. Huang, L. Chen, and C.-K. Siew. Universal Approximation Using Incremental Constructive Feedforward Networks with Random Hidden Nodes. IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879-892, 2006.
[61] G.-B. Huang and L. Chen. Enhanced Random Search Based Incremental Extreme Learning Machine. Neurocomputing, vol. 71, no. 16-18, pp. 3460-3468, 2008.
[62] G. Feng, G.-B. Huang, Q. Lin, and R. Gay. Error Minimized Extreme Learning Machine with Growth of Hidden Nodes and Incremental Learning. IEEE Transactions on Neural Networks, vol. 20, no. 8, pp. 1352-1357, 2009.
[63] H.-J. Rong, G.-B. Huang, N. Sundararajan, and P. Saratchandran. Online Sequential Fuzzy Extreme Learning Machine for Function Approximation and Classification Problems. IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics, vol. 39, no. 4, pp. 1067-1072, 2009.
[64] G.-B. Huang, Q.-Y. Zhu, K. Z. Mao, C.-K. Siew, P. Saratchandran, and N. Sundararajan. Can Threshold Networks Be Trained Directly? IEEE Transactions on Circuits and Systems—II, vol. 53, no. 3, pp. 187-191, 2006.
[65] F. Han and D.-S. Huang. Improved Extreme Learning Machine for Function Approximation by Encoding a Priori Information. Neurocomputing, vol. 69, no. 16-18, pp. 2369-2373, 2006.
[66] G.-B. Huang, M.-B. Li, L. Chen, and C.-K. Siew. Incremental Extreme Learning Machine with Fully Complex Hidden Nodes. Neurocomputing, vol. 71, no. 4-6, pp. 576-583, 2008.
[67] Z. Deng, K.-S. Choi, L. Cao, and S. Wang. T2FELA: Type-2 Fuzzy Extreme Learning Algorithm for Fast Training of Interval Type-2 TSK Fuzzy Logic System. IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 4, pp. 664-676, 2014.
[68] Z.-L. Sun, K.-F. Au, and T.-M. Choi. A Neuro-Fuzzy Inference System Through Integration of Fuzzy Logic and Extreme Learning Machines. IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics, vol. 37, no. 5, pp. 1321-1331, 2007.
[69] W. B. Zhang and H. B. Ji. Fuzzy Extreme Learning Machine for Classification. Electronics Letters, vol. 49, no. 7, pp. 448-450, 2013.
[70] R. Wan, S. Kwong, and D. D. Wang. An Analysis of ELM Approximate Error Based on Random Weight Matrix. International Journal of Uncertainty, Fuzziness, and Knowledge-Based Systems, vol. 21, no. 2, pp. 1-12, 2013.
[71] G.-B. Huang. An Insight Into Extreme Learning Machine: Random Neurons, Random Features and Random Kernels. Cognitive Computation, in press, 2014.
[72] R. Fletcher. Practical Methods of Optimization, Volume 2: Constrained Optimization. New York: Wiley, 1981.
[73] W. Härdle and L. Simar. Applied Multivariate Statistical Analysis. Springer-Verlag Berlin Heidelberg, 2003.
[74] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. SIAM, 1994.
[75] K. S. Miller. On the Inverse of the Sum of Matrices. Mathematics Magazine, vol. 54, no. 2, pp. 67-72, 1981.
[76] UCI data set. http://archive.ics.uci.edu/ml/
[77] Regression data set. http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html
[78] S. Weisberg. “dr” package. http://cran.r-project.org/web/packages/dr/index.html.
[79] BP source code in the MATLAB toolbox, 2013.
[80] S. Theodoridis and K. Koutroumbas. Pattern Recognition, 4th edition. Academic Press, Canada, 2009.
[81] P. A. Estévez, M. Tesmer, C. A. Perez, and J. M. Zurada. Normalized Mutual Information Feature Selection. IEEE Transactions on Neural Networks, vol. 20, no. 2, pp. 189-201, 2009.
[82] S. Menard. Coefficients of Determination for Multiple Logistic Regression Analysis. The American Statistician, vol. 54, no. 1, pp. 17-24, 2000.
Fulltext
This electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: user-defined release time
Available:
Campus: available
Off-campus: available


Printed copies
Information on the public availability of printed copies is relatively complete for academic year 102 (2013-2014) and later. To inquire about the availability of printed copies from academic year 101 or earlier, please contact the printed thesis service counter of the Office of Library and Information Services. We apologize for any inconvenience.
Available: available
