
A Tutorial on Support Vector Machines for Pattern Recognition


Abstract

The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique used to construct SVM solutions that are nonlinear in the data. We show how Support Vector Machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
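To make the objects named above concrete, here is a brief sketch in the standard notation of the SVM literature; it is illustrative only and is not quoted from the paper. Given training pairs (x_i, y_i) with labels y_i = ±1, the soft-margin (non-separable) linear SVM, the kernelized decision function, and the two kernel families whose VC dimension the tutorial computes can be written as:

    % Soft-margin linear SVM, primal form, with slack variables \xi_i
    % and misclassification penalty C (standard notation, not quoted
    % from the paper):
    \min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C\sum_{i=1}^{\ell}\xi_i
    \quad\text{subject to}\quad
    y_i\,(w \cdot x_i + b)\ \ge\ 1 - \xi_i, \qquad \xi_i\ \ge\ 0.

    % Kernel mapping: dot products are replaced by a kernel
    % K(x,y) = \Phi(x)\cdot\Phi(y), so the trained classifier depends on
    % the data only through K; the sum runs over the support vectors s_i
    % with Lagrange multipliers \alpha_i:
    f(x)\ =\ \operatorname{sgn}\Big(\sum_{i}\alpha_i\,y_i\,K(s_i, x) + b\Big).

    % The two kernels for which the tutorial computes a (possibly
    % infinite) VC dimension:
    K(x,y) = (x \cdot y)^p \quad\text{(homogeneous polynomial)}, \qquad
    K(x,y) = e^{-\|x-y\|^2 / 2\sigma^2} \quad\text{(Gaussian RBF)}.

The point of the kernel form is that the mapping \Phi never needs to be computed explicitly, which is how SVMs with very large or infinite VC dimension remain practical to train.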



Author information

Christopher J.C. Burges, Bell Laboratories, Lucent Technologies

About this article

Cite this article

Burges, C.J. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery 2, 121–167 (1998). https://doi.org/10.1023/A:1009715923555

