
Support vector machine
Support vector machine (SVM) algorithms emerged in the 1990s from the research of Vladimir Vapnik and his collaborators. They classify instances starting from a training set of experimental data whose characteristic parameters are known: the goal is to build a system that learns from already correctly classified data and produces a classification function able to catalog data even outside that set.

The main characteristic of SVMs, which led to their immediate success, is that they are based on simple ideas yet achieve high performance in practical applications; they are also relatively easy to analyze mathematically while still being able to capture complex models. Training an SVM can be reduced to a quadratic programming problem with linear constraints. These models are used in very different contexts, among which the most common are pattern recognition, text cataloging, and image identification.
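As a minimal sketch of the workflow described above (train on labeled data, then classify data outside the training set), the following example uses scikit-learn; the library, dataset, and parameters are illustrative assumptions, not prescribed by the text:

```python
# Illustrative sketch: SVM classification with scikit-learn (assumed library choice).
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A small, already correctly classified dataset plays the role of the
# "training set of experimental data whose characteristic parameters are known".
X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Fitting the classifier internally solves a quadratic programming
# problem with linear constraints.
clf = SVC(kernel="linear")
clf.fit(X_train, y_train)

# The learned classification function can now catalog data outside the training set.
print("Accuracy on unseen data:", clf.score(X_test, y_test))
```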