Within [[mathematical analysis]], '''Regularization perspectives on support-vector machines''' provide a way of interpreting [[support-vector machine]]s (SVMs) in the context of other regularization-based machine-learning algorithms. SVM algorithms categorize binary data, with the goal of fitting the [[training set]] data in a way that minimizes the average of the hinge-loss function and the L2 norm of the learned weights. This strategy avoids [[overfitting]] via [[Tikhonov regularization]] in the L2-norm sense, and also corresponds to balancing the bias and variance of the estimator of the weights.
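In symbols, for training pairs <math>(x_i, y_i)</math> with labels <math>y_i \in \{-1, 1\}</math>, a linear decision function <math>f(x) = \langle w, x \rangle</math>, and a regularization parameter <math>\lambda > 0</math> (notation introduced here for illustration), the objective described above is commonly written as

<math display="block">\min_{w} \; \frac{1}{n} \sum_{i=1}^{n} \max\bigl(0,\, 1 - y_i \langle w, x_i \rangle\bigr) + \lambda \lVert w \rVert_2^2 ,</math>

where the first term is the average hinge loss over the training set and the second is the Tikhonov (L2) penalty on the weights.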
Specifically, [[Tikhonov regularization]] algorithms produce a decision boundary that minimizes the average training-set error, while an L2-norm penalty on the weights constrains the model from becoming excessively complicated or overfitting the training data. The training- and test-set errors can then be measured in an unbiased way using accuracy, precision, AUC-ROC, precision–recall curves, and other metrics, as in the sketch below.
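As a concrete illustration, the following is a minimal sketch, assuming [[scikit-learn]] is available; the synthetic dataset, the parameter <code>C</code>, and the metric choices are placeholders, not a prescribed setup. It fits a linear SVM that minimizes average hinge loss plus an L2 weight penalty, then reports the metrics named above on held-out data.

<syntaxhighlight lang="python">
# Minimal sketch: fit a linear SVM (average hinge loss + L2 weight penalty)
# and evaluate it on held-out data with the metrics named above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score, precision_score, roc_auc_score

# Synthetic binary data as a stand-in for a real training set.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# loss="hinge" with the default L2 penalty gives the Tikhonov-regularized
# objective; C is the inverse of the regularization strength lambda.
clf = LinearSVC(loss="hinge", C=1.0, max_iter=10_000)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
scores = clf.decision_function(X_test)  # signed margins, used for AUC-ROC
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("AUC-ROC  :", roc_auc_score(y_test, scores))
</syntaxhighlight>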