Class HingeLoss<T>
- Namespace
- AiDotNet.LossFunctions
- Assembly
- AiDotNet.dll
Implements the Hinge loss function commonly used in support vector machines.
public class HingeLoss<T> : LossFunctionBase<T>, ILossFunction<T>
Type Parameters
T: The numeric type used for calculations (e.g., float, double).
- Inheritance
- LossFunctionBase<T> → HingeLoss<T>
- Implements
- ILossFunction<T>
Remarks
For Beginners: Hinge loss is used for binary classification problems, particularly in support vector machines (SVMs). It measures how well your model separates different classes.
The formula is: max(0, 1 - y * f(x)), where:
- y is the true label (usually -1 or 1)
- f(x) is the model's prediction
Key properties of hinge loss:
- It penalizes predictions that are incorrect or not confident enough
- It's zero when the prediction is correct and confident (y*f(x) ≥ 1)
- It increases linearly as y*f(x) falls below 1, i.e., as the prediction becomes less confident or more wrong
- It encourages the model to find a decision boundary with a large margin between classes
This loss function is ideal for binary classification tasks where you want to maximize the margin between different classes, which often improves generalization to new data.
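To make the formula concrete, the following is a minimal sketch of the computation described above, written against plain arrays rather than the library's types. The sample scores, labels, and the averaging over samples are illustrative; only the max(0, 1 - y * f(x)) formula and the idea of averaging across samples come from this page.
using System;

// Illustrative scores f(x) and labels y in {-1, +1}
double[] predicted = { 0.8, -0.4, 1.5 };
double[] actual    = { 1.0, -1.0, -1.0 };

double total = 0.0;
for (int i = 0; i < predicted.Length; i++)
{
    // max(0, 1 - y * f(x)) for each sample
    total += Math.Max(0.0, 1.0 - actual[i] * predicted[i]);
}

double averageHingeLoss = total / predicted.Length;
// Sample 0: max(0, 1 - 0.8) = 0.2 (correct side, but inside the margin)
// Sample 1: max(0, 1 - 0.4) = 0.6 (correct side, but inside the margin)
// Sample 2: max(0, 1 + 1.5) = 2.5 (wrong side of the boundary)
// averageHingeLoss = (0.2 + 0.6 + 2.5) / 3 = 1.1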
Methods
CalculateDerivative(Vector<T>, Vector<T>)
Calculates the derivative of the Hinge loss function.
public override Vector<T> CalculateDerivative(Vector<T> predicted, Vector<T> actual)
Parameters
predicted (Vector<T>): The predicted values from the model.
actual (Vector<T>): The actual (target) values, typically -1 or 1.
Returns
- Vector<T>
A vector containing the derivatives of Hinge loss for each prediction.
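The exact derivative convention is not spelled out on this page, so the following is a hedged sketch of the standard hinge-loss subgradient with respect to the prediction: -y when y * f(x) < 1, and 0 once the margin is satisfied. The boundary handling at y * f(x) = 1 and any per-sample scaling applied by CalculateDerivative are assumptions.
using System;

// Sketch of the standard hinge-loss subgradient per element (assumed convention).
static double[] HingeSubgradient(double[] predicted, double[] actual)
{
    var grad = new double[predicted.Length];
    for (int i = 0; i < predicted.Length; i++)
    {
        // d/df max(0, 1 - y*f) = -y when y*f < 1; 0 when the margin is met.
        grad[i] = actual[i] * predicted[i] < 1.0 ? -actual[i] : 0.0;
    }
    return grad;
}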
CalculateLoss(Vector<T>, Vector<T>)
Calculates the Hinge loss between predicted and actual values.
public override T CalculateLoss(Vector<T> predicted, Vector<T> actual)
Parameters
predicted (Vector<T>): The predicted values from the model.
actual (Vector<T>): The actual (target) values, typically -1 or 1.
Returns
- T
The average Hinge loss across all samples.
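A usage sketch of the two documented methods follows. The parameterless constructor and the construction of Vector<double> from a double[] are assumptions (they are not shown on this page); only the CalculateLoss and CalculateDerivative signatures come from the documentation above.
using AiDotNet.LossFunctions;

// Hypothetical usage; the constructor and Vector<double> creation are assumed.
var loss = new HingeLoss<double>();

var predicted = new Vector<double>(new[] { 0.8, -0.4, 1.5 });   // model scores f(x)
var actual    = new Vector<double>(new[] { 1.0, -1.0, -1.0 });  // labels y in {-1, +1}

double averageLoss = loss.CalculateLoss(predicted, actual);            // average hinge loss
Vector<double> gradient = loss.CalculateDerivative(predicted, actual); // per-element derivatives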
CalculateLossAndGradientGpu(IGpuTensor<T>, IGpuTensor<T>)
Calculates both Hinge loss and gradient on GPU in a single efficient pass.
public override (T Loss, IGpuTensor<T> Gradient) CalculateLossAndGradientGpu(IGpuTensor<T> predicted, IGpuTensor<T> actual)
Parameters
predicted (IGpuTensor<T>): The predicted GPU tensor from the model.
actual (IGpuTensor<T>): The actual (target) GPU tensor.