Class ElasticNetLoss<T>
Namespace: AiDotNet.LossFunctions
Assembly: AiDotNet.dll
Implements the Elastic Net Loss function, which combines Mean Squared Error with L1 and L2 regularization.
public class ElasticNetLoss<T> : LossFunctionBase<T>, ILossFunction<T>
Type Parameters
T: The numeric type used for calculations (e.g., float, double).
Inheritance
object → LossFunctionBase<T> → ElasticNetLoss<T>
Implements
ILossFunction<T>
Remarks
For Beginners: Elastic Net Loss combines the Mean Squared Error (which measures prediction accuracy) with two types of regularization (which prevent overfitting):
- L1 regularization (also called Lasso): Helps select only the most important features by pushing some weights to zero
- L2 regularization (also called Ridge): Prevents any single weight from becoming too large
The formula is: MSE + alpha * [l1Ratio * |weights|_1 + (1 - l1Ratio) * 0.5 * |weights|_2²], where:
- MSE is the Mean Squared Error
- |weights|_1 is the L1 norm (sum of absolute values of the weights)
- |weights|_2² is the squared L2 norm (sum of squared weights)
- alpha is the regularization strength
- l1Ratio controls the mix between L1 and L2 regularization
The l1Ratio parameter (between 0 and 1) controls the balance:
- When l1Ratio = 1: Only L1 regularization is used (Lasso)
- When l1Ratio = 0: Only L2 regularization is used (Ridge)
- Values in between: A mix of both (Elastic Net)
This loss function is particularly useful when:
- You have many correlated features
- You want to perform feature selection (L1 component)
- You still want the stability of L2 regularization
- You want to balance between model simplicity and prediction accuracy
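The formula above can be checked with a small self-contained sketch using plain arrays rather than AiDotNet's Vector<T>; the weight, l1Ratio, and alpha values here are illustrative only, not part of the library's API:

```csharp
using System;
using System.Linq;

public static class ElasticNetDemo
{
    // Elastic net loss per the documented formula:
    // MSE + alpha * [l1Ratio * |w|_1 + (1 - l1Ratio) * 0.5 * |w|_2²]
    public static double ElasticNetLoss(
        double[] predicted, double[] actual, double[] weights,
        double l1Ratio = 0.5, double alpha = 0.01)
    {
        // Mean Squared Error over the predictions
        double mse = predicted.Zip(actual, (p, a) => (p - a) * (p - a)).Average();

        double l1 = weights.Sum(w => Math.Abs(w)); // |w|_1: sum of absolute values
        double l2Sq = weights.Sum(w => w * w);     // |w|_2²: sum of squares

        return mse + alpha * (l1Ratio * l1 + (1 - l1Ratio) * 0.5 * l2Sq);
    }

    public static void Main()
    {
        double[] predicted = { 1.0, 2.0, 3.0 };
        double[] actual = { 1.5, 2.0, 2.5 };
        double[] weights = { 0.4, -0.2 };

        // MSE = 0.5/3 ≈ 0.16667; penalty = 0.01 * (0.5*0.6 + 0.5*0.5*0.2) = 0.0035
        Console.WriteLine(ElasticNetLoss(predicted, actual, weights)); // ≈ 0.17017
    }
}
```

Note how the penalty term depends only on the weights, so with alpha = 0 the result reduces to plain MSE.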
Constructors
ElasticNetLoss(double, double)
Initializes a new instance of the ElasticNetLoss class.
public ElasticNetLoss(double l1Ratio = 0.5, double alpha = 0.01)
Parameters
l1Ratio (double): The mixing parameter between L1 and L2 regularization (0 to 1). Default is 0.5.
alpha (double): The regularization strength parameter. Default is 0.01.
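As a sketch of how these parameters might be used, based only on the signatures shown on this page (the Vector<double> construction from an array is an assumption; the actual constructor or factory method in AiDotNet may differ):

```csharp
using AiDotNet.LossFunctions;

// Favor feature selection: 80% L1, 20% L2, with a mild penalty.
var loss = new ElasticNetLoss<double>(l1Ratio: 0.8, alpha: 0.001);

// Hypothetical Vector<double> construction for illustration.
var predicted = new Vector<double>(new[] { 1.0, 2.0, 3.0 });
var actual = new Vector<double>(new[] { 1.1, 1.9, 3.2 });

double value = loss.CalculateLoss(predicted, actual);
Vector<double> gradient = loss.CalculateDerivative(predicted, actual);
```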
Methods
CalculateDerivative(Vector<T>, Vector<T>)
Calculates the derivative of the Elastic Net Loss function.
public override Vector<T> CalculateDerivative(Vector<T> predicted, Vector<T> actual)
Parameters
predicted (Vector<T>): The predicted values vector.
actual (Vector<T>): The actual (ground truth) values vector.
Returns
Vector<T>: A vector containing the derivatives of the elastic net loss with respect to each predicted value.
CalculateLoss(Vector<T>, Vector<T>)
Calculates the Elastic Net Loss between predicted and actual values.
public override T CalculateLoss(Vector<T> predicted, Vector<T> actual)
Parameters
predicted (Vector<T>): The predicted values vector.
actual (Vector<T>): The actual (ground truth) values vector.
Returns
T: The elastic net loss value.
CalculateLossAndGradientGpu(IGpuTensor<T>, IGpuTensor<T>)
Calculates both Elastic Net loss and gradient on GPU in a single efficient pass.
public override (T Loss, IGpuTensor<T> Gradient) CalculateLossAndGradientGpu(IGpuTensor<T> predicted, IGpuTensor<T> actual)
Parameters
predicted (IGpuTensor<T>): The predicted GPU tensor from the model.
actual (IGpuTensor<T>): The actual (target) GPU tensor.