Class MeanAbsoluteErrorLoss<T>
Namespace: AiDotNet.LossFunctions
Assembly: AiDotNet.dll
Implements the Mean Absolute Error (MAE) loss function.
public class MeanAbsoluteErrorLoss<T> : LossFunctionBase<T>, ILossFunction<T>
Type Parameters
T: The numeric type used for calculations (e.g., float, double).
- Inheritance
- object → LossFunctionBase<T> → MeanAbsoluteErrorLoss<T>
- Implements
- ILossFunction<T>
Remarks
For Beginners: Mean Absolute Error measures the average absolute difference between predicted and actual values.
The formula is: MAE = (1/n) * Σ|predicted - actual|
MAE has these key properties:
- It treats all errors linearly (unlike MSE which squares errors)
- It's less sensitive to outliers than MSE
- It's simple to understand as the average magnitude of errors
- It's always positive, with perfect predictions giving a value of zero
MAE is ideal for problems where:
- You're predicting continuous values
- You want all errors to be treated equally (not emphasizing large errors)
- The prediction errors follow a Laplace distribution
- Outliers should not have a disproportionate influence on the model
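To make the formula concrete, here is a minimal standalone sketch that computes MAE over plain double arrays. It mirrors the formula above but does not depend on the library's Vector<T> type.

using System;

// MAE = (1/n) * Σ|predicted - actual|, written out on plain double arrays.
static double MeanAbsoluteError(double[] predicted, double[] actual)
{
    if (predicted.Length != actual.Length)
        throw new ArgumentException("Inputs must have the same length.");

    double sum = 0.0;
    for (int i = 0; i < predicted.Length; i++)
        sum += Math.Abs(predicted[i] - actual[i]);

    return sum / predicted.Length;
}

double[] predictions = { 2.5, 0.0, 2.0, 8.0 };
double[] targets = { 3.0, -0.5, 2.0, 7.0 };

// Absolute errors are 0.5, 0.5, 0.0, 1.0, so MAE = 2.0 / 4 = 0.5.
Console.WriteLine(MeanAbsoluteError(predictions, targets));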
Methods
CalculateDerivative(Vector<T>, Vector<T>)
Calculates the derivative of the Mean Absolute Error loss function.
public override Vector<T> CalculateDerivative(Vector<T> predicted, Vector<T> actual)
Parameters
predicted (Vector<T>): The predicted values from the model.
actual (Vector<T>): The actual (target) values.
Returns
- Vector<T>
A vector containing the derivatives of MAE for each prediction.
Remarks
The derivative of MAE with respect to each prediction is sign(predicted - actual) / n, where:
- sign(x) = 1 if x > 0
- sign(x) = -1 if x < 0
- sign(x) = 0 if x = 0 (subgradient at the kink point)
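As an illustration, a minimal standalone sketch of this element-wise rule on plain double arrays (again independent of Vector<T>):

using System;

// Element-wise MAE derivative: sign(predicted - actual) / n.
static double[] MaeDerivative(double[] predicted, double[] actual)
{
    int n = predicted.Length;
    var grad = new double[n];
    for (int i = 0; i < n; i++)
        grad[i] = Math.Sign(predicted[i] - actual[i]) / (double)n; // Math.Sign returns 1, -1, or 0, matching the subgradient convention above

    return grad;
}

double[] g = MaeDerivative(new[] { 2.5, 0.0, 2.0 }, new[] { 3.0, -0.5, 2.0 });

// Differences are -0.5, 0.5, 0.0, so the gradient is -1/3, 1/3, 0.
Console.WriteLine(string.Join(", ", g));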
CalculateLoss(Vector<T>, Vector<T>)
Calculates the Mean Absolute Error between predicted and actual values.
public override T CalculateLoss(Vector<T> predicted, Vector<T> actual)
Parameters
predicted (Vector<T>): The predicted values from the model.
actual (Vector<T>): The actual (target) values.
Returns
- T
The mean absolute error value.
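A usage sketch of this method. Only the MeanAbsoluteErrorLoss<T> calls come from this page; how Vector<T> is constructed is an assumption here, so adapt the array-taking constructor below to however Vector<T> is actually created in AiDotNet.

using AiDotNet.LossFunctions;

var loss = new MeanAbsoluteErrorLoss<double>();

// Assumption: Vector<double> can be built from an array (hypothetical constructor).
var predicted = new Vector<double>(new[] { 2.5, 0.0, 2.0 });
var actual = new Vector<double>(new[] { 3.0, -0.5, 2.0 });

double mae = loss.CalculateLoss(predicted, actual);         // scalar MAE
var gradient = loss.CalculateDerivative(predicted, actual); // per-element sign(diff)/n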
CalculateLossAndGradientGpu(IGpuTensor<T>, IGpuTensor<T>)
Calculates both MAE loss and gradient on GPU in a single efficient pass.
public override (T Loss, IGpuTensor<T> Gradient) CalculateLossAndGradientGpu(IGpuTensor<T> predicted, IGpuTensor<T> actual)
Parameters
predicted (IGpuTensor<T>): The predicted GPU tensor from the model.
actual (IGpuTensor<T>): The actual (target) GPU tensor.
Returns
- (T Loss, IGpuTensor<T> Gradient)
A tuple containing the scalar loss value and the gradient tensor, both computed in a single GPU pass.
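A usage sketch of the fused GPU path. Creating IGpuTensor<T> instances is not covered on this page, so gpuPredicted and gpuActual are assumed to already exist.

// Assumption: gpuPredicted and gpuActual are pre-existing IGpuTensor<double> instances.
var loss = new MeanAbsoluteErrorLoss<double>();

var (maeLoss, gradient) = loss.CalculateLossAndGradientGpu(gpuPredicted, gpuActual);

// maeLoss is the scalar MAE; gradient holds sign(predicted - actual)/n per element,
// both produced in one pass without leaving the GPU.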