Class MeanSquaredErrorLoss<T>
- Namespace
- AiDotNet.LossFunctions
- Assembly
- AiDotNet.dll
Implements the Mean Squared Error (MSE) loss function.
public class MeanSquaredErrorLoss<T> : LossFunctionBase<T>, ILossFunction<T>
Type Parameters
T
The numeric type used for calculations (e.g., float, double).
- Inheritance
object ← LossFunctionBase<T> ← MeanSquaredErrorLoss<T>
- Implements
ILossFunction<T>
Remarks
For Beginners: Mean Squared Error is one of the most common loss functions used in regression problems. It measures the average squared difference between predicted and actual values.
The formula is: MSE = (1/n) * Σ(predicted − actual)², where n is the number of data points. A minimal computation sketch follows the lists below.
MSE has these key properties:
- It heavily penalizes large errors due to the squaring operation
- It treats all data points equally
- It's differentiable everywhere, making it suitable for gradient-based optimization
- It's always positive, with perfect predictions giving a value of zero
MSE is ideal for problems where:
- You're predicting continuous values (like prices, temperatures, etc.)
- Large errors should be penalized disproportionately (outliers dominate the loss because of the squaring)
- The prediction errors follow a normal distribution
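To make the formula concrete, here is a minimal sketch of the same computation over plain double arrays; it is independent of AiDotNet's Vector<T> type and exists only to illustrate the arithmetic.

// Minimal sketch: MSE = (1/n) * Σ(predicted − actual)²
static double MeanSquaredError(double[] predicted, double[] actual)
{
    if (predicted.Length != actual.Length)
        throw new ArgumentException("Inputs must have the same length.");

    double sumOfSquares = 0.0;
    for (int i = 0; i < predicted.Length; i++)
    {
        double error = predicted[i] - actual[i];
        sumOfSquares += error * error; // squaring penalizes large errors heavily
    }
    return sumOfSquares / predicted.Length; // average over n points
}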
Methods
CalculateDerivative(Vector<T>, Vector<T>)
Calculates the derivative of the Mean Squared Error loss function.
public override Vector<T> CalculateDerivative(Vector<T> predicted, Vector<T> actual)
Parameters
predicted Vector<T>
The predicted values from the model.
actual Vector<T>
The actual (target) values.
Returns
- Vector<T>
A vector containing the derivatives of MSE for each prediction.
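For reference, the conventional element-wise gradient is ∂MSE/∂predictedᵢ = (2/n)(predictedᵢ − actualᵢ). The sketch below computes that standard form over plain arrays; whether this implementation keeps or folds the constant factor of 2 is not stated on this page, so treat the exact scaling as an assumption.

// Standard MSE gradient sketch: (2/n) * (predicted − actual) per element
static double[] MseGradient(double[] predicted, double[] actual)
{
    int n = predicted.Length;
    var gradient = new double[n];
    for (int i = 0; i < n; i++)
    {
        // Positive when we over-predict, negative when we under-predict
        gradient[i] = 2.0 * (predicted[i] - actual[i]) / n;
    }
    return gradient;
}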
CalculateLoss(Vector<T>, Vector<T>)
Calculates the Mean Squared Error between predicted and actual values.
public override T CalculateLoss(Vector<T> predicted, Vector<T> actual)
Parameters
predicted Vector<T>
The predicted values from the model.
actual Vector<T>
The actual (target) values.
Returns
- T
The mean squared error value.
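A typical call might look like the following. The Vector<double> array constructor shown here is an assumption about the AiDotNet API rather than a confirmed signature; only MeanSquaredErrorLoss<T> and CalculateLoss come from this page.

// Hypothetical usage; assumes Vector<double> can be built from an array.
var loss = new MeanSquaredErrorLoss<double>();
var predicted = new Vector<double>(new[] { 2.5, 0.0, 2.1 });
var actual = new Vector<double>(new[] { 3.0, -0.5, 2.0 });

double mse = loss.CalculateLoss(predicted, actual);
// (0.25 + 0.25 + 0.01) / 3 = 0.17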
CalculateLossAndGradientGpu(IGpuTensor<T>, IGpuTensor<T>)
Calculates both MSE loss and gradient on GPU in a single efficient pass.
public override (T Loss, IGpuTensor<T> Gradient) CalculateLossAndGradientGpu(IGpuTensor<T> predicted, IGpuTensor<T> actual)
Parameters
predicted IGpuTensor<T>
The predicted GPU tensor from the model.
actual IGpuTensor<T>
The actual (target) GPU tensor.
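Because this method returns the loss and the gradient as a single tuple, one call replaces two separate passes over the data. A minimal sketch, assuming predictedGpu and actualGpu are IGpuTensor<double> instances obtained elsewhere in your pipeline (their construction is not covered in this section):

// Sketch only: predictedGpu and actualGpu are pre-existing GPU tensors.
var loss = new MeanSquaredErrorLoss<double>();
(double mse, IGpuTensor<double> gradient) = loss.CalculateLossAndGradientGpu(predictedGpu, actualGpu);
// A single GPU pass yields both the scalar loss and the gradient tensor.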