Interface ILossFunction<T>
Namespace: AiDotNet.Interfaces
Assembly: AiDotNet.dll
Interface for loss functions used in neural networks.
public interface ILossFunction<T>
Type Parameters
T - The numeric type used for calculations (e.g., float, double).
Remarks
For Beginners: Loss functions measure how far the predictions of a neural network are from the expected outputs. They provide a signal that helps the network learn by adjusting its weights to minimize this "loss" value.
Think of a loss function as a score that tells you how well or poorly your neural network is performing. A higher loss value means worse performance, while a lower loss value indicates better performance.
Different types of problems require different loss functions. For example:
- Mean Squared Error is often used for regression problems (predicting numeric values)
- Cross Entropy is commonly used for classification problems (categorizing inputs)
The derivative of a loss function is equally important, as it tells the network which direction to adjust its weights during training to reduce the loss.
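To make the interface concrete, here is a minimal sketch of a mean squared error implementation for T = double. The class name MeanSquaredErrorLoss is hypothetical, and the sketch assumes Vector&lt;T&gt; exposes a Length property, an indexer, and a length-based constructor (none of which are confirmed by this page); the GPU method is left unimplemented.

```csharp
using System;
using AiDotNet.Interfaces;

// Hypothetical MSE loss; assumes Vector<double> has Length, an indexer,
// and a constructor taking a length.
public class MeanSquaredErrorLoss : ILossFunction<double>
{
    public double CalculateLoss(Vector<double> predicted, Vector<double> actual)
    {
        double sum = 0.0;
        for (int i = 0; i < predicted.Length; i++)
        {
            double diff = predicted[i] - actual[i];
            sum += diff * diff;
        }
        // Mean of the squared errors.
        return sum / predicted.Length;
    }

    public Vector<double> CalculateDerivative(Vector<double> predicted, Vector<double> actual)
    {
        var gradient = new Vector<double>(predicted.Length);
        for (int i = 0; i < predicted.Length; i++)
        {
            // d(MSE)/d(predicted[i]) = 2 * (predicted[i] - actual[i]) / n
            gradient[i] = 2.0 * (predicted[i] - actual[i]) / predicted.Length;
        }
        return gradient;
    }

    // GPU path omitted in this sketch.
    public (double Loss, IGpuTensor<double> Gradient) CalculateLossAndGradientGpu(
        IGpuTensor<double> predicted, IGpuTensor<double> actual)
        => throw new NotImplementedException();
}
```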
Methods
CalculateDerivative(Vector<T>, Vector<T>)
Calculates the derivative (gradient) of the loss function.
Vector<T> CalculateDerivative(Vector<T> predicted, Vector<T> actual)
Parameters
predicted (Vector<T>) - The predicted values from the model.
actual (Vector<T>) - The actual (target) values.
Returns
- Vector<T>
A vector containing the derivatives of the loss with respect to each prediction.
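As a sketch of how this gradient drives learning, the snippet below performs one gradient-descent step. It deliberately simplifies by assuming weights map one-to-one onto predictions, which real layers do not; in practice the gradient is propagated backward through the network first.

```csharp
// Illustrative only: one gradient-descent step, pretending each weight
// corresponds directly to one prediction.
Vector<double> gradient = lossFunction.CalculateDerivative(predicted, actual);
double learningRate = 0.01;
for (int i = 0; i < weights.Length; i++)
{
    // Move each weight against the gradient to reduce the loss.
    weights[i] -= learningRate * gradient[i];
}
```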
CalculateLoss(Vector<T>, Vector<T>)
Calculates the loss between predicted and actual values.
T CalculateLoss(Vector<T> predicted, Vector<T> actual)
Parameters
predicted (Vector<T>) - The predicted values from the model.
actual (Vector<T>) - The actual (target) values.
Returns
- T
The loss value.
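A hypothetical usage sketch, assuming an MSE-style implementation named MeanSquaredErrorLoss and a Vector&lt;double&gt; constructible from an array (both assumptions, not documented on this page):

```csharp
ILossFunction<double> loss = new MeanSquaredErrorLoss(); // hypothetical implementation
var predicted = new Vector<double>(new[] { 0.9, 0.2, 0.8 });
var actual    = new Vector<double>(new[] { 1.0, 0.0, 1.0 });

double value = loss.CalculateLoss(predicted, actual);
// For MSE this would be ((0.1)^2 + (0.2)^2 + (0.2)^2) / 3 = 0.03
```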
CalculateLossAndGradientGpu(IGpuTensor<T>, IGpuTensor<T>)
Calculates both loss and gradient on GPU in a single pass.
(T Loss, IGpuTensor<T> Gradient) CalculateLossAndGradientGpu(IGpuTensor<T> predicted, IGpuTensor<T> actual)
Parameters
predicted (IGpuTensor<T>) - The predicted GPU tensor from the model.
actual (IGpuTensor<T>) - The actual (target) GPU tensor.
Returns
- (T Loss, IGpuTensor<T> Gradient)
A tuple containing the loss value and the gradient tensor.
Remarks
This method is more efficient than calling separate loss and gradient calculations as it can compute both in a single GPU kernel invocation.
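The fused call can be sketched as below; how the IGpuTensor&lt;T&gt; inputs are created and uploaded is an assumption not covered by this page.

```csharp
// Fused GPU path: one call (and one kernel invocation) yields both results.
(double loss, IGpuTensor<double> gradient) =
    lossFunction.CalculateLossAndGradientGpu(predictedGpu, actualGpu);

// Equivalent two-call CPU pattern, for comparison:
// double loss = lossFunction.CalculateLoss(predicted, actual);
// Vector<double> grad = lossFunction.CalculateDerivative(predicted, actual);
```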