Class MCDropoutNeuralNetwork<T>

Implements Monte Carlo Dropout for uncertainty estimation.

public class MCDropoutNeuralNetwork<T> : NeuralNetwork<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable, IUncertaintyEstimator<T>

Type Parameters

T

The numeric type used for calculations (e.g., float, double).

Inheritance
NeuralNetwork<T>
MCDropoutNeuralNetwork<T>
Implements
IFullModel<T, Tensor<T>, Tensor<T>>
IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>
IParameterizable<T, Tensor<T>, Tensor<T>>
ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>
IGradientComputable<T, Tensor<T>, Tensor<T>>

Remarks

For Beginners: MC Dropout is the simplest way to add uncertainty estimation to existing neural networks.

The idea is straightforward:

  1. Add dropout layers to your network
  2. Keep dropout active even during prediction (normally it's turned off)
  3. Run multiple predictions with different random dropout patterns
  4. The variation in predictions tells you the uncertainty

This is much easier than full Bayesian neural networks but still provides useful uncertainty estimates. It's like getting a second (and third, and fourth...) opinion from slightly different versions of your model.
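The four steps above can be sketched as a standalone simulation. This is an illustrative toy, not the library's implementation: the "network" is a single hypothetical linear layer, and the dropout masking is written out by hand to show where the uncertainty comes from.

```csharp
using System;
using System.Linq;

class McDropoutSketch
{
    static void Main()
    {
        var rng = new Random(42);
        double[] weights = { 0.5, -1.2, 0.8 };   // toy stand-in for a trained network
        double[] input = { 1.0, 2.0, 3.0 };
        const double dropRate = 0.2;
        const int numSamples = 50;

        // Steps 2-3: keep dropout active and run many stochastic forward passes.
        var predictions = new double[numSamples];
        for (int s = 0; s < numSamples; s++)
        {
            double y = 0.0;
            for (int i = 0; i < weights.Length; i++)
            {
                // Randomly drop each weight; scale survivors so the expected output is unchanged.
                if (rng.NextDouble() >= dropRate)
                    y += weights[i] / (1.0 - dropRate) * input[i];
            }
            predictions[s] = y;
        }

        // Step 4: the spread across the passes is the uncertainty estimate.
        double mean = predictions.Average();
        double variance = predictions.Sum(p => (p - mean) * (p - mean)) / numSamples;
        Console.WriteLine($"mean = {mean:F3}, variance = {variance:F3}");
    }
}
```

The mean over the passes plays the role of the final prediction, and the variance is the epistemic uncertainty; MCDropoutNeuralNetwork<T> applies the same idea tensor-wise across its MC dropout layers.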

Constructors

MCDropoutNeuralNetwork(NeuralNetworkArchitecture<T>, int)

Initializes a new instance of the MCDropoutNeuralNetwork class.

public MCDropoutNeuralNetwork(NeuralNetworkArchitecture<T> architecture, int numSamples = 50)

Parameters

architecture NeuralNetworkArchitecture<T>

The network architecture (should include MC dropout layers).

numSamples int

Number of forward passes for uncertainty estimation (default: 50).

Remarks

For Beginners: Make sure your architecture includes MCDropoutLayer instances. The more samples you use, the better the uncertainty estimate, but prediction becomes slower. 50 samples is a good default that balances accuracy and speed.
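A usage sketch, assuming an architecture has already been configured elsewhere (the helper shown is hypothetical; how you build a NeuralNetworkArchitecture<T> with MCDropoutLayer instances depends on your setup):

```csharp
// Hypothetical helper that returns an architecture containing MCDropoutLayer instances.
NeuralNetworkArchitecture<double> architecture = BuildArchitectureWithMcDropout();

// 100 forward passes per prediction: slower, but tighter uncertainty estimates than the default 50.
var network = new MCDropoutNeuralNetwork<double>(architecture, numSamples: 100);
```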

Methods

EstimateAleatoricUncertainty(Tensor<T>)

Estimates aleatoric uncertainty.

public Tensor<T> EstimateAleatoricUncertainty(Tensor<T> input)

Parameters

input Tensor<T>

The input tensor.

Returns

Tensor<T>

The aleatoric uncertainty estimate.

Remarks

For Beginners: Aleatoric uncertainty is the irreducible noise in the data itself (e.g., sensor noise), as opposed to the model's own uncertainty. MC Dropout primarily captures epistemic uncertainty; modeling aleatoric uncertainty properly would require the network to explicitly predict an output variance, so this method returns only a simplified estimate.

EstimateEpistemicUncertainty(Tensor<T>)

Estimates epistemic uncertainty.

public Tensor<T> EstimateEpistemicUncertainty(Tensor<T> input)

Parameters

input Tensor<T>

The input tensor.

Returns

Tensor<T>

The epistemic uncertainty estimate.

Remarks

For Beginners: MC Dropout excels at capturing epistemic uncertainty, which represents the model's lack of knowledge about the correct prediction.

PredictWithUncertainty(Tensor<T>)

Predicts output with uncertainty estimates using MC dropout.

public UncertaintyPredictionResult<T, Tensor<T>> PredictWithUncertainty(Tensor<T> input)

Parameters

input Tensor<T>

The input tensor.

Returns

UncertaintyPredictionResult<T, Tensor<T>>

A prediction result augmented with uncertainty information.
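A brief usage sketch. The property names read from the result object are illustrative assumptions; consult UncertaintyPredictionResult<T, Tensor<T>> for the actual API surface:

```csharp
// Runs numSamples stochastic forward passes internally and aggregates them.
UncertaintyPredictionResult<double, Tensor<double>> result = network.PredictWithUncertainty(input);

// Hypothetical property names shown for illustration only.
Tensor<double> prediction = result.Prediction;

// The epistemic component can also be queried directly.
Tensor<double> epistemic = network.EstimateEpistemicUncertainty(input);
```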