Class GaussianNoiseLayer<T>
- Namespace
- AiDotNet.NeuralNetworks.Layers
- Assembly
- AiDotNet.dll
A neural network layer that adds random Gaussian noise to inputs during training.
public class GaussianNoiseLayer<T> : LayerBase<T>, ILayer<T>, IJitCompilable<T>, IDiagnosticsProvider, IWeightLoadable<T>, IDisposable
Type Parameters
T: The numeric type used for computations (e.g., float, double).
- Inheritance
- LayerBase<T> → GaussianNoiseLayer<T>
- Implements
- ILayer<T>, IJitCompilable<T>, IDiagnosticsProvider, IWeightLoadable<T>, IDisposable
Remarks
Gaussian noise layers help prevent overfitting by adding random noise to the input data. This forces the network to learn more robust features that can withstand small variations. The noise follows a Gaussian (normal) distribution with a specified mean and standard deviation. During inference (testing/prediction), no noise is added to preserve predictable outputs.
For Beginners: This layer adds random "static" to your data during training to make the network more robust.
Think of it like training an athlete in challenging conditions:
- Training in rain and wind makes athletes perform better even in good weather
- Training with noise makes neural networks perform better on clean data
For example, in image recognition:
- During training: The layer slightly changes pixel values randomly
- This forces the network to focus on important patterns, not tiny details
- During testing/prediction: No noise is added, giving clean results
Gaussian noise is particularly useful because it follows the same distribution as many natural variations in real-world data.
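In symbols (with mean μ and standard deviation σ as configured on the layer), each element is transformed as:

```math
y_i =
\begin{cases}
  x_i + \epsilon_i, \quad \epsilon_i \sim \mathcal{N}(\mu, \sigma^2) & \text{(training)} \\
  x_i & \text{(inference)}
\end{cases}
```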
Constructors
GaussianNoiseLayer(int[], double, double)
public GaussianNoiseLayer(int[] inputShape, double standardDeviation = 0.1, double mean = 0)
Parameters
inputShape (int[]): The shape of the input tensor.
standardDeviation (double): The standard deviation of the Gaussian noise. Defaults to 0.1.
mean (double): The mean of the Gaussian noise. Defaults to 0.
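A minimal usage sketch (the shape value is illustrative, not prescribed by the API):

```csharp
// Noise layer for flattened 784-feature inputs (shape is illustrative).
// standardDeviation sets the noise strength; mean is typically left at 0.
var noise = new GaussianNoiseLayer<float>(
    inputShape: new[] { 784 },
    standardDeviation: 0.1,
    mean: 0.0);
```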
Properties
SupportsGpuExecution
Gets whether this layer has a GPU execution implementation for inference.
protected override bool SupportsGpuExecution { get; }
Property Value
- bool
Remarks
Override this to return true when the layer implements ForwardGpu(params IGpuTensor<T>[]). The actual CanExecuteOnGpu property combines this with engine availability.
For Beginners: This flag indicates if the layer has GPU code for the forward pass. Set this to true in derived classes that implement ForwardGpu.
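For example, a derived layer that implements ForwardGpu would opt in like this (a sketch, not code from this class):

```csharp
// In a derived layer that implements ForwardGpu:
protected override bool SupportsGpuExecution => true;
```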
SupportsGpuTraining
Gets whether this layer has full GPU training support (forward, backward, and parameter updates).
public override bool SupportsGpuTraining { get; }
Property Value
- bool
Remarks
This property indicates whether the layer can perform its entire training cycle on GPU without downloading data to CPU. A layer has full GPU training support when:
- ForwardGpu is implemented
- BackwardGpu is implemented
- UpdateParametersGpu is implemented (for layers with trainable parameters)
- GPU weight/bias/gradient buffers are properly managed
For Beginners: This tells you if training can happen entirely on GPU.
GPU-resident training is much faster because:
- Data stays on GPU between forward and backward passes
- No expensive CPU-GPU transfers during each training step
- GPU kernels handle all gradient computation
Only layers that return true here can participate in fully GPU-resident training.
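A sketch of how a training loop might consult this flag (the surrounding loop and fallback logic are hypothetical):

```csharp
// Hypothetical dispatch in a GPU-resident training loop: only layers
// reporting full GPU training support stay on the GPU path.
if (layer.SupportsGpuTraining)
{
    gradient = layer.BackwardGpu(gradient); // data stays GPU-resident
}
else
{
    // fall back to the CPU path (transfer logic omitted for brevity)
}
```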
SupportsJitCompilation
Gets a value indicating whether this layer supports JIT compilation.
public override bool SupportsJitCompilation { get; }
Property Value
- bool
Always true because the JIT-compiled version uses inference mode (no noise added).
SupportsTraining
Gets a value indicating whether this layer supports training.
public override bool SupportsTraining { get; }
Property Value
- bool
Always false because the GaussianNoiseLayer doesn't have any trainable parameters.
Remarks
This property indicates that the GaussianNoiseLayer doesn't have trainable parameters that need to be updated during backpropagation. The layer simply adds noise during the forward pass in training mode, but it doesn't learn or adapt based on the data.
For Beginners: This property tells you that this layer doesn't learn or change during training.
A value of false means:
- The layer has no weights or biases to adjust
- It performs a fixed operation (adding noise) rather than learning
- It's a helper layer that assists the learning process of other layers
Unlike layers like convolutional or fully connected layers that learn patterns from data, the noise layer simply adds randomness with fixed statistical properties.
Methods
Backward(Tensor<T>)
Performs the backward pass by passing the gradient unchanged.
public override Tensor<T> Backward(Tensor<T> outputGradient)
Parameters
outputGradient (Tensor<T>): The gradient of the loss with respect to the layer's output.
Returns
- Tensor<T>
The gradient of the loss with respect to the layer's input.
Remarks
This method implements the backward pass (backpropagation) of the Gaussian noise layer. Since adding noise is a simple element-wise operation, the gradient flows through unchanged. This means that during backpropagation, this layer simply passes the gradient as-is to the previous layer without modifying it.
For Beginners: This is where error information flows back through the layer during training.
During the backward pass:
- The layer receives gradients (information about how to improve)
- Since noise was just added element-wise, gradients flow through directly
- No changes are made to the gradients
This is different from layers with parameters (like weights and biases):
- Those layers would compute how to adjust their parameters
- This layer has no parameters to adjust
The noise affected the forward pass, but during backpropagation, the gradients flow through unmodified.
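Conceptually, because y = x + ε with ε independent of x, the input gradient equals the output gradient. A sketch of the documented pass-through (the real implementation may differ in detail):

```csharp
// Sketch of the identity backward pass: adding noise is y = x + ε,
// so the gradient with respect to x equals the incoming gradient.
public override Tensor<T> Backward(Tensor<T> outputGradient)
{
    return outputGradient; // passed through unchanged
}
```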
BackwardGpu(IGpuTensor<T>)
Performs the backward pass of the layer on GPU.
public override IGpuTensor<T> BackwardGpu(IGpuTensor<T> outputGradient)
Parameters
outputGradient (IGpuTensor<T>): The GPU-resident gradient of the loss with respect to the layer's output.
Returns
- IGpuTensor<T>
The GPU-resident gradient of the loss with respect to the layer's input.
Remarks
This method performs the layer's backward computation entirely on GPU, including:
- Computing input gradients to pass to previous layers
- Computing and storing weight gradients on GPU (for layers with trainable parameters)
- Computing and storing bias gradients on GPU
For Beginners: This is like Backward() but runs entirely on GPU.
During GPU training:
- Output gradients come in (on GPU)
- Input gradients are computed (stay on GPU)
- Weight/bias gradients are computed and stored (on GPU)
- Input gradients are returned for the previous layer
All data stays on GPU - no CPU round-trips needed!
Exceptions
- NotSupportedException
Thrown when the layer does not support GPU training.
ExportComputationGraph(List<ComputationNode<T>>)
Exports the Gaussian noise layer's forward pass as a JIT-compilable computation graph.
public override ComputationNode<T> ExportComputationGraph(List<ComputationNode<T>> inputNodes)
Parameters
inputNodes (List<ComputationNode<T>>): List to populate with input computation nodes.
Returns
- ComputationNode<T>
The output computation node (same as input for inference mode).
Remarks
This method builds a computation graph for the Gaussian noise layer. During JIT compilation (which is typically for inference), no noise is added, so the layer simply passes through the input unchanged. This matches the behavior of Forward() when IsTrainingMode is false.
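A sketch of what this reduces to. How the input node is created and registered is library-specific; here it is assumed the first entry of inputNodes is the layer's input:

```csharp
// Sketch only: in inference mode the layer is the identity, so the
// exported graph returns its input node unchanged (no noise node).
public override ComputationNode<T> ExportComputationGraph(List<ComputationNode<T>> inputNodes)
{
    return inputNodes[0];
}
```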
Forward(Tensor<T>)
Performs the forward pass by adding Gaussian noise to the input during training.
public override Tensor<T> Forward(Tensor<T> input)
Parameters
input (Tensor<T>): The input tensor to the layer.
Returns
- Tensor<T>
The input tensor with added noise during training, or unchanged during inference.
Remarks
This method implements the forward pass of the Gaussian noise layer. During training mode, it generates random Gaussian noise with the specified mean and standard deviation and adds it to the input tensor. During inference mode, it simply passes the input through unchanged.
For Beginners: This is where the layer adds random noise during training.
The forward pass works differently depending on the mode:
During training mode:
- Generate random noise following a Gaussian distribution
- Add this noise to the input data
- Save the noise for potential use in backward pass
- Return the noisy data
During testing/prediction mode:
- Simply pass the input through unchanged
- No noise is added to ensure consistent results
This behavior is what makes noise layers useful for regularization: They make training more difficult but don't affect the final predictions.
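The core of the training-mode computation, sketched on a plain array rather than Tensor<T>. The Box-Muller transform shown here is one common way to sample Gaussian noise; the layer's actual sampling code may differ:

```csharp
using System;

static class GaussianNoiseSketch
{
    // Adds N(mean, stdDev^2) noise to each element, as the forward pass
    // does in training mode.
    public static double[] AddGaussianNoise(double[] input, double mean, double stdDev, Random rng)
    {
        var output = new double[input.Length];
        for (int i = 0; i < input.Length; i++)
        {
            // Box-Muller: two uniform samples -> one standard normal sample.
            double u1 = 1.0 - rng.NextDouble(); // in (0, 1]; avoids Log(0)
            double u2 = rng.NextDouble();
            double z = Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);

            // Shift/scale to the configured distribution, then add.
            output[i] = input[i] + mean + stdDev * z;
        }
        return output;
    }
}
```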
ForwardGpu(params IGpuTensor<T>[])
Performs the forward pass of the layer on GPU.
public override IGpuTensor<T> ForwardGpu(params IGpuTensor<T>[] inputs)
Parameters
inputs (IGpuTensor<T>[]): The GPU-resident input tensor(s).
Returns
- IGpuTensor<T>
The GPU-resident output tensor.
Remarks
This method performs the layer's forward computation entirely on GPU. The input and output tensors remain in GPU memory, avoiding expensive CPU-GPU transfers.
For Beginners: This is like Forward() but runs on the graphics card.
The key difference:
- Forward() uses CPU tensors that may be copied to/from GPU
- ForwardGpu() keeps everything on GPU the whole time
Override this in derived classes that support GPU acceleration.
Exceptions
- NotSupportedException
Thrown when the layer does not support GPU execution.
GetParameters()
Gets the trainable parameters of the layer.
public override Vector<T> GetParameters()
Returns
- Vector<T>
An empty vector since this layer has no trainable parameters.
Remarks
This method is a required override from the base class, but the Gaussian noise layer has no trainable parameters to retrieve, so it returns an empty vector.
For Beginners: This method returns an empty list because noise layers have no learnable values.
Unlike layers with weights and biases:
- Gaussian noise layers don't have any parameters that change during training
- They perform a fixed operation (adding noise) that doesn't involve learning
- There are no values to save when storing a trained model
This method returns an empty vector, indicating there are no parameters to collect.
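A sketch of the documented behavior (the zero-length Vector<T> constructor is assumed for illustration):

```csharp
// Sketch: no trainable parameters, so an empty vector is returned.
public override Vector<T> GetParameters()
{
    return new Vector<T>(0); // zero-length vector (constructor assumed)
}
```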
ResetState()
Resets the internal state of the layer.
public override void ResetState()
Remarks
This method resets the internal state of the layer by clearing the cached noise tensor from the previous forward pass. This is useful when starting to process a new batch of data or when switching between training and inference modes.
For Beginners: This method clears the layer's memory to start fresh.
When resetting the state:
- The saved noise tensor is cleared
- This frees up memory
- The layer will generate new random noise next time
This is typically called:
- Between training batches
- When switching from training to evaluation mode
- When starting to process completely new data
It's like wiping a whiteboard clean before starting a new experiment.
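A sketch of what resetting amounts to (the field name _cachedNoise is hypothetical):

```csharp
// Sketch: drop the cached noise so the next forward pass resamples.
public override void ResetState()
{
    _cachedNoise = null; // hypothetical field holding last forward's noise
}
```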
UpdateParameters(T)
Updates the parameters of the layer based on the calculated gradients.
public override void UpdateParameters(T learningRate)
Parameters
learningRate (T): The learning rate to use for parameter updates.
Remarks
This method is a required override from the base class, but the Gaussian noise layer has no trainable parameters to update, so it performs no operation.
For Beginners: This method does nothing because noise layers have no adjustable weights.
Unlike most layers (like convolutional or fully connected layers):
- Gaussian noise layers don't have weights or biases to learn
- They just add random noise based on fixed settings
- There's nothing to update during training
This method exists only to fulfill the requirements of the base layer class. The noise layer influences the network by making training more robust, not by adjusting internal parameters.
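As the remarks describe, the override is an intentional no-op:

```csharp
// The documented no-op: the override exists only to satisfy the
// LayerBase<T> contract; there are no parameters to update.
public override void UpdateParameters(T learningRate)
{
    // intentionally empty
}
```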