Class CroppingLayer<T>
- Namespace
- AiDotNet.NeuralNetworks.Layers
- Assembly
- AiDotNet.dll
Represents a cropping layer that removes portions of input tensors from the edges.
public class CroppingLayer<T> : LayerBase<T>, ILayer<T>, IJitCompilable<T>, IDiagnosticsProvider, IWeightLoadable<T>, IDisposable
Type Parameters
T: The numeric type used for calculations, typically float or double.
- Inheritance
- LayerBase<T> → CroppingLayer<T>
- Implements
- ILayer<T>, IJitCompilable<T>, IDiagnosticsProvider, IWeightLoadable<T>, IDisposable
Remarks
A cropping layer removes specified portions from the edges of an input tensor. This is useful for removing border artifacts, adjusting dimensions between layers, or focusing on specific regions of input data. The cropping can be applied differently to each dimension of the input.
For Beginners: A cropping layer cuts off the edges of your data.
Think of it like cropping a photo:
- You can trim different amounts from the top, bottom, left, and right
- The middle portion (the important part) is kept
- The trimmed edges are discarded
For example, in image processing:
- You might crop off padding added by previous layers
- You might focus on the central region where the important features are
- You might adjust the size to match what the next layer expects
Cropping layers are simple but useful for controlling exactly what part of the data flows through your neural network.
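A minimal construction sketch (not taken from the library's samples). The shape values are made up, and the assumption that each crop array holds one per-dimension amount for its side is exactly that, an assumption; check your version's conventions.

using AiDotNet.NeuralNetworks.Layers;

// Hypothetical 32x32x3 input cropped by 2 on every spatial edge.
// If each array entry is the amount removed from that side of the
// corresponding dimension, the output would be 28x28x3.
var cropping = new CroppingLayer<float>(
    inputShape: new[] { 32, 32, 3 },
    cropTop:    new[] { 2, 0, 0 },
    cropBottom: new[] { 2, 0, 0 },
    cropLeft:   new[] { 0, 2, 0 },
    cropRight:  new[] { 0, 2, 0 });

// Forward(input) then returns only the central region of the input tensor.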
Constructors
CroppingLayer(int[], int[], int[], int[], int[], IActivationFunction<T>?, IEngine?)
Initializes a new instance of the CroppingLayer<T> class with the specified cropping parameters and a scalar activation function.
public CroppingLayer(int[] inputShape, int[] cropTop, int[] cropBottom, int[] cropLeft, int[] cropRight, IActivationFunction<T>? scalarActivation = null, IEngine? engine = null)
Parameters
inputShape (int[]): The shape of the input data.
cropTop (int[]): The amount to crop from the top of each dimension.
cropBottom (int[]): The amount to crop from the bottom of each dimension.
cropLeft (int[]): The amount to crop from the left of each dimension.
cropRight (int[]): The amount to crop from the right of each dimension.
scalarActivation (IActivationFunction<T>?): The activation function to apply. Defaults to Identity if not specified.
engine (IEngine?): The computation engine for vectorized operations. Defaults to CPU if not specified.
Remarks
This constructor creates a cropping layer with the specified cropping parameters and activation function. The output shape is calculated based on the input shape and cropping parameters. The Identity activation function is used by default, which means no transformation is applied to the cropped output.
For Beginners: This setup method creates a new cropping layer with specific settings.
When creating the layer, you specify:
- The size and shape of your input data
- How much to crop from each side/dimension
- What mathematical function to apply after cropping (usually none)
The layer automatically calculates how big the output will be after cropping. By default, it uses the "Identity" activation, which means the values don't change after cropping - they just pass through unchanged.
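A sketch of the same call with the optional arguments spelled out; passing null keeps the documented defaults, and the shape values are made up for illustration.

var layer = new CroppingLayer<double>(
    inputShape: new[] { 64, 64, 1 },
    cropTop:    new[] { 4, 0, 0 },
    cropBottom: new[] { 4, 0, 0 },
    cropLeft:   new[] { 0, 4, 0 },
    cropRight:  new[] { 0, 4, 0 },
    scalarActivation: null,   // null => Identity: values pass through unchanged
    engine: null);            // null => default CPU engine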
CroppingLayer(int[], int[], int[], int[], int[], IVectorActivationFunction<T>?, IEngine?)
Initializes a new instance of the CroppingLayer<T> class with the specified cropping parameters and a vector activation function.
public CroppingLayer(int[] inputShape, int[] cropTop, int[] cropBottom, int[] cropLeft, int[] cropRight, IVectorActivationFunction<T>? vectorActivation = null, IEngine? engine = null)
Parameters
inputShape (int[]): The shape of the input data.
cropTop (int[]): The amount to crop from the top of each dimension.
cropBottom (int[]): The amount to crop from the bottom of each dimension.
cropLeft (int[]): The amount to crop from the left of each dimension.
cropRight (int[]): The amount to crop from the right of each dimension.
vectorActivation (IVectorActivationFunction<T>?): The vector activation function to apply. Defaults to Identity if not specified.
engine (IEngine?): The computation engine for vectorized operations. Defaults to CPU if not specified.
Remarks
This constructor creates a cropping layer with the specified cropping parameters and a vector activation function. The output shape is calculated based on the input shape and cropping parameters. The Identity activation function is used by default, which means no transformation is applied to the cropped output.
For Beginners: This setup method is similar to the previous one, but uses a different type of activation function.
A vector activation function:
- Works on entire groups of numbers at once
- Can be more efficient for certain types of calculations
- Otherwise works the same as the regular activation function
Most of the time with cropping layers, you'll use the Identity activation (no change), but this option gives you flexibility if you need it.
Properties
SupportsGpuExecution
Gets whether this layer has a GPU execution implementation for inference.
protected override bool SupportsGpuExecution { get; }
Property Value
- bool
Remarks
Override this to return true when the layer implements ForwardGpu(params IGpuTensor<T>[]). The actual CanExecuteOnGpu property combines this with engine availability.
For Beginners: This flag indicates if the layer has GPU code for the forward pass. Set this to true in derived classes that implement ForwardGpu.
SupportsGpuTraining
Gets whether this layer has full GPU training support (forward, backward, and parameter updates).
public override bool SupportsGpuTraining { get; }
Property Value
- bool
Remarks
This property indicates whether the layer can perform its entire training cycle on GPU without downloading data to CPU. A layer has full GPU training support when:
- ForwardGpu is implemented
- BackwardGpu is implemented
- UpdateParametersGpu is implemented (for layers with trainable parameters)
- GPU weight/bias/gradient buffers are properly managed
For Beginners: This tells you if training can happen entirely on GPU.
GPU-resident training is much faster because:
- Data stays on GPU between forward and backward passes
- No expensive CPU-GPU transfers during each training step
- GPU kernels handle all gradient computation
Only layers that return true here can participate in fully GPU-resident training.
SupportsJitCompilation
Gets whether this layer supports JIT compilation.
public override bool SupportsJitCompilation { get; }
Property Value
- bool
True if the activation function supports JIT compilation, false otherwise.
Remarks
Cropping layers support JIT compilation as long as their activation function does. The cropping operation is straightforward to compile and optimize.
SupportsTraining
Gets a value indicating whether this layer supports training through backpropagation.
public override bool SupportsTraining { get; }
Property Value
- bool
Always returns false for cropping layers, as they have no trainable parameters.
Remarks
This property indicates whether the layer can be trained through backpropagation. Cropping layers have no trainable parameters, so they cannot be trained directly.
For Beginners: This property tells you if the layer can learn from data.
For cropping layers:
- The value is always false
- This means the layer doesn't have any adjustable values
- It performs the same operation regardless of training
The cropping layer simply passes data through (after trimming the edges), without changing its behavior based on training examples.
Methods
Backward(Tensor<T>)
Calculates gradients during backpropagation.
public override Tensor<T> Backward(Tensor<T> outputGradient)
Parameters
outputGradient (Tensor<T>): The gradient of the loss with respect to the layer's output.
Returns
- Tensor<T>
The gradient of the loss with respect to the layer's input.
Remarks
This method performs the backward pass of the cropping layer during training. It creates a tensor with the same shape as the input and places the output gradient in the non-cropped region, leaving the cropped regions as zero. This effectively passes the gradient back through only the portions of the input that were kept during the forward pass.
For Beginners: This method helps pass error information backward through the network during training.
During the backward pass:
- A tensor the same size as the original input is created
- The gradient information is placed in the center (non-cropped) region
- The cropped regions get zero gradient (since they didn't contribute to the output)
- This allows the network to learn only from the parts of the input that were actually used
Even though the cropping layer itself doesn't learn, it needs to properly pass gradient information back to previous layers that do learn.
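The gradient routing described above can be pictured with a simple one-dimensional sketch; this is illustrative only, not the library's implementation.

// Place the output gradient back into the kept region; cropped edges get zero.
static float[] CropBackward1D(float[] outputGradient, int inputLength, int cropLeft)
{
    var inputGradient = new float[inputLength];             // starts as all zeros
    for (int i = 0; i < outputGradient.Length; i++)
    {
        inputGradient[cropLeft + i] = outputGradient[i];    // only kept positions receive gradient
    }
    return inputGradient;
}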
BackwardGpu(IGpuTensor<T>)
Performs the backward pass of the layer on GPU.
public override IGpuTensor<T> BackwardGpu(IGpuTensor<T> outputGradient)
Parameters
outputGradient (IGpuTensor<T>): The GPU-resident gradient of the loss with respect to the layer's output.
Returns
- IGpuTensor<T>
The GPU-resident gradient of the loss with respect to the layer's input.
Remarks
This method performs the layer's backward computation entirely on GPU, including:
- Computing input gradients to pass to previous layers
- Computing and storing weight gradients on GPU (for layers with trainable parameters)
- Computing and storing bias gradients on GPU
For Beginners: This is like Backward() but runs entirely on GPU.
During GPU training:
- Output gradients come in (on GPU)
- Input gradients are computed (stay on GPU)
- Weight/bias gradients are computed and stored (on GPU)
- Input gradients are returned for the previous layer
All data stays on GPU - no CPU round-trips needed!
Exceptions
- NotSupportedException
Thrown when the layer does not support GPU training.
ExportComputationGraph(List<ComputationNode<T>>)
Exports this layer's computation as a differentiable computation graph for JIT compilation.
public override ComputationNode<T> ExportComputationGraph(List<ComputationNode<T>> inputNodes)
Parameters
inputNodes (List<ComputationNode<T>>): The list to which input variable nodes should be added.
Returns
- ComputationNode<T>
The output computation node representing this layer's operation.
Remarks
This method builds a computation graph representation of the cropping operation that can be compiled and optimized for efficient execution. The graph represents removing specified portions from the edges of the input tensor followed by optional activation.
For Beginners: This method creates an optimized version of the cropping operation.
For cropping layers:
- Creates a placeholder for the input tensor
- Applies the cropping operation (removes edges)
- Applies the activation function if present
- Returns a computation graph for efficient execution
This allows for faster inference by pre-compiling the cropping operation.
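A usage sketch that relies only on the signature shown above; layer is assumed to be an already-constructed CroppingLayer<float>, and compiling or executing the returned node is left to the surrounding JIT pipeline.

using System.Collections.Generic;

var inputNodes = new List<ComputationNode<float>>();
ComputationNode<float> outputNode = layer.ExportComputationGraph(inputNodes);
// inputNodes now holds the input variable node(s) added by the layer;
// outputNode represents the cropped (and optionally activated) result.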
Exceptions
- ArgumentNullException
Thrown when inputNodes is null.
- NotSupportedException
Thrown when the activation function is not supported for JIT compilation.
Forward(Tensor<T>)
Processes the input data through the cropping layer.
public override Tensor<T> Forward(Tensor<T> input)
Parameters
input (Tensor<T>): The input tensor to process.
Returns
- Tensor<T>
The output tensor after cropping and activation.
Remarks
This method performs the forward pass of the cropping layer. It creates a new tensor with the calculated output shape and copies the non-cropped portion of the input tensor to it. Then it applies the activation function to the result.
For Beginners: This method applies the cropping to your input data.
During the forward pass:
- A new, smaller output is created based on the calculated size
- The layer copies the central portion of the input to the output
- The edges specified by the cropping parameters are left out
- The activation function is applied to the result (usually no change)
Think of it like cutting out the center of a photo and discarding the edges.
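The copy described above can be pictured with a one-dimensional sketch; this is illustrative only, since the real layer works on full tensors and then applies the activation.

// Copy the kept middle region; the trimmed edges are simply never read.
static float[] CropForward1D(float[] input, int cropLeft, int cropRight)
{
    int outputLength = input.Length - cropLeft - cropRight;
    var output = new float[outputLength];
    for (int i = 0; i < outputLength; i++)
    {
        output[i] = input[cropLeft + i];   // skip the trimmed left edge
    }
    return output;
}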
ForwardGpu(params IGpuTensor<T>[])
Performs the forward pass of the layer on GPU.
public override IGpuTensor<T> ForwardGpu(params IGpuTensor<T>[] inputs)
Parameters
inputs (IGpuTensor<T>[]): The GPU-resident input tensor(s).
Returns
- IGpuTensor<T>
The GPU-resident output tensor.
Remarks
This method performs the layer's forward computation entirely on GPU. The input and output tensors remain in GPU memory, avoiding expensive CPU-GPU transfers.
For Beginners: This is like Forward() but runs on the graphics card.
The key difference:
- Forward() uses CPU tensors that may be copied to/from GPU
- ForwardGpu() keeps everything on GPU the whole time
Override this in derived classes that support GPU acceleration.
Exceptions
- NotSupportedException
Thrown when the layer does not support GPU execution.
GetParameters()
Gets all trainable parameters of the layer as a single vector.
public override Vector<T> GetParameters()
Returns
- Vector<T>
An empty vector, as cropping layers have no trainable parameters.
Remarks
This method returns an empty vector for cropping layers, as they have no trainable parameters. It is implemented to satisfy the abstract method requirement from the base class.
For Beginners: This method returns an empty list because there are no values to learn.
Since cropping layers:
- Have no weights or biases
- Don't learn from data
- Just perform a fixed cropping operation
The method returns an empty vector (list) to indicate there's nothing to adjust. This is like a recipe that has no ingredients that can be changed - it's always the same.
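A trivial usage sketch, assuming cropping is a CroppingLayer<float> constructed as in the earlier example.

Vector<float> parameters = cropping.GetParameters();
// Always an empty vector: the crop amounts are fixed configuration, not learned weights.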
ResetState()
Resets the internal state of the layer.
public override void ResetState()
Remarks
This method is a no-operation for cropping layers, as they maintain no internal state that needs to be reset. It is implemented to satisfy the abstract method requirement from the base class.
For Beginners: This method is empty because cropping layers don't store any temporary information.
Since cropping layers:
- Don't keep track of past inputs
- Don't remember anything between operations
- Simply crop each input as it comes
There's nothing to reset. This is like a paper cutter - it doesn't remember the last paper it cut, so there's nothing to clear between uses.
SetParameters(Vector<T>)
Sets the trainable parameters of the layer from a single vector.
public override void SetParameters(Vector<T> parameters)
Parameters
parameters (Vector<T>): A vector containing parameters to set.
Remarks
This method is a no-operation for cropping layers, as they have no trainable parameters to set. It is implemented to satisfy the abstract method requirement from the base class.
For Beginners: This method is empty because cropping layers don't have adjustable values.
Since cropping layers:
- Have no weights or biases to update
- Perform a fixed operation that doesn't change
- Don't learn from training
There's nothing to set. It's like trying to change the color settings on a black and white printer - the feature doesn't exist.
UpdateParameters(T)
Updates the layer's parameters using the specified learning rate.
public override void UpdateParameters(T learningRate)
Parameters
learningRate (T): The learning rate to use for the update.
Remarks
This method is a no-operation for cropping layers, as they have no trainable parameters to update. It is implemented to satisfy the abstract method requirement from the base class.
For Beginners: This method is empty because cropping layers don't learn.
Since cropping layers:
- Have no adjustable parameters
- Always perform the same fixed operation
- Don't change their behavior based on training
This method exists but does nothing. It's like having a bike pedal that's not connected to the chain - you can push it, but it won't change anything.