Class LambdaLayer<T>
- Namespace
- AiDotNet.NeuralNetworks.Layers
- Assembly
- AiDotNet.dll
Represents a customizable layer that applies user-defined functions for both forward and backward passes.
public class LambdaLayer<T> : LayerBase<T>, ILayer<T>, IJitCompilable<T>, IDiagnosticsProvider, IWeightLoadable<T>, IDisposable
Type Parameters
T: The numeric type used for calculations, typically float or double.
- Inheritance
- LayerBase<T> → LambdaLayer<T>
- Implements
- ILayer<T>, IJitCompilable<T>, IDiagnosticsProvider, IWeightLoadable<T>, IDisposable
Remarks
The Lambda Layer allows for custom transformations to be incorporated into a neural network by accepting user-defined functions for both the forward and backward passes. This provides flexibility to implement custom operations that aren't available as standard layers. The layer can optionally apply an activation function after the custom transformation.
For Beginners: This layer lets you create your own custom operations in a neural network.
Think of the Lambda Layer as a "do-it-yourself" layer where:
- You provide your own custom function to process the data
- You can optionally provide a custom function for the learning process
- It gives you flexibility to implement operations not covered by standard layers
For example, if you wanted to apply a special mathematical transformation that isn't available in standard layers, you could define that transformation and use it in a Lambda Layer.
This is an advanced feature that gives you complete control when standard layers don't provide what you need.
JIT Compilation Support: To enable JIT compilation, use the constructor that accepts a traceable expression function (Func<ComputationNode<T>, ComputationNode<T>>) instead of an opaque tensor function. The traceable function uses TensorOperations which can be compiled.
Constructors
LambdaLayer(int[], int[], Func<ComputationNode<T>, ComputationNode<T>>, IActivationFunction<T>?)
Initializes a new instance of the LambdaLayer<T> class with a traceable expression for JIT compilation support.
public LambdaLayer(int[] inputShape, int[] outputShape, Func<ComputationNode<T>, ComputationNode<T>> traceableExpression, IActivationFunction<T>? activationFunction = null)
Parameters
inputShape (int[]): The shape of the input tensor.
outputShape (int[]): The shape of the output tensor.
traceableExpression (Func<ComputationNode<T>, ComputationNode<T>>): A function that defines the forward pass using TensorOperations on ComputationNodes.
activationFunction (IActivationFunction<T>?): The activation function to apply after the custom transformation. Defaults to ReLU if not specified.
Remarks
This constructor creates a Lambda Layer that supports JIT compilation by accepting a traceable expression. The traceable expression must use TensorOperations methods to define the forward pass, which allows the computation graph to be captured and compiled.
For Beginners: This creates a custom layer that can be JIT compiled for better performance.
To use JIT compilation:
- Define your custom operation using TensorOperations methods
- Pass it as a function that takes and returns ComputationNodes
- The system can then compile and optimize your operation
Example:
var layer = new LambdaLayer<float>(
    inputShape: new[] { 10 },
    outputShape: new[] { 10 },
    traceableExpression: x => TensorOperations<float>.Square(x)
);
LambdaLayer(int[], int[], Func<Tensor<T>, Tensor<T>>, Func<Tensor<T>, Tensor<T>, Tensor<T>>?, IActivationFunction<T>?)
Initializes a new instance of the LambdaLayer<T> class with the specified shapes, functions, and element-wise activation function.
public LambdaLayer(int[] inputShape, int[] outputShape, Func<Tensor<T>, Tensor<T>> forwardFunction, Func<Tensor<T>, Tensor<T>, Tensor<T>>? backwardFunction = null, IActivationFunction<T>? activationFunction = null)
Parameters
inputShape (int[]): The shape of the input tensor.
outputShape (int[]): The shape of the output tensor.
forwardFunction (Func<Tensor<T>, Tensor<T>>): The function to apply during the forward pass.
backwardFunction (Func<Tensor<T>, Tensor<T>, Tensor<T>>?): The optional function to apply during the backward pass. If null, the layer will not support training.
activationFunction (IActivationFunction<T>?): The activation function to apply after the custom transformation. Defaults to ReLU if not specified.
Remarks
This constructor creates a new Lambda Layer with the specified shapes, functions, and element-wise activation function. The input and output shapes must be specified as they may differ depending on the custom transformation.
For Beginners: This creates a new custom layer with your functions.
When creating a Lambda Layer, you specify:
- inputShape: The shape of the data that will come into your layer
- outputShape: The shape of the data that will come out of your layer
- forwardFunction: Your custom function that processes the data
- backwardFunction (optional): Your custom function for learning
- activationFunction (optional): A standard function to apply after your custom transformation
For example, if you have data with 10 features and want to transform it into 5 features, you would use inputShape=[10] and outputShape=[5], and provide a function that performs this transformation.
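For illustration, here is a minimal sketch of a layer that doubles every element. The element-wise Multiply helper on Tensor<T> and the (input, outputGradient) parameter order of the backward function are assumptions; adapt them to the tensor API you are using.
var doubler = new LambdaLayer<float>(
    inputShape: new[] { 10 },
    outputShape: new[] { 10 },
    // Forward: y = 2x. Multiply is an assumed element-wise helper.
    forwardFunction: x => x.Multiply(2f),
    // Backward: dy/dx = 2, so dL/dx = outputGradient * 2.
    backwardFunction: (input, outputGradient) => outputGradient.Multiply(2f),
    activationFunction: null // explicit null disambiguates the two overloads
);
Because a backward function is supplied, this layer reports SupportsTraining = true and can participate in backpropagation.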
LambdaLayer(int[], int[], Func<Tensor<T>, Tensor<T>>, Func<Tensor<T>, Tensor<T>, Tensor<T>>?, IVectorActivationFunction<T>?)
Initializes a new instance of the LambdaLayer<T> class with the specified shapes, functions, and vector activation function.
public LambdaLayer(int[] inputShape, int[] outputShape, Func<Tensor<T>, Tensor<T>> forwardFunction, Func<Tensor<T>, Tensor<T>, Tensor<T>>? backwardFunction = null, IVectorActivationFunction<T>? vectorActivationFunction = null)
Parameters
inputShape (int[]): The shape of the input tensor.
outputShape (int[]): The shape of the output tensor.
forwardFunction (Func<Tensor<T>, Tensor<T>>): The function to apply during the forward pass.
backwardFunction (Func<Tensor<T>, Tensor<T>, Tensor<T>>?): The optional function to apply during the backward pass. If null, the layer will not support training.
vectorActivationFunction (IVectorActivationFunction<T>?): The vector activation function to apply after the custom transformation. Defaults to ReLU if not specified.
Remarks
This constructor creates a new Lambda Layer with the specified shapes, functions, and vector activation function. Vector activation functions operate on entire vectors rather than individual elements, which can capture dependencies between different elements of the vectors.
For Beginners: This creates a new custom layer with an advanced vector-based activation.
Vector activation functions:
- Process entire groups of numbers together, not just one at a time
- Can capture relationships between different features
- May be more powerful for complex patterns
This constructor is useful when you need the layer to understand how different features interact with each other, rather than treating each feature independently.
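For illustration, a minimal sketch pairing an identity transform with a softmax vector activation. SoftmaxActivation<T> is assumed here as a typical IVectorActivationFunction<T> implementation; substitute whichever vector activation your project provides.
var layer = new LambdaLayer<float>(
    inputShape: new[] { 10 },
    outputShape: new[] { 10 },
    forwardFunction: x => x,                 // identity transform
    backwardFunction: (input, grad) => grad, // identity gradient
    // Softmax must see the whole vector at once to normalize it,
    // which is exactly what a vector activation allows.
    vectorActivationFunction: new SoftmaxActivation<float>() // assumed type
);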
Properties
SupportsJitCompilation
Gets a value indicating whether this layer supports JIT compilation.
public override bool SupportsJitCompilation { get; }
Property Value
- bool
true if a traceable expression was provided; otherwise, false.
Remarks
JIT compilation is only supported when the LambdaLayer was created with a traceable expression that uses TensorOperations. Opaque user-defined functions cannot be compiled.
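For illustration, the property can be checked directly: a layer built from a traceable expression reports true, while one built from an opaque tensor function reports false.
var traceable = new LambdaLayer<float>(
    new[] { 10 }, new[] { 10 },
    traceableExpression: x => TensorOperations<float>.Square(x));
// traceable.SupportsJitCompilation == true

var opaque = new LambdaLayer<float>(
    new[] { 10 }, new[] { 10 },
    forwardFunction: t => t,
    backwardFunction: null,
    activationFunction: null); // named argument disambiguates the overloads
// opaque.SupportsJitCompilation == false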
SupportsTraining
Gets a value indicating whether this layer supports training.
public override bool SupportsTraining { get; }
Property Value
- bool
true if a backward function is provided; otherwise, false.
Remarks
This property indicates whether the layer can be trained through backpropagation. The LambdaLayer supports training only if a backward function is provided.
For Beginners: This property tells you if the layer can learn from data.
A value of true means:
- The layer can adjust its behavior during training
- A backward function has been provided
- It participates in the learning process
A value of false means:
- No backward function was provided
- The layer will always apply the same transformation
- It doesn't participate in the learning process
This is determined by whether you provided a backward function when creating the layer.
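For illustration, the same layer built with and without a backward function (the (input, grad) parameter order is an assumption):
var trainable = new LambdaLayer<float>(
    new[] { 4 }, new[] { 4 },
    forwardFunction: x => x,
    backwardFunction: (input, grad) => grad,
    activationFunction: null);
// trainable.SupportsTraining == true

var frozen = new LambdaLayer<float>(
    new[] { 4 }, new[] { 4 },
    forwardFunction: x => x,
    backwardFunction: null,
    activationFunction: null);
// frozen.SupportsTraining == false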
Methods
Backward(Tensor<T>)
Performs the backward pass of the lambda layer.
public override Tensor<T> Backward(Tensor<T> outputGradient)
Parameters
outputGradient (Tensor<T>): The gradient of the loss with respect to the layer's output.
Returns
- Tensor<T>
The gradient of the loss with respect to the layer's input.
Remarks
This method implements the backward pass of the lambda layer, which is used during training to propagate error gradients back through the network. It applies the derivative of the activation function to the output gradient, then applies the user-defined backward function to compute the gradient with respect to the input.
For Beginners: This method is used during training to calculate how the layer's input should change to reduce errors.
During the backward pass:
- The layer receives information about how its output contributed to errors
- If an activation function was used, its effect is accounted for
- Your custom backward function calculates how the input should change
This method will throw an error if:
- The Forward method hasn't been called first
- No backward function was provided when creating the layer
Writing a correct backward function requires an understanding of calculus and of how gradients flow through neural networks; a worked sketch follows the exceptions list below.
Exceptions
- InvalidOperationException
Thrown when Forward has not been called before Backward or when no backward function is provided.
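For illustration, a worked sketch for the forward function y = x². By the chain rule, dL/dx = dL/dy · 2x, so the backward function multiplies the incoming gradient by twice the cached input. The ElementwiseMultiply and Multiply helpers and the (input, outputGradient) parameter order are assumptions:
var square = new LambdaLayer<float>(
    inputShape: new[] { 8 },
    outputShape: new[] { 8 },
    // Forward: y = x * x.
    forwardFunction: x => x.ElementwiseMultiply(x),
    // Backward: dL/dx = outputGradient * 2x.
    backwardFunction: (input, outputGradient) =>
        outputGradient.ElementwiseMultiply(input.Multiply(2f)),
    activationFunction: null);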
ExportComputationGraph(List<ComputationNode<T>>)
Exports the layer's computation graph for JIT compilation.
public override ComputationNode<T> ExportComputationGraph(List<ComputationNode<T>> inputNodes)
Parameters
inputNodes (List<ComputationNode<T>>): The list to populate with the layer's input computation nodes.
Returns
- ComputationNode<T>
The output computation node representing the layer's operation.
Remarks
This method constructs a computation graph representation of the layer's forward pass that can be JIT compiled for faster inference. All layers MUST implement this method to support JIT compilation.
For Beginners: JIT (Just-In-Time) compilation converts the layer's operations into optimized native code for 5-10x faster inference.
To support JIT compilation, a layer must:
- Implement this method to export its computation graph
- Set SupportsJitCompilation to true
- Use ComputationNode and TensorOperations to build the graph
All layers are required to implement this method, even if they set SupportsJitCompilation = false.
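For illustration, exporting the graph of a traceable LambdaLayer (a minimal sketch based on the documented signature):
var layer = new LambdaLayer<float>(
    new[] { 10 }, new[] { 10 },
    traceableExpression: x => TensorOperations<float>.Square(x));

var inputNodes = new List<ComputationNode<float>>();
ComputationNode<float> output = layer.ExportComputationGraph(inputNodes);
// inputNodes is now populated with the graph's input node(s), and output is
// the root of the traced Square operation, ready for the JIT compiler.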
Forward(Tensor<T>)
Performs the forward pass of the lambda layer.
public override Tensor<T> Forward(Tensor<T> input)
Parameters
input (Tensor<T>): The input tensor to process.
Returns
- Tensor<T>
The output tensor after applying the custom transformation and activation.
Remarks
This method implements the forward pass of the lambda layer. It applies the user-defined forward function to the input tensor, followed by the activation function if one was specified. The input and output are cached for use during the backward pass.
For Beginners: This method processes your data through the custom layer.
During the forward pass:
- Your custom function processes the input data
- If specified, an activation function is applied to add non-linearity
- The input and output are saved for use during training
This is where your custom transformation actually gets applied to the data as it flows through the network.
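For illustration, a minimal sketch of running data through the layer. Tensor<float>.FromArray is a hypothetical factory used here for brevity; create the input tensor however your Tensor<T> type supports:
var layer = new LambdaLayer<float>(
    new[] { 3 }, new[] { 3 },
    traceableExpression: x => TensorOperations<float>.Square(x));

var input = Tensor<float>.FromArray(new float[] { 1f, 2f, 3f }); // hypothetical factory
Tensor<float> output = layer.Forward(input);
// output holds the squared values, passed through the default ReLU activation.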
GetParameters()
Returns an empty vector since the lambda layer typically has no trainable parameters.
public override Vector<T> GetParameters()
Returns
- Vector<T>
An empty vector.
Remarks
This method returns an empty vector because the LambdaLayer typically has no trainable parameters; it is implemented to satisfy the LayerBase<T> contract.
For Beginners: This method returns an empty list because there are typically no parameters.
Since Lambda layers:
- Usually don't have their own weights or biases
- Rely on the custom functions you provide
This method returns an empty vector to indicate there are no parameters. If your custom functions have parameters, you would need to handle saving and loading them separately.
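For illustration:
var layer = new LambdaLayer<float>(
    new[] { 5 }, new[] { 5 },
    traceableExpression: x => TensorOperations<float>.Square(x));

Vector<float> parameters = layer.GetParameters();
// An empty vector: the layer's behavior lives entirely in the functions
// you supplied, not in trainable weights.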
ResetState()
Resets the internal state of the layer.
public override void ResetState()
Remarks
This method resets the internal state of the layer, clearing cached values from the forward pass. This includes the last input and output tensors.
For Beginners: This method clears the layer's memory to start fresh.
When resetting the state:
- The saved input and output from previous data are cleared
- The layer is ready for new data without being influenced by previous data
This is important for:
- Processing a new, unrelated batch of data
- Preventing information from one batch affecting another
- Starting a new training episode
UpdateParameters(T)
Update parameters is a no-op for the lambda layer since it typically doesn't have trainable parameters.
public override void UpdateParameters(T learningRate)
Parameters
learningRate (T): The learning rate (unused in this layer).
Remarks
This method is implemented to satisfy the LayerBase<T> contract, but it typically does nothing for the LambdaLayer since most custom transformations don't have trainable parameters.
For Beginners: This method exists but typically does nothing for Lambda layers.
Since Lambda layers:
- Usually don't have their own weights or biases
- Rely on the custom functions you provide
This method is included only because all layers must have this method, but it doesn't usually do anything for Lambda layers. If your custom functions have parameters that need updating, you would need to handle that separately.