Class SynapticPlasticityLayer<T>
Namespace: AiDotNet.NeuralNetworks.Layers
Assembly: AiDotNet.dll
Represents a synaptic plasticity layer that models biological learning mechanisms through spike-timing-dependent plasticity.
public class SynapticPlasticityLayer<T> : LayerBase<T>, ILayer<T>, IJitCompilable<T>, IDiagnosticsProvider, IWeightLoadable<T>, IDisposable
Type Parameters
T: The numeric type used for calculations, typically float or double.
Inheritance: LayerBase<T> → SynapticPlasticityLayer<T>
Implements: ILayer<T>, IJitCompilable<T>, IDiagnosticsProvider, IWeightLoadable<T>, IDisposable
Remarks
A synaptic plasticity layer implements biologically-inspired learning rules that modify connection strengths based on the relative timing of pre- and post-synaptic neuron activations. This implements spike-timing-dependent plasticity (STDP), a form of Hebbian learning observed in biological neural systems. The layer maintains traces of neuronal activity and applies long-term potentiation (LTP) and long-term depression (LTD) based on the temporal relationship between spikes.
For Beginners: This layer mimics how brain cells (neurons) learn by strengthening or weakening their connections.
Think of it like forming memories:
- When two connected neurons activate in sequence (one fires, then the other), their connection gets stronger
- When they activate in the opposite order, their connection gets weaker
- Over time, pathways that represent useful patterns become stronger
For example, imagine learning to associate a bell sound with food (like Pavlov's dog experiment):
- Initially, there's a weak connection between "hear bell" neurons and "expect food" neurons
- When the bell regularly comes before food, the connection strengthens
- Eventually, just the bell alone strongly activates the "expect food" response
This mimics how real brains learn patterns and form associations between related events.
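The sketch below shows the core STDP idea in plain C#. It is illustrative only, not the layer's actual implementation; the method and variable names are placeholders, though the rate, weight-range, and trace-decay parameters mirror the constructor parameters documented below.
// Illustrative STDP update for a single synapse (pre -> post).
// preTrace/postTrace are exponentially decaying records of recent spikes.
double UpdateSynapse(double weight, bool preSpiked, bool postSpiked,
                     ref double preTrace, ref double postTrace,
                     double ltpRate, double ltdRate,
                     double minWeight, double maxWeight, double traceDecay)
{
    // Decay the activity traces, then bump them for any spike that just occurred.
    preTrace = preTrace * traceDecay + (preSpiked ? 1.0 : 0.0);
    postTrace = postTrace * traceDecay + (postSpiked ? 1.0 : 0.0);

    // LTP: the post-synaptic neuron firing shortly after the pre-synaptic one strengthens the synapse.
    if (postSpiked) weight += ltpRate * preTrace;

    // LTD: the pre-synaptic neuron firing shortly after the post-synaptic one weakens it.
    if (preSpiked) weight -= ltdRate * postTrace;

    // Keep the weight inside the allowed range.
    return Math.Clamp(weight, minWeight, maxWeight);
}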
Constructors
SynapticPlasticityLayer(int, double, double, double, double, double, double)
Initializes a new instance of the SynapticPlasticityLayer<T> class.
public SynapticPlasticityLayer(int size, double stdpLtpRate = 0.005, double stdpLtdRate = 0.0025, double homeostasisRate = 0.0001, double minWeight = 0, double maxWeight = 1, double traceDecay = 0.95)
Parameters
size (int): The number of neurons in the layer.
stdpLtpRate (double): The rate of long-term potentiation (strengthening). Default is 0.005.
stdpLtdRate (double): The rate of long-term depression (weakening). Default is 0.0025.
homeostasisRate (double): The rate of homeostatic regulation. Default is 0.0001.
minWeight (double): The minimum allowed weight value. Default is 0.
maxWeight (double): The maximum allowed weight value. Default is 1.
traceDecay (double): The decay rate for activity traces. Default is 0.95.
Remarks
This constructor creates a synaptic plasticity layer with the specified number of neurons and plasticity parameters. The layer maintains a full connectivity matrix between all neurons, with weights initialized to small random values.
For Beginners: This constructor creates a new synaptic plasticity layer.
The parameters you provide determine:
- size: How many neurons are in the layer
- stdpLtpRate: How quickly connections strengthen (higher = faster learning)
- stdpLtdRate: How quickly connections weaken (higher = faster forgetting)
- homeostasisRate: How strongly the system maintains balance (higher = more aggressive balancing)
- minWeight/maxWeight: The range of possible connection strengths
- traceDecay: How quickly the memory of recent activity fades
These settings control the learning dynamics and how the layer will adapt to patterns over time.
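For example, a layer of 100 neurons that learns a little faster than the defaults could be created like this (the values shown are arbitrary; any omitted parameter keeps its default listed above):
var layer = new SynapticPlasticityLayer<double>(
    size: 100,
    stdpLtpRate: 0.01,   // strengthen connections faster than the default 0.005
    stdpLtdRate: 0.005,  // weaken connections proportionally faster as well
    traceDecay: 0.9);    // let the memory of recent activity fade slightly sooner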
Properties
SupportsJitCompilation
Gets a value indicating whether this layer supports JIT compilation.
public override bool SupportsJitCompilation { get; }
Property Value
- bool
Always true. SynapticPlasticityLayer uses a differentiable forward pass.
Remarks
JIT compilation for SynapticPlasticityLayer exports the forward pass as a simple matrix multiplication. The STDP learning dynamics are approximated through standard gradient-based optimization during training. The temporal spike-timing information is not used in the JIT-compiled forward pass.
SupportsTraining
Gets a value indicating whether this layer supports training.
public override bool SupportsTraining { get; }
Property Value
- bool
true for this layer, as it implements synaptic plasticity rules for learning.
Remarks
This property indicates whether the synaptic plasticity layer can be trained. Since this layer implements biologically-inspired learning rules, it supports training, although the mechanism differs from the standard backpropagation approach.
For Beginners: This property tells you if the layer can learn from data.
A value of true means:
- The layer has internal values (synaptic weights) that can be adjusted during training
- It will improve its performance as it sees more data
- It participates in the learning process
For this layer, the value is always true because its whole purpose is to implement biologically-inspired learning rules that modify connection strengths based on experience.
Methods
Backward(Tensor<T>)
Performs the backward pass of the synaptic plasticity layer.
public override Tensor<T> Backward(Tensor<T> outputGradient)
Parameters
outputGradient (Tensor<T>): The gradient of the loss with respect to the layer's output.
Returns
- Tensor<T>
The gradient of the loss with respect to the layer's input (same as output gradient for this pass-through layer).
Remarks
This method implements the backward pass of the synaptic plasticity layer. As a pass-through layer, it simply passes the gradient back without modification. The actual weight updates are handled in the UpdateParameters method.
For Beginners: This method passes the gradient unchanged back to the previous layer.
During the backward pass:
- The layer receives error gradients from the next layer
- It passes these gradients back without changing them
- No learning happens in this step for this particular layer
This layer uses a different learning mechanism than backpropagation:
- Instead of using gradients directly, it uses spike timing relationships
- The actual learning happens in the UpdateParameters method
- This backward pass is only needed to maintain compatibility with the neural network framework
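In code, the pass-through behavior described above amounts to the following sketch (illustrative only, not the library's source):
public override Tensor<T> Backward(Tensor<T> outputGradient)
{
    // No gradient transformation: the incoming gradient flows straight through
    // to the previous layer. Weight changes are applied later in UpdateParameters.
    return outputGradient;
}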
Dispose(bool)
Releases resources used by this layer.
protected override void Dispose(bool disposing)
Parameters
disposing (bool): True if called from Dispose(), false if called from the finalizer.
Remarks
Override this method in derived classes to release layer-specific resources. Always call base.Dispose(disposing) after releasing your resources.
For Beginners: When creating a custom layer with resources:
protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        // Release your managed resources here
        _myGpuHandle?.Dispose();
        _myGpuHandle = null;
    }
    base.Dispose(disposing);
}
ExportComputationGraph(List<ComputationNode<T>>)
Exports the layer's computation graph for JIT compilation.
public override ComputationNode<T> ExportComputationGraph(List<ComputationNode<T>> inputNodes)
Parameters
inputNodes (List<ComputationNode<T>>): List to populate with input computation nodes.
Returns
- ComputationNode<T>
The output computation node representing the layer's operation.
Remarks
This method constructs a computation graph representation of the layer's forward pass that can be JIT compiled for faster inference. All layers MUST implement this method to support JIT compilation.
For Beginners: JIT (Just-In-Time) compilation converts the layer's operations into optimized native code for 5-10x faster inference.
To support JIT compilation, a layer must:
- Implement this method to export its computation graph
- Set SupportsJitCompilation to true
- Use ComputationNode and TensorOperations to build the graph
All layers are required to implement this method, even if they set SupportsJitCompilation = false.
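As a rough, hypothetical sketch of such an implementation (the helpers CreateInputNode and GetWeightsNode and the MatrixMultiply operation below are assumptions for illustration, not the library's actual API):
public override ComputationNode<T> ExportComputationGraph(List<ComputationNode<T>> inputNodes)
{
    // Register a node representing the layer's input so the JIT compiler can bind it.
    var input = CreateInputNode();            // hypothetical helper
    inputNodes.Add(input);

    // Export the forward pass as a plain matrix multiplication with the weight matrix,
    // matching the approximation described under SupportsJitCompilation.
    return TensorOperations<T>.MatrixMultiply(input, GetWeightsNode());  // hypothetical operation
}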
Forward(Tensor<T>)
Performs the forward pass of the synaptic plasticity layer.
public override Tensor<T> Forward(Tensor<T> input)
Parameters
input (Tensor<T>): The input tensor to process.
Returns
- Tensor<T>
The output tensor (same as input for this pass-through layer).
Remarks
This method implements the forward pass of the synaptic plasticity layer. As a pass-through layer, it simply records the input and returns it unchanged. The actual learning occurs during the update step.
For Beginners: This method passes the input data through the layer unchanged.
During the forward pass:
- The layer receives input activations
- It stores these activations for later use in learning
- It returns the same activations without modification
This layer doesn't change the data during the forward pass because:
- It functions as a "pass-through" layer
- The learning happens during the update step, not the forward pass
- This matches how biological plasticity works (neurons transmit signals unchanged, but their connections change strength afterward)
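A minimal usage sketch, where activations stands in for whatever Tensor<T> the previous layer produced:
// Pass activations through the layer; the returned tensor is identical to the input.
var output = layer.Forward(activations);

// The layer has only recorded the activations so that UpdateParameters
// can apply the STDP rules later.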
ForwardGpu(params IGpuTensor<T>[])
Performs the forward pass of the layer on GPU.
public override IGpuTensor<T> ForwardGpu(params IGpuTensor<T>[] inputs)
Parameters
inputs (IGpuTensor<T>[]): The GPU-resident input tensor(s).
Returns
- IGpuTensor<T>
The GPU-resident output tensor.
Remarks
This method performs the layer's forward computation entirely on GPU. The input and output tensors remain in GPU memory, avoiding expensive CPU-GPU transfers.
For Beginners: This is like Forward() but runs on the graphics card.
The key difference:
- Forward() uses CPU tensors that may be copied to/from GPU
- ForwardGpu() keeps everything on GPU the whole time
Override this in derived classes that support GPU acceleration.
Exceptions
- NotSupportedException
Thrown when the layer does not support GPU execution.
GetParameters()
Gets all trainable parameters of the layer as a single vector.
public override Vector<T> GetParameters()
Returns
- Vector<T>
A vector containing the weight matrix parameters.
Remarks
This method returns the weight matrix as a flattened vector. Although this layer primarily uses STDP learning rules, exposing parameters allows for saving/loading state.
For Beginners: This method returns the layer's weights for saving or inspection.
While the layer uses spike-timing-dependent plasticity rules for learning, it still has parameters (the weight matrix) that can be:
- Saved to disk
- Loaded from a previously trained model
- Inspected for analysis
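For example, for a SynapticPlasticityLayer<double>, the weights can be snapshotted and later restored via SetParameters (documented below):
// Snapshot the current synaptic weights as a flat vector.
Vector<double> savedWeights = layer.GetParameters();

// ... continue training, or create a fresh layer of the same size ...

// Restore the snapshot; the vector length must match the layer's weight count.
layer.SetParameters(savedWeights);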
ResetState()
Resets the internal state of the layer.
public override void ResetState()
Remarks
This method resets the internal state of the synaptic plasticity layer by clearing the last input and output vectors. This can be useful when processing new, unrelated sequences or when restarting training.
For Beginners: This method clears the layer's memory of recent activity.
When resetting the state:
- The layer forgets what inputs and outputs it recently saw
- This is useful when starting to process a new, unrelated example
- It prevents information from one sequence affecting another
Note that this doesn't reset the learned weights, only the temporary state variables. Think of it like clearing short-term memory while keeping long-term memories intact.
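For example, when processing several unrelated sequences (unrelatedSequences, step, and learningRate are placeholders for your own data and settings):
foreach (var sequence in unrelatedSequences)
{
    // Clear the short-term activity traces so this sequence is not influenced
    // by the previous one; the learned weights are kept.
    layer.ResetState();

    foreach (var step in sequence)
    {
        layer.Forward(step);
        layer.UpdateParameters(learningRate);
    }
}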
SetParameters(Vector<T>)
Sets the trainable parameters of the layer from a single vector.
public override void SetParameters(Vector<T> parameters)
Parameters
parameters (Vector<T>): A vector containing all parameters to set.
Remarks
This method sets the weight matrix from a flattened vector. This is useful for loading saved model weights or for implementing optimization algorithms.
Exceptions
- ArgumentException
Thrown when the parameters vector has incorrect length.
UpdateParameters(T)
Updates the parameters of the layer using the calculated gradients.
public override void UpdateParameters(T learningRate)
Parameters
learningRate (T): The learning rate to use for the parameter updates.
Remarks
This method applies the layer's spike-timing-dependent plasticity rules to update the synaptic weights, using the pre- and post-synaptic activity recorded during the forward pass. The learning rate controls the size of the weight updates.
For Beginners: This method updates the layer's internal values during training.
When updating parameters:
- The weights, biases, or other parameters are adjusted to reduce prediction errors
- The learning rate controls how big each update step is
- Smaller learning rates mean slower but more stable learning
- Larger learning rates mean faster but potentially unstable learning
This is how the layer "learns" from data over time, gradually improving its ability to extract useful patterns from inputs.
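A single training step for a double-typed layer might look like this (the 0.01 learning rate is an arbitrary example value):
var output = layer.Forward(input);   // record activity during the forward pass
layer.UpdateParameters(0.01);        // apply the plasticity-based weight updates, scaled by 0.01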