Class GraphAttentionLayer<T>
- Namespace
- AiDotNet.NeuralNetworks.Layers
- Assembly
- AiDotNet.dll
Implements Graph Attention Network (GAT) layer for processing graph-structured data with attention mechanisms.
public class GraphAttentionLayer<T> : LayerBase<T>, IDisposable, IGraphConvolutionLayer<T>, ILayer<T>, IJitCompilable<T>, IDiagnosticsProvider, IWeightLoadable<T>
Type Parameters
T: The numeric type used for calculations, typically float or double.
- Inheritance
-
LayerBase<T> → GraphAttentionLayer<T>
- Implements
-
IDisposable
IGraphConvolutionLayer<T>
ILayer<T>
IJitCompilable<T>
IDiagnosticsProvider
IWeightLoadable<T>
- Inherited Members
Remarks
Graph Attention Networks (GAT) introduced by Veličković et al. use attention mechanisms to learn the relative importance of neighboring nodes. Unlike standard GCN which treats all neighbors equally, GAT can assign different weights to different neighbors, allowing the model to focus on the most relevant connections. The layer uses multi-head attention for robustness and expressiveness.
The attention mechanism computes: α_ij = softmax(LeakyReLU(a^T [Wh_i || Wh_j])) where α_ij is the attention coefficient from node j to node i, W is a weight matrix, h_i and h_j are node features, a is the attention vector, and || denotes concatenation.
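The formula above can be traced numerically with plain arrays, independent of the library's Tensor<T> type. This is an illustrative sketch only: the values of Wh and a are made up for the example, and a single target node with three neighbors is assumed.

```csharp
using System;
using System.Linq;

class GatAttentionSketch
{
    // LeakyReLU with the default negative slope alpha = 0.2.
    static double LeakyReLU(double x, double alpha = 0.2) => x >= 0 ? x : alpha * x;

    static void Main()
    {
        // Transformed features Wh for 3 nodes, 2 output features each (made-up values).
        double[][] wh =
        {
            new[] { 1.0, 0.5 },
            new[] { 0.2, 0.8 },
            new[] { 0.9, 0.1 },
        };
        // Attention vector a: one weight per element of the concatenation [Wh_i || Wh_j].
        double[] a = { 0.3, -0.1, 0.4, 0.2 };

        int i = 0; // target node; assume all 3 nodes are its neighbors
        var scores = new double[wh.Length];
        for (int j = 0; j < wh.Length; j++)
        {
            // e_ij = LeakyReLU(a^T [Wh_i || Wh_j])
            double e = 0;
            for (int k = 0; k < 2; k++) e += a[k] * wh[i][k];
            for (int k = 0; k < 2; k++) e += a[2 + k] * wh[j][k];
            scores[j] = LeakyReLU(e);
        }

        // α_ij = softmax over neighbors j (max-subtracted for numerical stability)
        double max = scores.Max();
        double[] exp = scores.Select(s => Math.Exp(s - max)).ToArray();
        double sum = exp.Sum();
        double[] alphas = exp.Select(v => v / sum).ToArray();

        // The coefficients sum to 1 over the neighborhood of node i.
        Console.WriteLine(string.Join(", ", alphas.Select(v => v.ToString("F3"))));
    }
}
```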
Production-Ready Features:
- Fully vectorized operations using IEngine for GPU acceleration
- Tensor-based weights for all parameters
- Dual backward pass: BackwardManual() for efficiency, BackwardViaAutodiff() for accuracy
- Full gradient computation through attention mechanism
- JIT compilation support via ExportComputationGraph()
- Complete GetParameters()/SetParameters() for model persistence
Constructors
GraphAttentionLayer(int, int, int, double, double, IActivationFunction<T>?)
Initializes a new instance of the GraphAttentionLayer<T> class.
public GraphAttentionLayer(int inputFeatures, int outputFeatures, int numHeads = 1, double alpha = 0.2, double dropoutRate = 0, IActivationFunction<T>? activationFunction = null)
Parameters
- inputFeatures (int)
- outputFeatures (int)
- numHeads (int)
- alpha (double)
- dropoutRate (double)
- activationFunction (IActivationFunction<T>)
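A hypothetical construction sketch based on the signature above; the specific feature counts and rates are made up for illustration:

```csharp
// Creates an 8-head GAT layer mapping 16 input features per node
// to 8 output features.
var gat = new GraphAttentionLayer<float>(
    inputFeatures: 16,
    outputFeatures: 8,
    numHeads: 8,
    alpha: 0.2,        // LeakyReLU negative slope used in attention scores
    dropoutRate: 0.1); // dropout applied to attention coefficients during training
```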
Properties
DropoutRate
Gets the dropout rate applied to attention coefficients during training.
public double DropoutRate { get; }
Property Value
InputFeatures
Gets the number of input features per node.
public int InputFeatures { get; }
Property Value
Remarks
This property indicates how many features each node in the graph has as input. For example, in a molecular graph, this might be properties of each atom.
For Beginners: This tells you how many pieces of information each node starts with.
Examples:
- In a social network: age, location, interests (3 features)
- In a molecule: atomic number, charge, mass (3 features)
- In a citation network: word embeddings (300 features)
Each node has the same number of input features.
NumHeads
Gets the number of attention heads used in multi-head attention.
public int NumHeads { get; }
Property Value
OutputFeatures
Gets the number of output features per node.
public int OutputFeatures { get; }
Property Value
Remarks
This property indicates how many features each node will have after processing through this layer. The layer transforms each node's input features into output features through learned transformations.
For Beginners: This tells you how many pieces of information each node will have after processing.
The layer learns to:
- Combine input features in useful ways
- Extract important patterns
- Create new representations that are better for the task
For example, if you start with 10 features per node and the layer has 16 output features, each node's 10 numbers will be transformed into 16 numbers that hopefully capture more useful information for your specific task.
ParameterCount
Gets the total number of parameters in this layer.
public override int ParameterCount { get; }
Property Value
- int
The total number of trainable parameters.
Remarks
This property returns the total number of trainable parameters in the layer. By default, it returns the length of the Parameters vector, but derived classes can override this to calculate the number of parameters differently.
For Beginners: This tells you how many learnable values the layer has.
The parameter count:
- Shows how complex the layer is
- Indicates how many values need to be learned during training
- Can help estimate memory usage and computational requirements
Layers with more parameters can potentially learn more complex patterns but may also require more data to train effectively.
SupportsGpuExecution
Gets whether this layer supports GPU execution.
protected override bool SupportsGpuExecution { get; }
Property Value
Remarks
GraphAttentionLayer supports GPU execution with multi-head attention computed on GPU. When sparse aggregation is enabled via SetEdges(), the layer uses O(E) GPU operations for efficient attention computation on large graphs.
SupportsJitCompilation
Gets whether this layer supports JIT compilation.
public override bool SupportsJitCompilation { get; }
Property Value
- bool
True if the layer can be JIT compiled, false otherwise.
Remarks
This property indicates whether the layer has implemented ExportComputationGraph() and can benefit from JIT compilation. All layers MUST implement this property.
For Beginners: JIT compilation can make inference 5-10x faster by converting the layer's operations into optimized native code.
Layers should return false if they:
- Have not yet implemented a working ExportComputationGraph()
- Use dynamic operations that change based on input data
- Are too simple to benefit from JIT compilation
When false, the layer will use the standard Forward() method instead.
SupportsTraining
Gets a value indicating whether this layer supports training.
public override bool SupportsTraining { get; }
Property Value
- bool
true if the layer has trainable parameters and supports backpropagation; otherwise, false.
Remarks
This property indicates whether the layer can be trained through backpropagation. Layers with trainable parameters such as weights and biases typically return true, while layers that only perform fixed transformations (like pooling or activation layers) typically return false.
For Beginners: This property tells you if the layer can learn from data.
A value of true means:
- The layer has parameters that can be adjusted during training
- It will improve its performance as it sees more data
- It participates in the learning process
A value of false means:
- The layer doesn't have any adjustable parameters
- It performs the same operation regardless of training
- It doesn't need to learn (but may still be useful)
UsesSparseAggregation
Gets whether sparse (edge-based) aggregation is currently enabled.
public bool UsesSparseAggregation { get; }
Property Value
Methods
Backward(Tensor<T>)
Performs the backward pass of the layer.
public override Tensor<T> Backward(Tensor<T> outputGradient)
Parameters
outputGradient (Tensor<T>): The gradient of the loss with respect to the layer's output.
Returns
- Tensor<T>
The gradient of the loss with respect to the layer's input.
Remarks
This abstract method must be implemented by derived classes to define the backward pass of the layer. The backward pass propagates error gradients from the output of the layer back to its input, and calculates gradients for any trainable parameters.
For Beginners: This method is used during training to calculate how the layer's input should change to reduce errors.
During the backward pass:
- The layer receives information about how its output contributed to errors
- It calculates how its parameters should change to reduce errors
- It calculates how its input should change, which will be used by earlier layers
This is the core of how neural networks learn from their mistakes during training.
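The forward/backward/update cycle described above might look like the following sketch, where layer is a GraphAttentionLayer<float> and ComputeLossGradient is a hypothetical helper standing in for your loss function's gradient:

```csharp
// Sketch of one training step (adjacency, nodeFeatures, and targets
// are assumed to be prepared elsewhere).
layer.SetAdjacencyMatrix(adjacency);                 // graph structure
Tensor<float> output = layer.Forward(nodeFeatures);  // forward pass
Tensor<float> lossGradient = ComputeLossGradient(output, targets); // hypothetical helper
Tensor<float> inputGradient = layer.Backward(lossGradient);        // propagate gradients
layer.UpdateParameters(0.01f);                       // apply updates at learning rate 0.01
```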
BackwardGpu(IGpuTensor<T>)
GPU-accelerated backward pass for Graph Attention Networks.
public override IGpuTensor<T> BackwardGpu(IGpuTensor<T> outputGradient)
Parameters
outputGradient (IGpuTensor<T>)
Returns
- IGpuTensor<T>
Remarks
Computes gradients through the GAT layer using cached forward values:
1. Bias gradient: sum of output gradients over nodes
2. Weight gradients: input^T @ (attention^T @ dOutput)
3. Input gradient: (attention^T @ dOutput) @ W^T
4. Attention weight gradients: through attention score computation
ClearEdges()
Clears the edge list and switches back to dense adjacency matrix aggregation.
public void ClearEdges()
ExportComputationGraph(List<ComputationNode<T>>)
Exports the computation graph for JIT compilation.
public override ComputationNode<T> ExportComputationGraph(List<ComputationNode<T>> inputNodes)
Parameters
inputNodes (List<ComputationNode<T>>)
Returns
Remarks
The exported graph includes both node features and adjacency matrix as inputs, following the industry-standard approach used by PyTorch Geometric and DGL. The adjacency matrix is treated as a dynamic input, allowing the JIT-compiled function to work with different graph structures.
The computation graph captures:
1. Linear transformation for all attention heads
2. Attention score computation with LeakyReLU
3. Softmax normalization over neighbors
4. Weighted aggregation and multi-head averaging
Forward(Tensor<T>)
Performs the forward pass of the layer.
public override Tensor<T> Forward(Tensor<T> input)
Parameters
input (Tensor<T>): The input tensor to process.
Returns
- Tensor<T>
The output tensor after processing.
Remarks
This abstract method must be implemented by derived classes to define the forward pass of the layer. The forward pass transforms the input tensor according to the layer's operation and activation function.
For Beginners: This method processes your data through the layer.
The forward pass:
- Takes input data from the previous layer or the network input
- Applies the layer's specific transformation (like convolution or matrix multiplication)
- Applies any activation function
- Passes the result to the next layer
This is where the actual data processing happens during both training and prediction.
ForwardGpu(params IGpuTensor<T>[])
GPU-accelerated forward pass for Graph Attention Networks.
public override IGpuTensor<T> ForwardGpu(params IGpuTensor<T>[] inputs)
Parameters
inputs (IGpuTensor<T>[])
Returns
- IGpuTensor<T>
Remarks
Implements multi-head graph attention with GPU acceleration. The computation involves:
1. Linear transformation for each attention head: H_h = X * W_h
2. Attention score computation: e_ij = LeakyReLU(a_source^T * H_hi + a_target^T * H_hj)
3. Softmax normalization over neighbors: α_ij = softmax_j(e_ij)
4. Weighted aggregation: output_i = Σ_j α_ij * H_hj
5. Head averaging and bias addition
For sparse graphs, uses efficient O(E) edge-based computation instead of O(N²) dense operations.
GetAdjacencyMatrix()
Gets the adjacency matrix currently being used by this layer.
public Tensor<T>? GetAdjacencyMatrix()
Returns
- Tensor<T>
The adjacency matrix tensor, or null if not set.
Remarks
This method retrieves the adjacency matrix that was set using SetAdjacencyMatrix. It may return null if the adjacency matrix has not been set yet.
For Beginners: This method lets you check what graph structure the layer is using.
This can be useful for:
- Verifying the correct graph was loaded
- Debugging graph connectivity issues
- Visualizing the graph structure
GetParameters()
Gets all trainable parameters of the layer as a single vector.
public override Vector<T> GetParameters()
Returns
- Vector<T>
A vector containing all trainable parameters.
Remarks
This abstract method must be implemented by derived classes to provide access to all trainable parameters of the layer as a single vector. This is useful for optimization algorithms that operate on all parameters at once, or for saving and loading model weights.
For Beginners: This method collects all the learnable values from the layer.
The parameters:
- Are the numbers that the neural network learns during training
- Include weights, biases, and other learnable values
- Are combined into a single long list (vector)
This is useful for:
- Saving the model to disk
- Loading parameters from a previously trained model
- Advanced optimization techniques that need access to all parameters
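A common use, sketched under the assumption that a parameter vector round-trips through GetParameters/SetParameters unchanged:

```csharp
// Snapshot and restore parameters, e.g. for checkpointing or rollback.
Vector<float> snapshot = layer.GetParameters();
// ... continue training, then roll back to the snapshot:
layer.SetParameters(snapshot); // throws ArgumentException if lengths differ
```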
ResetState()
Resets the internal state of the layer.
public override void ResetState()
Remarks
This abstract method must be implemented by derived classes to reset any internal state the layer maintains between forward and backward passes. This is useful when starting to process a new sequence or when implementing stateful recurrent networks.
For Beginners: This method clears the layer's memory to start fresh.
When resetting the state:
- Cached inputs and outputs are cleared
- Any temporary calculations are discarded
- The layer is ready to process new data without being influenced by previous data
This is important for:
- Processing a new, unrelated sequence
- Preventing information from one sequence affecting another
- Starting a new training episode
SetAdjacencyMatrix(Tensor<T>)
Sets the adjacency matrix that defines the graph structure.
public void SetAdjacencyMatrix(Tensor<T> adjacencyMatrix)
Parameters
adjacencyMatrix (Tensor<T>): The adjacency matrix tensor representing node connections.
Remarks
The adjacency matrix is a square matrix where element [i,j] indicates whether and how strongly node i is connected to node j. Common formats include:
- Binary adjacency: 1 if connected, 0 otherwise
- Weighted adjacency: connection strength as a value
- Normalized adjacency: preprocessed for better training
For Beginners: This method tells the layer how nodes in the graph are connected.
Think of the adjacency matrix as a map:
- Each row represents a node
- Each column represents a potential connection
- The value at position [i,j] tells if node i connects to node j
For example, in a social network:
- adjacencyMatrix[Alice, Bob] = 1 means Alice is friends with Bob
- adjacencyMatrix[Alice, Charlie] = 0 means Alice is not friends with Charlie
This connectivity information is crucial for graph neural networks to propagate information between connected nodes.
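A hypothetical sketch of a 3-node graph where node 0 connects to nodes 1 and 2. The Tensor<T> shape-array constructor and two-index setter shown here are assumptions; adjust to the actual Tensor<T> API.

```csharp
// Binary adjacency for 3 nodes; assumed Tensor<float> constructor and indexer.
var adjacency = new Tensor<float>(new[] { 3, 3 });
adjacency[0, 1] = 1f; adjacency[1, 0] = 1f; // undirected edge 0-1
adjacency[0, 2] = 1f; adjacency[2, 0] = 1f; // undirected edge 0-2
layer.SetAdjacencyMatrix(adjacency);
```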
SetEdges(Tensor<int>, Tensor<int>)
Sets the edge list representation of the graph structure for sparse aggregation.
public void SetEdges(Tensor<int> sourceIndices, Tensor<int> targetIndices)
Parameters
sourceIndices (Tensor<int>): Tensor containing source node indices for each edge. Shape: [numEdges].
targetIndices (Tensor<int>): Tensor containing target node indices for each edge. Shape: [numEdges].
Remarks
This method provides an edge-list representation of the graph, enabling memory-efficient sparse attention computation using the Engine's GraphAttention operations. This is the recommended approach for production GAT workloads with large sparse graphs.
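The same 3-node graph from the adjacency example can be expressed as an edge list; each undirected edge appears in both directions. The Tensor<int> constructor and single-index setter used here are assumptions about the tensor API.

```csharp
// Edge-list form of a 3-node graph with undirected edges 0-1 and 0-2.
// Shapes are [numEdges] = [4], since each edge is stored in both directions.
var sources = new Tensor<int>(new[] { 4 }); // assumed constructor
var targets = new Tensor<int>(new[] { 4 });
int[] src = { 0, 1, 0, 2 }, dst = { 1, 0, 2, 0 };
for (int e = 0; e < 4; e++) { sources[e] = src[e]; targets[e] = dst[e]; }
layer.SetEdges(sources, targets); // enables O(E) sparse aggregation
```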
SetParameters(Vector<T>)
Sets the trainable parameters of the layer.
public override void SetParameters(Vector<T> parameters)
Parameters
parameters (Vector<T>): A vector containing all parameters to set.
Remarks
This method sets all the trainable parameters of the layer from a single vector of parameters. The parameters vector must have the correct length to match the total number of parameters in the layer. By default, it simply assigns the parameters vector to the Parameters field, but derived classes may override this to handle the parameters differently.
For Beginners: This method updates all the learnable values in the layer.
When setting parameters:
- The input must be a vector with the correct length
- The layer parses this vector to set all its internal parameters
- Throws an error if the input doesn't match the expected number of parameters
This is useful for:
- Loading a previously saved model
- Transferring parameters from another model
- Setting specific parameter values for testing
Exceptions
- ArgumentException
Thrown when the parameters vector has incorrect length.
UpdateParameters(T)
Updates the parameters of the layer using the calculated gradients.
public override void UpdateParameters(T learningRate)
Parameters
learningRate (T): The learning rate to use for the parameter updates.
Remarks
This abstract method must be implemented by derived classes to define how the layer's parameters are updated during training. The learning rate controls the size of the parameter updates.
For Beginners: This method updates the layer's internal values during training.
When updating parameters:
- The weights, biases, or other parameters are adjusted to reduce prediction errors
- The learning rate controls how big each update step is
- Smaller learning rates mean slower but more stable learning
- Larger learning rates mean faster but potentially unstable learning
This is how the layer "learns" from data over time, gradually improving its ability to extract useful patterns from inputs.