Class DirectionalGraphLayer<T>

Namespace
AiDotNet.NeuralNetworks.Layers
Assembly
AiDotNet.dll

Implements Directional Graph Networks for directed graph processing with separate in/out aggregations.

public class DirectionalGraphLayer<T> : LayerBase<T>, IDisposable, IGraphConvolutionLayer<T>, ILayer<T>, IJitCompilable<T>, IDiagnosticsProvider, IWeightLoadable<T>

Type Parameters

T

The numeric type used for calculations, typically float or double.

Inheritance
LayerBase<T> → DirectionalGraphLayer<T>

Implements
IDisposable, IGraphConvolutionLayer<T>, ILayer<T>, IJitCompilable<T>, IDiagnosticsProvider, IWeightLoadable<T>

Remarks

Directional Graph Networks (DGN) explicitly model the directionality of edges in directed graphs. Unlike standard GNNs that often ignore edge direction or treat graphs as undirected, DGNs maintain separate aggregations for incoming and outgoing edges, capturing asymmetric relationships.

The layer computes separate representations for in-neighbors and out-neighbors:

  • h_in = AGGREGATE_IN({h_j : j → i})
  • h_out = AGGREGATE_OUT({h_j : i → j})
  • h_i' = UPDATE(h_i, h_in, h_out)

This allows the network to learn different patterns for sources and targets of edges.
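
To make this concrete, here is a minimal plain-C# sketch of mean aggregation over in- and out-neighbors. It illustrates the formulas above with dense arrays; the actual layer applies learned transformations to each path and operates on tensors.

// Convention assumed here: adjacency[i, j] != 0 means an edge i → j
// (see SetAdjacencyMatrix). Mean aggregation is one common choice;
// sum or max would work the same way.
static (double[] hIn, double[] hOut) DirectionalAggregate(
    double[,] adjacency, double[][] features, int node)
{
    int n = adjacency.GetLength(0);
    int f = features[node].Length;
    var hIn = new double[f];
    var hOut = new double[f];
    int inCount = 0, outCount = 0;

    for (int j = 0; j < n; j++)
    {
        if (adjacency[j, node] != 0) // j → node: j is an in-neighbor
        {
            for (int k = 0; k < f; k++) hIn[k] += features[j][k];
            inCount++;
        }
        if (adjacency[node, j] != 0) // node → j: j is an out-neighbor
        {
            for (int k = 0; k < f; k++) hOut[k] += features[j][k];
            outCount++;
        }
    }

    // Divide by neighbor counts to get the mean of each direction
    for (int k = 0; k < f; k++)
    {
        if (inCount > 0) hIn[k] /= inCount;
        if (outCount > 0) hOut[k] /= outCount;
    }
    return (hIn, hOut);
}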

For Beginners: This layer understands that graph connections can have direction.

Think of different types of directed networks:

Twitter/Social Media:

  • You follow someone (outgoing edge)
  • Someone follows you (incoming edge)
  • These are NOT the same! Celebrities have many incoming, fewer outgoing

Citation Networks:

  • Papers you cite (outgoing): Shows your influences
  • Papers citing you (incoming): Shows your impact
  • Direction matters for understanding importance

Web Pages:

  • Links you have (outgoing): What you reference
  • Links to you (incoming/backlinks): Your authority
  • Google PageRank uses this directionality

Transaction Networks:

  • Money sent (outgoing): Your purchases
  • Money received (incoming): Your sales
  • Different patterns for buyers vs sellers

Why separate in/out aggregation?

  • Asymmetric roles: Being cited vs citing have different meanings
  • Different patterns: Incoming and outgoing patterns can be very different
  • Better expressiveness: Captures more information than treating edges as undirected

The layer learns separate transformations for incoming and outgoing neighbors, then combines them to update each node's representation.

Constructors

DirectionalGraphLayer(int, int, bool, IActivationFunction<T>?)

Initializes a new instance of the DirectionalGraphLayer<T> class.

public DirectionalGraphLayer(int inputFeatures, int outputFeatures, bool useGating = false, IActivationFunction<T>? activationFunction = null)

Parameters

inputFeatures int

Number of input features per node.

outputFeatures int

Number of output features per node.

useGating bool

Whether to use gating mechanism for combining in/out features (default: false).

activationFunction IActivationFunction<T>

Activation function to apply.

Remarks

Creates a directional graph layer that processes incoming and outgoing edges separately. The layer maintains three transformation paths: incoming neighbors, outgoing neighbors, and self-features, which are then combined using learned weights.

For Beginners: This creates a new directional graph layer.

Key parameters:

  • useGating: Advanced feature for dynamic combination of in/out information
    • false: Simple weighted combination (faster, good for most cases)
    • true: Learned gating decides how much to use each direction (more expressive)

The layer has three "paths":

  1. Incoming path: Processes nodes that point TO this node
  2. Outgoing path: Processes nodes that this node points TO
  3. Self path: Processes the node's own features

All three are combined to create the final node representation.
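
As a rough sketch of the gated variant: a per-feature sigmoid gate could decide how much of each direction to keep. The exact gating form inside this layer is an implementation detail; the gate below is an assumption for illustration only.

// Hypothetical gated combination of the three paths (per feature).
// A gate value near 1 favors the incoming path, near 0 the outgoing path;
// the self path is always added. The real layer learns its own weights.
static double[] GatedCombine(double[] hSelf, double[] hIn, double[] hOut, double[] gateLogits)
{
    var result = new double[hSelf.Length];
    for (int k = 0; k < hSelf.Length; k++)
    {
        double gate = 1.0 / (1.0 + Math.Exp(-gateLogits[k])); // sigmoid
        result[k] = hSelf[k] + gate * hIn[k] + (1.0 - gate) * hOut[k];
    }
    return result;
}

With useGating: false, the layer instead combines the paths with a simple weighted combination, which is cheaper to compute.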

Example usage:

// For a citation network where direction matters
var layer = new DirectionalGraphLayer<float>(128, 256, useGating: true);

// Set the directed adjacency matrix
// adjacency[i, j] = 1 means an edge from node i to node j (i → j),
// matching the convention documented on SetAdjacencyMatrix
layer.SetAdjacencyMatrix(adjacencyMatrix);

var output = layer.Forward(nodeFeatures);
// Output captures both who cites you (incoming) and who you cite (outgoing)

Properties

InputFeatures

Gets the number of input features per node.

public int InputFeatures { get; }

Property Value

int

Remarks

This property indicates how many features each node in the graph has as input. For example, in a molecular graph, this might be properties of each atom.

For Beginners: This tells you how many pieces of information each node starts with.

Examples:

  • In a social network: age, location, interests (3 features)
  • In a molecule: atomic number, charge, mass (3 features)
  • In a citation network: word embeddings (300 features)

Each node has the same number of input features.

OutputFeatures

Gets the number of output features per node.

public int OutputFeatures { get; }

Property Value

int

Remarks

This property indicates how many features each node will have after processing through this layer. The layer transforms each node's input features into output features through learned transformations.

For Beginners: This tells you how many pieces of information each node will have after processing.

The layer learns to:

  • Combine input features in useful ways
  • Extract important patterns
  • Create new representations that are better for the task

For example, if you start with 10 features per node and the layer has 16 output features, each node's 10 numbers will be transformed into 16 numbers that hopefully capture more useful information for your specific task.
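
For example:

// Values echo the constructor arguments
var layer = new DirectionalGraphLayer<float>(inputFeatures: 10, outputFeatures: 16);
Console.WriteLine(layer.InputFeatures);  // 10
Console.WriteLine(layer.OutputFeatures); // 16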

SupportsGpuExecution

Gets whether this layer has a GPU execution implementation for inference.

protected override bool SupportsGpuExecution { get; }

Property Value

bool

Remarks

Override this to return true when the layer implements ForwardGpu(params IGpuTensor<T>[]). The actual CanExecuteOnGpu property combines this with engine availability.

For Beginners: This flag indicates if the layer has GPU code for the forward pass. Set this to true in derived classes that implement ForwardGpu.

SupportsJitCompilation

Gets whether this layer supports JIT compilation.

public override bool SupportsJitCompilation { get; }

Property Value

bool

True if the layer can be JIT compiled, false otherwise.

Remarks

This property indicates whether the layer has implemented ExportComputationGraph() and can benefit from JIT compilation. All layers MUST implement this property.

For Beginners: JIT compilation can make inference 5-10x faster by converting the layer's operations into optimized native code.

Layers should return false if they:

  • Have not yet implemented a working ExportComputationGraph()
  • Use dynamic operations that change based on input data
  • Are too simple to benefit from JIT compilation

When false, the layer will use the standard Forward() method instead.

SupportsTraining

Gets a value indicating whether this layer supports training.

public override bool SupportsTraining { get; }

Property Value

bool

true if the layer has trainable parameters and supports backpropagation; otherwise, false.

Remarks

This property indicates whether the layer can be trained through backpropagation. Layers with trainable parameters such as weights and biases typically return true, while layers that only perform fixed transformations (like pooling or activation layers) typically return false.

For Beginners: This property tells you if the layer can learn from data.

A value of true means:

  • The layer has parameters that can be adjusted during training
  • It will improve its performance as it sees more data
  • It participates in the learning process

A value of false means:

  • The layer doesn't have any adjustable parameters
  • It performs the same operation regardless of training
  • It doesn't need to learn (but may still be useful)

Methods

Backward(Tensor<T>)

Computes the backward pass for this Directional Graph layer.

public override Tensor<T> Backward(Tensor<T> outputGradient)

Parameters

outputGradient Tensor<T>

The gradient of the loss with respect to this layer's output.

Returns

Tensor<T>

The gradient of the loss with respect to this layer's input.

Remarks

This backward pass computes gradients for all parameters and propagates gradients to the input. It handles the complex flow through directional aggregation, gating, and combination stages.

ExportComputationGraph(List<ComputationNode<T>>)

Exports the layer's computation graph for JIT compilation.

public override ComputationNode<T> ExportComputationGraph(List<ComputationNode<T>> inputNodes)

Parameters

inputNodes List<ComputationNode<T>>

List to populate with input computation nodes.

Returns

ComputationNode<T>

The output computation node representing the layer's operation.

Remarks

This method constructs a computation graph representation of the layer's forward pass that can be JIT compiled for faster inference. All layers MUST implement this method to support JIT compilation.

For Beginners: JIT (Just-In-Time) compilation converts the layer's operations into optimized native code for 5-10x faster inference.

To support JIT compilation, a layer must:

  1. Implement this method to export its computation graph
  2. Set SupportsJitCompilation to true
  3. Use ComputationNode and TensorOperations to build the graph

All layers are required to implement this method, even if they set SupportsJitCompilation = false.

Forward(Tensor<T>)

Performs the forward pass of the layer.

public override Tensor<T> Forward(Tensor<T> input)

Parameters

input Tensor<T>

The input tensor to process.

Returns

Tensor<T>

The output tensor after processing.

Remarks

This abstract method must be implemented by derived classes to define the forward pass of the layer. The forward pass transforms the input tensor according to the layer's operation and activation function.

For Beginners: This method processes your data through the layer.

The forward pass:

  • Takes input data from the previous layer or the network input
  • Applies the layer's specific transformation (like convolution or matrix multiplication)
  • Applies any activation function
  • Passes the result to the next layer

This is where the actual data processing happens during both training and prediction.

ForwardGpu(params IGpuTensor<T>[])

GPU-accelerated forward pass for DirectionalGraphLayer. Uses sparse matrix operations for efficient directed graph aggregation.

public override IGpuTensor<T> ForwardGpu(params IGpuTensor<T>[] inputs)

Parameters

inputs IGpuTensor<T>[]

The GPU tensors to process.

Returns

IGpuTensor<T>

The output GPU tensor after processing.

GetAdjacencyMatrix()

Gets the adjacency matrix currently being used by this layer.

public Tensor<T>? GetAdjacencyMatrix()

Returns

Tensor<T>

The adjacency matrix tensor, or null if not set.

Remarks

This method retrieves the adjacency matrix that was set using SetAdjacencyMatrix. It may return null if the adjacency matrix has not been set yet.

For Beginners: This method lets you check what graph structure the layer is using.

This can be useful for:

  • Verifying the correct graph was loaded
  • Debugging graph connectivity issues
  • Visualizing the graph structure
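
A small usage sketch (variable names are illustrative):

var adjacency = layer.GetAdjacencyMatrix();
if (adjacency is null)
{
    // Forward would have no graph structure to aggregate over
    layer.SetAdjacencyMatrix(adjacencyMatrix);
}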

GetParameters()

Gets all trainable parameters of the layer as a single vector.

public override Vector<T> GetParameters()

Returns

Vector<T>

A vector containing all trainable parameters.

Remarks

This abstract method must be implemented by derived classes to provide access to all trainable parameters of the layer as a single vector. This is useful for optimization algorithms that operate on all parameters at once, or for saving and loading model weights.

For Beginners: This method collects all the learnable values from the layer.

The parameters:

  • Are the numbers that the neural network learns during training
  • Include weights, biases, and other learnable values
  • Are combined into a single long list (vector)

This is useful for:

  • Saving the model to disk
  • Loading parameters from a previously trained model
  • Advanced optimization techniques that need access to all parameters
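
A typical save/restore round trip might look like this (a sketch; both layers must have identical dimensions):

// Capture the layer's current weights as one flat vector
var saved = layer.GetParameters();

// ... later, restore them into a layer with the same shape
var restored = new DirectionalGraphLayer<float>(128, 256, useGating: true);
restored.SetParameters(saved);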

ResetState()

Resets the internal state of the layer.

public override void ResetState()

Remarks

This abstract method must be implemented by derived classes to reset any internal state the layer maintains between forward and backward passes. This is useful when starting to process a new sequence or when implementing stateful recurrent networks.

For Beginners: This method clears the layer's memory to start fresh.

When resetting the state:

  • Cached inputs and outputs are cleared
  • Any temporary calculations are discarded
  • The layer is ready to process new data without being influenced by previous data

This is important for:

  • Processing a new, unrelated sequence
  • Preventing information from one sequence affecting another
  • Starting a new training episode
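
For example, when switching to an unrelated graph (variable names are illustrative):

// Process the first graph
layer.SetAdjacencyMatrix(graphA);
var outputA = layer.Forward(featuresA);

// Clear cached activations before an unrelated graph
layer.ResetState();
layer.SetAdjacencyMatrix(graphB);
var outputB = layer.Forward(featuresB);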

SetAdjacencyMatrix(Tensor<T>)

Sets the adjacency matrix that defines the graph structure.

public void SetAdjacencyMatrix(Tensor<T> adjacencyMatrix)

Parameters

adjacencyMatrix Tensor<T>

The adjacency matrix tensor representing node connections.

Remarks

The adjacency matrix is a square matrix where element [i,j] indicates whether and how strongly node i is connected to node j. Common formats include:

  • Binary adjacency: 1 if connected, 0 otherwise
  • Weighted adjacency: connection strength as a value
  • Normalized adjacency: preprocessed for better training

For Beginners: This method tells the layer how nodes in the graph are connected.

Think of the adjacency matrix as a map:

  • Each row represents a node
  • Each column represents a potential connection
  • The value at position [i,j] tells if node i connects to node j

For example, in a directed social network:

  • adjacencyMatrix[Alice, Bob] = 1 means Alice follows Bob
  • adjacencyMatrix[Alice, Charlie] = 0 means Alice does not follow Charlie

This connectivity information is crucial for graph neural networks to propagate information between connected nodes.
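
For illustration, a tiny 3-node directed cycle (the Tensor<T> constructor and indexer used here are assumptions; consult the Tensor<T> documentation for the exact API):

// 3 nodes with edges 0 → 1, 1 → 2, 2 → 0
var adjacency = new Tensor<float>(new[] { 3, 3 }); // shape constructor assumed
adjacency[0, 1] = 1f;
adjacency[1, 2] = 1f;
adjacency[2, 0] = 1f;
layer.SetAdjacencyMatrix(adjacency);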

SetParameters(Vector<T>)

Sets the trainable parameters of the layer.

public override void SetParameters(Vector<T> parameters)

Parameters

parameters Vector<T>

A vector containing all parameters to set.

Remarks

This method sets all the trainable parameters of the layer from a single vector of parameters. The parameters vector must have the correct length to match the total number of parameters in the layer. By default, it simply assigns the parameters vector to the Parameters field, but derived classes may override this to handle the parameters differently.

For Beginners: This method updates all the learnable values in the layer.

When setting parameters:

  • The input must be a vector with the correct length
  • The layer parses this vector to set all its internal parameters
  • Throws an error if the input doesn't match the expected number of parameters

This is useful for:

  • Loading a previously saved model
  • Transferring parameters from another model
  • Setting specific parameter values for testing

Exceptions

ArgumentException

Thrown when the parameters vector has incorrect length.

UpdateParameters(T)

Updates the parameters of the layer using the calculated gradients.

public override void UpdateParameters(T learningRate)

Parameters

learningRate T

The learning rate to use for the parameter updates.

Remarks

This abstract method must be implemented by derived classes to define how the layer's parameters are updated during training. The learning rate controls the size of the parameter updates.

For Beginners: This method updates the layer's internal values during training.

When updating parameters:

  • The weights, biases, or other parameters are adjusted to reduce prediction errors
  • The learning rate controls how big each update step is
  • Smaller learning rates mean slower but more stable learning
  • Larger learning rates mean faster but potentially unstable learning

This is how the layer "learns" from data over time, gradually improving its ability to extract useful patterns from inputs.
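
Putting Forward, Backward, and UpdateParameters together, one simplified training step might look like this (ComputeLossGradient is a hypothetical placeholder for your loss function's gradient):

// One training step (sketch)
layer.SetAdjacencyMatrix(adjacencyMatrix);

var output = layer.Forward(nodeFeatures);

// dLoss/dOutput from your loss function (hypothetical helper)
var outputGradient = ComputeLossGradient(output, targets);

// Propagate gradients, then take a small step
layer.Backward(outputGradient);
layer.UpdateParameters(0.01f);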