
Class NodeClassificationModel<T>

Namespace
AiDotNet.NeuralNetworks.Tasks.Graph
Assembly
AiDotNet.dll

Implements a complete neural network model for node classification tasks on graphs.

public class NodeClassificationModel<T> : NeuralNetworkBase<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable

Type Parameters

T

The numeric type used for calculations, typically float or double.

Inheritance
NeuralNetworkBase<T>
NodeClassificationModel<T>
Implements
IFullModel<T, Tensor<T>, Tensor<T>>
IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>
IParameterizable<T, Tensor<T>, Tensor<T>>
ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>
IGradientComputable<T, Tensor<T>, Tensor<T>>

Remarks

Node classification predicts labels for individual nodes in a graph using:

  • Node features
  • Graph structure (adjacency information)
  • Semi-supervised learning (only some nodes have labels)

For Beginners: This model classifies nodes in a graph.

How it works:

  1. Input: Graph with node features and structure
  2. Processing: Stack of graph convolutional layers
    • Each layer aggregates information from neighbors
    • Features become more "context-aware" at each layer
    • After k layers, each node knows about its k-hop neighborhood
  3. Output: Class predictions for each node

Example architecture:

Input: [num_nodes, input_features]
  |
GCN Layer 1: [num_nodes, hidden_dim]
  |
Activation (ReLU)
  |
Dropout
  |
GCN Layer 2: [num_nodes, num_classes]
  |
Softmax: [num_nodes, num_classes] (probabilities)
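
A minimal, self-contained sketch of this forward pass is shown below, written against plain double[,] arrays. It illustrates only the math of the two GCN layers; the class and method names are illustrative and are not part of the AiDotNet API, and dropout is omitted because it is only active during training.

using System;

static class GcnForwardSketch
{
    // adj: normalized adjacency matrix [n, n]
    // x:   node features               [n, f]
    // w1:  layer-1 weights             [f, hidden]
    // w2:  layer-2 weights             [hidden, classes]
    public static double[,] Forward(double[,] adj, double[,] x, double[,] w1, double[,] w2)
    {
        var h = MatMul(adj, MatMul(x, w1));      // GCN layer 1: project features, aggregate neighbors
        Apply(h, v => Math.Max(0.0, v));         // ReLU activation
        var logits = MatMul(adj, MatMul(h, w2)); // GCN layer 2: [n, classes]
        SoftmaxRows(logits);                     // per-node class probabilities
        return logits;
    }

    static double[,] MatMul(double[,] a, double[,] b)
    {
        int n = a.GetLength(0), k = a.GetLength(1), m = b.GetLength(1);
        var r = new double[n, m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                for (int t = 0; t < k; t++)
                    r[i, j] += a[i, t] * b[t, j];
        return r;
    }

    static void Apply(double[,] a, Func<double, double> f)
    {
        for (int i = 0; i < a.GetLength(0); i++)
            for (int j = 0; j < a.GetLength(1); j++)
                a[i, j] = f(a[i, j]);
    }

    static void SoftmaxRows(double[,] a)
    {
        for (int i = 0; i < a.GetLength(0); i++)
        {
            double max = double.NegativeInfinity, sum = 0.0;
            for (int j = 0; j < a.GetLength(1); j++) max = Math.Max(max, a[i, j]);
            for (int j = 0; j < a.GetLength(1); j++) { a[i, j] = Math.Exp(a[i, j] - max); sum += a[i, j]; }
            for (int j = 0; j < a.GetLength(1); j++) a[i, j] /= sum;
        }
    }
}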

Training:

  • Use labeled nodes for computing loss (a sketch of this masked loss follows this list)
  • Unlabeled nodes still participate in message passing
  • Graph structure helps propagate label information
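
The masked-loss idea is sketched below with plain arrays. This is a conceptual illustration only, not the library's internal implementation; TrainOnTask handles the masking for you.

using System;

static class MaskedLossSketch
{
    // probs:     softmax outputs [n, classes]
    // labels:    true class index per node (only meaningful where trainMask is true)
    // trainMask: true for labeled training nodes, false for unlabeled/test nodes
    public static double MaskedCrossEntropy(double[,] probs, int[] labels, bool[] trainMask)
    {
        double loss = 0.0;
        int labeled = 0;
        for (int i = 0; i < probs.GetLength(0); i++)
        {
            if (!trainMask[i]) continue;                  // unlabeled nodes contribute no loss...
            loss += -Math.Log(probs[i, labels[i]] + 1e-12);
            labeled++;
        }
        // ...but they still took part in message passing during the forward pass.
        return labeled > 0 ? loss / labeled : 0.0;
    }
}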

Constructors

NodeClassificationModel(NeuralNetworkArchitecture<T>, int, int, double, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>?, ILossFunction<T>?, double)

Initializes a new instance of the NodeClassificationModel<T> class.

public NodeClassificationModel(NeuralNetworkArchitecture<T> architecture, int hiddenDim = 64, int numLayers = 2, double dropoutRate = 0.5, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>? optimizer = null, ILossFunction<T>? lossFunction = null, double maxGradNorm = 1)

Parameters

architecture NeuralNetworkArchitecture<T>

The neural network architecture defining input/output sizes and layers.

hiddenDim int

Hidden dimension for intermediate layers (default: 64).

numLayers int

Number of graph convolutional layers (default: 2).

dropoutRate double

Dropout rate for regularization (default: 0.5).

optimizer IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>

Optional optimizer for training.

lossFunction ILossFunction<T>

Optional loss function for training.

maxGradNorm double

Maximum gradient norm for clipping (default: 1.0).

Remarks

For Beginners: Creating a node classification model:

// Create architecture for Cora citation network
var architecture = new NeuralNetworkArchitecture<double>(
    InputType.OneDimensional,
    NeuralNetworkTaskType.MultiClassClassification,
    NetworkComplexity.Simple,
    inputSize: 1433,    // Cora has 1433 word features
    outputSize: 7);     // 7 paper categories

// Create model with default layers
var model = new NodeClassificationModel<double>(architecture);

// Train on node classification task
var history = model.TrainOnTask(task, epochs: 200, learningRate: 0.01);

Properties

DropoutRate

Gets the dropout rate for regularization.

public double DropoutRate { get; }

Property Value

double

HiddenDim

Gets the hidden dimension size.

public int HiddenDim { get; }

Property Value

int

InputFeatures

Gets the number of input features per node.

public int InputFeatures { get; }

Property Value

int

NumClasses

Gets the number of output classes.

public int NumClasses { get; }

Property Value

int

NumLayers

Gets the number of graph layers.

public int NumLayers { get; }

Property Value

int

Methods

Backward(Tensor<T>)

Performs a backward pass through the network.

public Tensor<T> Backward(Tensor<T> outputGradient)

Parameters

outputGradient Tensor<T>

Gradient of loss with respect to output.

Returns

Tensor<T>

Gradient with respect to input.

CreateNewInstance()

Creates a new instance of this network type for cloning or deserialization.

protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()

Returns

IFullModel<T, Tensor<T>, Tensor<T>>

DeserializeNetworkSpecificData(BinaryReader)

Deserializes network-specific data from a binary reader.

protected override void DeserializeNetworkSpecificData(BinaryReader reader)

Parameters

reader BinaryReader

EvaluateOnTask(NodeClassificationTask<T>)

Evaluates the model on test nodes.

public double EvaluateOnTask(NodeClassificationTask<T> task)

Parameters

task NodeClassificationTask<T>

The node classification task.

Returns

double

Test accuracy.
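
A usage sketch, assuming model and task have already been set up and trained as in the constructor example:

// Measure accuracy on the task's test nodes after training.
double testAccuracy = model.EvaluateOnTask(task);
Console.WriteLine($"Test accuracy: {testAccuracy:P1}");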

Forward(Tensor<T>)

Performs a forward pass through the network.

public Tensor<T> Forward(Tensor<T> nodeFeatures)

Parameters

nodeFeatures Tensor<T>

Node feature tensor.

Returns

Tensor<T>

Output predictions for all nodes.

GetModelMetadata()

Gets metadata about this model for serialization and identification.

public override ModelMetadata<T> GetModelMetadata()

Returns

ModelMetadata<T>

GetParameters()

Gets all parameters as a vector.

public override Vector<T> GetParameters()

Returns

Vector<T>

InitializeLayers()

Initializes the layers of the neural network based on the provided architecture.

protected override void InitializeLayers()

Predict(Tensor<T>)

Makes a prediction using the trained network.

public override Tensor<T> Predict(Tensor<T> input)

Parameters

input Tensor<T>

The input tensor containing node features.

Returns

Tensor<T>

The prediction tensor with class probabilities for each node.

SerializeNetworkSpecificData(BinaryWriter)

Serializes network-specific data to a binary writer.

protected override void SerializeNetworkSpecificData(BinaryWriter writer)

Parameters

writer BinaryWriter

SetAdjacencyMatrix(Tensor<T>)

Sets the adjacency matrix for all graph layers in the model.

public void SetAdjacencyMatrix(Tensor<T> adjacencyMatrix)

Parameters

adjacencyMatrix Tensor<T>

The graph adjacency matrix.

Remarks

Call this before training or inference to provide the graph structure. All graph convolutional layers in the model will use this adjacency matrix.
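
A usage sketch, assuming adjacency is a Tensor<double> of shape [num_nodes, num_nodes] and nodeFeatures is a Tensor<double> of shape [num_nodes, input_features] built by your own data pipeline:

// Provide the graph structure to every graph convolutional layer first...
model.SetAdjacencyMatrix(adjacency);

// ...then run inference over all nodes at once.
var predictions = model.Predict(nodeFeatures);   // [num_nodes, num_classes] probabilities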

Train(Tensor<T>, Tensor<T>)

Trains the network on a single batch of data.

public override void Train(Tensor<T> input, Tensor<T> expectedOutput)

Parameters

input Tensor<T>

The input node features.

expectedOutput Tensor<T>

The expected output (labels).

TrainOnTask(NodeClassificationTask<T>, int, double)

Trains the model on a node classification task.

public Dictionary<string, List<double>> TrainOnTask(NodeClassificationTask<T> task, int epochs, double learningRate = 0.01)

Parameters

task NodeClassificationTask<T>

The node classification task with graph data and labels.

epochs int

Number of training epochs.

learningRate double

Learning rate for optimization.

Returns

Dictionary<string, List<double>>

Training history with loss and accuracy per epoch.

Remarks

For Beginners: Semi-supervised training is special:

  • All nodes participate in message passing: even unlabeled test nodes help propagate information.

  • Loss is computed only on labeled training nodes: weights are updated only from nodes whose correct answer is known.

  • Test nodes benefit from training nodes: the graph structure lets label information flow through the network.
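
A usage sketch, assuming task is a populated NodeClassificationTask<double>:

var history = model.TrainOnTask(task, epochs: 200, learningRate: 0.01);

// The returned dictionary maps metric names to per-epoch values
// (loss and accuracy, per the documentation above).
foreach (var metric in history)
{
    var values = metric.Value;
    Console.WriteLine($"{metric.Key}: {values[values.Count - 1]:F4} after {values.Count} epochs");
}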

UpdateParameters(Vector<T>)

Updates the parameters of all layers in the network.

public override void UpdateParameters(Vector<T> parameters)

Parameters

parameters Vector<T>

A vector containing all parameters for the network.