Class SAGAN<T>

Namespace
AiDotNet.NeuralNetworks
Assembly
AiDotNet.dll

Self-Attention GAN (SAGAN) implementation that uses self-attention mechanisms to model long-range dependencies in generated images.

For Beginners: Traditional CNNs in GANs only look at nearby pixels (local receptive fields). This works well for textures and local patterns, but struggles with global structure and long-range relationships (like making sure both eyes of a face look similar, or ensuring consistent geometric patterns).

Self-Attention solves this by letting each pixel "attend to" all other pixels, similar to how Transformers work in NLP. Think of it as:

  • CNN: "I can only see my immediate neighbors"
  • Self-Attention: "I can see the entire image and decide what's important"

Example: When generating a dog's face:

  • CNN: Might make one ear pointy and one floppy (inconsistent)
  • SAGAN: Notices both ears and makes them match (consistent)

Key innovations:

  1. Self-Attention Layers: Allow modeling of long-range dependencies
  2. Spectral Normalization: Stabilizes training for both G and D
  3. Hinge Loss: More stable than standard GAN loss
  4. Two Time-Scale Update Rule (TTUR): Different learning rates for G and D
  5. Conditional Batch Normalization: For class-conditional generation

Based on "Self-Attention Generative Adversarial Networks" by Zhang et al. (2019)

public class SAGAN<T> : NeuralNetworkBase<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable

Type Parameters

T

The numeric type for computations (e.g., double, float)

Inheritance
NeuralNetworkBase<T> ← SAGAN<T>
Implements
INeuralNetworkModel<T>
INeuralNetwork<T>
IFullModel<T, Tensor<T>, Tensor<T>>
IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>
IModelSerializer
ICheckpointableModel
IParameterizable<T, Tensor<T>, Tensor<T>>
IFeatureAware
IFeatureImportance<T>
ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>
IGradientComputable<T, Tensor<T>, Tensor<T>>
IJitCompilable<T>
IInterpretableModel<T>
IInputGradientComputable<T>
IDisposable

Constructors

SAGAN(NeuralNetworkArchitecture<T>, NeuralNetworkArchitecture<T>, int, int, int, int, int, int, int, int[]?, InputType, ILossFunction<T>?, double)

Initializes a new instance of the Self-Attention GAN.

public SAGAN(NeuralNetworkArchitecture<T> generatorArchitecture, NeuralNetworkArchitecture<T> discriminatorArchitecture, int latentSize = 128, int imageChannels = 3, int imageHeight = 64, int imageWidth = 64, int numClasses = 0, int generatorChannels = 64, int discriminatorChannels = 64, int[]? attentionLayers = null, InputType inputType = InputType.TwoDimensional, ILossFunction<T>? lossFunction = null, double initialLearningRate = 0.0001)

Parameters

generatorArchitecture NeuralNetworkArchitecture<T>

Architecture for the generator network.

discriminatorArchitecture NeuralNetworkArchitecture<T>

Architecture for the discriminator network.

latentSize int

Size of the latent vector (typically 128)

imageChannels int

Number of image channels (1 for grayscale, 3 for RGB)

imageHeight int

Height of generated images

imageWidth int

Width of generated images

numClasses int

Number of classes (0 for unconditional)

generatorChannels int

Base number of feature maps in generator (default 64)

discriminatorChannels int

Base number of feature maps in discriminator (default 64)

attentionLayers int[]

Indices of layers where self-attention is applied

inputType InputType

The type of input.

lossFunction ILossFunction<T>

Loss function for training (defaults to hinge loss)

initialLearningRate double

Initial learning rate (default 0.0001)
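
Examples

A construction sketch. How the two NeuralNetworkArchitecture<T> instances are built is not covered on this page, so they arrive as arguments here; the attentionLayers value is illustrative.

// Construction sketch; genArch and discArch are assumed to be built elsewhere.
static SAGAN<float> BuildSagan(
    NeuralNetworkArchitecture<float> genArch,
    NeuralNetworkArchitecture<float> discArch)
{
    return new SAGAN<float>(
        generatorArchitecture: genArch,
        discriminatorArchitecture: discArch,
        latentSize: 128,
        imageChannels: 3,
        imageHeight: 64,
        imageWidth: 64,
        numClasses: 10,               // class-conditional; use 0 for unconditional
        attentionLayers: new[] { 2 }  // illustrative: self-attention after layer index 2
    );
}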

Properties

AttentionLayers

Gets the positions where self-attention layers are inserted. Typically at mid-level feature maps (e.g., 32x32 or 64x64 resolution).

public int[] AttentionLayers { get; }

Property Value

int[]

Discriminator

Gets the discriminator network with self-attention layers.

public ConvolutionalNeuralNetwork<T> Discriminator { get; }

Property Value

ConvolutionalNeuralNetwork<T>

Generator

Gets the generator network with self-attention layers.

public ConvolutionalNeuralNetwork<T> Generator { get; }

Property Value

ConvolutionalNeuralNetwork<T>

LatentSize

Gets the size of the latent vector (noise input).

public int LatentSize { get; }

Property Value

int

NumClasses

Gets the number of classes for conditional generation. Set to 0 for unconditional generation.

public int NumClasses { get; }

Property Value

int

ParameterCount

Gets the total number of trainable parameters in the SAGAN.

public override int ParameterCount { get; }

Property Value

int

Remarks

This includes all parameters from both the Generator and Discriminator networks.

UseSpectralNormalization

Gets or sets whether to use spectral normalization. Spectral normalization stabilizes GAN training by constraining the Lipschitz constant of the discriminator.

public bool UseSpectralNormalization { get; set; }

Property Value

bool
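
For example (the default value is not stated on this page, so setting it explicitly is shown):

// Keep spectral normalization enabled for stable discriminator training.
sagan.UseSpectralNormalization = true;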

Methods

CreateNewInstance()

Creates a new instance of the same type as this neural network.

protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()

Returns

IFullModel<T, Tensor<T>, Tensor<T>>

A new instance of the same neural network type.

Remarks

For Beginners: This creates a blank version of the same type of neural network.

It's used internally by methods like DeepCopy and Clone to create the right type of network before copying the data into it.

DeserializeNetworkSpecificData(BinaryReader)

Deserializes network-specific data that was not covered by the general deserialization process.

protected override void DeserializeNetworkSpecificData(BinaryReader reader)

Parameters

reader BinaryReader

The BinaryReader to read the data from.

Remarks

This method is called at the end of the general deserialization process to allow derived classes to read any additional data specific to their implementation.

For Beginners: If serialization is packing a suitcase (see SerializeNetworkSpecificData below), this is unpacking its special compartment. After the main deserialization method has unpacked the common items (layers, parameters), this method allows each specific type of neural network to unpack its own unique items that were stored during serialization.

Generate(Tensor<T>, int[]?)

Generates images from specific latent codes.

public Tensor<T> Generate(Tensor<T> latentCodes, int[]? classIndices = null)

Parameters

latentCodes Tensor<T>

Latent codes to use

classIndices int[]

Optional class indices for conditional generation

Returns

Tensor<T>

Generated images tensor
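
A sketch of reusing fixed latent codes, which is handy for tracking training progress. The [batch, latentSize] shape convention and the one-label-per-code pairing for classIndices are assumptions.

// Fixed latent codes map to the same images every call, so they make
// good visual checkpoints during training.
static Tensor<float> Reproduce(SAGAN<float> sagan, Tensor<float> fixedCodes)
{
    // classIndices supplies one label per latent code (conditional models only).
    return sagan.Generate(fixedCodes, classIndices: new[] { 0, 0, 1, 1 });
}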

Generate(int, int[]?)

Generates images from random latent codes.

public Tensor<T> Generate(int numImages, int[]? classIndices = null)

Parameters

numImages int

Number of images to generate

classIndices int[]

Optional class indices for conditional generation

Returns

Tensor<T>

Generated images tensor
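
A minimal sampling sketch; the one-label-per-image convention for classIndices is an assumption.

// Sample 4 unconditional images, then 4 images of class 3.
static void Sample(SAGAN<float> sagan)
{
    Tensor<float> images = sagan.Generate(4);
    Tensor<float> classThree = sagan.Generate(4, classIndices: new[] { 3, 3, 3, 3 });
}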

GetModelMetadata()

Gets the metadata for this neural network model.

public override ModelMetadata<T> GetModelMetadata()

Returns

ModelMetadata<T>

A ModelMetadata<T> object containing information about the model.

GetParameters()

Gets all trainable parameters of the network as a single vector.

public override Vector<T> GetParameters()

Returns

Vector<T>

A vector containing all parameters of the network.

Remarks

For Beginners: Neural networks learn by adjusting their "parameters" (also called weights and biases). This method collects all those adjustable values into a single list so they can be updated during training.

InitializeLayers()

Initializes the layers of the neural network based on the architecture.

protected override void InitializeLayers()

Remarks

For Beginners: This method sets up all the layers in your neural network according to the architecture you've defined. It's like assembling the parts of your network before you can use it.

Predict(Tensor<T>)

Makes a prediction using the neural network.

public override Tensor<T> Predict(Tensor<T> input)

Parameters

input Tensor<T>

The input data to process.

Returns

Tensor<T>

The network's prediction.

Remarks

For Beginners: This is the main method you'll use to get results from your trained neural network. You provide some input data (like an image or text), and the network processes it through all its layers to produce an output (like a classification or prediction).
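
A minimal sketch of the calling pattern. Which network Predict routes the input through for a GAN (generator vs. discriminator) is not specified on this page, so only the pattern is shown.

// Forward pass in inference mode (see SetTrainingMode below).
static Tensor<float> Forward(SAGAN<float> sagan, Tensor<float> input)
{
    sagan.SetTrainingMode(false);
    return sagan.Predict(input);
}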

SerializeNetworkSpecificData(BinaryWriter)

Serializes network-specific data that is not covered by the general serialization process.

protected override void SerializeNetworkSpecificData(BinaryWriter writer)

Parameters

writer BinaryWriter

The BinaryWriter to write the data to.

Remarks

This method is called at the end of the general serialization process to allow derived classes to write any additional data specific to their implementation.

For Beginners: Think of this as packing a special compartment in your suitcase. While the main serialization method packs the common items (layers, parameters), this method allows each specific type of neural network to pack its own unique items that other networks might not have.

SetTrainingMode(bool)

Sets the neural network to either training or inference mode.

public override void SetTrainingMode(bool isTraining)

Parameters

isTraining bool

True to enable training mode; false to enable inference mode.

Remarks

For Beginners: Neural networks behave differently during training versus when making predictions.

When in training mode (isTraining = true):

  • The network keeps track of intermediate calculations needed for learning
  • Certain layers like Dropout and BatchNormalization behave differently
  • The network uses more memory but can learn from its mistakes

When in inference/prediction mode (isTraining = false):

  • The network only performs forward calculations
  • It uses less memory and runs faster
  • It cannot learn or update its parameters

Think of it like the difference between taking a practice test (training mode) where you can check your answers and learn from mistakes, versus taking the actual exam (inference mode) where you just give your best answers based on what you've already learned.
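
A typical toggle pattern, sketched:

// Train first, then switch to inference mode before sampling.
static Tensor<float> SampleAfterTraining(SAGAN<float> sagan)
{
    sagan.SetTrainingMode(true);
    // ... run training steps here ...
    sagan.SetTrainingMode(false);  // faster, lower-memory forward passes
    return sagan.Generate(4);
}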

Train(Tensor<T>, Tensor<T>)

Trains the neural network on a single input-output pair.

public override void Train(Tensor<T> input, Tensor<T> expectedOutput)

Parameters

input Tensor<T>

The input data.

expectedOutput Tensor<T>

The expected output for the given input.

Remarks

This method performs one training step on the neural network using the provided input and expected output. It updates the network's parameters to reduce the error between the network's prediction and the expected output.

For Beginners: This is how your neural network learns. You provide:

  • An input (what the network should process)
  • The expected output (what the correct answer should be)

The network then:

  1. Makes a prediction based on the input
  2. Compares its prediction to the expected output
  3. Calculates how wrong it was (the loss)
  4. Adjusts its internal values to do better next time

After training, you can get the loss value using the GetLastLoss() method to see how well the network is learning.
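
A single-step sketch; GetLastLoss() is mentioned above, but its exact return type is not shown here, so var is used.

// One supervised training step followed by a loss check (requires using System).
static void TrainOnce(SAGAN<float> sagan, Tensor<float> input, Tensor<float> expected)
{
    sagan.Train(input, expected);
    var loss = sagan.GetLastLoss();  // lower is better
    Console.WriteLine($"loss: {loss}");
}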

TrainStep(Tensor<T>, int, int[]?)

Performs a single training step on a batch of real images. Uses hinge loss for improved stability.

public (T discriminatorLoss, T generatorLoss) TrainStep(Tensor<T> realImages, int batchSize, int[]? realLabels = null)

Parameters

realImages Tensor<T>

Batch of real images

batchSize int

Number of images in the batch

realLabels int[]

Optional class labels for conditional training

Returns

(T discriminatorLoss, T generatorLoss)

Tuple of (discriminator loss, generator loss)
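
A sketch of one epoch driven by TrainStep. How real-image batches are loaded is outside this page's scope, so they arrive here as a preloaded list.

// One epoch of adversarial training (requires using System and
// System.Collections.Generic).
static void TrainEpoch(SAGAN<float> sagan, IReadOnlyList<Tensor<float>> batches, int batchSize)
{
    foreach (Tensor<float> realImages in batches)
    {
        // Omit realLabels for unconditional training.
        var (dLoss, gLoss) = sagan.TrainStep(realImages, batchSize);
        Console.WriteLine($"D loss: {dLoss}, G loss: {gLoss}");
    }
}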

UpdateParameters(Vector<T>)

Updates the network's parameters with new values.

public override void UpdateParameters(Vector<T> parameters)

Parameters

parameters Vector<T>

The new parameter values to set.

Remarks

For Beginners: During training, a neural network's internal values (parameters) get adjusted to improve its performance. This method allows you to update all those values at once by providing a complete set of new parameters.

This is typically used by optimization algorithms that calculate better parameter values based on training data.
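
A snapshot-and-restore sketch built on GetParameters() and UpdateParameters(); in practice an optimizer would compute the new vector instead of restoring an old one.

// Save the current parameters, then restore them later (e.g., to roll back
// a diverged training run).
static Vector<float> Snapshot(SAGAN<float> sagan) => sagan.GetParameters();

static void Restore(SAGAN<float> sagan, Vector<float> saved) => sagan.UpdateParameters(saved);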