Class HopfieldNetwork<T>

Namespace: AiDotNet.NeuralNetworks
Assembly: AiDotNet.dll

Represents a Hopfield Network, a recurrent neural network designed for pattern storage and retrieval.

public class HopfieldNetwork<T> : NeuralNetworkBase<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable

Type Parameters

T

The numeric type used for calculations, typically float or double.

Inheritance
NeuralNetworkBase<T>
HopfieldNetwork<T>

Implements
INeuralNetworkModel<T>
INeuralNetwork<T>
IFullModel<T, Tensor<T>, Tensor<T>>
IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>
IModelSerializer
ICheckpointableModel
IParameterizable<T, Tensor<T>, Tensor<T>>
IFeatureAware
IFeatureImportance<T>
ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>
IGradientComputable<T, Tensor<T>, Tensor<T>>
IJitCompilable<T>
IInterpretableModel<T>
IInputGradientComputable<T>
IDisposable

Remarks

A Hopfield Network is a type of recurrent artificial neural network that serves as a content-addressable memory system. It can store patterns and retrieve them based on partial or noisy inputs. The network consists of a single layer of fully connected neurons, with each neuron connected to all others except itself. Hopfield networks are particularly useful for pattern recognition, image restoration, and optimization problems.

For Beginners: A Hopfield Network is like a special memory system that can store and recall patterns.

Think of it like a photo album with a magical property:

  • You can store a collection of complete photos in the album
  • Later, if you show the album a damaged or partial photo, it can recall the complete original version

For example:

  • You might store clear images of the digits 0-9
  • If you later show the network a smudged or partially erased "7", it can recall the clean version

Hopfield networks work differently from most neural networks:

  • They don't have separate input and output layers
  • All neurons are connected to each other (but not to themselves)
  • They use a special learning rule based on correlations between pattern elements

These networks are useful for tasks like:

  • Image reconstruction
  • Pattern recognition
  • Noise filtering
  • Solving certain optimization problems
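
Example

A minimal end-to-end sketch of the store-and-recall workflow described above, using double as the numeric type. The NeuralNetworkArchitecture<double> and Vector<double> constructor calls are illustrative assumptions, not documented signatures; adjust them to the constructors your version of the library exposes.

// Store two small bipolar (+1/-1) patterns and recall one from a corrupted copy.
// The architecture and vector construction below are assumptions for illustration.
var architecture = new NeuralNetworkArchitecture<double>(inputSize: 4, outputSize: 4);
var network = new HopfieldNetwork<double>(architecture, size: 4);

var patterns = new List<Vector<double>>
{
    new Vector<double>(new double[] { +1, -1, +1, -1 }),
    new Vector<double>(new double[] { +1, +1, -1, -1 })
};
network.Train(patterns);                 // one-pass Hebbian learning

var noisy = new Vector<double>(new double[] { +1, -1, -1, -1 });  // one element flipped
Vector<double> recalled = network.Recall(noisy);                  // settles to the nearest stored pattern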

Constructors

HopfieldNetwork(NeuralNetworkArchitecture<T>, int, ILossFunction<T>?)

Initializes a new instance of the HopfieldNetwork<T> class with the specified architecture and size.

public HopfieldNetwork(NeuralNetworkArchitecture<T> architecture, int size, ILossFunction<T>? lossFunction = null)

Parameters

architecture NeuralNetworkArchitecture<T>

The neural network architecture providing base configuration.

size int

The size of the network, determining the number of neurons.

lossFunction ILossFunction<T>

The optional loss function; defaults to null when not provided.

Remarks

This constructor creates a Hopfield network with the specified size, initializing the weight matrix and setting up a sign activation function. The input and output sizes are both set to the specified size, as Hopfield networks have a single layer that serves as both input and output.

For Beginners: This sets up a new Hopfield network with a specific number of neurons.

When creating a Hopfield network:

  • You specify how many neurons you need (the size parameter)
  • This determines how large a pattern you can store (e.g., how many pixels in an image)
  • The network automatically sets up a weight matrix initialized to zeros
  • It uses a special "sign" activation function that outputs either +1 or -1

For example, if you're creating a network to store 8x8 pixel images, you would set the size to 64 (8x8=64).
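
Example

A short construction sketch for the 8x8 (64-pixel) case mentioned above. The NeuralNetworkArchitecture<double> constructor arguments are assumptions for illustration.

// 64 neurons, one per pixel of an 8x8 binary image.
var architecture = new NeuralNetworkArchitecture<double>(inputSize: 64, outputSize: 64);
var network = new HopfieldNetwork<double>(architecture, size: 64);
// The loss function is optional and defaults to null:
// var network = new HopfieldNetwork<double>(architecture, size: 64, lossFunction: null);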

Methods

CalculateEnergy(Vector<T>)

Calculates the energy of the current state of the Hopfield network.

public T CalculateEnergy(Vector<T> state)

Parameters

state Vector<T>

The state vector to calculate energy for.

Returns

T

The energy value of the given state.

Remarks

This method calculates the energy function of the Hopfield network for a given state. The energy function is defined as E = -0.5 * sum(sum(w_ij * s_i * s_j)) - sum(theta_i * s_i), where w_ij are the weights, s_i and s_j are the states of neurons i and j, and theta_i is the bias for neuron i (typically 0 in standard Hopfield networks). Lower energy values correspond to more stable states, and the network naturally evolves toward states with minimum energy.

For Beginners: This calculates how "stable" a pattern is in the network.

Think of energy like a ball rolling on a landscape:

  • Lower energy = valleys where the ball comes to rest (stable patterns)
  • Higher energy = hills that the ball rolls away from (unstable patterns)

The Hopfield network naturally moves toward lower energy states. When you use the Recall method, the network is essentially rolling downhill to the nearest valley (stable pattern) from your starting point (input pattern).

The stored patterns correspond to the deepest valleys in this energy landscape.
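
Example

A small sketch comparing the energy of a stored pattern with that of a corrupted copy, assuming network is an already-trained HopfieldNetwork<double>; the Vector<double> construction is an illustrative assumption. A stored pattern should sit at or near a local minimum, so its energy should not exceed that of the corrupted version.

// E = -0.5 * sum(sum(w_ij * s_i * s_j)) - sum(theta_i * s_i), with theta_i = 0 here.
var stored = new Vector<double>(new double[] { +1, -1, +1, -1 });
var noisy  = new Vector<double>(new double[] { +1, -1, -1, -1 });

double storedEnergy = network.CalculateEnergy(stored);  // expected: lower (more stable)
double noisyEnergy  = network.CalculateEnergy(noisy);   // expected: higher (less stable)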

CreateNewInstance()

Creates a new instance of the Hopfield Network with the same architecture and configuration.

protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()

Returns

IFullModel<T, Tensor<T>, Tensor<T>>

A new Hopfield Network instance with the same architecture and size.

Remarks

This method creates a new instance of the Hopfield Network with the same architecture and size as the current instance. It's used in scenarios where a fresh copy of the model is needed while maintaining the same configuration.

For Beginners: This method creates a brand new copy of the network with the same setup.

Think of it like creating a blank version of the network:

  • The new network has the same size (number of neurons)
  • It has the same architecture (configuration)
  • But it starts with no stored patterns - it's a fresh network
  • The weight matrix is initialized to zeros

This is useful when you want to:

  • Start with a clean network with the same structure
  • Train it on different patterns
  • Compare results between different training approaches

DeserializeNetworkSpecificData(BinaryReader)

Deserializes Hopfield network-specific data from a binary reader.

protected override void DeserializeNetworkSpecificData(BinaryReader reader)

Parameters

reader BinaryReader

The BinaryReader to read the data from.

Remarks

This method reads the Hopfield network's specific data from a binary stream. It retrieves the network size and the weight matrix. After reading this data, the Hopfield network's state is fully restored to what it was when saved.

For Beginners: This method loads a previously saved Hopfield network.

It's like restoring the network from a snapshot, retrieving:

  • The size of the network
  • All the connection weights that were learned during training

This allows you to use a previously trained network without having to train it again on the same patterns.

GetModelMetadata()

Gets metadata about the Hopfield network model.

public override ModelMetadata<T> GetModelMetadata()

Returns

ModelMetadata<T>

A ModelMetadata<T> object containing information about the model.

Remarks

This method returns metadata about the Hopfield network, including its model type, size, and serialized weights. This information is useful for model management and serialization.

For Beginners: This method provides a summary of the Hopfield network.

The metadata includes:

  • The type of model (Hopfield Network)
  • The size of the network (number of neurons)
  • The current weight matrix (connection strengths)
  • Serialized data that can be used to save and reload the network

This information is useful when:

  • Managing multiple models
  • Saving the network for later use
  • Analyzing the network's properties
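
Example

A one-line usage sketch. The individual members exposed by ModelMetadata<T> are not listed on this page, so the comment only restates what the remarks above say the metadata contains.

ModelMetadata<double> metadata = network.GetModelMetadata();
// Per the remarks, this carries the model type, the network size (neuron count),
// and serialized weight data that can be used to save and reload the network.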

GetNetworkCapacity()

Gets the maximum number of patterns that can be reliably stored in the network.

public int GetNetworkCapacity()

Returns

int

The estimated capacity of the network.

Remarks

This method calculates the theoretical capacity of the Hopfield network based on its size. The general rule of thumb is that a Hopfield network can reliably store approximately N/(4*ln(N)) patterns, where N is the number of neurons and ln is the natural logarithm. This is a theoretical upper bound; in practice, the capacity may be lower depending on how similar the stored patterns are.

For Beginners: This tells you how many different patterns you can reliably store in the network.

Hopfield networks have limited memory capacity:

  • Too many patterns will cause interference
  • This leads to incorrect recall and "spurious patterns"
  • The capacity depends on the number of neurons

The rule of thumb is that a network with N neurons can store approximately N/(4*ln(N)) patterns, where ln is the natural logarithm. For example, a network with 100 neurons can reliably store only about 5 patterns, not 100 as you might expect.

This limited capacity is important to keep in mind when designing applications.
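
Example

A quick check of the rule of thumb against the library's own estimate, assuming the natural logarithm (which matches the "about 5 patterns for 100 neurons" figure above) and that network is a HopfieldNetwork<double> with 100 neurons.

int n = 100;
int ruleOfThumb = (int)(n / (4 * System.Math.Log(n)));  // 100 / (4 * 4.605...) ≈ 5
int reported    = network.GetNetworkCapacity();         // the library's estimate for this network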

InitializeLayers()

Initializes the layers of the neural network.

protected override void InitializeLayers()

Remarks

This method is required by the NeuralNetworkBase class but is empty for Hopfield networks, as they don't use separate layers like feedforward neural networks. Instead, a Hopfield network consists of a single fully-connected layer of neurons.

For Beginners: This method is empty because Hopfield networks don't have layers.

Most neural networks have distinct layers like:

  • Input layer
  • Hidden layers
  • Output layer

But a Hopfield network:

  • Has just a single layer of neurons
  • Each neuron connects to all others
  • The same neurons act as both input and output

This method exists only because the base neural network class requires it, but it doesn't need to do anything for a Hopfield network.

Predict(Tensor<T>)

Converts an input tensor to a vector, performs pattern recall, and converts back to a tensor.

public override Tensor<T> Predict(Tensor<T> input)

Parameters

input Tensor<T>

The input tensor containing the pattern to recall.

Returns

Tensor<T>

A tensor containing the recalled pattern.

Remarks

This method implements the prediction functionality for the Hopfield network. It converts the input tensor to a vector, performs the recall operation to retrieve the stored pattern most similar to the input, and then converts the result back to a tensor. This allows the Hopfield network to be used within the broader neural network framework while maintaining its unique recall-based approach.

For Beginners: This is how the Hopfield network makes predictions.

When you provide an input pattern (possibly noisy or incomplete):

  1. The method first converts it to the right format for the network
  2. It then runs the recall process to find the closest stored pattern
  3. Finally, it converts the result back to the expected output format

This allows the Hopfield network to be used like other neural networks where you can simply call Predict() to get a result, even though the underlying mechanism is quite different.
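
Example

A usage sketch wrapped in a helper method so the input tensor can be supplied by the caller; how a Tensor<double> is constructed from raw values depends on the library's Tensor<T> API and is not shown here.

// Cleans up a possibly noisy or incomplete pattern by recalling the closest stored one.
static Tensor<double> CleanUp(HopfieldNetwork<double> network, Tensor<double> noisyInput)
{
    // Predict converts the tensor to a vector, runs the recall process,
    // and converts the stabilized result back to a tensor.
    return network.Predict(noisyInput);
}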

Recall(Vector<T>, int)

Performs pattern recall to retrieve a complete pattern from a partial or noisy input.

public Vector<T> Recall(Vector<T> input, int maxIterations = 100)

Parameters

input Vector<T>

The input pattern to use as a starting point for recall.

maxIterations int

The maximum number of iterations to perform during recall. Default is 100.

Returns

Vector<T>

The recalled pattern after the network reaches stability or the maximum number of iterations.

Remarks

This method implements the asynchronous update rule for Hopfield networks. It starts with the provided input pattern and iteratively updates each neuron based on the weighted sum of inputs from all other neurons. The process continues until either the pattern stabilizes (no more changes occur) or the maximum number of iterations is reached. The stable pattern represents the stored memory closest to the input pattern.

For Beginners: This is like showing the network a partial or damaged pattern and asking it to recall the complete version.

The recall process works like this:

  1. Start with your input pattern (which might be incomplete or noisy)
  2. For each element in the pattern:
    • Calculate how it's influenced by all other elements using the connection weights
    • Update its state to either "on" (+1) or "off" (-1) based on those influences
  3. Repeat this process until:
    • The pattern stops changing (it's stable), OR
    • You reach the maximum number of allowed iterations

For example, if you trained the network on images of letters and show it a smudged "A", this process would gradually clean up the image until it resembles a complete "A".

This recall process doesn't always find the exact pattern that was stored - it finds the closest stable pattern according to the network's energy function.
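
Example

A sketch of recall with an explicit iteration cap, plus an optional energy check; it assumes network is already trained and partial is a bipolar (+1/-1) Vector<double> of the network's size.

static Vector<double> Complete(HopfieldNetwork<double> network, Vector<double> partial)
{
    // Neurons are updated until the state stops changing or 50 iterations elapse.
    Vector<double> recalled = network.Recall(partial, maxIterations: 50);

    // Optional sanity check: recall moves "downhill", so the recalled state
    // should not have higher energy than the starting state.
    double before = network.CalculateEnergy(partial);
    double after  = network.CalculateEnergy(recalled);
    System.Diagnostics.Debug.Assert(after <= before);

    return recalled;
}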

SerializeNetworkSpecificData(BinaryWriter)

Serializes Hopfield network-specific data to a binary writer.

protected override void SerializeNetworkSpecificData(BinaryWriter writer)

Parameters

writer BinaryWriter

The BinaryWriter to write the data to.

Remarks

This method writes the Hopfield network's specific data to a binary stream. It includes the network size and the weight matrix. This data is needed to reconstruct the Hopfield network when deserializing.

For Beginners: This method saves the Hopfield network to a file.

It's like taking a snapshot of the network's current state, including:

  • The size of the network (how many neurons it has)
  • All the connection weights between neurons

This allows you to save a trained network and reload it later, without having to train it again from scratch.

Train(Tensor<T>, Tensor<T>)

Trains the Hopfield network using the provided input patterns.

public override void Train(Tensor<T> input, Tensor<T> expectedOutput)

Parameters

input Tensor<T>

A tensor containing the patterns to store.

expectedOutput Tensor<T>

This parameter is ignored in Hopfield networks.

Remarks

This method adapts the standard neural network training interface to the Hopfield network. It extracts patterns from the input tensor and calls the Hebbian learning-based Train method. The expectedOutput parameter is ignored since Hopfield networks are autoassociative and use the same patterns for both input and output.

For Beginners: This method allows the Hopfield network to learn patterns.

Unlike traditional neural networks that learn mappings from inputs to outputs:

  • Hopfield networks learn to associate patterns with themselves
  • The input tensor contains the patterns to be stored
  • The expectedOutput parameter is ignored (not needed)

This method:

  1. Extracts individual patterns from the input tensor
  2. Converts them to the right format
  3. Calls the core training method that implements Hebbian learning

After training, the network will be able to recognize and complete these patterns.
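
Example

A sketch of training through the generic tensor-based overload. Because the expected-output argument is ignored for this autoassociative network, the sketch simply passes the input tensor twice to make that explicit.

static void StorePatterns(HopfieldNetwork<double> network, Tensor<double> patterns)
{
    // The second argument is ignored; Hopfield networks associate patterns with themselves.
    network.Train(patterns, patterns);
}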

Train(List<Vector<T>>)

Trains the Hopfield network on a set of patterns.

public void Train(List<Vector<T>> patterns)

Parameters

patterns List<Vector<T>>

A list of patterns to store in the network.

Remarks

This method trains the Hopfield network using the Hebbian learning rule, which adjusts the weights based on the correlation between pattern elements. For each pattern, the method updates the weight matrix by adding the product of each pair of elements. After processing all patterns, the weights are normalized by dividing by the number of patterns to improve recall performance.

For Beginners: This teaches the network to remember a set of patterns.

During training:

  • For each pattern you want to store (like an image)
  • The network looks at each pair of elements in the pattern
  • If two elements are both active (+1) or both inactive (-1), it strengthens their connection
  • If one is active and one is inactive, it weakens their connection

This follows a principle similar to "neurons that fire together, wire together."

After training on all patterns, the connections are adjusted (normalized) to ensure better recall. This process is different from training in most neural networks because:

  • It happens in one pass, not through repeated iterations
  • It doesn't use backpropagation or gradients
  • It has limited capacity (roughly 0.14 * the number of neurons by one common estimate; see GetNetworkCapacity for the stricter N/(4*ln(N)) figure used elsewhere on this page)
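
Example

A conceptual sketch of the Hebbian rule described above, written with plain arrays rather than the library's internal types; it is not the library's implementation, just an illustration of the weight update and the final normalization.

// Builds a Hopfield-style weight matrix from bipolar (+1/-1) patterns.
static double[,] HebbianWeights(IReadOnlyList<double[]> patterns, int size)
{
    var weights = new double[size, size];

    foreach (var pattern in patterns)              // single pass over the stored patterns
    {
        for (int i = 0; i < size; i++)
            for (int j = 0; j < size; j++)
                if (i != j)                        // no self-connections
                    weights[i, j] += pattern[i] * pattern[j];  // "fire together, wire together"
    }

    for (int i = 0; i < size; i++)                 // normalize by the number of patterns
        for (int j = 0; j < size; j++)
            weights[i, j] /= patterns.Count;

    return weights;
}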

UpdateParameters(Vector<T>)

Not implemented for Hopfield networks, as they don't use gradient-based parameter updates.

public override void UpdateParameters(Vector<T> parameters)

Parameters

parameters Vector<T>

A vector containing parameters to update.

Remarks

This method is required by the NeuralNetworkBase class but is not implemented for Hopfield networks. Hopfield networks use a different training approach (Hebbian learning) that directly sets the weights rather than using gradient-based updates as in most neural networks.

For Beginners: This method is not used in Hopfield networks.

Most neural networks learn by:

  • Making small adjustments to weights based on error gradients
  • Updating parameters gradually over many training iterations

Hopfield networks are different:

  • They learn using the Hebbian rule ("neurons that fire together, wire together")
  • Training happens in one pass through the Train method
  • They don't use gradient-based updates at all

This method exists only because the base neural network class requires it; calling it throws a NotImplementedException.

Exceptions

NotImplementedException

Always thrown, as this method is not applicable to Hopfield networks.
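
Example

A small sketch showing the expected behavior if this method is called; use one of the Train overloads to change the network's weights instead.

static void TryGradientUpdate(HopfieldNetwork<double> network, Vector<double> parameters)
{
    try
    {
        network.UpdateParameters(parameters);
    }
    catch (NotImplementedException)
    {
        // Expected: Hopfield networks learn with Hebbian training, not gradient updates.
    }
}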