Class HopeNetwork<T>

Namespace
AiDotNet.NeuralNetworks
Assembly
AiDotNet.dll

Hope architecture - a self-modifying recurrent neural network variant of Titans with unbounded levels of in-context learning, and the core innovation of Google's Nested Learning paradigm.

public class HopeNetwork<T> : NeuralNetworkBase<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable

Type Parameters

T

The numeric element type used for computations (for example, float or double).

Inheritance
NeuralNetworkBase<T>
HopeNetwork<T>
Implements
IFullModel<T, Tensor<T>, Tensor<T>>
IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>
IParameterizable<T, Tensor<T>, Tensor<T>>
ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>
IGradientComputable<T, Tensor<T>, Tensor<T>>

Constructors

HopeNetwork(NeuralNetworkArchitecture<T>, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>?, ILossFunction<T>?, int, int, int, int)

public HopeNetwork(NeuralNetworkArchitecture<T> architecture, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>? optimizer = null, ILossFunction<T>? lossFunction = null, int hiddenDim = 256, int numCMSLevels = 4, int numRecurrentLayers = 3, int inContextLearningLevels = 5)

Parameters

architecture NeuralNetworkArchitecture<T>
optimizer IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>
lossFunction ILossFunction<T>
hiddenDim int
numCMSLevels int
numRecurrentLayers int
inContextLearningLevels int
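
A minimal construction sketch. The AiDotNet types are taken from the signature above; the NeuralNetworkArchitecture<T> configuration shown is an assumption, as its constructor is not documented here:

```csharp
// Assumed: NeuralNetworkArchitecture<double> construction details.
var architecture = new NeuralNetworkArchitecture<double>(/* input/output shape config */);

// Defaults per the signature: hiddenDim = 256, numCMSLevels = 4,
// numRecurrentLayers = 3, inContextLearningLevels = 5. Passing null for
// optimizer and lossFunction presumably selects library defaults.
var hope = new HopeNetwork<double>(
    architecture,
    optimizer: null,
    lossFunction: null,
    hiddenDim: 512,
    numCMSLevels: 4,
    numRecurrentLayers: 3,
    inContextLearningLevels: 5);
```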

Properties

AdaptationStep

Gets the adaptation step count.

public int AdaptationStep { get; }

Property Value

int

InContextLearningLevels

Gets the number of in-context learning levels (unbounded in theory, bounded in practice).

public int InContextLearningLevels { get; }

Property Value

int

SupportsTraining

Indicates whether the network supports training. Hope always supports training.

public override bool SupportsTraining { get; }

Property Value

bool

Methods

AddOutputLayer(int, ActivationFunction)

Adds an output layer to the Hope network.

public void AddOutputLayer(int outputDim, ActivationFunction activation = ActivationFunction.Linear)

Parameters

outputDim int
activation ActivationFunction

Backward(Tensor<T>)

Performs a backward pass through the Hope architecture. Propagates gradients through recurrent layers, context flow, and CMS blocks.

public Tensor<T> Backward(Tensor<T> outputGradient)

Parameters

outputGradient Tensor<T>

Returns

Tensor<T>

ConsolidateMemory()

Consolidates memories across all CMS blocks. Should be called periodically during training.

public void ConsolidateMemory()
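
Since consolidation should be called periodically during training, a common pattern is to invoke it on a fixed step interval. The interval below is a hypothetical choice, not a library recommendation:

```csharp
// Illustrative: consolidate CMS memories every 100 training steps.
const int consolidateEvery = 100; // hypothetical interval
for (int step = 0; step < inputs.Length; step++)
{
    hope.Train(inputs[step], targets[step]);
    if ((step + 1) % consolidateEvery == 0)
        hope.ConsolidateMemory();
}
```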

CreateNewInstance()

Creates a new instance of HopeNetwork with the same architecture.

protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()

Returns

IFullModel<T, Tensor<T>, Tensor<T>>

DeserializeNetworkSpecificData(BinaryReader)

Deserializes Hope-specific data for model restoration.

protected override void DeserializeNetworkSpecificData(BinaryReader reader)

Parameters

reader BinaryReader

Forward(Tensor<T>)

Performs a forward pass through the Hope architecture. Processes input through CMS blocks, context flow, and recurrent layers.

public Tensor<T> Forward(Tensor<T> input)

Parameters

input Tensor<T>

Returns

Tensor<T>
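
A sketch of a manual forward/backward pass using the two methods above. The loss-gradient helper is hypothetical; in normal use, Train(Tensor<T>, Tensor<T>) handles this internally:

```csharp
// Forward pass produces the network output.
Tensor<double> output = hope.Forward(input);

// ComputeLossGradient is a hypothetical helper: the gradient of the loss
// with respect to the output must come from your loss function.
Tensor<double> outputGradient = ComputeLossGradient(output, target);

// Backward propagates through recurrent layers, context flow, and CMS
// blocks, returning the gradient with respect to the input.
Tensor<double> inputGradient = hope.Backward(outputGradient);
```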

GetAssociativeMemory()

Gets the associative memory system.

public IAssociativeMemory<T> GetAssociativeMemory()

Returns

IAssociativeMemory<T>

GetCMSBlocks()

Gets the CMS blocks (for inspection/debugging).

public ContinuumMemorySystemLayer<T>[] GetCMSBlocks()

Returns

ContinuumMemorySystemLayer<T>[]

GetContextFlow()

Gets the context flow mechanism.

public IContextFlow<T> GetContextFlow()

Returns

IContextFlow<T>

GetMetaState()

Gets the current meta-state (for inspection/debugging).

public Vector<T>? GetMetaState()

Returns

Vector<T>

The current meta-state; may be null (the return type is nullable).

GetModelMetadata()

Gets metadata about the model (required by NeuralNetworkBase).

public override ModelMetadata<T> GetModelMetadata()

Returns

ModelMetadata<T>

InitializeLayers()

Initializes the layers of the neural network based on the architecture.

protected override void InitializeLayers()

Remarks

For Beginners: This method sets up all the layers in your neural network according to the architecture you've defined. It's like assembling the parts of your network before you can use it.

Predict(Tensor<T>)

Makes a prediction on the given input (required by NeuralNetworkBase). For Hope, this is equivalent to Forward pass.

public override Tensor<T> Predict(Tensor<T> input)

Parameters

input Tensor<T>

Returns

Tensor<T>

ResetMemory()

Resets all memory in CMS blocks and meta-state.

public void ResetMemory()

ResetRecurrentState()

Resets recurrent layer states.

public void ResetRecurrentState()

ResetState()

Resets the state of the network (required by NeuralNetworkBase).

public override void ResetState()

SerializeNetworkSpecificData(BinaryWriter)

Serializes Hope-specific data for model persistence.

protected override void SerializeNetworkSpecificData(BinaryWriter writer)

Parameters

writer BinaryWriter

SetSelfModificationRate(T)

Sets the self-modification rate for self-referential optimization.

public void SetSelfModificationRate(T rate)

Parameters

rate T
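
An illustrative call, assuming T is double. The valid range and scale of the rate are not documented here, so the value below is an assumption:

```csharp
// Assumed scale: a small rate for conservative self-referential updates.
hope.SetSelfModificationRate(0.01);
```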

Train(Tensor<T>, Tensor<T>)

Trains the network on a single input-output pair (required by NeuralNetworkBase).

public override void Train(Tensor<T> input, Tensor<T> expectedOutput)

Parameters

input Tensor<T>
expectedOutput Tensor<T>
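
A sketch of single-pair training followed by inference, using the documented Train, Predict, and ResetState members. Tensor construction details are assumed:

```csharp
// Train on one input/target pair, then run inference.
hope.Train(input, expectedOutput);
Tensor<double> prediction = hope.Predict(input);

// Between independent sequences, clear recurrent state and memory so
// context from one sequence does not leak into the next.
hope.ResetState();
```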

UpdateParameters(Vector<T>)

Updates all parameters in the network (required by NeuralNetworkBase). Distributes parameters across all CMS blocks and recurrent layers.

public override void UpdateParameters(Vector<T> parameters)

Parameters

parameters Vector<T>