Class HopeNetwork<T>
- Namespace: AiDotNet.NeuralNetworks
- Assembly: AiDotNet.dll
The Hope architecture: a self-modifying recurrent neural network variant of Titans with unbounded levels of in-context learning, and the core innovation of Google's Nested Learning paradigm.
public class HopeNetwork<T> : NeuralNetworkBase<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable
Type Parameters
- T: The numeric type.
- Inheritance: NeuralNetworkBase<T> → HopeNetwork<T>
Constructors
HopeNetwork(NeuralNetworkArchitecture<T>, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>?, ILossFunction<T>?, int, int, int, int)
public HopeNetwork(NeuralNetworkArchitecture<T> architecture, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>? optimizer = null, ILossFunction<T>? lossFunction = null, int hiddenDim = 256, int numCMSLevels = 4, int numRecurrentLayers = 3, int inContextLearningLevels = 5)
Parameters
- architecture: NeuralNetworkArchitecture<T>
- optimizer: IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>
- lossFunction: ILossFunction<T>
- hiddenDim: int
- numCMSLevels: int
- numRecurrentLayers: int
- inContextLearningLevels: int
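A minimal construction sketch is shown below. The architecture object is assumed to be built elsewhere (its setup is not covered on this page), and the named arguments simply restate the documented defaults.

```csharp
using AiDotNet.NeuralNetworks;

// The architecture is assumed to be configured elsewhere; BuildArchitecture()
// is a hypothetical helper, not part of the AiDotNet API.
NeuralNetworkArchitecture<double> architecture = BuildArchitecture();

// Optimizer and loss function are left null so the network falls back to its
// internal defaults; the remaining arguments restate the documented defaults.
var hope = new HopeNetwork<double>(
    architecture,
    optimizer: null,
    lossFunction: null,
    hiddenDim: 256,
    numCMSLevels: 4,
    numRecurrentLayers: 3,
    inContextLearningLevels: 5);
```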
Properties
AdaptationStep
Gets the adaptation step count.
public int AdaptationStep { get; }
Property Value
- int
InContextLearningLevels
Gets the number of in-context learning levels (unbounded in theory, bounded in practice).
public int InContextLearningLevels { get; }
Property Value
- int
SupportsTraining
Indicates whether the network supports training. Hope always supports training.
public override bool SupportsTraining { get; }
Property Value
- bool
Methods
AddOutputLayer(int, ActivationFunction)
Adds an output layer to the Hope network.
public void AddOutputLayer(int outputDim, ActivationFunction activation = ActivationFunction.Linear)
Parameters
- outputDim: int
- activation: ActivationFunction
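A short usage sketch, assuming a HopeNetwork<double> instance named hope constructed as above; the output width of 10 is an arbitrary illustration.

```csharp
// Append a 10-unit output layer with the documented default (linear) activation.
hope.AddOutputLayer(outputDim: 10, activation: ActivationFunction.Linear);
```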
Backward(Tensor<T>)
Performs a backward pass through the Hope architecture. Propagates gradients through recurrent layers, context flow, and CMS blocks.
public Tensor<T> Backward(Tensor<T> outputGradient)
Parameters
- outputGradient: Tensor<T>
Returns
- Tensor<T>
ConsolidateMemory()
Consolidates memories across all CMS blocks. Should be called periodically during training.
public void ConsolidateMemory()
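A sketch of calling ConsolidateMemory periodically inside a training loop; the trainingPairs collection and the 100-step interval are assumptions made for illustration.

```csharp
// trainingPairs is assumed to be an IEnumerable<(Tensor<double> Input, Tensor<double> Target)>
// prepared by the caller; the 100-step interval is an arbitrary choice.
int step = 0;
foreach (var (input, target) in trainingPairs)
{
    hope.Train(input, target);

    // Periodically consolidate memories across all CMS blocks.
    if (++step % 100 == 0)
    {
        hope.ConsolidateMemory();
    }
}
```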
CreateNewInstance()
Creates a new instance of HopeNetwork with the same architecture.
protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()
Returns
- IFullModel<T, Tensor<T>, Tensor<T>>
DeserializeNetworkSpecificData(BinaryReader)
Deserializes Hope-specific data for model restoration.
protected override void DeserializeNetworkSpecificData(BinaryReader reader)
Parameters
- reader: BinaryReader
Forward(Tensor<T>)
Performs a forward pass through the Hope architecture. Processes input through CMS blocks, context flow, and recurrent layers.
public Tensor<T> Forward(Tensor<T> input)
Parameters
- input: Tensor<T>
Returns
- Tensor<T>
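A sketch of a manual forward/backward pass that pairs Forward(Tensor<T>) with Backward(Tensor<T>); ComputeLossGradient is a hypothetical helper standing in for whatever loss-derivative computation the caller uses.

```csharp
// Run the input through CMS blocks, context flow, and recurrent layers.
Tensor<double> prediction = hope.Forward(input);

// ComputeLossGradient is a hypothetical helper returning dLoss/dPrediction.
Tensor<double> outputGradient = ComputeLossGradient(prediction, target);

// Propagate gradients back through the Hope architecture.
Tensor<double> inputGradient = hope.Backward(outputGradient);
```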
GetAssociativeMemory()
Gets the associative memory system.
public IAssociativeMemory<T> GetAssociativeMemory()
Returns
- IAssociativeMemory<T>
GetCMSBlocks()
Gets the CMS blocks (for inspection/debugging).
public ContinuumMemorySystemLayer<T>[] GetCMSBlocks()
Returns
- ContinuumMemorySystemLayer<T>[]
GetContextFlow()
Gets the context flow mechanism.
public IContextFlow<T> GetContextFlow()
Returns
- IContextFlow<T>
GetMetaState()
Gets the current meta-state (for inspection/debugging).
public Vector<T>? GetMetaState()
Returns
- Vector<T>?
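An inspection sketch combining GetCMSBlocks() and GetMetaState() for debugging; whether the meta-state can be null before any forward pass is an assumption.

```csharp
// Inspect the CMS blocks and the current meta-state for debugging.
ContinuumMemorySystemLayer<double>[] cmsBlocks = hope.GetCMSBlocks();
Console.WriteLine($"CMS blocks: {cmsBlocks.Length}");

Vector<double>? metaState = hope.GetMetaState();
if (metaState is null)
{
    // Assumption: the meta-state may be unset before the first forward pass.
    Console.WriteLine("Meta-state not yet initialized.");
}
```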
GetModelMetadata()
Gets metadata about the model (required by NeuralNetworkBase).
public override ModelMetadata<T> GetModelMetadata()
Returns
- ModelMetadata<T>
InitializeLayers()
Initializes the layers of the neural network based on the architecture.
protected override void InitializeLayers()
Remarks
For Beginners: This method sets up all the layers in your neural network according to the architecture you've defined. It's like assembling the parts of your network before you can use it.
Predict(Tensor<T>)
Makes a prediction on the given input (required by NeuralNetworkBase). For Hope, this is equivalent to calling Forward(Tensor<T>).
public override Tensor<T> Predict(Tensor<T> input)
Parameters
- input: Tensor<T>
Returns
- Tensor<T>
ResetMemory()
Resets all memory in CMS blocks and meta-state.
public void ResetMemory()
ResetRecurrentState()
Resets recurrent layer states.
public void ResetRecurrentState()
ResetState()
Resets the state of the network (required by NeuralNetworkBase).
public override void ResetState()
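A sketch of clearing state between independent sequences; which reset is appropriate depends on how much context should carry over.

```csharp
// Full reset (base-class contract): clears everything tracked by the network.
hope.ResetState();

// Finer-grained resets, if only part of the state should be cleared:
hope.ResetRecurrentState(); // recurrent layer states only
hope.ResetMemory();         // CMS memories and meta-state only
```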
SerializeNetworkSpecificData(BinaryWriter)
Serializes Hope-specific data for model persistence.
protected override void SerializeNetworkSpecificData(BinaryWriter writer)
Parameters
- writer: BinaryWriter
SetSelfModificationRate(T)
Sets the self-modification rate for self-referential optimization.
public void SetSelfModificationRate(T rate)
Parameters
- rate: T
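A one-line usage sketch; the value 0.01 is an arbitrary illustration, assuming a HopeNetwork<double> so that T is double.

```csharp
// Lower the self-modification rate for more conservative self-referential updates.
hope.SetSelfModificationRate(0.01);
```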
Train(Tensor<T>, Tensor<T>)
Trains the network on a single input-output pair (required by NeuralNetworkBase).
public override void Train(Tensor<T> input, Tensor<T> expectedOutput)
Parameters
- input: Tensor<T>
- expectedOutput: Tensor<T>
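A sketch of a single online training step followed by a prediction, assuming input and expectedOutput tensors shaped to match the architecture.

```csharp
// One online training step on a single input/target pair.
hope.Train(input, expectedOutput);

// Check the updated network on the same input (Predict is equivalent to Forward).
Tensor<double> output = hope.Predict(input);
```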
UpdateParameters(Vector<T>)
Updates all parameters in the network (required by NeuralNetworkBase). Distributes parameters across all CMS blocks and recurrent layers.
public override void UpdateParameters(Vector<T> parameters)
Parameters
- parameters: Vector<T>