Class OptimizerBase<T, TInput, TOutput>
- Namespace
- AiDotNet.Optimizers
- Assembly
- AiDotNet.dll
Represents the base class for all optimization algorithms, providing common functionality and interfaces.
public abstract class OptimizerBase<T, TInput, TOutput> : IOptimizer<T, TInput, TOutput>, IModelSerializer
Type Parameters
T
The numeric type used for calculations, typically float or double.
TInput
The type of input data for the model.
TOutput
The type of output data for the model.
- Inheritance
-
OptimizerBase<T, TInput, TOutput>
- Implements
-
IOptimizer<T, TInput, TOutput>
IModelSerializer
- Derived
- Inherited Members
- Extension Methods
Remarks
OptimizerBase is an abstract class that serves as the foundation for all optimization algorithms. It defines the common structure and functionality that all optimizers must implement, such as solution evaluation, caching, and adaptive parameter management. This class handles the core mechanics of optimization processes, allowing derived classes to focus on their specific optimization strategies.
For Beginners: This is the blueprint that all optimization algorithms follow.
Think of OptimizerBase as the common foundation that all optimizers are built upon:
- It defines what every optimizer must be able to do (evaluate solutions, manage caching)
- It provides shared tools that all optimizers can use (like adaptive learning rates and early stopping)
- It manages the evaluation of solutions and tracks the optimization progress
- It handles saving and loading optimizer states
All specific optimizer types (like genetic algorithms, particle swarm, etc.) inherit from this class, which ensures they all work together consistently in the optimization process.
Constructors
OptimizerBase(IFullModel<T, TInput, TOutput>?, OptimizationAlgorithmOptions<T, TInput, TOutput>)
Initializes a new instance of the OptimizerBase class.
protected OptimizerBase(IFullModel<T, TInput, TOutput>? model, OptimizationAlgorithmOptions<T, TInput, TOutput> options)
Parameters
model IFullModel<T, TInput, TOutput>
The model to be optimized (can be null if set later).
options OptimizationAlgorithmOptions<T, TInput, TOutput>
The optimization algorithm options.
Fields
CurrentLearningRate
The current learning rate used in the optimization process.
protected T CurrentLearningRate
Field Value
- T
CurrentMomentum
The current momentum used in the optimization process.
protected T CurrentMomentum
Field Value
- T
FitDetector
Detects the quality of fit for models.
protected readonly IFitDetector<T, TInput, TOutput> FitDetector
Field Value
- IFitDetector<T, TInput, TOutput>
FitnessCalculator
Calculates the fitness score of models.
protected readonly IFitnessCalculator<T, TInput, TOutput> FitnessCalculator
Field Value
- IFitnessCalculator<T, TInput, TOutput>
FitnessList
Stores the fitness scores of evaluated models.
protected readonly List<T> FitnessList
Field Value
- List<T>
IterationHistoryList
Stores information about each optimization iteration.
protected readonly List<OptimizationIterationInfo<T>> IterationHistoryList
Field Value
- List<OptimizationIterationInfo<T>>
IterationsWithImprovement
Counts the number of consecutive iterations with improvement.
protected int IterationsWithImprovement
Field Value
- int
IterationsWithoutImprovement
Counts the number of consecutive iterations without improvement.
protected int IterationsWithoutImprovement
Field Value
- int
ModelCache
Caches evaluated models to avoid redundant calculations.
protected readonly IModelCache<T, TInput, TOutput> ModelCache
Field Value
- IModelCache<T, TInput, TOutput>
ModelEvaluator
Evaluates the performance of models.
protected readonly IModelEvaluator<T, TInput, TOutput> ModelEvaluator
Field Value
- IModelEvaluator<T, TInput, TOutput>
ModelStatsOptions
Options for model statistics calculations.
protected readonly ModelStatsOptions ModelStatsOptions
Field Value
- ModelStatsOptions
NumOps
Provides numeric operations for type T.
protected readonly INumericOperations<T> NumOps
Field Value
- INumericOperations<T>
Options
Contains the configuration options for the optimization algorithm.
protected readonly OptimizationAlgorithmOptions<T, TInput, TOutput> Options
Field Value
- OptimizationAlgorithmOptions<T, TInput, TOutput>
PredictionOptions
Options for prediction statistics calculations.
protected readonly PredictionStatsOptions PredictionOptions
Field Value
- PredictionStatsOptions
Random
Provides random number generation for all derived classes.
protected readonly Random Random
Field Value
- Random
Properties
Engine
Gets the global execution engine for vector operations.
protected IEngine Engine { get; }
Property Value
- IEngine
Model
Gets the model that this optimizer is configured to optimize.
public IFullModel<T, TInput, TOutput>? Model { get; }
Property Value
- IFullModel<T, TInput, TOutput>
Remarks
This property provides access to the model that the optimizer is working with. It implements the IOptimizer interface property to expose the protected Model field.
For Beginners: This property lets external code see which model the optimizer is currently working with, without being able to change it. It's like a window that lets you look at the model but not touch it.
Methods
AdjustModelParameters(IFullModel<T, TInput, TOutput>, double, double)
Adjusts the parameters (weights) of a model.
protected virtual void AdjustModelParameters(IFullModel<T, TInput, TOutput> model, double adjustmentScale = 0.1, double signFlipProbability = 0.05)
Parameters
model IFullModel<T, TInput, TOutput>
The model whose parameters should be adjusted.
adjustmentScale double
Scale factor for parameter adjustments.
signFlipProbability double
Probability of flipping a parameter's sign.
Remarks
For Beginners: This is like adjusting the quantities of ingredients in your recipe. While keeping the same ingredients, you're changing how much of each one you use to find the perfect balance.
AdjustParameters(Vector<T>, double, double)
Adjusts a vector of parameters by applying random modifications.
protected virtual Vector<T> AdjustParameters(Vector<T> parameters, double adjustmentScale, double signFlipProbability)
Parameters
parameters Vector<T>
The original parameters.
adjustmentScale double
Scale factor for parameter adjustments.
signFlipProbability double
Probability of flipping a parameter's sign.
Returns
- Vector<T>
A new vector with adjusted parameters.
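The adjustment scheme described above can be sketched in standalone C#. This is an illustrative stand-in using plain double arrays rather than the library's Vector<T>, and the uniform multiplicative tweak is an assumption about the implementation, not the library's exact formula:

```csharp
using System;

// Hypothetical stand-in for the protected AdjustParameters helper:
// each parameter gets a small random multiplicative tweak, and with a
// small probability its sign is flipped to explore a different region.
public static class ParameterAdjustment
{
    public static double[] Adjust(
        double[] parameters,
        double adjustmentScale,
        double signFlipProbability,
        Random random)
    {
        var result = new double[parameters.Length];
        for (int i = 0; i < parameters.Length; i++)
        {
            // Scale each value by a random factor in [1 - scale, 1 + scale].
            double factor = 1.0 + adjustmentScale * (2.0 * random.NextDouble() - 1.0);
            double value = parameters[i] * factor;

            // Occasionally flip the sign to escape the current region.
            if (random.NextDouble() < signFlipProbability)
                value = -value;

            result[i] = value;
        }
        return result;
    }
}
```

With signFlipProbability set to 0, every adjusted value stays within adjustmentScale (as a fraction) of its original and keeps its sign.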
ApplyFeatureSelection(IFullModel<T, TInput, TOutput>, List<int>)
Applies the selected features to a model.
protected virtual void ApplyFeatureSelection(IFullModel<T, TInput, TOutput> model, List<int> selectedFeatures)
Parameters
model IFullModel<T, TInput, TOutput>
The model to apply feature selection to.
selectedFeatures List<int>
The list of selected feature indices.
ApplyFeatureSelection(IFullModel<T, TInput, TOutput>, int)
Applies feature selection to a model.
protected virtual void ApplyFeatureSelection(IFullModel<T, TInput, TOutput> model, int totalFeatures)
Parameters
model IFullModel<T, TInput, TOutput>
The model to apply feature selection to.
totalFeatures int
The total number of available features.
Remarks
This method selects a subset of features to be used by the model, potentially improving its performance by focusing on the most relevant data dimensions.
For Beginners: This is like deciding which ingredients to include in your recipe. Some ingredients might not be necessary or might even make the dish worse, so you're experimenting with different combinations to find which ones are truly important.
CacheStepData(string, OptimizationStepData<T, TInput, TOutput>)
Caches step data for a given solution.
protected void CacheStepData(string key, OptimizationStepData<T, TInput, TOutput> stepData)
Parameters
key string
The cache key for the solution.
stepData OptimizationStepData<T, TInput, TOutput>
The step data to cache.
CalculateLoss(IFullModel<T, TInput, TOutput>, OptimizationInputData<T, TInput, TOutput>)
Calculates the loss for a given solution.
protected virtual T CalculateLoss(IFullModel<T, TInput, TOutput> solution, OptimizationInputData<T, TInput, TOutput> inputData)
Parameters
solution IFullModel<T, TInput, TOutput>
The solution to evaluate.
inputData OptimizationInputData<T, TInput, TOutput>
The input data for evaluation.
Returns
- T
The calculated loss value.
CalculateUpdate(Vector<T>, Vector<T>)
Calculates the parameter updates based on the gradients.
public virtual Vector<T> CalculateUpdate(Vector<T> gradients, Vector<T> parameters)
Parameters
gradients Vector<T>
The gradients of the loss function with respect to the parameters.
parameters Vector<T>
The current parameter values.
Returns
- Vector<T>
The updates to be applied to the parameters.
Remarks
For Beginners: This base implementation returns the gradients as-is, which represents vanilla gradient descent. Derived classes should override this to implement specific optimization algorithms (like Adam, SGD with momentum, etc.).
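The base behavior — returning the gradients unchanged — corresponds to vanilla gradient descent once a training loop applies the update with a learning rate. A minimal standalone sketch with plain arrays (the method and class names here are illustrative, not the library's):

```csharp
using System;

public static class VanillaGradientDescent
{
    // Mirrors the base-class behavior: the "update" is just the gradient.
    public static double[] CalculateUpdate(double[] gradients, double[] parameters)
        => (double[])gradients.Clone();

    // A typical training loop then applies: p <- p - learningRate * update.
    public static double[] ApplyUpdate(double[] parameters, double[] update, double learningRate)
    {
        var next = new double[parameters.Length];
        for (int i = 0; i < parameters.Length; i++)
            next[i] = parameters[i] - learningRate * update[i];
        return next;
    }
}
```

An optimizer like Adam would override CalculateUpdate to transform the raw gradients (e.g., with moment estimates) before they are applied.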
CalculateUpdate(Dictionary<string, Vector<T>>)
Calculates the parameter update based on the provided gradients.
public virtual Dictionary<string, Vector<T>> CalculateUpdate(Dictionary<string, Vector<T>> gradients)
Parameters
gradients Dictionary<string, Vector<T>>
The gradients used to compute the parameter updates.
Returns
- Dictionary<string, Vector<T>>
The calculated parameter updates as a dictionary mapping parameter names to their update vectors.
CreateOptimizationResult(OptimizationStepData<T, TInput, TOutput>, OptimizationInputData<T, TInput, TOutput>)
Creates a new optimization result based on the best step data found during optimization.
protected OptimizationResult<T, TInput, TOutput> CreateOptimizationResult(OptimizationStepData<T, TInput, TOutput> bestStepData, OptimizationInputData<T, TInput, TOutput> input)
Parameters
bestStepData OptimizationStepData<T, TInput, TOutput>
The data from the best optimization step.
input OptimizationInputData<T, TInput, TOutput>
The original input data used for optimization.
Returns
- OptimizationResult<T, TInput, TOutput>
A structured optimization result containing all relevant information.
Remarks
This method packages all the optimization results into a single structured object that can be returned to the caller. It includes the best solution found, its fitness score, training metrics, validation metrics, and test metrics.
For Beginners: Think of this method as packaging up all the results from your optimization process into one neat container.
It's like finishing a science experiment and organizing all your findings into a clear report:
- It includes the best solution found
- It shows how well that solution performed (fitness score)
- It contains detailed statistics about how the solution performed on different datasets
- It records other important information like selected features and iteration count
This makes it easy for you or other code to work with the results.
CreateSolution(TInput)
Creates a potential solution based on the optimization mode.
protected virtual IFullModel<T, TInput, TOutput> CreateSolution(TInput xTrain)
Parameters
xTrain TInput
Training data used to determine data dimensions.
Returns
- IFullModel<T, TInput, TOutput>
A new potential solution (model variant).
Remarks
This method creates a new model variant by either selecting features, adjusting parameters, or both, depending on the optimization mode.
For Beginners: This is like creating a new version of the recipe. Depending on what you're focusing on, you might change which ingredients you use, how much of each ingredient you add, or both aspects at once.
Deserialize(byte[])
Reconstructs the optimizer from a serialized byte array.
public virtual void Deserialize(byte[] data)
Parameters
data byte[]
The byte array containing the serialized optimizer.
Remarks
This method rebuilds the optimizer's state from a serialized byte array. It verifies that the serialized type matches the current type, restores configuration options, and calls into derived classes to restore any additional data.
For Beginners: This method loads a previously saved optimizer state.
It's like restoring a snapshot:
- It takes a byte array that was previously created with Serialize()
- It checks that the type matches (you can't load settings from a different type of optimizer)
- It reconstructs all the settings and values
This allows you to:
- Continue working with an optimizer that you previously saved
- Use an optimizer that someone else created and shared
- Recover from backups if needed
Exceptions
- InvalidOperationException
Thrown when the serialized type doesn't match the current type.
DeserializeAdditionalData(BinaryReader)
Deserializes additional data specific to derived optimizer classes.
protected virtual void DeserializeAdditionalData(BinaryReader reader)
Parameters
reader BinaryReader
The binary reader to use for deserialization.
Remarks
This protected virtual method allows derived optimizer classes to deserialize additional data beyond what is handled by the base implementation. The base implementation does nothing.
For Beginners: This method allows specialized optimizers to load their unique settings.
Think of it as reading extra details from the snapshot:
- The base optimizer loads the common information
- This method lets specialized optimizers load their specific settings
- It's empty in the base class since it doesn't have any special data to load
This complements the SerializeAdditionalData method to allow different optimizers to save and load their specialized settings.
EvaluateSolution(IFullModel<T, TInput, TOutput>, OptimizationInputData<T, TInput, TOutput>)
Evaluates a solution, using cached results if available.
protected virtual OptimizationStepData<T, TInput, TOutput> EvaluateSolution(IFullModel<T, TInput, TOutput> solution, OptimizationInputData<T, TInput, TOutput> inputData)
Parameters
solution IFullModel<T, TInput, TOutput>
The solution to evaluate.
inputData OptimizationInputData<T, TInput, TOutput>
The input data for evaluation.
Returns
- OptimizationStepData<T, TInput, TOutput>
The evaluation results for the solution.
GenerateCacheKey(IFullModel<T, TInput, TOutput>, OptimizationInputData<T, TInput, TOutput>)
Generates a cache key for the given solution and input data.
protected virtual string GenerateCacheKey(IFullModel<T, TInput, TOutput> solution, OptimizationInputData<T, TInput, TOutput> inputData)
Parameters
solution IFullModel<T, TInput, TOutput>
The solution model.
inputData OptimizationInputData<T, TInput, TOutput>
The optimization input data.
Returns
- string
A unique cache key string.
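One plausible way to build such a key is to hash the model's parameter values together with an identifier for the input data, so identical configurations map to identical keys. This is an illustrative sketch under that assumption, not the library's actual key scheme:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class CacheKeyDemo
{
    // Build a deterministic key from parameter values plus a data identifier.
    public static string GenerateCacheKey(double[] parameters, string dataId)
    {
        var sb = new StringBuilder(dataId);
        foreach (var p in parameters)
            sb.Append('|').Append(p.ToString("R")); // round-trip format keeps full precision

        using var sha = SHA256.Create();
        byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString()));
        return Convert.ToHexString(hash);
    }
}
```

Determinism matters here: the same solution evaluated on the same data must produce the same key, or the cache would never get a hit.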
GetCachedStepData(string)
Retrieves cached step data for a given solution.
protected OptimizationStepData<T, TInput, TOutput>? GetCachedStepData(string key)
Parameters
key string
The cache key for the solution.
Returns
- OptimizationStepData<T, TInput, TOutput>
The cached step data, if available; otherwise, null.
GetOptions()
Gets the current options for this optimizer.
public abstract OptimizationAlgorithmOptions<T, TInput, TOutput> GetOptions()
Returns
- OptimizationAlgorithmOptions<T, TInput, TOutput>
The current optimization algorithm options.
Remarks
This abstract method must be implemented by derived classes to return their current configuration options.
For Beginners: This method retrieves the current settings of the optimizer.
It's like checking the current configuration of your device:
- It returns all the settings that control how the optimizer behaves
- Each type of optimizer will implement this differently to return its specific settings
This is useful for:
- Seeing what settings are currently active
- Making a copy of settings to modify and apply later
- Comparing settings between different optimizers
InitializeAdaptiveParameters()
Initializes the adaptive parameters used during optimization to their starting values.
protected virtual void InitializeAdaptiveParameters()
Remarks
This method sets the initial values for parameters that can adapt during optimization, such as learning rate and momentum. It also resets tracking variables for monitoring improvement.
For Beginners: This method sets up the starting values for parameters that will change during the optimization process.
Think of it like setting up a car before a long journey:
- Setting the initial speed (learning rate)
- Setting the initial acceleration (momentum)
- Resetting the trip counters (improvement trackers)
These values will adjust automatically during optimization to help find the best solution efficiently. The learning rate controls how big each step is, while momentum helps move through flat areas.
InitializeRandomSolution(Vector<T>, Vector<T>)
Initializes a random solution within the given bounds.
protected virtual Vector<T> InitializeRandomSolution(Vector<T> lowerBounds, Vector<T> upperBounds)
Parameters
lowerBounds Vector<T>
Lower bounds for each parameter.
upperBounds Vector<T>
Upper bounds for each parameter.
Returns
- Vector<T>
A vector representing a random solution.
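The bounded initialization can be sketched with plain arrays in place of the library's Vector<T>; the uniform per-dimension draw is an assumption about the implementation:

```csharp
using System;

public static class RandomInitialization
{
    // Draw each parameter uniformly between its lower and upper bound.
    public static double[] InitializeRandomSolution(
        double[] lowerBounds, double[] upperBounds, Random random)
    {
        var solution = new double[lowerBounds.Length];
        for (int i = 0; i < solution.Length; i++)
            solution[i] = lowerBounds[i]
                + random.NextDouble() * (upperBounds[i] - lowerBounds[i]);
        return solution;
    }
}
```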
InitializeRandomSolution(TInput)
Initializes a random solution model based on the training data.
protected virtual IFullModel<T, TInput, TOutput> InitializeRandomSolution(TInput trainingData)
Parameters
trainingData TInput
The training data used to determine data dimensions.
Returns
- IFullModel<T, TInput, TOutput>
LoadModel(string)
Loads the optimizer state from a file.
public virtual void LoadModel(string filePath)
Parameters
filePath string
The path to the file containing the saved optimizer.
Remarks
This method loads the complete state of the optimizer from a file, including all configuration options and any optimizer-specific data.
For Beginners: This loads a previously saved optimizer from a file.
It's like loading a saved game:
- It restores all the optimizer's settings and state
- You can continue optimization from where you left off
- You can reuse optimizer configurations that worked well previously
Optimize(OptimizationInputData<T, TInput, TOutput>)
Performs the optimization process.
public abstract OptimizationResult<T, TInput, TOutput> Optimize(OptimizationInputData<T, TInput, TOutput> inputData)
Parameters
inputData OptimizationInputData<T, TInput, TOutput>
The input data for the optimization process.
Returns
- OptimizationResult<T, TInput, TOutput>
The result of the optimization process.
PrepareAndEvaluateSolution(IFullModel<T, TInput, TOutput>, OptimizationInputData<T, TInput, TOutput>)
Prepares and evaluates a solution, applying feature selection before checking the cache.
protected OptimizationStepData<T, TInput, TOutput> PrepareAndEvaluateSolution(IFullModel<T, TInput, TOutput> solution, OptimizationInputData<T, TInput, TOutput> inputData)
Parameters
solution IFullModel<T, TInput, TOutput>
The solution to evaluate.
inputData OptimizationInputData<T, TInput, TOutput>
The input data for evaluation.
Returns
- OptimizationStepData<T, TInput, TOutput>
The evaluation results for the solution.
Remarks
For Beginners: This method prepares a model with a specific set of features, checks if we've already trained this exact configuration before, and if not, trains and evaluates the model with the selected features.
RandomlySelectFeatures(int, int?, int?)
Randomly selects a subset of features to use in a model.
protected virtual List<int> RandomlySelectFeatures(int totalFeatures, int? minFeatures = null, int? maxFeatures = null)
Parameters
totalFeatures int
The total number of available features.
minFeatures int?
The minimum number of features to select.
maxFeatures int?
The maximum number of features to select.
Returns
- List<int>
A list of the selected feature indices.
Remarks
For Beginners: This is like randomly selecting a subset of ingredients from your pantry to include in your recipe experiment.
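A standalone sketch of random subset selection, assuming the count is drawn uniformly between the minimum and maximum (the exact policy in the library may differ):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class FeatureSelectionDemo
{
    // Pick a random subset of feature indices with a size between
    // minFeatures and maxFeatures (inclusive).
    public static List<int> RandomlySelectFeatures(
        int totalFeatures, int minFeatures, int maxFeatures, Random random)
    {
        int count = random.Next(minFeatures, maxFeatures + 1);

        // Shuffle all indices, take the first `count`, then sort for readability.
        return Enumerable.Range(0, totalFeatures)
            .OrderBy(_ => random.Next())
            .Take(count)
            .OrderBy(i => i)
            .ToList();
    }
}
```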
Reset()
Resets the optimizer state, clearing the model cache.
public virtual void Reset()
ResetAdaptiveParameters()
Resets the adaptive parameters back to their initial values.
protected virtual void ResetAdaptiveParameters()
Remarks
This method calls InitializeAdaptiveParameters to reset all adaptive parameters to their starting values. This can be useful when the optimization process needs to be restarted or when a significant change occurs.
For Beginners: This method resets the optimization process parameters to start fresh.
It's like restarting a navigation system when you've gone off course:
- You reset your speed back to the initial value
- You reset your direction and momentum
- You clear any history of previous attempts
This gives the optimization a clean slate to try again from the beginning.
SaveModel(string)
Saves the optimizer state to a file.
public virtual void SaveModel(string filePath)
Parameters
filePath string
The path where the optimizer should be saved.
Remarks
This method saves the complete state of the optimizer, including all configuration options and any optimizer-specific data, to a file.
For Beginners: This saves your optimizer's current settings and state to a file.
Think of it like saving your progress:
- It captures all the optimizer's settings and current state
- This can be loaded later to resume optimization or reuse the same settings
- It's useful for checkpointing long-running optimizations
Serialize()
Serializes the optimizer state to a byte array.
public virtual byte[] Serialize()
Returns
- byte[]
A byte array containing the serialized optimizer state.
Remarks
This method saves the current state of the optimizer, including its options and any derived class-specific data. The serialized data can be used to reconstruct the optimizer's state later or to transfer it between processes.
For Beginners: This method saves all the important information about the optimizer's current state.
Think of it like taking a snapshot of the optimizer:
- It captures all the current settings and progress
- This snapshot can be saved to a file or sent to another computer
- Later, you can use this snapshot to continue from where you left off
This is useful for:
- Saving your progress in case the program crashes
- Sharing your optimizer's state with others
- Continuing a long optimization process after a break
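The serialize/deserialize handshake described here — write a type marker plus state, then verify the marker on load — can be sketched on a simplified stand-in class (the field names are illustrative, not the library's):

```csharp
using System;
using System.IO;

// Simplified stand-in showing the snapshot pattern used by Serialize/Deserialize.
public class SnapshotDemo
{
    public double LearningRate = 0.01;
    public int Iteration = 0;

    public byte[] Serialize()
    {
        using var stream = new MemoryStream();
        using var writer = new BinaryWriter(stream);
        writer.Write(GetType().FullName!);   // type marker, checked on load
        writer.Write(LearningRate);
        writer.Write(Iteration);
        return stream.ToArray();
    }

    public void Deserialize(byte[] data)
    {
        using var reader = new BinaryReader(new MemoryStream(data));
        string typeName = reader.ReadString();
        if (typeName != GetType().FullName)
            throw new InvalidOperationException(
                $"Serialized type {typeName} does not match {GetType().FullName}.");
        LearningRate = reader.ReadDouble();
        Iteration = reader.ReadInt32();
    }
}
```

The type check is what produces the InvalidOperationException documented under Deserialize when a snapshot from a different optimizer type is loaded.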
SerializeAdditionalData(BinaryWriter)
Serializes additional data specific to derived optimizer classes.
protected virtual void SerializeAdditionalData(BinaryWriter writer)
Parameters
writer BinaryWriter
The binary writer to use for serialization.
Remarks
This protected virtual method allows derived optimizer classes to serialize additional data beyond what is handled by the base implementation. The base implementation does nothing.
For Beginners: This method allows specialized optimizers to save their unique settings.
Think of it as adding extra details to the snapshot:
- The base optimizer saves the common information
- This method lets specialized optimizers save their specific settings
- It's empty in the base class since it doesn't have any special data to save
This is part of the extensible design that allows different types of optimizers to all use the same saving and loading system.
ShouldEarlyStop()
Determines whether the optimization process should stop early based on the recent history of fitness scores.
public virtual bool ShouldEarlyStop()
Returns
- bool
True if early stopping criteria are met, false otherwise.
Remarks
This method checks if the fitness score has not improved significantly over a specified number of iterations. If the improvement is below a threshold for a consecutive number of iterations, it suggests stopping early.
For Beginners: This method decides if it's time to stop trying to improve the solution.
Imagine you're trying to beat your personal best in a game:
- You keep playing and tracking your scores
- If your score hasn't improved much after several attempts, you might decide to stop
- This method does that for the optimization process
It's useful because:
- It prevents wasting time when the solution isn't getting much better
- It helps avoid overfitting, where the model becomes too specific to the training data
- It can save computational resources by stopping when further improvement is unlikely
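A standalone sketch of the stagnation check described above, assuming higher fitness is better and using a "patience" window plus a minimum-improvement threshold (the library's exact criteria come from its options):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class EarlyStoppingDemo
{
    // Stop when the best score in the last `patience` iterations has not
    // improved on the best score seen before them by at least `minDelta`.
    public static bool ShouldEarlyStop(
        IReadOnlyList<double> fitnessHistory, int patience, double minDelta)
    {
        if (fitnessHistory.Count <= patience) return false;

        double bestBefore = fitnessHistory.Take(fitnessHistory.Count - patience).Max();
        double bestRecent = fitnessHistory.Skip(fitnessHistory.Count - patience).Max();
        return bestRecent - bestBefore < minDelta;
    }
}
```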
Step()
Performs a single optimization step, updating the model parameters based on gradients.
public virtual void Step()
Remarks
This method performs one iteration of parameter updates. The default implementation throws a NotImplementedException, and gradient-based optimizers should override this method to implement their specific parameter update logic.
For Beginners: This is like taking one small step toward a better model. After calculating how wrong the model is (gradients), this method adjusts the model's parameters slightly to make it more accurate.
Think of it like adjusting a recipe:
- You taste the dish (check model performance)
- You determine what needs changing (calculate gradients)
- You adjust the ingredients (this Step method updates parameters)
- Repeat until the dish tastes good (model is accurate)
Most training loops call this method many times, each time making the model a little bit better.
UpdateAdaptiveParameters(OptimizationStepData<T, TInput, TOutput>, OptimizationStepData<T, TInput, TOutput>)
Updates the adaptive parameters based on the progress of optimization.
protected virtual void UpdateAdaptiveParameters(OptimizationStepData<T, TInput, TOutput> currentStepData, OptimizationStepData<T, TInput, TOutput> previousStepData)
Parameters
currentStepData OptimizationStepData<T, TInput, TOutput>
The current optimization step data.
previousStepData OptimizationStepData<T, TInput, TOutput>
The previous optimization step data.
Remarks
This method adjusts the learning rate and momentum based on whether the optimization is improving. If the fitness is improving, it may increase certain parameters to move faster. If the fitness is not improving, it may decrease certain parameters to explore more carefully.
For Beginners: This method changes how the optimization behaves based on its progress.
Think of it like adjusting your driving based on the road conditions:
- If you're making good progress, you might speed up (increase the learning rate and momentum)
- If you're not improving, you might slow down and explore more carefully (decrease the learning rate and momentum)
These automatic adjustments help the optimizer find better solutions by being more efficient:
- When close to a good solution, it takes smaller, more precise steps
- When stuck in a difficult area, it tries different approaches
The learning rate controls how big each step is in the optimization process. The momentum helps maintain direction through flat or noisy areas.
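One common adaptive scheme matching this description: grow the learning rate slightly while fitness improves, shrink it when progress stalls, and clamp it to a safe range. The factors and bounds below are illustrative assumptions, not the library's defaults:

```csharp
using System;

public static class AdaptiveParameterDemo
{
    public static double UpdateLearningRate(
        double currentLearningRate,
        double currentFitness,
        double previousFitness,
        double increaseFactor = 1.05,
        double decreaseFactor = 0.5,
        double min = 1e-6,
        double max = 1.0)
    {
        double next = currentFitness > previousFitness
            ? currentLearningRate * increaseFactor   // improving: take bigger steps
            : currentLearningRate * decreaseFactor;  // stalled: step more carefully

        // Clamp so the rate never vanishes or explodes.
        return Math.Clamp(next, min, max);
    }
}
```

The asymmetry (gentle increase, sharp decrease) is a typical design choice: overshooting a good region is usually more costly than approaching it slowly.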
UpdateBestSolution(OptimizationStepData<T, TInput, TOutput>, ref OptimizationStepData<T, TInput, TOutput>)
Updates the best step data if the current step data has a better solution.
protected void UpdateBestSolution(OptimizationStepData<T, TInput, TOutput> currentStepData, ref OptimizationStepData<T, TInput, TOutput> bestStepData)
Parameters
currentStepData OptimizationStepData<T, TInput, TOutput>
The current optimization step data.
bestStepData OptimizationStepData<T, TInput, TOutput>
The best optimization step data found so far, passed by reference to be updated.
Remarks
This method wraps the current and best step data in ModelResult objects and calls UpdateAndApplyBestSolution to determine if the current step data is better. If it is, the bestStepData reference is updated with values from the current step data.
For Beginners: This method compares two sets of optimization results to keep the better one.
Think of it like a talent competition:
- You have a current contestant (currentStepData)
- You have the current champion (bestStepData)
- This method compares them to see who performs better
- If the contestant wins, they become the new champion
This is a key part of optimization - always keeping track of the best solution found so far.
UpdateIterationHistoryAndCheckEarlyStopping(int, OptimizationStepData<T, TInput, TOutput>)
Updates the iteration history with the current step data and checks if early stopping should be applied.
protected bool UpdateIterationHistoryAndCheckEarlyStopping(int iteration, OptimizationStepData<T, TInput, TOutput> stepData)
Parameters
iteration int
The current iteration number.
stepData OptimizationStepData<T, TInput, TOutput>
The current step data.
Returns
- bool
True if optimization should stop early, false if it should continue.
Remarks
This method adds the current iteration's data to the history list and then checks if early stopping criteria have been met. Early stopping helps prevent overfitting by stopping the optimization process when progress stagnates for a number of iterations.
For Beginners: This method keeps track of progress and decides if it's time to stop trying.
Imagine you're trying to climb a hill to find the highest point:
- You keep a record of your altitude at each step (the iteration history)
- If you haven't gone any higher after walking for a while, you might decide to stop
- This saves time and prevents you from wandering too far
Early stopping is important because:
- It saves computation time when further optimization isn't helping
- It can prevent overfitting (when a model works too well on training data but poorly on new data)
- It tells you when you've found a good enough solution
UpdateOptions(OptimizationAlgorithmOptions<T, TInput, TOutput>)
Updates the optimizer's options with the provided options.
protected abstract void UpdateOptions(OptimizationAlgorithmOptions<T, TInput, TOutput> options)
Parameters
options OptimizationAlgorithmOptions<T, TInput, TOutput>
The options to apply to this optimizer.
Remarks
This abstract method must be implemented by derived classes to update their specific options based on the provided generic optimization options.
For Beginners: This method configures the optimizer with specific settings.
Think of it like updating the settings on your device:
- You provide a set of options (settings)
- This method applies those options to the optimizer
- Each type of optimizer will implement this differently based on what options it supports
This is important because different optimizers might interpret the same options differently, or might have additional specialized options.