Class SEALAlgorithm<T, TInput, TOutput>
Namespace: AiDotNet.MetaLearning.Algorithms
Assembly: AiDotNet.dll
Implementation of the SEAL (Sample-Efficient Adaptive Learning) meta-learning algorithm.
public class SEALAlgorithm<T, TInput, TOutput> : MetaLearnerBase<T, TInput, TOutput>, IMetaLearner<T, TInput, TOutput>
Type Parameters
T
The numeric type used for calculations (e.g., double, float).
TInput
The input data type (e.g., Matrix<T>, Tensor<T>).
TOutput
The output data type (e.g., Vector<T>, Tensor<T>).
Inheritance
MetaLearnerBase<T, TInput, TOutput> → SEALAlgorithm<T, TInput, TOutput>
Implements
IMetaLearner<T, TInput, TOutput>
Remarks
SEAL is a gradient-based meta-learning algorithm that combines ideas from MAML with sample-efficiency improvements. It learns initial parameters that can be quickly adapted to new tasks with just a few examples.
Key Features:
- Temperature scaling: controls confidence in predictions during meta-training
- Entropy regularization: encourages diverse predictions to prevent overconfident models
- Adaptive learning rates: per-parameter learning rate adaptation based on gradient norms
- Weight decay: prevents overfitting to meta-training tasks
Algorithm:
1. Sample a batch of tasks
2. For each task:
   a. Clone the meta-model
   b. Adapt to the task using the support set (inner loop)
   c. Evaluate on the query set to compute the meta-loss
   d. Apply temperature scaling and entropy regularization
   e. Compute meta-gradients
3. Average meta-gradients across tasks
4. Apply weight decay and update meta-parameters
A minimal end-to-end sketch of this loop follows.
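To make these steps concrete, here is a small, self-contained C# sketch of the loop on a one-parameter linear model. It is an illustration, not the library's internals: it uses a first-order meta-gradient and temperature scaling, and omits the entropy term (which only applies to probabilistic outputs). All names in it are hypothetical.

using System;
using System.Linq;

class SealLoopSketch
{
    // Gradient of mean squared error for the toy model y = w * x.
    static double Grad(double w, (double x, double y)[] set) =>
        set.Average(p => 2 * (w * p.x - p.y) * p.x);

    static void Main()
    {
        double metaW = 0.0;                  // meta-learned initialization
        double innerLr = 0.1, metaLr = 0.01;
        double temperature = 1.5, weightDecay = 1e-4;
        var rng = new Random(0);

        for (int step = 0; step < 200; step++)
        {
            // Step 1: sample a batch of tasks (each task: fit a random slope).
            var tasks = Enumerable.Range(0, 4).Select(_ =>
            {
                double slope = rng.NextDouble() * 4 - 2;
                var data = Enumerable.Range(1, 10)
                    .Select(i => (x: i / 10.0, y: slope * i / 10.0)).ToArray();
                return (support: data.Take(5).ToArray(), query: data.Skip(5).ToArray());
            }).ToArray();

            double metaGrad = 0;
            foreach (var task in tasks)
            {
                // Steps 2a-2b: clone the meta-parameter and adapt on the support set.
                double w = metaW;
                for (int k = 0; k < 3; k++)
                    w -= innerLr * Grad(w, task.support);

                // Steps 2c-2e: query-set meta-gradient with temperature scaling
                // (first-order approximation).
                metaGrad += Grad(w, task.query) / temperature;
            }

            // Steps 3-4: average across tasks, apply weight decay, update.
            metaGrad = metaGrad / tasks.Length + weightDecay * metaW;
            metaW -= metaLr * metaGrad;
        }

        Console.WriteLine($"meta-learned initialization: {metaW:F3}");
    }
}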
For Beginners: SEAL learns the best starting point for a model so that it can quickly adapt to new tasks with minimal data.
Imagine learning to play musical instruments:
- Learning your first instrument (e.g., piano) is hard
- Learning your second instrument (e.g., guitar) is easier
- By the time you learn your 5th instrument, you've learned principles of music that help you pick up new instruments much faster
SEAL does the same with machine learning models - it learns from many tasks to find a great starting point that makes adapting to new tasks much faster.
Reference: Based on gradient-based meta-learning with additional efficiency improvements including temperature scaling and entropy regularization.
Constructors
SEALAlgorithm(SEALOptions<T, TInput, TOutput>)
Initializes a new instance of the SEALAlgorithm class.
public SEALAlgorithm(SEALOptions<T, TInput, TOutput> options)
Parameters
options SEALOptions<T, TInput, TOutput>
SEAL configuration options containing the model and all hyperparameters.
Examples
// Create SEAL with minimal configuration
var options = new SEALOptions<double, Matrix<double>, Vector<double>>(myNeuralNetwork);
var seal = new SEALAlgorithm<double, Matrix<double>, Vector<double>>(options);

// Create SEAL with entropy regularization
var options = new SEALOptions<double, Matrix<double>, Vector<double>>(myNeuralNetwork)
{
    EntropyCoefficient = 0.01,
    Temperature = 1.5,
    UseAdaptiveInnerLR = true,
    AdaptiveLearningRateMode = SEALAdaptiveLearningRateMode.RunningMean
};
var seal = new SEALAlgorithm<double, Matrix<double>, Vector<double>>(options);
Exceptions
- ArgumentNullException
Thrown when options is null.
- InvalidOperationException
Thrown when required components are not set in options.
Properties
AlgorithmType
Gets the algorithm type identifier for this meta-learner.
public override MetaLearningAlgorithmType AlgorithmType { get; }
Property Value
- MetaLearningAlgorithmType
Returns SEAL.
Remarks
This property identifies the algorithm as SEAL, a sample-efficient meta-learning algorithm that combines MAML-style gradient-based meta-learning with temperature scaling and entropy regularization.
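For instance, the property can be read directly off an instance (here seal refers to the instance created in the constructor examples above):

// AlgorithmType is fixed by the class; useful when handling several
// meta-learners generically and logging or branching on the algorithm.
Console.WriteLine(seal.AlgorithmType);   // SEAL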
Methods
Adapt(IMetaLearningTask<T, TInput, TOutput>)
Adapts the meta-learned model to a new task using gradient descent.
public override IModel<TInput, TOutput, ModelMetadata<T>> Adapt(IMetaLearningTask<T, TInput, TOutput> task)
Parameters
task IMetaLearningTask<T, TInput, TOutput>
The new task containing support set examples for adaptation.
Returns
- IModel<TInput, TOutput, ModelMetadata<T>>
A new model instance that has been fine-tuned to the given task.
Remarks
SEAL adaptation performs gradient descent on the support set, optionally using adaptive per-parameter learning rates based on gradient norms. The meta-learned initialization enables rapid adaptation with few examples.
Adaptation Process:
1. Clone the meta-model with the learned initialization
2. For each adaptation step:
   a. Compute gradients on the support set
   b. Optionally compute adaptive learning rates
   c. Update parameters using gradient descent
3. Return the adapted model
An illustrative sketch of the adaptive step (2b) follows.
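Step 2b is the SEAL-specific part. The fragment below is an illustrative guess at a RunningMean-style rule (scale each parameter's step by a running mean of its gradient magnitude); the library's actual rule is selected via AdaptiveLearningRateMode and may differ in detail.

using System;

// Hypothetical per-parameter adaptive update. 'runningNorm' carries state
// across adaptation steps; none of these names come from the library.
static class AdaptiveStepSketch
{
    public static void Step(double[] parameters, double[] grads, double[] runningNorm,
                            double baseLr = 0.01, double beta = 0.9, double eps = 1e-8)
    {
        for (int i = 0; i < parameters.Length; i++)
        {
            runningNorm[i] = beta * runningNorm[i] + (1 - beta) * Math.Abs(grads[i]);
            // Parameters with persistently large gradients take smaller steps.
            double lr = baseLr / (runningNorm[i] + eps);
            parameters[i] -= lr * grads[i];
        }
    }
}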
For Beginners: When you give SEAL a new task (with just a few examples), it quickly adjusts its parameters to perform well on that task. This works because the meta-learned starting point was specifically optimized to enable fast adaptation.
It's like a musician who has learned many instruments - when they pick up a new one, they already know the general principles and just need to learn the specific fingerings and techniques for that instrument.
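In code, adaptation is a single call. The sketch below hedges on the surrounding details: how the task object is built and how the returned model is invoked both depend on the rest of your pipeline.

// Hypothetical few-shot adaptation; LoadFewShotTask() is a placeholder for
// however your pipeline builds an IMetaLearningTask from support examples.
IMetaLearningTask<double, Matrix<double>, Vector<double>> newTask = LoadFewShotTask();

// Returns a fresh model fine-tuned to the task; the meta-model itself is untouched.
IModel<Matrix<double>, Vector<double>, ModelMetadata<double>> adapted = seal.Adapt(newTask);

// Use 'adapted' for inference on the task's remaining examples (assuming the
// returned IModel exposes a Predict-style method; adjust to the actual interface).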
Exceptions
- ArgumentNullException
Thrown when task is null.
MetaTrain(TaskBatch<T, TInput, TOutput>)
Performs one meta-training step using SEAL's sample-efficient approach.
public override T MetaTrain(TaskBatch<T, TInput, TOutput> taskBatch)
Parameters
taskBatch TaskBatch<T, TInput, TOutput>
A batch of tasks to meta-train on, each containing support and query sets.
Returns
- T
The average meta-loss across all tasks in the batch (evaluated on query sets).
Remarks
SEAL meta-training extends MAML with additional sample-efficiency improvements:
For each task:
1. Clone the meta-model with the current meta-parameters
2. Perform K gradient descent steps on the task's support set (inner loop)
   - Optionally uses adaptive per-parameter learning rates
3. Evaluate the adapted model on the query set to compute the meta-loss
4. Apply temperature scaling: loss = loss / temperature
5. Add entropy regularization: loss = loss - entropy_coef * entropy(predictions)
6. Compute meta-gradients, with an optional first-order approximation
7. Clip gradients if a threshold is set
Steps 4-5 are spelled out in a sketch below, after the meta-update.
Meta-Update:
1. Average meta-gradients across all tasks
2. Apply weight decay: gradient += weight_decay * parameters
3. Update meta-parameters using the meta-optimizer
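Steps 4-5 of the per-task loop and step 2 of the meta-update are plain arithmetic. The sketch below spells them out for a classification head with softmax probabilities; all names are hypothetical, and inside the library this happens within MetaTrain.

using System;
using System.Linq;

static class SealLossShapingSketch
{
    // Per-task steps 4-5: scale the query loss by the temperature, then
    // subtract the mean prediction entropy so over-confident models pay a penalty.
    public static double ShapeLoss(double queryLoss, double[][] predictedProbs,
                                   double temperature, double entropyCoef)
    {
        double entropy = predictedProbs.Average(p =>
            -p.Sum(q => q > 0 ? q * Math.Log(q) : 0.0));
        return queryLoss / temperature - entropyCoef * entropy;
    }

    // Meta-update step 2: add weight decay to the averaged meta-gradient.
    public static void ApplyWeightDecay(double[] metaGrad, double[] metaParams, double weightDecay)
    {
        for (int i = 0; i < metaGrad.Length; i++)
            metaGrad[i] += weightDecay * metaParams[i];
    }
}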
For Beginners: SEAL meta-training is like a teacher who practices on many small lessons. For each lesson, the teacher quickly adapts their teaching style (inner loop), then evaluates how well students learned (query set). The teacher then adjusts their general teaching approach based on what worked across all lessons (meta-update).
The special features of SEAL:
- Temperature scaling controls how confident the model should be
- Entropy regularization encourages diverse predictions
- Adaptive learning rates help parameters learn at appropriate speeds
- Weight decay prevents overfitting to training tasks
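In user code, all of the above happens inside a single MetaTrain call; the outer loop just feeds it task batches. A hedged usage sketch (GetNextTaskBatch is a hypothetical stand-in for your episodic sampler):

// Hypothetical outer training loop driving SEAL meta-training.
for (int step = 0; step < 10000; step++)
{
    TaskBatch<double, Matrix<double>, Vector<double>> batch = GetNextTaskBatch();
    double metaLoss = seal.MetaTrain(batch);

    if (step % 100 == 0)
        Console.WriteLine($"step {step}: meta-loss = {metaLoss:F4}");
}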
Exceptions
- ArgumentException
Thrown when the task batch is null or empty.
- InvalidOperationException
Thrown when meta-gradient computation fails.