Interface IMetaLearner<T, TInput, TOutput>
- Namespace
- AiDotNet.Interfaces
- Assembly
- AiDotNet.dll
Unified interface for meta-learning algorithms that train models to quickly adapt to new tasks.
public interface IMetaLearner<T, TInput, TOutput>
Type Parameters
T: The numeric data type used for calculations (e.g., float, double).
TInput: The type of input data (e.g., Matrix<T>, Tensor<T>, double[]).
TOutput: The type of output data (e.g., Vector<T>, Tensor<T>, double[]).
Examples
// 1. Setup: Create an episodic data loader for 5-way 5-shot tasks
var dataLoader = new UniformEpisodicDataLoader<double, Tensor<double>, Tensor<double>>(
    datasetX: trainingFeatures,
    datasetY: trainingLabels,
    nWay: 5,         // 5 classes per task
    kShot: 5,        // 5 support examples per class
    queryShots: 15   // 15 query examples per class
);

// 2. Configure: Set up the meta-learner with options
var options = MetaLearnerOptionsBase<double>.CreateBuilder()
    .WithInnerLearningRate(0.01)
    .WithOuterLearningRate(0.001)
    .WithAdaptationSteps(5)
    .WithMetaBatchSize(4)
    .WithNumMetaIterations(1000)
    .Build();

var metaLearner = new MAMLAlgorithm<double, Tensor<double>, Tensor<double>>(
    metaModel: neuralNetwork,
    lossFunction: new CrossEntropyLoss<double>(),
    dataLoader: dataLoader,
    options: options
);

// 3. Meta-Training: Simply call Train()
var trainingResult = metaLearner.Train();

// 4. Deployment: Adapt to a new task with 5 examples per class
var newTask = dataLoader.GetNextTask();
var adaptResult = metaLearner.AdaptAndEvaluate(newTask);
Console.WriteLine($"New Task Accuracy: {adaptResult.QueryAccuracy:P2}");
Remarks
This is the unified interface for all meta-learning algorithms in the framework. It combines both training infrastructure and algorithm capabilities, enabling seamless integration with AiModelBuilder while supporting all 17 meta-learning algorithms (MAML, Reptile, ProtoNets, LEO, MetaOptNet, etc.).
For Beginners: Meta-learning is like teaching someone how to learn, not just what to learn.
Traditional vs Meta-Learning:
- Traditional: Train on thousands of cat/dog images → classify cats vs dogs well
- Meta-Learning: Train on many classification tasks → learn ANY new category from 5 examples
Real-world applications:
- Few-shot image classification (recognize new objects from 1-5 images)
- Rapid robot adaptation (new environments with minimal data)
- Personalized recommendations (adapt to new users quickly)
- Drug discovery (predict properties of new molecules)
Architecture - Two-Loop Optimization:
Inner Loop (Task Adaptation):
- Given: New task with support set (K examples per class)
- Process: Few gradient steps (1-10) to adapt model
- Output: Task-specific adapted parameters
- Goal: Quickly learn this specific task
Outer Loop (Meta-Optimization):
- Given: Batch of tasks from task distribution
- Process: For each task, adapt (inner loop) and evaluate on query set
- Output: Updated meta-parameters
- Goal: Learn parameters that enable fast adaptation across all tasks
This two-loop structure is what enables "learning to learn."
Production Considerations:
- Use MetaTrainStep() for custom training loops with proper batch sizes (2-32 tasks)
- Monitor Evaluate() metrics every N iterations to detect overfitting
- Use AdaptAndEvaluate() at deployment to quickly adapt to new tasks
- Save/Load models after meta-training for deployment
- Thread Safety: not thread-safe; use separate instances for concurrent training
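The two loops described above map directly onto this interface. A minimal training-loop sketch (the iteration count, batch size, and evaluation interval below are illustrative values, not framework defaults):

```csharp
// Outer loop: one meta-update per batch of tasks.
for (int iter = 0; iter < 1000; iter++)
{
    // MetaTrainStep runs the inner loop internally: it samples a batch of
    // tasks, adapts to each task's support set, evaluates on the query
    // sets, and applies one outer-loop update to the meta-parameters.
    var step = metaLearner.MetaTrainStep(batchSize: 4);

    // Periodic held-out evaluation to catch meta-overfitting early.
    if (iter % 100 == 0)
    {
        var eval = metaLearner.Evaluate(numTasks: 100);
    }
}
```

Train() performs this loop for you; MetaTrainStep() is for callers who need per-iteration control.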
Properties
AdaptationSteps
Gets the number of adaptation steps to perform during task adaptation (inner loop).
int AdaptationSteps { get; }
Property Value
- int
AlgorithmType
Gets the type of meta-learning algorithm.
MetaLearningAlgorithmType AlgorithmType { get; }
Property Value
- MetaLearningAlgorithmType
BaseModel
Gets the base model being meta-trained.
IFullModel<T, TInput, TOutput> BaseModel { get; }
Property Value
- IFullModel<T, TInput, TOutput>
CurrentIteration
Gets the current meta-training iteration count.
int CurrentIteration { get; }
Property Value
- int
InnerLearningRate
Gets the learning rate used for task adaptation (inner loop).
double InnerLearningRate { get; }
Property Value
- double
Options
Gets the meta-learner options (configuration).
IMetaLearnerOptions<T> Options { get; }
Property Value
- IMetaLearnerOptions<T>
OuterLearningRate
Gets the learning rate used for meta-learning (outer loop).
double OuterLearningRate { get; }
Property Value
- double
Methods
Adapt(IMetaLearningTask<T, TInput, TOutput>)
Adapts the model to a new task using its support set.
IModel<TInput, TOutput, ModelMetadata<T>> Adapt(IMetaLearningTask<T, TInput, TOutput> task)
Parameters
task (IMetaLearningTask<T, TInput, TOutput>): The task to adapt to.
Returns
- IModel<TInput, TOutput, ModelMetadata<T>>
A new model instance adapted to the task.
Remarks
For Beginners: This is where the "quick learning" happens. Given a new task with just a few examples (the support set), this method creates a new model that's specialized for that specific task.
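A minimal usage sketch (assumes metaLearner has already been meta-trained and newTask was sampled from an episodic data loader):

```csharp
// Adapt returns a new model instance specialized for this task;
// the meta-model itself is not modified.
var adaptedModel = metaLearner.Adapt(newTask);
// adaptedModel can now be used for predictions on data from that task.
```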
AdaptAndEvaluate(MetaLearningTask<T, TInput, TOutput>)
Adapts the model to a specific task and evaluates adaptation quality.
MetaAdaptationResult<T> AdaptAndEvaluate(MetaLearningTask<T, TInput, TOutput> task)
Parameters
task (MetaLearningTask<T, TInput, TOutput>): Meta-learning task with a support set (for adaptation) and a query set (for evaluation).
Returns
- MetaAdaptationResult<T>
Detailed metrics about adaptation performance and timing.
Evaluate(TaskBatch<T, TInput, TOutput>)
Evaluates the meta-learning algorithm on a batch of tasks.
T Evaluate(TaskBatch<T, TInput, TOutput> taskBatch)
Parameters
taskBatch (TaskBatch<T, TInput, TOutput>): The batch of tasks to evaluate on.
Returns
- T
The average evaluation loss across all tasks.
Remarks
For Beginners: This checks how well the meta-learning algorithm performs. For each task, it adapts using the support set and then tests on the query set. The returned value is the average loss across all tasks - lower means better performance.
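For example, with T = double (taskBatch is assumed to be a previously constructed TaskBatch of held-out tasks):

```csharp
// Average loss across all tasks in the batch; lower is better.
double avgLoss = metaLearner.Evaluate(taskBatch);
Console.WriteLine($"Validation meta-loss: {avgLoss:F4}");
```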
Evaluate(int)
Evaluates meta-learning performance on multiple held-out tasks.
MetaEvaluationResult<T> Evaluate(int numTasks)
Parameters
numTasks (int): Number of tasks to evaluate (100-1000 recommended for reliable statistics).
Returns
- MetaEvaluationResult<T>
Comprehensive metrics including mean accuracy, confidence intervals, and per-task statistics.
Remarks
Uses the episodic data loader configured during construction to sample evaluation tasks.
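A usage sketch (the task count below is illustrative; 100-1000 tasks are recommended for statistics):

```csharp
// Sample 500 held-out tasks via the configured episodic data loader,
// adapt to each, and aggregate the results.
var result = metaLearner.Evaluate(numTasks: 500);
// result includes mean accuracy, confidence intervals, and per-task stats.
```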
GetMetaModel()
Gets the current meta-model.
IFullModel<T, TInput, TOutput> GetMetaModel()
Returns
- IFullModel<T, TInput, TOutput>
The current meta-model.
Remarks
For Beginners: This returns the "meta-learned" model that has been trained on many tasks. This model itself may not be very good at any specific task, but it's excellent as a starting point for quickly adapting to new tasks.
Load(string)
Loads a previously meta-trained model from disk.
void Load(string filePath)
Parameters
filePath (string): File path to the saved model.
MetaTrain(TaskBatch<T, TInput, TOutput>)
Performs one meta-training step on a batch of tasks.
T MetaTrain(TaskBatch<T, TInput, TOutput> taskBatch)
Parameters
taskBatch (TaskBatch<T, TInput, TOutput>): The batch of tasks to train on.
Returns
- T
The meta-training loss for this batch.
Remarks
For Beginners: This method updates the model by training on multiple tasks at once. Each task teaches the model something about how to learn quickly. The returned loss value indicates how well the model is doing - lower is better.
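A usage sketch with T = double (taskBatch is assumed to come from an episodic data loader):

```csharp
// One meta-update over the batch; returns the meta-training loss.
double metaLoss = metaLearner.MetaTrain(taskBatch);
```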
MetaTrainStep(int)
Performs one meta-training step (outer loop update) using the episodic data loader.
MetaTrainingStepResult<T> MetaTrainStep(int batchSize)
Parameters
batchSize (int): Number of tasks to sample for this meta-update.
Returns
- MetaTrainingStepResult<T>
Metrics including meta-loss, task loss, accuracy, and timing information.
Remarks
Uses the episodic data loader configured during construction to sample tasks for this meta-update.
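A sketch of a custom training loop built on MetaTrainStep (iteration count and batch size are illustrative):

```csharp
for (int i = 0; i < 1000; i++)
{
    // Each call samples its own tasks and performs one outer-loop update.
    var step = metaLearner.MetaTrainStep(batchSize: 4);
    // step carries the meta-loss, task loss, accuracy, and timing for
    // this update, which can be logged or used for early stopping.
}
```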
Reset()
Resets the meta-learner to initial untrained state.
void Reset()
Save(string)
Saves the meta-trained model to disk for later deployment.
void Save(string filePath)
Parameters
filePath (string): File path where the model should be saved.
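A save/load round trip for deployment (the file name and extension are illustrative):

```csharp
// After meta-training: persist the meta-trained model.
metaLearner.Save("maml-meta.bin");

// Later, e.g. in a serving process with an identically configured learner:
metaLearner.Load("maml-meta.bin");
// The learner can now adapt to new tasks immediately via AdaptAndEvaluate.
```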
SetMetaModel(IFullModel<T, TInput, TOutput>)
Sets the base model for this meta-learning algorithm.
void SetMetaModel(IFullModel<T, TInput, TOutput> model)
Parameters
model (IFullModel<T, TInput, TOutput>): The model to use as the base.
Train()
Trains the meta-learner using the configuration specified during construction.
MetaTrainingResult<T> Train()
Returns
- MetaTrainingResult<T>
Complete training history with loss/accuracy progression and timing information.
Remarks
This method performs the complete outer-loop meta-training process, repeatedly calling MetaTrainStep and collecting metrics across all iterations. All training parameters are specified in the IMetaLearnerOptions provided during construction.
For Beginners: This is the main training method for meta-learning. Unlike traditional training where you train once on a dataset, this trains your model across many different tasks so it learns how to quickly adapt to new tasks.