Class MetaOptNetAlgorithm<T, TInput, TOutput>
- Namespace
- AiDotNet.MetaLearning.Algorithms
- Assembly
- AiDotNet.dll
Implementation of Meta-learning with Differentiable Convex Optimization (MetaOptNet) algorithm.
public class MetaOptNetAlgorithm<T, TInput, TOutput> : MetaLearnerBase<T, TInput, TOutput>, IMetaLearner<T, TInput, TOutput>
Type Parameters
T: The numeric type used for calculations (e.g., double, float).
TInput: The input data type (e.g., Matrix<T>, Tensor<T>).
TOutput: The output data type (e.g., Vector<T>, Tensor<T>).
- Inheritance
-
MetaLearnerBase<T, TInput, TOutput>
MetaOptNetAlgorithm<T, TInput, TOutput>
- Implements
-
IMetaLearner<T, TInput, TOutput>
- Inherited Members
Remarks
MetaOptNet replaces the gradient-based inner-loop optimization of MAML with a differentiable convex optimization solver. This provides several advantages:
- Closed-form solution (no iterative optimization)
- Theoretically guaranteed convergence
- Stable training dynamics
- Differentiable through implicit function theorem
Key Innovation: Instead of gradient descent in the inner loop:
MAML inner loop: θ' = θ - α∇L(θ) (repeat k times)
MetaOptNet: w* = argmin_w L(w) + λR(w) (closed-form!)
For Beginners: Imagine you're trying to fit a line to some points. MAML would iteratively adjust the line: "move a bit left, now a bit right..." MetaOptNet uses math to find the exact best line in one shot using the formula: w = (X^T X + λI)^(-1) X^T y
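As a concrete illustration, here is a minimal standalone sketch of that formula for a single feature, where the matrix inverse reduces to a scalar division (plain C#, not an AiDotNet API):
// One-feature ridge regression: w = (X^T X + λI)^(-1) X^T y reduces to
// w = Σ(x·y) / (Σ(x·x) + λ), i.e. the exact best slope in one step.
double[] x = { 1.0, 2.0, 3.0, 4.0 };
double[] y = { 2.1, 3.9, 6.2, 7.8 };
double lambda = 0.1;
double xty = 0.0, xtx = 0.0;
for (int i = 0; i < x.Length; i++)
{
    xty += x[i] * y[i];
    xtx += x[i] * x[i];
}
double w = xty / (xtx + lambda);   // no iterative gradient steps needed
Console.WriteLine($"Ridge slope: {w:F4}");   // ≈ 1.98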
Supported Solvers:
- Ridge Regression: Fast, closed-form, good for most tasks
- SVM: More powerful, better margins, but slower
- Logistic Regression: For probabilistic outputs
Algorithm:
For each task batch:
For each task:
1. Extract embeddings from support set: Z_s = f(X_s)
2. Solve convex problem: w* = Solver(Z_s, Y_s, λ)
3. Classify query set: Y_q = Z_q × w*
4. Compute query loss
Meta-update encoder f using gradients through the solver
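The toy example below walks through steps 1-3 for a single task with a ridge-regression solver, using plain arrays and treating the raw features as the embeddings (an identity encoder). It is a self-contained sketch of the math only, not AiDotNet's implementation.
// Toy task: 4 support examples with 2 features each, binary labels encoded as ±1.
double lambda = 0.1;
double[][] zSupport = { new[] { 1.0, 0.2 }, new[] { 0.9, 0.1 }, new[] { 0.1, 1.0 }, new[] { 0.2, 0.8 } };
double[] ySupport = { 1.0, 1.0, -1.0, -1.0 };

// Step 2: w* = (Z^T Z + λI)^(-1) Z^T y, a 2x2 system solved in closed form.
double a = lambda, b = 0.0, c = 0.0, d = lambda;  // accumulates Z^T Z + λI
double g0 = 0.0, g1 = 0.0;                        // accumulates Z^T y
for (int i = 0; i < zSupport.Length; i++)
{
    a += zSupport[i][0] * zSupport[i][0];
    b += zSupport[i][0] * zSupport[i][1];
    c += zSupport[i][1] * zSupport[i][0];
    d += zSupport[i][1] * zSupport[i][1];
    g0 += zSupport[i][0] * ySupport[i];
    g1 += zSupport[i][1] * ySupport[i];
}
double det = a * d - b * c;
double w0 = (d * g0 - b * g1) / det;   // first row of the 2x2 inverse times Z^T y
double w1 = (a * g1 - c * g0) / det;   // second row

// Step 3: classify a query example with the solved weights (the sign gives the class).
double[] zQuery = { 0.95, 0.15 };
double score = zQuery[0] * w0 + zQuery[1] * w1;
Console.WriteLine($"w* = ({w0:F3}, {w1:F3}), query score = {score:F3}");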
Reference: Lee, K., Maji, S., Ravichandran, A., & Soatto, S. (2019). Meta-Learning with Differentiable Convex Optimization. CVPR 2019.
Constructors
MetaOptNetAlgorithm(MetaOptNetOptions<T, TInput, TOutput>)
Initializes a new instance of the MetaOptNetAlgorithm class.
public MetaOptNetAlgorithm(MetaOptNetOptions<T, TInput, TOutput> options)
Parameters
options (MetaOptNetOptions<T, TInput, TOutput>): MetaOptNet configuration options containing the model and all hyperparameters.
Examples
// Create MetaOptNet with minimal configuration
var options = new MetaOptNetOptions<double, Tensor<double>, Tensor<double>>(myNeuralNetwork);
var metaOptNet = new MetaOptNetAlgorithm<double, Tensor<double>, Tensor<double>>(options);
// Create MetaOptNet with an SVM solver
var svmOptions = new MetaOptNetOptions<double, Tensor<double>, Tensor<double>>(myNeuralNetwork)
{
    SolverType = ConvexSolverType.SVM,
    RegularizationStrength = 1.0,
    NumClasses = 5
};
var svmMetaOptNet = new MetaOptNetAlgorithm<double, Tensor<double>, Tensor<double>>(svmOptions);
Exceptions
- ArgumentNullException
Thrown when options is null.
- InvalidOperationException
Thrown when required components are not set in options.
Properties
AlgorithmType
Gets the algorithm type identifier for this meta-learner.
public override MetaLearningAlgorithmType AlgorithmType { get; }
Property Value
- MetaLearningAlgorithmType
Returns MetaOptNet.
Remarks
This property identifies the algorithm as MetaOptNet, which uses differentiable convex optimization in the inner loop for meta-learning.
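A minimal usage check, assuming the enum member is named MetaLearningAlgorithmType.MetaOptNet as stated above:
// Confirm which meta-learning algorithm is in use at runtime
if (metaOptNet.AlgorithmType == MetaLearningAlgorithmType.MetaOptNet)
{
    Console.WriteLine("Inner loop uses differentiable convex optimization.");
}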
Methods
Adapt(IMetaLearningTask<T, TInput, TOutput>)
Adapts the meta-learned model to a new task using convex optimization.
public override IModel<TInput, TOutput, ModelMetadata<T>> Adapt(IMetaLearningTask<T, TInput, TOutput> task)
Parameters
task (IMetaLearningTask<T, TInput, TOutput>): The new task containing support set examples for adaptation.
Returns
- IModel<TInput, TOutput, ModelMetadata<T>>
A new model instance that has been adapted to the given task.
Remarks
MetaOptNet adaptation is extremely fast because it uses a closed-form solution:
- Extract embeddings from support examples
- Solve convex optimization for classifier weights
- Return model with encoder + classifier
For Beginners: Adaptation is instant! We just:
1. Transform support examples into feature space
2. Use a mathematical formula to find the best classifier
3. Done! No gradient steps needed.
Speed Comparison:
- MAML: ~10 gradient steps at test time
- MetaOptNet: a single closed-form solve (one matrix inversion), with no iterative gradient steps
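A hedged usage sketch: myTask is assumed to be an existing IMetaLearningTask<double, Tensor<double>, Tensor<double>> with a populated support set, and metaOptNet is the instance from the constructor example above.
// Adapt to a new task with a single closed-form solve (no gradient steps)
var adaptedModel = metaOptNet.Adapt(myTask);
// adaptedModel pairs the shared encoder with a freshly solved task-specific
// classifier and can be used to predict on query examples from the same task.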
Exceptions
- ArgumentNullException
Thrown when task is null.
MetaTrain(TaskBatch<T, TInput, TOutput>)
Performs one meta-training step using MetaOptNet's convex optimization approach.
public override T MetaTrain(TaskBatch<T, TInput, TOutput> taskBatch)
Parameters
taskBatch (TaskBatch<T, TInput, TOutput>): A batch of tasks to meta-train on, each containing support and query sets.
Returns
- T
The average meta-loss across all tasks in the batch (evaluated on query sets).
Remarks
MetaOptNet meta-training differs from MAML in the inner loop:
MetaOptNet Training Loop:
For each task:
1. Extract embeddings: Z_s = f_θ(X_s), Z_q = f_θ(X_q)
2. Solve for classifier: w* = ConvexSolver(Z_s, Y_s)
3. Classify query: logits = Z_q × w* / τ (τ = temperature)
4. Compute loss: L = CrossEntropy(softmax(logits), Y_q)
Update encoder θ using gradients through the solver
Key Difference from MAML:
- MAML: Gradients flow through the optimization trajectory
- MetaOptNet: Gradients flow through the implicit function at the optimum
The implicit gradient is obtained by differentiating the optimality condition ∂L/∂w = 0 at w*, which gives ∂w*/∂θ = -(H^-1) × (∂²L/∂w∂θ), where H is the Hessian of the inner objective with respect to w.
For Beginners: MetaOptNet learns a feature extractor that produces embeddings where simple classifiers work well. The convex solver finds the best simple classifier, and we update the feature extractor to make this classifier work even better on the query set.
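A hedged sketch of an outer meta-training loop; taskSampler and its SampleBatch method are hypothetical stand-ins for however your pipeline produces TaskBatch instances.
// Outer meta-training loop (illustrative only)
for (int step = 0; step < 1000; step++)
{
    TaskBatch<double, Tensor<double>, Tensor<double>> taskBatch = taskSampler.SampleBatch(); // hypothetical
    double metaLoss = metaOptNet.MetaTrain(taskBatch);
    if (step % 100 == 0)
    {
        Console.WriteLine($"Step {step}: meta-loss = {metaLoss:F4}");
    }
}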
Exceptions
- ArgumentException
Thrown when the task batch is null or empty.
- InvalidOperationException
Thrown when meta-gradient computation fails.