Enum MetaLearningAlgorithmType
Namespace: AiDotNet.MetaLearning
Assembly: AiDotNet.dll
Specifies the type of meta-learning algorithm used for few-shot learning and quick adaptation.
public enum MetaLearningAlgorithmType
Fields
ANIL = 13
Almost No Inner Loop (Raghu et al., 2020). A simplified version of MAML that adapts only the final classification layer.
Key Idea: The feature extractor is frozen during inner-loop adaptation; only the classifier head is updated. Much faster than full MAML.
Use When: You want faster adaptation with comparable performance to MAML.
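A minimal sketch of ANIL-style inner-loop adaptation (illustrative only, not AiDotNet's implementation): the feature extractor's weights stay fixed while a small linear head is updated by a few gradient steps on one support example.

```csharp
using System;

// Frozen feature extractor (body): a fixed 2x2 linear map in this toy sketch.
double[,] body = { { 0.5, -0.3 }, { 0.1, 0.8 } };
// Task-specific head: the only parameters updated in the inner loop.
double[] head = { 0.0, 0.0 };

double[] Features(double[] x) => new[]
{
    body[0, 0] * x[0] + body[0, 1] * x[1],
    body[1, 0] * x[0] + body[1, 1] * x[1],
};

// One support example and a few inner-loop steps on squared error.
double[] x = { 1.0, 2.0 };
double target = 1.0, lr = 0.1;
for (int step = 0; step < 5; step++)
{
    double[] f = Features(x);
    double pred = head[0] * f[0] + head[1] * f[1];
    double err = pred - target;
    for (int i = 0; i < head.Length; i++)
        head[i] -= lr * 2 * err * f[i];   // gradients flow into the head only; the body stays frozen
}
Console.WriteLine($"Adapted head: [{head[0]:F3}, {head[1]:F3}]");
```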
BOIL = 16
Body Only Inner Loop (Oh et al., 2021). The opposite of ANIL: it adapts only the feature extractor, keeping the head frozen.
Key Idea: The classifier head is frozen; only the feature extractor (body) is adapted during the inner loop. Provides different inductive biases than ANIL.
Use When: You believe task-specific features are more important than task-specific classifiers.
CNAP = 4
Conditional Neural Adaptive Processes (Requeima et al., 2019). Combines neural processes with task-specific adaptation using FiLM layers.
Key Idea: Generate task-specific parameters by conditioning on the support set, enabling fast adaptation without gradient-based fine-tuning at test time.
Use When: You need fast inference-time adaptation without gradient computation.
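A rough sketch of the FiLM-style conditioning CNAP relies on (the mapping from task statistics to FiLM parameters below is a hard-coded placeholder; in CNAP it is a learned adaptation network): support-set statistics produce per-channel scale and shift values that modulate fixed features, with no gradient steps at test time.

```csharp
using System;

// Task representation, e.g. the mean support-set embedding (values are illustrative).
double[] taskRep = { 0.4, -0.2 };

// Stand-in "adaptation network": maps the task representation to FiLM parameters.
double[] gamma = { 1.0 + 0.5 * taskRep[0], 1.0 + 0.5 * taskRep[1] };
double[] beta = { 0.1 * taskRep[0], 0.1 * taskRep[1] };

// Activations of the (frozen) feature extractor for one query example.
double[] features = { 0.8, -1.0 };
var modulated = new double[features.Length];
for (int c = 0; c < features.Length; c++)
    modulated[c] = gamma[c] * features[c] + beta[c];   // FiLM: gamma * h + beta

Console.WriteLine($"Modulated features: [{modulated[0]:F3}, {modulated[1]:F3}]");
```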
GNNMeta = 7
Graph Neural Network for Meta-Learning. Uses graph neural networks to model relationships between examples in few-shot learning.
Key Idea: Treat the support and query examples as nodes in a graph and use message passing to propagate information for classification.
Use When: You want to explicitly model relationships between all examples in a task.
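A toy sketch of one message-passing round over a task graph (nodes are example embeddings; the similarity-based edge weights here are placeholders, whereas a GNN meta-learner learns the edge and update functions):

```csharp
using System;
using System.Linq;

// Node features: three examples embedded in 2-D (support + query, values illustrative).
double[][] nodes = { new[] { 1.0, 0.1 }, new[] { 0.9, 0.2 }, new[] { 0.2, 1.0 } };

// Edge weights from a simple similarity kernel.
double Sim(double[] a, double[] b) => Math.Exp(-a.Zip(b, (x, y) => (x - y) * (x - y)).Sum());

var updated = new double[nodes.Length][];
for (int i = 0; i < nodes.Length; i++)
{
    var agg = new double[2];
    double norm = 0;
    for (int j = 0; j < nodes.Length; j++)
    {
        double w = Sim(nodes[i], nodes[j]);
        norm += w;
        for (int d = 0; d < 2; d++) agg[d] += w * nodes[j][d];
    }
    updated[i] = agg.Select(v => v / norm).ToArray();   // message passing: weighted neighbor average
}
Console.WriteLine(string.Join(" | ", updated.Select(n => $"[{n[0]:F2}, {n[1]:F2}]")));
```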
LEO = 14
Latent Embedding Optimization (Rusu et al., 2019). Performs optimization in a low-dimensional latent space for faster adaptation.
Key Idea: Learn a low-dimensional latent space for model parameters. Adaptation happens in this latent space and is then decoded back to the full parameter space.
Use When: You need to adapt very large models quickly with limited data.
MAML = 0
Model-Agnostic Meta-Learning (Finn et al., 2017). The foundational gradient-based meta-learning algorithm that learns an initialization that can be quickly fine-tuned to new tasks with a few gradient steps.
Key Idea: Find initial parameters that are sensitive to task-specific changes, so that small gradient updates produce large improvements in task performance.
Use When: You need a general-purpose meta-learning approach that works across different domains (classification, regression, reinforcement learning).
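A minimal scalar sketch of the MAML inner/outer loop (a toy illustration, not AiDotNet's implementation): each task's loss is (theta - a)^2, and the meta-gradient is taken through the inner update, which is where the second-order correction factor comes from.

```csharp
using System;

var rng = new Random(0);
double theta = 0.0;                      // meta-learned initialization
double innerLr = 0.1, metaLr = 0.01;

for (int iter = 0; iter < 1000; iter++)
{
    double a = rng.NextDouble() * 4 - 2;          // sample a task; its optimum is a
    double grad = 2 * (theta - a);                // inner-loop gradient of (theta - a)^2
    double thetaPrime = theta - innerLr * grad;   // one adaptation step
    // Meta-gradient differentiates through the inner step:
    // d/dtheta (thetaPrime - a)^2 = 2 * (thetaPrime - a) * (1 - 2 * innerLr)
    double metaGrad = 2 * (thetaPrime - a) * (1 - 2 * innerLr);
    theta -= metaLr * metaGrad;                   // outer (meta) update
}
Console.WriteLine($"Meta-learned initialization: {theta:F3}");
```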
MANN = 9
Memory-Augmented Neural Network (Santoro et al., 2016). Uses external memory for one-shot learning without explicit training phases.
Key Idea: Store examples in external memory and learn to retrieve similar examples for classification. No explicit support/query split at inference.
Use When: You need online learning capabilities where examples arrive sequentially.
MatchingNetworks = 10
Matching Networks for One Shot Learning (Vinyals et al., 2016). Uses attention over support examples for one-shot classification.
Key Idea: Embed examples in a shared space and classify by computing attention-weighted similarity to support examples.
Use When: You need simple, non-parametric few-shot classification with attention mechanisms.
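A small sketch of the attention-based classification rule (the identity embedding stands in for a learned encoder): the query's class scores are a softmax-weighted sum over support labels, weighted by similarity to each support example.

```csharp
using System;
using System.Linq;

double Cosine(double[] a, double[] b)
{
    double dot = 0, na = 0, nb = 0;
    for (int i = 0; i < a.Length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
    return dot / (Math.Sqrt(na) * Math.Sqrt(nb));
}

var supportX = new[] { new[] { 1.0, 0.1 }, new[] { 0.9, 0.0 }, new[] { 0.1, 1.0 } };
var supportY = new[] { 0, 0, 1 };                     // class index of each support example
double[] query = { 0.2, 0.9 };

// Attention weights a(query, x_i): softmax over similarities.
double[] sims = supportX.Select(x => Cosine(query, x)).ToArray();
double[] expSims = sims.Select(Math.Exp).ToArray();
double z = expSims.Sum();

var classScores = new double[2];
for (int i = 0; i < supportX.Length; i++)
    classScores[supportY[i]] += expSims[i] / z;       // y_hat = sum_i a(query, x_i) * y_i

Console.WriteLine($"Predicted class: {Array.IndexOf(classScores, classScores.Max())}");
```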
MetaOptNet = 15
Meta-Learning with Differentiable Convex Optimization (Lee et al., 2019). Uses a differentiable SVM or ridge regression for the final classification.
Key Idea: Replace the inner-loop gradient descent with a closed-form convex optimization (like ridge regression or SVM) that is differentiable.
Use When: You want theoretically grounded, stable optimization in the inner loop.
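A toy sketch of the closed-form inner loop with one-dimensional features (MetaOptNet itself uses multi-class SVM or ridge regression over full embeddings): the support-set classifier is computed in closed form rather than by gradient descent, which keeps it differentiable with respect to the meta-learned embedding.

```csharp
using System;
using System.Linq;

// Embedded support examples (1-D features for brevity) with +/-1 labels.
double[] feat = { 0.9, 1.1, -1.0, -0.8 };
double[] label = { 1, 1, -1, -1 };
double lambda = 0.1;                                   // ridge regularization strength

// Closed-form ridge regression: w = (X^T X + lambda I)^(-1) X^T y (scalar case here).
double xtx = feat.Sum(f => f * f);
double xty = feat.Zip(label, (f, y) => f * y).Sum();
double w = xty / (xtx + lambda);

double queryFeat = 0.7;
Console.WriteLine($"Query score: {w * queryFeat:F3}"); // positive => predicted class +1
```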
MetaSGD = 2
Meta-SGD with per-parameter learning rates (Li et al., 2017). Extends MAML by learning not just the initialization but also per-parameter learning rates.
Key Idea: Different parameters may need different learning rates for optimal adaptation. Meta-SGD learns these rates as part of the meta-learning process.
Use When: You suspect that uniform learning rates are suboptimal for your model architecture.
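A sketch of a single Meta-SGD adaptation step (the values are illustrative): both the initialization and the per-parameter learning rates are meta-learned, so each weight adapts at its own speed.

```csharp
using System;

double[] theta = { 0.5, -1.2, 0.3 };    // meta-learned initialization
double[] alpha = { 0.10, 0.01, 0.50 };  // meta-learned per-parameter learning rates
double[] grad = { 0.8, -0.4, 0.2 };     // task gradient computed on the support set

var adapted = new double[theta.Length];
for (int i = 0; i < theta.Length; i++)
    adapted[i] = theta[i] - alpha[i] * grad[i];  // theta' = theta - alpha (element-wise) * grad

Console.WriteLine(string.Join(", ", adapted));
```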
NTM = 8
Neural Turing Machine applied to meta-learning. Uses an external memory with read/write heads.
Key Idea: Use a differentiable external memory to store and retrieve task-relevant information across examples.
Use When: Tasks require storing and retrieving specific examples or patterns.
ProtoNets = 11
Prototypical Networks (Snell et al., 2017). Learns a metric space where classification is performed by computing distances to class prototypes.
Key Idea: Represent each class by the mean (prototype) of its support examples in embedding space. Classify by nearest prototype.
Use When: You want simple, effective metric-based few-shot learning with strong baselines.
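A minimal sketch of prototype-based classification (the identity embedding stands in for a learned feature extractor): each class prototype is the mean of its support embeddings, and the query takes the label of the nearest prototype.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var support = new Dictionary<string, double[][]>
{
    ["cat"] = new[] { new[] { 0.9, 0.1 }, new[] { 1.1, 0.0 } },
    ["dog"] = new[] { new[] { 0.0, 1.0 }, new[] { 0.2, 0.9 } },
};

// Prototype = mean of the class's support embeddings.
var prototypes = support.ToDictionary(
    kv => kv.Key,
    kv => kv.Value.Aggregate(new double[2], (acc, v) =>
    {
        for (int i = 0; i < acc.Length; i++) acc[i] += v[i] / kv.Value.Length;
        return acc;
    }));

double[] query = { 0.1, 0.95 };
double SqDist(double[] a, double[] b) => a.Zip(b, (x, y) => (x - y) * (x - y)).Sum();

// Classify by nearest prototype (squared Euclidean distance).
string predicted = prototypes.OrderBy(p => SqDist(query, p.Value)).First().Key;
Console.WriteLine($"Predicted class: {predicted}"); // dog
```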
RelationNetwork = 12
Relation Network for few-shot learning (Sung et al., 2018). Learns to compare query and support examples through a learned relation module.
Key Idea: Instead of using a fixed distance metric, learn a neural network that computes relation scores between example pairs.
Use When: You want to learn complex, non-linear similarity functions.
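A toy sketch of the learned comparison (the relation module's weights below are arbitrary placeholders; in practice it is a trained neural network): query and support embeddings are concatenated and scored by a small network instead of a fixed distance metric.

```csharp
using System;

double[] queryEmb = { 0.2, 0.9 };
double[] supportEmb = { 0.1, 1.0 };

// Concatenate the pair, as the relation module expects.
double[] pair = { queryEmb[0], queryEmb[1], supportEmb[0], supportEmb[1] };

// Stand-in relation module: one hidden layer (ReLU) + sigmoid output in [0, 1].
double[,] w1 = { { 0.5, 0.1, 0.4, -0.2 }, { -0.3, 0.7, 0.2, 0.6 } };
double[] w2 = { 0.8, 0.9 };

var hidden = new double[2];
for (int j = 0; j < 2; j++)
{
    double s = 0;
    for (int i = 0; i < 4; i++) s += w1[j, i] * pair[i];
    hidden[j] = Math.Max(0, s);                        // ReLU
}
double relation = 1.0 / (1.0 + Math.Exp(-(w2[0] * hidden[0] + w2[1] * hidden[1])));
Console.WriteLine($"Relation score: {relation:F3}");   // higher => more likely the same class
```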
Reptile = 1
Reptile meta-learning algorithm (Nichol et al., 2018). A simpler alternative to MAML that avoids computing second-order derivatives.
Key Idea: Repeatedly sample a task, train on it, and move the initialization towards the trained weights. Simpler gradient computation than MAML.
Use When: You want MAML-like performance with lower computational cost and simpler implementation.
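A minimal scalar sketch of Reptile on the same kind of toy quadratic tasks as the MAML example above (illustrative only): train on the sampled task with plain SGD, then nudge the initialization toward the adapted weights, with no second-order terms.

```csharp
using System;

var rng = new Random(0);
double theta = 0.0;                        // meta-learned initialization
double innerLr = 0.1, metaStep = 0.1;

for (int iter = 0; iter < 1000; iter++)
{
    double a = rng.NextDouble() * 4 - 2;   // sample a task whose loss is (phi - a)^2
    double phi = theta;
    for (int k = 0; k < 5; k++)            // a few ordinary SGD steps on the task
        phi -= innerLr * 2 * (phi - a);
    theta += metaStep * (phi - theta);     // move the initialization toward the adapted weights
}
Console.WriteLine($"Reptile initialization: {theta:F3}");
```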
SEAL = 5
Self-Explanatory Attention Learning. Combines attention mechanisms with meta-learning for interpretable few-shot learning.
Key Idea: Use attention to focus on relevant features and provide explanations for predictions in few-shot scenarios.
Use When: You need interpretable meta-learning with attention-based explanations.
TADAM = 6
Task-Dependent Adaptive Metric (Oreshkin et al., 2018). Combines metric-based learning with task-dependent feature scaling.
Key Idea: Learn to adapt the metric space based on the task at hand, combining prototypical networks with task-conditional scaling.
Use When: You want metric-based learning with task-specific adaptation.
iMAML = 3
Implicit MAML (Rajeswaran et al., 2019). Uses implicit differentiation to compute meta-gradients more efficiently.
Key Idea: Instead of differentiating through the optimization path, use the implicit function theorem to compute gradients. Enables more inner-loop steps.
Use When: You need many inner-loop adaptation steps and MAML's memory requirements become prohibitive.
Remarks
For Beginners: Meta-learning algorithms are designed to "learn how to learn." Instead of learning a single task, they learn to quickly adapt to new tasks with minimal data. This enum lists all supported meta-learning algorithms in the framework.
Algorithm Categories:
- Optimization-based: MAML, Reptile, Meta-SGD, iMAML, ANIL, BOIL, LEO
- Metric-based: ProtoNets, MatchingNetworks, RelationNetwork, TADAM
- Memory-based: MANN, NTM
- Hybrid/Advanced: CNAP, SEAL, GNNMeta, MetaOptNet
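A hedged usage sketch: only the MetaLearningAlgorithmType values and the AiDotNet.MetaLearning namespace below come from this page; the switch that maps each value to its category is purely illustrative and is not part of the documented API.

```csharp
using System;
using AiDotNet.MetaLearning;

// Illustrative only: branch on the enum to report which family an algorithm belongs to.
var algorithm = MetaLearningAlgorithmType.ProtoNets;

string category = algorithm switch
{
    MetaLearningAlgorithmType.MAML or MetaLearningAlgorithmType.Reptile or
    MetaLearningAlgorithmType.MetaSGD or MetaLearningAlgorithmType.iMAML or
    MetaLearningAlgorithmType.ANIL or MetaLearningAlgorithmType.BOIL or
    MetaLearningAlgorithmType.LEO => "Optimization-based",
    MetaLearningAlgorithmType.ProtoNets or MetaLearningAlgorithmType.MatchingNetworks or
    MetaLearningAlgorithmType.RelationNetwork or MetaLearningAlgorithmType.TADAM => "Metric-based",
    MetaLearningAlgorithmType.MANN or MetaLearningAlgorithmType.NTM => "Memory-based",
    _ => "Hybrid/Advanced",
};

Console.WriteLine($"{algorithm} is {category}.");
```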