Class BOILAlgorithm<T, TInput, TOutput>
- Namespace
- AiDotNet.MetaLearning.Algorithms
- Assembly
- AiDotNet.dll
Implementation of the Body Only Inner Loop (BOIL) meta-learning algorithm.
public class BOILAlgorithm<T, TInput, TOutput> : MetaLearnerBase<T, TInput, TOutput>, IMetaLearner<T, TInput, TOutput>
Type Parameters
T: The numeric type used for calculations (e.g., double, float).
TInput: The input data type (e.g., Matrix<T>, Tensor<T>).
TOutput: The output data type (e.g., Vector<T>, Tensor<T>).
- Inheritance
- MetaLearnerBase<T, TInput, TOutput> → BOILAlgorithm<T, TInput, TOutput>
- Implements
- IMetaLearner<T, TInput, TOutput>
Remarks
BOIL is the opposite of ANIL - it only adapts the feature extractor (body) during inner-loop adaptation while keeping the classification head frozen. This explores whether learning task-specific representations is more important than task-specific classifiers.
Key Insight: ANIL showed that adapting only the head works well, suggesting the body learns general features. BOIL tests the complementary hypothesis: what if we need to adapt HOW we see things (features) rather than HOW we decide (classifier)?
For Beginners: Think of a neural network as having two jobs:
- Feature Extraction (Body): "What do I see in this image?"
- Classification (Head): "Given what I see, which class is it?"
BOIL says: "The classifier is general enough - we just need to learn to SEE things differently for each task!" So it only updates how the network extracts features, while the decision-making layer stays fixed.
Algorithm (MAML-style with body-only adaptation):
For each task batch:
For each task:
1. Clone body parameters, keep head parameters frozen
2. For each adaptation step:
a. Forward pass through (adaptable body) + (frozen head)
b. Compute loss on support set
c. Compute gradients for BODY ONLY
d. Update body parameters
3. Evaluate adapted body on query set
Meta-update: update both the head and the body initialization using the accumulated query-set gradients (a toy sketch of the body-only inner loop follows).
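The following is a self-contained toy sketch, not the AiDotNet implementation: a scalar "body" weight and a frozen "head" weight on a 1-D regression task, with the outer-loop meta-update omitted. All variable names are illustrative only.
// Toy illustration of body-only adaptation (not AiDotNet's implementation).
// The model is head * (body * x); only 'body' receives gradient updates.
double metaBody = 0.5;            // meta-learned body initialization
double head = 2.0;                // meta-learned head, frozen during the inner loop
double innerLearningRate = 0.01;

double[] supportX = { 1.0, 2.0, 3.0 };
double[] supportY = { 4.0, 8.0, 12.0 };   // support set for one task (y = 4x)

double body = metaBody;           // clone the body; the head is not cloned or updated
for (int step = 0; step < 5; step++)
{
    double grad = 0.0;
    for (int i = 0; i < supportX.Length; i++)
    {
        double prediction = head * (body * supportX[i]);    // body -> features -> frozen head
        double error = prediction - supportY[i];
        grad += 2.0 * error * head * supportX[i];           // gradient w.r.t. the body only
    }
    body -= innerLearningRate * grad / supportX.Length;     // update the body only
}
Console.WriteLine($"Adapted body: {body}, frozen head: {head}");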
Reference: Oh, J., Yoo, H., Kim, C., & Yun, S. Y. (2021). BOIL: Towards Representation Change for Few-shot Learning.
Constructors
BOILAlgorithm(BOILOptions<T, TInput, TOutput>)
Initializes a new instance of the BOILAlgorithm class.
public BOILAlgorithm(BOILOptions<T, TInput, TOutput> options)
Parameters
options (BOILOptions<T, TInput, TOutput>): BOIL configuration options containing the model and all hyperparameters.
Examples
// Create BOIL with minimal configuration
var options = new BOILOptions<double, Tensor<double>, Tensor<double>>(myNeuralNetwork);
var boil = new BOILAlgorithm<double, Tensor<double>, Tensor<double>>(options);

// Create BOIL with custom configuration
var customOptions = new BOILOptions<double, Tensor<double>, Tensor<double>>(myNeuralNetwork)
{
    AdaptationSteps = 5,
    InnerLearningRate = 0.01,
    NumClasses = 5,
    BodyAdaptationFraction = 0.5
};
var customBoil = new BOILAlgorithm<double, Tensor<double>, Tensor<double>>(customOptions);
Exceptions
- ArgumentNullException
Thrown when options is null.
- InvalidOperationException
Thrown when required components are not set in options.
Properties
AlgorithmType
Gets the algorithm type identifier for this meta-learner.
public override MetaLearningAlgorithmType AlgorithmType { get; }
Property Value
- MetaLearningAlgorithmType
Returns BOIL.
Remarks
This property identifies the algorithm as BOIL (Body Only Inner Loop), which adapts only the feature extractor during inner-loop adaptation.
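For example, a training pipeline that handles several meta-learners can branch on this property (assuming the enum member is named MetaLearningAlgorithmType.BOIL, as the remarks above indicate):
// Hypothetical check; 'boil' is a BOILAlgorithm instance created as in the constructor example.
if (boil.AlgorithmType == MetaLearningAlgorithmType.BOIL)
{
    Console.WriteLine("Inner loop adapts the body only; the head stays frozen.");
}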
Methods
Adapt(IMetaLearningTask<T, TInput, TOutput>)
Adapts the meta-learned model to a new task by only updating the feature extractor (body).
public override IModel<TInput, TOutput, ModelMetadata<T>> Adapt(IMetaLearningTask<T, TInput, TOutput> task)
Parameters
task (IMetaLearningTask<T, TInput, TOutput>): The new task containing support set examples for adaptation.
Returns
- IModel<TInput, TOutput, ModelMetadata<T>>
A new model instance that has been adapted to the given task with an updated body.
Remarks
BOIL adaptation updates only the body (feature extractor) while keeping the head frozen:
- Clone the meta-learned body parameters (or reinitialize if configured)
- Keep head parameters frozen
- For each adaptation step:
  - Forward pass: body → features → head → output
  - Compute loss on support set
  - Update body parameters with gradient descent
- Return model with adapted body + frozen head
For Beginners: When you give BOIL a new task:
1. It keeps its "decision maker" (classifier head) exactly the same.
2. It only retrains the "feature extractor" (how it sees inputs).
3. This is like teaching someone who already knows what categories exist - just teaching them what to look for in the new domain.
Use Case: BOIL might work better when:
- Different tasks require seeing different patterns in the input
- The classification boundaries are similar across tasks
- You have a robust meta-learned classifier
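A minimal usage sketch (the newTask variable and how it is obtained are assumptions for illustration):
// Hypothetical usage: 'newTask' is assumed to be an existing
// IMetaLearningTask<double, Tensor<double>, Tensor<double>> carrying the support set.
var adaptedModel = boil.Adapt(newTask);

// 'adaptedModel' implements IModel<Tensor<double>, Tensor<double>, ModelMetadata<double>>;
// its body has been adapted to the new task while the head keeps its meta-learned weights,
// so it can now be used for inference on query examples from the same task.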
Exceptions
- ArgumentNullException
Thrown when task is null.
MetaTrain(TaskBatch<T, TInput, TOutput>)
Performs one meta-training step using BOIL's body-only adaptation approach.
public override T MetaTrain(TaskBatch<T, TInput, TOutput> taskBatch)
Parameters
taskBatch (TaskBatch<T, TInput, TOutput>): A batch of tasks to meta-train on, each containing support and query sets.
Returns
- T
The average meta-loss across all tasks in the batch (evaluated on query sets).
Remarks
BOIL meta-training adapts only the body parameters in the inner loop:
BOIL Inner Loop (per task):
1. Clone body parameters from meta-learned initialization
2. Keep head parameters FROZEN
3. For each adaptation step:
a. Forward: (adaptable body) → features → (frozen head) → output
b. Compute loss on support set
c. Compute gradients for body only
d. Update body parameters
4. Evaluate on query set with adapted body
BOIL Outer Loop:
1. Accumulate gradients from all tasks' query losses
2. Update body initialization to provide better starting point
3. Update head weights (frozen during inner loop, but updated in outer loop)
Key Difference from ANIL:
- ANIL: Adapts head only, freezes body
- BOIL: Adapts body only, freezes head
For Beginners: BOIL learns two things:
1. A good classifier (head) that works for all tasks (fixed during adaptation)
2. A good feature extractor (body) that can be quickly adjusted per task
During adaptation, only the "seeing" part changes - the "decision" stays fixed. This tests whether task-specific vision is more important than task-specific decisions.
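A sketch of an outer meta-training loop (the taskSampler helper and its SampleBatch method are assumptions for illustration; only MetaTrain is part of this API):
// Hypothetical meta-training loop over sampled task batches.
for (int iteration = 0; iteration < 1000; iteration++)
{
    TaskBatch<double, Tensor<double>, Tensor<double>> taskBatch = taskSampler.SampleBatch();

    // One BOIL step: body-only inner-loop adaptation per task, then an outer
    // update of both the body initialization and the head.
    double metaLoss = boil.MetaTrain(taskBatch);

    if (iteration % 100 == 0)
    {
        Console.WriteLine($"Iteration {iteration}: average query-set meta-loss = {metaLoss}");
    }
}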
Exceptions
- ArgumentException
Thrown when the task batch is null or empty.
- InvalidOperationException
Thrown when meta-gradient computation fails.