
Class OnceForAll<T>

Namespace
AiDotNet.AutoML.NAS
Assembly
AiDotNet.dll

Once-for-All (OFA) Networks: Train Once, Specialize for Anything. Trains a single large network that supports diverse architectural configurations, enabling instant specialization to different hardware platforms without retraining.

Reference: "Once for All: Train One Network and Specialize it for Efficient Deployment" (ICLR 2020)

public class OnceForAll<T> : NasAutoMLModelBase<T>, IAutoMLModel<T, Tensor<T>, Tensor<T>>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>

Type Parameters

T

The numeric type for calculations

Inheritance
AutoMLModelBase<T, Tensor<T>, Tensor<T>>
NasAutoMLModelBase<T>
OnceForAll<T>
Implements
IAutoMLModel<T, Tensor<T>, Tensor<T>>
IFullModel<T, Tensor<T>, Tensor<T>>
IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>
IModelSerializer
ICheckpointableModel
IParameterizable<T, Tensor<T>, Tensor<T>>
IFeatureAware
IFeatureImportance<T>
ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>
IGradientComputable<T, Tensor<T>, Tensor<T>>
IJitCompilable<T>

Constructors

OnceForAll(SearchSpaceBase<T>, List<int>?, List<double>?, List<int>?, List<int>?)

public OnceForAll(SearchSpaceBase<T> searchSpace, List<int>? elasticDepths = null, List<double>? elasticWidths = null, List<int>? elasticKernelSizes = null, List<int>? elasticExpansionRatios = null)

Parameters

searchSpace SearchSpaceBase<T>
elasticDepths List<int>
elasticWidths List<double>
elasticKernelSizes List<int>
elasticExpansionRatios List<int>
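
Examples

A minimal construction sketch. The CreateSearchSpace factory and the elastic value lists below are illustrative assumptions, not library defaults; additional AiDotNet using directives may be required for SearchSpaceBase<T>.

using System.Collections.Generic;
using AiDotNet.AutoML.NAS;

// Illustrative sketch: build an OFA supernet over an existing search space.
SearchSpaceBase<double> searchSpace = CreateSearchSpace(); // hypothetical factory
var ofa = new OnceForAll<double>(
    searchSpace,
    elasticDepths: new List<int> { 2, 3, 4 },
    elasticWidths: new List<double> { 0.65, 0.8, 1.0 },
    elasticKernelSizes: new List<int> { 3, 5, 7 },
    elasticExpansionRatios: new List<int> { 3, 4, 6 });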

Properties

NasNumNodes

Gets the number of nodes to search over.

protected override int NasNumNodes { get; }

Property Value

int

NasSearchSpace

Gets the NAS search space.

protected override SearchSpaceBase<T> NasSearchSpace { get; }

Property Value

SearchSpaceBase<T>

NumOps

Gets the numeric operations provider for T.

protected override INumericOperations<T> NumOps { get; }

Property Value

INumericOperations<T>

Methods

CreateInstanceForCopy()

Factory method for creating a new instance for deep copy. Derived classes must implement this to return a new instance of themselves. This ensures each copy has its own collections and lock object.

protected override AutoMLModelBase<T, Tensor<T>, Tensor<T>> CreateInstanceForCopy()

Returns

AutoMLModelBase<T, Tensor<T>, Tensor<T>>

A fresh instance of the derived class with default parameters

Remarks

When implementing this method, derived classes should create a fresh instance with default parameters, and should not attempt to preserve runtime or initialization state from the original instance. The deep copy logic will transfer relevant state (trial history, search space, etc.) after construction.
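
Examples

A minimal sketch of such an override in a hypothetical derived class; the class name and constructor are assumptions used only to illustrate the pattern.

// Hypothetical derived class illustrating the expected pattern. The base
// deep-copy logic transfers trial history, search space, etc. afterwards,
// so only a minimally valid fresh instance is constructed here.
public class MyOnceForAll<T> : OnceForAll<T>
{
    public MyOnceForAll(SearchSpaceBase<T> searchSpace) : base(searchSpace)
    {
    }

    protected override AutoMLModelBase<T, Tensor<T>, Tensor<T>> CreateInstanceForCopy()
    {
        return new MyOnceForAll<T>(NasSearchSpace);
    }
}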

GetSharedWeights(string, int, int)

Gets shared weights for a specific layer configuration.

public Matrix<T> GetSharedWeights(string layerKey, int rows, int cols)

Parameters

layerKey string
rows int
cols int

Returns

Matrix<T>
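
Examples

For example, assuming an OnceForAll<double> instance named ofa (see the constructor example above); the layer key and shape below are illustrative.

// Fetch the shared weight matrix for one layer configuration. Because OFA
// sub-networks share weights with the supernet, repeated requests for the
// same key are expected to reuse the same underlying parameters.
Matrix<double> weights = ofa.GetSharedWeights("block3", rows: 128, cols: 64);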

SampleSubNetwork()

Samples a sub-network configuration based on the current training stage.

public SubNetworkConfig SampleSubNetwork()

Returns

SubNetworkConfig
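
Examples

A minimal call, assuming an existing instance ofa; which elastic dimensions vary in the result depends on the stage set via SetTrainingStage(int).

// Draw one sub-network configuration from the supernet.
SubNetworkConfig config = ofa.SampleSubNetwork();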

SearchArchitecture(Tensor<T>, Tensor<T>, Tensor<T>, Tensor<T>, TimeSpan, CancellationToken)

Searches for the best sub-network architecture from the OFA supernet. Samples multiple sub-networks, evaluates each on validation data, and returns the best one.

protected override Architecture<T> SearchArchitecture(Tensor<T> inputs, Tensor<T> targets, Tensor<T> validationInputs, Tensor<T> validationTargets, TimeSpan timeLimit, CancellationToken cancellationToken)

Parameters

inputs Tensor<T>
targets Tensor<T>
validationInputs Tensor<T>
validationTargets Tensor<T>
timeLimit TimeSpan
cancellationToken CancellationToken

Returns

Architecture<T>

Remarks

OFA's key insight is that the supernet is pre-trained with progressive shrinking, so any sampled sub-network is already well-trained. However, different sub-networks have different accuracy/efficiency trade-offs, so we evaluate multiple candidates on validation data to find the best one within the given time limit.
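
Examples

The remarks above translate to a loop of roughly the following shape. This is a conceptual sketch only, not the actual implementation; EvaluateOnValidation is a hypothetical stand-in for the base class's validation scoring.

// Conceptual sketch (inside SearchArchitecture): sample candidates until the
// time limit or cancellation, keeping the best-scoring one on validation data.
SubNetworkConfig best = null;
double bestScore = double.NegativeInfinity;
var timer = System.Diagnostics.Stopwatch.StartNew();
while (timer.Elapsed < timeLimit && !cancellationToken.IsCancellationRequested)
{
    SubNetworkConfig candidate = SampleSubNetwork();
    double score = EvaluateOnValidation(candidate, validationInputs, validationTargets); // hypothetical
    if (score > bestScore)
    {
        bestScore = score;
        best = candidate;
    }
}
// `best` is then converted into the Architecture<T> that this method returns.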

SetTrainingStage(int)

Progressive shrinking: trains the OFA network in stages.
Stage 1: Train the largest kernel sizes.
Stage 2: Add elastic depth.
Stage 3: Add elastic expansion ratios.
Stage 4: Add elastic width.

public void SetTrainingStage(int stage)

Parameters

stage int
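
Examples

A progressive-shrinking schedule might look like the sketch below, assuming an existing instance ofa; the epoch count and the TrainSupernetEpoch helper are illustrative assumptions.

// Each stage enlarges the set of elastic dimensions that sampled
// sub-networks are allowed to vary.
for (int stage = 1; stage <= 4; stage++)
{
    ofa.SetTrainingStage(stage);
    for (int epoch = 0; epoch < 10; epoch++)
    {
        SubNetworkConfig config = ofa.SampleSubNetwork(); // respects the current stage
        TrainSupernetEpoch(ofa, config, trainInputs, trainTargets); // hypothetical training step
    }
}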

SpecializeForHardware(HardwareConstraints<T>, int, int, int, int)

Specializes the OFA network to meet specific hardware constraints. Uses evolutionary search to find the best sub-network configuration.

public SubNetworkConfig SpecializeForHardware(HardwareConstraints<T> constraints, int inputChannels, int spatialSize, int populationSize = 100, int generations = 50)

Parameters

constraints HardwareConstraints<T>
inputChannels int
spatialSize int
populationSize int
generations int

Returns

SubNetworkConfig
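
Examples

For example, assuming an existing instance ofa and that HardwareConstraints<T> has a parameterless constructor; how the latency, memory, or compute limits are expressed depends on that type and is not shown here.

// Evolutionary search for a sub-network that satisfies the target device's limits.
var constraints = new HardwareConstraints<double>();
// ... populate `constraints` with the device's latency / memory / compute limits.
SubNetworkConfig specialized = ofa.SpecializeForHardware(
    constraints,
    inputChannels: 3,     // e.g. RGB images
    spatialSize: 224,     // e.g. 224x224 input resolution
    populationSize: 100,  // default shown explicitly
    generations: 50);     // default shown explicitly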