Enum AutoMLSearchStrategy
Defines the search strategy used to explore AutoML candidate configurations.
public enum AutoMLSearchStrategy
Fields
BayesianOptimization = 1
Bayesian optimization (typically Gaussian-process or TPE style).
DARTS = 5
DARTS (Differentiable Architecture Search): gradient-based NAS. Jointly learns architecture and weights through continuous relaxation. Best for: fast search, moderate compute budgets.
Evolutionary = 2
Evolutionary / genetic search.
GDAS = 6
GDAS (Gumbel-softmax DARTS): improved differentiable NAS. Uses Gumbel-softmax sampling for better architecture discretization. Best for: when DARTS produces weak architectures due to the discretization gap.
MultiFidelity = 3
Multi-fidelity search (e.g., HyperBand/ASHA-style scheduling).
NeuralArchitectureSearch = 4
Neural Architecture Search with automatic algorithm selection. Chooses the best NAS algorithm based on task characteristics and constraints.
OnceForAll = 7
Once-for-All (OFA) networks: train once, specialize anywhere. Trains a supernet supporting elastic depth, width, and kernel sizes. Best for: multi-hardware deployment, mobile/edge devices.
RandomSearch = 0
Random search baseline.
Remarks
AutoML can use different strategies to decide which candidate model configurations to try next. The best choice depends on budget, search-space shape (continuous vs categorical), and how expensive each trial is.
For Beginners: This is how AutoML decides what to try next:
- RandomSearch tries random settings (simple and surprisingly strong).
- BayesianOptimization tries to learn which settings work best and focus on them.
- Evolutionary evolves good settings over time (useful for discrete/conditional knobs).
- MultiFidelity uses short runs first and only gives more budget to promising trials.
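The guidance above can be sketched in code. The enum declaration below mirrors the field values documented in this page; the `AutoMLSearchConfig` type and the `TrialsAreExpensive` helper are hypothetical, shown only to illustrate how a caller might pick a strategy from its budget:

```csharp
// Enum values as documented above.
public enum AutoMLSearchStrategy
{
    RandomSearch = 0,
    BayesianOptimization = 1,
    Evolutionary = 2,
    MultiFidelity = 3,
    NeuralArchitectureSearch = 4,
    DARTS = 5,
    GDAS = 6,
    OnceForAll = 7
}

// Hypothetical config type, not part of this API; used only for illustration.
public class AutoMLSearchConfig
{
    public AutoMLSearchStrategy Strategy { get; set; } = AutoMLSearchStrategy.RandomSearch;
    public int MaxTrials { get; set; } = 50;
}

public static class StrategyExample
{
    public static AutoMLSearchConfig Choose(bool trialsAreExpensive)
    {
        // Cheap trials and a mostly continuous search space favor Bayesian optimization.
        var config = new AutoMLSearchConfig
        {
            Strategy = AutoMLSearchStrategy.BayesianOptimization,
            MaxTrials = 100
        };

        // Expensive trials: prefer multi-fidelity so weak candidates are stopped early.
        if (trialsAreExpensive)
            config.Strategy = AutoMLSearchStrategy.MultiFidelity;

        return config;
    }
}
```

The same pattern extends to the NAS members: a discrete architecture search space would point at `NeuralArchitectureSearch` (or one of the specific algorithms such as `DARTS` or `OnceForAll`) instead.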