Class SupervisedAutoMLModelBase<T, TInput, TOutput>

Namespace: AiDotNet.AutoML
Assembly: AiDotNet.dll

Base class for AutoML implementations that train and score supervised models.

public abstract class SupervisedAutoMLModelBase<T, TInput, TOutput> : AutoMLModelBase<T, TInput, TOutput>, IAutoMLModel<T, TInput, TOutput>, IFullModel<T, TInput, TOutput>, IModel<TInput, TOutput, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, TInput, TOutput>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, TInput, TOutput>>, IGradientComputable<T, TInput, TOutput>, IJitCompilable<T>

Type Parameters

T

The numeric type used for calculations.

TInput

The input data type.

TOutput

The output data type.

Inheritance
AutoMLModelBase<T, TInput, TOutput>
SupervisedAutoMLModelBase<T, TInput, TOutput>
Implements
IAutoMLModel<T, TInput, TOutput>
IFullModel<T, TInput, TOutput>
IModel<TInput, TOutput, ModelMetadata<T>>
IModelSerializer
ICheckpointableModel
IParameterizable<T, TInput, TOutput>
IFeatureAware
IFeatureImportance<T>
ICloneable<IFullModel<T, TInput, TOutput>>
IGradientComputable<T, TInput, TOutput>
IJitCompilable<T>

Remarks

This base class provides common trial execution logic (create model, train, evaluate, record results) for AutoML strategies that operate on supervised learning datasets.

For Beginners: AutoML is an automatic "model picker + tuner". A supervised AutoML run:

  1. Tries a candidate model configuration (a "trial").
  2. Trains it on your training data.
  3. Scores it on validation data using a metric (like RMSE or Accuracy).
  4. Repeats until it finds a strong model or runs out of budget.
Concrete strategies (random search, Bayesian optimization, etc.) decide how to pick the next trial.
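
A rough sketch of how a concrete strategy might drive this loop is shown below. The SearchAsync entry point and the SampleParameters helper are hypothetical names, and the base class's abstract members (not listed on this page) are omitted, so treat this as an outline rather than a working strategy.

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using AiDotNet.AutoML;

// Illustrative random-search strategy. SearchAsync is a hypothetical
// entry point; overrides for the base class's abstract members are omitted.
public class RandomSearchAutoML<T, TInput, TOutput> : SupervisedAutoMLModelBase<T, TInput, TOutput>
{
    public async Task SearchAsync(
        TInput trainInputs, TOutput trainTargets,
        TInput validationInputs, TOutput validationTargets,
        int maxTrials, CancellationToken cancellationToken)
    {
        // Fall back to an industry-default metric if the user didn't pick one.
        EnsureDefaultOptimizationMetric(trainTargets);

        for (int i = 0; i < maxTrials; i++)
        {
            cancellationToken.ThrowIfCancellationRequested();

            // Step 1: pick a candidate model type uniformly at random.
            ModelType modelType = PickCandidateModelType();

            // Steps 2-3: create, train, and score one trial; the base
            // class records the result in the trial history.
            Dictionary<string, object> trialParameters = SampleParameters(modelType);
            await ExecuteTrialAsync(
                modelType, trialParameters,
                trainInputs, trainTargets,
                validationInputs, validationTargets,
                cancellationToken);
        }
    }

    // Hypothetical helper: sample hyperparameters for the chosen model type.
    private Dictionary<string, object> SampleParameters(ModelType modelType)
        => new Dictionary<string, object>();
}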

Constructors

SupervisedAutoMLModelBase(IModelEvaluator<T, TInput, TOutput>?, Random?)

Initializes a new supervised AutoML model with sensible default dependencies.

protected SupervisedAutoMLModelBase(IModelEvaluator<T, TInput, TOutput>? modelEvaluator = null, Random? random = null)

Parameters

modelEvaluator IModelEvaluator<T, TInput, TOutput>

Optional evaluator; if null, a default evaluator is used.

random Random

Optional RNG; if null, a secure RNG is used.
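
In a derived strategy, both arguments can simply be forwarded to the base constructor; passing null for both yields the defaults described above. A minimal sketch (MyStrategy is a hypothetical name):

public class MyStrategy<T, TInput, TOutput> : SupervisedAutoMLModelBase<T, TInput, TOutput>
{
    // Forward both dependencies; nulls select the default evaluator
    // and a secure RNG.
    public MyStrategy(IModelEvaluator<T, TInput, TOutput>? modelEvaluator = null, Random? random = null)
        : base(modelEvaluator, random)
    {
    }
}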

Properties

BudgetPreset

Gets or sets the compute budget preset used to choose sensible built-in defaults.

public AutoMLBudgetPreset BudgetPreset { get; set; }

Property Value

AutoMLBudgetPreset

Remarks

Built-in AutoML defaults (for example, candidate model sets) can vary by budget preset so CI runs remain fast while thorough runs consider a broader model catalog.

For Beginners: A budget preset is like choosing how much time/effort AutoML should spend searching: CI is very fast, Standard is balanced, and Thorough tries more options.
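
For example (a sketch; strategy stands in for an instance of any concrete class derived from this base, and the preset names follow the remarks above):

strategy.BudgetPreset = AutoMLBudgetPreset.Thorough; // consider a broader model catalog
// AutoMLBudgetPreset.CI keeps pipeline runs fast; Standard is the balanced middle ground.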

CrossValidationOptions

Gets or sets cross-validation options for trial evaluation.

public CrossValidationOptions? CrossValidationOptions { get; set; }

Property Value

CrossValidationOptions

Remarks

When set, each trial is evaluated using k-fold cross-validation instead of a single train/validation split. This provides more robust performance estimates but increases computation time by a factor of k (the number of folds).

For Beginners: Cross-validation trains the model k times, each time holding out a different slice of the data for scoring. The final score is the average across folds, giving a more reliable estimate than a single train/validation split.
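
For example (a sketch; the property name NumberOfFolds is an assumption, so check CrossValidationOptions for the actual member name):

strategy.CrossValidationOptions = new CrossValidationOptions
{
    NumberOfFolds = 5 // each trial now trains 5 times, so expect roughly 5x the cost
};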

EnsembleOptions

Gets or sets options controlling optional post-search ensembling.

public AutoMLEnsembleOptions EnsembleOptions { get; set; }

Property Value

AutoMLEnsembleOptions

Remarks

This is primarily used by the facade options overload in AiModelBuilder.

Random

Gets the RNG used for sampling candidate trials.

protected Random Random { get; }

Property Value

Random

Methods

EnsureDefaultOptimizationMetric(TOutput)

Applies an industry-default metric if the user didn't explicitly choose one.

protected void EnsureDefaultOptimizationMetric(TOutput targets)

Parameters

targets TOutput

The target values consulted when inferring an appropriate default metric.

ExecuteTrialAsync(ModelType, Dictionary<string, object>, TInput, TOutput, TInput, TOutput, CancellationToken)

Runs a single trial (create, train, evaluate, record history).

protected Task<double> ExecuteTrialAsync(ModelType modelType, Dictionary<string, object> trialParameters, TInput trainInputs, TOutput trainTargets, TInput validationInputs, TOutput validationTargets, CancellationToken cancellationToken)

Parameters

modelType ModelType
trialParameters Dictionary<string, object>
trainInputs TInput
trainTargets TOutput
validationInputs TInput
validationTargets TOutput
cancellationToken CancellationToken

Returns

Task<double>

Remarks

If CrossValidationOptions is set, the trial is evaluated using k-fold cross-validation for more robust performance estimates. Otherwise, a single train/validation split is used.
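
A minimal call site inside a derived strategy might look like the following sketch (the "MaxDepth" hyperparameter key and the ModelType.DecisionTree member are illustrative assumptions):

var trialParameters = new Dictionary<string, object> { ["MaxDepth"] = 6 };
double score = await ExecuteTrialAsync(
    ModelType.DecisionTree, trialParameters,
    trainInputs, trainTargets,
    validationInputs, validationTargets,
    cancellationToken);
// If CrossValidationOptions is set, this same call scores the trial
// across k folds and returns the averaged score.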

PickCandidateModelType()

Picks a model type uniformly at random from the configured candidate list.

protected ModelType PickCandidateModelType()

Returns

ModelType

TrySelectEnsembleAsBestAsync(TInput, TOutput, TInput, TOutput, DateTime, CancellationToken)

Attempts to build and select an ensemble as the final model based on EnsembleOptions.

protected Task TrySelectEnsembleAsBestAsync(TInput trainInputs, TOutput trainTargets, TInput validationInputs, TOutput validationTargets, DateTime deadlineUtc, CancellationToken cancellationToken)

Parameters

trainInputs TInput
trainTargets TOutput
validationInputs TInput
validationTargets TOutput
deadlineUtc DateTime
cancellationToken CancellationToken

Returns

Task
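
A derived strategy would typically call this once its trial loop finishes, as in this sketch (the five-minute deadline is illustrative):

DateTime deadlineUtc = DateTime.UtcNow.AddMinutes(5);
await TrySelectEnsembleAsBestAsync(
    trainInputs, trainTargets,
    validationInputs, validationTargets,
    deadlineUtc, cancellationToken);
// Whether and how an ensemble is built is governed by EnsembleOptions.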