Interface IAiModelBuilder<T, TInput, TOutput>
- Namespace: AiDotNet.Interfaces
- Assembly: AiDotNet.dll
Defines a builder pattern interface for creating and configuring predictive models.
public interface IAiModelBuilder<T, TInput, TOutput>
Type Parameters
T: The numeric data type used for calculations (e.g., float, double).
TInput: The input data type the model consumes (e.g., Matrix<T> for tabular features).
TOutput: The output data type the model produces (e.g., Vector<T> for predictions).
Remarks
This interface provides a fluent API for setting up all components of a machine learning model.
For Beginners: Think of this as a step-by-step recipe builder for creating AI models. Just like building a custom sandwich where you choose the bread, fillings, and condiments, this builder lets you choose different components for your AI model.
The builder pattern makes it easy to:
- Configure your model piece by piece
- Change only the parts you want while keeping default settings for the rest
- Create different variations of models without writing repetitive code
Methods
BuildAsync()
Asynchronously builds the model using the configured training path (a data loader or a meta-learner).
Task<AiModelResult<T, TInput, TOutput>> BuildAsync()
Returns
- Task<AiModelResult<T, TInput, TOutput>>
A task that represents the asynchronous operation, containing the trained model.
Remarks
This method is used when you've configured a meta-learner using ConfigureMetaLearning(), or when you've configured a data loader using ConfigureDataLoader().
When a data loader is configured:
- The loader's LoadAsync() method is called to load the data
- Features and labels are extracted from the loader
- Training proceeds using the loaded data
When meta-learning is configured, the builder performs meta-training across many tasks to create a model that can rapidly adapt to new tasks with just a few examples.
For Beginners: Use this method when you've configured either:
- A data loader (via ConfigureDataLoader): the loader provides the training data
- Meta-learning (via ConfigureMetaLearning): trains your model to learn new tasks quickly
Example with data loader:
var result = await builder
.ConfigureDataLoader(DataLoaders.FromCsv("data.csv", labelColumn: "target"))
.ConfigureModel(model)
.BuildAsync();
Exceptions
- InvalidOperationException
Thrown if no valid training path was configured.
ConfigureABTesting(ABTestingConfig?)
Configures A/B testing to compare multiple model versions by splitting traffic.
IAiModelBuilder<T, TInput, TOutput> ConfigureABTesting(ABTestingConfig? config = null)
Parameters
config (ABTestingConfig): The A/B testing configuration (optional; disables A/B testing if null).
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: A/B testing lets you safely test a new model version on a small percentage of users before fully deploying it. For example, you might send 10% of traffic to a new model and 90% to the current model, then compare performance metrics to decide which is better.
This is useful for:
- Testing new models in production safely
- Gradually rolling out changes
- Making data-driven decisions about which model to use
Example:
// 90% on v1.0 (stable), 10% on v2.0 (experimental)
var abConfig = new ABTestingConfig
{
Enabled = true,
TrafficSplit = new Dictionary<string, double> { { "1.0", 0.9 }, { "2.0", 0.1 } },
ControlVersion = "1.0"
};
var result = await builder
.ConfigureModel(model)
.ConfigureABTesting(abConfig)
.BuildAsync();
ConfigureAdversarialRobustness(AdversarialRobustnessConfiguration<T, TInput, TOutput>?)
Configures adversarial robustness and AI safety features for the model.
IAiModelBuilder<T, TInput, TOutput> ConfigureAdversarialRobustness(AdversarialRobustnessConfiguration<T, TInput, TOutput>? configuration = null)
Parameters
configuration (AdversarialRobustnessConfiguration<T, TInput, TOutput>): The adversarial robustness configuration. When null, uses industry-standard defaults.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
This unified configuration provides comprehensive control over all aspects of adversarial robustness and AI safety:
- Safety Filtering: Input validation and output filtering for harmful content
- Adversarial Attacks: FGSM, PGD, CW, AutoAttack for robustness testing
- Adversarial Defenses: Adversarial training, input preprocessing, ensemble methods
- Certified Robustness: Randomized smoothing, IBP, CROWN for provable guarantees
- Content Moderation: Prompt injection detection, PII filtering for LLMs
- Red Teaming: Automated adversarial prompt generation for evaluation
For Beginners: This is your one-stop configuration for making your model safe and robust. When called with no parameters (null), industry-standard defaults are applied automatically.
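Example (a minimal sketch relying on the documented null-default behavior; builder and model are defined as in the other examples):
// Apply industry-standard robustness and safety defaults
var result = await builder
    .ConfigureModel(model)
    .ConfigureAdversarialRobustness()
    .BuildAsync();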
ConfigureAgentAssistance(AgentConfiguration<T>)
Configures AI agent assistance during model building and inference.
IAiModelBuilder<T, TInput, TOutput> ConfigureAgentAssistance(AgentConfiguration<T> configuration)
Parameters
configuration (AgentConfiguration<T>): The agent configuration containing API keys, provider settings, and options.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: Agent assistance adds AI-powered help during model creation. The agent can analyze your data, suggest which model type to use, recommend hyperparameters, and provide insights about feature importance.
The configuration is stored securely and will be reused during inference if you call AskAsync() on the trained model.
Example usage:
var agentConfig = new AgentConfiguration<double>
{
ApiKey = "sk-...",
Provider = LLMProvider.OpenAI,
IsEnabled = true
};
var builder = new AiModelBuilder<double, Matrix<double>, Vector<double>>()
.ConfigureAgentAssistance(agentConfig);
ConfigureAugmentation(AugmentationConfig?)
Configures data augmentation for training and inference.
IAiModelBuilder<T, TInput, TOutput> ConfigureAugmentation(AugmentationConfig? config = null)
Parameters
config (AugmentationConfig): Augmentation configuration. If null, uses industry-standard defaults with automatic data-type detection.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
Data augmentation creates variations of training data on-the-fly to help models generalize better. This configuration covers both training-time augmentation and Test-Time Augmentation (TTA) for improved inference accuracy.
For Beginners: Augmentation is like showing the model many variations of the same data. For images, this might include rotations, flips, and color changes. The model learns to recognize objects regardless of these variations.
Key features:
- Automatic data-type detection (image, tabular, audio, text, video)
- Industry-standard defaults that work well out-of-the-box
- Test-Time Augmentation (TTA) enabled by default for better predictions
Example - Simple usage with defaults:
var result = builder
.ConfigureModel(myModel)
.ConfigureAugmentation() // Uses auto-detected defaults
.Build(X, y);
Example - Custom configuration:
var result = builder
.ConfigureModel(myModel)
.ConfigureAugmentation(new AugmentationConfig
{
EnableTTA = true,
TTANumAugmentations = 8,
ImageSettings = new ImageAugmentationSettings
{
EnableFlips = true,
EnableRotation = true,
RotationRange = 20.0
}
})
.Build(images, labels);
ConfigureAutoML(AutoMLOptions<T, TInput, TOutput>?)
Configures AutoML using facade-style options (recommended for most users).
IAiModelBuilder<T, TInput, TOutput> ConfigureAutoML(AutoMLOptions<T, TInput, TOutput>? options = null)
Parameters
options (AutoMLOptions<T, TInput, TOutput>): AutoML options (budget, strategy, and optional overrides). If null, defaults are used.
Returns
- IAiModelBuilder<T, TInput, TOutput>
This builder instance for method chaining.
Remarks
This overload follows the AiDotNet facade pattern: you provide a small options object, and the library chooses an appropriate built-in AutoML implementation and industry-standard defaults for you.
For Beginners: Use this overload if you want AutoML without having to manually instantiate an AutoML implementation. Pick a budget preset (Fast/Standard/Thorough) and let AiDotNet handle the rest.
ConfigureAutoML(IAutoMLModel<T, TInput, TOutput>)
Configures an AutoML model for automatic machine learning optimization.
IAiModelBuilder<T, TInput, TOutput> ConfigureAutoML(IAutoMLModel<T, TInput, TOutput> autoMLModel)
Parameters
autoMLModel (IAutoMLModel<T, TInput, TOutput>): The AutoML model instance to use for hyperparameter search and model selection.
Returns
- IAiModelBuilder<T, TInput, TOutput>
This builder instance for method chaining.
Remarks
For Beginners: AutoML (Automated Machine Learning) automatically searches for the best model and hyperparameters for your problem. Instead of manually trying different models and settings, AutoML does this for you.
When you configure an AutoML model:
- The Build() method will run the AutoML search process
- AutoML will try different models and hyperparameters
- The best model found will be returned as your trained model
- You can configure search time limits, candidate models, and optimization metrics
Example:
// Advanced usage: plug in your own AutoML implementation.
// Most users should prefer the ConfigureAutoML(AutoMLOptions<...>) overload instead.
var autoML = new RandomSearchAutoML<double, Matrix<double>, Vector<double>>();
autoML.SetTimeLimit(TimeSpan.FromMinutes(30));
autoML.SetCandidateModels(new List<ModelType> { ModelType.RandomForest, ModelType.GradientBoosting });
var result = new AiModelBuilder<double, Matrix<double>, Vector<double>>()
.ConfigureAutoML(autoML)
.Build(trainingData, trainingLabels);
ConfigureBenchmarking(BenchmarkingOptions?)
Configures benchmarking to run standardized benchmark suites and attach a structured report to the built model.
IAiModelBuilder<T, TInput, TOutput> ConfigureBenchmarking(BenchmarkingOptions? options = null)
Parameters
options (BenchmarkingOptions): Benchmarking options. If null, sensible defaults are used.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
This follows the AiDotNet facade pattern: users select benchmark suites using enums and receive a structured report, without wiring benchmark implementations manually.
For Beginners: This is like running a standardized test after training/building your model.
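Example (a minimal sketch using the documented defaults; the structured report is attached to the built model):
// Run the default benchmark suites and attach the report to the result
var result = await builder
    .ConfigureModel(model)
    .ConfigureBenchmarking()
    .BuildAsync();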
ConfigureBiasDetector(IBiasDetector<T>)
Configures the bias detector component for ethical AI evaluation.
IAiModelBuilder<T, TInput, TOutput> ConfigureBiasDetector(IBiasDetector<T> detector)
Parameters
detector (IBiasDetector<T>): The bias detector implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
A bias detector analyzes model predictions to identify potential bias across different demographic groups defined by sensitive features.
For Beginners: Bias detection helps ensure your model treats different groups fairly. For example, if your model predicts loan approvals, bias detection checks whether it unfairly favors or discriminates against certain demographic groups (like age, gender, or race). This is crucial for ethical AI and regulatory compliance.
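Example (a minimal sketch; 'myBiasDetector' is a hypothetical variable standing for any IBiasDetector<double> implementation you construct elsewhere):
// 'myBiasDetector' is any IBiasDetector<double> implementation (hypothetical variable)
var result = new AiModelBuilder<double, Matrix<double>, Vector<double>>()
    .ConfigureModel(model)
    .ConfigureBiasDetector(myBiasDetector)
    .Build(X, y);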
ConfigureCaching(CacheConfig?)
Configures model caching to avoid reloading models from disk repeatedly.
IAiModelBuilder<T, TInput, TOutput> ConfigureCaching(CacheConfig? config = null)
Parameters
config (CacheConfig): The caching configuration (optional; uses default cache settings if null).
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: Caching keeps frequently-used models in memory so they load instantly. Like keeping your favorite apps open on your phone instead of closing and reopening them.
Benefits:
- Much faster inference (no model loading time)
- Better throughput for multiple requests
- Configurable cache size and eviction policies
Example:
// Enable caching with default settings (10 models, LRU eviction)
var result = await builder
.ConfigureModel(model)
.ConfigureCaching()
.BuildAsync();
ConfigureCheckpointManager(ICheckpointManager<T, TInput, TOutput>)
Configures checkpoint management for saving and restoring training state.
IAiModelBuilder<T, TInput, TOutput> ConfigureCheckpointManager(ICheckpointManager<T, TInput, TOutput> manager)
Parameters
manager (ICheckpointManager<T, TInput, TOutput>): The checkpoint manager implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: Checkpoints are like save points in a video game. They let you pause training and resume later, or go back to an earlier state if something goes wrong.
Key features include:
- Saving model state periodically during training
- Restoring from the latest or best checkpoint
- Automatic cleanup of old checkpoints
- Tracking metrics at each checkpoint
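Example (a minimal sketch; 'checkpointManager' is a hypothetical variable standing for any ICheckpointManager<double, Matrix<double>, Vector<double>> implementation):
// 'checkpointManager' is a hypothetical ICheckpointManager instance created elsewhere
var result = builder
    .ConfigureModel(model)
    .ConfigureCheckpointManager(checkpointManager)
    .Build(X, y);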
ConfigureCompression(CompressionConfig?)
Configures model compression for reducing model size during serialization.
IAiModelBuilder<T, TInput, TOutput> ConfigureCompression(CompressionConfig? config = null)
Parameters
config (CompressionConfig): The compression configuration (optional; uses automatic mode if null).
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: Compression makes your model smaller for storage and faster to load. When you save (serialize) your model, compression automatically reduces its size. When you load (deserialize) it, decompression happens transparently.
Benefits:
- 50-90% smaller model files
- Faster model loading and deployment
- Lower storage and bandwidth costs
- Enables deployment on resource-constrained devices
Compression is applied during serialization (saving) and reversed during deserialization (loading). You never need to handle compression manually - it happens behind the scenes.
Example:
// Use automatic compression (recommended for most cases)
var result = await builder
.ConfigureModel(model)
.ConfigureCompression() // Uses industry-standard defaults
.BuildAsync();
// Model is now configured to compress on save
builder.SaveModel(result, "model.bin"); // Compressed automatically
var loaded = builder.LoadModel("model.bin"); // Decompressed automatically
// Or customize compression settings
var result = await builder
.ConfigureCompression(new CompressionConfig
{
Mode = ModelCompressionMode.Full,
Type = CompressionType.HybridHuffmanClustering,
NumClusters = 256
})
.BuildAsync();
ConfigureCrossValidation(ICrossValidator<T, TInput, TOutput>)
Configures the cross-validation strategy for automatic model evaluation during training.
IAiModelBuilder<T, TInput, TOutput> ConfigureCrossValidation(ICrossValidator<T, TInput, TOutput> crossValidator)
Parameters
crossValidator (ICrossValidator<T, TInput, TOutput>): The cross-validation strategy to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
A cross-validator determines how data should be split into folds for cross-validation. Different strategies (K-Fold, Leave-One-Out, Stratified, Time Series, etc.) are appropriate for different types of data and problems.
For Beginners: Cross-validation tests how well your model will perform on new data by training and testing it multiple times on different subsets of your training data. If you configure both a cross-validator and model evaluator (via ConfigureModelEvaluator), cross-validation will automatically run during Build() and the results will be included in your trained model.
Common strategies:
- StandardCrossValidator (K-Fold): General purpose, splits data into K equal parts
- LeaveOneOutCrossValidator: For small datasets, uses each sample once as test
- StratifiedKFoldCrossValidator: For classification, maintains class proportions
- TimeSeriesCrossValidator: For sequential data, respects temporal ordering
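Example (a sketch that assumes StandardCrossValidator exposes a parameterless constructor; pair it with ConfigureModelEvaluator so cross-validation runs automatically during Build()):
// Assumes StandardCrossValidator can be created with a parameterless constructor
var result = builder
    .ConfigureModel(model)
    .ConfigureModelEvaluator(evaluator) // any IModelEvaluator implementation
    .ConfigureCrossValidation(new StandardCrossValidator<double, Matrix<double>, Vector<double>>())
    .Build(X, y);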
ConfigureCurriculumLearning(CurriculumLearningOptions<T, TInput, TOutput>?)
Configures curriculum learning for training models with progressively harder samples.
IAiModelBuilder<T, TInput, TOutput> ConfigureCurriculumLearning(CurriculumLearningOptions<T, TInput, TOutput>? options = null)
Parameters
options (CurriculumLearningOptions<T, TInput, TOutput>): Curriculum learning options (schedule type, phases, difficulty estimation). If null, sensible defaults are used (Linear schedule, 5 phases, loss-based difficulty).
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
Curriculum learning trains models by presenting samples in order of difficulty, starting with easy examples and gradually introducing harder ones. This approach often leads to faster convergence and better final performance compared to random training order.
For Beginners: Think of this like how humans learn - we start with basic concepts before tackling advanced material. Your model will:
- First learn from easy samples to build a foundation
- Gradually be exposed to harder samples as it improves
- Often converge faster and achieve better final accuracy
Example Usage:
// Basic usage with default settings (Linear schedule, 5 phases)
var result = builder
.ConfigureModel(model)
.ConfigureCurriculumLearning()
.Build(features, labels);
// Self-paced learning where model determines its own pace
var result = builder
.ConfigureModel(model)
.ConfigureCurriculumLearning(new CurriculumLearningOptions<double, TInput, TOutput>
{
ScheduleType = CurriculumScheduleType.SelfPaced,
NumPhases = 10,
SelfPaced = new SelfPacedOptions { InitialLambda = 0.1 }
})
.Build(features, labels);
// Competence-based learning that advances when mastery is achieved
var result = builder
.ConfigureModel(model)
.ConfigureCurriculumLearning(new CurriculumLearningOptions<double, TInput, TOutput>
{
ScheduleType = CurriculumScheduleType.CompetenceBased,
CompetenceBased = new CompetenceBasedOptions { CompetenceThreshold = 0.85 }
})
.Build(features, labels);
ConfigureDataLoader(IDataLoader<T>)
Configures the data loader for providing training data.
IAiModelBuilder<T, TInput, TOutput> ConfigureDataLoader(IDataLoader<T> dataLoader)
Parameters
dataLoader (IDataLoader<T>): The data loader that provides training data.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
A data loader handles loading data from various sources (files, databases, memory, URLs) and provides it in a format suitable for model training.
For Beginners: Instead of passing raw arrays or matrices directly to BuildAsync, you can configure a data loader that handles loading your data for you. This is useful when:
- Your data comes from a file (CSV, JSON, etc.)
- Your data needs to be downloaded from the internet
- You want automatic batching and shuffling
- You want train/validation/test splitting handled for you
Example:
// Load data from CSV
var loader = DataLoaders.FromCsv("housing.csv", labelColumn: "price");
var result = await builder
.ConfigureDataLoader(loader)
.ConfigureModel(model)
.BuildAsync(); // Uses data from the loader
You can also use simple in-memory loaders for arrays:
var loader = DataLoaders.FromArrays(features, labels);
ConfigureDataPreprocessor(IDataPreprocessor<T, TInput, TOutput>)
Configures the data preprocessing component for the model.
IAiModelBuilder<T, TInput, TOutput> ConfigureDataPreprocessor(IDataPreprocessor<T, TInput, TOutput> dataPreprocessor)
Parameters
dataPreprocessor (IDataPreprocessor<T, TInput, TOutput>): The data preprocessor implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
A data preprocessor cleans and transforms raw data before it's used for training.
For Beginners: Data preprocessing is like preparing ingredients before cooking. It involves:
- Cleaning data (removing or fixing errors)
- Transforming data (converting text to numbers, etc.)
- Organizing data (putting it in the right format)
Good preprocessing can dramatically improve your model's performance by ensuring it learns from high-quality data.
ConfigureDataVersionControl(IDataVersionControl<T>)
Configures data version control for tracking dataset changes.
IAiModelBuilder<T, TInput, TOutput> ConfigureDataVersionControl(IDataVersionControl<T> dataVersionControl)
Parameters
dataVersionControl (IDataVersionControl<T>): The data version control implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: Data version control is like Git, but for your datasets. It tracks what data was used for training each model and lets you reproduce experiments.
Key features include:
- Creating and tracking dataset versions
- Computing dataset hashes for integrity verification
- Tracking data lineage and transformations
- Linking datasets to training runs
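Example (a minimal sketch; 'dvc' is a hypothetical variable standing for any IDataVersionControl<double> implementation):
// 'dvc' is any IDataVersionControl<double> implementation (hypothetical variable)
var result = builder
    .ConfigureModel(model)
    .ConfigureDataVersionControl(dvc)
    .Build(X, y);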
ConfigureDistributedTraining(ICommunicationBackend<T>?, DistributedStrategy, IShardingConfiguration<T>?)
Configures distributed training across multiple GPUs or machines.
IAiModelBuilder<T, TInput, TOutput> ConfigureDistributedTraining(ICommunicationBackend<T>? backend = null, DistributedStrategy strategy = DistributedStrategy.DDP, IShardingConfiguration<T>? configuration = null)
Parameters
backend (ICommunicationBackend<T>): Communication backend to use. If null, uses InMemoryCommunicationBackend.
strategy (DistributedStrategy): Distributed training strategy. Defaults to DDP.
configuration (IShardingConfiguration<T>): Sharding configuration. If null, created from the backend with defaults.
Returns
- IAiModelBuilder<T, TInput, TOutput>
This builder instance for method chaining.
Remarks
When distributed training is configured, the builder automatically wraps the model and optimizer with their distributed counterparts based on the chosen strategy. This enables:
- Training models too large to fit on a single GPU
- Faster training by distributing work across multiple processes
- Automatic gradient synchronization and parameter sharding
Important: The strategy parameter controls BOTH the model and optimizer as a matched pair. You cannot mix and match strategies between model and optimizer because they must be compatible:
- DDP → Uses DDPModel + DDPOptimizer (replicated parameters, AllReduce gradients)
- FSDP → Uses FSDPModel + FSDPOptimizer (fully sharded parameters)
- ZeRO1/2/3 → Uses matching ZeRO models + optimizers (progressive sharding)
- PipelineParallel → Uses PipelineParallelModel + PipelineParallelOptimizer
- TensorParallel → Uses TensorParallelModel + TensorParallelOptimizer
- Hybrid → Uses HybridShardedModel + HybridShardedOptimizer (3D parallelism)
This design follows industry standards (PyTorch DDP/FSDP, DeepSpeed ZeRO, Megatron-LM) where the distributed training strategy is a cohesive unit that applies to both model and optimizer. Mixing strategies would cause incompatibilities - for example, a DDP model (replicated parameters) cannot work with an FSDP optimizer (expects sharded parameters).
For Beginners: Call this method to enable distributed training across multiple GPUs. You can use it with no parameters for sensible defaults, or customize each aspect. The strategy you choose automatically configures both the model and optimizer to work together.
Beginner Usage (no parameters):
var result = builder
.ConfigureModel(myModel)
.ConfigureDistributedTraining() // InMemory backend, DDP strategy
.Build(xTrain, yTrain);
Intermediate Usage (specify backend):
var backend = new MPICommunicationBackend<double>();
var result = builder
.ConfigureModel(myModel)
.ConfigureDistributedTraining(backend) // MPI backend, DDP strategy
.Build(xTrain, yTrain);
Advanced Usage (specify strategy):
var result = builder
.ConfigureModel(myModel)
.ConfigureDistributedTraining(
backend: new NCCLCommunicationBackend<double>(),
strategy: DistributedStrategy.FSDP) // Use FSDP instead of DDP
.Build(xTrain, yTrain);
Expert Usage (full control):
var backend = new NCCLCommunicationBackend<double>();
var config = new ShardingConfiguration<double>(backend)
{
AutoSyncGradients = true,
MinimumParameterGroupSize = 2048,
EnableGradientCompression = true
};
var result = builder
.ConfigureDistributedTraining(
backend: backend,
strategy: DistributedStrategy.ZeRO2,
configuration: config) // Full control over all options
.Build(xTrain, yTrain);
ConfigureExperimentTracker(IExperimentTracker<T>)
Configures experiment tracking for organizing and logging ML experiments.
IAiModelBuilder<T, TInput, TOutput> ConfigureExperimentTracker(IExperimentTracker<T> tracker)
Parameters
tracker (IExperimentTracker<T>): The experiment tracker implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: Experiment tracking is like a lab notebook for your machine learning work. It helps you keep track of what you've tried, what worked, and what didn't.
Key features include:
- Creating experiments to group related training runs
- Logging hyperparameters, metrics, and artifacts
- Comparing different runs to find the best approach
- Reproducing previous experiments
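Example (a minimal sketch; 'tracker' is a hypothetical variable standing for any IExperimentTracker<double> implementation):
// 'tracker' is any IExperimentTracker<double> implementation (hypothetical variable)
var result = builder
    .ConfigureModel(model)
    .ConfigureExperimentTracker(tracker)
    .Build(X, y);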
ConfigureExport(ExportConfig?)
Configures export settings for deploying the model to different platforms.
IAiModelBuilder<T, TInput, TOutput> ConfigureExport(ExportConfig? config = null)
Parameters
config (ExportConfig): The export configuration (optional; uses CPU/ONNX if null).
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: Export settings determine how your trained model will be saved for deployment. Different platforms need different formats:
- ONNX: Universal format, works everywhere (recommended)
- TensorRT: NVIDIA GPUs, maximum performance
- CoreML: Apple devices (iPhone, iPad, Mac)
- TFLite: Android devices and edge hardware
- WASM: Run models in web browsers
Configure this BEFORE training if you know your target platform, so the model can be optimized accordingly. After training, use the Export methods on AiModelResult.
Example:
// Configure for TensorRT deployment with FP16 quantization
var exportConfig = new ExportConfig
{
TargetPlatform = TargetPlatform.TensorRT,
Quantization = QuantizationMode.Float16
};
var result = await builder
.ConfigureModel(model)
.ConfigureExport(exportConfig)
.BuildAsync();
// After training, export the model
result.ExportToTensorRT("model.trt");
ConfigureFairnessEvaluator(IFairnessEvaluator<T>)
Configures the fairness evaluator component for ethical AI evaluation.
IAiModelBuilder<T, TInput, TOutput> ConfigureFairnessEvaluator(IFairnessEvaluator<T> evaluator)
Parameters
evaluator (IFairnessEvaluator<T>): The fairness evaluator implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
A fairness evaluator computes multiple fairness metrics to assess how equitably a model performs across different demographic groups.
For Beginners: Fairness evaluation goes beyond basic accuracy to measure whether your model is fair to all groups. It calculates metrics like demographic parity (do all groups get positive outcomes at similar rates?) and equal opportunity (do qualified individuals from all groups have equal chances?). This helps you build AI systems that are not only accurate but also ethical and compliant with regulations.
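Example (a minimal sketch; 'fairnessEvaluator' is a hypothetical variable standing for any IFairnessEvaluator<double> implementation):
// 'fairnessEvaluator' is any IFairnessEvaluator<double> implementation (hypothetical variable)
var result = builder
    .ConfigureModel(model)
    .ConfigureFairnessEvaluator(fairnessEvaluator)
    .Build(X, y);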
ConfigureFeatureSelector(IFeatureSelector<T, TInput>)
Configures the feature selector component for the model.
IAiModelBuilder<T, TInput, TOutput> ConfigureFeatureSelector(IFeatureSelector<T, TInput> selector)
Parameters
selector (IFeatureSelector<T, TInput>): The feature selector implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
A feature selector helps identify which input variables (features) are most important for making predictions.
For Beginners: Imagine you're trying to predict house prices. You have many possible factors: size, location, age, number of rooms, etc. A feature selector helps figure out which of these factors actually matter for making good predictions. This can improve your model's accuracy and make it run faster by focusing only on what's important.
ConfigureFederatedLearning(FederatedLearningOptions, IAggregationStrategy<IFullModel<T, TInput, TOutput>>?, IClientSelectionStrategy?, IFederatedServerOptimizer<T>?, IFederatedHeterogeneityCorrection<T>?, IHomomorphicEncryptionProvider<T>?)
Enables federated learning training using the provided options.
IAiModelBuilder<T, TInput, TOutput> ConfigureFederatedLearning(FederatedLearningOptions options, IAggregationStrategy<IFullModel<T, TInput, TOutput>>? aggregationStrategy = null, IClientSelectionStrategy? clientSelectionStrategy = null, IFederatedServerOptimizer<T>? serverOptimizer = null, IFederatedHeterogeneityCorrection<T>? heterogeneityCorrection = null, IHomomorphicEncryptionProvider<T>? homomorphicEncryptionProvider = null)
Parameters
options (FederatedLearningOptions): Federated learning configuration options.
aggregationStrategy (IAggregationStrategy<IFullModel<T, TInput, TOutput>>): Optional aggregation strategy override (null uses defaults based on options).
clientSelectionStrategy (IClientSelectionStrategy): Optional client selection strategy override (null uses defaults based on options).
serverOptimizer (IFederatedServerOptimizer<T>): Optional server-side optimizer override (null uses defaults based on options).
heterogeneityCorrection (IFederatedHeterogeneityCorrection<T>): Optional heterogeneity correction strategy override (null uses defaults based on options).
homomorphicEncryptionProvider (IHomomorphicEncryptionProvider<T>): Optional homomorphic encryption provider for encrypted aggregation (null uses plaintext aggregation).
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
Federated learning is orchestrated internally by the builder to preserve the public facade API. Users typically only provide an options object; optional strategy injection is available for advanced scenarios.
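Example (a sketch that assumes FederatedLearningOptions has a usable parameterless constructor with sensible defaults; the optional strategy parameters are left at their null defaults):
// Assumes FederatedLearningOptions() provides sensible defaults
var options = new FederatedLearningOptions();
var result = builder
    .ConfigureModel(model)
    .ConfigureFederatedLearning(options) // strategies default based on the options
    .Build(X, y);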
ConfigureFewShotExampleSelector(IFewShotExampleSelector<T>?)
Configures the few-shot example selector for selecting examples to include in prompts.
IAiModelBuilder<T, TInput, TOutput> ConfigureFewShotExampleSelector(IFewShotExampleSelector<T>? selector = null)
Parameters
selector (IFewShotExampleSelector<T>): The few-shot example selector to use. If null, no selector is configured.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
A few-shot example selector chooses the most relevant examples to include in prompts based on the current query. Different strategies include random selection, fixed order, and similarity-based selection.
For Beginners: Few-shot learning teaches the model by showing it examples. The selector picks which examples to show for each new query.
ConfigureFineTuning(FineTuningConfiguration<T, TInput, TOutput>?)
Configures fine-tuning for the model using preference learning, RLHF, or other alignment methods.
IAiModelBuilder<T, TInput, TOutput> ConfigureFineTuning(FineTuningConfiguration<T, TInput, TOutput>? configuration = null)
Parameters
configuration (FineTuningConfiguration<T, TInput, TOutput>): The fine-tuning configuration, including training data. When null, uses industry-standard defaults.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
This configuration enables post-training fine-tuning using various alignment techniques:
- Supervised Fine-Tuning (SFT): Traditional fine-tuning on labeled examples
- Direct Preference Optimization (DPO): Learn from human preferences without reward models
- Simple Preference Optimization (SimPO): Reference-free, length-normalized preference learning
- Group Relative Policy Optimization (GRPO): Memory-efficient RL without critic models
- Reinforcement Learning from Human Feedback (RLHF): Classic PPO-based alignment
For Beginners: Fine-tuning helps align your model with human preferences. When called with no parameters (null), industry-standard defaults are applied automatically. Training data should be set in the configuration's TrainingData property.
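Example (a minimal sketch relying on the documented null-default behavior; for real use, set the configuration's TrainingData property):
// Apply industry-standard fine-tuning defaults
var result = await builder
    .ConfigureModel(model)
    .ConfigureFineTuning()
    .BuildAsync();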
ConfigureFitDetector(IFitDetector<T, TInput, TOutput>)
Configures the fit detector component for the model.
IAiModelBuilder<T, TInput, TOutput> ConfigureFitDetector(IFitDetector<T, TInput, TOutput> detector)
Parameters
detector (IFitDetector<T, TInput, TOutput>): The fit detector implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
A fit detector determines whether the model is underfitting, well-fitted, or overfitting.
For Beginners: This component checks if your model is learning properly. It's like a teacher who can tell if:
- Your model is "underfitting" (too simple and missing important patterns)
- Your model is "just right" (learning the important patterns without memorizing noise)
- Your model is "overfitting" (memorizing the training data instead of learning general rules)
This helps you know when to stop training or when to adjust your model's complexity.
ConfigureFitnessCalculator(IFitnessCalculator<T, TInput, TOutput>)
Configures the fitness calculator component for the model.
IAiModelBuilder<T, TInput, TOutput> ConfigureFitnessCalculator(IFitnessCalculator<T, TInput, TOutput> calculator)
Parameters
calculator (IFitnessCalculator<T, TInput, TOutput>): The fitness calculator implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
A fitness calculator measures how well the model is performing during training.
For Beginners: The fitness calculator is like a scorekeeper that tells you how well your model is doing. It compares the model's predictions to the actual correct answers and calculates a score. This score helps determine if changes to the model are making it better or worse.
ConfigureGpuAcceleration(GpuAccelerationConfig?)
Enables GPU acceleration for training and inference with optional configuration.
IAiModelBuilder<T, TInput, TOutput> ConfigureGpuAcceleration(GpuAccelerationConfig? config = null)
Parameters
config (GpuAccelerationConfig): GPU acceleration configuration (optional; uses defaults if null).
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: GPU acceleration makes your model train 10-100x faster on large datasets by using your graphics card (GPU) for parallel computation. It automatically uses GPU for large operations and CPU for small ones, with zero code changes required.
Benefits:
- 10-100x faster training for large neural networks
- Automatic size-based routing (GPU for large ops, CPU for small)
- Supports NVIDIA (CUDA) and AMD/Intel (OpenCL) GPUs
- Automatic CPU fallback if GPU unavailable
- Works transparently with existing models
Example:
// Enable with defaults (recommended)
var result = await builder
.ConfigureModel(model)
.ConfigureGpuAcceleration()
.BuildAsync();
// Or with aggressive settings for high-end GPUs
builder.ConfigureGpuAcceleration(GpuAccelerationConfig.Aggressive());
// Or CPU-only for debugging
builder.ConfigureGpuAcceleration(GpuAccelerationConfig.CpuOnly());
ConfigureHyperparameterOptimizer(IHyperparameterOptimizer<T, TInput, TOutput>, HyperparameterSearchSpace?, int)
Configures hyperparameter optimization for automatic tuning of model settings.
IAiModelBuilder<T, TInput, TOutput> ConfigureHyperparameterOptimizer(IHyperparameterOptimizer<T, TInput, TOutput> optimizer, HyperparameterSearchSpace? searchSpace = null, int nTrials = 10)
Parameters
optimizer (IHyperparameterOptimizer<T, TInput, TOutput>): The hyperparameter optimizer implementation to use.
searchSpace (HyperparameterSearchSpace): The hyperparameter search space defining parameter ranges. If null, hyperparameter optimization is disabled.
nTrials (int): Number of trials to run. Default is 10.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: Hyperparameter optimization automatically finds the best settings for your model (like learning rate, number of layers, etc.) instead of you having to guess.
Key features include:
- Systematic search through hyperparameter space
- Multiple search strategies (grid, random, Bayesian)
- Tracking and comparing trial results
- Early stopping of unpromising trials
Example:
var searchSpace = new HyperparameterSearchSpace();
searchSpace.AddContinuous("learning_rate", 0.0001, 0.1, logScale: true);
searchSpace.AddInteger("hidden_units", 32, 256);
var optimizer = new RandomSearchOptimizer<double, Matrix<double>, Vector<double>>(maximize: false);
var result = builder
.ConfigureModel(model)
.ConfigureHyperparameterOptimizer(optimizer, searchSpace, nTrials: 20)
.Build(x, y);
ConfigureInferenceOptimizations(InferenceOptimizationConfig?)
Configures inference-time optimizations for faster predictions.
IAiModelBuilder<T, TInput, TOutput> ConfigureInferenceOptimizations(InferenceOptimizationConfig? config = null)
Parameters
config (InferenceOptimizationConfig): Inference optimization configuration (optional; uses defaults if null).
Returns
- IAiModelBuilder<T, TInput, TOutput>
This builder instance for method chaining.
Remarks
For Beginners: Inference optimization makes your model's predictions faster and more efficient.
Key features enabled:
- KV Cache: Speeds up transformer/attention models by 2-10x
- Batching: Groups predictions for higher throughput
- Speculative Decoding: Speeds up text generation by 1.5-3x
Example:
var result = await new AiModelBuilder<double, ...>()
.ConfigureModel(myModel)
.ConfigureInferenceOptimizations() // Uses sensible defaults
.BuildAsync();
// Or with custom settings:
var config = new InferenceOptimizationConfig
{
EnableKVCache = true,
MaxBatchSize = 64,
EnableSpeculativeDecoding = true
};
var result = await builder
.ConfigureInferenceOptimizations(config)
.BuildAsync();
ConfigureJitCompilation(JitCompilationConfig?)
Configures Just-In-Time (JIT) compilation for neural network forward and backward passes.
IAiModelBuilder<T, TInput, TOutput> ConfigureJitCompilation(JitCompilationConfig? config = null)
Parameters
config (JitCompilationConfig): JIT compilation configuration (optional; enables with defaults if null).
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: JIT compilation is an optimization technique that converts your neural network's operations into highly optimized native code at runtime, similar to how modern browsers optimize JavaScript.
Benefits:
- 2-10x faster inference through operation fusion and vectorization
- Reduced memory allocations during forward/backward passes
- Automatic optimization of computation graphs
- Zero code changes required - just enable the config
JIT compilation works by:
1. Analyzing your neural network's computation graph
2. Fusing compatible operations together (e.g., MatMul + Bias + ReLU)
3. Generating optimized native code using System.Reflection.Emit
4. Caching compiled code for subsequent runs
Example:
// Enable JIT with defaults (recommended)
var result = await builder
.ConfigureModel(model)
.ConfigureJitCompilation()
.BuildAsync();
// Or with custom settings
builder.ConfigureJitCompilation(new JitCompilationConfig
{
Enabled = true,
CompilerOptions = new JitCompilerOptions
{
EnableOperationFusion = true,
EnableVectorization = true
}
});
ConfigureKnowledgeDistillation(KnowledgeDistillationOptions<T, TInput, TOutput>?)
Configures knowledge distillation for training a smaller student model from a larger teacher model.
IAiModelBuilder<T, TInput, TOutput> ConfigureKnowledgeDistillation(KnowledgeDistillationOptions<T, TInput, TOutput>? options = null)
Parameters
options (KnowledgeDistillationOptions<T, TInput, TOutput>): The knowledge distillation configuration options (optional; uses sensible defaults if null).
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
Knowledge distillation enables model compression by transferring knowledge from a large, accurate teacher model to a smaller, faster student model. The student learns to mimic the teacher's predictions and internal representations.
For Beginners: Knowledge distillation is like having an expert teacher help train a smaller, faster student. The student model learns not just from the training labels, but also from the teacher's "soft" predictions which contain richer information about relationships between classes.
Benefits:
- Model compression: Deploy 10x smaller models with 90%+ of original accuracy
- Faster inference: Smaller models run significantly faster
- Lower memory: Fits on edge devices and mobile platforms
- Better generalization: Learning from soft labels often improves accuracy
Common use cases:
- DistilBERT: 40% smaller than BERT, 97% performance, 60% faster
- MobileNet: Distilled from ResNet for mobile deployment
- Edge AI: Deploy powerful models on resource-constrained devices
Quick Start Example:
var distillationOptions = new KnowledgeDistillationOptions<double, Vector<double>, Vector<double>>
{
TeacherModelType = TeacherModelType.NeuralNetwork,
StrategyType = DistillationStrategyType.ResponseBased,
Temperature = 3.0,
Alpha = 0.3,
Epochs = 20,
BatchSize = 32
};
var builder = new AiModelBuilder<double, Vector<double>, Vector<double>>()
.ConfigureKnowledgeDistillation(distillationOptions);
Note: Current implementation requires student model to use Vector<T> for both input and output types.
ConfigureLoRA(ILoRAConfiguration<T>)
Configures LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning.
IAiModelBuilder<T, TInput, TOutput> ConfigureLoRA(ILoRAConfiguration<T> loraConfiguration)
Parameters
loraConfiguration (ILoRAConfiguration<T>): The LoRA configuration implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
LoRA enables efficient fine-tuning of neural networks by learning low-rank decompositions of weight updates instead of modifying all weights directly. This dramatically reduces the number of trainable parameters while maintaining model performance.
For Beginners: LoRA is a technique that lets you adapt large pre-trained models with 100x fewer parameters than traditional fine-tuning. Instead of updating all weights, LoRA adds small "correction layers" that learn what adjustments are needed.
Think of it like:
- The original model has the base knowledge (optionally frozen)
- LoRA layers learn small corrections for your specific task
- The final output combines both: original + correction
This is especially useful when:
- You want to fine-tune a large model with limited memory
- You need to create multiple task-specific versions of the same model
- You want to adapt pre-trained models without retraining everything
The configuration determines which layers get LoRA adaptations, what rank to use, and whether to freeze the base layers during training.
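Example (a minimal sketch; 'loraConfig' and 'pretrainedModel' are hypothetical variables standing for an ILoRAConfiguration<double> implementation and the base model to adapt):
// 'loraConfig' is any ILoRAConfiguration<double> implementation (hypothetical variable)
var result = builder
    .ConfigureModel(pretrainedModel)   // the base model to adapt
    .ConfigureLoRA(loraConfig)
    .Build(X, y);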
ConfigureMetaLearning(IMetaLearner<T, TInput, TOutput>)
Configures a meta-learning algorithm (MAML, Reptile, SEAL) for training models that can quickly adapt to new tasks.
IAiModelBuilder<T, TInput, TOutput> ConfigureMetaLearning(IMetaLearner<T, TInput, TOutput> metaLearner)
Parameters
metaLearner (IMetaLearner<T, TInput, TOutput>): The meta-learning algorithm to use (e.g., ReptileTrainer with its episodic data loader).
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: Meta-learning trains models to quickly learn new tasks from just a few examples. If you configure this, Build() will do meta-training instead of regular training.
Only configure this if you need few-shot learning capabilities. For standard machine learning, just use ConfigureModel() and Build() as usual.
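Example (a minimal sketch; 'reptileTrainer' is a hypothetical variable standing for an IMetaLearner implementation such as ReptileTrainer, already configured with its episodic data loader):
// 'reptileTrainer' is an IMetaLearner implementation configured elsewhere
var result = await builder
    .ConfigureMetaLearning(reptileTrainer)
    .BuildAsync(); // performs meta-training across tasks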
ConfigureMixedPrecision(MixedPrecisionConfig?)
Configures mixed-precision training for faster neural network training with reduced memory usage.
IAiModelBuilder<T, TInput, TOutput> ConfigureMixedPrecision(MixedPrecisionConfig? config = null)
Parameters
config (MixedPrecisionConfig): Mixed precision configuration (optional; uses defaults if null).
Returns
- IAiModelBuilder<T, TInput, TOutput>
This builder instance for method chaining.
Remarks
For Beginners: Mixed-precision training is a powerful optimization technique that uses both 16-bit (half precision) and 32-bit (full precision) floating-point numbers during training. This provides:
- Up to 50% memory savings, allowing larger batch sizes or bigger models
- 2-3x faster training on modern GPUs with Tensor Cores (NVIDIA Volta+)
- Maintained accuracy through careful precision management and loss scaling
Requirements:
- Type parameter T must be float (FP32)
- Requires gradient-based optimizers (SGD, Adam, etc.)
- Best suited for neural networks with large parameter counts
Example:
// Enable with default settings (recommended)
var result = await new AiModelBuilder<float, Matrix<float>, Vector<float>>()
.ConfigureModel(network)
.ConfigureOptimizer(optimizer)
.ConfigureMixedPrecision() // Enable mixed-precision
.BuildAsync();
// Or with custom configuration
builder.ConfigureMixedPrecision(MixedPrecisionConfig.Conservative());
ConfigureModel(IFullModel<T, TInput, TOutput>)
Configures the prediction model algorithm to use.
IAiModelBuilder<T, TInput, TOutput> ConfigureModel(IFullModel<T, TInput, TOutput> model)
Parameters
model (IFullModel<T, TInput, TOutput>): The prediction model implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
This method lets you specify which machine learning algorithm will be used as the core of your predictive model.
For Beginners: This is where you choose the specific type of AI model for your prediction task. You can select from various algorithms depending on your needs:
Regression models for predicting numeric values:
- Linear regression (for simple straight-line relationships)
- Polynomial regression (for curved relationships)
- Ridge or Lasso regression (to prevent overfitting)
Classification models for categorizing data:
- Logistic regression (for yes/no predictions)
- Decision trees (for rule-based decisions)
- Support vector machines (for complex boundaries)
Neural networks for complex pattern recognition:
- Simple neural networks (for moderate complexity)
- Deep learning models (for highly complex patterns)
Time series models for sequential data:
- ARIMA (for forecasting trends)
- LSTM networks (for long-term patterns)
Different models excel at different types of problems, so choosing the right one depends on your specific data and prediction goals.
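Example (a minimal sketch mirroring the classification example shown under ConfigurePostprocessing; LogisticRegression is used as the core model):
// Choose logistic regression as the core prediction model
var result = new AiModelBuilder<double, Matrix<double>, Vector<double>>()
    .ConfigureModel(new LogisticRegression<double>())
    .Build(X, y);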
ConfigureModelEvaluator(IModelEvaluator<T, TInput, TOutput>)
Configures the model evaluator component for comprehensive model evaluation and cross-validation.
IAiModelBuilder<T, TInput, TOutput> ConfigureModelEvaluator(IModelEvaluator<T, TInput, TOutput> evaluator)
Parameters
evaluator (IModelEvaluator<T, TInput, TOutput>): The model evaluator implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
A model evaluator provides methods to evaluate model performance on different datasets and perform cross-validation to assess generalization.
For Beginners: The model evaluator helps you understand how well your model performs. If you configure both a model evaluator and cross-validator (via ConfigureCrossValidation), cross-validation will automatically run during Build() and the results will be included in your trained model.
ConfigureModelRegistry(IModelRegistry<T, TInput, TOutput>)
Configures model registry for centralized model storage and versioning.
IAiModelBuilder<T, TInput, TOutput> ConfigureModelRegistry(IModelRegistry<T, TInput, TOutput> registry)
Parameters
registry (IModelRegistry<T, TInput, TOutput>): The model registry implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: A model registry is like a library for your trained models. It keeps track of all your models, their versions, and which ones are in production.
Key features include:
- Storing and versioning trained models
- Managing model lifecycle (development → staging → production)
- Tracking model metadata and lineage
- Comparing different model versions
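Example (a minimal sketch; 'registry' is a hypothetical variable standing for any IModelRegistry<double, Matrix<double>, Vector<double>> implementation):
// 'registry' is any IModelRegistry implementation (hypothetical variable)
var result = builder
    .ConfigureModel(model)
    .ConfigureModelRegistry(registry)
    .Build(X, y);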
ConfigureNormalizer(INormalizer<T, TInput, TOutput>)
Configures the data normalizer component for the model.
IAiModelBuilder<T, TInput, TOutput> ConfigureNormalizer(INormalizer<T, TInput, TOutput> normalizer)
Parameters
normalizer (INormalizer<T, TInput, TOutput>): The normalizer implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
A normalizer transforms data to a standard scale, which helps many machine learning algorithms perform better.
For Beginners: Different features in your data might use different scales. For example, a person's age (0-100) and income (thousands or millions) are on very different scales. Normalization converts all features to a similar scale (like 0-1), which prevents features with larger numbers from dominating the learning process just because they have bigger values.
Note: This method is maintained for backward compatibility. For new code, prefer ConfigurePreprocessing(IDataTransformer<T, TInput, TInput>) which supports the full range of preprocessing transformers (scalers, encoders, imputers, etc.).
ConfigureOptimizer(IOptimizer<T, TInput, TOutput>)
Configures the optimization algorithm for the model.
IAiModelBuilder<T, TInput, TOutput> ConfigureOptimizer(IOptimizer<T, TInput, TOutput> optimizationAlgorithm)
Parameters
optimizationAlgorithm (IOptimizer<T, TInput, TOutput>): The optimization algorithm implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
An optimizer determines how the model's parameters are updated during training.
For Beginners: The optimizer is like the "learning strategy" for your model. It decides:
- How quickly the model should learn (learning rate)
- How to adjust the model's parameters to improve predictions
- When to stop trying to improve further
Common optimizers include Gradient Descent, Adam, and L-BFGS, each with different strengths and weaknesses.
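Example (a minimal sketch; 'optimizer' is a hypothetical variable standing for any IOptimizer<double, Matrix<double>, Vector<double>> implementation, for example an Adam-style optimizer from the library, whose exact class name is not shown here):
// 'optimizer' is any IOptimizer implementation created elsewhere (hypothetical variable)
var result = builder
    .ConfigureModel(model)
    .ConfigureOptimizer(optimizer)
    .Build(X, y);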
ConfigureOutlierRemoval(IOutlierRemoval<T, TInput, TOutput>)
Configures the outlier removal component for the model.
IAiModelBuilder<T, TInput, TOutput> ConfigureOutlierRemoval(IOutlierRemoval<T, TInput, TOutput> outlierRemoval)
Parameters
outlierRemoval (IOutlierRemoval<T, TInput, TOutput>): The outlier removal implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
An outlier removal component identifies and handles unusual data points that might negatively impact the model's performance.
For Beginners: Outliers are unusual data points that don't follow the general pattern. For example, if you're analyzing house prices and most houses cost $100,000-$500,000, a $10 million mansion would be an outlier. These unusual points can confuse your model and make it perform worse. Outlier removal helps identify and handle these unusual cases.
ConfigurePostprocessing(IDataTransformer<T, TOutput, TOutput>)
Configures the output postprocessing pipeline for the model using a single transformer.
IAiModelBuilder<T, TInput, TOutput> ConfigurePostprocessing(IDataTransformer<T, TOutput, TOutput> transformer)
Parameters
transformer (IDataTransformer<T, TOutput, TOutput>): The postprocessing transformer to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
Postprocessing transforms model outputs into the desired format. This includes operations like softmax application, label decoding, output formatting, and converting tensor outputs to structured data.
For Beginners: Postprocessing is like formatting the final presentation of results. It involves:
- Converting raw model outputs to probabilities (Softmax)
- Decoding indices to human-readable labels (LabelDecoder)
- Applying thresholds and confidence filtering
- Formatting outputs for specific use cases
Example with a single transformer:
var result = new AiModelBuilder<double, Matrix<double>, Vector<double>>()
.ConfigurePreprocessing(new StandardScaler<double>())
.ConfigurePostprocessing(new SoftmaxTransformer<double>())
.ConfigureModel(new LogisticRegression<double>())
.Build(X, y);
ConfigurePostprocessing(PostprocessingPipeline<T, TOutput, TOutput>?)
Configures the output postprocessing pipeline for the model using an existing pipeline.
IAiModelBuilder<T, TInput, TOutput> ConfigurePostprocessing(PostprocessingPipeline<T, TOutput, TOutput>? pipeline = null)
Parameters
pipeline (PostprocessingPipeline<T, TOutput, TOutput>): The postprocessing pipeline to use, or null for industry defaults.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
Use this overload when you have a pre-configured PostprocessingPipeline instance. If null is passed, a default postprocessing pipeline will be created with industry-standard transformers for the model type.
For Beginners: Use this when you've already created a pipeline elsewhere:
var myPipeline = new PostprocessingPipeline<double, Vector<double>, Vector<double>>()
.Add(new SoftmaxTransformer<double>());
builder.ConfigurePostprocessing(myPipeline);
ConfigurePostprocessing(Action<PostprocessingPipeline<T, TOutput, TOutput>>)
Configures the output postprocessing pipeline for the model using a fluent builder.
IAiModelBuilder<T, TInput, TOutput> ConfigurePostprocessing(Action<PostprocessingPipeline<T, TOutput, TOutput>> configure)
Parameters
configure (Action<PostprocessingPipeline<T, TOutput, TOutput>>): An action that configures the postprocessing pipeline.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
This overload accepts a configuration action that allows you to build a postprocessing pipeline with multiple transformers in a fluent style.
For Beginners: Use this when you need multiple postprocessing steps.
Example with multiple steps:
var result = new AiModelBuilder<double, Matrix<double>, Vector<double>>()
.ConfigurePreprocessing(new StandardScaler<double>())
.ConfigurePostprocessing(pipeline => pipeline
.Add(new SoftmaxTransformer<double>())
.Add(new LabelDecoder<double>(labels)))
.ConfigureModel(new LogisticRegression<double>())
.Build(X, y);
ConfigurePreprocessing(IDataTransformer<T, TInput, TInput>)
Configures the data preprocessing pipeline for the model using a single transformer.
IAiModelBuilder<T, TInput, TOutput> ConfigurePreprocessing(IDataTransformer<T, TInput, TInput> transformer)
Parameters
transformer (IDataTransformer<T, TInput, TInput>): The preprocessing transformer to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
Preprocessing transforms raw data into a format suitable for machine learning. This includes operations like scaling, encoding categorical variables, imputing missing values, and generating polynomial features.
For Beginners: Preprocessing is like preparing ingredients before cooking. It involves:
- Scaling data to a standard range (StandardScaler, MinMaxScaler)
- Encoding categories as numbers (OneHotEncoder, LabelEncoder)
- Filling in missing values (SimpleImputer)
- Creating new features (PolynomialFeatures)
Example with a single scaler:
var result = new AiModelBuilder<double, Matrix<double>, Vector<double>>()
.ConfigurePreprocessing(new StandardScaler<double>())
.ConfigureModel(new LassoRegression<double>())
.Build(X, y);
ConfigurePreprocessing(Action<PreprocessingPipeline<T, TInput, TInput>>)
Configures the data preprocessing pipeline for the model using a fluent builder.
IAiModelBuilder<T, TInput, TOutput> ConfigurePreprocessing(Action<PreprocessingPipeline<T, TInput, TInput>> configure)
Parameters
configure (Action<PreprocessingPipeline<T, TInput, TInput>>): An action that configures the preprocessing pipeline.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
This overload accepts a configuration action that allows you to build a preprocessing pipeline with multiple transformers in a fluent style.
For Beginners: Use this when you need multiple preprocessing steps.
Example with multiple steps:
var result = new AiModelBuilder<double, Matrix<double>, Vector<double>>()
.ConfigurePreprocessing(pipeline => pipeline
.Add(new SimpleImputer<double>(strategy: ImputationStrategy.Mean))
.Add(new StandardScaler<double>())
.Add(new PolynomialFeatures<double>(degree: 2)))
.ConfigureModel(new LassoRegression<double>())
.Build(X, y);
ConfigureProgramSynthesis(ProgramSynthesisOptions?)
Configures built-in Program Synthesis defaults for code tasks.
IAiModelBuilder<T, TInput, TOutput> ConfigureProgramSynthesis(ProgramSynthesisOptions? options = null)
Parameters
options (ProgramSynthesisOptions): Optional configuration options.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
This method is available on the generic builder interface and can be used with any TInput and TOutput (for example Tensor<T>, Vector<T>, or Matrix<T>).
Implementations should use sensible defaults and ensure the program-synthesis capabilities are available
through AiModelResult without requiring users to manually wire low-level components.
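Illustrative example (a minimal sketch; it assumes a builder created elsewhere and relies on the default ProgramSynthesisOptions):
// Enable program-synthesis defaults; pass a ProgramSynthesisOptions instance to customize behavior
var result = await builder
.ConfigureProgramSynthesis()
.BuildAsync();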
ConfigureProgramSynthesisServing(ProgramSynthesisServingClientOptions?, IProgramSynthesisServingClient?)
Configures Program Synthesis to prefer calling AiDotNet.Serving for sandboxed execution and evaluation.
IAiModelBuilder<T, TInput, TOutput> ConfigureProgramSynthesisServing(ProgramSynthesisServingClientOptions? options = null, IProgramSynthesisServingClient? client = null)
Parameters
optionsProgramSynthesisServingClientOptionsServing client options. If null, Serving is not used unless a client is provided.
clientIProgramSynthesisServingClientOptional custom client implementation. When provided, this takes precedence over options.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
When configured, Program Synthesis inference (code tasks, sandboxed execution, evaluation) can be routed through
AiDotNet.Serving by default to isolate untrusted code and keep proprietary logic on the server side.
For Beginners: This lets your app call a secure server to run code tasks safely.
Instead of running code on your machine (which can be unsafe), you can point AiDotNet to a Serving instance that runs everything in a sandbox.
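Illustrative example (a minimal sketch; any endpoint settings you need would go on ProgramSynthesisServingClientOptions, whose properties are not documented here):
// Route sandboxed execution and evaluation through AiDotNet.Serving
var servingOptions = new ProgramSynthesisServingClientOptions();
var result = await builder
.ConfigureProgramSynthesis()
.ConfigureProgramSynthesisServing(servingOptions)
.BuildAsync();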
ConfigurePromptChain(IChain<string, string>?)
Configures the prompt chain for composing multiple language model operations.
IAiModelBuilder<T, TInput, TOutput> ConfigurePromptChain(IChain<string, string>? chain = null)
Parameters
chainIChain<string, string>The chain to use for processing prompts. If null, no chain is configured.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
A chain orchestrates multiple language model calls, tools, and transformations into a cohesive workflow. Chains can be sequential, conditional, or parallel.
For Beginners: A chain connects multiple steps into a complete workflow, like a recipe where each step builds on the previous one.
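Illustrative example (a minimal sketch; SequentialPromptChain is a hypothetical IChain<string, string> implementation used only to show the call shape):
// Connect prompt-processing steps into one workflow
IChain<string, string> chain = new SequentialPromptChain(); // placeholder for your chain implementation
var result = await builder
.ConfigurePromptChain(chain)
.BuildAsync();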
ConfigurePromptOptimizer(IPromptOptimizer<T>?)
Configures the prompt optimizer for automatically improving prompts.
IAiModelBuilder<T, TInput, TOutput> ConfigurePromptOptimizer(IPromptOptimizer<T>? optimizer = null)
Parameters
optimizerIPromptOptimizer<T>The prompt optimizer to use. If null, no optimizer is configured.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
A prompt optimizer automatically refines prompts to achieve better performance on a specific task. Optimization strategies include discrete search, gradient-based methods, and evolutionary algorithms.
For Beginners: A prompt optimizer automatically improves your prompts by testing variations and keeping the best-performing ones.
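Illustrative example (a minimal sketch; EvolutionaryPromptOptimizer is a hypothetical IPromptOptimizer<double> implementation shown only to illustrate where the optimizer plugs in):
// Automatically refine prompts against a target task
IPromptOptimizer<double> optimizer = new EvolutionaryPromptOptimizer<double>(); // placeholder implementation
var result = await builder
.ConfigurePromptOptimizer(optimizer)
.BuildAsync();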
ConfigurePromptTemplate(IPromptTemplate?)
Configures the prompt template for language model interactions.
IAiModelBuilder<T, TInput, TOutput> ConfigurePromptTemplate(IPromptTemplate? template = null)
Parameters
templateIPromptTemplateThe prompt template to use. If null, no template is configured.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
A prompt template provides a structured way to create prompts for language models by combining a template string with runtime variables.
For Beginners: A prompt template is like a form with blanks to fill in. You define the structure once and fill in different values each time you use it.
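Illustrative example (a minimal sketch; PromptTemplate is a hypothetical IPromptTemplate implementation used to illustrate the fill-in-the-blanks idea):
// Define the prompt structure once; variables are filled in at call time
IPromptTemplate template = new PromptTemplate("Summarize the following text: {input}"); // placeholder implementation
var result = await builder
.ConfigurePromptTemplate(template)
.BuildAsync();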
ConfigureQuantization(QuantizationConfig?)
Configures model quantization for reducing model size and improving inference speed.
IAiModelBuilder<T, TInput, TOutput> ConfigureQuantization(QuantizationConfig? config = null)
Parameters
configQuantizationConfigThe quantization configuration (optional, uses no quantization if null).
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: Quantization compresses your model by using smaller numbers (like 8-bit instead of 32-bit). This makes your model:
- Smaller (50-75% size reduction)
- Faster (2-4x speedup)
- Use less memory
The trade-off is a small accuracy loss (usually 1-5%). For most applications, this is acceptable.
Example:
// Use Float16 quantization (recommended for most cases)
var result = await builder
.ConfigureModel(model)
.ConfigureQuantization(new QuantizationConfig { Mode = QuantizationMode.Float16 })
.BuildAsync();
ConfigureReasoning(ReasoningConfig?)
Configures advanced reasoning capabilities for the model using Chain-of-Thought, Tree-of-Thoughts, and Self-Consistency strategies.
IAiModelBuilder<T, TInput, TOutput> ConfigureReasoning(ReasoningConfig? config = null)
Parameters
configReasoningConfigThe reasoning configuration (optional, uses defaults if null).
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: Reasoning capabilities make AI models "think step by step" instead of giving quick answers that might be wrong. Just like a student showing their work on a math test, reasoning strategies help the AI:
- Break down complex problems into manageable steps
- Explore multiple solution approaches
- Verify and refine its answers
- Provide transparent, explainable reasoning
After building your model, use the reasoning methods on AiModelResult:
- ReasonAsync(): Solve problems with configurable reasoning strategies
- QuickReasonAsync(): Fast answers for simple problems
- DeepReasonAsync(): Thorough analysis for complex problems
Example:
// Configure reasoning during model building
var agentConfig = new AgentConfiguration<double>
{
ApiKey = "sk-...",
Provider = LLMProvider.OpenAI,
IsEnabled = true
};
var result = await new AiModelBuilder<double, Matrix<double>, Vector<double>>()
.ConfigureAgentAssistance(agentConfig)
.ConfigureReasoning()
.BuildAsync();
// Use reasoning on the trained model
var reasoningResult = await result.ReasonAsync(
"Explain why this prediction was made and what factors contributed most?",
ReasoningMode.ChainOfThought
);
Console.WriteLine(reasoningResult.FinalAnswer);
ConfigureRegularization(IRegularization<T, TInput, TOutput>)
Configures the regularization component for the model.
IAiModelBuilder<T, TInput, TOutput> ConfigureRegularization(IRegularization<T, TInput, TOutput> regularization)
Parameters
regularizationIRegularization<T, TInput, TOutput>The regularization implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
Regularization helps prevent overfitting by adding a penalty for complexity in the model.
For Beginners: Overfitting happens when a model learns the training data too well, including all its noise and peculiarities, making it perform poorly on new data. Regularization is like adding training wheels that keep the model from becoming too complex: it tells the model to "keep it simple" so it learns general patterns rather than memorizing specific examples.
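Illustrative example (a minimal sketch; L2Regularization is a hypothetical IRegularization implementation, substitute whichever regularization type your project provides):
// Penalize model complexity to reduce overfitting
var result = new AiModelBuilder<double, Matrix<double>, Vector<double>>()
.ConfigureRegularization(new L2Regularization<double, Matrix<double>, Vector<double>>()) // placeholder implementation
.ConfigureModel(new LogisticRegression<double>())
.Build(X, y);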
ConfigureReinforcementLearning(RLTrainingOptions<T>)
Configures reinforcement learning options for training an RL agent.
IAiModelBuilder<T, TInput, TOutput> ConfigureReinforcementLearning(RLTrainingOptions<T> options)
Parameters
optionsRLTrainingOptions<T>The reinforcement learning configuration options.
Returns
- IAiModelBuilder<T, TInput, TOutput>
This builder instance for method chaining.
Remarks
For Beginners: Reinforcement learning trains an agent through trial and error in an environment. This method configures all aspects of RL training:
- The environment (simulation/game for the agent to learn from)
- Training parameters (episodes, steps, batch size)
- Exploration strategies (how to balance trying new things vs using learned behavior)
- Replay buffers (how to store and sample past experiences)
- Callbacks for monitoring training progress
After configuring RL options, call BuildAsync() to train the agent.
Example:
var options = new RLTrainingOptions<double>
{
Environment = new CartPoleEnvironment<double>(),
Episodes = 1000,
MaxStepsPerEpisode = 500,
OnEpisodeComplete = (metrics) => Console.WriteLine($"Episode {metrics.Episode}: {metrics.TotalReward}")
};
var result = await new AiModelBuilder<double, Vector<double>, Vector<double>>()
.ConfigureReinforcementLearning(options)
.ConfigureModel(new DQNAgent<double>())
.BuildAsync();
ConfigureRetrievalAugmentedGeneration(IRetriever<T>?, IReranker<T>?, IGenerator<T>?, IEnumerable<IQueryProcessor>?, IGraphStore<T>?, KnowledgeGraph<T>?, IDocumentStore<T>?)
Configures the retrieval-augmented generation (RAG) components for use during model inference.
IAiModelBuilder<T, TInput, TOutput> ConfigureRetrievalAugmentedGeneration(IRetriever<T>? retriever = null, IReranker<T>? reranker = null, IGenerator<T>? generator = null, IEnumerable<IQueryProcessor>? queryProcessors = null, IGraphStore<T>? graphStore = null, KnowledgeGraph<T>? knowledgeGraph = null, IDocumentStore<T>? documentStore = null)
Parameters
retrieverIRetriever<T>Optional retriever for finding relevant documents. If not provided, standard RAG won't be available.
rerankerIReranker<T>Optional reranker for improving document ranking quality. Default provided if retriever is set.
generatorIGenerator<T>Optional generator for producing grounded answers. Default provided if retriever is set.
queryProcessorsIEnumerable<IQueryProcessor>Optional query processors for improving search quality.
graphStoreIGraphStore<T>Optional graph storage backend for Graph RAG (e.g., MemoryGraphStore, FileGraphStore).
knowledgeGraphKnowledgeGraph<T>Optional pre-configured knowledge graph. If null but graphStore is provided, a new one is created.
documentStoreIDocumentStore<T>Optional document store for hybrid vector + graph retrieval.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
RAG enhances text generation by retrieving relevant documents from a knowledge base and using them as context for generating grounded, factual answers.
Graph RAG: When graphStore or knowledgeGraph is provided, enables knowledge graph-based retrieval that finds related entities and their relationships, providing richer context than vector similarity alone. If documentStore is also provided, hybrid retrieval combines both vector search and graph traversal.
For Beginners: RAG is like giving your AI access to a library before answering questions. Instead of relying only on what it learned during training, it can:
- Search a document collection for relevant information
- Read the relevant documents
- Generate an answer based on those documents
- Cite its sources
Graph RAG Example: If you ask about "Paris", Graph RAG can find not just documents mentioning Paris, but also related concepts like France, Eiffel Tower, and Seine River by traversing the knowledge graph.
RAG operations (GenerateAnswer, RetrieveDocuments, GraphQuery, etc.) are performed during inference via AiModelResult, not during model building.
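Illustrative example (a minimal sketch; VectorStoreRetriever is a hypothetical IRetriever<double> implementation, and the MemoryGraphStore type parameter is an assumption):
// Standard RAG plus Graph RAG; a default reranker and generator are provided because a retriever is set
var result = await builder
.ConfigureModel(model)
.ConfigureRetrievalAugmentedGeneration(
retriever: new VectorStoreRetriever<double>(), // placeholder retriever implementation
graphStore: new MemoryGraphStore<double>())
.BuildAsync();
// RAG operations such as GenerateAnswer and GraphQuery are then called on the result at inference time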
ConfigureTelemetry(TelemetryConfig?)
Configures telemetry for tracking and monitoring model inference metrics.
IAiModelBuilder<T, TInput, TOutput> ConfigureTelemetry(TelemetryConfig? config = null)
Parameters
configTelemetryConfigThe telemetry configuration (optional, uses default telemetry settings if null).
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: Telemetry collects performance data about your model in production, like:
- How long each inference takes (latency)
- How many inferences per second (throughput)
- When errors occur
- Cache hit/miss rates
- Which model versions are being used
This helps you:
- Detect performance problems before users complain
- Understand usage patterns
- Debug production issues
- Make informed decisions about model updates
Example:
// Enable telemetry with default settings
var result = await builder
.ConfigureModel(model)
.ConfigureTelemetry()
.BuildAsync();
ConfigureTokenizer(ITokenizer?, TokenizationConfig?)
Configures tokenization for text-based input processing.
IAiModelBuilder<T, TInput, TOutput> ConfigureTokenizer(ITokenizer? tokenizer = null, TokenizationConfig? config = null)
Parameters
tokenizerITokenizerThe tokenizer to use for text processing. If null, no tokenizer is configured.
configTokenizationConfigOptional tokenization configuration. If null, default settings are used.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
Tokenization is the process of breaking text into smaller pieces (tokens) that can be processed by machine learning models. This is essential for NLP and text-based models.
For Beginners: Tokenization converts human-readable text into numbers that AI models understand.
Different tokenization strategies include:
- BPE (Byte Pair Encoding): Used by GPT models, learns subword units from data
- WordPiece: Used by BERT, splits unknown words into known subwords
- SentencePiece: Language-independent tokenization used by many multilingual models
Example:
var tokenizer = BpeTokenizer.Train(corpus, vocabSize: 32000);
var result = await new AiModelBuilder<float, Matrix<float>, Vector<float>>()
.ConfigureTokenizer(tokenizer)
.ConfigureModel(new TransformerModel())
.BuildAsync();
ConfigureTokenizerFromPretrained(PretrainedTokenizerModel, TokenizationConfig?)
Configures tokenization using a pretrained tokenizer from HuggingFace Hub.
IAiModelBuilder<T, TInput, TOutput> ConfigureTokenizerFromPretrained(PretrainedTokenizerModel model = PretrainedTokenizerModel.BertBaseUncased, TokenizationConfig? config = null)
Parameters
modelPretrainedTokenizerModelThe pretrained tokenizer model to use. Defaults to BertBaseUncased.
configTokenizationConfigOptional tokenization configuration.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: This is the easiest and most type-safe way to use industry-standard tokenizers. Using the enum ensures you always specify a valid model name.
Simply call without parameters for sensible defaults:
var result = await new AiModelBuilder<float, Matrix<float>, Vector<float>>()
.ConfigureTokenizerFromPretrained() // Uses BertBaseUncased by default
.ConfigureModel(new BertModel())
.BuildAsync();
Or specify a model using the enum:
builder.ConfigureTokenizerFromPretrained(PretrainedTokenizerModel.Gpt2)
Available models include:
- BertBaseUncased: BERT tokenizer for English text (default)
- Gpt2, Gpt2Medium, Gpt2Large: GPT-2 tokenizers for text generation
- RobertaBase, RobertaLarge: RoBERTa tokenizers (improved BERT)
- T5Small, T5Base, T5Large: T5 tokenizers for text-to-text tasks
- DistilBertBaseUncased: Faster, smaller BERT
- CodeBertBase: For code understanding tasks
ConfigureTokenizerFromPretrained(string?, TokenizationConfig?)
Configures tokenization using a pretrained tokenizer from a custom HuggingFace model name or local path.
IAiModelBuilder<T, TInput, TOutput> ConfigureTokenizerFromPretrained(string? modelNameOrPath = null, TokenizationConfig? config = null)
Parameters
modelNameOrPathstringThe HuggingFace model name or local path. Defaults to "bert-base-uncased" if not specified.
configTokenizationConfigOptional tokenization configuration.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: Use this overload when you need to specify a custom model name or path that isn't in the PretrainedTokenizerModel enum. For common models, prefer the enum-based overload for type safety.
Example with custom model:
// Use a custom or community model from HuggingFace
builder.ConfigureTokenizerFromPretrained("sentence-transformers/all-MiniLM-L6-v2")
If null or empty, defaults to "bert-base-uncased".
ConfigureTokenizerFromPretrainedAsync(PretrainedTokenizerModel, TokenizationConfig?)
Asynchronously configures the tokenizer by loading a pretrained model from HuggingFace Hub.
Task<IAiModelBuilder<T, TInput, TOutput>> ConfigureTokenizerFromPretrainedAsync(PretrainedTokenizerModel model = PretrainedTokenizerModel.BertBaseUncased, TokenizationConfig? config = null)
Parameters
modelPretrainedTokenizerModelThe pretrained tokenizer model to use.
configTokenizationConfigOptional tokenization configuration.
Returns
- Task<IAiModelBuilder<T, TInput, TOutput>>
A task that completes with the builder instance for method chaining.
Remarks
For Beginners: This is the async version of ConfigureTokenizerFromPretrained. Use this when you want to avoid blocking the thread while downloading tokenizer files from HuggingFace Hub. This is especially important in UI applications or web servers.
Example:
// Async configuration
await builder.ConfigureTokenizerFromPretrainedAsync(PretrainedTokenizerModel.BertBaseUncased);
ConfigureTokenizerFromPretrainedAsync(string?, TokenizationConfig?)
Asynchronously configures the tokenizer by loading a pretrained model from HuggingFace Hub using a model name or path.
Task<IAiModelBuilder<T, TInput, TOutput>> ConfigureTokenizerFromPretrainedAsync(string? modelNameOrPath = null, TokenizationConfig? config = null)
Parameters
modelNameOrPathstringThe HuggingFace model name or local path. Defaults to "bert-base-uncased" if not specified.
configTokenizationConfigOptional tokenization configuration.
Returns
- Task<IAiModelBuilder<T, TInput, TOutput>>
A task that completes with the builder instance for method chaining.
Remarks
For Beginners: This is the async version that accepts a custom model name or path. Use this when loading custom or community models without blocking the thread.
Example:
// Async configuration with custom model
await builder.ConfigureTokenizerFromPretrainedAsync("sentence-transformers/all-MiniLM-L6-v2");
ConfigureTrainingMonitor(ITrainingMonitor<T>)
Configures training monitoring for real-time visibility into training progress.
IAiModelBuilder<T, TInput, TOutput> ConfigureTrainingMonitor(ITrainingMonitor<T> monitor)
Parameters
monitorITrainingMonitor<T>The training monitor implementation to use.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: A training monitor is like a dashboard for your model training. It shows you how training is progressing, what resources are being used, and if there are any problems.
Key features include:
- Real-time metric tracking (loss, accuracy, etc.)
- Resource usage monitoring (CPU, GPU, memory)
- Progress updates and ETA estimation
- Alert thresholds for detecting problems
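Illustrative example (a minimal sketch; ConsoleTrainingMonitor is a hypothetical ITrainingMonitor<double> implementation used to show where the monitor plugs in):
// Stream training metrics while the model trains
ITrainingMonitor<double> monitor = new ConsoleTrainingMonitor<double>(); // placeholder implementation
var result = new AiModelBuilder<double, Matrix<double>, Vector<double>>()
.ConfigureTrainingMonitor(monitor)
.ConfigureModel(new LogisticRegression<double>())
.Build(X, y);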
ConfigureTrainingPipeline(TrainingPipelineConfiguration<T, TInput, TOutput>?)
Configures a multi-stage training pipeline for advanced training workflows.
IAiModelBuilder<T, TInput, TOutput> ConfigureTrainingPipeline(TrainingPipelineConfiguration<T, TInput, TOutput>? configuration = null)
Parameters
configurationTrainingPipelineConfiguration<T, TInput, TOutput>The training pipeline configuration defining the stages to execute.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
ConfigureTrainingPipeline enables advanced multi-stage training workflows where each stage can have its own training method, optimizer, learning rate, and dataset. Stages execute sequentially, with each stage's output model becoming the next stage's input.
For Beginners: Think of this as a recipe with multiple cooking steps. Just like you might marinate, then sear, then bake - training can have multiple phases where each phase teaches the model something different.
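Illustrative example (a minimal sketch; the stage definitions are omitted because the configuration's stage API is not shown here):
// Run training as a sequence of stages, each with its own method, optimizer, and data
var pipelineConfig = new TrainingPipelineConfiguration<double, Matrix<double>, Vector<double>>();
// ... add stages to pipelineConfig (for example, pretraining followed by fine-tuning) ...
var result = await builder
.ConfigureModel(model)
.ConfigureTrainingPipeline(pipelineConfig)
.BuildAsync();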
ConfigureUncertaintyQuantification(UncertaintyQuantificationOptions?, UncertaintyCalibrationData<TInput, TOutput>?)
Configures uncertainty quantification (UQ) for inference-time uncertainty estimates.
IAiModelBuilder<T, TInput, TOutput> ConfigureUncertaintyQuantification(UncertaintyQuantificationOptions? options = null, UncertaintyCalibrationData<TInput, TOutput>? calibrationData = null)
Parameters
optionsUncertaintyQuantificationOptionsOptional options; when null, defaults are used and UQ is enabled.
calibrationDataUncertaintyCalibrationData<TInput, TOutput>Optional calibration data for conformal/prediction calibration features.
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
Uncertainty quantification augments point predictions with uncertainty signals (for example: variance and predictive entropy). This can be used to detect low-confidence outputs and make safer decisions.
Some uncertainty features optionally use a separate calibration dataset (held out from training) to compute calibration artifacts (for example: conformal thresholds or temperature scaling).
For Beginners: This enables a "confidence signal" alongside predictions. If you're not sure what to choose, call this method with no parameters to enable industry-standard defaults.
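Example (this sketch relies only on the defaults described above):
// Enable uncertainty estimates alongside predictions using industry-standard defaults
var result = await builder
.ConfigureModel(model)
.ConfigureUncertaintyQuantification()
.BuildAsync();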
ConfigureVersioning(VersioningConfig?)
Configures model versioning for managing multiple versions of the same model.
IAiModelBuilder<T, TInput, TOutput> ConfigureVersioning(VersioningConfig? config = null)
Parameters
configVersioningConfigThe versioning configuration (optional, uses "latest" version if null).
Returns
- IAiModelBuilder<T, TInput, TOutput>
The builder instance for method chaining.
Remarks
For Beginners: Versioning helps you manage different versions of your model as it improves over time. You can:
- Keep track of which version is deployed
- Roll back to previous versions if needed
- Use "latest" to always get the newest version
- Compare performance between versions
Example:
// Enable versioning (defaults to "latest")
var result = await builder
.ConfigureModel(model)
.ConfigureVersioning()
.BuildAsync();
DeserializeModel(byte[])
Reconstructs a model from a previously serialized byte array.
AiModelResult<T, TInput, TOutput> DeserializeModel(byte[] modelData)
Parameters
modelDatabyte[]The byte array containing the serialized model data.
Returns
- AiModelResult<T, TInput, TOutput>
The reconstructed predictive model.
Remarks
This method converts a byte array back into a usable model object.
For Beginners: Deserialization is like unpacking your model from the digital suitcase created by SerializeModel. It takes the compact byte format and rebuilds your complete model so you can use it for making predictions again.
This is the counterpart to SerializeModel - first you serialize to create the byte array, then you deserialize to recreate the model when needed.
For example, if you stored your model in a database or received it over a network, you would use this method to convert it back into a working model.
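Illustrative example (a minimal sketch; the file name is only a placeholder for wherever the serialized bytes came from):
// Rebuild a model from bytes produced earlier by SerializeModel
byte[] modelData = File.ReadAllBytes("model.bin"); // or bytes loaded from a database / received over a network
var restoredModel = builder.DeserializeModel(modelData);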
LoadModel(string)
Loads a previously saved model from a file.
AiModelResult<T, TInput, TOutput> LoadModel(string filePath)
Parameters
filePathstringThe file path where the model is stored.
Returns
- AiModelResult<T, TInput, TOutput>
The loaded predictive model.
Remarks
This method retrieves a model that was previously saved to disk.
For Beginners: This method lets you load a previously saved model from a file. It's like opening a document you worked on earlier. Once loaded, you can immediately use the model to make predictions without having to train it again.
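Illustrative example (a minimal sketch; the file path is a placeholder):
// Reload a model saved earlier with SaveModel; no retraining needed
var loadedModel = builder.LoadModel("house-prices.model");
// loadedModel is ready to use with Predict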
Predict(TInput, AiModelResult<T, TInput, TOutput>)
Uses a trained model to make predictions on new data.
TOutput Predict(TInput newData, AiModelResult<T, TInput, TOutput> model)
Parameters
newDataTInputThe new input data to make predictions for.
modelAiModelResult<T, TInput, TOutput>The trained model to use for making predictions.
Returns
- TOutput
The predicted output values for the new input data.
Remarks
This method applies a previously trained model to new data to generate predictions.
For Beginners: Once your model is trained, you can use it to make predictions on new data it hasn't seen before. For example, if you trained a model to predict house prices based on features like size and location, you can now give it information about new houses and it will estimate their prices.
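Illustrative example (a minimal sketch; result is the AiModelResult returned by Build or BuildAsync, and newData stands in for any TInput value shaped like the training data):
// Apply a trained model to data it has never seen
var predictions = builder.Predict(newData, result);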
SaveModel(AiModelResult<T, TInput, TOutput>, string)
Saves a trained model to a file.
void SaveModel(AiModelResult<T, TInput, TOutput> model, string filePath)
Parameters
modelAiModelResult<T, TInput, TOutput>The trained model to save.
filePathstringThe file path where the model should be saved.
Remarks
This method persists a model to disk so it can be reused later without retraining.
For Beginners: Training a model can take a lot of time and computing power. This method lets you save your trained model to a file on your computer, so you can use it again later without having to retrain it. It's like saving a document you've been working on.
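Illustrative example (a minimal sketch; the file path is a placeholder):
// Persist the trained model so it can be reused without retraining
var result = builder
.ConfigureModel(model)
.Build(X, y);
builder.SaveModel(result, "house-prices.model");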
SerializeModel(AiModelResult<T, TInput, TOutput>)
Converts a trained model into a byte array for storage or transmission.
byte[] SerializeModel(AiModelResult<T, TInput, TOutput> model)
Parameters
modelAiModelResult<T, TInput, TOutput>The trained model to serialize.
Returns
- byte[]
A byte array containing the serialized model data.
Remarks
This method transforms a model into a compact binary format that can be stored in memory, databases, or transmitted over networks.
For Beginners: Serialization is like packing your model into a compact digital suitcase. Instead of saving to a file (like with SaveModel), this method converts your model into a series of bytes that can be:
- Stored in a database
- Sent over the internet
- Kept in computer memory
- Embedded in other applications
This is useful when you need to store models in places other than files or when you want to send models between different parts of your application.
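Illustrative example (a minimal sketch; where the bytes end up, whether a file, a database, or a network message, is up to you):
// Pack the trained model into bytes for storage or transmission
byte[] modelData = builder.SerializeModel(result);
File.WriteAllBytes("model.bin", modelData); // or write to a database / send over a network
// Later, rebuild the model with DeserializeModel(modelData)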