Class ExportConfig

Namespace: AiDotNet.Deployment.Configuration
Assembly: AiDotNet.dll

Configuration for exporting models to different formats and platforms.

public class ExportConfig

Inheritance: object → ExportConfig
Remarks
For Beginners: After training an AI model, you often need to export it to a specific format depending on where it will run. Think of it like exporting a document to PDF, Word, or plain text - same content, different format for different uses.
Export Formats:
- ONNX: Universal format that works everywhere (recommended for most cases)
- TensorRT: NVIDIA GPUs only, maximum performance on NVIDIA hardware
- CoreML: Apple devices (iPhone, iPad, Mac), optimized for Apple Silicon
- TFLite: Android devices and edge hardware, very efficient
- WASM: Run models in web browsers without plugins
When to export:
- Deploying to production servers (ONNX or TensorRT)
- Mobile apps (CoreML for iOS, TFLite for Android)
- Edge devices like Raspberry Pi (TFLite)
- Web applications (WASM)
Optimization: Most export formats support optimization and quantization to make models smaller and faster.
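Putting the options below together, a configuration for a mobile deployment might look like the following sketch. The property and enum member names (e.g. QuantizationMode.Int8, TargetPlatform.NNAPI) are taken from this page; how the config is passed to an exporter is not shown here and will depend on the rest of the AiDotNet API.

```csharp
using AiDotNet.Deployment.Configuration;

// Sketch: export settings for an Android (NNAPI) deployment.
var config = new ExportConfig
{
    TargetPlatform = TargetPlatform.NNAPI, // Android accelerator
    Quantization = QuantizationMode.Int8,  // maximum compression for edge devices
    BatchSize = 1,                         // one input at a time (real-time inference)
    OptimizeModel = true,                  // slower export, faster inference
    ValidateAfterExport = true,            // catch export errors before deployment
    IncludeMetadata = true,
    ModelName = "HousePricePredictor",
    ModelVersion = "1.2.3",
    ModelDescription = "Predicts house prices from listing features."
};
```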
Properties
BatchSize
Gets or sets the batch size for static shapes (default: 1).
public int BatchSize { get; set; }
Property Value
int
Remarks
For Beginners: How many inputs the exported model processes at once. With static shapes, the batch dimension is fixed to this value at export time. 1 = process one input at a time (typical for real-time inference). Higher values can improve throughput but use more memory.
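As a quick sketch of the two common choices (property name from this page; the values are illustrative):

```csharp
// Real-time inference: one input per call (the default).
var realtime = new ExportConfig { BatchSize = 1 };

// Offline/batch scoring: a larger fixed batch for throughput, at the cost of memory.
var batch = new ExportConfig { BatchSize = 32 };
```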
IncludeMetadata
Gets or sets whether to include model metadata (default: true).
public bool IncludeMetadata { get; set; }
Property Value
bool
Remarks
For Beginners: Include name, version, and description in the exported model. Helpful for documentation. Minimal file size impact.
ModelDescription
Gets or sets the model description to include in metadata (optional).
public string? ModelDescription { get; set; }
Property Value
string?
Remarks
For Beginners: Brief description of what the model does. Useful documentation for others (or your future self).
ModelName
Gets or sets the model name to include in metadata (optional).
public string? ModelName { get; set; }
Property Value
string?
Remarks
For Beginners: A friendly name for your model (e.g., "HousePricePredictor"). Helps identify which model is which when you have many.
ModelVersion
Gets or sets the model version to include in metadata (optional).
public string? ModelVersion { get; set; }
Property Value
string?
Remarks
For Beginners: Version number (e.g., "1.2.3"). Helps track which version of the model you're using.
OptimizeModel
Gets or sets whether to optimize the exported model (default: true).
public bool OptimizeModel { get; set; }
Property Value
bool
Remarks
For Beginners: Optimization makes models run faster. Recommended to keep true. May increase export time but improves inference speed.
Quantization
Gets or sets the quantization mode for export (default: None).
public QuantizationMode Quantization { get; set; }
Property Value
QuantizationMode
Remarks
For Beginners: Quantization compresses the model. None = full precision, Float16 = half precision, Int8 = maximum compression (smallest and fastest, but may slightly reduce accuracy). See QuantizationConfig for more details.
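A rough sketch of how the modes trade off (the size ratios follow from the standard bit widths; actual savings depend on the export format and which tensors get quantized):

```csharp
// None    -> 32-bit floats: full size, full precision (the default).
// Float16 -> 16-bit floats: roughly half the size, minimal accuracy loss.
// Int8    -> 8-bit integers: roughly a quarter of the size, may cost some accuracy.
var config = new ExportConfig { Quantization = QuantizationMode.Float16 };
```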
TargetPlatform
Gets or sets the target platform for export (default: CPU).
public TargetPlatform TargetPlatform { get; set; }
Property Value
TargetPlatform
Remarks
For Beginners: Choose where your model will run: CPU for general servers, GPU for graphics cards, TensorRT for NVIDIA GPUs, CoreML for Apple devices, NNAPI for Android, etc.
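For illustration, one config per deployment target (the enum member names CPU, TensorRT, and CoreML follow the remarks above):

```csharp
// NVIDIA GPU server: TensorRT for maximum performance on NVIDIA hardware.
var serverConfig = new ExportConfig { TargetPlatform = TargetPlatform.TensorRT };

// Apple devices (iPhone, iPad, Mac): CoreML, optimized for Apple Silicon.
var iosConfig = new ExportConfig { TargetPlatform = TargetPlatform.CoreML };

// General servers without accelerators: CPU (the default).
var generalConfig = new ExportConfig { TargetPlatform = TargetPlatform.CPU };
```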
ValidateAfterExport
Gets or sets whether to validate the exported model (default: true).
public bool ValidateAfterExport { get; set; }
Property Value
bool
Remarks
For Beginners: Check that the exported model works correctly. Recommended to keep true - catches export errors before deployment. Adds export time but prevents broken models.