Class JitCompilerOptions
- Namespace: AiDotNet.JitCompiler
- Assembly: AiDotNet.dll
Configuration options for the JIT compiler.
public class JitCompilerOptions
- Inheritance
- object → JitCompilerOptions
Remarks
For Beginners: Settings to control how the JIT compiler works.
You can:
- Enable/disable specific optimizations
- Turn caching on/off
- Configure compilation behavior
- Control how unsupported operations are handled
For most users, the defaults work great!
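As a sketch of how these settings are typically set, the object-initializer below uses only property names documented on this page; defaults shown in comments come from the property descriptions that follow.

```csharp
using AiDotNet.JitCompiler;

var options = new JitCompilerOptions
{
    EnableCaching = true,          // reuse compiled graphs (default: true)
    EnableOperationFusion = true,  // fuse adjacent operations (default: true)
    EnableSIMDHints = false        // not yet fully implemented (default: false)
};
```

How the options object is passed to the compiler depends on the rest of the JIT API and is not shown here.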
Properties
EnableAdaptiveFusion
Gets or sets a value indicating whether to enable adaptive fusion strategies. Default: false (currently uses standard fusion when enabled).
public bool EnableAdaptiveFusion { get; set; }
Property Value
bool
Remarks
Status: Architecture implemented, delegates to standard fusion. Adaptive fusion will intelligently select which operations to fuse based on graph structure, tensor sizes, and hardware characteristics.
EnableAutoTuning
Gets or sets a value indicating whether to enable auto-tuning of optimizations. Default: true.
public bool EnableAutoTuning { get; set; }
Property Value
bool
Remarks
Auto-tuning automatically determines the best optimization configuration for each graph based on graph analysis, tensor sizes, and operation types. It selects the optimal combination of fusion, unrolling, and vectorization strategies.
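A hedged sketch of opting out of auto-tuning to pick optimizations by hand. The property names are from this page; whether the individual flags fully take effect when auto-tuning is disabled is an assumption about the compiler's behavior.

```csharp
// Assumption: with EnableAutoTuning off, the individual optimization
// flags below are honored as set rather than chosen automatically.
var options = new JitCompilerOptions
{
    EnableAutoTuning = false,
    EnableLoopUnrolling = true,
    EnableConstantFolding = true,
    EnableDeadCodeElimination = true
};
```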
EnableCaching
Gets or sets a value indicating whether to enable caching of compiled graphs. Default: true.
public bool EnableCaching { get; set; }
Property Value
bool
EnableConstantFolding
Gets or sets a value indicating whether to enable constant folding optimization. Default: true.
public bool EnableConstantFolding { get; set; }
Property Value
bool
EnableDeadCodeElimination
Gets or sets a value indicating whether to enable dead code elimination. Default: true.
public bool EnableDeadCodeElimination { get; set; }
Property Value
bool
EnableLoopUnrolling
Gets or sets a value indicating whether to enable loop unrolling optimization. Default: true.
public bool EnableLoopUnrolling { get; set; }
Property Value
bool
Remarks
Loop unrolling improves performance for small, fixed-size loops by eliminating loop overhead and enabling better instruction pipelining. The optimizer automatically determines which loops benefit from unrolling based on tensor size and operation type.
EnableMemoryPooling
Gets or sets a value indicating whether to enable memory pooling for tensors. Default: true.
public bool EnableMemoryPooling { get; set; }
Property Value
bool
Remarks
For Beginners: Reuses tensor memory to reduce allocations.
Memory pooling improves performance by:
- Reducing garbage collection pauses
- Avoiding repeated memory allocations
- Improving cache locality
This is especially beneficial for training loops that create many temporary tensors.
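A minimal sketch of tightening the pool for a memory-constrained environment. The three property names (EnableMemoryPooling, MaxElementsToPool, MaxPoolSizePerShape) are documented on this page; the specific limits chosen are illustrative, not recommendations.

```csharp
var options = new JitCompilerOptions
{
    EnableMemoryPooling = true,
    MaxElementsToPool = 1_000_000, // skip pooling tensors above ~4 MB (float32)
    MaxPoolSizePerShape = 4        // keep at most 4 cached buffers per shape
};
```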
EnableOperationFusion
Gets or sets a value indicating whether to enable operation fusion. Default: true.
public bool EnableOperationFusion { get; set; }
Property Value
bool
EnableSIMDHints
Gets or sets a value indicating whether to enable SIMD vectorization hints. Default: false (not yet fully implemented).
public bool EnableSIMDHints { get; set; }
Property Value
bool
Remarks
Status: Architecture planned, implementation pending. SIMD hints guide the code generator to use vector instructions (AVX, AVX-512) for better performance on element-wise operations.
LogUnsupportedOperations
Gets or sets a value indicating whether to log warnings for unsupported operations. Default: true.
public bool LogUnsupportedOperations { get; set; }
Property Value
bool
Remarks
For Beginners: When enabled, you'll see warnings in logs when operations can't be JIT compiled. This helps you:
- Identify which operations need fallback
- Understand performance implications
- Know when to request JIT support for new operation types
MaxElementsToPool
Gets or sets the maximum total elements in a tensor to pool. Tensors larger than this will not be pooled. Default: 10,000,000 (about 40MB for float32).
public int MaxElementsToPool { get; set; }
Property Value
int
MaxPoolSizePerShape
Gets or sets the maximum number of tensor buffers to keep per shape. Default: 10.
public int MaxPoolSizePerShape { get; set; }
Property Value
int
UnsupportedLayerHandling
Gets or sets how the JIT compiler handles unsupported operations. Default: Fallback (use interpreted execution for entire graph if any op is unsupported).
public UnsupportedLayerHandling UnsupportedLayerHandling { get; set; }
Property Value
UnsupportedLayerHandling
Remarks
For Beginners: When your model has operations the JIT can't compile, this setting controls what happens:
- Throw: Stop with an error - use when you need all ops compiled
- Fallback: (Default) Run the whole graph interpreted - always works
- Hybrid: JIT the supported ops, interpret the rest - best performance
- Skip: Ignore unsupported ops - dangerous, may give wrong results
Hybrid mode is recommended for production when you have mixed-support graphs. It gives you JIT speed for supported operations while still handling all ops correctly.
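The recommendation above can be sketched as follows. The enum member names (Throw, Fallback, Hybrid, Skip) are taken from the list in these remarks; everything else about the UnsupportedLayerHandling enum is assumed.

```csharp
// Sketch: opt into Hybrid handling for a graph with mixed JIT support,
// keeping warnings on so fallback operations show up in the logs.
var options = new JitCompilerOptions
{
    UnsupportedLayerHandling = UnsupportedLayerHandling.Hybrid,
    LogUnsupportedOperations = true
};
```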