Class JitCompiler

Namespace
AiDotNet.JitCompiler
Assembly
AiDotNet.dll

Just-In-Time compiler for computation graphs.

public class JitCompiler : IDisposable
Inheritance
object → JitCompiler
Implements
IDisposable

Remarks

The JitCompiler is the main entry point for JIT compilation in AiDotNet. It provides a high-level API for compiling computation graphs to optimized executable code. The compiler automatically handles:

  • IR graph construction from ComputationNode graphs
  • Optimization passes (constant folding, dead code elimination, operation fusion)
  • Code generation and compilation
  • Caching of compiled graphs for reuse

For Beginners: This compiles your neural network graphs to run much faster.

Think of it like this:

  • Without JIT: Your model runs by interpreting each operation step-by-step (slow)
  • With JIT: Your model is compiled to optimized machine code (fast!)

How to use:

  1. Create a JitCompiler instance (once)
  2. Pass your computation graph to Compile()
  3. Get back a compiled function
  4. Call that function with your inputs (runs 5-10x faster!)

Example:

  var jit = new JitCompiler();
  var compiled = jit.Compile(myGraph, inputs);
  var results = compiled(inputTensors); // Fast execution!

The JIT compiler:

  • Automatically optimizes your graph
  • Caches compiled code for reuse
  • Handles all the complexity internally
  • Just works!

Expected speedup: 5-10x for typical neural networks

Constructors

JitCompiler()

Initializes a new instance of the JitCompiler class with default options.

public JitCompiler()

Remarks

Creates a new JIT compiler with standard optimization passes enabled:

  • Constant folding
  • Dead code elimination
  • Operation fusion

For Beginners: Creates a JIT compiler ready to use.

The compiler is created with good default settings:

  • All standard optimizations enabled
  • Caching enabled for fast repeated compilation
  • Ready to compile graphs immediately
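
Because JitCompiler implements IDisposable, a minimal setup looks like this:

  // Create once and reuse for all compilations; Dispose releases the compiler's resources.
  using var jit = new JitCompiler();
  var compiled = jit.Compile(output, inputs);
  var results = compiled(inputTensors);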

JitCompiler(JitCompilerOptions)

Initializes a new instance of the JitCompiler class with custom options.

public JitCompiler(JitCompilerOptions options)

Parameters

options JitCompilerOptions

Configuration options for the compiler.

Remarks

Creates a new JIT compiler with specified options. This allows you to:

  • Enable/disable specific optimizations
  • Configure caching behavior
  • Control compilation settings

For Beginners: Creates a JIT compiler with custom settings.

Use this if you want to:

  • Turn off certain optimizations for debugging
  • Disable caching for testing
  • Customize compilation behavior

For most users, the default constructor is fine!
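
If you do need custom settings, a sketch looks like this; the option names below are illustrative assumptions, so check JitCompilerOptions for the actual members:

  // NOTE: the property names here are hypothetical placeholders.
  var options = new JitCompilerOptions
  {
      EnableOperationFusion = false, // hypothetical: disable fusion while debugging
      EnableCaching = false          // hypothetical: force fresh compilation in tests
  };
  using var jit = new JitCompiler(options);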

Properties

TensorPool

Gets the tensor memory pool if memory pooling is enabled.

public TensorPool? TensorPool { get; }

Property Value

TensorPool

Remarks

For Beginners: Access the memory pool for manual buffer management.

Usually you don't need to use this directly; the JIT compiler manages memory automatically. But if you want fine-grained control over memory allocation in your code, you can use this pool.

Example:

  if (jit.TensorPool != null)
  {
      var buffer = jit.TensorPool.Rent<float>(1000);
      // Use buffer...
      jit.TensorPool.Return(buffer);
  }

Methods

AnalyzeCompatibility<T>(ComputationNode<T>, List<ComputationNode<T>>)

Analyzes a computation graph to determine JIT compatibility.

public JitCompatibilityResult AnalyzeCompatibility<T>(ComputationNode<T> outputNode, List<ComputationNode<T>> inputs)

Parameters

outputNode ComputationNode<T>

The output node of the computation graph.

inputs List<ComputationNode<T>>

The input nodes to the computation graph.

Returns

JitCompatibilityResult

A compatibility result describing which operations are supported.

Type Parameters

T

The numeric type for tensor elements.

Remarks

For Beginners: Call this before compiling to see if your graph is JIT-compatible.

This method:

  • Walks through your entire computation graph
  • Checks each operation against the supported list
  • Reports which operations will be JIT-compiled vs. need fallback
  • Tells you if hybrid mode is available

Example:

  var compat = jit.AnalyzeCompatibility(output, inputs);
  if (compat.IsFullySupported)
  {
      Console.WriteLine("Graph can be fully JIT compiled!");
  }
  else
  {
      Console.WriteLine($"Partial support: {compat.SupportedPercentage:F0}%");
      foreach (var unsupported in compat.UnsupportedOperations)
      {
          Console.WriteLine($"  - {unsupported}");
      }
  }

ClearCache()

Clears the compiled graph cache.

public void ClearCache()

Remarks

For Beginners: This clears all cached compiled graphs.

Use this when:

  • You want to free memory
  • You're testing and want fresh compilations
  • You've changed compilation settings

After clearing, the next Compile() will be slower but subsequent calls with the same graph will be fast again (cached).
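
For example (a sketch of the recompile-after-clear behavior described above):

  jit.ClearCache();                          // drop all cached compiled graphs
  var compiled = jit.Compile(graph, inputs); // slower: compiled from scratch
  var again = jit.Compile(graph, inputs);    // fast again: served from the cache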

ClearTensorPool()

Clears the tensor memory pool, releasing all cached buffers.

public void ClearTensorPool()

CompileBackwardWithStats<T>(ComputationNode<T>, List<ComputationNode<T>>)

Compiles the backward pass and returns compilation statistics.

public (Func<Tensor<T>[], Tensor<T>[]> CompiledBackward, CompilationStats Stats) CompileBackwardWithStats<T>(ComputationNode<T> outputNode, List<ComputationNode<T>> inputs)

Parameters

outputNode ComputationNode<T>

The output node of the computation graph.

inputs List<ComputationNode<T>>

The input nodes to compute gradients for.

Returns

(Func<Tensor<T>[], Tensor<T>[]> CompiledBackward, CompilationStats Stats)

A tuple of (compiled backward function, compilation statistics).

Type Parameters

T

The numeric type for tensor elements.

Remarks

For Beginners: Compiles gradient computation and shows optimization details.

Use this to:

  • See how much the backward pass was optimized
  • Understand what optimizations were applied
  • Debug gradient computation issues
  • Monitor compilation performance

The statistics tell you:

  • How many gradient operations were generated
  • How many operations after optimization
  • What optimizations were applied (fusion of backward ops!)
  • Cache hit information
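
A usage sketch (printing the stats object is shown as a neutral way to inspect it, since the exact CompilationStats members aren't listed here):

  var (backward, stats) = jit.CompileBackwardWithStats(output, inputs);
  Console.WriteLine(stats);               // optimization details for the backward pass
  var gradients = backward(inputTensors); // gradients for each input node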

Exceptions

ArgumentNullException

Thrown if outputNode or inputs is null.

CompileBackward<T>(ComputationNode<T>, List<ComputationNode<T>>)

Compiles the backward pass (gradient computation) of a computation graph.

public Func<Tensor<T>[], Tensor<T>[]> CompileBackward<T>(ComputationNode<T> outputNode, List<ComputationNode<T>> inputs)

Parameters

outputNode ComputationNode<T>

The output node of the computation graph.

inputs List<ComputationNode<T>>

The input nodes to compute gradients for.

Returns

Func<Tensor<T>[], Tensor<T>[]>

A compiled function that computes gradients for each input node.

Type Parameters

T

The numeric type for tensor elements.

CompileWithFallback<T>(ComputationNode<T>, List<ComputationNode<T>>)

Compiles a computation graph with automatic fallback to interpreted execution.

public (Func<Tensor<T>[], Tensor<T>[]> Func, bool WasJitCompiled, string? Message) CompileWithFallback<T>(ComputationNode<T> outputNode, List<ComputationNode<T>> inputs)

Parameters

outputNode ComputationNode<T>

The output node of the computation graph.

inputs List<ComputationNode<T>>

The input nodes to the computation graph.

Returns

(Func<Tensor<T>[], Tensor<T>[]> Func, bool WasJitCompiled, string? Message)

A tuple containing:

  • The executable function (JIT compiled or interpreted fallback)
  • Whether JIT compilation succeeded
  • Any warning or error message

Type Parameters

T

The numeric type for tensor elements.

Remarks

For Beginners: This is the most robust way to compile a graph.

It tries JIT compilation first. If that fails, it automatically falls back to interpreted execution (slower but always works).

You get the best performance when JIT works, and guaranteed execution when it doesn't.

Example:

  var (func, wasJitted, message) = jit.CompileWithFallback(output, inputs);
  if (!wasJitted)
  {
      Console.WriteLine($"Using interpreted fallback: {message}");
  }
  // func is always usable!
  var result = func(inputTensors);

CompileWithStats<T>(ComputationNode<T>, List<ComputationNode<T>>)

Compiles a computation graph and returns compilation statistics.

public (Func<Tensor<T>[], Tensor<T>[]> CompiledFunc, CompilationStats Stats) CompileWithStats<T>(ComputationNode<T> outputNode, List<ComputationNode<T>> inputs)

Parameters

outputNode ComputationNode<T>

The output node of the computation graph.

inputs List<ComputationNode<T>>

The input nodes to the computation graph.

Returns

(Func<Tensor<T>[], Tensor<T>[]> CompiledFunc, CompilationStats Stats)

A tuple of (compiled function, compilation statistics).

Type Parameters

T

The numeric type for tensor elements.

Remarks

For Beginners: This compiles your graph and tells you what optimizations were applied.

Use this when you want to:

  • See how much the graph was optimized
  • Debug compilation issues
  • Understand what the JIT compiler is doing

The statistics tell you:

  • How many operations were in the original graph
  • How many operations after optimization
  • What optimizations were applied
  • How much speedup to expect
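
A usage sketch (as above, treat any CompilationStats member access as illustrative):

  var (compiled, stats) = jit.CompileWithStats(output, inputs);
  Console.WriteLine(stats); // original vs. optimized operation counts, applied passes
  var results = compiled(inputTensors);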

Exceptions

ArgumentNullException

Thrown if outputNode or inputs is null.

CompileWithUnsupportedHandling<T>(ComputationNode<T>, List<ComputationNode<T>>)

Compiles a computation graph with intelligent handling of unsupported operations.

public HybridCompilationResult<T> CompileWithUnsupportedHandling<T>(ComputationNode<T> outputNode, List<ComputationNode<T>> inputs)

Parameters

outputNode ComputationNode<T>

The output node of the computation graph.

inputs List<ComputationNode<T>>

The input nodes to the computation graph.

Returns

HybridCompilationResult<T>

A result containing the compiled function, whether JIT was used, compatibility information, and any warnings.

Type Parameters

T

The numeric type for tensor elements.

Remarks

For Beginners: This is the recommended way to compile graphs with mixed support.

This method automatically:

  1. Analyzes your graph for JIT compatibility
  2. Based on UnsupportedLayerHandling setting:
    • Throw: Fails if any operation is unsupported
    • Fallback: Uses interpreted execution if anything is unsupported
    • Hybrid: JIT-compiles what it can, interprets the rest
    • Skip: Ignores unsupported operations (dangerous!)
  3. Returns a function that always works, plus useful diagnostics

Example:

  var result = jit.CompileWithUnsupportedHandling(output, inputs);
  if (!result.IsFullyJitCompiled)
  {
      Console.WriteLine($"Hybrid mode: {result.Compatibility.SupportedPercentage:F0}% JIT compiled");
  }
  var predictions = result.CompiledFunc(inputTensors);

Compile<T>(ComputationNode<T>, List<ComputationNode<T>>)

Compiles a computation graph to an optimized executable function.

public Func<Tensor<T>[], Tensor<T>[]> Compile<T>(ComputationNode<T> outputNode, List<ComputationNode<T>> inputs)

Parameters

outputNode ComputationNode<T>

The output node of the computation graph.

inputs List<ComputationNode<T>>

The input nodes to the computation graph.

Returns

Func<Tensor<T>[], Tensor<T>[]>

A compiled function that takes input tensors and returns output tensors.

Type Parameters

T

The numeric type for tensor elements.
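
Remarks

Example (the same workflow shown in the class remarks):

  var compiled = jit.Compile(output, inputs); // compiling the same graph again is a cache hit
  var results = compiled(inputTensors);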

Dispose()

Releases all resources used by the JIT compiler.

public void Dispose()

GetCacheStats()

Gets statistics about the compilation cache.

public CacheStats GetCacheStats()

Returns

CacheStats

Cache statistics.

Remarks

For Beginners: This tells you how many graphs are cached.

Useful for:

  • Monitoring memory usage
  • Understanding cache efficiency
  • Debugging caching behavior
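
For example (a minimal sketch; see CacheStats for the exact members):

  var stats = jit.GetCacheStats();
  Console.WriteLine(stats); // e.g. how many compiled graphs are currently cached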

GetSupportedOperationTypes()

Gets the set of operation types that are fully supported by the JIT compiler.

public static HashSet<OperationType> GetSupportedOperationTypes()

Returns

HashSet<OperationType>

A set of supported operation type enums.

Remarks

For Beginners: This tells you which operations can be JIT compiled.

Supported operations include:

  • Basic math: Add, Subtract, Multiply, Divide, Power, Negate
  • Math functions: Exp, Log, Sqrt
  • Activations: ReLU, Sigmoid, Tanh, Softmax
  • Matrix ops: MatMul, Transpose
  • Convolutions: Conv2D, ConvTranspose2D, DepthwiseConv2D
  • Pooling: MaxPool2D, AvgPool2D
  • Normalization: LayerNorm, BatchNorm
  • And more...

If your operation isn't listed, it will need fallback execution.
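
For example, to check ahead of time whether an operation can be JIT compiled (the enum member name MatMul is assumed from the list above):

  var supported = JitCompiler.GetSupportedOperationTypes();
  if (supported.Contains(OperationType.MatMul)) // member name assumed for illustration
  {
      Console.WriteLine("MatMul will be JIT compiled.");
  }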

GetTensorPoolStats()

Gets statistics about the tensor memory pool.

public TensorPoolStats? GetTensorPoolStats()

Returns

TensorPoolStats

Pool statistics, or null if memory pooling is disabled.

TryCompile<T>(ComputationNode<T>, List<ComputationNode<T>>, out Func<Tensor<T>[], Tensor<T>[]>?, out string?)

Attempts to compile a computation graph without throwing exceptions.

public bool TryCompile<T>(ComputationNode<T> outputNode, List<ComputationNode<T>> inputs, out Func<Tensor<T>[], Tensor<T>[]>? compiledFunc, out string? error)

Parameters

outputNode ComputationNode<T>

The output node of the computation graph.

inputs List<ComputationNode<T>>

The input nodes to the computation graph.

compiledFunc Func<Tensor<T>[], Tensor<T>[]>

When this method returns true, contains the compiled function.

error string

When this method returns false, contains the error message.

Returns

bool

True if compilation succeeded, false otherwise.

Type Parameters

T

The numeric type for tensor elements.

Remarks

For Beginners: This is a safe version of Compile that won't crash your program.

Instead of throwing an exception when something goes wrong, it returns false and tells you what went wrong through the error parameter.

Example:

  if (jit.TryCompile(output, inputs, out var compiled, out var error))
  {
      // Use compiled function
      var result = compiled(inputTensors);
  }
  else
  {
      // Handle error gracefully
      Console.WriteLine($"JIT compilation failed: {error}");
      // Fall back to interpreted execution
  }