
Enum OnnxExecutionProvider

Namespace: AiDotNet.Onnx
Assembly: AiDotNet.dll

Specifies the execution provider (hardware accelerator) for ONNX model inference.

public enum OnnxExecutionProvider
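
A minimal sketch of choosing a value from this enum, for example based on the host platform. The selection logic below is illustrative only; how the chosen value is then handed to AiDotNet is not shown here.

using System;
using AiDotNet.Onnx;

// Illustrative only: pick a provider based on the current OS.
// Auto is usually the safest default; it falls back to CPU when no GPU provider is available.
OnnxExecutionProvider provider =
    OperatingSystem.IsMacOS()     ? OnnxExecutionProvider.CoreML    // Apple Silicon
    : OperatingSystem.IsWindows() ? OnnxExecutionProvider.DirectML  // cross-vendor Windows GPUs
    : OnnxExecutionProvider.Auto;                                    // let the runtime decide

Console.WriteLine($"Selected provider: {provider} (value {(int)provider})");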

Fields

Auto = 100

Automatically selects the best available provider. Falls back through TensorRT → CUDA → DirectML → CPU.

CoreML = 4

Apple CoreML execution provider for Apple Silicon. Optimized for M1/M2/M3 chips on macOS.

Cpu = 0

CPU execution provider (default, always available).

Cuda = 1

NVIDIA CUDA execution provider for GPU acceleration. Requires CUDA toolkit and compatible NVIDIA GPU.

DirectML = 3

DirectML execution provider for Windows GPU acceleration. Works with AMD, Intel, and NVIDIA GPUs on Windows.

NNAPI = 7

NNAPI (Android Neural Networks API) execution provider for Android devices.

OpenVINO = 5

OpenVINO execution provider for Intel hardware. Optimized for Intel CPUs and integrated graphics.

ROCm = 6

ROCm execution provider for AMD GPUs.

TensorRT = 2

NVIDIA TensorRT execution provider for optimized GPU inference. Provides additional optimizations on top of CUDA.
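
In practice these values correspond to execution-provider registrations on an ONNX Runtime session. The sketch below assumes the Microsoft.ML.OnnxRuntime package (with its GPU and DirectML variants where noted); ProviderMapping and ConfigureProvider are hypothetical helper names used for illustration, not part of the AiDotNet API.

using AiDotNet.Onnx;
using Microsoft.ML.OnnxRuntime;

internal static class ProviderMapping
{
    // Illustrative mapping from OnnxExecutionProvider to ONNX Runtime
    // SessionOptions calls. ConfigureProvider is a hypothetical helper.
    public static SessionOptions ConfigureProvider(OnnxExecutionProvider provider)
    {
        var options = new SessionOptions();
        switch (provider)
        {
            case OnnxExecutionProvider.TensorRT:
                options.AppendExecutionProvider_Tensorrt(0); // requires a TensorRT-enabled ONNX Runtime build
                break;
            case OnnxExecutionProvider.Cuda:
                options.AppendExecutionProvider_CUDA(0);     // Microsoft.ML.OnnxRuntime.Gpu package
                break;
            case OnnxExecutionProvider.DirectML:
                options.AppendExecutionProvider_DML(0);      // Microsoft.ML.OnnxRuntime.DirectML package
                break;
            case OnnxExecutionProvider.Cpu:
            default:
                options.AppendExecutionProvider_CPU(0);      // CPU is always available
                break;
        }
        return options;
    }
}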

Remarks

Execution providers allow ONNX models to run on different hardware accelerators. The typical fallback order is TensorRT → CUDA → DirectML → CPU; a sketch of this selection follows the list below.

For Beginners: Think of execution providers as different engines:

  • CPU: Works everywhere, slowest but most compatible
  • CUDA: NVIDIA GPUs, much faster than CPU
  • TensorRT: NVIDIA GPUs with extra optimizations, fastest for NVIDIA
  • DirectML: Windows GPUs (AMD, Intel, NVIDIA), good cross-vendor support
  • CoreML: Apple Silicon (M1/M2/M3), fastest on Mac
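
The Auto behavior can be pictured as a try-in-order loop over these engines. The sketch below assumes ONNX Runtime's SessionOptions API (Microsoft.ML.OnnxRuntime); it illustrates the fallback order only and is not AiDotNet's actual implementation.

using System;
using Microsoft.ML.OnnxRuntime;

internal static class AutoFallbackSketch
{
    // Try providers from fastest to most compatible; the first one that
    // registers successfully wins. Purely illustrative of the Auto order:
    // TensorRT → CUDA → DirectML → CPU.
    public static SessionOptions CreateOptions()
    {
        var options = new SessionOptions();
        var attempts = new (string Name, Action Register)[]
        {
            ("TensorRT", () => options.AppendExecutionProvider_Tensorrt(0)),
            ("CUDA",     () => options.AppendExecutionProvider_CUDA(0)),
            ("DirectML", () => options.AppendExecutionProvider_DML(0)),
        };

        foreach (var (name, register) in attempts)
        {
            try
            {
                register();
                Console.WriteLine($"Using {name} execution provider.");
                return options;
            }
            catch (Exception)
            {
                // Provider not available on this machine; try the next one.
            }
        }

        // Nothing registered: ONNX Runtime falls back to the built-in CPU provider.
        Console.WriteLine("Using CPU execution provider.");
        return options;
    }
}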