Enum TargetPlatform

Namespace: AiDotNet.Enums
Assembly: AiDotNet.dll

Target hardware platforms for model deployment and optimization.

public enum TargetPlatform

Fields

CPU = 0

Generic CPU - most compatible but slower for AI workloads

CoreML = 5

iOS with CoreML - Apple's machine learning framework

Edge = 4

Edge devices (Raspberry Pi, etc.) - very limited resources

GPU = 1

Generic GPU - faster than CPU for AI computations

Mobile = 3

Mobile devices (iOS/Android) - requires size and power optimizations

NNAPI = 6

Android with NNAPI - Android's Neural Networks API

TFLite = 7

TensorFlow Lite - lightweight models for mobile and edge devices

TensorRT = 2

NVIDIA GPU with TensorRT - optimized for NVIDIA GPUs

WebAssembly = 8

WebAssembly - run AI models in web browsers

Remarks

For Beginners: Different devices and platforms have different hardware capabilities. This enum helps you specify where your AI model will run, allowing the library to optimize the model specifically for that platform. For example:

  • CPU: Traditional computer processors (slowest but most compatible)
  • GPU: Graphics cards (much faster for AI workloads)
  • TensorRT: NVIDIA's optimized AI inference engine (fastest for NVIDIA GPUs)
  • Mobile: Smartphones and tablets (limited power, needs optimization)
  • Edge: Small devices like Raspberry Pi or Arduino (very limited resources)
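Below is a minimal sketch of how code might branch on this enum to pick a platform-specific optimization strategy. Only the TargetPlatform enum itself comes from AiDotNet; the PlatformHints class, the DescribeOptimization method, and the hint strings are illustrative assumptions, not library APIs.

using AiDotNet.Enums;

// Hypothetical helper (not part of AiDotNet): maps a target platform to a
// human-readable optimization hint using a switch expression over the enum.
public static class PlatformHints
{
    public static string DescribeOptimization(TargetPlatform platform) =>
        platform switch
        {
            TargetPlatform.CPU         => "No acceleration; prioritize compatibility.",
            TargetPlatform.GPU         => "Use generic GPU kernels for faster inference.",
            TargetPlatform.TensorRT    => "Optimize for NVIDIA GPUs via TensorRT.",
            TargetPlatform.Mobile      => "Reduce model size and power consumption.",
            TargetPlatform.Edge        => "Compress aggressively for very limited resources.",
            TargetPlatform.CoreML      => "Target Apple's CoreML framework on iOS.",
            TargetPlatform.NNAPI       => "Target Android's Neural Networks API.",
            TargetPlatform.TFLite      => "Convert to a lightweight TensorFlow Lite model.",
            TargetPlatform.WebAssembly => "Package for in-browser inference.",
            _                          => "Unknown platform."
        };
}

Example usage: PlatformHints.DescribeOptimization(TargetPlatform.Mobile) returns the mobile hint, showing how a single enum value can drive deployment-specific behavior.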