Enum TargetPlatform
Target hardware platforms for model deployment and optimization.
public enum TargetPlatform
Fields
- CPU = 0: Generic CPU - most compatible but slower for AI workloads
- CoreML = 5: iOS with CoreML - Apple's machine learning framework
- Edge = 4: Edge devices (Raspberry Pi, etc.) - very limited resources
- GPU = 1: Generic GPU - faster than CPU for AI computations
- Mobile = 3: Mobile devices (iOS/Android) - requires size and power optimizations
- NNAPI = 6: Android with NNAPI - Android's Neural Networks API
- TFLite = 7: TensorFlow Lite - lightweight models for mobile and edge devices
- TensorRT = 2: NVIDIA GPU with TensorRT - optimized for NVIDIA GPUs
- WebAssembly = 8: WebAssembly - run AI models in web browsers
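Putting the field names and values above together, the declaration looks like the following sketch, with the field descriptions paraphrased as XML doc comments (the enclosing namespace is omitted):

```csharp
/// <summary>
/// Target hardware platforms for model deployment and optimization.
/// </summary>
public enum TargetPlatform
{
    /// <summary>Generic CPU - most compatible but slower for AI workloads.</summary>
    CPU = 0,

    /// <summary>Generic GPU - faster than CPU for AI computations.</summary>
    GPU = 1,

    /// <summary>NVIDIA GPU with TensorRT - optimized for NVIDIA GPUs.</summary>
    TensorRT = 2,

    /// <summary>Mobile devices (iOS/Android) - requires size and power optimizations.</summary>
    Mobile = 3,

    /// <summary>Edge devices (Raspberry Pi, etc.) - very limited resources.</summary>
    Edge = 4,

    /// <summary>iOS with CoreML - Apple's machine learning framework.</summary>
    CoreML = 5,

    /// <summary>Android with NNAPI - Android's Neural Networks API.</summary>
    NNAPI = 6,

    /// <summary>TensorFlow Lite - lightweight models for mobile and edge devices.</summary>
    TFLite = 7,

    /// <summary>WebAssembly - run AI models in web browsers.</summary>
    WebAssembly = 8
}
```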
Remarks
For Beginners: Different devices and platforms have different hardware capabilities. This enum lets you specify where your AI model will run so the library can optimize the model specifically for that platform (a usage sketch follows the list below). For example:
- CPU: Traditional computer processors (slowest but most compatible)
- GPU: Graphics cards (much faster for AI workloads)
- TensorRT: NVIDIA's optimized AI inference engine (fastest for NVIDIA GPUs)
- Mobile: Smartphones and tablets (limited power, needs optimization)
- Edge: Small devices like Raspberry Pi or Arduino (very limited resources)
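To illustrate how a platform choice typically feeds into optimization decisions, here is a minimal, hypothetical sketch. The `ExportSettings` record, the `PlatformDefaults.For` helper, and all of the numeric values are assumptions made for this example only; they are not part of the library's API.

```csharp
// Hypothetical settings type used only for this example.
public record ExportSettings(bool Quantize, bool PreferGpuKernels, int MaxModelSizeMb);

public static class PlatformDefaults
{
    // Map each target platform to rough example defaults (illustrative values only).
    public static ExportSettings For(TargetPlatform platform) => platform switch
    {
        // Desktop/server targets: model size matters less, GPU kernels where available.
        TargetPlatform.CPU      => new ExportSettings(Quantize: false, PreferGpuKernels: false, MaxModelSizeMb: 2048),
        TargetPlatform.GPU      => new ExportSettings(Quantize: false, PreferGpuKernels: true,  MaxModelSizeMb: 2048),
        TargetPlatform.TensorRT => new ExportSettings(Quantize: true,  PreferGpuKernels: true,  MaxModelSizeMb: 2048),

        // Mobile targets: quantize and cap the model size to respect power and storage limits.
        TargetPlatform.Mobile or TargetPlatform.CoreML or TargetPlatform.NNAPI or TargetPlatform.TFLite
                                => new ExportSettings(Quantize: true, PreferGpuKernels: false, MaxModelSizeMb: 100),

        // Edge devices: the tightest resource budget.
        TargetPlatform.Edge     => new ExportSettings(Quantize: true, PreferGpuKernels: false, MaxModelSizeMb: 25),

        // Browser target: keep the download small.
        TargetPlatform.WebAssembly
                                => new ExportSettings(Quantize: true, PreferGpuKernels: false, MaxModelSizeMb: 50),

        _                       => new ExportSettings(Quantize: false, PreferGpuKernels: false, MaxModelSizeMb: 2048)
    };
}
```

A caller would then pick defaults with something like `var settings = PlatformDefaults.For(TargetPlatform.Mobile);` and pass them to whatever export or optimization routine the application uses.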