Enum PrecisionMode

Namespace: AiDotNet.Enums
Assembly: AiDotNet.dll

Defines the numeric precision mode for neural network training and computation.

public enum PrecisionMode
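
Taken together, the fields documented below imply the following declaration (numeric values as listed):

public enum PrecisionMode
{
    FP32 = 0,   // full precision; default for standard training
    FP16 = 1,   // half precision
    Mixed = 2,  // FP16 compute, FP32 parameter updates
    BF16 = 3,   // bfloat16; reserved for future implementation
    FP64 = 4    // double precision
}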

Fields

BF16 = 3

Brain float 16 (bfloat16) format. Same dynamic range as FP32 (8-bit exponent) but reduced precision (7-bit mantissa). Better numerical stability than FP16; used by Google TPUs.

Note: BF16 is reserved for future implementation and not currently supported.
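
Although the library does not yet support BF16, the format itself is easy to illustrate: a bfloat16 value is simply the top 16 bits of an FP32 value, which is why it keeps the FP32 range while the mantissa shrinks to 7 bits. A minimal standalone sketch (plain .NET, not AiDotNet API):

using System;

// Truncate FP32 to bfloat16 by keeping the top 16 bits
// (sign, 8-bit exponent, 7 highest mantissa bits).
static ushort FloatToBFloat16(float value) =>
    (ushort)((uint)BitConverter.SingleToInt32Bits(value) >> 16);

// Expand bfloat16 back to FP32 by zero-filling the low 16 bits.
static float BFloat16ToFloat(ushort bf16) =>
    BitConverter.Int32BitsToSingle((int)((uint)bf16 << 16));

float roundTrip = BFloat16ToFloat(FloatToBFloat16(3.14159265f));
Console.WriteLine(roundTrip); // ~3.140625: same range as FP32, fewer mantissa bits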

FP16 = 1

Half precision using 16-bit floating-point (Half/FP16). Faster on modern GPUs with Tensor Cores, but has a limited representable range: the smallest positive subnormal is about 6e-8 and the largest finite value is 65504.
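
Those limits are easy to check with .NET's System.Half type (.NET 5+; standalone sketch, not AiDotNet API):

using System;

Half max = Half.MaxValue;      // 65504: largest finite FP16 value
Half over = (Half)70000f;      // overflows to +infinity
Half tiny = Half.Epsilon;      // ~5.96e-8: smallest positive subnormal
Half under = (Half)1e-9f;      // flushes to zero: below the subnormal range

Console.WriteLine($"{max} {over} {tiny} {under}");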

FP32 = 0

Full precision using 32-bit floating-point (float/FP32). Default mode for standard training.

FP64 = 4

Double precision using 64-bit floating-point (double/FP64). Maximum numerical precision, but slower and uses more memory.
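
The precision gap shows up clearly in long summations, where FP32 rounding error compounds while FP64 stays close to exact (standalone sketch, not AiDotNet API):

using System;

float sumF = 0f;
double sumD = 0.0;
for (int i = 0; i < 10_000_000; i++)
{
    sumF += 0.1f;  // FP32: each add rounds; the error compounds
    sumD += 0.1;   // FP64: rounding error is negligible at this scale
}
Console.WriteLine(sumF);  // drifts far from the exact 1,000,000
Console.WriteLine(sumD);  // ~1,000,000, with only a tiny residual error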

Mixed = 2

Mixed precision training: FP16 for forward/backward passes, FP32 for parameter updates. Combines the speed of FP16 with the numerical stability of FP32. Recommended for large models on GPUs.
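
Conceptually, one optimizer step under this mode looks like the sketch below: FP32 master weights, FP16 compute, and a loss scale so that small gradients stay representable in FP16. All names here (masterWeights, ComputeScaledGradients, lossScale) are illustrative, not AiDotNet's internal API:

using System;

const float lossScale = 1024f;     // keeps tiny FP16 gradients from flushing to zero
const float learningRate = 0.01f;

float[] masterWeights = { 0.5f, -0.3f, 0.8f };  // FP32 master copy

// 1. Cast weights down to FP16 for the fast forward/backward passes.
Half[] fp16Weights = Array.ConvertAll(masterWeights, w => (Half)w);

// 2. Forward/backward in FP16 on the scaled loss (stubbed below).
Half[] fp16Grads = ComputeScaledGradients(fp16Weights, lossScale);

// 3. Unscale and apply the update in FP32 for numerical stability.
for (int i = 0; i < masterWeights.Length; i++)
{
    float grad = (float)fp16Grads[i] / lossScale;
    masterWeights[i] -= learningRate * grad;
}

// Stub standing in for a real FP16 forward/backward pass.
static Half[] ComputeScaledGradients(Half[] weights, float scale) =>
    Array.ConvertAll(weights, w => (Half)((float)w * 0.1f * scale));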