Enum MixedPrecisionType
Specifies the data type used for mixed precision training.
public enum MixedPrecisionType
Fields
BF16 = 2
Brain floating point (BF16) mixed precision.
A 16-bit format with the same exponent range as FP32. Offers better numerical stability than FP16 and needs no loss scaling. Requires an Ampere or newer GPU (RTX 30 series, A100, etc.).
FP16 = 1
Half precision (FP16) mixed precision.
Uses 16-bit floating point for the forward and backward passes. Widely supported on GPUs since Pascal (GTX 10 series). Requires loss scaling to handle small gradients.
None = 0
Full precision (FP32). No mixed precision.
Uses 32-bit floating point for all operations. Maximum precision, but the highest memory usage and the slowest training.
TF32 = 3
TensorFloat-32 (TF32) precision.
An NVIDIA format that uses 19 bits total (1 sign, 8 exponent, and 10 mantissa bits). Automatically enabled on Ampere GPUs for matmul operations. A good balance of speed and precision.
Remarks
Mixed precision training uses lower-precision floating-point numbers to speed up training and reduce memory usage while maintaining accuracy.
For Beginners: FP16 works on most GPUs; BF16 is better on newer hardware (Ampere and later). If unsure, start with FP16.
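As a rough illustration of this guidance, the sketch below picks a MixedPrecisionType from a GPU's CUDA compute capability. The SelectMixedPrecision helper and its thresholds are assumptions for illustration only, not part of this library's API.

public static class MixedPrecisionSelector
{
    // Hypothetical helper (not part of this library): map the GPU's CUDA
    // compute capability (major version) to a precision mode.
    public static MixedPrecisionType SelectMixedPrecision(int? computeCapabilityMajor)
    {
        if (computeCapabilityMajor is null)
        {
            // No CUDA GPU available: stay in full FP32 precision.
            return MixedPrecisionType.None;
        }

        if (computeCapabilityMajor >= 8)
        {
            // Ampere (compute capability 8.x) and newer support BF16,
            // which avoids the loss scaling that FP16 requires.
            return MixedPrecisionType.BF16;
        }

        // Older CUDA GPUs (Pascal and later): use FP16 with loss scaling.
        return MixedPrecisionType.FP16;
    }
}

For example, an A100 reports compute capability 8.0, so this helper would return BF16, while a GTX 10-series (Pascal, compute capability 6.x) GPU would get FP16.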