Class LarsGpuConfig

Namespace: AiDotNet.Interfaces
Assembly: AiDotNet.dll

Configuration for LARS (Layer-wise Adaptive Rate Scaling) optimizer on GPU.

public class LarsGpuConfig : IGpuOptimizerConfig
Inheritance
object → LarsGpuConfig

Implements
IGpuOptimizerConfig

Remarks

LARS scales the learning rate for each layer based on the ratio of parameter norm to gradient norm. This enables training with very large batch sizes.

For Beginners: LARS was designed for training with huge batch sizes (such as 32K images per batch). It automatically adjusts the learning rate for each layer so that layers with large parameters don't update too quickly and layers with small parameters don't update too slowly.
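The scaling is usually described as a trust ratio. The sketch below shows the idea on plain float arrays; LarsMath and LocalLearningRate are hypothetical names used only for illustration, and the actual implementation runs as GPU kernels over IGpuBuffer data rather than CPU arrays.

using System;
using System.Linq;

static class LarsMath
{
    // Layer-wise rate: learningRate * eta * ||w|| / (||g|| + lambda * ||w||),
    // where eta is the trust coefficient and lambda is the weight decay.
    public static float LocalLearningRate(
        float[] weights, float[] gradients,
        float learningRate, float trustCoefficient, float weightDecay)
    {
        float weightNorm = MathF.Sqrt(weights.Sum(w => w * w));
        float gradNorm = MathF.Sqrt(gradients.Sum(g => g * g));

        // Fall back to the unscaled global rate when either norm is zero.
        float trustRatio = (weightNorm > 0f && gradNorm > 0f)
            ? trustCoefficient * weightNorm / (gradNorm + weightDecay * weightNorm)
            : 1.0f;

        return learningRate * trustRatio;
    }
}

Layers with large weights relative to their gradients get a larger local rate, and vice versa, which is what keeps very-large-batch training stable.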

Constructors

LarsGpuConfig(float, float, float, float, int)

Creates a new LARS GPU configuration.

public LarsGpuConfig(float learningRate, float momentum = 0.9f, float weightDecay = 0.0001f, float trustCoefficient = 0.001f, int step = 0)

Parameters

learningRate float

Global learning rate for parameter updates.

momentum float

Momentum coefficient (default 0.9).

weightDecay float

Weight decay coefficient (default 0.0001).

trustCoefficient float

Trust coefficient for scaling (default 0.001).

step int

Current optimization step (default 0).
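For example, a configuration might be created like this (the hyperparameter values are illustrative, not recommendations):

// Use the defaults for everything except the global learning rate.
var config = new LarsGpuConfig(learningRate: 0.1f);

// Or spell out each hyperparameter explicitly.
var custom = new LarsGpuConfig(
    learningRate: 0.1f,
    momentum: 0.9f,
    weightDecay: 0.0001f,
    trustCoefficient: 0.001f);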

Properties

LearningRate

Gets the learning rate for parameter updates.

public float LearningRate { get; init; }

Property Value

float

Momentum

Gets the momentum coefficient (typically 0.9).

public float Momentum { get; init; }

Property Value

float

OptimizerType

Gets the type of optimizer (SGD, Adam, AdamW, etc.).

public GpuOptimizerType OptimizerType { get; }

Property Value

GpuOptimizerType

Step

Gets the current optimization step (used for bias correction in Adam-family optimizers).

public int Step { get; init; }

Property Value

int

TrustCoefficient

Gets the trust coefficient for layer-wise scaling (typically 0.001).

public float TrustCoefficient { get; init; }

Property Value

float

WeightDecay

Gets the weight decay (L2 regularization) coefficient.

public float WeightDecay { get; init; }

Property Value

float
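Because the properties are init-only, they can also be set through an object initializer at construction time, for instance when resuming training from a saved step count (illustrative):

// Illustrative: resume from step 1000 using an object initializer.
var resumed = new LarsGpuConfig(learningRate: 0.1f) { Step = 1000 };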

Methods

ApplyUpdate(IDirectGpuBackend, IGpuBuffer, IGpuBuffer, GpuOptimizerState, int)

Applies the optimizer update to the given parameter buffer using its gradient.

public void ApplyUpdate(IDirectGpuBackend backend, IGpuBuffer param, IGpuBuffer gradient, GpuOptimizerState state, int size)

Parameters

backend IDirectGpuBackend

The GPU backend to execute the update.

param IGpuBuffer

Buffer containing the parameters to update (modified in-place).

gradient IGpuBuffer

Buffer containing the gradients.

state GpuOptimizerState

Optimizer state buffers (momentum, squared gradients, etc.).

size int

Number of parameters to update.

Remarks

For Beginners: This method applies the optimizer's update rule directly on the GPU. Each optimizer type (SGD, Adam, etc.) implements its own update logic using GPU kernels. The state parameter contains any auxiliary buffers needed (such as velocity for SGD with momentum, or the m/v buffers for Adam).

Design Note: Following the Open/Closed Principle, each optimizer config knows how to apply its own update, so adding new optimizers doesn't require modifying layer code.
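As a rough sketch of the call site (the backend, buffers, and state here are stand-ins for whatever the surrounding training code provides; none of the setup is shown):

// Illustrative: one optimizer step for a single parameter buffer.
IGpuOptimizerConfig config = new LarsGpuConfig(learningRate: 0.1f);

void UpdateLayer(IDirectGpuBackend backend,
                 IGpuBuffer weights, IGpuBuffer grads,
                 GpuOptimizerState state, int parameterCount)
{
    // The config applies its own update rule, so this call site stays
    // the same no matter which optimizer is configured.
    config.ApplyUpdate(backend, weights, grads, state, parameterCount);
}

Because the call goes through IGpuOptimizerConfig, swapping LARS for another optimizer only changes which config is constructed, not the layer code.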