Interface ILoRAConfiguration<T>
Namespace: AiDotNet.Interfaces
Assembly: AiDotNet.dll
Interface for configuring how LoRA (Low-Rank Adaptation) should be applied to neural network layers.
public interface ILoRAConfiguration<T>
Type Parameters
T
The numeric type used for calculations, typically float or double.
Remarks
This interface defines a strategy pattern for applying LoRA adaptations to layers within a model. Different implementations can provide different strategies for which layers to adapt and how.
For Beginners: This interface lets you define a "strategy" for how LoRA should be applied to your model. Different strategies might:
- Apply LoRA to all dense layers
- Apply LoRA only to layers whose names match a pattern
- Apply LoRA to all layers above a certain size
- Apply different LoRA ranks to different layer types
This gives you flexible control over how your model is adapted without hardcoding the logic.
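As an illustration, a minimal custom strategy might look like the sketch below. It assumes only the members documented on this page; the predicate and wrapper delegates are supplied by the caller, so no concrete layer or adapter types from the library are assumed.

```csharp
using System;
using AiDotNet.Interfaces;

// A minimal sketch of a custom LoRA strategy. The delegate-based design is an
// illustration only, not a type shipped with the library.
public class PredicateLoRAConfiguration<T> : ILoRAConfiguration<T>
{
    private readonly Func<ILayer<T>, bool> _shouldAdapt;
    private readonly Func<ILayer<T>, ILayer<T>> _wrapWithLoRA;

    public PredicateLoRAConfiguration(
        Func<ILayer<T>, bool> shouldAdapt,
        Func<ILayer<T>, ILayer<T>> wrapWithLoRA,
        int rank = 8, double alpha = -1, bool freezeBaseLayer = true)
    {
        _shouldAdapt = shouldAdapt;
        _wrapWithLoRA = wrapWithLoRA;
        Rank = rank;
        Alpha = alpha;
        FreezeBaseLayer = freezeBaseLayer;
    }

    public int Rank { get; }
    public double Alpha { get; }
    public bool FreezeBaseLayer { get; }

    public ILayer<T> ApplyLoRA(ILayer<T> layer)
    {
        // Wrap the layer only when the caller's rule says so;
        // otherwise return it untouched.
        return _shouldAdapt(layer) ? _wrapWithLoRA(layer) : layer;
    }
}
```

With this shape, a rule such as "adapt only layers whose type name contains Dense" becomes a one-line predicate supplied by the caller rather than logic hardcoded in the configuration.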
Properties
Alpha
Gets the scaling factor (alpha) for LoRA adaptations.
double Alpha { get; }
Property Value
- double
Remarks
Alpha controls how strongly LoRA adaptations affect outputs. Common practice is to set alpha equal to the rank, which gives a scaling factor of 1.0. Set to -1 to use the rank as alpha (automatic scaling).
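The scaling convention described above can be sketched as follows; the helper is illustrative only and is not part of the library, and the exact formula the library applies internally is not shown on this page.

```csharp
static class LoRAScaling
{
    // Conventional LoRA scaling is alpha / rank; treat this as the usual
    // convention rather than the library's guaranteed behavior.
    public static double Effective(double alpha, int rank)
    {
        // Alpha of -1 means "use the rank as alpha", giving a scaling of 1.0.
        if (alpha < 0) alpha = rank;
        return alpha / rank;
    }
}

// LoRAScaling.Effective(16, 8)  => 2.0
// LoRAScaling.Effective(-1, 8)  => 1.0 (automatic scaling)
```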
FreezeBaseLayer
Gets whether base layers should be frozen during training.
bool FreezeBaseLayer { get; }
Property Value
- bool
Remarks
When true (typical), only LoRA parameters are trained while base layer weights remain frozen. This dramatically reduces memory and compute requirements.
When false, both base layer and LoRA parameters are trained. This uses more resources but may achieve better results in some scenarios.
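A rough, purely illustrative calculation shows why freezing matters for a single adapted dense layer; the sizes and formula below are for intuition only.

```csharp
static class LoRATrainableParams
{
    // Illustrative arithmetic only: trainable parameters for one adapted
    // dense layer of size dIn x dOut with LoRA rank r, depending on whether
    // the base layer is frozen.
    public static long Count(long dIn, long dOut, int rank, bool freezeBase)
    {
        long baseParams = dIn * dOut;          // frozen when freezeBase is true
        long loraParams = rank * (dIn + dOut); // always trained
        return freezeBase ? loraParams : baseParams + loraParams;
    }
}

// Count(4096, 4096, 8, freezeBase: true)  =>     65,536 trainable parameters
// Count(4096, 4096, 8, freezeBase: false) => 16,842,752 trainable parameters
```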
Rank
Gets the rank of the low-rank decomposition to use for adapted layers.
int Rank { get; }
Property Value
- int
Remarks
The rank determines the number of parameters in the LoRA adaptation. Lower rank = fewer parameters = more efficient but less flexible.
Common values:
- 1-4: Minimal parameters, very efficient
- 8: Good default balance
- 16-32: More flexibility
- 64+: Approaching full fine-tuning
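For intuition, the sketch below counts the parameters a LoRA adaptation adds to a single weight matrix; the layer size and percentages are illustrative, not library output.

```csharp
static class LoRAParameterCount
{
    // A LoRA adaptation of a dIn x dOut weight matrix adds two factors of
    // shape (dIn x r) and (r x dOut), i.e. r * (dIn + dOut) extra parameters.
    public static long Added(long dIn, long dOut, int rank) => rank * (dIn + dOut);
}

// For a 4096 x 4096 layer (full matrix: 16,777,216 parameters):
//   rank 1  ->   8,192  (~0.05% of the full matrix)
//   rank 8  ->  65,536  (~0.4%)
//   rank 32 -> 262,144  (~1.6%)
//   rank 64 -> 524,288  (~3.1%)
```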
Methods
ApplyLoRA(ILayer<T>)
Applies LoRA adaptation to a layer if applicable according to this configuration strategy.
ILayer<T> ApplyLoRA(ILayer<T> layer)
Parameters
layer ILayer<T>
The layer to potentially adapt with LoRA.
Returns
- ILayer<T>
A LoRA-adapted version of the layer if the configuration determines it should be adapted; otherwise, the original layer unchanged.
Remarks
This method examines the layer and decides whether to wrap it with a LoRA adapter. The decision can be based on:
- Layer type (Dense, Convolutional, Attention, etc.)
- Layer size or parameter count
- Layer position in the model
- Custom predicates or rules
For Beginners: This method looks at each layer in your model and decides: "Should I add LoRA to this layer?" If yes, it wraps the layer with a LoRA adapter. If no, it returns the layer as-is. This lets you selectively apply LoRA instead of adapting every single layer.
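A hypothetical usage sketch follows. How a model's layers are enumerated and replaced depends on your model type; a flat IList<ILayer<T>> is assumed here purely for illustration.

```csharp
using System.Collections.Generic;
using AiDotNet.Interfaces;

static class LoRAApplication
{
    // Run every layer of a model through the configuration strategy.
    public static void Apply<T>(IList<ILayer<T>> layers, ILoRAConfiguration<T> config)
    {
        for (int i = 0; i < layers.Count; i++)
        {
            // Each layer is either wrapped with a LoRA adapter or returned
            // as-is, depending on the configuration's strategy.
            layers[i] = config.ApplyLoRA(layers[i]);
        }
    }
}
```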