Table of Contents

Namespace AiDotNet.LoRA.Adapters

Classes

AdaLoRAAdapter<T>

Adaptive Low-Rank Adaptation (AdaLoRA) adapter that dynamically allocates the parameter budget among weight matrices.

ChainLoRAAdapter<T>

Chain-of-LoRA adapter that implements sequential composition of multiple LoRA adapters.

DVoRAAdapter<T>

DVoRA (DoRA + VeRA) adapter that combines DoRA's magnitude-direction decomposition with VeRA's extreme parameter efficiency.

DeltaLoRAAdapter<T>

Delta-LoRA adapter that focuses on parameter-efficient delta updates with momentum.

DenseLoRAAdapter<T>

LoRA adapter specifically for Dense and FullyConnected layers with 1D input/output shapes.

DoRAAdapter<T>

DoRA (Weight-Decomposed Low-Rank Adaptation) adapter for parameter-efficient fine-tuning with improved stability.

DyLoRAAdapter<T>

DyLoRA (Dynamic LoRA) adapter that trains with multiple ranks simultaneously.

FloraAdapter<T>

Implements the Flora (Low-Rank Adapters Are Secretly Gradient Compressors) adapter for memory-efficient fine-tuning.

GLoRAAdapter<T>

Generalized LoRA (GLoRA) implementation that adapts both weights AND activations.

GraphConvolutionalLoRAAdapter<T>

LoRA adapter for Graph Convolutional layers, enabling parameter-efficient fine-tuning of GNN models.

HRAAdapter<T>

HRA (Hybrid Rank Adaptation) adapter that combines low-rank and full-rank updates for optimal parameter efficiency.

LoHaAdapter<T>

LoHa (Low-Rank Hadamard Product Adaptation) adapter for parameter-efficient fine-tuning.

LoKrAdapter<T>

LoKr (Low-Rank Kronecker Product Adaptation) adapter for parameter-efficient fine-tuning.

LoRAAdapterBase<T>

Abstract base class for LoRA (Low-Rank Adaptation) adapters that wrap existing layers; the low-rank update shared by all adapters is sketched after this class list.

LoRADropAdapter<T>

LoRA-drop implementation: LoRA with dropout regularization.

LoRAFAAdapter<T>

LoRA-FA (LoRA with Frozen A matrix) adapter for parameter-efficient fine-tuning.

LoRAPlusAdapter<T>

LoRA+ adapter that uses optimized learning rates for faster convergence and better performance.

LoRAXSAdapter<T>

LoRA-XS (Extremely Small) adapter for ultra-parameter-efficient fine-tuning using SVD with a trainable scaling matrix.

LoRETTAAdapter<T>

LoRETTA (Low-Rank Economic Tensor-Train Adaptation) adapter for parameter-efficient fine-tuning.

LoftQAdapter<T>

LoftQ (LoRA-Fine-Tuning-Quantized) adapter that combines quantization and LoRA with improved initialization.

LongLoRAAdapter<T>

LongLoRA adapter that efficiently extends LoRA to handle longer context lengths using shifted sparse attention.

MoRAAdapter<T>

Implements the MoRA (High-Rank Updating for Parameter-Efficient Fine-Tuning) adapter.

MultiLoRAAdapter<T>

Multi-task LoRA adapter that manages multiple task-specific LoRA layers for complex multi-task learning scenarios.

NOLAAdapter<T>

Implements the NOLA (Compressing LoRA using Linear Combination of Random Basis) adapter for extreme parameter efficiency.

PiSSAAdapter<T>

Principal Singular Values and Singular Vectors Adaptation (PiSSA) adapter for parameter-efficient fine-tuning.

QALoRAAdapter<T>

Quantization-Aware LoRA (QA-LoRA) adapter that combines parameter-efficient fine-tuning with group-wise quantization awareness.

QLoRAAdapter<T>

QLoRA (Quantized LoRA) adapter for parameter-efficient fine-tuning with 4-bit quantized base weights.

ReLoRAAdapter<T>

Restart LoRA (ReLoRA) adapter that periodically merges and restarts LoRA training for continual learning.

RoSAAdapter<T>

RoSA (Robust Adaptation) adapter for parameter-efficient fine-tuning with improved robustness to distribution shifts.

SLoRAAdapter<T>

S-LoRA adapter for scalable serving of thousands of concurrent LoRA adapters.

StandardLoRAAdapter<T>

Standard LoRA implementation (original LoRA algorithm).

TiedLoRAAdapter<T>

Tied-LoRA adapter that applies weight tying across layers for extreme parameter efficiency in deep networks.

VBLoRAAdapter<T>

Vector Bank LoRA (VB-LoRA) adapter that uses shared parameter banks for efficient multi-client deployment.

VeRAAdapter<T>

VeRA (Vector-based Random Matrix Adaptation) adapter, an extremely parameter-efficient variant of LoRA.

XLoRAAdapter<T>

X-LoRA (Mixture of LoRA Experts) adapter that uses multiple LoRA experts with learned routing.
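
All of the adapters above specialize the same core mechanism: a frozen base weight matrix is augmented with a trainable low-rank update that is scaled and added to the base output. The sketch below is illustrative only and does not use the AiDotNet types or API; the class name LoRASketch, the Forward method, and its parameters are placeholders chosen for this example.

```csharp
// Minimal, self-contained sketch (not the AiDotNet API): the update shared by
// the adapters listed above. A frozen base weight W is augmented with a
// trainable low-rank product B*A, scaled by alpha/rank, so only A and B are
// updated during fine-tuning.
static class LoRASketch
{
    // y = W*x + (alpha/rank) * B*(A*x)
    // W: [outDim, inDim] (frozen), A: [rank, inDim], B: [outDim, rank]
    public static double[] Forward(double[,] W, double[,] A, double[,] B,
                                   double[] x, double alpha)
    {
        int outDim = W.GetLength(0), inDim = W.GetLength(1);
        int rank = A.GetLength(0);
        double scale = alpha / rank;

        // Frozen base path: W * x
        var y = new double[outDim];
        for (int i = 0; i < outDim; i++)
            for (int j = 0; j < inDim; j++)
                y[i] += W[i, j] * x[j];

        // Trainable low-rank path: B * (A * x), scaled by alpha/rank
        var ax = new double[rank];
        for (int r = 0; r < rank; r++)
            for (int j = 0; j < inDim; j++)
                ax[r] += A[r, j] * x[j];

        for (int i = 0; i < outDim; i++)
            for (int r = 0; r < rank; r++)
                y[i] += scale * B[i, r] * ax[r];

        return y;
    }
}
```

The variants in this namespace differ mainly in how A and B are parameterized (Hadamard or Kronecker products, shared random matrices, tied or banked weights), how the rank is chosen or adapted, and how the frozen base weights are stored (for example, quantized to 4 bits).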

Enums

LoftQAdapter<T>.QuantizationType

Specifies the type of 4-bit quantization to use for base layer weights.

QLoRAAdapter<T>.QuantizationType

Specifies the type of 4-bit quantization to use for base layer weights.
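
Both enums select how the frozen base weights are quantized to 4 bits. As a rough illustration of what block-wise 4-bit quantization involves, the sketch below is not the AiDotNet implementation and is simplified to a uniform 16-level grid rather than an NF4/FP4 codebook; the names FourBitSketch, Quantize, and Dequantize are placeholders for this example.

```csharp
using System;

// Illustrative sketch only (not the AiDotNet API): block-wise absmax
// quantization of weights to 4-bit codes, the general idea behind storing
// frozen base weights in 4 bits. Real NF4/FP4 schemes use a fixed
// non-uniform codebook; a uniform 16-level grid is used here for brevity.
static class FourBitSketch
{
    // Quantize each block of `blockSize` weights to a code in [0, 15] and
    // keep one absolute-maximum scale per block for dequantization.
    public static (byte[] Codes, double[] Scales) Quantize(double[] w, int blockSize = 64)
    {
        int blocks = (w.Length + blockSize - 1) / blockSize;
        var codes = new byte[w.Length];
        var scales = new double[blocks];

        for (int b = 0; b < blocks; b++)
        {
            int start = b * blockSize;
            int end = Math.Min(start + blockSize, w.Length);

            double absMax = 1e-12;                      // avoid division by zero
            for (int i = start; i < end; i++)
                absMax = Math.Max(absMax, Math.Abs(w[i]));
            scales[b] = absMax;

            for (int i = start; i < end; i++)
            {
                // Map [-absMax, +absMax] onto the 16 uniform levels 0..15.
                double t = (w[i] / absMax + 1.0) * 0.5;
                codes[i] = (byte)Math.Clamp((int)Math.Round(t * 15.0), 0, 15);
            }
        }
        return (codes, scales);
    }

    // Approximate reconstruction of one weight from its code and block scale.
    public static double Dequantize(byte code, double blockScale)
        => (code / 15.0 * 2.0 - 1.0) * blockScale;
}
```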