Class LocallyWeightedRegression<T>

Namespace
AiDotNet.Regression
Assembly
AiDotNet.dll

Implements Locally Weighted Regression, a non-parametric approach that creates a different model for each prediction point based on the weighted influence of nearby training examples.

public class LocallyWeightedRegression<T> : NonLinearRegressionBase<T>, INonLinearRegression<T>, IRegression<T>, IFullModel<T, Matrix<T>, Vector<T>>, IModel<Matrix<T>, Vector<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Matrix<T>, Vector<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Matrix<T>, Vector<T>>>, IGradientComputable<T, Matrix<T>, Vector<T>>, IJitCompilable<T>

Type Parameters

T

The numeric type used for calculations, typically float or double.

Inheritance
NonLinearRegressionBase<T>
LocallyWeightedRegression<T>
Implements
INonLinearRegression<T>
IRegression<T>
IFullModel<T, Matrix<T>, Vector<T>>
IModel<Matrix<T>, Vector<T>, ModelMetadata<T>>
IModelSerializer
ICheckpointableModel
IParameterizable<T, Matrix<T>, Vector<T>>
IFeatureAware
IFeatureImportance<T>
ICloneable<IFullModel<T, Matrix<T>, Vector<T>>>
IGradientComputable<T, Matrix<T>, Vector<T>>
IJitCompilable<T>

Remarks

Locally Weighted Regression (LWR) is a memory-based, non-parametric method that creates a unique model for each prediction point. Unlike global regression methods that find a single model for all data, LWR fits a separate weighted regression model for each query point, giving higher influence to nearby training examples. This approach provides excellent flexibility for modeling complex, nonlinear relationships without specifying a fixed functional form.

For Beginners: Locally Weighted Regression is like having a personalized prediction for each point.

Instead of creating a single model for all data (like linear regression does), LWR:

  • Creates a new, custom model for each prediction point you want to estimate
  • Gives more importance to training examples that are close to your prediction point
  • Gives less importance to training examples that are far away

Imagine predicting house prices: When estimating the price of a specific house, LWR would:

  • Give most influence to similar houses in the same neighborhood
  • Give moderate influence to somewhat similar houses in nearby areas
  • Give little or no influence to very different houses in distant locations

This approach is flexible and works well for complex patterns, but requires keeping all training data around for making predictions, which can be computationally intensive for large datasets.
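The distance-based influence described above is implemented with a kernel function that turns distances into weights. A minimal sketch in Python of one common choice, the Gaussian kernel (the exact kernel AiDotNet uses internally is an assumption here):

```python
import math

def gaussian_weight(distance, bandwidth):
    """Weight of a training point at a given distance from the query point.

    Nearby points get weights near 1; distant points decay toward 0, and the
    bandwidth controls how fast that decay happens.
    """
    return math.exp(-(distance ** 2) / (2.0 * bandwidth ** 2))

# A close point dominates and a far point barely matters...
near = gaussian_weight(0.1, bandwidth=0.5)      # close to 1
far = gaussian_weight(2.0, bandwidth=0.5)       # close to 0
# ...but widening the bandwidth lets far points contribute more.
far_wide = gaussian_weight(2.0, bandwidth=2.0)
```

This is the mechanism behind the house-price analogy: the "same neighborhood" houses sit in the flat, high-weight region of the kernel, while distant houses fall into its near-zero tail.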

Constructors

LocallyWeightedRegression(LocallyWeightedRegressionOptions?, IRegularization<T, Matrix<T>, Vector<T>>?)

Initializes a new instance of the LocallyWeightedRegression<T> class.

public LocallyWeightedRegression(LocallyWeightedRegressionOptions? options = null, IRegularization<T, Matrix<T>, Vector<T>>? regularization = null)

Parameters

options LocallyWeightedRegressionOptions

Optional configuration options for the Locally Weighted Regression algorithm.

regularization IRegularization<T, Matrix<T>, Vector<T>>

Optional regularization strategy to prevent overfitting.

Remarks

This constructor creates a new Locally Weighted Regression model with the specified options and regularization strategy. If no options are provided, default values are used. If no regularization is specified, no regularization is applied.

For Beginners: This is how you create a new Locally Weighted Regression model.

The most important option is the "bandwidth", which controls how quickly the influence of training points drops off with distance:

  • Smaller bandwidth: Only very nearby points have influence (more local, potentially more wiggly)
  • Larger bandwidth: Points farther away also have some influence (smoother, potentially less accurate for complex patterns)

If you don't specify these parameters, the model will use reasonable default settings.

Example:

// Create a Locally Weighted Regression model with default settings
var lwr = new LocallyWeightedRegression<double>();

// Create a model with custom options
var options = new LocallyWeightedRegressionOptions { Bandwidth = 0.5 };
var customLwr = new LocallyWeightedRegression<double>(options);

Properties

SupportsJitCompilation

Gets whether this model supports JIT compilation.

public override bool SupportsJitCompilation { get; }

Property Value

bool

true when UseSoftMode is enabled and training data is available; false otherwise.

Remarks

When UseSoftMode is enabled, LWR can be exported as a differentiable computation graph using attention-weighted averaging. The training data is embedded as constants in the computation graph.

When UseSoftMode is disabled, JIT compilation is not supported because traditional LWR requires solving a weighted least squares problem for each query point, which cannot be represented as a static computation graph.

UseSoftMode

Gets or sets whether to use soft (differentiable) mode for JIT compilation support.

public bool UseSoftMode { get; set; }

Property Value

bool

true to enable soft mode; false (default) for traditional LWR behavior.

Remarks

When enabled, LocallyWeightedRegression uses a differentiable approximation that embeds all training data as constants in the computation graph and computes attention-weighted predictions using the softmax of negative squared distances.

For Beginners: Soft mode allows this model to be JIT compiled for faster inference. Traditional LWR solves a new weighted least squares problem for each prediction, which cannot be represented as a static computation graph. Soft mode uses a simplified approach that enables JIT compilation while giving similar results for smooth data.

Methods

CreateInstance()

Creates a new instance of the LocallyWeightedRegression with the same configuration as the current instance.

protected override IFullModel<T, Matrix<T>, Vector<T>> CreateInstance()

Returns

IFullModel<T, Matrix<T>, Vector<T>>

A new LocallyWeightedRegression instance with the same options and regularization as the current instance.

Remarks

This method creates a new instance of the LocallyWeightedRegression model with the same configuration options and regularization settings as the current instance. This is useful for model cloning, ensemble methods, or cross-validation scenarios where multiple instances of the same model with identical configurations are needed.

For Beginners: This method creates a fresh copy of the model's blueprint.

When you need multiple versions of the same type of model with identical settings:

  • This method creates a new, empty model with the same configuration
  • It's like making a copy of a recipe before you start cooking
  • The new model has the same settings but no trained data
  • This is useful for techniques that need multiple models, like cross-validation

For example, when testing your model on different subsets of data, you'd want each test to use a model with identical settings.

Deserialize(byte[])

Loads a previously serialized Locally Weighted Regression model from a byte array.

public override void Deserialize(byte[] modelData)

Parameters

modelData byte[]

The byte array containing the serialized model.

Remarks

This method reconstructs a Locally Weighted Regression model from a byte array that was previously created using the Serialize method. It restores the base class data, the bandwidth parameter, and the training data that is used for making predictions.

For Beginners: This method loads a previously saved model from a sequence of bytes.

Deserialization allows you to:

  • Load a model that was saved earlier
  • Use a model without having to retrain it
  • Share models between different applications

When you deserialize a model:

  • The bandwidth parameter is restored
  • All training examples are loaded back into memory
  • The model is ready to make predictions immediately

Example:

// Load from a file
byte[] modelData = File.ReadAllBytes("lwr.model");

// Deserialize the model
var lwr = new LocallyWeightedRegression<double>();
lwr.Deserialize(modelData);

// Now you can use the model for predictions
var predictions = lwr.Predict(newFeatures);

ExportComputationGraph(List<ComputationNode<T>>)

Exports the model's computation as a graph of operations.

public override ComputationNode<T> ExportComputationGraph(List<ComputationNode<T>> inputNodes)

Parameters

inputNodes List<ComputationNode<T>>

The input nodes for the computation graph.

Returns

ComputationNode<T>

The root node of the exported computation graph.

Remarks

When soft mode is enabled, this exports the LWR model as a differentiable computation graph using SoftLocallyWeighted(ComputationNode<T>, ComputationNode<T>, ComputationNode<T>, T?) operations. The training data (features and targets) is embedded as constants in the graph.

The soft LWR approximation computes:

  • distances[i] = ||input - xTrain[i]||²
  • weights = softmax(-distances / bandwidth)
  • output = Σ weights[i] * yTrain[i]
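The three steps of the soft approximation can be sketched numerically; here is a minimal Python illustration, with NumPy standing in for the computation-graph operations (not the AiDotNet implementation itself):

```python
import numpy as np

def soft_lwr_predict(x, x_train, y_train, bandwidth=1.0):
    # distances[i] = ||x - x_train[i]||^2
    d2 = np.sum((x_train - x) ** 2, axis=1)
    # weights = softmax(-distances / bandwidth)
    logits = -d2 / bandwidth
    logits -= logits.max()            # subtract the max for numerical stability
    weights = np.exp(logits)
    weights /= weights.sum()
    # output = sum_i weights[i] * y_train[i]
    return weights @ y_train

x_train = np.array([[0.0], [10.0]])
y_train = np.array([1.0, 5.0])
# With a small bandwidth, the nearest training point dominates the average.
pred = soft_lwr_predict(np.array([0.0]), x_train, y_train, bandwidth=0.1)
```

Because every step is a differentiable tensor operation over fixed constants, the whole pipeline can be represented as a static graph, which is what makes JIT compilation possible in this mode.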

Exceptions

NotSupportedException

Thrown when UseSoftMode is false.

InvalidOperationException

Thrown when no training data is available.

GetModelType()

Gets the model type of the Locally Weighted Regression model.

protected override ModelType GetModelType()

Returns

ModelType

The model type enumeration value.

OptimizeModel(Matrix<T>, Vector<T>)

Stores the training data for later use in making predictions.

protected override void OptimizeModel(Matrix<T> x, Vector<T> y)

Parameters

x Matrix<T>

A matrix where each row represents a sample and each column represents a feature.

y Vector<T>

A vector of target values corresponding to each sample in x.

Remarks

This method "optimizes" the Locally Weighted Regression model by simply storing the training data for later use during prediction. Unlike global regression methods that compute a fixed set of parameters during training, LWR defers the actual model fitting until prediction time, when a unique model is created for each query point.

For Beginners: Unlike most regression models, LWR doesn't compute a model during training.

Instead of finding a single global model during training, LWR simply:

  • Stores all the training examples (both features and target values)
  • Waits until prediction time to create a custom model for each point

This is similar to K-Nearest Neighbors but more sophisticated, as it creates a weighted regression model for each prediction rather than just averaging nearby points.

Because it doesn't do much work during training, LWR is sometimes called a "lazy learner" - it postpones the real work until prediction time.
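A lazy learner's "training" step really is just storage. A minimal Python sketch of the idea (illustrative only, not the AiDotNet implementation):

```python
import numpy as np

class LazyLWR:
    """Sketch of a lazy learner: fit() stores the data; predict() does the work."""

    def fit(self, x, y):
        # No coefficients are computed here -- the stored data IS the "model".
        self.x_train = np.asarray(x, dtype=float)
        self.y_train = np.asarray(y, dtype=float)
        return self

    # A predict() method would solve a weighted least squares problem per
    # query point, which is where all the real computation happens.

model = LazyLWR().fit([[1.0], [2.0], [3.0]], [2.0, 4.0, 6.0])
```

This also explains the memory cost noted elsewhere on this page: the serialized model must carry every training example, not just a small coefficient vector.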

Predict(Matrix<T>)

Predicts target values for the provided input features using the trained Locally Weighted Regression model.

public override Vector<T> Predict(Matrix<T> input)

Parameters

input Matrix<T>

A matrix where each row represents a sample to predict and each column represents a feature.

Returns

Vector<T>

A vector of predicted values corresponding to each input sample.

Remarks

This method predicts target values for new input data by creating a unique weighted regression model for each input sample. It processes each input row separately, creating a custom model that gives higher weight to training examples that are closer to the query point, then uses that model to make a prediction.

For Beginners: This method uses your training data to make predictions on new data.

For each input example you want to predict:

  1. The method creates a custom model just for that example
  2. It computes predictions using this personalized model
  3. It repeats this process for each example

This approach can be more accurate than global models for complex patterns, but it's also more computationally intensive because a new model is created for each prediction.

Example:

// Make predictions
var predictions = lwr.Predict(newFeatures);

PredictSingle(Vector<T>)

Predicts the target value for a single input feature vector.

protected override T PredictSingle(Vector<T> input)

Parameters

input Vector<T>

The feature vector of the sample to predict.

Returns

T

The predicted value for the input sample.

Remarks

This method predicts the target value for a single input feature vector by creating a weighted least squares model. First, it computes weights for each training example based on their distance to the input point, giving higher weight to closer points. Then it solves a weighted linear regression problem using these weights to find the best coefficients, and uses those coefficients to make a prediction for the input.

For Beginners: This method creates a personalized prediction model for a single data point.

The prediction process for a single point works like this:

  1. Calculate weights for all training examples based on how close they are to the input point (nearby examples get higher weights)
  2. Use these weights to create a custom weighted regression model
  3. Use this custom model to make a prediction for the input point

The bandwidth parameter controls how quickly the weights decrease with distance:

  • Small bandwidth: Only very close points get significant weight
  • Large bandwidth: Even somewhat distant points get some weight

This personalized approach allows the model to adapt to local patterns in different regions of the data.
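The three-step procedure above can be written out directly. A compact Python sketch of a single-point prediction using Gaussian weights and a weighted least squares solve (the kernel choice and the intercept handling are assumptions; AiDotNet's internals may differ):

```python
import numpy as np

def predict_single(x_query, x_train, y_train, bandwidth=0.5):
    x_query = np.asarray(x_query, dtype=float)
    # Step 1: weight each training example by its distance to the query point.
    d2 = np.sum((x_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    # Step 2: weighted least squares with an intercept column,
    # solving (A^T W A) beta = A^T W y for the local coefficients.
    a = np.hstack([np.ones((len(x_train), 1)), x_train])
    aw = a * w[:, None]                      # row-scaling, equivalent to W @ a
    beta = np.linalg.solve(a.T @ aw, aw.T @ y_train)
    # Step 3: evaluate the local linear model at the query point.
    return np.concatenate(([1.0], x_query)) @ beta

x_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.array([0.0, 2.0, 4.0, 6.0])     # exactly y = 2x
pred = predict_single([1.5], x_train, y_train)
```

Note that on exactly linear data the weights cancel out and every query point recovers the same global line; the local weighting only changes the answer when the underlying relationship bends.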

Serialize()

Serializes the Locally Weighted Regression model to a byte array for storage or transmission.

public override byte[] Serialize()

Returns

byte[]

A byte array containing the serialized model.

Remarks

This method converts the Locally Weighted Regression model into a byte array that can be stored in a file, database, or transmitted over a network. The serialized data includes the base class data, the bandwidth parameter, and the training data that is used for making predictions.

For Beginners: This method saves your trained model as a sequence of bytes.

Serialization allows you to:

  • Save your model to a file
  • Store your model in a database
  • Send your model over a network
  • Keep your model for later use without having to retrain it

The serialized data includes:

  • The bandwidth parameter that controls the locality of the weighted regression
  • All the training examples (both features and target values)

Since Locally Weighted Regression stores all training data, the serialized model can be quite large compared to parametric models like linear regression.

Example:

// Serialize the model
byte[] modelData = lwr.Serialize();

// Save to a file
File.WriteAllBytes("lwr.model", modelData);