Class LinearQLearningOptions<T>

Namespace
AiDotNet.Models.Options
Assembly
AiDotNet.dll

Configuration options for Linear Q-Learning agents.

public class LinearQLearningOptions<T> : ReinforcementLearningOptions<T>

Type Parameters

T

The numeric type used for calculations (for example, float or double).

Inheritance
ReinforcementLearningOptions<T>
LinearQLearningOptions<T>

Remarks

Linear Q-Learning uses linear function approximation to estimate Q-values. Instead of maintaining a table, it learns weight vectors for each action and computes Q(s,a) = w_a^T * φ(s) where φ(s) are state features.

For Beginners: Linear Q-Learning extends tabular Q-learning to handle larger state spaces by using feature representations. Think of it as learning a formula instead of memorizing every single state.

Best for:

  • Medium-sized continuous state spaces
  • Problems where states can be represented as feature vectors
  • Faster learning than tabular methods
  • Generalization across similar states

Not suitable for:

  • Very small discrete state spaces (use tabular Q-learning instead)
  • Highly non-linear relationships between states and values (use neural networks)
  • Continuous action spaces (use actor-critic methods)
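The formula above, Q(s,a) = w_a^T * φ(s), can be sketched with plain arrays. This is an illustrative example only, not part of the AiDotNet API: one weight vector per action, and the Q-value for an action is the dot product of that action's weights with the state features.

```csharp
using System;
using System.Linq;

class LinearQSketch
{
    // Q(s, a) = w_a^T * φ(s): dot product of action a's weights with the state features.
    static double QValue(double[] weights, double[] features) =>
        weights.Zip(features, (w, f) => w * f).Sum();

    static void Main()
    {
        double[] phi = { 1.0, 0.5, -2.0 };   // φ(s): state feature vector (FeatureSize = 3)

        // One learned weight vector per action (ActionSize = 2).
        double[][] w =
        {
            new[] { 0.1, 0.2, 0.3 },
            new[] { -0.4, 0.0, 0.25 },
        };

        for (int a = 0; a < w.Length; a++)
            Console.WriteLine($"Q(s,{a}) = {QValue(w[a], phi)}");
    }
}
```

Because similar states produce similar feature vectors, the same weights generalize across states the agent has never visited, which is what lets this method scale beyond tabular Q-learning.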

Properties

ActionSize

Size of the action space (number of possible actions).

public int ActionSize { get; init; }

Property Value

int

FeatureSize

Number of features in the state representation.

public int FeatureSize { get; init; }

Property Value

int
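A minimal configuration sketch using the two properties documented above. Both use init-only setters, so they are assigned in an object initializer. Base-class settings (such as learning rate or discount factor) come from ReinforcementLearningOptions<T> and are not shown here; the concrete values are illustrative.

```csharp
using AiDotNet.Models.Options;

// Configure a linear Q-learning agent for an environment with
// 4 discrete actions and an 8-dimensional state feature vector.
var options = new LinearQLearningOptions<double>
{
    ActionSize = 4,   // number of possible actions
    FeatureSize = 8,  // length of the feature vector φ(s)
};
```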