Class LinearSARSAOptions<T>
Configuration options for Linear SARSA agents.
public class LinearSARSAOptions<T> : ReinforcementLearningOptions<T>
Type Parameters
T
The numeric type used for calculations.
Inheritance
ReinforcementLearningOptions<T> → LinearSARSAOptions<T>
Remarks
Linear SARSA uses linear function approximation for on-policy learning. Unlike Linear Q-Learning (off-policy), SARSA updates based on the action actually taken by the current policy, making it more conservative.
For Beginners: Linear SARSA is the on-policy version of Linear Q-Learning. It learns about the policy it's currently following, rather than the optimal policy. This makes it safer in risky environments where exploration could be dangerous.
Best for:
- Medium-sized continuous state spaces
- Risky environments (cliff walking, robotics)
- More conservative, safe learning
- Feature-based state representations
Not suitable for:
- Very small discrete states (use tabular SARSA)
- When fastest convergence is needed (use Q-learning)
- Highly non-linear problems (use neural networks)
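The on-policy distinction described above can be sketched in code. This is a minimal illustration under assumed names (`LinearSarsaSketch`, `_weights`, `Update`), not the library's implementation: Q(s, a) is the dot product of a per-action weight vector with the feature vector, and the update target uses the action actually selected by the current policy.

```csharp
// Minimal linear SARSA sketch (hypothetical names; not this library's API).
// Q(s, a) = w_a · φ(s): one weight vector per action.
class LinearSarsaSketch
{
    private readonly double[][] _weights; // [ActionSize][FeatureSize]

    public LinearSarsaSketch(int actionSize, int featureSize)
    {
        _weights = new double[actionSize][];
        for (int a = 0; a < actionSize; a++)
            _weights[a] = new double[featureSize];
    }

    public double Q(double[] features, int action)
    {
        double q = 0;
        for (int i = 0; i < features.Length; i++)
            q += _weights[action][i] * features[i];
        return q;
    }

    // On-policy update: the target uses aNext, the action actually chosen by
    // the current (e.g. epsilon-greedy) policy — not max over actions, as
    // Q-learning would.
    public void Update(double[] s, int a, double reward,
                       double[] sNext, int aNext, double alpha, double gamma)
    {
        double tdError = reward + gamma * Q(sNext, aNext) - Q(s, a);
        for (int i = 0; i < s.Length; i++)
            _weights[a][i] += alpha * tdError * s[i];
    }
}
```

Because the target includes the exploratory action the policy really takes, the learned values account for exploration risk, which is why SARSA behaves more conservatively near hazards such as cliffs.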
Properties
ActionSize
Size of the action space (number of possible actions).
public int ActionSize { get; init; }
Property Value
int
FeatureSize
Number of features in the state representation.
public int FeatureSize { get; init; }
Property Value
int
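As a usage sketch, the two properties on this page can be set with an object initializer (they are init-only). The numeric values here are illustrative assumptions:

```csharp
// Hypothetical configuration; property names match this page.
var options = new LinearSARSAOptions<double>
{
    FeatureSize = 8,  // length of the feature vector φ(s)
    ActionSize = 4    // number of discrete actions
};
```

`FeatureSize` and `ActionSize` together determine the shape of the linear model: one weight per (feature, action) pair.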