Class TabularActorCriticOptions<T>

Namespace
AiDotNet.Models.Options
Assembly
AiDotNet.dll

Configuration options for Tabular Actor-Critic agents.

public class TabularActorCriticOptions<T> : ReinforcementLearningOptions<T>

Type Parameters

T

The numeric type used for calculations.

Inheritance
object → ReinforcementLearningOptions<T> → TabularActorCriticOptions<T>

Remarks

Tabular Actor-Critic combines policy learning (actor) with value function learning (critic) using lookup tables. The actor learns which actions to take, while the critic evaluates how good those actions are.

For Beginners: Actor-Critic is like having both a player (actor) and a coach (critic). The player tries different strategies, and the coach provides feedback on how well they're working.

Best for:

  • Small discrete state/action spaces
  • Problems requiring both policy and value learning
  • More stable learning than pure policy gradient
  • Reducing variance in policy updates

Not suitable for:

  • Continuous states (use linear/neural versions)
  • Large state spaces (table becomes too big)
  • High-dimensional observations
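
As a usage sketch, the options might be configured for a small grid world like this. The type and property names come from this page; the concrete values (and any agent class that would consume the options) are illustrative assumptions, not library defaults:

```csharp
// Hypothetical configuration for a 4x4 grid world.
// Property names are from TabularActorCriticOptions<T>; the values are
// example choices, not documented defaults.
var options = new TabularActorCriticOptions<double>
{
    StateSize = 16,            // 4x4 grid => 16 discrete states
    ActionSize = 4,            // up, down, left, right
    ActorLearningRate = 0.01,  // policy updates, often smaller
    CriticLearningRate = 0.1   // value-function updates
};
```

Keeping the actor's learning rate below the critic's is a common heuristic: the critic's value estimates should stabilize faster than the policy that relies on them.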

Properties

ActionSize

Size of the action space (number of possible actions).

public int ActionSize { get; init; }

Property Value

int

ActorLearningRate

Learning rate for the actor (policy) updates.

public double ActorLearningRate { get; init; }

Property Value

double

CriticLearningRate

Learning rate for the critic (value function) updates.

public double CriticLearningRate { get; init; }

Property Value

double
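
To see where the two learning rates enter the algorithm, here is a self-contained sketch of a one-step tabular actor-critic update. This is an illustration of the standard technique, not AiDotNet's actual implementation; the class and member names below are invented for the example:

```csharp
using System;

// Illustrative sketch (not AiDotNet's implementation) of how the actor
// and critic learning rates are used in a one-step tabular update.
public class TabularActorCriticSketch
{
    private readonly double[,] _preferences; // actor table: H[s, a]
    private readonly double[] _values;       // critic table: V[s]
    private readonly double _actorLr, _criticLr, _gamma;

    public TabularActorCriticSketch(int stateSize, int actionSize,
        double actorLr, double criticLr, double gamma = 0.99)
    {
        _preferences = new double[stateSize, actionSize];
        _values = new double[stateSize];
        _actorLr = actorLr;
        _criticLr = criticLr;
        _gamma = gamma;
    }

    public double Value(int s) => _values[s];
    public double Preference(int s, int a) => _preferences[s, a];

    // One actor-critic update for a transition (s, a, reward, sNext).
    public double Update(int s, int a, double reward, int sNext)
    {
        // Critic: one-step TD error, then value-table update.
        double tdError = reward + _gamma * _values[sNext] - _values[s];
        _values[s] += _criticLr * tdError;

        // Actor: nudge the taken action's preference by the TD error,
        // which acts as an advantage estimate.
        _preferences[s, a] += _actorLr * tdError;
        return tdError;
    }
}
```

The TD error plays both roles: it corrects the critic's value table and signs the actor's policy update, which is what gives actor-critic lower variance than a pure policy-gradient method.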

StateSize

Size of the state space (number of discrete states in the lookup tables).

public int StateSize { get; init; }

Property Value

int
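
The remarks above warn that the table becomes too big for large state spaces. A quick back-of-envelope sketch makes this concrete, assuming one actor entry per state-action pair and one critic entry per state (a reasonable layout for tabular methods, though the library's internal storage is not documented here):

```csharp
// Rough entry count for the lookup tables, assuming one actor entry
// per (state, action) pair plus one critic entry per state.
public static class TableSizeEstimate
{
    public static long TotalEntries(int stateSize, int actionSize)
        => (long)stateSize * actionSize + stateSize;
}
```

With `StateSize = 16` and `ActionSize = 4` the tables hold only 80 entries, but at a million states and ten actions they hold eleven million, which is why continuous or high-dimensional problems call for the linear or neural variants instead.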