Class RainbowDQNOptions<T>
Configuration options for the Rainbow DQN agent.
public class RainbowDQNOptions<T> : ReinforcementLearningOptions<T>
Type Parameters
T
The numeric type used for calculations.
- Inheritance
- ReinforcementLearningOptions<T>
- RainbowDQNOptions<T>
- Inherited Members
Remarks
Rainbow DQN combines six extensions to DQN:
- Double Q-learning: Reduces overestimation bias
- Dueling networks: Separates value and advantage streams
- Prioritized replay: Samples important experiences more frequently
- Multi-step learning: Uses n-step returns for better credit assignment
- Distributional RL: Learns full distribution of returns (C51)
- Noisy networks: Parameter noise for exploration
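As a sketch of how these six extensions map onto the properties documented on this page, the following configuration enables all of them. The values shown are typical Rainbow hyperparameters for illustration only, not defaults guaranteed by this library, and the base class ReinforcementLearningOptions<T> may require additional settings not shown here:

```csharp
// Illustrative configuration; values are common Rainbow hyperparameters,
// not library defaults.
var options = new RainbowDQNOptions<double>
{
    StateSize = 8,
    ActionSize = 4,

    // Dueling networks: shared trunk, then separate value/advantage streams.
    SharedLayers = new List<int> { 128, 128 },
    ValueStreamLayers = new List<int> { 64 },
    AdvantageStreamLayers = new List<int> { 64 },

    // Multi-step learning: 3-step returns.
    NSteps = 3,

    // Distributional RL (C51): 51 atoms on [-10, 10].
    UseDistributional = true,
    NumAtoms = 51,
    VMin = -10.0,
    VMax = 10.0,

    // Noisy networks replace epsilon-greedy exploration.
    UseNoisyNetworks = true,
    NoisyNetSigma = 0.5,

    // Prioritized replay: alpha for priority strength, beta annealed toward 1.
    PriorityAlpha = 0.6,
    PriorityBeta = 0.4,
    PriorityBetaIncrement = 1e-4,
    PriorityEpsilon = 1e-6,
};
```

Leaving Optimizer unset falls back to the default Adam optimizer, as noted in the property documentation below.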
Properties
ActionSize
The number of discrete actions available to the agent.
public int ActionSize { get; init; }
Property Value
- int
AdvantageStreamLayers
Hidden layer sizes for the advantage stream of the dueling network.
public List<int> AdvantageStreamLayers { get; init; }
Property Value
- List<int>
NSteps
The number of steps used for multi-step (n-step) return targets.
public int NSteps { get; init; }
Property Value
- int
NoisyNetSigma
The initial standard deviation of the parameter noise in noisy network layers.
public double NoisyNetSigma { get; init; }
Property Value
- double
NumAtoms
The number of atoms in the categorical value distribution (51 in the original C51 algorithm).
public int NumAtoms { get; init; }
Property Value
- int
Optimizer
The optimizer used for updating network parameters. If null, the Adam optimizer is used by default.
public IOptimizer<T, Vector<T>, Vector<T>>? Optimizer { get; init; }
Property Value
- IOptimizer<T, Vector<T>, Vector<T>>
PriorityAlpha
The exponent that controls how strongly prioritized replay favors high-error experiences (0 = uniform sampling).
public double PriorityAlpha { get; init; }
Property Value
- double
PriorityBeta
The initial importance-sampling exponent used to correct the bias introduced by prioritized replay.
public double PriorityBeta { get; init; }
Property Value
- double
PriorityBetaIncrement
The amount by which PriorityBeta is increased after each sampling step, annealing it toward 1.
public double PriorityBetaIncrement { get; init; }
Property Value
- double
PriorityEpsilon
A small constant added to priorities so that no experience has zero probability of being sampled.
public double PriorityEpsilon { get; init; }
Property Value
- double
SharedLayers
Hidden layer sizes for the shared trunk of the dueling network, before it splits into value and advantage streams.
public List<int> SharedLayers { get; init; }
Property Value
- List<int>
StateSize
The dimensionality of the state (observation) vector.
public int StateSize { get; init; }
Property Value
- int
UseDistributional
Whether to learn a full categorical distribution of returns (C51) instead of a single expected value.
public bool UseDistributional { get; init; }
Property Value
- bool
UseNoisyNetworks
Whether to use noisy network layers for exploration instead of epsilon-greedy action selection.
public bool UseNoisyNetworks { get; init; }
Property Value
- bool
VMax
The maximum value of the support of the categorical value distribution.
public double VMax { get; init; }
Property Value
- double
VMin
The minimum value of the support of the categorical value distribution.
public double VMin { get; init; }
Property Value
- double
ValueStreamLayers
Hidden layer sizes for the value stream of the dueling network.
public List<int> ValueStreamLayers { get; init; }
Property Value
- List<int>
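To make NumAtoms, VMin, and VMax concrete: in C51 the return distribution is represented on a fixed grid of atoms spanning [VMin, VMax]. The following standalone sketch (not part of this library's API) computes that support:

```csharp
// Computes the C51 atom support z_i = VMin + i * deltaZ, for i = 0..NumAtoms-1.
// Standalone illustration; not part of this library.
using System;
using System.Linq;

class C51Support
{
    static void Main()
    {
        double vMin = -10.0, vMax = 10.0;
        int numAtoms = 51;

        // Spacing between adjacent atoms: (10 - (-10)) / 50 = 0.4
        double deltaZ = (vMax - vMin) / (numAtoms - 1);

        double[] atoms = Enumerable.Range(0, numAtoms)
                                   .Select(i => vMin + i * deltaZ)
                                   .ToArray();
        // atoms[0] = -10, atoms[50] = 10

        // The Q-value of an action is the expectation over this support:
        // Q(s, a) = sum_i p_i(s, a) * atoms[i]
        Console.WriteLine($"deltaZ={deltaZ}, first={atoms.First()}, last={atoms.Last()}");
    }
}
```

Choosing VMin and VMax to bracket the range of returns actually seen in the environment matters: returns outside the support are clipped to its edge atoms during the distributional Bellman projection.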