Interface IEnvironment<T>
Namespace: AiDotNet.Interfaces
Assembly: AiDotNet.dll
Represents a reinforcement learning environment that an agent interacts with.
public interface IEnvironment<T>
Type Parameters
T: The numeric type used for calculations (typically float or double).
Remarks
This interface defines the standard RL environment contract following the OpenAI Gym pattern. All state observations and actions use AiDotNet's Vector type for consistency with the rest of the library's type system.
For Beginners: An environment is the "world" that the RL agent interacts with. Think of it like a video game:
- The agent sees the current state (like where characters are on screen)
- The agent takes actions (like pressing buttons)
- The environment responds with a new state and a reward (like points scored)
- The episode ends when certain conditions are met (like game over)
This interface ensures all environments work consistently with AiDotNet's RL agents.
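A minimal sketch of the standard interaction loop, assuming a hypothetical CartPoleEnvironment that implements this interface, and assuming Vector&lt;T&gt; can be constructed from an array (verify both against the actual library):

using System;
using AiDotNet.Interfaces;

// CartPoleEnvironment is hypothetical; substitute any IEnvironment<double> implementation.
IEnvironment<double> env = new CartPoleEnvironment();
env.Seed(42);                                  // optional: make the run reproducible

Vector<double> state = env.Reset();            // start a fresh episode
bool done = false;
double totalReward = 0.0;

while (!done)
{
    // Placeholder policy: one-hot action selecting index 0 (assumes a discrete, 2-action space).
    var action = new Vector<double>(new[] { 1.0, 0.0 });

    var (nextState, reward, isDone, _) = env.Step(action);
    totalReward += reward;
    state = nextState;
    done = isDone;
}

Console.WriteLine($"Episode finished with total reward {totalReward}.");
env.Close();                                   // release any resources the environment holds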
Properties
ActionSpaceSize
Gets the size of the action space (number of possible discrete actions or continuous action dimensions).
int ActionSpaceSize { get; }
Property Value
- int
Remarks
For discrete action spaces (like CartPole): this is the number of possible actions (e.g., 2 for left/right). For continuous action spaces: this is the dimensionality of the action vector.
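As an illustration, here is a small helper that builds a syntactically valid action for either kind of space. It assumes Vector&lt;T&gt; exposes a length constructor and an indexer, which should be checked against AiDotNet's actual Vector API:

// Sketch only: Vector<double>(int) and the indexer are assumed, not confirmed, APIs.
static Vector<double> MakeAction(IEnvironment<double> env, int discreteIndex, double continuousValue)
{
    var action = new Vector<double>(env.ActionSpaceSize);
    if (env.IsContinuousActionSpace)
    {
        // Continuous: fill each action dimension with a real value.
        for (int i = 0; i < env.ActionSpaceSize; i++)
            action[i] = continuousValue;
    }
    else
    {
        // Discrete: one-hot encode the chosen action index.
        action[discreteIndex] = 1.0;
    }
    return action;
}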
IsContinuousActionSpace
Gets whether the action space is continuous (true) or discrete (false).
bool IsContinuousActionSpace { get; }
Property Value
- bool
ObservationSpaceDimension
Gets the dimension of the observation space.
int ObservationSpaceDimension { get; }
Property Value
- int
Remarks
This is the length of the Vector&lt;T&gt; returned by Reset() and of the NextState returned by Step(). For example, CartPole has 4 dimensions: cart position, cart velocity, pole angle, and pole angular velocity.
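For instance, unpacking a CartPole observation might look like the following sketch (continuing with the env instance from the interaction sketch above; indexing into Vector&lt;T&gt; is assumed):

Vector<double> obs = env.Reset();      // length == env.ObservationSpaceDimension (4 for CartPole)
double cartPosition        = obs[0];
double cartVelocity        = obs[1];
double poleAngle           = obs[2];
double poleAngularVelocity = obs[3];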
Methods
Close()
Closes the environment and cleans up resources.
void Close()
Reset()
Resets the environment to an initial state and returns the initial observation.
Vector<T> Reset()
Returns
- Vector<T>
Initial state observation as a Vector.
Remarks
For Beginners: Call this at the start of each episode to get a fresh starting state. Like pressing "restart" on a game.
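A typical training sketch calls Reset() once at the start of each episode (again continuing with the env instance from the earlier sketch):

for (int episode = 0; episode < 100; episode++)
{
    Vector<double> state = env.Reset();   // fresh starting state for this episode
    bool done = false;
    while (!done)
    {
        // ... choose an action, call Step(...), and read Done from the returned tuple ...
        done = true;                      // placeholder so this sketch terminates
    }
}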
Seed(int)
Seeds the random number generator for reproducibility.
void Seed(int seed)
Parameters
seed (int): The random seed.
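Seeding before Reset() should make runs repeatable, as in this sketch:

env.Seed(123);
Vector<double> first = env.Reset();

env.Seed(123);
Vector<double> second = env.Reset();   // a correctly seeded environment yields the same observation as 'first'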
Step(Vector<T>)
Takes an action in the environment and returns the result.
(Vector<T> NextState, T Reward, bool Done, Dictionary<string, object> Info) Step(Vector<T> action)
Parameters
action (Vector&lt;T&gt;): For discrete action spaces, a one-hot encoded Vector (length = ActionSpaceSize) or a Vector with a single element containing the action index. For continuous action spaces, a Vector of continuous values (length = ActionSpaceSize).
Returns
- (Vector<T> NextState, T Reward, bool Done, Dictionary<string, object> Info)
A tuple containing:
- NextState: The resulting state observation
- Reward: The reward received for this action
- Done: Whether the episode has terminated
- Info: Optional diagnostic information as a dictionary
Remarks
For Beginners: This is like taking one action in the game: you press a button (the action), and the game tells you what happened (the new state, the reward, and whether the game is over).
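Continuing the earlier sketch, a single call deconstructs naturally into the four tuple components:

var (nextState, reward, done, info) = env.Step(action);
if (done)
{
    // Episode over: the Info dictionary may carry diagnostics (keys are environment-specific).
    nextState = env.Reset();   // begin the next episode
}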