Class TimeSeriesModelBase<T>
- Namespace: AiDotNet.TimeSeries
- Assembly: AiDotNet.dll
Provides a base class for all time series forecasting models in the library.
public abstract class TimeSeriesModelBase<T> : ITimeSeriesModel<T>, IFullModel<T, Matrix<T>, Vector<T>>, IModel<Matrix<T>, Vector<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Matrix<T>, Vector<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Matrix<T>, Vector<T>>>, IGradientComputable<T, Matrix<T>, Vector<T>>, IJitCompilable<T>
Type Parameters
- T: The numeric data type used for calculations (e.g., float, double).
- Inheritance
- TimeSeriesModelBase<T>
Remarks
This abstract class defines the common interface and functionality that all time series models share, including training, prediction, evaluation, and serialization/deserialization capabilities.
Time series models capture temporal dependencies in data and use patterns learned from historical observations to predict future values. This base class provides the foundation for implementing various time series forecasting algorithms like ARIMA, Exponential Smoothing, TBATS, and more complex machine learning approaches.
For Beginners: A time series model helps predict future values based on past observations.
Think of a time series like a sequence of measurements taken over time - for example, daily temperatures, monthly sales, or hourly website visits. These models analyze the patterns in historical data to make predictions about what will happen next.
This base class is like a blueprint that all specific time series models follow. It ensures that every model can:
- Be trained on historical data to learn patterns
- Make predictions for future periods based on what it learned
- Evaluate how accurate its predictions are compared to actual values
- Be saved to disk and loaded later without retraining
Time series models are used in many real-world applications, including:
- Weather forecasting
- Stock market prediction
- Demand planning for retail
- Energy consumption forecasting
- Website traffic prediction
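The typical workflow looks like the following sketch. ArimaModel<double>, LoadTrainingData, and LoadTestData are hypothetical placeholders for a concrete derived model and your own data loading; only members documented on this page are used otherwise.
    // Sketch only: the derived model type and data helpers are placeholders.
    var options = new TimeSeriesRegressionOptions<double> { LagOrder = 7 };   // assumes a settable LagOrder
    var model = new ArimaModel<double>(options);

    // 1. Train on historical observations.
    (Matrix<double> xTrain, Vector<double> yTrain) = LoadTrainingData();
    model.Train(xTrain, yTrain);

    // 2. Predict values for new inputs.
    (Matrix<double> xTest, Vector<double> yTest) = LoadTestData();
    Vector<double> predictions = model.Predict(xTest);

    // 3. Evaluate accuracy against the actual values.
    Dictionary<string, double> metrics = model.EvaluateModel(xTest, yTest);

    // 4. Persist the trained model so it can be reloaded later without retraining.
    model.SaveModel("sales-forecast.model");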
Constructors
TimeSeriesModelBase(TimeSeriesRegressionOptions<T>)
Initializes a new instance of the TimeSeriesModelBase class with the specified options.
protected TimeSeriesModelBase(TimeSeriesRegressionOptions<T> options)
Parameters
- options (TimeSeriesRegressionOptions<T>): The configuration options for the time series model.
Remarks
This constructor validates the provided options, initializes the model with the specified configuration, and sets up the numeric operations appropriate for the data type.
For Beginners: This constructor sets up the basic configuration for any time series model.
It takes an options object that specifies important settings like:
- How many past values to consider (lag order)
- Whether to include a trend component (like steady growth or decline)
- The length of seasonal patterns (e.g., 7 for weekly, 12 for monthly)
- Whether to correct for autocorrelation in errors (systematic errors)
It also checks that these settings make sense - for example, you can't have a negative number of past values or a seasonal period less than 2.
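For example, the options for a daily series with a weekly pattern might be configured like this (a sketch; the property names match those described under Options below, though exact names and defaults may differ in your version):
    var options = new TimeSeriesRegressionOptions<double>
    {
        LagOrder = 7,                     // consider the last 7 observations
        IncludeTrend = true,              // model steady growth or decline
        SeasonalPeriod = 7,               // weekly pattern in daily data
        AutocorrelationCorrection = true  // correct systematic errors in predictions
    };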
Exceptions
- ArgumentNullException
Thrown when options is null.
- ArgumentException
Thrown when options contain invalid values.
Properties
DefaultLossFunction
Gets the default loss function used by this model for gradient computation.
public virtual ILossFunction<T> DefaultLossFunction { get; }
Property Value
- ILossFunction<T>
Remarks
This loss function is used when calling ComputeGradients(Matrix<T>, Vector<T>, ILossFunction<T>?) without explicitly providing a loss function. It represents the model's primary training objective.
For Beginners: The loss function tells the model "what counts as a mistake". For example:
- For regression (predicting numbers): Mean Squared Error measures how far predictions are from actual values
- For classification (predicting categories): Cross Entropy measures how confident the model is in the right category
This property provides a sensible default so you don't have to specify the loss function every time, but you can still override it if needed for special cases.
Distributed Training: In distributed training, all workers use the same loss function to ensure consistent gradient computation. The default loss function is automatically used when workers compute local gradients.
Exceptions
- InvalidOperationException
Thrown if accessed before the model has been configured with a loss function.
Engine
Gets the global execution engine for vector operations.
protected IEngine Engine { get; }
Property Value
- IEngine
Remarks
This property provides access to the execution engine (CPU or GPU) for performing vectorized operations. The engine is determined by the global AiDotNetEngine configuration and allows automatic fallback from GPU to CPU when GPU is not available.
For Beginners: This gives access to either CPU or GPU processing for faster computations. The system automatically chooses the best available option and falls back to CPU if GPU acceleration is not available.
IsTrained
Indicates whether the model has been trained.
protected bool IsTrained { get; }
Property Value
- bool
Remarks
This flag is set to true after the model has been successfully trained on data.
For Beginners: This is like a switch that gets turned on once the model has learned from your data. It helps prevent errors by making sure you don't try to use the model for predictions before it's ready.
LastEvaluationMetrics
Gets the last computed error metrics when the model was evaluated.
protected Dictionary<string, T> LastEvaluationMetrics { get; }
Property Value
- Dictionary<string, T>
Remarks
Contains accuracy metrics calculated during model evaluation, such as MAE, RMSE, and MAPE.
For Beginners: These numbers tell you how accurate the model's predictions are compared to actual values. Lower numbers mean better predictions. They're like a scorecard for the model's performance.
ModelParameters
Gets or sets the trained model parameters.
protected Vector<T> ModelParameters { get; set; }
Property Value
- Vector<T>
Remarks
Contains the values that the model has learned during training, such as coefficients for different lags, trend components, and seasonal factors.
For Beginners: These are the numerical values the model learns during training that tell it exactly how much influence each past observation should have on the prediction. They're like the recipe ingredients with specific measurements that the model has figured out work best.
NumOps
Provides numeric operations for the specific type T.
protected INumericOperations<T> NumOps { get; }
Property Value
- INumericOperations<T>
Remarks
This property provides mathematical operations appropriate for the generic type T, allowing the algorithm to work consistently with different numeric types like float, double, or decimal.
For Beginners: This is a helper that knows how to do math (addition, multiplication, etc.) with your specific number type, whether that's a regular double, a precise decimal value, or something else. It allows the model to work with different types of numbers without changing its core logic.
Options
Configuration options for the time series model.
protected TimeSeriesRegressionOptions<T> Options { get; }
Property Value
- TimeSeriesRegressionOptions<T>
Remarks
These options control the core behavior of the time series model, including how much historical data is considered, whether trends or seasonality are modeled, and how errors are handled.
For Beginners: Think of these options as settings that determine how the model works:
- LagOrder: How many past values to consider (like remembering the last 5 days to predict tomorrow)
- IncludeTrend: Whether to account for ongoing trends (like sales steadily increasing over time)
- SeasonalPeriod: Whether there are regular patterns (like retail sales spiking every December)
- AutocorrelationCorrection: Whether to fix systematic errors in predictions
ParameterCount
Gets the number of parameters in the model.
public virtual int ParameterCount { get; }
Property Value
- int
Remarks
This property returns the total count of trainable parameters in the model. It's useful for understanding model complexity and memory requirements.
SupportsJitCompilation
Gets whether this model currently supports JIT compilation.
public virtual bool SupportsJitCompilation { get; }
Property Value
- bool
True if the model can be JIT compiled, false otherwise.
Remarks
Some models may not support JIT compilation due to:
- Dynamic graph structure (changes based on input)
- Lack of computation graph representation
- Use of operations not yet supported by the JIT compiler
For Beginners: This tells you whether this specific model can benefit from JIT compilation.
Models return false if they:
- Use layer-based architecture without graph export (e.g., current neural networks)
- Have control flow that changes based on input data
- Use operations the JIT compiler doesn't understand yet
In these cases, the model will still work normally, just without JIT acceleration.
Methods
ApplyGradients(Vector<T>, T)
Applies pre-computed gradients to update the model parameters.
public virtual void ApplyGradients(Vector<T> gradients, T learningRate)
Parameters
- gradients (Vector<T>): The gradient vector to apply.
- learningRate (T): The learning rate for the update.
Remarks
Updates parameters using: θ = θ - learningRate * gradients
For Beginners: After computing gradients (seeing which direction to move), this method actually moves the model in that direction. The learning rate controls how big of a step to take.
Distributed Training: In DDP/ZeRO-2, this applies the synchronized (averaged) gradients after communication across workers. Each worker applies the same averaged gradients to keep parameters consistent.
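A minimal single-worker update loop combining ComputeGradients and ApplyGradients (a sketch; model, xTrain, yTrain, the learning rate, and the epoch count are illustrative):
    double learningRate = 0.01;
    for (int epoch = 0; epoch < 100; epoch++)
    {
        // Compute gradients without changing the model...
        Vector<double> gradients = model.ComputeGradients(xTrain, yTrain);
        // ...then take one step: parameters = parameters - learningRate * gradients.
        model.ApplyGradients(gradients, learningRate);
    }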
ApplyParameters(Vector<T>)
Applies the provided parameters to the model.
protected virtual void ApplyParameters(Vector<T> parameters)
Parameters
- parameters (Vector<T>): The vector of parameters to apply.
Remarks
This method applies the provided parameter values to the model, updating its internal state to reflect the new parameters. The implementation is model-specific and should be overridden by derived classes as needed.
For Beginners: This method updates the model's internal parameters with new values. It's the counterpart to GetParameters and should understand the parameter vector in exactly the same way.
For example, if the first 5 elements of the parameters vector represent lag coefficients, this method should apply them as lag coefficients in the model's internal structure.
Exceptions
- ArgumentException
Thrown when the parameters vector is invalid.
CalculateErrorMetrics(Vector<T>, Vector<T>)
Calculates error metrics by comparing predictions to actual values.
protected virtual Dictionary<string, T> CalculateErrorMetrics(Vector<T> predictions, Vector<T> actuals)
Parameters
- predictions (Vector<T>): The predicted values.
- actuals (Vector<T>): The actual values.
Returns
- Dictionary<string, T>
A dictionary containing error metrics.
Remarks
This method computes standard error metrics for time series forecasting, including MAE, RMSE, MAPE, and others as appropriate for the model type.
For Beginners: This method calculates how far off the model's predictions are from the actual values. It computes several different ways of measuring the prediction errors:
- MAE (Mean Absolute Error): The average magnitude of errors, ignoring whether they're positive or negative
- RMSE (Root Mean Squared Error): Emphasizes larger errors by squaring them before averaging
- MAPE (Mean Absolute Percentage Error): Shows errors as percentages of the actual values
These metrics help you understand not just how accurate the model is overall, but also what kinds of errors it tends to make.
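For reference, the three metrics listed above are conventionally computed as in this standalone sketch (not the library's internal implementation):
    static (double mae, double rmse, double mape) ErrorMetrics(double[] predicted, double[] actual)
    {
        double sumAbs = 0, sumSq = 0, sumPct = 0;
        for (int i = 0; i < actual.Length; i++)
        {
            double error = predicted[i] - actual[i];
            sumAbs += Math.Abs(error);                      // for MAE
            sumSq  += error * error;                        // for RMSE
            sumPct += Math.Abs(error / actual[i]) * 100.0;  // for MAPE (undefined when actual is 0)
        }
        int n = actual.Length;
        return (sumAbs / n, Math.Sqrt(sumSq / n), sumPct / n);
    }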
Clip(T, T, T)
Clips a value to be within the specified range.
protected T Clip(T value, T min, T max)
Parameters
- value (T): The value to clip.
- min (T): The minimum allowed value.
- max (T): The maximum allowed value.
Returns
- T
The clipped value.
Remarks
This utility method constrains a value to be within the specified range. If the value is less than the minimum, the minimum is returned. If the value is greater than the maximum, the maximum is returned. Otherwise, the original value is returned.
For Beginners: This method ensures a value stays within a specified range (between min and max). It's like setting boundaries that a value cannot cross.
For example, if you clip a value with min=0 and max=1:
- If the value is -0.5, it returns 0 (the minimum)
- If the value is 1.5, it returns 1 (the maximum)
- If the value is 0.7, it returns 0.7 (unchanged, as it's within range)
This is useful for:
- Preventing parameters from taking extreme values
- Constraining predictions to reasonable ranges
- Implementing optimization algorithms that require bounded parameters
Clone()
Creates a clone of the time series model.
public virtual IFullModel<T, Matrix<T>, Vector<T>> Clone()
Returns
- IFullModel<T, Matrix<T>, Vector<T>>
A new instance that is a clone of this model.
Remarks
This method creates a copy of the model that shares the same options but has independent parameter values. It's a lighter-weight alternative to DeepCopy for cases where a complete independent copy is not needed.
For Beginners: This method creates a copy of the current model with the same configuration and parameters.
While DeepCopy creates a fully independent duplicate of everything in the model, Clone may create a more lightweight copy that shares some non-essential components with the original (depending on the specific model implementation).
This is useful for:
- Creating variations of a model for ensemble methods
- Saving a snapshot of the model before making changes
- Creating multiple instances for parallel training
ComputeGradients(Matrix<T>, Vector<T>, ILossFunction<T>?)
Computes gradients of the loss function with respect to model parameters for the given data, WITHOUT updating the model parameters.
public virtual Vector<T> ComputeGradients(Matrix<T> input, Vector<T> target, ILossFunction<T>? lossFunction = null)
Parameters
- input (Matrix<T>): The input data.
- target (Vector<T>): The target/expected output.
- lossFunction (ILossFunction<T>?): The loss function to use for gradient computation. If null, uses the model's default loss function.
Returns
- Vector<T>
A vector containing gradients with respect to all model parameters.
Remarks
This method performs a forward pass, computes the loss, and back-propagates to compute gradients, but does NOT update the model's parameters. The parameters remain unchanged after this call.
Distributed Training: In DDP/ZeRO-2, each worker calls this to compute local gradients on its data batch. These gradients are then synchronized (averaged) across workers before applying updates. This ensures all workers compute the same parameter updates despite having different data.
For Meta-Learning: After adapting a model on a support set, you can use this method to compute gradients on the query set. These gradients become the meta-gradients for updating the meta-parameters.
For Beginners: Think of this as "dry run" training:
- The model sees what direction it should move (the gradients)
- But it doesn't actually move (parameters stay the same)
- You get to decide what to do with this information (average with others, inspect, modify, etc.)
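A sketch of the distributed pattern described above; AllReduceAverage stands in for whatever communication layer averages gradients across workers:
    // On each worker: gradients for the local batch only; parameters are left untouched.
    Vector<double> localGradients = model.ComputeGradients(localBatchX, localBatchY);

    // Average the gradients across all workers (hypothetical all-reduce helper).
    Vector<double> averagedGradients = AllReduceAverage(localGradients);

    // Every worker applies the identical averaged gradients, keeping parameters in sync.
    model.ApplyGradients(averagedGradients, learningRate);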
Exceptions
- InvalidOperationException
If lossFunction is null and the model has no default loss function.
CreateInstance()
Creates a new instance of the derived model class.
protected abstract IFullModel<T, Matrix<T>, Vector<T>> CreateInstance()
Returns
- IFullModel<T, Matrix<T>, Vector<T>>
A new instance of the same model type.
Remarks
This abstract factory method must be implemented by derived classes to create a new instance of their specific type. It's used by Clone and DeepCopy to ensure that the correct derived type is instantiated.
For Beginners: This method creates a new, empty instance of the specific model type. It's used during cloning and deep copying to ensure that the copy is of the same specific type as the original.
For example, if the original model is an ARIMA model, this method would create a new ARIMA model. If it's a TBATS model, it would create a new TBATS model.
DeepCopy()
Creates a deep copy of the time series model.
public virtual IFullModel<T, Matrix<T>, Vector<T>> DeepCopy()
Returns
- IFullModel<T, Matrix<T>, Vector<T>>
A new instance that is a deep copy of this model.
Remarks
This method creates a completely independent copy of the model, with all parameters, options, and internal state duplicated. Modifications to the copy will not affect the original, and vice versa.
For Beginners: This method creates a completely independent copy of the current model.
A deep copy means that all components of the model are duplicated, including:
- Configuration options
- Learned parameters
- Internal state variables
This is useful when you need to:
- Create multiple variations of a model for experimentation
- Save a model at a specific point during training
- Use the same model structure for different datasets
Changes to the copy won't affect the original model and vice versa.
Deserialize(byte[])
Deserializes the model from a byte array.
public virtual void Deserialize(byte[] data)
Parameters
- data (byte[]): The byte array containing the serialized model.
Remarks
This method deserializes the common components of the model (options, trained status, parameters) and then calls the model-specific deserialization method to handle specialized data.
For Beginners: Deserialization is the process of loading a previously saved model from a byte array.
This method:
- Creates a memory stream from the provided byte array
- Reads the common configuration options shared by all models
- Reads whether the model has been trained
- Reads the model parameters learned during training
- Calls the model-specific deserialization method to read specialized data
After deserialization, the model is restored to the same state it was in when serialized, allowing you to make predictions without retraining the model.
This is particularly useful for:
- Deploying models to production environments
- Sharing models between different applications
- Saving computation time by not having to retrain complex models
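A byte-array round trip with Serialize and Deserialize looks like this sketch (the derived type, options, and inputs are illustrative):
    // Save: capture the trained model as bytes (write to disk, a database, or send over a network).
    byte[] bytes = trainedModel.Serialize();
    File.WriteAllBytes("model.bin", bytes);

    // Load: construct a model of the same type, then restore its state from the bytes.
    var restored = new ArimaModel<double>(options);
    restored.Deserialize(File.ReadAllBytes("model.bin"));
    Vector<double> forecast = restored.Predict(newInputs);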
Exceptions
- ArgumentNullException
Thrown when data is null.
- InvalidOperationException
Thrown when the serialized data is corrupted or incompatible.
DeserializeCore(BinaryReader)
Deserializes model-specific data from the binary reader.
protected abstract void DeserializeCore(BinaryReader reader)
Parameters
- reader (BinaryReader): The binary reader to read from.
Remarks
This abstract method must be implemented by each specific model type to load its unique parameters and state.
For Beginners: This method is responsible for loading the specific details that make each type of time series model unique. It reads exactly what was written by SerializeCore, in the same order, reconstructing the specialized parts of the model.
It's the counterpart to SerializeCore and should read data in exactly the same order and format that it was written.
This separation allows the base class to handle common deserialization tasks while each model type handles its specialized data.
EvaluateModel(Matrix<T>, Vector<T>)
Evaluates the performance of the trained model on test data.
public virtual Dictionary<string, T> EvaluateModel(Matrix<T> xTest, Vector<T> yTest)
Parameters
- xTest (Matrix<T>): The input features matrix for testing.
- yTest (Vector<T>): The actual target values for testing.
Returns
- Dictionary<string, T>
A dictionary containing evaluation metrics.
Remarks
This method calculates various error metrics by comparing the model's predictions on the test data to the actual values, providing a quantitative assessment of model performance.
For Beginners: This method tests how well the model performs by comparing its predictions to actual values.
It works by:
- Using the model to make predictions based on the test inputs
- Comparing these predictions to the actual test values
- Calculating various error metrics to quantify the accuracy
Common metrics include:
- Mean Absolute Error (MAE): Average of absolute differences between predictions and actual values
- Root Mean Squared Error (RMSE): Square root of the average squared differences
- Mean Absolute Percentage Error (MAPE): Average percentage differences
These metrics help you understand how accurate your model is and compare different models. Lower values indicate better performance for all these metrics.
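For example (a sketch; the exact metric keys returned may vary by model):
    Dictionary<string, double> metrics = model.EvaluateModel(xTest, yTest);
    foreach (var metric in metrics)
    {
        Console.WriteLine($"{metric.Key}: {metric.Value}");   // e.g. "MAE: 3.21", "RMSE: 4.75"
    }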
Exceptions
- InvalidOperationException
Thrown when the model has not been trained.
- ArgumentNullException
Thrown when xTest or yTest is null.
- ArgumentException
Thrown when the dimensions of xTest and yTest don't match.
ExportComputationGraph(List<ComputationNode<T>>)
Exports the model's computation graph for JIT compilation.
public virtual ComputationNode<T> ExportComputationGraph(List<ComputationNode<T>> inputNodes)
Parameters
- inputNodes (List<ComputationNode<T>>): List to populate with input computation nodes (parameters).
Returns
- ComputationNode<T>
The output computation node representing the model's prediction.
Remarks
This method should construct a computation graph representing the model's forward pass. The graph should use placeholder input nodes that will be filled with actual data during execution.
For Beginners: This method creates a "recipe" of your model's calculations that the JIT compiler can optimize.
The method should:
- Create placeholder nodes for inputs (features, parameters)
- Build the computation graph using TensorOperations
- Return the final output node
- Add all input nodes to the inputNodes list (in order)
Example for a simple linear model (y = Wx + b):
    public ComputationNode<T> ExportComputationGraph(List<ComputationNode<T>> inputNodes)
    {
        // Create placeholder inputs
        var x = TensorOperations<T>.Variable(new Tensor<T>(InputShape), "x");
        var W = TensorOperations<T>.Variable(Weights, "W");
        var b = TensorOperations<T>.Variable(Bias, "b");

        // Add inputs in order
        inputNodes.Add(x);
        inputNodes.Add(W);
        inputNodes.Add(b);

        // Build graph: y = Wx + b
        var matmul = TensorOperations<T>.MatMul(x, W);
        var output = TensorOperations<T>.Add(matmul, b);

        return output;
    }
The JIT compiler will then:
- Optimize the graph (fuse operations, eliminate dead code)
- Compile it to fast native code
- Cache the compiled version for reuse
Forecast(Vector<T>, int)
Generates a forecast for multiple steps ahead.
public virtual Vector<T> Forecast(Vector<T> history, int steps)
Parameters
- history (Vector<T>): The historical time series data.
- steps (int): The number of steps to forecast.
Returns
- Vector<T>
A vector containing the forecasted values.
Remarks
This method generates a multi-step forecast using the history data as the starting point. For each step, it makes a prediction and then updates the history with the predicted value to generate the next prediction.
For Beginners: This method predicts multiple future values in sequence.
For example, if you have daily data and want to forecast the next 7 days:
- It first predicts day 1 using your historical data
- Then it adds that prediction to the history
- Then it predicts day 2 using the updated history (including the day 1 prediction)
- And so on, until it has predicted all 7 days
This approach lets you make predictions further into the future, but be aware that errors tend to accumulate with each step (predictions become less accurate the further ahead you forecast).
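For example, forecasting the next 7 days from a vector of daily history (a sketch; LoadDailyHistory is a placeholder for your own data):
    Vector<double> history = LoadDailyHistory();              // e.g. the last 60 daily observations
    Vector<double> nextWeek = model.Forecast(history, steps: 7);
    // nextWeek contains 7 values: tomorrow's prediction through the prediction 7 days out.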
Exceptions
- InvalidOperationException
Thrown when the model has not been trained.
- ArgumentNullException
Thrown when history is null.
- ArgumentException
Thrown when steps is not positive or history is insufficient.
GetActiveFeatureIndices()
Gets the indices of features (lags/time periods) actively used by the model.
public virtual IEnumerable<int> GetActiveFeatureIndices()
Returns
- IEnumerable<int>
A collection of indices representing the active features.
Remarks
This method identifies which input features (lags) have significant impact on the model's predictions, based on their corresponding parameter values.
For Beginners: This method tells you which past time periods (lags) are most important for predictions.
For example, if the result includes indices [1, 7, 12], this means:
- The value from 1 period ago strongly influences the prediction
- The value from 7 periods ago strongly influences the prediction (could be weekly seasonality)
- The value from 12 periods ago strongly influences the prediction (could be yearly for monthly data)
These active features are determined by the model's structure and learned parameters. For instance, in an ARIMA model, non-zero AR coefficients indicate active features.
Understanding active features helps interpret how the model works and which historical points matter most for forecasting.
Exceptions
- InvalidOperationException
Thrown when the model has not been trained.
GetFeatureImportance()
Gets the feature importance scores as a dictionary.
public virtual Dictionary<string, T> GetFeatureImportance()
Returns
- Dictionary<string, T>
A dictionary mapping feature names to their importance scores.
GetFeatureImportance(int)
Gets the importance of a specific feature (lag).
protected virtual T GetFeatureImportance(int featureIndex)
Parameters
- featureIndex (int): The index of the feature.
Returns
- T
A value indicating the feature's importance.
Remarks
This method calculates the importance of a specific lag in the model's predictions, based on its parameter value and the model's structure. The implementation is model-specific.
For Beginners: This method estimates how important a specific past time period is for making predictions. Higher values indicate more influential features.
For example, in many time series models:
- Recent lags (like lag 1) often have higher importance
- Seasonal lags (like lag 7 for weekly data) often have higher importance
- Some lags may have near-zero importance, meaning they don't affect predictions much
This information helps understand the model's internal logic and which past time periods it considers most predictive of future values.
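A sketch of inspecting importances through the public dictionary overload (the feature-name keys are model-specific):
    foreach (var feature in model.GetFeatureImportance())
    {
        Console.WriteLine($"{feature.Key}: {feature.Value}");   // e.g. a lag name and its importance score
    }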
Exceptions
- InvalidOperationException
Thrown when the model has not been trained.
- ArgumentOutOfRangeException
Thrown when featureIndex is negative.
GetModelMetadata()
Gets metadata about the time series model.
public abstract ModelMetadata<T> GetModelMetadata()
Returns
- ModelMetadata<T>
A ModelMetadata<T> object containing information about the model.
Remarks
This method provides comprehensive metadata about the model, including its type, configuration options, training status, evaluation metrics, and information about which features/lags are most important.
For Beginners: This method provides important information about the model that can help you understand its characteristics and performance.
The metadata includes:
- The type of model (e.g., ARIMA, TBATS, Neural Network)
- Configuration details (e.g., lag order, seasonality period)
- Whether the model has been trained
- Performance metrics from the last evaluation
- Information about which features (time periods) are most influential
This information is useful for documentation, model comparison, and debugging. It's like a complete summary of everything important about the model.
GetParameters()
Gets the trainable parameters of the model as a vector.
public virtual Vector<T> GetParameters()
Returns
- Vector<T>
A vector containing all trainable parameters of the model.
Remarks
This method returns all the parameters learned during training, combined into a single vector. These parameters determine how the model makes predictions based on input data.
For Beginners: This method returns all the numerical values that the model has learned during training.
For time series models, these parameters typically include:
- Coefficients for each lag (how much each past value influences the prediction)
- Trend coefficients (if trend is included)
- Seasonal coefficients (if seasonality is included)
- Error correction terms (if autocorrelation correction is enabled)
These parameters can be:
- Analyzed to understand what the model has learned
- Saved for later use
- Modified to adjust the model's behavior
- Transferred to another model with the same structure
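A sketch of reading the learned parameters and building a variation without retraining (assumes Vector<T> exposes an indexer; the adjustment itself is purely illustrative):
    Vector<double> parameters = trainedModel.GetParameters();

    // Inspect or tweak a value, then create a new model that uses the adjusted parameters.
    parameters[0] = parameters[0] * 1.1;
    var adjusted = trainedModel.WithParameters(parameters);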
Exceptions
- InvalidOperationException
Thrown when the model has not been trained.
IsFeatureUsed(int)
Determines if a specific feature (lag) is actively used by the model.
public virtual bool IsFeatureUsed(int featureIndex)
Parameters
- featureIndex (int): The index of the feature to check.
Returns
- bool
True if the feature is actively used; otherwise, false.
Remarks
This method determines whether a specific lag has a significant impact on the model's predictions, based on its corresponding parameter value. The threshold for significance is model-specific.
For Beginners: This method checks if a specific past time period (lag) has a significant influence on the model's predictions.
For example:
- IsFeatureUsed(1) checks if the value from 1 period ago matters
- IsFeatureUsed(7) checks if the value from 7 periods ago matters
- IsFeatureUsed(12) checks if the value from 12 periods ago matters
A feature is typically considered "used" if its coefficient or weight in the model is significantly different from zero.
This information helps understand which historical points the model considers important when making predictions.
Exceptions
- InvalidOperationException
Thrown when the model has not been trained.
- ArgumentOutOfRangeException
Thrown when featureIndex is negative or exceeds the maximum lag order.
LoadModel(string)
Loads the model from a file.
public virtual void LoadModel(string filePath)
Parameters
- filePath (string): The path to the file containing the saved model.
Remarks
This method provides a convenient way to load a model directly from disk. It combines file I/O operations with deserialization.
For Beginners: This is like clicking "Open" in a document editor. Instead of manually reading from a file and then calling Deserialize(), this method does both steps for you.
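Together with SaveModel, a typical file round trip looks like this sketch (the path and derived type are illustrative):
    trainedModel.SaveModel("energy-forecast.model");

    var loaded = new ArimaModel<double>(options);   // must be the same derived type that was saved
    loaded.LoadModel("energy-forecast.model");
    Vector<double> forecast = loaded.Predict(newInputs);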
Exceptions
- FileNotFoundException
Thrown when the specified file does not exist.
- IOException
Thrown when an I/O error occurs while reading from the file or when the file contains corrupted or invalid model data.
LoadState(Stream)
Loads the time series model's state from a stream.
public virtual void LoadState(Stream stream)
Parameters
- stream (Stream): The stream to read the model state from.
Remarks
This method deserializes a time series model that was previously saved with SaveState. It uses the existing Deserialize method after reading data from the stream.
For Beginners: This is like loading a saved snapshot of your time series model.
When you call LoadState:
- All parameters and trends are read from the stream
- Model configuration and state are restored
After loading, the model can:
- Make forecasts using the restored parameters
- Continue training from where it left off
- Be deployed to production for time series prediction
This is essential for:
- Resuming interrupted training sessions
- Loading the best model for forecasting
- Deploying trained models to production
- Knowledge distillation workflows
Exceptions
- ArgumentNullException
Thrown when stream is null.
- IOException
Thrown when there's an error reading from the stream.
- InvalidOperationException
Thrown when the stream contains invalid or incompatible data.
Predict(Matrix<T>)
Generates forecasts using the trained time series model.
public virtual Vector<T> Predict(Matrix<T> input)
Parameters
- input (Matrix<T>): The input features matrix.
Returns
- Vector<T>
A vector of forecasted values.
Remarks
This method validates that the model is trained and the input data is valid, then generates predictions for each row in the input matrix using the model-specific prediction algorithm.
For Beginners: This method uses the patterns learned during training to predict future values.
The input matrix typically contains:
- Past values of the time series
- Time indicators (e.g., month, day of week)
- Any external factors that might influence the forecast
The output is a vector of predicted values, one for each row in the input matrix. Each prediction represents what the model thinks will happen at that future time point.
Exceptions
- InvalidOperationException
Thrown when the model has not been trained.
- ArgumentNullException
Thrown when input is null.
- ArgumentException
Thrown when input has incorrect dimensions.
PredictSingle(Vector<T>)
Generates a prediction for a single input vector.
public abstract T PredictSingle(Vector<T> input)
Parameters
- input (Vector<T>): The input feature vector.
Returns
- T
The predicted value.
Remarks
This abstract method must be implemented by derived classes to generate a prediction for a single input vector using the model-specific algorithm.
For Beginners: This method takes a single row of input data (representing one time point) and calculates what the model predicts will happen at that point. Each type of time series model will have its own way of calculating this prediction based on the patterns it learned during training.
PrepareForecastFeatures(List<T>, int)
Prepares input features for a forecast step using the extended history.
protected virtual Vector<T> PrepareForecastFeatures(List<T> extendedHistory, int step)
Parameters
- extendedHistory (List<T>): The historical data including any previous forecasts.
- step (int): The current forecast step (0-based).
Returns
- Vector<T>
A vector of input features for the forecast.
Remarks
This method extracts the appropriate lags and constructs any additional features needed for the forecast, such as trend indicators or seasonal dummies.
For Beginners: This method prepares the input data needed to make a forecast for a specific step. It typically extracts recent values, seasonal patterns, and trend indicators from the history (which may include previous predictions for multi-step forecasts).
Reset()
Resets the model to its untrained state.
public virtual void Reset()
Remarks
This method clears all trained parameters and returns the model to its initial untrained state.
For Beginners: This method erases all the patterns the model has learned.
After calling this method:
- All coefficients and learned parameters are cleared
- The model behaves as if it was never trained
- You would need to train it again before making predictions
This is useful when you want to:
- Experiment with different training data on the same model
- Retrain a model from scratch with new parameters
- Reset a model that might have been trained incorrectly
SaveModel(string)
Saves the model to a file.
public virtual void SaveModel(string filePath)
Parameters
- filePath (string): The path where the model should be saved.
Remarks
This method provides a convenient way to save the model directly to disk. It combines serialization with file I/O operations.
For Beginners: This is like clicking "Save As" in a document editor. Instead of manually calling Serialize() and then writing to a file, this method does both steps for you.
Exceptions
- IOException
Thrown when an I/O error occurs while writing to the file.
- UnauthorizedAccessException
Thrown when the caller does not have the required permission to write to the specified file path.
SaveState(Stream)
Saves the time series model's current state to a stream.
public virtual void SaveState(Stream stream)
Parameters
- stream (Stream): The stream to write the model state to.
Remarks
This method serializes the time series model's parameters and configuration. It uses the existing Serialize method and writes the data to the provided stream.
For Beginners: This is like creating a snapshot of your trained time series model.
When you call SaveState:
- All learned parameters and trends are written to the stream
- Model configuration and internal state are preserved
This is particularly useful for:
- Checkpointing during long training sessions
- Saving the best model for forecasting
- Knowledge distillation from time series models
- Deploying forecasting models to production
You can later use LoadState to restore the model.
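A checkpointing sketch with SaveState and LoadState (a file stream is used here, but any writable/readable stream works):
    // Checkpoint the model mid-training.
    using (var output = File.Create("checkpoint.bin"))
    {
        model.SaveState(output);
    }

    // Later: restore the state and continue training or forecasting.
    using (var input = File.OpenRead("checkpoint.bin"))
    {
        model.LoadState(input);
    }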
Exceptions
- ArgumentNullException
Thrown when stream is null.
- IOException
Thrown when there's an error writing to the stream.
Serialize()
Serializes the model to a byte array for storage or transmission.
public virtual byte[] Serialize()
Returns
- byte[]
A byte array containing the serialized model.
Remarks
This method serializes the common components of the model (options, trained status, parameters) and then calls the model-specific serialization method to handle specialized data.
For Beginners: Serialization converts the model's state into a format that can be saved to disk or transmitted over a network.
This method:
- Creates a memory stream to hold the serialized data
- Writes the common configuration options shared by all models
- Writes whether the model has been trained
- Writes the model parameters learned during training
- Calls the model-specific serialization method to write specialized data
- Returns everything as a byte array
This allows you to save a trained model and load it later without having to retrain it, which can save significant time for complex models trained on large datasets.
SerializeCore(BinaryWriter)
Serializes model-specific data to the binary writer.
protected abstract void SerializeCore(BinaryWriter writer)
Parameters
- writer (BinaryWriter): The binary writer to write to.
Remarks
This abstract method must be implemented by each specific model type to save its unique parameters and state.
For Beginners: This method is responsible for saving the specific details that make each type of time series model unique. Different models have different internal structures and parameters that need to be saved separately from the common elements.
For example:
- An ARIMA model would save its AR, I, and MA coefficients
- A TBATS model would save its level, trend, and seasonal components
- A neural network model would save its weights and biases
This separation allows the base class to handle common serialization tasks while each model type handles its specialized data.
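A paired override sketch for a hypothetical derived model with two extra fields; the key rule is that DeserializeCore must read back exactly what SerializeCore wrote, in the same order and with the same types:
    protected override void SerializeCore(BinaryWriter writer)
    {
        writer.Write(_level);           // hypothetical model-specific field (double)
        writer.Write(_seasonLength);    // hypothetical model-specific field (int)
    }

    protected override void DeserializeCore(BinaryReader reader)
    {
        _level = reader.ReadDouble();         // read back in the same order...
        _seasonLength = reader.ReadInt32();   // ...and with matching types
    }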
SetActiveFeatureIndices(IEnumerable<int>)
Sets the active feature indices for this model.
public virtual void SetActiveFeatureIndices(IEnumerable<int> featureIndices)
Parameters
- featureIndices (IEnumerable<int>): The indices of features to activate.
SetParameters(Vector<T>)
Sets the parameters for this model.
public virtual void SetParameters(Vector<T> parameters)
Parameters
- parameters (Vector<T>): A vector containing the model parameters.
Train(Matrix<T>, Vector<T>)
Trains the time series model using the provided input data and target values.
public void Train(Matrix<T> x, Vector<T> y)
Parameters
- x (Matrix<T>): The input features matrix.
- y (Vector<T>): The target values vector.
Remarks
This method validates the input data, prepares the model for training, performs the actual training algorithm, and sets the IsTrained flag once complete.
For Beginners: Training is the process where the model learns patterns from historical data.
During training, the model analyzes the relationship between:
- Input features (x): These might include past values, time indicators, or external factors
- Target values (y): The actual observed values we want to predict
After training, the model will have learned parameters that capture the patterns in your data, which it can then use to make predictions for new inputs.
Internally, this method delegates the model-specific learning to TrainCore, so each specific model type (ARIMA, TBATS, etc.) implements its own training algorithm.
Exceptions
- ArgumentNullException
Thrown when x or y is null.
- ArgumentException
Thrown when the dimensions of x and y don't match or when the data is insufficient.
TrainCore(Matrix<T>, Vector<T>)
Performs the model-specific training algorithm.
protected abstract void TrainCore(Matrix<T> x, Vector<T> y)
Parameters
- x (Matrix<T>): The input features matrix.
- y (Vector<T>): The target values vector.
Remarks
This abstract method must be implemented by derived classes to perform the actual model training.
For Beginners: This is where the specific math and algorithms for each type of time series model are implemented. Different models (like ARIMA, Exponential Smoothing, etc.) will have their own unique ways of finding patterns in the data.
ValidateOptions(TimeSeriesRegressionOptions<T>)
Validates the provided time series options to ensure they are within acceptable ranges.
protected virtual void ValidateOptions(TimeSeriesRegressionOptions<T> options)
Parameters
- options (TimeSeriesRegressionOptions<T>): The options to validate.
Remarks
Checks that LagOrder is non-negative, SeasonalPeriod is either 0 (no seasonality) or at least 2, and that other parameters have reasonable values.
For Beginners: This method makes sure the settings you've chosen for your model make logical sense. For example, you can't look back a negative number of time periods, and a seasonal pattern must repeat at least every 2 periods to be considered seasonal.
Exceptions
- ArgumentException
Thrown when any option is invalid.
ValidatePredictionInput(Matrix<T>)
Validates the input data for prediction.
protected virtual void ValidatePredictionInput(Matrix<T> input)
Parameters
- input (Matrix<T>): The input features matrix.
Remarks
This method verifies that the input data for prediction is valid and has the correct dimensions.
For Beginners: Before making predictions, this method checks that your input data is properly formatted. It ensures that:
- You have provided input features
- The input has the correct structure (number of features/columns)
- The data meets any model-specific requirements
Exceptions
- ArgumentNullException
Thrown when input is null.
- ArgumentException
Thrown when input has incorrect dimensions.
ValidateTrainingInputs(Matrix<T>, Vector<T>)
Validates the training input data before proceeding with training.
protected virtual void ValidateTrainingInputs(Matrix<T> x, Vector<T> y)
Parameters
- x (Matrix<T>): The input features matrix.
- y (Vector<T>): The target values vector.
Remarks
This method verifies that the input data meets the requirements for model training, including checking dimensions, sample size, and consistency.
For Beginners: Before the model starts learning, this method checks that your data is valid and properly formatted. It ensures that:
- You have provided both input features and target values
- The number of examples matches the number of target values
- You have enough data points to train the model effectively
- There are no obvious inconsistencies in your data structure
Exceptions
- ArgumentNullException
Thrown when x or y is null.
- ArgumentException
Thrown when the dimensions of x and y don't match or when the data is insufficient.
WithParameters(Vector<T>)
Creates a new model with the specified parameters.
public virtual IFullModel<T, Matrix<T>, Vector<T>> WithParameters(Vector<T> parameters)
Parameters
- parameters (Vector<T>): The vector of parameters to use for the new model.
Returns
- IFullModel<T, Matrix<T>, Vector<T>>
A new model instance with the specified parameters.
Remarks
This method creates a clone of the current model but replaces its parameters with the provided values. This allows for creating variations of a model without retraining.
For Beginners: This method creates a copy of the current model but with different parameter values.
This allows you to:
- Create a model with manually specified parameters (e.g., from expert knowledge)
- Make small adjustments to a trained model without full retraining
- Implement ensemble models that combine multiple parameter sets
- Perform what-if analysis by changing specific parameters
The parameters must be in the same order and have the same meaning as those returned by the GetParameters method.
Exceptions
- ArgumentNullException
Thrown when parameters is null.
- ArgumentException
Thrown when the parameters vector has incorrect length.