Class AiModelResult<T, TInput, TOutput>.InferenceSequence

Namespace: AiDotNet.Models.Results
Assembly: AiDotNet.dll

Represents one independent, stateful inference sequence (e.g., one chat/generation stream).

public sealed class AiModelResult<T, TInput, TOutput>.InferenceSequence : IDisposable
Inheritance
object → AiModelResult<T, TInput, TOutput>.InferenceSequence

Implements
IDisposable

Remarks

A sequence may keep internal state across calls when inference optimizations are enabled (e.g., KV-cache). Call Reset() to start a new logical sequence on the same object.
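
Examples

A minimal lifecycle sketch. Only Predict(TInput), Reset(), and Dispose() are taken from this page; the factory method name CreateInferenceSequence() is a hypothetical placeholder for however the parent AiModelResult<T, TInput, TOutput> actually hands out sequences.

using AiDotNet.Models.Results;

static class InferenceSequenceExample
{
    // CreateInferenceSequence() is an assumed factory name for illustration only;
    // obtain the sequence however the parent AiModelResult exposes it.
    public static (TOutput First, TOutput Second) RunTwoTurns<T, TInput, TOutput>(
        AiModelResult<T, TInput, TOutput> modelResult,
        TInput firstTurn,
        TInput secondTurn)
    {
        // Dispose releases any sequence-local resources when the block exits.
        using var sequence = modelResult.CreateInferenceSequence();

        // Consecutive calls may reuse sequence-local state (e.g., a KV-cache).
        var first = sequence.Predict(firstTurn);
        var second = sequence.Predict(secondTurn);
        return (First: first, Second: second);
    }
}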

Methods

Dispose()

Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.

public void Dispose()

Predict(TInput)

Runs a prediction for the given input within this sequence.

public TOutput Predict(TInput newData)

Parameters

newData TInput

The input to predict on.

Returns

TOutput

The predicted output.

Remarks

When inference optimizations are configured, this method may keep and reuse sequence-local state (such as a KV-cache) across calls for improved throughput and latency.

For Beginners: This is like predicting with "memory": each call can reuse what earlier calls in the same sequence already computed, so the next call can be faster.
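
Examples

A short sketch of the behavior described above: consecutive Predict(TInput) calls within one sequence can build on each other. Only Predict(TInput) is taken from this page; the helper shape is illustrative.

using System.Collections.Generic;
using AiDotNet.Models.Results;

static class PredictExample
{
    public static List<TOutput> PredictTurns<T, TInput, TOutput>(
        AiModelResult<T, TInput, TOutput>.InferenceSequence sequence,
        IEnumerable<TInput> turns)
    {
        var outputs = new List<TOutput>();
        foreach (var turn in turns)
        {
            // Later turns can be faster because state from earlier turns
            // (such as a KV-cache) is not recomputed.
            outputs.Add(sequence.Predict(turn));
        }
        return outputs;
    }
}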

Reset()

Resets sequence-local inference state.

public void Reset()

Remarks

This clears any cached state for the current sequence so the next prediction starts fresh.

For Beginners: Call this when you want to start a new conversation/stream using the same sequence object.
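
Examples

A short sketch of running a second, independent conversation on the same sequence object by calling Reset() between them; only Predict(TInput) and Reset() from this page are used, and the helper shape is illustrative.

using AiDotNet.Models.Results;

static class ResetExample
{
    public static (TOutput ReplyA, TOutput ReplyB) TwoConversations<T, TInput, TOutput>(
        AiModelResult<T, TInput, TOutput>.InferenceSequence sequence,
        TInput firstConversationInput,
        TInput secondConversationInput)
    {
        var replyA = sequence.Predict(firstConversationInput);

        // Clear cached sequence state so the second conversation
        // does not reuse anything from the first.
        sequence.Reset();

        var replyB = sequence.Predict(secondConversationInput);
        return (ReplyA: replyA, ReplyB: replyB);
    }
}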