Class AudioVisualEventLocalizationNetwork<T>

Namespace: AiDotNet.NeuralNetworks
Assembly: AiDotNet.dll

Neural network for audio-visual event localization - identifying WHEN and WHERE events occur in video by jointly analyzing audio and visual streams with precise temporal boundaries.

public class AudioVisualEventLocalizationNetwork<T> : NeuralNetworkBase<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable, IAudioVisualEventLocalizationModel<T>

Type Parameters

T

The numeric type for calculations.

Inheritance
object
NeuralNetworkBase<T>
AudioVisualEventLocalizationNetwork<T>
Implements
INeuralNetworkModel<T>
INeuralNetwork<T>
IFullModel<T, Tensor<T>, Tensor<T>>
IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>
IModelSerializer
ICheckpointableModel
IParameterizable<T, Tensor<T>, Tensor<T>>
IFeatureAware
IFeatureImportance<T>
ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>
IGradientComputable<T, Tensor<T>, Tensor<T>>
IJitCompilable<T>
IInterpretableModel<T>
IInputGradientComputable<T>
IDisposable
IAudioVisualEventLocalizationModel<T>

Constructors

AudioVisualEventLocalizationNetwork(NeuralNetworkArchitecture<T>, int, double, int, IEnumerable<string>?, IOptimizer<T, Tensor<T>, Tensor<T>>?, ILossFunction<T>?, int?)

Initializes a new instance of the AudioVisualEventLocalizationNetwork.

public AudioVisualEventLocalizationNetwork(NeuralNetworkArchitecture<T> architecture, int embeddingDimension = 512, double temporalResolution = 0.1, int numEncoderLayers = 6, IEnumerable<string>? eventCategories = null, IOptimizer<T, Tensor<T>, Tensor<T>>? optimizer = null, ILossFunction<T>? lossFunction = null, int? seed = null)

Parameters

architecture NeuralNetworkArchitecture<T>

The architecture configuration for the network.

embeddingDimension int

Dimensionality of the shared audio-visual embedding space (default: 512).

temporalResolution double

Temporal resolution of event boundaries, in seconds (default: 0.1).

numEncoderLayers int

Number of encoder layers (default: 6).

eventCategories IEnumerable<string>

Event categories the network should support, or null for the defaults.

optimizer IOptimizer<T, Tensor<T>, Tensor<T>>

Optimizer to use during training, or null for the default.

lossFunction ILossFunction<T>

Loss function to use during training, or null for the default.

seed int?

Optional random seed for reproducible initialization.
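
Examples

A minimal construction sketch. The architecture setup is an assumption; build it to match your own inputs and outputs:

// BuildMyArchitecture() is a hypothetical helper standing in for your own
// NeuralNetworkArchitecture<T> configuration code.
NeuralNetworkArchitecture<float> architecture = BuildMyArchitecture();

var network = new AudioVisualEventLocalizationNetwork<float>(
    architecture,
    embeddingDimension: 512,
    temporalResolution: 0.1,   // event boundaries resolved to 100 ms
    numEncoderLayers: 6,
    eventCategories: new[] { "speech", "music", "applause" },
    seed: 42);                 // fixed seed for reproducible initialization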

Properties

ParameterCount

Gets the total number of parameters in the model.

public override int ParameterCount { get; }

Property Value

int

Remarks

For Beginners: This tells you how many adjustable values (weights and biases) your neural network has. More complex networks typically have more parameters and can learn more complex patterns, but also require more data to train effectively. This is part of the IFullModel interface for consistency with other model types.

Performance: This property uses caching to avoid recomputing the sum on every access. The cache is invalidated when layers are modified.

SupportedEventCategories

Gets the supported event categories.

public IReadOnlyList<string> SupportedEventCategories { get; }

Property Value

IReadOnlyList<string>

TemporalResolution

Gets the temporal resolution in seconds.

public double TemporalResolution { get; }

Property Value

double

Methods

AnswerEventQuestion(Tensor<T>, IEnumerable<Tensor<T>>, string, double)

Answers questions about events in the video.

public (string Answer, IEnumerable<(double StartTime, double EndTime)> Evidence) AnswerEventQuestion(Tensor<T> audioWaveform, IEnumerable<Tensor<T>> frames, string question, double frameRate)

Parameters

audioWaveform Tensor<T>

Audio waveform.

frames IEnumerable<Tensor<T>>

Video frames.

question string

Question about events.

frameRate double

Video frame rate.

Returns

(string Answer, IEnumerable<(double StartTime, double EndTime)> Evidence)

Answer with supporting temporal evidence.
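
Examples

A minimal usage sketch; network, audio (the waveform tensor), and frames are assumed to already exist and be preprocessed:

var (answer, evidence) = network.AnswerEventQuestion(
    audio, frames, "When does the dog bark?", frameRate: 30.0);

Console.WriteLine(answer);
foreach (var (start, end) in evidence)
    Console.WriteLine($"  supporting segment: {start:F1}s to {end:F1}s");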

ClassifyEvent(Tensor<T>, IEnumerable<Tensor<T>>, IEnumerable<string>)

Classifies a pre-segmented event.

public Dictionary<string, T> ClassifyEvent(Tensor<T> audioSegment, IEnumerable<Tensor<T>> frameSegment, IEnumerable<string> candidateLabels)

Parameters

audioSegment Tensor<T>

Audio segment for the event.

frameSegment IEnumerable<Tensor<T>>

Video frames for the event.

candidateLabels IEnumerable<string>

Possible event labels.

Returns

Dictionary<string, T>

Classification probabilities.
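
Examples

A sketch assuming audioSegment and frameSegment have already been trimmed to the event in question:

var candidates = new[] { "dog_bark", "door_slam", "speech" };
Dictionary<string, float> probabilities =
    network.ClassifyEvent(audioSegment, frameSegment, candidates);

foreach (var entry in probabilities)
    Console.WriteLine($"{entry.Key}: {entry.Value}");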

ComputeEventAttention(Tensor<T>, IEnumerable<Tensor<T>>)

Computes event-level audio-visual attention.

public (Tensor<T> AudioToVisualAttention, Tensor<T> VisualToAudioAttention) ComputeEventAttention(Tensor<T> audioSegment, IEnumerable<Tensor<T>> frameSegment)

Parameters

audioSegment Tensor<T>

Audio segment.

frameSegment IEnumerable<Tensor<T>>

Video frame segment.

Returns

(Tensor<T> AudioToVisualAttention, Tensor<T> VisualToAudioAttention)

Cross-modal attention weights.

CreateNewInstance()

Creates a new instance of the same type as this neural network.

protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()

Returns

IFullModel<T, Tensor<T>, Tensor<T>>

A new instance of the same neural network type.

Remarks

For Beginners: This creates a blank version of the same type of neural network.

It's used internally by methods like DeepCopy and Clone to create the right type of network before copying the data into it.

DeepCopy()

Creates a deep copy of the neural network.

public override IFullModel<T, Tensor<T>, Tensor<T>> DeepCopy()

Returns

IFullModel<T, Tensor<T>, Tensor<T>>

A new instance that is a deep copy of this neural network.

Remarks

This method creates a complete independent copy of the network, including all layers and their parameters. It uses serialization and deserialization to ensure a true deep copy.

For Beginners: This creates a completely independent duplicate of your neural network.

Think of it like creating an exact clone of your network where:

  • The copy has the same structure (layers, connections)
  • The copy has the same learned parameters (weights, biases)
  • Changes to one network don't affect the other

This is useful when you want to:

  • Experiment with modifications without risking your original network
  • Create multiple variations of a model
  • Save a snapshot of your model at a particular point in training

DeserializeNetworkSpecificData(BinaryReader)

Deserializes network-specific data that was not covered by the general deserialization process.

protected override void DeserializeNetworkSpecificData(BinaryReader reader)

Parameters

reader BinaryReader

The BinaryReader to read the data from.

Remarks

This method is called at the end of the general deserialization process to allow derived classes to read any additional data specific to their implementation.

For Beginners: Think of serialized data as a packed suitcase. After the main deserialization method has unpacked the common items (layers, parameters), this method lets each specific type of neural network unpack its own unique items from the special compartment where they were stored during serialization.

DetectAnomalies(Tensor<T>, IEnumerable<Tensor<T>>, double)

Detects anomalous events that don't match expected patterns.

public IEnumerable<(double StartTime, double EndTime, T AnomalyScore, string Description)> DetectAnomalies(Tensor<T> audioWaveform, IEnumerable<Tensor<T>> frames, double frameRate)

Parameters

audioWaveform Tensor<T>

Audio waveform.

frames IEnumerable<Tensor<T>>

Video frames.

frameRate double

Video frame rate.

Returns

IEnumerable<(double StartTime, double EndTime, T AnomalyScore, string Description)>

Detected anomalies with anomaly scores.

DetectEvents(Tensor<T>, IEnumerable<Tensor<T>>, double)

Detects and localizes all audio-visual events in a video.

public IEnumerable<AudioVisualEvent> DetectEvents(Tensor<T> audioWaveform, IEnumerable<Tensor<T>> frames, double frameRate)

Parameters

audioWaveform Tensor<T>

Audio waveform.

frames IEnumerable<Tensor<T>>

Video frames.

frameRate double

Video frame rate.

Returns

IEnumerable<AudioVisualEvent>

List of detected events with temporal and spatial localization.
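
Examples

A minimal sketch of the core detection workflow; audio and frames are assumed to be preprocessed tensors for the same clip:

// Detect every audio-visual event in a 30 fps clip.
IEnumerable<AudioVisualEvent> events = network.DetectEvents(audio, frames, frameRate: 30.0);

foreach (AudioVisualEvent detected in events)
    Console.WriteLine(detected); // inspect temporal/spatial details via AudioVisualEvent's members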

DetectSpecificEvents(Tensor<T>, IEnumerable<Tensor<T>>, IEnumerable<string>, double)

Detects events of specific categories.

public IEnumerable<AudioVisualEvent> DetectSpecificEvents(Tensor<T> audioWaveform, IEnumerable<Tensor<T>> frames, IEnumerable<string> targetCategories, double frameRate)

Parameters

audioWaveform Tensor<T>

Audio waveform.

frames IEnumerable<Tensor<T>>

Video frames.

targetCategories IEnumerable<string>

Categories to detect.

frameRate double

Video frame rate.

Returns

IEnumerable<AudioVisualEvent>

Detected events matching the target categories.

DetectSyncEvents(Tensor<T>, IEnumerable<Tensor<T>>, double)

Detects audio-visual synchronization events (e.g., lip sync).

public IEnumerable<(double StartTime, double EndTime, T SyncQuality, string Description)> DetectSyncEvents(Tensor<T> audioWaveform, IEnumerable<Tensor<T>> frames, double frameRate)

Parameters

audioWaveform Tensor<T>

Audio waveform.

frames IEnumerable<Tensor<T>>

Video frames.

frameRate double

Video frame rate.

Returns

IEnumerable<(double StartTime, double EndTime, T SyncQuality, string Description)>

Sync events with quality scores.
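
Examples

A minimal sketch; the 25 fps frame rate is an illustrative assumption:

foreach (var (start, end, quality, description) in
         network.DetectSyncEvents(audio, frames, frameRate: 25.0))
{
    Console.WriteLine($"{start:F1}s to {end:F1}s: {description} (quality: {quality})");
}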

GenerateDenseCaptions(Tensor<T>, IEnumerable<Tensor<T>>, double)

Generates dense event captions for the entire video.

public IEnumerable<(double Time, string Caption)> GenerateDenseCaptions(Tensor<T> audioWaveform, IEnumerable<Tensor<T>> frames, double frameRate)

Parameters

audioWaveform Tensor<T>

Audio waveform.

frames IEnumerable<Tensor<T>>

Video frames.

frameRate double

Video frame rate.

Returns

IEnumerable<(double Time, string Caption)>

Time-stamped captions describing events.
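
Examples

A minimal sketch that prints a time-stamped caption track for the clip:

foreach (var (time, caption) in network.GenerateDenseCaptions(audio, frames, frameRate: 30.0))
    Console.WriteLine($"[{time:F1}s] {caption}");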

GenerateProposals(Tensor<T>, IEnumerable<Tensor<T>>, double)

Generates temporal proposals for potential events.

public IEnumerable<(double StartTime, double EndTime, T EventnessScore)> GenerateProposals(Tensor<T> audioWaveform, IEnumerable<Tensor<T>> frames, double frameRate)

Parameters

audioWaveform Tensor<T>

Audio waveform.

frames IEnumerable<Tensor<T>>

Video frames.

frameRate double

Video frame rate.

Returns

IEnumerable<(double StartTime, double EndTime, T EventnessScore)>

Proposed time segments that may contain events.
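
Examples

A sketch of the first stage of a typical two-stage pipeline: generate candidate segments here, then classify each one with ClassifyEvent:

foreach (var (start, end, eventness) in network.GenerateProposals(audio, frames, frameRate: 30.0))
    Console.WriteLine($"candidate {start:F1}s to {end:F1}s, eventness score: {eventness}");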

GetModelMetadata()

Gets the metadata for this neural network model.

public override ModelMetadata<T> GetModelMetadata()

Returns

ModelMetadata<T>

A ModelMetadata<T> object containing information about the model.

GetParameters()

Gets all trainable parameters of the network as a single vector.

public override Vector<T> GetParameters()

Returns

Vector<T>

A vector containing all parameters of the network.

Remarks

For Beginners: Neural networks learn by adjusting their "parameters" (also called weights and biases). This method collects all those adjustable values into a single list so they can be updated during training.

InitializeLayers()

Initializes the layers of the neural network based on the architecture.

protected override void InitializeLayers()

Remarks

For Beginners: This method sets up all the layers in your neural network according to the architecture you've defined. It's like assembling the parts of your network before you can use it.

LocalizeEventByDescription(Tensor<T>, IEnumerable<Tensor<T>>, string, double)

Localizes a specific event described in text.

public IEnumerable<(double StartTime, double EndTime, T Confidence)> LocalizeEventByDescription(Tensor<T> audioWaveform, IEnumerable<Tensor<T>> frames, string eventDescription, double frameRate)

Parameters

audioWaveform Tensor<T>

Audio waveform.

frames IEnumerable<Tensor<T>>

Video frames.

eventDescription string

Text description of the event.

frameRate double

Video frame rate.

Returns

IEnumerable<(double StartTime, double EndTime, T Confidence)>

Temporal segments where the event occurs.

Remarks

For Beginners: Find events using natural language!

Example: "person laughing" → returns [(5.2s, 7.8s), (15.1s, 16.4s)]

Predict(Tensor<T>)

Makes a prediction using the neural network.

public override Tensor<T> Predict(Tensor<T> input)

Parameters

input Tensor<T>

The input data to process.

Returns

Tensor<T>

The network's prediction.

Remarks

For Beginners: This is the main method you'll use to get results from your trained neural network. You provide some input data (like an image or text), and the network processes it through all its layers to produce an output (like a classification or prediction).
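
Examples

A minimal sketch; the input tensor's shape must match what your architecture expects (an assumption here):

Tensor<float> output = network.Predict(input);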

SegmentScenes(Tensor<T>, IEnumerable<Tensor<T>>, double)

Segments video into coherent audio-visual scenes.

public IEnumerable<(double StartTime, double EndTime, string SceneDescription)> SegmentScenes(Tensor<T> audioWaveform, IEnumerable<Tensor<T>> frames, double frameRate)

Parameters

audioWaveform Tensor<T>

Audio waveform.

frames IEnumerable<Tensor<T>>

Video frames.

frameRate double

Video frame rate.

Returns

IEnumerable<(double StartTime, double EndTime, string SceneDescription)>

Scene boundaries with descriptions.

SerializeNetworkSpecificData(BinaryWriter)

Serializes network-specific data that is not covered by the general serialization process.

protected override void SerializeNetworkSpecificData(BinaryWriter writer)

Parameters

writer BinaryWriter

The BinaryWriter to write the data to.

Remarks

This method is called at the end of the general serialization process to allow derived classes to write any additional data specific to their implementation.

For Beginners: Think of this as packing a special compartment in your suitcase. While the main serialization method packs the common items (layers, parameters), this method allows each specific type of neural network to pack its own unique items that other networks might not have.

SetParameters(Vector<T>)

Sets the parameters of the neural network.

public override void SetParameters(Vector<T> parameters)

Parameters

parameters Vector<T>

The parameters to set.

Remarks

This method distributes the parameters to all layers in the network. The parameters should be in the same format as returned by GetParameters.
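
Examples

A sketch of a snapshot-and-restore pattern built from GetParameters and SetParameters:

// Take a snapshot of the current parameters.
Vector<float> snapshot = network.GetParameters();

// ... train further, evaluate, experiment ...

// Roll back if the newer parameters underperform.
network.SetParameters(snapshot);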

TrackEvent(Tensor<T>, IEnumerable<Tensor<T>>, AudioVisualEvent, double)

Tracks an event across time.

public IEnumerable<AudioVisualEvent> TrackEvent(Tensor<T> audioWaveform, IEnumerable<Tensor<T>> frames, AudioVisualEvent initialEvent, double frameRate)

Parameters

audioWaveform Tensor<T>

Full audio waveform.

frames IEnumerable<Tensor<T>>

All video frames.

initialEvent AudioVisualEvent

Initial event detection.

frameRate double

Video frame rate.

Returns

IEnumerable<AudioVisualEvent>

Event trajectory with updated temporal and spatial locations.
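
Examples

A sketch that follows a single detection through the whole clip; the initial event comes from DetectEvents (requires System.Linq for First()):

AudioVisualEvent first = network.DetectEvents(audio, frames, frameRate: 30.0).First();

IEnumerable<AudioVisualEvent> trajectory =
    network.TrackEvent(audio, frames, first, frameRate: 30.0);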

Train(Tensor<T>, Tensor<T>)

Trains the neural network on a single input-output pair.

public override void Train(Tensor<T> input, Tensor<T> expectedOutput)

Parameters

input Tensor<T>

The input data.

expectedOutput Tensor<T>

The expected output for the given input.

Remarks

This method performs one training step on the neural network using the provided input and expected output. It updates the network's parameters to reduce the error between the network's prediction and the expected output.

For Beginners: This is how your neural network learns. You provide:

  • An input (what the network should process)
  • The expected output (what the correct answer should be)

The network then:

  1. Makes a prediction based on the input
  2. Compares its prediction to the expected output
  3. Calculates how wrong it was (the loss)
  4. Adjusts its internal values to do better next time

After training, you can get the loss value using the GetLastLoss() method to see how well the network is learning.
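
Examples

A minimal training-loop sketch; trainingPairs is an assumed collection of (input, expectedOutput) tensors from your own data pipeline:

foreach (var (input, expected) in trainingPairs)
{
    network.Train(input, expected);
    // GetLastLoss() reports the loss from the most recent step (see remarks above).
    Console.WriteLine(network.GetLastLoss());
}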

UpdateParameters(Vector<T>)

Updates the network's parameters with new values.

public override void UpdateParameters(Vector<T> gradients)

Parameters

gradients Vector<T>

The gradients used to compute updated values for the network's parameters.

Remarks

For Beginners: During training, a neural network's internal values (parameters) get adjusted to improve its performance. This method applies a complete set of gradient values to update all of those parameters at once.

This is typically used by optimization algorithms that calculate better parameter values based on training data.