Class SpeechEmotionRecognizer<T>

Namespace
AiDotNet.Audio.Emotion
Assembly
AiDotNet.dll

Neural network-based speech emotion recognition model that classifies emotional states from audio.

public class SpeechEmotionRecognizer<T> : AudioClassifierBase<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable, IEmotionRecognizer<T>

Type Parameters

T

The numeric type used for calculations.

Inheritance
AudioClassifierBase<T>
SpeechEmotionRecognizer<T>
Implements
INeuralNetworkModel<T>
INeuralNetwork<T>
IFullModel<T, Tensor<T>, Tensor<T>>
IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>
IModelSerializer
ICheckpointableModel
IParameterizable<T, Tensor<T>, Tensor<T>>
IFeatureAware
IFeatureImportance<T>
ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>
IGradientComputable<T, Tensor<T>, Tensor<T>>
IJitCompilable<T>
IInterpretableModel<T>
IInputGradientComputable<T>
IDisposable
IEmotionRecognizer<T>

Remarks

This model uses deep learning to detect emotions from speech audio. It supports two operation modes:

  • ONNX Mode: Load pre-trained models for fast inference
  • Native Mode: Train models from scratch with full customization

For Beginners: This is like teaching a computer to "hear" emotions in someone's voice!

How it works:

  1. Audio is converted to a mel spectrogram (a visual representation of sound frequencies over time)
  2. A neural network analyzes patterns in the spectrogram
  3. The network outputs probabilities for each emotion (happy, sad, angry, etc.)

Key features detected:

  • Pitch patterns (high pitch often = excitement, low pitch often = sadness)
  • Speaking rate (fast = excited/angry, slow = sad/calm)
  • Volume dynamics (loud = angry, soft = sad/fearful)
  • Voice quality (breathy, tense, relaxed)

Common applications:

  • Call centers: Detect frustrated customers for priority handling
  • Mental health: Monitor patient emotional well-being
  • Voice assistants: Respond appropriately to user mood
  • Gaming: Adapt gameplay to player emotional state
  • Market research: Analyze focus group reactions

Default emotions supported (based on industry standards):

  • Neutral, Happy, Sad, Angry, Fearful, Disgusted, Surprised

You can also measure (see the example below):

  • Arousal: How activated/calm the speaker is (-1 to +1)
  • Valence: How positive/negative the emotion is (-1 to +1)
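
For example, a minimal sketch of reading these two scores with GetArousal and GetValence; the recognizer and audioTensor variables are assumed to come from one of the constructor examples further down:

// Only meaningful when includeArousalValence is enabled (the default).
var arousal = recognizer.GetArousal(audioTensor);   // -1.0 = calm, +1.0 = excited
var valence = recognizer.GetValence(audioTensor);   // -1.0 = negative, +1.0 = positive
Console.WriteLine($"Arousal: {arousal}, Valence: {valence}");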

Constructors

SpeechEmotionRecognizer(NeuralNetworkArchitecture<T>, int, int, int, int, double, int, int, int, double, string[]?, bool, ILossFunction<T>?)

Creates a speech emotion recognizer in native training mode.

public SpeechEmotionRecognizer(NeuralNetworkArchitecture<T> architecture, int sampleRate = 16000, int numMels = 80, int nFft = 1024, int hopLength = 256, double inputDurationSeconds = 3, int numConvBlocks = 4, int baseFilters = 32, int hiddenDim = 256, double dropoutRate = 0.3, string[]? emotionLabels = null, bool includeArousalValence = true, ILossFunction<T>? lossFunction = null)

Parameters

architecture NeuralNetworkArchitecture<T>

The neural network architecture provided by the user.

sampleRate int

Audio sample rate in Hz. Default: 16000 (standard for speech).

numMels int

Number of mel spectrogram bands. Default: 80.

nFft int

FFT window size. Default: 1024 samples.

hopLength int

Hop length between FFT frames. Default: 256 samples.

inputDurationSeconds double

Expected input audio duration. Default: 3.0 seconds.

numConvBlocks int

Number of convolutional feature extraction blocks. Default: 4.

baseFilters int

Number of filters in the first convolutional layer (doubled in each subsequent block). Default: 32.

hiddenDim int

Hidden dimension for dense layers. Default: 256.

dropoutRate double

Dropout rate for regularization. Default: 0.3.

emotionLabels string[]

Custom emotion labels. If null, the standard 7 emotions are used.

includeArousalValence bool

Whether to include arousal/valence prediction. Default: true.

lossFunction ILossFunction<T>

Loss function for training. Default: CrossEntropyLoss.

Remarks

For Beginners: Use this constructor to train a new model from scratch. You can customize every aspect of the model architecture.

Example:

var architecture = new NeuralNetworkArchitecture<float>(...);
var recognizer = new SpeechEmotionRecognizer<float>(
    architecture,
    sampleRate: 16000,
    numConvBlocks: 4,
    hiddenDim: 256);

// Train the model on an audio tensor and the matching tensor of expected emotion outputs
recognizer.Train(audioTensor, expectedEmotions);

SpeechEmotionRecognizer(NeuralNetworkArchitecture<T>, string, int, int, int, int, string[]?, bool)

Creates a speech emotion recognizer in ONNX inference mode with a pre-trained model.

public SpeechEmotionRecognizer(NeuralNetworkArchitecture<T> architecture, string modelPath, int sampleRate = 16000, int numMels = 80, int nFft = 1024, int hopLength = 256, string[]? emotionLabels = null, bool includeArousalValence = true)

Parameters

architecture NeuralNetworkArchitecture<T>

The neural network architecture provided by the user.

modelPath string

Path to the ONNX emotion recognition model.

sampleRate int

Audio sample rate in Hz. Default: 16000 (standard for speech).

numMels int

Number of mel spectrogram bands. Default: 80 (industry standard).

nFft int

FFT window size. Default: 1024 samples.

hopLength int

Hop length between FFT frames. Default: 256 samples.

emotionLabels string[]

Custom emotion labels. If null, the standard 7 emotions are used.

includeArousalValence bool

Whether to include arousal/valence prediction. Default: true.

Remarks

For Beginners: Use this constructor to load a pre-trained model. Pre-trained models are ready to use immediately without training.

Example:

var architecture = new NeuralNetworkArchitecture<float>(...);
var recognizer = new SpeechEmotionRecognizer<float>(
    architecture,
    "emotion_model.onnx");

var result = recognizer.RecognizeEmotion(audioTensor);
Console.WriteLine($"Emotion: {result.Emotion}, Confidence: {result.Confidence}");

Properties

SupportedEmotions

Gets the list of emotions this model can detect.

public IReadOnlyList<string> SupportedEmotions { get; }

Property Value

IReadOnlyList<string>
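
Example (a short sketch; recognizer is assumed to be an instance built with either constructor):

foreach (var emotion in recognizer.SupportedEmotions)
{
    Console.WriteLine(emotion);  // e.g. Neutral, Happy, Sad, Angry, Fearful, Disgusted, Surprised
}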

Methods

CreateNewInstance()

Creates a new instance of the same type as this neural network.

protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()

Returns

IFullModel<T, Tensor<T>, Tensor<T>>

A new instance of the same neural network type.

Remarks

For Beginners: This creates a blank version of the same type of neural network.

It's used internally by methods like DeepCopy and Clone to create the right type of network before copying the data into it.

DeserializeNetworkSpecificData(BinaryReader)

Deserializes network-specific data that was not covered by the general deserialization process.

protected override void DeserializeNetworkSpecificData(BinaryReader reader)

Parameters

reader BinaryReader

The BinaryReader to read the data from.

Remarks

This method is called at the end of the general deserialization process to allow derived classes to read any additional data specific to their implementation.

For Beginners: Continuing the suitcase analogy from SerializeNetworkSpecificData(BinaryWriter), this is like unpacking that special compartment. After the main deserialization method has unpacked the common items (layers, parameters), this method lets each specific type of neural network unpack its own unique items that were stored during serialization.

ExtractEmotionFeatures(Tensor<T>)

Extracts emotion-relevant features from audio.

public Vector<T> ExtractEmotionFeatures(Tensor<T> audio)

Parameters

audio Tensor<T>

Audio tensor.

Returns

Vector<T>

Feature vector useful for emotion classification.
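
A minimal sketch of pulling out the feature vector, for example to feed a separate downstream model (the recognizer and audioTensor variables are the same assumptions as in the earlier examples):

// Returns a Vector<T> of emotion-relevant features for the clip.
var features = recognizer.ExtractEmotionFeatures(audioTensor);
// The vector can be stored, clustered, or passed to another classifier.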

Forward(Tensor<T>)

Performs a forward pass through the native neural network layers.

protected override Tensor<T> Forward(Tensor<T> input)

Parameters

input Tensor<T>

Preprocessed input tensor.

Returns

Tensor<T>

Model output tensor.

GetArousal(Tensor<T>)

Gets arousal (activation) level from speech.

public T GetArousal(Tensor<T> audio)

Parameters

audio Tensor<T>

Audio tensor containing speech.

Returns

T

Arousal level from -1.0 (calm) to 1.0 (excited).

GetEmotionProbabilities(Tensor<T>)

Gets probabilities for all supported emotions.

public IReadOnlyDictionary<string, T> GetEmotionProbabilities(Tensor<T> audio)

Parameters

audio Tensor<T>

Audio tensor containing speech.

Returns

IReadOnlyDictionary<string, T>

Dictionary mapping emotion names to probabilities.
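
For example, a sketch of inspecting the full distribution instead of only the top emotion (variable names are illustrative):

var probabilities = recognizer.GetEmotionProbabilities(audioTensor);
foreach (var pair in probabilities)
{
    // Key is the emotion label, Value is its probability.
    Console.WriteLine($"{pair.Key}: {pair.Value}");
}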

GetModelMetadata()

Gets the metadata for this neural network model.

public override ModelMetadata<T> GetModelMetadata()

Returns

ModelMetadata<T>

A ModelMetadata<T> object containing information about the model.

GetValence(Tensor<T>)

Gets valence (positivity) level from speech.

public T GetValence(Tensor<T> audio)

Parameters

audio Tensor<T>

Audio tensor containing speech.

Returns

T

Valence level from -1.0 (negative) to 1.0 (positive).

InitializeLayers()

Initializes the neural network layers for native training mode.

protected override void InitializeLayers()

PostprocessOutput(Tensor<T>)

Postprocesses model output into the final result format.

protected override Tensor<T> PostprocessOutput(Tensor<T> modelOutput)

Parameters

modelOutput Tensor<T>

Raw output from the model.

Returns

Tensor<T>

Postprocessed output in the expected format.

Predict(Tensor<T>)

Makes a prediction using the neural network.

public override Tensor<T> Predict(Tensor<T> input)

Parameters

input Tensor<T>

The input data to process.

Returns

Tensor<T>

The network's prediction.

Remarks

For Beginners: This is the main method you'll use to get results from your trained neural network. You provide some input data (for this model, an audio tensor), and the network processes it through all its layers to produce an output (here, emotion scores).
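
A brief sketch; the output is the model's raw score tensor, so for a friendlier result prefer RecognizeEmotion (recognizer and audioTensor are assumptions carried over from the earlier examples):

// Low-level prediction: returns the output tensor of emotion scores.
Tensor<float> output = recognizer.Predict(audioTensor);

// For a structured result with the emotion name and confidence:
var result = recognizer.RecognizeEmotion(audioTensor);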

PreprocessAudio(Tensor<T>)

Preprocesses raw audio for model input.

protected override Tensor<T> PreprocessAudio(Tensor<T> rawAudio)

Parameters

rawAudio Tensor<T>

Raw audio waveform tensor [samples] or [batch, samples].

Returns

Tensor<T>

Preprocessed audio features suitable for model input.

Remarks

For Beginners: Raw audio is just a series of numbers representing sound pressure. Neural networks often work better with transformed representations like mel spectrograms. This method converts raw audio into the format the model expects.
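
As a rough worked example of the spectrogram size this implies with the constructor defaults (the exact frame count may differ by a frame or two depending on padding):

int sampleRate = 16000;
double durationSeconds = 3.0;
int hopLength = 256;
int numMels = 80;

int samples = (int)(sampleRate * durationSeconds);   // 48000 samples per 3-second clip
int frames = samples / hopLength;                    // roughly 187 frames
Console.WriteLine($"Mel spectrogram is roughly {numMels} x {frames}");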

RecognizeEmotion(Tensor<T>)

Recognizes the primary emotion in speech audio.

public EmotionResult<T> RecognizeEmotion(Tensor<T> audio)

Parameters

audio Tensor<T>

Audio tensor containing speech.

Returns

EmotionResult<T>

The detected emotion and confidence score.

RecognizeEmotionTimeSeries(Tensor<T>, int, int)

Recognizes emotions over time (for longer recordings).

public IReadOnlyList<TimedEmotionResult<T>> RecognizeEmotionTimeSeries(Tensor<T> audio, int windowSizeMs = 1000, int hopSizeMs = 500)

Parameters

audio Tensor<T>

Audio tensor containing speech.

windowSizeMs int

Analysis window size in milliseconds. Default: 1000.

hopSizeMs int

Hop between windows in milliseconds. Default: 500.

Returns

IReadOnlyList<TimedEmotionResult<T>>

Time-series of emotion predictions.
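
A minimal sketch analysing a longer recording with the default one-second windows and half-second hop; longAudioTensor is a hypothetical tensor holding the recording, and the per-window fields of TimedEmotionResult<T> are not shown because their exact names depend on that type:

var timeline = recognizer.RecognizeEmotionTimeSeries(
    longAudioTensor,
    windowSizeMs: 1000,
    hopSizeMs: 500);

Console.WriteLine($"Analyzed {timeline.Count} windows");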

SerializeNetworkSpecificData(BinaryWriter)

Serializes network-specific data that is not covered by the general serialization process.

protected override void SerializeNetworkSpecificData(BinaryWriter writer)

Parameters

writer BinaryWriter

The BinaryWriter to write the data to.

Remarks

This method is called at the end of the general serialization process to allow derived classes to write any additional data specific to their implementation.

For Beginners: Think of this as packing a special compartment in your suitcase. While the main serialization method packs the common items (layers, parameters), this method allows each specific type of neural network to pack its own unique items that other networks might not have.

Train(Tensor<T>, Tensor<T>)

Trains the neural network on a single input-output pair.

public override void Train(Tensor<T> input, Tensor<T> expected)

Parameters

input Tensor<T>

The input data.

expected Tensor<T>

The expected output (the correct answer) for the given input.

Remarks

This method performs one training step on the neural network using the provided input and expected output. It updates the network's parameters to reduce the error between the network's prediction and the expected output.

For Beginners: This is how your neural network learns. You provide:

  • An input (what the network should process)
  • The expected output (what the correct answer should be)

The network then:

  1. Makes a prediction based on the input
  2. Compares its prediction to the expected output
  3. Calculates how wrong it was (the loss)
  4. Adjusts its internal values to do better next time

After training, you can get the loss value using the GetLastLoss() method to see how well the network is learning.
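
For example, a rough training-loop sketch; trainingPairs is a hypothetical collection of (audio, expected) tensor pairs, and GetLastLoss() is the loss accessor mentioned above:

foreach (var (audio, expected) in trainingPairs)   // hypothetical dataset
{
    recognizer.Train(audio, expected);             // one training step
    var loss = recognizer.GetLastLoss();           // how wrong the last prediction was
    Console.WriteLine($"Loss: {loss}");
}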

UpdateParameters(Vector<T>)

Updates the network's parameters with new values.

public override void UpdateParameters(Vector<T> parameters)

Parameters

parameters Vector<T>

The new parameter values to set.

Remarks

For Beginners: During training, a neural network's internal values (parameters) get adjusted to improve its performance. This method allows you to update all those values at once by providing a complete set of new parameters.

This is typically used by optimization algorithms that calculate better parameter values based on training data.