Class DeepFilterNet<T>
- Namespace
- AiDotNet.Audio.Enhancement
- Assembly
- AiDotNet.dll
DeepFilterNet - State-of-the-art deep filtering network for speech enhancement.
public class DeepFilterNet<T> : AudioNeuralNetworkBase<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable, IAudioEnhancer<T>
Type Parameters
T: The numeric type used for calculations.
- Inheritance: AudioNeuralNetworkBase<T> → DeepFilterNet<T>
Remarks
DeepFilterNet is a hybrid time-frequency domain model that combines:
- An ERB (Equivalent Rectangular Bandwidth) filterbank for perceptually motivated processing
- Deep filtering in the complex STFT domain for fine-grained enhancement
- An efficient architecture with grouped convolutions for real-time processing
For Beginners: DeepFilterNet is like having an intelligent audio engineer that can separate speech from background noise in real-time. It's particularly effective because it processes audio the way humans perceive sound - focusing more on frequencies that matter for understanding speech.
The model works by:
- Converting audio to a time-frequency representation (spectrogram)
- Applying learned filters to suppress noise while preserving speech
- Reconstructing clean audio from the enhanced spectrogram
Usage:
// ONNX mode for inference
var model = new DeepFilterNet<float>(architecture, "deepfilternet.onnx");
var cleanAudio = model.Enhance(noisyAudio);
// Native mode for training
var model = new DeepFilterNet<float>(architecture, hiddenDim: 96);
model.Train(noisyAudio, cleanAudio);
Reference: "DeepFilterNet: A Low Complexity Speech Enhancement Framework for Full-Band Audio based on Deep Filtering" by Schröter et al., ICASSP 2022
Constructors
DeepFilterNet(NeuralNetworkArchitecture<T>, int, int, int, int, int, int, int, int, int, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>?, ILossFunction<T>?)
Creates a DeepFilterNet model for native training and inference.
public DeepFilterNet(NeuralNetworkArchitecture<T> architecture, int sampleRate = 48000, int numErbBands = 32, int hiddenDim = 96, int dfOrder = 5, int dfBins = 96, int numGruLayers = 2, int fftSize = 960, int hopSize = 480, int lookahead = 2, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>? optimizer = null, ILossFunction<T>? lossFunction = null)
Parameters
architecture (NeuralNetworkArchitecture<T>): Neural network architecture configuration.
sampleRate (int): Audio sample rate in Hz. Default is 48000.
numErbBands (int): Number of ERB bands. Default is 32.
hiddenDim (int): Hidden dimension. Default is 96.
dfOrder (int): Deep filter order. Default is 5.
dfBins (int): Number of deep-filtering frequency bins. Default is 96.
numGruLayers (int): Number of GRU layers. Default is 2.
fftSize (int): FFT size in samples. Default is 960.
hopSize (int): Hop size in samples. Default is 480.
lookahead (int): Number of lookahead frames. Default is 2.
optimizer (IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>): Optimizer for training. If null, Adam is used.
lossFunction (ILossFunction<T>): Loss function. If null, a multi-resolution STFT loss is used.
Remarks
For Beginners: Use this constructor when you want to train DeepFilterNet from scratch or fine-tune on your own data.
Key parameters:
- numErbBands: More bands = better frequency resolution but slower
- hiddenDim: Larger = more capacity but more computation
- dfOrder: Higher order = better noise suppression but more latency
Example:
var model = new DeepFilterNet<float>(
architecture,
sampleRate: 48000,
hiddenDim: 96,
numGruLayers: 2);
// Train on noisy/clean audio pairs
model.Train(noisyAudio, cleanAudio);
DeepFilterNet(NeuralNetworkArchitecture<T>, string, int, int, int, OnnxModelOptions?)
Creates a DeepFilterNet model for ONNX inference.
public DeepFilterNet(NeuralNetworkArchitecture<T> architecture, string modelPath, int sampleRate = 48000, int fftSize = 960, int hopSize = 480, OnnxModelOptions? onnxOptions = null)
Parameters
architecture (NeuralNetworkArchitecture<T>): Neural network architecture configuration.
modelPath (string): Path to the ONNX model file.
sampleRate (int): Audio sample rate in Hz. Default is 48000.
fftSize (int): FFT size for the STFT, in samples. Default is 960.
hopSize (int): Hop size for the STFT, in samples. Default is 480.
onnxOptions (OnnxModelOptions): Optional ONNX runtime options.
Remarks
For Beginners: Use this constructor when you have a pre-trained DeepFilterNet model in ONNX format. This is the fastest way to get started with speech enhancement.
Example:
var model = new DeepFilterNet<float>(
architecture,
"deepfilternet3.onnx",
sampleRate: 48000);
var cleanAudio = model.Enhance(noisyAudio);
Properties
DfOrder
Gets the deep filter order.
public int DfOrder { get; }
Property Value
- int
EnhancementStrength
Gets or sets the enhancement strength (0.0 = no enhancement, 1.0 = maximum).
public double EnhancementStrength { get; set; }
Property Value
- double
Remarks
Higher values provide more noise reduction but may introduce artifacts. Start with 0.5-0.7 for natural-sounding results.
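Example (a sketch of the trade-off; 'model' and 'noisyAudio' are assumed to already exist):
model.EnhancementStrength = 0.6;   // moderate suppression, natural-sounding speech
var gentle = model.Enhance(noisyAudio);
model.EnhancementStrength = 1.0;   // maximum suppression, may introduce artifacts
var aggressive = model.Enhance(noisyAudio);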
LatencySamples
Gets the processing latency in samples.
public int LatencySamples { get; }
Property Value
- int
Remarks
Important for real-time applications. Lower latency means faster response but potentially lower quality enhancement.
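Example (converting the latency to milliseconds; the sketch assumes the default 48000 Hz sample rate):
// 48 samples per millisecond at 48 kHz
double latencyMs = model.LatencySamples / 48.0;
Console.WriteLine($"Algorithmic latency: {latencyMs:F1} ms");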
NumChannels
Gets the number of audio channels supported.
public int NumChannels { get; protected set; }
Property Value
- int
NumErbBands
Gets the number of ERB bands used.
public int NumErbBands { get; }
Property Value
- int
SupportsTraining
Gets whether this network supports training.
public override bool SupportsTraining { get; }
Property Value
- bool
Methods
CreateNewInstance()
Creates a new instance of the same type as this neural network.
protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()
Returns
- IFullModel<T, Tensor<T>, Tensor<T>>
A new instance of the same neural network type.
Remarks
For Beginners: This creates a blank version of the same type of neural network.
It's used internally by methods like DeepCopy and Clone to create the right type of network before copying the data into it.
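Example (a sketch; it assumes ICloneable<IFullModel<...>> exposes the Clone() method referenced above):
// Clone creates a new instance of the same network type and copies the data into it
IFullModel<float, Tensor<float>, Tensor<float>> copy = model.Clone();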
DeserializeNetworkSpecificData(BinaryReader)
Deserializes network-specific data that was not covered by the general deserialization process.
protected override void DeserializeNetworkSpecificData(BinaryReader reader)
Parameters
reader (BinaryReader): The BinaryReader to read the data from.
Remarks
This method is called at the end of the general deserialization process to allow derived classes to read any additional data specific to their implementation.
For Beginners: Think of serialization as packing a suitcase; this method is like unpacking its special compartment. After the main deserialization method has unpacked the common items (layers, parameters), this method allows each specific type of neural network to unpack its own unique items that were stored during serialization.
Dispose(bool)
Disposes resources.
protected override void Dispose(bool disposing)
Parameters
disposing (bool): True to release managed and unmanaged resources; false to release only unmanaged resources (called from a finalizer).
Enhance(Tensor<T>)
Enhances audio quality by reducing noise and artifacts.
public Tensor<T> Enhance(Tensor<T> audio)
Parameters
audio (Tensor<T>): Input audio tensor with shape [channels, samples] or [samples].
Returns
- Tensor<T>
Enhanced audio tensor with the same shape as input.
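Example (a minimal sketch; how the noisy waveform gets into a Tensor<float>, e.g. from a decoded WAV file, is assumed here):
// 'noisyAudio' is a Tensor<float> with shape [samples] (mono) or [channels, samples]
var cleanAudio = model.Enhance(noisyAudio);
// 'cleanAudio' has the same shape as the input and can be written back out as PCM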
EnhanceWithReference(Tensor<T>, Tensor<T>)
Enhances audio with a reference signal for echo cancellation.
public Tensor<T> EnhanceWithReference(Tensor<T> audio, Tensor<T> reference)
Parameters
audio (Tensor<T>): Input audio (the microphone signal).
reference (Tensor<T>): Reference audio (the speaker playback signal).
Returns
- Tensor<T>
Enhanced audio with echo removed.
Remarks
For Beginners: This is for video calls!
The problem: Your microphone picks up sound from your speakers, creating an echo for the other person.
Solution: We know what's playing from the speakers (reference), so we can subtract it from what the microphone picks up.
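Example (a sketch of echo cancellation during a call; the two tensors are assumed to be time-aligned and at the model's sample rate):
// 'micAudio'  - Tensor<float> captured from the microphone (near-end speech + echo)
// 'playback'  - Tensor<float> of the far-end audio being sent to the speakers
var echoFree = model.EnhanceWithReference(micAudio, playback);
// 'echoFree' keeps the near-end speech while the speaker echo is suppressed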
EstimateNoiseProfile(Tensor<T>)
Estimates the noise profile from a segment of audio.
public void EstimateNoiseProfile(Tensor<T> noiseOnlyAudio)
Parameters
noiseOnlyAudio (Tensor<T>): Audio containing only background noise (no speech).
Remarks
For Beginners: Some enhancers work better if you tell them what the noise sounds like. Record a few seconds of "silence" (just the background noise) and pass it here.
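Example (a sketch of the calibrate-then-enhance flow; 'roomTone' is assumed to be a short noise-only recording captured before anyone speaks):
model.EstimateNoiseProfile(roomTone);      // a few seconds of background noise only
var cleaned = model.Enhance(noisyRecording);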
GetModelMetadata()
Gets the metadata for this neural network model.
public override ModelMetadata<T> GetModelMetadata()
Returns
- ModelMetadata<T>
A ModelMetadata<T> object containing information about the model.
InitializeLayers()
Initializes the layers of the neural network based on the architecture.
protected override void InitializeLayers()
Remarks
For Beginners: This method sets up all the layers in your neural network according to the architecture you've defined. It's like assembling the parts of your network before you can use it.
PostprocessOutput(Tensor<T>)
Postprocesses model output into the final result format.
protected override Tensor<T> PostprocessOutput(Tensor<T> modelOutput)
Parameters
modelOutput (Tensor<T>): Raw output from the model.
Returns
- Tensor<T>
Postprocessed output in the expected format.
Predict(Tensor<T>)
Makes a prediction using the neural network.
public override Tensor<T> Predict(Tensor<T> input)
Parameters
input (Tensor<T>): The input data to process.
Returns
- Tensor<T>
The network's prediction.
Remarks
For Beginners: This is the main method you'll use to get results from your trained neural network. You provide some input data (for this model, a noisy audio tensor), and the network processes it through all its layers to produce an output (the enhanced audio).
PreprocessAudio(Tensor<T>)
Preprocesses raw audio for model input.
protected override Tensor<T> PreprocessAudio(Tensor<T> rawAudio)
Parameters
rawAudio (Tensor<T>): Raw audio waveform tensor with shape [samples] or [batch, samples].
Returns
- Tensor<T>
Preprocessed audio features suitable for model input.
Remarks
For Beginners: Raw audio is just a series of numbers representing sound pressure. Neural networks often work better with transformed representations like mel spectrograms. This method converts raw audio into the format the model expects.
ProcessChunk(Tensor<T>)
Processes audio in real-time streaming mode.
public Tensor<T> ProcessChunk(Tensor<T> audioChunk)
Parameters
audioChunk (Tensor<T>): A small chunk of audio for real-time processing.
Returns
- Tensor<T>
Enhanced audio chunk (delayed relative to the input by the processing latency; see LatencySamples).
Remarks
For real-time applications like video calls. The enhancer maintains internal state between calls for continuity.
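Example (a streaming sketch; the chunk source 'incomingChunks' and the output queue are illustrative assumptions):
model.ResetState();                                   // start a fresh stream
foreach (Tensor<float> chunk in incomingChunks)       // e.g. hop-sized chunks from the microphone
{
    Tensor<float> enhanced = model.ProcessChunk(chunk);
    outputQueue.Enqueue(enhanced);                    // output lags input by LatencySamples
}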
ResetState()
Resets the internal state of the network's layers, clearing any remembered information.
public override void ResetState()
Remarks
This method resets the internal state (hidden state and cell state) of all layers in the network. This is useful when starting to process a new, unrelated sequence or when the network's memory should be cleared before making new predictions.
For Beginners: This clears the neural network's memory to start fresh.
Think of this like:
- Wiping the slate clean before starting a new task
- Erasing the neural network's "memory" so past inputs don't influence new predictions
- Starting fresh when processing a completely new sequence
For example, if you've been using the neural network to process one audio stream and now want to process a completely different recording, you would reset the state first to avoid having the first stream influence the second.
SerializeNetworkSpecificData(BinaryWriter)
Serializes network-specific data that is not covered by the general serialization process.
protected override void SerializeNetworkSpecificData(BinaryWriter writer)
Parameters
writer (BinaryWriter): The BinaryWriter to write the data to.
Remarks
This method is called at the end of the general serialization process to allow derived classes to write any additional data specific to their implementation.
For Beginners: Think of this as packing a special compartment in your suitcase. While the main serialization method packs the common items (layers, parameters), this method allows each specific type of neural network to pack its own unique items that other networks might not have.
Train(Tensor<T>, Tensor<T>)
Trains the neural network on a single input-output pair.
public override void Train(Tensor<T> input, Tensor<T> expectedOutput)
Parameters
input (Tensor<T>): The input data.
expectedOutput (Tensor<T>): The expected output for the given input.
Remarks
This method performs one training step on the neural network using the provided input and expected output. It updates the network's parameters to reduce the error between the network's prediction and the expected output.
For Beginners: This is how your neural network learns. You provide:
- An input (what the network should process)
- The expected output (what the correct answer should be)
The network then:
- Makes a prediction based on the input
- Compares its prediction to the expected output
- Calculates how wrong it was (the loss)
- Adjusts its internal values to do better next time
After training, you can get the loss value using the GetLastLoss() method to see how well the network is learning.
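Example (a training-loop sketch; 'trainingPairs' is an assumed collection of matching noisy/clean Tensor<float> pairs):
for (int epoch = 0; epoch < 10; epoch++)
{
    foreach (var (noisy, clean) in trainingPairs)
    {
        model.Train(noisy, clean);                    // one training step per pair
    }
    Console.WriteLine($"Epoch {epoch}: last loss = {model.GetLastLoss()}");
}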
UpdateParameters(Vector<T>)
Updates the network's parameters with new values.
public override void UpdateParameters(Vector<T> parameters)
Parameters
parameters (Vector<T>): The new parameter values to set.
Remarks
For Beginners: During training, a neural network's internal values (parameters) get adjusted to improve its performance. This method allows you to update all those values at once by providing a complete set of new parameters.
This is typically used by optimization algorithms that calculate better parameter values based on training data.
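Example (a sketch of applying an externally adjusted parameter vector; the GetParameters() accessor and the Vector<float> indexer and Length shown here are assumptions about the IParameterizable surface, not confirmed API):
Vector<float> updated = model.GetParameters();        // assumed accessor for the current values
for (int i = 0; i < updated.Length; i++)
    updated[i] *= 0.999f;                             // e.g. a simple weight-decay adjustment
model.UpdateParameters(updated);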