Class DCCRN<T>
- Namespace
- AiDotNet.Audio.Enhancement
- Assembly
- AiDotNet.dll
DCCRN - Deep Complex Convolution Recurrent Network for speech enhancement.
public class DCCRN<T> : AudioNeuralNetworkBase<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable, IAudioEnhancer<T>
Type Parameters
T: The numeric type used for calculations.
- Inheritance
- AudioNeuralNetworkBase<T> → DCCRN<T>
Remarks
DCCRN operates directly on complex-valued spectrograms, preserving phase information for high-quality speech enhancement. Key features:
- Complex-valued convolutions for better spectral modeling
- LSTM layers for temporal dependencies
- Skip connections for gradient flow
- Mask-based enhancement for clean speech estimation
For Beginners: DCCRN is a neural network designed specifically for cleaning up noisy audio. Unlike simpler methods that only work with the "loudness" of frequencies, DCCRN also considers the "timing" (phase), which results in more natural-sounding enhanced audio.
Think of it like this: regular enhancement is like adjusting the volume of different frequencies, while DCCRN can also adjust the timing of the sound waves to better reconstruct the original clean speech.
Usage:
var model = new DCCRN<float>(architecture, "dccrn.onnx");
var cleanAudio = model.Enhance(noisyAudio);
Reference: "DCCRN: Deep Complex Convolution Recurrent Network for Phase-Aware Speech Enhancement" by Hu et al., Interspeech 2020
Constructors
DCCRN(NeuralNetworkArchitecture<T>, int, int, int, int, int, int, int, bool, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>?, ILossFunction<T>?)
Creates a DCCRN model for native training and inference.
public DCCRN(NeuralNetworkArchitecture<T> architecture, int sampleRate = 16000, int numStages = 6, int baseChannels = 32, int lstmHiddenDim = 256, int numLstmLayers = 2, int fftSize = 512, int hopSize = 256, bool useComplexMask = true, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>? optimizer = null, ILossFunction<T>? lossFunction = null)
Parameters
architecture (NeuralNetworkArchitecture<T>): Neural network architecture configuration.
sampleRate (int): Audio sample rate in Hz. Default is 16000.
numStages (int): Number of encoder/decoder stages. Default is 6.
baseChannels (int): Base number of channels. Default is 32.
lstmHiddenDim (int): LSTM hidden dimension. Default is 256.
numLstmLayers (int): Number of LSTM layers. Default is 2.
fftSize (int): FFT size. Default is 512.
hopSize (int): Hop size. Default is 256.
useComplexMask (bool): Whether to use complex mask estimation. Default is true.
optimizer (IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>?): Optional optimizer for training.
lossFunction (ILossFunction<T>?): Optional loss function for training.
Remarks
For Beginners: Use this constructor to train DCCRN from scratch.
Key parameters:
- numStages: More stages = deeper network, better results but slower
- baseChannels: More channels = more capacity
- useComplexMask: true preserves phase for better quality
Example:
var model = new DCCRN<float>(
architecture,
numStages: 6,
baseChannels: 32,
useComplexMask: true);
model.Train(noisyBatch, cleanBatch);
DCCRN(NeuralNetworkArchitecture<T>, string, int, int, int, OnnxModelOptions?)
Creates a DCCRN model for ONNX inference.
public DCCRN(NeuralNetworkArchitecture<T> architecture, string modelPath, int sampleRate = 16000, int fftSize = 512, int hopSize = 256, OnnxModelOptions? onnxOptions = null)
Parameters
architecture (NeuralNetworkArchitecture<T>): Neural network architecture configuration.
modelPath (string): Path to the ONNX model file.
sampleRate (int): Audio sample rate in Hz. Default is 16000.
fftSize (int): FFT size for STFT. Default is 512.
hopSize (int): Hop size for STFT. Default is 256.
onnxOptions (OnnxModelOptions?): Optional ONNX runtime options.
Remarks
For Beginners: Use this constructor to load a pre-trained DCCRN model for speech enhancement.
Example:
var model = new DCCRN<float>(architecture, "dccrn_16k.onnx");
var clean = model.Enhance(noisy);
Properties
EnhancementStrength
Gets or sets the enhancement strength (0.0 = no enhancement, 1.0 = maximum).
public double EnhancementStrength { get; set; }
Property Value
- double
Remarks
Higher values provide more noise reduction but may introduce artifacts. Start with 0.5-0.7 for natural-sounding results.
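Example (a sketch; architecture and noisyAudio are assumed to be created elsewhere):
var model = new DCCRN<float>(architecture, "dccrn_16k.onnx");
model.EnhancementStrength = 0.6; // moderate noise reduction with fewer artifacts
var clean = model.Enhance(noisyAudio);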
LatencySamples
Gets the processing latency in samples.
public int LatencySamples { get; }
Property Value
- int
Remarks
Important for real-time applications. Lower latency means faster response but potentially lower quality enhancement.
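Example (a sketch; the latency in milliseconds depends on the sample rate the model was configured with, assumed to be 16000 Hz here):
int sampleRate = 16000; // must match the rate the model was built with
double latencyMs = model.LatencySamples * 1000.0 / sampleRate;
Console.WriteLine($"Enhancement latency: {latencyMs:F1} ms");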
NumChannels
Gets the number of audio channels supported.
public int NumChannels { get; protected set; }
Property Value
- int
NumStages
Gets the number of encoder/decoder stages.
public int NumStages { get; }
Property Value
- int
SupportsTraining
Gets whether this network supports training.
public override bool SupportsTraining { get; }
Property Value
- bool
UseComplexMask
Gets whether complex mask is used.
public bool UseComplexMask { get; }
Property Value
- bool
Methods
CreateNewInstance()
Creates a new instance of the same type as this neural network.
protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()
Returns
- IFullModel<T, Tensor<T>, Tensor<T>>
A new instance of the same neural network type.
Remarks
For Beginners: This creates a blank version of the same type of neural network.
It's used internally by methods like DeepCopy and Clone to create the right type of network before copying the data into it.
DeserializeNetworkSpecificData(BinaryReader)
Deserializes network-specific data that was not covered by the general deserialization process.
protected override void DeserializeNetworkSpecificData(BinaryReader reader)
Parameters
reader (BinaryReader): The BinaryReader to read the data from.
Remarks
This method is called at the end of the general deserialization process to allow derived classes to read any additional data specific to their implementation.
For Beginners: Continuing the suitcase analogy, this is like unpacking that special compartment. After the main deserialization method has unpacked the common items (layers, parameters), this method allows each specific type of neural network to unpack its own unique items that were stored during serialization.
Dispose(bool)
Disposes resources.
protected override void Dispose(bool disposing)
Parameters
disposing (bool): True to release managed and unmanaged resources; false to release only unmanaged resources.
Enhance(Tensor<T>)
Enhances audio quality by reducing noise and artifacts.
public Tensor<T> Enhance(Tensor<T> audio)
Parameters
audio (Tensor<T>): Input audio tensor with shape [channels, samples] or [samples].
Returns
- Tensor<T>
Enhanced audio tensor with the same shape as input.
EnhanceWithReference(Tensor<T>, Tensor<T>)
Enhances audio with a reference signal for echo cancellation.
public Tensor<T> EnhanceWithReference(Tensor<T> audio, Tensor<T> reference)
Parameters
audio (Tensor<T>): Input audio (microphone signal).
reference (Tensor<T>): Reference audio (speaker playback signal).
Returns
- Tensor<T>
Enhanced audio with echo removed.
Remarks
For Beginners: This is for video calls!
The problem: Your microphone picks up sound from your speakers, creating an echo for the other person.
Solution: We know what's playing from the speakers (reference), so we can subtract it from what the microphone picks up.
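Example (a sketch for one frame of a call; micSignal and speakerSignal are Tensor<float> buffers assumed to be captured by your audio I/O code):
// micSignal: what the microphone recorded (near-end voice plus echo of the far end)
// speakerSignal: what was just played through the speakers (the far end's audio)
var echoFree = model.EnhanceWithReference(micSignal, speakerSignal);
// send echoFree to the other participant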
EstimateNoiseProfile(Tensor<T>)
Estimates the noise profile from a segment of audio.
public void EstimateNoiseProfile(Tensor<T> noiseOnlyAudio)
Parameters
noiseOnlyAudio (Tensor<T>): Audio containing only noise (no speech).
Remarks
For Beginners: Some enhancers work better if you tell them what the noise sounds like. Record a few seconds of "silence" (just the background noise) and pass it here.
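Example (a sketch; backgroundNoise and noisySpeech are Tensor<float> recordings assumed to be captured elsewhere):
// 1. Capture a short clip containing only the background noise.
model.EstimateNoiseProfile(backgroundNoise);
// 2. Enhance the actual recording; the stored noise profile guides the suppression.
var clean = model.Enhance(noisySpeech);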
GetModelMetadata()
Gets the metadata for this neural network model.
public override ModelMetadata<T> GetModelMetadata()
Returns
- ModelMetadata<T>
A ModelMetadata<T> object containing information about the model.
InitializeLayers()
Initializes the layers of the neural network based on the architecture.
protected override void InitializeLayers()
Remarks
For Beginners: This method sets up all the layers in your neural network according to the architecture you've defined. It's like assembling the parts of your network before you can use it.
PostprocessOutput(Tensor<T>)
Postprocesses model output into the final result format.
protected override Tensor<T> PostprocessOutput(Tensor<T> modelOutput)
Parameters
modelOutput (Tensor<T>): Raw output from the model.
Returns
- Tensor<T>
Postprocessed output in the expected format.
Predict(Tensor<T>)
Makes a prediction using the neural network.
public override Tensor<T> Predict(Tensor<T> input)
Parameters
input (Tensor<T>): The input data to process.
Returns
- Tensor<T>
The network's prediction.
Remarks
For Beginners: This is the main method you'll use to get results from your trained neural network. You provide some input data (like an image or text), and the network processes it through all its layers to produce an output (like a classification or prediction).
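Example (a minimal sketch; input is assumed to already be a Tensor<float> in the format the model expects):
// For end-to-end audio cleanup, Enhance is usually the more convenient entry point,
// since it handles preprocessing and postprocessing around this call.
var output = model.Predict(input);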
PreprocessAudio(Tensor<T>)
Preprocesses raw audio for model input.
protected override Tensor<T> PreprocessAudio(Tensor<T> rawAudio)
Parameters
rawAudio (Tensor<T>): Raw audio waveform tensor with shape [samples] or [batch, samples].
Returns
- Tensor<T>
Preprocessed audio features suitable for model input.
Remarks
For Beginners: Raw audio is just a series of numbers representing sound pressure. Neural networks often work better with transformed representations like mel spectrograms. This method converts raw audio into the format the model expects.
ProcessChunk(Tensor<T>)
Processes audio in real-time streaming mode.
public Tensor<T> ProcessChunk(Tensor<T> audioChunk)
Parameters
audioChunk (Tensor<T>): A small chunk of audio for real-time processing.
Returns
- Tensor<T>
Enhanced audio chunk (may have latency).
Remarks
For real-time applications like video calls. The enhancer maintains internal state between calls for continuity.
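Example (a streaming sketch; callIsActive, ReadNextChunk, and PlayAudio are hypothetical stand-ins for your audio I/O):
// Feed small chunks continuously; the enhancer keeps internal state between calls.
while (callIsActive)
{
    Tensor<float> chunk = ReadNextChunk();       // hypothetical: capture from the microphone
    Tensor<float> enhanced = model.ProcessChunk(chunk);
    PlayAudio(enhanced);                         // hypothetical: send downstream
}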
ResetState()
Resets the internal state of the different layers, clearing any remembered information.
public override void ResetState()
Remarks
This method resets the internal state (hidden state and cell state) of all layers in the network. This is useful when starting to process a new, unrelated sequence or when the network's memory should be cleared before making new predictions.
For Beginners: This clears the neural network's memory to start fresh.
Think of this like:
- Wiping the slate clean before starting a new task
- Erasing the neural network's "memory" so past inputs don't influence new predictions
- Starting fresh when processing a completely new sequence
For example, if you've been using a neural network to analyze one document and now want to analyze a completely different document, you would reset the state first to avoid having the first document influence the analysis of the second one.
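Example (a sketch; the chunk variables are hypothetical placeholders for audio from two unrelated streams):
var lastOfCallA = model.ProcessChunk(finalChunkOfCallA);
model.ResetState();                              // forget call A before starting call B
var firstOfCallB = model.ProcessChunk(firstChunkOfCallB);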
SerializeNetworkSpecificData(BinaryWriter)
Serializes network-specific data that is not covered by the general serialization process.
protected override void SerializeNetworkSpecificData(BinaryWriter writer)
Parameters
writer (BinaryWriter): The BinaryWriter to write the data to.
Remarks
This method is called at the end of the general serialization process to allow derived classes to write any additional data specific to their implementation.
For Beginners: Think of this as packing a special compartment in your suitcase. While the main serialization method packs the common items (layers, parameters), this method allows each specific type of neural network to pack its own unique items that other networks might not have.
Train(Tensor<T>, Tensor<T>)
Trains the neural network on a single input-output pair.
public override void Train(Tensor<T> input, Tensor<T> expectedOutput)
Parameters
input (Tensor<T>): The input data.
expectedOutput (Tensor<T>): The expected output for the given input.
Remarks
This method performs one training step on the neural network using the provided input and expected output. It updates the network's parameters to reduce the error between the network's prediction and the expected output.
For Beginners: This is how your neural network learns. You provide:
- An input (what the network should process)
- The expected output (what the correct answer should be)
The network then:
- Makes a prediction based on the input
- Compares its prediction to the expected output
- Calculates how wrong it was (the loss)
- Adjusts its internal values to do better next time
After training, you can get the loss value using the GetLastLoss() method to see how well the network is learning.
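Example (a sketch of a simple training loop; noisyBatches and cleanBatches are assumed to be prepared lists of matching Tensor<float> pairs):
for (int epoch = 0; epoch < 10; epoch++)
{
    for (int i = 0; i < noisyBatches.Count; i++)
    {
        model.Train(noisyBatches[i], cleanBatches[i]);
    }
    Console.WriteLine($"Epoch {epoch}: loss = {model.GetLastLoss()}");
}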
UpdateParameters(Vector<T>)
Updates the network's parameters with new values.
public override void UpdateParameters(Vector<T> parameters)
Parameters
parameters (Vector<T>): The new parameter values to set.
Remarks
For Beginners: During training, a neural network's internal values (parameters) get adjusted to improve its performance. This method allows you to update all those values at once by providing a complete set of new parameters.
This is typically used by optimization algorithms that calculate better parameter values based on training data.
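Example (a minimal sketch; newParameters is a Vector<T> of updated values, typically produced by an optimizer rather than built by hand):
// Apply an externally computed set of parameter values to the network in one call.
model.UpdateParameters(newParameters);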