Class CLAPModel<T>
- Namespace: AiDotNet.Audio.Fingerprinting
- Assembly: AiDotNet.dll
CLAP (Contrastive Language-Audio Pretraining) - A neural network model that learns to align audio and text representations in a shared embedding space.
public class CLAPModel<T> : AudioNeuralNetworkBase<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable, IAudioFingerprinter<T>
Type Parameters
T: The numeric type used for calculations.
- Inheritance: AudioNeuralNetworkBase<T> → CLAPModel<T>
Remarks
CLAP is a multimodal model trained using contrastive learning to create embeddings where similar audio-text pairs are close together and dissimilar pairs are far apart. This enables:
- Zero-shot audio classification using text prompts
- Audio-to-text retrieval (find descriptions matching audio)
- Text-to-audio retrieval (find audio matching descriptions)
- Semantic audio fingerprinting
For Beginners: CLAP understands both audio and text! It can:
- Tell you what's in an audio clip without pre-defined categories
- Find audio that matches a text description ("a dog barking in the rain")
- Create embeddings for audio search and recommendation
Unlike traditional fingerprinting, which matches exact audio, CLAP understands audio semantics - it knows that the phrase "a dog barking" and an actual recording of barking are related!
Example use cases:
- "Is this audio of a happy or sad scene?" (sentiment analysis)
- "Find all audio clips with birds singing" (content-based search)
- "Classify this sound into one of these categories: ..." (zero-shot classification)
- Audio content moderation (detect specific sounds)
Reference: Wu, Y., et al. (2023). Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation.
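Although the loss function is pluggable (see the lossFunction parameter of the training constructor below), contrastive training of this kind is conventionally described by the symmetric InfoNCE objective. The following is a sketch of that standard form, not necessarily this library's exact implementation. For a batch of N pairs with audio embeddings $a_i$, text embeddings $t_i$, cosine similarity $\mathrm{sim}$, and temperature $\tau$:

$$
\mathcal{L} = -\frac{1}{2N}\sum_{i=1}^{N}\left[\log\frac{\exp(\mathrm{sim}(a_i,t_i)/\tau)}{\sum_{j=1}^{N}\exp(\mathrm{sim}(a_i,t_j)/\tau)} + \log\frac{\exp(\mathrm{sim}(a_i,t_i)/\tau)}{\sum_{j=1}^{N}\exp(\mathrm{sim}(a_j,t_i)/\tau)}\right]
$$

A lower $\tau$ (the default here is 0.07) sharpens the softmax, penalizing near-miss mismatched pairs more strongly.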
Constructors
CLAPModel(NeuralNetworkArchitecture<T>, int, int, int, int, int, int, int, int, int, int, double, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>?, ILossFunction<T>?)
Initializes a new instance of the CLAPModel<T> class for native training mode.
public CLAPModel(NeuralNetworkArchitecture<T> architecture, int sampleRate = 48000, int embeddingDim = 768, int projectionDim = 512, int numMelBands = 64, int audioEncoderLayers = 12, int audioEncoderHeads = 12, int vocabSize = 49408, int maxTextLength = 77, int windowSize = 1024, int hopSize = 480, double temperature = 0.07, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>? optimizer = null, ILossFunction<T>? lossFunction = null)
Parameters
architecture (NeuralNetworkArchitecture<T>): The neural network architecture defining input/output dimensions.
sampleRate (int): Sample rate of input audio (default: 48000 Hz).
embeddingDim (int): Internal embedding dimension (default: 768).
projectionDim (int): Projection dimension for output embeddings (default: 512).
numMelBands (int): Number of mel spectrogram bands (default: 64).
audioEncoderLayers (int): Number of transformer layers in audio encoder (default: 12).
audioEncoderHeads (int): Number of attention heads (default: 12).
vocabSize (int): Vocabulary size for text encoding (default: 49408).
maxTextLength (int): Maximum text sequence length (default: 77).
windowSize (int): STFT window size (default: 1024).
hopSize (int): STFT hop size (default: 480).
temperature (double): Temperature for contrastive loss (default: 0.07).
optimizer (IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>?): Optimizer for training. If null, a default Adam optimizer is used.
lossFunction (ILossFunction<T>?): Loss function. If null, contrastive loss is used.
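Examples
A minimal construction sketch for native training mode. Building the NeuralNetworkArchitecture<T> instance is assumed to happen elsewhere and is passed in as a parameter:

```csharp
// Sketch: create a CLAP model for native training, keeping the defaults
// (48 kHz audio, 768-d embeddings projected to 512-d, 12-layer/12-head
// audio encoder, Adam optimizer, contrastive loss).
CLAPModel<float> CreateTrainableClap(NeuralNetworkArchitecture<float> architecture)
{
    return new CLAPModel<float>(
        architecture,
        sampleRate: 48000,
        temperature: 0.07);   // contrastive loss temperature
}
```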
CLAPModel(NeuralNetworkArchitecture<T>, string, string?, int, int, int, OnnxModelOptions?)
Initializes a new instance of the CLAPModel<T> class for ONNX inference mode.
public CLAPModel(NeuralNetworkArchitecture<T> architecture, string audioEncoderPath, string? textEncoderPath = null, int sampleRate = 48000, int embeddingDim = 768, int projectionDim = 512, OnnxModelOptions? onnxOptions = null)
Parameters
architecture (NeuralNetworkArchitecture<T>): The neural network architecture defining input/output dimensions.
audioEncoderPath (string): Path to the ONNX audio encoder model.
textEncoderPath (string?): Optional path to the ONNX text encoder model.
sampleRate (int): Sample rate of input audio (default: 48000 Hz).
embeddingDim (int): Embedding dimension (default: 768).
projectionDim (int): Projection dimension for output embeddings (default: 512).
onnxOptions (OnnxModelOptions?): Optional ONNX model options.
Exceptions
- FileNotFoundException
Thrown when the ONNX model file is not found.
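Examples
A sketch of inference-mode construction from exported ONNX encoders; the file paths below are placeholders:

```csharp
// Sketch: load pretrained ONNX encoders for inference only.
// The paths are hypothetical; a missing file throws FileNotFoundException.
CLAPModel<float> LoadPretrainedClap(NeuralNetworkArchitecture<float> architecture)
{
    return new CLAPModel<float>(
        architecture,
        audioEncoderPath: "models/clap_audio.onnx",   // placeholder path
        textEncoderPath: "models/clap_text.onnx");    // optional; null = audio encoder only
}
```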
Properties
EmbeddingDimension
Gets the embedding dimension used internally.
public int EmbeddingDimension { get; }
Property Value
- int
FingerprintLength
Gets the fingerprint length in bits or elements.
public int FingerprintLength { get; }
Property Value
- int
Name
Gets the name of the fingerprinting algorithm.
public string Name { get; }
Property Value
- string
ProjectionDimension
Gets the projection dimension (final embedding size).
public int ProjectionDimension { get; }
Property Value
- int
Methods
ComputeSimilarity(AudioFingerprint<T>, AudioFingerprint<T>)
Computes the similarity between two fingerprints.
public double ComputeSimilarity(AudioFingerprint<T> fp1, AudioFingerprint<T> fp2)
Parameters
fp1 (AudioFingerprint<T>): First fingerprint.
fp2 (AudioFingerprint<T>): Second fingerprint.
Returns
- double
Similarity score (0-1, higher is more similar).
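Examples
A usage sketch combining Fingerprint and ComputeSimilarity; the two audio tensors are assumed to be preloaded mono waveforms, and the 0.8 cutoff is an arbitrary example threshold:

```csharp
// Sketch: compare two clips semantically. Unlike exact-match fingerprints,
// two different recordings of the same kind of sound can score high here.
bool SoundsAlike(CLAPModel<float> clap, Tensor<float> clipA, Tensor<float> clipB)
{
    AudioFingerprint<float> fpA = clap.Fingerprint(clipA);
    AudioFingerprint<float> fpB = clap.Fingerprint(clipB);

    // Similarity is in [0, 1]; higher means more similar.
    return clap.ComputeSimilarity(fpA, fpB) > 0.8;
}
```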
CreateNewInstance()
Creates a new instance of the same type as this neural network.
protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()
Returns
- IFullModel<T, Tensor<T>, Tensor<T>>
A new instance of the same neural network type.
Remarks
For Beginners: This creates a blank version of the same type of neural network.
It's used internally by methods like DeepCopy and Clone to create the right type of network before copying the data into it.
Deserialize(byte[])
Deserializes the neural network from a byte array.
public override void Deserialize(byte[] data)
Parameters
data (byte[]): The byte array containing the serialized neural network data.
DeserializeNetworkSpecificData(BinaryReader)
Deserializes network-specific data that was not covered by the general deserialization process.
protected override void DeserializeNetworkSpecificData(BinaryReader reader)
Parameters
reader (BinaryReader): The BinaryReader to read the data from.
Remarks
This method is called at the end of the general deserialization process to allow derived classes to read any additional data specific to their implementation.
For Beginners: If serialization is like packing a suitcase (see SerializeNetworkSpecificData below), this is like unpacking its special compartment. After the main deserialization method has unpacked the common items (layers, parameters), this method allows each specific type of neural network to unpack its own unique items that were stored during serialization.
EncodeAudio(Tensor<T>)
Encodes audio into an embedding vector.
public Tensor<T> EncodeAudio(Tensor<T> audio)
Parameters
audio (Tensor<T>): Audio tensor [samples] or [batch, samples].
Returns
- Tensor<T>
Audio embedding [batch, projectionDim].
EncodeText(int[])
Encodes text into an embedding vector.
public Tensor<T> EncodeText(int[] tokens)
Parameters
tokens (int[]): Text token IDs [batch, seqLen].
Returns
- Tensor<T>
Text embedding [batch, projectionDim].
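Examples
A text-to-audio retrieval sketch using both encoders. The tokenizer and the cosine-similarity helper are assumed external components (the library's own similarity utilities are not shown here), so both are taken as parameters:

```csharp
// Sketch: rank candidate clips against a text query by comparing projected
// embeddings. Tokenization is assumed to be CLIP-style (vocabSize 49408,
// max length 77), matching the constructor defaults.
int FindBestClip(
    CLAPModel<float> clap,
    Func<string, int[]> tokenize,                        // assumed external tokenizer
    string query,
    IReadOnlyList<Tensor<float>> clips,
    Func<Tensor<float>, Tensor<float>, double> cosine)   // assumed similarity helper
{
    Tensor<float> textEmb = clap.EncodeText(tokenize(query));  // [1, projectionDim]

    int best = -1;
    double bestScore = double.NegativeInfinity;
    for (int i = 0; i < clips.Count; i++)
    {
        double score = cosine(clap.EncodeAudio(clips[i]), textEmb);
        if (score > bestScore) { bestScore = score; best = i; }
    }
    return best;   // index of the clip whose embedding best matches the query
}
```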
FindMatches(AudioFingerprint<T>, AudioFingerprint<T>, int)
Finds matching segments between two fingerprints.
public IReadOnlyList<FingerprintMatch> FindMatches(AudioFingerprint<T> query, AudioFingerprint<T> reference, int minMatchLength = 10)
Parameters
query (AudioFingerprint<T>): The query fingerprint.
reference (AudioFingerprint<T>): The reference fingerprint to search in.
minMatchLength (int): Minimum length of a matching segment.
Returns
- IReadOnlyList<FingerprintMatch>
List of matching segments with time offsets.
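Examples
A sketch locating where a short query clip appears within a longer reference recording; both fingerprints are produced with Fingerprint:

```csharp
// Sketch: find segments of a reference recording that match a query clip.
void ReportMatches(CLAPModel<float> clap, Tensor<float> query, Tensor<float> reference)
{
    AudioFingerprint<float> q = clap.Fingerprint(query);
    AudioFingerprint<float> r = clap.Fingerprint(reference);

    // minMatchLength: require at least 10 consecutive matching units (the default).
    foreach (FingerprintMatch match in clap.FindMatches(q, r, minMatchLength: 10))
    {
        Console.WriteLine(match);   // each match carries time-offset information
    }
}
```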
Fingerprint(Tensor<T>)
Generates a fingerprint from audio data.
public AudioFingerprint<T> Fingerprint(Tensor<T> audio)
Parameters
audio (Tensor<T>): Audio samples as a tensor (mono audio).
Returns
- AudioFingerprint<T>
The audio fingerprint.
Fingerprint(Vector<T>)
Generates a fingerprint from audio data.
public AudioFingerprint<T> Fingerprint(Vector<T> audio)
Parameters
audio (Vector<T>): Audio samples as a vector (mono audio).
Returns
- AudioFingerprint<T>
The audio fingerprint.
GetModelMetadata()
Gets the metadata for this neural network model.
public override ModelMetadata<T> GetModelMetadata()
Returns
- ModelMetadata<T>
A ModelMetadata<T> object containing information about the model.
InitializeLayers()
Initializes the neural network layers.
protected override void InitializeLayers()
PostprocessOutput(Tensor<T>)
Postprocesses model output.
protected override Tensor<T> PostprocessOutput(Tensor<T> modelOutput)
Parameters
modelOutput (Tensor<T>)
Returns
- Tensor<T>
Predict(Tensor<T>)
Predicts audio embedding.
public override Tensor<T> Predict(Tensor<T> input)
Parameters
input (Tensor<T>)
Returns
- Tensor<T>
PreprocessAudio(Tensor<T>)
Preprocesses raw audio waveform for model input.
protected override Tensor<T> PreprocessAudio(Tensor<T> rawAudio)
Parameters
rawAudio (Tensor<T>)
Returns
- Tensor<T>
Serialize()
Serializes the neural network to a byte array.
public override byte[] Serialize()
Returns
- byte[]
A byte array representing the serialized neural network.
SerializeNetworkSpecificData(BinaryWriter)
Serializes network-specific data that is not covered by the general serialization process.
protected override void SerializeNetworkSpecificData(BinaryWriter writer)
Parameters
writer (BinaryWriter): The BinaryWriter to write the data to.
Remarks
This method is called at the end of the general serialization process to allow derived classes to write any additional data specific to their implementation.
For Beginners: Think of this as packing a special compartment in your suitcase. While the main serialization method packs the common items (layers, parameters), this method allows each specific type of neural network to pack its own unique items that other networks might not have.
Train(Tensor<T>, Tensor<T>)
Trains the model on audio-text pairs using contrastive loss.
public override void Train(Tensor<T> input, Tensor<T> expected)
Parameters
input (Tensor<T>)
expected (Tensor<T>)
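Examples
A training-step sketch. How the paired text is packed into the expected tensor is an assumption here: the documentation above says only that training consumes audio-text pairs under a contrastive loss, so the exact tensor layouts are not a confirmed contract:

```csharp
// Sketch: one training step on a batch of paired data. Treating `input` as
// the audio batch and `expected` as the paired text-token batch is an
// assumption based on the "audio-text pairs" description.
void TrainStep(CLAPModel<float> clap, Tensor<float> audioBatch, Tensor<float> textTokenBatch)
{
    // Contrastive training pulls matching pairs together in embedding space
    // and pushes mismatched pairs apart (scaled by the temperature).
    clap.Train(audioBatch, textTokenBatch);
}
```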
UpdateParameters(Vector<T>)
Updates the network's parameters with new values.
public override void UpdateParameters(Vector<T> gradients)
Parameters
gradients (Vector<T>)
Remarks
For Beginners: During training, a neural network's internal values (parameters) get adjusted to improve its performance. This method applies one full set of adjustments across all of those values at once, based on the gradient vector passed in.
This is typically used by optimization algorithms that calculate parameter updates from training data.
ZeroShotClassify(Tensor<T>, string[], Func<string, int[]>)
Performs zero-shot classification using text prompts.
public Dictionary<string, double> ZeroShotClassify(Tensor<T> audio, string[] classLabels, Func<string, int[]> tokenizer)
Parameters
audio (Tensor<T>): Audio tensor to classify.
classLabels (string[]): Array of text labels to classify against.
tokenizer (Func<string, int[]>): Function to tokenize text labels.
Returns
- Dictionary<string, double>
Classification probabilities for each label.
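Examples
A zero-shot classification sketch. The tokenizer is an assumed external component (the constructor defaults of vocabSize 49408 and maxTextLength 77 suggest a CLIP-style BPE tokenizer), and the label set is illustrative:

```csharp
// Sketch: classify a clip against free-form text labels, no retraining needed.
string MostLikelyLabel(
    CLAPModel<float> clap,
    Tensor<float> audio,
    Func<string, int[]> tokenize)   // assumed external tokenizer
{
    string[] labels = { "a dog barking", "rain falling", "people talking", "music playing" };

    Dictionary<string, double> probs = clap.ZeroShotClassify(audio, labels, tokenize);

    // Pick the label with the highest probability.
    string best = labels[0];
    foreach (var kv in probs)
        if (kv.Value > probs[best]) best = kv.Key;
    return best;
}
```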