Class SimCSE<T>
- Namespace
- AiDotNet.NeuralNetworks
- Assembly
- AiDotNet.dll
SimCSE (Simple Contrastive Learning of Sentence Embeddings) neural network implementation.
public class SimCSE<T> : TransformerEmbeddingNetwork<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable, IEmbeddingModel<T>
Type Parameters
T — The numeric type used for calculations (typically float or double).
- Inheritance
- TransformerEmbeddingNetwork<T>
- SimCSE<T>
- Implements
- Inherited Members
- Extension Methods
Remarks
SimCSE is a state-of-the-art framework for learning sentence embeddings. It uses a contrastive learning objective to pull semantically similar sentences together and push dissimilar ones apart. Its best-known variant is unsupervised: it encodes the same sentence twice with different dropout masks, using the dropout noise itself as a minimal form of data augmentation.
For Beginners: Imagine you're trying to recognize a friend in a crowded room. Even if they are wearing a hat, glasses, or a scarf (like "dropout" noise), they are still the same person. SimCSE trains the model by showing it the same sentence twice with different "masks" and telling it: "this is the same sentence." This helps the model learn the true, deep meaning of the sentence that stays constant regardless of small changes.
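The contrastive objective described above can be sketched as an InfoNCE-style loss over in-batch similarities. This is an illustrative, self-contained computation, not the library's internal implementation; the function name and the similarity-matrix convention are assumptions for the example.

```csharp
using System;
using System.Linq;

class SimCseLossSketch
{
    // Illustrative only: the InfoNCE-style contrastive loss that SimCSE
    // minimizes. sims[i][j] is the cosine similarity between the first
    // dropout-noised encoding of sentence i and the second encoding of
    // sentence j; the diagonal entries are the positive pairs, and all
    // other sentences in the batch serve as negatives.
    static double InfoNceLoss(double[][] sims, double temperature = 0.05)
    {
        double loss = 0;
        for (int i = 0; i < sims.Length; i++)
        {
            double denom = sims[i].Sum(s => Math.Exp(s / temperature));
            loss -= Math.Log(Math.Exp(sims[i][i] / temperature) / denom);
        }
        return loss / sims.Length;
    }

    static void Main()
    {
        // A matrix with high diagonal similarity (the model recognizes
        // the same sentence under different dropout masks) yields a
        // lower loss than a uniform one.
        double[][] sims =
        {
            new[] { 0.9, 0.1, 0.2 },
            new[] { 0.1, 0.8, 0.1 },
            new[] { 0.2, 0.1, 0.9 },
        };
        Console.WriteLine(InfoNceLoss(sims));
    }
}
```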
Constructors
SimCSE(NeuralNetworkArchitecture<T>, ITokenizer?, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>?, SimCSEType, int, int, int, int, int, int, double, PoolingStrategy, ILossFunction<T>?, double)
Initializes a new instance of the SimCSE model.
public SimCSE(NeuralNetworkArchitecture<T> architecture, ITokenizer? tokenizer = null, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>? optimizer = null, SimCSEType type = SimCSEType.Unsupervised, int vocabSize = 30522, int embeddingDimension = 768, int maxSequenceLength = 512, int numLayers = 12, int numHeads = 12, int feedForwardDim = 3072, double dropoutRate = 0.1, TransformerEmbeddingNetwork<T>.PoolingStrategy poolingStrategy = PoolingStrategy.ClsToken, ILossFunction<T>? lossFunction = null, double maxGradNorm = 1)
Parameters
- architecture NeuralNetworkArchitecture<T>
- tokenizer ITokenizer
- optimizer IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>
- type SimCSEType
- vocabSize int
- embeddingDimension int
- maxSequenceLength int
- numLayers int
- numHeads int
- feedForwardDim int
- dropoutRate double
- poolingStrategy TransformerEmbeddingNetwork<T>.PoolingStrategy
- lossFunction ILossFunction<T>
- maxGradNorm double
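A construction sketch based on the signature above. The `architecture` and `myTokenizer` variables are placeholders assumed to be created elsewhere; the overridden sizes are arbitrary example values for a smaller encoder, and all omitted parameters fall back to the documented defaults.

```csharp
// Sketch: constructing an unsupervised SimCSE model. The architecture
// and tokenizer instances are assumed to exist already.
var model = new SimCSE<float>(
    architecture,
    tokenizer: myTokenizer,
    type: SimCSEType.Unsupervised,
    embeddingDimension: 384,   // smaller than the default 768
    numLayers: 6,              // smaller than the default 12
    numHeads: 6,
    feedForwardDim: 1536);
```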
Methods
CreateNewInstance()
Creates a new instance of the same type as this neural network.
protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()
Returns
- IFullModel<T, Tensor<T>, Tensor<T>>
A new instance of the same neural network type.
Remarks
For Beginners: This creates a blank version of the same type of neural network.
It's used internally by methods like DeepCopy and Clone to create the right type of network before copying the data into it.
DeserializeNetworkSpecificData(BinaryReader)
Deserializes network-specific data that was not covered by the general deserialization process.
protected override void DeserializeNetworkSpecificData(BinaryReader reader)
Parameters
reader BinaryReader — The BinaryReader to read the data from.
Remarks
This method is called at the end of the general deserialization process to allow derived classes to read any additional data specific to their implementation.
For Beginners: Think of serialization as packing a suitcase and deserialization as unpacking it; this method unpacks the special compartment. After the main deserialization method has unpacked the common items (layers, parameters), this method allows each specific type of neural network to unpack its own unique items that were stored during serialization.
Embed(string)
Encodes a single string into a normalized summary vector.
public override Vector<T> Embed(string text)
Parameters
text string — The text to encode.
Returns
- Vector<T>
A normalized embedding vector.
Remarks
For Beginners: This is the main use case. You give the model a sentence, it reads it with all its layers, summarizes the meaning based on your chosen pooling strategy (like taking the average meaning), and returns one final list of numbers.
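A usage sketch for Embed. Because the documentation states the returned vectors are normalized, the dot product of two embeddings equals their cosine similarity; the `Length` property and indexer on `Vector<T>` are assumptions about the AiDotNet vector API.

```csharp
// Sketch: comparing the meaning of two sentences via their embeddings.
Vector<float> a = model.Embed("The cat sat on the mat.");
Vector<float> b = model.Embed("A feline rested on the rug.");

// Dot product of normalized vectors == cosine similarity.
float similarity = 0;
for (int i = 0; i < a.Length; i++)
    similarity += a[i] * b[i];

// Values near 1 suggest similar meaning; values near 0, unrelated text.
```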
EmbedAsync(string)
Asynchronously embeds a single text string into a vector representation.
public override Task<Vector<T>> EmbedAsync(string text)
Parameters
text string — The text to embed.
Returns
- Task<Vector<T>>
A task representing the async operation, with the resulting vector.
EmbedBatchAsync(IEnumerable<string>)
Asynchronously embeds multiple text strings into vector representations in a single batch operation.
public override Task<Matrix<T>> EmbedBatchAsync(IEnumerable<string> texts)
Parameters
texts IEnumerable<string> — The collection of texts to embed.
Returns
- Task<Matrix<T>>
A task representing the async operation, with the resulting matrix.
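A batch-embedding sketch. That each row of the returned matrix corresponds to one input text, in input order, is an assumption based on common embedding-API conventions rather than something the documentation states.

```csharp
// Sketch: embedding several texts in one batched call, which is
// typically faster than calling EmbedAsync per text.
string[] docs = { "first document", "second document", "third document" };
Matrix<float> embeddings = await model.EmbedBatchAsync(docs);
// Assumed layout: embeddings row i holds the vector for docs[i].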
GetModelMetadata()
Retrieves detailed metadata about the SimCSE configuration.
public override ModelMetadata<T> GetModelMetadata()
Returns
- ModelMetadata<T>
Metadata object with training mode and dropout details.
InitializeLayers()
Configures the transformer encoder layers for SimCSE based on standard research patterns from LayerHelper.
protected override void InitializeLayers()
SerializeNetworkSpecificData(BinaryWriter)
Serializes network-specific data that is not covered by the general serialization process.
protected override void SerializeNetworkSpecificData(BinaryWriter writer)
Parameters
writer BinaryWriter — The BinaryWriter to write the data to.
Remarks
This method is called at the end of the general serialization process to allow derived classes to write any additional data specific to their implementation.
For Beginners: Think of this as packing a special compartment in your suitcase. While the main serialization method packs the common items (layers, parameters), this method allows each specific type of neural network to pack its own unique items that other networks might not have.