Class ColBERT<T>
- Namespace: AiDotNet.NeuralNetworks
- Assembly: AiDotNet.dll
ColBERT (Contextualized Late Interaction over BERT) neural network implementation. Uses token-level representations for high-precision document retrieval.
public class ColBERT<T> : TransformerEmbeddingNetwork<T>, INeuralNetworkModel<T>, INeuralNetwork<T>, IFullModel<T, Tensor<T>, Tensor<T>>, IModel<Tensor<T>, Tensor<T>, ModelMetadata<T>>, IModelSerializer, ICheckpointableModel, IParameterizable<T, Tensor<T>, Tensor<T>>, IFeatureAware, IFeatureImportance<T>, ICloneable<IFullModel<T, Tensor<T>, Tensor<T>>>, IGradientComputable<T, Tensor<T>, Tensor<T>>, IJitCompilable<T>, IInterpretableModel<T>, IInputGradientComputable<T>, IDisposable, IEmbeddingModel<T>
Type Parameters
T: The numeric type used for calculations (typically float or double).
- Inheritance: TransformerEmbeddingNetwork<T> → ColBERT<T>
Remarks
ColBERT is a highly efficient and accurate retrieval model that keeps a separate vector for every token in a sentence. It calculates the similarity between a query and a document using a "Late Interaction" MaxSim operator, allowing it to capture fine-grained semantic matches.
For Beginners: Most AI search models are like people who read a whole book and then try to summarize it in just one word. ColBERT is like a person who keeps detailed notes on every single word. When you ask a question, ColBERT compares every word in your question to every word in the document notes. This is much more accurate because no information is "lost" during summarization.
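The sketch below shows how these pieces fit together in a minimal retrieval loop. It is illustrative only: the NeuralNetworkArchitecture<T> configuration, the example texts, and the LINQ-based ranking are assumptions, and only the ColBERT<T> members documented on this page are taken from the API.
using System.Linq;
var architecture = new NeuralNetworkArchitecture<float>();   // placeholder configuration (assumed)
var colbert = new ColBERT<float>(architecture, outputDimension: 128);
// Encode the query and each candidate document into token-level matrices.
Matrix<float> query = colbert.EmbedLateInteraction("what is late interaction?");
string[] docs = { "ColBERT keeps one vector per token.", "Single-vector models pool everything." };
// Rank documents by their MaxSim score against the query.
var ranked = docs
    .Select(d => (Text: d, Score: colbert.LateInteractionScore(query, colbert.EmbedLateInteraction(d))))
    .OrderByDescending(p => p.Score)
    .ToList();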
Constructors
ColBERT(NeuralNetworkArchitecture<T>, ITokenizer?, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>?, int, int, int, int, int, int, ILossFunction<T>?, double)
Initializes a new instance of the ColBERT model.
public ColBERT(NeuralNetworkArchitecture<T> architecture, ITokenizer? tokenizer = null, IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>? optimizer = null, int vocabSize = 30522, int outputDimension = 128, int maxSequenceLength = 512, int numLayers = 12, int numHeads = 12, int feedForwardDim = 3072, ILossFunction<T>? lossFunction = null, double maxGradNorm = 1)
Parameters
architecture NeuralNetworkArchitecture<T>
tokenizer ITokenizer
optimizer IGradientBasedOptimizer<T, Tensor<T>, Tensor<T>>
vocabSize int
outputDimension int
maxSequenceLength int
numLayers int
numHeads int
feedForwardDim int
lossFunction ILossFunction<T>
maxGradNorm double
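The defaults correspond to a BERT-base sized encoder (12 layers, 12 heads, 3072-dimensional feed-forward blocks, a 30522-token vocabulary) projected down to 128-dimensional token embeddings. A construction sketch follows; the architecture object is a placeholder assumption.
var architecture = new NeuralNetworkArchitecture<double>();   // placeholder configuration (assumed)
var model = new ColBERT<double>(
    architecture,
    vocabSize: 30522,          // WordPiece vocabulary size (default)
    outputDimension: 128,      // per-token embedding dimensionality (default)
    maxSequenceLength: 512,
    numLayers: 12,
    numHeads: 12,
    feedForwardDim: 3072);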
Methods
CreateNewInstance()
Creates a new instance of the same type as this neural network.
protected override IFullModel<T, Tensor<T>, Tensor<T>> CreateNewInstance()
Returns
- IFullModel<T, Tensor<T>, Tensor<T>>
A new instance of the same neural network type.
Remarks
For Beginners: This creates a blank version of the same type of neural network.
It's used internally by methods like DeepCopy and Clone to create the right type of network before copying the data into it.
DeserializeNetworkSpecificData(BinaryReader)
Deserializes network-specific data that was not covered by the general deserialization process.
protected override void DeserializeNetworkSpecificData(BinaryReader reader)
Parameters
reader BinaryReader: The BinaryReader to read the data from.
Remarks
This method is called at the end of the general deserialization process to allow derived classes to read any additional data specific to their implementation.
For Beginners: Think of serialization as packing a suitcase. This method unpacks its special compartment: after the main deserialization method has unpacked the common items (layers, parameters), each specific type of neural network uses this method to unpack the unique items it stored during serialization.
Embed(string)
Fallback method that encodes a sentence into a single summary vector (mean-pooled).
public override Vector<T> Embed(string text)
Parameters
text string: The text to encode.
Returns
- Vector<T>
A summary vector for the input text.
Remarks
For Beginners: This is a fallback option. While ColBERT works best when it keeps all its notes (as a table), sometimes you just want one summary list of numbers. This method averages all the word-level info into one overall summary.
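A short sketch contrasting the fallback summary vector with the full token-level output, assuming colbert is a constructed ColBERT<float> instance as in the earlier examples.
Vector<float> summary = colbert.Embed("contextualized late interaction");                 // one mean-pooled vector
Matrix<float> perToken = colbert.EmbedLateInteraction("contextualized late interaction"); // one row per token
// The summary vector trades precision for compactness: use it when a downstream
// component expects a single fixed-size embedding rather than a matrix.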
EmbedLateInteraction(string)
Encodes text into a multi-vector matrix where each row is a contextualized token embedding.
public Matrix<T> EmbedLateInteraction(string text)
Parameters
text string: The text to encode.
Returns
- Matrix<T>
A matrix with one row per input token, where each row is that token's contextualized embedding.
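Because the query-document comparison happens only at scoring time, document matrices can be encoded once and reused across many queries. A sketch of that pattern follows; the corpus dictionary and documentTexts collection are illustrative assumptions.
using System;
using System.Collections.Generic;
var corpus = new Dictionary<string, Matrix<float>>();
foreach (string doc in documentTexts)            // documentTexts: any IEnumerable<string> (assumed)
    corpus[doc] = colbert.EmbedLateInteraction(doc);
// At query time only the query needs to be encoded; stored matrices are scored directly.
Matrix<float> q = colbert.EmbedLateInteraction("late interaction retrieval");
foreach (var entry in corpus)
    Console.WriteLine($"{colbert.LateInteractionScore(q, entry.Value)}  {entry.Key}");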
GetModelMetadata()
Retrieves metadata about the ColBERT model.
public override ModelMetadata<T> GetModelMetadata()
Returns
- ModelMetadata<T>
Metadata containing model type and output dimensionality information.
InitializeLayers()
Sets up the transformer layers and the token-level projection head for ColBERT.
protected override void InitializeLayers()
LateInteractionScore(Matrix<T>, Matrix<T>)
Computes the similarity score between a query and document matrix using the MaxSim interaction.
public T LateInteractionScore(Matrix<T> queryEmbeddings, Matrix<T> docEmbeddings)
Parameters
queryEmbeddings Matrix<T>: The token-level embeddings for the query.
docEmbeddings Matrix<T>: The token-level embeddings for the document.
Returns
- T
A scalar interaction score.
Remarks
For Beginners: This is how ColBERT compares a question to a document. It looks at every word in your question and finds the absolute "best match" for it in the entire document. It then combines all those best matches into one final score.
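A conceptual sketch of the MaxSim computation over plain float arrays, assuming dot-product similarity between rows (equivalent to cosine similarity when rows are L2-normalized); it illustrates the idea rather than reproducing LateInteractionScore's internal code.
static float MaxSim(float[][] queryTokens, float[][] docTokens)
{
    float score = 0f;
    foreach (float[] q in queryTokens)            // every query token...
    {
        float best = float.MinValue;
        foreach (float[] d in docTokens)          // ...is compared against every document token
        {
            float sim = 0f;
            for (int i = 0; i < q.Length; i++)
                sim += q[i] * d[i];               // dot product of the two token embeddings
            if (sim > best) best = sim;           // keep only the best-matching document token
        }
        score += best;                            // sum of per-query-token maxima
    }
    return score;
}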
SerializeNetworkSpecificData(BinaryWriter)
Serializes network-specific data that is not covered by the general serialization process.
protected override void SerializeNetworkSpecificData(BinaryWriter writer)
Parameters
writer BinaryWriter: The BinaryWriter to write the data to.
Remarks
This method is called at the end of the general serialization process to allow derived classes to write any additional data specific to their implementation.
For Beginners: Think of this as packing a special compartment in your suitcase. While the main serialization method packs the common items (layers, parameters), this method allows each specific type of neural network to pack its own unique items that other networks might not have.
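A minimal sketch of the override pattern a derived network might use; the fields written here are hypothetical illustrations, not ColBERT's actual serialized state.
protected override void SerializeNetworkSpecificData(BinaryWriter writer)
{
    writer.Write(_outputDimension);        // hypothetical field not covered by the general process
    writer.Write(_maxSequenceLength);      // hypothetical field
}

protected override void DeserializeNetworkSpecificData(BinaryReader reader)
{
    _outputDimension = reader.ReadInt32();     // read back in the same order they were written
    _maxSequenceLength = reader.ReadInt32();
}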