Enum SSLMethodCategory
Categorizes self-supervised learning methods by their learning paradigm.
public enum SSLMethodCategory
Fields
Contrastive = 0
Contrastive learning methods (SimCLR, MoCo). Learn by contrasting positive pairs against negative samples.
Methods in this category maximize agreement between different augmented views of the same image while minimizing agreement with views from different images.
Examples: SimCLR, MoCo, MoCo v2, MoCo v3
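As a concrete reference point, the canonical contrastive objective from the SimCLR paper is the NT-Xent (InfoNCE) loss for a positive pair of projected embeddings (z_i, z_j) among 2N augmented views, with cosine similarity sim(·,·) and temperature τ (shown in LaTeX notation; the library's implementations may differ in details):

\ell_{i,j} = -\log \frac{\exp(\mathrm{sim}(z_i, z_j)/\tau)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \ne i]} \exp(\mathrm{sim}(z_i, z_k)/\tau)}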
Generative = 2
Generative self-supervised methods (MAE). Learn by reconstructing masked or corrupted inputs.
Methods in this category learn by predicting missing parts of the input, analogous to language model pretraining (masked-token prediction in BERT, next-token prediction in GPT).
Examples: MAE (Masked Autoencoder)
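For reference, MAE's reconstruction objective as described in its paper is a mean squared error between reconstructed and original patches, computed only over the set M of masked patches (LaTeX notation; x_i may be per-patch normalized):

\mathcal{L}_{\mathrm{MAE}} = \frac{1}{|M|} \sum_{i \in M} \lVert \hat{x}_i - x_i \rVert_2^2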
NonContrastive = 1
Non-contrastive learning methods (BYOL, SimSiam, Barlow Twins). Learn without explicit negative samples.
Methods in this category avoid the need for negative samples through asymmetric architectures, stop-gradients, or redundancy reduction.
Examples: BYOL, SimSiam, Barlow Twins
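As one reference formulation, BYOL's loss (symmetrized over the two views in practice) is a normalized mean squared error, equivalent to a negative cosine similarity between the online predictor output q_θ(z_θ) and the momentum target z'_ξ, where sg(·) denotes stop-gradient and ξ is an exponential moving average of θ (LaTeX notation):

\mathcal{L}_{\mathrm{BYOL}} = 2 - 2 \cdot \frac{\langle q_\theta(z_\theta),\, \mathrm{sg}(z'_\xi) \rangle}{\lVert q_\theta(z_\theta) \rVert_2 \, \lVert \mathrm{sg}(z'_\xi) \rVert_2}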
SelfDistillation = 3
Self-distillation methods (DINO, iBOT). Learn by knowledge transfer from a teacher network to a student network.
Methods in this category use a momentum-updated teacher network to provide soft targets for a student network, enabling self-supervised knowledge distillation.
Examples: DINO, iBOT
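For reference, DINO trains the student to match the teacher's softened output distribution via cross-entropy, while the teacher's weights are an exponential moving average of the student's (LaTeX notation; P_t and P_s are the teacher and student softmax outputs, with the teacher additionally centered and sharpened):

\mathcal{L}_{\mathrm{DINO}} = -\sum_{c} P_t(x)^{(c)} \log P_s(x)^{(c)}, \qquad \theta_t \leftarrow \lambda\,\theta_t + (1 - \lambda)\,\theta_s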
Remarks
For Beginners: SSL methods can be grouped by how they learn representations:
- Contrastive: Learn by pulling similar samples together and pushing different samples apart
- NonContrastive: Learn by predicting one view from another without explicit negatives
- Generative: Learn by reconstructing masked or corrupted inputs
- SelfDistillation: Learn by matching predictions between teacher and student networks
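As a usage illustration, code can branch on the category to drive training-loop decisions, such as whether explicit negative pairs must be constructed. The extension method below is a hypothetical sketch, not part of the documented API:

using System;

public static class SSLMethodCategoryExtensions
{
    // Hypothetical helper: whether a category's training loop must
    // construct explicit negative pairs for each batch.
    public static bool RequiresNegativeSamples(this SSLMethodCategory category) =>
        category switch
        {
            SSLMethodCategory.Contrastive => true,        // contrasts against negatives (SimCLR, MoCo)
            SSLMethodCategory.NonContrastive => false,    // asymmetry/stop-gradient instead (BYOL, SimSiam)
            SSLMethodCategory.Generative => false,        // reconstructs masked input (MAE)
            SSLMethodCategory.SelfDistillation => false,  // matches teacher soft targets (DINO, iBOT)
            _ => throw new ArgumentOutOfRangeException(nameof(category)),
        };
}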