Self-Supervised Learning
- [Paper Review] Vision Transformers Need Registers
- [Paper Review] DINOv2: Learning Robust Visual Features without Supervision
- [Paper Review] iBOT: Image BERT Pre-Training with Online Tokenizer
- [Paper Review] BEiT: BERT Pre-Training of Image Transformers
- [Paper Review] Emerging Properties in Self-Supervised Vision Transformers
- [Paper Review] Bootstrap your own latent: A new approach to self-supervised Learning
- [Paper Review] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
- [Paper Review] Masked Autoencoders Are Scalable Vision Learners