| Name | Modified | Size |
|---|---|---|
| Implement DINOv2 source code.tar.gz | 2025-06-11 | 3.3 MB |
| Implement DINOv2 source code.zip | 2025-06-11 | 4.1 MB |
| README.md | 2025-06-11 | 4.1 kB |

Totals: 3 items, 7.4 MB
## What's Changed
- Add DINOv2 ViT benchmark implementation
- Add the paper "Joint-Embedding vs Reconstruction: Provable Benefits of Latent Space Prediction for Self-Supervised Learning" (Meta, 2025) to "Lightly in Research". Thanks to the authors for the credit!
- Add `seed_everything` for reproducibility in benchmarks by @yvesyue in https://github.com/lightly-ai/lightly/pull/1819
- Fix MyPy type-checking issues for newer versions of NumPy by @yvesyue in https://github.com/lightly-ai/lightly/pull/1820
- Fix DCLLoss negative-term aggregation and add loop-based reference test by @yvesyue in https://github.com/lightly-ai/lightly/pull/1827
- Fix bugs in KNN benchmark evaluation
- Fix bugs in cosine scheduler warmup epochs
- Fix `MaskedCausalBlock.__init__() got an unexpected keyword argument 'proj_bias'` due to an interface change in newer TIMM versions
- Fix `AddGridTransform` due to an interface change in newer Torchvision versions
- Fix `format` & `format-check` to only target Python directories
- Remove video download functions
- Remove unused download functions & add typing
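A `seed_everything` helper of the kind added for the benchmarks typically seeds all common sources of randomness at once. The sketch below is a hypothetical implementation, not the exact code from the PR; the torch branch is guarded so the sketch also runs where torch is not installed:

```python
import os
import random


def seed_everything(seed: int = 0) -> None:
    """Seed common sources of randomness for reproducible benchmark runs."""
    random.seed(seed)                         # Python's built-in RNG
    os.environ["PYTHONHASHSEED"] = str(seed)  # hash randomization

    try:
        import numpy as np

        np.random.seed(seed)                  # legacy NumPy global RNG
    except ImportError:
        pass

    try:
        import torch

        torch.manual_seed(seed)               # CPU RNG
        torch.cuda.manual_seed_all(seed)      # RNGs of all visible GPUs
    except ImportError:
        pass
```

Calling `seed_everything(42)` at the start of a benchmark run makes the sequence of random draws repeatable across runs (full determinism on GPU additionally requires deterministic algorithm settings in torch).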
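The cosine-scheduler fix concerns how warmup epochs interact with the cosine decay. A minimal reference schedule with linear warmup (a sketch for illustration, not Lightly's actual scheduler code) looks like this:

```python
import math


def cosine_warmup(step: int, max_steps: int, warmup_steps: int,
                  start_value: float = 1.0, end_value: float = 0.0) -> float:
    """Linear warmup to start_value over warmup_steps, then cosine decay
    from start_value down to end_value over the remaining steps."""
    if warmup_steps > 0 and step < warmup_steps:
        # Linear warmup: ramp from start_value / warmup_steps up to start_value.
        return start_value * (step + 1) / warmup_steps
    # Cosine decay over the steps after warmup.
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    return end_value + 0.5 * (start_value - end_value) * (1 + math.cos(math.pi * progress))
```

The subtle part (and a common source of off-by-one bugs) is that the cosine phase must start at `start_value` exactly when warmup ends and reach `end_value` exactly at `max_steps`, which is why `progress` is computed relative to `warmup_steps` rather than step 0.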
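For context on the KNN benchmark fix: self-supervised benchmarks commonly evaluate frozen features with a k-nearest-neighbor classifier over a labeled feature bank. A minimal pure-Python sketch of the idea (illustrative only, not Lightly's benchmark code, which operates on batched tensors) is:

```python
import math
from collections import Counter


def knn_predict(feature, bank_features, bank_labels, k=3):
    """Classify one feature vector by majority vote over its k nearest
    neighbors (by cosine similarity) in a labeled feature bank."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    # Rank all bank entries by similarity to the query feature.
    sims = sorted(
        ((cos(feature, f), lbl) for f, lbl in zip(bank_features, bank_labels)),
        reverse=True,
    )
    # Majority vote among the k most similar entries.
    votes = Counter(lbl for _, lbl in sims[:k])
    return votes.most_common(1)[0][0]
```

Production implementations typically also weight votes by similarity (as in Wu et al.'s instance-discrimination protocol) and batch the similarity computation on GPU.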
## New Contributors
- @yvesyue made their first contribution in https://github.com/lightly-ai/lightly/pull/1819
**Full Changelog**: https://github.com/lightly-ai/lightly/compare/v1.5.20...v1.15.21
Many thanks to our contributors!
## Models
- AIM: Scalable Pre-training of Large Autoregressive Image Models, 2024
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021
- BYOL: Bootstrap your own latent: A new approach to self-supervised learning, 2020
- DCL: Decoupled Contrastive Learning, 2021
- DenseCL: Dense Contrastive Learning for Self-Supervised Visual Pre-Training, 2021
- DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021
- DINOv2: Learning Robust Visual Features without Supervision, 2023
- FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU, 2022
- I-JEPA: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture, 2023
- MAE: Masked Autoencoders Are Scalable Vision Learners, 2021
- MSN: Masked Siamese Networks for Label-Efficient Learning, 2022
- MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019
- NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021
- PMSN: Prior Matching for Siamese Networks, 2022
- SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020
- SimMIM: A Simple Framework for Masked Image Modeling, 2021
- SimSiam: Exploring Simple Siamese Representation Learning, 2020
- SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022
- SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, 2020
- TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022
- VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, 2022
- VICRegL: Self-Supervised Learning of Local Visual Features, 2022