Releases · lightly-ai/lightly
Cyclic Cosine Scheduler
Changes
- Add `period` argument to `CosineWarmupScheduler` (#1413). Thanks to @JannikWirtz! A usage sketch follows below.
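A rough sketch of how the cyclic behaviour might be used. `CosineWarmupScheduler` and its `warmup_epochs`/`max_epochs` arguments live in `lightly.utils.scheduler`, but the exact semantics of the new `period` argument are an assumption here, not a confirmed API description.

```python
# Hypothetical sketch of a cyclic cosine schedule using the new `period` argument.
# The period value and its exact semantics are assumptions for illustration.
import torch
from lightly.utils.scheduler import CosineWarmupScheduler

model = torch.nn.Linear(32, 8)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

scheduler = CosineWarmupScheduler(
    optimizer=optimizer,
    warmup_epochs=10,   # linear warmup before the cosine part
    max_epochs=100,     # total schedule length
    period=20,          # assumed: restart the cosine curve every 20 epochs (#1413)
)

for epoch in range(100):
    # ... training loop for one epoch ...
    optimizer.step()
    scheduler.step()
```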
Models
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021
- Bootstrap your own latent: A new approach to self-supervised Learning, 2020
- DCL: Decoupled Contrastive Learning, 2021
- DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021
- FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU, 2022
- I-JEPA: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture, 2023
- MAE: Masked Autoencoders Are Scalable Vision Learners, 2021
- MSN: Masked Siamese Networks for Label-Efficient Learning, 2022
- MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019
- NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021
- PMSN: Prior Matching for Siamese Networks, 2022
- SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020
- SimMIM: A Simple Framework for Masked Image Modeling, 2021
- SimSiam: Exploring Simple Siamese Representation Learning, 2020
- SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022
- SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, 2020
- TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022
- VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, 2022
- VICRegL: Self-Supervised Learning of Local Visual Features, 2022
Documentation Improvements
Changes
- Small improvements to documentation
- Added types to embedding scripts (kudos to @agpeshal!)
- Fix: Handle the OPTIONS method in the `lightly-serve` script (see the example below).
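For context, OPTIONS requests are what browsers send as CORS preflights before fetching data from a locally running `lightly-serve` instance. A minimal way to verify such a request is handled, assuming a server is already running; the host and port are illustrative assumptions:

```python
# Minimal sketch: send a CORS-preflight-style OPTIONS request to a locally
# running lightly-serve instance. Host and port are illustrative assumptions;
# use whatever address your lightly-serve process actually listens on.
import requests

response = requests.options(
    "http://localhost:3456/",
    headers={
        "Origin": "https://app.lightly.ai",
        "Access-Control-Request-Method": "GET",
    },
)
print(response.status_code)  # with the fix, this is expected to be a non-error status
```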
VICReg benchmarks on Imagenet
Changes
- We have evaluated our VICReg implementation on ImageNet (check out the benchmark results); a minimal VICReg sketch follows after this list.
- Updated the docs to emphasize the difference between the Lightly SSL package and the Lightly company.
- Allow filenames with commas in embedding files and datasets.
- Fix: memory problems in the ImageNet benchmarks.
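For reference, a minimal VICReg-style forward pass with lightly's building blocks. The backbone, projection dimensions, and random inputs are illustrative assumptions and not the ImageNet benchmark configuration.

```python
# Minimal VICReg-style sketch with lightly's loss and projection head.
# Backbone choice, dimensions, and random inputs are illustrative assumptions.
import torch
import torchvision
from lightly.loss import VICRegLoss
from lightly.models.modules.heads import VICRegProjectionHead

backbone = torch.nn.Sequential(
    *list(torchvision.models.resnet18().children())[:-1]
)
projection_head = VICRegProjectionHead(
    input_dim=512, hidden_dim=2048, output_dim=2048, num_layers=2
)
criterion = VICRegLoss()

x0 = torch.randn(8, 3, 224, 224)  # two augmented views of the same batch
x1 = torch.randn(8, 3, 224, 224)
z0 = projection_head(backbone(x0).flatten(start_dim=1))
z1 = projection_head(backbone(x1).flatten(start_dim=1))
loss = criterion(z0, z1)  # variance, invariance, and covariance terms
```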
Types!
Changes
- Add mypy and partially type the package (#1382). `lightly.transforms` is fully typed; we'll gradually add types for the other modules (see the annotated example after this list).
- Add `py.typed` files for the typed parts of the package (#1382). This makes the types available when working with `lightly` from other codebases.
- Add support for resuming benchmark training (#1347). Thanks a lot to @sadimanna!
- Remove docs for outdated/internal API methods (#1385).
- Make the `relative_path` argument optional when scheduling a Lightly Worker run with local storage (#1384).
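Since `lightly.transforms` now ships inline types and `py.typed` markers, mypy can check user code against its real signatures. A small illustrative sketch; the chosen transform and its parameters are example values, not a prescribed workflow.

```python
# Type-annotated helper that mypy can now check against lightly.transforms.
# The specific transform (SimCLRTransform) and input_size are example choices.
from typing import List

from PIL import Image
from torch import Tensor

from lightly.transforms import SimCLRTransform


def make_views(image: Image.Image, input_size: int = 224) -> List[Tensor]:
    """Return the augmented views produced by a SimCLR-style multi-view transform."""
    transform = SimCLRTransform(input_size=input_size)
    return transform(image)  # multi-view transforms return one tensor per view


# Running `mypy` over this module now resolves the lightly.transforms signatures
# instead of falling back to `Any`.
```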
BarlowTwins Benchmark on ImageNet
Changes
- Added a new benchmark of BarlowTwins on ImageNet.
- Optimized the `BarlowTwinsLoss` computation, making it much faster (a minimal usage sketch follows below).
- Fixed a bug in the `CosineWarmupScheduler`. Thanks to @anishacharya for pointing out the problem.
- Cleaned up `setup.py`.
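For reference, a minimal sketch of how `BarlowTwinsLoss` is applied to two projected views. The backbone, dimensions, and random inputs are illustrative assumptions rather than the benchmark setup.

```python
# Minimal sketch of computing the Barlow Twins loss on two projected views.
# Backbone choice, dimensions, and random inputs are illustrative assumptions.
import torch
import torchvision
from lightly.loss import BarlowTwinsLoss
from lightly.models.modules import BarlowTwinsProjectionHead

backbone = torch.nn.Sequential(
    *list(torchvision.models.resnet18().children())[:-1]
)
projection_head = BarlowTwinsProjectionHead(512, 2048, 2048)
criterion = BarlowTwinsLoss()

x0 = torch.randn(16, 3, 224, 224)  # view 1 of the batch
x1 = torch.randn(16, 3, 224, 224)  # view 2 of the batch
z0 = projection_head(backbone(x0).flatten(start_dim=1))
z1 = projection_head(backbone(x1).flatten(start_dim=1))
loss = criterion(z0, z1)  # cross-correlation based redundancy-reduction loss
```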
Lightly Local Workflow
Changes
- Prepare for local workflow support:
  - Add the `lightly-serve` command.
  - Regenerate specs.
  - Add
- Fix docstrings.
Patch generated API client
Changes
- Patch generated API client.
BYOLTransform
Changes
- Add `BYOLTransform`, which replaces `SimCLRTransform` in the BYOL benchmarks (see the usage sketch after this list).
- Log benchmark results only on rank 0.
- Fix a bug in `PMSNLoss` where probabilities were not converted to log-space before the loss calculation. Thanks to @Cloudy1225 for reporting this!
- Version check now runs in the background and no longer requires SIGALRM.
- Add support for scheduling Lightly Worker runs with the new selection strategy strength option.
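A small sketch of generating the two BYOL views with the new transform. It assumes the default constructor applies BYOL's standard asymmetric augmentations; the image path is an illustrative placeholder.

```python
# Minimal sketch: produce the two asymmetric BYOL views with BYOLTransform.
# The default constructor and the example image path are assumptions.
from PIL import Image
from lightly.transforms import BYOLTransform

transform = BYOLTransform()        # assumed to use BYOL's default view augmentations
image = Image.open("example.jpg")  # any RGB image

view_1, view_2 = transform(image)  # one tensor per view, augmented differently
print(view_1.shape, view_2.shape)
```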
I-JEPA
Changes
- Basic support for I-JEPA (thanks to @Natyren!).
- Add a BYOL ImageNet ResNet-50 benchmark.
Paginate API client endpoints
Changes
- Paginate API client endpoints
- Cleaned up API client codebase