CVNets is a computer vision toolkit that allows researchers and engineers to train standard and novel mobile- and non-mobile computer vision models for a variety of tasks, including object classification, object detection, semantic segmentation, and foundation models (e.g., CLIP).
- What's new?
- Installation
- Getting started
- Supported models and tasks
- Maintainers
- Research effort at Apple using CVNets
- Contributing to CVNets
- License
- Citation
July 2023: Version 0.4 of the CVNets library includes:
- Bytes Are All You Need: Transformers Operating Directly On File Bytes
- RangeAugment: Efficient online augmentation with Range Learning
- Training and evaluating foundation models (CLIP); a generic contrastive-objective sketch follows this list
- Mask R-CNN
- EfficientNet, Swin Transformer, and ViT
- Enhanced distillation support
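For reference, the sketch below shows the CLIP-style contrastive objective in plain PyTorch. The random tensors stand in for image- and text-encoder outputs, and the fixed `logit_scale` is an assumption (CLIP learns it as a temperature); this illustrates the general recipe, not CVNets' implementation.

```python
import torch
import torch.nn.functional as F

# Stand-ins for encoder outputs; in a real model these come from the
# image and text towers of a CLIP-style network.
image_features = torch.randn(4, 512)
text_features = torch.randn(4, 512)

# L2-normalize, then compute scaled cosine similarities.
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)
logit_scale = 100.0  # assumed fixed here; CLIP learns this temperature
logits_per_image = logit_scale * image_features @ text_features.t()

# Symmetric cross-entropy: the i-th image matches the i-th text.
labels = torch.arange(4)
loss = (F.cross_entropy(logits_per_image, labels)
        + F.cross_entropy(logits_per_image.t(), labels)) / 2
print(loss.item())
```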
We recommend using Python 3.10+ and PyTorch v1.12.0 or newer.
The instructions below use Conda. If you don't have Conda installed, you can check out How to Install Conda.
```bash
# Clone the repo
git clone [email protected]:apple/ml-cvnets.git
cd ml-cvnets

# Create a virtual env. We use Conda
conda create -n cvnets python=3.10.8
conda activate cvnets

# Install the requirements and the CVNets package
pip install -r requirements.txt -c constraints.txt
pip install --editable .
```
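As a quick sanity check (assuming the editable install above succeeded), the `cvnets` package should now be importable:

```python
# Verify that the editable install is visible to the interpreter.
import cvnets
print(cvnets.__file__)
```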
- General instructions for working with CVNets are given here.
- Examples for training and evaluating models are provided here and here.
- Examples for converting a PyTorch model to CoreML are provided here; a generic conversion sketch is also shown below.
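For orientation, here is a minimal, generic PyTorch-to-CoreML sketch using coremltools. The tiny `nn.Sequential` model is a stand-in, and this is not CVNets' own conversion script (see the linked examples for that):

```python
import torch
import torch.nn as nn
import coremltools as ct

# Stand-in model; in practice this would be a trained CVNets model in eval mode.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

# Trace the model, then convert the TorchScript graph to an ML Program.
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)
mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    inputs=[ct.TensorType(name="image", shape=example_input.shape)],
)
mlmodel.save("model.mlpackage")
```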
To see a list of available models and benchmarks, please refer to the Model Zoo and the examples folder.
- ImageNet classification models
- Multimodal classification
- Object detection
- Foundation models
- Automatic data augmentation
- Distillation (a generic sketch of both modes follows this list)
  - Soft distillation
  - Hard distillation
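To make the two distillation modes concrete: soft distillation matches the teacher's full, temperature-softened output distribution, while hard distillation treats the teacher's top-1 predictions as pseudo-labels. The following is a generic PyTorch sketch of the standard formulations, not CVNets' implementation:

```python
import torch
import torch.nn.functional as F

def soft_distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

def hard_distillation_loss(student_logits, teacher_logits):
    # Cross-entropy against the teacher's argmax predictions.
    pseudo_labels = teacher_logits.argmax(dim=-1)
    return F.cross_entropy(student_logits, pseudo_labels)

student_logits = torch.randn(4, 1000)
teacher_logits = torch.randn(4, 1000)
print(soft_distillation_loss(student_logits, teacher_logits).item())
print(hard_distillation_loss(student_logits, teacher_logits).item())
```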
This code was developed by Sachin Mehta and is now maintained by Sachin Mehta, Maxwell Horton, Mohammad Sekhavat, and Yanzi Jin.
Below is a list of publications from Apple that use CVNets:
- MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer, ICLR'22
- CVNets: High performance library for Computer Vision, ACM MM'22
- Separable Self-attention for Mobile Vision Transformers (MobileViTv2)
- RangeAugment: Efficient Online Augmentation with Range Learning
- Bytes Are All You Need: Transformers Operating Directly on File Bytes
We welcome PRs from the community! You can find information about contributing to CVNets in our contributing document.
Please remember to follow our Code of Conduct.
For license details, see LICENSE.
If you find our work useful, please cite the following papers:
```bibtex
@inproceedings{mehta2022mobilevit,
    title     = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
    author    = {Sachin Mehta and Mohammad Rastegari},
    booktitle = {International Conference on Learning Representations},
    year      = {2022}
}

@inproceedings{mehta2022cvnets,
    author    = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
    title     = {CVNets: High Performance Library for Computer Vision},
    year      = {2022},
    booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
    series    = {MM '22}
}
```