ffiirree committed Dec 9, 2022
1 parent 8e8e326 commit 080174b

## Backbones

- [x] `AlexNet` - [ImageNet Classification with Deep Convolutional Neural Networks](https://papers.nips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf), NeurIPS, 2012
- [x] `VGGNets` - [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556), 2014
- [x] `GoogLeNet` - [Going Deeper with Convolutions](https://arxiv.org/abs/1409.4842), 2014
- [x] `Inception-V3` - [Rethinking the Inception Architecture for Computer Vision](https://arxiv.org/abs/1512.00567), 2015
- [x] `Inception-V4 and Inception-ResNet` - [Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning](https://arxiv.org/abs/1602.07261), AAAI, 2016
- [x] `ResNet` - [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385), 2015
- [x] `SqueezeNet` - [SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size](https://arxiv.org/abs/1602.07360), 2016
- [x] `ResNeXt` - [Aggregated Residual Transformations for Deep Neural Networks](https://arxiv.org/abs/1611.05431), CVPR, 2016
- [ ] `Res2Net` - [Res2Net: A New Multi-scale Backbone Architecture](https://arxiv.org/abs/1904.01169), TPAMI, 2019
- [x] `ReXNet` - [Rethinking Channel Dimensions for Efficient Model Design](https://arxiv.org/abs/2007.00992), CVPR, 2020
- [x] `Xception` - [Xception: Deep Learning with Depthwise Separable Convolutions](https://arxiv.org/abs/1610.02357), CVPR, 2016
- [x] `DenseNet` - [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993), CVPR, 2016
- [ ] `DLA` - [Deep Layer Aggregation](https://arxiv.org/abs/1707.06484), CVPR, 2017
- [ ] `DPN` - [Dual Path Networks](https://arxiv.org/abs/1707.01629), NeurIPS, 2017
- [ ] `NASNet-A` - [Learning Transferable Architectures for Scalable Image Recognition](https://arxiv.org/abs/1707.07012), CVPR, 2017
- [ ] `PNasNet` - [Progressive Neural Architecture Search](https://arxiv.org/abs/1712.00559), ECCV, 2017
- [x] `MobileNets` - [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861), 2017
- [x] `MobileNetV2` - [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381), CVPR, 2018
- [x] `MobileNetV3` - [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244), ICCV, 2019
- [x] `ShuffleNet` - [ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices](https://arxiv.org/abs/1707.01083), CVPR, 2017
- [x] `ShuffleNet V2` - [ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design](https://arxiv.org/abs/1807.11164), ECCV, 2018
- [x] `MnasNet` - [MnasNet: Platform-Aware Neural Architecture Search for Mobile](https://arxiv.org/abs/1807.11626), CVPR, 2018
- [x] `GhostNet` - [GhostNet: More Features from Cheap Operations](https://arxiv.org/abs/1911.11907), CVPR, 2019
- [ ] `HRNet` - [Deep High-Resolution Representation Learning for Visual Recognition](https://arxiv.org/abs/1908.07919), TPAMI, 2019
- [ ] `CSPNet` - [CSPNet: A New Backbone that can Enhance Learning Capability of CNN](https://arxiv.org/abs/1911.11929), CVPR, 2019
- [x] `EfficientNet` - [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946), ICML, 2019
- [x] `EfficientNetV2` - [EfficientNetV2: Smaller Models and Faster Training](https://arxiv.org/abs/2104.00298), ICML, 2021
- [x] `RegNet` - [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678), CVPR, 2020
- [ ] `GPU-EfficientNets` - [Neural Architecture Design for GPU-Efficient Networks](https://arxiv.org/abs/2006.14090), 2020
- [ ] `LambdaNetworks` - [LambdaNetworks: Modeling Long-Range Interactions Without Attention](https://arxiv.org/abs/2102.08602), ICLR, 2021
- [ ] `RepVGG` - [RepVGG: Making VGG-style ConvNets Great Again](https://arxiv.org/abs/2101.03697), CVPR, 2021
- [ ] `HardCoRe-NAS` - [HardCoRe-NAS: Hard Constrained diffeRentiable Neural Architecture Search](https://arxiv.org/abs/2102.11646), ICML, 2021
- [ ] `NFNet` - [High-Performance Large-Scale Image Recognition Without Normalization](https://arxiv.org/abs/2102.06171), ICML, 2021
- [ ] `NF-ResNets` - [Characterizing signal propagation to close the performance gap in unnormalized ResNets](https://arxiv.org/abs/2101.08692), ICLR, 2021
- [x] `ConvMixer` - [Patches are all you need?](https://openreview.net/forum?id=TVHS5Y4dNvM), 2021
- [x] `ConvNeXt` - [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545), CVPR, 2022
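Many of the backbones above (ResNet, ResNeXt, DenseNet, the MobileNet family) build on the identity-shortcut idea introduced by ResNet. This is not code from the repository, just a minimal NumPy sketch of a residual block, with convolutions and batch norm replaced by plain linear maps for brevity:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """ResNet-style block: y = relu(F(x) + x), where F here is two
    linear maps (convolutions and batch norm omitted for brevity)."""
    out = relu(x @ w1)
    out = out @ w2
    return relu(out + x)  # identity shortcut keeps gradients flowing

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w1 = rng.normal(size=(8, 8)) * 0.1
w2 = rng.normal(size=(8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (4, 8): the shortcut requires F(x) to match x's shape
```

The shortcut is why such blocks stack to great depth: when `F(x)` is near zero, the block is close to the identity, so adding layers cannot easily hurt optimization.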

### Attention Blocks

### Transformer

- [x] `ViT` - [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929), ICLR, 2020
- [ ] `DeiT` - [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877), ICML, 2020
- [ ] `Swin Transformer` - [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030), ICCV, 2021
- [ ] `Twins` - [Twins: Revisiting the Design of Spatial Attention in Vision Transformers](https://arxiv.org/abs/2104.13840), NeurIPS, 2021
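The vision transformers above all start from ViT's move of treating an image as a sequence of flattened patches. A small NumPy sketch of that patchification step (the learned linear projection and position embeddings are omitted):

```python
import numpy as np

def image_to_patches(img, patch=16):
    """Split an HxWxC image into non-overlapping flattened patches,
    as in ViT's 'an image is worth 16x16 words'."""
    H, W, C = img.shape
    assert H % patch == 0 and W % patch == 0
    x = img.reshape(H // patch, patch, W // patch, patch, C)
    x = x.transpose(0, 2, 1, 3, 4)             # (H/p, W/p, p, p, C)
    return x.reshape(-1, patch * patch * C)    # (num_patches, p*p*C)

img = np.zeros((224, 224, 3))
tokens = image_to_patches(img)
print(tokens.shape)  # (196, 768): 14x14 patches of 16*16*3 values each
```

For a 224x224 RGB image this yields 196 tokens of dimension 768, the sequence the transformer encoder then attends over.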

### MLP

- [x] `MLP-Mixer` - [MLP-Mixer: An all-MLP Architecture for Vision](https://arxiv.org/abs/2105.01601), NeurIPS, 2021
- [x] `ResMLP` - [ResMLP: Feedforward networks for image classification with data-efficient training](https://arxiv.org/abs/2105.03404), 2021
- [ ] `gMLP` - [Pay Attention to MLPs](https://arxiv.org/abs/2105.08050), 2021
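The MLP models above replace attention with MLPs applied along alternating axes of the token grid. A simplified NumPy sketch of one MLP-Mixer block (LayerNorm and the second layer of each MLP are dropped to keep it short):

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def mixer_block(tokens, w_tok, w_ch):
    """Simplified MLP-Mixer block on a (num_tokens, channels) array:
    token mixing (MLP across patches) then channel mixing (MLP across
    features), each with a residual connection."""
    # token mixing: transpose so the MLP acts over the patch dimension
    tokens = tokens + gelu(tokens.T @ w_tok).T
    # channel mixing: the MLP acts over the feature dimension
    tokens = tokens + gelu(tokens @ w_ch)
    return tokens

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))           # 4 tokens, 8 channels
out = mixer_block(tokens, rng.normal(size=(4, 4)) * 0.1,
                  rng.normal(size=(8, 8)) * 0.1)
print(out.shape)  # (4, 8)
```

The transpose trick is the whole idea: the same cheap MLP machinery mixes information spatially and per-channel in turn.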

### Self-supervised

- [ ] `MAE` - [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377), CVPR, 2021
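MAE's pretext task is reconstructing an image from a small visible subset of its patch tokens. A hedged NumPy sketch of the random-masking step (the encoder/decoder themselves are out of scope here):

```python
import numpy as np

def random_mask(patches, mask_ratio=0.75, rng=None):
    """MAE-style random masking: keep a random subset of patch tokens;
    the encoder only ever sees the visible patches."""
    rng = rng or np.random.default_rng()
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    keep = np.sort(rng.permutation(n)[:n_keep])
    mask = np.ones(n, dtype=bool)
    mask[keep] = False          # False = visible, True = masked
    return patches[keep], mask

patches = np.arange(16 * 4).reshape(16, 4)  # 16 toy patch tokens
visible, mask = random_mask(patches, rng=np.random.default_rng(0))
print(visible.shape, int(mask.sum()))  # (4, 4) 12
```

With the default 75% ratio only a quarter of the tokens reach the encoder, which is what makes MAE pretraining cheap at scale.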

## Object Detection


## Adversarial Attacks

- [x] `FGSM` - [Explaining and Harnessing Adversarial Examples](https://arxiv.org/abs/1412.6572), ICLR, 2014
- [x] `PGD` - [Towards Deep Learning Models Resistant to Adversarial Attacks](https://arxiv.org/abs/1706.06083), ICLR, 2017
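FGSM perturbs an input by a single step of size epsilon in the direction of the sign of the loss gradient with respect to that input (PGD iterates this step with projection). A self-contained NumPy sketch on a toy logistic-regression loss, with the gradient worked out by hand:

```python
import numpy as np

def fgsm(x, grad_x, eps=0.1):
    """FGSM: x_adv = x + eps * sign(dL/dx)."""
    return x + eps * np.sign(grad_x)

# Toy model: logistic loss L = -log sigmoid(y * w.x).
# Its input gradient is dL/dx = -y * sigmoid(-y * w.x) * w.
w = np.array([1.0, -2.0])
x = np.array([0.5, 0.5])
y = 1.0
sigma = 1.0 / (1.0 + np.exp(y * (w @ x)))   # sigmoid(-y * w.x) > 0
grad_x = -y * sigma * w
x_adv = fgsm(x, grad_x, eps=0.1)
print(x_adv)  # [0.4, 0.6]: each coordinate moved by +/- eps
```

Because only the sign of the gradient is used, the perturbation has a fixed L-infinity norm of epsilon regardless of the gradient's magnitude.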
