PyTorch Feature Tutorials Refactor (sony#1225)
Refactored the feature notebooks for consistency and standardized function usage across them. Updated the model examples to use pretrained torchvision models with the ImageNet validation set as the representative dataset (except for the pruning notebook).
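For orientation, the sketch below illustrates the common flow the refactored notebooks converge on: load a pretrained torchvision model, build a representative dataset from the ImageNet validation set, and run post-training quantization with MCT. This is a minimal sketch, assuming the `mct.ptq.pytorch_post_training_quantization` entry point; the ImageNet path, batch size, and iteration count are illustrative assumptions, not values taken from the notebooks.

```python
# Minimal sketch of the shared notebook flow: pretrained torchvision model +
# ImageNet-validation representative dataset + MCT post-training quantization.
# IMAGENET_VAL_DIR, the batch size, and n_iter are illustrative assumptions.
import torchvision
from torch.utils.data import DataLoader
import model_compression_toolkit as mct

# Pretrained model from torchvision.
weights = torchvision.models.MobileNet_V2_Weights.IMAGENET1K_V2
model = torchvision.models.mobilenet_v2(weights=weights)

# ImageNet validation split, preprocessed with the weights' own transforms.
IMAGENET_VAL_DIR = "/path/to/imagenet"  # placeholder path (assumption)
val_set = torchvision.datasets.ImageNet(root=IMAGENET_VAL_DIR, split="val",
                                        transform=weights.transforms())
loader = DataLoader(val_set, batch_size=32, shuffle=True)

n_iter = 10  # number of calibration batches (illustrative)

def representative_data_gen():
    # Yield a list of input batches per call, as MCT's PTQ API expects.
    for _, (images, _) in zip(range(n_iter), loader):
        yield [images]

# Post-training quantization with default settings.
quantized_model, quantization_info = mct.ptq.pytorch_post_training_quantization(
    model, representative_data_gen)
```

The pruning notebook is the noted exception and keeps its own dataset setup.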

---------

Co-authored-by: liord <[email protected]>
lior-dikstein and liord authored Sep 26, 2024
1 parent c4a0899 commit 148bbe0
Showing 11 changed files with 1,786 additions and 2,470 deletions.
28 changes: 16 additions & 12 deletions tutorials/notebooks/mct_features_notebooks/README.md
@@ -72,15 +72,11 @@ These techniques are essential for further optimizing models and achieving super
<details id="pytorch-ptq">
<summary>Post-Training Quantization (PTQ)</summary>

-| Tutorial | Included Features |
-|---------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
-| [Training & Quantizing Model on MNIST](pytorch/example_pytorch_ptq_mnist.ipynb) | &#x2705; PTQ |
-| [Mixed-Precision MobileNetV2 on Cifar100](pytorch/example_pytorch_mobilenetv2_cifar100_mixed_precision.ipynb) | &#x2705; PTQ <br/> &#x2705; Mixed-Precision |
-| [SSDLite MobileNetV3 Quantization](pytorch/example_pytorch_ssdlite_mobilenetv3_object_detection.ipynb) | &#x2705; PTQ |
-
-</details>
-
-
+| Tutorial | Included Features |
+|-----------------------------------------------------------------------------------------------------------|---------------------------------------------|
+| [Basic Post-Training Quantization (PTQ)](pytorch/example_pytorch_post_training_quantization.ipynb) | &#x2705; PTQ |
+| [Mixed-Precision Post-Training Quantization](pytorch/example_pytorch_mixed_precision_ptq.ipynb) | &#x2705; PTQ <br/> &#x2705; Mixed-Precision |
+| [Advanced Gradient-Based Post-Training Quantization (GPTQ)](pytorch/example_pytorch_mobilenet_gptq.ipynb) | &#x2705; GPTQ |

</details>

@@ -97,9 +93,9 @@ These techniques are essential for further optimizing models and achieving super
<details id="pytorch-data-generation">
<summary>Data Generation</summary>

-| Tutorial | Included Features |
-|-------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|
-| [Data-Free Quantization using Data Generation](pytorch/example_pytorch_data_generation.ipynb) | &#x2705; PTQ <br/> &#x2705; Data-Free Quantization <br/> &#x2705; Data Generation |
+| Tutorial | Included Features |
+|-----------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|
+| [Zero-Shot Quantization (ZSQ) using Data Generation](pytorch/example_pytorch_data_generation.ipynb) | &#x2705; PTQ <br/> &#x2705; ZSQ <br/> &#x2705; Data-Free Quantization <br/> &#x2705; Data Generation |

</details>

@@ -112,3 +108,11 @@ These techniques are essential for further optimizing models and achieving super
| [Exporter Usage](pytorch/example_pytorch_export.ipynb) | &#x2705; Export |

</details>
+<details id="pytorch-xquant">
+<summary>Quantization Troubleshooting</summary>
+
+| Tutorial | Included Features |
+|------------------------------------------------------------------------------------------------|-------------------|
+| [Quantization Troubleshooting using the Xquant Feature](pytorch/example_pytorch_xquant.ipynb) | &#x2705; Debug |
+
+</details>