diff --git a/README.md b/README.md
index 966efd679..e69f3a3df 100644
--- a/README.md
+++ b/README.md
@@ -132,11 +132,11 @@ Various tutorials are proposed for the [built-in models](docs/built-in-models/ml
 - [Sentiment analysis with transformers](use_case_examples/sentiment_analysis_with_transformer): a gradio demo which predicts if a tweet / short message is positive, negative or neutral, with FHE of course! The [live interactive](https://huggingface.co/spaces/zama-fhe/encrypted_sentiment_analysis) demo is available on Hugging Face. This [blog post](https://huggingface.co/blog/sentiment-analysis-fhe) explains how this demo works!

-- [CIFAR10 FHE-friendly model with Brevitas](use_case_examples/cifar_brevitas_training): code for training from scratch a VGG-like FHE-compatible neural network using Brevitas, and a script to run the neural network in FHE. Execution in FHE takes ~20 minutes per image and shows an accuracy of 88.7%.
+- [CIFAR10 FHE-friendly model with Brevitas](use_case_examples/cifar/cifar_brevitas_training): code for training from scratch a VGG-like FHE-compatible neural network using Brevitas, and a script to run the neural network in FHE. Execution in FHE takes ~20 minutes per image and shows an accuracy of 88.7%.

-- [CIFAR10 / CIFAR100 FHE-friendly models with Transfer Learning approach](use_case_examples/cifar_brevitas_finetuning): series of three notebooks, that show how to convert a pre-trained FP32 VGG11 neural network into a quantized model using Brevitas. The model is fine-tuned on the CIFAR data-sets, converted for FHE execution with Concrete ML and evaluated using FHE simulation. For CIFAR10 and CIFAR100, respectively, our simulations show an accuracy of 90.2% and 68.2%.
+- [CIFAR10 / CIFAR100 FHE-friendly models with Transfer Learning approach](use_case_examples/cifar/cifar_brevitas_finetuning): series of three notebooks, that show how to convert a pre-trained FP32 VGG11 neural network into a quantized model using Brevitas. The model is fine-tuned on the CIFAR data-sets, converted for FHE execution with Concrete ML and evaluated using FHE simulation. For CIFAR10 and CIFAR100, respectively, our simulations show an accuracy of 90.2% and 68.2%.

-- [FHE neural network splitting for client/server deployment](use_case_examples/cifar_brevitas_with_model_splitting): we explain how to split a computationally-intensive neural network model in two parts. First, we execute the first part on the client side in the clear, and the output of this step is encrypted. Next, to complete the computation, the second part of the model is evaluated with FHE. This tutorial also shows the impact of FHE speed/accuracy trade-off on CIFAR10, limiting PBS to 8-bit, and thus achieving 62% accuracy.
+- [FHE neural network splitting for client/server deployment](use_case_examples/cifar/cifar_brevitas_with_model_splitting): we explain how to split a computationally-intensive neural network model in two parts. First, we execute the first part on the client side in the clear, and the output of this step is encrypted. Next, to complete the computation, the second part of the model is evaluated with FHE. This tutorial also shows the impact of FHE speed/accuracy trade-off on CIFAR10, limiting PBS to 8-bit, and thus achieving 62% accuracy.

 - [Encrypted image filtering](use_case_examples/image_filtering): finally, the live demo for our [6-min](https://6min.zama.ai) is available, in the form of a gradio application.
   We take encrypted images, and apply some filters (for example black-and-white, ridge detection, or your own filter).
diff --git a/docs/advanced-topics/advanced_features.md b/docs/advanced-topics/advanced_features.md
index 009be0a4c..cc9fccbe7 100644
--- a/docs/advanced-topics/advanced_features.md
+++ b/docs/advanced-topics/advanced_features.md
@@ -187,7 +187,7 @@ In practice, the process looks like this:
 1. Update P = P - 1
 1. repeat steps 2 and 3 until the accuracy loss is above a certain, acceptable threshold.

-An example of such implementation is available in [evaluate_torch_cml.py](../../use_case_examples/cifar_brevitas_training/evaluate_one_example_fhe.py) and [CifarInFheWithSmallerAccumulators.ipynb](../../use_case_examples/cifar_brevitas_finetuning/CifarInFheWithSmallerAccumulators.ipynb)
+An example of such implementation is available in [evaluate_torch_cml.py](../../use_case_examples/cifar/cifar_brevitas_training/evaluate_one_example_fhe.py) and [CifarInFheWithSmallerAccumulators.ipynb](../../use_case_examples/cifar/cifar_brevitas_finetuning/CifarInFheWithSmallerAccumulators.ipynb)

 ## Seeing compilation information
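Note: the precision-reduction loop described in the `advanced_features.md` hunk above maps naturally onto the `rounding_threshold_bits` compilation option. The following is a minimal sketch of that search, not the code from the linked `evaluate_one_example_fhe.py` or notebook; it assumes a `torch_model`, calibration inputs `x_calib`, and a labeled test set `x_test`/`y_test` are already defined, and argument names may vary between Concrete ML versions.

```python
# Sketch of the process above: decrease the rounding precision P until the
# simulated accuracy drops by more than an acceptable threshold.
import numpy as np

from concrete.ml.torch.compile import compile_torch_model

ACCEPTABLE_LOSS = 0.01  # illustrative: tolerate up to 1 point of accuracy loss

baseline_accuracy = None
best_p = None

for p in range(16, 1, -1):  # P = 16, 15, ..., 2
    quantized_module = compile_torch_model(
        torch_model,
        x_calib,
        rounding_threshold_bits=p,
    )
    # FHE simulation evaluates the compiled circuit quickly, without keygen
    y_proba = quantized_module.forward(x_test, fhe="simulate")
    accuracy = float(np.mean(y_proba.argmax(axis=1) == y_test))

    if baseline_accuracy is None:
        baseline_accuracy = accuracy
    if baseline_accuracy - accuracy > ACCEPTABLE_LOSS:
        best_p = p + 1  # the last P that stayed within the accuracy budget
        break
```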
diff --git a/docs/deep-learning/fhe_friendly_models.md b/docs/deep-learning/fhe_friendly_models.md
index acc925c69..1c7cd5929 100644
--- a/docs/deep-learning/fhe_friendly_models.md
+++ b/docs/deep-learning/fhe_friendly_models.md
@@ -8,7 +8,7 @@ Regarding FHE-friendly neural networks, QAT is the best way to reach optimal acc
 Concrete ML uses the third-party library [Brevitas](https://github.com/Xilinx/brevitas) to perform QAT for PyTorch NNs, but options exist for other frameworks such as Keras/Tensorflow.

-Several [demos and tutorials](../getting-started/showcase.md) that use Brevitas are available in the Concrete ML library, such as the [CIFAR classification tutorial](../../use_case_examples/cifar_brevitas_finetuning/CifarQuantizationAwareTraining.ipynb).
+Several [demos and tutorials](../getting-started/showcase.md) that use Brevitas are available in the Concrete ML library, such as the [CIFAR classification tutorial](../../use_case_examples/cifar/cifar_brevitas_finetuning/CifarQuantizationAwareTraining.ipynb).

 This guide is based on a [notebook tutorial](../advanced_examples/QuantizationAwareTraining.ipynb), from which some code blocks are documented.
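For readers following the Brevitas links updated above, a quantization-aware PyTorch module built with `brevitas.nn` layers looks roughly like this. This is a toy sketch with illustrative bit widths, not the architecture from the CIFAR tutorials:

```python
import torch.nn as nn
import brevitas.nn as qnn


class TinyQATNet(nn.Module):
    """Toy QAT network: inputs, weights and activations use low bit widths."""

    def __init__(self, n_classes: int = 10, bits: int = 3):
        super().__init__()
        # Quantize the input before the first layer
        self.quant_in = qnn.QuantIdentity(bit_width=bits, return_quant_tensor=True)
        self.conv = qnn.QuantConv2d(3, 8, kernel_size=3, weight_bit_width=bits)
        # QuantReLU learns an activation quantizer during training
        self.relu = qnn.QuantReLU(bit_width=bits)
        self.fc = qnn.QuantLinear(8 * 30 * 30, n_classes, bias=True, weight_bit_width=bits)

    def forward(self, x):
        # Expects CIFAR-shaped inputs (3x32x32); a 3x3 conv without padding gives 30x30
        x = self.relu(self.conv(self.quant_in(x)))
        return self.fc(x.flatten(1))
```

After standard PyTorch training, such a network can be compiled for FHE with `compile_brevitas_qat_model` from `concrete.ml.torch.compile`.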
diff --git a/docs/getting-started/showcase.md b/docs/getting-started/showcase.md
index 91402a547..955ede0d7 100644
--- a/docs/getting-started/showcase.md
+++ b/docs/getting-started/showcase.md
@@ -49,7 +49,7 @@ Simpler tutorials that discuss only model usage and compilation are also availab
       nn.png
-      use_case_examples/cifar_brevitas_finetuning
+      use_case_examples/cifar/cifar_brevitas_finetuning
@@ -61,7 +61,7 @@ Simpler tutorials that discuss only model usage and compilation are also availab
       client-server-1.png
-      use_case_examples/cifar_brevitas_with_model_splitting
+      use_case_examples/cifar/cifar_brevitas_with_model_splitting
diff --git a/script/make_utils/check_headers.py b/script/make_utils/check_headers.py
index 57177ccfe..a8edfa628 100644
--- a/script/make_utils/check_headers.py
+++ b/script/make_utils/check_headers.py
@@ -83,6 +83,8 @@ def contains_header(ast, header) -> bool:
             .replace(".", "")
             .replace("!", "")
             .replace("?", "")
+            .replace("/", "")
+            .replace("&", "")
             .lower()
         )
diff --git a/use_case_examples/cifar/README.md b/use_case_examples/cifar/README.md
index 6ef892b97..3cbc2b7ee 100644
--- a/use_case_examples/cifar/README.md
+++ b/use_case_examples/cifar/README.md
@@ -24,14 +24,14 @@ Notebooks:
 1. [Computing the accuracy of the quantized models with FHE simulation](cifar_brevitas_finetuning/CifarInFhe.ipynb).
 1. [Enhancing inference time in FHE using smaller accumulators](cifar_brevitas_finetuning/CifarInFheWithSmallerAccumulators.ipynb).

-[Results & Metrics](cifar_brevitas_finetuning/README.md#results)
+[Results & Metrics](./cifar_brevitas_finetuning/README.md#results)

 ### Training Ternary VGG on CIFAR10

 - **Description**: Train a VGG-like neural network from scratch using Brevitas on CIFAR-10 and run it in FHE. This use-case modifies the original VGG model for compatibility with Concrete ML, and explores the performance gains of rounding operations in FHE.
 - **Training & Inference**: Scripts provided to train the network and evaluate its performance. It also includes simulations in Concrete ML and insights into the performance enhancement using rounding.

-[Results & Metrics](cifar_brevitas_training/README.md#Accuracy_/and_/performance)
+[Results & Metrics](./cifar_brevitas_training/README.md#accuracy-and-performance)

 ### CIFAR-10 with a Split Clear/FHE Model
@@ -39,7 +39,7 @@ Notebooks:
 - **Model Design**: Aims at using 8-bit accumulators to speed up FHE inference. The design incorporates pruning techniques and employs 2-bit weights to meet this aim.
 - **Implementation**: Provides step-by-step guidance on how to execute the hybrid clear/FHE model, focusing on the details and decisions behind selecting the optimal `p_error` value. Special attention is given to the binary search method to balance accuracy and FHE performance.

-[Results & Metrics](cifar_brevitas_with_model_splitting/README.md#results)
+[Results & Metrics](./cifar_brevitas_with_model_splitting/README.md#results)

 ## Installation
diff --git a/use_case_examples/cifar/cifar_brevitas_with_model_splitting/README.md b/use_case_examples/cifar/cifar_brevitas_with_model_splitting/README.md
index c02b1bb32..f6a670154 100644
--- a/use_case_examples/cifar/cifar_brevitas_with_model_splitting/README.md
+++ b/use_case_examples/cifar/cifar_brevitas_with_model_splitting/README.md
@@ -53,7 +53,7 @@ Once we had this `p_error` validated we re-run the FHE inference using this new
 - Time to keygen: 30 seconds
 - Time to infer: 1738 seconds

-We see a 20x improvement with a simple change in the `p_error` parameter, for more details on how to handle `p_error` please refer to the [documentation](../../docs/advanced-topics/advanced_features.md#approximate-computations).
+We see a 20x improvement with a simple change in the `p_error` parameter, for more details on how to handle `p_error` please refer to the [documentation](../../../docs/advanced-topics/advanced_features.md#approximate-computations).

 ## Results
diff --git a/use_case_examples/deployment/cifar_8_bit/README.md b/use_case_examples/deployment/cifar_8_bit/README.md
index 5de0328a8..f7395efae 100644
--- a/use_case_examples/deployment/cifar_8_bit/README.md
+++ b/use_case_examples/deployment/cifar_8_bit/README.md
@@ -1,7 +1,7 @@
 # Deployment

 In this folder we show how to deploy a Concrete ML model that classifies images from CIFAR-10, either through Docker or Amazon Web Services.
-We use the model showcased in [cifar split model](../../cifar_brevitas_with_model_splitting/README.md).
+We use the model showcased in [cifar split model](../../cifar/cifar_brevitas_with_model_splitting/README.md).

 ## Get started
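As a closing pointer for the `p_error` discussion in the model-splitting README above: in Concrete ML, `p_error` is passed at compilation time, and a candidate value can be checked cheaply with FHE simulation before paying for key generation and encrypted inference. A minimal sketch, with an illustrative value and assuming `model` (a Brevitas QAT network) and calibration data `x_calib` are defined elsewhere:

```python
from concrete.ml.torch.compile import compile_brevitas_qat_model

# A larger p_error relaxes each programmable bootstrapping (PBS), trading a
# small per-PBS error probability for a significant latency improvement.
quantized_module = compile_brevitas_qat_model(
    model,
    x_calib,
    p_error=0.05,  # illustrative; the README's value was found by binary search
)

# Validate accuracy under this p_error in simulation before running real FHE
y_sim = quantized_module.forward(x_calib, fhe="simulate")
```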