diff --git a/README.md b/README.md
index 966efd679..e69f3a3df 100644
--- a/README.md
+++ b/README.md
@@ -132,11 +132,11 @@ Various tutorials are proposed for the [built-in models](docs/built-in-models/ml
 - [Sentiment analysis with transformers](use_case_examples/sentiment_analysis_with_transformer): a gradio demo which predicts if a tweet / short message is positive, negative or neutral, with FHE of course! The [live interactive](https://huggingface.co/spaces/zama-fhe/encrypted_sentiment_analysis) demo is available on Hugging Face. This [blog post](https://huggingface.co/blog/sentiment-analysis-fhe) explains how this demo works!

-- [CIFAR10 FHE-friendly model with Brevitas](use_case_examples/cifar_brevitas_training): code for training from scratch a VGG-like FHE-compatible neural network using Brevitas, and a script to run the neural network in FHE. Execution in FHE takes ~20 minutes per image and shows an accuracy of 88.7%.
+- [CIFAR10 FHE-friendly model with Brevitas](use_case_examples/cifar/cifar_brevitas_training): code for training from scratch a VGG-like FHE-compatible neural network using Brevitas, and a script to run the neural network in FHE. Execution in FHE takes ~20 minutes per image and shows an accuracy of 88.7%.

-- [CIFAR10 / CIFAR100 FHE-friendly models with Transfer Learning approach](use_case_examples/cifar_brevitas_finetuning): series of three notebooks, that show how to convert a pre-trained FP32 VGG11 neural network into a quantized model using Brevitas. The model is fine-tuned on the CIFAR data-sets, converted for FHE execution with Concrete ML and evaluated using FHE simulation. For CIFAR10 and CIFAR100, respectively, our simulations show an accuracy of 90.2% and 68.2%.
+- [CIFAR10 / CIFAR100 FHE-friendly models with Transfer Learning approach](use_case_examples/cifar/cifar_brevitas_finetuning): series of three notebooks, that show how to convert a pre-trained FP32 VGG11 neural network into a quantized model using Brevitas. The model is fine-tuned on the CIFAR data-sets, converted for FHE execution with Concrete ML and evaluated using FHE simulation. For CIFAR10 and CIFAR100, respectively, our simulations show an accuracy of 90.2% and 68.2%.

-- [FHE neural network splitting for client/server deployment](use_case_examples/cifar_brevitas_with_model_splitting): we explain how to split a computationally-intensive neural network model in two parts. First, we execute the first part on the client side in the clear, and the output of this step is encrypted. Next, to complete the computation, the second part of the model is evaluated with FHE. This tutorial also shows the impact of FHE speed/accuracy trade-off on CIFAR10, limiting PBS to 8-bit, and thus achieving 62% accuracy.
+- [FHE neural network splitting for client/server deployment](use_case_examples/cifar/cifar_brevitas_with_model_splitting): we explain how to split a computationally-intensive neural network model in two parts. First, we execute the first part on the client side in the clear, and the output of this step is encrypted. Next, to complete the computation, the second part of the model is evaluated with FHE. This tutorial also shows the impact of FHE speed/accuracy trade-off on CIFAR10, limiting PBS to 8-bit, and thus achieving 62% accuracy.

 - [Encrypted image filtering](use_case_examples/image_filtering): finally, the live demo for our [6-min](https://6min.zama.ai) is available, in the form of a gradio application.
   We take encrypted images, and apply some filters (for example black-and-white, ridge detection, or your own filter).
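The model-splitting tutorial renamed above boils down to a two-stage pipeline: the client runs the first layers in the clear, encrypts the intermediate features, and the server evaluates the remaining layers under FHE. A minimal sketch of that flow follows — the toy layers, the `n_bits` value, and the Concrete ML 1.x-style calls (`compile_torch_model`, `fhe="simulate"`) are illustrative assumptions, not the tutorial's actual code, which uses a larger VGG-like CIFAR10 network.

```python
import torch
import torch.nn as nn

# Assumed Concrete ML 1.x import; the tutorial's own scripts may differ.
from concrete.ml.torch.compile import compile_torch_model


class ClientPart(nn.Module):
    """First part of the network, executed on the client in the clear."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.conv(x))


class ServerPart(nn.Module):
    """Second part of the network, evaluated on encrypted features."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8 * 30 * 30, 10)

    def forward(self, x):
        return self.fc(x.flatten(1))


client_part, server_part = ClientPart(), ServerPart()

# 1. The client computes the first part in the clear.
calibration_images = torch.randn(100, 3, 32, 32)
clear_features = client_part(calibration_images).detach()

# 2. The second part is quantized and compiled for FHE, calibrated on the
#    intermediate features it will receive (n_bits=8 is an arbitrary choice).
quantized_server = compile_torch_model(server_part, clear_features, n_bits=8)

# 3. In deployment, the features would be encrypted and sent to the server;
#    here the FHE evaluation of the second part is only simulated.
logits = quantized_server.forward(clear_features[:1].numpy(), fhe="simulate")
```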
diff --git a/docs/advanced-topics/advanced_features.md b/docs/advanced-topics/advanced_features.md
index 009be0a4c..cc9fccbe7 100644
--- a/docs/advanced-topics/advanced_features.md
+++ b/docs/advanced-topics/advanced_features.md
@@ -187,7 +187,7 @@ In practice, the process looks like this:
 1. Update P = P - 1
 1. repeat steps 2 and 3 until the accuracy loss is above a certain, acceptable threshold.

-An example of such implementation is available in [evaluate_torch_cml.py](../../use_case_examples/cifar_brevitas_training/evaluate_one_example_fhe.py) and [CifarInFheWithSmallerAccumulators.ipynb](../../use_case_examples/cifar_brevitas_finetuning/CifarInFheWithSmallerAccumulators.ipynb)
+An example of such implementation is available in [evaluate_torch_cml.py](../../use_case_examples/cifar/cifar_brevitas_training/evaluate_one_example_fhe.py) and [CifarInFheWithSmallerAccumulators.ipynb](../../use_case_examples/cifar/cifar_brevitas_finetuning/CifarInFheWithSmallerAccumulators.ipynb)

 ## Seeing compilation information

diff --git a/docs/deep-learning/fhe_friendly_models.md b/docs/deep-learning/fhe_friendly_models.md
index acc925c69..1c7cd5929 100644
--- a/docs/deep-learning/fhe_friendly_models.md
+++ b/docs/deep-learning/fhe_friendly_models.md
@@ -8,7 +8,7 @@ Regarding FHE-friendly neural networks, QAT is the best way to reach optimal acc
 Concrete ML uses the third-party library [Brevitas](https://github.com/Xilinx/brevitas) to perform QAT for PyTorch NNs, but options exist for other frameworks such as Keras/Tensorflow.

-Several [demos and tutorials](../getting-started/showcase.md) that use Brevitas are available in the Concrete ML library, such as the [CIFAR classification tutorial](../../use_case_examples/cifar_brevitas_finetuning/CifarQuantizationAwareTraining.ipynb).
+Several [demos and tutorials](../getting-started/showcase.md) that use Brevitas are available in the Concrete ML library, such as the [CIFAR classification tutorial](../../use_case_examples/cifar/cifar_brevitas_finetuning/CifarQuantizationAwareTraining.ipynb).

 This guide is based on a [notebook tutorial](../advanced_examples/QuantizationAwareTraining.ipynb), from which some code blocks are documented.

diff --git a/docs/getting-started/showcase.md b/docs/getting-started/showcase.md
index 91402a547..955ede0d7 100644
--- a/docs/getting-started/showcase.md
+++ b/docs/getting-started/showcase.md
@@ -49,7 +49,7 @@ Simpler tutorials that discuss only model usage and compilation are also availab
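The advanced_features.md hunk above documents a simple search: lower the precision P one bit at a time and stop once the simulated accuracy loss exceeds an acceptable threshold. A sketch of that loop follows, assuming the `rounding_threshold_bits` compilation option plays the role of P and that FHE simulation is used for evaluation; the model, data, and the 1% threshold are placeholders, not the linked scripts' actual values.

```python
import numpy as np

# Assumed Concrete ML import and keyword arguments; hedged, not verbatim
# from evaluate_one_example_fhe.py.
from concrete.ml.torch.compile import compile_torch_model


def accuracy_at_precision(model, x_calib, x_test, y_test, p_bits):
    """Compile with rounding precision P = p_bits and score via simulation."""
    quantized_module = compile_torch_model(
        model, x_calib, n_bits=4, rounding_threshold_bits=p_bits
    )
    y_pred = quantized_module.forward(x_test, fhe="simulate").argmax(axis=1)
    return float(np.mean(y_pred == y_test))


def search_smallest_precision(model, x_calib, x_test, y_test, max_loss=0.01):
    p_bits = 8  # start from the largest precision considered
    baseline = accuracy_at_precision(model, x_calib, x_test, y_test, p_bits)
    while p_bits > 2:
        # Try P = P - 1 and re-evaluate (each iteration recompiles, so this
        # loop is slow but only uses simulation, never real FHE execution).
        acc = accuracy_at_precision(model, x_calib, x_test, y_test, p_bits - 1)
        if baseline - acc > max_loss:
            break  # accuracy loss above the acceptable threshold: stop
        p_bits -= 1
    return p_bits
```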
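On the fhe_friendly_models.md hunk: the Brevitas QAT approach it links to replaces standard PyTorch layers with quantized equivalents, so the trained network is FHE-compatible by construction. A minimal sketch, with illustrative bit-widths and layer sizes (the linked CIFAR tutorial fine-tunes a much larger VGG-style network):

```python
import brevitas.nn as qnn
import torch.nn as nn


class TinyQATNet(nn.Module):
    """Toy MNIST-sized QAT network; sizes and bit-widths are placeholders."""

    def __init__(self, n_bits=3):
        super().__init__()
        # Quantize the float input before the first linear layer
        self.quant_in = qnn.QuantIdentity(bit_width=n_bits, return_quant_tensor=True)
        self.fc1 = qnn.QuantLinear(28 * 28, 64, bias=True, weight_bit_width=n_bits)
        self.act1 = qnn.QuantReLU(bit_width=n_bits, return_quant_tensor=True)
        self.fc2 = qnn.QuantLinear(64, 10, bias=True, weight_bit_width=n_bits)

    def forward(self, x):
        return self.fc2(self.act1(self.fc1(self.quant_in(x))))
```

After ordinary PyTorch training, such a model would typically be compiled with Concrete ML's `compile_brevitas_qat_model`, which reads the quantization parameters directly from the Brevitas layers instead of applying post-training quantization.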