Commit f338ac3

chore: fix cifar links

RomanBredehoft authored and fd0r committed Sep 21, 2023
1 parent 4562fc9 commit f338ac3
Showing 8 changed files with 14 additions and 12 deletions.
README.md (3 additions, 3 deletions)
@@ -132,11 +132,11 @@ Various tutorials are proposed for the [built-in models](docs/built-in-models/ml

- [Sentiment analysis with transformers](use_case_examples/sentiment_analysis_with_transformer): a Gradio demo that predicts whether a tweet / short message is positive, negative or neutral, with FHE of course! The [live interactive](https://huggingface.co/spaces/zama-fhe/encrypted_sentiment_analysis) demo is available on Hugging Face. This [blog post](https://huggingface.co/blog/sentiment-analysis-fhe) explains how this demo works!

- - [CIFAR10 FHE-friendly model with Brevitas](use_case_examples/cifar_brevitas_training): code for training from scratch a VGG-like FHE-compatible neural network using Brevitas, and a script to run the neural network in FHE. Execution in FHE takes ~20 minutes per image and shows an accuracy of 88.7%.
+ - [CIFAR10 FHE-friendly model with Brevitas](use_case_examples/cifar/cifar_brevitas_training): code for training from scratch a VGG-like FHE-compatible neural network using Brevitas, and a script to run the neural network in FHE. Execution in FHE takes ~20 minutes per image and shows an accuracy of 88.7%.

- - [CIFAR10 / CIFAR100 FHE-friendly models with Transfer Learning approach](use_case_examples/cifar_brevitas_finetuning): series of three notebooks that show how to convert a pre-trained FP32 VGG11 neural network into a quantized model using Brevitas. The model is fine-tuned on the CIFAR data-sets, converted for FHE execution with Concrete ML and evaluated using FHE simulation. For CIFAR10 and CIFAR100, respectively, our simulations show an accuracy of 90.2% and 68.2%.
+ - [CIFAR10 / CIFAR100 FHE-friendly models with Transfer Learning approach](use_case_examples/cifar/cifar_brevitas_finetuning): series of three notebooks that show how to convert a pre-trained FP32 VGG11 neural network into a quantized model using Brevitas. The model is fine-tuned on the CIFAR data-sets, converted for FHE execution with Concrete ML and evaluated using FHE simulation. For CIFAR10 and CIFAR100, respectively, our simulations show an accuracy of 90.2% and 68.2%.

- - [FHE neural network splitting for client/server deployment](use_case_examples/cifar_brevitas_with_model_splitting): we explain how to split a computationally-intensive neural network model into two parts. First, we execute the first part on the client side in the clear, and the output of this step is encrypted. Next, to complete the computation, the second part of the model is evaluated with FHE. This tutorial also shows the impact of the FHE speed/accuracy trade-off on CIFAR10, limiting PBS to 8-bit, and thus achieving 62% accuracy.
+ - [FHE neural network splitting for client/server deployment](use_case_examples/cifar/cifar_brevitas_with_model_splitting): we explain how to split a computationally-intensive neural network model into two parts. First, we execute the first part on the client side in the clear, and the output of this step is encrypted. Next, to complete the computation, the second part of the model is evaluated with FHE. This tutorial also shows the impact of the FHE speed/accuracy trade-off on CIFAR10, limiting PBS to 8-bit, and thus achieving 62% accuracy.

- [Encrypted image filtering](use_case_examples/image_filtering): finally, the live demo for our [6-min](https://6min.zama.ai) is available in the form of a Gradio application. We take encrypted images and apply some filters (for example black-and-white, ridge detection, or your own filter).

docs/advanced-topics/advanced_features.md (1 addition, 1 deletion)
@@ -187,7 +187,7 @@ In practice, the process looks like this:
1. Update P = P - 1
1. Repeat steps 2 and 3 until the accuracy loss exceeds an acceptable threshold.

- An example of such an implementation is available in [evaluate_one_example_fhe.py](../../use_case_examples/cifar_brevitas_training/evaluate_one_example_fhe.py) and [CifarInFheWithSmallerAccumulators.ipynb](../../use_case_examples/cifar_brevitas_finetuning/CifarInFheWithSmallerAccumulators.ipynb)
+ An example of such an implementation is available in [evaluate_one_example_fhe.py](../../use_case_examples/cifar/cifar_brevitas_training/evaluate_one_example_fhe.py) and [CifarInFheWithSmallerAccumulators.ipynb](../../use_case_examples/cifar/cifar_brevitas_finetuning/CifarInFheWithSmallerAccumulators.ipynb)

## Seeing compilation information

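To make the iterative search in the hunk above concrete, here is an illustrative sketch of the loop; `train_and_evaluate` and `baseline_accuracy` are hypothetical helpers, not part of Concrete ML:

```python
# Hypothetical precision search: lower the bit-width P until the accuracy
# drop becomes unacceptable, then keep the last value that passed.
def find_min_bit_width(train_and_evaluate, baseline_accuracy, max_loss=0.01):
    P = 8  # starting bit-width
    while P > 1:
        accuracy = train_and_evaluate(bit_width=P - 1)
        if baseline_accuracy - accuracy > max_loss:
            break  # accuracy loss is above the acceptable threshold
        P -= 1
    return P
```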
docs/deep-learning/fhe_friendly_models.md (1 addition, 1 deletion)
@@ -8,7 +8,7 @@ Regarding FHE-friendly neural networks, QAT is the best way to reach optimal accuracy

Concrete ML uses the third-party library [Brevitas](https://github.com/Xilinx/brevitas) to perform QAT for PyTorch NNs, but options exist for other frameworks such as Keras/TensorFlow.

- Several [demos and tutorials](../getting-started/showcase.md) that use Brevitas are available in the Concrete ML library, such as the [CIFAR classification tutorial](../../use_case_examples/cifar_brevitas_finetuning/CifarQuantizationAwareTraining.ipynb).
+ Several [demos and tutorials](../getting-started/showcase.md) that use Brevitas are available in the Concrete ML library, such as the [CIFAR classification tutorial](../../use_case_examples/cifar/cifar_brevitas_finetuning/CifarQuantizationAwareTraining.ipynb).

This guide is based on a [notebook tutorial](../advanced_examples/QuantizationAwareTraining.ipynb), from which some code blocks are documented.

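For context on what such a Brevitas QAT model looks like, here is a minimal sketch (not taken from the tutorial; the layer sizes and the 3-bit width are assumptions for illustration):

```python
import torch.nn as nn
import brevitas.nn as qnn

class TinyQATNet(nn.Module):
    """Toy quantization-aware network in the style of the Brevitas tutorials."""

    def __init__(self, n_bits: int = 3):
        super().__init__()
        # Quantize inputs before the first layer so every operation is low-bit.
        self.quant_in = qnn.QuantIdentity(bit_width=n_bits, return_quant_tensor=True)
        self.fc1 = qnn.QuantLinear(32, 64, weight_bit_width=n_bits, bias=True)
        self.relu = qnn.QuantReLU(bit_width=n_bits, return_quant_tensor=True)
        self.fc2 = qnn.QuantLinear(64, 10, weight_bit_width=n_bits, bias=True)

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(self.quant_in(x))))
```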
docs/getting-started/showcase.md (2 additions, 2 deletions)
@@ -49,7 +49,7 @@ Simpler tutorials that discuss only model usage and compilation are also available
<td></td>
<!--- start -->
<td><a href="../.gitbook/assets/demo_nn_finetuning.png">nn.png</a></td>
- <td><a href="../../use_case_examples/cifar_brevitas_finetuning">use_case_examples/cifar_brevitas_finetuning</a></td>
+ <td><a href="../../use_case_examples/cifar/cifar_brevitas_finetuning">use_case_examples/cifar/cifar_brevitas_finetuning</a></td>
<!--- end -->
</tr>
<tr>
@@ -61,7 +61,7 @@ Simpler tutorials that discuss only model usage and compilation are also available
<td></td>
<!--- start -->
- <td><a href="../../use_case_examples/cifar_brevitas_with_model_splitting">use_case_examples/cifar_brevitas_with_model_splitting</a></td>
+ <td><a href="../../use_case_examples/cifar/cifar_brevitas_with_model_splitting">use_case_examples/cifar/cifar_brevitas_with_model_splitting</a></td>
<td><a href="../../use_case_examples/cifar/cifar_brevitas_with_model_splitting">use_case_examples/cifar/cifar_brevitas_with_model_splitting</a></td>
<!--- end -->
</tr>
<tr>
script/make_utils/check_headers.py (2 additions, 0 deletions)
@@ -83,6 +83,8 @@ def contains_header(ast, header) -> bool:
.replace(".", "")
.replace("!", "")
.replace("?", "")
.replace("/", "")
.replace("&", "")
.lower()
)

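The two added `.replace` calls extend what the context lines suggest is a header-to-anchor normalization chain. A hedged reconstruction of the idea (not the actual script) might read:

```python
def slugify(header: str) -> str:
    # Strip punctuation that GitHub drops when generating anchors,
    # now including "/" and "&".
    for char in (",", ".", "!", "?", "/", "&"):
        header = header.replace(char, "")
    # Lower-case and hyphenate, producing anchors such as
    # "#accuracy-and-performance".
    return header.lower().replace(" ", "-")
```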
use_case_examples/cifar/README.md (3 additions, 3 deletions)
@@ -24,22 +24,22 @@ Notebooks:
1. [Computing the accuracy of the quantized models with FHE simulation](cifar_brevitas_finetuning/CifarInFhe.ipynb).
1. [Enhancing inference time in FHE using smaller accumulators](cifar_brevitas_finetuning/CifarInFheWithSmallerAccumulators.ipynb).

[Results & Metrics](cifar_brevitas_finetuning/README.md#results)
[Results & Metrics](./cifar_brevitas_finetuning/README.md#results)

### Training Ternary VGG on CIFAR10

- **Description**: Train a VGG-like neural network from scratch using Brevitas on CIFAR-10 and run it in FHE. This use-case modifies the original VGG model for compatibility with Concrete ML, and explores the performance gains of rounding operations in FHE.
- **Training & Inference**: Scripts provided to train the network and evaluate its performance. It also includes simulations in Concrete ML and insights into the performance enhancement using rounding.

[Results & Metrics](cifar_brevitas_training/README.md#Accuracy_/and_/performance)
[Results & Metrics](./cifar_brevitas_training/README.md#accuracy-and-performance)

### CIFAR-10 with a Split Clear/FHE Model

- **Description**: This method divides the model into two segments: one that operates in plaintext (clear) and the other in Fully Homomorphic Encryption (FHE). This division allows for greater precision in the input layer while taking advantage of FHE's privacy-preserving capabilities in the subsequent layers.
- **Model Design**: Aims at using 8-bit accumulators to speed up FHE inference. The design incorporates pruning techniques and employs 2-bit weights to meet this aim.
- **Implementation**: Provides step-by-step guidance on how to execute the hybrid clear/FHE model, focusing on the details and decisions behind selecting the optimal `p_error` value. Special attention is given to the binary search method to balance accuracy and FHE performance.

[Results & Metrics](cifar_brevitas_with_model_splitting/README.md#results)
[Results & Metrics](./cifar_brevitas_with_model_splitting/README.md#results)

## Installation

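To illustrate the clear/FHE split described in this README, here is a hypothetical sketch (the layer shapes are invented for the example):

```python
import torch
import torch.nn as nn

class ClearFeatureExtractor(nn.Module):
    """First segment: runs client-side in plaintext, keeping full input precision."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))

# Only these features are encrypted and sent onward; the second segment
# (2-bit weights, 8-bit accumulators) is the part evaluated in FHE.
features = ClearFeatureExtractor()(torch.randn(1, 3, 32, 32))
```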
@@ -53,7 +53,7 @@ Once we had this `p_error` validated we re-run the FHE inference using this new
- Time to keygen: 30 seconds
- Time to infer: 1738 seconds

- We see a 20x improvement with a simple change in the `p_error` parameter. For more details on how to handle `p_error`, please refer to the [documentation](../../docs/advanced-topics/advanced_features.md#approximate-computations).
+ We see a 20x improvement with a simple change in the `p_error` parameter. For more details on how to handle `p_error`, please refer to the [documentation](../../../docs/advanced-topics/advanced_features.md#approximate-computations).

## Results

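For reference, a custom `p_error` is set at compilation time. A minimal sketch with a stand-in model (the real use case compiles the VGG-style split network, and the exact `p_error` value here is only illustrative):

```python
import torch
import brevitas.nn as qnn
from concrete.ml.torch.compile import compile_brevitas_qat_model

# Stand-in QAT model and calibration data.
model = torch.nn.Sequential(
    qnn.QuantIdentity(bit_width=3, return_quant_tensor=True),
    qnn.QuantLinear(10, 2, weight_bit_width=3, bias=True),
)
inputset = torch.randn(100, 10)

# A larger p_error trades a small chance of PBS error for faster inference,
# which is where the 20x speedup discussed above comes from.
quantized_module = compile_brevitas_qat_model(model, inputset, p_error=0.05)
```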
use_case_examples/deployment/cifar_8_bit/README.md (1 addition, 1 deletion)
@@ -1,7 +1,7 @@
# Deployment

In this folder we show how to deploy a Concrete ML model that classifies images from CIFAR-10, either through Docker or Amazon Web Services.
- We use the model showcased in [cifar split model](../../cifar_brevitas_with_model_splitting/README.md).
+ We use the model showcased in [cifar split model](../../cifar/cifar_brevitas_with_model_splitting/README.md).

## Get started

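As background on such a deployment, the Concrete ML client/server flow looks roughly like the sketch below; the artifact and key directories are placeholders, and the names follow the `concrete.ml.deployment` module:

```python
import numpy as np
from concrete.ml.deployment import FHEModelClient, FHEModelServer

# "deployment_artifacts" stands in for the directory saved after compilation.
server = FHEModelServer(path_dir="deployment_artifacts")
client = FHEModelClient(path_dir="deployment_artifacts", key_dir="keys")
client.generate_private_and_evaluation_keys()
evaluation_keys = client.get_serialized_evaluation_keys()

# Encrypt locally, run on ciphertexts remotely, decrypt locally.
clear_input = np.random.rand(1, 3, 32, 32).astype(np.float32)
encrypted_input = client.quantize_encrypt_serialize(clear_input)
encrypted_output = server.run(encrypted_input, evaluation_keys)
prediction = client.deserialize_decrypt_dequantize(encrypted_output)
```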
