chore: remove cifar with splitting
andrei-stoian-zama authored Mar 15, 2024
1 parent 85cb962 commit 43587fa
Showing 29 changed files with 37 additions and 1,659 deletions.
2 changes: 0 additions & 2 deletions .github/workflows/refresh-one-notebook.yaml
@@ -6,7 +6,6 @@ on:
# --- refresh_notebooks_list.py: refresh list of notebooks currently available [START] ---
# --- do not edit, auto generated part by `make refresh_notebooks_list` ---
description: "Notebook file name only in: \n
-- Cifar10 \n
- CifarInFhe \n
- CifarInFheWithSmallerAccumulators \n
- CifarQuantizationAwareTraining \n
@@ -51,7 +50,6 @@ env:
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
# --- refresh_notebooks_list.py: refresh list of notebook paths currently available [START] ---
# --- do not edit, auto generated part by `make refresh_notebooks_list` ---
-Cifar10: "use_case_examples/cifar/cifar_brevitas_with_model_splitting/Cifar10.ipynb"
CifarInFhe: "use_case_examples/cifar/cifar_brevitas_finetuning/CifarInFhe.ipynb"
CifarInFheWithSmallerAccumulators: "use_case_examples/cifar/cifar_brevitas_finetuning/CifarInFheWithSmallerAccumulators.ipynb"
CifarQuantizationAwareTraining: "use_case_examples/cifar/cifar_brevitas_finetuning/CifarQuantizationAwareTraining.ipynb"
1 change: 0 additions & 1 deletion README.md
@@ -200,7 +200,6 @@ Concrete ML built-in models have APIs that are almost identical to their scikit-
- [Titanic](use_case_examples/titanic/KaggleTitanic.ipynb): solving the [Kaggle Titanic competition](https://www.kaggle.com/c/titanic/). Implemented with XGBoost from Concrete ML, this example comes as a companion of the [Kaggle notebook](https://www.kaggle.com/code/concretemlteam/titanic-with-privacy-preserving-machine-learning), and was the subject of a blogpost in [KDnuggets](https://www.kdnuggets.com/2022/08/machine-learning-encrypted-data.html).
- [CIFAR10 FHE-friendly model with Brevitas](use_case_examples/cifar/cifar_brevitas_training): training a VGG9 FHE-compatible neural network using Brevitas, and a script to run the neural network in FHE. Execution in FHE takes ~4 minutes per image and shows an accuracy of 88.7%.
- [CIFAR10 / CIFAR100 FHE-friendly models with Transfer Learning approach](use_case_examples/cifar/cifar_brevitas_finetuning): series of three notebooks, that convert a pre-trained FP32 VGG11 neural network into a quantized model using Brevitas. The model is fine-tuned on the CIFAR data-sets, converted for FHE execution with Concrete ML and evaluated using FHE simulation. For CIFAR10 and CIFAR100, respectively, our simulations show an accuracy of 90.2% and 68.2%.
-- [FHE neural network splitting for client/server deployment](use_case_examples/cifar/cifar_brevitas_with_model_splitting): explaining how to split a computationally-intensive neural network model in two parts. First, we execute the first part on the client side in the clear, and the output of this step is encrypted. Next, to complete the computation, the second part of the model is evaluated with FHE. This tutorial also shows the impact of FHE speed/accuracy trade-off on CIFAR10, limiting PBS to 8-bit, and thus achieving 62% accuracy.

*If you have built awesome projects using Concrete ML, please let us know and we will be happy to showcase them here!*
<br></br>
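For context on the bullet removed above: the split ran the first part of the network client-side in the clear and evaluated the remainder under FHE. Below is a minimal PyTorch sketch of that idea only, not the deleted example's code; the layer sizes, the `clear_part`/`fhe_part` names, and the plaintext stand-in for the FHE half are all illustrative.

```python
import torch
import torch.nn as nn

# Toy stand-in for the removed VGG-style CIFAR model (sizes hypothetical).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),  # part 1: client side, in the clear
    nn.ReLU(),
    nn.Conv2d(8, 8, kernel_size=3, padding=1),  # part 2: evaluated under FHE
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)

clear_part = model[:2]  # executed by the client on the plaintext image
fhe_part = model[2:]    # in the real example, compiled with Concrete ML

x = torch.randn(1, 3, 32, 32)  # CIFAR-10-shaped input
intermediate = clear_part(x)   # computed in the clear on the client
# The intermediate tensor would be quantized and encrypted before being sent
# to the server; here the second half runs in plaintext just to show shapes.
logits = fhe_part(intermediate)
print(logits.shape)  # torch.Size([1, 10])
```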
9 changes: 0 additions & 9 deletions use_case_examples/cifar/README.md
@@ -7,7 +7,6 @@ This repository provides resources and documentation on different use-cases for
1. [Use-Cases](#use-cases)
- [Fine-Tuning VGG11 CIFAR-10/100](#fine-tuning-cifar)
- [Training Ternary VGG9 on CIFAR10](#training-ternary-vgg-on-cifar10)
-- [CIFAR-10 VGG9 with one client-side layer](#cifar-10-with-a-split-model)
1. [Installation](#installation)
1. [Further Reading & Resources](#further-reading)

@@ -33,14 +32,6 @@ Notebooks:

[Results & Metrics](./cifar_brevitas_training/README.md#accuracy-and-performance)

-### CIFAR-10 with a Split Model
-
-- **Description**: This method divides the model into two segments: one that operates in plaintext (clear) and the other in Fully Homomorphic Encryption (FHE). This division allows for greater precision in the input layer while taking advantage of FHE's privacy-preserving capabilities in the subsequent layers.
-- **Model Design**: Aims at using 8-bit accumulators to speed up FHE inference. The design incorporates pruning techniques and employs 2-bit weights to meet this aim.
-- **Implementation**: Provides step-by-step guidance on how to execute the hybrid clear/FHE model, focusing on the details and decisions behind selecting the optimal `p_error` value. Special attention is given to the binary search method to balance accuracy and FHE performance.
-
-[Results & Metrics](./cifar_brevitas_with_model_splitting/README.md#results)
-
## Installation

All use-cases can be quickly set up with:
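The deleted section's model design (2-bit weights, pruning, 8-bit accumulators) corresponds to Brevitas-style quantized layers. A hedged sketch of such a layer using the public `brevitas.nn` API follows; the channel counts are placeholders, not the removed model's architecture.

```python
import torch
from brevitas.nn import QuantConv2d, QuantIdentity

# 8-bit quantized activations feeding a 2-bit-weight convolution, in the
# spirit of the deleted section's design; sizes are placeholders.
quant_act = QuantIdentity(bit_width=8, return_quant_tensor=True)
conv = QuantConv2d(3, 8, kernel_size=3, padding=1, weight_bit_width=2)

x = torch.randn(1, 3, 32, 32)
y = conv(quant_act(x))
```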
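Similarly, the binary search over `p_error` that the deleted section mentions can be pictured as below. This is a sketch of the general idea only: `simulated_accuracy` is a hypothetical stand-in for an FHE-simulation run, and the bounds, tolerance, and toy accuracy curve are illustrative.

```python
def search_p_error(simulated_accuracy, target_acc, tol=0.01,
                   low=1e-9, high=0.5, steps=12):
    """Return a large p_error (faster FHE) whose simulated accuracy
    stays within `tol` of the reference accuracy."""
    best = low
    for _ in range(steps):
        mid = (low + high) / 2
        if target_acc - simulated_accuracy(mid) <= tol:
            best = mid  # accuracy still acceptable: try a larger p_error
            low = mid
        else:
            high = mid  # too much accuracy loss: reduce p_error
    return best

# Toy accuracy curve standing in for an actual FHE simulation:
best_p = search_p_error(lambda p: 0.62 - 0.2 * p, target_acc=0.62)
```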

This file was deleted.

