chore: review
jfrery committed Sep 14, 2023
1 parent 6461510 commit 5ddfe66
Showing 1 changed file with 7 additions and 7 deletions.
14 changes: 7 additions & 7 deletions use_case_examples/cifar/README.md
@@ -4,9 +4,9 @@ This repository provides resources and documentation on different use-cases for

## Table of Contents
1. [Use-Cases](#use-cases)
-  - [Fine-Tuning CIFAR-10/100](#fine-tuning-cifar-10100)
-  - [Training Ternary VGG on CIFAR10](#training-ternary-vgg-on-cifar10)
-  - [CIFAR-10 with a Split Clear/FHE Model](#cifar-10-with-a-split-clearfhe-model)
+  - [Fine-Tuning VGG11 CIFAR-10/100](#fine-tuning-cifar-10100)
+  - [Training Ternary VGG9 on CIFAR10](#training-ternary-vgg-on-cifar10)
+  - [CIFAR-10 VGG9 with one client-side layer](#cifar-10-with-a-split-clearfhe-model)
2. [Installation](#installation)
3. [Further Reading & Resources](#further-reading--resources)

@@ -27,15 +27,15 @@ This repository provides resources and documentation on different use-cases for
### Training Ternary VGG on CIFAR10

- **Description**: Train a VGG-like neural network from scratch using Brevitas on CIFAR-10 and run it in FHE. This use-case modifies the original VGG model for compatibility with Concrete ML, and explores the performance gains of rounding operations in FHE.
-- **Training & Inference**: Scripts provided to train the network and evaluate its performance. Also includes simulations in Concrete ML and insights into the performance enhancement using rounding.
+- **Training & Inference**: Scripts provided to train the network and evaluate its performance. It also includes simulations in Concrete ML and insights into the performance enhancement using rounding.

[Results & Metrics](cifar_brevitas_training/README.md#Accuracy_/and_/performance)
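The rounding optimization mentioned in this hunk reduces the bit width of encrypted accumulators before table lookups, shrinking the tables that dominate FHE runtime. A minimal pure-Python sketch of the underlying integer rounding (illustrative only; the function name and bit widths are not Concrete ML's API):

```python
def round_to_msbs(value: int, lsbs_to_remove: int) -> int:
    """Round an integer accumulator to fewer bits by dropping low-order bits.

    Rounds half-up: add half of the dropped range before shifting, so the
    result is the nearest representable value rather than a plain truncation.
    """
    if lsbs_to_remove == 0:
        return value
    half = 1 << (lsbs_to_remove - 1)
    return (value + half) >> lsbs_to_remove

# An 11-bit accumulator rounded down to 6 bits: a lookup table applied
# afterwards needs 2**6 entries instead of 2**11, which is where the
# FHE speed-up comes from.
acc = 1234  # example 11-bit accumulator value
rounded = round_to_msbs(acc, lsbs_to_remove=5)  # -> 39 (round(1234 / 32))
```

The trade-off is a small quantization error on each accumulator, which the training-time simulation lets you measure before committing to a bit width.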

### CIFAR-10 with a Split Clear/FHE Model

-- **Description**: An approach that splits the model into two parts: one running in clear and the other in FHE. By doing this, higher precision can be achieved in the input layer while still benefiting from FHE in subsequent layers.
-- **Model Design**: Targets 8-bit accumulators for faster FHE inference. Pruning and 2-bit weights are used.
-- **Implementation**: Detailed steps on how to run the model in FHE and the trade-offs involved in choosing the appropriate p_error value.
+- **Description**: This method divides the model into two segments: one that operates in plaintext (clear) and the other in Fully Homomorphic Encryption (FHE). This division allows for greater precision in the input layer while taking advantage of FHE's privacy-preserving capabilities in the subsequent layers.
+- **Model Design**: Aims at using 8-bit accumulators to speed up FHE inference. The design incorporates pruning techniques and employs 2-bit weights to meet this aim.
+- **Implementation**: Provides step-by-step guidance on how to execute the hybrid clear/FHE model, focusing on the details and decisions behind selecting the optimal `p_error` value. Special attention is given to the binary search method to balance accuracy and FHE performance.

[Results & Metrics](cifar_brevitas_with_model_splitting/README.md#results)
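The binary search over `p_error` referenced in this hunk can be sketched in pure Python. Here `accuracy_for` is a hypothetical stand-in for an evaluation run (in practice accuracy would be measured with Concrete ML's simulation mode), and the search assumes accuracy decreases monotonically as `p_error` grows:

```python
def search_p_error(accuracy_for, min_accuracy, lo=1e-10, hi=0.9, steps=30):
    """Binary-search the largest p_error whose accuracy stays acceptable.

    A larger p_error yields faster FHE execution but lower accuracy, so we
    keep the largest value for which accuracy_for(p_error) >= min_accuracy.
    """
    best = lo
    for _ in range(steps):
        mid = (lo + hi) / 2
        if accuracy_for(mid) >= min_accuracy:
            best = mid   # still accurate enough: try a larger p_error
            lo = mid
        else:
            hi = mid     # too lossy: shrink p_error
    return best

# Toy monotone accuracy model, for illustration only: accuracy 0.9 - p_error.
# The largest acceptable p_error for min_accuracy=0.85 is then 0.05.
p = search_p_error(lambda p_err: 0.9 - p_err, min_accuracy=0.85, steps=40)
```

Each iteration halves the search interval, so a few dozen simulated evaluations pin down `p_error` far faster than a grid sweep.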

