fix: Update docs/deep-learning/torch_support.md
Co-authored-by: yuxizama <[email protected]>
andrei-stoian-zama and yuxizama authored Jun 17, 2024
1 parent feff4f6 commit c70609b
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion docs/deep-learning/torch_support.md
@@ -111,7 +111,7 @@ With QAT (the PyTorch/Brevitas models created following the example above), you

With PTQ, you need to set the `n_bits` value in the `compile_torch_model` function and must manually determine the trade-off between accuracy, FHE compatibility, and latency.

- The quantization parameters, along with the number of neurons on each layer, will determine the accumulator bit-width of the network. Larger accumulator bit-widths result in higher accuracy but slower FHE inference time.
+ The quantization parameters, along with the number of neurons on each layer, will determine the accumulator bit width of the network. Larger accumulator bit widths result in higher accuracy but slower FHE inference time.

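The relationship described above can be sketched with a back-of-the-envelope estimate: a dot product of `n` terms, each the product of a `w`-bit weight and an `a`-bit activation, needs an accumulator of roughly `w + a + ceil(log2(n))` bits. The helper below is purely illustrative (it is not a Concrete ML API, and the bound Concrete ML actually computes also accounts for signs and zero-points), but it shows why both the `n_bits` setting and the neuron count per layer drive the accumulator size.

```python
import math

def accumulator_bits(weight_bits: int, activation_bits: int, n_inputs: int) -> int:
    """Rough worst-case accumulator bit width for a dot product of n_inputs terms.

    Each weight*activation product needs up to weight_bits + activation_bits
    bits; summing n_inputs such products can add ceil(log2(n_inputs)) more.
    """
    return weight_bits + activation_bits + math.ceil(math.log2(n_inputs))

# A layer with 100 input neurons, 3-bit weights and 3-bit activations:
print(accumulator_bits(3, 3, 100))  # 13
```

Halving the number of neurons on a layer only saves about one accumulator bit, while lowering `n_bits` by one saves two (one each for weights and activations), which is why the docs point to quantization as the main lever in the accuracy/latency trade-off.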
## Running encrypted inference

