From 1d3fbd6b94efdcdbcdb8893d16900339213d20a0 Mon Sep 17 00:00:00 2001
From: Andrei Stoian
Date: Mon, 17 Jun 2024 15:25:43 +0200
Subject: [PATCH] fix: bitwidth

---
 docs/deep-learning/torch_support.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/deep-learning/torch_support.md b/docs/deep-learning/torch_support.md
index 15a195169..8c6497715 100644
--- a/docs/deep-learning/torch_support.md
+++ b/docs/deep-learning/torch_support.md
@@ -111,7 +111,7 @@ With QAT (the PyTorch/Brevitas models created following the example above), you
 
 With PTQ, you need to set the `n_bits` value in the `compile_torch_model` function and must manually determine the trade-off between accuracy, FHE compatibility, and latency.
 
-The quantization parameters, along with the number of neurons on each layer, will determine the accumulator bit width of the network. Larger accumulator bit widths result in higher accuracy but slower FHE inference time.
+The quantization parameters, along with the number of neurons on each layer, will determine the accumulator bit-width of the network. Larger accumulator bit-widths result in higher accuracy but slower FHE inference time.
 
 ## Running encrypted inference
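
For context, the PTQ usage that the patched paragraph describes looks roughly like the sketch below. It is a minimal, hedged example assuming Concrete ML's `compile_torch_model`; the toy model, the random calibration set, and the `n_bits=6` choice are purely illustrative and not part of the patch.

```python
# Minimal sketch of post-training quantization with Concrete ML's
# compile_torch_model; model, input set, and n_bits are illustrative only.
import numpy
import torch
from concrete.ml.torch.compile import compile_torch_model

# Small example network; any float PyTorch model could stand in here.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)

# Representative (calibration) inputs used to derive the quantization parameters.
inputset = numpy.random.uniform(-1, 1, size=(100, 10))

# n_bits sets the quantization bit-width; together with the number of neurons
# per layer it determines the accumulator bit-width of the compiled circuit,
# trading accuracy against FHE compatibility and latency.
quantized_module = compile_torch_model(
    model,
    inputset,
    n_bits=6,  # lower values ease FHE compatibility, higher values favor accuracy
)

# Inference would then go through the returned quantized module, e.g. (assumed API):
# y = quantized_module.forward(inputset[:1], fhe="simulate")
```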