From 2dcacd54394223aef908d5eef17077dc5fc00e8b Mon Sep 17 00:00:00 2001
From: Andrei Stoian
Date: Fri, 14 Jun 2024 16:08:54 +0200
Subject: [PATCH] fix: review changes

---
 docs/deep-learning/fhe_assistant.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/deep-learning/fhe_assistant.md b/docs/deep-learning/fhe_assistant.md
index 07932550b..6a6fe1783 100644
--- a/docs/deep-learning/fhe_assistant.md
+++ b/docs/deep-learning/fhe_assistant.md
@@ -81,7 +81,7 @@ The most common compilation errors stem from the following causes:
 
 **Error message**: `Error occurred during quantization aware training (QAT) import [...] Could not determine a unique scale for the quantization!`.
 
-**Cause**: This error is a due to missing quantization operators in the model that is imported as a quantized aware training model. See [this guide](../deep-learning/fhe_friendly_models.md) for a guide on how to use Brevitas layers. This error message is generated when not all layers take inputs that are quantized through `QuantIdentity` layers.
+**Cause**: This error is due to missing quantization operators in the model that is imported as a quantization aware training (QAT) model. See [this guide](../deep-learning/fhe_friendly_models.md) on how to use Brevitas layers. This error message is generated when not all layers take inputs that are quantized through `QuantIdentity` layers.
 
 A common example is related to the concatenation operator. Suppose two tensors `x` and `y` are produced by two layers and need to be concatenated:
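
Reviewer note, not part of the patch: a minimal sketch of the concatenation pattern the last context line introduces, assuming Brevitas (`qnn.QuantIdentity`, `qnn.QuantLinear`) and hypothetical layer names. Passing both operands through the same `QuantIdentity` instance gives them the unique, shared scale that the QAT import requires:

```python
import torch
import torch.nn as nn
import brevitas.nn as qnn


class ConcatModel(nn.Module):
    """Toy QAT model where two branches are concatenated.

    Hypothetical layer names; the key point is that `x` and `y`
    both pass through the *same* QuantIdentity instance, so the
    concatenation inputs share a unique quantization scale.
    """

    def __init__(self):
        super().__init__()
        # Quantize the network input before the first linear layers
        self.quant_input = qnn.QuantIdentity(bit_width=4, return_quant_tensor=True)
        self.fc1 = qnn.QuantLinear(8, 8, bias=True, weight_bit_width=4)
        self.fc2 = qnn.QuantLinear(8, 8, bias=True, weight_bit_width=4)
        # Shared quantizer for both concatenation operands
        self.quant_concat = qnn.QuantIdentity(bit_width=4)

    def forward(self, inp):
        inp = self.quant_input(inp)
        x = self.fc1(inp)
        y = self.fc2(inp)
        # Re-quantizing both operands with the same module gives them
        # a common scale, avoiding the "unique scale" import error
        return torch.cat([self.quant_concat(x), self.quant_concat(y)], dim=1)
```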