diff --git a/README.md b/README.md
index 9582d82..17a1beb 100644
--- a/README.md
+++ b/README.md
@@ -35,7 +35,7 @@ We extend the following inferencing tools to run Rubra models in an OpenAI-compa
 - [llama.cpp](https://github.com/rubra-ai/tools.cpp)
 - [vLLM](https://github.com/rubra-ai/vllm)
 
-**Note**: It is a known issue that Llama3 models (including 8B and 70B) are more prone to damage from quantization. We recommend serving them with either vLLM or using the fp16 quantization.
+**Note**: Llama3 models, including the 8B and 70B variants, are known to experience increased perplexity and a subsequent degradation in function-calling performance as a result of quantization. We recommend serving them with either vLLM or using the fp16 quantization.
 
 ## Benchmark
diff --git a/docs/docs/README.md b/docs/docs/README.md
index 7e66124..8843070 100644
--- a/docs/docs/README.md
+++ b/docs/docs/README.md
@@ -41,7 +41,7 @@ We extend the following inferencing tools to run Rubra models in an OpenAI-compa
 - [llama.cpp](https://github.com/rubra-ai/tools.cpp)
 - [vLLM](https://github.com/rubra-ai/vllm)
 
-**Note**: It is a known issue that Llama3 models (including 8B and 70B) are more prone to damage from quantization. We recommend serving them with either vLLM or using the fp16 quantization.
+**Note**: Llama3 models, including the 8B and 70B variants, are known to experience increased perplexity and a subsequent degradation in function-calling performance as a result of quantization. We recommend serving them with either vLLM or using the fp16 quantization.
 
 ## Contributing
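
For context on the updated note: both READMEs describe serving Rubra models through an OpenAI-compatible endpoint (e.g. the vLLM fork linked above), so the function-calling behavior the note is protecting can be exercised with the standard OpenAI client. The sketch below is illustrative only and not part of this patch; the base URL, API key, model id, and `get_weather` tool are assumptions, not values taken from the repository.

```python
# Minimal sketch: send a function-calling request to a Rubra model behind an
# OpenAI-compatible server (port, model id, and tool are placeholder assumptions).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="rubra-ai/Meta-Llama-3-8B-Instruct",  # assumed model id
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# A well-behaved (unquantized or fp16) model should emit a structured tool call here.
print(response.choices[0].message.tool_calls)
```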