From b4db57d2541cde1d6469a72c06a02ff1c0311b4a Mon Sep 17 00:00:00 2001
From: Konrad Zawora
Date: Tue, 8 Oct 2024 09:59:21 +0200
Subject: [PATCH] [1.18.0][docs] Move LoRA adapters to "Supported Features"
 (#371)

---
 README_GAUDI.md                                    | 2 +-
 docs/source/getting_started/gaudi-installation.rst | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/README_GAUDI.md b/README_GAUDI.md
index 644829210125c..04d620ff3bc60 100644
--- a/README_GAUDI.md
+++ b/README_GAUDI.md
@@ -82,12 +82,12 @@ Supported Features
   Graphs](https://docs.habana.ai/en/latest/PyTorch/Inference_on_PyTorch/Inference_Using_HPU_Graphs.html)
   for accelerating low-batch latency and throughput
 - Attention with Linear Biases (ALiBi)
+- LoRA adapters
 
 Unsupported Features
 ====================
 
 - Beam search
-- LoRA adapters
 - Quantization (AWQ, FP8 E5M2, FP8 E4M3)
 - Prefill chunking (mixed-batch inferencing)
 
diff --git a/docs/source/getting_started/gaudi-installation.rst b/docs/source/getting_started/gaudi-installation.rst
index 328f9e723ec71..3a8b745a2a2f9 100644
--- a/docs/source/getting_started/gaudi-installation.rst
+++ b/docs/source/getting_started/gaudi-installation.rst
@@ -77,12 +77,12 @@ Supported Features
 - Inference with `HPU Graphs `__
   for accelerating low-batch latency and throughput
 - Attention with Linear Biases (ALiBi)
+- LoRA adapters
 
 Unsupported Features
 ====================
 
 - Beam search
-- LoRA adapters
 - Quantization (AWQ, FP8 E5M2, FP8 E4M3)
 - Prefill chunking (mixed-batch inferencing)