updated release docker image version in readme to 2.0.6 (#242)
tthakkal authored Oct 31, 2024
1 parent 8d84ffa · commit 6ba3d1d
Showing 1 changed file with 14 additions and 14 deletions.
README.md: 28 changes (14 additions, 14 deletions)
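The change itself is mechanical: the same image tag is updated in 14 places in README.md. A one-line substitution like the sketch below reproduces that kind of bump; it is an illustration, not necessarily how this commit was actually produced.

```bash
# Bump every occurrence of the old tgi-gaudi image tag to the new release
# (GNU sed, editing README.md in place).
sed -i 's|tgi-gaudi:2.0.5|tgi-gaudi:2.0.6|g' README.md
```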
@@ -62,7 +62,7 @@ To use [🤗 text-generation-inference](https://github.com/huggingface/text-gene
 
 1. Pull the official Docker image with:
 ```bash
-docker pull ghcr.io/huggingface/tgi-gaudi:2.0.5
+docker pull ghcr.io/huggingface/tgi-gaudi:2.0.6
 ```
 > [!NOTE]
 > Alternatively, you can build the Docker image using the `Dockerfile` located in this folder with:
@@ -83,7 +83,7 @@ To use [🤗 text-generation-inference](https://github.com/huggingface/text-gene
 -e OMPI_MCA_btl_vader_single_copy_mechanism=none -e HF_TOKEN=$hf_token \
 -e ENABLE_HPU_GRAPH=true -e LIMIT_HPU_GRAPH=true -e USE_FLASH_ATTENTION=true \
 -e FLASH_ATTENTION_RECOMPUTE=true --cap-add=sys_nice --ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 --model-id $model --max-input-tokens 1024 \
+ghcr.io/huggingface/tgi-gaudi:2.0.6 --model-id $model --max-input-tokens 1024 \
 --max-total-tokens 2048
 ```
 
@@ -97,7 +97,7 @@ To use [🤗 text-generation-inference](https://github.com/huggingface/text-gene
 -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none \
 -e HF_TOKEN=$hf_token -e ENABLE_HPU_GRAPH=true -e LIMIT_HPU_GRAPH=true \
 -e USE_FLASH_ATTENTION=true -e FLASH_ATTENTION_RECOMPUTE=true --cap-add=sys_nice \
---ipc=host ghcr.io/huggingface/tgi-gaudi:2.0.5 --model-id $model --sharded true \
+--ipc=host ghcr.io/huggingface/tgi-gaudi:2.0.6 --model-id $model --sharded true \
 --num-shard 8 --max-input-tokens 1024 --max-total-tokens 2048
 ```
 3. Wait for the TGI-Gaudi server to come online. You will see something like so:
@@ -140,7 +140,7 @@ docker run -p 8080:80 \
 -e FLASH_ATTENTION_RECOMPUTE=true \
 --cap-add=sys_nice \
 --ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.0.6 \
 --model-id $model \
 --max-input-length 1024 --max-total-tokens 2048 \
 --max-batch-prefill-tokens 2048 --max-batch-total-tokens 65536 \
@@ -172,7 +172,7 @@ docker run -p 8080:80 \
 -e FLASH_ATTENTION_RECOMPUTE=true \
 --cap-add=sys_nice \
 --ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.0.6 \
 --model-id $model \
 --sharded true --num-shard 8 \
 --max-input-length 1024 --max-total-tokens 2048 \
@@ -204,7 +204,7 @@ docker run -p 8080:80 \
 -e FLASH_ATTENTION_RECOMPUTE=true \
 --cap-add=sys_nice \
 --ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.0.6 \
 --model-id $model \
 --max-input-length 1024 --max-total-tokens 2048 \
 --max-batch-prefill-tokens 2048 --max-batch-total-tokens 65536 \
@@ -236,7 +236,7 @@ docker run -p 8080:80 \
 -e FLASH_ATTENTION_RECOMPUTE=true \
 --cap-add=sys_nice \
 --ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.0.6 \
 --model-id $model \
 --sharded true --num-shard 8 \
 --max-input-length 1024 --max-total-tokens 2048 \
@@ -268,7 +268,7 @@ docker run -p 8080:80 \
 -e BATCH_BUCKET_SIZE=1 \
 --cap-add=sys_nice \
 --ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.0.6 \
 --model-id $model \
 --max-input-tokens 4096 --max-batch-prefill-tokens 16384 \
 --max-total-tokens 8192 --max-batch-total-tokens 32768
@@ -319,7 +319,7 @@ docker run -p 8080:80 \
 -e FLASH_ATTENTION_RECOMPUTE=true \
 --cap-add=sys_nice \
 --ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.0.6 \
 --model-id $model \
 --max-input-length 1024 --max-total-tokens 2048 \
 --max-batch-prefill-tokens 2048 --max-batch-total-tokens 65536 \
@@ -354,7 +354,7 @@ docker run -p 8080:80 \
 -e FLASH_ATTENTION_RECOMPUTE=true \
 --cap-add=sys_nice \
 --ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.0.6 \
 --model-id $model \
 --sharded true --num-shard 8 \
 --max-input-length 1024 --max-total-tokens 2048 \
@@ -390,7 +390,7 @@ docker run -p 8080:80 \
 -e FLASH_ATTENTION_RECOMPUTE=true \
 --cap-add=sys_nice \
 --ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.0.6 \
 --model-id $model \
 --max-input-length 1024 --max-total-tokens 2048 \
 --max-batch-prefill-tokens 2048 --max-batch-total-tokens 65536 \
@@ -425,7 +425,7 @@ docker run -p 8080:80 \
 -e FLASH_ATTENTION_RECOMPUTE=true \
 --cap-add=sys_nice \
 --ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.0.6 \
 --model-id $model \
 --sharded true --num-shard 8 \
 --max-input-length 1024 --max-total-tokens 2048 \
@@ -458,7 +458,7 @@ docker run -p 8080:80 \
 -e BATCH_BUCKET_SIZE=1 \
 --cap-add=sys_nice \
 --ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.0.6 \
 --model-id $model \
 --max-input-tokens 4096 --max-batch-prefill-tokens 16384 \
 --max-total-tokens 8192 --max-batch-total-tokens 32768
@@ -489,7 +489,7 @@ docker run -p 8080:80 \
 -e BATCH_BUCKET_SIZE=1 \
 --cap-add=sys_nice \
 --ipc=host \
-ghcr.io/huggingface/tgi-gaudi:2.0.5 \
+ghcr.io/huggingface/tgi-gaudi:2.0.6 \
 --model-id $model \
 --sharded true --num-shard 8 \
 --max-input-tokens 4096 --max-batch-prefill-tokens 16384 \
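For reviewers who want to sanity-check the new tag, pulling the image and hitting the standard TGI `/generate` endpoint is a quick smoke test. The sketch below assumes the README's `docker run` examples above, i.e. host port 8080 mapped to the container's port 80 and a model that has finished loading:

```bash
# Pull the release tag referenced throughout the updated README.
docker pull ghcr.io/huggingface/tgi-gaudi:2.0.6

# Standard TGI text-generation request against the mapped host port;
# the server expects a JSON body with "inputs" and optional "parameters".
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":32}}' \
    -H 'Content-Type: application/json'
```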
