# XTTS streaming server

## 1) Run the server

### Recommended: use a pre-built container

CUDA 12.1:
```bash
$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cuda121
```
CUDA 11.8 (for older cards):
```bash
$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
```
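The model takes a while to load after the container starts. A minimal readiness probe, sketched under two assumptions: the container maps port 8000 as in the commands above, and the server answers HTTP 200 on `/docs` (the standard FastAPI docs route; any route that responds once the model has loaded will do):

```python
import time
import urllib.error
import urllib.request

def wait_until_ready(url: str, timeout: float = 120.0, interval: float = 2.0) -> bool:
    """Poll `url` until it answers with HTTP 200, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not accepting connections yet; retry after a short pause
        time.sleep(interval)
    return False

# Example (hypothetical URL, matching the port mapping used above):
# wait_until_ready("http://localhost:8000/docs")
```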
Run with a fine-tuned model:
Make sure the model folder `/path/to/model/folder` contains the following files:

- `config.json`
- `model.pth`
- `vocab.json`
```bash
$ docker run -v /path/to/model/folder:/app/tts_models --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
```
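A folder missing one of these files is a common source of startup errors, so it can help to check the folder before launching the container. A small sketch (the `validate_model_dir` helper is illustrative, not part of the repository):

```python
from pathlib import Path

REQUIRED_FILES = ("config.json", "model.pth", "vocab.json")

def validate_model_dir(folder: str) -> list[str]:
    """Return the required XTTS files that are missing from `folder`."""
    root = Path(folder)
    return [name for name in REQUIRED_FILES if not (root / name).is_file()]

# Example: report anything missing before running `docker run -v ...`.
missing = validate_model_dir("/path/to/model/folder")
if missing:
    print(f"Model folder is incomplete, missing: {', '.join(missing)}")
```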
## Not recommended: build the container yourself

To build the Docker container yourself with PyTorch 2.1 and CUDA 11.8:
`DOCKERFILE` may be `Dockerfile`, `Dockerfile.cpu`, `Dockerfile.cuda121`, or your own custom Dockerfile.
```bash
$ cd server
$ docker build -t xtts-stream . -f DOCKERFILE
$ docker run --gpus all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 xtts-stream
```
Setting the `COQUI_TOS_AGREED` environment variable to `1` indicates that you have read and agreed to the terms of the [CPML license](https://coqui.ai/cpml). (Fine-tuned XTTS models are also covered by the [CPML license](https://coqui.ai/cpml).)
## 2) Testing the running server
Once your Docker container is running, you can test that it is working properly by running the following commands from a fresh terminal.

### Clone `xtts-streaming-server`

```bash
$ git clone git@github.com:coqui-ai/xtts-streaming-server.git
```
### Using the Gradio demo

```bash
$ cd xtts-streaming-server
$ python -m pip install -r test/requirements.txt
$ python demo.py
```
### Using the test script

```bash
$ cd xtts-streaming-server/test
$ python -m pip install -r requirements.txt
$ python test_streaming.py
```
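The test script above exercises the server's streaming endpoint. For reference, here is a rough client sketch in stdlib Python. The `/tts_stream` path and the payload shape are assumptions, not guaranteed by this README; `test/test_streaming.py` in the repository is the authoritative client for the request format your server version expects:

```python
import io
import json
import urllib.request
import wave

def pcm16_to_wav(pcm: bytes, sample_rate: int = 24000, channels: int = 1) -> bytes:
    """Wrap raw 16-bit PCM in a WAV container using the stdlib `wave` module."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(channels)
        w.setsampwidth(2)  # 16-bit samples
        w.setframerate(sample_rate)
        w.writeframes(pcm)
    return buf.getvalue()

def stream_tts(base_url: str, payload: dict, chunk_size: int = 4096) -> bytes:
    """POST `payload` to the streaming endpoint and collect the audio chunks.

    The endpoint path and JSON fields are assumptions; see test/test_streaming.py
    for the real request format.
    """
    req = urllib.request.Request(
        f"{base_url}/tts_stream",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    audio = bytearray()
    with urllib.request.urlopen(req) as resp:
        while chunk := resp.read(chunk_size):
            # A real player would feed each chunk to the audio device as it arrives.
            audio.extend(chunk)
    return bytes(audio)
```

With a running server you would pass `http://localhost:8000` as `base_url`, then write the collected bytes to a `.wav` file, wrapping them with `pcm16_to_wav` if the server returned raw PCM rather than a ready-made WAV stream.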