Hi all,

I have this repo installed and it works to generate voice audio (CUDA 11.8). When I tried to run the server, there were additional dependencies, which I resolved by installing them. After that I could run the server and it was listening on port 5000 as intended. But when I then ran server_client.py, I got the errors below.
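Side note: the listener itself seems fine. A minimal reachability check like the one below (a hypothetical snippet, not part of the repo; localhost and port 5000 are assumed from the setup above) is a quick way of confirming the port is open before blaming the client.

import socket

# Hypothetical reachability check (localhost and port 5000 assumed): only confirms
# that something is accepting TCP connections where the server says it is listening.
with socket.create_connection(("127.0.0.1", 5000), timeout=5) as sock:
    print("connected to", sock.getpeername())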
With transformers 4.45.1:
Client disconnected.
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/home/user/miniconda3/envs/tortoise/lib/python3.9/threading.py", line 980, in _bootstrap_inner
    self.run()
  File "/home/user/miniconda3/envs/tortoise/lib/python3.9/threading.py", line 917, in run
    self._target(*self._args, **self._kwargs)
  File "/home/user/Documents/tortoise-tts/tortoise/socket_server.py", line 58, in handle_client
    for audio_chunk in audio_stream:
  File "/home/user/Documents/tortoise-tts/tortoise/socket_server.py", line 21, in generate_audio_stream
    for audio_chunk in stream:
  File "/home/user/miniconda3/envs/tortoise/lib/python3.9/site-packages/tortoise_tts-3.0.0-py3.9.egg/tortoise/api_fast.py", line 380, in tts_stream
    gpt_generator = self.autoregressive.get_generator(
  File "/home/user/miniconda3/envs/tortoise/lib/python3.9/site-packages/tortoise_tts-3.0.0-py3.9.egg/tortoise/models/autoregressive.py", line 566, in get_generator
    return self.inference_model.generate_stream(
  File "/home/user/miniconda3/envs/tortoise/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/miniconda3/envs/tortoise/lib/python3.9/site-packages/tortoise_tts-3.0.0-py3.9.egg/tortoise/models/stream_generator.py", line 210, in generate
    ] = self._prepare_attention_mask_for_generation(
  File "/home/user/miniconda3/envs/tortoise/lib/python3.9/site-packages/transformers-4.45.1-py3.9.egg/transformers/generation/utils.py", line 465, in _prepare_attention_mask_for_generation
    isin_mps_friendly(elements=eos_token_id, test_elements=pad_token_id).any()
  File "/home/user/miniconda3/envs/tortoise/lib/python3.9/site-packages/transformers-4.45.1-py3.9.egg/transformers/pytorch_utils.py", line 324, in isin_mps_friendly
    if elements.device.type == "mps" and not is_torch_greater_or_equal_than_2_4:
AttributeError: 'int' object has no attribute 'device'
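For what it's worth, the 4.45.1 failure can be reproduced outside Tortoise: recent transformers releases expect pad_token_id/eos_token_id to reach _prepare_attention_mask_for_generation as tensors, while the vendored stream_generator.py still hands over plain Python ints. A minimal sketch of the mismatch (the token id value below is made up, and the behaviour is inferred from the traceback rather than verified against every release):

import torch
from transformers.pytorch_utils import isin_mps_friendly

eos_token_id = 0  # plain int, as the old-style generate() code path supplies it
pad_token_id = 0

try:
    # isin_mps_friendly() inspects elements.device, so an int fails exactly as above
    isin_mps_friendly(elements=eos_token_id, test_elements=pad_token_id)
except AttributeError as err:
    print(err)  # 'int' object has no attribute 'device'

# Wrapping the ids in tensors, as current transformers does internally, works fine:
print(isin_mps_friendly(elements=torch.tensor([eos_token_id]),
                        test_elements=torch.tensor(pad_token_id)))  # tensor([True])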
With transformers 4.48.1:
Client disconnected.
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/home/user/miniconda3/envs/tortoise/lib/python3.9/threading.py", line 980, in _bootstrap_inner
    self.run()
  File "/home/user/miniconda3/envs/tortoise/lib/python3.9/threading.py", line 917, in run
    self._target(*self._args, **self._kwargs)
  File "/home/user/Documents/tortoise-tts/tortoise/socket_server.py", line 58, in handle_client
    for audio_chunk in audio_stream:
  File "/home/user/Documents/tortoise-tts/tortoise/socket_server.py", line 21, in generate_audio_stream
    for audio_chunk in stream:
  File "/home/user/miniconda3/envs/tortoise/lib/python3.9/site-packages/tortoise_tts-3.0.0-py3.9.egg/tortoise/api_fast.py", line 380, in tts_stream
    gpt_generator = self.autoregressive.get_generator(
  File "/home/user/miniconda3/envs/tortoise/lib/python3.9/site-packages/tortoise_tts-3.0.0-py3.9.egg/tortoise/models/autoregressive.py", line 566, in get_generator
    return self.inference_model.generate_stream(
  File "/home/user/miniconda3/envs/tortoise/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/miniconda3/envs/tortoise/lib/python3.9/site-packages/tortoise_tts-3.0.0-py3.9.egg/tortoise/models/stream_generator.py", line 210, in generate
    ] = self._prepare_attention_mask_for_generation(
  File "/home/user/miniconda3/envs/tortoise/lib/python3.9/site-packages/transformers/generation/utils.py", line 585, in _prepare_attention_mask_for_generation
    pad_token_id = generation_config._pad_token_tensor
AttributeError: 'int' object has no attribute '_pad_token_tensor'
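The 4.48.1 failure looks like the same root cause from the other side: in that release _prepare_attention_mask_for_generation takes a GenerationConfig (and reads generation_config._pad_token_tensor from it), whereas stream_generator.py still calls it with the older (inputs, pad_token_id, eos_token_id) positional arguments, so the int pad_token_id lands in the generation_config slot. A quick way to check what the installed transformers expects (the parameter names in the comments are taken from the 4.48.x source and older releases; treat them as assumptions for other versions):

import inspect
from transformers.generation.utils import GenerationMixin

# 4.48.x: (self, inputs_tensor, generation_config, model_kwargs)
# older releases: (self, inputs, pad_token_id, eos_token_id)
print(inspect.signature(GenerationMixin._prepare_attention_mask_for_generation))

If that is indeed the mismatch, the usual options seem to be pinning transformers to a release whose generate() internals still match the vendored stream_generator.py, or porting stream_generator.py to the newer private API; I have not confirmed a specific known-good version.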
Does anyone have any insight into this, please?