Replies: 4 comments 3 replies
-
This will greatly speed things up, but requires quite a bit of GPU memory.
Thanks for posting this!
-
How does it compare to the metavoicexyz solution?
-
Does this require a whole new rebuild, or can we just replace some files?
-
How much VRAM does it use? Is 12 GB enough?
-
First of all, this is currently the best TTS project on GitHub, and James has done amazing work here.
I played a little with the code and noticed that many operations in PyTorch are done on the CPU: once inference on the autoregressive model finishes, its output is moved back to the CPU before the other models run (I believe). I was able to move all of those operations to the GPU only, and it is much faster now.
The fork is here:
https://github.com/Okoyl/tortoise-tts-nocpu
I'd like some feedback, hopefully it is good enough for a PR.
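A minimal sketch of the general pattern being described (not code from the fork): instead of calling `.cpu()` on an intermediate result and doing the follow-up work on the host, keep every tensor on one device end to end. The tensor shapes and operations here are stand-ins for illustration only.

```python
import torch

# Pick the GPU when available; everything below stays on this one device.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in for the autoregressive model's output. The slow pattern is
# `codes = model(x).cpu()` followed by host-side post-processing; the fix
# is to leave the tensor where it is and run the next stage there too.
codes = torch.randn(4, 16, device=device)

# Downstream ops inherit the device of their inputs, so no transfer occurs.
weights = (codes @ codes.T).softmax(dim=-1)

assert weights.device == codes.device  # no GPU->CPU round trip
```

Each `.cpu()` call forces a device synchronization and a host transfer, so removing them between models is where the speedup comes from.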