Replies: 3 comments 7 replies
-
Hi,
-
E.g., I can only run a batch size of 4, with examples of 0.750 seconds each, on a TITAN RTX, and training takes more than a week. This is the biggest GPU I have, and I have only one.
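The "more than a week" figure is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, where the 0.750 s per example and batch size of 4 come from the comment above, but the dataset size and epoch count are hypothetical placeholders chosen only for illustration:

```python
def training_days(n_examples: int, sec_per_example: float, epochs: int) -> float:
    """Estimate wall-clock training time in days from per-example cost."""
    return n_examples * sec_per_example * epochs / 86400.0

# Hypothetical: a WSJ0-2mix-sized training set (~20k utterances), 100 epochs.
days = training_days(n_examples=20_000, sec_per_example=0.750, epochs=100)
print(f"{days:.1f} days")  # → 17.4 days, i.e. well over a week
```

At that throughput, the batch size of 4 barely matters; the per-example cost dominates, which is consistent with the week-plus estimate.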
-
Yes, sure, it is a simplification I made just to be sure the separation stack
is correct.
The next step is to try the full pipeline; the code is already there, actually.
It is a matter of debugging and running experiments.
On Wed, Feb 24, 2021, 9:15 PM, manu0586 wrote:
By doing that, I think your system becomes target speech extraction
(with N targets) and not "blind" speech separation anymore (like
Wavesplit).
But maybe it's a first step toward understanding/implementing the separation stack.
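The distinction drawn here can be made concrete in the loss function: a blind separation model does not know which output corresponds to which speaker, so training must be permutation invariant, whereas target extraction fixes the assignment via the conditioning speaker. A minimal NumPy sketch of that difference (hypothetical illustration, not the Asteroid implementation):

```python
import itertools
import numpy as np

def pit_mse(estimates, targets):
    """Permutation-invariant MSE: try every output-to-source assignment
    and keep the best one (blind separation has no fixed ordering)."""
    n = len(estimates)
    best = np.inf
    for perm in itertools.permutations(range(n)):
        loss = np.mean([np.mean((estimates[i] - targets[p]) ** 2)
                        for i, p in enumerate(perm)])
        best = min(best, loss)
    return best

def extraction_mse(estimate, target):
    """Target extraction: the conditioning speaker fixes the assignment,
    so a plain MSE against that one source suffices."""
    return np.mean((estimate - target) ** 2)

rng = np.random.default_rng(0)
s1, s2 = rng.standard_normal(8000), rng.standard_normal(8000)
# Even with the outputs swapped relative to the reference ordering,
# PIT recovers a zero loss by searching over permutations:
print(pit_mse([s2, s1], [s1, s2]))  # → 0.0
```

With oracle speaker identities (the simplification discussed above), the PIT search is unnecessary and the fixed-assignment loss can be used directly, which is exactly what turns the system into extraction rather than blind separation.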
-
Hello,
According to the paper "Attention Is All You Need in Speech Separation", a Wavesplit model was implemented in Asteroid (by Samuele Cornell, in the wavesplit branch).
What is the status of that model? Does it work now?
Thanks