Unexpectedly long outputs #167
Comments
It depends on the tensor2tensor version; they break it every month.
I am facing a similar issue as well:
Is this a model issue or a bug in the decoder code? I tried the suggestion that it might be due to the tensor2tensor library, but I am getting the same results with tensor2tensor==1.6.6 and tensor2tensor==1.7.0.
@joshhansen I solved this issue by increasing the decoding beam size from 1 to 5.
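For anyone hitting this later, here is one way to see why a wider beam helps. The snippet below is a minimal, self-contained sketch of beam-search decoding, not the g2p-seq2seq or tensor2tensor API: the `step_log_probs` callback, the `EOS` symbol, and the toy distribution are all made up for illustration. With `beam_size=1` (greedy decoding), a model that makes the looping phoneme marginally more likely than stopping at every step never emits end-of-sequence, while a wider beam keeps the shorter, higher-scoring hypothesis alive.

```python
import heapq
import math

EOS = "</s>"  # hypothetical end-of-sequence symbol

def beam_search(step_log_probs, beam_size=5, max_len=30):
    """Generic beam search over a hypothetical step function.

    step_log_probs(prefix) -> {next_symbol: log_probability}
    Returns the best-scoring hypothesis, with EOS stripped.
    """
    beams = [(0.0, [])]  # (cumulative log-prob, symbols emitted so far)
    for _ in range(max_len):
        candidates = []
        for score, prefix in beams:
            if prefix and prefix[-1] == EOS:
                candidates.append((score, prefix))  # already finished; carry along
                continue
            for sym, logp in step_log_probs(prefix).items():
                candidates.append((score + logp, prefix + [sym]))
        # Prune to the beam_size best hypotheses.
        beams = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
        if all(p and p[-1] == EOS for _, p in beams):
            break  # every surviving hypothesis has ended
    best_score, best_prefix = max(beams, key=lambda c: c[0])
    return [s for s in best_prefix if s != EOS]

# Toy distribution: at every step the looping phoneme "AH" is slightly more
# likely than stopping, so greedy decoding never emits EOS, while a wider
# beam recovers the much higher-scoring short hypothesis.
def toy_step(prefix):
    return {"AH": math.log(0.6), EOS: math.log(0.4)}

print(len(beam_search(toy_step, beam_size=1)))  # 30 -> ran into max_len
print(len(beam_search(toy_step, beam_size=5)))  # 0  -> stopped early
```

This is only a toy, but it is consistent with the observation above that raising the beam size from 1 to 5 makes the runaway outputs go away.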
I'm repeatedly finding that g2p-seq2seq generates strangely long pronunciations with the included model. Across all sequences up to three letters long, the following strange outputs occur:
But those are all fairly arbitrary inputs; actual words get such results too:
The recurring theme seems to be that, for whatever reason, these words get stuck in a loop for a long time.
These cases are pretty rare, but they are so egregiously bad that it makes me wonder whether there is a bug somewhere. If not, I'd appreciate guidance on how to train a model that avoids these issues.
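Until the decoding issue itself is sorted out, one pragmatic stopgap is to flag pronunciations that look like decoder loops and handle just those words separately (for example, re-decode them with a larger beam). The heuristic below is purely a sketch of that idea and is not part of g2p-seq2seq; the function name and the `max_ratio` / `max_repeat` thresholds are made up for illustration.

```python
def looks_like_decoder_loop(word, phonemes, max_ratio=3.0, max_repeat=3):
    """Heuristically flag pronunciations that are implausibly long or repetitive.

    Flags outputs with more than max_ratio phonemes per letter, or with any
    phoneme repeated more than max_repeat times in a row. Thresholds are
    illustrative, not tuned.
    """
    if len(phonemes) > max_ratio * max(len(word), 1):
        return True
    run = 1
    for prev, cur in zip(phonemes, phonemes[1:]):
        run = run + 1 if cur == prev else 1
        if run > max_repeat:
            return True
    return False

print(looks_like_decoder_loop("abc", ["EY", "B", "IY", "S", "IY"]))  # False
print(looks_like_decoder_loop("abc", ["EY", "B"] + ["IY"] * 20))     # True
```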