Question about training #32
Comments
I'm fine-tuning the WavTokenizer-medium checkpoint on audio data from my own domain. However, the model seems to get worse as training continues: at about 135,000 steps the generator loss suddenly increased to around 50, after having steadily decreased to a low of about 43. The discriminator loss is relatively stable at 10.7–10.8 and still trending downward. Is this normal, and how should I address it? (My training config is the official one from this GitHub repo; I did not adjust other hyperparameters such as the learning rate.)

This phenomenon is normal; some fluctuation in the total loss can occur during training. Extending the training duration can help address this issue.

Even though the loss does eventually decrease again, waiting for it to recover after a spike can be time-consuming. Have you encountered this issue during training, and how did you address it? Is simply extending the training time the only solution?
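For what it's worth, one way to judge whether a spike like this is a transient fluctuation or real divergence is to smooth the generator loss and keep the best checkpoint, so training can simply continue through the spike. The sketch below is illustrative only and assumes a generic PyTorch-style loop; `GenLossMonitor`, `train_one_step`, and `best_generator.pt` are hypothetical names, not part of the WavTokenizer repository.

```python
# Minimal sketch, NOT taken from the WavTokenizer codebase: smooth the raw
# generator loss with an exponential moving average (EMA) and remember the
# best smoothed value, so a transient spike (e.g. ~43 -> ~50 near step 135k)
# can be flagged and ridden out instead of restarting the run.

class GenLossMonitor:
    """Tracks an EMA of the generator loss and flags spikes / new bests."""

    def __init__(self, decay: float = 0.999, spike_factor: float = 1.1):
        self.decay = decay                # EMA smoothing factor
        self.spike_factor = spike_factor  # flag losses above spike_factor * EMA
        self.ema = None
        self.best_ema = float("inf")

    def update(self, loss_value: float) -> dict:
        # Update the running EMA with the latest per-step loss.
        if self.ema is None:
            self.ema = loss_value
        else:
            self.ema = self.decay * self.ema + (1.0 - self.decay) * loss_value
        is_spike = loss_value > self.spike_factor * self.ema
        is_best = self.ema < self.best_ema
        if is_best:
            self.best_ema = self.ema
        return {"ema": self.ema, "is_spike": is_spike, "is_best": is_best}


# Hypothetical usage inside a training loop (train_one_step, model, dataloader
# are placeholders, not real WavTokenizer functions):
#
# monitor = GenLossMonitor()
# for step, batch in enumerate(dataloader):
#     gen_loss = train_one_step(batch)
#     stats = monitor.update(float(gen_loss))
#     if stats["is_best"]:
#         torch.save(model.state_dict(), "best_generator.pt")
#     if stats["is_spike"]:
#         print(f"step {step}: loss {gen_loss:.2f} vs EMA {stats['ema']:.2f}")
```

Under this kind of monitoring, a brief excursion above the smoothed loss that later returns to trend is usually just noise in the adversarial objective, whereas a smoothed loss that keeps climbing is a stronger sign that the run has actually degraded.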