Exclude pretrained models from the checkpoint rotation logic #729
There was an issue caused by the pretrained model files having the same naming pattern as the checkpoints saved during training. For example, if `keep_last_n_checkpoints` was set to 5, the pretrained model was also counted toward that limit, so only 4 training checkpoints plus the pretrained model were actually kept. In long enough training sessions, there was also a risk of the pretrained model itself being deleted. These changes exclude the pretrained model from the checkpoint rotation logic.

If there are any modifications I might have overlooked, I'd appreciate it if you could review them and provide feedback.