Serious bug in libri_train.py #82
Comments
I can put all training .npy files into one directory, but the real problem is that the model would have to fit the largest sample in the whole dataset: if the largest sample is 4000 time steps, then every sample would need to be padded to this size. This would make training extremely slow. Look at the https://github.com/fordDeepDSP/deepSpeech code for a better solution (bucketing sorted inputs).
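For what it's worth, here is a minimal sketch of that bucketing idea (not the actual fordDeepDSP code): utterances are sorted by length and each bucket is padded only to its own longest utterance, so a single 4000-step outlier no longer inflates every batch. The function name and the in-memory list of .npy feature arrays are assumptions for illustration.

```python
import numpy as np

def bucket_batches(features, bucket_size=32):
    """Sort utterances by length and pad each bucket only to its own maximum,
    instead of padding everything to the longest utterance in the dataset."""
    # features: list of arrays shaped (time_steps, num_features), e.g. loaded from .npy files
    order = sorted(range(len(features)), key=lambda i: features[i].shape[0])
    for start in range(0, len(order), bucket_size):
        idx = order[start:start + bucket_size]
        max_len = features[idx[-1]].shape[0]   # longest utterance in this bucket
        num_feats = features[idx[0]].shape[1]
        batch = np.zeros((len(idx), max_len, num_feats), dtype=np.float32)
        seq_lens = np.zeros(len(idx), dtype=np.int32)
        for row, i in enumerate(idx):
            t = features[i].shape[0]
            batch[row, :t, :] = features[i]
            seq_lens[row] = t
        yield batch, seq_lens, idx   # idx lets the caller pick up the matching labels
```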
From L171, I think the logic is to restore saved parameters trained on previous folders? So I guess it's not training 8 separate models if the
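If the intent really is to carry weights from one folder to the next, the usual TF 1.x pattern would look roughly like the sketch below; checkpoint_dir and the dummy variable are placeholders for illustration, not code taken from libri_train.py. Whether this works in practice also depends on the per-directory graphs defining variables with identical names and shapes, so the restore doesn't fail.

```python
import os
import tensorflow as tf  # TF 1.x style, matching this repo

checkpoint_dir = './checkpoints'        # hypothetical path
os.makedirs(checkpoint_dir, exist_ok=True)

w = tf.get_variable('w', shape=[10])    # stand-in for the model's weights
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    ckpt = tf.train.latest_checkpoint(checkpoint_dir)
    if ckpt is not None:
        # Continue from the weights saved while training on the previous folder,
        # rather than starting from scratch for every directory.
        saver.restore(sess, ckpt)
    # ... train on the current folder's batches here ...
    saver.save(sess, os.path.join(checkpoint_dir, 'model.ckpt'))
```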
I got this repo to work, but it took a lot of effort and many bug fixes. In the end, it's just not worth it - this repo has pretty much been abandoned, and there are better repos available (fordDSP, Mozilla, or SeanNaren's excellent PyTorch implementation). Also, DeepSpeech is pretty old - there are now better architectures, for example Jasper or transducer-based ones. Don't waste your time on this one.
I kinda agree after trying this repo on LibriSpeech, and thank you for the pointers. I also checked fordDSP and SeanNaren's DeepSpeech2 PyTorch implementation, but I still see people there having trouble getting reasonable WER/CER without getting responses. I just want to train on LibriSpeech and may have to try Kaldi now.
The LibriSpeech dataset (e.g. train-clean-100) is split into multiple directories during preprocessing. During training, the code then iterates through these directories:
https://github.com/zzw922cn/Automatic_Speech_Recognition/blob/master/speechvalley/main/libri_train.py#L159
The problem is that for each directory, a new model is created according to the maxTimeSteps parameter for the inputs in the directory. This means that if we have 8 directories for train-clean-100 dataset, we are training 8 separate models, which don't share their weights (in fact, every time a model saves a checkpoint, it overwrites the checkpoint saved by the previous model).
This means that we are effectively training only one model out of 8, and we are training it only on data in the last directory (so we are using 1/8 of the dataset).
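One possible fix, sketched below under assumptions (the preprocessed directory layout, the axis that holds time steps, and build_model / train_one_epoch are all hypothetical): scan every folder up front, compute a single global maxTimeSteps, build the graph once, and reuse the same model and checkpoint for every directory so training genuinely continues rather than restarting. The bucketing approach from the first comment would then address the padding overhead that a single global maxTimeSteps introduces.

```python
import glob
import os
import numpy as np

data_root = './train-clean-100-preprocessed'   # hypothetical preprocessed root

# Size the graph once from the whole dataset instead of once per directory.
subdirs = sorted(d for d in glob.glob(os.path.join(data_root, '*')) if os.path.isdir(d))
global_max_steps = 0
for d in subdirs:
    for f in glob.glob(os.path.join(d, '**', '*.npy'), recursive=True):
        # Assumes time steps are the first axis of each saved feature array.
        global_max_steps = max(global_max_steps, np.load(f).shape[0])

# model = build_model(max_time_steps=global_max_steps)   # hypothetical builder
# for d in subdirs:
#     train_one_epoch(model, d)   # same graph and weights for every directory
```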