Is GPU training possible? #87
Comments
Yes, and "yes, but it's a bad idea". Some parts of the recipe (the scripts that correspond to the subspace HMM, i.e. the SHMM) are already trained on GPU. However, when the training uses the HMM forward-backward algorithm, even though it is possible to use the GPU, it will be much slower than training on CPU.
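For context on why this is the case: the forward recursion has a step-by-step dependency over time, so it cannot be parallelized across frames the way a neural network's matrix products can; the GPU ends up running T tiny kernels in sequence. A minimal log-space sketch in PyTorch, illustrative only and not beer's actual implementation:

```python
import torch

def forward_loglikelihood(log_trans, log_obs, log_init):
    """Log-space HMM forward pass (illustrative sketch).

    log_trans: (S, S) log transition matrix, log_trans[i, j] = log p(j | i)
    log_obs:   (T, S) per-frame log emission likelihoods
    log_init:  (S,)   log initial state probabilities
    """
    log_alpha = log_init + log_obs[0]
    # Each step depends on the previous one, so this loop is inherently
    # sequential: on a GPU it launches one small kernel per frame instead
    # of one large parallel operation, which is why the forward-backward
    # pass is usually faster on CPU for these models.
    for t in range(1, log_obs.shape[0]):
        log_alpha = torch.logsumexp(log_alpha.unsqueeze(1) + log_trans,
                                    dim=0) + log_obs[t]
    return torch.logsumexp(log_alpha, dim=0)
```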
How about the HMM-VAE option? I'm trying to integrate VAEs as done in the timit recipe.
Well, that's a tricky case. As I wrote previously, my HMM forward-backward implementation is not GPU friendly, so it won't be very fast. On the other hand, the VAE would strongly benefit from the GPU during training. So basically, it depends on the structure of your neural network: if it is sufficiently big that the forward-backward pass is not the bottleneck, then I guess using the GPU will be fine. I haven't tested the HMM-VAE code much, but moving the model (including the HMM) to the GPU should be straightforward. You may have a look at how I train the SHMM to see how to train the model on the GPU: https://github.com/beer-asr/beer/blob/master/beer/cli/subcommands/shmm/train.py
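For readers following along, the generic PyTorch pattern for moving a model and its batches to the GPU looks roughly like the sketch below. The module and dimensions are placeholders, not beer's actual classes; the linked train.py shows the real recipe:

```python
import torch

# Pick the GPU when one is available, otherwise fall back to CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

encoder = torch.nn.Sequential(            # stand-in for a VAE encoder
    torch.nn.Linear(39, 128),
    torch.nn.Tanh(),
    torch.nn.Linear(128, 2 * 10),
).to(device)                              # .to() moves all registered parameters

batch = torch.randn(256, 39).to(device)   # every mini-batch must follow the model
stats = encoder(batch)                    # runs on the GPU if one was found
```

The key point is that the model's parameters and the data it consumes must live on the same device; forgetting to move either one is the usual source of device-mismatch errors.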
I think I am having trouble using the GPU. When I run the code with
Hi Eom, |
I wonder if it is possible to run the scripts on GPU?