
cuda runtime error (2) : out of memory #59

Open
ivancruzbht opened this issue Mar 24, 2017 · 0 comments

Comments

@ivancruzbht

I ran encode-training-and-test-images.lua to compute the encodings of my data for classification, and I got this error.

The problem is located at this line of code, where the script tries to allocate memory for the embeddings.

For my dataset, this line would allocate about 2.6 GB of VRAM. I have a GTX 970M with 3 GB, but by the time this line is evaluated the encoder model is already loaded, occupying about 700 MB of VRAM, which is why I get this error.
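To make the arithmetic concrete, here is a small sketch of how a dense float32 tensor of this shape adds up to roughly 2.6 GB. The batch counts (150 batches of 100 images) are hypothetical values chosen to match the figure reported above; the real values depend on the dataset.

```python
def tensor_bytes(shape, bytes_per_element=4):
    """Bytes needed for a dense tensor of the given shape (float32 by default)."""
    n = 1
    for dim in shape:
        n *= dim
    return n * bytes_per_element

# 384 * 11 * 11 floats per image, matching the allocation shown below
per_image = tensor_bytes((384, 11, 11))        # ~181 KiB per image
total = tensor_bytes((150, 100, 384, 11, 11))  # assumed: 150 batches of 100 images
print(per_image)            # 185856 bytes
print(total / 2**30)        # ~2.6 GiB for 15,000 images
```

With only ~2.3 GB of VRAM left after the model is loaded, an allocation of this size cannot succeed on a 3 GB card.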

A quick-and-dirty fix is to allocate the tensor in host RAM instead, by changing that line of code to something like this:

local inputs = torch.Tensor(batches, batch_size, 384, 11, 11)

And also changing #L33 to this:

inputs[batch][batch_el]:copy(encodedInput:double())

I'm just opening this issue for reference, in case someone else has the same problem.

A good new feature would be to check the available VRAM against the size of a pending allocation wherever this happens in the code, and handle a shortfall gracefully (for example, by falling back to host RAM).
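The suggested check can be sketched as a small device-selection helper. This is illustrative only: the function name and the `free_vram_bytes` argument are stand-ins (in Torch7 the free-memory figure would come from cutorch's memory-usage query), and the headroom factor is an assumed safety margin.

```python
def choose_device(required_bytes, free_vram_bytes, headroom=0.9):
    """Return 'gpu' if the allocation fits in free VRAM (with a safety
    margin), otherwise 'cpu' so the caller can allocate in host RAM."""
    if required_bytes <= free_vram_bytes * headroom:
        return "gpu"
    return "cpu"

# ~2.6 GB needed, ~2.3 GB free (3 GB card minus the ~700 MB model):
print(choose_device(2.6 * 2**30, 2.3 * 2**30))  # 'cpu' — falls back to RAM
print(choose_device(1.0 * 2**30, 2.3 * 2**30))  # 'gpu' — small enough to fit
```

The headroom factor avoids allocating right up to the limit, since fragmentation and other buffers can make the nominal free amount unusable in practice.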
