I ran encode-training-and-test-images.lua to get the encodings from the data for classification and got this error.
The problem is located at this line of code, where the script tries to allocate memory for the embeddings.
For my dataset, this line would allocate 2.6 GB of VRAM. I have a GTX 970M with 3 GB, but by the time this line is evaluated the encoder model is already in memory, occupying about 700 MB of VRAM, which is why I get this error.
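For reference, the 2.6 GB figure follows directly from the tensor shape: each encoding is 384 × 11 × 11 floats at 4 bytes each. A rough back-of-envelope in Lua (the `batches` and `batch_size` values below are example placeholders, not the script's actual configuration):

```lua
-- Rough estimate of the memory torch.CudaTensor(batches, batch_size, 384, 11, 11)
-- would allocate. batches and batch_size are placeholder values here;
-- the real ones come from the dataset and the script's settings.
local batches, batch_size = 438, 32           -- example values only
local elements = batches * batch_size * 384 * 11 * 11
local bytes = elements * 4                    -- CudaTensor stores 4-byte floats
print(string.format("%.2f GB", bytes / 1e9))  -- prints "2.60 GB"
```

So anything much beyond ~14,000 images of this encoding size will not fit next to the 700 MB encoder on a 3 GB card.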
A quick and dirty fix is to move the allocation to RAM by changing this line of code to something like this:
local inputs = torch.Tensor(batches, batch_size, 384, 11, 11)
This also requires changing #L33 so each encoding is copied back to the CPU tensor (note: `:copy` rather than `:set`, since `:set` would only repoint the sub-tensor's storage instead of writing into `inputs`):
inputs[batch][batch_el]:copy(encodedInput:double())
I'm just opening this issue for reference in case someone else has the same problem.
A useful new feature would be to compare the amount of available VRAM against the size of the allocation wherever a large tensor is created, and handle the shortfall gracefully.
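A minimal sketch of that check, assuming cutorch is available (the `fitsInVRAM` helper name is mine, not part of the repository):

```lua
-- Decide whether the embeddings tensor fits in free VRAM.
-- Kept pure (free bytes passed in as an argument) so it is easy to test.
local function fitsInVRAM(batches, batch_size, freeBytes)
  local bytesNeeded = batches * batch_size * 384 * 11 * 11 * 4
  return bytesNeeded < freeBytes
end

-- Usage sketch (requires cutorch and a CUDA device):
--   local free = cutorch.getMemoryUsage(cutorch.getDevice())
--   local inputs = fitsInVRAM(batches, batch_size, free)
--       and torch.CudaTensor(batches, batch_size, 384, 11, 11)
--       or  torch.Tensor(batches, batch_size, 384, 11, 11)
```

With this in place the script would transparently fall back to a RAM-backed torch.Tensor (as in the workaround above) instead of crashing when the card is too small.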