I am getting a CUDA out-of-memory error while running the greedy decoder script. Can you suggest the type of GPU and the amount of memory required to run it?
The code, together with the error it produces when run in Spyder, is given below. I have also enclosed full environment details. The machine is a Dell Precision Tower 7920.
with torch.no_grad():
    greedy_time = time.time()  # start timer
    loss_greedy = []
    greedy_predict = []
    model.eval()
    # initialize the encoder hidden states
    val_hidden = model.encoder.init_hidden(batch_size=32)
    for x_val, y_val in get_batches(val_text, val_summary, batch_size=32):
        # convert data to PyTorch tensors
        x_val = torch.from_numpy(x_val).to(device)
        y_val = torch.from_numpy(y_val).to(device)
        val_hidden = tuple([each.data for each in val_hidden])
        # run the greedy decoder
        val_loss, prediction = model.inference_greedy(x, y, val_hidden, criterion, batch_size=32)
        loss_greedy.append(val_loss.item())
        greedy_predict.append(prediction)
File "", line 14, in
val_loss, prediction = model.inference_greedy(x,y,val_hidden,criterion,batch_size=32)
File "", line 59, in inference_greedy
logits, d_hidden = self.decoder(dec_input,enc_output,d_hidden,x,batch_size)
File "/home/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)RuntimeError: CUDA out of memory
File "", line 74, in forward
output_probability = torch.mul(p_pointer.unsqueeze(1),pointer_prob) + torch.mul(p_gen.unsqueeze(1),generator_prob)
RuntimeError: CUDA out of memory. Tried to allocate 11.75 MiB (GPU 0; 7.92 GiB total capacity; 5.84 GiB already allocated; 20.75 MiB free; 717.94 MiB cached)
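On an 8 GiB card like the Quadro P4000, the usual first step is to shrink the batch and release GPU references between batches rather than buy a larger GPU. A minimal sketch of that loop, assuming the same model, get_batches, and inference_greedy interfaces as above; batch_size=8 is an assumed value to tune, and prediction is assumed to be a tensor:

import torch

batch_size = 8  # assumed value, down from 32; tune until it fits in 8 GiB

with torch.no_grad():
    model.eval()
    loss_greedy, greedy_predict = [], []
    val_hidden = model.encoder.init_hidden(batch_size=batch_size)
    for x_val, y_val in get_batches(val_text, val_summary, batch_size=batch_size):
        x_val = torch.from_numpy(x_val).to(device)
        y_val = torch.from_numpy(y_val).to(device)
        val_hidden = tuple(each.data for each in val_hidden)
        val_loss, prediction = model.inference_greedy(
            x_val, y_val, val_hidden, criterion, batch_size=batch_size)
        loss_greedy.append(val_loss.item())      # .item() copies the scalar to host memory
        greedy_predict.append(prediction.cpu())  # keep predictions in RAM, not VRAM (assumes a tensor)
        del x_val, y_val, val_loss, prediction   # drop the GPU references for this batch
        torch.cuda.empty_cache()                 # return cached blocks to the driver

Halving the batch size roughly halves the activation memory of the decoder, and moving each batch's predictions to the CPU stops them from accumulating on the device across the whole validation set.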
I forgot to post additional information about the issue:
Collecting environment information...
PyTorch version: 1.0.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 6.5.0-2ubuntu1~16.04) 6.5.0 20181026
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration: GPU 0: Quadro P4000
Nvidia driver version: 384.130
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a
/usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so
/usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so.5
/usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so.5.1.3
/usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn_static.a
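To see whether the 5.84 GiB already allocated comes from the model weights themselves or accumulates batch by batch, the standard torch.cuda counters can be printed between batches. A minimal sketch; the report_gpu_memory helper is hypothetical, not part of the original script:

import torch

def report_gpu_memory(tag=""):
    # Print current and peak allocations against the card's total capacity.
    dev = torch.cuda.current_device()
    total = torch.cuda.get_device_properties(dev).total_memory
    allocated = torch.cuda.memory_allocated(dev)
    peak = torch.cuda.max_memory_allocated(dev)
    print(f"[{tag}] allocated {allocated / 2**20:.1f} MiB, "
          f"peak {peak / 2**20:.1f} MiB of {total / 2**20:.1f} MiB total")

Calling report_gpu_memory("after batch") at the end of each loop iteration shows whether the allocated figure grows monotonically (a sign that tensors such as the predictions are being retained on the GPU) or stays flat, in which case the model plus a batch simply does not fit at batch_size=32.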