As the title says, running the script directly raises an error:
RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor
The 'lengths' tensor needs to be moved to the CPU for this to work.
Change line 40 from
x_pack = pack_padded_sequence(embeddings, lengths, batch_first=True, enforce_sorted=False)
to
x_pack = pack_padded_sequence(embeddings, lengths.cpu(), batch_first=True, enforce_sorted=False)
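For context, here is a minimal standalone sketch (not the repo's code; shapes and values are made up) of why the fix works: recent PyTorch versions require the `lengths` argument of `pack_padded_sequence` to be a 1D int64 tensor on the CPU, even when the embeddings themselves are on the GPU.

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A batch of padded embeddings on the GPU with per-sequence lengths
# (batch=4, max_len=10, embed_dim=32; values are illustrative only).
embeddings = torch.randn(4, 10, 32, device=device)
lengths = torch.tensor([10, 7, 5, 3], device=device)  # may end up on the GPU

# Passing GPU-resident lengths raises:
#   RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, ...
# so the lengths are moved back to the CPU before packing.
x_pack = pack_padded_sequence(embeddings, lengths.cpu(),
                              batch_first=True, enforce_sorted=False)
```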
Similarly, on a machine where both a CUDA device and the CPU are available, transformer_sent_polarity.py also fails with:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Line 25 of the chapter 4 file utils.py,
mask = torch.arange(max_len).expand(lengths.shape[0], max_len) < lengths.unsqueeze(1)
should likewise be changed to:
mask = torch.arange(max_len).expand(lengths.shape[0], max_len).cuda() < lengths.unsqueeze(1)
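For reference, a small self-contained sketch of the same masking pattern (the tensor values are made up for illustration). It creates the `arange` tensor with `device=lengths.device` instead of hard-coding `.cuda()`; this fixes the same device mismatch while also keeping the script runnable on CPU-only machines.

```python
import torch

# torch.arange() allocates on the CPU by default, so comparing it against
# GPU-resident `lengths` raises "Expected all tensors to be on the same device".
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
lengths = torch.tensor([5, 3, 2], device=device)
max_len = int(lengths.max())

# Device-agnostic variant of the fix above: put the arange tensor on the
# same device as `lengths` rather than calling .cuda() unconditionally.
mask = (torch.arange(max_len, device=lengths.device)
        .expand(lengths.shape[0], max_len) < lengths.unsqueeze(1))

print(mask)
# tensor([[ True,  True,  True,  True,  True],
#         [ True,  True,  True, False, False],
#         [ True,  True, False, False, False]])
```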