Commit

deep-course committed Feb 17, 2019
2 parents fa422de + bb42418 commit 9893603
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion chapter1/2_autograd_tutorial.ipynb
@@ -32,7 +32,7 @@
"``torch.Tensor``是这个包的核心类.如果设置\n",
"``.requires_grad`` 为 ``True``, 那么将会追踪多有对于该张量的操作. \n",
"当完成计算后通过调用 ``.backward()``会自动计算所有的梯度.\n",
"这个张量的所有提多将会自动积累到 ``.grad`` 属性.\n",
"这个张量的所有梯度将会自动积累到 ``.grad`` 属性.\n",
"\n",
"要阻止张量跟踪历史记录,可以调用``.detach()``方法将其与计算历史记录分离,并禁止跟踪它将来的计算记录。\n",
"\n",
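For context, the corrected sentence describes PyTorch's autograd behavior. A minimal sketch of that behavior (the tensor shapes and values here are illustrative, not taken from the notebook):

```python
import torch

# Operations on a tensor with requires_grad=True are tracked.
x = torch.ones(2, 2, requires_grad=True)
y = (x * 3).sum()

# Calling .backward() computes all gradients automatically;
# they accumulate into x.grad.
y.backward()
print(x.grad)  # tensor([[3., 3.], [3., 3.]])

# Because gradients accumulate across backward() calls,
# zero them between training steps.
x.grad.zero_()

# .detach() returns a tensor cut off from the computation history,
# so future operations on it are not tracked.
z = x.detach()
print(z.requires_grad)  # False
```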
2 changes: 1 addition & 1 deletion chapter1/5_data_parallel_tutorial.ipynb
@@ -33,7 +33,7 @@
"\n",
" mytensor = my_tensor.to(device)\n",
"```\n",
"请注意,只调用``my_tensor.to(device)``并没有复制张量到GPU上,二十返回了一个copy。所以你需要把它赋值给一个新的张量并在GPU上使用这个张量。\n",
"请注意,只调用``my_tensor.to(device)``并没有复制张量到GPU上,而是返回了一个copy。所以你需要把它赋值给一个新的张量并在GPU上使用这个张量。\n",
"\n",
"在多GPU上执行前向和反向传播是自然而然的事。\n",
"但是PyTorch默认将只是用一个GPU。\n",
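The corrected sentence is about ``.to(device)`` returning a copy rather than moving the tensor in place. A minimal sketch of that behavior, plus the single-GPU default mentioned above (the ``Linear`` model is an illustrative placeholder):

```python
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

my_tensor = torch.randn(4, 4)
mytensor = my_tensor.to(device)  # returns a copy on `device`
print(my_tensor.device)          # the original tensor is unchanged
print(mytensor.device)           # the copy lives on `device`

# PyTorch uses a single GPU by default; nn.DataParallel is one way
# to spread the forward and backward passes across all visible GPUs.
model = nn.Linear(4, 2)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to(device)
output = model(mytensor)
```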
