Update 5_data_parallel_tutorial.ipynb
muliyangm authored Feb 19, 2019
1 parent 4de06da commit e2b5e87
Showing 1 changed file with 7 additions and 7 deletions.
Data Parallelism (Optional)
==========================
**Authors**: [Sung Kim](https://github.com/hunkim) and [Jenny Kang](https://github.com/jennykang)

In this tutorial, we will learn how to use ``DataParallel`` to work with multiple GPUs.

It is very easy to use multiple GPUs with PyTorch. You can put a model on a GPU as follows:

```python
device = torch.device("cuda:0")
model.to(device)
```

Then you can copy all of your tensors to the GPU:

```python
mytensor = my_tensor.to(device)
```
Note that just calling ``my_tensor.to(device)`` does not move ``my_tensor`` onto the GPU in place; instead, it returns a new copy of the tensor on the GPU. You therefore need to assign the result to a new variable and use that tensor on the GPU.
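This copy semantics of ``.to(...)`` can be observed even without a GPU, by converting the dtype instead of the device (an illustrative sketch, not part of the original notebook):

```python
import torch

my_tensor = torch.ones(3)                # a float32 tensor on the CPU
converted = my_tensor.to(torch.float64)  # .to() returns a NEW tensor

# The original tensor is unchanged; only the returned copy is converted.
print(my_tensor.dtype)  # torch.float32
print(converted.dtype)  # torch.float64
```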

Running the forward and backward passes on multiple GPUs is then natural. However, PyTorch will only use one GPU by default.
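The standard way to go beyond a single GPU is to wrap the model in ``nn.DataParallel``, which splits each input batch across the available GPUs. A minimal sketch, where ``ToyModel`` and the tensor sizes are illustrative assumptions and the CUDA checks let it fall back to the CPU:

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

model = ToyModel()
if torch.cuda.device_count() > 1:
    # DataParallel replicates the model and splits each batch across GPUs
    model = nn.DataParallel(model)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)

out = model(torch.randn(4, 10).to(device))
print(out.shape)  # torch.Size([4, 2])
```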
Dummy Dataset
-------------

Make a dummy (random) dataset. You only need to implement `__getitem__`.
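A minimal implementation of such a dataset (the sizes here are illustrative assumptions); note that a map-style ``Dataset`` also needs ``__len__`` so that a ``DataLoader`` can batch it:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class RandomDataset(Dataset):
    """A dummy dataset that serves random vectors."""

    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len

rand_loader = DataLoader(dataset=RandomDataset(5, 100),
                         batch_size=30, shuffle=True)
for batch in rand_loader:
    print(batch.size())  # torch.Size([30, 5]) for the full batches
```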