From e2b5e871ff10a21f7a62a5cca0b8630edd4ff7d7 Mon Sep 17 00:00:00 2001
From: Muli Yang <31494617+mattmoevil@users.noreply.github.com>
Date: Tue, 19 Feb 2019 17:32:23 +0800
Subject: [PATCH] Update 5_data_parallel_tutorial.ipynb

---
 chapter1/5_data_parallel_tutorial.ipynb | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/chapter1/5_data_parallel_tutorial.ipynb b/chapter1/5_data_parallel_tutorial.ipynb
index 55a763fc..0be5795f 100644
--- a/chapter1/5_data_parallel_tutorial.ipynb
+++ b/chapter1/5_data_parallel_tutorial.ipynb
@@ -14,13 +14,13 @@
 "metadata": {},
 "source": [
 "\n",
-"Data Parallelism(Optional)\n",
+"Data Parallelism (Optional)\n",
 "==========================\n",
-"**Authors**: `Sung Kim `_ and `Jenny Kang `_\n",
+"**Authors**: [Sung Kim](https://github.com/hunkim) and [Jenny Kang](https://github.com/jennykang)\n",
 "\n",
-"In this tutorial, we will learn how to use multiple GPUs with``DataParallel``. \n",
+"In this tutorial, we will learn how to use multiple GPUs with ``DataParallel``. \n",
 "\n",
-"It is very easy to use multiple GPUs in PyTorch. You can put a model on a GPU as follows\n",
+"It is very easy to use multiple GPUs in PyTorch. You can put a model on a GPU as follows:\n",
 "\n",
 "```python\n",
 "\n",
@@ -28,12 +28,12 @@
 " model.to(device)\n",
 "```\n",
 " GPU:\n",
-"Then you can copy all your tensors to the GPU\n",
+"Then you can copy all your tensors to the GPU:\n",
 "```python\n",
 "\n",
 " mytensor = my_tensor.to(device)\n",
 "```\n",
-"Note that just calling ``my_tensor.to(device)`` does not copy the tensor to the GPU; it returns a copy. So you need to assign it to a new tensor and use that tensor on the GPU.\n",
+"Note that just calling ``my_tensor.to(device)`` returns a new copy of the tensor on the GPU instead of rewriting ``my_tensor`` in place. So you need to assign it to a new variable and use that tensor on the GPU.\n",
 "\n",
 "It is natural to execute forward and backward passes on multiple GPUs.\n",
 "However, by default PyTorch will only use one GPU.\n",
@@ -104,7 +104,7 @@
 "Dummy Dataset\n",
 "-------------\n",
 "\n",
-"To make a dummy (random) dataset\n",
+"To make a dummy (random) dataset,\n",
 "you just need to implement `__getitem__`\n",
 "\n",
 "\n"
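
For reference, here is a minimal runnable sketch of the pattern the patched prose describes: move a model to a device, wrap it in ``nn.DataParallel`` when more than one GPU is visible, and feed it a dummy dataset that implements ``__getitem__``. This is an illustrative sketch in the style of the upstream PyTorch data-parallel tutorial that this notebook translates; the specific names (``RandomDataset``, ``Model``, ``input_size``, ``batch_size``) are assumptions, not taken verbatim from this notebook.

```python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

# Use the first GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")


class RandomDataset(Dataset):
    """Dummy (random) dataset: __getitem__ and __len__ are all it needs."""

    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class Model(nn.Module):
    """A trivial model, just to have something to parallelize."""

    def __init__(self, input_size, output_size):
        super().__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, x):
        return self.fc(x)


input_size, output_size, data_size, batch_size = 5, 2, 100, 30

loader = DataLoader(RandomDataset(input_size, data_size),
                    batch_size=batch_size, shuffle=True)

model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    # DataParallel splits each input batch across the available GPUs.
    model = nn.DataParallel(model)
model.to(device)

for data in loader:
    # .to(device) returns a copy on the GPU; assign it to a new name.
    inputs = data.to(device)
    outputs = model(inputs)
    print("Outside: input size", inputs.size(), "output size", outputs.size())
```

Note the guard on ``torch.cuda.device_count()``: wrapping in ``nn.DataParallel`` only pays off when more than one GPU is visible, and the wrapped model is still called like an ordinary module in the forward pass.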