From 9092b481af04272f5cd942083c4cf29ea2c22782 Mon Sep 17 00:00:00 2001
From: Muli Yang <31494617+mattmoevil@users.noreply.github.com>
Date: Tue, 19 Feb 2019 17:39:44 +0800
Subject: [PATCH] Update 5_data_parallel_tutorial.ipynb

---
 chapter1/5_data_parallel_tutorial.ipynb | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/chapter1/5_data_parallel_tutorial.ipynb b/chapter1/5_data_parallel_tutorial.ipynb
index 0be5795f..da0ebe31 100644
--- a/chapter1/5_data_parallel_tutorial.ipynb
+++ b/chapter1/5_data_parallel_tutorial.ipynb
@@ -36,7 +36,7 @@
     "请注意，只调用``my_tensor.to(device)``并没有复制张量到GPU上，而是返回了一个copy。所以你需要把它赋值给一个新的张量并在GPU上使用这个张量。\n",
     "\n",
     "在多GPU上执行前向和反向传播是自然而然的事。\n",
-    "但是PyTorch默认将只是用一个GPU。\n",
+    "但是PyTorch默认将只使用一个GPU。\n",
     "\n",
     "使用``DataParallel``可以轻易的让模型并行运行在多个GPU上。\n",
     "\n",
@@ -139,7 +139,7 @@
     "简单模型\n",
     "------------\n",
     "作为演示，我们的模型只接受一个输入，执行一个线性操作，然后得到结果。\n",
-    "说明，``DataParallel``能在任何模型(CNN,RNN,Capsule Net等)上使用。\n",
+    "说明：``DataParallel``能在任何模型(CNN,RNN,Capsule Net等)上使用。\n",
     "\n",
     "\n",
     "我们在模型内部放置了一条打印语句来打印输入和输出向量的大小。\n",
@@ -286,7 +286,7 @@
     "    Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])\n",
     "\n",
     "3 GPUs\n",
-    "~~~~~~\n",
+    "~\n",
     "\n",
     "If you have 3 GPUs, you will see:\n",
     "\n",
@@ -311,7 +311,7 @@
     "    Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])\n",
     "\n",
     "8 GPUs\n",
-    "~~~~~~~~~~~~~~\n",
+    "~~\n",
     "\n",
     "If you have 8, you will see:\n",
     "\n",
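
Reviewer note: for anyone reading this patch without the notebook open, here is a
minimal, self-contained sketch of the ``DataParallel`` pattern the edited cells
describe. The names (RandomDataset, Model) and sizes (input_size=5,
output_size=2, batch_size=30, 100 samples) are inferred from the
torch.Size([10, 5]) / torch.Size([10, 2]) output quoted in the hunks above;
treat this as an illustration, not the notebook's exact cells.

    # Sketch of the DataParallel pattern the notebook demonstrates.
    # Names and sizes are inferred from the diff's printed shapes
    # (torch.Size([10, 5]) -> torch.Size([10, 2])), not copied verbatim.
    import torch
    import torch.nn as nn
    from torch.utils.data import Dataset, DataLoader

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    class RandomDataset(Dataset):
        # Random tensors standing in for real inputs.
        def __init__(self, size, length):
            self.data = torch.randn(length, size)

        def __getitem__(self, index):
            return self.data[index]

        def __len__(self):
            return len(self.data)

    class Model(nn.Module):
        # One linear layer; the print call makes the per-GPU batch
        # split visible, matching the "In Model:" lines in the output.
        def __init__(self, input_size, output_size):
            super().__init__()
            self.fc = nn.Linear(input_size, output_size)

        def forward(self, x):
            output = self.fc(x)
            print("\tIn Model: input size", x.size(),
                  "output size", output.size())
            return output

    input_size, output_size = 5, 2
    model = Model(input_size, output_size)
    if torch.cuda.device_count() > 1:
        # DataParallel splits each batch along dim 0 across all visible
        # GPUs, runs the replicas in parallel, and gathers the outputs.
        model = nn.DataParallel(model)
    model = model.to(device)

    rand_loader = DataLoader(RandomDataset(input_size, 100),
                             batch_size=30, shuffle=True)
    for data in rand_loader:
        output = model(data.to(device))
        print("Outside: input size", data.size(),
              "output_size", output.size())

With 100 samples and batch_size=30, the last batch holds 10 samples, which is
why the hunks above show "Outside: input size torch.Size([10, 5])"; on a
multi-GPU machine each replica's "In Model:" line reports only its share of
that batch, which is the splitting behavior the 3-GPU and 8-GPU output
sections illustrate.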