Commit
Update 5_data_parallel_tutorial.ipynb
muliyangm authored Feb 19, 2019
1 parent e2b5e87 commit 9092b48
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions chapter1/5_data_parallel_tutorial.ipynb
@@ -36,7 +36,7 @@
 "请注意，只调用``my_tensor.to(device)``并没有复制张量到GPU上，而是返回了一个copy。所以你需要把它赋值给一个新的张量并在GPU上使用这个张量。\n",
 "\n",
 "在多GPU上执行前向和反向传播是自然而然的事。\n",
-"但是PyTorch默认将只是用一个GPU\n",
+"但是PyTorch默认将只使用一个GPU\n",
 "\n",
 "使用``DataParallel``可以轻易的让模型并行运行在多个GPU上。\n",
 "\n",
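The hunk above describes moving tensors with ``.to(device)`` (which returns a copy that must be reassigned) and wrapping a model in ``DataParallel``. A minimal sketch of that pattern — the ``nn.Linear(3, 2)`` model and the tensor shapes are illustrative choices, not taken from the notebook:

```python
import torch
import torch.nn as nn

# Use the first GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# .to(device) does not move the tensor in place; it returns a copy on the
# target device, so the result must be assigned back.
my_tensor = torch.randn(4, 3)
my_tensor = my_tensor.to(device)

# DataParallel splits each input batch across all visible GPUs; on a
# machine with fewer than two GPUs the wrapper is simply unnecessary.
model = nn.Linear(3, 2)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to(device)

out = model(my_tensor)
print(out.shape)  # torch.Size([4, 2])
```

Reassigning ``model = model.to(device)`` mirrors the tensor rule for consistency, although ``nn.Module.to`` also moves parameters in place.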
@@ -139,7 +139,7 @@
 "简单模型\n",
 "------------\n",
 "作为演示，我们的模型只接受一个输入，执行一个线性操作，然后得到结果。\n",
-"说明``DataParallel``能在任何模型(CNN,RNN,Capsule Net等)上使用。\n",
+"说明``DataParallel``能在任何模型(CNN,RNN,Capsule Net等)上使用。\n",
 "\n",
 "\n",
 "我们在模型内部放置了一条打印语句来打印输入和输出向量的大小。\n",
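The hunk above refers to the tutorial's toy model: a single linear layer with a print statement inside ``forward`` that reports input and output sizes. A sketch consistent with the sizes printed later in this diff (``input_size=5``, ``output_size=2``, batch of 10 — those numbers come from the ``torch.Size`` lines below, not from this hunk):

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    """Toy model: one linear layer that reports tensor sizes in forward."""
    def __init__(self, input_size, output_size):
        super().__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, x):
        out = self.fc(x)
        # The print statement the tutorial places inside the model.
        print("\tIn Model: input size", x.size(), "output size", out.size())
        return out

model = Model(input_size=5, output_size=2)
x = torch.randn(10, 5)   # batch of 10, matching the outputs shown below
y = model(x)
print("Outside: input size", x.size(), "output_size", y.size())
```

Run on a single device this prints one "In Model" line; wrapped in ``DataParallel`` on a multi-GPU machine, one line appears per GPU, each with its slice of the batch.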
@@ -286,7 +286,7 @@
 " Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])\n",
 "\n",
 "3 GPUs\n",
-"~~~~~~\n",
+"~\n",
 "\n",
 "If you have 3 GPUs, you will see:\n",
 "\n",
@@ -311,7 +311,7 @@
 " Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])\n",
 "\n",
 "8 GPUs\n",
-"~~~~~~~~~~~~~~\n",
+"~~\n",
 "\n",
 "If you have 8, you will see:\n",
 "\n",
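The per-GPU sizes printed in these hunks follow from ``DataParallel`` scattering the batch along dimension 0. As an approximation (an assumption for illustration, not ``DataParallel``'s documented contract), the split behaves like ``torch.chunk``, whose chunk size is ``ceil(batch / n_devices)`` — which reproduces the ``[5, 5]`` sizes shown for 2 GPUs and the ``[4, 5]``/``[2, 5]`` sizes shown for 3:

```python
import torch

batch = torch.randn(10, 5)  # batch of 10 samples, 5 features each

# Split along dim 0 the way torch.chunk does: each chunk holds
# ceil(10 / n_devices) rows, and the last chunk takes the remainder.
for n_devices in (2, 3, 8):
    chunks = batch.chunk(n_devices, dim=0)
    print(n_devices, "devices:", [tuple(c.shape) for c in chunks])
```

Note that ``chunk`` may return fewer pieces than requested: for 8 devices a batch of 10 yields only five chunks of 2 rows each.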
