From 276e2624b9e54d1f26fca190c579ab03028c1875 Mon Sep 17 00:00:00 2001
From: Shane Wong
Date: Mon, 18 Mar 2019 12:33:00 +0800
Subject: [PATCH 1/5] Update 3_neural_networks_tutorial.ipynb

---
 chapter1/3_neural_networks_tutorial.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/chapter1/3_neural_networks_tutorial.ipynb b/chapter1/3_neural_networks_tutorial.ipynb
index 50214ecd..8c9ab28c 100644
--- a/chapter1/3_neural_networks_tutorial.ipynb
+++ b/chapter1/3_neural_networks_tutorial.ipynb
@@ -36,7 +36,7 @@
     "2. 在数据集上迭代; \n",
     "3. 通过神经网络处理输入; \n",
     "4. 计算损失(输出结果和正确值的差值大小);\n",
-    "5. 将梯度反向传播会网络的参数; \n",
+    "5. 将梯度反向传播回网络的参数; \n",
     "6. 更新网络的参数,主要使用如下简单的更新原则: \n",
     "``weight = weight - learning_rate * gradient``\n",
     "\n",

From 386477fbade49c300a507c99d6043849ed6a4881 Mon Sep 17 00:00:00 2001
From: Shane Wong
Date: Mon, 18 Mar 2019 18:15:28 +0800
Subject: [PATCH 2/5] Update 2.1.1.pytorch-basics-tensor.ipynb

---
 chapter2/2.1.1.pytorch-basics-tensor.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/chapter2/2.1.1.pytorch-basics-tensor.ipynb b/chapter2/2.1.1.pytorch-basics-tensor.ipynb
index 001ba7ac..4b80464c 100644
--- a/chapter2/2.1.1.pytorch-basics-tensor.ipynb
+++ b/chapter2/2.1.1.pytorch-basics-tensor.ipynb
@@ -672,7 +672,7 @@
     }
    ],
    "source": [
-    "# 使用[0,1]均匀分布随机初始化二维数组\n",
+    "# 使用[0,1)均匀分布随机初始化二维数组\n",
     "rnd = torch.rand(5, 3)\n",
     "rnd"
    ]

From 5a3938e203f1c02d721d94a293731d218b29dcf7 Mon Sep 17 00:00:00 2001
From: Shane Wong
Date: Mon, 18 Mar 2019 18:16:03 +0800
Subject: [PATCH 3/5] Update 2.1.2-pytorch-basics-autograd.ipynb

---
 chapter2/2.1.2-pytorch-basics-autograd.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/chapter2/2.1.2-pytorch-basics-autograd.ipynb b/chapter2/2.1.2-pytorch-basics-autograd.ipynb
index 02c15674..304b5e0e 100644
--- a/chapter2/2.1.2-pytorch-basics-autograd.ipynb
+++ b/chapter2/2.1.2-pytorch-basics-autograd.ipynb
@@ -41,7 +41,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "在张量创建时,通过设置 requires_grad 标识为Ture来告诉Pytorch需要对该张量进行自动的求导,PyTorch回记录该张量的每一步操作历史并自动计算"
+    "在张量创建时,通过设置 requires_grad 标识为True来告诉PyTorch需要对该张量进行自动求导,PyTorch会记录该张量的每一步操作历史并自动计算"
    ]
   },
   {

From e628b39edfe00f4131f12d72b0a28677e5e26951 Mon Sep 17 00:00:00 2001
From: freesinger
Date: Tue, 19 Mar 2019 00:57:31 +0800
Subject: [PATCH 4/5] fix

---
 chapter2/2.1.1.pytorch-basics-tensor.ipynb         |  8 ++++----
 chapter2/2.1.2-pytorch-basics-autograd.ipynb       |  6 +++---
 chapter2/2.1.3-pytorch-basics-nerual-network.ipynb |  6 +++---
 chapter2/2.1.4-pytorch-basics-data-lorder.ipynb    |  4 ++--
 chapter2/2.2-deep-learning-basic-mathematics.ipynb | 12 ++++++------
 5 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/chapter2/2.1.1.pytorch-basics-tensor.ipynb b/chapter2/2.1.1.pytorch-basics-tensor.ipynb
index 001ba7ac..1b571fd1 100644
--- a/chapter2/2.1.1.pytorch-basics-tensor.ipynb
+++ b/chapter2/2.1.1.pytorch-basics-tensor.ipynb
@@ -672,7 +672,7 @@
     }
    ],
    "source": [
-    "# 使用[0,1]均匀分布随机初始化二维数组\n",
+    "# 使用[0,1)均匀分布随机初始化二维数组\n",
     "rnd = torch.rand(5, 3)\n",
     "rnd"
    ]
@@ -871,9 +871,9 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "pytorch 1.0",
+   "display_name": "Python 3",
    "language": "python",
-   "name": "pytorch1"
+   "name": "python3"
   },
   "language_info": {
    "codemirror_mode": {
@@ -885,7 +885,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.6.6"
+   "version": "3.6.7"
   }
  },
  "nbformat": 4,

diff --git a/chapter2/2.1.2-pytorch-basics-autograd.ipynb b/chapter2/2.1.2-pytorch-basics-autograd.ipynb
index 02c15674..35524b02 100644
--- a/chapter2/2.1.2-pytorch-basics-autograd.ipynb
+++ b/chapter2/2.1.2-pytorch-basics-autograd.ipynb
@@ -41,7 +41,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "在张量创建时,通过设置 requires_grad 标识为Ture来告诉Pytorch需要对该张量进行自动的求导,PyTorch回记录该张量的每一步操作历史并自动计算"
+    "在张量创建时,通过设置 requires_grad 标识为True来告诉PyTorch需要对该张量进行自动求导,PyTorch会记录该张量的每一步操作历史并自动计算"
    ]
   },
   {
@@ -249,9 +249,9 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Pytorch for Deeplearning",
+   "display_name": "Python 3",
    "language": "python",
-   "name": "pytorch"
+   "name": "python3"
   },
   "language_info": {
    "codemirror_mode": {

diff --git a/chapter2/2.1.3-pytorch-basics-nerual-network.ipynb b/chapter2/2.1.3-pytorch-basics-nerual-network.ipynb
index b329d808..4450b59e 100644
--- a/chapter2/2.1.3-pytorch-basics-nerual-network.ipynb
+++ b/chapter2/2.1.3-pytorch-basics-nerual-network.ipynb
@@ -401,9 +401,9 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "pytorch 1.0",
+   "display_name": "Python 3",
    "language": "python",
-   "name": "pytorch1"
+   "name": "python3"
   },
   "language_info": {
    "codemirror_mode": {
@@ -415,7 +415,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.6.6"
+   "version": "3.6.7"
   }
  },
  "nbformat": 4,

diff --git a/chapter2/2.1.4-pytorch-basics-data-lorder.ipynb b/chapter2/2.1.4-pytorch-basics-data-lorder.ipynb
index c0eebae5..975b0ea1 100644
--- a/chapter2/2.1.4-pytorch-basics-data-lorder.ipynb
+++ b/chapter2/2.1.4-pytorch-basics-data-lorder.ipynb
@@ -336,9 +336,9 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Pytorch for Deeplearning",
+   "display_name": "Python 3",
    "language": "python",
-   "name": "pytorch"
+   "name": "python3"
   },
   "language_info": {
    "codemirror_mode": {

diff --git a/chapter2/2.2-deep-learning-basic-mathematics.ipynb b/chapter2/2.2-deep-learning-basic-mathematics.ipynb
index 9e2f13e8..9633d8c8 100644
--- a/chapter2/2.2-deep-learning-basic-mathematics.ipynb
+++ b/chapter2/2.2-deep-learning-basic-mathematics.ipynb
@@ -223,7 +223,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "优化器我们选择最长见的优化方法 SGD,就是每一次迭代计算mini-batch的梯度,然后对参数进行更新,学习率0.01 ,优化器本章后面也会进行介绍"
+    "优化器我们选择最常见的优化方法 SGD,就是每一次迭代计算mini-batch的梯度,然后对参数进行更新,学习率0.01,优化器本章后面也会进行介绍"
    ]
   },
   {
@@ -330,9 +330,9 @@
     "    optim.zero_grad()\n",
     "    # 计算损失\n",
     "    loss = criterion(outputs, labels)\n",
-    "    # 反响传播\n",
+    "    # 反向传播\n",
     "    loss.backward()\n",
-    "    # 使用优化器默认方行优化\n",
+    "    # 使用优化器默认方法优化\n",
     "    optim.step()\n",
     "    if (i%100==0):\n",
     "        #每 100次打印一下损失函数,看看效果\n",
@@ -391,7 +391,7 @@
     }
    ],
    "source": [
-    "predicted =model.forward(torch.from_numpy(x_train)).data.numpy()\n",
+    "predicted = model.forward(torch.from_numpy(x_train)).data.numpy()\n",
     "plt.plot(x_train, y_train, 'go', label = 'data', alpha = 0.3)\n",
     "plt.plot(x_train, predicted, label = 'predicted', alpha = 1)\n",
     "plt.legend()\n",
@@ -676,9 +676,9 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "pytorch 1.0",
+   "display_name": "Python 3",
    "language": "python",
-   "name": "pytorch1"
+   "name": "python3"
   },
   "language_info": {
    "codemirror_mode": {

From 83a8c7b37088963699634f5fbc5413c3dab5af70 Mon Sep 17 00:00:00 2001
From: freesinger
Date: Tue, 19 Mar 2019 21:11:34 +0800
Subject: [PATCH 5/5] fix

---
 .../2.2-deep-learning-basic-mathematics.ipynb    |  2 +-
 ...ep-learning-neural-network-introduction.ipynb |  6 +++---
 chapter2/2.4-cnn.ipynb                           | 16 +++++++++-------
 chapter3/3.1-logistic-regression.ipynb           |  6 +++---
 4 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/chapter2/2.2-deep-learning-basic-mathematics.ipynb b/chapter2/2.2-deep-learning-basic-mathematics.ipynb
index 9633d8c8..c2da1871 100644
--- a/chapter2/2.2-deep-learning-basic-mathematics.ipynb
+++ b/chapter2/2.2-deep-learning-basic-mathematics.ipynb
@@ -543,7 +543,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "#lr参数为学习了率对于SGD来说一般选择0.1 0.01.0.001,如何设置会在后面实战的章节中详细说明\n",
+    "# lr参数为学习率,对于SGD来说一般选择0.1、0.01、0.001,如何设置会在后面实战的章节中详细说明\n",
     "##如果设置了momentum,就是带有动量的SGD,可以不设置\n",
     "optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)"
    ]

diff --git a/chapter2/2.3-deep-learning-neural-network-introduction.ipynb b/chapter2/2.3-deep-learning-neural-network-introduction.ipynb
index ec0a8971..9ccf23cd 100644
--- a/chapter2/2.3-deep-learning-neural-network-introduction.ipynb
+++ b/chapter2/2.3-deep-learning-neural-network-introduction.ipynb
@@ -269,7 +269,7 @@
    "source": [
     "### Leaky Relu 函数\n",
     "为了解决relu函数z<0时的问题出现了 Leaky ReLU函数,该函数保证在z<0的时候,梯度仍然不为0。\n",
-    "ReLU的前半段设为αxαx而非0,通常α=0.01 $ a=max(\\alpha z,z)$"
+    "ReLU的前半段设为αz而非0,通常α=0.01,$a=max(\\alpha z,z)$"
    ]
   },
   {
@@ -361,9 +361,9 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "pytorch 1.0",
+   "display_name": "Python 3",
    "language": "python",
-   "name": "pytorch1"
+   "name": "python3"
   },
   "language_info": {
    "codemirror_mode": {

diff --git a/chapter2/2.4-cnn.ipynb b/chapter2/2.4-cnn.ipynb
index 09cf7429..13b11c84 100644
--- a/chapter2/2.4-cnn.ipynb
+++ b/chapter2/2.4-cnn.ipynb
@@ -36,7 +36,7 @@
    "metadata": {},
    "source": [
     "## 2.4.1 为什么要用卷积神经网络\n",
-    "对于计算机视觉来说,每一个图像是由一个个像素点构成,每个像素点有三个通道,分别代表RGB三种颜色(不计算透明度),我们以手写识别的数据你MNIST举例,每个图像的是一个长宽均为28,channel为1的单色图像,如果使用全连接的网络结构,即,网络中的神经与与相邻层上的每个神经元均连接,那就意味着我们的网络有28 * 28 =784个神经元(RGB3色的话还要*3),hidden层如果使用了15个神经元,需要的参数个数(w和b)就有:28 * 28 * 15 * 10 + 15 + 10=117625个,这个数量级到现在为止也是一个很恐怖的数量级,一次反向传播计算量都是巨大的,这还展示一个单色的28像素大小的图片,如果我们使用更大的像素,计算量可想而知。"
+    "对于计算机视觉来说,每一个图像是由一个个像素点构成,每个像素点有三个通道,分别代表RGB三种颜色(不计算透明度),我们以手写识别的数据集MNIST举例,每个图像是一个长宽均为28,channel为1的单色图像,如果使用全连接的网络结构,即,网络中的神经元与相邻层上的每个神经元均连接,那就意味着我们的网络有28 * 28 =784个神经元(RGB3色的话还要*3),hidden层如果使用了15个神经元,需要的参数个数(w和b)就有:28 * 28 * 15 * 10 + 15 + 10=117625个,这个数量级到现在为止也是一个很恐怖的数量级,一次反向传播计算量都是巨大的,这还只是一个单色的28像素大小的图片,如果我们使用更大的像素,计算量可想而知。"
    ]
   },
   {
@@ -197,11 +197,11 @@
     "2012,Alex Krizhevsky\n",
     "可以算作LeNet的一个更深和更广的版本,可以用来学习更复杂的对象 [论文](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)\n",
     " - 用rectified linear units(ReLU)得到非线性;\n",
-    " - 使用辍 dropout 技巧在训练期间有选择性地忽略单个神经元,来减缓模型的过拟合;\n",
+    " - 使用 dropout 技巧在训练期间有选择性地忽略单个神经元,来减缓模型的过拟合;\n",
     " - 重叠最大池,避免平均池的平均效果;\n",
     " - 使用 GPU NVIDIA GTX 580 可以减少训练时间,这比用CPU处理快了 10 倍,所以可以被用于更大的数据集和图像上。\n",
     "![](alexnet.png)\n",
-    "虽然 AlexNet只有8层,但是它有60M以上的参数总量,Alexnet有一个特殊的计算层,LRN层,做的事是对当前层的输出结果做平滑处理,这里就不做纤细介绍了,\n",
+    "虽然 AlexNet只有8层,但是它有60M以上的参数总量,Alexnet有一个特殊的计算层,LRN层,做的事是对当前层的输出结果做平滑处理,这里就不做详细介绍了,\n",
     "Alexnet的每一阶段(含一次卷积主要计算的算作一层)可以分为8层:\n",
     "1. con - relu - pooling - LRN :\n",
     "要注意的是input层是227*227,而不是paper里面的224,这里可以算一下,主要是227可以整除后面的conv1计算,224不整除。如果一定要用224可以通过自动补边实现,不过在input就补边感觉没有意义,补得也是0,这就是我们上面说的公式的重要性。\n",
@@ -375,14 +375,16 @@
     "\n",
     "Inception架构的主要思想是找出如何让已有的稠密组件接近与覆盖卷积视觉网络中的最佳局部稀疏结构。现在需要找出最优的局部构造,并且重复 几次。之前的一篇文献提出一个层与层的结构,在最后一层进行相关性统计,将高相关性的聚集到一起。这些聚类构成下一层的单元,且与上一层单元连接。假设前 面层的每个单元对应于输入图像的某些区域,这些单元被分为滤波器组。在接近输入层的低层中,相关单元集中在某些局部区域,最终得到在单个区域中的大量聚类,在最后一层通过1x1的卷积覆盖。\n",
     "\n",
-    "上面的话听起来很生硬,其实解释起来很简单:每一模块我们都是用若干个不同的特征提取方式,例如 3x3卷积,5x5卷积,1x1的卷积,pooling等,都计算一下,最后再把这些结果通过Filter Concat来进行连接,找到这里面作用最大的。而网络里面包含了许多这养的模块,这样不用我们人为去判断那个特征提取方式好,网络会自己解决(是不是有点像AUTO ML),在Pytorch中实现了InceptionA-E,还有InceptionAUX 模块。\n",
+    "上面的话听起来很生硬,其实解释起来很简单:每一模块我们都使用若干个不同的特征提取方式,例如 3x3卷积,5x5卷积,1x1的卷积,pooling等,都计算一下,最后再把这些结果通过Filter Concat来进行连接,找到这里面作用最大的。而网络里面包含了许多这样的模块,这样不用我们人为去判断哪个特征提取方式好,网络会自己解决(是不是有点像AUTO ML),在PyTorch中实现了InceptionA-E,还有InceptionAUX 模块。\n",
     "\n"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": 5,
-   "metadata": {},
+   "metadata": {
+    "scrolled": true
+   },
    "outputs": [
     {
      "name": "stdout",
@@ -960,9 +962,9 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "pytorch 1.0",
+   "display_name": "Python 3",
    "language": "python",
-   "name": "pytorch1"
+   "name": "python3"
   },
   "language_info": {
    "codemirror_mode": {

diff --git a/chapter3/3.1-logistic-regression.ipynb b/chapter3/3.1-logistic-regression.ipynb
index c9e45e34..aa0fdb3f 100644
--- a/chapter3/3.1-logistic-regression.ipynb
+++ b/chapter3/3.1-logistic-regression.ipynb
@@ -230,7 +230,7 @@
     "    y_hat=net(x)\n",
     "    loss=criterion(y_hat,y) # 计算损失\n",
     "    optm.zero_grad() # 前一步的损失清零\n",
-    "    loss.backward() # 反响传播\n",
+    "    loss.backward() # 反向传播\n",
     "    optm.step() # 优化\n",
     "    if (i+1)%100 ==0 : # 这里我们每100次输出相关的信息\n",
     "        # 指定模型为计算模式\n",
@@ -260,9 +260,9 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "pytorch 1.0",
+   "display_name": "Python 3",
    "language": "python",
-   "name": "pytorch1"
+   "name": "python3"
   },
   "language_info": {
    "codemirror_mode": {
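
Note on the torch.rand comment touched in the 2.1.1 hunks: torch.rand draws from
the uniform distribution on [0, 1), while the normal-distribution initializer is
torch.randn. A minimal check (a sketch, assuming PyTorch >= 1.0):

import torch

rnd = torch.rand(5, 3)    # uniform samples on [0, 1)
nrm = torch.randn(5, 3)   # standard normal samples, N(0, 1)

print(((rnd >= 0) & (rnd < 1)).all().item())  # True: every value lies in [0, 1)
print(nrm.mean().item(), nrm.std().item())    # roughly 0 and 1 for large tensors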
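
The 2.1.2 hunks describe requires_grad=True telling PyTorch to record every
operation on a tensor and compute gradients automatically. A minimal sketch of
that behaviour (illustrative values, not the notebook's actual cell):

import torch

x = torch.rand(5, 5, requires_grad=True)
y = (x ** 2).sum()   # every operation on x is recorded in the autograd graph
y.backward()         # backpropagation fills x.grad with dy/dx = 2x
print(torch.allclose(x.grad, 2 * x))  # True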
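
The 2.2 hunks walk through the standard SGD loop: zero the gradients, compute
the loss, backpropagate, then step the optimizer, with lr typically 0.1, 0.01
or 0.001 and momentum optional. A self-contained sketch under those conventions
(the linear model and synthetic data below are placeholders, not the
notebook's):

import torch
import torch.nn as nn

model = nn.Linear(1, 1)                 # placeholder model
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

inputs = torch.randn(64, 1)
labels = 3 * inputs + 0.8               # synthetic linear data

for i in range(1000):
    outputs = model(inputs)
    optimizer.zero_grad()               # clear gradients from the previous step
    loss = criterion(outputs, labels)
    loss.backward()                     # backpropagation
    optimizer.step()                    # update: weight -= lr * gradient (plus momentum)
    if i % 100 == 0:
        print(f'step {i}, loss {loss.item():.4f}')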
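
The 2.3 hunk fixes the Leaky ReLU formula to a = max(αz, z) with α = 0.01,
which keeps a small non-zero gradient for z < 0. Verifying the built-in against
the literal formula (a sketch, assuming PyTorch >= 1.0):

import torch
import torch.nn.functional as F

z = torch.linspace(-3, 3, 7)
builtin = F.leaky_relu(z, negative_slope=0.01)  # PyTorch's Leaky ReLU
manual = torch.max(0.01 * z, z)                 # literal a = max(0.01 * z, z)
print(torch.allclose(builtin, manual))          # True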