
Merge pull request zergtant#12 from freesinger/master
fix some character errors
zergtant authored Mar 25, 2019
2 parents 66da51a + b32e1a9 commit 33f2d98
Showing 8 changed files with 28 additions and 25 deletions.
2 changes: 1 addition & 1 deletion chapter1/3_neural_networks_tutorial.ipynb
@@ -36,7 +36,7 @@
"2. 在数据集上迭代; \n",
"3. 通过神经网络处理输入; \n",
"4. 计算损失(输出结果和正确值的差值大小);\n",
"5. 将梯度反向传播会网络的参数\n",
"5. 将梯度反向传播回网络的参数\n",
"6. 更新网络的参数,主要使用如下简单的更新原则: \n",
"``weight = weight - learning_rate * gradient``\n",
"\n",
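The update rule quoted in this cell, ``weight = weight - learning_rate * gradient``, can be written out directly in PyTorch. A minimal sketch, with the tensor shape, toy loss, and learning rate as illustrative assumptions:

```python
import torch

weight = torch.randn(3, 3, requires_grad=True)  # hypothetical parameter
loss = (weight ** 2).sum()   # toy loss so that .grad gets populated
loss.backward()              # step 5: backpropagate gradients to the parameters

learning_rate = 0.01
with torch.no_grad():        # step 6: update without tracking history
    weight -= learning_rate * weight.grad
    weight.grad.zero_()      # clear the gradient for the next iteration
```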
8 changes: 4 additions & 4 deletions chapter2/2.1.1.pytorch-basics-tensor.ipynb
@@ -672,7 +672,7 @@
}
],
"source": [
"# 使用[0,1]均匀分布随机初始化二维数组\n",
"# 使用[0,1]正态分布随机初始化二维数组\n",
"rnd = torch.rand(5, 3)\n",
"rnd"
]
@@ -871,9 +871,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "pytorch 1.0",
"display_name": "Python 3",
"language": "python",
"name": "pytorch1"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -885,7 +885,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
"version": "3.6.7"
}
},
"nbformat": 4,
4 changes: 2 additions & 2 deletions chapter2/2.1.2-pytorch-basics-autograd.ipynb
@@ -249,9 +249,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "Pytorch for Deeplearning",
"display_name": "Python 3",
"language": "python",
"name": "pytorch"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
6 changes: 3 additions & 3 deletions chapter2/2.1.3-pytorch-basics-nerual-network.ipynb
@@ -401,9 +401,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "pytorch 1.0",
"display_name": "Python 3",
"language": "python",
"name": "pytorch1"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -415,7 +415,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
"version": "3.6.7"
}
},
"nbformat": 4,
5 changes: 3 additions & 2 deletions chapter2/2.2-deep-learning-basic-mathematics.ipynb
@@ -208,6 +208,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
+
"优化器我们选择最常见的优化方法 `SGD`,就是每一次迭代计算 `mini-batch` 的梯度,然后对参数进行更新,学习率 0.01 ,优化器本章后面也会进行介绍"
]
},
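A minimal sketch of the optimizer setup described in the cell above; the model here is a stand-in, since the notebook defines its own linear model elsewhere:

```python
import torch
import torch.nn as nn

model = nn.Linear(1, 1)  # hypothetical stand-in model

# SGD as described: each step applies the mini-batch gradient
# to the parameters, with learning rate 0.01.
optim = torch.optim.SGD(model.parameters(), lr=0.01)
```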
@@ -315,7 +316,7 @@
" loss = criterion(outputs, labels)\n",
" # 反向传播\n",
" loss.backward()\n",
" # 使用优化器默认方行优化\n",
" # 使用优化器默认方法优化\n",
" optim.step()\n",
" if (i%100==0):\n",
" #每 100次打印一下损失函数,看看效果\n",
@@ -372,7 +373,7 @@
}
],
"source": [
"predicted =model.forward(torch.from_numpy(x_train)).data.numpy()\n",
"predicted = model.forward(torch.from_numpy(x_train)).data.numpy()\n",
"plt.plot(x_train, y_train, 'go', label = 'data', alpha = 0.3)\n",
"plt.plot(x_train, predicted, label = 'predicted', alpha = 1)\n",
"plt.legend()\n",
6 changes: 3 additions & 3 deletions chapter2/2.3-deep-learning-neural-network-introduction.ipynb
@@ -269,7 +269,7 @@
"source": [
"### Leaky Relu 函数\n",
"为了解决relu函数z<0时的问题出现了 Leaky ReLU函数,该函数保证在z<0的时候,梯度仍然不为0。\n",
"ReLU的前半段设为αxαx而非0,通常α=0.01 $ a=max(\\alpha z,z)$"
"ReLU的前半段设为αz而非0,通常α=0.01 $ a=max(\\alpha z,z)$"
]
},
{
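The corrected formula above, $a=max(\alpha z,z)$ with α = 0.01, is straightforward to check numerically. A minimal sketch:

```python
import torch
import torch.nn.functional as F

z = torch.linspace(-3, 3, 7)
alpha = 0.01
a = torch.max(alpha * z, z)   # Leaky ReLU: alpha*z for z<0, z otherwise

# The built-in equivalent (negative_slope plays the role of alpha):
a_builtin = F.leaky_relu(z, negative_slope=0.01)
print(torch.allclose(a, a_builtin))  # True
```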
@@ -361,9 +361,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "pytorch 1.0",
"display_name": "Python 3",
"language": "python",
"name": "pytorch1"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
16 changes: 9 additions & 7 deletions chapter2/2.4-cnn.ipynb
@@ -36,7 +36,7 @@
"metadata": {},
"source": [
"## 2.4.1 为什么要用卷积神经网络\n",
"对于计算机视觉来说,每一个图像是由一个个像素点构成,每个像素点有三个通道,分别代表RGB三种颜色(不计算透明度),我们以手写识别的数据你MNIST举例,每个图像的是一个长宽均为28,channel为1的单色图像,如果使用全连接的网络结构,即,网络中的神经与与相邻层上的每个神经元均连接,那就意味着我们的网络有28 * 28 =784个神经元(RGB3色的话还要*3),hidden层如果使用了15个神经元,需要的参数个数(w和b)就有:28 * 28 * 15 * 10 + 15 + 10=117625个,这个数量级到现在为止也是一个很恐怖的数量级,一次反向传播计算量都是巨大的,这还展示一个单色的28像素大小的图片,如果我们使用更大的像素,计算量可想而知。"
"对于计算机视觉来说,每一个图像是由一个个像素点构成,每个像素点有三个通道,分别代表RGB三种颜色(不计算透明度),我们以手写识别的数据你MNIST举例,每个图像的是一个长宽均为28,channel为1的单色图像,如果使用全连接的网络结构,即,网络中的神经与相邻层上的每个神经元均连接,那就意味着我们的网络有28 * 28 =784个神经元(RGB3色的话还要*3),hidden层如果使用了15个神经元,需要的参数个数(w和b)就有:28 * 28 * 15 * 10 + 15 + 10=117625个,这个数量级到现在为止也是一个很恐怖的数量级,一次反向传播计算量都是巨大的,这还展示一个单色的28像素大小的图片,如果我们使用更大的像素,计算量可想而知。"
]
},
{
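As a side note, fully connected parameter counts like the one discussed in the cell above can be checked mechanically. A minimal sketch for a 784-input, 15-hidden, 10-output network (the layer sizes are taken from the paragraph; the printed total is simply what `nn.Linear` layers of those sizes contain):

```python
import torch.nn as nn

# Fully connected 784 -> 15 -> 10 network
net = nn.Sequential(
    nn.Linear(28 * 28, 15),  # weights: 784*15, biases: 15
    nn.Linear(15, 10),       # weights: 15*10, biases: 10
)

total = sum(p.numel() for p in net.parameters())
print(total)  # 784*15 + 15 + 15*10 + 10 = 11935
```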
@@ -197,11 +197,11 @@
"2012,Alex Krizhevsky\n",
"可以算作LeNet的一个更深和更广的版本,可以用来学习更复杂的对象 [论文](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)\n",
" - 用rectified linear units(ReLU)得到非线性;\n",
" - 使用辍 dropout 技巧在训练期间有选择性地忽略单个神经元,来减缓模型的过拟合;\n",
" - 使用 dropout 技巧在训练期间有选择性地忽略单个神经元,来减缓模型的过拟合;\n",
" - 重叠最大池,避免平均池的平均效果;\n",
" - 使用 GPU NVIDIA GTX 580 可以减少训练时间,这比用CPU处理快了 10 倍,所以可以被用于更大的数据集和图像上。\n",
"![](alexnet.png)\n",
"虽然 AlexNet只有8层,但是它有60M以上的参数总量,Alexnet有一个特殊的计算层,LRN层,做的事是对当前层的输出结果做平滑处理,这里就不做纤细介绍了\n",
"虽然 AlexNet只有8层,但是它有60M以上的参数总量,Alexnet有一个特殊的计算层,LRN层,做的事是对当前层的输出结果做平滑处理,这里就不做详细介绍了\n",
"Alexnet的每一阶段(含一次卷积主要计算的算作一层)可以分为8层:\n",
"1. con - relu - pooling - LRN :\n",
"要注意的是input层是227*227,而不是paper里面的224,这里可以算一下,主要是227可以整除后面的conv1计算,224不整除。如果一定要用224可以通过自动补边实现,不过在input就补边感觉没有意义,补得也是0,这就是我们上面说的公式的重要性。\n",
@@ -375,14 +375,16 @@
"\n",
"Inception架构的主要思想是找出如何让已有的稠密组件接近与覆盖卷积视觉网络中的最佳局部稀疏结构。现在需要找出最优的局部构造,并且重复 几次。之前的一篇文献提出一个层与层的结构,在最后一层进行相关性统计,将高相关性的聚集到一起。这些聚类构成下一层的单元,且与上一层单元连接。假设前 面层的每个单元对应于输入图像的某些区域,这些单元被分为滤波器组。在接近输入层的低层中,相关单元集中在某些局部区域,最终得到在单个区域中的大量聚类,在最后一层通过1x1的卷积覆盖。\n",
"\n",
"上面的话听起来很生硬,其实解释起来很简单:每一模块我们都是用若干个不同的特征提取方式,例如 3x3卷积,5x5卷积,1x1的卷积,pooling等,都计算一下,最后再把这些结果通过Filter Concat来进行连接,找到这里面作用最大的。而网络里面包含了许多这养的模块,这样不用我们人为去判断那个特征提取方式好,网络会自己解决(是不是有点像AUTO ML),在Pytorch中实现了InceptionA-E,还有InceptionAUX 模块。\n",
"上面的话听起来很生硬,其实解释起来很简单:每一模块我们都是用若干个不同的特征提取方式,例如 3x3卷积,5x5卷积,1x1的卷积,pooling等,都计算一下,最后再把这些结果通过Filter Concat来进行连接,找到这里面作用最大的。而网络里面包含了许多这养的模块,这样不用我们人为去判断哪个特征提取方式好,网络会自己解决(是不是有点像AUTO ML),在Pytorch中实现了InceptionA-E,还有InceptionAUX 模块。\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
@@ -960,9 +962,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "pytorch 1.0",
"display_name": "Python 3",
"language": "python",
"name": "pytorch1"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
6 changes: 3 additions & 3 deletions chapter3/3.1-logistic-regression.ipynb
@@ -230,7 +230,7 @@
" y_hat=net(x)\n",
" loss=criterion(y_hat,y) # 计算损失\n",
" optm.zero_grad() # 前一步的损失清零\n",
" loss.backward() # 反响传播\n",
" loss.backward() # 反向传播\n",
" optm.step() # 优化\n",
" if (i+1)%100 ==0 : # 这里我们每100次输出相关的信息\n",
" # 指定模型为计算模式\n",
@@ -260,9 +260,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "pytorch 1.0",
"display_name": "Python 3",
"language": "python",
"name": "pytorch1"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
