Commit

Chapter20 ok! NEED A BREAK ...
Former-commit-id: 152c92f06c602a5e9453f705532f4459b12b1243
SwordYork committed Dec 14, 2016
1 parent 9cf6dd9 commit f260a52
Showing 8 changed files with 1,954 additions and 14 deletions.
1 change: 0 additions & 1 deletion .gitignore
@@ -21,7 +21,6 @@
*.sh
*.glo
*.ist
Chapter20
agreement.jpg
dlbook_cn_public.tex
dlbook_cn_public.bib
1,850 changes: 1,850 additions & 0 deletions Chapter20/deep_generative_models.tex

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion Chapter7/regularization.tex
@@ -807,7 +807,7 @@ \section{\glsentrytext{sparse}\glsentrytext{representation}}
where $\alpha \in [0, \infty]$ weights the relative contribution of the norm penalty; larger $\alpha$ corresponds to more \gls{regularization}.

Just as an $L^1$ penalty on the parameters induces parameter \gls{sparse}ness, an $L^1$ penalty on the elements of the \gls{representation} induces a \gls{sparse} \gls{representation}:
$\Omega(\Vh) = \norm(\Vh)_1 = \sum_i |h_i|$
$\Omega(\Vh) = \norm{\Vh}_1 = \sum_i |h_i|$
Of course, the $L^1$ penalty is only one choice of penalty that can lead to a \gls{sparse} \gls{representation}.
Others include the penalty derived from a \ENNAME{Student} $t$ prior on the \gls{representation} \citep{Olshausen+Field-1996,Bergstra-Phd-2011} and the \gls{KL} penalty \citep{Larochelle+Bengio-2008}, which is useful for \gls{representation}s whose elements are constrained to lie on the unit interval.
\cite{HonglakL2008-small} and \cite{Goodfellow2009} both provide examples of \gls{regularization} strategies based on the activation averaged across several examples, $\frac{1}{m}\sum_i \Vh^{(i)}$, encouraging it to approach some target value, such as a vector with every entry equal to $.01$.
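As an aside, the two penalties discussed in this hunk can be sketched numerically. This is an illustrative sketch only, not code from the book or this repository; the function names are made up for the example.

```python
# Illustrative sketch: the L1 representation penalty Omega(h) = alpha * sum_i |h_i|,
# and a penalty on each unit's activation averaged over m examples, pushed toward
# a small target value such as 0.01.

def l1_penalty(h, alpha=1.0):
    # Omega(h) = alpha * ||h||_1 -- encourages sparse representations.
    return alpha * sum(abs(x) for x in h)

def mean_activation_penalty(H, target=0.01):
    # H is a list of m representations (one per example). Penalize the squared
    # distance between each unit's mean activation (1/m * sum_i h_j^(i)) and
    # the target value.
    m = len(H)
    n = len(H[0])
    return sum(
        (sum(H[i][j] for i in range(m)) / m - target) ** 2
        for j in range(n)
    )

h = [0.0, -0.5, 0.0, 2.0]
print(l1_penalty(h))  # 2.5
print(mean_activation_penalty([h, h], target=0.0))  # sum of squared unit means
```

A unit whose mean activation already equals the target contributes nothing to the second penalty, which is what drives most units toward near-zero average activation.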
10 changes: 5 additions & 5 deletions README.md
@@ -23,7 +23,7 @@
Target audience
--------------------

Please download the [PDF](https://github.com/exacity/deeplearningbook-chinese/releases/download/v0.1.2-alpha/dlbook_cn_v0.1.2-alpha.pdf) directly to read it (the PDF was updated on December 12).
Please download the [PDF](https://github.com/exacity/deeplearningbook-chinese/releases/download/v0.2-alpha/dlbook_cn_v0.2-alpha.pdf) directly to read it (the PDF was updated on December 12).
This version is definitely a rough read; we suggest that those with good English, and researchers, read the original edition directly.
This version is aimed at readers whose English is not strong and who are eager to get started with deep learning. Anyone willing to help proofread is also welcome to read it, as long as they don't mind the rough state.

@@ -36,7 +36,7 @@
- Due to copyright issues, we would not upload figures and the bib file. We apologize.
- May be used for study and research purposes only; commercial use of any kind is not permitted. Thank you!
- We release a new version roughly every week; the [PDF](https://github.com/exacity/deeplearningbook-chinese/releases/download/v0.1.2-alpha/dlbook_cn_v0.1.2-alpha.pdf) file is updated daily.
- We release a new version roughly every week; the [PDF](https://github.com/exacity/deeplearningbook-chinese/releases/download/v0.2-alpha/dlbook_cn_v0.2-alpha.pdf) file is updated daily.
- Please don't watch the repository; your inbox might explode.
- **Don't print this version yet; it is not worth printing and would be a waste of money.** Give us a month and we will produce a version we are satisfied with ourselves. Printed copies are only for study and for hunting errors; once the book is formally published, please support the official printed edition.

@@ -45,10 +45,10 @@
TODO
---------

1. Translate the figure captions
1. Translate the figure captions, tables, and algorithms
2. Fill in the missing links
3. Smooth out the wording
4. Chapter 20 is being proofread; this chapter is hard, so it may take another 2 weeks
4. ~~Chapter 20 is being proofread; this chapter is hard, so it may take another 2 weeks~~; proofreading finished on December 14


If you really run into problems, send an email to `echo c3dvcmQueW9ya0BnbWFpbC5jb20K | base64 -d`
@@ -60,7 +60,7 @@ TODO

If we adopt your suggestions, we will list you here. See [acknowledgments_github.md](https://github.com/exacity/deeplearningbook-chinese/blob/master/acknowledgments_github.md) for details.

@tttwwy @tankeco @fairmiracle @GageGao @huangpingchun @MaHongP @acgtyrant @yanhuibin315 @Buttonwood @titicacafz @weijy026a @RuiZhang1993 @zymiboxpay @xingkongliang @oisc @tielei @yuduowu @Qingmu
@tttwwy @tankeco @fairmiracle @GageGao @huangpingchun @MaHongP @acgtyrant @yanhuibin315 @Buttonwood @titicacafz @weijy026a @RuiZhang1993 @zymiboxpay @xingkongliang @oisc @tielei @yuduowu @Qingmu @xiaomingabc



Expand Down
19 changes: 18 additions & 1 deletion acknowledgments_github.md
@@ -20,24 +20,41 @@
- @oisc ==> Chapter2 formulas 2.83 and 2.84 are wrong; should be arg max



December 9, 2016
------------
- @fairmiracle ==> Chapter4 formulas and wording; see the [issue](https://github.com/exacity/deeplearningbook-chinese/issues/3#issuecomment-265854595) for details.
- @huangpingchun ==> Chapter1 邻域==>领域
- @tielei ==> Chapter9 equivariance
- @yuduowu ==> contact issue
- @minoriwww ==> Chapter2 "排布"==>"排列"
- @khty2000 ==> Chapter2 "X_{-S}" issue; matrix rows and columns wrong
- @khty2000 ==> Chapter2 "X{-S}" issue; matrix rows and columns wrong
- @Qingmu ==> problem opening the files with WinEdit
- @tielei ==> Chapter9 formula index problem



December 10, 2016
-------------
- @fairmiracle ==> Chapter5 "\Vy"==>"\Vx"; extra parentheses; proofreading suggestions for "in action", "more frequently", "more formally"; translation suggestion for "imputation"
- @huangpingchun ==> Chapter5 translation suggestions for "not completely formal or distinct concepts", the VC dimension, and "imputation of missing data"
- @tielei ==> Chapter9 "full convolution"; translation suggestions; "tiling range"



December 12, 2016
-------------
- @huangpingchun ==> Chapter7 duplicated "模型平均" (model averaging)



December 13, 2016
-------------
- @huangpingchun ==> consistent translation of "inference"
- @xiaomingabc ==> Chapter1 ungrammatical sentences



December 14, 2016
--------------
- @fairmiracle ==> Chapter7 wrong brackets in \norm
2 changes: 1 addition & 1 deletion deep_learning_research.tex
@@ -10,4 +10,4 @@ \part{深度学习研究}
\input{Chapter17/monte_carlo_methods.tex}
\input{Chapter18/confronting_the_partition_function.tex}
\input{Chapter19/approximate_inference.tex}
%\input{Chapter20/deep_generative_models.tex}
\input{Chapter20/deep_generative_models.tex}
1 change: 1 addition & 0 deletions dlbook_cn.tex
@@ -67,6 +67,7 @@
% my command
\newcommand{\firstgls}[1]{\textbf{\gls{#1}}(\glsdesc{#1})}
\newcommand{\firstacr}[1]{\textbf{\gls{#1}}~(\glssymbol{#1})}
\newcommand{\glsacr}[1]{\gls{#1}~(\glssymbol{#1})}
\newcommand{\firstall}[1]{\textbf{\gls{#1}}~(\glsdesc{#1}, \glssymbol{#1})}
\newcommand{\ENNAME}[1]{\text{#1}}
\newcommand{\NUMTEXT}[1]{\text{#1}}
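For illustration only (not part of the commit): the new `\glsacr` macro mirrors the existing `\firstacr`, but without the bold first-mention formatting. A hypothetical use, assuming a glossary entry such as the `MMD` one added in terminology.tex:

```latex
% \firstacr{MMD} -> bold glossary term followed by ~(MMD), for the first mention
% \glsacr{MMD}   -> plain glossary term followed by ~(MMD), for later mentions
训练\gls{generator_network}时可以最小化\glsacr{MMD}。
```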
83 changes: 78 additions & 5 deletions terminology.tex
@@ -253,8 +253,8 @@
\newglossaryentry{poor_conditioning}
{
name=病态条件数,
description={Poor Conditioning},
sort={Poor Conditioning},
description={poor conditioning},
sort={poor conditioning},
}

\newglossaryentry{objective_function}
@@ -266,7 +266,7 @@

\newglossaryentry{criterion}
{
name=判据,
name=准则,
description={criterion},
sort={criterion},
}
@@ -1239,7 +1239,7 @@

\newglossaryentry{weight_scaling_inference_rule}
{
name=权重比例推理规则,
name=权重比例推断规则,
description={weight scaling inference rule},
sort={weight scaling inference rule},
}
@@ -5190,7 +5190,7 @@

\newglossaryentry{auto_regressive_network}
{
name=自回归网络,
name=自动回归网络,
description={auto-regressive network},
sort={auto-regressive network}
}
@@ -5201,3 +5201,76 @@
description={generator network},
sort={generator network}
}

\newglossaryentry{discriminator_network}
{
name=判别器网络,
description={discriminator network},
sort={discriminator network},
}

\newglossaryentry{generative_moment_matching_network}
{
name=生成矩匹配网络,
description={generative moment matching network},
sort={generative moment matching network},
}

\newglossaryentry{moment_matching}
{
name=矩匹配,
description={moment matching},
sort={moment matching},
}

\newglossaryentry{moment}
{
name=矩,
description={moment},
sort={moment},
}

\newglossaryentry{MMD}
{
name=最大平均偏差,
description={maximum mean discrepancy},
sort={maximum mean discrepancy},
symbol={MMD}
}

\newglossaryentry{linear_auto_regressive_network}
{
name=线性自动回归网络,
description={linear auto-regressive network},
sort={linear auto-regressive network}
}

\newglossaryentry{neural_auto_regressive_network}
{
name=神经自动回归网络,
description={neural auto-regressive network},
sort={neural auto-regressive network}
}

\newglossaryentry{NADE}
{
name=神经自动回归密度估计器,
description={neural auto-regressive density estimator},
sort={neural auto-regressive density estimator},
symbol={NADE}
}

\newglossaryentry{detailed_balance}
{
name=细致平衡,
description={detailed balance},
sort={detailed balance},
}

\newglossaryentry{ABC}
{
name=近似贝叶斯计算,
description={approximate Bayesian computation},
sort={approximate Bayesian computation},
symbol={ABC}
}
