diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 000000000..88263a450
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,2 @@
+*.ipynb linguist-detectable=false
+.pages linguist-language=YAML
diff --git a/docs/coding/machine-learning/.pages b/docs/coding/machine-learning/.pages
index f7086f0f0..f3070fe99 100644
--- a/docs/coding/machine-learning/.pages
+++ b/docs/coding/machine-learning/.pages
@@ -1,3 +1,4 @@
nav:
 - Linear Models: linear-models.ipynb
- - Decision Trees: decision-tree.ipynb
\ No newline at end of file
+ - Decision Trees: decision-tree.ipynb
+ - Bayesian Optimization: bayesian-optimization.ipynb
diff --git a/docs/coding/machine-learning/bayesian-optimization.ipynb b/docs/coding/machine-learning/bayesian-optimization.ipynb
new file mode 100644
index 000000000..be40ecba3
--- /dev/null
+++ b/docs/coding/machine-learning/bayesian-optimization.ipynb
@@ -0,0 +1,7869 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Bayesian Optimization\n",
+ "\n",
+ "Bayesian optimization is a family of methods for optimizing black-box functions. A black-box function has no known closed-form expression; the only way to search for its maximum is to feed in inputs and observe the corresponding outputs. Drawing on ideas from Bayesian statistics, Bayesian optimization initially treats the function as a completely random function, and refines its estimate of the function (the posterior) as observations accumulate. At each iteration, the algorithm evaluates an acquisition function under the current posterior to choose the next point to observe. Because it can often locate the global optimum within relatively few evaluations, Bayesian optimization is widely used in practice.\n",
+ "\n",
+ "Suppose the objective function takes the following form:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import numpy as np\n",
+ "from scipy.stats import norm\n",
+ "\n",
+ "def objective(x):\n",
+ " return norm.pdf(x, 3, 2) * 1.5 + norm.pdf(x, 7, 1) + norm.pdf(x, 11, 2)"
+ ]
+ },
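The loop described above can be sketched directly on this objective. The snippet below is a minimal illustration, not part of the notebook: it pairs a hand-rolled Gaussian-process posterior (RBF kernel, closed-form regression) with an expected-improvement acquisition maximized over a grid. The kernel length scale, noise jitter, `xi` exploration parameter, and iteration count are all arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def objective(x):
    return norm.pdf(x, 3, 2) * 1.5 + norm.pdf(x, 7, 1) + norm.pdf(x, 11, 2)

def rbf_kernel(a, b, length=1.0):
    # Squared-exponential kernel between two 1-D point sets
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def gp_posterior(x_obs, y_obs, x_grid, noise=1e-6):
    # Closed-form GP regression: posterior mean and std on x_grid
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_obs, x_grid)
    mu = Ks.T @ np.linalg.solve(K, y_obs)
    v = np.linalg.solve(K, Ks)
    var = 1.0 - np.sum(Ks * v, axis=0)  # prior variance is 1 for this kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best, xi=0.01):
    # EI acquisition: expected gain over the current best observation
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
x_grid = np.linspace(0, 14, 500)
x_obs = rng.uniform(0, 14, 3)          # a few random initial observations
y_obs = objective(x_obs)

for _ in range(15):
    mu, sigma = gp_posterior(x_obs, y_obs, x_grid)
    ei = expected_improvement(mu, sigma, y_obs.max())
    x_next = x_grid[np.argmax(ei)]     # query where EI is highest
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

print(x_obs[np.argmax(y_obs)], y_obs.max())
```

With the three-peaked objective above, the loop concentrates its queries near the tallest peak around x = 7 while still spending a few evaluations exploring regions where the posterior is uncertain.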
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a first step, let's visualize the function:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "image/svg+xml": [
+ "\n",
+ "\n",
+ "\n"
+ ],
+ "text/plain": [
+ "