Commit

Update course book
actions-user committed Apr 3, 2024
1 parent 11ef449 commit 6fee1b1
Showing 63 changed files with 1,506 additions and 1,287 deletions.
2 changes: 1 addition & 1 deletion .buildinfo
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 09fb4996dfec79b54da73f2e49efff41
config: 665b87bc356bcf7d6c6e949e546d49c8
tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Expand Up @@ -8,7 +8,7 @@
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W3D1_TimeSeriesAndNaturalLanguageProcessing/student/W3D1_Tutorial2.ipynb\" target=\"_blank\"><img alt=\"Open In Colab\" src=\"https://colab.research.google.com/assets/colab-badge.svg\"/></a>   <a href=\"https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W3D1_TimeSeriesAndNaturalLanguageProcessing/student/W3D1_Tutorial2.ipynb\" target=\"_blank\"><img alt=\"Open in Kaggle\" src=\"https://kaggle.com/static/images/open-in-kaggle.svg\"/></a>"
"<a href=\"https://colab.research.google.com/github/wangshaonan/course-content-dl/blob/main/tutorials/W3D1_TimeSeriesAndNaturalLanguageProcessing/student/W3D1_Tutorial2.ipynb\" target=\"_blank\"><img alt=\"Open In Colab\" src=\"https://colab.research.google.com/assets/colab-badge.svg\"/></a>   <a href=\"https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W3D1_TimeSeriesAndNaturalLanguageProcessing/student/W3D1_Tutorial2.ipynb\" target=\"_blank\"><img alt=\"Open in Kaggle\" src=\"https://kaggle.com/static/images/open-in-kaggle.svg\"/></a>"
]
},
{
Expand All @@ -29,7 +29,7 @@
"\n",
"__Content editors:__ Konrad Kording, Shaonan Wang\n",
"\n",
"__Production editors:__ Konrad Kording, Spiros Chavlis"
"__Production editors:__ Konrad Kording, Spiros Chavlis, Konstantine Tsafatinos"
]
},
{
Expand Down Expand Up @@ -1801,11 +1801,44 @@
"source": [
"## Play around with LLMs\n",
"\n",
"Try the following questions with [ChatGPT](https://openai.com/blog/chatgpt) (GPT3.5 without access to the web) and with GPTBing in creative mode (GPT4 with access to the web). Note that the latter requires installing Microsoft Edge.\n",
"1. Try using an LLM's API to perform tasks, for example using the GPT-2 API to extend text from a provided context. To do this, make sure you have a HuggingFace account and an API token.\n",
"\n",
"Pick someone you know who is likely to have a web presence but is not super famous (not Musk or Trump). Ask GPT for a two-paragraph biography. How good is it?\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"import requests\n",
"\n",
"def query(payload, model_id, api_token):\n",
" headers = {\"Authorization\": f\"Bearer {api_token}\"}\n",
" API_URL = f\"https://api-inference.huggingface.co/models/{model_id}\"\n",
" response = requests.post(API_URL, headers=headers, json=payload)\n",
" return response.json()\n",
"\n",
"Ask it something like “What is the US, UK, Germany, China, and Japan's per capita income over the past ten years? Plot the data in a single figure” (depending on when and where you run this, you will need to paste the resulting Python code into a colab notebook). Try asking it questions about the data or the definition of “per capita income” used. How good is it?"
"model_id = \"gpt2\"\n",
"api_token = \"hf_****\" # get yours at hf.co/settings/tokens\n",
"data = query(\"The goal of life is\", model_id, api_token)\n",
"print(data)"
]
},
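For reference, the Inference API used in the cell above expects a JSON payload such as `{"inputs": ...}`, optionally with a `parameters` object (e.g. `max_new_tokens`). A minimal sketch that prepares the request without sending it, so it can be inspected offline (the model id and token placeholder are illustrative):

```python
import requests

def build_request(prompt, model_id, api_token):
    """Prepare (without sending) a HuggingFace Inference API request."""
    url = f"https://api-inference.huggingface.co/models/{model_id}"
    headers = {"Authorization": f"Bearer {api_token}"}
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": 20}}
    return requests.Request("POST", url, headers=headers, json=payload).prepare()

req = build_request("The goal of life is", "gpt2", "hf_****")
print(req.method, req.url)
```

Calling `requests.post(url, headers=headers, json=payload)` with the same arguments, as the notebook cell does, sends this request and returns the generated continuation as JSON.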
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"2. Try the following questions with [ChatGPT](https://openai.com/blog/chatgpt) (GPT-3.5 without access to the web) and with GPTBing in creative mode (GPT-4 with access to the web). Note that the latter requires installing Microsoft Edge.\n",
"\n",
" Pick someone you know who is likely to have a web presence but is not super famous (not Musk or Trump). Ask GPT for a two-paragraph biography. How good is it?\n",
"\n",
" Ask it something like “What is the US, UK, Germany, China, and Japan's per capita income over the past ten years? Plot the data in a single figure” (depending on when and where you run this, you will need to paste the resulting Python code into a colab notebook). Try asking it questions about the data or the definition of “per capita income” used. How good is it?"
]
},
{
Expand Down Expand Up @@ -1976,7 +2009,7 @@
"name": "python3"
},
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
Expand All @@ -1990,10 +2023,10 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
"version": "3.9.19"
},
"toc-autonumbering": true
},
"nbformat": 4,
"nbformat_minor": 0
"nbformat_minor": 4
}
4 changes: 2 additions & 2 deletions projects/ComputerVision/data_augmentation.html
Expand Up @@ -1763,8 +1763,8 @@ <h2>Cutout<a class="headerlink" href="#cutout" title="Permalink to this heading"
<section id="mixup">
<h2>Mixup<a class="headerlink" href="#mixup" title="Permalink to this heading">#</a></h2>
<p>Mixup is a data augmentation technique that combines pairs of examples via a convex combination of the images and the labels. Given images <span class="math notranslate nohighlight">\(x_i\)</span> and <span class="math notranslate nohighlight">\(x_j\)</span> with labels <span class="math notranslate nohighlight">\(y_i\)</span> and <span class="math notranslate nohighlight">\(y_j\)</span>, respectively, and <span class="math notranslate nohighlight">\(\lambda \in [0, 1]\)</span>, mixup creates a new image <span class="math notranslate nohighlight">\(\hat{x}\)</span> with label <span class="math notranslate nohighlight">\(\hat{y}\)</span> the following way:</p>
<div class="amsmath math notranslate nohighlight" id="equation-0b39b23f-2e3c-4ab0-88c5-80534c90bf06">
<span class="eqno">(128)<a class="headerlink" href="#equation-0b39b23f-2e3c-4ab0-88c5-80534c90bf06" title="Permalink to this equation">#</a></span>\[\begin{align}
<div class="amsmath math notranslate nohighlight" id="equation-1fee2fe6-b220-4c40-8d08-7b46f712535c">
<span class="eqno">(128)<a class="headerlink" href="#equation-1fee2fe6-b220-4c40-8d08-7b46f712535c" title="Permalink to this equation">#</a></span>\[\begin{align}
\hat{x} &amp;= \lambda x_i + (1 - \lambda) x_j \\
\hat{y} &amp;= \lambda y_i + (1 - \lambda) y_j
\end{align}\]</div>
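The equations above translate directly into code. A minimal NumPy sketch (in the original mixup recipe, λ is drawn from a Beta(α, α) distribution per batch; here it is fixed for illustration):

```python
import numpy as np

def mixup(x_i, x_j, y_i, y_j, lam):
    # Convex combination of the images and of the (one-hot) labels
    x_hat = lam * x_i + (1 - lam) * x_j
    y_hat = lam * y_i + (1 - lam) * y_j
    return x_hat, y_hat

# Two toy 2x2 "images" with one-hot labels, mixed with lambda = 0.7
x_i, x_j = np.ones((2, 2)), np.zeros((2, 2))
y_i, y_j = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_hat, y_hat = mixup(x_i, x_j, y_i, y_j, 0.7)
print(y_hat)  # [0.7 0.3]
```

The mixed label is soft (here 70% class i, 30% class j), so training uses the same cross-entropy loss applied to both label terms.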
Expand Down
66 changes: 33 additions & 33 deletions projects/modelingsteps/Example_Deep_Learning_Project.html
Expand Up @@ -2067,33 +2067,33 @@ <h2>Build model<a class="headerlink" href="#build-model" title="Permalink to thi
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Epoch [100/500], Step [1/2], Loss: 1.5181, Accuracy: 44.57%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Epoch [100/500], Step [1/2], Loss: 0.8369, Accuracy: 71.51%
------------------------------------------
Epoch [100/500], Step [2/2], Loss: 1.3739, Accuracy: 49.61%
Epoch [100/500], Step [2/2], Loss: 0.9079, Accuracy: 67.83%
------------------------------------------
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Epoch [200/500], Step [1/2], Loss: 1.1389, Accuracy: 59.11%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Epoch [200/500], Step [1/2], Loss: 0.6369, Accuracy: 77.52%
------------------------------------------
Epoch [200/500], Step [2/2], Loss: 1.0898, Accuracy: 59.50%
Epoch [200/500], Step [2/2], Loss: 0.5581, Accuracy: 81.40%
------------------------------------------
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Epoch [300/500], Step [1/2], Loss: 0.9293, Accuracy: 64.53%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Epoch [300/500], Step [1/2], Loss: 0.5327, Accuracy: 80.23%
------------------------------------------
Epoch [300/500], Step [2/2], Loss: 0.9329, Accuracy: 65.31%
Epoch [300/500], Step [2/2], Loss: 0.4692, Accuracy: 85.08%
------------------------------------------
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Epoch [400/500], Step [1/2], Loss: 0.8864, Accuracy: 69.38%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Epoch [400/500], Step [1/2], Loss: 0.3940, Accuracy: 87.98%
------------------------------------------
Epoch [400/500], Step [2/2], Loss: 0.8522, Accuracy: 69.19%
Epoch [400/500], Step [2/2], Loss: 0.4626, Accuracy: 82.56%
------------------------------------------
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Epoch [500/500], Step [1/2], Loss: 0.7897, Accuracy: 71.90%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Epoch [500/500], Step [1/2], Loss: 0.3307, Accuracy: 90.50%
------------------------------------------
Epoch [500/500], Step [2/2], Loss: 0.7659, Accuracy: 72.67%
Epoch [500/500], Step [2/2], Loss: 0.3613, Accuracy: 87.60%
------------------------------------------
</pre></div>
</div>
Expand Down Expand Up @@ -2123,7 +2123,7 @@ <h2>Build model<a class="headerlink" href="#build-model" title="Permalink to thi
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Test Accuracy of the model on the 172 test moves: 66.860%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Test Accuracy of the model on the 172 test moves: 84.302%
</pre></div>
</div>
</div>
Expand All @@ -2137,7 +2137,7 @@ <h2>Build model<a class="headerlink" href="#build-model" title="Permalink to thi
</div>
</div>
<div class="cell_output docutils container">
<img alt="../../_images/cba2349525942ed7116fe4ee60b69551fbe8db7877d96f2b5f73a68370265c04.png" src="../../_images/cba2349525942ed7116fe4ee60b69551fbe8db7877d96f2b5f73a68370265c04.png" />
<img alt="../../_images/78566d46f1f05e3965d54f9bb1efec22378c1d3f0317e8f5faf8b50b2220d972.png" src="../../_images/78566d46f1f05e3965d54f9bb1efec22378c1d3f0317e8f5faf8b50b2220d972.png" />
</div>
</div>
<p>The errors vary each time the model is run, but a common error seems to be that head scratching is predicted from other movements that also involve the arms a lot: throw/catch, hand clapping, phone talking, checking watch, hand waving, taking photo. If we train the model longer, these errors tend to go away as well. For some reason, cross-legged sitting is sometimes misclassified as crawling, but this doesn’t always happen.</p>
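Error patterns like these are easiest to read off a confusion matrix. A minimal sketch with dummy labels (the arrays below are illustrative stand-ins for the real test-set predictions, not the actual model outputs):

```python
import numpy as np

# Hypothetical true/predicted class indices for three arm-heavy actions
classes = ["head scratching", "throw/catch", "hand clapping"]
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 0, 2, 2])

# cm[t, p] counts samples of true class t predicted as class p
cm = np.zeros((len(classes), len(classes)), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1
print(cm)
# [[1 1 0]
#  [1 1 0]
#  [0 0 2]]
```

Off-diagonal entries locate the confusions; `sklearn.metrics.confusion_matrix` computes the same matrix, and `ConfusionMatrixDisplay` plots it with class labels.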
Expand Down Expand Up @@ -2225,10 +2225,10 @@ <h1>Step 8: Modeling completion<a class="headerlink" href="#step-8-modeling-comp
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>71.51162790697676
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>73.83720930232558
</pre></div>
</div>
<img alt="../../_images/b262307ae22113985561c99072e9060eec913dd4399f366e530bd1a3924d7810.png" src="../../_images/b262307ae22113985561c99072e9060eec913dd4399f366e530bd1a3924d7810.png" />
<img alt="../../_images/3d16ae3899ad056181e31c4171b56e0d6efa3a29eb3ddf32ed9cf0fdc143a92b.png" src="../../_images/3d16ae3899ad056181e31c4171b56e0d6efa3a29eb3ddf32ed9cf0fdc143a92b.png" />
</div>
</div>
<p>That is some pretty good performance based on only 6 / 24 joints!</p>
Expand Down Expand Up @@ -2274,32 +2274,32 @@ <h1>Step 9: Model evaluation<a class="headerlink" href="#step-9-model-evaluation
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>*** FITTING: Left Leg
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>limb performance: 79.65%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>limb performance: 72.67%

*** FITTING: Right Leg
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>limb performance: 75.00%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>limb performance: 70.93%

*** FITTING: Left Arm
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>limb performance: 55.23%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>limb performance: 66.86%

*** FITTING: Right Arm
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>limb performance: 43.60%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>limb performance: 38.95%

*** FITTING: Torso
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>limb performance: 81.98%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>limb performance: 79.65%

*** FITTING: Head
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>limb performance: 52.33%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>limb performance: 50.00%
</pre></div>
</div>
</div>
Expand Down Expand Up @@ -2353,44 +2353,44 @@ <h1>Step 9: Model evaluation<a class="headerlink" href="#step-9-model-evaluation
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>*** FITTING: limbs only
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 77.33%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 76.16%
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 6.98%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 68.02%
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 82.56%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 75.58%
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 87.79%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 73.26%
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 70.35%
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 6.98%
median performance: 73.84%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 85.47%
median performance: 74.42%

*** FITTING: limbs+torso+head
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 69.77%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 81.98%
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 73.26%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 81.98%
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 81.40%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 86.63%
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 81.98%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 79.07%
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 83.14%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 79.65%
</pre></div>
</div>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 84.30%
median performance: 81.69%
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>performance: 86.05%
median performance: 81.98%
</pre></div>
</div>
</div>
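The median figures above follow directly from the listed per-run performances; a quick NumPy check using the values printed in the output:

```python
import numpy as np

# Per-run performance (%) copied from the output above
limbs_only = [76.16, 68.02, 75.58, 73.26, 70.35, 85.47]
limbs_torso_head = [81.98, 81.98, 86.63, 79.07, 79.65, 86.05]

print(round(float(np.median(limbs_only)), 2))        # 74.42
print(round(float(np.median(limbs_torso_head)), 2))  # 81.98
```

With an even number of runs, `np.median` averages the two middle values, which is why the reported medians need not coincide with any single run.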
Expand Down
4 changes: 2 additions & 2 deletions projects/modelingsteps/ModelingSteps_10_DL.html

Large diffs are not rendered by default.

16 changes: 8 additions & 8 deletions projects/modelingsteps/ModelingSteps_1through2_DL.html

Large diffs are not rendered by default.

10 changes: 5 additions & 5 deletions projects/modelingsteps/ModelingSteps_3through4_DL.html

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions projects/modelingsteps/ModelingSteps_5through6_DL.html

Large diffs are not rendered by default.

8 changes: 4 additions & 4 deletions projects/modelingsteps/ModelingSteps_7through9_DL.html

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions projects/modelingsteps/TrainIllusionDataProjectDL.html
Expand Up @@ -1772,8 +1772,8 @@ <h1>Question<a class="headerlink" href="#question" title="Permalink to this head
<p><em>Part of Step 1</em></p>
<p>Previous literature predicts that accumulated sensory evidence from vestibular signals determines the decision of whether self-motion is experienced or not. We now have vestibular neuron data (simulated in our case, but let’s pretend) and would like to see if that prediction holds true.</p>
<p>The data contains <span class="math notranslate nohighlight">\(N\)</span> neurons and <span class="math notranslate nohighlight">\(M\)</span> trials for each of 3 motion conditions: no self-motion, slowly accelerating self-motion and faster accelerating self-motion.</p>
<div class="amsmath math notranslate nohighlight" id="equation-b8b604c8-7e19-4574-a57e-11ca89862b6f">
<span class="eqno">(126)<a class="headerlink" href="#equation-b8b604c8-7e19-4574-a57e-11ca89862b6f" title="Permalink to this equation">#</a></span>\[\begin{align}
<div class="amsmath math notranslate nohighlight" id="equation-bb3937f8-4c44-4fa8-8836-702c2e8c1ecc">
<span class="eqno">(126)<a class="headerlink" href="#equation-bb3937f8-4c44-4fa8-8836-702c2e8c1ecc" title="Permalink to this equation">#</a></span>\[\begin{align}
N &amp;= 40 \\
M &amp;= 400
\end{align}\]</div>
Expand Down

0 comments on commit 6fee1b1
