Commit ccea53d (1 parent: 4122d50). Showing 435 changed files with 206,711 additions and 1 deletion.
@@ -1 +1,26 @@
-# mlr3torch-course
+# Deep Learning with `(mlr3)torch` in R :fire:
+
+## Overview :book:
+
+This course consists of seven tutorials, each with corresponding exercises and solutions, on [`torch`](https://torch.mlverse.org/) and [`mlr3torch`](https://mlr3torch.mlr-org.com/).
+
+The seven topics are:
+
+1. Torch Tensors
+2. Autograd
+3. Modules and Data
+4. Optimizers
+5. Intro to mlr3torch (and mlr3 recap)
+6. Training Efficiency
+7. Use Case (WIP)
+
+## Contributing
+
+After editing the content, e.g. in the `notebooks` folder, run `quarto render` to render the website.
+This renders the website into the `docs/` folder.
+Upon pushing the changes to GitHub, the content of the `docs/` folder is automatically deployed to GitHub Pages.
+
+## Credit
+
+Some of the content is based on the book [Deep Learning and Scientific Computing with R torch](https://skeydan.github.io/Deep-Learning-and-Scientific-Computing-with-R-torch/) by Sigrid Keydana.
+This course has been funded by [Essential Data Science Training](https://www.essentialds.de/).
_freeze/notebooks/1-tensor-exercise-solution/execute-results/html.json (15 additions, 0 deletions)
@@ -0,0 +1,15 @@
{
  "hash": "3c200113e999cd4a5d113387b99179bc",
  "result": {
    "engine": "knitr",
    "markdown": "---\ntitle: \"Tensors\"\nsolutions: true\n---\n\n:::{.callout-note}\nTo solve these exercises, consulting the `torch` [function reference](https://torch.mlverse.org/docs/reference/) can be helpful.\n:::\n\n**Question 1**: Tensor creation and manipulation\n\nRecreate this torch tensor:\n\n\n::: {.cell layout-align=\"center\"}\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n 1 2 3\n 4 5 6\n[ CPULongType{2,3} ]\n```\n\n\n:::\n:::\n\n\n<details>\n<summary>Hint</summary>\nFirst create an R `matrix` and then convert it using `torch_tensor()`.\n</details>\n\nNext, create a view of the tensor so it looks like this:\n\n\n::: {.cell layout-align=\"center\"}\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n 1 2\n 3 4\n 5 6\n[ CPULongType{3,2} ]\n```\n\n\n:::\n:::\n\n\n<details>\n<summary>Hint</summary>\nUse the `$view()` method and pass the desired shape as a vector.\n</details>\n\nCheck programmatically that you successfully created a view, and not a copy.\n\n<details>\n<summary>Hint</summary>\nSee what happens when you modify one of the tensors.\n</details>\n\n::: {.content-visible when-meta=solutions}\n**Solution**\n\nWe start by creating the tensor:\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\nx <- torch_tensor(matrix(1:6, byrow = TRUE, nrow = 2))\nx\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n 1 2 3\n 4 5 6\n[ CPULongType{2,3} ]\n```\n\n\n:::\n:::\n\n\nThen, we create a view of the tensor:\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\ny <- x$view(c(3, 2))\n```\n:::\n\n\nTo check that we created a view, we can modify one of the tensors and see if the other one changes:\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\nx[1, 1] <- 100\ny\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n 100 2\n 3 4\n 5 6\n[ CPULongType{3,2} ]\n```\n\n\n:::\n:::\n\n:::\n\n**Question 2**: More complex reshaping\n\nConsider the following tensor:\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\nx <- torch_tensor(1:6)\nx\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n 1\n 2\n 3\n 4\n 5\n 6\n[ CPULongType{6} ]\n```\n\n\n:::\n:::\n\n\n\nReshape it so it looks like this.\n\n\n::: {.cell layout-align=\"center\"}\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n 1 3 5\n 2 4 6\n[ CPULongType{2,3} ]\n```\n\n\n:::\n:::\n\n\n<details>\n<summary>Hint</summary>\nFirst reshape to `(3, 2)` and then `$permute()` the two dimensions.\n</details>\n\n::: {.content-visible when-meta=solutions}\n**Solution**\nWe first reshape to `(3, 2)` and then permute the two dimensions to get the desired shape `(2, 3)`.\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\nx <- x$reshape(c(3, 2))\nx\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n 1 2\n 3 4\n 5 6\n[ CPULongType{3,2} ]\n```\n\n\n:::\n\n```{.r .cell-code}\nx$permute(c(2, 1))\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n 1 3 5\n 2 4 6\n[ CPULongType{2,3} ]\n```\n\n\n:::\n:::\n\n:::\n\n**Question 3**: Broadcasting\n\nConsider the following vectors:\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\nx1 <- torch_tensor(c(1, 2))\nx1\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n 1\n 2\n[ CPUFloatType{2} ]\n```\n\n\n:::\n\n```{.r .cell-code}\nx2 <- torch_tensor(c(3, 7))\nx2\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n 3\n 7\n[ CPUFloatType{2} ]\n```\n\n\n:::\n:::\n\n\nPredict the result (shape and values) of the following operation by applying the broadcasting rules.\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\nx1 + x2$reshape(c(2, 1))\n```\n:::\n\n\n::: {.content-visible when-meta=solutions}\n**Solution**\n\nThe result is the following tensor:\n\n\n::: {.cell layout-align=\"center\"}\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n 4 5\n 8 9\n[ CPUFloatType{2,2} ]\n```\n\n\n:::\n:::\n\n\nWe will now show how to arrive at this result step by step.\nAccording to the broadcasting rules, we start by adding a singleton dimension to the first tensor:\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\nx1 <- x1$reshape(c(1, 2))\n```\n:::\n\n\nNow, we have a tensor of shape `(1, 2)` and a tensor of shape `(2, 1)`.\nNext, we extend the first tensor along the first dimension to match the second tensor:\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\nx1 <- x1$expand(c(2, 2))\n```\n:::\n\n\nWe do this analogously for the second (reshaped) tensor:\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\nx2 <- x2$reshape(c(2, 1))$expand(c(2, 2))\n```\n:::\n\n\nNow they both have the same shape `(2, 2)`, so we can add them:\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\nx1 + x2\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n 4 5\n 8 9\n[ CPUFloatType{2,2} ]\n```\n\n\n:::\n:::\n\n:::\n\n**Question 4**: Handling singleton dimensions\n\nA common operation in deep learning is to add or get rid of singleton dimensions, i.e., dimensions of size 1.\nAs this is so common, torch offers a [`$squeeze()`](https://torch.mlverse.org/docs/reference/torch_squeeze.html) and [`$unsqueeze()`](https://torch.mlverse.org/docs/reference/torch_unsqueeze.html) method to remove and add singleton dimensions, respectively.\n\nUse these two functions to first remove the second dimension and then add one in the first position.\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\nx <- torch_randn(2, 1)\nx\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n-0.1115\n 0.1204\n[ CPUFloatType{2,1} ]\n```\n\n\n:::\n:::\n\n\n::: {.content-visible when-meta=solutions}\n**Solution**\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\nx$squeeze(2)$unsqueeze(1)\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n-0.1115 0.1204\n[ CPUFloatType{1,2} ]\n```\n\n\n:::\n:::\n\n:::\n\n**Question 5**: Matrix multiplication\n\nGenerate a random matrix $A$ of shape `(10, 5)` and a random matrix $B$ of shape `(10, 5)` by sampling from a standard normal distribution.\n\n<details>\n<summary>Hint</summary>\nUse `torch_randn(nrow, ncol)` to generate random matrices.\n</details>\n\nCan you multiply these two matrices with each other and if so, in which order?\nIf not, generate two random matrices with compatible shapes and multiply them.\n\n::: {.content-visible when-meta=solutions}\n**Solution**\n\nWe can only multiply a matrix of shape `(n, k)` with a matrix of shape `(k, m)`, i.e., the number of columns in the first matrix must match the number of rows in the second matrix.\n\nWe can therefore not multiply the two matrices with each other in either order.\nTo generate two random matrices with compatible shapes, we can generate two random matrices with shape `(10, 5)` and `(5, 10)`.\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\nA <- torch_randn(10, 5)\nB <- torch_randn(5, 10)\nA$matmul(B)\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n-1.4311 0.6090 -1.4795 -0.6977 2.4857 -0.7402 0.4060 -0.4299 2.9035 0.1459\n-4.0841 3.8794 -1.5376 -3.5270 4.8175 -0.7630 0.1188 3.0368 1.0634 0.0011\n-0.3880 -1.4639 -1.3191 -0.0589 3.1754 -3.1779 1.7006 0.0521 5.0765 0.0552\n 1.6030 -2.2295 1.1606 3.3083 3.3677 1.5567 -2.3565 -5.1759 -1.9122 5.1734\n 4.0126 -4.3978 0.5547 1.9958 -3.4347 -2.2880 2.1990 0.2017 2.6702 -1.7145\n 0.8548 3.0118 -2.0971 -3.3564 -8.1899 3.3494 1.5969 4.4134 0.4593 -6.8904\n 0.0597 -0.1650 -2.5737 -1.1190 6.1582 -0.6400 0.8576 0.2152 5.0070 1.6070\n 0.2675 2.4575 -2.6582 -3.1801 -3.0074 2.0887 1.4936 3.5447 2.3877 -4.3110\n-3.7894 1.8938 0.0528 -0.9525 0.3706 -1.8813 0.0365 0.2768 0.2025 -0.8839\n 2.7060 -2.1856 1.0679 2.6758 -6.8991 1.6866 -0.2875 -2.8479 -1.4630 -1.6319\n[ CPUFloatType{10,10} ]\n```\n\n\n:::\n:::\n\n:::\n\n**Question 6**: Uniform sampling\n\nGenerate 10 random values from a uniform distribution (using only torch functions) in the interval $[10, 20]$.\nUse `torch_rand()` for this (which does not allow for `min` and `max` parameters).\n\n<details>\n<summary>Hint</summary>\nMultiply by the width of the interval and add the lower bound.\n</details>\n\nThen, calculate the mean of the values that are larger than 15.\n\n\n::: {.content-visible when-meta=solutions}\n**Solution**\nBecause the uniform distribution of `torch` has no `min` and `max` parameters like `runif()`, we instead sample from a standard uniform distribution and then scale and shift it to the desired interval.\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\nn <- 10\na <- 10\nb <- 20\nx <- torch_rand(n) * (b - a) + a\nx\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n 17.2108\n 15.4495\n 15.4898\n 13.4831\n 15.0240\n 13.4448\n 16.4367\n 19.8558\n 15.7574\n 12.7854\n[ CPUFloatType{10} ]\n```\n\n\n:::\n\n```{.r .cell-code}\nmean(x[x > 15])\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n16.4606\n[ CPUFloatType{} ]\n```\n\n\n:::\n:::\n\n:::\n\n**Question 7**: Don't touch this\n\nConsider the code below:\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\nf <- function(x) {\n x[1] <- torch_tensor(-99)\n return(x)\n}\nx <- torch_tensor(1:3)\ny <- f(x)\nx\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n-99\n 2\n 3\n[ CPULongType{3} ]\n```\n\n\n:::\n:::\n\n\nImplement a different version of this function that returns the same tensor but does not change the value of the input tensor in-place.\n\n<details>\n<summary>Hint</summary>\nThe `$clone()` method might be helpful.\n</details>\n\n::: {.content-visible when-meta=solutions}\n**Solution**\n\nWe need to `$clone()` the tensor before we modify it.\n\n\n::: {.cell layout-align=\"center\"}\n\n```{.r .cell-code}\ng <- function(x) {\n x <- x$clone()\n x[1] <- torch_tensor(-99)\n x\n}\nx <- torch_tensor(1:3)\ny <- g(x)\nx\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\ntorch_tensor\n 1\n 2\n 3\n[ CPULongType{3} ]\n```\n\n\n:::\n:::\n\n:::\n\n",
    "supporting": [],
    "filters": [
      "rmarkdown/pagebreak.lua"
    ],
    "includes": {},
    "engineDependencies": {},
    "preserve": {},
    "postProcess": true
  }
}
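As a quick, hedged illustration of two ideas from the frozen tutorial above (broadcasting from Question 3, cloning from Question 7), here is a minimal R sketch. It assumes the `torch` package is installed; the variable names are illustrative only.

```r
library(torch)

# Broadcasting: shape (2) + shape (2, 1).
# The (2) tensor is treated as (1, 2), then both operands are
# expanded to (2, 2) before the elementwise addition.
x1 <- torch_tensor(c(1, 2))
x2 <- torch_tensor(c(3, 7))
res <- x1 + x2$reshape(c(2, 1))
print(res$shape)  # shape (2, 2); values 4 5 / 8 9 as derived in the solution

# Cloning: $clone() yields an independent copy, so in-place edits
# on the copy leave the original tensor untouched.
g <- function(x) {
  x <- x$clone()
  x[1] <- torch_tensor(-99)
  x
}
x <- torch_tensor(1:3)
y <- g(x)
print(x)  # x is unchanged: 1, 2, 3
```

Dropping the `$clone()` call reproduces the pitfall shown in Question 7: torch tensors have reference semantics, so the modification would propagate to the caller's tensor.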