Merge pull request #2016 from Saransh-cpp/linear-regression
A new linear regression tutorial
ToucheSir authored Nov 22, 2022
2 parents 8d948e8 + 25eea17 commit da8ce81
Showing 11 changed files with 414 additions and 20 deletions.
docs/Project.toml (4 changes: 4 additions & 0 deletions)
@@ -1,12 +1,16 @@
[deps]
BSON = "fbb218c0-5317-5bc6-957e-2ee96dd4b1f0"
ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
+DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
Functors = "d9f16b24-f501-4c13-a1f2-28368ffc5196"
+MLDatasets = "eb30cadb-4394-5ae3-aed4-317e484a6458"
MLUtils = "f1d291b0-491e-4a28-83b9-f70985020b54"
NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
OneHotArrays = "0b1bfda6-eb8a-41d2-88d8-f5af5cad476f"
Optimisers = "3bd65402-5787-11e9-1adc-39752487f4e2"
+Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
+Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"

[compat]
docs/make.jl (11 changes: 6 additions & 5 deletions)
@@ -1,10 +1,10 @@
-using Documenter, Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore, Statistics
+using Documenter, Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore, Plots, MLDatasets, Statistics, DataFrames


DocMeta.setdocmeta!(Flux, :DocTestSetup, :(using Flux); recursive = true)

makedocs(
-    modules = [Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore, Base, Statistics],
+    modules = [Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore, Base, Plots, MLDatasets, Statistics, DataFrames],
doctest = false,
sitename = "Flux",
# strict = [:cross_references,],
@@ -41,11 +41,12 @@ makedocs(
         "Flat vs. Nested 📚" => "destructure.md",
         "Functors.jl 📚 (`fmap`, ...)" => "models/functors.md",
     ],
+    "Tutorials" => [
+        "Linear Regression" => "tutorials/linear_regression.md",
+        "Custom Layers" => "models/advanced.md", # TODO move freezing to Training
+    ],
     "Performance Tips" => "performance.md",
     "Flux's Ecosystem" => "ecosystem.md",
-    "Tutorials" => [ # TODO, maybe
-        "Custom Layers" => "models/advanced.md", # TODO move freezing to Training
-    ],
 ],
format = Documenter.HTML(
sidebar_sitename = false,
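Most of the link changes in this commit ride on Documenter.jl cross-references: a heading carries an `@id` label, and any page can link to it with `@ref`, so links survive files being moved. A minimal sketch of the pattern, using the `man-linear-regression` label referenced below (the surrounding prose is illustrative):

```markdown
# [Linear Regression](@id man-linear-regression)
```

and, from any other page of the manual:

```markdown
See the [linear regression tutorial](@ref man-linear-regression).
```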
docs/src/gpu.md (2 changes: 1 addition & 1 deletion)
@@ -17,7 +17,7 @@ true

Support for array operations on other hardware backends, like GPUs, is provided by external packages like [CUDA](https://github.com/JuliaGPU/CUDA.jl). Flux is agnostic to array types, so we simply need to move model weights and data to the GPU and Flux will handle it.

-For example, we can use `CUDA.CuArray` (with the `cu` converter) to run our [basic example](models/basics.md) on an NVIDIA GPU.
+For example, we can use `CUDA.CuArray` (with the `cu` converter) to run our [basic example](@ref man-basics) on an NVIDIA GPU.

(Note that you need to have CUDA available to use CUDA.CuArray – please see the [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) instructions for more details.)

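As a hedged sketch of what that section describes (not part of the diff): with CUDA.jl loaded, `gpu` (or `cu`) moves weights and data to the device, and it degrades to a no-op on machines without a GPU:

```julia
using Flux  # add `using CUDA` to target an actual NVIDIA GPU

m = Chain(Dense(10 => 5, relu), Dense(5 => 2)) |> gpu  # move the weights
x = rand(Float32, 10) |> gpu                           # move the data too
m(x)  # runs on the GPU when one is available, otherwise on the CPU
```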
docs/src/index.md (4 changes: 2 additions & 2 deletions)
@@ -16,9 +16,9 @@ Other closely associated packages, also installed automatically, include [Zygote

## Learning Flux

-The [quick start](models/quickstart.md) page trains a simple neural network.
+The [quick start](@ref man-quickstart) page trains a simple neural network.

-The rest of this documentation provides a from-scratch introduction to Flux's take on models and how they work, starting with [fitting a line](models/overview.md). Once you understand these docs, congratulations, you also understand [Flux's source code](https://github.com/FluxML/Flux.jl), which is intended to be concise, legible and a good reference for more advanced concepts.
+The rest of this documentation provides a from-scratch introduction to Flux's take on models and how they work, starting with [fitting a line](@ref man-overview). Once you understand these docs, congratulations, you also understand [Flux's source code](https://github.com/FluxML/Flux.jl), which is intended to be concise, legible and a good reference for more advanced concepts.

Sections with 📚 contain API listings. The same text is available at the Julia prompt, by typing for example `?gpu`.

docs/src/models/activation.md (3 changes: 1 addition & 2 deletions)
@@ -1,5 +1,4 @@
-
-# Activation Functions from NNlib.jl
+# [Activation Functions from NNlib.jl](@id man-activations)

These non-linearities used between layers of your model are exported by the [NNlib](https://github.com/FluxML/NNlib.jl) package.

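For context on the page being retitled, a brief illustration (not part of the diff) of the NNlib activations that Flux re-exports:

```julia
using Flux  # activation functions come from NNlib and are re-exported

relu(-3f0)       # 0.0f0 (negative inputs are clamped to zero)
sigmoid(0f0)     # 0.5f0
leakyrelu(-3f0)  # a small negative value instead of zero

# more typically, an activation is applied elementwise by a layer:
layer = Dense(4 => 2, relu)
```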
docs/src/models/advanced.md (4 changes: 2 additions & 2 deletions)
@@ -1,4 +1,4 @@
-# Defining Customised Layers
+# [Defining Customised Layers](@id man-advanced)

Here we will try and describe usage of some more advanced features that Flux provides to give more control over model building.

@@ -34,7 +34,7 @@ For an intro to Flux and automatic differentiation, see this [tutorial](https://

## Customising Parameter Collection for a Model

-Taking reference from our example `Affine` layer from the [basics](basics.md#Building-Layers-1).
+Taking reference from our example `Affine` layer from the [basics](@ref man-basics).

By default all the fields in the `Affine` type are collected as its parameters, however, in some cases it may be desired to hold other metadata in our "layers" that may not be needed for training, and are hence supposed to be ignored while the parameters are collected. With Flux, it is possible to mark the fields of our layers that are trainable in two ways.

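To make that passage concrete, a hedged sketch of the `Affine` layer it refers to, with the bias left out of training. The tuple-returning `Flux.trainable` overload is one of the two ways mentioned, following the convention of the Flux docs of this era:

```julia
using Flux

struct Affine
  W
  b
end
Affine(in::Integer, out::Integer) = Affine(randn(Float32, out, in), randn(Float32, out))
(m::Affine)(x) = m.W * x .+ m.b

Flux.@functor Affine                # expose all fields to params, gpu, etc.
Flux.trainable(m::Affine) = (m.W,)  # but collect only W as a trainable parameter
```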
docs/src/models/functors.md (2 changes: 1 addition & 1 deletion)
@@ -4,7 +4,7 @@ Flux models are deeply nested structures, and [Functors.jl](https://github.com/F

New layers should be annotated using the `Functors.@functor` macro. This will enable [`params`](@ref Flux.params) to see the parameters inside, and [`gpu`](@ref) to move them to the GPU.

-`Functors.jl` has its own [notes on basic usage](https://fluxml.ai/Functors.jl/stable/#Basic-Usage-and-Implementation) for more details. Additionally, the [Advanced Model Building and Customisation](../models/advanced.md) page covers the use cases of `Functors` in greater detail.
+`Functors.jl` has its own [notes on basic usage](https://fluxml.ai/Functors.jl/stable/#Basic-Usage-and-Implementation) for more details. Additionally, the [Advanced Model Building and Customisation](@ref man-advanced) page covers the use cases of `Functors` in greater detail.

```@docs
Functors.@functor
```
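A minimal sketch of the annotation described above (`MyScale` is a made-up layer for illustration):

```julia
using Flux, Functors

struct MyScale
  factor
end
Functors.@functor MyScale

m = MyScale(rand(Float32, 3))
Flux.params(m)        # `factor` is now visible as a parameter
fmap(x -> 2 .* x, m)  # fmap (and hence gpu) can recurse into the struct
```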
docs/src/training/optimisers.md (2 changes: 1 addition & 1 deletion)
@@ -4,7 +4,7 @@ CurrentModule = Flux

# Optimisers

-Consider a [simple linear regression](../models/basics.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
+Consider a [simple linear regression](@ref man-linear-regression). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.

```julia
using Flux
```
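The code block is cut off here in the diff view; as a sketch of the setup that paragraph describes, with dummy data, a loss, and implicit-parameter gradients for `W` and `b`:

```julia
using Flux

W = rand(2, 5)
b = rand(2)

predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)

x, y = rand(5), rand(2)  # dummy data
gs = gradient(() -> loss(x, y), Flux.params(W, b))
gs[W], gs[b]             # gradients ready to be fed to an optimiser
```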
docs/src/training/training.md (10 changes: 5 additions & 5 deletions)
@@ -36,12 +36,12 @@ Flux.Optimise.train!
```

There are plenty of examples in the [model zoo](https://github.com/FluxML/model-zoo), and
-more information can be found on [Custom Training Loops](../models/advanced.md).
+more information can be found on [Custom Training Loops](@ref man-advanced).

## Loss Functions

-The objective function must return a number representing how far the model is from its target – the *loss* of the model. The `loss` function that we defined in [basics](../models/basics.md) will work as an objective.
+The objective function must return a number representing how far the model is from its target – the *loss* of the model. The `loss` function that we defined in [basics](@ref man-basics) will work as an objective.
 In addition to custom losses, a model can be trained in conjunction with
the commonly used losses that are grouped under the `Flux.Losses` module.
We can also define an objective in terms of some model:

@@ -64,11 +64,11 @@ At first glance, it may seem strange that the model that we want to train is not

## Model parameters

-The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](../models/basics.md) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `Flux.params(m)`.
+The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](@ref man-basics) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `Flux.params(m)`.

Such an object contains a reference to the model's parameters, not a copy, so that after training the model behaves according to the updated values.

-Handling all the parameters on a layer-by-layer basis is explained in the [Layer Helpers](../models/basics.md) section. For freezing model parameters, see the [Advanced Usage Guide](../models/advanced.md).
+Handling all the parameters on a layer-by-layer basis is explained in the [Layer Helpers](@ref man-basics) section. For freezing model parameters, see the [Advanced Usage Guide](@ref man-advanced).

```@docs
Flux.params
```
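To tie the pieces of this page together, a hedged sketch of the implicit-parameter `train!` call described above (the model, data, and sizes are made up):

```julia
using Flux

m = Dense(10 => 2)
loss(x, y) = Flux.Losses.mse(m(x), y)

ps = Flux.params(m)                             # references to the parameters, not copies
data = [(rand(Float32, 10), rand(Float32, 2))]  # a single (x, y) batch
opt = Descent(0.1)

Flux.train!(loss, ps, data, opt)  # one pass over `data`, updating `ps` in place
```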