Move to the existing tutorials section
Saransh-cpp committed Oct 27, 2022
1 parent 17d167e commit 0350e03
Showing 8 changed files with 12 additions and 15 deletions.
docs/make.jl (10 changes: 4 additions & 6 deletions)
@@ -15,9 +15,6 @@ makedocs(
             "Fitting a Line" => "getting_started/overview.md",
             "Gradients and Layers" => "getting_started/basics.md",
         ],
-        "Tutorials" => [
-            "Linear Regression" => "tutorials/linear_regression.md",
-        ],
         "Building Models" => [
             "Built-in Layers 📚" => "models/layers.md",
             "Recurrence" => "models/recurrence.md",
@@ -44,11 +41,12 @@
             "Flat vs. Nested 📚" => "destructure.md",
             "Functors.jl 📚 (`fmap`, ...)" => "models/functors.md",
         ],
+        "Tutorials" => [
+            "Linear Regression" => "tutorials/linear_regression.md",
+            "Custom Layers" => "tutorials/advanced.md", # TODO move freezing to Training
+        ],
         "Performance Tips" => "performance.md",
         "Flux's Ecosystem" => "ecosystem.md",
-        "Tutorials" => [ # TODO, maybe
-            "Custom Layers" => "models/advanced.md", # TODO move freezing to Training
-        ],
     ],
     format = Documenter.HTML(
         sidebar_sitename = false,
docs/src/index.md (4 changes: 2 additions & 2 deletions)
@@ -16,9 +16,9 @@ Other closely associated packages, also installed automatically, include [Zygote
 
 ## Learning Flux
 
-The [quick start](models/quickstart.md) page trains a simple neural network.
+The [quick start](getting_started/quickstart.md) page trains a simple neural network.
 
-The rest of this documentation provides a from-scratch introduction to Flux's take on models and how they work, starting with [fitting a line](models/overview.md). Once you understand these docs, congratulations, you also understand [Flux's source code](https://github.com/FluxML/Flux.jl), which is intended to be concise, legible and a good reference for more advanced concepts.
+The rest of this documentation provides a from-scratch introduction to Flux's take on models and how they work, starting with [fitting a line](getting_started/overview.md). Once you understand these docs, congratulations, you also understand [Flux's source code](https://github.com/FluxML/Flux.jl), which is intended to be concise, legible and a good reference for more advanced concepts.
 
 Sections with 📚 contain API listings. The same text is available at the Julia prompt, by typing for example `?gpu`.
docs/src/models/activation.md (3 changes: 1 addition & 2 deletions)
@@ -1,5 +1,4 @@
-
-# Activation Functions from NNlib.jl
+# [Activation Functions from NNlib.jl](@id man-activations)
 
 These non-linearities used between layers of your model are exported by the [NNlib](https://github.com/FluxML/NNlib.jl) package.
 
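For context (not part of the diff), the activation functions that page documents are ordinary Julia functions, re-exported by Flux. A minimal sketch of typical usage, assuming a Flux release that supports the `in => out` layer syntax:

```julia
using Flux   # re-exports the NNlib activation functions

x = randn(Float32, 4)
relu.(x)              # broadcast elementwise, like any Julia function
sigmoid.(x)
Dense(4 => 2, relu)   # or passed as a layer's activation
```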
docs/src/models/functors.md (2 changes: 1 addition & 1 deletion)
@@ -4,7 +4,7 @@ Flux models are deeply nested structures, and [Functors.jl](https://github.com/F
 
 New layers should be annotated using the `Functors.@functor` macro. This will enable [`params`](@ref Flux.params) to see the parameters inside, and [`gpu`](@ref) to move them to the GPU.
 
-`Functors.jl` has its own [notes on basic usage](https://fluxml.ai/Functors.jl/stable/#Basic-Usage-and-Implementation) for more details. Additionally, the [Advanced Model Building and Customisation](../models/advanced.md) page covers the use cases of `Functors` in greater details.
+`Functors.jl` has its own [notes on basic usage](https://fluxml.ai/Functors.jl/stable/#Basic-Usage-and-Implementation) for more details. Additionally, the [Advanced Model Building and Customisation](../tutorials/advanced.md) page covers the use cases of `Functors` in greater details.
 
 ```@docs
 Functors.@functor
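For context (not part of the diff), annotating a custom layer as that page describes might look like the following sketch; `MyScale` is a made-up layer for illustration, not something defined in Flux:

```julia
using Flux, Functors

struct MyScale               # hypothetical user-defined layer
    s::Vector{Float32}
end

Functors.@functor MyScale    # lets Flux find (and move) the fields

(m::MyScale)(x) = m.s .* x

m = MyScale(rand(Float32, 3))
Flux.params(m)               # now includes m.s
# gpu(m)                     # would return a copy with m.s on the GPU
```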
docs/src/training/optimisers.md (2 changes: 1 addition & 1 deletion)
@@ -4,7 +4,7 @@ CurrentModule = Flux
 
 # Optimisers
 
-Consider a [simple linear regression](../getting_started/linear_regression.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
+Consider a [simple linear regression](../tutorials/linear_regression.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
 
 ```julia
 using Flux
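The `julia` block above is truncated in this view; a rough sketch of the kind of code it introduces (dummy data, a loss, and gradients for `W` and `b`), not necessarily the exact listing in the file:

```julia
using Flux

W = rand(2, 5)
b = rand(2)

predict(x) = W*x .+ b
loss(x, y) = sum((predict(x) .- y).^2)

x, y = rand(5), rand(2)                              # dummy data
gs = gradient(() -> loss(x, y), Flux.params(W, b))   # backpropagate
gs[W], gs[b]                                         # gradients to feed to an optimiser
```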
docs/src/training/training.md (4 changes: 2 additions & 2 deletions)
Expand Up @@ -36,7 +36,7 @@ Flux.Optimise.train!
```

There are plenty of examples in the [model zoo](https://github.com/FluxML/model-zoo), and
more information can be found on [Custom Training Loops](../models/advanced.md).
more information can be found on [Custom Training Loops](../tutorials/advanced.md).

## Loss Functions

Expand Down Expand Up @@ -68,7 +68,7 @@ The model to be trained must have a set of tracked parameters that are used to c

Such an object contains a reference to the model's parameters, not a copy, such that after their training, the model behaves according to their updated values.

Handling all the parameters on a layer by layer basis is explained in the [Layer Helpers](../getting_started/basics.md) section. Also, for freezing model parameters, see the [Advanced Usage Guide](../models/advanced.md).
Handling all the parameters on a layer by layer basis is explained in the [Layer Helpers](../getting_started/basics.md) section. Also, for freezing model parameters, see the [Advanced Usage Guide](../tutorials/advanced.md).

```@docs
Flux.params
Expand Down
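For context (not part of the diff), the `train!`/`params` workflow those paragraphs describe runs roughly along these lines; the model and data below are made up for illustration:

```julia
using Flux

model = Dense(10 => 2)                   # toy model
ps = Flux.params(model)                  # references to the parameters, not copies
data = [(rand(Float32, 10), rand(Float32, 2)) for _ in 1:16]
loss(x, y) = Flux.Losses.mse(model(x), y)
opt = Descent(0.1)

Flux.train!(loss, ps, data, opt)         # mutates the weights that `ps` points to
```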
File renamed without changes.
src/losses/functions.jl (2 changes: 1 addition & 1 deletion)
@@ -273,7 +273,7 @@ Return the binary cross-entropy loss, computed as
 
     agg(@.(-y * log(ŷ + ϵ) - (1 - y) * log(1 - ŷ + ϵ)))
 
-Where typically, the prediction `ŷ` is given by the output of a [sigmoid](@ref Activation-Functions-from-NNlib.jl) activation.
+Where typically, the prediction `ŷ` is given by the output of a [sigmoid](@ref man-activations) activation.
 
 The `ϵ` term is included to avoid infinity. Using [`logitbinarycrossentropy`](@ref) is recommended
 over `binarycrossentropy` for numerical stability.
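For context (not part of the diff), the loss described by that docstring is used along these lines; the numbers are illustrative only:

```julia
using Flux

y = [0f0, 1f0, 1f0]                        # binary targets
ŷ = sigmoid.([-1.2f0, 0.3f0, 2.0f0])       # predictions squashed into (0, 1)

Flux.Losses.binarycrossentropy(ŷ, y)       # mean of -y*log(ŷ+ϵ) - (1-y)*log(1-ŷ+ϵ)

# Numerically safer: pass the raw logits instead of sigmoid outputs.
Flux.Losses.logitbinarycrossentropy([-1.2f0, 0.3f0, 2.0f0], y)
```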
