Revert structure + use ids
Saransh-cpp committed Oct 27, 2022
1 parent: 0350e03 · commit: 6b64b58
Showing 10 changed files with 15 additions and 15 deletions.
8 changes: 4 additions & 4 deletions docs/make.jl

```diff
@@ -11,9 +11,9 @@ makedocs(
     pages = [
         "Getting Started" => [
             "Welcome" => "index.md",
-            "Quick Start" => "getting_started/quickstart.md",
-            "Fitting a Line" => "getting_started/overview.md",
-            "Gradients and Layers" => "getting_started/basics.md",
+            "Quick Start" => "models/quickstart.md",
+            "Fitting a Line" => "models/overview.md",
+            "Gradients and Layers" => "models/basics.md",
         ],
         "Building Models" => [
             "Built-in Layers 📚" => "models/layers.md",
@@ -43,7 +43,7 @@ makedocs(
         ],
         "Tutorials" => [
             "Linear Regression" => "tutorials/linear_regression.md",
-            "Custom Layers" => "tutorials/advanced.md", # TODO move freezing to Training
+            "Custom Layers" => "models/advanced.md", # TODO move freezing to Training
         ],
         "Performance Tips" => "performance.md",
         "Flux's Ecosystem" => "ecosystem.md",
```
2 changes: 1 addition & 1 deletion docs/src/gpu.md

```diff
@@ -17,7 +17,7 @@
 true
 
 Support for array operations on other hardware backends, like GPUs, is provided by external packages like [CUDA](https://github.com/JuliaGPU/CUDA.jl). Flux is agnostic to array types, so we simply need to move model weights and data to the GPU and Flux will handle it.
 
-For example, we can use `CUDA.CuArray` (with the `cu` converter) to run our [basic example](getting_started/basics.md) on an NVIDIA GPU.
+For example, we can use `CUDA.CuArray` (with the `cu` converter) to run our [basic example](@ref man-basics) on an NVIDIA GPU.
 
 (Note that you need to have CUDA available to use CUDA.CuArray – please see the [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) instructions for more details.)
```
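For concreteness, the basic example this hunk links to looks roughly like the following when moved to an NVIDIA GPU with `cu`; a minimal sketch assuming a working CUDA.jl setup, with variable names following the docs' linear-model example:

```julia
using CUDA  # requires a functional CUDA installation

W = cu(rand(2, 5))  # a 2×5 CuArray
b = cu(rand(2))

predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)

x, y = cu(rand(5)), cu(rand(2))  # dummy data, also moved to the GPU
loss(x, y)  # the whole computation now runs on the GPU
```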
4 changes: 2 additions & 2 deletions docs/src/index.md

```diff
@@ -16,9 +16,9 @@ Other closely associated packages, also installed automatically, include [Zygote
 
 ## Learning Flux
 
-The [quick start](getting_started/quickstart.md) page trains a simple neural network.
+The [quick start](@ref man-quickstart) page trains a simple neural network.
 
-This rest of this documentation provides a from-scratch introduction to Flux's take on models and how they work, starting with [fitting a line](getting_started/overview.md). Once you understand these docs, congratulations, you also understand [Flux's source code](https://github.com/FluxML/Flux.jl), which is intended to be concise, legible and a good reference for more advanced concepts.
+This rest of this documentation provides a from-scratch introduction to Flux's take on models and how they work, starting with [fitting a line](@ref man-overview). Once you understand these docs, congratulations, you also understand [Flux's source code](https://github.com/FluxML/Flux.jl), which is intended to be concise, legible and a good reference for more advanced concepts.
 
 Sections with 📚 contain API listings. The same text is avalable at the Julia prompt, by typing for example `?gpu`.
 
```
4 changes: 2 additions & 2 deletions docs/src/{tutorials → models}/advanced.md

```diff
@@ -1,4 +1,4 @@
-# Defining Customised Layers
+# [Defining Customised Layers](@id man-advanced)
 
 Here we will try and describe usage of some more advanced features that Flux provides to give more control over model building.
 
@@ -34,7 +34,7 @@ For an intro to Flux and automatic differentiation, see this [tutorial](https://
 
 ## Customising Parameter Collection for a Model
 
-Taking reference from our example `Affine` layer from the [basics](../getting_started/basics.md#Building-Layers-1).
+Taking reference from our example `Affine` layer from the [basics](@ref man-basics).
 
 By default all the fields in the `Affine` type are collected as its parameters, however, in some cases it may be desired to hold other metadata in our "layers" that may not be needed for training, and are hence supposed to be ignored while the parameters are collected. With Flux, it is possible to mark the fields of our layers that are trainable in two ways.
 
```
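For context, here is a hedged sketch of the two mechanisms this passage alludes to; the `Affine` definition is reconstructed from the basics page rather than shown in this diff:

```julia
using Flux

struct Affine
  W
  b
end

Affine(in::Integer, out::Integer) = Affine(randn(out, in), randn(out))
(m::Affine)(x) = m.W * x .+ m.b

# One way: annotate the type, then overload `trainable`
Flux.@functor Affine
Flux.trainable(a::Affine) = (a.W,)  # only W is collected for training

# The other way: restrict the functor itself to selected fields
# Flux.@functor Affine (W,)

a = Affine(3, 2)
Flux.params(a)  # now contains W but not b
```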
docs/src/{getting_started → models}/basics.md
File renamed without changes.
2 changes: 1 addition & 1 deletion docs/src/models/functors.md

````diff
@@ -4,7 +4,7 @@ Flux models are deeply nested structures, and [Functors.jl](https://github.com/F
 
 New layers should be annotated using the `Functors.@functor` macro. This will enable [`params`](@ref Flux.params) to see the parameters inside, and [`gpu`](@ref) to move them to the GPU.
 
-`Functors.jl` has its own [notes on basic usage](https://fluxml.ai/Functors.jl/stable/#Basic-Usage-and-Implementation) for more details. Additionally, the [Advanced Model Building and Customisation](../tutorials/advanced.md) page covers the use cases of `Functors` in greater details.
+`Functors.jl` has its own [notes on basic usage](https://fluxml.ai/Functors.jl/stable/#Basic-Usage-and-Implementation) for more details. Additionally, the [Advanced Model Building and Customisation](@ref man-advanced) page covers the use cases of `Functors` in greater details.
 
 ```@docs
 Functors.@functor
````
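As a brief illustration of the annotation described in this hunk (the `MyLayer` type is invented for the sketch):

```julia
using Flux, Functors

struct MyLayer
  weight
  bias
end

(m::MyLayer)(x) = m.weight * x .+ m.bias

Functors.@functor MyLayer  # lets `params` and `gpu` recurse into the fields

m = MyLayer(rand(2, 3), rand(2))
Flux.params(m)  # collects both weight and bias
```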
docs/src/{getting_started → models}/overview.md
File renamed without changes.

docs/src/{getting_started → models}/quickstart.md
File renamed without changes.
2 changes: 1 addition & 1 deletion docs/src/training/optimisers.md

````diff
@@ -4,7 +4,7 @@ CurrentModule = Flux
 
 # Optimisers
 
-Consider a [simple linear regression](../tutorials/linear_regression.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
+Consider a [simple linear regression](@ref man-linear-regression). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
 
 ```julia
 using Flux
````
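The `julia` block above is cut off by the diff view; a hedged reconstruction of the dummy-data/loss/gradient workflow the paragraph describes (not necessarily the file's exact contents) might read:

```julia
using Flux

W = rand(2, 5)
b = rand(2)

predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)

x, y = rand(5), rand(2)  # dummy data

ps = Flux.params(W, b)               # implicit parameters
gs = gradient(() -> loss(x, y), ps)  # backpropagate to get gradients

opt = Descent(0.1)
Flux.Optimise.update!(opt, ps, gs)   # one gradient-descent step on W and b
```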
8 changes: 4 additions & 4 deletions docs/src/training/training.md

````diff
@@ -36,11 +36,11 @@ Flux.Optimise.train!
 ```
 
 There are plenty of examples in the [model zoo](https://github.com/FluxML/model-zoo), and
-more information can be found on [Custom Training Loops](../tutorials/advanced.md).
+more information can be found on [Custom Training Loops](@ref man-advanced).
 
 ## Loss Functions
 
-The objective function must return a number representing how far the model is from its target – the *loss* of the model. The `loss` function that we defined in [basics](../getting_started/basics.md) will work as an objective.
+The objective function must return a number representing how far the model is from its target – the *loss* of the model. The `loss` function that we defined in [basics](@ref man-basics) will work as an objective.
 In addition to custom losses, model can be trained in conjuction with
 the commonly used losses that are grouped under the `Flux.Losses` module.
 We can also define an objective in terms of some model:
@@ -64,11 +64,11 @@ At first glance, it may seem strange that the model that we want to train is not
 
 ## Model parameters
 
-The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](../getting_started/basics.md) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `Flux.params(m)`.
+The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](@ref man-basics) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `Flux.params(m)`.
 
 Such an object contains a reference to the model's parameters, not a copy, such that after their training, the model behaves according to their updated values.
 
-Handling all the parameters on a layer by layer basis is explained in the [Layer Helpers](../getting_started/basics.md) section. Also, for freezing model parameters, see the [Advanced Usage Guide](../tutorials/advanced.md).
+Handling all the parameters on a layer by layer basis is explained in the [Layer Helpers](@ref man-basics) section. Also, for freezing model parameters, see the [Advanced Usage Guide](@ref man-advanced).
 
 ```@docs
 Flux.params
````
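Putting this page's pieces together, a rough sketch of the implicit-parameter workflow around `Flux.train!`; the model, data, and optimiser here are illustrative assumptions, not the file's own example:

```julia
using Flux

model = Dense(5, 2)                        # a stand-in model
loss(x, y) = Flux.Losses.mse(model(x), y)  # objective defined in terms of the model

data = [(rand(Float32, 5), rand(Float32, 2)) for _ in 1:16]  # (input, target) pairs
ps = Flux.params(model)                    # tracked parameters
opt = Descent(0.1)

Flux.train!(loss, ps, data, opt)           # one pass over `data`
```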
