docs: improve freezing docs #2385

Closed
wants to merge 11 commits
2 changes: 1 addition & 1 deletion .github/workflows/JuliaFormatter.yml
@@ -17,7 +17,7 @@ jobs:
steps:
- uses: actions/[email protected]

- uses: dorny/[email protected].1
- uses: dorny/[email protected].2
id: filter
with:
filters: |
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -56,7 +56,7 @@ jobs:
if: contains(fromJson('["1", "1.9"]'), matrix.version) && matrix.os == 'ubuntu-latest'
- uses: julia-actions/julia-processcoverage@v1
if: contains(fromJson('["1", "1.9"]'), matrix.version) && matrix.os == 'ubuntu-latest'
- uses: codecov/codecov-action@v3
- uses: codecov/codecov-action@v4
if: contains(fromJson('["1", "1.9"]'), matrix.version) && matrix.os == 'ubuntu-latest'
with:
file: lcov.info
8 changes: 8 additions & 0 deletions NEWS.md
@@ -2,6 +2,13 @@

See also [github's page](https://github.com/FluxML/Flux.jl/releases) for a complete list of PRs merged before each release.

## v0.14.13
* New macro `Flux.@layer` which should be used in place of `@functor`.
This also adds `show` methods for pretty printing.

## v0.14.12
* New `SignDecay` optimiser, like `WeightDecay` but for L1 norm.

## v0.14.0 (July 2023)
* Flux now requires julia v1.9 or later.
* CUDA.jl is not a hard dependency anymore. Support is now provided through the extension mechanism, by loading `using Flux, CUDA`.
@@ -51,6 +58,7 @@ See also [github's page](https://github.com/FluxML/Flux.jl/releases) for a compl

## v0.13.6
* Use the package [OneHotArrays.jl](https://github.com/FluxML/OneHotArrays.jl) instead of having the same code here.
* Added [`@autosize` macro](https://github.com/FluxML/Flux.jl/pull/2078)

## v0.13.4
* Added [`PairwiseFusion` layer](https://github.com/FluxML/Flux.jl/pull/1983)
2 changes: 1 addition & 1 deletion Project.toml
@@ -1,6 +1,6 @@
name = "Flux"
uuid = "587475ba-b771-5e3f-ad9e-33799f191a9c"
version = "0.14.12"
version = "0.14.13"

[deps]
Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
3 changes: 2 additions & 1 deletion docs/make.jl
@@ -54,7 +54,8 @@ makedocs(
"Deep Convolutional GAN" => "tutorials/2021-10-08-dcgan-mnist.md",
=#
# Not really sure where this belongs... some in Fluxperimental, aim to delete?
"Custom Layers" => "models/advanced.md", # TODO move freezing to Training
"Custom Layers" => "models/advanced.md",
"Advanced tweaking of models" => "tutorials/misc-model-tweaking.md",
],
],
format = Documenter.HTML(
55 changes: 11 additions & 44 deletions docs/src/models/advanced.md
@@ -18,8 +18,8 @@ function (m::CustomModel)(x)
return m.chain(x) + x
end

# Call @functor to allow for training. Described below in more detail.
Flux.@functor CustomModel
# Call @layer to allow for training. Described below in more detail.
Flux.@layer CustomModel
```

You can then use the model like:
@@ -39,15 +39,15 @@ Taking reference from our example `Affine` layer from the [basics](@ref man-basi
By default all the fields in the `Affine` type are collected as its parameters. However, in some cases we may want to hold other metadata in our "layers" that is not needed for training, and should hence be ignored while the parameters are collected. With Flux, the way to mark some fields of our layer as trainable is through overloading the `trainable` function:

```julia-repl
julia> Flux.@functor Affine
julia> Flux.@layer Affine

julia> a = Affine(Float32[1 2; 3 4; 5 6], Float32[7, 8, 9])
Affine(Float32[1.0 2.0; 3.0 4.0; 5.0 6.0], Float32[7.0, 8.0, 9.0])

julia> Flux.params(a) # default behavior
Params([Float32[1.0 2.0; 3.0 4.0; 5.0 6.0], Float32[7.0, 8.0, 9.0]])

julia> Flux.trainable(a::Affine) = (; a.W) # returns a NamedTuple using the field's name
julia> Flux.trainable(a::Affine) = (; W = a.W) # returns a NamedTuple using the field's name

julia> Flux.params(a)
Params([Float32[1.0 2.0; 3.0 4.0; 5.0 6.0]])
@@ -67,47 +67,14 @@ julia> Flux.params(Affine(true, [10, 11, 12.0]))
Params([])
```

It is also possible to further restrict what fields are seen by writing `@functor Affine (W,)`. However, this is not recommended. This requires the `struct` to have a corresponding constructor that accepts only `W` as an argument, and the ignored fields will not be seen by functions like `gpu` (which is usually undesired).

## Freezing Layer Parameters

When it is desired to not include all the model parameters (for e.g. transfer learning), we can simply not pass in those layers into our call to `params`.

!!! compat "Flux ≤ 0.14"
The mechanism described here is for Flux's old "implicit" training style.
When upgrading for Flux 0.15, it should be replaced by [`freeze!`](@ref Flux.freeze!) and `thaw!`.

Consider a simple multi-layer perceptron model where we want to avoid optimising the first two `Dense` layers. We can obtain
this using the slicing features `Chain` provides:
The exact same method of `trainable` can also be defined using the macro, for convenience:

```julia
m = Chain(
Dense(784 => 64, relu),
Dense(64 => 64, relu),
Dense(32 => 10)
);

ps = Flux.params(m[3:end])
Flux.@layer Affine trainable=(W,)
```

The `Zygote.Params` object `ps` now holds a reference to only the parameters of the layers passed to it.

During training, the gradients will only be computed for (and applied to) the last `Dense` layer, therefore only that would have its parameters changed.

`Flux.params` also takes multiple inputs to make it easy to collect parameters from heterogenous models with a single call. A simple demonstration would be if we wanted to omit optimising the second `Dense` layer in the previous example. It would look something like this:
There is a second, more severe, kind of restriction possible. This is not recommended, but is included here for completeness. Calling `Functors.@functor Affine (W,)` means that no exploration of the model will ever visit the other fields: They will not be moved to the GPU by [`gpu`](@ref), and their precision will not be changed by `f32`. This requires the `struct` to have a corresponding constructor that accepts only `W` as an argument.
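For illustration, a minimal sketch of a struct set up this way (the name `RestrictedAffine` and its zero-bias convenience constructor are assumptions for the example, not part of Flux):

```julia
using Flux, Functors

struct RestrictedAffine
  W
  b
end

# A constructor taking only the listed field, as the restriction requires
RestrictedAffine(W) = RestrictedAffine(W, zeros(Float32, size(W, 1)))

Functors.@functor RestrictedAffine (W,)  # only `W` is ever visited; `b` is invisible to `gpu`, `f32`, etc.
```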

```julia
Flux.params(m[1], m[3:end])
```

Sometimes, a more fine-tuned control is needed.
We can freeze a specific parameter of a specific layer which already entered a `Params` object `ps`,
by simply deleting it from `ps`:

```julia
ps = Flux.params(m)
delete!(ps, m[2].bias)
```

## Custom multiple input or output layer

Expand Down Expand Up @@ -135,9 +102,9 @@ Join(combine, paths...) = Join(combine, paths)
```
Notice that we parameterized the type of the `paths` field. This is necessary for fast Julia code; in general, `T` might be a `Tuple` or `Vector`, but we don't need to pay attention to what it specifically is. The same goes for the `combine` field.

The next step is to use [`Functors.@functor`](@ref) to make our struct behave like a Flux layer. This is important so that calling `params` on a `Join` returns the underlying weight arrays on each path.
The next step is to use [`Flux.@layer`](@ref) to make our struct behave like a Flux layer. This is important so that calling `Flux.setup` on a `Join` maps over the underlying trainable arrays on each path.
```julia
Flux.@functor Join
Flux.@layer Join
```

Finally, we define the forward pass. For `Join`, this means applying each `path` in `paths` to each input array, then using `combine` to merge the results.
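The definition itself is collapsed in this diff; a minimal sketch of what such a forward pass could look like, assuming the inputs arrive as a tuple:

```julia
# Apply each path to the corresponding input, then merge the results with `combine`
(m::Join)(xs::Tuple) = m.combine(map((f, x) -> f(x), m.paths, xs)...)
(m::Join)(xs...) = m(xs)  # also accept the inputs as separate arguments
```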
@@ -194,7 +161,7 @@ model(xs)

Our custom `Split` layer will accept a single input, then pass the input through a separate path to produce multiple outputs.

We start by following the same steps as the `Join` layer: define a struct, use [`Functors.@functor`](@ref), and define the forward pass.
We start by following the same steps as the `Join` layer: define a struct, use [`@layer`](@ref), and define the forward pass.
```julia
using Flux
using CUDA
@@ -206,7 +173,7 @@ end

Split(paths...) = Split(paths)

Flux.@functor Split
Flux.@layer Split

(m::Split)(x::AbstractArray) = map(f -> f(x), m.paths)
```
11 changes: 8 additions & 3 deletions docs/src/models/basics.md
@@ -255,10 +255,10 @@ m(5) # => 26

## Layer Helpers

There is still one problem with this `Affine` layer, that Flux does not know to look inside it. This means that [`Flux.train!`](@ref) won't see its parameters, nor will [`gpu`](@ref) be able to move them to your GPU. These features are enabled by the [`@functor`](@ref Functors.@functor) macro:
There is still one problem with this `Affine` layer, that Flux does not know to look inside it. This means that [`Flux.train!`](@ref) won't see its parameters, nor will [`gpu`](@ref) be able to move them to your GPU. These features are enabled by the [`@layer`](@ref Flux.@layer) macro:

```
Flux.@functor Affine
```julia
Flux.@layer Affine
```

Finally, most Flux layers make bias optional, and allow you to supply the function used for generating random weights. We can easily add these refinements to the `Affine` layer as follows, using the helper function [`create_bias`](@ref Flux.create_bias):
@@ -272,3 +272,8 @@ end

Affine(3 => 1, bias=false, init=ones) |> gpu
```
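The constructor itself is collapsed in this diff; a sketch of the kind of keyword constructor described above, assuming the fields are named `W` and `b`:

```julia
function Affine((in, out)::Pair; bias = true, init = Flux.glorot_uniform)
  W = init(out, in)                    # weight matrix from the supplied initialiser
  b = Flux.create_bias(W, bias, out)   # a zero vector, `false`, or a user-supplied vector
  return Affine(W, b)
end
```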

```@docs
Flux.@layer
Flux.create_bias
```
127 changes: 127 additions & 0 deletions docs/src/models/freezing-params.md
@@ -0,0 +1,127 @@
# Freezing model weights
Flux provides several ways of freezing parameters: temporarily freezing them in
the optimiser state, excluding them from backpropagation entirely, or marking
custom struct fields so that they are not moved to the GPU
([Functors.@functor](@ref)) and hence never trained. The following subsections
should make it clear which one best suits your needs.

## On-the-fly freezing per model instance
Perhaps you'd like to freeze some of the weights of the model (even in the
middle of training), and Flux accomplishes this through [`freeze!`](@ref Flux.freeze!) and `thaw!`.

```julia
m = Chain(
      Dense(784 => 64, relu), # freeze this one
      Dense(64 => 32, relu),
      Dense(32 => 10)
    )
opt_state = Flux.setup(Momentum(), m);

# Freeze some layers right away
Flux.freeze!(opt_state.layers[1])

for data in train_set
input, label = data

# Some parameters could be frozen during training:
Flux.freeze!(opt_state.layers[2])

grads = Flux.gradient(m) do m
result = m(input)
loss(result, label)
end
Flux.update!(opt_state, m, grads[1])

# Optionally unfreeze the params later
Flux.thaw!(opt_state.layers[1])
end
```

## Static freezing per model definition
Sometimes some parts of the model ([`Flux.@functor`](@ref)) need not be trained
at all, but these parameters still need to reside on the GPU, since they are
still used in the forward and/or backward pass.
```julia
struct MaskedLayer{T}
chain::Chain
mask::T
end
Flux.@functor MaskedLayer

# mark the trainable part
Flux.trainable(a::MaskedLayer) = (; chain = a.chain)
# `a.mask` will not be updated in the training loop

function (m::MaskedLayer)(x)
return m.chain(x) + x + m.mask
end

model = MaskedLayer(...) # this model will not have the `mask` field trained
```
Note that this method permanently excludes some model fields from training;
unlike the on-the-fly approach above, it cannot be toggled during training.

## Excluding from model definition
Sometimes some parameters are simply "not trainable", and they should not even
be transferred to the GPU. All scalar fields behave like this by default, so
things like learning-rate multipliers are neither trained nor transferred to
the GPU.
```julia
struct CustomLayer{F}
  chain::Chain
  activation_results::Vector{F}
  lr_multiplier::Float32
end
Flux.@functor CustomLayer (chain,) # Explicitly leaving out `activation_results` and `lr_multiplier`

function (m::CustomLayer)(x)
result = m.chain(x) + x

# `activation_results` is excluded from training and GPU movement, hence we can
# do things like `push!` to it here
push!(m.activation_results, mean(result))
return result
end
```
See more about this in [`Flux.@functor`](@ref) and in the [Functors.jl documentation](https://fluxml.ai/Functors.jl/stable/#Basic-Usage-and-Implementation).


## Freezing Layer Parameters (deprecated)

When it is desired not to include all the model parameters (e.g. for transfer learning), we can simply not pass those layers into our call to `params`.

!!! compat "Flux ≤ 0.14"
The mechanism described here is for Flux's old "implicit" training style.
When upgrading for Flux 0.15, it should be replaced by [`freeze!`](@ref Flux.freeze!) and `thaw!`.

Consider a simple multi-layer perceptron model where we want to avoid optimising the first two `Dense` layers. We can obtain
this using the slicing features `Chain` provides:

```julia
m = Chain(
      Dense(784 => 64, relu),
      Dense(64 => 32, relu),
      Dense(32 => 10)
    );

ps = Flux.params(m[3:end])
```

The `Zygote.Params` object `ps` now holds a reference to only the parameters of the layers passed to it.

During training, the gradients will only be computed for (and applied to) the last `Dense` layer, therefore only that would have its parameters changed.

`Flux.params` also takes multiple inputs to make it easy to collect parameters from heterogeneous models with a single call. A simple demonstration would be if we wanted to omit optimising the second `Dense` layer in the previous example. It would look something like this:

```julia
Flux.params(m[1], m[3:end])
```

Sometimes more fine-grained control is needed.
We can freeze a specific parameter of a specific layer which has already entered a `Params` object `ps`,
by simply deleting it from `ps`:

```julia
ps = Flux.params(m)
delete!(ps, m[2].bias)
```
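With the new explicit style, a comparable per-parameter effect can be obtained by freezing just that parameter's optimiser state. A sketch, reusing `m` from above:

```julia
opt_state = Flux.setup(Momentum(), m)
Flux.freeze!(opt_state.layers[2].bias)  # freeze only the bias of the second Dense layer
```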

6 changes: 5 additions & 1 deletion docs/src/models/functors.md
@@ -2,7 +2,11 @@

Flux models are deeply nested structures, and [Functors.jl](https://github.com/FluxML/Functors.jl) provides tools needed to explore such objects, apply functions to the parameters they contain, and re-build them.

New layers should be annotated using the `Functors.@functor` macro. This will enable [`params`](@ref Flux.params) to see the parameters inside, and [`gpu`](@ref) to move them to the GPU.
!!! compat "Flux ≤ 0.14"
All layers were previously defined with the `Functors.@functor` macro.
This still works, but it is recommended that you use the new [`Flux.@layer`](@ref Flux.@layer) macro instead.
Both allow [`Flux.setup`](@ref Flux.setup) to see the parameters inside, and [`gpu`](@ref) to move them to the GPU, but [`Flux.@layer`](@ref Flux.@layer) also overloads printing,
and offers a way to define `trainable` at the same time.
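
For instance, a minimal sketch (the two-field `TwoFields` layer is hypothetical, not part of Flux):

```julia
using Flux

struct TwoFields
  alpha
  beta
end

# Previously: Functors.@functor TwoFields
# New style: also defines pretty printing, and can mark only `alpha` as trainable
Flux.@layer TwoFields trainable=(alpha,)
```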

`Functors.jl` has its own [notes on basic usage](https://fluxml.ai/Functors.jl/stable/#Basic-Usage-and-Implementation) for more details. Additionally, the [Advanced Model Building and Customisation](@ref man-advanced) page covers the use cases of `Functors` in greater details.

2 changes: 1 addition & 1 deletion docs/src/models/layers.md
@@ -12,7 +12,7 @@ The `Dense` exemplifies several features:

* The bias vector is always initialised [`Flux.zeros32`](@ref). The keyword `bias=false` will turn this off, i.e. keeping the bias permanently zero.

* It is annotated with [`@functor`](@ref Functors.@functor), which means that [`params`](@ref Flux.params) will see the contents, and [`gpu`](@ref Flux.gpu) will move their arrays to the GPU.
* It is annotated with [`@layer`](@ref Flux.@layer), which means that [`Flux.setup`](@ref Flux.setup) will see the contents, and [`gpu`](@ref Flux.gpu) will move their arrays to the GPU.
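
A quick sketch of the two points above (illustrative only, not from the original page):

```julia
using Flux

d = Dense(3 => 2, relu; bias = false)  # `bias=false` keeps the bias permanently zero
opt_state = Flux.setup(Adam(), d)      # thanks to `@layer`, setup finds `d.weight`
d_moved = gpu(d)                       # and `gpu` can move the weight array to the GPU
```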

By contrast, `Chain` itself contains no parameters, but connects other layers together.
The section on [dataflow layers](@ref man-dataflow-layers) introduces others like this.
4 changes: 2 additions & 2 deletions docs/src/saving.md
@@ -16,12 +16,12 @@ julia> struct MyModel
net
end

julia> Flux.@functor MyModel
julia> Flux.@layer MyModel

julia> MyModel() = MyModel(Chain(Dense(10, 5, relu), Dense(5, 2)));

julia> model = MyModel()
MyModel(Chain(Dense(10 => 5, relu), Dense(5 => 2)))
MyModel(Chain(Dense(10 => 5, relu), Dense(5 => 2))) # 67 parameters

julia> model_state = Flux.state(model);

3 changes: 2 additions & 1 deletion docs/src/training/optimisers.md
@@ -76,7 +76,7 @@ Flux.Optimise.Optimiser

## Scheduling Optimisers

In practice, it is fairly common to schedule the learning rate of an optimiser to obtain faster convergence. There are a variety of popular scheduling policies, and you can find implementations of them in [ParameterSchedulers.jl](http://fluxml.ai/ParameterSchedulers.jl/dev/README.html). The documentation for ParameterSchedulers.jl provides a more detailed overview of the different scheduling policies, and how to use them with Flux optimisers. Below, we provide a brief snippet illustrating a [cosine annealing](https://arxiv.org/pdf/1608.03983.pdf) schedule with a momentum optimiser.
In practice, it is fairly common to schedule the learning rate of an optimiser to obtain faster convergence. There are a variety of popular scheduling policies, and you can find implementations of them in [ParameterSchedulers.jl](http://fluxml.ai/ParameterSchedulers.jl/dev). The documentation for ParameterSchedulers.jl provides a more detailed overview of the different scheduling policies, and how to use them with Flux optimisers. Below, we provide a brief snippet illustrating a [cosine annealing](https://arxiv.org/pdf/1608.03983.pdf) schedule with a momentum optimiser.

First, we import ParameterSchedulers.jl and initialize a cosine annealing schedule to vary the learning rate between `1e-4` and `1e-2` every 10 steps. We also create a new [`Momentum`](@ref) optimiser.
```julia
@@ -112,6 +112,7 @@ Similar to optimisers, Flux also defines some simple decays that can be used in
ExpDecay
InvDecay
WeightDecay
SignDecay
```

## Gradient Clipping