From dfa160575bd6e68a319a58582ecbda5e23cd36ad Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Sun, 17 Dec 2023 06:25:34 +0000 Subject: [PATCH] build based on e8ec0c9 --- previews/PR2360/data/mlutils/index.html | 2 +- previews/PR2360/data/onehot/index.html | 2 +- previews/PR2360/destructure/index.html | 6 +- previews/PR2360/ecosystem/index.html | 2 +- previews/PR2360/gpu/index.html | 8 +- previews/PR2360/index.html | 2 +- previews/PR2360/models/activation/index.html | 2 +- previews/PR2360/models/advanced/index.html | 2 +- previews/PR2360/models/basics/index.html | 2 +- previews/PR2360/models/functors/index.html | 6 +- previews/PR2360/models/layers/index.html | 84 +++++++++---------- previews/PR2360/models/losses/index.html | 34 ++++---- previews/PR2360/models/nnlib/index.html | 2 +- previews/PR2360/models/overview/index.html | 2 +- previews/PR2360/models/quickstart/index.html | 2 +- previews/PR2360/models/recurrence/index.html | 2 +- previews/PR2360/outputsize/index.html | 4 +- previews/PR2360/performance/index.html | 2 +- previews/PR2360/saving/index.html | 2 +- previews/PR2360/search/index.html | 2 +- previews/PR2360/training/callbacks/index.html | 8 +- .../PR2360/training/optimisers/index.html | 34 ++++---- previews/PR2360/training/reference/index.html | 12 +-- previews/PR2360/training/training/index.html | 2 +- previews/PR2360/training/zygote/index.html | 2 +- .../2020-09-15-deep-learning-flux/index.html | 2 +- .../tutorials/2021-01-26-mlp/index.html | 2 +- .../tutorials/2021-02-07-convnet/index.html | 2 +- .../2021-10-08-dcgan-mnist/index.html | 2 +- .../2021-10-14-vanilla-gan/index.html | 2 +- .../tutorials/linear_regression/index.html | 2 +- .../tutorials/logistic_regression/index.html | 2 +- previews/PR2360/utilities/index.html | 20 ++--- 33 files changed, 131 insertions(+), 131 deletions(-) diff --git a/previews/PR2360/data/mlutils/index.html b/previews/PR2360/data/mlutils/index.html index 7dfcfd713e..968536e86c 100644 --- a/previews/PR2360/data/mlutils/index.html +++ b/previews/PR2360/data/mlutils/index.html @@ -562,4 +562,4 @@ julia> zeros_like(x, Float64) 2×2 CuArray{Float64, 2, CUDA.Mem.DeviceBuffer}: 0.0 0.0 - 0.0 0.0 + 0.0 0.0 diff --git a/previews/PR2360/data/onehot/index.html b/previews/PR2360/data/onehot/index.html index 21e62ff2b5..21351fef4e 100644 --- a/previews/PR2360/data/onehot/index.html +++ b/previews/PR2360/data/onehot/index.html @@ -73,4 +73,4 @@ 3 6 15 3 9 3 12 3 6 15 3
OneHotArrays.OneHotArrayType
OneHotArray{T, N, M, I} <: AbstractArray{Bool, M}
 OneHotArray(indices, L)

A one-hot M-dimensional array with L labels (i.e. size(A, 1) == L and sum(A, dims=1) == 1) stored as a compact N == M-1-dimensional array of indices.

Typically constructed by onehot and onehotbatch. Parameter I is the type of the underlying storage, and T its eltype.

OneHotArrays.OneHotVectorType
OneHotVector{T} = OneHotArray{T, 0, 1, T}
 OneHotVector(indices, L)

A one-hot vector with L labels (i.e. length(A) == L and count(A) == 1) typically constructed by onehot. Stored efficiently as a single index of type T, usually UInt32.

OneHotArrays.OneHotMatrixType
OneHotMatrix{T, I} = OneHotArray{T, 1, 2, I}
-OneHotMatrix(indices, L)

A one-hot matrix (with L labels) typically constructed using onehotbatch. Stored efficiently as a vector of indices with type I and eltype T.
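
For example, a small REPL sketch (not from the rendered docstring) using onehot and onehotbatch; the printed form of the one-hot types is omitted:

julia> using OneHotArrays

julia> v = onehot(:b, [:a, :b, :c]);       # OneHotVector with 3 labels, stored as one UInt32 index

julia> m = onehotbatch("abca", "abc");     # 3×4 OneHotMatrix, stored as a vector of 4 indices

julia> size(m), sum(m; dims=1)             # each column holds exactly one true entry
((3, 4), [1 1 1 1])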

diff --git a/previews/PR2360/destructure/index.html b/previews/PR2360/destructure/index.html index 3464a0a316..f4fe9ac121 100644 --- a/previews/PR2360/destructure/index.html +++ b/previews/PR2360/destructure/index.html @@ -69,7 +69,7 @@ L2 (generic function with 1 method) julia> L2(m2) isa Float32 -truesource

+true
source

Save and Load

Flux.stateFunction
state(x)

Return an object with the same nested structure as x according to Functors.children, but made only of basic containers (e.g. named tuples, tuples, arrays, and dictionaries).

Besides trainable and non-trainable arrays, the state will contain leaf nodes that are not arrays, such as numbers, symbols, strings, and nothing values. The leaf types that end up in the state could increase in the future.

This method is particularly useful for saving and loading models, since the state contains only simple data types that can be easily serialized.

The state can be passed to loadmodel! to restore the model.

Examples

Copy the state into another model

julia> m1 = Chain(Dense(1, 2, tanh; init=ones), Dense(2, 1; init=ones));
 
 julia> s = Flux.state(m1)
 (layers = ((weight = [1.0; 1.0;;], bias = [0.0, 0.0], σ = ()), (weight = [1.0 1.0], bias = [0.0], σ = ())),)
@@ -95,7 +95,7 @@
 
 julia> JLD2.jldsave("checkpoint.jld2", model_state = s)
 
-julia> Flux.loadmodel!(m2, JLD2.load("checkpoint.jld2", "model_state"))
+julia> Flux.loadmodel!(m2, JLD2.load("checkpoint.jld2", "model_state"))
source
Flux.loadmodel!Function
loadmodel!(dst, src)

Copy all the parameters (trainable and non-trainable) from src into dst.

Recursively walks dst and src together using Functors.children, calling copyto! on parameter arrays and throwing an error when there is a mismatch. Non-array elements (such as activation functions) are not copied and need not match. Zero bias vectors and bias=false are considered equivalent (see extended help for more details).

See also Flux.state.

Examples

julia> dst = Chain(Dense(Flux.ones32(2, 5), Flux.ones32(2), tanh), Dense(2 => 1; bias = [1f0]))
 Chain(
   Dense(5 => 2, tanh),                  # 12 parameters
   Dense(2 => 1),                        # 3 parameters
@@ -112,4 +112,4 @@
 false
 
 julia> iszero(dst[2].bias)
-true

Extended help

Throws an error when:

  • dst and src do not share the same fields (at any level)
  • the sizes of leaf nodes are mismatched between dst and src
  • copying non-array values to/from an array parameter (except inactive parameters described below)
  • dst is a "tied" parameter (i.e. refers to another parameter) and loaded into multiple times with mismatched source values

Inactive parameters can be encoded by using the boolean value false instead of an array. If dst == false and src is an all-zero array, no error will be raised (and no values copied); however, attempting to copy a non-zero array to an inactive parameter will throw an error. Likewise, copying a src value of false to any dst array is valid, but copying a src value of true will error.
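
As a small sketch of the inactive-parameter rule above (not part of the original docstring): an all-zero source bias may be loaded into a destination built with bias=false, while array parameters are copied as usual.

julia> using Flux

julia> dst = Dense(2 => 3; bias=false);   # inactive bias, stored as false

julia> src = Dense(2 => 3);               # default bias is an all-zero vector

julia> Flux.loadmodel!(dst, src);         # no error: zero bias into bias=false, weight copied

julia> dst.weight == src.weight
true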

source
source diff --git a/previews/PR2360/ecosystem/index.html index 04981c0a8d..e768038462 100644 --- a/previews/PR2360/ecosystem/index.html +++ b/previews/PR2360/ecosystem/index.html @@ -3,4 +3,4 @@ -

The Julia Ecosystem around Flux

One of the main strengths of Julia lies in an ecosystem of packages globally providing a rich and consistent user experience.

This is a non-exhaustive list of Julia packages, nicely complementing Flux in typical machine learning and deep learning workflows. To add your project please send a PR. See also academic work citing Flux or citing Zygote.

Flux models

  • Flux's model-zoo contains examples from many domains.

Computer vision

  • ObjectDetector.jl provides ready-to-go image detection via YOLO.
  • Metalhead.jl includes many state-of-the-art computer vision models which can easily be used for transfer learning.
  • UNet.jl is a generic UNet implementation.

Natural language processing

  • Transformers.jl provides components for Transformer models for NLP, as well as providing several trained models out of the box.
  • TextAnalysis.jl provides several NLP algorithms that use Flux models under the hood.

Reinforcement learning

  • AlphaZero.jl provides a generic, simple and fast implementation of DeepMind's AlphaZero algorithm.
  • ReinforcementLearning.jl offers a collection of tools for doing reinforcement learning research in Julia.

Graph learning

  • GraphNeuralNetworks.jl is a fresh, performant and flexible graph neural network library based on Flux.jl.
  • GeometricFlux.jl is the first graph neural network library for Julia.
  • NeuralOperators.jl enables training infinite dimensional PDEs by learning a continuous function instead of using the finite element method.
  • SeaPearl.jl is a Constraint Programming solver that uses Reinforcement Learning based on graphs as input.

Time series

Robust networks

  • RobustNeuralNetworks.jl includes classes of neural networks that are constructed to naturally satisfy robustness constraints.

Tools closely associated with Flux

Utility tools you're unlikely to have met if you never used Flux!

High-level training flows

  • FastAI.jl is a Julia port of Python's fast.ai library.
  • FluxTraining.jl is a package for using and writing powerful, extensible training loops for deep learning models. It supports callbacks for many common use cases like hyperparameter scheduling, metrics tracking and logging, checkpointing, early stopping, and more. It powers training in FastAI.jl.

Datasets

Commonly used machine learning datasets are provided by the following packages in the Julia ecosystem:

Plumbing

Tools to put data into the right order for creating a model.

  • Augmentor.jl is a real-time image augmentation library for increasing the number of training images.
  • DataAugmentation.jl aims to make it easy to build stochastic, label-preserving augmentation pipelines for vision use cases involving images, keypoints and segmentation masks.
  • MLUtils.jl (replaces MLDataUtils.jl and MLLabelUtils.jl) is a library for processing Machine Learning datasets.

Parameters


Differentiable programming

Packages based on differentiable programming but not necessarily related to Machine Learning.

  • The SciML ecosystem uses Flux and Zygote to mix neural nets with differential equations, to get the best of black box and mechanistic modelling.
  • DiffEqFlux.jl provides tools for creating Neural Differential Equations.
  • Flux3D.jl shows off machine learning on 3D data.
  • RayTracer.jl combines ML with computer vision via a differentiable renderer.
  • Duckietown.jl Differentiable Duckietown simulator.
  • The Yao.jl project uses Flux and Zygote for Quantum Differentiable Programming.
  • AtomicGraphNets.jl enables learning graph based models on atomic systems used in chemistry.
  • DiffImages.jl differentiable computer vision modeling in Julia with the Images.jl ecosystem.

Probabilistic programming

  • Turing.jl extends Flux's differentiable programming capabilities to probabilistic programming.
  • Omega.jl is a research project aimed at causal, higher-order probabilistic programming.
  • Stheno.jl provides flexible Gaussian processes.

Statistics


Useful miscellaneous packages

Some useful and random packages!

  • AdversarialPrediction.jl provides a way to easily optimise generic performance metrics in supervised learning settings using the Adversarial Prediction framework.
  • Mill.jl helps to prototype flexible multi-instance learning models.
  • MLMetrics.jl is a utility for scoring models in data science and machine learning.
  • Torch.jl exposes torch in Julia.
  • ValueHistories.jl is a utility for efficient tracking of optimization histories, training curves or other information of arbitrary types and at arbitrarily spaced sampling times.
  • InvertibleNetworks.jl Building blocks for invertible neural networks in the Julia programming language.
  • ProgressMeter.jl progress meters for long-running computations.
  • TensorBoardLogger.jl easy peasy logging to tensorboard in Julia
  • ArgParse.jl is a package for parsing command-line arguments to Julia programs.
  • Parameters.jl types with default field values, keyword constructors and (un-)pack macros.
  • BSON.jl is a package for working with the Binary JSON serialisation format.
  • DataFrames.jl in-memory tabular data in Julia.
  • DrWatson.jl is scientific project assistant software.

This tight integration among Julia packages is shown in some of the examples in the model-zoo repository.


Alternatives to Flux

Julia has several other libraries for making neural networks.

  • SimpleChains.jl is focused on making small, simple, CPU-based neural networks fast. Uses LoopVectorization.jl. (Was FastChain in DiffEqFlux.jl.)

  • Knet.jl is a neural network library built around AutoGrad.jl.

  • Lux.jl (earlier ExplicitFluxLayers.jl) shares much of the design, use-case, and NNlib.jl / Optimisers.jl back-end of Flux. But instead of encapsulating all parameters within the model structure, it separates this into 3 components: a model, a tree of parameters, and a tree of model states.

Explicit or explicit?

Flux's training docs talk about changes from Zygote's implicit to explicit gradients, and from dictionary-like to tree-like structures. (See also Zygote's description of these.) Lux also uses Zygote, but uses the word "explicit" to mean something unrelated, namely storing the tree of parameters (and of state) separately from the model.

diff --git a/previews/PR2360/gpu/index.html b/previews/PR2360/gpu/index.html index ec1ff787e8..2bbce7120f 100644 --- a/previews/PR2360/gpu/index.html +++ b/previews/PR2360/gpu/index.html @@ -169,10 +169,10 @@ julia> CUDA.device(dense_model.weight) CuDevice(1): GeForce RTX 2080 Ti -

+

Due to a limitation in Metal.jl, currently this kind of data movement across devices is only supported for CUDA and AMDGPU backends.

Printing models after moving to a different device

Due to a limitation in how GPU packages currently work, printing models on the REPL after moving them to a GPU device which is different from the current device will lead to an error.

Flux.AbstractDeviceType
Flux.AbstractDevice <: Function

An abstract type representing device objects for different GPU backends. The currently supported backends are "CUDA", "AMDGPU", "Metal" and "CPU"; the "CPU" backend is the fallback case when no GPU is available. GPU extensions of Flux define subtypes of this type.

source
Flux.FluxCPUDeviceType
Flux.FluxCPUDevice <: Flux.AbstractDevice

A type representing device objects for the "CPU" backend for Flux. This is the fallback case when no GPU is available to Flux.

source
Flux.FluxCUDADeviceType
FluxCUDADevice <: AbstractDevice

A type representing device objects for the "CUDA" backend for Flux.

source
Flux.FluxAMDGPUDeviceType
FluxAMDGPUDevice <: AbstractDevice

A type representing device objects for the "AMDGPU" backend for Flux.

source
Flux.FluxMetalDeviceType
FluxMetalDevice <: AbstractDevice

A type representing device objects for the "Metal" backend for Flux.

source
Flux.supported_devicesFunction
Flux.supported_devices()

Get all supported backends for Flux, in order of preference.

Example

julia> using Flux;
 
 julia> Flux.supported_devices()
-("CUDA", "AMDGPU", "Metal", "CPU")
+("CUDA", "AMDGPU", "Metal", "CPU")
source
Flux.get_deviceFunction
Flux.get_device(; verbose=false)::Flux.AbstractDevice

Returns a device object for the most appropriate backend for the current Julia session.

First, the function checks whether a backend preference has been set via the Flux.gpu_backend! function. If so, an attempt is made to load this backend. If the corresponding trigger package has been loaded and the backend is functional, a device corresponding to the given backend is loaded. Otherwise, the backend is chosen automatically. To update the backend preference, use Flux.gpu_backend!.

If there is no preference, then for each of the "CUDA", "AMDGPU", "Metal" and "CPU" backends in the given order, this function checks whether the given backend has been loaded via the corresponding trigger package, and whether the backend is functional. If so, the device corresponding to the backend is returned. If no GPU backend is available, a Flux.FluxCPUDevice is returned.

If verbose is set to true, then the function prints informative log messages.

Examples

For the example given below, the backend preference was set to "AMDGPU" via the gpu_backend! function.

julia> using Flux;
 
 julia> model = Dense(2 => 3)
 Dense(2 => 3)       # 9 parameters
@@ -212,7 +212,7 @@
 3×2 CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}:
   0.820013   0.527131
  -0.915589   0.549048
-  0.290744  -0.0592499
+  0.290744  -0.0592499
source
Flux.get_device(backend::String, idx::Int = 0)::Flux.AbstractDevice

Get a device object for a backend specified by the string backend and idx. The currently supported values of backend are "CUDA", "AMDGPU" and "CPU". idx must be an integer value between 0 and the number of available devices.

Examples

julia> using Flux, CUDA;
 
 julia> CUDA.devices()
 CUDA.DeviceIterator() for 3 devices:
@@ -234,4 +234,4 @@
 
 julia> cpu_device = Flux.get_device("CPU")
 (::Flux.FluxCPUDevice) (generic function with 1 method)
-
source
+source diff --git a/previews/PR2360/index.html index 05e8d5c27d..e53156cdc8 100644 --- a/previews/PR2360/index.html +++ b/previews/PR2360/index.html @@ -3,4 +3,4 @@ -

+

Flux: The Julia Machine Learning Library

Flux is a library for machine learning. It comes "batteries-included" with many useful tools built in, but also lets you use the full power of the Julia language where you need it. We follow a few key principles:

  • Doing the obvious thing. Flux has relatively few explicit APIs. Instead, writing down the mathematical form will work – and be fast.
  • Extensible by default. Flux is written to be highly flexible while being performant. Extending Flux is as simple as using your own code as part of the model you want - it is all high-level Julia code.
  • Play nicely with others. Flux works well with unrelated Julia libraries from images to differential equation solvers, rather than duplicating them.

Installation

Download Julia 1.9 or later, preferably the current stable release. You can add Flux using Julia's package manager, by typing ] add Flux in the Julia prompt. For Nvidia GPU support, you will also need to install the CUDA and the cuDNN packages. For AMD GPU support, install the AMDGPU package. For acceleration on Apple Silicon, install the Metal package.
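
Equivalently, a sketch using the Pkg API (the GPU packages are optional and only needed for the corresponding hardware):

julia> using Pkg

julia> Pkg.add("Flux")

julia> Pkg.add(["CUDA", "cuDNN"])   # optional: NVIDIA GPU support

julia> Pkg.add("AMDGPU")            # optional: AMD GPU support

julia> Pkg.add("Metal")             # optional: Apple Silicon support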

Learning Flux

The quick start page trains a simple neural network.

The rest of the guide provides a from-scratch introduction to Flux's take on models and how they work, starting with fitting a line. Once you understand these docs, congratulations, you also understand Flux's source code, which is intended to be concise, legible and a good reference for more advanced concepts.

There are some tutorials about building particular models. The model zoo has starting points for many other common ones. And finally, the ecosystem page lists packages which define Flux models.

The reference section includes, besides Flux's own functions, those of some companion packages: Zygote.jl (automatic differentiation), Optimisers.jl (training) and others.

Community

Everyone is welcome to join our community on the Julia discourse forum, or the slack chat (channel #machine-learning). If you have questions or issues we'll try to help you out.

If you're interested in hacking on Flux, the source code is open and easy to understand – it's all just the same Julia code you work with normally. You might be interested in our intro issues to get started, or our contributing guide.

 diff --git a/previews/PR2360/models/activation/index.html index 300ed3bc80..0b997e630e 100644 --- a/previews/PR2360/models/activation/index.html +++ b/previews/PR2360/models/activation/index.html @@ -430,4 +430,4 @@ [character plot of an activation function over x from -3 to 3 omitted] diff --git a/previews/PR2360/models/advanced/index.html index 5fd1e3f3bc..9b0d6c6020 100644 --- a/previews/PR2360/models/advanced/index.html +++ b/previews/PR2360/models/advanced/index.html @@ -103,4 +103,4 @@ # rms over all the mse ŷs = model(x) return sqrt(mean(Flux.mse(y, ŷ) for (y, ŷ) in zip(ys, ŷs))) -end
+end
Note

This Split layer is available from the Fluxperimental.jl package.

diff --git a/previews/PR2360/models/basics/index.html b/previews/PR2360/models/basics/index.html index 4281889c86..5e1046ef4a 100644 --- a/previews/PR2360/models/basics/index.html +++ b/previews/PR2360/models/basics/index.html @@ -127,4 +127,4 @@ Affine(W, b) end -Affine(3 => 1, bias=false, init=ones) |> gpu +Affine(3 => 1, bias=false, init=ones) |> gpu diff --git a/previews/PR2360/models/functors/index.html b/previews/PR2360/models/functors/index.html index e882e12c43..9c64cf450c 100644 --- a/previews/PR2360/models/functors/index.html +++ b/previews/PR2360/models/functors/index.html @@ -164,7 +164,7 @@ julia> m.bias 2-element Vector{Float32}: 0.0 - 0.0source
+ 0.0
source
Flux.gpuMethod
gpu(m)

Copies m to the current GPU device (using current GPU backend), if one is available. If no GPU is available, it does nothing (but prints a warning the first time).

On arrays, this calls CUDA's cu, which also changes arrays with Float64 elements to Float32 while copying them to the device (same for AMDGPU). To act on arrays within a struct, the struct type must be marked with @functor.

Use cpu to copy back to ordinary Arrays. See also f32 and f16 to change element type only.

See the CUDA.jl docs to help identify the current device.

Example

julia> m = Dense(rand(2, 3))  # constructed with Float64 weight matrix
 Dense(3 => 2)       # 8 parameters
 
 julia> typeof(m.weight)
@@ -174,7 +174,7 @@
 Dense(3 => 2)       # 8 parameters
 
 julia> typeof(m_gpu.weight)
-CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}
+CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}
source
Flux.gpuMethod
gpu(data::DataLoader)

Transforms a given DataLoader to apply gpu to each batch of data, when iterated over. (If no GPU is available, this does nothing.)

Example

julia> dl = Flux.DataLoader((x = ones(2,10), y='a':'j'), batchsize=3)
 4-element DataLoader(::NamedTuple{(:x, :y), Tuple{Matrix{Float64}, StepRange{Char, Int64}}}, batchsize=3)
   with first element:
   (; x = 2×3 Matrix{Float64}, y = 3-element StepRange{Char, Int64})
@@ -193,4 +193,4 @@
  1.0  1.0  1.0

For large datasets, this is preferred over moving all the data to the GPU before creating the DataLoader, like this:

julia> Flux.DataLoader((x = ones(2,10), y=2:11) |> gpu, batchsize=3)
 4-element DataLoader(::NamedTuple{(:x, :y), Tuple{CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, UnitRange{Int64}}}, batchsize=3)
   with first element:
-  (; x = 2×3 CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, y = 3-element UnitRange{Int64})
+ (; x = 2×3 CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, y = 3-element UnitRange{Int64})
Warning

This only works if gpu is applied directly to the DataLoader. While gpu acts recursively on Flux models and many basic Julia structs, it will not work on (say) a tuple of DataLoaders.

source diff --git a/previews/PR2360/models/layers/index.html b/previews/PR2360/models/layers/index.html index 21615c9a38..aec08ae125 100644 --- a/previews/PR2360/models/layers/index.html +++ b/previews/PR2360/models/layers/index.html @@ -22,7 +22,7 @@ 0.9999092042625951 julia> Flux.params(d1) # no trainable bias -Params([[1.0 1.0 … 1.0 1.0; 1.0 1.0 … 1.0 1.0]])source
Flux.BilinearType
Bilinear((in1, in2) => out, σ=identity; bias=true, init=glorot_uniform)
+Params([[1.0 1.0 … 1.0 1.0; 1.0 1.0 … 1.0 1.0]])
source
Flux.BilinearType
Bilinear((in1, in2) => out, σ=identity; bias=true, init=glorot_uniform)
 Bilinear(W::AbstractArray, [bias, σ])

Creates a layer which is fully connected between two inputs and the output, and otherwise similar to Dense. Its output, given vectors x & y, is another vector z with, for all i ∈ 1:out:

z[i] = σ(x' * W[i,:,:] * y + bias[i])

If x and y are matrices, then each column of the output z = B(x, y) is of this form, with B the Bilinear layer.

If the second input y is not given, it is taken to be equal to x, i.e. B(x) == B(x, x)

The two inputs may also be provided as a tuple, B((x, y)) == B(x, y), which is accepted as the input to a Chain.

If the two input sizes are the same, in1 == in2, then you may write Bilinear(in => out, σ).

The initialisation works as for Dense layer, with W = init(out, in1, in2). By default the bias vector is zeros(Float32, out), option bias=false will switch off trainable bias. Either of these may be provided explicitly.

Examples

julia> x, y = randn(Float32, 5, 32), randn(Float32, 5, 32);
 
 julia> B = Flux.Bilinear((5, 5) => 7)
@@ -43,7 +43,7 @@
 (3, 32)
 
 julia> Flux.Bilinear(rand(4,8,16), false, tanh)  # first dim of weight is the output
-Bilinear((8, 16) => 4, tanh; bias=false)  # 512 parameters
+Bilinear((8, 16) => 4, tanh; bias=false)  # 512 parameters
source
Flux.ScaleType
Scale(size::Integer..., σ=identity; bias=true, init=ones32)
 Scale(scale::AbstractArray, [bias, σ])

Create an element-wise layer, whose forward pass is given by:

y = σ.(scale .* x .+ bias)

This uses .* instead of matrix multiplication * of Dense.

The learnable scale & bias are initialised init(size...) and zeros32(size...), with init=ones32 by default. You may specify the function init, turn off trainable bias with bias=false, or provide the array(s) explicitly.

Used by LayerNorm with affine=true.

Examples

julia> a = Flux.Scale(2)
 Scale(2)            # 4 parameters
 
@@ -64,7 +64,7 @@
  100  400  900  1600
 
 julia> Flux.params(b)
-Params([[1 2 3 4]])
+Params([[1 2 3 4]])
source

Perhaps Scale isn't quite fully connected, but it may be thought of as Dense(Diagonal(s.weights), s.bias), and LinearAlgebra's Diagonal is a matrix which just happens to contain many zeros.

Flux ≤ 0.12

Old versions of Flux accepted only Dense(in, out, act) and not Dense(in => out, act). This notation makes a Pair object. If you get an error like MethodError: no method matching Dense(::Pair{Int64,Int64}), this means that you should upgrade to newer Flux versions.

Convolution Models

These layers are used to build convolutional neural networks (CNNs).

They all expect images in what is called WHCN order: a batch of 32 colour images, each 50 x 50 pixels, will have size(x) == (50, 50, 3, 32). A single grayscale image might instead have size(x) == (28, 28, 1, 1).

Besides images (2D data), they also work with 1D data, where for instance a stereo sound recording with 1000 samples might have size(x) == (1000, 2, 1). They will also work with 3D data, ndims(x) == 5, where again the last two dimensions are channel and batch.

To understand how strides and padding work, the article by Dumoulin & Visin has great illustrations.
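
As a quick size check of these conventions (an illustrative sketch; with the default stride=1 and pad=0, each spatial dimension shrinks by the kernel size minus one):

julia> using Flux

julia> imgs = rand(Float32, 50, 50, 3, 32);   # WHCN: 32 RGB images of 50×50 pixels

julia> Conv((3, 3), 3 => 8)(imgs) |> size
(48, 48, 8, 32)

julia> sound = rand(Float32, 1000, 2, 1);     # 1D data: 1000 samples, 2 channels, 1 observation

julia> Conv((5,), 2 => 4)(sound) |> size
(996, 4, 1)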

Flux.ConvType
Conv(filter, in => out, σ = identity;
      stride = 1, pad = 0, dilation = 1, groups = 1, [bias, init])

Standard convolutional layer. filter is a tuple of integers specifying the size of the convolutional kernel; in and out specify the number of input and output channels.

Image data should be stored in WHCN order (width, height, channels, batch). In other words, a 100×100 RGB image would be a 100×100×3×1 array, and a batch of 50 would be a 100×100×3×50 array. This has N = 2 spatial dimensions, and needs a kernel size like (5,5), a 2-tuple of integers.

To take convolutions along N feature dimensions, this layer expects as input an array with ndims(x) == N+2, where size(x, N+1) == in is the number of input channels, and size(x, ndims(x)) is (as always) the number of observations in a batch. Then:

  • filter should be a tuple of N integers.
  • Keywords stride and dilation should each be either single integer, or a tuple with N integers.
  • Keyword pad specifies the number of elements added to the borders of the data array. It can be
    • a single integer for equal padding all around,
    • a tuple of N integers, to apply the same padding at begin/end of each spatial dimension,
    • a tuple of 2*N integers, for asymmetric padding, or
    • the singleton SamePad(), to calculate padding such that size(output,d) == size(x,d) / stride (possibly rounded) for each spatial dimension.
  • Keyword groups is expected to be an Int. It specifies the number of groups to divide a convolution into.

Keywords to control initialization of the layer:

  • init - Function used to generate initial weights. Defaults to glorot_uniform.
  • bias - The initial bias vector is all zero by default. Trainable bias can be disabled entirely by setting this to false, or another vector can be provided such as bias = randn(Float32, out).

See also ConvTranspose, DepthwiseConv, CrossCor.

Examples

julia> xs = rand32(100, 100, 3, 50); # a batch of 50 RGB images
 
 julia> layer = Conv((5,5), 3 => 7, relu; bias = false)
@@ -83,7 +83,7 @@
 (130, 100, 7, 50)
 
 julia> Conv((5,5), 3 => 7; stride = 2, dilation = 4)(xs) |> size
-(42, 42, 7, 50)
+(42, 42, 7, 50)
source
Flux.ConvMethod
Conv(weight::AbstractArray, [bias, activation; stride, pad, dilation])

Constructs a convolutional layer with the given weight and bias. Accepts the same keywords and has the same defaults as Conv(k::NTuple{N,Integer}, ch::Pair{<:Integer,<:Integer}, σ; ...).

julia> weight = rand(3, 4, 5);
 
 julia> bias = zeros(5);
 
@@ -94,7 +94,7 @@
 (98, 5, 64)
 
 julia> Flux.params(layer) |> length
-2
+2
source
Flux.ConvTransposeType
ConvTranspose(filter, in => out, σ=identity; stride=1, pad=0, dilation=1, [bias, init])

Standard convolutional transpose layer. filter is a tuple of integers specifying the size of the convolutional kernel, while in and out specify the number of input and output channels.

Note that pad=SamePad() here tries to ensure size(output,d) == size(x,d) * stride.

Parameters are controlled by additional keywords, with defaults init=glorot_uniform and bias=true.

See also Conv for more detailed description of keywords.

Examples

julia> xs = rand32(100, 100, 3, 50);  # a batch of 50 RGB images
 
 julia> layer = ConvTranspose((5,5), 3 => 7, relu)
 ConvTranspose((5, 5), 3 => 7, relu)  # 532 parameters
@@ -106,7 +106,7 @@
 (203, 203, 7, 50)
 
 julia> ConvTranspose((5,5), 3 => 7, stride=3, pad=SamePad())(xs) |> size
-(300, 300, 7, 50)
+(300, 300, 7, 50)
source
Flux.ConvTransposeMethod
ConvTranspose(weight::AbstractArray, [bias, activation; stride, pad, dilation, groups])

Constructs a ConvTranspose layer with the given weight and bias. Accepts the same keywords and has the same defaults as ConvTranspose(k::NTuple{N,Integer}, ch::Pair{<:Integer,<:Integer}, σ; ...).

Examples

julia> weight = rand(3, 4, 5);
 
 julia> bias = zeros(4);
 
@@ -117,7 +117,7 @@
 (102, 4, 64)
 
 julia> Flux.params(layer) |> length
-2
+2
source
Flux.CrossCorType
CrossCor(filter, in => out, σ=identity; stride=1, pad=0, dilation=1, [bias, init])

Standard cross correlation layer. filter is a tuple of integers specifying the size of the convolutional kernel; in and out specify the number of input and output channels.

Parameters are controlled by additional keywords, with defaults init=glorot_uniform and bias=true.

See also Conv for more detailed description of keywords.

Examples

julia> xs = rand(Float32, 100, 100, 3, 50);  # a batch of 50 RGB images
 
 julia> layer = CrossCor((5,5), 3 => 6, relu; bias=false)
 CrossCor((5, 5), 3 => 6, relu, bias=false)  # 450 parameters
@@ -126,7 +126,7 @@
 (96, 96, 6, 50)
 
 julia> CrossCor((5,5), 3 => 7, stride=3, pad=(2,0))(xs) |> size
-(34, 32, 7, 50)
+(34, 32, 7, 50)
source
Flux.CrossCorMethod
CrossCor(weight::AbstractArray, [bias, activation; stride, pad, dilation])

Constructs a CrossCor layer with the given weight and bias. Accepts the same keywords and has the same defaults as CrossCor(k::NTuple{N,Integer}, ch::Pair{<:Integer,<:Integer}, σ; ...).

Examples

julia> weight = rand(3, 4, 5);
 
 julia> bias = zeros(5);
 
@@ -134,7 +134,7 @@
 CrossCor((3,), 4 => 5, relu)  # 65 parameters
 
 julia> layer(randn(100, 4, 64)) |> size
-(98, 5, 64)
+(98, 5, 64)
source
Flux.DepthwiseConvFunction
DepthwiseConv(filter, in => out, σ=identity; stride=1, pad=0, dilation=1, [bias, init])
 DepthwiseConv(weight::AbstractArray, [bias, activation; stride, pad, dilation])

Return a depthwise convolutional layer, that is a Conv layer with number of groups equal to the number of input channels.

See Conv for a description of the arguments.

Examples

julia> xs = rand(Float32, 100, 100, 3, 50);  # a batch of 50 RGB images
 
 julia> layer = DepthwiseConv((5,5), 3 => 6, relu; bias=false)
@@ -144,7 +144,7 @@
 (96, 96, 6, 50)
 
 julia> DepthwiseConv((5, 5), 3 => 9, stride=2, pad=2)(xs) |> size
-(50, 50, 9, 50)
+(50, 50, 9, 50)
source
Flux.SamePadType
SamePad()

Passed as an option to convolutional layers (and friends), this causes the padding to be chosen such that the input and output sizes agree (on the first N dimensions, the kernel or window) when stride==1. When stride≠1, the output size equals ceil(input_size/stride).

See also Conv, MaxPool.

Examples

julia> xs = rand32(100, 100, 3, 50);  # a batch of images
 
 julia> layer = Conv((2,2), 3 => 7, pad=SamePad())
 Conv((2, 2), 3 => 7, pad=(1, 0, 1, 0))  # 91 parameters
@@ -162,7 +162,7 @@
 Conv((5, 5), 3 => 7, pad=2, stride=2)  # 532 parameters
 
 julia> layer3(xs) |> size  # output size = `ceil(input_size/stride)` = 50
-(50, 50, 7, 50)
+(50, 50, 7, 50)
source
Flux.flattenFunction

flatten(x)

Same as MLUtils.flatten, which should be preferred; this method exists only for backward compatibility.
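
For instance (an illustrative sketch; MLUtils.flatten behaves the same way):

julia> x = rand(Float32, 4, 4, 3, 2);   # e.g. a small WHCN batch

julia> Flux.flatten(x) |> size          # all dimensions except the last (batch) are merged
(48, 2)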

source

MultiHeadAttention

The basic blocks needed to implement Transformer architectures. See also the functional counterparts documented in NNlib's Attention section.

Flux.MultiHeadAttentionType
MultiHeadAttention(dims; [nheads, bias, init, dropout_prob])

The multi-head dot-product attention layer used in Transformer architectures [1].

Returns the transformed input sequence and the attention scores.

[1] Vaswani et al. "Attention is all you need." Advances in Neural Information Processing Systems. 2017.

Arguments

  • dims: The embedding dimensions of inputs, intermediate tensors and outputs. In the most general case, it is given as a) (q_in_dim, k_in_dim, v_in_dim) => (qk_dim, v_dim) => out_dim. It can also take simpler forms such as b) dims::Int; c) in_dim::Int => (qk_dim, v_dim) => out_dim; d) in_dim::Int => qkv_dim => out_dim.
  • nheads: number of heads. Default 8.
  • init: weight initializer for the Dense layers. Default glorot_uniform.
  • bias : whether pointwise QKVO dense transforms use bias. Default false.
  • dropout_prob: dropout probability for the attention scores. Default 0.0.

Forward

(mha::MultiHeadAttention)(q_in, k_in, v_in, [bias]; [mask])

The arguments of the forward pass are:

  • q_in: Input query array of size (q_in_dim, q_len, batch_size).
  • k_in: Input key array of size (k_in_dim, kv_len, batch_size).
  • v_in: Input value array of size (v_in_dim, kv_len, batch_size).
  • bias: Bias array broadcastable to size (kv_len, q_len, nheads, batch_size). It will be added to the attention scores before the softmax. Default nothing.
  • mask: Input array broadcastable to size (kv_len, q_len, nheads, batch_size). The mask is applied to the attention scores just before the softmax. See NNlib.make_causal_mask for creating causal masks. Default nothing.

Alternative calling signatures are mha(q_in), equivalent to mha(q_in, q_in, q_in) (self-attention), and mha(q_in, k_in), equivalent to mha(q_in, k_in, k_in) (key and value are the same).

See also NNlib.dot_product_attention.

Examples

mha = MultiHeadAttention(64, nheads = 8)
 q = rand(Float32, (64, 10, 32))
 k = rand(Float32, (64, 20, 32))
 v = rand(Float32, (64, 20, 32))
@@ -173,13 +173,13 @@
 mha = MultiHeadAttention(64 => 1024 => 1024, nheads = 8)
 y, α = mha(q) # self-attention
 # [y] = [1024, 10, 32]
-# [α] = [10, 10, 8, 32]
+# [α] = [10, 10, 8, 32]
source
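As an extra sketch of the mask keyword from the Forward section (not part of the original example; it assumes NNlib.make_causal_mask takes the dims keyword shown):

mha = MultiHeadAttention(64, nheads = 8)
q = rand(Float32, (64, 10, 32))
mask = NNlib.make_causal_mask(q, dims = 2)  # 10×10 Boolean mask over query/key positions
y, α = mha(q; mask = mask)                  # causal self-attention
# [y] = [64, 10, 32]
# [α] = [10, 10, 8, 32]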

Pooling

These layers are commonly used after a convolution layer, and reduce the size of its output. They have no trainable parameters.

Flux.AdaptiveMaxPoolType
AdaptiveMaxPool(out::NTuple)

Adaptive max pooling layer. Calculates the necessary window size such that its output has size(y)[1:N] == out.

Expects as input an array with ndims(x) == N+2, i.e. channel and batch dimensions, after the N feature dimensions, where N = length(out).

See also MaxPool, AdaptiveMeanPool.

Examples

julia> xs = rand(Float32, 100, 100, 3, 50);  # batch of 50 RGB images
 
 julia> AdaptiveMaxPool((25, 25))(xs) |> size
 (25, 25, 3, 50)
 
 julia> MaxPool((4,4))(xs) ≈ AdaptiveMaxPool((25, 25))(xs)
-true
source
Flux.MaxPoolType
MaxPool(window::NTuple; pad=0, stride=window)

Max pooling layer, which replaces all pixels in a block of size window with one (the largest value in that block).

Expects as input an array with ndims(x) == N+2, i.e. channel and batch dimensions, after the N feature dimensions, where N = length(window).

By default the window size is also the stride in each dimension. The keyword pad accepts the same options as for the Conv layer, including SamePad().

See also Conv, MeanPool, AdaptiveMaxPool, GlobalMaxPool.

Examples

julia> xs = rand(Float32, 100, 100, 3, 50);  # batch of 50 RGB images
+true
source
Flux.MaxPoolType
MaxPool(window::NTuple; pad=0, stride=window)

Max pooling layer, which replaces all pixels in a block of size window with one (the largest value in that block).

Expects as input an array with ndims(x) == N+2, i.e. channel and batch dimensions, after the N feature dimensions, where N = length(window).

By default the window size is also the stride in each dimension. The keyword pad accepts the same options as for the Conv layer, including SamePad().

See also Conv, MeanPool, AdaptiveMaxPool, GlobalMaxPool.

Examples

julia> xs = rand(Float32, 100, 100, 3, 50);  # batch of 50 RGB images
 
 julia> m = Chain(Conv((5, 5), 3 => 7, pad=SamePad()), MaxPool((5, 5), pad=SamePad()))
 Chain(
@@ -197,7 +197,7 @@
 MaxPool((5,), pad=2, stride=3)
 
 julia> layer(rand(Float32, 100, 7, 50)) |> size
-(34, 7, 50)
source
Flux.GlobalMaxPoolType
GlobalMaxPool()

Global max pooling layer.

Transforms (w,h,c,b)-shaped input into (1,1,c,b)-shaped output, by performing max pooling on the complete (w,h)-shaped feature maps.

See also MaxPool, GlobalMeanPool.

julia> xs = rand(Float32, 100, 100, 3, 50);
+(34, 7, 50)
source
Flux.GlobalMaxPoolType
GlobalMaxPool()

Global max pooling layer.

Transforms (w,h,c,b)-shaped input into (1,1,c,b)-shaped output, by performing max pooling on the complete (w,h)-shaped feature maps.

See also MaxPool, GlobalMeanPool.

julia> xs = rand(Float32, 100, 100, 3, 50);
 
 julia> m = Chain(Conv((3,3), 3 => 7), GlobalMaxPool());
 
@@ -205,13 +205,13 @@
 (1, 1, 7, 50)
 
 julia> GlobalMaxPool()(rand(3,5,7)) |> size  # preserves 2 dimensions
-(1, 5, 7)
source
Flux.AdaptiveMeanPoolType
AdaptiveMeanPool(out::NTuple)

Adaptive mean pooling layer. Calculates the necessary window size such that its output has size(y)[1:N] == out.

Expects as input an array with ndims(x) == N+2, i.e. channel and batch dimensions, after the N feature dimensions, where N = length(out).

See also MaxPool, AdaptiveMaxPool.

Examples

julia> xs = rand(Float32, 100, 100, 3, 50);  # batch of 50 RGB images
+(1, 5, 7)
source
Flux.AdaptiveMeanPoolType
AdaptiveMeanPool(out::NTuple)

Adaptive mean pooling layer. Calculates the necessary window size such that its output has size(y)[1:N] == out.

Expects as input an array with ndims(x) == N+2, i.e. channel and batch dimensions, after the N feature dimensions, where N = length(out).

See also MaxPool, AdaptiveMaxPool.

Examples

julia> xs = rand(Float32, 100, 100, 3, 50);  # batch of 50 RGB images
 
 julia> AdaptiveMeanPool((25, 25))(xs) |> size
 (25, 25, 3, 50)
 
 julia> MeanPool((4,4))(xs) ≈ AdaptiveMeanPool((25, 25))(xs)
-true
source
Flux.MeanPoolType
MeanPool(window::NTuple; pad=0, stride=window)

Mean pooling layer, averaging all pixels in a block of size window.

Expects as input an array with ndims(x) == N+2, i.e. channel and batch dimensions, after the N feature dimensions, where N = length(window).

By default the window size is also the stride in each dimension. The keyword pad accepts the same options as for the Conv layer, including SamePad().

See also Conv, MaxPool, AdaptiveMeanPool.

Examples

julia> xs = rand(Float32, 100, 100, 3, 50);
+true
source
Flux.MeanPoolType
MeanPool(window::NTuple; pad=0, stride=window)

Mean pooling layer, averaging all pixels in a block of size window.

Expects as input an array with ndims(x) == N+2, i.e. channel and batch dimensions, after the N feature dimensions, where N = length(window).

By default the window size is also the stride in each dimension. The keyword pad accepts the same options as for the Conv layer, including SamePad().

See also Conv, MaxPool, AdaptiveMeanPool.

Examples

julia> xs = rand(Float32, 100, 100, 3, 50);
 
 julia> m = Chain(Conv((5,5), 3 => 7), MeanPool((5,5), pad=SamePad()))
 Chain(
@@ -223,12 +223,12 @@
 (96, 96, 7, 50)
 
 julia> m(xs) |> size
-(20, 20, 7, 50)
source
Flux.GlobalMeanPoolType
GlobalMeanPool()

Global mean pooling layer.

Transforms (w,h,c,b)-shaped input into (1,1,c,b)-shaped output, by performing mean pooling on the complete (w,h)-shaped feature maps.

julia> xs = rand(Float32, 100, 100, 3, 50);
+(20, 20, 7, 50)
source
Flux.GlobalMeanPoolType
GlobalMeanPool()

Global mean pooling layer.

Transforms (w,h,c,b)-shaped input into (1,1,c,b)-shaped output, by performing mean pooling on the complete (w,h)-shaped feature maps.

julia> xs = rand(Float32, 100, 100, 3, 50);
 
 julia> m = Chain(Conv((3,3), 3 => 7), GlobalMeanPool());
 
 julia> m(xs) |> size
-(1, 1, 7, 50)
source

Upsampling

The opposite of pooling, these layers increase the size of an array. They have no trainable parameters.

Flux.UpsampleType
Upsample(mode = :nearest; [scale, size]) 
+(1, 1, 7, 50)
source

Upsampling

The opposite of pooling, these layers increase the size of an array. They have no trainable parameters.

Flux.UpsampleType
Upsample(mode = :nearest; [scale, size]) 
 Upsample(scale, mode = :nearest)

An upsampling layer. One of two keywords must be given:

If scale is a number, this applies to all but the last two dimensions (channel and batch) of the input. It may also be a tuple, to control dimensions individually. Alternatively, keyword size accepts a tuple, to directly specify the leading dimensions of the output.

Currently supported upsampling modes and corresponding NNlib's methods are:

  • :nearest -> NNlib.upsample_nearest
  • :bilinear -> NNlib.upsample_bilinear
  • :trilinear -> NNlib.upsample_trilinear

Examples

julia> m = Upsample(scale = (2, 3))
 Upsample(:nearest, scale = (2, 3))
 
@@ -239,7 +239,7 @@
 Upsample(:bilinear, size = (4, 5))
 
 julia> m(ones(2, 2, 1, 1)) |> size
-(4, 5, 1, 1)
source
Flux.PixelShuffleType
PixelShuffle(r::Int)

Pixel shuffling layer with upscale factor r. Usually used for generating higher resolution images while upscaling them.

See NNlib.pixel_shuffle.

Examples

julia> p = PixelShuffle(2);
+(4, 5, 1, 1)
source
Flux.PixelShuffleType
PixelShuffle(r::Int)

Pixel shuffling layer with upscale factor r. Usually used for generating higher resolution images while upscaling them.

See NNlib.pixel_shuffle.

Examples

julia> p = PixelShuffle(2);
 
 julia> xs = [2row + col + channel/10 for row in 1:2, col in 1:2, channel in 1:4, n in 1:1]
 2×2×4×1 Array{Float64, 4}:
@@ -291,7 +291,7 @@
  4.1  4.3  5.1  5.3  6.1  6.3
  4.2  4.4  5.2  5.4  6.2  6.4
  7.1  7.3  8.1  8.3  9.1  9.3
- 7.2  7.4  8.2  8.4  9.2  9.4
source

Embedding Vectors

These layers accept an index, and return a vector (or several indices, and several vectors). The possible embedding vectors are learned parameters.

Flux.EmbeddingType
Embedding(in => out; init=randn32)

A lookup table that stores embeddings of dimension out for a vocabulary of size in, as a trainable matrix.

This layer is often used to store word embeddings and retrieve them using indices. The input to the layer can be a vocabulary index in 1:in, an array of indices, or the corresponding onehot encoding.

For indices x, the result is of size (out, size(x)...), allowing several batch dimensions. For one-hot ohx, the result is of size (out, size(ohx)[2:end]...).

Examples

julia> emb = Embedding(26 => 4, init=Flux.identity_init(gain=22))
+ 7.2  7.4  8.2  8.4  9.2  9.4
source

Embedding Vectors

These layers accept an index, and return a vector (or several indices, and several vectors). The possible embedding vectors are learned parameters.

Flux.EmbeddingType
Embedding(in => out; init=randn32)

A lookup table that stores embeddings of dimension out for a vocabulary of size in, as a trainable matrix.

This layer is often used to store word embeddings and retrieve them using indices. The input to the layer can be a vocabulary index in 1:in, an array of indices, or the corresponding onehot encoding.

For indices x, the result is of size (out, size(x)...), allowing several batch dimensions. For one-hot ohx, the result is of size (out, size(ohx)[2:end]...).

Examples

julia> emb = Embedding(26 => 4, init=Flux.identity_init(gain=22))
 Embedding(26 => 4)  # 104 parameters
 
 julia> emb(2)  # one column of e.weight (here not random!)
@@ -312,7 +312,7 @@
 true
 
 julia> emb(rand(1:26, (10, 1, 12))) |> size  # three batch dimensions
-(4, 10, 1, 12)
source
Flux.EmbeddingBagType
EmbeddingBag(in => out, reduction=mean; init=Flux.randn32)

A lookup table that stores embeddings of dimension out for a vocabulary of size in. Differs from Embedding in that, instead of acting on a single vocabulary index, it always acts on a vector of indices which it calls a "bag". Their individual embedding vectors are reduced to one, using mean or some other function.

Instead of acting on one "bag", such as x::Vector{Int}, the layer can also act on several:

  • Acting on a vector of "bags", it produces a matrix whose columns are the reduced vectors. More generally on x::Array{Vector{Int}}, its output is of size (out, size(x)...).

  • Any higher-rank array of integers is interpreted as a collection of "bags" each along the first dimension. Thus the output is mapslices(e, x; dims=1) when e::EmbeddingBag and x::Array{Int,N}. This method is more efficient, but requires that all "bags" have the same length.

  • A vector of "bags" may also be produced by splitting a vector of indices at specified points. For this case the layer takes two inputs, both vectors of integers. See details below.

The "bag" may equivalently be represented as a OneHotMatrix. A collection of these, or one higher-rank OneHotArray, again produce a stack of embeddings. See details below.

Examples

julia> vocab_size = 26;  # embed into 3 dimensions, with non-random vectors:
+(4, 10, 1, 12)
source
Flux.EmbeddingBagType
EmbeddingBag(in => out, reduction=mean; init=Flux.randn32)

A lookup table that stores embeddings of dimension out for a vocabulary of size in. Differs from Embedding in that, instead of acting on a single vocabulary index, it always acts on a vector of indices which it calls a "bag". Their individual embedding vectors are reduced to one, using mean or some other function.

Instead of acting on one "bag", such as x::Vector{Int}, the layer can also act on several:

  • Acting on a vector of "bags", it produces a matrix whose columns are the reduced vectors. More generally on x::Array{Vector{Int}}, its output is of size (out, size(x)...).

  • Any higher-rank array of integers is interpreted as a collection of "bags" each along the first dimension. Thus the output is mapslices(e, x; dims=1) when e::EmbeddingBag and x::Array{Int,N}. This method is more efficient, but requires that all "bags" have the same length.

  • A vector of "bags" may also be produced by splitting a vector of indices at specified points. For this case the layer takes two inputs, both vectors of integers. See details below.

The "bag" may equivalently be represented as a OneHotMatrix. A collection of these, or one higher-rank OneHotArray, again produce a stack of embeddings. See details below.

Examples

julia> vocab_size = 26;  # embed into 3 dimensions, with non-random vectors:
 
 julia> eb = EmbeddingBag(vocab_size => 3, init=Flux.identity_init(gain=100))
 EmbeddingBag(26 => 3)  # 78 parameters
@@ -361,7 +361,7 @@
 3×2 Matrix{Float32}:
  33.3333    0.0
  66.6667    0.0
-  0.0     100.0
source

Dataflow Layers, or Containers

The basic Chain(F, G, H) applies the layers it contains in sequence, equivalent to H ∘ G ∘ F. Flux has some other layers which contain layers, but connect them up in a more complicated way: SkipConnection allows ResNet's residual connection.

Flux.ChainType
Chain(layers...)
+  0.0     100.0
source

Dataflow Layers, or Containers

The basic Chain(F, G, H) applies the layers it contains in sequence, equivalent to H ∘ G ∘ F. Flux has some other layers which contain layers, but connect them up in a more complicated way: SkipConnection allows ResNet's residual connection.

Flux.ChainType
Chain(layers...)
 Chain(name = layer, ...)

Collects multiple layers / functions to be called in sequence on a given input. Supports indexing and slicing, m[2] or m[1:end-1], and if names are given, m[:name] == m[1] etc.

Examples

julia> m = Chain(x -> x^2, x -> x+1);
 
 julia> m(5) == 26
@@ -378,12 +378,12 @@
                   dec = Dense(5 => 2));
 
 julia> m2(x) == (m2[:dec] ∘ m2[:enc])(x)
-true

For large models, there is a special type-unstable path which can reduce compilation times. This can be used by supplying a vector of layers Chain([layer1, layer2, ...]). This feature is somewhat experimental, beware!

source
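A brief sketch of that vector-of-layers form (sizes arbitrary; it is called exactly like the tuple form):

julia> m = Chain([Dense(2 => 5, relu), Dense(5 => 1)]);

julia> m(rand(Float32, 2, 3)) |> size
(1, 3)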
Flux.activationsFunction
activations(c::Chain, input)

Like calling a Chain, but saves the result of each layer as an output.

Examples

julia> using Flux: activations
+true

For large models, there is a special type-unstable path which can reduce compilation times. This can be used by supplying a vector of layers Chain([layer1, layer2, ...]). This feature is somewhat experimental, beware!

source
Flux.activationsFunction
activations(c::Chain, input)

Like calling a Chain, but saves the result of each layer as an output.

Examples

julia> using Flux: activations
 
 julia> c = Chain(x -> x + 1, x -> x * 2, x -> x ^ 3);
 
 julia> activations(c, 1)
-(2, 4, 64)
source
Flux.MaxoutType
Maxout(layers...)
+(2, 4, 64)
source
Flux.MaxoutType
Maxout(layers...)
 Maxout(f, n_alts)

This contains a number of internal layers, each of which receives the same input. Its output is the elementwise maximum of the internal layers' outputs.

Instead of defining layers individually, you can provide a zero-argument function which constructs them, and the number to construct.

Maxout over linear dense layers satisfies the universal approximation theorem. See Goodfellow, Warde-Farley, Mirza, Courville & Bengio "Maxout Networks" https://arxiv.org/abs/1302.4389.

See also Parallel to reduce with other operators.

Examples

julia> m = Maxout(x -> abs2.(x), x -> x .* 3);
 
 julia> m([-2 -1 0 1 2])
@@ -398,7 +398,7 @@
 )                   # Total: 6 arrays, 126 parameters, 888 bytes.
 
 julia> Flux.outputsize(m3, (5, 11))
-(7, 11)
source
Flux.SkipConnectionType
SkipConnection(layer, connection)

Create a skip connection which consists of a layer or Chain of consecutive layers and a shortcut connection linking the block's input to the output through a user-supplied 2-argument callable. The first argument to the callable will be propagated through the given layer while the second is the unchanged, "skipped" input.

The simplest "ResNet"-type connection is just SkipConnection(layer, +). Here is a more complicated example:

julia> m = Conv((3,3), 4 => 7, pad=(1,1));
+(7, 11)
source
Flux.SkipConnectionType
SkipConnection(layer, connection)

Create a skip connection which consists of a layer or Chain of consecutive layers and a shortcut connection linking the block's input to the output through a user-supplied 2-argument callable. The first argument to the callable will be propagated through the given layer while the second is the unchanged, "skipped" input.

The simplest "ResNet"-type connection is just SkipConnection(layer, +). Here is a more complicated example:

julia> m = Conv((3,3), 4 => 7, pad=(1,1));
 
 julia> x = ones(Float32, 5, 5, 4, 10);
 
@@ -408,7 +408,7 @@
 julia> sm = SkipConnection(m, (mx, x) -> cat(mx, x, dims=3));
 
 julia> size(sm(x)) == (5, 5, 11, 10)
-true

See also Parallel, Maxout.

source
Flux.ParallelType
Parallel(connection, layers...)
+true

See also Parallel, Maxout.

source
Flux.ParallelType
Parallel(connection, layers...)
 Parallel(connection; name = layer, ...)

Create a layer which passes an input array to each path in layers, before reducing the output with connection.

Called with one input x, this is equivalent to connection([l(x) for l in layers]...). If called with multiple inputs, one is passed to each layer, thus Parallel(+, f, g)(x, y) = f(x) + g(y).

Like Chain, its sub-layers may be given names using the keyword constructor. These can be accessed by indexing: m[1] == m[:name] is the first layer.

See also SkipConnection which is Parallel with one identity, and Maxout which reduces by broadcasting max.

Examples

julia> model = Chain(Dense(3 => 5),
                      Parallel(vcat, Dense(5 => 4), Chain(Dense(5 => 7), Dense(7 => 4))),
                      Dense(8 => 17));
@@ -430,7 +430,7 @@
 (2,)
 
 julia> model2[:β] == model2[2]
-true
source
Flux.PairwiseFusionType
PairwiseFusion(connection, layers...)

Arguments

  • connection: A function taking 2 inputs and combining them into a single output
  • layers: The layers whose outputs are combined

Inputs

This layer behaves differently based on input type:

  1. If input x is a tuple of length N (or the input is xs with N x's), matching the number of layers,

then each layer receives a new input x[i] combined with the previous output y[i-1] using connection. Thus (y1, y2, y3) = PairwiseFusion(connection, layer1, layer2, layer3)((x1, x2, x3)) may be drawn as:

x1 → layer1 → y1 ↘
+true
source
Flux.PairwiseFusionType
PairwiseFusion(connection, layers...)

Arguments

  • connection: A function taking 2 inputs and combining them into a single output
  • layers: The layers whose outputs are combined

Inputs

This layer behaves differently based on input type:

  1. If input x is a tuple of length N (or the input is xs with N x's), matching the number of layers,

then each layer receives a new input x[i] combined with the previous output y[i-1] using connection. Thus (y1, y2, y3) = PairwiseFusion(connection, layer1, layer2, layer3)((x1, x2, x3)) may be drawn as:

x1 → layer1 → y1 ↘
                   connection → layer2 → y2 ↘
               x2 ↗                          connection → layer3 → y3
                                         x3 ↗

... or written as:

y1 = layer1(x1)
@@ -438,7 +438,7 @@
 y3 = layer3(connection(y2, x3))
  2. With just one input, each layer receives the same x combined with the previous output. Thus y = PairwiseFusion(connection, layers...)(x) obeys:
y[1] == layers[1](x)
 for i in 2:length(layers)
     y[i] == connection(layers[i](y[i-1]), x)
-end

Returns

A tuple of length N with the output of each fusion ((y1, y2, ..., yN) in the example above).

source
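A small numeric sketch of the tuple-input case described above, using arbitrary toy functions as the layers:

julia> pf = PairwiseFusion(+, x -> x + 1, x -> x + 10, x -> x + 100);

julia> pf((1, 2, 3))  # (y1, y2, y3) as in the diagram above
(2, 14, 117)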

Recurrent Models

Much like the core layers above, but can be used to process sequence data (as well as other kinds of structured data).
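As a minimal sketch of the usual usage pattern (sizes arbitrary), a stateful layer is reset and then applied to each time step in order:

julia> rnn = RNN(2 => 3);

julia> xs = [rand(Float32, 2, 5) for t in 1:4];  # 4 time steps, batch size 5

julia> Flux.reset!(rnn);

julia> ys = [rnn(x) for x in xs];  # the hidden state carries over between calls

julia> size(ys[end])
(3, 5)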

Flux.RNNFunction
RNN(in => out, σ = tanh)

The most basic recurrent layer; essentially acts as a Dense layer, but with the output fed back into the input each time step.

The arguments in and out describe the size of the feature vectors passed as input and as output. That is, it accepts a vector of length in or a batch of vectors represented as an in x B matrix and outputs a vector of length out or a batch of vectors of size out x B.

This constructor is syntactic sugar for Recur(RNNCell(a...)), and so RNNs are stateful. Note that the state shape can change depending on the inputs, and so it is good to reset! the model between inference calls if the batch size changes. See the examples below.

Examples

julia> r = RNN(3 => 5)
+end

Returns

A tuple of length N with the output of each fusion ((y1, y2, ..., yN) in the example above).

source

Recurrent Models

Much like the core layers above, but can be used to process sequence data (as well as other kinds of structured data).

Flux.RNNFunction
RNN(in => out, σ = tanh)

The most basic recurrent layer; essentially acts as a Dense layer, but with the output fed back into the input each time step.

The arguments in and out describe the size of the feature vectors passed as input and as output. That is, it accepts a vector of length in or a batch of vectors represented as an in x B matrix and outputs a vector of length out or a batch of vectors of size out x B.

This constructor is syntactic sugar for Recur(RNNCell(a...)), and so RNNs are stateful. Note that the state shape can change depending on the inputs, and so it is good to reset! the model between inference calls if the batch size changes. See the examples below.

Examples

julia> r = RNN(3 => 5)
 Recur(
   RNNCell(3 => 5, tanh),                # 50 parameters
 )         # Total: 4 trainable arrays, 50 parameters,
@@ -477,7 +477,7 @@
 julia> r = Flux.Recur(Flux.RNNCell(tanh, rand(5, 4), Tridiagonal(rand(5, 5)), rand(5), rand(5, 1)))
 
 julia> r(rand(4, 10)) |> size # batch size of 10
-(5, 10)
source
Flux.LSTMFunction
LSTM(in => out)

Long Short Term Memory recurrent layer. Behaves like an RNN but generally exhibits a longer memory span over sequences.

The arguments in and out describe the size of the feature vectors passed as input and as output. That is, it accepts a vector of length in or a batch of vectors represented as an in x B matrix and outputs a vector of length out or a batch of vectors of size out x B.

This constructor is syntactic sugar for Recur(LSTMCell(a...)), and so LSTMs are stateful. Note that the state shape can change depending on the inputs, and so it is good to reset! the model between inference calls if the batch size changes. See the examples below.

See this article for a good overview of the internals.

Examples

julia> l = LSTM(3 => 5)
+(5, 10)
source
Flux.LSTMFunction
LSTM(in => out)

Long Short Term Memory recurrent layer. Behaves like an RNN but generally exhibits a longer memory span over sequences.

The arguments in and out describe the size of the feature vectors passed as input and as output. That is, it accepts a vector of length in or a batch of vectors represented as an in x B matrix and outputs a vector of length out or a batch of vectors of size out x B.

This constructor is syntactic sugar for Recur(LSTMCell(a...)), and so LSTMs are stateful. Note that the state shape can change depending on the inputs, and so it is good to reset! the model between inference calls if the batch size changes. See the examples below.

See this article for a good overview of the internals.

Examples

julia> l = LSTM(3 => 5)
 Recur(
   LSTMCell(3 => 5),                     # 190 parameters
 )         # Total: 5 trainable arrays, 190 parameters,
@@ -489,7 +489,7 @@
 julia> Flux.reset!(l);
 
 julia> l(rand(Float32, 3, 10)) |> size # batch size of 10
-(5, 10)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior. See the example in RNN.

Note:

LSTMCells can be constructed directly by specifying the non-linear function, the Wi and Wh internal matrices, a bias vector b, and a learnable initial state state0. The Wi and Wh matrices do not need to be the same type. See the example in RNN.

source
Flux.GRUFunction
GRU(in => out)

Gated Recurrent Unit layer. Behaves like an RNN but generally exhibits a longer memory span over sequences. This implements the variant proposed in v1 of the referenced paper.

The integer arguments in and out describe the size of the feature vectors passed as input and as output. That is, it accepts a vector of length in or a batch of vectors represented as an in x B matrix and outputs a vector of length out or a batch of vectors of size out x B.

This constructor is syntactic sugar for Recur(GRUCell(a...)), and so GRUs are stateful. Note that the state shape can change depending on the inputs, and so it is good to reset! the model between inference calls if the batch size changes. See the examples below.

See this article for a good overview of the internals.

Examples

julia> g = GRU(3 => 5)
+(5, 10)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior. See the example in RNN.

Note:

LSTMCells can be constructed directly by specifying the non-linear function, the Wi and Wh internal matrices, a bias vector b, and a learnable initial state state0. The Wi and Wh matrices do not need to be the same type. See the example in RNN.

source
Flux.GRUFunction
GRU(in => out)

Gated Recurrent Unit layer. Behaves like an RNN but generally exhibits a longer memory span over sequences. This implements the variant proposed in v1 of the referenced paper.

The integer arguments in and out describe the size of the feature vectors passed as input and as output. That is, it accepts a vector of length in or a batch of vectors represented as an in x B matrix and outputs a vector of length out or a batch of vectors of size out x B.

This constructor is syntactic sugar for Recur(GRUCell(a...)), and so GRUs are stateful. Note that the state shape can change depending on the inputs, and so it is good to reset! the model between inference calls if the batch size changes. See the examples below.

See this article for a good overview of the internals.

Examples

julia> g = GRU(3 => 5)
 Recur(
   GRUCell(3 => 5),                      # 140 parameters
 )         # Total: 4 trainable arrays, 140 parameters,
@@ -501,7 +501,7 @@
 julia> Flux.reset!(g);
 
 julia> g(rand(Float32, 3, 10)) |> size # batch size of 10
-(5, 10)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior. See the example in RNN.

Note:

GRUCells can be constructed directly by specifying the non-linear function, the Wi and Wh internal matrices, a bias vector b, and a learnable initial state state0. The Wi and Wh matrices do not need to be the same type. See the example in RNN.

source
Flux.GRUv3Function
GRUv3(in => out)

Gated Recurrent Unit layer. Behaves like an RNN but generally exhibits a longer memory span over sequences. This implements the variant proposed in v3 of the referenced paper.

The arguments in and out describe the size of the feature vectors passed as input and as output. That is, it accepts a vector of length in or a batch of vectors represented as an in x B matrix and outputs a vector of length out or a batch of vectors of size out x B.

This constructor is syntactic sugar for Recur(GRUv3Cell(a...)), and so GRUv3s are stateful. Note that the state shape can change depending on the inputs, and so it is good to reset! the model between inference calls if the batch size changes. See the examples below.

See this article for a good overview of the internals.

Examples

julia> g = GRUv3(3 => 5)
+(5, 10)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior. See the example in RNN.

Note:

GRUCells can be constructed directly by specifying the non-linear function, the Wi and Wh internal matrices, a bias vector b, and a learnable initial state state0. The Wi and Wh matrices do not need to be the same type. See the example in RNN.

source
Flux.GRUv3Function
GRUv3(in => out)

Gated Recurrent Unit layer. Behaves like an RNN but generally exhibits a longer memory span over sequences. This implements the variant proposed in v3 of the referenced paper.

The arguments in and out describe the size of the feature vectors passed as input and as output. That is, it accepts a vector of length in or a batch of vectors represented as an in x B matrix and outputs a vector of length out or a batch of vectors of size out x B.

This constructor is syntactic sugar for Recur(GRUv3Cell(a...)), and so GRUv3s are stateful. Note that the state shape can change depending on the inputs, and so it is good to reset! the model between inference calls if the batch size changes. See the examples below.

See this article for a good overview of the internals.

Examples

julia> g = GRUv3(3 => 5)
 Recur(
   GRUv3Cell(3 => 5),                    # 140 parameters
 )         # Total: 5 trainable arrays, 140 parameters,
@@ -513,7 +513,7 @@
 julia> Flux.reset!(g);
 
 julia> g(rand(Float32, 3, 10)) |> size # batch size of 10
-(5, 10)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior. See the example in RNN.

Note:

GRUv3Cells can be constructed directly by specifying the non-linear function, the Wi, Wh, and Wh_h internal matrices, a bias vector b, and a learnable initial state state0. The Wi, Wh, and Wh_h matrices do not need to be the same type. See the example in RNN.

source
Flux.RecurType
Recur(cell)

Recur takes a recurrent cell and makes it stateful, managing the hidden state in the background. cell should be a model of the form:

h, y = cell(h, x...)

For example, here's a recurrent network that keeps a running total of its inputs:

Examples

julia> accum(h, x) = (h + x, x)
+(5, 10)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior. See the example in RNN.

Note:

GRUv3Cells can be constructed directly by specifying the non-linear function, the Wi, Wh, and Wh_h internal matrices, a bias vector b, and a learnable initial state state0. The Wi, Wh, and Wh_h matrices do not need to be the same type. See the example in RNN.

source
Flux.RecurType
Recur(cell)

Recur takes a recurrent cell and makes it stateful, managing the hidden state in the background. cell should be a model of the form:

h, y = cell(h, x...)

For example, here's a recurrent network that keeps a running total of its inputs:

Examples

julia> accum(h, x) = (h + x, x)
 accum (generic function with 1 method)
 
 julia> rnn = Flux.Recur(accum, 0)
@@ -564,7 +564,7 @@
 
 julia> rnn.state
 1×1 Matrix{Int64}:
- 60
source
Flux.reset!Function
reset!(rnn)

Reset the hidden state of a recurrent layer back to its original value.

Assuming you have a Recur layer rnn, this is roughly equivalent to:

rnn.state = hidden(rnn.cell)

Examples

julia> r = Flux.RNNCell(relu, ones(1,1), zeros(1,1), ones(1,1), zeros(1,1));  # users should use the RNN wrapper struct instead
+ 60
source
Flux.reset!Function
reset!(rnn)

Reset the hidden state of a recurrent layer back to its original value.

Assuming you have a Recur layer rnn, this is roughly equivalent to:

rnn.state = hidden(rnn.cell)

Examples

julia> r = Flux.RNNCell(relu, ones(1,1), zeros(1,1), ones(1,1), zeros(1,1));  # users should use the RNN wrapper struct instead
 
 julia> y = Flux.Recur(r, ones(1,1));
 
@@ -586,7 +586,7 @@
 
 julia> y.state
 1×1 Matrix{Float64}:
- 0.0
source

Normalisation & Regularisation

These layers don't affect the structure of the network but may improve training times or reduce overfitting. Some of them contain trainable parameters, while others do not.

Flux.BatchNormType
BatchNorm(channels::Integer, λ=identity;
+ 0.0
source

Normalisation & Regularisation

These layers don't affect the structure of the network but may improve training times or reduce overfitting. Some of them contain trainable parameters, while others do not.

Flux.BatchNormType
BatchNorm(channels::Integer, λ=identity;
           initβ=zeros32, initγ=ones32,
           affine=true, track_stats=true, active=nothing,
           eps=1f-5, momentum= 0.1f0)

Batch Normalization layer. channels should be the size of the channel dimension in your data (see below).

Given an array with N dimensions, call the N-1th the channel dimension. For a batch of feature vectors this is just the data dimension, for WHCN images it's the usual channel dimension.

BatchNorm computes the mean and variance for each D_1×...×D_{N-2}×1×D_N input slice and normalises the input accordingly.

If affine=true, it also applies a shift and a rescale to the input through learnable per-channel bias β and scale γ parameters.

After normalisation, elementwise activation λ is applied.

If track_stats=true, it accumulates mean and var statistics in the training phase that will be used to renormalize the input in the test phase.

Use testmode! during inference.

Examples

julia> using Statistics
@@ -598,7 +598,7 @@
 julia> Flux.trainmode!(m);
 
 julia> isapprox(std(m(xs)), 1, atol=0.1) && std(xs) != std(m(xs))
-true
source
Flux.DropoutType
Dropout(p; [dims, rng, active])

Layer implementing dropout with the given probability. This is used as a regularisation, i.e. to reduce overfitting.

While training, it sets each input to 0 (with probability p) or else scales it by 1 / (1 - p), using the NNlib.dropout function. While testing, it has no effect.

By default the mode will switch automatically, but it can also be controlled manually via Flux.testmode!, or by passing keyword active=true for training mode.

By default every input is treated independently. With the dims keyword, instead it takes a random choice only along that dimension. For example Dropout(p; dims = 3) will randomly zero out entire channels on WHCN input (also called 2D dropout).

Keyword rng lets you specify a custom random number generator. (Only supported on the CPU.)

Examples

julia> m = Chain(Dense(ones(3,2)), Dropout(0.4))
+true
source
Flux.DropoutType
Dropout(p; [dims, rng, active])

Layer implementing dropout with the given probability. This is used as a regularisation, i.e. to reduce overfitting.

While training, it sets each input to 0 (with probability p) or else scales it by 1 / (1 - p), using the NNlib.dropout function. While testing, it has no effect.

By default the mode will switch automatically, but it can also be controlled manually via Flux.testmode!, or by passing keyword active=true for training mode.

By default every input is treated independently. With the dims keyword, instead it takes a random choice only along that dimension. For example Dropout(p; dims = 3) will randomly zero out entire channels on WHCN input (also called 2D dropout).

Keyword rng lets you specify a custom random number generator. (Only supported on the CPU.)

Examples

julia> m = Chain(Dense(ones(3,2)), Dropout(0.4))
 Chain(
   Dense(2 => 3),                        # 9 parameters
   Dropout(0.4),
@@ -630,7 +630,7 @@
 1.9989999999999961
 
 julia> mean(iszero, y)  # is about 0.4
-0.4003
source
Flux.AlphaDropoutType
AlphaDropout(p; [rng, active])

A dropout layer. Used in Self-Normalizing Neural Networks. The AlphaDropout layer ensures that mean and variance of activations remain the same as before.

Does nothing to the input once testmode! is true.

Examples

julia> using Statistics
+0.4003
source
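As an extra sketch of the dims keyword (channel-wise dropout on WHCN-like input; shapes arbitrary, with trainmode! forcing the training behaviour):

julia> m = Dropout(0.5; dims = 3);

julia> Flux.trainmode!(m);

julia> y = m(ones(Float32, 4, 4, 8, 1));

julia> count(iszero, y[:, :, 1, 1]) in (0, 16)  # each channel is dropped entirely or kept entirely
true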
Flux.AlphaDropoutType
AlphaDropout(p; [rng, active])

A dropout layer. Used in Self-Normalizing Neural Networks. The AlphaDropout layer ensures that mean and variance of activations remain the same as before.

Does nothing to the input once testmode! is true.

Examples

julia> using Statistics
 
 julia> x = randn32(1000,1);
 
@@ -641,7 +641,7 @@
 julia> y = m(x);
 
 julia> isapprox(std(x), std(y), atol=0.2)
-true
source
Flux.LayerNormType
LayerNorm(size..., λ=identity; affine=true, eps=1f-5)

A normalisation layer designed to be used with recurrent hidden states. The argument size should be an integer or a tuple of integers.

In the forward pass, the layer normalises the mean and standard deviation of the input, then applies the elementwise activation λ. The input is normalised along the first length(size) dimensions for tuple size, and along the first dimension for integer size. The input is expected to have first dimensions' size equal to size.

If affine=true, it also applies a learnable shift and rescaling using the Scale layer.

See also BatchNorm, InstanceNorm, GroupNorm, and normalise.

Examples

julia> using Statistics
+true
source
Flux.LayerNormType
LayerNorm(size..., λ=identity; affine=true, eps=1f-5)

A normalisation layer designed to be used with recurrent hidden states. The argument size should be an integer or a tuple of integers.

In the forward pass, the layer normalises the mean and standard deviation of the input, then applies the elementwise activation λ. The input is normalised along the first length(size) dimensions for tuple size, and along the first dimension for integer size. The input is expected to have first dimensions' size equal to size.

If affine=true, it also applies a learnable shift and rescaling using the Scale layer.

See also BatchNorm, InstanceNorm, GroupNorm, and normalise.

Examples

julia> using Statistics
 
 julia> xs = rand(3, 3, 3, 2);  # a batch of 2 images, each having 3 channels
 
@@ -650,7 +650,7 @@
 julia> y = m(xs);
 
 julia> isapprox(std(y, dims=1:3), ones(1, 1, 1, 2), atol=0.1) && std(y, dims=1:3) != std(xs, dims=1:3)
-true
source
Flux.InstanceNormType
InstanceNorm(channels::Integer, λ=identity;
+true
source
Flux.InstanceNormType
InstanceNorm(channels::Integer, λ=identity;
              initβ=zeros32, initγ=ones32,
              affine=false, track_stats=false,
              eps=1f-5, momentum=0.1f0)

Instance Normalization layer. channels should be the size of the channel dimension in your data (see below).

Given an array with N > 2 dimensions, call the N-1th the channel dimension. For WHCN images it's the usual channel dimension.

InstanceNorm computes the mean and variance for each D_1×...×D_{N-2}×1×1 input slice and normalises the input accordingly.

If affine=true, it also applies a shift and a rescale to the input through learnable per-channel bias β and scale γ parameters.

If track_stats=true, it accumulates mean and var statistics in the training phase that will be used to renormalize the input in the test phase.

Warning: the defaults for affine and track_stats used to be true in previous Flux versions (< v0.12).

Examples

julia> using Statistics
@@ -662,7 +662,7 @@
 julia> y = m(xs);
 
 julia> isapprox(std(y, dims=1:2), ones(1, 1, 3, 2), atol=0.2) && std(y, dims=1:2) != std(xs, dims=1:2)
-true
source
Flux.GroupNormType
GroupNorm(channels::Int, G::Int, λ = identity;
+true
source
Flux.GroupNormType
GroupNorm(channels::Int, G::Int, λ = identity;
           initβ = zeros32, 
           initγ = ones32,
           affine = true, 
@@ -679,7 +679,7 @@
 true
 
 julia> isapprox(std(y[:, :, 3:4, 2]), 1, atol=0.1) && std(xs[:, :, 3:4, 2]) != std(y[:, :, 3:4, 2])
-true
source
Flux.normaliseFunction
normalise(x; dims=ndims(x), eps=1e-5)

Normalise x to mean 0 and standard deviation 1 across the dimension(s) given by dims. By default, dims is the last dimension. eps is a small term added to the denominator for numerical stability.

Examples

julia> using Statistics
+true
source
Flux.normaliseFunction
normalise(x; dims=ndims(x), eps=1e-5)

Normalise x to mean 0 and standard deviation 1 across the dimension(s) given by dims. By default, dims is the last dimension. eps is a small term added to the denominator for numerical stability.

Examples

julia> using Statistics
 
 julia> x = [90, 100, 110, 130, 70];
 
@@ -702,7 +702,7 @@
 julia> y = Flux.normalise(x, dims=1);
 
 julia> isapprox(std(y; dims=1, corrected=false), ones(1, 10), atol=1e-5)
-true
source

Test vs. Train

Several normalisation layers behave differently under training and inference (testing). By default, Flux will automatically determine when a layer evaluation is part of training or inference.

Warning

This automatic train/test detection works best with Zygote, the default automatic differentiation package. It may not work with other packages such as Tracker, Yota, or ForwardDiff.

The functions Flux.trainmode! and Flux.testmode! let you manually specify which behaviour you want. When called on a model, they will place all layers within the model into the specified mode.

Flux.testmode!Method
testmode!(model, [mode]) -> model

Set a layer, or all layers in a model, to test mode. This disables the effect of Dropout and some other regularisation layers.

If you manually set a model into test mode, you need to manually place it back into train mode during the training phase, using trainmode!.

There is an optional second argument, which takes a symbol :auto to reset all layers back to the default automatic mode.

Example

julia> d = Dropout(0.3)
+true
source

Test vs. Train

Several normalisation layers behave differently under training and inference (testing). By default, Flux will automatically determine when a layer evaluation is part of training or inference.

Warning

This automatic train/test detection works best with Zygote, the default automatic differentiation package. It may not work with other packages such as Tracker, Yota, or ForwardDiff.

The functions Flux.trainmode! and Flux.testmode! let you manually specify which behaviour you want. When called on a model, they will place all layers within the model into the specified mode.

Flux.testmode!Method
testmode!(model, [mode]) -> model

Set a layer, or all layers in a model, to test mode. This disables the effect of Dropout and some other regularisation layers.

If you manually set a model into test mode, you need to manually place it back into train mode during the training phase, using trainmode!.

There is an optional second argument, which takes a symbol :auto to reset all layers back to the default automatic mode.

Example

julia> d = Dropout(0.3)
 Dropout(0.3)
 
 julia> testmode!(d)   # dropout is now always disabled
@@ -712,4 +712,4 @@
 Dropout(0.3, active=true)
 
 julia> testmode!(d, :auto)  # back to default
-Dropout(0.3)
source
Flux.testmode!Method
testmode!(model, inactive)

This two-argument method is largely internal. It recurses into the model until a method like testmode!(d::Dropout, inactive) alters the activity of a layer. Custom layers can support manual testmode! / trainmode! switching by defining such a method.

Possible values of inactive are:

  • true for testing, i.e. active=false
  • false for training, same as trainmode!(m)
  • :auto or nothing for Flux to detect training automatically.
Compat

This method may be removed in a future breaking change, to separate the user-facing testmode! from the internal recursion.

source
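As a sketch of the custom-layer hook mentioned above, MyNoise is a hypothetical layer (not part of Flux) that stores its mode in an active field, following the same convention as Dropout:

julia> mutable struct MyNoise
           scale::Float32
           active::Union{Bool, Nothing}  # nothing means automatic train/test detection
       end

julia> function Flux.testmode!(m::MyNoise, inactive)
           # inactive == true disables the noise, i.e. sets active = false
           m.active = inactive isa Bool ? !inactive : nothing
           return m
       end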
Flux.trainmode!Function
trainmode!(model) -> model

Set a layer, or all layers in a model, to training mode. Opposite to testmode!, see further details there.

source
trainmode!(m, active)
Warning

This two-argument method is deprecated.

Possible values of active are:

  • true for training, or
  • false for testing, same as testmode!(m)
  • :auto or nothing for Flux to detect training automatically.
source
+Dropout(0.3)
source
Flux.testmode!Method
testmode!(model, inactive)

This two-argument method is largely internal. It recurses into the model until a method like testmode!(d::Dropout, inactive) alters the activity of a layer. Custom layers can support manual testmode! / trainmode! switching by defining such a method.

Possible values of inactive are:

  • true for testing, i.e. active=false
  • false for training, same as trainmode!(m)
  • :auto or nothing for Flux to detect training automatically.
Compat

This method may be removed in a future breaking change, to separate the user-facing testmode! from the internal recursion.

source
Flux.trainmode!Function
trainmode!(model) -> model

Set a layer, or all layers in a model, to training mode. Opposite to testmode!, see further details there.

source
trainmode!(m, active)
Warning

This two-argument method is deprecated.

Possible values of active are:

  • true for training, or
  • false for testing, same as testmode!(m)
  • :auto or nothing for Flux to detect training automatically.
source
diff --git a/previews/PR2360/models/losses/index.html b/previews/PR2360/models/losses/index.html index bd0f4fafcd..d08fdfabde 100644 --- a/previews/PR2360/models/losses/index.html +++ b/previews/PR2360/models/losses/index.html @@ -10,16 +10,16 @@ loss(ŷ, y, agg=identity) # no aggregation.

Function listing

Flux.Losses.maeFunction
mae(ŷ, y; agg = mean)

Return the loss corresponding to mean absolute error:

agg(abs.(ŷ .- y))

Example

julia> y_model = [1.1, 1.9, 3.1];
 
 julia> Flux.mae(y_model, 1:3)
-0.10000000000000009
source
Flux.Losses.mseFunction
mse(ŷ, y; agg = mean)

Return the loss corresponding to mean square error:

agg((ŷ .- y) .^ 2)

See also: mae, msle, crossentropy.

Example

julia> y_model = [1.1, 1.9, 3.1];
+0.10000000000000009
source
Flux.Losses.mseFunction
mse(ŷ, y; agg = mean)

Return the loss corresponding to mean square error:

agg((ŷ .- y) .^ 2)

See also: mae, msle, crossentropy.

Example

julia> y_model = [1.1, 1.9, 3.1];
 
 julia> y_true = 1:3;
 
 julia> Flux.mse(y_model, y_true)
-0.010000000000000018
source
Flux.Losses.msleFunction
msle(ŷ, y; agg = mean, eps = eps(eltype(ŷ)))

The loss corresponding to mean squared logarithmic errors, calculated as

agg((log.(ŷ .+ ϵ) .- log.(y .+ ϵ)) .^ 2)

The ϵ == eps term provides numerical stability. Penalizes an under-estimation more than an over-estimation.

Example

julia> Flux.msle(Float32[1.1, 2.2, 3.3], 1:3)
+0.010000000000000018
source
Flux.Losses.msleFunction
msle(ŷ, y; agg = mean, eps = eps(eltype(ŷ)))

The loss corresponding to mean squared logarithmic errors, calculated as

agg((log.(ŷ .+ ϵ) .- log.(y .+ ϵ)) .^ 2)

The ϵ == eps term provides numerical stability. Penalizes an under-estimation more than an over-estimation.

Example

julia> Flux.msle(Float32[1.1, 2.2, 3.3], 1:3)
 0.009084041f0
 
 julia> Flux.msle(Float32[0.9, 1.8, 2.7], 1:3)
-0.011100831f0
source
Flux.Losses.huber_lossFunction
huber_loss(ŷ, y; delta = 1, agg = mean)

Return the mean of the Huber loss given the prediction ŷ and true values y.

             | 0.5 * |ŷ - y|^2,            for |ŷ - y| <= δ
+0.011100831f0
source
Flux.Losses.huber_lossFunction
huber_loss(ŷ, y; delta = 1, agg = mean)

Return the mean of the Huber loss given the prediction ŷ and true values y.

             | 0.5 * |ŷ - y|^2,            for |ŷ - y| <= δ
 Huber loss = |
              |  δ * (|ŷ - y| - 0.5 * δ), otherwise

Example

julia> ŷ = [1.1, 2.1, 3.1];
 
@@ -27,7 +27,7 @@
 0.005000000000000009
 
 julia> Flux.huber_loss(ŷ, 1:3, delta=0.05)  # changes behaviour as |ŷ - y| > δ
-0.003750000000000005
source
Flux.Losses.label_smoothingFunction
label_smoothing(y::Union{Number, AbstractArray}, α; dims::Int=1)

Returns smoothed labels, meaning the confidence on label values is relaxed.

When y is given as a one-hot vector or a batch of one-hot vectors, it is calculated as

y .* (1 - α) .+ α / size(y, dims)

When y is given as a number or a batch of numbers for binary classification, it is calculated as

y .* (1 - α) .+ α / 2

in which case the labels are squeezed towards 0.5.

α is a number in the interval (0, 1) called the smoothing factor. The higher the value of α, the larger the smoothing of y.

dims denotes the one-hot dimension, unless dims=0 which denotes the application of label smoothing to binary distributions encoded in a single number.

Example

julia> y = Flux.onehotbatch([1, 1, 1, 0, 1, 0], 0:1)
+0.003750000000000005
source
Flux.Losses.label_smoothingFunction
label_smoothing(y::Union{Number, AbstractArray}, α; dims::Int=1)

Returns smoothed labels, meaning the confidence on label values is relaxed.

When y is given as a one-hot vector or a batch of one-hot vectors, it is calculated as

y .* (1 - α) .+ α / size(y, dims)

When y is given as a number or a batch of numbers for binary classification, it is calculated as

y .* (1 - α) .+ α / 2

in which case the labels are squeezed towards 0.5.

α is a number in the interval (0, 1) called the smoothing factor. The higher the value of α, the larger the smoothing of y.

dims denotes the one-hot dimension, unless dims=0 which denotes the application of label smoothing to binary distributions encoded in a single number.

Example

julia> y = Flux.onehotbatch([1, 1, 1, 0, 1, 0], 0:1)
 2×6 OneHotMatrix(::Vector{UInt32}) with eltype Bool:
  ⋅  ⋅  ⋅  1  ⋅  1
  1  1  1  ⋅  1  ⋅
@@ -51,7 +51,7 @@
 true
 
 julia> Flux.crossentropy(y_dis, y) > Flux.crossentropy(y_dis, y_smoothed)
-true
source
Flux.Losses.crossentropyFunction
crossentropy(ŷ, y; dims = 1, eps = eps(eltype(ŷ)), agg = mean)

Return the cross entropy between the given probability distributions; calculated as

agg(-sum(y .* log.(ŷ .+ ϵ); dims))

Cross entropy is typically used as a loss in multi-class classification, in which case the labels y are given in a one-hot format. dims specifies the dimension (or the dimensions) containing the class probabilities. The prediction is supposed to sum to one across dims, as would be the case with the output of a softmax operation.

For numerical stability, it is recommended to use logitcrossentropy rather than softmax followed by crossentropy .

Use label_smoothing to smooth the true labels as preprocessing before computing the loss.

See also: logitcrossentropy, binarycrossentropy, logitbinarycrossentropy.

Example

julia> y_label = Flux.onehotbatch([0, 1, 2, 1, 0], 0:2)
+true
source
Flux.Losses.crossentropyFunction
crossentropy(ŷ, y; dims = 1, eps = eps(eltype(ŷ)), agg = mean)

Return the cross entropy between the given probability distributions; calculated as

agg(-sum(y .* log.(ŷ .+ ϵ); dims))

Cross entropy is typically used as a loss in multi-class classification, in which case the labels y are given in a one-hot format. dims specifies the dimension (or the dimensions) containing the class probabilities. The prediction is supposed to sum to one across dims, as would be the case with the output of a softmax operation.

For numerical stability, it is recommended to use logitcrossentropy rather than softmax followed by crossentropy .

Use label_smoothing to smooth the true labels as preprocessing before computing the loss.

See also: logitcrossentropy, binarycrossentropy, logitbinarycrossentropy.

Example

julia> y_label = Flux.onehotbatch([0, 1, 2, 1, 0], 0:2)
 3×5 OneHotMatrix(::Vector{UInt32}) with eltype Bool:
  1  ⋅  ⋅  ⋅  1
  ⋅  1  ⋅  1  ⋅
@@ -80,7 +80,7 @@
  0.05  0.05  0.9   0.05  0.05
 
 julia> Flux.crossentropy(y_model, y_smooth)
-1.5776052f0
source
Flux.Losses.logitcrossentropyFunction
logitcrossentropy(ŷ, y; dims = 1, agg = mean)

Return the cross entropy calculated by

agg(-sum(y .* logsoftmax(ŷ; dims); dims))

This is mathematically equivalent to crossentropy(softmax(ŷ), y), but is more numerically stable than using functions crossentropy and softmax separately.

See also: binarycrossentropy, logitbinarycrossentropy, label_smoothing.

Example

julia> y_label = Flux.onehotbatch(collect("abcabaa"), 'a':'c')
+1.5776052f0
source
Flux.Losses.logitcrossentropyFunction
logitcrossentropy(ŷ, y; dims = 1, agg = mean)

Return the cross entropy calculated by

agg(-sum(y .* logsoftmax(ŷ; dims); dims))

This is mathematically equivalent to crossentropy(softmax(ŷ), y), but is more numerically stable than using functions crossentropy and softmax separately.

See also: binarycrossentropy, logitbinarycrossentropy, label_smoothing.

Example

julia> y_label = Flux.onehotbatch(collect("abcabaa"), 'a':'c')
 3×7 OneHotMatrix(::Vector{UInt32}) with eltype Bool:
  1  ⋅  ⋅  1  ⋅  1  1
  ⋅  1  ⋅  ⋅  1  ⋅  ⋅
@@ -96,7 +96,7 @@
 1.5791205f0
 
 julia> Flux.crossentropy(softmax(y_model), y_label)
-1.5791197f0
source
Flux.Losses.binarycrossentropyFunction
binarycrossentropy(ŷ, y; agg = mean, eps = eps(eltype(ŷ)))

Return the binary cross-entropy loss, computed as

agg(@.(-y * log(ŷ + ϵ) - (1 - y) * log(1 - ŷ + ϵ)))

Typically, the prediction ŷ is given by the output of a sigmoid activation. The ϵ == eps term is included to avoid infinity. Using logitbinarycrossentropy is recommended over binarycrossentropy for numerical stability.

Use label_smoothing to smooth the y value as preprocessing before computing the loss.

See also: crossentropy, logitcrossentropy.

Examples

julia> y_bin = Bool[1,0,1]
+1.5791197f0
source
Flux.Losses.binarycrossentropyFunction
binarycrossentropy(ŷ, y; agg = mean, eps = eps(eltype(ŷ)))

Return the binary cross-entropy loss, computed as

agg(@.(-y * log(ŷ + ϵ) - (1 - y) * log(1 - ŷ + ϵ)))

Typically, the prediction ŷ is given by the output of a sigmoid activation. The ϵ == eps term is included to avoid infinity. Using logitbinarycrossentropy is recommended over binarycrossentropy for numerical stability.

Use label_smoothing to smooth the y value as preprocessing before computing the loss.

See also: crossentropy, logitcrossentropy.

Examples

julia> y_bin = Bool[1,0,1]
 3-element Vector{Bool}:
  1
  0
@@ -119,7 +119,7 @@
  1  ⋅  1
 
 julia> Flux.crossentropy(y_prob, y_hot)
-0.43989f0
source
Flux.Losses.logitbinarycrossentropyFunction
logitbinarycrossentropy(ŷ, y; agg = mean)

Mathematically equivalent to binarycrossentropy(σ(ŷ), y) but is more numerically stable.

See also: crossentropy, logitcrossentropy.

Examples

julia> y_bin = Bool[1,0,1];
+0.43989f0
source
Flux.Losses.logitbinarycrossentropyFunction
logitbinarycrossentropy(ŷ, y; agg = mean)

Mathematically equivalent to binarycrossentropy(σ(ŷ), y) but is more numerically stable.

See also: crossentropy, logitcrossentropy.

Examples

julia> y_bin = Bool[1,0,1];
 
 julia> y_model = Float32[2, -1, pi]
 3-element Vector{Float32}:
@@ -131,7 +131,7 @@
 0.160832f0
 
 julia> Flux.binarycrossentropy(sigmoid.(y_model), y_bin)
-0.16083185f0
source
Flux.Losses.kldivergenceFunction
kldivergence(ŷ, y; agg = mean, eps = eps(eltype(ŷ)))

Return the Kullback-Leibler divergence between the given probability distributions.

The KL divergence is a measure of how much one probability distribution is different from the other. It is always non-negative, and zero only when both the distributions are equal.

Example

julia> p1 = [1 0; 0 1]
+0.16083185f0
source
Flux.Losses.kldivergenceFunction
kldivergence(ŷ, y; agg = mean, eps = eps(eltype(ŷ)))

Return the Kullback-Leibler divergence between the given probability distributions.

The KL divergence is a measure of how much one probability distribution is different from the other. It is always non-negative, and zero only when both the distributions are equal.

Example

julia> p1 = [1 0; 0 1]
 2×2 Matrix{Int64}:
  1  0
  0  1
@@ -151,10 +151,10 @@
 0.0
 
 julia> Flux.kldivergence(p1, p2; eps = 0)  # about 17.3 with the regulator
-Inf
source
Flux.Losses.poisson_lossFunction
poisson_loss(ŷ, y; agg = mean)

Return how much the predicted distribution ŷ diverges from the expected Poisson distribution y; calculated as

sum(ŷ .- y .* log.(ŷ)) / size(y, 2)

More information..

Example

julia> y_model = [1, 3, 3];  # data should only take integral values
+Inf
source
Flux.Losses.poisson_lossFunction
poisson_loss(ŷ, y; agg = mean)

Return how much the predicted distribution ŷ diverges from the expected Poisson distribution y; calculated as

sum(ŷ .- y .* log.(ŷ)) / size(y, 2)

More information..

Example

julia> y_model = [1, 3, 3];  # data should only take integral values
 
 julia> Flux.poisson_loss(y_model, 1:3)
-0.5023128522198171
source
Flux.Losses.hinge_lossFunction
hinge_loss(ŷ, y; agg = mean)

Return the hinge_loss given the prediction ŷ and true labels y (containing 1 or -1); calculated as

sum(max.(0, 1 .- ŷ .* y)) / size(y, 2)

Usually used with classifiers like Support Vector Machines. See also: squared_hinge_loss

Example

julia> y_true = [1, -1, 1, 1];
+0.5023128522198171
source
Flux.Losses.hinge_lossFunction
hinge_loss(ŷ, y; agg = mean)

Return the hinge_loss given the prediction ŷ and true labels y (containing 1 or -1); calculated as

sum(max.(0, 1 .- ŷ .* y)) / size(y, 2)

Usually used with classifiers like Support Vector Machines. See also: squared_hinge_loss

Example

julia> y_true = [1, -1, 1, 1];
 
 julia> y_pred = [0.1, 0.3, 1, 1.5];
 
@@ -168,7 +168,7 @@
 true
 
 julia> Flux.hinge_loss(y_pred[2], y_true[2]) != 0 # opposite signs
-true
source
Flux.Losses.squared_hinge_lossFunction
squared_hinge_loss(ŷ, y)

Return the squared hinge loss given the prediction ŷ and true labels y (containing 1 or -1); calculated as

sum((max.(0, 1 .- ŷ .* y)).^2) / size(y, 2)

Usually used with classifiers like Support Vector Machines. See also: hinge_loss

Example

julia> y_true = [1, -1, 1, 1];
+true
source
Flux.Losses.squared_hinge_lossFunction
squared_hinge_loss(ŷ, y)

Return the squared hinge loss given the prediction ŷ and true labels y (containing 1 or -1); calculated as

sum((max.(0, 1 .- ŷ .* y)).^2) / size(y, 2)

Usually used with classifiers like Support Vector Machines. See also: hinge_loss

Example

julia> y_true = [1, -1, 1, 1];
 
 julia> y_pred = [0.1, 0.3, 1, 1.5];
 
@@ -182,13 +182,13 @@
 true
 
 julia> Flux.squared_hinge_loss(y_pred[2], y_true[2]) != 0
-true
source
Flux.Losses.dice_coeff_lossFunction
dice_coeff_loss(ŷ, y; smooth = 1)

Return a loss based on the dice coefficient. Used in the V-Net image segmentation architecture. The dice coefficient is similar to the F1_score. Loss calculated as:

1 - 2*sum(|ŷ .* y| + smooth) / (sum(ŷ.^2) + sum(y.^2) + smooth)

Example

julia> y_pred = [1.1, 2.1, 3.1];
+true
source
Flux.Losses.dice_coeff_lossFunction
dice_coeff_loss(ŷ, y; smooth = 1)

Return a loss based on the dice coefficient. Used in the V-Net image segmentation architecture. The dice coefficient is similar to the F1_score. Loss calculated as:

1 - 2*sum(|ŷ .* y| + smooth) / (sum(ŷ.^2) + sum(y.^2) + smooth)

Example

julia> y_pred = [1.1, 2.1, 3.1];
 
 julia> Flux.dice_coeff_loss(y_pred, 1:3)
 0.000992391663909964
 
 julia> 1 - Flux.dice_coeff_loss(y_pred, 1:3)  # ~ F1 score for image segmentation
-0.99900760833609
source
Flux.Losses.tversky_lossFunction
tversky_loss(ŷ, y; beta = 0.7)

Return the Tversky loss. Used with imbalanced data to give more weight to false negatives. A larger β == beta weighs recall more than precision (by placing more emphasis on false negatives). Calculated as:

1 - sum(|y .* ŷ| + 1) / (sum(y .* ŷ + (1 - β)*(1 .- y) .* ŷ + β*y .* (1 .- ŷ)) + 1)
source
Flux.Losses.binary_focal_lossFunction
binary_focal_loss(ŷ, y; agg=mean, gamma=2, eps=eps(eltype(ŷ)))

Return the binary_focal_loss. The input ŷ is expected to be normalized (i.e. softmax output).

For gamma = 0, the loss is mathematically equivalent to Losses.binarycrossentropy.

See also: Losses.focal_loss for multi-class setting

Example

julia> y = [0  1  0
+0.99900760833609
source
Flux.Losses.tversky_lossFunction
tversky_loss(ŷ, y; beta = 0.7)

Return the Tversky loss. Used with imbalanced data to give more weight to false negatives. A larger β == beta weighs recall more than precision (by placing more emphasis on false negatives). Calculated as:

1 - sum(|y .* ŷ| + 1) / (sum(y .* ŷ + (1 - β)*(1 .- y) .* ŷ + β*y .* (1 .- ŷ)) + 1)
source
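A minimal usage sketch, added here for illustration since the docstring has no example; it assumes only that Flux.tversky_loss is called like the other losses above, and no exact output values are asserted:

using Flux

y_true = [1 0 1 0; 0 1 0 1]                    # one-hot style targets
ŷ_prob = [0.9 0.2 0.6 0.1; 0.1 0.8 0.4 0.9]    # predicted probabilities

# beta > 0.5 penalises false negatives more heavily than false positives
loss_recall_heavy = Flux.tversky_loss(ŷ_prob, y_true; beta = 0.9)
loss_balanced     = Flux.tversky_loss(ŷ_prob, y_true; beta = 0.5)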
Flux.Losses.binary_focal_lossFunction
binary_focal_loss(ŷ, y; agg=mean, gamma=2, eps=eps(eltype(ŷ)))

Return the binary_focal_loss. The input ŷ is expected to be normalized (i.e. softmax output).

For gamma = 0, the loss is mathematically equivalent to Losses.binarycrossentropy.

See also: Losses.focal_loss for multi-class setting

Example

julia> y = [0  1  0
             1  0  1]
 2×3 Matrix{Int64}:
  0  1  0
@@ -201,7 +201,7 @@
  0.731059  0.5  0.731059
 
 julia> Flux.binary_focal_loss(ŷ, y) ≈ 0.0728675615927385
-true
source
Flux.Losses.focal_lossFunction
focal_loss(ŷ, y; dims=1, agg=mean, gamma=2, eps=eps(eltype(ŷ)))

Return the focal_loss which can be used in classification tasks with highly imbalanced classes. It down-weights well-classified examples and focuses on hard examples. The input, 'ŷ', is expected to be normalized (i.e. softmax output).

The modulating factor, γ == gamma, controls the down-weighting strength. For γ == 0, the loss is mathematically equivalent to Losses.crossentropy.

Example

julia> y = [1  0  0  0  1
+true
source
Flux.Losses.focal_lossFunction
focal_loss(ŷ, y; dims=1, agg=mean, gamma=2, eps=eps(eltype(ŷ)))

Return the focal_loss which can be used in classification tasks with highly imbalanced classes. It down-weights well-classified examples and focuses on hard examples. The input, 'ŷ', is expected to be normalized (i.e. softmax output).

The modulating factor, γ == gamma, controls the down-weighting strength. For γ == 0, the loss is mathematically equivalent to Losses.crossentropy.

Example

julia> y = [1  0  0  0  1
             0  1  0  1  0
             0  0  1  0  0]
 3×5 Matrix{Int64}:
@@ -216,10 +216,10 @@
  0.665241   0.665241   0.665241   0.665241   0.665241
 
 julia> Flux.focal_loss(ŷ, y) ≈ 1.1277571935622628
-true

See also: Losses.binary_focal_loss for binary (not one-hot) labels

source
Flux.Losses.siamese_contrastive_lossFunction
siamese_contrastive_loss(ŷ, y; margin = 1, agg = mean)

Return the contrastive loss which can be useful for training Siamese Networks. It is given by

agg(@. (1 - y) * ŷ^2 + y * max(0, margin - ŷ)^2)

Specify margin to set the baseline for distance at which pairs are dissimilar.

Example

julia> ŷ = [0.5, 1.5, 2.5];
+true

See also: Losses.binary_focal_loss for binary (not one-hot) labels

source
Flux.Losses.siamese_contrastive_lossFunction
siamese_contrastive_loss(ŷ, y; margin = 1, agg = mean)

Return the contrastive loss which can be useful for training Siamese Networks. It is given by

agg(@. (1 - y) * ŷ^2 + y * max(0, margin - ŷ)^2)

Specify margin to set the baseline for distance at which pairs are dissimilar.

Example

julia> ŷ = [0.5, 1.5, 2.5];
 
 julia> Flux.siamese_contrastive_loss(ŷ, 1:3)
 -4.833333333333333
 
 julia> Flux.siamese_contrastive_loss(ŷ, 1:3, margin = 2)
--4.0
source
+-4.0source diff --git a/previews/PR2360/models/nnlib/index.html b/previews/PR2360/models/nnlib/index.html index bc33d9d946..503b45b466 100644 --- a/previews/PR2360/models/nnlib/index.html +++ b/previews/PR2360/models/nnlib/index.html @@ -375,4 +375,4 @@ [:, :, 1, 1] = 1.0 3.0 1.5 3.5 - 2.0 4.0
NNlib.∇grid_sampleFunction
∇grid_sample(Δ::AbstractArray{T, 4}, input::AbstractArray{T, 4}, grid::AbstractArray{T, 4}; padding_mode = :zeros) where T

Arguments

  • Δ: Input gradient in (W_out, H_out, C, N) shape (same as output of the primal computation).
  • input: Input from primal computation in (W_in, H_in, C, N) shape.
  • grid: Grid from primal computation in (2, W_out, H_out, N) shape.
  • padding_mode: Out-of-bound padding. :zeros to use 0 for out-of-bound grid locations. :border to use border values for out-of-bound grid locations. Should be the same as in primal computation. Default is :zeros.

Returns

dinput (same shape as input) and dgrid (same shape as grid) gradients.

Losses

NNlib.ctc_lossFunction
ctc_loss(ŷ, y)

Computes the connectionist temporal classification loss between ŷ and y. ŷ must be a classes-by-time matrix, i.e., each row represents a class and each column represents a time step. Additionally, the logsoftmax function will be applied to ŷ, so ŷ must be the raw activation values from the neural network and not, for example, the activations after being passed through a softmax activation function. y must be a 1D array of the labels associated with ŷ. The blank label is assumed to be the last label category in ŷ, so it is equivalent to size(ŷ, 1). Used for sequence-to-sequence classification problems such as speech recognition and handwriting recognition, where the exact time-alignment of the output (e.g., letters) is not needed to solve the problem. See Graves et al. (2006) or Graves (2012) for mathematical details.

Miscellaneous

NNlib.logsumexpFunction
logsumexp(x; dims = :)

Computes log.(sum(exp.(x); dims)) in a numerically stable way. Without dims keyword this returns a scalar.

See also logsoftmax.

NNlib.gluFunction
glu(x, dim = 1)

The gated linear unit from the "Language Modeling with Gated Convolutional Networks" paper.

Calculates a .* sigmoid(b), where x is split in half along the given dimension dim to form a and b.

+ 2.0 4.0
NNlib.∇grid_sampleFunction
∇grid_sample(Δ::AbstractArray{T, 4}, input::AbstractArray{T, 4}, grid::AbstractArray{T, 4}; padding_mode = :zeros) where T

Arguments

  • Δ: Input gradient in (W_out, H_out, C, N) shape (same as output of the primal computation).
  • input: Input from primal computation in (W_in, H_in, C, N) shape.
  • grid: Grid from primal computation in (2, W_out, H_out, N) shape.
  • padding_mode: Out-of-bound padding. :zeros to use 0 for out-of-bound grid locations. :border to use border values for out-of-bound grid locations. Should be the same as in primal computation. Default is :zeros.

Returns

dinput (same shape as input) and dgrid (same shape as grid) gradients.
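A shape-only sketch, added for illustration and not part of the NNlib docstring; it assumes ∇grid_sample is called with the argument layout documented above:

using NNlib

input = rand(Float32, 8, 8, 3, 2)              # (W_in, H_in, C, N)
grid  = rand(Float32, 2, 4, 4, 2) .* 2 .- 1    # (2, W_out, H_out, N), values in [-1, 1]
Δ     = ones(Float32, 4, 4, 3, 2)              # gradient w.r.t. the (W_out, H_out, C, N) output

dinput, dgrid = NNlib.∇grid_sample(Δ, input, grid; padding_mode = :zeros)
size(dinput) == size(input)   # true
size(dgrid)  == size(grid)    # true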

Losses

NNlib.ctc_lossFunction
ctc_loss(ŷ, y)

Computes the connectionist temporal classification loss between ŷ and y. ŷ must be a classes-by-time matrix, i.e., each row represents a class and each column represents a time step. Additionally, the logsoftmax function will be applied to ŷ, so ŷ must be the raw activation values from the neural network and not, for example, the activations after being passed through a softmax activation function. y must be a 1D array of the labels associated with ŷ. The blank label is assumed to be the last label category in ŷ, so it is equivalent to size(ŷ, 1). Used for sequence-to-sequence classification problems such as speech recognition and handwriting recognition, where the exact time-alignment of the output (e.g., letters) is not needed to solve the problem. See Graves et al. (2006) or Graves (2012) for mathematical details.
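A small sketch of the expected shapes, added for illustration; the label values and sizes are arbitrary:

using NNlib

n_classes, n_timesteps = 5, 10               # the last class (5) acts as the blank label
ŷ = rand(Float32, n_classes, n_timesteps)    # raw activations, one column per time step
y = [1, 3, 2]                                # label sequence for this example

loss = NNlib.ctc_loss(ŷ, y)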

Miscellaneous

NNlib.logsumexpFunction
logsumexp(x; dims = :)

Computes log.(sum(exp.(x); dims)) in a numerically stable way. Without dims keyword this returns a scalar.

See also logsoftmax.
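For instance (an illustrative addition; outputs not shown):

using NNlib

x = randn(Float32, 3, 4)

NNlib.logsumexp(x)            # a scalar, log(sum(exp.(x))) computed without overflow
NNlib.logsumexp(x; dims = 1)  # a 1×4 matrix, reducing over the first dimension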

NNlib.gluFunction
glu(x, dim = 1)

The gated linear unit from the "Language Modeling with Gated Convolutional Networks" paper.

Calculates a .* sigmoid(b), where x is split in half along the given dimension dim to form a and b.
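A short sketch of the halving behaviour, added for illustration:

using NNlib

x = randn(Float32, 6, 4)      # the split dimension must have even length

y = NNlib.glu(x, 1)           # a = x[1:3, :], b = x[4:6, :], result is a .* sigmoid.(b)
size(y) == (3, 4)             # true: dimension 1 is halved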

diff --git a/previews/PR2360/models/overview/index.html b/previews/PR2360/models/overview/index.html index d0cefdf053..7fc3ab7187 100644 --- a/previews/PR2360/models/overview/index.html +++ b/previews/PR2360/models/overview/index.html @@ -56,4 +56,4 @@ julia> y_test 1×5 Matrix{Int64}: - 26 30 34 38 42

The predictions are good. Here's how we got there.

First, we gathered real-world data into the variables x_train, y_train, x_test, and y_test. The x_* data defines inputs, and the y_* data defines outputs. The *_train data is for training the model, and the *_test data is for verifying the model. Our data was based on the function 4x + 2.

Then, we built a single input, single output predictive model, predict = Dense(1 => 1). The initial predictions weren't accurate, because we had not trained the model yet.

After building the model, we trained it with train!(loss, predict, data, opt). The loss function comes first, followed by the model itself, the training data, and the Descent optimiser provided by Flux. We ran the training step once and observed that the parameters changed and the loss went down. Then, we ran train! many more times to finish the training process.

After training, we checked the model against the test data to verify the results.

This overall flow represents how Flux works. Let's drill down a bit to understand what's going on inside the individual layers of Flux.

+ 26 30 34 38 42

The predictions are good. Here's how we got there.

First, we gathered real-world data into the variables x_train, y_train, x_test, and y_test. The x_* data defines inputs, and the y_* data defines outputs. The *_train data is for training the model, and the *_test data is for verifying the model. Our data was based on the function 4x + 2.

Then, we built a single input, single output predictive model, predict = Dense(1 => 1). The initial predictions weren't accurate, because we had not trained the model yet.

After building the model, we trained it with train!(loss, predict, data, opt). The loss function comes first, followed by the model itself, the training data, and the Descent optimiser provided by Flux. We ran the training step once and observed that the parameters changed and the loss went down. Then, we ran train! many more times to finish the training process.

After training, we checked the model against the test data to verify the results.

This overall flow represents how Flux works. Let's drill down a bit to understand what's going on inside the individual layers of Flux.
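To make the recap concrete, here is a condensed sketch of that workflow; it is an added illustration that broadly follows the steps described above (variable names and the epoch count are illustrative):

using Flux

x_train, x_test = hcat(0:5...), hcat(6:10...)          # 1×N input matrices
y_train, y_test = 4 .* x_train .+ 2, 4 .* x_test .+ 2  # targets from y = 4x + 2

predict = Dense(1 => 1)
loss(model, x, y) = Flux.mse(model(x), y)

opt  = Descent()                 # newer Flux may prefer opt = Flux.setup(Descent(), predict)
data = [(x_train, y_train)]
for epoch in 1:200
    Flux.train!(loss, predict, data, opt)
end

predict(x_test)                  # should now be close to y_test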

diff --git a/previews/PR2360/models/quickstart/index.html b/previews/PR2360/models/quickstart/index.html index b27554cc35..c6e2d6a7e0 100644 --- a/previews/PR2360/models/quickstart/index.html +++ b/previews/PR2360/models/quickstart/index.html @@ -59,4 +59,4 @@ y_hat = m(x) Flux.crossentropy(y_hat, y) end -end
Implicit-style training, Flux ≤ 0.14

Until recently Flux's training worked a bit differently. Any code which looks like

gradient(() -> loss(model, x, y), Flux.params(model))

(gradient of a zero-argument function) or

train!((x,y) -> loss(model, x, y), Flux.params(model), loader, opt)

(with Flux.params) is in the old "implicit" style. This still works on Flux 0.14, but will be removed from Flux 0.15. See the training section for more details.

+end
Implicit-style training, Flux ≤ 0.14

Until recently Flux's training worked a bit differently. Any code which looks like

gradient(() -> loss(model, x, y), Flux.params(model))

(gradient of a zero-argument function) or

train!((x,y) -> loss(model, x, y), Flux.params(model), loader, opt)

(with Flux.params) is in the old "implicit" style. This still works on Flux 0.14, but will be removed from Flux 0.15. See the training section for more details.
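For comparison, a sketch of the corresponding explicit style is shown below; it is an addition that reuses the same placeholder names (model, loss, x, y, loader) as the snippets above:

# Explicit style: gradients are taken with respect to the model itself,
# and the optimiser state comes from Flux.setup
grads = Flux.gradient(m -> loss(m, x, y), model)

opt_state = Flux.setup(Adam(), model)
Flux.train!((m, x, y) -> loss(m, x, y), model, loader, opt_state)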

diff --git a/previews/PR2360/models/recurrence/index.html b/previews/PR2360/models/recurrence/index.html index 59763f244b..98c01c0a96 100644 --- a/previews/PR2360/models/recurrence/index.html +++ b/previews/PR2360/models/recurrence/index.html @@ -100,4 +100,4 @@ true

In many situations, such as when dealing with a language model, the sentences in each batch are independent (i.e. the last item of the first sentence of the first batch is independent from the first item of the first sentence of the second batch), so we cannot handle the model as if each batch was the direct continuation of the previous one. To handle such situations, we need to reset the state of the model between each batch, which can be conveniently performed within the loss function:

function loss(x, y)
   Flux.reset!(m)
   sum(mse(m(xi), yi) for (xi, yi) in zip(x, y))
-end

A potential source of ambiguity with RNN in Flux can come from the different data layout compared to some common frameworks, where data is typically a 3-dimensional array: (features, seq length, samples). In Flux, those 3 dimensions are provided as a vector of length seq length, whose elements are matrices of size (features, samples).

+end

A potential source of ambiguity with RNN in Flux can come from the different data layout compared to some common frameworks, where data is typically a 3-dimensional array: (features, seq length, samples). In Flux, those 3 dimensions are provided as a vector of length seq length, whose elements are matrices of size (features, samples).
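For example, a batch of 5 sequences, each 10 steps long with 3 features per step, would be built like this (an illustrative sketch):

features, seq_len, batch_size = 3, 10, 5

# Flux layout: a length-10 vector, each element a 3×5 matrix (features × samples)
x_flux = [rand(Float32, features, batch_size) for _ in 1:seq_len]

# layout used by some other frameworks: one 3×10×5 array (features, seq length, samples)
x_other = rand(Float32, features, seq_len, batch_size)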

diff --git a/previews/PR2360/outputsize/index.html b/previews/PR2360/outputsize/index.html index 34596ed35f..8b77e507a7 100644 --- a/previews/PR2360/outputsize/index.html +++ b/previews/PR2360/outputsize/index.html @@ -68,7 +68,7 @@ # plus 2 non-trainable, 10 parameters, summarysize 10.469 KiB. julia> outputsize(ans, (28, 28, 1, 32)) -(10, 32)

Limitations:

source
Flux.outputsizeFunction
outputsize(m, x_size, y_size, ...; padbatch=false)

For model or layer m accepting multiple arrays as input, this returns size(m((x, y, ...))) given size_x = size(x), etc.

Examples

julia> x, y = rand(Float32, 5, 64), rand(Float32, 7, 64);
+(10, 32)

Limitations:

  • While @autosize (5, 32) Flux.Bilinear(_ => 7) is OK, something like Bilinear((_, _) => 7) will fail.
  • While Scale(_) and LayerNorm(_) are fine (and use the first dimension), Scale(_,_) and LayerNorm(_,_) will fail if size(x,1) != size(x,2).
source
Flux.outputsizeFunction
outputsize(m, x_size, y_size, ...; padbatch=false)

For model or layer m accepting multiple arrays as input, this returns size(m((x, y, ...))) given size_x = size(x), etc.

Examples

julia> x, y = rand(Float32, 5, 64), rand(Float32, 7, 64);
 
 julia> par = Parallel(vcat, Dense(5 => 9), Dense(7 => 11));
 
@@ -81,4 +81,4 @@
 (13, 1)
 
 julia> par(x, y) == par((x, y)) == Chain(par, identity)((x, y))
-true

Notice that Chain only accepts multiple arrays as a tuple, while Parallel also accepts them as multiple arguments; outputsize always supplies the tuple.

source
+true

Notice that Chain only accepts multiple arrays as a tuple, while Parallel also accepts them as multiple arguments; outputsize always supplies the tuple.

source diff --git a/previews/PR2360/performance/index.html b/previews/PR2360/performance/index.html index 57bae7a78d..9707a83497 100644 --- a/previews/PR2360/performance/index.html +++ b/previews/PR2360/performance/index.html @@ -14,4 +14,4 @@ function loss_total(x_batch::Matrix, y_batch::Matrix) y_preds = model(x_batch) sum(loss.(y_preds, y_batch)) -end

When doing this kind of concatenation use reduce(hcat, xs) rather than hcat(xs...). This will avoid the splatting penalty, and will hit the optimised reduce method.

+end

When doing this kind of concatenation use reduce(hcat, xs) rather than hcat(xs...). This will avoid the splatting penalty, and will hit the optimised reduce method.
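Concretely (a small added illustration):

xs = [rand(Float32, 10) for _ in 1:1000]   # e.g. many column vectors

X_slow = hcat(xs...)        # splats 1000 arguments into one call
X_fast = reduce(hcat, xs)   # hits the optimised reduce(hcat, ...) method

X_slow == X_fast            # true; only the cost differs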

diff --git a/previews/PR2360/saving/index.html b/previews/PR2360/saving/index.html index f39d375daa..010cc50ffe 100644 --- a/previews/PR2360/saving/index.html +++ b/previews/PR2360/saving/index.html @@ -59,4 +59,4 @@ Chain( Dense(10 => 5, relu), # 55 parameters Dense(5 => 2), # 12 parameters -) # Total: 4 arrays, 67 parameters, 524 bytes.
Warning

Saving models this way could lead to compatibility issues across Julia versions and across Flux versions if some of the Flux layers' internals are changed. It is therefore not recommended for long-term storage; use Flux.state instead.

Warning

Previous versions of Flux suggested saving only the model weights using @save "mymodel.bson" params(model). This is no longer recommended and even strongly discouraged. Saving models this way will only store the trainable parameters which will result in incorrect behavior for layers like BatchNorm.

+) # Total: 4 arrays, 67 parameters, 524 bytes.
Warning

Saving models this way could lead to compatibility issues across Julia versions and across Flux versions if some of the Flux layers' internals are changed. It is therefore not recommended for long-term storage; use Flux.state instead.

Warning

Previous versions of Flux suggested saving only the model weights using @save "mymodel.bson" params(model). This is no longer recommended and even strongly discouraged. Saving models this way will only store the trainable parameters which will result in incorrect behavior for layers like BatchNorm.
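A sketch of the recommended pattern, added here for illustration; it assumes JLD2 is available and the file and key names are arbitrary:

using Flux, JLD2

model = Chain(Dense(10 => 5, relu), Dense(5 => 2))

# save only the state, not the layer structs themselves
jldsave("mymodel.jld2"; model_state = Flux.state(model))

# later: rebuild the same architecture, then load the state into it
model2 = Chain(Dense(10 => 5, relu), Dense(5 => 2))
model_state = JLD2.load("mymodel.jld2", "model_state")
Flux.loadmodel!(model2, model_state)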

diff --git a/previews/PR2360/search/index.html b/previews/PR2360/search/index.html index 27491a679b..72dc631f40 100644 --- a/previews/PR2360/search/index.html +++ b/previews/PR2360/search/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-36890222-9', {'page_path': location.pathname + location.search + location.hash}); -


    +


      diff --git a/previews/PR2360/training/callbacks/index.html b/previews/PR2360/training/callbacks/index.html index b8afdde9b9..d34b718f9e 100644 --- a/previews/PR2360/training/callbacks/index.html +++ b/previews/PR2360/training/callbacks/index.html @@ -10,7 +10,7 @@ sleep(1) end Flux -Fluxsource

      Patience Helpers

      Flux provides utilities for controlling your training procedure according to some monitored condition and a maximum patience. For example, you can use early_stopping to stop training when the model is converging or deteriorating, or you can use plateau to check if the model is stagnating.

      For example, below we create a pseudo-loss function that decreases, bottoms out, and then increases. The early stopping trigger will break the loop before the loss increases too much.

      # create a pseudo-loss that decreases for 4 calls, then starts increasing
      +Flux
      source

      Patience Helpers

      Flux provides utilities for controlling your training procedure according to some monitored condition and a maximum patience. For example, you can use early_stopping to stop training when the model is converging or deteriorating, or you can use plateau to check if the model is stagnating.

      For example, below we create a pseudo-loss function that decreases, bottoms out, and then increases. The early stopping trigger will break the loop before the loss increases too much.

      # create a pseudo-loss that decreases for 4 calls, then starts increasing
       # we call this like loss()
       loss = let t = 0
         () -> begin
      @@ -61,7 +61,7 @@
              end
       [ Info: Epoch 1
       [ Info: Epoch 2
      -[ Info: Epoch 3
      source
      Flux.early_stoppingFunction
      early_stopping(f, delay; distance = -, init_score = 0, min_dist = 0)

      Return a function that internally counts by one when distance(best_score, f(...)) <= min_dist, where best_score is the last seen best value of f(...). If the count is greater than or equal to delay, the function returns true, otherwise it returns false. The count is reset when distance(best_score, f(...)) > min_dist.

      Examples

      julia> loss = let l = 0
      +[ Info: Epoch 3
      source
      Flux.early_stoppingFunction
      early_stopping(f, delay; distance = -, init_score = 0, min_dist = 0)

      Return a function that internally counts by one when distance(best_score, f(...)) <= min_dist, where best_score is the last seen best value of f(...). If the count is greater than or equal to delay, the function returns true, otherwise it returns false. The count is reset when distance(best_score, f(...)) > min_dist.

      Examples

      julia> loss = let l = 0
                () -> l += 1
              end; # pseudo loss function that returns increasing values
       
      @@ -74,7 +74,7 @@
              end
       [ Info: Epoch 1
       [ Info: Epoch 2
      -[ Info: Epoch 3
      source
      Flux.plateauFunction
      plateau(f, width; distance = -, init_score = 0, min_dist = 1f-6)

      Return a function that internally counts by one when abs(distance(last_score, f(...))) <= min_dist, where last_score holds the last value of f(...). If the count is greater than or equal to width, the function returns true, otherwise it returns false. The count is reset when abs(distance(last_score, f(...))) > min_dist.

      Examples

      julia> f = let v = 10
      +[ Info: Epoch 3
      source
      Flux.plateauFunction
      plateau(f, width; distance = -, init_score = 0, min_dist = 1f-6)

      Return a function that internally counts by one when abs(distance(last_score, f(...))) <= min_dist, where last_score holds the last value of f(...). If the count is greater than or equal to width, the function returns true, otherwise it returns false. The count is reset when abs(distance(last_score, f(...))) > min_dist.

      Examples

      julia> f = let v = 10
                () -> v = v / abs(v) - v
              end; # -9, 8, -7, 6, ...
       
      @@ -88,4 +88,4 @@
       [ Info: Epoch 1
       [ Info: Epoch 2
       [ Info: Epoch 3
      -[ Info: Epoch 4
      source
      +[ Info: Epoch 4source diff --git a/previews/PR2360/training/optimisers/index.html b/previews/PR2360/training/optimisers/index.html index 289b9cad84..c19cd23a1a 100644 --- a/previews/PR2360/training/optimisers/index.html +++ b/previews/PR2360/training/optimisers/index.html @@ -13,33 +13,33 @@ loss(x, y) end -Flux.Optimise.update!(opt, ps, gs)source
      Flux.Optimise.MomentumType
      Momentum(η = 0.01, ρ = 0.9)

      Gradient descent optimiser with learning rate η and momentum ρ.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Momentum (ρ): Controls the acceleration of gradient descent in the prominent direction, in effect damping oscillations.

      Examples

      opt = Momentum()
      +Flux.Optimise.update!(opt, ps, gs)
      source
      Flux.Optimise.MomentumType
      Momentum(η = 0.01, ρ = 0.9)

      Gradient descent optimiser with learning rate η and momentum ρ.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Momentum (ρ): Controls the acceleration of gradient descent in the prominent direction, in effect damping oscillations.

      Examples

      opt = Momentum()
       
      -opt = Momentum(0.01, 0.99)
      source
      Flux.Optimise.NesterovType
      Nesterov(η = 0.001, ρ = 0.9)

      Gradient descent optimiser with learning rate η and Nesterov momentum ρ.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Nesterov momentum (ρ): Controls the acceleration of gradient descent in the prominent direction, in effect damping oscillations.

      Examples

      opt = Nesterov()
      +opt = Momentum(0.01, 0.99)
      source
      Flux.Optimise.NesterovType
      Nesterov(η = 0.001, ρ = 0.9)

      Gradient descent optimiser with learning rate η and Nesterov momentum ρ.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Nesterov momentum (ρ): Controls the acceleration of gradient descent in the prominent direction, in effect damping oscillations.

      Examples

      opt = Nesterov()
       
      -opt = Nesterov(0.003, 0.95)
      source
      Flux.Optimise.RMSPropType
      RMSProp(η = 0.001, ρ = 0.9, ϵ = 1.0e-8)

      Optimizer using the RMSProp algorithm. Often a good choice for recurrent networks. Parameters other than learning rate generally don't need tuning.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Momentum (ρ): Controls the acceleration of gradient descent in the prominent direction, in effect damping oscillations.

      Examples

      opt = RMSProp()
      +opt = Nesterov(0.003, 0.95)
      source
      Flux.Optimise.RMSPropType
      RMSProp(η = 0.001, ρ = 0.9, ϵ = 1.0e-8)

      Optimizer using the RMSProp algorithm. Often a good choice for recurrent networks. Parameters other than learning rate generally don't need tuning.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Momentum (ρ): Controls the acceleration of gradient descent in the prominent direction, in effect damping oscillations.

      Examples

      opt = RMSProp()
       
      -opt = RMSProp(0.002, 0.95)
      source
      Flux.Optimise.AdamType
      Adam(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

      Adam optimiser.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

      Examples

      opt = Adam()
      +opt = RMSProp(0.002, 0.95)
      source
      Flux.Optimise.AdamType
      Adam(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

      Adam optimiser.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

      Examples

      opt = Adam()
       
      -opt = Adam(0.001, (0.9, 0.8))
      source
      Flux.Optimise.RAdamType
      RAdam(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

      Rectified Adam optimiser.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

      Examples

      opt = RAdam()
      +opt = Adam(0.001, (0.9, 0.8))
      source
      Flux.Optimise.RAdamType
      RAdam(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

      Rectified Adam optimiser.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

      Examples

      opt = RAdam()
       
      -opt = RAdam(0.001, (0.9, 0.8))
      source
      Flux.Optimise.AdaMaxType
      AdaMax(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

      AdaMax is a variant of Adam based on the ∞-norm.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

      Examples

      opt = AdaMax()
      +opt = RAdam(0.001, (0.9, 0.8))
      source
      Flux.Optimise.AdaMaxType
      AdaMax(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

      AdaMax is a variant of Adam based on the ∞-norm.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

      Examples

      opt = AdaMax()
       
      -opt = AdaMax(0.001, (0.9, 0.995))
      source
      Flux.Optimise.AdaGradType
      AdaGrad(η = 0.1, ϵ = 1.0e-8)

AdaGrad optimiser. It has parameter-specific learning rates based on how frequently each parameter is updated. Parameters don't need tuning.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.

      Examples

      opt = AdaGrad()
      +opt = AdaMax(0.001, (0.9, 0.995))
      source
      Flux.Optimise.AdaGradType
      AdaGrad(η = 0.1, ϵ = 1.0e-8)

AdaGrad optimiser. It has parameter-specific learning rates based on how frequently each parameter is updated. Parameters don't need tuning.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.

      Examples

      opt = AdaGrad()
       
      -opt = AdaGrad(0.001)
      source
      Flux.Optimise.AdaDeltaType
      AdaDelta(ρ = 0.9, ϵ = 1.0e-8)

      AdaDelta is a version of AdaGrad adapting its learning rate based on a window of past gradient updates. Parameters don't need tuning.

      Parameters

      • Rho (ρ): Factor by which the gradient is decayed at each time step.

      Examples

      opt = AdaDelta()
      +opt = AdaGrad(0.001)
      source
      Flux.Optimise.AdaDeltaType
      AdaDelta(ρ = 0.9, ϵ = 1.0e-8)

      AdaDelta is a version of AdaGrad adapting its learning rate based on a window of past gradient updates. Parameters don't need tuning.

      Parameters

      • Rho (ρ): Factor by which the gradient is decayed at each time step.

      Examples

      opt = AdaDelta()
       
      -opt = AdaDelta(0.89)
      source
      Flux.Optimise.AMSGradType
      AMSGrad(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

      The AMSGrad version of the Adam optimiser. Parameters don't need tuning.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

      Examples

      opt = AMSGrad()
      +opt = AdaDelta(0.89)
      source
      Flux.Optimise.AMSGradType
      AMSGrad(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

      The AMSGrad version of the Adam optimiser. Parameters don't need tuning.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

      Examples

      opt = AMSGrad()
       
      -opt = AMSGrad(0.001, (0.89, 0.995))
      source
      Flux.Optimise.NAdamType
      NAdam(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

      NAdam is a Nesterov variant of Adam. Parameters don't need tuning.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

      Examples

      opt = NAdam()
      +opt = AMSGrad(0.001, (0.89, 0.995))
      source
      Flux.Optimise.NAdamType
      NAdam(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

      NAdam is a Nesterov variant of Adam. Parameters don't need tuning.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

      Examples

      opt = NAdam()
       
      -opt = NAdam(0.002, (0.89, 0.995))
      source
      Flux.Optimise.AdamWFunction
      AdamW(η = 0.001, β::Tuple = (0.9, 0.999), decay = 0)

      AdamW is a variant of Adam fixing (as in repairing) its weight decay regularization.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
      • decay: Decay applied to weights during optimisation.

      Examples

      opt = AdamW()
      +opt = NAdam(0.002, (0.89, 0.995))
      source
      Flux.Optimise.AdamWFunction
      AdamW(η = 0.001, β::Tuple = (0.9, 0.999), decay = 0)

      AdamW is a variant of Adam fixing (as in repairing) its weight decay regularization.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
      • decay: Decay applied to weights during optimisation.

      Examples

      opt = AdamW()
       
      -opt = AdamW(0.001, (0.89, 0.995), 0.1)
      source
      Flux.Optimise.OAdamType
      OAdam(η = 0.0001, β::Tuple = (0.5, 0.9), ϵ = 1.0e-8)

      OAdam (Optimistic Adam) is a variant of Adam adding an "optimistic" term suitable for adversarial training.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

      Examples

      opt = OAdam()
      +opt = AdamW(0.001, (0.89, 0.995), 0.1)
      source
      Flux.Optimise.OAdamType
      OAdam(η = 0.0001, β::Tuple = (0.5, 0.9), ϵ = 1.0e-8)

      OAdam (Optimistic Adam) is a variant of Adam adding an "optimistic" term suitable for adversarial training.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

      Examples

      opt = OAdam()
       
      -opt = OAdam(0.001, (0.9, 0.995))
      source
      Flux.Optimise.AdaBeliefType
      AdaBelief(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

      The AdaBelief optimiser is a variant of the well-known Adam optimiser.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

      Examples

      opt = AdaBelief()
      +opt = OAdam(0.001, (0.9, 0.995))
      source
      Flux.Optimise.AdaBeliefType
      AdaBelief(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

      The AdaBelief optimiser is a variant of the well-known Adam optimiser.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

      Examples

      opt = AdaBelief()
       
      -opt = AdaBelief(0.001, (0.9, 0.8))
      source

      Composing Optimisers

      Flux defines a special kind of optimiser simply called Optimiser which takes in arbitrary optimisers as input. Its behaviour is similar to the usual optimisers, but differs in that it acts by calling the optimisers listed in it sequentially. Each optimiser produces a modified gradient that will be fed into the next, and the resultant update will be applied to the parameter as usual. A classic use case is where adding decays is desirable. Flux defines some basic decays including ExpDecay, InvDecay etc.

      opt = Optimiser(ExpDecay(1, 0.1, 1000, 1e-4), Descent())

      Here we apply exponential decay to the Descent optimiser. The defaults of ExpDecay say that its learning rate will be decayed every 1000 steps. It is then applied like any optimiser.

      w = randn(10, 10)
      +opt = AdaBelief(0.001, (0.9, 0.8))
      source

      Composing Optimisers

      Flux defines a special kind of optimiser simply called Optimiser which takes in arbitrary optimisers as input. Its behaviour is similar to the usual optimisers, but differs in that it acts by calling the optimisers listed in it sequentially. Each optimiser produces a modified gradient that will be fed into the next, and the resultant update will be applied to the parameter as usual. A classic use case is where adding decays is desirable. Flux defines some basic decays including ExpDecay, InvDecay etc.

      opt = Optimiser(ExpDecay(1, 0.1, 1000, 1e-4), Descent())

      Here we apply exponential decay to the Descent optimiser. The defaults of ExpDecay say that its learning rate will be decayed every 1000 steps. It is then applied like any optimiser.

      w = randn(10, 10)
       w1 = randn(10,10)
       ps = Params([w, w1])
       
      @@ -53,7 +53,7 @@
         Flux.Optimise.update!(opt, θ, θ̄)
       end
       
      -loss(rand(10)) # around 0.9

      It is possible to compose optimisers for some added flexibility.

      Flux.Optimise.OptimiserType
      Optimiser(a, b, c...)

      Combine several optimisers into one; each optimiser produces a modified gradient that will be fed into the next, and this is finally applied to the parameter as usual.

      Note

      This will be replaced by Optimisers.OptimiserChain in Flux 0.15.

      source

      Scheduling Optimisers

      In practice, it is fairly common to schedule the learning rate of an optimiser to obtain faster convergence. There are a variety of popular scheduling policies, and you can find implementations of them in ParameterSchedulers.jl. The documentation for ParameterSchedulers.jl provides a more detailed overview of the different scheduling policies, and how to use them with Flux optimisers. Below, we provide a brief snippet illustrating a cosine annealing schedule with a momentum optimiser.

      First, we import ParameterSchedulers.jl and initialize a cosine annealing schedule to vary the learning rate between 1e-4 and 1e-2 every 10 steps. We also create a new Momentum optimiser.

      using ParameterSchedulers
      +loss(rand(10)) # around 0.9

      It is possible to compose optimisers for some added flexibility.

      Flux.Optimise.OptimiserType
      Optimiser(a, b, c...)

      Combine several optimisers into one; each optimiser produces a modified gradient that will be fed into the next, and this is finally applied to the parameter as usual.

      Note

      This will be replaced by Optimisers.OptimiserChain in Flux 0.15.

      source

      Scheduling Optimisers

      In practice, it is fairly common to schedule the learning rate of an optimiser to obtain faster convergence. There are a variety of popular scheduling policies, and you can find implementations of them in ParameterSchedulers.jl. The documentation for ParameterSchedulers.jl provides a more detailed overview of the different scheduling policies, and how to use them with Flux optimisers. Below, we provide a brief snippet illustrating a cosine annealing schedule with a momentum optimiser.

      First, we import ParameterSchedulers.jl and initialize a cosine annealing schedule to vary the learning rate between 1e-4 and 1e-2 every 10 steps. We also create a new Momentum optimiser.

      using ParameterSchedulers
       
       opt = Momentum()
       schedule = Cos(λ0 = 1e-4, λ1 = 1e-2, period = 10)
      @@ -66,6 +66,6 @@
       for epoch in 1:100
         opt.eta = next!(schedule)
         # your training code here
      -end

      ParameterSchedulers.jl allows for many more scheduling policies including arbitrary functions, looping any function with a given period, or sequences of many schedules. See the ParameterSchedulers.jl documentation for more info.

      Decays

      Similar to optimisers, Flux also defines some simple decays that can be used in conjunction with other optimisers, or standalone.

      Flux.Optimise.ExpDecayType
      ExpDecay(η = 0.001, decay = 0.1, decay_step = 1000, clip = 1e-4, start = 1)

      Discount the learning rate η by the factor decay every decay_step steps till a minimum of clip.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • decay: Factor by which the learning rate is discounted.
      • decay_step: Schedule decay operations by setting the number of steps between two decay operations.
      • clip: Minimum value of learning rate.
      • 'start': Step at which the decay starts.

      See also the Scheduling Optimisers section of the docs for more general scheduling techniques.

      Examples

      ExpDecay is typically composed with other optimisers as the last transformation of the gradient:

      opt = Optimiser(Adam(), ExpDecay(1.0))

      Note: you may want to start with η=1 in ExpDecay when combined with other optimisers (Adam in this case) that have their own learning rate.

      source
      Flux.Optimise.InvDecayType
      InvDecay(γ = 0.001)

      Apply inverse time decay to an optimiser, so that the effective step size at iteration n is eta / (1 + γ * n) where eta is the initial step size. The wrapped optimiser's step size is not modified.

      See also the Scheduling Optimisers section of the docs for more general scheduling techniques.

      Examples

      InvDecay is typically composed with other optimisers as the last transformation of the gradient:

      # Inverse decay of the learning rate
      +end

      ParameterSchedulers.jl allows for many more scheduling policies including arbitrary functions, looping any function with a given period, or sequences of many schedules. See the ParameterSchedulers.jl documentation for more info.

      Decays

      Similar to optimisers, Flux also defines some simple decays that can be used in conjunction with other optimisers, or standalone.

      Flux.Optimise.ExpDecayType
      ExpDecay(η = 0.001, decay = 0.1, decay_step = 1000, clip = 1e-4, start = 1)

      Discount the learning rate η by the factor decay every decay_step steps till a minimum of clip.

      Parameters

      • Learning rate (η): Amount by which gradients are discounted before updating the weights.
      • decay: Factor by which the learning rate is discounted.
      • decay_step: Schedule decay operations by setting the number of steps between two decay operations.
      • clip: Minimum value of learning rate.
      • 'start': Step at which the decay starts.

      See also the Scheduling Optimisers section of the docs for more general scheduling techniques.

      Examples

      ExpDecay is typically composed with other optimisers as the last transformation of the gradient:

      opt = Optimiser(Adam(), ExpDecay(1.0))

      Note: you may want to start with η=1 in ExpDecay when combined with other optimisers (Adam in this case) that have their own learning rate.

      source
      Flux.Optimise.InvDecayType
      InvDecay(γ = 0.001)

      Apply inverse time decay to an optimiser, so that the effective step size at iteration n is eta / (1 + γ * n) where eta is the initial step size. The wrapped optimiser's step size is not modified.

      See also the Scheduling Optimisers section of the docs for more general scheduling techniques.

      Examples

      InvDecay is typically composed with other optimisers as the last transformation of the gradient:

      # Inverse decay of the learning rate
       # with starting value 0.001 and decay coefficient 0.01.
      -opt = Optimiser(Adam(1f-3), InvDecay(1f-2))
      source
      Flux.Optimise.WeightDecayType
      WeightDecay(λ = 0)

      Decay weights by $λ$. Typically composed with other optimisers as the first transformation to the gradient, making it equivalent to adding $L_2$ regularization with coefficient $λ$ to the loss.

      Examples

      opt = Optimiser(WeightDecay(1f-4), Adam())
      source

      Gradient Clipping

      Gradient clipping is useful for training recurrent neural networks, which have a tendency to suffer from the exploding gradient problem. An example usage is

      opt = Optimiser(ClipValue(1e-3), Adam(1e-3))
      Flux.Optimise.ClipValueType
      ClipValue(thresh)

      Clip gradients when their absolute value exceeds thresh.

      Note

      This will be replaced by Optimisers.ClipGrad in Flux 0.15.

      source
      +opt = Optimiser(Adam(1f-3), InvDecay(1f-2))source
      Flux.Optimise.WeightDecayType
      WeightDecay(λ = 0)

      Decay weights by $λ$. Typically composed with other optimisers as the first transformation to the gradient, making it equivalent to adding $L_2$ regularization with coefficient $λ$ to the loss.

      Examples

      opt = Optimiser(WeightDecay(1f-4), Adam())
      source

      Gradient Clipping

      Gradient clipping is useful for training recurrent neural networks, which have a tendency to suffer from the exploding gradient problem. An example usage is

      opt = Optimiser(ClipValue(1e-3), Adam(1e-3))
      Flux.Optimise.ClipValueType
      ClipValue(thresh)

      Clip gradients when their absolute value exceeds thresh.

      Note

      This will be replaced by Optimisers.ClipGrad in Flux 0.15.

      source
      Flux.Optimise.ClipNormType
      ClipNorm(thresh)

      Clip gradients when their L2 norm exceeds thresh.

      source
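Like ClipValue, it is typically composed with another optimiser; an added usage sketch mirroring the example above:

opt = Optimiser(ClipNorm(1.0), Adam(1e-3))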
      diff --git a/previews/PR2360/training/reference/index.html b/previews/PR2360/training/reference/index.html index 30cb065e33..3cb2654c32 100644 --- a/previews/PR2360/training/reference/index.html +++ b/previews/PR2360/training/reference/index.html @@ -19,14 +19,14 @@ 10.19 julia> opt_state # mutated by Flux.train! -(weight = Leaf(Momentum{Float64}(0.1, 0.9), [-2.018 3.027]), bias = Leaf(Momentum{Float64}(0.1, 0.9), [-10.09]), σ = ())source
      Flux.Optimise.train!Method
      train!(loss, model, data, opt_state)

      Uses a loss function and training data to improve the model's parameters according to a particular optimisation rule encoded in opt_state. Iterates through data once, evaluating for each d in data either loss(model, d...) if d isa Tuple, or else loss(model, d) for other d.

      For example, with these definitions...

      data = [(x1, y1), (x2, y2), (x3, y3)]
      +(weight = Leaf(Momentum{Float64}(0.1, 0.9), [-2.018 3.027]), bias = Leaf(Momentum{Float64}(0.1, 0.9), [-10.09]), σ = ())
      source
      Flux.Optimise.train!Method
      train!(loss, model, data, opt_state)

      Uses a loss function and training data to improve the model's parameters according to a particular optimisation rule encoded in opt_state. Iterates through data once, evaluating for each d in data either loss(model, d...) if d isa Tuple, or else loss(model, d) for other d.

      For example, with these definitions...

      data = [(x1, y1), (x2, y2), (x3, y3)]
       
       loss3(m, x, y) = norm(m(x) .- y)        # the model is the first argument
       
       opt_state = Flux.setup(Adam(), model)   # explicit setup of optimiser momenta

      ...calling Flux.train!(loss3, model, data, opt_state) runs a loop much like this:

      for d in data
           ∂L∂m = gradient(loss3, model, d...)[1]
           update!(opt_state, model, ∂L∂m)
      -end

      You can also write this loop yourself, if you need more flexibility. For this reason train! is not highly extensible. It adds only a few features to the loop above:

      • Stop with a DomainError if the loss is infinite or NaN at any point.

      • Show a progress bar using @withprogress.

      New

      This method was added in Flux 0.13.9. It has significant changes from the one used by Flux ≤ 0.13:

      • It now takes the model itself, not the result of Flux.params. (This is to move away from Zygote's "implicit" parameter handling, with Grads.)
      • Instead of loss being a function which accepts only the data, now it must also accept the model itself, as the first argument.
      • opt_state should be the result of Flux.setup. Using an optimiser such as Adam() without this step should give you a warning.
      • Callback functions are not supported. (But any code can be included in the above for loop.)
      source
      Optimisers.update!Function
      Optimisers.update!(tree, model, gradient) -> (tree, model)

      Uses the optimiser and the gradient to change the trainable parameters in the model. Returns the improved model, and the optimiser states needed for the next update. The initial tree of states comes from setup.

      This is used in exactly the same manner as update, but because it may mutate arrays within the old model (and the old state), it will be faster for models of ordinary Arrays or CuArrays. However, you should not rely on the old model being fully updated but rather use the returned model. (The original state tree is always mutated, as each Leaf is mutable.)

      Example

      julia> using StaticArrays, Zygote, Optimisers
      +end

      You can also write this loop yourself, if you need more flexibility. For this reason train! is not highly extensible. It adds only a few features to the loop above:

      • Stop with a DomainError if the loss is infinite or NaN at any point.

      • Show a progress bar using @withprogress.

      New

      This method was added in Flux 0.13.9. It has significant changes from the one used by Flux ≤ 0.13:

      • It now takes the model itself, not the result of Flux.params. (This is to move away from Zygote's "implicit" parameter handling, with Grads.)
      • Instead of loss being a function which accepts only the data, now it must also accept the model itself, as the first argument.
      • opt_state should be the result of Flux.setup. Using an optimiser such as Adam() without this step should give you a warning.
      • Callback functions are not supported. (But any code can be included in the above for loop.)
      source
      Optimisers.update!Function
      Optimisers.update!(tree, model, gradient) -> (tree, model)

      Uses the optimiser and the gradient to change the trainable parameters in the model. Returns the improved model, and the optimiser states needed for the next update. The initial tree of states comes from setup.

      This is used in exactly the same manner as update, but because it may mutate arrays within the old model (and the old state), it will be faster for models of ordinary Arrays or CuArrays. However, you should not rely on the old model being fully updated but rather use the returned model. (The original state tree is always mutated, as each Leaf is mutable.)

      Example

      julia> using StaticArrays, Zygote, Optimisers
       
       julia> m = (x = [1f0, 2f0], y = SA[4f0, 5f0]);  # partly mutable model
       
      @@ -107,12 +107,12 @@
       Params([[1, 2, 3], [4]])
       
       julia> params(1, [2 2], (alpha=[3,3,3], beta=Ref(4), gamma=sin))  # ignores scalars, unpacks NamedTuples
      -Params([[2 2], [3, 3, 3]])
      source
      Optimisers.update!Method
      update!(opt, p, g)
      -update!(opt, ps::Params, gs)

      Perform an update step of the parameters ps (or the single parameter p) according to optimiser opt::AbstractOptimiser and the gradients gs (the gradient g).

      As a result, the parameters are mutated and the optimiser's internal state may change. The gradient could be mutated as well.

      Deprecated

      This method for implicit Params (and AbstractOptimiser) will be removed from Flux 0.15. The explicit method update!(opt, model, grad) from Optimisers.jl will remain.

      source
      Flux.Optimise.train!Method
      train!(loss, pars::Params, data, opt::AbstractOptimiser; [cb])

      Uses a loss function and training data to improve the model's parameters according to a particular optimisation rule opt.

      Deprecated

      This method with implicit Params will be removed from Flux 0.15. It should be replaced with the explicit method train!(loss, model, data, opt).

      For each d in data, first the gradient of the loss is computed like this:

          gradient(() -> loss(d...), pars)  # if d isa Tuple
      -    gradient(() -> loss(d), pars)     # otherwise

      Here pars is produced by calling Flux.params on your model. (Or just on the layers you want to train, like train!(loss, params(model[1:end-2]), data, opt).) This is the "implicit" style of parameter handling.

      This gradient is then used by optimiser opt to update the parameters:

          update!(opt, pars, grads)

      The optimiser should be from the Flux.Optimise module (see Optimisers). Different optimisers can be combined using Flux.Optimise.Optimiser.

      This training loop iterates through data once. It will stop with a DomainError if the loss is NaN or infinite.

You can use train! inside a for loop to do this several times, or use, for instance, IterTools.ncycle to make a longer data iterator.

      Callbacks

      Callbacks are given with the keyword argument cb. For example, this will print "training" every 10 seconds (using Flux.throttle):

          train!(loss, params, data, opt, cb = throttle(() -> println("training"), 10))

      Multiple callbacks can be passed to cb as array.

      source

      Callbacks

      Implicit train! takes an additional argument, cb, that's used for callbacks so that you can observe the training process. For example:

      train!(objective, ps, data, opt, cb = () -> println("training"))

      Callbacks are called for every batch of training data. You can slow this down using Flux.throttle(f, timeout) which prevents f from being called more than once every timeout seconds.

      A more typical callback might look like this:

      test_x, test_y = # ... create single batch of test data ...
      +Params([[2 2], [3, 3, 3]])
      source
      Optimisers.update!Method
      update!(opt, p, g)
      +update!(opt, ps::Params, gs)

      Perform an update step of the parameters ps (or the single parameter p) according to optimiser opt::AbstractOptimiser and the gradients gs (the gradient g).

      As a result, the parameters are mutated and the optimiser's internal state may change. The gradient could be mutated as well.

      Deprecated

      This method for implicit Params (and AbstractOptimiser) will be removed from Flux 0.15. The explicit method update!(opt, model, grad) from Optimisers.jl will remain.

      source
      Flux.Optimise.train!Method
      train!(loss, pars::Params, data, opt::AbstractOptimiser; [cb])

      Uses a loss function and training data to improve the model's parameters according to a particular optimisation rule opt.

      Deprecated

      This method with implicit Params will be removed from Flux 0.15. It should be replaced with the explicit method train!(loss, model, data, opt).

      For each d in data, first the gradient of the loss is computed like this:

          gradient(() -> loss(d...), pars)  # if d isa Tuple
      +    gradient(() -> loss(d), pars)     # otherwise

      Here pars is produced by calling Flux.params on your model. (Or just on the layers you want to train, like train!(loss, params(model[1:end-2]), data, opt).) This is the "implicit" style of parameter handling.

      This gradient is then used by optimiser opt to update the parameters:

          update!(opt, pars, grads)

      The optimiser should be from the Flux.Optimise module (see Optimisers). Different optimisers can be combined using Flux.Optimise.Optimiser.

      This training loop iterates through data once. It will stop with a DomainError if the loss is NaN or infinite.

You can use train! inside a for loop to do this several times, or use, for instance, IterTools.ncycle to make a longer data iterator.

      Callbacks

      Callbacks are given with the keyword argument cb. For example, this will print "training" every 10 seconds (using Flux.throttle):

          train!(loss, params, data, opt, cb = throttle(() -> println("training"), 10))

      Multiple callbacks can be passed to cb as array.

      source

      Callbacks

      Implicit train! takes an additional argument, cb, that's used for callbacks so that you can observe the training process. For example:

      train!(objective, ps, data, opt, cb = () -> println("training"))

      Callbacks are called for every batch of training data. You can slow this down using Flux.throttle(f, timeout) which prevents f from being called more than once every timeout seconds.

      A more typical callback might look like this:

      test_x, test_y = # ... create single batch of test data ...
       evalcb() = @show(loss(test_x, test_y))
       throttled_cb = throttle(evalcb, 5)
       for epoch in 1:20
         @info "Epoch $epoch"
         Flux.train!(objective, ps, data, opt, cb = throttled_cb)
      -end

      See the page about callback helpers for more.

      +end

      See the page about callback helpers for more.

      diff --git a/previews/PR2360/training/training/index.html b/previews/PR2360/training/training/index.html index d170abb604..2dd9e4dfa3 100644 --- a/previews/PR2360/training/training/index.html +++ b/previews/PR2360/training/training/index.html @@ -119,4 +119,4 @@ train!(loss, bimodel, data, opt_state) # Un-freeze the entire model: -Flux.thaw!(opt_state)
      Flux ≤ 0.14

      The earlier "implicit" equivalent was to pass to gradient an object referencing only part of the model, such as Flux.params(bimodel.layers.enc).

      Implicit or Explicit?

      Flux used to handle gradients, training, and optimisation rules quite differently. The new style described above is called "explicit" by Zygote, and the old style "implicit". Flux 0.13 and 0.14 are the transitional versions which support both.

      The blue-green boxes above describe the changes. For more details on training in the implicit style, see Flux 0.13.6 documentation.

      For details about the two gradient modes, see Zygote's documentation.
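A small sketch of the difference, for a made-up two-layer model:

    using Flux

    m = Chain(Dense(2 => 3, relu), Dense(3 => 1))
    x = rand(Float32, 2)

    # implicit style (Flux ≤ 0.14): gradients are looked up by parameter array
    ps = Flux.params(m)
    gs = gradient(() -> sum(m(x)), ps)              # e.g. gs[m[1].weight]

    # explicit style: the gradient is a nested structure mirroring the model
    grad = gradient(model -> sum(model(x)), m)[1]   # e.g. grad.layers[1].weight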

      diff --git a/previews/PR2360/training/zygote/index.html b/previews/PR2360/training/zygote/index.html index 2e39b8b8b2..e06e0b7a99 100644 --- a/previews/PR2360/training/zygote/index.html +++ b/previews/PR2360/training/zygote/index.html @@ -194,4 +194,4 @@ y = fill(x, len) fill_pullback(ȳ) = (NoTangent(), @thunk(sum(Ȳ)), NoTangent()) return y, fill_pullback - end
      ChainRulesCore.ZeroTangentType
      ZeroTangent() <: AbstractZero

      The additive identity for tangents. This is basically the same as 0. A derivative of ZeroTangent() does not propagate through the primal function.
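A quick illustration, assuming ChainRulesCore's usual arithmetic for AbstractZero:

    julia> using ChainRulesCore

    julia> ZeroTangent() + 2.0
    2.0

    julia> ZeroTangent() * 2.0
    ZeroTangent()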

      diff --git a/previews/PR2360/tutorials/2020-09-15-deep-learning-flux/index.html b/previews/PR2360/tutorials/2020-09-15-deep-learning-flux/index.html index 6ca0dacbc3..dfeddf8f97 100644 --- a/previews/PR2360/tutorials/2020-09-15-deep-learning-flux/index.html +++ b/previews/PR2360/tutorials/2020-09-15-deep-learning-flux/index.html @@ -121,4 +121,4 @@ end end -class_correct ./ class_total

      The spread seems pretty good, with certain classes performing significantly better than the others. Why should that be?

      Info

      Originally published at fluxml.ai on 15 November 2020. Written by Saswat Das, Mike Innes, Andrew Dinhobl, Ygor Canalli, Sudhanshu Agrawal, João Felipe Santos.

      diff --git a/previews/PR2360/tutorials/2021-01-26-mlp/index.html b/previews/PR2360/tutorials/2021-01-26-mlp/index.html index c072aeb0c1..4f5dafe3fa 100644 --- a/previews/PR2360/tutorials/2021-01-26-mlp/index.html +++ b/previews/PR2360/tutorials/2021-01-26-mlp/index.html @@ -78,4 +78,4 @@ @show accuracy(train_data, m) @show accuracy(test_data, m) -end

train loads and batches the data, builds the model, and then runs the optimisation loop, reporting the accuracy on the train and test sets after each epoch.
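As a rough, hedged sketch (the helpers getdata, build_model and accuracy, and the choice of Adam, stand in for the full model-zoo code):

    function train(; epochs = 10, η = 3e-4)
        train_data, test_data = getdata()        # load and batch MNIST
        m = build_model()                        # the MLP defined earlier
        opt = Adam(η)
        loss(x, y) = Flux.logitcrossentropy(m(x), y)
        for epoch in 1:epochs
            Flux.train!(loss, Flux.params(m), train_data, opt)
            @show accuracy(train_data, m)
            @show accuracy(test_data, m)
        end
        return m
    end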

      To see the full version of this example, see Simple multi-layer perceptron - model-zoo.

      Resources

      Info

      Originally published at fluxml.ai on 26 January 2021. Written by Adarsh Kumar, Mike J Innes, Andrew Dinhobl, Jerry Ling, natema, Zhang Shitian, Liliana Badillo, Dhairya Gandhi

      diff --git a/previews/PR2360/tutorials/2021-02-07-convnet/index.html b/previews/PR2360/tutorials/2021-02-07-convnet/index.html index 72dcfc0c69..e9f68e79a6 100644 --- a/previews/PR2360/tutorials/2021-02-07-convnet/index.html +++ b/previews/PR2360/tutorials/2021-02-07-convnet/index.html @@ -149,4 +149,4 @@ test_set = gpu.(test_set) model = gpu(model) @show accuracy(test_set...,model) -end

      test loads the MNIST test data set, reconstructs the model, and loads the saved parameters (in mnist_conv.bson) onto it. Finally, it computes our model's predictions for the test set and shows the test accuracy (around 99%).
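A condensed sketch of that flow (build_model and accuracy stand in for the full model-zoo code, and the saved variable name is assumed):

    using Flux, BSON, MLDatasets

    test_x, test_y = MLDatasets.MNIST(split=:test)[:]   # images and labels
    model = build_model()                                # rebuild the same architecture
    BSON.@load "mnist_conv.bson" params                  # read the saved parameters
    Flux.loadparams!(model, params)                      # copy them into the model
    @show accuracy(test_x, test_y, model)                # expect roughly 0.99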

      To see the full version of this example, see Simple ConvNets - model-zoo.

      Resources

      Info

      Originally published at fluxml.ai on 7 February 2021. Written by Elliot Saba, Adarsh Kumar, Mike J Innes, Dhairya Gandhi, Sudhanshu Agrawal, Sambit Kumar Dash, fps.io, Carlo Lucibello, Andrew Dinhobl, Liliana Badillo

      diff --git a/previews/PR2360/tutorials/2021-10-08-dcgan-mnist/index.html b/previews/PR2360/tutorials/2021-10-08-dcgan-mnist/index.html index 5d9307cf11..73e9a9ce1f 100644 --- a/previews/PR2360/tutorials/2021-10-08-dcgan-mnist/index.html +++ b/previews/PR2360/tutorials/2021-10-08-dcgan-mnist/index.html @@ -177,4 +177,4 @@ images = load.(img_paths) # Join all the images in the array to create a matrix of images gif_mat = cat(images..., dims=3) -save("./output.gif", gif_mat)

      Resources & References

      Info

      Originally published at fluxml.ai on 8 October 2021, by Deeptendu Santra

      diff --git a/previews/PR2360/tutorials/2021-10-14-vanilla-gan/index.html b/previews/PR2360/tutorials/2021-10-14-vanilla-gan/index.html index c28cfe1beb..0a2ceaf067 100644 --- a/previews/PR2360/tutorials/2021-10-14-vanilla-gan/index.html +++ b/previews/PR2360/tutorials/2021-10-14-vanilla-gan/index.html @@ -104,4 +104,4 @@ p = heatmap(fake_data, colormap=:inferno) print(p) end -end

For the hyper-parameters shown in this example, the generator produces useful images after about 1000 epochs, and after about 5000 epochs the results look indistinguishable from real MNIST data. On an Nvidia V100 GPU (hosted by a 2.7 GHz Power9 CPU with 32 hardware threads), training 100 epochs takes about 80 seconds, with GPU utilization between 30 and 40%. To observe the network more frequently during training you can, for example, set output_period=20. Training the GAN on the CPU takes about 10 minutes per epoch and is not recommended.

      Results

Below you can see what some of the output images may look like after different numbers of epochs.

      Resources

      Info

      Originally published at fluxml.ai on 14 October 2021, by Ralph Kube.

      diff --git a/previews/PR2360/tutorials/linear_regression/index.html b/previews/PR2360/tutorials/linear_regression/index.html index 4b2bc7a023..b0cc02af69 100644 --- a/previews/PR2360/tutorials/linear_regression/index.html +++ b/previews/PR2360/tutorials/linear_regression/index.html @@ -106,4 +106,4 @@ 27.1272f0

      The loss went down significantly! It can be minimized further by choosing an even smaller δ.

      Testing the Flux model

The last step of this tutorial is to test our model using the testing data. We will first normalise the testing data and then calculate the corresponding loss.

      julia> x_test_n = Flux.normalise(x_test);
       
       julia> loss(model, x_test_n, y_test)
66.91015f0

      The loss is not as small as the loss of the training data, but it looks good! This also shows that our model is not overfitting!


      Summarising this tutorial, we started by generating a random yet correlated dataset for our custom model. We then saw how a simple linear regression model could be built with and without Flux, and how they were almost identical.

      Next, we trained the model by manually writing down the Gradient Descent algorithm and optimising the loss. We also saw how Flux provides various wrapper functionalities and keeps the API extremely intuitive and simple for the users.

      After getting familiar with the basics of Flux and Julia, we moved ahead to build a machine learning model for a real dataset. We repeated the exact same steps, but this time with a lot more features and data points, and by harnessing Flux's full capabilities. In the end, we developed a training loop that was smarter than the hardcoded one and ran the model on our normalised dataset to conclude the tutorial.

      Info

      Originally published on 21 November 2022, by Saransh Chopra.

      diff --git a/previews/PR2360/tutorials/logistic_regression/index.html b/previews/PR2360/tutorials/logistic_regression/index.html index db8bc72ab2..e287887778 100644 --- a/previews/PR2360/tutorials/logistic_regression/index.html +++ b/previews/PR2360/tutorials/logistic_regression/index.html @@ -131,4 +131,4 @@ flux_accuracy(x, y) = 0.98 julia> flux_loss(flux_model, x, flux_y_onehot) -0.6952386604624324

      We see a very similar final loss and accuracy.


Summarising this tutorial, we saw how we can run a logistic regression algorithm in Julia with and without using Flux. We started by importing the classic Iris dataset and one-hot encoding the labels. Next, we defined our model, the loss function, and the accuracy, all by ourselves.

Finally, we trained the model by manually writing down the Gradient Descent algorithm and optimising the loss. Interestingly, we implemented most of the functions on our own, and compared them in parallel with the functionalities provided by Flux!

      Info

      Originally published on 1st April 2023, by Saransh Chopra.

      diff --git a/previews/PR2360/utilities/index.html b/previews/PR2360/utilities/index.html index ce24ef4209..862acaec14 100644 --- a/previews/PR2360/utilities/index.html +++ b/previews/PR2360/utilities/index.html @@ -32,7 +32,7 @@ julia> ans.bias 2-element Vector{Float32}: 0.0 - 0.0

      References

      [1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." Proceedings of the thirteenth international conference on artificial intelligence and statistics. 2010.

      source
      Flux.glorot_normalFunction
      glorot_normal([rng], size...; gain = 1) -> Array
       glorot_normal([rng]; kw...) -> Function

      Return an Array{Float32} of the given size containing random numbers drawn from a normal distribution with standard deviation gain * sqrt(2 / (fan_in + fan_out)), using nfan.

      This method is described in [1] and also known as Xavier initialization.

      Examples

      julia> using Statistics
       
       julia> round(std(Flux.glorot_normal(10, 1000)), digits=3)
      @@ -48,7 +48,7 @@
       Dense(10 => 1000, tanh)  # 11_000 parameters
       
       julia> round(std(ans.weight), sigdigits=3)
4.45f0

      References

      [1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." Proceedings of the thirteenth international conference on artificial intelligence and statistics. 2010.

      source
      Flux.kaiming_uniformFunction
      kaiming_uniform([rng], size...; gain = √2) -> Array
       kaiming_uniform([rng]; kw...) -> Function

      Return an Array{Float32} of the given size containing random numbers drawn from a uniform distribution on the interval [-x, x], where x = gain * sqrt(3/fan_in) using nfan.

      This method is described in [1] and also known as He initialization.

      Examples

      julia> round.(extrema(Flux.kaiming_uniform(100, 10)), digits=3)
       (-0.774f0, 0.774f0)
       
      @@ -56,7 +56,7 @@
       (-0.245f0, 0.244f0)
       
       julia> round.(extrema(Flux.kaiming_uniform(100, 100)), digits=3)
(-0.245f0, 0.245f0)

      References

      [1] He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." Proceedings of the IEEE international conference on computer vision. 2015.

      source
      Flux.kaiming_normalFunction
      kaiming_normal([rng], size...; gain = √2) -> Array
       kaiming_normal([rng]; kw...) -> Function

Return an Array{Float32} of the given size containing random numbers taken from a normal distribution with standard deviation gain / sqrt(fan_in), using nfan.

      This method is described in [1] and also known as He initialization.

      Examples

      julia> using Statistics
       
       julia> round(std(Flux.kaiming_normal(10, 1000)), digits=3)
      @@ -66,7 +66,7 @@
       0.447f0
       
       julia> round(std(Flux.kaiming_normal(1000, 1000)), digits=3)
0.045f0

      References

      [1] He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." Proceedings of the IEEE international conference on computer vision. 2015.

      source
      Flux.truncated_normalFunction
      truncated_normal([rng], size...; mean = 0, std = 1, lo = -2, hi = 2) -> Array
       truncated_normal([rng]; kw...) -> Function

      Return an Array{Float32} of the given size where each element is drawn from a truncated normal distribution. The numbers are distributed like filter(x -> lo<=x<=hi, mean .+ std .* randn(100)).

      The values are generated by sampling a Uniform(0, 1) (rand()) and then applying the inverse CDF of the truncated normal distribution. This method works best when lo ≤ mean ≤ hi.
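A sketch of that inverse-CDF idea (not Flux's exact implementation; erf and erfinv come from SpecialFunctions):

    using SpecialFunctions: erf, erfinv

    Φ(x)    = (1 + erf(x / √2)) / 2      # standard normal CDF
    Φinv(p) = √2 * erfinv(2p - 1)        # its inverse

    function sample_truncated(mean, std, lo, hi)
        a, b = Φ((lo - mean) / std), Φ((hi - mean) / std)
        u = a + (b - a) * rand()         # Uniform(0, 1) mapped into [Φ(a), Φ(b)]
        return mean + std * Φinv(u)
    end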

      Examples

      julia> using Statistics
       
       julia> Flux.truncated_normal(3, 4) |> summary
      @@ -76,7 +76,7 @@
       (-2.0f0, 2.0f0)
       
       julia> round(std(Flux.truncated_normal(10^6; lo = -100, hi = 100)))
1.0f0
      source
      Flux.orthogonalFunction
      orthogonal([rng], size...; gain = 1) -> Array
       orthogonal([rng]; kw...) -> Function

      Return an Array{Float32} of the given size which is a (semi) orthogonal matrix, as described in [1].

      Cannot construct a vector, i.e. length(size) == 1 is forbidden. For length(size) > 2, a prod(size[1:(end - 1)]) by size[end] orthogonal matrix is computed before reshaping it to the original dimensions.

      Examples

      julia> W = Flux.orthogonal(5, 7);
       
       julia> summary(W)
      @@ -96,7 +96,7 @@
       julia> W3 = Flux.orthogonal(3, 3, 2, 4);
       
       julia> transpose(reshape(W3, :, 4)) * reshape(W3, :, 4) ≈ I(4)
true

      References

      [1] Saxe, McClelland, Ganguli. "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", ICLR 2014, https://arxiv.org/abs/1312.6120

      source
      Flux.sparse_initFunction
      sparse_init([rng], rows, cols; sparsity, std = 0.01) -> Array
       sparse_init([rng]; kw...) -> Function

      Return a Matrix{Float32} of size rows, cols where each column contains a fixed fraction of zero elements given by sparsity. Non-zero elements are normally distributed with a mean of zero and standard deviation std.

      This method is described in [1].

      Examples

      julia> count(iszero, Flux.sparse_init(10, 10, sparsity=1/5))
       20
       
      @@ -109,7 +109,7 @@
       
       julia> count(iszero, ans.weight, dims=1)
       1×3 Matrix{Int64}:
 5  5  5

      References

      [1] Martens, J, "Deep learning via Hessian-free optimization" Proceedings of the 27th International Conference on International Conference on Machine Learning. 2010.

      source
      Flux.identity_initFunction
      identity_init(size...; gain=1, shift=0) -> Array
       identity_init(; kw...) -> Function

      Return an Array{Float32} of the given size which yields an identity mapping when used as parameters in most Flux layers. Use gain to scale the identity by a constant.

Often useful in the context of transfer learning, i.e. when one wants to add more capacity to a model but start from the same mapping.

      Has the following behaviour

      • 1D: A Vector of zeros (useful for an identity bias)
      • 2D: An identity matrix (useful for an identity matrix multiplication)
      • More than 2D: A dense block array of center tap spatial filters (useful for an identity convolution)

      Some caveats:

• Not all layers will be an identity mapping when used with this init. Exceptions include recurrent layers and normalization layers.

      • Layers must have input_size == output_size for identity mapping to be possible. When this is not the case, extra dimensions of the array are padded with zeros.

      • For convolutional layers, in addition to the above, the kernel sizes must also be odd and padding must be applied so that output feature maps have the same size as input feature maps, e.g by using SamePad.

      Use keyword shift (integer or tuple) to apply circular shift to the output, equivalent to Base.circshift(identity_init(size...), shift).

      For consistency with other initialisers, it accepts rng::AbstractRNG as an optional first argument. But this is ignored, since the result is not random.

      Examples

      julia> Flux.identity_init(3,5)
       3×5 Matrix{Float32}:
        1.0  0.0  0.0  0.0  0.0
      @@ -141,7 +141,7 @@
       [:, :, 1, 1] =
        10.0  20.0  30.0
        40.0  50.0  60.0
 70.0  80.0  90.0
      source
      Flux.ones32Function
      ones32(size...) = ones(Float32, size...)

      Return an Array{Float32} of the given size filled with 1s.

      source
      Flux.zeros32Function
      zeros32(size...) = zeros(Float32, size...)

      Return an Array{Float32} of the given size filled with 0s.

      source
      Flux.rand32Function
      rand32([rng], size...)

      Return an Array{Float32} of the given size, filled like rand. When the size is not provided, rand32(rng::AbstractRNG) returns a function.

      source
      Flux.randn32Function
      randn32([rng], size...)

      Return an Array{Float32} of the given size, filled like randn. When the size is not provided, randn32(rng::AbstractRNG) returns a function.

      source
      Flux.create_biasFunction
      create_bias(weights, bias, size...)

      Return a bias parameter for a layer, based on the value given to the constructor's keyword bias=bias.

      • bias == true creates a trainable array of the given size, of the same type as weights, initialised to zero.
      • bias == false returns false, which is understood by AD to be non-differentiable.
• bias::AbstractArray uses the array provided, as long as it has the correct size. It will also convert the eltype to match that of weights.
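For instance (a quick REPL illustration with an arbitrary weight array):

    julia> w = rand(Float32, 3, 4);

    julia> Flux.create_bias(w, true, 3)
    3-element Vector{Float32}:
     0.0
     0.0
     0.0

    julia> Flux.create_bias(w, false, 3)
    false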
      source

      These functions call:

      Flux.rng_from_arrayFunction
      rng_from_array(x)

      Create an instance of the RNG most appropriate for x. The current defaults are:

      • x isa CuArray: CUDA.default_rng()
• x isa AbstractArray: Random.default_rng()
      source
      Flux.nfanFunction
      nfan(n_out, n_in=1) -> Tuple
       nfan(dims...)
       nfan(dims::Tuple)

      For a layer characterized by dimensions dims, return a tuple (fan_in, fan_out), where fan_in is the number of input neurons connected to an output one, and fan_out is the number of output neurons connected to an input one.

      This function is mainly used by weight initializers, e.g., kaiming_normal.

      Examples

      julia> layer = Dense(10, 20);
       
      @@ -151,7 +151,7 @@
       julia> layer = Conv((3, 3), 2=>10);
       
       julia> Flux.nfan(size(layer.weight))
(18, 90)
      source

      Changing the type of all parameters

      The default eltype for models is Float32 since models are often trained/run on GPUs. The eltype of model m can be changed to Float64 by f64(m):
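For example (a minimal check on a hypothetical layer):

    julia> m = Dense(2 => 3);

    julia> eltype(m.weight)
    Float32

    julia> eltype(f64(m).weight)
    Float64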

      Flux.f64Function
      f64(m)

      Converts the eltype of model's floating point parameters to Float64. Recurses into structs marked with @functor.

      See also f32 and f16.

      source
      Flux.f32Function
      f32(m)

      Converts the eltype of model's floating point parameters to Float32 (which is Flux's default). Recurses into structs marked with @functor.

      See also f64 and f16.

      source
      Flux.f16Function
      f16(m)

      Converts the eltype of model's floating point parameters to Float16. Recurses into structs marked with @functor.

      Support for Float16 is limited on many CPUs. Julia may convert to Float32 for each operation, which is slow.

      See also f32 and f64.

      Example

      julia> m = Chain(Dense(784, 2048, relu), Dense(2048, 10))  # all Float32
       Chain(
         Dense(784 => 2048, relu),             # 1_607_680 parameters
         Dense(2048 => 10),                    # 20_490 parameters
      @@ -161,4 +161,4 @@
       Chain(
         Dense(784 => 2048, relu),             # 1_607_680 parameters
         Dense(2048 => 10),                    # 20_490 parameters
)                   # Total: 4 arrays, 1_628_170 parameters, 3.106 MiB.
      source