diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index 4ff88d17..53abfb92 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-07-06T11:07:24","documenter_version":"1.5.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-07-28T11:26:44","documenter_version":"1.5.0"}}
\ No newline at end of file

diff --git a/dev/dev/contributing/index.html b/dev/dev/contributing/index.html
index 7dca6c03..1862d57b 100644
--- a/dev/dev/contributing/index.html
+++ b/dev/dev/contributing/index.html
@@ -1,2 +1,2 @@
How to Contribute · ForwardDiff

How to Contribute

There are a few fairly easy ways for newcomers to substantially improve ForwardDiff, and they all revolve around writing functions for Dual numbers. This section provides brief tutorials on how to make these contributions.

If you're new to GitHub, here's an outline of the workflow you should use:

  1. Fork ForwardDiff
  2. Make a new branch on your fork, named after whatever changes you'll be making
  3. Apply your code changes to the branch on your fork
  4. When you're done, submit a PR to ForwardDiff to merge your branch into ForwardDiff's master branch.

Adding New Derivative Definitions

In general, new derivative implementations for Dual are automatically defined via simple symbolic rules. ForwardDiff accomplishes this by looping over the rules provided by the DiffRules package and using them to auto-generate Dual definitions. Conveniently, these auto-generated definitions are also automatically tested.

Thus, in order to add a new derivative implementation for Dual, you should define the appropriate derivative rule(s) in DiffRules, and then check that calling the function on Dual instances delivers the desired result.
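For concreteness, here is a sketch of what such a rule looks like. The function MyPkg.sigmoid is purely illustrative (not an existing rule), and in practice the @define_diffrule line would land in DiffRules' rule tables via a PR:

    using DiffRules

    # Illustrative only: a derivative rule for a hypothetical scalar function
    # MyPkg.sigmoid, whose derivative is sigmoid(x) * (1 - sigmoid(x)).
    DiffRules.@define_diffrule MyPkg.sigmoid(x) = :(sigmoid($x) * (1 - sigmoid($x)))

    # Once ForwardDiff's auto-generation has picked the rule up, check the
    # result on Dual inputs through the user-facing API:
    #   ForwardDiff.derivative(MyPkg.sigmoid, 0.5)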

Depending on your function, ForwardDiff's auto-definition mechanism might need to be expanded to support it. If this is the case, file an issue/PR so that ForwardDiff's maintainers can help you out.

diff --git a/dev/dev/how_it_works/index.html b/dev/dev/how_it_works/index.html
index e7ecb3c9..6a186a7b 100644
--- a/dev/dev/how_it_works/index.html
+++ b/dev/dev/how_it_works/index.html
@@ -43,4 +43,4 @@
x_4 + \epsilon_2
\end{bmatrix}
\to
f(\vec{x}_{\epsilon}) = f(\vec{x}) + \frac{\delta f(\vec{x})}{\delta x_3} \epsilon_1 + \frac{\delta f(\vec{x})}{\delta x_4} \epsilon_2\]

This seeding process is similar for Jacobians, so we won't rehash it here.
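To make the seeding concrete, here is a small sketch (the target function f is illustrative) that manually seeds the two epsilon components into x₃ and x₄ of a length-4 input, matching the equation above:

    using ForwardDiff: Dual, partials

    f(x) = sum(abs2, x)   # illustrative target function
    x = [1.0, 2.0, 3.0, 4.0]

    # Seed ϵ₁ into x₃ and ϵ₂ into x₄, leaving x₁ and x₂ unperturbed:
    xeps = [Dual(x[1], 0.0, 0.0), Dual(x[2], 0.0, 0.0),
            Dual(x[3], 1.0, 0.0), Dual(x[4], 0.0, 1.0)]

    partials(f(xeps))     # (6.0, 8.0) == (∂f/∂x₃, ∂f/∂x₄) at x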

diff --git a/dev/index.html b/dev/index.html
index eb3c4aa6..35a7ef81 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -35,4 +35,4 @@
end
2×4 Matrix{Float64}:
 0.707107   0.0  0.0  0.0
 0.0       12.0  8.0  6.0
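The hunk above shows only the tail of the page's example. For reference, a call of the following shape reproduces such output; the function g and the input point are an assumed reconstruction, not the page's actual code:

    using ForwardDiff

    g(x) = [sin(x[1]), prod(x[2:end])]   # assumed example function
    x = [π/4, 2.0, 3.0, 4.0]

    ForwardDiff.jacobian(g, x)
    # 2×4 Matrix{Float64}:
    #  0.707107   0.0  0.0  0.0
    #  0.0       12.0  8.0  6.0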

If you find ForwardDiff useful in your work, we kindly request that you cite our paper. The relevant BibLaTeX is available in ForwardDiff's README (not included here because BibLaTeX doesn't play nice with Documenter/Jekyll).

diff --git a/dev/user/advanced/index.html b/dev/user/advanced/index.html
index 6a75a4eb..3e2cd9bf 100644
--- a/dev/user/advanced/index.html
+++ b/dev/user/advanced/index.html
@@ -93,4 +93,4 @@
 0  0  0
 2  1  0

Likewise, you could write a version of vector_hessian which supports functions of the form f!(y, x), or perhaps an in-place Jacobian with ForwardDiff.jacobian!.

Custom tags and tag checking

The Dual type includes a "tag" parameter indicating the particular function call to which it belongs. This is to avoid a problem known as perturbation confusion which can arise when there are nested differentiation calls. Tags are automatically generated as part of the appropriate config object, and the tag is checked when the config is used as part of a differentiation call (derivative, gradient, etc.): an InvalidTagException will be thrown if the incorrect config object is used.

This checking can sometimes be inconvenient, and in certain cases you may want to disable it. There are two ways to do so (a combined sketch follows the list):

Warning

Disabling tag checking should only be done with caution, especially if the code itself could be used inside another differentiation call.

  1. (preferred) Provide an extra Val{false}() argument to the differentiation function, e.g.

    cfg = ForwardDiff.GradientConfig(g, x)
    ForwardDiff.gradient(f, x, cfg, Val{false}())

    If using as part of a library, the tag can be checked manually via

    ForwardDiff.checktag(cfg, g, x)
  2. (discouraged) Construct the config object with nothing instead of a function argument, e.g.

    cfg = GradientConfig(nothing, x)
    gradient(f, x, cfg)
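Putting both options together, a minimal sketch (the functions f and g here are illustrative):

    using ForwardDiff

    f(x) = sum(abs2, x)
    g(x) = sum(x)
    x = rand(3)

    cfg = ForwardDiff.GradientConfig(g, x)          # tag generated for g
    # ForwardDiff.gradient(f, x, cfg)               # throws InvalidTagException
    ForwardDiff.gradient(f, x, cfg, Val{false}())   # option 1: skip the tag check

    untagged = ForwardDiff.GradientConfig(nothing, x)
    ForwardDiff.gradient(f, x, untagged)            # option 2: untagged config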
diff --git a/dev/user/api/index.html b/dev/user/api/index.html
index 380325a4..ef418131 100644
--- a/dev/user/api/index.html
+++ b/dev/user/api/index.html
@@ -1,2 +1,2 @@
Differentiation API · ForwardDiff

Differentiation API

Derivatives of f(x::Real)::Union{Real,AbstractArray}

ForwardDiff.derivative — Function
ForwardDiff.derivative(f, x::Real)

Return df/dx evaluated at x, assuming f is called as f(x).

This method assumes that isa(f(x), Union{Real,AbstractArray}).

source
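For instance:

    using ForwardDiff

    ForwardDiff.derivative(sin, π/3)   # ≈ cos(π/3) == 0.5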
ForwardDiff.derivative(f!, y::AbstractArray, x::Real, cfg::DerivativeConfig = DerivativeConfig(f!, y, x), check=Val{true}())

Return df!/dx evaluated at x, assuming f! is called as f!(y, x) where the result is stored in y.

Set check to Val{false}() to disable tag checking. This can lead to perturbation confusion, so should be used with care.

source
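A usage sketch for the mutating form (the function f! here is illustrative):

    using ForwardDiff

    f!(y, x) = (y[1] = sin(x); y[2] = x^2; nothing)
    y = zeros(2)

    ForwardDiff.derivative(f!, y, 1.0)   # [cos(1.0), 2.0]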
ForwardDiff.derivative! — Function
ForwardDiff.derivative!(result::Union{AbstractArray,DiffResult}, f, x::Real)

Compute df/dx evaluated at x and store the result(s) in result, assuming f is called as f(x).

This method assumes that isa(f(x), Union{Real,AbstractArray}).

source
ForwardDiff.derivative!(result::Union{AbstractArray,DiffResult}, f!, y::AbstractArray, x::Real, cfg::DerivativeConfig = DerivativeConfig(f!, y, x), check=Val{true}())

Compute df!/dx evaluated at x and store the result(s) in result, assuming f! is called as f!(y, x) where the result is stored in y.

Set check to Val{false}() to disable tag checking. This can lead to perturbation confusion, so should be used with care.

source
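For example, pairing derivative! with a DiffResult from DiffResults.jl captures the value and the derivative in a single pass:

    using ForwardDiff, DiffResults

    result = DiffResults.DiffResult(0.0, 0.0)
    result = ForwardDiff.derivative!(result, x -> x^3, 2.0)

    DiffResults.value(result)        # 8.0
    DiffResults.derivative(result)   # 12.0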

Gradients of f(x::AbstractArray)::Real

ForwardDiff.gradient — Function
ForwardDiff.gradient(f, x::AbstractArray, cfg::GradientConfig = GradientConfig(f, x), check=Val{true}())

Return ∇f evaluated at x, assuming f is called as f(x). The array ∇f has the same shape as x, and its elements are ∇f[j, k, ...] = ∂f/∂x[j, k, ...].

This method assumes that isa(f(x), Real).

Set check to Val{false}() to disable tag checking. This can lead to perturbation confusion, so should be used with care.

source
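For instance:

    using ForwardDiff

    ForwardDiff.gradient(x -> sum(abs2, x), [1.0, 2.0, 3.0])   # [2.0, 4.0, 6.0]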
ForwardDiff.gradient! — Function
ForwardDiff.gradient!(result::Union{AbstractArray,DiffResult}, f, x::AbstractArray, cfg::GradientConfig = GradientConfig(f, x), check=Val{true}())

Compute ∇f evaluated at x and store the result(s) in result, assuming f is called as f(x).

This method assumes that isa(f(x), Real).

source
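For example, with a preallocated DiffResult, the value and gradient are both recorded:

    using ForwardDiff, DiffResults

    x = [1.0, 2.0]
    result = DiffResults.GradientResult(x)
    ForwardDiff.gradient!(result, x -> sum(abs2, x), x)

    DiffResults.value(result)      # 5.0
    DiffResults.gradient(result)   # [2.0, 4.0]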

Jacobians of f(x::AbstractArray)::AbstractArray

ForwardDiff.jacobian — Function
ForwardDiff.jacobian(f, x::AbstractArray, cfg::JacobianConfig = JacobianConfig(f, x), check=Val{true}())

Return J(f) evaluated at x, assuming f is called as f(x). Multidimensional arrays are flattened in iteration order: the array J(f) has shape length(f(x)) × length(x), and its elements are J(f)[j,k] = ∂f(x)[j]/∂x[k]. When x is a vector, this means that jacobian(x->[f(x)], x) is the transpose of gradient(f, x).

This method assumes that isa(f(x), AbstractArray).

Set check to Val{false}() to disable tag checking. This can lead to perturbation confusion, so should be used with care.

source
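For instance:

    using ForwardDiff

    ForwardDiff.jacobian(x -> [x[1] * x[2], x[1] + x[2]], [2.0, 3.0])
    # 2×2 Matrix{Float64}:
    #  3.0  2.0
    #  1.0  1.0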
ForwardDiff.jacobian(f!, y::AbstractArray, x::AbstractArray, cfg::JacobianConfig = JacobianConfig(f!, y, x), check=Val{true}())

Return J(f!) evaluated at x, assuming f! is called as f!(y, x) where the result is stored in y.

Set check to Val{false}() to disable tag checking. This can lead to perturbation confusion, so should be used with care.

source
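A usage sketch for the mutating form (f! is illustrative):

    using ForwardDiff

    f!(y, x) = (y[1] = x[1]^2; y[2] = x[1] * x[2]; nothing)
    y = zeros(2)

    ForwardDiff.jacobian(f!, y, [2.0, 3.0])
    # 2×2 Matrix{Float64}:
    #  4.0  0.0
    #  3.0  2.0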
ForwardDiff.jacobian! — Function
ForwardDiff.jacobian!(result::Union{AbstractArray,DiffResult}, f, x::AbstractArray, cfg::JacobianConfig = JacobianConfig(f, x), check=Val{true}())

Compute J(f) evaluated at x and store the result(s) in result, assuming f is called as f(x).

This method assumes that isa(f(x), AbstractArray).

Set check to Val{false}() to disable tag checking. This can lead to perturbation confusion, so should be used with care.

source
ForwardDiff.jacobian!(result::Union{AbstractArray,DiffResult}, f!, y::AbstractArray, x::AbstractArray, cfg::JacobianConfig = JacobianConfig(f!, y, x), check=Val{true}())

Compute J(f!) evaluated at x and store the result(s) in result, assuming f! is called as f!(y, x) where the result is stored in y.

This method assumes that isa(f(x), AbstractArray).

Set check to Val{false}() to disable tag checking. This can lead to perturbation confusion, so should be used with care.

source
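For example, writing into a preallocated output matrix:

    using ForwardDiff

    out = zeros(2, 2)
    ForwardDiff.jacobian!(out, x -> [x[1]^2, x[2]^2], [2.0, 3.0])
    out
    # 2×2 Matrix{Float64}:
    #  4.0  0.0
    #  0.0  6.0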

Hessians of f(x::AbstractArray)::Real

ForwardDiff.hessian — Function
ForwardDiff.hessian(f, x::AbstractArray, cfg::HessianConfig = HessianConfig(f, x), check=Val{true}())

Return H(f) (i.e. J(∇(f))) evaluated at x, assuming f is called as f(x).

This method assumes that isa(f(x), Real).

Set check to Val{false}() to disable tag checking. This can lead to perturbation confusion, so should be used with care.

source
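For instance:

    using ForwardDiff

    ForwardDiff.hessian(x -> x[1]^2 * x[2], [2.0, 3.0])
    # 2×2 Matrix{Float64}:
    #  6.0  4.0
    #  4.0  0.0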
ForwardDiff.hessian! — Function
ForwardDiff.hessian!(result::AbstractArray, f, x::AbstractArray, cfg::HessianConfig = HessianConfig(f, x), check=Val{true}())

Compute H(f) (i.e. J(∇(f))) evaluated at x and store the result(s) in result, assuming f is called as f(x).

This method assumes that isa(f(x), Real).

Set check to Val{false}() to disable tag checking. This can lead to perturbation confusion, so should be used with care.

source
ForwardDiff.hessian!(result::DiffResult, f, x::AbstractArray, cfg::HessianConfig = HessianConfig(f, result, x), check=Val{true}())

Exactly like ForwardDiff.hessian!(result::AbstractArray, f, x::AbstractArray, cfg::HessianConfig), but because isa(result, DiffResult), cfg is constructed as HessianConfig(f, result, x) instead of HessianConfig(f, x).

Set check to Val{false}() to disable tag checking. This can lead to perturbation confusion, so should be used with care.

source
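For example, a HessianResult records the value, gradient, and Hessian in one pass:

    using ForwardDiff, DiffResults

    x = [1.0, 2.0]
    result = DiffResults.HessianResult(x)
    result = ForwardDiff.hessian!(result, x -> sum(abs2, x), x)

    DiffResults.value(result)      # 5.0
    DiffResults.gradient(result)   # [2.0, 4.0]
    DiffResults.hessian(result)    # [2.0 0.0; 0.0 2.0]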

Preallocating/Configuring Work Buffers

For the sake of convenience and performance, all "extra" information used by ForwardDiff's API methods is bundled up in the ForwardDiff.AbstractConfig family of types. These types allow the user to easily feed several different parameters to ForwardDiff's API methods, such as chunk size, work buffers, and perturbation seed configurations.

ForwardDiff's basic API methods will allocate these types automatically by default, but you can drastically reduce memory usage if you preallocate them yourself.

Note that for all constructors below, the chunk size N may be explicitly provided, or omitted, in which case ForwardDiff will automatically select a chunk size for you. However, it is highly recommended to specify the chunk size manually when possible (see Configuring Chunk Size).

Note also that configurations constructed for a specific function f cannot be reused to differentiate other functions (though can be reused to differentiate f at different values). To construct a configuration which can be reused to differentiate any function, you can pass nothing as the function argument. While this is more flexible, it decreases ForwardDiff's ability to catch and prevent perturbation confusion.
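For example, a preallocated config with an explicit chunk size, reused across calls (the function f and the chunk size 4 are illustrative choices):

    using ForwardDiff

    f(x) = sum(abs2, x)
    x = rand(100)

    cfg = ForwardDiff.GradientConfig(f, x, ForwardDiff.Chunk{4}())
    ForwardDiff.gradient(f, x, cfg)   # reuses cfg's buffers on every call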

ForwardDiff.DerivativeConfig — Type
ForwardDiff.DerivativeConfig(f!, y::AbstractArray, x::AbstractArray)

Return a DerivativeConfig instance based on the type of f!, and the types/shapes of the output vector y and the input vector x.

The returned DerivativeConfig instance contains all the work buffers required by ForwardDiff.derivative and ForwardDiff.derivative! when the target function takes the form f!(y, x).

If f! is nothing instead of the actual target function, then the returned instance can be used with any target function. However, this will reduce ForwardDiff's ability to catch and prevent perturbation confusion (see https://github.com/JuliaDiff/ForwardDiff.jl/issues/83).

This constructor does not store/modify y or x.

source
ForwardDiff.GradientConfig — Type
ForwardDiff.GradientConfig(f, x::AbstractArray, chunk::Chunk = Chunk(x))

Return a GradientConfig instance based on the type of f and type/shape of the input vector x.

The returned GradientConfig instance contains all the work buffers required by ForwardDiff.gradient and ForwardDiff.gradient!.

If f is nothing instead of the actual target function, then the returned instance can be used with any target function. However, this will reduce ForwardDiff's ability to catch and prevent perturbation confusion (see https://github.com/JuliaDiff/ForwardDiff.jl/issues/83).

This constructor does not store/modify x.

source
ForwardDiff.JacobianConfig — Type
ForwardDiff.JacobianConfig(f, x::AbstractArray, chunk::Chunk = Chunk(x))

Return a JacobianConfig instance based on the type of f and type/shape of the input vector x.

The returned JacobianConfig instance contains all the work buffers required by ForwardDiff.jacobian and ForwardDiff.jacobian! when the target function takes the form f(x).

If f is nothing instead of the actual target function, then the returned instance can be used with any target function. However, this will reduce ForwardDiff's ability to catch and prevent perturbation confusion (see https://github.com/JuliaDiff/ForwardDiff.jl/issues/83).

This constructor does not store/modify x.

source
ForwardDiff.JacobianConfig(f!, y::AbstractArray, x::AbstractArray, chunk::Chunk = Chunk(x))

Return a JacobianConfig instance based on the type of f!, and the types/shapes of the output vector y and the input vector x.

The returned JacobianConfig instance contains all the work buffers required by ForwardDiff.jacobian and ForwardDiff.jacobian! when the target function takes the form f!(y, x).

If f! is nothing instead of the actual target function, then the returned instance can be used with any target function. However, this will reduce ForwardDiff's ability to catch and prevent perturbation confusion (see https://github.com/JuliaDiff/ForwardDiff.jl/issues/83).

This constructor does not store/modify y or x.

source
ForwardDiff.HessianConfig — Type
ForwardDiff.HessianConfig(f, x::AbstractArray, chunk::Chunk = Chunk(x))

Return a HessianConfig instance based on the type of f and type/shape of the input vector x.

The returned HessianConfig instance contains all the work buffers required by ForwardDiff.hessian and ForwardDiff.hessian!. For the latter, the buffers are configured for the case where the result argument is an AbstractArray. If it is a DiffResult, the HessianConfig should instead be constructed via ForwardDiff.HessianConfig(f, result, x, chunk).

If f is nothing instead of the actual target function, then the returned instance can be used with any target function. However, this will reduce ForwardDiff's ability to catch and prevent perturbation confusion (see https://github.com/JuliaDiff/ForwardDiff.jl/issues/83).

This constructor does not store/modify x.

source
ForwardDiff.HessianConfig(f, result::DiffResult, x::AbstractArray, chunk::Chunk = Chunk(x))

Return a HessianConfig instance based on the type of f, types/storage in result, and type/shape of the input vector x.

The returned HessianConfig instance contains all the work buffers required by ForwardDiff.hessian! for the case where the result argument is a DiffResult.

If f is nothing instead of the actual target function, then the returned instance can be used with any target function. However, this will reduce ForwardDiff's ability to catch and prevent perturbation confusion (see https://github.com/JuliaDiff/ForwardDiff.jl/issues/83).

This constructor does not store/modify x.

source
diff --git a/dev/user/limitations/index.html b/dev/user/limitations/index.html
index 50d4cebf..f7eee47d 100644
--- a/dev/user/limitations/index.html
+++ b/dev/user/limitations/index.html
@@ -1,2 +1,2 @@
Limitations of ForwardDiff · ForwardDiff

Limitations of ForwardDiff

ForwardDiff works by injecting user code with new number types that collect derivative information at runtime. Naturally, this technique has some limitations. Here's a list of all the roadblocks we've seen users run into ("target function" here refers to the function being differentiated):

  • The target function can only be composed of generic Julia functions. ForwardDiff cannot propagate derivative information through non-Julia code. Thus, your function may not work if it makes calls to external, non-Julia programs, e.g. uses explicit BLAS calls instead of Ax_mul_Bx-style functions.

  • The target function must be unary (i.e., only accept a single argument). ForwardDiff.jacobian is an exception to this rule.

  • The target function must be written generically enough to accept numbers of type T<:Real as input (or arrays of these numbers). The function doesn't require a specific type signature, as long as the type signature is generic enough to avoid breaking this rule. This also means that any storage assigned or used within the function must be generic as well (see this comment for an example, and the sketch after this list).

  • The types of array inputs must be subtypes of AbstractArray. Non-AbstractArray array-like types are not officially supported.
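As a sketch of the genericity requirement from the list above (function names are illustrative):

    using ForwardDiff

    # Too restrictive: Dual inputs cannot be passed as Vector{Float64}, so this fails:
    f_strict(x::Vector{Float64}) = sum(abs2, x)

    # Generic: any Real element type, including ForwardDiff.Dual, is accepted:
    f_generic(x::AbstractVector{<:Real}) = sum(abs2, x)

    ForwardDiff.gradient(f_generic, [1.0, 2.0])    # [2.0, 4.0]
    # ForwardDiff.gradient(f_strict, [1.0, 2.0])   # MethodError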

ForwardDiff is not natively compatible with rules defined by the ChainRules.jl ecosystem. You can use ForwardDiffChainRules.jl to bridge this gap.

diff --git a/dev/user/upgrade/index.html b/dev/user/upgrade/index.html
index b9d1e7f9..712ce9fc 100644
--- a/dev/user/upgrade/index.html
+++ b/dev/user/upgrade/index.html
@@ -73,4 +73,4 @@
jf! = ForwardDiff.jacobian(f!, mutates = true, output_length = length(y))

# ForwardDiff v0.2 & above
jf! = (out, y, x) -> ForwardDiff.jacobian!(out, f!, y, x)