typos CI #981

Merged: 1 commit, Jan 21, 2024
3 changes: 3 additions & 0 deletions .github/dependabot.yml
@@ -5,3 +5,6 @@ updates:
directory: "/" # Location of package manifests
schedule:
interval: "weekly"
ignore:
- dependency-name: "crate-ci/typos"
update-types: ["version-update:semver-patch"]
13 changes: 13 additions & 0 deletions .github/workflows/SpellCheck.yml
@@ -0,0 +1,13 @@
name: Spell Check

on: [pull_request]

jobs:
typos-check:
name: Spell Check with Typos
runs-on: ubuntu-latest
steps:
- name: Checkout Actions Repository
uses: actions/checkout@v3
- name: Check spelling
uses: crate-ci/[email protected]
6 changes: 6 additions & 0 deletions .typos.toml
@@ -0,0 +1,6 @@
[default.extend-words]
nin = "nin"
strat = "strat"
ND = "ND"
NUMER = "NUMER"
Chater = "Chater"
6 changes: 3 additions & 3 deletions docs/src/manual/differential_equation_sensitivities.md
@@ -87,7 +87,7 @@ is:
- `ForwardDiffSensitivity` is the fastest for differential equations with small
numbers of parameters (<100) and can be used on any differential equation
solver that is native Julia. If the chosen ODE solver is incompatible
with direct automatic differentiation, `ForwardSensitivty` may be used instead.
with direct automatic differentiation, `ForwardSensitivity` may be used instead.
- Adjoint sensitivity analysis is the fastest when the number of parameters is
sufficiently large. `GaussAdjoint` should be generally preferred. `BacksolveAdjoint`
uses the least memory but on very stiff problems it may be unstable and
@@ -281,7 +281,7 @@ dramatically reduces the computational cost while being a low-memory
format. This is the preferred method for highly stiff equations
when memory is an issue, i.e. stiff PDEs or large neural DAEs.

For forward-mode, the `ForwardSensitivty` is the version that performs
For forward-mode, the `ForwardSensitivity` is the version that performs
the optimize-then-discretize approach. In this case, `autojacvec` corresponds
to the method for computing `J*v` within the forward sensitivity equations,
which is either `true` or `false` for whether to use Jacobian-free
@@ -301,7 +301,7 @@ differentiation on the solver via
[Zygote.jl](https://fluxml.ai/Zygote.jl/latest/), and `TrackerAdjoint`
performs reverse-mode automatic differentiation on the solver via
[Tracker.jl](https://github.com/FluxML/Tracker.jl). In addition,
`ForwardDiffSensitivty` performs forward-mode automatic differentiation
`ForwardDiffSensitivity` performs forward-mode automatic differentiation
on the solver via [ForwardDiff.jl](https://juliadiff.org/ForwardDiff.jl/stable/).

We note that many studies have suggested that [this approach produces
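As a worked illustration of the guidance in the documentation hunks above (not part of this PR), the sketch below passes the two `sensealg` choices discussed in the corrected text to `solve` inside a `Zygote.gradient` call, mirroring the pattern used in the test file later in this diff. The Lotka-Volterra problem, tolerances, and variable names are assumed purely for the example.

using SciMLSensitivity, OrdinaryDiffEq, Zygote

# Hypothetical test problem, used only to show where `sensealg` is passed.
function lotka!(du, u, p, t)
    du[1] = p[1] * u[1] - p[2] * u[1] * u[2]
    du[2] = -p[3] * u[2] + p[4] * u[1] * u[2]
end
u0 = [1.0, 1.0]
p = [1.5, 1.0, 3.0, 1.0]
prob = ODEProblem(lotka!, u0, (0.0, 10.0), p)

# Few parameters: forward-mode sensitivity is typically fastest.
du0_f, dp_f = Zygote.gradient(
    (u0, p) -> sum(solve(prob, Tsit5(), u0 = u0, p = p, saveat = 0.1,
                         sensealg = ForwardDiffSensitivity())), u0, p)

# Many parameters: an adjoint method such as `GaussAdjoint` is generally preferred.
du0_a, dp_a = Zygote.gradient(
    (u0, p) -> sum(solve(prob, Tsit5(), u0 = u0, p = p, saveat = 0.1,
                         sensealg = GaussAdjoint())), u0, p)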
22 changes: 11 additions & 11 deletions src/adjoint_common.jl
@@ -589,7 +589,7 @@ function generate_callbacks(sensefun, dgdu, dgdp, λ, t, t0, callback, init_cb,
end
cb = PresetTimeCallback(_t, rlcb)

# handle duplicates (currently only for double occurances)
# handle duplicates (currently only for double occurrences)
if duplicate_iterator_times !== nothing
# use same ref for cur_time to cope with concrete_solve
cbrev_dupl_affect = ReverseLossCallback(sensefun, λ, t, dgdu, dgdp, cur_time)
@@ -603,22 +603,22 @@ end
function separate_nonunique(t)
# t is already sorted
_t = unique(t)
ts_with_occurances = [(i, count(==(i), t)) for i in _t]
ts_with_occurrences = [(i, count(==(i), t)) for i in _t]

# duplicates (only those values which occur > 1 times)
dupl = filter(x -> last(x) > 1, ts_with_occurances)
dupl = filter(x -> last(x) > 1, ts_with_occurrences)

ts = first.(dupl)
occurances = last.(dupl)
occurrences = last.(dupl)

if isempty(occurances)
if isempty(occurrences)
itrs = nothing
else
maxoc = maximum(occurances)
maxoc = maximum(occurrences)
maxoc > 2 &&
error("More than two occurances of the same time point. Please report this.")
# handle also more than two occurances
itrs = [ts[occurances .>= i] for i in 2:maxoc]
error("More than two occurrences of the same time point. Please report this.")
# handle also more than two occurrences
itrs = [ts[occurrences .>= i] for i in 2:maxoc]
end

return _t, itrs
@@ -631,11 +631,11 @@ function out_and_ts(_ts, duplicate_iterator_times, sol)
else
# if callbacks are tracked, there is potentially an event_time that must be considered
# in the loss function but doesn't occur in saveat/t. So we need to add it.
# Note that if it doens't occur in saveat/t we even need to add it twice
# Note that if it doesn't occur in saveat/t we even need to add it twice
# However if the callbacks are not saving in the forward, we don't want to compute a loss
# value for them. This information is given by sol.t/checkpoints.
# Additionally we need to store the left and the right limit, respectively.
duplicate_times = duplicate_iterator_times[1] # just treat two occurances at the moment (see separate_nonunique above)
duplicate_times = duplicate_iterator_times[1] # just treat two occurrences at the moment (see separate_nonunique above)
_ts = Array(_ts)
for d in duplicate_times
(d ∉ _ts) && push!(_ts, d)
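For reference (not part of this diff), here is a standalone sketch of the occurrence-counting idiom that the renamed variables in `separate_nonunique` above implement, evaluated on a hypothetical sorted time vector with one duplicated entry:

t = [0.0, 0.5, 0.5, 1.0]                                        # sorted, 0.5 occurs twice
_t = unique(t)                                                  # [0.0, 0.5, 1.0]
ts_with_occurrences = [(i, count(==(i), t)) for i in _t]        # [(0.0, 1), (0.5, 2), (1.0, 1)]
dupl = filter(x -> last(x) > 1, ts_with_occurrences)            # [(0.5, 2)]
ts = first.(dupl)                                               # [0.5]
occurrences = last.(dupl)                                       # [2]
itrs = [ts[occurrences .>= i] for i in 2:maximum(occurrences)]  # [[0.5]]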
2 changes: 1 addition & 1 deletion src/callback_tracking.jl
@@ -229,7 +229,7 @@ function _setup_reverse_callbacks(cb::Union{ContinuousCallback, DiscreteCallback
# if save_positions = [1,1] is true the loss gradient is accumulated correctly before and after callback.
# if save_positions = [0,0] no extra gradient is added.
# if save_positions = [0,1] the gradient contribution is added before the callback but no additional gradient is added afterwards.
# if save_positions = [1,0] the gradient contribution is added before, and in principle we would need to correct the adjoint state again. Thefore,
# if save_positions = [1,0] the gradient contribution is added before, and in principle we would need to correct the adjoint state again. Therefore,

cb.save_positions == [1, 0] && error("save_positions=[1,0] is currently not supported.")
!(sensealg.autojacvec isa Union{ReverseDiffVJP, EnzymeVJP}) &&
8 changes: 4 additions & 4 deletions src/concrete_solve.jl
@@ -649,7 +649,7 @@ function DiffEqBase._concrete_solve_forward(prob::SciMLBase.AbstractODEProblem,
out, _concrete_solve_pushforward
end

const FORWARDDIFF_SENSITIVITY_PARAMETER_COMPATABILITY_MESSAGE = """
const FORWARDDIFF_SENSITIVITY_PARAMETER_COMPATIBILITY_MESSAGE = """
ForwardDiffSensitivity assumes the `AbstractArray` interface for `p`. Thus while
DifferentialEquations.jl can support any parameter struct type, usage
with ForwardDiffSensitivity requires that `p` could be a valid
@@ -664,7 +664,7 @@ const FORWARDDIFF_SENSITIVITY_PARAMETER_COMPATABILITY_MESSAGE = """
struct ForwardDiffSensitivityParameterCompatibilityError <: Exception end

function Base.showerror(io::IO, e::ForwardDiffSensitivityParameterCompatibilityError)
print(io, FORWARDDIFF_SENSITIVITY_PARAMETER_COMPATABILITY_MESSAGE)
print(io, FORWARDDIFF_SENSITIVITY_PARAMETER_COMPATIBILITY_MESSAGE)
end

# Generic Fallback for ForwardDiff
@@ -1200,7 +1200,7 @@ function DiffEqBase._concrete_solve_adjoint(prob::Union{SciMLBase.AbstractDiscre
DiffEqBase.sensitivity_solution(sol, u, Tracker.data.(sol.t)), tracker_adjoint_backpass
end

const REVERSEDIFF_ADJOINT_GPU_COMPATABILITY_MESSAGE = """
const REVERSEDIFF_ADJOINT_GPU_COMPATIBILITY_MESSAGE = """
ReverseDiffAdjoint is not compatible GPU-based array types. Use a different
sensitivity analysis method, like InterpolatingAdjoint or TrackerAdjoint,
in order to combine with GPUs.
@@ -1209,7 +1209,7 @@ const REVERSEDIFF_ADJOINT_GPU_COMPATABILITY_MESSAGE = """
struct ReverseDiffGPUStateCompatibilityError <: Exception end

function Base.showerror(io::IO, e::ReverseDiffGPUStateCompatibilityError)
print(io, FORWARDDIFF_SENSITIVITY_PARAMETER_COMPATABILITY_MESSAGE)
print(io, FORWARDDIFF_SENSITIVITY_PARAMETER_COMPATIBILITY_MESSAGE)
end

function DiffEqBase._concrete_solve_adjoint(prob::Union{SciMLBase.AbstractDiscreteProblem,
4 changes: 2 additions & 2 deletions src/forward_sensitivity.jl
@@ -158,7 +158,7 @@ function ODEForwardSensitivityProblem(prob::ODEProblem, alg; kwargs...)
ODEForwardSensitivityProblem(prob.f, prob.u0, prob.tspan, prob.p, alg; kwargs...)
end

const FORWARD_SENSITIVITY_PARAMETER_COMPATABILITY_MESSAGE = """
const FORWARD_SENSITIVITY_PARAMETER_COMPATIBILITY_MESSAGE = """
ODEForwardSensitivityProblem requires being able to solve
a differential equation defined by the parameter struct `p`. Even though
DifferentialEquations.jl can support any parameter struct type, usage
@@ -174,7 +174,7 @@ const FORWARD_SENSITIVITY_PARAMETER_COMPATABILITY_MESSAGE = """
struct ForwardSensitivityParameterCompatibilityError <: Exception end

function Base.showerror(io::IO, e::ForwardSensitivityParameterCompatibilityError)
print(io, FORWARD_SENSITIVITY_PARAMETER_COMPATABILITY_MESSAGE)
print(io, FORWARD_SENSITIVITY_PARAMETER_COMPATIBILITY_MESSAGE)
end

const FORWARD_SENSITIVITY_OUT_OF_PLACE_MESSAGE = """
2 changes: 1 addition & 1 deletion src/sensitivity_algorithms.jl
@@ -1,5 +1,5 @@
function SensitivityAlg(args...; kwargs...)
@error("The SensitivtyAlg choice mechanism was completely overhauled. Please consult the local sensitivity documentation for more information")
@error("The SensitivityAlg choice mechanism was completely overhauled. Please consult the local sensitivity documentation for more information")
end

"""
4 changes: 2 additions & 2 deletions src/sensitivity_interface.jl
@@ -1,6 +1,6 @@
## Direct calls

const ADJOINT_PARAMETER_COMPATABILITY_MESSAGE = """
const ADJOINT_PARAMETER_COMPATIBILITY_MESSAGE = """
Adjoint sensitivity analysis functionality requires being able to solve
a differential equation defined by the parameter struct `p`. Thus while
DifferentialEquations.jl can support any parameter struct type, usage
@@ -16,7 +16,7 @@ const ADJOINT_PARAMETER_COMPATABILITY_MESSAGE = """
struct AdjointSensitivityParameterCompatibilityError <: Exception end

function Base.showerror(io::IO, e::AdjointSensitivityParameterCompatibilityError)
print(io, ADJOINT_PARAMETER_COMPATABILITY_MESSAGE)
print(io, ADJOINT_PARAMETER_COMPATIBILITY_MESSAGE)
end

@doc doc"""
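The renamed `*_COMPATIBILITY_MESSAGE` constants in the three files above all follow the same idiom: an empty exception type whose `Base.showerror` method prints a long explanatory message. Below is a minimal self-contained sketch of that pattern, with hypothetical names rather than code from this PR.

# Sketch of the compatibility-error idiom used in the hunks above (hypothetical names).
const EXAMPLE_COMPATIBILITY_MESSAGE = """
    ExampleSensitivity assumes the `AbstractArray` interface for `p`.
    See the SciMLSensitivity documentation for the supported parameter types.
    """

struct ExampleCompatibilityError <: Exception end

function Base.showerror(io::IO, e::ExampleCompatibilityError)
    print(io, EXAMPLE_COMPATIBILITY_MESSAGE)
end

# `throw(ExampleCompatibilityError())` then prints the message above.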
2 changes: 1 addition & 1 deletion test/concrete_solve_derivatives.jl
@@ -338,7 +338,7 @@ du012, dp12 = Zygote.gradient((u0, p) -> sum(solve(proboop, Tsit5(), u0 = u0, p
saveat = 0.1, save_idxs = 1:1,
sensealg = ForwardDiffSensitivity())),
u0, p)
# Redundent to test aliasing
# Redundant to test aliasing
du013, dp13 = Zygote.gradient((u0, p) -> sum(solve(proboop, Tsit5(), u0 = u0, p = p,
abstol = 1e-14, reltol = 1e-14,
saveat = 0.1, save_idxs = 1:1,
2 changes: 1 addition & 1 deletion test/gpu/diffeqflux_standard_gpu.jl
@@ -1,6 +1,6 @@
using SciMLSensitivity, OrdinaryDiffEq, Lux, DiffEqFlux, LuxCUDA, Zygote, Random
using ComponentArrays
CUDA.allowscalar(false) # Makes sure no slow operations are occuring
CUDA.allowscalar(false) # Makes sure no slow operations are occurring

const gdev = gpu_device()
const cdev = cpu_device()
2 changes: 1 addition & 1 deletion test/sde_stratonovich.jl
@@ -119,7 +119,7 @@ end

@info res_sde_p2

# test consitency for different switches for the noise Jacobian
# test consistency for different switches for the noise Jacobian
res_sde_u02a, res_sde_p2a = adjoint_sensitivities(sol_oop_sde2, EulerHeun(), t = tarray,
dgdu_discrete = dg!,
dt = dt1, adaptive = false,
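For context (not part of this diff), `adjoint_sensitivities` calls like the one above take a discrete loss-gradient callback via `dgdu_discrete`; in SciMLSensitivity such callbacks typically have the form `dg!(out, u, p, t, i)`, filling `out` with the gradient of the loss with respect to the state at the i-th saved time. A hedged sketch with an assumed sum-of-squares loss, not the `dg!` actually used in this test:

function dg!(out, u, p, t, i)
    # Gradient of loss(u) = sum(abs2, u) with respect to u at saved index i.
    out .= 2 .* u
end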