
Commit

Merge branch 'main' into compat
ranocha authored Dec 2, 2024
2 parents 03b7bc4 + 5db379b commit 9b5f01c
Showing 31 changed files with 457 additions and 278 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/SpellCheck.yml
@@ -10,4 +10,4 @@ jobs:
- name: Checkout Actions Repository
uses: actions/checkout@v4
- name: Check spelling
uses: crate-ci/typos@v1.27.0
uses: crate-ci/typos@v1.28.1
4 changes: 2 additions & 2 deletions .github/workflows/ci.yml
@@ -133,9 +133,9 @@ jobs:
- uses: julia-actions/julia-processcoverage@v1
with:
directories: src,examples,ext
- uses: codecov/codecov-action@v4
- uses: codecov/codecov-action@v5
with:
file: ./lcov.info
files: ./lcov.info
flags: unittests
name: codecov-umbrella
fail_ci_if_error: true
2 changes: 2 additions & 0 deletions NEWS.md
@@ -11,6 +11,8 @@ for human readability.
- New time integrator `PairedExplicitRK3`, implementing the third-order paired explicit Runge-Kutta
method with [Convex.jl](https://github.com/jump-dev/Convex.jl), [ECOS.jl](https://github.com/jump-dev/ECOS.jl),
and [NLsolve.jl](https://github.com/JuliaNLSolvers/NLsolve.jl) ([#2008])
- `LobattoLegendreBasis` and related data structures made fully floating-type general,
enabling calculations with higher than double (`Float64`) precision ([#2128])

## Changes when updating to v0.9 from v0.8.x

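To illustrate the floating-type generality mentioned in the NEWS entry above, here is a minimal sketch (not part of this commit's diff): the `RealT` keyword of `DGSEM` can be set to a non-default type such as `Float128` from Quadmath.jl, and the Lobatto-Legendre basis data is then computed in that precision. The complete setups are in the new elixirs added by this commit.

```julia
# Minimal sketch, assuming Trixi.jl and Quadmath.jl are installed.
using Trixi
using Quadmath  # provides Float128

# DGSEM solver whose nodes, weights, and operators are computed in Float128
solver = DGSEM(RealT = Float128, polydeg = 13, surface_flux = flux_lax_friedrichs)
```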
1 change: 1 addition & 0 deletions README.md
@@ -33,6 +33,7 @@ installation and postprocessing procedures. Its features include:
* Hierarchical quadtree/octree grid with adaptive mesh refinement
* Forests of quadtrees/octrees with [p4est](https://github.com/cburstedde/p4est) via [P4est.jl](https://github.com/trixi-framework/P4est.jl)
* High-order accuracy in space and time
* Arbitrary floating-point precision
* Discontinuous Galerkin methods
* Kinetic energy-preserving and entropy-stable methods based on flux differencing
* Entropy-stable shock capturing
1 change: 1 addition & 0 deletions docs/src/index.md
@@ -26,6 +26,7 @@ installation and postprocessing procedures. Its features include:
* Hierarchical quadtree/octree grid with adaptive mesh refinement
* Forests of quadtrees/octrees with [p4est](https://github.com/cburstedde/p4est) via [P4est.jl](https://github.com/trixi-framework/P4est.jl)
* High-order accuracy in space and time
* Arbitrary floating-point precision
* Discontinuous Galerkin methods
* Kinetic energy-preserving and entropy-stable methods based on flux differencing
* Entropy-stable shock capturing
60 changes: 60 additions & 0 deletions examples/structured_1d_dgsem/elixir_advection_float128.jl
@@ -0,0 +1,60 @@

using OrdinaryDiffEq
using Trixi

using Quadmath

###############################################################################
# semidiscretization of the linear advection equation

# See https://github.com/JuliaMath/Quadmath.jl
RealT = Float128

advection_velocity = 4 / 3 # Does not need to be in higher precision
equations = LinearScalarAdvectionEquation1D(advection_velocity)

solver = DGSEM(RealT = RealT, polydeg = 13, surface_flux = flux_lax_friedrichs)

# CARE: Important to use higher precision datatype for coordinates
# as these are used for type promotion of the mesh (points etc.)
coordinates_min = (-one(RealT),)
coordinates_max = (one(RealT),)
cells_per_dimension = (1,)

# `StructuredMesh` infers datatype from coordinates
mesh = StructuredMesh(cells_per_dimension, coordinates_min, coordinates_max)

semi = SemidiscretizationHyperbolic(mesh, equations, initial_condition_convergence_test,
                                    solver)

###############################################################################
# ODE solvers, callbacks etc.

# CARE: Important to use higher precision datatype in specification of final time
tspan = (zero(RealT), one(RealT))

ode = semidiscretize(semi, tspan);

summary_callback = SummaryCallback()

analysis_callback = AnalysisCallback(semi, interval = 100,
                                     extra_analysis_errors = (:conservation_error,))

# cfl does not need to be in higher precision
stepsize_callback = StepsizeCallback(cfl = 0.25)

callbacks = CallbackSet(summary_callback,
                        stepsize_callback,
                        analysis_callback)

###############################################################################
# run the simulation

sol = solve(ode, Feagin14(),
            # Turn off adaptivity to avoid setting very small tolerances
            adaptive = false,
            dt = 42, # `dt` does not need to be in higher precision
            save_everystep = false, callback = callbacks);

# Print the timer summary
summary_callback()
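For reference, a hedged sketch of one way to run this elixir from the Julia REPL; `examples_dir` and `trixi_include` are existing Trixi.jl helpers, and the path assumes a Trixi.jl version that ships this file.

```julia
# Run the new Float128 elixir from the installed Trixi.jl examples directory
using Trixi
trixi_include(joinpath(examples_dir(), "structured_1d_dgsem",
                       "elixir_advection_float128.jl"))
```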
4 changes: 2 additions & 2 deletions examples/structured_1d_dgsem/elixir_burgers_perk3.jl
@@ -1,9 +1,9 @@
# Convex and ECOS are imported because they are used for finding the optimal time step and optimal
# monomial coefficients in the stability polynomial of P-ERK time integrators.
# monomial coefficients in the stability polynomial of PERK time integrators.
using Convex, ECOS

# NLsolve is imported to solve the system of nonlinear equations to find the coefficients
# in the Butcher tableau in the third order P-ERK time integrator.
# in the Butcher tableau in the third order PERK time integrator.
using NLsolve

using OrdinaryDiffEq
63 changes: 63 additions & 0 deletions examples/tree_1d_dgsem/elixir_advection_doublefloat.jl
@@ -0,0 +1,63 @@

using OrdinaryDiffEq
using Trixi

using DoubleFloats

###############################################################################
# semidiscretization of the linear advection equation

# See https://github.com/JuliaMath/DoubleFloats.jl
RealT = Double64

advection_velocity = 4 / 3 # Does not need to be in higher precision
equations = LinearScalarAdvectionEquation1D(advection_velocity)

solver = DGSEM(RealT = RealT, polydeg = 7, surface_flux = flux_lax_friedrichs)

# CARE: Important to use higher precision datatype for coordinates
# as these are used for type promotion of the mesh (points etc.)
coordinates_min = -one(RealT) # minimum coordinate
coordinates_max = one(RealT) # maximum coordinate

# For `TreeMesh` the datatype has to be specified explicitly,
# i.e., is not inferred from the coordinates.
mesh = TreeMesh(coordinates_min, coordinates_max,
                initial_refinement_level = 3,
                n_cells_max = 30_000,
                RealT = RealT)

semi = SemidiscretizationHyperbolic(mesh, equations, initial_condition_convergence_test,
                                    solver)

###############################################################################
# ODE solvers, callbacks etc.

# CARE: Important to use higher precision datatype in specification of final time
tspan = (zero(RealT), one(RealT))

ode = semidiscretize(semi, tspan);

summary_callback = SummaryCallback()

analysis_callback = AnalysisCallback(semi, interval = 100,
                                     extra_analysis_errors = (:conservation_error,))

# cfl does not need to be in higher precision
stepsize_callback = StepsizeCallback(cfl = 1.4)

callbacks = CallbackSet(summary_callback,
                        stepsize_callback,
                        analysis_callback)

###############################################################################
# run the simulation

sol = solve(ode, DP8(),
            # Turn off adaptivity to avoid setting very small tolerances
            adaptive = false,
            dt = 42, # `dt` does not need to be in higher precision
            save_everystep = false, callback = callbacks);

# Print the timer summary
summary_callback()
2 changes: 1 addition & 1 deletion examples/tree_1d_dgsem/elixir_advection_perk2.jl
@@ -1,6 +1,6 @@

# Convex and ECOS are imported because they are used for finding the optimal time step and optimal
# monomial coefficients in the stability polynomial of P-ERK time integrators.
# monomial coefficients in the stability polynomial of PERK time integrators.
using Convex, ECOS

using OrdinaryDiffEq
2 changes: 1 addition & 1 deletion ext/TrixiConvexECOSExt.jl
@@ -1,7 +1,7 @@
# Package extension for adding Convex-based features to Trixi.jl
module TrixiConvexECOSExt

# Required for coefficient optimization in P-ERK scheme integrators
# Required for coefficient optimization in PERK scheme integrators
if isdefined(Base, :get_extension)
using Convex: MOI, solve!, Variable, minimize, evaluate
using ECOS: Optimizer
2 changes: 1 addition & 1 deletion ext/TrixiNLsolveExt.jl
@@ -2,7 +2,7 @@
module TrixiNLsolveExt

# Required for finding coefficients in Butcher tableau in the third order of
# P-ERK scheme integrators
# PERK scheme integrators
if isdefined(Base, :get_extension)
using NLsolve: nlsolve
else
6 changes: 4 additions & 2 deletions src/auxiliary/precompile.jl
@@ -329,8 +329,10 @@ function _precompile_manual_()
@assert Base.precompile(Tuple{typeof(Trixi.gauss_nodes_weights), Int})
@assert Base.precompile(Tuple{typeof(Trixi.calc_forward_upper), Int})
@assert Base.precompile(Tuple{typeof(Trixi.calc_forward_lower), Int})
@assert Base.precompile(Tuple{typeof(Trixi.calc_reverse_upper), Int, Val{:gauss}})
@assert Base.precompile(Tuple{typeof(Trixi.calc_reverse_lower), Int, Val{:gauss}})
@assert Base.precompile(Tuple{typeof(Trixi.calc_reverse_upper), Int,
Val{:gauss}})
@assert Base.precompile(Tuple{typeof(Trixi.calc_reverse_lower), Int,
Val{:gauss}})
@assert Base.precompile(Tuple{typeof(Trixi.calc_reverse_upper), Int,
Val{:gauss_lobatto}})
@assert Base.precompile(Tuple{typeof(Trixi.calc_reverse_lower), Int,
16 changes: 10 additions & 6 deletions src/auxiliary/special_elixirs.jl
@@ -6,10 +6,11 @@
#! format: noindent

"""
convergence_test([mod::Module=Main,] elixir::AbstractString, iterations; kwargs...)
convergence_test([mod::Module=Main,] elixir::AbstractString, iterations, RealT = Float64; kwargs...)
Run `iterations` Trixi.jl simulations using the setup given in `elixir` and compute
the experimental order of convergence (EOC) in the ``L^2`` and ``L^\\infty`` norm.
Use `RealT` as the data type to represent the errors.
In each iteration, the resolution of the respective mesh will be doubled.
Additional keyword arguments `kwargs...` and the optional module `mod` are passed directly
to [`trixi_include`](@ref).
@@ -18,12 +19,14 @@ This function assumes that the spatial resolution is set via the keywords
`initial_refinement_level` (an integer) or `cells_per_dimension` (a tuple of
integers, one per spatial dimension).
"""
function convergence_test(mod::Module, elixir::AbstractString, iterations; kwargs...)
function convergence_test(mod::Module, elixir::AbstractString, iterations,
                          RealT = Float64;
                          kwargs...)
@assert(iterations>1,
"Number of iterations must be bigger than 1 for a convergence analysis")

# Types of errors to be calculated
errors = Dict(:l2 => Float64[], :linf => Float64[])
errors = Dict(:l2 => RealT[], :linf => RealT[])

initial_resolution = extract_initial_resolution(elixir, kwargs)

@@ -105,7 +108,7 @@ function analyze_convergence(errors, iterations,
println("")

# Print mean EOCs
mean_values = zeros(nvariables)
mean_values = zeros(eltype(errors[:l2]), nvariables)
for v in 1:nvariables
mean_values[v] = sum(eocs[kind][:, v]) ./ length(eocs[kind][:, v])
@printf("%-10s", "mean")
@@ -119,8 +122,9 @@
return eoc_mean_values
end

function convergence_test(elixir::AbstractString, iterations; kwargs...)
convergence_test(Main, elixir::AbstractString, iterations; kwargs...)
function convergence_test(elixir::AbstractString, iterations, RealT = Float64;
                          kwargs...)
    convergence_test(Main, elixir::AbstractString, iterations, RealT; kwargs...)
end

# Helper methods used in the functions defined above
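A hedged usage sketch of the extended `convergence_test` signature (the elixir path is one of the files added by this commit; the iteration count is illustrative): passing `RealT` makes the collected errors and the resulting EOC table use that type.

```julia
# Convergence study with errors collected as Float128 values
using Trixi
using Quadmath  # provides Float128

convergence_test(joinpath(examples_dir(), "structured_1d_dgsem",
                          "elixir_advection_float128.jl"),
                 2, Float128)
```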
9 changes: 5 additions & 4 deletions src/callbacks_step/analysis_dg1d.jl
@@ -61,16 +61,17 @@ function calc_error_norms(func, u, t, analyzer,
inv.(view(inverse_jacobian, :, element)))

# Calculate errors at each analysis node
@. jacobian_local = abs(jacobian_local)

for i in eachnode(analyzer)
u_exact = initial_condition(get_node_coords(x_local, equations, dg, i), t,
equations)
diff = func(u_exact, equations) -
func(get_node_vars(u_local, equations, dg, i), equations)
l2_error += diff .^ 2 * (weights[i] * jacobian_local[i])
# We take absolute value as we need the Jacobian here for the volume calculation
abs_jacobian_local_i = abs(jacobian_local[i])

l2_error += diff .^ 2 * (weights[i] * abs_jacobian_local_i)
linf_error = @. max(linf_error, abs(diff))
total_volume += weights[i] * jacobian_local[i]
total_volume += weights[i] * abs_jacobian_local_i
end
end

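The change above, repeated for the 2D/3D (and parallel) variants below, takes the absolute value of the Jacobian at each analysis node instead of overwriting the whole local Jacobian array up front. A minimal standalone sketch of the resulting accumulation pattern (generic Julia, not Trixi's internal API):

```julia
# Accumulate a squared L2 error and the total volume using the absolute Jacobian
# at each node, leaving the local Jacobian array itself unmodified.
function accumulate_error_and_volume(diffs, weights, jacobian_local)
    l2_error = zero(eltype(diffs))
    total_volume = zero(eltype(jacobian_local))
    for i in eachindex(weights)
        abs_jacobian_local_i = abs(jacobian_local[i])
        l2_error += diffs[i]^2 * (weights[i] * abs_jacobian_local_i)
        total_volume += weights[i] * abs_jacobian_local_i
    end
    return l2_error, total_volume
end

# Toy call with dummy data
accumulate_error_and_volume([0.1, -0.2, 0.05], [0.5, 1.0, 0.5], [-1.0, 1.0, -1.0])
```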
9 changes: 5 additions & 4 deletions src/callbacks_step/analysis_dg2d.jl
@@ -161,16 +161,17 @@ function calc_error_norms(func, u, t, analyzer,
jacobian_tmp1)

# Calculate errors at each analysis node
@. jacobian_local = abs(jacobian_local)

for j in eachnode(analyzer), i in eachnode(analyzer)
u_exact = initial_condition(get_node_coords(x_local, equations, dg, i, j),
t, equations)
diff = func(u_exact, equations) -
func(get_node_vars(u_local, equations, dg, i, j), equations)
l2_error += diff .^ 2 * (weights[i] * weights[j] * jacobian_local[i, j])
# We take absolute value as we need the Jacobian here for the volume calculation
abs_jacobian_local_ij = abs(jacobian_local[i, j])

l2_error += diff .^ 2 * (weights[i] * weights[j] * abs_jacobian_local_ij)
linf_error = @. max(linf_error, abs(diff))
total_volume += weights[i] * weights[j] * jacobian_local[i, j]
total_volume += weights[i] * weights[j] * abs_jacobian_local_ij
end
end

9 changes: 5 additions & 4 deletions src/callbacks_step/analysis_dg2d_parallel.jl
@@ -114,16 +114,17 @@ function calc_error_norms(func, u, t, analyzer,
jacobian_tmp1)

# Calculate errors at each analysis node
@. jacobian_local = abs(jacobian_local)

for j in eachnode(analyzer), i in eachnode(analyzer)
u_exact = initial_condition(get_node_coords(x_local, equations, dg, i, j),
t, equations)
diff = func(u_exact, equations) -
func(get_node_vars(u_local, equations, dg, i, j), equations)
l2_error += diff .^ 2 * (weights[i] * weights[j] * jacobian_local[i, j])
# We take absolute value as we need the Jacobian here for the volume calculation
abs_jacobian_local_ij = abs(jacobian_local[i, j])

l2_error += diff .^ 2 * (weights[i] * weights[j] * abs_jacobian_local_ij)
linf_error = @. max(linf_error, abs(diff))
volume += weights[i] * weights[j] * jacobian_local[i, j]
volume += weights[i] * weights[j] * abs_jacobian_local_ij
end
end

11 changes: 6 additions & 5 deletions src/callbacks_step/analysis_dg3d.jl
@@ -187,18 +187,19 @@ function calc_error_norms(func, u, t, analyzer,
jacobian_tmp1, jacobian_tmp2)

# Calculate errors at each analysis node
@. jacobian_local = abs(jacobian_local)

for k in eachnode(analyzer), j in eachnode(analyzer), i in eachnode(analyzer)
u_exact = initial_condition(get_node_coords(x_local, equations, dg, i, j,
k), t, equations)
diff = func(u_exact, equations) -
func(get_node_vars(u_local, equations, dg, i, j, k), equations)
# We take absolute value as we need the Jacobian here for the volume calculation
abs_jacobian_local_ijk = abs(jacobian_local[i, j, k])

l2_error += diff .^ 2 *
(weights[i] * weights[j] * weights[k] * jacobian_local[i, j, k])
(weights[i] * weights[j] * weights[k] * abs_jacobian_local_ijk)
linf_error = @. max(linf_error, abs(diff))
total_volume += weights[i] * weights[j] * weights[k] *
jacobian_local[i, j, k]
total_volume += (weights[i] * weights[j] * weights[k] *
abs_jacobian_local_ijk)
end
end

9 changes: 5 additions & 4 deletions src/callbacks_step/analysis_dg3d_parallel.jl
@@ -31,17 +31,18 @@ function calc_error_norms(func, u, t, analyzer,
jacobian_tmp1, jacobian_tmp2)

# Calculate errors at each analysis node
@. jacobian_local = abs(jacobian_local)

for k in eachnode(analyzer), j in eachnode(analyzer), i in eachnode(analyzer)
u_exact = initial_condition(get_node_coords(x_local, equations, dg, i, j,
k), t, equations)
diff = func(u_exact, equations) -
func(get_node_vars(u_local, equations, dg, i, j, k), equations)
# We take absolute value as we need the Jacobian here for the volume calculation
abs_jacobian_local_ijk = abs(jacobian_local[i, j, k])

l2_error += diff .^ 2 *
(weights[i] * weights[j] * weights[k] * jacobian_local[i, j, k])
(weights[i] * weights[j] * weights[k] * abs_jacobian_local_ijk)
linf_error = @. max(linf_error, abs(diff))
volume += weights[i] * weights[j] * weights[k] * jacobian_local[i, j, k]
volume += weights[i] * weights[j] * weights[k] * abs_jacobian_local_ijk
end
end
