Embedded + Autodiff #67

JordiManyer commented Jul 12, 2024

Here are some notes on the implementation of autodiff for embedded triangulations. A similar discussion can be found here.

Overview of the problem:

We distinguish two cases:

  1. When differentiating the boundary, the triangulations change, i.e., some interior cells become cut, some cut cells become interior, etc.
  2. The above does not happen, i.e., all interior, cut and exterior cells remain the same. In this case the domain is modified by moving the cuts within their cut cells: the number and shape of the sub-cells stay the same, but their areas change.

Case 1:

  • Since the number/type of interior/exterior/cut cells changes, quadrature points are effectively created and/or destroyed.
  • Therefore, there is no one-to-one correspondence between quadrature points in the original and updated measures.
  • This makes it (I think) impossible to propagate dual numbers.

Case 2:

  • Quadratures in the sub-cells' reference spaces are the same. Differentiability comes from a change in the geometric maps going from the sub-cell reference spaces to their parent cut cell.
  • Duality can be embedded within those geometric maps, which are produced from the values of our level set.
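To see how duality rides through the geometric maps, here is a minimal 1D sketch in plain Python (this is not Gridap/GridapEmbedded code; the `Dual` type, the quadrature and the nodal values are made up for illustration). The reference quadrature on the sub-cell is fixed; the only φ-dependent object is the map ξ ↦ c(φ)·ξ from the sub-cell reference space to the cut portion of the cell, so seeding a dual number in a nodal level-set value differentiates the integral.

```python
# Minimal forward-mode dual numbers (illustration only).
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _lift(x):
        return x if isinstance(x, Dual) else Dual(x)
    def __add__(self, o):
        o = Dual._lift(o); return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = Dual._lift(o); return Dual(self.val - o.val, self.der - o.der)
    def __mul__(self, o):
        o = Dual._lift(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def __truediv__(self, o):
        o = Dual._lift(o)
        return Dual(self.val / o.val,
                    (self.der * o.val - self.val * o.der) / o.val**2)

# 1D cell [0,1] with nodal level-set values phi0 < 0 (inside), phi1 > 0 (outside).
# The cut point c = phi0/(phi0 - phi1) defines the geometric map x = c*xi from
# the sub-cell reference space [0,1] onto the "inside" part [0,c] of the cell.
def integrate_inside(phi0, phi1, f, npts=4):
    c = phi0 / (phi0 - phi1)           # duality enters only through the map
    total = Dual(0.0)
    for i in range(npts):              # fixed midpoint quadrature on [0,1]
        xi = (i + 0.5) / npts
        total = total + f(c * xi) * (c * (1.0 / npts))  # mapped point & weight
    return total

phi0 = Dual(-0.3, 1.0)                 # seed derivative w.r.t. phi0
I = integrate_inside(phi0, 0.7, lambda x: x)
# Analytically: c = 0.3, integral = c^2/2 = 0.045,
# dI/dphi0 = c * dc/dphi0 = 0.3 * (-0.7) = -0.21
print(I.val, I.der)
```

The same structure carries over to the real setting: GridapEmbedded's sub-cell quadratures stay fixed in reference space, while the geometric maps (built from the level-set nodal values) carry the dual parts.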
JordiManyer added the notes (Developer notes) label on Jul 12, 2024
JordiManyer commented:
Current state:

  • We are currently only considering Case 2, for the above reasons. This is also what theoretical papers do (see here), which is very reasonable since otherwise things are not differentiable.
  • We can ensure we always fall under Case 2 by avoiding φh(p) = 0 for all mesh nodes p. During level-set updates, we can always add artificial advection (of order ε) in the direction of movement that ensures we do not move our level set right on top of a mesh node.
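A sketch of that safeguard in plain Python (the function name, tolerance, and values are made up; the MWE later in this thread does the equivalent on the free dof values with `φ[idx] .+= 100*eps(eltype(φ))`):

```python
TOL = 1e-10  # illustrative; the actual code uses something like 100*eps(Float64)

def nudge_level_set(phi, direction=1.0, tol=TOL):
    """Shift nodal level-set values that are numerically zero, so that no mesh
    node sits exactly on the interface and we always stay in Case 2.
    `direction` plays the role of the O(eps) advection direction."""
    return [v + direction * tol if abs(v) < tol else v for v in phi]

phi = [-0.5, 0.0, 5e-11, 0.25]
print(nudge_level_set(phi))  # the two near-zero nodes move off the interface
```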

We have the code working for simplicial meshes. Moving to QUAD/HEX meshes, however, requires some modification of the cutting algorithm:

  • Currently, QUADs are simplexified and then cut using a marching-tetrahedra algorithm. This creates bad cuts when the level set is not linear, because our FE space is locally Q1 (instead of P1, which is what the algorithm assumes).

  • To remedy this, we need to implement a marching-cubes algorithm. This would allow us to cut QUADs directly, instead of having to go through a mesh of simplices first. (TODO)
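The Q1-vs-P1 mismatch is easy to see on a single quad. Below is a plain-Python illustration (made-up nodal values, nothing from the actual cutter): restricted to a diagonal, a bilinear level set is quadratic in the arclength parameter, so it can change sign there even though the linear interpolant between the two diagonal nodes, which is all a simplexified P1 cut sees on that edge, never does.

```python
# Nodal values of a Q1 (bilinear) level set on the unit quad:
# phi(0,0)=a, phi(1,0)=b, phi(0,1)=c, phi(1,1)=d  (made-up values)
a, b, c, d = -1.0, 2.0, 2.0, -0.5

def q1(t):
    """Bilinear interpolant restricted to the diagonal x = y = t (quadratic)."""
    return a*(1 - t)**2 + (b + c)*t*(1 - t) + d*t**2

def p1(t):
    """What the simplexified P1 cut assumes on that diagonal: linear in a, d."""
    return a*(1 - t) + d*t

def bisect(f, lo, hi, iters=60):
    """Locate a sign change of f in [lo, hi] by bisection."""
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if f(lo)*f(mid) > 0 else (lo, mid)
    return 0.5*(lo + hi)

# The true (Q1) field changes sign along the diagonal...
assert q1(0.0) < 0 < q1(0.5)
# ...but the P1 restriction has the same sign at both diagonal nodes, so
# marching tetrahedra reports no cut point on this edge at all.
assert p1(0.0) < 0 and p1(1.0) < 0
t_cut = bisect(q1, 0.0, 0.5)
print(t_cut)  # root of -1 + 6t - 5.5t^2 on (0, 0.5)
```

A marching-cubes cutter works with the Q1 values on the quad directly and therefore does not suffer from this restriction error.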

zjwegert commented Nov 17, 2024

This comment will serve as a note regarding additional compatibility issues between GridapTopOpt and GridapEmbedded, and other issues that arise in the implementation of unfitted problems. We are actively investigating these.

Distributed issues

  • There appears to be a bug in distributed ODEs that gives AssertionError: You are trying to set a value that is not stored in the local portion. See scripts\Embedded\Bugs\odes_skeleton_distributed_bug.jl.

Jordi: I have not solved this one yet, but here is an explanation and workaround:

  • ODEs allocates the residual and the Jacobian separately.
  • This is fine in serial, but in parallel there are instances where the residual is touched by fewer ghost entries than the columns of the matrix.
  • If we assemble the Jacobian and residual together, we communicate the extra ghost ids to the residual, so everything is consistent.
  • However, if we assemble the residual and Jacobian separately, the residual is not aware of the extra ghost ids.

This happens when there are touched ghost entries that do not belong to the local domain. In particular, it happens when we have jumps, where some contributions come from two cells away: boundary cells then get contributions from cells which are not in the local domain. The workaround simply consists of touching the correct entries in the residual, as done in the script.

Autodiff/analytic gradient issues

  • MPI + 3D analytic gradients fails because Λ = Skeleton(EmbeddedBoundary(cutgeo)) produces an error. MWE in test/mpi/EmbeddedTests/EmbeddedDifferentiationTests.jl
Related error
┌ Error: 
│   exception =
│    BoundsError: attempt to access 34420-element BitVector at index [0]
│    Stacktrace:
│      [1] throw_boundserror(A::BitVector, I::Tuple{Int64})
│        @ Base ./essentials.jl:14
│      [2] checkbounds
│        @ ./abstractarray.jl:699 [inlined]
│      [3] getindex
│        @ ./bitarray.jl:681 [inlined]
│      [4] _getindex
│        @ ./abstractarray.jl:1336 [inlined]
│      [5] getindex
│        @ ./abstractarray.jl:1312 [inlined]
│      [6] #201
│        @ ~/.julia/packages/PartitionedArrays/py6uo/src/p_vector.jl:340 [inlined]
│      [7] map(::PartitionedArrays.var"#201#205", ::PartitionedArrays.MPIArray{BitVector, 1}, ::PartitionedArrays.MPIArray{PartitionedArrays.VectorAssemblyCache{Bool}, 1})
│        @ PartitionedArrays ~/.julia/packages/PartitionedArrays/py6uo/src/mpi_array.jl:229
│      [8] assemble_impl!(f::PartitionedArrays.var"#insert#216", vector_partition::PartitionedArrays.MPIArray{BitVector, 1}, cache::PartitionedArrays.MPIArray{PartitionedArrays.VectorAssemblyCache{Bool}, 1}, ::Type{PartitionedArrays.VectorAssemblyCache{Bool}})
│        @ PartitionedArrays ~/.julia/packages/PartitionedArrays/py6uo/src/p_vector.jl:337
│      [9] assemble!
│        @ ~/.julia/packages/PartitionedArrays/py6uo/src/p_vector.jl:333 [inlined]
│     [10] consistent!(a::PartitionedArrays.PVector{BitVector, PartitionedArrays.MPIArray{BitVector, 1}, PartitionedArrays.MPIArray{PartitionedArrays.LocalIndices, 1}, PartitionedArrays.MPIArray{PartitionedArrays.VectorAssemblyCache{Bool}, 1}, Bool})
│        @ PartitionedArrays ~/.julia/packages/PartitionedArrays/py6uo/src/p_vector.jl:460
│     [11] add_ghost_cells(trian::GridapDistributed.DistributedTriangulation{2, 3, PartitionedArrays.MPIArray{Gridap.Geometry.TriangulationView{2, 3, GridapEmbedded.Interfaces.SubFacetTriangulation{2, 3, Float64, Gridap.Geometry.UnstructuredDiscreteModel{3, 3, Float64, Gridap.Geometry.NonOriented}}, Vector{Int64}}, 1}, GridapDistributed.GenericDistributedDiscreteModel{3, 3, PartitionedArrays.MPIArray{Gridap.Geometry.UnstructuredDiscreteModel{3, 3, Float64, Gridap.Geometry.NonOriented}, 1}, Vector{PartitionedArrays.PRange}, Nothing}})
│        @ GridapEmbedded.Distributed ~/.julia/dev/GridapEmbedded/src/Distributed/DistributedSubFacetTriangulations.jl:51
│     [12] Gridap.Geometry.SkeletonTriangulation(portion::GridapDistributed.NoGhost, trian::GridapDistributed.DistributedTriangulation{2, 3, PartitionedArrays.MPIArray{Gridap.Geometry.TriangulationView{2, 3, GridapEmbedded.Interfaces.SubFacetTriangulation{2, 3, Float64, Gridap.Geometry.UnstructuredDiscreteModel{3, 3, Float64, Gridap.Geometry.NonOriented}}, Vector{Int64}}, 1}, GridapDistributed.GenericDistributedDiscreteModel{3, 3, PartitionedArrays.MPIArray{Gridap.Geometry.UnstructuredDiscreteModel{3, 3, Float64, Gridap.Geometry.NonOriented}, 1}, Vector{PartitionedArrays.PRange}, Nothing}}; kwargs::@Kwargs{})
│        @ GridapDistributed ~/.julia/dev/GridapDistributed/src/Geometry.jl:571
│     [13] SkeletonTriangulation
│        @ ~/.julia/dev/GridapDistributed/src/Geometry.jl:566 [inlined]
│     [14] #SkeletonTriangulation#231
│        @ ~/.julia/dev/GridapDistributed/src/Geometry.jl:520 [inlined]
│     [15] SkeletonTriangulation
│        @ ~/.julia/dev/GridapDistributed/src/Geometry.jl:518 [inlined]
│     [16] #Skeleton#157
│        @ ~/.julia/dev/Gridap/src/Geometry/SkeletonTriangulations.jl:48 [inlined]
│     [17] Skeleton(args::GridapDistributed.DistributedTriangulation{2, 3, PartitionedArrays.MPIArray{Gridap.Geometry.TriangulationView{2, 3, GridapEmbedded.Interfaces.SubFacetTriangulation{2, 3, Float64, Gridap.Geometry.UnstructuredDiscreteModel{3, 3, Float64, Gridap.Geometry.NonOriented}}, Vector{Int64}}, 1}, GridapDistributed.GenericDistributedDiscreteModel{3, 3, PartitionedArrays.MPIArray{Gridap.Geometry.UnstructuredDiscreteModel{3, 3, Float64, Gridap.Geometry.NonOriented}, 1}, Vector{PartitionedArrays.PRange}, Nothing}})
│        @ Gridap.Geometry ~/.julia/dev/Gridap/src/Geometry/SkeletonTriangulations.jl:47
│     [18] main(model::GridapDistributed.GenericDistributedDiscreteModel{3, 3, PartitionedArrays.MPIArray{Gridap.Geometry.UnstructuredDiscreteModel{3, 3, Float64, Gridap.Geometry.NonOriented}, 1}, Vector{PartitionedArrays.PRange}, Nothing}, φ::Function, f::Function)
│        @ Main.EmbeddedDifferentiationTestsMPI ~/.julia/dev/GridapTopOpt/test/mpi/EmbeddedTests/EmbeddedDifferentiationTests.jl:120
│     [19] (::Main.EmbeddedDifferentiationTestsMPI.var"#54#56")(distribute::PartitionedArrays.var"#88#89"{MPI.Comm, Bool})
│        @ Main.EmbeddedDifferentiationTestsMPI ~/.julia/dev/GridapTopOpt/test/mpi/EmbeddedTests/EmbeddedDifferentiationTests.jl:278
│     [20] with_mpi(f::Main.EmbeddedDifferentiationTestsMPI.var"#54#56"; comm::MPI.Comm, duplicate_comm::Bool)
│        @ PartitionedArrays ~/.julia/packages/PartitionedArrays/py6uo/src/mpi_array.jl:73
│     [21] with_mpi(f::Function)
│        @ PartitionedArrays ~/.julia/packages/PartitionedArrays/py6uo/src/mpi_array.jl:64
│     [22] top-level scope
│        @ ~/.julia/dev/GridapTopOpt/test/mpi/EmbeddedTests/EmbeddedDifferentiationTests.jl:270
│     [23] include(mod::Module, _path::String)
│        @ Base ./Base.jl:557
│     [24] exec_options(opts::Base.JLOptions)
│        @ Base ./client.jl:323
│     [25] _start()
│        @ Base ./client.jl:531
└ @ PartitionedArrays ~/.julia/packages/PartitionedArrays/py6uo/src/mpi_array.jl:75
  • We don't currently support differentiation of the interface normal, e.g., of φ->∫(f ⋅ n_Γ)dΓ. I believe we should see approximate equalities between dfh and dfh2 in the script below. The analytic derivatives in the unfitted framework need checking as well.
f(x) = VectorValue(x[1],x[2])
df = ∇(φ->∫(f ⋅ n_Γ)dΓ,φh)
dfh = assemble_vector(df,V_φ)
_n(∇φ) = ∇φ/(10^-20+norm(∇φ))
df2 = ∇(φ->∫(f ⋅ (_n ∘ (∇(φ))))dΓ,φh)
dfh2 = assemble_vector(df2,V_φ)
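For reference, the identity behind dfh2 above: the directional derivative of the (regularised) unit-normal map g ↦ g/|g| is the tangential projection (dg − n(n·dg))/|g|. A quick plain-Python sanity check of that formula against finite differences (made-up vectors, no Gridap involved):

```python
from math import sqrt

EPS = 1e-20  # same regularisation as _n in the snippet above

def unit_normal(g):
    nrm = sqrt(sum(gi*gi for gi in g))
    return [gi / (EPS + nrm) for gi in g]

def d_unit_normal(g, dg):
    """Directional derivative of g/|g|: (dg - n*(n.dg))/|g|, i.e. the
    projection of dg onto the tangent plane, scaled by 1/|g|."""
    nrm = sqrt(sum(gi*gi for gi in g))
    n = [gi / nrm for gi in g]
    ndotdg = sum(ni*di for ni, di in zip(n, dg))
    return [(di - ni*ndotdg) / nrm for di, ni in zip(dg, n)]

g, dg, h = [1.0, 2.0], [0.3, -0.1], 1e-7
fd = [(a - b) / h
      for a, b in zip(unit_normal([gi + h*di for gi, di in zip(g, dg)]),
                      unit_normal(g))]
an = d_unit_normal(g, dg)
print(fd, an)  # the two should agree to O(h)
```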
  • Analytic derivatives for the above
  • Analytic differentiation of φ -> ∫(∇(fh)⋅∇(fh))dΓ fails with ERROR: This function belongs to an interface definition and cannot be used. The MWE can be found in Bugs/analytic_gradient_error.jl. In short,
dJ_int_exact2(w) = ∫((-n_Γ ⋅ ∇(g(fh)))*w/(norm ∘ (∇(φh))))dΓ +
                  ∫(-n_S_Λ ⋅ (jump(g(fh)*m_k_Λ) * mean(w) / ∇ˢφ_Λ))dΛ +
                  ∫(-n_S_Σ ⋅ (g(fh)*m_k_Σ * w / ∇ˢφ_Σ))dΣ
dJ_int_exact_vec2 = assemble_vector(dJ_int_exact2,V_φ)

fails for g(fh) = ∇(fh)⋅∇(fh).

  • Differentiation of functionals that include derivatives, e.g., φ -> ∫(∇(fh)⋅∇(fh))dΩin and φ -> ∫(∇(fh)⋅∇(fh))dΓ, will cause an error. This is due to isbitstype being true for ForwardDiff.Dual. This is currently fixed by
function Arrays.return_cache(
  fg::Fields.FieldGradientArray{1,Polynomials.MonomialBasis{D,V}},
  x::AbstractVector{<:Point}) where {D,V}
  xi = testitem(x)
  T = gradient_type(V,xi)
  Polynomials._return_cache(fg,x,T,Val(false))
end

function Arrays.evaluate!(
  cache,
  fg::Fields.FieldGradientArray{1,Polynomials.MonomialBasis{D,V}},
  x::AbstractVector{<:Point}) where {D,V}
  Polynomials._evaluate!(cache,fg,x,Val(false))
end
Related error
  ERROR: MethodError: no method matching pos_neg_data(::Gridap.Arrays.LazyArray{…}, ::Gridap.Arrays.LazyArray{…})
  
  Closest candidates are:
    pos_neg_data(::AbstractArray{<:AbstractArray{<:Number}}, ::Gridap.Arrays.PosNegPartition)
     @ Gridap C:\Users\n10915192\.julia\dev\Gridap\src\Geometry\Triangulations.jl:328
    pos_neg_data(::AbstractArray, ::Gridap.Arrays.PosNegPartition)
     @ Gridap C:\Users\n10915192\.julia\dev\Gridap\src\Geometry\Triangulations.jl:317
  
  Stacktrace:
    [1] lazy_map(k::Reindex{Gridap.Arrays.LazyArray{…}}, ids::Gridap.Arrays.LazyArray{FillArrays.Fill{…}, Int32, 1, Tuple{…}})
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\GridapExtensions.jl:129
    [2] autodiff_array_gradient(a::Gridap.FESpaces.var"#g#65"{}, i_to_x::Gridap.Arrays.LazyArray{…}, j_to_i::Gridap.Arrays.LazyArray{…})
      @ Gridap.Arrays C:\Users\n10915192\.julia\dev\Gridap\src\Arrays\Autodiff.jl:29
    [3] _gradient(f::Function, uh::Gridap.FESpaces.SingleFieldFEFunction{GenericCellField{…}}, fuh::DomainContribution)
      @ Gridap.FESpaces C:\Users\n10915192\.julia\dev\Gridap\src\FESpaces\FEAutodiff.jl:23
    [4] gradient(f::GridapTopOpt.var"#_f#23"{}, uh::Gridap.FESpaces.SingleFieldFEFunction{…})
      @ Gridap.FESpaces C:\Users\n10915192\.julia\dev\Gridap\src\FESpaces\FEAutodiff.jl:5
    [5] gradient(F::Function, uh::Vector{Gridap.FESpaces.SingleFieldFEFunction{GenericCellField{ReferenceDomain}}}, K::Int64)
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\ChainRules.jl:20
    [6] StateParamIntegrandWithMeasure(F::Function, U::TrialFESpace{…}, V_φ::Gridap.FESpaces.UnconstrainedFESpace{…}, U_reg::TrialFESpace{…}, assem_U::Gridap.FESpaces.GenericSparseMatrixAssembler, assem_deriv::Gridap.FESpaces.GenericSparseMatrixAssembler)
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\ChainRules.jl:88
    [7] StateParamIntegrandWithMeasure
      @ c:\Users\n10915192\.julia\dev\GridapTopOpt\src\ChainRules.jl:344 [inlined]
    [8] #326
      @ c:\Users\n10915192\.julia\dev\GridapTopOpt\scripts\Embedded\Examples\2d_thermal_L.jl:92 [inlined]
    [9] iterate
      @ .\generator.jl:47 [inlined]
   [10] _collect(c::Vector{…}, itr::Base.Generator{…}, ::Base.EltypeUnknown, isz::Base.HasShape{…})
      @ Base .\array.jl:854
   [11] collect_similar(cont::Vector{typeof(Vol)}, itr::Base.Generator{Vector{typeof(Vol)}, var"#326#328"{AffineFEStateMap{…}}})
      @ Base .\array.jl:763
   [12] map(f::Function, A::Vector{typeof(Vol)})
      @ Base .\abstractarray.jl:3285
   [13] (::var"#325#327")(::EmbeddedDiscretization{2, Float64})
      @ Main c:\Users\n10915192\.julia\dev\GridapTopOpt\scripts\Embedded\Examples\2d_thermal_L.jl:89
   [14] update_collection!(c::EmbeddedCollection, φh::Gridap.FESpaces.SingleFieldFEFunction{GenericCellField{ReferenceDomain}})
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\Embedded\EmbeddedCollections.jl:50
   [15] EmbeddedCollection(recipes::var"#325#327", bgmodel::UnstructuredDiscreteModel{…}, φ0::Gridap.FESpaces.SingleFieldFEFunction{…})
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\Embedded\EmbeddedCollections.jl:41
   [16] top-level scope
      @ c:\Users\n10915192\.julia\dev\GridapTopOpt\scripts\Embedded\Examples\2d_thermal_L.jl:84
  • In FCM, the forward problem space has the same number of cells as the level-set function space. As a result, the fix in matching_level_set does not work.
  • There are two cases where the evaluation of the jacobian does not work. See Bugs/autodiff_jacobian.jl. This can be recreated in the following way
# Error 1
Γ = DifferentiableTriangulation(EmbeddedBoundary(cutgeo),V_φ)
dΓ = Measure(Γ,2get_order(V_φ))
dj(u,du,v) = jacobian(u->∫(u*v)dΓ,u)
dj(φh, get_trial_fe_basis(V_φ), get_fe_basis(V_φ))

# Error 2
Λ = SkeletonTriangulation(Triangulation(model))
dΛ = Measure(Λ,2*order)
dj(u,du,v) = jacobian(u->∫(jump(∇(u)) ⋅ jump(∇(v)))dΛ,u)
dj(φh, get_trial_fe_basis(V_φ), get_fe_basis(V_φ))

AgFEM issues

  • There are instances where aggregate (AgFEM) fails to aggregate all cells and returns an error.
  • TODO: Make max_iters = 20 a parameter of the AggregateByThreshold struct.
Related error
  ERROR: AssertionError: all_aggregated
  Stacktrace:
    [1] _aggregate_by_threshold_barrier(threshold::Float64, cell_to_unit_cut_meas::Gridap.Arrays.LazyArray{…}, facet_to_inoutcut::Vector{…}, cell_to_inoutcut::Vector{…}, loc::Int64, cell_to_coords::Gridap.Arrays.LazyArray{…}, cell_to_faces::Gridap.Arrays.Table{…}, face_to_cells::Gridap.Arrays.Table{…})
      @ GridapEmbedded.AgFEM C:\Users\n10915192\.julia\dev\GridapEmbedded\src\AgFEM\CellAggregation.jl:178
    [2] _aggregate_by_threshold(threshold::Float64, cut::EmbeddedDiscretization{…}, geo::DiscreteGeometry{…}, loc::Int64, facet_to_inoutcut::Vector{…})
      @ GridapEmbedded.AgFEM C:\Users\n10915192\.julia\dev\GridapEmbedded\src\AgFEM\CellAggregation.jl:123
    [3] aggregate(strategy::AggregateCutCellsByThreshold, cut::EmbeddedDiscretization{…}, geo::DiscreteGeometry{…}, in_or_out::Int64)
      @ GridapEmbedded.AgFEM C:\Users\n10915192\.julia\dev\GridapEmbedded\src\AgFEM\CellAggregation.jl:80
    [4] aggregate(strategy::AggregateCutCellsByThreshold, cut::EmbeddedDiscretization{…}, geo::DiscreteGeometry{…})
      @ GridapEmbedded.AgFEM C:\Users\n10915192\.julia\dev\GridapEmbedded\src\AgFEM\CellAggregation.jl:7
    [5] aggregate(strategy::AggregateCutCellsByThreshold, cut::EmbeddedDiscretization{2, Float64})
      @ GridapEmbedded.AgFEM C:\Users\n10915192\.julia\dev\GridapEmbedded\src\AgFEM\CellAggregation.jl:3
    [6] (::var"#333#335")(cutgeo::EmbeddedDiscretization{2, Float64})
      @ Main c:\Users\n10915192\.julia\dev\GridapTopOpt\scripts\Embedded\Examples\2d_thermal_AgFEM.jl:79
    [7] update_collection!(c::EmbeddedCollection, φh::Gridap.FESpaces.SingleFieldFEFunction{GenericCellField{ReferenceDomain}})
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\Embedded\EmbeddedCollections.jl:50
    [8] evaluate!(pcf::EmbeddedPDEConstrainedFunctionals{…}, φh::Gridap.FESpaces.SingleFieldFEFunction{…}; update_space::Bool)
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\ChainRules.jl:1102
    [9] evaluate!(pcf::EmbeddedPDEConstrainedFunctionals{…}, φh::Gridap.FESpaces.SingleFieldFEFunction{…})
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\ChainRules.jl:1101
   [10] iterate(m::AugmentedLagrangian, state::@NamedTuple{})
      @ GridapTopOpt c:\Users\n10915192\.julia\dev\GridapTopOpt\src\Optimisers\AugmentedLagrangian.jl:201
   [11] top-level scope
      @ c:\Users\n10915192\.julia\dev\GridapTopOpt\scripts\Embedded\Examples\2d_thermal_AgFEM.jl:122
  Some type information was truncated. Use `show(err)` to see complete types.

Other issues/TODOs

  • Testing multi-field AD changes
  • Caching for Gridap.ODEs evolution method has been disabled as we are not correctly updating the stiffness matrix/Jacobian etc. See Bugs/odes_reassemble_stiffness.jl
  • For nonlinear problems, we should allow for ramping-type implementations
  • Implement problems
  • MPI tests
  • Currently, we use a modified version of RungeKutta - MutableRungeKutta. This is so that dt can change between iterations of the optimiser. This could be replaced with a more general ODESolver wrapper instead.
  • Related to gridap/GridapEmbedded.jl#52 (Tagging Isolated Volumes). This will cause a SingularException in the forward problem. We need a way to mark islands when using CutFEM/AgFEM.
  • Investigate use of Jacobian for reinitialisation instead of Picard iterations.
  • CutFEM evolution: the sparsity pattern of the Jacobian changes with the level-set function because of the ∫(γg*h^2*jump(∇(u) ⋅ n_Γg)*jump(∇(v) ⋅ n_Γg))dΓg term, where dΓg is the measure on the ghost skeleton. As such, the first time we compute the operator we use the full background mesh skeleton so that the sparsity pattern of the Jacobian is the worst possible. Subsequent iterations will re-use the sparsity pattern but less integration is done. The trade-off here is memory cost vs. integration.
  • Add more tests
  • Test preprocessed refined mesh with Gmsh
  • In distributed, differentiation through n = get_normal_vector(Γ) breaks with an out of bounds error in face_normal for certain LSFs

zjwegert commented Feb 19, 2025

EDIT: The below has been fixed in gridap/Gridap.jl#1070

Related to PRs gridap/Gridap.jl#1070 & gridap/GridapDistributed.jl#169, there are some existing breaking cases:

  • An empty partition in distributed (e.g., embedded domain lives in only one partition) breaks integration (and likely AD in other places).
  • The following works for some level sets, but fails for others. This problem was thought to have been fixed. (dh is defined over the active part of Ω_IN). EDIT: It seems that this one works for level sets whose IN region includes the corners of the background domain (e.g., test lsf(x) = - cos(4π*x[1])*cos(4π*x[2])-0.4 vs lsf(x) = - cos(4π*(x[1]-0.25))*cos(4π*x[2])-0.4).
Vol(d,φ) = ∫(1)dΩ_IN - ∫(1)dΩ_bg
∇(d -> Vol(d,φh),dh)

Here is a MWE that hopefully covers enough cases:

using Gridap, Gridap.Geometry, Gridap.Adaptivity, Gridap.MultiField, Gridap.TensorValues
using GridapEmbedded, GridapEmbedded.LevelSetCutters
using GridapSolvers, GridapSolvers.BlockSolvers, GridapSolvers.NonlinearSolvers
using GridapGmsh
using GridapTopOpt

using GridapDistributed,PartitionedArrays,GridapPETSc

function main(ranks,mesh_partition,n;x0=(0.1,0.1))
  path = "./results/Testing AD with empty parts/"
  i_am_main(ranks) && mkpath(path)

  domain = (0,1,0,1)
  cell_partition = (n,n)
  base_model = UnstructuredDiscreteModel(CartesianDiscreteModel(ranks,mesh_partition,domain,cell_partition))
  ref_model = refine(base_model, refinement_method = "barycentric")
  model = Adaptivity.get_model(ref_model)
  Ω_bg = Triangulation(model)
  writevtk(model,path*"model")

  # Cut the background model
  reffe_scalar = ReferenceFE(lagrangian,Float64,1)
  V_φ = TestFESpace(model,reffe_scalar)

  lsf(x) = sqrt((x[1]-x0[1])^2+(x[2]-x0[2])^2)-0.2 # Sphere
  φh = interpolate(lsf,V_φ)
  writevtk(get_triangulation(φh),path*"initial_lsf",cellfields=["φh"=>φh])

  # Correct LSF
  _φ = get_free_dof_values(φh)
  map(local_views(_φ)) do φ
    idx = findall(isapprox(0.0;atol=eps()),φ)
    if !isempty(idx)
      i_am_main(ranks) && println("    Correcting level values at $(length(idx)) nodes")
    end
    φ[idx] .+= 100*eps(eltype(φ))
  end
  consistent!(_φ) |> wait

  # Cut
  order = 1
  degree = 2*(order+1)

  geo = DiscreteGeometry(φh,model)
  cutgeo = cut(model,geo)
  cutgeo_facets = cut_facets(model,geo)
  # IN + CUT
  Ω_IN = DifferentiableTriangulation(Triangulation(cutgeo,PHYSICAL),V_φ)
  Ω_act_IN = Triangulation(cutgeo,ACTIVE)
  # OUT + CUT
  Ω_OUT = DifferentiableTriangulation(Triangulation(cutgeo,PHYSICAL_OUT),V_φ)
  Ω_act_OUT = Triangulation(cutgeo,ACTIVE_OUT)
  # INTERFACE
  Γ  = DifferentiableTriangulation(EmbeddedBoundary(cutgeo),V_φ)
  # GHOST SKEL
  Γg = GhostSkeleton(cutgeo)
  n_Γg = get_normal_vector(Γg)
  # OUT SKEL
  Γi_OUT = SkeletonTriangulation(cutgeo_facets,ACTIVE_OUT)
  n_Γi_OUT  = get_normal_vector(Γi_OUT)
  # IN SKEL
  Γi_IN = SkeletonTriangulation(cutgeo_facets,ACTIVE)
  n_Γi_IN  = get_normal_vector(Γi_IN)
  # MEAS
  dΩ_bg = Measure(Ω_bg,degree)
  dΩ_IN = Measure(Ω_IN,degree)
  dΩ_OUT = Measure(Ω_OUT,degree)
  dΓg = Measure(Γg,degree)
  dΓ = Measure(Γ,degree)
  dΓi_IN = Measure(Γi_IN,degree)
  dΓi_OUT = Measure(Γi_OUT,degree)

  writevtk(Γ,path*"Gamma")
  writevtk(Γi_OUT,path*"Gammai_OUT")
  writevtk(Γi_IN,path*"Gammai_IN")
  writevtk(Γg,path*"Gammag")

  # Some other spaces
  reffe_u = ReferenceFE(lagrangian,VectorValue{2,Float64},order,space=:P)
  reffe_p = ReferenceFE(lagrangian,Float64,order-1,space=:P)
  reffe_d = ReferenceFE(lagrangian,VectorValue{2,Float64},order)

  V = TestFESpace(Ω_act_OUT,reffe_u,conformity=:H1,dirichlet_tags=[5,6])
  Q = TestFESpace(Ω_act_OUT,reffe_p,conformity=:L2)
  VQ = MultiFieldFESpace([V,Q])
  S = TestFESpace(Ω_act_IN,reffe_d,conformity=:H1,dirichlet_tags=[5,6,7,8])

  U = TrialFESpace(V,[VectorValue(0.0,0.1),VectorValue(0.1,0.1)])
  P = TrialFESpace(Q)
  UP = MultiFieldFESpace([U,P])
  D = TrialFESpace(S,[VectorValue(0.0,0.1),VectorValue(0.1,0.1),
    VectorValue(0.1,0.0),VectorValue(0.0,0.0)])

  xh = interpolate([x->VectorValue(x[1],x[2]),x->x[1]],UP)
  uh,ph = xh
  dh = interpolate(x->VectorValue(x[2],x[1]),D)

  writevtk(Ω_IN,path*"Omega_IN",cellfields=["uh"=>uh,"ph"=>ph,"dh"=>dh])
  writevtk(Ω_OUT,path*"Omega_OUT",cellfields=["uh"=>uh,"ph"=>ph,"dh"=>dh])
  writevtk(Ω_act_OUT,path*"Omega_act_OUT")
  writevtk(Ω_act_IN,path*"Omega_act_IN")

  σf_n(u,p,n) = ∇(u) ⋅ n - p*n
  a_Ω(u,v) = ∇(u) ⊙ ∇(v)
  b_Ω(v,p) = -p*(∇⋅v)
  a_Γ(u,v,n) = - (n ⋅ ∇(u)) ⋅ v - (n ⋅ ∇(v)) ⋅ u + u ⋅ v
  b_Γ(v,p,n) = (n ⋅ v)*p
  ju(u,v) = jump(n_Γg ⋅ ∇(u)) ⋅ jump(n_Γg ⋅ ∇(v))
  jp(p,q) = jump(p) * jump(q)

  σ(ε) = tr(ε)*one(ε) + 2*ε
  a_s_Ω(d,s) = ε(s) ⊙ σ(ε(d))
  j_s_k(d,s) = jump(n_Γg ⋅ ∇(s)) ⋅ jump(n_Γg ⋅ ∇(d))

  #### AD through fluid weak form [PASSES]
  function a_fluid((u,p),(v,q),φ)
    n_Γ = -get_normal_vector(Γ)
    return ∫(a_Ω(u,v) + b_Ω(v,p) + b_Ω(u,q))dΩ_OUT +
      ∫(a_Γ(u,v,n_Γ) + b_Γ(v,p,n_Γ) + b_Γ(u,q,n_Γ))dΓ +
      ∫(ju(u,v))dΓg - ∫(jp(p,q))dΓi_OUT
  end
  l_fluid((),(v,q),φ) = ∫(0q)dΩ_OUT
  R_fluid((u,p),(v,q),φ) = a_fluid((u,p),(v,q),φ) - l_fluid((),(v,q),φ)

  # sum(R_fluid(xh,xh,φh))
  # ∇(φ -> R_fluid(xh,xh,φ),φh)
  # ∇(φ -> R_fluid(zero(UP),zero(UP),φ),φh)
  # ∇(φ -> R_fluid(zero(UP),zero(VQ),φ),φh)
  # ∇(φ -> R_fluid(zero(VQ),zero(VQ),φ),φh)

  #### AD through elast weak form [FAILS]
  function a_solid(((u,p),),d,s,φ)
    return ∫(a_s_Ω(d,s))dΩ_IN +
      ∫(j_s_k(d,s))dΓg
  end
  function l_solid(((u,p),),s,φ)
    n = -get_normal_vector(Γ)
    return ∫(-σf_n(u,p,n) ⋅ s)dΓ
  end
  R_solid(((u,p),),d,s,φ) = a_solid(((u,p),),d,s,φ) - l_solid(((u,p),),s,φ)

  sum(∫(dh)dΩ_IN) # This works
  sum(∫(∇(dh))dΩ_IN) # This fails

  ## Untested due to above
  sum(R_solid((xh,),dh,dh,φh))
  ∇(φ -> R_solid((zero(UP),),dh,dh,φ),φh)
  ∇(φ -> R_solid((xh,),zero(D),zero(D),φ),φh)
  ∇(φ -> R_solid((xh,),zero(S),zero(S),φ),φh)
  ∇(φ -> R_solid((xh,),zero(D),zero(S),φ),φh)

  #### AD through objective/constraint functionals [FAILS]
  J_comp(d,φ) = ∫(ε(d) ⊙ σ(ε(d)))dΩ_IN
  Vol(d,φ) = ∫(1.0)dΩ_IN - ∫(0.1)dΩ_bg

  sum(J_comp(dh,φh)) # Fails due to above issue
  ∇(φ -> J_comp(dh,φ),φh) # Untested
  ∇(d -> J_comp(d,φh),dh) # Untested
  sum(Vol(dh,φh))
  ∇(φ -> Vol(dh,φ),φh) # Passes
  ∇(d -> Vol(d,φh),dh) # Fails for different LSFs, see serial version below for easier debug.
                       #  This one seems unrelated to empty parts issue.
end

with_debug() do distribute
  mesh_partition = (2,2)
  ranks = distribute(LinearIndices((prod(mesh_partition),)))
  # main(ranks,mesh_partition,30;x0=(0.5,0.5))
  main(ranks,mesh_partition,30;x0=(0.1,0.1))
end

################## Last error in serial for easier debug ##################

function main_serial_for_last_error(n;x0=(0.1,0.1))
  domain = (0,1,0,1)
  cell_partition = (n,n)
  base_model = UnstructuredDiscreteModel(CartesianDiscreteModel(domain,cell_partition))
  ref_model = refine(base_model, refinement_method = "barycentric")
  model = Adaptivity.get_model(ref_model)
  Ω_bg = Triangulation(model)

  # Cut the background model
  reffe_scalar = ReferenceFE(lagrangian,Float64,1)
  V_φ = TestFESpace(model,reffe_scalar)

  lsf(x) = sqrt((x[1]-x0[1])^2+(x[2]-x0[2])^2)-0.2 # FAILS
  # lsf(x) = - cos(4π*x[1])*cos(4π*x[2])-0.4 # WORKS
  φh = interpolate(lsf,V_φ)

  # Cut
  order = 1
  degree = 2*(order+1)

  geo = DiscreteGeometry(φh,model)
  cutgeo = cut(model,geo)
  # IN + CUT
  Ω_IN = DifferentiableTriangulation(Triangulation(cutgeo,PHYSICAL),V_φ)
  Ω_act_IN = Triangulation(cutgeo,ACTIVE)
  # MEAS
  dΩ_bg = Measure(Ω_bg,degree)
  dΩ_IN = Measure(Ω_IN,degree)

  # Some other spaces
  reffe_d = ReferenceFE(lagrangian,VectorValue{2,Float64},order)
  S = TestFESpace(Ω_act_IN,reffe_d,conformity=:H1)
  dh = zero(S)
  Vol(d,φ) = ∫(1.0)dΩ_IN - ∫(0.1)dΩ_bg

  ∇(d -> Vol(d,φh),dh)
end

main_serial_for_last_error(30;x0=(0.5,0.5))
