Add capability to handle node variables #67
Conversation
The current implementation supports both reinterpolation and no reinterpolation for the subcell limiting coefficient. That's the way it is handled for the solution itself. I already prepared a PR to
IMHO, no. It's like an AMR marker (should this cell be refined or not), which is not a continuous field that happens to be represented by a piecewise-constant approximation. Thus reinterpolation does not give a meaningful representation of reality but can actually be harmful. Having said that, I am not sure how easy/annoying it is to always create two files, once with reinterpolation (for the solution itself) and once without (for the node variables). It might thus be nice to have the ability for reinterpolation, just so you can have only one VTK file for a quick'n'dirty visualization. What we really need/want, though, is a ParaView plugin that can read our HDF5 files natively without us having to convert to VTK first. But that's for another day, I guess :-)
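To make the potential harm concrete, here is a small standalone sketch in Python (not Trixi code; the node positions and coefficient values are made up for illustration): interpolating a piecewise-constant 0/1 limiting coefficient with a polynomial yields values outside the admissible range [0, 1], i.e. limiter values that never existed in the simulation.

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange polynomial through the points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# A limiting coefficient that is 0 on the left two nodes and 1 on the right two,
# i.e. a piecewise-constant 0/1 indicator sampled on four nodes in [-1, 1].
nodes = [-1.0, -1.0 / 3.0, 1.0 / 3.0, 1.0]
coeffs = [0.0, 0.0, 1.0, 1.0]

# Reinterpolating to a point between the last two nodes overshoots the range [0, 1].
value = lagrange_interpolate(nodes, coeffs, 0.7)
print(value)  # ≈ 1.2516, a coefficient larger than 1 that never existed
```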
I agree with @sloede
I thought so too. But I implemented the reinterpolation of the coefficients before it was possible to not reinterpolate the solution and just kept the code after the update.
I'm not sure that I understand your thoughts correctly. I don't think we need two different files here either way. It should be enough to just delete the implementation of the parameter.

EDIT: Oh, I now see the problem. I didn't realize that it's a problem to save reinterpolated and non-reinterpolated node-level data in one file.
😌
No, I don't think that's a good idea. In some situations, it might still be advantageous to reinterpolate the solution variables to get a nice and smooth representation of the (part of the field that is a pure) DG solution.
After understanding the problem correctly 😅, I will keep the current implementation for now and add a warning in that case, if that's okay with you.
I also think that it is not very "meaningful" to reinterpolate the node-wise data.
I think that @sloede is correct. It would be annoying, but you could create two separate files, where the solution is reinterpolated but the node-level data is not:

```julia
trixi2vtk(joinpath("out", "solution_*.h5"), output_directory="out")
trixi2vtk(joinpath("out", "solution_*_celldata.h5"), output_directory="out", reinterpolate=false)
```

The second call might need to use
So, to avoid the annoying part with two files, I just added a comment for that case.

For the testing, I added two new tests. After running
Yes. Since fewer macOS runners are available to us on the free plan, we only run some basic tests on macOS and assume that everything else "just works" as well. Originally, we tested everything everywhere, but this just wasn't feasible anymore due to long CI runtimes.
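For context, such a reduced macOS test matrix is typically configured in the CI workflow file; a hypothetical GitHub Actions sketch (the job name, test groups, and the `TRIXI2VTK_TEST` environment variable are illustrative assumptions, not the repository's actual setup):

```yaml
# Hypothetical workflow fragment: full test groups on Linux, basic subset on macOS.
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest]
        test_group: [2d, 3d]
        include:
          # Fewer macOS runners are available, so only run the basic group there.
          - os: macos-latest
            test_group: basic
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
      - run: julia --project -e 'using Pkg; Pkg.test()'
        env:
          TRIXI2VTK_TEST: ${{ matrix.test_group }}
```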
This is something you need to discuss with your advisor: whether it is worth investigating right now (maybe bring it up on Friday?). IMHO it is at least somewhat unfortunate that floating point truncation errors seem to have such a significant impact already. It indicates to me that either there's something wrong with the method itself (unlikely), the implementation (less unlikely, but still), or that the setups are not robust enough (my current favorite). By robust I mean that maybe you are not running stable simulations but rather simulations that are just not crashing. I wonder if choosing slightly different parameters would allow you to alleviate the issue (thus confirming my suspicion) or whether it is really by design.
Yes. But please leave a short comment explaining the reason for this and x-ref this PR (#67)
Yes, I agree. I took a quick look into it and made a list of failing tests. I will add this here but will also present it on Friday:
After that (but of course this must be investigated further), I would assume you are right with:
After stabilizing some elixirs in trixi-framework/Trixi.jl#2007 and a new Trixi release, I'm about to update the reference files in the specific repository (trixi-framework/Trixi2Vtk_reference_files#4) and then, hopefully, we can merge this PR 😅 |
LGTM!
@bennibolm do you know why the tests do not seem to run?
@sloede I think they are running, but are somehow displayed very strangely. When I click on the red X at the last commit, the CI tests ran, and there is an old (failing) version and a new version of the Coveralls tests. I will try to make them run and ping you again.
@sloede I adapted the test setup for macOS. Is it possible that one needs to adapt the required checks in GitHub that must pass in a PR? That could be the reason why they are not shown correctly but actually run properly.
I don't understand why the tests don't show up here, but as you pointed out correctly, they seem to have passed for the latest commit. Thus I'll merge this now.
Thanks
This PR adds the functionality to handle node-level data besides the solution itself.
This is required to visualize subcell limiting coefficients, as used in #1476 and #1502.
It comes together with the PR to Trixi, which saves the coefficients in the `.h5` files.

TODO:
Question: