
Comparing demo outputs during docs build #113

Open

ddundo opened this issue Feb 17, 2025 · 2 comments
Labels: testing (Extensions and improvements to the testing infrastructure)

Comments

@ddundo (Member) commented Feb 17, 2025

Issues like mesh-adaptation/movement#155 make me think that it would be good to have some functionality to compare demo outputs every time we run demos in the docs workflow. I don't see a super straightforward way to do it though...

Can't do this in the test suite since we shorten demos during testing, which changes outputs. Edit: perhaps we could run full demos once a week (via the scheduled trigger)?
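A minimal sketch of what such a comparison could look like, assuming demo outputs end up in a known directory and reference checksums are stored in the repository (the paths and the reference file name below are hypothetical, not something that exists in the repo):

```python
# Hypothetical sketch: compare checksums of demo outputs against stored references.
# The output directory and reference file are made-up names for illustration.
import hashlib
import json
from pathlib import Path

DEMO_OUTPUT_DIR = Path("demos/outputs")  # assumed location of demo outputs
REFERENCE_FILE = Path("demos/reference_checksums.json")  # assumed reference store


def checksum(path: Path) -> str:
    """Return the SHA-256 hex digest of a file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def compare_outputs() -> list[str]:
    """Return the names of demo outputs whose checksums differ from the references."""
    references = json.loads(REFERENCE_FILE.read_text())
    changed = []
    for name, expected in references.items():
        if checksum(DEMO_OUTPUT_DIR / name) != expected:
            changed.append(name)
    return changed


if __name__ == "__main__":
    changed = compare_outputs()
    if changed:
        raise SystemExit(f"Demo outputs changed: {', '.join(changed)}")
```

A step like this could run at the end of the docs workflow (or in a weekly scheduled run of the full demos) and fail, or just report, whenever any output file no longer matches its stored reference.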

ddundo added the testing label on Feb 17, 2025
@jwallwork23 (Member) commented
> Issues like mesh-adaptation/movement#155 make me think that it would be good to have some functionality to compare demo outputs every time we run demos in the docs workflow. I don't see a super straightforward way to do it though...
>
> Can't do this in the test suite since we shorten demos during testing, which changes outputs. Edit: perhaps we could run full demos once a week (via the scheduled trigger)?

What kind of checks were you thinking of? Personally, I'd prefer to avoid trying to get bitwise reproducibility and would be more inclined towards checking that outputs are approximately as expected; otherwise such checks will fail every time we do anything that changes answers (even if the changes are very small). In the world of mesh adaptation, small changes can give different meshes, as I'm sure you've experienced.

In the case of Movement, I suppose we could have checks that the number of iterations hasn't got out of hand. In Animate we could have checks that the number of elements isn't much smaller/larger than expected.
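For instance, such "approximately as expected" checks might look like the sketch below. The expected values, tolerances, and the way the counts are obtained are all assumptions for illustration, not anything agreed in this thread:

```python
# Hypothetical sanity checks on recorded quantities, rather than bitwise comparison.
# Expected values and tolerances are placeholders.
def check_iteration_count(num_iterations: int, expected: int = 50, rel_tol: float = 0.5):
    """Fail only if the mover needed far more iterations than usual."""
    assert num_iterations <= expected * (1 + rel_tol), (
        f"Iteration count {num_iterations} exceeds {expected} by more than {rel_tol:.0%}."
    )


def check_element_count(num_elements: int, expected: int = 4000, rel_tol: float = 0.2):
    """Fail if the adapted mesh is much smaller or larger than expected."""
    lower, upper = expected * (1 - rel_tol), expected * (1 + rel_tol)
    assert lower <= num_elements <= upper, (
        f"Element count {num_elements} outside [{lower:.0f}, {upper:.0f}]."
    )
```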

@ddundo (Member, Author) commented Feb 18, 2025

Your suggestions sound great!

What I was primarily thinking about was indeed bitwise reproducibility, motivated by issues like mesh-adaptation/animate#152 and mesh-adaptation/animate#150. As I described there, these are minor changes in results, which makes them easy to miss, so I would only notice that the outputs are different several months later, when it has become very hard to trace the cause. It's not important in these cases, but it would be nice to at least get a warning that something has changed.
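If a hard failure feels too strict, the same kind of comparison could simply emit a warning so the change is at least visible in the build log. Again, only a sketch; the function name and arguments are hypothetical:

```python
# Hypothetical: warn rather than fail when a demo output no longer matches its reference.
import warnings


def warn_if_changed(name: str, actual_checksum: str, expected_checksum: str):
    """Emit a visible warning in the docs build log instead of failing the build."""
    if actual_checksum != expected_checksum:
        warnings.warn(
            f"Demo output {name!r} differs from its stored reference checksum; "
            "results may have changed.",
            stacklevel=2,
        )
```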

And in the meantime I've got a few ideas for how to do this easily :)
