Issues like mesh-adaptation/movement#155 make me think that it would be good to have some functionality to compare demo outputs every time we run demos in the docs workflow. I don't see a super straightforward way to do it though...
Can't do this in the test suite since we shorten demos during testing, which changes outputs. Edit: perhaps we could run full demos once a week (via the scheduled trigger)?
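Something along these lines might work as a final step in the docs workflow: a small script that diffs whatever the demos produced against reference copies stored in the repo. This is only a rough sketch; the directory names (`demos/outputs`, `demos/expected`) are placeholders, not the actual layout.

```python
# Hypothetical sketch: compare the files produced by the demos against
# reference copies kept in the repo, and fail if any differ.
# The directory names below are placeholders, not the actual layout.
import filecmp
import sys
from pathlib import Path

DEMO_OUTPUT_DIR = Path("demos/outputs")   # assumed: where the demos write their outputs
REFERENCE_DIR = Path("demos/expected")    # assumed: reference copies committed to the repo


def main():
    changed = []
    for ref in sorted(REFERENCE_DIR.glob("*")):
        if not ref.is_file():
            continue
        out = DEMO_OUTPUT_DIR / ref.name
        if not out.exists() or not filecmp.cmp(ref, out, shallow=False):
            changed.append(ref.name)
    if changed:
        print("Demo outputs differ from the stored references:")
        for name in changed:
            print(f"  - {name}")
        sys.exit(1)  # fail the workflow step so the change gets noticed
    print("All demo outputs match the stored references.")


if __name__ == "__main__":
    main()
```

The stored references would then need updating deliberately whenever a demo is intentionally changed, which is exactly the point at which we'd want to notice the change.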
What kind of checks were you thinking of? Personally, I'd prefer to avoid trying to get bit-wise reproducibility and would be more inclined towards checking that outputs are approximately as expected; otherwise such checks will fail every time we do anything that changes answers, even if the changes are very small. In the world of mesh adaptation, small changes can give different meshes, as I'm sure you've experienced.
In the case of Movement, I suppose we could have checks that the number of iterations hasn't got out of hand. In Animate we could have checks that the number of elements isn't much smaller/larger than expected.
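Rough sketches of what such checks could look like; the function names, quantities, and tolerances here are placeholders that would need choosing per demo:

```python
# Hypothetical sketch of approximate checks: exact values are allowed to
# drift, but a large deviation from the expected ballpark fails the check.
# The thresholds are placeholders, not agreed values.


def check_mover_converged(num_iterations, max_iterations=100):
    """Movement-style check: the mover shouldn't need an unreasonable
    number of iterations."""
    assert num_iterations <= max_iterations, (
        f"Mesh movement took {num_iterations} iterations "
        f"(expected at most {max_iterations})."
    )


def check_num_elements(num_elements, expected, rtol=0.2):
    """Animate-style check: the adapted mesh shouldn't have many more or
    fewer elements than expected (here within 20% relative tolerance)."""
    assert abs(num_elements - expected) <= rtol * expected, (
        f"Adapted mesh has {num_elements} elements, expected roughly "
        f"{expected} (within ±{rtol:.0%})."
    )
```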
What I was primarily thinking about was indeed bitwise reproducibility, motivated by issues like mesh-adaptation/animate#152 and mesh-adaptation/animate#150. As I described there, those were minor changes in results, which made them easy to miss, so I only noticed that the outputs were different several months later, by which point it had become very hard to trace the cause. The changes weren't important in those cases, but it would be nice to at least get a warning that something has changed.
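One lightweight way to get such a warning would be to store checksums of the demo outputs and warn whenever they change. A minimal sketch, assuming the outputs and a checksum file live at known paths (both paths are placeholders):

```python
# Hypothetical sketch: hash each demo output and compare against checksums
# stored in the repo, emitting a warning (rather than an error) on mismatch.
# Both paths below are placeholders.
import hashlib
import json
import warnings
from pathlib import Path

CHECKSUM_FILE = Path("demos/checksums.json")   # assumed location of stored checksums
DEMO_OUTPUT_DIR = Path("demos/outputs")        # assumed location of demo outputs


def sha256sum(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()


def check_checksums():
    expected = json.loads(CHECKSUM_FILE.read_text())
    for name, ref_digest in expected.items():
        output = DEMO_OUTPUT_DIR / name
        if not output.exists() or sha256sum(output) != ref_digest:
            warnings.warn(
                f"Output of {name} has changed since its checksum was last recorded."
            )
```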
And in the meantime I got a few ideas how to do this easily :)