Run examples in float32 precision #256

Open · pnkraemer opened this issue Nov 3, 2022 · 2 comments
Labels: documentation (Improvements or additions to documentation)

pnkraemer (Owner) commented Nov 3, 2022

Run the examples in float32 precision, not in float64. This will also expose potential issues in implementations that are meant to be numerically stable.
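
A minimal sketch of what "run in float32" means here, assuming the examples are JAX-based (the jnp references later in this thread suggest as much): JAX arrays default to float32, and enabling x64 mode is what switches the default to float64, so the examples would simply run with that flag left off.

```python
import jax
import jax.numpy as jnp

# JAX defaults to float32 unless x64 mode is enabled. Running the examples
# with the flag off (the default) exercises the float32 code paths.
jax.config.update("jax_enable_x64", False)
print(jnp.ones(3).dtype)  # float32

# Enabling x64 mode switches newly created arrays to float64, which can
# hide precision issues that float32 would expose.
jax.config.update("jax_enable_x64", True)
print(jnp.ones(3).dtype)  # float64
```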

pnkraemer (Owner, Author) commented

At least the non-advanced examples, i.e., those that do not solve complicated problems.

pnkraemer added the documentation label on Feb 28, 2023
pnkraemer (Owner, Author) commented Apr 3, 2023

For future reference:

  • Investigate all uses of jnp.sqrt(jnp.dot(...)) (see the first sketch after this list)
  • Investigate all uses of ** operations, e.g. l_obs**2 (covered by the same sketch)
  • Change _adaptive.AdaptiveIVPSolver.numerical_zero to a value that depends on the floating-point precision (see the second sketch)
  • Resolve "Minstep and maxstep in step-size adaptation" #168 (which improves the debuggability of the "close-to-numerical-failure" cases)
  • Introduce a success=True/False flag in the IVP solution (which also improves the debuggability of the "close-to-numerical-failure" cases; see the third sketch)
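
A minimal sketch of why jnp.sqrt(jnp.dot(...)) deserves scrutiny in float32 (the array values below are invented for illustration): the intermediate squares can overflow or underflow even when the final result is perfectly representable, and factoring out the largest magnitude avoids this.

```python
import jax.numpy as jnp

x = jnp.array([1e20, 1e20], dtype=jnp.float32)  # representable in float32

# Naive Euclidean norm: the intermediate dot product is 2e40, which
# overflows float32 (max ~3.4e38), so the result is inf.
naive = jnp.sqrt(jnp.dot(x, x))

# Rescaled norm: divide by the largest magnitude first and multiply it
# back afterwards. The intermediate dot product is then O(1), so nothing
# overflows or underflows.
scale = jnp.max(jnp.abs(x))
stable = scale * jnp.sqrt(jnp.dot(x / scale, x / scale))

print(naive)   # inf
print(stable)  # ~1.4142135e+20
```

The same reasoning covers expressions like l_obs**2: the square can overflow (or underflow to zero) in float32 even when l_obs itself is well within range.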

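A precision-dependent numerical_zero could be derived from the machine epsilon of the dtype in use. A sketch, where the function shape and the factor of 10 are assumptions (only jnp.finfo is real JAX API):

```python
import jax.numpy as jnp

def numerical_zero(dtype) -> float:
    # Hypothetical replacement for a hard-coded constant: tie the
    # "effectively zero" threshold to the machine epsilon of the dtype,
    # so float32 runs get a correspondingly looser tolerance than float64.
    return 10.0 * float(jnp.finfo(dtype).eps)

print(numerical_zero(jnp.float32))  # ~1.19e-06
print(numerical_zero(jnp.float64))  # ~2.22e-15
```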
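
And the proposed success flag, sketched with a hypothetical solution container (the type and field names are illustrative, not the library's actual API):

```python
from typing import NamedTuple

import jax.numpy as jnp

class Solution(NamedTuple):
    # Hypothetical IVP-solution container with an explicit success flag.
    t: jnp.ndarray
    u: jnp.ndarray
    success: bool  # False in the "close-to-numerical-failure" cases

def unwrap(solution: Solution) -> Solution:
    # Callers and tests can fail loudly instead of silently consuming NaNs.
    if not solution.success:
        raise RuntimeError("IVP solve did not succeed; inspect the step sizes.")
    return solution
```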