At the moment the test framework uses only a relative error with ndiff to assess numerical differences. Using only relative differences with very small numbers causes problems. If a file contains a number very close to zero, e.g. 1e-16, another run of the same problem on a different machine or OS might produce 2e-16. Both are effectively zero, so no difference should be flagged. However, ndiff calculates the relative error as abs(x-y)/min(abs(x),abs(y)), i.e. 1e-16/1e-16 = 1.0, which would be flagged as different. Ideally, both numbers should first be checked against a zero tolerance, and relative (or absolute or other) error checking should only proceed if the numbers are not both below that tolerance. ndiff does not appear to have this option. Possible fixes include:

- adding this functionality to ndiff;
- running ndiff twice, first with an absolute error tolerance set to the zero tolerance and then with a relative error tolerance;
- switching to numdiff (http://www.nongnu.org/numdiff/), which can run with both an absolute and a relative error tolerance.

numdiff also seems more current than ndiff (last changed 2017 vs 2004), so it may be worth switching to numdiff regardless. Iron uses 5*epsilon as a zero tolerance, which for double precision is 5*2.22e-16 = 1.11e-15. An absolute tolerance option needs to be added to the testing framework.
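The proposed check can be sketched as follows. This is a minimal illustration, not the framework's actual code: `values_differ`, the tolerances, and the zero-is-not-zero fallback are all assumptions for the example, with the relative error computed the same way ndiff computes it.

```python
# 5*epsilon zero tolerance as used by Iron (double precision), ~1.11e-15.
ZERO_TOL = 5 * 2.220446049250313e-16

def values_differ(x, y, rel_tol=1e-7, zero_tol=ZERO_TOL):
    # Step 1: if both values are below the zero tolerance, treat them as
    # equal (both are effectively zero), regardless of relative error.
    if abs(x) < zero_tol and abs(y) < zero_tol:
        return False
    # Step 2: otherwise fall back to ndiff-style relative error,
    # abs(x-y)/min(abs(x),abs(y)). Guard the division: if one value is
    # exactly zero but the other is not effectively zero, flag a difference.
    denom = min(abs(x), abs(y))
    if denom == 0.0:
        return True
    return abs(x - y) / denom > rel_tol

# The motivating case from the issue: 1e-16 vs 2e-16 has a relative
# error of 1.0, but both values fall below the zero tolerance.
print(values_differ(1e-16, 2e-16))  # False
print(values_differ(1.0, 2.0))      # True
```

Note that without step 1, the first call would return True, since the relative error between 1e-16 and 2e-16 is exactly 1.0.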
Potentially fixed with #9, which implements an absolute error check before the relative check. This is not perfect and we may have to add functionality to ndiff or numdiff for a complete fix. Leaving this issue open in case we have future problems with this.