
Change default QR tolerance to match SPQR #557

Merged · 1 commit · Aug 22, 2024

Conversation

corbett5
Contributor

codecov bot commented Aug 13, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 84.12%. Comparing base (55976a6) to head (a5ffa1a).
Report is 9 commits behind head on main.

Additional details and impacted files
```
@@            Coverage Diff             @@
##             main     #557      +/-   ##
==========================================
+ Coverage   84.09%   84.12%   +0.03%     
==========================================
  Files          12       12              
  Lines        9102     9098       -4     
==========================================
  Hits         7654     7654              
+ Misses       1448     1444       -4     
```


@ViralBShah
Member

ViralBShah commented Aug 13, 2024

@DrTimothyAldenDavis Should we follow the SuiteSparse lead here? Just wanted your eyes once more: using eps() as the tolerance for Float32 matrices, as opposed to eps(Float32)?

@DrTimothyAldenDavis

SPQR doesn't support float. It only supports double and double-complex. There are only 4 explicit instantiations of the spqr_tol templatized function: (double or double-complex) and (int32 or int64).

I'm not sure what the tol should be for a float or float-complex case. I'm guessing it should be based on the FLT_EPSILON, not DBL_EPSILON.

@corbett5
Contributor Author

corbett5 commented Aug 13, 2024

Ahh, silly me; I didn't realize floats weren't supported. Well, I do have a use case where the default tolerance for a Float32 matrix (as computed in Julia) is ridiculously large. My intuition is that the default tolerance should be based on DBL_EPSILON, since it is more conservative and accurately reflects the floating-point error incurred during the decomposition, ignoring floating-point errors that may or may not have occurred in computing the original matrix entries themselves.

Edit: Another reason this line should be changed (although not necessarily in the way I suggest) is that it fails when eltype(A) is an integer:

```julia
julia> qr(sparse([1 1; 1 1]))
ERROR: MethodError: no method matching eps(::Type{Int64})
...
Stacktrace:
 [1] _default_tol(A::SparseMatrixCSC{Int64, Int64})
...
```

while passing an explicit tolerance works:

```julia
julia> qr(sparse([1 1; 1 1]); tol=0)
SparseArrays.SPQR.QRSparse{Float64, Int64}
...
```
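For context on the magnitudes involved, eps(Float32) is roughly nine orders of magnitude larger than eps(Float64), which is why a default tolerance scaled by eps(eltype(A)) can come out so large for Float32 input:

```julia
# The two machine epsilons differ by ~9 orders of magnitude, so a tolerance
# scaled by eps(eltype(A)) is far looser for Float32 than for Float64 input.
eps(Float32)  # 1.1920929f-7
eps(Float64)  # 2.220446049250313e-16
```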

@ViralBShah
Member

I am inclined to go with @DrTimothyAldenDavis's suggestion as the default (you can always pass a different tolerance explicitly). It is unclear to me what eps(Int) should do.

@corbett5
Contributor Author

That's fine if you want to leave it as eps(Float32) for the Float32 case, although the last point I will make is that the default tolerance as calculated in SPQR is clearly based on the floating-point error that accumulates during SPQR (it scales with the size and the maximum column norm of the matrix). If SPQR only ever uses Float64, it seems to me that we should only ever use eps(Float64) to calculate the default tolerance.

@ViralBShah Since Julia simply converts an integer matrix to a Float64 matrix before passing it to SPQR, it seems to me that we should use eps(Float64) when the input eltype is an integer.

@ViralBShah
Member

Ah, if we are solving in Float64 anyway, we might as well use eps(Float64). I suppose your point about eps on integers also makes sense. Let's do both. In that case we should be able to merge this, right?

@corbett5
Contributor Author

Yes, if we simply want to use eps(Float64) in all cases, then this is good to go.
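A sketch of what the agreed-upon change amounts to. This is hypothetical code, not the merged commit: it assumes a `_default_tol` helper of the form seen in the stack trace earlier, and assumes the SPQR-style default of 20 · (m + n) · eps · (max column 2-norm); the merged commit is the authoritative version.

```julia
using SparseArrays, LinearAlgebra

# Hypothetical sketch (not the merged code): always scale the default
# tolerance by eps(Float64), since SPQR factorizes in double precision.
# This also avoids the `eps(::Type{Int64})` MethodError for integer eltypes.
function _default_tol(A::SparseMatrixCSC)
    m, n = size(A)
    maxcolnorm = n == 0 ? 0.0 : maximum(norm(view(A, :, j)) for j in 1:n)
    return 20 * (m + n) * eps(Float64) * maxcolnorm
end
```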
