
improve performance with sparse matrices #101

Merged — 12 commits merged into main from hr/sparse on Jul 18, 2024
Conversation

@ranocha (Collaborator) commented Jul 17, 2024

Closes #98

@ranocha ranocha added the performance We are greedy label Jul 17, 2024
@codecov-commenter commented Jul 17, 2024

Codecov Report

Attention: Patch coverage is 96.85864% with 6 lines in your changes missing coverage. Please review.

Project coverage is 98.09%. Comparing base (4fdc654) to head (1d157ca).
Report is 1 commit behind head on main.

File          Patch %   Lines missing
src/mprk.jl   95.20%    6 ⚠️

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #101      +/-   ##
==========================================
- Coverage   98.32%   98.09%   -0.23%     
==========================================
  Files           6        6              
  Lines        1250     1415     +165     
==========================================
+ Hits         1229     1388     +159     
- Misses         21       27       +6     


@ranocha ranocha changed the title WIP: improve performance with sparse matrices improve performance with sparse matrices Jul 17, 2024
@ranocha ranocha requested a review from SKopecz July 17, 2024 14:25
@ranocha (Collaborator, Author) commented Jul 17, 2024

There seem to be some issues with floating point rounding errors on macOS CI. The other tests seem to be fine.

@ranocha ranocha marked this pull request as ready for review July 17, 2024 14:26
@ranocha (Collaborator, Author) commented Jul 17, 2024

@SKopecz Do you have an idea why MPRK43II may be so sensitive here - or how we should fix it?

@SKopecz (Owner) commented Jul 17, 2024

> @SKopecz Do you have an idea why MPRK43II may be so sensitive here - or how we should fix it?

If possible, I'd exclude MPRK43II from this specific test on macOS for now and create an issue to deal with this later.

I think we need much more information to see what the actual problem is. It looks to me as if the result of the linear solve has a negative element.

@ranocha (Collaborator, Author) commented Jul 17, 2024

> @SKopecz Do you have an idea why MPRK43II may be so sensitive here - or how we should fix it?

> If possible, I'd exclude MPRK43II from this specific test on macOS for now and create an issue to deal with this later.

👍

> I think we need much more information to see what the actual problem is. It looks to me as if the result of the linear solve has a negative element.

Yes, that's what I think, too - computing the powers like σ^(1 - q1) * u^q1 seems to be the problem when σ becomes slightly negative.
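A minimal sketch of that failure mode (the numbers and the `max` guard are hypothetical; `q1` and `σ` stand for the quantities in the MPRK43II step, not the package's actual variables):

```julia
q1 = 0.5            # hypothetical method parameter
u  = 1.0e-3         # hypothetical solution component
σ  = -1.0e-16       # slightly negative due to rounding in the linear solve

# In Julia, a negative Float64 base with a non-integer exponent
# throws a DomainError, so this expression would fail:
# σ^(1 - q1) * u^q1

# One possible guard (not necessarily the right fix): clamp σ first.
σ_safe = max(σ, zero(σ))
σ_safe^(1 - q1) * u^q1
```

Whether clamping (or excluding the test, as suggested above) is appropriate is exactly the open question in this thread.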

@ranocha (Collaborator, Author) commented Jul 17, 2024

This PR is ready for a review. Let's keep fingers crossed that CI passes this time 🤞

Review comment on docs/src/faq.md (outdated, resolved):
do not necessarily have the structure for which UMFPACK is optimized. Thus,
it is often possible to gain performance by switching to KLU instead.
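As a rough illustration with LinearSolve.jl (the matrix here is a random stand-in, not the one from the tutorial):

```julia
using LinearAlgebra, LinearSolve, SparseArrays

A = sprand(100, 100, 0.05) + I   # hypothetical sparse system matrix
b = rand(100)
prob = LinearProblem(A, b)

# UMFPACK is the default sparse LU; KLU is often faster for
# unstructured sparsity patterns like the one above.
sol_umfpack = solve(prob, UMFPACKFactorization())
sol_klu     = solve(prob, KLUFactorization())
```

Both factorizations are provided by LinearSolve.jl, so switching is a one-line change to the `linsolve` argument.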

@SKopecz (Owner):

Are you saying that this is the reason for the large number of allocations we see in the linear advection tutorial?

There's also something else we should look at. When we initialize the linear solvers we use something like

linsolve = init(linprob, alg.linsolve, alias_A = true, alias_b = true,
                        assumptions = LinearSolve.OperatorAssumptions(true))

The statement alias_A=true allows the linear solver to reuse the memory of the system matrix and modify it. The idea behind this was that in an MPRK scheme we no longer need the system matrix once the system is solved, so the linear solver can change it at will.

For a sparse matrix this modification of the system matrix could become problematic, since the sparsity pattern of L and U does not match that of A. This holds especially in the linear advection tutorial, where A is a banded matrix with an entry in the upper right corner, for which in general L or U could become a full triangular matrix.

All I want to say is that we should try alias_A=false and check if we see any difference.

@ranocha (Collaborator, Author):

Shall we use alias_A = !issparse(the_matrix_we_put_in) for the general case?
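A sketch of that suggestion, reusing the `init` call quoted above (whether `issparse` on the problem's matrix is the right predicate is exactly what is being asked):

```julia
using LinearSolve, SparseArrays

# Aliasing the system matrix lets the solver reuse and modify its
# memory. For sparse matrices the factors L and U may have a different
# sparsity pattern than A, so fall back to copying in that case.
linsolve = init(linprob, alg.linsolve;
                alias_A = !issparse(linprob.A),
                alias_b = true,
                assumptions = LinearSolve.OperatorAssumptions(true))
```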

@SKopecz SKopecz (Owner) left a comment:

Looks good to me, thanks! I just had a minor comment on the FAQ and the concern regarding alias_A=true could also be moved to an issue.

Co-authored-by: Stefan Kopecz <[email protected]>
@ranocha ranocha merged commit 258c798 into main Jul 18, 2024
9 checks passed
@ranocha ranocha deleted the hr/sparse branch July 18, 2024 13:25
Labels: performance (We are greedy)

Successfully merging this pull request may close these issues: Specializations for SparseMatrix

3 participants