
use of AbstractArray #935

Closed
rveltz opened this issue Dec 6, 2024 · 9 comments
rveltz commented Dec 6, 2024

Hi,

In the matrix-free context, I would like to apply an operator to a Matrix representing the solution of a PDE. To use Krylov.jl, I have to pass AbstractVectors, hence reshape / views. Is there a better workflow?

Thanks a lot!
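[Editor's note: the reshape-based workflow being asked about can be sketched as follows. `apply_op` and `MatOp` are hypothetical names, and the `4X .+ X'` operator is only an invertible stand-in for a discretized PDE operator; Krylov.jl accepts any object supporting `mul!`, `eltype`, and `size`.]

```julia
using LinearAlgebra, Krylov

# Hypothetical stand-in for a discretized PDE operator acting on a matrix.
apply_op(X) = 4 .* X .+ X'

# Matrix-free wrapper: behaves like an (N^2 x N^2) linear map
# by reshaping vectors to N x N matrices internally.
struct MatOp
    N::Int
end
Base.size(A::MatOp) = (A.N^2, A.N^2)
Base.size(A::MatOp, d::Int) = size(A)[d]
Base.eltype(::MatOp) = Float64
function LinearAlgebra.mul!(y::AbstractVector, A::MatOp, x::AbstractVector)
    y .= vec(apply_op(reshape(x, A.N, A.N)))   # matrix view in, flat vector out
    return y
end

N = 8
A = MatOp(N)
b = vec(ones(N, N))        # flatten the matrix-shaped right-hand side
x, stats = gmres(A, b)     # recover the matrix solution with reshape(x, N, N)
```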

amontoison commented Dec 6, 2024

Hi Romain!
Is this solution not good enough for you:
https://jso.dev/Krylov.jl/dev/matrix_free/#Example-with-discretized-PDE ?

I have another alternative if you want, but I would first like to know the type of the matrix (just a Matrix, or something on GPU / a custom type)?

rveltz commented Dec 6, 2024

You are basically using vec, hence a reshape. It means I have to dispatch my operator on AbstractVectors,

like

```julia
function myop(x::AbstractVector)
    xc = reshape(x, N, N)   # N is the grid size, captured from the enclosing scope
    return vec(myop(xc))
end
```

I was hoping for a solution à la KrylovKit, where you don't need those, but I guess this is not a big deal. IterativeSolvers has the same requirement.

amontoison commented Dec 6, 2024

Each vec / reshape allocates only a small wrapper (about 32 bytes).
So it's quite cheap, but you could remove these allocations with a custom workspace: https://jso.dev/Krylov.jl/dev/custom_workspaces/

You just need to create your own AbstractVector that wraps your matrix.
The drawback is that, by default, you lose the dispatch to BLAS / CUBLAS / rocBLAS for the linear algebra operations.

I only recommend it for advanced uses like multi-GPU with MPI.
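[Editor's note: such a wrapper might look like the minimal sketch below. `VecOfMat` is a hypothetical name; a real Krylov.jl custom workspace needs the full set of vector operations described in the custom-workspaces documentation linked above, and, as noted in the comment, the generic indexing fallbacks lose BLAS dispatch.]

```julia
# Minimal sketch: an AbstractVector that wraps a matrix without copying,
# relying on Julia's column-major linear indexing.
struct VecOfMat{T, M<:AbstractMatrix{T}} <: AbstractVector{T}
    data::M
end

Base.size(v::VecOfMat) = (length(v.data),)
Base.getindex(v::VecOfMat, i::Int) = v.data[i]            # linear index into the matrix
Base.setindex!(v::VecOfMat, x, i::Int) = (v.data[i] = x)
Base.similar(v::VecOfMat) = VecOfMat(similar(v.data))

u = VecOfMat([1.0 3.0; 2.0 4.0])
collect(u)    # column-major order: [1.0, 2.0, 3.0, 4.0]
```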

rveltz commented Dec 6, 2024

OK, it just halved my computation time, and I did not even optimise it.

With BifurcationKit, I now regret not having tried it earlier; the fact that you can reuse the Krylov space is a massive performance boost. I thought KrylovKit was good enough.

amontoison commented Dec 6, 2024

If you use it on a GPU, the speed-up will be even greater, because allocations and deallocations are more expensive there and garbage collection works differently.

KrylovKit.jl is probably more flexible, but it was not developed for performance, and it also doesn't support preconditioners, which are quite relevant for Krylov methods.

rveltz commented Dec 6, 2024

The only con is the lack of a general eigensolver like in KrylovKit.

amontoison commented Dec 6, 2024

Yes, you're totally right. I should implement some eigensolvers one day.

Off-topic: Romain, I will probably come to see JB-Caillau at INRIA the second week of January.

rveltz commented Dec 6, 2024

Great! Please ping me the week before so I can free some time.

rveltz closed this as completed Dec 6, 2024
rveltz commented Dec 6, 2024

By the way, thanks for your detailed answers. I am typing on my phone, so I am not very verbose.
