
Remove SuiteSparse #426

Merged: 4 commits, Nov 8, 2023
2 changes: 0 additions & 2 deletions Project.toml
@@ -28,7 +28,6 @@ SciMLOperators = "c0aeaf25-5076-4817-a8d5-81caf7dfa961"
Setfield = "efcf1570-3423-57d1-acb7-fd33fddbac46"
SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
Sparspak = "e56a9233-b9d6-4f03-8d0f-1825330902ac"
SuiteSparse = "4607b0f0-06f3-5cda-b6b1-a6196a1729e9"
UnPack = "3a884ed6-31ef-47d7-9d2a-63182c4928ed"

[weakdeps]
@@ -88,7 +87,6 @@ SciMLOperators = "0.3"
Setfield = "1"
SparseArrays = "1.9"
Sparspak = "0.3.6"
SuiteSparse = "1.9"
UnPack = "1"
julia = "1.9"

1 change: 0 additions & 1 deletion src/LinearSolve.jl
@@ -18,7 +18,6 @@ PrecompileTools.@recompile_invalidations begin
using SciMLOperators: AbstractSciMLOperator, IdentityOperator
using Setfield
using UnPack
using SuiteSparse
using KLU
using Sparspak
using FastLapackInterface
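Dropping `using SuiteSparse` is safe here because, with the `julia = "1.9"` lower bound set in Project.toml, the UMFPACK, CHOLMOD, and SPQR wrappers are available as submodules of SparseArrays, which this module already imports. A minimal sanity check (not part of this PR), assuming Julia ≥ 1.9:

```julia
# Sanity check (assumes Julia >= 1.9): the wrappers formerly reached via the
# SuiteSparse stdlib are available as submodules of SparseArrays.
using SparseArrays

@assert isdefined(SparseArrays, :UMFPACK)   # sparse LU
@assert isdefined(SparseArrays, :CHOLMOD)   # Cholesky / LDLt
@assert isdefined(SparseArrays, :SPQR)      # sparse QR

# The index helper previously called as SuiteSparse.decrement is also here:
@assert SparseArrays.decrement([1, 2, 3]) == [0, 1, 2]
```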
34 changes: 17 additions & 17 deletions src/factorization.jl
@@ -33,7 +33,7 @@
* On dense matrices, this uses the current BLAS implementation of the user's computer,
which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
system.
* On sparse matrices, this will use UMFPACK from SuiteSparse. Note that this will not
* On sparse matrices, this will use UMFPACK from SparseArrays. Note that this will not
cache the symbolic factorization.
* On CuMatrix, it will use a CUDA-accelerated LU from CuSolver.
* On BandedMatrix and BlockBandedMatrix, it will use a banded LU.
@@ -139,7 +139,7 @@
* On dense matrices, this uses the current BLAS implementation of the user's computer
which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
system.
* On sparse matrices, this will use SPQR from SuiteSparse
* On sparse matrices, this will use SPQR from SparseArrays
* On CuMatrix, it will use a CUDA-accelerated QR from CuSolver.
* On BandedMatrix and BlockBandedMatrix, it will use a banded QR.
"""
@@ -681,7 +681,7 @@

!!! note

By default, the SuiteSparse.jl factorizations are implemented for efficiency by caching the
By default, the SparseArrays.jl factorizations are implemented for efficiency by caching the
symbolic factorization. I.e., if `set_A` is used, it is expected that the new
`A` has the same sparsity pattern as the previous `A`. If this algorithm is to
be used in a context where that assumption does not hold, set `reuse_symbolic=false`.
@@ -692,11 +692,11 @@
end

@static if VERSION < v"1.9.0-DEV.1622"
const PREALLOCATED_UMFPACK = SuiteSparse.UMFPACK.UmfpackLU(C_NULL, C_NULL, 0, 0,
const PREALLOCATED_UMFPACK = SparseArrays.UMFPACK.UmfpackLU(C_NULL, C_NULL, 0, 0,
[0], Int[], Float64[], 0)
finalizer(SuiteSparse.UMFPACK.umfpack_free_symbolic, PREALLOCATED_UMFPACK)
finalizer(SparseArrays.UMFPACK.umfpack_free_symbolic, PREALLOCATED_UMFPACK)
else
const PREALLOCATED_UMFPACK = SuiteSparse.UMFPACK.UmfpackLU(SparseMatrixCSC(0, 0, [1],
const PREALLOCATED_UMFPACK = SparseArrays.UMFPACK.UmfpackLU(SparseMatrixCSC(0, 0, [1],
Int[],
Float64[]))
end
@@ -722,17 +722,17 @@
A = convert(AbstractMatrix, A)
zerobased = SparseArrays.getcolptr(A)[1] == 0
@static if VERSION < v"1.9.0-DEV.1622"
res = SuiteSparse.UMFPACK.UmfpackLU(C_NULL, C_NULL, size(A, 1), size(A, 2),
res = SparseArrays.UMFPACK.UmfpackLU(C_NULL, C_NULL, size(A, 1), size(A, 2),

zerobased ?
copy(SparseArrays.getcolptr(A)) :
SuiteSparse.decrement(SparseArrays.getcolptr(A)),
SparseArrays.decrement(SparseArrays.getcolptr(A)),
zerobased ? copy(rowvals(A)) :
SuiteSparse.decrement(rowvals(A)),
SparseArrays.decrement(rowvals(A)),
copy(nonzeros(A)), 0)
finalizer(SuiteSparse.UMFPACK.umfpack_free_symbolic, res)
finalizer(SparseArrays.UMFPACK.umfpack_free_symbolic, res)

return res
else
return SuiteSparse.UMFPACK.UmfpackLU(SparseMatrixCSC(size(A)..., getcolptr(A),
return SparseArrays.UMFPACK.UmfpackLU(SparseMatrixCSC(size(A)..., getcolptr(A),

rowvals(A), nonzeros(A)))
end
end
@@ -744,9 +744,9 @@
cacheval = @get_cacheval(cache, :UMFPACKFactorization)
if alg.reuse_symbolic
# Caches the symbolic factorization: https://github.com/JuliaLang/julia/pull/33738
if alg.check_pattern && !(SuiteSparse.decrement(SparseArrays.getcolptr(A)) ==
if alg.check_pattern && !(SparseArrays.decrement(SparseArrays.getcolptr(A)) ==
cacheval.colptr &&
SuiteSparse.decrement(SparseArrays.getrowval(A)) ==
SparseArrays.decrement(SparseArrays.getrowval(A)) ==
cacheval.rowval)
fact = lu(SparseMatrixCSC(size(A)..., getcolptr(A), rowvals(A),
nonzeros(A)))
@@ -773,7 +773,7 @@

!!! note

By default, the SuiteSparse.jl factorizations are implemented for efficiency by caching the
By default, the SparseArrays.jl factorizations are implemented for efficiency by caching the
symbolic factorization. I.e., if `set_A` is used, it is expected that the new
`A` has the same sparsity pattern as the previous `A`. If this algorithm is to
be used in a context where that assumption does not hold, set `reuse_symbolic=false`.
@@ -816,9 +816,9 @@
if cache.isfresh
cacheval = @get_cacheval(cache, :KLUFactorization)
if cacheval !== nothing && alg.reuse_symbolic
if alg.check_pattern && !(SuiteSparse.decrement(SparseArrays.getcolptr(A)) ==
if alg.check_pattern && !(SparseArrays.decrement(SparseArrays.getcolptr(A)) ==
cacheval.colptr &&
SuiteSparse.decrement(SparseArrays.getrowval(A)) == cacheval.rowval)
SparseArrays.decrement(SparseArrays.getrowval(A)) == cacheval.rowval)
fact = KLU.klu(SparseMatrixCSC(size(A)..., getcolptr(A), rowvals(A),
nonzeros(A)))
else
@@ -1378,4 +1378,4 @@
maxiters::Int, abstol, reltol, verbose::Bool,
assumptions::OperatorAssumptions)
end
end
end
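The `reuse_symbolic`/`check_pattern` logic above is user-visible through the standard LinearSolve.jl caching interface. A sketch of how it is exercised (not part of this PR; the keyword names match the `alg.reuse_symbolic` and `alg.check_pattern` fields in the diff, while the `init`/`solve!`/`cache.A` usage assumes the documented LinearSolve.jl cache API):

```julia
# Sketch: the cached symbolic factorization is reused when the new A has the
# same sparsity pattern as the old one; only the numeric step is redone.
using LinearSolve, SparseArrays, LinearAlgebra

A = sprand(100, 100, 0.05) + sparse(10.0I, 100, 100)  # well-conditioned sparse A
b = rand(100)

cache = init(LinearProblem(A, b), UMFPACKFactorization(reuse_symbolic = true))
sol1 = solve!(cache)
@assert norm(A * sol1.u - b) < 1e-6

# Same sparsity pattern, new values: symbolic analysis is reused.
cache.A = 2A
sol2 = solve!(cache)
```

If the pattern can change between solves, the note in the docstring applies: construct the algorithm with `reuse_symbolic = false` instead.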
8 changes: 4 additions & 4 deletions src/factorization_sparse.jl
@@ -2,14 +2,14 @@
# Missing ldiv! definitions: https://github.com/JuliaSparse/SparseArrays.jl/issues/242
function _ldiv!(x::Vector,
A::Union{SparseArrays.QR, LinearAlgebra.QRCompactWY,
SuiteSparse.SPQR.QRSparse,
SuiteSparse.CHOLMOD.Factor}, b::Vector)
SparseArrays.SPQR.QRSparse,
SparseArrays.CHOLMOD.Factor}, b::Vector)
x .= A \ b
end

function _ldiv!(x::AbstractVector,
A::Union{SparseArrays.QR, LinearAlgebra.QRCompactWY,
SuiteSparse.SPQR.QRSparse,
SuiteSparse.CHOLMOD.Factor}, b::AbstractVector)
SparseArrays.SPQR.QRSparse,
SparseArrays.CHOLMOD.Factor}, b::AbstractVector)
x .= A \ b
end
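These `_ldiv!` methods are fallbacks for factorization types that lack an in-place `ldiv!` (see the linked SparseArrays.jl issue #242): they solve out-of-place with `\` and broadcast the result into `x`. A standalone illustration for the sparse-QR case, assuming Julia ≥ 1.9:

```julia
# Sketch of the fallback behavior: solve with `\` (which allocates), then
# copy the result into the preallocated x.
using SparseArrays, LinearAlgebra

A = sprand(50, 50, 0.1) + sparse(10.0I, 50, 50)
b = rand(50)

F = qr(A)          # a SparseArrays.SPQR.QRSparse factorization
x = similar(b)
x .= F \ b         # what _ldiv! does for the types in the Union above
@assert norm(A * x - b) < 1e-8
```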