Merge pull request #333 from SciML/docs
Improve a few docstrings
ChrisRackauckas authored Jun 18, 2023
2 parents 74edf0b + 9c85e96 commit 5426cdd
Showing 2 changed files with 38 additions and 16 deletions.
22 changes: 22 additions & 0 deletions src/extension_algs.jl
@@ -180,6 +180,28 @@ MKLPardisoIterate(; kwargs...) = PardisoJL(; solver_type = 1, kwargs...)
end
end
else
"""
```julia
PardisoJL(; nprocs::Union{Int, Nothing} = nothing,
solver_type = nothing,
matrix_type = nothing,
iparm::Union{Vector{Tuple{Int, Int}}, Nothing} = nothing,
dparm::Union{Vector{Tuple{Int, Int}}, Nothing} = nothing)
```
A generic method using MKL Pardiso. Specifying `solver_type` is required.
!!! note
Using this solver requires adding the package Pardiso.jl, i.e. `using Pardiso`
## Keyword Arguments
For the definition of the keyword arguments, see the Pardiso.jl documentation.
All values default to `nothing` and the solver internally determines the values
given the input types, and these keyword arguments are only for overriding the
default handling process. This should not be required by most users.
"""
Base.@kwdef struct PardisoJL <: LinearSolve.SciMLLinearSolveAlgorithm
nprocs::Union{Int, Nothing} = nothing
solver_type::Any = nothing
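For context, here is a minimal usage sketch of the constructor documented above. It assumes Pardiso.jl is installed and uses LinearSolve's standard `LinearProblem`/`solve` interface; the test matrix is arbitrary.

```julia
using LinearSolve, Pardiso, SparseArrays, LinearAlgebra

A = sprand(100, 100, 0.05) + 10I   # arbitrary well-conditioned sparse matrix
b = rand(100)
prob = LinearProblem(A, b)

# solver_type = 1 selects the iterative solver, matching the
# MKLPardisoIterate definition shown in the hunk header above.
sol = solve(prob, PardisoJL(; solver_type = 1))
sol.u  # the solution vector
```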
32 changes: 16 additions & 16 deletions src/factorization.jl
@@ -30,13 +30,13 @@ end
Julia's built-in `lu`. Equivalent to calling `lu!(A)`.

* On dense matrices, this uses the current BLAS implementation of the user's computer,
  which by default is OpenBLAS but will use MKL if the user does `using MKL` on their
  system.
* On sparse matrices, this will use UMFPACK from SuiteSparse. Note that this will not
  cache the symbolic factorization.
* On CuMatrix, it will use a CUDA-accelerated LU from CuSolver.
* On BandedMatrix and BlockBandedMatrix, it will use a banded LU.
## Positional Arguments
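To illustrate the dispatch described in the bullets above, a brief hedged sketch using LinearSolve's `LinearProblem`/`solve` interface (matrix sizes are arbitrary):

```julia
using LinearSolve, SparseArrays, LinearAlgebra

b = rand(100)

# Dense: dispatches to the BLAS LU (OpenBLAS by default, MKL after `using MKL`).
Adense = rand(100, 100)
sol1 = solve(LinearProblem(Adense, b), LUFactorization())

# Sparse: dispatches to UMFPACK from SuiteSparse.
Asparse = sprand(100, 100, 0.05) + 10I
sol2 = solve(LinearProblem(Asparse, b), LUFactorization())
```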
@@ -136,12 +136,12 @@ end
Julia's built-in `qr`. Equivalent to calling `qr!(A)`.

* On dense matrices, this uses the current BLAS implementation of the user's computer,
  which by default is OpenBLAS but will use MKL if the user does `using MKL` on their
  system.
* On sparse matrices, this will use SPQR from SuiteSparse.
* On CuMatrix, it will use a CUDA-accelerated QR from CuSolver.
* On BandedMatrix and BlockBandedMatrix, it will use a banded QR.
"""
struct QRFactorization{P} <: AbstractFactorization
pivot::P
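Usage mirrors `LUFactorization`; a one-line sketch under the same assumptions as the sketch above:

```julia
using LinearSolve

A = rand(100, 100); b = rand(100)
# QR is slower than LU but more robust on ill-conditioned dense systems.
sol = solve(LinearProblem(A, b), QRFactorization())
```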
@@ -324,9 +324,9 @@ end
Julia's built-in `svd`. Equivalent to `svd!(A)`.

* On dense matrices, this uses the current BLAS implementation of the user's computer,
  which by default is OpenBLAS but will use MKL if the user does `using MKL` on their
  system.
"""
struct SVDFactorization{A} <: AbstractFactorization
full::Bool
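As above, a minimal hedged sketch:

```julia
using LinearSolve

A = rand(100, 100); b = rand(100)
# SVD is the most robust, and most expensive, of the dense factorizations.
sol = solve(LinearProblem(A, b), SVDFactorization())
```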
