* Remove section about benchmarking
* WIP on documentation
* Fix typo
* Remove remainder of plot keyword
* Chebyshev & power method
* Uncomment deploy command
* A -> B
* Move things to separate folders and added the remaining methods in separate files
* More docs
* Add the iterator page
* Update docs for stationary methods
* ::Int consistency
* Typo in IDR(s) docs
* Add notes on underlying iterators
* for _ = _ to for _ in _
Commit db0702b (parent 29ba93b): 38 changed files with 1,309 additions and 1,927 deletions.
@@ -0,0 +1,15 @@
# [(Inverse) power method](@id PowerMethod)

Solves the eigenproblem $Ax = λx$ approximately, where $A$ is a general linear map. By default the method converges towards the dominant eigenpair $(λ, x)$, i.e. the pair for which $|λ|$ is largest. Shift-and-invert can be applied to target a specific eigenvalue near `shift` in the complex plane.

## Usage

```@docs
powm
powm!
invpowm
invpowm!
```
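As a rough sketch of the plain power iteration (the random symmetric test matrix, the tolerance and the iteration cap below are arbitrary choices for illustration, not part of the API):

```julia
using IterativeSolvers

# An arbitrary symmetric test matrix, so the dominant eigenvalue is real.
B = rand(50, 50)
A = B + B'

λ, x = powm(A, tol = 1e-7, maxiter = 200)

# For shift-and-invert, `invpowm` is instead given a linear map that applies
# (A - shift*I) \ v (e.g. built from a factorization of A - shift*I) together
# with the `shift` keyword, and targets the eigenvalue closest to `shift`.
```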
## Implementation details
Storage requirements are three vectors: the approximate eigenvector `x`, the residual vector `r` and a temporary. The residual norm lags one iteration behind, as it is computed when $Ax$ is performed. Therefore the residual norm of the final iterate is in fact even smaller than the last reported value.
@@ -0,0 +1,126 @@
# Getting started

## Installation

The package can be installed via Julia's package manager.

```julia
julia> Pkg.add("IterativeSolvers")
```

## Interface

Virtually all solvers share the following function declarations:

```julia
solver(A, args...; kwargs...)
solver!(x, A, args...; kwargs...)
```

where `A` is a [linear operator](@ref matrixfree) and `x` an initial guess. The second declaration also updates `x` in-place.
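For example, with Conjugate Gradients (the small symmetric positive-definite test system below is purely illustrative):

```julia
using IterativeSolvers

# Build an arbitrary symmetric positive-definite test matrix.
B = rand(10, 10)
A = B' * B + 10 * one(B)
b = rand(10)

x = cg(A, b)      # non-mutating form: allocates and returns the solution
x = zeros(10)     # initial guess
cg!(x, A, b)      # mutating form: refines the guess x in place
```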
### [Explicit matrices and the matrix-free approach](@id matrixfree)
Rather than constructing an explicit matrix `A` of the type `Matrix` or `SparseMatrixCSC`, it is also possible to pass a general linear operator that performs matrix operations implicitly. This is called the **matrix-free** approach.

For matrix-free types of `A` the following interface is expected to be defined:

- `A*v` computes the matrix-vector product for a `v::AbstractVector`;
- `A_mul_B!(y, A, v)` computes the matrix-vector product for a `v::AbstractVector` in-place;
- `eltype(A)` returns the element type implicit in the equivalent matrix representation of `A`;
- `size(A, d)` returns the nominal dimension along the `d`th axis in the equivalent matrix representation of `A`.

!!! tip "Matrix-free with LinearMaps.jl"
    We strongly recommend [LinearMaps.jl](https://github.com/Jutho/LinearMaps.jl) for matrix-free linear operators, as it already implements the above methods for you; you only have to write the action of the linear map.
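A minimal matrix-free sketch using LinearMaps.jl (the cumulative-sum operator is an arbitrary example, and constructor keywords may differ slightly between LinearMaps.jl versions):

```julia
using LinearMaps, IterativeSolvers

n = 100
# The operator is defined only by its action v -> cumsum(v); the corresponding
# n×n lower-triangular matrix of ones is never formed explicitly.
A = LinearMap{Float64}(cumsum, n)

b = rand(n)
x = gmres(A, b)
```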
### Additional arguments

Keyword names vary from method to method; however, some of them always have the same spelling:

- `tol`: (relative) stopping tolerance of the method;
- `verbose`: print information during the iterations;
- `maxiter`: maximum number of allowed iterations;
- `Pl` and `Pr`: left and right preconditioner. See [Preconditioning](@ref Preconditioning);
- `log::Bool = false`: output an extra element of type `ConvergenceHistory` containing the convergence history.
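A quick sketch of passing these keywords (the system and the chosen values are arbitrary):

```julia
using IterativeSolvers

A = rand(10, 10)
b = rand(10)

# Tighter tolerance, an iteration cap, and progress printing.
x = gmres(A, b, tol = 1e-8, maxiter = 50, verbose = true)
```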
### `log` keyword

Most solvers accept the `log` keyword. It is useful when more information about the iteration is needed; to use it, set `log` to `true`.

```julia
x, ch = cg(rand(10, 10), rand(10), log = true)
svd, L, ch = svdl(rand(100, 100), log = true)
```

The function will then return one additional output of type `ConvergenceHistory`.
## ConvergenceHistory

A [`ConvergenceHistory`](@ref) instance stores information about a solve.

Number of iterations:

```julia
ch.iters
```

Convergence status:

```julia
ch.isconverged
```

Stopping tolerances (a `Symbol` key is needed for access):

```julia
ch[:tol]
```

Maximum number of iterations per restart (only for restarted methods):

```julia
nrests(ch)
```

Number of matrix-vector and transposed matrix-vector products:

```julia
nprods(ch)
```

Data stored at each iteration; the accessed data can be either a vector or a matrix. It can hold many different quantities, most commonly the residual norm (a `Symbol` key is needed for access):

```julia
ch[:resnorm]        # Vector or Matrix
ch[:resnorm, x]     # Vector or Matrix element
ch[:resnorm, x, y]  # Matrix element
```
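Putting these together, a small sketch (the random system mirrors the plotting example further below and is arbitrary):

```julia
using IterativeSolvers

x, ch = gmres(rand(10, 10), rand(10), maxiter = 100, log = true)

ch.iters          # number of iterations performed
ch.isconverged    # whether the stopping tolerance was reached
ch[:resnorm]      # residual norm at each iteration
```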
```@docs
ConvergenceHistory
```

### Plotting

`ConvergenceHistory` provides a recipe for use with the package [Plots.jl](https://github.com/tbreloff/Plots.jl), which makes it easy to plot with different plotting backends. There are two recipes provided:

One for the whole `ConvergenceHistory`:

```julia
plot(ch)
```

The other plots the data bound to a key:

```julia
_, ch = gmres(rand(10, 10), rand(10), maxiter = 100, log = true)
plot(ch, :resnorm, sep = :blue)
```

*Additional plot keywords*

`sep::Symbol = :white`: color of the line separator in restarted methods.
@@ -1,91 +1,37 @@
# IterativeSolvers.jl

-IterativeSolvers.jl is a Julia package that provides iterative algorithms for
-solving linear systems, eigensystems, and singular value problems. The purpose
-of this package is to provide efficient Julia implementations for iterative
-methods. The package aims to accept a wide variety of input types and that's
-why most arguments don't specify a specific type, however this is still in
-progress.
+IterativeSolvers.jl is a Julia package that provides efficient iterative algorithms for solving large linear systems, eigenproblems, and singular value problems. Most of the methods can be used *matrix-free*.

-For bug reports, feature requests and questions please submit an issue.
-If you're interested in contributing, please see the [Contributing](@ref) guide.
+For bug reports, feature requests and questions please submit an issue. If you're interested in contributing, please see the [Contributing](@ref) guide.

For more information on future methods have a look at the package [roadmap](https://github.com/JuliaLang/IterativeSolvers.jl/issues/1) on deterministic methods, for randomized algorithms check [here](https://github.com/JuliaLang/IterativeSolvers.jl/issues/33).

-## Linear Solvers
+## What method should I use for linear systems?

-**Stationary methods**
+When solving linear systems $Ax = b$ for a square matrix $A$ there are quite some options. The typical choices are listed below:

-* Jacobi
-* Gauss-Seidel
-* Successive over relaxation
-* Symmetric successive over relaxation
+| Method | When to use it |
+|---------------------|--------------------------------------------------------------------------|
+| [Conjugate Gradients](@ref CG) | Best choice for **symmetric**, **positive-definite** matrices |
+| [MINRES](@ref MINRES) | For **symmetric**, **indefinite** matrices |
+| [GMRES](@ref GMRES) | For **nonsymmetric** matrices when a good [preconditioner](@ref Preconditioning) is available |
+| [IDR(s)](@ref IDRs) | For **nonsymmetric**, **strongly indefinite** problems without a good preconditioner |
+| [BiCGStab(l)](@ref BiCGStabl) | Otherwise for **nonsymmetric** problems |

-**Non stationary methods**
+We also offer [Chebyshev iteration](@ref Chebyshev) as an alternative to Conjugate Gradients when bounds on the spectrum are known.

-* IDRS
-* LSMR
-* LSQR
-* Conjugate gradients (CG)
-* Chebyshev iteration
-* Generalized minimal residual method (with restarts) (GMRES)
+Stationary methods like [Jacobi](@ref), [Gauss-Seidel](@ref), [SOR](@ref) and [SSOR](@ref) can be used as smoothers to reduce high-frequency components in the error in just a few iterations.

-## Eigen Solvers
+When solving **least-squares** problems we currently offer just [LSMR](@ref LSMR) and [LSQR](@ref LSQR).

-*Simple eigenpair iterations*
+## Eigenproblems and SVD

-* Power iteration
-* Inverse power iteration
+For the Singular Value Decomposition we offer [SVDL](@ref SVDL), which is the Golub-Kahan-Lanczos procedure.

-**Hermitian**
+For eigenvalue problems we have at this point just the [Power Method](@ref PowerMethod) and some convenience wrappers to do shift-and-invert.

-*Lanczos*
+## Randomized algorithms

-* Simple Lanczos
+[Randomized algorithms](@ref Randomized) have gotten some traction lately. Some of the methods mentioned in [^Halko2011] have been implemented as well, although their quality is generally poor. Also note that many classical methods such as the subspace iteration, BiCG and recent methods like IDR(s) are also "randomized" in some sense.

-## Singular Value Decomposition
-
-* Golub-Kahan-Lanczos
-
-## Randomized
-
-* Condition number estimate
-* Extremal eigenvalue estimates
-* Norm estimate
-* Randomized singular value decomposition
-
-## Documentation Outline
-
-### Manual
-
-```@contents
-Pages = [
-    "user_manual.md",
-]
-Depth = 2
-```
-
-### Library
-
-```@contents
-Pages = ["library/public.md", "library/internal.md"]
-Depth = 2
-```
-
-### [Index](@id main-index)
-
-### Functions
-
-```@index
-Pages = ["library/public.md", "library/internals.md"]
-Order = [:function]
-```
-
-### Types
-
-```@index
-Pages = ["library/public.md", "library/internals.md"]
-Order = [:type]
-```
+[^Halko2011]: Halko, Nathan, Per-Gunnar Martinsson, and Joel A. Tropp. "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions." SIAM Review 53.2 (2011): 217-288.