From f98deac37693fc917c6e8a280a761cd441290d57 Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Sun, 24 Sep 2023 19:11:52 +0000
Subject: [PATCH] build based on 8e5570a

API · Krylov.jl

Stats Types

Krylov.SimpleStatsType

Type for statistics returned by the majority of Krylov solvers. The attributes are:

  • niter
  • solved
  • inconsistent
  • residuals
  • Aresiduals
  • Acond
  • timer
  • status
source
Krylov.LanczosStatsType

Type for statistics returned by CG-LANCZOS. The attributes are:

  • niter
  • solved
  • residuals
  • indefinite
  • Anorm
  • Acond
  • timer
  • status
source
Krylov.LanczosShiftStatsType

Type for statistics returned by CG-LANCZOS with shifts. The attributes are:

  • niter
  • solved
  • residuals
  • indefinite
  • Anorm
  • Acond
  • timer
  • status
source
Krylov.SymmlqStatsType

Type for statistics returned by SYMMLQ. The attributes are:

  • niter
  • solved
  • residuals
  • residualscg
  • errors
  • errorscg
  • Anorm
  • Acond
  • timer
  • status
source
Krylov.AdjointStatsType

Type for statistics returned by the adjoint-system solvers BiLQR and TriLQR. The attributes are:

  • niter
  • solved_primal
  • solved_dual
  • residuals_primal
  • residuals_dual
  • timer
  • status
source
Krylov.LNLQStatsType

Type for statistics returned by the LNLQ method. The attributes are:

  • niter
  • solved
  • residuals
  • errorwithbnd
  • errorbndx
  • errorbndy
  • timer
  • status
source
Krylov.LSLQStatsType

Type for statistics returned by the LSLQ method. The attributes are:

  • niter
  • solved
  • inconsistent
  • residuals
  • Aresiduals
  • err_lbnds
  • errorwithbnd
  • errubndslq
  • errubndscg
  • timer
  • status
source
Krylov.LsmrStatsType

Type for statistics returned by LSMR. The attributes are:

  • niter
  • solved
  • inconsistent
  • residuals
  • Aresiduals
  • Acond
  • Anorm
  • xNorm
  • timer
  • status
source
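As a minimal sketch of how these objects are used in practice (the small SPD system below is illustrative and not part of the docstrings above), the fields of a SimpleStats instance are accessed directly after a solve:

using Krylov, LinearAlgebra

A = diagm(0 => [4.0, 3.0, 2.0, 1.0])  # small SPD matrix
b = ones(4)

x, stats = cg(A, b)

stats.niter   # number of iterations performed
stats.solved  # whether the stopping criterion was met
stats.status  # human-readable termination message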

Solver Types

Krylov.MinresSolverType

Type for storing the vectors required by the in-place version of MINRES.

The outer constructors

solver = MinresSolver(m, n, S; window :: Int=5)
solver = MinresSolver(A, b; window :: Int=5)

may be used in order to create these vectors.

source
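To make the pattern concrete — a minimal sketch (the tridiagonal system below is hypothetical) of allocating the workspace once and solving in place:

using Krylov, SparseArrays

n = 100
A = spdiagm(-1 => -ones(n-1), 0 => 2*ones(n), 1 => -ones(n-1))  # SPD tridiagonal
b = rand(n)

solver = MinresSolver(A, b)  # allocate all vectors once
minres!(solver, A, b)        # in-place solve reuses the workspace
solver.stats.status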
Krylov.MinaresSolverType

Type for storing the vectors required by the in-place version of MINARES.

The outer constructors

solver = MinaresSolver(m, n, S)
solver = MinaresSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CgSolverType

Type for storing the vectors required by the in-place version of CG.

The outer constructors

solver = CgSolver(m, n, S)
solver = CgSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CrSolverType

Type for storing the vectors required by the in-place version of CR.

The outer constructors

solver = CrSolver(m, n, S)
solver = CrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CarSolverType

Type for storing the vectors required by the in-place version of CAR.

The outer constructors

solver = CarSolver(m, n, S)
solver = CarSolver(A, b)

may be used in order to create these vectors.

source
Krylov.SymmlqSolverType

Type for storing the vectors required by the in-place version of SYMMLQ.

The outer constructors

solver = SymmlqSolver(m, n, S)
solver = SymmlqSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CgLanczosSolverType

Type for storing the vectors required by the in-place version of CG-LANCZOS.

The outer constructors

solver = CgLanczosSolver(m, n, S)
solver = CgLanczosSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CgLanczosShiftSolverType

Type for storing the vectors required by the in-place version of CG-LANCZOS-SHIFT.

The outer constructors

solver = CgLanczosShiftSolver(m, n, nshifts, S)
solver = CgLanczosShiftSolver(A, b, nshifts)

may be used in order to create these vectors.

source
Krylov.MinresQlpSolverType

Type for storing the vectors required by the in-place version of MINRES-QLP.

The outer constructors

solver = MinresQlpSolver(m, n, S)
solver = MinresQlpSolver(A, b)

may be used in order to create these vectors.

source
Krylov.DiomSolverType

Type for storing the vectors required by the in-place version of DIOM.

The outer constructors

solver = DiomSolver(m, n, memory, S)
solver = DiomSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n if the value given is larger than n.

source
Krylov.FomSolverType

Type for storing the vectors required by the in-place version of FOM.

The outer constructors

solver = FomSolver(m, n, memory, S)
solver = FomSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n if the value given is larger than n.

source
Krylov.DqgmresSolverType

Type for storing the vectors required by the in-place version of DQGMRES.

The outer constructors

solver = DqgmresSolver(m, n, memory, S)
solver = DqgmresSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n if the value given is larger than n.

source
Krylov.GmresSolverType

Type for storing the vectors required by the in-place version of GMRES.

The outer constructors

solver = GmresSolver(m, n, memory, S)
solver = GmresSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n if the value given is larger than n.

source
Krylov.UsymlqSolverType

Type for storing the vectors required by the in-place version of USYMLQ.

The outer constructors

solver = UsymlqSolver(m, n, S)
solver = UsymlqSolver(A, b)

may be used in order to create these vectors.

source
Krylov.UsymqrSolverType

Type for storing the vectors required by the in-place version of USYMQR.

The outer constructors

solver = UsymqrSolver(m, n, S)
solver = UsymqrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.TricgSolverType

Type for storing the vectors required by the in-place version of TRICG.

The outer constructors

solver = TricgSolver(m, n, S)
solver = TricgSolver(A, b)

may be used in order to create these vectors.

source
Krylov.TrimrSolverType

Type for storing the vectors required by the in-place version of TRIMR.

The outer constructors

solver = TrimrSolver(m, n, S)
solver = TrimrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.TrilqrSolverType

Type for storing the vectors required by the in-place version of TRILQR.

The outer constructors

solver = TrilqrSolver(m, n, S)
solver = TrilqrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CgsSolverType

Type for storing the vectors required by the in-place version of CGS.

The outer constructors

solver = CgsSolver(m, n, S)
solver = CgsSolver(A, b)

may be used in order to create these vectors.

source
Krylov.BicgstabSolverType

Type for storing the vectors required by the in-place version of BICGSTAB.

The outer constructors

solver = BicgstabSolver(m, n, S)
solver = BicgstabSolver(A, b)

may be used in order to create these vectors.

source
Krylov.BilqSolverType

Type for storing the vectors required by the in-place version of BILQ.

The outer constructors

solver = BilqSolver(m, n, S)
solver = BilqSolver(A, b)

may be used in order to create these vectors.

source
Krylov.QmrSolverType

Type for storing the vectors required by the in-place version of QMR.

The outer constructors

solver = QmrSolver(m, n, S)
solver = QmrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.BilqrSolverType

Type for storing the vectors required by the in-place version of BILQR.

The outer constructors

solver = BilqrSolver(m, n, S)
solver = BilqrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CglsSolverType

Type for storing the vectors required by the in-place version of CGLS.

The outer constructors

solver = CglsSolver(m, n, S)
solver = CglsSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CrlsSolverType

Type for storing the vectors required by the in-place version of CRLS.

The outer constructors

solver = CrlsSolver(m, n, S)
solver = CrlsSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CgneSolverType

Type for storing the vectors required by the in-place version of CGNE.

The outer constructors

solver = CgneSolver(m, n, S)
solver = CgneSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CrmrSolverType

Type for storing the vectors required by the in-place version of CRMR.

The outer constructors

solver = CrmrSolver(m, n, S)
solver = CrmrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.LslqSolverType

Type for storing the vectors required by the in-place version of LSLQ.

The outer constructors

solver = LslqSolver(m, n, S)
solver = LslqSolver(A, b)

may be used in order to create these vectors.

source
Krylov.LsqrSolverType

Type for storing the vectors required by the in-place version of LSQR.

The outer constructors

solver = LsqrSolver(m, n, S)
solver = LsqrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.LsmrSolverType

Type for storing the vectors required by the in-place version of LSMR.

The outer constructors

solver = LsmrSolver(m, n, S)
solver = LsmrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.LnlqSolverType

Type for storing the vectors required by the in-place version of LNLQ.

The outer constructors

solver = LnlqSolver(m, n, S)
solver = LnlqSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CraigSolverType

Type for storing the vectors required by the in-place version of CRAIG.

The outer constructors

solver = CraigSolver(m, n, S)
solver = CraigSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CraigmrSolverType

Type for storing the vectors required by the in-place version of CRAIGMR.

The outer constructors

solver = CraigmrSolver(m, n, S)
solver = CraigmrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.GpmrSolverType

Type for storing the vectors required by the in-place version of GPMR.

The outer constructors

solver = GpmrSolver(m, n, memory, S)
solver = GpmrSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n + m if the value given is larger than n + m.

source
Krylov.FgmresSolverType

Type for storing the vectors required by the in-place version of FGMRES.

The outer constructors

solver = FgmresSolver(m, n, memory, S)
solver = FgmresSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n if the value given is larger than n.

source

Utilities

Krylov.roots_quadraticFunction
roots = roots_quadratic(q₂, q₁, q₀; nitref)

Find the real roots of the quadratic

q(x) = q₂ x² + q₁ x + q₀,

where q₂, q₁ and q₀ are real. Care is taken to avoid numerical cancellation. Optionally, nitref steps of iterative refinement may be performed to improve accuracy. By default, nitref=1.

source
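As a quick sanity check — a sketch using the factored quadratic (x - 1)(x - 2), whose roots are known (the call below is illustrative; the exact container returned may vary across versions):

using Krylov

# q(x) = x² - 3x + 2 = (x - 1)(x - 2)
roots = Krylov.roots_quadratic(1.0, -3.0, 2.0)
# expected: the values 1.0 and 2.0, up to rounding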
Krylov.sym_givensFunction
(c, s, ρ) = sym_givens(a, b)

Numerically stable symmetric Givens reflection. Given a and b reals, return (c, s, ρ) such that

[ c  s ] [ a ] = [ ρ ]
[ s -c ] [ b ] = [ 0 ].
source

Numerically stable symmetric Givens reflection. Given a and b complexes, return (c, s, ρ) with c real and (s, ρ) complexes such that

[ c   s ] [ a ] = [ ρ ]
[ s̅  -c ] [ b ] = [ 0 ].
source
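A small numerical sanity check of the identity above, using the real 3-4-5 pair (this example is not part of the docstring):

using Krylov

(c, s, ρ) = Krylov.sym_givens(3.0, 4.0)
# c*3.0 + s*4.0 ≈ ρ (with |ρ| = 5.0) and s*3.0 - c*4.0 ≈ 0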
Krylov.to_boundaryFunction
roots = to_boundary(n, x, d, radius; flip, xNorm2, dNorm2)

Given a trust-region radius radius, a vector x lying inside the trust-region and a direction d, return σ1 and σ2 such that

‖x + σi d‖ = radius, i = 1, 2

in the Euclidean norm. n is the length of vectors x and d. If known, ‖x‖² and ‖d‖² may be supplied with xNorm2 and dNorm2.

If flip is set to true, σ1 and σ2 are computed such that

‖x - σi d‖ = radius, i = 1, 2.
source
Krylov.vec2strFunction
s = vec2str(x; ndisp)

Display an array in the form

[ -3.0e-01 -5.1e-01  1.9e-01 ... -2.3e-01 -4.4e-01  2.4e-01 ]

with (ndisp - 1)/2 elements on each side.

source
Krylov.ktypeofFunction
S = ktypeof(v)

Return the most relevant storage type S based on the type of v.

source
Krylov.kzerosFunction
v = kzeros(S, n)

Create a vector of storage type S of length n composed only of zeros.

source
Krylov.konesFunction
v = kones(S, n)

Create a vector of storage type S of length n composed only of ones.

source
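Putting the storage utilities together — a minimal sketch (the Float32 vector is illustrative):

using Krylov

v = rand(Float32, 5)
S = Krylov.ktypeof(v)    # Vector{Float32}
z = Krylov.kzeros(S, 5)  # zero vector with that storage type
o = Krylov.kones(S, 5)   # ones vector with that storage type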
Krylov.vector_to_matrixFunction
M = vector_to_matrix(S)

Return the dense matrix storage type M related to the dense vector storage type S.

source
Krylov.matrix_to_vectorFunction
S = matrix_to_vector(M)

Return the dense vector storage type S related to the dense matrix storage type M.

source
Example output for CRAIG (dev/examples/craig):
CRAIG: system of 5 equations in 8 variables
     k       ‖r‖       ‖x‖       ‖A‖      κ(A)         α        β  timer
    0  9.10e+00  0.00e+00  0.00e+00  0.00e+00   ✗ ✗ ✗ ✗  ✗ ✗ ✗ ✗  0.45s
    1  3.10e-01  2.74e+00  3.32e+00  3.32e+00   3.3e+00  1.1e-01  0.75s
    2  4.23e-02  2.79e+00  3.38e+00  4.78e+00   6.2e-01  8.5e-02  0.91s
    3  7.34e-04  2.79e+00  3.49e+00  6.05e+00   8.8e-01  1.5e-02  0.91s
    4  3.08e-04  2.79e+00  3.60e+00  7.21e+00   8.2e-01  3.4e-01  0.91s
    5  2.85e-11  2.79e+00  3.64e+00  8.23e+00   5.6e-01  5.2e-08  0.91s
 
 SimpleStats
  niter: 5
  residuals: []
  Aresiduals: []
  κ₂(A): []
 timer: 913.74ms
  status: solution good enough for the tolerances given
Primal feasibility: 3.1e-12
Dual   feasibility: 1.3e-16
Error in x: 3.1e-12
Error in y: 2.2e-12

Example output for MINRES-QLP (dev/examples/minres_qlp):
Residual r: [-6.2e-15  2.9e-15  2.2e-15  4.0e+00 ]
 Relative residual norm ‖r‖:  7.3e-01
 Solution x: [ 1.0e+00  1.0e+00  1.0e+00 -9.6e-16 ]
Minimum-norm solution? true

Example output for SYMMLQ (dev/examples/symmlq):
Residual r: [ 1.1e-16  2.2e-16  0.0e+00  0.0e+00 ]
 Relative residual norm ‖r‖:  6.6e-17
 Solution x: [ 1.0e+00  1.0e+00  1.0e+00  0.0e+00 ]
-Minimum-norm solution? true
Minimum-norm solution? true

Factorization-free operators (dev/factorization-free):

Example 2: The Gauss-Newton Method for Nonlinear Least Squares

At each iteration of the Gauss-Newton method applied to a nonlinear least-squares objective $f(x) = \tfrac{1}{2}\| F(x)\|^2$ where $F : \mathbb{R}^n \rightarrow \mathbb{R}^m$ is $\mathcal{C}^1$, we solve the subproblem:

\[\min_{d \in \mathbb{R}^n}~~\tfrac{1}{2}~\|J(x_k) d + F(x_k)\|^2,\]

where $J(x)$ is the Jacobian of $F$ at $x$.

An appropriate iterative method to solve the above linear least-squares problems is LSMR. We could pass the explicit Jacobian to LSMR as follows:

using ForwardDiff, Krylov
 
# ... (the definitions of F, its Jacobian operator, and the lsmr call are elided in this excerpt)
  κ₂(A): 1.2777264193293685
  ‖A‖F: 5.328138145631235
  xNorm: 0.5623144027914095
 timer: 785.92ms
  status: found approximate minimum least-squares solution
)

Note that preconditioners can also be implemented as abstract operators. For instance, we could compute the Cholesky factorization of $M$ and $N$ and create linear operators that perform the forward and backsolves.

Krylov methods combined with factorization-free operators considerably reduce computation time and memory requirements by avoiding building and storing the system matrix. In the field of partial differential equations, the implementation of high-performance factorization-free operators and assembly-free preconditioning is a subject of active research.
Solving problems on Apple GPUs (dev/gpu):

b_gpu = MtlVector(b_cpu)

# Solve a dense least-norm problem on an Apple M1 GPU
x, stats = craig(A_gpu, b_gpu)
Warning

Metal.jl is under heavy development and is considered experimental for now.

Home · Krylov.jl

Krylov.jl documentation

This package provides implementations of some of the most useful Krylov methods for a variety of problems:

1 - Square or rectangular full-rank systems

\[ Ax = b\]

should be solved when b lies in the range space of A. This situation occurs when

  • A is square and nonsingular,
  • A is tall and has full column rank and b lies in the range of A.

2 - Linear least-squares problems

\[ \min \|b - Ax\|\]

should be solved when b is not in the range of A (inconsistent systems), regardless of the shape and rank of A. This situation mainly occurs when

  • A is square and singular,
  • A is tall and thin.

Underdetermined systems are less common but also occur.

If there are infinitely many such x (because A is column rank-deficient), one with minimum norm is identified

\[ \min \|x\| \quad \text{subject to} \quad x \in \argmin \|b - Ax\|.\]

3 - Linear least-norm problems

\[ \min \|x\| \quad \text{subject to} \quad Ax = b\]

should be solved when A is column rank-deficient but b is in the range of A (consistent systems), regardless of the shape of A. This situation mainly occurs when

  • A is square and singular,
  • A is short and wide.

Overdetermined systems are less common but also occur.

4 - Adjoint systems

\[ Ax = b \quad \text{and} \quad A^H y = c\]

where A can have any shape.

5 - Saddle-point and Hermitian quasi-definite systems

\[ \begin{bmatrix} M & \phantom{-}A \\ A^H & -N \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \left(\begin{bmatrix} b \\ 0 \end{bmatrix},\begin{bmatrix} 0 \\ c \end{bmatrix},\begin{bmatrix} b \\ c \end{bmatrix}\right)\]

where A can have any shape.

6 - Generalized saddle-point and non-Hermitian partitioned systems

\[ \begin{bmatrix} M & A \\ B & N \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} b \\ c \end{bmatrix}\]

where A can have any shape and B has the shape of Aᴴ. A, B, b and c must all be nonzero.

Krylov solvers are particularly appropriate in situations where such problems must be solved but a factorization is not possible, either because:

  • A is not available explicitly,
  • A would be dense or would consume an excessive amount of memory if it were materialized,
  • factors would consume an excessive amount of memory.

Iterative methods are recommended in either of the following situations:

  • the problem is sufficiently large that a factorization is not feasible or would be slow,
  • an effective preconditioner is known in cases where the problem has unfavorable spectral structure,
  • the operator can be represented efficiently as a sparse matrix,
  • the operator is fast, i.e., can be applied with better complexity than if it were materialized as a matrix. Certain fast operators would materialize as dense matrices.

Features

All solvers in Krylov.jl have an in-place version, are compatible with GPUs, and work in any floating-point data type.
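For instance — a minimal sketch (the 2×2 system is hypothetical) showing that the same call works in single precision:

using Krylov

A = Float32[4 1; 1 3]  # SPD matrix in single precision
b = Float32[1, 2]
x, stats = cg(A, b)    # x is a Vector{Float32}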

How to Install

Krylov can be installed and tested through the Julia package manager:

julia> ]
 pkg> add Krylov
pkg> test Krylov

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Before opening a pull request, please start an issue or a discussion on the topic.

If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.
In-place methods (dev/inplace):

dqgmres!(dqgmres_solver, A2, b2)

lsqr_solver = LsqrSolver(m, n, CuVector{Float32})
lsqr!(lsqr_solver, A3, b3)

A generic function solve! is also available and dispatches to the appropriate Krylov method.

Krylov.solve!Function
solve!(solver, args...; kwargs...)

Use the in-place Krylov method associated to solver.

source

In-place methods return an updated solver workspace. Solutions and statistics can be recovered via solver.x, solver.y and solver.stats. The functions solution and statistics can also be used.

Krylov.nsolutionFunction
nsolution(solver)

Return the number of outputs of solution(solver).

source
Krylov.solutionFunction
solution(solver)

Return the solution(s) stored in the solver. Optionally you can specify which solution you want to recover: solution(solver, 1) returns x and solution(solver, 2) returns y.

source
Krylov.statisticsFunction
statistics(solver)

Return the statistics stored in the solver.

source
Krylov.issolvedFunction
issolved(solver)

Return a boolean that determines whether the Krylov method associated to solver succeeded.

source
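A minimal sketch (the diagonal system is hypothetical) tying these functions together:

using Krylov, LinearAlgebra

A = diagm(0 => collect(1.0:10.0))  # SPD diagonal matrix
b = ones(10)

solver = CgSolver(A, b)     # allocate the CG workspace
solve!(solver, A, b)        # dispatches to the in-place method cg!

x = solution(solver)        # equivalent to solver.x
stats = statistics(solver)  # equivalent to solver.stats
issolved(solver)            # true if CG converged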

Examples

We illustrate the use of in-place Krylov solvers with two well-known optimization methods. The details of the optimization methods are described in the section about Factorization-free operators.

Example 1: Newton's method for convex optimization without linesearch

using Krylov
 
 function newton(∇f, ∇²f, x₀; itmax = 200, tol = 1e-8)
 
@@ -65,4 +65,4 @@
         tired = iter ≥ itmax
     end
     return x
-end
+end diff --git a/dev/preconditioners/index.html b/dev/preconditioners/index.html index 846773b5f..71d7c5734 100644 --- a/dev/preconditioners/index.html +++ b/dev/preconditioners/index.html @@ -60,4 +60,4 @@ (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'T')), (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'T'))) -x, y, stats = craigmr(B⁻¹ * opA, B⁻¹ * b) # min ‖x‖₂ s.t. B⁻¹Ax = B⁻¹b +x, y, stats = craigmr(B⁻¹ * opA, B⁻¹ * b) # min ‖x‖₂ s.t. B⁻¹Ax = B⁻¹b diff --git a/dev/processes/index.html b/dev/processes/index.html index 672211471..2e70e9539 100644 --- a/dev/processes/index.html +++ b/dev/processes/index.html @@ -1,5 +1,5 @@ -Krylov processes · Krylov.jl

Krylov processes

Krylov processes are the foundation of Krylov methods: they generate bases of Krylov subspaces. Depending on the Krylov subspaces generated, Krylov processes are more or less specialized for a subset of linear problems. The following table summarizes the most relevant processes for each linear problem.

Linear problems                                                 | Processes
----------------------------------------------------------------|---------------------------------
Hermitian linear systems                                        | Hermitian Lanczos
Square non-Hermitian linear systems                             | Non-Hermitian Lanczos – Arnoldi
Least-squares problems                                          | Golub-Kahan – Saunders-Simon-Yip
Least-norm problems                                             | Golub-Kahan – Saunders-Simon-Yip
Saddle-point and Hermitian quasi-definite systems               | Golub-Kahan – Saunders-Simon-Yip
Generalized saddle-point and non-Hermitian partitioned systems  | Montoison-Orban

Notation

For a matrix $A$, $A^H$ denotes the conjugate transpose of $A$. It coincides with $A^T$, the transpose of $A$, for real matrices. Define $V_k := \begin{bmatrix} v_1 & \ldots & v_k \end{bmatrix} \enspace$ and $\enspace U_k := \begin{bmatrix} u_1 & \ldots & u_k \end{bmatrix}$.

For a matrix $C \in \mathbb{C}^{n \times n}$ and a vector $t \in \mathbb{C}^{n}$, the $k$-th Krylov subspace generated by $C$ and $t$ is

\[\mathcal{K}_k(C, t) := \left\{\sum_{i=0}^{k-1} \omega_i C^i t \, \middle \vert \, \omega_i \in \mathbb{C},~0 \le i \le k-1 \right\}.\]

For matrices $C \in \mathbb{C}^{n \times n} \enspace$ and $\enspace T \in \mathbb{C}^{n \times p}$, the $k$-th block Krylov subspace generated by $C$ and $T$ is

\[\mathcal{K}_k^{\square}(C, T) :=
\left\{\sum_{i=0}^{k-1} C^i T \, \Omega_i \, \middle \vert \, \Omega_i \in \mathbb{C}^{p \times p},~0 \le i \le k-1 \right\}.\]

Hermitian Lanczos

hermitian_lanczos

After $k$ iterations of the Hermitian Lanczos process, the situation may be summarized as

\[\begin{align*}
  A V_k &= V_k T_k + \beta_{k+1,k} v_{k+1} e_k^T = V_{k+1} T_{k+1,k}, \\
  V_k^H V_k &= I_k,
\end{align*}\]
where $V_k$ is an orthonormal basis of the Krylov subspace $\mathcal{K}_k(A,b)$,

\[T_k =
\begin{bmatrix}
  \alpha_1 & \bar{\beta}_2 & & \\
  \beta_2 & \alpha_2 & \ddots & \\
  & \ddots & \ddots & \bar{\beta}_k \\
  & & \beta_k & \alpha_k
\end{bmatrix}, \qquad
T_{k+1,k} =
\begin{bmatrix}
  T_{k} \\
  \beta_{k+1} e_{k}^T
\end{bmatrix}.\]

Note that depending on how we normalize the vectors that compose $V_k$, $T_{k+1,k}$ can be a real tridiagonal matrix even if $A$ is a complex matrix.

The function hermitian_lanczos returns $V_{k+1}$, $\beta_1$ and $T_{k+1,k}$.

Related methods: SYMMLQ, CG, CR, CAR, MINRES, MINRES-QLP, MINARES, CGLS, CRLS, CGNE, CRMR, CG-LANCZOS and CG-LANCZOS-SHIFT.

Krylov.hermitian_lanczosMethod
V, β, T = hermitian_lanczos(A, b, k)

Input arguments

  • A: a linear operator that models an Hermitian matrix of dimension n;
  • b: a vector of length n;
  • k: the number of iterations of the Hermitian Lanczos process.

Output arguments

  • V: a dense n × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • T: a sparse (k+1) × k tridiagonal matrix.

Reference

source
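To make the process concrete — a minimal sketch (the random Hermitian matrix and the residual check are illustrative, assuming the three-output signature documented above):

using Krylov, LinearAlgebra

n, k = 50, 10
A = Symmetric(rand(n, n))  # a Hermitian operator
b = rand(n)

V, β, T = hermitian_lanczos(A, b, k)
norm(A * V[:, 1:k] - V * T)  # ≈ 0: A Vₖ = Vₖ₊₁ Tₖ₊₁,ₖ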

Non-Hermitian Lanczos

nonhermitian_lanczos

After $k$ iterations of the non-Hermitian Lanczos process (also named the Lanczos biorthogonalization process), the situation may be summarized as

\[\begin{align*}
  A V_k &= V_k T_k + \beta_{k+1} v_{k+1} e_k^T = V_{k+1} T_{k+1,k}, \\
  A^H U_k &= U_k T_k^H + \bar{\gamma}_{k+1} u_{k+1} e_k^T = U_{k+1} T_{k,k+1}^H, \\
  V_k^H U_k &= U_k^H V_k = I_k,
\end{align*}\]

where $V_k$ and $U_k$ are bases of the Krylov subspaces $\mathcal{K}_k (A,b)$ and $\mathcal{K}_k (A^H,c)$, respectively,

\[T_k =
\begin{bmatrix}
  \alpha_1 & \gamma_2 & & \\
  \beta_2 & \alpha_2 & \ddots & \\
  & \ddots & \ddots & \gamma_k \\
  & & \beta_k & \alpha_k
\end{bmatrix}, \qquad
T_{k+1,k} =
\begin{bmatrix}
  T_{k} \\
  \beta_{k+1} e_{k}^T
\end{bmatrix}, \qquad
T_{k,k+1} =
\begin{bmatrix}
  T_{k} & \gamma_{k+1} e_k
\end{bmatrix}.\]

The function nonhermitian_lanczos returns $V_{k+1}$, $\beta_1$, $T_{k+1,k}$, $U_{k+1}$, $\bar{\gamma}_1$ and $T_{k,k+1}^H$.

Related methods: BiLQ, QMR, BiLQR, CGS and BICGSTAB.

Note

The scaling factors used in our implementation are $\beta_k = |u_k^H v_k|^{\tfrac{1}{2}}$ and $\gamma_k = (u_k^H v_k) / \beta_k$. With these scaling factors, the non-Hermitian Lanczos process coincides with the Hermitian Lanczos process when $A = A^H$ and $b = c$.

Krylov.nonhermitian_lanczosMethod
V, β, T, U, γᴴ, Tᴴ = nonhermitian_lanczos(A, b, c, k)

Input arguments

  • A: a linear operator that models a square matrix of dimension n;
  • b: a vector of length n;
  • c: a vector of length n;
  • k: the number of iterations of the non-Hermitian Lanczos process.

Output arguments

  • V: a dense n × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • T: a sparse (k+1) × k tridiagonal matrix;
  • U: a dense n × (k+1) matrix;
  • γᴴ: a coefficient such that γᴴu₁ = c;
  • Tᴴ: a sparse (k+1) × k tridiagonal matrix.

Reference

source
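A minimal sketch (with hypothetical random data) checking the two relations above:

using Krylov, LinearAlgebra

n, k = 50, 10
A = rand(n, n)
b = rand(n); c = rand(n)

V, β, T, U, γᴴ, Tᴴ = nonhermitian_lanczos(A, b, c, k)
norm(A * V[:, 1:k] - V * T)    # ≈ 0: A Vₖ = Vₖ₊₁ Tₖ₊₁,ₖ
norm(A' * U[:, 1:k] - U * Tᴴ)  # ≈ 0: Aᴴ Uₖ = Uₖ₊₁ Tₖ,ₖ₊₁ᴴ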

The non-Hermitian Lanczos process can also be implemented without $A^H$ (transpose-free variant). To derive it, we can observe that $\beta_{k+1} v_{k+1} = P_k(A) b~~$ and $\bar{\gamma}_{k+1} u_{k+1} = Q_k(A^H) c~~$ where $P_k$ and $Q_k$ are polynomials of degree $k$. The polynomials are defined from the recursions:

\[\begin{align*}
  P_0(A) &= I_n, \\
  P_1(A) &= \left(\dfrac{A - \alpha_1 I_n}{\beta_1}\right) P_0(A), \\
  P_k(A) &= \left(\dfrac{A - \alpha_k I_n}{\beta_k}\right) P_{k-1}(A) - \dfrac{\gamma_k}{\beta_{k-1}} P_{k-2}(A), \quad k \ge 2, \\
  & \\
  Q_0(A^H) &= I_n, \\
  Q_1(A^H) &= \left(\dfrac{A^H - \bar{\alpha}_1 I_n}{\bar{\gamma}_1}\right) Q_0(A^H), \\
  Q_k(A^H) &= \left(\dfrac{A^H - \bar{\alpha}_k I_n}{\bar{\gamma}_k}\right) Q_{k-1}(A^H) - \dfrac{\bar{\beta}_k}{\bar{\gamma}_{k-1}} Q_{k-2}(A^H), \quad k \ge 2.
\end{align*}\]

Because $\alpha_k = u_k^H A v_k$ and $(\bar{\gamma}_{k+1} u_{k+1})^H (\beta_{k+1} v_{k+1}) = \gamma_{k+1} \beta_{k+1}$, we can determine the coefficients of $T_{k+1,k}$ and $T_{k,k+1}^H$ as follows:

\[\begin{align*} \alpha_k &= \dfrac{1}{\gamma_k \beta_k} \langle~Q_{k-1}(A^H) c \, , \, A P_{k-1}(A) b~\rangle \\ &= \dfrac{1}{\gamma_k \beta_k} \langle~c \, , \, \bar{Q}_{k-1}(A) A P_{k-1}(A) b~\rangle, \\ & \\ \beta_{k+1} \gamma_{k+1} &= \langle~Q_k(A^H) c \, , \, P_k(A) b~\rangle \\ &= \langle~c \, , \, \bar{Q}_k(A) P_k(A) b~\rangle. -\end{align*}\]

Arnoldi

arnoldi

After $k$ iterations of the Arnoldi process, the situation may be summarized as

\[\begin{align*} A V_k &= V_k H_k + h_{k+1,k} v_{k+1} e_k^T = V_{k+1} H_{k+1,k}, \\ V_k^H V_k &= I_k, \end{align*}\]

where $V_k$ is an orthonormal basis of the Krylov subspace $\mathcal{K}_k (A,b)$,

\[H_k =
\begin{bmatrix}
  h_{1,1} & h_{1,2} & \ldots & h_{1,k}   \\
  h_{2,1} & \ddots  & \ddots & \vdots    \\
          & \ddots  & \ddots & h_{k-1,k} \\
          &         & h_{k,k-1} & h_{k,k}
\end{bmatrix}
, \qquad
H_{k+1,k} =
\begin{bmatrix}
  H_{k} \\
  h_{k+1,k} e_{k}^T
\end{bmatrix}.\]

The function arnoldi returns $V_{k+1}$, $\beta$ and $H_{k+1,k}$.

Related methods: DIOM, FOM, DQGMRES, GMRES and FGMRES.

Note

The Arnoldi process coincides with the Hermitian Lanczos process when $A$ is Hermitian.

Krylov.arnoldiMethod
V, β, H = arnoldi(A, b, k; reorthogonalization=false)

Input arguments

  • A: a linear operator that models a square matrix of dimension n;
  • b: a vector of length n;
  • k: the number of iterations of the Arnoldi process.

Keyword arguments

  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors.

Output arguments

  • V: a dense n × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • H: a dense (k+1) × k upper Hessenberg matrix.

Reference

W. E. Arnoldi, The principle of minimized iterations in the solution of the matrix eigenvalue problem, Quarterly of Applied Mathematics, 9, pp. 17–29, 1951.

source
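
A minimal numerical check on small random data of our own (illustrative, not taken from the package), relying only on the signature above:

using Krylov, LinearAlgebra

n, k = 8, 4
A = rand(n, n); b = rand(n)   # illustrative data
V, β, H = arnoldi(A, b, k)
norm(A * V[:, 1:k] - V * H)   # ≈ 0: A V_k = V_{k+1} H_{k+1,k}
norm(V' * V - I)              # ≈ 0: the basis is orthonormal
β * V[:, 1] ≈ b               # true, by definition of β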

Golub-Kahan

golub_kahan

After $k$ iterations of the Golub-Kahan bidiagonalization process, the situation may be summarized as

\[\begin{align*} + A V_k &= U_{k+1} B_k, \\ A^H U_{k+1} &= V_k B_k^H + \bar{\alpha}_{k+1} v_{k+1} e_{k+1}^T = V_{k+1} L_{k+1}^H, \\ V_k^H V_k &= U_k^H U_k = I_k, \end{align*}\]

where $V_k$ and $U_k$ are bases of the Krylov subspaces $\mathcal{K}_k (A^HA,A^Hb)$ and $\mathcal{K}_k (AA^H,b)$, respectively,

\[L_k =
\begin{bmatrix}
  \alpha_1 &          &         &          \\
  \beta_2  & \alpha_2 &         &          \\
           & \ddots   & \ddots  &          \\
           &          & \beta_k & \alpha_k
\end{bmatrix}
, \qquad
B_k =
\begin{bmatrix}
  L_{k} \\
  \beta_{k+1} e_{k}^T
\end{bmatrix}.\]

Note that depending on how we normalize the vectors that compose $V_k$ and $U_k$, $L_k$ can be a real bidiagonal matrix even if $A$ is a complex matrix.

The function golub_kahan returns $V_{k+1}$, $U_{k+1}$, $\beta_1$ and $L_{k+1}$.

Related methods: LNLQ, CRAIG, CRAIGMR, LSLQ, LSQR and LSMR.

Note

The Golub-Kahan process coincides with the Hermitian Lanczos process applied to the normal equations $A^HA x = A^Hb$ and $AA^H x = b$. It is also related to the Hermitian Lanczos process applied to $\begin{bmatrix} 0 & A \\ A^H & 0 \end{bmatrix}$ with initial vector $\begin{bmatrix} b \\ 0 \end{bmatrix}$.

Krylov.golub_kahanMethod
V, U, β, L = golub_kahan(A, b, k)

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • k: the number of iterations of the Golub-Kahan process.

Output arguments

  • V: a dense n × (k+1) matrix;
  • U: a dense m × (k+1) matrix;
  • β: a coefficient such that βu₁ = b;
  • L: a sparse (k+1) × (k+1) lower bidiagonal matrix.

References

G. H. Golub and W. Kahan, Calculating the singular values and pseudo-inverse of a matrix, SIAM Journal on Numerical Analysis, 2(2), pp. 205–224, 1965.

C. C. Paige, Bidiagonalization of matrices and solution of linear equations, SIAM Journal on Numerical Analysis, 11(1), pp. 197–209, 1974.

source
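
A minimal numerical check on small random data of our own (illustrative, not taken from the package); $B_k$ is recovered as the first $k$ columns of $L_{k+1}$:

using Krylov, LinearAlgebra

m, n, k = 10, 6, 4
A = rand(m, n); b = rand(m)    # illustrative data
V, U, β, L = golub_kahan(A, b, k)
Bk = L[:, 1:k]                 # B_k: the first k columns of L_{k+1}
norm(A * V[:, 1:k] - U * Bk)   # ≈ 0: A V_k = U_{k+1} B_k
norm(A' * U - V * L')          # ≈ 0: A^H U_{k+1} = V_{k+1} L_{k+1}^H
β * U[:, 1] ≈ b                # true, by definition of β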

Saunders-Simon-Yip

saunders_simon_yip

After $k$ iterations of the Saunders-Simon-Yip process (also named the orthogonal tridiagonalization process), the situation may be summarized as

\[\begin{align*} + A U_k &= V_k T_k + \beta_{k+1} v_{k+1} e_k^T = V_{k+1} T_{k+1,k}, \\ A^H V_k &= U_k T_k^H + \bar{\gamma}_{k+1} u_{k+1} e_k^T = U_{k+1} T_{k,k+1}^H, \\ V_k^H V_k &= U_k^H U_k = I_k, \end{align*}\]

where $\begin{bmatrix} V_k & 0 \\ 0 & U_k \end{bmatrix}$ is an orthonormal basis of the block Krylov subspace $\mathcal{K}^{\square}_k \left(\begin{bmatrix} 0 & A \\ A^H & 0 \end{bmatrix}, \begin{bmatrix} b & 0 \\ 0 & c \end{bmatrix}\right)$,

\[T_k =
\begin{bmatrix}
  \alpha_1 & \gamma_2 &         &          \\
  \beta_2  & \alpha_2 & \ddots  &          \\
           & \ddots   & \ddots  & \gamma_k \\
           &          & \beta_k & \alpha_k
\end{bmatrix}
, \qquad
T_{k+1,k} =
\begin{bmatrix}
  T_{k} \\
  \beta_{k+1} e_{k}^T
\end{bmatrix}
, \qquad
T_{k,k+1} =
\begin{bmatrix}
  T_{k} & \gamma_{k+1} e_{k}
\end{bmatrix}.\]

The function saunders_simon_yip returns $V_{k+1}$, $\beta_1$, $T_{k+1,k}$, $U_{k+1}$, $\bar{\gamma}_1$ and $T_{k,k+1}^H$.

Related methods: USYMLQ, USYMQR, TriLQR, TriCG and TriMR.

Note

The Saunders-Simon-Yip process is equivalent to the block-Lanczos process applied to $\begin{bmatrix} 0 & A \\ A^H & 0 \end{bmatrix}$ with initial matrix $\begin{bmatrix} b & 0 \\ 0 & c \end{bmatrix}$.

Krylov.saunders_simon_yipMethod
V, β, T, U, γᴴ, Tᴴ = saunders_simon_yip(A, b, c, k)

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n;
  • k: the number of iterations of the Saunders-Simon-Yip process.

Output arguments

  • V: a dense m × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • T: a sparse (k+1) × k tridiagonal matrix;
  • U: a dense n × (k+1) matrix;
  • γᴴ: a coefficient such that γᴴu₁ = c;
  • Tᴴ: a sparse (k+1) × k tridiagonal matrix.

Reference

M. A. Saunders, H. D. Simon and E. L. Yip, Two conjugate-gradient-type methods for unsymmetric linear equations, SIAM Journal on Numerical Analysis, 25(4), pp. 927–940, 1988.

source
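
A minimal numerical check on small random data of our own (illustrative, not taken from the package):

using Krylov, LinearAlgebra

m, n, k = 10, 6, 4
A = rand(m, n); b = rand(m); c = rand(n)   # illustrative data
V, β, T, U, γᴴ, Tᴴ = saunders_simon_yip(A, b, c, k)
norm(A * U[:, 1:k] - V * T)     # ≈ 0: A U_k = V_{k+1} T_{k+1,k}
norm(A' * V[:, 1:k] - U * Tᴴ)   # ≈ 0: A^H V_k = U_{k+1} T_{k,k+1}^H
norm(V' * V - I)                # ≈ 0: both bases are orthonormal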

Montoison-Orban

montoison_orban

After $k$ iterations of the Montoison-Orban process (also named the orthogonal Hessenberg reduction process), the situation may be summarized as

\[\begin{align*}
A U_k &= V_k H_k + h_{k+1,k} v_{k+1} e_k^T = V_{k+1} H_{k+1,k}, \\
B V_k &= U_k F_k + f_{k+1,k} u_{k+1} e_k^T = U_{k+1} F_{k+1,k}, \\
V_k^H V_k &= U_k^H U_k = I_k,
\end{align*}\]

where $\begin{bmatrix} V_k & 0 \\ 0 & U_k \end{bmatrix}$ is an orthonormal basis of the block Krylov subspace $\mathcal{K}^{\square}_k \left(\begin{bmatrix} 0 & A \\ B & 0 \end{bmatrix}, \begin{bmatrix} b & 0 \\ 0 & c \end{bmatrix}\right)$,

\[H_k =
\begin{bmatrix}
  h_{1,1} & h_{1,2} & \ldots & h_{1,k}   \\
  h_{2,1} & \ddots  & \ddots & \vdots    \\
          & \ddots  & \ddots & h_{k-1,k} \\
          &         & h_{k,k-1} & h_{k,k}
\end{bmatrix}
, \qquad
F_k =
\begin{bmatrix}
  f_{1,1} & f_{1,2} & \ldots & f_{1,k}   \\
  f_{2,1} & \ddots  & \ddots & \vdots    \\
          & \ddots  & \ddots & f_{k-1,k} \\
          &         & f_{k,k-1} & f_{k,k}
\end{bmatrix},\]

\[H_{k+1,k} =
\begin{bmatrix}
  H_{k} \\
  h_{k+1,k} e_{k}^T
\end{bmatrix}
, \qquad
F_{k+1,k} =
\begin{bmatrix}
  F_{k} \\
  f_{k+1,k} e_{k}^T
\end{bmatrix}.\]

The function montoison_orban returns $V_{k+1}$, $\beta$, $H_{k+1,k}$, $U_{k+1}$, $\gamma$ and $F_{k+1,k}$.

Related methods: GPMR.

Note

The Montoison-Orban process is equivalent to the block-Arnoldi process applied to $\begin{bmatrix} 0 & A \\ B & 0 \end{bmatrix}$ with initial matrix $\begin{bmatrix} b & 0 \\ 0 & c \end{bmatrix}$. It also coincides with the Saunders-Simon-Yip process when $B = A^H$.

Krylov.montoison_orbanMethod
V, β, H, U, γ, F = montoison_orban(A, B, b, c, k; reorthogonalization=false)

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • B: a linear operator that models a matrix of dimension n × m;
  • b: a vector of length m;
  • c: a vector of length n;
  • k: the number of iterations of the Montoison-Orban process.

Keyword arguments

  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors.

Output arguments

  • V: a dense m × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • H: a dense (k+1) × k upper Hessenberg matrix;
  • U: a dense n × (k+1) matrix;
  • γ: a coefficient such that γu₁ = c;
  • F: a dense (k+1) × k upper Hessenberg matrix.

Reference

A. Montoison and D. Orban, GPMR: An Iterative Method for Unsymmetric Partitioned Linear Systems, SIAM Journal on Matrix Analysis and Applications, 44(1), pp. 293–311, 2023.

source
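
A minimal numerical check on small random data of our own (illustrative, not taken from the package); taking B = A' recovers the Saunders-Simon-Yip process, as noted above:

using Krylov, LinearAlgebra

m, n, k = 10, 6, 4
A = rand(m, n); B = rand(n, m)   # illustrative data; B plays the role of Aᴴ when B = A'
b = rand(m); c = rand(n)
V, β, H, U, γ, F = montoison_orban(A, B, b, c, k)
norm(A * U[:, 1:k] - V * H)      # ≈ 0: A U_k = V_{k+1} H_{k+1,k}
norm(B * V[:, 1:k] - U * F)      # ≈ 0: B V_k = U_{k+1} F_{k+1,k}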

Reference · Krylov.jl

Reference

Index

Krylov.niterationsFunction
niterations(solver)

Return the number of iterations performed by the Krylov method associated with solver.

source
Krylov.AprodFunction
Aprod(solver)

Return the number of operator-vector products with A performed by the Krylov method associated with solver.

source
Krylov.AtprodFunction
Atprod(solver)

Return the number of operator-vector products with A' performed by the Krylov method associated with solver.

source
Krylov.extract_parametersFunction
arguments = extract_parameters(ex::Expr)

Extract the arguments of an expression that is a keyword parameter tuple. Implementation suggested by Mitchell J. O'Sullivan (@mosullivan93).

source
Base.showFunction
show(io, solver; show_stats=true)

Statistics of solver are displayed if show_stats is set to true.

source
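
A short sketch of these utilities, assuming the in-place interface (CgSolver, cg!) documented in the solvers section of this manual; the linear system below is illustrative:

using Krylov, LinearAlgebra

n = 10
A = rand(n, n); A = A' * A + I   # Hermitian positive-definite test matrix
b = rand(n)
solver = CgSolver(A, b)          # workspace for the in-place method cg!
cg!(solver, A, b)
Krylov.niterations(solver)       # number of CG iterations performed
Krylov.Aprod(solver)             # number of operator-vector products with A
show(stdout, solver)             # statistics are displayed since show_stats=true by default
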
diff --git a/dev/search_index.js b/dev/search_index.js index 5a7d38be3..ded0d1178 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"examples/minares/","page":"MINARES","title":"MINARES","text":"using Krylov, MatrixMarket, SuiteSparseMatrixCollection\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"GHS_indef\", \"laser\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nn = matrix.nrows[1]\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = ones(n)\n\n# Solve Ax = b.\nx, stats = minares(A, b)\nshow(stats)\nr = b - A * x\nAr = A * r\n@printf(\"Relative A-residual: %8.1e\\n\", norm(A * r) / norm(A * b))","category":"page"},{"location":"examples/trimr/","page":"TriMR","title":"TriMR","text":"using Krylov, LinearOperators, LDLFactorizations\nusing LinearAlgebra, Printf, SparseArrays\n\n# Identity matrix.\neye(n::Int) = sparse(1.0 * I, n, n)\n\n# Saddle-point systems\nn = m = 5\nA = [2^(i/j)*j + (-1)^(i-j) * n*(i-1) for i = 1:n, j = 1:n]\nb = ones(n)\nD = diagm(0 => [2.0 * i for i = 1:n])\nm, n = size(A)\nc = -b\n\n# [D A] [x] = [b]\n# [Aᴴ 0] [y] [c]\nllt_D = cholesky(D)\nopD⁻¹ = LinearOperator(Float64, 5, 5, true, true, (y, v) -> ldiv!(y, llt_D, v))\nopH⁻¹ = BlockDiagonalOperator(opD⁻¹, eye(n))\n(x, y, stats) = trimr(A, b, c, M=opD⁻¹, sp=true)\nK = [D A; A' zeros(n,n)]\nB = [b; c]\nr = B - K * [x; y]\nresid = sqrt(dot(r, opH⁻¹ * r))\n@printf(\"TriMR: Relative residual: %8.1e\\n\", resid)\n\n# Symmetric quasi-definite systems\nn = m = 5\nA = [2^(i/j)*j + (-1)^(i-j) * n*(i-1) for i = 1:n, j = 1:n]\nb = ones(n)\nM = diagm(0 => [3.0 * i for i = 1:n])\nN = diagm(0 => [5.0 * i for i = 1:n])\nc = -b\n\n# [I A] [x] = [b]\n# [Aᴴ -I] [y] [c]\n(x, y, stats) = trimr(A, b, c)\nK = [eye(m) A; A' -eye(n)]\nB = [b; c]\nr = B - K * [x; y]\nresid = norm(r)\n@printf(\"TriMR: Relative residual: %8.1e\\n\", resid)\n\n# [M A] [x] = [b]\n# [Aᴴ -N] [y] [c]\nldlt_M = ldl(M)\nldlt_N = ldl(N)\nopM⁻¹ = LinearOperator(Float64, size(M,1), size(M,2), true, true, (y, v) -> ldiv!(y, ldlt_M, v))\nopN⁻¹ = LinearOperator(Float64, size(N,1), size(N,2), true, true, (y, v) -> ldiv!(y, ldlt_N, v))\nopH⁻¹ = BlockDiagonalOperator(opM⁻¹, opN⁻¹)\n(x, y, stats) = trimr(A, b, c, M=opM⁻¹, N=opN⁻¹, verbose=1)\nK = [M A; A' -N]\nB = [b; c]\nr = B - K * [x; y]\nresid = sqrt(dot(r, opH⁻¹ * r))\n@printf(\"TriMR: Relative residual: %8.1e\\n\", resid)","category":"page"},{"location":"solvers/spd/","page":"Hermitian positive definite linear systems","title":"Hermitian positive definite linear systems","text":"# Hermitian positive definite linear systems","category":"page"},{"location":"solvers/spd/#CG","page":"Hermitian positive definite linear systems","title":"CG","text":"","category":"section"},{"location":"solvers/spd/","page":"Hermitian positive definite linear systems","title":"Hermitian positive definite linear systems","text":"cg\ncg!","category":"page"},{"location":"solvers/spd/#Krylov.cg","page":"Hermitian positive definite linear systems","title":"Krylov.cg","text":"(x, stats) = cg(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false, radius::T=zero(T),\n linesearch::Bool=false, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. 
FC is T or Complex{T}.\n\n(x, stats) = cg(A, b, x0::AbstractVector; kwargs...)\n\nCG can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nThe conjugate gradient method to solve the Hermitian linear system Ax = b of size n.\n\nThe method does not abort if A is not definite. M also indicates the weighted norm in which residuals are measured.\n\nInput arguments\n\nA: a linear operator that models a Hermitian positive definite matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nradius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;\nlinesearch: if true, indicate that the solution is to be used in an inexact Newton method with linesearch. If negative curvature is detected at iteration k > 0, the solution of iteration k-1 is returned. If negative curvature is detected at iteration 0, the right-hand side is returned (i.e., the negative gradient);\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nM. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards, 49(6), pp. 409–436, 1952.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#Krylov.cg!","page":"Hermitian positive definite linear systems","title":"Krylov.cg!","text":"solver = cg!(solver::CgSolver, A, b; kwargs...)\nsolver = cg!(solver::CgSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of cg.\n\nSee CgSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#CR","page":"Hermitian positive definite linear systems","title":"CR","text":"","category":"section"},{"location":"solvers/spd/","page":"Hermitian positive definite linear systems","title":"Hermitian positive definite linear systems","text":"cr\ncr!","category":"page"},{"location":"solvers/spd/#Krylov.cr","page":"Hermitian positive definite linear systems","title":"Krylov.cr","text":"(x, stats) = cr(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false, radius::T=zero(T),\n linesearch::Bool=false, γ::T=√eps(T),\n atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. 
FC is T or Complex{T}.\n\n(x, stats) = cr(A, b, x0::AbstractVector; kwargs...)\n\nCR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nA truncated version of Stiefel’s Conjugate Residual method to solve the Hermitian linear system Ax = b of size n or the least-squares problem min ‖b - Ax‖ if A is singular. The matrix A must be Hermitian semi-definite. M also indicates the weighted norm in which residuals are measured.\n\nInput arguments\n\nA: a linear operator that models a Hermitian positive definite matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nradius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;\nlinesearch: if true, indicate that the solution is to be used in an inexact Newton method with linesearch. If negative curvature is detected at iteration k > 0, the solution of iteration k-1 is returned. If negative curvature is detected at iteration 0, the right-hand side is returned (i.e., the negative gradient);\nγ: tolerance to determine that the curvature of the quadratic model is nonpositive;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nM. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards, 49(6), pp. 409–436, 1952.\nE. Stiefel, Relaxationsmethoden bester Strategie zur Losung linearer Gleichungssysteme, Commentarii Mathematici Helvetici, 29(1), pp. 157–179, 1955.\nD. G. Luenberger, The conjugate residual method for constrained minimization problems, SIAM Journal on Numerical Analysis, 7(3), pp. 390–398, 1970.\nM-A. Dahito and D. Orban, The Conjugate Residual Method in Linesearch and Trust-Region Methods, SIAM Journal on Optimization, 29(3), pp. 
1988–2025, 2019.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#Krylov.cr!","page":"Hermitian positive definite linear systems","title":"Krylov.cr!","text":"solver = cr!(solver::CrSolver, A, b; kwargs...)\nsolver = cr!(solver::CrSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of cr.\n\nSee CrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#CAR","page":"Hermitian positive definite linear systems","title":"CAR","text":"","category":"section"},{"location":"solvers/spd/","page":"Hermitian positive definite linear systems","title":"Hermitian positive definite linear systems","text":"car\ncar!","category":"page"},{"location":"solvers/spd/#Krylov.car","page":"Hermitian positive definite linear systems","title":"Krylov.car","text":"(x, stats) = car(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false,\n atol::T=√eps(T), rtol::T=√eps(T),\n itmax::Int=0, timemax::Float64=Inf,\n verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = car(A, b, x0::AbstractVector; kwargs...)\n\nCAR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nCAR solves the Hermitian and positive definite linear system Ax = b of size n. CAR minimizes ‖Arₖ‖₂ when M = Iₙ and ‖AMrₖ‖M otherwise. The estimates computed every iteration are ‖Mrₖ‖₂ and ‖AMrₖ‖M.\n\nInput arguments\n\nA: a linear operator that models a Hermitian positive definite matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nA. Montoison, D. Orban and M. A. 
Saunders, MinAres: An Iterative Solver for Symmetric Linear Systems, Cahier du GERAD G-2023-40, GERAD, Montréal, 2023.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#Krylov.car!","page":"Hermitian positive definite linear systems","title":"Krylov.car!","text":"solver = car!(solver::CarSolver, A, b; kwargs...)\nsolver = car!(solver::CarSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of car.\n\nSee CarSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#CG-LANCZOS","page":"Hermitian positive definite linear systems","title":"CG-LANCZOS","text":"","category":"section"},{"location":"solvers/spd/","page":"Hermitian positive definite linear systems","title":"Hermitian positive definite linear systems","text":"cg_lanczos\ncg_lanczos!","category":"page"},{"location":"solvers/spd/#Krylov.cg_lanczos","page":"Hermitian positive definite linear systems","title":"Krylov.cg_lanczos","text":"(x, stats) = cg_lanczos(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false,\n check_curvature::Bool=false, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = cg_lanczos(A, b, x0::AbstractVector; kwargs...)\n\nCG-LANCZOS can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nThe Lanczos version of the conjugate gradient method to solve the Hermitian linear system Ax = b of size n.\n\nThe method does not abort if A is not definite.\n\nInput arguments\n\nA: a linear operator that models a Hermitian matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\ncheck_curvature: if true, check that the curvature of the quadratic along the search direction is positive, and abort if not, unless linesearch is also true;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a LanczosStats structure.\n\nReferences\n\nA. Frommer and P. Maass, Fast CG-Based Methods for Tikhonov-Phillips Regularization, SIAM Journal on Scientific Computing, 20(5), pp. 1831–1850, 1999.\nC. C. Paige and M. A. Saunders, Solution of Sparse Indefinite Systems of Linear Equations, SIAM Journal on Numerical Analysis, 12(4), pp. 
617–629, 1975.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#Krylov.cg_lanczos!","page":"Hermitian positive definite linear systems","title":"Krylov.cg_lanczos!","text":"solver = cg_lanczos!(solver::CgLanczosSolver, A, b; kwargs...)\nsolver = cg_lanczos!(solver::CgLanczosSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of cg_lanczos.\n\nSee CgLanczosSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#CG-LANCZOS-SHIFT","page":"Hermitian positive definite linear systems","title":"CG-LANCZOS-SHIFT","text":"","category":"section"},{"location":"solvers/spd/","page":"Hermitian positive definite linear systems","title":"Hermitian positive definite linear systems","text":"cg_lanczos_shift\ncg_lanczos_shift!","category":"page"},{"location":"solvers/spd/#Krylov.cg_lanczos_shift","page":"Hermitian positive definite linear systems","title":"Krylov.cg_lanczos_shift","text":"(x, stats) = cg_lanczos_shift(A, b::AbstractVector{FC}, shifts::AbstractVector{T};\n M=I, ldiv::Bool=false,\n check_curvature::Bool=false, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nThe Lanczos version of the conjugate gradient method to solve a family of shifted systems\n\n(A + αI) x = b (α = α₁, ..., αₚ)\n\nof size n. The method does not abort if A + αI is not definite.\n\nInput arguments\n\nA: a linear operator that models a Hermitian matrix of dimension n;\nb: a vector of length n;\nshifts: a vector of length p.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\ncheck_curvature: if true, check that the curvature of the quadratic along the search direction is positive, and abort if not, unless linesearch is also true;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a vector of p dense vectors, each one of length n;\nstats: statistics collected on the run in a LanczosShiftStats structure.\n\nReferences\n\nA. Frommer and P. Maass, Fast CG-Based Methods for Tikhonov-Phillips Regularization, SIAM Journal on Scientific Computing, 20(5), pp. 1831–1850, 1999.\nC. C. Paige and M. A. Saunders, Solution of Sparse Indefinite Systems of Linear Equations, SIAM Journal on Numerical Analysis, 12(4), pp. 
617–629, 1975.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#Krylov.cg_lanczos_shift!","page":"Hermitian positive definite linear systems","title":"Krylov.cg_lanczos_shift!","text":"solver = cg_lanczos!(solver::CgLanczosShiftSolver, A, b, shifts; kwargs...)\n\nwhere kwargs are keyword arguments of cg_lanczos_shift.\n\nSee CgLanczosShiftSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"examples/craigmr/","page":"CRAIGMR","title":"CRAIGMR","text":"using Krylov, HarwellRutherfordBoeing, SuiteSparseMatrixCollection\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"wm1\")\npath = fetch_ssmc(matrix, format=\"RB\")\n\nA = RutherfordBoeingData(joinpath(path[1], \"$(matrix.name[1]).rb\")).data\n(m, n) = size(A)\n@printf(\"System size: %d rows and %d columns\\n\", m, n)\n\nx_exact = A' * ones(m)\nx_exact_norm = norm(x_exact)\nx_exact /= x_exact_norm\nb = A * x_exact\n(x, y, stats) = craigmr(A, b)\nshow(stats)\nresid = norm(A * x - b) / norm(b)\n@printf(\"CRAIGMR: Relative residual: %7.1e\\n\", resid)\n@printf(\"CRAIGMR: ‖x - x*‖₂: %7.1e\\n\", norm(x - x_exact))\n@printf(\"CRAIGMR: %d iterations\\n\", length(stats.residuals))","category":"page"},{"location":"examples/symmlq/","page":"SYMMLQ","title":"SYMMLQ","text":"using Krylov\nusing LinearAlgebra, Printf\n\nA = diagm([1.0; 2.0; 3.0; 0.0])\nn = size(A, 1)\nb = [1.0; 2.0; 3.0; 0.0]\nb_norm = norm(b)\n\n# SYMMLQ returns the minimum-norm solution of symmetric, singular and consistent systems\n(x, stats) = symmlq(A, b, transfer_to_cg=false);\nr = b - A * x;\n\n@printf(\"Residual r: %s\\n\", Krylov.vec2str(r))\n@printf(\"Relative residual norm ‖r‖: %8.1e\\n\", norm(r) / b_norm)\n@printf(\"Solution x: %s\\n\", Krylov.vec2str(x))\n@printf(\"Minimum-norm solution? 
%s\\n\", x ≈ [1.0; 1.0; 1.0; 0.0])","category":"page"},{"location":"examples/dqgmres/","page":"DQGMRES","title":"DQGMRES","text":"using Krylov, LinearOperators, ILUZero, MatrixMarket\nusing LinearAlgebra, Printf, SuiteSparseMatrixCollection\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"Simon\", \"raefsky1\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nn = matrix.nrows[1]\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = A * ones(n)\n\nF = ilu0(A)\n\n@printf(\"nnz(ILU) / nnz(A): %7.1e\\n\", nnz(F) / nnz(A))\n\n# Solve Ax = b with DQGMRES and an ILU(0) preconditioner\n# Remark: DIOM, FOM and GMRES can be used in the same way\nopM = LinearOperator(Float64, n, n, false, false, (y, v) -> forward_substitution!(y, F, v))\nopN = LinearOperator(Float64, n, n, false, false, (y, v) -> backward_substitution!(y, F, v))\nopP = LinearOperator(Float64, n, n, false, false, (y, v) -> ldiv!(y, F, v))\n\n# Without preconditioning\nx, stats = dqgmres(A, b, memory=50, history=true)\nr = b - A * x\n@printf(\"[Without preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Without preconditioning] Number of iterations: %3d\\n\", length(stats.residuals) - 1)\n\n# Split preconditioning\nx, stats = dqgmres(A, b, memory=50, history=true, M=opM, N=opN)\nr = b - A * x\n@printf(\"[Split preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Split preconditioning] Number of iterations: %3d\\n\", length(stats.residuals) - 1)\n\n# Left preconditioning\nx, stats = dqgmres(A, b, memory=50, history=true, M=opP)\nr = b - A * x\n@printf(\"[Left preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Left preconditioning] Number of iterations: %3d\\n\", length(stats.residuals) - 1)\n\n# Right preconditioning\nx, stats = dqgmres(A, b, memory=50, history=true, N=opP)\nr = b - A * x\n@printf(\"[Right preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Right preconditioning] Number of iterations: %3d\\n\", length(stats.residuals) - 1)","category":"page"},{"location":"solvers/sp_sqd/","page":"Saddle-point and Hermitian quasi-definite systems","title":"Saddle-point and Hermitian quasi-definite systems","text":"# Saddle-point and Hermitian quasi-definite systems","category":"page"},{"location":"solvers/sp_sqd/#TriCG","page":"Saddle-point and Hermitian quasi-definite systems","title":"TriCG","text":"","category":"section"},{"location":"solvers/sp_sqd/","page":"Saddle-point and Hermitian quasi-definite systems","title":"Saddle-point and Hermitian quasi-definite systems","text":"tricg\ntricg!","category":"page"},{"location":"solvers/sp_sqd/#Krylov.tricg","page":"Saddle-point and Hermitian quasi-definite systems","title":"Krylov.tricg","text":"(x, y, stats) = tricg(A, b::AbstractVector{FC}, c::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n spd::Bool=false, snd::Bool=false,\n flip::Bool=false, τ::T=one(T),\n ν::T=-one(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. 
FC is T or Complex{T}.\n\n(x, y, stats) = tricg(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)\n\nTriCG can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.\n\nGiven a matrix A of dimension m × n, TriCG solves the Hermitian linear system\n\n[ τE A ] [ x ] = [ b ]\n[ Aᴴ νF ] [ y ] [ c ],\n\nof size (n+m) × (n+m) where τ and ν are real numbers, E = M⁻¹ ≻ 0 and F = N⁻¹ ≻ 0. b and c must both be nonzero. TriCG could breakdown if τ = 0 or ν = 0. It's recommended to use TriMR in these cases.\n\nBy default, TriCG solves Hermitian and quasi-definite linear systems with τ = 1 and ν = -1.\n\nTriCG is based on the preconditioned orthogonal tridiagonalization process and its relation with the preconditioned block-Lanczos process.\n\n[ M 0 ]\n[ 0 N ]\n\nindicates the weighted norm in which residuals are measured. It's the Euclidean norm when M and N are identity operators.\n\nTriCG stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m;\nc: a vector of length n.\n\nOptional arguments\n\nx0: a vector of length m that represents an initial guess of the solution x;\ny0: a vector of length n that represents an initial guess of the solution y.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the partitioned system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the partitioned system;\nldiv: define whether the preconditioners use ldiv! or mul!;\nspd: if true, set τ = 1 and ν = 1 for Hermitian and positive-definite linear system;\nsnd: if true, set τ = -1 and ν = -1 for Hermitian and negative-definite linear systems;\nflip: if true, set τ = -1 and ν = 1 for another known variant of Hermitian quasi-definite systems;\nτ and ν: diagonal scaling factors of the partitioned Hermitian linear system;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length m;\ny: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nA. Montoison and D. Orban, TriCG and TriMR: Two Iterative Methods for Symmetric Quasi-Definite Systems, SIAM Journal on Scientific Computing, 43(4), pp. 
2502–2525, 2021.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sp_sqd/#Krylov.tricg!","page":"Saddle-point and Hermitian quasi-definite systems","title":"Krylov.tricg!","text":"solver = tricg!(solver::TricgSolver, A, b, c; kwargs...)\nsolver = tricg!(solver::TricgSolver, A, b, c, x0, y0; kwargs...)\n\nwhere kwargs are keyword arguments of tricg.\n\nSee TricgSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sp_sqd/#TriMR","page":"Saddle-point and Hermitian quasi-definite systems","title":"TriMR","text":"","category":"section"},{"location":"solvers/sp_sqd/","page":"Saddle-point and Hermitian quasi-definite systems","title":"Saddle-point and Hermitian quasi-definite systems","text":"trimr\ntrimr!","category":"page"},{"location":"solvers/sp_sqd/#Krylov.trimr","page":"Saddle-point and Hermitian quasi-definite systems","title":"Krylov.trimr","text":"(x, y, stats) = trimr(A, b::AbstractVector{FC}, c::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n spd::Bool=false, snd::Bool=false,\n flip::Bool=false, sp::Bool=false,\n τ::T=one(T), ν::T=-one(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, y, stats) = trimr(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)\n\nTriMR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.\n\nGiven a matrix A of dimension m × n, TriMR solves the symmetric linear system\n\n[ τE A ] [ x ] = [ b ]\n[ Aᴴ νF ] [ y ] [ c ],\n\nof size (n+m) × (n+m) where τ and ν are real numbers, E = M⁻¹ ≻ 0, F = N⁻¹ ≻ 0. b and c must both be nonzero. TriMR handles saddle-point systems (τ = 0 or ν = 0) and adjoint systems (τ = 0 and ν = 0) without any risk of breakdown.\n\nBy default, TriMR solves symmetric and quasi-definite linear systems with τ = 1 and ν = -1.\n\nTriMR is based on the preconditioned orthogonal tridiagonalization process and its relation with the preconditioned block-Lanczos process.\n\n[ M 0 ]\n[ 0 N ]\n\nindicates the weighted norm in which residuals are measured. It's the Euclidean norm when M and N are identity operators.\n\nTriMR stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m;\nc: a vector of length n.\n\nOptional arguments\n\nx0: a vector of length m that represents an initial guess of the solution x;\ny0: a vector of length n that represents an initial guess of the solution y.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the partitioned system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the partitioned system;\nldiv: define whether the preconditioners use ldiv! 
or mul!;\nspd: if true, set τ = 1 and ν = 1 for Hermitian and positive-definite linear system;\nsnd: if true, set τ = -1 and ν = -1 for Hermitian and negative-definite linear systems;\nflip: if true, set τ = -1 and ν = 1 for another known variant of Hermitian quasi-definite systems;\nsp: if true, set τ = 1 and ν = 0 for saddle-point systems;\nτ and ν: diagonal scaling factors of the partitioned Hermitian linear system;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length m;\ny: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nA. Montoison and D. Orban, TriCG and TriMR: Two Iterative Methods for Symmetric Quasi-Definite Systems, SIAM Journal on Scientific Computing, 43(4), pp. 2502–2525, 2021.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sp_sqd/#Krylov.trimr!","page":"Saddle-point and Hermitian quasi-definite systems","title":"Krylov.trimr!","text":"solver = trimr!(solver::TrimrSolver, A, b, c; kwargs...)\nsolver = trimr!(solver::TrimrSolver, A, b, c, x0, y0; kwargs...)\n\nwhere kwargs are keyword arguments of trimr.\n\nSee TrimrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"examples/car/","page":"CAR","title":"CAR","text":"using Krylov, MatrixMarket, SuiteSparseMatrixCollection\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"Nasa\", \"nasa2146\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nn = matrix.nrows[1]\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = ones(n)\nb_norm = norm(b)\n\n# Solve Ax = b.\n(x, stats) = car(A, b)\nshow(stats)\nr = b - A * x\n@printf(\"Relative residual: %8.1e\\n\", norm(r) / b_norm)","category":"page"},{"location":"examples/bicgstab/","page":"BICGSTAB","title":"BICGSTAB","text":"using Krylov, LinearOperators, IncompleteLU, HarwellRutherfordBoeing\nusing LinearAlgebra, Printf, SuiteSparseMatrixCollection, SparseArrays\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"sherman5\")\npath = fetch_ssmc(matrix, format=\"RB\")\n\nn = matrix.nrows[1]\nA = RutherfordBoeingData(joinpath(path[1], \"$(matrix.name[1]).rb\")).data\nb = A * ones(n)\n\nF = ilu(A, τ = 0.05)\n\n@printf(\"nnz(ILU) / nnz(A): %7.1e\\n\", nnz(F) / nnz(A))\n\n# Solve Ax = b with BICGSTAB and an incomplete LU factorization\n# Remark: CGS can be used in the same way\nopM = LinearOperator(Float64, n, n, false, false, (y, v) -> forward_substitution!(y, F, v))\nopN = LinearOperator(Float64, n, n, false, false, (y, v) -> backward_substitution!(y, F, v))\nopP = LinearOperator(Float64, n, n, false, false, (y, v) -> ldiv!(y, F, v))\n\n# Without preconditioning\nx, stats = bicgstab(A, b, history=true)\nr = b - A * x\n@printf(\"[Without preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Without preconditioning] 
Number of iterations: %3d\\n\", length(stats.residuals) - 1)\n\n# Split preconditioning\nx, stats = bicgstab(A, b, history=true, M=opM, N=opN)\nr = b - A * x\n@printf(\"[Split preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Split preconditioning] Number of iterations: %3d\\n\", length(stats.residuals) - 1)\n\n# Left preconditioning\nx, stats = bicgstab(A, b, history=true, M=opP)\nr = b - A * x\n@printf(\"[Left preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Left preconditioning] Number of iterations: %3d\\n\", length(stats.residuals) - 1)\n\n# Right preconditioning\nx, stats = bicgstab(A, b, history=true, N=opP)\nr = b - A * x\n@printf(\"[Right preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Right preconditioning] Number of iterations: %3d\\n\", length(stats.residuals) - 1)","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"The most expensive procedures in Krylov methods are matrix-vector products (y ← Ax) and vector operations (dot products, vector norms, y ← αx + βy). Therefore they directly affect the efficiency of the methods provided by Krylov.jl. In this section, we present various optimizations based on multithreading to speed up these procedures. Multithreading is a form of shared memory parallelism that makes use of multiple cores to perform tasks.","category":"page"},{"location":"tips/#Multi-threaded-BLAS-and-LAPACK-operations","page":"Performance tips","title":"Multi-threaded BLAS and LAPACK operations","text":"","category":"section"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"OPENBLAS_NUM_THREADS=N julia # Set the number of OpenBLAS threads to N\nMKL_NUM_THREADS=N julia # Set the number of MKL threads to N","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"If you don't know the maximum number of threads available on your computer, you can obtain it with","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"NMAX = Sys.CPU_THREADS","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"and define the number of OpenBLAS/MKL threads at runtime with","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"BLAS.set_num_threads(N) # 1 ≤ N ≤ NMAX\nBLAS.get_num_threads()","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"The recommended number of BLAS threads is the number of physical and not logical cores, which is in general N = NMAX / 2 if your CPU supports simultaneous multithreading (SMT).","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"By default Julia ships with OpenBLAS but it's also possible to use Intel MKL BLAS and LAPACK with MKL.jl. If your operating system is MacOS 13.4 or later, it's recommended to use Accelerate BLAS and LAPACK with AppleAccelerate.jl.","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"using LinearAlgebra\nBLAS.get_config() # BLAS.vendor() for Julia 1.6","category":"page"},{"location":"tips/#Multi-threaded-sparse-matrix-vector-products","page":"Performance tips","title":"Multi-threaded sparse matrix-vector products","text":"","category":"section"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"For sparse matrices, the Julia implementation of mul! 
of SparseArrays library is not parallelized. A significant speed-up can be observed with the multhreaded mul! of MKLSparse.jl or ThreadedSparseCSR.jl.","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"It's also possible to implement a generic multithreaded julia version. For instance, the following function can be used for symmetric matrices","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"using Base.Threads\n\nfunction threaded_mul!(y::Vector{T}, A::SparseMatrixCSC{T}, x::Vector{T}) where T <: Number\n A.m == A.n || error(\"A is not a square matrix!\")\n @threads for i = 1 : A.n\n tmp = zero(T)\n @inbounds for j = A.colptr[i] : (A.colptr[i+1] - 1)\n tmp += A.nzval[j] * x[A.rowval[j]]\n end\n @inbounds y[i] = tmp\n end\n return y\nend","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"and wrapped inside a linear operator to solve symmetric linear systems","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"using LinearOperators\n\nn, m = size(A)\nsym = herm = true\nT = eltype(A)\nopA = LinearOperator(T, n, m, sym, herm, (y, v) -> threaded_mul!(y, A, v))","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"To enable multi-threading with Julia, you can start julia with the environment variable JULIA_NUM_THREADS or the options -t and --threads","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"julia -t auto # alternative: --threads auto\njulia -t N # alternative: --threads N\n\nJULIA_NUM_THREADS=N julia","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"Thereafter, you can verify the number of threads usable by Julia","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"using Base.Threads\nnthreads()","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"The following benchmarks illustrate the time required in seconds to compute 1000 sparse matrix-vector products with symmetric matrices of the SuiteSparse Matrix Collection. 
The computer used for the benchmarks has 2 physical cores and Julia was launched with JULIA_NUM_THREADS=2.","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"(Image: benchmarks)","category":"page"},{"location":"solvers/gsp/","page":"Generalized saddle-point and non-Hermitian partitioned systems","title":"Generalized saddle-point and non-Hermitian partitioned systems","text":"# Generalized saddle-point and non-Hermitian partitioned systems","category":"page"},{"location":"solvers/gsp/#GPMR","page":"Generalized saddle-point and non-Hermitian partitioned systems","title":"GPMR","text":"","category":"section"},{"location":"solvers/gsp/","page":"Generalized saddle-point and non-Hermitian partitioned systems","title":"Generalized saddle-point and non-Hermitian partitioned systems","text":"gpmr\ngpmr!","category":"page"},{"location":"solvers/gsp/#Krylov.gpmr","page":"Generalized saddle-point and non-Hermitian partitioned systems","title":"Krylov.gpmr","text":"(x, y, stats) = gpmr(A, B, b::AbstractVector{FC}, c::AbstractVector{FC};\n memory::Int=20, C=I, D=I, E=I, F=I,\n ldiv::Bool=false, gsp::Bool=false,\n λ::FC=one(FC), μ::FC=one(FC),\n reorthogonalization::Bool=false, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, y, stats) = gpmr(A, B, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)\n\nGPMR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.\n\nGiven matrices A of dimension m × n and B of dimension n × m, GPMR solves the non-Hermitian partitioned linear system\n\n[ λIₘ A ] [ x ] = [ b ]\n[ B μIₙ ] [ y ] [ c ],\n\nof size (n+m) × (n+m) where λ and μ are real or complex numbers. A can have any shape and B has the shape of Aᴴ. A, B, b and c must be all nonzero.\n\nThis implementation allows left and right block diagonal preconditioners\n\n[ C ] [ λM A ] [ E ] [ E⁻¹x ] = [ Cb ]\n[ D ] [ B μN ] [ F ] [ F⁻¹y ] [ Dc ],\n\nand can solve\n\n[ λM A ] [ x ] = [ b ]\n[ B μN ] [ y ] [ c ]\n\nwhen CE = M⁻¹ and DF = N⁻¹.\n\nBy default, GPMR solves unsymmetric linear systems with λ = 1 and μ = 1.\n\nGPMR is based on the orthogonal Hessenberg reduction process and its relations with the block-Arnoldi process. The residual norm ‖rₖ‖ is monotonically decreasing in GPMR.\n\nGPMR stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nB: a linear operator that models a matrix of dimension n × m;\nb: a vector of length m;\nc: a vector of length n.\n\nOptional arguments\n\nx0: a vector of length m that represents an initial guess of the solution x;\ny0: a vector of length n that represents an initial guess of the solution y.\n\nKeyword arguments\n\nmemory: if restart = true, the restarted version GPMR(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. 
Additional storage will be allocated if the number of iterations exceeds memory;\nC: linear operator that models a nonsingular matrix of size m, and represents the first term of the block-diagonal left preconditioner;\nD: linear operator that models a nonsingular matrix of size n, and represents the second term of the block-diagonal left preconditioner;\nE: linear operator that models a nonsingular matrix of size m, and represents the first term of the block-diagonal right preconditioner;\nF: linear operator that models a nonsingular matrix of size n, and represents the second term of the block-diagonal right preconditioner;\nldiv: define whether the preconditioners use ldiv! or mul!;\ngsp: if true, set λ = 1 and μ = 0 for generalized saddle-point systems;\nλ and μ: diagonal scaling factors of the partitioned linear system;\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length m;\ny: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nA. Montoison and D. Orban, GPMR: An Iterative Method for Unsymmetric Partitioned Linear Systems, SIAM Journal on Matrix Analysis and Applications, 44(1), pp. 293–311, 2023.\n\n\n\n\n\n","category":"function"},{"location":"solvers/gsp/#Krylov.gpmr!","page":"Generalized saddle-point and non-Hermitian partitioned systems","title":"Krylov.gpmr!","text":"solver = gpmr!(solver::GpmrSolver, A, B, b, c; kwargs...)\nsolver = gpmr!(solver::GpmrSolver, A, B, b, c, x0, y0; kwargs...)\n\nwhere kwargs are keyword arguments of gpmr.\n\nNote that the memory keyword argument is the only exception. 
,{"location":"examples/lsmr/","page":"LSMR","title":"LSMR","text":"using MatrixMarket, SuiteSparseMatrixCollection\nusing Krylov, LinearOperators\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"illc1850\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1])_b.mtx\"))[:]\n(m, n) = size(A)\n@printf(\"System size: %d rows and %d columns\\n\", m, n)\n\n# Define a regularization parameter.\nλ = 1.0e-3\n\n(x, stats) = lsmr(A, b, λ=λ, atol=0.0, btol=0.0)\nshow(stats)\nresid = norm(A' * (A * x - b) + λ * x) / norm(b)\n@printf(\"LSMR: Relative residual: %8.1e\\n\", resid)\n@printf(\"LSMR: ‖x‖: %8.1e\\n\", norm(x))","category":"page"},{"location":"examples/cg_lanczos_shift/","page":"CG-LANCZOS-SHIFT","title":"CG-LANCZOS-SHIFT","text":"using Krylov, MatrixMarket, SuiteSparseMatrixCollection\nusing LinearAlgebra, Printf\n\nfunction residuals(A, b, shifts, x)\n nshifts = length(shifts)\n r = [ (b - A * x[i] - shifts[i] * x[i]) for i = 1 : nshifts ]\n return r\nend\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"1138_bus\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nn, m = size(A)\nb = ones(n)\n\n# Solve (A + αI)x = b.\nshifts = [1.0, 2.0, 3.0, 4.0]\n(x, stats) = cg_lanczos_shift(A, b, shifts)\nshow(stats)\nr = residuals(A, b, shifts, x)\nresids = map(norm, r) / norm(b)\n@printf(\"Relative residuals with shifts:\\n\")\nfor resid in resids\n @printf(\" %8.1e\", resid)\nend\n@printf(\"\\n\")","category":"page"}
%s\\n\", x ≈ [1.0; 1.0; 1.0; 0.0])","category":"page"},{"location":"examples/lsqr/","page":"LSQR","title":"LSQR","text":"using MatrixMarket, SuiteSparseMatrixCollection\nusing Krylov, LinearOperators\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"illc1033\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1])_b.mtx\"))[:]\n(m, n) = size(A)\n@printf(\"System size: %d rows and %d columns\\n\", m, n)\n\n# Define a regularization parameter.\nλ = 1.0e-3\n\n(x, stats) = lsqr(A, b, λ=λ, atol=0.0, btol=0.0)\nshow(stats)\nresid = norm(A' * (A * x - b) + λ * x) / norm(b)\n@printf(\"LSQR: Relative residual: %8.1e\\n\", resid)\n@printf(\"LSQR: ‖x‖: %8.1e\\n\", norm(x))","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"","category":"page"},{"location":"factorization-free/#factorization-free","page":"Factorization-free operators","title":"Factorization-free operators","text":"","category":"section"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"All methods are factorization-free, which means that you only need to provide operator-vector products.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"The A or B input arguments of Krylov.jl solvers can be any object that represents a linear operator. That object must implement mul!, for multiplication with a vector, size() and eltype(). For certain methods it must also implement adjoint().","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"Some methods only require A * v products, whereas other ones also require A' * u products. In the latter case, adjoint(A) must also be implemented.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"A * v A * v and A' * u\nCG, CR, CAR CGLS, CRLS, CGNE, CRMR\nSYMMLQ, CG-LANCZOS, MINRES, MINRES-QLP, MINARES LSLQ, LSQR, LSMR, LNLQ, CRAIG, CRAIGMR\nDIOM, FOM, DQGMRES, GMRES, FGMRES BiLQ, QMR, BiLQR, USYMLQ, USYMQR, TriLQR\nCGS, BICGSTAB TriCG, TriMR","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"info: Info\nGPMR is the only method that requires A * v and B * w products.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"Preconditioners M, N, C, D, E or F can be also linear operators and must implement mul! 
,{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"Preconditioners M, N, C, D, E or F can also be linear operators and must implement mul! or ldiv!.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"We strongly recommend LinearOperators.jl to model matrix-free operators, but other packages such as LinearMaps.jl, DiffEqOperators.jl or your own operator can be used as well.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"With LinearOperators.jl, operators are defined as","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"A = LinearOperator(type, nrows, ncols, symmetric, hermitian, prod, tprod, ctprod)","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"where","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"type is the operator element type;\nnrows and ncols are its dimensions;\nsymmetric and hermitian should be set to true or false;\nprod(y, v), tprod(y, w) and ctprod(y, u) are called when writing mul!(y, A, v), mul!(y, transpose(A), w), and mul!(y, A', u), respectively.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"See the tutorial and the detailed documentation for more information on LinearOperators.jl.","category":"page"}
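,{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"For instance, here is a minimal sketch of a matrix-free operator (the tridiagonal operator below is an arbitrary illustration) that is applied without ever being stored as a matrix:\n\nusing Krylov, LinearOperators\n\nn = 100\n\n# y ← Tv with T = tridiag(-1, 2, -1), computed without storing T.\nfunction tridiag_mul!(y, v)\n y[1] = 2v[1] - v[2]\n for i = 2:n-1\n y[i] = -v[i-1] + 2v[i] - v[i+1]\n end\n y[n] = -v[n-1] + 2v[n]\n return y\nend\n\nopT = LinearOperator(Float64, n, n, true, true, tridiag_mul!)\nx, stats = cg(opT, ones(n))","category":"page"}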
,{"location":"factorization-free/#Examples","page":"Factorization-free operators","title":"Examples","text":"","category":"section"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"In the field of nonlinear optimization, finding critical points of a continuous function frequently involves linear systems with a Hessian or Jacobian as coefficient. Materializing such operators as matrices is expensive in terms of operations and memory consumption and is unreasonable for high-dimensional problems. However, it is often possible to implement efficient Hessian-vector and Jacobian-vector products, for example with the help of automatic differentiation tools, and use them within Krylov solvers. We now illustrate variants with explicit matrices and with matrix-free operators for two well-known optimization methods.","category":"page"},{"location":"factorization-free/#Example-1:-Newton's-Method-for-convex-optimization","page":"Factorization-free operators","title":"Example 1: Newton's Method for convex optimization","text":"","category":"section"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"At each iteration of Newton's method applied to a mathcalC^2 strictly convex function f mathbbR^n rightarrow mathbbR, a descent direction is determined by minimizing the quadratic Taylor model of f:","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"min_d in mathbbR^nf(x_k) + nabla f(x_k)^T d + tfrac12d^T nabla^2 f(x_k) d","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"which is equivalent to solving the symmetric and positive-definite system","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"nabla^2 f(x_k) d = -nabla f(x_k)","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"The system above can be solved with the conjugate gradient method as follows, using the explicit Hessian:","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"using ForwardDiff, Krylov\n\nxk = -ones(4)\n\nf(x) = (x[1] - 1)^2 + (x[2] - 2)^2 + (x[3] - 3)^2 + (x[4] - 4)^2\n\ng(x) = ForwardDiff.gradient(f, x)\n\nH(x) = ForwardDiff.hessian(f, x)\n\nd, stats = cg(H(xk), -g(xk))","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"The explicit Hessian can be replaced by a linear operator that only computes Hessian-vector products:","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"using ForwardDiff, LinearOperators, Krylov\n\nxk = -ones(4)\n\nf(x) = (x[1] - 1)^2 + (x[2] - 2)^2 + (x[3] - 3)^2 + (x[4] - 4)^2\n\ng(x) = ForwardDiff.gradient(f, x)\n\nH(y, v) = ForwardDiff.derivative!(y, t -> g(xk + t * v), 0)\nopH = LinearOperator(Float64, 4, 4, true, true, (y, v) -> H(y, v))\n\ncg(opH, -g(xk))","category":"page"},{"location":"factorization-free/#Example-2:-The-Gauss-Newton-Method-for-Nonlinear-Least-Squares","page":"Factorization-free operators","title":"Example 2: The Gauss-Newton Method for Nonlinear Least Squares","text":"","category":"section"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"At each iteration of the Gauss-Newton method applied to a nonlinear least-squares objective f(x) = tfrac12 F(x)^2 where F mathbbR^n rightarrow mathbbR^m is mathcalC^1, we solve the subproblem:","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"min_d in mathbbR^ntfrac12J(x_k) d + F(x_k)^2","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"where J(x) is the Jacobian of F at x.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"An appropriate iterative method to solve the above linear least-squares problem is LSMR. We could pass the explicit Jacobian to LSMR as follows:","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"using ForwardDiff, Krylov\n\nxk = ones(2)\n\nF(x) = [x[1]^4 - 3; exp(x[2]) - 2; log(x[1]) - x[2]^2]\n\nJ(x) = ForwardDiff.jacobian(F, x)\n\nd, stats = lsmr(J(xk), -F(xk))","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"However, the explicit Jacobian can be replaced by a linear operator that only computes Jacobian-vector and transposed Jacobian-vector products:","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"using LinearAlgebra, ForwardDiff, LinearOperators, Krylov\n\nxk = ones(2)\n\nF(x) = [x[1]^4 - 3; exp(x[2]) - 2; log(x[1]) - x[2]^2]\n\nJ(y, v) = ForwardDiff.derivative!(y, t -> F(xk + t * v), 0)\nJᵀ(y, u) = ForwardDiff.gradient!(y, x -> dot(F(x), u), xk)\nopJ = LinearOperator(Float64, 3, 2, false, false, (y, v) -> J(y, v),\n (y, w) -> Jᵀ(y, w),\n (y, u) -> Jᵀ(y, u))\n\nlsmr(opJ, -F(xk))","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"Note that preconditioners can also be implemented as abstract operators. For instance, we could compute the Cholesky factorization of M and N and create linear operators that perform the forward and backsolves.","category":"page"}
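,{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"Here is a minimal sketch of that idea, with an arbitrary SPD system and preconditioner chosen only for illustration:\n\nusing Krylov, LinearAlgebra, LinearOperators\n\nA = [3.0 1.0; 1.0 2.0] # an arbitrary SPD system\nb = [1.0; 1.0]\nP = [4.0 1.0; 1.0 3.0] # an arbitrary SPD preconditioner\nF = cholesky(P)\n\n# The operator applies P⁻¹ through the forward and backsolves of the factorization.\nopP = LinearOperator(Float64, 2, 2, true, true, (y, r) -> ldiv!(y, F, r))\nx, stats = cg(A, b, M=opP)","category":"page"}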
x.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"An appropriate iterative method to solve the above linear least-squares problems is LSMR. We could pass the explicit Jacobian to LSMR as follows:","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"using ForwardDiff, Krylov\n\nxk = ones(2)\n\nF(x) = [x[1]^4 - 3; exp(x[2]) - 2; log(x[1]) - x[2]^2]\n\nJ(x) = ForwardDiff.jacobian(F, x)\n\nd, stats = lsmr(J(xk), -F(xk))","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"However, the explicit Jacobian can be replaced by a linear operator that only computes Jacobian-vector and transposed Jacobian-vector products:","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"using LinearAlgebra, ForwardDiff, LinearOperators, Krylov\n\nxk = ones(2)\n\nF(x) = [x[1]^4 - 3; exp(x[2]) - 2; log(x[1]) - x[2]^2]\n\nJ(y, v) = ForwardDiff.derivative!(y, t -> F(xk + t * v), 0)\nJᵀ(y, u) = ForwardDiff.gradient!(y, x -> dot(F(x), u), xk)\nopJ = LinearOperator(Float64, 3, 2, false, false, (y, v) -> J(y, v),\n (y, w) -> Jᵀ(y, w),\n (y, u) -> Jᵀ(y, u))\n\nlsmr(opJ, -F(xk))","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"Note that preconditioners can be also implemented as abstract operators. For instance, we could compute the Cholesky factorization of M and N and create linear operators that perform the forward and backsolves.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"Krylov methods combined with factorization free operators allow to reduce computation time and memory requirements considerably by avoiding building and storing the system matrix. In the field of partial differential equations, the implementation of high-performance factorization free operators and assembly free preconditioning is a subject of active research.","category":"page"},{"location":"inplace/#In-place-methods","page":"In-place methods","title":"In-place methods","text":"","category":"section"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"All solvers in Krylov.jl have an in-place variant implemented in a method whose name ends with !. A workspace (KrylovSolver) that contains the storage needed by a Krylov method can be used to solve multiple linear systems that have the same dimensions in the same floating-point precision. Each KrylovSolver has two constructors:","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"XyzSolver(A, b)\nXyzSolver(m, n, S)","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"Xyz is the name of the Krylov method with lowercase letters except its first one (Cg, Minres, Lsmr, Bicgstab, ...). 
,{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"Xyz is the name of the Krylov method, written in lowercase except for its first letter (Cg, Minres, Lsmr, Bicgstab, ...). Given an operator A and a right-hand side b, you can create a KrylovSolver based on the size of A and the type of b, or explicitly give the dimensions (m, n) and the storage type S.","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"For example, use S = Vector{Float64} if you want to solve linear systems in double precision on the CPU and S = CuVector{Float32} if you want to solve linear systems in single precision on an Nvidia GPU.","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"note: Note\nDiomSolver, FomSolver, DqgmresSolver, GmresSolver, FgmresSolver, GpmrSolver and CgLanczosShiftSolver require an additional argument (memory or nshifts).","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"The workspace is always the first argument of the in-place methods:","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"minres_solver = MinresSolver(n, n, Vector{Float64})\nminres!(minres_solver, A1, b1)\n\ndqgmres_solver = DqgmresSolver(n, n, memory, Vector{BigFloat})\ndqgmres!(dqgmres_solver, A2, b2)\n\nlsqr_solver = LsqrSolver(m, n, CuVector{Float32})\nlsqr!(lsqr_solver, A3, b3)","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"A generic function solve! is also available and dispatches to the appropriate Krylov method.","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"Krylov.solve!","category":"page"},{"location":"inplace/#Krylov.solve!","page":"In-place methods","title":"Krylov.solve!","text":"solve!(solver, args...; kwargs...)\n\nUse the in-place Krylov method associated to solver.\n\n\n\n\n\n","category":"function"}
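,{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"As an illustration (the small system below is arbitrary), the generic interface can be used as follows:\n\nusing Krylov\n\nA = [2.0 0.0; 0.0 3.0]\nb = [1.0; 1.0]\n\nsolver = CgSolver(A, b)\nsolve!(solver, A, b) # dispatches to cg!\nsolver.x # approximate solution\nsolver.stats # statistics of the run","category":"page"}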
,{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"In-place methods return an updated solver workspace. Solutions and statistics can be recovered via solver.x, solver.y and solver.stats. The functions solution and statistics can also be used.","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"Krylov.nsolution\nKrylov.solution\nKrylov.statistics\nKrylov.issolved","category":"page"},{"location":"inplace/#Krylov.nsolution","page":"In-place methods","title":"Krylov.nsolution","text":"nsolution(solver)\n\nReturn the number of outputs of solution(solver).\n\n\n\n\n\n","category":"function"},{"location":"inplace/#Krylov.solution","page":"In-place methods","title":"Krylov.solution","text":"solution(solver)\n\nReturn the solution(s) stored in the solver. Optionally, you can specify which solution you want to recover: solution(solver, 1) returns x and solution(solver, 2) returns y.\n\n\n\n\n\n","category":"function"},{"location":"inplace/#Krylov.statistics","page":"In-place methods","title":"Krylov.statistics","text":"statistics(solver)\n\nReturn the statistics stored in the solver.\n\n\n\n\n\n","category":"function"},{"location":"inplace/#Krylov.issolved","page":"In-place methods","title":"Krylov.issolved","text":"issolved(solver)\n\nReturn a boolean that determines whether the Krylov method associated to solver succeeded.\n\n\n\n\n\n","category":"function"},{"location":"inplace/#Examples","page":"In-place methods","title":"Examples","text":"","category":"section"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"We illustrate the use of in-place Krylov solvers with two well-known optimization methods. The details of the optimization methods are described in the section about Factorization-free operators.","category":"page"},{"location":"inplace/#Example-1:-Newton's-method-for-convex-optimization-without-linesearch","page":"In-place methods","title":"Example 1: Newton's method for convex optimization without linesearch","text":"","category":"section"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"using Krylov, LinearAlgebra\n\nfunction newton(∇f, ∇²f, x₀; itmax = 200, tol = 1e-8)\n\n n = length(x₀)\n x = copy(x₀)\n gx = ∇f(x)\n \n iter = 0\n S = typeof(x)\n solver = CgSolver(n, n, S)\n Δx = solver.x\n\n solved = false\n tired = false\n\n while !(solved || tired)\n \n Hx = ∇²f(x) # Compute ∇²f(xₖ)\n cg!(solver, Hx, -gx) # Solve ∇²f(xₖ)Δx = -∇f(xₖ)\n x = x + Δx # Update xₖ₊₁ = xₖ + Δx\n gx = ∇f(x) # ∇f(xₖ₊₁)\n \n iter += 1\n solved = norm(gx) ≤ tol\n tired = iter ≥ itmax\n end\n return x\nend","category":"page"},{"location":"inplace/#Example-2:-The-Gauss-Newton-method-for-nonlinear-least-squares-without-linesearch","page":"In-place methods","title":"Example 2: The Gauss-Newton method for nonlinear least squares without linesearch","text":"","category":"section"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"using Krylov, LinearAlgebra\n\nfunction gauss_newton(F, JF, x₀; itmax = 200, tol = 1e-8)\n\n n = length(x₀)\n x = copy(x₀)\n Fx = F(x)\n m = length(Fx)\n \n iter = 0\n S = typeof(x)\n solver = LsmrSolver(m, n, S)\n Δx = solver.x\n\n solved = false\n tired = false\n\n while !(solved || tired)\n \n Jx = JF(x) # Compute J(xₖ)\n lsmr!(solver, Jx, -Fx) # Minimize ‖J(xₖ)Δx + F(xₖ)‖\n x = x + Δx # Update xₖ₊₁ = xₖ + Δx\n Fx_old = Fx # F(xₖ)\n Fx = F(x) # F(xₖ₊₁)\n \n iter += 1\n solved = norm(Fx - Fx_old) / norm(Fx) ≤ tol\n tired = iter ≥ itmax\n end\n return x\nend","category":"page"},{"location":"warm-start/#warm-start","page":"Warm-start","title":"Warm-start","text":"","category":"section"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"Most Krylov methods in this module accept a starting point as argument. The starting point is used as an initial approximation to a solution.","category":"page"}
,{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"solver = CgSolver(A, b)\ncg!(solver, A, b, itmax=100)\nif !issolved(solver)\n cg!(solver, A, b, solver.x, itmax=100) # cg! uses the approximate solution `solver.x` as starting point\nend","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"If the user has an initial guess x0, it can be provided directly.","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"cg(A, b, x0)","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"It is also possible to use the warm_start! function to feed the starting point into the solver.","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"warm_start!(solver, x0)\ncg!(solver, A, b)\n# the previous two lines are equivalent to cg!(solver, A, b, x0)","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"If a Krylov method doesn't have the option to warm start, it can still be done explicitly. We provide an example with cg_lanczos!.","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"solver = CgLanczosSolver(A, b)\ncg_lanczos!(solver, A, b)\nx₀ = copy(solver.x) # Ax₀ ≈ b, copied before the workspace is reused\nr = b - A * x₀ # r = b - Ax₀\ncg_lanczos!(solver, A, r)\nΔx = solver.x # AΔx = r\nx = x₀ + Δx # Ax = b","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"Explicit restarts cannot be avoided in certain block methods, such as TriMR, due to the preconditioners.","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"# [E A] [x] = [b]\n# [Aᴴ F] [y] [c]\nM = inv(E)\nN = inv(F)\nx₀, y₀, stats = trimr(A, b, c, M=M, N=N)\n\n# E and F are not available inside TriMR\nb₀ = b - E * x₀ - A * y₀\nc₀ = c - A' * x₀ - F * y₀\n\nΔx, Δy, stats = trimr(A, b₀, c₀, M=M, N=N)\nx = x₀ + Δx\ny = y₀ + Δy","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"# ## Restarted methods\n#\n# The storage requirements of Krylov methods based on the Arnoldi process, such as FOM and GMRES, increase as the iteration progresses.\n# For very large problems, the storage costs become prohibitive after only a few iterations and restarted variants FOM(k) and GMRES(k) are preferred.\n# In this section, we show how to use warm starts to implement GMRES(k) and FOM(k).\n#\n# ```julia\n# k = 50\n# solver = GmresSolver(A, b, k) # FomSolver(A, b, k)\n# solver.x .= 0 # solver.x .= x₀ \n# nrestart = 0\n# while !issolved(solver) && nrestart ≤ 10\n# solve!(solver, A, b, solver.x, itmax=k)\n# nrestart += 1\n# end\n# ```","category":"page"}
,{"location":"examples/tricg/","page":"TriCG","title":"TriCG","text":"using Krylov, LinearOperators\nusing LinearAlgebra, Printf, SparseArrays\n\n# Identity matrix.\neye(n::Int) = sparse(1.0 * I, n, n)\n\n# Symmetric quasi-definite systems and variants\nn = m = 5\nA = [2^(i/j)*j + (-1)^(i-j) * n*(i-1) for i = 1:n, j = 1:n]\nb = ones(n)\nM = diagm(0 => [3.0 * i for i = 1:n])\nN = diagm(0 => [5.0 * i for i = 1:n])\nc = -b\n\n# [I A] [x] = [b]\n# [Aᴴ -I] [y] [c]\n(x, y, stats) = tricg(A, b, c)\nK = [eye(m) A; A' -eye(n)]\nB = [b; c]\nr = B - K * [x; y]\nresid = norm(r) / norm(B)\n@printf(\"TriCG: Relative residual: %8.1e\\n\", resid)\n\n# [-I A] [x] = [b]\n# [ Aᴴ I] [y] [c]\n(x, y, stats) = tricg(A, b, c, flip=true)\nK = [-eye(m) A; A' eye(n)]\nB = [b; c]\nr = B - K * [x; y]\nresid = norm(r) / norm(B)\n@printf(\"TriCG: Relative residual: %8.1e\\n\", resid)\n\n# [I A] [x] = [b]\n# [Aᴴ I] [y] [c]\n(x, y, stats) = tricg(A, b, c, spd=true)\nK = [eye(m) A; A' eye(n)]\nB = [b; c]\nr = B - K * [x; y]\nresid = norm(r) / norm(B)\n@printf(\"TriCG: Relative residual: %8.1e\\n\", resid)\n\n# [-I A] [x] = [b]\n# [ Aᴴ -I] [y] [c]\n(x, y, stats) = tricg(A, b, c, snd=true)\nK = [-eye(m) A; A' -eye(n)]\nB = [b; c]\nr = B - K * [x; y]\nresid = norm(r) / norm(B)\n@printf(\"TriCG: Relative residual: %8.1e\\n\", resid)\n\n# [τI A] [x] = [b]\n# [ Aᴴ νI] [y] [c]\n(τ, ν) = (1e-4, 1e2)\n(x, y, stats) = tricg(A, b, c, τ=τ, ν=ν)\nK = [τ*eye(m) A; A' ν*eye(n)]\nB = [b; c]\nr = B - K * [x; y]\nresid = norm(r) / norm(B)\n@printf(\"TriCG: Relative residual: %8.1e\\n\", resid)\n\n# [M⁻¹ A ] [x] = [b]\n# [Aᴴ -N⁻¹] [y] [c]\n(x, y, stats) = tricg(A, b, c, M=M, N=N, verbose=1)\nK = [inv(M) A; A' -inv(N)]\nH = BlockDiagonalOperator(M, N)\nB = [b; c]\nr = B - K * [x; y]\nresid = sqrt(dot(r, H * r)) / sqrt(dot(B, H * B))\n@printf(\"TriCG: Relative residual: %8.1e\\n\", resid)","category":"page"},{"location":"solvers/as/","page":"Adjoint systems","title":"Adjoint systems","text":"# Adjoint systems","category":"page"},{"location":"solvers/as/#BiLQR","page":"Adjoint systems","title":"BiLQR","text":"","category":"section"},{"location":"solvers/as/","page":"Adjoint systems","title":"Adjoint systems","text":"bilqr\nbilqr!","category":"page"},{"location":"solvers/as/#Krylov.bilqr","page":"Adjoint systems","title":"Krylov.bilqr","text":"(x, y, stats) = bilqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};\n transfer_to_bicg::Bool=true, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, y, stats) = bilqr(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)\n\nBiLQR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.\n\nCombine BiLQ and QMR to solve adjoint systems.\n\n[0 A] [y] = [b]\n[Aᴴ 0] [x] [c]\n\nThe relation bᴴc ≠ 0 must be satisfied. BiLQ is used for solving primal system Ax = b of size n. QMR is used for solving dual system Aᴴy = c of size n.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n;\nc: a vector of length n.\n\nOptional arguments\n\nx0: a vector of length n that represents an initial guess of the solution x;\ny0: a vector of length n that represents an initial guess of the solution y.\n\nKeyword arguments\n\ntransfer_to_bicg: transfer from the BiLQ point to the BiCG point, when it exists. The transfer is based on the residual norm;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\ny: a dense vector of length n;\nstats: statistics collected on the run in an AdjointStats structure.\n\nReference\n\nA. Montoison and D. Orban, BiLQ: An Iterative Method for Nonsymmetric Linear Systems with a Quasi-Minimum Error Property, SIAM Journal on Matrix Analysis and Applications, 41(3), pp. 
1145–1166, 2020.\n\n\n\n\n\n","category":"function"},{"location":"solvers/as/#Krylov.bilqr!","page":"Adjoint systems","title":"Krylov.bilqr!","text":"solver = bilqr!(solver::BilqrSolver, A, b, c; kwargs...)\nsolver = bilqr!(solver::BilqrSolver, A, b, c, x0, y0; kwargs...)\n\nwhere kwargs are keyword arguments of bilqr.\n\nSee BilqrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/as/#TriLQR","page":"Adjoint systems","title":"TriLQR","text":"","category":"section"},{"location":"solvers/as/","page":"Adjoint systems","title":"Adjoint systems","text":"trilqr\ntrilqr!","category":"page"},{"location":"solvers/as/#Krylov.trilqr","page":"Adjoint systems","title":"Krylov.trilqr","text":"(x, y, stats) = trilqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};\n transfer_to_usymcg::Bool=true, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, y, stats) = trilqr(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)\n\nTriLQR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.\n\nCombine USYMLQ and USYMQR to solve adjoint systems.\n\n[0 A] [y] = [b]\n[Aᴴ 0] [x] [c]\n\nUSYMLQ is used for solving primal system Ax = b of size m × n. USYMQR is used for solving dual system Aᴴy = c of size n × m.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m;\nc: a vector of length n.\n\nOptional arguments\n\nx0: a vector of length n that represents an initial guess of the solution x;\ny0: a vector of length m that represents an initial guess of the solution y.\n\nKeyword arguments\n\ntransfer_to_usymcg: transfer from the USYMLQ point to the USYMCG point, when it exists. The transfer is based on the residual norm;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\ny: a dense vector of length m;\nstats: statistics collected on the run in an AdjointStats structure.\n\nReference\n\nA. Montoison and D. Orban, BiLQ: An Iterative Method for Nonsymmetric Linear Systems with a Quasi-Minimum Error Property, SIAM Journal on Matrix Analysis and Applications, 41(3), pp. 
1145–1166, 2020.\n\n\n\n\n\n","category":"function"},{"location":"solvers/as/#Krylov.trilqr!","page":"Adjoint systems","title":"Krylov.trilqr!","text":"solver = trilqr!(solver::TrilqrSolver, A, b, c; kwargs...)\nsolver = trilqr!(solver::TrilqrSolver, A, b, c, x0, y0; kwargs...)\n\nwhere kwargs are keyword arguments of trilqr.\n\nSee TrilqrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"api/#Stats-Types","page":"API","title":"Stats Types","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Krylov.KrylovStats\nKrylov.SimpleStats\nKrylov.LanczosStats\nKrylov.LanczosShiftStats\nKrylov.SymmlqStats\nKrylov.AdjointStats\nKrylov.LNLQStats\nKrylov.LSLQStats\nKrylov.LsmrStats","category":"page"},{"location":"api/#Krylov.KrylovStats","page":"API","title":"Krylov.KrylovStats","text":"Abstract type for statistics returned by a solver\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.SimpleStats","page":"API","title":"Krylov.SimpleStats","text":"Type for statistics returned by the majority of Krylov solvers, the attributes are:\n\nniter\nsolved\ninconsistent\nresiduals\nAresiduals\nAcond\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LanczosStats","page":"API","title":"Krylov.LanczosStats","text":"Type for statistics returned by CG-LANCZOS, the attributes are:\n\nniter\nsolved\nresiduals\nindefinite\nAnorm\nAcond\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LanczosShiftStats","page":"API","title":"Krylov.LanczosShiftStats","text":"Type for statistics returned by CG-LANCZOS with shifts, the attributes are:\n\nniter\nsolved\nresiduals\nindefinite\nAnorm\nAcond\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.SymmlqStats","page":"API","title":"Krylov.SymmlqStats","text":"Type for statistics returned by SYMMLQ, the attributes are:\n\nniter\nsolved\nresiduals\nresidualscg\nerrors\nerrorscg\nAnorm\nAcond\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.AdjointStats","page":"API","title":"Krylov.AdjointStats","text":"Type for statistics returned by adjoint systems solvers BiLQR and TriLQR, the attributes are:\n\nniter\nsolved_primal\nsolved_dual\nresiduals_primal\nresiduals_dual\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LNLQStats","page":"API","title":"Krylov.LNLQStats","text":"Type for statistics returned by the LNLQ method, the attributes are:\n\nniter\nsolved\nresiduals\nerrorwithbnd\nerrorbndx\nerrorbndy\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LSLQStats","page":"API","title":"Krylov.LSLQStats","text":"Type for statistics returned by the LSLQ method, the attributes are:\n\nniter\nsolved\ninconsistent\nresiduals\nAresiduals\nerr_lbnds\nerrorwithbnd\nerrubndslq\nerrubndscg\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LsmrStats","page":"API","title":"Krylov.LsmrStats","text":"Type for statistics returned by LSMR. 
The attributes are:\n\nniter\nsolved\ninconsistent\nresiduals\nAresiduals\nAcond\nAnorm\nxNorm\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Solver-Types","page":"API","title":"Solver Types","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"KrylovSolver\nMinresSolver\nMinaresSolver\nCgSolver\nCrSolver\nCarSolver\nSymmlqSolver\nCgLanczosSolver\nCgLanczosShiftSolver\nMinresQlpSolver\nDiomSolver\nFomSolver\nDqgmresSolver\nGmresSolver\nUsymlqSolver\nUsymqrSolver\nTricgSolver\nTrimrSolver\nTrilqrSolver\nCgsSolver\nBicgstabSolver\nBilqSolver\nQmrSolver\nBilqrSolver\nCglsSolver\nCrlsSolver\nCgneSolver\nCrmrSolver\nLslqSolver\nLsqrSolver\nLsmrSolver\nLnlqSolver\nCraigSolver\nCraigmrSolver\nGpmrSolver\nFgmresSolver","category":"page"},{"location":"api/#Krylov.KrylovSolver","page":"API","title":"Krylov.KrylovSolver","text":"Abstract type for using Krylov solvers in-place\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.MinresSolver","page":"API","title":"Krylov.MinresSolver","text":"Type for storing the vectors required by the in-place version of MINRES.\n\nThe outer constructors\n\nsolver = MinresSolver(m, n, S; window :: Int=5)\nsolver = MinresSolver(A, b; window :: Int=5)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.MinaresSolver","page":"API","title":"Krylov.MinaresSolver","text":"Type for storing the vectors required by the in-place version of MINARES.\n\nThe outer constructors\n\nsolver = MinaresSolver(m, n, S)\nsolver = MinaresSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CgSolver","page":"API","title":"Krylov.CgSolver","text":"Type for storing the vectors required by the in-place version of CG.\n\nThe outer constructors\n\nsolver = CgSolver(m, n, S)\nsolver = CgSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CrSolver","page":"API","title":"Krylov.CrSolver","text":"Type for storing the vectors required by the in-place version of CR.\n\nThe outer constructors\n\nsolver = CrSolver(m, n, S)\nsolver = CrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CarSolver","page":"API","title":"Krylov.CarSolver","text":"Type for storing the vectors required by the in-place version of CAR.\n\nThe outer constructors\n\nsolver = CarSolver(m, n, S)\nsolver = CarSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.SymmlqSolver","page":"API","title":"Krylov.SymmlqSolver","text":"Type for storing the vectors required by the in-place version of SYMMLQ.\n\nThe outer constructors\n\nsolver = SymmlqSolver(m, n, S)\nsolver = SymmlqSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CgLanczosSolver","page":"API","title":"Krylov.CgLanczosSolver","text":"Type for storing the vectors required by the in-place version of CG-LANCZOS.\n\nThe outer constructors\n\nsolver = CgLanczosSolver(m, n, S)\nsolver = CgLanczosSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CgLanczosShiftSolver","page":"API","title":"Krylov.CgLanczosShiftSolver","text":"Type for storing the vectors required by the in-place version of CG-LANCZOS-SHIFT.\n\nThe outer constructors\n\nsolver = CgLanczosShiftSolver(m, n, nshifts, 
S)\nsolver = CgLanczosShiftSolver(A, b, nshifts)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.MinresQlpSolver","page":"API","title":"Krylov.MinresQlpSolver","text":"Type for storing the vectors required by the in-place version of MINRES-QLP.\n\nThe outer constructors\n\nsolver = MinresQlpSolver(m, n, S)\nsolver = MinresQlpSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.DiomSolver","page":"API","title":"Krylov.DiomSolver","text":"Type for storing the vectors required by the in-place version of DIOM.\n\nThe outer constructors\n\nsolver = DiomSolver(m, n, memory, S)\nsolver = DiomSolver(A, b, memory = 20)\n\nmay be used in order to create these vectors. memory is set to n if the value given is larger than n.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.FomSolver","page":"API","title":"Krylov.FomSolver","text":"Type for storing the vectors required by the in-place version of FOM.\n\nThe outer constructors\n\nsolver = FomSolver(m, n, memory, S)\nsolver = FomSolver(A, b, memory = 20)\n\nmay be used in order to create these vectors. memory is set to n if the value given is larger than n.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.DqgmresSolver","page":"API","title":"Krylov.DqgmresSolver","text":"Type for storing the vectors required by the in-place version of DQGMRES.\n\nThe outer constructors\n\nsolver = DqgmresSolver(m, n, memory, S)\nsolver = DqgmresSolver(A, b, memory = 20)\n\nmay be used in order to create these vectors. memory is set to n if the value given is larger than n.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.GmresSolver","page":"API","title":"Krylov.GmresSolver","text":"Type for storing the vectors required by the in-place version of GMRES.\n\nThe outer constructors\n\nsolver = GmresSolver(m, n, memory, S)\nsolver = GmresSolver(A, b, memory = 20)\n\nmay be used in order to create these vectors. 
memory is set to n if the value given is larger than n.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.UsymlqSolver","page":"API","title":"Krylov.UsymlqSolver","text":"Type for storing the vectors required by the in-place version of USYMLQ.\n\nThe outer constructors\n\nsolver = UsymlqSolver(m, n, S)\nsolver = UsymlqSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.UsymqrSolver","page":"API","title":"Krylov.UsymqrSolver","text":"Type for storing the vectors required by the in-place version of USYMQR.\n\nThe outer constructors\n\nsolver = UsymqrSolver(m, n, S)\nsolver = UsymqrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.TricgSolver","page":"API","title":"Krylov.TricgSolver","text":"Type for storing the vectors required by the in-place version of TRICG.\n\nThe outer constructors\n\nsolver = TricgSolver(m, n, S)\nsolver = TricgSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.TrimrSolver","page":"API","title":"Krylov.TrimrSolver","text":"Type for storing the vectors required by the in-place version of TRIMR.\n\nThe outer constructors\n\nsolver = TrimrSolver(m, n, S)\nsolver = TrimrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.TrilqrSolver","page":"API","title":"Krylov.TrilqrSolver","text":"Type for storing the vectors required by the in-place version of TRILQR.\n\nThe outer constructors\n\nsolver = TrilqrSolver(m, n, S)\nsolver = TrilqrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CgsSolver","page":"API","title":"Krylov.CgsSolver","text":"Type for storing the vectors required by the in-place version of CGS.\n\nThe outer constructors\n\nsolver = CgsSolver(m, n, S)\nsolver = CgsSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.BicgstabSolver","page":"API","title":"Krylov.BicgstabSolver","text":"Type for storing the vectors required by the in-place version of BICGSTAB.\n\nThe outer constructors\n\nsolver = BicgstabSolver(m, n, S)\nsolver = BicgstabSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.BilqSolver","page":"API","title":"Krylov.BilqSolver","text":"Type for storing the vectors required by the in-place version of BILQ.\n\nThe outer constructors\n\nsolver = BilqSolver(m, n, S)\nsolver = BilqSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.QmrSolver","page":"API","title":"Krylov.QmrSolver","text":"Type for storing the vectors required by the in-place version of QMR.\n\nThe outer constructors\n\nsolver = QmrSolver(m, n, S)\nsolver = QmrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.BilqrSolver","page":"API","title":"Krylov.BilqrSolver","text":"Type for storing the vectors required by the in-place version of BILQR.\n\nThe outer constructors\n\nsolver = BilqrSolver(m, n, S)\nsolver = BilqrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CglsSolver","page":"API","title":"Krylov.CglsSolver","text":"Type for storing the vectors required by the in-place version of CGLS.\n\nThe outer 
constructors\n\nsolver = CglsSolver(m, n, S)\nsolver = CglsSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CrlsSolver","page":"API","title":"Krylov.CrlsSolver","text":"Type for storing the vectors required by the in-place version of CRLS.\n\nThe outer constructors\n\nsolver = CrlsSolver(m, n, S)\nsolver = CrlsSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CgneSolver","page":"API","title":"Krylov.CgneSolver","text":"Type for storing the vectors required by the in-place version of CGNE.\n\nThe outer constructors\n\nsolver = CgneSolver(m, n, S)\nsolver = CgneSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CrmrSolver","page":"API","title":"Krylov.CrmrSolver","text":"Type for storing the vectors required by the in-place version of CRMR.\n\nThe outer constructors\n\nsolver = CrmrSolver(m, n, S)\nsolver = CrmrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LslqSolver","page":"API","title":"Krylov.LslqSolver","text":"Type for storing the vectors required by the in-place version of LSLQ.\n\nThe outer constructors\n\nsolver = LslqSolver(m, n, S)\nsolver = LslqSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LsqrSolver","page":"API","title":"Krylov.LsqrSolver","text":"Type for storing the vectors required by the in-place version of LSQR.\n\nThe outer constructors\n\nsolver = LsqrSolver(m, n, S)\nsolver = LsqrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LsmrSolver","page":"API","title":"Krylov.LsmrSolver","text":"Type for storing the vectors required by the in-place version of LSMR.\n\nThe outer constructors\n\nsolver = LsmrSolver(m, n, S)\nsolver = LsmrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LnlqSolver","page":"API","title":"Krylov.LnlqSolver","text":"Type for storing the vectors required by the in-place version of LNLQ.\n\nThe outer constructors\n\nsolver = LnlqSolver(m, n, S)\nsolver = LnlqSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CraigSolver","page":"API","title":"Krylov.CraigSolver","text":"Type for storing the vectors required by the in-place version of CRAIG.\n\nThe outer constructors\n\nsolver = CraigSolver(m, n, S)\nsolver = CraigSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CraigmrSolver","page":"API","title":"Krylov.CraigmrSolver","text":"Type for storing the vectors required by the in-place version of CRAIGMR.\n\nThe outer constructors\n\nsolver = CraigmrSolver(m, n, S)\nsolver = CraigmrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.GpmrSolver","page":"API","title":"Krylov.GpmrSolver","text":"Type for storing the vectors required by the in-place version of GPMR.\n\nThe outer constructors\n\nsolver = GpmrSolver(m, n, memory, S)\nsolver = GpmrSolver(A, b, memory = 20)\n\nmay be used in order to create these vectors. 
memory is set to n + m if the value given is larger than n + m.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.FgmresSolver","page":"API","title":"Krylov.FgmresSolver","text":"Type for storing the vectors required by the in-place version of FGMRES.\n\nThe outer constructors\n\nsolver = FgmresSolver(m, n, memory, S)\nsolver = FgmresSolver(A, b, memory = 20)\n\nmay be used in order to create these vectors. memory is set to n if the value given is larger than n.\n\n\n\n\n\n","category":"type"},{"location":"api/#Utilities","page":"API","title":"Utilities","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Krylov.roots_quadratic\nKrylov.sym_givens\nKrylov.to_boundary\nKrylov.vec2str\nKrylov.ktypeof\nKrylov.kzeros\nKrylov.kones\nKrylov.vector_to_matrix\nKrylov.matrix_to_vector","category":"page"},{"location":"api/#Krylov.roots_quadratic","page":"API","title":"Krylov.roots_quadratic","text":"roots = roots_quadratic(q₂, q₁, q₀; nitref)\n\nFind the real roots of the quadratic\n\nq(x) = q₂ x² + q₁ x + q₀,\n\nwhere q₂, q₁ and q₀ are real. Care is taken to avoid numerical cancellation. Optionally, nitref steps of iterative refinement may be performed to improve accuracy. By default, nitref=1.\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.sym_givens","page":"API","title":"Krylov.sym_givens","text":"(c, s, ρ) = sym_givens(a, b)\n\nNumerically stable symmetric Givens reflection. Given a and b reals, return (c, s, ρ) such that\n\n[ c s ] [ a ] = [ ρ ]\n[ s -c ] [ b ] = [ 0 ].\n\n\n\n\n\nNumerically stable symmetric Givens reflection. Given a and b complexes, return (c, s, ρ) with c real and (s, ρ) complexes such that\n\n[ c s ] [ a ] = [ ρ ]\n[ s̅ -c ] [ b ] = [ 0 ].\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.to_boundary","page":"API","title":"Krylov.to_boundary","text":"roots = to_boundary(n, x, d, radius; flip, xNorm2, dNorm2)\n\nGiven a trust-region radius radius, a vector x lying inside the trust-region and a direction d, return σ1 and σ2 such that\n\n‖x + σi d‖ = radius, i = 1, 2\n\nin the Euclidean norm. n is the length of vectors x and d. If known, ‖x‖² and ‖d‖² may be supplied with xNorm2 and dNorm2.\n\nIf flip is set to true, σ1 and σ2 are computed such that\n\n‖x - σi d‖ = radius, i = 1, 2.\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.vec2str","page":"API","title":"Krylov.vec2str","text":"s = vec2str(x; ndisp)\n\nDisplay an array in the form\n\n[ -3.0e-01 -5.1e-01 1.9e-01 ... 
-2.3e-01 -4.4e-01 2.4e-01 ]\n\nwith (ndisp - 1)/2 elements on each side.\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.ktypeof","page":"API","title":"Krylov.ktypeof","text":"S = ktypeof(v)\n\nReturn the most relevant storage type S based on the type of v.\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.kzeros","page":"API","title":"Krylov.kzeros","text":"v = kzeros(S, n)\n\nCreate a vector of storage type S of length n composed only of zeros.\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.kones","page":"API","title":"Krylov.kones","text":"v = kones(S, n)\n\nCreate a vector of storage type S of length n composed only of ones.\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.vector_to_matrix","page":"API","title":"Krylov.vector_to_matrix","text":"M = vector_to_matrix(S)\n\nReturn the dense matrix storage type M related to the dense vector storage type S.\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.matrix_to_vector","page":"API","title":"Krylov.matrix_to_vector","text":"S = matrix_to_vector(M)\n\nReturn the dense vector storage type S related to the dense matrix storage type M.\n\n\n\n\n\n","category":"function"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"# Thanks Morten Piibeleht for the hack with the tables!","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"","category":"page"},{"location":"storage/#storage-requirements","page":"Storage requirements","title":"Storage requirements","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"This section provides the storage requirements of all Krylov methods available in Krylov.jl.","category":"page"},{"location":"storage/#Notation","page":"Storage requirements","title":"Notation","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"We denote by m and n the number of rows and columns of the linear problem. The memory parameter of DIOM, FOM, DQGMRES, GMRES, FGMRES and GPMR is k. The number of shifts of CG-LANCZOS-SHIFT is p.","category":"page"},{"location":"storage/#Theoretical-storage-requirements","page":"Storage requirements","title":"Theoretical storage requirements","text":"","category":"section"}
,{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"The following tables provide the number of coefficients that must be allocated for each Krylov method. The coefficients have the same type as those that compose the linear problem we seek to solve. Each table summarizes the storage requirements of Krylov methods recommended for a specific linear problem.","category":"page"},{"location":"storage/#Hermitian-positive-definite-linear-systems","page":"Storage requirements","title":"Hermitian positive definite linear systems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods CG CR CAR CG-LANCZOS CG-LANCZOS-SHIFT\nStorage 4n 5n 7n 5n 3n + 2np + 5p","category":"page"},{"location":"storage/#Hermitian-indefinite-linear-systems","page":"Storage requirements","title":"Hermitian indefinite linear systems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods SYMMLQ MINRES MINRES-QLP MINARES\nStorage 5n 6n 6n 8n","category":"page"},{"location":"storage/#Non-Hermitian-square-linear-systems","page":"Storage requirements","title":"Non-Hermitian square linear systems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods CGS BICGSTAB BiLQ QMR\nStorage 6n 6n 8n 9n","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods DIOM DQGMRES\nStorage n(2k+1) + 2k - 1 n(2k+2) + 3k + 1","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods FOM GMRES FGMRES\nStorage n(2+k) + 2k + k(k+1)/2 n(2+k) + 3k + k(k+1)/2 n(2+2k) + 3k + k(k+1)/2","category":"page"},{"location":"storage/#Least-norm-problems","page":"Storage requirements","title":"Least-norm problems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods USYMLQ CGNE CRMR LNLQ CRAIG CRAIGMR\nStorage 5n + 3m 3n + 2m 3n + 2m 3n + 4m 3n + 4m 4n + 5m","category":"page"},{"location":"storage/#Least-squares-problems","page":"Storage requirements","title":"Least-squares problems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods USYMQR CGLS CRLS LSLQ LSQR LSMR\nStorage 6n + 3m 3n + 2m 4n + 3m 4n + 2m 4n + 2m 5n + 2m","category":"page"},{"location":"storage/#Adjoint-systems","page":"Storage requirements","title":"Adjoint systems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods BiLQR TriLQR\nStorage 11n 6m + 5n","category":"page"},{"location":"storage/#Saddle-point-and-Hermitian-quasi-definite-systems","page":"Storage requirements","title":"Saddle-point and Hermitian quasi-definite systems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods TriCG TriMR\nStorage 6n + 6m 8n + 8m","category":"page"},{"location":"storage/#Generalized-saddle-point-and-non-Hermitian-partitioned-systems","page":"Storage requirements","title":"Generalized saddle-point and non-Hermitian partitioned systems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Method GPMR\nStorage (2+k)(n+m) + 2k^2 + 11k","category":"page"},{"location":"storage/#Practical-storage-requirements","page":"Storage requirements","title":"Practical storage requirements","text":"","category":"section"}
,{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Each method has its own KrylovSolver that contains all the storage needed by the method. In the REPL, the size in bytes of each attribute and the total amount of memory allocated by the solver are displayed when we show a KrylovSolver.","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"using Krylov\n\nm = 5000\nn = 12000\nA = rand(Float64, m, n)\nb = rand(Float64, m)\nsolver = LsmrSolver(A, b)\nshow(stdout, solver, show_stats=false)","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"If we want the total number of bytes used by the solver, we can call nbytes = sizeof(solver).","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"nbytes = sizeof(solver)","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Thereafter, we can use Base.format_bytes(nbytes) to recover what is displayed in the REPL.","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Base.format_bytes(nbytes)","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"To verify that we match the theoretical results, we just need to multiply the storage requirement of a method by the number of bytes associated with the precision of the linear problem. For instance, we need 4 bytes for the precision Float32, 8 bytes for precisions Float64 and ComplexF32, and 16 bytes for the precision ComplexF64.","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"FC = Float64 # precision of the least-squares problem\nncoefs_lsmr = 5*n + 2*m # number of coefficients\nnbytes_lsmr = sizeof(FC) * ncoefs_lsmr # number of bytes","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Therefore, you can check that you have enough memory in RAM to allocate a KrylovSolver.","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"free_nbytes = Sys.free_memory()\nBase.format_bytes(free_nbytes) # Total free memory in RAM in bytes.","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"note: Note\nBeyond having faster operations, using low precisions, such as single precision, makes it possible to store more coefficients in RAM and to solve larger linear problems.\nIn the file test_allocations.jl, we use the macro @allocated to test that we match the expected storage requirement of each method with a tolerance of 2%.","category":"page"}
{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Therefore, you can check that you have enough memory in RAM to allocate a KrylovSolver.","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"free_nbytes = Sys.free_memory()\nBase.format_bytes(free_nbytes) # Total free memory in RAM in bytes.","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"note: Note\nBeyond having faster operations, using low precisions, such as single precision, makes it possible to store more coefficients in RAM and to solve larger linear problems.\nIn the file test_allocations.jl, we use the macro @allocated to test that we match the expected storage requirement of each method with a tolerance of 2%.","category":"page"},{"location":"examples/craig/","page":"CRAIG","title":"CRAIG","text":"using Krylov\nusing LinearAlgebra, Printf\n\nm = 5\nn = 8\nλ = 1.0e-3\nA = rand(m, n)\nb = A * ones(n)\nxy_exact = [A λ*I] \\ b # In Julia, this is the min-norm solution!\n\n(x, y, stats) = craig(A, b, λ=λ, atol=0.0, rtol=1.0e-20, verbose=1)\nshow(stats)\n\n# Check that we have a minimum-norm solution.\n# When λ > 0 we solve min ‖(x,s)‖ s.t. Ax + λs = b, and we get s = λy.\n@printf(\"Primal feasibility: %7.1e\\n\", norm(b - A * x - λ^2 * y) / norm(b))\n@printf(\"Dual feasibility: %7.1e\\n\", norm(x - A' * y) / norm(x))\n@printf(\"Error in x: %7.1e\\n\", norm(x - xy_exact[1:n]) / norm(xy_exact[1:n]))\nif λ > 0.0\n @printf(\"Error in y: %7.1e\\n\", norm(λ * y - xy_exact[n+1:n+m]) / norm(xy_exact[n+1:n+m]))\nend","category":"page"},{"location":"examples/cgne/","page":"CGNE","title":"CGNE","text":"using Krylov, HarwellRutherfordBoeing, SuiteSparseMatrixCollection\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"wm2\")\npath = fetch_ssmc(matrix, format=\"RB\")\n\nA = RutherfordBoeingData(joinpath(path[1], \"$(matrix.name[1]).rb\")).data\n(m, n) = size(A)\n@printf(\"System size: %d rows and %d columns\\n\", m, n)\n\nx_exact = A' * ones(m)\nx_exact_norm = norm(x_exact)\nx_exact /= x_exact_norm\nb = A * x_exact\n(x, stats) = cgne(A, b)\nshow(stats)\nresid = norm(A * x - b) / norm(b)\n@printf(\"CGNE: Relative residual: %7.1e\\n\", resid)\n@printf(\"CGNE: ‖x - x*‖₂: %7.1e\\n\", norm(x - x_exact))","category":"page"},{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"# Least-squares problems","category":"page"},{"location":"solvers/ls/#CGLS","page":"Least-squares problems","title":"CGLS","text":"","category":"section"},{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"cgls\ncgls!","category":"page"},{"location":"solvers/ls/#Krylov.cgls","page":"Least-squares problems","title":"Krylov.cgls","text":"(x, stats) = cgls(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false, radius::T=zero(T),\n λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),\n itmax::Int=0, timemax::Float64=Inf,\n verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nSolve the regularized linear least-squares problem\n\nminimize ‖b - Ax‖₂² + λ‖x‖₂²\n\nof size m × n using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations\n\n(AᴴA + λI) x = Aᴴb\n\nbut is more stable.\n\nCGLS produces monotonic residuals ‖r‖₂ but not optimality residuals ‖Aᴴr‖₂. It is formally equivalent to LSQR, though it can be slightly less accurate, but it is simpler to implement.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nradius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nM. R. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards, 49(6), pp. 409–436, 1952.\nA. Björck, T. Elfving and Z. Strakos, Stability of Conjugate Gradient and Lanczos Methods for Linear Least Squares Problems, SIAM Journal on Matrix Analysis and Applications, 19(3), pp. 720–736, 1998.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#Krylov.cgls!","page":"Least-squares problems","title":"Krylov.cgls!","text":"solver = cgls!(solver::CglsSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of cgls.\n\nSee CglsSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},
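{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"As an illustrative sketch (random data; the regularization parameter below is arbitrary and crls accepts the same arguments):\n\nusing Krylov\nA = rand(100, 50)\nb = rand(100)\nx, stats = cgls(A, b, λ=1.0e-4)","category":"page"},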
{"location":"solvers/ls/#CRLS","page":"Least-squares problems","title":"CRLS","text":"","category":"section"},{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"crls\ncrls!","category":"page"},{"location":"solvers/ls/#Krylov.crls","page":"Least-squares problems","title":"Krylov.crls","text":"(x, stats) = crls(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false, radius::T=zero(T),\n λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),\n itmax::Int=0, timemax::Float64=Inf,\n verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nSolve the linear least-squares problem\n\nminimize ‖b - Ax‖₂² + λ‖x‖₂²\n\nof size m × n using the Conjugate Residuals (CR) method. This method is equivalent to applying MINRES to the normal equations\n\n(AᴴA + λI) x = Aᴴb.\n\nThis implementation recurs the residual r := b - Ax.\n\nCRLS produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂. It is formally equivalent to LSMR, though it can be substantially less accurate, but it is simpler to implement.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nradius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). 
Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nD. C.-L. Fong, Minimum-Residual Methods for Sparse Least-Squares using Golub-Kahan Bidiagonalization, Ph.D. Thesis, Stanford University, 2011.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#Krylov.crls!","page":"Least-squares problems","title":"Krylov.crls!","text":"solver = crls!(solver::CrlsSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of crls.\n\nSee CrlsSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#LSLQ","page":"Least-squares problems","title":"LSLQ","text":"","category":"section"},{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"lslq\nlslq!","category":"page"},{"location":"solvers/ls/#Krylov.lslq","page":"Least-squares problems","title":"Krylov.lslq","text":"(x, stats) = lslq(A, b::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n window::Int=5, transfer_to_lsqr::Bool=false,\n sqd::Bool=false, λ::T=zero(T),\n σ::T=zero(T), etol::T=√eps(T),\n utol::T=√eps(T), btol::T=√eps(T),\n conlim::T=1/√eps(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nSolve the regularized linear least-squares problem\n\nminimize ‖b - Ax‖₂² + λ²‖x‖₂²\n\nof size m × n using the LSLQ method, where λ ≥ 0 is a regularization parameter. LSLQ is formally equivalent to applying SYMMLQ to the normal equations\n\n(AᴴA + λ²I) x = Aᴴb\n\nbut is more stable.\n\nIf λ > 0, we solve the symmetric and quasi-definite system\n\n[ E A ] [ r ] [ b ]\n[ Aᴴ -λ²F ] [ x ] = [ 0 ],\n\nwhere E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.\n\nThe system above represents the optimality conditions of\n\nminimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.\n\nFor a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSLQ is then equivalent to applying SYMMLQ to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).\n\nIf λ = 0, we solve the symmetric and indefinite system\n\n[ E A ] [ r ] [ b ]\n[ Aᴴ 0 ] [ x ] = [ 0 ].\n\nThe system above represents the optimality conditions of\n\nminimize ‖b - Ax‖²_E⁻¹.\n\nIn this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. 
r can be recovered by computing E⁻¹(b - Ax).\n\nMain features\n\nthe solution estimate is updated along orthogonal directions\nthe norm of the solution estimate ‖xᴸₖ‖₂ is increasing\nthe error ‖eₖ‖₂ := ‖xᴸₖ - x*‖₂ is decreasing\nit is possible to transition cheaply from the LSLQ iterate to the LSQR iterate if there is an advantage (there always is in terms of error)\nif A is rank deficient, LSLQ identifies the minimum-norm least-squares solution\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;\nldiv: define whether the preconditioners use ldiv! or mul!;\nwindow: number of iterations used to accumulate a lower bound on the error;\ntransfer_to_lsqr: transfer from the LSLQ point to the LSQR point, when it exists. The transfer is based on the residual norm;\nsqd: if true, set λ=1 for Hermitian quasi-definite systems;\nλ: regularization parameter;\nσ: strict lower bound on the smallest positive singular value σₘᵢₙ such as σ = (1-10⁻⁷)σₘᵢₙ;\netol: stopping tolerance based on the lower bound on the error;\nutol: stopping tolerance based on the upper bound on the error;\nbtol: stopping tolerance used to detect zero-residual problems;\nconlim: limit on the estimated condition number of A beyond which the solution will be abandoned;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). 
Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a LSLQStats structure.\nstats.err_lbnds is a vector of lower bounds on the LQ error; the vector is empty if window is set to zero\nstats.err_ubnds_lq is a vector of upper bounds on the LQ error; the vector is empty if σ is left at zero\nstats.err_ubnds_cg is a vector of upper bounds on the CG error; the vector is empty if σ is left at zero\nstats.error_with_bnd is a boolean indicating whether there was an error in the upper bounds computation (cancellation errors, σ too large, ...)\n\nStopping conditions\n\nThe iterations stop as soon as one of the following conditions holds true:\n\nthe optimality residual is sufficiently small (stats.status = \"found approximate minimum least-squares solution\") in the sense that either\n‖Aᴴr‖ / (‖A‖ ‖r‖) ≤ atol, or\n1 + ‖Aᴴr‖ / (‖A‖ ‖r‖) ≤ 1\nan approximate zero-residual solution has been found (stats.status = \"found approximate zero-residual solution\") in the sense that either\n‖r‖ / ‖b‖ ≤ btol + atol ‖A‖ * ‖xᴸ‖ / ‖b‖, or\n1 + ‖r‖ / ‖b‖ ≤ 1\nthe estimated condition number of A is too large in the sense that either\n1/cond(A) ≤ 1/conlim (stats.status = \"condition number exceeds tolerance\"), or\n1 + 1/cond(A) ≤ 1 (stats.status = \"condition number seems too large for this machine\")\nthe lower bound on the LQ forward error is less than etol * ‖xᴸ‖\nthe upper bound on the CG forward error is less than utol * ‖xᶜ‖\n\nReferences\n\nR. Estrin, D. Orban and M. A. Saunders, Euclidean-norm error bounds for SYMMLQ and CG, SIAM Journal on Matrix Analysis and Applications, 40(1), pp. 235–253, 2019.\nR. Estrin, D. Orban and M. A. Saunders, LSLQ: An Iterative Method for Linear Least-Squares with an Error Minimization Property, SIAM Journal on Matrix Analysis and Applications, 40(1), pp. 254–275, 2019.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#Krylov.lslq!","page":"Least-squares problems","title":"Krylov.lslq!","text":"solver = lslq!(solver::LslqSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of lslq.\n\nSee LslqSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#LSQR","page":"Least-squares problems","title":"LSQR","text":"","category":"section"},{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"lsqr\nlsqr!","category":"page"},{"location":"solvers/ls/#Krylov.lsqr","page":"Least-squares problems","title":"Krylov.lsqr","text":"(x, stats) = lsqr(A, b::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n window::Int=5, sqd::Bool=false, λ::T=zero(T),\n radius::T=zero(T), etol::T=√eps(T),\n axtol::T=√eps(T), btol::T=√eps(T),\n conlim::T=1/√eps(T), atol::T=zero(T),\n rtol::T=zero(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nSolve the regularized linear least-squares problem\n\nminimize ‖b - Ax‖₂² + λ²‖x‖₂²\n\nof size m × n using the LSQR method, where λ ≥ 0 is a regularization parameter. 
LSQR is formally equivalent to applying CG to the normal equations\n\n(AᴴA + λ²I) x = Aᴴb\n\n(and therefore to CGLS) but is more stable.\n\nLSQR produces monotonic residuals ‖r‖₂ but not optimality residuals ‖Aᴴr‖₂. It is formally equivalent to CGLS, though it can be slightly more accurate.\n\nIf λ > 0, LSQR solves the symmetric and quasi-definite system\n\n[ E A ] [ r ] [ b ]\n[ Aᴴ -λ²F ] [ x ] = [ 0 ],\n\nwhere E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.\n\nThe system above represents the optimality conditions of\n\nminimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.\n\nFor a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSQR is then equivalent to applying CG to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).\n\nIf λ = 0, we solve the symmetric and indefinite system\n\n[ E A ] [ r ] [ b ]\n[ Aᴴ 0 ] [ x ] = [ 0 ].\n\nThe system above represents the optimality conditions of\n\nminimize ‖b - Ax‖²_E⁻¹.\n\nIn this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;\nldiv: define whether the preconditioners use ldiv! or mul!;\nwindow: number of iterations used to accumulate a lower bound on the error;\nsqd: if true, set λ=1 for Hermitian quasi-definite systems;\nλ: regularization parameter;\nradius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;\netol: stopping tolerance based on the lower bound on the error;\naxtol: tolerance on the backward error;\nbtol: stopping tolerance used to detect zero-residual problems;\nconlim: limit on the estimated condition number of A beyond which the solution will be abandoned;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nC. C. Paige and M. A. Saunders, LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares, ACM Transactions on Mathematical Software, 8(1), pp. 43–71, 1982.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#Krylov.lsqr!","page":"Least-squares problems","title":"Krylov.lsqr!","text":"solver = lsqr!(solver::LsqrSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of lsqr.\n\nSee LsqrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},
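{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"As an illustrative sketch (random data; the regularization parameter below is arbitrary and lslq accepts the same basic arguments):\n\nusing Krylov\nA = rand(100, 50)\nb = rand(100)\nx, stats = lsqr(A, b, λ=1.0e-4)","category":"page"},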
{"location":"solvers/ls/#LSMR","page":"Least-squares problems","title":"LSMR","text":"","category":"section"},{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"lsmr\nlsmr!","category":"page"},{"location":"solvers/ls/#Krylov.lsmr","page":"Least-squares problems","title":"Krylov.lsmr","text":"(x, stats) = lsmr(A, b::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n window::Int=5, sqd::Bool=false, λ::T=zero(T),\n radius::T=zero(T), etol::T=√eps(T),\n axtol::T=√eps(T), btol::T=√eps(T),\n conlim::T=1/√eps(T), atol::T=zero(T),\n rtol::T=zero(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nSolve the regularized linear least-squares problem\n\nminimize ‖b - Ax‖₂² + λ²‖x‖₂²\n\nof size m × n using the LSMR method, where λ ≥ 0 is a regularization parameter. LSMR is formally equivalent to applying MINRES to the normal equations\n\n(AᴴA + λ²I) x = Aᴴb\n\n(and therefore to CRLS) but is more stable.\n\nLSMR produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂. It is formally equivalent to CRLS, though it can be substantially more accurate.\n\nLSMR can also be used to find a null vector of a singular matrix A by solving the problem min ‖Aᴴx - b‖ with any nonzero vector b. At a minimizer, the residual vector r = b - Aᴴx will satisfy Ar = 0.\n\nIf λ > 0, we solve the symmetric and quasi-definite system\n\n[ E A ] [ r ] [ b ]\n[ Aᴴ -λ²F ] [ x ] = [ 0 ],\n\nwhere E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.\n\nThe system above represents the optimality conditions of\n\nminimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.\n\nFor a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSMR is then equivalent to applying MINRES to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).\n\nIf λ = 0, we solve the symmetric and indefinite system\n\n[ E A ] [ r ] [ b ]\n[ Aᴴ 0 ] [ x ] = [ 0 ].\n\nThe system above represents the optimality conditions of\n\nminimize ‖b - Ax‖²_E⁻¹.\n\nIn this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;\nldiv: define whether the preconditioners use ldiv! or mul!;\nwindow: number of iterations used to accumulate a lower bound on the error;\nsqd: if true, set λ=1 for Hermitian quasi-definite systems;\nλ: regularization parameter;\nradius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. 
Useful to compute a step in a trust-region method for optimization;\netol: stopping tolerance based on the lower bound on the error;\naxtol: tolerance on the backward error;\nbtol: stopping tolerance used to detect zero-residual problems;\nconlim: limit on the estimated condition number of A beyond which the solution will be abandoned;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a LsmrStats structure.\n\nReference\n\nD. C.-L. Fong and M. A. Saunders, LSMR: An Iterative Algorithm for Sparse Least Squares Problems, SIAM Journal on Scientific Computing, 33(5), pp. 2950–2971, 2011.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#Krylov.lsmr!","page":"Least-squares problems","title":"Krylov.lsmr!","text":"solver = lsmr!(solver::LsmrSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of lsmr.\n\nSee LsmrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},
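{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"As a sketch of the null-vector computation mentioned above (the small singular matrix is illustrative):\n\nusing Krylov\nA = [1.0 2.0; 2.0 4.0] # singular matrix\nb = [1.0, 0.0]\nx, stats = lsmr(A', b) # min ‖Aᴴx - b‖\nr = b - A' * x # r is a null vector of A since Ar ≈ 0","category":"page"},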
{"location":"solvers/ls/#USYMQR","page":"Least-squares problems","title":"USYMQR","text":"","category":"section"},{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"usymqr\nusymqr!","category":"page"},{"location":"solvers/ls/#Krylov.usymqr","page":"Least-squares problems","title":"Krylov.usymqr","text":"(x, stats) = usymqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};\n atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = usymqr(A, b, c, x0::AbstractVector; kwargs...)\n\nUSYMQR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nUSYMQR solves the linear least-squares problem min ‖b - Ax‖² of size m × n. USYMQR solves Ax = b if it is consistent.\n\nUSYMQR is based on the orthogonal tridiagonalization process and requires two initial nonzero vectors b and c. The vector c is only used to initialize the process and a default value can be b or Aᴴb depending on the shape of A. The residual norm ‖b - Ax‖ monotonically decreases in USYMQR. It is considered a generalization of MINRES.\n\nIt can also be applied to under-determined and over-determined problems. USYMQR finds the minimum-norm solution if problems are inconsistent.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m;\nc: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nM. A. Saunders, H. D. Simon, and E. L. Yip, Two Conjugate-Gradient-Type Methods for Unsymmetric Linear Equations, SIAM Journal on Numerical Analysis, 25(4), pp. 927–940, 1988.\nA. Buttari, D. Orban, D. Ruiz and D. Titley-Peloquin, A tridiagonalization method for symmetric saddle-point and quasi-definite systems, SIAM Journal on Scientific Computing, 41(5), pp. 409–432, 2019.\nA. Montoison and D. Orban, BiLQ: An Iterative Method for Nonsymmetric Linear Systems with a Quasi-Minimum Error Property, SIAM Journal on Matrix Analysis and Applications, 41(3), pp. 1145–1166, 2020.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#Krylov.usymqr!","page":"Least-squares problems","title":"Krylov.usymqr!","text":"solver = usymqr!(solver::UsymqrSolver, A, b, c; kwargs...)\nsolver = usymqr!(solver::UsymqrSolver, A, b, c, x0; kwargs...)\n\nwhere kwargs are keyword arguments of usymqr.\n\nSee UsymqrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},
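{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"As an illustrative sketch (random rectangular data; c = Aᴴb is one of the default choices mentioned above):\n\nusing Krylov\nA = rand(100, 50)\nb = rand(100)\nc = A' * b\nx, stats = usymqr(A, b, c)","category":"page"},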
{"location":"preconditioners/#preconditioners","page":"Preconditioners","title":"Preconditioners","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"The solvers in Krylov.jl support preconditioners, i.e., transformations that modify a linear system Ax = b into an equivalent form that may yield faster convergence in finite-precision arithmetic. Preconditioning can be used to reduce the condition number of the problem or to cluster its eigenvalues or singular values, for instance.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"The design of preconditioners is highly dependent on the origin of the problem and most preconditioners need to take application-dependent information and structure into account. Specialized preconditioners generally outperform generic preconditioners such as incomplete factorizations.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"The construction of a preconditioner necessitates trade-offs because we need to apply it at least once per iteration within a Krylov method. Hence, a preconditioner must be constructed such that it is cheap to apply, while also capturing the characteristics of the original system in some sense.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"There exist three variants of preconditioning:","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Left preconditioning Two-sided preconditioning Right preconditioning\nPℓ⁻¹Ax = Pℓ⁻¹b Pℓ⁻¹APᵣ⁻¹y = Pℓ⁻¹b with x = Pᵣ⁻¹y APᵣ⁻¹y = b with x = Pᵣ⁻¹y","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"where Pℓ and Pᵣ are square and nonsingular.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Left preconditioning preserves the error xₖ - x* whereas right preconditioning keeps the residual b - Axₖ invariant. Two-sided preconditioning is the only variant that preserves the hermiticity of a linear system.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"note: Note\nBecause det(P⁻¹A - λI) = det(A - λP) det(P⁻¹) = det(AP⁻¹ - λI), the eigenvalues of P⁻¹A and AP⁻¹ are identical. If P = LLᴴ, L⁻¹AL⁻ᴴ also has the same spectrum.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"In Krylov.jl, we call Pℓ⁻¹ and Pᵣ⁻¹ the preconditioners and we assume that we can apply them with the operation y ← P⁻¹ * x. It is also common to call Pℓ and Pᵣ the preconditioners if the equivalent operation y ← P \\ x is available. 
Krylov.jl supports both approaches thanks to the argument ldiv of the Krylov solvers.","category":"page"},{"location":"preconditioners/#How-to-use-preconditioners-in-Krylov.jl?","page":"Preconditioners","title":"How to use preconditioners in Krylov.jl?","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"info: Info\nA preconditioner only needs to support the operation mul!(y, P⁻¹, x) when ldiv=false or ldiv!(y, P, x) when ldiv=true to be used in Krylov.jl.\nThe default value of a preconditioner in Krylov.jl is the identity operator I.","category":"page"},{"location":"preconditioners/#Square-non-Hermitian-linear-systems","page":"Preconditioners","title":"Square non-Hermitian linear systems","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Methods concerned: CGS, BiCGSTAB, DQGMRES, GMRES, FGMRES, DIOM and FOM.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"A Krylov method dedicated to non-Hermitian linear systems allows the three variants of preconditioning.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Preconditioners Pℓ⁻¹ Pℓ Pᵣ⁻¹ Pᵣ\nArguments M with ldiv=false M with ldiv=true N with ldiv=false N with ldiv=true","category":"page"},{"location":"preconditioners/#Hermitian-linear-systems","page":"Preconditioners","title":"Hermitian linear systems","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Methods concerned: SYMMLQ, CG, CG-LANCZOS, CG-LANCZOS-SHIFT, CR, CAR, MINRES, MINRES-QLP and MINARES.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"When A is Hermitian, we can only use centered preconditioning L⁻¹AL⁻ᴴy = L⁻¹b with x = L⁻ᴴy. Centered preconditioning is a special case of two-sided preconditioning with Pℓ = L = Pᵣᴴ that maintains hermiticity. 
However, there is no need to specify L and one may specify P_c = LLᴴ or its inverse directly.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Preconditioners P_c⁻¹ P_c\nArguments M with ldiv=false M with ldiv=true","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"warning: Warning\nThe preconditioner M must be Hermitian and positive definite.","category":"page"},{"location":"preconditioners/#Linear-least-squares-problems","page":"Preconditioners","title":"Linear least-squares problems","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Methods concerned: CGLS, CRLS, LSLQ, LSQR and LSMR.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Formulation Without preconditioning With preconditioning\nleast-squares problem min ½ ‖b - Ax‖₂² min ½ ‖b - Ax‖²_E⁻¹\nNormal equation AᴴAx = Aᴴb AᴴE⁻¹Ax = AᴴE⁻¹b\nAugmented system [I A; Aᴴ 0] [r; x] = [b; 0] [E A; Aᴴ 0] [r; x] = [b; 0]","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"LSLQ, LSQR and LSMR also handle regularized least-squares problems.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Formulation Without preconditioning With preconditioning\nleast-squares problem min ½ ‖b - Ax‖₂² + ½ λ²‖x‖₂² min ½ ‖b - Ax‖²_E⁻¹ + ½ λ²‖x‖²_F\nNormal equation (AᴴA + λ²I)x = Aᴴb (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b\nAugmented system [I A; Aᴴ -λ²I] [r; x] = [b; 0] [E A; Aᴴ -λ²F] [r; x] = [b; 0]","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Preconditioners E⁻¹ E F⁻¹ F\nArguments M with ldiv=false M with ldiv=true N with ldiv=false N with ldiv=true","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"warning: Warning\nThe preconditioners M and N must be Hermitian and positive definite.","category":"page"},{"location":"preconditioners/#Linear-least-norm-problems","page":"Preconditioners","title":"Linear least-norm problems","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Methods concerned: CGNE, CRMR, LNLQ, CRAIG and CRAIGMR.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Formulation Without preconditioning With preconditioning\nminimum-norm problem min ½ ‖x‖₂² s.t. Ax = b min ½ ‖x‖²_F s.t. Ax = b\nNormal equation AAᴴy = b with x = Aᴴy AF⁻¹Aᴴy = b with x = F⁻¹Aᴴy\nAugmented system [-I Aᴴ; A 0] [x; y] = [0; b] [-F Aᴴ; A 0] [x; y] = [0; b]","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"LNLQ, CRAIG and CRAIGMR also handle penalized minimum-norm 
problems.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Formulation Without preconditioning With preconditioning\nminimum-norm problem min ½ ‖x‖₂² + ½ ‖y‖₂² s.t. Ax + λ²y = b min ½ ‖x‖²_F + ½ ‖y‖²_E s.t. Ax + λ²Ey = b\nNormal equation (AAᴴ + λ²I)y = b with x = Aᴴy (AF⁻¹Aᴴ + λ²E)y = b with x = F⁻¹Aᴴy\nAugmented system [-I Aᴴ; A λ²I] [x; y] = [0; b] [-F Aᴴ; A λ²E] [x; y] = [0; b]","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Preconditioners E⁻¹ E F⁻¹ F\nArguments M with ldiv=false M with ldiv=true N with ldiv=false N with ldiv=true","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"warning: Warning\nThe preconditioners M and N must be Hermitian and positive definite.","category":"page"},{"location":"preconditioners/#Saddle-point-and-symmetric-quasi-definite-systems","page":"Preconditioners","title":"Saddle-point and symmetric quasi-definite systems","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"TriCG and TriMR can take advantage of the structure of Hermitian systems Kz = d with the 2×2 block structure","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"[τE A; Aᴴ νF] [x; y] = [b; c]","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Preconditioners E⁻¹ E F⁻¹ F\nArguments M with ldiv=false M with ldiv=true N with ldiv=false N with ldiv=true","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"warning: Warning\nThe preconditioners M and N must be Hermitian and positive definite.","category":"page"},{"location":"preconditioners/#Generalized-saddle-point-and-unsymmetric-partitioned-systems","page":"Preconditioners","title":"Generalized saddle-point and unsymmetric partitioned systems","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"GPMR can take advantage of the structure of general square systems Kz = d with the 2×2 block structure","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"[λM A; B μN] [x; y] = [b; c]","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Relations CE = M⁻¹ EC = M DF = N⁻¹ FD = N\nArguments C and E with ldiv=false C and E with ldiv=true D and F with ldiv=false D and F with ldiv=true","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"note: Note\nOur implementations of BiLQ, QMR, BiLQR, USYMLQ, USYMQR and TriLQR don't support preconditioning.","category":"page"},{"location":"preconditioners/#Packages-that-provide-preconditioners","page":"Preconditioners","title":"Packages that provide 
preconditioners","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"IncompleteLU.jl implements the left-looking and Crout versions of ILU decompositions.\nILUZero.jl is a Julia implementation of incomplete LU factorization with zero level of fill-in.\nLimitedLDLFactorizations.jl for limited-memory LDLᵀ factorization of symmetric matrices.\nAlgebraicMultigrid.jl provides two algebraic multigrid (AMG) preconditioners.\nRandomizedPreconditioners.jl uses randomized numerical linear algebra to construct approximate inverses of matrices.\nBasicLU.jl uses a sparse LU factorization to compute a maximum volume basis that can be used as a preconditioner for least-norm and least-squares problems.","category":"page"},{"location":"preconditioners/#Examples","page":"Preconditioners","title":"Examples","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using Krylov, LinearAlgebra\nn, m = size(A)\nd = [A[i,i] ≠ 0 ? 1 / abs(A[i,i]) : 1 for i=1:n] # Jacobi preconditioner\nP⁻¹ = diagm(d)\nx, stats = symmlq(A, b, M=P⁻¹)","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using Krylov, LinearAlgebra\nn, m = size(A)\nd = [1 / norm(A[:,i]) for i=1:m] # diagonal preconditioner\nP⁻¹ = diagm(d)\nx, stats = minres(A, b, M=P⁻¹)","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using IncompleteLU, Krylov\nPℓ = ilu(A)\nx, stats = gmres(A, b, M=Pℓ, ldiv=true) # left preconditioning","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using LimitedLDLFactorizations, Krylov\nP = lldl(A)\nP.D .= abs.(P.D)\nx, stats = cg(A, b, M=P, ldiv=true) # centered preconditioning","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using ILUZero, Krylov\nPᵣ = ilu0(A)\nx, stats = bicgstab(A, b, N=Pᵣ, ldiv=true) # right preconditioning","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using LDLFactorizations, Krylov\n\nM = ldl(E)\nN = ldl(F)\n\n# [E A] [x] = [b]\n# [Aᴴ -F] [y] [c]\nx, y, stats = tricg(A, b, c, M=M, N=N, ldiv=true)","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using SuiteSparse, Krylov, LinearAlgebra\nimport LinearAlgebra.ldiv!\n\nM = cholesky(E)\n\n# ldiv! 
is not implemented for the sparse Cholesky factorization (SuiteSparse.CHOLMOD)\nldiv!(y::Vector{T}, F::SuiteSparse.CHOLMOD.Factor{T}, x::Vector{T}) where T = (y .= F \\ x)\n\n# [E A] [x] = [b]\n# [Aᴴ 0] [y] [c]\nx, y, stats = trimr(A, b, c, M=M, sp=true, ldiv=true)","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using Krylov, LinearAlgebra\n\nC = lu(M)\n\n# [M A] [x] = [b]\n# [B 0] [y] [c]\nx, y, stats = gpmr(A, B, b, c, C=C, gsp=true, ldiv=true)","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"import BasicLU\nusing LinearOperators, Krylov, SparseArrays\n\n# Least-squares problem\nm, n = size(A)\nAᴴ = sparse(A')\nbasis, B = BasicLU.maxvolbasis(Aᴴ)\nopA = LinearOperator(A)\nB⁻ᴴ = LinearOperator(Float64, n, n, false, false, (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'T')),\n (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'N')),\n (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'N')))\n\nd, stats = lsmr(opA * B⁻ᴴ, b) # min ‖AB⁻ᴴd - b‖₂\nx = B⁻ᴴ * d # recover the solution of min ‖Ax - b‖₂\n\n# Least-norm problem\nm, n = size(A)\nbasis, B = BasicLU.maxvolbasis(A)\nopA = LinearOperator(A)\nB⁻¹ = LinearOperator(Float64, m, m, false, false, (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'N')),\n (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'T')),\n (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'T')))\n\nx, y, stats = craigmr(B⁻¹ * opA, B⁻¹ * b) # min ‖x‖₂ s.t. B⁻¹Ax = B⁻¹b","category":"page"},{"location":"reference/#Reference","page":"Reference","title":"Reference","text":"","category":"section"},{"location":"reference/#Index","page":"Reference","title":"Index","text":"","category":"section"},{"location":"reference/","page":"Reference","title":"Reference","text":"","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"Krylov.FloatOrComplex\nKrylov.niterations\nKrylov.Aprod\nKrylov.Atprod\nKrylov.kstdout\nKrylov.extract_parameters\nBase.show","category":"page"},{"location":"reference/#Krylov.FloatOrComplex","page":"Reference","title":"Krylov.FloatOrComplex","text":"FloatOrComplex{T}\n\nUnion type of T and Complex{T} where T is an AbstractFloat.\n\n\n\n\n\n","category":"type"},{"location":"reference/#Krylov.niterations","page":"Reference","title":"Krylov.niterations","text":"niterations(solver)\n\nReturn the number of iterations performed by the Krylov method associated to solver.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Krylov.Aprod","page":"Reference","title":"Krylov.Aprod","text":"Aprod(solver)\n\nReturn the number of operator-vector products with A performed by the Krylov method associated to solver.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Krylov.Atprod","page":"Reference","title":"Krylov.Atprod","text":"Atprod(solver)\n\nReturn the number of operator-vector products with A' performed by the Krylov method associated to solver.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Krylov.kstdout","page":"Reference","title":"Krylov.kstdout","text":"Default I/O stream for all Krylov methods.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#Krylov.extract_parameters","page":"Reference","title":"Krylov.extract_parameters","text":"arguments = extract_parameters(ex::Expr)\n\nExtract the arguments of an expression that is a keyword parameter tuple. Implementation suggested by Mitchell J. 
O'Sullivan (@mosullivan93).\n\n\n\n\n\n","category":"function"},{"location":"reference/#Base.show","page":"Reference","title":"Base.show","text":"show(io, solver; show_stats=true)\n\nStatistics of solver are displayed if show_stats is set to true.\n\n\n\n\n\n","category":"function"},{"location":"examples/cg/","page":"CG","title":"CG","text":"using Krylov, MatrixMarket, SuiteSparseMatrixCollection\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"bcsstk09\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nn = matrix.nrows[1]\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = ones(n)\nb_norm = norm(b)\n\n# Solve Ax = b.\n(x, stats) = cg(A, b)\nshow(stats)\nr = b - A * x\n@printf(\"Relative residual: %8.1e\\n\", norm(r) / b_norm)","category":"page"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"# Non-Hermitian square linear systems","category":"page"},{"location":"solvers/unsymmetric/#BiLQ","page":"Non-Hermitian square linear systems","title":"BiLQ","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"bilq\nbilq!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.bilq","page":"Non-Hermitian square linear systems","title":"Krylov.bilq","text":"(x, stats) = bilq(A, b::AbstractVector{FC};\n c::AbstractVector{FC}=b, transfer_to_bicg::Bool=true,\n atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = bilq(A, b, x0::AbstractVector; kwargs...)\n\nBiLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the square linear system Ax = b of size n using BiLQ. BiLQ is based on the Lanczos biorthogonalization process and requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b. When A is Hermitian and b = c, BiLQ is equivalent to SYMMLQ.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nc: the second initial vector of length n required by the Lanczos biorthogonalization process;\ntransfer_to_bicg: transfer from the BiLQ point to the BiCG point, when it exists. The transfer is based on the residual norm;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nA. Montoison and D. 
Orban, BiLQ: An Iterative Method for Nonsymmetric Linear Systems with a Quasi-Minimum Error Property, SIAM Journal on Matrix Analysis and Applications, 41(3), pp. 1145–1166, 2020.\nR. Fletcher, Conjugate gradient methods for indefinite systems, Numerical Analysis, Springer, pp. 73–89, 1976.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#Krylov.bilq!","page":"Non-Hermitian square linear systems","title":"Krylov.bilq!","text":"solver = bilq!(solver::BilqSolver, A, b; kwargs...)\nsolver = bilq!(solver::BilqSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of bilq.\n\nSee BilqSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},
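{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"As an illustrative sketch (the random system below is shifted to make it well conditioned; qmr accepts the same arguments):\n\nusing Krylov, LinearAlgebra\nn = 100\nA = rand(n, n) + n * I # diagonally dominant for illustration\nb = rand(n)\nx, stats = bilq(A, b)","category":"page"},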
{"location":"solvers/unsymmetric/#QMR","page":"Non-Hermitian square linear systems","title":"QMR","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"qmr\nqmr!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.qmr","page":"Non-Hermitian square linear systems","title":"Krylov.qmr","text":"(x, stats) = qmr(A, b::AbstractVector{FC};\n c::AbstractVector{FC}=b, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0, timemax::Float64=Inf, verbose::Int=0,\n history::Bool=false, callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = qmr(A, b, x0::AbstractVector; kwargs...)\n\nQMR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the square linear system Ax = b of size n using QMR.\n\nQMR is based on the Lanczos biorthogonalization process and requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b. When A is Hermitian and b = c, QMR is equivalent to MINRES.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nc: the second initial vector of length n required by the Lanczos biorthogonalization process;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nR. W. Freund and N. M. Nachtigal, QMR: a quasi-minimal residual method for non-Hermitian linear systems, Numerische Mathematik, Vol. 60(1), pp. 315–339, 1991.\nR. W. Freund and N. M. Nachtigal, An implementation of the QMR method based on coupled two-term recurrences, SIAM Journal on Scientific Computing, Vol. 15(2), pp. 313–337, 1994.\nA. Montoison and D. Orban, BiLQ: An Iterative Method for Nonsymmetric Linear Systems with a Quasi-Minimum Error Property, SIAM Journal on Matrix Analysis and Applications, 41(3), pp. 1145–1166, 2020.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#Krylov.qmr!","page":"Non-Hermitian square linear systems","title":"Krylov.qmr!","text":"solver = qmr!(solver::QmrSolver, A, b; kwargs...)\nsolver = qmr!(solver::QmrSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of qmr.\n\nSee QmrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#CGS","page":"Non-Hermitian square linear systems","title":"CGS","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"cgs\ncgs!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.cgs","page":"Non-Hermitian square linear systems","title":"Krylov.cgs","text":"(x, stats) = cgs(A, b::AbstractVector{FC};\n c::AbstractVector{FC}=b, M=I, N=I,\n ldiv::Bool=false, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = cgs(A, b, x0::AbstractVector; kwargs...)\n\nCGS can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the consistent linear system Ax = b of size n using CGS. CGS requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b.\n\nFrom \"Iterative Methods for Sparse Linear Systems (Y. Saad)\":\n\n«The method is based on a polynomial variant of the conjugate gradients algorithm. Although related to the so-called bi-conjugate gradients (BCG) algorithm, it does not involve adjoint matrix-vector multiplications, and the expected convergence rate is about twice that of the BCG algorithm.\n\nThe Conjugate Gradient Squared algorithm works quite well in many cases. However, one difficulty is that, since the polynomials are squared, rounding errors tend to be more damaging than in the standard BCG algorithm. In particular, very high variations of the residual vectors often cause the residual norms computed to become inaccurate.\n\nTFQMR and BICGSTAB were developed to remedy this difficulty.»\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nc: the second initial vector of length n required by the Lanczos biorthogonalization process;\nM: linear operator that models a nonsingular matrix of size n used for left preconditioning;\nN: linear operator that models a nonsingular matrix of size n used for right preconditioning;\nldiv: define whether the preconditioners use ldiv! or mul!;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). 
Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nP. Sonneveld, CGS, A Fast Lanczos-Type Solver for Nonsymmetric Linear Systems, SIAM Journal on Scientific and Statistical Computing, 10(1), pp. 36–52, 1989.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#Krylov.cgs!","page":"Non-Hermitian square linear systems","title":"Krylov.cgs!","text":"solver = cgs!(solver::CgsSolver, A, b; kwargs...)\nsolver = cgs!(solver::CgsSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of cgs.\n\nSee CgsSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},
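{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"As an illustrative sketch (same illustrative system as above, with a warm start from a random initial guess x0):\n\nusing Krylov, LinearAlgebra\nn = 100\nA = rand(n, n) + n * I\nb = rand(n)\nx0 = rand(n)\nx, stats = cgs(A, b, x0)","category":"page"},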
Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nH. A. van der Vorst, Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems, SIAM Journal on Scientific and Statistical Computing, 13(2), pp. 631–644, 1992.\nG. L. G. Sleijpen and D. R. Fokkema, BiCGstab(ℓ) for linear equations involving unsymmetric matrices with complex spectrum, Electronic Transactions on Numerical Analysis, 1, pp. 11–32, 1993.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#Krylov.bicgstab!","page":"Non-Hermitian square linear systems","title":"Krylov.bicgstab!","text":"solver = bicgstab!(solver::BicgstabSolver, A, b; kwargs...)\nsolver = bicgstab!(solver::BicgstabSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of bicgstab.\n\nSee BicgstabSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#DIOM","page":"Non-Hermitian square linear systems","title":"DIOM","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"diom\ndiom!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.diom","page":"Non-Hermitian square linear systems","title":"Krylov.diom","text":"(x, stats) = diom(A, b::AbstractVector{FC};\n memory::Int=20, M=I, N=I, ldiv::Bool=false,\n reorthogonalization::Bool=false, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = diom(A, b, x0::AbstractVector; kwargs...)\n\nDIOM can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the consistent linear system Ax = b of size n using DIOM.\n\nDIOM only orthogonalizes the new vectors of the Krylov basis against the memory most recent vectors. If CG is well defined on Ax = b and memory = 2, DIOM is theoretically equivalent to CG. If k ≤ memory where k is the number of iterations, DIOM is theoretically equivalent to FOM. Otherwise, DIOM interpolates between CG and FOM and is similar to CG with partial reorthogonalization.\n\nAn advantage of DIOM is that this single algorithm can handle systems that are non-Hermitian, Hermitian indefinite, or both.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nmemory: the number of most recent vectors of the Krylov basis against which to orthogonalize a new vector;\nM: linear operator that models a nonsingular matrix of size n used for left preconditioning;\nN: linear operator that models a nonsingular matrix of size n used for right preconditioning;\nldiv: define whether the preconditioners use ldiv! 
or mul!;\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against the memory most recent vectors;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nY. Saad, Practical use of some krylov subspace methods for solving indefinite and nonsymmetric linear systems, SIAM journal on scientific and statistical computing, 5(1), pp. 203–228, 1984.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#Krylov.diom!","page":"Non-Hermitian square linear systems","title":"Krylov.diom!","text":"solver = diom!(solver::DiomSolver, A, b; kwargs...)\nsolver = diom!(solver::DiomSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of diom.\n\nNote that the memory keyword argument is the only exception. It's required to create a DiomSolver and can't be changed later.\n\nSee DiomSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#FOM","page":"Non-Hermitian square linear systems","title":"FOM","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"fom\nfom!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.fom","page":"Non-Hermitian square linear systems","title":"Krylov.fom","text":"(x, stats) = fom(A, b::AbstractVector{FC};\n memory::Int=20, M=I, N=I, ldiv::Bool=false,\n restart::Bool=false, reorthogonalization::Bool=false,\n atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = fom(A, b, x0::AbstractVector; kwargs...)\n\nFOM can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the linear system Ax = b of size n using FOM.\n\nFOM algorithm is based on the Arnoldi process and a Galerkin condition.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nmemory: if restart = true, the restarted version FOM(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;\nM: linear operator that models a nonsingular matrix of size n used for left preconditioning;\nN: linear operator that models a nonsingular matrix of size n used for right preconditioning;\nldiv: define whether the preconditioners use ldiv! 
or mul!;\nrestart: restart the method after memory iterations;\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nY. Saad, Krylov subspace methods for solving unsymmetric linear systems, Mathematics of computation, Vol. 37(155), pp. 105–126, 1981.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#Krylov.fom!","page":"Non-Hermitian square linear systems","title":"Krylov.fom!","text":"solver = fom!(solver::FomSolver, A, b; kwargs...)\nsolver = fom!(solver::FomSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of fom.\n\nNote that the memory keyword argument is the only exception. It's required to create a FomSolver and can't be changed later.\n\nSee FomSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#DQGMRES","page":"Non-Hermitian square linear systems","title":"DQGMRES","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"dqgmres\ndqgmres!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.dqgmres","page":"Non-Hermitian square linear systems","title":"Krylov.dqgmres","text":"(x, stats) = dqgmres(A, b::AbstractVector{FC};\n memory::Int=20, M=I, N=I, ldiv::Bool=false,\n reorthogonalization::Bool=false, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = dqgmres(A, b, x0::AbstractVector; kwargs...)\n\nDQGMRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the consistent linear system Ax = b of size n using DQGMRES.\n\nDQGMRES algorithm is based on the incomplete Arnoldi orthogonalization process and computes a sequence of approximate solutions with the quasi-minimal residual property.\n\nDQGMRES only orthogonalizes the new vectors of the Krylov basis against the memory most recent vectors. If MINRES is well defined on Ax = b and memory = 2, DQGMRES is theoretically equivalent to MINRES. If k ≤ memory where k is the number of iterations, DQGMRES is theoretically equivalent to GMRES. 
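To make the equivalences just stated concrete, here is a small hedged sketch; the matrix, sizes, and tolerances are illustrative assumptions, not part of the Krylov.jl API.

```julia
# Illustrative sketch of the DQGMRES memory trade-off described above.
using Krylov, LinearAlgebra, SparseArrays

n = 50
A = sprandn(n, n, 0.2)
A = A * A' + n * I            # Hermitian positive definite by construction
b = randn(n)

x_ref, _   = gmres(A, b)                 # full orthogonalization
x_large, _ = dqgmres(A, b, memory=n)     # k ≤ memory: theoretically GMRES
x_small, _ = dqgmres(A, b, memory=2)     # minimal window: MINRES-like here

norm(x_large - x_ref) / norm(x_ref)      # expected to be tiny
```

In floating-point arithmetic the two solutions agree only up to rounding, which is why the comparison uses a relative norm rather than exact equality.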
Otherwise, DQGMRES interpolates between MINRES and GMRES and is similar to MINRES with partial reorthogonalization.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nmemory: the number of most recent vectors of the Krylov basis against which to orthogonalize a new vector;\nM: linear operator that models a nonsingular matrix of size n used for left preconditioning;\nN: linear operator that models a nonsingular matrix of size n used for right preconditioning;\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against the memory most recent vectors;\nldiv: define whether the preconditioners use ldiv! or mul!;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nY. Saad and K. Wu, DQGMRES: a quasi minimal residual algorithm based on incomplete orthogonalization, Numerical Linear Algebra with Applications, Vol. 3(4), pp. 329–343, 1996.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#Krylov.dqgmres!","page":"Non-Hermitian square linear systems","title":"Krylov.dqgmres!","text":"solver = dqgmres!(solver::DqgmresSolver, A, b; kwargs...)\nsolver = dqgmres!(solver::DqgmresSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of dqgmres.\n\nNote that the memory keyword argument is the only exception. It's required to create a DqgmresSolver and can't be changed later.\n\nSee DqgmresSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#GMRES","page":"Non-Hermitian square linear systems","title":"GMRES","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"gmres\ngmres!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.gmres","page":"Non-Hermitian square linear systems","title":"Krylov.gmres","text":"(x, stats) = gmres(A, b::AbstractVector{FC};\n memory::Int=20, M=I, N=I, ldiv::Bool=false,\n restart::Bool=false, reorthogonalization::Bool=false,\n atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. 
FC is T or Complex{T}.\n\n(x, stats) = gmres(A, b, x0::AbstractVector; kwargs...)\n\nGMRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the linear system Ax = b of size n using GMRES.\n\nGMRES algorithm is based on the Arnoldi process and computes a sequence of approximate solutions with the minimum residual.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nmemory: if restart = true, the restarted version GMRES(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;\nM: linear operator that models a nonsingular matrix of size n used for left preconditioning;\nN: linear operator that models a nonsingular matrix of size n used for right preconditioning;\nldiv: define whether the preconditioners use ldiv! or mul!;\nrestart: restart the method after memory iterations;\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nY. Saad and M. H. Schultz, GMRES: A Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems, SIAM Journal on Scientific and Statistical Computing, Vol. 7(3), pp. 856–869, 1986.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#Krylov.gmres!","page":"Non-Hermitian square linear systems","title":"Krylov.gmres!","text":"solver = gmres!(solver::GmresSolver, A, b; kwargs...)\nsolver = gmres!(solver::GmresSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of gmres.\n\nNote that the memory keyword argument is the only exception. 
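Since the note above explains that memory is fixed when the workspace is created, a short sketch may help; it assumes the GmresSolver(A, b, memory) constructor from the package's in-place API, and the data below is illustrative.

```julia
# Hedged sketch: reuse one GMRES workspace for several solves.
using Krylov, LinearAlgebra

A = rand(100, 100) + 100I        # diagonally dominant, hence nonsingular
b = rand(100)

solver = GmresSolver(A, b, 30)   # memory = 30 is chosen here, once (assumed constructor)
gmres!(solver, A, b)             # first solve
x0 = copy(solver.x)
gmres!(solver, A, b, x0)         # warm-started second solve, same workspace
solver.stats.solved
```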
It's required to create a GmresSolver and can't be changed later.\n\nSee GmresSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#FGMRES","page":"Non-Hermitian square linear systems","title":"FGMRES","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"fgmres\nfgmres!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.fgmres","page":"Non-Hermitian square linear systems","title":"Krylov.fgmres","text":"(x, stats) = fgmres(A, b::AbstractVector{FC};\n memory::Int=20, M=I, N=I, ldiv::Bool=false,\n restart::Bool=false, reorthogonalization::Bool=false,\n atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = fgmres(A, b, x0::AbstractVector; kwargs...)\n\nFGMRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the linear system Ax = b of size n using FGMRES.\n\nFGMRES computes a sequence of approximate solutions with minimum residual. FGMRES is a variant of GMRES that allows changes in the right preconditioner at each iteration.\n\nThis implementation allows a left preconditioner M and a flexible right preconditioner N. A situation in which the preconditioner is \"not constant\" is when a relaxation-type method, a Chebyshev iteration or another Krylov subspace method is used as a preconditioner. Compared to GMRES, there is no additional cost incurred in the arithmetic but the memory requirement almost doubles. Thus, GMRES is recommended if the right preconditioner N is constant.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nmemory: if restart = true, the restarted version FGMRES(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;\nM: linear operator that models a nonsingular matrix of size n used for left preconditioning;\nN: linear operator that models a nonsingular matrix of size n used for right preconditioning;\nldiv: define whether the preconditioners use ldiv! or mul!;\nrestart: restart the method after memory iterations;\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). 
Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nY. Saad, A Flexible Inner-Outer Preconditioned GMRES Algorithm, SIAM Journal on Scientific Computing, Vol. 14(2), pp. 461–469, 1993.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#Krylov.fgmres!","page":"Non-Hermitian square linear systems","title":"Krylov.fgmres!","text":"solver = fgmres!(solver::FgmresSolver, A, b; kwargs...)\nsolver = fgmres!(solver::FgmresSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of fgmres.\n\nNote that the memory keyword argument is the only exception. It's required to create a FgmresSolver and can't be changed later.\n\nSee FgmresSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"examples/crmr/","page":"CRMR","title":"CRMR","text":"using Krylov, HarwellRutherfordBoeing, SuiteSparseMatrixCollection\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"gemat1\")\npath = fetch_ssmc(matrix, format=\"RB\")\n\nA = RutherfordBoeingData(joinpath(path[1], \"$(matrix.name[1]).rb\")).data\n(m, n) = size(A)\n@printf(\"System size: %d rows and %d columns\\n\", m, n)\n\nx_exact = A' * ones(m)\nx_exact_norm = norm(x_exact)\nx_exact /= x_exact_norm\nb = A * x_exact\n(x, stats) = crmr(A, b)\nshow(stats)\nresid = norm(A * x - b) / norm(b)\n@printf(\"CRMR: Relative residual: %7.1e\\n\", resid)\n@printf(\"CRMR: ‖x - x*‖₂: %7.1e\\n\", norm(x - x_exact))","category":"page"},{"location":"solvers/sid/","page":"Hermitian indefinite linear systems","title":"Hermitian indefinite linear systems","text":"# Hermitian indefinite linear systems","category":"page"},{"location":"solvers/sid/#SYMMLQ","page":"Hermitian indefinite linear systems","title":"SYMMLQ","text":"","category":"section"},{"location":"solvers/sid/","page":"Hermitian indefinite linear systems","title":"Hermitian indefinite linear systems","text":"symmlq\nsymmlq!","category":"page"},{"location":"solvers/sid/#Krylov.symmlq","page":"Hermitian indefinite linear systems","title":"Krylov.symmlq","text":"(x, stats) = symmlq(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false, window::Int=5,\n transfer_to_cg::Bool=true, λ::T=zero(T),\n λest::T=zero(T), etol::T=√eps(T),\n conlim::T=1/√eps(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. 
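Before SYMMLQ, one more note on the FGMRES section above: the flexible right preconditioner N may itself be an iterative solve. The wrapper below is a hypothetical illustration (InnerSolve is not part of Krylov.jl), assuming FGMRES applies N through mul! when ldiv=false.

```julia
# Hypothetical variable preconditioner: a loose inner GMRES solve plays N.
using Krylov, LinearAlgebra, SparseArrays
import LinearAlgebra: mul!

struct InnerSolve{T}
    A::T
end

# The right preconditioner is applied with mul! when ldiv=false.
function mul!(y::AbstractVector, P::InnerSolve, x::AbstractVector)
    z, _ = gmres(P.A, x, itmax=5)   # iteration-dependent, hence "flexible"
    copyto!(y, z)
end

n = 200
A = sprandn(n, n, 0.02) + 10I
b = randn(n)
x, stats = fgmres(A, b, N=InnerSolve(A))
stats.solved
```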
FC is T or Complex{T}.\n\n(x, stats) = symmlq(A, b, x0::AbstractVector; kwargs...)\n\nSYMMLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the shifted linear system\n\n(A + λI) x = b\n\nof size n using the SYMMLQ method, where λ is a shift parameter, and A is Hermitian.\n\nSYMMLQ produces monotonic errors ‖x* - x‖₂.\n\nInput arguments\n\nA: a linear operator that models a Hermitian matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nwindow: number of iterations used to accumulate a lower bound on the error;\ntransfer_to_cg: transfer from the SYMMLQ point to the CG point, when it exists. The transfer is based on the residual norm;\nλ: regularization parameter;\nλest: strict positive lower bound on the smallest eigenvalue λₘᵢₙ when solving a positive-definite system, such as λest = (1-10⁻⁷)λₘᵢₙ;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\netol: stopping tolerance based on the lower bound on the error;\nconlim: limit on the estimated condition number of A beyond which the solution will be abandoned;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SymmlqStats structure.\n\nReference\n\nC. C. Paige and M. A. Saunders, Solution of Sparse Indefinite Systems of Linear Equations, SIAM Journal on Numerical Analysis, 12(4), pp. 617–629, 1975.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sid/#Krylov.symmlq!","page":"Hermitian indefinite linear systems","title":"Krylov.symmlq!","text":"solver = symmlq!(solver::SymmlqSolver, A, b; kwargs...)\nsolver = symmlq!(solver::SymmlqSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of symmlq.\n\nSee SymmlqSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sid/#MINRES","page":"Hermitian indefinite linear systems","title":"MINRES","text":"","category":"section"},{"location":"solvers/sid/","page":"Hermitian indefinite linear systems","title":"Hermitian indefinite linear systems","text":"minres\nminres!","category":"page"},{"location":"solvers/sid/#Krylov.minres","page":"Hermitian indefinite linear systems","title":"Krylov.minres","text":"(x, stats) = minres(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false, window::Int=5,\n λ::T=zero(T), atol::T=√eps(T),\n rtol::T=√eps(T), etol::T=√eps(T),\n conlim::T=1/√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. 
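As a hedged illustration of the shifted system solved by SYMMLQ documented above (the data below is assumed for the example):

```julia
# Solve (A + λI)x = b with SYMMLQ on a random Hermitian matrix.
using Krylov, LinearAlgebra

n = 100
B = randn(n, n)
A = (B + B') / 2                     # Hermitian, generically indefinite
b = randn(n)
λ = 0.5

x, stats = symmlq(A, b, λ=λ)
norm((A + λ * I) * x - b) / norm(b)  # relative residual of the shifted system
```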
FC is T or Complex{T}.\n\n(x, stats) = minres(A, b, x0::AbstractVector; kwargs...)\n\nMINRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the shifted linear least-squares problem\n\nminimize ‖b - (A + λI)x‖₂²\n\nor the shifted linear system\n\n(A + λI) x = b\n\nof size n using the MINRES method, where λ ≥ 0 is a shift parameter and A is Hermitian.\n\nMINRES is formally equivalent to applying CR to Ax = b when A is positive definite, but is typically more stable and also applies to the case where A is indefinite.\n\nMINRES produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂.\n\nInput arguments\n\nA: a linear operator that models a Hermitian matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nwindow: number of iterations used to accumulate a lower bound on the error;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\netol: stopping tolerance based on the lower bound on the error;\nconlim: limit on the estimated condition number of A beyond which the solution will be abandoned;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nC. C. Paige and M. A. Saunders, Solution of Sparse Indefinite Systems of Linear Equations, SIAM Journal on Numerical Analysis, 12(4), pp. 617–629, 1975.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sid/#Krylov.minres!","page":"Hermitian indefinite linear systems","title":"Krylov.minres!","text":"solver = minres!(solver::MinresSolver, A, b; kwargs...)\nsolver = minres!(solver::MinresSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of minres.\n\nSee MinresSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sid/#MINRES-QLP","page":"Hermitian indefinite linear systems","title":"MINRES-QLP","text":"","category":"section"},{"location":"solvers/sid/","page":"Hermitian indefinite linear systems","title":"Hermitian indefinite linear systems","text":"minres_qlp\nminres_qlp!","category":"page"},{"location":"solvers/sid/#Krylov.minres_qlp","page":"Hermitian indefinite linear systems","title":"Krylov.minres_qlp","text":"(x, stats) = minres_qlp(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false, Artol::T=√eps(T),\n λ::T=zero(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. 
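A hedged sketch of the MINRES use case described above, on a small Hermitian indefinite system (the diagonal matrix is an assumed example):

```julia
# MINRES on an indefinite Hermitian system, where CG is not appropriate.
using Krylov, LinearAlgebra

A = Diagonal([-2.0, -1.0, 1.0, 3.0, 5.0])  # Hermitian and indefinite
b = ones(5)

x, stats = minres(A, b)
norm(A * x - b) / norm(b)   # the residuals ‖r‖₂ decrease monotonically
```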
FC is T or Complex{T}.\n\n(x, stats) = minres_qlp(A, b, x0::AbstractVector; kwargs...)\n\nMINRES-QLP can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nMINRES-QLP is the only method based on the Lanczos process that returns the minimum-norm solution on singular inconsistent systems (A + λI)x = b of size n, where λ is a shift parameter. It is significantly more complex but can be more reliable than MINRES when A is ill-conditioned.\n\nM also indicates the weighted norm in which residuals are measured.\n\nInput arguments\n\nA: a linear operator that models a Hermitian matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nArtol: relative stopping tolerance based on the Aᴴ-residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nS.-C. T. Choi, Iterative methods for singular linear equations and least-squares problems, Ph.D. thesis, ICME, Stanford University, 2006.\nS.-C. T. Choi, C. C. Paige and M. A. Saunders, MINRES-QLP: A Krylov subspace method for indefinite or singular symmetric systems, SIAM Journal on Scientific Computing, Vol. 33(4), pp. 1810–1836, 2011.\nS.-C. T. Choi and M. A. Saunders, Algorithm 937: MINRES-QLP for symmetric and Hermitian linear equations and least-squares problems, ACM Transactions on Mathematical Software, 40(2), pp. 
1–12, 2014.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sid/#Krylov.minres_qlp!","page":"Hermitian indefinite linear systems","title":"Krylov.minres_qlp!","text":"solver = minres_qlp!(solver::MinresQlpSolver, A, b; kwargs...)\nsolver = minres_qlp!(solver::MinresQlpSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of minres_qlp.\n\nSee MinresQlpSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sid/#MINARES","page":"Hermitian indefinite linear systems","title":"MINARES","text":"","category":"section"},{"location":"solvers/sid/","page":"Hermitian indefinite linear systems","title":"Hermitian indefinite linear systems","text":"minares\nminares!","category":"page"},{"location":"solvers/sid/#Krylov.minares","page":"Hermitian indefinite linear systems","title":"Krylov.minares","text":"(x, stats) = minares(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false,\n λ::T = zero(T), atol::T=√eps(T),\n rtol::T=√eps(T), Artol::T = √eps(T),\n itmax::Int=0, timemax::Float64=Inf,\n verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = minares(A, b, x0::AbstractVector; kwargs...)\n\nMINARES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nMINARES solves the Hermitian linear system Ax = b of size n. MINARES minimizes ‖Arₖ‖₂ when M = Iₙ and ‖AMrₖ‖_M otherwise. The estimates computed every iteration are ‖Mrₖ‖₂ and ‖AMrₖ‖_M.\n\nInput arguments\n\nA: a linear operator that models a Hermitian matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nArtol: relative stopping tolerance based on the Aᴴ-residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nA. Montoison, D. Orban and M. A. 
Saunders, MinAres: An Iterative Solver for Symmetric Linear Systems, Cahier du GERAD G-2023-40, GERAD, Montréal, 2023.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sid/#Krylov.minares!","page":"Hermitian indefinite linear systems","title":"Krylov.minares!","text":"solver = minares!(solver::MinaresSolver, A, b; kwargs...)\nsolver = minares!(solver::MinaresSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of minares.\n\nSee MinaresSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/","page":"Least-norm problems","title":"Least-norm problems","text":"# Least-norm problems","category":"page"},{"location":"solvers/ln/#CGNE","page":"Least-norm problems","title":"CGNE","text":"","category":"section"},{"location":"solvers/ln/","page":"Least-norm problems","title":"Least-norm problems","text":"cgne\ncgne!","category":"page"},{"location":"solvers/ln/#Krylov.cgne","page":"Least-norm problems","title":"Krylov.cgne","text":"(x, stats) = cgne(A, b::AbstractVector{FC};\n N=I, ldiv::Bool=false,\n λ::T=zero(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nSolve the consistent linear system\n\nAx + √λs = b\n\nof size m × n using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations of the second kind\n\n(AAᴴ + λI) y = b\n\nbut is more stable. When λ = 0, this method solves the minimum-norm problem\n\nmin ‖x‖₂ s.t. Ax = b.\n\nWhen λ > 0, it solves the problem\n\nmin ‖(x,s)‖₂ s.t. Ax + √λs = b.\n\nCGNE produces monotonic errors ‖x-x*‖₂ but not residuals ‖r‖₂. It is formally equivalent to CRAIG, though it can be slightly less accurate; in exchange, it is simpler to implement. Only the x-part of the solution is returned.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nN: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nJ. E. Craig, The N-step iteration procedures, Journal of Mathematics and Physics, 34(1), pp. 64–73, 1955.\nJ. E. Craig, Iteration Procedures for Simultaneous Equations, Ph.D. 
Thesis, Department of Electrical Engineering, MIT, 1954.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#Krylov.cgne!","page":"Least-norm problems","title":"Krylov.cgne!","text":"solver = cgne!(solver::CgneSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of cgne.\n\nSee CgneSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#CRMR","page":"Least-norm problems","title":"CRMR","text":"","category":"section"},{"location":"solvers/ln/","page":"Least-norm problems","title":"Least-norm problems","text":"crmr\ncrmr!","category":"page"},{"location":"solvers/ln/#Krylov.crmr","page":"Least-norm problems","title":"Krylov.crmr","text":"(x, stats) = crmr(A, b::AbstractVector{FC};\n N=I, ldiv::Bool=false,\n λ::T=zero(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nSolve the consistent linear system\n\nAx + √λs = b\n\nof size m × n using the Conjugate Residual (CR) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CR to the normal equations of the second kind\n\n(AAᴴ + λI) y = b\n\nbut is more stable. When λ = 0, this method solves the minimum-norm problem\n\nmin ‖x‖₂ s.t. x ∈ argmin ‖Ax - b‖₂.\n\nWhen λ > 0, this method solves the problem\n\nmin ‖(x,s)‖₂ s.t. Ax + √λs = b.\n\nCRMR produces monotonic residuals ‖r‖₂. It is formally equivalent to CRAIGMR, though it can be slightly less accurate; in exchange, it is simpler to implement. Only the x-part of the solution is returned.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nN: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nD. Orban and M. Arioli, Iterative Solution of Symmetric Quasi-Definite Linear Systems, Volume 3 of Spotlights. SIAM, Philadelphia, PA, 2017.\nD. Orban, The Projected Golub-Kahan Process for Constrained Linear Least-Squares Problems. 
Cahier du GERAD G-2014-15, 2014.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#Krylov.crmr!","page":"Least-norm problems","title":"Krylov.crmr!","text":"solver = crmr!(solver::CrmrSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of crmr.\n\nSee CrmrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#LNLQ","page":"Least-norm problems","title":"LNLQ","text":"","category":"section"},{"location":"solvers/ln/","page":"Least-norm problems","title":"Least-norm problems","text":"lnlq\nlnlq!","category":"page"},{"location":"solvers/ln/#Krylov.lnlq","page":"Least-norm problems","title":"Krylov.lnlq","text":"(x, y, stats) = lnlq(A, b::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n transfer_to_craig::Bool=true,\n sqd::Bool=false, λ::T=zero(T),\n σ::T=zero(T), utolx::T=√eps(T),\n utoly::T=√eps(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nFind the least-norm solution of the consistent linear system\n\nAx + λ²y = b\n\nof size m × n using the LNLQ method, where λ ≥ 0 is a regularization parameter.\n\nFor a system in the form Ax = b, the LNLQ method is equivalent to applying SYMMLQ to AAᴴy = b and recovering x = Aᴴy but is more stable. Note that y are the Lagrange multipliers of the least-norm problem\n\nminimize ‖x‖ s.t. Ax = b.\n\nIf λ > 0, LNLQ solves the symmetric and quasi-definite system\n\n[ -F Aᴴ ] [ x ] [ 0 ]\n[ A λ²E ] [ y ] = [ b ],\n\nwhere E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.\n\nThe system above represents the optimality conditions of\n\nmin ‖x‖²_F + λ²‖y‖²_E s.t. Ax + λ²Ey = b.\n\nFor a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LNLQ is then equivalent to applying SYMMLQ to (AF⁻¹Aᴴ + λ²E)y = b with Fx = Aᴴy.\n\nIf λ = 0, LNLQ solves the symmetric and indefinite system\n\n[ -F Aᴴ ] [ x ] [ 0 ]\n[ A 0 ] [ y ] = [ b ].\n\nThe system above represents the optimality conditions of\n\nminimize ‖x‖²_F s.t. Ax = b.\n\nIn this case, M can still be specified and indicates the weighted norm in which residuals are measured.\n\nIn this implementation, both the x and y-parts of the solution are returned.\n\nutolx and utoly are tolerances on the upper bound of the distance to the solution ‖x-x*‖ and ‖y-y*‖, respectively. The bound is valid if λ > 0 or σ > 0, where σ should be strictly smaller than the smallest positive singular value. For instance σ := (1-10⁻⁷)σₘᵢₙ.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;\nldiv: define whether the preconditioners use ldiv! or mul!;\ntransfer_to_craig: transfer from the LNLQ point to the CRAIG point, when it exists. 
The transfer is based on the residual norm;\nsqd: if true, set λ=1 for Hermitian quasi-definite systems;\nλ: regularization parameter;\nσ: strict lower bound on the smallest positive singular value σₘᵢₙ such as σ = (1-10⁻⁷)σₘᵢₙ;\nutolx: tolerance on the upper bound on the distance to the solution ‖x-x*‖;\nutoly: tolerance on the upper bound on the distance to the solution ‖y-y*‖;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\ny: a dense vector of length m;\nstats: statistics collected on the run in a LNLQStats structure.\n\nReference\n\nR. Estrin, D. Orban, M.A. Saunders, LNLQ: An Iterative Method for Least-Norm Problems with an Error Minimization Property, SIAM Journal on Matrix Analysis and Applications, 40(3), pp. 1102–1124, 2019.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#Krylov.lnlq!","page":"Least-norm problems","title":"Krylov.lnlq!","text":"solver = lnlq!(solver::LnlqSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of lnlq.\n\nSee LnlqSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#CRAIG","page":"Least-norm problems","title":"CRAIG","text":"","category":"section"},{"location":"solvers/ln/","page":"Least-norm problems","title":"Least-norm problems","text":"craig\ncraig!","category":"page"},{"location":"solvers/ln/#Krylov.craig","page":"Least-norm problems","title":"Krylov.craig","text":"(x, y, stats) = craig(A, b::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n transfer_to_lsqr::Bool=false, sqd::Bool=false,\n λ::T=zero(T), btol::T=√eps(T),\n conlim::T=1/√eps(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nFind the least-norm solution of the consistent linear system\n\nAx + λ²y = b\n\nof size m × n using the Golub-Kahan implementation of Craig's method, where λ ≥ 0 is a regularization parameter. This method is equivalent to CGNE but is more stable.\n\nFor a system in the form Ax = b, Craig's method is equivalent to applying CG to AAᴴy = b and recovering x = Aᴴy. Note that y are the Lagrange multipliers of the least-norm problem\n\nminimize ‖x‖ s.t. Ax = b.\n\nIf λ > 0, CRAIG solves the symmetric and quasi-definite system\n\n[ -F Aᴴ ] [ x ] [ 0 ]\n[ A λ²E ] [ y ] = [ b ],\n\nwhere E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.\n\nThe system above represents the optimality conditions of\n\nmin ‖x‖²_F + λ²‖y‖²_E s.t. Ax + λ²Ey = b.\n\nFor a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. 
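For the λ = 0 case described above, a hedged numerical check of the minimum-norm property (random data assumed; the closed form x = Aᴴ(AAᴴ)⁻¹b is used only for comparison):

```julia
# CRAIG on a consistent underdetermined system returns the minimum-norm x.
using Krylov, LinearAlgebra

m, n = 5, 8
A = randn(m, n)                # full row rank with probability one
b = A * randn(n)               # consistent by construction

x, y, stats = craig(A, b)
x_star = A' * ((A * A') \ b)   # minimum-norm solution in closed form
norm(x - x_star), norm(x - A' * y)   # both expected to be near zero
```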
CRAIG is then equivalent to applying CG to (AF⁻¹Aᴴ + λ²E)y = b with Fx = Aᴴy.\n\nIf λ = 0, CRAIG solves the symmetric and indefinite system\n\n[ -F Aᴴ ] [ x ] [ 0 ]\n[ A 0 ] [ y ] = [ b ].\n\nThe system above represents the optimality conditions of\n\nminimize ‖x‖²_F s.t. Ax = b.\n\nIn this case, M can still be specified and indicates the weighted norm in which residuals are measured.\n\nIn this implementation, both the x and y-parts of the solution are returned.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;\nldiv: define whether the preconditioners use ldiv! or mul!;\ntransfer_to_lsqr: transfer from the LSLQ point to the LSQR point, when it exists. The transfer is based on the residual norm;\nsqd: if true, set λ=1 for Hermitian quasi-definite systems;\nλ: regularization parameter;\nbtol: stopping tolerance used to detect zero-residual problems;\nconlim: limit on the estimated condition number of A beyond which the solution will be abandoned;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\ny: a dense vector of length m;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nC. C. Paige and M. A. Saunders, LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares, ACM Transactions on Mathematical Software, 8(1), pp. 43–71, 1982.\nM. A. Saunders, Solutions of Sparse Rectangular Systems Using LSQR and CRAIG, BIT Numerical Mathematics, 35(4), pp. 588–604, 1995.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#Krylov.craig!","page":"Least-norm problems","title":"Krylov.craig!","text":"solver = craig!(solver::CraigSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of craig.\n\nSee CraigSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#CRAIGMR","page":"Least-norm problems","title":"CRAIGMR","text":"","category":"section"},{"location":"solvers/ln/","page":"Least-norm problems","title":"Least-norm problems","text":"craigmr\ncraigmr!","category":"page"},{"location":"solvers/ln/#Krylov.craigmr","page":"Least-norm problems","title":"Krylov.craigmr","text":"(x, y, stats) = craigmr(A, b::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n sqd::Bool=false, λ::T=zero(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. 
FC is T or Complex{T}.\n\nSolve the consistent linear system\n\nAx + λ²y = b\n\nof size m × n using the CRAIGMR method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying the Conjugate Residuals method to the normal equations of the second kind\n\n(AAᴴ + λ²I) y = b\n\nbut is more stable. When λ = 0, this method solves the minimum-norm problem\n\nmin ‖x‖ s.t. x ∈ argmin ‖Ax - b‖.\n\nIf λ > 0, CRAIGMR solves the symmetric and quasi-definite system\n\n[ -F Aᴴ ] [ x ] [ 0 ]\n[ A λ²E ] [ y ] = [ b ],\n\nwhere E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.\n\nThe system above represents the optimality conditions of\n\nmin ‖x‖²_F + λ²‖y‖²_E s.t. Ax + λ²Ey = b.\n\nFor a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. CRAIGMR is then equivalent to applying MINRES to (AF⁻¹Aᴴ + λ²E)y = b with Fx = Aᴴy.\n\nIf λ = 0, CRAIGMR solves the symmetric and indefinite system\n\n[ -F Aᴴ ] [ x ] [ 0 ]\n[ A 0 ] [ y ] = [ b ].\n\nThe system above represents the optimality conditions of\n\nmin ‖x‖²_F s.t. Ax = b.\n\nIn this case, M can still be specified and indicates the weighted norm in which residuals are measured.\n\nCRAIGMR produces monotonic residuals ‖r‖₂. It is formally equivalent to CRMR, though it can be slightly more accurate at the cost of a more intricate implementation. Both the x- and y-parts of the solution are returned.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;\nldiv: define whether the preconditioners use ldiv! or mul!;\nsqd: if true, set λ=1 for Hermitian quasi-definite systems;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\ny: a dense vector of length m;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nD. Orban and M. Arioli, Iterative Solution of Symmetric Quasi-Definite Linear Systems, Volume 3 of Spotlights. SIAM, Philadelphia, PA, 2017.\nD. Orban, The Projected Golub-Kahan Process for Constrained Linear Least-Squares Problems. 
Cahier du GERAD G-2014-15, 2014.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#Krylov.craigmr!","page":"Least-norm problems","title":"Krylov.craigmr!","text":"solver = craigmr!(solver::CraigmrSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of craigmr.\n\nSee CraigmrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#USYMLQ","page":"Least-norm problems","title":"USYMLQ","text":"","category":"section"},{"location":"solvers/ln/","page":"Least-norm problems","title":"Least-norm problems","text":"usymlq\nusymlq!","category":"page"},{"location":"solvers/ln/#Krylov.usymlq","page":"Least-norm problems","title":"Krylov.usymlq","text":"(x, stats) = usymlq(A, b::AbstractVector{FC}, c::AbstractVector{FC};\n transfer_to_usymcg::Bool=true, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = usymlq(A, b, c, x0::AbstractVector; kwargs...)\n\nUSYMLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nUSYMLQ determines the least-norm solution of the consistent linear system Ax = b of size m × n.\n\nUSYMLQ is based on the orthogonal tridiagonalization process and requires two initial nonzero vectors b and c. The vector c is only used to initialize the process and a default value can be b or Aᴴb depending on the shape of A. The error norm ‖x - x*‖ decreases monotonically in USYMLQ. It can be regarded as a generalization of SYMMLQ.\n\nIt can also be applied to under-determined and over-determined problems. In all cases, problems must be consistent.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m;\nc: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\ntransfer_to_usymcg: transfer from the USYMLQ point to the USYMCG point, when it exists. The transfer is based on the residual norm;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nM. A. Saunders, H. D. Simon, and E. L. Yip, Two Conjugate-Gradient-Type Methods for Unsymmetric Linear Equations, SIAM Journal on Numerical Analysis, 25(4), pp. 927–940, 1988.\nA. Buttari, D. Orban, D. Ruiz and D. Titley-Peloquin, A tridiagonalization method for symmetric saddle-point and quasi-definite systems, SIAM Journal on Scientific Computing, 41(5), pp. 409–432, 2019.\nA. Montoison and D. 
,{"location":"solvers/ln/#Krylov.usymlq!","page":"Least-norm problems","title":"Krylov.usymlq!","text":"solver = usymlq!(solver::UsymlqSolver, A, b, c; kwargs...)\nsolver = usymlq!(solver::UsymlqSolver, A, b, c, x0; kwargs...)\n\nwhere kwargs are keyword arguments of usymlq.\n\nSee UsymlqSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"","category":"page"},{"location":"processes/#krylov-processes","page":"Krylov processes","title":"Krylov processes","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Krylov processes are the foundation of Krylov methods: they generate bases of Krylov subspaces. Depending on the Krylov subspaces generated, Krylov processes are more or less specialized for a subset of linear problems. The following table summarizes the most relevant processes for each linear problem.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Linear problems Processes\nHermitian linear systems Hermitian Lanczos\nSquare non-Hermitian linear systems Non-Hermitian Lanczos – Arnoldi\nLeast-squares problems Golub-Kahan – Saunders-Simon-Yip\nLeast-norm problems Golub-Kahan – Saunders-Simon-Yip\nSaddle-point and Hermitian quasi-definite systems Golub-Kahan – Saunders-Simon-Yip\nGeneralized saddle-point and non-Hermitian partitioned systems Montoison-Orban","category":"page"},{"location":"processes/#Notation","page":"Krylov processes","title":"Notation","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"For a matrix A, Aᴴ denotes the conjugate transpose of A. It coincides with Aᵀ, the transpose of A, for real matrices. 
Define Vₖ = [v₁ ⋯ vₖ] and Uₖ = [u₁ ⋯ uₖ].","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"For a matrix C ∈ ℂⁿˣⁿ and a vector t ∈ ℂⁿ, the k-th Krylov subspace generated by C and t is","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"𝒦ₖ(C, t) = {ω₀ t + ω₁ Ct + ⋯ + ωₖ₋₁ Cᵏ⁻¹t : ωᵢ ∈ ℂ, 0 ≤ i ≤ k-1}.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"For matrices C ∈ ℂⁿˣⁿ and T ∈ ℂⁿˣᵖ, the k-th block Krylov subspace generated by C and T is","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"𝒦ₖ□(C, T) = {T Ω₀ + CT Ω₁ + ⋯ + Cᵏ⁻¹T Ωₖ₋₁ : Ωᵢ ∈ ℂᵖˣᵖ, 0 ≤ i ≤ k-1}.","category":"page"},{"location":"processes/#Hermitian-Lanczos","page":"Krylov processes","title":"Hermitian Lanczos","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"(Image: hermitian_lanczos)","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"After k iterations of the Hermitian Lanczos process, the situation may be summarized as","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"A Vₖ = Vₖ Tₖ + βₖ₊₁ vₖ₊₁ eₖᵀ = Vₖ₊₁ Tₖ₊₁.ₖ,\nVₖᴴ Vₖ = Iₖ,","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"where Vₖ is an orthonormal basis of the Krylov subspace 𝒦ₖ(A, b),","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Tₖ is the Hermitian tridiagonal matrix with diagonal (α₁, …, αₖ), subdiagonal (β₂, …, βₖ) and superdiagonal (β̄₂, …, β̄ₖ), and Tₖ₊₁.ₖ = [ Tₖ ; βₖ₊₁ eₖᵀ ].","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Note that depending on how we normalize the vectors that compose Vₖ, Tₖ₊₁.ₖ can be a real tridiagonal matrix even if A is a complex matrix.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"The function hermitian_lanczos returns Vₖ₊₁ and Tₖ₊₁.ₖ.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Related methods: SYMMLQ, CG, CR, CAR, MINRES, MINRES-QLP, MINARES, CGLS, CRLS, CGNE, CRMR, CG-LANCZOS and CG-LANCZOS-SHIFT.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"hermitian_lanczos","category":"page"},{"location":"processes/#Krylov.hermitian_lanczos","page":"Krylov processes","title":"Krylov.hermitian_lanczos","text":"V, T = hermitian_lanczos(A, b, k)\n\nInput arguments\n\nA: a linear operator that models a Hermitian matrix of dimension n;\nb: a vector of length n;\nk: the number of iterations of the Hermitian Lanczos process.\n\nOutput arguments\n\nV: a dense n × (k+1) matrix;\nT: a sparse (k+1) × k tridiagonal matrix.\n\nReference\n\nC. Lanczos, An Iteration Method for the Solution of the Eigenvalue Problem of Linear Differential and Integral Operators, Journal of Research of the National Bureau of Standards, 45(4), pp. 225–280, 1950.\n\n\n\n\n\n","category":"function"}
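,{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"A minimal sketch of the process; the random symmetric matrix and the sizes below are illustrative assumptions:\n\nusing Krylov, LinearAlgebra\n\nn, k = 8, 4\nA = rand(n, n); A = A + A'  # make A symmetric, hence Hermitian\nb = rand(n)\nV, T = hermitian_lanczos(A, b, k)\nnorm(A * V[:, 1:k] - V * T)  # ≈ 0 up to rounding: A Vₖ = Vₖ₊₁ Tₖ₊₁.ₖ","category":"page"}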
,{"location":"processes/#Non-Hermitian-Lanczos","page":"Krylov processes","title":"Non-Hermitian Lanczos","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"(Image: nonhermitian_lanczos)","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"After k iterations of the non-Hermitian Lanczos process (also named the Lanczos biorthogonalization process), the situation may be summarized as","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"A Vₖ = Vₖ Tₖ + βₖ₊₁ vₖ₊₁ eₖᵀ = Vₖ₊₁ Tₖ₊₁.ₖ,\nAᴴ Uₖ = Uₖ Tₖᴴ + γ̄ₖ₊₁ uₖ₊₁ eₖᵀ = Uₖ₊₁ Tₖ.ₖ₊₁ᴴ,\nVₖᴴ Uₖ = Uₖᴴ Vₖ = Iₖ,","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"where Vₖ and Uₖ are bases of the Krylov subspaces 𝒦ₖ(A, b) and 𝒦ₖ(Aᴴ, c), respectively,","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Tₖ is the tridiagonal matrix with diagonal (α₁, …, αₖ), subdiagonal (β₂, …, βₖ) and superdiagonal (γ₂, …, γₖ), Tₖ₊₁.ₖ = [ Tₖ ; βₖ₊₁ eₖᵀ ] and Tₖ.ₖ₊₁ = [ Tₖ γₖ₊₁ eₖ ].","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"The function nonhermitian_lanczos returns Vₖ₊₁, Tₖ₊₁.ₖ, Uₖ₊₁ and Tₖ.ₖ₊₁ᴴ.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Related methods: BiLQ, QMR, BiLQR, CGS and BICGSTAB.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"note: Note\nThe scaling factors used in our implementation are βₖ = |uₖᴴ vₖ|^(1/2) and γₖ = (uₖᴴ vₖ) / βₖ. With these scaling factors, the non-Hermitian Lanczos process coincides with the Hermitian Lanczos process when A = Aᴴ and b = c.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"nonhermitian_lanczos","category":"page"},{"location":"processes/#Krylov.nonhermitian_lanczos","page":"Krylov processes","title":"Krylov.nonhermitian_lanczos","text":"V, T, U, Tᴴ = nonhermitian_lanczos(A, b, c, k)\n\nInput arguments\n\nA: a linear operator that models a square matrix of dimension n;\nb: a vector of length n;\nc: a vector of length n;\nk: the number of iterations of the non-Hermitian Lanczos process.\n\nOutput arguments\n\nV: a dense n × (k+1) matrix;\nT: a sparse (k+1) × k tridiagonal matrix;\nU: a dense n × (k+1) matrix;\nTᴴ: a sparse (k+1) × k tridiagonal matrix.\n\nReference\n\nC. Lanczos, An Iteration Method for the Solution of the Eigenvalue Problem of Linear Differential and Integral Operators, Journal of Research of the National Bureau of Standards, 45(4), pp. 225–280, 1950.\n\n\n\n\n\n","category":"function"}
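,{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"A minimal sketch with illustrative random data; note that the process can break down for unlucky choices of b and c, an event of probability zero for random vectors:\n\nusing Krylov, LinearAlgebra\n\nn, k = 8, 4\nA = rand(n, n)\nb = rand(n)\nc = rand(n)\nV, T, U, Tᴴ = nonhermitian_lanczos(A, b, c, k)\nnorm(U[:, 1:k]' * V[:, 1:k] - I)  # ≈ 0 up to rounding: biorthogonality Uₖᴴ Vₖ = Iₖ","category":"page"}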
,{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"The non-Hermitian Lanczos process can also be implemented without Aᴴ (transpose-free variant). To derive it, we can observe that βₖ₊₁ vₖ₊₁ = Pₖ(A) b and γ̄ₖ₊₁ uₖ₊₁ = Qₖ(Aᴴ) c where Pₖ and Qₖ are polynomials of degree k. The polynomials are defined from the recursions:","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"P₀(A) = Iₙ,\nP₁(A) = ((A - α₁ Iₙ) / β₁) P₀(A),\nPₖ(A) = ((A - αₖ Iₙ) / βₖ) Pₖ₋₁(A) - (γₖ / βₖ₋₁) Pₖ₋₂(A), k ≥ 2,\n\nQ₀(Aᴴ) = Iₙ,\nQ₁(Aᴴ) = ((Aᴴ - ᾱ₁ Iₙ) / γ̄₁) Q₀(Aᴴ),\nQₖ(Aᴴ) = ((Aᴴ - ᾱₖ Iₙ) / γ̄ₖ) Qₖ₋₁(Aᴴ) - (β̄ₖ / γ̄ₖ₋₁) Qₖ₋₂(Aᴴ), k ≥ 2.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Because αₖ = uₖᴴ A vₖ and (γ̄ₖ₊₁ uₖ₊₁)ᴴ (βₖ₊₁ vₖ₊₁) = γₖ₊₁ βₖ₊₁, we can define the coefficients of Tₖ₊₁.ₖ and Tₖ.ₖ₊₁ᴴ as follows:","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"αₖ = (1 / (γₖ βₖ)) ⟨Qₖ₋₁(Aᴴ) c, A Pₖ₋₁(A) b⟩ = (1 / (γₖ βₖ)) ⟨c, Q̄ₖ₋₁(A) A Pₖ₋₁(A) b⟩,\n\nβₖ₊₁ γₖ₊₁ = ⟨Qₖ(Aᴴ) c, Pₖ(A) b⟩ = ⟨c, Q̄ₖ(A) Pₖ(A) b⟩.","category":"page"},{"location":"processes/#Arnoldi","page":"Krylov processes","title":"Arnoldi","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"(Image: arnoldi)","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"After k iterations of the Arnoldi process, the situation may be summarized as","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"A Vₖ = Vₖ Hₖ + hₖ₊₁.ₖ vₖ₊₁ eₖᵀ = Vₖ₊₁ Hₖ₊₁.ₖ,\nVₖᴴ Vₖ = Iₖ,","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"where Vₖ is an orthonormal basis of the Krylov subspace 𝒦ₖ(A, b),","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Hₖ is the k × k upper Hessenberg matrix with entries hᵢ.ⱼ (hᵢ.ⱼ = 0 when i > j+1), and Hₖ₊₁.ₖ = [ Hₖ ; hₖ₊₁.ₖ eₖᵀ ].","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"The function arnoldi returns Vₖ₊₁ and Hₖ₊₁.ₖ.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Related methods: DIOM, FOM, DQGMRES, GMRES and FGMRES.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"note: Note\nThe Arnoldi process coincides with the Hermitian Lanczos process when A is Hermitian.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"arnoldi","category":"page"},{"location":"processes/#Krylov.arnoldi","page":"Krylov processes","title":"Krylov.arnoldi","text":"V, H = arnoldi(A, b, k; reorthogonalization=false)\n\nInput arguments\n\nA: a linear operator that models a square matrix of dimension n;\nb: a vector of length n;\nk: the number of iterations of the Arnoldi process.\n\nKeyword arguments\n\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors.\n\nOutput arguments\n\nV: a dense n × (k+1) matrix;\nH: a dense (k+1) × k upper Hessenberg matrix.\n\nReference\n\nW. E. Arnoldi, The principle of minimized iterations in the solution of the matrix eigenvalue problem, Quarterly of Applied Mathematics, 9, pp. 17–29, 1951.\n\n\n\n\n\n","category":"function"}
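,{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"A minimal sketch with illustrative random data:\n\nusing Krylov, LinearAlgebra\n\nn, k = 10, 5\nA = rand(n, n)\nb = rand(n)\nV, H = arnoldi(A, b, k; reorthogonalization=true)\nnorm(A * V[:, 1:k] - V * H)  # ≈ 0 up to rounding: A Vₖ = Vₖ₊₁ Hₖ₊₁.ₖ","category":"page"}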
,{"location":"processes/#Golub-Kahan","page":"Krylov processes","title":"Golub-Kahan","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"(Image: golub_kahan)","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"After k iterations of the Golub-Kahan bidiagonalization process, the situation may be summarized as","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"A Vₖ = Uₖ₊₁ Bₖ,\nAᴴ Uₖ₊₁ = Vₖ Bₖᴴ + ᾱₖ₊₁ vₖ₊₁ eₖ₊₁ᵀ = Vₖ₊₁ Lₖ₊₁ᴴ,\nVₖᴴ Vₖ = Uₖᴴ Uₖ = Iₖ,","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"where Vₖ and Uₖ are bases of the Krylov subspaces 𝒦ₖ(AᴴA, Aᴴb) and 𝒦ₖ(AAᴴ, b), respectively,","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Lₖ is the lower bidiagonal matrix with diagonal (α₁, …, αₖ) and subdiagonal (β₂, …, βₖ), and Bₖ = [ Lₖ ; βₖ₊₁ eₖᵀ ].","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Note that depending on how we normalize the vectors that compose Vₖ and Uₖ, Lₖ can be a real bidiagonal matrix even if A is a complex matrix.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"The function golub_kahan returns Vₖ₊₁, Uₖ₊₁ and Lₖ₊₁.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Related methods: LNLQ, CRAIG, CRAIGMR, LSLQ, LSQR and LSMR.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"note: Note\nThe Golub-Kahan process coincides with the Hermitian Lanczos process applied to the normal equations AᴴA x = Aᴴb and AAᴴ x = b. It is also related to the Hermitian Lanczos process applied to [ 0 A ; Aᴴ 0 ] with initial vector [ b ; 0 ].","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"golub_kahan","category":"page"},{"location":"processes/#Krylov.golub_kahan","page":"Krylov processes","title":"Krylov.golub_kahan","text":"V, U, L = golub_kahan(A, b, k)\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m;\nk: the number of iterations of the Golub-Kahan process.\n\nOutput arguments\n\nV: a dense n × (k+1) matrix;\nU: a dense m × (k+1) matrix;\nL: a sparse (k+1) × (k+1) lower bidiagonal matrix.\n\nReferences\n\nG. H. Golub and W. Kahan, Calculating the Singular Values and Pseudo-Inverse of a Matrix, SIAM Journal on Numerical Analysis, 2(2), pp. 205–224, 1965.\nC. C. Paige, Bidiagonalization of Matrices and Solution of Linear Equations, SIAM Journal on Numerical Analysis, 11(1), pp. 197–209, 1974.\n\n\n\n\n\n","category":"function"}
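,{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"A minimal sketch with illustrative random data:\n\nusing Krylov, LinearAlgebra\n\nm, n, k = 7, 5, 3\nA = rand(m, n)\nb = rand(m)\nV, U, L = golub_kahan(A, b, k)\nnorm(A * V[:, 1:k] - U * L[:, 1:k])  # ≈ 0 up to rounding: A Vₖ = Uₖ₊₁ Bₖ, with Bₖ the first k columns of Lₖ₊₁","category":"page"}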
,{"location":"processes/#Saunders-Simon-Yip","page":"Krylov processes","title":"Saunders-Simon-Yip","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"(Image: saunders_simon_yip)","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"After k iterations of the Saunders-Simon-Yip process (also named the orthogonal tridiagonalization process), the situation may be summarized as","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"A Uₖ = Vₖ Tₖ + βₖ₊₁ vₖ₊₁ eₖᵀ = Vₖ₊₁ Tₖ₊₁.ₖ,\nAᴴ Vₖ = Uₖ Tₖᴴ + γ̄ₖ₊₁ uₖ₊₁ eₖᵀ = Uₖ₊₁ Tₖ.ₖ₊₁ᴴ,\nVₖᴴ Vₖ = Uₖᴴ Uₖ = Iₖ,","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"where [ Vₖ 0 ; 0 Uₖ ] is an orthonormal basis of the block Krylov subspace 𝒦ₖ□([ 0 A ; Aᴴ 0 ], [ b 0 ; 0 c ]),","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Tₖ is the tridiagonal matrix with diagonal (α₁, …, αₖ), subdiagonal (β₂, …, βₖ) and superdiagonal (γ₂, …, γₖ), Tₖ₊₁.ₖ = [ Tₖ ; βₖ₊₁ eₖᵀ ] and Tₖ.ₖ₊₁ = [ Tₖ γₖ₊₁ eₖ ].","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"The function saunders_simon_yip returns Vₖ₊₁, Tₖ₊₁.ₖ, Uₖ₊₁ and Tₖ.ₖ₊₁ᴴ.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Related methods: USYMLQ, USYMQR, TriLQR, TriCG and TriMR.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"saunders_simon_yip","category":"page"},{"location":"processes/#Krylov.saunders_simon_yip","page":"Krylov processes","title":"Krylov.saunders_simon_yip","text":"V, T, U, Tᴴ = saunders_simon_yip(A, b, c, k)\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m;\nc: a vector of length n;\nk: the number of iterations of the Saunders-Simon-Yip process.\n\nOutput arguments\n\nV: a dense m × (k+1) matrix;\nT: a sparse (k+1) × k tridiagonal matrix;\nU: a dense n × (k+1) matrix;\nTᴴ: a sparse (k+1) × k tridiagonal matrix.\n\nReference\n\nM. A. Saunders, H. D. Simon, and E. L. Yip, Two Conjugate-Gradient-Type Methods for Unsymmetric Linear Equations, SIAM Journal on Numerical Analysis, 25(4), pp. 927–940, 1988.\n\n\n\n\n\n","category":"function"}
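,{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"A minimal sketch with illustrative random data:\n\nusing Krylov, LinearAlgebra\n\nm, n, k = 7, 5, 3\nA = rand(m, n)\nb = rand(m)\nc = rand(n)\nV, T, U, Tᴴ = saunders_simon_yip(A, b, c, k)\nnorm(A * U[:, 1:k] - V * T)  # ≈ 0 up to rounding: A Uₖ = Vₖ₊₁ Tₖ₊₁.ₖ","category":"page"}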
,{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"note: Note\nThe Saunders-Simon-Yip process is equivalent to the block-Lanczos process applied to [ 0 A ; Aᴴ 0 ] with initial matrix [ b 0 ; 0 c ].","category":"page"},{"location":"processes/#Montoison-Orban","page":"Krylov processes","title":"Montoison-Orban","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"(Image: montoison_orban)","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"After k iterations of the Montoison-Orban process (also named the orthogonal Hessenberg reduction process), the situation may be summarized as","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"A Uₖ = Vₖ Hₖ + hₖ₊₁.ₖ vₖ₊₁ eₖᵀ = Vₖ₊₁ Hₖ₊₁.ₖ,\nB Vₖ = Uₖ Fₖ + fₖ₊₁.ₖ uₖ₊₁ eₖᵀ = Uₖ₊₁ Fₖ₊₁.ₖ,\nVₖᴴ Vₖ = Uₖᴴ Uₖ = Iₖ,","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"where [ Vₖ 0 ; 0 Uₖ ] is an orthonormal basis of the block Krylov subspace 𝒦ₖ□([ 0 A ; B 0 ], [ b 0 ; 0 c ]),","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Hₖ and Fₖ are the k × k upper Hessenberg matrices with entries hᵢ.ⱼ and fᵢ.ⱼ, respectively, and","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Hₖ₊₁.ₖ = [ Hₖ ; hₖ₊₁.ₖ eₖᵀ ] and Fₖ₊₁.ₖ = [ Fₖ ; fₖ₊₁.ₖ eₖᵀ ].","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"The function montoison_orban returns Vₖ₊₁, Hₖ₊₁.ₖ, Uₖ₊₁ and Fₖ₊₁.ₖ.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Related methods: GPMR.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"note: Note\nThe Montoison-Orban process is equivalent to the block-Arnoldi process applied to [ 0 A ; B 0 ] with initial matrix [ b 0 ; 0 c ]. It also coincides with the Saunders-Simon-Yip process when B = Aᴴ.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"montoison_orban","category":"page"},{"location":"processes/#Krylov.montoison_orban","page":"Krylov processes","title":"Krylov.montoison_orban","text":"V, H, U, F = montoison_orban(A, B, b, c, k; reorthogonalization=false)\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nB: a linear operator that models a matrix of dimension n × m;\nb: a vector of length m;\nc: a vector of length n;\nk: the number of iterations of the Montoison-Orban process.\n\nKeyword arguments\n\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors.\n\nOutput arguments\n\nV: a dense m × (k+1) matrix;\nH: a dense (k+1) × k upper Hessenberg matrix;\nU: a dense n × (k+1) matrix;\nF: a dense (k+1) × k upper Hessenberg matrix.\n\nReference\n\nA. Montoison and D. 
Orban, GPMR: An Iterative Method for Unsymmetric Partitioned Linear Systems, SIAM Journal on Matrix Analysis and Applications, 44(1), pp. 293–311, 2023.\n\n\n\n\n\n","category":"function"},{"location":"examples/crls/","page":"CRLS","title":"CRLS","text":"using MatrixMarket, SuiteSparseMatrixCollection\nusing Krylov, LinearOperators\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"well1850\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1])_b.mtx\"))[:]\n(m, n) = size(A)\n@printf(\"System size: %d rows and %d columns\\n\", m, n)\n\n# Define a regularization parameter.\nλ = 1.0e-3\n\n(x, stats) = crls(A, b, λ=λ)\nshow(stats)\nresid = norm(A' * (A * x - b) + λ * x) / norm(b)\n@printf(\"CRLS: Relative residual: %8.1e\\n\", resid)\n@printf(\"CRLS: ‖x‖: %8.1e\\n\", norm(x))","category":"page"},{"location":"callbacks/#callbacks","page":"Callbacks","title":"Callbacks","text":"","category":"section"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"Each Krylov method is able to call a callback function as callback(solver) at each iteration. The callback should return true if the main loop should terminate, and false otherwise. If the method terminated because of the callback, the output status will be \"user-requested exit\". For example, if the user defines minres_callback(solver::MinresSolver), it can be passed to the solver using","category":"page"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"(x, stats) = minres(A, b, callback = minres_callback)","category":"page"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"If you need to write a callback that uses variables that are not in a KrylovSolver, use a closure:","category":"page"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"function custom_stopping_condition(solver::KrylovSolver, A, b, r, tol)\n mul!(r, A, solver.x)\n r .-= b # r := b - Ax\n bool = norm(r) ≤ tol # tolerance based on the 2-norm of the residual\n return bool\nend\n\ncg_callback(solver) = custom_stopping_condition(solver, A, b, r, tol)\n(x, stats) = cg(A, b, callback = cg_callback)","category":"page"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"Alternatively, use a structure and make it callable:","category":"page"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"mutable struct CallbackWorkspace{T}\n A::Matrix{T}\n b::Vector{T}\n r::Vector{T}\n tol::T\nend\n\nfunction (workspace::CallbackWorkspace)(solver::KrylovSolver)\n mul!(workspace.r, workspace.A, solver.x)\n workspace.r .-= workspace.b\n bool = norm(workspace.r) ≤ workspace.tol\n return bool\nend\n\nbicgstab_callback = CallbackWorkspace(A, b, r, tol)\n(x, stats) = bicgstab(A, b, callback = bicgstab_callback)","category":"page"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"Although the main goal of a callback is to add new stopping conditions, it can also retrieve information from the workspace of a Krylov method along the iterations. 
We now illustrate how to store all iterates xₖ of the GMRES method.","category":"page"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"S = Krylov.ktypeof(b)\nglobal X = S[] # Storage for GMRES iterates\n\nfunction gmres_callback(solver)\n z = solver.z\n k = solver.inner_iter\n nr = sum(1:k)\n V = solver.V\n R = solver.R\n y = copy(z)\n\n # Solve Rk * yk = zk\n for i = k : -1 : 1\n pos = nr + i - k\n for j = k : -1 : i+1\n y[i] = y[i] - R[pos] * y[j]\n pos = pos - j + 1\n end\n y[i] = y[i] / R[pos]\n end\n\n # xk = Vk * yk\n xk = sum(V[i] * y[i] for i = 1:k)\n push!(X, xk)\n\n return false # We don't want to add new stopping conditions\nend\n\n(x, stats) = gmres(A, b, callback = gmres_callback)","category":"page"},{"location":"gpu/#gpu","page":"GPU support","title":"GPU support","text":"","category":"section"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"Krylov methods are well suited for GPU computations because they only require matrix-vector products (u ← Av, u ← Aᴴw) and vector operations (‖v‖, uᴴv, v ← αu + βv), which are highly parallelizable.","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"The implementations in Krylov.jl are generic so as to take advantage of the multiple dispatch and broadcast features of Julia. Those allow the implementations to be specialized automatically by the compiler for both CPU and GPU. Thus, Krylov.jl works with GPU backends that build on GPUArrays.jl, such as CUDA.jl, AMDGPU.jl, oneAPI.jl or Metal.jl.","category":"page"},{"location":"gpu/#Nvidia-GPUs","page":"GPU support","title":"Nvidia GPUs","text":"","category":"section"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"All solvers in Krylov.jl can be used with CUDA.jl and allow computations on Nvidia GPUs. Problems stored in CPU format (Matrix and Vector) must first be converted to the related GPU format (CuMatrix and CuVector).","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using CUDA, Krylov\n\nif CUDA.functional()\n # CPU Arrays\n A_cpu = rand(20, 20)\n b_cpu = rand(20)\n\n # GPU Arrays\n A_gpu = CuMatrix(A_cpu)\n b_gpu = CuVector(b_cpu)\n\n # Solve a square and dense system on an Nvidia GPU\n x, stats = bilq(A_gpu, b_gpu)\nend","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"Sparse matrices have a specific storage on Nvidia GPUs (CuSparseMatrixCSC, CuSparseMatrixCSR or CuSparseMatrixCOO):","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using CUDA, Krylov\nusing CUDA.CUSPARSE, SparseArrays\n\nif CUDA.functional()\n # CPU Arrays\n A_cpu = sprand(200, 100, 0.3)\n b_cpu = rand(200)\n\n # GPU Arrays\n A_csc_gpu = CuSparseMatrixCSC(A_cpu)\n A_csr_gpu = CuSparseMatrixCSR(A_cpu)\n A_coo_gpu = CuSparseMatrixCOO(A_cpu)\n b_gpu = CuVector(b_cpu)\n\n # Solve a rectangular and sparse system on an Nvidia GPU\n x_csc, stats_csc = lslq(A_csc_gpu, b_gpu)\n x_csr, stats_csr = lsqr(A_csr_gpu, b_gpu)\n x_coo, stats_coo = lsmr(A_coo_gpu, b_gpu)\nend","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"If you use a Krylov method that only requires A * v products (see here), the most efficient format is CuSparseMatrixCSR. Optimized operator-vector products that exploit GPU features can also be used by means of linear operators.","category":"page"}
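,{"location":"gpu/","page":"GPU support","title":"GPU support","text":"For instance, a sketch of wrapping the CSR product in a linear operator; the square random system below is an assumption of this sketch, and GMRES only requires A * v products:\n\nusing CUDA, Krylov, LinearOperators\nusing CUDA.CUSPARSE, SparseArrays, LinearAlgebra\n\nif CUDA.functional()\n # Square sparse system (illustrative data)\n A_cpu = sprand(100, 100, 0.3) + I\n b_cpu = rand(100)\n\n A_gpu = CuSparseMatrixCSR(A_cpu)\n b_gpu = CuVector(b_cpu)\n\n # Operator that only exposes the product A * v\n n = length(b_gpu)\n T = eltype(b_gpu)\n opA = LinearOperator(T, n, n, false, false, (y, v) -> mul!(y, A_gpu, v))\n\n x, stats = gmres(opA, b_gpu)\nend","category":"page"}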
,{"location":"gpu/","page":"GPU support","title":"GPU support","text":"Preconditioners, especially incomplete Cholesky or incomplete LU factorizations that involve triangular solves, can be applied directly on GPU thanks to efficient operators that take advantage of CUSPARSE routines.","category":"page"},{"location":"gpu/#Example-with-a-symmetric-positive-definite-system","page":"GPU support","title":"Example with a symmetric positive-definite system","text":"","category":"section"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using SparseArrays, Krylov, LinearOperators\nusing CUDA, CUDA.CUSPARSE\n\nif CUDA.functional()\n # Transfer the linear system from the CPU to the GPU\n A_gpu = CuSparseMatrixCSR(A_cpu) # A_gpu = CuSparseMatrixCSC(A_cpu)\n b_gpu = CuVector(b_cpu)\n\n # IC(0) decomposition LLᴴ ≈ A for CuSparseMatrixCSC or CuSparseMatrixCSR matrices\n P = ic02(A_gpu)\n\n # Additional vector required for solving triangular systems\n n = length(b_gpu)\n T = eltype(b_gpu)\n z = CUDA.zeros(T, n)\n\n # Solve Py = x\n function ldiv_ic0!(P::CuSparseMatrixCSR, x, y, z)\n ldiv!(z, LowerTriangular(P), x) # Forward substitution with L\n ldiv!(y, LowerTriangular(P)', z) # Backward substitution with Lᴴ\n return y\n end\n\n function ldiv_ic0!(P::CuSparseMatrixCSC, x, y, z)\n ldiv!(z, UpperTriangular(P)', x) # Forward substitution with L\n ldiv!(y, UpperTriangular(P), z) # Backward substitution with Lᴴ\n return y\n end\n\n # Operator that models P⁻¹\n symmetric = hermitian = true\n opM = LinearOperator(T, n, n, symmetric, hermitian, (y, x) -> ldiv_ic0!(P, x, y, z))\n\n # Solve a Hermitian positive definite system with an IC(0) preconditioner on GPU\n x, stats = cg(A_gpu, b_gpu, M=opM)\nend","category":"page"},{"location":"gpu/#Example-with-a-general-square-system","page":"GPU support","title":"Example with a general square system","text":"","category":"section"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using SparseArrays, Krylov, LinearOperators\nusing CUDA, CUDA.CUSPARSE, CUDA.CUSOLVER\n\nif CUDA.functional()\n # Optional -- Compute a permutation vector p such that A[:,p] has no zero diagonal\n p = zfd(A_cpu)\n p .+= 1\n A_cpu = A_cpu[:,p]\n\n # Transfer the linear system from the CPU to the GPU\n A_gpu = CuSparseMatrixCSR(A_cpu) # A_gpu = CuSparseMatrixCSC(A_cpu)\n b_gpu = CuVector(b_cpu)\n\n # ILU(0) decomposition LU ≈ A for CuSparseMatrixCSC or CuSparseMatrixCSR matrices\n P = ilu02(A_gpu)\n\n # Additional vector required for solving triangular systems\n n = length(b_gpu)\n T = eltype(b_gpu)\n z = CUDA.zeros(T, n)\n\n # Solve Py = x\n function ldiv_ilu0!(P::CuSparseMatrixCSR, x, y, z)\n ldiv!(z, UnitLowerTriangular(P), x) # Forward substitution with L\n ldiv!(y, UpperTriangular(P), z) # Backward substitution with U\n return y\n end\n\n function ldiv_ilu0!(P::CuSparseMatrixCSC, x, y, z)\n ldiv!(z, LowerTriangular(P), x) # Forward substitution with L\n ldiv!(y, UnitUpperTriangular(P), z) # Backward substitution with U\n return y\n end\n\n # Operator that models P⁻¹\n symmetric = hermitian = false\n opM = LinearOperator(T, n, n, symmetric, hermitian, (y, x) -> ldiv_ilu0!(P, x, y, z))\n\n # Solve a non-Hermitian system with an ILU(0) preconditioner on GPU\n x̄, stats = bicgstab(A_gpu, b_gpu, M=opM)\n\n # Recover the solution of Ax = b with the solution of A[:,p]x̄ = b\n invp = invperm(p)\n x = 
x̄[invp]\nend","category":"page"},{"location":"gpu/#AMD-GPUs","page":"GPU support","title":"AMD GPUs","text":"","category":"section"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"All solvers in Krylov.jl can be used with AMDGPU.jl and allow computations on AMD GPUs. Problems stored in CPU format (Matrix and Vector) must first be converted to the related GPU format (ROCMatrix and ROCVector).","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using Krylov, AMDGPU\n\nif AMDGPU.functional()\n # CPU Arrays\n A_cpu = rand(ComplexF64, 20, 20)\n A_cpu = A_cpu + A_cpu'\n b_cpu = rand(ComplexF64, 20)\n\n A_gpu = ROCMatrix(A_cpu)\n b_gpu = ROCVector(b_cpu)\n\n # Solve a dense Hermitian system on an AMD GPU\n x, stats = minres(A_gpu, b_gpu)\nend","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"Sparse matrices have a specific storage on AMD GPUs (ROCSparseMatrixCSC, ROCSparseMatrixCSR or ROCSparseMatrixCOO):","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using AMDGPU, Krylov\nusing AMDGPU.rocSPARSE, SparseArrays\n\nif AMDGPU.functional()\n # CPU Arrays\n A_cpu = sprand(100, 200, 0.3)\n b_cpu = rand(100)\n\n # GPU Arrays\n A_csc_gpu = ROCSparseMatrixCSC(A_cpu)\n A_csr_gpu = ROCSparseMatrixCSR(A_cpu)\n A_coo_gpu = ROCSparseMatrixCOO(A_cpu)\n b_gpu = ROCVector(b_cpu)\n\n # Solve a rectangular and sparse system on an AMD GPU\n x_csc, y_csc, stats_csc = lnlq(A_csc_gpu, b_gpu)\n x_csr, y_csr, stats_csr = craig(A_csr_gpu, b_gpu)\n x_coo, y_coo, stats_coo = craigmr(A_coo_gpu, b_gpu)\nend","category":"page"},{"location":"gpu/#Intel-GPUs","page":"GPU support","title":"Intel GPUs","text":"","category":"section"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"All solvers in Krylov.jl can be used with oneAPI.jl and allow computations on Intel GPUs. Problems stored in CPU format (Matrix and Vector) must first be converted to the related GPU format (oneMatrix and oneVector).","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using Krylov, oneAPI\n\nif oneAPI.functional()\n T = Float32 # oneAPI.jl also works with ComplexF32\n m = 20\n n = 10\n\n # CPU Arrays\n A_cpu = rand(T, m, n)\n b_cpu = rand(T, m)\n\n # GPU Arrays\n A_gpu = oneMatrix(A_cpu)\n b_gpu = oneVector(b_cpu)\n\n # Solve a dense least-squares problem on an Intel GPU\n x, stats = lsqr(A_gpu, b_gpu)\nend","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"note: Note\nThe library oneMKL is interfaced in oneAPI.jl and accelerates linear algebra operations on Intel GPUs. Only dense linear systems are supported for the time being because sparse linear algebra routines are not interfaced yet.","category":"page"},{"location":"gpu/#Apple-M1-GPUs","page":"GPU support","title":"Apple M1 GPUs","text":"","category":"section"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"All solvers in Krylov.jl can be used with Metal.jl and allow computations on Apple M1 GPUs. 
Problems stored in CPU format (Matrix and Vector) must first be converted to the related GPU format (MtlMatrix and MtlVector).","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using Krylov, Metal\n\nT = Float32 # Metal.jl also works with ComplexF32\nn = 10\nm = 20\n\n# CPU Arrays\nA_cpu = rand(T, n, m)\nb_cpu = rand(T, n)\n\n# GPU Arrays\nA_gpu = MtlMatrix(A_cpu)\nb_gpu = MtlVector(b_cpu)\n\n# Solve a dense least-norm problem on an Apple M1 GPU\nx, y, stats = craig(A_gpu, b_gpu)","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"warning: Warning\nMetal.jl is under heavy development and is considered experimental for now.","category":"page"},{"location":"#Home","page":"Home","title":"Krylov.jl documentation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This package provides implementations of certain of the most useful Krylov methods for a variety of problems:","category":"page"},{"location":"","page":"Home","title":"Home","text":"1 - Square or rectangular full-rank systems","category":"page"},{"location":"","page":"Home","title":"Home","text":" Ax = b","category":"page"},{"location":"","page":"Home","title":"Home","text":"should be solved when b lies in the range space of A. This situation occurs when","category":"page"},{"location":"","page":"Home","title":"Home","text":"A is square and nonsingular,\nA is tall and has full column rank and b lies in the range of A.","category":"page"},{"location":"","page":"Home","title":"Home","text":"2 - Linear least-squares problems","category":"page"},{"location":"","page":"Home","title":"Home","text":" min ‖b - Ax‖","category":"page"},{"location":"","page":"Home","title":"Home","text":"should be solved when b is not in the range of A (inconsistent systems), regardless of the shape and rank of A. This situation mainly occurs when","category":"page"},{"location":"","page":"Home","title":"Home","text":"A is square and singular,\nA is tall and thin.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Underdetermined systems are less common but also occur.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If there are infinitely many such x (because A is column rank-deficient), one with minimum norm is identified","category":"page"},{"location":"","page":"Home","title":"Home","text":" min ‖x‖ subject to x ∈ argmin ‖b - Ax‖","category":"page"},{"location":"","page":"Home","title":"Home","text":"3 - Linear least-norm problems","category":"page"},{"location":"","page":"Home","title":"Home","text":" min ‖x‖ subject to Ax = b","category":"page"},{"location":"","page":"Home","title":"Home","text":"should be solved when A is column rank-deficient but b is in the range of A (consistent systems), regardless of the shape of A. 
This situation mainly occurs when","category":"page"},{"location":"","page":"Home","title":"Home","text":"A is square and singular,\nA is short and wide.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Overdetermined systems are less common but also occur.","category":"page"},{"location":"","page":"Home","title":"Home","text":"4 - Adjoint systems","category":"page"},{"location":"","page":"Home","title":"Home","text":" Ax = b and Aᴴy = c","category":"page"},{"location":"","page":"Home","title":"Home","text":"where A can have any shape.","category":"page"},{"location":"","page":"Home","title":"Home","text":"5 - Saddle-point and Hermitian quasi-definite systems","category":"page"},{"location":"","page":"Home","title":"Home","text":" [ M A ] [ x ] [ b ] [ 0 ] [ b ]\n[ Aᴴ -N ] [ y ] = [ 0 ], [ c ] or [ c ]","category":"page"},{"location":"","page":"Home","title":"Home","text":"where A can have any shape.","category":"page"},{"location":"","page":"Home","title":"Home","text":"6 - Generalized saddle-point and non-Hermitian partitioned systems","category":"page"},{"location":"","page":"Home","title":"Home","text":" [ M A ] [ x ] [ b ]\n[ B N ] [ y ] = [ c ]","category":"page"},{"location":"","page":"Home","title":"Home","text":"where A can have any shape and B has the shape of Aᴴ. A, B, b and c must all be nonzero.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Krylov solvers are particularly appropriate in situations where such problems must be solved but a factorization is not possible, either because:","category":"page"},{"location":"","page":"Home","title":"Home","text":"A is not available explicitly,\nA would be dense or would consume an excessive amount of memory if it were materialized,\nfactors would consume an excessive amount of memory.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Iterative methods are recommended in any of the following situations:","category":"page"},{"location":"","page":"Home","title":"Home","text":"the problem is sufficiently large that a factorization is not feasible or would be slow,\nan effective preconditioner is known in cases where the problem has unfavorable spectral structure,\nthe operator can be represented efficiently as a sparse matrix,\nthe operator is fast, i.e., can be applied with better complexity than if it were materialized as a matrix. Certain fast operators would materialize as dense matrices.","category":"page"},{"location":"#Features","page":"Home","title":"Features","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"All solvers in Krylov.jl have an in-place version, are compatible with GPUs and work in any floating-point data type.","category":"page"},{"location":"#How-to-Install","page":"Home","title":"How to Install","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Krylov can be installed and tested through the Julia package manager:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> ]\npkg> add Krylov\npkg> test Krylov","category":"page"},{"location":"#Bug-reports-and-discussions","page":"Home","title":"Bug reports and discussions","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"If you think you found a bug, feel free to open an issue. 
Focused suggestions and requests can also be opened as issues. Please start an issue or a discussion on the topic before opening a pull request.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.","category":"page"},{"location":"examples/cgls/","page":"CGLS","title":"CGLS","text":"using MatrixMarket, SuiteSparseMatrixCollection\nusing Krylov, LinearOperators\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"well1033\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1])_b.mtx\"))[:]\n(m, n) = size(A)\n@printf(\"System size: %d rows and %d columns\\n\", m, n)\n\n# Define a regularization parameter.\nλ = 1.0e-3\n\n(x, stats) = cgls(A, b, λ=λ)\nshow(stats)\nresid = norm(A' * (A * x - b) + λ * x) / norm(b)\n@printf(\"CGLS: Relative residual: %8.1e\\n\", resid)\n@printf(\"CGLS: ‖x‖: %8.1e\\n\", norm(x))","category":"page"}]
+[{"location":"examples/minares/","page":"MINARES","title":"MINARES","text":"using Krylov, MatrixMarket, SuiteSparseMatrixCollection\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"GHS_indef\", \"laser\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nn = matrix.nrows[1]\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = ones(n)\n\n# Solve Ax = b.\nx, stats = minares(A, b)\nshow(stats)\nr = b - A * x\nAr = A * r\n@printf(\"Relative A-residual: %8.1e\\n\", norm(Ar) / norm(A * b))","category":"page"},{"location":"examples/trimr/","page":"TriMR","title":"TriMR","text":"using Krylov, LinearOperators, LDLFactorizations\nusing LinearAlgebra, Printf, SparseArrays\n\n# Identity matrix.\neye(n::Int) = sparse(1.0 * I, n, n)\n\n# Saddle-point systems\nn = m = 5\nA = [2^(i/j)*j + (-1)^(i-j) * n*(i-1) for i = 1:n, j = 1:n]\nb = ones(n)\nD = diagm(0 => [2.0 * i for i = 1:n])\nm, n = size(A)\nc = -b\n\n# [D A] [x] = [b]\n# [Aᴴ 0] [y] [c]\nllt_D = cholesky(D)\nopD⁻¹ = LinearOperator(Float64, 5, 5, true, true, (y, v) -> ldiv!(y, llt_D, v))\nopH⁻¹ = BlockDiagonalOperator(opD⁻¹, eye(n))\n(x, y, stats) = trimr(A, b, c, M=opD⁻¹, sp=true)\nK = [D A; A' zeros(n,n)]\nB = [b; c]\nr = B - K * [x; y]\nresid = sqrt(dot(r, opH⁻¹ * r))\n@printf(\"TriMR: Relative residual: %8.1e\\n\", resid)\n\n# Symmetric quasi-definite systems\nn = m = 5\nA = [2^(i/j)*j + (-1)^(i-j) * n*(i-1) for i = 1:n, j = 1:n]\nb = ones(n)\nM = diagm(0 => [3.0 * i for i = 1:n])\nN = diagm(0 => [5.0 * i for i = 1:n])\nc = -b\n\n# [I A] [x] = [b]\n# [Aᴴ -I] [y] [c]\n(x, y, stats) = trimr(A, b, c)\nK = [eye(m) A; A' -eye(n)]\nB = [b; c]\nr = B - K * [x; y]\nresid = norm(r)\n@printf(\"TriMR: Relative residual: %8.1e\\n\", resid)\n\n# [M A] [x] = [b]\n# [Aᴴ -N] [y] [c]\nldlt_M = ldl(M)\nldlt_N = ldl(N)\nopM⁻¹ = LinearOperator(Float64, size(M,1), size(M,2), true, true, (y, v) -> ldiv!(y, ldlt_M, v))\nopN⁻¹ = LinearOperator(Float64, size(N,1), size(N,2), true, true, (y, v) -> ldiv!(y, ldlt_N, v))\nopH⁻¹ = BlockDiagonalOperator(opM⁻¹, opN⁻¹)\n(x, y, stats) = trimr(A, b, c, M=opM⁻¹, N=opN⁻¹, verbose=1)\nK = [M A; A' -N]\nB = [b; c]\nr = B - K * [x; y]\nresid = sqrt(dot(r, opH⁻¹ * r))\n@printf(\"TriMR: Relative residual: 
%8.1e\\n\", resid)","category":"page"},{"location":"solvers/spd/","page":"Hermitian positive definite linear systems","title":"Hermitian positive definite linear systems","text":"# Hermitian positive definite linear systems","category":"page"},{"location":"solvers/spd/#CG","page":"Hermitian positive definite linear systems","title":"CG","text":"","category":"section"},{"location":"solvers/spd/","page":"Hermitian positive definite linear systems","title":"Hermitian positive definite linear systems","text":"cg\ncg!","category":"page"},{"location":"solvers/spd/#Krylov.cg","page":"Hermitian positive definite linear systems","title":"Krylov.cg","text":"(x, stats) = cg(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false, radius::T=zero(T),\n linesearch::Bool=false, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = cg(A, b, x0::AbstractVector; kwargs...)\n\nCG can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nThe conjugate gradient method to solve the Hermitian linear system Ax = b of size n.\n\nThe method does not abort if A is not definite. M also indicates the weighted norm in which residuals are measured.\n\nInput arguments\n\nA: a linear operator that models a Hermitian positive definite matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nradius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;\nlinesearch: if true, indicate that the solution is to be used in an inexact Newton method with linesearch. If negative curvature is detected at iteration k > 0, the solution of iteration k-1 is returned. If negative curvature is detected at iteration 0, the right-hand side is returned (i.e., the negative gradient);\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nM. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards, 49(6), pp. 
409–436, 1952.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#Krylov.cg!","page":"Hermitian positive definite linear systems","title":"Krylov.cg!","text":"solver = cg!(solver::CgSolver, A, b; kwargs...)\nsolver = cg!(solver::CgSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of cg.\n\nSee CgSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#CR","page":"Hermitian positive definite linear systems","title":"CR","text":"","category":"section"},{"location":"solvers/spd/","page":"Hermitian positive definite linear systems","title":"Hermitian positive definite linear systems","text":"cr\ncr!","category":"page"},{"location":"solvers/spd/#Krylov.cr","page":"Hermitian positive definite linear systems","title":"Krylov.cr","text":"(x, stats) = cr(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false, radius::T=zero(T),\n linesearch::Bool=false, γ::T=√eps(T),\n atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = cr(A, b, x0::AbstractVector; kwargs...)\n\nCR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nA truncated version of Stiefel’s Conjugate Residual method to solve the Hermitian linear system Ax = b of size n or the least-squares problem min ‖b - Ax‖ if A is singular. The matrix A must be Hermitian semi-definite. M also indicates the weighted norm in which residuals are measured.\n\nInput arguments\n\nA: a linear operator that models a Hermitian positive definite matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nradius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;\nlinesearch: if true, indicate that the solution is to be used in an inexact Newton method with linesearch. If negative curvature is detected at iteration k > 0, the solution of iteration k-1 is returned. If negative curvature is detected at iteration 0, the right-hand side is returned (i.e., the negative gradient);\nγ: tolerance to determine that the curvature of the quadratic model is nonpositive;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nM. R. Hestenes and E. 
Stiefel, Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards, 49(6), pp. 409–436, 1952.\nE. Stiefel, Relaxationsmethoden bester Strategie zur Lösung linearer Gleichungssysteme, Commentarii Mathematici Helvetici, 29(1), pp. 157–179, 1955.\nD. G. Luenberger, The conjugate residual method for constrained minimization problems, SIAM Journal on Numerical Analysis, 7(3), pp. 390–398, 1970.\nM-A. Dahito and D. Orban, The Conjugate Residual Method in Linesearch and Trust-Region Methods, SIAM Journal on Optimization, 29(3), pp. 1988–2025, 2019.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#Krylov.cr!","page":"Hermitian positive definite linear systems","title":"Krylov.cr!","text":"solver = cr!(solver::CrSolver, A, b; kwargs...)\nsolver = cr!(solver::CrSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of cr.\n\nSee CrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#CAR","page":"Hermitian positive definite linear systems","title":"CAR","text":"","category":"section"},{"location":"solvers/spd/","page":"Hermitian positive definite linear systems","title":"Hermitian positive definite linear systems","text":"car\ncar!","category":"page"},{"location":"solvers/spd/#Krylov.car","page":"Hermitian positive definite linear systems","title":"Krylov.car","text":"(x, stats) = car(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false,\n atol::T=√eps(T), rtol::T=√eps(T),\n itmax::Int=0, timemax::Float64=Inf,\n verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = car(A, b, x0::AbstractVector; kwargs...)\n\nCAR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nCAR solves the Hermitian and positive definite linear system Ax = b of size n. CAR minimizes ‖Arₖ‖₂ when M = Iₙ and ‖AMrₖ‖_M otherwise. The estimates computed at every iteration are ‖Mrₖ‖₂ and ‖AMrₖ‖_M.\n\nInput arguments\n\nA: a linear operator that models a Hermitian positive definite matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nA. Montoison, D. Orban and M. A. 
Saunders, MinAres: An Iterative Solver for Symmetric Linear Systems, Cahier du GERAD G-2023-40, GERAD, Montréal, 2023.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#Krylov.car!","page":"Hermitian positive definite linear systems","title":"Krylov.car!","text":"solver = car!(solver::CarSolver, A, b; kwargs...)\nsolver = car!(solver::CarSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of car.\n\nSee CarSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#CG-LANCZOS","page":"Hermitian positive definite linear systems","title":"CG-LANCZOS","text":"","category":"section"},{"location":"solvers/spd/","page":"Hermitian positive definite linear systems","title":"Hermitian positive definite linear systems","text":"cg_lanczos\ncg_lanczos!","category":"page"},{"location":"solvers/spd/#Krylov.cg_lanczos","page":"Hermitian positive definite linear systems","title":"Krylov.cg_lanczos","text":"(x, stats) = cg_lanczos(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false,\n check_curvature::Bool=false, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = cg_lanczos(A, b, x0::AbstractVector; kwargs...)\n\nCG-LANCZOS can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nThe Lanczos version of the conjugate gradient method to solve the Hermitian linear system Ax = b of size n.\n\nThe method does not abort if A is not definite.\n\nInput arguments\n\nA: a linear operator that models a Hermitian matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\ncheck_curvature: if true, check that the curvature of the quadratic along the search direction is positive, and abort if not, unless linesearch is also true;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a LanczosStats structure.\n\nReferences\n\nA. Frommer and P. Maass, Fast CG-Based Methods for Tikhonov-Phillips Regularization, SIAM Journal on Scientific Computing, 20(5), pp. 1831–1850, 1999.\nC. C. Paige and M. A. Saunders, Solution of Sparse Indefinite Systems of Linear Equations, SIAM Journal on Numerical Analysis, 12(4), pp. 
617–629, 1975.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#Krylov.cg_lanczos!","page":"Hermitian positive definite linear systems","title":"Krylov.cg_lanczos!","text":"solver = cg_lanczos!(solver::CgLanczosSolver, A, b; kwargs...)\nsolver = cg_lanczos!(solver::CgLanczosSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of cg_lanczos.\n\nSee CgLanczosSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#CG-LANCZOS-SHIFT","page":"Hermitian positive definite linear systems","title":"CG-LANCZOS-SHIFT","text":"","category":"section"},{"location":"solvers/spd/","page":"Hermitian positive definite linear systems","title":"Hermitian positive definite linear systems","text":"cg_lanczos_shift\ncg_lanczos_shift!","category":"page"},{"location":"solvers/spd/#Krylov.cg_lanczos_shift","page":"Hermitian positive definite linear systems","title":"Krylov.cg_lanczos_shift","text":"(x, stats) = cg_lanczos_shift(A, b::AbstractVector{FC}, shifts::AbstractVector{T};\n                              M=I, ldiv::Bool=false,\n                              check_curvature::Bool=false, atol::T=√eps(T),\n                              rtol::T=√eps(T), itmax::Int=0,\n                              timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n                              callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nThe Lanczos version of the conjugate gradient method to solve a family of shifted systems\n\n(A + αI) x = b  (α = α₁, ..., αₚ)\n\nof size n. The method does not abort if A + αI is not definite.\n\nInput arguments\n\nA: a linear operator that models a Hermitian matrix of dimension n;\nb: a vector of length n;\nshifts: a vector of length p.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\ncheck_curvature: if true, check that the curvature of the quadratic along the search direction is positive, and abort if not;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a vector of p dense vectors, each one of length n;\nstats: statistics collected on the run in a LanczosShiftStats structure.\n\nReferences\n\nA. Frommer and P. Maass, Fast CG-Based Methods for Tikhonov-Phillips Regularization, SIAM Journal on Scientific Computing, 20(5), pp. 1831–1850, 1999.\nC. C. Paige and M. A. Saunders, Solution of Sparse Indefinite Systems of Linear Equations, SIAM Journal on Numerical Analysis, 12(4), pp. 
617–629, 1975.\n\n\n\n\n\n","category":"function"},{"location":"solvers/spd/#Krylov.cg_lanczos_shift!","page":"Hermitian positive definite linear systems","title":"Krylov.cg_lanczos_shift!","text":"solver = cg_lanczos_shift!(solver::CgLanczosShiftSolver, A, b, shifts; kwargs...)\n\nwhere kwargs are keyword arguments of cg_lanczos_shift.\n\nSee CgLanczosShiftSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"examples/craigmr/","page":"CRAIGMR","title":"CRAIGMR","text":"using Krylov, HarwellRutherfordBoeing, SuiteSparseMatrixCollection\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"wm1\")\npath = fetch_ssmc(matrix, format=\"RB\")\n\nA = RutherfordBoeingData(joinpath(path[1], \"$(matrix.name[1]).rb\")).data\n(m, n) = size(A)\n@printf(\"System size: %d rows and %d columns\\n\", m, n)\n\nx_exact = A' * ones(m)\nx_exact_norm = norm(x_exact)\nx_exact /= x_exact_norm\nb = A * x_exact\n(x, y, stats) = craigmr(A, b)\nshow(stats)\nresid = norm(A * x - b) / norm(b)\n@printf(\"CRAIGMR: Relative residual: %7.1e\\n\", resid)\n@printf(\"CRAIGMR: ‖x - x*‖₂: %7.1e\\n\", norm(x - x_exact))\n@printf(\"CRAIGMR: %d iterations\\n\", length(stats.residuals))","category":"page"},{"location":"examples/symmlq/","page":"SYMMLQ","title":"SYMMLQ","text":"using Krylov\nusing LinearAlgebra, Printf\n\nA = diagm([1.0; 2.0; 3.0; 0.0])\nn = size(A, 1)\nb = [1.0; 2.0; 3.0; 0.0]\nb_norm = norm(b)\n\n# SYMMLQ returns the minimum-norm solution of symmetric, singular and consistent systems\n(x, stats) = symmlq(A, b, transfer_to_cg=false);\nr = b - A * x;\n\n@printf(\"Residual r: %s\\n\", Krylov.vec2str(r))\n@printf(\"Relative residual norm ‖r‖: %8.1e\\n\", norm(r) / b_norm)\n@printf(\"Solution x: %s\\n\", Krylov.vec2str(x))\n@printf(\"Minimum-norm solution? 
%s\\n\", x ≈ [1.0; 1.0; 1.0; 0.0])","category":"page"},{"location":"examples/dqgmres/","page":"DQGMRES","title":"DQGMRES","text":"using Krylov, LinearOperators, ILUZero, MatrixMarket\nusing LinearAlgebra, Printf, SuiteSparseMatrixCollection\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"Simon\", \"raefsky1\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nn = matrix.nrows[1]\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = A * ones(n)\n\nF = ilu0(A)\n\n@printf(\"nnz(ILU) / nnz(A): %7.1e\\n\", nnz(F) / nnz(A))\n\n# Solve Ax = b with DQGMRES and an ILU(0) preconditioner\n# Remark: DIOM, FOM and GMRES can be used in the same way\nopM = LinearOperator(Float64, n, n, false, false, (y, v) -> forward_substitution!(y, F, v))\nopN = LinearOperator(Float64, n, n, false, false, (y, v) -> backward_substitution!(y, F, v))\nopP = LinearOperator(Float64, n, n, false, false, (y, v) -> ldiv!(y, F, v))\n\n# Without preconditioning\nx, stats = dqgmres(A, b, memory=50, history=true)\nr = b - A * x\n@printf(\"[Without preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Without preconditioning] Number of iterations: %3d\\n\", length(stats.residuals) - 1)\n\n# Split preconditioning\nx, stats = dqgmres(A, b, memory=50, history=true, M=opM, N=opN)\nr = b - A * x\n@printf(\"[Split preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Split preconditioning] Number of iterations: %3d\\n\", length(stats.residuals) - 1)\n\n# Left preconditioning\nx, stats = dqgmres(A, b, memory=50, history=true, M=opP)\nr = b - A * x\n@printf(\"[Left preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Left preconditioning] Number of iterations: %3d\\n\", length(stats.residuals) - 1)\n\n# Right preconditioning\nx, stats = dqgmres(A, b, memory=50, history=true, N=opP)\nr = b - A * x\n@printf(\"[Right preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Right preconditioning] Number of iterations: %3d\\n\", length(stats.residuals) - 1)","category":"page"},{"location":"solvers/sp_sqd/","page":"Saddle-point and Hermitian quasi-definite systems","title":"Saddle-point and Hermitian quasi-definite systems","text":"# Saddle-point and Hermitian quasi-definite systems","category":"page"},{"location":"solvers/sp_sqd/#TriCG","page":"Saddle-point and Hermitian quasi-definite systems","title":"TriCG","text":"","category":"section"},{"location":"solvers/sp_sqd/","page":"Saddle-point and Hermitian quasi-definite systems","title":"Saddle-point and Hermitian quasi-definite systems","text":"tricg\ntricg!","category":"page"},{"location":"solvers/sp_sqd/#Krylov.tricg","page":"Saddle-point and Hermitian quasi-definite systems","title":"Krylov.tricg","text":"(x, y, stats) = tricg(A, b::AbstractVector{FC}, c::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n spd::Bool=false, snd::Bool=false,\n flip::Bool=false, τ::T=one(T),\n ν::T=-one(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. 
FC is T or Complex{T}.\n\n(x, y, stats) = tricg(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)\n\nTriCG can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.\n\nGiven a matrix A of dimension m × n, TriCG solves the Hermitian linear system\n\n[ τE  A ] [ x ] = [ b ]\n[ Aᴴ  νF ] [ y ]   [ c ],\n\nof size (n+m) × (n+m) where τ and ν are real numbers, E = M⁻¹ ≻ 0 and F = N⁻¹ ≻ 0. b and c must both be nonzero. TriCG could break down if τ = 0 or ν = 0. It's recommended to use TriMR in these cases.\n\nBy default, TriCG solves Hermitian and quasi-definite linear systems with τ = 1 and ν = -1.\n\nTriCG is based on the preconditioned orthogonal tridiagonalization process and its relation with the preconditioned block-Lanczos process.\n\n[ M   0 ]\n[ 0   N ]\n\nindicates the weighted norm in which residuals are measured. It's the Euclidean norm when M and N are identity operators.\n\nTriCG stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m;\nc: a vector of length n.\n\nOptional arguments\n\nx0: a vector of length m that represents an initial guess of the solution x;\ny0: a vector of length n that represents an initial guess of the solution y.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the partitioned system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the partitioned system;\nldiv: define whether the preconditioners use ldiv! or mul!;\nspd: if true, set τ = 1 and ν = 1 for Hermitian and positive-definite linear systems;\nsnd: if true, set τ = -1 and ν = -1 for Hermitian and negative-definite linear systems;\nflip: if true, set τ = -1 and ν = 1 for another known variant of Hermitian quasi-definite systems;\nτ and ν: diagonal scaling factors of the partitioned Hermitian linear system;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length m;\ny: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nA. Montoison and D. Orban, TriCG and TriMR: Two Iterative Methods for Symmetric Quasi-Definite Systems, SIAM Journal on Scientific Computing, 43(4), pp. 
2502–2525, 2021.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sp_sqd/#Krylov.tricg!","page":"Saddle-point and Hermitian quasi-definite systems","title":"Krylov.tricg!","text":"solver = tricg!(solver::TricgSolver, A, b, c; kwargs...)\nsolver = tricg!(solver::TricgSolver, A, b, c, x0, y0; kwargs...)\n\nwhere kwargs are keyword arguments of tricg.\n\nSee TricgSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sp_sqd/#TriMR","page":"Saddle-point and Hermitian quasi-definite systems","title":"TriMR","text":"","category":"section"},{"location":"solvers/sp_sqd/","page":"Saddle-point and Hermitian quasi-definite systems","title":"Saddle-point and Hermitian quasi-definite systems","text":"trimr\ntrimr!","category":"page"},{"location":"solvers/sp_sqd/#Krylov.trimr","page":"Saddle-point and Hermitian quasi-definite systems","title":"Krylov.trimr","text":"(x, y, stats) = trimr(A, b::AbstractVector{FC}, c::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n spd::Bool=false, snd::Bool=false,\n flip::Bool=false, sp::Bool=false,\n τ::T=one(T), ν::T=-one(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, y, stats) = trimr(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)\n\nTriMR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.\n\nGiven a matrix A of dimension m × n, TriMR solves the symmetric linear system\n\n[ τE A ] [ x ] = [ b ]\n[ Aᴴ νF ] [ y ] [ c ],\n\nof size (n+m) × (n+m) where τ and ν are real numbers, E = M⁻¹ ≻ 0, F = N⁻¹ ≻ 0. b and c must both be nonzero. TriMR handles saddle-point systems (τ = 0 or ν = 0) and adjoint systems (τ = 0 and ν = 0) without any risk of breakdown.\n\nBy default, TriMR solves symmetric and quasi-definite linear systems with τ = 1 and ν = -1.\n\nTriMR is based on the preconditioned orthogonal tridiagonalization process and its relation with the preconditioned block-Lanczos process.\n\n[ M 0 ]\n[ 0 N ]\n\nindicates the weighted norm in which residuals are measured. It's the Euclidean norm when M and N are identity operators.\n\nTriMR stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m;\nc: a vector of length n.\n\nOptional arguments\n\nx0: a vector of length m that represents an initial guess of the solution x;\ny0: a vector of length n that represents an initial guess of the solution y.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the partitioned system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the partitioned system;\nldiv: define whether the preconditioners use ldiv! 
or mul!;\nspd: if true, set τ = 1 and ν = 1 for Hermitian and positive-definite linear system;\nsnd: if true, set τ = -1 and ν = -1 for Hermitian and negative-definite linear systems;\nflip: if true, set τ = -1 and ν = 1 for another known variant of Hermitian quasi-definite systems;\nsp: if true, set τ = 1 and ν = 0 for saddle-point systems;\nτ and ν: diagonal scaling factors of the partitioned Hermitian linear system;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length m;\ny: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nA. Montoison and D. Orban, TriCG and TriMR: Two Iterative Methods for Symmetric Quasi-Definite Systems, SIAM Journal on Scientific Computing, 43(4), pp. 2502–2525, 2021.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sp_sqd/#Krylov.trimr!","page":"Saddle-point and Hermitian quasi-definite systems","title":"Krylov.trimr!","text":"solver = trimr!(solver::TrimrSolver, A, b, c; kwargs...)\nsolver = trimr!(solver::TrimrSolver, A, b, c, x0, y0; kwargs...)\n\nwhere kwargs are keyword arguments of trimr.\n\nSee TrimrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"examples/car/","page":"CAR","title":"CAR","text":"using Krylov, MatrixMarket, SuiteSparseMatrixCollection\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"Nasa\", \"nasa2146\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nn = matrix.nrows[1]\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = ones(n)\nb_norm = norm(b)\n\n# Solve Ax = b.\n(x, stats) = car(A, b)\nshow(stats)\nr = b - A * x\n@printf(\"Relative residual: %8.1e\\n\", norm(r) / b_norm)","category":"page"},{"location":"examples/bicgstab/","page":"BICGSTAB","title":"BICGSTAB","text":"using Krylov, LinearOperators, IncompleteLU, HarwellRutherfordBoeing\nusing LinearAlgebra, Printf, SuiteSparseMatrixCollection, SparseArrays\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"sherman5\")\npath = fetch_ssmc(matrix, format=\"RB\")\n\nn = matrix.nrows[1]\nA = RutherfordBoeingData(joinpath(path[1], \"$(matrix.name[1]).rb\")).data\nb = A * ones(n)\n\nF = ilu(A, τ = 0.05)\n\n@printf(\"nnz(ILU) / nnz(A): %7.1e\\n\", nnz(F) / nnz(A))\n\n# Solve Ax = b with BICGSTAB and an incomplete LU factorization\n# Remark: CGS can be used in the same way\nopM = LinearOperator(Float64, n, n, false, false, (y, v) -> forward_substitution!(y, F, v))\nopN = LinearOperator(Float64, n, n, false, false, (y, v) -> backward_substitution!(y, F, v))\nopP = LinearOperator(Float64, n, n, false, false, (y, v) -> ldiv!(y, F, v))\n\n# Without preconditioning\nx, stats = bicgstab(A, b, history=true)\nr = b - A * x\n@printf(\"[Without preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Without preconditioning] 
Number of iterations: %3d\\n\", length(stats.residuals) - 1)\n\n# Split preconditioning\nx, stats = bicgstab(A, b, history=true, M=opM, N=opN)\nr = b - A * x\n@printf(\"[Split preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Split preconditioning] Number of iterations: %3d\\n\", length(stats.residuals) - 1)\n\n# Left preconditioning\nx, stats = bicgstab(A, b, history=true, M=opP)\nr = b - A * x\n@printf(\"[Left preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Left preconditioning] Number of iterations: %3d\\n\", length(stats.residuals) - 1)\n\n# Right preconditioning\nx, stats = bicgstab(A, b, history=true, N=opP)\nr = b - A * x\n@printf(\"[Right preconditioning] Residual norm: %8.1e\\n\", norm(r))\n@printf(\"[Right preconditioning] Number of iterations: %3d\\n\", length(stats.residuals) - 1)","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"The most expensive procedures in Krylov methods are matrix-vector products (y ← Ax) and vector operations (dot products, vector norms, y ← αx + βy). Therefore they directly affect the efficiency of the methods provided by Krylov.jl. In this section, we present various optimizations based on multithreading to speed up these procedures. Multithreading is a form of shared memory parallelism that makes use of multiple cores to perform tasks.","category":"page"},{"location":"tips/#Multi-threaded-BLAS-and-LAPACK-operations","page":"Performance tips","title":"Multi-threaded BLAS and LAPACK operations","text":"","category":"section"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"OPENBLAS_NUM_THREADS=N julia # Set the number of OpenBLAS threads to N\nMKL_NUM_THREADS=N julia # Set the number of MKL threads to N","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"If you don't know the maximum number of threads available on your computer, you can obtain it with","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"NMAX = Sys.CPU_THREADS","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"and define the number of OpenBLAS/MKL threads at runtime with","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"BLAS.set_num_threads(N) # 1 ≤ N ≤ NMAX\nBLAS.get_num_threads()","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"The recommended number of BLAS threads is the number of physical and not logical cores, which is in general N = NMAX / 2 if your CPU supports simultaneous multithreading (SMT).","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"By default Julia ships with OpenBLAS but it's also possible to use Intel MKL BLAS and LAPACK with MKL.jl. If your operating system is MacOS 13.4 or later, it's recommended to use Accelerate BLAS and LAPACK with AppleAccelerate.jl.","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"using LinearAlgebra\nBLAS.get_config() # BLAS.vendor() for Julia 1.6","category":"page"},{"location":"tips/#Multi-threaded-sparse-matrix-vector-products","page":"Performance tips","title":"Multi-threaded sparse matrix-vector products","text":"","category":"section"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"For sparse matrices, the Julia implementation of mul! 
of the SparseArrays library is not parallelized. A significant speed-up can be observed with the multithreaded mul! of MKLSparse.jl or ThreadedSparseCSR.jl.","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"It's also possible to implement a generic multithreaded Julia version. For instance, the following function can be used for symmetric matrices","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"using Base.Threads\n\nfunction threaded_mul!(y::Vector{T}, A::SparseMatrixCSC{T}, x::Vector{T}) where T <: Number\n    A.m == A.n || error(\"A is not a square matrix!\")\n    @threads for i = 1 : A.n\n        tmp = zero(T)\n        @inbounds for j = A.colptr[i] : (A.colptr[i+1] - 1)\n            tmp += A.nzval[j] * x[A.rowval[j]]\n        end\n        @inbounds y[i] = tmp\n    end\n    return y\nend","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"and wrapped inside a linear operator to solve symmetric linear systems","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"using LinearOperators\n\nn, m = size(A)\nsym = herm = true\nT = eltype(A)\nopA = LinearOperator(T, n, m, sym, herm, (y, v) -> threaded_mul!(y, A, v))","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"To enable multi-threading with Julia, you can start julia with the environment variable JULIA_NUM_THREADS or the options -t and --threads","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"julia -t auto # alternative: --threads auto\njulia -t N    # alternative: --threads N\n\nJULIA_NUM_THREADS=N julia","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"Thereafter, you can verify the number of threads usable by Julia","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"using Base.Threads\nnthreads()","category":"page"},
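{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"As a rough sketch of how to measure the gain (BenchmarkTools.jl and the random test matrix are assumptions, not part of the original text), the serial and threaded products can be timed side by side:\n\nusing BenchmarkTools, LinearAlgebra, SparseArrays\n\nA = sprandn(10_000, 10_000, 1.0e-3)\nA = A + A'                          # any sparse symmetric matrix\nx = randn(10_000)\ny = similar(x)\n\n@btime mul!($y, $A, $x)             # serial mul! of SparseArrays\n@btime threaded_mul!($y, $A, $x)    # multithreaded version defined above","category":"page"},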
{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"The following benchmarks illustrate the time required in seconds to compute 1000 sparse matrix-vector products with symmetric matrices of the SuiteSparse Matrix Collection. The computer used for the benchmarks has 2 physical cores and Julia was launched with JULIA_NUM_THREADS=2.","category":"page"},{"location":"tips/","page":"Performance tips","title":"Performance tips","text":"(Image: benchmarks)","category":"page"},{"location":"solvers/gsp/","page":"Generalized saddle-point and non-Hermitian partitioned systems","title":"Generalized saddle-point and non-Hermitian partitioned systems","text":"# Generalized saddle-point and non-Hermitian partitioned systems","category":"page"},{"location":"solvers/gsp/#GPMR","page":"Generalized saddle-point and non-Hermitian partitioned systems","title":"GPMR","text":"","category":"section"},{"location":"solvers/gsp/","page":"Generalized saddle-point and non-Hermitian partitioned systems","title":"Generalized saddle-point and non-Hermitian partitioned systems","text":"gpmr\ngpmr!","category":"page"},{"location":"solvers/gsp/#Krylov.gpmr","page":"Generalized saddle-point and non-Hermitian partitioned systems","title":"Krylov.gpmr","text":"(x, y, stats) = gpmr(A, B, b::AbstractVector{FC}, c::AbstractVector{FC};\n                     memory::Int=20, C=I, D=I, E=I, F=I,\n                     ldiv::Bool=false, gsp::Bool=false,\n                     λ::FC=one(FC), μ::FC=one(FC),\n                     reorthogonalization::Bool=false, atol::T=√eps(T),\n                     rtol::T=√eps(T), itmax::Int=0,\n                     timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n                     callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, y, stats) = gpmr(A, B, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)\n\nGPMR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.\n\nGiven matrices A of dimension m × n and B of dimension n × m, GPMR solves the non-Hermitian partitioned linear system\n\n[ λIₘ  A  ] [ x ] = [ b ]\n[ B    μIₙ ] [ y ]   [ c ],\n\nof size (n+m) × (n+m) where λ and μ are real or complex numbers. A can have any shape and B has the shape of Aᴴ. A, B, b and c must all be nonzero.\n\nThis implementation allows left and right block diagonal preconditioners\n\n[ C     ] [ λM  A  ] [ E     ] [ E⁻¹x ] = [ Cb ]\n[    D ] [ B   μN ] [    F ] [ F⁻¹y ]   [ Dc ],\n\nand can solve\n\n[ λM  A  ] [ x ] = [ b ]\n[ B   μN ] [ y ]   [ c ]\n\nwhen CE = M⁻¹ and DF = N⁻¹.\n\nBy default, GPMR solves unsymmetric linear systems with λ = 1 and μ = 1.\n\nGPMR is based on the orthogonal Hessenberg reduction process and its relations with the block-Arnoldi process. The residual norm ‖rₖ‖ is monotonically decreasing in GPMR.\n\nGPMR stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nB: a linear operator that models a matrix of dimension n × m;\nb: a vector of length m;\nc: a vector of length n.\n\nOptional arguments\n\nx0: a vector of length m that represents an initial guess of the solution x;\ny0: a vector of length n that represents an initial guess of the solution y.\n\nKeyword arguments\n\nmemory: the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. 
Additional storage will be allocated if the number of iterations exceeds memory;\nC: linear operator that models a nonsingular matrix of size m, and represents the first term of the block-diagonal left preconditioner;\nD: linear operator that models a nonsingular matrix of size n, and represents the second term of the block-diagonal left preconditioner;\nE: linear operator that models a nonsingular matrix of size m, and represents the first term of the block-diagonal right preconditioner;\nF: linear operator that models a nonsingular matrix of size n, and represents the second term of the block-diagonal right preconditioner;\nldiv: define whether the preconditioners use ldiv! or mul!;\ngsp: if true, set λ = 1 and μ = 0 for generalized saddle-point systems;\nλ and μ: diagonal scaling factors of the partitioned linear system;\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length m;\ny: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nA. Montoison and D. Orban, GPMR: An Iterative Method for Unsymmetric Partitioned Linear Systems, SIAM Journal on Matrix Analysis and Applications, 44(1), pp. 293–311, 2023.\n\n\n\n\n\n","category":"function"},
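{"location":"solvers/gsp/","page":"Generalized saddle-point and non-Hermitian partitioned systems","title":"Generalized saddle-point and non-Hermitian partitioned systems","text":"A minimal sketch of a call to gpmr on a small random partitioned system (the data below is illustrative and not taken from the reference):\n\nusing Krylov, LinearAlgebra\n\nm, n = 4, 3\nA = rand(m, n)\nB = rand(n, m)\nb = rand(m)\nc = rand(n)\n\n# With the defaults λ = μ = 1, gpmr solves [Iₘ A; B Iₙ] [x; y] = [b; c].\nx, y, stats = gpmr(A, B, b, c)\n\nK = [Matrix(1.0I, m, m) A; B Matrix(1.0I, n, n)]\nnorm([b; c] - K * [x; y])  # expected to be small when K is nonsingular","category":"page"},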
{"location":"solvers/gsp/#Krylov.gpmr!","page":"Generalized saddle-point and non-Hermitian partitioned systems","title":"Krylov.gpmr!","text":"solver = gpmr!(solver::GpmrSolver, A, B, b, c; kwargs...)\nsolver = gpmr!(solver::GpmrSolver, A, B, b, c, x0, y0; kwargs...)\n\nwhere kwargs are keyword arguments of gpmr.\n\nNote that the memory keyword argument is the only exception. It's required to create a GpmrSolver and can't be changed later.\n\nSee GpmrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"examples/lsmr/","page":"LSMR","title":"LSMR","text":"using MatrixMarket, SuiteSparseMatrixCollection\nusing Krylov, LinearOperators\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"illc1850\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1])_b.mtx\"))[:]\n(m, n) = size(A)\n@printf(\"System size: %d rows and %d columns\\n\", m, n)\n\n# Define a regularization parameter.\nλ = 1.0e-3\n\n(x, stats) = lsmr(A, b, λ=λ, atol=0.0, btol=0.0)\nshow(stats)\nresid = norm(A' * (A * x - b) + λ^2 * x) / norm(b)\n@printf(\"LSMR: Relative residual: %8.1e\\n\", resid)\n@printf(\"LSMR: ‖x‖: %8.1e\\n\", norm(x))","category":"page"},
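{"location":"examples/lsmr/","page":"LSMR","title":"LSMR","text":"With a regularization parameter λ, LSMR solves min ‖Ax - b‖² + λ²‖x‖², so the optimality residual above is Aᴴ(Ax - b) + λ²x. A hypothetical cross-check (not part of the original example) builds the equivalent augmented least-squares problem explicitly:\n\nusing SparseArrays\n\nAaug = [A; sparse(λ * I, n, n)]  # the damped problem is min ‖Aaug * x - baug‖\nbaug = [b; zeros(n)]\nresid_aug = norm(Aaug' * (Aaug * x - baug)) / norm(b)  # matches resid above","category":"page"},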
{"location":"examples/cg_lanczos_shift/","page":"CG-LANCZOS-SHIFT","title":"CG-LANCZOS-SHIFT","text":"using Krylov, MatrixMarket, SuiteSparseMatrixCollection\nusing LinearAlgebra, Printf\n\nfunction residuals(A, b, shifts, x)\n  nshifts = length(shifts)\n  r = [ (b - A * x[i] - shifts[i] * x[i]) for i = 1 : nshifts ]\n  return r\nend\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"1138_bus\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nn, m = size(A)\nb = ones(n)\n\n# Solve (A + αI)x = b.\nshifts = [1.0, 2.0, 3.0, 4.0]\n(x, stats) = cg_lanczos_shift(A, b, shifts)\nshow(stats)\nr = residuals(A, b, shifts, x)\nresids = map(norm, r) / norm(b)\n@printf(\"Relative residuals with shifts:\\n\")\nfor resid in resids\n  @printf(\" %8.1e\", resid)\nend\n@printf(\"\\n\")","category":"page"},{"location":"examples/minres_qlp/","page":"MINRES-QLP","title":"MINRES-QLP","text":"using Krylov\nusing LinearAlgebra, Printf\n\nA = diagm([1.0; 2.0; 3.0; 0.0])\nn = size(A, 1)\nb = [1.0; 2.0; 3.0; 4.0]\nb_norm = norm(b)\n\n# MINRES-QLP returns the minimum-norm solution of symmetric, singular and inconsistent systems\n(x, stats) = minres_qlp(A, b);\nr = b - A * x;\n\n@printf(\"Residual r: %s\\n\", Krylov.vec2str(r))\n@printf(\"Relative residual norm ‖r‖: %8.1e\\n\", norm(r) / b_norm)\n@printf(\"Solution x: %s\\n\", Krylov.vec2str(x))\n@printf(\"Minimum-norm solution? %s\\n\", x ≈ [1.0; 1.0; 1.0; 0.0])","category":"page"},
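{"location":"examples/minres_qlp/","page":"MINRES-QLP","title":"MINRES-QLP","text":"As a quick sanity check (not part of the original example), the minimum-norm least-squares solution of this small dense system can also be obtained with the pseudoinverse:\n\nx_pinv = pinv(A) * b   # minimum-norm least-squares solution\nx ≈ x_pinv             # expected to hold up to the solver tolerances","category":"page"},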
{"location":"examples/lsqr/","page":"LSQR","title":"LSQR","text":"using MatrixMarket, SuiteSparseMatrixCollection\nusing Krylov, LinearOperators\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"illc1033\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1])_b.mtx\"))[:]\n(m, n) = size(A)\n@printf(\"System size: %d rows and %d columns\\n\", m, n)\n\n# Define a regularization parameter.\nλ = 1.0e-3\n\n(x, stats) = lsqr(A, b, λ=λ, atol=0.0, btol=0.0)\nshow(stats)\nresid = norm(A' * (A * x - b) + λ^2 * x) / norm(b)\n@printf(\"LSQR: Relative residual: %8.1e\\n\", resid)\n@printf(\"LSQR: ‖x‖: %8.1e\\n\", norm(x))","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"","category":"page"},{"location":"factorization-free/#factorization-free","page":"Factorization-free operators","title":"Factorization-free operators","text":"","category":"section"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"All methods are factorization-free, which means that you only need to provide operator-vector products.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"The A or B input arguments of Krylov.jl solvers can be any object that represents a linear operator. That object must implement mul! for multiplication with a vector, size() and eltype(). For certain methods it must also implement adjoint().","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"Some methods only require A * v products, whereas other ones also require A' * u products. In the latter case, adjoint(A) must also be implemented.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"A * v A * v and A' * u\nCG, CR, CAR CGLS, CRLS, CGNE, CRMR\nSYMMLQ, CG-LANCZOS, MINRES, MINRES-QLP, MINARES LSLQ, LSQR, LSMR, LNLQ, CRAIG, CRAIGMR\nDIOM, FOM, DQGMRES, GMRES, FGMRES BiLQ, QMR, BiLQR, USYMLQ, USYMQR, TriLQR\nCGS, BICGSTAB TriCG, TriMR","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"info: Info\nGPMR is the only method that requires A * v and B * w products.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"Preconditioners M, N, C, D, E or F can also be linear operators and must implement mul! or ldiv!.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"We strongly recommend LinearOperators.jl to model matrix-free operators, but other packages such as LinearMaps.jl, DiffEqOperators.jl or your own operator can be used as well.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"With LinearOperators.jl, operators are defined as","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"A = LinearOperator(type, nrows, ncols, symmetric, hermitian, prod, tprod, ctprod)","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"where","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"type is the operator element type;\nnrows and ncols are its dimensions;\nsymmetric and hermitian should be set to true or false;\nprod(y, v), tprod(y, w) and ctprod(y, u) are called when writing mul!(y, A, v), mul!(y, transpose(A), w), and mul!(y, A', u), respectively.","category":"page"},
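{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"For instance, here is a minimal sketch of a symmetric operator that applies the tridiagonal stencil tridiag(-1, 2, -1) without ever storing a matrix (the stencil and the solve with cg are illustrative):\n\nusing LinearOperators, Krylov\n\nn = 10\nfunction stencil!(y, v)  # y ← T v with T = tridiag(-1, 2, -1)\n    for i = 1:n\n        y[i] = 2 * v[i] - (i > 1 ? v[i-1] : 0.0) - (i < n ? v[i+1] : 0.0)\n    end\n    return y\nend\nopT = LinearOperator(Float64, n, n, true, true, (y, v) -> stencil!(y, v))\n\nx, stats = cg(opT, ones(n))  # T is symmetric positive definite","category":"page"},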
{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"See the tutorial and the detailed documentation for more information on LinearOperators.jl.","category":"page"},{"location":"factorization-free/#Examples","page":"Factorization-free operators","title":"Examples","text":"","category":"section"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"In the field of nonlinear optimization, finding critical points of a continuous function frequently involves linear systems with a Hessian or Jacobian as coefficient. Materializing such operators as matrices is expensive in terms of operations and memory consumption and is unreasonable for high-dimensional problems. However, it is often possible to implement efficient Hessian-vector and Jacobian-vector products, for example with the help of automatic differentiation tools, and to use them within Krylov solvers. We now illustrate variants with explicit matrices and with matrix-free operators for two well-known optimization methods.","category":"page"},{"location":"factorization-free/#Example-1:-Newton's-Method-for-convex-optimization","page":"Factorization-free operators","title":"Example 1: Newton's Method for convex optimization","text":"","category":"section"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"At each iteration of Newton's method applied to a C² strictly convex function f : ℝⁿ → ℝ, a descent direction is determined by minimizing the quadratic Taylor model of f:","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"min_{d ∈ ℝⁿ} f(xₖ) + ∇f(xₖ)ᵀ d + ½ dᵀ ∇²f(xₖ) d","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"which is equivalent to solving the symmetric and positive-definite system","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"∇²f(xₖ) d = -∇f(xₖ)","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"The system above can be solved with the conjugate gradient method as follows, using the explicit Hessian:","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"using ForwardDiff, Krylov\n\nxk = -ones(4)\n\nf(x) = (x[1] - 1)^2 + (x[2] - 2)^2 + (x[3] - 3)^2 + (x[4] - 4)^2\n\ng(x) = ForwardDiff.gradient(f, x)\n\nH(x) = ForwardDiff.hessian(f, x)\n\nd, stats = cg(H(xk), -g(xk))","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"The explicit Hessian can be replaced by a linear operator that only computes Hessian-vector products:","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"using ForwardDiff, LinearOperators, Krylov\n\nxk = -ones(4)\n\nf(x) = (x[1] - 1)^2 + (x[2] - 2)^2 + (x[3] - 3)^2 + (x[4] - 4)^2\n\ng(x) = ForwardDiff.gradient(f, x)\n\nH(y, v) = ForwardDiff.derivative!(y, t -> g(xk + t * v), 0)\nopH = LinearOperator(Float64, 4, 4, true, true, (y, v) -> H(y, v))\n\ncg(opH, -g(xk))","category":"page"},
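{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"Both variants compute the same Newton direction up to the solver tolerances; a hypothetical cross-check, assuming the two blocks above were run in the same session:\n\nd_op, _ = cg(opH, -g(xk))\nd ≈ d_op  # the matrix-free direction matches the explicit one","category":"page"},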
{"location":"factorization-free/#Example-2:-The-Gauss-Newton-Method-for-Nonlinear-Least-Squares","page":"Factorization-free operators","title":"Example 2: The Gauss-Newton Method for Nonlinear Least Squares","text":"","category":"section"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"At each iteration of the Gauss-Newton method applied to a nonlinear least-squares objective f(x) = ½ ‖F(x)‖² where F : ℝⁿ → ℝᵐ is C¹, we solve the subproblem:","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"min_{d ∈ ℝⁿ} ½ ‖J(xₖ) d + F(xₖ)‖²","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"where J(x) is the Jacobian of F at x.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"An appropriate iterative method to solve the above linear least-squares problems is LSMR. We could pass the explicit Jacobian to LSMR as follows:","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"using ForwardDiff, Krylov\n\nxk = ones(2)\n\nF(x) = [x[1]^4 - 3; exp(x[2]) - 2; log(x[1]) - x[2]^2]\n\nJ(x) = ForwardDiff.jacobian(F, x)\n\nd, stats = lsmr(J(xk), -F(xk))","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"However, the explicit Jacobian can be replaced by a linear operator that only computes Jacobian-vector and transposed Jacobian-vector products:","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"using LinearAlgebra, ForwardDiff, LinearOperators, Krylov\n\nxk = ones(2)\n\nF(x) = [x[1]^4 - 3; exp(x[2]) - 2; log(x[1]) - x[2]^2]\n\nJ(y, v) = ForwardDiff.derivative!(y, t -> F(xk + t * v), 0)\nJᵀ(y, u) = ForwardDiff.gradient!(y, x -> dot(F(x), u), xk)\nopJ = LinearOperator(Float64, 3, 2, false, false, (y, v) -> J(y, v),\n                                                  (y, w) -> Jᵀ(y, w),\n                                                  (y, u) -> Jᵀ(y, u))\n\nlsmr(opJ, -F(xk))","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"Note that preconditioners can also be implemented as abstract operators. For instance, we could compute the Cholesky factorization of M and N and create linear operators that perform the forward and backsolves.","category":"page"},{"location":"factorization-free/","page":"Factorization-free operators","title":"Factorization-free operators","text":"Krylov methods combined with factorization-free operators allow one to reduce computation time and memory requirements considerably by avoiding building and storing the system matrix. In the field of partial differential equations, the implementation of high-performance factorization-free operators and assembly-free preconditioning is a subject of active research.","category":"page"},{"location":"inplace/#In-place-methods","page":"In-place methods","title":"In-place methods","text":"","category":"section"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"All solvers in Krylov.jl have an in-place variant implemented in a method whose name ends with !. A workspace (KrylovSolver) that contains the storage needed by a Krylov method can be used to solve multiple linear systems that have the same dimensions in the same floating-point precision. Each KrylovSolver has two constructors:","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"XyzSolver(A, b)\nXyzSolver(m, n, S)","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"Xyz is the name of the Krylov method with lowercase letters except its first one (Cg, Minres, Lsmr, Bicgstab, ...). 
Given an operator A and a right-hand side b, you can create a KrylovSolver based on the size of A and the type of b or explicitly give the dimensions (m, n) and the storage type S.","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"For example, use S = Vector{Float64} if you want to solve linear systems in double precision on the CPU and S = CuVector{Float32} if you want to solve linear systems in single precision on an Nvidia GPU.","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"note: Note\nDiomSolver, FomSolver, DqgmresSolver, GmresSolver, FgmresSolver, GpmrSolver and CgLanczosShiftSolver require an additional argument (memory or nshifts).","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"The workspace is always the first argument of the in-place methods:","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"minres_solver = MinresSolver(n, n, Vector{Float64})\nminres!(minres_solver, A1, b1)\n\ndqgmres_solver = DqgmresSolver(n, n, memory, Vector{BigFloat})\ndqgmres!(dqgmres_solver, A2, b2)\n\nlsqr_solver = LsqrSolver(m, n, CuVector{Float32})\nlsqr!(lsqr_solver, A3, b3)","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"A generic function solve! is also available and dispatches to the appropriate Krylov method.","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"Krylov.solve!","category":"page"},{"location":"inplace/#Krylov.solve!","page":"In-place methods","title":"Krylov.solve!","text":"solve!(solver, args...; kwargs...)\n\nUse the in-place Krylov method associated to solver.\n\n\n\n\n\n","category":"function"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"In-place methods return an updated solver workspace. Solutions and statistics can be recovered via solver.x, solver.y and solver.stats. Functions solution and statistics can be also used.","category":"page"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"Krylov.nsolution\nKrylov.solution\nKrylov.statistics\nKrylov.issolved","category":"page"},{"location":"inplace/#Krylov.nsolution","page":"In-place methods","title":"Krylov.nsolution","text":"nsolution(solver)\n\nReturn the number of outputs of solution(solver).\n\n\n\n\n\n","category":"function"},{"location":"inplace/#Krylov.solution","page":"In-place methods","title":"Krylov.solution","text":"solution(solver)\n\nReturn the solution(s) stored in the solver. 
Optionally you can specify which solution you want to recover, solution(solver, 1) returns x and solution(solver, 2) returns y.\n\n\n\n\n\n","category":"function"},{"location":"inplace/#Krylov.statistics","page":"In-place methods","title":"Krylov.statistics","text":"statistics(solver)\n\nReturn the statistics stored in the solver.\n\n\n\n\n\n","category":"function"},{"location":"inplace/#Krylov.issolved","page":"In-place methods","title":"Krylov.issolved","text":"issolved(solver)\n\nReturn a boolean that determines whether the Krylov method associated to solver succeeded.\n\n\n\n\n\n","category":"function"},{"location":"inplace/#Examples","page":"In-place methods","title":"Examples","text":"","category":"section"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"We illustrate the use of in-place Krylov solvers with two well-known optimization methods. The details of the optimization methods are described in the section about Factorization-free operators.","category":"page"},{"location":"inplace/#Example-1:-Newton's-method-for-convex-optimization-without-linesearch","page":"In-place methods","title":"Example 1: Newton's method for convex optimization without linesearch","text":"","category":"section"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"using Krylov\n\nfunction newton(∇f, ∇²f, x₀; itmax = 200, tol = 1e-8)\n\n n = length(x₀)\n x = copy(x₀)\n gx = ∇f(x)\n \n iter = 0\n S = typeof(x)\n solver = CgSolver(n, n, S)\n Δx = solver.x\n\n solved = false\n tired = false\n\n while !(solved || tired)\n \n Hx = ∇²f(x) # Compute ∇²f(xₖ)\n cg!(solver, Hx, -gx) # Solve ∇²f(xₖ)Δx = -∇f(xₖ)\n x = x + Δx # Update xₖ₊₁ = xₖ + Δx\n gx = ∇f(x) # ∇f(xₖ₊₁)\n \n iter += 1\n solved = norm(gx) ≤ tol\n tired = iter ≥ itmax\n end\n return x\nend","category":"page"},{"location":"inplace/#Example-2:-The-Gauss-Newton-method-for-nonlinear-least-squares-without-linesearch","page":"In-place methods","title":"Example 2: The Gauss-Newton method for nonlinear least squares without linesearch","text":"","category":"section"},{"location":"inplace/","page":"In-place methods","title":"In-place methods","text":"using Krylov\n\nfunction gauss_newton(F, JF, x₀; itmax = 200, tol = 1e-8)\n\n n = length(x₀)\n x = copy(x₀)\n Fx = F(x)\n m = length(Fx)\n \n iter = 0\n S = typeof(x)\n solver = LsmrSolver(m, n, S)\n Δx = solver.x\n\n solved = false\n tired = false\n\n while !(solved || tired)\n \n Jx = JF(x) # Compute J(xₖ)\n lsmr!(solver, Jx, -Fx) # Minimize ‖J(xₖ)Δx + F(xₖ)‖\n x = x + Δx # Update xₖ₊₁ = xₖ + Δx\n Fx_old = Fx # F(xₖ)\n Fx = F(x) # F(xₖ₊₁)\n \n iter += 1\n solved = norm(Fx - Fx_old) / norm(Fx) ≤ tol\n tired = iter ≥ itmax\n end\n return x\nend","category":"page"},{"location":"warm-start/#warm-start","page":"Warm-start","title":"Warm-start","text":"","category":"section"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"Most Krylov methods in this module accept a starting point as argument. The starting point is used as initial approximation to a solution.","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"solver = CgSolver(A, b)\ncg!(solver, A, b, itmax=100)\nif !issolved(solver)\n cg!(solver, A, b, solver.x, itmax=100) # cg! 
uses the approximate solution `solver.x` as starting point\nend","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"If the user has an initial guess x0, it can be provided directly.","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"cg(A, b, x0)","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"It is also possible to use the warm_start! function to feed the starting point into the solver.","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"warm_start!(solver, x0)\ncg!(solver, A, b)\n# the previous two lines are equivalent to cg!(solver, A, b, x0)","category":"page"},
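{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"A minimal sketch (the small SPD system below is illustrative) showing that the two warm-start forms agree:\n\nusing Krylov, LinearAlgebra\n\nA = diagm([1.0, 2.0, 3.0])\nb = [1.0, 2.0, 3.0]\nx0 = [0.9, 0.9, 0.9]\n\nx1, _ = cg(A, b, x0)\n\nsolver = CgSolver(A, b)\nwarm_start!(solver, x0)\ncg!(solver, A, b)\n\nsolver.x ≈ x1  # both forms start from x0 and return the same solution","category":"page"},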
{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"If a Krylov method doesn't have the option to warm start, it can still be done explicitly. We provide an example with cg_lanczos!.","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"solver = CgLanczosSolver(A, b)\ncg_lanczos!(solver, A, b)\nx₀ = copy(solver.x)  # Ax₀ ≈ b\nr = b - A * x₀       # r = b - Ax₀\ncg_lanczos!(solver, A, r)\nΔx = solver.x        # AΔx = r\nx = x₀ + Δx          # Ax = b","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"Explicit restarts cannot be avoided in certain block methods, such as TriMR, due to the preconditioners.","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"# [E  A] [x] = [b]\n# [Aᴴ F] [y]   [c]\nM = inv(E)\nN = inv(F)\nx₀, y₀, stats = trimr(A, b, c, M=M, N=N)\n\n# E and F are not available inside TriMR\nb₀ = b - E * x₀ - A * y₀\nc₀ = c - A' * x₀ - F * y₀\n\nΔx, Δy, stats = trimr(A, b₀, c₀, M=M, N=N)\nx = x₀ + Δx\ny = y₀ + Δy","category":"page"},{"location":"warm-start/","page":"Warm-start","title":"Warm-start","text":"# ## Restarted methods\n#\n# The storage requirements of Krylov methods based on the Arnoldi process, such as FOM and GMRES, increase as the iteration progresses.\n# For very large problems, the storage costs become prohibitive after only few iterations and restarted variants FOM(k) and GMRES(k) are preferred.\n# In this section, we show how to use warm starts to implement GMRES(k) and FOM(k).\n#\n# ```julia\n# k = 50\n# solver = GmresSolver(A, b, k) # FomSolver(A, b, k)\n# solver.x .= 0 # solver.x .= x₀ \n# nrestart = 0\n# while !issolved(solver) && nrestart ≤ 10\n# solve!(solver, A, b, solver.x, itmax=k)\n# nrestart += 1\n# end\n# ```","category":"page"},{"location":"examples/tricg/","page":"TriCG","title":"TriCG","text":"using Krylov, LinearOperators\nusing LinearAlgebra, Printf, SparseArrays\n\n# Identity matrix.\neye(n::Int) = sparse(1.0 * I, n, n)\n\n# Symmetric quasi-definite systems and variants\nn = m = 5\nA = [2^(i/j)*j + (-1)^(i-j) * n*(i-1) for i = 1:n, j = 1:n]\nb = ones(n)\nM = diagm(0 => [3.0 * i for i = 1:n])\nN = diagm(0 => [5.0 * i for i = 1:n])\nc = -b\n\n# [I   A] [x] = [b]\n# [Aᴴ -I] [y]   [c]\n(x, y, stats) = tricg(A, b, c)\nK = [eye(m) A; A' -eye(n)]\nB = [b; c]\nr = B - K * [x; y]\nresid = norm(r) / norm(B)\n@printf(\"TriCG: Relative residual: %8.1e\\n\", resid)\n\n# [-I  A] [x] = [b]\n# [ Aᴴ I] [y]   [c]\n(x, y, stats) = tricg(A, b, c, flip=true)\nK = [-eye(m) A; A' eye(n)]\nB = [b; c]\nr = B - K * [x; y]\nresid = norm(r) / norm(B)\n@printf(\"TriCG: Relative residual: %8.1e\\n\", resid)\n\n# [I  A] [x] = [b]\n# [Aᴴ I] [y]   [c]\n(x, y, stats) = tricg(A, b, c, spd=true)\nK = [eye(m) A; A' eye(n)]\nB = [b; c]\nr = B - K * [x; y]\nresid = norm(r) / norm(B)\n@printf(\"TriCG: Relative residual: %8.1e\\n\", resid)\n\n# [-I  A] [x] = [b]\n# [ Aᴴ -I] [y]   [c]\n(x, y, stats) = tricg(A, b, c, snd=true)\nK = [-eye(m) A; A' -eye(n)]\nB = [b; c]\nr = B - K * [x; y]\nresid = norm(r) / norm(B)\n@printf(\"TriCG: Relative residual: %8.1e\\n\", resid)\n\n# [τI  A] [x] = [b]\n# [ Aᴴ νI] [y]   [c]\n(τ, ν) = (1e-4, 1e2)\n(x, y, stats) = tricg(A, b, c, τ=τ, ν=ν)\nK = [τ*eye(m) A; A' ν*eye(n)]\nB = [b; c]\nr = B - K * [x; y]\nresid = norm(r) / norm(B)\n@printf(\"TriCG: Relative residual: %8.1e\\n\", resid)\n\n# [M⁻¹  A  ] [x] = [b]\n# [Aᴴ  -N⁻¹] [y]   [c]\n(x, y, stats) = tricg(A, b, c, M=M, N=N, verbose=1)\nK = [inv(M) A; A' -inv(N)]\nH = BlockDiagonalOperator(M, N)\nB = [b; c]\nr = B - K * [x; y]\nresid = sqrt(dot(r, H * r)) / sqrt(dot(B, H * B))\n@printf(\"TriCG: Relative residual: %8.1e\\n\", resid)","category":"page"},{"location":"solvers/as/","page":"Adjoint systems","title":"Adjoint systems","text":"# Adjoint systems","category":"page"},{"location":"solvers/as/#BiLQR","page":"Adjoint systems","title":"BiLQR","text":"","category":"section"},{"location":"solvers/as/","page":"Adjoint systems","title":"Adjoint systems","text":"bilqr\nbilqr!","category":"page"},{"location":"solvers/as/#Krylov.bilqr","page":"Adjoint systems","title":"Krylov.bilqr","text":"(x, y, stats) = bilqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};\n                      transfer_to_bicg::Bool=true, atol::T=√eps(T),\n                      rtol::T=√eps(T), itmax::Int=0,\n                      timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n                      callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, y, stats) = bilqr(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)\n\nBiLQR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.\n\nCombine BiLQ and QMR to solve adjoint systems.\n\n[0  A] [y] = [b]\n[Aᴴ 0] [x]   [c]\n\nThe relation bᴴc ≠ 0 must be satisfied. BiLQ is used for solving the primal system Ax = b of size n. QMR is used for solving the dual system Aᴴy = c of size n.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n;\nc: a vector of length n.\n\nOptional arguments\n\nx0: a vector of length n that represents an initial guess of the solution x;\ny0: a vector of length n that represents an initial guess of the solution y.\n\nKeyword arguments\n\ntransfer_to_bicg: transfer from the BiLQ point to the BiCG point, when it exists. The transfer is based on the residual norm;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\ny: a dense vector of length n;\nstats: statistics collected on the run in an AdjointStats structure.\n\nReference\n\nA. Montoison and D. Orban, BiLQ: An Iterative Method for Nonsymmetric Linear Systems with a Quasi-Minimum Error Property, SIAM Journal on Matrix Analysis and Applications, 41(3), pp. 
1145–1166, 2020.\n\n\n\n\n\n","category":"function"},{"location":"solvers/as/#Krylov.bilqr!","page":"Adjoint systems","title":"Krylov.bilqr!","text":"solver = bilqr!(solver::BilqrSolver, A, b, c; kwargs...)\nsolver = bilqr!(solver::BilqrSolver, A, b, c, x0, y0; kwargs...)\n\nwhere kwargs are keyword arguments of bilqr.\n\nSee BilqrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/as/#TriLQR","page":"Adjoint systems","title":"TriLQR","text":"","category":"section"},{"location":"solvers/as/","page":"Adjoint systems","title":"Adjoint systems","text":"trilqr\ntrilqr!","category":"page"},{"location":"solvers/as/#Krylov.trilqr","page":"Adjoint systems","title":"Krylov.trilqr","text":"(x, y, stats) = trilqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};\n transfer_to_usymcg::Bool=true, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, y, stats) = trilqr(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)\n\nTriLQR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.\n\nCombine USYMLQ and USYMQR to solve adjoint systems.\n\n[0 A] [y] = [b]\n[Aᴴ 0] [x] [c]\n\nUSYMLQ is used for solving primal system Ax = b of size m × n. USYMQR is used for solving dual system Aᴴy = c of size n × m.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m;\nc: a vector of length n.\n\nOptional arguments\n\nx0: a vector of length n that represents an initial guess of the solution x;\ny0: a vector of length m that represents an initial guess of the solution y.\n\nKeyword arguments\n\ntransfer_to_usymcg: transfer from the USYMLQ point to the USYMCG point, when it exists. The transfer is based on the residual norm;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\ny: a dense vector of length m;\nstats: statistics collected on the run in an AdjointStats structure.\n\nReference\n\nA. Montoison and D. Orban, BiLQ: An Iterative Method for Nonsymmetric Linear Systems with a Quasi-Minimum Error Property, SIAM Journal on Matrix Analysis and Applications, 41(3), pp. 
1145–1166, 2020.\n\n\n\n\n\n","category":"function"},{"location":"solvers/as/#Krylov.trilqr!","page":"Adjoint systems","title":"Krylov.trilqr!","text":"solver = trilqr!(solver::TrilqrSolver, A, b, c; kwargs...)\nsolver = trilqr!(solver::TrilqrSolver, A, b, c, x0, y0; kwargs...)\n\nwhere kwargs are keyword arguments of trilqr.\n\nSee TrilqrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"api/#Stats-Types","page":"API","title":"Stats Types","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Krylov.KrylovStats\nKrylov.SimpleStats\nKrylov.LanczosStats\nKrylov.LanczosShiftStats\nKrylov.SymmlqStats\nKrylov.AdjointStats\nKrylov.LNLQStats\nKrylov.LSLQStats\nKrylov.LsmrStats","category":"page"},{"location":"api/#Krylov.KrylovStats","page":"API","title":"Krylov.KrylovStats","text":"Abstract type for statistics returned by a solver\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.SimpleStats","page":"API","title":"Krylov.SimpleStats","text":"Type for statistics returned by the majority of Krylov solvers, the attributes are:\n\nniter\nsolved\ninconsistent\nresiduals\nAresiduals\nAcond\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LanczosStats","page":"API","title":"Krylov.LanczosStats","text":"Type for statistics returned by CG-LANCZOS, the attributes are:\n\nniter\nsolved\nresiduals\nindefinite\nAnorm\nAcond\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LanczosShiftStats","page":"API","title":"Krylov.LanczosShiftStats","text":"Type for statistics returned by CG-LANCZOS with shifts, the attributes are:\n\nniter\nsolved\nresiduals\nindefinite\nAnorm\nAcond\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.SymmlqStats","page":"API","title":"Krylov.SymmlqStats","text":"Type for statistics returned by SYMMLQ, the attributes are:\n\nniter\nsolved\nresiduals\nresidualscg\nerrors\nerrorscg\nAnorm\nAcond\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.AdjointStats","page":"API","title":"Krylov.AdjointStats","text":"Type for statistics returned by adjoint systems solvers BiLQR and TriLQR, the attributes are:\n\nniter\nsolved_primal\nsolved_dual\nresiduals_primal\nresiduals_dual\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LNLQStats","page":"API","title":"Krylov.LNLQStats","text":"Type for statistics returned by the LNLQ method, the attributes are:\n\nniter\nsolved\nresiduals\nerrorwithbnd\nerrorbndx\nerrorbndy\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LSLQStats","page":"API","title":"Krylov.LSLQStats","text":"Type for statistics returned by the LSLQ method, the attributes are:\n\nniter\nsolved\ninconsistent\nresiduals\nAresiduals\nerr_lbnds\nerrorwithbnd\nerrubndslq\nerrubndscg\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LsmrStats","page":"API","title":"Krylov.LsmrStats","text":"Type for statistics returned by LSMR. 
The attributes are:\n\nniter\nsolved\ninconsistent\nresiduals\nAresiduals\nAcond\nAnorm\nxNorm\ntimer\nstatus\n\n\n\n\n\n","category":"type"},{"location":"api/#Solver-Types","page":"API","title":"Solver Types","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"KrylovSolver\nMinresSolver\nMinaresSolver\nCgSolver\nCrSolver\nCarSolver\nSymmlqSolver\nCgLanczosSolver\nCgLanczosShiftSolver\nMinresQlpSolver\nDiomSolver\nFomSolver\nDqgmresSolver\nGmresSolver\nUsymlqSolver\nUsymqrSolver\nTricgSolver\nTrimrSolver\nTrilqrSolver\nCgsSolver\nBicgstabSolver\nBilqSolver\nQmrSolver\nBilqrSolver\nCglsSolver\nCrlsSolver\nCgneSolver\nCrmrSolver\nLslqSolver\nLsqrSolver\nLsmrSolver\nLnlqSolver\nCraigSolver\nCraigmrSolver\nGpmrSolver\nFgmresSolver","category":"page"},{"location":"api/#Krylov.KrylovSolver","page":"API","title":"Krylov.KrylovSolver","text":"Abstract type for using Krylov solvers in-place\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.MinresSolver","page":"API","title":"Krylov.MinresSolver","text":"Type for storing the vectors required by the in-place version of MINRES.\n\nThe outer constructors\n\nsolver = MinresSolver(m, n, S; window :: Int=5)\nsolver = MinresSolver(A, b; window :: Int=5)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.MinaresSolver","page":"API","title":"Krylov.MinaresSolver","text":"Type for storing the vectors required by the in-place version of MINARES.\n\nThe outer constructors\n\nsolver = MinaresSolver(m, n, S)\nsolver = MinaresSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CgSolver","page":"API","title":"Krylov.CgSolver","text":"Type for storing the vectors required by the in-place version of CG.\n\nThe outer constructors\n\nsolver = CgSolver(m, n, S)\nsolver = CgSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CrSolver","page":"API","title":"Krylov.CrSolver","text":"Type for storing the vectors required by the in-place version of CR.\n\nThe outer constructors\n\nsolver = CrSolver(m, n, S)\nsolver = CrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CarSolver","page":"API","title":"Krylov.CarSolver","text":"Type for storing the vectors required by the in-place version of CAR.\n\nThe outer constructors\n\nsolver = CarSolver(m, n, S)\nsolver = CarSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.SymmlqSolver","page":"API","title":"Krylov.SymmlqSolver","text":"Type for storing the vectors required by the in-place version of SYMMLQ.\n\nThe outer constructors\n\nsolver = SymmlqSolver(m, n, S)\nsolver = SymmlqSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CgLanczosSolver","page":"API","title":"Krylov.CgLanczosSolver","text":"Type for storing the vectors required by the in-place version of CG-LANCZOS.\n\nThe outer constructors\n\nsolver = CgLanczosSolver(m, n, S)\nsolver = CgLanczosSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CgLanczosShiftSolver","page":"API","title":"Krylov.CgLanczosShiftSolver","text":"Type for storing the vectors required by the in-place version of CG-LANCZOS-SHIFT.\n\nThe outer constructors\n\nsolver = CgLanczosShiftSolver(m, n, nshifts, 
S)\nsolver = CgLanczosShiftSolver(A, b, nshifts)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.MinresQlpSolver","page":"API","title":"Krylov.MinresQlpSolver","text":"Type for storing the vectors required by the in-place version of MINRES-QLP.\n\nThe outer constructors\n\nsolver = MinresQlpSolver(m, n, S)\nsolver = MinresQlpSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.DiomSolver","page":"API","title":"Krylov.DiomSolver","text":"Type for storing the vectors required by the in-place version of DIOM.\n\nThe outer constructors\n\nsolver = DiomSolver(m, n, memory, S)\nsolver = DiomSolver(A, b, memory = 20)\n\nmay be used in order to create these vectors. memory is set to n if the value given is larger than n.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.FomSolver","page":"API","title":"Krylov.FomSolver","text":"Type for storing the vectors required by the in-place version of FOM.\n\nThe outer constructors\n\nsolver = FomSolver(m, n, memory, S)\nsolver = FomSolver(A, b, memory = 20)\n\nmay be used in order to create these vectors. memory is set to n if the value given is larger than n.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.DqgmresSolver","page":"API","title":"Krylov.DqgmresSolver","text":"Type for storing the vectors required by the in-place version of DQGMRES.\n\nThe outer constructors\n\nsolver = DqgmresSolver(m, n, memory, S)\nsolver = DqgmresSolver(A, b, memory = 20)\n\nmay be used in order to create these vectors. memory is set to n if the value given is larger than n.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.GmresSolver","page":"API","title":"Krylov.GmresSolver","text":"Type for storing the vectors required by the in-place version of GMRES.\n\nThe outer constructors\n\nsolver = GmresSolver(m, n, memory, S)\nsolver = GmresSolver(A, b, memory = 20)\n\nmay be used in order to create these vectors. 
memory is set to n if the value given is larger than n.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.UsymlqSolver","page":"API","title":"Krylov.UsymlqSolver","text":"Type for storing the vectors required by the in-place version of USYMLQ.\n\nThe outer constructors\n\nsolver = UsymlqSolver(m, n, S)\nsolver = UsymlqSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.UsymqrSolver","page":"API","title":"Krylov.UsymqrSolver","text":"Type for storing the vectors required by the in-place version of USYMQR.\n\nThe outer constructors\n\nsolver = UsymqrSolver(m, n, S)\nsolver = UsymqrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.TricgSolver","page":"API","title":"Krylov.TricgSolver","text":"Type for storing the vectors required by the in-place version of TRICG.\n\nThe outer constructors\n\nsolver = TricgSolver(m, n, S)\nsolver = TricgSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.TrimrSolver","page":"API","title":"Krylov.TrimrSolver","text":"Type for storing the vectors required by the in-place version of TRIMR.\n\nThe outer constructors\n\nsolver = TrimrSolver(m, n, S)\nsolver = TrimrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.TrilqrSolver","page":"API","title":"Krylov.TrilqrSolver","text":"Type for storing the vectors required by the in-place version of TRILQR.\n\nThe outer constructors\n\nsolver = TrilqrSolver(m, n, S)\nsolver = TrilqrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CgsSolver","page":"API","title":"Krylov.CgsSolver","text":"Type for storing the vectors required by the in-place version of CGS.\n\nThe outer constructors\n\nsolver = CgsSolver(m, n, S)\nsolver = CgsSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.BicgstabSolver","page":"API","title":"Krylov.BicgstabSolver","text":"Type for storing the vectors required by the in-place version of BICGSTAB.\n\nThe outer constructors\n\nsolver = BicgstabSolver(m, n, S)\nsolver = BicgstabSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.BilqSolver","page":"API","title":"Krylov.BilqSolver","text":"Type for storing the vectors required by the in-place version of BILQ.\n\nThe outer constructors\n\nsolver = BilqSolver(m, n, S)\nsolver = BilqSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.QmrSolver","page":"API","title":"Krylov.QmrSolver","text":"Type for storing the vectors required by the in-place version of QMR.\n\nThe outer constructors\n\nsolver = QmrSolver(m, n, S)\nsolver = QmrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.BilqrSolver","page":"API","title":"Krylov.BilqrSolver","text":"Type for storing the vectors required by the in-place version of BILQR.\n\nThe outer constructors\n\nsolver = BilqrSolver(m, n, S)\nsolver = BilqrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CglsSolver","page":"API","title":"Krylov.CglsSolver","text":"Type for storing the vectors required by the in-place version of CGLS.\n\nThe outer 
constructors\n\nsolver = CglsSolver(m, n, S)\nsolver = CglsSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CrlsSolver","page":"API","title":"Krylov.CrlsSolver","text":"Type for storing the vectors required by the in-place version of CRLS.\n\nThe outer constructors\n\nsolver = CrlsSolver(m, n, S)\nsolver = CrlsSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CgneSolver","page":"API","title":"Krylov.CgneSolver","text":"Type for storing the vectors required by the in-place version of CGNE.\n\nThe outer constructors\n\nsolver = CgneSolver(m, n, S)\nsolver = CgneSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CrmrSolver","page":"API","title":"Krylov.CrmrSolver","text":"Type for storing the vectors required by the in-place version of CRMR.\n\nThe outer constructors\n\nsolver = CrmrSolver(m, n, S)\nsolver = CrmrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LslqSolver","page":"API","title":"Krylov.LslqSolver","text":"Type for storing the vectors required by the in-place version of LSLQ.\n\nThe outer constructors\n\nsolver = LslqSolver(m, n, S)\nsolver = LslqSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LsqrSolver","page":"API","title":"Krylov.LsqrSolver","text":"Type for storing the vectors required by the in-place version of LSQR.\n\nThe outer constructors\n\nsolver = LsqrSolver(m, n, S)\nsolver = LsqrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LsmrSolver","page":"API","title":"Krylov.LsmrSolver","text":"Type for storing the vectors required by the in-place version of LSMR.\n\nThe outer constructors\n\nsolver = LsmrSolver(m, n, S)\nsolver = LsmrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.LnlqSolver","page":"API","title":"Krylov.LnlqSolver","text":"Type for storing the vectors required by the in-place version of LNLQ.\n\nThe outer constructors\n\nsolver = LnlqSolver(m, n, S)\nsolver = LnlqSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CraigSolver","page":"API","title":"Krylov.CraigSolver","text":"Type for storing the vectors required by the in-place version of CRAIG.\n\nThe outer constructors\n\nsolver = CraigSolver(m, n, S)\nsolver = CraigSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.CraigmrSolver","page":"API","title":"Krylov.CraigmrSolver","text":"Type for storing the vectors required by the in-place version of CRAIGMR.\n\nThe outer constructors\n\nsolver = CraigmrSolver(m, n, S)\nsolver = CraigmrSolver(A, b)\n\nmay be used in order to create these vectors.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.GpmrSolver","page":"API","title":"Krylov.GpmrSolver","text":"Type for storing the vectors required by the in-place version of GPMR.\n\nThe outer constructors\n\nsolver = GpmrSolver(m, n, memory, S)\nsolver = GpmrSolver(A, b, memory = 20)\n\nmay be used in order to create these vectors. 
memory is set to n + m if the value given is larger than n + m.\n\n\n\n\n\n","category":"type"},{"location":"api/#Krylov.FgmresSolver","page":"API","title":"Krylov.FgmresSolver","text":"Type for storing the vectors required by the in-place version of FGMRES.\n\nThe outer constructors\n\nsolver = FgmresSolver(m, n, memory, S)\nsolver = FgmresSolver(A, b, memory = 20)\n\nmay be used in order to create these vectors. memory is set to n if the value given is larger than n.\n\n\n\n\n\n","category":"type"},{"location":"api/#Utilities","page":"API","title":"Utilities","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"Krylov.roots_quadratic\nKrylov.sym_givens\nKrylov.to_boundary\nKrylov.vec2str\nKrylov.ktypeof\nKrylov.kzeros\nKrylov.kones\nKrylov.vector_to_matrix\nKrylov.matrix_to_vector","category":"page"},{"location":"api/#Krylov.roots_quadratic","page":"API","title":"Krylov.roots_quadratic","text":"roots = roots_quadratic(q₂, q₁, q₀; nitref)\n\nFind the real roots of the quadratic\n\nq(x) = q₂ x² + q₁ x + q₀,\n\nwhere q₂, q₁ and q₀ are real. Care is taken to avoid numerical cancellation. Optionally, nitref steps of iterative refinement may be performed to improve accuracy. By default, nitref=1.\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.sym_givens","page":"API","title":"Krylov.sym_givens","text":"(c, s, ρ) = sym_givens(a, b)\n\nNumerically stable symmetric Givens reflection. Given a and b reals, return (c, s, ρ) such that\n\n[ c s ] [ a ] = [ ρ ]\n[ s -c ] [ b ] = [ 0 ].\n\n\n\n\n\nNumerically stable symmetric Givens reflection. Given a and b complexes, return (c, s, ρ) with c real and (s, ρ) complexes such that\n\n[ c s ] [ a ] = [ ρ ]\n[ s̅ -c ] [ b ] = [ 0 ].\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.to_boundary","page":"API","title":"Krylov.to_boundary","text":"roots = to_boundary(n, x, d, radius; flip, xNorm2, dNorm2)\n\nGiven a trust-region radius radius, a vector x lying inside the trust-region and a direction d, return σ1 and σ2 such that\n\n‖x + σi d‖ = radius, i = 1, 2\n\nin the Euclidean norm. n is the length of vectors x and d. If known, ‖x‖² and ‖d‖² may be supplied with xNorm2 and dNorm2.\n\nIf flip is set to true, σ1 and σ2 are computed such that\n\n‖x - σi d‖ = radius, i = 1, 2.\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.vec2str","page":"API","title":"Krylov.vec2str","text":"s = vec2str(x; ndisp)\n\nDisplay an array in the form\n\n[ -3.0e-01 -5.1e-01 1.9e-01 ... 
-2.3e-01 -4.4e-01 2.4e-01 ]\n\nwith (ndisp - 1)/2 elements on each side.\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.ktypeof","page":"API","title":"Krylov.ktypeof","text":"S = ktypeof(v)\n\nReturn the most relevant storage type S based on the type of v.\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.kzeros","page":"API","title":"Krylov.kzeros","text":"v = kzeros(S, n)\n\nCreate a vector of storage type S of length n composed only of zeros.\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.kones","page":"API","title":"Krylov.kones","text":"v = kones(S, n)\n\nCreate a vector of storage type S of length n composed only of ones.\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.vector_to_matrix","page":"API","title":"Krylov.vector_to_matrix","text":"M = vector_to_matrix(S)\n\nReturn the dense matrix storage type M related to the dense vector storage type S.\n\n\n\n\n\n","category":"function"},{"location":"api/#Krylov.matrix_to_vector","page":"API","title":"Krylov.matrix_to_vector","text":"S = matrix_to_vector(M)\n\nReturn the dense vector storage type S related to the dense matrix storage type M.\n\n\n\n\n\n","category":"function"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"# Thanks Morten Piibeleht for the hack with the tables!","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"","category":"page"},{"location":"storage/#storage-requirements","page":"Storage requirements","title":"Storage requirements","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"This section provides the storage requirements of all Krylov methods available in Krylov.jl.","category":"page"},{"location":"storage/#Notation","page":"Storage requirements","title":"Notation","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"We denote by m and n the number of rows and columns of the linear problem. The memory parameter of DIOM, FOM, DQGMRES, GMRES, FGMRES and GPMR is k. The number of shifts of CG-LANCZOS-SHIFT is p.","category":"page"},{"location":"storage/#Theoretical-storage-requirements","page":"Storage requirements","title":"Theoretical storage requirements","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"The following tables provide the number of coefficients that must be allocated for each Krylov method. The coefficients have the same type as those that compose the linear problem we seek to solve. 
Each table summarizes the storage requirements of Krylov methods recommended for a specific linear problem.","category":"page"},{"location":"storage/#Hermitian-positive-definite-linear-systems","page":"Storage requirements","title":"Hermitian positive definite linear systems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods CG CR CAR CG-LANCZOS CG-LANCZOS-SHIFT\nStorage 4n 5n 7n 5n 3n + 2np + 5p","category":"page"},{"location":"storage/#Hermitian-indefinite-linear-systems","page":"Storage requirements","title":"Hermitian indefinite linear systems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods SYMMLQ MINRES MINRES-QLP MINARES\nStorage 5n 6n 6n 8n","category":"page"},{"location":"storage/#Non-Hermitian-square-linear-systems","page":"Storage requirements","title":"Non-Hermitian square linear systems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods CGS BICGSTAB BiLQ QMR\nStorage 6n 6n 8n 9n","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods DIOM DQGMRES\nStorage n(2k+1) + 2k - 1 n(2k+2) + 3k + 1","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods FOM GMRES FGMRES\nStorage n(2+k) + 2k + k(k+1)/2 n(2+k) + 3k + k(k+1)/2 n(2+2k) + 3k + k(k+1)/2","category":"page"},{"location":"storage/#Least-norm-problems","page":"Storage requirements","title":"Least-norm problems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods USYMLQ CGNE CRMR LNLQ CRAIG CRAIGMR\nStorage 5n + 3m 3n + 2m 3n + 2m 3n + 4m 3n + 4m 4n + 5m","category":"page"},{"location":"storage/#Least-squares-problems","page":"Storage requirements","title":"Least-squares problems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods USYMQR CGLS CRLS LSLQ LSQR LSMR\nStorage 6n + 3m 3n + 2m 4n + 3m 4n + 2m 4n + 2m 5n + 2m","category":"page"},{"location":"storage/#Adjoint-systems","page":"Storage requirements","title":"Adjoint systems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods BiLQR TriLQR\nStorage 11n 6m + 5n","category":"page"},{"location":"storage/#Saddle-point-and-Hermitian-quasi-definite-systems","page":"Storage requirements","title":"Saddle-point and Hermitian quasi-definite systems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Methods TriCG TriMR\nStorage 6n + 6m 8n + 8m","category":"page"},{"location":"storage/#Generalized-saddle-point-and-non-Hermitian-partitioned-systems","page":"Storage requirements","title":"Generalized saddle-point and non-Hermitian partitioned systems","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Method GPMR\nStorage (2+k)(n+m) + 2k^2 + 11k","category":"page"},
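{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"The formulas above are easy to evaluate directly. As a minimal sketch (the sizes n, m and the memory parameter k below are illustrative, not prescribed by Krylov.jl), the storage of GMRES and GPMR in Float64 can be estimated with:\n\nn = 10_000\nm = 10_000\nk = 20\nncoefs_gmres = n * (2 + k) + 3k + k * (k + 1) ÷ 2 # GMRES entry of the table above\nncoefs_gpmr = (2 + k) * (n + m) + 2 * k^2 + 11k # GPMR entry of the table above\nnbytes_gmres = 8 * ncoefs_gmres # 8 bytes per Float64 coefficient\nBase.format_bytes(nbytes_gmres)","category":"page"},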
{"location":"storage/#Practical-storage-requirements","page":"Storage requirements","title":"Practical storage requirements","text":"","category":"section"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Each method has its own KrylovSolver that contains all the storage needed by the method. In the REPL, the size in bytes of each attribute and the total amount of memory allocated by the solver are displayed when we show a KrylovSolver.","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"using Krylov\n\nm = 5000\nn = 12000\nA = rand(Float64, m, n)\nb = rand(Float64, m)\nsolver = LsmrSolver(A, b)\nshow(stdout, solver, show_stats=false)","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"If we want the total number of bytes used by the solver, we can call nbytes = sizeof(solver).","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"nbytes = sizeof(solver)","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Thereafter, we can use Base.format_bytes(nbytes) to recover what is displayed in the REPL.","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Base.format_bytes(nbytes)","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"To verify that we match the theoretical results, we just need to multiply the storage requirement of a method by the number of bytes associated with the precision of the linear problem. For instance, we need 4 bytes for the precision Float32, 8 bytes for precisions Float64 and ComplexF32, and 16 bytes for the precision ComplexF64.","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"FC = Float64 # precision of the least-squares problem\nncoefs_lsmr = 5*n + 2*m # number of coefficients\nnbytes_lsmr = sizeof(FC) * ncoefs_lsmr # number of bytes","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"Therefore, you can check that you have enough memory in RAM to allocate a KrylovSolver.","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"free_nbytes = Sys.free_memory()\nBase.format_bytes(free_nbytes) # Total free memory in RAM in bytes.","category":"page"},{"location":"storage/","page":"Storage requirements","title":"Storage requirements","text":"note: Note\nBeyond having faster operations, using low precisions, such as single precision, makes it possible to store more coefficients in RAM and solve larger linear problems.\nIn the file test_allocations.jl, we use the macro @allocated to test that we match the expected storage requirement of each method with a tolerance of 2%.","category":"page"},{"location":"examples/craig/","page":"CRAIG","title":"CRAIG","text":"using Krylov\nusing LinearAlgebra, Printf\n\nm = 5\nn = 8\nλ = 1.0e-3\nA = rand(m, n)\nb = A * ones(n)\nxy_exact = [A λ*I] \\ b # In Julia, this is the min-norm solution!\n\n(x, y, stats) = craig(A, b, λ=λ, atol=0.0, rtol=1.0e-20, verbose=1)\nshow(stats)\n\n# Check that we have a minimum-norm solution.\n# When λ > 0 we solve min ‖(x,s)‖ s.t. 
Ax + λs = b, and we get s = λy.\n@printf(\"Primal feasibility: %7.1e\\n\", norm(b - A * x - λ^2 * y) / norm(b))\n@printf(\"Dual feasibility: %7.1e\\n\", norm(x - A' * y) / norm(x))\n@printf(\"Error in x: %7.1e\\n\", norm(x - xy_exact[1:n]) / norm(xy_exact[1:n]))\nif λ > 0.0\n @printf(\"Error in y: %7.1e\\n\", norm(λ * y - xy_exact[n+1:n+m]) / norm(xy_exact[n+1:n+m]))\nend","category":"page"},{"location":"examples/cgne/","page":"CGNE","title":"CGNE","text":"using Krylov, HarwellRutherfordBoeing, SuiteSparseMatrixCollection\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"wm2\")\npath = fetch_ssmc(matrix, format=\"RB\")\n\nA = RutherfordBoeingData(joinpath(path[1], \"$(matrix.name[1]).rb\")).data\n(m, n) = size(A)\n@printf(\"System size: %d rows and %d columns\\n\", m, n)\n\nx_exact = A' * ones(m)\nx_exact_norm = norm(x_exact)\nx_exact /= x_exact_norm\nb = A * x_exact\n(x, stats) = cgne(A, b)\nshow(stats)\nresid = norm(A * x - b) / norm(b)\n@printf(\"CGNE: Relative residual: %7.1e\\n\", resid)\n@printf(\"CGNE: ‖x - x*‖₂: %7.1e\\n\", norm(x - x_exact))","category":"page"},{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"# Least-squares problems","category":"page"},{"location":"solvers/ls/#CGLS","page":"Least-squares problems","title":"CGLS","text":"","category":"section"},{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"cgls\ncgls!","category":"page"},{"location":"solvers/ls/#Krylov.cgls","page":"Least-squares problems","title":"Krylov.cgls","text":"(x, stats) = cgls(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false, radius::T=zero(T),\n λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),\n itmax::Int=0, timemax::Float64=Inf,\n verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nSolve the regularized linear least-squares problem\n\nminimize ‖b - Ax‖₂² + λ‖x‖₂²\n\nof size m × n using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations\n\n(AᴴA + λI) x = Aᴴb\n\nbut is more stable.\n\nCGLS produces monotonic residuals ‖r‖₂ but not optimality residuals ‖Aᴴr‖₂. It is formally equivalent to LSQR, though it can be slightly less accurate, but it is simpler to implement.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nradius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). 
Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nM. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards, 49(6), pp. 409–436, 1952.\nA. Björck, T. Elfving and Z. Strakos, Stability of Conjugate Gradient and Lanczos Methods for Linear Least Squares Problems, SIAM Journal on Matrix Analysis and Applications, 19(3), pp. 720–736, 1998.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#Krylov.cgls!","page":"Least-squares problems","title":"Krylov.cgls!","text":"solver = cgls!(solver::CglsSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of cgls.\n\nSee CglsSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},
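{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"A minimal usage sketch of cgls on a random over-determined problem (the data and the value of λ below are illustrative):\n\nusing Krylov, LinearAlgebra\nm, n = 100, 50\nA = rand(m, n)\nb = rand(m)\nλ = 1.0e-4\n(x, stats) = cgls(A, b, λ=λ)\nnorm(A' * (b - A * x) - λ * x) # optimality residual of the regularized problem\n\nWith λ=0 (the default), the same call solves the unregularized problem.","category":"page"},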
{"location":"solvers/ls/#CRLS","page":"Least-squares problems","title":"CRLS","text":"","category":"section"},{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"crls\ncrls!","category":"page"},{"location":"solvers/ls/#Krylov.crls","page":"Least-squares problems","title":"Krylov.crls","text":"(x, stats) = crls(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false, radius::T=zero(T),\n λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),\n itmax::Int=0, timemax::Float64=Inf,\n verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nSolve the linear least-squares problem\n\nminimize ‖b - Ax‖₂² + λ‖x‖₂²\n\nof size m × n using the Conjugate Residuals (CR) method. This method is equivalent to applying MINRES to the normal equations\n\n(AᴴA + λI) x = Aᴴb.\n\nThis implementation recurs the residual r := b - Ax.\n\nCRLS produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂. It is formally equivalent to LSMR, though it can be substantially less accurate, but it is simpler to implement.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nradius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nD. C.-L. Fong, Minimum-Residual Methods for Sparse Least-Squares using Golub-Kahan Bidiagonalization, Ph.D. Thesis, Stanford University, 2011.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#Krylov.crls!","page":"Least-squares problems","title":"Krylov.crls!","text":"solver = crls!(solver::CrlsSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of crls.\n\nSee CrlsSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#LSLQ","page":"Least-squares problems","title":"LSLQ","text":"","category":"section"},{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"lslq\nlslq!","category":"page"},{"location":"solvers/ls/#Krylov.lslq","page":"Least-squares problems","title":"Krylov.lslq","text":"(x, stats) = lslq(A, b::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n window::Int=5, transfer_to_lsqr::Bool=false,\n sqd::Bool=false, λ::T=zero(T),\n σ::T=zero(T), etol::T=√eps(T),\n utol::T=√eps(T), btol::T=√eps(T),\n conlim::T=1/√eps(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nSolve the regularized linear least-squares problem\n\nminimize ‖b - Ax‖₂² + λ²‖x‖₂²\n\nof size m × n using the LSLQ method, where λ ≥ 0 is a regularization parameter. LSLQ is formally equivalent to applying SYMMLQ to the normal equations\n\n(AᴴA + λ²I) x = Aᴴb\n\nbut is more stable.\n\nIf λ > 0, we solve the symmetric and quasi-definite system\n\n[ E A ] [ r ] [ b ]\n[ Aᴴ -λ²F ] [ x ] = [ 0 ],\n\nwhere E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.\n\nThe system above represents the optimality conditions of\n\nminimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.\n\nFor a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSLQ is then equivalent to applying SYMMLQ to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).\n\nIf λ = 0, we solve the symmetric and indefinite system\n\n[ E A ] [ r ] [ b ]\n[ Aᴴ 0 ] [ x ] = [ 0 ].\n\nThe system above represents the optimality conditions of\n\nminimize ‖b - Ax‖²_E⁻¹.\n\nIn this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. 
r can be recovered by computing E⁻¹(b - Ax).\n\nMain features\n\nthe solution estimate is updated along orthogonal directions\nthe norm of the solution estimate ‖xᴸₖ‖₂ is increasing\nthe error ‖eₖ‖₂ := ‖xᴸₖ - x*‖₂ is decreasing\nit is possible to transition cheaply from the LSLQ iterate to the LSQR iterate if there is an advantage (there always is in terms of error)\nif A is rank deficient, identify the minimum least-squares solution\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;\nldiv: define whether the preconditioners use ldiv! or mul!;\nwindow: number of iterations used to accumulate a lower bound on the error;\ntransfer_to_lsqr: transfer from the LSLQ point to the LSQR point, when it exists. The transfer is based on the residual norm;\nsqd: if true, set λ=1 for Hermitian quasi-definite systems;\nλ: regularization parameter;\nσ: strict lower bound on the smallest positive singular value σₘᵢₙ such as σ = (1-10⁻⁷)σₘᵢₙ;\netol: stopping tolerance based on the lower bound on the error;\nutol: stopping tolerance based on the upper bound on the error;\nbtol: stopping tolerance used to detect zero-residual problems;\nconlim: limit on the estimated condition number of A beyond which the solution will be abandoned;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). 
Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a LSLQStats structure.\nstats.err_lbnds is a vector of lower bounds on the LQ error; the vector is empty if window is set to zero\nstats.err_ubnds_lq is a vector of upper bounds on the LQ error; the vector is empty if σ is left at zero\nstats.err_ubnds_cg is a vector of upper bounds on the CG error; the vector is empty if σ is left at zero\nstats.error_with_bnd is a boolean indicating whether there was an error in the upper bounds computation (cancellation errors, too large σ ...)\n\nStopping conditions\n\nThe iterations stop as soon as one of the following conditions holds true:\n\nthe optimality residual is sufficiently small (stats.status = \"found approximate minimum least-squares solution\") in the sense that either\n‖Aᴴr‖ / (‖A‖ ‖r‖) ≤ atol, or\n1 + ‖Aᴴr‖ / (‖A‖ ‖r‖) ≤ 1\nan approximate zero-residual solution has been found (stats.status = \"found approximate zero-residual solution\") in the sense that either\n‖r‖ / ‖b‖ ≤ btol + atol ‖A‖ * ‖xᴸ‖ / ‖b‖, or\n1 + ‖r‖ / ‖b‖ ≤ 1\nthe estimated condition number of A is too large in the sense that either\n1/cond(A) ≤ 1/conlim (stats.status = \"condition number exceeds tolerance\"), or\n1 + 1/cond(A) ≤ 1 (stats.status = \"condition number seems too large for this machine\")\nthe lower bound on the LQ forward error is less than etol * ‖xᴸ‖\nthe upper bound on the CG forward error is less than utol * ‖xᶜ‖\n\nReferences\n\nR. Estrin, D. Orban and M. A. Saunders, Euclidean-norm error bounds for SYMMLQ and CG, SIAM Journal on Matrix Analysis and Applications, 40(1), pp. 235–253, 2019.\nR. Estrin, D. Orban and M. A. Saunders, LSLQ: An Iterative Method for Linear Least-Squares with an Error Minimization Property, SIAM Journal on Matrix Analysis and Applications, 40(1), pp. 254–275, 2019.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#Krylov.lslq!","page":"Least-squares problems","title":"Krylov.lslq!","text":"solver = lslq!(solver::LslqSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of lslq.\n\nSee LslqSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},
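{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"A minimal usage sketch of lslq on a random least-squares problem (the data below is illustrative); transfer_to_lsqr asks for the final transfer to the LSQR point documented above:\n\nusing Krylov, LinearAlgebra\nm, n = 100, 50\nA = rand(m, n)\nb = rand(m)\n(x, stats) = lslq(A, b, transfer_to_lsqr=true)\nnorm(A' * (b - A * x)) # optimality residual ‖Aᴴr‖, small at a least-squares solution","category":"page"},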
{"location":"solvers/ls/#LSQR","page":"Least-squares problems","title":"LSQR","text":"","category":"section"},{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"lsqr\nlsqr!","category":"page"},{"location":"solvers/ls/#Krylov.lsqr","page":"Least-squares problems","title":"Krylov.lsqr","text":"(x, stats) = lsqr(A, b::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n window::Int=5, sqd::Bool=false, λ::T=zero(T),\n radius::T=zero(T), etol::T=√eps(T),\n axtol::T=√eps(T), btol::T=√eps(T),\n conlim::T=1/√eps(T), atol::T=zero(T),\n rtol::T=zero(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nSolve the regularized linear least-squares problem\n\nminimize ‖b - Ax‖₂² + λ²‖x‖₂²\n\nof size m × n using the LSQR method, where λ ≥ 0 is a regularization parameter. LSQR is formally equivalent to applying CG to the normal equations\n\n(AᴴA + λ²I) x = Aᴴb\n\n(and therefore to CGLS) but is more stable.\n\nLSQR produces monotonic residuals ‖r‖₂ but not optimality residuals ‖Aᴴr‖₂. It is formally equivalent to CGLS, though it can be slightly more accurate.\n\nIf λ > 0, LSQR solves the symmetric and quasi-definite system\n\n[ E A ] [ r ] [ b ]\n[ Aᴴ -λ²F ] [ x ] = [ 0 ],\n\nwhere E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.\n\nThe system above represents the optimality conditions of\n\nminimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.\n\nFor a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSQR is then equivalent to applying CG to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).\n\nIf λ = 0, we solve the symmetric and indefinite system\n\n[ E A ] [ r ] [ b ]\n[ Aᴴ 0 ] [ x ] = [ 0 ].\n\nThe system above represents the optimality conditions of\n\nminimize ‖b - Ax‖²_E⁻¹.\n\nIn this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;\nldiv: define whether the preconditioners use ldiv! or mul!;\nwindow: number of iterations used to accumulate a lower bound on the error;\nsqd: if true, set λ=1 for Hermitian quasi-definite systems;\nλ: regularization parameter;\nradius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;\netol: stopping tolerance based on the lower bound on the error;\naxtol: tolerance on the backward error;\nbtol: stopping tolerance used to detect zero-residual problems;\nconlim: limit on the estimated condition number of A beyond which the solution will be abandoned;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nC. C. Paige and M. A. Saunders, LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares, ACM Transactions on Mathematical Software, 8(1), pp. 43–71, 1982.\n\n\n\n\n\n","category":"function"},
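{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"A minimal usage sketch of lsqr with regularization (the data and tolerances below are illustrative, and the keyword arguments are those documented above):\n\nusing Krylov, LinearAlgebra\nm, n = 100, 50\nA = rand(m, n)\nb = rand(m)\n(x, stats) = lsqr(A, b, λ=1.0e-3, axtol=1.0e-8)\nstats.solved\n\nWith sqd=true, λ is set to 1 and the call corresponds to the Hermitian quasi-definite system shown above.","category":"page"},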
{"location":"solvers/ls/#Krylov.lsqr!","page":"Least-squares problems","title":"Krylov.lsqr!","text":"solver = lsqr!(solver::LsqrSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of lsqr.\n\nSee LsqrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#LSMR","page":"Least-squares problems","title":"LSMR","text":"","category":"section"},{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"lsmr\nlsmr!","category":"page"},{"location":"solvers/ls/#Krylov.lsmr","page":"Least-squares problems","title":"Krylov.lsmr","text":"(x, stats) = lsmr(A, b::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n window::Int=5, sqd::Bool=false, λ::T=zero(T),\n radius::T=zero(T), etol::T=√eps(T),\n axtol::T=√eps(T), btol::T=√eps(T),\n conlim::T=1/√eps(T), atol::T=zero(T),\n rtol::T=zero(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nSolve the regularized linear least-squares problem\n\nminimize ‖b - Ax‖₂² + λ²‖x‖₂²\n\nof size m × n using the LSMR method, where λ ≥ 0 is a regularization parameter. LSMR is formally equivalent to applying MINRES to the normal equations\n\n(AᴴA + λ²I) x = Aᴴb\n\n(and therefore to CRLS) but is more stable.\n\nLSMR produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂. It is formally equivalent to CRLS, though it can be substantially more accurate.\n\nLSMR can also be used to find a null vector of a singular matrix A by solving the problem min ‖Aᴴx - b‖ with any nonzero vector b. At a minimizer, the residual vector r = b - Aᴴx will satisfy Ar = 0.\n\nIf λ > 0, we solve the symmetric and quasi-definite system\n\n[ E A ] [ r ] [ b ]\n[ Aᴴ -λ²F ] [ x ] = [ 0 ],\n\nwhere E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.\n\nThe system above represents the optimality conditions of\n\nminimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.\n\nFor a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSMR is then equivalent to applying MINRES to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).\n\nIf λ = 0, we solve the symmetric and indefinite system\n\n[ E A ] [ r ] [ b ]\n[ Aᴴ 0 ] [ x ] = [ 0 ].\n\nThe system above represents the optimality conditions of\n\nminimize ‖b - Ax‖²_E⁻¹.\n\nIn this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;\nldiv: define whether the preconditioners use ldiv! or mul!;\nwindow: number of iterations used to accumulate a lower bound on the error;\nsqd: if true, set λ=1 for Hermitian quasi-definite systems;\nλ: regularization parameter;\nradius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. 
Useful to compute a step in a trust-region method for optimization;\netol: stopping tolerance based on the lower bound on the error;\naxtol: tolerance on the backward error;\nbtol: stopping tolerance used to detect zero-residual problems;\nconlim: limit on the estimated condition number of A beyond which the solution will be abandoned;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a LsmrStats structure.\n\nReference\n\nD. C.-L. Fong and M. A. Saunders, LSMR: An Iterative Algorithm for Sparse Least Squares Problems, SIAM Journal on Scientific Computing, 33(5), pp. 2950–2971, 2011.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#Krylov.lsmr!","page":"Least-squares problems","title":"Krylov.lsmr!","text":"solver = lsmr!(solver::LsmrSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of lsmr.\n\nSee LsmrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#USYMQR","page":"Least-squares problems","title":"USYMQR","text":"","category":"section"},{"location":"solvers/ls/","page":"Least-squares problems","title":"Least-squares problems","text":"usymqr\nusymqr!","category":"page"},{"location":"solvers/ls/#Krylov.usymqr","page":"Least-squares problems","title":"Krylov.usymqr","text":"(x, stats) = usymqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};\n atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = usymqr(A, b, c, x0::AbstractVector; kwargs...)\n\nUSYMQR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nUSYMQR solves the linear least-squares problem min ‖b - Ax‖² of size m × n. USYMQR solves Ax = b if it is consistent.\n\nUSYMQR is based on the orthogonal tridiagonalization process and requires two initial nonzero vectors b and c. The vector c is only used to initialize the process and a default value can be b or Aᴴb depending on the shape of A. The residual norm ‖b - Ax‖ decreases monotonically in USYMQR. It is considered a generalization of MINRES.\n\nIt can also be applied to under-determined and over-determined problems. USYMQR finds the minimum-norm solution if the problem is inconsistent.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m;\nc: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. 
If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nM. A. Saunders, H. D. Simon, and E. L. Yip, Two Conjugate-Gradient-Type Methods for Unsymmetric Linear Equations, SIAM Journal on Numerical Analysis, 25(4), pp. 927–940, 1988.\nA. Buttari, D. Orban, D. Ruiz and D. Titley-Peloquin, A tridiagonalization method for symmetric saddle-point and quasi-definite systems, SIAM Journal on Scientific Computing, 41(5), pp. 409–432, 2019.\nA. Montoison and D. Orban, BiLQ: An Iterative Method for Nonsymmetric Linear Systems with a Quasi-Minimum Error Property, SIAM Journal on Matrix Analysis and Applications, 41(3), pp. 1145–1166, 2020.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ls/#Krylov.usymqr!","page":"Least-squares problems","title":"Krylov.usymqr!","text":"solver = usymqr!(solver::UsymqrSolver, A, b, c; kwargs...)\nsolver = usymqr!(solver::UsymqrSolver, A, b, c, x0; kwargs...)\n\nwhere kwargs are keyword arguments of usymqr.\n\nSee UsymqrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"preconditioners/#preconditioners","page":"Preconditioners","title":"Preconditioners","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"The solvers in Krylov.jl support preconditioners, i.e., transformations that modify a linear system Ax = b into an equivalent form that may yield faster convergence in finite-precision arithmetic. Preconditioning can be used to reduce the condition number of the problem or cluster its eigenvalues or singular values for instance.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"The design of preconditioners is highly dependent on the origin of the problem and most preconditioners need to take application-dependent information and structure into account. Specialized preconditioners generally outperform generic preconditioners such as incomplete factorizations.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"The construction of a preconditioner necessitates trade-offs because we need to apply it at least once per iteration within a Krylov method. 
Hence, a preconditioner must be constructed such that it is cheap to apply, while also capturing the characteristics of the original system in some sense.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"There exist three variants of preconditioning:","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Left preconditioning Two-sided preconditioning Right preconditioning\nPₗ⁻¹Ax = Pₗ⁻¹b Pₗ⁻¹APᵣ⁻¹y = Pₗ⁻¹b with x = Pᵣ⁻¹y APᵣ⁻¹y = b with x = Pᵣ⁻¹y","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"where Pₗ and Pᵣ are square and nonsingular.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Left preconditioning preserves the error xₖ - x* whereas right preconditioning keeps the residual b - Axₖ invariant. Two-sided preconditioning is the only variant that allows one to preserve the hermiticity of a linear system.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"note: Note\nBecause det(P⁻¹A - λI) = det(A - λP) det(P⁻¹) = det(AP⁻¹ - λI), the eigenvalues of P⁻¹A and AP⁻¹ are identical. If P = LLᴴ, L⁻¹AL⁻ᴴ also has the same spectrum.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"In Krylov.jl, we call Pₗ⁻¹ and Pᵣ⁻¹ the preconditioners and we assume that we can apply them with the operation y ← P⁻¹ * x. It is also common to call Pₗ and Pᵣ the preconditioners if the equivalent operation y ← P \ x is available. Krylov.jl supports both approaches thanks to the argument ldiv of the Krylov solvers.","category":"page"},
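{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"As a minimal sketch of the two conventions (the Jacobi preconditioner below is an illustrative choice, not a Krylov.jl default), a diagonal preconditioner can be passed either as its inverse with ldiv=false or as the preconditioner itself with ldiv=true:\n\nusing Krylov, LinearAlgebra\nn = 100\nB = rand(n, n)\nA = B' * B + n * I # Hermitian positive-definite test matrix\nb = rand(n)\nD = Diagonal(diag(A)) # Jacobi preconditioner P\n(x, stats) = cg(A, b, M=inv(D)) # pass P⁻¹, applied with mul! (ldiv=false is the default)\n(x, stats) = cg(A, b, M=D, ldiv=true) # pass P, applied with ldiv!","category":"page"},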
Krylov.jl supports both approaches thanks to the argument ldiv of the Krylov solvers.","category":"page"},{"location":"preconditioners/#How-to-use-preconditioners-in-Krylov.jl?","page":"Preconditioners","title":"How to use preconditioners in Krylov.jl?","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"info: Info\nA preconditioner only needs to support the operation mul!(y, P⁻¹, x) when ldiv=false or ldiv!(y, P, x) when ldiv=true to be used in Krylov.jl.\nThe default value of a preconditioner in Krylov.jl is the identity operator I.","category":"page"},{"location":"preconditioners/#Square-non-Hermitian-linear-systems","page":"Preconditioners","title":"Square non-Hermitian linear systems","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Methods concerned: CGS, BiCGSTAB, DQGMRES, GMRES, FGMRES, DIOM and FOM.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"A Krylov method dedicated to non-Hermitian linear systems allows the three variants of preconditioning.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Preconditioners P_ell^-1 P_ell P_r^-1 P_r\nArguments M with ldiv=false M with ldiv=true N with ldiv=false N with ldiv=true","category":"page"},
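{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"For example, here is a minimal sketch of two-sided preconditioning with GMRES, where opM and opN are assumed to be user-defined operators that model P_ell^-1 and P_r^-1:","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using Krylov\n# opM and opN are assumed to model Pℓ⁻¹ and Pᵣ⁻¹ and to support mul!\nx, stats = gmres(A, b, M=opM, N=opN) # two-sided preconditioning","category":"page"},{"location":"preconditioners/#Hermitian-linear-systems","page":"Preconditioners","title":"Hermitian linear systems","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Methods concerned: SYMMLQ, CG, CG-LANCZOS, CG-LANCZOS-SHIFT, CR, CAR, MINRES, MINRES-QLP and MINARES.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"When A is Hermitian, we can only use centered preconditioning L^-1AL^-Hy = L^-1b with x = L^-Hy. Centered preconditioning is a special case of two-sided preconditioning with P_ell = L = P_r^H that maintains hermiticity. 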
However, there is no need to specify L and one may specify P_c = LL^H or its inverse directly.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Preconditioners P_c^-1 P_c\nArguments M with ldiv=false M with ldiv=true","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"warning: Warning\nThe preconditioner M must be Hermitian and positive definite.","category":"page"},{"location":"preconditioners/#Linear-least-squares-problems","page":"Preconditioners","title":"Linear least-squares problems","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Methods concerned: CGLS, CRLS, LSLQ, LSQR and LSMR.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Formulation Without preconditioning With preconditioning\nleast-squares problem min tfrac12 b - Ax^2_2 min tfrac12 b - Ax^2_E^-1\nNormal equation A^HAx = A^Hb A^HE^-1Ax = A^HE^-1b\nAugmented system beginbmatrix I A A^H 0 endbmatrix beginbmatrix r x endbmatrix = beginbmatrix b 0 endbmatrix beginbmatrix E A A^H 0 endbmatrix beginbmatrix r x endbmatrix = beginbmatrix b 0 endbmatrix","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"LSLQ, LSQR and LSMR also handle regularized least-squares problems.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Formulation Without preconditioning With preconditioning\nleast-squares problem min tfrac12 b - Ax^2_2 + tfrac12 lambda^2 x^2_2 min tfrac12 b - Ax^2_E^-1 + tfrac12 lambda^2 x^2_F\nNormal equation (A^HA + lambda^2 I)x = A^Hb (A^HE^-1A + lambda^2 F)x = A^HE^-1b\nAugmented system beginbmatrix I A A^H -lambda^2 I endbmatrix beginbmatrix r x endbmatrix = beginbmatrix b 0 endbmatrix beginbmatrix E A A^H -lambda^2 F endbmatrix beginbmatrix r x endbmatrix = beginbmatrix b 0 endbmatrix","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Preconditioners E^-1 E F^-1 F\nArguments M with ldiv=false M with ldiv=true N with ldiv=false N with ldiv=true","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"warning: Warning\nThe preconditioners M and N must be Hermitian and positive definite.","category":"page"},
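{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"As an illustration, here is a sketch of a preconditioned least-squares solve with LSMR, where opE⁻¹ and opF⁻¹ are assumed to be user-defined operators that model E^-1 and F^-1:","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using Krylov\n# opE⁻¹ and opF⁻¹ are assumed to model Hermitian positive-definite E⁻¹ and F⁻¹\nx, stats = lsmr(A, b, M=opE⁻¹, N=opF⁻¹)","category":"page"},{"location":"preconditioners/#Linear-least-norm-problems","page":"Preconditioners","title":"Linear least-norm problems","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Methods concerned: CGNE, CRMR, LNLQ, CRAIG and CRAIGMR.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Formulation Without preconditioning With preconditioning\nminimum-norm problem min tfrac12 x^2_2textstAx = b min tfrac12 x^2_FtextstAx = b\nNormal equation AA^Hy = btextwithx = A^Hy AF^-1A^Hy = btextwithx = F^-1A^Hy\nAugmented system beginbmatrix -I A^H phantom-A 0 endbmatrix beginbmatrix x y endbmatrix = beginbmatrix 0 b endbmatrix beginbmatrix -F A^H phantom-A 0 endbmatrix beginbmatrix x y endbmatrix = beginbmatrix 0 b endbmatrix","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"LNLQ, CRAIG and CRAIGMR also handle penalized minimum-norm 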
problems.","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Formulation Without preconditioning With preconditioning\nminimum-norm problem min tfrac12 x^2_2 + tfrac12 y^2_2textstAx + lambda^2 y = b min tfrac12 x^2_F + tfrac12 y^2_EtextstAx + lambda^2 Ey = b\nNormal equation (AA^H + lambda^2 I)y = btextwithx = A^Hy (AF^-1A^H + lambda^2 E)y = btextwithx = F^-1A^Hy\nAugmented system beginbmatrix -I A^H phantom-A lambda^2 I endbmatrix beginbmatrix x y endbmatrix = beginbmatrix 0 b endbmatrix beginbmatrix -F A^H phantom-A lambda^2 E endbmatrix beginbmatrix x y endbmatrix = beginbmatrix 0 b endbmatrix","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Preconditioners E^-1 E F^-1 F\nArguments M with ldiv=false M with ldiv=true N with ldiv=false N with ldiv=true","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"warning: Warning\nThe preconditioners M and N must be hermitian and positive definite.","category":"page"},{"location":"preconditioners/#Saddle-point-and-symmetric-quasi-definite-systems","page":"Preconditioners","title":"Saddle-point and symmetric quasi-definite systems","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"TriCG and TriMR can take advantage of the structure of Hermitian systems Kz = d with the 2x2 block structure","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":" beginbmatrix tau E phantom-A A^H nu F endbmatrix beginbmatrix x y endbmatrix = beginbmatrix b c endbmatrix","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Preconditioners E^-1 E F^-1 F\nArguments M with ldiv=false M with ldiv=true N with ldiv=false N with ldiv=true","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"warning: Warning\nThe preconditioners M and N must be hermitian and positive definite.","category":"page"},{"location":"preconditioners/#Generalized-saddle-point-and-unsymmetric-partitioned-systems","page":"Preconditioners","title":"Generalized saddle-point and unsymmetric partitioned systems","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"GPMR can take advantage of the structure of general square systems Kz = d with the 2x2 block structure","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":" beginbmatrix lambda M A B mu N endbmatrix beginbmatrix x y endbmatrix = beginbmatrix b c endbmatrix","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"Relations CE = M^-1 EC = M DF = N^-1 FD = N\nArguments C and E with ldiv=false C and E with ldiv=true D and F with ldiv=false D and F with ldiv=true","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"note: Note\nOur implementations of BiLQ, QMR, BiLQR, USYMLQ, USYMQR and TriLQR don't support preconditioning.","category":"page"},{"location":"preconditioners/#Packages-that-provide-preconditioners","page":"Preconditioners","title":"Packages that provide 
preconditioners","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"IncompleteLU.jl implements the left-looking and Crout versions of ILU decompositions.\nILUZero.jl is a Julia implementation of incomplete LU factorization with zero level of fill-in. \nLimitedLDLFactorizations.jl for limited-memory LDLᵀ factorization of symmetric matrices.\nAlgebraicMultigrid.jl provides two algebraic multigrid (AMG) preconditioners.\nRandomizedPreconditioners.jl uses randomized numerical linear algebra to construct approximate inverses of matrices.\nBasicLU.jl uses a sparse LU factorization to compute a maximum volume basis that can be used as a preconditioner for least-norm and least-squares problems.","category":"page"},{"location":"preconditioners/#Examples","page":"Preconditioners","title":"Examples","text":"","category":"section"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using Krylov\nn, m = size(A)\nd = [A[i,i] ≠ 0 ? 1 / abs(A[i,i]) : 1 for i=1:n] # Jacobi preconditioner\nP⁻¹ = diagm(d)\nx, stats = symmlq(A, b, M=P⁻¹)","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using Krylov\nn, m = size(A)\nd = [1 / norm(A[:,i]) for i=1:m] # diagonal preconditioner\nP⁻¹ = diagm(d)\nx, stats = minres(A, b, M=P⁻¹)","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using IncompleteLU, Krylov\nPℓ = ilu(A)\nx, stats = gmres(A, b, M=Pℓ, ldiv=true) # left preconditioning","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using LimitedLDLFactorizations, Krylov\nP = lldl(A)\nP.D .= abs.(P.D)\nx, stats = cg(A, b, M=P, ldiv=true) # centered preconditioning","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using ILUZero, Krylov\nPᵣ = ilu0(A)\nx, stats = bicgstab(A, b, N=Pᵣ, ldiv=true) # right preconditioning","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using LDLFactorizations, Krylov\n\nM = ldl(E)\nN = ldl(F)\n\n# [E A] [x] = [b]\n# [Aᴴ -F] [y] [c]\nx, y, stats = tricg(A, b, c, M=M, N=N, ldiv=true)","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using SuiteSparse, Krylov\nimport LinearAlgebra.ldiv!\n\nM = cholesky(E)\n\n# ldiv! 
is not implemented for the sparse Cholesky factorization (SuiteSparse.CHOLMOD)\nldiv!(y::Vector{T}, F::SuiteSparse.CHOLMOD.Factor{T}, x::Vector{T}) where T = (y .= F \\ x)\n\n# [E A] [x] = [b]\n# [Aᴴ 0] [y] [c]\nx, y, stats = trimr(A, b, c, M=M, sp=true, ldiv=true)","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"using LinearAlgebra, Krylov\n\nC = lu(M)\n\n# [M A] [x] = [b]\n# [B 0] [y] [c]\nx, y, stats = gpmr(A, B, b, c, C=C, gsp=true, ldiv=true)","category":"page"},{"location":"preconditioners/","page":"Preconditioners","title":"Preconditioners","text":"import BasicLU\nusing LinearOperators, SparseArrays, Krylov\n\n# Least-squares problem\nm, n = size(A)\nAᴴ = sparse(A')\nbasis, B = BasicLU.maxvolbasis(Aᴴ)\nopA = LinearOperator(A)\nB⁻ᴴ = LinearOperator(Float64, n, n, false, false, (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'T')),\n (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'N')),\n (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'N')))\n\nd, stats = lsmr(opA * B⁻ᴴ, b) # min ‖AB⁻ᴴd - b‖₂\nx = B⁻ᴴ * d # recover the solution of min ‖Ax - b‖₂\n\n# Least-norm problem\nm, n = size(A)\nbasis, B = BasicLU.maxvolbasis(A)\nopA = LinearOperator(A)\nB⁻¹ = LinearOperator(Float64, m, m, false, false, (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'N')),\n (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'T')),\n (y, v) -> (y .= v ; BasicLU.solve!(B, y, 'T')))\n\nx, y, stats = craigmr(B⁻¹ * opA, B⁻¹ * b) # min ‖x‖₂ s.t. B⁻¹Ax = B⁻¹b","category":"page"},{"location":"reference/#Reference","page":"Reference","title":"Reference","text":"","category":"section"},{"location":"reference/#Index","page":"Reference","title":"Index","text":"","category":"section"},{"location":"reference/","page":"Reference","title":"Reference","text":"","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"Krylov.FloatOrComplex\nKrylov.niterations\nKrylov.Aprod\nKrylov.Atprod\nKrylov.kstdout\nKrylov.extract_parameters\nBase.show","category":"page"},{"location":"reference/#Krylov.FloatOrComplex","page":"Reference","title":"Krylov.FloatOrComplex","text":"FloatOrComplex{T}\n\nUnion type of T and Complex{T} where T is an AbstractFloat.\n\n\n\n\n\n","category":"type"},{"location":"reference/#Krylov.niterations","page":"Reference","title":"Krylov.niterations","text":"niterations(solver)\n\nReturn the number of iterations performed by the Krylov method associated to solver.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Krylov.Aprod","page":"Reference","title":"Krylov.Aprod","text":"Aprod(solver)\n\nReturn the number of operator-vector products with A performed by the Krylov method associated to solver.\n\n\n\n\n\n","category":"function"},{"location":"reference/#Krylov.Atprod","page":"Reference","title":"Krylov.Atprod","text":"Atprod(solver)\n\nReturn the number of operator-vector products with A' performed by the Krylov method associated to solver.\n\n\n\n\n\n","category":"function"},
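{"location":"reference/","page":"Reference","title":"Reference","text":"A small sketch of these counters, assuming A and b define a Hermitian positive-definite system built elsewhere:","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"using Krylov\nsolver = CgSolver(A, b) # in-place workspace for CG\ncg!(solver, A, b)\nniterations(solver) # number of iterations performed\nAprod(solver) # number of operator-vector products with A","category":"page"},{"location":"reference/#Krylov.kstdout","page":"Reference","title":"Krylov.kstdout","text":"Default I/O stream for all Krylov methods.\n\n\n\n\n\n","category":"constant"},{"location":"reference/#Krylov.extract_parameters","page":"Reference","title":"Krylov.extract_parameters","text":"arguments = extract_parameters(ex::Expr)\n\nExtract the arguments of an expression that is a keyword parameter tuple. Implementation suggested by Mitchell J. 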
O'Sullivan (@mosullivan93).\n\n\n\n\n\n","category":"function"},{"location":"reference/#Base.show","page":"Reference","title":"Base.show","text":"show(io, solver; show_stats=true)\n\nStatistics of solver are displayed if show_stats is set to true.\n\n\n\n\n\n","category":"function"},{"location":"examples/cg/","page":"CG","title":"CG","text":"using Krylov, MatrixMarket, SuiteSparseMatrixCollection\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"bcsstk09\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nn = matrix.nrows[1]\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = ones(n)\nb_norm = norm(b)\n\n# Solve Ax = b.\n(x, stats) = cg(A, b)\nshow(stats)\nr = b - A * x\n@printf(\"Relative residual: %8.1e\\n\", norm(r) / b_norm)","category":"page"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"# Non-Hermitian square linear systems","category":"page"},{"location":"solvers/unsymmetric/#BiLQ","page":"Non-Hermitian square linear systems","title":"BiLQ","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"bilq\nbilq!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.bilq","page":"Non-Hermitian square linear systems","title":"Krylov.bilq","text":"(x, stats) = bilq(A, b::AbstractVector{FC};\n c::AbstractVector{FC}=b, transfer_to_bicg::Bool=true,\n atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = bilq(A, b, x0::AbstractVector; kwargs...)\n\nBiLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the square linear system Ax = b of size n using BiLQ. BiLQ is based on the Lanczos biorthogonalization process and requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b. When A is Hermitian and b = c, BiLQ is equivalent to SYMMLQ.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nc: the second initial vector of length n required by the Lanczos biorthogonalization process;\ntransfer_to_bicg: transfer from the BiLQ point to the BiCG point, when it exists. The transfer is based on the residual norm;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nA. Montoison and D. 
Orban, BiLQ: An Iterative Method for Nonsymmetric Linear Systems with a Quasi-Minimum Error Property, SIAM Journal on Matrix Analysis and Applications, 41(3), pp. 1145–1166, 2020.\nR. Fletcher, Conjugate gradient methods for indefinite systems, Numerical Analysis, Springer, pp. 73–89, 1976.\n\n\n\n\n","category":"function"},
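{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"A minimal usage sketch, assuming A and b are a square non-Hermitian operator and a right-hand side defined elsewhere:","category":"page"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"using Krylov\nx, stats = bilq(A, b)\nstats.solved # true if the stopping criterion was satisfied","category":"page"},{"location":"solvers/unsymmetric/#Krylov.bilq!","page":"Non-Hermitian square linear systems","title":"Krylov.bilq!","text":"solver = bilq!(solver::BilqSolver, A, b; kwargs...)\nsolver = bilq!(solver::BilqSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of bilq.\n\nSee BilqSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#QMR","page":"Non-Hermitian square linear systems","title":"QMR","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"qmr\nqmr!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.qmr","page":"Non-Hermitian square linear systems","title":"Krylov.qmr","text":"(x, stats) = qmr(A, b::AbstractVector{FC};\n c::AbstractVector{FC}=b, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0, timemax::Float64=Inf, verbose::Int=0,\n history::Bool=false, callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = qmr(A, b, x0::AbstractVector; kwargs...)\n\nQMR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the square linear system Ax = b of size n using QMR.\n\nQMR is based on the Lanczos biorthogonalization process and requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b. When A is Hermitian and b = c, QMR is equivalent to MINRES.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nc: the second initial vector of length n required by the Lanczos biorthogonalization process;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nR. W. Freund and N. M. Nachtigal, QMR: a quasi-minimal residual method for non-Hermitian linear systems, Numerische Mathematik, Vol. 60(1), pp. 315–339, 1991.\nR. W. Freund and N. M. Nachtigal, An implementation of the QMR method based on coupled two-term recurrences, SIAM Journal on Scientific Computing, Vol. 15(2), pp. 313–337, 1994.\nA. Montoison and D. Orban, BiLQ: An Iterative Method for Nonsymmetric Linear Systems with a Quasi-Minimum Error Property, SIAM Journal on Matrix Analysis and Applications, 41(3), pp. 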
1145–1166, 2020.\n\n\n\n\n","category":"function"},
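{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"A minimal usage sketch, assuming A and b are defined elsewhere:","category":"page"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"using Krylov\nx, stats = qmr(A, b) # quasi-minimal residual iterates","category":"page"},{"location":"solvers/unsymmetric/#Krylov.qmr!","page":"Non-Hermitian square linear systems","title":"Krylov.qmr!","text":"solver = qmr!(solver::QmrSolver, A, b; kwargs...)\nsolver = qmr!(solver::QmrSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of qmr.\n\nSee QmrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#CGS","page":"Non-Hermitian square linear systems","title":"CGS","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"cgs\ncgs!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.cgs","page":"Non-Hermitian square linear systems","title":"Krylov.cgs","text":"(x, stats) = cgs(A, b::AbstractVector{FC};\n c::AbstractVector{FC}=b, M=I, N=I,\n ldiv::Bool=false, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = cgs(A, b, x0::AbstractVector; kwargs...)\n\nCGS can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the consistent linear system Ax = b of size n using CGS. CGS requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b.\n\nFrom \"Iterative Methods for Sparse Linear Systems (Y. Saad)\":\n\n«The method is based on a polynomial variant of the conjugate gradients algorithm. Although related to the so-called bi-conjugate gradients (BCG) algorithm, it does not involve adjoint matrix-vector multiplications, and the expected convergence rate is about twice that of the BCG algorithm.\n\nThe Conjugate Gradient Squared algorithm works quite well in many cases. However, one difficulty is that, since the polynomials are squared, rounding errors tend to be more damaging than in the standard BCG algorithm. In particular, very high variations of the residual vectors often cause the residual norms computed to become inaccurate.\n\nTFQMR and BICGSTAB were developed to remedy this difficulty.»\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nc: the second initial vector of length n required by the Lanczos biorthogonalization process;\nM: linear operator that models a nonsingular matrix of size n used for left preconditioning;\nN: linear operator that models a nonsingular matrix of size n used for right preconditioning;\nldiv: define whether the preconditioners use ldiv! or mul!;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). 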
Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nP. Sonneveld, CGS, A Fast Lanczos-Type Solver for Nonsymmetric Linear Systems, SIAM Journal on Scientific and Statistical Computing, 10(1), pp. 36–52, 1989.\n\n\n\n\n","category":"function"},
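{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"A minimal usage sketch, assuming A and b are defined elsewhere:","category":"page"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"using Krylov\nx, stats = cgs(A, b)","category":"page"},{"location":"solvers/unsymmetric/#Krylov.cgs!","page":"Non-Hermitian square linear systems","title":"Krylov.cgs!","text":"solver = cgs!(solver::CgsSolver, A, b; kwargs...)\nsolver = cgs!(solver::CgsSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of cgs.\n\nSee CgsSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#BiCGSTAB","page":"Non-Hermitian square linear systems","title":"BiCGSTAB","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"bicgstab\nbicgstab!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.bicgstab","page":"Non-Hermitian square linear systems","title":"Krylov.bicgstab","text":"(x, stats) = bicgstab(A, b::AbstractVector{FC};\n c::AbstractVector{FC}=b, M=I, N=I,\n ldiv::Bool=false, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = bicgstab(A, b, x0::AbstractVector; kwargs...)\n\nBICGSTAB can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the square linear system Ax = b of size n using BICGSTAB. BICGSTAB requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b.\n\nThe Biconjugate Gradient Stabilized method is a variant of BiCG, like CGS, but using different updates for the Aᴴ-sequence in order to obtain smoother convergence than CGS.\n\nIf BICGSTAB stagnates, we recommend DQGMRES and BiLQ as alternative methods for unsymmetric square systems.\n\nBICGSTAB stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖b‖ * rtol.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nc: the second initial vector of length n required by the Lanczos biorthogonalization process;\nM: linear operator that models a nonsingular matrix of size n used for left preconditioning;\nN: linear operator that models a nonsingular matrix of size n used for right preconditioning;\nldiv: define whether the preconditioners use ldiv! or mul!;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). 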
Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nH. A. van der Vorst, Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems, SIAM Journal on Scientific and Statistical Computing, 13(2), pp. 631–644, 1992.\nG. L. G. Sleijpen and D. R. Fokkema, BiCGstab(ℓ) for linear equations involving unsymmetric matrices with complex spectrum, Electronic Transactions on Numerical Analysis, 1, pp. 11–32, 1993.\n\n\n\n\n","category":"function"},
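{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"A minimal usage sketch, assuming A and b are defined elsewhere:","category":"page"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"using Krylov\nx, stats = bicgstab(A, b) # smoother convergence than CGS in general","category":"page"},{"location":"solvers/unsymmetric/#Krylov.bicgstab!","page":"Non-Hermitian square linear systems","title":"Krylov.bicgstab!","text":"solver = bicgstab!(solver::BicgstabSolver, A, b; kwargs...)\nsolver = bicgstab!(solver::BicgstabSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of bicgstab.\n\nSee BicgstabSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#DIOM","page":"Non-Hermitian square linear systems","title":"DIOM","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"diom\ndiom!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.diom","page":"Non-Hermitian square linear systems","title":"Krylov.diom","text":"(x, stats) = diom(A, b::AbstractVector{FC};\n memory::Int=20, M=I, N=I, ldiv::Bool=false,\n reorthogonalization::Bool=false, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = diom(A, b, x0::AbstractVector; kwargs...)\n\nDIOM can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the consistent linear system Ax = b of size n using DIOM.\n\nDIOM only orthogonalizes the new vectors of the Krylov basis against the memory most recent vectors. If CG is well defined on Ax = b and memory = 2, DIOM is theoretically equivalent to CG. If k ≤ memory where k is the number of iterations, DIOM is theoretically equivalent to FOM. Otherwise, DIOM interpolates between CG and FOM and is similar to CG with partial reorthogonalization.\n\nAn advantage of DIOM is that it can handle non-Hermitian systems, Hermitian indefinite systems, and systems that are both non-Hermitian and indefinite with a single algorithm.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nmemory: the number of most recent vectors of the Krylov basis against which to orthogonalize a new vector;\nM: linear operator that models a nonsingular matrix of size n used for left preconditioning;\nN: linear operator that models a nonsingular matrix of size n used for right preconditioning;\nldiv: define whether the preconditioners use ldiv! 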
or mul!;\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against the memory most recent vectors;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nY. Saad, Practical use of some Krylov subspace methods for solving indefinite and nonsymmetric linear systems, SIAM Journal on Scientific and Statistical Computing, 5(1), pp. 203–228, 1984.\n\n\n\n\n","category":"function"},
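{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"A minimal usage sketch, assuming A and b are defined elsewhere; memory controls how many recent basis vectors are kept:","category":"page"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"using Krylov\nx, stats = diom(A, b, memory=20)","category":"page"},{"location":"solvers/unsymmetric/#Krylov.diom!","page":"Non-Hermitian square linear systems","title":"Krylov.diom!","text":"solver = diom!(solver::DiomSolver, A, b; kwargs...)\nsolver = diom!(solver::DiomSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of diom.\n\nNote that the memory keyword argument is the only exception. It's required to create a DiomSolver and can't be changed later.\n\nSee DiomSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#FOM","page":"Non-Hermitian square linear systems","title":"FOM","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"fom\nfom!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.fom","page":"Non-Hermitian square linear systems","title":"Krylov.fom","text":"(x, stats) = fom(A, b::AbstractVector{FC};\n memory::Int=20, M=I, N=I, ldiv::Bool=false,\n restart::Bool=false, reorthogonalization::Bool=false,\n atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = fom(A, b, x0::AbstractVector; kwargs...)\n\nFOM can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the linear system Ax = b of size n using FOM.\n\nThe FOM algorithm is based on the Arnoldi process and a Galerkin condition.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nmemory: if restart = true, the restarted version FOM(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;\nM: linear operator that models a nonsingular matrix of size n used for left preconditioning;\nN: linear operator that models a nonsingular matrix of size n used for right preconditioning;\nldiv: define whether the preconditioners use ldiv! 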
or mul!;\nrestart: restart the method after memory iterations;\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nY. Saad, Krylov subspace methods for solving unsymmetric linear systems, Mathematics of Computation, Vol. 37(155), pp. 105–126, 1981.\n\n\n\n\n","category":"function"},
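{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"A minimal usage sketch, assuming A and b are defined elsewhere:","category":"page"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"using Krylov\nx, stats = fom(A, b, restart=true, memory=20) # restarted FOM(20)","category":"page"},{"location":"solvers/unsymmetric/#Krylov.fom!","page":"Non-Hermitian square linear systems","title":"Krylov.fom!","text":"solver = fom!(solver::FomSolver, A, b; kwargs...)\nsolver = fom!(solver::FomSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of fom.\n\nNote that the memory keyword argument is the only exception. It's required to create a FomSolver and can't be changed later.\n\nSee FomSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#DQGMRES","page":"Non-Hermitian square linear systems","title":"DQGMRES","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"dqgmres\ndqgmres!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.dqgmres","page":"Non-Hermitian square linear systems","title":"Krylov.dqgmres","text":"(x, stats) = dqgmres(A, b::AbstractVector{FC};\n memory::Int=20, M=I, N=I, ldiv::Bool=false,\n reorthogonalization::Bool=false, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = dqgmres(A, b, x0::AbstractVector; kwargs...)\n\nDQGMRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the consistent linear system Ax = b of size n using DQGMRES.\n\nThe DQGMRES algorithm is based on the incomplete Arnoldi orthogonalization process and computes a sequence of approximate solutions with the quasi-minimal residual property.\n\nDQGMRES only orthogonalizes the new vectors of the Krylov basis against the memory most recent vectors. If MINRES is well defined on Ax = b and memory = 2, DQGMRES is theoretically equivalent to MINRES. If k ≤ memory where k is the number of iterations, DQGMRES is theoretically equivalent to GMRES. 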
Otherwise, DQGMRES interpolates between MINRES and GMRES and is similar to MINRES with partial reorthogonalization.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nmemory: the number of most recent vectors of the Krylov basis against which to orthogonalize a new vector;\nM: linear operator that models a nonsingular matrix of size n used for left preconditioning;\nN: linear operator that models a nonsingular matrix of size n used for right preconditioning;\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against the memory most recent vectors;\nldiv: define whether the preconditioners use ldiv! or mul!;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nY. Saad and K. Wu, DQGMRES: a quasi minimal residual algorithm based on incomplete orthogonalization, Numerical Linear Algebra with Applications, Vol. 3(4), pp. 329–343, 1996.\n\n\n\n\n","category":"function"},
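{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"A minimal usage sketch, assuming A and b are defined elsewhere:","category":"page"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"using Krylov\nx, stats = dqgmres(A, b, memory=20)","category":"page"},{"location":"solvers/unsymmetric/#Krylov.dqgmres!","page":"Non-Hermitian square linear systems","title":"Krylov.dqgmres!","text":"solver = dqgmres!(solver::DqgmresSolver, A, b; kwargs...)\nsolver = dqgmres!(solver::DqgmresSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of dqgmres.\n\nNote that the memory keyword argument is the only exception. It's required to create a DqgmresSolver and can't be changed later.\n\nSee DqgmresSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#GMRES","page":"Non-Hermitian square linear systems","title":"GMRES","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"gmres\ngmres!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.gmres","page":"Non-Hermitian square linear systems","title":"Krylov.gmres","text":"(x, stats) = gmres(A, b::AbstractVector{FC};\n memory::Int=20, M=I, N=I, ldiv::Bool=false,\n restart::Bool=false, reorthogonalization::Bool=false,\n atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. 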
FC is T or Complex{T}.\n\n(x, stats) = gmres(A, b, x0::AbstractVector; kwargs...)\n\nGMRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the linear system Ax = b of size n using GMRES.\n\nThe GMRES algorithm is based on the Arnoldi process and computes a sequence of approximate solutions with the minimum residual.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nmemory: if restart = true, the restarted version GMRES(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;\nM: linear operator that models a nonsingular matrix of size n used for left preconditioning;\nN: linear operator that models a nonsingular matrix of size n used for right preconditioning;\nldiv: define whether the preconditioners use ldiv! or mul!;\nrestart: restart the method after memory iterations;\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nY. Saad and M. H. Schultz, GMRES: A Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems, SIAM Journal on Scientific and Statistical Computing, Vol. 7(3), pp. 856–869, 1986.\n\n\n\n\n","category":"function"},
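{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"A minimal usage sketch, assuming A and b are defined elsewhere:","category":"page"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"using Krylov\nx, stats = gmres(A, b, restart=true, memory=20) # restarted GMRES(20)","category":"page"},{"location":"solvers/unsymmetric/#Krylov.gmres!","page":"Non-Hermitian square linear systems","title":"Krylov.gmres!","text":"solver = gmres!(solver::GmresSolver, A, b; kwargs...)\nsolver = gmres!(solver::GmresSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of gmres.\n\nNote that the memory keyword argument is the only exception. 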
It's required to create a GmresSolver and can't be changed later.\n\nSee GmresSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/unsymmetric/#FGMRES","page":"Non-Hermitian square linear systems","title":"FGMRES","text":"","category":"section"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"fgmres\nfgmres!","category":"page"},{"location":"solvers/unsymmetric/#Krylov.fgmres","page":"Non-Hermitian square linear systems","title":"Krylov.fgmres","text":"(x, stats) = fgmres(A, b::AbstractVector{FC};\n memory::Int=20, M=I, N=I, ldiv::Bool=false,\n restart::Bool=false, reorthogonalization::Bool=false,\n atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = fgmres(A, b, x0::AbstractVector; kwargs...)\n\nFGMRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the linear system Ax = b of size n using FGMRES.\n\nFGMRES computes a sequence of approximate solutions with minimum residual. FGMRES is a variant of GMRES that allows changes in the right preconditioner at each iteration.\n\nThis implementation allows a left preconditioner M and a flexible right preconditioner N. A situation in which the preconditioner is \"not constant\" is when a relaxation-type method, a Chebyshev iteration or another Krylov subspace method is used as a preconditioner. Compared to GMRES, there is no additional cost incurred in the arithmetic but the memory requirement almost doubles. Thus, GMRES is recommended if the right preconditioner N is constant.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nmemory: if restart = true, the restarted version FGMRES(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;\nM: linear operator that models a nonsingular matrix of size n used for left preconditioning;\nN: linear operator that models a nonsingular matrix of size n used for right preconditioning;\nldiv: define whether the preconditioners use ldiv! or mul!;\nrestart: restart the method after memory iterations;\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). 
Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nY. Saad, A Flexible Inner-Outer Preconditioned GMRES Algorithm, SIAM Journal on Scientific Computing, Vol. 14(2), pp. 461–469, 1993.\n\n\n\n\n","category":"function"},
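{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"A minimal usage sketch, where opN is assumed to be a user-defined right preconditioner that may change between iterations:","category":"page"},{"location":"solvers/unsymmetric/","page":"Non-Hermitian square linear systems","title":"Non-Hermitian square linear systems","text":"using Krylov\n# opN is assumed to model a (possibly varying) right preconditioner\nx, stats = fgmres(A, b, N=opN)","category":"page"},{"location":"solvers/unsymmetric/#Krylov.fgmres!","page":"Non-Hermitian square linear systems","title":"Krylov.fgmres!","text":"solver = fgmres!(solver::FgmresSolver, A, b; kwargs...)\nsolver = fgmres!(solver::FgmresSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of fgmres.\n\nNote that the memory keyword argument is the only exception. It's required to create a FgmresSolver and can't be changed later.\n\nSee FgmresSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"examples/crmr/","page":"CRMR","title":"CRMR","text":"using Krylov, HarwellRutherfordBoeing, SuiteSparseMatrixCollection\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"gemat1\")\npath = fetch_ssmc(matrix, format=\"RB\")\n\nA = RutherfordBoeingData(joinpath(path[1], \"$(matrix.name[1]).rb\")).data\n(m, n) = size(A)\n@printf(\"System size: %d rows and %d columns\\n\", m, n)\n\nx_exact = A' * ones(m)\nx_exact_norm = norm(x_exact)\nx_exact /= x_exact_norm\nb = A * x_exact\n(x, stats) = crmr(A, b)\nshow(stats)\nresid = norm(A * x - b) / norm(b)\n@printf(\"CRMR: Relative residual: %7.1e\\n\", resid)\n@printf(\"CRMR: ‖x - x*‖₂: %7.1e\\n\", norm(x - x_exact))","category":"page"},{"location":"solvers/sid/","page":"Hermitian indefinite linear systems","title":"Hermitian indefinite linear systems","text":"# Hermitian indefinite linear systems","category":"page"},{"location":"solvers/sid/#SYMMLQ","page":"Hermitian indefinite linear systems","title":"SYMMLQ","text":"","category":"section"},{"location":"solvers/sid/","page":"Hermitian indefinite linear systems","title":"Hermitian indefinite linear systems","text":"symmlq\nsymmlq!","category":"page"},{"location":"solvers/sid/#Krylov.symmlq","page":"Hermitian indefinite linear systems","title":"Krylov.symmlq","text":"(x, stats) = symmlq(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false, window::Int=5,\n transfer_to_cg::Bool=true, λ::T=zero(T),\n λest::T=zero(T), etol::T=√eps(T),\n conlim::T=1/√eps(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. 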
FC is T or Complex{T}.\n\n(x, stats) = symmlq(A, b, x0::AbstractVector; kwargs...)\n\nSYMMLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the shifted linear system\n\n(A + λI) x = b\n\nof size n using the SYMMLQ method, where λ is a shift parameter, and A is Hermitian.\n\nSYMMLQ produces monotonic errors ‖x* - x‖₂.\n\nInput arguments\n\nA: a linear operator that models a Hermitian matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nwindow: number of iterations used to accumulate a lower bound on the error;\ntransfer_to_cg: transfer from the SYMMLQ point to the CG point, when it exists. The transfer is based on the residual norm;\nλ: regularization parameter;\nλest: positive strict lower bound on the smallest eigenvalue λₘᵢₙ when solving a positive-definite system, such as λest = (1-10⁻⁷)λₘᵢₙ;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\netol: stopping tolerance based on the lower bound on the error;\nconlim: limit on the estimated condition number of A beyond which the solution will be abandoned;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SymmlqStats structure.\n\nReference\n\nC. C. Paige and M. A. Saunders, Solution of Sparse Indefinite Systems of Linear Equations, SIAM Journal on Numerical Analysis, 12(4), pp. 617–629, 1975.\n\n\n\n\n","category":"function"},
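{"location":"solvers/sid/","page":"Hermitian indefinite linear systems","title":"Hermitian indefinite linear systems","text":"A minimal usage sketch, assuming A and b are a Hermitian operator and a right-hand side defined elsewhere:","category":"page"},{"location":"solvers/sid/","page":"Hermitian indefinite linear systems","title":"Hermitian indefinite linear systems","text":"using Krylov\nx, stats = symmlq(A, b) # A Hermitian","category":"page"},{"location":"solvers/sid/#Krylov.symmlq!","page":"Hermitian indefinite linear systems","title":"Krylov.symmlq!","text":"solver = symmlq!(solver::SymmlqSolver, A, b; kwargs...)\nsolver = symmlq!(solver::SymmlqSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of symmlq.\n\nSee SymmlqSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sid/#MINRES","page":"Hermitian indefinite linear systems","title":"MINRES","text":"","category":"section"},{"location":"solvers/sid/","page":"Hermitian indefinite linear systems","title":"Hermitian indefinite linear systems","text":"minres\nminres!","category":"page"},{"location":"solvers/sid/#Krylov.minres","page":"Hermitian indefinite linear systems","title":"Krylov.minres","text":"(x, stats) = minres(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false, window::Int=5,\n λ::T=zero(T), atol::T=√eps(T),\n rtol::T=√eps(T), etol::T=√eps(T),\n conlim::T=1/√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. 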
FC is T or Complex{T}.\n\n(x, stats) = minres(A, b, x0::AbstractVector; kwargs...)\n\nMINRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nSolve the shifted linear least-squares problem\n\nminimize ‖b - (A + λI)x‖₂²\n\nor the shifted linear system\n\n(A + λI) x = b\n\nof size n using the MINRES method, where λ ≥ 0 is a shift parameter and A is Hermitian.\n\nMINRES is formally equivalent to applying CR to Ax=b when A is positive definite, but is typically more stable and also applies to the case where A is indefinite.\n\nMINRES produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂.\n\nInput arguments\n\nA: a linear operator that models a Hermitian matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nwindow: number of iterations used to accumulate a lower bound on the error;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\netol: stopping tolerance based on the lower bound on the error;\nconlim: limit on the estimated condition number of A beyond which the solution will be abandoned;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nC. C. Paige and M. A. Saunders, Solution of Sparse Indefinite Systems of Linear Equations, SIAM Journal on Numerical Analysis, 12(4), pp. 617–629, 1975.\n\n\n\n\n","category":"function"},
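{"location":"solvers/sid/","page":"Hermitian indefinite linear systems","title":"Hermitian indefinite linear systems","text":"A minimal usage sketch, assuming A and b are defined elsewhere:","category":"page"},{"location":"solvers/sid/","page":"Hermitian indefinite linear systems","title":"Hermitian indefinite linear systems","text":"using Krylov\nx, stats = minres(A, b) # A may be indefinite","category":"page"},{"location":"solvers/sid/#Krylov.minres!","page":"Hermitian indefinite linear systems","title":"Krylov.minres!","text":"solver = minres!(solver::MinresSolver, A, b; kwargs...)\nsolver = minres!(solver::MinresSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of minres.\n\nSee MinresSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sid/#MINRES-QLP","page":"Hermitian indefinite linear systems","title":"MINRES-QLP","text":"","category":"section"},{"location":"solvers/sid/","page":"Hermitian indefinite linear systems","title":"Hermitian indefinite linear systems","text":"minres_qlp\nminres_qlp!","category":"page"},{"location":"solvers/sid/#Krylov.minres_qlp","page":"Hermitian indefinite linear systems","title":"Krylov.minres_qlp","text":"(x, stats) = minres_qlp(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false, Artol::T=√eps(T),\n λ::T=zero(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. 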
FC is T or Complex{T}.\n\n(x, stats) = minres_qlp(A, b, x0::AbstractVector; kwargs...)\n\nMINRES-QLP can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nMINRES-QLP is the only method based on the Lanczos process that returns the minimum-norm solution on singular inconsistent systems (A + λI)x = b of size n, where λ is a shift parameter. It is significantly more complex but can be more reliable than MINRES when A is ill-conditioned.\n\nM also indicates the weighted norm in which residuals are measured.\n\nInput arguments\n\nA: a linear operator that models a Hermitian matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nArtol: relative stopping tolerance based on the Aᴴ-residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nS.-C. T. Choi, Iterative methods for singular linear equations and least-squares problems, Ph.D. thesis, ICME, Stanford University, 2006.\nS.-C. T. Choi, C. C. Paige and M. A. Saunders, MINRES-QLP: A Krylov subspace method for indefinite or singular symmetric systems, SIAM Journal on Scientific Computing, Vol. 33(4), pp. 1810–1836, 2011.\nS.-C. T. Choi and M. A. Saunders, Algorithm 937: MINRES-QLP for symmetric and Hermitian linear equations and least-squares problems, ACM Transactions on Mathematical Software, 40(2), pp. 
1–12, 2014.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sid/#Krylov.minres_qlp!","page":"Hermitian indefinite linear systems","title":"Krylov.minres_qlp!","text":"solver = minres_qlp!(solver::MinresQlpSolver, A, b; kwargs...)\nsolver = minres_qlp!(solver::MinresQlpSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of minres_qlp.\n\nSee MinresQlpSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sid/#MINARES","page":"Hermitian indefinite linear systems","title":"MINARES","text":"","category":"section"},{"location":"solvers/sid/","page":"Hermitian indefinite linear systems","title":"Hermitian indefinite linear systems","text":"minares\nminares!","category":"page"},{"location":"solvers/sid/#Krylov.minares","page":"Hermitian indefinite linear systems","title":"Krylov.minares","text":"(x, stats) = minares(A, b::AbstractVector{FC};\n M=I, ldiv::Bool=false,\n λ::T = zero(T), atol::T=√eps(T),\n rtol::T=√eps(T), Artol::T = √eps(T),\n itmax::Int=0, timemax::Float64=Inf,\n verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = minares(A, b, x0::AbstractVector; kwargs...)\n\nMINARES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nMINARES solves the Hermitian linear system Ax = b of size n. MINARES minimizes ‖Arₖ‖₂ when M = Iₙ and ‖AMrₖ‖_M otherwise. The estimates computed every iteration are ‖Mrₖ‖₂ and ‖AMrₖ‖_M.\n\nInput arguments\n\nA: a linear operator that models a Hermitian matrix of dimension n;\nb: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nArtol: relative stopping tolerance based on the Aᴴ-residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReference\n\nA. Montoison, D. Orban and M. A. 
Saunders, MinAres: An Iterative Solver for Symmetric Linear Systems, Cahier du GERAD G-2023-40, GERAD, Montréal, 2023.\n\n\n\n\n\n","category":"function"},{"location":"solvers/sid/#Krylov.minares!","page":"Hermitian indefinite linear systems","title":"Krylov.minares!","text":"solver = minares!(solver::MinaresSolver, A, b; kwargs...)\nsolver = minares!(solver::MinaresSolver, A, b, x0; kwargs...)\n\nwhere kwargs are keyword arguments of minares.\n\nSee MinaresSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/","page":"Least-norm problems","title":"Least-norm problems","text":"# Least-norm problems","category":"page"},{"location":"solvers/ln/#CGNE","page":"Least-norm problems","title":"CGNE","text":"","category":"section"},{"location":"solvers/ln/","page":"Least-norm problems","title":"Least-norm problems","text":"cgne\ncgne!","category":"page"},{"location":"solvers/ln/#Krylov.cgne","page":"Least-norm problems","title":"Krylov.cgne","text":"(x, stats) = cgne(A, b::AbstractVector{FC};\n N=I, ldiv::Bool=false,\n λ::T=zero(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nSolve the consistent linear system\n\nAx + √λs = b\n\nof size m × n using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations of the second kind\n\n(AAᴴ + λI) y = b\n\nbut is more stable. When λ = 0, this method solves the minimum-norm problem\n\nmin ‖x‖₂ s.t. Ax = b.\n\nWhen λ > 0, it solves the problem\n\nmin ‖(x,s)‖₂ s.t. Ax + √λs = b.\n\nCGNE produces monotonic errors ‖x-x*‖₂ but not residuals ‖r‖₂. It is formally equivalent to CRAIG, though it can be slightly less accurate, but it is simpler to implement. Only the x-part of the solution is returned.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nN: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nJ. E. Craig, The N-step iteration procedures, Journal of Mathematics and Physics, 34(1), pp. 64–73, 1955.\nJ. E. Craig, Iteration Procedures for Simultaneous Equations, Ph.D. 
Thesis, Department of Electrical Engineering, MIT, 1954.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#Krylov.cgne!","page":"Least-norm problems","title":"Krylov.cgne!","text":"solver = cgne!(solver::CgneSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of cgne.\n\nSee CgneSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#CRMR","page":"Least-norm problems","title":"CRMR","text":"","category":"section"},{"location":"solvers/ln/","page":"Least-norm problems","title":"Least-norm problems","text":"crmr\ncrmr!","category":"page"},{"location":"solvers/ln/#Krylov.crmr","page":"Least-norm problems","title":"Krylov.crmr","text":"(x, stats) = crmr(A, b::AbstractVector{FC};\n N=I, ldiv::Bool=false,\n λ::T=zero(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nSolve the consistent linear system\n\nAx + √λs = b\n\nof size m × n using the Conjugate Residual (CR) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CR to the normal equations of the second kind\n\n(AAᴴ + λI) y = b\n\nbut is more stable. When λ = 0, this method solves the minimum-norm problem\n\nmin ‖x‖₂ s.t. x ∈ argmin ‖Ax - b‖₂.\n\nWhen λ > 0, this method solves the problem\n\nmin ‖(x,s)‖₂ s.t. Ax + √λs = b.\n\nCRMR produces monotonic residuals ‖r‖₂. It is formally equivalent to CRAIG-MR, though it can be slightly less accurate, but it is simpler to implement. Only the x-part of the solution is returned.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nN: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;\nldiv: define whether the preconditioner uses ldiv! or mul!;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nD. Orban and M. Arioli, Iterative Solution of Symmetric Quasi-Definite Linear Systems, Volume 3 of Spotlights. SIAM, Philadelphia, PA, 2017.\nD. Orban, The Projected Golub-Kahan Process for Constrained Linear Least-Squares Problems. 
Cahier du GERAD G-2014-15, 2014.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#Krylov.crmr!","page":"Least-norm problems","title":"Krylov.crmr!","text":"solver = crmr!(solver::CrmrSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of crmr.\n\nSee CrmrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#LNLQ","page":"Least-norm problems","title":"LNLQ","text":"","category":"section"},{"location":"solvers/ln/","page":"Least-norm problems","title":"Least-norm problems","text":"lnlq\nlnlq!","category":"page"},{"location":"solvers/ln/#Krylov.lnlq","page":"Least-norm problems","title":"Krylov.lnlq","text":"(x, y, stats) = lnlq(A, b::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n transfer_to_craig::Bool=true,\n sqd::Bool=false, λ::T=zero(T),\n σ::T=zero(T), utolx::T=√eps(T),\n utoly::T=√eps(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nFind the least-norm solution of the consistent linear system\n\nAx + λ²y = b\n\nof size m × n using the LNLQ method, where λ ≥ 0 is a regularization parameter.\n\nFor a system in the form Ax = b, the LNLQ method is equivalent to applying SYMMLQ to AAᴴy = b and recovering x = Aᴴy but is more stable. Note that y are the Lagrange multipliers of the least-norm problem\n\nminimize ‖x‖ s.t. Ax = b.\n\nIf λ > 0, LNLQ solves the symmetric and quasi-definite system\n\n[ -F Aᴴ ] [ x ] [ 0 ]\n[ A λ²E ] [ y ] = [ b ],\n\nwhere E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.\n\nThe system above represents the optimality conditions of\n\nmin ‖x‖²_F + λ²‖y‖²_E s.t. Ax + λ²Ey = b.\n\nFor a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LNLQ is then equivalent to applying SYMMLQ to (AF⁻¹Aᴴ + λ²E)y = b with Fx = Aᴴy.\n\nIf λ = 0, LNLQ solves the symmetric and indefinite system\n\n[ -F Aᴴ ] [ x ] [ 0 ]\n[ A 0 ] [ y ] = [ b ].\n\nThe system above represents the optimality conditions of\n\nminimize ‖x‖²_F s.t. Ax = b.\n\nIn this case, M can still be specified and indicates the weighted norm in which residuals are measured.\n\nIn this implementation, both the x and y-parts of the solution are returned.\n\nutolx and utoly are tolerances on the upper bound of the distance to the solution ‖x-x*‖ and ‖y-y*‖, respectively. The bound is valid if λ > 0 or σ > 0, where σ should be strictly smaller than the smallest positive singular value. For instance, σ = (1-10⁻⁷)σₘᵢₙ.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;\nldiv: define whether the preconditioners use ldiv! or mul!;\ntransfer_to_craig: transfer from the LNLQ point to the CRAIG point, when it exists. 
The transfer is based on the residual norm;\nsqd: if true, set λ=1 for Hermitian quasi-definite systems;\nλ: regularization parameter;\nσ: strict lower bound on the smallest positive singular value σₘᵢₙ such as σ = (1-10⁻⁷)σₘᵢₙ;\nutolx: tolerance on the upper bound on the distance to the solution ‖x-x*‖;\nutoly: tolerance on the upper bound on the distance to the solution ‖y-y*‖;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\ny: a dense vector of length m;\nstats: statistics collected on the run in a LNLQStats structure.\n\nReference\n\nR. Estrin, D. Orban, M.A. Saunders, LNLQ: An Iterative Method for Least-Norm Problems with an Error Minimization Property, SIAM Journal on Matrix Analysis and Applications, 40(3), pp. 1102–1124, 2019.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#Krylov.lnlq!","page":"Least-norm problems","title":"Krylov.lnlq!","text":"solver = lnlq!(solver::LnlqSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of lnlq.\n\nSee LnlqSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#CRAIG","page":"Least-norm problems","title":"CRAIG","text":"","category":"section"},{"location":"solvers/ln/","page":"Least-norm problems","title":"Least-norm problems","text":"craig\ncraig!","category":"page"},{"location":"solvers/ln/#Krylov.craig","page":"Least-norm problems","title":"Krylov.craig","text":"(x, y, stats) = craig(A, b::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n transfer_to_lsqr::Bool=false, sqd::Bool=false,\n λ::T=zero(T), btol::T=√eps(T),\n conlim::T=1/√eps(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\nFind the least-norm solution of the consistent linear system\n\nAx + λ²y = b\n\nof size m × n using the Golub-Kahan implementation of Craig's method, where λ ≥ 0 is a regularization parameter. This method is equivalent to CGNE but is more stable.\n\nFor a system in the form Ax = b, Craig's method is equivalent to applying CG to AAᴴy = b and recovering x = Aᴴy. Note that y are the Lagrange multipliers of the least-norm problem\n\nminimize ‖x‖ s.t. Ax = b.\n\nIf λ > 0, CRAIG solves the symmetric and quasi-definite system\n\n[ -F Aᴴ ] [ x ] [ 0 ]\n[ A λ²E ] [ y ] = [ b ],\n\nwhere E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.\n\nThe system above represents the optimality conditions of\n\nmin ‖x‖²_F + λ²‖y‖²_E s.t. Ax + λ²Ey = b.\n\nFor a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. 
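To make the quasi-definite formulation concrete, here is a minimal hedged sketch (illustrative only: random data, E = F = I, hypothetical problem sizes) that checks the optimality conditions satisfied by the pair (x, y) returned by craig when λ > 0:

using Krylov, LinearAlgebra

A = rand(4, 6)  # underdetermined consistent system
b = rand(4)
λ = 0.1

(x, y, stats) = craig(A, b, λ=λ)

# With E = F = I, the augmented system reads x = Aᴴy and Ax + λ²y = b.
norm(A' * y - x)           # expected to be small, up to the solver tolerances
norm(A * x + λ^2 * y - b)  # idem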
CRAIG is then equivalent to applying CG to (AF⁻¹Aᴴ + λ²E)y = b with Fx = Aᴴy.\n\nIf λ = 0, CRAIG solves the symmetric and indefinite system\n\n[ -F Aᴴ ] [ x ] [ 0 ]\n[ A 0 ] [ y ] = [ b ].\n\nThe system above represents the optimality conditions of\n\nminimize ‖x‖²_F s.t. Ax = b.\n\nIn this case, M can still be specified and indicates the weighted norm in which residuals are measured.\n\nIn this implementation, both the x and y-parts of the solution are returned.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;\nldiv: define whether the preconditioners use ldiv! or mul!;\ntransfer_to_lsqr: transfer from the LSLQ point to the LSQR point, when it exists. The transfer is based on the residual norm;\nsqd: if true, set λ=1 for Hermitian quasi-definite systems;\nλ: regularization parameter;\nbtol: stopping tolerance used to detect zero-residual problems;\nconlim: limit on the estimated condition number of A beyond which the solution will be abandoned;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\ny: a dense vector of length m;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nC. C. Paige and M. A. Saunders, LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares, ACM Transactions on Mathematical Software, 8(1), pp. 43–71, 1982.\nM. A. Saunders, Solutions of Sparse Rectangular Systems Using LSQR and CRAIG, BIT Numerical Mathematics, 35(4), pp. 588–604, 1995.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#Krylov.craig!","page":"Least-norm problems","title":"Krylov.craig!","text":"solver = craig!(solver::CraigSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of craig.\n\nSee CraigSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#CRAIGMR","page":"Least-norm problems","title":"CRAIGMR","text":"","category":"section"},{"location":"solvers/ln/","page":"Least-norm problems","title":"Least-norm problems","text":"craigmr\ncraigmr!","category":"page"},{"location":"solvers/ln/#Krylov.craigmr","page":"Least-norm problems","title":"Krylov.craigmr","text":"(x, y, stats) = craigmr(A, b::AbstractVector{FC};\n M=I, N=I, ldiv::Bool=false,\n sqd::Bool=false, λ::T=zero(T), atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. 
FC is T or Complex{T}.\n\nSolve the consistent linear system\n\nAx + λ²y = b\n\nof size m × n using the CRAIGMR method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying the Conjugate Residuals method to the normal equations of the second kind\n\n(AAᴴ + λ²I) y = b\n\nbut is more stable. When λ = 0, this method solves the minimum-norm problem\n\nmin ‖x‖ s.t. x ∈ argmin ‖Ax - b‖.\n\nIf λ > 0, CRAIGMR solves the symmetric and quasi-definite system\n\n[ -F Aᴴ ] [ x ] [ 0 ]\n[ A λ²E ] [ y ] = [ b ],\n\nwhere E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.\n\nThe system above represents the optimality conditions of\n\nmin ‖x‖²_F + λ²‖y‖²_E s.t. Ax + λ²Ey = b.\n\nFor a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. CRAIGMR is then equivalent to applying MINRES to (AF⁻¹Aᴴ + λ²E)y = b with Fx = Aᴴy.\n\nIf λ = 0, CRAIGMR solves the symmetric and indefinite system\n\n[ -F Aᴴ ] [ x ] [ 0 ]\n[ A 0 ] [ y ] = [ b ].\n\nThe system above represents the optimality conditions of\n\nmin ‖x‖²_F s.t. Ax = b.\n\nIn this case, M can still be specified and indicates the weighted norm in which residuals are measured.\n\nCRAIGMR produces monotonic residuals ‖r‖₂. It is formally equivalent to CRMR, though it can be slightly more accurate, but it is more intricate to implement. Both the x- and y-parts of the solution are returned.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m.\n\nKeyword arguments\n\nM: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;\nN: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;\nldiv: define whether the preconditioners use ldiv! or mul!;\nsqd: if true, set λ=1 for Hermitian quasi-definite systems;\nλ: regularization parameter;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\ny: a dense vector of length m;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nD. Orban and M. Arioli. Iterative Solution of Symmetric Quasi-Definite Linear Systems, Volume 3 of Spotlights. SIAM, Philadelphia, PA, 2017.\nD. Orban, The Projected Golub-Kahan Process for Constrained Linear Least-Squares Problems. 
Cahier du GERAD G-2014-15, 2014.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#Krylov.craigmr!","page":"Least-norm problems","title":"Krylov.craigmr!","text":"solver = craigmr!(solver::CraigmrSolver, A, b; kwargs...)\n\nwhere kwargs are keyword arguments of craigmr.\n\nSee CraigmrSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#USYMLQ","page":"Least-norm problems","title":"USYMLQ","text":"","category":"section"},{"location":"solvers/ln/","page":"Least-norm problems","title":"Least-norm problems","text":"usymlq\nusymlq!","category":"page"},{"location":"solvers/ln/#Krylov.usymlq","page":"Least-norm problems","title":"Krylov.usymlq","text":"(x, stats) = usymlq(A, b::AbstractVector{FC}, c::AbstractVector{FC};\n transfer_to_usymcg::Bool=true, atol::T=√eps(T),\n rtol::T=√eps(T), itmax::Int=0,\n timemax::Float64=Inf, verbose::Int=0, history::Bool=false,\n callback=solver->false, iostream::IO=kstdout)\n\nT is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.\n\n(x, stats) = usymlq(A, b, c, x0::AbstractVector; kwargs...)\n\nUSYMLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.\n\nUSYMLQ determines the least-norm solution of the consistent linear system Ax = b of size m × n.\n\nUSYMLQ is based on the orthogonal tridiagonalization process and requires two initial nonzero vectors b and c. The vector c is only used to initialize the process and a default value can be b or Aᴴb depending on the shape of A. The error norm ‖x - x*‖ decreases monotonically in USYMLQ. It can be seen as a generalization of SYMMLQ.\n\nIt can also be applied to under-determined and over-determined problems. In all cases, problems must be consistent.\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m;\nc: a vector of length n.\n\nOptional argument\n\nx0: a vector of length n that represents an initial guess of the solution x.\n\nKeyword arguments\n\ntransfer_to_usymcg: transfer from the USYMLQ point to the USYMCG point, when it exists. The transfer is based on the residual norm;\natol: absolute stopping tolerance based on the residual norm;\nrtol: relative stopping tolerance based on the residual norm;\nitmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;\ntimemax: the time limit in seconds;\nverbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;\nhistory: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;\ncallback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;\niostream: stream to which output is logged.\n\nOutput arguments\n\nx: a dense vector of length n;\nstats: statistics collected on the run in a SimpleStats structure.\n\nReferences\n\nM. A. Saunders, H. D. Simon, and E. L. Yip, Two Conjugate-Gradient-Type Methods for Unsymmetric Linear Equations, SIAM Journal on Numerical Analysis, 25(4), pp. 927–940, 1988.\nA. Buttari, D. Orban, D. Ruiz and D. Titley-Peloquin, A tridiagonalization method for symmetric saddle-point and quasi-definite systems, SIAM Journal on Scientific Computing, 41(5), pp. 409–432, 2019.\nA. Montoison and D. 
Orban, BiLQ: An Iterative Method for Nonsymmetric Linear Systems with a Quasi-Minimum Error Property, SIAM Journal on Matrix Analysis and Applications, 41(3), pp. 1145–1166, 2020.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ln/#Krylov.usymlq!","page":"Least-norm problems","title":"Krylov.usymlq!","text":"solver = usymlq!(solver::UsymlqSolver, A, b, c; kwargs...)\nsolver = usymlq!(solver::UsymlqSolver, A, b, c, x0; kwargs...)\n\nwhere kwargs are keyword arguments of usymlq.\n\nSee UsymlqSolver for more details about the solver.\n\n\n\n\n\n","category":"function"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"","category":"page"},{"location":"processes/#krylov-processes","page":"Krylov processes","title":"Krylov processes","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Krylov processes are the foundation of Krylov methods; they generate bases of Krylov subspaces. Depending on the Krylov subspaces generated, Krylov processes are more or less specialized for a subset of linear problems. The following table summarizes the most relevant processes for each linear problem.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Linear problems Processes\nHermitian linear systems Hermitian Lanczos\nSquare non-Hermitian linear systems Non-Hermitian Lanczos – Arnoldi\nLeast-squares problems Golub-Kahan – Saunders-Simon-Yip\nLeast-norm problems Golub-Kahan – Saunders-Simon-Yip\nSaddle-point and Hermitian quasi-definite systems Golub-Kahan – Saunders-Simon-Yip\nGeneralized saddle-point and non-Hermitian partitioned systems Montoison-Orban","category":"page"},{"location":"processes/#Notation","page":"Krylov processes","title":"Notation","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"For a matrix A, A^H denotes the conjugate transpose of A. It coincides with A^T, the transpose of A, for real matrices. 
Define V_k = \\begin{bmatrix} v_1 & \\ldots & v_k \\end{bmatrix} \\enspace \\text{and} \\enspace U_k = \\begin{bmatrix} u_1 & \\ldots & u_k \\end{bmatrix}.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"For a matrix C \\in \\mathbb{C}^{n \\times n} and a vector t \\in \\mathbb{C}^{n}, the k-th Krylov subspace generated by C and t is","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"\\mathcal{K}_k(C, t) =\n\\left\\{\\sum_{i=0}^{k-1} \\omega_i C^i t \\,\\middle\\vert\\, \\omega_i \\in \\mathbb{C}, 0 \\le i \\le k-1 \\right\\}","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"For matrices C \\in \\mathbb{C}^{n \\times n} \\enspace \\text{and} \\enspace T \\in \\mathbb{C}^{n \\times p}, the k-th block Krylov subspace generated by C and T is","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"\\mathcal{K}_k^{\\square}(C, T) =\n\\left\\{\\sum_{i=0}^{k-1} C^i T \\Omega_i \\,\\middle\\vert\\, \\Omega_i \\in \\mathbb{C}^{p \\times p}, 0 \\le i \\le k-1 \\right\\}","category":"page"},{"location":"processes/#hermitian-lanczos","page":"Krylov processes","title":"Hermitian Lanczos","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"(Image: hermitian_lanczos)","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"After k iterations of the Hermitian Lanczos process, the situation may be summarized as","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"\\begin{align*}\n A V_k &= V_k T_k + \\beta_{k+1,k} v_{k+1} e_k^T = V_{k+1} T_{k+1,k} \\\\\n V_k^H V_k &= I_k\n\\end{align*}","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"where V_k is an orthonormal basis of the Krylov subspace \\mathcal{K}_k(A, b),","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"T_k =\n\\begin{bmatrix}\n \\alpha_1 & \\bar{\\beta}_2 & & \\\\\n \\beta_2 & \\alpha_2 & \\ddots & \\\\\n & \\ddots & \\ddots & \\bar{\\beta}_k \\\\\n & & \\beta_k & \\alpha_k\n\\end{bmatrix},\n\\qquad\nT_{k+1,k} =\n\\begin{bmatrix}\n T_k \\\\\n \\beta_{k+1} e_k^T\n\\end{bmatrix}","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Note that depending on how we normalize the vectors that compose V_k, T_{k+1,k} can be a real tridiagonal matrix even if A is a complex matrix.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"The function hermitian_lanczos returns V_{k+1}, \\beta_1 and T_{k+1,k}.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Related methods: SYMMLQ, CG, CR, CAR, MINRES, MINRES-QLP, MINARES, CGLS, CRLS, CGNE, CRMR, CG-LANCZOS and CG-LANCZOS-SHIFT.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"hermitian_lanczos(::Any, ::AbstractVector{FC}, ::Int) where FC <: (Union{Complex{T}, T} where T <: AbstractFloat)","category":"page"},{"location":"processes/#Krylov.hermitian_lanczos-Union{Tuple{FC}, Tuple{Any, AbstractVector{FC}, Int64}} where FC<:(Union{Complex{T}, T} where T<:AbstractFloat)","page":"Krylov processes","title":"Krylov.hermitian_lanczos","text":"V, β, T = hermitian_lanczos(A, b, k)\n\nInput arguments\n\nA: a linear operator that models an Hermitian matrix of dimension n;\nb: a vector of length n;\nk: the number of iterations of the Hermitian Lanczos process.\n\nOutput arguments\n\nV: a dense n × (k+1) matrix;\nβ: a coefficient such that 
βv₁ = b;\nT: a sparse (k+1) × k tridiagonal matrix.\n\nReference\n\nC. Lanczos, An Iteration Method for the Solution of the Eigenvalue Problem of Linear Differential and Integral Operators, Journal of Research of the National Bureau of Standards, 45(4), pp. 255–282, 1950.\n\n\n\n\n\n","category":"method"},{"location":"processes/#nonhermitian-lanczos","page":"Krylov processes","title":"Non-Hermitian Lanczos","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"(Image: nonhermitian_lanczos)","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"After k iterations of the non-Hermitian Lanczos process (also named the Lanczos biorthogonalization process), the situation may be summarized as","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"\\begin{align*}\n A V_k &= V_k T_k + \\beta_{k+1} v_{k+1} e_k^T = V_{k+1} T_{k+1,k} \\\\\n A^H U_k &= U_k T_k^H + \\bar{\\gamma}_{k+1} u_{k+1} e_k^T = U_{k+1} T_{k,k+1}^H \\\\\n V_k^H U_k &= U_k^H V_k = I_k\n\\end{align*}","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"where V_k and U_k are bases of the Krylov subspaces \\mathcal{K}_k(A, b) and \\mathcal{K}_k(A^H, c), respectively,","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"T_k = \n\\begin{bmatrix}\n \\alpha_1 & \\gamma_2 & & \\\\\n \\beta_2 & \\alpha_2 & \\ddots & \\\\\n & \\ddots & \\ddots & \\gamma_k \\\\\n & & \\beta_k & \\alpha_k\n\\end{bmatrix},\n\\qquad\nT_{k+1,k} =\n\\begin{bmatrix}\n T_k \\\\\n \\beta_{k+1} e_k^T\n\\end{bmatrix},\n\\qquad\nT_{k,k+1} =\n\\begin{bmatrix}\n T_k & \\gamma_{k+1} e_k\n\\end{bmatrix}","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"The function nonhermitian_lanczos returns V_{k+1}, \\beta_1, T_{k+1,k}, U_{k+1}, \\bar{\\gamma}_1 and T_{k,k+1}^H.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Related methods: BiLQ, QMR, BiLQR, CGS and BICGSTAB.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"note: Note\nThe scaling factors used in our implementation are \\beta_k = |u_k^H v_k|^{\\tfrac{1}{2}} and \\gamma_k = (u_k^H v_k) / \\beta_k. With these scaling factors, the non-Hermitian Lanczos process coincides with the Hermitian Lanczos process when A = A^H and b = c.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"nonhermitian_lanczos(::Any, ::AbstractVector{FC}, ::AbstractVector{FC}, ::Int) where FC <: (Union{Complex{T}, T} where T <: AbstractFloat)","category":"page"},{"location":"processes/#Krylov.nonhermitian_lanczos-Union{Tuple{FC}, Tuple{Any, AbstractVector{FC}, AbstractVector{FC}, Int64}} where FC<:(Union{Complex{T}, T} where T<:AbstractFloat)","page":"Krylov processes","title":"Krylov.nonhermitian_lanczos","text":"V, β, T, U, γᴴ, Tᴴ = nonhermitian_lanczos(A, b, c, k)\n\nInput arguments\n\nA: a linear operator that models a square matrix of dimension n;\nb: a vector of length n;\nc: a vector of length n;\nk: the number of iterations of the non-Hermitian Lanczos process.\n\nOutput arguments\n\nV: a dense n × (k+1) matrix;\nβ: a coefficient such that βv₁ = b;\nT: a sparse (k+1) × k tridiagonal matrix;\nU: a dense n × (k+1) matrix;\nγᴴ: a coefficient such that γᴴu₁ = c;\nTᴴ: a sparse (k+1) × k tridiagonal matrix.\n\nReference\n\nC. 
Lanczos, An Iteration Method for the Solution of the Eigenvalue Problem of Linear Differential and Integral Operators, Journal of Research of the National Bureau of Standards, 45(4), pp. 255–282, 1950.\n\n\n\n\n\n","category":"method"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"The non-Hermitian Lanczos process can also be implemented without A^H (transpose-free variant). To derive it, we can observe that \\beta_{k+1} v_{k+1} = P_k(A) b and \\bar{\\gamma}_{k+1} u_{k+1} = Q_k(A^H) c where P_k and Q_k are polynomials of degree k. The polynomials are defined from the recursions:","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"\\begin{align*}\n P_0(A) &= I_n \\\\\n P_1(A) &= \\left(\\dfrac{A - \\alpha_1 I_n}{\\beta_1}\\right) P_0(A) \\\\\n P_k(A) &= \\left(\\dfrac{A - \\alpha_k I_n}{\\beta_k}\\right) P_{k-1}(A) - \\dfrac{\\gamma_k}{\\beta_{k-1}} P_{k-2}(A), \\quad k \\ge 2 \\\\\n \\\\\n Q_0(A^H) &= I_n \\\\\n Q_1(A^H) &= \\left(\\dfrac{A^H - \\bar{\\alpha}_1 I_n}{\\bar{\\gamma}_1}\\right) Q_0(A^H) \\\\\n Q_k(A^H) &= \\left(\\dfrac{A^H - \\bar{\\alpha}_k I_n}{\\bar{\\gamma}_k}\\right) Q_{k-1}(A^H) - \\dfrac{\\bar{\\beta}_k}{\\bar{\\gamma}_{k-1}} Q_{k-2}(A^H), \\quad k \\ge 2\n\\end{align*}","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Because \\alpha_k = u_k^H A v_k and (\\bar{\\gamma}_{k+1} u_{k+1})^H (\\beta_{k+1} v_{k+1}) = \\gamma_{k+1} \\beta_{k+1}, we can determine the coefficients of T_{k+1,k} and T_{k,k+1}^H as follows:","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"\\begin{align*}\n \\alpha_k &= \\dfrac{1}{\\gamma_k \\beta_k} \\langle Q_{k-1}(A^H) c, A P_{k-1}(A) b \\rangle \\\\\n &= \\dfrac{1}{\\gamma_k \\beta_k} \\langle c, \\bar{Q}_{k-1}(A) A P_{k-1}(A) b \\rangle \\\\\n \\\\\n \\beta_{k+1} \\gamma_{k+1} &= \\langle Q_k(A^H) c, P_k(A) b \\rangle \\\\\n &= \\langle c, \\bar{Q}_k(A) P_k(A) b \\rangle\n\\end{align*}","category":"page"},{"location":"processes/#arnoldi","page":"Krylov processes","title":"Arnoldi","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"(Image: arnoldi)","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"After k iterations of the Arnoldi process, the situation may be summarized as","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"\\begin{align*}\n A V_k &= V_k H_k + h_{k+1,k} v_{k+1} e_k^T = V_{k+1} H_{k+1,k} \\\\\n V_k^H V_k &= I_k\n\\end{align*}","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"where V_k is an orthonormal basis of the Krylov subspace \\mathcal{K}_k(A, b),","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"H_k =\n\\begin{bmatrix}\n h_{1,1} & h_{1,2} & \\ldots & h_{1,k} \\\\\n h_{2,1} & \\ddots & \\ddots & \\vdots \\\\\n & \\ddots & \\ddots & h_{k-1,k} \\\\\n & & h_{k,k-1} & h_{k,k}\n\\end{bmatrix},\n\\qquad\nH_{k+1,k} =\n\\begin{bmatrix}\n H_k \\\\\n h_{k+1,k} e_k^T\n\\end{bmatrix}","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"The function arnoldi returns V_{k+1}, \\beta and H_{k+1,k}.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Related methods: DIOM, FOM, DQGMRES, GMRES and FGMRES.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"note: Note\nThe Arnoldi process coincides with the Hermitian Lanczos process when A is Hermitian.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"arnoldi(::Any, ::AbstractVector{FC}, ::Int) where FC <: (Union{Complex{T}, 
T} where T <: AbstractFloat)","category":"page"},{"location":"processes/#Krylov.arnoldi-Union{Tuple{FC}, Tuple{Any, AbstractVector{FC}, Int64}} where FC<:(Union{Complex{T}, T} where T<:AbstractFloat)","page":"Krylov processes","title":"Krylov.arnoldi","text":"V, β, H = arnoldi(A, b, k; reorthogonalization=false)\n\nInput arguments\n\nA: a linear operator that models a square matrix of dimension n;\nb: a vector of length n;\nk: the number of iterations of the Arnoldi process.\n\nKeyword arguments\n\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors.\n\nOutput arguments\n\nV: a dense n × (k+1) matrix;\nβ: a coefficient such that βv₁ = b;\nH: a dense (k+1) × k upper Hessenberg matrix.\n\nReference\n\nW. E. Arnoldi, The principle of minimized iterations in the solution of the matrix eigenvalue problem, Quarterly of Applied Mathematics, 9, pp. 17–29, 1951.\n\n\n\n\n\n","category":"method"},{"location":"processes/#golub-kahan","page":"Krylov processes","title":"Golub-Kahan","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"(Image: golub_kahan)","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"After k iterations of the Golub-Kahan bidiagonalization process, the situation may be summarized as","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"\\begin{align*}\n A V_k &= U_{k+1} B_k \\\\\n A^H U_{k+1} &= V_k B_k^H + \\bar{\\alpha}_{k+1} v_{k+1} e_{k+1}^T = V_{k+1} L_{k+1}^H \\\\\n V_k^H V_k &= U_k^H U_k = I_k\n\\end{align*}","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"where V_k and U_k are bases of the Krylov subspaces \\mathcal{K}_k(A^HA, A^Hb) and \\mathcal{K}_k(AA^H, b), respectively,","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"L_k =\n\\begin{bmatrix}\n \\alpha_1 & & & \\\\\n \\beta_2 & \\alpha_2 & & \\\\\n & \\ddots & \\ddots & \\\\\n & & \\beta_k & \\alpha_k\n\\end{bmatrix},\n\\qquad\nB_k =\n\\begin{bmatrix}\n \\alpha_1 & & & \\\\\n \\beta_2 & \\alpha_2 & & \\\\\n & \\ddots & \\ddots & \\\\\n & & \\beta_k & \\alpha_k \\\\\n & & & \\beta_{k+1}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n L_k \\\\\n \\beta_{k+1} e_k^T\n\\end{bmatrix}","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Note that depending on how we normalize the vectors that compose V_k and U_k, L_k can be a real bidiagonal matrix even if A is a complex matrix.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"The function golub_kahan returns V_{k+1}, U_{k+1}, \\beta_1 and L_{k+1}.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Related methods: LNLQ, CRAIG, CRAIGMR, LSLQ, LSQR and LSMR.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"note: Note\nThe Golub-Kahan process coincides with the Hermitian Lanczos process applied to the normal equations A^HA x = A^Hb and AA^H x = b. 
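The relations above are easy to verify numerically. Here is a hedged sketch with random data (the sizes are hypothetical; Bₖ is recovered as Lₖ₊₁ without its last column):

using Krylov, LinearAlgebra

A = rand(10, 6)
b = rand(10)
k = 4

V, U, β, L = golub_kahan(A, b, k)
B = L[:, 1:k]  # Bₖ: the (k+1) × k leading block of Lₖ₊₁

norm(A * V[:, 1:k] - U * B)  # A Vₖ = Uₖ₊₁ Bₖ, up to rounding
norm(A' * U - V * L')        # Aᴴ Uₖ₊₁ = Vₖ₊₁ Lₖ₊₁ᴴ, up to rounding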
It is also related to the Hermitian Lanczos process applied to \\begin{bmatrix} 0 & A \\\\ A^H & 0 \\end{bmatrix} with initial vector \\begin{bmatrix} b \\\\ 0 \\end{bmatrix}.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"golub_kahan(::Any, ::AbstractVector{FC}, ::Int) where FC <: (Union{Complex{T}, T} where T <: AbstractFloat)","category":"page"},{"location":"processes/#Krylov.golub_kahan-Union{Tuple{FC}, Tuple{Any, AbstractVector{FC}, Int64}} where FC<:(Union{Complex{T}, T} where T<:AbstractFloat)","page":"Krylov processes","title":"Krylov.golub_kahan","text":"V, U, β, L = golub_kahan(A, b, k)\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m;\nk: the number of iterations of the Golub-Kahan process.\n\nOutput arguments\n\nV: a dense n × (k+1) matrix;\nU: a dense m × (k+1) matrix;\nβ: a coefficient such that βu₁ = b;\nL: a sparse (k+1) × (k+1) lower bidiagonal matrix.\n\nReferences\n\nG. H. Golub and W. Kahan, Calculating the Singular Values and Pseudo-Inverse of a Matrix, SIAM Journal on Numerical Analysis, 2(2), pp. 205–224, 1965.\nC. C. Paige, Bidiagonalization of Matrices and Solution of Linear Equations, SIAM Journal on Numerical Analysis, 11(1), pp. 197–209, 1974.\n\n\n\n\n\n","category":"method"},{"location":"processes/#saunders-simon-yip","page":"Krylov processes","title":"Saunders-Simon-Yip","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"(Image: saunders_simon_yip)","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"After k iterations of the Saunders-Simon-Yip process (also named the orthogonal tridiagonalization process), the situation may be summarized as","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"\\begin{align*}\n A U_k &= V_k T_k + \\beta_{k+1} v_{k+1} e_k^T = V_{k+1} T_{k+1,k} \\\\\n A^H V_k &= U_k T_k^H + \\bar{\\gamma}_{k+1} u_{k+1} e_k^T = U_{k+1} T_{k,k+1}^H \\\\\n V_k^H V_k &= U_k^H U_k = I_k\n\\end{align*}","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"where \\begin{bmatrix} V_k & 0 \\\\ 0 & U_k \\end{bmatrix} is an orthonormal basis of the block Krylov subspace \\mathcal{K}^{\\square}_k \\left(\\begin{bmatrix} 0 & A \\\\ A^H & 0 \\end{bmatrix}, \\begin{bmatrix} b & 0 \\\\ 0 & c \\end{bmatrix}\\right),","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"T_k = \n\\begin{bmatrix}\n \\alpha_1 & \\gamma_2 & & \\\\\n \\beta_2 & \\alpha_2 & \\ddots & \\\\\n & \\ddots & \\ddots & \\gamma_k \\\\\n & & \\beta_k & \\alpha_k\n\\end{bmatrix},\n\\qquad\nT_{k+1,k} =\n\\begin{bmatrix}\n T_k \\\\\n \\beta_{k+1} e_k^T\n\\end{bmatrix},\n\\qquad\nT_{k,k+1} =\n\\begin{bmatrix}\n T_k & \\gamma_{k+1} e_k\n\\end{bmatrix}","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"The function saunders_simon_yip returns V_{k+1}, \\beta_1, T_{k+1,k}, U_{k+1}, \\bar{\\gamma}_1 and T_{k,k+1}^H.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Related methods: USYMLQ, USYMQR, TriLQR, TriCG and TriMR.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"note: Note\nThe Saunders-Simon-Yip process is equivalent to the block-Lanczos process applied to \\begin{bmatrix} 0 & A \\\\ A^H & 0 \\end{bmatrix} with initial matrix \\begin{bmatrix} b & 0 \\\\ 0 & c \\end{bmatrix}.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"saunders_simon_yip(::Any, ::AbstractVector{FC}, 
::AbstractVector{FC}, ::Int) where FC <: (Union{Complex{T}, T} where T <: AbstractFloat)","category":"page"},{"location":"processes/#Krylov.saunders_simon_yip-Union{Tuple{FC}, Tuple{Any, AbstractVector{FC}, AbstractVector{FC}, Int64}} where FC<:(Union{Complex{T}, T} where T<:AbstractFloat)","page":"Krylov processes","title":"Krylov.saunders_simon_yip","text":"V, β, T, U, γᴴ, Tᴴ = saunders_simon_yip(A, b, c, k)\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nb: a vector of length m;\nc: a vector of length n;\nk: the number of iterations of the Saunders-Simon-Yip process.\n\nOutput arguments\n\nV: a dense m × (k+1) matrix;\nβ: a coefficient such that βv₁ = b;\nT: a sparse (k+1) × k tridiagonal matrix;\nU: a dense n × (k+1) matrix;\nγᴴ: a coefficient such that γᴴu₁ = c;\nTᴴ: a sparse (k+1) × k tridiagonal matrix.\n\nReference\n\nM. A. Saunders, H. D. Simon, and E. L. Yip, Two Conjugate-Gradient-Type Methods for Unsymmetric Linear Equations, SIAM Journal on Numerical Analysis, 25(4), pp. 927–940, 1988.\n\n\n\n\n\n","category":"method"},{"location":"processes/#montoison-orban","page":"Krylov processes","title":"Montoison-Orban","text":"","category":"section"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"(Image: montoison_orban)","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"After k iterations of the Montoison-Orban process (also named the orthogonal Hessenberg reduction process), the situation may be summarized as","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"\\begin{align*}\n A U_k &= V_k H_k + h_{k+1,k} v_{k+1} e_k^T = V_{k+1} H_{k+1,k} \\\\\n B V_k &= U_k F_k + f_{k+1,k} u_{k+1} e_k^T = U_{k+1} F_{k+1,k} \\\\\n V_k^H V_k &= U_k^H U_k = I_k\n\\end{align*}","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"where \\begin{bmatrix} V_k & 0 \\\\ 0 & U_k \\end{bmatrix} is an orthonormal basis of the block Krylov subspace \\mathcal{K}^{\\square}_k \\left(\\begin{bmatrix} 0 & A \\\\ B & 0 \\end{bmatrix}, \\begin{bmatrix} b & 0 \\\\ 0 & c \\end{bmatrix}\\right),","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"H_k =\n\\begin{bmatrix}\n h_{1,1} & h_{1,2} & \\ldots & h_{1,k} \\\\\n h_{2,1} & \\ddots & \\ddots & \\vdots \\\\\n & \\ddots & \\ddots & h_{k-1,k} \\\\\n & & h_{k,k-1} & h_{k,k}\n\\end{bmatrix},\n\\qquad\nF_k =\n\\begin{bmatrix}\n f_{1,1} & f_{1,2} & \\ldots & f_{1,k} \\\\\n f_{2,1} & \\ddots & \\ddots & \\vdots \\\\\n & \\ddots & \\ddots & f_{k-1,k} \\\\\n & & f_{k,k-1} & f_{k,k}\n\\end{bmatrix}","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"H_{k+1,k} =\n\\begin{bmatrix}\n H_k \\\\\n h_{k+1,k} e_k^T\n\\end{bmatrix},\n\\qquad\nF_{k+1,k} =\n\\begin{bmatrix}\n F_k \\\\\n f_{k+1,k} e_k^T\n\\end{bmatrix}","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"The function montoison_orban returns V_{k+1}, \\beta, H_{k+1,k}, U_{k+1}, \\gamma and F_{k+1,k}.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"Related methods: GPMR.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"note: Note\nThe Montoison-Orban process is equivalent to the block-Arnoldi process applied to \\begin{bmatrix} 0 & A \\\\ B & 0 \\end{bmatrix} with initial matrix \\begin{bmatrix} b & 0 \\\\ 0 & c \\end{bmatrix}. 
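The two relations above can likewise be checked numerically; a hedged sketch with random data and hypothetical sizes:

using Krylov, LinearAlgebra

m, n, k = 8, 6, 4
A = rand(m, n)
B = rand(n, m)
b = rand(m)
c = rand(n)

V, β, H, U, γ, F = montoison_orban(A, B, b, c, k)

norm(A * U[:, 1:k] - V * H)  # A Uₖ = Vₖ₊₁ Hₖ₊₁,ₖ, up to rounding
norm(B * V[:, 1:k] - U * F)  # B Vₖ = Uₖ₊₁ Fₖ₊₁,ₖ, up to rounding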
It also coincides with the Saunders-Simon-Yip process when B = A^H.","category":"page"},{"location":"processes/","page":"Krylov processes","title":"Krylov processes","text":"montoison_orban(::Any, ::Any, ::AbstractVector{FC}, ::AbstractVector{FC}, ::Int) where FC <: (Union{Complex{T}, T} where T <: AbstractFloat)","category":"page"},{"location":"processes/#Krylov.montoison_orban-Union{Tuple{FC}, Tuple{Any, Any, AbstractVector{FC}, AbstractVector{FC}, Int64}} where FC<:(Union{Complex{T}, T} where T<:AbstractFloat)","page":"Krylov processes","title":"Krylov.montoison_orban","text":"V, β, H, U, γ, F = montoison_orban(A, B, b, c, k; reorthogonalization=false)\n\nInput arguments\n\nA: a linear operator that models a matrix of dimension m × n;\nB: a linear operator that models a matrix of dimension n × m;\nb: a vector of length m;\nc: a vector of length n;\nk: the number of iterations of the Montoison-Orban process.\n\nKeyword arguments\n\nreorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors.\n\nOutput arguments\n\nV: a dense m × (k+1) matrix;\nβ: a coefficient such that βv₁ = b;\nH: a dense (k+1) × k upper Hessenberg matrix;\nU: a dense n × (k+1) matrix;\nγ: a coefficient such that γu₁ = c;\nF: a dense (k+1) × k upper Hessenberg matrix.\n\nReference\n\nA. Montoison and D. Orban, GPMR: An Iterative Method for Unsymmetric Partitioned Linear Systems, SIAM Journal on Matrix Analysis and Applications, 44(1), pp. 293–311, 2023.\n\n\n\n\n\n","category":"method"},{"location":"examples/crls/","page":"CRLS","title":"CRLS","text":"using MatrixMarket, SuiteSparseMatrixCollection\nusing Krylov, LinearOperators\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"well1850\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1])_b.mtx\"))[:]\n(m, n) = size(A)\n@printf(\"System size: %d rows and %d columns\\n\", m, n)\n\n# Define a regularization parameter.\nλ = 1.0e-3\n\n(x, stats) = crls(A, b, λ=λ)\nshow(stats)\nresid = norm(A' * (A * x - b) + λ * x) / norm(b)\n@printf(\"CRLS: Relative residual: %8.1e\\n\", resid)\n@printf(\"CRLS: ‖x‖: %8.1e\\n\", norm(x))","category":"page"},{"location":"callbacks/#callbacks","page":"Callbacks","title":"Callbacks","text":"","category":"section"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"Each Krylov method is able to call a callback function as callback(solver) at each iteration. The callback should return true if the main loop should terminate, and false otherwise. If the method terminated because of the callback, the output status will be \"user-requested exit\". 
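As a minimal hedged sketch (random data and hypothetical names), a callback can request termination after 10 iterations by counting its own calls:

using Krylov, LinearAlgebra

n = 100
A = Matrix(Diagonal(1.0:n))  # simple SPD test matrix
b = rand(n)

counter = Ref(0)  # state captured by the callback closure
stop_early(solver) = (counter[] += 1) ≥ 10

(x, stats) = cg(A, b, callback = stop_early)
stats.status  # "user-requested exit", assuming CG has not already converged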
For example, if the user defines minres_callback(solver::MinresSolver), it can be passed to the solver using","category":"page"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"(x, stats) = minres(A, b, callback = minres_callback)","category":"page"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"If you need to write a callback that uses variables that are not in a KrylovSolver, use a closure:","category":"page"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"function custom_stopping_condition(solver::KrylovSolver, A, b, r, tol)\n mul!(r, A, solver.x)\n r .-= b # r := Ax - b\n bool = norm(r) ≤ tol # tolerance based on the 2-norm of the residual\n return bool\nend\n\ncg_callback(solver) = custom_stopping_condition(solver, A, b, r, tol)\n(x, stats) = cg(A, b, callback = cg_callback)","category":"page"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"Alternatively, use a structure and make it callable:","category":"page"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"mutable struct CallbackWorkspace{T}\n A::Matrix{T}\n b::Vector{T}\n r::Vector{T}\n tol::T\nend\n\nfunction (workspace::CallbackWorkspace)(solver::KrylovSolver)\n mul!(workspace.r, workspace.A, solver.x)\n workspace.r .-= workspace.b\n bool = norm(workspace.r) ≤ workspace.tol\n return bool\nend\n\nbicgstab_callback = CallbackWorkspace(A, b, r, tol)\n(x, stats) = bicgstab(A, b, callback = bicgstab_callback)","category":"page"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"Although the main goal of a callback is to add new stopping conditions, it can also retrieve information from the workspace of a Krylov method throughout the iterations. We now illustrate how to store all iterates x_k of the GMRES method.","category":"page"},{"location":"callbacks/","page":"Callbacks","title":"Callbacks","text":"S = Krylov.ktypeof(b)\nglobal X = S[] # Storage for GMRES iterates\n\nfunction gmres_callback(solver)\n z = solver.z\n k = solver.inner_iter\n nr = sum(1:k)\n V = solver.V\n R = solver.R\n y = copy(z)\n\n # Solve Rk * yk = zk\n for i = k : -1 : 1\n pos = nr + i - k\n for j = k : -1 : i+1\n y[i] = y[i] - R[pos] * y[j]\n pos = pos - j + 1\n end\n y[i] = y[i] / R[pos]\n end\n\n # xk = Vk * yk\n xk = sum(V[i] * y[i] for i = 1:k)\n push!(X, xk)\n\n return false # We don't want to add new stopping conditions\nend\n\n(x, stats) = gmres(A, b, callback = gmres_callback)","category":"page"},{"location":"gpu/#gpu","page":"GPU support","title":"GPU support","text":"","category":"section"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"Krylov methods are well suited for GPU computations because they only require matrix-vector products (u \\leftarrow Av, u \\leftarrow A^Hw) and vector operations (\\|v\\|, u^H v, v \\leftarrow \\alpha u + \\beta v), which are highly parallelizable.","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"The implementations in Krylov.jl are generic so as to take advantage of the multiple dispatch and broadcast features of Julia. Those allow the implementations to be specialized automatically by the compiler for both CPU and GPU. 
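For instance, the same solver runs at several precisions on the CPU without any code change; a small hedged sketch:

using Krylov

for T in (Float32, Float64, BigFloat)
    A = T[4 1; 1 3]  # small SPD matrix assembled in precision T
    b = T[1, 2]
    x, stats = cg(A, b)
    println(T, ": ", stats.solved)
end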
Thus, Krylov.jl works with GPU backends that build on GPUArrays.jl, such as CUDA.jl, AMDGPU.jl, oneAPI.jl or Metal.jl.","category":"page"},{"location":"gpu/#Nvidia-GPUs","page":"GPU support","title":"Nvidia GPUs","text":"","category":"section"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"All solvers in Krylov.jl can be used with CUDA.jl and allow computations on Nvidia GPUs. Problems stored in CPU format (Matrix and Vector) must first be converted to the related GPU format (CuMatrix and CuVector).","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using CUDA, Krylov\n\nif CUDA.functional()\n # CPU Arrays\n A_cpu = rand(20, 20)\n b_cpu = rand(20)\n\n # GPU Arrays\n A_gpu = CuMatrix(A_cpu)\n b_gpu = CuVector(b_cpu)\n\n # Solve a square and dense system on an Nvidia GPU\n x, stats = bilq(A_gpu, b_gpu)\nend","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"Sparse matrices have a specific storage on Nvidia GPUs (CuSparseMatrixCSC, CuSparseMatrixCSR or CuSparseMatrixCOO):","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using CUDA, Krylov\nusing CUDA.CUSPARSE, SparseArrays\n\nif CUDA.functional()\n # CPU Arrays\n A_cpu = sprand(200, 100, 0.3)\n b_cpu = rand(200)\n\n # GPU Arrays\n A_csc_gpu = CuSparseMatrixCSC(A_cpu)\n A_csr_gpu = CuSparseMatrixCSR(A_cpu)\n A_coo_gpu = CuSparseMatrixCOO(A_cpu)\n b_gpu = CuVector(b_cpu)\n\n # Solve a rectangular and sparse system on an Nvidia GPU\n x_csc, stats_csc = lslq(A_csc_gpu, b_gpu)\n x_csr, stats_csr = lsqr(A_csr_gpu, b_gpu)\n x_coo, stats_coo = lsmr(A_coo_gpu, b_gpu)\nend","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"If you use a Krylov method that only requires A * v products (see here), the most efficient format is CuSparseMatrixCSR. 
Optimized operator-vector products that exploit GPU features can also be used by means of linear operators.","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"Preconditioners, especially incomplete Cholesky or incomplete LU factorizations that involve triangular solves, can be applied directly on GPU thanks to efficient operators that take advantage of CUSPARSE routines.","category":"page"},{"location":"gpu/#Example-with-a-symmetric-positive-definite-system","page":"GPU support","title":"Example with a symmetric positive-definite system","text":"","category":"section"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using SparseArrays, Krylov, LinearOperators\nusing CUDA, CUDA.CUSPARSE\n\nif CUDA.functional()\n # Transfer the linear system from the CPU to the GPU\n A_gpu = CuSparseMatrixCSR(A_cpu) # A_gpu = CuSparseMatrixCSC(A_cpu)\n b_gpu = CuVector(b_cpu)\n\n # IC(0) decomposition LLᴴ ≈ A for CuSparseMatrixCSC or CuSparseMatrixCSR matrices\n P = ic02(A_gpu)\n\n # Additional vector required for solving triangular systems\n n = length(b_gpu)\n T = eltype(b_gpu)\n z = CUDA.zeros(T, n)\n\n # Solve Py = x\n function ldiv_ic0!(P::CuSparseMatrixCSR, x, y, z)\n ldiv!(z, LowerTriangular(P), x) # Forward substitution with L\n ldiv!(y, LowerTriangular(P)', z) # Backward substitution with Lᴴ\n return y\n end\n\n function ldiv_ic0!(P::CuSparseMatrixCSC, x, y, z)\n ldiv!(z, UpperTriangular(P)', x) # Forward substitution with L\n ldiv!(y, UpperTriangular(P), z) # Backward substitution with Lᴴ\n return y\n end\n\n # Operator that models P⁻¹\n symmetric = hermitian = true\n opM = LinearOperator(T, n, n, symmetric, hermitian, (y, x) -> ldiv_ic0!(P, x, y, z))\n\n # Solve an Hermitian positive definite system with an IC(0) preconditioner on GPU\n x, stats = cg(A_gpu, b_gpu, M=opM)\nend","category":"page"},{"location":"gpu/#Example-with-a-general-square-system","page":"GPU support","title":"Example with a general square system","text":"","category":"section"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using SparseArrays, Krylov, LinearOperators\nusing CUDA, CUDA.CUSPARSE, CUDA.CUSOLVER\n\nif CUDA.functional()\n # Optional -- Compute a permutation vector p such that A[:,p] has no zero diagonal\n p = zfd(A_cpu)\n p .+= 1\n A_cpu = A_cpu[:,p]\n\n # Transfer the linear system from the CPU to the GPU\n A_gpu = CuSparseMatrixCSR(A_cpu) # A_gpu = CuSparseMatrixCSC(A_cpu)\n b_gpu = CuVector(b_cpu)\n\n # ILU(0) decomposition LU ≈ A for CuSparseMatrixCSC or CuSparseMatrixCSR matrices\n P = ilu02(A_gpu)\n\n # Additional vector required for solving triangular systems\n n = length(b_gpu)\n T = eltype(b_gpu)\n z = CUDA.zeros(T, n)\n\n # Solve Py = x\n function ldiv_ilu0!(P::CuSparseMatrixCSR, x, y, z)\n ldiv!(z, UnitLowerTriangular(P), x) # Forward substitution with L\n ldiv!(y, UpperTriangular(P), z) # Backward substitution with U\n return y\n end\n\n function ldiv_ilu0!(P::CuSparseMatrixCSC, x, y, z)\n ldiv!(z, LowerTriangular(P), x) # Forward substitution with L\n ldiv!(y, UnitUpperTriangular(P), z) # Backward substitution with U\n return y\n end\n\n # Operator that models P⁻¹\n symmetric = hermitian = false\n opM = LinearOperator(T, n, n, symmetric, hermitian, (y, x) -> ldiv_ilu0!(P, x, y, z))\n\n # Solve a non-Hermitian system with an ILU(0) preconditioner on GPU\n x̄, stats = bicgstab(A_gpu, b_gpu, M=opM)\n\n # Recover the solution of Ax = b with the solution of A[:,p]x̄ = b\n invp = invperm(p)\n x = 
x̄[invp]\nend","category":"page"},{"location":"gpu/#AMD-GPUs","page":"GPU support","title":"AMD GPUs","text":"","category":"section"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"All solvers in Krylov.jl can be used with AMDGPU.jl and allow computations on AMD GPUs. Problems stored in CPU format (Matrix and Vector) must first be converted to the related GPU format (ROCMatrix and ROCVector).","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using Krylov, AMDGPU\n\nif AMDGPU.functional()\n # CPU Arrays\n A_cpu = rand(ComplexF64, 20, 20)\n A_cpu = A_cpu + A_cpu'\n b_cpu = rand(ComplexF64, 20)\n\n A_gpu = ROCMatrix(A_cpu)\n b_gpu = ROCVector(b_cpu)\n\n # Solve a dense Hermitian system on an AMD GPU\n x, stats = minres(A_gpu, b_gpu)\nend","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"Sparse matrices have a specific storage on AMD GPUs (ROCSparseMatrixCSC, ROCSparseMatrixCSR or ROCSparseMatrixCOO):","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using AMDGPU, Krylov\nusing AMDGPU.rocSPARSE, SparseArrays\n\nif AMDGPU.functional()\n # CPU Arrays\n A_cpu = sprand(100, 200, 0.3)\n b_cpu = rand(100)\n\n # GPU Arrays\n A_csc_gpu = ROCSparseMatrixCSC(A_cpu)\n A_csr_gpu = ROCSparseMatrixCSR(A_cpu)\n A_coo_gpu = ROCSparseMatrixCOO(A_cpu)\n b_gpu = ROCVector(b_cpu)\n\n # Solve a rectangular and sparse system on an AMD GPU\n x_csc, y_csc, stats_csc = lnlq(A_csc_gpu, b_gpu)\n x_csr, y_csr, stats_csr = craig(A_csr_gpu, b_gpu)\n x_coo, y_coo, stats_coo = craigmr(A_coo_gpu, b_gpu)\nend","category":"page"},{"location":"gpu/#Intel-GPUs","page":"GPU support","title":"Intel GPUs","text":"","category":"section"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"All solvers in Krylov.jl can be used with oneAPI.jl and allow computations on Intel GPUs. Problems stored in CPU format (Matrix and Vector) must first be converted to the related GPU format (oneMatrix and oneVector).","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using Krylov, oneAPI\n\nif oneAPI.functional()\n T = Float32 # oneAPI.jl also works with ComplexF32\n m = 20\n n = 10\n\n # CPU Arrays\n A_cpu = rand(T, m, n)\n b_cpu = rand(T, m)\n\n # GPU Arrays\n A_gpu = oneMatrix(A_cpu)\n b_gpu = oneVector(b_cpu)\n\n # Solve a dense least-squares problem on an Intel GPU\n x, stats = lsqr(A_gpu, b_gpu)\nend","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"note: Note\nThe library oneMKL is interfaced in oneAPI.jl and accelerates linear algebra operations on Intel GPUs. Only dense linear systems are supported for the time being because sparse linear algebra routines are not interfaced yet.","category":"page"},{"location":"gpu/#Apple-M1-GPUs","page":"GPU support","title":"Apple M1 GPUs","text":"","category":"section"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"All solvers in Krylov.jl can be used with Metal.jl and allow computations on Apple M1 GPUs. 
Problems stored in CPU format (Matrix and Vector) must first be converted to the related GPU format (MtlMatrix and MtlVector).","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"using Krylov, Metal\n\nT = Float32 # Metal.jl also works with ComplexF32\nn = 10\nm = 20\n\n# CPU Arrays\nA_cpu = rand(T, n, m)\nb_cpu = rand(T, n)\n\n# GPU Arrays\nA_gpu = MtlMatrix(A_cpu)\nb_gpu = MtlVector(b_cpu)\n\n# Solve a dense least-norm problem on an Apple M1 GPU\nx, stats = craig(A_gpu, b_gpu)","category":"page"},{"location":"gpu/","page":"GPU support","title":"GPU support","text":"warning: Warning\nMetal.jl is under heavy development and is considered experimental for now.","category":"page"},{"location":"#Home","page":"Home","title":"Krylov.jl documentation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This package provides implementations of some of the most useful Krylov methods for a variety of problems:","category":"page"},{"location":"","page":"Home","title":"Home","text":"1 - Square or rectangular full-rank systems","category":"page"},{"location":"","page":"Home","title":"Home","text":" Ax = b","category":"page"},{"location":"","page":"Home","title":"Home","text":"should be solved when b lies in the range space of A. This situation occurs when","category":"page"},{"location":"","page":"Home","title":"Home","text":"A is square and nonsingular,\nA is tall and has full column rank and b lies in the range of A.","category":"page"},{"location":"","page":"Home","title":"Home","text":"2 - Linear least-squares problems","category":"page"},{"location":"","page":"Home","title":"Home","text":" min ‖b - Ax‖","category":"page"},{"location":"","page":"Home","title":"Home","text":"should be solved when b is not in the range of A (inconsistent systems), regardless of the shape and rank of A. This situation mainly occurs when","category":"page"},{"location":"","page":"Home","title":"Home","text":"A is square and singular,\nA is tall and thin.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Underdetermined systems are less common but also occur.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If there are infinitely many such x (because A is column rank-deficient), one with minimum norm is identified","category":"page"},{"location":"","page":"Home","title":"Home","text":" min ‖x‖  subject to  x ∈ argmin ‖b - Ax‖","category":"page"},{"location":"","page":"Home","title":"Home","text":"3 - Linear least-norm problems","category":"page"},{"location":"","page":"Home","title":"Home","text":" min ‖x‖  subject to  Ax = b","category":"page"},{"location":"","page":"Home","title":"Home","text":"should be solved when A is column rank-deficient but b is in the range of A (consistent systems), regardless of the shape of A. 
This situation mainly occurs when","category":"page"},{"location":"","page":"Home","title":"Home","text":"A is square and singular,\nA is short and wide.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Overdetermined systems are less common but also occur.","category":"page"},{"location":"","page":"Home","title":"Home","text":"4 - Adjoint systems","category":"page"},{"location":"","page":"Home","title":"Home","text":" Ax = b  and  Aᴴy = c","category":"page"},{"location":"","page":"Home","title":"Home","text":"where A can have any shape.","category":"page"},{"location":"","page":"Home","title":"Home","text":"5 - Saddle-point and Hermitian quasi-definite systems","category":"page"},{"location":"","page":"Home","title":"Home","text":" [M  A; Aᴴ  -N] [x; y] = [b; 0], [0; c] or [b; c]","category":"page"},{"location":"","page":"Home","title":"Home","text":"where A can have any shape.","category":"page"},{"location":"","page":"Home","title":"Home","text":"6 - Generalized saddle-point and non-Hermitian partitioned systems","category":"page"},{"location":"","page":"Home","title":"Home","text":" [M  A; B  N] [x; y] = [b; c]","category":"page"},{"location":"","page":"Home","title":"Home","text":"where A can have any shape and B has the shape of Aᴴ. A, B, b and c must all be nonzero.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Krylov solvers are particularly appropriate in situations where such problems must be solved but a factorization is not possible, because:","category":"page"},{"location":"","page":"Home","title":"Home","text":"A is not available explicitly,\nA would be dense or would consume an excessive amount of memory if it were materialized,\nfactors would consume an excessive amount of memory.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Iterative methods are recommended in any of the following situations:","category":"page"},{"location":"","page":"Home","title":"Home","text":"the problem is sufficiently large that a factorization is not feasible or would be slow,\nan effective preconditioner is known in cases where the problem has unfavorable spectral structure,\nthe operator can be represented efficiently as a sparse matrix,\nthe operator is fast, i.e., can be applied with better complexity than if it were materialized as a matrix. Certain fast operators would materialize as dense matrices.","category":"page"},{"location":"#Features","page":"Home","title":"Features","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"All solvers in Krylov.jl have an in-place version, are compatible with GPUs and work in any floating-point data type.","category":"page"},{"location":"#How-to-Install","page":"Home","title":"How to Install","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Krylov can be installed and tested through the Julia package manager:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> ]\npkg> add Krylov\npkg> test Krylov","category":"page"},{"location":"#Bug-reports-and-discussions","page":"Home","title":"Bug reports and discussions","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"If you think you found a bug, feel free to open an issue. 
Focused suggestions and requests can also be opened as issues. Before opening a pull request, please start an issue or a discussion on the topic.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.","category":"page"},{"location":"examples/cgls/","page":"CGLS","title":"CGLS","text":"using MatrixMarket, SuiteSparseMatrixCollection\nusing Krylov, LinearOperators\nusing LinearAlgebra, Printf\n\nssmc = ssmc_db(verbose=false)\nmatrix = ssmc_matrices(ssmc, \"HB\", \"well1033\")\npath = fetch_ssmc(matrix, format=\"MM\")\n\nA = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1]).mtx\"))\nb = MatrixMarket.mmread(joinpath(path[1], \"$(matrix.name[1])_b.mtx\"))[:]\n(m, n) = size(A)\n@printf(\"System size: %d rows and %d columns\\n\", m, n)\n\n# Define a regularization parameter.\nλ = 1.0e-3\n\n(x, stats) = cgls(A, b, λ=λ)\nshow(stats)\nresid = norm(A' * (A * x - b) + λ * x) / norm(b)\n@printf(\"CGLS: Relative residual: %8.1e\\n\", resid)\n@printf(\"CGLS: ‖x‖: %8.1e\\n\", norm(x))","category":"page"}] } diff --git a/dev/solvers/as/index.html b/dev/solvers/as/index.html index e171c00d2..ac33e55b0 100644 --- a/dev/solvers/as/index.html +++ b/dev/solvers/as/index.html @@ -4,11 +4,11 @@ rtol::T=√eps(T), itmax::Int=0, timemax::Float64=Inf, verbose::Int=0, history::Bool=false, callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, y, stats) = bilqr(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)

BiLQR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.

Combine BiLQ and QMR to solve adjoint systems.

[0  A] [y] = [b]
-[Aᴴ 0] [x]   [c]

The relation bᴴc ≠ 0 must be satisfied. BiLQ is used for solving the primal system Ax = b of size n. QMR is used for solving the dual system Aᴴy = c of size n.

Input arguments

Optional arguments

Keyword arguments

Output arguments

Reference

source
Krylov.bilqr!Function
solver = bilqr!(solver::BilqrSolver, A, b, c; kwargs...)
-solver = bilqr!(solver::BilqrSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of bilqr.

See BilqrSolver for more details about the solver.

source

TriLQR

Krylov.trilqrFunction
(x, y, stats) = trilqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};
+[Aᴴ 0] [x]   [c]

The relation bᴴc ≠ 0 must be satisfied. BiLQ is used for solving the primal system Ax = b of size n. QMR is used for solving the dual system Aᴴy = c of size n.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length n that represents an initial guess of the solution x;
  • y0: a vector of length n that represents an initial guess of the solution y.

Keyword arguments

  • transfer_to_bicg: transfer from the BiLQ point to the BiCG point, when it exists. The transfer is based on the residual norm;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length n;
  • stats: statistics collected on the run in an AdjointStats structure.

Reference

source
Krylov.bilqr!Function
solver = bilqr!(solver::BilqrSolver, A, b, c; kwargs...)
+solver = bilqr!(solver::BilqrSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of bilqr.

See BilqrSolver for more details about the solver.

source
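
For illustration, here is a minimal sketch of a BiLQR call; the random square system below is an assumption made only to show the calling sequence and the returned triple, not a realistic application.

using Krylov, LinearAlgebra

n = 100
A = rand(n, n) + n * I             # a comfortably nonsingular square operator
b = rand(n)
c = rand(n)                        # bᴴc ≠ 0 with high probability

x, y, stats = bilqr(A, b, c)
norm(A * x - b), norm(A' * y - c)  # primal and dual residual norms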

TriLQR

Krylov.trilqrFunction
(x, y, stats) = trilqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                        transfer_to_usymcg::Bool=true, atol::T=√eps(T),
                        rtol::T=√eps(T), itmax::Int=0,
                        timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                        callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, y, stats) = trilqr(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)

TriLQR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.

Combine USYMLQ and USYMQR to solve adjoint systems.

[0  A] [y] = [b]
-[Aᴴ 0] [x]   [c]

USYMLQ is used for solving the primal system Ax = b of size m × n. USYMQR is used for solving the dual system Aᴴy = c of size n × m.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length n that represents an initial guess of the solution x;
  • y0: a vector of length m that represents an initial guess of the solution y.

Keyword arguments

  • transfer_to_usymcg: transfer from the USYMLQ point to the USYMCG point, when it exists. The transfer is based on the residual norm;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in an AdjointStats structure.

Reference

source
Krylov.trilqr!Function
solver = trilqr!(solver::TrilqrSolver, A, b, c; kwargs...)
-solver = trilqr!(solver::TrilqrSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of trilqr.

See TrilqrSolver for more details about the solver.

source
+[Aᴴ 0] [x]   [c]

USYMLQ is used for solving the primal system Ax = b of size m × n. USYMQR is used for solving the dual system Aᴴy = c of size n × m.

Input arguments

Optional arguments

Keyword arguments

Output arguments

Reference

source
Krylov.trilqr!Function
solver = trilqr!(solver::TrilqrSolver, A, b, c; kwargs...)
+solver = trilqr!(solver::TrilqrSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of trilqr.

See TrilqrSolver for more details about the solver.

source
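
The sketch below, on arbitrary random data chosen for illustration only, shows a TriLQR call on a rectangular pair of adjoint systems; c is picked in the range of Aᴴ so that the dual system is consistent.

using Krylov, LinearAlgebra

m, n = 50, 100
A = rand(m, n)
b = rand(m)                        # Ax = b is consistent since A has full row rank with high probability
c = A' * rand(m)                   # a right-hand side in the range of Aᴴ

x, y, stats = trilqr(A, b, c)
norm(A * x - b), norm(A' * y - c)  # primal and dual residual norms
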
diff --git a/dev/solvers/gsp/index.html b/dev/solvers/gsp/index.html index e29cf5417..46309434c 100644 --- a/dev/solvers/gsp/index.html +++ b/dev/solvers/gsp/index.html @@ -9,5 +9,5 @@ callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, y, stats) = gpmr(A, B, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)

GPMR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.

Given matrices A of dimension m × n and B of dimension n × m, GPMR solves the non-Hermitian partitioned linear system

[ λIₘ   A  ] [ x ] = [ b ]
 [  B   μIₙ ] [ y ]   [ c ],

of size (n+m) × (n+m) where λ and μ are real or complex numbers. A can have any shape and B has the shape of Aᴴ. A, B, b and c must all be nonzero.

This implementation allows left and right block diagonal preconditioners

[ C    ] [ λM   A ] [ E    ] [ E⁻¹x ] = [ Cb ]
 [    D ] [  B  μN ] [    F ] [ F⁻¹y ]   [ Dc ],

and can solve

[ λM   A ] [ x ] = [ b ]
-[  B  μN ] [ y ]   [ c ]

when CE = M⁻¹ and DF = N⁻¹.

By default, GPMR solves unsymmetric linear systems with λ = 1 and μ = 1.

GPMR is based on the orthogonal Hessenberg reduction process and its relations with the block-Arnoldi process. The residual norm ‖rₖ‖ is monotonically decreasing in GPMR.

GPMR stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.

Input arguments

Optional arguments

Keyword arguments

Output arguments

Reference

source
Krylov.gpmr!Function
solver = gpmr!(solver::GpmrSolver, A, B, b, c; kwargs...)
-solver = gpmr!(solver::GpmrSolver, A, B, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of gpmr.

Note that the memory keyword argument is the only exception. It's required to create a GpmrSolver and can't be changed later.

See GpmrSolver for more details about the solver.

source
+[  B  μN ] [ y ]   [ c ]

when CE = M⁻¹ and DF = N⁻¹.

By default, GPMR solves unsymmetric linear systems with λ = 1 and μ = 1.

GPMR is based on the orthogonal Hessenberg reduction process and its relations with the block-Arnoldi process. The residual norm ‖rₖ‖ is monotonically decreasing in GPMR.

GPMR stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.

Input arguments

Optional arguments

Keyword arguments

Output arguments

Reference

source
Krylov.gpmr!Function
solver = gpmr!(solver::GpmrSolver, A, B, b, c; kwargs...)
+solver = gpmr!(solver::GpmrSolver, A, B, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of gpmr.

Note that the memory keyword argument is the only exception. It's required to create a GpmrSolver and can't be changed later.

See GpmrSolver for more details about the solver.

source
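
As a minimal sketch on arbitrary random data (an assumption for illustration), the call below solves the default partitioned system with λ = μ = 1; only the calling sequence and returned triple are shown.

using Krylov

m, n = 60, 40
A = rand(m, n)
B = rand(n, m)                  # B has the shape of Aᴴ
b = rand(m)
c = rand(n)

# Solve [Iₘ A; B Iₙ] [x; y] = [b; c]
x, y, stats = gpmr(A, B, b, c)
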
diff --git a/dev/solvers/ln/index.html b/dev/solvers/ln/index.html index afd744267..cdb89c587 100644 --- a/dev/solvers/ln/index.html +++ b/dev/solvers/ln/index.html @@ -4,12 +4,12 @@ λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0, timemax::Float64=Inf, verbose::Int=0, history::Bool=false, - callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the consistent linear system

Ax + √λs = b

of size m × n using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations of the second kind

(AAᴴ + λI) y = b

but is more stable. When λ = 0, this method solves the minimum-norm problem

min ‖x‖₂ s.t. Ax = b.

When λ > 0, it solves the problem

min ‖(x,s)‖₂  s.t. Ax + √λs = b.

CGNE produces monotonic errors ‖x-x*‖₂ but not residuals ‖r‖₂. It is formally equivalent to CRAIG; it can be slightly less accurate, but it is simpler to implement. Only the x-part of the solution is returned.

Input arguments

Keyword arguments

Output arguments

References

source
Krylov.cgne!Function
solver = cgne!(solver::CgneSolver, A, b; kwargs...)

where kwargs are keyword arguments of cgne.

See CgneSolver for more details about the solver.

source

CRMR

Krylov.crmrFunction
(x, stats) = crmr(A, b::AbstractVector{FC};
+                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the consistent linear system

Ax + √λs = b

of size m × n using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations of the second kind

(AAᴴ + λI) y = b

but is more stable. When λ = 0, this method solves the minimum-norm problem

min ‖x‖₂ s.t. Ax = b.

When λ > 0, it solves the problem

min ‖(x,s)‖₂  s.t. Ax + √λs = b.

CGNE produces monotonic errors ‖x-x*‖₂ but not residuals ‖r‖₂. It is formally equivalent to CRAIG; it can be slightly less accurate, but it is simpler to implement. Only the x-part of the solution is returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • N: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

  • J. E. Craig, The N-step iteration procedures, Journal of Mathematics and Physics, 34(1), pp. 64–73, 1955.
  • J. E. Craig, Iterations Procedures for Simultaneous Equations, Ph.D. Thesis, Department of Electrical Engineering, MIT, 1954.
source
Krylov.cgne!Function
solver = cgne!(solver::CgneSolver, A, b; kwargs...)

where kwargs are keyword arguments of cgne.

See CgneSolver for more details about the solver.

source
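
A minimal sketch of a CGNE call on arbitrary random data follows; b is built as A times a random vector so that the system is consistent, as CGNE requires.

using Krylov, LinearAlgebra

m, n = 50, 100
A = rand(m, n)                 # full row rank with high probability
b = A * rand(n)                # a consistent right-hand side

x, stats = cgne(A, b)          # minimum-norm solution of Ax = b
norm(A * x - b) / norm(b)      # relative residual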

CRMR

Krylov.crmrFunction
(x, stats) = crmr(A, b::AbstractVector{FC};
                   N=I, ldiv::Bool=false,
                   λ::T=zero(T), atol::T=√eps(T),
                   rtol::T=√eps(T), itmax::Int=0,
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
-                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the consistent linear system

Ax + √λs = b

of size m × n using the Conjugate Residual (CR) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CR to the normal equations of the second kind

(AAᴴ + λI) y = b

but is more stable. When λ = 0, this method solves the minimum-norm problem

min ‖x‖₂  s.t.  x ∈ argmin ‖Ax - b‖₂.

When λ > 0, this method solves the problem

min ‖(x,s)‖₂  s.t. Ax + √λs = b.

CRMR produces monotonic residuals ‖r‖₂. It is formally equivalent to CRAIG-MR; it can be slightly less accurate, but it is simpler to implement. Only the x-part of the solution is returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • N: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.crmr!Function
solver = crmr!(solver::CrmrSolver, A, b; kwargs...)

where kwargs are keyword arguments of crmr.

See CrmrSolver for more details about the solver.

source

LNLQ

Krylov.lnlqFunction
(x, y, stats) = lnlq(A, b::AbstractVector{FC};
+                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the consistent linear system

Ax + √λs = b

of size m × n using the Conjugate Residual (CR) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CR to the normal equations of the second kind

(AAᴴ + λI) y = b

but is more stable. When λ = 0, this method solves the minimum-norm problem

min ‖x‖₂  s.t.  x ∈ argmin ‖Ax - b‖₂.

When λ > 0, this method solves the problem

min ‖(x,s)‖₂  s.t. Ax + √λs = b.

CRMR produces monotonic residuals ‖r‖₂. It is formally equivalent to CRAIG-MR; it can be slightly less accurate, but it is simpler to implement. Only the x-part of the solution is returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • N: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.crmr!Function
solver = crmr!(solver::CrmrSolver, A, b; kwargs...)

where kwargs are keyword arguments of crmr.

See CrmrSolver for more details about the solver.

source
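
The following minimal sketch, on arbitrary random data chosen only for illustration, shows an unregularized CRMR call next to a regularized one with a small λ.

using Krylov

m, n = 50, 100
A = rand(m, n)
b = A * rand(n)                  # a consistent right-hand side

x, stats = crmr(A, b)            # minimum-norm solution of Ax = b
xλ, statsλ = crmr(A, b, λ=1e-4)  # regularized variant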

LNLQ

Krylov.lnlqFunction
(x, y, stats) = lnlq(A, b::AbstractVector{FC};
                      M=I, N=I, ldiv::Bool=false,
                      transfer_to_craig::Bool=true,
                      sqd::Bool=false, λ::T=zero(T),
@@ -19,7 +19,7 @@
                      timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                      callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Find the least-norm solution of the consistent linear system

Ax + λ²y = b

of size m × n using the LNLQ method, where λ ≥ 0 is a regularization parameter.

For a system in the form Ax = b, the LNLQ method is equivalent to applying SYMMLQ to AAᴴy = b and recovering x = Aᴴy, but is more stable. Note that y is the vector of Lagrange multipliers of the least-norm problem

minimize ‖x‖  s.t.  Ax = b.

If λ > 0, LNLQ solves the symmetric and quasi-definite system

[ -F    Aᴴ ] [ x ]   [ 0 ]
 [  A  λ²E  ] [ y ] = [ b ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

min ‖x‖²_F + λ²‖y‖²_E  s.t.  Ax + λ²Ey = b.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LNLQ is then equivalent to applying SYMMLQ to (AF⁻¹Aᴴ + λ²E)y = b with Fx = Aᴴy.

If λ = 0, LNLQ solves the symmetric and indefinite system

[ -F   Aᴴ ] [ x ]   [ 0 ]
-[  A   0  ] [ y ] = [ b ].

The system above represents the optimality conditions of

minimize ‖x‖²_F  s.t.  Ax = b.

In this case, M can still be specified and indicates the weighted norm in which residuals are measured.

In this implementation, both the x and y-parts of the solution are returned.

utolx and utoly are tolerances on the upper bounds of the distances to the solution, ‖x-x*‖ and ‖y-y*‖, respectively. The bounds are valid if λ>0 or σ>0, where σ should be strictly smaller than the smallest positive singular value. For instance, σ := (1-1e-7)σₘᵢₙ.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • transfer_to_craig: transfer from the LNLQ point to the CRAIG point, when it exists. The transfer is based on the residual norm;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • σ: strict lower bound on the smallest positive singular value σₘᵢₙ such as σ = (1-10⁻⁷)σₘᵢₙ;
  • utolx: tolerance on the upper bound on the distance to the solution ‖x-x*‖;
  • utoly: tolerance on the upper bound on the distance to the solution ‖y-y*‖;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in a LNLQStats structure.

Reference

source
Krylov.lnlq!Function
solver = lnlq!(solver::LnlqSolver, A, b; kwargs...)

where kwargs are keyword arguments of lnlq.

See LnlqSolver for more details about the solver.

source

CRAIG

Krylov.craigFunction
(x, y, stats) = craig(A, b::AbstractVector{FC};
+[  A   0  ] [ y ] = [ b ].

The system above represents the optimality conditions of

minimize ‖x‖²_F  s.t.  Ax = b.

In this case, M can still be specified and indicates the weighted norm in which residuals are measured.

In this implementation, both the x and y-parts of the solution are returned.

utolx and utoly are tolerances on the upper bounds of the distances to the solution, ‖x-x*‖ and ‖y-y*‖, respectively. The bounds are valid if λ>0 or σ>0, where σ should be strictly smaller than the smallest positive singular value. For instance, σ := (1-1e-7)σₘᵢₙ.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • transfer_to_craig: transfer from the LNLQ point to the CRAIG point, when it exists. The transfer is based on the residual norm;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • σ: strict lower bound on the smallest positive singular value σₘᵢₙ such as σ = (1-10⁻⁷)σₘᵢₙ;
  • utolx: tolerance on the upper bound on the distance to the solution ‖x-x*‖;
  • utoly: tolerance on the upper bound on the distance to the solution ‖y-y*‖;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in a LNLQStats structure.

Reference

source
Krylov.lnlq!Function
solver = lnlq!(solver::LnlqSolver, A, b; kwargs...)

where kwargs are keyword arguments of lnlq.

See LnlqSolver for more details about the solver.

source
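
Here is a minimal sketch of an LNLQ call on arbitrary random consistent data (an illustrative assumption); both the x- and y-parts of the solution are returned.

using Krylov, LinearAlgebra

m, n = 50, 100
A = rand(m, n)
b = A * rand(n)              # a consistent right-hand side

x, y, stats = lnlq(A, b)
norm(A * x - b) / norm(b)    # relative residual of the primal part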

CRAIG

Krylov.craigFunction
(x, y, stats) = craig(A, b::AbstractVector{FC};
                       M=I, N=I, ldiv::Bool=false,
                       transfer_to_lsqr::Bool=false, sqd::Bool=false,
                       λ::T=zero(T), btol::T=√eps(T),
@@ -28,16 +28,16 @@
                       timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                       callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Find the least-norm solution of the consistent linear system

Ax + λ²y = b

of size m × n using the Golub-Kahan implementation of Craig's method, where λ ≥ 0 is a regularization parameter. This method is equivalent to CGNE but is more stable.

For a system in the form Ax = b, Craig's method is equivalent to applying CG to AAᴴy = b and recovering x = Aᴴy. Note that y is the vector of Lagrange multipliers of the least-norm problem

minimize ‖x‖  s.t.  Ax = b.

If λ > 0, CRAIG solves the symmetric and quasi-definite system

[ -F     Aᴴ ] [ x ]   [ 0 ]
 [  A   λ²E  ] [ y ] = [ b ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

min ‖x‖²_F + λ²‖y‖²_E  s.t.  Ax + λ²Ey = b.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. CRAIG is then equivalent to applying CG to (AF⁻¹Aᴴ + λ²E)y = b with Fx = Aᴴy.

If λ = 0, CRAIG solves the symmetric and indefinite system

[ -F   Aᴴ ] [ x ]   [ 0 ]
-[  A   0  ] [ y ] = [ b ].

The system above represents the optimality conditions of

minimize ‖x‖²_F  s.t.  Ax = b.

In this case, M can still be specified and indicates the weighted norm in which residuals are measured.

In this implementation, both the x and y-parts of the solution are returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • transfer_to_lsqr: transfer from the LSLQ point to the LSQR point, when it exists. The transfer is based on the residual norm;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • btol: stopping tolerance used to detect zero-residual problems;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.craig!Function
solver = craig!(solver::CraigSolver, A, b; kwargs...)

where kwargs are keyword arguments of craig.

See CraigSolver for more details about the solver.

source

CRAIGMR

Krylov.craigmrFunction
(x, y, stats) = craigmr(A, b::AbstractVector{FC};
+[  A   0  ] [ y ] = [ b ].

The system above represents the optimality conditions of

minimize ‖x‖²_F  s.t.  Ax = b.

In this case, M can still be specified and indicates the weighted norm in which residuals are measured.

In this implementation, both the x and y-parts of the solution are returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • transfer_to_lsqr: transfer from the LSLQ point to the LSQR point, when it exists. The transfer is based on the residual norm;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • btol: stopping tolerance used to detect zero-residual problems;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.craig!Function
solver = craig!(solver::CraigSolver, A, b; kwargs...)

where kwargs are keyword arguments of craig.

See CraigSolver for more details about the solver.

source
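
A minimal sketch of a CRAIG call on arbitrary random consistent data follows; the second call shows the sqd keyword, which sets λ = 1 for the Hermitian quasi-definite formulation.

using Krylov, LinearAlgebra

m, n = 50, 100
A = rand(m, n)
b = A * rand(n)                         # a consistent right-hand side

x, y, stats = craig(A, b)               # least-norm solution of Ax = b
xr, yr, statsr = craig(A, b, sqd=true)  # regularized variant with λ = 1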

CRAIGMR

Krylov.craigmrFunction
(x, y, stats) = craigmr(A, b::AbstractVector{FC};
                         M=I, N=I, ldiv::Bool=false,
                         sqd::Bool=false, λ::T=zero(T), atol::T=√eps(T),
                         rtol::T=√eps(T), itmax::Int=0,
                         timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                         callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the consistent linear system

Ax + λ²y = b

of size m × n using the CRAIGMR method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying the Conjugate Residuals method to the normal equations of the second kind

(AAᴴ + λ²I) y = b

but is more stable. When λ = 0, this method solves the minimum-norm problem

min ‖x‖  s.t.  x ∈ argmin ‖Ax - b‖.

If λ > 0, CRAIGMR solves the symmetric and quasi-definite system

[ -F    Aᴴ ] [ x ]   [ 0 ]
 [  A  λ²E  ] [ y ] = [ b ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

min ‖x‖²_F + λ²‖y‖²_E  s.t.  Ax + λ²Ey = b.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. CRAIGMR is then equivalent to applying MINRES to (AF⁻¹Aᴴ + λ²E)y = b with Fx = Aᴴy.

If λ = 0, CRAIGMR solves the symmetric and indefinite system

[ -F   Aᴴ ] [ x ]   [ 0 ]
-[  A   0  ] [ y ] = [ b ].

The system above represents the optimality conditions of

min ‖x‖²_F  s.t.  Ax = b.

In this case, M can still be specified and indicates the weighted norm in which residuals are measured.

CRAIGMR produces monotonic residuals ‖r‖₂. It is formally equivalent to CRMR; it can be slightly more accurate, but it is more intricate to implement. Both the x- and y-parts of the solution are returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.craigmr!Function
solver = craigmr!(solver::CraigmrSolver, A, b; kwargs...)

where kwargs are keyword arguments of craigmr.

See CraigmrSolver for more details about the solver.

source

USYMLQ

Krylov.usymlqFunction
(x, stats) = usymlq(A, b::AbstractVector{FC}, c::AbstractVector{FC};
+[  A   0  ] [ y ] = [ b ].

The system above represents the optimality conditions of

min ‖x‖²_F  s.t.  Ax = b.

In this case, M can still be specified and indicates the weighted norm in which residuals are measured.

CRAIGMR produces monotonic residuals ‖r‖₂. It is formally equivalent to CRMR; it can be slightly more accurate, but it is more intricate to implement. Both the x- and y-parts of the solution are returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.craigmr!Function
solver = craigmr!(solver::CraigmrSolver, A, b; kwargs...)

where kwargs are keyword arguments of craigmr.

See CraigmrSolver for more details about the solver.

source
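
The minimal sketch below, on arbitrary random data, shows a CRAIGMR call; since A is short and wide here, the system is consistent with high probability and the minimum-norm solution is returned together with the y-part.

using Krylov, LinearAlgebra

m, n = 50, 100
A = rand(m, n)
b = rand(m)

x, y, stats = craigmr(A, b)
norm(A * x - b) / norm(b)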

USYMLQ

Krylov.usymlqFunction
(x, stats) = usymlq(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                     transfer_to_usymcg::Bool=true, atol::T=√eps(T),
                     rtol::T=√eps(T), itmax::Int=0,
                     timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
-                    callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = usymlq(A, b, c, x0::AbstractVector; kwargs...)

USYMLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

USYMLQ determines the least-norm solution of the consistent linear system Ax = b of size m × n.

USYMLQ is based on the orthogonal tridiagonalization process and requires two initial nonzero vectors b and c. The vector c is only used to initialize the process, and a default value can be b or Aᴴb depending on the shape of A. The error norm ‖x - x*‖ decreases monotonically in USYMLQ. It can be regarded as a generalization of SYMMLQ.

It can also be applied to under-determined and over-determined problems. In all cases, problems must be consistent.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • transfer_to_usymcg: transfer from the USYMLQ point to the USYMCG point, when it exists. The transfer is based on the residual norm;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.usymlq!Function
solver = usymlq!(solver::UsymlqSolver, A, b, c; kwargs...)
-solver = usymlq!(solver::UsymlqSolver, A, b, c, x0; kwargs...)

where kwargs are keyword arguments of usymlq.

See UsymlqSolver for more details about the solver.

source
+ callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = usymlq(A, b, c, x0::AbstractVector; kwargs...)

USYMLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

USYMLQ determines the least-norm solution of the consistent linear system Ax = b of size m × n.

USYMLQ is based on the orthogonal tridiagonalization process and requires two initial nonzero vectors b and c. The vector c is only used to initialize the process, and a default value can be b or Aᴴb depending on the shape of A. The error norm ‖x - x*‖ decreases monotonically in USYMLQ. It can be regarded as a generalization of SYMMLQ.

It can also be applied to under-determined and over-determined problems. In all cases, problems must be consistent.

Input arguments

Optional argument

Keyword arguments

Output arguments

References

source
Krylov.usymlq!Function
solver = usymlq!(solver::UsymlqSolver, A, b, c; kwargs...)
+solver = usymlq!(solver::UsymlqSolver, A, b, c, x0; kwargs...)

where kwargs are keyword arguments of usymlq.

See UsymlqSolver for more details about the solver.

source
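
A minimal sketch of a USYMLQ call on arbitrary random data follows; since A is short and wide here, c = Aᴴb is a natural choice for the initialization vector.

using Krylov, LinearAlgebra

m, n = 50, 100
A = rand(m, n)
b = A * rand(n)            # a consistent right-hand side
c = A' * b                 # c only initializes the orthogonal tridiagonalization

x, stats = usymlq(A, b, c)
norm(A * x - b) / norm(b)
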
diff --git a/dev/solvers/ls/index.html b/dev/solvers/ls/index.html index 9c51b056d..1f4bd0c3e 100644 --- a/dev/solvers/ls/index.html +++ b/dev/solvers/ls/index.html @@ -4,12 +4,12 @@ λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0, timemax::Float64=Inf, verbose::Int=0, history::Bool=false, - callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ‖x‖₂²

of size m × n using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations

(AᴴA + λI) x = Aᴴb

but is more stable.

CGLS produces monotonic residuals ‖r‖₂ but not optimality residuals ‖Aᴴr‖₂. It is formally equivalent to LSQR, though it can be slightly less accurate; it is, however, simpler to implement.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.cgls!Function
solver = cgls!(solver::CglsSolver, A, b; kwargs...)

where kwargs are keyword arguments of cgls.

See CglsSolver for more details about the solver.

source
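A minimal sketch with synthetic data (ours, for illustration only); it checks the optimality condition Aᴴ(b - Ax) = λx of the regularized problem:

using Krylov, LinearAlgebra
A = rand(100, 20)
b = rand(100)
λ = 1.0e-2
x, stats = cgls(A, b, λ=λ)
norm(A' * (b - A * x) - λ * x)   # ≈ 0 at the regularized least-squares solution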


CRLS

Krylov.crlsFunction
(x, stats) = crls(A, b::AbstractVector{FC};
                   M=I, ldiv::Bool=false, radius::T=zero(T),
                   λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),
                   itmax::Int=0, timemax::Float64=Inf,
                   verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the linear least-squares problem

minimize ‖b - Ax‖₂² + λ‖x‖₂²

of size m × n using the Conjugate Residuals (CR) method. This method is equivalent to applying MINRES to the normal equations

(AᴴA + λI) x = Aᴴb.

This implementation recurs the residual r := b - Ax.

CRLS produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂. It is formally equivalent to LSMR, though it can be substantially less accurate; it is, however, simpler to implement.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

  • D. C.-L. Fong, Minimum-Residual Methods for Sparse Least-Squares using Golub-Kahan Bidiagonalization, Ph.D. Thesis, Stanford University, 2011.
source
Krylov.crls!Function
solver = crls!(solver::CrlsSolver, A, b; kwargs...)

where kwargs are keyword arguments of crls.

See CrlsSolver for more details about the solver.

source
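A minimal sketch on synthetic data (illustrative only): for an overdetermined system, CRLS drives the optimality residual ‖Aᴴr‖₂ to zero:

using Krylov, LinearAlgebra
A = rand(100, 20)
b = rand(100)
x, stats = crls(A, b)
norm(A' * (b - A * x))   # optimality residual ‖Aᴴr‖₂, ≈ 0 at the solution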


LSLQ

Krylov.lslqFunction
(x, stats) = lslq(A, b::AbstractVector{FC};
                   M=I, N=I, ldiv::Bool=false,
                   window::Int=5, transfer_to_lsqr::Bool=false,
                   sqd::Bool=false, λ::T=zero(T),
@@ -20,7 +20,7 @@
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ²‖x‖₂²

of size m × n using the LSLQ method, where λ ≥ 0 is a regularization parameter. LSLQ is formally equivalent to applying SYMMLQ to the normal equations

(AᴴA + λ²I) x = Aᴴb

but is more stable.

If λ > 0, we solve the symmetric and quasi-definite system

[ E      A ] [ r ]   [ b ]
[ Aᴴ  -λ²F ] [ x ] = [ 0 ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSLQ is then equivalent to applying SYMMLQ to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).

If λ = 0, we solve the symmetric and indefinite system

[ E    A ] [ r ]   [ b ]
[ Aᴴ   0 ] [ x ] = [ 0 ].

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹.

In this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).

Main features

  • the solution estimate is updated along orthogonal directions
  • the norm of the solution estimate ‖xᴸₖ‖₂ is increasing
  • the error ‖eₖ‖₂ := ‖xᴸₖ - x*‖₂ is decreasing
  • it is possible to transition cheaply from the LSLQ iterate to the LSQR iterate if there is an advantage (there always is in terms of error)
  • if A is rank deficient, LSLQ identifies the minimum-norm least-squares solution

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • window: number of iterations used to accumulate a lower bound on the error;
  • transfer_to_lsqr: transfer from the LSLQ point to the LSQR point, when it exists. The transfer is based on the residual norm;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • σ: strict lower bound on the smallest positive singular value σₘᵢₙ such as σ = (1-10⁻⁷)σₘᵢₙ;
  • etol: stopping tolerance based on the lower bound on the error;
  • utol: stopping tolerance based on the upper bound on the error;
  • btol: stopping tolerance used to detect zero-residual problems;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;

  • stats: statistics collected on the run in a LSLQStats structure.

  • stats.err_lbnds is a vector of lower bounds on the LQ error; the vector is empty if window is set to zero

  • stats.err_ubnds_lq is a vector of upper bounds on the LQ error; the vector is empty if σ is left at zero

  • stats.err_ubnds_cg is a vector of upper bounds on the CG error; the vector is empty if σ is left at zero

  • stats.error_with_bnd is a boolean indicating whether there was an error in the computation of the upper bounds (cancellation errors, σ too large, ...)

Stopping conditions

The iterations stop as soon as one of the following conditions holds true:

  • the optimality residual is sufficiently small (stats.status = "found approximate minimum least-squares solution") in the sense that either
    • ‖Aᴴr‖ / (‖A‖ ‖r‖) ≤ atol, or
    • 1 + ‖Aᴴr‖ / (‖A‖ ‖r‖) ≤ 1
  • an approximate zero-residual solution has been found (stats.status = "found approximate zero-residual solution") in the sense that either
    • ‖r‖ / ‖b‖ ≤ btol + atol ‖A‖ * ‖xᴸ‖ / ‖b‖, or
    • 1 + ‖r‖ / ‖b‖ ≤ 1
  • the estimated condition number of A is too large in the sense that either
    • 1/cond(A) ≤ 1/conlim (stats.status = "condition number exceeds tolerance"), or
    • 1 + 1/cond(A) ≤ 1 (stats.status = "condition number seems too large for this machine")
  • the lower bound on the LQ forward error is less than etol * ‖xᴸ‖
  • the upper bound on the CG forward error is less than utol * ‖xᶜ‖

References

source
Krylov.lslq!Function
solver = lslq!(solver::LslqSolver, A, b; kwargs...)

where kwargs are keyword arguments of lslq.

See LslqSolver for more details about the solver.

source
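A sketch of the error bounds on synthetic data (illustrative only): passing a strict lower bound σ on σₘᵢₙ activates the upper bounds stored in stats:

using Krylov, LinearAlgebra
A = rand(60, 40)
b = rand(60)
σ = (1 - 1.0e-7) * minimum(svdvals(A))   # strict lower bound on σₘᵢₙ
x, stats = lslq(A, b, σ=σ, history=true)
stats.err_ubnds_cg                       # upper bounds on the CG forward error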


LSQR

Krylov.lsqrFunction
(x, stats) = lsqr(A, b::AbstractVector{FC};
                   M=I, N=I, ldiv::Bool=false,
                   window::Int=5, sqd::Bool=false, λ::T=zero(T),
                   radius::T=zero(T), etol::T=√eps(T),
@@ -30,7 +30,7 @@
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ²‖x‖₂²

of size m × n using the LSQR method, where λ ≥ 0 is a regularization parameter. LSQR is formally equivalent to applying CG to the normal equations

(AᴴA + λ²I) x = Aᴴb

(and therefore to CGLS) but is more stable.

LSQR produces monotonic residuals ‖r‖₂ but not optimality residuals ‖Aᴴr‖₂. It is formally equivalent to CGLS, though it can be slightly more accurate.

If λ > 0, LSQR solves the symmetric and quasi-definite system

[ E      A ] [ r ]   [ b ]
[ Aᴴ  -λ²F ] [ x ] = [ 0 ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSQR is then equivalent to applying CG to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).

If λ = 0, we solve the symmetric and indefinite system

[ E    A ] [ r ]   [ b ]
[ Aᴴ   0 ] [ x ] = [ 0 ].

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹.

In this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • window: number of iterations used to accumulate a lower bound on the error;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • etol: stopping tolerance based on the lower bound on the error;
  • axtol: tolerance on the backward error;
  • btol: stopping tolerance used to detect zero-residual problems;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.lsqr!Function
solver = lsqr!(solver::LsqrSolver, A, b; kwargs...)

where kwargs are keyword arguments of lsqr.

See LsqrSolver for more details about the solver.

source
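A damped least-squares sketch with synthetic data (illustrative only), matching the formulation above with λ > 0:

using Krylov, LinearAlgebra
A = rand(100, 50)
b = rand(100)
λ = 1.0e-3
x, stats = lsqr(A, b, λ=λ)
norm(A' * (b - A * x) - λ^2 * x)   # optimality residual of the damped problem, ≈ 0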


LSMR

Krylov.lsmrFunction
(x, stats) = lsmr(A, b::AbstractVector{FC};
                   M=I, N=I, ldiv::Bool=false,
                   window::Int=5, sqd::Bool=false, λ::T=zero(T),
                   radius::T=zero(T), etol::T=√eps(T),
@@ -40,8 +40,8 @@
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ²‖x‖₂²

of size m × n using the LSMR method, where λ ≥ 0 is a regularization parameter. LSMR is formally equivalent to applying MINRES to the normal equations

(AᴴA + λ²I) x = Aᴴb

(and therefore to CRLS) but is more stable.

LSMR produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂. It is formally equivalent to CRLS, though it can be substantially more accurate.

LSMR can also be used to find a null vector of a singular matrix A by solving the problem min ‖Aᴴx - b‖ with any nonzero vector b. At a minimizer, the residual vector r = b - Aᴴx will satisfy Ar = 0.

If λ > 0, we solve the symmetric and quasi-definite system

[ E      A ] [ r ]   [ b ]
[ Aᴴ  -λ²F ] [ x ] = [ 0 ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSMR is then equivalent to applying MINRES to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).

If λ = 0, we solve the symmetric and indefinite system

[ E    A ] [ r ]   [ b ]
[ Aᴴ   0 ] [ x ] = [ 0 ].

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹.

In this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • window: number of iterations used to accumulate a lower bound on the error;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • etol: stopping tolerance based on the lower bound on the error;
  • axtol: tolerance on the backward error;
  • btol: stopping tolerance used to detect zero-residual problems;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a LsmrStats structure.

Reference

source
Krylov.lsmr!Function
solver = lsmr!(solver::LsmrSolver, A, b; kwargs...)

where kwargs are keyword arguments of lsmr.

See LsmrSolver for more details about the solver.

source
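The null-vector use described above can be sketched as follows (synthetic rank-deficient matrix, illustrative only):

using Krylov, LinearAlgebra
A = rand(10, 8) * rand(8, 10)   # rank-deficient 10 × 10 matrix
b = rand(10)
x, stats = lsmr(A', b)          # minimize ‖Aᴴx - b‖
r = b - A' * x                  # the residual is a null vector of A
norm(A * r)                     # ≈ 0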


USYMQR

Krylov.usymqrFunction
(x, stats) = usymqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                     atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                     timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                     callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = usymqr(A, b, c, x0::AbstractVector; kwargs...)

USYMQR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

USYMQR solves the linear least-squares problem min ‖b - Ax‖² of size m × n. USYMQR solves Ax = b if it is consistent.

USYMQR is based on the orthogonal tridiagonalization process and requires two initial nonzero vectors b and c. The vector c is only used to initialize the process and a default value can be b or Aᴴb depending on the shape of A. The residual norm ‖b - Ax‖ monotonically decreases in USYMQR. It can be regarded as a generalization of MINRES.

It can also be applied to under-determined and over-determined problems. USYMQR finds the minimum-norm solution if problems are inconsistent.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.usymqr!Function
solver = usymqr!(solver::UsymqrSolver, A, b, c; kwargs...)
solver = usymqr!(solver::UsymqrSolver, A, b, c, x0; kwargs...)

where kwargs are keyword arguments of usymqr.

See UsymqrSolver for more details about the solver.

source
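A minimal sketch on synthetic data (illustrative only): for an inconsistent overdetermined system, USYMQR returns a least-squares solution:

using Krylov, LinearAlgebra
A = rand(8, 5)      # overdetermined, so Ax = b is inconsistent in general
b = rand(8)
c = A' * b          # length-5 initialization vector
x, stats = usymqr(A, b, c)
norm(A' * (b - A * x))   # least-squares optimality residual, ≈ 0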
diff --git a/dev/solvers/sid/index.html b/dev/solvers/sid/index.html index 21cf14d8c..0c4609825 100644 --- a/dev/solvers/sid/index.html +++ b/dev/solvers/sid/index.html @@ -6,25 +6,25 @@
                    conlim::T=1/√eps(T), atol::T=√eps(T), rtol::T=√eps(T),
                    itmax::Int=0, timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                    callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = symmlq(A, b, x0::AbstractVector; kwargs...)

SYMMLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the shifted linear system

(A + λI) x = b

of size n using the SYMMLQ method, where λ is a shift parameter, and A is Hermitian.

SYMMLQ produces monotonic errors ‖x* - x‖₂.

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • window: number of iterations used to accumulate a lower bound on the error;
  • transfer_to_cg: transfer from the SYMMLQ point to the CG point, when it exists. The transfer is based on the residual norm;
  • λ: regularization parameter;
  • λest: positive strict lower bound on the smallest eigenvalue λₘᵢₙ when solving a positive-definite system, such as λest = (1-10⁻⁷)λₘᵢₙ;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • etol: stopping tolerance based on the lower bound on the error;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SymmlqStats structure.

Reference

source
Krylov.symmlq!Function
solver = symmlq!(solver::SymmlqSolver, A, b; kwargs...)
solver = symmlq!(solver::SymmlqSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of symmlq.

See SymmlqSolver for more details about the solver.

source
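A minimal sketch on a synthetic Hermitian system (illustrative only), with a transfer to the CG point when it exists:

using Krylov, LinearAlgebra
n = 50
B = rand(n, n)
A = B + B' + n * I   # Hermitian positive definite
b = rand(n)
x, stats = symmlq(A, b, transfer_to_cg=true)
norm(A * x - b)      # ≈ 0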


MINRES

Krylov.minresFunction
(x, stats) = minres(A, b::AbstractVector{FC};
                     M=I, ldiv::Bool=false, window::Int=5,
                     λ::T=zero(T), atol::T=√eps(T),
                     rtol::T=√eps(T), etol::T=√eps(T),
                     conlim::T=1/√eps(T), itmax::Int=0,
                     timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                     callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = minres(A, b, x0::AbstractVector; kwargs...)

MINRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the shifted linear least-squares problem

minimize ‖b - (A + λI)x‖₂²

or the shifted linear system

(A + λI) x = b

of size n using the MINRES method, where λ ≥ 0 is a shift parameter and A is Hermitian.

MINRES is formally equivalent to applying CR to Ax=b when A is positive definite, but is typically more stable and also applies to the case where A is indefinite.

MINRES produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂.

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • window: number of iterations used to accumulate a lower bound on the error;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • etol: stopping tolerance based on the lower bound on the error;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.minres!Function
solver = minres!(solver::MinresSolver, A, b; kwargs...)
solver = minres!(solver::MinresSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of minres.

See MinresSolver for more details about the solver.

source
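An indefinite-system sketch with synthetic data (illustrative only), including a warm start from a previous solution:

using Krylov, LinearAlgebra
n = 50
B = rand(n, n)
A = B + B'                  # Hermitian, generally indefinite
b = rand(n)
x, stats = minres(A, b)
x, stats = minres(A, b, x)  # warm start from the previous iterate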


MINRES-QLP

Krylov.minres_qlpFunction
(x, stats) = minres_qlp(A, b::AbstractVector{FC};
                         M=I, ldiv::Bool=false, Artol::T=√eps(T),
                         λ::T=zero(T), atol::T=√eps(T),
                         rtol::T=√eps(T), itmax::Int=0,
                         timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                         callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = minres_qlp(A, b, x0::AbstractVector; kwargs...)

MINRES-QLP can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

MINRES-QLP is the only method based on the Lanczos process that returns the minimum-norm solution on singular inconsistent systems (A + λI)x = b of size n, where λ is a shift parameter. It is significantly more complex but can be more reliable than MINRES when A is ill-conditioned.

M also indicates the weighted norm in which residuals are measured.

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • Artol: relative stopping tolerance based on the Aᴴ-residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.minres_qlp!Function
solver = minres_qlp!(solver::MinresQlpSolver, A, b; kwargs...)
solver = minres_qlp!(solver::MinresQlpSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of minres_qlp.

See MinresQlpSolver for more details about the solver.

source
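A hand-built sketch of the minimum-norm behavior on a singular inconsistent system (illustrative only):

using Krylov, LinearAlgebra
A = diagm([1.0, 2.0, 0.0])   # singular Hermitian matrix
b = [1.0, 1.0, 1.0]          # inconsistent: b has a component outside range(A)
x, stats = minres_qlp(A, b)  # x ≈ [1.0, 0.5, 0.0], the minimum-norm least-squares solution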


MINARES

Krylov.minaresFunction
(x, stats) = minares(A, b::AbstractVector{FC};
                      M=I, ldiv::Bool=false,
                      λ::T = zero(T), atol::T=√eps(T),
                      rtol::T=√eps(T), Artol::T = √eps(T),
                      itmax::Int=0, timemax::Float64=Inf,
                      verbose::Int=0, history::Bool=false,
                      callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = minares(A, b, x0::AbstractVector; kwargs...)

MINARES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

MINARES solves the Hermitian linear system Ax = b of size n. MINARES minimizes ‖Arₖ‖₂ when M = Iₙ and ‖AMrₖ‖_M otherwise. The estimates computed every iteration are ‖Mrₖ‖₂ and ‖AMrₖ‖_M.

Input arguments

  • A: a linear operator that models a Hermitian positive definite matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • Artol: relative stopping tolerance based on the Aᴴ-residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

  • A. Montoison, D. Orban and M. A. Saunders, MinAres: An Iterative Solver for Symmetric Linear Systems, Cahier du GERAD G-2023-40, GERAD, Montréal, 2023.
source
Krylov.minares!Function
solver = minares!(solver::MinaresSolver, A, b; kwargs...)
solver = minares!(solver::MinaresSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of minares.

See MinaresSolver for more details about the solver.

source
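A minimal sketch on a synthetic Hermitian system (illustrative only), monitoring the quantity MINARES minimizes:

using Krylov, LinearAlgebra
n = 100
A = SymTridiagonal(2 * ones(n), -ones(n - 1))   # Hermitian
b = rand(n)
x, stats = minares(A, b)
norm(A * (b - A * x))   # ‖Arₖ‖₂, the quantity minimized when M = Iₙ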
diff --git a/dev/solvers/sp_sqd/index.html b/dev/solvers/sp_sqd/index.html index d107797c8..cc4b97871 100644 --- a/dev/solvers/sp_sqd/index.html +++ b/dev/solvers/sp_sqd/index.html @@ -8,8 +8,8 @@
                      timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                      callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, y, stats) = tricg(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)

TriCG can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.

Given a matrix A of dimension m × n, TriCG solves the Hermitian linear system

[ τE    A ] [ x ]   [ b ]
[  Aᴴ  νF ] [ y ] = [ c ],

of size (n+m) × (n+m), where τ and ν are real numbers, E = M⁻¹ ≻ 0 and F = N⁻¹ ≻ 0. b and c must both be nonzero. TriCG may break down if τ = 0 or ν = 0; TriMR is recommended in these cases.

By default, TriCG solves Hermitian and quasi-definite linear systems with τ = 1 and ν = -1.

TriCG is based on the preconditioned orthogonal tridiagonalization process and its relation with the preconditioned block-Lanczos process.

[ M   0 ]
[ 0   N ]

indicates the weighted norm in which residuals are measured. It's the Euclidean norm when M and N are identity operators.

TriCG stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length m that represents an initial guess of the solution x;
  • y0: a vector of length n that represents an initial guess of the solution y.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the partitioned system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the partitioned system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • spd: if true, set τ = 1 and ν = 1 for Hermitian and positive-definite linear systems;
  • snd: if true, set τ = -1 and ν = -1 for Hermitian and negative-definite linear systems;
  • flip: if true, set τ = -1 and ν = 1 for another known variant of Hermitian quasi-definite systems;
  • τ and ν: diagonal scaling factors of the partitioned Hermitian linear system;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length m;
  • y: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.tricg!Function
solver = tricg!(solver::TricgSolver, A, b, c; kwargs...)
solver = tricg!(solver::TricgSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of tricg.

See TricgSolver for more details about the solver.

source
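A sketch on a synthetic quasi-definite system (illustrative only), using the defaults τ = 1 and ν = -1 with E = F = I:

using Krylov, LinearAlgebra
m, n = 8, 5
A = rand(m, n)
b = rand(m)
c = rand(n)
x, y, stats = tricg(A, b, c)            # solves [ I  A; Aᴴ  -I ] [ x; y ] = [ b; c ]
norm([x + A * y - b; A' * x - y - c])   # residual of the partitioned system, ≈ 0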


TriMR

Krylov.trimrFunction
(x, y, stats) = trimr(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                       M=I, N=I, ldiv::Bool=false,
                       spd::Bool=false, snd::Bool=false,
                       flip::Bool=false, sp::Bool=false,
                       τ::T=one(T), ν::T=-one(T),
                       atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                       timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                       callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, y, stats) = trimr(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)

TriMR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.

Given a matrix A of dimension m × n, TriMR solves the symmetric linear system

[ τE    A ] [ x ]   [ b ]
[  Aᴴ  νF ] [ y ] = [ c ],

of size (n+m) × (n+m) where τ and ν are real numbers, E = M⁻¹ ≻ 0, F = N⁻¹ ≻ 0. b and c must both be nonzero. TriMR handles saddle-point systems (τ = 0 or ν = 0) and adjoint systems (τ = 0 and ν = 0) without any risk of breakdown.

By default, TriMR solves symmetric and quasi-definite linear systems with τ = 1 and ν = -1.

TriMR is based on the preconditioned orthogonal tridiagonalization process and its relation with the preconditioned block-Lanczos process.

[ M   0 ]
[ 0   N ]

indicates the weighted norm in which residuals are measured. It's the Euclidean norm when M and N are identity operators.

TriMR stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length m that represents an initial guess of the solution x;
  • y0: a vector of length n that represents an initial guess of the solution y.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the partitioned system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the partitioned system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • spd: if true, set τ = 1 and ν = 1 for Hermitian and positive-definite linear systems;
  • snd: if true, set τ = -1 and ν = -1 for Hermitian and negative-definite linear systems;
  • flip: if true, set τ = -1 and ν = 1 for another known variant of Hermitian quasi-definite systems;
  • sp: if true, set τ = 1 and ν = 0 for saddle-point systems;
  • τ and ν: diagonal scaling factors of the partitioned Hermitian linear system;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length m;
  • y: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
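
A minimal sketch of TriMR on a saddle-point system via the sp keyword documented above; the test problem is an illustrative assumption:

using Krylov, SparseArrays
m, n = 50, 30
A = sprand(m, n, 0.2)
b, c = rand(m), rand(n)
x, y, stats = trimr(A, b, c, sp=true)  # τ = 1, ν = 0: [ I A; Aᴴ 0 ] [x; y] = [b; c]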
Krylov.trimr!Function
solver = trimr!(solver::TrimrSolver, A, b, c; kwargs...)
solver = trimr!(solver::TrimrSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of trimr.

See TrimrSolver for more details about the solver.

source
diff --git a/dev/solvers/spd/index.html b/dev/solvers/spd/index.html
index 3cef81980..0c892290e 100644
--- a/dev/solvers/spd/index.html
+++ b/dev/solvers/spd/index.html

CG

Krylov.cgFunction
(x, stats) = cg(A, b::AbstractVector{FC};
                M=I, ldiv::Bool=false, radius::T=zero(T),
                linesearch::Bool=false, atol::T=√eps(T),
                rtol::T=√eps(T), itmax::Int=0,
                timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = cg(A, b, x0::AbstractVector; kwargs...)

CG can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

The conjugate gradient method to solve the Hermitian linear system Ax = b of size n.

The method does not abort if A is not definite. M also indicates the weighted norm in which residuals are measured.

Input arguments

  • A: a linear operator that models a Hermitian positive definite matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • linesearch: if true, indicate that the solution is to be used in an inexact Newton method with linesearch. If negative curvature is detected at iteration k > 0, the solution of iteration k-1 is returned. If negative curvature is detected at iteration 0, the right-hand side is returned (i.e., the negative gradient);
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
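
A minimal CG sketch; the positive-definite test matrix below is an assumption built for the example:

using Krylov, SparseArrays, LinearAlgebra
n = 100
B = sprandn(n, n, 0.05)
A = B * B' + I                         # Hermitian positive definite by construction
b = rand(n)
x, stats = cg(A, b)
stats.solved, stats.niter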
Krylov.cg!Function
solver = cg!(solver::CgSolver, A, b; kwargs...)
solver = cg!(solver::CgSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of cg.

See CgSolver for more details about the solver.

source
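
For repeated solves, a sketch of the in-place variant with a preallocated workspace, reusing A and b from the previous example; the second right-hand side is an assumption:

solver = CgSolver(A, b)                # allocate the workspace once
cg!(solver, A, b)
x = solver.x
cg!(solver, A, rand(n))                # re-solve without new allocations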

CR

Krylov.crFunction
(x, stats) = cr(A, b::AbstractVector{FC};
                 M=I, ldiv::Bool=false, radius::T=zero(T),
                 linesearch::Bool=false, γ::T=√eps(T),
                 atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                 timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                 callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = cr(A, b, x0::AbstractVector; kwargs...)

CR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

A truncated version of Stiefel’s Conjugate Residual method to solve the Hermitian linear system Ax = b of size n or the least-squares problem min ‖b - Ax‖ if A is singular. The matrix A must be Hermitian semi-definite. M also indicates the weighted norm in which residuals are measured.

Input arguments

  • A: a linear operator that models a Hermitian positive definite matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • linesearch: if true, indicate that the solution is to be used in an inexact Newton method with linesearch. If negative curvature is detected at iteration k > 0, the solution of iteration k-1 is returned. If negative curvature is detected at iteration 0, the right-hand side is returned (i.e., the negative gradient);
  • γ: tolerance to determine that the curvature of the quadratic model is nonpositive;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
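
A sketch of CR with the trust-region constraint activated through radius; the test problem and the radius value are assumptions:

using Krylov, SparseArrays, LinearAlgebra
n = 100
B = sprandn(n, n, 0.05)
A = B * B' + I
b = rand(n)
x, stats = cr(A, b, radius=1.0)        # enforce ‖x‖ ≤ 1, as in a trust-region step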
Krylov.cr!Function
solver = cr!(solver::CrSolver, A, b; kwargs...)
solver = cr!(solver::CrSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of cr.

See CrSolver for more details about the solver.

source

CAR

Krylov.carFunction
(x, stats) = car(A, b::AbstractVector{FC};
                  M=I, ldiv::Bool=false,
                  atol::T=√eps(T), rtol::T=√eps(T),
                  itmax::Int=0, timemax::Float64=Inf,
                  verbose::Int=0, history::Bool=false,
                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = car(A, b, x0::AbstractVector; kwargs...)

CAR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

CAR solves the Hermitian and positive definite linear system Ax = b of size n. CAR minimizes ‖Arₖ‖₂ when M = Iₙ and ‖AMrₖ‖_M otherwise. The estimates computed every iteration are ‖Mrₖ‖₂ and ‖AMrₖ‖_M.

Input arguments

  • A: a linear operator that models a Hermitian positive definite matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

  • A. Montoison, D. Orban and M. A. Saunders, MinAres: An Iterative Solver for Symmetric Linear Systems, Cahier du GERAD G-2023-40, GERAD, Montréal, 2023.
source
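
A minimal CAR sketch on a Hermitian positive-definite test matrix (an assumption for the example):

using Krylov, SparseArrays, LinearAlgebra
n = 100
B = sprandn(n, n, 0.05)
A = B * B' + I
b = rand(n)
x, stats = car(A, b)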
Krylov.car!Function
solver = car!(solver::CarSolver, A, b; kwargs...)
solver = car!(solver::CarSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of car.

See CarSolver for more details about the solver.

source

CG-LANCZOS

Krylov.cg_lanczosFunction
(x, stats) = cg_lanczos(A, b::AbstractVector{FC};
                         M=I, ldiv::Bool=false,
                         check_curvature::Bool=false, atol::T=√eps(T),
                         rtol::T=√eps(T), itmax::Int=0,
                         timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                        callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = cg_lanczos(A, b, x0::AbstractVector; kwargs...)

CG-LANCZOS can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

The Lanczos version of the conjugate gradient method to solve the Hermitian linear system Ax = b of size n.

The method does not abort if A is not definite.

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • check_curvature: if true, check that the curvature of the quadratic along the search direction is positive, and abort if not, unless linesearch is also true;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a LanczosStats structure.

References

source
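
A sketch showing how check_curvature reports indefiniteness through the LanczosStats output; the indefinite test matrix is an assumption:

using Krylov, LinearAlgebra
A = Diagonal([-1.0; ones(9)])          # Hermitian but indefinite
b = ones(10)
x, stats = cg_lanczos(A, b, check_curvature=true)
stats.indefinite                       # true once negative curvature is detected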
Krylov.cg_lanczos!Function
solver = cg_lanczos!(solver::CgLanczosSolver, A, b; kwargs...)
solver = cg_lanczos!(solver::CgLanczosSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of cg_lanczos.

See CgLanczosSolver for more details about the solver.

source

CG-LANCZOS-SHIFT

Krylov.cg_lanczos_shiftFunction
(x, stats) = cg_lanczos_shift(A, b::AbstractVector{FC}, shifts::AbstractVector{T};
                               M=I, ldiv::Bool=false,
                               check_curvature::Bool=false, atol::T=√eps(T),
                               rtol::T=√eps(T), itmax::Int=0,
                               timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                              callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

The Lanczos version of the conjugate gradient method to solve a family of shifted systems

(A + αI) x = b  (α = α₁, ..., αₚ)

of size n. The method does not abort if A + αI is not definite.

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • b: a vector of length n;
  • shifts: a vector of length p.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • check_curvature: if true, check that the curvature of the quadratic along the search direction is positive, and abort if not, unless linesearch is also true;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a vector of p dense vectors, each one of length n;
  • stats: statistics collected on the run in a LanczosShiftStats structure.

References

source
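
A sketch of a multi-shift solve; the shifts and the diagonal test matrix are assumptions:

using Krylov, LinearAlgebra
A = Diagonal(collect(1.0:10.0))        # Hermitian test matrix
b = ones(10)
shifts = [0.0, 1.0, 10.0]
x, stats = cg_lanczos_shift(A, b, shifts)
norm((A + shifts[2] * I) * x[2] - b)   # x[i] solves (A + shifts[i] I) y = b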
Krylov.cg_lanczos_shift!Function
solver = cg_lanczos_shift!(solver::CgLanczosShiftSolver, A, b, shifts; kwargs...)

where kwargs are keyword arguments of cg_lanczos_shift.

See CgLanczosShiftSolver for more details about the solver.

source
diff --git a/dev/solvers/unsymmetric/index.html b/dev/solvers/unsymmetric/index.html
index 233d51323..29457ee03 100644
--- a/dev/solvers/unsymmetric/index.html
+++ b/dev/solvers/unsymmetric/index.html

BiLQ

Krylov.bilqFunction
(x, stats) = bilq(A, b::AbstractVector{FC};
                  c::AbstractVector{FC}=b, transfer_to_bicg::Bool=true,
                  atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                  timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = bilq(A, b, x0::AbstractVector; kwargs...)

BiLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the square linear system Ax = b of size n using BiLQ. BiLQ is based on the Lanczos biorthogonalization process and requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b. When A is Hermitian and b = c, BiLQ is equivalent to SYMMLQ.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • c: the second initial vector of length n required by the Lanczos biorthogonalization process;
  • transfer_to_bicg: transfer from the BiLQ point to the BiCG point, when it exists. The transfer is based on the residual norm;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
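
A minimal BiLQ sketch on a square non-Hermitian system; the test matrix is an assumption:

using Krylov, SparseArrays, LinearAlgebra
n = 100
A = sprandn(n, n, 0.05) + 5I           # diagonal shift keeps the test matrix nonsingular
b = rand(n)
x, stats = bilq(A, b)                  # c = b by default, so bᴴc = ‖b‖² ≠ 0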
Krylov.bilq!Function
solver = bilq!(solver::BilqSolver, A, b; kwargs...)
solver = bilq!(solver::BilqSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of bilq.

See BilqSolver for more details about the solver.

source

QMR

Krylov.qmrFunction
(x, stats) = qmr(A, b::AbstractVector{FC};
                  c::AbstractVector{FC}=b, atol::T=√eps(T),
                  rtol::T=√eps(T), itmax::Int=0, timemax::Float64=Inf, verbose::Int=0,
                 history::Bool=false, callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = qmr(A, b, x0::AbstractVector; kwargs...)

QMR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the square linear system Ax = b of size n using QMR.

QMR is based on the Lanczos biorthogonalization process and requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b. When A is Hermitian and b = c, QMR is equivalent to MINRES.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • c: the second initial vector of length n required by the Lanczos biorthogonalization process;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
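
A QMR sketch with an explicit second vector c; the test problem is an assumption:

using Krylov, SparseArrays, LinearAlgebra
n = 100
A = sprandn(n, n, 0.05) + 5I
b = rand(n)
c = rand(n)                            # any c with bᴴc ≠ 0 is admissible
x, stats = qmr(A, b, c=c)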
Krylov.qmr!Function
solver = qmr!(solver::QmrSolver, A, b; kwargs...)
solver = qmr!(solver::QmrSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of qmr.

See QmrSolver for more details about the solver.

source

CGS

Krylov.cgsFunction
(x, stats) = cgs(A, b::AbstractVector{FC};
                  c::AbstractVector{FC}=b, M=I, N=I,
                  ldiv::Bool=false, atol::T=√eps(T),
                  rtol::T=√eps(T), itmax::Int=0,
                  timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                 callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = cgs(A, b, x0::AbstractVector; kwargs...)

CGS can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the consistent linear system Ax = b of size n using CGS. CGS requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b.

From "Iterative Methods for Sparse Linear Systems (Y. Saad)" :

«The method is based on a polynomial variant of the conjugate gradients algorithm. Although related to the so-called bi-conjugate gradients (BCG) algorithm, it does not involve adjoint matrix-vector multiplications, and the expected convergence rate is about twice that of the BCG algorithm.

The Conjugate Gradient Squared algorithm works quite well in many cases. However, one difficulty is that, since the polynomials are squared, rounding errors tend to be more damaging than in the standard BCG algorithm. In particular, very high variations of the residual vectors often cause the residual norms computed to become inaccurate.

TFQMR and BICGSTAB were developed to remedy this difficulty.»

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • c: the second initial vector of length n required by the Lanczos biorthogonalization process;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
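
A minimal CGS sketch with the residual history collected; the test problem is an assumption:

using Krylov, SparseArrays, LinearAlgebra
n = 100
A = sprandn(n, n, 0.05) + 5I
b = rand(n)
x, stats = cgs(A, b, history=true)
stats.residuals                        # residual norms, available because history=true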
Krylov.cgs!Function
solver = cgs!(solver::CgsSolver, A, b; kwargs...)
solver = cgs!(solver::CgsSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of cgs.

See CgsSolver for more details about the solver.

source

BiCGSTAB

Krylov.bicgstabFunction
(x, stats) = bicgstab(A, b::AbstractVector{FC};
                       c::AbstractVector{FC}=b, M=I, N=I,
                       ldiv::Bool=false, atol::T=√eps(T),
                       rtol::T=√eps(T), itmax::Int=0,
                       timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                      callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = bicgstab(A, b, x0::AbstractVector; kwargs...)

BICGSTAB can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the square linear system Ax = b of size n using BICGSTAB. BICGSTAB requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b.

The Biconjugate Gradient Stabilized method is a variant of BiCG, like CGS, but using different updates for the Aᴴ-sequence in order to obtain smoother convergence than CGS.

If BICGSTAB stagnates, we recommend DQGMRES and BiLQ as alternative methods for unsymmetric square systems.

BICGSTAB stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖b‖ * rtol.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • c: the second initial vector of length n required by the Lanczos biorthogonalization process;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
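
A BICGSTAB sketch with a simple Jacobi (diagonal) left preconditioner; the preconditioner choice and the test matrix are assumptions made for illustration:

using Krylov, SparseArrays, LinearAlgebra
n = 100
A = sprandn(n, n, 0.05) + 10I          # strengthen the diagonal for the example
b = rand(n)
P = Diagonal(Vector(diag(A)))          # Jacobi preconditioner
x, stats = bicgstab(A, b, M=P, ldiv=true)   # P applied through ldiv!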
Krylov.bicgstab!Function
solver = bicgstab!(solver::BicgstabSolver, A, b; kwargs...)
solver = bicgstab!(solver::BicgstabSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of bicgstab.

See BicgstabSolver for more details about the solver.

source

DIOM

Krylov.diomFunction
(x, stats) = diom(A, b::AbstractVector{FC};
                   memory::Int=20, M=I, N=I, ldiv::Bool=false,
                   reorthogonalization::Bool=false, atol::T=√eps(T),
                   rtol::T=√eps(T), itmax::Int=0,
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = diom(A, b, x0::AbstractVector; kwargs...)

DIOM can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the consistent linear system Ax = b of size n using DIOM.

DIOM only orthogonalizes the new vectors of the Krylov basis against the memory most recent vectors. If CG is well defined on Ax = b and memory = 2, DIOM is theoretically equivalent to CG. If k ≤ memory where k is the number of iterations, DIOM is theoretically equivalent to FOM. Otherwise, DIOM interpolates between CG and FOM and is similar to CG with partial reorthogonalization.

An advantage of DIOM is that this single algorithm handles linear systems that are non-Hermitian, Hermitian indefinite, or both non-Hermitian and indefinite.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • memory: the number of most recent vectors of the Krylov basis against which to orthogonalize a new vector;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against the memory most recent vectors;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
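
A DIOM sketch with a larger memory and reorthogonalization enabled; the test problem is an assumption:

using Krylov, SparseArrays, LinearAlgebra
n = 100
A = sprandn(n, n, 0.05) + 5I
b = rand(n)
x, stats = diom(A, b, memory=50, reorthogonalization=true)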
Krylov.diom!Function
solver = diom!(solver::DiomSolver, A, b; kwargs...)
solver = diom!(solver::DiomSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of diom.

Note that the memory keyword argument is the only exception. It's required to create a DiomSolver and can't be changed later.

See DiomSolver for more details about the solver.

source

FOM

Krylov.fomFunction
(x, stats) = fom(A, b::AbstractVector{FC};
                  memory::Int=20, M=I, N=I, ldiv::Bool=false,
                  restart::Bool=false, reorthogonalization::Bool=false,
                  atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                  timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                 callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = fom(A, b, x0::AbstractVector; kwargs...)

FOM can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the linear system Ax = b of size n using FOM.

The FOM algorithm is based on the Arnoldi process and a Galerkin condition.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • memory: if restart = true, the restarted version FOM(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • restart: restart the method after memory iterations;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
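
A sketch of the restarted variant FOM(k) selected through memory and restart; the test problem is an assumption:

using Krylov, SparseArrays, LinearAlgebra
n = 100
A = sprandn(n, n, 0.05) + 5I
b = rand(n)
x, stats = fom(A, b, memory=30, restart=true)   # FOM(30)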
Krylov.fom!Function
solver = fom!(solver::FomSolver, A, b; kwargs...)
solver = fom!(solver::FomSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of fom.

Note that the memory keyword argument is the only exception. It's required to create a FomSolver and can't be changed later.

See FomSolver for more details about the solver.

source

DQGMRES

Krylov.dqgmresFunction
(x, stats) = dqgmres(A, b::AbstractVector{FC};
                      memory::Int=20, M=I, N=I, ldiv::Bool=false,
                      reorthogonalization::Bool=false, atol::T=√eps(T),
                      rtol::T=√eps(T), itmax::Int=0,
                      timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                     callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = dqgmres(A, b, x0::AbstractVector; kwargs...)

DQGMRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the consistent linear system Ax = b of size n using DQGMRES.

The DQGMRES algorithm is based on the incomplete Arnoldi orthogonalization process and computes a sequence of approximate solutions with the quasi-minimal residual property.

DQGMRES only orthogonalizes the new vectors of the Krylov basis against the memory most recent vectors. If MINRES is well defined on Ax = b and memory = 2, DQGMRES is theoretically equivalent to MINRES. If k ≤ memory where k is the number of iterations, DQGMRES is theoretically equivalent to GMRES. Otherwise, DQGMRES interpolates between MINRES and GMRES and is similar to MINRES with partial reorthogonalization.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • memory: the number of most recent vectors of the Krylov basis against which to orthogonalize a new vector;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against the memory most recent vectors;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
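
To illustrate the memory trade-off described above, here is a short sketch (editor's illustration; the test matrix is hypothetical):

using Krylov, SparseArrays, LinearAlgebra

n = 500
A = sprandn(n, n, 5 / n) + 10I   # hypothetical test matrix
b = randn(n)

# Keep only the 10 most recent Arnoldi vectors: storage stays bounded,
# at the price of a quasi-minimal rather than minimal residual.
x, stats = dqgmres(A, b; memory=10)
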
Krylov.dqgmres!Function
solver = dqgmres!(solver::DqgmresSolver, A, b; kwargs...)
solver = dqgmres!(solver::DqgmresSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of dqgmres.

Note that the memory keyword argument is the only exception. It's required to create a DqgmresSolver and can't be changed later.

See DqgmresSolver for more details about the solver.

source

GMRES

Krylov.gmresFunction
(x, stats) = gmres(A, b::AbstractVector{FC};
                    memory::Int=20, M=I, N=I, ldiv::Bool=false,
                    restart::Bool=false, reorthogonalization::Bool=false,
                    atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                    timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                    callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = gmres(A, b, x0::AbstractVector; kwargs...)

GMRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the linear system Ax = b of size n using GMRES.

The GMRES algorithm is based on the Arnoldi process and computes a sequence of approximate solutions with the minimum residual.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • memory: if restart = true, the restarted version GMRES(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • restart: restart the method after memory iterations;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
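
A short sketch contrasting full and restarted GMRES (editor's illustration with a hypothetical test matrix):

using Krylov, SparseArrays, LinearAlgebra

n = 1_000
A = sprandn(n, n, 5 / n) + 10I   # hypothetical test matrix
b = randn(n)

# Full GMRES: memory is only a hint for workspace preallocation.
x, stats = gmres(A, b; memory=50)

# Restarted GMRES(20): fixed storage, possibly more iterations.
x, stats = gmres(A, b; memory=20, restart=true)
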
Krylov.gmres!Function
solver = gmres!(solver::GmresSolver, A, b; kwargs...)
solver = gmres!(solver::GmresSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of gmres.

Note that the memory keyword argument is the only exception. It's required to create a GmresSolver and can't be changed later.

See GmresSolver for more details about the solver.

source
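
And a sketch of workspace reuse with gmres! across several right-hand sides (editor's illustration, reusing A, b and n from the sketch above; the positional GmresSolver constructor matches recent releases, so check your installed version):

solver = GmresSolver(A, b, 20)      # memory = 20, fixed at construction
for rhs in (b, randn(n), randn(n))  # hypothetical sequence of right-hand sides
    gmres!(solver, A, rhs)          # reuses the preallocated storage
    println(solver.stats.status)
end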

FGMRES

Krylov.fgmresFunction
(x, stats) = fgmres(A, b::AbstractVector{FC};
                     memory::Int=20, M=I, N=I, ldiv::Bool=false,
                     restart::Bool=false, reorthogonalization::Bool=false,
                     atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                     timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                     callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = fgmres(A, b, x0::AbstractVector; kwargs...)

FGMRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the linear system Ax = b of size n using FGMRES.

FGMRES computes a sequence of approximate solutions with minimum residual. FGMRES is a variant of GMRES that allows changes in the right preconditioner at each iteration.

This implementation allows a left preconditioner M and a flexible right preconditioner N. The right preconditioner is "not constant" when, for example, a relaxation-type method, a Chebyshev iteration, or another Krylov subspace method is used as a preconditioner. Compared to GMRES, no additional arithmetic cost is incurred, but the memory requirement almost doubles. Thus, GMRES is recommended if the right preconditioner N is constant.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • memory: if restart = true, the restarted version FGMRES(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • restart: restart the method after memory iterations;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
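
As a sketch of this variable-preconditioner setting (editor's illustration; the inner-solve operator and test matrix are hypothetical): each application of N below runs a few GMRES iterations, so N changes between outer iterations, which FGMRES tolerates but GMRES does not.

using Krylov, SparseArrays, LinearAlgebra
import LinearAlgebra: mul!

n = 500
A = sprandn(n, n, 5 / n) + 10I   # hypothetical test matrix
b = randn(n)

# Inexact inner solves used as a flexible right preconditioner.
struct InnerSolve{TA}
    A::TA
end
Base.size(P::InnerSolve) = size(P.A)
Base.eltype(P::InnerSolve) = eltype(P.A)
function mul!(y, P::InnerSolve, v)
    z, _ = gmres(P.A, v; itmax=5)  # a few iterations; outcome differs at each call
    y .= z
    return y
end

x, stats = fgmres(A, b; N=InnerSolve(A))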
Krylov.fgmres!Function
solver = fgmres!(solver::FgmresSolver, A, b; kwargs...)
solver = fgmres!(solver::FgmresSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of fgmres.

Note that the memory keyword argument is the only exception. It's required to create a FgmresSolver and can't be changed later.

See FgmresSolver for more details about the solver.

source
diff --git a/dev/storage/index.html b/dev/storage/index.html
index 952ba7df9..f7b9d3239 100644
--- a/dev/storage/index.html
+++ b/dev/storage/index.html
@@ -52,4 +52,4 @@
 └──────────────────┴──────────────────┴────────────────────┘

If we want the total number of bytes used by the solver, we can call nbytes = sizeof(solver).

nbytes = sizeof(solver)
560121

Thereafter, we can use Base.format_bytes(nbytes) to recover what is displayed in the REPL.

Base.format_bytes(nbytes)
"546.993 KiB"

To verify that we match the theoretical results, we just need to multiply the storage requirement of a method by the number of bytes associated with the precision of the linear problem. For instance, we need 4 bytes for the precision Float32, 8 bytes for the precisions Float64 and ComplexF32, and 16 bytes for the precision ComplexF64.

FC = Float64                            # precision of the least-squares problem
ncoefs_lsmr = 5*n + 2*m                 # number of coefficients
nbytes_lsmr = sizeof(FC) * ncoefs_lsmr  # number of bytes
560000

Therefore, you can check that you have enough memory in RAM to allocate a KrylovSolver.

free_nbytes = Sys.free_memory()
Base.format_bytes(free_nbytes)  # Total free memory in RAM in bytes.
"4.959 GiB"
Note
  • Beyond enabling faster operations, low precisions, such as single precision, make it possible to store more coefficients in RAM and solve larger linear problems.
  • In the file test_allocations.jl, we use the macro @allocated to test that we match the expected storage requirement of each method with a tolerance of 2%.
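
A rough sanity-check sketch (editor's illustration): warn before solving if the footprint of a previously constructed solver approaches the free RAM. The 50% threshold is an arbitrary assumption.

nbytes = sizeof(solver)   # solver: any KrylovSolver built earlier
free   = Sys.free_memory()
if nbytes > 0.5 * free
    @warn "solver needs $(Base.format_bytes(nbytes)); only $(Base.format_bytes(free)) free"
end
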
diff --git a/dev/tips/index.html b/dev/tips/index.html
index 55aaed7aa..21a3ef220 100644
--- a/dev/tips/index.html
+++ b/dev/tips/index.html
@@ -23,4 +23,4 @@
 julia -t N  # alternative: --threads N
 JULIA_NUM_THREADS=N julia

Thereafter, you can verify the number of threads available to Julia:

using Base.Threads
nthreads()

The following benchmarks illustrate the time required in seconds to compute 1000 sparse matrix-vector products with symmetric matrices of the SuiteSparse Matrix Collection. The computer used for the benchmarks has 2 physical cores and Julia was launched with JULIA_NUM_THREADS=2.

[benchmarks figure]

diff --git a/dev/warm-start/index.html b/dev/warm-start/index.html
index 6e3782993..fb96f97f2 100644
--- a/dev/warm-start/index.html
+++ b/dev/warm-start/index.html
@@ -23,4 +23,4 @@
 Δx, Δy, stats = trimr(A, b₀, c₀, M=M, N=N)
 x = x₀ + Δx
 y = y₀ + Δy