From 01217fcc88a4f6c9de29f57a7f9ac8abf99250e8 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Wed, 27 Sep 2023 18:40:02 +0000 Subject: [PATCH] build based on 2ff5806 --- dev/.documenter-siteinfo.json | 2 +- dev/api/index.html | 76 ++++++++++++------------ dev/callbacks/index.html | 2 +- dev/examples/bicgstab/index.html | 8 +-- dev/examples/car/index.html | 4 +- dev/examples/cg/index.html | 4 +- dev/examples/cg_lanczos_shift/index.html | 6 +- dev/examples/cgls/index.html | 6 +- dev/examples/cgne/index.html | 6 +- dev/examples/craig/index.html | 22 +++---- dev/examples/craigmr/index.html | 8 +-- dev/examples/crls/index.html | 6 +- dev/examples/crmr/index.html | 4 +- dev/examples/dqgmres/index.html | 6 +- dev/examples/lsmr/index.html | 16 ++--- dev/examples/lsqr/index.html | 4 +- dev/examples/minares/index.html | 4 +- dev/examples/minres_qlp/index.html | 6 +- dev/examples/symmlq/index.html | 6 +- dev/examples/tricg/index.html | 24 ++++---- dev/examples/trimr/index.html | 18 +++--- dev/factorization-free/index.html | 14 ++--- dev/gpu/index.html | 2 +- dev/index.html | 2 +- dev/inplace/index.html | 4 +- dev/preconditioners/index.html | 2 +- dev/processes/index.html | 12 ++-- dev/reference/index.html | 2 +- dev/solvers/as/index.html | 8 +-- dev/solvers/gsp/index.html | 4 +- dev/solvers/ln/index.html | 14 ++--- dev/solvers/ls/index.html | 14 ++--- dev/solvers/sid/index.html | 16 ++--- dev/solvers/sp_sqd/index.html | 8 +-- dev/solvers/spd/index.html | 18 +++--- dev/solvers/unsymmetric/index.html | 36 +++++------ dev/storage/index.html | 2 +- dev/tips/index.html | 2 +- dev/warm-start/index.html | 2 +- 39 files changed, 200 insertions(+), 200 deletions(-) diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 489d09285..58313b7f9 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.9.3","generation_timestamp":"2023-09-27T06:08:29","documenter_version":"1.0.1"}} \ No newline at end of file +{"documenter":{"julia_version":"1.9.3","generation_timestamp":"2023-09-27T18:39:51","documenter_version":"1.0.1"}} \ No newline at end of file diff --git a/dev/api/index.html b/dev/api/index.html index 1c72b3957..d3934fddd 100644 --- a/dev/api/index.html +++ b/dev/api/index.html @@ -1,39 +1,39 @@ -API · Krylov.jl

Stats Types

Krylov.SimpleStatsType

Type for statistics returned by the majority of Krylov solvers. The attributes are:

  • niter
  • solved
  • inconsistent
  • residuals
  • Aresiduals
  • Acond
  • timer
  • status
source
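
For illustration, a minimal sketch (a small SPD system; any method that returns a SimpleStats works the same way):

using Krylov, LinearAlgebra
A = diagm(0 => [2.0, 3.0, 4.0])
b = ones(3)
x, stats = cg(A, b)
stats.niter, stats.solved, stats.status  # the attributes listed above
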
Krylov.LanczosStatsType

Type for statistics returned by CG-LANCZOS. The attributes are:

  • niter
  • solved
  • residuals
  • indefinite
  • Anorm
  • Acond
  • timer
  • status
source
Krylov.LanczosShiftStatsType

Type for statistics returned by CG-LANCZOS with shifts. The attributes are:

  • niter
  • solved
  • residuals
  • indefinite
  • Anorm
  • Acond
  • timer
  • status
source
Krylov.SymmlqStatsType

Type for statistics returned by SYMMLQ. The attributes are:

  • niter
  • solved
  • residuals
  • residualscg
  • errors
  • errorscg
  • Anorm
  • Acond
  • timer
  • status
source
Krylov.AdjointStatsType

Type for statistics returned by the adjoint-system solvers BiLQR and TriLQR. The attributes are:

  • niter
  • solved_primal
  • solved_dual
  • residuals_primal
  • residuals_dual
  • timer
  • status
source
Krylov.LNLQStatsType

Type for statistics returned by the LNLQ method. The attributes are:

  • niter
  • solved
  • residuals
  • errorwithbnd
  • errorbndx
  • errorbndy
  • timer
  • status
source
Krylov.LSLQStatsType

Type for statistics returned by the LSLQ method. The attributes are:

  • niter
  • solved
  • inconsistent
  • residuals
  • Aresiduals
  • err_lbnds
  • errorwithbnd
  • errubndslq
  • errubndscg
  • timer
  • status
source
Krylov.LsmrStatsType

Type for statistics returned by LSMR. The attributes are:

  • niter
  • solved
  • inconsistent
  • residuals
  • Aresiduals
  • Acond
  • Anorm
  • xNorm
  • timer
  • status
source

Solver Types

Krylov.MinresSolverType

Type for storing the vectors required by the in-place version of MINRES.

The outer constructors

solver = MinresSolver(m, n, S; window :: Int=5)
solver = MinresSolver(A, b; window :: Int=5)

may be used in order to create these vectors.

source
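
A minimal sketch of the in-place workflow with this type (small symmetric system for illustration):

using Krylov, LinearAlgebra
A = diagm(0 => 1.0:5.0)
b = ones(5)
solver = MinresSolver(A, b)
minres!(solver, A, b)  # solution in solver.x, statistics in solver.stats
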
Krylov.MinaresSolverType

Type for storing the vectors required by the in-place version of MINARES.

The outer constructors

solver = MinaresSolver(m, n, S)
solver = MinaresSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CgSolverType

Type for storing the vectors required by the in-place version of CG.

The outer constructors

solver = CgSolver(m, n, S)
solver = CgSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CrSolverType

Type for storing the vectors required by the in-place version of CR.

The outer constructors

solver = CrSolver(m, n, S)
solver = CrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CarSolverType

Type for storing the vectors required by the in-place version of CAR.

The outer constructors

solver = CarSolver(m, n, S)
solver = CarSolver(A, b)

may be used in order to create these vectors.

source
Krylov.SymmlqSolverType

Type for storing the vectors required by the in-place version of SYMMLQ.

The outer constructors

solver = SymmlqSolver(m, n, S)
solver = SymmlqSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CgLanczosSolverType

Type for storing the vectors required by the in-place version of CG-LANCZOS.

The outer constructors

solver = CgLanczosSolver(m, n, S)
solver = CgLanczosSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CgLanczosShiftSolverType

Type for storing the vectors required by the in-place version of CG-LANCZOS-SHIFT.

The outer constructors

solver = CgLanczosShiftSolver(m, n, nshifts, S)
solver = CgLanczosShiftSolver(A, b, nshifts)

may be used in order to create these vectors.

source
Krylov.MinresQlpSolverType

Type for storing the vectors required by the in-place version of MINRES-QLP.

The outer constructors

solver = MinresQlpSolver(m, n, S)
solver = MinresQlpSolver(A, b)

may be used in order to create these vectors.

source
Krylov.DiomSolverType

Type for storing the vectors required by the in-place version of DIOM.

The outer constructors

solver = DiomSolver(m, n, memory, S)
solver = DiomSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n if the value given is larger than n.

source
Krylov.FomSolverType

Type for storing the vectors required by the in-place version of FOM.

The outer constructors

solver = FomSolver(m, n, memory, S)
solver = FomSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n if the value given is larger than n.

source
Krylov.DqgmresSolverType

Type for storing the vectors required by the in-place version of DQGMRES.

The outer constructors

solver = DqgmresSolver(m, n, memory, S)
solver = DqgmresSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n if the value given is larger than n.

source
Krylov.GmresSolverType

Type for storing the vectors required by the in-place version of GMRES.

The outer constructors

solver = GmresSolver(m, n, memory, S)
solver = GmresSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n if the value given is larger than n.

source
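
A minimal sketch of the memory parameter (the random matrix is shifted by I so that the example system is well conditioned):

using Krylov, LinearAlgebra
n = 50
A = 0.01 * rand(n, n) + I
b = rand(n)
solver = GmresSolver(n, n, 20, Vector{Float64})  # preallocate a Krylov basis of 20 vectors
gmres!(solver, A, b)
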
Krylov.UsymlqSolverType

Type for storing the vectors required by the in-place version of USYMLQ.

The outer constructors

solver = UsymlqSolver(m, n, S)
solver = UsymlqSolver(A, b)

may be used in order to create these vectors.

source
Krylov.UsymqrSolverType

Type for storing the vectors required by the in-place version of USYMQR.

The outer constructors

solver = UsymqrSolver(m, n, S)
solver = UsymqrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.TricgSolverType

Type for storing the vectors required by the in-place version of TRICG.

The outer constructors

solver = TricgSolver(m, n, S)
solver = TricgSolver(A, b)

may be used in order to create these vectors.

source
Krylov.TrimrSolverType

Type for storing the vectors required by the in-place version of TRIMR.

The outer constructors

solver = TrimrSolver(m, n, S)
solver = TrimrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.TrilqrSolverType

Type for storing the vectors required by the in-place version of TRILQR.

The outer constructors

solver = TrilqrSolver(m, n, S)
solver = TrilqrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CgsSolverType

Type for storing the vectors required by the in-place version of CGS.

The outer constructors

solver = CgsSolver(m, n, S)
solver = CgsSolver(A, b)

may be used in order to create these vectors.

source
Krylov.BicgstabSolverType

Type for storing the vectors required by the in-place version of BICGSTAB.

The outer constructors

solver = BicgstabSolver(m, n, S)
solver = BicgstabSolver(A, b)

may be used in order to create these vectors.

source
Krylov.BilqSolverType

Type for storing the vectors required by the in-place version of BILQ.

The outer constructors

solver = BilqSolver(m, n, S)
solver = BilqSolver(A, b)

may be used in order to create these vectors.

source
Krylov.QmrSolverType

Type for storing the vectors required by the in-place version of QMR.

The outer constructors

solver = QmrSolver(m, n, S)
solver = QmrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.BilqrSolverType

Type for storing the vectors required by the in-place version of BILQR.

The outer constructors

solver = BilqrSolver(m, n, S)
solver = BilqrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CglsSolverType

Type for storing the vectors required by the in-place version of CGLS.

The outer constructors

solver = CglsSolver(m, n, S)
solver = CglsSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CrlsSolverType

Type for storing the vectors required by the in-place version of CRLS.

The outer constructors

solver = CrlsSolver(m, n, S)
solver = CrlsSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CgneSolverType

Type for storing the vectors required by the in-place version of CGNE.

The outer constructors

solver = CgneSolver(m, n, S)
solver = CgneSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CrmrSolverType

Type for storing the vectors required by the in-place version of CRMR.

The outer constructors

solver = CrmrSolver(m, n, S)
solver = CrmrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.LslqSolverType

Type for storing the vectors required by the in-place version of LSLQ.

The outer constructors

solver = LslqSolver(m, n, S)
solver = LslqSolver(A, b)

may be used in order to create these vectors.

source
Krylov.LsqrSolverType

Type for storing the vectors required by the in-place version of LSQR.

The outer constructors

solver = LsqrSolver(m, n, S)
solver = LsqrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.LsmrSolverType

Type for storing the vectors required by the in-place version of LSMR.

The outer constructors

solver = LsmrSolver(m, n, S)
solver = LsmrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.LnlqSolverType

Type for storing the vectors required by the in-place version of LNLQ.

The outer constructors

solver = LnlqSolver(m, n, S)
solver = LnlqSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CraigSolverType

Type for storing the vectors required by the in-place version of CRAIG.

The outer constructors

solver = CraigSolver(m, n, S)
solver = CraigSolver(A, b)

may be used in order to create these vectors.

source
Krylov.CraigmrSolverType

Type for storing the vectors required by the in-place version of CRAIGMR.

The outer constructors

solver = CraigmrSolver(m, n, S)
solver = CraigmrSolver(A, b)

may be used in order to create these vectors.

source
Krylov.GpmrSolverType

Type for storing the vectors required by the in-place version of GPMR.

The outer constructors

solver = GpmrSolver(m, n, memory, S)
solver = GpmrSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n + m if the value given is larger than n + m.

source
Krylov.FgmresSolverType

Type for storing the vectors required by the in-place version of FGMRES.

The outer constructors

solver = FgmresSolver(m, n, memory, S)
solver = FgmresSolver(A, b, memory = 20)

may be used in order to create these vectors. memory is set to n if the value given is larger than n.

source

Utilities

Krylov.roots_quadraticFunction
roots = roots_quadratic(q₂, q₁, q₀; nitref)

Find the real roots of the quadratic

q(x) = q₂ x² + q₁ x + q₀,

where q₂, q₁ and q₀ are real. Care is taken to avoid numerical cancellation. Optionally, nitref steps of iterative refinement may be performed to improve accuracy. By default, nitref=1.

source
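
For instance, with q(x) = x² - 3x + 2, whose real roots are 1 and 2:

using Krylov
roots = Krylov.roots_quadratic(1.0, -3.0, 2.0)  # expect the roots 1 and 2
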
Krylov.sym_givensFunction
(c, s, ρ) = sym_givens(a, b)

Numerically stable symmetric Givens reflection. Given reals a and b, return (c, s, ρ) such that

[ c  s ] [ a ] = [ ρ ]
[ s -c ] [ b ] = [ 0 ].
source

Numerically stable symmetric Givens reflection. Given complex a and b, return (c, s, ρ) with c real and s, ρ complex such that

[ c   s ] [ a ] = [ ρ ]
[ s̅  -c ] [ b ] = [ 0 ].
source
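
A quick check of the real case with (a, b) = (3, 4), for which |ρ| = 5:

using Krylov
(c, s, ρ) = Krylov.sym_givens(3.0, 4.0)
c * 3.0 + s * 4.0 ≈ ρ           # first row of the reflection
abs(s * 3.0 - c * 4.0) < 1e-14  # second row annihilates b
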
Krylov.to_boundaryFunction
roots = to_boundary(n, x, d, radius; flip, xNorm2, dNorm2)

Given a trust-region radius radius, a vector x lying inside the trust-region and a direction d, return σ1 and σ2 such that

‖x + σi d‖ = radius, i = 1, 2

in the Euclidean norm. n is the length of vectors x and d. If known, ‖x‖² and ‖d‖² may be supplied with xNorm2 and dNorm2.

If flip is set to true, σ1 and σ2 are computed such that

‖x - σi d‖ = radius, i = 1, 2.
source
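
For example, with x = (1, 0), d = (0, 1) and radius = √2, the condition ‖x + σd‖ = √2 gives σ = ±1:

using Krylov
roots = Krylov.to_boundary(2, [1.0, 0.0], [0.0, 1.0], sqrt(2.0))  # expect 1 and -1
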
Krylov.vec2strFunction
s = vec2str(x; ndisp)

Display an array in the form

[ -3.0e-01 -5.1e-01  1.9e-01 ... -2.3e-01 -4.4e-01  2.4e-01 ]

with (ndisp - 1)/2 elements on each side.

source
Krylov.ktypeofFunction
S = ktypeof(v)

Return the most relevant storage type S based on the type of v.

source
Krylov.kzerosFunction
v = kzeros(S, n)

Create a vector of storage type S of length n composed only of zeros.

source
Krylov.konesFunction
v = kones(S, n)

Create a vector of storage type S of length n composed only of ones.

source
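
For example:

using Krylov
Krylov.kzeros(Vector{Float64}, 4)  # [0.0, 0.0, 0.0, 0.0]
Krylov.kones(Vector{Float64}, 4)   # [1.0, 1.0, 1.0, 1.0]
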
Krylov.vector_to_matrixFunction
M = vector_to_matrix(S)

Return the dense matrix storage type M related to the dense vector storage type S.

source
Krylov.matrix_to_vectorFunction
S = matrix_to_vector(M)

Return the dense vector storage type S related to the dense matrix storage type M.

source
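
For example:

using Krylov
Krylov.vector_to_matrix(Vector{Float64})     # Matrix{Float64}
Krylov.matrix_to_vector(Matrix{ComplexF64})  # Vector{ComplexF64}
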
diff --git a/dev/callbacks/index.html b/dev/callbacks/index.html index e9fc1b9d8..244eb4b75 100644 --- a/dev/callbacks/index.html +++ b/dev/callbacks/index.html @@ -50,4 +50,4 @@ return false # We don't want to add new stopping conditions end -(x, stats) = gmres(A, b, callback = gmres_callback) +(x, stats) = gmres(A, b, callback = gmres_callback) diff --git a/dev/examples/bicgstab/index.html b/dev/examples/bicgstab/index.html index eef8379d7..60ba5b2b0 100644 --- a/dev/examples/bicgstab/index.html +++ b/dev/examples/bicgstab/index.html @@ -44,11 +44,11 @@ @printf("[Right preconditioning] Residual norm: %8.1e\n", norm(r)) @printf("[Right preconditioning] Number of iterations: %3d\n", length(stats.residuals) - 1)
 Downloading artifact: HB/sherman5.RB
 nnz(ILU) / nnz(A): 3.1e+00
-[Without preconditioning] Residual norm:  4.6e-05
-[Without preconditioning] Number of iterations: 1967
-[Split preconditioning] Residual norm:  9.0e-07
+[Without preconditioning] Residual norm:  5.0e-05
+[Without preconditioning] Number of iterations: 2679
+[Split preconditioning] Residual norm:  9.3e-07
 [Split preconditioning] Number of iterations:   9
 [Left preconditioning] Residual norm:  3.2e-07
 [Left preconditioning] Number of iterations:   9
 [Right preconditioning] Residual norm:  7.8e-06
-[Right preconditioning] Number of iterations:   8
+[Right preconditioning] Number of iterations: 8 diff --git a/dev/examples/car/index.html b/dev/examples/car/index.html index a563fbb60..d24b66589 100644 --- a/dev/examples/car/index.html +++ b/dev/examples/car/index.html @@ -23,6 +23,6 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 26.44ms + timer: 38.93ms status: solution good enough given atol and rtol -Relative residual: 1.5e-08 +Relative residual: 1.5e-08 diff --git a/dev/examples/cg/index.html b/dev/examples/cg/index.html index bb0282c77..3916f221b 100644 --- a/dev/examples/cg/index.html +++ b/dev/examples/cg/index.html @@ -23,6 +23,6 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 3.93ms + timer: 6.97ms status: solution good enough given atol and rtol -Relative residual: 1.3e-08 +Relative residual: 1.2e-08 diff --git a/dev/examples/cg_lanczos_shift/index.html b/dev/examples/cg_lanczos_shift/index.html index 945830f96..d31ff88ae 100644 --- a/dev/examples/cg_lanczos_shift/index.html +++ b/dev/examples/cg_lanczos_shift/index.html @@ -27,13 +27,13 @@ end @printf("\n")
 Downloading artifact: HB/1138_bus.MM
 LanczosShiftStats
- niter: 726
+ niter: 735
  solved: true
  residuals: [Float64[], Float64[], Float64[], Float64[]]
  indefinite: Bool[0, 1, 1, 1]
  ‖A‖F: NaN
  κ₂(A): NaN
- timer: 7.46ms
+ timer: 17.38ms
  status: solution good enough given atol and rtol
 Relative residuals with shifts:
-  1.5e-08  1.5e-08  1.3e-08  1.3e-08
+ 1.4e-08 1.4e-08 1.3e-08 1.3e-08 diff --git a/dev/examples/cgls/index.html b/dev/examples/cgls/index.html index a45dd6f2b..7fab2e3d9 100644 --- a/dev/examples/cgls/index.html +++ b/dev/examples/cgls/index.html @@ -28,7 +28,7 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 1.45ms + timer: 1.90ms status: solution good enough given atol and rtol -CGLS: Relative residual: 1.9e-08 -CGLS: ‖x‖: 8.4e+03 +CGLS: Relative residual: 1.8e-08 +CGLS: ‖x‖: 8.4e+03 diff --git a/dev/examples/cgne/index.html b/dev/examples/cgne/index.html index 2941e8608..ee28dcd85 100644 --- a/dev/examples/cgne/index.html +++ b/dev/examples/cgne/index.html @@ -21,13 +21,13 @@ @printf("CGNE: ‖x - x*‖₂: %7.1e\n", norm(x - x_exact))
 Downloading artifact: HB/wm2.RB
 System size: 207 rows and 260 columns
 SimpleStats
- niter: 213
+ niter: 216
  solved: true
  inconsistent: false
  residuals: []
  Aresiduals: []
  κ₂(A): []
- timer: 1.46ms
+ timer: 2.15ms
  status: solution good enough given atol and rtol
 CGNE: Relative residual: 1.5e-08
-CGNE: ‖x - x*‖₂: 3.2e-07
+CGNE: ‖x - x*‖₂: 3.0e-07 diff --git a/dev/examples/craig/index.html b/dev/examples/craig/index.html index fb82b1096..084ffd2de 100644 --- a/dev/examples/craig/index.html +++ b/dev/examples/craig/index.html @@ -21,12 +21,12 @@ @printf("Error in y: %7.1e\n", norm(λ * y - xy_exact[n+1:n+m]) / norm(xy_exact[n+1:n+m])) end
CRAIG: system of 5 equations in 8 variables
     k       ‖r‖       ‖x‖       ‖A‖      κ(A)         α        β  timer
-    0  8.94e+00  0.00e+00  0.00e+00  0.00e+00   ✗ ✗ ✗ ✗  ✗ ✗ ✗ ✗  0.36s
-    1  4.99e-01  2.71e+00  3.30e+00  3.30e+00   3.3e+00  1.8e-01  0.61s
-    2  2.00e-01  2.78e+00  3.42e+00  4.84e+00   8.2e-01  3.3e-01  0.74s
-    3  9.59e-03  2.79e+00  3.55e+00  6.23e+00   9.5e-01  4.5e-02  0.74s
-    4  1.29e-03  2.79e+00  3.57e+00  7.20e+00   3.4e-01  4.5e-02  0.74s
-    5  3.43e-10  2.79e+00  3.64e+00  8.22e+00   7.4e-01  2.0e-07  0.74s
+    0  8.39e+00  0.00e+00  0.00e+00  0.00e+00   ✗ ✗ ✗ ✗  ✗ ✗ ✗ ✗  0.52s
+    1  3.54e-01  2.76e+00  3.04e+00  3.04e+00   3.0e+00  1.3e-01  0.88s
+    2  1.78e-01  2.78e+00  3.34e+00  4.72e+00   1.2e+00  6.2e-01  1.07s
+    3  8.49e-02  2.79e+00  3.42e+00  6.03e+00   6.7e-01  3.2e-01  1.07s
+    4  4.34e-02  2.80e+00  3.43e+00  7.08e+00   3.2e-01  1.6e-01  1.07s
+    5  3.57e-10  2.81e+00  3.46e+00  8.05e+00   4.2e-01  3.4e-09  1.07s
 
 SimpleStats
  niter: 5
@@ -35,9 +35,9 @@
  residuals: []
  Aresiduals: []
  κ₂(A): []
- timer: 745.01ms
+ timer: 1.07s
  status: solution good enough for the tolerances given
-Primal feasibility: 3.8e-11
-Dual   feasibility: 1.4e-16
-Error in x: 3.7e-11
-Error in y: 2.6e-11
+Primal feasibility: 4.3e-11 +Dual feasibility: 1.5e-16 +Error in x: 4.2e-11 +Error in y: 2.2e-11 diff --git a/dev/examples/craigmr/index.html b/dev/examples/craigmr/index.html index e1fcf2637..e73ed5243 100644 --- a/dev/examples/craigmr/index.html +++ b/dev/examples/craigmr/index.html @@ -28,8 +28,8 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 883.39μs + timer: 1.57ms status: found approximate minimum-norm solution -CRAIGMR: Relative residual: 1.3e-08 -CRAIGMR: ‖x - x*‖₂: 1.3e-05 -CRAIGMR: 0 iterations +CRAIGMR: Relative residual: 1.4e-08 +CRAIGMR: ‖x - x*‖₂: 1.5e-05 +CRAIGMR: 0 iterations diff --git a/dev/examples/crls/index.html b/dev/examples/crls/index.html index 262041d03..a0fa77a6e 100644 --- a/dev/examples/crls/index.html +++ b/dev/examples/crls/index.html @@ -28,7 +28,7 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 6.01ms + timer: 8.84ms status: solution good enough given atol and rtol -CRLS: Relative residual: 2.0e-08 -CRLS: ‖x‖: 1.0e+04 +CRLS: Relative residual: 2.1e-08 +CRLS: ‖x‖: 1.0e+04 diff --git a/dev/examples/crmr/index.html b/dev/examples/crmr/index.html index 1dd00898e..961d680ad 100644 --- a/dev/examples/crmr/index.html +++ b/dev/examples/crmr/index.html @@ -29,7 +29,7 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 1.59ms + timer: 2.45ms status: system probably inconsistent but least squares/norm solution found CRMR: Relative residual: 4.0e-06 -CRMR: ‖x - x*‖₂: 6.3e-03 +CRMR: ‖x - x*‖₂: 6.3e-03 diff --git a/dev/examples/dqgmres/index.html b/dev/examples/dqgmres/index.html index 29f9d2e67..e2c3287cb 100644 --- a/dev/examples/dqgmres/index.html +++ b/dev/examples/dqgmres/index.html @@ -44,11 +44,11 @@ @printf("[Right preconditioning] Residual norm: %8.1e\n", norm(r)) @printf("[Right preconditioning] Number of iterations: %3d\n", length(stats.residuals) - 1)
 Downloading artifact: Simon/raefsky1.MM
 nnz(ILU) / nnz(A): 1.0e+00
-[Without preconditioning] Residual norm:  9.6e-07
-[Without preconditioning] Number of iterations: 3716
+[Without preconditioning] Residual norm:  3.4e-07
+[Without preconditioning] Number of iterations: 4540
 [Split preconditioning] Residual norm:  3.8e-07
 [Split preconditioning] Number of iterations:  33
 [Left preconditioning] Residual norm:  7.6e-08
 [Left preconditioning] Number of iterations:  34
 [Right preconditioning] Residual norm:  3.4e-07
-[Right preconditioning] Number of iterations:  33
+[Right preconditioning] Number of iterations: 33 diff --git a/dev/examples/lsmr/index.html b/dev/examples/lsmr/index.html index f44115ca4..d21813c5e 100644 --- a/dev/examples/lsmr/index.html +++ b/dev/examples/lsmr/index.html @@ -22,17 +22,17 @@ @printf("LSMR: ‖x‖: %8.1e\n", norm(x))
 Downloading artifact: HB/illc1850.MM
 System size: 1850 rows and 712 columns
 LsmrStats
- niter: 1089
+ niter: 1028
  solved: true
  inconsistent: true
  residuals: []
  Aresiduals: []
- residual: 16.205302719154798
- Aresidual: 0.001744668542011466
- κ₂(A): 67.13806544631052
- ‖A‖F: 45.016285250483314
- xNorm: 16060.854604524206
- timer: 25.11ms
+ residual: 16.239115589155727
+ Aresidual: 0.003559142560311877
+ κ₂(A): 86.71320866451842
+ ‖A‖F: 43.722077889715436
+ xNorm: 16032.795019150253
+ timer: 36.19ms
  status: truncated forward error small enough
 LSMR: Relative residual:  2.4e-03
-LSMR: ‖x‖:  1.6e+04
+LSMR: ‖x‖: 1.6e+04 diff --git a/dev/examples/lsqr/index.html b/dev/examples/lsqr/index.html index 5480d0aef..441ec39dc 100644 --- a/dev/examples/lsqr/index.html +++ b/dev/examples/lsqr/index.html @@ -28,7 +28,7 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 16.60ms + timer: 22.75ms status: maximum number of iterations exceeded LSQR: Relative residual: 1.4e-03 -LSQR: ‖x‖: 9.4e+03 +LSQR: ‖x‖: 9.4e+03 diff --git a/dev/examples/minares/index.html b/dev/examples/minares/index.html index 4a8ee732d..91c787fce 100644 --- a/dev/examples/minares/index.html +++ b/dev/examples/minares/index.html @@ -23,6 +23,6 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 1.21ms + timer: 2.90ms status: solution good enough given atol, rtol and Artol -Relative A-residual: 7.8e-09 +Relative A-residual: 7.8e-09 diff --git a/dev/examples/minres_qlp/index.html b/dev/examples/minres_qlp/index.html index 4e1cfcf4a..3011e4b89 100644 --- a/dev/examples/minres_qlp/index.html +++ b/dev/examples/minres_qlp/index.html @@ -14,7 +14,7 @@ @printf("Residual r: %s\n", Krylov.vec2str(r)) @printf("Relative residual norm ‖r‖: %8.1e\n", norm(r) / b_norm) @printf("Solution x: %s\n", Krylov.vec2str(x)) -@printf("Minimum-norm solution? %s\n", x ≈ [1.0; 1.0; 1.0; 0.0])
Residual r: [-6.2e-15  2.9e-15  2.2e-15  4.0e+00 ]
+@printf("Minimum-norm solution? %s\n", x ≈ [1.0; 1.0; 1.0; 0.0])
Residual r: [-4.9e-15  2.4e-15  4.4e-16  4.0e+00 ]
 Relative residual norm ‖r‖:  7.3e-01
-Solution x: [ 1.0e+00  1.0e+00  1.0e+00 -9.6e-16 ]
-Minimum-norm solution? true
+Solution x: [ 1.0e+00 1.0e+00 1.0e+00 -2.0e-15 ] +Minimum-norm solution? true diff --git a/dev/examples/symmlq/index.html b/dev/examples/symmlq/index.html index 5f1e33111..21146992d 100644 --- a/dev/examples/symmlq/index.html +++ b/dev/examples/symmlq/index.html @@ -14,7 +14,7 @@ @printf("Residual r: %s\n", Krylov.vec2str(r)) @printf("Relative residual norm ‖r‖: %8.1e\n", norm(r) / b_norm) @printf("Solution x: %s\n", Krylov.vec2str(x)) -@printf("Minimum-norm solution? %s\n", x ≈ [1.0; 1.0; 1.0; 0.0])
Residual r: [ 1.1e-16  2.2e-16  0.0e+00  0.0e+00 ]
-Relative residual norm ‖r‖:  6.6e-17
+@printf("Minimum-norm solution? %s\n", x ≈ [1.0; 1.0; 1.0; 0.0])
Residual r: [ 0.0e+00  4.4e-16  0.0e+00  0.0e+00 ]
+Relative residual norm ‖r‖:  1.2e-16
 Solution x: [ 1.0e+00  1.0e+00  1.0e+00  0.0e+00 ]
-Minimum-norm solution? true
+Minimum-norm solution? true diff --git a/dev/examples/tricg/index.html b/dev/examples/tricg/index.html index 14662d6af..a6a2ebb71 100644 --- a/dev/examples/tricg/index.html +++ b/dev/examples/tricg/index.html @@ -67,18 +67,18 @@ B = [b; c] r = B - K * [x; y] resid = sqrt(dot(r, H * r)) -@printf("TriCG: Relative residual: %8.1e\n", resid)
TriCG: Relative residual:  2.8e-11
-TriCG: Relative residual:  8.2e-12
-TriCG: Relative residual:  1.2e-11
-TriCG: Relative residual:  4.3e-11
-TriCG: Relative residual:  3.5e-10
+@printf("TriCG: Relative residual: %8.1e\n", resid)
TriCG: Relative residual:  2.3e-11
+TriCG: Relative residual:  7.6e-12
+TriCG: Relative residual:  1.0e-11
+TriCG: Relative residual:  3.6e-11
+TriCG: Relative residual:  3.9e-10
 TriCG: system of 10 equations in 10 variables
     k     ‖rₖ‖     βₖ₊₁     γₖ₊₁  timer
-    0  1.1e+01  6.7e+00  8.7e+00  0.22s
-    1  5.8e+00  2.6e+02  3.0e+02  0.33s
-    2  2.5e+00  2.9e+02  2.2e+02  0.33s
-    3  1.2e+00  2.4e+01  5.5e+01  0.33s
-    4  1.9e-01  3.4e-01  8.1e-01  0.33s
-    5  2.7e-08  1.5e-07  9.0e-08  0.33s
+    0  1.1e+01  6.7e+00  8.7e+00  0.34s
+    1  5.8e+00  2.6e+02  3.0e+02  0.50s
+    2  2.5e+00  2.9e+02  2.2e+02  0.50s
+    3  1.2e+00  2.4e+01  5.5e+01  0.50s
+    4  1.9e-01  3.4e-01  8.1e-01  0.50s
+    5  1.2e-08  6.7e-08  7.7e-08  0.50s
 
-TriCG: Relative residual:  2.7e-08
+TriCG: Relative residual: 1.2e-08 diff --git a/dev/examples/trimr/index.html b/dev/examples/trimr/index.html index 27154b8ae..1fd964071 100644 --- a/dev/examples/trimr/index.html +++ b/dev/examples/trimr/index.html @@ -54,15 +54,15 @@ B = [b; c] r = B - K * [x; y] resid = sqrt(dot(r, opH⁻¹ * r)) -@printf("TriMR: Relative residual: %8.1e\n", resid)
TriMR: Relative residual:  3.4e-09
-TriMR: Relative residual:  2.8e-11
+@printf("TriMR: Relative residual: %8.1e\n", resid)
TriMR: Relative residual:  2.0e-09
+TriMR: Relative residual:  2.3e-11
 TriMR: system of 10 equations in 10 variables
     k     ‖rₖ‖     βₖ₊₁     γₖ₊₁  timer
-    0  1.1e+00  8.7e-01  6.8e-01  0.16s
-    1  7.3e-01  4.0e+00  8.8e-01  0.16s
-    2  4.4e-01  2.2e+00  2.0e+00  0.16s
-    3  3.2e-01  9.3e-02  1.4e+00  0.16s
-    4  2.2e-03  1.0e-02  7.0e-03  0.16s
-    5  2.0e-13  5.8e-10  3.3e-11  0.16s
+    0  1.1e+00  8.7e-01  6.8e-01  0.23s
+    1  7.3e-01  4.0e+00  8.8e-01  0.23s
+    2  4.4e-01  2.2e+00  2.0e+00  0.23s
+    3  3.2e-01  9.3e-02  1.4e+00  0.23s
+    4  2.2e-03  1.0e-02  7.0e-03  0.23s
+    5  1.2e-13  3.5e-10  1.9e-11  0.23s
 
-TriMR: Relative residual:  2.0e-13
+TriMR: Relative residual: 1.2e-13 diff --git a/dev/factorization-free/index.html b/dev/factorization-free/index.html index 66a67be90..66ff3958c 100644 --- a/dev/factorization-free/index.html +++ b/dev/factorization-free/index.html @@ -52,7 +52,7 @@ residuals: [] Aresiduals: [] κ₂(A): [] - timer: 603.97ms + timer: 876.44ms status: solution good enough given atol and rtol )

Example 2: The Gauss-Newton Method for Nonlinear Least Squares

At each iteration of the Gauss-Newton method applied to a nonlinear least-squares objective $f(x) = \tfrac{1}{2}\| F(x)\|^2$ where $F : \mathbb{R}^n \rightarrow \mathbb{R}^m$ is $\mathcal{C}^1$, we solve the subproblem:

\[\min_{d \in \mathbb{R}^n}~~\tfrac{1}{2}~\|J(x_k) d + F(x_k)\|^2,\]

where $J(x)$ is the Jacobian of $F$ at $x$.

An appropriate iterative method to solve the above linear least-squares problems is LSMR. We could pass the explicit Jacobian to LSMR as follows:

using ForwardDiff, Krylov
 
@@ -80,11 +80,11 @@
  inconsistent: true
  residuals: []
  Aresiduals: []
- residual: 0.022490204087080454
- Aresidual: 1.9837017206339416e-15
- κ₂(A): 1.2777264193293685
- ‖A‖F: 5.328138145631235
+ residual: 0.022490204087080457
+ Aresidual: 2.0849695585298754e-15
+ κ₂(A): 1.2777264193293687
+ ‖A‖F: 5.328138145631234
  xNorm: 0.5623144027914095
- timer: 639.43ms
+ timer: 932.24ms
  status: found approximate minimum least-squares solution
)

Note that preconditioners can also be implemented as abstract operators. For instance, we could compute the Cholesky factorization of $M$ and $N$ and create linear operators that perform the forward and backward solves.
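
A hedged sketch of that idea with LinearOperators.jl, mirroring the operator style used elsewhere in these docs (the matrix M below is an illustrative stand-in for a symmetric positive-definite block):

using LinearAlgebra, LinearOperators
M = [4.0 1.0; 1.0 3.0]
F = cholesky(M)
opM⁻¹ = LinearOperator(Float64, 2, 2, true, true, (y, v) -> ldiv!(y, F, v))
# opM⁻¹ can then be passed as a preconditioner, e.g. cg(A, b, M=opM⁻¹)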

Krylov methods combined with factorization-free operators make it possible to reduce computation time and memory requirements considerably by avoiding the construction and storage of the system matrix. In the field of partial differential equations, the implementation of high-performance factorization-free operators and assembly-free preconditioning is a subject of active research.

diff --git a/dev/gpu/index.html b/dev/gpu/index.html index 416ad6e2a..52bcbca33 100644

b_gpu = MtlVector(b_cpu)

# Solve a dense least-norm problem on an Apple M1 GPU
x, stats = craig(A_gpu, b_gpu)
Warning

Metal.jl is under heavy development and is considered experimental for now.

diff --git a/dev/index.html b/dev/index.html index 2601ab192..47fc18400 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,4 +1,4 @@ Home · Krylov.jl

Krylov.jl documentation

This package provides implementations of some of the most useful Krylov methods for a variety of problems:

1 - Square or rectangular full-rank systems

\[ Ax = b\]

should be solved when b lies in the range space of A. This situation occurs when

  • A is square and nonsingular,
  • A is tall and has full column rank and b lies in the range of A.

2 - Linear least-squares problems

\[ \min \|b - Ax\|\]

should be solved when b is not in the range of A (inconsistent systems), regardless of the shape and rank of A. This situation mainly occurs when

  • A is square and singular,
  • A is tall and thin.

Underdetermined systems are less common but also occur.

If there are infinitely many such x (because A is column rank-deficient), one with minimum norm is identified

\[ \min \|x\| \quad \text{subject to} \quad x \in \argmin \|b - Ax\|.\]

3 - Linear least-norm problems

\[ \min \|x\| \quad \text{subject to} \quad Ax = b\]

should be solved when A is column rank-deficient but b is in the range of A (consistent systems), regardless of the shape of A. This situation mainly occurs when

  • A is square and singular,
  • A is short and wide.

Overdetermined systems are less common but also occur.

4 - Adjoint systems

\[ Ax = b \quad \text{and} \quad A^H y = c\]

where A can have any shape.

5 - Saddle-point and Hermitian quasi-definite systems

\[ \begin{bmatrix} M & \phantom{-}A \\ A^H & -N \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \left(\begin{bmatrix} b \\ 0 \end{bmatrix},\begin{bmatrix} 0 \\ c \end{bmatrix},\begin{bmatrix} b \\ c \end{bmatrix}\right)\]

where A can have any shape.

6 - Generalized saddle-point and non-Hermitian partitioned systems

\[ \begin{bmatrix} M & A \\ B & N \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} b \\ c \end{bmatrix}\]

where A can have any shape and B has the shape of Aᴴ. A, B, b and c must all be nonzero.
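
As a quick illustration of classes 2 and 3, a minimal sketch with random data (the solver choices are among those documented in this manual):

using Krylov
A = rand(10, 6); b = rand(10)
x, stats = lsmr(A, b)         # least squares: min ‖b - Ax‖
W = rand(6, 10); c = rand(6)
x2, y2, stats2 = craig(W, c)  # least norm: min ‖x2‖ subject to W * x2 = c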

Krylov solvers are particularly appropriate in situations where such problems must be solved but a factorization is not possible, either because:

  • A is not available explicitly,
  • A would be dense or would consume an excessive amount of memory if it were materialized,
  • factors would consume an excessive amount of memory.

Iterative methods are recommended in any of the following situations:

  • the problem is sufficiently large that a factorization is not feasible or would be slow,
  • an effective preconditioner is known in cases where the problem has unfavorable spectral structure,
  • the operator can be represented efficiently as a sparse matrix,
  • the operator is fast, i.e., can be applied with better complexity than if it were materialized as a matrix. Certain fast operators would materialize as dense matrices.

Features

All solvers in Krylov.jl have an in-place version, are compatible with GPUs, and work in any floating-point data type.

How to Install

Krylov can be installed and tested through the Julia package manager:

julia> ]
 pkg> add Krylov
pkg> test Krylov

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Before opening a pull request, start an issue or a discussion on the topic, please.

If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.

diff --git a/dev/inplace/index.html b/dev/inplace/index.html index fc79b507f..cd47d374a 100644

dqgmres!(dqgmres_solver, A2, b2)

lsqr_solver = LsqrSolver(m, n, CuVector{Float32})
lsqr!(lsqr_solver, A3, b3)

A generic function solve! is also available and dispatches to the appropriate Krylov method.

Krylov.solve!Function
solve!(solver, args...; kwargs...)

Use the in-place Krylov method associated to solver.

source

In-place methods return an updated solver workspace. Solutions and statistics can be recovered via solver.x, solver.y and solver.stats. The functions solution and statistics can also be used.

Krylov.nsolutionFunction
nsolution(solver)

Return the number of outputs of solution(solver).

source
Krylov.solutionFunction
solution(solver)

Return the solution(s) stored in the solver. Optionally you can specify which solution you want to recover, solution(solver, 1) returns x and solution(solver, 2) returns y.

source
Krylov.statisticsFunction
statistics(solver)

Return the statistics stored in the solver.

source
Krylov.issolvedFunction
issolved(solver)

Return a boolean that determines whether the Krylov method associated to solver succeeded.

source
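
A minimal sketch combining these functions (small SPD system for illustration):

using Krylov, LinearAlgebra
A = diagm(0 => 1.0:4.0)
b = ones(4)
solver = CgSolver(A, b)
solve!(solver, A, b)
x = solution(solver)        # same as solver.x
stats = statistics(solver)  # same as solver.stats
issolved(solver)            # true if CG converged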

Examples

We illustrate the use of in-place Krylov solvers with two well-known optimization methods. The details of the optimization methods are described in the section about Factorization-free operators.

Example 1: Newton's method for convex optimization without linesearch

using Krylov
 
 function newton(∇f, ∇²f, x₀; itmax = 200, tol = 1e-8)
 
@@ -65,4 +65,4 @@
         tired = iter ≥ itmax
     end
     return x
end

diff --git a/dev/preconditioners/index.html b/dev/preconditioners/index.html index 1b0856fa8..96ad7a766 100644

(y, v) -> (y .= v ; BasicLU.solve!(B, y, 'T')),
(y, v) -> (y .= v ; BasicLU.solve!(B, y, 'T')))

x, y, stats = craigmr(B⁻¹ * opA, B⁻¹ * b) # min ‖x‖₂ s.t. B⁻¹Ax = B⁻¹b

diff --git a/dev/processes/index.html b/dev/processes/index.html index 103d630cd..4762f080e 100644

Hermitian Lanczos

hermitian_lanczos

After $k$ iterations of the Hermitian Lanczos process, the basis $V_{k+1}$ satisfies $A V_k = V_{k+1} T_{k+1,k}$ with

\[T_{k+1,k} =
\begin{bmatrix}
T_{k} \\
\beta_{k+1} e_{k}^T
\end{bmatrix}.\]

Note that depending on how we normalize the vectors that compose $V_k$, $T_{k+1,k}$ can be a real tridiagonal matrix even if $A$ is a complex matrix.

The function hermitian_lanczos returns $V_{k+1}$, $\beta_1$ and $T_{k+1,k}$.

Related methods: SYMMLQ, CG, CR, CAR, MINRES, MINRES-QLP, MINARES, CGLS, CRLS, CGNE, CRMR, CG-LANCZOS and CG-LANCZOS-SHIFT.

Krylov.hermitian_lanczosMethod
V, β, T = hermitian_lanczos(A, b, k)

Input arguments

  • A: a linear operator that models an Hermitian matrix of dimension n;
  • b: a vector of length n;
  • k: the number of iterations of the Hermitian Lanczos process.

Output arguments

  • V: a dense n × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • T: a sparse (k+1) × k tridiagonal matrix.

Reference

source
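
A small numerical check of the relation $A V_k = V_{k+1} T_{k+1,k}$ (random Hermitian matrix for illustration):

using Krylov, LinearAlgebra
n, k = 10, 4
A = Hermitian(rand(n, n))
b = rand(n)
V, β, T = Krylov.hermitian_lanczos(A, b, k)
norm(A * V[:, 1:k] - V * T)  # small, up to rounding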

Non-Hermitian Lanczos

nonhermitian_lanczos

After $k$ iterations of the non-Hermitian Lanczos process (also named the Lanczos biorthogonalization process), the situation may be summarized as

\[\begin{align*}
A V_k &= V_k T_k + \beta_{k+1} v_{k+1} e_k^T = V_{k+1} T_{k+1,k}, \\
A^H U_k &= U_k T_k^H + \bar{\gamma}_{k+1} u_{k+1} e_k^T = U_{k+1} T_{k,k+1}^H, \\
V_k^H U_k &= U_k^H V_k = I_k,
\end{align*}\]

with

\[T_{k,k+1} =
\begin{bmatrix}
T_{k} & \gamma_{k+1} e_k
\end{bmatrix}.\]

The function nonhermitian_lanczos returns $V_{k+1}$, $\beta_1$, $T_{k+1,k}$, $U_{k+1}$, $\bar{\gamma}_1$ and $T_{k,k+1}^H$.

Related methods: BiLQ, QMR, BiLQR, CGS and BICGSTAB.

Note

The scaling factors used in our implementation are $\beta_k = |u_k^H v_k|^{\tfrac{1}{2}}$ and $\gamma_k = (u_k^H v_k) / \beta_k$. With these scaling factors, the non-Hermitian Lanczos process coincides with the Hermitian Lanczos process when $A = A^H$ and $b = c$.

Krylov.nonhermitian_lanczosMethod
V, β, T, U, γᴴ, Tᴴ = nonhermitian_lanczos(A, b, c, k)

Input arguments

  • A: a linear operator that models a square matrix of dimension n;
  • b: a vector of length n;
  • c: a vector of length n;
  • k: the number of iterations of the non-Hermitian Lanczos process.

Output arguments

  • V: a dense n × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • T: a sparse (k+1) × k tridiagonal matrix;
  • U: a dense n × (k+1) matrix;
  • γᴴ: a coefficient such that γᴴu₁ = c;
  • Tᴴ: a sparse (k+1) × k tridiagonal matrix.

Reference

source
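
A small numerical check of both recurrences (random data for illustration):

using Krylov, LinearAlgebra
n, k = 10, 4
A = rand(n, n); b = rand(n); c = rand(n)
V, β, T, U, γᴴ, Tᴴ = Krylov.nonhermitian_lanczos(A, b, c, k)
norm(A * V[:, 1:k] - V * T)    # A Vₖ = Vₖ₊₁ Tₖ₊₁,ₖ
norm(A' * U[:, 1:k] - U * Tᴴ)  # Aᴴ Uₖ = Uₖ₊₁ Tₖ,ₖ₊₁ᴴ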

The non-Hermitian Lanczos process can also be implemented without $A^H$ (transpose-free variant). To derive it, we can observe that $\beta_{k+1} v_{k+1} = P_k(A) b~~$ and $\bar{\gamma}_{k+1} u_{k+1} = Q_k(A^H) c~~$ where $P_k$ and $Q_k$ are polynomials of degree $k$. The polynomials are defined by the recursions:

\[\begin{align*}
P_0(A) &= I_n, \\
P_1(A) &= \left(\dfrac{A - \alpha_1 I_n}{\beta_1}\right) P_0(A), \\
P_k(A) &= \left(\dfrac{A - \alpha_k I_n}{\beta_k}\right) P_{k-1}(A) - \dfrac{\gamma_k}{\beta_{k-1}} P_{k-2}(A), \quad k \ge 2,
\end{align*}\]

and similarly for the polynomials $Q_k$ with $A^H$.

Arnoldi

arnoldi

After $k$ iterations of the Arnoldi process, the basis $V_{k+1}$ satisfies $A V_k = V_{k+1} H_{k+1,k}$ with

\[H_{k+1,k} =
\begin{bmatrix}
H_{k} \\
h_{k+1,k} e_{k}^T
\end{bmatrix}.\]

The function arnoldi returns $V_{k+1}$, $\beta$ and $H_{k+1,k}$.

Related methods: DIOM, FOM, DQGMRES, GMRES and FGMRES.

Note

The Arnoldi process coincides with the Hermitian Lanczos process when $A$ is Hermitian.

Krylov.arnoldiMethod
V, β, H = arnoldi(A, b, k; reorthogonalization=false)

Input arguments

  • A: a linear operator that models a square matrix of dimension n;
  • b: a vector of length n;
  • k: the number of iterations of the Arnoldi process.

Keyword arguments

  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors.

Output arguments

  • V: a dense n × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • H: a dense (k+1) × k upper Hessenberg matrix.

Reference

source
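As a quick sanity check, the sketch below builds an Arnoldi basis of a random matrix and verifies the relations above; reorthogonalization is enabled purely for illustration.

using Krylov, LinearAlgebra

n, k = 100, 20
A = rand(n, n)
b = rand(n)

V, β, H = arnoldi(A, b, k; reorthogonalization=true)

norm(A * V[:, 1:k] - V * H)   # ≈ 0 since A Vₖ = Vₖ₊₁ Hₖ₊₁.ₖ
norm(V' * V - I)              # ≈ 0 since the basis is orthonormal
norm(β * V[:, 1] - b)         # ≈ 0 since βv₁ = b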


Golub-Kahan

golub_kahan

After $k$ iterations of the Golub-Kahan bidiagonalization process, the situation may be summarized as

\[\begin{align*}
A V_k &= U_{k+1} B_k, \\
A^H U_{k+1} &= V_k B_k^H + \bar{\alpha}_{k+1} v_{k+1} e_{k+1}^T = V_{k+1} L_{k+1}^H, \\
V_k^H V_k &= U_k^H U_k = I_k,
\end{align*}\]

where $L_k$ is a lower bidiagonal matrix and

\[B_k =
\begin{bmatrix}
  L_{k} \\
  \beta_{k+1} e_{k}^T
\end{bmatrix}.\]

Note that depending on how we normalize the vectors that compose $V_k$ and $U_k$, $L_k$ can be a real bidiagonal matrix even if $A$ is a complex matrix.

The function golub_kahan returns $V_{k+1}$, $U_{k+1}$, $\beta_1$ and $L_{k+1}$.

Related methods: LNLQ, CRAIG, CRAIGMR, LSLQ, LSQR and LSMR.

Note

The Golub-Kahan process coincides with the Hermitian Lanczos process applied to the normal equations $A^HA x = A^Hb$ and $AA^H x = b$. It is also related to the Hermitian Lanczos process applied to $\begin{bmatrix} 0 & A \\ A^H & 0 \end{bmatrix}$ with initial vector $\begin{bmatrix} b \\ 0 \end{bmatrix}$.

Krylov.golub_kahanMethod
V, U, β, L = golub_kahan(A, b, k)

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • k: the number of iterations of the Golub-Kahan process.

Output arguments

  • V: a dense n × (k+1) matrix;
  • U: a dense m × (k+1) matrix;
  • β: a coefficient such that βu₁ = b;
  • L: a sparse (k+1) × (k+1) lower bidiagonal matrix.

References

source
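The sketch below illustrates golub_kahan on a random rectangular matrix; the sizes are arbitrary and the checks simply restate the relations above.

using Krylov, LinearAlgebra

m, n, k = 80, 60, 15
A = rand(m, n)
b = rand(m)

V, U, β, L = golub_kahan(A, b, k)

norm(A' * U - V * L')   # ≈ 0 since Aᴴ Uₖ₊₁ = Vₖ₊₁ Lₖ₊₁ᴴ
norm(β * U[:, 1] - b)   # ≈ 0 since βu₁ = b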


Saunders-Simon-Yip

saunders_simon_yip

After $k$ iterations of the Saunders-Simon-Yip process (also named the orthogonal tridiagonalization process), the situation may be summarized as

\[\begin{align*}
A U_k &= V_k T_k + \beta_{k+1} v_{k+1} e_k^T = V_{k+1} T_{k+1,k}, \\
A^H V_k &= U_k T_k^H + \bar{\gamma}_{k+1} u_{k+1} e_k^T = U_{k+1} T_{k,k+1}^H, \\
V_k^H V_k &= U_k^H U_k = I_k,
\end{align*}\]

where $T_k$ is a tridiagonal matrix and

\[T_{k+1,k} =
\begin{bmatrix}
  T_{k} \\
  \beta_{k+1} e_{k}^T
\end{bmatrix},
\qquad
T_{k,k+1} =
\begin{bmatrix}
  T_{k} & \gamma_{k+1} e_{k}
\end{bmatrix}.\]

The function saunders_simon_yip returns $V_{k+1}$, $\beta_1$, $T_{k+1,k}$, $U_{k+1}$, $\bar{\gamma}_1$ and $T_{k,k+1}^H$.

Related methods: USYMLQ, USYMQR, TriLQR, TriCG and TriMR.

Note

The Saunders-Simon-Yip process is equivalent to the block-Lanczos process applied to $\begin{bmatrix} 0 & A \\ A^H & 0 \end{bmatrix}$ with initial matrix $\begin{bmatrix} b & 0 \\ 0 & c \end{bmatrix}$.

Krylov.saunders_simon_yipMethod
V, β, T, U, γᴴ, Tᴴ = saunders_simon_yip(A, b, c, k)

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n;
  • k: the number of iterations of the Saunders-Simon-Yip process.

Output arguments

  • V: a dense m × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • T: a sparse (k+1) × k tridiagonal matrix;
  • U: a dense n × (k+1) matrix;
  • γᴴ: a coefficient such that γᴴu₁ = c;
  • Tᴴ: a sparse (k+1) × k tridiagonal matrix.

Reference

source
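A minimal sketch of saunders_simon_yip on random rectangular data, re-checking the two relations above:

using Krylov, LinearAlgebra

m, n, k = 80, 60, 15
A = rand(m, n)
b = rand(m)
c = rand(n)

V, β, T, U, γᴴ, Tᴴ = saunders_simon_yip(A, b, c, k)

norm(A * U[:, 1:k] - V * T)     # ≈ 0 since A Uₖ = Vₖ₊₁ Tₖ₊₁.ₖ
norm(A' * V[:, 1:k] - U * Tᴴ)   # ≈ 0 since Aᴴ Vₖ = Uₖ₊₁ Tₖ.ₖ₊₁ᴴ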


Montoison-Orban

montoison_orban

After $k$ iterations of the Montoison-Orban process (also named the orthogonal Hessenberg reduction process), the situation may be summarized as

\[\begin{align*}
A U_k &= V_k H_k + h_{k+1,k} v_{k+1} e_k^T = V_{k+1} H_{k+1,k}, \\
B V_k &= U_k F_k + f_{k+1,k} u_{k+1} e_k^T = U_{k+1} F_{k+1,k}, \\
V_k^H V_k &= U_k^H U_k = I_k,
\end{align*}\]

where $H_k$ and $F_k$ are upper Hessenberg matrices and

\[H_{k+1,k} =
\begin{bmatrix}
  H_{k} \\
  h_{k+1,k} e_{k}^T
\end{bmatrix},
\qquad
F_{k+1,k} =
\begin{bmatrix}
  F_{k} \\
  f_{k+1,k} e_{k}^T
\end{bmatrix}.\]

The function montoison_orban returns $V_{k+1}$, $\beta$, $H_{k+1,k}$, $U_{k+1}$, $\gamma$ and $F_{k+1,k}$.

Related methods: GPMR.

Note

The Montoison-Orban process is equivalent to the block-Arnoldi process applied to $\begin{bmatrix} 0 & A \\ B & 0 \end{bmatrix}$ with initial matrix $\begin{bmatrix} b & 0 \\ 0 & c \end{bmatrix}$. It also coincides with the Saunders-Simon-Yip process when $B = A^H$.

Krylov.montoison_orbanMethod
V, β, H, U, γ, F = montoison_orban(A, B, b, c, k; reorthogonalization=false)

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • B: a linear operator that models a matrix of dimension n × m;
  • b: a vector of length m;
  • c: a vector of length n;
  • k: the number of iterations of the Montoison-Orban process.

Keyword arguments

  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors.

Output arguments

  • V: a dense m × (k+1) matrix;
  • β: a coefficient such that βv₁ = b;
  • H: a dense (k+1) × k upper Hessenberg matrix;
  • U: a dense n × (k+1) matrix;
  • γ: a coefficient such that γu₁ = c;
  • F: a dense (k+1) × k upper Hessenberg matrix.

Reference

source
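A minimal sketch of montoison_orban, with B chosen as a random matrix of the required shape (B = Aᴴ would reduce it to the Saunders-Simon-Yip process):

using Krylov, LinearAlgebra

m, n, k = 80, 60, 15
A = rand(m, n)
B = rand(n, m)
b = rand(m)
c = rand(n)

V, β, H, U, γ, F = montoison_orban(A, B, b, c, k)

norm(A * U[:, 1:k] - V * H)   # ≈ 0 since A Uₖ = Vₖ₊₁ Hₖ₊₁.ₖ
norm(B * V[:, 1:k] - U * F)   # ≈ 0 since B Vₖ = Uₖ₊₁ Fₖ₊₁.ₖ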


Reference

Index

Krylov.niterationsFunction
niterations(solver)

Return the number of iterations performed by the Krylov method associated with solver.

source
Krylov.AprodFunction
Aprod(solver)

Return the number of operator-vector products with A performed by the Krylov method associated with solver.

source
Krylov.AtprodFunction
Atprod(solver)

Return the number of operator-vector products with A' performed by the Krylov method associated with solver.

source
Krylov.extract_parametersFunction
arguments = extract_parameters(ex::Expr)

Extract the arguments of an expression that is a keyword parameter tuple. Implementation suggested by Mitchell J. O'Sullivan (@mosullivan93).

source
Base.showFunction
show(io, solver; show_stats=true)

Statistics of solver are displayed if show_stats is set to true.

source
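As a quick illustration of these utilities, the sketch below runs an in-place solver and queries the counters afterwards; the CglsSolver(A, b) constructor is assumed from the in-place interface described elsewhere in this documentation.

using Krylov

m, n = 60, 40
A = rand(m, n)
b = rand(m)

solver = CglsSolver(A, b)   # workspace for the in-place method (assumed constructor)
cgls!(solver, A, b)

niterations(solver)    # number of iterations performed
Aprod(solver)          # number of products with A
Atprod(solver)         # number of products with A'
show(stdout, solver)   # statistics displayed since show_stats defaults to true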

BiLQR

Krylov.bilqrFunction

(x, y, stats) = bilqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                      transfer_to_bicg::Bool=true, atol::T=√eps(T),
                      rtol::T=√eps(T), itmax::Int=0, timemax::Float64=Inf,
                      verbose::Int=0, history::Bool=false, callback=solver->false,
                      iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, y, stats) = bilqr(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)

BiLQR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.

Combine BiLQ and QMR to solve adjoint systems.

[0  A] [y] = [b]
[Aᴴ 0] [x]   [c]

The relation bᴴc ≠ 0 must be satisfied. BiLQ is used to solve the primal system Ax = b of size n, and QMR is used to solve the dual system Aᴴy = c of size n.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length n that represents an initial guess of the solution x;
  • y0: a vector of length n that represents an initial guess of the solution y.

Keyword arguments

  • transfer_to_bicg: transfer from the BiLQ point to the BiCG point, when it exists. The transfer is based on the residual norm;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length n;
  • stats: statistics collected on the run in an AdjointStats structure.

Reference

source
Krylov.bilqr!Function
solver = bilqr!(solver::BilqrSolver, A, b, c; kwargs...)
solver = bilqr!(solver::BilqrSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of bilqr.

See BilqrSolver for more details about the solver.

source
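A minimal sketch of bilqr on random data; shifting the diagonal is only a cheap way to keep the example well conditioned, and bᴴc ≠ 0 holds generically for random vectors.

using Krylov, LinearAlgebra

n = 50
A = rand(n, n) + 2I
b = rand(n)
c = rand(n)

(x, y, stats) = bilqr(A, b, c)

norm(A * x - b)    # primal residual (BiLQ)
norm(A' * y - c)   # dual residual (QMR)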

TriLQR

Krylov.trilqrFunction
(x, y, stats) = trilqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                        transfer_to_usymcg::Bool=true, atol::T=√eps(T),
                        rtol::T=√eps(T), itmax::Int=0,
                        timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                        callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, y, stats) = trilqr(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)

TriLQR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.

Combine USYMLQ and USYMQR to solve adjoint systems.

[0  A] [y] = [b]
[Aᴴ 0] [x]   [c]

USYMLQ is used to solve the primal system Ax = b of size m × n, and USYMQR is used to solve the dual system Aᴴy = c of size n × m.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length n that represents an initial guess of the solution x;
  • y0: a vector of length m that represents an initial guess of the solution y.

Keyword arguments

  • transfer_to_usymcg: transfer from the USYMLQ point to the USYMCG point, when it exists. The transfer is based on the residual norm;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in an AdjointStats structure.

Reference

source
Krylov.trilqr!Function
solver = trilqr!(solver::TrilqrSolver, A, b, c; kwargs...)
solver = trilqr!(solver::TrilqrSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of trilqr.

See TrilqrSolver for more details about the solver.

source
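A minimal sketch of trilqr; b is built from a known point so that the primal system is consistent, as USYMLQ requires.

using Krylov, LinearAlgebra

m, n = 60, 40
A = rand(m, n)
b = A * rand(n)   # consistent primal system Ax = b
c = rand(n)

(x, y, stats) = trilqr(A, b, c)

norm(A * x - b)    # primal residual (USYMLQ)
norm(A' * y - c)   # dual residual (USYMQR)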
GPMR

Krylov.gpmrFunction

(x, y, stats) = gpmr(A, B, b::AbstractVector{FC}, c::AbstractVector{FC}; kwargs...)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, y, stats) = gpmr(A, B, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)

GPMR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.

Given matrices A of dimension m × n and B of dimension n × m, GPMR solves the non-Hermitian partitioned linear system

[ λIₘ   A  ] [ x ] = [ b ]
 [  B   μIₙ ] [ y ]   [ c ],

of size (n+m) × (n+m) where λ and μ are real or complex numbers. A can have any shape and B has the shape of Aᴴ. A, B, b and c must all be nonzero.

This implementation allows left and right block diagonal preconditioners

[ C    ] [ λM   A ] [ E    ] [ E⁻¹x ] = [ Cb ]
 [    D ] [  B  μN ] [    F ] [ F⁻¹y ]   [ Dc ],

and can solve

[ λM   A ] [ x ] = [ b ]
[  B  μN ] [ y ]   [ c ]

when CE = M⁻¹ and DF = N⁻¹.

By default, GPMR solves unsymmetric linear systems with λ = 1 and μ = 1.

GPMR is based on the orthogonal Hessenberg reduction process and its relations with the block-Arnoldi process. The residual norm ‖rₖ‖ is monotonically decreasing in GPMR.

GPMR stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • B: a linear operator that models a matrix of dimension n × m;
  • b: a vector of length m;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length m that represents an initial guess of the solution x;
  • y0: a vector of length n that represents an initial guess of the solution y.

Keyword arguments

Output arguments

  • x: a dense vector of length m;
  • y: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.gpmr!Function
solver = gpmr!(solver::GpmrSolver, A, B, b, c; kwargs...)
solver = gpmr!(solver::GpmrSolver, A, B, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of gpmr.

Note that the memory keyword argument is the only exception. It's required to create a GpmrSolver and can't be changed later.

See GpmrSolver for more details about the solver.

source
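A minimal sketch of gpmr with the default λ = μ = 1, so that the system solved is [Iₘ A; B Iₙ][x; y] = [b; c]; all data are random and the checks restate that system blockwise.

using Krylov, LinearAlgebra

m, n = 40, 60
A = rand(m, n)
B = rand(n, m)
b = rand(m)
c = rand(n)

(x, y, stats) = gpmr(A, B, b, c)

norm(x + A * y - b)   # first block of the residual
norm(B * x + y - c)   # second block of the residual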
CGNE

Krylov.cgneFunction

(x, stats) = cgne(A, b::AbstractVector{FC};
                  N=I, ldiv::Bool=false,
                  λ::T=zero(T), atol::T=√eps(T),
                  rtol::T=√eps(T), itmax::Int=0,
                  timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                  callback=solver->false, iostream::IO=kstdout)


T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the consistent linear system

Ax + √λs = b

of size m × n using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations of the second kind

(AAᴴ + λI) y = b

but is more stable. When λ = 0, this method solves the minimum-norm problem

min ‖x‖₂ s.t. Ax = b.

When λ > 0, it solves the problem

min ‖(x,s)‖₂  s.t. Ax + √λs = b.

CGNE produces monotonic errors ‖x-x*‖₂ but not residuals ‖r‖₂. It is formally equivalent to CRAIG, though it can be slightly less accurate; in exchange, it is simpler to implement. Only the x-part of the solution is returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • N: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

  • J. E. Craig, The N-step iteration procedures, Journal of Mathematics and Physics, 34(1), pp. 64–73, 1955.
  • J. E. Craig, Iterations Procedures for Simultaneous Equations, Ph.D. Thesis, Department of Electrical Engineering, MIT, 1954.
source
Krylov.cgne!Function
solver = cgne!(solver::CgneSolver, A, b; kwargs...)

where kwargs are keyword arguments of cgne.

See CgneSolver for more details about the solver.

source
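A minimal sketch of cgne on a random underdetermined system, which is consistent with probability one:

using Krylov, LinearAlgebra

m, n = 40, 60
A = rand(m, n)
b = rand(m)

(x, stats) = cgne(A, b)   # minimum-norm solution when λ = 0

norm(A * x - b)   # ≈ 0 since the system is consistent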

CRMR

Krylov.crmrFunction
(x, stats) = crmr(A, b::AbstractVector{FC};
                   N=I, ldiv::Bool=false,
                   λ::T=zero(T), atol::T=√eps(T),
                   rtol::T=√eps(T), itmax::Int=0,
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)


T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the consistent linear system

Ax + √λs = b

of size m × n using the Conjugate Residual (CR) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CR to the normal equations of the second kind

(AAᴴ + λI) y = b

but is more stable. When λ = 0, this method solves the minimum-norm problem

min ‖x‖₂  s.t.  x ∈ argmin ‖Ax - b‖₂.

When λ > 0, this method solves the problem

min ‖(x,s)‖₂  s.t. Ax + √λs = b.

CRMR produces monotonic residuals ‖r‖₂. It is formally equivalent to CRAIG-MR, though it can be slightly less accurate; in exchange, it is simpler to implement. Only the x-part of the solution is returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • N: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.crmr!Function
solver = crmr!(solver::CrmrSolver, A, b; kwargs...)

where kwargs are keyword arguments of crmr.

See CrmrSolver for more details about the solver.

source
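The same kind of sketch applies to crmr:

using Krylov, LinearAlgebra

m, n = 40, 60
A = rand(m, n)
b = rand(m)

(x, stats) = crmr(A, b)

norm(A * x - b)   # ≈ 0 for this consistent underdetermined system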

LNLQ

Krylov.lnlqFunction
(x, y, stats) = lnlq(A, b::AbstractVector{FC};
                      M=I, N=I, ldiv::Bool=false,
                      transfer_to_craig::Bool=true,
                      sqd::Bool=false, λ::T=zero(T),
                      σ::T=zero(T), utolx::T=√eps(T),
                      utoly::T=√eps(T), atol::T=√eps(T),
                      rtol::T=√eps(T), itmax::Int=0,
                      timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                      callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Find the least-norm solution of the consistent linear system

Ax + λ²y = b

of size m × n using the LNLQ method, where λ ≥ 0 is a regularization parameter.

For a system in the form Ax = b, the LNLQ method is equivalent to applying SYMMLQ to AAᴴy = b and recovering x = Aᴴy, but is more stable. Note that y are the Lagrange multipliers of the least-norm problem

minimize ‖x‖  s.t.  Ax = b.

If λ > 0, LNLQ solves the symmetric and quasi-definite system

[ -F    Aᴴ ] [ x ]   [ 0 ]
 [  A  λ²E  ] [ y ] = [ b ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

min ‖x‖²_F + λ²‖y‖²_E  s.t.  Ax + λ²Ey = b.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LNLQ is then equivalent to applying SYMMLQ to (AF⁻¹Aᴴ + λ²E)y = b with Fx = Aᴴy.

If λ = 0, LNLQ solves the symmetric and indefinite system

[ -F   Aᴴ ] [ x ]   [ 0 ]
[  A   0  ] [ y ] = [ b ].

The system above represents the optimality conditions of

minimize ‖x‖²_F  s.t.  Ax = b.

In this case, M can still be specified and indicates the weighted norm in which residuals are measured.

In this implementation, both the x and y-parts of the solution are returned.

utolx and utoly are tolerances on the upper bound of the distance to the solution, ‖x - x*‖ and ‖y - y*‖ respectively. The bound is valid if λ > 0 or σ > 0, where σ should be strictly smaller than the smallest positive singular value. For instance, σ := (1 - 10⁻⁷)σₘᵢₙ.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • transfer_to_craig: transfer from the LNLQ point to the CRAIG point, when it exists. The transfer is based on the residual norm;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • σ: strict lower bound on the smallest positive singular value σₘᵢₙ such as σ = (1-10⁻⁷)σₘᵢₙ;
  • utolx: tolerance on the upper bound on the distance to the solution ‖x-x*‖;
  • utoly: tolerance on the upper bound on the distance to the solution ‖y-y*‖;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in a LNLQStats structure.

Reference

source
Krylov.lnlq!Function
solver = lnlq!(solver::LnlqSolver, A, b; kwargs...)

where kwargs are keyword arguments of lnlq.

See LnlqSolver for more details about the solver.

source
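A minimal sketch of lnlq on a random consistent system; both the primal variable x and the multipliers y are returned, and x = Aᴴy at the solution.

using Krylov, LinearAlgebra

m, n = 40, 60
A = rand(m, n)
b = rand(m)

(x, y, stats) = lnlq(A, b)

norm(A * x - b)    # ≈ 0 since the system is consistent
norm(x - A' * y)   # ≈ 0 since x = Aᴴy relates x to the multipliers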


CRAIG

Krylov.craigFunction
(x, y, stats) = craig(A, b::AbstractVector{FC};
                       M=I, N=I, ldiv::Bool=false,
                       transfer_to_lsqr::Bool=false, sqd::Bool=false,
                       λ::T=zero(T), btol::T=√eps(T),
                       conlim::T=1/√eps(T), atol::T=√eps(T),
                       rtol::T=√eps(T), itmax::Int=0,
                       timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                       callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Find the least-norm solution of the consistent linear system

Ax + λ²y = b

of size m × n using the Golub-Kahan implementation of Craig's method, where λ ≥ 0 is a regularization parameter. This method is equivalent to CGNE but is more stable.

For a system in the form Ax = b, Craig's method is equivalent to applying CG to AAᴴy = b and recovering x = Aᴴy. Note that y are the Lagrange multipliers of the least-norm problem

minimize ‖x‖  s.t.  Ax = b.

If λ > 0, CRAIG solves the symmetric and quasi-definite system

[ -F     Aᴴ ] [ x ]   [ 0 ]
 [  A   λ²E  ] [ y ] = [ b ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

min ‖x‖²_F + λ²‖y‖²_E  s.t.  Ax + λ²Ey = b.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. CRAIG is then equivalent to applying CG to (AF⁻¹Aᴴ + λ²E)y = b with Fx = Aᴴy.

If λ = 0, CRAIG solves the symmetric and indefinite system

[ -F   Aᴴ ] [ x ]   [ 0 ]
[  A   0  ] [ y ] = [ b ].

The system above represents the optimality conditions of

minimize ‖x‖²_F  s.t.  Ax = b.

In this case, M can still be specified and indicates the weighted norm in which residuals are measured.

In this implementation, both the x and y-parts of the solution are returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • transfer_to_lsqr: transfer from the LSLQ point to the LSQR point, when it exists. The transfer is based on the residual norm;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • btol: stopping tolerance used to detect zero-residual problems;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.craig!Function
solver = craig!(solver::CraigSolver, A, b; kwargs...)

where kwargs are keyword arguments of craig.

See CraigSolver for more details about the solver.

source
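The corresponding sketch for craig is identical in shape:

using Krylov, LinearAlgebra

m, n = 40, 60
A = rand(m, n)
b = rand(m)

(x, y, stats) = craig(A, b)

norm(A * x - b)    # ≈ 0 since the system is consistent
norm(x - A' * y)   # ≈ 0 since x = Aᴴy at the solution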


CRAIGMR

Krylov.craigmrFunction
(x, y, stats) = craigmr(A, b::AbstractVector{FC};
                         M=I, N=I, ldiv::Bool=false,
                         sqd::Bool=false, λ::T=zero(T), atol::T=√eps(T),
                         rtol::T=√eps(T), itmax::Int=0,
                         timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                         callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the consistent linear system

Ax + λ²y = b

of size m × n using the CRAIGMR method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying the Conjugate Residuals method to the normal equations of the second kind

(AAᴴ + λ²I) y = b

but is more stable. When λ = 0, this method solves the minimum-norm problem

min ‖x‖  s.t.  x ∈ argmin ‖Ax - b‖.

If λ > 0, CRAIGMR solves the symmetric and quasi-definite system

[ -F    Aᴴ ] [ x ]   [ 0 ]
 [  A  λ²E  ] [ y ] = [ b ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

min ‖x‖²_F + λ²‖y‖²_E  s.t.  Ax + λ²Ey = b.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. CRAIGMR is then equivalent to applying MINRES to (AF⁻¹Aᴴ + λ²E)y = b with Fx = Aᴴy.

If λ = 0, CRAIGMR solves the symmetric and indefinite system

[ -F   Aᴴ ] [ x ]   [ 0 ]
[  A   0  ] [ y ] = [ b ].

The system above represents the optimality conditions of

min ‖x‖²_F  s.t.  Ax = b.

In this case, M can still be specified and indicates the weighted norm in which residuals are measured.

CRAIGMR produces monotonic residuals ‖r‖₂. It is formally equivalent to CRMR, though it can be slightly more accurate at the cost of a more intricate implementation. Both the x- and y-parts of the solution are returned.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • y: a dense vector of length m;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.craigmr!Function
solver = craigmr!(solver::CraigmrSolver, A, b; kwargs...)

where kwargs are keyword arguments of craigmr.

See CraigmrSolver for more details about the solver.

source
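And for craigmr:

using Krylov, LinearAlgebra

m, n = 40, 60
A = rand(m, n)
b = rand(m)

(x, y, stats) = craigmr(A, b)

norm(A * x - b)   # ≈ 0 when the system is consistent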


USYMLQ

Krylov.usymlqFunction
(x, stats) = usymlq(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                     transfer_to_usymcg::Bool=true, atol::T=√eps(T),
                     rtol::T=√eps(T), itmax::Int=0,
                     timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                     callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = usymlq(A, b, c, x0::AbstractVector; kwargs...)

USYMLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

USYMLQ determines the least-norm solution of the consistent linear system Ax = b of size m × n.

USYMLQ is based on the orthogonal tridiagonalization process and requires two initial nonzero vectors b and c. The vector c is only used to initialize the process; a default value can be b or Aᴴb depending on the shape of A. The error norm ‖x - x*‖ decreases monotonically in USYMLQ. It can be regarded as a generalization of SYMMLQ.

It can also be applied to under-determined and over-determined problems. In all cases, problems must be consistent.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • transfer_to_usymcg: transfer from the USYMLQ point to the USYMCG point, when it exists. The transfer is based on the residual norm;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.usymlq!Function
solver = usymlq!(solver::UsymlqSolver, A, b, c; kwargs...)
-solver = usymlq!(solver::UsymlqSolver, A, b, c, x0; kwargs...)

where kwargs are keyword arguments of usymlq.

See UsymlqSolver for more details about the solver.

source
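A minimal sketch of usymlq on an underdetermined system; c is random here, though Aᴴb is another natural choice to initialize the process.

using Krylov, LinearAlgebra

m, n = 40, 60
A = rand(m, n)
b = rand(m)
c = rand(n)

(x, stats) = usymlq(A, b, c)

norm(A * x - b)   # ≈ 0 since the underdetermined system is consistent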
CGLS

Krylov.cglsFunction

(x, stats) = cgls(A, b::AbstractVector{FC};
                  M=I, ldiv::Bool=false, radius::T=zero(T),
                  λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),
                  itmax::Int=0, timemax::Float64=Inf,
                  verbose::Int=0, history::Bool=false,
                  callback=solver->false, iostream::IO=kstdout)


T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ‖x‖₂²

of size m × n using the Conjugate Gradient (CG) method, where λ ≥ 0 is a regularization parameter. This method is equivalent to applying CG to the normal equations

(AᴴA + λI) x = Aᴴb

but is more stable.

CGLS produces monotonic residuals ‖r‖₂ but not optimality residuals ‖Aᴴr‖₂. It is formally equivalent to LSQR, though it can be slightly less accurate; in exchange, it is simpler to implement.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.cgls!Function
solver = cgls!(solver::CglsSolver, A, b; kwargs...)

where kwargs are keyword arguments of cgls.

See CglsSolver for more details about the solver.

source
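A minimal sketch of cgls on a random regularized least-squares problem; the check restates the normal equations (AᴴA + λI)x = Aᴴb.

using Krylov, LinearAlgebra

m, n = 80, 40
A = rand(m, n)
b = rand(m)
λ = 1.0e-2

(x, stats) = cgls(A, b, λ=λ)

norm(A' * (A * x - b) + λ * x)   # ≈ 0 at the regularized solution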

CRLS

Krylov.crlsFunction
(x, stats) = crls(A, b::AbstractVector{FC};
                   M=I, ldiv::Bool=false, radius::T=zero(T),
                   λ::T=zero(T), atol::T=√eps(T), rtol::T=√eps(T),
                   itmax::Int=0, timemax::Float64=Inf,
                   verbose::Int=0, history::Bool=false,
-                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the linear least-squares problem

minimize ‖b - Ax‖₂² + λ‖x‖₂²

of size m × n using the Conjugate Residuals (CR) method. This method is equivalent to applying MINRES to the normal equations

(AᴴA + λI) x = Aᴴb.

This implementation updates the residual r := b - Ax by recurrence rather than recomputing it at each iteration.

CRLS produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂. It is formally equivalent to LSMR, though it can be substantially less accurate; in exchange, it is simpler to implement.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

  • D. C.-L. Fong, Minimum-Residual Methods for Sparse Least-Squares using Golub-Kahan Bidiagonalization, Ph.D. Thesis, Stanford University, 2011.
source
Krylov.crls!Function
solver = crls!(solver::CrlsSolver, A, b; kwargs...)

where kwargs are keyword arguments of crls.

See CrlsSolver for more details about the solver.

source
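
A minimal sketch of the radius keyword (arbitrary data): the returned step is a least-squares step constrained to a ball, as in a trust-region method; Δ is just an illustrative name for the radius.

using Krylov, LinearAlgebra

m, n = 80, 50
A = randn(m, n)
b = randn(m)
Δ = 0.1                          # trust-region radius

x, stats = crls(A, b; radius=Δ)
println(norm(x) ≤ Δ + √eps())    # the step respects ‖x‖ ≤ Δ
println(stats.status)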

LSLQ

Krylov.lslqFunction
(x, stats) = lslq(A, b::AbstractVector{FC};
                   M=I, N=I, ldiv::Bool=false,
                   window::Int=5, transfer_to_lsqr::Bool=false,
                   sqd::Bool=false, λ::T=zero(T),
@@ -20,7 +20,7 @@
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ²‖x‖₂²

of size m × n using the LSLQ method, where λ ≥ 0 is a regularization parameter. LSLQ is formally equivalent to applying SYMMLQ to the normal equations

(AᴴA + λ²I) x = Aᴴb

but is more stable.

If λ > 0, we solve the symmetric and quasi-definite system

[ E      A ] [ r ]   [ b ]
 [ Aᴴ  -λ²F ] [ x ] = [ 0 ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSLQ is then equivalent to applying SYMMLQ to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).

If λ = 0, we solve the symmetric and indefinite system

[ E    A ] [ r ]   [ b ]
-[ Aᴴ   0 ] [ x ] = [ 0 ].

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹.

In this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).

Main features

  • the solution estimate is updated along orthogonal directions
  • the norm of the solution estimate ‖xᴸₖ‖₂ is increasing
  • the error ‖eₖ‖₂ := ‖xᴸₖ - x*‖₂ is decreasing
  • it is possible to transition cheaply from the LSLQ iterate to the LSQR iterate if there is an advantage (there always is in terms of error)
  • if A is rank deficient, it identifies the minimum-norm least-squares solution

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • window: number of iterations used to accumulate a lower bound on the error;
  • transfer_to_lsqr: transfer from the LSLQ point to the LSQR point, when it exists. The transfer is based on the residual norm;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • σ: strict lower bound on the smallest positive singular value σₘᵢₙ, e.g., σ = (1-10⁻⁷)σₘᵢₙ;
  • etol: stopping tolerance based on the lower bound on the error;
  • utol: stopping tolerance based on the upper bound on the error;
  • btol: stopping tolerance used to detect zero-residual problems;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;

  • stats: statistics collected on the run in an LSLQStats structure.

  • stats.err_lbnds is a vector of lower bounds on the LQ error; the vector is empty if window is set to zero

  • stats.err_ubnds_lq is a vector of upper bounds on the LQ error; the vector is empty if σ is left at zero

  • stats.err_ubnds_cg is a vector of upper bounds on the CG error; the vector is empty if σ is left at zero

  • stats.error_with_bnd is a boolean indicating whether there was an error in the computation of the upper bounds (cancellation errors, σ too large, ...)

Stopping conditions

The iterations stop as soon as one of the following conditions holds true:

  • the optimality residual is sufficiently small (stats.status = "found approximate minimum least-squares solution") in the sense that either
    • ‖Aᴴr‖ / (‖A‖ ‖r‖) ≤ atol, or
    • 1 + ‖Aᴴr‖ / (‖A‖ ‖r‖) ≤ 1
  • an approximate zero-residual solution has been found (stats.status = "found approximate zero-residual solution") in the sense that either
    • ‖r‖ / ‖b‖ ≤ btol + atol ‖A‖ * ‖xᴸ‖ / ‖b‖, or
    • 1 + ‖r‖ / ‖b‖ ≤ 1
  • the estimated condition number of A is too large in the sense that either
    • 1/cond(A) ≤ 1/conlim (stats.status = "condition number exceeds tolerance"), or
    • 1 + 1/cond(A) ≤ 1 (stats.status = "condition number seems too large for this machine")
  • the lower bound on the LQ forward error is less than etol * ‖xᴸ‖
  • the upper bound on the CG forward error is less than utol * ‖xᶜ‖

References

source
Krylov.lslq!Function
solver = lslq!(solver::LslqSolver, A, b; kwargs...)

where kwargs are keyword arguments of lslq.

See LslqSolver for more details about the solver.

source
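
A sketch of the error bounds (arbitrary data): for a small dense problem we can afford to compute σₘᵢₙ with svdvals and pass a strict lower bound through σ; on realistic problems this bound must come from knowledge of A, so the computation below is illustrative only.

using Krylov, LinearAlgebra

m, n = 60, 40
A = randn(m, n)
b = randn(m)

σmin = minimum(svdvals(A))                      # affordable here because A is small and dense
x, stats = lslq(A, b; window=5, σ=(1 - 1.0e-7) * σmin)

isempty(stats.err_lbnds)    || println(stats.err_lbnds[end])     # last lower bound on the LQ error
isempty(stats.err_ubnds_lq) || println(stats.err_ubnds_lq[end])  # last upper bound on the LQ error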

LSQR

Krylov.lsqrFunction
(x, stats) = lsqr(A, b::AbstractVector{FC};
                   M=I, N=I, ldiv::Bool=false,
                   window::Int=5, sqd::Bool=false, λ::T=zero(T),
                   radius::T=zero(T), etol::T=√eps(T),
@@ -30,7 +30,7 @@
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ²‖x‖₂²

of size m × n using the LSQR method, where λ ≥ 0 is a regularization parameter. LSQR is formally equivalent to applying CG to the normal equations

(AᴴA + λ²I) x = Aᴴb

(and therefore to CGLS) but is more stable.

LSQR produces monotonic residuals ‖r‖₂ but not optimality residuals ‖Aᴴr‖₂. It is formally equivalent to CGLS, though it can be slightly more accurate.

If λ > 0, LSQR solves the symmetric and quasi-definite system

[ E      A ] [ r ]   [ b ]
 [ Aᴴ  -λ²F ] [ x ] = [ 0 ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSQR is then equivalent to applying CG to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).

If λ = 0, we solve the symmetric and indefinite system

[ E    A ] [ r ]   [ b ]
-[ Aᴴ   0 ] [ x ] = [ 0 ].

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹.

In this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • window: number of iterations used to accumulate a lower bound on the error;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • etol: stopping tolerance based on the lower bound on the error;
  • axtol: tolerance on the backward error;
  • btol: stopping tolerance used to detect zero-residual problems;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.lsqr!Function
solver = lsqr!(solver::LsqrSolver, A, b; kwargs...)

where kwargs are keyword arguments of lsqr.

See LsqrSolver for more details about the solver.

source
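
A minimal sketch of the sqd=true variant (arbitrary data): E and F are diagonal here so that the preconditioners M = E⁻¹ and N = F⁻¹ can be passed as Diagonal matrices.

using Krylov, LinearAlgebra

m, n = 70, 50
A = randn(m, n)
b = randn(m)

E = Diagonal(1.0 .+ rand(m))     # E ≻ 0
F = Diagonal(1.0 .+ rand(n))     # F ≻ 0

# sqd=true sets λ = 1: minimize ‖b - Ax‖²_E⁻¹ + ‖x‖²_F
x, stats = lsqr(A, b; M=inv(E), N=inv(F), sqd=true)
println(stats.status)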

LSMR

Krylov.lsmrFunction
(x, stats) = lsmr(A, b::AbstractVector{FC};
                   M=I, N=I, ldiv::Bool=false,
                   window::Int=5, sqd::Bool=false, λ::T=zero(T),
                   radius::T=zero(T), etol::T=√eps(T),
@@ -40,8 +40,8 @@
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

Solve the regularized linear least-squares problem

minimize ‖b - Ax‖₂² + λ²‖x‖₂²

of size m × n using the LSMR method, where λ ≥ 0 is a regularization parameter. LSMR is formally equivalent to applying MINRES to the normal equations

(AᴴA + λ²I) x = Aᴴb

(and therefore to CRLS) but is more stable.

LSMR produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂. It is formally equivalent to CRLS, though it can be substantially more accurate.

LSMR can also be used to find a null vector of a singular matrix A by solving the problem min ‖Aᴴx - b‖ with any nonzero vector b. At a minimizer, the residual vector r = b - Aᴴx satisfies Ar = 0.

If λ > 0, we solve the symmetric and quasi-definite system

[ E      A ] [ r ]   [ b ]
 [ Aᴴ  -λ²F ] [ x ] = [ 0 ],

where E and F are symmetric and positive definite. Preconditioners M = E⁻¹ ≻ 0 and N = F⁻¹ ≻ 0 may be provided in the form of linear operators. If sqd=true, λ is set to the common value 1.

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹ + λ²‖x‖²_F.

For a symmetric and positive definite matrix K, the K-norm of a vector x is ‖x‖²_K = xᴴKx. LSMR is then equivalent to applying MINRES to (AᴴE⁻¹A + λ²F)x = AᴴE⁻¹b with r = E⁻¹(b - Ax).

If λ = 0, we solve the symmetric and indefinite system

[ E    A ] [ r ]   [ b ]
-[ Aᴴ   0 ] [ x ] = [ 0 ].

The system above represents the optimality conditions of

minimize ‖b - Ax‖²_E⁻¹.

In this case, N can still be specified and indicates the weighted norm in which x and Aᴴr should be measured. r can be recovered by computing E⁻¹(b - Ax).

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the augmented system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the augmented system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • window: number of iterations used to accumulate a lower bound on the error;
  • sqd: if true, set λ=1 for Hermitian quasi-definite systems;
  • λ: regularization parameter;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • etol: stopping tolerance based on the lower bound on the error;
  • axtol: tolerance on the backward error;
  • btol: stopping tolerance used to detect zero-residual problems;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in an LsmrStats structure.

Reference

source
Krylov.lsmr!Function
solver = lsmr!(solver::LsmrSolver, A, b; kwargs...)

where kwargs are keyword arguments of lsmr.

See LsmrSolver for more details about the solver.

source
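
The null-vector trick described above can be sketched as follows (arbitrary data; m < n guarantees a nontrivial null space with probability one):

using Krylov, LinearAlgebra

m, n = 30, 50
A = randn(m, n)
b = randn(n)                     # any nonzero vector works

x, stats = lsmr(A', b)           # minimize ‖Aᴴx - b‖
r = b - A' * x                   # at a minimizer, Ar ≈ 0
println(norm(A * r) / norm(r))   # r is an approximate null vector of A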

USYMQR

Krylov.usymqrFunction
(x, stats) = usymqr(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                     atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                     timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
-                    callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = usymqr(A, b, c, x0::AbstractVector; kwargs...)

USYMQR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

USYMQR solves the linear least-squares problem min ‖b - Ax‖² of size m × n. USYMQR solves Ax = b if it is consistent.

USYMQR is based on the orthogonal tridiagonalization process and requires two initial nonzero vectors b and c. The vector c is only used to initialize the process and a default value can be b or Aᴴb depending on the shape of A. The residual norm ‖b - Ax‖ decreases monotonically in USYMQR, which can be seen as a generalization of MINRES.

It can also be applied to under-determined and over-determined problems; USYMQR finds the minimum-norm solution when the problem is inconsistent.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.usymqr!Function
solver = usymqr!(solver::UsymqrSolver, A, b, c; kwargs...)
-solver = usymqr!(solver::UsymqrSolver, A, b, c, x0; kwargs...)

where kwargs are keyword arguments of usymqr.

See UsymqrSolver for more details about the solver.

source
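
A minimal sketch of an over-determined solve (arbitrary data), taking c = Aᴴb as suggested above:

using Krylov, LinearAlgebra

m, n = 80, 50
A = randn(m, n)
b = randn(m)
c = A' * b                       # one of the initializations mentioned above

x, stats = usymqr(A, b, c)
println(norm(A' * (b - A * x)))  # ≈ 0: least-squares optimality residual
println(stats.status)
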
diff --git a/dev/solvers/sid/index.html b/dev/solvers/sid/index.html
index 48ccd3e75..b62b622d0 100644
--- a/dev/solvers/sid/index.html
+++ b/dev/solvers/sid/index.html
@@ -6,25 +6,25 @@

SYMMLQ

Krylov.symmlqFunction
                    conlim::T=1/√eps(T), atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                    timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
-                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = symmlq(A, b, x0::AbstractVector; kwargs...)

SYMMLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the shifted linear system

(A + λI) x = b

of size n using the SYMMLQ method, where λ is a shift parameter, and A is Hermitian.

SYMMLQ produces monotonic errors ‖x* - x‖₂.

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • window: number of iterations used to accumulate a lower bound on the error;
  • transfer_to_cg: transfer from the SYMMLQ point to the CG point, when it exists. The transfer is based on the residual norm;
  • λ: regularization parameter;
  • λest: positive strict lower bound on the smallest eigenvalue λₘᵢₙ when solving a positive-definite system, e.g., λest = (1-10⁻⁷)λₘᵢₙ;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • etol: stopping tolerance based on the lower bound on the error;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SymmlqStats structure.

Reference

source
Krylov.symmlq!Function
solver = symmlq!(solver::SymmlqSolver, A, b; kwargs...)
+solver = symmlq!(solver::SymmlqSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of symmlq.

See SymmlqSolver for more details about the solver.

source
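
A minimal sketch of a shifted solve (arbitrary data); the Hermitian test matrix is in general indefinite, and the residual of (A + λI) x = b is checked directly.

using Krylov, LinearAlgebra

n = 100
A = randn(n, n); A = (A + A') / 2   # Hermitian test matrix
b = randn(n)
λ = 1.0e-2

x, stats = symmlq(A, b; λ=λ)        # solves (A + λI) x = b
println(norm(b - (A + λ * I) * x))
println(stats.status)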

MINRES

Krylov.minresFunction
(x, stats) = minres(A, b::AbstractVector{FC};
                     M=I, ldiv::Bool=false, window::Int=5,
                     λ::T=zero(T), atol::T=√eps(T),
                     rtol::T=√eps(T), etol::T=√eps(T),
                     conlim::T=1/√eps(T), itmax::Int=0,
                     timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
-                    callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = minres(A, b, x0::AbstractVector; kwargs...)

MINRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the shifted linear least-squares problem

minimize ‖b - (A + λI)x‖₂²

or the shifted linear system

(A + λI) x = b

of size n using the MINRES method, where λ ≥ 0 is a shift parameter and A is Hermitian.

MINRES is formally equivalent to applying CR to Ax=b when A is positive definite, but is typically more stable and also applies to the case where A is indefinite.

MINRES produces monotonic residuals ‖r‖₂ and optimality residuals ‖Aᴴr‖₂.

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • window: number of iterations used to accumulate a lower bound on the error;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • etol: stopping tolerance based on the lower bound on the error;
  • conlim: limit on the estimated condition number of A beyond which the solution will be abandoned;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.minres!Function
solver = minres!(solver::MinresSolver, A, b; kwargs...)
-solver = minres!(solver::MinresSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of minres.

See MinresSolver for more details about the solver.

source
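
A minimal sketch on a Hermitian indefinite system (arbitrary data), the case MINRES covers beyond CR:

using Krylov, LinearAlgebra

n = 100
A = randn(n, n); A = (A + A') / 2   # Hermitian and, in general, indefinite
b = randn(n)

x, stats = minres(A, b)
println(norm(b - A * x) / norm(b))
println(stats.status)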

MINRES-QLP

Krylov.minres_qlpFunction
(x, stats) = minres_qlp(A, b::AbstractVector{FC};
                         M=I, ldiv::Bool=false, Artol::T=√eps(T),
                         λ::T=zero(T), atol::T=√eps(T),
                         rtol::T=√eps(T), itmax::Int=0,
                         timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
-                        callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = minres_qlp(A, b, x0::AbstractVector; kwargs...)

MINRES-QLP can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

MINRES-QLP is the only method based on the Lanczos process that returns the minimum-norm solution on singular inconsistent systems (A + λI)x = b of size n, where λ is a shift parameter. It is significantly more complex but can be more reliable than MINRES when A is ill-conditioned.

M also indicates the weighted norm in which residuals are measured.

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • Artol: relative stopping tolerance based on the Aᴴ-residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.minres_qlp!Function
solver = minres_qlp!(solver::MinresQlpSolver, A, b; kwargs...)
-solver = minres_qlp!(solver::MinresQlpSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of minres_qlp.

See MinresQlpSolver for more details about the solver.

source
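
A minimal sketch on a singular inconsistent system (arbitrary data): A is an orthogonal projector whose null space is spanned by u, and the minimum-norm solution has no component along u.

using Krylov, LinearAlgebra

n = 50
u = randn(n); u /= norm(u)
A = I - u * u'                   # Hermitian and singular: Au = 0
b = randn(n)                     # generically, Ax = b is inconsistent

x, stats = minres_qlp(A, b)
println(abs(dot(u, x)))          # ≈ 0: no component along the null space
println(stats.status)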

MINARES

Krylov.minaresFunction
(x, stats) = minares(A, b::AbstractVector{FC};
                      M=I, ldiv::Bool=false,
                      λ::T = zero(T), atol::T=√eps(T),
                      rtol::T=√eps(T), Artol::T = √eps(T),
                      itmax::Int=0, timemax::Float64=Inf,
                      verbose::Int=0, history::Bool=false,
-                     callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = minares(A, b, x0::AbstractVector; kwargs...)

MINARES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

MINARES solves the Hermitian linear system Ax = b of size n. MINARES minimizes ‖Arₖ‖₂ when M = Iₙ and ‖AMrₖ‖_M otherwise. The estimates computed every iteration are ‖Mrₖ‖₂ and ‖AMrₖ‖_M.

Input arguments

  • A: a linear operator that models a Hermitian positive definite matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • λ: regularization parameter;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • Artol: relative stopping tolerance based on the Aᴴ-residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

  • A. Montoison, D. Orban and M. A. Saunders, MinAres: An Iterative Solver for Symmetric Linear Systems, Cahier du GERAD G-2023-40, GERAD, Montréal, 2023.
source
Krylov.minares!Function
solver = minares!(solver::MinaresSolver, A, b; kwargs...)
-solver = minares!(solver::MinaresSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of minares.

See MinaresSolver for more details about the solver.

source
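
A minimal sketch (arbitrary data), assuming a Krylov.jl release that exports minares; it prints the quantity MINARES drives down when M = Iₙ.

using Krylov, LinearAlgebra

n = 100
A = randn(n, n); A = (A + A') / 2   # Hermitian test matrix
b = randn(n)

x, stats = minares(A, b)
r = b - A * x
println(norm(A * r))                # MINARES minimizes ‖Arₖ‖₂ when M = Iₙ
println(stats.status)
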
diff --git a/dev/solvers/sp_sqd/index.html b/dev/solvers/sp_sqd/index.html
index a9b1ddfa4..b956ca909 100644
--- a/dev/solvers/sp_sqd/index.html
+++ b/dev/solvers/sp_sqd/index.html
@@ -8,8 +8,8 @@

TriCG

Krylov.tricgFunction
                      timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                      callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, y, stats) = tricg(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)

TriCG can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.

Given a matrix A of dimension m × n, TriCG solves the Hermitian linear system

[ τE    A ] [ x ] = [ b ]
 [  Aᴴ  νF ] [ y ]   [ c ],

of size (n+m) × (n+m) where τ and ν are real numbers, E = M⁻¹ ≻ 0 and F = N⁻¹ ≻ 0. b and c must both be nonzero. TriCG could break down if τ = 0 or ν = 0. It is recommended to use TriMR in these cases.

By default, TriCG solves Hermitian and quasi-definite linear systems with τ = 1 and ν = -1.

TriCG is based on the preconditioned orthogonal tridiagonalization process and its relation with the preconditioned block-Lanczos process.

[ M   0 ]
-[ 0   N ]

indicates the weighted norm in which residuals are measured. It's the Euclidean norm when M and N are identity operators.

TriCG stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length m that represents an initial guess of the solution x;
  • y0: a vector of length n that represents an initial guess of the solution y.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the partitioned system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the partitioned system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • spd: if true, set τ = 1 and ν = 1 for Hermitian and positive-definite linear systems;
  • snd: if true, set τ = -1 and ν = -1 for Hermitian and negative-definite linear systems;
  • flip: if true, set τ = -1 and ν = 1 for another known variant of Hermitian quasi-definite systems;
  • τ and ν: diagonal scaling factors of the partitioned Hermitian linear system;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length m;
  • y: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.tricg!Function
solver = tricg!(solver::TricgSolver, A, b, c; kwargs...)
+solver = tricg!(solver::TricgSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of tricg.

See TricgSolver for more details about the solver.

source
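
A minimal sketch of the default τ = 1, ν = -1 configuration (arbitrary data), checking both block rows of the system above with E = F = I:

using Krylov, LinearAlgebra

m, n = 60, 40
A = randn(m, n)
b = randn(m)
c = randn(n)

x, y, stats = tricg(A, b, c)     # solves [ I  A; Aᴴ  -I ] [ x; y ] = [ b; c ]
println(norm(b - (x + A * y)))   # first block row
println(norm(c - (A' * x - y)))  # second block row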

TriMR

Krylov.trimrFunction
(x, y, stats) = trimr(A, b::AbstractVector{FC}, c::AbstractVector{FC};
                       M=I, N=I, ldiv::Bool=false,
                       spd::Bool=false, snd::Bool=false,
                       flip::Bool=false, sp::Bool=false,
@@ -18,5 +18,5 @@
                       timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                       callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, y, stats) = trimr(A, b, c, x0::AbstractVector, y0::AbstractVector; kwargs...)

TriMR can be warm-started from initial guesses x0 and y0 where kwargs are the same keyword arguments as above.

Given a matrix A of dimension m × n, TriMR solves the symmetric linear system

[ τE    A ] [ x ] = [ b ]
 [  Aᴴ  νF ] [ y ]   [ c ],

of size (n+m) × (n+m) where τ and ν are real numbers, E = M⁻¹ ≻ 0, F = N⁻¹ ≻ 0. b and c must both be nonzero. TriMR handles saddle-point systems (τ = 0 or ν = 0) and adjoint systems (τ = 0 and ν = 0) without any risk of breakdown.

By default, TriMR solves symmetric and quasi-definite linear systems with τ = 1 and ν = -1.

TriMR is based on the preconditioned orthogonal tridiagonalization process and its relation with the preconditioned block-Lanczos process.

[ M   0 ]
-[ 0   N ]

indicates the weighted norm in which residuals are measured. It's the Euclidean norm when M and N are identity operators.

TriMR stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖r₀‖ * rtol. atol is an absolute tolerance and rtol is a relative tolerance.

Input arguments

  • A: a linear operator that models a matrix of dimension m × n;
  • b: a vector of length m;
  • c: a vector of length n.

Optional arguments

  • x0: a vector of length m that represents an initial guess of the solution x;
  • y0: a vector of length n that represents an initial guess of the solution y.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size m used for centered preconditioning of the partitioned system;
  • N: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning of the partitioned system;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • spd: if true, set τ = 1 and ν = 1 for Hermitian and positive-definite linear systems;
  • snd: if true, set τ = -1 and ν = -1 for Hermitian and negative-definite linear systems;
  • flip: if true, set τ = -1 and ν = 1 for another known variant of Hermitian quasi-definite systems;
  • sp: if true, set τ = 1 and ν = 0 for saddle-point systems;
  • τ and ν: diagonal scaling factors of the partitioned Hermitian linear system;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to m+n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length m;
  • y: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.trimr!Function
solver = trimr!(solver::TrimrSolver, A, b, c; kwargs...)
-solver = trimr!(solver::TrimrSolver, A, b, c, x0, y0; kwargs...)

where kwargs are keyword arguments of trimr.

See TrimrSolver for more details about the solver.

source
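
A minimal sketch of the sp=true saddle-point configuration (arbitrary data; A has full column rank with probability one, so the system is nonsingular):

using Krylov, LinearAlgebra

m, n = 60, 40
A = randn(m, n)
b = randn(m)
c = randn(n)

x, y, stats = trimr(A, b, c; sp=true)   # solves [ I  A; Aᴴ  0 ] [ x; y ] = [ b; c ]
println(norm(b - (x + A * y)))
println(norm(c - A' * x))
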
diff --git a/dev/solvers/spd/index.html b/dev/solvers/spd/index.html
index 5d441d733..781450579 100644
--- a/dev/solvers/spd/index.html
+++ b/dev/solvers/spd/index.html
@@ -4,28 +4,28 @@

CG

Krylov.cgFunction
                linesearch::Bool=false, atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
-               callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = cg(A, b, x0::AbstractVector; kwargs...)

CG can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

The conjugate gradient method to solve the Hermitian linear system Ax = b of size n.

The method does not abort if A is not definite. M also indicates the weighted norm in which residuals are measured.

Input arguments

Optional argument

Keyword arguments

Output arguments

Reference

source
Krylov.cg!Function
solver = cg!(solver::CgSolver, A, b; kwargs...)
-solver = cg!(solver::CgSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of cg.

See CgSolver for more details about the solver.

source

CR

Krylov.crFunction
(x, stats) = cr(A, b::AbstractVector{FC};
                callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = cg(A, b, x0::AbstractVector; kwargs...)

CG can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

The conjugate gradient method to solve the Hermitian linear system Ax = b of size n.

The method does not abort if A is not definite. M also indicates the weighted norm in which residuals are measured.

Input arguments

  • A: a linear operator that models a Hermitian positive definite matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • linesearch: if true, indicate that the solution is to be used in an inexact Newton method with linesearch. If negative curvature is detected at iteration k > 0, the solution of iteration k-1 is returned. If negative curvature is detected at iteration 0, the right-hand side is returned (i.e., the negative gradient);
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.cg!Function
solver = cg!(solver::CgSolver, A, b; kwargs...)
solver = cg!(solver::CgSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of cg.

See CgSolver for more details about the solver.

source
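
As an illustration of the calling sequences above, a minimal sketch; the diagonal test matrix and sizes are arbitrary:

using Krylov, LinearAlgebra

n = 100
A = Diagonal(1.0:n)    # Hermitian positive definite
b = A * ones(n)        # the exact solution is ones(n)

x, stats = cg(A, b)
stats.solved           # convergence flag

# In-place variant: the workspace is allocated once and can be reused.
solver = CgSolver(A, b)
cg!(solver, A, b)
solver.x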

CR

Krylov.crFunction
(x, stats) = cr(A, b::AbstractVector{FC};
                 M=I, ldiv::Bool=false, radius::T=zero(T),
                 linesearch::Bool=false, γ::T=√eps(T),
                 atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                 timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                 callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = cr(A, b, x0::AbstractVector; kwargs...)

CR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

A truncated version of Stiefel’s Conjugate Residual method to solve the Hermitian linear system Ax = b of size n or the least-squares problem min ‖b - Ax‖ if A is singular. The matrix A must be Hermitian semi-definite. M also indicates the weighted norm in which residuals are measured.

Input arguments

  • A: a linear operator that models a Hermitian positive definite matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • radius: add the trust-region constraint ‖x‖ ≤ radius if radius > 0. Useful to compute a step in a trust-region method for optimization;
  • linesearch: if true, indicate that the solution is to be used in an inexact Newton method with linesearch. If negative curvature is detected at iteration k > 0, the solution of iteration k-1 is returned. If negative curvature is detected at iteration 0, the right-hand side is returned (i.e., the negative gradient);
  • γ: tolerance to determine that the curvature of the quadratic model is nonpositive;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.cr!Function
solver = cr!(solver::CrSolver, A, b; kwargs...)
solver = cr!(solver::CrSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of cr.

See CrSolver for more details about the solver.

source
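
A minimal sketch of cr with and without the trust-region constraint, assuming an arbitrary diagonal positive-definite test matrix:

using Krylov, LinearAlgebra

n = 100
A = Diagonal(1.0:n)
b = rand(n)

x, stats = cr(A, b)              # unconstrained solve
x, stats = cr(A, b, radius=1.0)  # step restricted to the ball ‖x‖ ≤ 1
norm(x) ≤ 1.0 + √eps()           # the constraint is satisfied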

CAR

Krylov.carFunction
(x, stats) = car(A, b::AbstractVector{FC};
                  M=I, ldiv::Bool=false,
                  atol::T=√eps(T), rtol::T=√eps(T),
                  itmax::Int=0, timemax::Float64=Inf,
                  verbose::Int=0, history::Bool=false,
                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = car(A, b, x0::AbstractVector; kwargs...)

CAR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

CAR solves the Hermitian and positive definite linear system Ax = b of size n. CAR minimizes ‖Arₖ‖₂ when M = Iₙ and ‖AMrₖ‖_M otherwise. The estimates computed at every iteration are ‖Mrₖ‖₂ and ‖AMrₖ‖_M.

Input arguments

  • A: a linear operator that models a Hermitian positive definite matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

  • A. Montoison, D. Orban and M. A. Saunders, MinAres: An Iterative Solver for Symmetric Linear Systems, Cahier du GERAD G-2023-40, GERAD, Montréal, 2023.
source
Krylov.car!Function
solver = car!(solver::CarSolver, A, b; kwargs...)
solver = car!(solver::CarSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of car.

See CarSolver for more details about the solver.

source
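
A minimal sketch of car, assuming an arbitrary positive-definite test problem; history=true populates stats.residuals:

using Krylov, LinearAlgebra

n = 100
A = Diagonal(1.0:n)
b = rand(n)

x, stats = car(A, b, history=true)
stats.residuals   # collected because history = true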

CG-LANCZOS

Krylov.cg_lanczosFunction
(x, stats) = cg_lanczos(A, b::AbstractVector{FC};
                         M=I, ldiv::Bool=false,
                         check_curvature::Bool=false, atol::T=√eps(T),
                         rtol::T=√eps(T), itmax::Int=0,
                         timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                         callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = cg_lanczos(A, b, x0::AbstractVector; kwargs...)

CG-LANCZOS can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

The Lanczos version of the conjugate gradient method to solve the Hermitian linear system Ax = b of size n.

The method does not abort if A is not definite.

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • check_curvature: if true, check that the curvature of the quadratic along the search direction is positive, and abort if not, unless linesearch is also true;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a LanczosStats structure.

References

source
Krylov.cg_lanczos!Function
solver = cg_lanczos!(solver::CgLanczosSolver, A, b; kwargs...)
solver = cg_lanczos!(solver::CgLanczosSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of cg_lanczos.

See CgLanczosSolver for more details about the solver.

source
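
A sketch of cg_lanczos on an indefinite test matrix (illustrative data); with check_curvature=true the run stops when negative curvature appears, and stats.indefinite reports it:

using Krylov, LinearAlgebra

n = 100
A = Diagonal([-2.0; 1.0:n-1])   # Hermitian but indefinite
b = rand(n)

x, stats = cg_lanczos(A, b, check_curvature=true)
stats.indefinite                # true if negative curvature was detected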

CG-LANCZOS-SHIFT

Krylov.cg_lanczos_shiftFunction
(x, stats) = cg_lanczos_shift(A, b::AbstractVector{FC}, shifts::AbstractVector{T};
                               M=I, ldiv::Bool=false,
                               check_curvature::Bool=false, atol::T=√eps(T),
                               rtol::T=√eps(T), itmax::Int=0,
                               timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                               callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

The Lanczos version of the conjugate gradient method to solve a family of shifted systems

(A + αI) x = b  (α = α₁, ..., αₚ)

of size n. The method does not abort if A + αI is not definite.

Input arguments

  • A: a linear operator that models a Hermitian matrix of dimension n;
  • b: a vector of length n;
  • shifts: a vector of length p.

Keyword arguments

  • M: linear operator that models a Hermitian positive-definite matrix of size n used for centered preconditioning;
  • ldiv: define whether the preconditioner uses ldiv! or mul!;
  • check_curvature: if true, check that the curvature of the quadratic along the search direction is positive, and abort if not, unless linesearch is also true;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a vector of p dense vectors, each one of length n;
  • stats: statistics collected on the run in a LanczosShiftStats structure.

References

source
Krylov.cg_lanczos_shift!Function
solver = cg_lanczos_shift!(solver::CgLanczosShiftSolver, A, b, shifts; kwargs...)

where kwargs are keyword arguments of cg_lanczos_shift.

See CgLanczosShiftSolver for more details about the solver.

source
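
A sketch of cg_lanczos_shift, assuming an arbitrary diagonal test matrix; the solver returns one approximate solution per shift:

using Krylov, LinearAlgebra

n = 100
A = Diagonal(1.0:n)
b = rand(n)
shifts = [0.0, 1.0, 10.0]   # α₁, α₂, α₃

x, stats = cg_lanczos_shift(A, b, shifts)
x[3]                        # approximate solution of (A + 10I) y = b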

BiLQ

Krylov.bilqFunction
(x, stats) = bilq(A, b::AbstractVector{FC};
                  c::AbstractVector{FC}=b, transfer_to_bicg::Bool=true,
                  atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                  timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = bilq(A, b, x0::AbstractVector; kwargs...)

BiLQ can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the square linear system Ax = b of size n using BiLQ. BiLQ is based on the Lanczos biorthogonalization process and requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b. When A is Hermitian and b = c, BiLQ is equivalent to SYMMLQ.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • c: the second initial vector of length n required by the Lanczos biorthogonalization process;
  • transfer_to_bicg: transfer from the BiLQ point to the BiCG point, when it exists. The transfer is based on the residual norm;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.bilq!Function
solver = bilq!(solver::BilqSolver, A, b; kwargs...)
solver = bilq!(solver::BilqSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of bilq.

See BilqSolver for more details about the solver.

source
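
A minimal sketch of bilq on a random sparse nonsymmetric system; the shift 10I merely keeps the illustrative matrix well conditioned:

using Krylov, LinearAlgebra, SparseArrays

n = 200
A = sprandn(n, n, 0.02) + 10I   # square, nonsymmetric
b = rand(n)

x, stats = bilq(A, b)           # c = b by default, so bᴴc = ‖b‖² ≠ 0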

QMR

Krylov.qmrFunction
(x, stats) = qmr(A, b::AbstractVector{FC};
                  c::AbstractVector{FC}=b, atol::T=√eps(T),
                  rtol::T=√eps(T), itmax::Int=0, timemax::Float64=Inf, verbose::Int=0,
                 history::Bool=false, callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = qmr(A, b, x0::AbstractVector; kwargs...)

QMR can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the square linear system Ax = b of size n using QMR.

QMR is based on the Lanczos biorthogonalization process and requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b. When A is Hermitian and b = c, QMR is equivalent to MINRES.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • c: the second initial vector of length n required by the Lanczos biorthogonalization process;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.qmr!Function
solver = qmr!(solver::QmrSolver, A, b; kwargs...)
solver = qmr!(solver::QmrSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of qmr.

See QmrSolver for more details about the solver.

source
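
A sketch of qmr with a warm start, on the same kind of illustrative sparse nonsymmetric system:

using Krylov, LinearAlgebra, SparseArrays

n = 200
A = sprandn(n, n, 0.02) + 10I
b = rand(n)

x, stats = qmr(A, b)
x, stats = qmr(A, b, x)   # warm start from the previous solution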

CGS

Krylov.cgsFunction
(x, stats) = cgs(A, b::AbstractVector{FC};
                  c::AbstractVector{FC}=b, M=I, N=I,
                  ldiv::Bool=false, atol::T=√eps(T),
                  rtol::T=√eps(T), itmax::Int=0,
                  timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                 callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = cgs(A, b, x0::AbstractVector; kwargs...)

CGS can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the consistent linear system Ax = b of size n using CGS. CGS requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b.

From "Iterative Methods for Sparse Linear Systems" (Y. Saad):

«The method is based on a polynomial variant of the conjugate gradients algorithm. Although related to the so-called bi-conjugate gradients (BCG) algorithm, it does not involve adjoint matrix-vector multiplications, and the expected convergence rate is about twice that of the BCG algorithm.

The Conjugate Gradient Squared algorithm works quite well in many cases. However, one difficulty is that, since the polynomials are squared, rounding errors tend to be more damaging than in the standard BCG algorithm. In particular, very high variations of the residual vectors often cause the residual norms computed to become inaccurate.

TFQMR and BICGSTAB were developed to remedy this difficulty.»

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • c: the second initial vector of length n required by the Lanczos biorthogonalization process;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.cgs!Function
solver = cgs!(solver::CgsSolver, A, b; kwargs...)
solver = cgs!(solver::CgsSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of cgs.

See CgsSolver for more details about the solver.

source
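
A sketch of cgs; the shadow vector c is arbitrary as long as bᴴc ≠ 0, and a random c satisfies this almost surely:

using Krylov, LinearAlgebra, SparseArrays

n = 200
A = sprandn(n, n, 0.02) + 10I
b = rand(n)

x, stats = cgs(A, b)              # default shadow vector c = b
x, stats = cgs(A, b, c=rand(n))   # a custom shadow vector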

BiCGSTAB

Krylov.bicgstabFunction
(x, stats) = bicgstab(A, b::AbstractVector{FC};
                       c::AbstractVector{FC}=b, M=I, N=I,
                       ldiv::Bool=false, atol::T=√eps(T),
                       rtol::T=√eps(T), itmax::Int=0,
                       timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                      callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = bicgstab(A, b, x0::AbstractVector; kwargs...)

BICGSTAB can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the square linear system Ax = b of size n using BICGSTAB. BICGSTAB requires two initial vectors b and c. The relation bᴴc ≠ 0 must be satisfied and by default c = b.

The Biconjugate Gradient Stabilized method is a variant of BiCG, like CGS, but using different updates for the Aᴴ-sequence in order to obtain smoother convergence than CGS.

If BICGSTAB stagnates, we recommend DQGMRES and BiLQ as alternative methods for unsymmetric square systems.

BICGSTAB stops when itmax iterations are reached or when ‖rₖ‖ ≤ atol + ‖b‖ * rtol.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • c: the second initial vector of length n required by the Lanczos biorthogonalization process;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

References

source
Krylov.bicgstab!Function
solver = bicgstab!(solver::BicgstabSolver, A, b; kwargs...)
solver = bicgstab!(solver::BicgstabSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of bicgstab.

See BicgstabSolver for more details about the solver.

source
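
A sketch of bicgstab with an illustrative Jacobi (diagonal) left preconditioner; since ldiv defaults to false, M is applied with mul! and should therefore approximate A⁻¹:

using Krylov, LinearAlgebra, SparseArrays

n = 200
A = sprandn(n, n, 0.02) + 10I
b = rand(n)

M = Diagonal(1 ./ Vector(diag(A)))   # inverse of the diagonal of A
x, stats = bicgstab(A, b, M=M)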

DIOM

Krylov.diomFunction
(x, stats) = diom(A, b::AbstractVector{FC};
                   memory::Int=20, M=I, N=I, ldiv::Bool=false,
                   reorthogonalization::Bool=false, atol::T=√eps(T),
                   rtol::T=√eps(T), itmax::Int=0,
                   timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                  callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = diom(A, b, x0::AbstractVector; kwargs...)

DIOM can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the consistent linear system Ax = b of size n using DIOM.

DIOM only orthogonalizes the new vectors of the Krylov basis against the memory most recent vectors. If CG is well defined on Ax = b and memory = 2, DIOM is theoretically equivalent to CG. If k ≤ memory where k is the number of iterations, DIOM is theoretically equivalent to FOM. Otherwise, DIOM interpolates between CG and FOM and is similar to CG with partial reorthogonalization.

An advantage of DIOM is that this single algorithm can handle linear systems that are non-Hermitian, Hermitian indefinite, or both.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • memory: the number of most recent vectors of the Krylov basis against which to orthogonalize a new vector;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against the memory most recent vectors;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.diom!Function
solver = diom!(solver::DiomSolver, A, b; kwargs...)
solver = diom!(solver::DiomSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of diom.

Note that the memory keyword argument is the only exception. It's required to create a DiomSolver and can't be changed later.

See DiomSolver for more details about the solver.

source
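
A sketch of diom on an illustrative sparse system; recall that memory is fixed when the workspace is created and controls how many Krylov vectors are kept:

using Krylov, LinearAlgebra, SparseArrays

n = 200
A = sprandn(n, n, 0.02) + 10I
b = rand(n)

x, stats = diom(A, b, memory=50, reorthogonalization=true)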

FOM

Krylov.fomFunction
(x, stats) = fom(A, b::AbstractVector{FC};
                  memory::Int=20, M=I, N=I, ldiv::Bool=false,
                  restart::Bool=false, reorthogonalization::Bool=false,
                  atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                  timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                 callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = fom(A, b, x0::AbstractVector; kwargs...)

FOM can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the linear system Ax = b of size n using FOM.

The FOM algorithm is based on the Arnoldi process and a Galerkin condition.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • memory: if restart = true, the restarted version FOM(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • restart: restart the method after memory iterations;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.fom!Function
solver = fom!(solver::FomSolver, A, b; kwargs...)
solver = fom!(solver::FomSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of fom.

Note that the memory keyword argument is the only exception. It's required to create a FomSolver and can't be changed later.

See FomSolver for more details about the solver.

source
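
A sketch contrasting the restarted and non-restarted uses of fom on an illustrative sparse system:

using Krylov, LinearAlgebra, SparseArrays

n = 200
A = sprandn(n, n, 0.02) + 10I
b = rand(n)

x, stats = fom(A, b, memory=30, restart=true)   # restarted FOM(30)
x, stats = fom(A, b, memory=30)                 # here memory is only a storage hint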

DQGMRES

Krylov.dqgmresFunction
(x, stats) = dqgmres(A, b::AbstractVector{FC};
                      memory::Int=20, M=I, N=I, ldiv::Bool=false,
                      reorthogonalization::Bool=false, atol::T=√eps(T),
                      rtol::T=√eps(T), itmax::Int=0,
                      timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                     callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = dqgmres(A, b, x0::AbstractVector; kwargs...)

DQGMRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the consistent linear system Ax = b of size n using DQGMRES.

The DQGMRES algorithm is based on the incomplete Arnoldi orthogonalization process and computes a sequence of approximate solutions with the quasi-minimal residual property.

DQGMRES only orthogonalizes the new vectors of the Krylov basis against the memory most recent vectors. If MINRES is well defined on Ax = b and memory = 2, DQGMRES is theoretically equivalent to MINRES. If k ≤ memory where k is the number of iterations, DQGMRES is theoretically equivalent to GMRES. Otherwise, DQGMRES interpolates between MINRES and GMRES and is similar to MINRES with partial reorthogonalization.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • memory: the number of most recent vectors of the Krylov basis against which to orthogonalize a new vector;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against the memory most recent vectors;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.dqgmres!Function
solver = dqgmres!(solver::DqgmresSolver, A, b; kwargs...)
+solver = dqgmres!(solver::DqgmresSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of dqgmres.

Note that the memory keyword argument is the only exception. It's required to create a DqgmresSolver and can't be changed later.

See DqgmresSolver for more details about the solver.

source

GMRES

Krylov.gmresFunction
(x, stats) = gmres(A, b::AbstractVector{FC};
                    memory::Int=20, M=I, N=I, ldiv::Bool=false,
                    restart::Bool=false, reorthogonalization::Bool=false,
                    atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                    timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                   callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = gmres(A, b, x0::AbstractVector; kwargs...)

GMRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the linear system Ax = b of size n using GMRES.

The GMRES algorithm is based on the Arnoldi process and computes a sequence of approximate solutions with minimum residual.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • memory: if restart = true, the restarted version GMRES(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • restart: restart the method after memory iterations;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.gmres!Function
solver = gmres!(solver::GmresSolver, A, b; kwargs...)
solver = gmres!(solver::GmresSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of gmres.

Note that the memory keyword argument is the only exception: it is required to create a GmresSolver and cannot be changed later.

See GmresSolver for more details about the solver.

source
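
A sketch of the restart and callback keywords in action; the random test system and the stopping criterion are illustrative assumptions.

using Krylov, SparseArrays, LinearAlgebra

n = 200
A = sprandn(n, n, 0.05) + 10I  # hypothetical diagonally dominant test matrix
b = randn(n)

x, stats = gmres(A, b, memory=20, restart=true)  # restarted GMRES(20)

stop_early(solver) = norm(b - A * solver.x) ≤ 1e-6 * norm(b)  # user-defined criterion
x, stats = gmres(A, b, callback=stop_early)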

FGMRES

Krylov.fgmresFunction
(x, stats) = fgmres(A, b::AbstractVector{FC};
                     memory::Int=20, M=I, N=I, ldiv::Bool=false,
                     restart::Bool=false, reorthogonalization::Bool=false,
                     atol::T=√eps(T), rtol::T=√eps(T), itmax::Int=0,
                     timemax::Float64=Inf, verbose::Int=0, history::Bool=false,
                    callback=solver->false, iostream::IO=kstdout)

T is an AbstractFloat such as Float32, Float64 or BigFloat. FC is T or Complex{T}.

(x, stats) = fgmres(A, b, x0::AbstractVector; kwargs...)

FGMRES can be warm-started from an initial guess x0 where kwargs are the same keyword arguments as above.

Solve the linear system Ax = b of size n using FGMRES.

FGMRES computes a sequence of approximate solutions with minimum residual. FGMRES is a variant of GMRES that allows changes in the right preconditioner at each iteration.

This implementation allows a left preconditioner M and a flexible right preconditioner N. The preconditioner is "not constant" when, for instance, a relaxation-type method, a Chebyshev iteration or another Krylov subspace method is used as a preconditioner. Compared to GMRES, FGMRES incurs no additional arithmetic cost, but its memory requirement almost doubles. Thus, GMRES is recommended if the right preconditioner N is constant.

Input arguments

  • A: a linear operator that models a matrix of dimension n;
  • b: a vector of length n.

Optional argument

  • x0: a vector of length n that represents an initial guess of the solution x.

Keyword arguments

  • memory: if restart = true, the restarted version FGMRES(k) is used with k = memory. If restart = false, the parameter memory should be used as a hint of the number of iterations to limit dynamic memory allocations. Additional storage will be allocated if the number of iterations exceeds memory;
  • M: linear operator that models a nonsingular matrix of size n used for left preconditioning;
  • N: linear operator that models a nonsingular matrix of size n used for right preconditioning;
  • ldiv: define whether the preconditioners use ldiv! or mul!;
  • restart: restart the method after memory iterations;
  • reorthogonalization: reorthogonalize the new vectors of the Krylov basis against all previous vectors;
  • atol: absolute stopping tolerance based on the residual norm;
  • rtol: relative stopping tolerance based on the residual norm;
  • itmax: the maximum number of iterations. If itmax=0, the default number of iterations is set to 2n;
  • timemax: the time limit in seconds;
  • verbose: additional details can be displayed if verbose mode is enabled (verbose > 0). Information will be displayed every verbose iterations;
  • history: collect additional statistics on the run such as residual norms, or Aᴴ-residual norms;
  • callback: function or functor called as callback(solver) that returns true if the Krylov method should terminate, and false otherwise;
  • iostream: stream to which output is logged.

Output arguments

  • x: a dense vector of length n;
  • stats: statistics collected on the run in a SimpleStats structure.

Reference

source
Krylov.fgmres!Function
solver = fgmres!(solver::FgmresSolver, A, b; kwargs...)
solver = fgmres!(solver::FgmresSolver, A, b, x0; kwargs...)

where kwargs are keyword arguments of fgmres.

Note that the memory keyword argument is the only exception: it is required to create a FgmresSolver and cannot be changed later.

See FgmresSolver for more details about the solver.

source
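
A sketch of a right preconditioner that changes between iterations: a few inner GMRES iterations wrapped in a hypothetical user type InnerGmres. We assume here that a preconditioner only needs to implement the three-argument mul! when ldiv=false.

using Krylov, LinearAlgebra, SparseArrays

struct InnerGmres{T}
    A::T
end

function LinearAlgebra.mul!(y, P::InnerGmres, x)
    z, _ = gmres(P.A, x, itmax=5)  # inexact inner solve, so N varies between applications
    y .= z
    return y
end

n = 100
A = sprandn(n, n, 0.1) + 10I  # hypothetical test matrix
b = randn(n)
x, stats = fgmres(A, b, N=InnerGmres(A))
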
diff --git a/dev/storage/index.html b/dev/storage/index.html
index d959ee7d5..e874b556d 100644
--- a/dev/storage/index.html
+++ b/dev/storage/index.html
@@ -52,4 +52,4 @@

If we want the total number of bytes used by the solver, we can call nbytes = sizeof(solver).

nbytes = sizeof(solver)
560121

Thereafter, we can use Base.format_bytes(nbytes) to recover what is displayed in the REPL.

Base.format_bytes(nbytes)
"546.993 KiB"

To verify that we match the theoretical results, we just need to multiply the storage requirement of a method by the number of bytes associated with the precision of the linear problem. For instance, we need 4 bytes for the precision Float32, 8 bytes for the precisions Float64 and ComplexF32, and 16 bytes for the precision ComplexF64.

FC = Float64                            # precision of the least-squares problem
ncoefs_lsmr = 5*n + 2*m                 # number of coefficients
nbytes_lsmr = sizeof(FC) * ncoefs_lsmr  # number of bytes
560000

You can then check that you have enough free memory in RAM to allocate a KrylovSolver.

free_nbytes = Sys.free_memory()
Base.format_bytes(free_nbytes)  # Total free memory in RAM in bytes.
"4.930 GiB"
Note
  • Beyond having faster operations, using low precisions, such as single precision, makes it possible to store more coefficients in RAM and solve larger linear problems.
  • In the file test_allocations.jl, we use the macro @allocated to test that we match the expected storage requirement of each method with a tolerance of 2%.
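
A sketch of this kind of check; the problem size and the warm-up run are illustrative assumptions, and the out-of-place call also allocates statistics, so the measured value can exceed the workspace estimate.

using Krylov

m, n = 100, 100                           # hypothetical problem size
A = rand(m, n); b = rand(m)
lsmr(A, b)                                # warm-up run so compilation does not count
expected = sizeof(Float64) * (5*n + 2*m)  # theoretical storage estimate for LSMR
actual = @allocated lsmr(A, b)            # bytes allocated by a fresh solve
(expected, actual)                        # compare; the test suite allows a 2% margin
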
diff --git a/dev/tips/index.html b/dev/tips/index.html
index b0dfb3ef7..f6e9724a8 100644
--- a/dev/tips/index.html
+++ b/dev/tips/index.html
@@ -23,4 +23,4 @@

julia -t N  # alternative: --threads N
JULIA_NUM_THREADS=N julia

Thereafter, you can verify the number of threads available to Julia:

using Base.Threads
nthreads()

The following benchmarks illustrate the time required in seconds to compute 1000 sparse matrix-vector products with symmetric matrices of the SuiteSparse Matrix Collection. The computer used for the benchmarks has 2 physical cores and Julia was launched with JULIA_NUM_THREADS=2.

[Figure: benchmark timings]
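
To time such a product yourself, here is a minimal sketch with BenchmarkTools; the random sparse matrix is a stand-in for a SuiteSparse Matrix Collection matrix.

using BenchmarkTools, LinearAlgebra, SparseArrays

A = sprandn(10_000, 10_000, 1e-3)
A = A + A'                   # symmetric, like the matrices used in the benchmarks
x = randn(10_000)
y = similar(x)
@btime mul!($y, $A, $x)      # one sparse matrix-vector product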

diff --git a/dev/warm-start/index.html b/dev/warm-start/index.html
index 8f975cfa6..e0db39bec 100644
--- a/dev/warm-start/index.html
+++ b/dev/warm-start/index.html
@@ -23,4 +23,4 @@

Δx, Δy, stats = trimr(A, b₀, c₀, M=M, N=N)
x = x₀ + Δx
y = y₀ + Δy