Conjugate residual method
Iterative numerical method used for solving systems of linear equations

The conjugate residual method is an iterative numerical method used for solving systems of linear equations. It is a Krylov subspace method closely related to the much more popular conjugate gradient method, with similar construction and convergence properties.

This method is used to solve linear equations of the form

$$\mathbf{A}\mathbf{x} = \mathbf{b}$$

where $\mathbf{A}$ is an invertible Hermitian matrix and $\mathbf{b}$ is nonzero.

The conjugate residual method differs from the closely related conjugate gradient method in that it involves more numerical operations and requires more storage; in exchange, it does not require $\mathbf{A}$ to be positive definite, only Hermitian and invertible.

Given an (arbitrary) initial estimate of the solution $\mathbf{x}_0$, the method is outlined below:

$$
\begin{aligned}
&\mathbf{x}_0 := \text{Some initial guess} \\
&\mathbf{r}_0 := \mathbf{b} - \mathbf{A}\mathbf{x}_0 \\
&\mathbf{p}_0 := \mathbf{r}_0 \\
&\text{Iterate, with } k \text{ starting at } 0: \\
&\qquad \alpha_k := \frac{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{r}_k}{(\mathbf{A}\mathbf{p}_k)^{\mathrm{T}}\mathbf{A}\mathbf{p}_k} \\
&\qquad \mathbf{x}_{k+1} := \mathbf{x}_k + \alpha_k \mathbf{p}_k \\
&\qquad \mathbf{r}_{k+1} := \mathbf{r}_k - \alpha_k \mathbf{A}\mathbf{p}_k \\
&\qquad \beta_k := \frac{\mathbf{r}_{k+1}^{\mathrm{T}}\mathbf{A}\mathbf{r}_{k+1}}{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{r}_k} \\
&\qquad \mathbf{p}_{k+1} := \mathbf{r}_{k+1} + \beta_k \mathbf{p}_k \\
&\qquad \mathbf{A}\mathbf{p}_{k+1} := \mathbf{A}\mathbf{r}_{k+1} + \beta_k \mathbf{A}\mathbf{p}_k \\
&\qquad k := k + 1
\end{aligned}
$$

The iteration may be stopped once $\mathbf{x}_k$ is deemed to have converged. The only difference between this and the conjugate gradient method is the calculation of $\alpha_k$ and $\beta_k$ (plus the optional incremental calculation of $\mathbf{A}\mathbf{p}_k$ at the end).

Note: the above algorithm can be transformed so as to perform only one symmetric matrix-vector multiplication in each iteration, as in the sketch below.
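
For concreteness, here is a minimal NumPy sketch of the iteration above, assuming a real symmetric $\mathbf{A}$ (the complex Hermitian case would use conjugate transposes); the function name, signature, and stopping criterion are illustrative choices, not taken from the source:

```python
import numpy as np

def conjugate_residual(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Solve A x = b for a real symmetric, invertible A
    using the conjugate residual method."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    Ar = A @ r
    Ap = Ar.copy()         # since p0 = r0, we have A p0 = A r0
    rAr = r @ Ar           # r_k^T A r_k, reused across iterations
    for _ in range(max_iter):
        alpha = rAr / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:   # illustrative stopping rule
            break
        Ar = A @ r         # the single matrix-vector product per iteration
        rAr_next = r @ Ar
        beta = rAr_next / rAr
        rAr = rAr_next
        p = r + beta * p
        Ap = Ar + beta * Ap  # incremental update; avoids computing A p directly
    return x

# Example on a small symmetric system:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_residual(A, b)
print(x, np.linalg.norm(A @ x - b))
```

Each pass computes only one product with $\mathbf{A}$ (the `A @ r` line); the direction product $\mathbf{A}\mathbf{p}_{k+1}$ comes from the incremental update, exactly as in the note above.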


Preconditioning

By making a few substitutions and variable changes, a preconditioned conjugate residual method may be derived in the same way as is done for the conjugate gradient method:

$$
\begin{aligned}
&\mathbf{x}_0 := \text{Some initial guess} \\
&\mathbf{r}_0 := \mathbf{M}^{-1}(\mathbf{b} - \mathbf{A}\mathbf{x}_0) \\
&\mathbf{p}_0 := \mathbf{r}_0 \\
&\text{Iterate, with } k \text{ starting at } 0: \\
&\qquad \alpha_k := \frac{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{r}_k}{(\mathbf{A}\mathbf{p}_k)^{\mathrm{T}}\mathbf{M}^{-1}\mathbf{A}\mathbf{p}_k} \\
&\qquad \mathbf{x}_{k+1} := \mathbf{x}_k + \alpha_k \mathbf{p}_k \\
&\qquad \mathbf{r}_{k+1} := \mathbf{r}_k - \alpha_k \mathbf{M}^{-1}\mathbf{A}\mathbf{p}_k \\
&\qquad \beta_k := \frac{\mathbf{r}_{k+1}^{\mathrm{T}}\mathbf{A}\mathbf{r}_{k+1}}{\mathbf{r}_k^{\mathrm{T}}\mathbf{A}\mathbf{r}_k} \\
&\qquad \mathbf{p}_{k+1} := \mathbf{r}_{k+1} + \beta_k \mathbf{p}_k \\
&\qquad \mathbf{A}\mathbf{p}_{k+1} := \mathbf{A}\mathbf{r}_{k+1} + \beta_k \mathbf{A}\mathbf{p}_k \\
&\qquad k := k + 1
\end{aligned}
$$

The preconditioner $\mathbf{M}^{-1}$ must be symmetric positive definite. Note that the residual vector here is different from the residual vector of the unpreconditioned method.
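
A corresponding sketch of the preconditioned iteration, under the same assumptions as before; here `M_inv` is a caller-supplied function applying $\mathbf{M}^{-1}$ (a hypothetical interface chosen for illustration), and stopping on the norm of the preconditioned residual is likewise an illustrative choice:

```python
import numpy as np

def preconditioned_conjugate_residual(A, b, M_inv, x0=None,
                                      tol=1e-10, max_iter=1000):
    """Preconditioned conjugate residual sketch. M_inv is a callable
    applying the symmetric positive definite preconditioner inverse."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = M_inv(b - A @ x)   # preconditioned residual r0 = M^{-1}(b - A x0)
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for _ in range(max_iter):
        MAp = M_inv(Ap)
        alpha = rAr / (Ap @ MAp)
        x += alpha * p
        r -= alpha * MAp   # r_{k+1} = r_k - alpha M^{-1} A p_k
        if np.linalg.norm(r) < tol:  # note: preconditioned residual norm
            break
        Ar = A @ r
        rAr_next = r @ Ar
        beta = rAr_next / rAr
        rAr = rAr_next
        p = r + beta * p
        Ap = Ar + beta * Ap
    return x

# Example with a Jacobi (diagonal) preconditioner:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = preconditioned_conjugate_residual(A, b, lambda v: v / np.diag(A))
print(x, np.linalg.norm(A @ x - b))
```

The Jacobi preconditioner above is just one simple choice of a symmetric positive definite $\mathbf{M}^{-1}$; any other such operator could be passed in its place.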
