https://en.wikipedia.org/w/index.php?action=history&feed=atom&title=Lanczos_algorithm
Lanczos algorithm - Revision history
2025-06-03T08:21:22Z
Revision history for this page on the wiki
MediaWiki 1.45.0-wmf.3
https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1291780938&oldid=prev
128.131.126.87: In Cullum 1986 the subtraction of the previous state is done earlier.
2025-05-23T10:58:13Z
<p>In Cullum 1986 the subtraction of the previous state is done earlier.</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 10:58, 23 May 2025</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 24:</td>
<td colspan="2" class="diff-lineno">Line 24:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:## If <math> \beta_j \neq 0 </math>, then let <math> v_j = w_{j-1} / \beta_j </math>,</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:## If <math> \beta_j \neq 0 </math>, then let <math> v_j = w_{j-1} / \beta_j </math>,</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:##: else pick as <math>v_j</math> an arbitrary vector with Euclidean norm <math>1</math> that is orthogonal to all of <math> v_1,\dots,v_{j-1} </math>.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:##: else pick as <math>v_j</math> an arbitrary vector with Euclidean norm <math>1</math> that is orthogonal to all of <math> v_1,\dots,v_{j-1} </math>.</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>:## Let <math> w_j' = A v_j </math>.</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>:## Let <math> w_j' = A v_j<ins style="font-weight: bold; text-decoration: none;"> - \beta_j v_{j-1}</ins> </math>.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:## Let <math> \alpha_j = w_j'^* v_j </math>.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:## Let <math> \alpha_j = w_j'^* v_j </math>.</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>:## Let <math> w_j = w_j' - \alpha_j v_j<del style="font-weight: bold; text-decoration: none;"> - \beta_j v_{j-1}</del> </math>.</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>:## Let <math> w_j = w_j' - \alpha_j v_j </math>.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:# Let <math>V</math> be the matrix with columns <math> v_1,\dots,v_m </math>. Let <math>T = \begin{pmatrix}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:# Let <math>V</math> be the matrix with columns <math> v_1,\dots,v_m </math>. Let <math>T = \begin{pmatrix}</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>\alpha_1 & \beta_2 & & & & 0 \\</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>\alpha_1 & \beta_2 & & & & 0 \\</div></td>
</tr>
<tr>
<td colspan="2" class="diff-lineno">Line 35:</td>
<td colspan="2" class="diff-lineno">Line 35:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>0 & & & & \beta_m & \alpha_m \\</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>0 & & & & \beta_m & \alpha_m \\</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>\end{pmatrix}</math>.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>\end{pmatrix}</math>.</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>:'''Note''' <math> A v_j<del style="font-weight: bold; text-decoration: none;"> = w_j'</del> = \beta_{j+1} v_{j+1} + \alpha_j v_j + \beta_j v_{j-1} </math> for <math> 2 < j < m </math>.</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>:'''Note''' <math> A v_j = \beta_{j+1} v_{j+1} + \alpha_j v_j + \beta_j v_{j-1} </math> for <math> 2 < j < m </math>.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>There are in principle four ways to write the iteration procedure. Paige and other works show that the above order of operations is the most numerically stable.<ref name="CW1985">{{Cite book|last1=Cullum |last2= Willoughby|title=Lanczos Algorithms for Large Symmetric Eigenvalue Computations|year= 1985|volume= 1| isbn= 0-8176-3058-9}}</ref><ref name="Saad1992">{{Cite book|author=Yousef Saad|title=Numerical Methods for Large Eigenvalue Problems| isbn= 0-470-21820-7|url= http://www-users.cs.umn.edu/~saad/books.html|author-link=Yousef Saad|date=1992-06-22}}</ref></div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>There are in principle four ways to write the iteration procedure. Paige and other works show that the above order of operations is the most numerically stable.<ref name="CW1985">{{Cite book|last1=Cullum |last2= Willoughby|title=Lanczos Algorithms for Large Symmetric Eigenvalue Computations|year= 1985|volume= 1| isbn= 0-8176-3058-9}}</ref><ref name="Saad1992">{{Cite book|author=Yousef Saad|title=Numerical Methods for Large Eigenvalue Problems| isbn= 0-470-21820-7|url= http://www-users.cs.umn.edu/~saad/books.html|author-link=Yousef Saad|date=1992-06-22}}</ref></div></td>
</tr>
</table>
128.131.126.87
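The revision above moves the subtraction of the previous basis vector earlier in the iteration, so that <math>w_j' = A v_j - \beta_j v_{j-1}</math> is formed before <math>\alpha_j</math>. A minimal NumPy sketch of that ordering follows; the function name `lanczos` and the dense-matrix setup are illustrative, not from the article, and this version assumes a real symmetric matrix and does no reorthogonalization.

```python
import numpy as np

def lanczos(A, v1, m):
    """Lanczos tridiagonalization in the revised order from the diff above:
    beta_j * v_{j-1} is subtracted from A v_j before alpha_j is computed.
    A: real symmetric (n, n) array; v1: start vector; m: number of steps."""
    n = A.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)                 # beta[0] is unused: there is no v_0
    V[:, 0] = v1 / np.linalg.norm(v1)
    w = A @ V[:, 0]
    alpha[0] = w @ V[:, 0]
    w = w - alpha[0] * V[:, 0]
    for j in range(1, m):
        beta[j] = np.linalg.norm(w)
        if beta[j] == 0:               # breakdown: a production code would
            break                      # restart with a new orthogonal vector
        V[:, j] = w / beta[j]
        w = A @ V[:, j] - beta[j] * V[:, j - 1]   # subtract previous state early
        alpha[j] = w @ V[:, j]
        w = w - alpha[j] * V[:, j]
    # Assemble the symmetric tridiagonal matrix T from alpha and beta.
    T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(beta[1:], -1)
    return V, T
```

For small `m` and a well-separated spectrum this reproduces `V.T @ A @ V == T` to rounding error; in floating point, longer runs lose orthogonality and need the reorthogonalization strategies discussed in the cited Cullum–Willoughby and Saad books.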
https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1223948364&oldid=prev
AaronAegeus: Separated paragraphs
2024-05-15T09:57:22Z
<p>Separated paragraphs</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 09:57, 15 May 2024</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 46:</td>
<td colspan="2" class="diff-lineno">Line 46:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>=== Application to the eigenproblem ===</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>=== Application to the eigenproblem ===</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>The Lanczos algorithm is most often brought up in the context of finding the [[eigenvalue]]s and [[eigenvector]]s of a matrix, but whereas an ordinary [[Matrix diagonalization|diagonalization of a matrix]] would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition.<del style="font-weight: bold; text-decoration: none;"> </del>If <math>\lambda</math> is an eigenvalue of <math>T</math>, and <math>x</math> its eigenvector (<math>Tx=\lambda x</math>), then <math> y = V x </math> is a corresponding eigenvector of <math>A</math> with the same eigenvalue:</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>The Lanczos algorithm is most often brought up in the context of finding the [[eigenvalue]]s and [[eigenvector]]s of a matrix, but whereas an ordinary [[Matrix diagonalization|diagonalization of a matrix]] would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition.</div></td>
</tr>
<tr>
<td colspan="2" class="diff-empty diff-side-deleted"></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div></div></td>
</tr>
<tr>
<td colspan="2" class="diff-empty diff-side-deleted"></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>If <math>\lambda</math> is an eigenvalue of <math>T</math>, and <math>x</math> its eigenvector (<math>Tx=\lambda x</math>), then <math> y = V x </math> is a corresponding eigenvector of <math>A</math> with the same eigenvalue:</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math display=block>\begin{align} A y &= A V x \\ &= V T V^* V x \\ &= V T I x \\ &= V T x \\ &= V (\lambda x) \\ &= \lambda V x \\& = \lambda y.\end{align}</math></div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div><math display=block>\begin{align} A y &= A V x \\ &= V T V^* V x \\ &= V T I x \\ &= V T x \\ &= V (\lambda x) \\ &= \lambda V x \\& = \lambda y.\end{align}</math></div></td>
</tr>
</table>
AaronAegeus
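The identity restated in the revisions above, <math>Ay = AVx = VTV^*Vx = \lambda Vx = \lambda y</math>, can be checked numerically. The sketch below uses an arbitrary orthonormal <math>V</math> (so <math>T = V^* A V</math> is similar to <math>A</math> but not tridiagonal); the eigenpair mapping is the same, and the random setup is purely illustrative.

```python
import numpy as np

# Check of the eigenpair correspondence: if T x = lambda x with T = V^T A V
# and V orthonormal, then y = V x satisfies A y = lambda y.
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # make A symmetric
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthonormal stand-in for V
T = Q.T @ A @ Q                        # similar to A (not tridiagonal here)
lam, X = np.linalg.eigh(T)             # eigenpairs of T: T X = X diag(lam)
Y = Q @ X                              # mapped candidate eigenvectors of A
for i in range(n):
    assert np.allclose(A @ Y[:, i], lam[i] * Y[:, i], atol=1e-8)
```

In exact arithmetic with a full Lanczos run (`m = n`) the same check holds for the tridiagonal `T`; with `m < n` the mapped vectors are Ritz approximations rather than exact eigenvectors.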
https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1223948294&oldid=prev
AaronAegeus: Improved elucidation and changed formatting
2024-05-15T09:56:27Z
<p>Improved elucidation and changed formatting</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 09:56, 15 May 2024</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 46:</td>
<td colspan="2" class="diff-lineno">Line 46:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>=== Application to the eigenproblem ===</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>=== Application to the eigenproblem ===</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>The Lanczos algorithm is most often brought up in the context of finding the [[eigenvalue]]s and [[eigenvector]]s of a matrix, but whereas an ordinary [[Matrix diagonalization|diagonalization of a matrix]] would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition. If <math>\lambda</math> is an eigenvalue of <math>T</math>, and <math> <del style="font-weight: bold; text-decoration: none;">T</del> <del style="font-weight: bold; text-decoration: none;">x</del> =<del style="font-weight: bold; text-decoration: none;"> </del>\lambda x<del style="font-weight: bold; text-decoration: none;"> </del></math><del style="font-weight: bold; text-decoration: none;"> the corresponding eigenvector</del>, then <math> y = V x </math> is <del style="font-weight: bold; text-decoration: none;">the</del> corresponding eigenvector of <math>A</math> with the same eigenvalue:<del style="font-weight: bold; text-decoration: none;"> </del><math> A y = A V x = V T V^* V x = V T I x = V T x = V (\lambda x) = \lambda V x = \lambda y</math><del style="font-weight: bold; text-decoration: none;">. </del>Thus the Lanczos algorithm transforms the eigendecomposition problem for <math>A</math> into the eigendecomposition problem for <math>T</math>.</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>The Lanczos algorithm is most often brought up in the context of finding the [[eigenvalue]]s and [[eigenvector]]s of a matrix, but whereas an ordinary [[Matrix diagonalization|diagonalization of a matrix]] would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition. If <math>\lambda</math> is an eigenvalue of <math>T</math>, and <<ins style="font-weight: bold; text-decoration: none;">math>x</</ins>math> <ins style="font-weight: bold; text-decoration: none;">its</ins> <ins style="font-weight: bold; text-decoration: none;">eigenvector</ins> <ins style="font-weight: bold; text-decoration: none;">(<math>Tx</ins>=\lambda x</math><ins style="font-weight: bold; text-decoration: none;">)</ins>, then <math> y = V x </math> is <ins style="font-weight: bold; text-decoration: none;">a</ins> corresponding eigenvector of <math>A</math> with the same eigenvalue:</div></td>
</tr>
<tr>
<td colspan="2" class="diff-empty diff-side-deleted"></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div></div></td>
</tr>
<tr>
<td colspan="2" class="diff-empty diff-side-deleted"></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div><math<ins style="font-weight: bold; text-decoration: none;"> display=block</ins>><ins style="font-weight: bold; text-decoration: none;">\begin{align}</ins> A y <ins style="font-weight: bold; text-decoration: none;">&</ins>= A V x <ins style="font-weight: bold; text-decoration: none;">\\ &</ins>= V T V^* V x <ins style="font-weight: bold; text-decoration: none;">\\ &</ins>= V T I x <ins style="font-weight: bold; text-decoration: none;">\\ &</ins>= V T x <ins style="font-weight: bold; text-decoration: none;">\\ &</ins>= V (\lambda x) <ins style="font-weight: bold; text-decoration: none;">\\ &</ins>= \lambda V x<ins style="font-weight: bold; text-decoration: none;"> \\&</ins> = \lambda y<ins style="font-weight: bold; text-decoration: none;">.\end{align}</ins></math></div></td>
</tr>
<tr>
<td colspan="2" class="diff-empty diff-side-deleted"></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div></div></td>
</tr>
<tr>
<td colspan="2" class="diff-empty diff-side-deleted"></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Thus the Lanczos algorithm transforms the eigendecomposition problem for <math>A</math> into the eigendecomposition problem for <math>T</math>.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div># For tridiagonal matrices, there exist a number of specialised algorithms, often with better computational complexity than general-purpose algorithms. For example, if <math>T</math> is an <math>m \times m</math> tridiagonal symmetric matrix then:</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div># For tridiagonal matrices, there exist a number of specialised algorithms, often with better computational complexity than general-purpose algorithms. For example, if <math>T</math> is an <math>m \times m</math> tridiagonal symmetric matrix then:</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>#* The [[Tridiagonal matrix#Determinant|continuant recursion]] allows computing the [[characteristic polynomial]] in <math>O(m^2)</math> operations, and evaluating it at a point in <math>O(m)</math> operations.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>#* The [[Tridiagonal matrix#Determinant|continuant recursion]] allows computing the [[characteristic polynomial]] in <math>O(m^2)</math> operations, and evaluating it at a point in <math>O(m)</math> operations.</div></td>
</tr>
</table>
AaronAegeus
https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1223938762&oldid=prev
AaronAegeus: Fixed an error in proof of eigencorrespondence and improved elaboration
2024-05-15T08:20:14Z
<p>Fixed an error in proof of eigencorrespondence and improved elaboration</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 08:20, 15 May 2024</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 46:</td>
<td colspan="2" class="diff-lineno">Line 46:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>=== Application to the eigenproblem ===</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>=== Application to the eigenproblem ===</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>The Lanczos algorithm is most often brought up in the context of finding the [[eigenvalue]]s and [[eigenvector]]s of a matrix, but whereas an ordinary [[Matrix diagonalization|diagonalization of a matrix]] would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition. If <math>\lambda</math> is an eigenvalue of <math><del style="font-weight: bold; text-decoration: none;">A</del></math>, and<del style="font-weight: bold; text-decoration: none;"> if</del> <math> T x = \lambda x </math> <del style="font-weight: bold; text-decoration: none;">(<math>x</math></del> <del style="font-weight: bold; text-decoration: none;">is an</del> eigenvector<del style="font-weight: bold; text-decoration: none;"> of <math>T</math>)</del> then <math> y = V x </math> is the corresponding eigenvector of <math>A</math> <del style="font-weight: bold; text-decoration: none;">(since</del> <math> A y = A V x = V T V^* V x = V T I x = V T x = V (\lambda x) = \lambda V x = \lambda y</math><del style="font-weight: bold; text-decoration: none;">)</del>. Thus the Lanczos algorithm transforms the eigendecomposition problem for <math>A</math> into the eigendecomposition problem for <math>T</math>.</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>The Lanczos algorithm is most often brought up in the context of finding the [[eigenvalue]]s and [[eigenvector]]s of a matrix, but whereas an ordinary [[Matrix diagonalization|diagonalization of a matrix]] would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition. If <math>\lambda</math> is an eigenvalue of <math><ins style="font-weight: bold; text-decoration: none;">T</ins></math>, and <math> T x = \lambda x </math> <ins style="font-weight: bold; text-decoration: none;">the</ins> <ins style="font-weight: bold; text-decoration: none;">corresponding</ins> eigenvector<ins style="font-weight: bold; text-decoration: none;">,</ins> then <math> y = V x </math> is the corresponding eigenvector of <math>A</math> <ins style="font-weight: bold; text-decoration: none;">with the same eigenvalue:</ins> <math> A y = A V x = V T V^* V x = V T I x = V T x = V (\lambda x) = \lambda V x = \lambda y</math>. Thus the Lanczos algorithm transforms the eigendecomposition problem for <math>A</math> into the eigendecomposition problem for <math>T</math>.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div># For tridiagonal matrices, there exist a number of specialised algorithms, often with better computational complexity than general-purpose algorithms. For example, if <math>T</math> is an <math>m \times m</math> tridiagonal symmetric matrix then:</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div># For tridiagonal matrices, there exist a number of specialised algorithms, often with better computational complexity than general-purpose algorithms. For example, if <math>T</math> is an <math>m \times m</math> tridiagonal symmetric matrix then:</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>#* The [[Tridiagonal matrix#Determinant|continuant recursion]] allows computing the [[characteristic polynomial]] in <math>O(m^2)</math> operations, and evaluating it at a point in <math>O(m)</math> operations.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>#* The [[Tridiagonal matrix#Determinant|continuant recursion]] allows computing the [[characteristic polynomial]] in <math>O(m^2)</math> operations, and evaluating it at a point in <math>O(m)</math> operations.</div></td>
</tr>
</table>
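The identity recorded in the diff above (an eigenpair <math>(\lambda, x)</math> of <math>T</math> lifts to the eigenpair <math>(\lambda, Vx)</math> of <math>A = V T V^*</math>) can be checked numerically. A minimal pure-Python sketch; the 2×2 matrices and the rotation angle are hypothetical illustrations, not from the article:

```python
import math

def matvec(M, v):
    # dense matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

# Symmetric tridiagonal T with known eigenpair: T x = 3 x for x = (1,1)/sqrt(2)
T = [[2.0, 1.0], [1.0, 2.0]]
lam = 3.0
x = [1 / math.sqrt(2), 1 / math.sqrt(2)]

# Any orthogonal V (here a plane rotation) gives A = V T V^T with the same spectrum
c, s = math.cos(0.3), math.sin(0.3)
V = [[c, -s], [s, c]]
Vt = [[V[j][i] for j in range(2)] for i in range(2)]
A = matmul(matmul(V, T), Vt)

y = matvec(V, x)                 # candidate eigenvector of A
Ay = matvec(A, y)
residual = max(abs(Ay[i] - lam * y[i]) for i in range(2))
print(residual)                  # ~0 up to rounding: A y = lambda y
```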
AaronAegeus
https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1189979134&oldid=prev
Pwesterbaan: /* The algorithm */Changed 1<j<m to 2<j<m since $\beta_1$ doesn't exist
2023-12-15T05:22:12Z
<p><span class="autocomment">The algorithm: </span>Changed 1&lt;j&lt;m to 2&lt;j&lt;m since $\beta_1$ doesn't exist</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 05:22, 15 December 2023</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 35:</td>
<td colspan="2" class="diff-lineno">Line 35:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>0 & & & & \beta_m & \alpha_m \\</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>0 & & & & \beta_m & \alpha_m \\</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>\end{pmatrix}</math>.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>\end{pmatrix}</math>.</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>:'''Note''' <math> A v_j = w_j' = \beta_{j+1} v_{j+1} + \alpha_j v_j + \beta_j v_{j-1} </math> for <math> <del style="font-weight: bold; text-decoration: none;">1</del> < j < m </math>.</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>:'''Note''' <math> A v_j = w_j' = \beta_{j+1} v_{j+1} + \alpha_j v_j + \beta_j v_{j-1} </math> for <math> <ins style="font-weight: bold; text-decoration: none;">2</ins> < j < m </math>.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>There are in principle four ways to write the iteration procedure. Paige and other works show that the above order of operations is the most numerically stable.<ref name="CW1985">{{Cite book|last1=Cullum |last2= Willoughby|title=Lanczos Algorithms for Large Symmetric Eigenvalue Computations|year= 1985|volume= 1| isbn= 0-8176-3058-9}}</ref><ref name="Saad1992">{{Cite book|author=Yousef Saad|title=Numerical Methods for Large Eigenvalue Problems| isbn= 0-470-21820-7|url= http://www-users.cs.umn.edu/~saad/books.html|author-link=Yousef Saad|date=1992-06-22}}</ref></div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>There are in principle four ways to write the iteration procedure. Paige and other works show that the above order of operations is the most numerically stable.<ref name="CW1985">{{Cite book|last1=Cullum |last2= Willoughby|title=Lanczos Algorithms for Large Symmetric Eigenvalue Computations|year= 1985|volume= 1| isbn= 0-8176-3058-9}}</ref><ref name="Saad1992">{{Cite book|author=Yousef Saad|title=Numerical Methods for Large Eigenvalue Problems| isbn= 0-470-21820-7|url= http://www-users.cs.umn.edu/~saad/books.html|author-link=Yousef Saad|date=1992-06-22}}</ref></div></td>
</tr>
</table>
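The three-term recurrence in the note above is what the iteration actually computes. A minimal sketch, assuming the order of operations the article calls most stable (form <math>w' = A v_j</math>, take <math>\alpha_j = \langle w', v_j\rangle</math>, then subtract both projections); the 3×3 test matrix is a hypothetical illustration, and no reorthogonalization is done:

```python
import math

def lanczos(matvec, v1, m):
    """Minimal Lanczos sketch: returns the diagonal (alphas) and
    off-diagonal (betas) entries of the tridiagonal matrix T."""
    n = len(v1)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    nrm = math.sqrt(dot(v1, v1))
    v = [x / nrm for x in v1]
    v_prev = [0.0] * n
    alphas, betas = [], []
    beta = 0.0
    for _ in range(m):
        w = matvec(v)
        alpha = dot(w, v)
        # subtract projections onto the two previous Lanczos vectors
        w = [w[i] - alpha * v[i] - beta * v_prev[i] for i in range(n)]
        alphas.append(alpha)
        beta = math.sqrt(dot(w, w))
        if beta == 0.0:
            break                # invariant subspace found
        betas.append(beta)
        v_prev, v = v, [x / beta for x in w]
    return alphas, betas[:len(alphas) - 1]

# Tiny symmetric test matrix (already tridiagonal, so T reproduces A)
A = [[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]]
mv = lambda x: [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
alphas, betas = lanczos(mv, [1.0, 0.0, 0.0], 3)
```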
Pwesterbaan
https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1187972119&oldid=prev
OAbot: Open access bot: doi updated in citation with #oabot.
2023-12-02T16:53:49Z
<p><a href="/wiki/Wikipedia:OABOT" class="mw-redirect" title="Wikipedia:OABOT">Open access bot</a>: doi updated in citation with #oabot.</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 16:53, 2 December 2023</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 50:</td>
<td colspan="2" class="diff-lineno">Line 50:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>#* The [[Tridiagonal matrix#Determinant|continuant recursion]] allows computing the [[characteristic polynomial]] in <math>O(m^2)</math> operations, and evaluating it at a point in <math>O(m)</math> operations.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>#* The [[Tridiagonal matrix#Determinant|continuant recursion]] allows computing the [[characteristic polynomial]] in <math>O(m^2)</math> operations, and evaluating it at a point in <math>O(m)</math> operations.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>#* The [[divide-and-conquer eigenvalue algorithm]] can be used to compute the entire eigendecomposition of <math>T</math> in <math>O(m^2)</math> operations.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>#* The [[divide-and-conquer eigenvalue algorithm]] can be used to compute the entire eigendecomposition of <math>T</math> in <math>O(m^2)</math> operations.</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>#* The Fast Multipole Method<ref>{{cite journal|last1=Coakley|first1=Ed S.|last2=Rokhlin|first2=Vladimir|title=A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices|journal=Applied and Computational Harmonic Analysis|date=2013|volume=34|issue=3|pages=379–414|doi=10.1016/j.acha.2012.06.003|doi-access=<del style="font-weight: bold; text-decoration: none;">free</del>}}</ref> can compute all eigenvalues in just <math>O(m \log m)</math> operations.</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>#* The Fast Multipole Method<ref>{{cite journal|last1=Coakley|first1=Ed S.|last2=Rokhlin|first2=Vladimir|title=A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices|journal=Applied and Computational Harmonic Analysis|date=2013|volume=34|issue=3|pages=379–414|doi=10.1016/j.acha.2012.06.003|doi-access=}}</ref> can compute all eigenvalues in just <math>O(m \log m)</math> operations.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div># Some general eigendecomposition algorithms, notably the [[QR algorithm]], are known to converge faster for tridiagonal matrices than for general matrices. Asymptotic complexity of tridiagonal QR is <math>O(m^2)</math> just as for the divide-and-conquer algorithm (though the constant factor may be different); since the eigenvectors together have <math>m^2</math> elements, this is asymptotically optimal.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div># Some general eigendecomposition algorithms, notably the [[QR algorithm]], are known to converge faster for tridiagonal matrices than for general matrices. Asymptotic complexity of tridiagonal QR is <math>O(m^2)</math> just as for the divide-and-conquer algorithm (though the constant factor may be different); since the eigenvectors together have <math>m^2</math> elements, this is asymptotically optimal.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div># Even algorithms whose convergence rates are unaffected by unitary transformations, such as the [[power method]] and [[inverse iteration]], may enjoy low-level performance benefits from being applied to the tridiagonal matrix <math>T</math> rather than the original matrix <math>A</math>. Since <math>T</math> is very sparse with all nonzero elements in highly predictable positions, it permits compact storage with excellent performance vis-à-vis [[Cache (computing)|caching]]. Likewise, <math>T</math> is a [[real number|real]] matrix with all eigenvectors and eigenvalues real, whereas <math>A</math> in general may have complex elements and eigenvectors, so real arithmetic is sufficient for finding the eigenvectors and eigenvalues of <math>T</math>.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div># Even algorithms whose convergence rates are unaffected by unitary transformations, such as the [[power method]] and [[inverse iteration]], may enjoy low-level performance benefits from being applied to the tridiagonal matrix <math>T</math> rather than the original matrix <math>A</math>. Since <math>T</math> is very sparse with all nonzero elements in highly predictable positions, it permits compact storage with excellent performance vis-à-vis [[Cache (computing)|caching]]. Likewise, <math>T</math> is a [[real number|real]] matrix with all eigenvectors and eigenvalues real, whereas <math>A</math> in general may have complex elements and eigenvectors, so real arithmetic is sufficient for finding the eigenvectors and eigenvalues of <math>T</math>.</div></td>
</tr>
</table>
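The continuant recursion mentioned in the diff above evaluates the characteristic polynomial <math>\det(T - \lambda I)</math> of a symmetric tridiagonal matrix at a point in <math>O(m)</math> operations, via <math>d_0 = 1</math>, <math>d_1 = \alpha_1 - \lambda</math>, <math>d_k = (\alpha_k - \lambda)\, d_{k-1} - \beta_k^2\, d_{k-2}</math>. A sketch with a hypothetical 2×2 example:

```python
def charpoly_at(diag, offdiag, lam):
    """Evaluate det(T - lam*I) for the symmetric tridiagonal matrix T with
    main diagonal `diag` and off-diagonal `offdiag`, in O(m) operations."""
    d_prev, d = 1.0, diag[0] - lam
    for k in range(1, len(diag)):
        d_prev, d = d, (diag[k] - lam) * d - offdiag[k - 1] ** 2 * d_prev
    return d

# T = [[2,1],[1,2]] has eigenvalues 1 and 3, so the polynomial vanishes there
print(charpoly_at([2.0, 2.0], [1.0], 1.0))   # 0.0
print(charpoly_at([2.0, 2.0], [1.0], 3.0))   # 0.0
print(charpoly_at([2.0, 2.0], [1.0], 0.0))   # 3.0  (= det T)
```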
OAbot
https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1162736907&oldid=prev
Macrakis: more useful short description
2023-06-30T21:18:56Z
<p>more useful short description</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 21:18, 30 June 2023</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 1:</td>
<td colspan="2" class="diff-lineno">Line 1:</td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>{{Short description|<del style="font-weight: bold; text-decoration: none;">Iterative</del> <del style="font-weight: bold; text-decoration: none;">numerical</del> <del style="font-weight: bold; text-decoration: none;">method</del>}}</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>{{Short description|<ins style="font-weight: bold; text-decoration: none;">Numerical</ins> <ins style="font-weight: bold; text-decoration: none;">eigenvalue</ins> <ins style="font-weight: bold; text-decoration: none;">calculation</ins>}}</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the null space-finding algorithm|block Lanczos algorithm}}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the null space-finding algorithm|block Lanczos algorithm}}</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the interpolation method|Lanczos resampling}}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the interpolation method|Lanczos resampling}}</div></td>
</tr>
</table>
Macrakis
https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1162725230&oldid=prev
MichaelMaggs: Adding local short description: "Iterative numerical method", overriding Wikidata description "numerical method for find eigenvalues"
2023-06-30T19:48:43Z
<p>Adding local <a href="/wiki/Wikipedia:Short_description" title="Wikipedia:Short description">short description</a>: "Iterative numerical method", overriding Wikidata description "numerical method for find eigenvalues"</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 19:48, 30 June 2023</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 1:</td>
<td colspan="2" class="diff-lineno">Line 1:</td>
</tr>
<tr>
<td colspan="2" class="diff-empty diff-side-deleted"></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>{{Short description|Iterative numerical method}}</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the null space-finding algorithm|block Lanczos algorithm}}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the null space-finding algorithm|block Lanczos algorithm}}</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the interpolation method|Lanczos resampling}}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the interpolation method|Lanczos resampling}}</div></td>
</tr>
</table>
MichaelMaggs
https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1148037072&oldid=prev
LongingLament: /* A more provident power method */ Fix a typo
2023-04-03T17:59:30Z
<p><span class="autocomment">A more provident power method: </span> Fix a typo</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 17:59, 3 April 2023</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 76:</td>
<td colspan="2" class="diff-lineno">Line 76:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:## Let <math> u_{j+1} = u_{j+1}' / \| u_{j+1}' \|.</math></div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:## Let <math> u_{j+1} = u_{j+1}' / \| u_{j+1}' \|.</math></div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:* In the large <math>j</math> limit, <math>u_j</math> approaches the normed eigenvector corresponding to the largest magnitude eigenvalue.</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:* In the large <math>j</math> limit, <math>u_j</math> approaches the normed eigenvector corresponding to the largest magnitude eigenvalue.</div></td>
</tr>
<tr>
<td class="diff-marker" data-marker="−"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>A critique that can be raised against this method is that it is wasteful: it spends a lot of work (the matrix–vector products in step 2.1) extracting information from the matrix <math>A</math>, but pays attention only to the very last result; implementations typically use the same variable for all the vectors <math>u_j</math>, having each new iteration overwrite the results from the previous one. It may be desirable to instead <del style="font-weight: bold; text-decoration: none;">keeptl</del> all the intermediate results and organise the data.</div></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>A critique that can be raised against this method is that it is wasteful: it spends a lot of work (the matrix–vector products in step 2.1) extracting information from the matrix <math>A</math>, but pays attention only to the very last result; implementations typically use the same variable for all the vectors <math>u_j</math>, having each new iteration overwrite the results from the previous one. It may be desirable to instead <ins style="font-weight: bold; text-decoration: none;">keep</ins> all the intermediate results and organise the data.</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>One piece of information that trivially is available from the vectors <math>u_j</math> is a chain of [[Krylov subspace]]s. One way of stating that without introducing sets into the algorithm is to claim that it computes</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>One piece of information that trivially is available from the vectors <math>u_j</math> is a chain of [[Krylov subspace]]s. One way of stating that without introducing sets into the algorithm is to claim that it computes</div></td>
</tr>
</table>
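The power method criticized in the paragraph above can be sketched so that it keeps every normalized iterate <math>u_j</math> instead of overwriting a single vector (the intermediates are exactly the information the text says is usually discarded). The 2×2 matrix and starting vector are hypothetical illustrations:

```python
import math

def power_method(matvec, u0, steps):
    """Power iteration that retains all normalized iterates u_j;
    their spans form a chain of Krylov subspaces."""
    nrm = math.sqrt(sum(x * x for x in u0))
    us = [[x / nrm for x in u0]]
    for _ in range(steps):
        w = matvec(us[-1])
        nrm = math.sqrt(sum(x * x for x in w))
        us.append([x / nrm for x in w])
    u = us[-1]
    Au = matvec(u)
    # Rayleigh quotient as the eigenvalue estimate for the final iterate
    lam = sum(ui * Aui for ui, Aui in zip(u, Au))
    return lam, us

A = [[2.0, 1.0], [1.0, 2.0]]                 # dominant eigenvalue 3
mv = lambda x: [A[0][0]*x[0] + A[0][1]*x[1],
                A[1][0]*x[0] + A[1][1]*x[1]]
lam, us = power_method(mv, [1.0, 0.0], 30)
print(round(lam, 6))                         # 3.0
```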
LongingLament
https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1145662458&oldid=prev
Cond-mat: added further reading
2023-03-20T08:52:27Z
<p>added further reading</p>
<table style="background-color: #fff; color: #202122;" data-mw="interface">
<col class="diff-marker" />
<col class="diff-content" />
<col class="diff-marker" />
<col class="diff-content" />
<tr class="diff-title" lang="en">
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td>
<td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 08:52, 20 March 2023</td>
</tr><tr>
<td colspan="2" class="diff-lineno">Line 298:</td>
<td colspan="2" class="diff-lineno">Line 298:</td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>* {{cite book |first1=Gene H. |last1=Golub |author-link=Gene H. Golub |first2=Charles F. |last2=Van Loan |author-link2=Charles F. Van Loan |title=Matrix Computations |location=Baltimore |publisher=Johns Hopkins University Press |year=1996 |isbn=0-8018-5414-8 |pages=470–507 |chapter=Lanczos Methods |chapter-url=https://books.google.com/books?id=mlOa7wPX6OYC&pg=PA470 }}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>* {{cite book |first1=Gene H. |last1=Golub |author-link=Gene H. Golub |first2=Charles F. |last2=Van Loan |author-link2=Charles F. Van Loan |title=Matrix Computations |location=Baltimore |publisher=Johns Hopkins University Press |year=1996 |isbn=0-8018-5414-8 |pages=470–507 |chapter=Lanczos Methods |chapter-url=https://books.google.com/books?id=mlOa7wPX6OYC&pg=PA470 }}</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>* {{cite journal |first1=Andrew Y. |last1=Ng |author-link=Andrew Ng |first2=Alice X. |last2=Zheng |first3=Michael I. |last3=Jordan |author-link3=Michael I. Jordan |title=Link Analysis, Eigenvectors and Stability |journal=IJCAI'01 Proceedings of the 17th International Joint Conference on Artificial Intelligence |volume=2 |pages=903–910 |date=2001 |url=https://ai.stanford.edu/~ang/papers/ijcai01-linkanalysis.pdf }}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>* {{cite journal |first1=Andrew Y. |last1=Ng |author-link=Andrew Ng |first2=Alice X. |last2=Zheng |first3=Michael I. |last3=Jordan |author-link3=Michael I. Jordan |title=Link Analysis, Eigenvectors and Stability |journal=IJCAI'01 Proceedings of the 17th International Joint Conference on Artificial Intelligence |volume=2 |pages=903–910 |date=2001 |url=https://ai.stanford.edu/~ang/papers/ijcai01-linkanalysis.pdf }}</div></td>
</tr>
<tr>
<td colspan="2" class="diff-empty diff-side-deleted"></td>
<td class="diff-marker" data-marker="+"></td>
<td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>* {{cite book|author=Erik Koch|chapter-url=http://www.cond-mat.de/events/correl19/manuscripts/koch.pdf|chapter=Exact Diagonalization and Lanczos Method|editor1= E. Pavarini |editor2=E. Koch |editor3=S. Zhang |title= Many-Body Methods for Real Materials|publisher= Jülich |year=2019|isbn=978-3-95806-400-3}}</div></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td>
</tr>
<tr>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{Numerical linear algebra}}</div></td>
<td class="diff-marker"></td>
<td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{Numerical linear algebra}}</div></td>
</tr>
</table>
Cond-mat