Lanczos algorithm - Revision history
Feed: https://en.wikipedia.org/w/index.php?action=history&feed=atom&title=Lanczos_algorithm
2025-06-03T08:21:22Z — Revision history for this page on the wiki — MediaWiki 1.45.0-wmf.3

----
128.131.126.87 — 2025-05-23T10:58:13Z — "In Cullum 1986 the subtraction of the previous state is done earlier."
https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1291780938&oldid=prev

Line 24:
  :## If <math> \beta_j \neq 0 </math>, then let <math> v_j = w_{j-1} / \beta_j </math>,
  :##: else pick as <math>v_j</math> an arbitrary vector with Euclidean norm <math>1</math> that is orthogonal to all of <math> v_1,\dots,v_{j-1} </math>.
- :## Let <math> w_j' = A v_j </math>.
+ :## Let <math> w_j' = A v_j - \beta_j v_{j-1} </math>.
  :## Let <math> \alpha_j = w_j'^* v_j </math>.
- :## Let <math> w_j = w_j' - \alpha_j v_j - \beta_j v_{j-1} </math>.
+ :## Let <math> w_j = w_j' - \alpha_j v_j </math>.
  :# Let <math>V</math> be the matrix with columns <math> v_1,\dots,v_m </math>. Let <math>T = \begin{pmatrix}
  \alpha_1 & \beta_2 & & & & 0 \\
Line 35:
  0 & & & & \beta_m & \alpha_m \\
  \end{pmatrix}</math>.
- :'''Note''' <math> A v_j = w_j' = \beta_{j+1} v_{j+1} + \alpha_j v_j + \beta_j v_{j-1} </math> for <math> 2 < j < m </math>.
+ :'''Note''' <math> A v_j = \beta_{j+1} v_{j+1} + \alpha_j v_j + \beta_j v_{j-1} </math> for <math> 2 < j < m </math>.

  There are in principle four ways to write the iteration procedure. Paige and other works show that the above order of operations is the most numerically stable.<ref name="CW1985">{{Cite book|last1=Cullum|last2=Willoughby|title=Lanczos Algorithms for Large Symmetric Eigenvalue Computations|year=1985|volume=1|isbn=0-8176-3058-9}}</ref><ref name="Saad1992">{{Cite book|author=Yousef Saad|title=Numerical Methods for Large Eigenvalue Problems|isbn=0-470-21820-7|url=http://www-users.cs.umn.edu/~saad/books.html|author-link=Yousef Saad|date=1992-06-22}}</ref>

----
AaronAegeus — 2024-05-15T09:57:22Z — "Separated paragraphs"
https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1223948364&oldid=prev

Line 46:
  === Application to the eigenproblem ===
- The Lanczos algorithm is most often brought up in the context of finding the [[eigenvalue]]s and [[eigenvector]]s of a matrix, but whereas an ordinary [[Matrix diagonalization|diagonalization of a matrix]] would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition. If <math>\lambda</math> is an eigenvalue of <math>T</math>, and <math>x</math> its eigenvector (<math>Tx=\lambda x</math>), then <math> y = V x </math> is a corresponding eigenvector of <math>A</math> with the same eigenvalue:
+ The Lanczos algorithm is most often brought up in the context of finding the [[eigenvalue]]s and [[eigenvector]]s of a matrix, but whereas an ordinary [[Matrix diagonalization|diagonalization of a matrix]] would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition.
+
+ If <math>\lambda</math> is an eigenvalue of <math>T</math>, and <math>x</math> its eigenvector (<math>Tx=\lambda x</math>), then <math> y = V x </math> is a corresponding eigenvector of <math>A</math> with the same eigenvalue:

  <math display=block>\begin{align} A y &= A V x \\ &= V T V^* V x \\ &= V T I x \\ &= V T x \\ &= V (\lambda x) \\ &= \lambda V x \\& = \lambda y.\end{align}</math>

----
AaronAegeus — 2024-05-15T09:56:27Z — "Improved elucidation and changed formatting"
https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1223948294&oldid=prev

Line 46:
  === Application to the eigenproblem ===
- The Lanczos algorithm is most often brought up in the context of finding the [[eigenvalue]]s and [[eigenvector]]s of a matrix, but whereas an ordinary [[Matrix diagonalization|diagonalization of a matrix]] would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition. If <math>\lambda</math> is an eigenvalue of <math>T</math>, and <math> T x = \lambda x </math> the corresponding eigenvector, then <math> y = V x </math> is the corresponding eigenvector of <math>A</math> with the same eigenvalue: <math> A y = A V x = V T V^* V x = V T I x = V T x = V (\lambda x) = \lambda V x = \lambda y</math>. Thus the Lanczos algorithm transforms the eigendecomposition problem for <math>A</math> into the eigendecomposition problem for <math>T</math>.
+ The Lanczos algorithm is most often brought up in the context of finding the [[eigenvalue]]s and [[eigenvector]]s of a matrix, but whereas an ordinary [[Matrix diagonalization|diagonalization of a matrix]] would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition. If <math>\lambda</math> is an eigenvalue of <math>T</math>, and <math>x</math> its eigenvector (<math>Tx=\lambda x</math>), then <math> y = V x </math> is a corresponding eigenvector of <math>A</math> with the same eigenvalue:
+
+ <math display=block>\begin{align} A y &= A V x \\ &= V T V^* V x \\ &= V T I x \\ &= V T x \\ &= V (\lambda x) \\ &= \lambda V x \\& = \lambda y.\end{align}</math>
+
+ Thus the Lanczos algorithm transforms the eigendecomposition problem for <math>A</math> into the eigendecomposition problem for <math>T</math>.
  # For tridiagonal matrices, there exist a number of specialised algorithms, often with better computational complexity than general-purpose algorithms. For example, if <math>T</math> is an <math>m \times m</math> tridiagonal symmetric matrix then:
  #* The [[Tridiagonal matrix#Determinant|continuant recursion]] allows computing the [[characteristic polynomial]] in <math>O(m^2)</math> operations, and evaluating it at a point in <math>O(m)</math> operations.

----
AaronAegeus — 2024-05-15T08:20:14Z — "Fixed an error in proof of eigencorrespondence and improved elaboration"
https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1223938762&oldid=prev

Line 46:
  === Application to the eigenproblem ===
- The Lanczos algorithm is most often brought up in the context of finding the [[eigenvalue]]s and [[eigenvector]]s of a matrix, but whereas an ordinary [[Matrix diagonalization|diagonalization of a matrix]] would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition. If <math>\lambda</math> is an eigenvalue of <math>A</math>, and if <math> T x = \lambda x </math> (<math>x</math> is an eigenvector of <math>T</math>) then <math> y = V x </math> is the corresponding eigenvector of <math>A</math> (since <math> A y = A V x = V T V^* V x = V T I x = V T x = V (\lambda x) = \lambda V x = \lambda y</math>). Thus the Lanczos algorithm transforms the eigendecomposition problem for <math>A</math> into the eigendecomposition problem for <math>T</math>.
+ The Lanczos algorithm is most often brought up in the context of finding the [[eigenvalue]]s and [[eigenvector]]s of a matrix, but whereas an ordinary [[Matrix diagonalization|diagonalization of a matrix]] would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition. If <math>\lambda</math> is an eigenvalue of <math>T</math>, and <math> T x = \lambda x </math> the corresponding eigenvector, then <math> y = V x </math> is the corresponding eigenvector of <math>A</math> with the same eigenvalue: <math> A y = A V x = V T V^* V x = V T I x = V T x = V (\lambda x) = \lambda V x = \lambda y</math>. Thus the Lanczos algorithm transforms the eigendecomposition problem for <math>A</math> into the eigendecomposition problem for <math>T</math>.
  # For tridiagonal matrices, there exist a number of specialised algorithms, often with better computational complexity than general-purpose algorithms. For example, if <math>T</math> is an <math>m \times m</math> tridiagonal symmetric matrix then:
  #* The [[Tridiagonal matrix#Determinant|continuant recursion]] allows computing the [[characteristic polynomial]] in <math>O(m^2)</math> operations, and evaluating it at a point in <math>O(m)</math> operations.

----
Pwesterbaan — 2023-12-15T05:22:12Z — "/* The algorithm */ Changed 1<j<m to 2<j<m since $\beta_1$ doesn't exist"
https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1189979134&oldid=prev

Line 35:
  0 & & & & \beta_m & \alpha_m \\
  \end{pmatrix}</math>.
- :'''Note''' <math> A v_j = w_j' = \beta_{j+1} v_{j+1} + \alpha_j v_j + \beta_j v_{j-1} </math> for <math> 1 < j < m </math>.
+ :'''Note'''
&lt;math&gt; A v_j = w_j' = \beta_{j+1} v_{j+1} + \alpha_j v_j + \beta_j v_{j-1} &lt;/math&gt; for &lt;math&gt; <ins style="font-weight: bold; text-decoration: none;">2</ins> &lt; j &lt; m &lt;/math&gt;.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>There are in principle four ways to write the iteration procedure. Paige and other works show that the above order of operations is the most numerically stable.&lt;ref name="CW1985"&gt;{{Cite book|last1=Cullum |last2= Willoughby|title=Lanczos Algorithms for Large Symmetric Eigenvalue Computations|year= 1985|volume= 1| isbn= 0-8176-3058-9}}&lt;/ref&gt;&lt;ref name="Saad1992"&gt;{{Cite book|author=Yousef Saad|title=Numerical Methods for Large Eigenvalue Problems| isbn= 0-470-21820-7|url= http://www-users.cs.umn.edu/~saad/books.html|author-link=Yousef Saad|date=1992-06-22}}&lt;/ref&gt;</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>There are in principle four ways to write the iteration procedure. 
Paige and other works show that the above order of operations is the most numerically stable.&lt;ref name="CW1985"&gt;{{Cite book|last1=Cullum |last2= Willoughby|title=Lanczos Algorithms for Large Symmetric Eigenvalue Computations|year= 1985|volume= 1| isbn= 0-8176-3058-9}}&lt;/ref&gt;&lt;ref name="Saad1992"&gt;{{Cite book|author=Yousef Saad|title=Numerical Methods for Large Eigenvalue Problems| isbn= 0-470-21820-7|url= http://www-users.cs.umn.edu/~saad/books.html|author-link=Yousef Saad|date=1992-06-22}}&lt;/ref&gt;</div></td> </tr> </table> Pwesterbaan https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1187972119&oldid=prev OAbot: Open access bot: doi updated in citation with #oabot. 2023-12-02T16:53:49Z <p><a href="/wiki/Wikipedia:OABOT" class="mw-redirect" title="Wikipedia:OABOT">Open access bot</a>: doi updated in citation with #oabot.</p> <table style="background-color: #fff; color: #202122;" data-mw="interface"> <col class="diff-marker" /> <col class="diff-content" /> <col class="diff-marker" /> <col class="diff-content" /> <tr class="diff-title" lang="en"> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 16:53, 2 December 2023</td> </tr><tr> <td colspan="2" class="diff-lineno">Line 50:</td> <td colspan="2" class="diff-lineno">Line 50:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>#* The [[Tridiagonal matrix#Determinant|continuant recursion]] allows computing the [[characteristic polynomial]] in &lt;math&gt;O(m^2)&lt;/math&gt; operations, and evaluating it at a point in &lt;math&gt;O(m)&lt;/math&gt; operations.</div></td> <td class="diff-marker"></td> <td 
style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>#* The [[Tridiagonal matrix#Determinant|continuant recursion]] allows computing the [[characteristic polynomial]] in &lt;math&gt;O(m^2)&lt;/math&gt; operations, and evaluating it at a point in &lt;math&gt;O(m)&lt;/math&gt; operations.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>#* The [[divide-and-conquer eigenvalue algorithm]] can be used to compute the entire eigendecomposition of &lt;math&gt;T&lt;/math&gt; in &lt;math&gt;O(m^2)&lt;/math&gt; operations.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>#* The [[divide-and-conquer eigenvalue algorithm]] can be used to compute the entire eigendecomposition of &lt;math&gt;T&lt;/math&gt; in &lt;math&gt;O(m^2)&lt;/math&gt; operations.</div></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>#* The Fast Multipole Method&lt;ref&gt;{{cite journal|last1=Coakley|first1=Ed S.|last2=Rokhlin|first2=Vladimir|title=A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices|journal=Applied and Computational Harmonic Analysis|date=2013|volume=34|issue=3|pages=379–414|doi=10.1016/j.acha.2012.06.003|doi-access=<del style="font-weight: bold; 
text-decoration: none;">free</del>}}&lt;/ref&gt; can compute all eigenvalues in just &lt;math&gt;O(m \log m)&lt;/math&gt; operations.</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>#* The Fast Multipole Method&lt;ref&gt;{{cite journal|last1=Coakley|first1=Ed S.|last2=Rokhlin|first2=Vladimir|title=A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices|journal=Applied and Computational Harmonic Analysis|date=2013|volume=34|issue=3|pages=379–414|doi=10.1016/j.acha.2012.06.003|doi-access=}}&lt;/ref&gt; can compute all eigenvalues in just &lt;math&gt;O(m \log m)&lt;/math&gt; operations.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div># Some general eigendecomposition algorithms, notably the [[QR algorithm]], are known to converge faster for tridiagonal matrices than for general matrices. Asymptotic complexity of tridiagonal QR is &lt;math&gt;O(m^2)&lt;/math&gt; just as for the divide-and-conquer algorithm (though the constant factor may be different); since the eigenvectors together have &lt;math&gt;m^2&lt;/math&gt; elements, this is asymptotically optimal.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div># Some general eigendecomposition algorithms, notably the [[QR algorithm]], are known to converge faster for tridiagonal matrices than for general matrices. 
Asymptotic complexity of tridiagonal QR is &lt;math&gt;O(m^2)&lt;/math&gt; just as for the divide-and-conquer algorithm (though the constant factor may be different); since the eigenvectors together have &lt;math&gt;m^2&lt;/math&gt; elements, this is asymptotically optimal.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div># Even algorithms whose convergence rates are unaffected by unitary transformations, such as the [[power method]] and [[inverse iteration]], may enjoy low-level performance benefits from being applied to the tridiagonal matrix &lt;math&gt;T&lt;/math&gt; rather than the original matrix &lt;math&gt;A&lt;/math&gt;. Since &lt;math&gt;T&lt;/math&gt; is very sparse with all nonzero elements in highly predictable positions, it permits compact storage with excellent performance vis-à-vis [[Cache (computing)|caching]]. Likewise, &lt;math&gt;T&lt;/math&gt; is a [[real number|real]] matrix with all eigenvectors and eigenvalues real, whereas &lt;math&gt;A&lt;/math&gt; in general may have complex elements and eigenvectors, so real arithmetic is sufficient for finding the eigenvectors and eigenvalues of &lt;math&gt;T&lt;/math&gt;.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div># Even algorithms whose convergence rates are unaffected by unitary transformations, such as the [[power method]] and [[inverse iteration]], may enjoy low-level performance benefits from being applied to the tridiagonal matrix &lt;math&gt;T&lt;/math&gt; rather than the original matrix &lt;math&gt;A&lt;/math&gt;. 
Since &lt;math&gt;T&lt;/math&gt; is very sparse with all nonzero elements in highly predictable positions, it permits compact storage with excellent performance vis-à-vis [[Cache (computing)|caching]]. Likewise, &lt;math&gt;T&lt;/math&gt; is a [[real number|real]] matrix with all eigenvectors and eigenvalues real, whereas &lt;math&gt;A&lt;/math&gt; in general may have complex elements and eigenvectors, so real arithmetic is sufficient for finding the eigenvectors and eigenvalues of &lt;math&gt;T&lt;/math&gt;.</div></td> </tr> </table> OAbot https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1162736907&oldid=prev Macrakis: more useful short description 2023-06-30T21:18:56Z <p>more useful short description</p> <table style="background-color: #fff; color: #202122;" data-mw="interface"> <col class="diff-marker" /> <col class="diff-content" /> <col class="diff-marker" /> <col class="diff-content" /> <tr class="diff-title" lang="en"> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 21:18, 30 June 2023</td> </tr><tr> <td colspan="2" class="diff-lineno">Line 1:</td> <td colspan="2" class="diff-lineno">Line 1:</td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>{{Short description|<del style="font-weight: bold; text-decoration: none;">Iterative</del> <del style="font-weight: bold; text-decoration: none;">numerical</del> <del style="font-weight: bold; text-decoration: none;">method</del>}}</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; 
white-space: pre-wrap;"><div>{{Short description|<ins style="font-weight: bold; text-decoration: none;">Numerical</ins> <ins style="font-weight: bold; text-decoration: none;">eigenvalue</ins> <ins style="font-weight: bold; text-decoration: none;">calculation</ins>}}</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the null space-finding algorithm|block Lanczos algorithm}}</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the null space-finding algorithm|block Lanczos algorithm}}</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the interpolation method|Lanczos resampling}}</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the interpolation method|Lanczos resampling}}</div></td> </tr> </table> Macrakis https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1162725230&oldid=prev MichaelMaggs: Adding local short description: "Iterative numerical method", overriding Wikidata description "numerical method for find eigenvalues" 2023-06-30T19:48:43Z <p>Adding local <a href="/wiki/Wikipedia:Short_description" title="Wikipedia:Short description">short description</a>: &quot;Iterative numerical 
method&quot;, overriding Wikidata description &quot;numerical method for find eigenvalues&quot;</p> <table style="background-color: #fff; color: #202122;" data-mw="interface"> <col class="diff-marker" /> <col class="diff-content" /> <col class="diff-marker" /> <col class="diff-content" /> <tr class="diff-title" lang="en"> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 19:48, 30 June 2023</td> </tr><tr> <td colspan="2" class="diff-lineno">Line 1:</td> <td colspan="2" class="diff-lineno">Line 1:</td> </tr> <tr> <td colspan="2" class="diff-empty diff-side-deleted"></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>{{Short description|Iterative numerical method}}</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the null space-finding algorithm|block Lanczos algorithm}}</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the null space-finding algorithm|block Lanczos algorithm}}</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the interpolation method|Lanczos 
resampling}}</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{for|the interpolation method|Lanczos resampling}}</div></td> </tr> </table> MichaelMaggs https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1148037072&oldid=prev LongingLament: /* A more provident power method */ Fix a typo 2023-04-03T17:59:30Z <p><span class="autocomment">A more provident power method: </span> Fix a typo</p> <table style="background-color: #fff; color: #202122;" data-mw="interface"> <col class="diff-marker" /> <col class="diff-content" /> <col class="diff-marker" /> <col class="diff-content" /> <tr class="diff-title" lang="en"> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 17:59, 3 April 2023</td> </tr><tr> <td colspan="2" class="diff-lineno">Line 76:</td> <td colspan="2" class="diff-lineno">Line 76:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:## Let &lt;math&gt; u_{j+1} = u_{j+1}' / \| u_{j+1}' \|.&lt;/math&gt;</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:## Let &lt;math&gt; u_{j+1} = u_{j+1}' / \| u_{j+1}' \|.&lt;/math&gt;</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; 
border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:* In the large &lt;math&gt;j&lt;/math&gt; limit, &lt;math&gt;u_j&lt;/math&gt; approaches the normed eigenvector corresponding to the largest magnitude eigenvalue.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:* In the large &lt;math&gt;j&lt;/math&gt; limit, &lt;math&gt;u_j&lt;/math&gt; approaches the normed eigenvector corresponding to the largest magnitude eigenvalue.</div></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>A critique that can be raised against this method is that it is wasteful: it spends a lot of work (the matrix–vector products in step 2.1) extracting information from the matrix &lt;math&gt;A&lt;/math&gt;, but pays attention only to the very last result; implementations typically use the same variable for all the vectors &lt;math&gt;u_j&lt;/math&gt;, having each new iteration overwrite the results from the previous one. 
It may be desirable to instead <del style="font-weight: bold; text-decoration: none;">keeptl</del> all the intermediate results and organise the data.</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>A critique that can be raised against this method is that it is wasteful: it spends a lot of work (the matrix–vector products in step 2.1) extracting information from the matrix &lt;math&gt;A&lt;/math&gt;, but pays attention only to the very last result; implementations typically use the same variable for all the vectors &lt;math&gt;u_j&lt;/math&gt;, having each new iteration overwrite the results from the previous one. It may be desirable to instead <ins style="font-weight: bold; text-decoration: none;">keep</ins> all the intermediate results and organise the data.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>One piece of information that trivially is available from the vectors &lt;math&gt;u_j&lt;/math&gt; is a chain of [[Krylov subspace]]s. 
One way of stating that without introducing sets into the algorithm is to claim that it computes</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>One piece of information that trivially is available from the vectors &lt;math&gt;u_j&lt;/math&gt; is a chain of [[Krylov subspace]]s. One way of stating that without introducing sets into the algorithm is to claim that it computes</div></td> </tr> </table> LongingLament https://en.wikipedia.org/w/index.php?title=Lanczos_algorithm&diff=1145662458&oldid=prev Cond-mat: added further reading 2023-03-20T08:52:27Z <p>added further reading</p> <table style="background-color: #fff; color: #202122;" data-mw="interface"> <col class="diff-marker" /> <col class="diff-content" /> <col class="diff-marker" /> <col class="diff-content" /> <tr class="diff-title" lang="en"> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 08:52, 20 March 2023</td> </tr><tr> <td colspan="2" class="diff-lineno">Line 298:</td> <td colspan="2" class="diff-lineno">Line 298:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>* {{cite book |first1=Gene H. |last1=Golub |author-link=Gene H. Golub |first2=Charles F. |last2=Van Loan |author-link2=Charles F. 
Van Loan |title=Matrix Computations |location=Baltimore |publisher=Johns Hopkins University Press |year=1996 |isbn=0-8018-5414-8 |pages=470–507 |chapter=Lanczos Methods |chapter-url=https://books.google.com/books?id=mlOa7wPX6OYC&amp;pg=PA470 }}</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>* {{cite book |first1=Gene H. |last1=Golub |author-link=Gene H. Golub |first2=Charles F. |last2=Van Loan |author-link2=Charles F. Van Loan |title=Matrix Computations |location=Baltimore |publisher=Johns Hopkins University Press |year=1996 |isbn=0-8018-5414-8 |pages=470–507 |chapter=Lanczos Methods |chapter-url=https://books.google.com/books?id=mlOa7wPX6OYC&amp;pg=PA470 }}</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>* {{cite journal |first1=Andrew Y. |last1=Ng |author-link=Andrew Ng |first2=Alice X. |last2=Zheng |first3=Michael I. |last3=Jordan |author-link3=Michael I. Jordan |title=Link Analysis, Eigenvectors and Stability |journal=IJCAI'01 Proceedings of the 17th International Joint Conference on Artificial Intelligence |volume=2 |pages=903–910 |date=2001 |url=https://ai.stanford.edu/~ang/papers/ijcai01-linkanalysis.pdf }}</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>* {{cite journal |first1=Andrew Y. |last1=Ng |author-link=Andrew Ng |first2=Alice X. |last2=Zheng |first3=Michael I. |last3=Jordan |author-link3=Michael I. 
Jordan |title=Link Analysis, Eigenvectors and Stability |journal=IJCAI'01 Proceedings of the 17th International Joint Conference on Artificial Intelligence |volume=2 |pages=903–910 |date=2001 |url=https://ai.stanford.edu/~ang/papers/ijcai01-linkanalysis.pdf }}</div></td> </tr> <tr> <td colspan="2" class="diff-empty diff-side-deleted"></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>* {{cite book|author=Erik Koch|chapter-url=http://www.cond-mat.de/events/correl19/manuscripts/koch.pdf|chapter=Exact Diagonalization and Lanczos Method|editor1= E. Pavarini |editor2=E. Koch |editor3=S. Zhang |title= Many-Body Methods for Real Materials|publisher= Jülich |year=2019|isbn=978-3-95806-400-3}}</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{Numerical linear algebra}}</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{Numerical linear 
algebra}}</div></td> </tr> </table> Cond-mat
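The revisions above repeatedly touch the three-term recurrence <math>A v_j = \beta_{j+1} v_{j+1} + \alpha_j v_j + \beta_j v_{j-1}</math> and the order of operations within each iteration (the Cullum 1986 edit summary notes the previous basis vector can be subtracted earlier; Paige's analysis, cited in the article, concerns which ordering is most numerically stable). As a minimal sketch of the procedure being edited, the following NumPy implementation subtracts <math>\beta_j v_{j-1}</math> before computing <math>\alpha_j</math>, and adds an optional full reorthogonalization step, which is not part of the basic algorithm quoted in the diffs but is a common safeguard against the loss of orthogonality the article discusses. Function and variable names here are illustrative, not taken from any reference implementation.

```python
import numpy as np

def lanczos(A, v0, m, reorth=True):
    """Sketch of m steps of the Lanczos iteration on a symmetric matrix A.

    Returns an orthonormal Krylov basis V (columns v_1..v_m) and the
    coefficients alpha (diagonal) and beta (off-diagonal) of the
    tridiagonal matrix T = V^T A V.
    """
    n = A.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v0 / np.linalg.norm(v0)       # normalized starting vector
    for j in range(m):
        w = A @ V[:, j]                      # the one matrix-vector product per step
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]   # subtract previous state first (Cullum-style ordering)
        alpha[j] = V[:, j] @ w               # diagonal coefficient alpha_j = v_j^T A v_j
        w -= alpha[j] * V[:, j]
        if reorth:
            # Full reorthogonalization against all previous v's: an added
            # safeguard, not part of the basic recurrence in the article.
            w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] == 0:                 # exact invariant subspace found; stop early
                V, alpha, beta = V[:, :j + 1], alpha[:j + 1], beta[:j]
                break
            V[:, j + 1] = w / beta[j]        # v_{j+1} = w_j / beta_{j+1}
    return V, alpha, beta
```

Assembling `T` from `alpha` and `beta` and solving its (small, tridiagonal) eigenproblem then illustrates the reduction described in the diffs: if <math>T x = \lambda x</math>, then <math>y = Vx</math> is an approximate eigenvector of <math>A</math> with the same eigenvalue.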