https://en.wikipedia.org/w/index.php?action=history&feed=atom&title=Eigenvalue_algorithm Eigenvalue algorithm - Revision history 2025-05-29T16:40:27Z Revision history for this page on the wiki MediaWiki 1.45.0-wmf.2 https://en.wikipedia.org/w/index.php?title=Eigenvalue_algorithm&diff=1292279373&oldid=prev OAbot: Open access bot: url-access updated in citation with #oabot. 2025-05-26T04:25:11Z <p><a href="/wiki/Wikipedia:OABOT" class="mw-redirect" title="Wikipedia:OABOT">Open access bot</a>: url-access updated in citation with #oabot.</p> <table style="background-color: #fff; color: #202122;" data-mw="interface"> <col class="diff-marker" /> <col class="diff-content" /> <col class="diff-marker" /> <col class="diff-content" /> <tr class="diff-title" lang="en"> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 04:25, 26 May 2025</td> </tr><tr> <td colspan="2" class="diff-lineno">Line 47:</td> <td colspan="2" class="diff-lineno">Line 47:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For the eigenvalue problem, [[Bauer–Fike theorem|Bauer and Fike proved]] that if {{math|''λ''}} is an eigenvalue for a [[Diagonalizable matrix|diagonalizable]] {{math|''n'' × ''n''}} matrix {{math|''A''}} with [[eigenvector matrix]] {{math|''V''}}, then the absolute error in calculating {{math|''λ''}} is bounded by the product of {{math|''κ''(''V'')}} and the absolute error in {{math|''A''}}.&lt;ref&gt;{{Citation | author = F. L. Bauer | author2 = C. T. Fike | title = Norms and exclusion theorems | journal = Numer. Math. 
| volume = 2 | pages = 137–141 | year = 1960 | doi=10.1007/bf01386217| s2cid = 121278235 }}&lt;/ref&gt; [[Bauer-Fike theorem#Corollary|As a result]], the condition number for finding {{math|''λ''}} is {{math|1=''κ''(''λ'', ''A'') = ''κ''(''V'') = {{!!}}''V'' {{!!}}&lt;sub&gt;op&lt;/sub&gt; {{!!}}''V'' &lt;sup&gt;−1&lt;/sup&gt;{{!!}}&lt;sub&gt;op&lt;/sub&gt;}}. If {{math|''A''}} is normal, then {{math|''V''}} is unitary, and {{math|1=''κ''(''λ'', ''A'') = 1}}. Thus the eigenvalue problem for all normal matrices is well-conditioned.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For the eigenvalue problem, [[Bauer–Fike theorem|Bauer and Fike proved]] that if {{math|''λ''}} is an eigenvalue for a [[Diagonalizable matrix|diagonalizable]] {{math|''n'' × ''n''}} matrix {{math|''A''}} with [[eigenvector matrix]] {{math|''V''}}, then the absolute error in calculating {{math|''λ''}} is bounded by the product of {{math|''κ''(''V'')}} and the absolute error in {{math|''A''}}.&lt;ref&gt;{{Citation | author = F. L. Bauer | author2 = C. T. Fike | title = Norms and exclusion theorems | journal = Numer. Math. | volume = 2 | pages = 137–141 | year = 1960 | doi=10.1007/bf01386217| s2cid = 121278235 }}&lt;/ref&gt; [[Bauer-Fike theorem#Corollary|As a result]], the condition number for finding {{math|''λ''}} is {{math|1=''κ''(''λ'', ''A'') = ''κ''(''V'') = {{!!}}''V'' {{!!}}&lt;sub&gt;op&lt;/sub&gt; {{!!}}''V'' &lt;sup&gt;−1&lt;/sup&gt;{{!!}}&lt;sub&gt;op&lt;/sub&gt;}}. If {{math|''A''}} is normal, then {{math|''V''}} is unitary, and {{math|1=''κ''(''λ'', ''A'') = 1}}. 
Thus the eigenvalue problem for all normal matrices is well-conditioned.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>The condition number for the problem of finding the eigenspace of a normal matrix {{math|''A''}} corresponding to an eigenvalue {{math|''λ''}} has been shown to be inversely proportional to the minimum distance between {{math|''λ''}} and the other distinct eigenvalues of {{math|''A''}}.&lt;ref&gt;{{Citation | author = S.C. Eisenstat | author2 = I.C.F. Ipsen | title = Relative Perturbation Results for Eigenvalues and Eigenvectors of Diagonalisable Matrices | journal = BIT | volume = 38 | issue = 3 | pages = 502–9 | year = 1998 | doi=10.1007/bf02510256| s2cid = 119886389 | url = http://www.lib.ncsu.edu/resolver/1840.4/286 }}&lt;/ref&gt; In particular, the eigenspace problem for normal matrices is well-conditioned for isolated eigenvalues. 
When eigenvalues are not isolated, the best that can be hoped for is to identify the span of all eigenvectors of nearby eigenvalues.</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>The condition number for the problem of finding the eigenspace of a normal matrix {{math|''A''}} corresponding to an eigenvalue {{math|''λ''}} has been shown to be inversely proportional to the minimum distance between {{math|''λ''}} and the other distinct eigenvalues of {{math|''A''}}.&lt;ref&gt;{{Citation | author = S.C. Eisenstat | author2 = I.C.F. Ipsen | title = Relative Perturbation Results for Eigenvalues and Eigenvectors of Diagonalisable Matrices | journal = BIT | volume = 38 | issue = 3 | pages = 502–9 | year = 1998 | doi=10.1007/bf02510256| s2cid = 119886389 | url = http://www.lib.ncsu.edu/resolver/1840.4/286<ins style="font-weight: bold; text-decoration: none;"> | url-access = subscription</ins> }}&lt;/ref&gt; In particular, the eigenspace problem for normal matrices is well-conditioned for isolated eigenvalues. 
When eigenvalues are not isolated, the best that can be hoped for is to identify the span of all eigenvectors of nearby eigenvalues.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Algorithms==</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Algorithms==</div></td> </tr> <tr> <td colspan="2" class="diff-lineno">Line 373:</td> <td colspan="2" class="diff-lineno">Line 373:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div> | jstor =</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div> | jstor =</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; 
border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div> | issn =</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div> | issn =</div></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div> | doi = 10.1137/0612005<del style="font-weight: bold; text-decoration: none;">}}</del></div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div> | doi = 10.1137/0612005<ins style="font-weight: bold; text-decoration: none;">| url-access = subscription</ins></div></td> </tr> <tr> <td colspan="2" class="diff-empty diff-side-deleted"></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div> }}</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> 
<td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{Numerical linear algebra}}</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>{{Numerical linear algebra}}</div></td> </tr> </table> OAbot https://en.wikipedia.org/w/index.php?title=Eigenvalue_algorithm&diff=1290929439&oldid=prev Dmoore5556: /* Iterative algorithms */ adjust a wikilink 2025-05-18T00:21:02Z <p><span class="autocomment">Iterative algorithms: </span> adjust a wikilink</p> <table style="background-color: #fff; color: #202122;" data-mw="interface"> <col class="diff-marker" /> <col class="diff-content" /> <col class="diff-marker" /> <col class="diff-content" /> <tr class="diff-title" lang="en"> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 00:21, 18 May 2025</td> </tr><tr> <td colspan="2" class="diff-lineno">Line 174:</td> <td colspan="2" class="diff-lineno">Line 174:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| [[Folded spectrum method]] || real symmetric || eigenpair with value closest to ''μ'' || || || align="left" | Preconditioned inverse iteration applied to {{math|(''A'' − ''μI'')&lt;sup&gt;2&lt;/sup&gt;}}</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; 
border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| [[Folded spectrum method]] || real symmetric || eigenpair with value closest to ''μ'' || || || align="left" | Preconditioned inverse iteration applied to {{math|(''A'' − ''μI'')&lt;sup&gt;2&lt;/sup&gt;}}</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| [[<del style="font-weight: bold; text-decoration: none;">MRRR|</del>MRRR algorithm]]&lt;ref&gt;{{Citation</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| [[MRRR algorithm]]&lt;ref&gt;{{Citation</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| last1=Dhillon</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; 
border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| last1=Dhillon</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| first1=Inderjit S.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| first1=Inderjit S.</div></td> </tr> </table> Dmoore5556 https://en.wikipedia.org/w/index.php?title=Eigenvalue_algorithm&diff=1280188161&oldid=prev Redav: /* Eigenvalues and eigenvectors */ real.l => real. {typo} 2025-03-13T01:55:12Z <p><span class="autocomment">Eigenvalues and eigenvectors: </span> real.l =&gt; real. {typo}</p> <table style="background-color: #fff; color: #202122;" data-mw="interface"> <col class="diff-marker" /> <col class="diff-content" /> <col class="diff-marker" /> <col class="diff-content" /> <tr class="diff-title" lang="en"> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 01:55, 13 March 2025</td> </tr><tr> <td colspan="2" class="diff-lineno">Line 8:</td> <td colspan="2" class="diff-lineno">Line 8:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:&lt;math&gt;\left(A - \lambda I\right)^k {\mathbf v} = 0,&lt;/math&gt;</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 
88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:&lt;math&gt;\left(A - \lambda I\right)^k {\mathbf v} = 0,&lt;/math&gt;</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>where {{math|'''v'''}} is a nonzero {{math|''n'' × 1}} column vector, {{math|''I''}} is the {{math|''n'' × ''n''}} [[identity matrix]], {{math|''k''}} is a positive integer, and both {{math|''λ''}} and {{math|'''v'''}} are allowed to be complex even when {{math|''A''}} is real.<del style="font-weight: bold; text-decoration: none;">l</del> When {{math|1=''k'' = 1}}, the vector is called simply an ''[[eigenvector]]'', and the pair is called an ''eigenpair''. In this case, {{math|1=''A'''''v''' = ''λ'''''v'''}}. 
Any eigenvalue {{math|''λ''}} of {{math|''A''}} has ordinary&lt;ref group="note"&gt;The term "ordinary" is used here only to emphasize the distinction between "eigenvector" and "generalized eigenvector".&lt;/ref&gt; eigenvectors associated to it, for if {{math|''k''}} is the smallest integer such that {{math|1=(''A'' − ''λI'')&lt;sup&gt;''k''&lt;/sup&gt; '''v''' = 0}} for a generalized eigenvector {{math|'''v'''}}, then {{math|1=(''A'' − ''λI'')&lt;sup&gt;''k''−1&lt;/sup&gt; '''v'''}} is an ordinary eigenvector. The value {{math|''k''}} can always be taken as less than or equal to {{math|''n''}}. In particular, {{math|1=(''A'' − ''λI'')&lt;sup&gt;''n''&lt;/sup&gt; '''v''' = 0}} for all generalized eigenvectors {{math|'''v'''}} associated with {{math|''λ''}}.</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>where {{math|'''v'''}} is a nonzero {{math|''n'' × 1}} column vector, {{math|''I''}} is the {{math|''n'' × ''n''}} [[identity matrix]], {{math|''k''}} is a positive integer, and both {{math|''λ''}} and {{math|'''v'''}} are allowed to be complex even when {{math|''A''}} is real. When {{math|1=''k'' = 1}}, the vector is called simply an ''[[eigenvector]]'', and the pair is called an ''eigenpair''. In this case, {{math|1=''A'''''v''' = ''λ'''''v'''}}. Any eigenvalue {{math|''λ''}} of {{math|''A''}} has ordinary&lt;ref group="note"&gt;The term "ordinary" is used here only to emphasize the distinction between "eigenvector" and "generalized eigenvector".&lt;/ref&gt; eigenvectors associated to it, for if {{math|''k''}} is the smallest integer such that {{math|1=(''A'' − ''λI'')&lt;sup&gt;''k''&lt;/sup&gt; '''v''' = 0}} for a generalized eigenvector {{math|'''v'''}}, then {{math|1=(''A'' − ''λI'')&lt;sup&gt;''k''−1&lt;/sup&gt; '''v'''}} is an ordinary eigenvector. 
The value {{math|''k''}} can always be taken as less than or equal to {{math|''n''}}. In particular, {{math|1=(''A'' − ''λI'')&lt;sup&gt;''n''&lt;/sup&gt; '''v''' = 0}} for all generalized eigenvectors {{math|'''v'''}} associated with {{math|''λ''}}.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For each eigenvalue {{math|λ}} of {{math|''A''}}, the [[kernel (matrix)|kernel]] {{math|ker(''A'' − ''λI'')}} consists of all eigenvectors associated with {{math|''λ''}} (along with 0), called the ''[[eigenspace]]'' of {{math|''λ''}}, while the vector space {{math|ker((''A'' − ''λI'')&lt;sup&gt;''n''&lt;/sup&gt;)}} consists of all generalized eigenvectors, and is called the ''[[generalized eigenspace]]''. The ''[[geometric multiplicity]]'' of {{math|''λ''}} is the dimension of its eigenspace. The ''[[algebraic multiplicity]]'' of {{math|''λ''}} is the dimension of its generalized eigenspace. 
The latter terminology is justified by the equation</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For each eigenvalue {{math|λ}} of {{math|''A''}}, the [[kernel (matrix)|kernel]] {{math|ker(''A'' − ''λI'')}} consists of all eigenvectors associated with {{math|''λ''}} (along with 0), called the ''[[eigenspace]]'' of {{math|''λ''}}, while the vector space {{math|ker((''A'' − ''λI'')&lt;sup&gt;''n''&lt;/sup&gt;)}} consists of all generalized eigenvectors, and is called the ''[[generalized eigenspace]]''. The ''[[geometric multiplicity]]'' of {{math|''λ''}} is the dimension of its eigenspace. The ''[[algebraic multiplicity]]'' of {{math|''λ''}} is the dimension of its generalized eigenspace. The latter terminology is justified by the equation</div></td> </tr> </table> Redav https://en.wikipedia.org/w/index.php?title=Eigenvalue_algorithm&diff=1277566623&oldid=prev 2001:67C:10EC:578C:8000:0:0:E0E: /* Algorithms */ 2025-02-25T12:02:14Z <p><span class="autocomment">Algorithms</span></p> <table style="background-color: #fff; color: #202122;" data-mw="interface"> <col class="diff-marker" /> <col class="diff-content" /> <col class="diff-marker" /> <col class="diff-content" /> <tr class="diff-title" lang="en"> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 12:02, 25 February 2025</td> </tr><tr> <td colspan="2" class="diff-lineno">Line 51:</td> <td colspan="2" class="diff-lineno">Line 51:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; 
vertical-align: top; white-space: pre-wrap;"><div>==Algorithms==</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Algorithms==</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>The most reliable and most widely used algorithm for computing eigenvalues is [[John G. F. Francis]]' [[QR algorithm]], considered one of the top ten algorithms of 20th century.&lt;ref name="t10"&gt;{{cite journal |last1=J. Dongarra and F. Sullivan |title=Top ten algorithms of the century |journal=Computing in Science and Engineering |date=2000 |volume=2 |page=22-23|doi=10.1109/MCISE.2000.814652 }}&lt;/ref&gt;</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>The most reliable and most widely used algorithm for computing eigenvalues is [[John G. F. Francis]]'<ins style="font-weight: bold; text-decoration: none;"> and [[Vera N. 
Kublanovskaya]]'s</ins> [[QR algorithm]], considered one of the top ten algorithms of 20th century.&lt;ref name="t10"&gt;{{cite journal |last1=J. Dongarra and F. Sullivan |title=Top ten algorithms of the century |journal=Computing in Science and Engineering |date=2000 |volume=2 |page=22-23|doi=10.1109/MCISE.2000.814652 }}&lt;/ref&gt;</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Any monic polynomial is the characteristic polynomial of its [[companion matrix]]. Therefore, a general algorithm for finding eigenvalues could also be used to find the roots of polynomials. The [[Abel–Ruffini theorem]] shows that any such algorithm for dimensions greater than 4 must either be infinite, or involve functions of greater complexity than elementary arithmetic operations and fractional powers. For this reason algorithms that exactly calculate eigenvalues in a finite number of steps only exist for a few special classes of matrices. 
For general matrices, algorithms are [[Iterative method|iterative]], producing better approximate solutions with each iteration.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Any monic polynomial is the characteristic polynomial of its [[companion matrix]]. Therefore, a general algorithm for finding eigenvalues could also be used to find the roots of polynomials. The [[Abel–Ruffini theorem]] shows that any such algorithm for dimensions greater than 4 must either be infinite, or involve functions of greater complexity than elementary arithmetic operations and fractional powers. For this reason algorithms that exactly calculate eigenvalues in a finite number of steps only exist for a few special classes of matrices. For general matrices, algorithms are [[Iterative method|iterative]], producing better approximate solutions with each iteration.</div></td> </tr> </table> 2001:67C:10EC:578C:8000:0:0:E0E https://en.wikipedia.org/w/index.php?title=Eigenvalue_algorithm&diff=1264243280&oldid=prev Citation bot: Added pages. | Use this bot. Report bugs. | Suggested by Dominic3203 | Category:Numerical linear algebra | #UCB_Category 80/123 2024-12-21T05:54:48Z <p>Added pages. | <a href="/wiki/Wikipedia:UCB" class="mw-redirect" title="Wikipedia:UCB">Use this bot</a>. <a href="/wiki/Wikipedia:DBUG" class="mw-redirect" title="Wikipedia:DBUG">Report bugs</a>. 
| Suggested by Dominic3203 | <a href="/wiki/Category:Numerical_linear_algebra" title="Category:Numerical linear algebra">Category:Numerical linear algebra</a> | #UCB_Category 80/123</p> <table style="background-color: #fff; color: #202122;" data-mw="interface"> <col class="diff-marker" /> <col class="diff-content" /> <col class="diff-marker" /> <col class="diff-content" /> <tr class="diff-title" lang="en"> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 05:54, 21 December 2024</td> </tr><tr> <td colspan="2" class="diff-lineno">Line 204:</td> <td colspan="2" class="diff-lineno">Line 204:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| journal=Proceedings of the 40th International Conference on Machine Learning</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| journal=Proceedings of the 40th International Conference on Machine Learning</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| year=2023</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| year=2023</div></td> 
</tr> <tr> <td colspan="2" class="diff-empty diff-side-deleted"></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| pages=7513–7532</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| url=https://proceedings.mlr.press/v202/delattre23a.html</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| url=https://proceedings.mlr.press/v202/delattre23a.html</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>}}&lt;/ref&gt; || general || Eigenpair with largest eigenvalue || || super-linear || align="left" |Repeatedly computes the Gram product and rescales, deterministically.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>}}&lt;/ref&gt; || general || Eigenpair with largest eigenvalue || || super-linear || align="left" |Repeatedly computes the Gram product and rescales, deterministically.</div></td> </tr> </table> Citation bot https://en.wikipedia.org/w/index.php?title=Eigenvalue_algorithm&diff=1240989469&oldid=prev FrescoBot: Bot: link syntax 
2024-08-18T17:55:46Z <p>Bot: <a href="/wiki/User:FrescoBot/Links" class="mw-redirect" title="User:FrescoBot/Links">link syntax</a></p> <table style="background-color: #fff; color: #202122;" data-mw="interface"> <col class="diff-marker" /> <col class="diff-content" /> <col class="diff-marker" /> <col class="diff-content" /> <tr class="diff-title" lang="en"> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 17:55, 18 August 2024</td> </tr><tr> <td colspan="2" class="diff-lineno">Line 43:</td> <td colspan="2" class="diff-lineno">Line 43:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Any problem of numeric calculation can be viewed as the evaluation of some function {{math|''f''}} for some input {{math|''x''}}. The [[condition number]] {{math|''κ''(''f'', ''x'')}} of the problem is the ratio of the relative error in the function's output to the relative error in the input, and varies with both the function and the input. The condition number describes how error grows during the calculation. Its base-10 logarithm tells how many fewer digits of accuracy exist in the result than existed in the input. The condition number is a best-case scenario. It reflects the instability built into the problem, regardless of how it is solved. No algorithm can ever produce more accurate results than indicated by the condition number, except by chance. However, a poorly designed algorithm may produce significantly worse results. For example, as mentioned below, the problem of finding eigenvalues for normal matrices is always well-conditioned. 
However, the problem of finding the roots of a polynomial can be [[Wilkinson's polynomial|very ill-conditioned]]. Thus eigenvalue algorithms that work by finding the roots of the characteristic polynomial can be ill-conditioned even when the problem is not.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Any problem of numeric calculation can be viewed as the evaluation of some function {{math|''f''}} for some input {{math|''x''}}. The [[condition number]] {{math|''κ''(''f'', ''x'')}} of the problem is the ratio of the relative error in the function's output to the relative error in the input, and varies with both the function and the input. The condition number describes how error grows during the calculation. Its base-10 logarithm tells how many fewer digits of accuracy exist in the result than existed in the input. The condition number is a best-case scenario. It reflects the instability built into the problem, regardless of how it is solved. No algorithm can ever produce more accurate results than indicated by the condition number, except by chance. However, a poorly designed algorithm may produce significantly worse results. For example, as mentioned below, the problem of finding eigenvalues for normal matrices is always well-conditioned. However, the problem of finding the roots of a polynomial can be [[Wilkinson's polynomial|very ill-conditioned]]. 
Thus eigenvalue algorithms that work by finding the roots of the characteristic polynomial can be ill-conditioned even when the problem is not.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>For the problem of solving the linear equation {{math|1=''A'''''v''' = '''b'''}} where {{math|''A''}} is invertible, the [[Condition number#Matrices|matrix condition number]] {{math|1=''κ''(''A''&lt;sup&gt;−1&lt;/sup&gt;, '''b''')}} is given by {{math|1={{!!}}''A''{{!!}}&lt;sub&gt;op&lt;/sub&gt;{{!!}}''A''&lt;sup&gt;−1&lt;/sup&gt;{{!!}}&lt;sub&gt;op&lt;/sub&gt;}}, where {{nowrap|{{!!}} {{!!}}&lt;sub&gt;op&lt;/sub&gt;}} is the [[operator norm]] subordinate to the normal [[Norm (mathematics)#Euclidean norm|Euclidean norm]] on {{math|'''C'''&lt;sup&gt;''n''&lt;/sup&gt;}}. Since this number is independent of {{math|'''b'''}} and is the same for {{math|''A''}} and {{math|''A''&lt;sup&gt;−1&lt;/sup&gt;}}, it is usually just called the condition number {{math|''κ''(''A'')}} of the matrix {{math|''A''}}. This value {{math|''κ''(''A'')}} is also the absolute value of the ratio of the largest [[<del style="font-weight: bold; text-decoration: none;">Singular_value|</del>singular value]] of {{math|''A''}} to its smallest. 
If {{math|''A''}} is [[Unitary matrix|unitary]], then {{math|1={{!!}}''A''{{!!}}&lt;sub&gt;op&lt;/sub&gt; = {{!!}}''A''&lt;sup&gt;−1&lt;/sup&gt;{{!!}}&lt;sub&gt;op&lt;/sub&gt; = 1}}, so {{math|1=''κ''(''A'') = 1}}. For general matrices, the operator norm is often difficult to calculate. For this reason, other [[matrix norms]] are commonly used to estimate the condition number.</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>For the problem of solving the linear equation {{math|1=''A'''''v''' = '''b'''}} where {{math|''A''}} is invertible, the [[Condition number#Matrices|matrix condition number]] {{math|1=''κ''(''A''&lt;sup&gt;−1&lt;/sup&gt;, '''b''')}} is given by {{math|1={{!!}}''A''{{!!}}&lt;sub&gt;op&lt;/sub&gt;{{!!}}''A''&lt;sup&gt;−1&lt;/sup&gt;{{!!}}&lt;sub&gt;op&lt;/sub&gt;}}, where {{nowrap|{{!!}} {{!!}}&lt;sub&gt;op&lt;/sub&gt;}} is the [[operator norm]] subordinate to the normal [[Norm (mathematics)#Euclidean norm|Euclidean norm]] on {{math|'''C'''&lt;sup&gt;''n''&lt;/sup&gt;}}. Since this number is independent of {{math|'''b'''}} and is the same for {{math|''A''}} and {{math|''A''&lt;sup&gt;−1&lt;/sup&gt;}}, it is usually just called the condition number {{math|''κ''(''A'')}} of the matrix {{math|''A''}}. This value {{math|''κ''(''A'')}} is also the absolute value of the ratio of the largest [[singular value]] of {{math|''A''}} to its smallest. If {{math|''A''}} is [[Unitary matrix|unitary]], then {{math|1={{!!}}''A''{{!!}}&lt;sub&gt;op&lt;/sub&gt; = {{!!}}''A''&lt;sup&gt;−1&lt;/sup&gt;{{!!}}&lt;sub&gt;op&lt;/sub&gt; = 1}}, so {{math|1=''κ''(''A'') = 1}}. For general matrices, the operator norm is often difficult to calculate. 
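The condition-number discussion above is easy to check numerically. The sketch below (using NumPy, which is an assumption of this example, not something the article specifies) computes κ(A) as the ratio of the largest to the smallest singular value and confirms that a unitary matrix has κ(A) = 1 while a nearly singular matrix is badly conditioned:

```python
import numpy as np

def condition_number(A):
    """kappa(A) = largest singular value / smallest singular value,
    which equals ||A||_op * ||A^-1||_op for invertible A."""
    s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
    return s[0] / s[-1]

# A unitary matrix (here a plane rotation) is perfectly conditioned.
theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(condition_number(Q))   # ~1.0

# A nearly singular matrix is ill-conditioned.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-8]])
print(condition_number(A))   # very large
```

`numpy.linalg.cond(A)` computes the same 2-norm condition number directly; the explicit SVD form is shown only to mirror the singular-value ratio stated in the text.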
For this reason, other [[matrix norms]] are commonly used to estimate the condition number.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For the eigenvalue problem, [[Bauer–Fike theorem|Bauer and Fike proved]] that if {{math|''λ''}} is an eigenvalue for a [[Diagonalizable matrix|diagonalizable]] {{math|''n'' × ''n''}} matrix {{math|''A''}} with [[eigenvector matrix]] {{math|''V''}}, then the absolute error in calculating {{math|''λ''}} is bounded by the product of {{math|''κ''(''V'')}} and the absolute error in {{math|''A''}}.&lt;ref&gt;{{Citation | author = F. L. Bauer | author2 = C. T. Fike | title = Norms and exclusion theorems | journal = Numer. Math. | volume = 2 | pages = 137–141 | year = 1960 | doi=10.1007/bf01386217| s2cid = 121278235 }}&lt;/ref&gt; [[Bauer-Fike theorem#Corollary|As a result]], the condition number for finding {{math|''λ''}} is {{math|1=''κ''(''λ'', ''A'') = ''κ''(''V'') = {{!!}}''V'' {{!!}}&lt;sub&gt;op&lt;/sub&gt; {{!!}}''V'' &lt;sup&gt;−1&lt;/sup&gt;{{!!}}&lt;sub&gt;op&lt;/sub&gt;}}. If {{math|''A''}} is normal, then {{math|''V''}} is unitary, and {{math|1=''κ''(''λ'', ''A'') = 1}}. 
Thus the eigenvalue problem for all normal matrices is well-conditioned.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For the eigenvalue problem, [[Bauer–Fike theorem|Bauer and Fike proved]] that if {{math|''λ''}} is an eigenvalue for a [[Diagonalizable matrix|diagonalizable]] {{math|''n'' × ''n''}} matrix {{math|''A''}} with [[eigenvector matrix]] {{math|''V''}}, then the absolute error in calculating {{math|''λ''}} is bounded by the product of {{math|''κ''(''V'')}} and the absolute error in {{math|''A''}}.&lt;ref&gt;{{Citation | author = F. L. Bauer | author2 = C. T. Fike | title = Norms and exclusion theorems | journal = Numer. Math. | volume = 2 | pages = 137–141 | year = 1960 | doi=10.1007/bf01386217| s2cid = 121278235 }}&lt;/ref&gt; [[Bauer-Fike theorem#Corollary|As a result]], the condition number for finding {{math|''λ''}} is {{math|1=''κ''(''λ'', ''A'') = ''κ''(''V'') = {{!!}}''V'' {{!!}}&lt;sub&gt;op&lt;/sub&gt; {{!!}}''V'' &lt;sup&gt;−1&lt;/sup&gt;{{!!}}&lt;sub&gt;op&lt;/sub&gt;}}. If {{math|''A''}} is normal, then {{math|''V''}} is unitary, and {{math|1=''κ''(''λ'', ''A'') = 1}}. Thus the eigenvalue problem for all normal matrices is well-conditioned.</div></td> </tr> </table> FrescoBot https://en.wikipedia.org/w/index.php?title=Eigenvalue_algorithm&diff=1236392471&oldid=prev 164.73.163.20: corrected mistake in condition number definition: it's the ratio of the largest sigular value of A to the smallest. }it said "eigenvalue" instead of "singular value" 2024-07-24T12:52:52Z <p>corrected mistake in condition number definition: it&#039;s the ratio of the largest sigular value of A to the smallest. 
}it said &quot;eigenvalue&quot; instead of &quot;singular value&quot;</p> <table style="background-color: #fff; color: #202122;" data-mw="interface"> <col class="diff-marker" /> <col class="diff-content" /> <col class="diff-marker" /> <col class="diff-content" /> <tr class="diff-title" lang="en"> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 12:52, 24 July 2024</td> </tr><tr> <td colspan="2" class="diff-lineno">Line 8:</td> <td colspan="2" class="diff-lineno">Line 8:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:&lt;math&gt;\left(A - \lambda I\right)^k {\mathbf v} = 0,&lt;/math&gt;</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>:&lt;math&gt;\left(A - \lambda I\right)^k {\mathbf v} = 0,&lt;/math&gt;</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 
0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>where {{math|'''v'''}} is a nonzero {{math|''n'' × 1}} column vector, {{math|''I''}} is the {{math|''n'' × ''n''}} [[identity matrix]], {{math|''k''}} is a positive integer, and both {{math|''λ''}} and {{math|'''v'''}} are allowed to be complex even when {{math|''A''}} is real. When {{math|1=''k'' = 1}}, the vector is called simply an ''[[eigenvector]]'', and the pair is called an ''eigenpair''. In this case, {{math|1=''A'''''v''' = ''λ'''''v'''}}. Any eigenvalue {{math|''λ''}} of {{math|''A''}} has ordinary&lt;ref group="note"&gt;The term "ordinary" is used here only to emphasize the distinction between "eigenvector" and "generalized eigenvector".&lt;/ref&gt; eigenvectors associated to it, for if {{math|''k''}} is the smallest integer such that {{math|1=(''A'' − ''λI'')&lt;sup&gt;''k''&lt;/sup&gt; '''v''' = 0}} for a generalized eigenvector {{math|'''v'''}}, then {{math|1=(''A'' − ''λI'')&lt;sup&gt;''k''−1&lt;/sup&gt; '''v'''}} is an ordinary eigenvector. The value {{math|''k''}} can always be taken as less than or equal to {{math|''n''}}. 
In particular, {{math|1=(''A'' − ''λI'')&lt;sup&gt;''n''&lt;/sup&gt; '''v''' = 0}} for all generalized eigenvectors {{math|'''v'''}} associated with {{math|''λ''}}.</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>where {{math|'''v'''}} is a nonzero {{math|''n'' × 1}} column vector, {{math|''I''}} is the {{math|''n'' × ''n''}} [[identity matrix]], {{math|''k''}} is a positive integer, and both {{math|''λ''}} and {{math|'''v'''}} are allowed to be complex even when {{math|''A''}} is real.<ins style="font-weight: bold; text-decoration: none;">l</ins> When {{math|1=''k'' = 1}}, the vector is called simply an ''[[eigenvector]]'', and the pair is called an ''eigenpair''. In this case, {{math|1=''A'''''v''' = ''λ'''''v'''}}. Any eigenvalue {{math|''λ''}} of {{math|''A''}} has ordinary&lt;ref group="note"&gt;The term "ordinary" is used here only to emphasize the distinction between "eigenvector" and "generalized eigenvector".&lt;/ref&gt; eigenvectors associated to it, for if {{math|''k''}} is the smallest integer such that {{math|1=(''A'' − ''λI'')&lt;sup&gt;''k''&lt;/sup&gt; '''v''' = 0}} for a generalized eigenvector {{math|'''v'''}}, then {{math|1=(''A'' − ''λI'')&lt;sup&gt;''k''−1&lt;/sup&gt; '''v'''}} is an ordinary eigenvector. The value {{math|''k''}} can always be taken as less than or equal to {{math|''n''}}. 
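The relationship between generalized and ordinary eigenvectors described above can be seen concretely on a Jordan block; a minimal sketch, assuming NumPy:

```python
import numpy as np

# A 3x3 Jordan block: eigenvalue 2 with algebraic multiplicity 3
# but geometric multiplicity 1.
lam = 2.0
A = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])
N = A - lam * np.eye(3)               # nilpotent part A - lam*I

v = np.array([0.0, 0.0, 1.0])         # generalized eigenvector with k = 3
assert np.allclose(N @ N @ N @ v, 0)  # (A - lam I)^3 v = 0
assert not np.allclose(N @ N @ v, 0)  # k = 3 is the smallest such power

w = N @ N @ v                         # (A - lam I)^(k-1) v ...
assert np.allclose(A @ w, lam * w)    # ... is an ordinary eigenvector
```

Here k = 3 ≤ n = 3, matching the bound stated in the text.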
In particular, {{math|1=(''A'' − ''λI'')&lt;sup&gt;''n''&lt;/sup&gt; '''v''' = 0}} for all generalized eigenvectors {{math|'''v'''}} associated with {{math|''λ''}}.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For each eigenvalue {{math|λ}} of {{math|''A''}}, the [[kernel (matrix)|kernel]] {{math|ker(''A'' − ''λI'')}} consists of all eigenvectors associated with {{math|''λ''}} (along with 0), called the ''[[eigenspace]]'' of {{math|''λ''}}, while the vector space {{math|ker((''A'' − ''λI'')&lt;sup&gt;''n''&lt;/sup&gt;)}} consists of all generalized eigenvectors, and is called the ''[[generalized eigenspace]]''. The ''[[geometric multiplicity]]'' of {{math|''λ''}} is the dimension of its eigenspace. The ''[[algebraic multiplicity]]'' of {{math|''λ''}} is the dimension of its generalized eigenspace. 
The latter terminology is justified by the equation</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For each eigenvalue {{math|λ}} of {{math|''A''}}, the [[kernel (matrix)|kernel]] {{math|ker(''A'' − ''λI'')}} consists of all eigenvectors associated with {{math|''λ''}} (along with 0), called the ''[[eigenspace]]'' of {{math|''λ''}}, while the vector space {{math|ker((''A'' − ''λI'')&lt;sup&gt;''n''&lt;/sup&gt;)}} consists of all generalized eigenvectors, and is called the ''[[generalized eigenspace]]''. The ''[[geometric multiplicity]]'' of {{math|''λ''}} is the dimension of its eigenspace. The ''[[algebraic multiplicity]]'' of {{math|''λ''}} is the dimension of its generalized eigenspace. The latter terminology is justified by the equation</div></td> </tr> <tr> <td colspan="2" class="diff-lineno">Line 43:</td> <td colspan="2" class="diff-lineno">Line 43:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Any problem of numeric calculation can be viewed as the evaluation of some function {{math|''f''}} for some input {{math|''x''}}. The [[condition number]] {{math|''κ''(''f'', ''x'')}} of the problem is the ratio of the relative error in the function's output to the relative error in the input, and varies with both the function and the input. The condition number describes how error grows during the calculation. Its base-10 logarithm tells how many fewer digits of accuracy exist in the result than existed in the input. The condition number is a best-case scenario. It reflects the instability built into the problem, regardless of how it is solved. 
No algorithm can ever produce more accurate results than indicated by the condition number, except by chance. However, a poorly designed algorithm may produce significantly worse results. For example, as mentioned below, the problem of finding eigenvalues for normal matrices is always well-conditioned. However, the problem of finding the roots of a polynomial can be [[Wilkinson's polynomial|very ill-conditioned]]. Thus eigenvalue algorithms that work by finding the roots of the characteristic polynomial can be ill-conditioned even when the problem is not.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Any problem of numeric calculation can be viewed as the evaluation of some function {{math|''f''}} for some input {{math|''x''}}. The [[condition number]] {{math|''κ''(''f'', ''x'')}} of the problem is the ratio of the relative error in the function's output to the relative error in the input, and varies with both the function and the input. The condition number describes how error grows during the calculation. Its base-10 logarithm tells how many fewer digits of accuracy exist in the result than existed in the input. The condition number is a best-case scenario. It reflects the instability built into the problem, regardless of how it is solved. No algorithm can ever produce more accurate results than indicated by the condition number, except by chance. However, a poorly designed algorithm may produce significantly worse results. For example, as mentioned below, the problem of finding eigenvalues for normal matrices is always well-conditioned. However, the problem of finding the roots of a polynomial can be [[Wilkinson's polynomial|very ill-conditioned]]. 
Thus eigenvalue algorithms that work by finding the roots of the characteristic polynomial can be ill-conditioned even when the problem is not.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>For the problem of solving the linear equation {{math|1=''A'''''v''' = '''b'''}} where {{math|''A''}} is invertible, the [[Condition number#Matrices|matrix condition number]] {{math|1=''κ''(''A''&lt;sup&gt;−1&lt;/sup&gt;, '''b''')}} is given by {{math|1={{!!}}''A''{{!!}}&lt;sub&gt;op&lt;/sub&gt;{{!!}}''A''&lt;sup&gt;−1&lt;/sup&gt;{{!!}}&lt;sub&gt;op&lt;/sub&gt;}}, where {{nowrap|{{!!}} {{!!}}&lt;sub&gt;op&lt;/sub&gt;}} is the [[operator norm]] subordinate to the normal [[Norm (mathematics)#Euclidean norm|Euclidean norm]] on {{math|'''C'''&lt;sup&gt;''n''&lt;/sup&gt;}}. Since this number is independent of {{math|'''b'''}} and is the same for {{math|''A''}} and {{math|''A''&lt;sup&gt;−1&lt;/sup&gt;}}, it is usually just called the condition number {{math|''κ''(''A'')}} of the matrix {{math|''A''}}. This value {{math|''κ''(''A'')}} is also the absolute value of the ratio of the largest <del style="font-weight: bold; text-decoration: none;">eigenvalue</del> of {{math|''A''}} to its smallest. 
If {{math|''A''}} is [[Unitary matrix|unitary]], then {{math|1={{!!}}''A''{{!!}}&lt;sub&gt;op&lt;/sub&gt; = {{!!}}''A''&lt;sup&gt;−1&lt;/sup&gt;{{!!}}&lt;sub&gt;op&lt;/sub&gt; = 1}}, so {{math|1=''κ''(''A'') = 1}}. For general matrices, the operator norm is often difficult to calculate. For this reason, other [[matrix norms]] are commonly used to estimate the condition number.</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>For the problem of solving the linear equation {{math|1=''A'''''v''' = '''b'''}} where {{math|''A''}} is invertible, the [[Condition number#Matrices|matrix condition number]] {{math|1=''κ''(''A''&lt;sup&gt;−1&lt;/sup&gt;, '''b''')}} is given by {{math|1={{!!}}''A''{{!!}}&lt;sub&gt;op&lt;/sub&gt;{{!!}}''A''&lt;sup&gt;−1&lt;/sup&gt;{{!!}}&lt;sub&gt;op&lt;/sub&gt;}}, where {{nowrap|{{!!}} {{!!}}&lt;sub&gt;op&lt;/sub&gt;}} is the [[operator norm]] subordinate to the normal [[Norm (mathematics)#Euclidean norm|Euclidean norm]] on {{math|'''C'''&lt;sup&gt;''n''&lt;/sup&gt;}}. Since this number is independent of {{math|'''b'''}} and is the same for {{math|''A''}} and {{math|''A''&lt;sup&gt;−1&lt;/sup&gt;}}, it is usually just called the condition number {{math|''κ''(''A'')}} of the matrix {{math|''A''}}. This value {{math|''κ''(''A'')}} is also the absolute value of the ratio of the largest <ins style="font-weight: bold; text-decoration: none;">[[Singular_value|singular value]]</ins> of {{math|''A''}} to its smallest. If {{math|''A''}} is [[Unitary matrix|unitary]], then {{math|1={{!!}}''A''{{!!}}&lt;sub&gt;op&lt;/sub&gt; = {{!!}}''A''&lt;sup&gt;−1&lt;/sup&gt;{{!!}}&lt;sub&gt;op&lt;/sub&gt; = 1}}, so {{math|1=''κ''(''A'') = 1}}. For general matrices, the operator norm is often difficult to calculate. 
For this reason, other [[matrix norms]] are commonly used to estimate the condition number.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For the eigenvalue problem, [[Bauer–Fike theorem|Bauer and Fike proved]] that if {{math|''λ''}} is an eigenvalue for a [[Diagonalizable matrix|diagonalizable]] {{math|''n'' × ''n''}} matrix {{math|''A''}} with [[eigenvector matrix]] {{math|''V''}}, then the absolute error in calculating {{math|''λ''}} is bounded by the product of {{math|''κ''(''V'')}} and the absolute error in {{math|''A''}}.&lt;ref&gt;{{Citation | author = F. L. Bauer | author2 = C. T. Fike | title = Norms and exclusion theorems | journal = Numer. Math. | volume = 2 | pages = 137–141 | year = 1960 | doi=10.1007/bf01386217| s2cid = 121278235 }}&lt;/ref&gt; [[Bauer-Fike theorem#Corollary|As a result]], the condition number for finding {{math|''λ''}} is {{math|1=''κ''(''λ'', ''A'') = ''κ''(''V'') = {{!!}}''V'' {{!!}}&lt;sub&gt;op&lt;/sub&gt; {{!!}}''V'' &lt;sup&gt;−1&lt;/sup&gt;{{!!}}&lt;sub&gt;op&lt;/sub&gt;}}. If {{math|''A''}} is normal, then {{math|''V''}} is unitary, and {{math|1=''κ''(''λ'', ''A'') = 1}}. 
Thus the eigenvalue problem for all normal matrices is well-conditioned.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>For the eigenvalue problem, [[Bauer–Fike theorem|Bauer and Fike proved]] that if {{math|''λ''}} is an eigenvalue for a [[Diagonalizable matrix|diagonalizable]] {{math|''n'' × ''n''}} matrix {{math|''A''}} with [[eigenvector matrix]] {{math|''V''}}, then the absolute error in calculating {{math|''λ''}} is bounded by the product of {{math|''κ''(''V'')}} and the absolute error in {{math|''A''}}.&lt;ref&gt;{{Citation | author = F. L. Bauer | author2 = C. T. Fike | title = Norms and exclusion theorems | journal = Numer. Math. | volume = 2 | pages = 137–141 | year = 1960 | doi=10.1007/bf01386217| s2cid = 121278235 }}&lt;/ref&gt; [[Bauer-Fike theorem#Corollary|As a result]], the condition number for finding {{math|''λ''}} is {{math|1=''κ''(''λ'', ''A'') = ''κ''(''V'') = {{!!}}''V'' {{!!}}&lt;sub&gt;op&lt;/sub&gt; {{!!}}''V'' &lt;sup&gt;−1&lt;/sup&gt;{{!!}}&lt;sub&gt;op&lt;/sub&gt;}}. If {{math|''A''}} is normal, then {{math|''V''}} is unitary, and {{math|1=''κ''(''λ'', ''A'') = 1}}. 
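For a normal matrix (here a real symmetric one), the Bauer–Fike bound with κ(V) = 1 says the computed eigenvalues move by no more than the operator norm of the perturbation of A. A quick numerical check, assuming NumPy (for Hermitian matrices the sorted, pairwise form of this bound follows from Weyl's inequality, a strengthening of the Bauer–Fike corollary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                     # symmetric, hence normal: kappa(V) = 1

E = 1e-6 * rng.standard_normal((n, n))
E = (E + E.T) / 2                     # small symmetric perturbation

lam = np.sort(np.linalg.eigvalsh(A))
mu = np.sort(np.linalg.eigvalsh(A + E))

# Each eigenvalue moves by at most ||E||_op (the 2-norm of E).
assert np.max(np.abs(lam - mu)) <= np.linalg.norm(E, 2) + 1e-12
```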
Thus the eigenvalue problem for all normal matrices is well-conditioned.</div></td> </tr> </table> 164.73.163.20 https://en.wikipedia.org/w/index.php?title=Eigenvalue_algorithm&diff=1214788745&oldid=prev Suslindisambiguator: /* Algorithms */ added author link for Robert Charles Thompson 2024-03-21T04:41:14Z <p><span class="autocomment">Algorithms: </span> added author link for Robert Charles Thompson</p> <table style="background-color: #fff; color: #202122;" data-mw="interface"> <col class="diff-marker" /> <col class="diff-content" /> <col class="diff-marker" /> <col class="diff-content" /> <tr class="diff-title" lang="en"> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 04:41, 21 March 2024</td> </tr><tr> <td colspan="2" class="diff-lineno">Line 63:</td> <td colspan="2" class="diff-lineno">Line 63:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>If an eigenvalue algorithm does not produce eigenvectors, a common practice is to use an inverse iteration based algorithm with {{math|''μ''}} set to a close approximation to the eigenvalue. This will quickly converge to the eigenvector of the closest eigenvalue to {{math|''μ''}}. 
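The inverse-iteration approach mentioned above can be sketched in a few lines; this is a minimal illustration assuming NumPy, with the shift μ taken as a rough approximation to the target eigenvalue:

```python
import numpy as np

def inverse_iteration(A, mu, iters=50):
    """Converges to the eigenvector whose eigenvalue is closest to mu."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)
    M = A - mu * np.eye(n)
    for _ in range(iters):
        v = np.linalg.solve(M, v)     # one step of inverse iteration
        v = v / np.linalg.norm(v)     # rescale to avoid overflow
    lam = v @ A @ v                   # Rayleigh-quotient eigenvalue estimate
    return lam, v

A = np.diag([1.0, 3.0, 10.0])
lam, v = inverse_iteration(A, mu=2.9)
print(lam)   # converges to 3.0, the eigenvalue closest to mu
```

In practice M would be factored once (e.g. an LU decomposition) and reused across iterations rather than re-solved from scratch; the loop above keeps the sketch short.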
For small matrices, an alternative is to look at the column space of the product of {{math|''A'' − ''λ''{{'}}''I''}} for each of the other eigenvalues {{math|''λ''{{'}}}}.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>If an eigenvalue algorithm does not produce eigenvectors, a common practice is to use an inverse iteration based algorithm with {{math|''μ''}} set to a close approximation to the eigenvalue. This will quickly converge to the eigenvector of the closest eigenvalue to {{math|''μ''}}. For small matrices, an alternative is to look at the column space of the product of {{math|''A'' − ''λ''{{'}}''I''}} for each of the other eigenvalues {{math|''λ''{{'}}}}.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>A formula for the norm of unit eigenvector components of normal matrices was discovered by Robert Thompson in 1966 and rediscovered independently by several others.&lt;ref&gt;{{cite journal |last1=Thompson |first1=R. C. 
|title=Principal submatrices of normal and Hermitian matrices |journal=Illinois Journal of Mathematics |date=June 1966 |volume=10 |issue=2 |pages=296–308 |doi=10.1215/ijm/1256055111 |doi-access=free }}&lt;/ref&gt;&lt;ref&gt;{{cite journal |author1=Peter Nylen |author2=Tin-Yau Tam |author3=Frank Uhlig |title=On the eigenvalues of principal submatrices of normal, hermitian and symmetric matrices |journal=Linear and Multilinear Algebra |date=1993 |volume=36 |issue=1 |pages=69–78 |doi=10.1080/03081089308818276}}&lt;/ref&gt;&lt;ref&gt;{{cite journal |vauthors=Bebiano N, Furtado S, da Providência J |title=On the eigenvalues of principal submatrices of J-normal matrices |journal=Linear Algebra and Its Applications |date=2011 |volume=435 |issue=12 |pages=3101–3114 |doi=10.1016/j.laa.2011.05.033 |doi-access=free }}&lt;/ref&gt;&lt;ref&gt;{{cite journal | vauthors=Forrester PJ, Zhang J | arxiv=1905.05314 | title=Corank-1 projections and the randomised Horn problem | journal=Tunisian Journal of Mathematics | year=2021 | volume=3 | pages=55–73 | doi=10.2140/tunis.2021.3.55 | s2cid=153312446 }}&lt;/ref&gt;&lt;ref&gt;{{cite journal | vauthors= Denton PB, Parke SJ, Tao T, Zhang X | arxiv=1908.03795 | title=Eigenvectors from eigenvalues: A survey of a basic identity in linear algebra | journal=Bulletin of the American Mathematical Society | year=2021 | volume=59 | page=1 | doi=10.1090/bull/1722 | s2cid=213918682 }}&lt;/ref&gt;</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>A formula for the norm of unit eigenvector components of normal matrices was discovered by Robert Thompson in 1966 and rediscovered independently by several others.&lt;ref&gt;{{cite journal |last1=Thompson |first1=R. 
C.<ins style="font-weight: bold; text-decoration: none;"> |author-link=Robert Charles Thompson</ins> |title=Principal submatrices of normal and Hermitian matrices |journal=Illinois Journal of Mathematics |date=June 1966 |volume=10 |issue=2 |pages=296–308 |doi=10.1215/ijm/1256055111 |doi-access=free }}&lt;/ref&gt;&lt;ref&gt;{{cite journal |author1=Peter Nylen |author2=Tin-Yau Tam |author3=Frank Uhlig |title=On the eigenvalues of principal submatrices of normal, hermitian and symmetric matrices |journal=Linear and Multilinear Algebra |date=1993 |volume=36 |issue=1 |pages=69–78 |doi=10.1080/03081089308818276}}&lt;/ref&gt;&lt;ref&gt;{{cite journal |vauthors=Bebiano N, Furtado S, da Providência J |title=On the eigenvalues of principal submatrices of J-normal matrices |journal=Linear Algebra and Its Applications |date=2011 |volume=435 |issue=12 |pages=3101–3114 |doi=10.1016/j.laa.2011.05.033 |doi-access=free }}&lt;/ref&gt;&lt;ref&gt;{{cite journal | vauthors=Forrester PJ, Zhang J | arxiv=1905.05314 | title=Corank-1 projections and the randomised Horn problem | journal=Tunisian Journal of Mathematics | year=2021 | volume=3 | pages=55–73 | doi=10.2140/tunis.2021.3.55 | s2cid=153312446 }}&lt;/ref&gt;&lt;ref&gt;{{cite journal | vauthors= Denton PB, Parke SJ, Tao T, Zhang X | arxiv=1908.03795 | title=Eigenvectors from eigenvalues: A survey of a basic identity in linear algebra | journal=Bulletin of the American Mathematical Society | year=2021 | volume=59 | page=1 | doi=10.1090/bull/1722 | s2cid=213918682 }}&lt;/ref&gt;</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>If {{math|''A''}} is an &lt;math display="inline"&gt; n \times n&lt;/math&gt; normal matrix with eigenvalues {{math|''λ''&lt;sub&gt;''i''&lt;/sub&gt;(''A'')}} and corresponding unit eigenvectors 
{{math|'''v'''&lt;sub&gt;''i''&lt;/sub&gt;}} whose component entries are {{math|''v''&lt;sub&gt;''i,j''&lt;/sub&gt;}}, let {{math|''A''&lt;sub&gt;''j''&lt;/sub&gt;}} be the &lt;math display="inline"&gt; (n - 1) \times (n - 1)&lt;/math&gt; matrix obtained by removing the {{math|''j''}}-th row and column from {{math|''A''}}, and let {{math|''λ''&lt;sub&gt;''k''&lt;/sub&gt;(''A''&lt;sub&gt;''j''&lt;/sub&gt;)}} be its {{math|''k''}}-th eigenvalue. Then</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>If {{math|''A''}} is an &lt;math display="inline"&gt; n \times n&lt;/math&gt; normal matrix with eigenvalues {{math|''λ''&lt;sub&gt;''i''&lt;/sub&gt;(''A'')}} and corresponding unit eigenvectors {{math|'''v'''&lt;sub&gt;''i''&lt;/sub&gt;}} whose component entries are {{math|''v''&lt;sub&gt;''i,j''&lt;/sub&gt;}}, let {{math|''A''&lt;sub&gt;''j''&lt;/sub&gt;}} be the &lt;math display="inline"&gt; (n - 1) \times (n - 1)&lt;/math&gt; matrix obtained by removing the {{math|''j''}}-th row and column from {{math|''A''}}, and let {{math|''λ''&lt;sub&gt;''k''&lt;/sub&gt;(''A''&lt;sub&gt;''j''&lt;/sub&gt;)}} be its {{math|''k''}}-th eigenvalue.
Then</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>&lt;math display="block"&gt; |v_{i,j}|^2 \prod_{k=1,k\ne i}^n (\lambda_i(A) - \lambda_k(A)) = \prod_{k=1}^{n-1}(\lambda_i(A) - \lambda_k(A_j))&lt;/math&gt;</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>&lt;math display="block"&gt; |v_{i,j}|^2 \prod_{k=1,k\ne i}^n (\lambda_i(A) - \lambda_k(A)) = \prod_{k=1}^{n-1}(\lambda_i(A) - \lambda_k(A_j))&lt;/math&gt;</div></td> </tr> </table> Suslindisambiguator https://en.wikipedia.org/w/index.php?title=Eigenvalue_algorithm&diff=1206536653&oldid=prev 176.114.201.5: Changed "3×3 matrices" to "Symmetric 3×3 matrices" 2024-02-12T12:04:23Z <p>Changed &quot;3×3 matrices&quot; to &quot;Symmetric 3×3 matrices&quot;</p> <table style="background-color: #fff; color: #202122;" data-mw="interface"> <col class="diff-marker" /> <col class="diff-content" /> <col class="diff-marker" /> <col class="diff-content" /> <tr class="diff-title" lang="en"> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 12:04, 12 February 2024</td> </tr><tr> <td colspan="2" class="diff-lineno">Line 272:</td> <td colspan="2" class="diff-lineno">Line 272:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>In both matrices, 
the columns are multiples of each other, so either column can be used. Thus, {{math|(1, −2)}} can be taken as an eigenvector associated with the eigenvalue −2, and {{math|(3, −1)}} as an eigenvector associated with the eigenvalue 3, as can be verified by multiplying them by {{math|''A''}}.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>In both matrices, the columns are multiples of each other, so either column can be used. Thus, {{math|(1, −2)}} can be taken as an eigenvector associated with the eigenvalue −2, and {{math|(3, −1)}} as an eigenvector associated with the eigenvalue 3, as can be verified by multiplying them by {{math|''A''}}.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>===3×3 matrices===</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>===<ins style="font-weight: bold; text-decoration: none;">Symmetric </ins>3×3
matrices===</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The characteristic equation of a symmetric 3&amp;times;3 matrix {{math|''A''}} is:</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>The characteristic equation of a symmetric 3&amp;times;3 matrix {{math|''A''}} is:</div></td> </tr> </table> 176.114.201.5 https://en.wikipedia.org/w/index.php?title=Eigenvalue_algorithm&diff=1189320348&oldid=prev Citation bot: Add: pages, doi, authors 1-1. Removed parameters. Some additions/deletions were parameter name changes. | Use this bot. Report bugs. | Suggested by Corvus florensis | #UCB_webform 1195/1612 2023-12-11T02:54:21Z <p>Add: pages, doi, authors 1-1. Removed parameters. Some additions/deletions were parameter name changes. | <a href="/wiki/Wikipedia:UCB" class="mw-redirect" title="Wikipedia:UCB">Use this bot</a>. <a href="/wiki/Wikipedia:DBUG" class="mw-redirect" title="Wikipedia:DBUG">Report bugs</a>. 
| Suggested by Corvus florensis | #UCB_webform 1195/1612</p> <table style="background-color: #fff; color: #202122;" data-mw="interface"> <col class="diff-marker" /> <col class="diff-content" /> <col class="diff-marker" /> <col class="diff-content" /> <tr class="diff-title" lang="en"> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">← Previous revision</td> <td colspan="2" style="background-color: #fff; color: #202122; text-align: center;">Revision as of 02:54, 11 December 2023</td> </tr><tr> <td colspan="2" class="diff-lineno">Line 51:</td> <td colspan="2" class="diff-lineno">Line 51:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Algorithms==</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Algorithms==</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>The most reliable and most widely used 
algorithm for computing eigenvalues is [[John G. F. Francis]]' [[QR algorithm]], considered one of the top ten algorithms of the 20th century.&lt;ref name="t10"&gt;{{cite journal |last1=J. Dongarra and F. Sullivan |title=Top ten algorithms of the century |journal=Computing in Science and Engineering |date=2000 |volume=2 |pages=22–23}}&lt;/ref&gt;</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>The most reliable and most widely used algorithm for computing eigenvalues is [[John G. F. Francis]]' [[QR algorithm]], considered one of the top ten algorithms of the 20th century.&lt;ref name="t10"&gt;{{cite journal |last1=J. Dongarra and F. Sullivan |title=Top ten algorithms of the century |journal=Computing in Science and Engineering |date=2000 |volume=2 |pages=22–23<ins style="font-weight: bold; text-decoration: none;">|doi=10.1109/MCISE.2000.814652 </ins>}}&lt;/ref&gt;</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Any monic polynomial is the characteristic polynomial of its [[companion matrix]].
Therefore, a general algorithm for finding eigenvalues could also be used to find the roots of polynomials. The [[Abel–Ruffini theorem]] shows that any such algorithm for dimensions greater than 4 must either be infinite or involve functions of greater complexity than elementary arithmetic operations and fractional powers. For this reason, algorithms that exactly calculate eigenvalues in a finite number of steps only exist for a few special classes of matrices. For general matrices, algorithms are [[Iterative method|iterative]], producing better approximate solutions with each iteration.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>Any monic polynomial is the characteristic polynomial of its [[companion matrix]]. Therefore, a general algorithm for finding eigenvalues could also be used to find the roots of polynomials. The [[Abel–Ruffini theorem]] shows that any such algorithm for dimensions greater than 4 must either be infinite or involve functions of greater complexity than elementary arithmetic operations and fractional powers. For this reason, algorithms that exactly calculate eigenvalues in a finite number of steps only exist for a few special classes of matrices.
For general matrices, algorithms are [[Iterative method|iterative]], producing better approximate solutions with each iteration.</div></td> </tr> <tr> <td colspan="2" class="diff-lineno">Line 108:</td> <td colspan="2" class="diff-lineno">Line 108:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|}</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|}</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>For symmetric tridiagonal eigenvalue problems all eigenvalues (without eigenvectors) can be computed numerically in time O(n log(n)), using bisection on the characteristic polynomial.&lt;ref name=CoakleyRokhlin&gt;{{Citation |last=Coakley|first=Ed S. |title=A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices. 
|journal=[[Applied and Computational Harmonic Analysis]] |volume=34 |issue=3 |date=May 2013 |<del style="font-weight: bold; text-decoration: none;">page</del>=379–414 |doi=10.1016/j.acha.2012.06.003|doi-access= }}&lt;/ref&gt;</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>For symmetric tridiagonal eigenvalue problems all eigenvalues (without eigenvectors) can be computed numerically in time O(n log(n)), using bisection on the characteristic polynomial.&lt;ref name=CoakleyRokhlin&gt;{{Citation |last=Coakley|first=Ed S. |title=A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices. |journal=[[Applied and Computational Harmonic Analysis]] |volume=34 |issue=3 |date=May 2013 |<ins style="font-weight: bold; text-decoration: none;">pages</ins>=379–414 |doi=10.1016/j.acha.2012.06.003|doi-access= }}&lt;/ref&gt;</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><br /></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Iterative algorithms==</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; 
border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>==Iterative algorithms==</div></td> </tr> <tr> <td colspan="2" class="diff-lineno">Line 193:</td> <td colspan="2" class="diff-lineno">Line 193:</td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>|-</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| [[Gram iteration]]&lt;ref&gt;{{Citation</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| [[Gram iteration]]&lt;ref&gt;{{Citation</div></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| <del style="font-weight: bold; text-decoration: none;">last</del>=Delattre</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; 
white-space: pre-wrap;"><div>| <ins style="font-weight: bold; text-decoration: none;">last1</ins>=Delattre</div></td> </tr> <tr> <td class="diff-marker" data-marker="−"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>| <del style="font-weight: bold; text-decoration: none;">first</del>=B.</div></td> <td class="diff-marker" data-marker="+"></td> <td style="color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>| <ins style="font-weight: bold; text-decoration: none;">first1</ins>=B.</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| last2=Barthélemy</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| last2=Barthélemy</div></td> </tr> <tr> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| first2=Q.</div></td> <td class="diff-marker"></td> <td style="background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;"><div>| first2=Q.</div></td> </tr> </table> Citation bot
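The inverse-iteration practice described in the first diff above (set {{math|''μ''}} to an approximate eigenvalue, then iterate shifted solves to recover the eigenvector of the nearest eigenvalue) can be sketched numerically. This is an illustrative NumPy sketch, not code from the article; the function name and iteration count are invented for the example.

```python
import numpy as np

def inverse_iteration(A, mu, iters=50):
    """Approximate unit eigenvector of A for the eigenvalue nearest mu."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)
    M = A - mu * np.eye(n)          # shifted matrix; mu must not be an exact eigenvalue
    for _ in range(iters):
        w = np.linalg.solve(M, v)   # one linear solve per iteration
        v = w / np.linalg.norm(w)   # renormalize to avoid overflow
    return v

# Eigenvalues of this diagonal matrix are 1, 4, 9; shift near 4.
A = np.diag([1.0, 4.0, 9.0])
v = inverse_iteration(A, 3.9)
assert np.allclose(np.abs(v), [0.0, 1.0, 0.0], atol=1e-8)
```

Each iteration amplifies the eigenvector component whose eigenvalue is closest to {{math|''μ''}} by the largest factor, which is why convergence is fast when the shift is accurate.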
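The eigenvector-component identity quoted in the Thompson diff relates {{math|{{!}}''v''<sub>''i,j''</sub>{{!}}<sup>2</sup>}} to eigenvalues of {{math|''A''}} and of the principal submatrix {{math|''A''<sub>''j''</sub>}}. It can be spot-checked numerically; a minimal NumPy sketch (matrix size, indices, and seed chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random real symmetric (hence normal) matrix A.
n = 5
B = rng.standard_normal((n, n))
A = (B + B.T) / 2

eigvals, eigvecs = np.linalg.eigh(A)   # eigvecs[:, i] is the unit eigenvector v_i

i, j = 2, 3
# Left side: |v_{i,j}|^2 * prod_{k != i} (lambda_i(A) - lambda_k(A))
lhs = abs(eigvecs[j, i]) ** 2 * np.prod(
    [eigvals[i] - eigvals[k] for k in range(n) if k != i]
)

# Right side: prod_k (lambda_i(A) - lambda_k(A_j)),
# where A_j deletes the j-th row and column of A.
A_j = np.delete(np.delete(A, j, axis=0), j, axis=1)
rhs = np.prod(eigvals[i] - np.linalg.eigvalsh(A_j))

assert np.isclose(lhs, rhs)
```

The check relies on the eigenvalues of a random symmetric matrix being simple (almost surely true), so the product on the left is nonzero.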
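The bisection approach for symmetric tridiagonal eigenvalue problems cited in the Coakley citation diff rests on a Sturm-sequence count: the number of negative pivots in the LDLᵀ factorization of {{math|''T'' − ''x I''}} equals the number of eigenvalues below {{math|''x''}}. A hedged sketch of that classical technique (helper names are invented; this is not the cited divide-and-conquer algorithm itself):

```python
import numpy as np

def count_below(d, e, x):
    """Sturm count: eigenvalues of the symmetric tridiagonal (diag d, offdiag e) below x."""
    count, q = 0, 1.0
    for k in range(len(d)):
        q = d[k] - x - (e[k - 1] ** 2 / q if k > 0 else 0.0)
        if q == 0.0:
            q = 1e-300          # guard against exact cancellation
        if q < 0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, tol=1e-12):
    """Bisection for the k-th smallest eigenvalue (0-based) of the tridiagonal (d, e)."""
    # Gershgorin discs enclose all eigenvalues.
    r = np.abs(np.concatenate(([0.0], e))) + np.abs(np.concatenate((e, [0.0])))
    lo, hi = float(np.min(d - r)), float(np.max(d + r))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if count_below(d, e, mid) > k:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Compare against a dense solver on the same matrix.
d = np.array([2.0, 2.0, 2.0, 2.0])
e = np.array([-1.0, -1.0, -1.0])
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
dense = np.sort(np.linalg.eigvalsh(T))
assert all(abs(kth_eigenvalue(d, e, k) - dense[k]) < 1e-8 for k in range(4))
```

Each bisection step costs O(n), and each eigenvalue is located to fixed precision in a bounded number of steps, which is the basis for the near-linear complexity claimed in the cited work.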