Block Lanczos algorithm
In computer science, the block Lanczos algorithm is an algorithm for finding the nullspace of a matrix over a finite field, using only multiplication of the matrix by long, thin matrices. Such matrices are considered as vectors of tuples of finite-field entries, and so tend to be called 'vectors' in descriptions of the algorithm.
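
As a minimal sketch (in Python, with an assumed helper name block_mul; an illustration of the representation, not Montgomery's optimized implementation), an n-by-64 block 'vector' over GF(2) can be stored as one 64-bit integer per row, so that addition over GF(2) becomes a word-wide XOR and a single pass over the matrix multiplies it by 64 vectors at once:

```python
# A minimal sketch, assuming the sparse GF(2) matrix is stored as a list of
# nonzero column indices per row; not Montgomery's optimized implementation.
# Each row of an n-by-64 block "vector" is held in one 64-bit integer, so
# adding 64 entries over GF(2) at once is a single XOR.

def block_mul(sparse_rows, v):
    """Multiply a sparse GF(2) matrix by an n-by-64 block 'vector' v."""
    result = []
    for cols in sparse_rows:
        word = 0
        for j in cols:      # summing rows of v over GF(2) = XOR
            word ^= v[j]
        result.append(word)
    return result

# Example: a 3x3 sparse matrix applied to 64 vectors in one pass.
B = [[0, 2], [1], [0, 1, 2]]   # nonzero columns of each row of B
v = [0b001, 0b010, 0b011]      # three rows of the block vector
print(block_mul(B, v))         # [2, 2, 0]
```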

The block Lanczos algorithm is amongst the most efficient methods known for finding nullspaces, which is the final stage in integer factorization algorithms such as the quadratic sieve and number field sieve, and its development has been entirely driven by this application.
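
The following toy example (with assumed numbers, not data from any real factorization) illustrates why the nullspace is what the sieve's final stage needs: each relation contributes a vector of prime exponents reduced mod 2, and an element of the nullspace of the resulting GF(2) matrix selects a subset of relations whose product has all-even exponents, hence is a perfect square:

```python
# A toy example with assumed numbers: three relations whose prime-exponent
# vectors over the primes (2, 3, 5) are reduced mod 2. A GF(2) nullspace
# element selects a subset of relations whose product has all-even
# exponents, i.e. a perfect square.

relations = [
    (1, 1, 0),   # exponents mod 2 of, e.g., 2 * 3
    (0, 1, 1),   # exponents mod 2 of, e.g., 3 * 5
    (1, 0, 1),   # exponents mod 2 of, e.g., 2 * 5
]

# The all-ones vector lies in the nullspace: every coordinate sums to 0
# mod 2, so the product of all three relations is (2*3*5)^2, a square.
combined = [sum(col) % 2 for col in zip(*relations)]
assert combined == [0, 0, 0]
```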

Parallelization issues

The algorithm is essentially not parallel: it is of course possible to distribute the matrix–'vector' multiplication, but the whole vector must be available for the combination step at the end of each iteration, so all the machines involved in the calculation must be on the same fast network. In particular, it is not possible to widen the vectors and distribute slices of vectors to different independent machines.
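
A schematic sketch of this synchronization pattern follows (with assumed helper names such as xor_rows and iterate; it shows the communication structure only, not a working block Lanczos solver). The per-slice multiplications in phase 1 can run on separate machines, but phase 2 needs the entire product gathered in one place before the next iteration can begin:

```python
# A schematic sketch with assumed helper names; the synchronization
# pattern only, not a working block Lanczos solver.

def xor_rows(v, cols):
    """XOR together the selected rows of the block vector v (GF(2) sum)."""
    word = 0
    for j in cols:
        word ^= v[j]
    return word

def iterate(matrix_slices, v):
    # Phase 1 (distributable): each machine multiplies its own horizontal
    # slice of the matrix by the FULL block vector v.
    partials = [[xor_rows(v, cols) for cols in rows] for rows in matrix_slices]

    # Phase 2 (the barrier): every entry of the product must be gathered
    # before the combination step, so the machines cannot run independently.
    return [word for part in partials for word in part]

# Two "machines", each holding two rows of a 4-row sparse GF(2) matrix.
slices = [[[0, 1], [2]], [[1, 3], [0, 2, 3]]]
v = [0b0001, 0b0010, 0b0100, 0b1000]
print(iterate(slices, v))   # [3, 4, 10, 13]; each slice needed all of v
```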

The block Wiedemann algorithm is more useful in contexts where several systems each large enough to hold the entire matrix are available, since in that algorithm the systems can run independently until a final stage at the end.

History

The block Lanczos algorithm was developed by Peter Montgomery and published in 1995;[1] it is based on, and bears a strong resemblance to, the Lanczos algorithm for finding eigenvalues of large sparse real matrices.

References

1. Montgomery, P. L. (1995). "A Block Lanczos Algorithm for Finding Dependencies over GF(2)". EUROCRYPT '95. Lecture Notes in Computer Science. Vol. 921. Springer-Verlag. pp. 106–120. doi:10.1007/3-540-49264-X_9.