Talk:Strassen algorithm

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by 79.220.78.73 (talk) at 11:49, 12 November 2008 (another broken link). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

link to reference 1 is broken 75.142.209.115 (talk) 19:46, 29 October 2008 (UTC)

link to reference 2 is broken as well. Nov 12, 2008[reply]

Why don't additions matter?

A lot of older textbooks will say that "additions don't matter, but multiplications do", but that is usually not relevant anymore. Multiplying numbers is slower than adding them if you are working with multi-word representations of numbers (sometimes called 'big numbers'), but if you are just multiplying "float"s or "double"s, then multiplications and additions take the same amount of time on most processors today (as far as I know). I'm going to change the article to say "if we only consider multiplications", without the statement about multiplications being inherently slower.... :)

Yes, it depends on the sizes of the numbers being multiplied. Perhaps a clarification of this point should be added to the article? I'm not familiar with the size of the matrix components used in most matrix algorithms so I will defer this point to the experts. - Gauge 03:19, 21 Jan 2005 (UTC)
This is mathematical thing, not a technical article. It states theoretical bounds.--213.141.159.52 14:30, 7 March 2006 (UTC)[reply]
The larger importance of multiplications becomes apparent when one starts to use the algorithm recursively, since the additions and multiplications at all but the bottom levels are then matrix additions and multiplications. A matrix multiplication is near O(n^2.807), whereas a matrix addition is O(n^2). Hence the number of additions in a Strassen-type matrix multiplication algorithm only affects the constant factor, but the number of multiplications affects the exponent. See section 2.4 of Wilf, Herbert S. Algorithms and Complexity. 81.231.33.217 (talk) 13:29, 30 June 2008 (UTC)[reply]
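To make the point above concrete, here is a small sketch (mine, not from the thread) that counts element operations for the recursive algorithm on power-of-two sizes, assuming Strassen's original scheme of 7 recursive multiplications and 18 half-size matrix additions per level:

```python
# Operation counts for recursive Strassen on n x n matrices (n a power of 2).
# Per level: 7 recursive half-size products and 18 half-size matrix additions
# (18 is Strassen's original count; changing it only moves the constant factor).

def strassen_counts(n):
    """Return (element_mults, element_adds) for fully recursive Strassen."""
    if n == 1:
        return 1, 0
    m, a = strassen_counts(n // 2)
    # 7 recursive products, plus 18 additions of (n/2) x (n/2) matrices
    return 7 * m, 7 * a + 18 * (n // 2) ** 2

for n in (2, 4, 8, 16, 64, 256):
    m, a = strassen_counts(n)
    print(n, m, a, a / m)
```

The multiplication count grows as 7^(log2 n) = n^(log2 7), while the ratio of additions to multiplications stays bounded (it tends to 6), which is exactly the "additions only affect the constant factor" point.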

Notation for field

I am unfamiliar with the use of blackboard bold to denote an arbitrary field, as I understand it to mean one of the canonical sets of numbers (e.g., the complex numbers, the integers, etc.). Shouldn't the article use K or K instead of 𝕂? Pmdboi 18:05, 19 February 2006 (UTC)[reply]

Some texts use 𝕂 or K to emphasize the point that K is a field, but I think most texts use a simple K instead. I changed the article accordingly. And I changed K into the more common F. MathMartin 19:09, 19 February 2006 (UTC)[reply]

Error in formula?

Looking at MathWorld, I see a different formula for M1: http://mathworld.wolfram.com/StrassenFormulas.html. Doing a simple case by hand, the Wikipedia ones seem wrong. 141.218.136.204 20:07, 22 February 2007 (UTC)[reply]

I'm sorry, but I don't see where the formula at MathWorld differs from the formula here. Could you please be more specific? -- Jitse Niesen (talk) 00:49, 23 February 2007 (UTC)[reply]

Cross-over point

I reverted the edit of 31 January 2007, which added this fragment:

A popular misconception is that Strassen's algorithm outperforms the classic multiplication approach only for large matrix sizes. This is not true for modern architectures due to Strassen algorithm's superior cache behavior. For example, on a 2GHz Core Duo processor, Strassen's algorithm is three times faster at multiplying 128x128 matrices. Its time is pretty much the same as that of block-multiplication for 256x256 and 512x512 matrices. For 1024x1024 and larger matrices (when the entire problem set no longer fits into the processor's L2 cache), Strassen's algorithm is 3 times faster than block-multiplication and 12 times as fast as the classical three nested-loop implementation.

It might be true, but it does need a reference, especially since these kinds of measurements are typically very dependent on details, and it is contradicted by other references like Golub and Van Loan's Matrix Computations. -- Jitse Niesen (talk) 03:40, 23 February 2007 (UTC)[reply]

Agreed. A crossover point certainly exists, but is hardware dependent. I'll put something fluffier in. Deco 05:18, 23 February 2007 (UTC)[reply]
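As a hedged illustration of how such a crossover could be measured on a given machine (this is my sketch, not the benchmark quoted above; the function names, the cutoff value, and the test sizes are illustrative choices):

```python
# Locate the crossover empirically: time a classical triple-loop multiply
# against recursive Strassen that falls back to the same classical multiply
# below a cutoff. Where the Strassen column wins is the crossover point.
import random
import time

def classical(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B, cutoff=32):
    n = len(A)
    if n <= cutoff:
        return classical(A, B)
    h = n // 2
    q = lambda M, r, c: [row[c:c + h] for row in M[r:r + h]]
    a11, a12, a21, a22 = q(A, 0, 0), q(A, 0, h), q(A, h, 0), q(A, h, h)
    b11, b12, b21, b22 = q(B, 0, 0), q(B, 0, h), q(B, h, 0), q(B, h, h)
    m1 = strassen(add(a11, a22), add(b11, b22), cutoff)
    m2 = strassen(add(a21, a22), b11, cutoff)
    m3 = strassen(a11, sub(b12, b22), cutoff)
    m4 = strassen(a22, sub(b21, b11), cutoff)
    m5 = strassen(add(a11, a12), b22, cutoff)
    m6 = strassen(sub(a21, a11), add(b11, b12), cutoff)
    m7 = strassen(sub(a12, a22), add(b21, b22), cutoff)
    c11 = add(sub(add(m1, m4), m5), m7)
    c12 = add(m3, m5)
    c21 = add(m2, m4)
    c22 = add(add(sub(m1, m2), m3), m6)
    top = [r1 + r2 for r1, r2 in zip(c11, c12)]
    bot = [r1 + r2 for r1, r2 in zip(c21, c22)]
    return top + bot

if __name__ == "__main__":
    for n in (16, 32, 64):
        A = [[random.random() for _ in range(n)] for _ in range(n)]
        B = [[random.random() for _ in range(n)] for _ in range(n)]
        t0 = time.perf_counter(); classical(A, B); t1 = time.perf_counter()
        strassen(A, B); t2 = time.perf_counter()
        print(n, t1 - t0, t2 - t1)
```

On a real machine one would use much larger sizes and repeated runs (the sizes above are kept tiny so the sketch finishes quickly), which is precisely why the measured crossover is hardware dependent.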

Which Winograd algorithm?

Winograd's variant, with 7 multiplications and fewer additions than Strassen's algorithm, was published in 1971 (Winograd, S. (1971). "On Multiplication of 2×2 Matrices". Linear Algebra and Its Applications. 4: 381–388.), not 1980. It has the same asymptotic complexity as Strassen's: O(n^log2 7) ≈ O(n^2.807).
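For concreteness, the 2×2 step of the 1971 variant can be sketched as follows (my transcription; the s/m/t variable names are mine, not from the paper as cited here). It uses 7 multiplications and 15 additions/subtractions, compared with 7 and 18 for Strassen's original scheme:

```python
def winograd_2x2(a11, a12, a21, a22, b11, b12, b21, b22):
    """One 2x2 step of Winograd's variant: 7 multiplications, 15 additions.
    Entries may be scalars or, in the recursive case, submatrices."""
    # 8 pre-additions
    s1 = a21 + a22
    s2 = s1 - a11
    s3 = a11 - a21
    s4 = a12 - s2
    s5 = b12 - b11
    s6 = b22 - s5
    s7 = b22 - b12
    s8 = s6 - b21
    # 7 products
    m1 = s2 * s6
    m2 = a11 * b11
    m3 = a12 * b21
    m4 = s3 * s7
    m5 = s1 * s5
    m6 = s4 * b22
    m7 = a22 * s8
    # 7 post-additions
    t1 = m1 + m2
    t2 = t1 + m4
    c11 = m2 + m3
    c12 = t1 + m5 + m6
    c21 = t2 - m7
    c22 = t2 + m5
    return c11, c12, c21, c22
```

Because the entries can themselves be submatrices, the same step applies recursively, and the reduction from 18 to 15 additions only improves the constant factor, not the exponent.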

The "Ultra-fast matrix multiplication" reference in the article describes another matrix multiplication attributed to Winograd, which has asymptotic complexity O(). It is possible that this was published in 1980.

That time period also saw many other developments which are worth mentioning:

  • The first Strassen-like algorithm — in the sense of a computational scheme for multiplication of matrices of a fixed size that uses clever tricks to reduce the number of element multiplications needed — to achieve better asymptotic complexity than Strassen's original algorithm (Pan, V. Ya. (1980). "New fast algorithms for matrix operations". SIAM Journal on Computing. 9 (2): 321–342. ISSN 0097-5397.). It wasn't too practical, though, as it still used 47216 multiplications to multiply two 48×48 matrices, thus only reducing the exponent by 0.02.
  • The Strassen–Schönhage algorithm (which is not the same as the Schönhage–Strassen algorithm for integer multiplication), which reduced the exponent to below 2.5. Schönhage, A. (1981). "Partial and total matrix multiplication". SIAM Journal on Computing. 10 (3): 434–455. (I don't recall the details, and haven't got de Groote, H. F. (1987). Lectures on the Complexity of Bilinear Problems. Lecture Notes in Computer Science. Vol. 245. Springer. ISBN 3-540-17205-X. at hand so I can check; possibly Strassen's contribution was to push the exponent below 2.5.)

81.231.33.217 (talk) 14:08, 30 June 2008 (UTC)[reply]

asymptotically faster

  • It is asymptotically faster than the standard matrix multiplication algorithm

What does asymptotically faster mean? Thanks, --Abdull (talk) 10:55, 25 July 2008 (UTC)[reply]

"For large enough input", in this case "for large enough matrix sides". 81.231.34.61 (talk) 10:49, 11 September 2008 (UTC)[reply]
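A quick numeric sketch of what "for large enough input" means here (assuming only the two exponents, n^3 for the classical algorithm and n^(log2 7) for Strassen): the ratio of the operation counts grows without bound, so any fixed constant-factor overhead in Strassen's favor of the classical method is eventually overcome.

```python
# "Asymptotically faster": n^3 / n^(log2 7) = n^(3 - log2 7) grows without
# bound as n grows, even though the exponent gap (~0.19) is small.
from math import log2

omega = log2(7)  # Strassen's exponent, ~2.807

prev = 0.0
for k in range(1, 15):
    n = 2 ** k
    ratio = n ** 3 / n ** omega
    assert ratio > prev  # strictly increasing in n
    prev = ratio
    print(f"n={n:6d}  n^3 / n^omega = {ratio:8.1f}")
```

The small exponent gap is why the ratio grows slowly, and hence why constant factors and the hardware-dependent crossover point discussed above matter so much in practice.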