Multiplication algorithm

From Wikipedia, the free encyclopedia

A multiplication algorithm is an algorithm (or method) to multiply two numbers. Depending on the size of the numbers, different algorithms are in use.

The major advantage of positional number systems over other systems of writing down numbers is that they facilitate the usual grade-school method of long multiplication: multiply the first number by every digit of the second number and then add up all the properly shifted results. In order to perform this algorithm, one needs to know the products of all possible digits, which is why multiplication tables have to be memorized. Humans use this algorithm in base 10, while computers employ the same algorithm in base 2. The algorithm is a lot simpler in base 2, since the multiplication table has only 4 entries. Modern chips implement this algorithm for 32-bit or 64-bit numbers in hardware or in microcode. To multiply two numbers with n digits using this method, one needs about n^2 operations. More formally: the time complexity of multiplying two n-digit numbers using long multiplication is O(n^2).
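
As an illustration, the following is a minimal sketch of long multiplication operating on lists of base-10 digits; the function name long_multiply and the least-significant-digit-first representation are choices made for this example, not part of any particular system.

  def long_multiply(x, y):
      """Grade-school long multiplication of two digit lists.

      x and y are lists of base-10 digits, least significant digit first;
      e.g. 123 is represented as [3, 2, 1].
      """
      result = [0] * (len(x) + len(y))          # enough room for the product
      for i, xi in enumerate(x):                # one row per digit of x
          carry = 0
          for j, yj in enumerate(y):            # multiply xi by every digit of y
              total = result[i + j] + xi * yj + carry
              result[i + j] = total % 10        # keep one digit
              carry = total // 10               # carry the rest to the next place
          result[i + len(y)] += carry           # final carry of this row
      while len(result) > 1 and result[-1] == 0:
          result.pop()                          # strip leading zeros
      return result

  print(long_multiply([2, 1], [4, 3]))          # 12 * 34 = 408 -> [8, 0, 4]

Each digit of x meets each digit of y exactly once in the inner loop, which is where the roughly n^2 operations come from.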

For systems that need to multiply huge numbers in the range of several thousand digits, such as computer algebra systems and bignum libraries, this algorithm is too slow. These systems employ Karatsuba multiplication, which proceeds as follows: suppose you work in base 10 (as most of these systems do) and want to multiply two n-digit numbers x and y, and assume n = 2m is even (if not, add zeros at the left end). We can write

x = x1 10^m + x2
y = y1 10^m + y2

with m-digit numbers x1, x2, y1 and y2. The product is given by

xy = x1y1 10^(2m) + (x1y2 + x2y1) 10^m + x2y2

so we need to quickly determine the numbers x1y1, x1y2 + x2y1 and x2y2. The heart of Karatsuba's method lies in the observation that this can be done with only three rather than four multiplications:

  1. compute x1y1, call the result A
  2. compute x2y2, call the result B
  3. compute (x1 + x2)(y1 + y2), call the result C
  4. compute C - A - B; this number is equal to x1y2 + x2y1.

To compute these three products of m-digit numbers, we can employ the same trick again, effectively using recursion. Because of the overhead of recursion, this method is not very fast for small values of n; in typical implementations, one would therefore switch to long multiplication if n is small enough.
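
A compact sketch of this recursion, written here for Python integers rather than digit strings (the cutoff value and the name karatsuba are choices of this illustration, not of any particular bignum library):

  def karatsuba(x, y, cutoff=32):
      """Multiply two non-negative integers with Karatsuba's method.

      Below the (arbitrary) cutoff of 32 digits, fall back to the
      built-in multiplication, standing in for long multiplication.
      """
      if x < 10 ** cutoff or y < 10 ** cutoff:
          return x * y
      m = max(len(str(x)), len(str(y))) // 2        # split the digits roughly in half
      x1, x2 = divmod(x, 10 ** m)                   # x = x1 * 10^m + x2
      y1, y2 = divmod(y, 10 ** m)                   # y = y1 * 10^m + y2
      a = karatsuba(x1, y1)                         # A = x1 y1
      b = karatsuba(x2, y2)                         # B = x2 y2
      c = karatsuba(x1 + x2, y1 + y2)               # C = (x1 + x2)(y1 + y2)
      return a * 10 ** (2 * m) + (c - a - b) * 10 ** m + b

  x = 10 ** 50 + 12345
  y = 10 ** 50 + 67890
  print(karatsuba(x, y) == x * y)                   # True

The three recursive products themselves fall back to the simpler multiplication below the cutoff, mirroring the hybrid strategy described above.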

It turns out that this algorithm has a time complexity of O(n^(ln(3)/ln(2))): each multiplication of two n-digit numbers is reduced to three multiplications of n/2-digit numbers plus a linear number of additions and shifts, and the resulting recurrence T(n) = 3 T(n/2) + O(n) solves to O(n^(ln(3)/ln(2))). The number ln(3)/ln(2) is approximately 1.58, so this method is significantly faster than long multiplication for large n.

It is possible to experimentally verify whether a given system uses Karatsuba's method or long multiplication: take your favorite 10,000-digit numbers, multiply them and measure the time it takes. Then take your favorite 20,000-digit numbers and measure the time it takes to multiply those. If Karatsuba's method is being used, the second time will be about three times as long as the first; if long multiplication is being used, the second time will be about four times as long.
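
A rough version of this experiment might look as follows in Python; the digit counts and the use of time.perf_counter are just one possible setup, and a single run gives only a noisy estimate of the ratio.

  import random
  import time

  def time_multiplication(digits):
      """Time one multiplication of two random numbers with the given digit count."""
      x = random.randrange(10 ** (digits - 1), 10 ** digits)
      y = random.randrange(10 ** (digits - 1), 10 ** digits)
      start = time.perf_counter()
      _ = x * y
      return time.perf_counter() - start

  t1 = time_multiplication(10_000)
  t2 = time_multiplication(20_000)
  print("ratio:", t2 / t1)    # about 4 for long multiplication, about 3 for Karatsuba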

There exist even faster algorithms, but they are not used in computer algebra systems and bignum libraries because they are difficult to implement and don't provide speed benefits for the sizes of numbers typically encountered in those systems. These algorithms are based on the fast Fourier transform. The idea is the following: multiplying two numbers represented as digit strings is virtually the same as computing the convolution of those two digit strings (only the carries remain to be propagated). Instead of computing the convolution directly, one can first compute the Fourier transforms of the two digit strings, multiply them entry by entry, and then compute the inverse Fourier transform of the result. The fastest known methods have a time complexity of O(n ln(n) ln(ln(n))).
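
To make the idea concrete, here is a toy sketch of FFT-based multiplication using NumPy; the name fft_multiply and the use of double-precision floating-point FFTs are assumptions of this illustration (they limit it to moderate sizes), while serious implementations use exact or carefully error-bounded transforms.

  import numpy as np

  def fft_multiply(x, y):
      """Multiply two non-negative integers via an FFT-based convolution of their digits."""
      a = np.array([int(d) for d in str(x)[::-1]], dtype=float)   # least significant digit first
      b = np.array([int(d) for d in str(y)[::-1]], dtype=float)
      size = 1
      while size < len(a) + len(b):       # pad so the cyclic convolution doesn't wrap around
          size *= 2
      fa = np.fft.rfft(a, size)
      fb = np.fft.rfft(b, size)
      conv = np.fft.irfft(fa * fb, size)  # pointwise product of transforms <-> convolution
      digits = [int(round(c)) for c in conv]
      result, carry = 0, 0
      for i, d in enumerate(digits):      # propagate carries and assemble the integer
          total = d + carry
          result += (total % 10) * 10 ** i
          carry = total // 10
      return result + carry * 10 ** len(digits)

  print(fft_multiply(123456789, 987654321) == 123456789 * 987654321)   # True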

All the above multiplication algorithms can also be used to multiply polynomials.