
Fast algorithms


Fast algorithms is the field of computational mathematics that studies algorithms for evaluating a given function with a given accuracy using as few bit operations as possible.


Bit operation

We assume that numbers are written in binary form, the signs 0 and 1 of which are called bits.

Def. 1. Writing down one of the symbols 0, 1, and the addition, subtraction, or multiplication of two bits, are called elementary operations, or bit operations.
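
To make the counting concrete, here is a minimal Python sketch (an illustration added here, not part of the original text; the convention of charging three operations per bit position is an assumption chosen for simplicity). It adds two numbers given as lists of bits and tallies the elementary bit operations performed, showing that adding two n-bit numbers costs O(n) bit operations.

    # Illustrative sketch: add two numbers given as bit lists (least
    # significant bit first) while counting elementary bit operations.
    # Charging three operations per position (two bit additions and one
    # written bit) is an assumed convention for illustration.
    def add_bits(a, b):
        n = max(len(a), len(b))
        a = a + [0] * (n - len(a))
        b = b + [0] * (n - len(b))
        result, carry, ops = [], 0, 0
        for i in range(n):
            s = a[i] + b[i] + carry      # combine two bits and the carry
            result.append(s % 2)         # write down one bit of the sum
            carry = s // 2
            ops += 3
        result.append(carry)
        return result, ops

    bits, count = add_bits([1, 0, 1, 1], [1, 1, 0, 1])   # 13 + 11
    print(bits, count)   # [0, 0, 0, 1, 1] (= 24), 12 bit operations

Schoolbook multiplication of two n-bit numbers, counted the same way, takes O(n²) bit operations; it is exactly this count that fast algorithms aim to reduce.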


Complexity of computation (bit)

The quality of a fast method or algorithm is estimated with the function of the (bit) complexity of computation.

We first define what it means to evaluate a function at a given point.

We consider the simplest case: let f(x) be a real function of a real argument x on an interval [a, b], and assume that f satisfies on [a, b] the Lipschitz condition of order α, 0 < α ≤ 1, that is, |f(x₁) − f(x₂)| ≤ C|x₁ − x₂|^α for all x₁, x₂ ∈ [a, b]. (The Lipschitz condition guarantees that knowing the argument x to roughly n/α binary digits suffices to determine f(x) to n digits.) Let n be a natural number.

Def. 2. To compute the function f at the point x up to n digits (with accuracy 2⁻ⁿ) means to find a number A such that |f(x) − A| ≤ 2⁻ⁿ.

Def. 3. The total number of bit operations sufficient to compute the function f at the point x with accuracy up to n digits by means of a given algorithm is called the complexity of computation of f at the point x up to n digits. Thus, the complexity of computation of f at the point x is a function of n (and also of x and f). This function is denoted by s_f(n).
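
As a concrete illustration of Def. 2 (a sketch added here, not part of the original text; the choice of function and the fixed-point approach are assumptions made for the example), computing f(x) = √x at x = 2 up to n digits means producing a dyadic rational A with |√2 − A| ≤ 2⁻ⁿ:

    from fractions import Fraction
    from math import isqrt

    def sqrt_to_n_digits(x, n):
        # Work in fixed point with one guard bit: a = floor(sqrt(x) * 2**(n+1)),
        # so the truncation error satisfies |sqrt(x) - a / 2**(n+1)| < 2**(-n).
        a = isqrt(x << (2 * (n + 1)))
        return Fraction(a, 1 << (n + 1))

    A = sqrt_to_n_digits(2, 10)
    print(A, abs(float(A) - 2 ** 0.5) <= 2 ** -10)   # 181/128 True

Counting the bit operations spent inside such a procedure, as a function of n, gives its complexity s_f(n) in the sense of Def. 3.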

It is clear that s_f(n) depends also on the algorithm of calculation, and for different algorithms it will be different. The complexity of a computation is directly related to the time that a computer needs to perform it; for this reason, the complexity is sometimes treated as a 'time' function.[1]

The problem of the behavior of s_f(n) as n → ∞, for a class of functions or for a particular function f, was first formulated by Kolmogorov[2] around 1956.

History of the problem

The field of fast algorithms was born in 1960,[3] when the first fast method, Karatsuba multiplication, was found. The Karatsuba method was later called "divide and conquer" (sometimes any method of computation with subdivisions goes under this name); other names used for it are "binary splitting", "dichotomy principle", etc. After the Karatsuba method, many other fast methods were constructed, including Strassen's fast matrix multiplication[4] (a generalization of the Karatsuba idea to matrices) and the Schönhage–Strassen multiplication method.[5] Some older methods become fast computational methods when combined with one of the fast multiplication algorithms, such as Newton's method for the computation of elementary algebraic functions and Gauss's AGM method for the evaluation of elementary transcendental functions.
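
The Karatsuba method itself is short enough to sketch. The following Python function (an illustrative reconstruction, not code from any of the cited sources) multiplies two non-negative integers using three recursive half-size multiplications instead of four, which brings the bit complexity down from O(n²) to O(n^log₂ 3) ≈ O(n^1.585):

    def karatsuba(x, y):
        # Multiply non-negative integers by splitting each factor in half:
        # x = x1 * 2**m + x0, y = y1 * 2**m + y0. Only three half-size
        # products are needed, since the middle term can be recovered as
        # (x1 + x0)(y1 + y0) - x1*y1 - x0*y0.
        if x < 16 or y < 16:             # small operands: multiply directly
            return x * y
        m = max(x.bit_length(), y.bit_length()) // 2
        x1, x0 = x >> m, x & ((1 << m) - 1)
        y1, y0 = y >> m, y & ((1 << m) - 1)
        z2 = karatsuba(x1, y1)
        z0 = karatsuba(x0, y0)
        z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0
        return (z2 << (2 * m)) + (z1 << m) + z0

    assert karatsuba(12345, 6789) == 12345 * 6789

The same split-and-recombine idea underlies Strassen's matrix multiplication, where seven block products replace eight.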

References

  1. ^ D. E. Knuth, The Art of Computer Programming, Vol. 2. Addison-Wesley, Reading (1969)
  2. ^ A. N. Kolmogorov, Information Theory and the Theory of Algorithms. Nauka, Moscow (1987)
  3. ^ A. A. Karatsuba, The Complexity of Computations. Proceedings of the Steklov Institute of Mathematics, Vol. 211 (1995)
  4. ^ V. Strassen, Gaussian Elimination is not Optimal. Numer. Math. 13 (1969)
  5. ^ A. Schönhage, A. F. W. Grotefeld, E. Vetter, Fast Algorithms. BI-Wissenschafts-Verlag, Zürich (1994)
