
Gradient descent


Gradient descent is an optimization algorithm that approaches a local minimum of a function by taking steps proportional to the negative of the gradient (or the approximate gradient) of the function at the current point. If instead one takes steps proportional to the gradient, one approaches a local maximum of that function; the procedure is then known as gradient ascent.

Gradient descent is also known as steepest descent, or the method of steepest descent; it should not be confused with the method of steepest descent for approximating integrals, which shares the same name.

Description of the method

We describe gradient descent here in terms of its equivalent (but opposite) cousin, gradient ascent. Gradient ascent is based on the observation that if the real-valued function $F(x)$ is defined and differentiable in a neighborhood of a point $a$, then $F$ increases fastest if one goes from $a$ in the direction of the gradient of $F$ at $a$, $\nabla F(a)$. It follows that, if

$$b = a + \gamma \nabla F(a)$$

for $\gamma > 0$ a small enough number, then $F(a) \leq F(b)$. With this observation in mind, one starts with a guess $x_0$ for a local maximum of $F$, and considers the sequence $x_0, x_1, x_2, \dots$ such that

$$x_{n+1} = x_n + \gamma_n \nabla F(x_n), \quad n \geq 0.$$

We have $F(x_0) \leq F(x_1) \leq F(x_2) \leq \dots$, so hopefully the sequence $(x_n)$ converges to the desired local maximum. Note that the value of the step size $\gamma$ is allowed to change at every iteration.
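As a concrete illustration of the iteration (the function, starting point, and step size below are chosen for demonstration and are not part of the article), gradient ascent can be written in a few lines of Python:

```python
# Minimal sketch of gradient ascent: maximize F(x) = -(x - 3)**2, whose
# gradient is F'(x) = -2*(x - 3), so the maximum lies at x = 3.
# All numeric choices here are illustrative.

def grad_F(x):
    return -2.0 * (x - 3.0)

x = 0.0        # initial guess x_0
gamma = 0.1    # step size, kept fixed here for simplicity
for n in range(50):
    x = x + gamma * grad_F(x)   # x_{n+1} = x_n + gamma * grad F(x_n)

print(x)  # approaches 3.0, the local (here also global) maximum
```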

Let us illustrate this process in the picture below. Here $F$ is assumed to be defined on the plane, and its graph is assumed to look like a hill. The blue curves are the contour lines, that is, the regions on which the value of $F$ is constant. A red arrow originating at a point shows the direction of the gradient at that point. Note that the gradient at a point is perpendicular to the contour line going through that point. We see that gradient ascent leads us to the top of the hill, that is, to the point where the value of the function $F$ is largest.

[Image: An illustration of the gradient descent method.]

To have gradient descent go towards a local minimum, one needs to replace $\gamma$ with $-\gamma$.
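A minimal sketch of this descent variant, using an assumed two-dimensional example $F(x, y) = x^2 + y^2$ (not from the article) whose minimum is at the origin:

```python
# Gradient descent on F(x, y) = x**2 + y**2, stepping against the
# gradient (2x, 2y). Starting point and step size are illustrative.

def grad_F(x, y):
    return 2.0 * x, 2.0 * y

x, y = 4.0, -3.0   # starting point
gamma = 0.1        # fixed step size, small enough for convergence here
for n in range(100):
    gx, gy = grad_F(x, y)
    x, y = x - gamma * gx, y - gamma * gy   # note the minus sign: descent

print(x, y)  # both coordinates approach 0, the minimum of F
```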

[Figures: The gradient descent method applied to an arbitrary function, shown as (1) a contour-line view and (2) a 3D surface view of the algorithm in action.]

Comments

Note that gradient descent works in spaces of any number of dimensions, even in infinite-dimensional ones.

Two weaknesses of gradient descent are:

  1. The algorithm can take many iterations to converge towards a local maximum/minimum if the curvature in different directions is very different (see the sketch after this list).
  2. Finding the optimal $\gamma$ per step can be time-consuming. Conversely, using a fixed $\gamma$ can yield poor results. Conjugate gradient is often a better alternative.
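The following sketch illustrates the first weakness on an assumed ill-conditioned quadratic, $F(x, y) = x^2 + 100 y^2$ (an illustrative example, not from the article): a fixed step size small enough to be stable along the steep $y$ direction makes progress along the shallow $x$ direction slow.

```python
# Gradient of F(x, y) = x**2 + 100*y**2 is (2x, 200y); the curvature
# differs by a factor of 100 between the two axes.

def grad_F(x, y):
    return 2.0 * x, 200.0 * y

x, y = 10.0, 1.0
gamma = 0.009      # must satisfy gamma < 2/200 = 0.01 to avoid divergence in y
steps = 0
while abs(x) > 1e-3 or abs(y) > 1e-3:
    gx, gy = grad_F(x, y)
    x, y = x - gamma * gx, y - gamma * gy
    steps += 1

print(steps)  # on the order of hundreds of iterations for a simple quadratic
```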

A more powerful algorithm is given by the BFGS method, which consists in calculating on every step a matrix by which the gradient vector is multiplied so as to go in a "better" direction, combined with a more sophisticated line search algorithm to find the "best" value of $\gamma$.
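In practice one would usually reach for an existing quasi-Newton implementation rather than writing one. The sketch below uses SciPy's BFGS solver on the Rosenbrock test function; both the library call and the test function are illustrative choices, not something discussed in the article.

```python
# Sketch: minimizing the Rosenbrock function with SciPy's BFGS method.
import numpy as np
from scipy.optimize import minimize

def rosenbrock(v):
    x, y = v
    return (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2

result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="BFGS")
print(result.x)    # close to the minimum at (1, 1)
print(result.nit)  # typically far fewer iterations than plain gradient descent
```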

See also