Gibbs algorithm


In statistical mechanics, the Gibbs algorithm, introduced by J. Willard Gibbs in 1902, is the injunction to choose a statistical ensemble (probability distribution) for the unknown microscopic state of a thermodynamic system by minimising the average log probability

    \langle \ln p \rangle = \sum_i p_i \ln p_i

subject to the probability distribution satisfying a set of constraints (usually expectation values) corresponding to the known macroscopic quantities.
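As a minimal numerical sketch of the procedure (an assumption for illustration, not from the article: the four microstate energies and the target mean energy below are made-up values), one can maximise the entropy of a finite system subject to a single constraint on the mean energy. The entropy-maximising distribution then takes the Gibbs form p_i ∝ exp(−λE_i), and the Lagrange multiplier λ is pinned down numerically by the constraint:

```python
# Sketch of the Gibbs algorithm for a finite system: maximise the
# entropy -sum_i p_i ln p_i subject to a known mean energy <E>.
import numpy as np
from scipy.optimize import brentq

energies = np.array([0.0, 1.0, 2.0, 3.0])  # hypothetical microstate energies
target_mean = 1.2                          # hypothetical known value of <E>

def mean_energy(lam):
    """Mean energy under the Gibbs distribution p_i ∝ exp(-lam * E_i)."""
    w = np.exp(-lam * energies)
    p = w / w.sum()
    return p @ energies

# <E> is monotone in lam, so a bracketing root-finder pins down the
# Lagrange multiplier that satisfies the macroscopic constraint.
lam = brentq(lambda l: mean_energy(l) - target_mean, -50.0, 50.0)

w = np.exp(-lam * energies)
p = w / w.sum()
entropy = -(p * np.log(p)).sum()
print(f"lambda = {lam:.4f}  p = {np.round(p, 4)}  entropy = {entropy:.4f}")
```

With several constraints, the same root-finding step becomes a multidimensional solve for one multiplier per constraint, but the structure of the solution is unchanged.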

In the light of Shannon's information theory, it was re-interpreted by Jaynes in 1957 as a much more general inference technique, leading to the principle of maximum entropy and the MaxEnt view of thermodynamics.

Physicists sometimes call the result of applying the Gibbs algorithm the Gibbs distribution for the given constraints, though it is now more often called a maximum entropy distribution. Statisticians recognise such distributions as belonging to exponential families.
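The exponential-family form follows from the standard Lagrange-multiplier derivation; the sketch below uses generic constraint functions f_k and multipliers λ_k, notation introduced here for illustration rather than taken from the article:

```latex
% Maximise S = -\sum_i p_i \ln p_i subject to normalisation and to
% expectation constraints \sum_i p_i f_k(i) = F_k (generic notation).
\mathcal{L} = -\sum_i p_i \ln p_i
            - \mu \Big( \sum_i p_i - 1 \Big)
            - \sum_k \lambda_k \Big( \sum_i p_i f_k(i) - F_k \Big)

% Stationarity, \partial \mathcal{L} / \partial p_i = 0, gives
-\ln p_i - 1 - \mu - \sum_k \lambda_k f_k(i) = 0

% so the Gibbs (maximum entropy) distribution has exponential-family form:
p_i = \frac{1}{Z(\lambda)} \exp\!\Big( -\sum_k \lambda_k f_k(i) \Big),
\qquad
Z(\lambda) = \sum_i \exp\!\Big( -\sum_k \lambda_k f_k(i) \Big).
```

Taking a single constraint function f(i) = E_i recovers the canonical ensemble, with Z the familiar partition function.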

Not to be confused with the Gibbs sampler, an update algorithm used in Markov chain Monte Carlo iterations.