Continuous-time Markov chain
In probability theory, a continuous-time Markov chain (CTMC[1] or continuous-time Markov process[2]) is a mathematical model which takes values in some finite or countable set and for which the time spent in each state takes non-negative real values and has an exponential distribution. It is a random process with the Markov property, which means that the future behaviour of the model (both the remaining time in the current state and the next state) depends only on the current state of the model and not on its past behaviour.
Definitions
A continuous-time Markov chain (Xt)t ≥ 0 is defined by a finite or countable state space S, a transition rate matrix Q with dimensions equal to that of the state space, and an initial probability distribution defined on the state space. For i ≠ j, the elements qij are non-negative and describe the rate at which the process transitions from state i to state j. The elements qii are chosen such that each row of the transition rate matrix sums to zero.
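These defining properties of Q can be checked mechanically. The following is a minimal sketch, assuming numpy and an illustrative three-state rate matrix that is not taken from this article:

```python
import numpy as np

# Assumed example rate matrix on the state space {0, 1, 2}.
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.4, -0.6,  0.2],
              [ 0.1,  0.1, -0.2]])

off_diag = Q[~np.eye(3, dtype=bool)]
assert np.all(off_diag >= 0)            # rates q_ij >= 0 for i != j
assert np.allclose(Q.sum(axis=1), 0.0)  # each row sums to zero
```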
There are three equivalent definitions of the process.[3]
Infinitesimal definition
Let Xt be the random variable describing the state of the process at time t, and assume that the process is in a state i at time t. Then Xt + h is independent of previous values (Xs : s ≤ t), and as h → 0, uniformly in t, for all j,

\Pr(X_{t+h} = j \mid X_t = i) = \delta_{ij} + q_{ij} h + o(h)

using little-o notation, where δij is the Kronecker delta. The qij can be seen as measuring how quickly the transition from i to j happens.
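This characterisation can be verified numerically: e^{hQ} gives the exact transition probabilities over a step of length h, so the error of the first-order approximation I + hQ should vanish faster than h. A sketch, assuming scipy and a made-up two-state generator:

```python
import numpy as np
from scipy.linalg import expm

# Assumed example generator: a two-state chain with rates 2 and 1.
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])

for h in [1e-1, 1e-2, 1e-3]:
    P_h = expm(h * Q)           # exact transition matrix over a step h
    approx = np.eye(2) + h * Q  # first-order approximation I + hQ
    err = np.max(np.abs(P_h - approx))
    print(f"h={h:g}: max |P(h) - (I + hQ)| = {err:.2e}")  # shrinks like O(h^2)
```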
Jump chain/holding time definition
Define a discrete-time Markov chain Yn to describe the nth jump of the process and variables S1, S2, S3, ... to describe holding times in each of the states, where Si follows the exponential distribution with rate parameter −qYiYi.
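This construction translates directly into a simulation scheme: wait an exponential time with rate −qii, then jump to state j with probability qij/(−qii). A sketch, assuming numpy and an illustrative generator not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example generator on states {0, 1, 2}.
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.4, -0.6,  0.2],
              [ 0.1,  0.1, -0.2]])

def simulate_ctmc(Q, state, t_end):
    """Simulate one path via the jump chain / holding time construction."""
    t, path = 0.0, [(0.0, state)]
    while True:
        rate = -Q[state, state]            # holding time in `state` ~ Exp(rate)
        t += rng.exponential(1.0 / rate)
        if t >= t_end:
            return path
        probs = Q[state].copy()
        probs[state] = 0.0
        probs /= rate                      # jump chain: P(i -> j) = q_ij / (-q_ii)
        state = rng.choice(len(Q), p=probs)
        path.append((t, state))

print(simulate_ctmc(Q, state=0, t_end=20.0))
```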
Transition probability definition
For any value n = 0, 1, 2, 3, ..., times indexed up to this value of n: t0, t1, t2, ..., and all states recorded at these times i0, i1, i2, i3, ..., it holds that

\Pr(X_{t_{n+1}} = i_{n+1} \mid X_{t_0} = i_0, \ldots, X_{t_n} = i_n) = p_{i_n i_{n+1}}(t_{n+1} - t_n)

where pij is the solution of the forward equation (a first-order differential equation)

P'(t) = P(t) Q

where P(0) is the identity matrix.
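The forward equation can also be integrated numerically. A sketch, assuming scipy and an illustrative generator, comparing the ODE solution at t = 2 with the matrix exponential discussed below:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

# Assumed example generator.
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.4, -0.6,  0.2],
              [ 0.1,  0.1, -0.2]])
n = Q.shape[0]

# Integrate the forward equation P'(t) = P(t) Q with P(0) = I,
# flattening the matrix so solve_ivp can handle it.
def forward(t, p_flat):
    return (p_flat.reshape(n, n) @ Q).ravel()

sol = solve_ivp(forward, (0.0, 2.0), np.eye(n).ravel(), rtol=1e-10, atol=1e-12)
P_ode = sol.y[:, -1].reshape(n, n)

print(np.max(np.abs(P_ode - expm(2.0 * Q))))  # agreement up to solver tolerance
```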
Properties
Irreducibility
Recurrence and transience
Transient behaviour
Write P(t) for the matrix with entries pij = P(Xt = j | X0 = i). Then the matrix P(t) satisfies the forward equation, a first-order differential equation

P'(t) = P(t) Q

where the prime denotes differentiation with respect to t. This has solution given by a matrix exponential

P(t) = e^{tQ}

defined by the power series

e^{tQ} = \sum_{k=0}^{\infty} \frac{t^k Q^k}{k!}
In a simple case such as a CTMC on the state space {1, 2}, the general Q matrix for such a process is the following 2 × 2 matrix with α, β > 0:

Q = \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix}

The above relation for the forward matrix can be solved explicitly in this case to give

P(t) = \frac{1}{\alpha + \beta} \begin{pmatrix} \beta + \alpha e^{-(\alpha+\beta)t} & \alpha - \alpha e^{-(\alpha+\beta)t} \\ \beta - \beta e^{-(\alpha+\beta)t} & \alpha + \beta e^{-(\alpha+\beta)t} \end{pmatrix}
However, direct solutions are complicated to compute for larger matrices. Instead, the fact that Q is the generator for a semigroup of matrices,

P(t + s) = P(t) P(s),

is used.
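As a check on the closed form above, one might compare it with a numerically computed matrix exponential. A sketch assuming scipy, with example rates α = 2 and β = 1:

```python
import numpy as np
from scipy.linalg import expm

alpha, beta = 2.0, 1.0  # assumed example rates
Q = np.array([[-alpha, alpha],
              [ beta, -beta]])

t = 0.7
e = np.exp(-(alpha + beta) * t)
# Closed-form P(t) for the two-state chain, as derived above.
P_exact = np.array([[beta + alpha * e, alpha - alpha * e],
                    [beta - beta * e,  alpha + beta * e]]) / (alpha + beta)

print(np.allclose(P_exact, expm(t * Q)))  # True
```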
Stationary distribution
The stationary distribution for an irreducible recurrent CTMC is the probability distribution to which the process converges for large values of t. Observe that for the two-state process considered earlier, with P(t) given by the explicit expression above, as t → ∞ the distribution tends to

P(t) \to \frac{1}{\alpha + \beta} \begin{pmatrix} \beta & \alpha \\ \beta & \alpha \end{pmatrix}

Observe that each row has the same distribution, as the limit does not depend on the starting state. The row vector π may be found by solving[4]

\pi Q = 0

with the additional constraint that

\sum_{i \in S} \pi_i = 1
Example

[Image: state-transition diagram of the three-state market chain]

The image describes a continuous-time Markov chain with state space {Bull market, Bear market, Stagnant market} and transition rate matrix

Q = \begin{pmatrix} -0.025 & 0.02 & 0.005 \\ 0.3 & -0.5 & 0.2 \\ 0.02 & 0.4 & -0.42 \end{pmatrix}

The stationary distribution of this chain can be found by solving π Q = 0, subject to the constraint that elements must sum to 1, to obtain

\pi = \begin{pmatrix} 0.885 & 0.071 & 0.044 \end{pmatrix}
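One way to carry out this computation is a linear solve in which one redundant balance equation is replaced by the normalization constraint. A sketch, assuming numpy:

```python
import numpy as np

# Transition rate matrix of the three-state market chain.
Q = np.array([[-0.025, 0.02,  0.005],
              [ 0.3,  -0.5,   0.2  ],
              [ 0.02,  0.4,  -0.42 ]])

# Solve pi Q = 0 together with sum(pi) = 1 by replacing one
# (redundant) balance equation with the normalization constraint.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi.round(3))  # approximately [0.885, 0.071, 0.044]
```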
Hitting times
Time reversal
For a CTMC Xt, the time-reversed process is defined to be \hat{X}_t = X_{T-t} for a fixed time T > 0. By Kelly's lemma this process has the same stationary distribution as the forward process.
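Kelly's lemma can be illustrated numerically. The sketch below assumes the standard formula for the reversed chain's rates, \hat q_{ij} = \pi_j q_{ji} / \pi_i (not stated in this article), applied to the market generator from the example:

```python
import numpy as np

Q = np.array([[-0.025, 0.02,  0.005],
              [ 0.3,  -0.5,   0.2  ],
              [ 0.02,  0.4,  -0.42 ]])

# Stationary distribution, computed as in the example above.
A = np.vstack([Q.T[:-1], np.ones(3)])
pi = np.linalg.solve(A, [0.0, 0.0, 1.0])

# Rates of the reversed chain: q_hat[i, j] = pi[j] * q[j, i] / pi[i].
Q_hat = (Q.T * pi) / pi[:, None]

print(Q_hat.sum(axis=1).round(12))                 # rows sum to zero: a valid generator
print(np.allclose(pi @ Q_hat, 0.0, atol=1e-10))    # same stationary distribution
```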
Embedded Markov chain
One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov chain, Q, is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by sij, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by

s_{ij} = \begin{cases} \dfrac{q_{ij}}{\sum_{k \neq i} q_{ik}} & \text{if } i \neq j, \\ 0 & \text{otherwise.} \end{cases}
From this, S may be written as

S = I - \left( \operatorname{diag}(Q) \right)^{-1} Q

where I is the identity matrix and diag(Q) is the diagonal matrix formed by selecting the main diagonal from the matrix Q and setting all other elements to zero.
To find the stationary probability distribution vector, we must next find \varphi such that

\varphi (S - I) = 0

with \varphi being a row vector, such that all elements in \varphi are greater than 0 and \|\varphi\|_1 = 1, and the 0 on the right side also being a row vector of 0's. From this, π may be found as

\pi = \frac{-\varphi \left( \operatorname{diag}(Q) \right)^{-1}}{\left\| \varphi \left( \operatorname{diag}(Q) \right)^{-1} \right\|_1}
Note that S may be periodic, even if Q is not. Once π is found, it must be normalized to a unit vector.
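Putting this procedure together for the market generator from the example gives the following sketch (assuming numpy; the eigenvector step is one of several ways to find φ):

```python
import numpy as np

Q = np.array([[-0.025, 0.02,  0.005],
              [ 0.3,  -0.5,   0.2  ],
              [ 0.02,  0.4,  -0.42 ]])

# Jump-chain transition matrix S = I - diag(Q)^{-1} Q:
# zero diagonal, off-diagonal entries q_ij / (-q_ii).
D_inv = np.diag(1.0 / np.diag(Q))
S = np.eye(3) - D_inv @ Q

# phi solves phi (S - I) = 0 with entries summing to 1:
# take the left eigenvector of S for eigenvalue 1.
w, v = np.linalg.eig(S.T)
phi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
phi /= phi.sum()

# Weight by expected holding times 1/(-q_ii) and renormalize.
pi = -phi @ D_inv
pi /= pi.sum()
print(pi.round(3))  # approximately [0.885, 0.071, 0.044]
```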
Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton.
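Since P(δ) = e^{δQ}, the one-step transition matrix of the δ-skeleton is a matrix exponential. A brief sketch assuming scipy, reusing the market generator:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.025, 0.02,  0.005],
              [ 0.3,  -0.5,   0.2  ],
              [ 0.02,  0.4,  -0.42 ]])

delta = 0.5
P_skeleton = expm(delta * Q)   # one-step matrix of the delta-skeleton DTMC
print(P_skeleton.round(4))
print(P_skeleton.sum(axis=1))  # each row sums to 1: a stochastic matrix
```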
Applications
Markov chains are used to describe physical processes where a system evolves in continuous time. Sometimes, rather than a single system, they are applied to an ensemble of identical, independent systems, and the probabilities are used to find how many members of the ensemble are in a given state. A master equation treatment is often used to analyse systems that evolve as Markov chains[citation needed], with approximations possible for complicated systems[citation needed].
Chemical reactions
Imagine a large number n of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the expected number of molecules in state A or B at a time is n times the probability that a given molecule is in that state.
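A sketch of this reasoning, with made-up rates: since each molecule evolves independently, the number in state B at time t is binomial with success probability taken from the single-molecule transition matrix.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Assumed rates: A -> B at rate 2, B -> A at rate 1.
k_ab, k_ba = 2.0, 1.0
Q = np.array([[-k_ab, k_ab],
              [ k_ba, -k_ba]])

n, t = 10_000, 0.4
p_B = expm(t * Q)[0, 1]  # P(molecule in B at time t | started in A)

# For independent molecules a direct draw from P(t) replaces path simulation:
# each of the n molecules is in B with probability p_B.
count_B = rng.binomial(n, p_B)
print(count_B, n * p_B)  # simulated count vs. expected n * p_B
```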
Queueing theory
Numerous queueing models use continuous-time Markov chains. For example, an M/M/1 queue is a CTMC on the non-negative integers where upward transitions from i to i + 1 occur at rate λ according to a Poisson process and describe job arrivals, while transitions from i to i − 1 (for i ≥ 1) occur at rate μ (job service times are exponentially distributed) and describe completed services (departures) from the queue.
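The stationary distribution of such a queue can be approximated by truncating the state space. The following sketch (assuming numpy, with made-up rates λ = 1 and μ = 2) compares the result with the well-known geometric form (1 − ρ)ρ^i, where ρ = λ/μ:

```python
import numpy as np

lam, mu, N = 1.0, 2.0, 60  # assumed rates, truncation level N
rho = lam / mu

# Truncated M/M/1 generator on states 0..N: arrivals at rate lam, services at mu.
Q = np.zeros((N + 1, N + 1))
for i in range(N):
    Q[i, i + 1] = lam        # arrival: i -> i + 1
    Q[i + 1, i] = mu         # service completion: i + 1 -> i
np.fill_diagonal(Q, -Q.sum(axis=1))

# Stationary distribution: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T[:-1], np.ones(N + 1)])
b = np.zeros(N + 1)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

print(pi[:5].round(4))                         # numerical stationary mass
print([(1 - rho) * rho**i for i in range(5)])  # geometric (1 - rho) rho^i
```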
Extensions
A time dependent (time heterogeneous) CTMC is as above, but with the transition rate matrix being a function of time, Q(t).
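No solution method is given here, but the forward equation P'(t) = P(t)Q(t) still holds and can be integrated numerically. A sketch assuming scipy, with a hypothetical oscillating rate:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-state chain whose upward rate oscillates in time.
def Q_of_t(t):
    a = 1.0 + 0.5 * np.sin(t)  # assumed time-varying rate alpha(t)
    return np.array([[-a, a],
                     [1.0, -1.0]])

def forward(t, p_flat):
    # Forward equation P'(t) = P(t) Q(t), with P flattened for the solver.
    return (p_flat.reshape(2, 2) @ Q_of_t(t)).ravel()

sol = solve_ivp(forward, (0.0, 3.0), np.eye(2).ravel(), rtol=1e-9)
P = sol.y[:, -1].reshape(2, 2)
print(P, P.sum(axis=1))  # rows still sum to 1
```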
See also
- Markov chain
- Master equation (physics)
- Phase-type distribution
- Semi-Markov process
- Variable-order Markov model
- Spectral expansion solution
- Matrix geometric solution method
References
- ^ doi:10.1007/3-540-48320-9_12
- ^ doi:10.1287/opre.27.3.616
- ^ Norris, J. R. (1997). Markov Chains. Cambridge University Press. doi:10.1017/CBO9780511810633.004
- ^ Norris, J. R. (1997). Markov Chains. Cambridge University Press. doi:10.1017/CBO9780511810633.005
- William J. Stewart (1994). Introduction to the Numerical Solution of Markov Chains. Princeton University Press. pp. 17–23. ISBN 0-691-03699-3.