Continuous-Time Markov Chain
1. Definitions
Definition 1. We say the process \(\{X(t),t\ge0\}\) is a continuous-time Markov chain if for all \(s,t\ge0\) and nonnegative integers \(i,j,x(u),0\le u<s\),
\[P\{X(t+s)=j\mid X(s)=i,X(u)=x(u),0\le u<s\}=P\{X(t+s)=j\mid X(s)=i\}.\]
If, in addition, the transition probability
\[P\{X(t+s)=j\mid X(s)=i\}\]
is independent of \(s\), the process is said to have stationary or homogeneous transition probabilities.
The amount of time the process spends in a state \(i\), from the time it enters state \(i\) to the time it transitions into a different state, is exponentially distributed with parameter \(v_i\).
2. Birth and Death Process
Definition 2. A birth and death process is a continuous-time Markov chain with states \(\{0,1,...\}\) for which transitions from state \(n\) may go only to either state \(n-1\) or state \(n+1\).
Denote the birth rates by \(\lambda_n\) and the death rates by \(\mu_n\); that is, when the process is in state \(n\), the time until the next birth is exponential with rate \(\lambda_n\) and, independently, the time until the next death is exponential with rate \(\mu_n\), so that
\[v_n=\lambda_n+\mu_n,\qquad P_{n,n+1}=\frac{\lambda_n}{\lambda_n+\mu_n}=1-P_{n,n-1}.\]
The next state will be \(i+1\) if a birth occurs before a death, and the probability that an exponential random variable with rate \(\lambda_i\) will occur earlier than an independent exponential random variable with rate \(\mu_i\) is \(\lambda_i/(\lambda_i+\mu_i)\).
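This competing-exponentials fact is easy to check by simulation. Below is a minimal Monte Carlo sketch; the rates \(\lambda=3\) and \(\mu=2\) are illustrative values, not from the text:

```python
import random

random.seed(42)
lam, mu = 3.0, 2.0     # illustrative birth and death rates
n = 200_000

# Count how often the "birth clock" Exp(lam) rings before the "death clock" Exp(mu).
births_first = sum(
    random.expovariate(lam) < random.expovariate(mu) for _ in range(n)
)
estimate = births_first / n
exact = lam / (lam + mu)           # = 0.6
print(estimate, exact)
```

With 200,000 samples the estimate lands within about \(\pm0.005\) of the exact value \(\lambda/(\lambda+\mu)=0.6\).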
Example 1. (A Linear Growth Model with Immigration) A model in which
\[\mu_n=n\mu,\quad n\ge1,\qquad \lambda_n=n\lambda+\theta,\quad n\ge0\]
is called a linear growth model with immigration. Let \(X(t)\) denote the population size at time \(t\).
Suppose \(X(0)=i\), and let \(M(t)=E[X(t)]\). We can determine \(M(t)\) by deriving and solving a differential equation. Let \(h\) be a sufficiently small period of time; by the properties of a continuous-time Markov chain, conditioning on \(X(t)\) gives
\[E[X(t+h)\mid X(t)]=X(t)+(\lambda X(t)+\theta)h-\mu X(t)h+o(h).\]
Taking expectations,
\[M(t+h)=M(t)+(\lambda-\mu)M(t)h+\theta h+o(h),\]
or, equivalently,
\[\frac{M(t+h)-M(t)}{h}=(\lambda-\mu)M(t)+\theta+\frac{o(h)}{h},\]
so, letting \(h\to0\), we get the derivative of \(M(t)\):
\[M'(t)=(\lambda-\mu)M(t)+\theta.\]
By solving the differential equation we have
\[M(t)=\frac{\theta}{\lambda-\mu}\left[e^{(\lambda-\mu)t}-1\right]+ie^{(\lambda-\mu)t}.\]
Note that we have implicitly assumed that \(\lambda\ne\mu\).
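As a sanity check on the closed-form solution, the sketch below integrates the differential equation \(M'(t)=(\lambda-\mu)M(t)+\theta\) with a small Euler step and compares the result at \(t=1\) against the formula. The parameter values are arbitrary, chosen only for illustration:

```python
import math

# Illustrative parameters: per-individual birth/death rates, immigration
# rate, and initial population size (none of these come from the text).
lam, mu, theta, i0 = 2.0, 1.0, 0.5, 10

def M_closed(t):
    # M(t) = (theta/(lam-mu)) * (e^{(lam-mu)t} - 1) + i0 * e^{(lam-mu)t}
    a = lam - mu
    return (theta / a) * (math.exp(a * t) - 1.0) + i0 * math.exp(a * t)

# Integrate M'(t) = (lam - mu) * M(t) + theta with a small Euler step to t = 1.
steps = 100_000
h = 1.0 / steps
M = float(i0)
for _ in range(steps):
    M += h * ((lam - mu) * M + theta)

print(M, M_closed(1.0))
```

The Euler approximation agrees with the closed form to about four decimal places at this step size.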
Example 2. (An M/M/s Queueing Model) Consider a service station with \(s\) servers, where the times between successive arrivals of customers are independent exponential random variables having mean \(1/\lambda\). Upon arrival, a customer joins the queue if there is no available server. The service times of the customers are assumed to be independent exponential random variables having mean \(1/\mu\).
This is actually a birth and death process with parameters
\[\lambda_n=\lambda,\quad n\ge0,\qquad \mu_n=\begin{cases}n\mu,&1\le n\le s,\\ s\mu,&n>s.\end{cases}\]
Let \(T_i\) denote the time, starting from state \(i\), that it takes for the process to enter state \(i+1\), \(i\ge0\). Obviously, \(E[T_0] = 1/\lambda\). Let
\[I_i=\begin{cases}1,&\text{if the first transition from }i\text{ is to }i+1,\\0,&\text{if the first transition from }i\text{ is to }i-1,\end{cases}\]
and note that
\[E[T_i\mid I_i=1]=\frac{1}{\lambda_i+\mu_i},\qquad E[T_i\mid I_i=0]=\frac{1}{\lambda_i+\mu_i}+E[T_{i-1}]+E[T_i].\]
That is, if the first transition from \(i\) is to \(i+1\), then no additional time is needed, and the time until it occurs is exponential with rate \(\lambda_i+\mu_i\). If the first transition from \(i\) is to \(i-1\), the time until that transition also has expectation \(1/(\lambda_i+\mu_i)\), but additional time is needed to get back to \(i\), namely \(E[T_{i-1}]\), and then to go from \(i\) to \(i+1\), namely \(E[T_i]\).
Hence, since the probability that the first transition is a birth is \(\lambda_i/(\lambda_i+\mu_i)\), we see that
\[E[T_i]=\frac{1}{\lambda_i+\mu_i}+\frac{\mu_i}{\lambda_i+\mu_i}\left(E[T_{i-1}]+E[T_i]\right),\]
or, equivalently,
\[E[T_i]=\frac{1}{\lambda_i}+\frac{\mu_i}{\lambda_i}E[T_{i-1}],\qquad i\ge1.\]
Starting with \(E[T_0] =1/\lambda\), we can recursively calculate all \(E[T_i]\).
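The recursion is straightforward to implement. The sketch below uses assumed M/M/s parameters (\(\lambda=1\), \(\mu=1.5\), \(s=3\), chosen for illustration) and computes \(E[T_i]\) for the first few states; summing consecutive terms gives the expected time to climb from state 0 to a given state:

```python
# Assumed M/M/s parameters for illustration.
lam, mu, s = 1.0, 1.5, 3

def birth(i):                 # lam_i = lam for all i
    return lam

def death(i):                 # mu_i = min(i, s) * mu
    return min(i, s) * mu

# E[T_0] = 1/lam, then E[T_i] = 1/lam_i + (mu_i/lam_i) * E[T_{i-1}].
ET = [1.0 / birth(0)]
for i in range(1, 10):
    ET.append(1.0 / birth(i) + (death(i) / birth(i)) * ET[i - 1])

# Expected time to go from state 0 to state 5 is the sum of the first five terms.
print(sum(ET[:5]))
```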
Example 3. Consider the birth and death process with parameters \(\lambda_i\equiv\lambda,\ \mu_i\equiv\mu\). Using the conclusion from Example 2, we have
\[E[T_i]=\frac{1}{\lambda}+\frac{\mu}{\lambda}E[T_{i-1}].\]
So
\[E[T_i]=\frac{1}{\lambda}\sum_{k=0}^{i}\left(\frac{\mu}{\lambda}\right)^k=\frac{1-(\mu/\lambda)^{i+1}}{\lambda-\mu},\qquad \lambda\ne\mu.\]
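For the constant-rate case we can verify numerically that the recursion and the closed form agree; the rates \(\lambda=2\), \(\mu=1\) below are arbitrary:

```python
lam, mu = 2.0, 1.0            # arbitrary rates with lam != mu

# Recursive values from E[T_i] = 1/lam + (mu/lam) * E[T_{i-1}], E[T_0] = 1/lam.
ET = [1.0 / lam]
for i in range(1, 8):
    ET.append(1.0 / lam + (mu / lam) * ET[i - 1])

# Closed form: E[T_i] = (1 - (mu/lam)**(i+1)) / (lam - mu).
closed = [(1 - (mu / lam) ** (i + 1)) / (lam - mu) for i in range(8)]

agree = all(abs(a - b) < 1e-12 for a, b in zip(ET, closed))
print(agree)   # → True
```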
3. The Transition Probability Function \(P_{ij}(t)\)
Let
\[P_{ij}(t)=P\{X(t+s)=j\mid X(s)=i\}\]
denote the probability that a process presently in state \(i\) will be in state \(j\) a time \(t\) later.
Let
\[q_{ij}=v_iP_{ij}\]
where \(v_i\) is the rate at which the process makes a transition when in state \(i\), so that \(q_{ij}\) is the rate, when in state \(i\), at which the process makes a transition into state \(j\). We have
\[\lim_{h\to0}\frac{1-P_{ii}(h)}{h}=v_i,\qquad \lim_{h\to0}\frac{P_{ij}(h)}{h}=q_{ij},\quad i\ne j.\]
Chapman-Kolmogorov Equation For all \(s\ge0,t\ge0\),
\[P_{ij}(t+s)=\sum_{k=0}^{\infty}P_{ik}(t)P_{kj}(s).\]
From the equation, we obtain
\[P_{ij}(h+t)-P_{ij}(t)=\sum_{k\ne i}P_{ik}(h)P_{kj}(t)-\left[1-P_{ii}(h)\right]P_{ij}(t),\]
and thus
\[\lim_{h\to0}\frac{P_{ij}(h+t)-P_{ij}(t)}{h}=\sum_{k\ne i}q_{ik}P_{kj}(t)-v_iP_{ij}(t).\]
Hence, we have the following theorem.
Kolmogorov's Backward Equations For all states \(i,j\), and times \(t\ge0\),
\[P'_{ij}(t)=\sum_{k\ne i}q_{ik}P_{kj}(t)-v_iP_{ij}(t).\]
Example 4. (A Continuous-Time Markov Chain Consisting of Two States) Consider a machine that works for an exponential amount of time having mean \(1/\lambda\) before breaking down, and suppose it takes an exponential amount of time having mean \(1/\mu\) to repair it. If the machine is working at time 0, what is the probability that it will be working at time \(t\)?
Using Kolmogorov's backward equations (with state 0 meaning the machine is working and state 1 that it is being repaired), we have
\[P'_{00}(t)=\lambda\left[P_{10}(t)-P_{00}(t)\right],\qquad P'_{10}(t)=\mu\left[P_{00}(t)-P_{10}(t)\right].\tag{3.1}\]
Obviously, the above two equations are equivalent to
\[\mu P'_{00}(t)+\lambda P'_{10}(t)=0.\]
Integrating both sides, we have
\[\mu P_{00}(t)+\lambda P_{10}(t)=c.\]
As \(P_{00}(0)=1,P_{10}(0)=0\), we get \(c=\mu\). Substituting \(P_{10}(t)=\frac{\mu}{\lambda}\left[1-P_{00}(t)\right]\) into the first equation in (3.1), we obtain the differential equation
\[P'_{00}(t)=\mu-(\lambda+\mu)P_{00}(t).\]
By solving the differential equation with the condition \(P_{00}(0)=1\), we have
\[P_{00}(t)=\frac{\mu}{\lambda+\mu}+\frac{\lambda}{\lambda+\mu}e^{-(\lambda+\mu)t}.\]
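We can verify numerically that this solution satisfies the backward equations, using a central finite difference for the derivative; the rates below are illustrative:

```python
import math

lam, mu = 0.5, 2.0     # illustrative breakdown and repair rates

def P00(t):
    return mu / (lam + mu) + lam / (lam + mu) * math.exp(-(lam + mu) * t)

def P10(t):
    return mu / (lam + mu) - mu / (lam + mu) * math.exp(-(lam + mu) * t)

# Backward equation: P00'(t) = lam * (P10(t) - P00(t)).
# Approximate the derivative with a central finite difference at t = 0.7.
t, h = 0.7, 1e-6
lhs = (P00(t + h) - P00(t - h)) / (2 * h)
rhs = lam * (P10(t) - P00(t))
print(abs(lhs - rhs) < 1e-6)   # → True
```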
Using the Chapman-Kolmogorov equations, we have
\[P_{ij}(t+h)-P_{ij}(t)=\sum_{k\ne j}P_{ik}(t)P_{kj}(h)-\left[1-P_{jj}(h)\right]P_{ij}(t),\]
and thus, dividing by \(h\) and letting \(h\to0\), we obtain the
Kolmogorov's Forward Equations For all states \(i,j\), and times \(t\ge0\),
\[P'_{ij}(t)=\sum_{k\ne j}q_{kj}P_{ik}(t)-v_jP_{ij}(t).\]
Proposition 1. For a pure birth process,
\[P_{ii}(t)=e^{-\lambda_it},\qquad i\ge0,\]
\[P_{ij}(t)=\lambda_{j-1}e^{-\lambda_jt}\int_0^te^{\lambda_js}P_{i,j-1}(s)\,ds,\qquad j\ge i+1.\]
Proof For a pure birth process, Kolmogorov's forward equations give
\[P'_{ii}(t)=-\lambda_iP_{ii}(t),\qquad P'_{ij}(t)=\lambda_{j-1}P_{i,j-1}(t)-\lambda_jP_{ij}(t),\quad j\ge i+1.\]
The first equation is quite obvious; for the second equation we have
\[P'_{ij}(t)+\lambda_jP_{ij}(t)=\lambda_{j-1}P_{i,j-1}(t),\]
or, equivalently,
\[\frac{d}{dt}\left[e^{\lambda_jt}P_{ij}(t)\right]=\lambda_{j-1}e^{\lambda_jt}P_{i,j-1}(t).\]
Hence, since \(P_{ij}(0)=0\), we obtain the desired results.
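To see the proposition in action, the sketch below evaluates the recursive integral numerically for a Yule process (\(\lambda_n=n\lambda\), started in state 1) and compares \(P_{12}(t)\) against the well-known Yule closed form \(P_{1j}(t)=e^{-\lambda t}(1-e^{-\lambda t})^{j-1}\); the value \(\lambda=0.8\) is arbitrary:

```python
import math

lam = 0.8      # arbitrary Yule-process rate; lam_n = n * lam

def P12_closed(t):
    # Yule process started at 1: P_{1j}(t) = e^{-lam t}(1 - e^{-lam t})^{j-1}.
    return math.exp(-lam * t) * (1 - math.exp(-lam * t))

def P12_recursive(t, steps=100_000):
    # P_{12}(t) = lam_1 e^{-lam_2 t} * int_0^t e^{lam_2 s} P_{11}(s) ds,
    # with lam_1 = lam, lam_2 = 2*lam, P_{11}(s) = e^{-lam s};
    # the integral is evaluated by the trapezoidal rule.
    f = lambda s: math.exp(2 * lam * s) * math.exp(-lam * s)
    h = t / steps
    integral = (f(0.0) + f(t)) / 2 + sum(f(k * h) for k in range(1, steps))
    return lam * math.exp(-2 * lam * t) * integral * h

t = 1.5
print(abs(P12_recursive(t) - P12_closed(t)) < 1e-6)   # → True
```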
4. Limiting Probabilities
The probability that a continuous-time Markov chain will be in state \(j\) at time \(t\) often converges to a limiting value that is independent of the initial state. That is, if we call this value \(P_j\), then
\[P_j\equiv\lim_{t\to\infty}P_{ij}(t),\]
where we are assuming that the limit exists and is independent of the initial state \(i\).
Consider the forward equations
\[P'_{ij}(t)=\sum_{k\ne j}q_{kj}P_{ik}(t)-v_jP_{ij}(t).\]
Now, if we let \(t\to\infty\), then, assuming that we can interchange limit and summation, we obtain
\[\lim_{t\to\infty}P'_{ij}(t)=\sum_{k\ne j}q_{kj}P_k-v_jP_j.\]
As \(P_{ij}(t)\) is a bounded function, if \(P'_{ij}(t)\) converges, then it must converge to \(0\); hence we have
\[v_jP_j=\sum_{k\ne j}q_{kj}P_k,\qquad \sum_jP_j=1.\tag{4.1}\]
Thus we can solve for the limiting probabilities.
Sufficient Conditions for the Existence of Limiting Probabilities:
- all states of the Markov chain communicate in the sense that starting in state \(i\) there is a positive probability of ever being in state \(j\), for all \(i,j\) and
- the Markov chain is positive recurrent in the sense that, starting in any state, the mean time to return to the state is finite.
If the above two conditions hold, then the limiting probabilities will exist and satisfy Equations (4.1). In addition, \(P_j\) also has the interpretation of being the long-run proportion of time that the process is in state \(j\).
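For a birth and death process, the balance equations reduce to \(\lambda_nP_n=\mu_{n+1}P_{n+1}\), which can be solved by forward substitution and normalization. The sketch below does this for an M/M/1 queue (assumed rates \(\lambda=1\), \(\mu=2\); the state space is truncated at a large \(N\) for the numerics) and checks the result against the known geometric answer \(P_n=(1-\rho)\rho^n\):

```python
# M/M/1 rates (assumed for illustration); truncate the state space at N.
lam, mu, N = 1.0, 2.0, 200

# Balance: lam * P_n = mu * P_{n+1}  =>  P_n proportional to (lam/mu)**n.
weights = [1.0]
for n in range(1, N):
    weights.append(weights[-1] * lam / mu)
Z = sum(weights)
P = [w / Z for w in weights]

# Known geometric answer for rho = lam/mu < 1: P_n = (1 - rho) * rho**n.
rho = lam / mu
ok = all(abs(P[n] - (1 - rho) * rho ** n) < 1e-9 for n in range(10))
print(ok)   # → True
```

The truncation error is of order \(\rho^N\), which is negligible here.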
5. Time Reversibility
Consider a continuous-time Markov chain whose limiting probabilities exist. If we consider the sequence of states visited, ignoring the amount of time spent in each state during a visit, then this sequence constitutes a discrete-time Markov chain with transition probabilities \(P_{ij}\). We call this discrete-time Markov chain the embedded chain. Its limiting probabilities \(\pi_i\), which we have discussed before, satisfy
\[\pi_i=\sum_j\pi_jP_{ji},\qquad \sum_i\pi_i=1.\]
Since \(\pi_i\) represents the proportion of transitions that take the process into state \(i\), and because \(1/v_i\) is the mean time spent in state \(i\) during a visit, it seems intuitive that \(P_i\), the proportion of time in state \(i\), should be a weighted average of the \(\pi_i\), where \(\pi_i\) is weighted proportionately to \(1/v_i\). That is,
\[P_i=\frac{\pi_i/v_i}{\sum_j\pi_j/v_j}.\]
Suppose now that a continuous-time Markov chain has been in operation for a long time, and suppose that, starting at some large time \(T\), we trace the process going backward in time. If the process is in state \(i\) at some time \(t\), the probability that it has been in that state for an amount of time greater than \(s\) is \(e^{-v_is}\); that is, going backward in time, the amount of time the process spends in state \(i\) is also exponential with rate \(v_i\).
Thus, the sequence of states visited by the reversed process constitutes a discrete-time Markov chain with transition probabilities \(Q_{ij}\) given by
\[Q_{ij}=\frac{\pi_jP_{ji}}{\pi_i}.\]
Therefore, a continuous-time Markov chain will be time reversible if the embedded chain is time reversible, that is, if
\[\pi_iP_{ij}=\pi_jP_{ji}\quad\text{for all }i,j.\]
Using the fact that \(P_i=\displaystyle{\frac{\pi_i/v_i}{\sum_j\pi_j/v_j}}\), we see that the preceding is equivalent to
\[P_iq_{ij}=P_jq_{ji}\quad\text{for all }i\ne j.\]
Since \(P_i\) is the proportion of time in state \(i\) and \(q_{ij}\) is the rate, when in state \(i\), at which the process goes to state \(j\), the condition for time reversibility is that the rate at which the process goes directly from state \(i\) to state \(j\) is equal to the rate at which it goes directly from \(j\) to \(i\).
Proposition 5.1 If for some set \(\{P_i\}\)
\[\sum_iP_i=1,\qquad P_i\ge0,\]
and
\[P_iq_{ij}=P_jq_{ji}\quad\text{for all }i\ne j,\]
then the continuous-time Markov chain is time reversible and \(P_i\) represents the limiting probability of being in state \(i\).
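Proposition 5.1 can be illustrated with an M/M/1 queue: for a birth and death process the only nonzero rates are \(q_{i,i+1}=\lambda\) and \(q_{i+1,i}=\mu\), so detailed balance reduces to \(P_i\lambda=P_{i+1}\mu\). The sketch below checks this for the geometric probabilities \(P_n=(1-\rho)\rho^n\), with assumed rates \(\lambda=1\), \(\mu=2.5\):

```python
lam, mu = 1.0, 2.5           # assumed M/M/1 rates with rho < 1
rho = lam / mu
P = [(1 - rho) * rho ** n for n in range(50)]

# Detailed balance for a birth and death process: P_i * lam = P_{i+1} * mu.
balanced = all(abs(P[i] * lam - P[i + 1] * mu) < 1e-12 for i in range(49))
print(balanced)   # → True
```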
Proposition 5.2 A time reversible chain with limiting probabilities \(P_j,j\in S\) that is truncated to the set \(A\subset S\) and remains irreducible is also time reversible and has limiting probabilities \(P_j^A\) given by
\[P_j^A=\frac{P_j}{\sum_{i\in A}P_i},\qquad j\in A.\]
Proposition 5.3 If \(\{X_i(t),t\ge0\}\) are independent time reversible continuous-time Markov chains, then the vector process \(\{(X_1(t),\ldots,X_n(t)),t\ge0\}\) is also a time-reversible continuous-time Markov chain.
6. Uniformization
Consider a continuous-time Markov chain in which the mean time spent in a state is the same for all states. That is, suppose that \(v_i=v\) for all states \(i\). Let \(N(t)\) denote the number of transitions by time \(t\); then \(\{N(t),t\ge0\}\) will be a Poisson process with rate \(v\).
To compute the transition probabilities \(P_{ij}(t)\), we can condition on \(N(t)\):
\[P_{ij}(t)=\sum_{n=0}^{\infty}P\{X(t)=j\mid X(0)=i,N(t)=n\}e^{-vt}\frac{(vt)^n}{n!}.\]
Since the amount of time spent in each state has the same distribution, the event \(\{N(t)=n\}\) gives us no information about which states were visited. Hence,
\[P\{X(t)=j\mid X(0)=i,N(t)=n\}=P^n_{ij},\]
where \(P^n_{ij}\) is the \(n\)-stage transition probability associated with the discrete-time Markov chain with transition probabilities \(P_{ij}\); and so when \(v_i\equiv v\),
\[P_{ij}(t)=\sum_{n=0}^{\infty}P^n_{ij}\,e^{-vt}\frac{(vt)^n}{n!}.\]
The assumption \(v_i\equiv v\) is quite limited in practice, but by the trick of allowing fictitious transitions from a state to itself, most Markov chains can be put in this form. Let \(v\) be such that
\[v_i\le v\quad\text{for all }i.\]
Any Markov chain satisfying the above condition can be thought of as a process that spends an exponential amount of time with rate \(v\) in state \(i\) and then makes a transition to \(j\) with transition probability \(P^*_{ij}\), where
\[P^*_{ij}=\begin{cases}1-\dfrac{v_i}{v},&j=i,\\[4pt]\dfrac{v_i}{v}P_{ij},&j\ne i.\end{cases}\]
Hence the transition probabilities can be computed by
\[P_{ij}(t)=\sum_{n=0}^{\infty}(P^*)^n_{ij}\,e^{-vt}\frac{(vt)^n}{n!}.\]
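The uniformization formula is exact, so truncating the sum after enough terms reproduces the transition probabilities to machine precision. The sketch below applies it to the two-state machine of Example 4 (illustrative rates \(\lambda=0.5\), \(\mu=2\); taking \(v=\lambda+\mu\) gives \(P^*_{01}=\lambda/v\), \(P^*_{10}=\mu/v\)) and compares against the closed form for \(P_{00}(t)\):

```python
import math

lam, mu = 0.5, 2.0           # illustrative two-state machine rates
v = lam + mu                 # uniformization rate; v >= v_i for both states

# Uniformized chain: P*_{ii} = 1 - v_i/v, P*_{ij} = (v_i/v) P_{ij} for j != i.
Pstar = [[1 - lam / v, lam / v],
         [mu / v, 1 - mu / v]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def P_t(t, terms=60):
    # P_{ij}(t) = sum_n (P*)^n_{ij} * e^{-vt} (vt)^n / n!
    total = [[0.0, 0.0], [0.0, 0.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]     # (P*)^0 = I
    for n in range(terms):
        w = math.exp(-v * t) * (v * t) ** n / math.factorial(n)
        total = [[total[i][j] + w * power[i][j] for j in range(2)]
                 for i in range(2)]
        power = matmul(power, Pstar)
    return total

t = 1.3
exact = mu / (lam + mu) + lam / (lam + mu) * math.exp(-(lam + mu) * t)
print(abs(P_t(t)[0][0] - exact) < 1e-10)   # → True
```

A practical advantage of this series over the direct series for \(e^{\bold Rt}\) is that every term is nonnegative, so there is no cancellation.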
7. Computing the Transition Probabilities
For any pair of states \(i\) and \(j\), let
\[r_{ij}=\begin{cases}q_{ij},&i\ne j,\\-v_i,&i=j.\end{cases}\]
So we can rewrite Kolmogorov's backward and forward equations as follows:
\[P'_{ij}(t)=\sum_kr_{ik}P_{kj}(t)\quad\text{(backward)},\qquad P'_{ij}(t)=\sum_kr_{kj}P_{ik}(t)\quad\text{(forward)}.\]
This representation is especially revealing when we introduce matrix notation. Define the matrices \(\bold R\), \(\bold P(t)\), and \(\bold P'(t)\) by letting the element in row \(i\), column \(j\) of these matrices be, respectively, \(r_{ij}\), \(P_{ij}(t)\), and \(P'_{ij}(t)\). Thus the backward and forward equations become
\[\bold P'(t)=\bold R\bold P(t),\qquad \bold P'(t)=\bold P(t)\bold R.\]
As the solution of the scalar differential equation
\[f'(t)=cf(t)\]
is \(f(t)=f(0)e^{ct}\), the solution of the forward equation can be written as
\[\bold P(t)=\bold P(0)e^{\bold Rt}.\]
Since \(\bold P(0)=\bold I\), this yields
\[\bold P(t)=e^{\bold Rt},\]
where the matrix \(e^{\bold Rt}\) is defined by
\[e^{\bold Rt}=\sum_{n=0}^{\infty}\frac{(\bold Rt)^n}{n!}.\]
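For a small chain the series for \(e^{\bold Rt}\) can be summed directly. The sketch below does this for the two-state machine of Example 4 (illustrative rates) and checks the \((0,0)\) entry against the closed form derived there:

```python
import math

lam, mu, t = 0.5, 2.0, 1.3       # illustrative rates and time
R = [[-lam, lam], [mu, -mu]]     # generator: r_ii = -v_i, r_ij = q_ij

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Sum the series e^{Rt} = sum_n (Rt)^n / n! term by term.
Rt = [[R[i][j] * t for j in range(2)] for i in range(2)]
P = [[1.0, 0.0], [0.0, 1.0]]       # running sum, starts at I
term = [[1.0, 0.0], [0.0, 1.0]]    # current term (Rt)^n / n!
for n in range(1, 40):
    term = [[x / n for x in row] for row in matmul(term, Rt)]
    P = [[P[i][j] + term[i][j] for j in range(2)] for i in range(2)]

# Closed form: P00(t) = mu/(lam+mu) + lam/(lam+mu) * e^{-(lam+mu)t}.
exact = mu / (lam + mu) + lam / (lam + mu) * math.exp(-(lam + mu) * t)
print(abs(P[0][0] - exact) < 1e-10)   # → True
```

The direct series is fine for small, well-conditioned generators like this one; for larger rates the terms alternate in sign and a scaling-and-squaring method is preferable.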