Continuous-Time Markov Chain
1. Definitions
Definition 1. We say that the process $\{X(t), t \ge 0\}$ is a continuous-time Markov chain if for all $s, t \ge 0$ and nonnegative integers $i, j, x(u), 0 \le u < s$,
$$P\{X(t+s)=j \mid X(s)=i,\ X(u)=x(u),\ 0 \le u < s\} = P\{X(t+s)=j \mid X(s)=i\}.$$
If, in addition,
$$P\{X(t+s)=j \mid X(s)=i\}$$
is independent of $s$, the process is said to have stationary or homogeneous transition probabilities.
The amount of time the process spends in a state $i$, from the time it enters state $i$ to the time it transitions into a different state, is exponentially distributed with parameter $v_i$.
2. Birth and Death Process
Definition 2. A birth and death process is a continuous-time Markov chain with states $\{0, 1, \ldots\}$ for which transitions from state $i$ may go only to either state $i-1$ or state $i+1$.
Let the birth rates be $\lambda_i$ and the death rates be $\mu_i$, so that the rates and transition probabilities are
$$v_i = \lambda_i + \mu_i, \qquad P_{i,i+1} = \frac{\lambda_i}{\lambda_i + \mu_i}, \qquad P_{i,i-1} = \frac{\mu_i}{\lambda_i + \mu_i}.$$
The next state will be $i+1$ if a birth occurs before a death, and the probability that an exponential random variable with rate $\lambda$ will occur earlier than an independent exponential random variable with rate $\mu$ is $\lambda/(\lambda+\mu)$.
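As a sanity check on this race between two exponential clocks, here is a minimal simulation sketch (the function name `birth_before_death` and the chosen rates are illustrative, not from the text):

```python
import random

def birth_before_death(lam, mu, n_trials=200_000, seed=42):
    """Estimate P(Exp(lam) < Exp(mu)) by simulating the two competing clocks."""
    rng = random.Random(seed)
    wins = sum(rng.expovariate(lam) < rng.expovariate(mu)
               for _ in range(n_trials))
    return wins / n_trials

lam, mu = 2.0, 3.0
est = birth_before_death(lam, mu)
exact = lam / (lam + mu)   # theory: lambda / (lambda + mu) = 0.4
```

With 200,000 trials the estimate should land within about one percentage point of the exact value $\lambda/(\lambda+\mu)$.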
Example 1. (A Linear Growth Model with Immigration) A model in which
$$\mu_n = n\mu, \quad n \ge 1, \qquad \lambda_n = n\lambda + \theta, \quad n \ge 0$$
is called a linear growth model with immigration. Let $X(t)$ denote the population size at time $t$.
Suppose $X(0) = i$, and let $M(t) = E[X(t)]$. We can determine $M(t)$ by deriving and solving a differential equation. Given $X(t)$, consider a sufficiently small period of time $h$; by the properties of a continuous-time Markov chain, we have
$$X(t+h) = \begin{cases} X(t)+1, & \text{with probability } [\theta + X(t)\lambda]h + o(h) \\ X(t)-1, & \text{with probability } X(t)\mu h + o(h) \\ X(t), & \text{otherwise.} \end{cases}$$
Therefore,
$$M(t+h) = M(t) + (\lambda - \mu)M(t)h + \theta h + o(h),$$
or, equivalently,
$$\frac{M(t+h) - M(t)}{h} = (\lambda - \mu)M(t) + \theta + \frac{o(h)}{h},$$
so, letting $h \to 0$, we get the derivative of $M(t)$:
$$M'(t) = (\lambda - \mu)M(t) + \theta.$$
By solving the differential equation with $M(0) = i$ we have
$$M(t) = \frac{\theta}{\lambda - \mu}\left[e^{(\lambda-\mu)t} - 1\right] + i\, e^{(\lambda-\mu)t}.$$
Note that we have implicitly assumed that $\lambda \ne \mu$.
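The closed form for $M(t)$ can be checked numerically. The sketch below integrates $M'(t) = (\lambda - \mu)M(t) + \theta$ with a crude forward-Euler scheme and compares against the formula; all parameter values are illustrative assumptions:

```python
import math

lam, mu, theta, i0, t_end = 1.0, 0.5, 2.0, 10, 3.0

def closed_form(t):
    """M(t) = theta/(lam-mu) * (e^{(lam-mu)t} - 1) + i0 * e^{(lam-mu)t}."""
    a = lam - mu
    return (theta / a) * (math.exp(a * t) - 1.0) + i0 * math.exp(a * t)

# Forward-Euler integration of M'(t) = (lam - mu) M(t) + theta, M(0) = i0
M, h = float(i0), 1e-5
for _ in range(int(t_end / h)):
    M += h * ((lam - mu) * M + theta)
```

With step size $10^{-5}$ the Euler value agrees with the closed form to well under $10^{-2}$.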
Example 2. (An M/M/s Queueing Model) Suppose a service station has $s$ servers, and the times between successive arrivals of customers are independent exponential random variables having mean $1/\lambda$. Upon arrival, a customer joins the queue if there is no available server. The service time of each customer is assumed to be an independent exponential random variable having mean $1/\mu$.
This is actually a birth and death process with parameters
$$\mu_n = \begin{cases} n\mu, & 1 \le n \le s \\ s\mu, & n > s \end{cases} \qquad \lambda_n = \lambda, \quad n \ge 0.$$
Let $T_i$ denote the time, starting from state $i$, it takes for the process to enter state $i+1$. Obviously, $E[T_0] = 1/\lambda_0$. Let
$$I_i = \begin{cases} 1, & \text{if the first transition from } i \text{ is to } i+1 \\ 0, & \text{if the first transition from } i \text{ is to } i-1 \end{cases}$$
and note that
$$E[T_i \mid I_i = 1] = \frac{1}{\lambda_i + \mu_i}, \qquad E[T_i \mid I_i = 0] = \frac{1}{\lambda_i + \mu_i} + E[T_{i-1}] + E[T_i].$$
That is, if the first transition from $i$ is to $i+1$, then no additional time is needed, and the time until it occurs is exponential with rate $\lambda_i + \mu_i$. If the first transition from $i$ is to $i-1$, the time until that transition also has expectation $1/(\lambda_i + \mu_i)$, but it takes additional time to go back to $i$, that is $E[T_{i-1}]$, and additional time to then go to $i+1$, which is $E[T_i]$.
Hence, since the probability that the first transition is a birth is $\lambda_i/(\lambda_i + \mu_i)$, we see that
$$E[T_i] = \frac{1}{\lambda_i + \mu_i} + \frac{\mu_i}{\lambda_i + \mu_i}\left(E[T_{i-1}] + E[T_i]\right),$$
or, equivalently,
$$E[T_i] = \frac{1}{\lambda_i} + \frac{\mu_i}{\lambda_i}\, E[T_{i-1}], \qquad i \ge 1.$$
Starting with $E[T_0] = 1/\lambda_0$, we can recursively calculate all the $E[T_i]$.
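The recursion translates directly into a few lines of code. In the sketch below the function name and the example M/M/s rates are my own choices; the birth and death rates are passed in as functions of the state:

```python
def expected_hitting_times(lam, mu, n):
    """E[T_i] for i = 0..n-1 via E[T_0] = 1/lam(0) and
    E[T_i] = 1/lam(i) + (mu(i)/lam(i)) * E[T_{i-1}]."""
    ET = [1.0 / lam(0)]
    for i in range(1, n):
        ET.append(1.0 / lam(i) + mu(i) / lam(i) * ET[i - 1])
    return ET

# Illustrative M/M/s example: s = 2 servers, arrival rate 3, service rate 2
s, arrival, service = 2, 3.0, 2.0
ET = expected_hitting_times(lambda i: arrival,              # lambda_i = lambda
                            lambda i: min(i, s) * service,  # mu_i = min(i, s) mu
                            5)
```

For these rates the recursion gives $E[T_0] = 1/3$, $E[T_1] = 1/3 + (2/3)(1/3) = 5/9$, and $E[T_2] = 1/3 + (4/3)(5/9) = 29/27$.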
Example 3. For the birth and death process with parameters $\lambda_n \equiv \lambda$, $\mu_n \equiv \mu$, $\lambda \ne \mu$:
Using the conclusion from Example 2, we have
$$E[T_i] = \frac{1}{\lambda} + \frac{\mu}{\lambda}E[T_{i-1}] = \frac{1}{\lambda}\sum_{k=0}^{i}\left(\frac{\mu}{\lambda}\right)^k = \frac{1 - (\mu/\lambda)^{i+1}}{\lambda - \mu}.$$
So the expected time to go from state $i$ to state $j$, $i < j$, is
$$\sum_{k=i}^{j-1} E[T_k] = \frac{j-i}{\lambda - \mu} - \frac{(\mu/\lambda)^{i+1}\left[1 - (\mu/\lambda)^{j-i}\right]}{(\lambda-\mu)(1 - \mu/\lambda)}.$$
3. The Transition Probability Function
Let
$$P_{ij}(t) = P\{X(t+s) = j \mid X(s) = i\}$$
denote the probability that a process presently in state $i$ will be in state $j$ a time $t$ later.
Let
$$q_{ij} = v_i P_{ij},$$
where $v_i$ is the rate at which the process makes a transition when in state $i$, so that $q_{ij}$ is the rate, when in state $i$, at which the process makes a transition into state $j$. We have
$$\lim_{h \to 0} \frac{1 - P_{ii}(h)}{h} = v_i, \qquad \lim_{h \to 0} \frac{P_{ij}(h)}{h} = q_{ij}, \quad i \ne j.$$
Chapman-Kolmogorov Equations For all $s, t \ge 0$,
$$P_{ij}(t+s) = \sum_{k=0}^{\infty} P_{ik}(t) P_{kj}(s).$$
From the equations, we obtain
$$P_{ij}(h+t) - P_{ij}(t) = \sum_{k \ne i} P_{ik}(h) P_{kj}(t) - \left[1 - P_{ii}(h)\right] P_{ij}(t),$$
and thus
$$\lim_{h \to 0} \frac{P_{ij}(h+t) - P_{ij}(t)}{h} = \sum_{k \ne i} q_{ik} P_{kj}(t) - v_i P_{ij}(t).$$
Hence, we have the following theorem.
Kolmogorov's Backward Equations For all states $i, j$, and times $t \ge 0$,
$$P'_{ij}(t) = \sum_{k \ne i} q_{ik} P_{kj}(t) - v_i P_{ij}(t).$$
Example 4. (A Continuous-Time Markov Chain Consisting of Two States) Consider a machine that works for an exponential amount of time having mean $1/\lambda$ before breaking down; and suppose it takes an exponential amount of time having mean $1/\mu$ to repair it. If the machine is working at time 0, what is the probability that it will be working at time $t$?
Let state 0 be the working state and state 1 the broken-down state. Using Kolmogorov's backward equations, we have
$$P'_{00}(t) = \lambda\left[P_{10}(t) - P_{00}(t)\right], \qquad P'_{10}(t) = \mu\left[P_{00}(t) - P_{10}(t)\right]. \tag{3.1}$$
Obviously, the above two equations are equivalent to
$$\mu P'_{00}(t) + \lambda P'_{10}(t) = 0.$$
Integrating on both sides, we have
$$\mu P_{00}(t) + \lambda P_{10}(t) = c.$$
As $P_{00}(0) = 1$ and $P_{10}(0) = 0$, we get $c = \mu$, so $P_{10}(t) = (\mu/\lambda)\left[1 - P_{00}(t)\right]$. By substituting this into the first equation in (3.1), we obtain a differential equation
$$P'_{00}(t) = \mu - (\lambda + \mu)P_{00}(t).$$
By solving the differential equation with the condition $P_{00}(0) = 1$, we have
$$P_{00}(t) = \frac{\mu}{\lambda + \mu} + \frac{\lambda}{\lambda + \mu}\, e^{-(\lambda+\mu)t}.$$
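As a quick numerical sketch of this example, the closed form for $P_{00}(t)$ can be compared against a crude Euler integration of the two backward equations; the rates `lam` and `mu` are illustrative:

```python
import math

lam, mu = 0.5, 2.0   # breakdown rate, repair rate (hypothetical values)

def p00(t):
    """Closed form: P00(t) = mu/(lam+mu) + lam/(lam+mu) * e^{-(lam+mu)t}."""
    r = lam + mu
    return mu / r + (lam / r) * math.exp(-r * t)

# Euler-integrate the backward equations:
#   P00' = lam * (P10 - P00),   P10' = mu * (P00 - P10)
P00, P10, h, t_end = 1.0, 0.0, 1e-5, 2.0
for _ in range(int(t_end / h)):
    P00, P10 = (P00 + h * lam * (P10 - P00),
                P10 + h * mu * (P00 - P10))
```

Note the tuple assignment so that both derivatives use the values from the previous step. As $t \to \infty$, $P_{00}(t) \to \mu/(\lambda+\mu)$, the long-run fraction of time the machine works.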
Similarly, using the Chapman-Kolmogorov equations, we have
$$P_{ij}(t+h) - P_{ij}(t) = \sum_{k \ne j} P_{ik}(t) P_{kj}(h) - \left[1 - P_{jj}(h)\right] P_{ij}(t),$$
and thus we obtain the
Kolmogorov's Forward Equations For all states $i, j$, and times $t \ge 0$,
$$P'_{ij}(t) = \sum_{k \ne j} q_{kj} P_{ik}(t) - v_j P_{ij}(t).$$
Proposition 1. For a pure birth process ($\mu_i \equiv 0$),
$$P_{ii}(t) = e^{-\lambda_i t}, \qquad P_{ij}(t) = \lambda_{j-1} e^{-\lambda_j t} \int_0^t e^{\lambda_j s} P_{i,j-1}(s)\, ds, \quad j \ge i+1.$$
Proof For a pure birth process, using Kolmogorov's forward equations we have
$$P'_{ii}(t) = -\lambda_i P_{ii}(t), \qquad P'_{ij}(t) = \lambda_{j-1} P_{i,j-1}(t) - \lambda_j P_{ij}(t), \quad j \ge i+1.$$
The first equation is quite obvious, and with $P_{ii}(0) = 1$ it gives $P_{ii}(t) = e^{-\lambda_i t}$. For the second equation we have
$$e^{\lambda_j t}\left[P'_{ij}(t) + \lambda_j P_{ij}(t)\right] = \lambda_{j-1}\, e^{\lambda_j t}\, P_{i,j-1}(t),$$
or, equivalently,
$$\frac{d}{dt}\left[e^{\lambda_j t} P_{ij}(t)\right] = \lambda_{j-1}\, e^{\lambda_j t}\, P_{i,j-1}(t).$$
Hence, since $P_{ij}(0) = 0$ for $j \ge i+1$, we obtain the desired results by integrating.
4. Limiting Probabilities
The probability that a continuous-time Markov chain will be in state $j$ at time $t$ often converges to a limiting value that is independent of the initial state. That is, if we call this value $P_j$, then
$$P_j \equiv \lim_{t \to \infty} P_{ij}(t),$$
where we are assuming that the limit exists and is independent of the initial state $i$.
Consider the forward equations
$$P'_{ij}(t) = \sum_{k \ne j} q_{kj} P_{ik}(t) - v_j P_{ij}(t).$$
Now, if we let $t \to \infty$, then assuming that we can interchange limit and summation, we obtain
$$\lim_{t \to \infty} P'_{ij}(t) = \sum_{k \ne j} q_{kj} P_k - v_j P_j.$$
As $P_{ij}(t)$ is a bounded function, if $P'_{ij}(t)$ converges, then it must converge to 0, hence we have
$$v_j P_j = \sum_{k \ne j} q_{kj} P_k \quad \text{for all } j, \qquad \sum_j P_j = 1. \tag{4.1}$$
Thus we can solve for the limiting probabilities.
Sufficient Conditions for the Existence of Limiting Probabilities:
- all states of the Markov chain communicate, in the sense that starting in state $i$ there is a positive probability of ever being in state $j$, for all $i$ and $j$; and
- the Markov chain is positive recurrent, in the sense that, starting in any state, the mean time to return to that state is finite.
If the above two conditions hold, then the limiting probabilities will exist and satisfy Equations (4.1). In addition, $P_j$ also has the interpretation of being the long-run proportion of time that the process is in state $j$.
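To make the balance equations concrete, the sketch below builds a small truncated birth and death chain (M/M/1/K-style; all numbers are illustrative), uses the standard product-form solution $P_n \propto (\lambda/\mu)^n$ for such chains, and verifies that Equations (4.1) hold:

```python
# Limiting probabilities of a finite birth-death chain, then verify the
# balance equations v_j P_j = sum_{k != j} q_kj P_k of (4.1).
K, lam, mu = 4, 1.0, 2.0

# q[i][j]: instantaneous rates; births at rate lam, deaths at rate mu
q = [[0.0] * (K + 1) for _ in range(K + 1)]
for i in range(K):
    q[i][i + 1] = lam
for i in range(1, K + 1):
    q[i][i - 1] = mu
v = [sum(row) for row in q]            # v_i = total rate out of state i

# Product-form solution for birth-death chains: P_n proportional to (lam/mu)^n
w = [(lam / mu) ** n for n in range(K + 1)]
total = sum(w)
P = [x / total for x in w]

# Check every balance equation of (4.1)
max_err = max(abs(v[j] * P[j] - sum(q[k][j] * P[k] for k in range(K + 1)))
              for j in range(K + 1))
```

Here `max_err` comes out at machine precision, confirming that the product-form probabilities satisfy the balance equations.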
5. Time Reversibility
Suppose we have a continuous-time Markov chain whose limiting probabilities exist. If we consider the sequence of states visited, ignoring the amount of time spent in each state during a visit, then this sequence constitutes a discrete-time Markov chain with transition probabilities $P_{ij} = q_{ij}/v_i$. We call such a discrete-time Markov chain the embedded chain. Its limiting probabilities $\pi_i$, which we have discussed before, satisfy
$$\pi_j = \sum_i \pi_i P_{ij}, \qquad \sum_i \pi_i = 1.$$
Since $\pi_i$ represents the proportion of transitions that take the process into state $i$, and because $1/v_i$ is the mean time spent in state $i$ during a visit, it seems intuitive that $P_i$, the proportion of time in state $i$, should be a weighted average of the $\pi_i$ where $\pi_i$ is weighted proportionately to $1/v_i$. That is,
$$P_i = \frac{\pi_i / v_i}{\sum_j \pi_j / v_j}.$$
Suppose now that a continuous-time Markov chain has been in operation for a long time, and suppose that starting at some large time $T$, we trace the process going backward in time. If the process is in state $i$ at some large time $t$, then by memorylessness the probability that it has been in this state for an amount of time greater than $x$ is $e^{-v_i x}$. That is, going backward in time, the amount of time the process spends in state $i$ is also exponential with rate $v_i$.
Thus, the sequence of states visited by the reversed process constitutes a discrete-time Markov chain with transition probabilities given by
$$Q_{ij} = \frac{\pi_j P_{ji}}{\pi_i}.$$
Therefore, the continuous-time Markov chain will be time reversible if the embedded chain is time reversible. That is, if
$$\pi_i P_{ij} = \pi_j P_{ji} \quad \text{for all } i, j.$$
Using the fact that $P_i = (\pi_i/v_i)/\sum_j (\pi_j/v_j)$ and $q_{ij} = v_i P_{ij}$, we see that the preceding is equivalent to
$$P_i q_{ij} = P_j q_{ji} \quad \text{for all } i \ne j.$$
Since $P_i$ is the proportion of time in state $i$ and $q_{ij}$ is the rate, when in state $i$, at which the process goes to state $j$, the condition of time reversibility is that the rate at which the process goes directly from state $i$ to state $j$ is equal to the rate at which it goes directly from $j$ to $i$.
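For a birth and death chain this detailed-balance condition holds edge by edge, which is easy to verify numerically (the rates and truncation level below are illustrative):

```python
# Detailed-balance check P_i q_ij = P_j q_ji for a truncated birth-death
# chain: with P_n proportional to (lam/mu)^n, every edge (n, n+1) balances,
# since P_n * lam should equal P_{n+1} * mu.
lam, mu, K = 1.0, 2.0, 5
w = [(lam / mu) ** n for n in range(K + 1)]
total = sum(w)
P = [x / total for x in w]
worst = max(abs(P[n] * lam - P[n + 1] * mu) for n in range(K))
```

Every birth and death process with limiting probabilities is time reversible for exactly this reason: in steady state, each transition $n \to n+1$ must eventually be undone by a transition $n+1 \to n$, so the two rates balance.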
Proposition 5.1 If for some set $\{P_i\}$
$$\sum_i P_i = 1, \qquad P_i \ge 0,$$
and
$$P_i q_{ij} = P_j q_{ji} \quad \text{for all } i \ne j,$$
then the continuous-time Markov chain is time reversible and $P_i$ represents the limiting probability of being in state $i$.
Proposition 5.2 A time reversible chain with limiting probabilities $P_j, j \in S$, that is truncated to the set $A \subset S$ and remains irreducible is also time reversible and has limiting probabilities given by
$$P_j^A = \frac{P_j}{\sum_{i \in A} P_i}, \qquad j \in A.$$
Proposition 5.3 If $\{X_i(t), t \ge 0\}$, $i = 1, \ldots, n$, are independent time reversible continuous-time Markov chains, then the vector process $\{(X_1(t), \ldots, X_n(t)), t \ge 0\}$ is also a time-reversible continuous-time Markov chain.
6. Uniformization
Consider a continuous-time Markov chain in which the mean time spent in a state is the same for all states. That is, suppose that $v_i = v$ for all states $i$. Let $N(t)$ denote the number of state transitions by time $t$; then $\{N(t), t \ge 0\}$ will be a Poisson process with rate $v$.
To compute the transition probabilities $P_{ij}(t)$, we can condition on $N(t)$:
$$P_{ij}(t) = \sum_{n=0}^{\infty} P\{X(t)=j \mid X(0)=i, N(t)=n\}\, e^{-vt}\frac{(vt)^n}{n!}.$$
Since the distribution of the time spent in each state is the same, knowing that $N(t) = n$ gives us no additional information about which states were visited. Hence,
$$P\{X(t)=j \mid X(0)=i, N(t)=n\} = P_{ij}^{n},$$
where $P_{ij}^{n}$ is the $n$-stage transition probability associated with the discrete-time Markov chain with transition probabilities $P_{ij}$; and so when $v_i \equiv v$,
$$P_{ij}(t) = \sum_{n=0}^{\infty} P_{ij}^{n}\, e^{-vt}\frac{(vt)^n}{n!}.$$
The assumption of identical rates is quite restrictive in practice, but by the trick of allowing fictitious transitions from a state to itself, most Markov chains can be put in that form. Choose any $v$ satisfying
$$v_i \le v \quad \text{for all } i.$$
Any Markov chain satisfying this condition can be thought of as a process that spends an exponential amount of time with rate $v$ in state $i$ and then makes a transition to $j$ with transition probability $P^*_{ij}$, where
$$P^*_{ij} = \begin{cases} 1 - \dfrac{v_i}{v}, & j = i \\[4pt] \dfrac{q_{ij}}{v}, & j \ne i. \end{cases}$$
Hence the transition probabilities can be computed by
$$P_{ij}(t) = \sum_{n=0}^{\infty} P^{*n}_{ij}\, e^{-vt}\frac{(vt)^n}{n!}.$$
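A direct implementation of the uniformization formula is sketched below for the two-state machine of Example 4; the function name, the rate values, and the fixed truncation of the series are my own choices:

```python
import math

def uniformized_P(q, v_rate, t, n_terms=200):
    """P(t) = sum_n P*^n e^{-vt}(vt)^n/n!, with P*_{ij} = q_ij/v (i != j)
    and P*_{ii} = 1 - v_i/v.  Requires v_rate >= max_i v_i."""
    m = len(q)
    v_i = [sum(q[i][j] for j in range(m) if j != i) for i in range(m)]
    # uniformized discrete-time transition matrix P*
    Pstar = [[(q[i][j] / v_rate) if i != j else 1.0 - v_i[i] / v_rate
              for j in range(m)] for i in range(m)]
    # accumulate the Poisson-weighted powers of P*
    power = [[1.0 if i == j else 0.0 for j in range(m)]
             for i in range(m)]                        # P*^0 = identity
    out = [[0.0] * m for _ in range(m)]
    weight = math.exp(-v_rate * t)                     # e^{-vt}(vt)^0/0!
    for n in range(n_terms):
        for i in range(m):
            for j in range(m):
                out[i][j] += weight * power[i][j]
        weight *= v_rate * t / (n + 1)                 # next Poisson weight
        power = [[sum(power[i][k] * Pstar[k][j] for k in range(m))
                  for j in range(m)] for i in range(m)]
    return out

# two-state machine of Example 4: breakdown rate 0.5, repair rate 2.0
q = [[0.0, 0.5], [2.0, 0.0]]
P = uniformized_P(q, v_rate=2.5, t=2.0)
```

The result matches the closed form of Example 4, $P_{00}(t) = \mu/(\lambda+\mu) + \lambda/(\lambda+\mu)\,e^{-(\lambda+\mu)t}$, and each row of $P$ sums to 1.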
7. Computing the Transition Probabilities
For any pair of states $i$ and $j$, let
$$r_{ij} = \begin{cases} q_{ij}, & i \ne j \\ -v_i, & i = j. \end{cases}$$
So we can rewrite Kolmogorov's forward and backward equations as follows:
$$P'_{ij}(t) = \sum_k r_{kj} P_{ik}(t) \quad \text{(forward)}, \qquad P'_{ij}(t) = \sum_k r_{ik} P_{kj}(t) \quad \text{(backward)}.$$
This representation is especially revealing when we introduce matrix notation. Define the matrices $\mathbf{R}$, $\mathbf{P}(t)$, and $\mathbf{P}'(t)$ by letting the element in row $i$, column $j$ of these matrices be, respectively, $r_{ij}$, $P_{ij}(t)$, and $P'_{ij}(t)$. Thus the backward equations become
$$\mathbf{P}'(t) = \mathbf{R}\,\mathbf{P}(t).$$
By analogy with the scalar differential equation
$$f'(t) = c f(t), \qquad f(0) = 1,$$
whose solution is $f(t) = e^{ct}$, the forward equations can be written as
$$\mathbf{P}'(t) = \mathbf{P}(t)\,\mathbf{R}.$$
Since $\mathbf{P}(0) = \mathbf{I}$ (the identity matrix), this yields that
$$\mathbf{P}(t) = e^{\mathbf{R}t},$$
where the matrix $e^{\mathbf{R}t}$ is defined by
$$e^{\mathbf{R}t} = \sum_{n=0}^{\infty} \frac{(\mathbf{R}t)^n}{n!}.$$
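For a small chain the matrix-exponential series can be evaluated directly, term by term. The sketch below does this for the two-state machine of Example 4 and agrees with the closed form derived there; the truncation length and rates are illustrative assumptions, and for larger rate matrices the uniformization method of Section 6 is the numerically safer route:

```python
import math

def expm_series(R, t, n_terms=60):
    """P(t) = e^{Rt} = sum_n (Rt)^n / n!, summed term by term.
    Adequate for small ||Rt||; beware cancellation for large rates."""
    m = len(R)
    Rt = [[R[i][j] * t for j in range(m)] for i in range(m)]
    term = [[1.0 if i == j else 0.0 for j in range(m)]
            for i in range(m)]                         # (Rt)^0 / 0! = I
    P = [row[:] for row in term]
    for n in range(1, n_terms):
        # term_n = term_{n-1} @ Rt / n
        term = [[sum(term[i][k] * Rt[k][j] for k in range(m)) / n
                 for j in range(m)] for i in range(m)]
        for i in range(m):
            for j in range(m):
                P[i][j] += term[i][j]
    return P

# two-state machine: r_ii = -v_i, r_ij = q_ij
lam, mu = 0.5, 2.0
R = [[-lam, lam], [mu, -mu]]
P = expm_series(R, t=2.0)
```

The entry $P[0][0]$ reproduces $P_{00}(2) = \mu/(\lambda+\mu) + \lambda/(\lambda+\mu)\,e^{-2(\lambda+\mu)}$, and each row of $\mathbf{P}(t)$ sums to 1, as a stochastic matrix must.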