Continuous-Time Markov Chain

1. Definitions

Definition 1. We say the process $\{X(t), t \ge 0\}$ is a continuous-time Markov chain if for all $s, t \ge 0$ and nonnegative integers $i, j, x(u), 0 \le u < s$,

$$P\{X(t+s)=j \mid X(s)=i,\ X(u)=x(u),\ 0 \le u < s\} = P\{X(t+s)=j \mid X(s)=i\}$$

If, in addition,

$$P\{X(t+s)=j \mid X(s)=i\}$$

is independent of $s$, the process is said to have stationary or homogeneous transition probabilities.

The amount of time the process spends in a state $i$, from the time it enters state $i$ to the time it transitions into a different state, is exponentially distributed with parameter $v_i$.

2. Birth and Death Process

Definition 2. A birth and death process is a continuous-time Markov chain with states $\{0, 1, \ldots\}$ for which transitions from state $n$ may go only to either state $n-1$ or state $n+1$.

The transition probabilities are

$$P_{i,i+1} = \frac{\lambda_i}{\lambda_i + \mu_i}, \qquad P_{i,i-1} = \frac{\mu_i}{\lambda_i + \mu_i}$$

The next state will be $i+1$ if a birth occurs before a death, and the probability that an exponential random variable with rate $\lambda_i$ occurs earlier than an independent exponential random variable with rate $\mu_i$ is $\lambda_i/(\lambda_i + \mu_i)$.
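The race between the two exponentials is easy to check by simulation. A seeded Monte Carlo sketch (the rates `lam` and `mu` are hypothetical values chosen for illustration):

```python
import random

# Seeded Monte Carlo check that an Exp(lam) random variable occurs
# earlier than an independent Exp(mu) one with probability lam/(lam+mu).
random.seed(0)
lam, mu, n = 1.0, 2.0, 200_000
wins = sum(random.expovariate(lam) < random.expovariate(mu) for _ in range(n))
print(wins / n, lam / (lam + mu))  # the two numbers should be close
```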


Example 1. (A Linear Growth Model with Immigration) A model in which

$$\mu_n = n\mu, \quad n \ge 1$$

$$\lambda_n = n\lambda + \theta, \quad n \ge 0$$

is called a linear growth model with immigration. Let $X(t)$ denote the population size at time $t$.

Suppose $X(0) = i$ and let $M(t) = E[X(t)]$. We can determine $M(t)$ by deriving and solving a differential equation. Conditioning on $X(t)$,

$$M(t+h) = E[X(t+h)] = E\big[E[X(t+h) \mid X(t)]\big]$$

For $h$ a sufficiently small period of time, the properties of a continuous-time Markov chain give

$$P\{X(t+h) = X(t)+1 \mid X(t)\} = \big(X(t)\lambda + \theta\big)h + o(h)$$

$$P\{X(t+h) = X(t)-1 \mid X(t)\} = X(t)\mu h + o(h)$$

$$P\{X(t+h) = X(t) \mid X(t)\} = 1 - \big(X(t)(\lambda + \mu) + \theta\big)h + o(h)$$

Therefore,

$$E[X(t+h) \mid X(t)] = X(t) + \big(\theta + X(t)\lambda - X(t)\mu\big)h + o(h)$$

$$M(t+h) = E\big[E[X(t+h) \mid X(t)]\big] = M(t) + (\lambda - \mu)M(t)h + \theta h + o(h)$$

or, equivalently,

$$\frac{M(t+h) - M(t)}{h} = (\lambda - \mu)M(t) + \theta + \frac{o(h)}{h}$$

Letting $h \to 0$, we get the derivative of $M(t)$:

$$M'(t) = (\lambda - \mu)M(t) + \theta$$

Solving this differential equation, we have

$$M(t) = \frac{\theta}{\lambda - \mu}\left[e^{(\lambda - \mu)t} - 1\right] + i e^{(\lambda - \mu)t}$$

Note that we have implicitly assumed that $\lambda \neq \mu$.
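As a quick numerical sanity check, the closed form for $M(t)$ should satisfy $M'(t) = (\lambda - \mu)M(t) + \theta$. A minimal Python sketch (the parameter values `lam`, `mu`, `theta`, `i0` are hypothetical) verifies this with a central finite difference:

```python
import math

# Hypothetical parameters for a linear growth model with immigration.
lam, mu, theta, i0 = 2.0, 1.0, 0.5, 10

def M(t):
    """Closed-form mean population size M(t) = E[X(t)], X(0) = i0, lam != mu."""
    a = lam - mu
    return theta / a * (math.exp(a * t) - 1.0) + i0 * math.exp(a * t)

# Verify M'(t) = (lam - mu) M(t) + theta with a central finite difference.
t, h = 0.7, 1e-6
deriv = (M(t + h) - M(t - h)) / (2 * h)
rhs = (lam - mu) * M(t) + theta
print(abs(deriv - rhs))  # should be tiny
```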


Example 2. (An M/M/s Queueing Model) Consider a service station with $s$ servers, where the times between successive arrivals of customers are independent exponential random variables with mean $1/\lambda$. Upon arrival, a customer joins the queue if no server is available. The service times of the customers are independent exponential random variables with mean $1/\mu$.

This is a birth and death process with parameters

$$\mu_n = \begin{cases} n\mu, & 1 \le n \le s \\ s\mu, & n > s \end{cases} \qquad \lambda_n = \lambda, \quad n \ge 0$$

Let $T_i$ denote the time, starting from state $i$, it takes for the process to enter state $i+1$, $i \ge 0$. Clearly, $E[T_0] = 1/\lambda$. Let

$$I_i = \begin{cases} 1, & \text{if the first transition from } i \text{ is to } i+1 \\ 0, & \text{if the first transition from } i \text{ is to } i-1 \end{cases}$$

and note that

$$E[T_i \mid I_i = 1] = \frac{1}{\lambda_i + \mu_i}$$

$$E[T_i \mid I_i = 0] = \frac{1}{\lambda_i + \mu_i} + E[T_{i-1}] + E[T_i]$$

That is, if the first transition from $i$ is to $i+1$, no additional time is needed, and the time until that transition occurs is exponential with rate $\lambda_i + \mu_i$. If the first transition from $i$ is to $i-1$, the time until that transition also has expectation $1/(\lambda_i + \mu_i)$, but it then takes additional time to get back to $i$, namely $E[T_{i-1}]$, and additional time to go from $i$ to $i+1$, namely $E[T_i]$.

Hence, since the probability that the first transition is a birth is $\lambda_i/(\lambda_i + \mu_i)$, we see that

$$E[T_i] = \frac{1}{\lambda_i + \mu_i} + \frac{\mu_i}{\lambda_i + \mu_i}\big(E[T_{i-1}] + E[T_i]\big)$$

or, equivalently,

$$E[T_i] = \frac{1}{\lambda_i} + \frac{\mu_i}{\lambda_i}E[T_{i-1}], \quad i \ge 1$$

Starting with $E[T_0] = 1/\lambda$, we can recursively calculate all the $E[T_i]$.


Example 3. Consider the birth and death process with parameters $\lambda_i \equiv \lambda$ and $\mu_i \equiv \mu$.

Using the recursion from Example 2, we have

$$E[T_0] = \frac{1}{\lambda}, \qquad E[T_i] = \frac{1}{\lambda} + \frac{\mu}{\lambda}E[T_{i-1}]$$

So

$$E[T_i] = \frac{1 - (\mu/\lambda)^{i+1}}{\lambda - \mu}$$
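The recursion from Example 2 and this closed form can be cross-checked numerically. A short Python sketch (the constant rate values are hypothetical):

```python
# Compute E[T_i] for a birth and death process by the recursion
# E[T_0] = 1/lambda_0, E[T_i] = 1/lambda_i + (mu_i/lambda_i) E[T_{i-1}],
# then check it against the closed form for constant rates.
lam, mu = 3.0, 2.0   # hypothetical constant rates, lambda_i = lam, mu_i = mu

def expected_hitting_times(n, lam_i, mu_i):
    """E[T_i] for i = 0..n, where T_i = time to go from state i to i+1."""
    E = [1.0 / lam_i(0)]
    for i in range(1, n + 1):
        E.append(1.0 / lam_i(i) + mu_i(i) / lam_i(i) * E[-1])
    return E

E = expected_hitting_times(5, lambda i: lam, lambda i: mu)
closed = [(1 - (mu / lam) ** (i + 1)) / (lam - mu) for i in range(6)]
print(max(abs(a - b) for a, b in zip(E, closed)))  # ~0
```

Passing rate functions rather than constants means the same routine also covers the M/M/s rates of Example 2.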


3. The Transition Probability Function $P_{ij}(t)$

Let

$$P_{ij}(t) = P\{X(t+s)=j \mid X(s)=i\}$$

denote the probability that a process presently in state $i$ will be in state $j$ a time $t$ later.

Let

$$q_{ij} = v_i P_{ij}$$

where $v_i$ is the rate at which the process makes a transition when in state $i$, and $P_{ij}$ is the probability that this transition is into state $j$. We have

$$\lim_{h \to 0} \frac{1 - P_{ii}(h)}{h} = v_i, \qquad \lim_{h \to 0} \frac{P_{ij}(h)}{h} = q_{ij}, \quad i \neq j$$

Chapman-Kolmogorov Equations For all $s \ge 0$, $t \ge 0$,

$$P_{ij}(t+s) = \sum_{k=0}^{\infty} P_{ik}(t) P_{kj}(s)$$

From the equation, we obtain

$$P_{ij}(h+t) - P_{ij}(t) = \sum_{k \neq i} P_{ik}(h)P_{kj}(t) - \big[1 - P_{ii}(h)\big]P_{ij}(t)$$

and thus

$$\lim_{h \to 0} \frac{P_{ij}(h+t) - P_{ij}(t)}{h} = \sum_{k \neq i} q_{ik}P_{kj}(t) - v_i P_{ij}(t)$$

Hence, we have the following theorem.

Kolmogorov's Backward Equations For all states $i, j$, and times $t \ge 0$,

$$P'_{ij}(t) = \sum_{k \neq i} q_{ik}P_{kj}(t) - v_i P_{ij}(t)$$


Example 4. (A Continuous-Time Markov Chain Consisting of Two States) Consider a machine that works for an exponential amount of time with mean $1/\lambda$ before breaking down, and suppose it takes an exponential amount of time with mean $1/\mu$ to repair it. If the machine is working at time 0, what is the probability that it will be working at time $t$?

Using Kolmogorov's backward equations, we have

$$P'_{00}(t) = \lambda P_{10}(t) - \lambda P_{00}(t), \qquad P'_{10}(t) = \mu P_{00}(t) - \mu P_{10}(t) \tag{3.1}$$

Multiplying the first equation by $\mu$, the second by $\lambda$, and adding, we obtain

$$\mu P'_{00}(t) + \lambda P'_{10}(t) = 0$$

Integrating both sides, we have

$$\mu P_{00}(t) + \lambda P_{10}(t) = c$$

Since $P_{00}(0) = 1$ and $P_{10}(0) = 0$, we get $c = \mu$. Solving this relation for $P_{10}(t)$ and substituting into the first equation of (3.1), we obtain the differential equation

$$P'_{00}(t) + (\lambda + \mu)P_{00}(t) = \mu$$

Solving this differential equation with the condition $P_{00}(0) = 1$, we have

$$P_{00}(t) = \frac{\lambda}{\lambda + \mu}e^{-(\lambda + \mu)t} + \frac{\mu}{\lambda + \mu}$$
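The backward equations can also be integrated numerically before any closed form is known. A minimal Euler-method sketch in Python (the rates, horizon, and step count are hypothetical choices), compared against the closed form just derived:

```python
import math

# Integrate the backward equations (3.1) with simple Euler steps and
# compare P00(T) with the closed form lam/(lam+mu) e^{-(lam+mu)T} + mu/(lam+mu).
lam, mu, T, n = 1.0, 2.0, 1.5, 100_000
p00, p10 = 1.0, 0.0          # initial conditions P00(0) = 1, P10(0) = 0
h = T / n
for _ in range(n):
    d00 = lam * (p10 - p00)  # P00'(t) = lam*P10(t) - lam*P00(t)
    d10 = mu * (p00 - p10)   # P10'(t) = mu*P00(t) - mu*P10(t)
    p00, p10 = p00 + h * d00, p10 + h * d10
exact = lam / (lam + mu) * math.exp(-(lam + mu) * T) + mu / (lam + mu)
print(abs(p00 - exact))  # small; the Euler error shrinks as n grows
```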


Using the Chapman-Kolmogorov equations again, we have

$$P_{ij}(t+h) - P_{ij}(t) = \sum_{k=0}^{\infty} P_{ik}(t)P_{kj}(h) - P_{ij}(t) = \sum_{k \neq j} P_{ik}(t)P_{kj}(h) - \big[1 - P_{jj}(h)\big]P_{ij}(t)$$

and thus, letting $h \to 0$, we obtain

Kolmogorov's Forward Equations For all states $i, j$, and times $t \ge 0$,

$$P'_{ij}(t) = \lim_{h \to 0}\frac{P_{ij}(t+h) - P_{ij}(t)}{h} = \sum_{k \neq j} q_{kj}P_{ik}(t) - v_j P_{ij}(t)$$

Proposition 1. For a pure birth process,

$$P_{ii}(t) = e^{-\lambda_i t}, \quad i \ge 0$$

$$P_{ij}(t) = \lambda_{j-1}e^{-\lambda_j t}\int_0^t e^{\lambda_j s}P_{i,j-1}(s)\,ds, \quad j \ge i+1$$

Proof For a pure birth process, Kolmogorov's forward equations give

$$P'_{ii}(t) = -\lambda_i P_{ii}(t), \quad i \ge 0$$

$$P'_{ij}(t) = \lambda_{j-1}P_{i,j-1}(t) - \lambda_j P_{ij}(t), \quad j \ge i+1$$

The first equation immediately gives the first result. Rewriting the second equation, we have

$$P'_{ij}(t) + \lambda_j P_{ij}(t) = \lambda_{j-1}P_{i,j-1}(t)$$

or, equivalently,

$$\frac{d}{dt}\left[e^{\lambda_j t}P_{ij}(t)\right] = \lambda_{j-1}e^{\lambda_j t}P_{i,j-1}(t)$$

Hence, since $P_{ij}(0) = 0$ for $j > i$, integrating from $0$ to $t$ gives the desired result.
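Proposition 1 can be exercised numerically. For constant rates $\lambda_j \equiv \lambda$ the pure birth process is a Poisson process, so the recursion should reproduce $P_{i,i+k}(t) = e^{-\lambda t}(\lambda t)^k/k!$. A Python sketch evaluating the integral with the trapezoid rule (the rate, horizon, and grid size are hypothetical):

```python
import math

# Apply Proposition 1 with constant rates lambda_j = lam (a Poisson
# process), computing the integral with the trapezoid rule on a grid.
lam, T, n = 1.5, 2.0, 4000
grid = [T * k / n for k in range(n + 1)]
h = T / n
P = [math.exp(-lam * s) for s in grid]     # P_{ii}(s) = e^{-lam s}
for k in range(1, 4):                      # build P_{i,i+k}(s) recursively
    integrand = [math.exp(lam * s) * p for s, p in zip(grid, P)]
    cum, new = 0.0, [0.0]                  # P_{ij}(0) = 0 for j > i
    for m in range(1, n + 1):
        cum += 0.5 * h * (integrand[m - 1] + integrand[m])
        new.append(lam * math.exp(-lam * grid[m]) * cum)
    P = new
    exact = math.exp(-lam * T) * (lam * T) ** k / math.factorial(k)
    print(k, abs(P[-1] - exact))           # trapezoid error, shrinks with n
```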

4. Limiting Probabilities

The probability that a continuous-time Markov chain will be in state $j$ at time $t$ often converges to a limiting value that is independent of the initial state. That is, if we call this value $P_j$, then

$$P_j \equiv \lim_{t \to \infty} P_{ij}(t)$$

where we are assuming that the limit exists and is independent of the initial state $i$.

Consider the forward equations

$$P'_{ij}(t) = \sum_{k \neq j} q_{kj}P_{ik}(t) - v_j P_{ij}(t)$$

Now, if we let $t \to \infty$, then assuming that we can interchange limit and summation, we obtain

$$\lim_{t \to \infty} P'_{ij}(t) = \lim_{t \to \infty}\left[\sum_{k \neq j} q_{kj}P_{ik}(t) - v_j P_{ij}(t)\right] = \sum_{k \neq j} q_{kj}P_k - v_j P_j$$

Since $P_{ij}(t)$ is a bounded function, if $P'_{ij}(t)$ converges, it must converge to 0. Hence we have

$$v_j P_j = \sum_{k \neq j} q_{kj}P_k, \quad \text{for all states } j \tag{4.1}$$

$$\sum_j P_j = 1$$

Thus we can solve for the limiting probabilities.
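As a concrete instance of Equations (4.1), the M/M/1 queue ($s = 1$, $\lambda < \mu$) has the well-known limiting probabilities $P_n = (1-\rho)\rho^n$ with $\rho = \lambda/\mu$, which we can verify against the balance equations. A Python sketch with hypothetical rates:

```python
# Check that the M/M/1 limiting probabilities P_n = (1 - rho) rho^n,
# rho = lam/mu < 1, satisfy the balance equations (4.1):
# v_j P_j = sum_{k != j} q_{kj} P_k.
lam, mu = 1.0, 2.5      # hypothetical arrival and service rates
rho = lam / mu
P = [(1 - rho) * rho ** n for n in range(50)]

# j = 0: v_0 = lam; the only flow in is from state 1 at rate mu.
err = [abs(lam * P[0] - mu * P[1])]
# j >= 1: v_j = lam + mu; flow in from j-1 (rate lam) and j+1 (rate mu).
for j in range(1, 48):
    err.append(abs((lam + mu) * P[j] - (lam * P[j - 1] + mu * P[j + 1])))
print(max(err))  # ~0 up to floating point
```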

Sufficient Conditions for the Existence of Limiting Probabilities:

  1. all states of the Markov chain communicate in the sense that starting in state i there is a positive probability of ever being in state j, for all i,j and
  2. the Markov chain is positive recurrent in the sense that, starting in any state, the mean time to return to the state is finite.

If the above two conditions hold, then the limiting probabilities will exist and satisfy Equations (4.1). In addition, $P_j$ also has the interpretation of being the long-run proportion of time that the process is in state $j$.

5. Time Reversibility

Suppose a continuous-time Markov chain has limiting probabilities. If we consider the sequence of states visited, ignoring the amount of time spent in each state during a visit, then this sequence constitutes a discrete-time Markov chain with transition probabilities $P_{ij}$. We call this discrete-time Markov chain the embedded chain. Its limiting probabilities exist and, as discussed earlier, satisfy

$$\pi_j = \sum_i \pi_i P_{ij}, \qquad \sum_i \pi_i = 1$$

Since $\pi_i$ represents the proportion of transitions that take the process into state $i$, and because $1/v_i$ is the mean time spent in state $i$ during a visit, it is intuitive that $P_i$, the proportion of time in state $i$, should be a weighted average of the $\pi_i$, with each $\pi_i$ weighted proportionally to $1/v_i$. That is,

$$P_i = \frac{\pi_i / v_i}{\sum_j \pi_j / v_j}$$
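The weighted-average formula can be checked on a small example. A Python sketch for a hypothetical three-state birth and death chain, comparing the formula with the limiting probabilities obtained directly from detailed balance:

```python
# A 3-state birth and death chain with hypothetical rates: compare
# P_i = (pi_i / v_i) / sum_j (pi_j / v_j) with the limiting
# probabilities obtained directly.
lam, mu = 2.0, 3.0
v = [lam, lam + mu, mu]                    # v_i = total rate out of state i

# Embedded chain stationary distribution, solved by hand from pi = pi P:
# P01 = 1, P10 = mu/(lam+mu), P12 = lam/(lam+mu), P21 = 1.
pi1 = 1.0
pi0 = pi1 * mu / (lam + mu)
pi2 = pi1 * lam / (lam + mu)
s = pi0 + pi1 + pi2
pi = [pi0 / s, pi1 / s, pi2 / s]

w = [p / r for p, r in zip(pi, v)]
P_time = [x / sum(w) for x in w]           # proportion of time in each state

# Direct limiting probabilities from detailed balance: P1 = P0*lam/mu, etc.
P0 = 1.0 / (1 + lam / mu + (lam / mu) ** 2)
P_direct = [P0, P0 * lam / mu, P0 * (lam / mu) ** 2]
print(max(abs(a - b) for a, b in zip(P_time, P_direct)))  # ~0
```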

Suppose now that a continuous-time Markov chain has been in operation for a long time, and that starting at some large time $T$ we trace the process going backward in time. If the process is in state $i$ at some large time $t$, the probability that it has been in that state for an amount of time greater than $s$ is approximately $e^{-v_i s}$. That is,

$$P\{\text{the process is in state } i \text{ throughout } [t-s,\, t] \mid X(t) = i\} = \frac{P\{X(t-s) = i\}\,e^{-v_i s}}{P\{X(t) = i\}} \approx e^{-v_i s}$$

Thus, the sequence of states visited by the reversed process constitutes a discrete-time Markov chain with transition probabilities $Q_{ij}$ given by

$$Q_{ij} = \frac{\pi_j P_{ji}}{\pi_i}$$

Therefore, a continuous-time Markov chain will be time reversible if the embedded chain is time reversible, that is, if

$$\pi_i P_{ij} = \pi_j P_{ji}, \quad \text{for all } i, j$$

Using the fact that $P_i = (\pi_i/v_i)/\sum_j (\pi_j/v_j)$, we see that the preceding is equivalent to

$$P_i q_{ij} = P_j q_{ji}, \quad \text{for all } i, j$$

Since $P_i$ is the proportion of time in state $i$ and $q_{ij}$ is the rate, when in state $i$, of going to state $j$, the condition of time reversibility is that the rate at which the process goes directly from state $i$ to state $j$ equals the rate at which it goes directly from $j$ to $i$.

Proposition 5.1 If for some set $\{P_i\}$

$$\sum_i P_i = 1$$

and

$$P_i q_{ij} = P_j q_{ji}, \quad \text{for all } i \neq j,$$

then the continuous-time Markov chain is time reversible and $P_i$ represents the limiting probability of being in state $i$.
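Proposition 5.1 turns finding limiting probabilities into a matter of guessing and verifying. A Python sketch for a birth and death chain truncated to $\{0, \ldots, N\}$ (the rates are hypothetical), where the natural guess $P_i \propto (\lambda/\mu)^i$ satisfies the detailed balance condition:

```python
# Birth and death chain on {0,...,N}: guess P_i proportional to
# (lam/mu)^i and verify P_i q_{ij} = P_j q_{ji} for all neighbors.
lam, mu, N = 2.0, 5.0, 10
rho = lam / mu
w = [rho ** i for i in range(N + 1)]
P = [x / sum(w) for x in w]

# Only neighboring states communicate: q_{i,i+1} = lam, q_{i,i-1} = mu.
ok = all(abs(P[i] * lam - P[i + 1] * mu) < 1e-12 for i in range(N))
print(ok, abs(sum(P) - 1.0))  # True, ~0
```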

Proposition 5.2 A time reversible chain with limiting probabilities $P_j$, $j \in S$, that is truncated to the set $A \subset S$ and remains irreducible is also time reversible and has limiting probabilities $P_j^A$ given by

$$P_j^A = \frac{P_j}{\sum_{i \in A} P_i}, \quad j \in A$$

Proposition 5.3 If $\{X_i(t), t \ge 0\}$, $i = 1, \ldots, n$, are independent time reversible continuous-time Markov chains, then the vector process $\{(X_1(t), \ldots, X_n(t)), t \ge 0\}$ is also a time reversible continuous-time Markov chain.

6. Uniformization

Consider a continuous-time Markov chain in which the mean time spent in a state is the same for all states; that is, suppose $v_i \equiv v$ for all states $i$. Let $N(t)$ denote the number of transitions by time $t$; then $\{N(t), t \ge 0\}$ is a Poisson process with rate $v$.

To compute the transition probabilities $P_{ij}(t)$, we can condition on $N(t)$:

$$P_{ij}(t) = P\{X(t)=j \mid X(0)=i\} = \sum_{n=0}^{\infty} P\{X(t)=j \mid X(0)=i, N(t)=n\}\,P\{N(t)=n \mid X(0)=i\}$$

$$= \sum_{n=0}^{\infty} P\{X(t)=j \mid X(0)=i, N(t)=n\}\,e^{-vt}\frac{(vt)^n}{n!}$$

Since the distribution of the time spent in each state is the same for all states, the event $N(t) = n$ gives us no information about which states were visited. Hence,

$$P\{X(t)=j \mid X(0)=i, N(t)=n\} = P_{ij}^n$$

where $P_{ij}^n$ is the $n$-stage transition probability associated with the discrete-time Markov chain with transition probabilities $P_{ij}$; and so when $v_i \equiv v$,

$$P_{ij}(t) = \sum_{n=0}^{\infty} P_{ij}^n\, e^{-vt}\frac{(vt)^n}{n!}$$

The assumption $v_i \equiv v$ is quite restrictive in practice, but by the trick of allowing fictitious transitions from a state to itself, most Markov chains can be put in that form. Let $v$ satisfy

$$v_i \le v, \quad \text{for all } i$$

Any Markov chain satisfying this condition can be thought of as a process that spends an exponential amount of time with rate $v$ in state $i$ and then makes a transition to $j$ with transition probability $P_{ij}^*$, where

$$P_{ij}^* = \begin{cases} \dfrac{v_i}{v}P_{ij}, & j \neq i \\[4pt] 1 - \dfrac{v_i}{v}, & j = i \end{cases}$$

Hence the transition probabilities can be computed by

$$P_{ij}(t) = \sum_{n=0}^{\infty} P_{ij}^{*n}\, e^{-vt}\frac{(vt)^n}{n!}$$

where $P_{ij}^{*n}$ are the $n$-stage transition probabilities of the chain with transition probabilities $P_{ij}^*$.
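To illustrate uniformization, consider again the two-state machine of Example 4, whose $P_{00}(t)$ is known in closed form. A Python sketch with hypothetical rates, taking $v = \max(v_0, v_1)$ and truncating the series at a large $N$:

```python
import math

# Uniformization for the two-state machine of Example 4:
# state 0 = working (v0 = lam), state 1 = down (v1 = mu), v = max = 2.
lam, mu, v, t = 1.0, 2.0, 2.0, 0.8

# Uniformized one-step transition matrix P*.
Pstar = [[1 - lam / v, lam / v],
         [mu / v, 1 - mu / v]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# P(t) = sum_n P*^n e^{-vt} (vt)^n / n!, truncated at N = 60 terms.
total = [[0.0, 0.0], [0.0, 0.0]]
power = [[1.0, 0.0], [0.0, 1.0]]           # P*^0 = identity
for n in range(60):
    w = math.exp(-v * t) * (v * t) ** n / math.factorial(n)
    total = [[total[i][j] + w * power[i][j] for j in range(2)] for i in range(2)]
    power = matmul(power, Pstar)

exact = lam / (lam + mu) * math.exp(-(lam + mu) * t) + mu / (lam + mu)
print(abs(total[0][0] - exact))  # ~0
```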

7. Computing the Transition Probabilities

For any pair of states $i$ and $j$, let

$$r_{ij} = \begin{cases} q_{ij}, & i \neq j \\ -v_i, & i = j \end{cases}$$

Then we can rewrite Kolmogorov's forward and backward equations as follows:

$$P'_{ij}(t) = \sum_k r_{ik}P_{kj}(t) \quad \text{(backward)}$$

$$P'_{ij}(t) = \sum_k r_{kj}P_{ik}(t) \quad \text{(forward)}$$

This representation is especially revealing in matrix notation. Define the matrices $R$, $P(t)$, and $P'(t)$ by letting the element in row $i$, column $j$ of each be, respectively, $r_{ij}$, $P_{ij}(t)$, and $P'_{ij}(t)$. Then

$$P'(t) = RP(t) \quad \text{(backward)}$$

$$P'(t) = P(t)R \quad \text{(forward)}$$

In analogy with the scalar differential equation $f'(t) = cf(t)$, whose solution is

$$f(t) = f(0)e^{ct}$$

the solution of the forward equation can be written as

$$P(t) = P(0)e^{Rt}$$

Since $P(0) = I$, this yields

$$P(t) = e^{Rt}$$

where the matrix $e^{Rt}$ is defined by

$$e^{Rt} = \sum_{n=0}^{\infty} \frac{R^n t^n}{n!}$$
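As a check that $P(t) = e^{Rt}$, we can evaluate the truncated power series for the two-state machine of Example 4 and compare the $(0,0)$ entry with the closed form found there. A Python sketch (the rates and time are hypothetical):

```python
import math

# Compute P(t) = e^{Rt} by the truncated series sum_n (Rt)^n / n!
# for the two-state machine of Example 4.
lam, mu, t = 1.0, 2.0, 0.8
R = [[-lam, lam], [mu, -mu]]               # r_ii = -v_i, r_ij = q_ij

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[0.0, 0.0], [0.0, 0.0]]
term = [[1.0, 0.0], [0.0, 1.0]]            # (Rt)^0 / 0! = I
for n in range(1, 60):
    P = [[P[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    term = [[x * t / n for x in row] for row in matmul(term, R)]

exact = lam / (lam + mu) * math.exp(-(lam + mu) * t) + mu / (lam + mu)
print(abs(P[0][0] - exact))  # ~0
```

In practice one would use a library routine for the matrix exponential rather than a raw truncated series, but the series makes the definition concrete.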
