
Show that the stationary probabilities for the Markov chain having transition probabilities \(P_{i, j}\) are also the stationary probabilities for the Markov chain whose transition probabilities \(Q_{i, j}\) are given by $$ Q_{i, j}=P_{i, j}^{k} $$ for any specified positive integer \(k\).

Short Answer

In conclusion, we have shown that the stationary probabilities for a Markov chain with transition probabilities \(P_{i,j}\) are also the stationary probabilities for the Markov chain with transition probabilities \(Q_{i,j} = P_{i,j}^k\), where \(P_{i,j}^k\) denotes the \(k\)-step transition probability (the \((i,j)\) entry of the matrix power \(P^k\)), for any specified positive integer \(k\). The proof is by induction on \(k\): since \(\pi P = \pi\), repeated application of \(P\) gives \(\pi P^k = \pi\), so \(\pi\) satisfies the stationarity conditions for the chain with matrix \(Q = P^k\).

Step by step solution

01

Define stationary probabilities and Markov chains

A stationary probability distribution (also known as a stationary distribution or steady-state distribution) for a Markov chain is a probability distribution that remains unchanged as the process makes transitions from state to state. Consider two Markov chains with transition probability matrices \(P\) and \(Q\), where \(Q_{i,j} = P_{i,j}^k\) is the \(k\)-step transition probability of the original chain, that is, the probability that the chain moves from state \(i\) to state \(j\) in exactly \(k\) steps. By the Chapman-Kolmogorov equations, the matrix of \(k\)-step transition probabilities is the \(k\)th matrix power of \(P\), so \(Q = P^k\) (a matrix power, not entrywise powers; entrywise powers would not give rows summing to 1). Let \(\pi = (\pi_1, \pi_2, \ldots, \pi_n)\) denote the stationary probability distribution for the original Markov chain with transition probability matrix \(P\), where \(n\) is the number of states.
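As a concrete numerical illustration (a minimal sketch; the 3-state transition matrix below is made up for demonstration and is not part of the exercise), a stationary distribution can be computed by solving \(\pi P = \pi\) together with \(\sum_i \pi_i = 1\):

import numpy as np

# Hypothetical 3-state transition matrix (each row sums to 1);
# the entries are illustrative only.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

n = P.shape[0]
# Stationary condition pi (P - I) = 0, transposed and stacked with
# the normalization row sum(pi) = 1, solved by least squares.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)        # the stationary probabilities
print(pi @ P)    # equals pi up to floating-point error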
02

Apply the definition of stationary probabilities

By the definition of stationary probabilities, \(\pi\) satisfies the following equation for the Markov chain with transition probability matrix \(P\): \[ \pi P = \pi, \qquad \sum_{i=1}^{n} \pi_i = 1. \] We need to prove that the same stationary probability distribution \(\pi\) also satisfies this equation for the Markov chain with transition probability matrix \(Q\): \[ \pi Q = \pi. \]
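For instance, for a hypothetical two-state chain (not part of the exercise) with \[ P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}, \] solving \(\pi P = \pi\) together with \(\pi_1 + \pi_2 = 1\) gives \(\pi = (4/7,\, 3/7)\); indeed, \(\frac{4}{7}(0.7) + \frac{3}{7}(0.4) = \frac{2.8 + 1.2}{7} = \frac{4}{7} = \pi_1\), and similarly for \(\pi_2\).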
03

Expand the equation for the stationary probabilities of Q

Written componentwise, the stationarity condition \(\pi Q = \pi\) that we must verify states \[ \sum_{i=1}^{n} \pi_i Q_{i,j} = \pi_j, \qquad j = 1, 2, \ldots, n. \] Substituting \(Q_{i,j} = P_{i,j}^k\) gives \[ \sum_{i=1}^{n} \pi_i P_{i,j}^k = \pi_j, \qquad j = 1, 2, \ldots, n. \] By the Chapman-Kolmogorov equations, the \(k\)-step transition probabilities satisfy the recursion \[ P_{i,j}^k = \sum_{m=1}^{n} P_{i,m} P_{m,j}^{k-1}, \] which is what we will use in the next step.
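A quick numerical check (reusing the illustrative matrix from the Step 1 sketch) confirms that the matrix power \(Q = P^k\) is itself a valid transition matrix, each of its rows summing to 1, whereas entrywise powers of \(P\) would not be:

import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
k = 4

# k-step transition matrix via Chapman-Kolmogorov: Q = P^k (matrix power).
Q = np.linalg.matrix_power(P, k)
print(Q.sum(axis=1))       # [1. 1. 1.] -- Q is a transition matrix

# Entrywise powers, by contrast, do not have rows summing to 1.
print((P ** k).sum(axis=1))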
04

Verify the stationary probability conditions for Q

We know that the stationary probabilities of \(P\) satisfy \[ \sum_{i=1}^{n} \pi_i P_{i,j} = \pi_j, \qquad j = 1, 2, \ldots, n, \] which is exactly the required identity for \(k = 1\). We now argue by induction on \(k\). Suppose that \(\sum_{m=1}^{n} \pi_m P_{m,j}^{k-1} = \pi_j\) holds for every state \(j\). Then, using the Chapman-Kolmogorov recursion from Step 3 and interchanging the finite sums, \[ \sum_{i=1}^{n} \pi_i P_{i,j}^{k} = \sum_{i=1}^{n} \pi_i \sum_{m=1}^{n} P_{i,m} P_{m,j}^{k-1} = \sum_{m=1}^{n} \left( \sum_{i=1}^{n} \pi_i P_{i,m} \right) P_{m,j}^{k-1} = \sum_{m=1}^{n} \pi_m P_{m,j}^{k-1} = \pi_j. \] The third equality uses the stationarity of \(\pi\) for \(P\), and the last uses the induction hypothesis. In matrix form the whole argument reads \(\pi P^k = (\pi P) P^{k-1} = \pi P^{k-1} = \cdots = \pi P = \pi\). These are exactly the equations obtained in Step 3, so the stationary probability distribution \(\pi\) for the Markov chain with transition probability matrix \(P\) also satisfies the stationary probability conditions for the Markov chain with transition probability matrix \(Q\).
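Continuing the numerical sketch from the earlier steps, one can check directly that the stationary vector of \(P\) is left unchanged by \(Q = P^k\) for several values of \(k\):

import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution of P, computed as in the Step 1 sketch.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# pi Q = pi for every positive integer k.
for k in (1, 2, 5, 10):
    Q = np.linalg.matrix_power(P, k)
    print(k, np.allclose(pi @ Q, pi))    # prints True for each k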
05

Conclusion

Hence, the stationary probabilities for a Markov chain with transition probabilities \(P_{i,j}\) are also the stationary probabilities for the Markov chain with transition probabilities \(Q_{i,j} = P_{i,j}^k\), for any specified positive integer \(k\).
