It follows from the argument made in Exercise 38 that state \(i\) is null recurrent if it is recurrent and \(\pi_{i}=0\). Consider the one-dimensional symmetric random walk of Example 4.18. (a) Argue that \(\pi_i = \pi_0\) for all \(i\). (b) Argue that all states are null recurrent.

Short Answer

Expert verified
In the one-dimensional symmetric random walk, the balance equations force the long-run proportions \(\pi_i\) to be the same for every state, so \(\pi_i = \pi_0\) for all \(i\). Every state is recurrent, and since there are infinitely many states, the normalization condition \(\sum_i \pi_i = 1\) can hold only if \(\pi_0 = 0\). Hence all states are recurrent with \(\pi_i = 0\); that is, all states are null recurrent.

Step by step solution

01

Understanding One-Dimensional Symmetric Random Walk

In a one-dimensional symmetric random walk, a particle moves one step to the right or one step to the left at each stage, each direction chosen with probability \(\frac{1}{2}\).
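Written out in the notation used elsewhere in this chapter, the chain has state space \(\{0, \pm 1, \pm 2, \ldots\}\) and transition probabilities \[P_{i, i+1} = P_{i, i-1} = \frac{1}{2}, \qquad i = 0, \pm 1, \pm 2, \ldots,\] with \(P_{i, j} = 0\) for every other \(j\).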
02

Part (a): Argue that \(\pi_i = \pi_0\) for all i

Here \(\pi_i\) denotes the long-run proportion of time that the chain spends in state \(i\). Any such collection of long-run proportions must satisfy the balance equations \(\pi_j = \sum_i \pi_i P_{i j}\). In the symmetric random walk, state \(i\) can be entered only from its two neighbors \(i-1\) and \(i+1\), each with probability \(\frac{1}{2}\), so for all states \(i\) we have \[\pi_i = \frac{1}{2} \pi_{i-1} + \frac{1}{2} \pi_{i+1}\] Rearranging gives \(\pi_{i+1} - \pi_i = \pi_i - \pi_{i-1}\): the successive differences of the \(\pi_i\) are all equal, so \(\pi_i\) is a linear function of \(i\). Since \(\pi_i \geqslant 0\) must hold for every integer \(i\), positive and negative, a nonzero slope is impossible, because the values would eventually become negative in one direction. The common difference is therefore zero, and we conclude that \[\pi_i = \pi_0\] for all states \(i\).
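To make the telescoping step explicit: summing the equal differences from state \(0\) up to state \(i\) gives \[\pi_i - \pi_0 = \sum_{k=1}^{i}\left(\pi_k - \pi_{k-1}\right) = i\left(\pi_1 - \pi_0\right), \qquad i \geqslant 1,\] and the analogous identity holds for \(i \leqslant -1\). If \(\pi_1 - \pi_0 > 0\), then \(\pi_i \rightarrow -\infty\) as \(i \rightarrow -\infty\); if \(\pi_1 - \pi_0 < 0\), then \(\pi_i \rightarrow -\infty\) as \(i \rightarrow \infty\). Either way some \(\pi_i\) would be negative, which is impossible, so \(\pi_1 - \pi_0 = 0\) and the \(\pi_i\) are constant.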
03

Part (b): Argue that all states are null recurrent

To argue that all states are null recurrent, we must show both that they are recurrent and that \(\pi_i = 0\) for all \(i\). Recurrence is the content of Example 4.18: \(\sum_n P_{00}^{n} = \infty\) for the symmetric walk, so state \(0\) is recurrent, and since the chain is irreducible, every state is recurrent. From part (a), all states share the same long-run proportion, \(\pi_i = \pi_0\). But the state space consists of all the integers, so there are infinitely many states; if \(\pi_0 > 0\), the sum \(\sum_i \pi_i\) of infinitely many copies of \(\pi_0\) would be infinite, violating the normalization condition \(\sum_i \pi_i = 1\). Therefore we must have \[\pi_0 = 0\] Thus every state is recurrent with \(\pi_i = 0\), and by the criterion quoted from Exercise 38, all states of the one-dimensional symmetric random walk are null recurrent.
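To make both halves concrete: as in Example 4.18, the walk can revisit \(0\) only at even times, and Stirling's approximation gives \[P_{00}^{2n} = \binom{2n}{n}\left(\frac{1}{2}\right)^{2n} \sim \frac{1}{\sqrt{\pi n}}, \qquad \text{so} \quad \sum_{n=1}^{\infty} P_{00}^{2n} = \infty,\] which is exactly the divergence criterion for recurrence of state \(0\). And if we supposed \(\pi_0 = c > 0\), then \[\sum_{i=-\infty}^{\infty} \pi_i = \sum_{i=-\infty}^{\infty} c = \infty \neq 1,\] a contradiction, hence \(c = 0\). Equivalently, null recurrence says the expected time to return to each state is infinite, which is consistent with a long-run proportion of \(\pi_i = 0\).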


Most popular questions from this chapter

It follows from Theorem \(4.2\) that for a time reversible Markov chain $$ P_{i j} P_{j k} P_{k i}=P_{i k} P_{k j} P_{j i}, \quad \text { for all } i, j, k $$ It turns out that if the state space is finite and \(P_{i j}>0\) for all \(i, j\), then the preceding is also a sufficient condition for time reversibility. (That is, in this case, we need only check Equation \((4.26)\) for paths from \(i\) to \(i\) that have only two intermediate states.) Prove this. Hint: Fix \(i\) and show that the equations $$ \pi_{j} P_{j k}=\pi_{k} P_{k j} $$ are satisfied by \(\pi_{j}=c P_{i j} / P_{j i}\), where \(c\) is chosen so that \(\sum_{j} \pi_{j}=1\).

In the gambler's ruin problem of Section 4.5.1, suppose the gambler's fortune is presently \(i\), and suppose that we know that the gambler's fortune will eventually reach \(N\) (before it goes to 0). Given this information, show that the probability he wins the next gamble is $$ \begin{array}{ll} \frac{p\left[1-(q / p)^{i+1}\right]}{1-(q / p)^{i}}, & \text { if } p \neq \frac{1}{2} \\ \frac{i+1}{2 i}, & \text { if } p=\frac{1}{2} \end{array} $$

For a Markov chain \(\left\{X_{n}, n \geqslant 0\right\}\) with transition probabilities \(P_{i, j}\), consider the conditional probability that \(X_{n}=m\) given that the chain started at time 0 in state \(i\) and has not yet entered state \(r\) by time \(n\), where \(r\) is a specified state not equal to either \(i\) or \(m\). We are interested in whether this conditional probability is equal to the \(n\)-stage transition probability of a Markov chain whose state space does not include state \(r\) and whose transition probabilities are $$ Q_{i, j}=\frac{P_{i, j}}{1-P_{i, r}}, \quad i, j \neq r $$ Either prove the equality $$ P\left\{X_{n}=m \mid X_{0}=i, X_{k} \neq r, k=1, \ldots, n\right\}=Q_{i, m}^{n} $$ or construct a counterexample.

A total of \(m\) white and \(m\) black balls are distributed among two urns, with each urn containing \(m\) balls. At each stage, a ball is randomly selected from each urn and the two selected balls are interchanged. Let \(X_{n}\) denote the number of black balls in urn 1 after the \(n\) th interchange. (a) Give the transition probabilities of the Markov chain \(X_{n}, n \geqslant 0\). (b) Without any computations, what do you think are the limiting probabilities of this chain? (c) Find the limiting probabilities and show that the stationary chain is time reversible.

Let \(\left\{X_{n}, n \geqslant 0\right\}\) denote an ergodic Markov chain with limiting probabilities \(\pi_{i}\). Define the process \(\left\{Y_{n}, n \geqslant 1\right\}\) by \(Y_{n}=\left(X_{n-1}, X_{n}\right)\). That is, \(Y_{n}\) keeps track of the last two states of the original chain. Is \(\left\{Y_{n}, n \geqslant 1\right\}\) a Markov chain? If so, determine its transition probabilities and find $$ \lim _{n \rightarrow \infty} P\left\{Y_{n}=(i, j)\right\} $$
