
Prove that if the number of states in a Markov chain is \(M\), and if state \(j\) can be reached from state \(i\), then it can be reached in \(M\) steps or less.

Short Answer

In a Markov chain with \(M\) states, if state \(j\) can be reached from state \(i\), then there exists an integer \(n \le M\) such that the probability of transitioning from state \(i\) to state \(j\) in exactly \(n\) steps, \(P_n(i, j)\), is greater than zero. The key observation is that any positive-probability path from \(i\) to \(j\) longer than \(M\) steps must visit some state more than once; removing the loop between the repeated visits yields a strictly shorter path that still has positive probability. Repeating this removal produces a path with no repeated states, which has at most \(M-1\) transitions (at most \(M\) when \(i = j\)). Hence state \(j\) can be reached in \(M\) steps or less.

Step by step solution

01

Define the Markov chain and its properties

A Markov chain is a stochastic process with the Markov property: the distribution of the next state depends only on the current state, not on any earlier states. Formally, let \(X_0, X_1, \dots, X_n\) denote the states of a discrete-time Markov chain with finite state space \(\{1, 2, \dots, M\}\). Then the chain satisfies \(P(X_{n+1} = x_{n+1} \mid X_n = x_n, X_{n-1} = x_{n-1}, \dots , X_0 = x_0) = P(X_{n+1} = x_{n+1} \mid X_n = x_n)\), for all states \(x_0, x_1, \ldots, x_n, x_{n+1}\) and all \(n \ge 0\).
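The defining property above can be made concrete with a short simulation: sampling the next state uses only the row of the transition matrix indexed by the current state. The matrix below is a hypothetical 4-state example chosen for illustration, not part of the exercise.

```python
import numpy as np

# Hypothetical 4-state transition matrix; row u gives the distribution of
# X_{n+1} given X_n = u, so each row sums to 1.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.2, 0.3, 0.5, 0.0],
    [0.0, 0.0, 0.4, 0.6],
    [0.1, 0.0, 0.0, 0.9],
])

rng = np.random.default_rng(0)

def simulate(P, start, steps):
    """Sample a trajectory X_0, ..., X_steps of the chain.

    Each draw depends only on the current state path[-1] -- this is
    exactly the Markov property.
    """
    path = [start]
    for _ in range(steps):
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

print(simulate(P, 0, 10))
```

Note that the simulator never consults anything but the last state; the history is kept only so we can inspect the sampled trajectory.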
02

Reachability in Markov chains

In Markov chains, we say that state \(j\) is reachable from state \(i\) if there exists a sequence of steps such that the Markov chain can transition from state \(i\) to state \(j\). Mathematically, state \(j\) is reachable from state \(i\) if there exists an integer \(n \ge 1\) such that the probability of transitioning from state \(i\) to state \(j\) in exactly \(n\) steps, \(P_n(i, j)\), is greater than zero.
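This definition of reachability can be checked mechanically: \(P_n(i, j)\) is the \((i, j)\) entry of the matrix power \(P^n\), and by the result being proved it suffices to examine \(n = 1, \dots, M\). A minimal sketch, using a hypothetical 4-state chain:

```python
import numpy as np

# Hypothetical 4-state chain: 0 -> 1, 1 -> 0 or 2, 2 -> 3, 3 -> 0.
P = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0, 0.0],
])

def first_reach_time(P, i, j):
    """Smallest n >= 1 with P_n(i, j) > 0, or None if j is unreachable
    from i.  By the result proved here it suffices to try n = 1, ..., M."""
    M = len(P)
    Pn = np.eye(M)
    for n in range(1, M + 1):
        Pn = Pn @ P          # Pn now holds the n-step matrix P^n
        if Pn[i, j] > 0:
            return n
    return None

print(first_reach_time(P, 0, 3))  # 0 -> 1 -> 2 -> 3 takes 3 steps
```

The cutoff at \(n = M\) in the loop is precisely what the exercise justifies: searching beyond \(M\) steps can never reveal a state that was not already reachable.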
03

Formulating the proof

To prove that state \(j\) can be reached from state \(i\) in \(M\) steps or less, suppose state \(j\) is reachable from state \(i\). We must show that there exists an integer \(n \le M\) such that the probability of transitioning from state \(i\) to state \(j\) in exactly \(n\) steps, \(P_n(i, j)\), is greater than zero.
04

Proving the statement

Since state \(j\) is reachable from state \(i\), there is some sequence of transitions, each with positive probability, leading from \(i\) to \(j\). Picture a directed graph whose nodes are the states, with an edge from \(u\) to \(v\) whenever the one-step transition probability from \(u\) to \(v\) is positive; a path from \(i\) to \(j\) in this graph corresponds to a sequence of transitions that occurs with positive probability. Take any such path and suppose its length exceeds \(M\). Because the chain has only \(M\) states, the path must then visit some state more than once. Deleting the loop between two visits of that state yields a strictly shorter path from \(i\) to \(j\), every edge of which still has positive transition probability. Repeating this loop removal, we arrive at a path in which no state appears twice (except possibly the endpoint when \(i = j\)). Such a path visits at most \(M\) distinct states and therefore has at most \(M-1\) edges when \(i \neq j\), and at most \(M\) edges when \(i = j\). In either case there exists \(n \le M\) with \(P_n(i, j) > 0\), so state \(j\) can be reached from state \(i\) in \(M\) steps or less.
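The loop-removal argument above says that a shortest positive-probability path is simple, so breadth-first search over the transition graph finds it and its length never exceeds \(M-1\) for \(i \neq j\). A minimal sketch with a hypothetical adjacency structure (edges are the transitions with positive probability):

```python
from collections import deque

def shortest_hitting_time(adj, i, j):
    """Length of the shortest path from i to j in the transition graph,
    or None if j is unreachable.  BFS visits each state once, so any
    distance it reports is automatically at most M - 1 for i != j --
    the graph-theoretic content of the loop-removal argument."""
    dist = {i: 0}
    q = deque([i])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist.get(j)

# Hypothetical 4-state chain (M = 4): 0 -> 1, 1 -> 2 or 0, 2 -> 3, 3 -> 0.
adj = {0: [1], 1: [2, 0], 2: [3], 3: [0]}
M = 4
d = shortest_hitting_time(adj, 0, 3)
assert d is not None and d <= M - 1   # consistent with the proof
print(d)
```

Here the shortest route \(0 \to 1 \to 2 \to 3\) has length 3, which meets the bound \(M - 1 = 3\) exactly, showing the bound cannot be improved in general.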


Most popular questions from this chapter

For the random walk of Example \(4.18\) use the strong law of large numbers to give another proof that the Markov chain is transient when \(p \neq \frac{1}{2}\). Hint: Note that the state at time \(n\) can be written as \(\sum_{i=1}^{n} Y_{i}\) where the \(Y_{i}\)s are independent and \(P\{Y_{i}=1\} = p = 1 - P\{Y_{i}=-1\}\). Argue that if \(p>\frac{1}{2}\), then, by the strong law of large numbers, \(\sum_{1}^{n} Y_{i} \rightarrow \infty\) as \(n \rightarrow \infty\) and hence the initial state 0 can be visited only finitely often, and hence must be transient. A similar argument holds when \(p<\frac{1}{2}\).

Each day, one of \(n\) possible elements is requested, the ith one with probability \(P_{i}, i \geqslant 1, \sum_{1}^{n} P_{i}=1\). These elements are at all times arranged in an ordered list that is revised as follows: The element selected is moved to the front of the list with the relative positions of all the other elements remaining unchanged. Define the state at any time to be the list ordering at that time and note that there are \(n !\) possible states. (a) Argue that the preceding is a Markov chain. (b) For any state \(i_{1}, \ldots, i_{n}\) (which is a permutation of \(\left.1,2, \ldots, n\right)\), let \(\pi\left(i_{1}, \ldots, i_{n}\right)\) denote the limiting probability. In order for the state to be \(i_{1}, \ldots, i_{n}\), it is necessary for the last request to be for \(i_{1}\), the last non- \(i_{1}\) request for \(i_{2}\), the last non- \(i_{1}\) or \(i_{2}\) request for \(i_{3}\), and so on. Hence, it appears intuitive that $$ \pi\left(i_{1}, \ldots, i_{n}\right)=P_{i_{1}} \frac{P_{i_{2}}}{1-P_{i_{1}}} \frac{P_{i_{3}}}{1-P_{i_{1}}-P_{i_{2}}} \cdots \frac{P_{i_{n-1}}}{1-P_{i_{1}}-\cdots-P_{i_{n-2}}} $$ Verify when \(n=3\) that the preceding are indeed the limiting probabilities.

A total of \(m\) white and \(m\) black balls are distributed among two urns, with each urn containing \(m\) balls. At each stage, a ball is randomly selected from each urn and the two selected balls are interchanged. Let \(X_{n}\) denote the number of black balls in urn 1 after the \(n\) th interchange. (a) Give the transition probabilities of the Markov chain \(X_{n}, n \geqslant 0\). (b) Without any computations, what do you think are the limiting probabilities of this chain? (c) Find the limiting probabilities and show that the stationary chain is time reversible.

It follows from Theorem \(4.2\) that for a time reversible Markov chain $$ P_{i j} P_{j k} P_{k i}=P_{i k} P_{k j} P_{j i}, \quad \text { for all } i, j, k $$ It turns out that if the state space is finite and \(P_{i j}>0\) for all \(i, j\), then the preceding is also a sufficient condition for time reversibility. (That is, in this case, we need only check Equation \((4.26)\) for paths from \(i\) to \(i\) that have only two intermediate states.) Prove this. Hint: Fix \(i\) and show that the equations $$ \pi_{j} P_{j k}=\pi_{k} P_{k j} $$ are satisfied by \(\pi_{j}=c P_{i j} / P_{j i}\), where \(c\) is chosen so that \(\sum_{j} \pi_{j}=1\)

A transition probability matrix \(\mathbf{P}\) is said to be doubly stochastic if the sum over each column equals one; that is, $$ \sum_{i} P_{i j}=1, \quad \text { for all } j $$ If such a chain is irreducible and aperiodic and consists of \(M+1\) states \(0,1, \ldots, M\), show that the limiting probabilities are given by $$ \pi_{j}=\frac{1}{M+1}, \quad j=0,1, \ldots, M $$
