
Each day, one of \(n\) possible elements is requested, the ith one with probability \(P_{i}, i \geqslant 1, \sum_{1}^{n} P_{i}=1\). These elements are at all times arranged in an ordered list that is revised as follows: The element selected is moved to the front of the list with the relative positions of all the other elements remaining unchanged. Define the state at any time to be the list ordering at that time and note that there are \(n !\) possible states. (a) Argue that the preceding is a Markov chain. (b) For any state \(i_{1}, \ldots, i_{n}\) (which is a permutation of \(\left.1,2, \ldots, n\right)\), let \(\pi\left(i_{1}, \ldots, i_{n}\right)\) denote the limiting probability. In order for the state to be \(i_{1}, \ldots, i_{n}\), it is necessary for the last request to be for \(i_{1}\), the last non- \(i_{1}\) request for \(i_{2}\), the last non- \(i_{1}\) or \(i_{2}\) request for \(i_{3}\), and so on. Hence, it appears intuitive that $$ \pi\left(i_{1}, \ldots, i_{n}\right)=P_{i_{1}} \frac{P_{i_{2}}}{1-P_{i_{1}}} \frac{P_{i_{3}}}{1-P_{i_{1}}-P_{i_{2}}} \cdots \frac{P_{i_{n-1}}}{1-P_{i_{1}}-\cdots-P_{i_{n-2}}} $$ Verify when \(n=3\) that the preceding are indeed the limiting probabilities.

Short Answer

The procedure is a Markov chain because it has the Markov property: the next list ordering is determined by the current ordering and the identity of the next request alone, so the probability of the future state depends only on the current state, not on past states. For \(n = 3\) there are \(3! = 6\) possible permutations. Computing the six values from the proposed formula, we find that they sum to 1 and, more importantly, that they satisfy the stationarity (balance) equations of the chain, which confirms that the intuitive formula gives the limiting probabilities when \(n = 3\).

Step by step solution

01

To prove that this procedure is a Markov chain, we must show that it has the Markov property: the probability of the future state depends only on the current state, not on past states. Denote the ordering of the list at time \(t\) by \(X_t\). We must show that \(P(X_{t+1} = j \mid X_t = i, X_{t-1} = i_{t-1}, \ldots, X_0 = i_0) = P(X_{t+1} = j \mid X_t = i)\).

From state \(i\), the chain moves to state \(j\) precisely when \(j\) can be obtained from \(i\) by moving one element to the front of the list while keeping the relative order of the others; that element must be the first element of \(j\), say \(j_1\), and the move occurs exactly when \(j_1\) is the element requested at time \(t+1\). Hence \(P(X_{t+1} = j \mid X_t = i) = P_{j_1}\) whenever \(j\) is reachable from \(i\) in one step, and \(0\) otherwise.

Since the requests on different days are independent, the next ordering \(X_{t+1}\) is a function of the current ordering \(X_t\) and the element requested at time \(t+1\) alone; the past states \(X_{t-1}, \ldots, X_0\) carry no additional information. Therefore \(P(X_{t+1} = j \mid X_t = i, X_{t-1} = i_{t-1}, \ldots, X_0 = i_0) = P(X_{t+1} = j \mid X_t = i)\), the Markov property holds, and the procedure is a Markov chain.
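The move-to-front update described above can be sketched in a few lines of Python. The request probabilities below are hypothetical, chosen only to illustrate; the key point is that each row of the transition table is built from the current state and the request probabilities alone, which is the Markov property.

```python
from itertools import permutations

# Hypothetical request probabilities for n = 3 (any values summing to 1 work).
P = {1: 0.5, 2: 0.3, 3: 0.2}

def move_to_front(state, request):
    """The requested element moves to the front; the relative order
    of all other elements is unchanged."""
    return (request,) + tuple(e for e in state if e != request)

states = list(permutations(P))  # the n! = 6 possible list orderings

# One-step transition probabilities: from state s, requesting element j
# (probability P[j]) leads to move_to_front(s, j) -- a function of the
# current state only, independent of how the chain reached s.
T = {s: {move_to_front(s, j): P[j] for j in P} for s in states}

# Each row is a probability distribution over next states.
for s in states:
    assert abs(sum(T[s].values()) - 1.0) < 1e-9
```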

02

To verify that the given limiting probabilities are correct for \(n = 3\), use the proposed formula \(\pi(i_1, i_2, i_3) = P_{i_1}\frac{P_{i_2}}{1 - P_{i_1}}\frac{P_{i_3}}{1 - P_{i_1} - P_{i_2}}\). Since \(1 - P_{i_1} - P_{i_2} = P_{i_3}\), the last factor equals 1. There are \(3! = 6\) possible orderings of the elements \(1, 2, 3\), with proposed probabilities:

1. \(\pi(1, 2, 3) = P_1 \frac{P_2}{1 - P_1}\)
2. \(\pi(1, 3, 2) = P_1 \frac{P_3}{1 - P_1}\)
3. \(\pi(2, 1, 3) = P_2 \frac{P_1}{1 - P_2}\)
4. \(\pi(2, 3, 1) = P_2 \frac{P_3}{1 - P_2}\)
5. \(\pi(3, 1, 2) = P_3 \frac{P_1}{1 - P_3}\)
6. \(\pi(3, 2, 1) = P_3 \frac{P_2}{1 - P_3}\)

First check that these six values sum to 1, a necessary condition for a probability distribution. Grouping the pairs with a common first factor, \(\pi(1, 2, 3) + \pi(1, 3, 2) = P_1 \frac{P_2 + P_3}{1 - P_1} = P_1\), and similarly the pairs beginning with \(P_2\) and \(P_3\) sum to \(P_2\) and \(P_3\). Hence the total is \(P_1 + P_2 + P_3 = 1\).

Summing to 1 is not by itself a verification, since any probability vector does so; we must also check the stationarity (balance) equations \(\pi = \pi P\). The chain is in state \((i_1, i_2, i_3)\) after a request for \(i_1\) made from any state in which \(i_2\) precedes \(i_3\) among the other elements, namely \((i_1, i_2, i_3)\), \((i_2, i_1, i_3)\), or \((i_2, i_3, i_1)\). The balance equation is therefore $$\pi(i_1, i_2, i_3) = P_{i_1}\bigl[\pi(i_1, i_2, i_3) + \pi(i_2, i_1, i_3) + \pi(i_2, i_3, i_1)\bigr]$$ Substituting the proposed values, the bracketed sum is $$\frac{P_{i_1} P_{i_2}}{1 - P_{i_1}} + \frac{P_{i_2} P_{i_1}}{1 - P_{i_2}} + \frac{P_{i_2} P_{i_3}}{1 - P_{i_2}} = \frac{P_{i_1} P_{i_2}}{1 - P_{i_1}} + P_{i_2}\,\frac{P_{i_1} + P_{i_3}}{1 - P_{i_2}} = \frac{P_{i_1} P_{i_2}}{1 - P_{i_1}} + P_{i_2} = \frac{P_{i_2}}{1 - P_{i_1}}$$ using \(P_{i_1} + P_{i_3} = 1 - P_{i_2}\). Multiplying by \(P_{i_1}\) recovers exactly \(\pi(i_1, i_2, i_3) = P_{i_1}\frac{P_{i_2}}{1 - P_{i_1}}\), so every balance equation holds. Since the proposed probabilities are nonnegative, sum to 1, and satisfy \(\pi = \pi P\), they are indeed the limiting probabilities when \(n = 3\).
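The verification for \(n = 3\) can also be confirmed numerically. The sketch below uses hypothetical request probabilities, evaluates the proposed formula for each of the six orderings, and checks both the sum-to-1 condition and the balance equations of the move-to-front chain.

```python
from itertools import permutations

P = {1: 0.5, 2: 0.3, 3: 0.2}  # hypothetical request probabilities

def move_to_front(state, request):
    # The requested element moves to the front; the rest keep their order.
    return (request,) + tuple(e for e in state if e != request)

def pi_formula(state):
    # Proposed limiting probability P_{i1} * P_{i2}/(1-P_{i1}) * ...
    # (the final factor equals 1, so state[:-1] suffices).
    prob, remaining = 1.0, 1.0
    for i in state[:-1]:
        prob *= P[i] / remaining
        remaining -= P[i]
    return prob

states = list(permutations(P))
pi = {s: pi_formula(s) for s in states}

# Necessary condition: the six values form a probability distribution.
assert abs(sum(pi.values()) - 1.0) < 1e-9

# Stationarity: pi(s) must equal the probability flowing into s, i.e. the
# chance of standing in a predecessor of s and then requesting s[0].
for s in states:
    inflow = sum(pi[t] * P[s[0]] for t in states
                 if move_to_front(t, s[0]) == s)
    assert abs(pi[s] - inflow) < 1e-9
```

With these numbers, for example, \(\pi(1, 2, 3) = 0.5 \cdot 0.3 / 0.5 = 0.3\), and the assertions pass, agreeing with the algebraic verification above.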


