
A DNA nucleotide has any of four values. A standard model for a mutational change of the nucleotide at a specific location is a Markov chain model that supposes that in going from period to period the nucleotide does not change with probability \(1-3 \alpha\), and if it does change then it is equally likely to change to any of the other three values, for some \(0<\alpha<\frac{1}{3}\). (a) Show that \(P_{1,1}^{n}=\frac{1}{4}+\frac{3}{4}(1-4 \alpha)^{n}\). (b) What is the long-run proportion of time the chain is in each state?

Short Answer

(a) The probability that the chain started in state 1 is again in state 1 after \(n\) steps is \(P_{1,1}^{n} = \frac{1}{4} + \frac{3}{4}(1-4\alpha)^n\). (b) By symmetry the stationary distribution is uniform, \(\pi_i = \frac{1}{4}\) for \(i = 1, 2, 3, 4\), so in the long run the chain spends a quarter of its time in each state.

Step by step solution

01

Interpret the given data

Since a DNA nucleotide takes one of four values, label them as states 1, 2, 3, and 4. In each period the nucleotide stays in its current state with probability \(1-3\alpha\); if it changes, it moves to each of the other three states with probability \(\alpha\). The transition probability matrix is therefore \[ P = \left[ {\begin{array}{cccc} 1-3\alpha & \alpha & \alpha & \alpha \\ \alpha & 1-3\alpha & \alpha & \alpha \\ \alpha & \alpha & 1-3\alpha & \alpha \\ \alpha & \alpha & \alpha & 1-3\alpha \\ \end{array}} \right] \] Next we compute \(P_{1,1}^{n}\).
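As a quick sanity check (not part of the original solution), the following Python sketch builds this matrix for an illustrative value of \(\alpha\) and confirms that each row sums to 1:

```python
import numpy as np

alpha = 0.1  # illustrative value; any 0 < alpha < 1/3 works

# Off-diagonal entries are alpha; diagonal entries are 1 - 3*alpha.
P = np.full((4, 4), alpha)
np.fill_diagonal(P, 1 - 3 * alpha)

# Every row is a probability distribution over the four states.
assert np.allclose(P.sum(axis=1), 1.0)
print(P)
```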
02

Derive the formula for \(P_{1,1}^{n}\)

Write \(p_n = P_{1,1}^{n}\), the probability that the chain started in state 1 is in state 1 after \(n\) steps. Condition on the state after \(n\) steps: the chain is in state 1 with probability \(p_n\) and stays there with probability \(1-3\alpha\), or it is in one of the other three states (total probability \(1-p_n\)) and returns to state 1 with probability \(\alpha\). This gives the recursion \[ p_{n+1} = (1-3\alpha)p_n + \alpha(1-p_n) = (1-4\alpha)p_n + \alpha. \] The fixed point is \(p^* = \frac{\alpha}{4\alpha} = \frac{1}{4}\), and subtracting it shows that the deviation from \(\frac{1}{4}\) shrinks geometrically: \[ p_{n+1} - \tfrac{1}{4} = (1-4\alpha)\left(p_n - \tfrac{1}{4}\right). \] Since \(p_0 = 1\), iterating gives \(p_n - \frac{1}{4} = \frac{3}{4}(1-4\alpha)^n\), that is, \[ P_{1,1}^{n} = \frac{1}{4} + \frac{3}{4}(1-4\alpha)^n. \] Next we calculate the long-run proportion.
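The closed form can be checked numerically; the sketch below (with arbitrary test values for \(\alpha\) and \(n\)) compares it against a direct matrix power:

```python
import numpy as np

alpha, n = 0.1, 12  # arbitrary test values

P = np.full((4, 4), alpha)
np.fill_diagonal(P, 1 - 3 * alpha)

# (1,1) entry of the n-step transition matrix P^n ...
p_matrix = np.linalg.matrix_power(P, n)[0, 0]
# ... versus the closed form derived above.
p_formula = 0.25 + 0.75 * (1 - 4 * alpha) ** n

assert np.isclose(p_matrix, p_formula)
print(p_matrix, p_formula)
```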
03

Derive the long-run proportion of time the chain is in each state

The long-run proportion of time the chain spends in each state is given by the stationary distribution \(\pi = (\pi_1, \pi_2, \pi_3, \pi_4)\), which satisfies: 1. \(\pi P = \pi\); 2. \(\sum_{i=1}^{4} \pi_i = 1\). Because \(0 < \alpha < \frac{1}{3}\), all transition probabilities are positive, so the chain is irreducible and aperiodic, and \(\pi\) is also the limiting distribution: \[ \lim_{n\to\infty} P^n = \left[ {\begin{array}{cccc} \pi_1 & \pi_2 & \pi_3 & \pi_4 \\ \pi_1 & \pi_2 & \pi_3 & \pi_4 \\ \pi_1 & \pi_2 & \pi_3 & \pi_4 \\ \pi_1 & \pi_2 & \pi_3 & \pi_4 \\ \end{array}} \right] \] The stationarity condition reads \[ (\pi_1, \pi_2, \pi_3, \pi_4) = (\pi_1, \pi_2, \pi_3, \pi_4) \left[ {\begin{array}{cccc} 1-3\alpha & \alpha & \alpha & \alpha \\ \alpha & 1-3\alpha & \alpha & \alpha \\ \alpha & \alpha & 1-3\alpha & \alpha \\ \alpha & \alpha & \alpha & 1-3\alpha \\ \end{array}} \right] \] Since \(P\) is doubly stochastic (each column also sums to 1), the uniform vector satisfies this equation; equivalently, the four states play symmetric roles, so \(\pi_1 = \pi_2 = \pi_3 = \pi_4\). Normalizing gives \(\pi_i = \frac{1}{4}\) for each state. This is consistent with part (a): since \(|1-4\alpha| < 1\), \(P_{1,1}^n \to \frac{1}{4}\) as \(n \to \infty\).
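One can likewise recover the stationary distribution numerically; the sketch below (illustrative, not from the original solution) extracts \(\pi\) as the left eigenvector of \(P\) for eigenvalue 1, normalized to sum to 1:

```python
import numpy as np

alpha = 0.1  # illustrative value

P = np.full((4, 4), alpha)
np.fill_diagonal(P, 1 - 3 * alpha)

# pi P = pi means pi^T is an eigenvector of P^T with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.isclose(eigvals, 1.0)]).ravel()
pi /= pi.sum()  # normalize so the components sum to 1

print(pi)  # approximately [0.25, 0.25, 0.25, 0.25]
```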

Most popular questions from this chapter

Find the average premium received per policyholder of the insurance company of Example \(4.27\) if \(\lambda=1 / 4\) for one-third of its clients, and \(\lambda=1 / 2\) for two-thirds of its clients.

In Example 4.3, Gary is in a cheerful mood today. Find the expected number of days until he has been glum for three consecutive days.

In a good weather year the number of storms is Poisson distributed with mean 1; in a bad year it is Poisson distributed with mean 3. Suppose that any year's weather conditions depend on past years only through the previous year's condition. Suppose that a good year is equally likely to be followed by either a good or a bad year, and that a bad year is twice as likely to be followed by a bad year as by a good year. Suppose that last year (call it year 0) was a good year. (a) Find the expected total number of storms in the next two years (that is, in years 1 and 2). (b) Find the probability there are no storms in year 3. (c) Find the long-run average number of storms per year.

Let \(P^{(1)}\) and \(P^{(2)}\) denote transition probability matrices for ergodic Markov chains having the same state space. Let \(\pi^{1}\) and \(\pi^{2}\) denote the stationary (limiting) probability vectors for the two chains. Consider a process defined as follows: (a) \(X_{0}=1\). A coin is then flipped and if it comes up heads, then the remaining states \(X_{1}, \ldots\) are obtained from the transition probability matrix \(P^{(1)}\), and if tails, from the matrix \(P^{(2)}\). Is \(\{X_{n}, n \geqslant 0\}\) a Markov chain? If \(p = P\{\text{coin comes up heads}\}\), what is \(\lim_{n \rightarrow \infty} P(X_{n}=i)\)? (b) \(X_{0}=1\). At each stage the coin is flipped and if it comes up heads, then the next state is chosen according to \(P^{(1)}\), and if tails comes up, then it is chosen according to \(P^{(2)}\). In this case do the successive states constitute a Markov chain? If so, determine the transition probabilities. Show by a counterexample that the limiting probabilities are not the same as in part (a).

Consider a Markov chain in steady state. Say that a run of zeroes of length \(k\) ends at time \(m\) if $$ X_{m-k-1} \neq 0, \quad X_{m-k}=X_{m-k+1}=\cdots=X_{m-1}=0, \quad X_{m} \neq 0 $$ Show that the probability of this event is \(\pi_{0}\left(P_{0,0}\right)^{k-1}\left(1-P_{0,0}\right)^{2}\), where \(\pi_{0}\) is the limiting probability of state 0.
