
Suppose that coin 1 has probability \(0.7\) of coming up heads, and coin 2 has probability \(0.6\) of coming up heads. If the coin flipped today comes up heads, then we select coin 1 to flip tomorrow, and if it comes up tails, then we select coin 2 to flip tomorrow. If the coin initially flipped is equally likely to be coin 1 or coin 2, then what is the probability that the coin flipped on the third day after the initial flip is coin 1? Suppose that the coin flipped on Monday comes up heads. What is the probability that the coin flipped on Friday of the same week also comes up heads?

Short Answer

The probability that coin 1 is flipped on the third day after the initial flip is 0.6665. Given that the coin flipped on Monday comes up heads, the probability that the coin flipped on Friday also comes up heads is 0.6667.

Step by step solution

Step 01: Understand the daily coin flipping process

The coin flipped initially is equally likely to be coin 1 or coin 2, so each has probability 0.5. After every flip, the next day's coin is determined by the outcome: if today's flip comes up heads, coin 1 is flipped tomorrow; if it comes up tails, coin 2 is flipped tomorrow. The identity of the coin flipped each day therefore forms a two-state Markov chain with transition probabilities \(P_{11}=0.7\), \(P_{12}=0.3\), \(P_{21}=0.6\), \(P_{22}=0.4\).
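Because coin 1 is used tomorrow exactly when today's flip lands heads, the one-step update of this chain fits in a few lines. The Python sketch below is not part of the textbook solution; the constant and function names are illustrative assumptions.

    # One-step update for the two-state chain (state = which coin is flipped today).
    P_HEADS = {1: 0.7, 2: 0.6}  # probability of heads for coin 1 and coin 2

    def next_day_coin1_prob(p_coin1_today):
        # Tomorrow's coin is coin 1 exactly when today's flip lands heads.
        return p_coin1_today * P_HEADS[1] + (1 - p_coin1_today) * P_HEADS[2]

    print(next_day_coin1_prob(0.5))  # 0.65 = P(coin 1 on the first day after the initial flip)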
Step 02: Create a tree diagram for the flipping process

To illustrate the coin flipping process, draw a tree diagram showing which coin is flipped on each day, and label every branch with its probability.

For the initial flip, the branches are:
- Coin 1: 0.5 (chosen at the start)
- Coin 2: 0.5 (chosen at the start)

For the first day after the initial flip, the branches are:
- Coin 1, if the initial flip of coin 1 comes up heads: 0.7
- Coin 2, if the initial flip of coin 1 comes up tails: 0.3
- Coin 1, if the initial flip of coin 2 comes up heads: 0.6
- Coin 2, if the initial flip of coin 2 comes up tails: 0.4

The same branching rule repeats on every later day: the next day's coin is coin 1 with the probability that the current day's coin comes up heads, and coin 2 otherwise. A path through the tree corresponds to a sequence of flip outcomes, and its probability is the product of the branch probabilities along it, as in the sketch below.
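A brute-force enumeration of the tree makes the later calculations easy to check. This is only a verification sketch in Python; the helper name tree_paths and its interface are my own, not from the textbook.

    from itertools import product

    P_HEADS = {1: 0.7, 2: 0.6}  # probability of heads for coin 1 and coin 2

    def tree_paths(n_days):
        # Enumerate every heads/tails sequence of length n_days, starting from the
        # 50/50 choice of the initial coin.  Return (path probability, coin flipped
        # n_days after the initial flip) for each path.
        paths = []
        for first_coin in (1, 2):
            for outcomes in product("HT", repeat=n_days):
                prob, coin = 0.5, first_coin
                for o in outcomes:
                    p = P_HEADS[coin]
                    prob *= p if o == "H" else 1 - p
                    coin = 1 if o == "H" else 2  # heads -> coin 1 tomorrow
                paths.append((prob, coin))
        return paths

    # P(coin 1 is flipped two days after the initial flip) = 0.665
    print(round(sum(prob for prob, coin in tree_paths(2) if coin == 1), 4))
    # P(coin 1 is flipped three days after the initial flip) = 0.6665
    print(round(sum(prob for prob, coin in tree_paths(3) if coin == 1), 4))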
Step 03: Calculate the probability that coin 1 is flipped on the third day after the initial flip

Coin 1 is flipped tomorrow exactly when today's flip comes up heads. So if \(\alpha_n\) denotes the probability that coin 1 is flipped \(n\) days after the initial flip, then \(\alpha_0 = 0.5\) and \(\alpha_{n+1} = 0.7\,\alpha_n + 0.6\,(1-\alpha_n)\).

For the second day after the initial flip, the tree branches ending at coin 1 are:
1. Initial coin 1 comes up heads, then coin 1 comes up heads: \(0.5 \times 0.7 \times 0.7 = 0.245\)
2. Initial coin 1 comes up tails, then coin 2 comes up heads: \(0.5 \times 0.3 \times 0.6 = 0.09\)
3. Initial coin 2 comes up heads, then coin 1 comes up heads: \(0.5 \times 0.6 \times 0.7 = 0.21\)
4. Initial coin 2 comes up tails, then coin 2 comes up heads: \(0.5 \times 0.4 \times 0.6 = 0.12\)
Adding these probabilities gives \(\alpha_2 = 0.245 + 0.09 + 0.21 + 0.12 = 0.665\).

One more step of the recursion gives the third day after the initial flip:
\(\alpha_3 = 0.7 \times 0.665 + 0.6 \times 0.335 = 0.4655 + 0.201 = 0.6665.\)
So the probability that coin 1 is flipped on the third day after the initial flip is 0.6665.
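The same numbers fall out of iterating the recursion directly; a minimal Python check (variable names are mine):

    # alpha_n = P(coin 1 is flipped n days after the initial flip)
    alpha = 0.5  # the initial coin is equally likely to be coin 1 or coin 2
    for day in (1, 2, 3):
        alpha = 0.7 * alpha + 0.6 * (1 - alpha)
        print(day, round(alpha, 4))  # 1: 0.65, 2: 0.665, 3: 0.6665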
Step 04: Calculate the probability of heads on Friday given heads on Monday

Since the Monday flip came up heads, coin 1 is flipped on Tuesday. The coin used on Friday is determined by Thursday's flip, so first track which coin is flipped on each day from Tuesday through Friday, and then account for the Friday flip itself. Let \(\beta_n\) be the probability that coin 1 is flipped \(n\) days after Tuesday, so \(\beta_0 = 1\) and, as before, \(\beta_{n+1} = 0.7\,\beta_n + 0.6\,(1-\beta_n)\):
- Wednesday: \(\beta_1 = 0.7\)
- Thursday: \(\beta_2 = 0.7 \times 0.7 + 0.6 \times 0.3 = 0.67\)
- Friday: \(\beta_3 = 0.7 \times 0.67 + 0.6 \times 0.33 = 0.667\)
So coin 1 is flipped on Friday with probability 0.667 and coin 2 with probability 0.333. The Friday flip therefore comes up heads with probability
\(0.667 \times 0.7 + 0.333 \times 0.6 = 0.4669 + 0.1998 = 0.6667.\)
So, given that the coin flipped on Monday comes up heads, the probability that the coin flipped on Friday also comes up heads is 0.6667.
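A short Python check of this conditional calculation (again, the names are my own, not from the solution):

    # Monday's flip was heads, so coin 1 is flipped on Tuesday.
    beta = 1.0  # P(coin 1 is flipped on Tuesday)
    for day in ("Wednesday", "Thursday", "Friday"):
        beta = 0.7 * beta + 0.6 * (1 - beta)
        print(day, round(beta, 4))  # 0.7, 0.67, 0.667

    # The Friday coin then has to come up heads.
    print(round(beta * 0.7 + (1 - beta) * 0.6, 4))  # 0.6667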


Most popular questions from this chapter

Let \(A\) be a set of states, and let \(A^{c}\) be the remaining states. (a) What is the interpretation of $$ \sum_{i \in A} \sum_{j \in A^{c}} \pi_{i} P_{i j} ? $$ (b) What is the interpretation of $$ \sum_{i \in A^{c}} \sum_{j \in A} \pi_{i} P_{i j} ? $$ (c) Explain the identity $$ \sum_{i \in A} \sum_{j \in A^{c}} \pi_{i} P_{i j}=\sum_{i \in A^{c}} \sum_{j \in A} \pi_{i} P_{i j} $$

Three white and three black balls are distributed in two urns in such a way that each contains three balls. We say that the system is in state \(i, i=0,1,2,3\), if the first urn contains \(i\) white balls. At each step, we draw one ball from each urn and place the ball drawn from the first urn into the second, and conversely with the ball from the second urn. Let \(X_{n}\) denote the state of the system after the \(n\)th step. Explain why \(\{X_{n}, n=0,1,2, \ldots\}\) is a Markov chain and calculate its transition probability matrix.

It follows from the argument made in Exercise 38 that state \(i\) is null recurrent if it is recurrent and \(\pi_{i}=0\). Consider the one-dimensional symmetric random walk of Example 4.18. (a) Argue that \(\pi_{i}=\pi_{0}\) for all \(i\). (b) Argue that all states are null recurrent.

For the Markov chain with states \(1,2,3,4\) whose transition probability matrix \(\mathbf{P}\) is as specified below find \(f_{i 3}\) and \(s_{i 3}\) for \(i=1,2,3\). $$ \mathbf{P}=\left[\begin{array}{llll} 0.4 & 0.2 & 0.1 & 0.3 \\ 0.1 & 0.5 & 0.2 & 0.2 \\ 0.3 & 0.4 & 0.2 & 0.1 \\ 0 & 0 & 0 & 1 \end{array}\right] $$

In a Markov decision problem, another criterion often used, different than the expected average return per unit time, is that of the expected discounted return. In this criterion we choose a number \(\alpha, 0<\alpha<1\), and try to choose a policy so as to maximize \(E\left[\sum_{i=0}^{\infty} \alpha^{i} R\left(X_{i}, a_{i}\right)\right]\) (that is, rewards at time \(n\) are discounted at rate \(\alpha^{n}\)). Suppose that the initial state is chosen according to the probabilities \(b_{i}\). That is, $$ P\{X_{0}=i\}=b_{i}, \quad i=1, \ldots, n $$ For a given policy \(\beta\) let \(y_{j a}\) denote the expected discounted time that the process is in state \(j\) and action \(a\) is chosen. That is, $$ y_{j a}=E_{\beta}\left[\sum_{n=0}^{\infty} \alpha^{n} I_{\{X_{n}=j,\, a_{n}=a\}}\right] $$ where for any event \(A\) the indicator variable \(I_{A}\) is defined by $$ I_{A}=\left\{\begin{array}{ll} 1, & \text { if } A \text { occurs } \\ 0, & \text { otherwise } \end{array}\right. $$ (a) Show that $$ \sum_{a} y_{j a}=E\left[\sum_{n=0}^{\infty} \alpha^{n} I_{\{X_{n}=j\}}\right] $$ or, in other words, \(\sum_{a} y_{j a}\) is the expected discounted time in state \(j\) under \(\beta\). (b) Show that $$ \begin{aligned} \sum_{j} \sum_{a} y_{j a} &=\frac{1}{1-\alpha}, \\ \sum_{a} y_{j a} &=b_{j}+\alpha \sum_{i} \sum_{a} y_{i a} P_{i j}(a) \end{aligned} $$ Hint: For the second equation, use the identity $$ I_{\{X_{n+1}=j\}}=\sum_{i} \sum_{a} I_{\{X_{n}=i,\, a_{n}=a\}} I_{\{X_{n+1}=j\}} $$ Take expectations of the preceding to obtain $$ E\left[I_{\{X_{n+1}=j\}}\right]=\sum_{i} \sum_{a} E\left[I_{\{X_{n}=i,\, a_{n}=a\}}\right] P_{i j}(a) $$ (c) Let \(\{y_{j a}\}\) be a set of numbers satisfying $$ \begin{aligned} \sum_{j} \sum_{a} y_{j a} &=\frac{1}{1-\alpha}, \\ \sum_{a} y_{j a} &=b_{j}+\alpha \sum_{i} \sum_{a} y_{i a} P_{i j}(a) \end{aligned} $$ Argue that \(y_{j a}\) can be interpreted as the expected discounted time that the process is in state \(j\) and action \(a\) is chosen when the initial state is chosen according to the probabilities \(b_{j}\) and the policy \(\beta\), given by $$ \beta_{i}(a)=\frac{y_{i a}}{\sum_{a} y_{i a}} $$ is employed. Hint: Derive a set of equations for the expected discounted times when policy \(\beta\) is used and show that they are equivalent to Equation (4.38). (d) Argue that an optimal policy with respect to the expected discounted return criterion can be obtained by first solving the linear program $$ \begin{array}{ll} \operatorname{maximize} & \sum_{j} \sum_{a} y_{j a} R(j, a), \\ \text { such that } & \sum_{j} \sum_{a} y_{j a}=\frac{1}{1-\alpha}, \\ & \sum_{a} y_{j a}=b_{j}+\alpha \sum_{i} \sum_{a} y_{i a} P_{i j}(a), \\ & y_{j a} \geqslant 0, \quad \text { all } j, a \end{array} $$ and then defining the policy \(\beta^{*}\) by $$ \beta_{i}^{*}(a)=\frac{y_{i a}^{*}}{\sum_{a} y_{i a}^{*}} $$ where the \(y_{j a}^{*}\) are the solutions of the linear program.
