
You have two opponents with whom you alternate play. Whenever you play \(A\), you win with probability \(p_{A}\); whenever you play \(B\), you win with probability \(p_{B}\), where \(p_{B}>p_{A}\). If your objective is to minimize the expected number of games you need to play to win two in a row, should you start with \(A\) or with \(B\) ? Hint: Let \(E\left[N_{i}\right]\) denote the mean number of games needed if you initially play \(i\). Derive an expression for \(E\left[N_{A}\right]\) that involves \(E\left[N_{B}\right] ;\) write down the equivalent expression for \(E\left[N_{B}\right]\) and then subtract.

Short Answer

Expert verified
Condition on the first one or two games, keeping track of which opponent comes next. Opening with A gives \[E[N_A] = 1 + p_A + p_A(1-p_B)\,E[N_A] + (1-p_A)\,E[N_B],\] and, by symmetry, \[E[N_B] = 1 + p_B + p_B(1-p_A)\,E[N_B] + (1-p_B)\,E[N_A].\] Subtracting one equation from the other yields \[\bigl(E[N_A]-E[N_B]\bigr)\bigl(1 + (1-p_A)(1-p_B)\bigr) = p_A - p_B.\] Since \(p_B > p_A\), the right-hand side is negative, so \(E[N_A] < E[N_B]\): you should start with opponent A.

Step by step solution

01

Understand the problem

We must decide whether opening against opponent A or opponent B minimizes the expected number of games needed to win two in a row, given that the opponents alternate from game to game. Because the opponent changes after every game, a loss puts you back at a fresh start against whichever opponent is up next; this is why, as the hint suggests, the equation for \(E[N_A]\) will involve \(E[N_B]\) and vice versa. We derive both equations by conditioning on the outcomes of the first one or two games.
02

Derive expression for expected value with Opponent A

Let \(E[N_A]\) denote the expected number of games needed when you open against opponent A, so the order of play is A, B, A, B, \(\ldots\). Condition on the first game. With probability \(1-p_A\) you lose it; one game has been played, the next opponent is B, and you must still win two in a row, so the additional expected number of games is \(E[N_B]\). With probability \(p_A\) you win it and then face B: with probability \(p_B\) you win again and are done after 2 games, while with probability \(1-p_B\) you lose, and after those 2 games you must start over with A up next. Hence \[E[N_A] = (1-p_A)\bigl(1 + E[N_B]\bigr) + p_A\Bigl(2\,p_B + (1-p_B)\bigl(2 + E[N_A]\bigr)\Bigr).\] Note that, exactly as the hint indicates, the expression for \(E[N_A]\) involves \(E[N_B]\).
03

Simplify the expression for \(E[N_A]\)

Expanding the right-hand side and collecting terms gives \[E[N_A] = 1 + p_A + p_A(1-p_B)\,E[N_A] + (1-p_A)\,E[N_B].\] Moving the \(E[N_A]\) term to the left-hand side, \[\bigl(1 - p_A(1-p_B)\bigr)E[N_A] = 1 + p_A + (1-p_A)\,E[N_B].\] We cannot solve for \(E[N_A]\) on its own, because it is expressed in terms of \(E[N_B]\); we therefore need the companion equation for \(E[N_B]\).
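As an optional illustration of this rearrangement, here is a minimal sketch that lets a computer algebra system do the bookkeeping. It assumes the SymPy library is available; the symbol names are our own choices, not part of the textbook solution.

```python
# Rearrange E[N_A] = 1 + pA + pA*(1-pB)*E[N_A] + (1-pA)*E[N_B]
# to express E[N_A] in terms of E[N_B], as in Step 3.
import sympy as sp

pA, pB, ENA, ENB = sp.symbols('p_A p_B E_NA E_NB', positive=True)

eq_A = sp.Eq(ENA, 1 + pA + pA*(1 - pB)*ENA + (1 - pA)*ENB)
solution = sp.solve(eq_A, ENA)[0]   # the equation is linear in ENA, so one root
print(sp.simplify(solution))
# Algebraically equivalent to (1 + p_A + (1 - p_A)*E_NB) / (1 - p_A*(1 - p_B)).
```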
04

Derive expression for expected value with Opponent B

Similarly, let \(E[N_B]\) denote the expected number of games needed when you open against opponent B, so the order of play is B, A, B, A, \(\ldots\). Repeating the argument of Step 2 with the roles of A and B interchanged, \[E[N_B] = (1-p_B)\bigl(1 + E[N_A]\bigr) + p_B\Bigl(2\,p_A + (1-p_A)\bigl(2 + E[N_B]\bigr)\Bigr),\] which expands to \[E[N_B] = 1 + p_B + p_B(1-p_A)\,E[N_B] + (1-p_B)\,E[N_A].\]
05

Simplify the expression for \(E[N_B]\)

Collecting the \(E[N_B]\) terms as before, \[\bigl(1 - p_B(1-p_A)\bigr)E[N_B] = 1 + p_B + (1-p_B)\,E[N_A].\] Together with the equation from Step 3, this gives two linear equations in the two unknowns \(E[N_A]\) and \(E[N_B]\). We could solve this system explicitly, but, following the hint, it is quicker to subtract one equation from the other.
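To see the two simultaneous equations in action, here is a small numeric sketch, assuming NumPy is available; the values \(p_A = 0.4\) and \(p_B = 0.7\) are arbitrary illustrative choices. Solving the equations from Steps 3 and 5 for this pair should give roughly \(E[N_A] \approx 5.54\) and \(E[N_B] \approx 5.80\).

```python
# Solve the pair of linear equations
#    (1 - pA*(1-pB)) * E[N_A] - (1-pA) * E[N_B]         = 1 + pA
#   -(1-pB) * E[N_A]          + (1 - pB*(1-pA)) * E[N_B] = 1 + pB
# for example values of pA and pB.
import numpy as np

p_A, p_B = 0.4, 0.7          # any pair with p_B > p_A will do

M = np.array([
    [1 - p_A * (1 - p_B), -(1 - p_A)],
    [-(1 - p_B),           1 - p_B * (1 - p_A)],
])
b = np.array([1 + p_A, 1 + p_B])

E_NA, E_NB = np.linalg.solve(M, b)
print(E_NA, E_NB)            # E[N_A] comes out smaller than E[N_B]
```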
06

Compare expected values to determine the optimal starting opponent

Subtract the Step 5 equation from the Step 3 equation: \[\bigl(1 - p_A(1-p_B)\bigr)E[N_A] - (1-p_A)E[N_B] - \Bigl[\bigl(1 - p_B(1-p_A)\bigr)E[N_B] - (1-p_B)E[N_A]\Bigr] = p_A - p_B.\] The coefficient of \(E[N_A]\) is \(1 - p_A(1-p_B) + (1-p_B) = 1 + (1-p_A)(1-p_B)\), and the coefficient of \(E[N_B]\) is \(-(1-p_A) - 1 + p_B(1-p_A) = -\bigl(1 + (1-p_A)(1-p_B)\bigr)\). Hence \[\bigl(E[N_A] - E[N_B]\bigr)\bigl(1 + (1-p_A)(1-p_B)\bigr) = p_A - p_B.\] The factor \(1 + (1-p_A)(1-p_B)\) is positive, and \(p_A - p_B < 0\) because \(p_B > p_A\), so \(E[N_A] - E[N_B] < 0\), i.e. \(E[N_A] < E[N_B]\). To minimize the expected number of games needed to win two in a row, you should therefore start with opponent A, the opponent against whom your win probability is smaller.
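As an empirical sanity check, the following sketch estimates both expectations by straightforward Monte Carlo simulation using only the Python standard library; the probabilities and trial count are arbitrary illustrative choices, and with \(p_A = 0.4\), \(p_B = 0.7\) the averages should land near 5.54 and 5.80.

```python
# Simulate playing until two consecutive wins, alternating opponents,
# and compare the average game count when starting with A versus with B.
import random

def games_until_two_in_a_row(p_first, p_second):
    """Alternate between win probabilities p_first and p_second (starting
    with p_first) until two consecutive wins; return the number of games."""
    probs = (p_first, p_second)
    games, streak, turn = 0, 0, 0
    while streak < 2:
        games += 1
        streak = streak + 1 if random.random() < probs[turn] else 0
        turn ^= 1                     # opponents alternate every game
    return games

p_A, p_B, trials = 0.4, 0.7, 100_000
avg_start_A = sum(games_until_two_in_a_row(p_A, p_B) for _ in range(trials)) / trials
avg_start_B = sum(games_until_two_in_a_row(p_B, p_A) for _ in range(trials)) / trials
print(avg_start_A, avg_start_B)       # starting with A should give the smaller average
```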


