Of \({{\rm{n}}_{\rm{1}}}\)randomly selected male smokers, \({{\rm{X}}_{\rm{1}}}\) smoked filter cigarettes, whereas of \({{\rm{n}}_{\rm{2}}}\) randomly selected female smokers, \({{\rm{X}}_{\rm{2}}}\) smoked filter cigarettes. Let \({{\rm{p}}_{\rm{1}}}\) and \({{\rm{p}}_{\rm{2}}}\) denote the probabilities that a randomly selected male and female, respectively, smoke filter cigarettes.

a. Show that \({\rm{(}}{{\rm{X}}_{\rm{1}}}{\rm{/}}{{\rm{n}}_{\rm{1}}}{\rm{) - (}}{{\rm{X}}_{\rm{2}}}{\rm{/}}{{\rm{n}}_{\rm{2}}}{\rm{)}}\) is an unbiased estimator for \({{\rm{p}}_{\rm{1}}}{\rm{ - }}{{\rm{p}}_{\rm{2}}}\). (Hint: \({\rm{E(}}{{\rm{X}}_{\rm{i}}}{\rm{) = }}{{\rm{n}}_{\rm{i}}}{{\rm{p}}_{\rm{i}}}\) for \({\rm{i = 1,2}}\).)

b. What is the standard error of the estimator in part (a)?

c. How would you use the observed values \({{\rm{x}}_{\rm{1}}}\) and \({{\rm{x}}_{\rm{2}}}\) to estimate the standard error of your estimator?

d. If \({{\rm{n}}_{\rm{1}}}{\rm{ = }}{{\rm{n}}_{\rm{2}}}{\rm{ = 200, }}{{\rm{x}}_{\rm{1}}}{\rm{ = 127}}\), and \({{\rm{x}}_{\rm{2}}}{\rm{ = 176}}\), use the estimator of part (a) to obtain an estimate of \({{\rm{p}}_{\rm{1}}}{\rm{ - }}{{\rm{p}}_{\rm{2}}}\).

e. Use the result of part (c) and the data of part (d) to estimate the standard error of the estimator.

Short Answer

(a) It is shown that \(\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}\) is an unbiased estimator of \({{\rm{p}}_{\rm{1}}}{\rm{ - }}{{\rm{p}}_{\rm{2}}}\).

(b) The standard error of the estimator in part (a) is\({\rm{SE}}\left( {\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}} \right){\rm{ = }}\sqrt {\frac{{{{\rm{p}}_{\rm{1}}}\left( {{\rm{1 - }}{{\rm{p}}_{\rm{1}}}} \right)}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ + }}\frac{{{{\rm{p}}_{\rm{2}}}\left( {{\rm{1 - }}{{\rm{p}}_{\rm{2}}}} \right)}}{{{{\rm{n}}_{\rm{2}}}}}} \).

(c) The values\({{\rm{x}}_{\rm{1}}}\)and\({{\rm{x}}_{\rm{2}}}\)are used to estimate the standard error of the estimator as\({\rm{SE}}\left( {\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}} \right){\rm{ = }}\sqrt {\frac{{\frac{{{{\rm{x}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}\left( {{\rm{1 - }}\frac{{{{\rm{x}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}} \right)}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ + }}\frac{{\frac{{{{\rm{x}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}\left( {{\rm{1 - }}\frac{{{{\rm{x}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}} \right)}}{{{{\rm{n}}_{\rm{2}}}}}} \).

(d) The estimate of\({{\rm{p}}_{\rm{1}}}{\rm{ - }}{{\rm{p}}_{\rm{2}}}\)is\({{\rm{\hat p}}_{\rm{1}}}{\rm{ - }}{{\rm{\hat p}}_{\rm{2}}}{\rm{ = - 0}}{\rm{.245}}\).

(e) The standard error of the estimator is \({\rm{SE}}\left( {\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}} \right) \approx 0.0411\).

Step by step solution

01

Concept Introduction

The average (mean) of a set of numbers is computed by dividing the sum of the numbers by how many numbers there are.

The median is the middle number in a list of numbers that has been sorted ascending or descending, and it might be more descriptive of the data set than the average. When there are outliers in the series that could affect the average of the numbers, the median is sometimes utilised instead of the mean.

The standard deviation is a statistic that measures the amount of variation or dispersion in a set of numbers.

02

Unbiased Estimator

(a)

\({{\rm{X}}_{\rm{i}}}\) represents the number of successes (filter-cigarette smokers) in a sample of \({{\rm{n}}_{\rm{i}}}\) individuals, each with success probability \({{\rm{p}}_{\rm{i}}}{\rm{ (i = 1,2)}}\).

The number of successes among a fixed sample size with a constant probability of success has a binomial distribution with parameters\({\rm{n}}\)and\({\rm{p}}\).

\(\begin{array}{c}{{\rm{X}}_{\rm{1}}} \sim {\rm{Bin}}\left( {{{\rm{n}}_{\rm{1}}}{\rm{,}}{{\rm{p}}_{\rm{1}}}} \right)\\{{\rm{X}}_{\rm{2}}} \sim {\rm{Bin}}\left( {{{\rm{n}}_{\rm{2}}}{\rm{,}}{{\rm{p}}_{\rm{2}}}} \right)\end{array}\)

The mean of a binomial distribution is the product of the sample size\({\rm{n}}\)and the probability\({\rm{p}}\).

\(\begin{array}{c}{\rm{E}}\left( {{{\rm{X}}_{\rm{1}}}} \right){\rm{ = }}{{\rm{\mu }}_{\rm{1}}}{\rm{ = }}{{\rm{n}}_{\rm{1}}}{{\rm{p}}_{\rm{1}}}\\{\rm{E}}\left( {{{\rm{X}}_{\rm{2}}}} \right){\rm{ = }}{{\rm{\mu }}_{\rm{2}}}{\rm{ = }}{{\rm{n}}_{\rm{2}}}{{\rm{p}}_{\rm{2}}}\end{array}\)

For the linear combination \({\rm{W = a}}{{\rm{X}}_{\rm{1}}}{\rm{ + b}}{{\rm{X}}_{\rm{2}}}\), the following properties hold for the mean and variance –

\(\begin{array}{c}{{\rm{\mu }}_{\rm{W}}}{\rm{ = a}}{{\rm{\mu }}_{\rm{1}}}{\rm{ + b}}{{\rm{\mu }}_{\rm{2}}}\\{\rm{\sigma }}_{\rm{W}}^{\rm{2}}{\rm{ = }}{{\rm{a}}^{\rm{2}}}{\rm{\sigma }}_{\rm{1}}^{\rm{2}}{\rm{ + }}{{\rm{b}}^{\rm{2}}}{\rm{\sigma }}_{\rm{2}}^{\rm{2}}{\rm{ (If }}{{\rm{X}}_{\rm{1}}}{\rm{ and }}{{\rm{X}}_{\rm{2}}}{\rm{ are independent)}}\end{array}\)

Then determine the expected value of\(\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}\)–

\(\begin{array}{c}{\rm{E}}\left( {\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}} \right){\rm{ = }}\frac{{\rm{1}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{E}}\left( {{{\rm{X}}_{\rm{1}}}} \right){\rm{ - }}\frac{{\rm{1}}}{{{{\rm{n}}_{\rm{2}}}}}{\rm{E}}\left( {{{\rm{X}}_{\rm{2}}}} \right)\\{\rm{ = }}\frac{{\rm{1}}}{{{{\rm{n}}_{\rm{1}}}}}{{\rm{n}}_{\rm{1}}}{{\rm{p}}_{\rm{1}}}{\rm{ - }}\frac{{\rm{1}}}{{{{\rm{n}}_{\rm{2}}}}}{{\rm{n}}_{\rm{2}}}{{\rm{p}}_{\rm{2}}}\\{\rm{ = }}{{\rm{p}}_{\rm{1}}}{\rm{ - }}{{\rm{p}}_{\rm{2}}}\end{array}\)

The expected value of \(\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}\) is thus equal to \({{\rm{p}}_{\rm{1}}}{\rm{ - }}{{\rm{p}}_{\rm{2}}}\).

Therefore, \(\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}\) is an unbiased estimator of \({{\rm{p}}_{\rm{1}}}{\rm{ - }}{{\rm{p}}_{\rm{2}}}\).
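As a quick sanity check of this unbiasedness result (not part of the original solution), the short Python simulation below draws many binomial samples and compares the average of \(\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}\) with \({{\rm{p}}_{\rm{1}}}{\rm{ - }}{{\rm{p}}_{\rm{2}}}\). The chosen values of the proportions and sample sizes are illustrative assumptions, not data from the exercise.

import numpy as np

# Hypothetical true proportions and sample sizes (illustrative only).
rng = np.random.default_rng(0)
n1, n2 = 200, 200
p1, p2 = 0.6, 0.8
reps = 100_000

# Simulate X1 ~ Bin(n1, p1) and X2 ~ Bin(n2, p2) many times.
x1 = rng.binomial(n1, p1, size=reps)
x2 = rng.binomial(n2, p2, size=reps)
diff = x1 / n1 - x2 / n2

# The average of the simulated estimates should be close to p1 - p2 = -0.2.
print(diff.mean())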

03

Standard Error of the Estimator

(b)

The variance of a binomial distribution is the product of the sample size \({\rm{n}}\) and the probabilities \({\rm{p}}\) and \({\rm{q = 1 - p}}\) –

\(\begin{array}{c}{\rm{V}}\left( {{{\rm{X}}_{\rm{1}}}} \right){\rm{ = \sigma }}_{\rm{1}}^{\rm{2}}{\rm{ = }}{{\rm{n}}_{\rm{1}}}{{\rm{p}}_{\rm{1}}}{{\rm{q}}_{\rm{1}}}{\rm{ = }}{{\rm{n}}_{\rm{1}}}{{\rm{p}}_{\rm{1}}}\left( {{\rm{1 - }}{{\rm{p}}_{\rm{1}}}} \right)\\{\rm{V}}\left( {{{\rm{X}}_{\rm{2}}}} \right){\rm{ = \sigma }}_{\rm{2}}^{\rm{2}}{\rm{ = }}{{\rm{n}}_{\rm{2}}}{{\rm{p}}_{\rm{2}}}{{\rm{q}}_{\rm{2}}}{\rm{ = }}{{\rm{n}}_{\rm{2}}}{{\rm{p}}_{\rm{2}}}\left( {{\rm{1 - }}{{\rm{p}}_{\rm{2}}}} \right)\end{array}\)

Determine the variance of \(\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}\)–

\(\begin{array}{c}{\rm{V}}\left( {\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}} \right){\rm{ = }}\frac{{\rm{1}}}{{{\rm{n}}_{\rm{1}}^{\rm{2}}}}{\rm{V}}\left( {{{\rm{X}}_{\rm{1}}}} \right){\rm{ + }}\frac{{\rm{1}}}{{{\rm{n}}_{\rm{2}}^{\rm{2}}}}{\rm{V}}\left( {{{\rm{X}}_{\rm{2}}}} \right)\\{\rm{ = }}\frac{{\rm{1}}}{{{\rm{n}}_{\rm{1}}^{\rm{2}}}}{{\rm{n}}_{\rm{1}}}{{\rm{p}}_{\rm{1}}}\left( {{\rm{1 - }}{{\rm{p}}_{\rm{1}}}} \right){\rm{ + }}\frac{{\rm{1}}}{{{\rm{n}}_{\rm{2}}^{\rm{2}}}}{{\rm{n}}_{\rm{2}}}{{\rm{p}}_{\rm{2}}}\left( {{\rm{1 - }}{{\rm{p}}_{\rm{2}}}} \right)\\{\rm{ = }}\frac{{{{\rm{p}}_{\rm{1}}}\left( {{\rm{1 - }}{{\rm{p}}_{\rm{1}}}} \right)}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ + }}\frac{{{{\rm{p}}_{\rm{2}}}\left( {{\rm{1 - }}{{\rm{p}}_{\rm{2}}}} \right)}}{{{{\rm{n}}_{\rm{2}}}}}\end{array}\)

The standard error is the square root of the variance –

\(\begin{array}{c}{\rm{SE}}\left( {\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}} \right){\rm{ = }}\sqrt {{\rm{V}}\left( {\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}} \right)} \\{\rm{ = }}\sqrt {\frac{{{{\rm{p}}_{\rm{1}}}\left( {{\rm{1 - }}{{\rm{p}}_{\rm{1}}}} \right)}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ + }}\frac{{{{\rm{p}}_{\rm{2}}}\left( {{\rm{1 - }}{{\rm{p}}_{\rm{2}}}} \right)}}{{{{\rm{n}}_{\rm{2}}}}}} \end{array}\)

Therefore, the standard error is \(\sqrt {\frac{{{{\rm{p}}_{\rm{1}}}\left( {{\rm{1 - }}{{\rm{p}}_{\rm{1}}}} \right)}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ + }}\frac{{{{\rm{p}}_{\rm{2}}}\left( {{\rm{1 - }}{{\rm{p}}_{\rm{2}}}} \right)}}{{{{\rm{n}}_{\rm{2}}}}}} \).
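A minimal Python sketch of the part (b) formula is given below, assuming the true proportions are known. The function name and the numeric values are my own illustrative choices, not part of the exercise.

from math import sqrt

def se_diff(p1, n1, p2, n2):
    # Standard error of X1/n1 - X2/n2 for independent binomial counts.
    return sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Illustrative proportions; prints roughly 0.0447.
print(se_diff(0.6, 200, 0.8, 200))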

04

Standard Error of the Estimator

(c)

Let the observed values be \({{\rm{x}}_{\rm{1}}}\) and \({{\rm{x}}_{\rm{2}}}\). The estimates of the proportions are the numbers of successes \({{\rm{x}}_{\rm{i}}}\) divided by the sample sizes \({{\rm{n}}_{\rm{i}}}\) –

\(\begin{array}{c}{{{\rm{\hat p}}}_{\rm{1}}}{\rm{ = }}\frac{{{{\rm{x}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}\\{{{\rm{\hat p}}}_{\rm{2}}}{\rm{ = }}\frac{{{{\rm{x}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}\end{array}\)

The standard error can then be estimated by replacing the proportions\({p_i}\)by their estimates\({\hat p_i}\).

\(\begin{array}{c}{\rm{SE}}\left( {\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}} \right) \approx \sqrt {\frac{{{{{\rm{\hat p}}}_{\rm{1}}}\left( {{\rm{1 - }}{{{\rm{\hat p}}}_{\rm{1}}}} \right)}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ + }}\frac{{{{{\rm{\hat p}}}_{\rm{2}}}\left( {{\rm{1 - }}{{{\rm{\hat p}}}_{\rm{2}}}} \right)}}{{{{\rm{n}}_{\rm{2}}}}}} \\{\rm{ = }}\sqrt {\frac{{\frac{{{{\rm{x}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}\left( {{\rm{1 - }}\frac{{{{\rm{x}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}} \right)}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ + }}\frac{{\frac{{{{\rm{x}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}\left( {{\rm{1 - }}\frac{{{{\rm{x}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}} \right)}}{{{{\rm{n}}_{\rm{2}}}}}} \end{array}\)

Therefore, the standard error is \(\sqrt {\frac{{\frac{{{{\rm{x}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}\left( {{\rm{1 - }}\frac{{{{\rm{x}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}} \right)}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ + }}\frac{{\frac{{{{\rm{x}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}\left( {{\rm{1 - }}\frac{{{{\rm{x}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}} \right)}}{{{{\rm{n}}_{\rm{2}}}}}} \).
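The plug-in estimate of part (c) can also be written as a small Python function; the function name is mine, but the formula is exactly the one above with each true proportion replaced by the observed proportion \({{\rm{x}}_{\rm{i}}}{\rm{/}}{{\rm{n}}_{\rm{i}}}\). It is applied to the exercise data in the sketch after part (e).

from math import sqrt

def estimated_se(x1, n1, x2, n2):
    # Replace each true proportion by the observed proportion x_i / n_i.
    p1_hat, p2_hat = x1 / n1, x2 / n2
    return sqrt(p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2)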

05

Estimator of \({{\rm{p}}_{\rm{1}}}{\rm{ - }}{{\rm{p}}_{\rm{2}}}\)

(d)

It is given that –

\(\begin{array}{c}{{\rm{n}}_{\rm{1}}}{\rm{ = }}{{\rm{n}}_{\rm{2}}}{\rm{ = 200}}\\{{\rm{x}}_{\rm{1}}}{\rm{ = 127}}\\{{\rm{x}}_{\rm{2}}}{\rm{ = 176}}\end{array}\)

\(\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}\) is the estimator of \({{\rm{p}}_{\rm{1}}}{\rm{ - }}{{\rm{p}}_{\rm{2}}}\), so the estimate is –

\(\begin{array}{c}{{{\rm{\hat p}}}_{\rm{1}}}{\rm{ - }}{{{\rm{\hat p}}}_{\rm{2}}}{\rm{ = }}\frac{{{{\rm{x}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{x}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}\\{\rm{ = }}\frac{{{\rm{127}}}}{{{\rm{200}}}}{\rm{ - }}\frac{{{\rm{176}}}}{{{\rm{200}}}}\\{\rm{ = - }}\frac{{{\rm{49}}}}{{{\rm{200}}}}\\{\rm{ = - 0}}{\rm{.245}}\end{array}\)

Therefore, the estimate of \({{\rm{p}}_{\rm{1}}}{\rm{ - }}{{\rm{p}}_{\rm{2}}}\) is \({\rm{ - 0}}{\rm{.245}}\).

06

Standard Error of the Estimator

(e)

It is given that –

\(\begin{array}{c}{{\rm{n}}_{\rm{1}}}{\rm{ = }}{{\rm{n}}_{\rm{2}}}{\rm{ = 200}}\\{{\rm{x}}_{\rm{1}}}{\rm{ = 127}}\\{{\rm{x}}_{\rm{2}}}{\rm{ = 176}}\end{array}\)

Use the formula found in part (c) to estimate the standard error –

\(\begin{array}{c}{\rm{SE}}\left( {\frac{{{{\rm{X}}_{\rm{1}}}}}{{{{\rm{n}}_{\rm{1}}}}}{\rm{ - }}\frac{{{{\rm{X}}_{\rm{2}}}}}{{{{\rm{n}}_{\rm{2}}}}}} \right) \approx \sqrt {\frac{{\frac{{{\rm{127}}}}{{{\rm{200}}}}\left( {{\rm{1 - }}\frac{{{\rm{127}}}}{{{\rm{200}}}}} \right)}}{{{\rm{200}}}}{\rm{ + }}\frac{{\frac{{{\rm{176}}}}{{{\rm{200}}}}\left( {{\rm{1 - }}\frac{{{\rm{176}}}}{{{\rm{200}}}}} \right)}}{{{\rm{200}}}}} \\{\rm{ = }}\frac{{\sqrt {{\rm{26990}}} }}{{{\rm{4000}}}} \approx {\rm{0}}{\rm{.0411}}\end{array}\)

Therefore, the estimated standard error is approximately \(0.0411\).
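As a numerical check of parts (d) and (e), the short Python sketch below recomputes the point estimate and the estimated standard error directly from the given data.

from math import sqrt

n1 = n2 = 200
x1, x2 = 127, 176

p1_hat, p2_hat = x1 / n1, x2 / n2
estimate = p1_hat - p2_hat                                           # part (d)
se = sqrt(p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2)   # part (e)

print(round(estimate, 3))   # -0.245
print(round(se, 4))         # 0.0411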


Most popular questions from this chapter

The article from which the data in Exercise 1 was extracted also gave the accompanying strength observations for cylinders:

\(\begin{array}{*{20}{r}}{{\rm{6}}{\rm{.1}}}&{{\rm{5}}{\rm{.8}}}&{{\rm{7}}{\rm{.8}}}&{{\rm{7}}{\rm{.1}}}&{{\rm{7}}{\rm{.2}}}&{{\rm{9}}{\rm{.2}}}&{{\rm{6}}{\rm{.6}}}&{{\rm{8}}{\rm{.3}}}&{{\rm{7}}{\rm{.0}}}&{{\rm{8}}{\rm{.3}}}\\{{\rm{7}}{\rm{.8}}}&{{\rm{8}}{\rm{.1}}}&{{\rm{7}}{\rm{.4}}}&{{\rm{8}}{\rm{.5}}}&{{\rm{8}}{\rm{.9}}}&{{\rm{9}}{\rm{.8}}}&{{\rm{9}}{\rm{.7}}}&{{\rm{14}}{\rm{.1}}}&{{\rm{12}}{\rm{.6}}}&{{\rm{11}}{\rm{.2}}}\end{array}\)

Prior to obtaining data, denote the beam strengths by \({{\rm{X}}_{\rm{1}}}{\rm{,}} \ldots {\rm{,}}{{\rm{X}}_{\rm{m}}}\) and the cylinder strengths by \({{\rm{Y}}_{\rm{1}}}{\rm{,}} \ldots {\rm{,}}{{\rm{Y}}_{\rm{n}}}\). Suppose that the \({{\rm{X}}_{\rm{i}}}\)'s constitute a random sample from a distribution with mean \({{\rm{\mu }}_{\rm{1}}}\) and standard deviation \({{\rm{\sigma }}_{\rm{1}}}\) and that the \({{\rm{Y}}_{\rm{i}}}\)'s form a random sample (independent of the \({{\rm{X}}_{\rm{i}}}\)'s) from another distribution with mean \({{\rm{\mu }}_{\rm{2}}}\) and standard deviation \({{\rm{\sigma }}_{\rm{2}}}\).

a. Use rules of expected value to show that \({\rm{\bar X - \bar Y}}\)is an unbiased estimator of \({{\rm{\mu }}_{\rm{1}}}{\rm{ - }}{{\rm{\mu }}_{\rm{2}}}\). Calculate the estimate for the given data.

b. Use rules of variance from Chapter 5 to obtain an expression for the variance and standard deviation (standard error) of the estimator in part (a), and then compute the estimated standard error.

c. Calculate a point estimate of the ratio \({{\rm{\sigma }}_{\rm{1}}}{\rm{/}}{{\rm{\sigma }}_{\rm{2}}}\)of the two standard deviations.

d. Suppose a single beam and a single cylinder are randomly selected. Calculate a point estimate of the variance of the difference \({\rm{X - Y}}\) between beam strength and cylinder strength.

An investigator wishes to estimate the proportion of students at a certain university who have violated the honor code. Having obtained a random sample of\({\rm{n}}\)students, she realizes that asking each, "Have you violated the honor code?" will probably result in some untruthful responses. Consider the following scheme, called a randomized response technique. The investigator makes up a deck of\({\rm{100}}\)cards, of which\({\rm{50}}\)are of type I and\({\rm{50}}\)are of type II.

Type I: Have you violated the honor code (yes or no)?

Type II: Is the last digit of your telephone number a\({\rm{0 , 1 , or 2}}\)(yes or no)?

Each student in the random sample is asked to mix the deck, draw a card, and answer the resulting question truthfully. Because of the irrelevant question on type II cards, a yes response no longer stigmatizes the respondent, so we assume that responses are truthful. Let\({\rm{p}}\)denote the proportion of honor-code violators (i.e., the probability of a randomly selected student being a violator), and let\({\rm{\lambda = P}}\)(yes response). Then\({\rm{\lambda }}\)and\({\rm{p}}\)are related by\({\rm{\lambda = }}{\rm{.5p + (}}{\rm{.5)(}}{\rm{.3)}}\).

a. Let\({\rm{Y}}\)denote the number of yes responses, so\({\rm{Y\sim}}\)Bin\({\rm{(n,\lambda )}}\). Thus Y / n is an unbiased estimator of\({\rm{\lambda }}\). Derive an estimator for\({\rm{p}}\)based on\({\rm{Y}}\). If\({\rm{n = 80}}\)and\({\rm{y = 20}}\), what is your estimate? (Hint: Solve\({\rm{\lambda = }}{\rm{.5p + }}{\rm{.15}}\)for\({\rm{p}}\)and then substitute\({\rm{Y/n}}\)for\({\rm{\lambda }}\).)

b. Use the fact that\({\rm{E(Y/n) = \lambda }}\)to show that your estimator\({\rm{\hat p}}\)is unbiased.

c. If there were\({\rm{70}}\)type I and\({\rm{30}}\)type II cards, what would be your estimator for\({\rm{p}}\)?

Let\({\rm{X}}\)denote the proportion of allotted time that a randomly selected student spends working on a certain aptitude test. Suppose the pdf of\({\rm{X}}\)is

\({\rm{f(x;\theta ) = }}\left\{ {\begin{array}{*{20}{c}}{{\rm{(\theta + 1)}}{{\rm{x}}^{\rm{\theta }}}}&{{\rm{0}} \le {\rm{x}} \le {\rm{1}}}\\{\rm{0}}&{{\rm{ otherwise }}}\end{array}} \right.\)

where\({\rm{ - 1 < \theta }}\). A random sample of ten students yields data\({{\rm{x}}_{\rm{1}}}{\rm{ = }}{\rm{.92,}}{{\rm{x}}_{\rm{2}}}{\rm{ = }}{\rm{.79,}}{{\rm{x}}_{\rm{3}}}{\rm{ = }}{\rm{.90,}}{{\rm{x}}_{\rm{4}}}{\rm{ = }}{\rm{.65,}}{{\rm{x}}_{\rm{5}}}{\rm{ = }}{\rm{.86}}\),\({{\rm{x}}_{\rm{6}}}{\rm{ = }}{\rm{.47,}}{{\rm{x}}_{\rm{7}}}{\rm{ = }}{\rm{.73,}}{{\rm{x}}_{\rm{8}}}{\rm{ = }}{\rm{.97,}}{{\rm{x}}_{\rm{9}}}{\rm{ = }}{\rm{.94,}}{{\rm{x}}_{{\rm{10}}}}{\rm{ = }}{\rm{.77}}\).

a. Use the method of moments to obtain an estimator of\({\rm{\theta }}\), and then compute the estimate for this data.

b. Obtain the maximum likelihood estimator of\({\rm{\theta }}\), and then compute the estimate for the given data.

Let\({{\rm{X}}_{\rm{1}}}{\rm{,}}{{\rm{X}}_{\rm{2}}}{\rm{, \ldots ,}}{{\rm{X}}_{\rm{n}}}\)represent a random sample from a Rayleigh distribution with pdf

\({\rm{f(x,\theta ) = }}\frac{{\rm{x}}}{{\rm{\theta }}}{{\rm{e}}^{{\rm{ - }}{{\rm{x}}^{\rm{2}}}{\rm{/(2\theta )}}}}\quad {\rm{x > 0}}\)

a. It can be shown that \({\rm{E}}\left( {{{\rm{X}}^{\rm{2}}}} \right){\rm{ = 2\theta }}\). Use this fact to construct an unbiased estimator of \({\rm{\theta }}\) based on \({\rm{\Sigma X}}_{\rm{i}}^{\rm{2}}\) (and use rules of expected value to show that it is unbiased).

b. Estimate\({\rm{\theta }}\)from the following\({\rm{n = 10}}\)observations on vibratory stress of a turbine blade under specified conditions:

\(\begin{array}{*{20}{l}}{{\rm{16}}{\rm{.88}}}&{{\rm{10}}{\rm{.23}}}&{{\rm{4}}{\rm{.59}}}&{{\rm{6}}{\rm{.66}}}&{{\rm{13}}{\rm{.68}}}\\{{\rm{14}}{\rm{.23}}}&{{\rm{19}}{\rm{.87}}}&{{\rm{9}}{\rm{.40}}}&{{\rm{6}}{\rm{.51}}}&{{\rm{10}}{\rm{.95}}}\end{array}\)

Let \({{\rm{X}}_{\rm{1}}}{\rm{,}} \ldots {\rm{,}}{{\rm{X}}_{\rm{n}}}\) be a random sample from a gamma distribution with parameters \({\rm{\alpha }}\) and \({\rm{\beta }}\).

a. Derive the equations whose solutions yield the maximum likelihood estimators of \({\rm{\alpha }}\) and \({\rm{\beta }}\). Do you think they can be solved explicitly?

b. Show that the mle of \({\rm{\mu = \alpha \beta }}\) is \(\widehat {\rm{\mu }}{\rm{ = }}\overline {\rm{X}} \).
