
Let \(X_1, X_2, \ldots, X_n\) be a random sample from the distribution \(N(\theta_1, \theta_2)\). Show that the likelihood ratio principle for testing \(H_0: \theta_2 = \theta_2'\) specified, and \(\theta_1\) unspecified, against \(H_1: \theta_2 \neq \theta_2'\), \(\theta_1\) unspecified, leads to a test that rejects when \(\sum_{i=1}^{n}(x_i - \bar{x})^2 \leq c_1\) or \(\sum_{i=1}^{n}(x_i - \bar{x})^2 \geq c_2\), where \(c_1 < c_2\) are selected appropriately.

Short Answer

The likelihood ratio principle leads to a test that rejects \(H_0: \theta_2 = \theta_2'\) when \(\sum_{i=1}^{n}(X_i - \bar{X})^2\) is either quite small (\(\leq c_1\)) or quite large (\(\geq c_2\)), where \(c_1 < c_2\) are chosen so that the test has size \(\alpha\).

Step-by-step solution

Step 1: Develop the Likelihood Function

The first step is to write down the likelihood of the sample. Given that \(X_1, X_2, \ldots, X_n\) is a random sample from a normal distribution \(N(\theta_1, \theta_2)\), where \(\theta_2\) denotes the variance, the likelihood function is \(L(\theta_1, \theta_2 \mid \mathbf{x}) = (2\pi\theta_2)^{-n/2} \exp\!\left(-\frac{1}{2\theta_2}\sum_{i=1}^{n}(x_i - \theta_1)^2\right)\). The factor \(\theta_2^{-n/2}\) must be retained, since it depends on the parameter being tested.
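As a quick numerical illustration of this likelihood (not part of the textbook solution), here is a minimal Python sketch; the function name normal_log_likelihood and the simulated data are made up for the example, and \(\theta_2\) is treated as the variance throughout.

```python
import numpy as np

def normal_log_likelihood(x, theta1, theta2):
    """Log-likelihood of an i.i.d. N(theta1, theta2) sample,
    where theta2 is the variance (as in this exercise)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-0.5 * n * np.log(2.0 * np.pi * theta2)
            - np.sum((x - theta1) ** 2) / (2.0 * theta2))

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=25)           # true variance is 4
print(normal_log_likelihood(x, x.mean(), 4.0))        # larger than ...
print(normal_log_likelihood(x, x.mean() + 1.0, 4.0))  # ... this value
```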
Step 2: Construct the Test Statistic

The likelihood ratio is \(\Lambda(\mathbf{x}) = \dfrac{\sup_{\theta_1} L(\theta_1, \theta_2' \mid \mathbf{x})}{\sup_{\theta_1, \theta_2} L(\theta_1, \theta_2 \mid \mathbf{x})}\), the maximized likelihood under \(H_0\) divided by the maximized likelihood over the full parameter space. The MLE of \(\theta_1\) is the sample mean \(\bar{x}\) in both the numerator and the denominator, and the unrestricted MLE of \(\theta_2\) is \(\hat{\theta}_2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2\). Writing \(S = \sum_{i=1}^{n}(x_i - \bar{x})^2\), substitution gives \(\Lambda(\mathbf{x}) = \left(\frac{S}{n\theta_2'}\right)^{n/2} \exp\!\left(\frac{n}{2} - \frac{S}{2\theta_2'}\right)\), so the likelihood ratio depends on the data only through \(S\). Equivalently, \(-2\ln\Lambda(\mathbf{x}) = \frac{S}{\theta_2'} - n - n\ln\!\left(\frac{S}{n\theta_2'}\right)\).
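The following Python sketch (illustrative only; the helper lrt_statistic and the sample values are assumptions for the example) evaluates \(\Lambda(\mathbf{x})\) and \(-2\ln\Lambda(\mathbf{x})\) directly from the formula above, confirming that both depend on the data only through \(S = \sum_{i=1}^{n}(x_i - \bar{x})^2\).

```python
import numpy as np

def lrt_statistic(x, theta2_0):
    """Return (Lambda, -2*ln(Lambda)) for testing H0: variance = theta2_0
    (mean unspecified) with an i.i.d. normal sample x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    S = np.sum((x - x.mean()) ** 2)          # the test is based on this sum
    log_lambda = 0.5 * n * np.log(S / (n * theta2_0)) + 0.5 * (n - S / theta2_0)
    return np.exp(log_lambda), -2.0 * log_lambda

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=3.0, size=20)  # true variance is 9
print(lrt_statistic(x, theta2_0=9.0))        # Lambda near 1, -2 ln Lambda small
print(lrt_statistic(x, theta2_0=1.0))        # Lambda near 0, -2 ln Lambda large
```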
Step 3: Rejection Region

We reject \(H_0\) for small values of \(\Lambda(\mathbf{x})\), or equivalently for large values of \(-2\ln\Lambda(\mathbf{x})\). As a function of \(S = \sum_{i=1}^{n}(X_i - \bar{X})^2\), this statistic is decreasing for \(S < n\theta_2'\) and increasing for \(S > n\theta_2'\), so it is large exactly when \(S\) is either quite small or quite large. Hence the test rejects when \(\sum_{i=1}^{n}(X_i - \bar{X})^2 \leq c_1\) or \(\sum_{i=1}^{n}(X_i - \bar{X})^2 \geq c_2\), where \(c_1 < c_2\) are selected so that the size of the test is \(\alpha\). Under \(H_0\), \(S/\theta_2'\) has a \(\chi^2(n-1)\) distribution, which is what determines \(c_1\) and \(c_2\).
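Using the fact stated above that \(S/\theta_2' \sim \chi^2(n-1)\) under \(H_0\), the minimal Python sketch below computes equal-tailed cutoffs. Equal tails are a common, convenient choice; the exact likelihood-ratio cutoffs would instead equalize \(\Lambda\) at the two endpoints, so this is an approximation. The function name equal_tail_cutoffs and the numbers in the example are illustrative assumptions.

```python
from scipy.stats import chi2

def equal_tail_cutoffs(n, theta2_0, alpha=0.05):
    """Equal-tailed cutoffs c1 < c2 for S = sum((x_i - xbar)**2), using the
    fact that S / theta2_0 ~ chi-square(n - 1) under H0.  Equal tails are a
    convenient approximation to the exact likelihood-ratio cutoffs."""
    c1 = chi2.ppf(alpha / 2, df=n - 1) * theta2_0
    c2 = chi2.ppf(1 - alpha / 2, df=n - 1) * theta2_0
    return c1, c2

c1, c2 = equal_tail_cutoffs(n=20, theta2_0=9.0, alpha=0.05)
print(c1, c2)   # reject H0 if S <= c1 or S >= c2
```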


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Statistical Hypothesis Testing
Statistical hypothesis testing is a crucial procedure in statistics, used to determine whether a sample of data provides sufficient evidence to conclude that a certain condition holds for the entire population. In simple terms, it is like a trial in which we test an assumption (the null hypothesis, denoted \(H_0\)) against an alternative theory (the alternative hypothesis, \(H_1\)).

When performing a hypothesis test, we compute a test statistic, a numerical summary of the data that follows a known probability distribution under certain assumptions. If the value of this statistic is extreme, meaning it falls within the rejection region (discussed below), we reject \(H_0\) in favor of \(H_1\). The process carries an inherent risk of error: a Type I error (rejecting a true null hypothesis) or a Type II error (failing to reject a false null hypothesis). To control the Type I error, we fix a significance level \(\alpha\), the probability of rejecting a true null hypothesis.
Normal Distribution
The normal distribution, also known as the Gaussian distribution, is a continuous probability distribution that is symmetric around the mean, showing that data near the mean are more frequent in occurrence than data far from the mean. In the context of hypothesis testing, the normal distribution's properties are particularly useful because many test statistics are assumed to follow it under the null hypothesis.

The normal distribution is defined by two parameters: in the notation of this exercise, the mean \(\theta_1\) and the variance \(\theta_2\). When using the normal distribution in hypothesis testing, as in this exercise, we make inferences about these parameters based on the sample data. The assumption of normality lets us use the distribution's mathematical properties to calculate probabilities and critical values for our test statistic.
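For reference, with \(\theta_2\) denoting the variance, the \(N(\theta_1, \theta_2)\) density used to build the likelihood in Step 1 is \( f(x; \theta_1, \theta_2) = \frac{1}{\sqrt{2\pi\theta_2}}\exp\!\left(-\frac{(x - \theta_1)^2}{2\theta_2}\right), \; -\infty < x < \infty. \)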
MLE (Maximum Likelihood Estimation)
Maximum likelihood estimation (MLE) is a method for estimating the parameters of a statistical model: it selects the parameter values that make the observed data most probable. MLE often produces estimators with good statistical properties, such as consistency (as the sample size increases, the estimate converges to the true value) and asymptotic efficiency (for large samples, its variance approaches the Cramér-Rao lower bound).

In this exercise, maximum likelihood is used to find the best-fitting parameters both under the null hypothesis \(H_0\) and over the full parameter space. The MLE of \(\theta_1\), the population mean, is the sample mean \(\bar{X}\) in both cases. The variance \(\theta_2\), however, is treated differently: under \(H_0\) it is fixed at the specified value \(\theta_2'\) and need not be estimated, while under the full model its MLE is \(\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2\).
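A small Python sketch of these estimates follows (the variable names and the simulated data are illustrative; scipy.stats.norm.fit is used only as a cross-check, since it returns the maximum likelihood location and scale).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=2.0, size=50)

# MLEs under the full model: the sample mean and (1/n) * sum((x_i - xbar)**2).
theta1_hat = x.mean()
theta2_hat = np.mean((x - theta1_hat) ** 2)   # divides by n, not n - 1

# Cross-check: scipy's norm.fit returns the MLE mean and standard deviation.
mu_fit, sigma_fit = norm.fit(x)
print(theta1_hat, theta2_hat)
print(mu_fit, sigma_fit ** 2)                 # matches the values above
```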
Rejection Region
The rejection region is an essential concept in hypothesis testing: it is the set of values of the test statistic for which the null hypothesis is rejected, and it is determined by the chosen significance level \(\alpha\) together with the sampling distribution of the test statistic under the null hypothesis. In other words, it consists of values of the test statistic that are very unlikely to occur if \(H_0\) were true.

In the likelihood ratio test from this exercise, the rejection region is defined by two critical values, \(c_1\) and \(c_2\), applied to \(\sum_{i=1}^{n}(X_i - \bar{X})^2\): if this sum falls at or below \(c_1\) or at or above \(c_2\), we have enough evidence to reject the null hypothesis. The values \(c_1\) and \(c_2\) are chosen so that the probability of rejecting a true null hypothesis (a Type I error) equals the significance level \(\alpha\). This two-sided structure reflects the alternative hypothesis, which does not specify whether the true variance is greater or less than the hypothesized value \(\theta_2'\).
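As a sanity check on the size of the test (a hedged sketch; the sample size, hypothesized variance, and seed below are made up for illustration), one can simulate data under \(H_0\) and verify that the rejection rate with equal-tailed \(\chi^2(n-1)\) cutoffs is close to \(\alpha\).

```python
import numpy as np
from scipy.stats import chi2

def size_of_test(n, theta2_0, alpha=0.05, reps=100_000, seed=3):
    """Monte Carlo estimate of the size of the two-sided variance test:
    reject H0 when S <= c1 or S >= c2, with equal-tailed chi-square cutoffs."""
    rng = np.random.default_rng(seed)
    c1 = chi2.ppf(alpha / 2, df=n - 1) * theta2_0
    c2 = chi2.ppf(1 - alpha / 2, df=n - 1) * theta2_0
    # Data generated under H0: variance theta2_0 (the mean is irrelevant,
    # since S is invariant to shifts in location).
    x = rng.normal(loc=0.0, scale=np.sqrt(theta2_0), size=(reps, n))
    S = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2, axis=1)
    return np.mean((S <= c1) | (S >= c2))

print(size_of_test(n=20, theta2_0=9.0))   # should be close to alpha = 0.05
```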
