
Because we use the envelope theorem in constrained optimization problems often in the text, proving this theorem in a simple case may help develop some intuition. Thus, suppose we wish to maximize a function of two variables and that the value of this function also depends on a parameter, \(a\): \(f\left(x_{1}, x_{2}, a\right)\). This maximization problem is subject to a constraint that can be written as \(g\left(x_{1}, x_{2}, a\right)=0\). a. Write out the Lagrangian expression and the first-order conditions for this problem. b. Sum the two first-order conditions involving the \(x\)'s. c. Now differentiate the above sum with respect to \(a\); this shows how the \(x\)'s must change as \(a\) changes while requiring that the first-order conditions continue to hold. d. As we showed in the chapter, both the objective function and the constraint in this problem can be stated as functions of \(a\): \(f\left(x_{1}(a), x_{2}(a), a\right)\), \(g\left(x_{1}(a), x_{2}(a), a\right)=0\). Differentiate the first of these with respect to \(a\). This shows how the value of the objective changes as \(a\) changes while keeping the \(x\)'s at their optimal values. You should have terms that involve the \(x\)'s and a single term in \(\partial f / \partial a\). e. Now differentiate the constraint as formulated in part (d) with respect to \(a\). You should have terms in the \(x\)'s and a single term in \(\partial g / \partial a\). f. Multiply the results from part (e) by \(\lambda\) (the Lagrange multiplier), and use this together with the first-order conditions from part (c) to substitute into the derivative from part (d). You should be able to show that \\[ \frac{d f\left(x_{1}(a), x_{2}(a), a\right)}{d a}=\frac{\partial f}{\partial a}+\lambda \frac{\partial g}{\partial a}, \\] which is just the partial derivative of the Lagrangian expression when all the \(x\)'s are at their optimal values. This proves the envelope theorem. Explain intuitively how the various parts of this proof impose the condition that the \(x\)'s are constantly being adjusted to be at their optimal values. g. Return to Example 2.8 and explain how the envelope theorem can be applied to changes in the fence perimeter \(P\); that is, how do changes in \(P\) affect the size of the area that can be fenced? Show that, in this case, the envelope theorem illustrates how the Lagrange multiplier puts a value on the constraint.

Short Answer

Answer: The envelope theorem describes how the optimal value of an objective function changes when a parameter changes in a constrained optimization problem: the total derivative of the maximized value with respect to the parameter equals the partial derivative of the Lagrangian with respect to that parameter, evaluated at the optimal choices. Only the direct effects of the parameter on the objective function and the constraint matter, because the indirect effects that work through the optimally adjusted choice variables cancel out.

Step by step solution

01

a. Write the Lagrangian expression and the first-order conditions

Let the Lagrangian function be defined as: \\[ L\left(x_1, x_2, a, \lambda \right) = f\left(x_1, x_2, a\right) + \lambda g\left(x_1, x_2, a\right), \\] writing the constraint term with a plus sign so that the final result takes the form stated in the problem. The first-order conditions set the partial derivatives of the Lagrangian with respect to \(x_1\), \(x_2\), and \(\lambda\) equal to zero: 1. \\[ \frac{\partial L}{\partial x_1} = \frac{\partial f}{\partial x_1} + \lambda \frac{\partial g}{\partial x_1} = 0 \\] 2. \\[ \frac{\partial L}{\partial x_2} = \frac{\partial f}{\partial x_2} + \lambda \frac{\partial g}{\partial x_2} = 0 \\] 3. \\[ \frac{\partial L}{\partial \lambda} = g\left(x_1, x_2, a\right) = 0 \\]
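As a concrete illustration, here is a minimal sketch that assumes Example 2.8 (referred to in the last part of the question) is the familiar fenced rectangular field, with \(f = x_1 x_2\) (the area), \(g = P - 2x_1 - 2x_2 = 0\) (the perimeter constraint), and \(a = P\): the Lagrangian is \\[ L = x_1 x_2 + \lambda\left(P - 2x_1 - 2x_2\right), \\] and the first-order conditions \(x_2 - 2\lambda = 0\), \(x_1 - 2\lambda = 0\), and \(P - 2x_1 - 2x_2 = 0\) yield \(x_1(P) = x_2(P) = P/4\) and \(\lambda(P) = P/8\).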
02

b. Sum the two first-order conditions involving the x's

Add the first two first-order conditions obtained in step (a): \\[ \frac{\partial f}{\partial x_1} + \frac{\partial f}{\partial x_2} + \lambda \left( \frac{\partial g}{\partial x_1} + \frac{\partial g}{\partial x_2} \right) = 0 \\]
03

c. Differentiate the sum with respect to a

Because the optimal values \(x_1(a)\) and \(x_2(a)\) (and the multiplier \(\lambda(a)\)) change with \(a\), differentiating the summed condition with respect to \(a\) requires the chain rule: \\[ \frac{d}{da}\left[\frac{\partial f}{\partial x_1}+\frac{\partial f}{\partial x_2}+\lambda\left(\frac{\partial g}{\partial x_1}+\frac{\partial g}{\partial x_2}\right)\right]=0 \\] Each partial derivative is evaluated at \(x_1(a), x_2(a), a\), so expanding this expression produces terms in \(dx_1/da\) and \(dx_2/da\). Those terms describe exactly how the \(x\)'s must adjust as \(a\) changes if the first-order conditions are to continue to hold.
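Continuing the illustrative Example 2.8 sketch from step (a): \(x_1(P) = x_2(P) = P/4\) and \(\lambda(P) = P/8\), so \(dx_1/dP = dx_2/dP = 1/4\) and \(d\lambda/dP = 1/8\). Differentiating the first-order condition \(x_2 - 2\lambda = 0\) with respect to \(P\) gives \(1/4 - 2(1/8) = 0\), confirming that the optimal choices adjust in exactly the way needed for the first-order conditions to keep holding as the parameter changes.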
04

d. and e. Differentiate the objective function and the constraint with respect to a

Differentiate the objective function: \\[ \frac{d f(x_1(a), x_2(a), a)}{d a} = \frac{\partial f}{\partial x_1}\frac{d x_1}{d a} + \frac{\partial f}{\partial x_2}\frac{d x_2}{d a} + \frac{\partial f}{\partial a} \\] Differentiate the constraint: \\[ \frac{d g(x_1(a), x_2(a), a)}{d a} = \frac{\partial g}{\partial x_1}\frac{d x_1}{d a} + \frac{\partial g}{\partial x_2}\frac{d x_2}{d a} + \frac{\partial g}{\partial a} \\]
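In the illustrative Example 2.8 sketch, with \(f = x_1 x_2\) and \(g = P - 2x_1 - 2x_2\), these derivatives are \\[ \frac{d f}{d P} = x_2 \cdot \frac{1}{4} + x_1 \cdot \frac{1}{4} + 0 = \frac{P}{8}, \qquad \frac{d g}{d P} = (-2)\cdot\frac{1}{4} + (-2)\cdot\frac{1}{4} + 1 = 0, \\] the second of which must equal zero because the constraint holds at every value of \(P\).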
05

f. Multiply the result of part (e) by \(\lambda\) and substitute into the derivative from part (d)

First, multiply the derivative of the constraint from part (e) by \(\lambda\). Because the constraint \(g\left(x_1(a), x_2(a), a\right)=0\) holds for every value of \(a\), its total derivative is zero, so \\[ \lambda \frac{\partial g}{\partial x_1}\frac{d x_1}{d a} + \lambda \frac{\partial g}{\partial x_2}\frac{d x_2}{d a} + \lambda \frac{\partial g}{\partial a} = 0. \\] Next, the first-order conditions from step (a) give \(\partial f / \partial x_i = -\lambda\, \partial g / \partial x_i\) for \(i = 1, 2\). Substituting these into the derivative of the objective from part (d): \\[ \frac{d f(x_1(a), x_2(a), a)}{d a} = -\lambda \frac{\partial g}{\partial x_1}\frac{d x_1}{d a} - \lambda \frac{\partial g}{\partial x_2}\frac{d x_2}{d a} + \frac{\partial f}{\partial a}. \\] By the multiplied constraint equation above, the first two terms equal \(\lambda\, \partial g / \partial a\), so \\[ \frac{d f(x_1(a), x_2(a), a)}{d a} = \frac{\partial f}{\partial a} + \lambda \frac{\partial g}{\partial a}, \\] which is just the partial derivative of the Lagrangian expression with respect to \(a\) when all the \(x\)'s are at their optimal values. This proves the envelope theorem. Intuitively, every step of the proof keeps the \(x\)'s at their optimal values: the first-order conditions hold at every \(a\), and the constraint holds at every \(a\), so the indirect effects that work through \(dx_1/da\) and \(dx_2/da\) cancel, leaving only the direct effects \(\partial f/\partial a\) and \(\lambda\, \partial g/\partial a\).
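For the last part of the question, here is a minimal sketch that assumes Example 2.8 is the problem of maximizing the rectangular area \(f = x_1 x_2\) that can be enclosed by a fence of fixed perimeter \(P\), with constraint \(g = P - 2x_1 - 2x_2 = 0\). The optimal dimensions are \(x_1 = x_2 = P/4\), so the maximized area is \(A^*(P) = P^2/16\) and \(dA^*/dP = P/8\). The envelope theorem gives the same answer without re-solving the problem: because \(\partial f/\partial P = 0\) and \(\partial g/\partial P = 1\), \\[ \frac{dA^*}{dP} = \frac{\partial f}{\partial P} + \lambda \frac{\partial g}{\partial P} = \lambda = \frac{P}{8}. \\] The Lagrange multiplier therefore puts a value on the constraint: it measures the extra area made available by one more unit of fence perimeter.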


Most popular questions from this chapter

Show that if \(f\left(x_{1}, x_{2}\right)\) is a concave function, then it is also a quasi-concave function. Do this by comparing Equation 2.100 (defining quasi-concavity) with Equation 2.84 (defining concavity). Can you give an intuitive reason for this result? Is the converse of the statement true? Are quasi-concave functions necessarily concave? If not, give a counterexample.
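As a hint for the converse, one well-known counterexample is \(f(x_1, x_2) = x_1 x_2\) on the positive orthant: it is quasi-concave, since its upper level sets \(\{x_1 x_2 \geq c\}\) are convex, but it is not concave, because its Hessian has zero diagonal entries and off-diagonal entries equal to one and is therefore indefinite.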

Taylor's theorem shows that any function can be approximated in the vicinity of any convenient point by a series of terms involving the function and its derivatives. Here we look at some applications of the theorem for functions of one and two variables. a. Any continuous and differentiable function of a single variable, \(f(x)\), can be approximated near the point \(a\) by the formula \\[ f(x)= f(a)+f^{\prime}(a)(x-a)+0.5 f^{\prime \prime}(a)(x-a)^{2}+ \text{ terms in } f^{\prime \prime \prime}, f^{\prime \prime \prime \prime}, \ldots \\] Using only the first three of these terms results in a quadratic Taylor approximation. Use this approximation together with the definition of concavity to show that any concave function must lie on or below the tangent to the function at point \(a\). b. The quadratic Taylor approximation for any function of two variables, \(f(x, y)\), near the point \((a, b)\) is given by \\[ f(x, y)= f(a, b)+f_{1}(a, b)(x-a)+f_{2}(a, b)(y-b) +0.5\left[f_{11}(a, b)(x-a)^{2} +2 f_{12}(a, b)(x-a)(y-b)+f_{22}(a, b)(y-b)^{2}\right] \\] Use this approximation to show that any concave function (as defined by Equation 2.84) must lie on or below its tangent plane at \((a, b)\).
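As a quick numerical illustration of part (a), using \(f(x) = \ln x\) and \(a = 1\) as an example chosen here for concreteness: \(f(1) = 0\), \(f^{\prime}(1) = 1\), \(f^{\prime\prime}(1) = -1\), so the quadratic Taylor approximation is \(\ln x \approx (x-1) - 0.5(x-1)^{2}\). The quadratic term is non-positive because \(f^{\prime\prime} \leq 0\) for a concave function, which is why, locally, the function lies on or below its tangent line \(y = x - 1\).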

Because the expected value concept plays an important role in many economic theories, it may be useful to summarize a few more properties of this statistical measure. Throughout this problem, \(x\) is assumed to be a continuous random variable with PDF \(f(x)\). a. (Jensen's inequality) Suppose that \(g(x)\) is a concave function. Show that \(E[g(x)] \leq g[E(x)]\). Hint: Construct the tangent to \(g(x)\) at the point \(E(x)\). This tangent will have the form \(c+d x \geq g(x)\) for all values of \(x\) and \(c+d E(x)=g[E(x)]\), where \(c\) and \(d\) are constants. b. Use the procedure from part (a) to show that if \(g(x)\) is a convex function, then \(E[g(x)] \geq g[E(x)]\). c. Suppose \(x\) takes on only non-negative values, that is, \(0 \leq x \leq \infty\). Use integration by parts to show that \\[ E(x)=\int_{0}^{\infty}[1-F(x)] d x, \\] where \(F(x)\) is the cumulative distribution function for \(x\) \(\left[\text{i.e., } F(x)=\int_{0}^{x} f(t) d t\right]\). d. (Markov's inequality) Show that if \(x\) takes on only positive values, then the following inequality holds: \\[ P(x \geq t) \leq \frac{E(x)}{t}. \\] Hint: \(E(x)=\int_{0}^{\infty} x f(x) d x=\int_{0}^{t} x f(x) d x+\int_{t}^{\infty} x f(x) d x\). e. Consider the PDF \(f(x)=2 x^{-3}\) for \(x \geq 1\). 1. Show that this is a proper PDF. 2. Calculate \(F(x)\) for this PDF. 3. Use the results of part (c) to calculate \(E(x)\) for this PDF. 4. Show that Markov's inequality holds for this function. f. The concept of conditional expected value is useful in some economic problems. We denote the expected value of \(x\) conditional on the occurrence of some event, \(A\), as \(E(x | A)\). To compute this value we need to know the PDF for \(x\) given that \(A\) has occurred [denoted by \(f(x | A)\)]. With this notation, \(E(x | A)=\int_{-\infty}^{+\infty} x f(x | A) d x\). Perhaps the easiest way to understand these relationships is with an example. Let \\[ f(x)=\frac{x^{2}}{3} \quad \text { for } \quad-1 \leq x \leq 2. \\] 1. Show that this is a proper PDF. 2. Calculate \(E(x)\). 3. Calculate the probability that \(-1 \leq x \leq 0\). 4. Consider the event \(0 \leq x \leq 2\), and call this event \(A\). What is \(f(x | A)\)? 5. Calculate \(E(x | A)\). 6. Explain your results intuitively.
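A quick check of part (e), items 1-3: \(\int_{1}^{\infty} 2x^{-3}\,dx = \left[-x^{-2}\right]_{1}^{\infty} = 1\), so \(f(x) = 2x^{-3}\) on \(x \geq 1\) is a proper PDF; its CDF is \(F(x) = 1 - x^{-2}\) for \(x \geq 1\); and using the result in part (c), \(E(x) = \int_{0}^{1} 1\,dx + \int_{1}^{\infty} x^{-2}\,dx = 1 + 1 = 2\).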

Suppose \(f(x, y)=4 x^{2}+3 y^{2}\). a. Calculate the partial derivatives of \(f\). b. Suppose \(f(x, y)=16\). Use the implicit function theorem to calculate \(d y / d x\). c. What is the value of \(d y / d x\) if \(x=1, y=2\)? d. Graph your results and use them to interpret the results in parts (b) and (c) of this problem.
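A minimal worked sketch of parts (a)-(c): \(\partial f/\partial x = 8x\) and \(\partial f/\partial y = 6y\); along the level curve \(f(x, y) = 16\), the implicit function theorem gives \(dy/dx = -\frac{\partial f/\partial x}{\partial f/\partial y} = -\frac{8x}{6y} = -\frac{4x}{3y}\); at \(x = 1\), \(y = 2\) (a point on the curve, since \(4 + 12 = 16\)), this slope equals \(-2/3\), the rate at which \(y\) must fall as \(x\) rises to keep \(f\) constant along the elliptical level curve.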

Here are a few useful relationships related to the covariance of two random variables, \(x_{1}\) and \(x_{2}\). a. Show that \(\operatorname{Cov}\left(x_{1}, x_{2}\right)=E\left(x_{1} x_{2}\right)-E\left(x_{1}\right) E\left(x_{2}\right)\). An important implication of this is that if \(\operatorname{Cov}\left(x_{1}, x_{2}\right)=0\), then \(E\left(x_{1} x_{2}\right)=E\left(x_{1}\right) E\left(x_{2}\right)\). That is, the expected value of a product of two random variables is the product of these variables' expected values. b. Show that \(\operatorname{Var}\left(a x_{1}+b x_{2}\right)=a^{2} \operatorname{Var}\left(x_{1}\right)+b^{2} \operatorname{Var}\left(x_{2}\right)+2 a b \operatorname{Cov}\left(x_{1}, x_{2}\right)\). c. In Problem 2.15d we looked at the variance of \(X=k x_{1}+(1-k) x_{2}\), \(0 \leq k \leq 1\). Is the conclusion that this variance is minimized for \(k=0.5\) changed by considering cases where \(\operatorname{Cov}\left(x_{1}, x_{2}\right) \neq 0\)? d. The correlation coefficient between two random variables is defined as \\[ \operatorname{Corr}\left(x_{1}, x_{2}\right)=\frac{\operatorname{Cov}\left(x_{1}, x_{2}\right)}{\sqrt{\operatorname{Var}\left(x_{1}\right) \operatorname{Var}\left(x_{2}\right)}}. \\] Explain why \(-1 \leq \operatorname{Corr}\left(x_{1}, x_{2}\right) \leq 1\) and provide some intuition for this result. e. Suppose that the random variable \(y\) is related to the random variable \(x\) by the linear equation \(y=\alpha+\beta x\). Show that \\[ \beta=\frac{\operatorname{Cov}(y, x)}{\operatorname{Var}(x)}. \\] Here \(\beta\) is sometimes called the (theoretical) regression coefficient of \(y\) on \(x\). With actual data, the sample analog of this expression is the ordinary least squares (OLS) regression coefficient.
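A sketch of part (a): expanding the definition \(\operatorname{Cov}(x_1, x_2) = E\left[\left(x_1 - E(x_1)\right)\left(x_2 - E(x_2)\right)\right]\) gives \(E(x_1 x_2) - E(x_1)E(x_2) - E(x_2)E(x_1) + E(x_1)E(x_2) = E(x_1 x_2) - E(x_1)E(x_2)\), because an expected value such as \(E(x_1)\) is a constant and can be pulled outside the expectation.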
