
You are required by a computer manufacturer to write a library function for a given floating point system to find the cube root \(y^{1 / 3}\) of any given positive number \(y .\) Any such relevant floating point number can be represented as \(y=a \times 2^{e}\), where \(a\) is a normalized fraction \((0.5 \leq a<1)\) and \(e\) is an integer exponent. This library function must be very efficient and it should always work. For efficiency purposes it makes sense to store some useful constants ahead of computation time, e.g., the constants \(2^{1 / 3}, \frac{2}{3}\), and \(a / 3\), should these prove useful. (a) Show how \(y^{1 / 3}\) can be obtained, once \(a^{1 / 3}\) has been calculated for the corresponding fraction, in at most five additional flops. (b) Derive the corresponding Newton iteration. What is the flop count per iteration? (c) How would you choose an initial approximation? Roughly how many iterations are needed? (The machine rounding unit is \(2^{-52}\).) [You might find this exercise challenging.]

Short Answer

Expert verified
Answer: Write \(e = 3q + r\) with \(r \in \{0, 1, 2\}\); then \(y^{1/3} = a^{1/3} \times 2^{r/3} \times 2^{q}\), where \(a^{1/3}\) is computed by the Newton iteration \(x_{n+1} = \frac{2}{3}x_n + \frac{a/3}{x_n^2}\) (about 4 flops per step), which reaches the rounding unit \(2^{-52}\) in roughly 5 iterations from \(x_0 = 1\).

Step by step solution

01

(a) Relation between \(y^{1/3}\) and \(a^{1/3}\)

The given floating point number is \(y = a \times 2^e\), so \(y^{1/3} = (a \times 2^e)^{1/3} = a^{1/3} \times 2^{e/3}\). Since \(e\) need not be divisible by 3, write \(e = 3q + r\) with integer \(q\) and remainder \(r \in \{0, 1, 2\}\), giving \(y^{1/3} = a^{1/3} \times 2^{r/3} \times 2^{q}\). The factors \(2^{1/3}\) and \(2^{2/3} = (2^{1/3})^2\) are stored ahead of time, so once \(a^{1/3}\) has been calculated, \(y^{1/3}\) is obtained with at most one multiplication by a stored constant and a multiplication by \(2^q\) (a simple exponent shift), plus the integer work of splitting \(e\); this fits comfortably within the five additional flops allowed.
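To make the exponent handling concrete, here is a minimal Python sketch of this step; the function name `assemble_cbrt` and the constant names are illustrative choices, not prescribed by the exercise:

```python
import math

# Stored constants, precomputed ahead of time as the exercise suggests.
CBRT2 = 2.0 ** (1.0 / 3.0)   # 2^(1/3)
CBRT4 = CBRT2 * CBRT2        # 2^(2/3)

def assemble_cbrt(cbrt_a, e):
    """Recover y^(1/3) from a^(1/3) and the exponent e of y = a * 2^e.

    Write e = 3q + r with r in {0, 1, 2}; then
        y^(1/3) = a^(1/3) * 2^(r/3) * 2^q.
    The factor 2^(r/3) is a stored constant, and multiplying by 2^q
    is just an exponent adjustment (ldexp).
    """
    q, r = divmod(e, 3)
    factor = (1.0, CBRT2, CBRT4)[r]        # 2^(r/3), looked up, not computed
    return math.ldexp(cbrt_a * factor, q)  # (a^(1/3) * 2^(r/3)) * 2^q
```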
02

(b) Newton Iteration and Flop Count

To find the cube root of the fraction we solve \(x^3 = a\), i.e., we apply Newton's method to \(f(x) = x^3 - a = 0\). The iterative formula for Newton's method is \(x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\). With \(f'(x) = 3x^2\) this becomes \[x_{n+1} = x_n - \frac{x_n^3 - a}{3x_n^2} = \frac{2x_n^3 + a}{3x_n^2} = \frac{2}{3}x_n + \frac{a/3}{x_n^2}.\] Using the stored constants \(\frac{2}{3}\) and \(a/3\), each iteration in the last form costs: 1 multiplication for \(x_n^2\), 1 division for \((a/3)/x_n^2\), 1 multiplication for \(\frac{2}{3}x_n\), and 1 addition. So the flop count is about 4 flops per iteration (5 if \(a/3\) is formed on the fly rather than stored).
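A minimal Python sketch of this iteration in the constant-folded form, assuming the fraction \(a\) and a starting guess are supplied (the function name and loop structure are illustrative):

```python
def newton_cbrt_fraction(a, x0, num_iter=5):
    """Newton iteration for a^(1/3), with 0.5 <= a < 1.

    Each step x <- (2/3)*x + (a/3)/x^2 uses the stored constant 2/3
    and the per-call constant a/3: 2 multiplications, 1 division,
    and 1 addition, i.e. about 4 flops per iteration.
    """
    two_thirds = 2.0 / 3.0  # stored ahead of time
    a_third = a / 3.0       # computed once, reused every iteration
    x = x0
    for _ in range(num_iter):
        x = two_thirds * x + a_third / (x * x)
    return x
```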
03

(c) Choosing initial approximation and estimating iterations

Since \(0.5 \leq a < 1\), the root \(x^* = a^{1/3}\) lies in \([2^{-1/3}, 1) \approx [0.794, 1)\), so a fixed initial guess such as \(x_0 = 1\) (or the midpoint of that interval, \(x_0 \approx 0.9\)) is already within about \(0.2\) of the root; no bisection or secant search is needed. Newton's method converges quadratically: \(|x_{n+1} - x^*| \leq \frac{M}{2m} |x_n - x^*|^2\), where \(M\) bounds \(|f''(x)| = 6x\) from above and \(m\) bounds \(f'(x) = 3x^2\) from below on the interval; here \(\frac{M}{2m} \approx 1.6\), so the error is roughly squared at every step. Starting from an error of at most about \(0.2\), the successive error bounds are roughly \(7\times 10^{-2},\ 8\times 10^{-3},\ 10^{-4},\ 10^{-8},\ 2\times 10^{-16}\), so about 5 iterations suffice to reach the machine rounding unit \(2^{-52} \approx 2.2 \times 10^{-16}\), essentially independently of the particular value of \(a\).
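The iteration count can be checked numerically; the following throwaway loop (an illustration, not part of the library routine itself) takes the worst case \(a = 0.5\) with \(x_0 = 1\) and prints the error at each step, which drops below \(2^{-52}\) after about five iterations:

```python
a = 0.5                      # worst case: largest initial error
target = a ** (1.0 / 3.0)    # reference value, for the check only
x = 1.0                      # initial guess
for k in range(1, 8):
    x = (2.0 / 3.0) * x + (a / 3.0) / (x * x)
    print(k, abs(x - target))
    if abs(x - target) <= 2.0 ** -52:
        break
```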


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Newton's Iteration Method
When it comes to computing cube roots numerically, Newton's iteration method, also known as the Newton-Raphson method, is a powerhouse of efficiency and accuracy. How does it work in the context of finding cube roots?

Imagine you're tasked with finding the cube root of a number using only simple arithmetic operations. Newton's iteration method simplifies this process by iteratively improving guesses until a sufficiently accurate result is obtained. For the equation \(x^3 - a = 0\), where \(a\) is the number whose cube root we seek, the method uses the formula:

\[x_{n+1} = x_n - \frac{x_n^3 - a}{3x_n^2}\]
Each iteration refines the value of \(x\), inching closer to the actual cube root of \(a\). With computational simplicity in mind, each iteration involves only a couple of multiplications, a division, and an addition; this simplicity is key for floating point systems used in computers, where operation cost is a crucial factor.

To start, you need an initial guess, which can be pretty rough. Then, with every subsequent guess, you use the above formula. The iterations are repeated until the change in successive guesses is smaller than the desired margin of error. The beauty of Newton's method is its rapid convergence - typically, the approximate error is squared at each step, which means precision improves dramatically with each iteration. However, one must also be cautious of certain values that might lead to divergence or slow convergence, hence the choice of initial guess matters a lot.
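As a small illustration of the general scheme, here is a generic Newton iteration in Python; the relative-step stopping test is one common choice and an assumption here, not a criterion prescribed by the text:

```python
def newton(f, fprime, x0, tol=2.0 ** -52, max_iter=50):
    """Generic Newton iteration: repeat x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) <= tol * abs(x):   # stop when the update is negligible
            break
    return x

# Example: cube root of 0.7 via f(x) = x^3 - 0.7.
root = newton(lambda x: x ** 3 - 0.7, lambda x: 3 * x ** 2, x0=1.0)
```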
Floating Point Representation
For computers, representing numbers is a game of precision and trade-offs, and floating point representation is the common compromise. But what is it exactly?

In the realm of computing, you'll often encounter numbers stored in floating point form. The name derives from the 'floating' position of the radix point, which allows a wide range of magnitudes to be covered. A floating point number consists of two parts: a mantissa (or significand) and an exponent. The typical form looks like \(y = a \times 2^e\), where \(a\) is the normalized fraction, or mantissa, satisfying \(0.5 \leq a < 1\), and \(e\) is an integer exponent.

This way of representation enables handling numbers that are incredibly large or tremendously small – great for tasks like calculating cube roots, which can vary in magnitude. Remember to distinguish the actual numeric value from its floating point representation, as most numbers cannot be represented exactly due to finite precision; there's always a rounding involved.

Why does this matter? In numerical methods, the limitations of floating point arithmetic can introduce errors, known as rounding errors. Designing algorithms for such systems requires careful planning – not only to achieve an accurate result but also to minimize the impact of those arithmetic errors.
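In Python, for instance, `math.frexp` exposes exactly this mantissa/exponent split; a small illustrative check:

```python
import math

y = 10.0
a, e = math.frexp(y)           # a in [0.5, 1), e an integer, y == a * 2**e
print(a, e)                    # 0.625 4, since 10 = 0.625 * 2**4
assert y == math.ldexp(a, e)   # ldexp reassembles a * 2**e exactly
```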
Computational Efficiency
The quest for computational efficiency is like a race against the clock, where every fraction of a second counts and every operation saps computational resources.

In numerical methods, the goal is not just to get the correct answer, but to get it quickly, using as few resources as possible. That's what we mean by computational efficiency. It's a measure of how effectively a computer algorithm performs, taking into account factors like speed and memory use. Faster algorithms that produce results quickly with fewer operations (flops) are like gold in the field of computation, and they become increasingly important as data sets grow larger.

For example, in the exercise with cube roots, storing constants like \(2^{1/3}\) and \(\frac{2}{3}\) ahead of time saves operations later, since these values are reused in every step of the Newton iteration. Efficient algorithms also mean more economical use of energy, which is crucial for large-scale computing systems or battery-powered devices. It's all about doing more with less: maximizing output while minimizing input. Remember, an algorithm that is efficient on paper isn't always efficient in practice due to the overhead of setting up the problem, managing memory, or handling data input and output; this real-world context is where the true challenge lies.


