
Write a MATLAB program to find all the roots of a given, twice continuously differentiable, function \(f \in C^{2}[a, b]\). Your program should first probe the function \(f(x)\) on the given interval to find out where it changes sign. (Thus, the program has, in addition to \(f\) itself, four other input arguments: \(a\), \(b\), the number \(n_{\text{probe}}\) of equidistant values between \(a\) and \(b\) at which \(f\) is probed, and a tolerance tol.) For each subinterval \(\left[a_{i}, b_{i}\right]\) over which the function changes sign, your program should then find a root as follows. Use either Newton's method or the secant method to find the root, monitoring the decrease in \(\left|f\left(x_{k}\right)\right|\). If an iterate is reached for which there is no sufficient decrease (e.g., if \(\left|f\left(x_{k}\right)\right| \geq 0.5\left|f\left(x_{k-1}\right)\right|\)), then revert to \(\left[a_{i}, b_{i}\right]\), apply three bisection steps, and restart the Newton or secant method. The \(i\)th root is deemed "found" as \(x_{k}\) if both $$ \left|x_{k}-x_{k-1}\right| < \mathrm{tol}\,\left(1+\left|x_{k}\right|\right) \quad \text{and} \quad \left|f\left(x_{k}\right)\right| < \mathrm{tol} $$ hold.

Short Answer

Question: Write a short outline of a MATLAB program that finds all roots of a given twice continuously differentiable function on a specified interval. Answer: The program consists of the following steps:
1. Declare a function `roots_finder` with input arguments for the function handle, the interval endpoints, the number of probes, and the tolerance.
2. Probe the function on the interval to find the subintervals over which it changes sign.
3. For each such subinterval, apply Newton's method or the secant method to locate a root.
4. If an iterate fails the sufficient-decrease test, apply three bisection steps and restart the method.
5. Accept a root once both stopping conditions hold, and collect all accepted roots.

Step by step solution

01

Declare the function and input arguments

Create a function called `roots_finder` with input arguments \(f\), the interval endpoints \(a\) and \(b\), the number of probes \(n\), and the tolerance `tol`.
```
function roots = roots_finder(f, a, b, n, tol)
```
02

Probe the function and find subintervals with sign changes

Compute \(n\) equidistant probe points on \([a, b]\) and record each subinterval over which \(f\) changes sign.
```
x_probe = linspace(a, b, n);   % n equidistant probe points in [a, b]
sign_changes = [];
for i = 1:n-1
    % Record the subinterval [x_probe(i), x_probe(i+1)] if f changes sign on it
    if sign(f(x_probe(i))) * sign(f(x_probe(i+1))) < 0
        sign_changes = [sign_changes; x_probe(i), x_probe(i+1)];
    end
end
```
03

Implement Newton's or Secant's method and find roots

For each subinterval with a sign change, apply the safeguarded Newton/secant iteration and keep the root if the stopping criteria are satisfied.
```
roots = [];
for i = 1:size(sign_changes, 1)
    a_i = sign_changes(i, 1);
    b_i = sign_changes(i, 2);
    [root, is_found] = newton_or_secant(f, a_i, b_i, tol);
    if is_found
        roots = [roots; root];
    end
end
```
04

Verify sufficient decrease and apply bisection if needed

Perform the secant iteration while monitoring \(|f(x_k)|\). Whenever the new iterate fails the sufficient-decrease test \(|f(x_{k+1})| < 0.5\,|f(x_k)|\), apply three bisection steps to shrink the bracketing interval and restart the secant method from the smaller bracket.
```
function [x_k, is_found] = newton_or_secant(f, a, b, tol)
% Safeguarded secant iteration on a bracket [a, b] with f(a)*f(b) < 0.
x_prev = a;
x_k = b;
is_found = 0;
max_iter = 100;
for iter = 1:max_iter
    % Secant step
    x_new = x_k - f(x_k) * (x_k - x_prev) / (f(x_k) - f(x_prev));
    if abs(f(x_new)) >= 0.5 * abs(f(x_k))
        % Insufficient decrease: apply three bisection steps on [a, b] ...
        for j = 1:3
            m = (a + b) / 2;
            if sign(f(a)) * sign(f(m)) <= 0
                b = m;
            else
                a = m;
            end
        end
        % ... then restart the secant method from the shrunken bracket
        x_prev = a;
        x_k = b;
        continue;
    end
    x_prev = x_k;
    x_k = x_new;
    % A root is deemed found when both the step and the residual are small
    if abs(x_k - x_prev) < tol * (1 + abs(x_k)) && abs(f(x_k)) < tol
        is_found = 1;
        return;
    end
end
end
```
05

Determine if the root has been found

The root in the \(i\)th subinterval is accepted as \(x_k\) only when both stopping conditions hold, \(|x_{k}-x_{k-1}| < \mathrm{tol}\,(1 + |x_{k}|)\) and \(|f(x_{k})| < \mathrm{tol}\); this is the test applied inside `newton_or_secant`:
```
if abs(x_k - x_prev) < tol * (1 + abs(x_k)) && abs(f(x_k)) < tol
    is_found = 1;
else
    is_found = 0;
end
```
Together, these pieces find all the roots of a given twice continuously differentiable function \(f \in C^2[a, b]\) that are isolated by a sign change during the probing step.
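As a quick check, the assembled function can be called with an anonymous function handle. The function, interval, and probe count below are illustrative choices, not part of the original exercise.
```
% Illustrative example (not from the exercise): find the roots of sin(x)
% on [0.5, 10] using 100 probe points and a tolerance of 1e-8.
f = @(x) sin(x);
r = roots_finder(f, 0.5, 10, 100, 1e-8);
disp(r)    % expect values close to pi, 2*pi, and 3*pi
```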


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

MATLAB Programming
MATLAB is a powerful tool used for various computational tasks, such as numerical methods, data analysis, and visualization. In the context of root-finding algorithms, MATLAB provides a platform to write efficient programs that can solve mathematical problems through built-in functions and automated processes.
When writing a MATLAB program to find roots, you start by defining a function script. This script includes necessary inputs such as the function itself, the interval to examine, the number of probes, and the tolerance for root verification.
  • The function, usually expressed as a handle (e.g., @(x) x^2 - 4), determines the target mathematical problem.
  • The interval [a, b] bounds the search area for possible roots.
  • The number of probes helps in identifying subintervals where sign changes occur, indicating potential roots.
  • A tolerance value aids in determining the precision of the root finding, ensuring that the found root is accurate to a specified degree.
After setting up these inputs, MATLAB's programming features allow the implementation of root-finding techniques like Newton's or the Secant method, leveraging efficient loops and conditional statements to reach a solution.
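For instance, here is a minimal sketch of how a function handle can be defined and evaluated at several probe points in a single call; the handle and probe points are illustrative, and element-wise operators are used so the handle accepts vectors.
```
% Illustrative sketch: a handle written with element-wise operators
% can be evaluated at all probe points at once.
f = @(x) x.^2 - 4;
x_probe = linspace(-5, 5, 11);    % 11 equidistant probe points
f_vals = f(x_probe);              % vector of function values at the probes
```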
Root-Finding Algorithms
Root-finding algorithms are methods used to determine the roots of a function, where the function equals zero. These algorithms are crucial in engineering, physics, and mathematics for solving equations where an analytical solution might not be feasible.
Among the various approaches, some of the most widely used are the Bisection Method, Newton's Method, and the Secant Method.
  • Bisection Method: This is a bracketing method that works by consistently halving the interval where the function changes sign until the root is pinpointed with desired accuracy.
  • Newton's Method: An iterative method that uses tangents to estimate the roots. It's highly effective but requires the derivative of the function.
  • Secant Method: Similar to Newton's Method but does not require the derivative, making it easier to apply in cases where the derivative is unknown or difficult to compute.
The choice of algorithm usually depends on the function's characteristics, such as continuity, differentiability, and the availability of an analytical derivative.
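As an illustration of the first of these, a minimal bisection sketch is shown below; the function name and stopping rule are illustrative, and it assumes \(f(a)\) and \(f(b)\) have opposite signs.
```
function r = bisect_sketch(f, a, b, tol)
% Minimal bisection sketch (illustrative): assumes f(a) and f(b) differ in sign.
while (b - a) / 2 > tol
    m = (a + b) / 2;
    if sign(f(a)) * sign(f(m)) <= 0
        b = m;    % root lies in [a, m]
    else
        a = m;    % root lies in [m, b]
    end
end
r = (a + b) / 2;
end
```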
Newton's Method
Newton's Method, also known as the Newton-Raphson Method, is an efficient iterative technique used primarily for finding successively better approximations to the roots of a real-valued function.
It begins with an initial guess for the root, which is updated using the formula:
\[ x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} \]
Here, \(x_k\) is the current approximation, and \(f'(x_k)\) is the derivative of the function evaluated at \(x_k\). The method continues iterating this process until the change between successive approximations falls below a predetermined threshold, or the function value at the guess is sufficiently close to zero.
  • Convergence Speed: One of the greatest advantages of Newton's method is its rapid convergence near the root, often reaching high precision in just a few iterations when the initial guess is close enough.
  • Sensitivity: However, the method can be sensitive to the initial guess. If the guess is not close to the actual root, the method might fail to converge or converge to an incorrect root.
Thus, it requires both continuity and differentiability of the function for successful application.
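A minimal sketch of the iteration follows, assuming the derivative is supplied as a second function handle `df`; the function name and iteration cap are illustrative.
```
function x = newton_sketch(f, df, x0, tol, max_iter)
% Minimal Newton iteration sketch (illustrative); df is the derivative f'.
x = x0;
for k = 1:max_iter
    x_new = x - f(x) / df(x);     % x_{k+1} = x_k - f(x_k)/f'(x_k)
    if abs(x_new - x) < tol * (1 + abs(x_new)) && abs(f(x_new)) < tol
        x = x_new;
        return;
    end
    x = x_new;
end
end
```
For example, `newton_sketch(@(x) x^2 - 4, @(x) 2*x, 3, 1e-10, 50)` converges to the root 2 in a handful of iterations.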
Secant Method
The Secant Method is a numerical technique used for finding the root of a function, very similar to Newton's Method, but it does not require the calculation of a derivative.
Instead, it approximates the derivative by using a linear estimation between two points \(x_k\) and \(x_{k-1}\):
\[ x_{k+1} = x_k - f(x_k) \frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})} \]
This makes the Secant Method very useful for functions where derivatives are either difficult or impossible to calculate analytically.
  • Implementation: It starts with two initial guesses instead of one and iteratively refines these guesses similarly to Newton's Method but uses the secant line instead.
  • Performance: It generally converges faster than the bisection method but slower than Newton's method under comparable starting guesses and tolerances.
  • Versatility: It can be more versatile in practical applications due to its simplicity and the non-requirement of derivatives.
However, like many iterative methods, it can also converge to the wrong root or diverge without careful selection of initial points.
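A minimal sketch of the bare iteration is given below, without the bisection safeguard used in the solution above; the function name and iteration cap are illustrative.
```
function x1 = secant_sketch(f, x0, x1, tol, max_iter)
% Minimal secant iteration sketch (illustrative); needs two starting guesses.
for k = 1:max_iter
    x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0));   % secant update
    if abs(x2 - x1) < tol * (1 + abs(x2)) && abs(f(x2)) < tol
        x1 = x2;
        return;
    end
    x0 = x1;
    x1 = x2;
end
end
```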


Most popular questions from this chapter

For \(x>0\) consider the equation \(x+\ln x=0\) It is a reformulation of the equation of Example \(3.4\). (a) Show analytically that there is exactly one root, \(0

Consider Steffensen's method $$ x_{k+1}=x_{k}-\frac{f\left(x_{k}\right)}{g\left(x_{k}\right)}, \quad k=0,1, \ldots, $$ where $$ g(x)=\frac{f(x+f(x))-f(x)}{f(x)} $$ (a) Show that in general the method converges quadratically to a root of \(f(x)\). (b) Compare the method's efficiency to the efficiency of the secant method.

Apply the bisection routine bisect to find the root of the function $$ f(x)=\sqrt{x}-1.1 $$ starting from the interval \([0,2]\) (that is, \(a=0\) and \(b=2\)), with atol \(=10^{-8}\). (a) How many iterations are required? Does the iteration count match the expectations, based on our convergence analysis? (b) What is the resulting absolute error? Could this absolute error be predicted by our convergence analysis?

(a) Derive a third order method for solving \(f(x)=0\) in a way similar to the derivation of Newton's method, using evaluations of \(f\left(x_{n}\right), f^{\prime}\left(x_{n}\right)\), and \(f^{\prime \prime}\left(x_{n}\right)\). The following remarks may be helpful in constructing the algorithm: use the Taylor expansion with three terms plus a remainder term; show that in the course of derivation a quadratic equation arises, and therefore two distinct schemes can be derived. (b) Show that the order of convergence (under the appropriate conditions) is cubic. (c) Estimate the number of iterations and the cost needed to reduce the initial error by a factor of \(10^{m}\). (d) Write a script for solving the problem of Exercise 5. To guarantee that your program does not generate complex roots, make sure to start sufficiently close to a real root. (e) Can you speculate what makes this method less popular than Newton's method, despite its cubic convergence? Give two reasons.

This exercise essentially utilizes various forms of Taylor's expansion and relies on expertise in calculus. (a) Prove that if \(f \in C^{2}[a, b]\) and there is a root \(x^{*}\) in \([a, b]\) such that \(f\left(x^{*}\right)=0, f^{\prime}\left(x^{*}\right) \neq 0\), then there is a number \(\delta\) such that, starting with \(x_{0}\) from anywhere in the neighborhood \(\left[x^{*}-\delta, x^{*}+\delta\right]\), Newton's method converges quadratically. (b) This is more challenging: prove the same conclusions under the same assumptions, except that now \(f\) is only assumed to have a first Lipschitz continuous derivative. Thus, while \(f^{\prime \prime}\) may not exist everywhere in \([a, b]\), there is a constant \(\gamma\) such that for any \(x, y \in[a, b]\) $$ \left|f^{\prime}(x)-f^{\prime}(y)\right| \leq \gamma|x-y| . $$
