
(a) Use the Taylor expansion $$ \begin{aligned} y\left(t_{i+1}\right)=& y\left(t_{i}\right)+h y^{\prime}\left(t_{i}\right)+\frac{h^{2}}{2} y^{\prime \prime}\left(t_{i}\right)+\frac{h^{3}}{6} y^{\prime \prime \prime}\left(t_{i}\right) \\ &+\frac{h^{4}}{24} y^{(i v)}\left(t_{i}\right)+\frac{h^{5}}{120} y^{(v)}\left(t_{i}\right)+\mathcal{O}\left(h^{6}\right) \end{aligned} $$ to derive a corresponding series expansion for the local truncation error of the forward Euler method. (b) Manipulating the forward Euler method written for the step sizes \(h\) and \(h / 2\), apply extrapolation (Section 14.2) to obtain a second order one-step method. (c) Manipulating the forward Euler method written for the step sizes \(h, h / 2\), and \(h / 3\), apply extrapolation to obtain a third order one-step method.

Short Answer

(a) The error committed in one forward Euler step (the local truncation error, in the per-step normalization used below) is $$ E(t_i) = \frac{h^2}{2}y''(t_i) + \frac{h^3}{6}y'''(t_i) + \frac{h^4}{24}y^{(4)}(t_i) + \frac{h^5}{120}y^{(5)}(t_i) + \mathcal{O}(h^6). $$ (b) Writing \(y^{(h/n)}\) for the forward Euler approximation of \(y(t_{i+1})\) obtained with \(n\) substeps of size \(h/n\) starting from \(y_i\), the extrapolated step $$ y_{i+1} = 2y^{(h/2)} - y^{(h)} $$ is second-order accurate. (c) Using the step sizes \(h\), \(h/2\), and \(h/3\), the extrapolated step $$ y_{i+1} = \frac{1}{2}y^{(h)} - 4y^{(h/2)} + \frac{9}{2}y^{(h/3)} $$ is third-order accurate.

Step by step solution

01

Part (a): Derive a series expansion for the local truncation error

Started from the exact value \(y(t_i)\), the forward Euler method advances the solution by $$ y_{i+1} = y(t_i) + h y'(t_i). $$ Subtracting this from the given Taylor expansion of \(y(t_{i+1})\) leaves exactly the terms that forward Euler omits: $$ \begin{aligned} y(t_{i+1}) - \bigl(y(t_i) + hy'(t_i)\bigr) &= \frac{h^2}{2}y''(t_i) + \frac{h^3}{6}y'''(t_i) \\ &\quad + \frac{h^4}{24}y^{(4)}(t_i) + \frac{h^5}{120}y^{(5)}(t_i) + \mathcal{O}(h^6). \end{aligned} $$ The left-hand side is the error committed in a single step, which we denote by \(E(t_i)\); it is the local truncation error of the forward Euler method (dividing by \(h\) gives the alternative normalization \(d_i = \frac{h}{2}y''(t_i) + \cdots\) used in some texts). Thus $$ E(t_i) = \frac{h^2}{2}y''(t_i) + \frac{h^3}{6}y'''(t_i) + \frac{h^4}{24}y^{(4)}(t_i) + \frac{h^5}{120}y^{(5)}(t_i) + \mathcal{O}(h^6). $$
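As an informal numerical check (added here, not part of the textbook solution), the leading term \(\frac{h^2}{2}y''(t_i)\) can be compared against the actual one-step error; the test problem \(y'=-y\), \(y(0)=1\), for which \(y''(0)=1\), is an illustrative assumption.

```python
import numpy as np

# Test problem y' = -y, y(0) = 1, exact solution y(t) = exp(-t).
f = lambda t, y: -y
t0, y0 = 0.0, 1.0
ypp = y0  # for this problem y'' = y, so y''(t0) = y0

for h in [0.1, 0.05, 0.025]:
    euler = y0 + h * f(t0, y0)     # one forward Euler step
    exact = np.exp(-(t0 + h))      # exact value at t0 + h
    error = exact - euler          # one-step error E(t0)
    print(f"h={h:6.3f}  E={error: .3e}  (h^2/2)y''={0.5 * h**2 * ypp: .3e}")
```

The two columns agree to leading order, as the expansion predicts.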
02

Part (b): Obtaining a second-order one-step method using extrapolation

Denote by \(y^{(h)}\) the forward Euler approximation of \(y(t_{i+1})\) obtained with one step of size \(h\), and by \(y^{(h/2)}\) the approximation obtained with two steps of size \(h/2\), both starting from the exact value \(y(t_i)\). By part (a), the single step of size \(h\) leaves the error $$ y(t_{i+1}) - y^{(h)} = \frac{h^2}{2}y''(t_i) + \mathcal{O}(h^3), $$ while each of the two half-steps contributes an error of \(\frac{(h/2)^2}{2}y'' + \mathcal{O}(h^3)\), so that $$ y(t_{i+1}) - y^{(h/2)} = \frac{1}{2}\cdot\frac{h^2}{2}y''(t_i) + \mathcal{O}(h^3). $$ Multiplying the second equation by 2 and subtracting the first eliminates the \(\mathcal{O}(h^2)\) term: $$ y(t_{i+1}) = 2y^{(h/2)} - y^{(h)} + \mathcal{O}(h^3). $$ Hence the one-step method $$ y_{i+1} = 2y^{(h/2)} - y^{(h)} $$ has an \(\mathcal{O}(h^3)\) error per step, i.e., it is second-order accurate. (Writing the combination out shows that it coincides with the explicit midpoint method \(y_{i+1} = y_i + h f\bigl(t_i + \tfrac{h}{2},\, y_i + \tfrac{h}{2}f(t_i, y_i)\bigr)\).) This is the second-order one-step method obtained by extrapolation.
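As an informal sanity check (not part of the textbook solution), the sketch below implements the extrapolated step in Python; the helper name `euler_steps` and the test problem \(y'=-y\) are illustrative assumptions. The one-step error of a second-order method should decrease like \(h^3\).

```python
import numpy as np

def euler_steps(f, t, y, h, n):
    """Advance y from t to t + h with n forward Euler substeps of size h/n."""
    k = h / n
    for j in range(n):
        y = y + k * f(t + j * k, y)
    return y

def extrapolated_step2(f, t, y, h):
    """Second-order step: 2*y^(h/2) - y^(h) (cancels the O(h^2) error term)."""
    return 2.0 * euler_steps(f, t, y, h, 2) - euler_steps(f, t, y, h, 1)

# Quick order check on y' = -y, y(0) = 1 over one step from t = 0.
f = lambda t, y: -y
for h in [0.1, 0.05, 0.025]:
    err = abs(np.exp(-h) - extrapolated_step2(f, 0.0, 1.0, h))
    print(f"h={h:6.3f}  one-step error = {err:.3e}")   # shrinks roughly like h^3
```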
03

Part (c): Obtaining a third-order one-step method using extrapolation

Similar to part (b), let \(y^{(h/n)}\) denote the forward Euler approximation of \(y(t_{i+1})\) obtained with \(n\) substeps of size \(h/n\), for \(n = 1, 2, 3\). Summing the per-step errors from part (a) and accounting for how the error of each substep is propagated through the remaining substeps, the error over the interval has the form $$ y(t_{i+1}) - y^{(h/n)} = \frac{c_1}{n} + \frac{c_2}{n^2} + \mathcal{O}(h^4), $$ where \(c_1 = \mathcal{O}(h^2)\) and \(c_2 = \mathcal{O}(h^3)\) do not depend on \(n\); for \(n=1\) this reduces to the expansion of part (a). To obtain a third-order method we therefore seek a combination \(y_{i+1} = a_1 y^{(h)} + a_2 y^{(h/2)} + a_3 y^{(h/3)}\) that reproduces \(y(t_{i+1})\) up to \(\mathcal{O}(h^4)\). This requires $$ a_1 + a_2 + a_3 = 1, \qquad a_1 + \frac{a_2}{2} + \frac{a_3}{3} = 0, \qquad a_1 + \frac{a_2}{4} + \frac{a_3}{9} = 0, $$ whose solution is \(a_1 = \tfrac{1}{2}\), \(a_2 = -4\), \(a_3 = \tfrac{9}{2}\). Therefore the extrapolated one-step method $$ y_{i+1} = \frac{1}{2}y^{(h)} - 4y^{(h/2)} + \frac{9}{2}y^{(h/3)} $$ has an \(\mathcal{O}(h^4)\) error per step, i.e., it is the third-order one-step method obtained by extrapolation.
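As with part (b), this can be sanity-checked numerically. The sketch below is an illustration, not part of the textbook solution; the test problem \(y'=-y\) and the function names are assumptions, and the `euler_steps` helper is repeated so the snippet is self-contained.

```python
import numpy as np

def euler_steps(f, t, y, h, n):
    """Advance y from t to t + h with n forward Euler substeps of size h/n."""
    k = h / n
    for j in range(n):
        y = y + k * f(t + j * k, y)
    return y

def extrapolated_step3(f, t, y, h):
    """Third-order step: (1/2)*y^(h) - 4*y^(h/2) + (9/2)*y^(h/3)."""
    return (0.5 * euler_steps(f, t, y, h, 1)
            - 4.0 * euler_steps(f, t, y, h, 2)
            + 4.5 * euler_steps(f, t, y, h, 3))

# Illustrative check on y' = -y, y(0) = 1: one-step errors should fall like h^4.
f = lambda t, y: -y
for h in [0.1, 0.05, 0.025]:
    err = abs(np.exp(-h) - extrapolated_step3(f, 0.0, 1.0, h))
    print(f"h={h:6.3f}  one-step error = {err:.3e}")
```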


Key Concepts

These are the key concepts you need to understand to accurately answer the question.

Taylor Expansion
The Taylor expansion is a crucial mathematical tool in numerical methods that helps us approximate functions. This expansion expresses a function as an infinite sum of its derivatives at a single point. For example, using the Taylor series, we can approximate the value of a function at a point by taking into account the values and the derivatives of the function at another point nearby. The more derivatives we include, the better our approximation generally becomes. In numerical methods for solving differential equations, Taylor expansions are often used to derive methods like the Euler method, providing both the framework and insights into potential errors in approximations.
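As a concrete illustration (added here for clarity), expanding \(e^t\) about \(t_i\) gives
$$ e^{t_i + h} = e^{t_i}\left(1 + h + \frac{h^2}{2} + \frac{h^3}{6} + \cdots\right), $$
so truncating after the linear term leaves a remainder of roughly \(\frac{h^2}{2}e^{t_i}\), which shrinks as more terms are kept or as \(h\) decreases.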
Local Truncation Error
When using numerical methods to approximate solutions to differential equations, we inevitably face errors. One of these is the local truncation error. This error occurs at each step of a numerical method due to the approximation technique itself, such as approximating with a truncated Taylor series instead of the full function. For example, in a simple Euler method, the local truncation error is given by the terms of the Taylor series that are omitted, which starts from the quadratic term onward. Understanding and managing this error is crucial when developing and using numerical methods because it accumulates over multiple steps and affects the accuracy of the final result.
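For a small worked example (added here, not from the original explanation): for the model problem \(y' = \lambda y\), one forward Euler step from the exact value \(y(t_i)\) produces \((1+\lambda h)\,y(t_i)\), whereas the exact solution advances to \(e^{\lambda h}y(t_i) = \bigl(1 + \lambda h + \frac{(\lambda h)^2}{2} + \cdots\bigr)y(t_i)\); the omitted terms, beginning with \(\frac{(\lambda h)^2}{2}y(t_i)\), are precisely the error committed in that single step.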
Euler Method
The Euler method is one of the simplest numerical techniques for solving ordinary differential equations. It is a first-order method, which means it approximates the solution by keeping only the first derivative (the linear term) of a Taylor expansion. To apply it to \(y' = f(t, y)\), we iteratively compute the next value from the formula
$$ y_{n+1} = y_n + h\, f(t_n, y_n), $$
where
  • \(y_n\) is the current value,
  • \(f(t_n, y_n)\) is the derivative at the current point, and
  • \(h\) is the step size.
The simplicity of the Euler method makes it easy to implement but also limits its accuracy, hence larger step sizes may lead to greater errors. It is a stepping stone to more accurate techniques that are developed by improving upon its fundamental approach.
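A minimal sketch of the method in Python (illustrative; the test problem and function names are assumptions made here):

```python
import numpy as np

def forward_euler(f, t0, y0, h, n_steps):
    """Forward Euler: repeatedly apply y_{k+1} = y_k + h*f(t_k, y_k)."""
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Example: y' = -y, y(0) = 1, integrated to t = 1 with step h = 0.1.
ts, ys = forward_euler(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
print(ys[-1], np.exp(-1.0))   # Euler approximation vs. exact value e^{-1}
```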
Extrapolation
Extrapolation in numerical methods refers to refining solutions by carefully combining estimates at different scales. For instance, we might use values generated using different step sizes in the Euler method to eliminate lower-order error terms and achieve more accurate results. By cleverly subtracting scaled solutions for larger and smaller step sizes, we can "extrapolate" a more accurate estimate of the function's behavior. This process leverages the known behavior of errors in numerical solutions to effectively boost the method’s accuracy without the need for smaller step sizes directly in calculations, resulting in higher-order accuracy.
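A generic sketch of this idea (an illustration added here, using a simple finite-difference quantity rather than the ODE stepping of the exercise): if an approximation \(A(h)\) has error proportional to \(h^p\), then the combination \(\bigl(2^p A(h/2) - A(h)\bigr)/(2^p - 1)\) cancels the leading error term.

```python
import math

def richardson(A, h, p):
    """Richardson extrapolation: combine A(h) and A(h/2), assuming
    A(h) = exact + c*h**p + higher-order terms, to cancel the h**p term."""
    return (2**p * A(h / 2) - A(h)) / (2**p - 1)

# Illustration: a forward-difference derivative estimate (first order, p = 1).
fd = lambda h: (math.sin(1.0 + h) - math.sin(1.0)) / h   # approximates cos(1)
h = 0.1
print(abs(fd(h) - math.cos(1.0)))                 # O(h) error
print(abs(richardson(fd, h, 1) - math.cos(1.0)))  # noticeably smaller
```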
Higher Order Methods
Higher order methods in numerical computation refer to techniques that use additional derivatives or corrective steps to achieve more accurate solutions compared to simpler, lower-order methods like Euler's. By increasing the order, these methods reduce the impact of the local truncation error, leading to solutions that are closer to true values with fewer steps or larger intervals. These methods, such as Runge-Kutta techniques, take into account additional terms from the Taylor expansion, transforming our approach from a simpler method like Euler’s to one that effectively approximates with better accuracy and stability. Opting for higher order methods is particularly beneficial when dealing with complex systems requiring precise solutions.


Most popular questions from this chapter

Write the explicit and implicit trapezoidal methods, as well as the classical RK method of order 4, in tableau notation.

In molecular dynamics simulations using classical mechanics modeling, one is often faced with a large nonlinear ODE system of the form $$ M \mathbf{q}^{\prime \prime}=\mathbf{f}(\mathbf{q}), \text { where } \mathbf{f}(\mathbf{q})=-\nabla U(\mathbf{q}). $$ Here \(\mathbf{q}\) are generalized positions of atoms, \(M\) is a constant, diagonal, positive mass matrix, and \(U(\mathbf{q})\) is a scalar potential function. Also, \(\nabla U(\mathbf{q})=\left(\frac{\partial U}{\partial q_{1}}, \ldots, \frac{\partial U}{\partial q_{m}}\right)^{T}.\) A small (and somewhat nasty) instance of this is given by the Morse potential, where \(\mathbf{q}=q(t)\) is scalar, \(U(q)=D\left(1-e^{-S\left(q-q_{0}\right)}\right)^{2}\), and we use the constants \(D=90.5 \cdot 0.4814 \mathrm{e}{-3}\), \(S=1.814\), \(q_{0}=1.41\), and \(M=0.9953\). (a) Defining the velocities \(\mathbf{v}=\mathbf{q}^{\prime}\) and momenta \(\mathbf{p}=M \mathbf{v}\), the corresponding first-order ODE system for \(\mathbf{q}\) and \(\mathbf{v}\) is given by $$ \begin{aligned} \mathbf{q}^{\prime} &=\mathbf{v}, \\ M \mathbf{v}^{\prime} &=\mathbf{f}(\mathbf{q}). \end{aligned} $$ Show that the Hamiltonian function $$ H(\mathbf{q}, \mathbf{p})=\mathbf{p}^{T} M^{-1} \mathbf{p} / 2+U(\mathbf{q}) $$ is constant for all \(t>0\). (b) Use a library nonstiff RK code based on a 4(5) embedded pair, such as MATLAB's ode45, to integrate this problem for the Morse potential on the interval \(0 \leq t \leq 2000\), starting from \(q(0)=1.4155\), \(p(0)=\frac{1.545}{48.888} M\). Using a tolerance tol \(=1.\mathrm{e}{-4}\), the code should require a little more than 1000 time steps. Plot the obtained values of \(H(q(t), p(t))-H(q(0), p(0))\). Describe your observations.

The ODE system given by $$ \begin{aligned} &y_{1}^{\prime}=\alpha-y_{1}-\frac{4 y_{1} y_{2}}{1+y_{1}^{2}} \\ &y_{2}^{\prime}=\beta y_{1}\left(1-\frac{y_{2}}{1+y_{1}^{2}}\right) \end{aligned} $$ where \(\alpha\) and \(\beta\) are parameters, represents a simplified approximation to a chemical reaction. There is a parameter value \(\beta_{c}=\frac{3 \alpha}{5}-\frac{25}{\alpha}\) such that for \(\beta>\beta_{c}\) solution trajectories decay in amplitude and spiral in phase space into a stable fixed point, whereas for \(\beta<\beta_{c}\) trajectories oscillate without damping and are attracted to a stable limit cycle. (This is called a Hopf bifurcation.) (a) Set \(\alpha=10\) and use any of the discretization methods introduced in this chapter with a fixed step size \(h=0.01\) to approximate the solution starting at \(y_{1}(0)=0, y_{2}(0)=2\), for \(0 \leq t \leq 20\). Do this for the parameter values \(\beta=2\) and \(\beta=4\). For each case plot \(y_{1}\) vs. \(t\) and \(y_{2}\) vs. \(y_{1} .\) Describe your observations. (b) Investigate the situation closer to the critical value \(\beta_{c}=3.5\). (You may have to increase the length of the integration interval \(b\) to get a better look.)

To draw a circle of radius \(r\) on a graphics screen, one may proceed to evaluate pairs of values \(x=r \cos (\theta), y=r \sin (\theta)\) for a succession of values \(\theta\). But this is computationally expensive. A cheaper method may be obtained by considering the \(\mathrm{ODE}\) $$ \begin{array}{lc} \dot{x}=-y, & x(0)=r, \\ \dot{y}=x, & y(0)=0 \end{array} $$ where \(\dot{x}=\frac{d x}{d \theta}\), and approximating this using a simple discretization method. However, care must be taken to ensure that the obtained approximate solution looks right, i.e., that the approximate curve closes rather than spirals. Carry out this integration using a uniform step size \(h=.02\) for \(0 \leq \theta \leq 120\), applying forward Euler, backward Euler, and the implicit trapezoidal method. Determine if the solution spirals in, spirals out, or forms an approximate circle as desired. Explain the observed results. [Hint: This has to do with a certain invariant function of \(x\) and \(y\), rather than with the accuracy order of the methods.]

Show that the local truncation error of the four-step Adams-Bashforth method is \(d_{i}=\frac{251}{720} h^{4} y^{(v)}\left(t_{i}\right)\), that of the five-step Adams-Bashforth method is \(d_{i}=\frac{95}{288} h^{5} y^{(vi)}\left(t_{i}\right)\), and that of the four-step Adams-Moulton method is \(d_{i}=\frac{-3}{160} h^{5} y^{(vi)}\left(t_{i}\right)\).
