Physical systems usually evolve continuously in time: their laws of motion are
differential equations. Computer simulations must approximate these
differential equations using discrete time steps. In this exercise, we will
introduce some common methods for simulating differential equations using the
example of the pendulum:
$$
\frac{d^{2} \theta}{d t^{2}}=\ddot{\theta}=-(g / L) \sin (\theta)
$$
This equation gives the motion of a pendulum with a point mass at the tip of a
massless rod\(^{47}\) of length \(L\): rederive it using a free body diagram.
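(As a check on your answer: the torque about the pivot is \(\tau=-mgL\sin\theta\) and the moment of inertia of the point mass is \(I=mL^{2}\), so \(\ddot{\theta}=\tau/I=-(g/L)\sin\theta\).)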
Go to our Web site \([108]\) and download the pendulum files for the language
you'll be using. The animation should show a pendulum oscillating from an
initial condition \(\theta_{0}=2\pi/3\), \(\dot{\theta}_{0}=0\); the equations being
solved have \(g=9.8\,\mathrm{m/s^{2}}\) and \(L=1\,\mathrm{m}\).
There are three independent criteria for picking a good algorithm for solving
differential equations: fidelity, accuracy, and stability.
Fidelity. Notice that in our time-step algorithm we did not make the
straightforward choice of using the current \((\theta(t), \dot{\theta}(t))\) to
produce \((\theta(t+\delta), \dot{\theta}(t+\delta))\). Rather, we used
\(\theta(t)\) to calculate the acceleration and update \(\dot{\theta}\), and then
used \(\dot{\theta}(t+\delta)\) to calculate \(\theta(t+\delta)\):
$$
\begin{aligned}
&\dot{\theta}(t+\delta)=\dot{\theta}(t)+\ddot{\theta}(t) \delta \\
&\theta(t+\delta)=\theta(t)+\dot{\theta}(t+\delta) \delta
\end{aligned}
$$
Wouldn't it be simpler and make more sense to update \(\theta\) and
\(\dot{\theta}\) simultaneously from their current values, so
\(\theta(t+\delta)=\theta(t)+\dot{\theta}(t)\,\delta\)? (This simplest of all
time-stepping schemes is called the Euler method; it should not be used for
ordinary differential equations, although it is sometimes used in partial
differential equations.)
(a) Try it. First, see why reversing the order of the updates to \(\theta\) and
\(\dot{\theta}\),
$$
\begin{aligned}
&\theta(t+\delta)=\theta(t)+\dot{\theta}(t) \delta \\
&\dot{\theta}(t+\delta)=\dot{\theta}(t)+\ddot{\theta}(t) \delta
\end{aligned}
$$
in our loop would give us a simultaneous update. Swap these two lines in the
code, and watch the pendulum swing for several turns, until it starts looping
the loop. Is the new algorithm as good as the old one? (Make sure you switch
the two lines back afterwards.)
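In code, the two orderings differ only by swapping two lines. A minimal Python sketch of the loop, with illustrative variable names rather than those in the downloaded files:

```python
import math

g, L = 9.8, 1.0                          # m/s^2, m
delta = 0.01                             # time step (s)
theta, thetadot = 2 * math.pi / 3, 0.0   # initial conditions

for _ in range(1000):
    # Symplectic order: update thetadot first, then advance theta
    # using the *new* velocity.
    thetadot += -(g / L) * math.sin(theta) * delta
    theta += thetadot * delta
    # Swapping the two lines above gives the simultaneous (Euler) update,
    # which slowly gains energy until the pendulum loops the loop.
```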
The simultaneous update scheme is just as accurate as the one we chose, but it
is not as faithful to the physics of the problem: its fidelity is not as good.
For subtle reasons we won't explain here, updating first \(\dot{\theta}\) and
then \(\theta\) allows our algorithm to exactly conserve an approximation to the
energy: it's called a symplectic algorithm.\(^{48}\) Improved versions of this
algorithm - like the Verlet algorithms below - are often used to simulate
systems that conserve energy (like molecular dynamics) because they
exactly\(^{49}\) simulate the dynamics for an approximation to the Hamiltonian -
preserving important physical features not kept by just approximately solving
the dynamics.
Accuracy. Most computational methods for solving differential equations (and
many other continuum problems like integrating functions) involve a step size
\(\delta\), and become more accurate as \(\delta\) gets smaller. What is most
important is not the error in each time step, but the accuracy of the answer
after a fixed time \(T\), which is the accumulated error after \(T / \delta\) time
steps. If this accumulated error varies as \(\delta^{n}\), we say that the
algorithm has \(n^{\text {th }}\) order cumulative accuracy. Our algorithm is
not very high order!
(b) Plot the pendulum trajectory \(\theta(t)\) for time steps \(\delta=0.1\),
\(0.01\), and \(0.001\). Zoom in on the curve at one of the coarse points
(say, \(t=1\)) and compare the values from the three time steps. Does the value
at this time appear to be converging\(^{50}\) as \(\delta \rightarrow 0\)? From
your measurement, what order of cumulative accuracy does our method have?
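For a quick numerical check, a sketch along these lines (reusing the symplectic update from the earlier sketch; the helper name theta_at is our own) compares \(\theta(t=1)\) across step sizes:

```python
import math

g, L = 9.8, 1.0

def theta_at(t_target, delta):
    """Integrate with the symplectic update; return theta at t_target."""
    theta, thetadot = 2 * math.pi / 3, 0.0
    for _ in range(round(t_target / delta)):
        thetadot += -(g / L) * math.sin(theta) * delta
        theta += thetadot * delta
    return theta

# For an n-th order method, the successive differences between these
# values should shrink by ~10^n per factor of 10 in delta.
for d in (0.1, 0.01, 0.001):
    print(d, theta_at(1.0, d))
```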
We can write higher-order symplectic algorithms. The approximation to the
second derivative
$$
\ddot{\theta} \approx(\theta(t+\delta)-2 \theta(t)+\theta(t-\delta)) /
\delta^{2}
$$
(which you can verify with a Taylor expansion is correct to
\(O\left(\delta^{4}\right)\)) motivates the Verlet algorithm:
$$
\theta(t+\delta)=2 \theta(t)-\theta(t-\delta)+\ddot{\theta}(t)\, \delta^{2}.
$$
This algorithm is a bit awkward to start up, since you need to initialize\(^{51}\)
\(\theta(t-\delta)\); it's also often convenient to know the velocities as
well as the positions. The Velocity Verlet algorithm fixes both of these
problems; it is motivated by the constant-acceleration formula
\(x(t)=x_{0}+v_{0} t+\tfrac{1}{2} a t^{2}\):
$$
\begin{aligned}
\theta(t+\delta) &= \theta(t)+\dot{\theta}(t)\,\delta+\tfrac{1}{2}\,\ddot{\theta}(t)\,\delta^{2}, \\
\dot{\theta}(t+\delta/2) &= \dot{\theta}(t)+\tfrac{1}{2}\,\ddot{\theta}(t)\,\delta, \\
\dot{\theta}(t+\delta) &= \dot{\theta}(t+\delta/2)+\tfrac{1}{2}\,\ddot{\theta}(t+\delta)\,\delta.
\end{aligned}
$$
The trick that makes this algorithm so good is to cleverly split the velocity
increment into two pieces, half for the acceleration at the old position and
half for the new position.\(^{52}\) (You'll want to initialize \(\ddot{\theta}\)
once before starting the loop.)
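In code, one Velocity Verlet step might look like the following Python sketch (variable names are illustrative; the acceleration is computed once before the loop and then reused, so each step costs only one new evaluation):

```python
import math

g, L = 9.8, 1.0
delta = 0.01
theta, thetadot = 2 * math.pi / 3, 0.0
accel = -(g / L) * math.sin(theta)  # initialize acceleration before the loop

for _ in range(1000):
    theta += thetadot * delta + 0.5 * accel * delta**2  # position update
    thetadot += 0.5 * accel * delta       # half-kick with the old acceleration
    accel = -(g / L) * math.sin(theta)    # acceleration at the new position
    thetadot += 0.5 * accel * delta       # half-kick with the new acceleration
```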
(c) Pick one of the Verlet algorithms, implement it, and plot the trajectory
for time steps \(\delta=0.1\), \(0.01\), and \(0.001\). You should see a dramatic
improvement in convergence. What cumulative order of accuracy does Verlet
have?\(^{53}\)
Stability. In many cases high accuracy is not crucial. What prevents us from
taking enormous time steps? In a given problem, there is usually a typical
fastest time scale: a vibration or oscillation period (as in our exercise) or
a growth or decay rate. When our time step becomes a substantial fraction of
this fastest time scale, algorithms like ours usually become unstable: the
first few time steps may be fairly accurate, but small errors build up until
they become unacceptable (indeed, often one's first warning of problems
is a machine overflow).
(d) Plot the pendulum trajectory \(\theta(t)\) for time steps \(\delta=
0.1, 0.2, \ldots, 0.8\), using a small-amplitude oscillation \(\theta_{0}=0.01\),
\(\dot{\theta}_{0}=0\), up to \(t_{\max}=10\). At about what \(\delta_{c}\) does
it go unstable? Looking at the first few points of the trajectory, does it
seem like sampling the curve at steps much larger than \(\delta_{c}\) would miss
the oscillations? At \(\delta_{c}/2\), how accurate is the amplitude of the
oscillation? (You'll need to observe several periods in order to estimate the
maximum amplitude of the solution.)
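A rough scan for \(\delta_{c}\) might look like this sketch (again using the simple symplectic update; substitute your Verlet step if that is what you are testing):

```python
import math

g, L = 9.8, 1.0

def max_amplitude(delta, t_max=10.0):
    """Largest |theta| reached from the small-amplitude initial condition."""
    theta, thetadot = 0.01, 0.0
    peak = abs(theta)
    for _ in range(round(t_max / delta)):
        thetadot += -(g / L) * math.sin(theta) * delta
        theta += thetadot * delta
        peak = max(peak, abs(theta))
    return peak

# A stable run stays near 0.01; past delta_c the amplitude blows up.
for d in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8):
    print(f"delta={d:.1f}  max |theta| = {max_amplitude(d):.3g}")
```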
In solving for the properties of large, nonlinear systems (e.g., partial
differential equations (PDEs) and molecular dynamics), stability tends to be
the key difficulty. The maximum stepsize depends on the local configuration,
so highly nonlinear regions can send the system unstable before one might
expect. The maximum stable stepsize often yields accuracy far higher than
needed; indeed, some algorithms become less stable if the stepsize is
decreased!\(^{54}\)
ODE packages: higher order, variable stepsize, stiff systems ...
The Verlet algorithms are not hard to code. Higher-order symplectic
algorithms are typically significantly less stable, so in Hamiltonian systems
we use them mostly in unusual applications (planetary motion) where high
accuracy is demanded. In systems of differential equations where there is
no conserved energy or Hamiltonian, or even in Hamiltonian systems (like high-
energy collisions) where accuracy at short times is more crucial than fidelity
at long times, we use general-purpose methods.
The general-purpose solvers come in a variety of basic algorithms (Runge-
Kutta, predictor-corrector, ...), and methods for maintaining and enhancing
accuracy (variable step size, Richardson extrapolation). There are also
implicit methods for stiff systems. A system is stiff if there is a large
separation between the slowest and fastest relevant time scales: implicit
methods often allow one to take time steps much larger than the fastest time
scale (unlike the explicit Verlet methods you studied in part (d), which go
unstable). Large, sophisticated packages have been developed over many years
for solving differential equations - switching between algorithms and varying
the time steps to most efficiently maintain a given level of accuracy. They
solve \(d\mathbf{y}/dt = \mathbf{dydt}(\mathbf{y}, t)\), where
for us \(\mathbf{y}=[\theta, \dot{\theta}]\) and
\(\mathbf{dydt}=[\dot{\theta}, \ddot{\theta}]\). They typically come in the form of
subroutines or functions, which need as arguments:
- Initial conditions \(\mathbf{y}_{0}\),
- The right-hand side dydt, a function of the vector \(\mathbf{y}\) and time
\(t\), which returns a vector giving the current rate of change of \(\mathbf{y}\),
and
- The initial and final times, and perhaps intermediate times, at which the
trajectory \(\mathbf{y}(t)\) is desired.
They often have options that:
- Ask for desired accuracy goals, typically a relative (fractional) accuracy
and an absolute accuracy, sometimes set separately for each component of
\(\mathbf{y}\),
- Ask for and return derivative and time-step information from the end of the
last step (to allow efficient restarts after intermediate points),
- Ask for a routine that computes the derivatives of dydt with respect to the
current components of \(\mathbf{y}\) (for use by the stiff integrator), and
- Return information about the methods, time steps, and performance of the
algorithm.
You will be supplied with one of these general-purpose packages, and
instructions on how to use it.
(e) Write the function dydt, and use the general-purpose solver to solve for
the motion of the pendulum as in parts (a)-(c), and informally check that
the trajectory is accurate.
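The details of the supplied package will differ, but as an illustrative sketch (not the package you will be given), here is what dydt and a call to a general-purpose solver might look like in Python, using SciPy's solve_ivp as a stand-in; its fun(t, y), t_span, and rtol/atol arguments are typical of such interfaces:

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.8, 1.0  # same parameters as the animation

def dydt(t, y):
    """Right-hand side: y = [theta, thetadot]; returns [thetadot, thetaddot]."""
    theta, thetadot = y
    return [thetadot, -(g / L) * np.sin(theta)]

# Integrate from t=0 to t=10 with tight accuracy goals.
sol = solve_ivp(dydt, (0.0, 10.0), [2 * np.pi / 3, 0.0],
                rtol=1e-8, atol=1e-8)
theta_of_t = sol.y[0]  # theta at the solver's chosen time steps sol.t
```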