Differential Equations

Questions to David Rotermund

By Jan Wiersig, modified by Udo Ernst and translated into English by Daniel Harnack. David Rotermund replaced the Matlab code with Python code.

Ordinary Differential Equations

In many fields of science, problems occur in which one or several quantities vary with time, e.g. oscillations or decay processes. The description of these processes can be formalized by differential equations. Radioactive decay may serve as a simple example. An amount of radioactive material is given at time $t=0$. We are looking for a function $x(t)$ that gives the amount of material still present at time $t$. The rate at which the radioactive material decays at time $t$ is proportional to the amount of material $x(t)$ present at that time. From these considerations, the following differential equation can be derived:

$$\dot{x} = -\alpha x$$

with decay constant $\alpha > 0$. This differential equation is referred to as ‘ordinary’, since it only contains derivatives with respect to a single variable (here time $t$). It is furthermore a differential equation of first order, since only the first derivative appears.

The solution of a differential equation is not a number, but a function. Here, the solution is $x(t) = x_0 e^{-\alpha t}$, which can be verified by taking the first derivative. Unfortunately, in most practical cases a differential equation cannot be solved analytically. This holds especially for systems of ordinary differential equations. An example is the Newtonian mechanics of a particle

$$m \ddot{\vec{r}} = \vec{F}(\vec{r}, \dot{\vec{r}}, t)$$

with position $\vec{r} = (x, y, z)$, velocity $\vec{v} = \dot{\vec{r}}$, acceleration $\ddot{\vec{r}}$, mass $m$, force $\vec{F}$ and time $t$. This is a system of ordinary differential equations of second order, since the second derivative with respect to time appears. Typically one is interested in an initial value problem: given are the position $\vec{r}(0)$ and the velocity $\vec{v}(0)$ at the starting time, and sought are the position $\vec{r}(t)$ and velocity $\vec{v}(t)$ at a later time $t$.

As an alternative to the previous notation, the differential equation can also be written as a system of ordinary differential equations of first order with twice the number of variables

$$\dot{\vec{r}} = \vec{v}$$

$$m \dot{\vec{v}} = \vec{F}(\vec{r}, \vec{v}, t) \,.$$

Using the ‘phase space variable’ $\vec{x} = (\vec{r}, \vec{v})$, it can be transformed to a more compact form

$$\dot{\vec{x}} = \tilde{F}(\vec{x}, t) \,.$$

Here, the first three components of $\tilde{F}$ are equal to $\vec{v}$, and the last three components are equal to $\vec{F}/m$. One can see that it is sufficient to consider systems of ordinary differential equations of first order. Note that the tilde is omitted in the following.
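
As an illustration (a minimal sketch, not part of the original text), this reduction to first order can be written as a small Python function. The callable force and the mass m are placeholders that a concrete problem would have to supply:

import numpy as np


def newton_first_order(x, t, force, m):
    # x is the phase space variable (r, v): r = x[:3], v = x[3:]
    # `force` is assumed to be a callable force(r, v, t) returning a 3-component numpy array
    r, v = x[:3], x[3:]
    # the first three components of F~ are v, the last three are F/m
    return np.concatenate([v, force(r, v, t) / m])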

Euler Method

Figure 8.1.

Figure 8.1.: Flow chart of the Euler integration scheme.

The aim is to find the solution of the initial value problem of the differential equation system (DES)

$$\dot{\vec{x}} = \vec{F}(\vec{x}, t)$$

numerically. To this end, we first replace the derivative by the right-sided difference quotient

$$\frac{\vec{x}(t+\tau) - \vec{x}(t)}{\tau} + O(\tau) = \vec{F}(\vec{x}, t)$$

or, equivalently,

$$\vec{x}(t+\tau) = \vec{x}(t) + \vec{F}(\vec{x}, t)\,\tau + O(\tau^2) \,.$$

The truncation error is of second order in the step width $\tau$. We indicate the discretization in time by the following notation: $t_n = (n-1)\,\tau$, $\vec{x}_n = \vec{x}(t_n)$ and $\vec{F}_n = \vec{F}(\vec{x}(t_n), t_n)$, where $n = 1, 2, \dots$. The Euler method then follows:

$$\vec{x}_{n+1} = \vec{x}_n + \vec{F}_n\,\tau \,.$$

This method is illustrated in figure 8.2.

Figure 8.2.

Figure 8.2.: This figure shows a comparison between the true solution (continuous line) and a numerical solution found by the Euler method in one time step (dotted).

Schematically, the algorithm is as follows:

  • Choose a step width $\tau$
  • Specify an initial condition $\vec{x}_1$
  • Set the counter $n$ to 1
  • While $t_n \le t_{max}$:
    • Evaluate the right-hand side of the DES, $\vec{F}_n$
    • Find the new positions and velocities via $\vec{x}_{n+1} = \vec{x}_n + \vec{F}_n\,\tau$
    • Increase the counter $n$ by 1

The result is a trajectory $\vec{x}_1, \vec{x}_2, \dots$, which approximates the (in general unknown) true solution $\vec{x}(t)$.

The local error is the truncation error per time step. It is of order $O(\tau^n)$, with $n = 2$ for the Euler method. The global error accumulates over the time interval of length $t_{max} = N\tau$ needed to calculate the trajectory, where $N$ is the number of time steps. An estimate of the maximal error is

$$\text{global error} \approx N \times (\text{local error}) = N \, O(\tau^n) = t_{max} \, O(\tau^{n-1}) \,.$$

The global error of the Euler method is thus of order $O(\tau)$.
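
This scaling can be checked with a small experiment. The following sketch (not part of the original text) integrates the decay equation $\dot{x} = -x$ with $x(0) = 1$ up to $t_{max} = 1$ using the Euler method and compares the result with the exact value $e^{-1}$; halving the step width roughly halves the global error:

import numpy as np


def euler_decay(tau, tmax=1.0):
    # integrate x' = -x with x(0) = 1 using the Euler method
    n_steps = int(round(tmax / tau))
    x = 1.0
    for _ in range(n_steps):
        x = x + tau * (-x)
    return x


for tau in (0.1, 0.05):
    error = abs(euler_decay(tau) - np.exp(-1.0))
    print(f"tau = {tau:5.3f}  global error = {error:.4e}")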

Example: The Planar Pendulum

Figure 8.3.

Figure 8.3.: Schematic depiction of a planar pendulum.

As an example we will discuss the planar pendulum. The pendulum consists of a point-shaped mass $m$ that is fixed to a massless rod of length $L$ (see fig. 8.3). The only degree of freedom of this system is the deflection angle $\Theta(t) \in [-\pi, \pi]$. The total energy is the sum of kinetic and potential energy

$$E = \frac{1}{2} m \left(\dot{x}^2 + \dot{y}^2\right) + m g y$$

with gravitational acceleration $g$. From fig. 8.3, the following relations can be extracted:

$$x = L \sin\Theta \,, \qquad y = -L \cos\Theta \,.$$

Taking the time derivative, and considering that $\dot{L} = 0$,

$$\dot{x} = L \dot{\Theta} \cos\Theta \,, \qquad \dot{y} = L \dot{\Theta} \sin\Theta \,.$$

This yields

$$\dot{x}^2 + \dot{y}^2 = L^2 \dot{\Theta}^2 \left(\sin^2\Theta + \cos^2\Theta\right) = L^2 \dot{\Theta}^2 \,.$$

Insertion into the energy equation finally yields

$$E = \frac{1}{2} m L^2 \dot{\Theta}^2 - m g L \cos\Theta \,.$$

This is the energy of the pendulum, expressed in terms of the deflection angle $\Theta$ and the corresponding angular velocity $\dot{\Theta}$. An equation of motion for the angle can be derived from energy conservation,

$$\dot{E} = m L^2 \dot{\Theta} \ddot{\Theta} + m g L \dot{\Theta} \sin\Theta \overset{!}{=} 0 \,.$$

This gives the equation of motion for the planar pendulum

$$\ddot{\Theta} = -\frac{g}{L} \sin\Theta \,.$$

In the approximation of small deflections, the problem can be solved analytically. With $\sin\Theta \approx \Theta$, the differential equation of the harmonic oscillator follows:

$$\ddot{\Theta} = -\frac{g}{L} \Theta \,.$$

We thus choose the ansatz

$$\Theta(t) = C_1 \cos(\omega t + C_2)$$

with amplitude $C_1$, phase $C_2$ and angular frequency $\omega$. Taking the second derivative with respect to time yields

$$\ddot{\Theta} = -\omega^2 \Theta \quad\Rightarrow\quad \omega = \sqrt{\frac{g}{L}} \,.$$

The well-known result is that, in the approximation of small deflections, the period of oscillation $T = 2\pi/\omega = 2\pi\sqrt{L/g}$ is independent of the deflection.

The general case of arbitrarily large deflections is not as easy to handle. We will solve this problem numerically. The analysis above shows that it is sensible to choose $\sqrt{L/g}$ as the natural unit of time, i.e.

$$t = \tilde{t} \, \sqrt{\frac{L}{g}} \,.$$

The important point is that $\tilde{t}$ is a dimensionless quantity. This choice of scaling yields

$$\frac{d^2\Theta}{dt^2} = \frac{d^2\Theta}{d\tilde{t}^2} \, \frac{g}{L} = -\frac{g}{L} \sin\Theta \,.$$

The equation of motion can thus be written with the scaled time as

$$\ddot{\Theta} = -\sin\Theta \,.$$

This equation of motion is free of both units and parameters! Introducing natural units thus not only prevents numerical range errors, but also simplifies the problem.

With the generalized velocity $\gamma = \dot{\Theta}$, the equation of motion can be written as a DES of first order:

$$\dot{\Theta} = \gamma$$

$$\dot{\gamma} = -\sin\Theta \,.$$

Applying the Euler method to these equations gives the update rule

$$\Theta_{n+1} = \Theta_n + \tau \, \gamma_n$$

$$\gamma_{n+1} = \gamma_n - \tau \, \sin\Theta_n \,.$$

The following program solves the equations of motion numerically with this scheme. For the sake of clarity, it is divided into functions.

import numpy as np
import matplotlib.pyplot as plt
import typing


def derive_pendulum(t: float, s: np.ndarray):
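    # right-hand side of the pendulum DES in natural units: s = [theta, gamma]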
    return np.array([s[1], -np.sin(s[0])])


def euler_integration(t: float, x: np.ndarray, tau: float, derivs: typing.Callable):
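    # one Euler step: x_{n+1} = x_n + tau * F(x_n, t_n)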
    return x + tau * derivs(t, x)


# set initial conditions and constants
try:
    theta0: float = float(input("initial angle = "))
except ValueError:
    theta0 = 90
    print(f"theta0 = {theta0}°")

x: np.ndarray = np.array([theta0 * np.pi / 180, 0])

try:
    tmax: float = float(input("time = "))
except ValueError:
    tmax = 100
    print(f"tmax = {tmax}")

try:
    tau: float = float(input("step width = "))
except ValueError:
    tau = 0.001
    print(f"tau = {tau}")

time: np.ndarray = np.linspace(
    start=0,
    stop=tmax,
    num=int(np.ceil(tmax / tau)),
    endpoint=True,
)

t_plot = np.zeros_like(time)
th_plot = np.zeros_like(time)

for i, t in enumerate(time):
    # remember angle and time
    t_plot[i] = t
    th_plot[i] = x[0] * 180 / np.pi

    x = euler_integration(t, x, tau, derive_pendulum)

# figure
plt.plot(t_plot, th_plot)
plt.xlabel("time")
plt.ylabel("theta")
plt.show()

Fig. 8.4(a) shows a solution of the equation of motion obtained with the Euler method with step width $\tau = 0.1$. We observe that the amplitude, and hence the energy, of the oscillation increases strongly with time. This violates energy conservation and has to be considered an artifact of the time discretization. The non-physical increase of energy can be slowed down by choosing a smaller step width. This is demonstrated in fig. 8.4(b) for the case of $\tau = 0.05$. Unfortunately, the deviation from the true solution is still considerable, despite the small step width.

Figure 8.4.

Figure 8.4.: Numerical solution of the pendulum dynamics with the Euler method. The initial angle is 10°, the initial velocity is 0. The step width is (a) $\tau = 0.1$ and (b) $\tau = 0.05$. Note the different scaling of the y-axes.

Runge-Kutta Method

Now we will discuss a method that is superior to the Euler method, since its error term is of higher order. The idea is to evaluate the temporal change (the derivative) of the variable at a sampling point that lies at the middle of the interval over which the integration step is to be performed. The actual integration step is then carried out with the derivative at this midpoint, and it is one order more precise than the Euler method, as we will show in the following.

We again consider the DES

$$\dot{\vec{x}} = \vec{F}(\vec{x}, t) \,.$$

In the following we will compare the Taylor series

$$\vec{x}(t+\tau) = \vec{x}(t) + \dot{\vec{x}}(t)\,\tau + \ddot{\vec{x}}(t)\,\frac{\tau^2}{2} + O(\tau^3)$$

with the method of Euler with half the step width

$$\vec{x}\!\left(t+\frac{\tau}{2}\right) = \vec{x}(t) + \vec{F}(\vec{x}, t)\,\frac{\tau}{2} \,.$$

It can be seen that

$$\vec{F}\!\left(\vec{x}\!\left(t+\frac{\tau}{2}\right), t+\frac{\tau}{2}\right) = \vec{F}(\vec{x},t) + \left.\frac{\partial \vec{F}}{\partial \vec{x}}\right|_{(\vec{x},t)} \vec{F}(\vec{x},t)\,\frac{\tau}{2} + \left.\frac{\partial \vec{F}}{\partial t}\right|_{(\vec{x},t)} \frac{\tau}{2} + O(\tau^2)$$

$$= \dot{\vec{x}}(t) + \left(\left.\frac{\partial \vec{F}}{\partial \vec{x}}\right|_{(\vec{x},t)} \dot{\vec{x}}(t) + \left.\frac{\partial \vec{F}}{\partial t}\right|_{(\vec{x},t)}\right) \frac{\tau}{2} + O(\tau^2)$$

$$= \dot{\vec{x}}(t) + \ddot{\vec{x}}(t)\,\frac{\tau}{2} + O(\tau^2) \,.$$

This yields the Runge-Kutta method of second order:

$$\vec{x}(t+\tau) = \vec{x}(t) + \vec{F}\!\left(\vec{x}\!\left(t+\frac{\tau}{2}\right), t+\frac{\tau}{2}\right) \tau + O(\tau^3)$$

$$\vec{x}\!\left(t+\frac{\tau}{2}\right) = \vec{x}(t) + \vec{F}(\vec{x}, t)\,\frac{\tau}{2} \,.$$

The truncation error is of third order in $\tau$, i.e. one order higher than for the Euler method. The scheme is illustrated in fig. 8.5.

Figure 8.5.

Figure 8.5.: Scheme of the second-order Runge-Kutta method.
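
A minimal sketch (not part of the original program) of one such second-order Runge-Kutta step, written with the same calling convention as the euler_integration function from the pendulum program above, so that it could serve as a drop-in replacement there:

import numpy as np
import typing


def rk2_integration(t: float, x: np.ndarray, tau: float, derivs: typing.Callable):
    # half step to the midpoint of the integration interval
    x_half = x + 0.5 * tau * derivs(t, x)
    # full step using the derivative evaluated at the midpoint
    return x + tau * derivs(t + 0.5 * tau, x_half)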

To better understand this method, we investigate a very simple differential equation

$$\dot{x} = x \,.$$

With the initial condition $x(0) = 1$, the solution is

$$x(t) = e^t \,.$$

Let us now consider a translation in time by $\tau$. We make use of the rules for exponents and construct the Taylor series up to fourth order (with an error of $O(\tau^5)$):

$$x(t+\tau) = e^{t+\tau} = e^t \, e^{\tau}$$

$$x(t+\tau) = x(t)\left(1 + \tau + \frac{\tau^2}{2} + \frac{\tau^3}{6} + \frac{\tau^4}{24} + O(\tau^5)\right) .$$

Comparing this with the result obtained by the method of Euler

$$x_{n+1} = x_n + \tau x_n = x_n (1 + \tau)$$

shows that the Euler method reproduces the exact solution up to the first order. Now we look at the second-order Runge-Kutta method, which for this equation reads

$$x_{n+1} = x_n + \tau \, x_{n+1/2}$$

$$x_{n+1/2} = x_n + \frac{\tau}{2} x_n \,.$$

Inserting the lower equation in the upper one yields

$$x_{n+1} = x_n + \tau x_n + \frac{\tau^2}{2} x_n$$

$$x_{n+1} = x_n \left(1 + \tau + \frac{\tau^2}{2}\right) .$$

Thus we see that the second-order Runge-Kutta method reproduces the exact solution up to the second order.
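
A quick numerical cross-check (a sketch, not part of the original text): for one step of size $\tau = 0.1$ applied to $\dot{x} = x$ with $x_n = 1$, the truncated series above can be compared with the exact factor $e^{\tau}$:

import numpy as np

tau = 0.1
x_n = 1.0

x_euler = x_n * (1.0 + tau)                # Euler: series up to first order
x_rk2 = x_n * (1.0 + tau + tau**2 / 2.0)   # second-order Runge-Kutta: series up to second order
x_exact = x_n * np.exp(tau)                # exact solution

print(f"Euler error: {abs(x_euler - x_exact):.2e}")
print(f"RK2 error:   {abs(x_rk2 - x_exact):.2e}")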

Runge-Kutta Method of Fourth Order

In complete analogy, the fourth-order Runge-Kutta method can be derived. We will skip this procedure and directly look at the result:

$$\vec{x}(t+\tau) = \vec{x}(t) + \left(\vec{F}_1 + 2\vec{F}_2 + 2\vec{F}_3 + \vec{F}_4\right) \frac{\tau}{6} + O(\tau^5)$$

$$\vec{F}_1 = \vec{F}(\vec{x}, t)$$

$$\vec{F}_2 = \vec{F}\!\left(\vec{x} + \frac{\tau}{2}\vec{F}_1,\; t + \frac{\tau}{2}\right)$$

$$\vec{F}_3 = \vec{F}\!\left(\vec{x} + \frac{\tau}{2}\vec{F}_2,\; t + \frac{\tau}{2}\right)$$

$$\vec{F}_4 = \vec{F}\!\left(\vec{x} + \tau\vec{F}_3,\; t + \tau\right)$$
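
This scheme translates directly into code. Below is a minimal sketch (not part of the original program) of one fourth-order Runge-Kutta step with the same calling convention as euler_integration above:

import numpy as np
import typing


def rk4_integration(t: float, x: np.ndarray, tau: float, derivs: typing.Callable):
    # the four auxiliary slopes F1 ... F4
    f1 = derivs(t, x)
    f2 = derivs(t + 0.5 * tau, x + 0.5 * tau * f1)
    f3 = derivs(t + 0.5 * tau, x + 0.5 * tau * f2)
    f4 = derivs(t + tau, x + tau * f3)
    # weighted average of the slopes
    return x + (f1 + 2.0 * f2 + 2.0 * f3 + f4) * tau / 6.0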

The truncation error is of fifth order in the step width $\tau$. Let us again consider the simple differential equation $\dot{x} = x$. The auxiliary quantities $F_i$ can be found effortlessly:

$$F_1 = x$$

$$F_2 = x + \frac{\tau}{2} F_1 = x + \frac{\tau}{2} x$$

$$F_3 = x + \frac{\tau}{2} F_2 = x + \frac{\tau}{2} x + \frac{\tau^2}{4} x$$

$$F_4 = x + \tau F_3 = x + \tau x + \frac{\tau^2}{2} x + \frac{\tau^3}{4} x \,.$$

Inserting and rearranging yields

$$x(t+\tau) = x(t)\left(1 + \tau + \frac{\tau^2}{2} + \frac{\tau^3}{6} + \frac{\tau^4}{24}\right) .$$

We can see that the fourth-order Runge-Kutta method reproduces the exact solution up to the fourth order of the series. We thus expect that this method needs far fewer time steps than the Euler method. Fig. 8.6(a) confirms this: it shows the result for the planar pendulum with a small deflection angle of 10°, solved with the fourth-order Runge-Kutta method. The case of large deflection angles (here 170°) is shown in fig. 8.6(b). From the comparison with fig. 8.6(a) we can deduce that the period of oscillation depends on the deflection of the pendulum.

Figure 8.6.

Figure 8.6.: Numerical solution of the dynamics of the planar pendulum via the fourth-order Runge-Kutta method. The step width is $\tau = 0.2$. The initial angle is (a) 10° and (b) 170°.

Adaptive Methods

Thinking about the previously introduced methods, two questions arise:

  • Given a DES, how should the step width $\tau$ be chosen?
  • Can the step width $\tau$ be chosen dynamically when the variables of the DES show less or more temporal variation? Such a case arises, for example, for the pendulum with a large deflection angle near 180° (see the following figure). With a flexible step width, the calculation could be accelerated considerably.

Figure 8.7.

Figure 8.7.: Fourth-order Runge-Kutta method: pendulum with initial angle 179.5° and $\tau = 0.2$.

Both questions lead to the so-called adaptive methods, in which the step width $\tau$ is adjusted in every time step. The procedure is as follows: first compute $\vec{x}(t+\tau)$ with one “big” time step $\tau$. This gives a rough approximation $\vec{x}_g$. Then calculate the far better approximation $\vec{x}_k$ with two successive “small” time steps $\tau/2$.

Figure 8.8.

With these two approximations, a relative error $\Delta$ can be estimated,

$$\Delta = \frac{|\vec{x}_k - \vec{x}_g|}{|\vec{x}_k| + \varepsilon}$$

with the machine precision $\varepsilon$. We want this error to be approximately equal to a given value $\Delta_0$. To achieve this, we recall that the local error of the fourth-order Runge-Kutta method is proportional to $\tau^5$. Thus, $\Delta$ is proportional to $\tau^5$ and, correspondingly, $\Delta_0 \propto \tau_0^5$. It follows for the estimated step width:

$$\tau_0 = S_1 \, \tau \left(\frac{\Delta_0}{\Delta}\right)^{1/5}$$

Here, $S_1 < 1$ is a “safety factor”, which ensures that the estimated step width is not too big. A second “safety factor” $S_2 > 1$ is introduced to prevent too large a change of the step width $\tau \to \tau_{new}$:

$$\tau_{new} = \begin{cases} \tau_0 & \text{if } \tau/S_2 < \tau_0 < S_2\,\tau \\ S_2\,\tau & \text{if } \tau_0 > S_2\,\tau \\ \tau/S_2 & \text{if } \tau_0 < \tau/S_2 \end{cases}$$

Feasible values are $S_1 = 0.9$ and $S_2 = 1.25$. By the way: $\vec{x}_k$ should be used as the initial condition for the next time step, since this value is more accurate than $\vec{x}_g$.
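
A minimal sketch (not part of the original text) of this step-width control. It repeats the rk4_integration step from the sketch above so that the example is self-contained; delta0 is the desired relative error, and for a vector-valued state the largest component is used in the error estimate (one possible choice):

import numpy as np
import typing


def rk4_integration(t: float, x: np.ndarray, tau: float, derivs: typing.Callable):
    # one fourth-order Runge-Kutta step (as sketched above)
    f1 = derivs(t, x)
    f2 = derivs(t + 0.5 * tau, x + 0.5 * tau * f1)
    f3 = derivs(t + 0.5 * tau, x + 0.5 * tau * f2)
    f4 = derivs(t + tau, x + tau * f3)
    return x + (f1 + 2.0 * f2 + 2.0 * f3 + f4) * tau / 6.0


def adaptive_step(t: float, x: np.ndarray, tau: float, derivs: typing.Callable, delta0: float = 1e-6):
    s1, s2 = 0.9, 1.25          # "safety factors" S1 and S2
    eps = np.finfo(float).eps   # machine precision

    # rough approximation x_g: one "big" step of size tau
    x_g = rk4_integration(t, x, tau, derivs)
    # better approximation x_k: two successive "small" steps of size tau / 2
    x_half = rk4_integration(t, x, 0.5 * tau, derivs)
    x_k = rk4_integration(t + 0.5 * tau, x_half, 0.5 * tau, derivs)

    # relative error estimate Delta (largest component for vector-valued x)
    delta = np.max(np.abs(x_k - x_g)) / (np.max(np.abs(x_k)) + eps)
    # estimated step width tau_0, clamped to the interval [tau/S2, S2*tau]
    tau_0 = s1 * tau * (delta0 / max(delta, eps)) ** 0.2
    tau_new = min(max(tau_0, tau / s2), s2 * tau)

    # use the more accurate value x_k as the initial condition for the next step
    return x_k, tau_new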

Python Solver

import numpy as np
import scipy.integrate
import matplotlib.pyplot as plt


def derive_pendulum(t, s):
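    # right-hand side of the pendulum DES in natural units: s = [theta, gamma]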
    return [s[1], -np.sin(s[0])]


# set initial conditions and constants
try:
    theta0: float = float(input("initial angle = "))
except ValueError:
    theta0 = 90
    print(f"theta0 = {theta0}°")

x: np.ndarray = np.array([theta0 * np.pi / 180, 0])

tmin: float = 0.0
try:
    tmax: float = float(input("time = "))
except ValueError:
    tmax = 100
    print(f"tmax = {tmax}")

try:
    tau: float = float(input("step width = "))
except ValueError:
    tau = 0.1
    print(f"tau = {tau}")

time: np.ndarray = np.linspace(
    start=tmin,
    stop=tmax,
    num=int(np.ceil((tmax - tmin) / tau)),
    endpoint=True,
)

sol = scipy.integrate.solve_ivp(
    fun=derive_pendulum, t_span=[tmin, tmax], y0=x, t_eval=time, method="RK45"
)

plt.plot(sol.t, sol.y[0] / np.pi * 180)
plt.xlabel("time")
plt.ylabel("theta")
plt.show()

scipy.integrate.solve_ivp

scipy.integrate.solve_ivp(fun, t_span, y0, method='RK45', t_eval=None, dense_output=False, events=None, vectorized=False, args=None, **options)

Solve an initial value problem for a system of ODEs.

This function numerically integrates a system of ordinary differential equations given an initial value:

$$\frac{dy}{dt} = f(t, y) \,, \qquad y(t_0) = y_0$$

Here $t$ is a 1-D independent variable (time), $y(t)$ is an N-D vector-valued function (state), and an N-D vector-valued function $f(t, y)$ determines the differential equations. The goal is to find $y(t)$ approximately satisfying the differential equations, given an initial value $y(t_0) = y_0$.

Some of the solvers support integration in the complex domain, but note that for stiff ODE solvers, the right-hand side must be complex-differentiable (satisfy Cauchy-Riemann equations). To solve a problem in the complex domain, pass y0 with a complex data type. Another option always available is to rewrite your problem for real and imaginary parts separately.

method:

  • ‘RK45’ (default): Explicit Runge-Kutta method of order 5(4). The error is controlled assuming accuracy of the fourth-order method, but steps are taken using the fifth-order accurate formula (local extrapolation is done). A quartic interpolation polynomial is used for the dense output. Can be applied in the complex domain.
  • ‘RK23’: Explicit Runge-Kutta method of order 3(2). The error is controlled assuming accuracy of the second-order method, but steps are taken using the third-order accurate formula (local extrapolation is done). A cubic Hermite polynomial is used for the dense output. Can be applied in the complex domain.
  • ‘DOP853’: Explicit Runge-Kutta method of order 8. Python implementation of the “DOP853” algorithm originally written in Fortran. A 7-th order interpolation polynomial accurate to 7-th order is used for the dense output. Can be applied in the complex domain.
  • ‘Radau’: Implicit Runge-Kutta method of the Radau IIA family of order 5. The error is controlled with a third-order accurate embedded formula. A cubic polynomial which satisfies the collocation conditions is used for the dense output.
  • ‘BDF’: Implicit multi-step variable-order (1 to 5) method based on a backward differentiation formula for the derivative approximation. A quasi-constant step scheme is used and accuracy is enhanced using the NDF modification. Can be applied in the complex domain.
  • ‘LSODA’: Adams/BDF method with automatic stiffness detection and switching. This is a wrapper of the Fortran solver from ODEPACK.

The source code is Open Source and can be found on GitHub.
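
For completeness, a short sketch (not part of the original program) of how the pendulum problem from above could be solved with a different method and explicit error tolerances; rtol, atol and dense_output are standard parameters of scipy.integrate.solve_ivp, and sol.sol is the continuous interpolant returned when dense_output=True:

import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt


def derive_pendulum(t, s):
    # right-hand side of the pendulum DES in natural units: s = [theta, gamma]
    return [s[1], -np.sin(s[0])]


# integrate with the 8th-order DOP853 scheme and tight tolerances
sol = solve_ivp(
    fun=derive_pendulum,
    t_span=[0.0, 100.0],
    y0=[170.0 * np.pi / 180.0, 0.0],  # initial angle 170 degrees, initial velocity 0
    method="DOP853",
    rtol=1e-8,
    atol=1e-10,
    dense_output=True,
)

# evaluate the continuous solution on a fine grid and plot the angle in degrees
t_fine = np.linspace(0.0, 100.0, 2000)
plt.plot(t_fine, sol.sol(t_fine)[0] * 180.0 / np.pi)
plt.xlabel("time")
plt.ylabel("theta")
plt.show()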