
On the solution of high order stable time integration methods

Abstract

Evolution equations arise in many important practical problems. They are frequently stiff, i.e. they involve fast, mostly exponentially decreasing and/or oscillating components. To handle such problems, one must use proper forms of implicit numerical time-integration methods. In this paper, we consider two methods of high order of accuracy, one for parabolic problems and the other for hyperbolic type of problems. For parabolic problems, it is shown how the solution rapidly approaches the stationary solution. It is also shown how the arising quadratic polynomial algebraic systems can be solved efficiently by iteration and use of a proper preconditioner.

1 Introduction

Evolution equations arise in many important practical problems, such as for parabolic and hyperbolic partial differential equations. After application of a semi-discrete Galerkin finite element or a finite difference approximation method, a system of ordinary differential equations,

$$M\frac{du}{dt} + Au(t) = f(t), \quad t > 0, \qquad u(0) = u_0,$$

arises. Here, $u, f \in \mathbb{R}^n$, $M$ is a mass matrix, and $M$, $A$ are $n \times n$ matrices. For a finite difference approximation, $M = I$, the identity matrix.

In the above applications, the order $n$ of the system can be very large. Under reasonable assumptions on the given source function $f$, the system is stable, i.e. its solution is bounded for all $t > 0$ and converges to a fixed stationary solution as $t \to \infty$, independently of the initial value $u_0$. This holds if $A$ is a normal matrix, that is, has a complete eigenvector space, and has eigenvalues with positive real parts. This condition holds for parabolic problems, where the eigenvalues of $A$ are real and positive. In more involved problems, the matrix $A$ may have complex eigenvalues with arbitrarily large imaginary parts.

Clearly, not all numerical time-integration methods preserve the above stability properties. Unless the time-step is sufficiently small, explicit time-integration methods do not converge and/or give unphysical oscillations in the numerical solution. Even with sufficiently small time-steps, algebraic errors may increase unboundedly due to the large number of time-steps. The simplest example where the stability holds is the implicit Euler method,

$$\tilde{u}(t+\tau) + \tau A\tilde{u}(t+\tau) = \tilde{u}(t) + \tau f(t+\tau), \quad t = \tau, 2\tau, \ldots, \qquad \tilde{u}(0) = \tilde{u}_0,$$

where τ>0 is the time-step. Here, the eigenvalues of the inverse of the resulting matrix in the corresponding system,

$$(I + \tau A)\tilde{u}(t+\tau) = \tilde{u}(t) + \tau f(t+\tau)$$

equal $(1 + \tau\lambda)^{-1}$ and satisfy the stability condition,

$$|\mu(\lambda)| = |(1 + \tau\lambda)^{-1}| < 1, \quad \lambda \in \sigma(A).$$

Here, $\sigma(A)$ denotes the set of eigenvalues of $A$. To damp out initial transients in the solution more quickly (these arise, for instance, because the initial value may not satisfy the boundary conditions given in the parabolic problem), one should preferably have eigenvalues of the inverse of the discrete matrix $B$ that satisfy $|\mu(\lambda)| \to 0$ as $\lambda \to \infty$. This holds for the implicit Euler method, where

$$B = I + \tau A \quad\text{and}\quad \mu(\lambda) = (1 + \tau\lambda)^{-1}.$$
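
To make the stepping concrete, the following is a minimal sketch (our illustration, not from the paper) of the implicit Euler method for $M\,du/dt + Au = f(t)$; since $\tau$ is fixed, the matrix $M + \tau A$ can be factorized once and reused in every step.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def implicit_euler(M, A, f, u0, tau, n_steps):
    """Implicit Euler for M u' + A u = f(t), u(0) = u0.

    Each step solves (M + tau*A) u_new = M u_old + tau*f(t + tau);
    the factorization of M + tau*A is computed once and reused.
    """
    lu, piv = lu_factor(M + tau * A)
    u, t = u0.copy(), 0.0
    for _ in range(n_steps):
        t += tau
        u = lu_solve((lu, piv), M @ u + tau * f(t))
    return u

# Example: 1D heat equation with M = I (finite differences), stiff for small h
n = 50
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
u0 = np.random.rand(n)
u = implicit_euler(np.eye(n), A, lambda t: np.zeros(n), u0, tau=0.01, n_steps=100)
```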

This method is only first-order accurate, i.e. its global time discretization error is $O(\tau)$. Therefore, to get a sufficiently small discretization error, one must choose very small time-steps, which makes the method computationally expensive and also causes a stronger increase of round-off errors. However, there exist stable time-integration methods of arbitrarily high order. They are of implicit Runge-Kutta quadrature type (see e.g. [15]) and belong to the class of A-stable methods, i.e. the eigenvalues $\mu(B^{-1})$ of the corresponding matrix $B$, where $B\tilde{u}(t+\tau) = \tilde{u}(t) + \tau\tilde{f}(t)$ and $\tilde{f}(t)$ is a linear function of $f(t)$ at the quadrature points in the interval $[t, t+\tau]$, satisfy $|\mu(B^{-1})| < 1$ for all normal matrices $M^{-1}A$ with $\operatorname{Re}(\lambda) > 0$. The highest order achieved, $O(\tau^{2m})$, occurs for Gauss quadrature, where $m$ equals the number of quadrature points within each time interval.

To satisfy the second, desirable condition,

$$\lim_{\lambda\to\infty}|\mu(\lambda)| = 0,$$

one can use a special subclass of such methods, based on Radau quadrature; see, e.g. [1, 5]. The discretization error is here only one order less, $O(\tau^{2m-1})$. For linear problems, all such stable methods lead to rational polynomial approximation matrices $B$, and hence the need to solve quadratic polynomial equations. For stable methods, it turns out that the roots of these polynomials are complex.

In Section 2, a preconditioning method is described that is very efficient when solving such systems, without the need to factorize the quadratic polynomials into first-order factors, thereby avoiding the need to use complex arithmetic. Section 3 discusses the special case where $m = 2$. It also shows how the general case, where $m > 2$, can be handled.

Section 4 deals with the use of implicit Runge-Kutta methods of Gauss quadrature type for solving hyperbolic systems of Hamiltonian type.

Section 5 presents a method to derive time discretization errors.

In Section 6, some illustrating numerical tests are shown. The paper ends with concluding remarks.

2 Preconditioners for quadratic matrix polynomials

From the introduction, it follows that it is of importance to use an efficient solution method for quadratic matrix polynomials and not to factorize them into first-order factors when this results in complex-valued factors. For a method to solve complex-valued systems in real arithmetic, see, e.g. [6]. Here, we use a particular method that is suitable for the arising quadratic matrix polynomials.

Consider then the matrix polynomial,

$$B = M + aA + b^2AM^{-1}A.$$
(1)

We assume that $M$ is spd and that $|a| < 2b$; the latter implies that the first-order factors of $B$ are complex. Systems with $B$ will be solved by iteration. As a preconditioner, we use the matrix

$$C_\alpha = (M + \alpha A)M^{-1}(M + \alpha A),$$

where $\alpha > 0$ is a parameter. We assume that $A$ is a normal matrix, that is, has a full eigenvector space, and further that the symmetric part $A + A^T$ of $A$ is spd. To estimate the eigenvalues of $C_\alpha^{-1}B$, we write

$$(C_\alpha x, x) - (Bx, x) = (2\alpha - a)(Ax, x) + (\alpha^2 - b^2)(AM^{-1}Ax, x).$$

After a two-sided multiplication with $M^{-1/2}$, we get

$$(\tilde{C}_\alpha\tilde{x}, \tilde{x}) - (\tilde{B}\tilde{x}, \tilde{x}) = (2\alpha - a)(\tilde{A}\tilde{x}, \tilde{x}) + (\alpha^2 - b^2)(\tilde{A}^2\tilde{x}, \tilde{x}),$$
(2)

where $\tilde{C}_\alpha = M^{-1/2}C_\alpha M^{-1/2} = (I + \alpha\tilde{A})^2$, etc., and $\tilde{x} = M^{1/2}x$. Note that, by similarity, $C_\alpha^{-1}B$ and $\tilde{C}_\alpha^{-1}\tilde{B}$ have the same eigenvalues.

We are interested in cases where $\tilde{A}$ may have large eigenvalues. (In our application, $\tilde{A}$ involves a time-step factor $\tau$, but since we use higher order time-discretization methods, $\tau$ will not be very small and cannot damp the inverse of some power of the space-discretization parameter $h$ that also occurs in $\tilde{A}$.) Therefore, we choose $\alpha = b$. Note that this implies that $2\alpha - a > 0$.

The resulting relation (2) can now be written

$$(\tilde{x}, \tilde{x}) - (\tilde{C}_\alpha^{-1}\tilde{B}\tilde{x}, \tilde{x}) = (2\alpha - a)(\tilde{C}_\alpha^{-1}\tilde{A}\tilde{x}, \tilde{x}),$$
(3)

where

$$(\tilde{C}_\alpha^{-1}\tilde{A}\tilde{x}, \tilde{x}) = \bigl((I + \alpha\tilde{A})^{-2}\tilde{A}\tilde{x}, \tilde{x}\bigr).$$

Since $2\alpha - a > 0$, the real parts of the eigenvalues of $\tilde{C}_\alpha^{-1}\tilde{B}$ are bounded above by 1. To find estimates of the eigenvalues $\lambda(\mu)$ of $\tilde{C}_\alpha^{-1}\tilde{B}$, let $(\mu, z)$ be eigensolutions of $\tilde{A}$, i.e. let

$$\tilde{A}z = \mu z, \quad |z| = 1.$$

It follows from (3) that, for $\tilde{x} = z$,

$$\lambda(\mu) = (\tilde{C}_\alpha^{-1}\tilde{B}z, z) = 1 - \Bigl(1 - \frac{a}{2\alpha}\Bigr)\frac{2\alpha\mu}{1 + 2\alpha\mu + (\alpha\mu)^2} = 1 - \Bigl(1 - \frac{a}{2\alpha}\Bigr)\frac{1}{1 + \frac{1}{2}\bigl(\alpha\mu + \frac{1}{\alpha\mu}\bigr)}.$$

We write $\alpha\mu = \mu_0e^{i\varphi}$, so $\frac{1}{2}(\alpha\mu + \frac{1}{\alpha\mu}) = \frac{1}{2}(\mu_0 + \frac{1}{\mu_0})\cos\varphi + \frac{i}{2}(\mu_0 - \frac{1}{\mu_0})\sin\varphi$, where $i$ is the imaginary unit. Note that $\mu_0 > 0$, so $\frac{1}{2}(\mu_0 + \frac{1}{\mu_0}) \ge 1$. Since, by assumption, the real part of $\mu$ is positive, it holds that $|\varphi| \le \varphi_0 < \pi/2$. A computation shows that the values of the factor $\frac{1}{1 + \frac{1}{2}(\alpha\mu + \frac{1}{\alpha\mu})}$ are located in a disc in the complex plane with center at $\delta/2$ and radius $\delta/2$, where $\delta = 1/(1 + \cos\varphi_0)$.

Hence, $\lambda(\mu)$ is located in a disc with center at $1 - \frac{1}{2}\bigl(1 - \frac{a}{2\alpha}\bigr)\delta$ and radius $\frac{1}{2}\bigl(1 - \frac{a}{2\alpha}\bigr)\delta$.

For $\varphi_0 = 0$, i.e. for real eigenvalues of $\tilde{A}$, we have $\delta = 1/2$ and $1 \ge \lambda(\mu) \ge \frac{3}{4} + \frac{1}{8}\frac{a}{\alpha}$.
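
The clustering of the preconditioned eigenvalues is easy to check numerically. The following sketch (our illustration, with a randomly chosen spd $M$ and, for simplicity, a diagonal $A$ with a wide spectrum, so that $\tilde{A}$ has real positive eigenvalues) forms $B$ and $C_\alpha$ with $\alpha = b$ and verifies that the eigenvalues of $C_\alpha^{-1}B$ lie in a small disc to the left of 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
Q = rng.standard_normal((n, n))
M = Q @ Q.T + n * np.eye(n)                 # spd mass-like matrix
A = np.diag(rng.uniform(1e-2, 1e4, n))      # wide spectrum mimics a stiff operator

a, b = 1.0, 1.0                             # |a| < 2b: complex first-order factors
alpha = b                                   # the choice alpha = b from the text
Minv = np.linalg.inv(M)
B = M + a * A + b**2 * A @ Minv @ A         # quadratic matrix polynomial (1)
C = (M + alpha * A) @ Minv @ (M + alpha * A)

lam = np.linalg.eigvals(np.linalg.solve(C, B))
print(lam.real.min(), lam.real.max())       # clustered in (0, 1], close to 1
```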

3 A stiffly stable time integration method

Consider a system of ordinary differential equations,

$$M\frac{dx}{dt} + \sigma(t)\bigl(Ax(t) - f(t)\bigr) = 0, \quad t > 0, \qquad x(0) = x_0,$$
(4)

where $x, f \in \mathbb{R}^n$, $\sigma(t) \ge \sigma_0 > 0$, and $M$, $A$ are $n \times n$ matrices, where $M$ is assumed to be spd and the symmetric part of $A$ is positive semidefinite. In the practical applications that we consider, $M$ corresponds to a mass matrix and $A$ to a second-order diffusion or diffusion-convection matrix. Hence, $n$ is large. Under reasonable assumptions on the source function $f$, such a system is stable for all $t$ and its solution approaches a finite function, independent of the initial value $x_0$, as $t \to \infty$.

Such stability results hold for more general problems, such as for a nonlinear parabolic problem,

$$\frac{\partial u}{\partial t} + F(t, u) = 0, \quad\text{where } F(t, u) = -\nabla\cdot\bigl(a(t, u, \nabla u)\nabla u\bigr) - f(t, u), \quad x \in \Omega,\ t > 0,$$
(5)

where $f: (0, \infty) \times V \to V'$ and $V$ is a reflexive Banach space.

For suitable functions $a(\cdot)$ and $f(\cdot)$, $F$ is monotone, i.e.

$$\bigl(F(t, u) - F(t, v), u - v\bigr) \ge \rho(t)\|u - v\|^2, \quad u, v \in V,\ t > 0.$$
(6)

Here, $\rho: (0, \infty) \to \mathbb{R}$, $\rho(t) \ge 0$, and $(\cdot,\cdot)$, $\|\cdot\|$ denote the scalar product and the corresponding norm in $L_2(\Omega)$, respectively. In this case, one can easily derive the bound

$$\frac{1}{2}\frac{d}{dt}\bigl(\|u - v\|^2\bigr) = -\bigl(F(t, u) - F(t, v), u - v\bigr) \le -\rho(t)\|u - v\|^2,$$

where $u$, $v$ are solutions of (5) corresponding to different initial values. Consequently, making use of the Gronwall lemma, we obtain

$$\bigl\|u(t) - v(t)\bigr\| \le \exp\Bigl(-\int_0^t\rho(s)\,ds\Bigr)\bigl\|u(0) - v(0)\bigr\| \le \bigl\|u(0) - v(0)\bigr\|, \quad t > 0.$$

Hence, (5) is stable in this case.

If $F$ is strongly monotone (or dissipative), i.e. (6) is valid with $\rho(t) \ge \varrho_0 > 0$, then

$$\bigl\|u(t) - v(t)\bigr\| \le \exp(-t\varrho_0)\bigl\|u(0) - v(0)\bigr\| \to 0, \quad t \to \infty,$$

i.e. (5) is asymptotically stable. In particular, the above holds for the test problem considered in Section 6.

For large eigenvalues of $M^{-1}A$, such a system is stiff and can have fast decreasing and possibly oscillating components. This means that the eigenvalues have large real parts and possibly also large imaginary parts. To handle this, one needs stable numerical time-integration methods that do not contain corresponding increasing components. For $\sigma(t) = 1$ in (4), this amounts to proper approximations of the matrix exponential function $\exp(-tE)$, $E = M^{-1}A$, by a rational function,

$$R_m(tE) = Q_m(tE)^{-1}P_m(tE),$$

where

$$\bigl\|R_m(tE)\bigr\| \le 1, \quad t > 0, \text{ for } \operatorname{Re}\{\lambda_E\} > 0,$$

and $\lambda_E$ denotes the eigenvalues of $E$. Furthermore, to cope with problems where $\arg(\lambda_E) \le \alpha < \frac{\pi}{2}$, but arbitrarily close to $\pi/2$, one needs A-stable methods; see e.g. [3, 7, 8]. To get stability for all times and time steps, one requires $\lim_{|\lambda|\to\infty}|R_m(\lambda)| \le c < 1$, where preferably $c = 0$. Such methods are called L-stable (Lambert) and stiffly A-stable [3], respectively.

An important class of methods which are stiffly A-stable is a particular class of the implicit Runge-Kutta methods; see [1, 3, 5]. Such methods correspond to rational polynomial approximations of the matrix exponential function with a denominator of higher degree than the numerator. Examples of such methods are based on Radau quadrature, where the quadrature points are the zeros of $\tilde{P}_m(\xi) - \tilde{P}_{m-1}(\xi)$, where $\{\tilde{P}_k\}$ are the Legendre polynomials, orthogonal on the interval $(0, 1)$; see e.g. [1] and references therein. Note that $\xi = 1$ is a root for all $m \ge 1$. The case $m = 1$ is identical to the implicit Euler method.
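
For illustration (our sketch, not part of the paper), the Radau nodes can be computed directly from this characterization using Legendre polynomials shifted to $(0, 1)$; for $m = 2$ it reproduces the points $1/3$ and $1$ used below:

```python
import numpy as np
from numpy.polynomial import legendre

def radau_points(m):
    """Zeros of P~_m - P~_{m-1}, with P~_k the Legendre polynomials shifted to (0, 1)."""
    c = np.zeros(m + 1)
    c[m], c[m - 1] = 1.0, -1.0          # P_m - P_{m-1} in the Legendre basis on (-1, 1)
    roots = legendre.legroots(c)        # roots lie in (-1, 1]; they are real
    return np.sort((roots.real + 1) / 2)  # map x -> (x + 1)/2 onto (0, 1]

print(radau_points(2))                  # [0.3333... 1.0], i.e. xi_1 = 1/3, xi_2 = 1
```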

Following [5], we consider here the next simplest case, where m=2, for the numerical solution of (4) over a time interval [t,t+τ].

In this case, the quadrature points (for a unit interval) are $\xi_1 = 1/3$, $\xi_2 = 1$, and the numerical solutions $x_1$, $x_2$ at $t + \tau/3$ and $t + \tau$ satisfy

$$\begin{bmatrix} M + 5\sigma_1\tilde{A} & -\sigma_2\tilde{A} \\ 9\sigma_1\tilde{A} & M + 3\sigma_2\tilde{A} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} Mx_0 + \frac{\tau}{12}(5f_1 - f_2) \\ Mx_0 + \frac{\tau}{4}(3f_1 + f_2) \end{bmatrix},$$
(7)

where $x_0$ is the solution at time $t$, $\sigma_1 = \sigma(t + \tau/3)$, $\sigma_2 = \sigma(t + \tau)$, $f_1 = f(t + \tau/3)$, $f_2 = f(t + \tau)$, and $\tilde{A} = \frac{\tau}{12}A$. The global discretization error of the $x_2$-component for this method is $O(\tau^3)$, i.e. it is a third-order method, and it is stiffly A-stable even for arbitrarily strong variations of the coefficient $\sigma(t)$. This can be compared with the trapezoidal and implicit midpoint methods, which are only second-order accurate and not stiffly stable.

The system in (7) can be solved via its Schur complement. Thereby, to avoid an inner system with the matrix $M + 5\sigma_1\tilde{A}$, we derive a modified form of the Schur complement system that involves only inner systems with the matrix $M$. To this end, but only for the derivation of the method, we first scale the system with the block diagonal matrix $\begin{bmatrix} M^{-1} & 0 \\ 0 & M^{-1} \end{bmatrix}$ to get

$$\begin{bmatrix} I + 5\sigma_1G & -\sigma_2G \\ 9\sigma_1G & I + 3\sigma_2G \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} x_0 + \frac{\tau}{12}(5\tilde{f}_1 - \tilde{f}_2) \\ x_0 + \frac{\tau}{4}(3\tilde{f}_1 + \tilde{f}_2) \end{bmatrix},$$

where $G = \frac{\tau}{12}M^{-1}A$ and $\tilde{f}_i = M^{-1}f_i$, $i = 1, 2$. The Schur complement system for $x_2$ is multiplied with $(I + 5\sigma_1G)$. Using commutativity, we then get

$$\bigl[(I + 5\sigma_1G)(I + 3\sigma_2G) + 9\sigma_1\sigma_2G^2\bigr]x_2 = (I + 5\sigma_1G)\Bigl[x_0 + \frac{\tau}{4}(3\tilde{f}_1 + \tilde{f}_2)\Bigr] - 9\sigma_1G\Bigl[x_0 + \frac{\tau}{12}(5\tilde{f}_1 - \tilde{f}_2)\Bigr]$$

or

$$\bigl[I + (5\sigma_1 + 3\sigma_2)G + 24\sigma_1\sigma_2G^2\bigr]x_2 = (I - 4\sigma_1G)x_0 + \frac{\tau}{4}(3\tilde{f}_1 + \tilde{f}_2) + 2\tau\sigma_1G\tilde{f}_2.$$

Hence,

$$Bx_2 = \Bigl(M - \frac{\tau}{3}\sigma_1A\Bigr)x_0 + \frac{\tau}{4}M(3\tilde{f}_1 + \tilde{f}_2) + \frac{1}{6}\tau^2\sigma_1A\tilde{f}_2,$$

where

$$B = M + \frac{\tau}{12}(5\sigma_1 + 3\sigma_2)A + \frac{\tau^2}{6}\sigma_1\sigma_2AM^{-1}A.$$
(8)

For higher order Radau quadrature methods, the corresponding matrix polynomial $M^{-1}B$ is an $m$th-order polynomial. By the fundamental theorem of algebra, one can factorize it into factors of at most second degree. These can be solved in sequential order. Alternatively, using the method referred to in Remark 3.2, the solution components can be computed concurrently.

Each second-order factor can be preconditioned by the method in Section 2. The ability to factorize $Q_m(tE)$ into second-order factors and solve the arising systems as such two-by-two block matrix systems means that one only has to solve first-order systems. This is of importance if, for instance, $M$ and $A$ are large sparse band matrices, since then one avoids the increasing bandwidths of matrix products, and one can solve systems with linear combinations of $M$ and $A$ more efficiently than for higher-order polynomial combinations. Furthermore, this enables one to keep the matrices in element-by-element form (see, e.g. [9]), and it is in general not necessary to store the matrices $M$ and $A$. The arising inner system can be solved by some inner iteration method.

The problem with a direct factorization into first-order factors is that complex matrix factors appear. This occurs for the matrix in (8) for a ratio $\frac{\sigma_1}{\sigma_2}$ in the interval

$$3\,\frac{11 - \sqrt{96}}{25} < \frac{\sigma_1}{\sigma_2} < 3\,\frac{11 + \sqrt{96}}{25}.$$
(9)

Therefore, it is more efficient to keep the second order factors and instead solve the corresponding systems by preconditioned iterations. Thereby, the preconditioner involves only first order factors. As shown in Section 2, a very efficient preconditioner for the matrix B in (8) is

$$C = C_\alpha = (M + \alpha\tau A)M^{-1}(M + \alpha\tau A),$$
(10)

where $\alpha > 0$ is a parameter. As already shown in [5], for the above particular application the following holds.

Proposition 3.1 Let B, C be as defined in (8) and (10) and assume that M is spd and A is spsd. Then letting

$$\alpha = \max\bigl\{\sqrt{\sigma_1\sigma_2/6},\ (5\sigma_1 + 3\sigma_2)/24\bigr\},$$

it holds

$$\kappa\bigl(C^{-1}B\bigr) \le \max_{i=1,2}\delta_i^{-1},$$

where

$$1 \ge \delta_1 = \frac{(5\sigma_1 + 3\sigma_2)/24}{\alpha} \ge \frac{\sqrt{10}}{4}, \qquad 1 \ge \delta_2 = \frac{\sigma_1\sigma_2/6}{\alpha^2}.$$

If $0.144 \le \frac{\sigma_1}{\sigma_2} \le 2.496$, then $\delta_2 = 1$ and $\delta_1 \ge \sqrt{5/8}$.

The spectral condition number is then bounded by

$$\kappa\bigl(C^{-1}B\bigr) \le \sqrt{8/5} \approx 1.265.$$

If σ 1 = σ 2 , then

$$\kappa\bigl(C^{-1}B\bigr) \le \sqrt{3/2} \approx 1.225.$$

Proof Let $(u, v)$ be the $\ell_2$ product of $u, v \in \mathbb{R}^n$. We have

$$(Cx, x) - (Bx, x) = 2\alpha\tau(1 - \delta_1)(Ax, x) + \alpha^2\tau^2(1 - \delta_2)\bigl(AM^{-1}Ax, x\bigr), \quad \forall x \in \mathbb{R}^n.$$

It follows that

$$(Bx, x) \le (Cx, x).$$

By the arithmetic-geometric means inequality, we have

$$\delta_1 \ge \frac{2\sqrt{15\sigma_1\sigma_2}}{24\alpha} \ge \frac{1}{12}\sqrt{90} = \frac{\sqrt{10}}{4}.$$
(11)

A computation shows that

$$\sigma_1\sigma_2/6 \ge \Bigl(\frac{5\sigma_1 + 3\sigma_2}{24}\Bigr)^2$$

for $0.144 \le \xi \le 2.496$, where $\xi = \sigma_1/\sigma_2$. Further, a computation shows that $\delta_1 \ge \sqrt{5/8}$, which is in accordance with the lower bound in (11). Since

$$(Cx, x) \ge 2\alpha\tau(Ax, x) + \alpha^2\tau^2\bigl(AM^{-1}Ax, x\bigr),$$

it follows that

$$1 - \frac{(Bx, x)}{(Cx, x)} \le 1 - \delta_1$$

or

$$\frac{(Bx, x)}{(Cx, x)} \ge \delta_1 \ge \sqrt{5/8}.$$

For $\sigma_1 = \sigma_2$, a computation shows that

$$\delta_1 = \frac{1}{3}\sqrt{6} = \sqrt{2/3}.$$

 □

We conclude that the condition number is very close to its ideal unit value 1, leading to very few iterations. For instance, at most 5 conjugate gradient iterations suffice for a relative accuracy of $10^{-6}$.
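
As an illustration of these iteration counts (a sketch under the simplifying assumption $M = I$, as in the finite difference experiments of Section 6, with parameter values of our own choosing), one can solve $Bx_2 = b$ with $B$ from (8) by the conjugate gradient method preconditioned with $C_\alpha$ from (10); applying $C_\alpha^{-1}$ costs two solves with the factorized first-order factor $I + \alpha\tau A$:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, h = 400, 1.0 / 401
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc') / h**2
tau, s1, s2 = 0.05, 1.0, 1.4                 # sigma_1, sigma_2; ratio within (9)

# B from (8) with M = I
B = (sp.identity(n) + tau / 12 * (5 * s1 + 3 * s2) * A
     + tau**2 / 6 * s1 * s2 * (A @ A)).tocsc()
alpha = max(np.sqrt(s1 * s2 / 6), (5 * s1 + 3 * s2) / 24)   # Proposition 3.1
F = spla.splu((sp.identity(n) + alpha * tau * A).tocsc())   # factor once, reuse

# C^{-1} v = (I + alpha*tau*A)^{-1} (I + alpha*tau*A)^{-1} v   (M = I in (10))
Cinv = spla.LinearOperator((n, n), matvec=lambda v: F.solve(F.solve(v)))

b = np.random.rand(n)
x, info = spla.cg(B, b, M=Cinv)             # converges in a handful of iterations
print(info, np.linalg.norm(B @ x - b))
```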

Remark 3.1 High order implicit Runge-Kutta methods and their discretization error estimates can be derived using order tree methods as described in [1] and [10].

For an early presentation of implicit Runge-Kutta methods, see [2] and also [4], where the method was called a global integration method to indicate its capability, for large values of $m$, to use few, or even just one, time discretization steps. It was also shown that the coefficient matrix formed by the quadrature coefficients has a dominating lower triangular part, enabling the use of a matrix splitting and a Richardson iteration method. It can be of interest to point out that the Radau method for $m = 2$ can be described in an alternative way, using Radau quadrature for the whole time step interval combined with a trapezoidal method for the shorter interval.

Namely, let $\frac{du}{dt} + f(t, u) = 0$, $t_{k-1} < t < t_k$. Then Radau quadrature on the interval $(t_{k-1}, t_k)$ has quadrature points $t_{k-1} + \tau/3$, $t_k$ and coefficients $b_1 = 3/4$, $b_2 = 1/4$, which results in the relation

$$\tilde{u}_1 - \tilde{u}_0 + \frac{3\tau}{4}f(\tilde{t}_{1/3}, \tilde{u}_{1/3}) + \frac{\tau}{4}f(\tilde{t}_1, \tilde{u}_1) = 0,$$

where $\tilde{u}_1$, $\tilde{u}_{1/3}$, $\tilde{u}_0$ denote the corresponding approximations of $u$ at $\tilde{t}_1 = t_{k-1} + \tau$, $\tilde{t}_{1/3} = t_{k-1} + \tau/3$ and $t_{k-1}$, respectively.

This equation is coupled with an equation based on quadrature

$$u(t_{k-1} + \tau/3) - u(t_{k-1}) + \int_{t_{k-1}}^{t_k}f(t, u)\,dt - \int_{t_{k-1}+\tau/3}^{t_k}f(t, u)\,dt = 0,$$

which, using the stated quadrature rules, results in

$$\tilde{u}_{1/3} - \tilde{u}_0 + \frac{3\tau}{4}f(\tilde{t}_{1/3}, \tilde{u}_{1/3}) + \frac{\tau}{4}f(\tilde{t}_1, \tilde{u}_1) - \frac{1}{2}\,\frac{2\tau}{3}\bigl[f(\tilde{t}_{1/3}, \tilde{u}_{1/3}) + f(\tilde{t}_1, \tilde{u}_1)\bigr] = 0,$$

that is,

$$\tilde{u}_{1/3} - \tilde{u}_0 + \frac{5\tau}{12}f(\tilde{t}_{1/3}, \tilde{u}_{1/3}) - \frac{\tau}{12}f(\tilde{t}_1, \tilde{u}_1) = 0.$$

Remark 3.2 The arising system in a high order method, involving $q \ge 2$ quadratic polynomial factors, can be solved sequentially in the order they appear. Alternatively (see, e.g. [11], Exercise 2.31), one can use a method based on solving a matrix polynomial equation $P_{2q}(A)x = b$ as $x = \sum_{k=1}^{2q}\frac{1}{P_{2q}'(r_k)}x_k$, $x_k = (A - r_kI)^{-1}b$, where $\{r_k\}_1^{2q}$ is the set of zeros of the polynomial, and it is assumed that $A$ has no eigenvalues in this set. (This holds in our applications.) Then, combining pairs of terms corresponding to complex conjugate roots $r_k$, quadratic polynomials arise for the computation of the corresponding solution components. It is seen that in this method the solution components can be computed concurrently.
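
The following sketch (our illustration of the method cited in Remark 3.2) verifies the partial fraction formula numerically for a monic quadratic with complex conjugate roots; the shifted solves are independent, which is what permits concurrent computation. In real arithmetic, the two conjugate terms would be combined into one quadratic solve:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
A = np.diag(rng.uniform(1.0, 10.0, n))      # spectrum away from the roots below
b = rng.standard_normal(n)

p = np.array([1.0, -1.0, 2.5])              # monic p(z) = z^2 - z + 2.5, complex roots
r = np.roots(p)                             # r and its complex conjugate
dp = np.polyval(np.polyder(p), r)           # p'(r_k)

# Independent shifted solves; these could run concurrently
xs = [np.linalg.solve(A - rk * np.eye(n), b) for rk in r]
x = sum(xk / dk for xk, dk in zip(xs, dp)).real

PA = A @ A - A + 2.5 * np.eye(n)            # p(A)
print(np.linalg.norm(PA @ x - b))           # ~ machine precision
```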

Remark 3.3 Differential algebraic equations (DAE) arise in many important problems; see, for instance [10, 12]. The simplest example of a DAE takes the form

$$\begin{cases} \dfrac{du}{dt} = f(t, u, v), \\ g(t, u, v) = 0, \end{cases} \quad t > 0,$$

with $u(0) = u_0$, $v(0) = v_0$, and it is normally assumed that the initial values satisfy the constraint equation, i.e.

$$g(0, u_0, v_0) = 0.$$

If $\det(\frac{\partial g}{\partial v}) \neq 0$ in a sufficiently large set around the solution, one can formally eliminate the second part of the solution to form a differential equation in standard form,

$$\frac{du}{dt} = f\bigl(t, u, v(u)\bigr), \quad t > 0, \qquad u(0) = u_0.$$

Such a DAE is said to have index one, see e.g. [13]. It can be seen to be a limit case of the system

$$\begin{cases} \dfrac{du}{dt} = f(t, u, v), \\ \dfrac{dv}{dt} = \dfrac{1}{\varepsilon}g(t, u, v), \end{cases}$$

where $\varepsilon > 0$ and $\varepsilon \to 0$.

Hence, such a DAE can be considered as an infinitely stiff differential equation problem. For strongly or infinitely stiff problems, an order reduction phenomenon can occur. This follows since some high order error terms in the error expansion (cf. Section 5) are multiplied by (infinitely) large factors, leading to an order reduction for some methods. Heuristically, this can be understood to occur for the Gauss integration form of IRK, but it does not occur for the stiffly stable variants, such as those based on Radau quadrature. For further discussions of this, see, e.g. [10, 13].

4 High order integration methods for Hamiltonian systems

Another important application of high order time integration methods occurs for Hamiltonian systems. Such systems occur in mechanics and particle physics, for instance. As an introduction, consider the conservation of energy principle. To this end, consider a mechanical system of k point masses and its associated Lagrangian functional,

$$L = K - V = \sum_{i=1}^k\frac{1}{2}m_i|\dot{x}_i|^2 - V(x_1, \ldots, x_k),$$

where $K$ is the kinetic energy and $V$ the potential energy. Here, $x_i = (x_i, y_i, z_i)$ denotes the Cartesian coordinates of the $i$th point mass $m_i$.

The configuration strives to minimize the total energy. The corresponding Euler-Lagrange equations then become $\frac{\delta L}{\delta x_i} = 0$, that is,

$$m_i\ddot{x}_i = -\frac{\partial V}{\partial x_i}, \quad i = 1, 2, \ldots, k.$$
(12)

We consider conservative systems, i.e. mechanical systems for which the total forces on the elements of the system are related to the potential $V: \mathbb{R}^{3k} \to \mathbb{R}$ according to

$$F_i = -\frac{\partial V}{\partial x_i}.$$

This means that the Euler-Lagrange equation (12) is identical to Newton's classical law

$$m_i\ddot{x}_i = F_i, \quad i = 1, 2, \ldots, k.$$

Let $p_i = m_iv_i$ be the momentum. Then

$$K = \sum_{i=1}^k\frac{1}{2}\frac{p_i^2}{m_i}.$$

A mechanical system can be described by general coordinates

$$q = (q_1, \ldots, q_d),$$

i.e. not necessarily Cartesian coordinates, but angles, lengths along a curve, etc. The Lagrangian takes the form $L(q, \dot{q}, t)$. If $q$ is determined to satisfy

$$\min_q\int_a^bL(q, \dot{q}, t)\,dt,$$

then the motion of the system is described by the Lagrange equation,

$$\frac{d}{dt}\frac{\partial L}{\partial\dot{q}}(q, \dot{q}, t) = \frac{\partial L}{\partial q}(q, \dot{q}, t).$$
(13)

Letting here

$$p_k = \frac{\partial L}{\partial\dot{q}_k}(q, \dot{q}), \quad k = 1, 2, \ldots, n,$$

be the momentum variables; using the transformation $(q, \dot{q}) \mapsto (q, p)$, we can write (13) via the Hamiltonian,

$$H(p, q, t) = \sum_{j=1}^np_j\dot{q}_j - L\bigl(q, \dot{q}(q, p, t), t\bigr).$$

For a mechanical system with potential energy a function of configuration only and kinetic energy K given by a quadratic form

$$K = \frac{1}{2}\dot{q}^TG(q)\dot{q},$$

where G is an spd matrix, possibly depending on q, we get

$$p = G(q)\dot{q}, \qquad \dot{q} = G^{-1}(q)p,$$
(14)

and

$$H(p, q, t) = p^TG^{-1}(q)p - \frac{1}{2}p^TG^{-1}(q)p + V(q) = \frac{1}{2}p^TG^{-1}(q)p + V(q) = K(p, q) + V(q),$$

which equals the total energy of the system.

The corresponding Euler-Lagrange equations now become

$$\begin{cases} \dot{p} = -\dfrac{\partial H}{\partial q}, \\[1ex] \dot{q} = \dfrac{\partial H}{\partial p}, \end{cases}$$
(15)

and are referred to as the Hamiltonian system. This follows from

$$\frac{\partial H}{\partial p} = \dot{q}^T + p^T\frac{\partial\dot{q}}{\partial p} - \frac{\partial L}{\partial\dot{q}}\frac{\partial\dot{q}}{\partial p} = \dot{q}^T, \qquad \frac{\partial H}{\partial q} = p^T\frac{\partial\dot{q}}{\partial q} - \frac{\partial L}{\partial q} - \frac{\partial L}{\partial\dot{q}}\frac{\partial\dot{q}}{\partial q} = -\frac{\partial L}{\partial q},$$

which, since $\frac{d}{dt}\bigl(\frac{\partial L}{\partial\dot{q}}\bigr) = \frac{\partial L}{\partial q}$ implies $\dot{p} = \frac{\partial L}{\partial q}$, are hence equivalent to the Lagrange equations.

By (15), it holds

$$\frac{d}{dt}H(p, q) = \frac{\partial H}{\partial p}\dot{p} + \frac{\partial H}{\partial q}\dot{q} = 0,$$
(16)

that is, the Hamiltonian function H(p,q) is a first integral for the system (15).

The flow $\varphi_t: U \to \mathbb{R}^{2n}$ of a Hamiltonian system is the mapping that describes the evolution of the solution in time, i.e. $\varphi_t(p_0, q_0) = (p(t, p_0, q_0), q(t, p_0, q_0))$, where $p(t, p_0, q_0)$, $q(t, p_0, q_0)$ is the solution of the system for the initial values $p(0) = p_0$, $q(0) = q_0$.

We consider now a Hamiltonian with a quadratic first integral in the form

$$H(y) = y^TCy, \quad y = (p, q),$$
(17)

where C is a symmetric matrix. For the solution of the Hamiltonian system (15), we shall use an implicit Runge-Kutta method based on Gauss quadrature.

The $s$-stage Runge-Kutta method applied to an initial value problem $\dot{y} = f(t, y)$, $y(t_0) = y_0$, is defined by

$$\begin{cases} k_i = f\bigl(t_0 + c_i\tau,\ y_0 + \tau\sum_{j=1}^sa_{ij}k_j\bigr), & i = 1, 2, \ldots, s, \\ y_1 = y_0 + \tau\sum_{i=1}^sb_ik_i, \end{cases}$$
(18)

where $c_i = \sum_{j=1}^sa_{ij}$; see e.g. [1, 4]. The familiar implicit midpoint rule is the special case where $s = 1$. Here, $c_1, \ldots, c_s$ are the zeros of the shifted Legendre polynomial $\frac{d^s}{dx^s}\bigl(x^s(1-x)^s\bigr)$. For a linear problem, this results in a system which can be solved by the quadratic polynomial decomposition and the preconditioned iterative solution method presented in Section 2.

If u(t) is a polynomial of degree s, then (18) takes the form

$$u(t_0) = y_0, \qquad \dot{u}(t_0 + c_i\tau) = f\bigl(t_0 + c_i\tau, u(t_0 + c_i\tau)\bigr), \quad i = 1, \ldots, s,$$
(19)

and $u_1 = u(t_0 + \tau)$.

For the Hamiltonian (17), it holds

$$\frac{d}{dt}H\bigl(y(t)\bigr) = 2y^T(t)C\dot{y}(t),$$

and it follows from (16) that

$$y_1^TCy_1 - y_0^TCy_0 = 2\int_{t_0}^{t_0+\tau}u(t)^TC\dot{u}(t)\,dt.$$

Since the integrand is a polynomial of degree $2s - 1$, it is integrated exactly by the $s$-stage Gaussian quadrature formula. Therefore, since

$$u(t_0 + c_i\tau)^TC\dot{u}(t_0 + c_i\tau) = u(t_0 + c_i\tau)^TCf\bigl(u(t_0 + c_i\tau)\bigr) = 0,$$

it follows that the energy quadratic forms $y_i^TCy_i$ are conserved.

This is an important property of Hamiltonian systems, and methods preserving it are referred to as symplectic. For further references on symplectic integrators, see [10].
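
A minimal numerical illustration (ours, not from the paper): the one-stage Gauss method, i.e. the implicit midpoint rule, applied to the harmonic oscillator $\dot{p} = -q$, $\dot{q} = p$ conserves the quadratic Hamiltonian $H = \frac{1}{2}(p^2 + q^2)$ to rounding error over arbitrarily many steps:

```python
import numpy as np

S = np.array([[0.0, -1.0],     # y' = S y with y = (p, q): p' = -q, q' = p
              [1.0,  0.0]])
tau, nsteps = 0.1, 100000
I = np.eye(2)
# Midpoint step y1 = y0 + tau*S*(y0 + y1)/2, i.e. the Cayley transform of S
step = np.linalg.solve(I - 0.5 * tau * S, I + 0.5 * tau * S)

y = np.array([1.0, 0.0])
H0 = 0.5 * (y @ y)
for _ in range(nsteps):
    y = step @ y
print(abs(0.5 * (y @ y) - H0))  # ~1e-16: the quadratic first integral is preserved
```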

5 Discretization error estimates

Error estimation methods for parabolic and hyperbolic problems can differ greatly. Parabolic problems are characterized by the monotonicity property (6) while for hyperbolic problems a corresponding conservation property,

$$\bigl(F(t, u) - F(t, v), u - v\bigr) = 0, \quad t > 0,\ u, v \in V,$$

holds, implying

$$\bigl\|u(t) - v(t)\bigr\| = \bigl\|u(0) - v(0)\bigr\|, \quad t \ge 0.$$
(20)

Hence, there is no decrease of errors occurring at earlier time steps. On the other hand, the strong monotonicity property for parabolic problems implies that errors at earlier time steps decrease exponentially as time evolves.

For parabolic type problems, a derivation of the discretization error for a convex combination of the implicit Euler method and the midpoint method, referred to as the θ-method, gives the following result (see [14]). Similar estimates can also be derived for the Radau quadrature method; see, e.g. [10].

The major result in [14] is the following.

Let $u_t^{(s)} = \frac{\partial^s(u(t))}{\partial t^s}$. Consider the problem $u_t = -F(t, u(t))$, where $u$ belongs to some function space $V$, and the corresponding truncation error,

$$R_\theta(t, u) = F\bigl(\bar{t}, \bar{u}(t)\bigr) + \tau^{-1}\int_t^{t+\tau}u_t(s)\,ds = \tau^{-1}\bigl[u(t+\tau) - u(t)\bigr] - u_t(\bar{t}) + F\bigl(\bar{t}, \bar{u}(t)\bigr) - F\bigl(\bar{t}, u(\bar{t})\bigr),$$

where $\bar{t} = \theta t + (1 - \theta)(t + \tau)$, $\bar{u}(t) = \theta u(t) + (1 - \theta)u(t + \tau)$, $0 \le \theta \le 1$.

If $u \in C^3(V)$, then a Taylor expansion shows that

$$R_\theta(t, u) = \frac{1}{24}\tau^2u_t^{(3)}(t_3) + \Bigl(\frac{1}{2} - \theta\Bigr)\tau u_t^{(2)}(t_2) + \frac{1}{2}\theta(1 - \theta)\tau^2F_y\bigl(\bar{t}, \tilde{u}(\bar{t})\bigr)u_t^{(2)}(t_1), \quad t < t_i < t + \tau,\ i = 1, 2, 3,$$
(21)

where $\tilde{u}(\bar{t})$ takes values in a tube of radius $\|\bar{u}(t) - u(\bar{t})\|$ about the solution $u(t)$.

It follows that if

$$\bigl\|F_u\bigl(\bar{t}, \tilde{u}(\bar{t})\bigr)u_t^{(2)}\bigl(t_1(t)\bigr)\bigr\| \le C_1$$
(22)

and $\theta = \frac{1}{2} - O(\tau)$, then

$$\bigl\|R_\theta(t, u)\bigr\| = O(\tau^2).$$

Under the above conditions, the discretization error $e(t) = u(t) - v(t)$, where the approximate solution $v$ is defined by

$$v(t+\tau) - v(t) + \tau F\bigl(\bar{t}, \bar{v}(t)\bigr) = 0, \quad t = 0, \tau, 2\tau, \ldots,$$

with $v(0) = u(0)$, satisfies:

(i) if $F$ is strongly monotone and $\frac{1}{2} - |O(\tau)| \le \theta \le \theta_0$, then $\|e(t)\| \le \varrho_0^{-1}C\tau^2$, $t > 0$;

(ii) if $F$ is monotone (or conservative) and $\frac{1}{2} - |O(\tau)| \le \theta \le \frac{1}{2}$, then $\|e(t)\| \le tC\tau^2$, $t > 0$.

Here, $C$ depends on $u_t^{(2)}$ and $u_t^{(3)}$, but is independent of the stiffness of the problem under the appropriate conditions stated above.

If the solution $u$ is smooth, so that $F_uu_t^{(2)}$ has also only smooth components, then $\|F_uu_t^{(2)}\|$ may be much smaller than $\|F_u\|\|u_t^{(2)}\|$, showing that the stiffness, i.e. factors $\|F_u\| \gg 1$, does not enter in the error estimate.

In many problems, we can expect that $F_uu_t^{(2)}$ is of the same order as $u_t^{(3)}$, i.e. the first and last terms in (21) have the same order. In particular, this holds for a linear problem $u_t + Au = 0$, where $u_t^{(3)} = -A^3u = -F_uu_t^{(2)}$.

It is seen from (20) that for hyperbolic (conservative) problems like the Hamiltonian problem in Section 4, the discretization error grows at least linearly with $t$, and likely faster if the solution is not sufficiently smooth. It may then be necessary to control the error by coupling the numerical time-integration method with an adaptive time step control. We present here such a method, based on backward integration at each time-step using the adjoint operator. The use of adjoint operators in error estimates goes back to the classical Aubin-Nitsche $L_2$-lifting method used in boundary value problems to derive discretization error estimates in the $L_2$ norm. It has also been used for error estimates in initial value problems; see e.g. [7].

Assume that the monotonicity assumption (20) holds. We show first a nonlinear (monotone) stability property, called B-stability, that holds for the numerical solution of implicit Runge-Kutta methods based on Gauss quadrature points. It goes back to a scientific note in [15]; see also [16].

Let $\tilde{u}$, $\tilde{v}$ be two approximate solutions of $u' = f(u, t)$, $t > 0$, extended to polynomials of degree $m$ from their pointwise values at $t_{k,i}$ in the interval $[t_{k-1}, t_k]$. Let

$$\Psi(t) = \frac{1}{2}\bigl\|\tilde{u}(t) - \tilde{v}(t)\bigr\|^2.$$

Then, since by (18) $\tilde{u}(t)$ and $\tilde{v}(t)$ satisfy the differential equation at the quadrature points, by (19) it holds that

$$\Psi'(t_{k,i}) = \bigl(\tilde{u}'(t_{k,i}) - \tilde{v}'(t_{k,i}),\ \tilde{u}(t_{k,i}) - \tilde{v}(t_{k,i})\bigr) = \bigl(f(\tilde{u}(t_{k,i})) - f(\tilde{v}(t_{k,i})),\ \tilde{u}(t_{k,i}) - \tilde{v}(t_{k,i})\bigr) \le 0,$$

$i = 1, 2, \ldots, m$, where $\{t_{k,i}\}_{i=1}^m$ is the set of quadrature points. Since $\Psi'(t)$ is a polynomial of degree $2m - 1$, Gauss quadrature is exact, so

$$\Psi(t_k) - \Psi(t_{k-1}) = \int_{t_{k-1}}^{t_k}\Psi'(s)\,ds = \tau\sum_{i=1}^mb_i\Psi'(t_{k,i}) \le 0.$$

Here, b i >0 are the quadrature coefficients.

Hence,

$$\bigl\|\tilde{u}(t_k) - \tilde{v}(t_k)\bigr\| \le \bigl\|\tilde{u}(t_{k-1}) - \tilde{v}(t_{k-1})\bigr\| \le \cdots \le \bigl\|\tilde{u}(0) - \tilde{v}(0)\bigr\|, \quad k = 1, 2, \ldots.$$

Since $\Psi^{(2m)}(t) \ge 0$, this monotonicity property can be seen to hold also for the Radau quadrature method.

We present now a method for adaptive a posteriori error control for the initial value problem

$$u'(t) = \sigma(t)f\bigl(u(t)\bigr), \quad t > 0, \qquad u(0) = u_0,$$
(23)

where $u(t) \in \mathbb{R}^n$ and $f(u(t)) = -Au(t) + \tilde{f}(t)$.

For the implicit Runge-Kutta method with approximate solution $\tilde{u}(t)$, it holds that

$$\tilde{u}(t_k) = \tilde{u}(t_{k-1}) + \int_{t_{k-1}}^{t_k}\sigma(t)f\bigl(\tilde{u}(t)\bigr)\,dt, \quad k = 1, 2, \ldots,$$

where $\tilde{u}(t)$ is a piecewise polynomial of degree $m$.

The corresponding residual equals

$$R\bigl(\tilde{u}(t)\bigr) = \tilde{u}'(t) - \sigma(t)f\bigl(\tilde{u}(t)\bigr).$$

By the property of implicit Runge-Kutta methods, it is orthogonal, i.e.

$$\int_{t_{k-1}}^{t_k}\bigl(\tilde{u}'(t) - \sigma(t)f(\tilde{u}(t))\bigr)\cdot v\,dt = 0, \quad k = 0, 1, \ldots,$$
(24)

to all polynomials $v$ of degree $m$. Here, the dot indicates the vector product in $\mathbb{R}^n$. The discretization error equals $e(t) = u(t) - \tilde{u}(t)$, $t > 0$. The error estimation will be based on backward integration of the adjoint operator problem,

$$\begin{cases} \varphi'(t) = \sigma(t)A^T\varphi(t), & t_{k-1} < t < t_k, \\ \varphi(t_k) = e(t_k). \end{cases}$$
(25)

Note that $\sigma(t)Ae(t) = -\sigma(t)\bigl(f(u(t)) - f(\tilde{u}(t))\bigr)$. It holds that

$$\bigl|e(t_k)\bigr|^2 = \bigl|e(t_k)\bigr|^2 + \int_{t_{k-1}}^{t_k}e\cdot\bigl(\varphi' - \sigma(t)A^T\varphi\bigr)\,dt,$$

so by integration by parts, we get

$$\bigl|e(t_k)\bigr|^2 = \int_{t_{k-1}}^{t_k}\bigl(e' + \sigma(t)Ae\bigr)\cdot\varphi\,dt + e(t_{k-1})\cdot\varphi(t_{k-1}).$$

Here

$$e' + \sigma(t)Ae = u' + \sigma(t)\bigl(Au - \tilde{f}(t)\bigr) - \bigl(\tilde{u}' + \sigma(t)(A\tilde{u} - \tilde{f}(t))\bigr) = -\tilde{u}' + \sigma(t)f(\tilde{u}) = -R(\tilde{u}).$$

Hence,

$$\bigl|e(t_k)\bigr|^2 = -\int_{t_{k-1}}^{t_k}R(\tilde{u})\cdot\varphi\,dt + e(t_{k-1})\cdot\varphi(t_{k-1}).$$

Here, we can use the Galerkin orthogonality property (24) to get

$$\bigl|e(t_k)\bigr|^2 - \bigl|e(t_{k-1})\cdot\varphi(t_{k-1})\bigr| \le \min_{\tilde{\varphi}}\Bigl|\int_{t_{k-1}}^{t_k}R(\tilde{u})\cdot(\varphi - \tilde{\varphi})\,dt\Bigr|,$$

where φ ˜ is a polynomial of degree m.

Since $\varphi(t_k) = e(t_k)$, it follows that

$$\bigl|e(t_k)\bigr| \le \frac{|\varphi(t_{k-1})|}{|\varphi(t_k)|}\bigl|e(t_{k-1})\bigr| + \min_{\tilde{\varphi}}\Bigl|\int_{t_{k-1}}^{t_k}R(\tilde{u})\cdot\frac{\varphi - \tilde{\varphi}}{|\varphi(t_k)|}\,dt\Bigr|,$$

and from $\varphi'(t) = \sigma(t)A^T\varphi(t)$ and $\mu(A^T) = \mu(A) = \max_i\operatorname{Re}\lambda_i(A) = 0$, it follows that

$$\bigl|\varphi(t)\bigr| = e^{\int_t^{t_k}\mu(A)\sigma(s)\,ds}\bigl|\varphi(t_k)\bigr| = \bigl|\varphi(t_k)\bigr|.$$

Hence,

$$\bigl|e(t_k)\bigr| \le \bigl|e(t_{k-1})\bigr| + \min_{\tilde{\varphi}}\Bigl|\int_{t_{k-1}}^{t_k}R(\tilde{u})\cdot\frac{\varphi - \tilde{\varphi}}{|\varphi(t_k)|}\,dt\Bigr|.$$

Under sufficient regularity assumptions, the last term can be bounded by $C\tau^{2m+1}$. Hence, the discretization error grows linearly with time,

$$\bigl|e(t_k)\bigr| \le Ct_k\tau^{2m}, \quad k = 0, 1, \ldots,$$

i.e. the implicit Runge-Kutta method based on Gaussian quadrature, applied to hyperbolic (conservative) problems, has order $2m$.

6 A numerical test example

We consider the linear parabolic problem,

$$\frac{\partial u}{\partial t} + \sigma(t)(-\Delta u + b\cdot\nabla u - f) = 0, \quad t > 0,$$
(26)

in the unit square domain $\Omega = [0, 1]^2$ with the boundary conditions

$$\begin{cases} u = 0 & \text{on the parts } y = 0,\ y = 1, \\ \dfrac{\partial u}{\partial\nu} + u = g & \text{on the parts } x = 0,\ x = 1. \end{cases}$$
(27)

As the initial function $u_0$, we choose a tent-like function with $u_0 = 1$ at the center of $\Omega$ and $u_0 = 0$ on $\partial\Omega$; see Figure 1.

Figure 1: Initial function.

Here, $\sigma(t) = 1 + \frac{2}{5}\sin k\pi t$, where $k = 1, 2, \ldots$, $k \le \frac{1}{\tau}$, is a parameter used to test the stability of the method with respect to oscillating coefficients, and $\tau$ is the time step used in the numerical solution of (26). Note that this function $\sigma(t)$ satisfies the conditions on the ratio $\frac{\sigma_1}{\sigma_2}$ from (9). We let $f(x, y) \equiv 2e^x$.

Further, $b$ is a vector satisfying $b \ge 0$. We choose $b = [\ell, 0]$, where $\ell$ is a parameter, possibly $\gg 1$.

After a finite element or finite difference approximation, a system of the form (4) arises. For a finite difference approximation, $M = I$, the identity matrix. The Laplacian operator is approximated with a nine-point difference scheme, and we use an upwind discretization of the convection term. In the outer corner points of the domain, we use the boundary conditions $u_x + u = 0$ for $x = 0$ and $-u_x + u = 0$ for $x = 1$.

The time discretization is given by the implicit Runge-Kutta method with the Radau quadrature for $m = 2$; see Section 3. For comparison, we also consider $m = 1$, i.e. the implicit Euler method, in some experiments. For solving the time-discretized problems, we use the GMRES method with the preconditioners from Section 2 and with the tolerance $10^{-10}$. Let us note that GMRES needs 5-6 iterations for this tolerance. The problem is implemented in Matlab.

The primary aim is to show how the time-discretization errors decrease and how fast the numerical approximation of (26)-(27) approaches its stationary value, i.e. the corresponding numerical solution to the stationary problem

$$\begin{cases} -\Delta\hat{u} + b\cdot\nabla\hat{u} = 2e^x & \text{in } \Omega, \\ \hat{u} = 0 & \text{on the parts } y = 0,\ y = 1, \\ \dfrac{\partial\hat{u}}{\partial\nu} + \hat{u} = g & \text{on the parts } x = 0,\ x = 1. \end{cases}$$
(28)

6.1 Experiments with a known and smooth stationary solution

If we let

$$g(y) = \begin{cases} 2y(1 - y) & \text{for } x = 0, \\ 0 & \text{for } x = 1, \end{cases}$$

then the solution to (28) satisfies

$$\hat{u}(x, y) = e^xy(1 - y).$$

First, we will investigate the influence of the space discretization error on the stationary problem (28). To this end, we use the relative error in the Euclidean norm:

$$e_h = \frac{\|\hat{u}_h - \hat{u}\|_2}{\|\hat{u}\|_2}.$$

Here, $\hat{u}$, $\hat{u}_h$ denote the vectors representing the exact and numerical solutions to (28) at the nodal points, respectively. The error estimates in dependence on $\ell$ and $h$ are found in Table 1. It is seen that the error decay is $O(h)$. This is caused by the use of the first-order upwind approximation of the convection term.

Table 1 The error estimates in dependence on $\ell$ and $h$

In Figures 2 and 3, the numerical stationary solutions for $\ell = 1$ and $\ell = 20$, respectively, are depicted. The discretization parameter is $h = 1/50$.

Figure 2: Numerical stationary solution for $\ell = 1$.

Figure 3: Numerical stationary solution for $\ell = 20$.

Now we will investigate how fast the numerical solution to (26)-(27) approaches the numerical solution to (28) in dependence on $\tau$. We fix $k = 10$ and search for the smallest time $T$ for which

$$\frac{\|u_h(T) - \hat{u}_h\|_2}{\|\hat{u}_h\|_2} < 10^{-6},$$

where the vectors $\hat{u}_h$ and $u_h(T)$ represent the numerical solution to (28) and the numerical solution to (26)-(27) at time $T$, respectively. The results for various $\ell$ and $h$ are given in Table 2. We can observe that the dependence of the results on $h$ is small. For smaller $\ell$, the final time does not depend on $\tau$, while for larger $\ell$, the dependence on $\tau$ is more significant.

Table 2 Values of time T in dependence on h and τ

Finally, we investigate how the time-discretization error decreases in dependence on $\tau$ at a fixed, relatively small time $T = 1/8$. We consider five different time-discretization parameters: $\tau_1 = T$, $\tau_2 = T/2$, $\tau_3 = T/4$, $\tau_4 = T/8$, and $\tau_5 = T/16$. We compare the maximal differences between the vectors $u_i(T)$ and $u_{i+1}(T)$, $i = 1, \ldots, 4$, where $u_i(T)$ represents the numerical solution to (26)-(27) at time $T$ for the time-discretization parameter $\tau_i$, $i = 1, \ldots, 5$. So, we investigate the following error:

$$e_i = \bigl\|u_{i+1}(T) - u_i(T)\bigr\|_\infty, \quad i = 1, \ldots, 4,$$

whose values are found in Tables 3-5. If we let $k = 10$, $\ell = 20$ and use various $h$, we obtain the results written in Table 3. It is seen that the influence of $h$ on the time discretization error is small for the larger time steps, but more noticeable for the smaller time steps, when the time and space discretization errors are of the same order.

Table 3 Time discretization error at time T=1/8 in dependence on h and τ

If we let $k = 10$, $h = 1/50$, $\ell = 1$ and $\ell = 20$, we obtain the results in Table 4. We can see that the investigated time-discretization error decreases faster for $\ell = 20$ than for $\ell = 1$.

Table 4 Time discretization error at time $T = 1/8$ in dependence on $\ell$ and $\tau$

If we let $k = 0$, $k = 10$, $h = 1/50$, and $\ell = 20$, we obtain the results in Table 5.

Table 5 Time discretization error at time T=1/8 in dependence on k and τ

The error estimates from Tables 3-5 indicate that the expected error estimate O( τ 3 ) holds.
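
The observed convergence order can be read off tabulated differences by comparing successive refinements: since $u_i(T) - u_{i+1}(T) = O(\tau_i^3)$ for a third-order method, the ratio $e_i/e_{i+1}$ should approach $2^3 = 8$. A small sketch (with hypothetical error values standing in for a row of Tables 3-5):

```python
import numpy as np

# Hypothetical values of e_i = ||u_{i+1}(T) - u_i(T)|| for tau_i = T / 2**(i-1)
e = np.array([2.1e-3, 2.8e-4, 3.6e-5, 4.6e-6])
orders = np.log2(e[:-1] / e[1:])   # observed order between successive refinements
print(orders)                      # values near 3 confirm the O(tau^3) estimate
```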

For comparison, we perform the same experiment as in Table 5 for the implicit Euler time discretization. The results are in Table 6.

Table 6 Time discretization error at time $T = 1/8$ in dependence on $k$ and $\tau$ for the implicit Euler method

The error estimates here are significantly influenced by the oscillation parameter $k$. For the larger value $k = 10$, we do not observe convergence. In the case $k = 0$, the convergence is of first order, $O(\tau)$, that is, much slower than for the Runge-Kutta method with the two-point Radau quadrature.

6.2 Experiments with an unknown and less smooth stationary solution

Here, we replace the above defined function g with the following one:

$$g(y) = \begin{cases} y(1 - y) & \text{for } x = 0,\ y < 1/4 \text{ or } y > 3/4, \\ e^{-2|y - 1/2|} & \text{for } x = 0,\ 1/4 \le y \le 3/4, \\ 0 & \text{for } x = 1, \end{cases}$$

and prepare Tables 7 and 8 analogously to Tables 2 and 4, respectively. The results in Tables 7 and 8 are very similar to the results from Tables 2 and 4. This means that less smoothness in space of the solution to (26)-(27) does not significantly influence the time-discretization error.

Table 7 Values of stabilized time $T$ in dependence on $\ell$ and $\tau$
Table 8 Time discretization error at time $T = 1/8$ in dependence on $\ell$ and $\tau$

In Figures 4 and 5, the numerical stationary solutions for $\ell = 1$ and $\ell = 20$, respectively, are depicted. The discretization parameter is $h = 1/50$.

Figure 4: Numerical stationary solution for $\ell = 1$.

Figure 5: Numerical stationary solution for $\ell = 20$.

7 Concluding remarks

There are several advantages in using high order time integration methods. Clearly, the major advantage is that the high order of the discretization error enables the use of larger, and hence fewer, time steps to achieve a desired level of accuracy. Some of the methods, like Radau integration, are highly stable, i.e. they decrease unwanted solution components exponentially fast, and they do not suffer from the order reduction that is otherwise common for many other methods. The disadvantage of such high order methods is that one must solve a number of quadratic matrix polynomial equations. For this reason, much work has been devoted to the development of simpler methods, like diagonally implicit Runge-Kutta methods; see e.g. [10]. Such methods are, however, of lower order and may suffer from order reduction.

In the present paper, it has been shown that the arising quadratic matrix polynomial factors can be handled in parallel and that each of them can be solved efficiently with a preconditioning method, resulting in very few iterations. Each iteration involves just two first-order, real-valued matrix factors, similar to what arises in the diagonally implicit Runge-Kutta methods. An alternative, stabilized explicit Runge-Kutta methods, i.e. methods where the stability domain has been extended by use of certain forms of Chebyshev polynomials (see, e.g. [17]), can only be competitive for modestly stiff problems.

It has also been shown that the methods behave robustly with respect to oscillations in the coefficients in the differential operator. Hence, in practice, high order methods have a robust performance and do not suffer from any real disadvantage.

References

1. Butcher JC: Numerical Methods for Ordinary Differential Equations. 2nd edition. Wiley, Chichester; 2008.

2. Butcher JC: Implicit Runge-Kutta processes. Math. Comput. 1964, 18: 50-64. 10.1090/S0025-5718-1964-0159424-9

3. Axelsson O: A class of A-stable methods. BIT 1969, 9: 185-199. 10.1007/BF01946812

4. Axelsson O: Global integration of differential equations through Lobatto quadrature. BIT 1964, 4: 69-86. 10.1007/BF01939850

5. Axelsson O: On the efficiency of a class of A-stable methods. BIT 1974, 14: 279-287. 10.1007/BF01933227

6. Axelsson O, Kucherov A: Real valued iterative methods for solving complex symmetric linear systems. Numer. Linear Algebra Appl. 2000, 7: 197-218. 10.1002/1099-1506(200005)7:4<197::AID-NLA194>3.0.CO;2-S

7. Varga RS: Functional Analysis and Approximation Theory in Numerical Analysis. SIAM, Philadelphia; 1971.

8. Gear CW: Numerical Initial Value Problems in Ordinary Differential Equations. Prentice Hall, New York; 1971.

9. Fried I: Optimal gradient minimization scheme for finite element eigenproblems. J. Sound Vib. 1972, 20: 333-342. 10.1016/0022-460X(72)90614-1

10. Hairer E, Wanner G: Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems. 2nd edition. Springer, Berlin; 1996.

11. Axelsson O: Iterative Solution Methods. Cambridge University Press, Cambridge; 1994.

12. Hairer E, Lubich Ch, Roche M: The Numerical Solution of Differential-Algebraic Systems by Runge-Kutta Methods. Lecture Notes in Mathematics 1409. Springer, Berlin; 1989.

13. Petzold LR: Order results for implicit Runge-Kutta methods applied to differential/algebraic systems. SIAM J. Numer. Anal. 1986, 23(4): 837-852. 10.1137/0723054

14. Axelsson O: Error estimates over infinite intervals of some discretizations of evolution equations. BIT 1984, 24: 413-424. 10.1007/BF01934901

15. Wanner G: A short proof on nonlinear A-stability. BIT 1976, 16: 226-227. 10.1007/BF01931374

16. Frank R, Schneid J, Ueberhuber CW: The concept of B-convergence. SIAM J. Numer. Anal. 1981, 18: 753-780. 10.1137/0718051

17. Hundsdorfer W, Verwer JG: Numerical Solution of Time Dependent Advection-Diffusion-Reaction Equations. Springer, Berlin; 2003.


Acknowledgements

This paper was funded by King Abdulaziz University, under grant No. (35-3-1432/HiCi). The authors, therefore, acknowledge technical and financial support of KAU.

Author information

Correspondence to Bashir Ahmad.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

Each of the authors, OA, RB, SS and BA, contributed to each part of this work equally and read and approved the final version of the manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Axelsson, O., Blaheta, R., Sysala, S. et al. On the solution of high order stable time integration methods. Bound Value Probl 2013, 108 (2013). https://doi.org/10.1186/1687-2770-2013-108