
Computing eigenvalues and Hermite interpolation for Dirac systems with eigenparameter in boundary conditions

Abstract

Eigenvalue problems with an eigenparameter appearing in the boundary conditions usually have complicated characteristic determinants whose zeros cannot be computed explicitly. In this paper we use the derivative sampling theorem 'Hermite interpolations' to compute approximate values of the eigenvalues of Dirac systems with an eigenvalue parameter in one or two boundary conditions. We use recently derived estimates for the truncation and amplitude errors to compute error bounds. Using computable error bounds, we obtain eigenvalue enclosures. Examples with tables and illustrative figures are given. The numerical examples at the end of the paper also give comparisons with the classical sinc-method in Annaby and Tharwat (BIT Numer. Math. 47:699-713, 2007) and show that the Hermite interpolation method gives remarkably better results.

MSC:34L16, 94A20, 65L15.

1 Introduction

Let $\sigma>0$ and let $PW_{\sigma}^{2}$ be the Paley-Wiener space of all $L^{2}(\mathbb{R})$ entire functions of exponential type $\sigma$. Assume that $f(t)\in PW_{\sigma}^{2}\subset PW_{2\sigma}^{2}$. Then $f(t)$ can be reconstructed via the Hermite-type sampling series

$$f(t)=\sum_{n=-\infty}^{\infty}\Bigl[f\Bigl(\frac{n\pi}{\sigma}\Bigr)S_{n}^{2}(t)+f'\Bigl(\frac{n\pi}{\sigma}\Bigr)\frac{\sin(\sigma t-n\pi)}{\sigma}S_{n}(t)\Bigr],$$
(1.1)

where $S_{n}(t)$ denotes the sequence of sinc functions

$$S_{n}(t):=\begin{cases}\dfrac{\sin(\sigma t-n\pi)}{\sigma t-n\pi},& t\neq\dfrac{n\pi}{\sigma},\\[1ex] 1,& t=\dfrac{n\pi}{\sigma}.\end{cases}$$
(1.2)

Series (1.1) converges absolutely and uniformly on $\mathbb{R}$, cf. [1-4]. Sometimes, series (1.1) is called the derivative sampling theorem. Our task is to use formula (1.1) to compute the eigenvalues of Dirac systems numerically. This approach is an entirely new technique that uses the recently obtained estimates for the truncation and amplitude errors associated with (1.1), cf. [5]. Both types of errors normally appear in numerical techniques that use interpolation procedures. In the following we summarize these estimates. The truncation error associated with (1.1) is defined to be

$$R_{N}(f)(t):=f(t)-f_{N}(t),\qquad N\in\mathbb{Z}^{+},\ t\in\mathbb{R},$$
(1.3)

where $f_{N}(t)$ is the truncated series

$$f_{N}(t)=\sum_{|n|\le N}\Bigl[f\Bigl(\frac{n\pi}{\sigma}\Bigr)S_{n}^{2}(t)+f'\Bigl(\frac{n\pi}{\sigma}\Bigr)\frac{\sin(\sigma t-n\pi)}{\sigma}S_{n}(t)\Bigr].$$
(1.4)
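For illustration, the truncated series (1.4) can be evaluated directly. The sketch below is ours, not part of the paper: it reconstructs the test function $f(t)=\sin(\sigma t)/(\sigma t)\in PW_{\sigma}^{2}$, whose samples and derivative samples are known in closed form, and all helper names are hypothetical.

```python
import math

def S(n, t, sigma):
    # sinc kernel S_n(t) of (1.2)
    x = sigma * t - n * math.pi
    return 1.0 if abs(x) < 1e-12 else math.sin(x) / x

def hermite_truncated(f, fp, t, sigma, N):
    # truncated derivative (Hermite) sampling series f_N(t) of (1.4)
    total = 0.0
    for n in range(-N, N + 1):
        tn = n * math.pi / sigma
        Sn = S(n, t, sigma)
        total += f(tn) * Sn**2 + fp(tn) * math.sin(sigma * t - n * math.pi) / sigma * Sn
    return total

# test function in PW_sigma^2: f(t) = sin(sigma t)/(sigma t)
sigma = math.pi
f  = lambda t: 1.0 if t == 0 else math.sin(sigma * t) / (sigma * t)
fp = lambda t: 0.0 if t == 0 else (sigma * t * math.cos(sigma * t) - math.sin(sigma * t)) / (sigma * t * t)

approx = hermite_truncated(f, fp, 0.4, sigma, N=30)
exact  = f(0.4)
print(abs(approx - exact))   # truncation error, well below 1e-3
```

With $N=30$ the remaining error comes only from the truncated alternating tail, which is already a few units of $10^{-4}$ here.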

It is proved in [5] that if $f(t)\in PW_{\sigma}^{2}$ and $f(t)$ is sufficiently smooth, in the sense that there exists $k\in\mathbb{Z}^{+}$ such that $t^{k}f(t)\in L^{2}(\mathbb{R})$, then, for $t\in\mathbb{R}$, $|t|<N\pi/\sigma$, we have

$$\bigl|R_{N}(f)(t)\bigr|\le T_{N,k,\sigma}(t):=\frac{\xi_{k,\sigma}E_{k}|\sin\sigma t|}{2\sqrt{3}\,(N+1)^{k}}\left(\frac{1}{(N\pi-\sigma t)^{3/2}}+\frac{1}{(N\pi+\sigma t)^{3/2}}\right)+\frac{\xi_{k,\sigma}\bigl(\sigma E_{k}+kE_{k-1}\bigr)|\sin\sigma t|}{\sqrt{2}\,\sigma(N+1)^{k}}\left(\frac{1}{N\pi-\sigma t}+\frac{1}{N\pi+\sigma t}\right),$$
(1.5)

where the constants E k and ξ k , σ are given by

$$E_{k}:=\left(\int_{-\infty}^{\infty}\bigl|t^{k}f(t)\bigr|^{2}\,dt\right)^{1/2},\qquad \xi_{k,\sigma}:=\frac{\sigma^{k+1/2}}{\pi^{k+1}\sqrt{1-4^{-k}}}.$$
(1.6)

The amplitude error occurs when approximate samples are used instead of the exact ones, which we cannot compute. It is defined to be

$$\mathcal{A}(\varepsilon,f)(t)=\sum_{n=-\infty}^{\infty}\Bigl[\bigl\{f\bigl(\tfrac{n\pi}{\sigma}\bigr)-\tilde{f}\bigl(\tfrac{n\pi}{\sigma}\bigr)\bigr\}S_{n}^{2}(t)+\bigl\{f'\bigl(\tfrac{n\pi}{\sigma}\bigr)-\tilde{f}'\bigl(\tfrac{n\pi}{\sigma}\bigr)\bigr\}\frac{\sin(\sigma t-n\pi)}{\sigma}S_{n}(t)\Bigr],\qquad t\in\mathbb{R},$$
(1.7)

where $\tilde{f}(\frac{n\pi}{\sigma})$ and $\tilde{f}'(\frac{n\pi}{\sigma})$ are approximate samples of $f(\frac{n\pi}{\sigma})$ and $f'(\frac{n\pi}{\sigma})$, respectively. Let us assume that the differences $\varepsilon_{n}:=f(\frac{n\pi}{\sigma})-\tilde{f}(\frac{n\pi}{\sigma})$, $\varepsilon'_{n}:=f'(\frac{n\pi}{\sigma})-\tilde{f}'(\frac{n\pi}{\sigma})$, $n\in\mathbb{Z}$, are bounded by a positive number $\varepsilon$, i.e., $|\varepsilon_{n}|,|\varepsilon'_{n}|\le\varepsilon$. If $f(t)\in PW_{\sigma}^{2}$ satisfies the natural decay conditions

(1.8)
(1.9)

$0<\omega\le 1$, then for $0<\varepsilon\le\min\{\pi/\sigma,\sigma/\pi,1/\sqrt{e}\}$, we have, [5],

$$\bigl\|\mathcal{A}(\varepsilon,f)\bigr\|_{\infty}\le\frac{4e^{1/4}}{\sigma(\omega+1)}\bigl\{3e(1+\sigma)+\bigl((\pi/\sigma)A+M_{f}\bigr)\rho(\varepsilon)+\bigl(\sigma+2+\log 2\bigr)M_{f}\bigr\}\,\varepsilon\log(1/\varepsilon),$$
(1.10)

where

$$A:=\sqrt{\frac{3\sigma}{\pi}}\left(|f(0)|+M_{f}\Bigl(\frac{\sigma}{\pi}\Bigr)^{\omega}\right),\qquad \rho(\varepsilon):=\gamma+10\log(1/\varepsilon),$$
(1.11)

and $\gamma:=\lim_{n\to\infty}\bigl[\sum_{k=1}^{n}\frac{1}{k}-\log n\bigr]\approx 0.577216$ is the Euler-Mascheroni constant.
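As a quick numerical check of this limit (our illustration), truncating the defining sequence at a large $n$ recovers $\gamma$ to several digits:

```python
import math

def euler_gamma(n):
    # gamma = lim_{n -> infinity} [ sum_{k=1}^{n} 1/k - log n ]
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n)

print(euler_gamma(10**6))   # ~ 0.577216 (the residual decays like 1/(2n))
```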

The classical sampling theorem of Whittaker, Kotel'nikov and Shannon (WKS) [6] for $f\in PW_{\sigma}^{2}$ is the series representation

$$f(t)=\sum_{n=-\infty}^{\infty}f\Bigl(\frac{n\pi}{\sigma}\Bigr)S_{n}(t),\qquad t\in\mathbb{R},$$
(1.12)

where the convergence is absolute and uniform on $\mathbb{R}$, and it is uniform on compact sets of $\mathbb{C}$, cf. [6-8]. Series (1.12), which is of Lagrange interpolation type, has been used to compute eigenvalues of second-order eigenvalue problems; see, e.g., [9-15]. The use of (1.12) in numerical analysis is known as the sinc-method established by Stenger, cf. [16-18]. In [10, 12], the authors applied (1.12) and the regularized sinc-method to compute eigenvalues of Dirac systems with a derivation of the error estimates, as given by [19, 20]. In [12] the Dirac system has an eigenparameter appearing in the boundary conditions. The aim of this paper is to investigate the possibility of using Hermite interpolations, rather than Lagrange interpolations, to compute the eigenvalues numerically. Notice that, by the Paley-Wiener theorem [21], $f\in PW_{\sigma}^{2}$ if and only if there is $g(\cdot)\in L^{2}(-\sigma,\sigma)$ such that

$$f(t)=\frac{1}{\sqrt{2\pi}}\int_{-\sigma}^{\sigma}g(x)e^{ixt}\,dx.$$
(1.13)

Therefore $f'(t)\in PW_{\sigma}^{2}$, i.e., $f'(t)$ also has an expansion of the form (1.12). However, $f'(t)$ can also be obtained via the term-by-term differentiation of (1.12):

$$f'(t)=\sum_{n=-\infty}^{\infty}f\Bigl(\frac{n\pi}{\sigma}\Bigr)S'_{n}(t),$$
(1.14)

see [6, p.52] for convergence. Thus the use of Hermite interpolations adds no computational cost, since the samples $f(\frac{n\pi}{\sigma})$ are used to compute both $f(t)$ and $f'(t)$ via (1.12) and (1.14), respectively.
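The point of (1.12) and (1.14) is that a single set of samples $f(n\pi/\sigma)$ yields both $f$ and $f'$. The following sketch is ours; the test function $f(t)=(\sin(\pi t)/(\pi t))^{2}\in PW_{2\pi}^{2}$ and the tolerances are illustrative choices.

```python
import math

def S(n, t, sigma):
    # sinc kernel S_n(t)
    x = sigma * t - n * math.pi
    return 1.0 if abs(x) < 1e-12 else math.sin(x) / x

def Sp(n, t, sigma):
    # derivative S_n'(t) of the sinc kernel (the limit at its center is 0)
    x = sigma * t - n * math.pi
    return 0.0 if abs(x) < 1e-8 else sigma * (math.cos(x) / x - math.sin(x) / x**2)

def wks(samples, t, sigma, N):
    # Lagrange-type series (1.12) and its term-by-term derivative (1.14),
    # both built from the SAME samples f(n*pi/sigma)
    f_val  = sum(samples(n) * S(n, t, sigma)  for n in range(-N, N + 1))
    fp_val = sum(samples(n) * Sp(n, t, sigma) for n in range(-N, N + 1))
    return f_val, fp_val

sigma = 2 * math.pi                       # f below has exponential type 2*pi
sinc  = lambda t: 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)
f     = lambda t: sinc(t)**2              # f in PW^2_{2pi}
df    = lambda t: 2 * sinc(t) * (math.pi * t * math.cos(math.pi * t) - math.sin(math.pi * t)) / (math.pi * t * t)
samples = lambda n: f(n * math.pi / sigma)

fv, fpv = wks(samples, 0.3, sigma, N=40)
print(fv, fpv)
```

Both values agree with $f(0.3)$ and $f'(0.3)$ to well within $10^{-2}$ at $N=40$, using only the function samples.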

Consider the Dirac system which consists of the system of differential equations

$$u'_{2}(x)-r_{1}(x)u_{1}(x)=\lambda u_{1}(x),\qquad u'_{1}(x)+r_{2}(x)u_{2}(x)=-\lambda u_{2}(x),\qquad x\in[0,1],$$
(1.15)

and the boundary conditions

(1.16)
(1.17)

where $r_{1}(\cdot),r_{2}(\cdot)\in L^{1}(0,1)$ and $\alpha_{i},\beta_{i},\alpha'_{i},\beta'_{i}\in\mathbb{R}$, $i=1,2$, satisfy

$$\bigl((\alpha'_{1},\alpha'_{2})=(0,0)\ \text{or}\ \alpha'_{1}\alpha_{2}-\alpha_{1}\alpha'_{2}>0\bigr)\quad\text{and}\quad\bigl((\beta'_{1},\beta'_{2})=(0,0)\ \text{or}\ \beta'_{1}\beta_{2}-\beta_{1}\beta'_{2}>0\bigr).$$
(1.18)

The eigenvalue problem (1.15)-(1.17) will be denoted by $\Gamma(r,\alpha,\beta,\alpha',\beta')$ when $(\alpha'_{1},\alpha'_{2})\neq(0,0)\neq(\beta'_{1},\beta'_{2})$. It is a Dirac system in which the eigenparameter $\lambda$ appears linearly in both boundary conditions. The classical problem, when $\alpha'_{1}=\alpha'_{2}=\beta'_{1}=\beta'_{2}=0$, which we denote by $\Gamma(r,\alpha,\beta,0,0)$, is studied in the monographs of Levitan and Sargsjan [22, 23]. Annaby and Tharwat [24] used the Hermite-type sampling series (1.1) to compute the eigenvalues of problem $\Gamma(r,\alpha,\beta,0,0)$ numerically. In [25], Kerimov proved that $\Gamma(r,\alpha,\beta,\alpha',\beta')$ has a denumerable set of real and simple eigenvalues with $\pm\infty$ as the limit points. Similar results are established in [26] for the problem when the eigenparameter appears in one condition, i.e., when $\alpha'_{1}=\alpha'_{2}=0$ and $(\beta'_{1},\beta'_{2})\neq(0,0)$, or equivalently when $(\alpha'_{1},\alpha'_{2})\neq(0,0)$ and $\beta'_{1}=\beta'_{2}=0$; sampling theorems have also been established there. These problems will be denoted by $\Gamma(r,\alpha,\beta,0,\beta')$ and $\Gamma(r,\alpha,\beta,\alpha',0)$, respectively. The aim of the present work is to compute the eigenvalues of $\Gamma(r,\alpha,\beta,\alpha',\beta')$ and $\Gamma(r,\alpha,\beta,0,\beta')$ numerically by Hermite interpolations with an error analysis. The method is based on the sampling theorem (Hermite interpolations), but applied to regularized functions, hence avoiding any (multiple) integration and keeping the number of terms in the cardinal series manageable. It has been demonstrated that the method is capable of delivering higher-order estimates of the eigenvalues at a very low cost; see [24]. In Sections 2 and 3, we derive the Hermite interpolation technique to compute the eigenvalues of Dirac systems with error estimates, and we briefly derive some necessary asymptotics for the spectral quantities of Dirac systems. The last section contains three worked examples with comparisons, accompanied by figures and numerics, with the Lagrange interpolation method.

2 Treatment of $\Gamma(r,\alpha,\beta,\alpha',\beta')$

In this section we derive approximate values of the eigenvalues of $\Gamma(r,\alpha,\beta,\alpha',\beta')$. Recall that $\Gamma(r,\alpha,\beta,\alpha',\beta')$ has a denumerable set of real and simple eigenvalues, cf. [25]. Let $\varphi(\cdot,\lambda)=\bigl(\varphi_{1}(\cdot,\lambda),\varphi_{2}(\cdot,\lambda)\bigr)^{\top}$ be a solution of (1.15) satisfying the following initial conditions:

$$\varphi_{1}(0,\lambda)=\alpha_{2}+\lambda\alpha'_{2},\qquad \varphi_{2}(0,\lambda)=\alpha_{1}+\lambda\alpha'_{1}.$$
(2.1)

Here $A^{\top}$ denotes the transpose of a matrix $A$. Since $\varphi(\cdot,\lambda)$ satisfies (1.16), the eigenvalues of the problem $\Gamma(r,\alpha,\beta,\alpha',\beta')$ are the zeros of the function

$$\Delta(\lambda):=(\beta_{1}+\lambda\beta'_{1})\varphi_{1}(1,\lambda)+(\beta_{2}+\lambda\beta'_{2})\varphi_{2}(1,\lambda).$$
(2.2)

Similarly to [22, p.220], $\varphi_{1}(\cdot,\lambda)$ and $\varphi_{2}(\cdot,\lambda)$ satisfy the system of integral equations

(2.3)
(2.4)

where T i and T ˜ i , i=1,2, are the Volterra operators defined by

$$T_{i}u(x,\lambda):=\int_{0}^{x}\sin\bigl[\lambda(x-t)\bigr]r_{i}(t)u(t,\lambda)\,dt,\qquad \tilde{T}_{i}u(x,\lambda):=\int_{0}^{x}\cos\bigl[\lambda(x-t)\bigr]r_{i}(t)u(t,\lambda)\,dt,\qquad i=1,2.$$
(2.5)

For convenience, we define the constants

$$c_{1}:=\max\bigl\{|\alpha_{1}|+|\alpha_{2}|,\,|\alpha'_{1}|+|\alpha'_{2}|\bigr\},\qquad c_{2}:=\int_{0}^{1}\bigl[|r_{1}(t)|+|r_{2}(t)|\bigr]\,dt,\qquad c_{3}:=c_{1}c_{2},$$
$$c_{4}:=c_{3}\exp(c_{2}),\qquad c_{5}:=\max\bigl\{|\beta_{1}|+|\beta_{2}|,\,|\beta'_{1}|+|\beta'_{2}|\bigr\},\qquad c_{6}:=c_{4}c_{5}.$$
(2.6)
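For concreteness, here is a small sketch of ours evaluating the constants (2.6) for illustrative data $r_{1}(x)=r_{2}(x)=x^{2}$, $\alpha=(0,1)$, $\alpha'=(1,0)$, $\beta=(0,1)$, $\beta'=(1,0)$; $c_{2}$ is computed by a simple midpoint quadrature.

```python
import math

alpha  = (0.0, 1.0)   # (alpha_1, alpha_2)   -- illustrative data
alphap = (1.0, 0.0)   # (alpha_1', alpha_2')
beta   = (0.0, 1.0)
betap  = (1.0, 0.0)
r1 = r2 = lambda x: x * x

# midpoint rule for c2 = int_0^1 (|r1| + |r2|) dt
M = 10_000
c2 = sum(abs(r1((j + 0.5) / M)) + abs(r2((j + 0.5) / M)) for j in range(M)) / M

c1 = max(abs(alpha[0]) + abs(alpha[1]), abs(alphap[0]) + abs(alphap[1]))
c3 = c1 * c2
c4 = c3 * math.exp(c2)
c5 = max(abs(beta[0]) + abs(beta[1]), abs(betap[0]) + abs(betap[1]))
c6 = c4 * c5
print(c2, c6)   # c2 ~ 2/3 for this data
```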

Define $h_{1}(\cdot,\lambda)$ and $h_{2}(\cdot,\lambda)$ to be

$$h_{1}(x,\lambda):=T_{1}\varphi_{1}(x,\lambda)+\tilde{T}_{2}\varphi_{2}(x,\lambda),\qquad h_{2}(x,\lambda):=\tilde{T}_{1}\varphi_{1}(x,\lambda)+T_{2}\varphi_{2}(x,\lambda).$$
(2.7)

As in [12] we split Δ(λ) into two parts via

Δ(λ):=G(λ)+S(λ),
(2.8)

where $G(\lambda)$ is the known part

$$G(\lambda):=(\beta_{1}+\lambda\beta'_{1})\bigl[(\alpha_{2}+\lambda\alpha'_{2})\cos\lambda-(\alpha_{1}+\lambda\alpha'_{1})\sin\lambda\bigr]+(\beta_{2}+\lambda\beta'_{2})\bigl[(\alpha_{1}+\lambda\alpha'_{1})\cos\lambda+(\alpha_{2}+\lambda\alpha'_{2})\sin\lambda\bigr]$$
(2.9)

and S(λ) is the unknown one

$$S(\lambda):=(\beta_{1}+\lambda\beta'_{1})h_{1}(1,\lambda)+(\beta_{2}+\lambda\beta'_{2})h_{2}(1,\lambda).$$
(2.10)

The function $S(\lambda)$ is entire in $\lambda$ and satisfies, cf. [12],

$$|S(\lambda)|\le c_{6}\bigl(1+|\lambda|\bigr)^{2}e^{|\Im\lambda|},\qquad\lambda\in\mathbb{C}.$$
(2.11)

The analyticity of $S(\lambda)$, as well as estimate (2.11), is not adequate to prove that $S(\lambda)$ lies in a Paley-Wiener space. To solve this problem, we multiply $S(\lambda)$ by a regularization factor. Let $\theta>0$ and $m\in\mathbb{Z}^{+}$, $m\ge 4$, be fixed. Let $F_{\theta,m}(\lambda)$ be the function

$$F_{\theta,m}(\lambda):=\Bigl(\frac{\sin\theta\lambda}{\theta\lambda}\Bigr)^{m}S(\lambda),\qquad\lambda\in\mathbb{C}.$$
(2.12)

We choose $\theta$ sufficiently small so that $|\theta\lambda|<\pi$. More specifications on $m$, $\theta$ will be given later on. Then $F_{\theta,m}(\lambda)$, see [12], is an entire function of $\lambda$ which satisfies the estimate

$$|F_{\theta,m}(\lambda)|\le\frac{c_{0}^{m}c_{6}(1+|\lambda|)^{2}}{(1+\theta|\lambda|)^{m}}\,e^{|\Im\lambda|(1+m\theta)},\qquad\lambda\in\mathbb{C}.$$
(2.13)

Moreover, $\lambda^{m-3}F_{\theta,m}(\lambda)\in L^{2}(\mathbb{R})$ and

$$E_{m-3}(F_{\theta,m})=\left(\int_{-\infty}^{\infty}\bigl|\lambda^{m-3}F_{\theta,m}(\lambda)\bigr|^{2}\,d\lambda\right)^{1/2}\le\sqrt{2}\,c_{0}^{m}c_{6}\sqrt{\xi_{0}},$$
(2.14)

where

$$\xi_{0}:=\frac{1}{\theta^{2m-1}}\left(\frac{3+2m^{2}-6\theta+6\theta^{2}+4m\theta-5m}{4m^{3}-12m^{2}+11m-3}+\frac{6\theta^{3}(\theta+2m-5)}{(4m^{3}-12m^{2}+11m-3)(m-2)(2m-5)}\right).$$

What we have just proved is that $F_{\theta,m}(\lambda)$ belongs to the Paley-Wiener space $PW_{\sigma}^{2}$ with $\sigma=1+m\theta$. Since $F_{\theta,m}(\lambda)\in PW_{\sigma}^{2}\subset PW_{2\sigma}^{2}$, we can reconstruct $F_{\theta,m}(\lambda)$ via the following sampling formula:

$$F_{\theta,m}(\lambda)=\sum_{n=-\infty}^{\infty}\Bigl[F_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)S_{n}^{2}(\lambda)+F'_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)\frac{\sin(\sigma\lambda-n\pi)}{\sigma}S_{n}(\lambda)\Bigr].$$
(2.15)

Let $N\in\mathbb{Z}^{+}$, $N>m$, and approximate $F_{\theta,m}(\lambda)$ by its truncated series $F_{\theta,m,N}(\lambda)$, where

$$F_{\theta,m,N}(\lambda):=\sum_{n=-N}^{N}\Bigl[F_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)S_{n}^{2}(\lambda)+F'_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)\frac{\sin(\sigma\lambda-n\pi)}{\sigma}S_{n}(\lambda)\Bigr].$$
(2.16)

Since all eigenvalues are real, from now on we restrict ourselves to $\lambda\in\mathbb{R}$. Since $\lambda^{m-3}F_{\theta,m}(\lambda)\in L^{2}(\mathbb{R})$, the truncation error, cf. (1.5), satisfies for $|\lambda|<\frac{N\pi}{\sigma}$

$$\bigl|F_{\theta,m}(\lambda)-F_{\theta,m,N}(\lambda)\bigr|\le T_{N,m-3,\sigma}(\lambda),$$
(2.17)

where $T_{N,m-3,\sigma}(\lambda)$ is the truncation bound given by (1.5) with $k=m-3$ and $f=F_{\theta,m}$.

The samples $\{F_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$ and $\{F'_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$, in general, are not known explicitly. So we approximate them by solving numerically $8N+4$ initial value problems at the nodes $\{\frac{n\pi}{\sigma}\}_{n=-N}^{N}$. Let $\{\tilde{F}_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$ and $\{\tilde{F}'_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$ be the approximations of these samples. Now we define $\tilde{F}_{\theta,m,N}(\lambda)$, which approximates $F_{\theta,m,N}(\lambda)$:

$$\tilde{F}_{\theta,m,N}(\lambda):=\sum_{n=-N}^{N}\Bigl[\tilde{F}_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)S_{n}^{2}(\lambda)+\tilde{F}'_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)\frac{\sin(\sigma\lambda-n\pi)}{\sigma}S_{n}(\lambda)\Bigr].$$
(2.19)

Using standard methods for solving initial value problems, we may assume that for $|n|<N$,

$$\Bigl|F_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)-\tilde{F}_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)\Bigr|<\varepsilon,\qquad \Bigl|F'_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)-\tilde{F}'_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)\Bigr|<\varepsilon$$
(2.20)

for a sufficiently small $\varepsilon$. From (2.13) we can see that $F_{\theta,m}(\lambda)$ satisfies condition (1.9) when $m\ge 4$, and therefore whenever $0<\varepsilon\le\min\{\pi/\sigma,\sigma/\pi,1/\sqrt{e}\}$, we have

$$\bigl|F_{\theta,m,N}(\lambda)-\tilde{F}_{\theta,m,N}(\lambda)\bigr|\le A(\varepsilon),\qquad\lambda\in\mathbb{R},$$
(2.21)

where there is a positive constant M F θ , m for which, cf. (1.10),

$$A(\varepsilon):=\frac{2e^{1/4}}{\sigma}\Bigl\{3e(1+\sigma)+\Bigl(\frac{\pi}{\sigma}A+M_{F_{\theta,m}}\Bigr)\rho(\varepsilon)+\bigl(\sigma+2+\log 2\bigr)M_{F_{\theta,m}}\Bigr\}\,\varepsilon\log(1/\varepsilon).$$
(2.22)

Here

$$A:=\sqrt{\frac{3\sigma}{\pi}}\Bigl(|F_{\theta,m}(0)|+\frac{\sigma}{\pi}M_{F_{\theta,m}}\Bigr),\qquad\rho(\varepsilon):=\gamma+10\log(1/\varepsilon).$$

In the following, we use the technique of [27], where only the truncation error analysis is considered, to determine enclosure intervals for the eigenvalues; see also [24, 28]. Let $\lambda^{*}$ be an eigenvalue with $|\theta\lambda^{*}|<\pi$, that is,

$$\Delta\bigl(\lambda^{*}\bigr)=G\bigl(\lambda^{*}\bigr)+\Bigl(\frac{\sin\theta\lambda^{*}}{\theta\lambda^{*}}\Bigr)^{m}F_{\theta,m}\bigl(\lambda^{*}\bigr)=0.$$

Then it follows that

$$G\bigl(\lambda^{*}\bigr)+\Bigl(\frac{\sin\theta\lambda^{*}}{\theta\lambda^{*}}\Bigr)^{m}\tilde{F}_{\theta,m,N}\bigl(\lambda^{*}\bigr)=\Bigl(\frac{\sin\theta\lambda^{*}}{\theta\lambda^{*}}\Bigr)^{m}\bigl[\tilde{F}_{\theta,m,N}\bigl(\lambda^{*}\bigr)-F_{\theta,m}\bigl(\lambda^{*}\bigr)\bigr]$$

and so

$$\Bigl|G\bigl(\lambda^{*}\bigr)+\Bigl(\frac{\sin\theta\lambda^{*}}{\theta\lambda^{*}}\Bigr)^{m}\tilde{F}_{\theta,m,N}\bigl(\lambda^{*}\bigr)\Bigr|\le\Bigl|\frac{\sin\theta\lambda^{*}}{\theta\lambda^{*}}\Bigr|^{m}\bigl(T_{N,m-3,\sigma}\bigl(\lambda^{*}\bigr)+A(\varepsilon)\bigr).$$

Since $G(\lambda^{*})+\bigl(\frac{\sin\theta\lambda^{*}}{\theta\lambda^{*}}\bigr)^{m}\tilde{F}_{\theta,m,N}(\lambda^{*})$ is computable and $\bigl|\frac{\sin\theta\lambda^{*}}{\theta\lambda^{*}}\bigr|^{m}\bigl(T_{N,m-3,\sigma}(\lambda^{*})+A(\varepsilon)\bigr)$ has a computable upper bound, we can define an enclosure for $\lambda^{*}$ by solving the following system of inequalities:

$$-\Bigl|\frac{\sin\theta\lambda}{\theta\lambda}\Bigr|^{m}\bigl(T_{N,m-3,\sigma}(\lambda)+A(\varepsilon)\bigr)\le G(\lambda)+\Bigl(\frac{\sin\theta\lambda}{\theta\lambda}\Bigr)^{m}\tilde{F}_{\theta,m,N}(\lambda)\le\Bigl|\frac{\sin\theta\lambda}{\theta\lambda}\Bigr|^{m}\bigl(T_{N,m-3,\sigma}(\lambda)+A(\varepsilon)\bigr).$$
(2.23)

Its solution is an interval containing $\lambda^{*}$, over which the graph of

$$G(\lambda)+\Bigl(\frac{\sin\theta\lambda}{\theta\lambda}\Bigr)^{m}\tilde{F}_{\theta,m,N}(\lambda)$$

is squeezed between the graphs of

$$\Bigl|\frac{\sin\theta\lambda}{\theta\lambda}\Bigr|^{m}\bigl(T_{N,m-3,\sigma}(\lambda)+A(\varepsilon)\bigr)$$
(2.24)

and

$$-\Bigl|\frac{\sin\theta\lambda}{\theta\lambda}\Bigr|^{m}\bigl(T_{N,m-3,\sigma}(\lambda)+A(\varepsilon)\bigr).$$
(2.25)

Using the fact that

$$\tilde{F}_{\theta,m,N}(\lambda)\to F_{\theta,m}(\lambda)$$

uniformly over any compact set, and since $\lambda^{*}$ is a simple root, we obtain, for large $N$ and sufficiently small $\varepsilon$,

$$\frac{\partial}{\partial\lambda}\Bigl(G(\lambda)+\Bigl(\frac{\sin\theta\lambda}{\theta\lambda}\Bigr)^{m}\tilde{F}_{\theta,m,N}(\lambda)\Bigr)\neq 0$$

in a neighborhood of $\lambda^{*}$. Hence the graph of $G(\lambda)+\bigl(\frac{\sin\theta\lambda}{\theta\lambda}\bigr)^{m}\tilde{F}_{\theta,m,N}(\lambda)$ intersects the graphs (2.24) and (2.25) at two points with abscissae $a_{-}(\lambda^{*},N,\varepsilon)\le a_{+}(\lambda^{*},N,\varepsilon)$, and the solution of the system of inequalities (2.23) is the interval

$$I_{\varepsilon,N}:=\bigl[a_{-}(\lambda^{*},N,\varepsilon),\,a_{+}(\lambda^{*},N,\varepsilon)\bigr],$$

and in particular $\lambda^{*}\in I_{\varepsilon,N}$. Summarizing the above discussion, we arrive at the following lemma, which is similar to that of [27] for Sturm-Liouville problems.
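The enclosure mechanism can be mimicked on a toy model (entirely our illustration: the approximation below is a hand-made perturbation of the "true" function $\Delta(\lambda)=\sin\lambda$, and the constant B stands in for the computable bound $|\frac{\sin\theta\lambda}{\theta\lambda}|^{m}(T_{N,m-3,\sigma}(\lambda)+A(\varepsilon))$). Bisecting where the approximation meets $\pm B$ on either side of a simple zero produces the interval $[a_{-},a_{+}]$:

```python
import math

Delta  = math.sin                                              # "true" characteristic function (toy)
Dtilde = lambda lam: math.sin(lam) + 1e-4 * math.cos(3 * lam)  # computable approximation
B      = 2e-4                                                  # known bound: |Delta - Dtilde| <= B

def bisect(g, a, b, tol=1e-12):
    # standard bisection; assumes g(a) and g(b) have opposite signs
    fa = g(a)
    while b - a > tol:
        c = 0.5 * (a + b)
        if fa * g(c) <= 0:
            b = c
        else:
            a, fa = c, g(c)
    return 0.5 * (a + b)

# Dtilde is decreasing near the simple zero lambda* = pi of Delta, so the band
# |Dtilde| <= B is entered where Dtilde = +B and left where Dtilde = -B
a_minus = bisect(lambda lam: Dtilde(lam) - B, math.pi - 0.1, math.pi)
a_plus  = bisect(lambda lam: Dtilde(lam) + B, math.pi, math.pi + 0.1)
print(a_minus, a_plus)   # enclosure interval containing pi
```

The interval shrinks as B does, which mirrors the behavior stated in the lemma below.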

Lemma 2.1 For any eigenvalue $\lambda^{*}$, we can find $N_{0}\in\mathbb{Z}^{+}$ and sufficiently small $\varepsilon$ such that $\lambda^{*}\in I_{\varepsilon,N}$ for $N>N_{0}$. Moreover,

$$\bigl[a_{-}(\lambda^{*},N,\varepsilon),\,a_{+}(\lambda^{*},N,\varepsilon)\bigr]\to\{\lambda^{*}\}\quad\text{as }N\to\infty\text{ and }\varepsilon\to 0.$$
(2.26)

Proof Since all eigenvalues of $\Gamma(r,\alpha,\beta,\alpha',\beta')$ are simple, for large $N$ and sufficiently small $\varepsilon$ we have $\bigl|\frac{\partial}{\partial\lambda}\bigl(G(\lambda)+(\frac{\sin\theta\lambda}{\theta\lambda})^{m}\tilde{F}_{\theta,m,N}(\lambda)\bigr)\bigr|>0$ in a neighborhood of $\lambda^{*}$. Choose $N_{0}$ such that

$$G(\lambda)+\Bigl(\frac{\sin\theta\lambda}{\theta\lambda}\Bigr)^{m}\tilde{F}_{\theta,m,N_{0}}(\lambda)=\pm\Bigl|\frac{\sin\theta\lambda}{\theta\lambda}\Bigr|^{m}\bigl(T_{N_{0},m-3,\sigma}(\lambda)+A(\varepsilon)\bigr)$$

has two distinct solutions, which we denote by $a_{-}(\lambda^{*},N_{0},\varepsilon)\le a_{+}(\lambda^{*},N_{0},\varepsilon)$. The decay $T_{N,m-3,\sigma}(\lambda)\to 0$ as $N\to\infty$ and $A(\varepsilon)\to 0$ as $\varepsilon\to 0$ ensures the existence of the solutions $a_{-}(\lambda^{*},N,\varepsilon)$ and $a_{+}(\lambda^{*},N,\varepsilon)$ as $N\to\infty$ and $\varepsilon\to 0$. For the second point, we recall that $\tilde{F}_{\theta,m,N}(\lambda)\to F_{\theta,m}(\lambda)$ as $N\to\infty$ and $\varepsilon\to 0$. Hence, by taking the limit, we obtain

$$G(a_{+})+\Bigl(\frac{\sin\theta a_{+}}{\theta a_{+}}\Bigr)^{m}F_{\theta,m}(a_{+})=0,\qquad G(a_{-})+\Bigl(\frac{\sin\theta a_{-}}{\theta a_{-}}\Bigr)^{m}F_{\theta,m}(a_{-})=0,$$

that is, $\Delta(a_{+})=\Delta(a_{-})=0$. This leads us to conclude that $a_{+}=a_{-}=\lambda^{*}$ since $\lambda^{*}$ is a simple root. □

Let $\tilde{\Delta}_{N}(\lambda):=G(\lambda)+\bigl(\frac{\sin\theta\lambda}{\theta\lambda}\bigr)^{m}\tilde{F}_{\theta,m,N}(\lambda)$. Then (2.17) and (2.21) imply

$$\bigl|\Delta(\lambda)-\tilde{\Delta}_{N}(\lambda)\bigr|\le\Bigl|\frac{\sin\theta\lambda}{\theta\lambda}\Bigr|^{m}\bigl(T_{N,m-3,\sigma}(\lambda)+A(\varepsilon)\bigr),\qquad|\lambda|<\frac{N\pi}{\sigma}.$$
(2.27)

Therefore $\theta$, $m$ must be chosen so that for $|\lambda|<\frac{N\pi}{\sigma}$

$$m\ge 4,\qquad|\theta\lambda|<\pi.$$

Let $\lambda^{*}$ be an eigenvalue and $\lambda_{N}$ its approximation. Thus $\Delta(\lambda^{*})=0$ and $\tilde{\Delta}_{N}(\lambda_{N})=0$. From (2.27) we have $|\tilde{\Delta}_{N}(\lambda^{*})|\le|\frac{\sin\theta\lambda^{*}}{\theta\lambda^{*}}|^{m}(T_{N,m-3,\sigma}(\lambda^{*})+A(\varepsilon))$. Now we estimate the error $|\lambda^{*}-\lambda_{N}|$ for an eigenvalue $\lambda^{*}$.

Theorem 2.2 Let $\lambda^{*}$ be an eigenvalue of $\Gamma(r,\alpha,\beta,\alpha',\beta')$. For sufficiently large $N$, we have the following estimate:

$$\bigl|\lambda^{*}-\lambda_{N}\bigr|<\frac{\bigl|\frac{\sin\theta\lambda_{N}}{\theta\lambda_{N}}\bigr|^{m}\bigl(T_{N,m-3,\sigma}(\lambda_{N})+A(\varepsilon)\bigr)}{\inf_{\zeta\in I_{\varepsilon,N}}|\Delta'(\zeta)|}.$$
(2.28)

Moreover, $|\lambda^{*}-\lambda_{N}|\to 0$ when $N\to\infty$ and $\varepsilon\to 0$.

Proof Since $\Delta(\lambda_{N})-\tilde{\Delta}_{N}(\lambda_{N})=\Delta(\lambda_{N})-\Delta(\lambda^{*})$, from (2.27), after replacing $\lambda$ by $\lambda_{N}$, we obtain

$$\bigl|\Delta(\lambda_{N})-\Delta\bigl(\lambda^{*}\bigr)\bigr|\le\Bigl|\frac{\sin\theta\lambda_{N}}{\theta\lambda_{N}}\Bigr|^{m}\bigl(T_{N,m-3,\sigma}(\lambda_{N})+A(\varepsilon)\bigr).$$
(2.29)

Using the mean value theorem yields that for some $\zeta\in J_{\varepsilon,N}:=[\min(\lambda^{*},\lambda_{N}),\max(\lambda^{*},\lambda_{N})]$,

$$\bigl|(\lambda^{*}-\lambda_{N})\Delta'(\zeta)\bigr|\le\Bigl|\frac{\sin\theta\lambda_{N}}{\theta\lambda_{N}}\Bigr|^{m}\bigl(T_{N,m-3,\sigma}(\lambda_{N})+A(\varepsilon)\bigr),\qquad\zeta\in J_{\varepsilon,N}\subset I_{\varepsilon,N}.$$
(2.30)

Since the eigenvalues are simple, for sufficiently large $N$ we have $\inf_{\zeta\in I_{\varepsilon,N}}|\Delta'(\zeta)|>0$, and we get (2.28). The rest of the proof follows from the fact that $\tilde{\Delta}_{N}(\lambda)$ converges uniformly to $\Delta(\lambda)$ on $\mathbb{R}$ and $A(\varepsilon)\to 0$ as $\varepsilon\to 0$. □

3 The case of $\Gamma(r,\alpha,\beta,0,\beta')$

This section briefly presents a treatment similar to that of the previous section for the eigenvalue problem $\Gamma(r,\alpha,\beta,0,\beta')$ introduced in Section 1 above. Notice that condition (1.18) implies that the analysis of problem $\Gamma(r,\alpha,\beta,0,\beta')$ is not included in that of $\Gamma(r,\alpha,\beta,\alpha',\beta')$. Let $\psi(\cdot,\lambda)=\bigl(\psi_{1}(\cdot,\lambda),\psi_{2}(\cdot,\lambda)\bigr)^{\top}$ be a solution of (1.15) satisfying the following initial conditions:

$$\psi_{1}(0,\lambda)=\alpha_{2},\qquad\psi_{2}(0,\lambda)=\alpha_{1}.$$
(3.1)

Therefore, the eigenvalues of the problem in question are the zeros of the function

$$\Omega(\lambda):=(\beta_{1}+\lambda\beta'_{1})\psi_{1}(1,\lambda)+(\beta_{2}+\lambda\beta'_{2})\psi_{2}(1,\lambda).$$
(3.2)

Similarly to [22, p.220], $\psi(\cdot,\lambda)$ satisfies the system of integral equations

(3.3)
(3.4)

where $T_{i}$ and $\tilde{T}_{i}$, $i=1,2$, are the Volterra operators defined in (2.5) above. Define $g_{1}(\cdot,\lambda)$ and $g_{2}(\cdot,\lambda)$ to be

$$g_{1}(x,\lambda):=T_{1}\psi_{1}(x,\lambda)+\tilde{T}_{2}\psi_{2}(x,\lambda),\qquad g_{2}(x,\lambda):=\tilde{T}_{1}\psi_{1}(x,\lambda)+T_{2}\psi_{2}(x,\lambda).$$
(3.5)

As in [12] we split Ω(λ) into

Ω(λ):=K(λ)+U(λ),
(3.6)

where K(λ) is the known part

$$K(\lambda):=(\beta_{1}+\lambda\beta'_{1})\bigl(\alpha_{2}\cos\lambda-\alpha_{1}\sin\lambda\bigr)+(\beta_{2}+\lambda\beta'_{2})\bigl(\alpha_{1}\cos\lambda+\alpha_{2}\sin\lambda\bigr)$$
(3.7)

and U(λ) is the unknown one

$$U(\lambda):=(\beta_{1}+\lambda\beta'_{1})g_{1}(1,\lambda)+(\beta_{2}+\lambda\beta'_{2})g_{2}(1,\lambda).$$
(3.8)

Then $U(\lambda)$ is entire in $\lambda$ and satisfies, see [12],

$$|U(\lambda)|\le c_{6}\bigl(1+|\lambda|\bigr)e^{|\Im\lambda|},\qquad\lambda\in\mathbb{C}.$$
(3.9)

Define $R_{\theta,m}(\lambda)$ to be

$$R_{\theta,m}(\lambda)=\Bigl(\frac{\sin\theta\lambda}{\theta\lambda}\Bigr)^{m}U(\lambda),\qquad\lambda\in\mathbb{C},$$
(3.10)

where $\theta$ is sufficiently small so that $|\theta\lambda|<\pi$, and $m$ is as in the previous section, but now only $m\ge 3$ is required. Hence

$$|R_{\theta,m}(\lambda)|\le\frac{c_{0}^{m}c_{6}(1+|\lambda|)}{(1+\theta|\lambda|)^{m}}\,e^{|\Im\lambda|(1+m\theta)},\qquad\lambda\in\mathbb{C}$$
(3.11)

and $\lambda^{m-2}R_{\theta,m}(\lambda)\in L^{2}(\mathbb{R})$ with

$$E_{m-2}(R_{\theta,m})=\left(\int_{-\infty}^{\infty}\bigl|\lambda^{m-2}R_{\theta,m}(\lambda)\bigr|^{2}\,d\lambda\right)^{1/2}\le c_{0}^{m}c_{6}\sqrt{\omega_{0}},$$
(3.12)

where

$$\omega_{0}:=\frac{2\bigl(3-5m+2m^{2}-3\theta+2m\theta+\theta^{2}\bigr)}{\theta^{2m-1}\bigl(4m^{3}-12m^{2}+11m-3\bigr)}.$$

Thus, $R_{\theta,m}(\lambda)$ belongs to the Paley-Wiener space $PW_{\sigma}^{2}$ with $\sigma=1+m\theta$. Since $R_{\theta,m}(\lambda)\in PW_{\sigma}^{2}\subset PW_{2\sigma}^{2}$, we can reconstruct $R_{\theta,m}(\lambda)$ via the following sampling formula:

$$R_{\theta,m}(\lambda)=\sum_{n=-\infty}^{\infty}\Bigl[R_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)S_{n}^{2}(\lambda)+R'_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)\frac{\sin(\sigma\lambda-n\pi)}{\sigma}S_{n}(\lambda)\Bigr].$$
(3.13)

Let $N\in\mathbb{Z}^{+}$, $N>m$, and approximate $R_{\theta,m}(\lambda)$ by its truncated series $R_{\theta,m,N}(\lambda)$, where

$$R_{\theta,m,N}(\lambda):=\sum_{n=-N}^{N}\Bigl[R_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)S_{n}^{2}(\lambda)+R'_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)\frac{\sin(\sigma\lambda-n\pi)}{\sigma}S_{n}(\lambda)\Bigr].$$
(3.14)

Since all eigenvalues are real, from now on we restrict ourselves to $\lambda\in\mathbb{R}$. Since $\lambda^{m-2}R_{\theta,m}(\lambda)\in L^{2}(\mathbb{R})$, the truncation error, cf. (1.5), satisfies for $|\lambda|<\frac{N\pi}{\sigma}$

$$\bigl|R_{\theta,m}(\lambda)-R_{\theta,m,N}(\lambda)\bigr|\le T_{N,m-2,\sigma}(\lambda),$$
(3.15)

where $T_{N,m-2,\sigma}(\lambda)$ is the truncation bound given by (1.5) with $k=m-2$ and $f=R_{\theta,m}$.

The samples $\{R_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$ and $\{R'_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$, in general, are not known explicitly. So we approximate them by solving numerically $8N+4$ initial value problems at the nodes $\{\frac{n\pi}{\sigma}\}_{n=-N}^{N}$. Let $\{\tilde{R}_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$ and $\{\tilde{R}'_{\theta,m}(\frac{n\pi}{\sigma})\}_{n=-N}^{N}$ be the approximations of these samples. Now we define $\tilde{R}_{\theta,m,N}(\lambda)$, which approximates $R_{\theta,m,N}(\lambda)$:

$$\tilde{R}_{\theta,m,N}(\lambda):=\sum_{n=-N}^{N}\Bigl[\tilde{R}_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)S_{n}^{2}(\lambda)+\tilde{R}'_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)\frac{\sin(\sigma\lambda-n\pi)}{\sigma}S_{n}(\lambda)\Bigr].$$
(3.17)

Using standard methods for solving initial value problems, we may assume that for $|n|<N$,

$$\Bigl|R_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)-\tilde{R}_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)\Bigr|<\varepsilon,\qquad \Bigl|R'_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)-\tilde{R}'_{\theta,m}\Bigl(\frac{n\pi}{\sigma}\Bigr)\Bigr|<\varepsilon$$
(3.18)

for a sufficiently small $\varepsilon$. From (3.11) we can see that $R_{\theta,m}(\lambda)$ satisfies condition (1.9) when $m\ge 3$, and therefore whenever $0<\varepsilon\le\min\{\pi/\sigma,\sigma/\pi,1/\sqrt{e}\}$, we have

$$\bigl|R_{\theta,m,N}(\lambda)-\tilde{R}_{\theta,m,N}(\lambda)\bigr|\le A(\varepsilon),\qquad\lambda\in\mathbb{R},$$
(3.19)

where there is a positive constant M R θ , m for which, cf. (1.10),

$$A(\varepsilon):=\frac{2e^{1/4}}{\sigma}\Bigl\{3e(1+\sigma)+\Bigl(\frac{\pi}{\sigma}A+M_{R_{\theta,m}}\Bigr)\rho(\varepsilon)+\bigl(\sigma+2+\log 2\bigr)M_{R_{\theta,m}}\Bigr\}\,\varepsilon\log(1/\varepsilon).$$
(3.20)

Here

$$A:=\sqrt{\frac{3\sigma}{\pi}}\Bigl(|R_{\theta,m}(0)|+\frac{\sigma}{\pi}M_{R_{\theta,m}}\Bigr),\qquad\rho(\varepsilon):=\gamma+10\log(1/\varepsilon).$$

As in the above section, we have the following lemma.

Lemma 3.1 For any eigenvalue $\lambda^{*}$ of the problem $\Gamma(r,\alpha,\beta,0,\beta')$, we can find $N_{0}\in\mathbb{Z}^{+}$ and sufficiently small $\varepsilon$ such that $\lambda^{*}\in I_{\varepsilon,N}$ for $N>N_{0}$, where

$$I_{\varepsilon,N}:=\bigl[b_{-}(\lambda^{*},N,\varepsilon),\,b_{+}(\lambda^{*},N,\varepsilon)\bigr],$$

and $b_{-}$, $b_{+}$ are the solutions of the inequalities

$$-\Bigl|\frac{\sin\theta\lambda}{\theta\lambda}\Bigr|^{m}\bigl(T_{N,m-2,\sigma}(\lambda)+A(\varepsilon)\bigr)\le\tilde{\Omega}_{N}(\lambda)\le\Bigl|\frac{\sin\theta\lambda}{\theta\lambda}\Bigr|^{m}\bigl(T_{N,m-2,\sigma}(\lambda)+A(\varepsilon)\bigr).$$
(3.21)

Moreover,

$$\bigl[b_{-}(\lambda^{*},N,\varepsilon),\,b_{+}(\lambda^{*},N,\varepsilon)\bigr]\to\{\lambda^{*}\}\quad\text{as }N\to\infty\text{ and }\varepsilon\to 0.$$
(3.22)

Let $\tilde{\Omega}_{N}(\lambda):=K(\lambda)+\bigl(\frac{\sin\theta\lambda}{\theta\lambda}\bigr)^{m}\tilde{R}_{\theta,m,N}(\lambda)$. Then (3.15) and (3.19) imply

$$\bigl|\Omega(\lambda)-\tilde{\Omega}_{N}(\lambda)\bigr|\le\Bigl|\frac{\sin\theta\lambda}{\theta\lambda}\Bigr|^{m}\bigl(T_{N,m-2,\sigma}(\lambda)+A(\varepsilon)\bigr),\qquad|\lambda|<\frac{N\pi}{\sigma}.$$
(3.23)

Therefore, $\theta$, $m$ must be chosen so that for $|\lambda|<\frac{N\pi}{\sigma}$,

$$m\ge 3,\qquad|\theta\lambda|<\pi.$$

Let $\lambda^{*}$ be an eigenvalue and $\lambda_{N}$ its approximation. Thus $\Omega(\lambda^{*})=0$ and $\tilde{\Omega}_{N}(\lambda_{N})=0$. From (3.23) we have $|\tilde{\Omega}_{N}(\lambda^{*})|\le|\frac{\sin\theta\lambda^{*}}{\theta\lambda^{*}}|^{m}(T_{N,m-2,\sigma}(\lambda^{*})+A(\varepsilon))$. Now we estimate the error $|\lambda^{*}-\lambda_{N}|$ for an eigenvalue $\lambda^{*}$. Finally, we have the following estimate.

Theorem 3.2 Let $\lambda^{*}$ be an eigenvalue of the problem $\Gamma(r,\alpha,\beta,0,\beta')$. For sufficiently large $N$, we have the following estimate:

$$\bigl|\lambda^{*}-\lambda_{N}\bigr|<\frac{\bigl|\frac{\sin\theta\lambda_{N}}{\theta\lambda_{N}}\bigr|^{m}\bigl(T_{N,m-2,\sigma}(\lambda_{N})+A(\varepsilon)\bigr)}{\inf_{\zeta\in I_{\varepsilon,N}}|\Omega'(\zeta)|}.$$
(3.24)

Moreover, $|\lambda^{*}-\lambda_{N}|\to 0$ when $N\to\infty$ and $\varepsilon\to 0$.

In the following section, we take $\theta=1/(N-m)$, so that $\sigma=1+m\theta=N/(N-m)$, in order to avoid the first singularity of $\bigl(\frac{\sin\theta\lambda_{N}}{\theta\lambda_{N}}\bigr)^{-1}$.
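This choice is tight: with $\theta=1/(N-m)$ and $\sigma=1+m\theta=N/(N-m)$, the largest admissible $|\lambda|=N\pi/\sigma$ gives $|\theta\lambda|=\pi$ exactly, so the regularization factor $(\sin\theta\lambda/\theta\lambda)^{m}$ has no zero strictly inside $(-N\pi/\sigma,N\pi/\sigma)$. A quick check (our sketch):

```python
import math

N, m = 20, 10
theta = 1.0 / (N - m)
sigma = 1 + m * theta            # equals N/(N - m)
lam_max = N * math.pi / sigma    # largest |lambda| used in the truncated series
print(theta * lam_max / math.pi) # exactly 1: |theta*lambda| < pi holds strictly inside
```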

4 Examples

This section includes three detailed worked examples illustrating the above technique, accompanied by a comparison with the sinc-method derived in [12]. It is clearly seen that the Hermite interpolation method gives remarkably better results. The first two examples are computed in [12] with the classical sinc-method, where $r_{1}(x)=r_{2}(x)$. But in the last example, where eigenvalues cannot be computed concretely, $r_{1}(x)\neq r_{2}(x)$. By $E_{S}$ and $E_{H}$ we mean the absolute errors associated with the results of the classical sinc-method and our new method (Hermite interpolations), respectively. We indicate in these examples the effect of the amplitude error in the method by determining enclosure intervals for different values of $\varepsilon$. We also indicate the effect of the parameters $m$ and $\theta$ by several choices. Each example is exhibited via figures that accurately illustrate the procedure near some of the approximated eigenvalues. More explanations are given below. Recall that $a_{\pm}(\lambda)$ and $b_{\pm}(\lambda)$ are defined by

(4.1)
(4.2)

respectively. Recall also that the enclosure intervals I ε , N :=[ a , a + ] and I ε , N :=[ b , b + ] are determined by solving

(4.3)
(4.4)

respectively. We would like to mention that Mathematica has been used to obtain the exact values for the three examples where eigenvalues cannot be computed concretely. Mathematica is also used in rounding the exact eigenvalues, which are square roots.

Example 1

The boundary value problem

(4.5)
(4.6)

is a special case of the problem $\Gamma(r,\alpha,\beta,\alpha',\beta')$ with $r_{1}(x)=r_{2}(x)=x^{2}$, $\alpha_{1}=\alpha'_{2}=\beta_{1}=\beta'_{2}=0$, $\alpha'_{1}=1$ and $\alpha_{2}=\beta'_{1}=\beta_{2}=1$. Here the characteristic function is

$$\Delta(\lambda):=2\lambda\cos\Bigl(\frac{1}{3}+\lambda\Bigr)-\bigl(\lambda^{2}-1\bigr)\sin\Bigl(\frac{1}{3}+\lambda\Bigr).$$
(4.7)

The function G(λ) will be

$$G(\lambda):=2\lambda\cos\lambda+\bigl(1-\lambda^{2}\bigr)\sin\lambda.$$
(4.8)

As is clearly seen, eigenvalues cannot be computed explicitly. Five tables indicate the application of our technique to this problem and the effect of ε, θ and m (Tables 1, 2, 3, 4 and 5). By exact, we mean the zeros of Δ(λ) computed by Mathematica.
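The "exact" values reported in the tables are zeros of $\Delta(\lambda)$. They can be reproduced independently by bisection; the sketch below is ours, and the bracket $[2.5,3.5]$ was chosen by inspecting sign changes of $\Delta$.

```python
import math

def Delta(lam):
    # characteristic function (4.7) of Example 1
    return 2 * lam * math.cos(1/3 + lam) - (lam**2 - 1) * math.sin(1/3 + lam)

def bisect(g, a, b, tol=1e-13):
    # standard bisection; assumes g(a) and g(b) have opposite signs
    fa = g(a)
    while b - a > tol:
        c = 0.5 * (a + b)
        if fa * g(c) <= 0:
            b = c
        else:
            a, fa = c, g(c)
    return 0.5 * (a + b)

# the bracket [2.5, 3.5] contains a sign change of Delta
root = bisect(Delta, 2.5, 3.5)
print(root, Delta(root))
```

The same bracketing strategy, applied over successive intervals, recovers the list of eigenvalues against which the Hermite and sinc approximations are compared.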

Table 1 $N=20$, $m=10$, $\theta=1/10$
Table 2 $N=20$, $m=15$, $\theta=1/5$
Table 3 Absolute error $|\lambda_{k}-\lambda_{k,N}|$ for $N=20$, $m=15$, $\theta=1/5$
Table 4 For $N=20$, $m=10$ and $\theta=1/10$, the exact solutions $\lambda_{k}$ are all inside the interval $[a_{-},a_{+}]$ for different values of $\varepsilon$
Table 5 With $N=20$, $m=15$ and $\theta=1/5$, $\lambda_{k}$ are all inside the interval $[a_{-},a_{+}]$ for different values of $\varepsilon$

Figures 1 and 2 illustrate the comparison between $\Delta(\lambda)$ and $\tilde{\Delta}_{N}(\lambda)$ for different values of $m$ and $\theta$. Figures 3 and 4, for $N=20$, $m=10$ and $\theta=1/10$, illustrate the enclosure intervals for $\varepsilon=10^{-10}$ and $\varepsilon=10^{-15}$, respectively. Also, Figures 5 and 6 illustrate the enclosure intervals for $\varepsilon=10^{-10}$ and $\varepsilon=10^{-15}$, respectively, but for $m=15$, $\theta=1/5$.

Figure 1. $\Delta(\lambda)$ and $\tilde{\Delta}_{N}(\lambda)$ with $N=20$, $m=10$ and $\theta=1/10$.

Figure 2. $\Delta(\lambda)$ and $\tilde{\Delta}_{N}(\lambda)$ with $N=20$, $m=15$ and $\theta=1/5$.

Figure 3. $a_{+}$, $\Delta(\lambda)$, $a_{-}$ with $N=20$, $m=10$, $\theta=1/10$ and $\varepsilon=10^{-10}$.

Figure 4. $a_{+}$, $\Delta(\lambda)$, $a_{-}$ with $N=20$, $m=10$, $\theta=1/10$ and $\varepsilon=10^{-15}$.

Figure 5. $a_{+}$, $\Delta(\lambda)$, $a_{-}$ with $N=20$, $m=15$, $\theta=1/5$ and $\varepsilon=10^{-10}$.

Figure 6. $a_{+}$, $\Delta(\lambda)$, $a_{-}$ with $N=20$, $m=15$, $\theta=1/5$ and $\varepsilon=10^{-15}$.

Example 2

The Dirac system

(4.9)
(4.10)

is a special case of the problem treated in the previous section with $r_{1}(x)=r_{2}(x)=x^{2}$, $\alpha_{1}=\beta'_{1}=1$, $\alpha_{2}=\beta_{1}=\beta'_{2}=0$ and $\beta_{2}=1$. The characteristic function is

$$\Omega(\lambda):=\cos\Bigl(\frac{1}{3}+\lambda\Bigr)-\lambda\sin\Bigl(\frac{1}{3}+\lambda\Bigr).$$
(4.11)

The function K(λ) will be

$$K(\lambda):=\cos\lambda-\lambda\sin\lambda.$$
(4.12)

As in the previous example, Figures 7, 8, 9, 10, 11 and 12 illustrate the results of Tables 6, 7, 8, 9 and 10. Figures 7 and 8 illustrate the comparison between $\Omega(\lambda)$ and $\tilde{\Omega}_{N}(\lambda)$ for different values of $m$ and $\theta$. Figures 9 and 10, for $N=20$, $m=6$ and $\theta=1/14$, illustrate the enclosure intervals for $\varepsilon=10^{-10}$ and $\varepsilon=10^{-15}$, respectively. Also, Figures 11 and 12 illustrate the enclosure intervals for $\varepsilon=10^{-10}$ and $\varepsilon=10^{-15}$, respectively, but for $m=12$, $\theta=1/8$.

Figure 7. $\Omega(\lambda)$ and $\tilde{\Omega}_{N}(\lambda)$ with $N=20$, $m=6$ and $\theta=1/14$.

Figure 8. $\Omega(\lambda)$ and $\tilde{\Omega}_{N}(\lambda)$ with $N=20$, $m=12$ and $\theta=1/8$.

Figure 9. $b_{+}$, $\Omega(\lambda)$, $b_{-}$ with $N=20$, $m=6$, $\theta=1/14$ and $\varepsilon=10^{-10}$.

Figure 10. $b_{+}$, $\Omega(\lambda)$, $b_{-}$ with $N=20$, $m=6$, $\theta=1/14$ and $\varepsilon=10^{-15}$.

Figure 11. $b_{+}$, $\Omega(\lambda)$, $b_{-}$ with $N=20$, $m=12$, $\theta=1/8$ and $\varepsilon=10^{-10}$.

Figure 12. $b_{+}$, $\Omega(\lambda)$, $b_{-}$ with $N=20$, $m=12$, $\theta=1/8$ and $\varepsilon=10^{-15}$.

Table 6 $N=20$, $m=6$, $\theta=1/14$
Table 7 $N=20$, $m=12$, $\theta=1/8$
Table 8 Absolute error $|\lambda_{k}-\lambda_{k,N}|$ for $N=20$, $m=12$, $\theta=1/8$
Table 9 For $N=20$, $m=6$ and $\theta=1/14$, the exact solutions $\lambda_{k}$ are all inside the interval $[b_{-},b_{+}]$ for different values of $\varepsilon$
Table 10 With $N=20$, $m=12$ and $\theta=1/8$, $\lambda_{k}$ are all inside the interval $[b_{-},b_{+}]$ for different values of $\varepsilon$

Example 3

The boundary value problem

(4.13)
(4.14)

is a special case of the problem $\Gamma(r,\alpha,\beta,0,\beta')$ with $r_{1}(x)=x$, $r_{2}(x)=1$, $\alpha_{2}=\beta_{1}=\beta'_{2}=1$ and $\alpha_{1}=\beta'_{1}=\beta_{2}=0$. Here the characteristic function is

Ω ( λ ) : = 1 / ( AiryAiPrime [ λ ( 1 λ ) 1 / 3 ] AiryBi [ λ ( 1 λ ) 1 / 3 ] AiryAi [ λ ( 1 λ ) 1 / 3 ] AiryBiPrime [ λ ( 1 λ ) 1 / 3 ] ) × [ λ ( 1 λ ) 2 / 3 AiryAi [ ( 1 + λ ) ( 1 λ ) 1 / 3 ] AiryBi [ λ ( 1 λ ) 1 / 3 ] + AiryAiPrime [ ( λ + 1 ) ( 1 λ ) 1 / 3 ] AiryBi [ λ ( 1 λ ) 1 / 3 ] AiryAiPrime [ λ ( 1 λ ) 1 / 3 ] ( λ ( 1 λ ) 2 / 3 AiryBi [ ( λ + 1 ) ( 1 λ ) 1 / 3 ] + AiryBiPrime [ ( λ + 1 ) ( 1 λ ) 1 / 3 ] ) ] ,
(4.15)

where AiryAi[z] and AiryBi[z] are Airy functions Ai(z) and Bi(z), respectively, and AiryAiPrime[z] and AiryBiPrime[z] are derivatives of Airy functions. The function K(λ) will be

$$K(\lambda):=\cos\lambda-\lambda\sin\lambda.$$
(4.16)

Figures 13, 14 and Tables 11, 12 illustrate the applications of the method to this problem.

Figure 13. $b_{+}$, $\Omega(\lambda)$, $b_{-}$ with $N=20$, $m=16$, $\theta=1/4$ and $\varepsilon=10^{-12}$.

Figure 14. $b_{+}$, $\Omega(\lambda)$, $b_{-}$ with $N=20$, $m=16$, $\theta=1/4$ and $\varepsilon=10^{-15}$.

Table 11 $N=20$, $m=16$, $\theta=1/4$
Table 12 With $N=20$, $m=16$ and $\theta=1/4$, $\lambda_{k}$ are all inside the interval $[b_{-},b_{+}]$ for different values of $\varepsilon$

References

  1. Grozev GR, Rahman QI: Reconstruction of entire functions from irregularly spaced sample points. Can. J. Math. 1996, 48: 777-793.
  2. Higgins JR, Schmeisser G, Voss JJ: The sampling theorem and several equivalent results in analysis. J. Comput. Anal. Appl. 2000, 2: 333-371.
  3. Hinsen G: Irregular sampling of bandlimited $L^{p}$-functions. J. Approx. Theory 1993, 72: 346-364.
  4. Jagerman D, Fogel L: Some general aspects of the sampling theorem. IRE Trans. Inf. Theory 1956, 2: 139-146.
  5. Annaby MH, Asharabi RM: Error analysis associated with uniform Hermite interpolations of bandlimited functions. J. Korean Math. Soc. 2010, 47: 1299-1316.
  6. Higgins JR: Sampling Theory in Fourier and Signal Analysis: Foundations. Oxford University Press, Oxford; 1996.
  7. Butzer PL, Schmeisser G, Stens RL: An introduction to sampling analysis. In Nonuniform Sampling: Theory and Practice. Edited by: Marvasti F. Kluwer Academic, New York; 2001:17-121.
  8. Butzer PL, Higgins JR, Stens RL: Sampling theory of signal analysis. In Development of Mathematics 1950-2000. Birkhäuser, Basel; 2000:193-234.
  9. Annaby MH, Asharabi RM: On sinc-based method in computing eigenvalues of boundary-value problems. SIAM J. Numer. Anal. 2008, 46: 671-690.
  10. Annaby MH, Tharwat MM: On the computation of the eigenvalues of Dirac systems. Calcolo 2012, 49: 221-240.
  11. Annaby MH, Tharwat MM: On computing eigenvalues of second-order linear pencils. IMA J. Numer. Anal. 2007, 27: 366-380.
  12. Annaby MH, Tharwat MM: Sinc-based computations of eigenvalues of Dirac systems. BIT Numer. Math. 2007, 47: 699-713.
  13. Boumenir A, Chanane B: Eigenvalues of S-L systems using sampling theory. Appl. Anal. 1996, 62: 323-334.
  14. Tharwat MM, Bhrawy AH, Yildirim A: Numerical computation of eigenvalues of discontinuous Sturm-Liouville problems with parameter dependent boundary conditions using sinc method. Numer. Algorithms 2012. doi:10.1007/s11075-012-9609-3
  15. Tharwat MM, Bhrawy AH, Yildirim A: Numerical computation of eigenvalues of discontinuous Dirac system using sinc method with error analysis. Int. J. Comput. Math. 2012, 89: 2061-2080.
  16. Lund J, Bowers K: Sinc Methods for Quadrature and Differential Equations. SIAM, Philadelphia; 1992.
  17. Stenger F: Numerical methods based on Whittaker cardinal, or sinc functions. SIAM Rev. 1981, 23: 156-224.
  18. Stenger F: Numerical Methods Based on Sinc and Analytic Functions. Springer, New York; 1993.
  19. Butzer PL, Splettstösser W, Stens RL: The sampling theorem and linear prediction in signal analysis. Jahresber. Dtsch. Math.-Ver. 1988, 90: 1-70.
  20. Jagerman D: Bounds for truncation error of the sampling expansion. SIAM J. Appl. Math. 1966, 14: 714-723.
  21. Boas RP: Entire Functions. Academic Press, New York; 1954.
  22. Levitan BM, Sargsjan IS: Introduction to Spectral Theory: Self-Adjoint Ordinary Differential Operators. Translations of Mathematical Monographs 39. Am. Math. Soc., Providence; 1975.
  23. Levitan BM, Sargsjan IS: Sturm-Liouville and Dirac Operators. Kluwer Academic, Dordrecht; 1991.
  24. Annaby MH, Tharwat MM: The Hermite interpolation approach for computing eigenvalues of Dirac systems. Math. Comput. Model. 2012. doi:10.1016/j.mcm.2012.07.025
  25. Kerimov NB: A boundary value problem for the Dirac system with a spectral parameter in the boundary conditions. Differ. Equ. 2002, 38: 164-174.
  26. Annaby MH, Tharwat MM: On sampling and Dirac systems with eigenparameter in the boundary conditions. J. Appl. Math. Comput. 2011, 36: 291-317.
  27. Boumenir A: Higher approximation of eigenvalues by the sampling method. BIT Numer. Math. 2000, 40: 215-225.
  28. Tharwat MM, Bhrawy AH: Computation of eigenvalues of discontinuous Dirac system using Hermite interpolation technique. Adv. Differ. Equ. 2012. doi:10.1186/1687-1847-2012-59


Acknowledgements

This article was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. The author, therefore, acknowledges with thanks DSR technical and financial support.

Author information


Correspondence to Mohammed M Tharwat.

Additional information

Competing interests

The author declares that he has no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Tharwat, M.M. Computing eigenvalues and Hermite interpolation for Dirac systems with eigenparameter in boundary conditions. Bound Value Probl 2013, 36 (2013). https://doi.org/10.1186/1687-2770-2013-36
