Open Access Research

Approximation of eigenvalues of discontinuous Sturm-Liouville problems with eigenparameter in all boundary conditions

Mohammed M Tharwat1*, Ali H Bhrawy1,2 and Abdulaziz S Alofi1

Author Affiliations

1 Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah, Saudi Arabia

2 Permanent address: Department of Mathematics, Faculty of Science, Beni-Suef University, Beni-Suef, Egypt


Boundary Value Problems 2013, 2013:132  doi:10.1186/1687-2770-2013-132

The electronic version of this article is the complete one and can be found online at: http://www.boundaryvalueproblems.com/content/2013/1/132


Received: 12 March 2013
Accepted: 4 May 2013
Published: 20 May 2013

© 2013 Tharwat et al.; licensee Springer

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, we apply a sinc-Gaussian technique to compute approximate values of the eigenvalues of Sturm-Liouville problems which contain an eigenparameter appearing linearly in two boundary conditions, in addition to an internal point of discontinuity. The error of this method decays exponentially in terms of the number of involved samples. Therefore the accuracy of the new technique is higher than that of the classical sinc method. Worked numerical examples with tables and illustrative figures are given at the end of the paper.

MSC: 34L16, 94A20, 65L15.

Keywords:
sampling theory; Sturm-Liouville problems; transmission conditions; sinc-Gaussian; sinc method; truncation and amplitude errors

1 Introduction

By a sampling theorem we mean a representation of a certain function in terms of its values at a discrete set of points. In communication theory, it means the reconstruction of a signal (information) from a discrete set of data. This has several applications, especially in the transmission of information. If the signal is band-limited, the sampling process can be carried out via the celebrated Whittaker-Kotel’nikov-Shannon (WKS) sampling theorem [1-3]. By a band-limited signal with bandwidth $\tau$, $\tau > 0$, we mean a function in the Paley-Wiener space

$B_\tau := \left\{ f \text{ entire} : |f(\lambda)| \le C\,e^{\tau|\lambda|},\ \int_{\mathbb{R}} |f(\lambda)|^2\, d\lambda < \infty \right\}$. (1.1)

The WKS sampling theorem is a fundamental result in information theory. It states that any $f \in B_\tau$ can be reconstructed from its sampled values $f(x_k)$, where $x_k = k\pi/\tau$ and $k \in \mathbb{Z}$, by the formula

$f(x) = \sum_{k\in\mathbb{Z}} f(x_k)\,\operatorname{sinc}(\tau x/\pi - k), \quad x \in \mathbb{R}$, (1.2)

where

$\operatorname{sinc}(x) := \begin{cases} \dfrac{\sin \pi x}{\pi x}, & x \in \mathbb{R}\setminus\{0\}, \\ 1, & x = 0, \end{cases}$ (1.3)

and the series converges absolutely and uniformly on any finite interval of ℝ. Expansion (1.2) is used in several approximation problems which are known as sinc methods; see, e.g., [4-7]. In particular, the sinc method is used to approximate eigenvalues of boundary value problems; see, for example, [8-14]. The sinc function has a slow rate of decay at infinity, of order $O(|x|^{-1})$. There have been several attempts to improve this rate of decay, one of the most interesting being to multiply the sinc function in (1.2) by a kernel function; see, e.g., [15-17]. Let $h \in (0, \pi/\tau]$ and $\gamma \in (0, \pi - h\tau)$. Assume that $\Phi \in B_\gamma$ satisfies $\Phi(0) = 1$; then for $f \in B_\tau$ we have the expansion [18]

$f(x) = \sum_{n=-\infty}^{\infty} f(nh)\,\operatorname{sinc}(h^{-1}\pi x - n\pi)\,\Phi(h^{-1}x - n)$. (1.4)

The speed of convergence of the series in (1.4) is determined by the decay of $|\Phi(x)|$. However, the decay of an entire function of exponential type cannot be as fast as $e^{-c|x|}$ as $|x| \to \infty$, for some positive $c$ [18]. In [19], Qian introduced the following regularized sampling formula. For $h \in (0, \pi/\tau]$, $N \in \mathbb{N}$ and $r > 0$, Qian defined the operator [19]

$(\mathcal{G}_{h,N}f)(x) = \sum_{n\in\mathbb{Z}_N(x)} f(nh)\, S_n(h^{-1}\pi x)\, G\!\left(\frac{x - nh}{\sqrt{2r}\,h}\right), \quad x \in \mathbb{R}$, (1.5)

where $G(t) := \exp(-t^2)$ is the Gaussian function, $S_n(h^{-1}\pi x) := \operatorname{sinc}(h^{-1}\pi x - n\pi)$, $\mathbb{Z}_N(x) := \{ n \in \mathbb{Z} : |[h^{-1}x] - n| \le N \}$ and $[x]$ denotes the integer part of $x \in \mathbb{R}$; see also [20,21]. Qian also derived the following error bound. If $f \in B_\tau$, $h \in (0, \pi/\tau]$ and $a := \min\{ r(\pi - h\tau), (N-2)/r \} \ge 1$, then [19,20]

$|f(x) - (\mathcal{G}_{h,N}f)(x)| \le \frac{\sqrt{2\tau}}{\pi}\,\|f\|_2\,\frac{\sqrt{\pi}}{2a^2}\left(2\sqrt{\pi}\,a + e^{3/2} r^2\right) e^{-a^2/2}, \quad x \in \mathbb{R}$. (1.6)
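For illustration, the following minimal Python sketch (ours, not part of the paper, whose computations are carried out in MATHEMATICA) compares the operator (1.5) with the truncated classical expansion (1.2) on a test function; the test function, the parameters h, N, r and the Gaussian argument follow the reconstruction of (1.5) above and are otherwise our own choices.

import numpy as np

tau = np.pi
f = lambda t: np.sinc(t / 2.0) ** 2            # a test function in B_pi with finite L2 norm

def qian_op(x, h=0.5, N=15, r=3.0):
    # Gaussian-regularized operator (1.5)
    n0 = int(np.floor(x / h))                  # [x/h], the integer part
    ns = np.arange(n0 - N, n0 + N + 1)         # Z_N(x)
    kernel = np.sinc(x / h - ns) * np.exp(-((x - ns * h) / (np.sqrt(2.0 * r) * h)) ** 2)
    return np.dot(f(ns * h), kernel)

def wks_trunc(x, h=0.5, N=15):
    # classical expansion (1.2) truncated to the same window
    n0 = int(np.floor(x / h))
    ns = np.arange(n0 - N, n0 + N + 1)
    return np.dot(f(ns * h), np.sinc(x / h - ns))

x = 0.7
print(abs(f(x) - qian_op(x)), abs(f(x) - wks_trunc(x)))

Here $r \approx 3$ roughly balances the two terms in $a = \min\{ r(\pi - h\tau), (N-2)/r \}$ appearing in (1.6).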

In [18], Schmeisser and Stenger extended the operator (1.5) to the complex domain ℂ. For $\tau > 0$, $h \in (0, \pi/\tau]$ and $\omega := (\pi - h\tau)/2$, they defined the operator [18]

$(\mathcal{G}_{h,N}f)(z) := \sum_{n\in\mathbb{Z}_N(z)} f(nh)\, S_n\!\left(\frac{\pi z}{h}\right) G\!\left(\sqrt{\frac{\omega}{N}}\,\frac{z - nh}{h}\right)$, (1.7)

where $\mathbb{Z}_N(z) := \{ n \in \mathbb{Z} : |[h^{-1}\Re z + 1/2] - n| \le N \}$ and $N \in \mathbb{N}$. Note that the summation limits in (1.7) depend on the real part of $z$. Schmeisser and Stenger [18] proved that if $f$ is an entire function such that

$|f(\xi + i\eta)| \le \phi(|\xi|)\, e^{\tau|\eta|}, \quad \xi, \eta \in \mathbb{R}$, (1.8)

where $\phi$ is a non-decreasing, non-negative function on $[0, \infty)$ and $\tau \ge 0$, then for $h \in (0, \pi/\tau)$, $\omega := (\pi - h\tau)/2$, $N \in \mathbb{N}$ and $|\Im z| < N$, we have

$|f(z) - (\mathcal{G}_{h,N}f)(z)| \le 2\,|\sin(h^{-1}\pi z)|\,\phi\big(|\Re z| + h(N+1)\big)\,\frac{e^{-\omega N}}{\sqrt{\pi\omega N}}\,\beta_N(h^{-1}\Im z), \quad z \in \mathbb{C}$, (1.9)

where

$\beta_N(t) := \cosh(2\omega t) + \frac{2\,e^{\omega t^2/N}}{\sqrt{\pi\omega N}\,[1 - (t/N)^2]} + \frac{1}{2}\left[\frac{e^{2\omega t}}{e^{2\pi(N-t)} - 1} + \frac{e^{-2\omega t}}{e^{2\pi(N+t)} - 1}\right]$. (1.10)
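Since only real values of the spectral parameter will be needed below, we record for later use the value of (1.10) at $t = 0$ (with $\beta_N$ as reconstructed above):

$\beta_N(0) = 1 + \frac{2}{\sqrt{\pi\omega N}} + \frac{1}{e^{2\pi N} - 1}$,

which is the quantity that enters (2.21) in Section 2.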

The amplitude error arises when the exact values $f(nh)$ in (1.7) are replaced by approximations $\tilde{f}(nh)$. We assume that the $\tilde{f}(nh)$ are close to $f(nh)$, i.e., there is a sufficiently small $\varepsilon > 0$ such that

$\sup_{n\in\mathbb{Z}_N(z)} |f(nh) - \tilde{f}(nh)| < \varepsilon$. (1.11)

Let $h \in (0, \pi/\tau)$, $\omega := (\pi - h\tau)/2$ and $N \in \mathbb{N}$ be fixed. The authors of [22] proved that if (1.11) holds, then for $|\Im z| < N$ we have

$|(\mathcal{G}_{h,N}f)(z) - (\mathcal{G}_{h,N}\tilde{f})(z)| \le A_{\varepsilon,N}(\Im z)$, (1.12)

where

$A_{\varepsilon,N}(\Im z) = 2\varepsilon\, e^{\omega/4}\sqrt{N}\left(1 + \sqrt{\frac{N}{\pi\omega}}\right)\exp\!\big((\omega + \pi)\,h^{-1}|\Im z|\big)$. (1.13)
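For later use with real arguments (so that $\Im z = 0$), the bound (1.13) as reconstructed above reduces to

$A_{\varepsilon,N}(0) = 2\varepsilon\, e^{\omega/4}\sqrt{N}\left(1 + \sqrt{\frac{N}{\pi\omega}}\right)$,

which is the constant that appears in (2.23)-(2.26) in Section 2.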

It is well known that many topics in mathematical physics require the investigation of the eigenvalues and eigenfunctions of Sturm-Liouville type boundary value problems. Therefore, Sturmian theory is one of the most active and extensively developing fields of theoretical and applied mathematics. In particular, in recent years important results in this field have been obtained for the case when the eigenparameter appears not only in the differential equation but also in the boundary conditions. The literature on such results is voluminous, and we refer to [23-27] and the bibliographies cited therein. In particular, [24,26,28,29] contain many references to problems in physics and mechanics. Our task is to use formula (1.7) to compute numerically the eigenvalues of the differential equation

$-y''(x,\mu) + q(x)\,y(x,\mu) = \mu^2 y(x,\mu), \quad x \in [-1,0)\cup(0,1]$, (1.14)

with boundary conditions

$L_1(y) := (\alpha_1'\mu^2 - \alpha_1)\,y(-1,\mu) - (\alpha_2'\mu^2 - \alpha_2)\,y'(-1,\mu) = 0$, (1.15)

$L_2(y) := (\beta_1'\mu^2 + \beta_1)\,y(1,\mu) - (\beta_2'\mu^2 + \beta_2)\,y'(1,\mu) = 0$, (1.16)

and transmission conditions

$L_3(y) := \gamma_1\,y(0^-,\mu) - \delta_1\,y(0^+,\mu) = 0$, (1.17)

$L_4(y) := \gamma_2\,y'(0^-,\mu) - \delta_2\,y'(0^+,\mu) = 0$, (1.18)

where $\mu$ is a complex spectral parameter; $q(x)$ is a given real-valued function which is continuous on $[-1,0)$ and $(0,1]$ and has finite limits $q(0^{\pm}) = \lim_{x\to 0^{\pm}} q(x)$; $\gamma_i$, $\delta_i$, $\alpha_i$, $\beta_i$, $\alpha_i'$, $\beta_i'$ ($i = 1,2$) are real numbers; $\gamma_i \neq 0$, $\delta_i \neq 0$ ($i = 1,2$); $\gamma_1\gamma_2 = \delta_1\delta_2$ and

$\det\begin{pmatrix} \alpha_1' & \alpha_1 \\ \alpha_2' & \alpha_2 \end{pmatrix} > 0, \qquad \det\begin{pmatrix} \beta_1' & \beta_1 \\ \beta_2' & \beta_2 \end{pmatrix} > 0$. (1.19)

The eigenvalue problem (1.14)-(1.18) with $(\alpha_1', \alpha_2') \neq (0,0) \neq (\beta_1', \beta_2')$ is a Sturm-Liouville problem which contains an eigenparameter $\mu$ in two boundary conditions, in addition to an internal point of discontinuity. In [30], Tharwat proved that the eigenvalue problem (1.14)-(1.18) has a denumerable set of real and simple eigenvalues, using techniques similar to those established in [23,24,31], where sampling theorems have also been established. Tharwat et al. [14] computed the eigenvalues of problem (1.14)-(1.18) by the sinc method. The basic idea of the sinc method is as follows: the eigenvalues are characterized as the zeros of an analytic function $F(\mu)$ which can be written in the form $F(\mu) = K(\mu) + U(\mu)$, where $K(\mu)$ (the known part) is the function corresponding to the case $q \equiv 0$. The ingenuity of the approach lies in choosing $F(\mu)$ so that $U(\mu) \in B_\tau$ (the unknown part) can be approximated by the WKS sampling theorem once its values at some equally spaced points are known; see [8-14].

Our goal in this paper is to improve the results presented by Tharwat et al. [14] under weaker conditions. Here we use the sinc-Gaussian sampling formula (1.7) to compute the eigenvalues of (1.14)-(1.18) numerically. As expected, the new method reduces the error bounds remarkably; see the examples at the end of this paper. We use the same idea as in the sinc method, but the unknown part $U(\mu)$ is only required to be an entire function of exponential type satisfying (1.8); that is, $U(\mu)$ need not be an $L^2$-function. We then approximate $U(\mu)$ using (1.7) and obtain better results. We would like to mention that only a few papers compute eigenvalues by the sinc-Gaussian method; see [22,32,33]. In Section 2 we derive the sinc-Gaussian technique for computing the eigenvalues of (1.14)-(1.18) with error estimates. The last section contains some illustrative examples.

2 Treatment of the eigenvalue problem (1.14)-(1.18)

In this section we derive approximate values of the eigenvalues of the eigenvalue problem (1.14)-(1.18). Recall that the problem (1.14)-(1.18) has a denumerable set of real and simple eigenvalues, cf.[30]. Let

$y(x,\mu) = \begin{cases} y_1(x,\mu), & x \in [-1,0), \\ y_2(x,\mu), & x \in (0,1], \end{cases}$

denote the solution of (1.14) satisfying the following initial conditions:

$\begin{pmatrix} y_1(-1,\mu) & y_2(0^+,\mu) \\ y_1'(-1,\mu) & y_2'(0^+,\mu) \end{pmatrix} = \begin{pmatrix} \mu^2\alpha_2' - \alpha_2 & \frac{\gamma_1}{\delta_1}\,y_1(0^-,\mu) \\ \mu^2\alpha_1' - \alpha_1 & \frac{\gamma_2}{\delta_2}\,y_1'(0^-,\mu) \end{pmatrix}$. (2.1)

Since $y(\cdot,\mu)$ satisfies (1.15), (1.17) and (1.18), the eigenvalues of problem (1.14)-(1.18) are the zeros of the characteristic determinant, cf. [30],

$\Omega(\mu) := (\beta_1'\mu^2 + \beta_1)\,y_2(1,\mu) - (\beta_2'\mu^2 + \beta_2)\,y_2'(1,\mu)$. (2.2)

According to [30], see also [34-40], the function $\Omega(\mu)$ is an entire function of $\mu$ whose zeros are real and simple. We aim to approximate $\Omega(\mu)$, and hence its zeros, i.e., the eigenvalues, by using (1.7). The idea is to split $\Omega(\mu)$ into two parts, one known and the other unknown, where the unknown part is an entire function of exponential type satisfying (1.8). We then approximate the unknown part using (1.7) to obtain the approximation of $\Omega(\mu)$ and then compute the approximate zeros. Using the method of variation of constants, the solution $y(\cdot,\mu)$ satisfies the Volterra integral equations, cf. [30],

$y_1(x,\mu) = (\mu^2\alpha_2' - \alpha_2)\cos[\mu(x+1)] + (\mu^2\alpha_1' - \alpha_1)\,\frac{\sin[\mu(x+1)]}{\mu} + (T_1 y_1)(x,\mu)$, (2.3)

$y_2(x,\mu) = \frac{\gamma_1}{\delta_1}\,y_1(0^-,\mu)\cos[\mu x] + \frac{\gamma_2}{\delta_2}\,y_1'(0^-,\mu)\,\frac{\sin[\mu x]}{\mu} + (T_2 y_2)(x,\mu)$, (2.4)

where T 1 and T 2 are the Volterra operators

$(T_1 y_1)(x,\mu) := \int_{-1}^{x} \frac{\sin[\mu(x-t)]}{\mu}\, q(t)\, y_1(t,\mu)\, dt$, (2.5)

$(T_2 y_2)(x,\mu) := \int_{0}^{x} \frac{\sin[\mu(x-t)]}{\mu}\, q(t)\, y_2(t,\mu)\, dt$. (2.6)

Differentiating (2.3) and (2.4), we obtain

$y_1'(x,\mu) = -(\mu^2\alpha_2' - \alpha_2)\,\mu\sin[\mu(x+1)] + (\mu^2\alpha_1' - \alpha_1)\cos[\mu(x+1)] + (\widetilde{T}_1 y_1)(x,\mu)$, (2.7)

$y_2'(x,\mu) = -\frac{\gamma_1}{\delta_1}\,\mu\, y_1(0^-,\mu)\sin[\mu x] + \frac{\gamma_2}{\delta_2}\, y_1'(0^-,\mu)\cos[\mu x] + (\widetilde{T}_2 y_2)(x,\mu)$, (2.8)

where $\widetilde{T}_1$ and $\widetilde{T}_2$ are the Volterra-type integral operators

$(\widetilde{T}_1 y_1)(x,\mu) := \int_{-1}^{x} \cos[\mu(x-t)]\, q(t)\, y_1(t,\mu)\, dt$, (2.9)

$(\widetilde{T}_2 y_2)(x,\mu) := \int_{0}^{x} \cos[\mu(x-t)]\, q(t)\, y_2(t,\mu)\, dt$. (2.10)
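As a consistency check of the representations reconstructed above: setting $x = -1$ in (2.3) and (2.7) makes the integral terms vanish and gives $y_1(-1,\mu) = \mu^2\alpha_2' - \alpha_2$, $y_1'(-1,\mu) = \mu^2\alpha_1' - \alpha_1$, while setting $x = 0$ in (2.4) and (2.8) gives $y_2(0^+,\mu) = \frac{\gamma_1}{\delta_1}\,y_1(0^-,\mu)$, $y_2'(0^+,\mu) = \frac{\gamma_2}{\delta_2}\,y_1'(0^-,\mu)$, in agreement with the initial conditions (2.1).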

Define $\vartheta_i(\cdot,\mu)$ and $\widetilde{\vartheta}_i(\cdot,\mu)$, $i = 1, 2$, by

$\vartheta_i(x,\mu) := (T_i y_i)(x,\mu), \qquad \widetilde{\vartheta}_i(x,\mu) := (\widetilde{T}_i y_i)(x,\mu)$. (2.11)

In the following, we make use of the known estimates [41]

$|\cos z| \le e^{|\Im z|}, \qquad \left|\frac{\sin z}{z}\right| \le \frac{c_0}{1+|z|}\, e^{|\Im z|}$, (2.12)

where $c_0$ is some constant (we may take $c_0 \approx 1.72$, cf. [41]). For convenience, we define the constants

$q_1 := \int_{-1}^{0} |q(t)|\,dt, \qquad q_2 := \int_{0}^{1} |q(t)|\,dt, \qquad c_1 := \max(|\alpha_1|, |\alpha_2|, |\alpha_1'|, |\alpha_2'|), \qquad c_2 := \exp(c_0 q_1), \qquad c_3 := 1 + c_0 c_2 q_1, \qquad c_4 := (1 + c_0)\left[\frac{|\gamma_1|}{|\delta_1|}\, c_3 + \frac{|\gamma_2|}{|\delta_2|}\, c_0 (1 + c_3 q_1)\right], \qquad c_5 := \exp(c_0 q_2), \qquad c_6 := 1 + c_0 q_2 c_5$. (2.13)

As in [14], we split Ω ( μ ) into two parts via

$\Omega(\mu) := K(\mu) + U(\mu)$, (2.14)

where K ( μ ) is the known part

$K(\mu) := (\beta_1'\mu^2 + \beta_1)\left[(\mu^2\alpha_2' - \alpha_2)\left(\frac{\gamma_1}{\delta_1}\cos^2\mu - \frac{\gamma_2}{\delta_2}\sin^2\mu\right) + (\mu^2\alpha_1' - \alpha_1)\left(\frac{\gamma_1}{\delta_1} + \frac{\gamma_2}{\delta_2}\right)\frac{\cos\mu\,\sin\mu}{\mu}\right] + (\beta_2'\mu^2 + \beta_2)\left[(\mu^2\alpha_2' - \alpha_2)\left(\frac{\gamma_1}{\delta_1} + \frac{\gamma_2}{\delta_2}\right)\mu\,\cos\mu\,\sin\mu - (\mu^2\alpha_1' - \alpha_1)\left(\frac{\gamma_2}{\delta_2}\cos^2\mu - \frac{\gamma_1}{\delta_1}\sin^2\mu\right)\right]$ (2.15)

and U ( μ ) is the unknown one

$U(\mu) := \frac{\gamma_1}{\delta_1}\left[(\beta_1'\mu^2 + \beta_1)\cos\mu + (\beta_2'\mu^2 + \beta_2)\,\mu\sin\mu\right]\vartheta_1(0^-,\mu) + (\beta_1'\mu^2 + \beta_1)\,\vartheta_2(1,\mu) + \frac{\gamma_2}{\delta_2}\left[(\beta_1'\mu^2 + \beta_1)\frac{\sin\mu}{\mu} - (\beta_2'\mu^2 + \beta_2)\cos\mu\right]\widetilde{\vartheta}_1(0^-,\mu) - (\beta_2'\mu^2 + \beta_2)\,\widetilde{\vartheta}_2(1,\mu)$. (2.16)
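For the reader's convenience we sketch, under the representations reconstructed above, how the splitting (2.14)-(2.16) arises: evaluating (2.4) and (2.8) at $x = 1$ and inserting the result into (2.2) gives

$\Omega(\mu) = (\beta_1'\mu^2 + \beta_1)\left[\frac{\gamma_1}{\delta_1}\,y_1(0^-,\mu)\cos\mu + \frac{\gamma_2}{\delta_2}\,y_1'(0^-,\mu)\frac{\sin\mu}{\mu} + \vartheta_2(1,\mu)\right] - (\beta_2'\mu^2 + \beta_2)\left[-\frac{\gamma_1}{\delta_1}\,\mu\, y_1(0^-,\mu)\sin\mu + \frac{\gamma_2}{\delta_2}\,y_1'(0^-,\mu)\cos\mu + \widetilde{\vartheta}_2(1,\mu)\right];$

substituting (2.3) and (2.7) at $x = 0$ for $y_1(0^-,\mu)$ and $y_1'(0^-,\mu)$ and collecting the terms free of $\vartheta_1, \widetilde{\vartheta}_1, \vartheta_2, \widetilde{\vartheta}_2$ produces $K(\mu)$, while the remaining terms constitute $U(\mu)$.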

The function $U(\mu)$ is entire in $\mu$ and satisfies, cf. [14],

$|U(\mu)| \le \phi(|\mu|)\, e^{2|\mu|}, \quad \mu \in \mathbb{C}$, (2.17)

where

$\phi(|\mu|) := M\,(1 + |\mu|^2)^2$, (2.18)

and

$M := c_1 c\,(1 + c_0)^2 q_1\left[c_0 c_2\,\frac{|\gamma_1|}{|\delta_1|} + c_3\,\frac{|\gamma_2|}{|\delta_2|}\right] + c_1 c_4 c\, q_2 (c_6 + c_0 c_5), \qquad c := \max\{ |\beta_1|, |\beta_2|, |\beta_1'|, |\beta_2'| \}$. (2.19)

Thus $U(\mu)$ is an entire function of exponential type $\tau = 2$. In what follows we let $\mu \in \mathbb{R}$, since all eigenvalues are real. We now approximate $U(\mu)$ using the operator (1.7) with $h \in (0, \pi/2)$ and $\omega := (\pi - 2h)/2$; then, from (1.9), we obtain

$|U(\mu) - (\mathcal{G}_{h,N}U)(\mu)| \le T_{h,N}(\mu)$, (2.20)

where

$T_{h,N}(\mu) := 2\,|\sin(h^{-1}\pi\mu)|\,\phi\big(|\mu| + h(N+1)\big)\,\frac{e^{-\omega N}}{\sqrt{\pi\omega N}}\,\beta_N(0), \quad \mu \in \mathbb{R}$. (2.21)

The samples $U(nh) = \Omega(nh) - K(nh)$, $n \in \mathbb{Z}_N(\mu)$, cannot in general be computed explicitly. We approximate them numerically by solving the initial-value problems defined by (1.14) and (2.1), obtaining the approximate values $\widetilde{U}(nh)$, $n \in \mathbb{Z}_N(\mu)$, i.e., $\widetilde{U}(nh) = \widetilde{\Omega}(nh) - K(nh)$. Here we use the computer algebra system MATHEMATICA to obtain the approximate solutions with the required accuracy. A separate study of the effect of different numerical schemes and of the computational cost would nevertheless be interesting. Accordingly, we have the explicit expansion

$(\mathcal{G}_{h,N}\widetilde{U})(\mu) := \sum_{n\in\mathbb{Z}_N(\mu)} \widetilde{U}(nh)\, S_n\!\left(\frac{\pi\mu}{h}\right) G\!\left(\sqrt{\frac{\omega}{N}}\,\frac{\mu - nh}{h}\right)$. (2.22)

Therefore we get, cf. (1.12),

$|(\mathcal{G}_{h,N}U)(\mu) - (\mathcal{G}_{h,N}\widetilde{U})(\mu)| \le A_{\varepsilon,N}(0), \quad \mu \in \mathbb{R}$. (2.23)

Now let $\widetilde{\Omega}_N(\mu) := K(\mu) + (\mathcal{G}_{h,N}\widetilde{U})(\mu)$. From (2.20) and (2.23) we obtain

$|\Omega(\mu) - \widetilde{\Omega}_N(\mu)| \le T_{h,N}(\mu) + A_{\varepsilon,N}(0), \quad \mu \in \mathbb{R}$. (2.24)
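To make the procedure concrete, we include a minimal Python sketch of the whole construction (the computations reported in this paper are carried out in MATHEMATICA). The routine names, the SciPy initial-value solver, the tolerances and the placeholder problem data are our own illustrative choices; the data are written in the (reconstructed) notation of (1.15)-(1.18) and (2.1)-(2.2), and the known part K is obtained simply by running the same shooting routine with q ≡ 0.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Hypothetical placeholder data, to be replaced by the data of the problem at hand.
a1, a2, a1p, a2p = 0.0, 1.0, 1.0, 0.0      # alpha_1, alpha_2, alpha_1', alpha_2'
b1, b2, b1p, b2p = 0.0, 1.0, 1.0, 0.0      # beta_1,  beta_2,  beta_1',  beta_2'
r1, r2 = 1.0, 1.0                          # gamma_1/delta_1, gamma_2/delta_2
q_left  = lambda x: -1.0                   # q on [-1, 0)
q_right = lambda x: -2.0                   # q on (0, 1]

def omega(mu, ql=q_left, qr=q_right):
    # Shooting evaluation of the characteristic determinant (2.2):
    # solve -y'' + q y = mu^2 y on [-1,0] with the initial data (2.1),
    # pass through the transmission conditions, then integrate on [0,1].
    y0 = [mu**2 * a2p - a2, mu**2 * a1p - a1]
    left = solve_ivp(lambda x, y: [y[1], (ql(x) - mu**2) * y[0]],
                     (-1.0, 0.0), y0, rtol=1e-10, atol=1e-12)
    yl, ypl = left.y[0, -1], left.y[1, -1]
    right = solve_ivp(lambda x, y: [y[1], (qr(x) - mu**2) * y[0]],
                      (0.0, 1.0), [r1 * yl, r2 * ypl], rtol=1e-10, atol=1e-12)
    yr, ypr = right.y[0, -1], right.y[1, -1]
    return (b1p * mu**2 + b1) * yr - (b2p * mu**2 + b2) * ypr

def K(mu):
    # known part: the same characteristic function with q == 0
    return omega(mu, ql=lambda x: 0.0, qr=lambda x: 0.0)

def omega_tilde(mu, h=0.1, N=20):
    # K(mu) plus the sinc-Gaussian reconstruction (2.22) of the unknown part
    w = (np.pi - 2.0 * h) / 2.0                           # omega, with tau = 2
    n0 = int(np.floor(mu / h + 0.5))
    ns = np.arange(n0 - N, n0 + N + 1)                    # Z_N(mu)
    U = np.array([omega(n * h) - K(n * h) for n in ns])   # approximate samples U~(nh)
    kernel = np.sinc(mu / h - ns) * np.exp(-(w / N) * ((mu - ns * h) / h) ** 2)
    return K(mu) + np.dot(U, kernel)

# Locate sign changes of omega_tilde on a coarse grid and refine by Brent's method.
# (Slow but simple; in practice the samples would be computed once and reused.)
grid = np.linspace(0.2, 6.0, 60)
vals = [omega_tilde(m) for m in grid]
roots = [brentq(omega_tilde, grid[i], grid[i + 1])
         for i in range(len(grid) - 1) if vals[i] * vals[i + 1] < 0]
print(roots)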

Let $\mu^2$ be an eigenvalue and let $\mu_N$ be its desired approximation, i.e., $\Omega(\mu) = 0$ and $\widetilde{\Omega}_N(\mu_N) = 0$. From (2.24) we have $|\widetilde{\Omega}_N(\mu)| \le T_{h,N}(\mu) + A_{\varepsilon,N}(0)$. Define the curves

$a_{\pm}(\mu) = \widetilde{\Omega}_N(\mu) \pm \big(T_{h,N}(\mu) + A_{\varepsilon,N}(0)\big)$. (2.25)

The curves $a_+(\mu)$, $a_-(\mu)$ trap the curve of $\Omega(\mu)$ for suitably large N. Hence the closure interval is determined by solving $a_{\pm}(\mu) = 0$, which gives the interval

$I_{\varepsilon,N} := [a_-, a_+]$.

It is worthwhile to mention that the simplicity of the eigenvalues guarantees the existence of approximate eigenvalues, i.e., of the $\mu_N$ for which $\widetilde{\Omega}_N(\mu_N) = 0$. Next we estimate the error $|\mu - \mu_N|$ for an eigenvalue $\mu$.

Theorem 2.1 Let $\mu^2$ be an eigenvalue of (1.14)-(1.18) and let $\mu_N$ be its approximation. Then, for $\mu \in \mathbb{R}$, we have the following estimate:

$|\mu - \mu_N| < \frac{T_{h,N}(\mu_N) + A_{\varepsilon,N}(0)}{\inf_{\zeta \in I_{\varepsilon,N}} |\Omega'(\zeta)|}$, (2.26)

where the interval $I_{\varepsilon,N}$ is defined above.

Proof Replacing μ by μ N in (2.24), we obtain

$|\Omega(\mu_N) - \Omega(\mu)| < T_{h,N}(\mu_N) + A_{\varepsilon,N}(0)$, (2.27)

where we have used $\widetilde{\Omega}_N(\mu_N) = \Omega(\mu) = 0$. The mean value theorem then yields, for some $\zeta \in J_{\varepsilon,N} := [\min(\mu, \mu_N), \max(\mu, \mu_N)]$,

$|(\mu - \mu_N)\,\Omega'(\zeta)| \le T_{h,N}(\mu_N) + A_{\varepsilon,N}(0), \quad \zeta \in J_{\varepsilon,N} \subseteq I_{\varepsilon,N}$. (2.28)

Since the eigenvalue is simple and N is sufficiently large, $\inf_{\zeta \in I_{\varepsilon,N}} |\Omega'(\zeta)| > 0$, and we obtain (2.26). □

3 Examples

This section includes two examples illustrating the sinc-Gaussian method. Both examples were computed in [14] with the classical sinc method, and it is clearly seen that the sinc-Gaussian method gives remarkably better results. In these two examples we indicate the effect of the amplitude error on the method by determining enclosure intervals for different values of ε. We also indicate the effect of N and h by several choices. We would like to mention that MATHEMATICA has been used to obtain the exact values for these examples, where the eigenvalues cannot be computed in closed form. MATHEMATICA is also used to round the exact eigenvalues, which are square roots. Each example is exhibited via figures that accurately illustrate the procedure near some of the approximated eigenvalues. More explanations are given below.

Example 1 Consider the boundary value problem

$-y''(x,\mu) + q(x)\,y(x,\mu) = \mu^2 y(x,\mu), \quad x \in [-1,0)\cup(0,1]$, (3.1)

$\mu^2 y(-1,\mu) + y'(-1,\mu) = 0, \qquad \mu^2 y(1,\mu) - y'(1,\mu) = 0$, (3.2)

$y(0^-,\mu) - y(0^+,\mu) = 0, \qquad y'(0^-,\mu) - y'(0^+,\mu) = 0$. (3.3)

Here $\beta_1' = \beta_2 = \alpha_1' = \alpha_2 = 1$, $\beta_1 = \beta_2' = \alpha_1 = \alpha_2' = 0$, $\gamma_1 = \delta_1 = 2$, $\gamma_2 = \delta_2 = 1/2$ and

$q(x) = \begin{cases} -1, & x \in [-1,0), \\ -2, & x \in (0,1]. \end{cases}$ (3.4)

The characteristic function is

$\Omega(\mu) = \frac{1}{\sqrt{1+\mu^2}\,\sqrt{2+\mu^2}}\Big[\sin\sqrt{1+\mu^2}\,\Big(\sqrt{2+\mu^2}\,(\mu^4-\mu^2-1)\cos\sqrt{2+\mu^2} - \mu^2(3+2\mu^2)\sin\sqrt{2+\mu^2}\Big) - \sqrt{1+\mu^2}\,\cos\sqrt{1+\mu^2}\,\Big(2\mu^2\sqrt{2+\mu^2}\cos\sqrt{2+\mu^2} + (\mu^4-\mu^2-2)\sin\sqrt{2+\mu^2}\Big)\Big]$. (3.5)

The function K ( μ ) will be

$K(\mu) = \mu\,(1+\mu^2)\sin 2\mu$. (3.6)

As is clearly seen, the eigenvalues cannot be computed explicitly. Tables 1, 2, 3 illustrate the application of our technique to this problem and the effect of ε. By exact we mean the zeros of $\Omega(\mu)$ computed by MATHEMATICA.
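As a further, purely illustrative remark, the zeros of the known part (3.6) lie at $\mu = k\pi/2$, which provides natural starting brackets for the root search. The snippet below reuses the hypothetical routine omega_tilde from the sketch following (2.24), assuming it has been set up with the data of this example and that each bracket contains a sign change.

import numpy as np
from scipy.optimize import brentq

for k in range(1, 5):
    c = k * np.pi / 2.0                       # zero of the known part K in (3.6)
    mu_k = brentq(omega_tilde, c - 0.7, c + 0.7)
    print(k, mu_k)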

Table 1. The approximation $\mu_{k,N}$ and the exact solution $\mu_k$ for different choices of h and N

Table 2. Absolute error $|\mu_k - \mu_{k,N}|$

Table 3. For $N = 20$ and $h = 0.1$, the exact solutions $\mu_k$ are all inside the interval $[a_-, a_+]$ for different values of ε

Figures 1 and 2 illustrate the enclosure intervals dominating $\mu_1$ for $N = 20$, $h = 0.1$ and $\varepsilon = 10^{-2}$, $\varepsilon = 10^{-5}$, respectively. The middle curve represents $\Omega(\mu)$, while the upper and lower curves represent $a_+(\mu)$ and $a_-(\mu)$, respectively. We notice that for $\varepsilon = 10^{-5}$ the curves are almost identical. Similarly, Figures 3 and 4 illustrate the enclosure intervals dominating $\mu_4$ for $h = 0.1$, $N = 20$ and $\varepsilon = 10^{-2}$, $\varepsilon = 10^{-5}$, respectively.

Figure 1. The enclosure interval dominating $\mu_1$ for $h = 0.1$, $N = 20$ and $\varepsilon = 10^{-2}$.

Figure 2. The enclosure interval dominating $\mu_1$ for $h = 0.1$, $N = 20$ and $\varepsilon = 10^{-5}$.

Figure 3. The enclosure interval dominating $\mu_4$ for $h = 0.1$, $N = 20$ and $\varepsilon = 10^{-2}$.

Figure 4. The enclosure interval dominating $\mu_4$ for $h = 0.1$, $N = 20$ and $\varepsilon = 10^{-5}$.

Example 2 Consider the boundary value problem

$-y''(x,\mu) + q(x)\,y(x,\mu) = \mu^2 y(x,\mu), \quad x \in [-1,0)\cup(0,1]$, (3.7)

$y(-1,\mu) + \mu^2 y'(-1,\mu) = 0, \qquad y(1,\mu) + \mu^2 y'(1,\mu) = 0$, (3.8)

$y(0^-,\mu) - y(0^+,\mu) = 0, \qquad y'(0^-,\mu) - y'(0^+,\mu) = 0$, (3.9)

where $\alpha_1 = \beta_1 = 1$, $\alpha_2' = \beta_2' = 1$, $\beta_1' = \beta_2 = \alpha_2 = \alpha_1' = 0$, $\gamma_1 = \delta_1 = 3$, $\gamma_2 = \delta_2 = 1/3$ and

$q(x) = \begin{cases} -2, & x \in [-1,0), \\ x, & x \in (0,1]. \end{cases}$ (3.10)

The function K ( μ ) will be

$K(\mu) = (1+\mu^6)\,\frac{\sin 2\mu}{\mu}$. (3.11)

The characteristic determinant of the problem is

$\Omega(\mu) = \frac{\pi}{\sqrt{2+\mu^2}}\Big(\big(\mathrm{Bi}[1-\mu^2] + \mu^2\,\mathrm{Bi}'[1-\mu^2]\big)\Big(\mathrm{Ai}'[-\mu^2]\big(\mu^2\sqrt{2+\mu^2}\cos[\sqrt{2+\mu^2}\,] + \sin[\sqrt{2+\mu^2}\,]\big) + \mathrm{Ai}[-\mu^2]\big(\sqrt{2+\mu^2}\cos[\sqrt{2+\mu^2}\,] + \mu^2(2+\mu^2)\sin[\sqrt{2+\mu^2}\,]\big)\Big) + \big(\mathrm{Ai}[1-\mu^2] + \mu^2\,\mathrm{Ai}'[1-\mu^2]\big)\Big(\mathrm{Bi}'[-\mu^2]\big(\mu^2\sqrt{2+\mu^2}\cos[\sqrt{2+\mu^2}\,] + \sin[\sqrt{2+\mu^2}\,]\big) + \mathrm{Bi}[-\mu^2]\big(\sqrt{2+\mu^2}\cos[\sqrt{2+\mu^2}\,] + \mu^2(2+\mu^2)\sin[\sqrt{2+\mu^2}\,]\big)\Big)\Big)$, (3.12)

where $\mathrm{Ai}[z]$ and $\mathrm{Bi}[z]$ are the Airy functions and $\mathrm{Ai}'[z]$ and $\mathrm{Bi}'[z]$ are their derivatives. As in the previous example, the three tables (Tables 4, 5, 6) illustrate the application of our technique to this problem and the effect of ε.
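For readers who wish to evaluate expressions such as (3.12) outside MATHEMATICA, the Airy functions and their derivatives are also available, for instance, in SciPy; the short snippet below (our own illustration) only shows how the building blocks of (3.12) can be obtained and does not reproduce the full expression.

from scipy.special import airy

mu = 1.5
Ai0, Aip0, Bi0, Bip0 = airy(-mu**2)        # Ai, Ai', Bi, Bi' evaluated at -mu^2
Ai1, Aip1, Bi1, Bip1 = airy(1.0 - mu**2)   # the same functions evaluated at 1 - mu^2
print(Ai0, Bip0, Ai1, Bip1)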

Table 4. The approximation $\mu_{k,N}$ and the exact solution $\mu_k$ for different choices of h and N

Table 5. Absolute error $|\mu_k - \mu_{k,N}|$

Table 6. For $N = 40$ and $h = 0.3$, the exact solutions $\mu_k$ are all inside the interval $[a_-, a_+]$ for different values of ε

Here Figures 5, 6 and Figures 7, 8 illustrate the enclosure intervals dominating $\mu_2$ and $\mu_3$, respectively, for $h = 0.3$, $N = 40$ and $\varepsilon = 10^{-2}$, $\varepsilon = 10^{-5}$.

Figure 5. The enclosure interval dominating $\mu_2$ for $h = 0.3$, $N = 40$ and $\varepsilon = 10^{-2}$.

Figure 6. The enclosure interval dominating $\mu_2$ for $h = 0.3$, $N = 40$ and $\varepsilon = 10^{-5}$.

Figure 7. The enclosure interval dominating $\mu_3$ for $h = 0.3$, $N = 40$ and $\varepsilon = 10^{-2}$.

Figure 8. The enclosure interval dominating $\mu_3$ for $h = 0.3$, $N = 40$ and $\varepsilon = 10^{-5}$.

4 Conclusion

With a simple analysis, and with values of the solutions of initial value problems computed at a few values of the eigenparameter, we have computed the eigenvalues of discontinuous Sturm-Liouville problems which contain an eigenparameter appearing linearly in two boundary conditions, together with an estimate of the error. The proposed method is a shooting procedure: because of the interior discontinuity, the problem is reformulated as two initial value problems of size two, and a miss-distance is defined at the right end of the interval of integration whose roots are the eigenvalues to be computed. The unknown part $U(\mu)$ of the miss-distance is an entire function of exponential type. We therefore approximate this term by a truncated cardinal series with a Gaussian multiplier, with the sampling values approximated by numerically solving suitable initial value problems. Finally, in Section 3 we presented two instructive examples. The computations show that, compared with the classical sampling expansion used in [14], the variant with the Gaussian multiplier provides a striking improvement in accuracy.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The authors have equal contributions to each part of this article. All the authors read and approved the final manuscript.

Acknowledgements

This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (130-065-D1433). The authors, therefore, acknowledge with thanks DSR technical and financial support.

References

  1. Kotel’nikov, V: On the carrying capacity of the ‘ether’ and wire in telecommunications. Material for the First All-Union Conference on Questions of Communications, pp. 55–64. Izd. Red. Upr. Svyazi RKKA, Moscow (1933)

  2. Shannon, CE: Communications in the presence of noise. Proc. IRE. 37, 10–21 (1949)

  3. Whittaker, ET: On the functions which are represented by the expansion of the interpolation theory. Proc. R. Soc. Edinb., Sect. A. 35, 181–194 (1915)

  4. Stenger, F: Numerical methods based on Whittaker cardinal, or sinc functions. SIAM Rev.. 23, 156–224 (1981)

  5. Lund, J, Bowers, K: Sinc Methods for Quadrature and Differential Equations, SIAM, Philadelphia (1992)

  6. Stenger, F: Numerical Methods Based on Sinc and Analytic Functions, Springer, New York (1993)

  7. Kowalski, M, Sikorski, K, Stenger, F: Selected Topics in Approximation and Computation, Oxford University Press, New York (1995)

  8. Boumenir, A: Higher approximation of eigenvalues by sampling. BIT Numer. Math. 40, 215–225 (2000)

  9. Boumenir, A: Sampling and eigenvalues of non-self-adjoint Sturm-Liouville problems. SIAM J. Sci. Comput. 23, 219–229 (2001)

  10. Annaby, MH, Tharwat, MM: On computing eigenvalues of second-order linear pencils. IMA J. Numer. Anal.. 27, 366–380 (2007)

  11. Annaby, MH, Tharwat, MM: Sinc-based computations of eigenvalues of Dirac systems. BIT Numer. Math. 47, 699–713 (2007)

  12. Annaby, MH, Tharwat, MM: On the computation of the eigenvalues of Dirac systems. Calcolo 49, 221–240 (2012)

  13. Tharwat, MM, Bhrawy, AH, Yildirim, A: Numerical computation of eigenvalues of discontinuous Dirac system using Sinc method with error analysis. Int. J. Comput. Math. 89, 2061–2080 (2012)

  14. Tharwat, MM, Bhrawy, AH, Yildirim, A: Numerical computation of eigenvalues of discontinuous Sturm-Liouville problems with parameter dependent boundary conditions using Sinc method. Numer. Algorithms 63, 27–48 (2013)

  15. Gervais, R, Rahman, QI, Schmeisser, G: A bandlimited function simulating a duration-limited one. In: Butzer PL, Stens RL (eds.) Approximation Theory and Functional Analysis, pp. 355–362. Birkhäuser, Basel (1984)

  16. Butzer, PL, Stens, RL: A modification of the Whittaker-Kotel’nikov-Shannon sampling series. Aequ. Math. 28, 305–311 (1985)

  17. Stens, RL: Sampling by generalized kernels. In: Higgins JR, Stens RL (eds.) Sampling Theory in Fourier and Signal Analysis: Advanced Topics, pp. 130–157. Oxford University Press, Oxford (1999)

  18. Schmeisser, G, Stenger, F: Sinc approximation with a Gaussian multiplier. Sampl. Theory Signal. Image Process, Int. J.. 6, 199–221 (2007)

  19. Qian, L: On the regularized Whittaker-Kotel’nikov-Shannon sampling formula. Proc. Am. Math. Soc.. 131, 1169–1176 (2002)

  20. Qian, L, Creamer, DB: A modification of the sampling series with a Gaussian multiplier. Sampl. Theory Signal Image Process., Int. J. 5, 1–20 (2006)

  21. Qian, L, Creamer, DB: Localized sampling in the presence of noise. Appl. Math. Lett. 19, 351–355 (2006)

  22. Annaby, MH, Asharabi, RM: Computing eigenvalues of boundary value problems using sinc-Gaussian method. Sampl. Theory Signal. Image Process, Int. J.. 7, 293–312 (2008)

  23. Walter, J: Regular eigenvalue problems with eigenvalue parameter in the boundary condition. Math. Z. 133, 301–312 (1973)

  24. Fulton, CT: Two-point boundary value problems with eigenvalue parameter contained in the boundary conditions. Proc. R. Soc. Edinb., Sect. A 77, 293–308 (1977)

  25. Hinton, DB: An expansion theorem for an eigenvalue problem with eigenvalue parameter in the boundary condition. Q. J. Math. 30, 33–42 (1979)

  26. Shkalikov, AA: Boundary value problems for ordinary differential equations with a parameter in boundary conditions. Tr. Semin. Im. I.G. Petrovskogo. 9, 190–229 (in Russian) (1983)

  27. Binding, PA, Browne, PJ, Watson, BA: Sturm-Liouville problems with boundary conditions rationally dependent on the eigenparameter II. J. Comput. Appl. Math. 148, 147–169 (2002)

  28. Likov, AV, Mikhailov, YA: The Theory of Heat and Mass Transfer, Qosenergaizdat, Moscow-Leningrad (1963) (in Russian)

  29. Tikhonov, AN, Samarskii, AA: Equations of Mathematical Physics, Macmillan Co., New York (1963)

  30. Tharwat, MM: Discontinuous Sturm-Liouville problems and associated sampling theories. Abstr. Appl. Anal. doi:10.1155/2011/610232 (2011)

  31. Titchmarsh, EC: Eigenfunction Expansions Associated with Second Order Differential Equations. Part I, Clarendon, Oxford (1962)

  32. Bhrawy, AH, Tharwat, MM, Al-Fhaid, A: Numerical algorithms for computing eigenvalues of discontinuous Dirac system using sinc-Gaussian method. Abstr. Appl. Anal. doi:10.1155/2012/925134 (2012)

  33. Annaby, MH, Tharwat, MM: A sinc-Gaussian technique for computing eigenvalues of second-order linear pencils. Appl. Numer. Math.. 63, 129–137 (2013)

  34. Annaby, MH, Tharwat, MM: On sampling theory and eigenvalue problems with an eigenparameter in the boundary conditions. SUT J. Math.. 42, 157–176 (2006)

  35. Annaby, MH, Tharwat, MM: On sampling and Dirac systems with eigenparameter in the boundary conditions. J. Appl. Math. Comput. 36, 291–317 (2011)

  36. Kandemir, M, Mukhtarov, OS: Discontinuous Sturm Liouville problems containing eigenparameter in the boundary conditions. Acta Math. Sin.. 34, 1519–1528 (2006)

  37. Mukhtarov, OS, Kadakal, M, Altinisik, N: Eigenvalues and eigenfunctions of discontinuous Sturm-Liouville problems with eigenparameter in the boundary conditions. Indian J. Pure Appl. Math.. 34, 501–516 (2003)

  38. Tharwat, MM, Bhrawy, AH: Computation of eigenvalues of discontinuous Dirac system using Hermite interpolation technique. Adv. Differ. Equ. doi:10.1186/1687-1847-2012-59 (2012)

  39. Tharwat, MM, Yildirim, A, Bhrawy, AH: Sampling of discontinuous Dirac systems. Numer. Funct. Anal. Optim. 34, 323–348 (2013)

  40. Tharwat, MM: On sampling theories and discontinuous Dirac systems with eigenparameter in the boundary conditions. Bound. Value Probl. doi:10.1186/1687-2770-2013-65 (2013)

  41. Chadan, K, Sabatier, PC: Inverse Problems in Quantum Scattering Theory, Springer, Berlin (1989)