Research

# On sampling theories and discontinuous Dirac systems with eigenparameter in the boundary conditions

Mohammed M Tharwat


Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah, Saudi Arabia

Department of Mathematics, Faculty of Science, Beni-Suef University, Beni-Suef, Egypt

Boundary Value Problems 2013, 2013:65  doi:10.1186/1687-2770-2013-65

 Received: 8 November 2012 Accepted: 11 March 2013 Published: 29 March 2013

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

Sampling theory states that a function can be recovered from its values at certain points provided the function satisfies suitable conditions. In this paper we consider a Dirac system which contains an eigenparameter appearing linearly in one boundary condition, in addition to an internal point of discontinuity. We closely follow the analysis derived by Annaby and Tharwat (J. Appl. Math. Comput. 2010, doi:10.1007/s12190-010-0404-9) to establish the relations needed for the derivation of the sampling theorems, including the construction of Green’s matrix as well as the eigen-vector-function expansion theorem. We derive sampling representations for transforms whose kernels are either solutions or Green’s matrix of the problem. In the special case when the problem is continuous, the obtained results coincide with the corresponding results in Annaby and Tharwat (J. Appl. Math. Comput. 2010, doi:10.1007/s12190-010-0404-9).

MSC: 34L16, 94A20, 65L15.

##### Keywords:
Dirac systems; transmission conditions; eigenvalue parameter in the boundary conditions; discontinuous boundary value problems

### 1 Introduction

Sampling theory is one of the most powerful results in signal analysis. It is of great need in signal processing to reconstruct (recover) a signal (function) from its values at a discrete sequence of points (samples). If this aim is achieved, then an analog (continuous) signal can be transformed into a digital (discrete) one and then it can be recovered by the receiver. If the signal is band-limited, the sampling process can be done via the celebrated Whittaker, Shannon and Kotel’nikov (WSK) sampling theorem [1-3]. By a band-limited signal with band width $\sigma$, $\sigma>0$, i.e., a signal which contains no frequencies higher than $\sigma/2\pi$ cycles per second (cps), we mean a function in the Paley-Wiener space $B_\sigma^2$ of entire functions of exponential type at most $\sigma$ which are $L^2(\mathbb{R})$-functions when restricted to ℝ. In other words, $f(t)\in B_\sigma^2$ if there exists $g(\cdot)\in L^2(-\sigma,\sigma)$ such that, cf. [4,5],

$$f(t) = \frac{1}{\sqrt{2\pi}}\int_{-\sigma}^{\sigma} e^{ixt} g(x)\,dx. \quad (1.1)$$

Now the WSK sampling theorem states [6,7]: If $f(t)\in B_\sigma^2$, then it is completely determined from its values at the points $t_k = k\pi/\sigma$, $k\in\mathbb{Z}$, by means of the formula

$$f(t) = \sum_{k=-\infty}^{\infty} f(t_k)\,\operatorname{sinc}\sigma(t-t_k), \quad t\in\mathbb{C}, \quad (1.2)$$

where

$$\operatorname{sinc} t = \begin{cases} \dfrac{\sin t}{t}, & t\neq 0,\\[4pt] 1, & t = 0. \end{cases} \quad (1.3)$$

The sampling series (1.2) is absolutely and uniformly convergent on compact subsets of ℂ.
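As a numerical illustration of (1.2) (not part of the original analysis), the following Python sketch reconstructs a band-limited function from a truncated WSK series. The test function $f(t) = \operatorname{sinc}^2(\sigma t/2)$, the value $\sigma = \pi$ and the truncation range are illustrative assumptions.

```python
import numpy as np

# WSK reconstruction (1.2), truncated to |k| <= 200.
sigma = np.pi
k = np.arange(-200, 201)
t_k = k * np.pi / sigma          # equidistant sampling points t_k = k*pi/sigma

def sinc(t):
    # the paper's sinc (1.3): sin(t)/t, with value 1 at t = 0
    return np.sinc(np.asarray(t) / np.pi)   # np.sinc(x) = sin(pi x)/(pi x)

def f(t):
    # f(t) = sinc(sigma t / 2)^2 is entire of exponential type sigma and
    # square-integrable on the real line, hence belongs to B_sigma^2
    return sinc(sigma * t / 2.0) ** 2

def wsk(t):
    # truncated sampling series (1.2)
    return float(np.sum(f(t_k) * sinc(sigma * (t - t_k))))
```

With this truncation, `wsk(t)` agrees with `f(t)` to well within $10^{-3}$ at ordinary evaluation points, since the neglected terms decay like $1/k^3$.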

The WSK sampling theorem has been generalized in many different ways. Here we are interested in two extensions. The first is concerned with replacing the equidistant sampling points by more general ones, which is very important from the practical point of view. The following theorem which is known in some literature as the Paley-Wiener theorem [5] gives a sampling theorem with a more general class of sampling points. Although the theorem in its final form may be attributed to Levinson [8] and Kadec [9], it could be named after Paley and Wiener who first derived the theorem in a more restrictive form; see [6,7,10] for more details.

The Paley-Wiener theorem states that if $\{t_k\}$, $k\in\mathbb{Z}$, is a sequence of real numbers such that

$$D := \sup_{k\in\mathbb{Z}}\left| t_k - \frac{k\pi}{\sigma}\right| < \frac{\pi}{4\sigma}, \quad (1.4)$$

and G is the entire function defined by

$$G(t) := (t-t_0)\prod_{k=1}^{\infty}\left(1-\frac{t}{t_k}\right)\left(1-\frac{t}{t_{-k}}\right), \quad (1.5)$$

then, for any function of the form (1.1), we have

$$f(t) = \sum_{k\in\mathbb{Z}} f(t_k)\frac{G(t)}{G'(t_k)(t-t_k)}, \quad t\in\mathbb{C}. \quad (1.6)$$

The series (1.6) converges uniformly on compact subsets of ℂ.

The WSK sampling theorem is a special case of this theorem because if we choose $t_k = k\pi/\sigma = -t_{-k}$, then

$$G(t) = t\prod_{k=1}^{\infty}\left(1-\frac{t}{t_k}\right)\left(1+\frac{t}{t_k}\right) = t\prod_{k=1}^{\infty}\left(1-\frac{(t\sigma/\pi)^2}{k^2}\right) = \frac{\sin t\sigma}{\sigma}, \qquad G'(t_k) = (-1)^k.$$

The sampling series (1.6) can be regarded as an extension of the classical Lagrange interpolation formula to ℝ for functions of exponential type. Therefore, (1.6) is called a Lagrange-type interpolation expansion.
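The reduction of (1.5) to $\sin(t\sigma)/\sigma$ at equidistant points can be checked numerically. The following Python sketch (an illustration, not part of the paper) evaluates a large partial product of (1.5) with $t_k = k\pi/\sigma$ and compares it with the closed form; the evaluation point and truncation level are arbitrary choices.

```python
import numpy as np

# Partial product of G(t) = t * prod_{k>=1} (1 - (t*sigma/pi)^2 / k^2),
# truncated at 2*10^5 factors, versus the closed form sin(t*sigma)/sigma.
sigma = np.pi
t = 0.5
k = np.arange(1, 200001, dtype=float)
G_partial = t * np.prod(1.0 - (t * sigma / np.pi) ** 2 / k ** 2)
G_exact = np.sin(t * sigma) / sigma
```

The partial product converges slowly (the relative truncation error is roughly $(t\sigma/\pi)^2/N$), which is why a large number of factors is used here.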

The second extension of the WSK sampling theorem is the theorem of Kramer [11], which states: let $I$ be a finite closed interval and let $K(\cdot,t): I\times\mathbb{C}\to\mathbb{C}$ be a function continuous in $t$ such that $K(\cdot,t)\in L^2(I)$ for all $t\in\mathbb{C}$. Let $\{t_k\}_{k\in\mathbb{Z}}$ be a sequence of real numbers such that $\{K(\cdot,t_k)\}_{k\in\mathbb{Z}}$ is a complete orthogonal set in $L^2(I)$. Suppose that

$$f(t) = \int_I K(x,t)\, g(x)\,dx,$$

where $g(\cdot)\in L^2(I)$. Then

$$f(t) = \sum_{k\in\mathbb{Z}} f(t_k)\frac{\int_I K(x,t)\overline{K(x,t_k)}\,dx}{\|K(\cdot,t_k)\|^2_{L^2(I)}}. \quad (1.7)$$

Series (1.7) converges uniformly wherever $\|K(\cdot,t)\|_{L^2(I)}$, as a function of $t$, is bounded. In this theorem, sampling representations were given for integral transforms whose kernels are more general than $\exp(ixt)$. Also, Kramer's theorem is a generalization of the WSK theorem. If we take $K(x,t) = e^{ixt}$, $I = [-\sigma,\sigma]$, $t_k = k\pi/\sigma$, then (1.7) becomes (1.2).
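For the special case $K(x,t) = e^{ixt}$, $I = [-\sigma,\sigma]$, the kernel inner products appearing in (1.7) reduce to the sinc coefficients of (1.2). A small numeric check (illustrative only; the quadrature grid is an arbitrary choice):

```python
import numpy as np

sigma = np.pi
t, tk = 0.3, 1.0                          # tk = k*pi/sigma with k = 1
x = np.linspace(-sigma, sigma, 200001)
dx = x[1] - x[0]

# trapezoid rule for int_I K(x,t) * conj(K(x,tk)) dx with K(x,t) = exp(i x t)
vals = np.exp(1j * x * (t - tk))
inner = dx * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

norm_sq = 2.0 * sigma                     # ||K(., tk)||^2 = int_I 1 dx
ratio = inner / norm_sq                   # the coefficient of f(t_k) in (1.7)
sinc_val = np.sinc(sigma * (t - tk) / np.pi)   # sinc(sigma(t-tk)); np.sinc(x) = sin(pi x)/(pi x)
```

Here `ratio` matches `sinc_val` to quadrature accuracy, showing how (1.7) collapses to (1.2) for this kernel.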

The relationship between both extensions of the WSK sampling theorem has been investigated extensively. Starting from a function theory approach, cf. [12], it was proved in [13] that if $K(x,t)$, $x\in I$, $t\in\mathbb{C}$, satisfies some analyticity conditions, then Kramer's sampling formula (1.7) turns out to be a Lagrange interpolation one; see also [14-16]. In another direction, it was shown that Kramer's expansion (1.7) could be written as a Lagrange-type interpolation formula if $K(\cdot,t)$ and $t_k$ were extracted from ordinary differential operators; see the survey [17] and the references cited therein. The present work is a continuation of the second direction mentioned above. We prove that integral transforms associated with Dirac systems with an internal point of discontinuity can also be reconstructed in a sampling form of Lagrange interpolation type. We would like to mention that works in the direction of sampling associated with eigenproblems with an eigenparameter in the boundary conditions are few; see, e.g., [18-20]. Also, papers on sampling with discontinuous eigenproblems are few; see [21-24]. However, sampling theories associated with Dirac systems which contain an eigenparameter in the boundary conditions and have at the same time discontinuity conditions do not exist as far as we know. Our investigation is the first in that direction, introducing a good example. To achieve our aim we briefly study the spectral analysis of the problem. Then we derive two sampling theorems using solutions and Green's matrix, respectively.

### 2 The eigenvalue problem

In this section, we define our boundary value problem and state some of its properties. We consider the Dirac system

$$u_2'(x) - p_1(x)u_1(x) = \lambda u_1(x),\qquad u_1'(x) + p_2(x)u_2(x) = -\lambda u_2(x),\quad x\in[-1,0)\cup(0,1], \quad (2.1)$$

$$U_1(u) := \sin\alpha\, u_1(-1) - \cos\alpha\, u_2(-1) = 0, \quad (2.2)$$

$$U_2(u) := (a_1 + \lambda\sin\beta)u_1(1) - (a_2 + \lambda\cos\beta)u_2(1) = 0 \quad (2.3)$$

and transmission conditions

$$U_3(u) := u_1(0^-) - \delta u_1(0^+) = 0, \quad (2.4)$$

$$U_4(u) := u_2(0^-) - \delta u_2(0^+) = 0, \quad (2.5)$$

where $\lambda\in\mathbb{C}$; the real-valued functions $p_1(\cdot)$ and $p_2(\cdot)$ are continuous in $[-1,0)$ and $(0,1]$ and have finite limits $p_1(0^\pm) := \lim_{x\to 0^\pm} p_1(x)$, $p_2(0^\pm) := \lim_{x\to 0^\pm} p_2(x)$; $a_1, a_2, \delta\in\mathbb{R}$, $\alpha,\beta\in[0,\pi)$; $\delta\neq 0$ and $\rho := a_1\cos\beta - a_2\sin\beta > 0$.

In [24] the authors discussed problem (2.1)-(2.5) but with the condition $\sin\beta\, u_1(1) - \cos\beta\, u_2(1) = 0$ instead of (2.3). To formulate an operator-theoretic approach to problem (2.1)-(2.5), we define the Hilbert space $\mathcal{H} = H\oplus\mathbb{C}$, where $H := L^2((-1,1);\mathbb{C}^2)$, with an inner product, see [19,20],

$$\langle U(\cdot), V(\cdot)\rangle_{\mathcal{H}} := \int_{-1}^{0} u^\top(x)\bar{v}(x)\,dx + \delta^2\int_{0}^{1} u^\top(x)\bar{v}(x)\,dx + \frac{\delta^2}{\rho}\, z\bar{w}, \quad (2.6)$$

where ⊤ denotes the matrix transpose,

$$U(x) = \begin{pmatrix} u(x)\\ z\end{pmatrix},\quad V(x) = \begin{pmatrix} v(x)\\ w\end{pmatrix}\in\mathcal{H},\qquad u(x) = \begin{pmatrix} u_1(x)\\ u_2(x)\end{pmatrix},\quad v(x) = \begin{pmatrix} v_1(x)\\ v_2(x)\end{pmatrix}\in H,$$

$u_i(\cdot), v_i(\cdot)\in L^2(-1,1)$, $i = 1,2$, and $z, w\in\mathbb{C}$. For convenience, we put

$$\begin{pmatrix} R_a(u(x))\\ R_\beta(u(x))\end{pmatrix} := \begin{pmatrix} a_1 u_1(1) - a_2 u_2(1)\\ \sin\beta\, u_1(1) - \cos\beta\, u_2(1)\end{pmatrix}. \quad (2.7)$$

Equation (2.1) can be written as

$$\ell(u) := Au'(x) - P(x)u(x) = \lambda u(x), \quad (2.8)$$

where

$$A = \begin{pmatrix} 0 & 1\\ -1 & 0\end{pmatrix},\qquad P(x) = \begin{pmatrix} p_1(x) & 0\\ 0 & p_2(x)\end{pmatrix},\qquad u(x) = \begin{pmatrix} u_1(x)\\ u_2(x)\end{pmatrix}. \quad (2.9)$$

For functions $u(x)$ defined on $[-1,0)\cup(0,1]$ and having finite limits $u(0^\pm) := \lim_{x\to 0^\pm} u(x)$, we denote by $u^{(1)}(x)$ and $u^{(2)}(x)$ the functions

$$u^{(1)}(x) = \begin{cases} u(x), & x\in[-1,0),\\ u(0^-), & x = 0,\end{cases} \qquad u^{(2)}(x) = \begin{cases} u(x), & x\in(0,1],\\ u(0^+), & x = 0,\end{cases} \quad (2.10)$$

which are defined on $\Gamma_1 := [-1,0]$ and $\Gamma_2 := [0,1]$, respectively.

In the following lemma, we prove that the eigenvalues of problem (2.1)-(2.5) are real.

Lemma 2.1 The eigenvalues of problem (2.1)-(2.5) are real.

Proof Assume the contrary, that $\lambda_0$ is a nonreal eigenvalue of problem (2.1)-(2.5). Let $\begin{pmatrix} u_1(x)\\ u_2(x)\end{pmatrix}$ be a corresponding (non-trivial) eigenfunction. By (2.1), we have

$$\frac{d}{dx}\left\{ u_1(x)\bar{u}_2(x) - \bar{u}_1(x)u_2(x)\right\} = (\bar{\lambda}_0 - \lambda_0)\left\{ |u_1(x)|^2 + |u_2(x)|^2\right\},\quad x\in[-1,0)\cup(0,1].$$

Integrating the above equation over $[-1,0]$ and $[0,1]$, we obtain

$$(\bar{\lambda}_0-\lambda_0)\int_{-1}^{0}\big(|u_1(x)|^2+|u_2(x)|^2\big)\,dx = u_1(0^-)\bar{u}_2(0^-) - \bar{u}_1(0^-)u_2(0^-) - \big[u_1(-1)\bar{u}_2(-1) - \bar{u}_1(-1)u_2(-1)\big], \quad (2.11)$$

$$(\bar{\lambda}_0-\lambda_0)\int_{0}^{1}\big(|u_1(x)|^2+|u_2(x)|^2\big)\,dx = u_1(1)\bar{u}_2(1) - \bar{u}_1(1)u_2(1) - \big[u_1(0^+)\bar{u}_2(0^+) - \bar{u}_1(0^+)u_2(0^+)\big]. \quad (2.12)$$

Then from (2.2), (2.3) and transmission conditions, we have, respectively,

$$u_1(-1)\bar{u}_2(-1) - \bar{u}_1(-1)u_2(-1) = 0,\qquad u_1(1)\bar{u}_2(1) - \bar{u}_1(1)u_2(1) = -\frac{\rho(\bar{\lambda}_0-\lambda_0)|u_2(1)|^2}{|a_1+\lambda_0\sin\beta|^2}$$

and

$$u_1(0^-)\bar{u}_2(0^-) - \bar{u}_1(0^-)u_2(0^-) = \delta^2\big[u_1(0^+)\bar{u}_2(0^+) - \bar{u}_1(0^+)u_2(0^+)\big].$$

Since $\lambda_0\neq\bar{\lambda}_0$, it follows from the last three equations and (2.11), (2.12) that

$$\int_{-1}^{0}\big(|u_1(x)|^2+|u_2(x)|^2\big)\,dx + \delta^2\int_{0}^{1}\big(|u_1(x)|^2+|u_2(x)|^2\big)\,dx = -\frac{\rho\,\delta^2|u_2(1)|^2}{|a_1+\lambda_0\sin\beta|^2}. \quad (2.13)$$

This contradicts the conditions $\int_{-1}^{0}(|u_1(x)|^2+|u_2(x)|^2)\,dx + \delta^2\int_{0}^{1}(|u_1(x)|^2+|u_2(x)|^2)\,dx > 0$ and $\rho > 0$. Consequently, $\lambda_0$ must be real. □

Let $D(\mathcal{A})\subseteq\mathcal{H}$ be the set of all $U(x) = \begin{pmatrix} u(x)\\ R_\beta(u(x))\end{pmatrix}\in\mathcal{H}$ such that $u_1^{(i)}(\cdot), u_2^{(i)}(\cdot)$ are absolutely continuous on $\Gamma_i$, $i = 1,2$, $\ell(u)\in H$, $\sin\alpha\, u_1(-1) - \cos\alpha\, u_2(-1) = 0$ and $u_i(0^-) - \delta u_i(0^+) = 0$, $i = 1,2$. Define the operator $\mathcal{A}: D(\mathcal{A})\to\mathcal{H}$ by

$$\mathcal{A}\begin{pmatrix} u(x)\\ R_\beta(u(x))\end{pmatrix} = \begin{pmatrix} \ell(u)\\ -R_a(u(x))\end{pmatrix},\qquad \begin{pmatrix} u(x)\\ R_\beta(u(x))\end{pmatrix}\in D(\mathcal{A}). \quad (2.14)$$

Thus the operator $\mathcal{A}$ is symmetric in $\mathcal{H}$. Indeed, for $U(\cdot), V(\cdot)\in D(\mathcal{A})$,

$$\begin{aligned}
\langle \mathcal{A}U(\cdot), V(\cdot)\rangle_{\mathcal{H}} ={}& \int_{-1}^{0}\big(\ell(u)(x)\big)^{\top}\bar{v}(x)\,dx + \delta^{2}\int_{0}^{1}\big(\ell(u)(x)\big)^{\top}\bar{v}(x)\,dx - \frac{\delta^{2}}{\rho}R_a(u(x))\bar{R}_\beta(v(x))\\
={}& \int_{-1}^{0}\big(u_2'(x)-p_1(x)u_1(x)\big)\bar{v}_1(x)\,dx - \int_{-1}^{0}\big(u_1'(x)+p_2(x)u_2(x)\big)\bar{v}_2(x)\,dx\\
&+ \delta^{2}\int_{0}^{1}\big(u_2'(x)-p_1(x)u_1(x)\big)\bar{v}_1(x)\,dx - \delta^{2}\int_{0}^{1}\big(u_1'(x)+p_2(x)u_2(x)\big)\bar{v}_2(x)\,dx - \frac{\delta^{2}}{\rho}R_a(u(x))\bar{R}_\beta(v(x))\\
={}& \int_{-1}^{0}\Big[u_1(x)\big(\bar{v}_2'(x)-p_1(x)\bar{v}_1(x)\big) - u_2(x)\big(\bar{v}_1'(x)+p_2(x)\bar{v}_2(x)\big)\Big]\,dx\\
&+ \delta^{2}\int_{0}^{1}\Big[u_1(x)\big(\bar{v}_2'(x)-p_1(x)\bar{v}_1(x)\big) - u_2(x)\big(\bar{v}_1'(x)+p_2(x)\bar{v}_2(x)\big)\Big]\,dx\\
&+ \big(u_2(0^-)\bar{v}_1(0^-)-u_1(0^-)\bar{v}_2(0^-)\big) - \big(u_2(-1)\bar{v}_1(-1)-u_1(-1)\bar{v}_2(-1)\big)\\
&+ \delta^{2}\big(u_2(1)\bar{v}_1(1)-u_1(1)\bar{v}_2(1)\big) - \delta^{2}\big(u_2(0^+)\bar{v}_1(0^+)-u_1(0^+)\bar{v}_2(0^+)\big) - \frac{\delta^{2}}{\rho}R_a(u(x))\bar{R}_\beta(v(x))\\
={}& \int_{-1}^{0} u^{\top}(x)\overline{\ell(v)(x)}\,dx + \delta^{2}\int_{0}^{1} u^{\top}(x)\overline{\ell(v)(x)}\,dx - \frac{\delta^{2}}{\rho}R_\beta(u(x))\bar{R}_a(v(x)) = \langle U(\cdot), \mathcal{A}V(\cdot)\rangle_{\mathcal{H}},
\end{aligned}$$

where in the last step the boundary terms at $x=-1$ vanish by (2.2), the terms at $0^\mp$ cancel by the transmission conditions (2.4), (2.5), and the terms at $x=1$ combine with $-\frac{\delta^2}{\rho}R_a(u)\bar{R}_\beta(v)$ to give $-\frac{\delta^2}{\rho}R_\beta(u)\bar{R}_a(v)$.

The operator $\mathcal{A}: D(\mathcal{A})\to\mathcal{H}$ and the eigenvalue problem (2.1)-(2.5) have the same eigenvalues; in this sense the two formulations are equivalent.

Lemma 2.2 Let $\lambda$ and $\mu$ be two different eigenvalues of problem (2.1)-(2.5). Then the corresponding eigenfunctions $u(x)$ and $v(x)$ of this problem satisfy the following equality:

$$\int_{-1}^{0} u^\top(x)v(x)\,dx + \delta^2\int_{0}^{1} u^\top(x)v(x)\,dx = -\frac{\delta^2}{\rho}R_\beta(u(x))R_\beta(v(x)). \quad (2.15)$$

Proof Equation (2.15) follows immediately from the orthogonality in $\mathcal{H}$ of the corresponding eigenelements

$$U(x) = \begin{pmatrix} u(x)\\ R_\beta(u(x))\end{pmatrix},\qquad V(x) = \begin{pmatrix} v(x)\\ R_\beta(v(x))\end{pmatrix}\in\mathcal{H}.$$

□

Now we construct a special fundamental system of solutions of equation (2.1) for λ not an eigenvalue. Let us consider the following initial value problem:

$$u_2'(x) - p_1(x)u_1(x) = \lambda u_1(x),\qquad u_1'(x) + p_2(x)u_2(x) = -\lambda u_2(x),\quad x\in(-1,0), \quad (2.16)$$

$$u_1(-1) = \cos\alpha,\qquad u_2(-1) = \sin\alpha. \quad (2.17)$$

By virtue of Theorem 1.1 in [25], this problem has a unique solution $u = \begin{pmatrix}\phi_{11}(x,\lambda)\\ \phi_{21}(x,\lambda)\end{pmatrix}$, which is an entire function of $\lambda\in\mathbb{C}$ for each fixed $x\in[-1,0]$. Similarly, employing the same method as in the proof of Theorem 1.1 in [25], we see that the problem

$$u_2'(x) - p_1(x)u_1(x) = \lambda u_1(x),\qquad u_1'(x) + p_2(x)u_2(x) = -\lambda u_2(x),\quad x\in(0,1), \quad (2.18)$$

$$u_1(1) = a_2 + \lambda\cos\beta,\qquad u_2(1) = a_1 + \lambda\sin\beta \quad (2.19)$$

has a unique solution $u = \begin{pmatrix}\chi_{12}(x,\lambda)\\ \chi_{22}(x,\lambda)\end{pmatrix}$, which is an entire function of the parameter λ for each fixed $x\in[0,1]$.

Now the functions $\phi_{i2}(x,\lambda)$ and $\chi_{i1}(x,\lambda)$, $i=1,2$, are defined in terms of $\phi_{i1}(x,\lambda)$ and $\chi_{i2}(x,\lambda)$, respectively, as follows. The initial value problem

$$u_2'(x) - p_1(x)u_1(x) = \lambda u_1(x),\qquad u_1'(x) + p_2(x)u_2(x) = -\lambda u_2(x),\quad x\in(0,1), \quad (2.20)$$

$$u_1(0) = \frac{1}{\delta}\phi_{11}(0,\lambda),\qquad u_2(0) = \frac{1}{\delta}\phi_{21}(0,\lambda), \quad (2.21)$$

has a unique solution $u = \begin{pmatrix}\phi_{12}(x,\lambda)\\ \phi_{22}(x,\lambda)\end{pmatrix}$ for each $\lambda\in\mathbb{C}$.

Similarly, the following problem also has a unique solution $u = \begin{pmatrix}\chi_{11}(x,\lambda)\\ \chi_{21}(x,\lambda)\end{pmatrix}$:

$$u_2'(x) - p_1(x)u_1(x) = \lambda u_1(x),\qquad u_1'(x) + p_2(x)u_2(x) = -\lambda u_2(x),\quad x\in(-1,0), \quad (2.22)$$

$$u_1(0) = \delta\chi_{12}(0,\lambda),\qquad u_2(0) = \delta\chi_{22}(0,\lambda). \quad (2.23)$$

Let us construct two basic solutions of equation (2.1) as follows:

$$\phi(\cdot,\lambda) = \begin{pmatrix}\phi_1(\cdot,\lambda)\\ \phi_2(\cdot,\lambda)\end{pmatrix},\qquad \chi(\cdot,\lambda) = \begin{pmatrix}\chi_1(\cdot,\lambda)\\ \chi_2(\cdot,\lambda)\end{pmatrix},$$

where

$$\phi_1(x,\lambda) = \begin{cases}\phi_{11}(x,\lambda), & x\in[-1,0),\\ \phi_{12}(x,\lambda), & x\in(0,1],\end{cases}\qquad \phi_2(x,\lambda) = \begin{cases}\phi_{21}(x,\lambda), & x\in[-1,0),\\ \phi_{22}(x,\lambda), & x\in(0,1],\end{cases} \quad (2.24)$$

$$\chi_1(x,\lambda) = \begin{cases}\chi_{11}(x,\lambda), & x\in[-1,0),\\ \chi_{12}(x,\lambda), & x\in(0,1],\end{cases}\qquad \chi_2(x,\lambda) = \begin{cases}\chi_{21}(x,\lambda), & x\in[-1,0),\\ \chi_{22}(x,\lambda), & x\in(0,1].\end{cases} \quad (2.25)$$

Then

$$R_\beta\big(\chi(x,\lambda)\big) = -\rho. \quad (2.26)$$

By virtue of equations (2.21) and (2.23), these solutions satisfy both transmission conditions (2.4) and (2.5). These functions are entire in λ for all $x\in[-1,0)\cup(0,1]$.

Let $W(\phi,\chi)(\cdot,\lambda)$ denote the Wronskian of $\phi(\cdot,\lambda)$ and $\chi(\cdot,\lambda)$ defined in [[26], p.194], i.e.,

$$W(\phi,\chi)(\cdot,\lambda) := \begin{vmatrix}\phi_1(\cdot,\lambda) & \phi_2(\cdot,\lambda)\\ \chi_1(\cdot,\lambda) & \chi_2(\cdot,\lambda)\end{vmatrix}.$$

It is obvious that the Wronskians

$$\omega_i(\lambda) := W(\phi,\chi)(x,\lambda) = \phi_{1i}(x,\lambda)\chi_{2i}(x,\lambda) - \phi_{2i}(x,\lambda)\chi_{1i}(x,\lambda),\quad x\in\Gamma_i,\ i = 1,2, \quad (2.27)$$

are independent of $x\in\Gamma_i$ and are entire functions of λ. Taking into account (2.21) and (2.23), a short calculation gives

$$\omega_1(\lambda) = \delta^2\omega_2(\lambda)$$

for each $\lambda\in\mathbb{C}$.
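The identity $\omega_1(\lambda) = \delta^2\omega_2(\lambda)$ can be illustrated numerically. In the unperturbed case $p_1 = p_2 = 0$ the solutions $\phi$ and $\chi$ are explicit trigonometric pairs, scaled by $1/\delta$ (respectively δ) across $x = 0$ via (2.21) and (2.23). The following Python sketch, with hypothetical parameter values, checks that the Wronskian (2.27) is constant on each of $\Gamma_1$, $\Gamma_2$ and that $\omega_1 = \delta^2\omega_2$:

```python
import numpy as np

# Hypothetical parameters; rho = a1*cos(beta) - a2*sin(beta) > 0 is respected.
alpha, beta, a1, a2, delta, lam = 0.3, 0.7, 1.0, -0.5, 2.0, 1.7

# phi: launched from x = -1 with (2.17); scaled by 1/delta on (0,1] via (2.21)
def phi1(x): return np.cos(lam*(x+1) + alpha) * (1.0 if x < 0 else 1.0/delta)
def phi2(x): return np.sin(lam*(x+1) + alpha) * (1.0 if x < 0 else 1.0/delta)

# chi: pinned at x = 1 by (2.19); scaled by delta on [-1,0) via (2.23)
R = np.hypot(a2 + lam*np.cos(beta), a1 + lam*np.sin(beta))
theta = np.arctan2(a1 + lam*np.sin(beta), a2 + lam*np.cos(beta))
def chi1(x): return R*np.cos(lam*(x-1) + theta) * (delta if x < 0 else 1.0)
def chi2(x): return R*np.sin(lam*(x-1) + theta) * (delta if x < 0 else 1.0)

def W(x):
    # Wronskian (2.27): phi_1 * chi_2 - phi_2 * chi_1
    return phi1(x)*chi2(x) - phi2(x)*chi1(x)

omega1, omega2 = W(-0.4), W(0.6)
```

The two constants differ exactly by the factor $\delta^2$ coming from the transmission conditions, in agreement with the calculation above.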

Corollary 2.3 The zeros of the functions $\omega_1(\lambda)$ and $\omega_2(\lambda)$ coincide.

We may therefore consider the characteristic function $\omega(\lambda)$ defined as

$$\omega(\lambda) := \omega_1(\lambda) = \delta^2\omega_2(\lambda). \quad (2.28)$$

In the following lemma, we show that all eigenvalues of problem (2.1)-(2.5) are simple.

Lemma 2.4 All eigenvalues of problem (2.1)-(2.5) are just zeros of the function $\omega(\lambda)$. Moreover, every zero of $\omega(\lambda)$ has multiplicity one.

Proof Since the functions $\phi_1(x,\lambda)$ and $\phi_2(x,\lambda)$ satisfy the boundary condition (2.2) and both transmission conditions (2.4) and (2.5), to find the eigenvalues of problem (2.1)-(2.5) we have to insert the functions $\phi_1(x,\lambda)$ and $\phi_2(x,\lambda)$ into the boundary condition (2.3) and find the roots of the resulting equation.

By (2.1) we obtain, for $\lambda,\mu\in\mathbb{C}$, $\lambda\neq\mu$,

$$\frac{d}{dx}\left\{\phi_1(x,\lambda)\phi_2(x,\mu) - \phi_1(x,\mu)\phi_2(x,\lambda)\right\} = (\mu-\lambda)\left\{\phi_1(x,\lambda)\phi_1(x,\mu) + \phi_2(x,\lambda)\phi_2(x,\mu)\right\}.$$

Integrating the above equation over $[-1,0]$ and $[0,1]$, and taking into account the initial conditions (2.17), (2.21) and (2.23), we obtain

$$\begin{aligned}&\delta^2\big(\phi_{12}(1,\lambda)\phi_{22}(1,\mu) - \phi_{12}(1,\mu)\phi_{22}(1,\lambda)\big)\\ &\quad = (\mu-\lambda)\left(\int_{-1}^{0}\big(\phi_{11}(x,\lambda)\phi_{11}(x,\mu) + \phi_{21}(x,\lambda)\phi_{21}(x,\mu)\big)\,dx + \delta^2\int_{0}^{1}\big(\phi_{12}(x,\lambda)\phi_{12}(x,\mu) + \phi_{22}(x,\lambda)\phi_{22}(x,\mu)\big)\,dx\right). \quad (2.29)\end{aligned}$$

Dividing both sides of (2.29) by $(\lambda-\mu)$ and letting $\mu\to\lambda$, we arrive at the relation

$$\delta^2\left(\phi_{22}(1,\lambda)\frac{\partial\phi_{12}(1,\lambda)}{\partial\lambda} - \phi_{12}(1,\lambda)\frac{\partial\phi_{22}(1,\lambda)}{\partial\lambda}\right) = -\left(\int_{-1}^{0}\big(|\phi_{11}(x,\lambda)|^2 + |\phi_{21}(x,\lambda)|^2\big)\,dx + \delta^2\int_{0}^{1}\big(|\phi_{12}(x,\lambda)|^2 + |\phi_{22}(x,\lambda)|^2\big)\,dx\right). \quad (2.30)$$

We show that the equation

$$\omega(\lambda) = \delta^2 W(\phi,\chi)(1,\lambda) = \delta^2\big((a_1+\lambda\sin\beta)\phi_{12}(1,\lambda) - (a_2+\lambda\cos\beta)\phi_{22}(1,\lambda)\big) = 0 \quad (2.31)$$

has only simple roots. Assume the converse, i.e., that equation (2.31) has a double root $\lambda^*$, say. Then the following two equations hold:

$$(a_1+\lambda^*\sin\beta)\phi_{12}(1,\lambda^*) - (a_2+\lambda^*\cos\beta)\phi_{22}(1,\lambda^*) = 0, \quad (2.32)$$

$$\sin\beta\,\phi_{12}(1,\lambda^*) + (a_1+\lambda^*\sin\beta)\frac{\partial\phi_{12}(1,\lambda^*)}{\partial\lambda} - \cos\beta\,\phi_{22}(1,\lambda^*) - (a_2+\lambda^*\cos\beta)\frac{\partial\phi_{22}(1,\lambda^*)}{\partial\lambda} = 0. \quad (2.33)$$

Since $\rho\neq 0$ and $\lambda^*$ is real, we have $(a_1+\lambda^*\sin\beta)^2 + (a_2+\lambda^*\cos\beta)^2\neq 0$. Let $a_1+\lambda^*\sin\beta\neq 0$. From (2.32) and (2.33),

$$\phi_{12}(1,\lambda^*) = \frac{a_2+\lambda^*\cos\beta}{a_1+\lambda^*\sin\beta}\phi_{22}(1,\lambda^*),\qquad \frac{\partial\phi_{12}(1,\lambda^*)}{\partial\lambda} = \frac{\rho\,\phi_{22}(1,\lambda^*)}{(a_1+\lambda^*\sin\beta)^2} + \frac{a_2+\lambda^*\cos\beta}{a_1+\lambda^*\sin\beta}\frac{\partial\phi_{22}(1,\lambda^*)}{\partial\lambda}. \quad (2.34)$$

Combining (2.34) and (2.30) with $\lambda = \lambda^*$, we obtain

$$\frac{\rho\,\delta^2\big(\phi_{22}(1,\lambda^*)\big)^2}{(a_1+\lambda^*\sin\beta)^2} = -\left(\int_{-1}^{0}\big(|\phi_{11}(x,\lambda^*)|^2 + |\phi_{21}(x,\lambda^*)|^2\big)\,dx + \delta^2\int_{0}^{1}\big(|\phi_{12}(x,\lambda^*)|^2 + |\phi_{22}(x,\lambda^*)|^2\big)\,dx\right), \quad (2.35)$$

contradicting the assumption $\rho > 0$. The other case, $a_2+\lambda^*\cos\beta\neq 0$, can be treated similarly, and the proof is complete. □

Let $\{\lambda_n\}_{n=-\infty}^{\infty}$ denote the sequence of zeros of $\omega(\lambda)$. Then

$$\Phi(x,\lambda_n) := \begin{pmatrix}\phi(x,\lambda_n)\\ R_\beta(\phi(x,\lambda_n))\end{pmatrix} \quad (2.36)$$

are the corresponding eigenvectors of the operator $\mathcal{A}$. Since $\mathcal{A}$ is symmetric, it is easy to show that the following orthogonality relation holds:

$$\langle\Phi(\cdot,\lambda_n),\Phi(\cdot,\lambda_m)\rangle_{\mathcal{H}} = 0 \quad \text{for } n\neq m. \quad (2.37)$$

Here $\{\phi(\cdot,\lambda_n)\}_{n=-\infty}^{\infty}$ is a sequence of eigen-vector-functions of (2.1)-(2.5) corresponding to the eigenvalues $\{\lambda_n\}_{n=-\infty}^{\infty}$. We denote by $\Psi(x,\lambda_n)$ the normalized eigenvectors of $\mathcal{A}$, i.e.,

$$\Psi(x,\lambda_n) := \frac{\Phi(x,\lambda_n)}{\|\Phi(\cdot,\lambda_n)\|_{\mathcal{H}}} = \begin{pmatrix}\psi(x,\lambda_n)\\ R_\beta(\psi(x,\lambda_n))\end{pmatrix}. \quad (2.38)$$

Since $\chi(\cdot,\lambda)$ satisfies (2.3)-(2.5), the eigenvalues are also determined via

$$\sin\alpha\,\chi_{11}(-1,\lambda) - \cos\alpha\,\chi_{21}(-1,\lambda) = -\omega(\lambda). \quad (2.39)$$

Therefore $\{\chi(\cdot,\lambda_n)\}_{n=-\infty}^{\infty}$ is another set of eigen-vector-functions, related to $\{\phi(\cdot,\lambda_n)\}_{n=-\infty}^{\infty}$ by

$$\chi(x,\lambda_n) = c_n\phi(x,\lambda_n),\quad x\in[-1,0)\cup(0,1],\ n\in\mathbb{Z}, \quad (2.40)$$

where $c_n$, $n\in\mathbb{Z}$, are non-zero constants, since all eigenvalues are simple. Since the eigenvalues are all real, we can take the eigen-vector-functions to be real-valued.

Now we derive the asymptotic formulae of the eigenvalues $\{\lambda_n\}_{n=-\infty}^{\infty}$ and the eigen-vector-functions $\{\phi(\cdot,\lambda_n)\}_{n=-\infty}^{\infty}$. We transform equations (2.1), (2.17), (2.21) and (2.24) into the integral equations, see [26], as follows:

$$\phi_{11}(x,\lambda) = \cos\big(\lambda(x+1)+\alpha\big) - \int_{-1}^{x}\sin\lambda(x-t)\,p_1(t)\phi_{11}(t,\lambda)\,dt - \int_{-1}^{x}\cos\lambda(x-t)\,p_2(t)\phi_{21}(t,\lambda)\,dt, \quad (2.41)$$

$$\phi_{21}(x,\lambda) = \sin\big(\lambda(x+1)+\alpha\big) + \int_{-1}^{x}\cos\lambda(x-t)\,p_1(t)\phi_{11}(t,\lambda)\,dt - \int_{-1}^{x}\sin\lambda(x-t)\,p_2(t)\phi_{21}(t,\lambda)\,dt, \quad (2.42)$$

$$\phi_{12}(x,\lambda) = -\frac{1}{\delta}\phi_{21}(0,\lambda)\sin(\lambda x) + \frac{1}{\delta}\phi_{11}(0,\lambda)\cos(\lambda x) - \int_{0}^{x}\sin\lambda(x-t)\,p_1(t)\phi_{12}(t,\lambda)\,dt - \int_{0}^{x}\cos\lambda(x-t)\,p_2(t)\phi_{22}(t,\lambda)\,dt, \quad (2.43)$$

$$\phi_{22}(x,\lambda) = \frac{1}{\delta}\phi_{11}(0,\lambda)\sin(\lambda x) + \frac{1}{\delta}\phi_{21}(0,\lambda)\cos(\lambda x) + \int_{0}^{x}\cos\lambda(x-t)\,p_1(t)\phi_{12}(t,\lambda)\,dt - \int_{0}^{x}\sin\lambda(x-t)\,p_2(t)\phi_{22}(t,\lambda)\,dt. \quad (2.44)$$

For $|\lambda|\to\infty$ the following estimates hold uniformly with respect to $x$, $x\in[-1,0)\cup(0,1]$, cf. [[25], p.55]:

$$\phi_{11}(x,\lambda) = \cos\big(\lambda(x+1)+\alpha\big) + O\!\left(\frac{1}{\lambda}\right), \quad (2.45)$$

$$\phi_{21}(x,\lambda) = \sin\big(\lambda(x+1)+\alpha\big) + O\!\left(\frac{1}{\lambda}\right), \quad (2.46)$$

$$\phi_{12}(x,\lambda) = -\frac{1}{\delta}\phi_{21}(0,\lambda)\sin(\lambda x) + \frac{1}{\delta}\phi_{11}(0,\lambda)\cos(\lambda x) + O\!\left(\frac{1}{\lambda}\right), \quad (2.47)$$

$$\phi_{22}(x,\lambda) = \frac{1}{\delta}\phi_{11}(0,\lambda)\sin(\lambda x) + \frac{1}{\delta}\phi_{21}(0,\lambda)\cos(\lambda x) + O\!\left(\frac{1}{\lambda}\right). \quad (2.48)$$

Now we find an asymptotic formula for the eigenvalues. Since the eigenvalues of the boundary value problem (2.1)-(2.5) coincide with the roots of the equation

$$(a_1+\lambda\sin\beta)\phi_{12}(1,\lambda) - (a_2+\lambda\cos\beta)\phi_{22}(1,\lambda) = 0, \quad (2.49)$$

then from the estimates (2.47), (2.48) and (2.49) we get

$$\lambda\sin\beta\left[-\frac{1}{\delta}\phi_{21}(0,\lambda)\sin\lambda + \frac{1}{\delta}\phi_{11}(0,\lambda)\cos\lambda\right] - \lambda\cos\beta\left[\frac{1}{\delta}\phi_{11}(0,\lambda)\sin\lambda + \frac{1}{\delta}\phi_{21}(0,\lambda)\cos\lambda\right] + O(1) = 0,$$

which can be written as

$$\frac{1}{\delta}\phi_{11}(0,\lambda)\sin(\lambda-\beta) + \frac{1}{\delta}\phi_{21}(0,\lambda)\cos(\lambda-\beta) + O\!\left(\frac{1}{\lambda}\right) = 0. \quad (2.50)$$

Then, from (2.45) and (2.46), equation (2.50) takes the form

$$\sin(2\lambda+\alpha-\beta) + O\!\left(\frac{1}{\lambda}\right) = 0. \quad (2.51)$$

For large $|\lambda|$, equation (2.51) obviously has solutions which, as is not hard to see, have the form

$$2\lambda_n + \alpha - \beta = n\pi + \delta_n,\quad n = 0,\pm 1,\pm 2,\ldots. \quad (2.52)$$

Inserting these values into (2.51), we find that $\sin\delta_n = O(\frac{1}{n})$, i.e., $\delta_n = O(\frac{1}{n})$. Thus we obtain the following asymptotic formula for the eigenvalues:

$$\lambda_n = \frac{n\pi+\beta-\alpha}{2} + O\!\left(\frac{1}{n}\right),\quad n = 0,\pm 1,\pm 2,\ldots. \quad (2.53)$$
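The asymptotic formula (2.53) can be observed numerically. In the unperturbed case $p_1 = p_2 = 0$, $\delta = 1$, the characteristic function (2.31) reduces to the explicit expression $\omega(\lambda) = (a_1+\lambda\sin\beta)\cos(2\lambda+\alpha) - (a_2+\lambda\cos\beta)\sin(2\lambda+\alpha)$. The Python sketch below, with hypothetical parameter values, brackets a zero of ω near each predicted value $(n\pi+\beta-\alpha)/2$ by bisection and checks that the deviation shrinks like $O(1/n)$:

```python
import numpy as np

# Hypothetical parameters with rho = a1*cos(beta) - a2*sin(beta) > 0.
alpha, beta, a1, a2 = 0.4, 0.9, 1.0, -1.0

def omega(lam):
    # characteristic function (2.31) for p1 = p2 = 0, delta = 1, where
    # phi_12(1,lam) = cos(2 lam + alpha) and phi_22(1,lam) = sin(2 lam + alpha)
    return ((a1 + lam*np.sin(beta)) * np.cos(2*lam + alpha)
            - (a2 + lam*np.cos(beta)) * np.sin(2*lam + alpha))

def bisect(f, lo, hi, iters=80):
    # plain bisection; assumes f changes sign on [lo, hi]
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        if f(lo)*f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

deviations = {}
for n in (20, 40, 80):
    predicted = (n*np.pi + beta - alpha) / 2.0   # leading term of (2.53)
    root = bisect(omega, predicted - 0.5, predicted + 0.5)
    deviations[n] = abs(root - predicted)
```

For large $n$ the dominant part of ω is $-\lambda\sin(2\lambda+\alpha-\beta)$, so each bracket contains exactly one zero and the computed deviations stay within a constant multiple of $1/n$.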

Using formula (2.53), we obtain the following asymptotic formulae for the eigen-vector-functions $\phi(\cdot,\lambda_n)$:

$$\phi(x,\lambda_n) = \begin{cases}\begin{pmatrix}\cos(\lambda_n(x+1)+\alpha) + O(\frac{1}{n})\\[2pt] \sin(\lambda_n(x+1)+\alpha) + O(\frac{1}{n})\end{pmatrix}, & x\in[-1,0),\\[12pt] \begin{pmatrix}\frac{1}{\delta}\cos(\lambda_n(x+1)+\alpha) + O(\frac{1}{n})\\[2pt] \frac{1}{\delta}\sin(\lambda_n(x+1)+\alpha) + O(\frac{1}{n})\end{pmatrix}, & x\in(0,1],\end{cases} \quad (2.54)$$

where

$$\phi(x,\lambda_n) = \begin{cases}\begin{pmatrix}\phi_{11}(x,\lambda_n)\\ \phi_{21}(x,\lambda_n)\end{pmatrix}, & x\in[-1,0),\\[12pt] \begin{pmatrix}\phi_{12}(x,\lambda_n)\\ \phi_{22}(x,\lambda_n)\end{pmatrix}, & x\in(0,1].\end{cases} \quad (2.55)$$

### 3 Green’s matrix and expansion theorem

Let $F(\cdot) = \begin{pmatrix} f(\cdot)\\ w\end{pmatrix}$, where $f(\cdot) = \begin{pmatrix} f_1(\cdot)\\ f_2(\cdot)\end{pmatrix}$, be a continuous vector-valued function. To study the completeness of the eigenvectors of $\mathcal{A}$, and hence the completeness of the eigen-vector-functions of (2.1)-(2.5), we derive Green's function of problem (2.1)-(2.5) as well as the resolvent of $\mathcal{A}$. Indeed, let λ not be an eigenvalue of $\mathcal{A}$ and consider the inhomogeneous problem

$$(\mathcal{A}-\lambda I)U(x) = F(x),\qquad U(x) = \begin{pmatrix} u(x)\\ R_\beta(u(x))\end{pmatrix},$$

where $I$ is the identity operator. Since

$$(\mathcal{A}-\lambda I)U(x) = \begin{pmatrix}\ell(u)\\ -R_a(u(x))\end{pmatrix} - \lambda\begin{pmatrix} u(x)\\ R_\beta(u(x))\end{pmatrix} = \begin{pmatrix} f(x)\\ w\end{pmatrix},$$

we have

$$u_2'(x) - \{p_1(x)+\lambda\}u_1(x) = f_1(x),\qquad u_1'(x) + \{p_2(x)+\lambda\}u_2(x) = -f_2(x),\quad x\in[-1,0)\cup(0,1], \quad (3.1)$$

$$w = -R_a(u(x)) - \lambda R_\beta(u(x)) \quad (3.2)$$

and the boundary conditions (2.2), (2.4) and (2.5), where λ is not an eigenvalue of problem (2.1)-(2.5).

Now we can represent the general solution of (3.1) in the following form:

$$u(x,\lambda) = \begin{cases} A_1\begin{pmatrix}\phi_{11}(x,\lambda)\\ \phi_{21}(x,\lambda)\end{pmatrix} + B_1\begin{pmatrix}\chi_{11}(x,\lambda)\\ \chi_{21}(x,\lambda)\end{pmatrix}, & x\in[-1,0),\\[12pt] A_2\begin{pmatrix}\phi_{12}(x,\lambda)\\ \phi_{22}(x,\lambda)\end{pmatrix} + B_2\begin{pmatrix}\chi_{12}(x,\lambda)\\ \chi_{22}(x,\lambda)\end{pmatrix}, & x\in(0,1].\end{cases} \quad (3.3)$$

Applying the standard method of variation of constants to (3.3), the functions $A_1(x,\lambda), B_1(x,\lambda)$ and $A_2(x,\lambda), B_2(x,\lambda)$ satisfy the linear systems of equations

$$A_1'(x,\lambda)\phi_{21}(x,\lambda) + B_1'(x,\lambda)\chi_{21}(x,\lambda) = f_1(x),\qquad A_1'(x,\lambda)\phi_{11}(x,\lambda) + B_1'(x,\lambda)\chi_{11}(x,\lambda) = -f_2(x),\quad x\in[-1,0), \quad (3.4)$$

and

$$A_2'(x,\lambda)\phi_{22}(x,\lambda) + B_2'(x,\lambda)\chi_{22}(x,\lambda) = f_1(x),\qquad A_2'(x,\lambda)\phi_{12}(x,\lambda) + B_2'(x,\lambda)\chi_{12}(x,\lambda) = -f_2(x),\quad x\in(0,1]. \quad (3.5)$$

Since λ is not an eigenvalue and $\omega(\lambda)\neq 0$, each of the linear systems (3.4) and (3.5) has a unique solution, which leads to

$$A_1(x,\lambda) = \frac{1}{\omega(\lambda)}\int_{x}^{0}\chi^\top(\xi,\lambda)f(\xi)\,d\xi + A_1,\qquad B_1(x,\lambda) = \frac{1}{\omega(\lambda)}\int_{-1}^{x}\phi^\top(\xi,\lambda)f(\xi)\,d\xi + B_1,\quad x\in[-1,0), \quad (3.6)$$

$$A_2(x,\lambda) = \frac{\delta^2}{\omega(\lambda)}\int_{x}^{1}\chi^\top(\xi,\lambda)f(\xi)\,d\xi + A_2,\qquad B_2(x,\lambda) = \frac{\delta^2}{\omega(\lambda)}\int_{0}^{x}\phi^\top(\xi,\lambda)f(\xi)\,d\xi + B_2,\quad x\in(0,1], \quad (3.7)$$

where $A_1, A_2, B_1$ and $B_2$ are arbitrary constants, and

$$\phi(\xi,\lambda) = \begin{cases}\begin{pmatrix}\phi_{11}(\xi,\lambda)\\ \phi_{21}(\xi,\lambda)\end{pmatrix}, & \xi\in[-1,0),\\[12pt] \begin{pmatrix}\phi_{12}(\xi,\lambda)\\ \phi_{22}(\xi,\lambda)\end{pmatrix}, & \xi\in(0,1],\end{cases}\qquad \chi(\xi,\lambda) = \begin{cases}\begin{pmatrix}\chi_{11}(\xi,\lambda)\\ \chi_{21}(\xi,\lambda)\end{pmatrix}, & \xi\in[-1,0),\\[12pt] \begin{pmatrix}\chi_{12}(\xi,\lambda)\\ \chi_{22}(\xi,\lambda)\end{pmatrix}, & \xi\in(0,1].\end{cases}$$

Substituting equations (3.6) and (3.7) into (3.3), we obtain the solution of (3.1):

$$u(x,\lambda) = \begin{cases}\dfrac{\phi(x,\lambda)}{\omega(\lambda)}\displaystyle\int_{x}^{0}\chi^\top(\xi,\lambda)f(\xi)\,d\xi + \dfrac{\chi(x,\lambda)}{\omega(\lambda)}\displaystyle\int_{-1}^{x}\phi^\top(\xi,\lambda)f(\xi)\,d\xi + A_1\phi(x,\lambda) + B_1\chi(x,\lambda), & x\in[-1,0),\\[16pt] \dfrac{\delta^2\phi(x,\lambda)}{\omega(\lambda)}\displaystyle\int_{x}^{1}\chi^\top(\xi,\lambda)f(\xi)\,d\xi + \dfrac{\delta^2\chi(x,\lambda)}{\omega(\lambda)}\displaystyle\int_{0}^{x}\phi^\top(\xi,\lambda)f(\xi)\,d\xi + A_2\phi(x,\lambda) + B_2\chi(x,\lambda), & x\in(0,1].\end{cases} \quad (3.8)$$

Then from (2.2), (3.2) and the transmission conditions (2.4) and (2.5), we get

$$A_1 = \frac{\delta^2}{\omega(\lambda)}\int_{0}^{1}\chi^\top(\xi,\lambda)f(\xi)\,d\xi - \frac{w\delta^2}{\omega(\lambda)},\qquad B_1 = 0,\qquad A_2 = -\frac{w\delta^2}{\omega(\lambda)},\qquad B_2 = \frac{1}{\omega(\lambda)}\int_{-1}^{0}\phi^\top(\xi,\lambda)f(\xi)\,d\xi.$$

Then (3.8) can be written as

$$u(x,\lambda) = -\frac{w\delta^2}{\omega(\lambda)}\phi(x,\lambda) + \frac{\chi(x,\lambda)}{\omega(\lambda)}\int_{-1}^{x} a(\xi)\phi^\top(\xi,\lambda)f(\xi)\,d\xi + \frac{\phi(x,\lambda)}{\omega(\lambda)}\int_{x}^{1} a(\xi)\chi^\top(\xi,\lambda)f(\xi)\,d\xi,\quad x\in[-1,0)\cup(0,1], \quad (3.9)$$

where

$$a(\xi) = \begin{cases} 1, & \xi\in[-1,0),\\ \delta^2, & \xi\in(0,1],\end{cases} \quad (3.10)$$

which can be written as

$$u(x,\lambda) = -\frac{w\delta^2}{\omega(\lambda)}\phi(x,\lambda) + \int_{-1}^{1} a(\xi)G(x,\xi,\lambda)f(\xi)\,d\xi, \quad (3.11)$$

where

$$G(x,\xi,\lambda) = \frac{1}{\omega(\lambda)}\begin{cases}\chi(x,\lambda)\phi^\top(\xi,\lambda), & -1\le\xi\le x\le 1,\ x\neq 0,\ \xi\neq 0,\\ \phi(x,\lambda)\chi^\top(\xi,\lambda), & -1\le x\le\xi\le 1,\ x\neq 0,\ \xi\neq 0.\end{cases} \quad (3.12)$$

Expanding (3.12), we obtain the concrete form

$$G(x,\xi,\lambda) = \frac{1}{\omega(\lambda)}\begin{cases}\begin{pmatrix}\phi_1(\xi,\lambda)\chi_1(x,\lambda) & \phi_2(\xi,\lambda)\chi_1(x,\lambda)\\ \phi_1(\xi,\lambda)\chi_2(x,\lambda) & \phi_2(\xi,\lambda)\chi_2(x,\lambda)\end{pmatrix}, & -1\le\xi\le x\le 1,\ x\neq 0,\ \xi\neq 0,\\[14pt] \begin{pmatrix}\phi_1(x,\lambda)\chi_1(\xi,\lambda) & \phi_1(x,\lambda)\chi_2(\xi,\lambda)\\ \phi_2(x,\lambda)\chi_1(\xi,\lambda) & \phi_2(x,\lambda)\chi_2(\xi,\lambda)\end{pmatrix}, & -1\le x\le\xi\le 1,\ x\neq 0,\ \xi\neq 0.\end{cases} \quad (3.13)$$

The matrix $G(x,\xi,\lambda)$ is called Green's matrix of problem (2.1)-(2.5). Obviously, $G(x,\xi,\lambda)$ is a meromorphic function of λ, for every $(x,\xi)\in([-1,0)\cup(0,1])^2$, which has simple poles only at the eigenvalues. Although Green's matrix looks as simple as that of Dirac systems, cf., e.g., [25,26], it is rather complicated because of the transmission conditions (see the example at the end of this paper). Therefore

$$U(x) = (\mathcal{A}-\lambda I)^{-1}F(x) = \begin{pmatrix} -\dfrac{w\delta^2}{\omega(\lambda)}\phi(x,\lambda) + \displaystyle\int_{-1}^{1} a(\xi)G(x,\xi,\lambda)f(\xi)\,d\xi\\[10pt] R_\beta(u(x))\end{pmatrix}. \quad (3.14)$$

Lemma 3.1 The operator $\mathcal{A}$ is self-adjoint in $\mathcal{H}$.

Proof Since $\mathcal{A}$ is a symmetric densely defined operator, it is sufficient to show that the deficiency spaces are the null spaces, and hence $\mathcal{A} = \mathcal{A}^*$. Indeed, if $F(x) = \begin{pmatrix} f(x)\\ w\end{pmatrix}\in\mathcal{H}$ and λ is a non-real number, then taking

$$U(x) = \begin{pmatrix} u(x)\\ z\end{pmatrix} = \begin{pmatrix} -\dfrac{w\delta^2}{\omega(\lambda)}\phi(x,\lambda) + \displaystyle\int_{-1}^{1} a(\xi)G(x,\xi,\lambda)f(\xi)\,d\xi\\[10pt] R_\beta(u(x))\end{pmatrix}$$

implies that $U\in D(\mathcal{A})$. Since $G(x,\xi,\lambda)$ satisfies the conditions (2.2)-(2.5), we have $(\mathcal{A}-\lambda I)U(x) = F(x)$. Now we prove that the inverse of $(\mathcal{A}-\lambda I)$ exists. If $\mathcal{A}U(x) = \lambda U(x)$, then

$$(\bar{\lambda}-\lambda)\langle U(\cdot),U(\cdot)\rangle_{\mathcal{H}} = \langle U(\cdot),\lambda U(\cdot)\rangle_{\mathcal{H}} - \langle\lambda U(\cdot),U(\cdot)\rangle_{\mathcal{H}} = \langle U(\cdot),\mathcal{A}U(\cdot)\rangle_{\mathcal{H}} - \langle\mathcal{A}U(\cdot),U(\cdot)\rangle_{\mathcal{H}} = 0\quad (\text{since } \mathcal{A}\text{ is symmetric}).$$

Since $\lambda\notin\mathbb{R}$, we have $\bar{\lambda}-\lambda\neq 0$. Thus $\langle U(\cdot),U(\cdot)\rangle_{\mathcal{H}} = 0$, i.e., $U = 0$. Then $R(\lambda;\mathcal{A}) := (\mathcal{A}-\lambda I)^{-1}$, the resolvent operator of $\mathcal{A}$, exists. Thus

$$R(\lambda;\mathcal{A})F = (\mathcal{A}-\lambda I)^{-1}F = U.$$

Take $\lambda = \pm i$. The domains of $(\mathcal{A}-iI)^{-1}$ and $(\mathcal{A}+iI)^{-1}$ are exactly $\mathcal{H}$. Consequently, the ranges of $(\mathcal{A}-iI)$ and $(\mathcal{A}+iI)$ are also $\mathcal{H}$. Hence the deficiency spaces of $\mathcal{A}$ are

$$N_{-i} := N(\mathcal{A}^*+iI) = R(\mathcal{A}-iI)^{\perp} = \mathcal{H}^{\perp} = \{0\},\qquad N_{i} := N(\mathcal{A}^*-iI) = R(\mathcal{A}+iI)^{\perp} = \mathcal{H}^{\perp} = \{0\}.$$

Hence $\mathcal{A}$ is self-adjoint. □

The next theorem is an eigenfunction expansion theorem. The proof is similar to that of Levitan and Sargsjan derived in [[25], pp.67-77]; see also [26-29].

Theorem 3.2

(i) For $U(\cdot)\in\mathcal{H}$,

$$\|U(\cdot)\|_{\mathcal{H}}^2 = \sum_{n=-\infty}^{\infty}\big|\langle U(\cdot),\Psi_n(\cdot)\rangle_{\mathcal{H}}\big|^2. \quad (3.15)$$

(ii) For $U(\cdot)\in D(\mathcal{A})$,

$$U(x) = \sum_{n=-\infty}^{\infty}\langle U(\cdot),\Psi_n(\cdot)\rangle_{\mathcal{H}}\,\Psi_n(x),\qquad \Psi_n(x) := \Psi(x,\lambda_n), \quad (3.16)$$

the series being absolutely and uniformly convergent in the first component on $[-1,0)\cup(0,1]$, and absolutely convergent in the second component.

### 4 The sampling theorems

The first sampling theorem of this section associated with the boundary value problem (2.1)-(2.5) is the following theorem.

Theorem 4.1 Let $f(\cdot) = \begin{pmatrix} f_1(\cdot)\\ f_2(\cdot)\end{pmatrix}\in H$. For $\lambda\in\mathbb{C}$, let

$$F(\lambda) = \int_{-1}^{0} f^\top(x)\phi(x,\lambda)\,dx + \delta^2\int_{0}^{1} f^\top(x)\phi(x,\lambda)\,dx, \quad (4.1)$$

where $\phi(\cdot,\lambda)$ is the solution defined above. Then $F(\lambda)$ is an entire function of exponential type that can be reconstructed from its values at the points $\{\lambda_n\}_{n=-\infty}^{\infty}$ via the sampling formula

$$F(\lambda) = \sum_{n=-\infty}^{\infty} F(\lambda_n)\frac{\omega(\lambda)}{(\lambda-\lambda_n)\omega'(\lambda_n)}. \quad (4.2)$$

The series (4.2) converges absolutely on ℂ and uniformly on any compact subset of ℂ; here $\omega(\lambda)$ is the entire function defined in (2.28).
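Formula (4.2) has the structure of a Lagrange-type interpolation series: $F$ is recovered from its samples at the zeros of ω. The following Python sketch illustrates this structure in a simplified closed-form setting that is *not* the ϕ-kernel transform of Theorem 4.1: the stand-in choices are $\omega(\lambda) = \sin 2\lambda$ (zeros $\lambda_n = n\pi/2$, $\omega'(\lambda_n) = 2(-1)^n$) and $F(\lambda) = \int_{-1}^{1}(1-|x|)\cos(2\lambda x)\,dx = (\sin\lambda/\lambda)^2$.

```python
import numpy as np

# Stand-in data (illustrative, not the paper's phi-transform):
# omega(lam) = sin(2 lam), lambda_n = n*pi/2, omega'(lambda_n) = 2*(-1)^n,
# F(lam) = int_{-1}^{1} (1-|x|) cos(2 lam x) dx = (sin(lam)/lam)^2.
def F(lam):
    return np.sinc(np.asarray(lam) / np.pi) ** 2   # np.sinc(x) = sin(pi x)/(pi x)

def sample_series(lam, N=300):
    # truncated Lagrange-type series of the form (4.2)
    n = np.arange(-N, N + 1)
    ln = n * np.pi / 2.0
    terms = F(ln) * np.sin(2.0*lam) / ((lam - ln) * 2.0 * (-1.0)**n)
    return float(terms.sum())
```

The truncated series reproduces $F(\lambda)$ at points away from the sample set, with terms decaying like $1/n^3$ for this particular $F$.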

Proof The relation (4.1) can be rewritten in the form

$$F(\lambda) = \langle F(\cdot),\Phi(\cdot,\lambda)\rangle_{\mathcal{H}} = \int_{-1}^{0} f^\top(x)\phi(x,\lambda)\,dx + \delta^2\int_{0}^{1} f^\top(x)\phi(x,\lambda)\,dx,\quad \lambda\in\mathbb{C}, \quad (4.3)$$

where

$$F(x) = \begin{pmatrix} f(x)\\ 0\end{pmatrix},\qquad \Phi(x,\lambda) = \begin{pmatrix}\phi(x,\lambda)\\ R_\beta(\phi(x,\lambda))\end{pmatrix}\in\mathcal{H}.$$

Since both $F(\cdot)$ and $\Phi(\cdot,\lambda)$ are in $\mathcal{H}$, they have the Fourier expansions

$$F(x) = \sum_{n=-\infty}^{\infty}\hat{f}(n)\frac{\Phi(x,\lambda_n)}{\|\Phi(\cdot,\lambda_n)\|_{\mathcal{H}}^2},\qquad \Phi(x,\lambda) = \sum_{n=-\infty}^{\infty}\frac{\langle\Phi(\cdot,\lambda),\Phi(\cdot,\lambda_n)\rangle_{\mathcal{H}}}{\|\Phi(\cdot,\lambda_n)\|_{\mathcal{H}}^2}\,\Phi(x,\lambda_n), \quad (4.4)$$

where $\lambda\in\mathbb{C}$ and $\hat{f}(n)$ are the Fourier coefficients

$$\hat{f}(n) = \langle F(\cdot),\Phi(\cdot,\lambda_n)\rangle_{\mathcal{H}} = \int_{-1}^{0} f^\top(x)\phi(x,\lambda_n)\,dx + \delta^2\int_{0}^{1} f^\top(x)\phi(x,\lambda_n)\,dx. \quad (4.5)$$

Applying Parseval's identity to (4.3), we obtain

$$F(\lambda) = \sum_{n=-\infty}^{\infty} F(\lambda_n)\frac{\langle\Phi(\cdot,\lambda),\Phi(\cdot,\lambda_n)\rangle_{\mathcal{H}}}{\|\Phi(\cdot,\lambda_n)\|_{\mathcal{H}}^2},\quad \lambda\in\mathbb{C}. \quad (4.6)$$

Now we calculate $\langle\Phi(\cdot,\lambda),\Phi(\cdot,\lambda_n)\rangle_{\mathcal{H}}$ and $\|\Phi(\cdot,\lambda_n)\|_{\mathcal{H}}$ for $\lambda\in\mathbb{C}$, $n\in\mathbb{Z}$. To prove expansion (4.2), we need to show that

$$\frac{\langle\Phi(\cdot,\lambda),\Phi(\cdot,\lambda_n)\rangle_{\mathcal{H}}}{\|\Phi(\cdot,\lambda_n)\|_{\mathcal{H}}^2} = \frac{\omega(\lambda)}{(\lambda-\lambda_n)\omega'(\lambda_n)},\quad n\in\mathbb{Z},\ \lambda\in\mathbb{C}. \quad (4.7)$$

Indeed, let $\lambda\in\mathbb{C}$ and $n\in\mathbb{Z}$ be fixed. By the definition of the inner product of $\mathcal{H}$, we have

$$\langle\Phi(\cdot,\lambda),\Phi(\cdot,\lambda_n)\rangle_{\mathcal{H}} = \int_{-1}^{0}\phi^\top(x,\lambda)\phi(x,\lambda_n)\,dx + \delta^2\int_{0}^{1}\phi^\top(x,\lambda)\phi(x,\lambda_n)\,dx + \frac{\delta^2}{\rho}R_\beta(\phi(x,\lambda))R_\beta(\phi(x,\lambda_n)). \quad (4.8)$$

From Green's identity, see [[25], p.51], we have

$$\begin{aligned}&(\lambda_n-\lambda)\left[\int_{-1}^{0}\phi^\top(x,\lambda)\phi(x,\lambda_n)\,dx + \delta^2\int_{0}^{1}\phi^\top(x,\lambda)\phi(x,\lambda_n)\,dx\right]\\ &\quad = W\big(\phi(0^-,\lambda),\phi(0^-,\lambda_n)\big) - W\big(\phi(-1,\lambda),\phi(-1,\lambda_n)\big) - \delta^2 W\big(\phi(0^+,\lambda),\phi(0^+,\lambda_n)\big) + \delta^2 W\big(\phi(1,\lambda),\phi(1,\lambda_n)\big). \quad (4.9)\end{aligned}$$

Then (4.9) and the initial conditions (2.17), (2.21) imply

$$\int_{-1}^{0}\phi^\top(x,\lambda)\phi(x,\lambda_n)\,dx + \delta^2\int_{0}^{1}\phi^\top(x,\lambda)\phi(x,\lambda_n)\,dx = \frac{\delta^2 W\big(\phi(1,\lambda),\phi(1,\lambda_n)\big)}{\lambda_n-\lambda}. \quad (4.10)$$

From (2.40), (2.19) and (2.7), we have

$$\begin{aligned} W\big(\phi(1,\lambda),\phi(1,\lambda_n)\big) &= \phi_{12}(1,\lambda)\phi_{22}(1,\lambda_n) - \phi_{22}(1,\lambda)\phi_{12}(1,\lambda_n)\\ &= c_n^{-1}\big[\phi_{12}(1,\lambda)\chi_{22}(1,\lambda_n) - \phi_{22}(1,\lambda)\chi_{12}(1,\lambda_n)\big]\\ &= c_n^{-1}\big[(\lambda_n\sin\beta + a_1)\phi_{12}(1,\lambda) - (\lambda_n\cos\beta + a_2)\phi_{22}(1,\lambda)\big]\\ &= c_n^{-1}\big[\delta^{-2}\omega(\lambda) + (\lambda_n-\lambda)R_\beta(\phi(x,\lambda))\big]. \quad (4.11)\end{aligned}$$

Also, from (2.40) we have

δ 2 ρ R β ( ϕ ( x , λ ) ) R β ( ϕ ( x , λ n ) ) = δ 2 c n 1 ρ R β ( ϕ ( x , λ ) ) R β ( χ ( x , λ n ) ) . (4.12)

Then from (2.26) and (4.12) we obtain

δ 2 ρ R β ( ϕ ( x , λ ) ) R β ( ϕ ( x , λ n ) ) = δ 2 c n 1 R β ( ϕ ( x , λ ) ) . (4.13)

Substituting from (4.10), (4.11) and (4.13) into (4.8), we get

$$
\langle\Phi(\cdot,\lambda),\Phi(\cdot,\lambda_{n})\rangle_{H}
=c_{n}^{-1}\,\frac{\omega(\lambda)}{\lambda_{n}-\lambda}.
\tag{4.14}
$$

Letting $\lambda\to\lambda_{n}$ in (4.14), since the zeros of $\omega(\lambda)$ are simple, we get

$$
\langle\Phi(\cdot,\lambda_{n}),\Phi(\cdot,\lambda_{n})\rangle_{H}
=\|\Phi(\cdot,\lambda_{n})\|_{H}^{2}
=-c_{n}^{-1}\,\omega'(\lambda_{n}).
\tag{4.15}
$$

Since $\lambda\in\mathbb{C}$ and $n\in\mathbb{Z}$ are arbitrary, (4.14) and (4.15) hold for all $\lambda\in\mathbb{C}$ and all $n\in\mathbb{Z}$. Therefore, from (4.14) and (4.15) we get (4.7). Hence (4.2) is proved with pointwise convergence on ℂ. Now we investigate the convergence of (4.2). First we prove that it is absolutely convergent on ℂ. Using the Cauchy-Schwarz inequality, for $\lambda\in\mathbb{C}$,

$$
\sum_{k=-\infty}^{\infty}\left|\frac{F(\lambda_{k})\,\omega(\lambda)}{(\lambda-\lambda_{k})\,\omega'(\lambda_{k})}\right|
\le\left(\sum_{k=-\infty}^{\infty}\frac{\bigl|\langle F(\cdot),\Phi(\cdot,\lambda_{k})\rangle_{H}\bigr|^{2}}{\|\Phi(\cdot,\lambda_{k})\|_{H}^{2}}\right)^{1/2}
\times\left(\sum_{k=-\infty}^{\infty}\frac{\bigl|\langle\Phi(\cdot,\lambda),\Phi(\cdot,\lambda_{k})\rangle_{H}\bigr|^{2}}{\|\Phi(\cdot,\lambda_{k})\|_{H}^{2}}\right)^{1/2}.
\tag{4.16}
$$

Since $F(\cdot),\Phi(\cdot,\lambda)\in H$, the two series on the right-hand side of (4.16) converge. Thus series (4.2) converges absolutely on ℂ. As for uniform convergence, let $M\subset\mathbb{C}$ be compact. Let $\lambda\in M$ and $N>0$. Define $\nu_{N}(\lambda)$ to be

$$
\nu_{N}(\lambda):=\left|F(\lambda)-\sum_{k=-N}^{N}\frac{F(\lambda_{k})\,\omega(\lambda)}{(\lambda-\lambda_{k})\,\omega'(\lambda_{k})}\right|.
\tag{4.17}
$$

Using the same method developed above, we get

$$
\nu_{N}(\lambda)\le\left(\sum_{|k|>N}\frac{\bigl|\langle F(\cdot),\Phi(\cdot,\lambda_{k})\rangle_{H}\bigr|^{2}}{\|\Phi(\cdot,\lambda_{k})\|_{H}^{2}}\right)^{1/2}
\left(\sum_{|k|>N}\frac{\bigl|\langle\Phi(\cdot,\lambda),\Phi(\cdot,\lambda_{k})\rangle_{H}\bigr|^{2}}{\|\Phi(\cdot,\lambda_{k})\|_{H}^{2}}\right)^{1/2}.
\tag{4.18}
$$

Therefore

$$
\nu_{N}(\lambda)\le\|\Phi(\cdot,\lambda)\|_{H}
\left(\sum_{|k|>N}\frac{\bigl|\langle F(\cdot),\Phi(\cdot,\lambda_{k})\rangle_{H}\bigr|^{2}}{\|\Phi(\cdot,\lambda_{k})\|_{H}^{2}}\right)^{1/2}.
\tag{4.19}
$$

Since $[-1,1]\times M$ is compact, we can find a positive constant $C_{M}$ such that

$$
\|\Phi(\cdot,\lambda)\|_{H}\le C_{M}\quad\text{for all }\lambda\in M.
\tag{4.20}
$$

Then

$$
\nu_{N}(\lambda)\le C_{M}\left(\sum_{|k|>N}\frac{\bigl|\langle F(\cdot),\Phi(\cdot,\lambda_{k})\rangle_{H}\bigr|^{2}}{\|\Phi(\cdot,\lambda_{k})\|_{H}^{2}}\right)^{1/2}
\tag{4.21}
$$

uniformly on M. In view of Parseval’s equality,

$$
\left(\sum_{|k|>N}\frac{\bigl|\langle F(\cdot),\Phi(\cdot,\lambda_{k})\rangle_{H}\bigr|^{2}}{\|\Phi(\cdot,\lambda_{k})\|_{H}^{2}}\right)^{1/2}\longrightarrow 0\quad\text{as }N\to\infty.
$$

Thus $\nu_{N}(\lambda)\to 0$ uniformly on M. Hence (4.2) converges uniformly on M. Thus $F(\lambda)$ is an entire function. From the relation

$$
|F(\lambda)|\le\int_{-1}^{0}|f_{1}(x)|\,|\phi_{11}(x,\lambda)|\,dx
+\int_{-1}^{0}|f_{2}(x)|\,|\phi_{21}(x,\lambda)|\,dx
+\delta^{2}\int_{0}^{1}|f_{1}(x)|\,|\phi_{12}(x,\lambda)|\,dx
+\delta^{2}\int_{0}^{1}|f_{2}(x)|\,|\phi_{22}(x,\lambda)|\,dx,\quad\lambda\in\mathbb{C},
$$

and the fact that $\phi_{ij}(\cdot,\lambda)$, $i,j=1,2$, are entire functions of exponential type, we conclude that $F(\lambda)$ is of exponential type. □
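For intuition only (this numerical example is not the paper's discontinuous Dirac system): in the classical continuous case recalled in the Introduction, with the hypothetical sampling nodes $\lambda_{n}=n$ and $\omega(\lambda)=\sin(\pi\lambda)/\pi$, one has $\omega'(n)=(-1)^{n}$, and a Lagrange-type series of the form (4.2) reduces to the WSK cardinal series, which can be checked numerically. A minimal sketch, assuming a shifted sinc as the band-limited test signal:

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def lagrange_series(F, lam, N=500):
    # Truncated Lagrange-type sampling series in the classical case
    # omega(lam) = sin(pi lam)/pi, lambda_n = n, omega'(n) = (-1)^n,
    # so omega(lam)/((lam - n) omega'(n)) == sinc(lam - n).
    return sum(F(n) * sinc(lam - n) for n in range(-N, N + 1))

# Band-limited test signal of exponential type pi (a shifted sinc).
F = lambda x: sinc(x - 0.3)

lam = 0.7
err = abs(lagrange_series(F, lam) - F(lam))
print(err)  # small truncation error
```

The truncation error decays like $1/N$ here because both factors in each tail term decay like $1/|n|$; for $N=500$ the error is well below $10^{-2}$.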

Remark 4.2 To see that expansion (4.2) is a Lagrange-type interpolation, we may replace $\omega(\lambda)$ by the canonical product

$$
\widetilde{\omega}(\lambda)=(\lambda-\lambda_{0})\prod_{n=1}^{\infty}\left(1-\frac{\lambda}{\lambda_{n}}\right)\left(1-\frac{\lambda}{\lambda_{-n}}\right).
\tag{4.22}
$$

From Hadamard’s factorization theorem, see [4], $\omega(\lambda)=h(\lambda)\,\widetilde{\omega}(\lambda)$, where $h(\lambda)$ is an entire function with no zeros. Thus,

$$
\frac{\omega(\lambda)}{\omega'(\lambda_{n})}=\frac{h(\lambda)\,\widetilde{\omega}(\lambda)}{h(\lambda_{n})\,\widetilde{\omega}'(\lambda_{n})}
$$

and (4.1), (4.2) remain valid for the function $F(\lambda)/h(\lambda)$. Hence

$$
F(\lambda)=\sum_{n=-\infty}^{\infty}F(\lambda_{n})\,\frac{h(\lambda)\,\widetilde{\omega}(\lambda)}{h(\lambda_{n})\,\widetilde{\omega}'(\lambda_{n})\,(\lambda-\lambda_{n})}.
\tag{4.23}
$$

We may redefine (4.1) by taking the kernel $\widetilde{\phi}(\cdot,\lambda)=\phi(\cdot,\lambda)/h(\lambda)$ to get

$$
\widetilde{F}(\lambda)=\frac{F(\lambda)}{h(\lambda)}=\sum_{n=-\infty}^{\infty}\widetilde{F}(\lambda_{n})\,\frac{\widetilde{\omega}(\lambda)}{(\lambda-\lambda_{n})\,\widetilde{\omega}'(\lambda_{n})}.
\tag{4.24}
$$
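For orientation (an illustrative special case, not drawn from the problem at hand): if the eigenvalues were the classical ones, $\lambda_{n}=n\pi$, $n\in\mathbb{Z}$, with $\lambda_{0}=0$, the canonical product (4.22) would collapse to the familiar sine product and (4.24) would reduce to the WSK cardinal series:

```latex
% Hypothetical eigenvalues \lambda_n = n\pi (\lambda_0 = 0):
\widetilde{\omega}(\lambda)
  = \lambda \prod_{n=1}^{\infty}
      \Bigl(1 - \frac{\lambda}{n\pi}\Bigr)\Bigl(1 + \frac{\lambda}{n\pi}\Bigr)
  = \lambda \prod_{n=1}^{\infty}\Bigl(1 - \frac{\lambda^{2}}{n^{2}\pi^{2}}\Bigr)
  = \sin\lambda,
\qquad
\widetilde{\omega}'(n\pi) = \cos n\pi = (-1)^{n},
% so (4.24) becomes the cardinal series
\widetilde{F}(\lambda)
  = \sum_{n=-\infty}^{\infty} \widetilde{F}(n\pi)\,
    \frac{(-1)^{n}\sin\lambda}{\lambda - n\pi}.
```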

The next theorem is devoted to giving vector-type interpolation sampling expansions associated with problem (2.1)-(2.5) for integral transforms whose kernels are defined in terms of Green’s matrix. As we see in (3.12), Green’s matrix $G(x,\xi,\lambda)$ of problem (2.1)-(2.5) has simple poles at $\{\lambda_{k}\}_{k=-\infty}^{\infty}$. Define the function $\mathbf{G}(x,\lambda)$ to be $\mathbf{G}(x,\lambda):=\omega(\lambda)\,G(x,\xi_{0},\lambda)$, where $\xi_{0}\in[-1,0)\cup(0,1]$ is a fixed point and $\omega(\lambda)$ is the function defined in (2.28) or it is the canonical product (4.22).

Theorem 4.3 Let $f(x)=\begin{pmatrix}f_{1}(x)\\ f_{2}(x)\end{pmatrix}\in H$. Let $F(\lambda)=\begin{pmatrix}F_{1}(\lambda)\\ F_{2}(\lambda)\end{pmatrix}$ be the vector-valued transform

$$
F(\lambda)=\int_{-1}^{0}\mathbf{G}(x,\lambda)\,\overline{f(x)}\,dx+\delta^{2}\int_{0}^{1}\mathbf{G}(x,\lambda)\,\overline{f(x)}\,dx.
\tag{4.25}
$$

Then $F(\lambda)$ is a vector-valued entire function of exponential type that admits the vector-valued sampling expansion

$$
F(\lambda)=\sum_{n=-\infty}^{\infty}F(\lambda_{n})\,\frac{\omega(\lambda)}{(\lambda-\lambda_{n})\,\omega'(\lambda_{n})}.
\tag{4.26}
$$

The vector-valued series (4.26) converges absolutely on ℂ and uniformly on compact subsets of ℂ. Here (4.26) means

$$
F_{1}(\lambda)=\sum_{n=-\infty}^{\infty}F_{1}(\lambda_{n})\,\frac{\omega(\lambda)}{(\lambda-\lambda_{n})\,\omega'(\lambda_{n})},\qquad
F_{2}(\lambda)=\sum_{n=-\infty}^{\infty}F_{2}(\lambda_{n})\,\frac{\omega(\lambda)}{(\lambda-\lambda_{n})\,\omega'(\lambda_{n})},
\tag{4.27}
$$

where both series converge absolutely on ℂ and uniformly on compact subsets of ℂ.

Proof The integral transform (4.25) can be written as

$$
F(\lambda)=\langle\mathcal{G}(\cdot,\lambda),\mathcal{F}(\cdot)\rangle_{H},\qquad
\mathcal{F}(x)=\begin{pmatrix}f(x)\\ 0\end{pmatrix},\qquad
\mathcal{G}(x,\lambda)=\begin{pmatrix}\mathbf{G}(x,\lambda)\\ R'_{\beta}\bigl(\mathbf{G}(\cdot,\lambda)\bigr)\end{pmatrix}\in H.
\tag{4.28}
$$

Applying Parseval’s identity to (4.28) with respect to $\{\Phi(\cdot,\lambda_{n})\}_{n=-\infty}^{\infty}$, we obtain

$$
F(\lambda)=\sum_{n=-\infty}^{\infty}\frac{\langle\mathcal{G}(\cdot,\lambda),\Phi(\cdot,\lambda_{n})\rangle_{H}\,\overline{\langle\mathcal{F}(\cdot),\Phi(\cdot,\lambda_{n})\rangle_{H}}}{\|\Phi(\cdot,\lambda_{n})\|_{H}^{2}}.
\tag{4.29}
$$

Let $\lambda\in\mathbb{C}$ be such that $\lambda\neq\lambda_{n}$ for $n\in\mathbb{Z}$. Since each $\Phi(\cdot,\lambda_{n})$ is an eigenvector of $A$, then

( A