### Abstract

Sampling theory states that a function may be determined from its sampled values at certain points, provided the function satisfies certain conditions. In this paper we consider a Dirac system that contains an eigenparameter appearing linearly in one boundary condition, in addition to an internal point of discontinuity. We closely follow the analysis of Annaby and Tharwat (J. Appl. Math. Comput. 2010, doi:10.1007/s12190-010-0404-9) to establish the relations needed for the derivation of the sampling theorems, including the construction of Green’s matrix as well as the eigen-vector-function expansion theorem. We derive sampling representations for transforms whose kernels are either solutions or Green’s matrix of the problem. In the special case when our problem is continuous, the obtained results coincide with the corresponding results in Annaby and Tharwat (J. Appl. Math. Comput. 2010, doi:10.1007/s12190-010-0404-9).

**MSC:** 34L16, 94A20, 65L15.

##### Keywords:

Dirac systems; transmission conditions; eigenvalue parameter in the boundary conditions; discontinuous boundary value problems

### 1 Introduction

Sampling theory is one of the most powerful results in signal analysis. In signal
processing there is a great need to reconstruct (recover) a signal (function) from its values
at a discrete sequence of points (samples). If this aim is achieved, then an analog
(continuous) signal can be transformed into a digital (discrete) one and then recovered
by the receiver. If the signal is band-limited, the sampling process
can be done via the celebrated Whittaker, Shannon and Kotel’nikov (WSK) sampling theorem
[1-3]. A band-limited signal with band width *σ*,
*i.e.*, a signal containing no frequencies higher than
*σ*, is a function of the form, *cf.* [4,5],

Now the WSK sampling theorem states [6,7]: If

where

The sampling series (1.2) is absolutely and uniformly convergent on compact subsets of ℂ.
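For the reader’s convenience, the classical WSK statement is recalled here; the missing displays (1.1) and (1.2) are assumed to be standard instances of the following form (this is the textbook statement, not a transcription of the stripped equations):

```latex
% Classical WSK theorem (standard form of the displays (1.1), (1.2)):
% if f is the transform
f(t) = \int_{-\sigma}^{\sigma} g(x)\,e^{ixt}\,dx, \qquad g \in L^{2}(-\sigma,\sigma),
% then f is recovered from its equidistant samples via
f(t) = \sum_{n=-\infty}^{\infty} f\!\left(\frac{n\pi}{\sigma}\right)
       \frac{\sin(\sigma t - n\pi)}{\sigma t - n\pi}.
```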

The WSK sampling theorem has been generalized in many different ways. Here we are interested in two extensions. The first is concerned with replacing the equidistant sampling points by more general ones, which is very important from the practical point of view. The following theorem, known in some of the literature as the Paley-Wiener theorem [5], gives a sampling theorem for a more general class of sampling points. Although the theorem in its final form may be attributed to Levinson [8] and Kadec [9], it could be named after Paley and Wiener, who first derived the theorem in a more restrictive form; see [6,7,10] for more details.

The Paley-Wiener theorem states that if

and *G* is the entire function defined by

then, for any function of the form (1.1), we have

The series (1.6) converges uniformly on compact subsets of ℂ.

The WSK sampling theorem is a special case of this theorem because if we choose

The sampling series (1.6) can be regarded as an extension of the classical Lagrange interpolation formula to ℝ for functions of exponential type. Therefore, (1.6) is called a Lagrange-type interpolation expansion.
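As an illustrative numerical sketch (not part of the original paper), the equidistant WSK expansion can be checked directly. The example signal, band width, and truncation level below are all assumptions of this illustration:

```python
import numpy as np

sigma = np.pi  # assumed band width for this illustration

def f(t):
    # Example band-limited signal sin(pi t/2)/(pi t/2); its band width is
    # pi/2 <= sigma, so the WSK expansion with step pi/sigma applies.
    # Note np.sinc(x) = sin(pi x)/(pi x).
    return np.sinc(t / 2.0)

def wsk_reconstruct(t, n_terms=200):
    """Truncated WSK series built from the samples f(n*pi/sigma)."""
    n = np.arange(-n_terms, n_terms + 1)
    samples = f(n * np.pi / sigma)           # sample points t_n = n*pi/sigma
    kernel = np.sinc(sigma * t / np.pi - n)  # sin(sigma*t - n*pi)/(sigma*t - n*pi)
    return float(np.sum(samples * kernel))

t0 = 0.37
err = abs(wsk_reconstruct(t0) - f(t0))
```

With 200 terms on each side the truncation error at a generic point is small, and at a sample point the series reproduces the sample value, since the sinc kernel vanishes at all other sample points.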

The second extension of the WSK sampling theorem is the theorem of Kramer [11], which states that if *I* is a finite closed interval,
*t* such that

where

Series (1.7) converges uniformly wherever
*t*, is bounded. In this theorem, sampling representations were given for integral transforms
whose kernels are more general than
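Kramer’s theorem in its classical form is recalled here for orientation, since the display (1.7) and the kernel are not reproduced in this copy: if the kernel satisfies the completeness and orthogonality hypotheses on *I*, the standard statement reads

```latex
% Classical Kramer theorem: K(.,t) in L^2(I) for every t, and
% {K(.,t_n)}_n a complete orthogonal family in L^2(I); then
f(t) = \int_{I} g(x)\,K(x,t)\,dx, \quad g \in L^{2}(I)
\;\Longrightarrow\;
f(t) = \sum_{n} f(t_n)\,
  \frac{\displaystyle\int_{I} K(x,t)\,\overline{K(x,t_n)}\,dx}
       {\displaystyle\int_{I} \lvert K(x,t_n)\rvert^{2}\,dx}.
```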

The relationship between both extensions of WSK sampling theorem has been investigated
extensively. Starting from a function theory approach, *cf.*[12], it was proved in [13] that if
*e.g.*, [18-20]. Also, papers in sampling with discontinuous eigenproblems are few; see [21-24]. However, sampling theories associated with Dirac systems which contain eigenparameter
in the boundary conditions and have at the same time discontinuity conditions, do
not exist, as far as we know. Our investigation is the first in that direction,
providing a good example. To achieve our aim, we briefly study the spectral analysis
of the problem. Then we derive two sampling theorems using solutions and Green’s matrix
respectively.

### 2 The eigenvalue problem

In this section, we define our boundary value problem and state some of its properties. We consider the Dirac system

and transmission conditions

where

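For orientation, since the displays (2.1)-(2.5) are not reproduced in this copy: a canonical one-dimensional Dirac system of the type treated in [24] and [25] has the following form. The coefficient functions *p*, *r* and the interval [0,π] with interior discontinuity point *a* are assumptions of this sketch, not a transcription of (2.1):

```latex
u_2'(x) - p(x)\,u_1(x) = \lambda\,u_1(x),
\qquad
u_1'(x) + r(x)\,u_2(x) = -\lambda\,u_2(x),
\qquad x \in [0,a) \cup (a,\pi],
```

together with boundary conditions at 0 and π (one of which depends linearly on λ) and transmission conditions coupling the values of the solution from the left and right of *x = a*.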
In [24] the authors discussed problem (2.1)-(2.5) but with the condition

where ⊤ denotes the matrix transpose,

Equation (2.1) can be written as

where

For functions

which are defined on

In the following lemma, we prove that the eigenvalues of problem (2.1)-(2.5) are real.

**Lemma 2.1** *The eigenvalues of problem* (2.1)-(2.5) *are real*.

*Proof* Assume, to the contrary, that

Integrating the above equation through

Then from (2.2), (2.3) and transmission conditions, we have, respectively,

and

Since

This contradicts the conditions

Let

Thus, the operator

The operator

**Lemma 2.2** *Let λ and μ be two different eigenvalues of problem* (2.1)-(2.5). *Then the corresponding eigenfunctions*
*and*
*of this problem satisfy the following equality*:

*Proof* Equation (2.15) follows immediately from the orthogonality of the corresponding eigenelements:

□

Now, we construct a special fundamental system of solutions of equation (2.1) for
*λ* not an eigenvalue. Let us consider the following initial value problem:

By virtue of Theorem 1.1 in [25], this problem has a unique solution

has a unique solution
*λ* for each fixed

Now the functions

has a unique solution

Similarly, the following problem also has a unique solution

Let us construct two basic solutions of equation (2.1) as follows:

where

Then

By virtue of equations (2.21) and (2.23), these solutions satisfy both transmission
conditions (2.4) and (2.5). These functions are entire in *λ* for all

Let
*i.e.*,

It is obvious that the Wronskian

are independent of

for each

**Corollary 2.3** *The zeros of the functions*
*and*
*coincide*.

Then we may consider the characteristic function

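For orientation, in problems of this type the characteristic function is typically realized as the (x-independent) Wronskian of the two basic solutions; the following definition is an assumption of this sketch, since display (2.28) is not reproduced here:

```latex
% Typical characteristic function for such problems (an assumption,
% not a transcription of the missing display (2.28)):
\omega(\lambda) := W\big(\varphi,\chi\big)(\lambda)
 = \varphi_1(x,\lambda)\,\chi_2(x,\lambda)
 - \varphi_2(x,\lambda)\,\chi_1(x,\lambda),
```

which is entire in λ and whose zeros, by Lemma 2.4 below, are exactly the eigenvalues of problem (2.1)-(2.5).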
In the following lemma, we show that all eigenvalues of problem (2.1)-(2.5) are simple.

**Lemma 2.4** *All eigenvalues of problem* (2.1)-(2.5) *are just zeros of the function*
*Moreover*, *every zero of*
*has multiplicity one*.

*Proof* Since the functions

By (2.1) we obtain for

Integrating the above equation through

Dividing both sides of (2.29) by

We show that equation

has only simple roots. Assume the converse, *i.e.*, equation (2.31) has a double root

Since

Combining (2.34) and (2.30) with

contradicting the assumption

Let

are the corresponding eigenvectors of the operator

Here
*i.e.*,

Since
*via*

Therefore

where

Now we derive the asymptotic formulae of the eigenvalues

For
*x*,
*cf.* [[25], p.55],

Now we find an asymptotic formula of the eigenvalues. Since the eigenvalues of the boundary value problem (2.1)-(2.5) coincide with the roots of the equation

then from the estimates (2.47), (2.48) and (2.49), we get

which can be written as

Then, from (2.45) and (2.46), equation (2.50) has the form

For large

Inserting these values in (2.51), we find that
*i.e.*,

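For orientation, since the displays (2.52) and (2.53) are not reproduced in this copy: eigenvalues of Dirac systems on a finite interval classically obey first-order asymptotics of the following shape, *cf.* [[25], Ch. 2]; the constant and the remainder for the present problem depend on the coefficients and the boundary/transmission data, which are not reproduced here:

```latex
% Classical first-order eigenvalue asymptotics for a Dirac system on a
% finite interval (the constant c_0 is an assumption of this sketch):
\lambda_n = n + c_0 + O\!\left(\frac{1}{n}\right), \qquad n \to \pm\infty.
```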
Using the formulae (2.53), we obtain the following asymptotic formulae for the eigen-vector-functions

where

### 3 Green’s matrix and expansion theorem

Let
*λ* be not an eigenvalue of

where *I* is the identity operator. Since

then we have

and the boundary conditions (2.2), (2.4) and (2.5), with *λ* not an eigenvalue of problem (2.1)-(2.5).

Now, we can represent the general solution of (3.1) in the following form:

We apply the standard method of variation of constants to (3.3); thus the
functions

and

Since *λ* is not an eigenvalue and

where

Substituting equations (3.6) and (3.7) into (3.3), we obtain the solution of (3.1)

Then from (2.2), (3.2) and the transmission conditions (2.4) and (2.5), we get

Then (3.8) can be written as

where

which can be written as

where

Expanding (3.12) we obtain the concrete form

The matrix
*λ*, for every
*cf.*, *e.g.*, [25,26], it is rather complicated because of the transmission conditions (see the example
at the end of this paper). Therefore

**Lemma 3.1** *The operator*
*is self*-*adjoint in*

*Proof* Since
*λ* is a non-real number, then taking

implies that

Since
*i.e.*,

Take

Hence

The next theorem is an eigenfunction expansion theorem. The proof closely follows that of Levitan and Sargsjan in [[25], pp.67-77]; see also [26-29].

**Theorem 3.2**

(i) *For*

(ii) *For*

*the series being absolutely and uniformly convergent in the first component on*
*and absolutely convergent in the second component*.

### 4 The sampling theorems

The first sampling theorem of this section associated with the boundary value problem (2.1)-(2.5) is the following theorem.

**Theorem 4.1** *Let*
*For*
*let*

*where*
*is the solution defined above*. *Then*
*is an entire function of exponential type that can be reconstructed from its values
at the points*
*via the sampling formula*

*The series* (4.2) *converges absolutely on* ℂ *and uniformly on any compact subset of* ℂ, *and*
*is the entire function defined in* (2.28).

*Proof* The relation (4.1) can be rewritten in the form

where

Since both

where

Applying Parseval’s identity to (4.3), we obtain

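The Parseval identity used at this step is the standard one: if {Φₙ} is a complete orthonormal family in the underlying Hilbert space *H*, then for *f*, *g* ∈ *H*,

```latex
\langle f,g\rangle_{H}
  = \sum_{n} \langle f,\Phi_n\rangle_{H}\,
             \overline{\langle g,\Phi_n\rangle_{H}},
\qquad
\lVert f\rVert_{H}^{2}
  = \sum_{n} \lvert \langle f,\Phi_n\rangle_{H} \rvert^{2}.
```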
Now we calculate

Indeed, let

From Green’s identity, see [[25], p.51], we have

Then (4.9) and the initial conditions (2.21) imply

From (2.40), (2.19) and (2.7), we have

Also, from (2.40) we have

Then from (2.26) and (4.12) we obtain

Substituting from (4.10), (4.11) and (4.13) into (4.8), we get

Letting

Since

Since

Using the same method developed above, we get

Therefore

Since

Then

uniformly on *M*. In view of Parseval’s equality,

Thus
*M*. Hence (4.2) converges uniformly on *M*. Thus

and the fact that

**Remark 4.2** To see that expansion (4.2) is a Lagrange-type interpolation, we may replace

From Hadamard’s factorization theorem, see [4],

and (4.1), (4.2) remain valid for the function

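The form of Hadamard’s factorization theorem invoked here is the classical one: an entire function *f* of finite order ρ, with zeros zₙ ≠ 0 and a zero of multiplicity *m* at the origin, factors as

```latex
f(z) = z^{m}\,e^{g(z)} \prod_{n} E_{p}\!\left(\frac{z}{z_n}\right),
\qquad
E_{p}(w) = (1-w)\exp\!\Big(w + \frac{w^{2}}{2} + \cdots + \frac{w^{p}}{p}\Big),
```

where *g* is a polynomial of degree at most ρ and the genus satisfies *p* ≤ ρ.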
We may redefine (4.1) by taking kernel

The next theorem is devoted to giving vector-type interpolation sampling expansions
associated with problem (2.1)-(2.5) for integral transforms whose kernels are defined
in terms of Green’s matrix. As we see in (3.12), Green’s matrix

**Theorem 4.3** *Let*
*Let*
*be the vector*-*valued transform*

*Then*
*is a vector*-*valued entire function of exponential type that admits the vector*-*valued sampling expansion*

*The vector*-*valued series* (4.26) *converges absolutely on* ℂ *and uniformly on compact subsets of* ℂ. *Here* (4.26) *means*

*where both series converge absolutely on* ℂ *and uniformly on compact sets of* ℂ.

*Proof* The integral transform (4.25) can be written as

Applying Parseval’s identity to (4.28) with respect to

Let

Thus