Sampling theory states that a function may be determined from its values at certain points provided the function satisfies certain conditions. In this paper we consider a Dirac system which contains an eigenparameter appearing linearly in one boundary condition, in addition to an internal point of discontinuity. We closely follow the analysis of Annaby and Tharwat (J. Appl. Math. Comput. 2010, doi:10.1007/s12190-010-0404-9) to establish the relations needed for the derivation of the sampling theorems, including the construction of Green’s matrix as well as the eigen-vector-function expansion theorem. We derive sampling representations for transforms whose kernels are either solutions or Green’s matrix of the problem. In the special case when our problem is continuous, the obtained results coincide with the corresponding results in Annaby and Tharwat (J. Appl. Math. Comput. 2010, doi:10.1007/s12190-010-0404-9).
MSC: 34L16, 94A20, 65L15.
Keywords: Dirac systems; transmission conditions; eigenvalue parameter in the boundary conditions; discontinuous boundary value problems
Sampling theory is one of the most powerful tools in signal analysis. It is of great importance in signal processing to reconstruct (recover) a signal (function) from its values at a discrete sequence of points (samples). If this aim is achieved, then an analog (continuous) signal can be transformed into a digital (discrete) one and then recovered by the receiver. If the signal is band-limited, the sampling process can be carried out via the celebrated Whittaker-Shannon-Kotel’nikov (WSK) sampling theorem [1-3]. By a band-limited signal with band width σ, i.e., a signal containing no frequencies higher than σ/2π cycles per second (cps), we mean a function in the Paley-Wiener space of entire functions of exponential type at most σ which are L²-functions when restricted to ℝ. In other words, f belongs to this space if there exists g ∈ L²(−σ, σ) such that, cf. [4,5],
The sampling series (1.2) is absolutely and uniformly convergent on compact subsets of ℂ.
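As a quick numerical illustration (not part of the original analysis), the following sketch reconstructs a band-limited toy signal from its integer samples via the WSK series, here with σ = π and the shifted sinc f(t) = sinc(t − 0.3) as a stand-in signal:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with the removable singularity filled."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def f(t):
    # A band-limited toy signal with band width sigma = pi (lies in PW_pi).
    return sinc(t - 0.3)

def wsk(t, N=500):
    """Truncated WSK sampling series: samples at t_k = k, sigma = pi."""
    return sum(f(k) * sinc(t - k) for k in range(-N, N + 1))
```

The truncated series matches the signal at a non-sample point (e.g. t = 0.7) up to the slowly decaying tail of the series, and is exact at the sample points themselves.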
The WSK sampling theorem has been generalized in many different ways. Here we are interested in two extensions. The first is concerned with replacing the equidistant sampling points by more general ones, which is very important from the practical point of view. The following theorem, known in some literature as the Paley-Wiener theorem, gives a sampling theorem for a more general class of sampling points. Although the theorem in its final form may be attributed to Levinson and Kadec, it could be named after Paley and Wiener, who first derived the theorem in a more restrictive form; see [6,7,10] for more details.
The Paley-Wiener theorem states that if {t_k}, k ∈ ℤ, is a sequence of real numbers such that
and G is the entire function defined by
then, for any function of the form (1.1), we have
The series (1.6) converges uniformly on compact subsets of ℂ.
The WSK sampling theorem is a special case of this theorem because if we choose t_k = kπ/σ, then
The sampling series (1.6) can be regarded as an extension of the classical Lagrange interpolation formula to ℝ for functions of exponential type. Therefore, (1.6) is called a Lagrange-type interpolation expansion.
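The reduction to the WSK theorem can be checked directly. Assuming G has the usual canonical-product form with zeros {t_k} and taking t_k = kπ/σ, Euler's product for the sine gives

```latex
G(t)=t\prod_{k=1}^{\infty}\Bigl(1-\frac{\sigma^{2}t^{2}}{k^{2}\pi^{2}}\Bigr)
    =\frac{\sin(\sigma t)}{\sigma},
\qquad
G'(t_k)=\cos(\sigma t_k)=\cos(k\pi)=(-1)^{k},
```

so the Lagrange-type coefficients collapse to the classical sinc kernel,

```latex
\frac{G(t)}{(t-t_k)\,G'(t_k)}
  =\frac{(-1)^{k}\sin(\sigma t)}{\sigma\,(t-t_k)}
  =\frac{\sin\bigl(\sigma(t-t_k)\bigr)}{\sigma\,(t-t_k)},
```

which is exactly the kernel of (1.2).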
The second extension of the WSK sampling theorem is the theorem of Kramer, which states the following. Let I be a finite closed interval and let K(x, t), x ∈ I, t ∈ ℝ, be a function continuous in t such that K(·, t) ∈ L²(I) for all t ∈ ℝ. Let {t_k}, k ∈ ℤ, be a sequence of real numbers such that {K(·, t_k)}, k ∈ ℤ, is a complete orthogonal set in L²(I). Suppose that
where g ∈ L²(I). Then
Series (1.7) converges uniformly wherever the L²(I)-norm of K(·, t), as a function of t, is bounded. In this theorem sampling representations were given for integral transforms whose kernels are more general than e^{ixt}. Also, Kramer’s theorem is a generalization of the WSK theorem: if we take K(x, t) = e^{ixt}, I = [−σ, σ] and t_k = kπ/σ, then (1.7) is (1.2).
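The last reduction is a one-line computation. With K(x, t) = e^{ixt} on I = [−σ, σ] and t_k = kπ/σ, the expansion coefficients in (1.7) become

```latex
\frac{\displaystyle\int_{-\sigma}^{\sigma} e^{ixt}\,\overline{e^{ixt_k}}\,dx}
     {\displaystyle\int_{-\sigma}^{\sigma} \bigl|e^{ixt_k}\bigr|^{2}\,dx}
 = \frac{1}{2\sigma}\int_{-\sigma}^{\sigma} e^{ix(t-t_k)}\,dx
 = \frac{\sin\bigl(\sigma(t-t_k)\bigr)}{\sigma\,(t-t_k)},
```

so (1.7) is precisely the WSK series (1.2).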
The relationship between both extensions of the WSK sampling theorem has been investigated extensively. Starting from a function theory approach, it was proved that if K(x, t), x ∈ I, t ∈ ℝ, satisfies some analyticity conditions, then Kramer’s sampling formula (1.7) turns out to be a Lagrange interpolation one; see also [14-16]. In another direction, it was shown that Kramer’s expansion (1.7) can be written as a Lagrange-type interpolation formula if K(x, t) and t_k are extracted from ordinary differential operators; see the survey and the references cited therein. The present work is a continuation of the second direction mentioned above. We prove that integral transforms associated with Dirac systems with an internal point of discontinuity can also be reconstructed in a sampling form of Lagrange interpolation type. We would like to mention that works in the direction of sampling associated with eigenproblems with an eigenparameter in the boundary conditions are few; see, e.g., [18-20]. Also, papers on sampling with discontinuous eigenproblems are few; see [21-24]. However, sampling theories associated with Dirac systems which contain an eigenparameter in the boundary conditions and at the same time have discontinuity conditions do not exist as far as we know. Our investigation would be the first in that direction, introducing a good example. To achieve our aim we briefly study the spectral analysis of the problem. Then we derive two sampling theorems using solutions and Green’s matrix, respectively.
2 The eigenvalue problem
In this section, we define our boundary value problem and state some of its properties. We consider the Dirac system
and transmission conditions
where ; the real-valued functions and are continuous in and and have finite limits and ; , ; and .
In  the authors discussed problem (2.1)-(2.5) but with the condition instead of (2.3). To formulate a theoretic approach to problem (2.1)-(2.5), we define the Hilbert space with an inner product, see [19,20],
where ⊤ denotes the matrix transpose,
, , . For convenience, we put
Equation (2.1) can be written as
For functions , which are defined on and have finite limit , by and , we denote the functions
which are defined on and , respectively.
In the following lemma, we prove that the eigenvalues of problem (2.1)-(2.5) are real.
Lemma 2.1The eigenvalues of problem (2.1)-(2.5) are real.
Proof Assume, to the contrary, that problem (2.1)-(2.5) has a nonreal eigenvalue, and let a corresponding (non-trivial) eigenfunction be fixed. By (2.1), we have
Integrating the above equation through and , we obtain
Then from (2.2), (2.3) and transmission conditions, we have, respectively,
Since , it follows from the last three equations and (2.11), (2.12) that
This contradicts the conditions imposed above. Consequently, the eigenvalue must be real. □
Let be the set of all such that , are absolutely continuous on , , and , , , . Define the operator by
Thus, the operator is symmetric in . Indeed, for ,
The operator and the eigenvalue problem (2.1)-(2.5) have the same eigenvalues; therefore, they are equivalent in this respect.
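Once symmetry is established, the reality of the eigenvalues (Lemma 2.1) also follows in one line of operator algebra. Writing T for the operator above and Y for an eigenvector with TY = λY, Y ≠ 0 (T and Y are generic labels here, not the paper's notation),

```latex
\lambda\,\langle Y,Y\rangle
  = \langle TY,Y\rangle
  = \langle Y,TY\rangle
  = \overline{\lambda}\,\langle Y,Y\rangle ,
```

so that (λ − λ̄)‖Y‖² = 0, and λ must be real.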
Lemma 2.2Letλandμbe two different eigenvalues of problem (2.1)-(2.5). Then the corresponding eigenfunctions and of this problem satisfy the following equality:
Proof Equation (2.15) follows immediately from the orthogonality of the corresponding eigenelements:
Now, we construct a special fundamental system of solutions of equation (2.1) for λ not being an eigenvalue. Let us consider the next initial value problem:
By virtue of Theorem 1.1 in , this problem has a unique solution , which is an entire function of for each fixed . Similarly, employing the same method as in the proof of Theorem 1.1 in , we see that the problem
has a unique solution , which is an entire function of parameter λ for each fixed .
Now the functions and are defined in terms of and , , respectively, as follows: The initial-value problem,
has a unique solution for each .
Similarly, the following problem also has a unique solution :
Let us construct two basic solutions of equation (2.1) as follows:
By virtue of equations (2.21) and (2.23), these solutions satisfy both transmission conditions (2.4) and (2.5). These functions are entire in λ for all .
Let denote the Wronskian of and defined in [, p.194], i.e.,
It is obvious that the Wronskians
are independent of x and are entire functions of λ. Taking into account (2.21) and (2.23), a short calculation gives
for each .
Corollary 2.3The zeros of the functions and coincide.
We may therefore define the characteristic function as
In the following lemma, we show that all eigenvalues of problem (2.1)-(2.5) are simple.
Lemma 2.4All eigenvalues of problem (2.1)-(2.5) are just zeros of the function . Moreover, every zero of has multiplicity one.
Proof Since the functions and satisfy the boundary condition (2.2) and both transmission conditions (2.4) and (2.5), to find the eigenvalues of problem (2.1)-(2.5) we have to insert these functions into the boundary condition (2.3) and find the roots of the resulting equation.
By (2.1) we obtain for , ,
Integrating the above equation through and , and taking into account the initial conditions (2.17), (2.21) and (2.23), we obtain
Dividing both sides of (2.29) by and letting , we arrive at the relation
We show that equation
has only simple roots. Assume the contrary, i.e., that equation (2.31) has a double root. Then the following two equations hold:
Since and is real, then . Let . From (2.32) and (2.33),
Combining (2.34) and (2.30) with , we obtain
contradicting the assumption . The other case, when , can be treated similarly and the proof is complete. □
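The structure of this argument (eigenvalues as the simple zeros of a characteristic function built from the fundamental solutions and the transmission conditions) can be illustrated on a toy model. The system below, u₁′ = λu₂, u₂′ = −λu₁ on [0, π] with u₁(0) = 0 = u₁(π) and the jump u₁(c⁺) = γu₁(c⁻), u₂(c⁺) = u₂(c⁻)/γ at c = 1, is an illustrative stand-in, not the paper's system (2.1)-(2.5):

```python
import math

def omega(lam, gamma=2.0, c=1.0):
    """Characteristic function of the toy problem: u1(pi) of the solution
    that starts as (sin(lam*x), cos(lam*x)) and is pushed through the jump."""
    p = gamma * math.sin(lam * c)        # u1(c+) after the transmission condition
    q = math.cos(lam * c) / gamma        # u2(c+) after the transmission condition
    d = math.pi - c
    # Closed-form propagation of (p, q) from c to pi for this constant system.
    return p * math.cos(lam * d) + q * math.sin(lam * d)

def eigenvalues(gamma, lo=0.05, hi=3.5, step=0.01):
    """Zeros of omega located by a sign-change scan plus bisection."""
    roots, x = [], lo
    while x + step <= hi:
        a, b = x, x + step
        if omega(a, gamma) * omega(b, gamma) < 0:
            for _ in range(80):
                m = 0.5 * (a + b)
                if omega(a, gamma) * omega(m, gamma) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
        x += step
    return roots

continuous = eigenvalues(1.0)   # gamma = 1 removes the discontinuity
perturbed = eigenvalues(2.0)    # gamma = 2 keeps the jump
```

For γ = 1 the jump disappears, omega(λ) reduces to sin(λπ), and the zeros found in (0.05, 3.5) are exactly 1, 2, 3; for γ ≠ 1 the transmission condition visibly shifts the spectrum.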
Let denote the sequence of zeros of . Then
are the corresponding eigenvectors of the operator . Since the operator is symmetric, it is easy to show that the following orthogonality relation holds:
Here is a sequence of eigen-vector-functions of (2.1)-(2.5) corresponding to the eigenvalues . We denote by the normalized eigenvectors of , i.e.,
Since satisfies (2.3)-(2.5), then the eigenvalues are also determined via
Therefore is another set of eigen-vector-functions, which is related to the previous one by
where are non-zero constants, since all eigenvalues are simple. Since the eigenvalues are all real, we can take the eigen-vector-functions to be real-valued.
Now we derive the asymptotic formulae of the eigenvalues and the eigen-vector-functions . We transform equations (2.1), (2.17), (2.21) and (2.24) into the integral equations, see , as follows:
For the following estimates hold uniformly with respect to x, , cf. [, p.55],
Now we find an asymptotic formula of the eigenvalues. Since the eigenvalues of the boundary value problem (2.1)-(2.5) coincide with the roots of the equation
then from the estimates (2.47), (2.48) and (2.49), we get
which can be written as
Then, from (2.45) and (2.46), equation (2.50) has the form
For large , equation (2.51) obviously has solutions which, as is not hard to see, have the form
Inserting these values in (2.51), we find that , i.e., . Thus we obtain the following asymptotic formula for the eigenvalues:
Using the formulae (2.53), we obtain the following asymptotic formulae for the eigen-vector-functions :
3 Green’s matrix and expansion theorem
Let , where , be a continuous vector-valued function. To study the completeness of the eigenvectors of , and hence the completeness of the eigen-vector-functions of (2.1)-(2.5), we derive Green’s function of problem (2.1)-(2.5) as well as the resolvent of . Indeed, let λ be not an eigenvalue of and consider the inhomogeneous problem
where I is the identity operator. Since
then we have
and the boundary conditions (2.2), (2.4) and (2.5), where λ is not an eigenvalue of problem (2.1)-(2.5).
Now, we can represent the general solution of (3.1) in the following form:
By the standard method of variation of constants applied to (3.3), the functions , and , satisfy the linear system of equations
Since λ is not an eigenvalue and , each of the linear systems in (3.4) and (3.5) has a unique solution, which leads to
where , , and are arbitrary constants, and
Substituting equations (3.6) and (3.7) into (3.3), we obtain the solution of (3.1)
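In generic terms (suppressing the transmission conditions and the concrete coefficients), the variation-of-constants computation behind (3.3)-(3.8) follows the standard pattern for first-order systems; here Φ(x, λ) denotes a fundamental matrix of the homogeneous system and the symbols are illustrative, not the paper's notation:

```latex
% Variation of constants for Y' = A(x,\lambda)Y + F on an interval:
Y(x) = \Phi(x,\lambda)\,c(x)
\;\Longrightarrow\;
\Phi(x,\lambda)\,c'(x) = F(x)
\;\Longrightarrow\;
Y(x) = \Phi(x,\lambda)\Bigl( c_{0}
        + \int_{x_{0}}^{x} \Phi^{-1}(s,\lambda)\,F(s)\,ds \Bigr).
```

The free vector c₀ is then fixed by the boundary conditions and, on each subinterval, by the transmission conditions.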
Then from (2.2), (3.2) and the transmission conditions (2.4) and (2.5), we get
Then (3.8) can be written as
which can be written as
Expanding (3.12) we obtain the concrete form
The matrix is called Green’s matrix of problem (2.1)-(2.5). Obviously, is a meromorphic function of λ, for every , which has simple poles only at the eigenvalues. Although Green’s matrix looks as simple as that of Dirac systems, cf., e.g., [25,26], it is rather complicated because of the transmission conditions (see the example at the end of this paper). Therefore
Lemma 3.1The operator is self-adjoint in .
Proof Since is a symmetric densely defined operator, it is sufficient to show that the deficiency spaces are the null spaces, and hence . Indeed, if and λ is a non-real number, then taking
implies that . Since satisfies the conditions (2.2)-(2.5), then . Now we prove that the inverse of exists. If , then
Since , we have . Thus , i.e., . Then , the resolvent operator of , exists. Thus
Take . The domains of and are exactly . Consequently, the ranges of and are also . Hence the deficiency spaces of are
Hence is self-adjoint. □
(i) For ,
(ii) For ,
the series being absolutely and uniformly convergent in the first component for on , and absolutely convergent in the second component.
4 The sampling theorems
The first sampling theorem of this section associated with the boundary value problem (2.1)-(2.5) is the following theorem.
Theorem 4.1Let . For , let
where is the solution defined above. Then is an entire function of exponential type that can be reconstructed from its values at the points via the sampling formula
The series (4.2) converges absolutely on ℂ and uniformly on any compact subset of ℂ, and is the entire function defined in (2.28).
Proof The relation (4.1) can be rewritten in the form
Since both and are in , then they have the Fourier expansions
where and are the Fourier coefficients
Applying Parseval’s identity to (4.3), we obtain
Now we calculate and of , . To prove expansion (4.2), we need to show that
Indeed, let and be fixed. By the definition of the inner product of , we have
From Green’s identity, see [, p.51], we have
Then (4.9) and the initial conditions (2.21) imply
From (2.40), (2.19) and (2.7), we have
Also, from (2.40) we have
Then from (2.26) and (4.12) we obtain
Substituting from (4.10), (4.11) and (4.13) into (4.8), we get
Letting in (4.14), since the zeros of are simple, we get
Since and are arbitrary, (4.14) and (4.15) hold for all and all . Therefore, from (4.14) and (4.15), we get (4.7). Hence (4.2) is proved with pointwise convergence on ℂ. Now we investigate the convergence of (4.2). First we prove that it is absolutely convergent on ℂ. Using the Cauchy-Schwarz inequality for ,
Since , , the two series on the right-hand side of (4.16) converge. Thus series (4.2) converges absolutely on ℂ. As for uniform convergence, let be compact, and let and . Define to be
Using the same method developed above, we get
Since is compact, we can find a positive constant such that
uniformly on M. In view of Parseval’s equality,
Thus uniformly on M. Hence (4.2) converges uniformly on M. Thus is an entire function. From the relation
and the fact that , , are entire functions of exponential type, we conclude that is of exponential type. □
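The shape of this sampling formula can be sanity-checked numerically on a toy stand-in (not the paper's exact transform (4.1)). For the Dirac-type model u₁′ = λu₂, u₂′ = −λu₁ on [0, π] with u₁(0) = 0 = u₁(π), the solution vector is φ(x, λ) = (sin λx, cos λx), the eigenvalues are λ_n = n for n ∈ ℤ, and the characteristic function is ω(λ) = sin(πλ) with ω′(n) = π(−1)ⁿ. The transform F(λ) = ∫₀^π [f₁ sin λx + f₂ cos λx] dx should then satisfy F(λ) = Σ_n F(n)·ω(λ)/((λ − n)ω′(n)):

```python
import math

# Components of the vector-valued "signal" f = (f1, f2) on [0, pi].
f1 = lambda x: x * (math.pi - x)
f2 = lambda x: math.sin(x)

def F(lam, n=2000):
    """F(lam) = int_0^pi [f1*sin(lam x) + f2*cos(lam x)] dx, composite Simpson."""
    h = math.pi / n
    def g(x):
        return f1(x) * math.sin(lam * x) + f2(x) * math.cos(lam * x)
    s = g(0.0) + g(math.pi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(k * h)
    return s * h / 3.0

lam = 2.5
direct = F(lam)
# Lagrange-type sampling series with omega(lam) = sin(pi*lam),
# omega'(n) = pi*(-1)**n, truncated to |n| <= 150.
series = sum(
    F(n) * math.sin(math.pi * lam) / ((lam - n) * math.pi * (-1) ** n)
    for n in range(-150, 151)
)
```

The truncated series agrees with the direct quadrature value at λ = 2.5 to well within the tolerance set by the truncation.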
Remark 4.2 To see that expansion (4.2) is a Lagrange-type interpolation, we may replace by the canonical product
From Hadamard’s factorization theorem, see , , where is an entire function with no zeros. Thus,
and (4.1), (4.2) remain valid for the function . Hence
We may redefine (4.1) by taking kernel to get
The next theorem gives vector-type interpolation sampling expansions associated with problem (2.1)-(2.5) for integral transforms whose kernels are defined in terms of Green’s matrix. As we see in (3.12), Green’s matrix of problem (2.1)-(2.5) has simple poles at . Define the function to be , where is a fixed point and is the function defined in (2.28) or the canonical product (4.22).
Theorem 4.3Let . Let be the vector-valued transform
Then is a vector-valued entire function of exponential type that admits the vector-valued sampling expansion
The vector-valued series (4.26) converges absolutely on ℂ and uniformly on compact subsets of ℂ. Here (4.26) means
where both series converge absolutely on ℂ and uniformly on compact sets of ℂ.
Proof The integral transform (4.25) can be written as
Applying Parseval’s identity to (4.28) with respect to , we obtain
Let be such that for . Since each is an eigenvector of , then