Research

# On sampling theories and discontinuous Dirac systems with eigenparameter in the boundary conditions

Mohammed M Tharwat

Author Affiliations

Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah, Saudi Arabia

Department of Mathematics, Faculty of Science, Beni-Suef University, Beni-Suef, Egypt

Boundary Value Problems 2013, 2013:65  doi:10.1186/1687-2770-2013-65

 Received: 8 November 2012 Accepted: 11 March 2013 Published: 29 March 2013

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

Sampling theory asserts that a function may be determined from its sampled values at certain points, provided the function satisfies certain conditions. In this paper we consider a Dirac system which contains an eigenparameter appearing linearly in one boundary condition, in addition to an internal point of discontinuity. We closely follow the analysis of Annaby and Tharwat (J. Appl. Math. Comput. 2010, doi:10.1007/s12190-010-0404-9) to establish the relations needed for the derivation of the sampling theorems, including the construction of Green’s matrix as well as the eigen-vector-function expansion theorem. We derive sampling representations for transforms whose kernels are either solutions or Green’s matrix of the problem. In the special case when our problem is continuous, the obtained results coincide with the corresponding results in Annaby and Tharwat (J. Appl. Math. Comput. 2010, doi:10.1007/s12190-010-0404-9).

MSC: 34L16, 94A20, 65L15.

##### Keywords:
Dirac systems; transmission conditions; eigenvalue parameter in the boundary conditions; discontinuous boundary value problems

### 1 Introduction

Sampling theory is one of the most powerful results in signal analysis. It is of great importance in signal processing to reconstruct (recover) a signal (function) from its values at a discrete sequence of points (samples). If this aim is achieved, then an analog (continuous) signal can be transformed into a digital (discrete) one and then recovered by the receiver. If the signal is band-limited, the sampling process can be done via the celebrated Whittaker, Shannon and Kotel’nikov (WSK) sampling theorem [1-3]. By a band-limited signal with band width σ, σ > 0, i.e., a signal which contains no frequencies higher than σ/2π cycles per second (cps), we mean a function in the Paley-Wiener space PW_σ² of entire functions of exponential type at most σ which are L²-functions when restricted to ℝ. In other words, f ∈ PW_σ² if there exists g ∈ L²(−σ, σ) such that, cf. [4,5],

f(t) = ∫_{−σ}^{σ} g(x) e^{ixt} dx,  t ∈ ℂ.  (1.1)

Now the WSK sampling theorem states [6,7]: if f ∈ PW_σ², then f is completely determined from its values at the points t_k = kπ/σ, k ∈ ℤ, by means of the formula

f(t) = Σ_{k=−∞}^{∞} f(t_k) sinc(σ(t − t_k)),  t ∈ ℂ,  (1.2)

where

sinc(t) = sin(t)/t for t ≠ 0, and sinc(0) = 1.  (1.3)

The sampling series (1.2) is absolutely and uniformly convergent on compact subsets of ℂ.
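As a quick numerical illustration of (1.2), the following Python snippet (a minimal sketch, not part of the paper; the test function and the truncation level K are chosen for convenience) reconstructs a band-limited function from its samples at the points t_k = kπ/σ:

```python
import numpy as np

def wsk_reconstruct(f, sigma, t, K=1000):
    """Truncated WSK series: sum over |k| <= K of f(k*pi/sigma) * sinc(sigma*(t - t_k))."""
    k = np.arange(-K, K + 1)
    tk = k * np.pi / sigma
    # np.sinc(x) = sin(pi*x)/(pi*x), so sinc(sigma*(t - tk)) = np.sinc(sigma*(t - tk)/pi)
    return float(np.sum(f(tk) * np.sinc(sigma * (t - tk) / np.pi)))

# f(t) = sin(pi*(t - 0.3)) / (pi*(t - 0.3)) is band-limited with band width sigma = pi
sigma = np.pi
f = lambda t: np.sinc(t - 0.3)

for t in (0.0, 0.77, 2.5):
    assert abs(wsk_reconstruct(f, sigma, t) - f(t)) < 1e-3
```

Note that NumPy's `np.sinc` is the normalized sinc sin(πx)/(πx), hence the division by π inside the call; the truncation error decays as the tail of the series, so K = 1000 already gives three to four correct digits here.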

The WSK sampling theorem has been generalized in many different ways. Here we are interested in two extensions. The first is concerned with replacing the equidistant sampling points by more general ones, which is very important from the practical point of view. The following theorem which is known in some literature as the Paley-Wiener theorem [5] gives a sampling theorem with a more general class of sampling points. Although the theorem in its final form may be attributed to Levinson [8] and Kadec [9], it could be named after Paley and Wiener who first derived the theorem in a more restrictive form; see [6,7,10] for more details.

The Paley-Wiener theorem states that if {t_k}, k ∈ ℤ, is a sequence of real numbers such that

sup_{k∈ℤ} |t_k − kπ/σ| < π/(4σ),  (1.4)

and G is the entire function defined by

G(t) = (t − t_0) ∏_{k=1}^{∞} (1 − t/t_k)(1 − t/t_{−k}),  (1.5)

then, for any function f of the form (1.1), we have

f(t) = Σ_{k∈ℤ} f(t_k) G(t) / (G′(t_k)(t − t_k)).  (1.6)

The series (1.6) converges uniformly on compact subsets of ℂ.

The WSK sampling theorem is a special case of this theorem because if we choose t_k = kπ/σ, then

G(t) = t ∏_{k=1}^{∞} (1 − (σt/(kπ))²) = sin(σt)/σ,  G′(t_k) = cos(kπ) = (−1)^k,

and (1.6) reduces to (1.2).
The sampling series (1.6) can be regarded as an extension of the classical Lagrange interpolation formula to ℝ for functions of exponential type. Therefore, (1.6) is called a Lagrange-type interpolation expansion.

The second extension of the WSK sampling theorem is the theorem of Kramer [11], which states: let I be a finite closed interval and let K(x, t), x ∈ I, t ∈ ℝ, be a function continuous in t such that K(·, t) ∈ L²(I) for all t ∈ ℝ. Let {t_k}_{k∈ℤ} be a sequence of real numbers such that {K(x, t_k)}_{k∈ℤ} is a complete orthogonal set in L²(I). Suppose that

f(t) = ∫_I g(x) K(x, t) dx,

where g ∈ L²(I). Then

f(t) = Σ_{k∈ℤ} f(t_k) (∫_I K(x, t) K̄(x, t_k) dx) / ‖K(·, t_k)‖²_{L²(I)}.  (1.7)

Series (1.7) converges uniformly wherever ‖K(·, t)‖_{L²(I)}, as a function of t, is bounded. In this theorem, sampling representations were given for integral transforms whose kernels are more general than exp(ixt). Also, Kramer’s theorem is a generalization of the WSK theorem: if we take K(x, t) = e^{ixt}, I = [−σ, σ] and t_k = kπ/σ, then (1.7) is (1.2).
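To see Kramer's theorem at work, the following sketch (illustrative only; the kernel, interval and density g are chosen for convenience, not taken from the paper) uses the kernel K(x, t) = sin(tx) on I = [0, π] with sampling points t_k = k, for which {sin(kx)}_{k≥1} is a complete orthogonal set in L²(0, π). With g(x) = x(π − x), both f(t) = ∫_0^π g(x) sin(tx) dx and its samples f(k) = 2(1 − (−1)^k)/k³ have closed forms, and the Kramer reconstruction functions reduce to S_k(t) = (−1)^k 2k sin(πt)/(π(t² − k²)):

```python
import numpy as np

def f(t):
    # f(t) = integral over [0, pi] of x*(pi - x)*sin(t*x) dx, in closed form (t != 0)
    return 2.0 * (1.0 - np.cos(np.pi * t)) / t**3 - np.pi * np.sin(np.pi * t) / t**2

def kramer_series(t, K=200):
    # Kramer expansion f(t) = sum_k f(k) * S_k(t) with
    # S_k(t) = (-1)^k * 2k * sin(pi*t) / (pi * (t^2 - k^2))
    k = np.arange(1, K + 1)
    fk = np.where(k % 2 == 1, 4.0 / k.astype(float) ** 3, 0.0)  # samples f(k) in closed form
    Sk = (-1.0) ** k * 2.0 * k * np.sin(np.pi * t) / (np.pi * (t * t - k * k))
    return float(np.sum(fk * Sk))

t = 0.6  # any non-integer point
assert abs(kramer_series(t) - f(t)) < 1e-6
```

The samples f(k) decay like 1/k³, so a short truncated series already reproduces the transform to six digits at non-integer points.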

The relationship between both extensions of the WSK sampling theorem has been investigated extensively. Starting from a function theory approach, cf. [12], it was proved in [13] that if the kernel K(x, t) satisfies some analyticity conditions, then Kramer’s sampling formula (1.7) turns out to be a Lagrange interpolation one; see also [14-16]. In another direction, it was shown that Kramer’s expansion (1.7) can be written as a Lagrange-type interpolation formula if the kernel and the sampling points are extracted from ordinary differential operators; see the survey [17] and the references cited therein. The present work is a continuation of the second direction mentioned above. We prove that integral transforms associated with Dirac systems with an internal point of discontinuity can also be reconstructed in a sampling form of Lagrange interpolation type. We would like to mention that works in the direction of sampling associated with eigenproblems with an eigenparameter in the boundary conditions are few; see, e.g., [18-20]. Papers on sampling with discontinuous eigenproblems are also few; see [21-24]. However, sampling theories associated with Dirac systems which contain an eigenparameter in the boundary conditions and at the same time have discontinuity conditions do not exist, as far as we know. Our investigation would be the first in this direction, and it introduces a good example. To achieve our aim, we briefly study the spectral analysis of the problem. Then we derive two sampling theorems using solutions and Green’s matrix, respectively.

### 2 The eigenvalue problem

In this section, we define our boundary value problem and state some of its properties. We consider the Dirac system

(2.1)

(2.2)

(2.3)

and transmission conditions

(2.4)

(2.5)

where ; the real-valued functions and are continuous in and and have finite limits and ; , ; and .

In [24] the authors discussed problem (2.1)-(2.5) but with the condition instead of (2.3). To formulate a theoretic approach to problem (2.1)-(2.5), we define the Hilbert space with an inner product, see [19,20],

(2.6)

where ⊤ denotes the matrix transpose,

, , . For convenience, we put

(2.7)

Equation (2.1) can be written as

(2.8)

where

(2.9)

For functions , which are defined on and have finite limit , by and , we denote the functions

(2.10)

which are defined on and , respectively.

In the following lemma, we prove that the eigenvalues of problem (2.1)-(2.5) are real.

Lemma 2.1The eigenvalues of problem (2.1)-(2.5) are real.

Proof Assume, to the contrary, that is a nonreal eigenvalue of problem (2.1)-(2.5). Let be a corresponding (non-trivial) eigenfunction. By (2.1), we have

Integrating the above equation through and , we obtain

(2.11)

(2.12)

Then from (2.2), (2.3) and transmission conditions, we have, respectively,

and

Since , it follows from the last three equations and (2.11), (2.12) that

(2.13)

This contradicts the conditions and . Consequently, must be real. □

Let be the set of all such that , are absolutely continuous on , , and , , , . Define the operator by

(2.14)

Thus, the operator is symmetric in . Indeed, for ,

The operator and the eigenvalue problem (2.1)-(2.5) have the same eigenvalues; hence they are equivalent in this respect.

Lemma 2.2Letλandμbe two different eigenvalues of problem (2.1)-(2.5). Then the corresponding eigenfunctionsandof this problem satisfy the following equality:

(2.15)

Proof Equation (2.15) follows immediately from the orthogonality of the corresponding eigenelements:

□

Now, we construct a special fundamental system of solutions of equation (2.1) for λ not being an eigenvalue. Let us consider the next initial value problem:

(2.16)

(2.17)

By virtue of Theorem 1.1 in [25], this problem has a unique solution , which is an entire function of for each fixed . Similarly, employing the same method as in the proof of Theorem 1.1 in [25], we see that the problem

(2.18)

(2.19)

has a unique solution , which is an entire function of parameter λ for each fixed .

Now the functions and are defined in terms of and , , respectively, as follows: The initial-value problem,

(2.20)

(2.21)

has a unique solution for each .

Similarly, the following problem also has a unique solution :

(2.22)

(2.23)

Let us construct two basic solutions of equation (2.1) as follows:

where

(2.24)

(2.25)

Then

(2.26)

By virtue of equations (2.21) and (2.23), these solutions satisfy both transmission conditions (2.4) and (2.5). These functions are entire in λ for all .

Let denote the Wronskian of and defined in [[26], p.194], i.e.,

It is obvious that the Wronskians

(2.27)

are independent of x and are entire functions of λ. Taking into account (2.21) and (2.23), a short calculation gives

for each .

Corollary 2.3The zeros of the functionsandcoincide.

Hence we may define the characteristic function as

(2.28)

In the following lemma, we show that all eigenvalues of problem (2.1)-(2.5) are simple.

Lemma 2.4All eigenvalues of problem (2.1)-(2.5) are just zeros of the function. Moreover, every zero ofhas multiplicity one.

Proof Since the functions and satisfy the boundary condition (2.2) and both transmission conditions (2.4) and (2.5), to find the eigenvalues of problem (2.1)-(2.5) we have to insert the functions and in the boundary condition (2.3) and find the roots of the resulting equation.

By (2.1) we obtain for , ,

Integrating the above equation through and , and taking into account the initial conditions (2.17), (2.21) and (2.23), we obtain

(2.29)

Dividing both sides of (2.29) by and letting , we arrive at the relation

(2.30)

We show that equation

(2.31)

has only simple roots. Assume the converse, i.e., equation (2.31) has a double root , say. Then the following two equations hold:

(2.32)

(2.33)

Since and is real, then . Let . From (2.32) and (2.33),

(2.34)

Combining (2.34) and (2.30) with , we obtain

(2.35)

contradicting the assumption . The other case, when , can be treated similarly and the proof is complete. □

Let denote the sequence of zeros of . Then

(2.36)

are the corresponding eigenvectors of the operator . Since is symmetric, then it is easy to show that the following orthogonality relation holds:

(2.37)

Here is a sequence of eigen-vector-functions of (2.1)-(2.5) corresponding to the eigenvalues . We denote by the normalized eigenvectors of , i.e.,

(2.38)

Since satisfies (2.3)-(2.5), then the eigenvalues are also determined via

(2.39)

Therefore is another set of eigen-vector-functions which is related by with

(2.40)

where are non-zero constants, since all eigenvalues are simple. Since the eigenvalues are all real, we can take the eigen-vector-functions to be real-valued.

Now we derive the asymptotic formulae of the eigenvalues and the eigen-vector-functions . We transform equations (2.1), (2.17), (2.21) and (2.24) into the integral equations, see [26], as follows:

(2.41)

(2.42)

(2.43)

(2.44)

For the following estimates hold uniformly with respect to x, , cf. [[25], p.55],

(2.45)

(2.46)

(2.47)

(2.48)

Now we find an asymptotic formula of the eigenvalues. Since the eigenvalues of the boundary value problem (2.1)-(2.5) coincide with the roots of the equation

(2.49)

then from the estimates (2.47), (2.48) and (2.49), we get

which can be written as

(2.50)

Then, from (2.45) and (2.46), equation (2.50) has the form

(2.51)

For large , equation (2.51) obviously has solutions which, as is not hard to see, have the form

(2.52)

Inserting these values in (2.51), we find that , i.e., . Thus we obtain the following asymptotic formula for the eigenvalues:

(2.53)

Using the formulae (2.53), we obtain the following asymptotic formulae for the eigen-vector-functions :

(2.54)

where

(2.55)

### 3 Green’s matrix and expansion theorem

Let , where , be a continuous vector-valued function. To study the completeness of the eigenvectors of , and hence the completeness of the eigen-vector-functions of (2.1)-(2.5), we derive Green’s function of problem (2.1)-(2.5) as well as the resolvent of . Indeed, let λ be not an eigenvalue of and consider the inhomogeneous problem

where I is the identity operator. Since

then we have

(3.1)

(3.2)

and the boundary conditions (2.2), (2.4) and (2.5), where λ is not an eigenvalue of problem (2.1)-(2.5).

Now, we can represent the general solution of (3.1) in the following form:

(3.3)

Applying the standard method of variation of constants to (3.3), we find that the functions , and , satisfy the linear system of equations

(3.4)

and

(3.5)

Since λ is not an eigenvalue and , each of the linear systems (3.4) and (3.5) has a unique solution, which leads to

(3.6)

(3.7)

where , , and are arbitrary constants, and

Substituting equations (3.6) and (3.7) into (3.3), we obtain the solution of (3.1)

(3.8)

Then from (2.2), (3.2) and the transmission conditions (2.4) and (2.5), we get

Then (3.8) can be written as

(3.9)

where

(3.10)

which can be written as

(3.11)

where

(3.12)

Expanding (3.12) we obtain the concrete form

(3.13)

The matrix is called Green’s matrix of problem (2.1)-(2.5). Obviously, is a meromorphic function of λ, for every , which has simple poles only at the eigenvalues. Although Green’s matrix looks as simple as that of Dirac systems, cf., e.g., [25,26], it is rather complicated because of the transmission conditions (see the example at the end of this paper). Therefore

(3.14)

Lemma 3.1 The operator defined by (2.14) is self-adjoint.

Proof Since is a symmetric densely defined operator, then it is sufficient to show that the deficiency spaces are the null spaces, and hence . Indeed, if and λ is a non-real number, then taking

implies that . Since satisfies the conditions (2.2)-(2.5), then . Now we prove that the inverse of exists. If , then

Since , we have . Thus , i.e., . Then , the resolvent operator of , exists. Thus

Take . The domains of and are exactly . Consequently, the ranges of and are also . Hence the deficiency spaces of are

The next theorem is an eigenfunction expansion theorem. The proof closely follows that of Levitan and Sargsjan derived in [[25], pp.67-77]; see also [26-29].

Theorem 3.2

(i) For,

(3.15)

(ii) For,

(3.16)

the series being absolutely and uniformly convergent in the first component for on, and absolutely convergent in the second component.

### 4 The sampling theorems

The first sampling theorem of this section associated with the boundary value problem (2.1)-(2.5) is the following theorem.

Theorem 4.1Let. For, let

(4.1)

whereis the solution defined above. Thenis an entire function of exponential type that can be reconstructed from its values at the pointsvia the sampling formula

(4.2)

The series (4.2) converges absolutely onand uniformly on any compact subset of ℂ, andis the entire function defined in (2.28).

Proof The relation (4.1) can be rewritten in the form

(4.3)

where

Since both and are in , then they have the Fourier expansions

(4.4)

where and are the Fourier coefficients

(4.5)

Applying Parseval’s identity to (4.3), we obtain

(4.6)

Now we calculate and of , . To prove expansion (4.2), we need to show that

(4.7)

Indeed, let and be fixed. By the definition of the inner product of , we have

(4.8)

From Green’s identity, see [[25], p.51], we have

(4.9)

Then (4.9) and the initial conditions (2.21) imply

(4.10)

From (2.40), (2.19) and (2.7), we have

(4.11)

Also, from (2.40) we have

(4.12)

Then from (2.26) and (4.12) we obtain

(4.13)

Substituting from (4.10), (4.11) and (4.13) into (4.8), we get

(4.14)

Letting in (4.14), since the zeros of are simple, we get

(4.15)

Since and are arbitrary, (4.14) and (4.15) hold for all and all . Therefore, from (4.14) and (4.15), we get (4.7). Hence (4.2) is proved with pointwise convergence on ℂ. Now we investigate the convergence of (4.2). First we prove that it converges absolutely on ℂ. Using the Cauchy-Schwarz inequality, for ,

(4.16)

Since , , then the two series on the right-hand side of (4.16) converge. Thus series (4.2) converges absolutely on ℂ. As for uniform convergence, let be compact. Let and . Define to be

(4.17)

Using the same method developed above, we get

(4.18)

Therefore

(4.19)

Since is compact, then we can find a positive constant such that

(4.20)

Then

(4.21)

uniformly on M. In view of Parseval’s equality,

Thus uniformly on M. Hence (4.2) converges uniformly on M. Thus is an entire function. From the relation

and the fact that , , are entire functions of exponential type, we conclude that is of exponential type. □

Remark 4.2 To see that expansion (4.2) is a Lagrange-type interpolation, we may replace by the canonical product

(4.22)

From Hadamard’s factorization theorem, see [4], , where is an entire function with no zeros. Thus,

and (4.1), (4.2) remain valid for the function . Hence

(4.23)

We may redefine (4.1) by taking kernel to get

(4.24)

The next theorem is devoted to giving vector-type interpolation sampling expansions associated with problem (2.1)-(2.5) for integral transforms whose kernels are defined in terms of Green’s matrix. As we see in (3.12), Green’s matrix of problem (2.1)-(2.5) has simple poles at . Define the function to be , where is a fixed point and is the function defined in (2.28) or the canonical product (4.22).

Theorem 4.3Let. Letbe the vector-valued transform

(4.25)

Thenis a vector-valued entire function of exponential type that admits the vector-valued sampling expansion

(4.26)

The vector-valued series (4.26) converges absolutely onand uniformly on compact subsets of ℂ. Here (4.26) means

(4.27)

where both series converge absolutely onand uniformly on compact sets of ℂ.

Proof The integral transform (4.25) can be written as

(4.28)

Applying Parseval’s identity to (4.28) with respect to , we obtain

(4.29)

Let be such that for . Since each is an eigenvector of , then

Thus

(4.30)

From (3.14) and (4.30) we obtain

(4.31)

Then from (2.26) and (2.40) in (4.31), we get

(4.32)

Hence equation (4.32) can be rewritten as

(4.33)

The definition of implies

(4.34)

Moreover, from (3.12) we have

(4.35)

Then from (4.35), (2.26) and (2.40) in (4.34), we obtain

(4.36)

Combining (4.36) and (4.33) yields

(4.37)

Taking the limit when in (4.28), we get

(4.38)

Making use of (4.37), we may rewrite (4.38) as, ,

(4.39)

The interchange of the limit and summation is justified by the asymptotic behavior of and that of . If and , then (4.39) gives

(4.40)

Combining (4.37), (4.40) and (4.29), we get (4.28) under the assumption that and for all n. If , for some or 2, the same expansions hold with . The convergence properties as well as the analytic and growth properties can be established as in Theorem 4.1 above. □

Now we derive an example illustrating the previous results.

Example 4.1

Consider the system

(4.41)

(4.42)

(4.43)

and transmission conditions

(4.44)

(4.45)

This problem is a special case of problem (2.1)-(2.5) when and , . Then . For simplicity, we define

In the notations of the above section, the solutions and are

(4.46)

(4.47)

where

The eigenvalues are the solutions of the equation

which can be rewritten as

(4.48)

Green’s function of problem (4.41)-(4.45) is given by

(4.49)

By Theorem 4.1, the transform

(4.50)

has the following expansion:

(4.51)

where are the zeros of (4.48). In view of Theorem 4.3, the vector-valued transform

(4.52)

has the following vector-valued expansion:

(4.53)

It should be noted that, for arbitrary choices of , α, β, the eigenvalues of problem (4.41)-(4.45) cannot always be computed explicitly. Hence the eigenvalues are the points of ℝ which satisfy

(4.54)

This is illustrated in Figures 1 and 2.
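In practice such eigenvalues are located numerically as the real zeros of the characteristic determinant. The sketch below is illustrative only: `delta` is a stand-in transcendental function of the same general type as (4.48), not the actual characteristic function of problem (4.41)-(4.45), whose form depends on α, β and the transmission data. The routine brackets sign changes on a grid and refines each root by bisection:

```python
import numpy as np

def delta(lam):
    # stand-in characteristic function: zeros of sin(2*lam) + lam*cos(2*lam),
    # i.e., the roots of the transcendental equation tan(2*lam) = -lam
    return np.sin(2.0 * lam) + lam * np.cos(2.0 * lam)

def eigenvalues(a, b, n_grid=20000, tol=1e-12):
    """Bracket sign changes of delta on [a, b], then refine each root by bisection."""
    lam = np.linspace(a, b, n_grid)
    vals = delta(lam)
    roots = []
    for i in np.where(vals[:-1] * vals[1:] < 0)[0]:
        lo, hi = lam[i], lam[i + 1]
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if delta(lo) * delta(mid) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

ev = eigenvalues(0.1, 10.0)
assert all(abs(delta(r)) < 1e-8 for r in ev)
```

Grid-based bracketing can miss double roots or closely spaced pairs; since Lemma 2.4 guarantees that all eigenvalues are simple, a sufficiently fine grid suffices for problems of this type.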

### Competing interests

The author declares that he has no competing interests.

### Acknowledgements

This article was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. The author, therefore, acknowledges with thanks DSR technical and financial support.

### References

1. Kotel’nikov, V: On the carrying capacity of the “ether” and wire in telecommunications. Material for the first all-union conference on questions of communications. Izd. Red. Upr. Svyazi RKKA 55, 55–64 (1933) (in Russian)

2. Shannon, C: Communication in the presence of noise. Proc. IRE 37, 10–21 (1949)

3. Whittaker, E: On the functions which are represented by the expansion of the interpolation theory. Proc. R. Soc. Edinb., Sect. A, Math. 35, 181–194 (1915)

4. Boas, R: Entire Functions. Academic Press, New York (1954)

5. Paley, R, Wiener, N: Fourier Transforms in the Complex Domain. Am. Math. Soc., Providence (1934)

6. Higgins, JR: Sampling Theory in Fourier and Signal Analysis: Foundations. Oxford University Press, Oxford (1996)

7. Zayed, AI: Advances in Shannon’s Sampling Theory. CRC Press, Boca Raton (1993)

8. Levinson, N: Gap and Density Theorems. Am. Math. Soc., Providence (1940)

9. Kadec, MI: The exact value of the Paley-Wiener constant. Sov. Math. Dokl. 5, 559–561 (1964)

10. Hinsen, G: Irregular sampling of bandlimited L^p-functions. J. Approx. Theory 72, 346–364 (1993)

11. Kramer, HP: A generalized sampling theorem. J. Math. Phys. 38, 68–72 (1959)

12. Everitt, WN, Hayman, WK, Nasri-Roudsari, G: On the representation of holomorphic functions by integrals. Appl. Anal. 65, 95–102 (1997)

13. Everitt, WN, Nasri-Roudsari, G, Rehberg, J: A note on the analytic form of the Kramer sampling theorem. Results Math. 34, 310–319 (1998)

14. Everitt, WN, Garcia, AG, Hernández-Medina, MA: On Lagrange-type interpolation series and analytic Kramer kernels. Results Math. 51, 215–228 (2008)

15. Garcia, AG, Littlejohn, LL: On analytic sampling theory. J. Comput. Appl. Math. 171, 235–246 (2004)

16. Higgins, JR: A sampling principle associated with Saitoh’s fundamental theory of linear transformations. In: Saitoh, S, Hayashi, N, Yamamoto, M (eds.) Analytic Extension Formulas and Their Applications. Kluwer Academic, Norwell (2001)

17. Everitt, WN, Nasri-Roudsari, G: Interpolation and sampling theories, and linear ordinary boundary value problems. In: Higgins, JR, Stens, RL (eds.) Sampling Theory in Fourier and Signal Analysis: Advanced Topics, Chapter 5. Oxford University Press, Oxford (1999)

18. Annaby, MH, Freiling, G: Sampling integrodifferential transforms arising from second order differential operators. Math. Nachr. 216, 25–43 (2000)

19. Annaby, MH, Tharwat, MM: On sampling theory and eigenvalue problems with an eigenparameter in the boundary conditions. SUT J. Math. 42, 157–176 (2006)

20. Annaby, MH, Tharwat, MM: On sampling and Dirac systems with eigenparameter in the boundary conditions. J. Appl. Math. Comput. (2010). doi:10.1007/s12190-010-0404-9

21. Annaby, MH, Freiling, G: A sampling theorem for transformations with discontinuous kernels. Appl. Anal. 83, 1053–1075 (2004)

22. Annaby, MH, Freiling, G, Zayed, AI: Discontinuous boundary-value problems: expansion and sampling theorems. J. Integral Equ. Appl. 16, 1–23 (2004)

23. Tharwat, MM: Discontinuous Sturm-Liouville problems and associated sampling theories. Abstr. Appl. Anal. (2011). doi:10.1155/2011/610232

24. Tharwat, MM, Yildirim, A, Bhrawy, AH: Sampling of discontinuous Dirac systems. Numer. Funct. Anal. Optim. 34(3), 323–348 (2013)

25. Levitan, BM, Sargsjan, IS: Introduction to Spectral Theory: Selfadjoint Ordinary Differential Operators. Am. Math. Soc., Providence (1975)

26. Levitan, BM, Sargsjan, IS: Sturm-Liouville and Dirac Operators. Kluwer Academic, Dordrecht (1991)

27. Fulton, CT: Two-point boundary value problems with eigenvalue parameter contained in the boundary conditions. Proc. R. Soc. Edinb. A 77, 293–308 (1977)

28. Hinton, DB: An expansion theorem for an eigenvalue problem with eigenvalue parameters in the boundary conditions. Q. J. Math. 30, 33–42 (1979)

29. Wray, SD: Absolutely convergent expansions associated with a boundary-value problem with the eigenvalue parameter contained in one boundary condition. Czechoslov. Math. J. 32(4), 608–622 (1982)