Abstract
Sampling theory says that a function may be determined from its sampled values at certain points provided the function satisfies certain conditions. In this paper we consider a Dirac system which contains an eigenparameter appearing linearly in one boundary condition, in addition to an internal point of discontinuity. We closely follow the analysis derived by Annaby and Tharwat (J. Appl. Math. Comput. 2010, doi:10.1007/s12190-010-0404-9) to establish the relations needed for the derivation of the sampling theorems, including the construction of Green’s matrix as well as the eigenvector-function expansion theorem. We derive sampling representations for transforms whose kernels are either solutions or the Green’s matrix of the problem. In the special case when our problem is continuous, the obtained results coincide with the corresponding results in Annaby and Tharwat (J. Appl. Math. Comput. 2010, doi:10.1007/s12190-010-0404-9).
MSC: 34L16, 94A20, 65L15.
Keywords:
Dirac systems; transmission conditions; eigenvalue parameter in the boundary conditions; discontinuous boundary value problems

1 Introduction
Sampling theory is one of the most powerful tools in signal analysis. It is of great importance in signal processing to reconstruct (recover) a signal (function) from its values at a discrete sequence of points (samples). If this aim is achieved, then an analog (continuous) signal can be transformed into a digital (discrete) one and then recovered by the receiver. If the signal is band-limited, the sampling process can be done via the celebrated Whittaker-Shannon-Kotel’nikov (WSK) sampling theorem [1-3]. By a band-limited signal with band width $\sigma$, $\sigma>0$, i.e., a signal which contains no frequencies higher than $\sigma/2\pi$ cycles per second (cps), we mean a function in the Paley-Wiener space $PW_{\sigma}^{2}$ of entire functions of exponential type at most $\sigma$ which are $L^{2}(\mathbb{R})$-functions when restricted to $\mathbb{R}$. In other words, $f\in PW_{\sigma}^{2}$ if there exists $g\in L^{2}(-\sigma,\sigma)$ such that, cf. [4,5],

$$f(t)=\int_{-\sigma}^{\sigma}g(x)e^{ixt}\,dx,\qquad t\in\mathbb{C}. \tag{1.1}$$
The WSK sampling theorem states [6,7]: if $f\in PW_{\sigma}^{2}$, then it is completely determined from its values at the points $t_{n}=n\pi/\sigma$, $n\in\mathbb{Z}$, by means of the formula

$$f(t)=\sum_{n=-\infty}^{\infty}f(t_{n})\,\operatorname{sinc}\sigma(t-t_{n}),\qquad t\in\mathbb{C}, \tag{1.2}$$

where

$$\operatorname{sinc} t=\begin{cases}\dfrac{\sin t}{t}, & t\neq 0,\\[2pt] 1, & t=0.\end{cases}$$
The sampling series (1.2) is absolutely and uniformly convergent on compact subsets of ℂ.
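As a quick numerical illustration of formula (1.2) (our own addition, not part of the original analysis), the cardinal series can be truncated and evaluated directly; the helper name `wsk_reconstruct` and the test signal below are illustrative choices, assuming numpy is available:

```python
import numpy as np

def wsk_reconstruct(samples, n_range, t, sigma=np.pi):
    """Truncated WSK cardinal series: rebuild f from samples f(n*pi/sigma).

    np.sinc(x) is sin(pi*x)/(pi*x), so np.sinc(sigma*t/pi - n) equals
    sinc(sigma*(t - n*pi/sigma)) in the notation of (1.2)."""
    t = np.asarray(t, dtype=float)
    total = np.zeros_like(t)
    for n, fn in zip(n_range, samples):
        total += fn * np.sinc(sigma * t / np.pi - n)
    return total

# Test signal band-limited to [-pi, pi]; with sigma = pi the sampling
# points t_n = n*pi/sigma are simply the integers.
f = lambda t: np.sinc(t - 0.5)
n = np.arange(-200, 201)          # truncation of the infinite series
t = np.linspace(-2.7, 2.7, 9)
approx = wsk_reconstruct(f(n), n, t)
print(np.max(np.abs(approx - f(t))))  # small truncation error
```

The discrepancy comes only from truncating the series; it shrinks as the sample range grows.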
The WSK sampling theorem has been generalized in many different ways. Here we are interested in two extensions. The first is concerned with replacing the equidistant sampling points by more general ones, which is very important from the practical point of view. The following theorem, known in some of the literature as the Paley-Wiener theorem [5], gives a sampling theorem for a more general class of sampling points. Although the theorem in its final form may be attributed to Levinson [8] and Kadec [9], it could be named after Paley and Wiener, who first derived the theorem in a more restrictive form; see [6,7,10] for more details.
The Paley-Wiener theorem states that if $\{t_{n}\}$, $n\in\mathbb{Z}$, is a sequence of real numbers such that

$$\sup_{n\in\mathbb{Z}}\Bigl|t_{n}-\frac{n\pi}{\sigma}\Bigr|<\frac{\pi}{4\sigma},$$

and G is the entire function defined by

$$G(t):=(t-t_{0})\prod_{n=1}^{\infty}\Bigl(1-\frac{t}{t_{n}}\Bigr)\Bigl(1-\frac{t}{t_{-n}}\Bigr),$$

then, for any function of the form (1.1), we have

$$f(t)=\sum_{n=-\infty}^{\infty}f(t_{n})\,\frac{G(t)}{G'(t_{n})(t-t_{n})},\qquad t\in\mathbb{C}. \tag{1.6}$$
The series (1.6) converges uniformly on compact subsets of ℂ.
The WSK sampling theorem is a special case of this theorem because if we choose $t_{n}=n\pi/\sigma$, then $G(t)=\frac{\sin\sigma t}{\sigma}$ and $G'(t_{n})=(-1)^{n}$, so that (1.6) reduces to (1.2).
The sampling series (1.6) can be regarded as an extension of the classical Lagrange interpolation formula to ℝ for functions of exponential type. Therefore, (1.6) is called a Lagrange-type interpolation expansion.
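The equidistant specialization mentioned above can be checked numerically: taking $\sigma=\pi$, so the nodes are the integers, the truncated canonical product should approach $\sin(\pi t)/\pi$. This is our own hedged sketch (the function name `G_truncated` and the truncation level are illustrative), assuming numpy:

```python
import numpy as np

def G_truncated(t, N=100_000):
    """Truncated canonical product with equidistant nodes t_n = n:
    t * prod_{n=1}^{N} (1 - t^2/n^2), which tends to sin(pi t)/pi."""
    n = np.arange(1, N + 1, dtype=float)
    return t * np.prod(1.0 - (t * t) / (n * n))

for t in (0.25, 0.5, 1.3):
    # compare the truncated product with the closed form sin(pi t)/pi
    print(t, G_truncated(t), np.sin(np.pi * t) / np.pi)
```

The slow (roughly $t^{2}/N$) convergence of the product is why the closed form, rather than the product, is used in practice.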
The second extension of the WSK sampling theorem is the theorem of Kramer [11], which states the following: let I be a finite closed interval and let $K(\cdot,t)$ be a kernel continuous in t such that $K(\cdot,t)\in L^{2}(I)$ for all $t\in\mathbb{R}$. Let $\{t_{n}\}_{n\in\mathbb{Z}}$ be a sequence of real numbers such that $\{K(\cdot,t_{n})\}_{n\in\mathbb{Z}}$ is a complete orthogonal set in $L^{2}(I)$. Suppose that

$$f(t)=\int_{I}g(x)K(x,t)\,dx,\qquad g\in L^{2}(I).$$

Then

$$f(t)=\sum_{n=-\infty}^{\infty}f(t_{n})\,\frac{\int_{I}K(x,t)\overline{K(x,t_{n})}\,dx}{\|K(\cdot,t_{n})\|^{2}_{L^{2}(I)}}. \tag{1.7}$$

Series (1.7) converges uniformly wherever $\|K(\cdot,t)\|_{L^{2}(I)}$, as a function of t, is bounded. In this theorem, sampling representations are given for integral transforms whose kernels are more general than $e^{ixt}$. Thus Kramer’s theorem is a generalization of the WSK theorem: if we take $K(x,t)=e^{ixt}$, $I=[-\sigma,\sigma]$, $t_{n}=n\pi/\sigma$, then (1.7) reduces to (1.2).
The relationship between both extensions of the WSK sampling theorem has been investigated extensively. Starting from a function-theory approach, cf. [12], it was proved in [13] that if the kernel $K(x,t)$ satisfies some analyticity conditions, then Kramer’s sampling formula (1.7) turns out to be a Lagrange interpolation one; see also [14-16]. In another direction, it was shown that Kramer’s expansion (1.7) could be written as a Lagrange-type interpolation formula if the kernel and the sampling points were extracted from ordinary differential operators; see the survey [17] and the references cited therein. The present work is a continuation of the second direction mentioned above. We prove that integral transforms associated with Dirac systems with an internal point of discontinuity can also be reconstructed in a sampling form of Lagrange interpolation type. We would like to mention that works in the direction of sampling associated with eigenproblems with an eigenparameter in the boundary conditions are few; see, e.g., [18-20]. Papers on sampling with discontinuous eigenproblems are also few; see [21-24]. However, sampling theories associated with Dirac systems which contain an eigenparameter in the boundary conditions and at the same time have discontinuity conditions do not exist as far as we know. Our investigation would be the first in that direction, introducing a good example. To achieve our aim we briefly study the spectral analysis of the problem. Then we derive two sampling theorems using solutions and Green’s matrix, respectively.
2 The eigenvalue problem
In this section, we define our boundary value problem and state some of its properties. We consider the Dirac system
and transmission conditions
where ; the real-valued functions and are continuous in and and have finite limits and ; , ; and .
In [24] the authors discussed problem (2.1)-(2.5) but with the condition instead of (2.3). To formulate a theoretic approach to problem (2.1)-(2.5), we define the Hilbert space with an inner product, see [19,20],
where ⊤ denotes the matrix transpose,
Equation (2.1) can be written as
where
For functions , which are defined on and have finite limit , by and , we denote the functions
which are defined on and , respectively.
In the following lemma, we prove that the eigenvalues of problem (2.1)-(2.5) are real.
Lemma 2.1 The eigenvalues of problem (2.1)-(2.5) are real.
Proof Assume, to the contrary, that is a non-real eigenvalue of problem (2.1)-(2.5). Let be a corresponding (non-trivial) eigenfunction. By (2.1), we have
Integrating the above equation through and , we obtain
Then from (2.2), (2.3) and transmission conditions, we have, respectively,
and
Since , it follows from the last three equations and (2.11), (2.12) that
This contradicts the conditions and . Consequently, must be real. □
Let be the set of all such that , are absolutely continuous on , , and , , , . Define the operator by
Thus, the operator is symmetric in . Indeed, for ,
The operator and the eigenvalue problem (2.1)-(2.5) have the same eigenvalues; hence they are equivalent in this respect.
Lemma 2.2 Let λ and μ be two different eigenvalues of problem (2.1)-(2.5). Then the corresponding eigenfunctions and of this problem satisfy the following equality:
Proof Equation (2.15) follows immediately from the orthogonality of the corresponding eigenelements:
□
Now, we construct a special fundamental system of solutions of equation (2.1) for λ not an eigenvalue. Let us consider the following initial-value problem:
By virtue of Theorem 1.1 in [25], this problem has a unique solution , which is an entire function of for each fixed . Similarly, employing the same method as in the proof of Theorem 1.1 in [25], we see that the problem
has a unique solution , which is an entire function of parameter λ for each fixed .
Now the functions and are defined in terms of and , , respectively, as follows: The initial-value problem,
has a unique solution for each .
Similarly, the following problem also has a unique solution :
Let us construct two basic solutions of equation (2.1) as follows:
where
Then
By virtue of equations (2.21) and (2.23), these solutions satisfy both transmission conditions (2.4) and (2.5). These functions are entire in λ for all .
Let denote the Wronskian of and defined in [[26], p.194], i.e.,
It is obvious that the Wronskians
are independent of and are entire functions. Taking into account (2.21) and (2.23), a short calculation gives
Corollary 2.3 The zeros of the functions and coincide.
Then we may take into consideration the characteristic function as
In the following lemma, we show that all eigenvalues of problem (2.1)-(2.5) are simple.
Lemma 2.4 All eigenvalues of problem (2.1)-(2.5) are just zeros of the function . Moreover, every zero of has multiplicity one.
Proof Since the functions and satisfy the boundary condition (2.2) and both transmission conditions (2.4) and (2.5), to find the eigenvalues of problem (2.1)-(2.5) we have to insert the functions and into the boundary condition (2.3) and find the roots of this equation.
Integrating the above equation through and , and taking into account the initial conditions (2.17), (2.21) and (2.23), we obtain
Dividing both sides of (2.29) by and letting , we arrive at the relation
We show that equation
has only simple roots. Suppose, to the contrary, that equation (2.31) has a double root , say. Then the following two equations hold:
Since and is real, then . Let . From (2.32) and (2.33),
Combining (2.34) and (2.30) with , we obtain
contradicting the assumption . The other case, when , can be treated similarly and the proof is complete. □
Let denote the sequence of zeros of . Then
are the corresponding eigenvectors of the operator . Since is symmetric, it is easy to show that the following orthogonality relation holds:
Here is a sequence of eigenvector-functions of (2.1)-(2.5) corresponding to the eigenvalues . We denote by the normalized eigenvectors of , i.e.,
Since satisfies (2.3)-(2.5), the eigenvalues are also determined via
Therefore is another set of eigenvector-functions which is related by with
where are nonzero constants, since all eigenvalues are simple. Since the eigenvalues are all real, we can take the eigenvector-functions to be real-valued.
Now we derive the asymptotic formulae of the eigenvalues and the eigenvector-functions . We transform equations (2.1), (2.17), (2.21) and (2.24) into integral equations, see [26], as follows:
For the following estimates hold uniformly with respect to x, , cf. [[25], p.55],
Now we find an asymptotic formula of the eigenvalues. Since the eigenvalues of the boundary value problem (2.1)-(2.5) coincide with the roots of the equation
then from the estimates (2.47), (2.48) and (2.49), we get
which can be written as
Then, from (2.45) and (2.46), equation (2.50) has the form
For large , equation (2.51) obviously has solutions which, as is not hard to see, have the form
Inserting these values in (2.51), we find that , i.e., . Thus we obtain the following asymptotic formula for the eigenvalues:
Using the formulae (2.53), we obtain the following asymptotic formulae for the eigenvector-functions :
where
3 Green’s matrix and expansion theorem
Let , where , be a continuous vector-valued function. To study the completeness of the eigenvectors of , and hence the completeness of the eigenvector-functions of (2.1)-(2.5), we derive Green’s function of problem (2.1)-(2.5) as well as the resolvent of . Indeed, let λ not be an eigenvalue of and consider the inhomogeneous problem
where I is the identity operator. Since
then we have
and the boundary conditions (2.2), (2.4) and (2.5), where λ is not an eigenvalue of problem (2.1)-(2.5).
Now, we can represent the general solution of (3.1) in the following form:
By applying the standard method of variation of constants to (3.3), the functions , and , satisfy the linear system of equations
and
Since λ is not an eigenvalue and , each of the linear systems (3.4) and (3.5) has a unique solution, which leads to
where , , and are arbitrary constants, and
Substituting equations (3.6) and (3.7) into (3.3), we obtain the solution of (3.1)
Then from (2.2), (3.2) and the transmission conditions (2.4) and (2.5), we get
Then (3.8) can be written as
where
which can be written as
where
Expanding (3.12) we obtain the concrete form
The matrix is called the Green’s matrix of problem (2.1)-(2.5). Obviously, is a meromorphic function of λ, for every , which has simple poles only at the eigenvalues. Although Green’s matrix looks as simple as that of classical Dirac systems, cf., e.g., [25,26], it is rather complicated because of the transmission conditions (see the example at the end of this paper). Therefore
Lemma 3.1 The operator is self-adjoint in .
Proof Since is a symmetric densely defined operator, it is sufficient to show that the deficiency spaces are the null spaces, and hence . Indeed, if and λ is a non-real number, then taking
implies that . Since satisfies the conditions (2.2)-(2.5), then . Now we prove that the inverse of exists. If , then
Since , we have . Thus , i.e., . Then , the resolvent operator of , exists. Thus
Take . The domains of and are exactly . Consequently, the ranges of and are also . Hence the deficiency spaces of are
The next theorem is an eigenfunction expansion theorem. The proof is similar to that of Levitan and Sargsjan derived in [[25], pp. 67-77]; see also [26-29].
Theorem 3.2
the series being absolutely and uniformly convergent in the first component for on, and absolutely convergent in the second component.
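Theorem 3.2 has a simple finite-dimensional analogue that can be verified numerically: a symmetric matrix plays the role of the self-adjoint operator, every vector expands in its orthonormal eigenvectors, and Parseval’s identity holds for the Fourier coefficients. The following sketch is our own illustration (not part of the paper’s argument), assuming numpy:

```python
import numpy as np

# Finite-dimensional analogue of the expansion theorem.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A = A + A.T                         # symmetric "operator"
evals, evecs = np.linalg.eigh(A)    # columns of evecs are orthonormal

g = rng.standard_normal(6)          # arbitrary "function" to expand
coeffs = evecs.T @ g                # Fourier coefficients <g, phi_n>
g_expanded = evecs @ coeffs         # sum_n coeffs[n] * phi_n

print(np.allclose(g_expanded, g))                    # expansion recovers g
print(np.isclose(np.sum(coeffs ** 2), np.dot(g, g))) # Parseval's identity
```

In the infinite-dimensional setting of Theorem 3.2 the sum is an infinite series and the convergence statements above replace these exact finite identities.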
4 The sampling theorems
The first sampling theorem of this section, associated with the boundary value problem (2.1)-(2.5), is the following.
Theorem 4.1
where is the solution defined above. Then is an entire function of exponential type that can be reconstructed from its values at the points via the sampling formula
The series (4.2) converges absolutely on ℂ and uniformly on any compact subset of ℂ, and is the entire function defined in (2.28).
Proof The relation (4.1) can be rewritten in the form
where
Since both and are in , then they have the Fourier expansions
where and are the Fourier coefficients
Applying Parseval’s identity to (4.3), we obtain
Now we calculate and of , . To prove expansion (4.2), we need to show that
Indeed, let and be fixed. By the definition of the inner product of , we have
From Green’s identity, see [[25], p.51], we have
Then (4.9) and the initial conditions (2.21) imply
From (2.40), (2.19) and (2.7), we have
Also, from (2.40) we have
Then from (2.26) and (4.12) we obtain
Substituting from (4.10), (4.11) and (4.13) into (4.8), we get
Letting in (4.14), since the zeros of are simple, we get
Since and are arbitrary, (4.14) and (4.15) hold for all and all . Therefore, from (4.14) and (4.15), we get (4.7). Hence (4.2) is proved with pointwise convergence on ℂ. Now we investigate the convergence of (4.2). First we prove that it is absolutely convergent on ℂ. Using the Cauchy-Schwarz inequality for ,
Since , , the two series on the right-hand side of (4.16) converge. Thus series (4.2) converges absolutely on ℂ. As for uniform convergence, let be compact. Let and . Define to be
Using the same method developed above, we get
Therefore
Since is compact, then we can find a positive constant such that
Then
uniformly on M. In view of Parseval’s equality,
Thus uniformly on M. Hence (4.2) converges uniformly on M. Thus is an entire function. From the relation
and the fact that , , are entire functions of exponential type, we conclude that is of exponential type. □
Remark 4.2 To see that expansion (4.2) is a Lagrange-type interpolation, we may replace by the canonical product
From Hadamard’s factorization theorem, see [4], , where is an entire function with no zeros. Thus,
and (4.1), (4.2) remain valid for the function . Hence
We may redefine (4.1) by taking kernel to get
The next theorem is devoted to giving vector-type interpolation sampling expansions associated with problem (2.1)-(2.5) for integral transforms whose kernels are defined in terms of Green’s matrix. As we see in (3.12), Green’s matrix of problem (2.1)-(2.5) has simple poles at . Define the function to be , where is a fixed point and is the function defined in (2.28) or the canonical product (4.22).
Theorem 4.3 Let . Let be the vector-valued transform
Then is a vector-valued entire function of exponential type that admits the vector-valued sampling expansion
The vector-valued series (4.26) converges absolutely on ℂ and uniformly on compact subsets of ℂ. Here (4.26) means
where both series converge absolutely on ℂ and uniformly on compact subsets of ℂ.
Proof The integral transform (4.25) can be written as
Applying Parseval’s identity to (4.28) with respect to , we obtain
Let be such that for . Since each is an eigenvector of , then
Thus
From (3.14) and (4.30) we obtain
Then from (2.26) and (2.40) in (4.31), we get
Hence equation (4.32) can be rewritten as
Moreover, from (3.12) we have
Then from (4.35), (2.26) and (2.40) in (4.34), we obtain
Combining (4.36) and (4.33) yields
Taking the limit when in (4.28), we get
Making use of (4.37), we may rewrite (4.38) as, ,
The interchange of the limit and summation is justified by the asymptotic behavior of and that of . If and , then (4.39) gives
Combining (4.37), (4.40) and (4.29), we get (4.28) under the assumption that and for all n. If , for some or 2, the same expansions hold with . The convergence properties as well as the analytic and growth properties can be established as in Theorem 4.1 above. □
Now we derive an example illustrating the previous results.
Example 4.1
Consider the system
and transmission conditions
This problem is a special case of problem (2.1)-(2.5) when and , . Then . For simplicity, we define
In the notations of the above section, the solutions and are
where
The eigenvalues are the solutions of the equation
which can be rewritten as
Green’s function of problem (4.41)-(4.45) is given by
By Theorem 4.1, the transform
has the following expansion:
where are the zeros of (4.48). In view of Theorem 4.3, the vector-valued transform
has the following vector-valued expansion:
It should be noted that for arbitrary choices of , α, β, we cannot always compute the eigenvalues of problem (4.41)-(4.45) explicitly. Hence the eigenvalues are the points of ℝ which satisfy
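Since such a transcendental characteristic equation generally admits no closed-form solution, the eigenvalues can be located numerically as the real zeros of the characteristic function (which Lemma 2.4 guarantees are real and simple). The sketch below scans for sign changes and refines each bracket by bisection; the particular function `delta` is purely illustrative and is NOT the characteristic function of the example above, whose explicit form depends on the chosen parameters:

```python
import math

def bisect_root(f, a, b, tol=1e-12):
    """Simple bisection for a sign-changing bracket [a, b]."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def eigenvalues_by_scanning(delta, lam_max, step=0.01):
    """Scan (0, lam_max] for sign changes of the characteristic function
    delta and refine each bracket by bisection.  This works because the
    eigenvalues are real and the zeros of delta are simple."""
    roots, lam = [], step
    prev = delta(lam)
    while lam < lam_max:
        nxt = delta(lam + step)
        if prev * nxt < 0:
            roots.append(bisect_root(delta, lam, lam + step))
        lam, prev = lam + step, nxt
    return roots

# Purely illustrative characteristic function (a stand-in, not the Delta
# of problem (4.41)-(4.45)): its zeros solve tan(2*lam) = 0.3*lam.
delta = lambda lam: math.sin(2.0 * lam) - 0.3 * lam * math.cos(2.0 * lam)
roots = eigenvalues_by_scanning(delta, 10.0)
print(roots)
```

A fine scanning step matters: zeros closer together than `step` could be missed, so in practice the step is chosen from the asymptotic spacing of the eigenvalues, such as the one in (2.53).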
Competing interests
The author declares that he has no competing interests.
Acknowledgements
This article was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. The author, therefore, acknowledges with thanks DSR technical and financial support.
References

Kotel’nikov, V: On the carrying capacity of the “ether” and wire in telecommunications. Material for the first all-union conference on questions of communications. Izd. Red. Upr. Svyazi RKKA 55, 55–64 (1933) (in Russian)

Shannon, C: Communication in the presence of noise. Proc. IRE 37, 10–21 (1949)

Whittaker, E: On the functions which are represented by the expansion of the interpolation theory. Proc. R. Soc. Edinb., Sect. A, Math. 35, 181–194 (1915)

Paley, R, Wiener, N: Fourier Transforms in the Complex Domain. Amer. Math. Soc., Providence (1934)

Higgins, JR: Sampling Theory in Fourier and Signal Analysis: Foundations. Oxford University Press, Oxford (1996)

Zayed, AI: Advances in Shannon’s Sampling Theory. CRC Press, Boca Raton (1993)

Levinson, N: Gap and Density Theorems. Amer. Math. Soc., Providence (1940)

Kadec, MI: The exact value of the Paley-Wiener constant. Sov. Math. Dokl. 5, 559–561 (1964)

Hinsen, G: Irregular sampling of band-limited functions. J. Approx. Theory 72, 346–364 (1993)

Kramer, HP: A generalized sampling theorem. J. Math. Phys. 38, 68–72 (1959)

Everitt, WN, Hayman, WK, Nasri-Roudsari, G: On the representation of holomorphic functions by integrals. Appl. Anal. 65, 95–102 (1997)

Everitt, WN, Nasri-Roudsari, G, Rehberg, J: A note on the analytic form of the Kramer sampling theorem. Results Math. 34, 310–319 (1988)

Everitt, WN, Garcia, AG, Hernández-Medina, MA: On Lagrange-type interpolation series and analytic Kramer kernels. Results Math. 51, 215–228 (2008)

Garcia, AG, Littlejohn, LL: On analytic sampling theory. J. Comput. Appl. Math. 171, 235–246 (2004)

Higgins, JR: A sampling principle associated with Saitoh’s fundamental theory of linear transformations. In: Saitoh, S, Hayashi, N, Yamamoto, M (eds.) Analytic Extension Formulas and Their Applications. Kluwer Academic, Norwell (2001)

Everitt, WN, Nasri-Roudsari, G: Interpolation and sampling theories, and linear ordinary boundary value problems. In: Higgins, JR, Stens, RL (eds.) Sampling Theory in Fourier and Signal Analysis: Advanced Topics, Chapter 5. Oxford University Press, Oxford (1999)

Annaby, MH, Freiling, G: Sampling integro-differential transforms arising from second order differential operators. Math. Nachr. 216, 25–43 (2000)

Annaby, MH, Tharwat, MM: On sampling theory and eigenvalue problems with an eigenparameter in the boundary conditions. SUT J. Math. 42, 157–176 (2006)

Annaby, MH, Tharwat, MM: On sampling and Dirac systems with eigenparameter in the boundary conditions. J. Appl. Math. Comput. (2010). doi:10.1007/s12190-010-0404-9

Annaby, MH, Freiling, G: A sampling theorem for transformations with discontinuous kernels. Appl. Anal. 83, 1053–1075 (2004)

Annaby, MH, Freiling, G, Zayed, AI: Discontinuous boundary-value problems: expansion and sampling theorems. J. Integral Equ. Appl. 16, 1–23 (2004)

Tharwat, MM: Discontinuous Sturm-Liouville problems and associated sampling theories. Abstr. Appl. Anal. (2011). doi:10.1155/2011/610232

Tharwat, MM, Yildirim, A, Bhrawy, AH: Sampling of discontinuous Dirac systems. Numer. Funct. Anal. Optim. 34(3), 323–348 (2013)

Levitan, BM, Sargsjan, IS: Introduction to Spectral Theory: Selfadjoint Ordinary Differential Operators. American Mathematical Society, Providence (1975)

Levitan, BM, Sargsjan, IS: Sturm-Liouville and Dirac Operators. Kluwer Academic, Dordrecht (1991)

Fulton, CT: Two-point boundary value problems with eigenvalue parameter contained in the boundary conditions. Proc. R. Soc. Edinb. 77(A), 293–308 (1977)

Hinton, DB: An expansion theorem for an eigenvalue problem with eigenvalue parameters in the boundary conditions. Q. J. Math. 30, 33–42 (1979)

Wray, SD: Absolutely convergent expansions associated with a boundary-value problem with the eigenvalue parameter contained in one boundary condition. Czechoslov. Math. J. 32(4), 608–622 (1982)