Abstract
Purpose
To derive existence and comparison results for extremal solutions of nonlinear singular distributional initial value problems and boundary value problems.
Main methods
Fixed point results in ordered function spaces and recently introduced concepts of regulated and continuous primitive integrals of distributions. Maple programming is used to determine solutions of examples.
Results
New existence results are derived for the smallest and greatest solutions of considered problems. Novel results are derived for the dependence of solutions on the data. The obtained results are applied to impulsive differential equations. Concrete examples are presented and solved to illustrate the obtained results.
MSC: 26A24, 26A39, 26A48, 34A12, 34A36, 37A37, 39B12, 39B22, 47B38, 47J25, 47H07, 47H10, 58D25
Keywords:
distribution; primitive; integral; regulated; continuous; initial value problem; boundary value problem; singular; distributional

1 Introduction
In this paper, existence and comparison results are derived for the smallest and greatest solutions of first and second order singular nonlinear initial value problems as well as second order boundary value problems.
Recently, similar problems have been studied in ordered Banach spaces, e.g., in [1-4], by converting the problems into systems of integral equations, the integrals in these systems being Bochner-Lebesgue or Henstock-Kurzweil integrals. A novel feature of the present study is that the right-hand sides of the considered differential equations comprise distributions on a compact real interval [a, b]. Every distribution is assumed to have a primitive in the space of those functions from [a, b] to ℝ which are left-continuous on (a, b], right-continuous at a, and which have right limits at every point of (a, b). Under this assumption, the considered problems can be transformed into integral equations which involve the regulated primitive integral of distributions introduced recently in [5].
The paper is organized as follows. Distributions on [a, b], their primitives, regulated primitive integrals and some of their properties, as well as a fixed point lemma are presented in Section 2. In Section 3, existence and comparison results are derived for the smallest and greatest solutions of first order initial value problems.
A fact that makes the solution space important in applications is that it contains primitives of Dirac delta distributions δ_{λ}, λ ∈ (a, b). This fact is exploited in Section 4, where results of Section 3 are applied to impulsive differential equations. The continuous primitive integral of distributions introduced in [6] is also used in these applications.
Existence of the smallest and greatest solutions of the second order initial and boundary value problems, and dependence of these solutions on the data are studied in Sections 5 and 6. Applications to impulsive problems are also presented.
The considered differential equations may be singular, distributional and impulsive. The differential equations, the initial and boundary conditions and the impulses may depend functionally on the unknown function and/or on its derivatives, and may contain discontinuous nonlinearities. The main tools are fixed point theorems in ordered spaces, proved in [7] by generalized monotone iteration methods. Concrete problems are solved to illustrate the obtained results. Iteration methods and Maple programming are used to determine the solutions.
2 Preliminaries
Distributions on a compact real interval [a, b] are (cf. [8]) continuous linear functionals on the topological vector space of functions φ : ℝ → ℝ possessing for every j ∈ ℕ_{0} a continuous derivative φ^{(j)} of order j that vanishes on ℝ\(a, b). The space is endowed with the topology in which a sequence (φ_{k}) of such functions converges to φ if and only if φ_{k}^{(j)} → φ^{(j)} uniformly on (a, b) as k → ∞ for every j ∈ ℕ_{0}. As for the theory of distributions, see, e.g., [9,10].
In this paper, every distribution g on [a, b] is assumed to have a primitive, i.e., a function G whose distributional derivative G' equals g, in the function space
The value 〈g, φ〉 of g at a test function φ is thus given by
Such a distribution g is called RP integrable. Its regulated primitive integral is defined by
As noticed in [5], the regulated primitive integral generalizes the wide Denjoy integral, and hence also the Riemann, Lebesgue, Denjoy and Henstock-Kurzweil integrals.
Denote by the set of those distributions on [a, b] that are RP integrable on [a, b]. If , then the function is that primitive of g which belongs to the set
It can be shown (cf. [5]) that a relation ≼, defined by
is a partial ordering on . In particular,
Given partially ordered sets X = (X, ≤) and Y = (Y, ≼), we say that a mapping f : X → Y is increasing if f(x) ≼ f(y) whenever x ≤ y in X, and order-bounded if there exist f_{±} ∈ Y such that f_{−} ≼ f(x) ≼ f_{+} for all x ∈ X.
The following fixed point result is a consequence of [11], Theorem A.2.1, or [7], Theorem 1.2.1 and Proposition 1.2.1.
Lemma 2.1. Given a partially ordered set P = (P, ≤), and its order interval [x_{−}, x_{+}] = {x ∈ P : x_{−} ≤ x ≤ x_{+}}, assume that a mapping G : [x_{−}, x_{+}] → [x_{−}, x_{+}] is increasing, and that each well-ordered chain of the range G[x_{−}, x_{+}] of G has a supremum in P and each inversely well-ordered chain of G[x_{−}, x_{+}] has an infimum in P. Then G has the smallest and greatest fixed points, and they are increasing with respect to G.
Remarks 2.1. Under the hypotheses of Lemma 2.1, the smallest fixed point x_{*} of G is by [[7], Theorem 1.2.1] the maximum of the chain C of [x_{−}, x_{+}] that is well ordered, i.e., every nonempty subset of C has a smallest element, and that satisfies
The smallest elements of C are G^{n}(x_{−}), n ∈ ℕ_{0}, as long as G^{n}(x_{−}) = G(G^{n−1}(x_{−})) is defined and G^{n−1}(x_{−}) < G^{n}(x_{−}), n ∈ ℕ. If G^{n−1}(x_{−}) = G^{n}(x_{−}) for some n ∈ ℕ, there is a smallest such n, and x_{*} = G^{n−1}(x_{−}) is the smallest fixed point of G in [x_{−}, x_{+}]. If x_{ω} = sup{G^{n}(x_{−})}_{n∈ℕ} is defined in P and is a strict upper bound of {G^{n}(x_{−})}_{n∈ℕ}, then x_{ω} is the next element of C. If x_{ω} = G(x_{ω}), then x_{*} = x_{ω}; otherwise the next elements of C are of the form G^{n}(x_{ω}), n ∈ ℕ, and so on.
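The finite stopping rule G^{n−1}(x_{−}) = G^{n}(x_{−}) described above can be sketched computationally. The following is a minimal Python sketch, not from the paper: it assumes an increasing map on a finite chain, where equality is exactly testable, and iterates G from a subsolution until a fixed point is reached. The name `least_fixed_point` is ours.

```python
def least_fixed_point(G, x_minus, max_iter=1000):
    """Monotone iteration: compute G(x_-), G^2(x_-), ... and stop as soon as
    G^{n-1}(x_-) = G^n(x_-), which is then the smallest fixed point of G
    in the order interval (when G is increasing and x_- is a subsolution)."""
    x = x_minus
    for _ in range(max_iter):
        y = G(x)
        if y == x:
            return x
        x = y
    raise RuntimeError("no fixed point reached within max_iter iterations")

# Example: on the finite chain {0, 1, ..., 5} the map G(n) = min(n + 1, 5)
# is increasing, and iteration from x_- = 0 stops at the fixed point 5.
print(least_fixed_point(lambda n: min(n + 1, 5), 0))  # prints 5
```

In the examples of this paper the same scheme is run symbolically in Maple, where equality of consecutive iterates can be checked exactly.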
The greatest fixed point x* of G is the minimum of the chain D of [x_{−}, x_{+}] that is inversely well ordered, i.e., every nonempty subset of D has a greatest element, and that has the following property:
The greatest elements of D are the n-fold iterates G^{n}(x_{+}), as long as they are defined and G^{n}(x_{+}) < G^{n−1}(x_{+}). If equality holds for some n ∈ ℕ, then x* = G^{n−1}(x_{+}) is the greatest fixed point of G in [x_{−}, x_{+}].
3 First order initial value problems
In this section, existence and comparison results are derived for the smallest and greatest solutions of first order initial value problems. Denote by L_{loc}(a, b], −∞ < a < b < ∞, the space of locally Lebesgue integrable functions from the half-open interval (a, b] to ℝ. The space L_{loc}(a, b] is ordered a.e. pointwise, and a.e. equal functions are identified.
Given p : [a, b] → ℝ_{+}, consider the initial value problem (IVP)
where c(u) ∈ ℝ, and . We are looking for solutions of (3.1) from the set
Definition 3.1. We say that a function u ∈ S is a subsolution of the IVP (3.1) if
If reversed inequalities hold in (3.3), we say that u is a supersolution of (3.1). If equalities hold in (3.3), then u is called a solution of (3.1).
We shall first transform the IVP (3.1) into an integral equation.
Lemma 3.1. Given c(u) ∈ ℝ, and p : [a, b] → ℝ_{+}, assume that , and that . Then u is a solution of the IVP (3.1) in S if and only if u is a solution of the following integral equation:
Proof: Assume that u is a solution of (3.1) in S. The definition of S and (3.1) ensure by (2.2) that
Letting r tend to a+ and applying the initial condition of (3.1), we see that (3.4) is valid. Conversely, let u be a solution of (3.4). According to (3.4) we have
This equation implies that u ∈ S, that the initial condition of (3.1) is valid, and that
Thus, u is a solution of the IVP (3.1) in S. □
Our first existence and comparison result for the IVP (3.1) reads as follows.
Theorem 3.1. Assume that is increasing, that p : [a, b] → ℝ_{+}, that , and that the IVP (3.1) has a subsolution u_{ }and a supersolution u_{+ }in S satisfying u_{ }≤ u_{+}. Then (3.1) has the smallest and greatest solutions within the order interval [u_{}, u_{+}] of S. Moreover, these solutions are increasing with respect to g and c.
Proof: Because g is increasing, it follows from (2.3) and (3.6) that G is increasing. Applying (2.3), [[5], Theorem 7] and Definition 3.1, we see that if and u_{−} ≤ u ≤ u_{+}, then
Thus
Similarly, it can be shown that G(u)(t) ≤ u_{+}(t) for each t ∈ (a, b]. Thus, G maps the order interval [u_{−}, u_{+}] of into [u_{−}, u_{+}]. Let W be a well-ordered or an inversely well-ordered chain in G[u_{−}, u_{+}]. It follows from [[1], Proposition 9.36] and its dual that sup W and inf W exist in .
The above proof shows that the operator G defined by (3.6) satisfies the hypotheses of Lemma 2.1 when . Thus G has the smallest fixed point u_{*} and the greatest fixed point u* in [u_{−}, u_{+}]. These fixed points are the smallest and greatest solutions of the integral equation (3.4) in [u_{−}, u_{+}]. This result and Lemma 3.1 imply that u_{*} and u* belong to S, and that they are the smallest and greatest solutions of the IVP (3.1) in [u_{−}, u_{+}]. Moreover, u_{*} and u* are by Lemma 2.1 increasing with respect to G. This implies by (2.3) and (3.6) the last conclusion of the theorem. □
The following result is a consequence of Theorem 3.1.
Proposition 3.1. Assume that the mappings and are increasing and order-bounded, that p : [a, b] → ℝ_{+}, and that . Then, the IVP (3.1) has in S the smallest and greatest solutions that are increasing with respect to g and c.
Proof: Because g and c are order-bounded, there exist and c_{±} ∈ ℝ such that g_{−} ≼ g(x) ≼ g_{+} and c_{−} ≤ c(x) ≤ c_{+} for all . Denote
Then u_{± }∈ S, and
and
Thus u_{−} is a subsolution and u_{+} is a supersolution of (3.1), whence the IVP (3.1) has by Theorem 3.1 the smallest solution u_{*} and the greatest solution u* in the order interval [u_{−}, u_{+}] of S.
If u ∈ S is any solution of (3.1), then
or equivalently,
Consequently, u ∈ [u_{−}, u_{+}], whence u_{*} and u* are the smallest and greatest of all the solutions of (3.1) in S. □
In the next proposition, the Henstock-Kurzweil integral can be replaced by any of the Riemann, Lebesgue, Denjoy and wide Denjoy integrals.
Proposition 3.2. Assume that g(x) is RP integrable on [a, b] for every , and that
where , and for each i = 1,..., n, H_{i} : [a, b] → [0, ∞) has right limits on [a, b), is left-continuous on (a, b], and f_{i} : [a, b] → ℝ satisfies the following hypotheses.
(f_{i1}) f_{i}(x) is Henstock-Kurzweil integrable on [a, b] for every .
(f_{i2}) There exist Henstock-Kurzweil integrable functions f_{i±} : [a, b] → ℝ such that f_{i−} ≤ f_{i}(x) ≤ f_{i+} for all .
If is increasing and order-bounded, then the IVP (3.1) has in S the smallest and greatest solutions that are increasing with respect to f_{i} and c.
Proof: The hypotheses imposed above ensure by (2.3) and (3.7) that g is an increasing mapping from to the order interval [g_{−}, g_{+}] of , where
Thus the conclusions follow from Proposition 3.1. □
Example 3.1. Assume that
where b ≥ 1, , H_{1 }is the Heaviside step function, i.e.,
Note that the greatest integer function [·] occurs in the function f_{1}(x). Prove that the IVP
where p(t) = t, t ∈ [0, b], has the smallest and greatest solutions, and calculate them.
Solution: Problem (3.9) is of the form (3.1), where c(u) = a = 0 and p(t) ≡ t. The hypotheses (f_{11}) and (f_{21}) are valid when
Thus the IVP (3.9) has by Proposition 3.2 the smallest and greatest solutions. They are the smallest and greatest fixed points of the mapping G defined by
G is an increasing mapping from to its order interval [u_{−}, u_{+}], where
Calculating the successive approximations G^{n}(u_{±}) we see that G^{7}(u_{±}) = G^{8}(u_{±}). This means by Remark 2.1 that u_{*} = G^{7}(u_{−}) and u* = G^{7}(u_{+}) are the smallest and greatest fixed points of G in [u_{−}, u_{+}]. According to the proof of Proposition 3.1, u_{*} and u* are also the smallest and greatest solutions of the initial value problem (3.9) in S. The exact expressions of u_{*} and u* are:
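Since the displayed formulas of Example 3.1 are not reproduced here, the sketch below only illustrates the general kind of successive approximation used: iterating u ↦ (c + ∫_a^t g(u)(s) ds)/p(t) on a grid for the singular weight p(t) = t. All names are ours, the quadrature is a crude trapezoidal rule, and the check uses the simple right-hand side g(u)(t) ≡ 1, for which the limit u ≡ 1 is known in closed form.

```python
import numpy as np

def picard(f, p, a, b, c=0.0, n_grid=2001, n_iter=30):
    """Successive approximations u_{k+1}(t) = (c + int_a^t f(s, u_k(s)) ds) / p(t)
    on a uniform grid; trapezoidal quadrature, the singular point t = a excluded
    from the division."""
    t = np.linspace(a, b, n_grid)
    u = np.zeros(n_grid)
    for _ in range(n_iter):
        g = f(t, u)
        # cumulative trapezoidal integral of g from a to each grid point
        F = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))))
        u = np.where(t > a, (c + F) / np.where(t > a, p(t), 1.0), u[0])
    return t, u

# p(t) = t on [0, 1] and f(t, u) = 1: the equation t*u(t) = int_0^t 1 ds
# has the solution u = 1, which the iteration reproduces on the grid.
t, u = picard(lambda t, u: np.ones_like(t), lambda t: t, 0.0, 1.0)
```

The paper's own computations of G^{n}(u_{±}) are symbolic (Maple), so consecutive iterates can be compared exactly rather than numerically.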
4 Applications to impulsive problems
In this section, we assume that Λ is a well-ordered subset of (a, b). Let δ_{λ}, λ ∈ Λ, denote the translate of the Dirac delta distribution whose primitive is H(t − λ), t ≥ a, where H is the Heaviside step function. Consider the singular distributional Cauchy problem
where p : [a, b] → ℝ_{+ }and . The values of f are distributions on [a, b], and the values of I are real numbers.
Definition 4.1. By a solution of (4.1), we mean such a function u ∈ S that satisfies (4.1), for which p · u is continuous on [a, b]\Λ, and has impulses
In the study of (4.1), the regulated primitive integral is replaced by the continuous primitive integral presented in [6]. A distribution g on [a, b] is called distributionally Denjoy (DD) integrable on [a, b], denoted , if g has a continuous primitive, i.e., g is the distributional derivative of a function G ∈ C[a, b]. The continuous primitive integral of g is defined by
is a proper subset of , and for every its continuous and regulated primitive integrals are equal. As shown in [6], contains the functions that are wide Denjoy integrable, and hence also those that are Riemann, Lebesgue, Denjoy and Henstock-Kurzweil integrable on [a, b]. On the other hand, the distributional derivatives of the nowhere differentiable Weierstrass function and of the almost everywhere differentiable Cantor function are distributionally but not wide Denjoy integrable.
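Because the primitive of δ_{λ} is the left-continuous step H(· − λ), adding a term I_{λ} H(t − λ) to a function creates exactly the jump u(λ+) − u(λ) = I_{λ} while preserving left-continuity. A small Python sketch of this mechanism (the names `step` and `with_impulses` are ours):

```python
def step(t, lam):
    """Left-continuous primitive of the Dirac delta at lam:
    0 for t <= lam and 1 for t > lam."""
    return 1.0 if t > lam else 0.0

def with_impulses(base, jumps):
    """Add impulse terms I_lam * H(t - lam) to a base function; each term
    produces the jump u(lam+) - u(lam) = I_lam, keeping u left-continuous."""
    return lambda t: base(t) + sum(I * step(t, lam) for lam, I in jumps.items())

u = with_impulses(lambda t: t, {0.5: 2.0, 0.75: -1.0})
eps = 1e-9
print(round(u(0.5 + eps) - u(0.5), 6))   # jump of 2 at lam = 0.5
print(round(u(0.75 + eps) - u(0.75), 6)) # jump of -1 at lam = 0.75
```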
It can be shown (cf. [6]) that the relation ≼, defined by
Transformation of the Cauchy problem (4.1) into an integral equation is presented in the following lemma.
Lemma 4.1. Assume that u ∈ S, that , and that . Then u is a solution of (4.1) if and only if
Proof: Assume first that u ∈ S satisfies (4.3). Because Λ is well-ordered, it follows that if λ ∈ Λ and λ < sup Λ, then H(t − λ) = 1 on (λ, S(λ)], where S(λ) = min{μ ∈ Λ : λ < μ}. This property implies that if the function v : (a, b] → ℝ is defined by
then the function p · v is constant on every interval (λ, S(λ)], Λ ∋ λ < sup Λ, on [a, min Λ], and on (sup Λ, b] if sup Λ < b. In particular, , and the distributional derivative of p · v is
Thus
Since is continuous on [a, b], the function p · u is continuous on [a, b]\Λ. Because
then
Moreover , so that u is a solution of the IVP (4.1).
Assume next that u ∈ S is a solution of (4.1). Denoting
where v is defined by (4.4), it follows from (4.1) and (4.5) that
Because f(u) is DD integrable on [a, b], it follows that
Thus
or equivalently, (4.3) holds. □
Noticing that the IVP (4.1) is a special case of the Cauchy problem (3.1), where
the results of Section 3 can be applied to study the IVP (4.1). The following result is a consequence of Proposition 3.1.
Proposition 4.1. The distributional IVP (4.1) has the smallest and greatest solutions that are increasing with respect to f and c, if and are increasing and orderbounded, if p : [a, b] → ℝ_{+}, if , and if has the following properties.
(I) for all , and x ↦ I(λ,x) is increasing for all λ ∈ Λ.
Proof: The given hypotheses imply that (4.6) defines a mapping that is increasing and orderbounded. Thus, the IVP (3.1) has by Proposition 3.1 the smallest solution u_{* }and the greatest solution u* in S, and they are increasing with respect to g and c. By Lemma 4.1, u_{* }and u* are the smallest and greatest solutions of the IVP (4.1), and they are increasing with respect to f, and c, since g is increasing with respect to f. □
The initial value problem
combined with the impulsive property:
form a special case of the IVP (4.1) when f is the Nemytskij operator associated with the function by
Considering the distributions δ_{λ} as generalized functions t ↦ δ(t − λ), t ∈ [a, b], we can rewrite the system (4.7), (4.8) as
For instance, Proposition 4.1 implies the following result:
Corollary 4.1. The impulsive Cauchy problem (4.9) has the smallest and greatest solutions, which are increasing with respect to q and c, if is increasing and order-bounded, and if the hypothesis (I) and the following hypotheses are valid.
(q0) q(·, x(·); x) is Henstock-Kurzweil integrable on [a, b] for every .
(q1) for all t ∈ [a, b] whenever x ≤ y in .
(q2) There exist Henstock-Kurzweil integrable functions q_{±} : [a, b] → ℝ such that for all and t ∈ [a, b].
Example 4.1. Determine the smallest and greatest solutions of the IVP
when q is defined by
'[·]' denotes, as before, the greatest integer function, and 'sgn' the sign function.
Solution: The IVP (4.10) is a special case of (4.9), when a = 0, b = 1, c(u) = 0, p(t) = t, t ∈ [0, 1], , and . The validity of the hypotheses of Corollary 4.1 is easy to verify. Thus, the IVP (4.10) has the smallest and greatest solutions. These solutions are the smallest and greatest fixed points of , defined by
Calculating the successive approximations
it turns out that the sequence (y_{n}) is strictly increasing, that (z_{n}) is strictly decreasing, that y_{17} = G(y_{17}), and that z_{16} = G(z_{16}). Thus u_{*} = y_{17} and u* = z_{16} are by Remark 2.1 the smallest and greatest solutions of (4.1) with c(u) = 0. The exact formulas of u_{*} and u* are
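The two-sided iteration used above, an increasing sequence y_{n} = G^{n}(u_{−}) from below and a decreasing sequence z_{n} = G^{n}(u_{+}) from above, can be sketched generically. The following is a schematic Python sketch (our own, not the paper's Maple code), assuming an increasing scalar map and a numerical stopping tolerance in place of the exact equalities y_{17} = G(y_{17}), z_{16} = G(z_{16}) available in the example.

```python
import math

def two_sided_iteration(G, u_minus, u_plus, tol=1e-12, max_iter=10_000):
    """Iterate y_{n+1} = G(y_n) upward from the subsolution u_minus and
    z_{n+1} = G(z_n) downward from the supersolution u_plus.  For an
    increasing G the two limits bracket every fixed point in [u_minus, u_plus]."""
    y, z = u_minus, u_plus
    for _ in range(max_iter):
        y_new, z_new = G(y), G(z)
        if abs(y_new - y) < tol and abs(z_new - z) < tol:
            # numerical stand-ins for the smallest and greatest fixed points
            return y_new, z_new
        y, z = y_new, z_new
    raise RuntimeError("iteration did not settle")

# G(x) = sqrt(x) is increasing on [0.04, 4]; iterating upward from 0.04 and
# downward from 4 squeezes both sequences onto the fixed point x = 1.
y, z = two_sided_iteration(math.sqrt, 0.04, 4.0)
```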
Remarks 4.1. The function (t, x) ↦ q(t, x), defined in (4.11), has the following properties.
• It is Henstock-Kurzweil integrable, but it is not Lebesgue integrable with respect to the independent variable t if x ≠ 0, because h is not Lebesgue integrable on [0, 1].
• Its dependence on the variables t and x is discontinuous, since the signum function sgn, the greatest integer function [·], and the function h are discontinuous.
• Its dependence on the unknown function x is nonlocal, since the integral of function x appears in the argument of the tanhfunction.
• Its dependence on x is not monotone, since h attains positive and negative values in an infinite number of disjoint sets of positive measure. For instance, u*(t) > u_{*}(t) for all t ∈ (0, 1], but the difference function t ↦ q(t, u*) − q(t, u_{*}) is neither nonnegative-valued nor Lebesgue integrable on [0, 1].
Notice also that in Example 4.1 the dependence of the function on x is discontinuous.
5 Second order initial value problems
In this section we study the second order initial value problem
where , , p : [a, b] → ℝ_{+}, −∞ < a < b < ∞.
We are looking for the smallest and greatest solutions of (5.1) from the set
The IVP (5.1) can be converted to a system of integral equations which does not contain derivatives.
Lemma 5.1. Assume that p : [a, b] →ℝ_{+}, that and that for all . Then u is a solution of the IVP (5.1) in Y if and only if (u, u') = (u, v), where is a solution of the system
Proof: Assume that u is a solution of the IVP (5.1) in Y , and denote
The differential equation, the initial conditions of (5.1), the definition (5.2) of Y and the notation (5.4) imply that
and
Thus, the integral equations of (5.3) hold.
Conversely, let (u, v) be a solution of the system (5.3) in . The first equation of (5.3) implies that u is a.e. differentiable and v = u', and that the second initial condition of (5.1) is fulfilled. Since v = u', it follows from the second equation of (5.3) that
The equation (5.5) implies that p · u' belongs to , and that the differential equation and first initial condition of (5.1) hold. Thus u is a solution of the IVP (5.1) in Y. □
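The reduction of Lemma 5.1 suggests approximating the pair (u, v) = (u, u') by iterating the two integral equations of (5.3). The Python sketch below is our own construction, not the paper's Maple code, and it assumes (consistently with the proof above) that the system has the form u(t) = d + ∫_a^t v(s) ds, v(t) = (c + ∫_a^t f(s, u(s), v(s)) ds)/p(t); it is checked on the smooth non-singular data p ≡ 1, f ≡ 2, c = d = 0, for which u(t) = t² and v(t) = 2t.

```python
import numpy as np

def coupled_picard(f, p, a, b, c=0.0, d=0.0, n_grid=2001, n_iter=50):
    """Successive approximation of (u, v) = (u, u') for the assumed system
       u(t) = d + int_a^t v(s) ds,
       v(t) = (c + int_a^t f(s, u(s), v(s)) ds) / p(t),
    with trapezoidal quadrature; the point t = a is excluded from the division."""
    t = np.linspace(a, b, n_grid)

    def cumtrap(g):
        # cumulative trapezoidal integral of the sampled function g from a
        return np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))))

    u = np.full(n_grid, float(d))
    v = np.zeros(n_grid)
    for _ in range(n_iter):
        v = np.where(t > a, (c + cumtrap(f(t, u, v))) / np.where(t > a, p(t), 1.0), v)
        u = d + cumtrap(v)
    return t, u, v

# Non-singular check: p = 1 and f = 2 give v(t) = 2t and u(t) = t^2 on [0, 1].
t, u, v = coupled_picard(lambda t, u, v: np.full_like(t, 2.0),
                         lambda t: np.ones_like(t), 0.0, 1.0)
```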
Assume that L_{loc}(a, b] is ordered a.e. pointwise, that Y is ordered pointwise, and that the functions p, f, c and d satisfy the following hypotheses:
Our main existence and comparison result for the IVP (5.1) reads as follows.
Theorem 5.1. Assume that p : [a, b] → ℝ_{+}, that , and that the mappings and are increasing and order-bounded. Then, the IVP (5.1) has the smallest and greatest solutions in Y, and they are increasing with respect to f, c and d.
Proof: The hypotheses imposed on f, c and d imply that the following conditions are valid.
(f0) f(u, v) is RP integrable on [a, b] for every , and there exist such that h_{−} ≼ f(u_{1}, v_{1}) ≼ f(u_{2}, v_{2}) ≼ h_{+} for all , i = 1, 2, u_{1} ≤ u_{2} and v_{1} ≤ v_{2}.
(c0) c_{±} ∈ ℝ, and c_{−} ≤ c(u_{1}, v_{1}) ≤ c(u_{2}, v_{2}) ≤ c_{+} whenever , i = 1, 2, u_{1} ≤ u_{2} and v_{1} ≤ v_{2}.
(d0) d_{±} ∈ ℝ, and d_{−} ≤ d(u_{1}, v_{1}) ≤ d(u_{2}, v_{2}) ≤ d_{+} whenever , i = 1, 2, u_{1} ≤ u_{2} and v_{1} ≤ v_{2}.
Assume that is ordered componentwise. We shall first show that the vector-functions x_{+}, x_{−} given by
define functions x_{±} ∈ P. Since 1/p is Lebesgue integrable and the functions belong to , the second components of x_{±} belong to . This implies that the first components of x_{±} are defined and continuous, whence they belong to .
Similarly, by applying also the given hypotheses one can verify that the relations
define an increasing mapping G = (G_{1}, G_{2}) : [x_{−}, x_{+}] → [x_{−}, x_{+}].
Let W be a well-ordered chain in the range of G. The sets W_{1} = {u : (u, v) ∈ W} and W_{2} = {v : (u, v) ∈ W} are well-ordered and order-bounded chains in . It then follows from [[1], Proposition 9.36] that the suprema of W_{1} and W_{2} exist in . Obviously, (sup W_{1}, sup W_{2}) is the supremum of W in P. Similarly one can show that each inversely well-ordered chain of the range of G has an infimum in P.
The above proof shows that the operator G = (G_{1}, G_{2}) defined by (5.7) satisfies the hypotheses of Lemma 2.1, and therefore G has the smallest fixed point x_{* }= (u_{*},v_{*}) and the greatest fixed point x* = (u*, v*). It follows from (5.7) that (u_{*}, v_{*}) and (u*, v*) are solutions of the system (5.3). According to Lemma 5.1, u_{* }and u* belong to Y and are solutions of the IVP (5.1).
To prove that u_{*} and u* are the smallest and greatest of all solutions of (5.1) in Y, let u ∈ Y be any solution of (5.1). In view of Lemma 5.1, (u, v) = (u, u') is a solution of the system (5.3). Applying the hypotheses (f0), (c0) and (d0), it is easy to show that x = (u, v) ∈ [x_{−}, x_{+}], where x_{±} are defined by (5.6). Thus x = (u, v) is a fixed point of G = (G_{1}, G_{2}) : [x_{−}, x_{+}] → [x_{−}, x_{+}], defined by (5.7). Because x_{*} = (u_{*}, v_{*}) and x* = (u*, v*) are the smallest and greatest fixed points of G, it follows that (u_{*}, v_{*}) ≤ (u, v) ≤ (u*, v*). In particular, u_{*} ≤ u ≤ u*, whence u_{*} and u* are the smallest and greatest of all solutions of the IVP (5.1).
The last assertion is an easy consequence of the last conclusion of Lemma 2.1 and the definition (5.7) of G = (G_{1}, G_{2}). □
Consider next the following special case of (5.1), where the values of f are combined with impulses and a Henstock-Kurzweil integrable function:
In this case problem (5.1) can be rewritten as
The next result is a consequence of Theorem 5.1.
Corollary 5.1. Assume that p : [a, b] → ℝ_{+}, , that the functions are increasing and order-bounded, and that the mappings and satisfy the following hypotheses.
(q_{1}) q(·, x) is Henstock-Kurzweil integrable on [a, b] for all .
(q_{2}) There exist Henstock-Kurzweil integrable functions q_{±} : [a, b] → ℝ such that , whenever x ≤ y in .
(I) for all , and x ↦ I(λ, x) is increasing for all λ ∈ Λ.
Then, the impulsive IVP (5.8) has the smallest and greatest solutions that are increasing with respect to q, c and d.
Example 5.1. Determine the smallest and greatest solutions of the following singular impulsive IVP.
Solution: System (5.9) is a special case of (5.8), obtained by setting a = 0, b = 3, , , with q, c, d and I given by
It is easy to verify that the hypotheses of Corollary 5.1 hold. Thus (5.9) has the smallest and greatest solutions. The functions x_{ }and x_{+ }defined by (5.6) can be calculated, and their first components are:
where
is the Fresnel sine integral. According to Lemma 5.1, the smallest solution of (5.9) is equal to the first component of the smallest fixed point of G = (G_{1}, G_{2}), defined by (5.7), with f, c and d given by (5.10) and . Calculating the iterations G^{n}x_{−} it turns out that G^{4}x_{−} = G^{5}x_{−}, whence is the smallest solution of (5.9). Similarly, one can show that is the greatest solution of (5.9). The exact expressions of these solutions are
6 Second Order Boundary Value Problems
This section is devoted to the study of the second order boundary value problem (BVP)
where , c, d : L^{1}[a, b]^{2} → ℝ, and p : [a, b] → ℝ_{+}, −∞ < a < b < ∞. Now we are looking for the smallest and greatest solutions of (6.1) from the set
The BVP (6.1) can be transformed into a system of integral equations as follows.
Lemma 6.1. Assume that p : [a, b] → ℝ_{+}, that , and that for all u, v ∈ L^{1}[a, b]. Then u is a solution of the BVP (6.1) in Z if and only if (u, u') = (u, v), where (u, v) ∈ L^{1}[a, b]^{2} is a solution of the system
Proof: Assume that u is a solution of the BVP (6.1) in Z, and denote
The differential equation, the boundary conditions of (6.1), the definition (6.2) of Z and the notation (6.4) ensure that
and
Thus the integral equations of (6.3) hold.
Conversely, let (u, v) be a solution of the system (6.3) in L^{1}[a, b]^{2}. The first equation of (6.3) implies that u is a.e. differentiable and v = u', and that the second boundary condition of (6.1) holds. Since v = u', it follows from the second equation of (6.3) that
This equation implies that p · u' belongs to , and that the differential equation and the first boundary condition of (6.1) are satisfied. Thus u is a solution of the BVP (6.1) in Z. □
Assume that L^{1}[a, b] is ordered a.e. pointwise, and that Z is ordered pointwise. We shall impose the following hypotheses on the functions p, f, c, and d.
(p_{1}) p : [a, b] → ℝ_{+}, and .
(f_{1}) is order-bounded, and f(u_{1}, v_{1}) ≼ f(u_{2}, v_{2}) whenever u_{i}, v_{i} ∈ L^{1}[a, b], i = 1, 2, u_{1} ≤ u_{2}, and v_{1} ≥ v_{2}.
(c_{1}) c : L^{1}[a, b]^{2} → ℝ is order-bounded, and c(u_{2}, v_{2}) ≤ c(u_{1}, v_{1}) whenever u_{i}, v_{i} ∈ L^{1}[a, b], i = 1, 2, u_{1} ≤ u_{2}, and v_{1} ≥ v_{2}.
(d_{1}) d : L^{1}[a, b]^{2} → ℝ is order-bounded, and d(u_{1}, v_{1}) ≤ d(u_{2}, v_{2}) whenever u_{i}, v_{i} ∈ L^{1}[a, b], i = 1, 2, u_{1} ≤ u_{2} and v_{1} ≥ v_{2}.
The next theorem is our main existence and comparison result for the BVP (6.1).
Theorem 6.1. Assume that the hypotheses (p_{1}), (f_{1}), (c_{1}), and (d_{1}) hold. Then, the BVP (6.1) has the smallest and greatest solutions in Z, and they are increasing with respect to f and d and decreasing with respect to c.
Proof: Because f, c and d are order-bounded, the following conditions are valid.
(f_{0}) There exist such that h_{−} ≼ f(u, v) ≼ h_{+} for all u, v ∈ L^{1}[a, b].
(c_{0}) There exist c_{±} ∈ ℝ such that c_{−} ≤ c(u, v) ≤ c_{+} whenever u, v ∈ L^{1}[a, b].
(d_{0}) There exist d_{±} ∈ ℝ such that d_{−} ≤ d(u, v) ≤ d_{+} whenever u, v ∈ L^{1}[a, b].
Assume that P = L^{1}[a, b]^{2 }is ordered by
We shall first show that the vector-functions x_{+}, x_{−} given by
belong to P. Since 1/p is Lebesgue integrable and the function belongs to , the second component of x_{+} is Lebesgue integrable on [a, b]. Similarly one can show that the second component of x_{−} belongs to L^{1}[a, b]. These results ensure that the first components of x_{±} are defined and continuous in t, and hence are in L^{1}[a, b].
Similarly, by applying the given hypotheses one can verify that the relations
define an increasing mapping G = (G_{1}, G_{2}) : [x_{−}, x_{+}] → [x_{−}, x_{+}].
Let W be a well-ordered chain in the range of G. The set W_{1} = {u : (u, v) ∈ W} is well-ordered, W_{2} = {v : (u, v) ∈ W} is inversely well-ordered, and both W_{1} and W_{2} are order-bounded in L^{1}[a, b]. It then follows from [[1], Lemma 9.32] that the supremum of W_{1} and the infimum of W_{2} exist in L^{1}[a, b]. Obviously, (sup W_{1}, inf W_{2}) is the supremum of W in (P, ≤). Similarly, one can show that each inversely well-ordered chain of the range of G has an infimum in (P, ≤).
The above proof shows that the operator G = (G_{1}, G_{2}) defined by (6.8) satisfies the hypotheses of Lemma 2.1, whence G has the smallest fixed point x_{*} = (u_{*}, v_{*}) and the greatest fixed point x* = (u*, v*). It follows from (6.8) that (u_{*}, v_{*}) and (u*, v*) are solutions of the system (6.3). According to Lemma 6.1, u_{*} and u* belong to Z and are solutions of the BVP (6.1).
To prove that u_{*} and u* are the smallest and greatest of all solutions of (6.1) in Z, let u ∈ Z be any solution of (6.1). In view of Lemma 6.1, (u, v) = (u, u') is a solution of the system (6.3). Applying the properties (f_{0}), (c_{0}), and (d_{0}), it is easy to show that x = (u, v) ∈ [x_{−}, x_{+}], where x_{±} are defined by (6.7). Thus, x = (u, v) is a fixed point of G = (G_{1}, G_{2}) : [x_{−}, x_{+}] → [x_{−}, x_{+}], defined by (6.8). Because x_{*} = (u_{*}, v_{*}) and x* = (u*, v*) are the smallest and greatest fixed points of G, respectively, it follows that (u_{*}, v_{*}) ≤ (u, v) ≤ (u*, v*). In particular, u_{*} ≤ u ≤ u*, whence u_{*} and u* are the smallest and greatest of all solutions of the BVP (6.1).
The last assertion is an easy consequence of the last conclusion of Lemma 2.1, and the definition (6.8) of G = (G_{1}, G_{2}). □
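The componentwise order used in the proof above reverses the second component: (u₁, v₁) ≤ (u₂, v₂) in P if and only if u₁ ≤ u₂ and v₁ ≥ v₂, so that sup W = (sup W₁, inf W₂). A small Python sketch of this mixed order and its supremum, evaluated pointwise on sampled function values (the names `leq_mixed` and `sup_mixed` are ours):

```python
def leq_mixed(x, y):
    """(u1, v1) <= (u2, v2) iff u1 <= u2 pointwise and v1 >= v2 pointwise."""
    (u1, v1), (u2, v2) = x, y
    return all(a <= b for a, b in zip(u1, u2)) and all(a >= b for a, b in zip(v1, v2))

def sup_mixed(chain):
    """Supremum of a chain in the mixed order: pointwise sup of the u's
    and pointwise inf of the v's."""
    us = [u for u, _ in chain]
    vs = [v for _, v in chain]
    return ([max(col) for col in zip(*us)], [min(col) for col in zip(*vs)])

w1 = ([0.0, 0.0], [2.0, 2.0])
w2 = ([1.0, 0.5], [1.0, 1.5])
print(leq_mixed(w1, w2))    # True: u grows while v decreases
print(sup_mixed([w1, w2]))  # ([1.0, 0.5], [1.0, 1.5])
```

This sign reversal is what lets the monotonicity hypotheses (f₁), (c₁), (d₁), which are mixed in u and v, define an increasing operator G on P.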
Consider next a special case of (6.1), where the values of f are combined with impulses and Henstock-Kurzweil integrable functions:
Corollary 6.1. Assume that p : [a, b] → ℝ_{+}, , that the functions c, d : L^{1}[a, b]^{2} → ℝ satisfy the hypotheses (c_{i}) and (d_{i}), i = 1, 2, that α : Λ → ℝ, , and that g satisfies the following hypotheses.
(g_{1}) g(u, v) is Henstock-Kurzweil integrable on [a, b] for all u, v ∈ L^{1}[a, b].
(g_{2}) There exist Henstock-Kurzweil integrable functions such that , whenever u_{1} ≤ u_{2} and v_{1} ≥ v_{2} in L^{1}[a, b].
Then, the impulsive BVP (6.9) has the smallest and greatest solutions that are increasing with respect to g, d and decreasing with respect to c.
Example 6.1. Determine the smallest and greatest solutions of the following singular impulsive BVP.
Solution: System (6.10) is a special case of (6.9) when a = 0, b = 3, , , and g, c, d are given by
It is easy to verify that the hypotheses of Corollary 6.1 are valid. Thus (6.10) has the smallest and greatest solutions. The functions x_{ }and x_{+ }defined by (6.7) can be calculated, and their first components are:
and
where FresnelS is the Fresnel sine integral.
According to Lemma 6.1, the smallest solution of (6.10) is equal to the first component of the smallest fixed point of G = (G_{1}, G_{2}), defined by (6.8). Calculating the first iterations G^{n}x_{−} it turns out that G^{6}x_{−} = G^{7}x_{−}. Thus is the smallest solution of (6.10). Similarly, one can show that G^{3}x_{+} = G^{4}x_{+}, whence is the greatest solution of (6.10). The exact expressions of these solutions are
and
Remarks 6.1. The IVP's (3.1) and (5.1) and the BVP (6.1) can be
• nonlocal, because the functions g, c, d, and f may depend functionally on u and/or u';
• discontinuous, since the dependencies of g, c, d and f on u and/or u' can be discontinuous;
• distributional, since the values of g and f can be distributions;
• impulsive, since the values of g and f can contain impulses.
A theory for first order nonlinear distributional Cauchy problems is presented in [12]. Linear distributional differential equations are studied in [13,8]. Singular ordinary differential equations are studied, e.g., in [11,14,15]. Initial value problems in ordered Banach spaces are studied, e.g., in [1-4,7]. As for the study of impulsive differential equations, see, e.g., [1,16,17]. The case of a well-ordered set of impulses was studied for the first time in [18].
The solutions of examples have been calculated by using simple Maple programming.
Competing interests
The author declares that they have no competing interests.
Authors' contributions
The work was realized by the author.
Acknowledgements
The author thanks the anonymous referee for a careful review and constructive comments.
References

[1] Carl, S, Heikkilä, S: Fixed Point Theory in Ordered Spaces and Applications. Springer, Berlin (2011)
[2] Heikkilä, S, Kumpulainen, M: On improper integrals and differential equations in ordered Banach spaces. J Math Anal Appl. 319, 579–603 (2006)
[3] Heikkilä, S, Kumpulainen, M: Differential equations with nonabsolutely integrable functions in ordered Banach spaces. Nonlinear Anal. 72, 4082–4090 (2010)
[4] Heikkilä, S, Kumpulainen, M: Second order initial value problems with nonabsolute integrals in ordered Banach spaces. Nonlinear Anal. 74(5), 1939–1955 (2011)
[5] Talvila, E: The regulated primitive integral. Ill J Math. 53, 1187–1219 (2009)
[6] Talvila, E: The distributional Denjoy integral. Real Anal Exch. 33, 51–82 (2008)
[7] Heikkilä, S, Lakshmikantham, V: Monotone Iterative Techniques for Discontinuous Nonlinear Differential Equations. Marcel Dekker Inc., New York (1994)
[8] Tvrdý, M: Linear distributional differential equations of the second order. Math Bohemica. 119(4), 419–436 (1994)
[9] Friedlander, FG, Joshi, M: Introduction to the Theory of Distributions. Cambridge University Press, Cambridge (1999)
[10] Schwartz, L: Théorie des distributions. Hermann, Paris (1966)
[11] Carl, S, Heikkilä, S: Nonlinear Differential Equations in Ordered Spaces. Chapman & Hall/CRC, Boca Raton (2000)
[12] Heikkilä, S: On nonlinear distributional and impulsive Cauchy problems.
[13] Pelant, M, Tvrdý, M: Linear distributional differential equations in the space of regulated functions. Math Bohemica. 118(4), 379–400 (1993)
[14] Heikkilä, S, Seikkala, S: On the existence of extremal solutions of phi-Laplacian initial and boundary value problems. Int J Pure Appl Math. 17(1), 119–138 (2004)
[15] Heikkilä, S, Seikkala, S: On singular, functional, nonsmooth and implicit phi-Laplacian initial and boundary value problems. J Math Anal Appl. 308(2), 513–531 (2005)
[16] Federson, M, Táboas, P: Impulsive retarded differential equations in Banach spaces via Bochner-Lebesgue and Henstock integrals. Nonlinear Anal. 50, 389–407 (2002)
[17] Lakshmikantham, V, Bainov, D, Simeonov, P: Theory of Impulsive Differential Equations. World Scientific, Singapore (1989)
[18] Heikkilä, S, Kumpulainen, M, Seikkala, S: Uniqueness and existence results for implicit impulsive differential equations. Nonlinear Anal. 42, 13–26 (2000)