Open Access Research

Multiple positive solutions for first-order impulsive integral boundary value problems on time scales

Yongkun Li* and Jiangye Shu

Author Affiliations

Department of Mathematics, Yunnan University, Kunming, Yunnan 650091, People's Republic of China


Boundary Value Problems 2011, 2011:12  doi:10.1186/1687-2770-2011-12


The electronic version of this article is the complete one and can be found online at: http://www.boundaryvalueproblems.com/content/2011/1/12


Received:10 March 2011
Accepted:15 August 2011
Published:15 August 2011

© 2011 Li and Shu; licensee Springer.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, we first present a class of first-order nonlinear impulsive integral boundary value problems on time scales. Then, using the well-known Guo-Krasnoselskii and Leggett-Williams fixed point theorems, we establish criteria for the existence of at least one, two, and three positive solutions, respectively, for the problem under consideration. Finally, examples are presented to illustrate the main results.

MSC: 34B10; 34B37; 34N05.

Keywords:
integral boundary value problem; fixed point; multiple solutions; time scale

1 Introduction

Continuous and discrete systems are both important in theory and in applications. The theory of time scales, introduced by Stefan Hilger in order to unify continuous and discrete analysis, has received considerable attention. It is therefore meaningful to study dynamic systems on time scales, which unify differential and difference systems.

In recent years, a great deal of work has been done on the existence of solutions of boundary value problems on time scales; for the background and known results, we refer the reader to some recent contributions [1-5] and the references therein. At the same time, boundary value problems for impulsive differential equations and impulsive difference equations have received much attention [6-12], since such equations can model many real-world phenomena in physics, biology, engineering, etc.; see [13-15] and the references therein.

In paper [16], Sun studied the first-order boundary value problem on time scales

xΔ(t) = f(x(σ(t))), t ∈ [0, T]_𝕋, x(0) = βx(σ(T)), (1.1)

where 0 < β < 1. By means of the twin fixed point theorem due to Avery and Henderson, some existence criteria for at least two positive solutions were established.

Tian and Ge [17] studied the first-order three-point boundary value problem on time scales

xΔ(t) + p(t)x(σ(t)) = f(t, x(σ(t))), t ∈ [0, T]_𝕋, x(0) − αx(ξ) = βx(σ(T)). (1.2)

Using several fixed point theorems, the existence of at least one positive solution and multiple positive solutions is obtained.

However, apart from BVPs for differential and difference equations, that is, for the particular time scales 𝕋 = ℝ or 𝕋 = ℤ, there are few papers dealing with boundary value problems involving more than three points for first-order systems on time scales. In addition, problems with integral boundary conditions arise naturally in thermal conduction problems [18], semiconductor problems [19], and hydrodynamic problems [20]. In the continuous case, since integral boundary value problems include two-point, three-point, ..., n-point boundary value problems, such problems for continuous systems have received increasing attention, and many results have been worked out during the past ten years; see [21-27] for details. To the best of the authors' knowledge, up to the present there is no paper concerning boundary value problems with integral boundary conditions on time scales. This paper is intended to fill that gap in the literature.

In this paper, we are concerned with the following first-order nonlinear impulsive integral boundary value problem on time scales:

xΔ(t) + p(t)x(σ(t)) = f(t, x(σ(t))), t ∈ J := [0, T]_𝕋 \ {t1, t2, ..., tm},
Δx(ti) = x(ti⁺) − x(ti⁻) = Ii(x(ti)), i = 1, 2, ..., m,
αx(0) − βx(σ(T)) = ∫_0^{σ(T)} g(s)x(s)Δs, (1.3)

where 𝕋 is a time scale, that is, a nonempty closed subset of ℝ with the topology and ordering inherited from ℝ; 0 and T are points in 𝕋; the interval [0, T]_𝕋 := [0, T] ∩ 𝕋 has finitely many right-scattered points; f ∈ C([0, σ(T)]_𝕋 × [0, +∞), [0, +∞)); p ∈ C([0, σ(T)]_𝕋, ℝ+) is regressive; Ii ∈ C([0, +∞), [0, +∞)) for 1 ≤ i ≤ m; g is a nonnegative integrable function on [0, σ(T)]_𝕋 with Γ := α − βep(0, σ(T)) − ∫_0^{σ(T)} g(s)ep(0, s)Δs > 0, where ep(0, σ(T)) is the exponential function on the time scale 𝕋, which will be introduced in the next section; ti ∈ [0, T]_𝕋 (1 ≤ i ≤ m) with 0 < t1 < · · · < tm < T; and, for each i = 1, 2, ..., m, x(ti⁺) = lim_{h→0⁺} x(ti + h) and x(ti⁻) = lim_{h→0⁻} x(ti + h) represent the right and left limits of x(t) at t = ti, with x(ti⁻) = x(ti).

Remark 1.1. Let 𝕋rs = {θ1, θ2, ..., θq} denote the set of right-scattered points in the interval [0, T]_𝕋, with 0 ≤ θ1 < · · · < θq ≤ T, σ(θ0) = 0, and θ_{q+1} = T. By the basic concepts and calculus formulae on time scales in the book by Bohner and Peterson [28], we have

∫_0^{σ(T)} g(s)x(s)Δs = Σ_{k=0}^{q} ∫_{σ(θk)}^{θ_{k+1}} g(s)x(s)Δs + Σ_{k=1}^{q+1} ∫_{θk}^{σ(θk)} g(s)x(s)Δs = Σ_{k=0}^{q} ∫_{σ(θk)}^{θ_{k+1}} g(s)x(s)ds + Σ_{k=1}^{q+1} μ(θk)g(θk)x(θk). (1.4)
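To see decomposition (1.4) in action, consider a concrete time scale (our hypothetical choice, not from the paper) 𝕋 = [0, 1] ∪ {2, 3, 4} with T = 3 and σ(T) = 4, and take g ≡ x ≡ 1: the right-scattered points θk of [0, T]_𝕋 are 1, 2, 3, each with graininess μ(θk) = 1, so the delta integral splits into an ordinary Riemann integral over [0, 1] plus graininess-weighted point values. A minimal numerical sketch:

```python
# Sketch of (1.4) for the time scale [0,1] ∪ {2,3,4} with g = x = 1.
g = lambda s: 1.0
x = lambda s: 1.0

# Riemann piece over the dense part [0, 1] (exact for constant integrands)
continuous_part = (1.0 - 0.0) * g(0.5) * x(0.5)

# right-scattered points theta_k with their graininess mu(theta_k)
scattered = [(1.0, 1.0), (2.0, 1.0), (3.0, 1.0)]
jump_part = sum(mu_k * g(th) * x(th) for th, mu_k in scattered)

# the full delta integral over [0, sigma(T)] = [0, 4]
delta_integral = continuous_part + jump_part
```

Both sides of (1.4) give 4 here, matching the ordinary length of [0, 4].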

The main purpose of this paper is to establish some sufficient conditions for the existence of at least one, two, or three positive solutions of BVP (1.3), using the Guo-Krasnoselskii and Leggett-Williams fixed point theorems, respectively.

For convenience, we introduce the following notation:

max f0 = lim_{x→0⁺} max_{t∈[0,σ(T)]_𝕋} f(t, x)/x, min f0 = lim_{x→0⁺} min_{t∈[0,σ(T)]_𝕋} f(t, x)/x, Ii0 = lim_{x→0⁺} Ii(x)/x,

max f∞ = lim_{x→∞} max_{t∈[0,σ(T)]_𝕋} f(t, x)/x, min f∞ = lim_{x→∞} min_{t∈[0,σ(T)]_𝕋} f(t, x)/x, Ii∞ = lim_{x→∞} Ii(x)/x,

where i = 1, 2,..., m.
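As an orientation for this notation (an illustrative choice of ours, not an example from the paper), take the t-independent function f(t, x) = x²: then f(t, x)/x = x, so max f0 = min f0 = 0 and max f∞ = min f∞ = ∞, exactly the superlinear pattern assumed in condition (H1) of Section 4. A quick numerical sampling of the two limits:

```python
# Sketch: sampling f(t, x)/x near 0 and near infinity for f(t, x) = x**2.
f = lambda t, x: x * x
ratio = lambda x: f(0.0, x) / x   # t-independent, so any t works

near_zero = ratio(1e-8)      # approximates f0 (small for superlinear f)
near_infinity = ratio(1e8)   # approximates f_infinity (large)
```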

This paper is organized as follows. In Section 2, some basic definitions and lemmas on time scales are introduced without proofs. In Section 3, some useful lemmas are established. In particular, Green's function for BVP (1.3) is established. We prove the main results in Sections 4-6.

2 Preliminaries

In this section, we shall first recall some basic definitions, lemmas that are used in what follows. For the details of the calculus on time scales, we refer to books by Bohner and Peterson [28,29].

Definition 2.1. [28] A time scale 𝕋 is an arbitrary nonempty closed subset of ℝ with the topology and ordering inherited from ℝ. The forward and backward jump operators σ, ρ : 𝕋 → 𝕋 and the graininess μ : 𝕋 → ℝ+ are defined, respectively, by

σ(t) := inf{s ∈ 𝕋 : s > t}, ρ(t) := sup{s ∈ 𝕋 : s < t}, μ(t) := σ(t) − t.

In this definition, we put inf ∅ = sup 𝕋 (i.e., σ(t) = t if 𝕋 has a maximum t) and sup ∅ = inf 𝕋 (i.e., ρ(t) = t if 𝕋 has a minimum t). A point t ∈ 𝕋 is called left-dense, left-scattered, right-dense, or right-scattered if ρ(t) = t, ρ(t) < t, σ(t) = t, or σ(t) > t, respectively. Points that are both right-dense and left-dense are called dense. If 𝕋 has a left-scattered maximum m1, define 𝕋^k = 𝕋 − {m1}; otherwise, set 𝕋^k = 𝕋. If 𝕋 has a right-scattered minimum m2, define 𝕋_k = 𝕋 − {m2}; otherwise, set 𝕋_k = 𝕋.
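The operators of Definition 2.1 are easy to compute directly when the time scale is a finite set of points; the sample scale below is an arbitrary illustrative choice:

```python
# Sketch: jump operators and graininess (Definition 2.1) on a finite
# time scale, represented as a list of real points.
def sigma(T, t):
    """Forward jump sigma(t) = inf{s in T : s > t}; inf(empty set) = max(T)."""
    later = [s for s in T if s > t]
    return min(later) if later else max(T)

def rho(T, t):
    """Backward jump rho(t) = sup{s in T : s < t}; sup(empty set) = min(T)."""
    earlier = [s for s in T if s < t]
    return max(earlier) if earlier else min(T)

def mu(T, t):
    """Graininess mu(t) = sigma(t) - t; mu(t) > 0 iff t is right-scattered."""
    return sigma(T, t) - t

T = [0.0, 0.5, 1.0, 2.0, 3.5]   # a sample finite time scale
```

For this scale, every point except the maximum is right-scattered; e.g. σ(1.0) = 2.0 and μ(2.0) = 1.5, while σ(3.5) = 3.5.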

Definition 2.2. [28] A function f : 𝕋 → ℝ is rd-continuous provided it is continuous at each right-dense point in 𝕋 and has a left-sided limit at each left-dense point in 𝕋. The set of rd-continuous functions f : 𝕋 → ℝ will be denoted by Crd(𝕋) = Crd(𝕋, ℝ).

Definition 2.3. [28] If f : 𝕋 → ℝ is a function and t ∈ 𝕋^k, then the delta derivative of f at the point t is defined to be the number fΔ(t) (provided it exists) with the property that for each ε > 0 there is a neighborhood U of t such that

|f(σ(t)) − f(s) − fΔ(t)[σ(t) − s]| ≤ ε|σ(t) − s| for all s ∈ U.

Definition 2.4. [28] For a function f : 𝕋 → ℝ (the range ℝ may in fact be replaced by any Banach space), the (delta) derivative is defined by

fΔ(t) = (f(σ(t)) − f(t)) / (σ(t) − t),

if f is continuous at t and t is right-scattered. If t is not right-scattered, then the derivative is defined by

fΔ(t) = lim_{s→t} (f(σ(t)) − f(s)) / (σ(t) − s) = lim_{s→t} (f(t) − f(s)) / (t − s),

provided this limit exists.

Definition 2.5. [28]If FΔ(t) = f(t), then we define the delta integral by

∫_a^t f(s)Δs = F(t) − F(a).
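For example (a sketch under the assumption 𝕋 = ℤ, where the delta derivative is the forward difference f(t + 1) − f(t)), the delta integral of Definition 2.5 reduces to a telescoping sum:

```python
# Sketch for T = Z: the delta integral from a to t is a finite sum.
def delta_integral(f, a, t):
    """Sum of f over the integers a, a+1, ..., t-1 (assuming a <= t)."""
    return sum(f(s) for s in range(a, t))

F = lambda t: t * t               # a chosen antiderivative
f = lambda t: F(t + 1) - F(t)     # its delta derivative on Z: 2t + 1
```

so delta_integral(f, a, t) recovers F(t) − F(a), as Definition 2.5 asserts.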

Definition 2.6. [28] A function p : 𝕋 → ℝ is said to be regressive provided 1 + μ(t)p(t) ≠ 0 for all t ∈ 𝕋^k, where μ(t) = σ(t) − t is the graininess function. The set of all regressive rd-continuous functions f : 𝕋 → ℝ is denoted by ℛ, while ℛ⁺ = {f ∈ ℛ : 1 + μ(t)f(t) > 0 for all t ∈ 𝕋}. Let p ∈ ℛ. The exponential function is defined by

ep(t, s) = exp(∫_s^t ξ_{μ(τ)}(p(τ)) Δτ),

where ξh(z) is the so-called cylinder transformation, ξh(z) = Log(1 + hz)/h for h ≠ 0 and ξ0(z) = z.

Lemma 2.1. [28]Let p, q . Then

(1) e0(t, s) ≡ 1 and ep(t, t) ≡ 1;

(2) ep(σ(t), s) = (1 + μ(t)p(t))ep(t, s);

(3) 1/ep(t, s) = e_{⊖p}(t, s), where (⊖p)(t) = −p(t)/(1 + μ(t)p(t));

(4) ep(t, s)ep(s, r) = ep(t, r);

(5) epΔ(·, s) = p ep(·, s).
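On 𝕋 = ℤ (so μ ≡ 1 and σ(t) = t + 1), the exponential function becomes a finite product, and the identities of Lemma 2.1 can be checked numerically; the constant coefficient p below is an arbitrary regressive choice:

```python
# Sketch for T = Z: e_p(t, s) = product over tau = s..t-1 of (1 + p(tau))
# for t >= s, extended by e_p(t, s) = 1 / e_p(s, t) for t < s.
def e(p, t, s):
    if t >= s:
        out = 1.0
        for tau in range(s, t):
            out *= 1.0 + p(tau)
        return out
    return 1.0 / e(p, s, t)

p = lambda t: 0.5   # regressive: 1 + mu(t) * p(t) = 1.5 != 0
```

Direct evaluation confirms, e.g., ep(t, t) = 1, the reciprocal property (3), the semigroup property (4), and property (2) with σ(t) = t + 1.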

Lemma 2.2. [28] Assume that f, g : 𝕋 → ℝ are delta differentiable at t ∈ 𝕋^k. Then

(fg)Δ(t) = fΔ(t)g(t) + f(σ(t))gΔ(t) = f(t)gΔ(t) + fΔ(t)g(σ(t)).

Lemma 2.3. [28] Let a ∈ 𝕋^k, b ∈ 𝕋, and assume that f : 𝕋 × 𝕋^k → ℝ is continuous at (t, t), where t ∈ 𝕋^k with t > a. Also assume that fΔ(t, ·) is rd-continuous on [a, σ(t)]. Suppose that for each ε > 0 there exists a neighborhood U of t, independent of τ ∈ [a, σ(t)], such that

|f(σ(t), τ) − f(s, τ) − fΔ(t, τ)(σ(t) − s)| ≤ ε|σ(t) − s| for all s ∈ U,

where fΔ denotes the derivative of f with respect to the first variable. Then

(1) g(t) := ∫_a^t f(t, τ)Δτ implies gΔ(t) = ∫_a^t fΔ(t, τ)Δτ + f(σ(t), t);

(2) h(t) := ∫_t^b f(t, τ)Δτ implies hΔ(t) = ∫_t^b fΔ(t, τ)Δτ − f(σ(t), t).

3 Foundational lemmas

In this section, we first introduce some background definitions and fixed point theorems in Banach spaces, and then present basic lemmas that are crucial in the proofs of the main results.

We define PC = {x : [0, σ(T)]_𝕋 → ℝ | x(t) is piecewise continuous, with discontinuities of the first kind only at the points {ti : 1 ≤ i ≤ m} ⊂ [0, σ(T)]_𝕋, and left-continuous at each of these points}, equipped with the norm ||x|| = sup_{t∈[0,σ(T)]_𝕋} |x(t)|. Then PC is a Banach space.

Definition 3.1. A function x is said to be a positive solution of problem (1.3) if x ∈ PC satisfies problem (1.3) and x(t) > 0 for all t ∈ [0, σ(T)]_𝕋.

Definition 3.2. Let X be a real Banach space. A nonempty set K ⊂ X is called a cone of X if it satisfies the following conditions:

(1) x ∈ K and λ ≥ 0 imply λx ∈ K;

(2) x ∈ K and −x ∈ K imply x = 0.

Every cone K ⊂ X induces an ordering ≤ in X, given by x ≤ y if and only if y − x ∈ K.

Definition 3.3. An operator is called completely continuous if it is continuous and maps bounded sets into precompact sets.

Lemma 3.1. (Guo-Krasnoselskii [30]) Let X be a Banach space and let K ⊂ X be a cone in X. Assume that Ω1, Ω2 are bounded open subsets of X with 0 ∈ Ω1 ⊂ Ω̄1 ⊂ Ω2, and that Φ : K ∩ (Ω̄2 \ Ω1) → K is a completely continuous operator such that either

(1) ||Φx|| ≤ ||x||, x ∈ K ∩ ∂Ω1, and ||Φx|| ≥ ||x||, x ∈ K ∩ ∂Ω2; or

(2) ||Φx|| ≥ ||x||, x ∈ K ∩ ∂Ω1, and ||Φx|| ≤ ||x||, x ∈ K ∩ ∂Ω2.

Then Φ has at least one fixed point in K ∩ (Ω̄2 \ Ω1).
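A one-dimensional toy instance (ours, not from [30]) may help fix ideas: take X = ℝ, K = [0, ∞), and Φ(x) = √x. Then Φ expands on a small sphere and compresses on a large one (alternative (2) of Lemma 3.1), and its fixed point x* = 1 indeed lies in the annulus:

```python
# Sketch: cone expansion/compression for Phi(x) = sqrt(x) on K = [0, inf).
import math

Phi = math.sqrt
r1, r2 = 0.25, 4.0            # illustrative radii with r1 < 1 < r2

expands = Phi(r1) >= r1       # ||Phi x|| >= ||x|| on the small sphere
compresses = Phi(r2) <= r2    # ||Phi x|| <= ||x|| on the large sphere

x = r2                        # iterate toward the fixed point in [r1, r2]
for _ in range(60):
    x = Phi(x)
```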

Lemma 3.2. Suppose h ∈ C([0, σ(T)]_𝕋, ℝ) and νi ∈ ℝ. Then x is a solution of

x(t) = ∫_0^{σ(T)} G(t, s)h(s)Δs + Σ_{i=1}^m G(t, ti)νi, t ∈ [0, σ(T)]_𝕋, (3.1)

where

G(t, s) = Γ⁻¹ep(s, t)[α − ∫_0^{σ(s)} g(r)ep(0, r)Δr] for 0 ≤ s ≤ t ≤ σ(T), and
G(t, s) = Γ⁻¹ep(s, t)[βep(0, σ(T)) + ∫_{σ(s)}^{σ(T)} g(r)ep(0, r)Δr] for 0 ≤ t ≤ s ≤ σ(T),

if and only if x is a solution of the boundary value problem

xΔ(t) + p(t)x(σ(t)) = h(t), t ∈ J := [0, T]_𝕋 \ {t1, t2, ..., tm},
Δx(ti) = x(ti⁺) − x(ti⁻) = νi, i = 1, 2, ..., m,
αx(0) − βx(σ(T)) = ∫_0^{σ(T)} g(s)x(s)Δs. (3.2)

Proof. Assume that x(t) is a solution of (3.2). By the first equation in (3.2), we have

( x ( t ) e p ( t , 0 ) ) Δ = h ( t ) e p ( t , 0 ) . (3.3)

If t ∈ [0, t1], integrating (3.3) from 0 to t, we get

x(t)ep(t, 0) = x(0) + ∫_0^t ep(s, 0)h(s)Δs;

in particular, at t = t1 we have

x(t1⁻)ep(t1, 0) = x(0) + ∫_0^{t1} ep(s, 0)h(s)Δs,

and then, by the impulse condition,

x(t1⁺)ep(t1, 0) = x(0) + ∫_0^{t1} ep(s, 0)h(s)Δs + ν1 ep(t1, 0).

Now let t ∈ (t1, t2]; integrating (3.3) from t1 to t, we obtain

x(t)ep(t, 0) = x(t1⁺)ep(t1, 0) + ∫_{t1}^t ep(s, 0)h(s)Δs = x(0) + ∫_0^t ep(s, 0)h(s)Δs + ν1 ep(t1, 0).

For t ∈ (tk, tk+1], repeating the above process, we can get

x(t)ep(t, 0) = x(0) + ∫_0^t ep(s, 0)h(s)Δs + Σ_{0<ti<t} νi ep(ti, 0),

that is,

x(t) = x(0)ep(0, t) + ∫_0^t ep(s, t)h(s)Δs + Σ_{0<ti<t} νi ep(ti, t).

It follows from αx(0) − βx(σ(T)) = ∫_0^{σ(T)} g(s)x(s)Δs that

x(0) = Γ⁻¹[ β∫_0^{σ(T)} ep(s, σ(T))h(s)Δs + ∫_0^{σ(T)} g(s)(∫_0^s ep(r, s)h(r)Δr)Δs + β Σ_{i=1}^m νi ep(ti, σ(T)) + ∫_0^{σ(T)} g(s)(Σ_{0<ti<s} νi ep(ti, s))Δs ]

= Γ⁻¹[ β∫_0^{σ(T)} ep(s, σ(T))h(s)Δs + ∫_0^{σ(T)}(∫_0^{σ(T)} g(r)ep(s, r)Δr)h(s)Δs − ∫_0^{σ(T)}(∫_0^{σ(s)} g(r)ep(s, r)Δr)h(s)Δs + Σ_{i=1}^m νi(∫_{ti}^{σ(T)} g(s)ep(ti, s)Δs + βep(ti, σ(T))) ],

where Γ⁻¹ = [α − βep(0, σ(T)) − ∫_0^{σ(T)} g(s)ep(0, s)Δs]⁻¹. Then

x(t) = Γ⁻¹ep(0, t)[ β∫_0^{σ(T)} ep(s, σ(T))h(s)Δs + ∫_0^{σ(T)}(∫_0^{σ(T)} g(r)ep(s, r)Δr)h(s)Δs − ∫_0^{σ(T)}(∫_0^{σ(s)} g(r)ep(s, r)Δr)h(s)Δs + Σ_{i=1}^m νi(∫_{ti}^{σ(T)} g(s)ep(ti, s)Δs + βep(ti, σ(T))) ] + ∫_0^t ep(s, t)h(s)Δs + Σ_{0<ti<t} νi ep(ti, t)

= ∫_0^{σ(T)} G(t, s)h(s)Δs + Σ_{i=1}^m G(t, ti)νi. (3.4)

This means that if x is a solution of (3.2) then x satisfies (3.1).

On the other hand, if x satisfies (3.1), we have

x(t) = ∫_0^{σ(T)} G(t, s)h(s)Δs + Σ_{i=1}^m G(t, ti)νi, t ∈ [0, σ(T)]_𝕋.

Then

x(t)ep(t, 0) = ∫_0^{σ(T)} H(s)h(s)Δs + Σ_{i=1}^m H(ti)νi, t ∈ [0, σ(T)]_𝕋, (3.5)

where

H(s) = Γ⁻¹ep(s, 0)[α − ∫_0^{σ(s)} g(r)ep(0, r)Δr] for 0 ≤ s ≤ t ≤ σ(T), and H(s) = Γ⁻¹ep(s, 0)[βep(0, σ(T)) + ∫_{σ(s)}^{σ(T)} g(r)ep(0, r)Δr] for 0 ≤ t ≤ s ≤ σ(T).

Notice that

[∫_0^{σ(T)} H(s)h(s)Δs]Δ = Γ⁻¹[∫_0^t ep(s, 0)(α − ∫_0^{σ(s)} g(r)ep(0, r)Δr)h(s)Δs]Δ + Γ⁻¹[∫_t^{σ(T)} ep(s, 0)(βep(0, σ(T)) + ∫_{σ(s)}^{σ(T)} g(r)ep(0, r)Δr)h(s)Δs]Δ

= Γ⁻¹ep(t, 0)[α − ∫_0^{σ(t)} g(r)ep(0, r)Δr]h(t) − Γ⁻¹ep(t, 0)[βep(0, σ(T)) + ∫_{σ(t)}^{σ(T)} g(r)ep(0, r)Δr]h(t)

= ep(t, 0)h(t).

Similarly,

[Σ_{i=1}^m H(ti)νi]Δ = 0.

Hence, we get from (3.5) that

( x ( t ) e p ( t , 0 ) ) Δ = h ( t ) e p ( t , 0 ) ,

that is

x Δ ( t ) + p ( t ) x ( σ ( t ) ) = h ( t ) , t J .

Finally, we can obtain from (3.1) that

x ( t k + ) - x ( t k - ) = ν k , k = 1 , 2 , . . . , m ,

and

αx(0) − βx(σ(T)) = α[∫_0^{σ(T)} G(0, s)h(s)Δs + Σ_{i=1}^m G(0, ti)νi] − β[∫_0^{σ(T)} G(σ(T), s)h(s)Δs + Σ_{i=1}^m G(σ(T), ti)νi]

= ∫_0^{σ(T)} g(s)[∫_0^{σ(T)} G(s, r)h(r)Δr + Σ_{i=1}^m G(s, ti)νi]Δs

= ∫_0^{σ(T)} g(s)x(s)Δs,

where the second equality follows by substituting the two branches of G, expanding, and collecting terms using the definition of Γ.

The proof of the lemma is complete. ■

Lemma 3.3. Let G(t, s) be defined as in Lemma 3.2. Then the following properties hold.

(1) G(t, s) > 0 for all t, s ∈ [0, σ(T)]_𝕋;

(2) A ≤ G(t, s) ≤ B for all t, s ∈ [0, σ(T)]_𝕋, where

A = Γ⁻¹βep²(0, σ(T)), B = Γ⁻¹ep(σ(T), 0)[α + βep(0, σ(T)) + ∫_0^{σ(T)} g(s)ep(0, s)Δs].

Proof. Since α − βep(0, σ(T)) − ∫_0^{σ(T)} g(s)ep(0, s)Δs > 0, it is clear that (1) holds. We now show (2). For 0 ≤ s < t ≤ σ(T),

G(t, s) = Γ⁻¹ep(s, t)[α − ∫_0^{σ(s)} g(r)ep(0, r)Δr] ≥ Γ⁻¹ep(s, 0)ep(0, t)[α − ∫_0^{σ(T)} g(r)ep(0, r)Δr] ≥ Γ⁻¹ep(0, σ(T))[α − ∫_0^{σ(T)} g(r)ep(0, r)Δr] ≥ Γ⁻¹βep²(0, σ(T)) =: A,

while for 0 ≤ t ≤ s ≤ σ(T),

G(t, s) = Γ⁻¹ep(s, t)[βep(0, σ(T)) + ∫_{σ(s)}^{σ(T)} g(r)ep(0, r)Δr] ≥ Γ⁻¹ep(s, 0)ep(0, t)βep(0, σ(T)) ≥ Γ⁻¹βep²(0, σ(T)) = A.

Hence, the left-hand inequality in (2) holds, and it is easy to show that the right-hand inequality also holds. The proof is complete. ■
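The bounds of Lemma 3.3 can be sanity-checked numerically in a special case (our sketch, not the paper's): take 𝕋 = ℤ with σ(T) = N, a constant p, and g ≡ 0, so the integral terms in G, A, and B drop out; the values of α, β, p, N below are illustrative and chosen so that Γ > 0.

```python
# Sketch: the Green's function of Lemma 3.2 on T = Z with g = 0.
N, alpha, beta, p = 5, 2.0, 1.0, 0.5     # illustrative parameters

ep = lambda t, s: (1.0 + p) ** (t - s)   # e_p(t, s) on Z for constant p
Gamma = alpha - beta * ep(0, N)          # must be positive, as in (1.3)

def G(t, s):
    if s < t:                                    # branch 0 <= s < t
        return ep(s, t) * alpha / Gamma
    return ep(s, t) * beta * ep(0, N) / Gamma    # branch t <= s

A = beta * ep(0, N) ** 2 / Gamma                   # lower bound of Lemma 3.3
B = ep(N, 0) * (alpha + beta * ep(0, N)) / Gamma   # upper bound (with g = 0)
```

Scanning all grid points t, s ∈ {0, ..., N} confirms 0 < A ≤ G(t, s) ≤ B for these parameters.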

Define an operator Φ : PC PC by

(Φx)(t) = ∫_0^{σ(T)} G(t, s)f(s, x(σ(s)))Δs + Σ_{i=1}^m G(t, ti)Ii(x(ti)).

By Lemma 3.2, the fixed points of Φ are solutions of problem (1.3).

Lemma 3.4. The operator Φ : PC PC is completely continuous.

Proof. The first step we will show that Φ : PC PC is continuous. Let { x n } n = 1 be a sequence such that lim n x n = x in PC. Then

|(Φxn)(t) − (Φx)(t)| = |∫_0^{σ(T)} G(t, s)[f(s, xn(σ(s))) − f(s, x(σ(s)))]Δs + Σ_{i=1}^m G(t, ti)[Ii(xn(ti)) − Ii(x(ti))]| ≤ B∫_0^{σ(T)} |f(s, xn(σ(s))) − f(s, x(σ(s)))|Δs + B Σ_{i=1}^m |Ii(xn(ti)) − Ii(x(ti))|.

Since f(t, x) and Ii(x)(1 ≤ i m) are continuous in x, we have |(Φxn)(t) - (Φx)(t)| → 0, which leads to ||Φxn - Φx||PC → 0, as n → ∞. That is, Φ : PC PC is continuous.

Next, we will show that Φ : PC PC is a compact operator by two steps.

Let U PC be a bounded set.

Firstly, we will show that {Φx : x U}is bounded. For any x U, we have

|(Φx)(t)| = |∫_0^{σ(T)} G(t, s)f(s, x(σ(s)))Δs + Σ_{i=1}^m G(t, ti)Ii(x(ti))| ≤ B∫_0^{σ(T)} |f(s, x(σ(s)))|Δs + B Σ_{i=1}^m |Ii(x(ti))|.

By the continuity of f(t, x) and Ii(x) (1 ≤ i ≤ m), we conclude from the above inequality that {Φx : x ∈ U} is bounded.

Secondly, we will show that {Φx : x U} is the set of equicontinuous functions. For any x, y U, then

|(Φx)(t) − (Φy)(t)| = |∫_0^{σ(T)} G(t, s)[f(s, x(σ(s))) − f(s, y(σ(s)))]Δs + Σ_{i=1}^m G(t, ti)[Ii(x(ti)) − Ii(y(ti))]| ≤ B∫_0^{σ(T)} |f(s, x(σ(s))) − f(s, y(σ(s)))|Δs + B Σ_{i=1}^m |Ii(x(ti)) − Ii(y(ti))|.

By the continuity of f(t, x) and Ii(x) (1 ≤ i ≤ m), the right-hand side tends to zero uniformly as |x − y| → 0. Consequently, {Φx : x ∈ U} is a set of equicontinuous functions.

By Arzela-Ascoli theorem on time scales [31], {Φx : x U} is a relatively compact set. So Φ maps a bounded set into a relatively compact set, and Φ is a compact operator.

From above three steps, it is easy to see that Φ : PC PC is completely continuous. The proof is complete. ■

Let K = {x ∈ PC : x(t) ≥ δ||x||, t ∈ [0, σ(T)]_𝕋}, where δ = A/B ∈ (0, 1). It is not difficult to verify that K is a cone in PC.

Lemma 3.5. Φ maps K into K.

Proof. Obviously, Φ(K) ⊂ PC. ∀x K, we have

(Φx)(t) = ∫_0^{σ(T)} G(t, s)f(s, x(σ(s)))Δs + Σ_{i=1}^m G(t, ti)Ii(x(ti)) ≤ B∫_0^{σ(T)} f(s, x(σ(s)))Δs + B Σ_{i=1}^m Ii(x(ti)), t ∈ [0, σ(T)]_𝕋,

which implies

||Φx|| ≤ B∫_0^{σ(T)} f(s, x(σ(s)))Δs + B Σ_{i=1}^m Ii(x(ti)).

Therefore,

(Φx)(t) ≥ A∫_0^{σ(T)} f(s, x(σ(s)))Δs + A Σ_{i=1}^m Ii(x(ti)) = (A/B)[B∫_0^{σ(T)} f(s, x(σ(s)))Δs + B Σ_{i=1}^m Ii(x(ti))] ≥ δ||Φx||.

Hence, Φ(K) ⊂ K. The proof is complete. ■

4 Existence of at least one positive solution

In this section, we will state and prove our main result about the existence of at least one positive solution of problem (1.3).

Theorem 4.1. Assume that one of the following conditions is satisfied:

(H1) max f0 = 0, min f= ∞, and Ii0 = 0, i = 1, 2,..., m; or

(H2) max f= 0, min f0 = ∞, and Ii= 0, i = 1, 2,..., m.

Then, problem (1.3) has at least one positive solution.

Proof. First assume that (H1) holds. In this case, since max f0 = 0 and Ii0 = 0, i = 1, 2, ..., m, for ε ≤ (Bσ(T) + Bm)⁻¹ there exists a positive constant r1 such that

f(t, x) ≤ εx and Ii(x) ≤ εx for all x ∈ (0, r1], i = 1, 2, ..., m.

In view of min f∞ = ∞, for M ≥ (Aσ(T)δ)⁻¹ there exists a constant r2 > r1/δ such that

f(t, x) ≥ Mx for all x ∈ [δr2, ∞).

Let Ωi = {x PC : ||x|| < ri}, i = 1, 2.

On the one hand, if x ∈ K ∩ ∂Ω1, we have

(Φx)(t) = ∫_0^{σ(T)} G(t, s)f(s, x(σ(s)))Δs + Σ_{i=1}^m G(t, ti)Ii(x(ti)) ≤ B∫_0^{σ(T)} f(s, x(σ(s)))Δs + B Σ_{i=1}^m Ii(x(ti)) ≤ B∫_0^{σ(T)} ε||x||Δs + Bmε||x|| ≤ Bσ(T)εr1 + Bmεr1 ≤ r1 = ||x||,

which yields

||Φx|| ≤ ||x|| for all x ∈ K ∩ ∂Ω1. (4.1)

On the other hand, if x ∈ K ∩ ∂Ω2, we have

(Φx)(t) = ∫_0^{σ(T)} G(t, s)f(s, x(σ(s)))Δs + Σ_{i=1}^m G(t, ti)Ii(x(ti)) ≥ A∫_0^{σ(T)} f(s, x(σ(s)))Δs + A Σ_{i=1}^m Ii(x(ti)) ≥ A∫_0^{σ(T)} f(s, x(σ(s)))Δs ≥ A∫_0^{σ(T)} Mx(s)Δs ≥ Aσ(T)Mδ||x|| ≥ Aσ(T)Mδr2 ≥ r2 = ||x||,

which implies

||Φx|| ≥ ||x|| for all x ∈ K ∩ ∂Ω2. (4.2)

Therefore, by (4.1), (4.2), and Lemma 3.1, it follows that Φ has a fixed point in K ( Ω ̄ 2 \ Ω 1 ) .

Next, assume that (H2) holds. In this case, since max f∞ = 0 and Ii∞ = 0, i = 1, 2, ..., m, for ε′ ≤ (Bσ(T) + Bm)⁻¹ there exists a positive constant r3 such that

f(t, x) ≤ ε′x and Ii(x) ≤ ε′x for all x ∈ [δr3, ∞), i = 1, 2, ..., m.

In view of min f0 = ∞, for M′ ≥ (Aσ(T)δ)⁻¹ there exists a positive constant r4 < δr3 such that

f(t, x) ≥ M′x for all x ∈ (0, r4].

Let Ωi = {x PC : ||x|| < ri}, i = 3, 4.

On the one hand, if x ∈ K ∩ ∂Ω3, we have

(Φx)(t) = ∫_0^{σ(T)} G(t, s)f(s, x(σ(s)))Δs + Σ_{i=1}^m G(t, ti)Ii(x(ti)) ≤ B∫_0^{σ(T)} f(s, x(σ(s)))Δs + B Σ_{i=1}^m Ii(x(ti)) ≤ B∫_0^{σ(T)} ε′||x||Δs + Bmε′||x|| ≤ Bσ(T)ε′r3 + Bmε′r3 ≤ r3 = ||x||,

which yields

||Φx|| ≤ ||x|| for all x ∈ K ∩ ∂Ω3. (4.3)

On the other hand, if x ∈ K ∩ ∂Ω4, we have

(Φx)(t) = ∫_0^{σ(T)} G(t, s)f(s, x(σ(s)))Δs + Σ_{i=1}^m G(t, ti)Ii(x(ti)) ≥ A∫_0^{σ(T)} f(s, x(σ(s)))Δs + A Σ_{i=1}^m Ii(x(ti)) ≥ A∫_0^{σ(T)} f(s, x(σ(s)))Δs ≥ A∫_0^{σ(T)} M′x(s)Δs ≥ Aσ(T)M′δ||x|| ≥ Aσ(T)M′δr4 ≥ r4 = ||x||,

which implies

||Φx|| ≥ ||x|| for all x ∈ K ∩ ∂Ω4. (4.4)

Hence, from (4.3) and (4.4) and Lemma 3.1, we conclude that Φ has a fixed point in K ( Ω ̄ 3 \ Ω 4 ) , that is, problem (1.3) has at least one positive solution. The proof is complete. ■

5 Existence of at least two positive solutions

In this section, we will state and prove our main results about the existence of at least two positive solutions to problem (1.3).

Theorem 5.1. Assume that the following conditions hold.

(H3) min f0 = +∞, min f= +∞.

(H4) There exists a positive constant R such that f(t, x) < R/(2Bσ(T)) for all 0 < x ≤ R.

(H5) Ii(x) < x/(2Bm) for x ∈ (0, ∞), i = 1, 2, ..., m.

Then, problem (1.3) has at least two positive solutions.

Proof. Let ΩR = {x ∈ PC : ||x|| < R}. From (H4) and (H5), for x ∈ K ∩ ∂ΩR we get

(Φx)(t) = ∫_0^{σ(T)} G(t, s)f(s, x(σ(s)))Δs + Σ_{i=1}^m G(t, ti)Ii(x(ti)) ≤ B∫_0^{σ(T)} f(s, x(σ(s)))Δs + B Σ_{i=1}^m Ii(x(ti)) < Bσ(T)·R/(2Bσ(T)) + Bm·R/(2Bm) = R = ||x||.

So

||Φx|| < ||x|| for all x ∈ K ∩ ∂ΩR. (5.1)

Since min f0 = +∞, for M ≥ (Aσ(T)δ)⁻¹ there exists a positive constant R1 < δR such that

f(t, x) ≥ Mx for all x ∈ (0, R1].

Let Ω_{R1} = {x ∈ PC : ||x|| < R1}. For any x ∈ K ∩ ∂Ω_{R1}, we have

(Φx)(t) = ∫_0^{σ(T)} G(t, s)f(s, x(σ(s)))Δs + Σ_{i=1}^m G(t, ti)Ii(x(ti)) ≥ A∫_0^{σ(T)} f(s, x(σ(s)))Δs + A Σ_{i=1}^m Ii(x(ti)) ≥ A∫_0^{σ(T)} f(s, x(σ(s)))Δs ≥ A∫_0^{σ(T)} Mx(s)Δs ≥ Aσ(T)Mδ||x|| = Aσ(T)MδR1 ≥ R1 = ||x||.

Hence,

||Φx|| ≥ ||x|| for all x ∈ K ∩ ∂Ω_{R1}. (5.2)

Similarly, since min f∞ = +∞, for M′ ≥ (Aσ(T)δ)⁻¹ there exists a positive constant R2 > R/δ such that

f(t, x) ≥ M′x for all x ∈ [δR2, ∞).

Let Ω_{R2} = {x ∈ PC : ||x|| < R2}. For any x ∈ K ∩ ∂Ω_{R2}, we have

(Φx)(t) = ∫_0^{σ(T)} G(t, s)f(s, x(σ(s)))Δs + Σ_{i=1}^m G(t, ti)Ii(x(ti)) ≥ A∫_0^{σ(T)} f(s, x(σ(s)))Δs + A Σ_{i=1}^m Ii(x(ti)) ≥ A∫_0^{σ(T)} f(s, x(σ(s)))Δs ≥ A∫_0^{σ(T)} M′x(s)Δs ≥ Aσ(T)M′δ||x|| = Aσ(T)M′δR2 ≥ R2 = ||x||.

Hence,

||Φx|| ≥ ||x|| for all x ∈ K ∩ ∂Ω_{R2}. (5.3)

Inequalities (5.1) and (5.2) imply that Φ has at least one fixed point in K ∩ (Ω̄R \ Ω_{R1}), which is a positive solution of problem (1.3). Besides, (5.1) and (5.3) imply that Φ has at least one fixed point in K ∩ (Ω̄_{R2} \ ΩR), which is also a positive solution of problem (1.3). Therefore, problem (1.3) has at least two positive solutions x1 and x2 satisfying 0 < R1 ≤ ||x1|| < R < ||x2|| ≤ R2. The proof is complete. ■

Theorem 5.2. Assume that the following conditions hold.

(H6) max f0 = 0, max f= 0, Ii0 = 0, Ii= 0, i = 1, 2,..., m.

(H7) There exists a positive constant r such that f(t, x) > r/(Aσ(T)) for all 0 < x ≤ r.

Then problem (1.3) has at least two positive solutions.

Proof. Let Ωr = {x ∈ PC : ||x|| < r}. From (H7), for x ∈ K ∩ ∂Ωr we get

(Φx)(t) = ∫_0^{σ(T)} G(t, s)f(s, x(σ(s)))Δs + Σ_{i=1}^m G(t, ti)Ii(x(ti)) ≥ A∫_0^{σ(T)} f(s, x(σ(s)))Δs > Aσ(T)·r/(Aσ(T)) = r = ||x||.

So

||Φx|| > ||x|| for all x ∈ K ∩ ∂Ωr. (5.4)

Since max f0 = 0 and Ii0 = 0, i = 1, 2, ..., m, for ε ≤ (Bσ(T) + Bm)⁻¹ there exists a positive constant r1 < δr such that

f(t, x) ≤ εx and Ii(x) ≤ εx for all x ∈ (0, r1], i = 1, 2, ..., m.

Let Ω_{r1} = {x ∈ PC : ||x|| < r1}. For any x ∈ K ∩ ∂Ω_{r1}, we have

(Φx)(t) = ∫_0^{σ(T)} G(t, s)f(s, x(σ(s)))Δs + Σ_{i=1}^m G(t, ti)Ii(x(ti)) ≤ B∫_0^{σ(T)} f(s, x(σ(s)))Δs + B Σ_{i=1}^m Ii(x(ti)) ≤ (Bσ(T) + Bm)εr1 ≤ r1 = ||x||.

Hence,

||Φx|| ≤ ||x|| for all x ∈ K ∩ ∂Ω_{r1}. (5.5)

Similarly, since max f∞ = 0 and Ii∞ = 0, i = 1, 2, ..., m, for ε′ ≤ (Bσ(T) + Bm)⁻¹ there exists a positive constant r2 > r/δ such that

f(t, x) ≤ ε′x and Ii(x) ≤ ε′x for all x ∈ [δr2, ∞), i = 1, 2, ..., m.

Let Ω_{r2} = {x ∈ PC : ||x|| < r2}. For any x ∈ K ∩ ∂Ω_{r2}, we have

( Φ x ) ( t ) = 0 σ ( T