Research

# Positive solutions to second-order differential equations with dependence on the first-order derivative and nonlocal boundary conditions

Author Affiliations

Department of Differential Equations and Applied Mathematics, Gdańsk University of Technology, 11/12 G. Narutowicz Str., Gdańsk, 80-233, Poland

Boundary Value Problems 2013, 2013:8  doi:10.1186/1687-2770-2013-8

 Received: 20 September 2012 Accepted: 15 January 2013 Published: 17 January 2013

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

In this paper, we consider the existence of positive solutions for second-order differential equations with deviating arguments and nonlocal boundary conditions. By the fixed point theorem due to Avery and Peterson, we provide sufficient conditions under which such boundary value problems have at least three positive solutions. We discuss our problem both for delayed and advanced arguments $\alpha$ and also in the case when $\alpha(t)=t$, $t\in[0,1]$. In all cases, the argument $\beta$ can change its character on $[0,1]$; see problem (1). This means that $\beta$ can be delayed on some set $\bar J\subset[0,1]$ and advanced on $[0,1]\setminus\bar J$. An example is added to illustrate the results.

MSC: 34B10.

##### Keywords:
boundary value problems with delayed and advanced arguments; nonlocal boundary conditions; cone; existence of positive solutions; a fixed point theorem

### 1 Introduction

Put $J=[0,1]$, $\mathbb{R}_+=[0,\infty)$. Let us consider the following boundary value problem:

$$\left\{\begin{array}{l} x''(t)+h(t)f\bigl(t,x(\alpha(t)),x'(\beta(t))\bigr)=0,\quad t\in(0,1),\\ x(0)=\gamma x(\eta)+\lambda_1[x],\qquad x(1)=\xi x(\eta)+\lambda_2[x],\quad \eta\in(0,1), \end{array}\right.\tag{1}$$

where $\lambda_1$, $\lambda_2$ denote linear functionals on $C(J)$ given by

$$\lambda_1[x]=\int_0^1 x(t)\,dA(t),\qquad \lambda_2[x]=\int_0^1 x(t)\,dB(t),$$

involving Stieltjes integrals with suitable functions A and B of bounded variation on J. It is not assumed that $\lambda_1$, $\lambda_2$ are positive for all positive x. As we shall see later, the measures dA, dB can be signed measures.

We introduce the following assumptions:

H1: $f\in C(J\times\mathbb{R}_+\times\mathbb{R},\mathbb{R}_+)$, $\alpha,\beta\in C(J,J)$, A and B are functions of bounded variation;

H2: $h\in C(J,\mathbb{R}_+)$ and h does not vanish identically on any subinterval;

H3: $1-\gamma-\lambda_1[p]>0$ or $1-\xi-\lambda_2[p]>0$ for $p(t)=1$, $t\in J$, $\gamma,\xi\ge0$.

Recently, the existence of multiple positive solutions for differential equations has been studied extensively; for details see, for example, [1-31]. However, many works about positive solutions have been done under the assumption that the first-order derivative is not involved explicitly in the nonlinear term; see, for example, [3,6,8-14,17,20,25-27,30]. From this list, only papers [9-12,14,20,30] concern positive solutions to problems with deviating arguments. On the other hand, there are some papers considering the multiplicity of positive solutions with dependence on the first-order derivative; see, for example, [2,4,5,7,15,16,18,19,21-24,28,29,31]. Note that boundary conditions (BCs) in differential problems have an important influence on the results obtained. In this paper, we consider problem (1), which is a problem with dependence on the first-order derivative and with BCs involving Stieltjes integrals whose measures dA, dB (appearing in the functionals $\lambda_1$, $\lambda_2$) may be signed; moreover, problem (1) depends on deviating arguments.

For example, in papers [2,4,15,18,22,24], the existence of positive solutions to second-order differential equations with dependence on the first-order derivative (but without deviating arguments) has been studied with various BCs including the following:

by fixed point theorems in a cone (such as Avery-Peterson, an extension of Krasnoselskii’s fixed point theorem or monotone iterative method) with corresponding assumptions:

$$a_i,b_i\in(0,1),\quad i=1,2,\dots,n,\qquad \sum_{i=1}^{n}a_i,\ \sum_{i=1}^{n}b_i\in(0,1),$$

or $1-\alpha\eta>0$, respectively.

For example, in papers [8-11,20,22,30], the existence of positive solutions to second-order differential equations including impulsive problems, but without dependence on the first-order derivative, has been studied with various BCs including the following:

under corresponding assumptions by fixed point theorems in a cone (such as Avery-Peterson, Leggett-Williams, Krasnoselskii or fixed point index theorem). See also paper [13], where positive solutions have been discussed for second-order impulsive problems with boundary conditions

$$x(0)=0,\qquad x(1)=\int_0^1 x(s)\,dA(s);$$

here the integral condition has the same form as the functionals in problem (1), with a possibly signed measure dA.

Positive solutions to second-order differential equations with boundary conditions that involve Stieltjes integrals have been studied in the case of signed measures in papers [25,26] with BCs including, for example, the following:

The main results of papers [25,26] have been obtained by the fixed point index theory for problems without deviating arguments. The study of positive solutions to boundary value problems with Stieltjes integrals in the case of signed measures has also been done in papers [3,7,13,14,27] for second-order differential equations (also impulsive) or third-order differential equations by using the fixed point index theory, the Avery-Peterson fixed point theorem or fixed point index theory involving eigenvalues.

Note that BCs in problem (1) with functionals λ 1 , λ 2 cover some nonlocal BCs, for example,

for some constants $a_i$, $b_i$ and some functions $g_1$, $g_2$. In our paper, the assumption that the measures dA, dB in the definitions of $\lambda_1$, $\lambda_2$ are positive is not needed. More precisely, one needs to choose the above functions $g_1$, $g_2$ in such a way that the assumption H4 holds. This means that $g_1$, $g_2$ can change sign on J.

A standard approach (see, for example, [25-27]) to studying positive solutions of boundary value problems such as (1) is to rewrite problem (1) as a Hammerstein integral equation

$$x(t)=\Gamma_1(t)\lambda_1[x]+\Gamma_2(t)\lambda_2[x]+\Gamma_3(t)\int_0^1 G(\eta,s)h(s)f\bigl(s,x(\alpha(s)),x'(\beta(s))\bigr)\,ds+\int_0^1 G(t,s)h(s)f\bigl(s,x(\alpha(s)),x'(\beta(s))\bigr)\,ds\equiv Wx(t)\tag{2}$$

and to find a solution as a fixed point of the operator W by using a fixed point theorem in a cone. Here $\Gamma_1$, $\Gamma_2$, $\Gamma_3$ are corresponding continuous functions, while $\lambda_1$ and $\lambda_2$ have the same form as in problem (1). G denotes the Green function connected with our problem; in our case it is given by

$$G(t,s)=\begin{cases} s(1-t), & 0\le s\le t\le1,\\ t(1-s), & 0\le t\le s\le1. \end{cases}$$
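The elementary properties of this Green's function that are used throughout the paper can be checked numerically. The following Python sketch is illustrative only; the grid, the quadrature step and the choice $y\equiv1$ are our assumptions, not part of the paper.

```python
# Sanity check of the stated Green's function G for -x'' = y with
# x(0) = x(1) = 0: symmetry, positivity, and the bound G(t,s) <= G(s,s)
# that is used later in the paper.

def G(t, s):
    """Green's function of -x'' = y, x(0) = x(1) = 0."""
    return s * (1.0 - t) if s <= t else t * (1.0 - s)

grid = [k / 100 for k in range(101)]
for t in grid:
    for s in grid:
        assert abs(G(t, s) - G(s, t)) < 1e-12      # symmetry
        assert 0.0 <= G(t, s) <= G(s, s) + 1e-12   # positivity and the bound

# For y = 1, x(t) = int_0^1 G(t,s) ds should equal t(1-t)/2; check by quadrature.
def x(t, n=2000):
    h = 1.0 / n
    return h * sum(G(t, (k + 0.5) * h) for k in range(n))  # midpoint rule

assert all(abs(x(t) - t * (1 - t) / 2) < 1e-6 for t in (0.1, 0.5, 0.9))
print("Green's function checks passed")
```

The bound $G(t,s)\le G(s,s)$ holds because, for fixed s, the map $t\mapsto G(t,s)$ is piecewise linear with its maximum at $t=s$.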

In our paper, we eliminate $\lambda_1$ and $\lambda_2$ from equation (2) to obtain an equation $x=\bar Wx$ with a corresponding operator $\bar W$, and then we seek solutions as fixed points of this operator $\bar W$.

Note that if we put $\gamma=\xi=0$ in the BCs of problem (1), then this new problem is more general than the previous one, because in this case one can take, for example, $\lambda_1[x]=\gamma x(\eta)$, $\lambda_2[x]=\xi x(\eta)$. In this paper, we try to explain why in some cases we have to discuss problem (1) with constants $\gamma>0$ or $\xi>0$.

To apply such a fixed point theorem in a cone to problem (1), we have to construct a suitable cone K. Usually, we need to find a nonnegative function κ and a constant $\bar\rho\in(0,1]$ such that $G(t,s)\le\kappa(s)$ for $t,s\in J$ and $G(t,s)\ge\bar\rho\,\kappa(s)$ for $t\in[\eta,\bar\eta]\subset[0,1]$ and $s\in J$ (see, for example, [25-27]) to work with the inequality

$$\min_{t\in[\eta,\bar\eta]}|x(t)|\ge\bar\rho\max_{t\in J}|x(t)|.$$

Indeed, for problems without deviating arguments, one can use any interval $[\eta,\bar\eta]\subset[0,1]$. This means that when $\alpha(t)=t$ on J, we can take $\gamma=\xi=0$ in the boundary conditions of problem (1) and work with the inequality

$$\min_{t\in[\zeta,\varrho]}|x(t)|\ge\kappa\max_{t\in J}|x(t)|$$

for ζ, ϱ such that $\zeta+\varrho<1$, $0<\zeta<\varrho<1$, with $\kappa=\min(\zeta,1-\varrho)$; see Section 5.

Note that for problems with delayed or advanced arguments, we have to use the interval $[0,\eta]\subset[0,1)$ or $[\eta,1]\subset(0,1]$, respectively. We see that if $\gamma=\xi=0$, then $\bar\rho=0$ for problem (1) with deviated arguments. It shows that the approach from papers [25-27] needs a small modification for problems with delayed or advanced arguments. Consider the situation $\alpha(t)\le t$ on J. In this case, we can put $\xi=0$ in the boundary conditions of problem (1) and find a constant $\rho\in(0,1)$ to work with the inequality

$$\min_{t\in[0,\eta]}|x(t)|\ge\rho\max_{t\in J}|x(t)|;$$

see Section 3. For the case $\alpha(t)\ge t$ on J, we can put $\gamma=0$ and work similarly as in Section 3; see Section 4. Note that in the above three cases, for the argument β we need only the assumption $\beta\in C(J,J)$, which means that β can change its character on J.

Note that in the cited papers, positive solutions to differential equations with dependence on the first-order derivative have been investigated only for problems without deviating arguments; see [2,4,5,7,15,16,18,19,21-24,28,29,31]. Moreover, the BCs in problem (1) cover some nonlocal BCs discussed earlier.

Motivated by [25-27], in this paper we apply the fixed point theorem due to Avery and Peterson to obtain sufficient conditions for the existence of multiple positive solutions to problems of type (1). In problem (1), the unknown x depends on deviating arguments which can be of advanced or delayed type. To the author's knowledge, this is the first paper in which positive solutions have been investigated for such general boundary value problems with functionals $\lambda_1$, $\lambda_2$ and with deviating arguments α, β in differential equations in which f also depends on the first-order derivative. It is important to indicate that problems of type (1) have been discussed with possibly signed measures dA, dB appearing in the Stieltjes integrals of the functionals $\lambda_1$, $\lambda_2$.

The organization of this paper is as follows. In Section 2, we present some necessary lemmas connected with our main results. In Section 3, we first present some definitions and a theorem of Avery and Peterson which is useful in our research. Also in Section 3, we discuss the existence of multiple positive solutions to problems with delayed argument α, by using the above mentioned Avery-Peterson theorem. At the end of this section, an example is added to verify theoretical results. In Section 4, we formulate sufficient conditions under which problems with advanced argument α have positive solutions. In the last section, we discuss problems of type (1) when α ( t ) = t on J.

### 2 Some lemmas

Let us introduce the following notations:

$$\|x\|=\max\bigl(\|x\|_1,\|x'\|_1\bigr)\quad\text{with } \|z\|_1=\max_{t\in J}|z(t)|.$$

Lemma 1 Let $x\in C^1(J,\mathbb{R})$, $p(t)=1$, $t\in J$. Assume that A and B are functions of bounded variation and, moreover,

$$x(0)=\gamma x(\eta)+\lambda_1[x],\qquad x(1)=\xi x(\eta)+\lambda_2[x],\quad \gamma,\xi\ge0,\ \eta\in(0,1)$$

with

(i) $1-\gamma-\lambda_1[p]\ne0$ or

(ii) $1-\xi-\lambda_2[p]\ne0$.

Then

$$\|x\|_1\le M\|x'\|_1,\qquad M=1+\begin{cases}\dfrac{\operatorname{Var}A+\gamma}{|1-\gamma-\lambda_1[p]|} & \text{in case (i)},\\[2mm] \dfrac{\operatorname{Var}B+\xi}{|1-\xi-\lambda_2[p]|} & \text{in case (ii)}.\end{cases}$$

Here $\operatorname{Var}A$ denotes the variation of the function A on J.

Proof Note that in case (i), we have

$$x(0)=\gamma x(\eta)+\lambda_1[x]=\gamma\bigl[x(\eta)-x(0)\bigr]+\gamma x(0)+\int_0^1\bigl(x(t)-x(0)\bigr)\,dA(t)+\lambda_1[p]x(0)=\gamma\int_0^\eta x'(s)\,ds+\int_0^1\Bigl(\int_0^t x'(s)\,ds\Bigr)dA(t)+\gamma x(0)+\lambda_1[p]x(0),$$

so

$$x(0)=\frac{1}{1-\gamma-\lambda_1[p]}\Bigl[\gamma\int_0^\eta x'(s)\,ds+\int_0^1\Bigl(\int_0^t x'(s)\,ds\Bigr)dA(t)\Bigr].$$

Hence,

$$|x(0)|\le\frac{1}{|1-\gamma-\lambda_1[p]|}\bigl(\gamma+\operatorname{Var}A\bigr)\|x'\|_1.$$

Combining this with the relation

$$x(t)=x(0)+\int_0^t x'(s)\,ds,$$

we obtain

$$\|x\|_1\le|x(0)|+\|x'\|_1\le M\|x'\|_1.$$

This proves case (i).

In case (ii), similarly,

$$x(1)=\xi x(\eta)+\lambda_2[x]=\xi\bigl[x(\eta)-x(1)\bigr]+\xi x(1)-\int_0^1\bigl(x(1)-x(t)\bigr)\,dB(t)+\lambda_2[p]x(1)=-\xi\int_\eta^1 x'(s)\,ds-\int_0^1\Bigl(\int_t^1 x'(s)\,ds\Bigr)dB(t)+\xi x(1)+\lambda_2[p]x(1),$$

so

$$x(1)=-\frac{1}{1-\xi-\lambda_2[p]}\Bigl[\xi\int_\eta^1 x'(s)\,ds+\int_0^1\Bigl(\int_t^1 x'(s)\,ds\Bigr)dB(t)\Bigr].$$

Hence,

$$|x(1)|\le\frac{1}{|1-\xi-\lambda_2[p]|}\bigl(\xi+\operatorname{Var}B\bigr)\|x'\|_1.$$

Combining this with the relation

$$x(t)=x(1)-\int_t^1 x'(s)\,ds,$$

we get the result in case (ii). This ends the proof. □

Remark 1 If we assume that A and B are increasing functions, then there exists $\sigma\in J$ such that

$$x(0)=\frac{1}{1-\gamma-\lambda_1[p]}\Bigl[\gamma\int_0^\eta x'(s)\,ds+\int_0^1\Bigl(\int_0^t x'(s)\,ds\Bigr)dA(t)\Bigr]=\frac{1}{1-\gamma-\lambda_1[p]}\Bigl[\gamma\int_0^\eta x'(s)\,ds+\int_0^\sigma x'(s)\,ds\int_0^1 dA(t)\Bigr].$$

Hence,

$$|x(0)|\le\frac{1}{|1-\gamma-\lambda_1[p]|}\Bigl(\gamma+\Bigl|\int_0^1 dA(t)\Bigr|\Bigr)\|x'\|_1.$$

Similarly, we can show that

$$|x(1)|\le\frac{1}{|1-\xi-\lambda_2[p]|}\Bigl(\xi+\Bigl|\int_0^1 dB(t)\Bigr|\Bigr)\|x'\|_1.$$

Now, the constant M from Lemma 1 has the form

$$M=1+\begin{cases}\dfrac{1}{|1-\gamma-\lambda_1[p]|}\Bigl(\gamma+\bigl|\int_0^1 dA(t)\bigr|\Bigr) & \text{in case (i)},\\[2mm] \dfrac{1}{|1-\xi-\lambda_2[p]|}\Bigl(\xi+\bigl|\int_0^1 dB(t)\bigr|\Bigr) & \text{in case (ii)}.\end{cases}$$

Consider the following problem:

$$\left\{\begin{array}{l} u''(t)+y(t)=0,\quad t\in(0,1),\\ u(0)=\gamma u(\eta)+\lambda_1[u],\qquad u(1)=\xi u(\eta)+\lambda_2[u],\quad \eta\in(0,1),\ \gamma,\xi\ge0. \end{array}\right.\tag{3}$$

Let us introduce the assumption.

H0: A and B are functions of bounded variation and

$$\delta\equiv1-\gamma+\eta(\gamma-\xi)\ne0,\qquad \Delta\equiv A_1(B_2-1+\xi\eta)+A_2(1-\xi-B_1)+\delta-\eta\gamma B_1-(1-\gamma)B_2\ne0$$

for

$$A_1=\int_0^1 dA(t),\quad A_2=\int_0^1 t\,dA(t),\quad B_1=\int_0^1 dB(t),\quad B_2=\int_0^1 t\,dB(t),$$

$$G_1(s)=\int_0^1 G(t,s)\,dA(t),\qquad G_2(s)=\int_0^1 G(t,s)\,dB(t).$$

We require the following result.

Lemma 2 Let the assumption H0 hold and let $y\in L^1(J,\mathbb{R})$. Then problem (3) has a unique solution given by

$$u(t)=\frac{1}{\Delta}\bigl[1-\xi\eta-B_2-(1-\xi-B_1)t\bigr]\lambda_1[\bar Fy]+\frac{1}{\Delta}\bigl[\eta\gamma+A_2+(1-\gamma-A_1)t\bigr]\lambda_2[\bar Fy]+\bar Fy(t)$$

with

$$\bar Fy(t)=\frac{\gamma+t(\xi-\gamma)}{\delta}\int_0^1 G(\eta,s)y(s)\,ds+\int_0^1 G(t,s)y(s)\,ds.$$

Proof Integrating the differential equation in (3) twice, we have

$$u(t)=u(0)+tu'(0)-\int_0^t(t-s)y(s)\,ds.\tag{4}$$

Put $t=1$ and use the boundary conditions from problem (3) to obtain

$$\xi u(\eta)+\lambda_2[u]=\gamma u(\eta)+\lambda_1[u]+u'(0)-\int_0^1(1-s)y(s)\,ds.$$

Now, finding $u'(0)$ from this and then substituting it into formula (4), we have

$$u(t)=\bigl[\gamma+t(\xi-\gamma)\bigr]u(\eta)+(1-t)\lambda_1[u]+t\lambda_2[u]+\int_0^1 G(t,s)y(s)\,ds.\tag{5}$$

Next, putting $t=\eta$, we can find $u(\eta)$ and then substitute it into formula (5) to obtain

$$u(t)=\frac{1}{\delta}\Bigl(\bigl[1-\xi\eta-(1-\xi)t\bigr]\lambda_1[u]+\bigl[\eta\gamma+(1-\gamma)t\bigr]\lambda_2[u]\Bigr)+\bar Fy(t).\tag{6}$$

Now, we have to eliminate $\lambda_1[u]$ and $\lambda_2[u]$ from (6). If u is a solution of (6), then

$$\left\{\begin{aligned}\lambda_1[u]&=\frac{1}{\delta}\bigl[(1-\xi\eta)A_1-(1-\xi)A_2\bigr]\lambda_1[u]+\frac{1}{\delta}\bigl[\eta\gamma A_1+(1-\gamma)A_2\bigr]\lambda_2[u]+\lambda_1[\bar Fy],\\ \lambda_2[u]&=\frac{1}{\delta}\bigl[(1-\xi\eta)B_1-(1-\xi)B_2\bigr]\lambda_1[u]+\frac{1}{\delta}\bigl[\eta\gamma B_1+(1-\gamma)B_2\bigr]\lambda_2[u]+\lambda_2[\bar Fy].\end{aligned}\right.$$

Solving this system with respect to $\lambda_1[u]$, $\lambda_2[u]$ and then substituting into (6), we obtain the assertion of this lemma. This ends the proof. □
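As a sanity check of the formula in Lemma 2, one can verify numerically that it satisfies problem (3) for concrete data. The sketch below is not from the paper; the choices $\gamma=\frac14$, $\xi=0$, $\eta=\frac12$, the measures of Remark 4 and $y\equiv1$ (for which $\int_0^1 G(t,s)\,ds=t(1-t)/2$ in closed form) are illustrative assumptions.

```python
# Verify that u from Lemma 2 satisfies u'' + y = 0 and both nonlocal BCs
# for sample data: gamma = 1/4, xi = 0, eta = 1/2, dA = (3t-1)dt,
# dB = (7t/2 - 1)dt, y = 1.

def simpson(f, n=200):                      # composite Simpson rule on [0, 1]
    h = 1.0 / n
    s = f(0.0) + f(1.0) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n))
    return s * h / 3

gamma, xi, eta = 0.25, 0.0, 0.5
dA = lambda t: 3 * t - 1                    # densities of the signed measures
dB = lambda t: 3.5 * t - 1

A1, A2 = simpson(dA), simpson(lambda t: t * dA(t))
B1, B2 = simpson(dB), simpson(lambda t: t * dB(t))
delta = 1 - gamma + eta * (gamma - xi)
Delta = (A1 * (B2 - 1 + xi * eta) + A2 * (1 - xi - B1)
         + delta - eta * gamma * B1 - (1 - gamma) * B2)

I_eta = eta * (1 - eta) / 2                 # int_0^1 G(eta, s) ds for y = 1

def Fbar(t):                                # \bar{F}y(t) for y = 1
    return (gamma + t * (xi - gamma)) / delta * I_eta + t * (1 - t) / 2

lam1F = simpson(lambda t: Fbar(t) * dA(t))
lam2F = simpson(lambda t: Fbar(t) * dB(t))

def u(t):                                   # the closed-form solution of Lemma 2
    return ((1 - xi * eta - B2 - (1 - xi - B1) * t) * lam1F
            + (eta * gamma + A2 + (1 - gamma - A1) * t) * lam2F) / Delta + Fbar(t)

lam1u = simpson(lambda t: u(t) * dA(t))
lam2u = simpson(lambda t: u(t) * dB(t))

h = 1e-3                                    # u'' + 1 = 0 via a second difference
assert abs((u(0.3 + h) - 2 * u(0.3) + u(0.3 - h)) / h**2 + 1.0) < 1e-6
assert abs(u(0) - gamma * u(eta) - lam1u) < 1e-9   # u(0) = gamma*u(eta) + lam1[u]
assert abs(u(1) - xi * u(eta) - lam2u) < 1e-9      # u(1) = xi*u(eta) + lam2[u]
print("Lemma 2 formula verified for the sample data")
```

Since u is a quadratic in t here and the measure densities are linear, Simpson's rule evaluates all the Stieltjes integrals exactly up to rounding.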

Define the operator T by

$$Tu(t)=\frac{1}{\Delta}\bigl[1-\xi\eta-B_2-(1-\xi-B_1)t\bigr]\lambda_1[Fu]+\frac{1}{\Delta}\bigl[\eta\gamma+A_2+(1-\gamma-A_1)t\bigr]\lambda_1\text{-free term }\lambda_2[Fu]+Fu(t)$$

with

$$Fu(t)=\frac{\gamma+t(\xi-\gamma)}{\delta}\int_0^1 G(\eta,s)h(s)f\bigl(s,u(\alpha(s)),u'(\beta(s))\bigr)\,ds+\int_0^1 G(t,s)h(s)f\bigl(s,u(\alpha(s)),u'(\beta(s))\bigr)\,ds.$$

We consider the Banach space $E=\bigl(C^1(J,\mathbb{R}),\|\cdot\|\bigr)$ with the norm $\|x\|=\max(\|x\|_1,\|x'\|_1)$. Define the cone $K\subset E$ by

$$K=\Bigl\{x\in E : x(t)\ge0,\ t\in J,\ \lambda_1[x]\ge0,\ \lambda_2[x]\ge0,\ \min_{t\in[0,\eta]}x(t)\ge\rho\|x\|_1\Bigr\}$$

with

$$\rho=\min\Bigl(\gamma(1-\eta),\ 1-\eta,\ \frac{\eta\gamma}{1+\gamma(\eta-1)}\Bigr),\quad \gamma>0.$$

Let us introduce the following assumption.

H4: A and B are functions of bounded variation and

(i) $\delta>0$, $\Delta>0$, $A_j\ge0$, $B_j\ge0$, $G_j(s)\ge0$ for $j=1,2$, where $A_j$, $B_j$, $G_j$, δ, Δ are defined as in the assumption H0,

(ii) $\gamma(A_1-A_2)+\xi A_2\ge0$, $\gamma(B_1-B_2)+\xi B_2\ge0$, $\eta\gamma B_1+(1-\gamma)B_2\ge0$, $(1-\xi\eta)A_1-(1-\xi)A_2\ge0$, $B_1-B_2\ge0$, $\delta-\eta\gamma B_1-(1-\gamma)B_2\ge0$, $\eta\gamma A_1+(1-\gamma)A_2\ge0$, $1-\xi\eta-B_2\ge0$, $\delta-(1-\xi\eta)A_1+(1-\xi)A_2\ge0$, $(1-\xi\eta)B_1-(1-\xi)B_2\ge0$.

Lemma 3 Let the assumptions H1-H4 hold. Then $T:K\to K$.

Proof Clearly, $u\in K$ is a positive solution of problem (1) if and only if u solves the operator equation $u=Tu$. Then

$$\left\{\begin{aligned}\lambda_1[Fu]&=\frac{1}{\delta}\bigl[\gamma(A_1-A_2)+\xi A_2\bigr]\int_0^1 G(\eta,s)h(s)f\bigl(s,u(\alpha(s)),u'(\beta(s))\bigr)\,ds+\int_0^1 G_1(s)h(s)f\bigl(s,u(\alpha(s)),u'(\beta(s))\bigr)\,ds,\\ \lambda_2[Fu]&=\frac{1}{\delta}\bigl[\gamma(B_1-B_2)+\xi B_2\bigr]\int_0^1 G(\eta,s)h(s)f\bigl(s,u(\alpha(s)),u'(\beta(s))\bigr)\,ds+\int_0^1 G_2(s)h(s)f\bigl(s,u(\alpha(s)),u'(\beta(s))\bigr)\,ds.\end{aligned}\right.\tag{7}$$

Note that $\lambda_1[Fu]\ge0$, $\lambda_2[Fu]\ge0$ in view of the assumptions H1, H2, H4 and the positivity of the Green function G.

Note that $(Tu)''(t)=-h(t)f\bigl(t,u(\alpha(t)),u'(\beta(t))\bigr)\le0$ on J; moreover, by the assumption H4, the coefficients in the definition of T are nonnegative. Hence, Tu is concave and $Tu(t)\ge0$ on J.

In a similar way, using (7) and the assumption H4, we can show that $\lambda_1[Tu]\ge0$, $\lambda_2[Tu]\ge0$.

Finally, we show that

$$\min_{t\in[0,\eta]}Tu(t)\ge\rho\|Tu\|_1.$$

To do this, we consider two steps. Let $\|Tu\|_1=Tu(t^\ast)$ for some $t^\ast\in J$.

Step 1. Let $Tu(0)<Tu(\eta)$. Then $t^\ast\in(0,\eta)$ or $t^\ast\in(\eta,1)$, and $\min_{t\in[0,\eta]}Tu(t)=Tu(0)$.

Let $t^\ast\in(0,\eta)$. By the concavity of Tu,

$$\frac{\|Tu\|_1-Tu(1)}{1-t^\ast}\le\frac{Tu(\eta)-Tu(1)}{1-\eta},$$

so

$$\|Tu\|_1\le Tu(1)+\frac{1}{1-\eta}\bigl[Tu(\eta)-Tu(1)\bigr]<\frac{1}{1-\eta}Tu(\eta)=\frac{1}{\gamma(1-\eta)}\bigl(Tu(0)-\lambda_1[Tu]\bigr)\le\frac{1}{\gamma(1-\eta)}Tu(0).$$

It yields

$$\min_{t\in[0,\eta]}Tu(t)\ge\gamma(1-\eta)\|Tu\|_1.$$

Let $t^\ast\in(\eta,1)$. By the concavity of Tu,

$$\frac{\|Tu\|_1-Tu(0)}{t^\ast-0}\le\frac{Tu(\eta)-Tu(0)}{\eta-0},$$

so

$$\|Tu\|_1\le\frac{1}{\eta}\bigl[Tu(\eta)+(\eta-1)Tu(0)\bigr]=\frac{1}{\eta}\Bigl[\frac{1}{\gamma}\bigl(Tu(0)-\lambda_1[Tu]\bigr)+(\eta-1)Tu(0)\Bigr]\le\frac{1+\gamma(\eta-1)}{\eta\gamma}Tu(0).$$

It yields

$$\min_{t\in[0,\eta]}Tu(t)\ge\frac{\eta\gamma}{1+\gamma(\eta-1)}\|Tu\|_1.$$

Step 2. Let $Tu(0)\ge Tu(\eta)$. Then $t^\ast\in[0,\eta)$ and $\min_{t\in[0,\eta]}Tu(t)=Tu(\eta)$. Then

$$\frac{\|Tu\|_1-Tu(1)}{1-t^\ast}\le\frac{Tu(\eta)-Tu(1)}{1-\eta},$$

so

$$\|Tu\|_1\le Tu(1)+\frac{1}{1-\eta}\bigl[Tu(\eta)-Tu(1)\bigr]<\frac{1}{1-\eta}Tu(\eta).$$

Hence,

$$\min_{t\in[0,\eta]}Tu(t)\ge(1-\eta)\|Tu\|_1.$$

This shows that $T:K\to K$. This ends the proof. □

Remark 2 Take $dB(t)=(bt-1)\,dt$, $b>1$. Note that this measure changes sign on J (its density is increasing). It is easy to show that

$$B_1=\frac{1}{2}(b-2),\qquad B_2=\frac{1}{6}(2b-3),\qquad G_2(s)=\frac{s(1-s)}{6}(bs+b-3).$$

If we assume that $b\ge3$, then $B_1>0$, $B_2>0$, $G_2(s)\ge0$, $s\in J$.

Remark 3 Take $dA(t)=(at^2-1)\,dt$, $a>1$. Note that this measure changes sign on J (its density is increasing). It is easy to show that

$$A_1=\frac{1}{3}(a-3),\qquad A_2=\frac{1}{4}(a-2),\qquad G_1(s)=\frac{s(1-s)}{12}(as^2+as+a-6).$$

If we assume that $a\ge6$, then $A_1>0$, $A_2>0$, $G_1(s)\ge0$, $s\in J$.
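The closed forms stated in Remarks 2 and 3 can be confirmed by direct quadrature; the values $b=6$ and $a=12$ below are sample choices, not from the paper.

```python
# Check B_1, B_2, A_1, A_2 and G_1, G_2 of Remarks 2-3 against quadrature.
# The integration over t is split at t = s, where G(t, s) has a kink.

def G(t, s):
    return s * (1.0 - t) if s <= t else t * (1.0 - s)

def simpson(f, lo, hi, n=200):              # composite Simpson rule on [lo, hi]
    h = (hi - lo) / n
    tot = f(lo) + f(hi) + sum((4 if k % 2 else 2) * f(lo + k * h) for k in range(1, n))
    return tot * h / 3

b, a = 6.0, 12.0
dB = lambda t: b * t - 1                    # density of dB(t) = (bt - 1) dt
dA = lambda t: a * t * t - 1                # density of dA(t) = (at^2 - 1) dt

assert abs(simpson(dB, 0, 1) - (b - 2) / 2) < 1e-9                       # B_1
assert abs(simpson(lambda t: t * dB(t), 0, 1) - (2 * b - 3) / 6) < 1e-9  # B_2
assert abs(simpson(dA, 0, 1) - (a - 3) / 3) < 1e-9                       # A_1
assert abs(simpson(lambda t: t * dA(t), 0, 1) - (a - 2) / 4) < 1e-9      # A_2

for s in (0.1, 0.5, 0.9):
    G2 = simpson(lambda t: G(t, s) * dB(t), 0, s) + simpson(lambda t: G(t, s) * dB(t), s, 1)
    G1 = simpson(lambda t: G(t, s) * dA(t), 0, s) + simpson(lambda t: G(t, s) * dA(t), s, 1)
    assert abs(G2 - s * (1 - s) * (b * s + b - 3) / 6) < 1e-9
    assert abs(G1 - s * (1 - s) * (a * s * s + a * s + a - 6) / 12) < 1e-9

print("Remark 2 and Remark 3 formulas confirmed")
```

All integrands here are piecewise polynomials of degree at most three, so Simpson's rule reproduces the integrals exactly up to rounding.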

Remark 4 Let $dA(t)=(3t-1)\,dt$, $dB(t)=\bigl(\frac{7}{2}t-1\bigr)\,dt$, $t\in J$. Then the assumptions H3, H4 hold if one of the following conditions is satisfied:

(i) $\xi=0$, $0<\gamma<\frac{1}{2}$,

(ii) $\gamma=0$, $0<\xi<\frac{1}{4}$,

(iii) $\gamma=\xi=0$.

We consider only case (i). First of all, we see that the measures dA, dB change sign on J. Indeed, for $p(t)=1$, $t\in J$, we have

$$A_1=A_2=\lambda_1[p]=\frac{1}{2},\qquad B_1=\lambda_2[p]=\frac{3}{4},\qquad B_2=\frac{2}{3}.$$

This means that the assumption H3 holds. Moreover, a direct computation of δ, Δ, $G_1$, $G_2$ and of the expressions listed in H4 shows that all the required inequalities are satisfied. This proves that the assumption H4 holds.

In a similar way, we prove the assertion in case (ii) or (iii).

### 3 Positive solutions to problem (1) with delayed arguments

Now, we present the necessary definitions from the theory of cones in Banach spaces.

Definition 1 Let E be a real Banach space. A nonempty convex closed set $P\subset E$ is said to be a cone provided that

(i) $ku\in P$ for all $u\in P$ and all $k\ge0$, and

(ii) $u,-u\in P$ implies $u=0$.

Note that every cone $P\subset E$ induces an ordering in E given by $x\le y$ if $y-x\in P$.

Definition 2 A map Φ is said to be a nonnegative continuous concave functional on a cone P of a real Banach space E if $\Phi:P\to\mathbb{R}_+$ is continuous and

$$\Phi\bigl(tx+(1-t)y\bigr)\ge t\Phi(x)+(1-t)\Phi(y)$$

for all $x,y\in P$ and $t\in[0,1]$.

Similarly, we say that the map φ is a nonnegative continuous convex functional on a cone P of a real Banach space E if $\varphi:P\to\mathbb{R}_+$ is continuous and

$$\varphi\bigl(tx+(1-t)y\bigr)\le t\varphi(x)+(1-t)\varphi(y)$$

for all $x,y\in P$ and $t\in[0,1]$.

Definition 3 An operator is called completely continuous if it is continuous and maps bounded sets into precompact sets.

Let φ and Θ be nonnegative continuous convex functionals on P, let Φ be a nonnegative continuous concave functional on P, and let Ψ be a nonnegative continuous functional on P. Then, for positive numbers a, b, c, d, we define the following sets:

$$P(\varphi,d)=\{x\in P:\varphi(x)<d\},\qquad P(\varphi,\Phi,b,d)=\{x\in P: b\le\Phi(x),\ \varphi(x)\le d\},$$

$$P(\varphi,\Theta,\Phi,b,c,d)=\{x\in P: b\le\Phi(x),\ \Theta(x)\le c,\ \varphi(x)\le d\},\qquad R(\varphi,\Psi,a,d)=\{x\in P: a\le\Psi(x),\ \varphi(x)\le d\}.$$

We will use the following fixed point theorem of Avery and Peterson to establish multiple positive solutions to problem (1).

Theorem 1 (see [1])

Let P be a cone in a real Banach space E. Let φ and Θ be nonnegative continuous convex functionals on P, let Φ be a nonnegative continuous concave functional on P, and let Ψ be a nonnegative continuous functional on P satisfying $\Psi(kx)\le k\Psi(x)$ for $0\le k\le1$, such that for some positive numbers $\bar M$ and d,

$$\Phi(x)\le\Psi(x)\quad\text{and}\quad \|x\|\le\bar M\varphi(x)$$

for all $x\in\overline{P(\varphi,d)}$. Suppose that

$$T:\overline{P(\varphi,d)}\to\overline{P(\varphi,d)}$$

is completely continuous and there exist positive numbers a, b, c with $a<b$ such that

(S1): $\{x\in P(\varphi,\Theta,\Phi,b,c,d):\Phi(x)>b\}\ne\emptyset$ and $\Phi(Tx)>b$ for $x\in P(\varphi,\Theta,\Phi,b,c,d)$,

(S2): $\Phi(Tx)>b$ for $x\in P(\varphi,\Phi,b,d)$ with $\Theta(Tx)>c$,

(S3): $0\notin R(\varphi,\Psi,a,d)$ and $\Psi(Tx)<a$ for $x\in R(\varphi,\Psi,a,d)$ with $\Psi(x)=a$.

Then T has at least three fixed points $x_1,x_2,x_3\in\overline{P(\varphi,d)}$ such that

$$\varphi(x_i)\le d\ \text{for } i=1,2,3,\qquad b<\Phi(x_1),\qquad a<\Psi(x_2)\ \text{with } \Phi(x_2)<b$$

and

$$\Psi(x_3)<a.$$

We apply Theorem 1 with the cone K in place of P and let $\bar P_r=\{x\in K:\|x\|\le r\}$. Now, we define the nonnegative continuous concave functional Φ on K by

$$\Phi(x)=\min_{t\in[0,\eta]}|x(t)|.$$

Note that $\Phi(x)\le\|x\|_1$. Put $\Psi(x)=\Theta(x)=\|x\|_1$, $\varphi(x)=\|x'\|_1$.

Now, we can formulate the main result of this section.

Theorem 2 Let the assumptions H1-H4 hold with $\xi=0$, $\gamma>0$. Let $\alpha(t)\le t$, $t\in J$. In addition, we assume that there exist positive constants a, b, c, d, M with $a<b$ and such that

with

and

(A1): $f(t,u,v)\le\frac{d}{\mu}$ for $(t,u,v)\in J\times[0,Md]\times[-d,d]$,

(A2): $f(t,u,v)\ge\frac{b}{L}$ for $(t,u,v)\in[0,\eta]\times\bigl[b,\frac{b}{\rho}\bigr]\times[-d,d]$,

(A3): $f(t,u,v)\le\frac{a}{\mu}$ for $(t,u,v)\in J\times[0,a]\times[-d,d]$.

Then problem (1) has at least three nonnegative solutions $x_1,x_2,x_3$ satisfying $\|x_i'\|_1\le d$, $i=1,2,3$,

$$b\le\Phi(x_1),\qquad a<\|x_2\|_1\ \text{ with }\ \Phi(x_2)<b$$

and $\|x_3\|_1<a$.

Proof Based on the definition of T, we see that $T\bar P_r$ is equicontinuous on J, so T is completely continuous.

Let $x\in\overline{P(\varphi,d)}$, so $\varphi(x)=\|x'\|_1\le d$. By Lemma 1, $\|x\|_1\le Md$, so $0\le x(t)\le Md$, $t\in J$. Assumption (A1) implies $f\bigl(t,x(\alpha(t)),x'(\beta(t))\bigr)\le\frac{d}{\mu}$.

Moreover, in view of (7), we can bound $\lambda_1[Fx]$ and $\lambda_2[Fx]$. Combining these estimates, we have

$$\varphi(Tx)=\max_{t\in[0,1]}\bigl|(Tx)'(t)\bigr|\le\frac{1}{\Delta}(1-B_1)\lambda_1[Fx]+\frac{1}{\Delta}(1-\gamma-A_1)\lambda_2[Fx]+\max_{t\in[0,1]}\bigl|(Fx)'(t)\bigr|\le\frac{d}{\mu}\Bigl(\frac{1}{\Delta}(1-B_1)D_1+\frac{1}{\Delta}(1-\gamma-A_1)D_2+D_3\Bigr)<d.$$

This proves that $T:\overline{P(\varphi,d)}\to\overline{P(\varphi,d)}$.

Now, we need to show that condition (S1) is satisfied. Take

$$x_0(t)=\frac{1}{2}\Bigl(b+\frac{b}{\rho}\Bigr),\quad t\in J.$$

Then $x_0(t)>0$, $t\in J$, and $\lambda_i[x_0]=\frac{1}{2}\bigl(b+\frac{b}{\rho}\bigr)\lambda_i[p]\ge0$, $i=1,2$, for $p(t)=1$, $t\in J$. Moreover, $\varphi(x_0)=0<d$, $\Theta(x_0)=\frac{1}{2}\bigl(b+\frac{b}{\rho}\bigr)\le\frac{b}{\rho}$ and $\Phi(x_0)=\frac{1}{2}\bigl(b+\frac{b}{\rho}\bigr)>b$. This proves that

$$\Bigl\{x_0\in P\Bigl(\varphi,\Theta,\Phi,b,\frac{b}{\rho},d\Bigr): b<\Phi(x_0)\Bigr\}\ne\emptyset.$$

Let $b\le x(t)\le\frac{b}{\rho}$ for $t\in[0,\eta]$. Then $0\le\alpha(t)\le t\le\eta$ for $t\in[0,\eta]$, so $b\le x(\alpha(t))\le\frac{b}{\rho}$, $t\in[0,\eta]$. Assumption (A2) implies $f\bigl(t,x(\alpha(t)),x'(\beta(t))\bigr)\ge\frac{b}{L}$, which, in view of (7), gives corresponding lower bounds for $\lambda_1[Fx]$, $\lambda_2[Fx]$. It yields

$$\Phi(Tx)=\min_{t\in[0,\eta]}(Tx)(t)=\min\bigl((Tx)(0),(Tx)(\eta)\bigr)\ge\gamma(Tx)(\eta)=\frac{\gamma}{\Delta}\bigl[1-B_2-(1-B_1)\eta\bigr]\lambda_1[Fx]+\frac{\gamma}{\Delta}\bigl[\eta\gamma+A_2+(1-\gamma-A_1)\eta\bigr]\lambda_2[Fx]+\frac{\gamma}{\delta}\int_0^1 G(\eta,s)h(s)f\bigl(s,x(\alpha(s)),x'(\beta(s))\bigr)\,ds\ge\frac{b\gamma}{L}\Bigl(\frac{1}{\Delta}\Bigl(\bigl[1-B_2-(1-B_1)\eta\bigr]D_1+\bigl[\eta\gamma+A_2+(1-\gamma-A_1)\eta\bigr]D_2\Bigr)+\frac{D_4}{\delta}\Bigr)>b.$$

This proves that condition (S1) holds.

Now, we need to prove that condition (S2) is satisfied. Take $x\in P(\varphi,\Phi,b,d)$ with $\Theta(Tx)=\|Tx\|_1>\frac{b}{\rho}=c$. Then

$$\Phi(Tx)=\min_{t\in[0,\eta]}(Tx)(t)\ge\rho\|Tx\|_1>\rho\cdot\frac{b}{\rho}=b,$$

so condition (S2) holds.

Finally, we verify condition (S3). Indeed, $\Psi(0)=0<a$, so $0\notin R(\varphi,\Psi,a,d)$. Suppose that $x\in R(\varphi,\Psi,a,d)$ with $\Psi(x)=\|x\|_1=a$. Note that $G(t,s)\le G(s,s)$, $t\in J$. Then

$$\|Fx\|_1\le\frac{\gamma}{\delta}\int_0^1 G(\eta,s)h(s)f\bigl(s,x(\alpha(s)),x'(\beta(s))\bigr)\,ds+\int_0^1 G(s,s)h(s)f\bigl(s,x(\alpha(s)),x'(\beta(s))\bigr)\,ds\le\frac{a}{\mu}\Bigl[\frac{\gamma}{\delta}\int_0^1 G(\eta,s)h(s)\,ds+\int_0^1 G(s,s)h(s)\,ds\Bigr]=\frac{a}{\mu}D_5$$

and finally,

$$\Psi(Tx)=\max_{t\in J}(Tx)(t)\le\frac{1}{\Delta}\bigl[1-B_2\bigr]\lambda_1[Fx]+\frac{1}{\Delta}\bigl[\eta\gamma+A_2+1-\gamma-A_1\bigr]\lambda_2[Fx]+\|Fx\|_1\le\frac{a}{\mu}\Bigl(\frac{1}{\Delta}\bigl(\bigl[1-B_2\bigr]D_1+\bigl[\eta\gamma+A_2+1-\gamma-A_1\bigr]D_2\bigr)+D_5\Bigr)<a.$$

This shows that condition (S3) is satisfied.

Since all the conditions of Theorem 1 are satisfied, problem (1) has at least three nonnegative solutions $x_1,x_2,x_3$ such that $\|x_i'\|_1\le d$ for $i=1,2,3$, and

$$b\le\min_{t\in[0,\eta]}x_1(t),\qquad a<\|x_2\|_1\ \text{ with }\ \min_{t\in[0,\eta]}x_2(t)<b,\qquad \|x_3\|_1<a.$$

This ends the proof. □

Example Consider the following problem:

$$\left\{\begin{array}{l} x''(t)+hf\bigl(t,x(\alpha(t)),x'(\beta(t))\bigr)=0,\quad t\in(0,1),\\ x(0)=\frac{1}{4}x\bigl(\frac{1}{2}\bigr),\qquad x(1)=\frac{1}{2}\int_0^1 x(t)(7t-2)\,dt, \end{array}\right.\tag{8}$$

where

$$f(t,u,v)=\begin{cases}\frac{1}{100}\cos t+\bigl(\frac{v}{20{,}000}\bigr)^2, & (t,u,v)\in[0,1]\times[0,1]\times[-d,d],\\ \frac{1}{100}\cos t+2(u-1)+\bigl(\frac{v}{20{,}000}\bigr)^2, & (t,u,v)\in[0,1]\times[1,16]\times[-d,d],\\ \frac{1}{100}\cos t+30+\bigl(\frac{v}{20{,}000}\bigr)^2, & (t,v)\in[0,1]\times[-d,d],\ u\ge16, \end{cases}$$

with $d=2{,}000$. For example, we can take $\alpha(t)=\bar\rho t$, $\beta(t)=t$ on J with fixed $\bar\rho\in(0,1)$. Indeed, $f\in C\bigl([0,1]\times\mathbb{R}_+\times[-d,d],\mathbb{R}_+\bigr)$, $\gamma=\frac{1}{4}$, $\eta=\frac{1}{2}$, $h(t)=h>0$, $\xi=0$ and

$$\lambda_1[x]=0,\qquad \lambda_2[x]=\frac{1}{2}\int_0^1 x(t)(7t-2)\,dt,\qquad \rho=\frac{1}{8}.$$

Note that $dB(t)=\frac{1}{2}(7t-2)\,dt$, so the measure changes sign on J. Moreover,

so the assumption H4 holds; see Remark 4. Next,

Put $a=1$, $b=2$, $h=30$; then $c=16$, $\mu>37.18$, $L<1.94$. Let $\mu=40$, $L=1$. Then

and

$$f(t,u,v)\le\frac{1}{100}+30+\Bigl(\frac{2{,}000}{20{,}000}\Bigr)^2=30.02<50=\frac{d}{\mu}$$

for $(t,u,v)\in[0,1]\times[0,2d]\times[-d,d]$.

All the assumptions of Theorem 2 hold, so problem (8) has at least three positive solutions.
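The numerical checks behind this example can be reproduced with a short script. This is an illustrative sketch: the formula used for ρ is the one from Section 2, and the sampled grid points and the helper function `f` mirroring problem (8) are our choices.

```python
# Check rho = 1/8, c = 16 and the inequalities (A1)-(A3) for the example data
# gamma = 1/4, eta = 1/2, d = 2000, a = 1, b = 2, mu = 40, L = 1.

import math

gamma, eta, d = 0.25, 0.5, 2000.0
a, b, mu, L = 1.0, 2.0, 40.0, 1.0
rho = min(gamma * (1 - eta), 1 - eta, eta * gamma / (1 + gamma * (eta - 1)))
assert abs(rho - 0.125) < 1e-12
c = b / rho
assert c == 16.0

def f(t, u, v):                       # the nonlinearity of problem (8)
    base = math.cos(t) / 100 + (v / 20000.0) ** 2
    if u <= 1:
        return base
    if u <= 16:
        return base + 2 * (u - 1)
    return base + 30

# (A1): f <= d/mu = 50 on J x [0, 2d] x [-d, d]; the maximum is 30.02.
val = f(0.0, 3 * d, d)
assert abs(val - 30.02) < 1e-9 and val < d / mu
# (A2): f >= b/L = 2 on [0, eta] x [b, b/rho] x [-d, d].
assert min(f(t / 10, u, 0.0) for t in range(6) for u in (2.0, 9.0, 16.0)) >= b / L
# (A3): f <= a/mu = 0.025 on J x [0, a] x [-d, d]; the maximum is 0.02.
assert max(f(0.0, u, d) for u in (0.0, 0.5, 1.0)) <= a / mu
print("Example constants verified")
```

Note that the piecewise definition of f is continuous at $u=16$, since $2(16-1)=30$.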

Remark 5 We can also construct an example in which, for example, $\lambda_1[x]=\int_0^1 x(t)(3t-1)\,dt$, to use the results of Remark 4. Note that this measure also changes sign.

### 4 Positive solutions to problem (1) with advanced arguments

In this section, we consider the case when $\alpha(t)\ge t$ on J, so the interval $[0,\eta]$ is now replaced by $[\eta,1]$. This means that we can put $\gamma=0$ with $\xi>0$ in the boundary conditions of problem (1), because one can take, for example, $\lambda_1[x]=\bar\gamma x(\eta)$. Let us introduce the cone $K_2$ by

$$K_2=\Bigl\{x\in E: x(t)\ge0,\ t\in J,\ \lambda_1[x]\ge0,\ \lambda_2[x]\ge0,\ \min_{t\in[\eta,1]}x(t)\ge\Gamma\|x\|_1\Bigr\}$$

with

$$\Gamma=\min\Bigl(\frac{\xi(1-\eta)}{1-\xi\eta},\ \xi\eta,\ \eta\Bigr),\quad \xi>0.$$

Now $\Phi(x)=\min_{t\in[\eta,1]}|x(t)|$. The functionals Ψ, Θ, φ are defined as in Section 3. We formulate only the main result, using the cone $K_2$ instead of K (see Theorem 2); the proof is similar to the previous one.

Theorem 3 Let the assumptions H1-H4 hold with $\gamma=0$, $\xi>0$. Let $\alpha(t)\ge t$, $t\in J$. In addition, we assume that there exist positive constants a, b, c, d, M with $a<b$ and such that

with

and

(B1): $f(t,u,v)\le\frac{d}{\mu}$ for $(t,u,v)\in J\times[0,Md]\times[-d,d]$,

(B2): $f(t,u,v)\ge\frac{b}{L}$ for $(t,u,v)\in[\eta,1]\times\bigl[b,\frac{b}{\Gamma}\bigr]\times[-d,d]$,

(B3): $f(t,u,v)\le\frac{a}{\mu}$ for $(t,u,v)\in J\times[0,a]\times[-d,d]$.

Then problem (1) has at least three nonnegative solutions $x_1,x_2,x_3$ satisfying $\|x_i'\|_1\le d$, $i=1,2,3$,

$$b\le\Phi(x_1),\qquad a<\|x_2\|_1\ \text{ with }\ \Phi(x_2)<b$$

and $\|x_3\|_1<a$.

### 5 Positive solutions to problem (1) for the case when α ( t ) = t on J

In this section, we consider problem (1) when $\alpha(t)=t$ on J and $\gamma=\xi=0$. Now $\Phi(x)=\min_{t\in[\zeta,\varrho]}|x(t)|$ for some fixed constants ζ, ϱ such that $0<\zeta<\varrho<1$. For $0<\zeta+\varrho<1$, we can show that $G(t,s)\ge\kappa G(s,s)$, $t\in[\zeta,\varrho]$, $s\in J$. Now, for $\kappa=\min(\zeta,1-\varrho)$, we introduce the cone $K_3$ by

$$K_3=\Bigl\{x\in E: x(t)\ge0,\ t\in J,\ \lambda_1[x]\ge0,\ \lambda_2[x]\ge0,\ \min_{t\in[\zeta,\varrho]}x(t)\ge\kappa\|x\|_1\Bigr\}.$$

The functionals Ψ, Θ, φ are defined as in Section 3; the cone K is now replaced by $K_3$.

Theorem 4 Let the assumptions H1-H4 hold with $\gamma=\xi=0$. Let $0<\zeta+\varrho<1$, $\alpha(t)=t$, $t\in J$. In addition, we assume that there exist positive constants a, b, c, d, M with $a<b$ and such that

with

and

(C1): $f(t,u,v)\le\frac{d}{\mu}$ for $(t,u,v)\in J\times[0,Md]\times[-d,d]$,

(C2): $f(t,u,v)\ge\frac{b}{L}$ for $(t,u,v)\in[\zeta,\varrho]\times\bigl[b,\frac{b}{\kappa}\bigr]\times[-d,d]$,

(C3): $f(t,u,v)\le\frac{a}{\mu}$ for $(t,u,v)\in J\times[0,a]\times[-d,d]$.

Then problem (1) has at least three nonnegative solutions $x_1,x_2,x_3$ satisfying $\|x_i'\|_1\le d$, $i=1,2,3$,

$$b\le\Phi(x_1),\qquad a<\|x_2\|_1\ \text{ with }\ \Phi(x_2)<b$$

and $\|x_3\|_1<a$.

### 6 Conclusions

In this paper, we have discussed boundary value problems for second-order differential equations with deviating arguments and with dependence on the first-order derivative. In our research, the deviating arguments can be both delayed and advanced. By using the fixed point theorem of Avery and Peterson, new sufficient conditions for the existence of positive solutions to such boundary value problems have been derived. An example is provided for illustration.

### Competing interests

The author declares that he has no competing interests.

### References

1. Avery, RI, Peterson, AC: Three positive fixed points of nonlinear operators on ordered Banach spaces. Comput. Math. Appl.. 42, 313–322 (2001). Publisher Full Text

2. Bai, Z, Ge, W: Existence of three positive solutions for some second-order boundary value problems. Comput. Math. Appl.. 48, 699–707 (2004). Publisher Full Text

3. Graef, JR, Webb, JRL: Third order boundary value problems with nonlocal boundary conditions. Nonlinear Anal.. 71, 1542–1551 (2009). Publisher Full Text

4. Guo, Y, Ge, W: Positive solutions for three-point boundary value problems with dependence on the first order derivative. J. Math. Anal. Appl.. 290, 291–301 (2004). Publisher Full Text

5. Guo, Y, Yu, C, Wang, J: Existence of three positive solutions for m-point boundary value problems on infinite intervals. Nonlinear Anal.. 71, 717–722 (2009). Publisher Full Text

6. Infante, G, Pietramala, P: Nonlocal impulsive boundary value problems with solutions that change sign. In: Cabada A, Liz E, Nieto JJ (eds.) Proceedings of the International Conference on Boundary Value Problems. (2009)

7. Infante, G, Pietramala, P, Zima, M: Positive solutions for a class of nonlocal impulsive BVPs via fixed point index. Topol. Methods Nonlinear Anal.. 36, 263–284 (2010)

8. Jankowski, T: Positive solutions to second order four-point boundary value problems for impulsive differential equations. Appl. Math. Comput.. 202, 550–561 (2008). Publisher Full Text

9. Jankowski, T: Positive solutions of three-point boundary value problems for second order impulsive differential equations with advanced arguments. Appl. Math. Comput.. 197, 179–189 (2008). Publisher Full Text

10. Jankowski, T: Existence of positive solutions to second order four-point impulsive differential equations with deviating arguments. Comput. Math. Appl.. 58, 805–817 (2009). Publisher Full Text

11. Jankowski, T: Three positive solutions to second-order three-point impulsive differential equations with deviating arguments. Int. J. Comput. Math.. 87, 215–225 (2010). Publisher Full Text

12. Jankowski, T: Multiple solutions for a class of boundary value problems with deviating arguments and integral boundary conditions. Dyn. Syst. Appl.. 19, 179–188 (2010)

13. Jankowski, T: Positive solutions for second order impulsive differential equations involving Stieltjes integral conditions. Nonlinear Anal.. 74, 3775–3785 (2011). Publisher Full Text

14. Jankowski, T: Existence of positive solutions to third order differential equations with advanced arguments and nonlocal boundary conditions. Nonlinear Anal.. 75, 913–923 (2012). Publisher Full Text

15. Ji, D, Ge, W: Multiple positive solutions for some p-Laplacian boundary value problems. Appl. Math. Comput.. 187, 1315–1325 (2007). Publisher Full Text

16. Ji, D, Ge, W: Existence of multiple positive solutions for one-dimensional p-Laplacian operator. J. Appl. Math. Comput.. 26, 451–463 (2008). Publisher Full Text

17. Karakostas, GL, Tsamatos, PC: Existence of multipoint positive solutions for a nonlocal boundary value problem. Topol. Methods Nonlinear Anal.. 19, 109–121 (2002)

18. Sun, B, Qu, Y, Ge, W: Existence and iteration of positive solutions for a multipoint one-dimensional p-Laplacian boundary value problem. Appl. Math. Comput.. 197, 389–398 (2008). Publisher Full Text

19. Sun, B, Ge, W, Zhao, D: Three positive solutions for multipoint one-dimensional p-Laplacian boundary value problems with dependence on the first order derivative. Math. Comput. Model.. 45, 1170–1178 (2007). Publisher Full Text

20. Wang, W, Sheng, J: Positive solutions to a multi-point boundary value problem with delay. Appl. Math. Comput.. 188, 96–102 (2007). Publisher Full Text

21. Wang, Y, Ge, W: Multiple positive solutions for multipoint boundary value problems with one-dimensional p-Laplacian. J. Math. Anal. Appl.. 327, 1381–1395 (2007). Publisher Full Text

22. Wang, Y, Ge, W: Existence of triple positive solutions for multi-point boundary value problems with one-dimensional p-Laplacian. Comput. Math. Appl.. 54, 793–807 (2007). Publisher Full Text

23. Wang, Y, Zhao, W, Ge, W: Multiple positive solutions for boundary value problems of second order delay differential equations with one-dimensional p-Laplacian. J. Math. Anal. Appl.. 326, 641–654 (2007). Publisher Full Text

24. Wang, Z, Zhang, J: Positive solutions for one-dimensional p-Laplacian boundary value problems with dependence on the first order derivative. J. Math. Anal. Appl.. 314, 618–630 (2006). Publisher Full Text

25. Webb, JRL, Infante, G: Positive solutions of nonlocal boundary value problems: a unified approach. J. Lond. Math. Soc.. 74, 673–693 (2006). Publisher Full Text

26. Webb, JRL, Infante, G: Positive solutions of nonlocal boundary value problems involving integral conditions. Nonlinear Differ. Equ. Appl.. 15, 45–67 (2008). Publisher Full Text

27. Webb, JRL, Infante, G: Non-local boundary value problems of arbitrary order. J. Lond. Math. Soc.. 79, 238–259 (2009)

28. Yan, B, O’Regan, D, Agarwal, RP: Multiple positive solutions of singular second order boundary value problems with derivative dependence. Aequ. Math.. 74, 62–89 (2007). Publisher Full Text

29. Yang, L, Liu, X, Jia, M: Multiplicity results for second-order m-point boundary value problem. J. Math. Anal. Appl.. 324, 532–542 (2006). Publisher Full Text

30. Yang, C, Zhai, C, Yan, J: Positive solutions of the three-point boundary value problem for second order differential equations with an advanced argument. Nonlinear Anal.. 65, 2013–2023 (2006). Publisher Full Text

31. Yang, Y, Xiao, D: On existence of multiple positive solutions for ϕ-Laplacian multipoint boundary value. Nonlinear Anal.. 71, 4158–4166 (2009). Publisher Full Text