
# Melnikov theory for weakly coupled nonlinear RLC circuits

Flaviano Battelli¹* and Michal Fečkan²,³

Author Affiliations

1 Department of Industrial Engineering and Mathematical Sciences, Marche Polytechnic University, Via Brecce Bianche 1, Ancona, 60131, Italy

2 Department of Mathematical Analysis and Numerical Mathematics, Comenius University, Mlynská dolina, Bratislava, 842 48, Slovakia

3 Mathematical Institute of Slovak Academy of Sciences, Štefánikova 49, Bratislava, 814 73, Slovakia


Boundary Value Problems 2014, 2014:101  doi:10.1186/1687-2770-2014-101

 Received: 18 December 2013 Accepted: 1 April 2014 Published: 7 May 2014

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

### Abstract

We apply dynamical system methods and Melnikov theory to study small amplitude perturbations of some coupled implicit differential equations. In particular, we show the persistence of orbits connecting singularities in finite time, provided a Melnikov-like condition holds. An application is given to coupled nonlinear RLC systems.

MSC: 34A09, 34C23, 37G99.

##### Keywords:
implicit ode; perturbation; Melnikov method; RLC circuits

### 1 Introduction

In [1], motivated by [2,3], the equation modeling nonlinear RLC circuits

$$(u+f(u))'' + \varepsilon\gamma(u+f(u))' + u + \varepsilon h(t+\alpha,u,\varepsilon) = 0 \qquad (1)$$

has been studied. It is assumed that $f(u)$ and $h(t,u,\varepsilon)$ are smooth functions, with $f(u)$ at least quadratic at the origin, satisfying suitable assumptions. Setting $v=(u+f(u))'$ the equation reads

$$(1+f'(u))u' = v,\qquad v' = -u-\varepsilon\bigl[h(t+\alpha,u,\varepsilon)+\gamma v\bigr]. \qquad (2)$$

It is assumed that, for some $u_0\in\mathbb{R}$, we have $f'(u_0)+1=0$ and $u_0f''(u_0)<0$. So for $\varepsilon=0$, (2) has the Hamiltonian

$$H(u,v) = v^2 + 2\int_{u_0}^{u}\sigma\bigl(1+f'(\sigma)\bigr)\,d\sigma$$

passing through $(u_0,0)$. Clearly $H'(u_0,0)=(0\ 0)$ and the Hessian of $H$ at $(u_0,0)$ is

$$H''(u_0,0) = \begin{pmatrix} 2u_0f''(u_0) & 0\\ 0 & 2\end{pmatrix},$$

so that the condition $u_0f''(u_0)<0$ means that $(u_0,0)$ is a saddle for $H$. Multiplying the second equation by $1+f'(u)$ we get the system

$$(1+f'(u))\begin{pmatrix}u'\\ v'\end{pmatrix} = \begin{pmatrix} v\\ -(1+f'(u))\bigl\{u+\varepsilon[h(t+\alpha,u,\varepsilon)+\gamma v]\bigr\}\end{pmatrix}. \qquad (3)$$
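As a quick numerical sanity check of the saddle condition, one may take the hypothetical nonlinearity $f(u)=-u^2/2$ (chosen only for illustration; it is not the circuit nonlinearity of Section 5), for which $1+f'(u_0)=0$ gives $u_0=1$:

```python
import numpy as np

# Hypothetical nonlinearity (illustration only): f(u) = -u**2/2,
# so f'(u) = -u, 1 + f'(u0) = 0 gives u0 = 1, and u0*f''(u0) = -1 < 0.
fp = lambda u: -u        # f'
fpp = lambda u: -1.0     # f''

u0 = 1.0
assert abs(1.0 + fp(u0)) < 1e-12 and u0 * fpp(u0) < 0

# Hessian of H(u,v) = v^2 + 2*int_{u0}^u sigma*(1+f'(sigma)) dsigma at (u0,0)
hess = np.array([[2.0 * u0 * fpp(u0), 0.0],
                 [0.0, 2.0]])
evals = np.linalg.eigvalsh(hess)   # sorted ascending
print(evals)                       # [-2.  2.]: one negative, one positive -> saddle
assert evals[0] < 0 < evals[1]
```

The mixed signature of the Hessian is exactly the saddle condition used above.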

Note that (2) falls in the class of implicit differential equations (IODEs) of the form

$$A(x)x' = f(x) + \varepsilon h(t,x,\varepsilon,\kappa),\qquad (\varepsilon,\kappa)\in\mathbb{R}\times\mathbb{R}^m, \qquad (4)$$

with $A(u,v)=\begin{pmatrix}1+f'(u) & 0\\ 0 & 1\end{pmatrix}$. Obviously, $\det A(u,v)=1+f'(u)$ vanishes on the line $(u_0,v)$, and the condition $f''(u_0)\ne 0$ implies that the line $u=u_0$ consists of noncritical 0-singularities of (3) (see [[4], p.163]). Let $NL$ denote the kernel of a linear map $L$ and $RL$ its range. Then $RA(u_0,0)$ is the subspace having zero first component, and hence the right-hand side of (3) belongs to $RA(u_0,0)$ if and only if $v=0$. So all the singularities $(u_0,v)$ with $v\ne 0$ are impasse points, while $(u_0,0)$ is a so-called I-point (see [[4], pp.163-166]). Quasilinear implicit differential equations such as (4) find applications in a large number of physical sciences and have been studied by several authors [4-12]. On the other hand, there are many other works on implicit differential equations [13-18] dealing with more general implicit differential systems by means of analytical and topological methods.

Passing from (2) to (3) corresponds, in the general case, to multiplying (4) by the adjugate matrix $A^a(x)$:

$$\omega(x)x' = A^a(x)\bigl[f(x)+\varepsilon h(t,x,\varepsilon,\kappa)\bigr],$$

where $\omega(x):=\det A(x)$. Here we note that $A$ and $x$ may have different dimensions in this paper, depending on the nature of the equation, but the concrete dimension is clear from that equation, so we do not use different notations for $A$ and $x$. The basic assumptions in [1] are $\omega(x_0)=0$, $\omega'(x_0)\ne 0$ and $A^a(x_0)f(x_0)=0$, $A^a(x_0)h(t,x_0,\varepsilon,\kappa)=0$ for some $x_0$ (that is, $x_0$ is an I-point of (4)), together with the existence of a solution $x(t)$ on a bounded interval $J$ tending to $x_0$ as $t$ tends to the endpoints of $J$.

It is well known [4,8] that $\omega(x_0)=0$ and $\omega'(x_0)\ne 0$ imply

$$\dim NA(x_0)=1,\qquad RA^a(x_0)=NA(x_0),\qquad NA^a(x_0)=RA(x_0), \qquad (5)$$

and then $A^a(x_0)f(x_0)=0$ is equivalent to the fact that $f(x_0)\in RA(x_0)$.
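The relations (5) can be observed directly on a toy example; below, an invented rank-one $2\times 2$ matrix plays the role of $A(x_0)$ (only its singularity matters):

```python
import numpy as np

# Toy singular matrix in the role of A(x0): det A = 0 with one-dimensional kernel.
A = np.array([[0.0, 0.0],
              [3.0, 1.0]])
# Adjugate of a 2x2 matrix [[a, b], [c, d]] is [[d, -b], [-c, a]].
Aa = np.array([[A[1, 1], -A[0, 1]],
               [-A[1, 0], A[0, 0]]])

# A * A^a = A^a * A = det(A) * I, which is the zero matrix here.
assert np.allclose(A @ Aa, 0.0) and np.allclose(Aa @ A, 0.0)

# R A^a = N A: the nonzero column (1, -3) of Aa spans the kernel of A, and
# N A^a = R A: the columns of A, spanning span{(0, 1)}, are killed by Aa.
print(A @ Aa[:, 0], Aa @ A[:, 1])   # both zero vectors
```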

Let $F(x):=A^a(x)f(x)$. It has been proved in [19] that (5) implies that the rank of $F'(x_0)$ is at most 2. So, if $x\in\mathbb{R}^n$ with $n>2$, then $x=x_0$ cannot be hyperbolic for the map $x\mapsto F'(x_0)x$.

In this paper we study coupled IODEs such as

$$A_0(x_1)x_1' = f(x_1)+\varepsilon g_1(t,x_1,x_2,\varepsilon,\kappa),\qquad A_0(x_2)x_2' = f(x_2)+\varepsilon g_2(t,x_1,x_2,\varepsilon,\kappa), \qquad (6)$$

with $x_1,x_2\in\mathbb{R}^2$, $\det A_0(x_0)=0\ne(\det A_0)'(x_0)$, $f(x_0),g_j(t,x_0,x_0,\varepsilon,\kappa)\in RA_0(x_0)$, and other assumptions that will be specified below. Let us remark that (6) is a special case of the general equation (4) with, among other things,

$$A(x)=\begin{pmatrix}A_0(x_1) & 0\\ 0 & A_0(x_2)\end{pmatrix},\qquad x=(x_1,x_2),$$

hence $\det A(x)=\det A_0(x_1)\det A_0(x_2)$ satisfies $\det A(x_0,x_0)=0$, $(\det A)'(x_0,x_0)=0$ and $(\det A)''(x_0,x_0)\ne 0$. Thus $(x_0,x_0)$ is not an I-point. Multiplying the first equation by $A_0^a(x_1)$ and the second by $A_0^a(x_2)$ we obtain the system

$$\omega(x_1)x_1' = F(x_1)+\varepsilon G_1(x_1,x_2,t,\varepsilon,\kappa),\qquad \omega(x_2)x_2' = F(x_2)+\varepsilon G_2(x_1,x_2,t,\varepsilon,\kappa). \qquad (7)$$

We assume that $\omega(x):=\det A_0(x)$, $F(x)$ and $G_j(x_1,x_2,t,\varepsilon,\kappa)$ satisfy the following assumptions:

(C1) $F\in C^2(\mathbb{R}^2,\mathbb{R}^2)$, $\omega\in C^2(\mathbb{R}^2,\mathbb{R})$ and the unperturbed equation

$$\omega(x)x' = F(x) \qquad (8)$$

possesses a noncritical singularity at $x_0$, i.e. $\omega(x_0)=0$ and $\omega'(x_0)\ne 0$.

(C2) $F(x_0)=0$, the spectrum $\sigma(F'(x_0))=\{\mu_-,\mu_+\}$ satisfies $\mu_-<0<\mu_+$, and

$$x' = F(x)$$

has a solution $\gamma(s)$ homoclinic to $x_0$, that is, $\lim_{s\to\pm\infty}\gamma(s)=x_0$, and $\omega(\gamma(s))\ne 0$ for any $s\in\mathbb{R}$. Without loss of generality, we may, and will, assume $\omega(\gamma(s))>0$ for any $s\in\mathbb{R}$. Moreover, $G_i\in C^2(\mathbb{R}^{6+m},\mathbb{R}^2)$, $i=1,2$, are 1-periodic in $t$ with $G_i(x_0,x_0,t,\varepsilon,\kappa)=0$ for any $t\in\mathbb{R}$, $\kappa\in\mathbb{R}^m$ and $\varepsilon$ sufficiently small.

(C3) Let $\gamma_\pm$ be the eigenvectors of $F'(x_0)$ with the eigenvalues $\mu_\mp$, respectively. Then $\omega'(x_0)\gamma_\pm>0$ (or else $\omega'(x_0)\gamma_\pm<0$).
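To fix ideas, here is a minimal numerical check of the situation described by (C2) on an explicit planar vector field with a known homoclinic orbit; the field is chosen for illustration and is not the circuit system:

```python
import numpy as np

# Explicit example satisfying (C2) (illustrative only, not the circuit system):
# F(x) = (x2, x1 - 1.5*x1**2) has a saddle at x0 = 0 with eigenvalues ±1 and
# the explicit homoclinic gamma(s) = (u(s), u'(s)), u(s) = sech(s/2)**2.
def gamma(s):
    u = np.cosh(s / 2.0) ** -2
    return np.array([u, -u * np.tanh(s / 2.0)])

def F(x):
    return np.array([x[1], x[0] - 1.5 * x[0] ** 2])

# gamma'(s) = F(gamma(s)): centered finite-difference check at one point
s, h = 0.7, 1e-6
fd = (gamma(s + h) - gamma(s - h)) / (2.0 * h)
assert np.allclose(fd, F(gamma(s)), atol=1e-8)

# gamma(s) -> x0 = 0 as |s| -> infinity, and sigma(F'(0)) = {-1, 1}
assert np.linalg.norm(gamma(40.0)) < 1e-15 and np.linalg.norm(gamma(-40.0)) < 1e-15
mus = np.linalg.eigvals(np.array([[0.0, 1.0],
                                  [1.0, 0.0]]))     # F'(0)
print(np.sort(mus.real))   # [-1.  1.]
```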

From (C2) we see that $\Gamma(s):=(\gamma(s),\gamma(s))$ is a bounded solution of the equation

$$x_1' = F(x_1),\qquad x_2' = F(x_2) \qquad (9)$$

and that $x_0$ persists as a singularity of (7). So this paper is a continuation of [1,19], but here we study a more degenerate IODE.

The objective of this paper is to give conditions, besides (C1)-(C3), ensuring that for $|\varepsilon|\ll 1$ the coupled equations (7) have a solution lying in a neighborhood of the orbit $\{\Gamma(s)\mid s\in\mathbb{R}\}$ and reaching $(x_0,x_0)$ in finite time. Our approach mimics that of [1] and uses Melnikov methods to derive the needed conditions. Let us briefly describe the content of this paper. In Section 2 we make a few remarks concerning assumptions (C1)-(C3). Then, in Section 3, we change time to reduce equation (7) to a smooth perturbation of (9) whose unperturbed part has the solution $\Gamma(s)$. Next, in Section 4, we derive the Melnikov condition. Finally, Section 5 is devoted to the application of our result to coupled equations of the form (1) for RLC circuits, while some computations are postponed to the appendix.

We emphasize that the Melnikov technique is useful to predict the existence of transverse homoclinic orbits in mechanical systems [20,21], together with the associated chaotic behavior of solutions. However, the result in this paper is somewhat different in that we apply the method to show the existence of orbits connecting a singularity in finite time.

### 2 Comments on the assumptions

Following [1,19], we note that since $\gamma(s)\to x_0$ as $|s|\to\infty$, $\gamma'(s)$ is a bounded solution of the linear equation $x'=F'(\gamma(s))x$. Hence $|\gamma'(s)|\le ke^{-\mu|s|}$ for some $\mu>0$. We then get, for $s\ge 0$,

$$|\gamma(s)-x_0| \le \int_s^{\infty}|\gamma'(\tau)|\,d\tau \le \mu^{-1}ke^{-\mu s}.$$

So

$$\limsup_{s\to\infty}\frac{\log|\gamma(s)-x_0|}{s} \le -\mu < 0.$$

From [[22], Theorem 4.3, p.335 and Theorem 4.5, p.338] it follows that

$$\limsup_{s\to\infty}\frac{\log|\gamma(s)-x_0|}{s} = \mu_- < 0, \qquad (10)$$

and there exist a constant $\delta>0$ and a solution $\gamma_+e^{\mu_-s}$ of $x'=F'(x_0)x$ such that

$$|\gamma(s)-x_0-\gamma_+e^{\mu_-s}| = O\bigl(e^{(\mu_--\delta)s}\bigr),\quad\text{as } s\to\infty.$$

Note that $\gamma_+\ne 0$, since otherwise $\gamma(s)-x_0=O(e^{(\mu_--\delta)s})$, contradicting (10). Hence $\gamma_+$ is an eigenvector of $F'(x_0)$ with eigenvalue $\mu_-$. We then have

$$\bigl|(\gamma(s)-x_0)e^{-\mu_-s}-\gamma_+\bigr| \le c_1e^{-\delta s}$$

for a suitable constant $c_1\ge 0$. As a consequence,

$$\lim_{s\to\infty}(\gamma(s)-x_0)e^{-\mu_-s} = \gamma_+.$$

Next,

$$|\gamma_+|-c_1e^{-\delta s} \le |\gamma(s)-x_0|e^{-\mu_-s} \le |\gamma_+|+c_1e^{-\delta s}.$$

Taking logarithms, dividing by $s$ and letting $s\to\infty$ we get

$$\lim_{s\to\infty}\frac{\log|\gamma(s)-x_0|}{s} = \mu_-,$$

that is, in (10) $\limsup_{s\to\infty}$ can be replaced with $\lim_{s\to\infty}$. Similarly, changing $s$ to $-s$:

$$\lim_{s\to-\infty}\frac{\log|\gamma(s)-x_0|}{s} = \mu_+.$$

Next, set

$$\varphi(s) := \frac{1}{e^{-\mu_-s}+e^{-\mu_+s}}. \qquad (11)$$

Since $\varphi(s)e^{-\mu_-s}\to 1$ as $s\to\infty$ and $\varphi(s)e^{-\mu_+s}\to 1$ as $s\to-\infty$, we then have

$$\lim_{s\to\pm\infty}\frac{\gamma(s)-x_0}{\varphi(s)} = \gamma_\pm \ne 0 \qquad (12)$$

and

$$\lim_{s\to\pm\infty}\frac{\omega(\gamma(s))}{\varphi(s)} = \lim_{s\to\pm\infty}\frac{\omega'(x_0)(\gamma(s)-x_0)+o(|\gamma(s)-x_0|)}{\varphi(s)} = \omega'(x_0)\gamma_\pm.$$

From (C2) we know that $\omega(\gamma(s))>0$ for any $s\in\mathbb{R}$, so $\omega'(x_0)\gamma_\pm\ge 0$. Hence condition (C3) means that $\gamma(s)$ tends transversally to the singular manifold $\omega^{-1}(0)$ at $x_0$.
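The asymptotics behind (11)-(12) are easy to check numerically; the sketch below uses assumed sample exponents $\mu_-=-1$, $\mu_+=2$ (any pair with $\mu_-<0<\mu_+$ would do):

```python
import numpy as np

# Assumed sample saddle exponents (not from the paper): mu_- = -1, mu_+ = 2.
mu_minus, mu_plus = -1.0, 2.0

def phi(s):
    # phi(s) = 1 / (exp(-mu_-*s) + exp(-mu_+*s)), cf. (11)
    return 1.0 / (np.exp(-mu_minus * s) + np.exp(-mu_plus * s))

# phi(s)*exp(-mu_-*s) -> 1 as s -> +infty, phi(s)*exp(-mu_+*s) -> 1 as s -> -infty
right = phi(30.0) * np.exp(-mu_minus * 30.0)
left = phi(-30.0) * np.exp(-mu_plus * (-30.0))
print(right, left)
assert abs(right - 1.0) < 1e-12 and abs(left - 1.0) < 1e-12
```

So $\varphi$ interpolates between the two decay rates $e^{\mu_-s}$ (at $+\infty$) and $e^{\mu_+s}$ (at $-\infty$), which is why it is the right normalizing factor in (12).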

As in [19], it is easily seen that

$$\lim_{s\to\pm\infty}\frac{\gamma'(s)}{\omega(\gamma(s))} = \frac{F'(x_0)\gamma_\pm}{\omega'(x_0)\gamma_\pm} = \frac{\mu_\mp\gamma_\pm}{\omega'(x_0)\gamma_\pm} \ne 0 \qquad (13)$$

and that $\frac{\gamma'(s)}{\omega(\gamma(s))}$ solves the equation

$$x' = \left[F'(\gamma(s))-\frac{F(\gamma(s))\omega'(\gamma(s))}{\omega(\gamma(s))}\right]x = F'(\gamma(s))x-\frac{\omega'(\gamma(s))x}{\omega(\gamma(s))}F(\gamma(s)). \qquad (14)$$

So $\frac{\gamma'(s)}{\omega(\gamma(s))}$ is a bounded solution of (14). Next, setting as in [19]

$$\theta(s) := \int_0^s\omega(\gamma(\tau))\,d\tau \qquad (15)$$

and $x_h(t)=\gamma(\theta^{-1}(t))$, it is easily seen that $x_h(t)$ satisfies $\omega(x)x'=F(x)$, whose linearization along $x_h(t)$ is

$$F'(x_h(t))z = x_h'(t)\omega'(x_h(t))z+\omega(x_h(t))z' = \frac{F(x_h(t))\omega'(x_h(t))z}{\omega(x_h(t))}+\omega(x_h(t))z',$$

i.e.

$$\omega(x_h(t))z' = F'(x_h(t))z-\frac{F(x_h(t))\omega'(x_h(t))z}{\omega(x_h(t))}. \qquad (16)$$

Note, then, that (14) is derived from (16) via the change $x(s)=z(\theta(s))$. This fact should clarify why we need to consider the linear system (14) instead of $x'=F'(\gamma(s))x$. However, see [19] for a remark concerning the space of bounded solutions of (14) and that of the equation $x'=F'(\gamma(s))x$.

We now prove that $\frac{\gamma'(s)}{\omega(\gamma(s))}$ is the unique solution of equation (14) which is bounded on $\mathbb{R}$. This is a kind of nondegeneracy of $\gamma(s)$.

Lemma 2.1 Assume (C2) and (C3) hold. Then, up to a multiplicative constant, $\frac{\gamma'(s)}{\omega(\gamma(s))}$ is the unique solution of (14) which is bounded on $\mathbb{R}$.

Proof From [[19], Lemma 3.1] it follows that the linear map

$$x \mapsto \left[F'(x_0)-\frac{\mu_-\gamma_+\omega'(x_0)}{\omega'(x_0)\gamma_+}-\mu_-I\right]x$$

has the simple eigenvalues $\mu_+-\mu_-$ and $-\mu_-$. Let $\mu:=\frac{\mu_+}{2}$; then the linear map

$$x \mapsto \left[F'(x_0)-\frac{\mu_-\gamma_+\omega'(x_0)}{\omega'(x_0)\gamma_+}-\mu I\right]x$$

has the eigenvalues $\pm\mu$; moreover, since

$$c_1 \le \frac{|\gamma'(s)|}{\omega(\gamma(s))} \le c_2$$

for two positive constants $0<c_1<c_2$, it follows that $\gamma_0(s):=\frac{\gamma'(s)}{\omega(\gamma(s))}e^{-\mu s}$ is a solution of

$$x' = \left[F'(\gamma(s))-\frac{F(\gamma(s))\omega'(\gamma(s))}{\omega(\gamma(s))}-\mu I\right]x \qquad (17)$$

satisfying

$$\frac{c_1}{c_2}|\gamma_0(s_1)| \le |\gamma_0(s_2)|e^{\mu(s_2-s_1)} \le \frac{c_2}{c_1}|\gamma_0(s_1)|$$

for all $0\le s_1\le s_2$. Then (17) satisfies the assumptions of [[19], Theorem 5.3], and hence its conclusion holds with $\operatorname{rank}P_+=1$; that is, the fundamental matrix $X_+(s)$ of (17) satisfies

$$\|X_+(s_2)P_+X_+^{-1}(s_1)\| \le ke^{-\mu(s_2-s_1)},\quad 0\le s_1\le s_2,\qquad \|X_+(s_2)(I-P_+)X_+^{-1}(s_1)\| \le ke^{\tilde\mu(s_2-s_1)},\quad 0\le s_2\le s_1,$$

where $0\le\tilde\mu<\mu$. However, it is well known (see [23-25]) that $RP_+$ is the space of initial conditions of the solutions of (17) bounded on $[0,\infty[$, which then tend to zero as $s\to\infty$ at the exponential rate $e^{-\mu s}$. As a consequence, a solution $u(s)$ of (17) is bounded on $[0,\infty[$ if and only if $u(s)e^{\mu s}$ is a solution of (14) bounded on $[0,\infty[$. Then we conclude that the space of solutions of (14) that are bounded on $[0,\infty[$ is one dimensional.

Incidentally, since the fundamental matrix of (14) is $X(s)=X_+(s)e^{\mu s}$, we note that it satisfies

$$\|X(s_2)P_+X^{-1}(s_1)\| \le k,\quad 0\le s_1\le s_2,\qquad \|X(s_2)(I-P_+)X^{-1}(s_1)\| \le ke^{(\mu+\tilde\mu)(s_2-s_1)},\quad 0\le s_2\le s_1.$$

Using a similar argument on $\mathbb{R}_-=\,]-\infty,0]$, with $\mu=\frac{\mu_-}{2}<0$ and [[19], Theorem 5.4] instead of [[19], Theorem 5.3], we see that (14) has at most a one dimensional space of solutions bounded on $\mathbb{R}$. More precisely, there exist $\tilde\mu$ with $\mu<\tilde\mu<0$ and a projection $P_-$ on $\mathbb{R}^2$ such that

$$\|X(s_2)(I-P_-)X^{-1}(s_1)\| \le k,\quad s_2\le s_1\le 0,\qquad \|X(s_2)P_-X^{-1}(s_1)\| \le ke^{(\mu+\tilde\mu)(s_2-s_1)},\quad s_1\le s_2\le 0,$$

and $\dim NP_-=1$. Since $\frac{\gamma'(s)}{\omega(\gamma(s))}$ is a solution of (14) bounded on $\mathbb{R}$, we deduce that $RP_+=NP_-=\operatorname{span}\bigl\{\frac{\gamma'(0)}{\omega(\gamma(0))}\bigr\}$, and the result follows. □

We conclude this section with a remark about condition (c) in [[19], Theorem 5.3]. Consider a system in $\mathbb{R}^n$ such as

$$x' = [D+A(s)]x. \qquad (18)$$

Then the following result holds.

Theorem 2.2 Suppose the following hold:

(i) $D$ has two simple eigenvalues $-\mu<\mu$, and all the other eigenvalues of $D$ have real part either less than $-\mu$ or greater than $\mu$;

(ii) $\int_0^\infty\|A(s)\|\,ds<\infty$;

(iii) $A(s)\to 0$ as $s\to\infty$.

Then there are as many solutions $x(t)$ of (18) satisfying

$$k_1|x(s)| \le |x(t)|e^{\mu(t-s)} \le k_2|x(s)|,\quad\text{for any } 0\le s\le t, \qquad (19)$$

as the dimension of the space of the generalized eigenvectors of the matrix $D$ with real parts less than or equal to $-\mu$; here $k_1,k_2>0$ are two suitable positive constants. Similarly, there are as many solutions of (18) such that

$$\tilde k_1|x(s)| \le |x(t)|e^{-\mu(t-s)} \le \tilde k_2|x(s)|,\quad\text{for any } 0\le s\le t, \qquad (20)$$

for suitable constants $\tilde k_1,\tilde k_2>0$, as the dimension of the space of the generalized eigenvectors of the matrix $D$ with real parts greater than or equal to $\mu$.

Proof We prove the first statement, concerning (19); (20) is handled by a similar argument. Changing variables, we may assume that

$$D = \begin{pmatrix}-\mu & 0 & 0\\ 0 & D_- & 0\\ 0 & 0 & D_+\end{pmatrix},$$

where the eigenvalues of $D_-$ have real parts less than $-\mu$ and those of $D_+$ have real parts greater than or equal to $\mu$. So the system reads

$$\begin{aligned} x_1' &= -\mu x_1+a_{11}(t)x_1+A_{12}(t)x_2+A_{13}(t)x_3,\\ x_2' &= D_-x_2+A_{21}(t)x_1+A_{22}(t)x_2+A_{23}(t)x_3,\\ x_3' &= D_+x_3+A_{31}(t)x_1+A_{32}(t)x_2+A_{33}(t)x_3, \end{aligned} \qquad (21)$$

where $a_{11}(t)\in\mathbb{R}$ and the $A_{ij}(t)$ are matrices (or vectors) of suitable orders. Setting $y_i(t)=e^{\mu t}x_i(t)$ we get

$$\begin{aligned} y_1' &= a_{11}(t)y_1+A_{12}(t)y_2+A_{13}(t)y_3,\\ y_2' &= (D_-+\mu I)y_2+A_{21}(t)y_1+A_{22}(t)y_2+A_{23}(t)y_3,\\ y_3' &= (D_++\mu I)y_3+A_{31}(t)y_1+A_{32}(t)y_2+A_{33}(t)y_3. \end{aligned} \qquad (22)$$

Now we observe that $y(t)$ is a solution of (22) bounded at $+\infty$ if and only if $x(t)$ is a solution of (21) which is bounded on $\mathbb{R}_+$ when multiplied by $e^{\mu t}$. Moreover, since $|a_{11}(t)|$, $|A_{12}(t)|$ and $|A_{13}(t)|$ belong to $L^1(\mathbb{R})$, the limit $\lim_{t\to+\infty}y_1(t)$ exists for any solution $y(t)$ of (22) bounded on $\mathbb{R}_+$. So, let us fix $t_0>0$ and take $t\ge t_0$. If $y(t)$ is a solution of (22) bounded at $+\infty$, by the variation of constants formula it must be

$$\begin{aligned} y_1(t) &= y_1^\infty-\int_t^{\infty}\bigl(a_{11}(s)y_1(s)+A_{12}(s)y_2(s)+A_{13}(s)y_3(s)\bigr)\,ds,\\ y_2(t) &= e^{(D_-+\mu I)(t-t_0)}y_2^0+\int_{t_0}^{t}e^{(D_-+\mu I)(t-s)}\bigl(A_{21}(s)y_1(s)+A_{22}(s)y_2(s)+A_{23}(s)y_3(s)\bigr)\,ds,\\ y_3(t) &= -\int_t^{\infty}e^{(D_++\mu I)(t-s)}\bigl(A_{31}(s)y_1(s)+A_{32}(s)y_2(s)+A_{33}(s)y_3(s)\bigr)\,ds, \end{aligned} \qquad (23)$$

where $y_2^0=y_2(t_0)$ and $y_1^\infty=\lim_{t\to+\infty}y_1(t)$. Note that, since $\sigma(D_-+\mu I)\subset\{\lambda\in\mathbb{C}\mid\Re\lambda<0\}$, $\sigma(D_++\mu I)\subset\{\lambda\in\mathbb{C}\mid\Re\lambda>0\}$ and $a_{11}(t)$, $A_{ij}(t)$ are bounded, we can interpret (23) as a fixed point equation in the Banach space of bounded functions on $[t_0,+\infty[$:

$$B := \Bigl\{y(t)\in C^0([t_0,\infty[)\Bigm|\sup_{t\ge t_0}|y(t)|<\infty\Bigr\}$$

with the obvious norm. Since $a_{11}(t),A_{ij}(t)\in L^1(\mathbb{R}_+)$, we see that the map (23) is a contraction on $B$, provided $t_0$ is sufficiently large, and then, for any given $(y_1^\infty,y_2^0)$, it has a unique solution $y(t,y_1^\infty,y_2^0)\in B$. Note that a priori $y(t,y_1^\infty,y_2^0)$ is defined only on $[t_0,+\infty[$, but of course we can extend it to $[0,+\infty[$ by going backward in time. We now prove that positive constants $0<c_1<c_2$ exist such that $c_1\le|y(t,y_1^\infty,y_2^0)|\le c_2$ for any $t\ge 0$. Let $t_0<t_1<t$. We have

$$y_2(t_1) = e^{(D_-+\mu I)(t_1-t_0)}y_2^0+\int_{t_0}^{t_1}e^{(D_-+\mu I)(t_1-s)}\bigl(A_{21}(s)y_1(s)+A_{22}(s)y_2(s)+A_{23}(s)y_3(s)\bigr)\,ds$$

and then

$$\begin{aligned} y_2(t) &= e^{(D_-+\mu I)(t-t_0)}y_2^0+\int_{t_0}^{t_1}e^{(D_-+\mu I)(t-s)}\bigl(A_{21}(s)y_1(s)+A_{22}(s)y_2(s)+A_{23}(s)y_3(s)\bigr)\,ds\\ &\quad+\int_{t_1}^{t}e^{(D_-+\mu I)(t-s)}\bigl(A_{21}(s)y_1(s)+A_{22}(s)y_2(s)+A_{23}(s)y_3(s)\bigr)\,ds\\ &= e^{(D_-+\mu I)(t-t_1)}y_2(t_1)+\int_{t_1}^{t}e^{(D_-+\mu I)(t-s)}\bigl(A_{21}(s)y_1(s)+A_{22}(s)y_2(s)+A_{23}(s)y_3(s)\bigr)\,ds. \end{aligned}$$

So, for any $\delta>0$, let $t_1$ be such that $\sup_{t\ge t_1}|A_{ij}(t)|\le\delta$, and set $\bar y_j=\sup_{t\ge t_1}|y_j(t)|$. We have

$$|y_2(t)| \le ke^{\alpha(t-t_1)}\bar y_2+\int_{t_1}^{t}ke^{\alpha(t-s)}\delta(\bar y_1+\bar y_2+\bar y_3)\,ds \le k\Bigl(\bar y_2+\frac{\delta}{|\alpha|}(\bar y_1+\bar y_2+\bar y_3)\Bigr)e^{\alpha(t-t_1)}+\frac{k\delta}{|\alpha|}(\bar y_1+\bar y_2+\bar y_3)$$

with $\max\{\Re\lambda\mid\lambda\in\sigma(D_-+\mu I)\}<\alpha<0$. Taking the limit as $t\to+\infty$ we get

$$\limsup_{t\to+\infty}|y_2(t)| \le \frac{k\delta}{|\alpha|}(\bar y_1+\bar y_2+\bar y_3).$$

Since $\delta\to 0$ as $t_1\to+\infty$, from the above it follows that $\lim_{t\to\infty}|y_2(t)|=0$. Similarly we get

$$\bar y_3 \le k\delta\beta^{-1}(\bar y_1+\bar y_2+\bar y_3),$$

where $0<\beta<\min\{\Re\lambda\mid\lambda\in\sigma(D_++\mu I)\}$, and then $\lim_{t\to\infty}|y_3(t)|=0$. As a consequence we obtain $\lim_{t\to\infty}\bigl||y(t)|-|y_1(t)|\bigr|=0$ and then

$$\lim_{t\to\infty}|y(t)| = |y_1^\infty|.$$

So, provided we take $y_1^\infty\ne 0$, we see that eventually (i.e. for $t\ge\bar t$, for some $\bar t>0$)

$$\frac{|y_1^\infty|}{2} \le |y(t)| \le \frac{3}{2}|y_1^\infty|,$$

and the existence of $c_1,c_2>0$ such that

$$c_1 \le |y(t)| \le c_2$$

for all $t\ge 0$ follows from the fact that $|y(t)|$ cannot vanish in any bounded interval. Finally, since $|x(t)|=|y(t)|e^{-\mu t}$, we get, for $0\le s\le t$,

$$\frac{|x(t)|}{|x(s)|} = \frac{|y(t)|}{|y(s)|}e^{-\mu(t-s)} \quad\Longrightarrow\quad \frac{c_1}{c_2}e^{-\mu(t-s)} \le \frac{|x(t)|}{|x(s)|} \le \frac{c_2}{c_1}e^{-\mu(t-s)},$$

i.e.

$$\frac{c_1}{c_2}|x(s)| \le |x(t)|e^{\mu(t-s)} \le \frac{c_2}{c_1}|x(s)|.$$

The proof is complete. □

Remark 2.3 (i) It follows from the proof of Theorem 2.2 that the inequalities (19) also hold when (i) is replaced with the weaker assumption that $-\mu$ is a simple eigenvalue of $D$ and all the other eigenvalues have real parts either less than $-\mu$ or greater than or equal to $\mu$ (i.e. we do not need that $\mu$ is simple). Similarly, the inequalities (20) hold if $\mu$ is a simple eigenvalue of $D$ and all the other eigenvalues have real parts either greater than $\mu$ or less than or equal to $-\mu$ (i.e. we do not need that $-\mu$ is simple).

(ii) Note that a result related to Theorem 2.2 has been proved in [26].

### 3 Solutions asymptotic to the fixed point

It follows from (11)-(12) that $\gamma(s)-x_0=O(e^{\mu_\mp s})$ as $s\to\pm\infty$; then, since $\gamma'(s)=F(\gamma(s))=O(\gamma(s)-x_0)$, we obtain $\gamma'(s)=O(e^{\mu_\mp s})$. Furthermore, from (13) we also get

$$\omega(\gamma(s)) = O(\gamma'(s)) = O(e^{\mu_\mp s}),\quad s\to\pm\infty.$$

As a consequence,

$$T_\pm := \int_0^{\pm\infty}\omega(\gamma(\tau))\,d\tau,\qquad |T_\pm|<\infty.$$

Since $\omega(\gamma(s))>0$, it follows that $\theta:\mathbb{R}\to\,]T_-,T_+[$ is a strictly increasing diffeomorphism (see (15) for the definition of $\theta(s)$). Then $x_h(t):=\gamma(\theta^{-1}(t))$ satisfies (8) on the interval $]T_-,T_+[$ and

$$\lim_{t\to T_\pm}x_h(t) = x_0.$$

Moreover (see (13)),

$$\lim_{t\to T_\pm}x_h'(t) = \lim_{t\to T_\pm}\frac{F(x_h(t))}{\omega(x_h(t))} = \lim_{s\to\pm\infty}\frac{F(\gamma(s))}{\omega(\gamma(s))} = \frac{\mu_\mp\gamma_\pm}{\omega'(x_0)\gamma_\pm} \ne 0.$$

Hence $x_0$ is not an I-point of (8). In this paper we look for solutions of the coupled equation (7) that belong to a neighborhood of $\{(x_h(t),x_h(t))\mid T_-<t<T_+\}$, are defined on the interval $]T_-+\alpha,T_++\alpha[$, for some $\alpha=\alpha(\varepsilon)$, and tend to $(x_0,x_0)$ at the same rate as $(x_h(t),x_h(t))$. To this end we first change the time variable as follows. Set

$$t = \alpha+\theta(s)\in\,]T_-+\alpha,T_++\alpha[$$

and plug $z_j(s)=x_j(\alpha+\theta(s))$ into (7). We get

$$\omega(z_j)z_j' = \omega(\gamma(s))\bigl(F(z_j)+\varepsilon G_j(z_1,z_2,\alpha+\theta(s),\varepsilon,\kappa)\bigr),\quad j=1,2. \qquad (24)$$

Since we are looking for solutions of (7) tending to $(x_0,x_0)$ at the same rate as $\gamma(s)$, in (24) we make the change of variables

$$z_j(s) = \gamma(s)+\varphi(s)y_j(s) = x_0+\varphi(s)\bigl(\eta(s)+y_j(s)\bigr),\quad j=1,2, \qquad (25)$$

where $\eta(s)$ is the bounded function $\frac{\gamma(s)-x_0}{\varphi(s)}$. Since

$$\omega\bigl(x_0+\varphi(s)(\eta(s)+y)\bigr) \ge \omega'(x_0)\varphi(s)\bigl(\eta(s)+y\bigr)-K_1\bigl|\varphi(s)(\eta(s)+y)\bigr|^2 = \varphi(s)\Bigl[\omega'(x_0)\bigl(\eta(s)+y\bigr)-K_1\varphi(s)|\eta(s)+y|^2\Bigr] \qquad (26)$$

for a suitable constant $K_1>0$ and any $s\in\mathbb{R}$, $|y|\le 1$, we get, using (C3) and (26),

$$\omega\bigl(x_0+\varphi(s)(\eta(s)+y)\bigr) \ge \tfrac{1}{2}\varphi(s)\,\omega'(x_0)\gamma_\pm > 0 \qquad (27)$$

for $|s|$ large and $|y|<\delta$ sufficiently small. Then (27) and $\omega(\gamma(s))>0$ imply the existence of $M>0$ and $\delta>0$ such that

$$\omega\bigl(x_0+\varphi(s)(\eta(s)+y)\bigr) \ge M\varphi(s)$$

for any $s\in\mathbb{R}$ and $|y|\le\delta$. Now, plugging (25) into (24), we derive the equations

$$y_j' = \frac{\omega(\gamma(s))}{\varphi(s)\,\omega(\gamma(s)+\varphi(s)y_j)}F\bigl(\gamma(s)+\varphi(s)y_j\bigr)-\frac{F(\gamma(s))}{\varphi(s)}-\frac{\varphi'(s)}{\varphi(s)}y_j+\varepsilon\,\frac{\omega(\gamma(s))}{\varphi(s)\,\omega(\gamma(s)+\varphi(s)y_j)}G_j\bigl(\gamma(s)+\varphi(s)y_1,\gamma(s)+\varphi(s)y_2,\theta(s)+\alpha,\varepsilon,\kappa\bigr),\quad j=1,2. \qquad (28)$$

From (11) it follows that

$$\frac{\varphi'(s)}{\varphi(s)} = \frac{\mu_-e^{-\mu_-s}+\mu_+e^{-\mu_+s}}{e^{-\mu_-s}+e^{-\mu_+s}} \to \mu_\mp \quad\text{as } s\to\pm\infty.$$

Next we note that from $G_j(x_0,x_0,t,\varepsilon,\kappa)=0$ it follows that the quantities

$$\frac{G_j\bigl(\gamma(s)+\varphi(s)y_1,\gamma(s)+\varphi(s)y_2,\alpha+\theta(s),\varepsilon,\kappa\bigr)}{\varphi(s)} = \frac{G_j\bigl(x_0+\varphi(s)(\eta(s)+y_1),x_0+\varphi(s)(\eta(s)+y_2),\alpha+\theta(s),\varepsilon,\kappa\bigr)}{\varphi(s)},\quad j=1,2,$$

are bounded uniformly with respect to $s\in\mathbb{R}$ and $\kappa\in\mathbb{R}^m$, $y_1$, $y_2$, $\varepsilon$ bounded.

The linearization of (28) at $y=0$, $\varepsilon=0$ is

$$y_j' = \left[F'(\gamma(s))-\frac{F(\gamma(s))\omega'(\gamma(s))}{\omega(\gamma(s))}-\frac{\varphi'(s)}{\varphi(s)}I\right]y_j,\quad j=1,2. \qquad (29)$$

Taking the limit as $s\to+\infty$ we get the systems

$$y_j' = \left[F'(x_0)-\frac{\mu_-\gamma_+\omega'(x_0)}{\omega'(x_0)\gamma_+}-\mu_-I\right]y_j,\quad j=1,2. \qquad (30)$$

Similarly, taking the limit as $s\to-\infty$ we get the systems

$$y_j' = \left[F'(x_0)-\frac{\mu_+\gamma_-\omega'(x_0)}{\omega'(x_0)\gamma_-}-\mu_+I\right]y_j,\quad j=1,2. \qquad (31)$$

From the proof of Lemma 2.1 (see also [[1], Lemma 3.1]) we know that (30) has the positive simple eigenvalues $\mu_+-\mu_-$ and $-\mu_-$, and (31) has the negative simple eigenvalues $\mu_--\mu_+$ and $-\mu_+$. From the roughness of exponential dichotomies it follows that both equations in (29) have an exponential dichotomy on both $\mathbb{R}_+$ and $\mathbb{R}_-$ with projections, respectively, $P_+=0$ and $P_-=I$. Hence (see also [19]) all solutions of the system

$$y' = -\left[F'(\gamma(s))-\frac{F(\gamma(s))\omega'(\gamma(s))}{\omega(\gamma(s))}-\frac{\varphi'(s)}{\varphi(s)}I\right]^{*}y, \qquad (32)$$

adjoint to (29), are bounded as $|s|\to\infty$. We let $\psi_1(s)$ and $\psi_2(s)$ be any two linearly independent solutions of (32).

### 4 Melnikov function and the original equation

In this section we give a condition for solving (28) for $y_1(t)$, $y_2(t)$ near the solution $y_1(t)=y_2(t)=0$ of the same equation with $\varepsilon=0$. Writing

$$\mathcal F(y) := -y'+\frac{\omega(\gamma(s))}{\varphi(s)\,\omega(\gamma(s)+\varphi(s)y)}F\bigl(\gamma(s)+\varphi(s)y\bigr)-\frac{F(\gamma(s))}{\varphi(s)}-\frac{\varphi'(s)}{\varphi(s)}y \qquad (33)$$

and

$$H_j(y_1,y_2,\eta,\varepsilon) := \frac{\omega(\gamma(s))}{\varphi(s)\,\omega(\gamma(s)+\varphi(s)y_j)}\,G_j\bigl(\gamma(s)+\varphi(s)y_1,\gamma(s)+\varphi(s)y_2,\theta(s)+\alpha,\varepsilon,\kappa\bigr),\quad j=1,2, \qquad (34)$$

we look for solutions $y_1(t),y_2(t):\mathbb{R}\to\mathbb{R}^2$ of

$$\mathcal F(y_1)+\varepsilon H_1(y_1,y_2,\eta,\varepsilon) = 0,\qquad \mathcal F(y_2)+\varepsilon H_2(y_1,y_2,\eta,\varepsilon) = 0 \qquad (35)$$

in the Banach space of $C^1$-functions on $\mathbb{R}$, bounded together with their derivatives, and with small norms. We observe that $\mathcal F(0)=0$ and that the equation $\mathcal F'(0)y=0$ reads

$$y' = \left[F'(\gamma(s))-\frac{F(\gamma(s))\omega'(\gamma(s))}{\omega(\gamma(s))}-\frac{\varphi'(s)}{\varphi(s)}I\right]y. \qquad (36)$$

In Section 3 (see also [1,19]) we have seen that (36) has an exponential dichotomy on both $\mathbb{R}_+$ and $\mathbb{R}_-$ with projections $P_+=I-P_-=0$. So the only bounded solution $y(t)$ of $\mathcal F'(0)y=0$ is $y(t)=0$; in other words, $N\mathcal F'(0)=\{0\}$. So we are led to prove the following.

Theorem 4.1 Let $Y$, $X$ be Banach spaces, let $\varepsilon\in\mathbb{R}$ be a small parameter and $\eta\in\mathbb{R}^{2d}$. Let $\mathcal F:Y\to X$, $H_{1,2}:Y\times Y\times\mathbb{R}^{2d+1}\to X$, $(y_1,y_2,\eta,\varepsilon)\mapsto H_{1,2}(y_1,y_2,\eta,\varepsilon)$ be $C^2$-functions such that

(a) $\mathcal F(0)=0$;

(b) $N\mathcal F'(0)=\{0\}$;

(c) there exist $\psi_1,\dots,\psi_d\in X^{*}$ such that $R\mathcal F'(0)=\{x\in X\mid\psi_ix=0,\ i=1,\dots,d\}$.

Define $M:\mathbb{R}^{2d}\to\mathbb{R}^{2d}$ by

$$M(\eta) := \begin{pmatrix}\psi_1H_1(0,0,\eta,0)\\ \vdots\\ \psi_dH_1(0,0,\eta,0)\\ \psi_1H_2(0,0,\eta,0)\\ \vdots\\ \psi_dH_2(0,0,\eta,0)\end{pmatrix} \qquad (37)$$

and suppose there exists $\bar\eta\in\mathbb{R}^{2d}$ such that $M(\bar\eta)=0$ and the derivative $M'(\bar\eta)$ is invertible. Then there exist $r>0$ and a unique $C^1$-function $\eta=\eta(\varepsilon)$, defined in a neighborhood of $\varepsilon=0\in\mathbb{R}$, such that

$$\lim_{\varepsilon\to 0}\eta(\varepsilon) = \bar\eta \qquad (38)$$

and, for $\eta=\eta(\varepsilon)$, $\varepsilon\ne 0$, (35) has a unique solution $(y_1(\varepsilon),y_2(\varepsilon))\in Y\times Y$ satisfying

$$\|(y_1(\varepsilon),y_2(\varepsilon))\| \le r.$$

Moreover, $y_j(\varepsilon)=\varepsilon\tilde y_j(\varepsilon)$ for $C^0$-functions $\tilde y_j(\varepsilon)\in Y$, and we have

$$\mathcal F'(0)\tilde y_j(0)+H_j(0,0,\bar\eta,0) = 0,\quad j=1,2. \qquad (39)$$

Proof We look for solutions $(y_1,y_2,\eta)$ of (35) close to $(y_1,y_2,\eta)=(0,0,\bar\eta)$. Let $P:X\to X$ be the projection such that $RP=R\mathcal F'(0)$. Note that $\operatorname{codim}R\mathcal F'(0)=d$. Using the implicit function theorem, we solve the projected equations

$$P\mathcal F(y_1)+\varepsilon PH_1(y_1,y_2,\eta,\varepsilon) = 0,\qquad P\mathcal F(y_2)+\varepsilon PH_2(y_1,y_2,\eta,\varepsilon) = 0$$

for unique $y_{1,2}=Y_{1,2}(\eta,\varepsilon)\in Y$ such that

$$Y_{1,2}(\eta,0) = 0,$$

provided $|\varepsilon|\le\varepsilon_0$ is sufficiently small and $\eta$ lies in a fixed closed ball $\Xi\subset\mathbb{R}^{2d}$ with $\bar\eta\in\Xi$. Note that $Y_{1,2}$ are $C^2$-smooth. Setting $Q=I-P$, we need to solve the bifurcation equations

$$Q\mathcal F\bigl(Y_j(\eta,\varepsilon)\bigr)+\varepsilon QH_j\bigl(Y_1(\eta,\varepsilon),Y_2(\eta,\varepsilon),\eta,\varepsilon\bigr) = 0,\quad j=1,2. \qquad (40)$$

Observe that

$$Qx = 0 \iff x\in RP=R\mathcal F'(0) \iff \psi_ix=0\ \text{for all } i=1,\dots,d. \qquad (41)$$

Then $Q\mathcal F'(0)=0$ and so

$$Q\mathcal F\bigl(Y_j(\eta,\varepsilon)\bigr) = O_j(\varepsilon^2),\quad j=1,2,$$

uniformly with respect to $\eta$. We conclude that (40) can be written as

$$\varepsilon QH_j(0,0,\eta,0) = -QR_j(\eta,\varepsilon),\quad j=1,2, \qquad (42)$$

where

$$R_j(\eta,\varepsilon) := \mathcal F\bigl(Y_j(\eta,\varepsilon)\bigr)+\varepsilon\bigl[H_j\bigl(Y_1(\eta,\varepsilon),Y_2(\eta,\varepsilon),\eta,\varepsilon\bigr)-H_j(0,0,\eta,0)\bigr].$$

Note that the $R_j(\eta,\varepsilon)$ are $C^2$-functions of $(\eta,\varepsilon)$ and that $\varepsilon^{-1}R_j(\eta,\varepsilon)=O(\varepsilon)$ uniformly with respect to $\eta$, so

$$\tilde R_j(\eta,\varepsilon) := \begin{cases}\varepsilon^{-1}R_j(\eta,\varepsilon), & \text{if }\varepsilon\ne 0,\\ 0, & \text{if }\varepsilon=0\end{cases}$$

is $C^1$ in $(\eta,\varepsilon)$. By (41), system (42) is equivalent to

$$M(\eta) = -\begin{pmatrix}\psi_i\tilde R_1(\eta,\varepsilon)\\ \psi_i\tilde R_2(\eta,\varepsilon)\end{pmatrix}_{i=1,\dots,d} = O(\varepsilon). \qquad (43)$$

Because of the assumptions, we can apply the implicit function theorem to (43) to obtain a $C^1$-function $\eta(\varepsilon)$, defined in a neighborhood of $\varepsilon=0$, satisfying (43) and such that (38) holds. Setting

$$y_j(\varepsilon) := Y_j\bigl(\eta(\varepsilon),\varepsilon\bigr),\quad j=1,2,$$

we see that $y_1(\varepsilon)$, $y_2(\varepsilon)$ are bounded $C^1$-solutions of (35) with $\eta=\eta(\varepsilon)$ such that $y_1(0)=0$, $y_2(0)=0$. Then we can write $y_1(\varepsilon)=\varepsilon\tilde y_1(\varepsilon)$, $y_2(\varepsilon)=\varepsilon\tilde y_2(\varepsilon)$ for continuous $\tilde y_1(\varepsilon),\tilde y_2(\varepsilon)\in Y$, where

$$\mathcal F\bigl(\varepsilon\tilde y_j(\varepsilon)\bigr)+\varepsilon H_j\bigl(\varepsilon\tilde y_1(\varepsilon),\varepsilon\tilde y_2(\varepsilon),\eta(\varepsilon),\varepsilon\bigr) = 0,\quad j=1,2.$$

Clearly, (39) follows by differentiating the above equality at $\varepsilon=0$. The proof is complete. □
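The mechanism of the proof — solve the projected equation, then kill the bifurcation equation near a simple zero of $M$ — can be seen in a drastically simplified finite-dimensional caricature. Every map below is invented for illustration (with $d=1$, a single equation, $Y=\mathbb{R}$, $X=\mathbb{R}^2$), and the two-step reduction is collapsed into a single Newton iteration on the pair $(y,\eta)$:

```python
import numpy as np

# Invented caricature of Theorem 4.1: F(y) = (y, y**2), H(y, eta, eps) =
# (1 + eta + y, sin(eta) + y). Here R F'(0) = span{e1}, psi = e2, so the
# "Melnikov function" is M(eta) = psi . H(0, eta, 0) = sin(eta), with the
# simple zero eta_bar = 0 and invertible derivative M'(eta_bar) = 1.
def G(z, eps):
    y, eta = z
    return np.array([y + eps * (1.0 + eta + y), y**2 + eps * (np.sin(eta) + y)])

def newton(eps, z0, tol=1e-14):
    z = np.array(z0, dtype=float)
    for _ in range(50):
        # finite-difference Jacobian of G at z
        J = np.empty((2, 2))
        h = 1e-7
        for j in range(2):
            dz = np.zeros(2); dz[j] = h
            J[:, j] = (G(z + dz, eps) - G(z - dz, eps)) / (2.0 * h)
        step = np.linalg.solve(J, G(z, eps))
        z = z - step
        if np.linalg.norm(step) < tol:
            break
    return z

eps = 1e-3
y_eps, eta_eps = newton(eps, (0.0, 0.0))
print(y_eps, eta_eps)   # both O(eps): the branch bifurcating from (0, eta_bar)
assert abs(y_eps) < 10 * eps and abs(eta_eps) < 10 * eps
assert np.linalg.norm(G((y_eps, eta_eps), eps)) < 1e-9
```

As in the theorem, the solution branch $(y(\varepsilon),\eta(\varepsilon))$ emanates from $(0,\bar\eta)$ and satisfies $y(\varepsilon)=O(\varepsilon)$.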

Remark 4.2 Note that, because of $M(\bar\eta)=0$, (39) is equivalent to

$$P\mathcal F'(0)\tilde y_j(0)+PH_j(0,0,\bar\eta,0) = 0,\quad j=1,2,$$

which has the unique solution

$$\tilde y_j(0) = -\bigl[P\mathcal F'(0)\bigr]^{-1}PH_j(0,0,\bar\eta,0),\quad j=1,2. \qquad (44)$$

Now we apply Theorem 4.1 to (28), with $\mathcal F(y)$, $H_1(y_1,y_2,\eta,\varepsilon)$, $H_2(y_1,y_2,\eta,\varepsilon)$ as in (33), (34) and

$$Y = C_b^1(\mathbb{R},\mathbb{R}^2),\qquad X = C_b^0(\mathbb{R},\mathbb{R}^2),\qquad \eta=(\alpha,\kappa)\in\mathbb{R}\times\mathbb{R}^m,$$

where $C_b^k(\mathbb{R},\mathbb{R}^2)$ is the Banach space of $C^k$-functions bounded together with their derivatives, with the usual sup-norm.

We already observed that $\mathcal F(0)=0$ and $N\mathcal F'(0)=\{0\}$. Moreover,

$$R\mathcal F'(0) = \Bigl\{x\in X\Bigm|\int_{-\infty}^{\infty}\psi_i(s)^{*}x(s)\,ds=0,\ i=1,2\Bigr\},$$

where $\psi_1(s)$, $\psi_2(s)$ have been defined at the end of the previous Section 3. So $d=2$, and we take $m=3$ (so that $\eta=(\alpha,\kappa)\in\mathbb{R}^{2d}=\mathbb{R}^4$). We recall from [19] that $\psi_j(s)=\varphi(s)v_j(\theta(s))$, where the $v_j(t)$ are solutions of the adjoint equation of (16):

$$\omega(x_h(t))v' = \frac{F(x_h(t))^{*}v}{\omega(x_h(t))}\,\omega'(x_h(t))^{*}-F'(x_h(t))^{*}v,\quad t\in\,]T_-,T_+[, \qquad (45)$$

with $v_j(0)\in RP_-^{\perp}\cap NP_+^{\perp}$. Hence (37) reads

$$M(\alpha,\kappa) = \begin{pmatrix}\int_{-\infty}^{\infty}\psi_1(s)^{*}\varphi(s)^{-1}G_1\bigl(\gamma(s),\gamma(s),\alpha+\theta(s),0,\kappa\bigr)\,ds\\ \int_{-\infty}^{\infty}\psi_2(s)^{*}\varphi(s)^{-1}G_1\bigl(\gamma(s),\gamma(s),\alpha+\theta(s),0,\kappa\bigr)\,ds\\ \int_{-\infty}^{\infty}\psi_1(s)^{*}\varphi(s)^{-1}G_2\bigl(\gamma(s),\gamma(s),\alpha+\theta(s),0,\kappa\bigr)\,ds\\ \int_{-\infty}^{\infty}\psi_2(s)^{*}\varphi(s)^{-1}G_2\bigl(\gamma(s),\gamma(s),\alpha+\theta(s),0,\kappa\bigr)\,ds\end{pmatrix}$$

or, passing to the time $t=\theta(s)$:

$$M(\alpha,\kappa) = \begin{pmatrix}\int_{T_-}^{T_+}v_1(t)^{*}G_1\bigl(x_h(t),x_h(t),t+\alpha,0,\kappa\bigr)\,\frac{dt}{\omega(x_h(t))}\\ \int_{T_-}^{T_+}v_2(t)^{*}G_1\bigl(x_h(t),x_h(t),t+\alpha,0,\kappa\bigr)\,\frac{dt}{\omega(x_h(t))}\\ \int_{T_-}^{T_+}v_1(t)^{*}G_2\bigl(x_h(t),x_h(t),t+\alpha,0,\kappa\bigr)\,\frac{dt}{\omega(x_h(t))}\\ \int_{T_-}^{T_+}v_2(t)^{*}G_2\bigl(x_h(t),x_h(t),t+\alpha,0,\kappa\bigr)\,\frac{dt}{\omega(x_h(t))}\end{pmatrix}. \qquad (46)$$
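Once $v_1$, $v_2$ and $x_h$ are known, each entry of (46) is a one-dimensional quadrature. The following sketch assembles one such entry with invented placeholder data (no concrete circuit behind it): $v(t)\equiv 1$ stands for a bounded adjoint solution, and the invented $G$ vanishes at the singularity like $\omega(x_h(t))$ does, so the integrand stays bounded on $]T_-,T_+[$ even though $\omega(x_h(t))\to 0$ at the endpoints:

```python
import numpy as np

# Placeholder data for one entry of (46): omega_xh vanishes at t = ±T, and G
# carries the same vanishing factor (recall G(x0, x0, t, 0, kappa) = 0), so
# v^T G / omega(x_h) is bounded on the open interval ]T_-, T_+[.
T = 6.25                                   # plays the role of -T_- = T_+
t = np.linspace(-T, T, 200001)[1:-1]       # open interval ]T_-, T_+[
h = t[1] - t[0]

omega_xh = np.cos(np.pi * t / (2.0 * T))   # > 0 inside, -> 0 at t = ±T

def M(alpha):
    G = np.sin(2.0 * np.pi * (t + alpha)) * omega_xh   # 1-periodic forcing
    integrand = G / omega_xh                           # v^T G / omega(x_h)
    return np.sum(integrand[1:] + integrand[:-1]) * h / 2.0   # trapezoid rule

# For this toy data M(alpha) = sin(2*pi*alpha)/pi: a simple zero at alpha = 0
# with M'(0) = 2 != 0, which is exactly the shape required by condition (47).
print(M(0.0))
assert abs(M(0.0)) < 1e-8
dM = (M(1e-4) - M(-1e-4)) / 2e-4
assert abs(dM - 2.0) < 1e-3
```

A simple zero of the full $4\times 4$ map $M(\alpha,\kappa)$ would then be located by any standard root finder, which is the numerical counterpart of condition (47) below.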

A direct application of Theorem 4.1 gives the following.

Theorem 4.3 Let $m=3$ and let $M(\alpha,\kappa)$ be given as in (46), where $v_1(t)$, $v_2(t)$ are two linearly independent bounded solutions (on $\mathbb{R}$) of the adjoint equation (45). Suppose that $\bar\alpha$ and $\bar\kappa$ exist such that

$$M(\bar\alpha,\bar\kappa) = 0 \quad\text{and}\quad \frac{\partial M(\alpha,\kappa)}{\partial(\alpha,\kappa)}\Big|_{(\bar\alpha,\bar\kappa)}\in GL(4,\mathbb{R}). \qquad (47)$$

Then there exist $\bar\varepsilon>0$, $\rho>0$, unique $C^1$-functions $\alpha(\varepsilon)$ and $\kappa(\varepsilon)$ with $\alpha(0)=\bar\alpha$ and $\kappa(0)=\bar\kappa$, defined for $|\varepsilon|<\bar\varepsilon$, and a unique solution $(z_1(s,\varepsilon),z_2(s,\varepsilon))$ of (24) with $\alpha=\alpha(\varepsilon)$, $\kappa=\kappa(\varepsilon)$, $0<|\varepsilon|<\bar\varepsilon$, such that

$$\sup_{s\in\mathbb{R}}|z_j(s,\varepsilon)-\gamma(s)|\varphi(s)^{-1} < \rho,\quad j=1,2. \qquad (48)$$

Moreover,

$$\sup_{s\in\mathbb{R}}|z_j(s,\varepsilon)-\gamma(s)|\varphi(s)^{-1} = O(\varepsilon),\quad j=1,2.$$

Remark 4.4 (i) Equation (48) implies

$$z_j(s,\varepsilon) = \gamma(s)+\varepsilon\tilde y_j(s,\varepsilon)\varphi(s),\quad j=1,2,$$

for $C^0$-functions $\tilde y_j(s,\varepsilon)$ with $\sup_{|\varepsilon|\le\bar\varepsilon}\sup_{s\in\mathbb{R}}|\tilde y_j(s,\varepsilon)|<\infty$. Then we have

$$z_j(s,\varepsilon) = \gamma(s)+\varepsilon\tilde y_j(s,\varepsilon)\varphi(s) = \gamma(s)+\varepsilon\tilde y_j(s,0)\varphi(s)+\varepsilon w_j(s,\varepsilon)\varphi(s)$$

with $w_j(s,\varepsilon)=\tilde y_j(s,\varepsilon)-\tilde y_j(s,0)$, so $\lim_{\varepsilon\to 0}\sup_{s\in\mathbb{R}}|w_j(s,\varepsilon)|=0$. Hence

$$\lim_{\varepsilon\to 0}\sup_{s\in\mathbb{R}}\bigl|z_j(s,\varepsilon)-\gamma(s)-\varepsilon\tilde y_j(s,0)\varphi(s)\bigr|\varphi(s)^{-1}\varepsilon^{-1} = 0,\quad j=1,2, \qquad (49)$$

which gives a first order approximation of $z_j(s,\varepsilon)$. Next, $\tilde y_j(s,0)$ can be computed using (44), adapted to this case. Hence the $\tilde y_j(s,0)$ are bounded solutions of

$$y_j' = \left[F'(\gamma(s))-\frac{F(\gamma(s))\omega'(\gamma(s))}{\omega(\gamma(s))}-\frac{\varphi'(s)}{\varphi(s)}I\right]y_j+\frac{1}{\varphi(s)}G_j\bigl(\gamma(s),\gamma(s),\theta(s)+\bar\alpha,0,\bar\kappa\bigr).$$

Since (36) has exponential dichotomies on both $\mathbb{R}_+$ (with projection $P_+=0$) and $\mathbb{R}_-$ (with projection $P_-=I$), it follows that

$$\tilde y_j(s,0) = \begin{cases}\displaystyle -\int_s^{\infty}X(s)X^{-1}(z)\frac{1}{\varphi(z)}G_j\bigl(\gamma(z),\gamma(z),\theta(z)+\bar\alpha,0,\bar\kappa\bigr)\,dz & \text{for } s\ge 0,\\[2mm] \displaystyle \int_{-\infty}^{s}X(s)X^{-1}(z)\frac{1}{\varphi(z)}G_j\bigl(\gamma(z),\gamma(z),\theta(z)+\bar\alpha,0,\bar\kappa\bigr)\,dz & \text{for } s\le 0, \end{cases} \qquad (50)$$

where $X(s)$ is the fundamental solution of (36). Note that the formulas in (50) are well defined at $s=0$, i.e. $\tilde y_j(0^-,0)=\tilde y_j(0^+,0)$, due to the first assumption in (47). Next, passing to the time $t=\theta(s)$ and taking $z_j(t):=\varphi(\theta^{-1}(t))\tilde y_j(\theta^{-1}(t),0)$, we get

$$z_j(t) = \int_{-\infty}^{\theta^{-1}(t)}\varphi\bigl(\theta^{-1}(t)\bigr)X\bigl(\theta^{-1}(t)\bigr)X^{-1}(z)\frac{1}{\varphi(z)}G_j\bigl(\gamma(z),\gamma(z),\theta(z)+\bar\alpha,0,\bar\kappa\bigr)\,dz = \int_{T_-}^{t}\varphi\bigl(\theta^{-1}(t)\bigr)X\bigl(\theta^{-1}(t)\bigr)X^{-1}\bigl(\theta^{-1}(u)\bigr)\varphi\bigl(\theta^{-1}(u)\bigr)^{-1}G_j\bigl(x_h(u),x_h(u),u+\bar\alpha,0,\bar\kappa\bigr)\,\frac{du}{\omega(x_h(u))}$$

for $T_-<t\le 0$, and

$$z_j(t) = -\int_t^{T_+}\varphi\bigl(\theta^{-1}(t)\bigr)X\bigl(\theta^{-1}(t)\bigr)X^{-1}\bigl(\theta^{-1}(u)\bigr)\varphi\bigl(\theta^{-1}(u)\bigr)^{-1}G_j\bigl(x_h(u),x_h(u),u+\bar\alpha,0,\bar\kappa\bigr)\,\frac{du}{\omega(x_h(u))}$$

for $0\le t<T_+$. Note that $z_j(t)$ solves

$$\omega(x_h(t))z' = F'(x_h(t))z-\frac{F(x_h(t))\omega'(x_h(t))z}{\omega(x_h(t))}+G_j\bigl(x_h(t),x_h(t),t+\bar\alpha,0,\bar\kappa\bigr)$$

with $\sup_{t\in\,]T_-,T_+[}|z_j(t)|\varphi(\theta^{-1}(t))^{-1}<\infty$, and $\varphi(\theta^{-1}(t))X(\theta^{-1}(t))$ is a fundamental solution of (16).

(ii) Using (49), the functions $x_j(t,\varepsilon):=z_j\bigl(\theta^{-1}(t-\alpha(\varepsilon)),\varepsilon\bigr)$ are bounded solutions of (7) on the interval $]T_-+\alpha(\varepsilon),T_++\alpha(\varepsilon)[$ such that

$$\lim_{\varepsilon\to 0}\sup_{t\in\,]T_-,T_+[}\bigl|x_j\bigl(t+\alpha(\varepsilon),\varepsilon\bigr)-x_h(t)-\varepsilon z_j(t)\bigr|\varphi\bigl(\theta^{-1}(t)\bigr)^{-1}\varepsilon^{-1} = 0,\quad j=1,2.$$