
Melnikov theory for weakly coupled nonlinear RLC circuits

Abstract

We apply dynamical systems methods and Melnikov theory to study small amplitude perturbations of coupled implicit differential equations. In particular, we show the persistence of orbits connecting singularities in finite time, provided a Melnikov-like condition holds. An application is given to a coupled nonlinear RLC system.

MSC: Primary 34A09; secondary 34C23; 37G99.

1 Introduction

In [1], motivated by [2, 3], the equation

$$(u+f(u))'' + \varepsilon\gamma (u+f(u))' + u + \varepsilon h(t+\alpha,u,\varepsilon)=0$$
(1)

modeling nonlinear RLC circuits has been studied. It is assumed that $f(u)$ and $h(t,u,\varepsilon)$ are smooth functions, with $f(u)$ at least quadratic at the origin and satisfying suitable assumptions. Setting $v=(u+f(u))'$, the equation reads

$$(1+f'(u))u' = v,\qquad v' = -u - \varepsilon[h(t+\alpha,u,\varepsilon)+\gamma v].$$
(2)

It is assumed that, for some $u_0\in\mathbb{R}$, we have $f'(u_0)+1=0$ and $u_0 f''(u_0)<0$. So for $\varepsilon=0$, (2) has the Hamiltonian

$$H(u,v)= v^2 + 2\int_{u_0}^{u} \sigma\,(1+f'(\sigma))\,d\sigma$$

passing through $(u_0,0)$. Clearly $H'(u_0,0)=(0,0)$, and the Hessian of $H$ at $(u_0,0)$ is

$$H''(u_0,0)=\begin{pmatrix} 2u_0 f''(u_0) & 0\\ 0 & 2\end{pmatrix},$$

so the condition $u_0 f''(u_0)<0$ means that $(u_0,0)$ is a saddle for $H$. Multiplying the second equation by $1+f'(u)$ we get the system

$$(1+f'(u))\begin{pmatrix} u'\\ v'\end{pmatrix}=\begin{pmatrix} v\\ -(1+f'(u))\{u+\varepsilon[h(t+\alpha,u,\varepsilon)+\gamma v]\}\end{pmatrix}.$$
(3)
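For a concrete illustration of the saddle condition, take the quadratic nonlinearity $f(u)=u^2$ used later in Section 5 (this choice of $f$ is an assumption made only for this sketch, not part of the general argument); then $u_0=-\frac12$ and the Hessian structure above can be checked symbolically:

```python
import sympy as sp

u, v, sigma = sp.symbols('u v sigma', real=True)
f = u**2                       # illustrative nonlinearity (as in Section 5)
u0 = sp.Rational(-1, 2)        # f'(u0) + 1 = 2*u0 + 1 = 0

# Hamiltonian H(u,v) = v^2 + 2*int_{u0}^{u} sigma*(1 + f'(sigma)) dsigma
H = v**2 + 2*sp.integrate(sigma*(1 + sp.diff(f, u).subs(u, sigma)), (sigma, u0, u))

grad = [sp.diff(H, u), sp.diff(H, v)]
hess = sp.hessian(H, (u, v))

assert all(g.subs({u: u0, v: 0}) == 0 for g in grad)   # (u0, 0) is a critical point
H2 = hess.subs({u: u0, v: 0})
assert H2 == sp.Matrix([[-2, 0], [0, 2]])              # diag(2*u0*f''(u0), 2)
assert H2.det() < 0                                    # indefinite Hessian: a saddle
print("(u0, 0) is a saddle of H for f(u) = u**2")
```

Here $2u_0f''(u_0)=-2<0$, so the Hessian is indefinite and $(u_0,0)=(-\frac12,0)$ is indeed a saddle, as the text asserts.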

Note that (2) falls into the class of implicit ordinary differential equations (IODEs)

$$A(x)x' = f(x)+\varepsilon h(t,x,\varepsilon,\kappa),\qquad (\varepsilon,\kappa)\in\mathbb{R}\times\mathbb{R}^m,$$
(4)

with $A(u,v)=\begin{pmatrix} 1+f'(u) & 0\\ 0 & 1\end{pmatrix}$. Obviously, $\det A(u,v)=1+f'(u)$ vanishes on the line $(u_0,v)$, and the condition $f''(u_0)\ne 0$ implies that the line $u=u_0$ consists of noncritical 0-singularities for (3) (see [[4], p.163]). Let $NL$ denote the kernel of a linear map $L$ and $RL$ its range. Then $RA(u_0,0)$ is the subspace with zero first component, and hence the right-hand side of (3) belongs to $RA(u_0,0)$ if and only if $v=0$. So all the singularities $(u_0,v)$ with $v\ne 0$ are impasse points, while $(u_0,0)$ is a so-called I-point (see [[4], pp.163-166]). Quasilinear implicit differential equations, such as (4), find applications in a large number of physical sciences and have been studied by several authors [4-12]. On the other hand, there are many other works on implicit differential equations [13-18] dealing with more general implicit differential systems by using analytical and topological methods.

Passing from (2) to (3) corresponds, in the general case, to multiplying (4) by the adjugate matrix $A^a(x)$:

$$\omega(x)x' = A^a(x)\big[f(x)+\varepsilon h(t,x,\varepsilon,\kappa)\big],$$

where $\omega(x)=\det A(x)$. Here we note that $A$ and $x$ may have different dimensions in this paper, depending on the equation under consideration; since the concrete dimension is always clear from the context, we do not use different notations for $A$ and $x$. The basic assumptions in [1] are $\omega(x_0)=0$, $\omega'(x_0)\ne 0$ and $A^a(x_0)f(x_0)=0$, $A^a(x_0)h(t,x_0,\varepsilon,\kappa)=0$ for some $x_0$ (that is, $x_0$ is an I-point for (4)), together with the existence of a solution $x(t)$ in a bounded interval $J$ tending to $x_0$ as $t$ tends to the endpoints of $J$.

It is well known [4, 8] that $\omega(x_0)=0$ and $\omega'(x_0)\ne 0$ imply

$$\dim NA(x_0)=1,\qquad RA^a(x_0)=NA(x_0),\qquad NA^a(x_0)=RA(x_0),$$
(5)

and then $A^a(x_0)f(x_0)=0$ is equivalent to the fact that $f(x_0)\in RA(x_0)$.

Let $F(x):=A^a(x)f(x)$. It has been proved in [19] that (5) implies that the rank of $F'(x_0)$ is at most 2. So, if $x\in\mathbb{R}^n$ with $n>2$, then $x=x_0$ cannot be hyperbolic for the map $x\mapsto F'(x_0)x$.

In this paper we study coupled IODEs such as

$$A_0(x_1)x_1' = f(x_1)+\varepsilon g_1(t,x_1,x_2,\varepsilon,\kappa),\qquad A_0(x_2)x_2' = f(x_2)+\varepsilon g_2(t,x_1,x_2,\varepsilon,\kappa),$$
(6)

with $x_1,x_2\in\mathbb{R}^2$, $\det A_0(x_0)=0$, $(\det A_0)'(x_0)\ne 0$, $f(x_0),\, g_j(t,x_0,x_0,\varepsilon,\kappa)\in RA_0(x_0)$, and other assumptions that will be specified below. Let us remark that (6) is a special case of the general equation (4) with, among other things,

$$A(x)=\begin{pmatrix} A_0(x_1) & 0\\ 0 & A_0(x_2)\end{pmatrix},\qquad x=(x_1,x_2),$$

hence $\det A(x)=\det A_0(x_1)\det A_0(x_2)$ satisfies $\det A(x_0,x_0)=0$, $(\det A)'(x_0,x_0)=0$ and $(\det A)''(x_0,x_0)\ne 0$. Thus $(x_0,x_0)$ is not an I-point. Multiplying the first equation by $A_0^a(x_1)$ and the second by $A_0^a(x_2)$ we obtain the system

$$\omega(x_1)x_1' = F(x_1)+\varepsilon G_1(x_1,x_2,t,\varepsilon,\kappa),\qquad \omega(x_2)x_2' = F(x_2)+\varepsilon G_2(x_1,x_2,t,\varepsilon,\kappa).$$
(7)

We assume that $\omega(x):=\det A_0(x)$, $F(x)$ and $G_j(x_1,x_2,t,\varepsilon,\kappa)$ satisfy the following assumptions:

  1. (C1)

    $F\in C^2(\mathbb{R}^2,\mathbb{R}^2)$, $\omega\in C^2(\mathbb{R}^2,\mathbb{R})$ and the unperturbed equation

    $$\omega(x)x' = F(x)$$
    (8)

    possesses a noncritical singularity at $x_0$, i.e. $\omega(x_0)=0$ and $\omega'(x_0)\ne 0$.

  2. (C2)

    $F(x_0)=0$, the spectrum $\sigma(F'(x_0))=\{\mu_-,\mu_+\}$ with $\mu_-<0<\mu_+$, and

    $$x' = F(x)$$

    has a solution $\gamma(s)$ homoclinic to $x_0$, that is, $\lim_{s\to\pm\infty}\gamma(s)=x_0$, and $\omega(\gamma(s))\ne 0$ for any $s\in\mathbb{R}$. Without loss of generality we may, and will, assume $\omega(\gamma(s))>0$ for any $s\in\mathbb{R}$. Moreover, $G_i\in C^2(\mathbb{R}^{6+m},\mathbb{R}^2)$, $i=1,2$, are 1-periodic in $t$ with $G_i(x_0,x_0,t,\varepsilon,\kappa)=0$ for any $t\in\mathbb{R}$, $\kappa\in\mathbb{R}^m$ and $\varepsilon$ sufficiently small.

  3. (C3)

    Let $\gamma_\pm$ be the eigenvectors of $F'(x_0)$ corresponding to the eigenvalues $\mu_\mp$, respectively. Then $\langle\omega'(x_0),\gamma_\pm\rangle>0$ (equivalently, $\omega'(x_0)\gamma_\pm>0$).

From (C2) we see that $\Gamma(s):=\begin{pmatrix}\gamma(s)\\ \gamma(s)\end{pmatrix}$ is a bounded solution of the equation

$$x_1' = F(x_1),\qquad x_2' = F(x_2)$$
(9)

and that $x_0$ persists as a singularity of (7). So this paper is a continuation of [1, 19], but here we study more degenerate IODEs.

The objective of this paper is to give conditions, besides (C1)-(C3), assuring that for $|\varepsilon|\ll 1$ the coupled equations (7) have a solution lying in a neighborhood of the orbit $\{\Gamma(s)\mid s\in\mathbb{R}\}$ and reaching $(x_0,x_0)$ in finite time. Our approach mimics that of [1] and uses Melnikov methods to derive the needed conditions. Let us briefly describe the content of this paper. In Section 2 we make a few remarks concerning assumptions (C1)-(C3). Then, in Section 3, we change time to reduce equation (7) to a smooth perturbation of (9) whose unperturbed part has the solution $\Gamma(s)$. Next, in Section 4 we derive the Melnikov condition. Finally, Section 5 is devoted to the application of our result to coupled equations of the form (1) for RLC circuits, while some computations are postponed to the appendix.

We emphasize that the Melnikov technique is useful to predict the existence of transverse homoclinic orbits in mechanical systems [20, 21], together with the associated chaotic behavior of solutions. The result in this paper is somewhat different in that we apply the method to show the existence of orbits connecting a singularity in finite time.

2 Comments on the assumptions

By following [1, 19] we note that, since $\gamma(s)\to x_0$ as $|s|\to\infty$, $\gamma'(s)$ is a bounded solution of the linear equation $x' = F'(\gamma(s))x$. Hence $|\gamma'(s)|\le k e^{-\mu|s|}$ for some $\mu>0$. We then get, for $s\ge 0$,

$$|\gamma(s)-x_0| \le \int_s^{\infty} |\gamma'(\tau)|\,d\tau \le \mu^{-1}k e^{-\mu s}.$$

So

$$\limsup_{s\to\infty} \frac{\log|\gamma(s)-x_0|}{s} \le -\mu<0.$$

From [[22], Theorem 4.3, p.335 and Theorem 4.5, p.338] it follows that

$$\limsup_{s\to\infty} \frac{\log|\gamma(s)-x_0|}{s} = \mu_- <0,$$
(10)

and there exist a constant $\delta>0$ and a solution $\gamma_+ e^{\mu_- s}$ of $x' = F'(x_0)x$ such that

$$|\gamma(s)-x_0-\gamma_+ e^{\mu_- s}| = O\big(e^{(\mu_- -\delta)s}\big),\quad\text{as } s\to\infty.$$

Note that $\gamma_+\ne 0$, since otherwise $\gamma(s)-x_0 = O(e^{(\mu_- -\delta)s})$, contradicting (10). Hence $\gamma_+$ is an eigenvector of $F'(x_0)$ with eigenvalue $\mu_-$. We then have

$$\big|(\gamma(s)-x_0)e^{-\mu_- s} - \gamma_+\big| \le c_1 e^{-\delta s}$$

for a suitable constant $c_1\ge 0$. As a consequence,

$$\lim_{s\to\infty} (\gamma(s)-x_0)e^{-\mu_- s} = \gamma_+.$$

Next,

$$|\gamma_+| - c_1 e^{-\delta s} \le |\gamma(s)-x_0|\, e^{-\mu_- s} \le |\gamma_+| + c_1 e^{-\delta s}.$$

Taking logarithms, dividing by $s$ and letting $s\to\infty$ we get

$$\lim_{s\to\infty} \frac{\log|\gamma(s)-x_0|}{s} = \mu_-,$$

that is, in (10) the $\limsup$ can be replaced with a limit. Similarly, changing $s$ to $-s$:

$$\lim_{s\to-\infty} \frac{\log|\gamma(s)-x_0|}{s} = \mu_+.$$

Next, set

$$\varphi(s):= \frac{1}{e^{-\mu_- s} + e^{-\mu_+ s}}.$$
(11)

Since $\varphi(s)e^{-\mu_- s}\to 1$ as $s\to\infty$ and $\varphi(s)e^{-\mu_+ s}\to 1$ as $s\to-\infty$, we then have

$$\lim_{s\to\pm\infty} \frac{\gamma(s)-x_0}{\varphi(s)} = \gamma_\pm \ne 0$$
(12)

and

$$\lim_{s\to\pm\infty} \frac{\omega(\gamma(s))}{\varphi(s)} = \lim_{s\to\pm\infty} \frac{\langle\omega'(x_0),\gamma(s)-x_0\rangle + o(\gamma(s)-x_0)}{\varphi(s)} = \langle\omega'(x_0),\gamma_\pm\rangle.$$

From (C2) we know that $\omega(\gamma(s))>0$ for any $s\in\mathbb{R}$, so $\langle\omega'(x_0),\gamma_\pm\rangle\ge 0$. Hence condition (C3) means that $\gamma(s)$ tends transversally to the singular manifold $\omega^{-1}(0)$ as it approaches $x_0$.
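The limits (12) can be checked numerically on a concrete orbit. Anticipating the explicit homoclinic of the RLC example in Section 5 (where $\mu_\pm=\pm1$ and $x_0=(-\frac12,0)$; this anticipation is only for illustration), the $\varphi$-normalized limits are $3(1,\mp1)$, i.e. the eigenvector directions $\gamma_\pm$ up to the positive factor 3 coming from the chosen normalization of $\varphi$:

```python
import math

mu_m, mu_p = -1.0, 1.0                     # mu_-, mu_+ of the Section 5 example
x0 = (-0.5, 0.0)

def gamma(s):                              # explicit homoclinic of Section 5
    th = math.tanh(s/2)
    sech2 = 1/math.cosh(s/2)**2
    return (0.25*(1 - 3*th**2), -0.75*th*sech2)

def phi(s):                                # phi(s) = 1/(e^{-mu_- s} + e^{-mu_+ s})
    return 1.0/(math.exp(-mu_m*s) + math.exp(-mu_p*s))

# (gamma(s) - x0)/phi(s) -> 3*(1, -1) as s -> +oo and 3*(1, 1) as s -> -oo
rp = [(g - x)/phi(20.0) for g, x in zip(gamma(20.0), x0)]
rm = [(g - x)/phi(-20.0) for g, x in zip(gamma(-20.0), x0)]
assert abs(rp[0] - 3) < 1e-6 and abs(rp[1] + 3) < 1e-6
assert abs(rm[0] - 3) < 1e-6 and abs(rm[1] - 3) < 1e-6
print("ratios at s = +-20:", rp, rm)
```

In particular $\langle\omega'(x_0),\gamma_\pm\rangle>0$ there, so (C3) holds for that example, consistently with Section 5.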

As in [19] it is easily seen that

$$\lim_{s\to\pm\infty} \frac{\gamma'(s)}{\omega(\gamma(s))} = \frac{F'(x_0)\gamma_\pm}{\langle\omega'(x_0),\gamma_\pm\rangle} = \frac{\mu_\mp\,\gamma_\pm}{\langle\omega'(x_0),\gamma_\pm\rangle} \ne 0$$
(13)

and that $\gamma'(s)/\omega(\gamma(s))$ solves the equation

$$x' = \left[F'(\gamma(s)) - \frac{F(\gamma(s))\,\omega'(\gamma(s))}{\omega(\gamma(s))}\right]x = F'(\gamma(s))x - \frac{\omega'(\gamma(s))x}{\omega(\gamma(s))}\,F(\gamma(s)).$$
(14)

So $\gamma'(s)/\omega(\gamma(s))$ is a bounded solution of (14). Next, setting as in [19]

$$\theta(s):= \int_0^s \omega(\gamma(\tau))\,d\tau$$
(15)

and $x_h(t)=\gamma(\theta^{-1}(t))$, it is easily seen that $x_h(t)$ satisfies $\omega(x)x'=F(x)$, whose linearization along $x_h(t)$ is

$$F'(x_h(t))z = x_h'(t)\,\omega'(x_h(t))z + \omega(x_h(t))z' = \frac{F(x_h(t))\,\omega'(x_h(t))z}{\omega(x_h(t))} + \omega(x_h(t))z',$$

i.e.

$$\omega(x_h(t))z' = F'(x_h(t))z - \frac{F(x_h(t))\,\omega'(x_h(t))z}{\omega(x_h(t))}.$$
(16)

Note, then, that (14) is derived from (16) by the change $x(s)=z(\theta(s))$. This fact should clarify why we need to consider the linear system (14) instead of $x' = F'(\gamma(s))x$. However, see [19] for a remark concerning the space of bounded solutions of (14) and that of the equation $x' = F'(\gamma(s))x$.
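That $\gamma'(s)/\omega(\gamma(s))$ solves (14), and the limit (13), can be verified on the explicit example of Section 5, where $F(x)=(w,-(2u+1)u)$, $\omega=2u+1$ and $\omega'=(2,0)$ (again an anticipation used only as an illustrative sketch):

```python
import sympy as sp

s = sp.symbols('s', real=True)
u0 = (1 - 3*sp.tanh(s/2)**2)/4                 # homoclinic of Section 5
gam = sp.Matrix([u0, sp.diff(u0, s)])          # gamma(s) = (u0, u0')
u, w = sp.symbols('u w', real=True)
Fvec = sp.Matrix([w, -(2*u + 1)*u])            # F(x), x = (u, w)
omega = 2*u + 1
J = Fvec.jacobian([u, w])                      # F'(x)

sub = {u: gam[0], w: gam[1]}
og = omega.subs(sub)                           # omega(gamma(s))
x = sp.diff(gam, s)/og                         # candidate solution gamma'/omega(gamma)
# residual of (14): x' - F'(gamma)x + (omega'(gamma)x/omega(gamma)) F(gamma)
res = sp.diff(x, s) - J.subs(sub)*x + (2*x[0])/og*Fvec.subs(sub)
r0, r1 = sp.lambdify(s, res[0], 'math'), sp.lambdify(s, res[1], 'math')
for sv in (-3.0, -1.0, 0.5, 2.0):
    assert abs(r0(sv)) < 1e-9 and abs(r1(sv)) < 1e-9

# limit (13): gamma'/omega -> mu_- gamma_+/<omega'(x0),gamma_+> = (-1/2, 1/2)
x0f, x1f = sp.lambdify(s, x[0], 'math'), sp.lambdify(s, x[1], 'math')
assert abs(x0f(30.0) + 0.5) < 1e-6 and abs(x1f(30.0) - 0.5) < 1e-6
print("gamma'/omega(gamma) solves (14); limit (13) confirmed")
```

The residual vanishes identically (up to floating-point rounding), and the limit at $+\infty$ equals $\mu_-\gamma_+/\langle\omega'(x_0),\gamma_+\rangle=(-\frac12,\frac12)$ for that example.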

We now prove that $\gamma'(s)/\omega(\gamma(s))$ is, up to a multiplicative constant, the unique solution of equation (14) which is bounded on $\mathbb{R}$. This is a kind of nondegeneracy of $\gamma(s)$.

Lemma 2.1 Assume (C2) and (C3) hold. Then, up to a multiplicative constant, $\gamma'(s)/\omega(\gamma(s))$ is the unique solution of (14) which is bounded on $\mathbb{R}$.

Proof From [[19], Lemma 3.1] it follows that the linear map

$$x \mapsto \left[F'(x_0) - \frac{\mu_-\,\gamma_+\,\omega'(x_0)}{\omega'(x_0)\gamma_+} - \mu_- I\right]x$$

has the simple eigenvalues $\mu_+-\mu_-$ and $-\mu_-$. Let $\mu:=\mu_+/2$; then the linear map

$$x \mapsto \left[F'(x_0) - \frac{\mu_-\,\gamma_+\,\omega'(x_0)}{\omega'(x_0)\gamma_+} - \mu I\right]x$$

has the eigenvalues $\pm\mu$; moreover, since (by (13))

$$c_1 \le \frac{|\gamma'(s)|}{\omega(\gamma(s))} \le c_2$$

for two positive constants $0<c_1<c_2$, it follows that $\gamma_0(s):= \frac{\gamma'(s)}{\omega(\gamma(s))}e^{-\mu s}$ is a solution of

$$x' = \left[F'(\gamma(s)) - \frac{F(\gamma(s))\,\omega'(\gamma(s))}{\omega(\gamma(s))} - \mu I\right]x$$
(17)

satisfying

$$\frac{c_1}{c_2}|\gamma_0(s_1)| \le |\gamma_0(s_2)|\,e^{\mu(s_2-s_1)} \le \frac{c_2}{c_1}|\gamma_0(s_1)|$$

for all $0\le s_1\le s_2$. Then (17) satisfies the assumptions of [[19], Theorem 5.3], and hence its conclusion with $\operatorname{rank} P_+=1$; that is, the fundamental matrix $X_+(s)$ of (17) satisfies

$$\|X_+(s_2)P_+X_+^{-1}(s_1)\| \le k e^{-\mu(s_2-s_1)},\quad 0\le s_1\le s_2,$$
$$\|X_+(s_2)(I-P_+)X_+^{-1}(s_1)\| \le k e^{\tilde\mu(s_2-s_1)},\quad 0\le s_2\le s_1,$$

where $0\le\tilde\mu<\mu$. However, it is well known (see [23-25]) that $RP_+$ is the space of initial conditions of the solutions of (17) bounded on $[0,\infty[$, which then tend to zero as $s\to\infty$ at the exponential rate $e^{-\mu s}$. As a consequence, a solution $x(s)$ of (14) is bounded on $[0,\infty[$ if and only if $x(s)e^{-\mu s}$ is a solution of (17) with initial condition in $RP_+$. We conclude that the space of solutions of (14) that are bounded on $[0,\infty[$ is one dimensional.

Incidentally, since the fundamental matrix of (14) is $X(s)=X_+(s)e^{\mu s}$, we note that it satisfies

$$\|X(s_2)P_+X^{-1}(s_1)\| \le k,\quad 0\le s_1\le s_2,$$
$$\|X(s_2)(I-P_+)X^{-1}(s_1)\| \le k e^{(\mu+\tilde\mu)(s_2-s_1)},\quad 0\le s_2\le s_1.$$

Using a similar argument on $\mathbb{R}_-=\,]-\infty,0]$ with $\mu=\mu_-/2<0$, and [[19], Theorem 5.4] instead of [[19], Theorem 5.3], we see that (14) has at most a one dimensional space of solutions bounded on $\mathbb{R}_-$. More precisely, a $\tilde\mu$ with $\mu<\tilde\mu<0$ and a projection $P_-$ on $\mathbb{R}^2$ exist such that

$$\|X(s_2)(I-P_-)X^{-1}(s_1)\| \le k,\quad s_2\le s_1\le 0,$$
$$\|X(s_2)P_-X^{-1}(s_1)\| \le k e^{(\mu+\tilde\mu)(s_2-s_1)},\quad s_1\le s_2\le 0,$$

and $\dim NP_-=1$. Since $\gamma'(s)/\omega(\gamma(s))$ is a solution of (14) bounded on $\mathbb{R}$, we deduce that $RP_+=NP_-=\operatorname{span}\{\gamma'(0)/\omega(\gamma(0))\}$, and the result follows. □

We conclude this section with a remark about condition (c) in [[19], Theorem 5.3]. Consider a system in $\mathbb{R}^n$ such as

$$x' = [D+A(s)]x.$$
(18)

Then the following result holds.

Theorem 2.2 Suppose the following hold:

  1. (i)

    $D$ has two simple eigenvalues $\mu_-<\mu_+$ and all the other eigenvalues of $D$ have real part either less than $\mu_-$ or greater than $\mu_+$;

  2. (ii)

    $\int_0^{\infty}\|A(s)\|\,ds<\infty$;

  3. (iii)

    $A(s)\to 0$ as $s\to\infty$.

Then there are as many solutions $x(t)$ of (18) satisfying

$$k_1|x(s)| \le |x(t)|\,e^{-\mu_-(t-s)} \le k_2|x(s)|,\quad\text{for any } 0\le s\le t,$$
(19)

as the dimension of the space of the generalized eigenvectors of the matrix $D$ corresponding to eigenvalues with real parts less than or equal to $\mu_-$; here $k_1,k_2>0$ are two suitable positive constants. Similarly, there are as many solutions of (18) such that

$$\tilde k_1|x(s)| \le |x(t)|\,e^{-\mu_+(t-s)} \le \tilde k_2|x(s)|,\quad\text{for any } 0\le s\le t,$$
(20)

for suitable constants $\tilde k_1,\tilde k_2>0$, as the dimension of the space of the generalized eigenvectors of the matrix $D$ corresponding to eigenvalues with real parts greater than or equal to $\mu_+$.

Proof We prove the first statement, concerning (19); (20) is handled by a similar argument. Write $\mu:=\mu_-$. Changing variables we may assume that

$$D=\begin{pmatrix} \mu & 0 & 0\\ 0 & D_- & 0\\ 0 & 0 & D_+\end{pmatrix},$$

where the eigenvalues of $D_-$ have real parts less than $\mu$ and those of $D_+$ have real parts greater than $\mu$. So the system reads

$$x_1' = \mu x_1 + a_{11}(t)x_1 + A_{12}(t)x_2 + A_{13}(t)x_3,$$
$$x_2' = D_-x_2 + A_{21}(t)x_1 + A_{22}(t)x_2 + A_{23}(t)x_3,$$
$$x_3' = D_+x_3 + A_{31}(t)x_1 + A_{32}(t)x_2 + A_{33}(t)x_3,$$
(21)

where $a_{11}(t)\in\mathbb{R}$ and the $A_{ij}(t)$ are matrices (or vectors) of suitable orders. Setting $y_i(t)=e^{-\mu t}x_i(t)$ we get

$$y_1' = a_{11}(t)y_1 + A_{12}(t)y_2 + A_{13}(t)y_3,$$
$$y_2' = (D_--\mu I)y_2 + A_{21}(t)y_1 + A_{22}(t)y_2 + A_{23}(t)y_3,$$
$$y_3' = (D_+-\mu I)y_3 + A_{31}(t)y_1 + A_{32}(t)y_2 + A_{33}(t)y_3.$$
(22)

Now we observe that $y(t)$ is a solution of (22) bounded at $+\infty$ if and only if $x(t)=e^{\mu t}y(t)$ is a solution of (21) which is bounded when multiplied by $e^{-\mu t}$. Moreover, since $|a_{11}(t)|$, $|A_{12}(t)|$ and $|A_{13}(t)|$ belong to $L^1(\mathbb{R}_+)$, the limit $\lim_{t\to+\infty}y_1(t)$ exists for any solution $y(t)$ of (22) bounded on $\mathbb{R}_+$. So, let us fix $t_0>0$ and take $t\ge t_0$. If $y(t)$ is a solution of (22) bounded at $+\infty$, it must satisfy, by the variation of constants formula,

$$y_1(t) = y_1^{\infty} - \int_t^{\infty} \big(a_{11}(s)y_1(s) + A_{12}(s)y_2(s) + A_{13}(s)y_3(s)\big)\,ds,$$
$$y_2(t) = e^{(D_--\mu I)(t-t_0)}y_2^0 + \int_{t_0}^t e^{(D_--\mu I)(t-s)}\big(A_{21}(s)y_1(s) + A_{22}(s)y_2(s) + A_{23}(s)y_3(s)\big)\,ds,$$
$$y_3(t) = -\int_t^{\infty} e^{(D_+-\mu I)(t-s)}\big(A_{31}(s)y_1(s) + A_{32}(s)y_2(s) + A_{33}(s)y_3(s)\big)\,ds,$$
(23)

where $y_2^0=y_2(t_0)$ and $y_1^{\infty}=\lim_{t\to+\infty}y_1(t)$. Note that since $\sigma(D_--\mu I)\subset\{\lambda\in\mathbb{C}\mid\Re\lambda<0\}$ and $\sigma(D_+-\mu I)\subset\{\lambda\in\mathbb{C}\mid\Re\lambda>0\}$, and $a_{11}(t)$, $A_{ij}(t)$ are bounded, we can interpret (23) as a fixed point problem in the Banach space of bounded continuous functions on $[t_0,\infty[$:

$$B:=\big\{y(t)\in C^0([t_0,\infty[)\mid \sup|y(t)|<\infty\big\}$$

with the obvious norm. Since $a_{11}(t),A_{ij}(t)\in L^1(\mathbb{R}_+)$, the map (23) is a contraction on $B$, provided $t_0$ is sufficiently large, and then, for any given $(y_1^{\infty},y_2^0)$, it has a unique solution $y(t,y_1^{\infty},y_2^0)\in B$. Note that a priori $y(t,y_1^{\infty},y_2^0)$ is defined only on $[t_0,\infty[$, but of course we can extend it to $[0,\infty[$ going backward in time. We now prove that positive constants $0<c_1<c_2$ exist such that $c_1\le|y(t,y_1^{\infty},y_2^0)|\le c_2$ for any $t\ge 0$. Let $t_0<t_1<t$. We have

$$y_2(t_1) = e^{(D_--\mu I)(t_1-t_0)}y_2^0 + \int_{t_0}^{t_1} e^{(D_--\mu I)(t_1-s)}\big(A_{21}(s)y_1(s)+A_{22}(s)y_2(s)+A_{23}(s)y_3(s)\big)\,ds$$

and then

$$y_2(t) = e^{(D_--\mu I)(t-t_1)}y_2(t_1) + \int_{t_1}^t e^{(D_--\mu I)(t-s)}\big(A_{21}(s)y_1(s)+A_{22}(s)y_2(s)+A_{23}(s)y_3(s)\big)\,ds.$$

So, for any $\delta>0$, let $t_1$ be such that $\sup_{t\ge t_1}|A_{ij}(t)|\le\delta$, and set $\bar y_j=\sup_{t\ge t_1}|y_j(t)|$. We have

$$|y_2(t)| \le k\,\bar y_2\,e^{\alpha(t-t_1)} + \int_{t_1}^t k e^{\alpha(t-s)}\delta(\bar y_1+\bar y_2+\bar y_3)\,ds \le k\,\bar y_2\,e^{\alpha(t-t_1)} + \frac{k\delta}{|\alpha|}(\bar y_1+\bar y_2+\bar y_3)$$

with $\max\{\Re\lambda\mid\lambda\in\sigma(D_--\mu I)\}<\alpha<0$. Taking the limit as $t\to+\infty$ we get

$$\limsup_{t\to\infty}|y_2(t)| \le \frac{k\delta}{|\alpha|}(\bar y_1+\bar y_2+\bar y_3).$$

Since $\delta\to 0$ as $t_1\to+\infty$, it follows that $\lim_{t\to\infty}|y_2(t)|=0$. Similarly we get

$$|y_3(t)| \le k\delta\beta^{-1}(\bar y_1+\bar y_2+\bar y_3),\quad t\ge t_1,$$

where $0<\beta<\min\{\Re\lambda\mid\lambda\in\sigma(D_+-\mu I)\}$, and then $\lim_{t\to\infty}|y_3(t)|=0$. As a consequence we obtain $\lim_{t\to\infty}\big||y(t)|-|y_1(t)|\big|=0$ and then

$$\lim_{t\to\infty}|y(t)| = |y_1^{\infty}|.$$

So, provided we take $y_1^{\infty}\ne 0$, we see that eventually (i.e. for $t\ge\bar t$, for some $\bar t>0$)

$$\frac{|y_1^{\infty}|}{2} \le |y(t)| \le \frac32|y_1^{\infty}|,$$

and the existence of $c_1,c_2>0$ such that

$$c_1 \le |y(t)| \le c_2$$

for all $t\ge 0$ follows from the fact that $|y(t)|$ cannot vanish in any bounded interval. Finally, since $|x(t)|=|y(t)|e^{\mu t}$, we get, for $0\le s\le t$,

$$\frac{c_1}{c_2}e^{\mu(t-s)} \le \frac{|x(t)|}{|x(s)|} = \frac{|y(t)|}{|y(s)|}e^{\mu(t-s)} \le \frac{c_2}{c_1}e^{\mu(t-s)},$$

i.e.

$$\frac{c_1}{c_2}|x(s)| \le |x(t)|\,e^{-\mu(t-s)} \le \frac{c_2}{c_1}|x(s)|.$$

The proof is complete. □
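A numerical illustration of (19) (a sketch with an assumed test system, not taken from the paper): let $D=\operatorname{diag}(-1,2)$, so $\mu_-=-1$ is simple and the remaining eigenvalue exceeds it, and take the integrable perturbation $A(t)=e^{-t}\begin{pmatrix}0&1\\1&0\end{pmatrix}$. Integrating backward first selects, up to rounding, the solution that decays like $e^{-t}$; along it the ratio $|x(t)|e^{t}$ must stay between two positive constants:

```python
import math

def rhs(t, x):
    a = math.exp(-t)                 # integrable perturbation A(t) = e^{-t}[[0,1],[1,0]]
    return [-1.0*x[0] + a*x[1], 2.0*x[1] + a*x[0]]

def rk4(x, t0, t1, n):
    # classical fixed-step Runge-Kutta 4 from t0 to t1 (t1 < t0 integrates backward)
    h, t = (t1 - t0)/n, t0
    for _ in range(n):
        k1 = rhs(t, x)
        k2 = rhs(t + h/2, [x[i] + h/2*k1[i] for i in range(2)])
        k3 = rhs(t + h/2, [x[i] + h/2*k2[i] for i in range(2)])
        k4 = rhs(t + h, [x[i] + h*k3[i] for i in range(2)])
        x = [x[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
        t += h
    return x

# backward pass from t = 12 aligns with the forward-decaying direction
x = rk4([1.0, 1.0], 12.0, 0.0, 12000)
nrm = math.hypot(*x)
x = [xi/nrm for xi in x]

# forward pass: record r(t) = |x(t)| e^{t}, the quantity bounded in (19)
rs = []
for k in range(80):
    rs.append(math.hypot(*x)*math.exp(k*0.1))
    x = rk4(x, k*0.1, (k + 1)*0.1, 100)
rs.append(math.hypot(*x)*math.exp(8.0))

assert max(rs)/min(rs) < 10.0        # two-sided bound of (19) with mu_- = -1
assert math.hypot(*x) < 1e-2         # the selected solution indeed decays
print("max/min of |x(t)|e^t on [0, 8]:", max(rs)/min(rs))
```

The observed ratio stays well below the Gronwall-type bound $e^{2\int_0^\infty\|A\|}$, matching the one-dimensional count of such solutions predicted by the theorem for this $D$.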

Remark 2.3 (i) It follows from the proof of Theorem 2.2 that the inequalities of (19) also hold replacing (i) with the weaker assumption that $\mu_-$ is a simple eigenvalue of $D$ and all the other eigenvalues have real parts either less than $\mu_-$ or greater than $\mu_-$ (i.e. we do not need that $\mu_+$ is simple). Similarly, the inequalities of (20) hold if $\mu_+$ is a simple eigenvalue of $D$ and all the other eigenvalues have real parts either greater than $\mu_+$ or less than $\mu_+$ (i.e. we do not need that $\mu_-$ is simple).

  1. (ii)

    Note that a result related to Theorem 2.2 has been proved in [26].

3 Solutions asymptotic to the fixed point

It follows from (11)-(12) that $\gamma(s)-x_0=O(\varphi(s))$ as $s\to\pm\infty$; then, since $\gamma'(s)=F(\gamma(s))=O(\gamma(s)-x_0)$, we obtain $\gamma'(s)=O(\varphi(s))$. Furthermore, from (13) we also get

$$\omega\big(\gamma(s)\big) = O\big(\gamma'(s)\big) = O\big(\varphi(s)\big).$$

As a consequence,

$$T_\pm := \int_0^{\pm\infty} \omega\big(\gamma(\tau)\big)\,d\tau,\qquad |T_\pm|<\infty.$$

Since $\omega(\gamma(s))>0$, it follows that $\theta:\mathbb{R}\to\,]T_-,T_+[$ is a strictly increasing diffeomorphism (see (15) for the definition of $\theta(s)$). Then $x_h(t):=\gamma(\theta^{-1}(t))$ satisfies (8) on the interval $]T_-,T_+[$ and

$$\lim_{t\to T_\pm} x_h(t) = x_0.$$

Moreover (see (13)),

$$\lim_{t\to T_\pm} x_h'(t) = \lim_{s\to\pm\infty} \frac{F(\gamma(s))}{\omega(\gamma(s))} = \frac{\mu_\mp\,\gamma_\pm}{\langle\omega'(x_0),\gamma_\pm\rangle} \ne 0.$$

Hence $x_0$ is not an I-point of (8). In this paper we look for solutions of the coupled equation (7) that lie in a neighborhood of $\{(x_h(t),x_h(t))\mid T_-<t<T_+\}$, are defined in the interval $]T_-+\alpha,T_++\alpha[$ for some $\alpha=\alpha(\varepsilon)$, and tend to $(x_0,x_0)$ at the same rate as $(x_h(t),x_h(t))$. To this end we first perform a change of the time variable as follows. Set

$$t=\alpha+\theta(s)\in\,]T_-+\alpha,T_++\alpha[$$

and plug $z_j(s)=x_j(\alpha+\theta(s))$ into (7). We get

$$\omega(z_j)z_j' = \omega\big(\gamma(s)\big)\big(F(z_j) + \varepsilon G_j(z_1,z_2,\alpha+\theta(s),\varepsilon,\kappa)\big),\quad j=1,2.$$
(24)

Since we are looking for solutions of (7) tending to $(x_0,x_0)$ at the same rate as $\gamma(s)$, in (24) we make the change of variables

$$z_j(s)=\gamma(s)+\varphi(s)y_j(s) = x_0+\varphi(s)\big(\eta(s)+y_j(s)\big),\quad j=1,2,$$
(25)

where $\eta(s)$ is the bounded function $\frac{\gamma(s)-x_0}{\varphi(s)}$. Since

$$\omega\big(x_0+\varphi(s)(\eta(s)+y)\big) \ge \big\langle\omega'(x_0),\varphi(s)(\eta(s)+y)\big\rangle - K_1\big|\varphi(s)(\eta(s)+y)\big|^2 = \varphi(s)\Big[\big\langle\omega'(x_0),\eta(s)+y\big\rangle - K_1\varphi(s)|\eta(s)+y|^2\Big]$$
(26)

for a suitable constant $K_1>0$ and any $s\in\mathbb{R}$, $|y|\le 1$, we get, using (C3) and (26),

$$\omega\big(x_0+\varphi(s)(\eta(s)+y)\big) \ge \tfrac12\varphi(s)\big\langle\omega'(x_0),\gamma_\pm\big\rangle>0$$
(27)

for $|s|$ large and $|y|<\delta$ sufficiently small. Then (27) and $\omega(\gamma(t))>0$ imply the existence of $M>0$ and $\delta>0$ so that

$$\omega\big(x_0+\varphi(s)(\eta(s)+y)\big) \ge M\varphi(s)$$

for any $s\in\mathbb{R}$ and $|y|\le\delta$. Now, plugging (25) into (24), we derive the equations

$$y_j' = \frac{\omega(\gamma(s))}{\varphi(s)\,\omega(\gamma(s)+\varphi(s)y_j)}F\big(\gamma(s)+\varphi(s)y_j\big) - \frac{F(\gamma(s))}{\varphi(s)} - \frac{\varphi'(s)}{\varphi(s)}y_j + \varepsilon\frac{\omega(\gamma(s))}{\varphi(s)\,\omega(\gamma(s)+\varphi(s)y_j)}G_j\big(\gamma(s)+\varphi(s)y_1,\gamma(s)+\varphi(s)y_2,\theta(s)+\alpha,\varepsilon,\kappa\big),\quad j=1,2.$$
(28)

From (11) it follows that

$$\frac{\varphi'(s)}{\varphi(s)} = \frac{\mu_- e^{-\mu_- s} + \mu_+ e^{-\mu_+ s}}{e^{-\mu_- s} + e^{-\mu_+ s}} \to \mu_\mp\quad\text{as } s\to\pm\infty.$$
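With $\mu_-=-1$, $\mu_+=1$ (the values of the Section 5 example, assumed here only for illustration), the limits of $\varphi'/\varphi$ are immediate to confirm numerically:

```python
import math

mu_m, mu_p = -1.0, 1.0

def log_deriv_phi(s):
    # phi'(s)/phi(s) for phi(s) = 1/(e^{-mu_- s} + e^{-mu_+ s})
    em, ep = math.exp(-mu_m*s), math.exp(-mu_p*s)
    return (mu_m*em + mu_p*ep)/(em + ep)

assert abs(log_deriv_phi(20.0) - mu_m) < 1e-8    # -> mu_- as s -> +infinity
assert abs(log_deriv_phi(-20.0) - mu_p) < 1e-8   # -> mu_+ as s -> -infinity
print(log_deriv_phi(20.0), log_deriv_phi(-20.0))
```

This is the coefficient whose limits produce the constant matrices in (30) and (31) below.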

Next we note that from $G_j(x_0,x_0,t,\varepsilon,\kappa)=0$ it follows that the quantities

$$\frac{G_j\big(\gamma(s)+\varphi(s)y_1,\gamma(s)+\varphi(s)y_2,\alpha+\theta(s),\varepsilon,\kappa\big)}{\varphi(s)} = \frac{G_j\big(x_0+\varphi(s)(\eta(s)+y_1),x_0+\varphi(s)(\eta(s)+y_2),\alpha+\theta(s),\varepsilon,\kappa\big)}{\varphi(s)},\quad j=1,2,$$

are bounded uniformly in $s\in\mathbb{R}$ and for $\kappa\in\mathbb{R}^m$, $y_1$, $y_2$, $\varepsilon$ bounded.

The linearization of (28) at $y=0$, $\varepsilon=0$ is

$$y_j' = \left[F'(\gamma(s)) - \frac{F(\gamma(s))\,\omega'(\gamma(s))}{\omega(\gamma(s))} - \frac{\varphi'(s)}{\varphi(s)}I\right]y_j,\quad j=1,2.$$
(29)

Taking the limit as $s\to+\infty$ we get the systems

$$y_j' = \left[F'(x_0) - \frac{\mu_-\,\gamma_+\,\omega'(x_0)}{\omega'(x_0)\gamma_+} - \mu_- I\right]y_j,\quad j=1,2.$$
(30)

Similarly, taking the limit as $s\to-\infty$ we get the systems

$$y_j' = \left[F'(x_0) - \frac{\mu_+\,\gamma_-\,\omega'(x_0)}{\omega'(x_0)\gamma_-} - \mu_+ I\right]y_j,\quad j=1,2.$$
(31)

From the proof of Lemma 2.1 (see also [[1], Lemma 3.1]) we know that (30) has the positive simple eigenvalues $\mu_+-\mu_-$ and $-\mu_-$, and (31) has the negative simple eigenvalues $\mu_--\mu_+$ and $-\mu_+$. From the roughness of exponential dichotomies it follows that both equations in (29) have an exponential dichotomy on both $\mathbb{R}_+$ and $\mathbb{R}_-$ with projections $P_+=0$ and $P_-=I$, respectively. Hence (see also [19]) all solutions of the system

$$y' = -\left[F'(\gamma(s)) - \frac{F(\gamma(s))\,\omega'(\gamma(s))}{\omega(\gamma(s))} - \frac{\varphi'(s)}{\varphi(s)}I\right]^*y,$$
(32)

adjoint to (29), are bounded as $|s|\to\infty$. We let $\psi_1(s)$ and $\psi_2(s)$ be any two linearly independent solutions of (32).

4 Melnikov function and the original equation

In this section we give a condition for solving (28) for $y_1(t)$, $y_2(t)$ near the solution $y_1(t)=y_2(t)=0$ of the same equation with $\varepsilon=0$. Writing

$$\mathcal{F}(y):= y' - \frac{\omega(\gamma(s))}{\varphi(s)\,\omega(\gamma(s)+\varphi(s)y)}F\big(\gamma(s)+\varphi(s)y\big) + \frac{F(\gamma(s))}{\varphi(s)} + \frac{\varphi'(s)}{\varphi(s)}y$$
(33)

and

$$H_j(y_1,y_2,\eta,\varepsilon):= \frac{\omega(\gamma(s))}{\varphi(s)\,\omega(\gamma(s)+\varphi(s)y_j)}\,G_j\big(\gamma(s)+\varphi(s)y_1,\gamma(s)+\varphi(s)y_2,\theta(s)+\alpha,\varepsilon,\kappa\big),\quad j=1,2,$$
(34)

we look for solutions $y_1,y_2:\mathbb{R}\to\mathbb{R}^2$ of

$$\mathcal{F}(y_1) + \varepsilon H_1(y_1,y_2,\eta,\varepsilon) = 0,\qquad \mathcal{F}(y_2) + \varepsilon H_2(y_1,y_2,\eta,\varepsilon) = 0$$
(35)

in the Banach space of $C^1$-functions on $\mathbb{R}$, bounded together with their derivatives, and with small norms. We observe that $\mathcal{F}(0)=0$, and the equation $\mathcal{F}'(0)y=0$ reads

$$y' = \left[F'(\gamma(s)) - \frac{F(\gamma(s))\,\omega'(\gamma(s))}{\omega(\gamma(s))} - \frac{\varphi'(s)}{\varphi(s)}I\right]y.$$
(36)

In Section 3 (see also [1, 19]) we have seen that (36) has an exponential dichotomy on both $\mathbb{R}_+$ and $\mathbb{R}_-$ with projections $P_+=0$ and $P_-=I$. So the only bounded solution $y(t)$ of $\mathcal{F}'(0)y=0$ is $y(t)=0$; in other words, $N\mathcal{F}'(0)=\{0\}$. So we are led to prove the following.

Theorem 4.1 Let $Y$, $X$ be Banach spaces, $\varepsilon\in\mathbb{R}$ a small parameter and $\eta\in\mathbb{R}^{2d}$. Let $\mathcal{F}:Y\to X$ and $H_{1,2}:Y\times Y\times\mathbb{R}^{2d+1}\to X$, $(y_1,y_2,\eta,\varepsilon)\mapsto H_{1,2}(y_1,y_2,\eta,\varepsilon)$, be $C^2$-functions such that

  1. (a)

    $\mathcal{F}(0)=0$;

  2. (b)

    $N\mathcal{F}'(0)=\{0\}$;

  3. (c)

    there exist $\psi_1,\dots,\psi_d\in X^*$ such that $R\mathcal{F}'(0)=\{x\in X\mid \psi_i x=0,\ i=1,\dots,d\}$.

Define $M:\mathbb{R}^{2d}\to\mathbb{R}^{2d}$ by

$$M(\eta):=\begin{pmatrix} \psi_1 H_1(0,0,\eta,0)\\ \vdots\\ \psi_d H_1(0,0,\eta,0)\\ \psi_1 H_2(0,0,\eta,0)\\ \vdots\\ \psi_d H_2(0,0,\eta,0)\end{pmatrix}$$
(37)

and suppose there exists $\bar\eta\in\mathbb{R}^{2d}$ such that $M(\bar\eta)=0$ and the derivative $M'(\bar\eta)$ is invertible. Then there exist $r>0$ and a unique $C^1$-function $\eta=\eta(\varepsilon)$, defined in a neighborhood of $\varepsilon=0\in\mathbb{R}$, such that

$$\lim_{\varepsilon\to 0}\eta(\varepsilon)=\bar\eta$$
(38)

and, for $\eta=\eta(\varepsilon)$, $\varepsilon\ne 0$, (35) has a unique solution $(y_1(\varepsilon),y_2(\varepsilon))\in Y\times Y$ satisfying

$$\big\|(y_1(\varepsilon),y_2(\varepsilon))\big\| \le r.$$

Moreover, $y_j(\varepsilon)=\varepsilon\tilde y_j(\varepsilon)$ for $C^0$-functions $\tilde y_j(\varepsilon)\in Y$, and we have

$$\mathcal{F}'(0)\tilde y_j(0) + H_j(0,0,\bar\eta,0)=0,\quad j=1,2.$$
(39)

Proof We look for solutions $(y_1,y_2,\eta)$ of (35) that are close to $(y_1,y_2,\eta)=(0,0,\bar\eta)$. Let $P:X\to X$ be the projection such that $RP=R\mathcal{F}'(0)$; note that $\operatorname{codim} R\mathcal{F}'(0)=d$. From the implicit function theorem, we solve the projected equations

$$P\mathcal{F}(y_1) + \varepsilon PH_1(y_1,y_2,\eta,\varepsilon) = 0,\qquad P\mathcal{F}(y_2) + \varepsilon PH_2(y_1,y_2,\eta,\varepsilon) = 0$$

for unique $y_{1,2}=Y_{1,2}(\eta,\varepsilon)\in Y$ such that

$$Y_{1,2}(\eta,0)=0,$$

provided $|\varepsilon|\le\varepsilon_0$ is sufficiently small and $\eta$ lies in a fixed closed ball $\Xi\subset\mathbb{R}^{2d}$ with $\bar\eta\in\Xi$. Note that $Y_{1,2}$ are $C^2$-smooth. Setting $Q=I-P$, we need to solve the bifurcation equations

$$Q\mathcal{F}\big(Y_j(\eta,\varepsilon)\big) + \varepsilon QH_j\big(Y_1(\eta,\varepsilon),Y_2(\eta,\varepsilon),\eta,\varepsilon\big)=0,\quad j=1,2.$$
(40)

Observe that

$$Qx=0 \iff x\in RP=R\mathcal{F}'(0) \iff \psi_i x=0\quad\text{for all } i=1,\dots,d.$$
(41)

Then $Q\mathcal{F}'(0)=0$ and so

$$Q\mathcal{F}\big(Y_j(\eta,\varepsilon)\big) = O_j\big(\varepsilon^2\big),\quad j=1,2,$$

uniformly with respect to $\eta$. We conclude that (40) can be written as

$$\varepsilon QH_j(0,0,\eta,0) = -QR_j(\eta,\varepsilon),\quad j=1,2,$$
(42)

where

$$R_j(\eta,\varepsilon):=\mathcal{F}\big(Y_j(\eta,\varepsilon)\big) + \varepsilon\big[H_j\big(Y_1(\eta,\varepsilon),Y_2(\eta,\varepsilon),\eta,\varepsilon\big) - H_j(0,0,\eta,0)\big].$$

Note that the $R_j(\eta,\varepsilon)$ are $C^2$-functions of $(\eta,\varepsilon)$ and that $\varepsilon^{-1}R_j(\eta,\varepsilon)=O(\varepsilon)$ uniformly with respect to $\eta$, so

$$\tilde R_j(\eta,\varepsilon):= \begin{cases} \varepsilon^{-1}R_j(\eta,\varepsilon) & \text{if } \varepsilon\ne 0,\\ 0 & \text{if } \varepsilon=0\end{cases}$$

is $C^1$ in $(\eta,\varepsilon)$. By (41), system (42) is equivalent to

$$M(\eta) = -\begin{pmatrix} \psi_i\tilde R_1(\eta,\varepsilon)\\ \psi_i\tilde R_2(\eta,\varepsilon)\end{pmatrix}_{i=1,\dots,d} = O(\varepsilon).$$
(43)

Because of the assumptions, we can apply the implicit function theorem to (43) to obtain a $C^1$-function $\eta(\varepsilon)$, defined in a neighborhood of $\varepsilon=0$, satisfying (43) and such that (38) holds. Setting

$$y_j(\varepsilon):= Y_j\big(\eta(\varepsilon),\varepsilon\big),\quad j=1,2,$$

we see that $y_1(\varepsilon)$, $y_2(\varepsilon)$ are bounded $C^1$-solutions of (35) with $\eta=\eta(\varepsilon)$ such that $y_1(0)=0$, $y_2(0)=0$. Then we can write $y_1(\varepsilon)=\varepsilon\tilde y_1(\varepsilon)$, $y_2(\varepsilon)=\varepsilon\tilde y_2(\varepsilon)$ for continuous $\tilde y_1(\varepsilon),\tilde y_2(\varepsilon)\in Y$, where

$$\mathcal{F}\big(\varepsilon\tilde y_j(\varepsilon)\big) + \varepsilon H_j\big(\varepsilon\tilde y_1(\varepsilon),\varepsilon\tilde y_2(\varepsilon),\eta(\varepsilon),\varepsilon\big)=0,\quad j=1,2.$$

Clearly (39) follows by differentiating the above equality at $\varepsilon=0$. The proof is complete. □

Remark 4.2 Note that, because of $M(\bar\eta)=0$, (39) is equivalent to

$$P\mathcal{F}'(0)\tilde y_j(0) + PH_j(0,0,\bar\eta,0)=0,\quad j=1,2,$$

which has the unique solution

$$\tilde y_j(0) = -\big[P\mathcal{F}'(0)\big]^{-1}PH_j(0,0,\bar\eta,0),\quad j=1,2.$$
(44)

Now we apply Theorem 4.1 to (28) with $\mathcal{F}(y)$, $H_1(y_1,y_2,\eta,\varepsilon)$, $H_2(y_1,y_2,\eta,\varepsilon)$ as in (33), (34) and

$$Y= C_b^1\big(\mathbb{R},\mathbb{R}^2\big),\qquad X= C_b^0\big(\mathbb{R},\mathbb{R}^2\big),\qquad \eta=(\alpha,\kappa)\in\mathbb{R}\times\mathbb{R}^m,$$

where $C_b^k(\mathbb{R},\mathbb{R}^2)$ is the Banach space of $C^k$-functions bounded together with their derivatives, with the usual sup-norm.

We already observed that $\mathcal{F}(0)=0$ and $N\mathcal{F}'(0)=\{0\}$. Moreover,

$$R\mathcal{F}'(0)= \Big\{x\in X \ \Big|\ \int_{-\infty}^{\infty} \psi_i^*(s)\,x(s)\,ds = 0,\ i=1,2\Big\},$$

where $\psi_1(s),\psi_2(s)$ have been defined in the previous Section 3. So $d=2$ and $m=3$. We recall from [19] that $\psi_j(s)=\varphi(s)v_j(\theta(s))$, where the $v_j(t)$ are linearly independent solutions of the adjoint equation of (16):

$$\omega(x_h(t))v' = \frac{\omega'(x_h(t))^*F(x_h(t))^*}{\omega(x_h(t))}v - F'(x_h(t))^*v,\quad t\in\,]T_-,T_+[.$$
(45)

Hence (37) reads

$$M(\alpha,\kappa)=\begin{pmatrix} \int_{-\infty}^{\infty} \psi_1^*(s)\,G_1\big(\gamma(s),\gamma(s),\alpha+\theta(s),0,\kappa\big)\,\varphi(s)^{-1}\,ds\\ \int_{-\infty}^{\infty} \psi_2^*(s)\,G_1\big(\gamma(s),\gamma(s),\alpha+\theta(s),0,\kappa\big)\,\varphi(s)^{-1}\,ds\\ \int_{-\infty}^{\infty} \psi_1^*(s)\,G_2\big(\gamma(s),\gamma(s),\alpha+\theta(s),0,\kappa\big)\,\varphi(s)^{-1}\,ds\\ \int_{-\infty}^{\infty} \psi_2^*(s)\,G_2\big(\gamma(s),\gamma(s),\alpha+\theta(s),0,\kappa\big)\,\varphi(s)^{-1}\,ds\end{pmatrix}$$

or, passing to the time $t=\theta(s)$:

$$M(\alpha,\kappa)=\begin{pmatrix} \int_{T_-}^{T_+} v_1^*(t)\,G_1\big(x_h(t),x_h(t),t+\alpha,0,\kappa\big)\,\omega(x_h(t))^{-1}\,dt\\ \int_{T_-}^{T_+} v_2^*(t)\,G_1\big(x_h(t),x_h(t),t+\alpha,0,\kappa\big)\,\omega(x_h(t))^{-1}\,dt\\ \int_{T_-}^{T_+} v_1^*(t)\,G_2\big(x_h(t),x_h(t),t+\alpha,0,\kappa\big)\,\omega(x_h(t))^{-1}\,dt\\ \int_{T_-}^{T_+} v_2^*(t)\,G_2\big(x_h(t),x_h(t),t+\alpha,0,\kappa\big)\,\omega(x_h(t))^{-1}\,dt\end{pmatrix}.$$
(46)

A direct application of Theorem 4.1 gives the following.

Theorem 4.3 Let $m=3$ and let $M(\alpha,\kappa)$ be given as in (46), where $v_1(t)$, $v_2(t)$ are two linearly independent bounded solutions (on $]T_-,T_+[$) of the adjoint equation (45). Suppose that $\bar\alpha$ and $\bar\kappa$ exist so that

$$M(\bar\alpha,\bar\kappa)=0\quad\text{and}\quad \frac{\partial M}{\partial(\alpha,\kappa)}(\bar\alpha,\bar\kappa)\in GL(4,\mathbb{R}).$$
(47)

Then there exist $\bar\varepsilon>0$, $\rho>0$, unique $C^1$-functions $\alpha(\varepsilon)$ and $\kappa(\varepsilon)$ with $\alpha(0)=\bar\alpha$ and $\kappa(0)=\bar\kappa$, defined for $|\varepsilon|<\bar\varepsilon$, and a unique solution $(z_1(s,\varepsilon),z_2(s,\varepsilon))$ of (24) with $\alpha=\alpha(\varepsilon)$, $\kappa=\kappa(\varepsilon)$, $0<|\varepsilon|<\bar\varepsilon$, such that

$$\sup_{s\in\mathbb{R}} \big|z_j(s,\varepsilon)-\gamma(s)\big|\,\varphi(s)^{-1} <\rho,\quad j=1,2.$$
(48)

Moreover,

$$\sup_{s\in\mathbb{R}} \big|z_j(s,\varepsilon)-\gamma(s)\big|\,\varphi(s)^{-1} = O(\varepsilon),\quad j=1,2.$$

Remark 4.4 (i) Equation (48) implies

$$z_j(s,\varepsilon)=\gamma(s)+\varepsilon\tilde y_j(s,\varepsilon)\varphi(s),\quad j=1,2,$$

for $C^0$-functions $\tilde y_j(s,\varepsilon)$ with $\sup_{|\varepsilon|\le\bar\varepsilon}\sup_{s\in\mathbb{R}}|\tilde y_j(s,\varepsilon)|<\infty$. Then we have

$$z_j(s,\varepsilon)=\gamma(s)+\varepsilon\tilde y_j(s,0)\varphi(s)+\varepsilon w_j(s,\varepsilon)\varphi(s)$$

with $w_j(s,\varepsilon)=\tilde y_j(s,\varepsilon)-\tilde y_j(s,0)$, so $\lim_{\varepsilon\to 0}\sup_{s\in\mathbb{R}}|w_j(s,\varepsilon)|=0$. Hence

$$\lim_{\varepsilon\to 0}\sup_{s\in\mathbb{R}} \big|z_j(s,\varepsilon)-\gamma(s)-\varepsilon\tilde y_j(s,0)\varphi(s)\big|\,\varphi(s)^{-1}|\varepsilon|^{-1}=0,\quad j=1,2,$$
(49)

which gives a first order approximation of $z_j(s,\varepsilon)$. Next, $\tilde y_j(s,0)$ can be computed using (44), adapted to this case. Hence the $\tilde y_j(s,0)$ are bounded solutions of

$$y_j' = \left[F'(\gamma(s)) - \frac{F(\gamma(s))\,\omega'(\gamma(s))}{\omega(\gamma(s))} - \frac{\varphi'(s)}{\varphi(s)}I\right]y_j + \frac{1}{\varphi(s)}G_j\big(\gamma(s),\gamma(s),\theta(s)+\bar\alpha,0,\bar\kappa\big).$$

Since (36) has exponential dichotomies on both $\mathbb{R}_+$ (with projection $P_+=0$) and $\mathbb{R}_-$ (with projection $P_-=I$), it follows that

$$\tilde y_j(s,0)= \begin{cases} -\int_s^{\infty} X(s)X^{-1}(z)\frac{1}{\varphi(z)}G_j\big(\gamma(z),\gamma(z),\theta(z)+\bar\alpha,0,\bar\kappa\big)\,dz & \text{for } s\ge 0,\\ \int_{-\infty}^s X(s)X^{-1}(z)\frac{1}{\varphi(z)}G_j\big(\gamma(z),\gamma(z),\theta(z)+\bar\alpha,0,\bar\kappa\big)\,dz & \text{for } s\le 0,\end{cases}$$
(50)

where $X(s)$ is the fundamental solution of (36). Note that formulas (50) are well defined at $s=0$, i.e. $\tilde y_j(0^-,0)=\tilde y_j(0^+,0)$, due to the first assumption of (47). Next, passing to the time $t=\theta(s)$ and taking $z_j(t):=\varphi(\theta^{-1}(t))\tilde y_j(\theta^{-1}(t),0)$, we get

$$z_j(t) = \int_{-\infty}^{\theta^{-1}(t)}\varphi\big(\theta^{-1}(t)\big)X\big(\theta^{-1}(t)\big)X^{-1}(z)\varphi(z)^{-1}G_j\big(\gamma(z),\gamma(z),\theta(z)+\bar\alpha,0,\bar\kappa\big)\,dz = \int_{T_-}^{t}\varphi\big(\theta^{-1}(t)\big)X\big(\theta^{-1}(t)\big)X^{-1}\big(\theta^{-1}(u)\big)\varphi\big(\theta^{-1}(u)\big)^{-1}G_j\big(x_h(u),x_h(u),u+\bar\alpha,0,\bar\kappa\big)\,\omega(x_h(u))^{-1}\,du$$

for $T_-<t\le 0$ and

$$z_j(t)= -\int_t^{T_+}\varphi\big(\theta^{-1}(t)\big)X\big(\theta^{-1}(t)\big)X^{-1}\big(\theta^{-1}(u)\big)\varphi\big(\theta^{-1}(u)\big)^{-1}G_j\big(x_h(u),x_h(u),u+\bar\alpha,0,\bar\kappa\big)\,\omega(x_h(u))^{-1}\,du$$

for $0\le t<T_+$. Note that $z_j(t)$ solves

$$\omega(x_h(t))z' = F'(x_h(t))z - \frac{F(x_h(t))\,\omega'(x_h(t))z}{\omega(x_h(t))} + G_j\big(x_h(t),x_h(t),t+\bar\alpha,0,\bar\kappa\big)$$

with $\sup_{t\in]T_-,T_+[}|z_j(t)|\,\varphi(\theta^{-1}(t))^{-1}<\infty$, and $\varphi(\theta^{-1}(t))X(\theta^{-1}(t))$ is a fundamental solution of (16).

(ii) Using (49), the functions $x_j(t,\varepsilon):= z_j(\theta^{-1}(t-\alpha(\varepsilon)),\varepsilon)$ are bounded solutions of (7) in the interval $]T_-+\alpha(\varepsilon),T_++\alpha(\varepsilon)[$ such that

$$\lim_{\varepsilon\to 0}\sup_{t\in]T_-,T_+[} \big|x_j(t+\alpha(\varepsilon),\varepsilon)-x_h(t)-\varepsilon z_j(t)\big|\,\varphi\big(\theta^{-1}(t)\big)^{-1}|\varepsilon|^{-1}=0.$$

Summarizing, we obtain the following corollary of Theorem 4.3.

Corollary 4.5 Let $m=3$ and let $M(\alpha,\kappa)$ be given as in (46), where $v_1(t)$, $v_2(t)$ are two linearly independent bounded solutions (on $]T_-,T_+[$) of the adjoint equation (45). Suppose that $\bar\alpha$ and $\bar\kappa$ exist so that

$$M(\bar\alpha,\bar\kappa)=0\quad\text{and}\quad \frac{\partial M}{\partial(\alpha,\kappa)}(\bar\alpha,\bar\kappa)\in GL(4,\mathbb{R}).$$

Then there exist $\bar\varepsilon>0$, $\rho>0$, unique $C^1$-functions $\alpha(\varepsilon)$ and $\kappa(\varepsilon)$ with $\alpha(0)=\bar\alpha$ and $\kappa(0)=\bar\kappa$, defined for $|\varepsilon|<\bar\varepsilon$, and a unique solution $(x_1(t,\varepsilon),x_2(t,\varepsilon))$ of (7) with $\alpha=\alpha(\varepsilon)$, $\kappa=\kappa(\varepsilon)$, $0<|\varepsilon|<\bar\varepsilon$, such that

$$\sup_{T_-<t<T_+} \big|x_j(t+\alpha(\varepsilon),\varepsilon)-x_h(t)\big|\,\varphi\big(\theta^{-1}(t)\big)^{-1}<\rho,\quad j=1,2.$$

Moreover,

$$\sup_{T_-<t<T_+} \big|x_j(t+\alpha(\varepsilon),\varepsilon)-x_h(t)\big|\,\varphi\big(\theta^{-1}(t)\big)^{-1}=O(\varepsilon),\quad j=1,2.$$

5 Applications to RLC circuits

In this section we study the coupled equations

( u 1 + u 1 2 ) + ε γ 1 ( u 1 + u 1 2 ) + u 1 ε λ ( u 2 + u 2 2 ) + ε sin t = 0 , ( u 2 + u 2 2 ) + ε γ 2 ( u 2 + u 2 2 ) + u 2 ε λ ( u 1 + u 1 2 ) + ε χ sin ( t + ϖ ) = 0 ,
(51)

which is motivated by [2, 3]. Note that (51) is obtained by coupling two equations modeling nonlinear RLC circuits as in (1). Here γ 1 , γ 2 , λ, χ and ϖ are positive parameters. Setting w j = ( u j + u j 2 ) , j=1,2, (51) reads

{ ( 2 u 1 + 1 ) u 1 = w 1 , w 1 + ε γ 1 w 1 + u 1 ε λ w 2 + ε sin t = 0 , ( 2 u 2 + 1 ) u 2 = w 2 , w 2 + ε γ 2 w 2 + u 2 ε λ w 1 + ε χ sin ( t + ϖ ) = 0 .
(52)

By solving the second and fourth equations of (52) for w 1 and w 2 , we get:

{ ( 2 u 1 + 1 ) u 1 = w 1 , w 1 = u 1 + ε sin t + λ u 2 + γ 1 w 1 + ε λ ( χ sin ( t + ϖ ) + λ u 1 + γ 2 w 2 ) ε 2 λ 2 1 , ( 2 u 2 + 1 ) u 2 = w 2 , w 2 = u 2 + ε χ sin ( t + ϖ ) + λ u 1 + γ 2 w 2 + ε λ ( sin t + λ u 2 + γ 1 w 1 ) ε 2 λ 2 1 ,
(53)

provided |ελ|1. Since ω(u,w)=ω(u)=2u+1, to write the system (52) in the form (7) with parameter κ=( γ 1 , γ 2 ,λ) and (χ,ϖ) fixed, we have to multiply the second and fourth equation by 2 u 1 , 2 +1, respectively, and we obtain the system

{ ( 2 u 1 + 1 ) u 1 = w 1 , ( 2 u 1 + 1 ) w 1 = ( 2 u 1 + 1 ) u 1 + ε ( 2 u 1 + 1 ) sin t + λ u 2 + γ 1 w 1 + ε λ ( χ sin ( t + ϖ ) + λ u 1 + γ 2 w 2 ) ε 2 λ 2 1 , ( 2 u 2 + 1 ) u 2 = w 2 , ( 2 u 2 + 1 ) w 2 = ( 2 u 2 + 1 ) u 2 + ε ( 2 u 2 + 1 ) χ sin ( t + ϖ ) + λ u 1 + γ 2 w 2 + ε λ ( sin t + λ u 2 + γ 1 w 1 ) ε 2 λ 2 1 ,
(54)

with (uncoupled) unperturbed equation for ε=0 (see (8)):

{ (2u_1 + 1) u_1' = w_1,   (2u_1 + 1) w_1' = −(2u_1 + 1) u_1,
  (2u_2 + 1) u_2' = w_2,   (2u_2 + 1) w_2' = −(2u_2 + 1) u_2.
(55)
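Stepping back to (53): the 1/(ε²λ²−1) factors come from solving the second and fourth equations of (52) as a linear system in (w_1', w_2'). A minimal numerical sketch of that elimination (Python is not part of the paper; the function name and the generic right-hand sides a, b are illustrative stand-ins for the concrete terms of (52)):

```python
from fractions import Fraction

def solve_coupled(eps, lam, a, b):
    """Solve  w1' - eps*lam*w2' = a,  w2' - eps*lam*w1' = b  by Cramer's rule.
    The system determinant is 1 - (eps*lam)**2, which produces the
    1/(eps^2 lam^2 - 1) factors of (53) up to an overall sign."""
    det = 1 - (eps * lam) ** 2
    w1p = (a + eps * lam * b) / det
    w2p = (b + eps * lam * a) / det
    return w1p, w2p

eps, lam = Fraction(1, 10), Fraction(2)
a, b = Fraction(3), Fraction(5)
w1p, w2p = solve_coupled(eps, lam, a, b)
# the solved values satisfy the original coupled equations exactly
assert w1p - eps * lam * w2p == a
assert w2p - eps * lam * w1p == b
```

The elimination is exact precisely when |ελ| ≠ 1, which is why that restriction accompanies (53).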

Neglecting the left multipliers 2u+1 (u = u 1 , u 2 ) in (55), we obtain the following system (see condition (C2)):

{ u_1' = w_1,   w_1' = −(2u_1 + 1) u_1,
  u_2' = w_2,   w_2' = −(2u_2 + 1) u_2.
(56)

Clearly, condition (C1) is satisfied with x_0 = (−1/2, 0) and

F(x) = (w, −(2u+1)u)^T,   ω(x) = 2u+1,   x = (u, w)^T.

The equation u'' + (2u+1)u = 0 has the first integral u'^2 + (4/3)u^3 + u^2. A solution u_0(s) satisfying lim_{s→±∞} u_0(s) = −1/2 has to satisfy u'^2 + (4/3)u^3 + u^2 − 1/12 = 0, that is, 3u'^2 + (4u−1)(u+1/2)^2 = 0, with the solution

u_0(s) = (1/4)(1 − 3 tanh²(s/2))

bounded on ℝ. Hence

γ(s) = ( (1/4)(1 − 3 tanh²(s/2)),  −6 csch³(s) sinh⁴(s/2) )^T.
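As a numerical sanity check (an editorial Python sketch, not part of the paper; the minus signs written explicitly here are reconstructions), one can verify the first integral along u_0, that the second component of γ(s) equals u_0'(s), and the limit u_0(±∞) = −1/2:

```python
import math

def u0(s):  # bounded solution from the text
    return 0.25 * (1.0 - 3.0 * math.tanh(s / 2) ** 2)

def u0p(s):  # u0'(s) = -(3/4) tanh(s/2) sech^2(s/2)
    return -0.75 * math.tanh(s / 2) / math.cosh(s / 2) ** 2

for s in [-5.0, -1.0, 0.5, 2.0, 7.0]:
    # first integral: u'^2 + (4/3)u^3 + u^2 = 1/12 along u0
    H = u0p(s) ** 2 + (4 / 3) * u0(s) ** 3 + u0(s) ** 2
    assert abs(H - 1 / 12) < 1e-12
    # second component of γ(s): -6 csch^3(s) sinh^4(s/2) equals u0'(s)
    g2 = -6.0 * math.sinh(s / 2) ** 4 / math.sinh(s) ** 3
    assert abs(g2 - u0p(s)) < 1e-12
    # ω(γ(s)) = 2 u0(s) + 1 = (3/2) sech^2(s/2) > 0
    assert abs(2 * u0(s) + 1 - 1.5 / math.cosh(s / 2) ** 2) < 1e-12
assert abs(u0(40.0) + 0.5) < 1e-12  # u0(±∞) = -1/2
```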

Note ω(γ(s))=2 u 0 (s)+1= 3 2 sech 2 s 2 >0. From

F ( x 0 )=( 0 1 1 0 ),

we get μ_± = ±1 and γ_± = (1, ±1)^T. Since ∇ω(x_0) = (2, 0)^T, we derive ⟨∇ω(x_0), γ_±⟩ = 2 > 0, and condition (C3) holds as well. Next, we have

G 1 ( x 1 , x 2 , t , ε , κ ) = 2 u 1 + 1 ε 2 λ 2 1 ( 0 sin t + λ u 2 + γ 1 w 1 + ε λ ( χ sin ( t + ϖ ) + λ u 1 + γ 2 w 2 ) ) , G 2 ( x 1 , x 2 , t , ε , κ ) = 2 u 2 + 1 ε 2 λ 2 1 ( 0 χ sin ( t + ϖ ) + λ u 1 + γ 2 w 2 + ε λ ( sin t + λ u 2 + γ 1 w 1 ) ) ;

hence G i ( x 0 , x 0 ,t,ε,κ)=0 and assumption (C2) is also verified. Here x i = ( u i , w i ) , i=1,2. Furthermore by (15)

θ(s)= 0 s ω ( γ ( τ ) ) dτ= 0 s 3 2 sech 2 τ 2 dτ=3tanh s 2 ,

so T ± =±3 and

x_h(t) = γ(θ^{−1}(t)) = ( (1/4)(1 − t²/3),  (t/4)(t²/9 − 1) )^T.

Thus

ω(x_h(t)) = (1/6)(9 − t²),   ∇ω(x_h(t)) = (2, 0)^T,   F(x_h(t))/ω(x_h(t)) = ( −t/6, (1/12)(t² − 3) )^T,   F'(x_h(t)) = ( 0   1 ; t²/3 − 2   0 ).
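A short sketch (editorial Python with hypothetical helper names) confirms θ(s) = 3 tanh(s/2), its inverse θ^{−1}(t) = 2 arctanh(t/3), and the resulting components of x_h(t):

```python
import math

def u0(s):
    return 0.25 * (1.0 - 3.0 * math.tanh(s / 2) ** 2)

def u0p(s):
    return -0.75 * math.tanh(s / 2) / math.cosh(s / 2) ** 2

def theta_inv(t):  # inverse of θ(s) = 3 tanh(s/2)
    return 2.0 * math.atanh(t / 3.0)

for t in [-2.9, -1.0, 0.3, 2.5]:
    s = theta_inv(t)
    assert abs(3 * math.tanh(s / 2) - t) < 1e-12            # θ(θ⁻¹(t)) = t
    assert abs(u0(s) - (1 - t * t / 3) / 4) < 1e-12         # u-component of x_h
    assert abs(u0p(s) - (t / 4) * (t * t / 9 - 1)) < 1e-12  # w-component of x_h
    assert abs(2 * u0(s) + 1 - (9 - t * t) / 6) < 1e-12     # ω(x_h(t))
```

In particular θ(±∞) = ±3, matching T_± = ±3.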

So now (16) has the form

z_1' = (2t/(9−t²)) z_1 + (6/(9−t²)) z_2,   z_2' = −z_1,
(57)

which has the solution x_h'(t) = ( −t/6, (1/12)(t²−3) ). In other words, we deal with

z_2'' = (2t/(9−t²)) z_2' − (6/(9−t²)) z_2,
(58)

possessing the solution 1 12 ( t 2 3). Following [[27], p.327], the second solution of (58) is given by

(1/9)( (t²−3) arctanh(t/3) − 3t ).
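Both solutions of (58) can be checked numerically against the sign conventions reconstructed here (an editorial Python sketch; `ode_residual` is an illustrative name):

```python
import math

def ode_residual(z, t, h=1e-4):
    """Central-difference residual of (58) in the reconstructed form
       z'' = 2t/(9-t^2) z' - 6/(9-t^2) z   (sign conventions assumed)."""
    zpp = (z(t + h) - 2 * z(t) + z(t - h)) / h ** 2
    zp = (z(t + h) - z(t - h)) / (2 * h)
    return zpp - 2 * t / (9 - t * t) * zp + 6 / (9 - t * t) * z(t)

phi1 = lambda t: (t * t - 3) / 12                               # first solution
phi2 = lambda t: ((t * t - 3) * math.atanh(t / 3) - 3 * t) / 9  # second solution

for t in [-2.0, -0.5, 1.0, 2.5]:
    assert abs(ode_residual(phi1, t)) < 1e-6
    assert abs(ode_residual(phi2, t)) < 1e-6
```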

Consequently a fundamental matrix solution of (57) has the form

Z(t) = ( −t/6   (2/9)( 3(t²−6)/(t²−9) − t arctanh(t/3) ) ; (1/12)(t²−3)   (1/9)( (t²−3) arctanh(t/3) − 3t ) ).

Note that detZ(t) = 1/(9−t²). The adjoint system of (57) is (see (45))

ζ_1' = (2t/(t²−9)) ζ_1 + ζ_2,   ζ_2' = (6/(t²−9)) ζ_1
(59)

with the fundamental matrix solution

Z^{−T}(t) = ( (1/9)(9−t²)( (t²−3) arctanh(t/3) − 3t )   (1/12)(t²−9)(t²−3) ; (2/3)(t²−6) − (2/9) t(t²−9) arctanh(t/3)   (1/6) t(t²−9) ).

Note that lim_{t→±3} Z^{−T}(t) = ( 0   0 ; 2   0 ). Thus we take

v_1(t) = ( (1/9)(t²−9)( 3t − (t²−3) arctanh(t/3) ) ; (2/3)(t²−6) − (2/9) t(t²−9) arctanh(t/3) )

and

v_2(t) = ( (1/12)(t²−9)(t²−3) ; (1/6) t(t²−9) ).
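With all minus signs written out (as assumed in this sketch), v_1 and v_2 indeed solve the adjoint system (59); a finite-difference check in Python (assumed helper names, not from the paper):

```python
import math

def adjoint_residuals(z1, z2, t, h=1e-6):
    """Residuals of (59):  ζ1' = 2t/(t²-9) ζ1 + ζ2,  ζ2' = 6/(t²-9) ζ1."""
    d1 = (z1(t + h) - z1(t - h)) / (2 * h)
    d2 = (z2(t + h) - z2(t - h)) / (2 * h)
    r1 = d1 - 2 * t / (t * t - 9) * z1(t) - z2(t)
    r2 = d2 - 6 / (t * t - 9) * z1(t)
    return r1, r2

v11 = lambda t: (t * t - 9) * (3 * t - (t * t - 3) * math.atanh(t / 3)) / 9
v12 = lambda t: 2 * (t * t - 6) / 3 - 2 * t * (t * t - 9) * math.atanh(t / 3) / 9
v21 = lambda t: (t * t - 9) * (t * t - 3) / 12
v22 = lambda t: t * (t * t - 9) / 6

for t in [-2.5, -1.0, 0.5, 2.0]:
    for z1, z2 in [(v11, v12), (v21, v22)]:
        r1, r2 = adjoint_residuals(z1, z2, t)
        assert abs(r1) < 1e-7 and abs(r2) < 1e-7
```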

We now compute M(α,κ). We have (see the appendix)

T T + v 1 ( t ) G 1 ( x h ( t ) , x h ( t ) , t + α , 0 , κ ) ω ( x h ( t ) ) d t = 1 3 ( 9 λ + 4 sin α ( Γ ( 2 cos 3 3 sin 3 ) + cos 3 ( 9 2 Ci 6 + log 36 3 Si 6 ) + sin 3 ( 3 + 3 Ci 6 3 log 6 2 Si 6 ) ) ) = a 11 λ + a 12 sin α ,
(60)

where Si(t) = ∫_0^t (sin s)/s ds is the sine integral function, Ci(t) = −∫_t^∞ (cos s)/s ds is the cosine integral function [[28], p.886], and Γ is the Euler–Mascheroni constant. Similarly

∫_{T_−}^{T_+} v_2(t)ᵀ G_1(x_h(t), x_h(t), t+α, 0, κ) / ω(x_h(t)) dt = −(54/35) γ_1 − 2(3cos3 + 2sin3) cos α = a_{21} γ_1 + a_{22} cos α
(61)

and

T T + v 1 ( t ) G 2 ( x h ( t ) , x h ( t ) , t + α , 0 , κ ) ω ( x h ( t ) ) d t = a 11 λ + a 12 χ sin ( α + ϖ ) , T T + v 2 ( t ) G 2 ( x h ( t ) , x h ( t ) , t + α , 0 , κ ) ω ( x h ( t ) ) d t = a 21 γ 2 + a 22 χ cos ( α + ϖ ) .
(62)

Note

a_{11} = 3,   a_{12} ≈ 9.7406,   a_{21} = −54/35,   a_{22} ≈ 5.37547.
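These values can be reproduced by direct quadrature of the integrals computed in the appendix (an editorial Python sketch; the signs depend on the orientation of v_1, v_2 fixed above, which this check assumes):

```python
import math

def simpson(f, a, b, n=20000):  # composite Simpson rule, n even
    h = (b - a) / n
    return (f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h)
                              for k in range(1, n))) * h / 3

def v12(t):  # second component of v1; the value 2 at t = ±3 is the limit
    if 3.0 - abs(t) < 1e-12:
        return 2.0
    return 2 * (t * t - 6) / 3 - 2 * t * (t * t - 9) * math.atanh(t / 3) / 9

def v22(t):  # second component of v2
    return t * (t * t - 9) / 6

I1 = simpson(lambda t: v12(t) * (t * t - 3), -3, 3)
I2 = simpson(lambda t: v12(t) * math.cos(t), -3, 3)
I3 = simpson(lambda t: v22(t) * t * (9 - t * t), -3, 3)
I4 = simpson(lambda t: v22(t) * math.sin(t), -3, 3)

assert abs(I1 - 36) < 1e-3                  # a11 = I1/12 = 3
assert abs(I2 + 9.7406) < 1e-3              # a12 = -I2 ≈ 9.7406
assert abs(I3 + 1944 / 35) < 1e-6           # a21 = I3/36 = -54/35
assert abs(I4 - (4 * math.sin(3) + 6 * math.cos(3))) < 1e-6
assert abs(-I4 - 5.37547) < 1e-4            # a22 = -I4 ≈ 5.37547
```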

Consequently, the Melnikov function is now

M(α,κ)=( a 11 λ + a 12 sin α a 21 γ 1 + a 22 cos α a 11 λ + a 12 χ sin ( α + ϖ ) a 21 γ 2 + a 22 χ cos ( α + ϖ ) ).
(63)

The equation M(α,κ)=0 is equivalent to

λ = −(a_{12}/a_{11}) sin α = −(a_{12}/a_{11}) χ sin(α+ϖ),   γ_1 = −(a_{22}/a_{21}) cos α,   γ_2 = −(a_{22}/a_{21}) χ cos(α+ϖ).
(64)

So having χ>0 and ϖ>0 such that the equation

sin α − χ sin(α+ϖ) = 0
(65)

has a simple zero α 0 with sin α 0 < 0, cos α 0 > 0 and cos(α 0 +ϖ) > 0, formulas (64) give a simple zero (α 0 , γ 1,0 , γ 2,0 , λ 0 ) of (63) with positive γ 1,0 , γ 2,0 , λ 0 , and Corollary 4.5 can be applied to (51). If χ cos ϖ ≠ 1, then (65) is equivalent to

tan α = (χ sin ϖ)/(1 − χ cos ϖ).
(66)

Hence, assuming ϖ ∈ ]π, 2π[ and χ cos ϖ < 1, the right-hand side of (66) is negative, and then

α 0 = arctan( (χ sin ϖ)/(1 − χ cos ϖ) ) ∈ ]−π/2, 0[
(67)

satisfies sin α 0 <0 and cos α 0 >0. Since α 0 satisfies (65) and sin α 0 <0, condition cos( α 0 +ϖ)>0 is equivalent to tan( α 0 +ϖ)<0. Then using (66), we derive

0 > tan(α 0 + ϖ) = sin ϖ/(cos ϖ − χ),   ϖ ≠ 3π/2.
(68)

When cos ϖ < 0, (68) is not satisfied, since sin ϖ < 0. So we take ϖ ∈ ]3π/2, 2π[, and (68) also gives χ < cos ϖ. Clearly χ < cos ϖ implies χ cos ϖ < 1.
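A quick numerical check of (65)-(67) for sample values in this region (the choices χ = 0.2, ϖ = 7π/4 are illustrative, not from the paper):

```python
import math

# sample values with ϖ in ]3π/2, 2π[ and 0 < χ < cos ϖ
chi, w = 0.2, 7 * math.pi / 4              # cos(7π/4) ≈ 0.7071 > 0.2
alpha0 = math.atan(chi * math.sin(w) / (1 - chi * math.cos(w)))  # formula (67)

assert abs(math.sin(alpha0) - chi * math.sin(alpha0 + w)) < 1e-12  # solves (65)
assert -math.pi / 2 < alpha0 < 0
assert math.sin(alpha0) < 0 and math.cos(alpha0) > 0
assert math.cos(alpha0 + w) > 0            # so (64) yields positive parameters
```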

Summarizing we see that for any fixed χ and ϖ satisfying

0<χ<cosϖ,ϖ] 3 π 2 ,2π[,
(69)

the Melnikov function (63) has a simple zero ( α 0 , γ 1 , 0 , γ 2 , 0 , λ 0 ) given by (64) and (67), and γ 1 , 0 >0, γ 2 , 0 >0, λ 0 >0. Hence in the region given by (69) we apply Corollary 4.5 to (51) with parameters γ 1 , γ 2 , λ near γ 1 , 0 , γ 2 , 0 , λ 0 determined by (64) and (67), i.e.,

λ_0 = −(a_{12} χ sin ϖ)/(a_{11} √(1+χ²−2χ cos ϖ)),   γ_{1,0} = −(a_{22}(1−χ cos ϖ))/(a_{21} √(1+χ²−2χ cos ϖ)),   γ_{2,0} = −(χ a_{22}(cos ϖ−χ))/(a_{21} √(1+χ²−2χ cos ϖ))
(70)

for any fixed ϖ and χ satisfying (69). Summarizing, we get the following.

Theorem 5.1 For any fixed ϖ, χ satisfying (69) and α 0 , γ 1,0 , γ 2,0 , λ 0 given by (67) and (70), there is an ε 0 > 0 and smooth functions α, γ 1 , γ 2 , λ : ]−ε 0 , ε 0 [ → ℝ with α(0) = α 0 , γ 1 (0) = γ 1,0 , γ 2 (0) = γ 2,0 , λ(0) = λ 0 , such that for any ε ∈ ]−ε 0 , ε 0 [ ∖ {0}, system (51) with γ 1 = γ 1 (ε), γ 2 = γ 2 (ε), λ = λ(ε) possesses a unique solution (u 1 (ε,t), u 2 (ε,t)) on ]−3+α(ε), 3+α(ε)[ such that

lim_{ε→0} sup_{t∈]−3,3[} | u_j(ε, t+α(ε)) − (1/4)(1 − t²/3) | (9−t²)^{−1} = 0,
lim_{ε→0} sup_{t∈]−3,3[} | u_j'(ε, t+α(ε)) + t/6 | = 0.
(71)

Proof We apply Corollary 4.5 to (53). Since, in this case, φ( θ 1 (t))= 1 2 9 t 2 9 + t 2 , according to Corollary 4.5 (53) has a solution ( u j (t), w j (t)), j=1,2, such that

lim_{ε→0} sup_{t∈]−3,3[} | u_j(ε, t+α(ε)) − (1/4)(1 − t²/3) | (9−t²)^{−1} = 0,
lim_{ε→0} sup_{t∈]−3,3[} | w_j(ε, t+α(ε)) − (t/4)(t²/9 − 1) | (9−t²)^{−1} = 0.
(72)

Now, w_j(ε,t) = (1+2u_j(ε,t)) u_j'(ε,t) and (t/4)(t²/9−1) = (1+2u_h(t)) u_h'(t), where u_h(t) = (1/4)(1−t²/3). So, taking t' := t+α(ε),

w_j(ε,t') − (t/4)(t²/9−1) = (1+2u_j(ε,t')) u_j'(ε,t') − (1+2u_h(t)) u_h'(t)
= (1+2u_j(ε,t'))(u_j'(ε,t') − u_h'(t)) + 2(u_j(ε,t') − u_h(t)) u_h'(t)
= (1+2u_h(t))(u_j'(ε,t') − u_h'(t)) + 2(u_j(ε,t') − u_h(t))(u_j'(ε,t') − u_h'(t)) + 2(u_j(ε,t') − u_h(t)) u_h'(t)
= [ (9−t²)/6 + 2(u_j(ε,t') − u_h(t)) ] (u_j'(ε,t') − u_h'(t)) − (t/3)(u_j(ε,t') − u_h(t))

and then

[ w_j(ε,t') − (t/4)(t²/9−1) ] (9−t²)^{−1} = [ 1/6 + 2 (u_j(ε,t') − u_h(t))/(9−t²) ] (u_j'(ε,t') − u_h'(t)) − (t/3) (u_j(ε,t') − u_h(t))/(9−t²),
(73)

or, equivalently,

[ 1/6 + 2 (u_j(ε,t') − u_h(t))/(9−t²) ] (u_j'(ε,t') − u_h'(t)) = [ w_j(ε,t') − (t/4)(t²/9−1) ] (9−t²)^{−1} + (t/3) (u_j(ε,t') − u_h(t))/(9−t²).
(74)

Since sup_{−3<t<3} |u_j(ε,t') − u_h(t)| (9−t²)^{−1} = O(ε), we see that, for ε sufficiently small,

1/7 < | 1/6 + 2 (u_j(ε,t') − u_h(t))/(9−t²) | < 1/5,

and hence, using (74) and (72),

lim_{ε→0} sup_{−3<t<3} | u_j'(ε,t') − u_h'(t) | = 0.
(75)

Vice versa, if (75) and the first of (72) hold, then (73) gives

lim_{ε→0} sup_{−3<t<3} | w_j(ε, t+α(ε)) − (t/4)(t²/9−1) | (9−t²)^{−1} = 0.

Hence (71) and (72) are equivalent. The proof is complete. □
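The factorization used in the chain of identities above, (1+2u)u' − (1+2u_h)u_h' = [(9−t²)/6 + 2(u−u_h)](u'−u_h') − (t/3)(u−u_h), is an exact algebraic identity; a randomized check (editorial Python sketch, hypothetical names):

```python
import random

def identity_gap(t, u, p):
    """|LHS - RHS| of the factorization identity used in the proof, with
    u_h(t) = (1/4)(1 - t²/3),  u_h'(t) = -t/6,  and p playing the role of u'."""
    uh, dh = (1 - t * t / 3) / 4, -t / 6
    lhs = (1 + 2 * u) * p - (1 + 2 * uh) * dh
    rhs = ((9 - t * t) / 6 + 2 * (u - uh)) * (p - dh) - (t / 3) * (u - uh)
    return abs(lhs - rhs)

random.seed(1)
for _ in range(100):
    t = random.uniform(-2.9, 2.9)
    assert identity_gap(t, random.uniform(-1, 1), random.uniform(-1, 1)) < 1e-10
```

The key step is that (9−t²)/6 = 1 + 2u_h(t) and −t/3 = 2u_h'(t).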

Of course, the solutions given by Theorem 5.1 vary smoothly with respect to (ϖ,χ) satisfying (69).

Remark 5.2 The analysis above misses the second possibility, namely ϖ ∈ ]0,π[. Then sin ϖ > 0 and the right-hand side of (66) is negative if

χ cos ϖ > 1,
(76)

so we get ϖ ∈ ]0, π/2[ and χ > 1. Then the inequality of (68) is satisfied, since χ > 1 > cos ϖ. So we conclude that the result of Theorem 5.1 is valid also for

χ cos ϖ > 1,   ϖ ∈ ]0, π/2[,   λ_0 = (a_{12} χ sin ϖ)/(a_{11} √(1+χ²−2χ cos ϖ)),   γ_{1,0} = (a_{22}(1−χ cos ϖ))/(a_{21} √(1+χ²−2χ cos ϖ)),   γ_{2,0} = (χ a_{22}(cos ϖ−χ))/(a_{21} √(1+χ²−2χ cos ϖ)).

Appendix

Let v_{12}(t) = (2/3)(t²−6) − (2/9) t(t²−9) arctanh(t/3) be the second component of v_1(t). Note that v_{12}(t) is an even function, and hence

∫_{T_−}^{T_+} v_1(t)ᵀ G_1(x_h(t), x_h(t), t+α, 0, κ) / ω(x_h(t)) dt
= −∫_{−3}^{3} v_{12}(t) [ γ_1 (t/4)(t²/9−1) + (λ/4)(1−t²/3) + sin t cos α + sin α cos t ] dt
= −∫_{−3}^{3} v_{12}(t) [ (λ/4)(1−t²/3) + sin α cos t ] dt
= (λ/12) ∫_{−3}^{3} v_{12}(t)(t²−3) dt − ( ∫_{−3}^{3} v_{12}(t) cos t dt ) sin α.
(77)

Similarly,

∫_{T_−}^{T_+} v_1(t)ᵀ G_2(x_h(t), x_h(t), t+α, 0, κ) / ω(x_h(t)) dt = (λ/12) ∫_{−3}^{3} v_{12}(t)(t²−3) dt − χ ( ∫_{−3}^{3} v_{12}(t) cos t dt ) sin(α+ϖ).
(78)

Now, using lim_{t→±1} (t²−1) arctanh(t) = 0 and integration by parts in the second integral,

∫_{−3}^{3} v_{12}(t)(t²−3) dt = ∫_{−3}^{3} [ (2/3)(t²−6) − (2/9) t(t²−9) arctanh(t/3) ] (t²−3) dt
= 144/5 − (2/9) ∫_{−3}^{3} t(t²−9)(t²−3) arctanh(t/3) dt
= 144/5 − (2/9) [ (1/6) t²(t²−9)² arctanh(t/3) |_{−3}^{3} − ∫_{−3}^{3} (1/6) t²(t²−9)² (3/(9−t²)) dt ]
= 144/5 − (1/9) ∫_{−3}^{3} t²(t²−9) dt = 144/5 + 36/5 = 36.
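The two pieces of this computation, 144/5 and 36/5, can be confirmed by quadrature (an editorial sketch; Simpson's rule is our choice, not the paper's):

```python
import math

def simpson(f, a, b, n=20000):
    h = (b - a) / n
    return (f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h)
                              for k in range(1, n))) * h / 3

# polynomial part: ∫ (2/3)(t²-6)(t²-3) dt over [-3,3] = 144/5
P = simpson(lambda t: 2 * (t * t - 6) * (t * t - 3) / 3, -3, 3)

# arctanh part: -(2/9) ∫ t(t²-9)(t²-3) arctanh(t/3) dt = 36/5
def g(t):
    if 3.0 - abs(t) < 1e-12:
        return 0.0  # the integrand extends continuously by 0 at t = ±3
    return -2 * t * (t * t - 9) * (t * t - 3) * math.atanh(t / 3) / 9
A = simpson(g, -3, 3)

assert abs(P - 144 / 5) < 1e-9
assert abs(A - 36 / 5) < 1e-4
assert abs(P + A - 36) < 1e-4   # total: ∫ v12(t)(t²-3) dt = 36
```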

Furthermore, since

∫_{−3}^{3} v_{12}(t) cos t dt = ∫_{−3}^{3} [ (2/3)(t²−6) − (2/9) t(t²−9) arctanh(t/3) ] cos t dt,

we compute

3 3 ( t 2 6 ) costdt=2(sin3+6cos3),

and

I ( t ) : = t ( t 2 9 ) cos t arctanh t 3 d t = ( 3 ( 5 + t 2 ) cos t + t ( 15 + t 2 ) sin t ) arctanh t 3 + 3 ( 3 ( 5 + t 2 ) cos t + t ( 15 + t 2 ) sin t ) 9 + t 2 d t .

Next

3 ( 5 + t 2 ) cos t + t ( 15 + t 2 ) sin t 9 + t 2 d t = ( 3 cos t + t sin t ) d t 6 2 cos t + t sin t 9 + t 2 d t = t cos t + 4 sin t 6 2 cos t + t sin t 9 + t 2 d t .

Furthermore,

cos t 9 + t 2 d t = 1 6 cos t t 3 d t 1 6 cos t t + 3 d t = 1 6 cos ( 3 z ) z d z | z = 3 t 1 6 cos ( v 3 ) v d v | v = t + 3 = 1 6 ( cos 3 cos z + sin 3 sin z z d z | z = 3 t cos v cos 3 + sin v sin 3 v d v | v = t + 3 ) = 1 6 ( cos 3 Ci ( 3 t ) + sin 3 Si ( 3 t ) ) 1 6 ( cos 3 Ci ( 3 + t ) + sin 3 Si ( 3 + t ) ) = 1 6 ( cos 3 ( Ci ( 3 t ) Ci ( 3 + t ) ) + sin 3 ( Si ( 3 t ) Si ( 3 + t ) ) ) ,

and

t sin t 9 + t 2 d t = 1 2 sin t t 3 d t + 1 2 sin t t + 3 d t = 1 2 ( sin 3 Ci ( 3 t ) cos 3 Si ( 3 t ) ) + 1 2 ( cos 3 Si ( 3 + t ) sin 3 Ci ( 3 + t ) ) = 1 2 ( cos 3 ( Si ( 3 + t ) Si ( 3 t ) ) + sin 3 ( Ci ( 3 t ) Ci ( 3 + t ) ) ) .

Hence

I ( t ) = ( 3 ( 5 + t 2 ) cos t + t ( 15 + t 2 ) sin t ) arctanh t 3 3 t cos t + 12 sin t + 3 ( Ci ( 3 t ) ( 2 cos 3 3 sin 3 ) + Ci ( 3 + t ) ( 2 cos 3 + 3 sin 3 ) + ( 3 cos 3 + 2 sin 3 ) ( Si ( 3 t ) Si ( 3 + t ) ) ) .

Clearly I(0)=0. Next, using arctanh t 3 = 1 2 (ln6ln(3t))+o(1), Ci(3t)=Γ+ln(3t)+o(1) as t 3 , we derive

I ( t ) = Ci ( 3 + t ) ( 6 cos 3 + 9 sin 3 ) + 1 2 ( 3 cos t ( 2 t + 2 o ( 1 ) ( 5 + t 2 ) + ( 5 + t 2 ) ln 6 ) + 6 ( Γ + o ( 1 ) ) ( 2 cos 3 3 sin 3 ) + ( 24 + t ( 15 + t 2 ) ( 2 o ( 1 ) + ln 6 ) ) sin t + 6 ( 3 cos 3 + 2 sin 3 ) ( Si ( 3 t ) Si ( 3 + t ) ) ) + 1 2 ( 3 ( 5 + t 2 ) cos t + 6 ( 2 cos 3 3 sin 3 ) t ( 15 + t 2 ) sin t ) ln ( 3 t ) .

Setting f(t):= 1 2 (3(5+ t 2 )cost+6(2cos33sin3)t(15+ t 2 )sint) we see f(3)=0, so f(t)ln(3−t) = f'(t̃)(t−3)ln(3−t) for some t̃ ∈ (t,3), and using lim_{t→3^−} (t−3)ln(3−t) = 0, we obtain

I ( 3 ) = lim t 3 I ( t ) = 3 cos 3 ( 3 + 2 Γ 2 Ci 6 + ln 36 3 Si 6 ) 3 sin 3 ( 4 + 3 Γ 3 Ci 6 + ln 216 + 2 Si 6 ) .

Summarizing we arrive at

3 3 v 12 ( t ) cos t d t = 4 3 ( sin 3 + 6 cos 3 ) 4 9 I ( 3 ) = sin 3 ( 3 + 3 Γ 3 Ci 6 + ln 216 + 2 Si 6 ) + cos 3 ( 9 2 Γ + 2 Ci 6 ln 36 + 3 Si 6 ) .

So inserting the above computations into (77) and (78), we get (60) and the first formula of (62). Next, since the second component v 22 (t)= 1 6 t( t 2 9) of v 2 (t) is odd, we get

∫_{T_−}^{T_+} v_2(t)ᵀ G_1(x_h(t), x_h(t), t+α, 0, κ) / ω(x_h(t)) dt = −∫_{−3}^{3} v_{22}(t) ( sin(t+α) + (λ/4)(1−t²/3) + (γ_1/4) t(t²/9−1) ) dt = −( ∫_{−3}^{3} v_{22}(t) sin t dt ) cos α + (1/36) γ_1 ∫_{−3}^{3} v_{22}(t) t(9−t²) dt.

But

∫_{−3}^{3} v_{22}(t) t(9−t²) dt = −∫_{−3}^{3} (1/6) t²(t²−9)² dt = −1944/35

and

3 3 v 22 (t)sintdt= 3 3 1 6 t ( t 2 9 ) sintdt=4sin3+6cos3.

So we obtain (61). Similarly,

∫_{T_−}^{T_+} v_2(t)ᵀ G_2(x_h(t), x_h(t), t+α, 0, κ) / ω(x_h(t)) dt = −∫_{−3}^{3} v_{22}(t) ( χ sin(t+α+ϖ) + (λ/4)(1−t²/3) + (γ_2/4) t(t²/9−1) ) dt = −χ ( ∫_{−3}^{3} v_{22}(t) sin t dt ) cos(α+ϖ) + (1/36) γ_2 ∫_{−3}^{3} v_{22}(t) t(9−t²) dt = −(4 sin 3 + 6 cos 3) χ cos(α+ϖ) − (54/35) γ_2,

which implies the second formula of (62).
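The elementary integral ∫ v_{22}(t) sin t dt = 4 sin 3 + 6 cos 3 used above can be double-checked through an explicit antiderivative (an editorial sketch; the helper name F is ours):

```python
import math

def F(t):
    """Antiderivative of (1/6) t (t²-9) sin t, obtained by integration
    by parts; its derivative is checked numerically below."""
    return (-t ** 3 * math.cos(t) + 3 * t * t * math.sin(t)
            + 15 * t * math.cos(t) - 15 * math.sin(t)) / 6

# derivative check at a few points
for t in [-2.0, 0.7, 2.4]:
    h = 1e-6
    dF = (F(t + h) - F(t - h)) / (2 * h)
    assert abs(dF - t * (t * t - 9) * math.sin(t) / 6) < 1e-8

val = F(3) - F(-3)
assert abs(val - (4 * math.sin(3) + 6 * math.cos(3))) < 1e-12
assert abs(abs(val) - 5.37547) < 1e-4   # matches |a22| from Section 5
```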

References

  1. Battelli, F, Fečkan, M: Nonlinear RLC circuits and implicit ode’s. Differ. Integral Equ. (2014)


  2. Lazarides N, Eleftheriou M, Tsironis GP: Discrete breathers in nonlinear magnetic metamaterials. Phys. Rev. Lett. 2006., 97: Article ID 157406


  3. Veldes GP, Cuevas J, Kevrekidis PG, Frantzeskakis DJ: Quasidiscrete microwave solitons in a split-ring-resonator-based left-handed coplanar waveguide. Phys. Rev. E 2011., 83: Article ID 046608


  4. Riaza R: Differential-Algebraic Systems, Analytical Aspects and Circuit Applications. World Scientific, Singapore; 2008.


  5. Kunkel P, Mehrmann V: Differential-Algebraic Equations, Analysis and Numerical Solution. Eur. Math. Soc., Zürich; 2006.


  6. Medved’ M: Normal forms of implicit and observed implicit differential equations. Riv. Mat. Pura Appl. 1991, 10: 95-107.


  7. Medved’ M: Qualitative properties of generalized vector fields. Riv. Mat. Pura Appl. 1994, 15: 7-31.


  8. Rabier PJ: Implicit differential equations near a singular point. J. Math. Anal. Appl. 1989, 144: 425-449. 10.1016/0022-247X(89)90344-2


  9. Rabier PJ, Rheinboldt WC: A general existence and uniqueness theorem for implicit differential algebraic equations. Differ. Integral Equ. 1991, 4: 563-582.


  10. Rabier PJ, Rheinbold WC: A geometric treatment of implicit differential-algebraic equations. J. Differ. Equ. 1994, 109: 110-146. 10.1006/jdeq.1994.1046


  11. Rabier PJ, Rheinbold WC: On impasse points of quasilinear differential algebraic equations. J. Math. Anal. Appl. 1994, 181: 429-454. 10.1006/jmaa.1994.1033


  12. Rabier PJ, Rheinbold WC: On the computation of impasse points of quasilinear differential algebraic equations. Math. Comput. 1994, 62: 133-154.


  13. Andres J, Górniewicz L: Topological Principles for Boundary Value Problems. Kluwer, Dordrecht; 2003.


  14. Górniewicz L: Topological Fixed Point Theory of Multivalued Mappings. Springer, Berlin; 2009.


  15. Fečkan M: Existence results for implicit differential equations. Math. Slovaca 1998, 48: 35-42.


  16. Frigon M, Kaczynski T: Boundary value problems for systems of implicit differential equations. J. Math. Anal. Appl. 1993, 179: 317-326. 10.1006/jmaa.1993.1353


  17. Heikkilä S, Kumpulainen M, Seikkala S: Uniqueness and comparison results for implicit differential equations. Dyn. Syst. Appl. 1998, 7: 237-244.


  18. Li D: Peano’s theorem for implicit differential equations. J. Math. Anal. Appl. 2001, 258: 591-616. 10.1006/jmaa.2000.7395


  19. Battelli F, Fečkan M: Melnikov theory for nonlinear implicit ode’s. J. Differ. Equ. 2014, 256: 1157-1190. 10.1016/j.jde.2013.10.012


  20. Awrejcewicz J, Holicke MM: Melnikov’s method and stick-slip chaotic oscillations in very weakly forced mechanical systems. Int. J. Bifurc. Chaos 1999, 9: 505-518. 10.1142/S0218127499000341


  21. Awrejcewicz J, Pyryev Y: Chaos prediction in the Duffing type system with friction using Melnikov’s functions. Nonlinear Anal., Real World Appl. 2006, 7: 12-24. 10.1016/j.nonrwa.2005.01.002


  22. Coddington EA, Levinson N: Theory of Ordinary Differential Equations. Tata McGraw-Hill, New Delhi; 1972. Reprint (1987)


  23. Battelli F, Lazzari C: Exponential dichotomies, heteroclinic orbits, and Melnikov functions. J. Differ. Equ. 1990, 86: 342-366. 10.1016/0022-0396(90)90034-M


  24. Coppel WA: Dichotomies in Stability Theory. Springer, Berlin; 1978.


  25. Palmer KJ: Exponential dichotomies and transversal homoclinic points. J. Differ. Equ. 1984, 55: 225-256. 10.1016/0022-0396(84)90082-2


  26. Calamai A, Franca M: Melnikov methods and homoclinic orbits in discontinuous systems. J. Dyn. Differ. Equ. 2013, 25: 733-764. 10.1007/s10884-013-9307-4


  27. Hartman P: Ordinary Differential Equations. Wiley, New York; 1964.


  28. Gradshteyn IS, Ryzhik IM: Table of Integrals, Series, and Products. Academic Press, Boston; 2007.



Acknowledgements

BF is partially supported by PRIN-MURST ‘Equazioni Differenziali Ordinarie e Applicazioni’ (Italy). MF is partially supported by the Grants VEGA-MS 1/0071/14, VEGA-SAV 2/0029/13 and Marche Polytechnic University, Ancona (Italy).

Author information


Correspondence to Flaviano Battelli.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The work presented here was carried out in collaboration between the authors. The authors contributed to every part of this study equally and read and approved the final version of the manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article


Cite this article

Battelli, F., Fečkan, M. Melnikov theory for weakly coupled nonlinear RLC circuits. Bound Value Probl 2014, 101 (2014). https://doi.org/10.1186/1687-2770-2014-101
