
Expanding the applicability of Lavrentiev regularization methods for ill-posed problems

Abstract

In this paper, we are concerned with the problem of approximating a solution of an ill-posed problem in a Hilbert space setting using the Lavrentiev regularization method and, in particular, with expanding the applicability of this method by weakening the popular Lipschitz-type hypotheses considered in earlier studies such as (Bakushinskii and Smirnova in Numer. Funct. Anal. Optim. 26:35-48, 2005; Bakushinskii and Smirnova in Nonlinear Anal. 64:1255-1261, 2006; Bakushinskii and Smirnova in Numer. Funct. Anal. Optim. 28:13-25, 2007; Jin in Math. Comput. 69:1603-1623, 2000; Mahale and Nair in ANZIAM J. 51:191-217, 2009). Numerical examples are given to show that our convergence criteria are weaker, and our error analysis tighter at less computational cost, than those of the corresponding works cited above.

MSC:65F22, 65J15, 65J22, 65M30, 47A52.

1 Introduction

In this paper, we are interested in obtaining a stable approximate solution for a nonlinear ill-posed operator equation of the form

$F(x)=y,$
(1.1)

where $F:D(F)\subseteq X\to X$ is a monotone operator and $X$ is a Hilbert space. We denote the inner product and the corresponding norm on $X$ by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let $U(x,r)$ stand for the open ball in $X$ with center $x\in X$ and radius $r>0$. Note that $F$ is a monotone operator if it satisfies the relation

$\langle F(x_1)-F(x_2),\,x_1-x_2\rangle\ge 0$
(1.2)

for all $x_1,x_2\in D(F)$.

We assume, throughout this paper, that $y^\delta\in X$ is the available noisy data with

$\|y-y^\delta\|\le\delta,$
(1.3)

and that (1.1) has a solution $\hat{x}$. Since (1.1) is ill-posed, its solution need not depend continuously on the data, i.e., a small perturbation of the data can cause a large deviation in the solution, so regularization methods are used [1–8]. Since $F$ is monotone, Lavrentiev regularization is used to obtain a stable approximate solution of (1.1). In the Lavrentiev regularization, the approximate solution is obtained as a solution of the equation

$F(x)+\alpha(x-x_0)=y^\delta,$
(1.4)

where $\alpha>0$ is the regularization parameter and $x_0$ is an initial guess for the solution $\hat{x}$.
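To make (1.4) concrete, the following sketch solves the regularized equation for the scalar monotone operator $F(x)=e^x-1$ used in Example 7.1 below. The noise level, the choice $\alpha=\sqrt{\delta}$ and the initial guess $x_0$ are illustrative assumptions, not prescriptions from the analysis; since $F$ is monotone, the map $x\mapsto F(x)+\alpha(x-x_0)$ is strictly increasing, so bisection finds the unique solution:

```python
import math

def lavrentiev_solve(F, y_delta, alpha, x0, lo=-10.0, hi=10.0):
    """Solve F(x) + alpha*(x - x0) = y_delta by bisection.

    For monotone F the left-hand side is strictly increasing in x,
    so the regularized equation (1.4) has a unique root."""
    g = lambda x: F(x) + alpha * (x - x0) - y_delta
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Illustrative monotone problem: F(x) = e^x - 1, exact solution x_hat = 0.
F = lambda x: math.exp(x) - 1.0
delta = 1e-4
y_delta = 0.0 + delta                 # noisy data with |y - y_delta| <= delta
x_alpha = lavrentiev_solve(F, y_delta, alpha=math.sqrt(delta), x0=0.5)
print(abs(x_alpha))                   # small: the regularized solution is near x_hat
```

Despite the perturbed data and the rough initial guess, the regularized solution stays close to $\hat{x}=0$, illustrating the stabilizing role of the term $\alpha(x-x_0)$.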

In [9], Bakushinskii and Smirnova proposed the iterative method

$x_{k+1}^\delta=x_k^\delta-(\alpha_kI+A_k^\delta)^{-1}\bigl[(F(x_k^\delta)-y^\delta)+\alpha_k(x_k^\delta-x_0)\bigr],\qquad x_0^\delta=x_0,$
(1.5)

where $A_k^\delta:=F'(x_k^\delta)$ and $(\alpha_k)$ is a sequence of positive real numbers with $\alpha_k\to 0$ as $k\to\infty$. It is important to stop the iteration at an appropriate step, say $k=k_\delta$, and to show that $x_k^\delta$ is well defined for $0<k\le k_\delta$ and that $x_{k_\delta}^\delta\to\hat{x}$ as $\delta\to 0$ (see [10]).

In [9, 11, 12], Bakushinskii and Smirnova chose the stopping index $k_\delta$ by requiring it to satisfy

$\|F(x_{k_\delta}^\delta)-y^\delta\|\le 2\tau\delta<\|F(x_k^\delta)-y^\delta\|$

for $k=0,1,\ldots,k_\delta-1$ and $\tau>1$. In fact, they showed that $x_{k_\delta}^\delta\to\hat{x}$ as $\delta\to 0$ under the following assumptions:

(1) There exists $L>0$ such that $\|F'(x)-F'(y)\|\le L\|x-y\|$ for all $x,y\in D(F)$;

(2) There exists $p>0$ such that

$\frac{\alpha_k-\alpha_{k+1}}{\alpha_k\alpha_{k+1}}\le p$
(1.6)

for all $k\in\mathbb{N}$;

(3) $(2+L\sigma)\|x_0-\hat{x}\|\le td\sigma$ and $2\|x_0-\hat{x}\|\le td\alpha_0$, where

$\sigma:=\frac{\tau-1}{2},\qquad t:=p\alpha_0+1,\qquad d=2\bigl(t\|x_0-\hat{x}\|+p\sigma\bigr).$

However, no error estimate was given in [9] (see [10]).
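For orientation, here is a minimal numerical sketch of the method (1.5) combined with a discrepancy-type stopping rule of the kind used in [9, 11, 12], applied to the scalar monotone operator $F(x)=e^x-1$ of Example 7.1 below (so $A_k^\delta=F'(x_k^\delta)$ is just a positive number). The geometric sequence $\alpha_k$, the value of $\tau$ and the starting point are illustrative assumptions:

```python
import math

def lavrentiev_iterate(F, dF, y_delta, x0, alphas, tau, delta):
    """Method (1.5): x_{k+1} = x_k - (alpha_k + F'(x_k))^{-1} *
    [(F(x_k) - y_delta) + alpha_k*(x_k - x0)], stopped as soon as the
    discrepancy satisfies |F(x_k) - y_delta| <= 2*tau*delta."""
    x = x0
    for alpha in alphas:
        if abs(F(x) - y_delta) <= 2 * tau * delta:
            break
        x = x - ((F(x) - y_delta) + alpha * (x - x0)) / (alpha + dF(x))
    return x

F  = lambda x: math.exp(x) - 1.0     # monotone, exact solution x_hat = 0
dF = lambda x: math.exp(x)
delta, tau = 1e-6, 1.5
alphas = [0.5 * 0.7 ** k for k in range(60)]     # alpha_k -> 0
x = lavrentiev_iterate(F, dF, y_delta=delta, x0=0.3, alphas=alphas,
                       tau=tau, delta=delta)
print(abs(x))                        # close to x_hat = 0
```

The iterate drifts toward $\hat{x}$ as $\alpha_k$ decreases and stops once the residual falls below the discrepancy threshold.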

In [10], Mahale and Nair, motivated by the work of Qi-Nian Jin [13] on an iteratively regularized Gauss-Newton method, considered an alternative stopping criterion which not only ensures convergence, but also yields an order optimal error estimate under a general source condition on $\hat{x}-x_0$. Moreover, the condition that they imposed on $\{\alpha_k\}$ is weaker than (1.6).

In the present paper, we are motivated by [10]. In particular, we expand the applicability of the method (1.5) by weakening one of the major hypotheses in [10] (see Assumption 2.1(2) in the next section).

In Section 2, we consider some basic assumptions required throughout the paper. Section 3 deals with the stopping rule and the result that establishes the existence of the stopping index. In Section 4, we prove results for the iterations based on the exact data and, in Section 5, the error analysis for the noisy data case is proved. The main order optimal result using the a posteriori stopping rule is provided in Section 6.

2 Basic assumptions and some preliminary results

We use the following assumptions to prove the results in this paper.

Assumption 2.1

(1) There exists $r>0$ such that $U(\hat{x},r)\subseteq D(F)$ and $F:U(\hat{x},r)\to X$ is Fréchet differentiable.

(2) There exists $K_0>0$ such that, for all $u_\theta:=u+\theta(\hat{x}-u)\in U(\hat{x},r)$ with $\theta\in[0,1]$ and all $v\in X$, there exists an element, say $\phi(\hat{x},u_\theta,v)\in X$, satisfying

$[F'(\hat{x})-F'(u_\theta)]v=F'(u_\theta)\phi(\hat{x},u_\theta,v),\qquad \|\phi(\hat{x},u_\theta,v)\|\le K_0\|v\|\|\hat{x}-u_\theta\|$

for all $u_\theta\in U(\hat{x},r)$ and $v\in X$.

(3) $\|(F'(u)+\alpha I)^{-1}F'(u_\theta)\|\le 1$ for all $u_\theta\in U(\hat{x},r)$ and $\alpha>0$.

(4) $\|(F'(u)+\alpha I)^{-1}\|\le\frac{1}{\alpha}$ for all $u\in U(\hat{x},r)$ and $\alpha>0$.

The condition (2) in Assumption 2.1 weakens the popular hypotheses given in [10, 14] and [15].

Assumption 2.2 There exists a constant $K>0$ such that, for all $x,u\in U(\hat{x},r)$ and $v\in X$, there exists an element, denoted by $P(x,u,v)\in X$, satisfying

$[F'(x)-F'(u)]v=F'(u)P(x,u,v),\qquad \|P(x,u,v)\|\le K\|v\|\|x-u\|.$

Clearly, Assumption 2.2 implies Assumption 2.1(2) with $K_0=K$, but not necessarily vice versa. Note that $K_0\le K$ holds in general and that $K/K_0$ can be arbitrarily large [16–20]. Indeed, there are many classes of operators satisfying Assumption 2.1(2) but not Assumption 2.2 (see the numerical examples at the end of this study). Moreover, if $K_0$ is sufficiently smaller than $K$, which can happen since $K/K_0$ can be arbitrarily large, then the results obtained in this study provide a tighter error analysis than the one in [10].

Finally, note that the computation of constant K is more expensive than the computation of K 0 .

We need the following auxiliary results, which are based on Assumption 2.1.

Proposition 2.3 For any $u\in U(\hat{x},r)$ and $\alpha>0$,

$\bigl\|(F'(u)+\alpha I)^{-1}\bigl[F(\hat{x})-F(u)-F'(u)(\hat{x}-u)\bigr]\bigr\|\le\frac{3K_0}{2}\|\hat{x}-u\|^2.$

Proof Using the fundamental theorem of calculus, for any $u\in U(\hat{x},r)$, we get

$F(\hat{x})-F(u)=\int_0^1F'\bigl(u+t(\hat{x}-u)\bigr)(\hat{x}-u)\,dt.$

Hence, by Assumption 2.1(2),

$F(\hat{x})-F(u)-F'(u)(\hat{x}-u)=\int_0^1\bigl[F'\bigl(u+t(\hat{x}-u)\bigr)-F'(\hat{x})+F'(\hat{x})-F'(u)\bigr](\hat{x}-u)\,dt=\int_0^1F'(\hat{x})\bigl[\phi\bigl(u+t(\hat{x}-u),\hat{x},\hat{x}-u\bigr)-\phi(u,\hat{x},\hat{x}-u)\bigr]\,dt.$

Then, by (2) and (3) in Assumption 2.1 and the inequality $\|(F'(u)+\alpha I)^{-1}F'(\hat{x})\|\le 1$, we obtain in turn

$\bigl\|(F'(u)+\alpha I)^{-1}\bigl[F(\hat{x})-F(u)-F'(u)(\hat{x}-u)\bigr]\bigr\|\le\int_0^1\bigl(\|\phi(u+t(\hat{x}-u),\hat{x},\hat{x}-u)\|+\|\phi(u,\hat{x},\hat{x}-u)\|\bigr)\,dt\le\int_0^1K_0\|\hat{x}-u\|^2t\,dt+K_0\|\hat{x}-u\|^2\le\frac{3K_0}{2}\|\hat{x}-u\|^2.$

This completes the proof. □

Proposition 2.4 For any $u\in U(\hat{x},r)$ and $\alpha>0$,

$\alpha\bigl\|(F'(\hat{x})+\alpha I)^{-1}-(F'(u)+\alpha I)^{-1}\bigr\|\le K_0\|\hat{x}-u\|.$
(2.1)

Proof Let $T_{\hat{x},u}:=\alpha\bigl((F'(\hat{x})+\alpha I)^{-1}-(F'(u)+\alpha I)^{-1}\bigr)$. Then, by Assumption 2.1(2), we have

$T_{\hat{x},u}v=\alpha(F'(\hat{x})+\alpha I)^{-1}\bigl(F'(u)-F'(\hat{x})\bigr)(F'(u)+\alpha I)^{-1}v=(F'(\hat{x})+\alpha I)^{-1}F'(\hat{x})\phi\bigl(u,\hat{x},\alpha(F'(u)+\alpha I)^{-1}v\bigr),$

so that $\|T_{\hat{x},u}v\|\le K_0\|\hat{x}-u\|\|v\|$ for all $v\in X$. This completes the proof. □

Assumption 2.5 There exists a continuous, strictly monotonically increasing function $\varphi:(0,a]\to(0,\infty)$ with $a\ge\|F'(\hat{x})\|$ satisfying

(1) $\lim_{\lambda\to 0}\varphi(\lambda)=0$;

(2) $\sup_{\lambda\ge 0}\dfrac{\alpha\varphi(\lambda)}{\lambda+\alpha}\le\varphi(\alpha)$ for all $\alpha\in(0,a]$;

(3) there exists $v\in X$ with $\|v\|\le 1$ such that

$\hat{x}-x_0=\varphi\bigl(F'(\hat{x})\bigr)v.$
(2.2)

Next, we assume a condition on the sequence { α k } considered in (1.5).

Assumption 2.6 ([10], Assumption 2.6)

The sequence $\{\alpha_k\}$ of positive real numbers is such that

$1\le\frac{\alpha_k}{\alpha_{k+1}}\le\mu,\qquad \lim_{k\to\infty}\alpha_k=0$
(2.3)

for some constant $\mu>1$.

Note that the condition (2.3) on $\{\alpha_k\}$ is weaker than (1.6) considered by Bakushinskii and Smirnova [9] (see [10]). In fact, if (1.6) is satisfied, then (2.3) also holds with $\mu=p\alpha_0+1$, but the converse need not be true (see [10]). Further, note that there are choices of $\{\alpha_k\}$ for which $\alpha_k/\alpha_{k+1}$ remains bounded while $(\alpha_k-\alpha_{k+1})/(\alpha_k\alpha_{k+1})\to\infty$ as $k\to\infty$. Condition (2) in Assumption 2.1 has been used in the literature for the regularization of many nonlinear ill-posed problems (see [4, 7, 8, 13, 21]).
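To make the comparison concrete: for a geometric sequence $\alpha_k=\alpha_0q^k$ with $0<q<1$ (an illustrative choice), condition (2.3) holds with $\mu=1/q$, while the quotient in (1.6) equals $(1-q)/(\alpha_0q^{k+1})$ and grows without bound, so (1.6) fails for every fixed $p$:

```python
alpha0, q = 1.0, 0.5
alpha = [alpha0 * q ** k for k in range(40)]

# Condition (2.3): 1 <= alpha_k / alpha_{k+1} <= mu, here with mu = 1/q = 2.
ratios = [alpha[k] / alpha[k + 1] for k in range(39)]
print(max(ratios))  # 2.0

# Quotient appearing in (1.6): (alpha_k - alpha_{k+1}) / (alpha_k * alpha_{k+1}).
quotients = [(alpha[k] - alpha[k + 1]) / (alpha[k] * alpha[k + 1])
             for k in range(39)]
print(quotients[0], quotients[-1])  # 1.0 and 2**38: unbounded, so (1.6) fails
```

Thus geometric sequences, the standard practical choice, are admissible under (2.3) but not under (1.6).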

3 Stopping rule

Let $c_0>4$ and choose $k_\delta$ to be the first non-negative integer such that $x_{k_\delta}^\delta$ in (1.5) is defined for each $k\in\{0,1,2,\ldots,k_\delta\}$ and

$\alpha_{k_\delta}\bigl\|(A_{k_\delta}^\delta+\alpha_{k_\delta}I)^{-1}\bigl(F(x_{k_\delta}^\delta)-y^\delta\bigr)\bigr\|\le c_0\delta.$
(3.1)

In the following, we establish the existence of such a $k_\delta$. First, we consider the positive integer $N\in\mathbb{N}$ satisfying

$\alpha_N\le\frac{(c-1)\delta}{\|x_0-\hat{x}\|}<\alpha_k$
(3.2)

for all $k\in\{0,1,\ldots,N-1\}$, where $c>1$ and $\alpha_0>(c-1)\delta/\|x_0-\hat{x}\|$.

The following technical lemma from [10] is used to prove some of the results of this paper.

Lemma 3.1 ([10], Lemma 3.1)

Let $a>0$ and $b\ge 0$ be such that $4ab\le 1$ and $\theta:=\bigl(1-\sqrt{1-4ab}\bigr)/2a$. Let $\theta_1,\ldots,\theta_n$ be non-negative real numbers such that $\theta_{k+1}\le a\theta_k^2+b$ and $\theta_0\le\theta$. Then $\theta_k\le\theta$ for all $k=1,2,\ldots,n$.
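Lemma 3.1 is easy to check numerically: if $4ab\le 1$ and $\theta_0\le\theta$, the worst-case recursion $\theta_{k+1}=a\theta_k^2+b$ never leaves $[0,\theta]$. In the sketch below the values of $a$ and $b$ are arbitrary admissible choices:

```python
a, b = 1.5, 0.15  # 4ab = 0.9 <= 1
theta_star = (1 - (1 - 4 * a * b) ** 0.5) / (2 * a)

theta = 0.9 * theta_star  # any theta_0 <= theta_star is admissible
trajectory = []
for _ in range(100):
    theta = a * theta ** 2 + b  # worst case of theta_{k+1} <= a*theta_k^2 + b
    trajectory.append(theta)

print(max(trajectory) <= theta_star + 1e-12)  # True, as Lemma 3.1 predicts
```

The point of the lemma is exactly this invariance: $\theta$ is the smaller fixed point of the increasing map $t\mapsto at^2+b$, so the interval $[0,\theta]$ is mapped into itself.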

The rest of the results in this paper can be proved along the same lines as the proofs in [10]. In order to make the paper as self-contained as possible, we present the proof of one of them and refer the reader to [10] for the proofs of the rest.

Theorem 3.2 ([10], Theorem 3.2)

Let (1.2), (1.3), (2.3) and Assumption 2.1 be satisfied. Let $N$ be as in (3.2) for some $c>1$ with $6cK_0\|x_0-\hat{x}\|/(c-1)\le 1$. Then $x_k^\delta$ is defined iteratively for each $k\in\{0,1,\ldots,N\}$ and

$\|x_k^\delta-\hat{x}\|\le\frac{2c\|x_0-\hat{x}\|}{c-1}$
(3.3)

for all $k\in\{0,1,\ldots,N\}$. In particular, if $r>2c\|x_0-\hat{x}\|/(c-1)$, then $x_k^\delta\in U(\hat{x},r)$ for $k\in\{0,1,\ldots,N\}$. Moreover,

$\alpha_N\bigl\|(A_N^\delta+\alpha_NI)^{-1}\bigl(F(x_N^\delta)-y^\delta\bigr)\bigr\|\le c_0\delta$
(3.4)

for $c_0:=\frac{7}{3}c+1$.

Proof We show (3.3) by induction. It is obvious that (3.3) holds for $k=0$. Now, assume that (3.3) holds for some $k\in\{0,1,\ldots,N-1\}$. Then it follows from (1.5) that

$x_{k+1}^\delta-\hat{x}=x_k^\delta-\hat{x}-(A_k^\delta+\alpha_kI)^{-1}\bigl[F(x_k^\delta)-y^\delta+\alpha_k(x_k^\delta-x_0)\bigr]$
$=(A_k^\delta+\alpha_kI)^{-1}\bigl((A_k^\delta+\alpha_kI)(x_k^\delta-\hat{x})-\bigl[F(x_k^\delta)-y^\delta+\alpha_k(x_k^\delta-x_0)\bigr]\bigr)$
$=(A_k^\delta+\alpha_kI)^{-1}\bigl[A_k^\delta(x_k^\delta-\hat{x})+y^\delta-F(x_k^\delta)+\alpha_k(x_0-\hat{x})\bigr]$
$=\alpha_k(A_k^\delta+\alpha_kI)^{-1}(x_0-\hat{x})+(A_k^\delta+\alpha_kI)^{-1}(y^\delta-y)+(A_k^\delta+\alpha_kI)^{-1}\bigl[F(\hat{x})-F(x_k^\delta)+A_k^\delta(x_k^\delta-\hat{x})\bigr].$
(3.5)

Using (1.3), the estimates $\|(A_k^\delta+\alpha_kI)^{-1}\|\le 1/\alpha_k$, $\|(A_k^\delta+\alpha_kI)^{-1}A_k^\delta\|\le 1$ and Proposition 2.3, we have

$\bigl\|\alpha_k(A_k^\delta+\alpha_kI)^{-1}(x_0-\hat{x})+(A_k^\delta+\alpha_kI)^{-1}(y^\delta-y)\bigr\|\le\|x_0-\hat{x}\|+\frac{\delta}{\alpha_k}$

and

$\bigl\|(A_k^\delta+\alpha_kI)^{-1}\bigl[F(\hat{x})-F(x_k^\delta)+A_k^\delta(x_k^\delta-\hat{x})\bigr]\bigr\|\le\frac{3K_0}{2}\|x_k^\delta-\hat{x}\|^2.$

Thus we have

$\|x_{k+1}^\delta-\hat{x}\|\le\|x_0-\hat{x}\|+\frac{\delta}{\alpha_k}+\frac{3K_0}{2}\|x_k^\delta-\hat{x}\|^2.$

But, by (3.2), $\delta/\alpha_k\le\|x_0-\hat{x}\|/(c-1)$, and so

$\|x_{k+1}^\delta-\hat{x}\|\le\frac{c\|x_0-\hat{x}\|}{c-1}+\frac{3K_0}{2}\|x_k^\delta-\hat{x}\|^2,$

which leads to the recurrence relation

$\theta_{k+1}\le a\theta_k^2+b,$

where

$\theta_k=\|x_k^\delta-\hat{x}\|,\qquad a=\frac{3K_0}{2},\qquad b=\frac{c\|x_0-\hat{x}\|}{c-1}.$

From the hypothesis of the theorem, we have $4ab=\frac{6cK_0\|x_0-\hat{x}\|}{c-1}\le 1$. It is obvious that

$\theta_0\le\|x_0-\hat{x}\|\le\theta:=\frac{1-\sqrt{1-4ab}}{2a}=\frac{2b}{1+\sqrt{1-4ab}}\le 2b=\frac{2c\|x_0-\hat{x}\|}{c-1}.$

Hence, by Lemma 3.1, we get

$\|x_k^\delta-\hat{x}\|\le\theta\le\frac{2c\|x_0-\hat{x}\|}{c-1}$
(3.6)

for all $k\in\{0,1,\ldots,N\}$. In particular, if $r>2c\|x_0-\hat{x}\|/(c-1)$, then $x_k^\delta\in U(\hat{x},r)$ for all $k\in\{0,1,\ldots,N\}$.

Next, let $\gamma:=\alpha_N\bigl\|(A_N^\delta+\alpha_NI)^{-1}\bigl(F(x_N^\delta)-y^\delta\bigr)\bigr\|$. Then, using the estimates

$\alpha_N\bigl\|(A_N^\delta+\alpha_NI)^{-1}\bigr\|\le 1,\qquad \alpha_N\bigl\|(A_N^\delta+\alpha_NI)^{-1}A_N^\delta\bigr\|\le\alpha_N$

and Proposition 2.3, we have

$\gamma\le\delta+\alpha_N\bigl\|(A_N^\delta+\alpha_NI)^{-1}\bigl(F(x_N^\delta)-y+A_N^\delta(x_N^\delta-\hat{x})-A_N^\delta(x_N^\delta-\hat{x})\bigr)\bigr\|$
$=\delta+\alpha_N\bigl\|(A_N^\delta+\alpha_NI)^{-1}\bigl[F(x_N^\delta)-F(\hat{x})-A_N^\delta(x_N^\delta-\hat{x})+A_N^\delta(x_N^\delta-\hat{x})\bigr]\bigr\|$
$\le\delta+\alpha_N\Bigl[\frac{3K_0\|x_N^\delta-\hat{x}\|^2}{2}+\|x_N^\delta-\hat{x}\|\Bigr]$
$=\delta+\alpha_N\|x_N^\delta-\hat{x}\|\Bigl[1+\frac{3K_0\|x_N^\delta-\hat{x}\|}{2}\Bigr]$
$\le\delta+\frac{2\alpha_Nc\|x_0-\hat{x}\|}{c-1}\Bigl[1+\frac{3K_0c\|x_0-\hat{x}\|}{c-1}\Bigr]$
$\le\delta+2c\delta\Bigl[1+\frac{1}{6}\Bigr]\le\Bigl(\frac{7c}{3}+1\Bigr)\delta.$
(3.7)

Therefore, we have $\alpha_N\bigl\|(A_N^\delta+\alpha_NI)^{-1}\bigl(F(x_N^\delta)-y^\delta\bigr)\bigr\|\le c_0\delta$, where $c_0:=\frac{7}{3}c+1$. This completes the proof. □

4 Error bound for the case of noise-free data

Let

$x_{k+1}=x_k-(A_k+\alpha_kI)^{-1}\bigl[F(x_k)-y+\alpha_k(x_k-x_0)\bigr]$
(4.1)

for all $k\ge 0$, where $A_k:=F'(x_k)$.

We show that each $x_k$ is well defined and belongs to $U(\hat{x},r)$ for $r>2\|x_0-\hat{x}\|$. For this, we make use of the following lemma.

Lemma 4.1 ([10], Lemma 4.1)

Let Assumption 2.1 hold. Suppose that, for some $n\in\mathbb{N}$ and all $k\in\{0,1,\ldots,n\}$, $x_k$ in (4.1) is well defined, and let $\rho_k:=\bigl\|\alpha_k(A_k+\alpha_kI)^{-1}(x_0-\hat{x})\bigr\|$. Then we have

$\rho_k-\frac{3K_0}{2}\|x_k-\hat{x}\|^2\le\|x_{k+1}-\hat{x}\|\le\rho_k+\frac{3K_0}{2}\|x_k-\hat{x}\|^2$
(4.2)

for all $k\in\{0,1,\ldots,n\}$.

Theorem 4.2 ([10], Theorem 4.2)

Let Assumption 2.1 hold. If $6K_0\|x_0-\hat{x}\|\le 1$ and $r>2\|x_0-\hat{x}\|$, then, for all $k\in\mathbb{N}$, the iterates $x_k$ in (4.1) are well defined and

$\|x_k-\hat{x}\|\le\frac{2\|x_0-\hat{x}\|}{1+\sqrt{1-6K_0\|x_0-\hat{x}\|}}\le 2\|x_0-\hat{x}\|$
(4.3)

for all $k\in\mathbb{N}$.

Lemma 4.3 ([10], Lemma 4.3)

Let Assumptions 2.1 and 2.6 hold, let $r>2\|x_0-\hat{x}\|$ and set $A:=F'(\hat{x})$. Assume that $\|A\|\le\eta\alpha_0$ and $4\mu(1+\eta^{-1})K_0\|x_0-\hat{x}\|\le 1$ for some $\eta$ with $0<\eta<1$. Then, for all $k\in\mathbb{N}$, we have

$\frac{1}{(1+\eta)\mu}\|x_k-\hat{x}\|\le\bigl\|\alpha_k(A_k+\alpha_kI)^{-1}(x_0-\hat{x})\bigr\|\le\frac{1}{1-\eta}\|x_k-\hat{x}\|$
(4.4)

and

$\frac{1-\eta}{(1+\eta)\mu}\|x_k-\hat{x}\|\le\|x_{k+1}-\hat{x}\|\le\Bigl(\frac{1}{1-\eta}+\frac{\eta}{(1+\eta)\mu}\Bigr)\|x_k-\hat{x}\|.$
(4.5)

The following corollary follows from Lemma 4.3 by taking $\eta=1/3$. This particular case of Lemma 4.3 is better suited for our later results.

Corollary 4.4 ([10], Corollary 4.4)

Let Assumptions 2.1 and 2.6 hold and let $r>2\|x_0-\hat{x}\|$. Assume that $\|A\|\le\alpha_0/3$ and $16\mu K_0\|x_0-\hat{x}\|\le 1$. Then, for all $k\in\mathbb{N}$, we have

$\frac{3}{4\mu}\|x_k-\hat{x}\|\le\bigl\|\alpha_k(A+\alpha_kI)^{-1}(x_0-\hat{x})\bigr\|\le\frac{3}{2}\|x_k-\hat{x}\|$
(4.6)

and

$\frac{1}{2\mu}\|x_k-\hat{x}\|\le\|x_{k+1}-\hat{x}\|\le 2\|x_k-\hat{x}\|.$

Theorem 4.5 ([10], Theorem 4.5)

Let the assumptions of Lemma 4.3 hold. If $x_0$ is chosen such that $x_0-\hat{x}\in N(F'(\hat{x}))^{\perp}$, then $\lim_{k\to\infty}x_k=\hat{x}$.

Lemma 4.6 ([10], Lemma 4.6)

Let the assumptions of Lemma  4.3 hold for η satisfying

$\Bigl(1-\frac{1-\eta}{(1+\eta)\mu}\Bigr)\bigl[1+(2\mu-1)\eta+2\mu\bigr]+2\eta<\frac{4}{3}.$
(4.7)

Then, for all $k,l\in\mathbb{N}\cup\{0\}$ with $k\ge l$, we have

$\|x_l-\hat{x}\|\le c_\eta\Bigl[\|x_k-\hat{x}\|+\frac{\bigl\|\alpha_l(A+\alpha_lI)^{-1}(F(x_l)-y)\bigr\|}{\alpha_k}\Bigr],$

where

$c_\eta:=(1-b_\eta)^{-1}\max\Bigl\{\mu,\,1+\frac{(3\varepsilon+1)\eta}{4(1-\eta)}\Bigr\},\qquad b_\eta:=\frac{(3\varepsilon+1)\eta}{1-\eta}+\frac{3\varepsilon a}{4},\qquad \varepsilon:=\frac{1-\sqrt{1-a}}{a},\qquad a:=\frac{\eta}{(1+\eta)\mu}.$

Remark 4.7 ([10], Remark 4.7)

It can be seen that (4.7) is satisfied if $\eta\le\frac{1}{3}+\frac{1}{24}$.

Now, if we take $\eta=1/3$, that is, $K_0\|x_0-\hat{x}\|\mu\le 1/16$, in Lemma 4.6, then it takes the following form.

Lemma 4.8 ([10], Lemma 4.8)

Let the assumptions of Lemma 4.3 hold with $\eta=1/3$. Then, for all $k\ge l\ge 0$, we have

$\|x_l-\hat{x}\|\le c_{1/3}\Bigl[\|x_k-\hat{x}\|+\frac{\bigl\|\alpha_l(A+\alpha_lI)^{-1}(F(x_l)-y)\bigr\|}{\alpha_k}\Bigr],$

where

$c_{1/3}=\Bigl[1-\frac{8\mu+(8\mu+1)3\varepsilon}{16\mu}\Bigr]^{-1}\max\Bigl\{\mu,\,1+\frac{3\varepsilon+1}{8}\Bigr\},\qquad \varepsilon:=\frac{\sqrt{4\mu}}{\sqrt{4\mu}+\sqrt{4\mu-1}}.$

5 Error analysis with noisy data

The first result in this section gives an error estimate for $\|x_k^\delta-x_k\|$, where $k=0,1,2,\ldots,N$.

Lemma 5.1 ([10], Lemma 5.1)

Let Assumption 2.1 hold and let $K_0\|x_0-\hat{x}\|\le 1/m$, where $m>(7+\sqrt{73})/2$, and let $N$ be the integer satisfying (3.2) with

$c>\frac{m^2-4m-6}{m^2-7m-6}.$

Then, for all $k\in\{0,1,\ldots,N\}$, we have

$\|x_k^\delta-x_k\|\le\frac{\delta}{(1-\kappa)\alpha_k},$
(5.1)

where

$\kappa:=\frac{1}{m}\Bigl(4+\frac{3c}{c-1}+\frac{6}{m}\Bigr).$

If we take m=8 in Lemma 5.1, then we get the following corollary as a particular case of Lemma 5.1. We make use of it in the following error analysis.

Corollary 5.2 ([10], Corollary 5.2)

Let Assumption 2.1 hold and let $16K_0\|x_0-\hat{x}\|\le 1$. Let $N$ be the integer defined by (3.2) with $c>13$. Then, for all $k\in\{0,1,\ldots,N\}$, we have

$\|x_k^\delta-x_k\|\le\frac{\delta}{(1-\kappa)\alpha_k},$

where

$\kappa:=\frac{31c-19}{32(c-1)}.$

Lemma 5.3 ([10], Lemma 5.3)

Let the assumptions of Lemma 5.1 hold. Then we have

$\alpha_{k_\delta}\bigl\|(A+\alpha_{k_\delta}I)^{-1}\bigl(F(x_{k_\delta})-y\bigr)\bigr\|\le c_1\delta.$

Moreover, if $k_\delta>0$, then, for all $0\le k<k_\delta$, we have

$\alpha_k\bigl\|(A+\alpha_kI)^{-1}\bigl(F(x_k)-y\bigr)\bigr\|\ge c_2\delta,$

where

$c_1=\Bigl(1+\frac{2cK_0\|x_0-\hat{x}\|}{c-1}\Bigr)\Bigl(c_0+\frac{2-\kappa}{1-\kappa}+\frac{3K_0\mu\|x_0-\hat{x}\|}{2(1-\kappa)^2(c-1)}\Bigr),\qquad c_2=\frac{c_0-\frac{2-\kappa}{1-\kappa}-\frac{3K_0\|x_0-\hat{x}\|}{2(1-\kappa)^2(c-1)}}{1+\frac{2cK_0\|x_0-\hat{x}\|}{c-1}}$

with $c_0=\frac{7}{3}c+1$ and $\kappa$ as in Lemma 5.1.

Theorem 5.4 ([10], Theorem 5.4)

Let Assumptions 2.1 and 2.6 hold. If $16K_0\mu\|x_0-\hat{x}\|\le 1$ and the integer $k_\delta$ is chosen according to the stopping rule (3.1) with $c_0>\frac{94}{3}$, then we have

$\|x_{k_\delta}^\delta-\hat{x}\|\le\xi\inf\Bigl\{\|x_k-\hat{x}\|+\frac{\delta}{\alpha_k}:k\ge 0\Bigr\},$
(5.2)

where $\xi=\max\bigl\{2\mu\varrho,\,c_{1/3}c_1+\frac{1}{1-\kappa},\,c\bigr\}$ and $\varrho:=1+\frac{\mu(1+3K_0\|x_0-\hat{x}\|)}{c_2(1-\kappa)}$, with $c_{1/3}$ and $\kappa$ as in Lemma 4.8 and Corollary 5.2, respectively, and $c_1$, $c_2$ as in Lemma 5.3.

6 Order optimal result with an a posteriori stopping rule

In this section, we show the convergence $x_{k_\delta}^\delta\to\hat{x}$ as $\delta\to 0$ and also give an order optimal error estimate for $\|x_{k_\delta}^\delta-\hat{x}\|$.

Theorem 6.1 ([10], Theorem 6.1)

Let the assumptions of Theorem 5.4 hold and let $k_\delta$ be the integer chosen by (3.1). If $x_0$ is chosen such that $x_0-\hat{x}\in N(F'(\hat{x}))^{\perp}$, then we have $\lim_{\delta\to 0}x_{k_\delta}^\delta=\hat{x}$. Moreover, if Assumption 2.5 is satisfied, then we have

$\|x_{k_\delta}^\delta-\hat{x}\|\le\xi'\mu\,\psi^{-1}(\delta),$

where $\xi':=8\mu\xi/3$ with $\xi$ as in Theorem 5.4 and $\psi:(0,\varphi(a)]\to(0,a\varphi(a)]$ is defined by $\psi(\lambda):=\lambda\varphi^{-1}(\lambda)$, $\lambda\in(0,\varphi(a)]$.

Proof From (4.6) and (5.2), we get

$\|x_{k_\delta}^\delta-\hat{x}\|\le\tilde{\xi}\inf\Bigl\{\bigl\|\alpha_k(A+\alpha_kI)^{-1}(x_0-\hat{x})\bigr\|+\frac{\delta}{\alpha_k}:k=0,1,\ldots\Bigr\},$
(6.1)

where $\tilde{\xi}:=\frac{4\mu}{3}\xi$ with $\xi$ as in Theorem 5.4. Now, we choose an integer $m_\delta$ such that $m_\delta=\max\{k:\alpha_k\ge\sqrt{\delta}\}$. Then, by (6.1),

$\|x_{k_\delta}^\delta-\hat{x}\|\le\tilde{\xi}\Bigl\{\bigl\|\alpha_{m_\delta}(A+\alpha_{m_\delta}I)^{-1}(x_0-\hat{x})\bigr\|+\frac{\delta}{\alpha_{m_\delta}}\Bigr\}.$
(6.2)

Note that $\frac{\delta}{\alpha_{m_\delta}}\le\sqrt{\delta}$, so $\frac{\delta}{\alpha_{m_\delta}}\to 0$ as $\delta\to 0$. Therefore, by (6.2), to show that $x_{k_\delta}^\delta\to\hat{x}$ as $\delta\to 0$, it is enough to prove that $\|\alpha_{m_\delta}(A+\alpha_{m_\delta}I)^{-1}(x_0-\hat{x})\|\to 0$ as $\delta\to 0$. Observe that, for $w\in R(F'(\hat{x}))$, i.e., $w=F'(\hat{x})u$ for some $u\in D(F)$, we have $\|\alpha_{m_\delta}(A+\alpha_{m_\delta}I)^{-1}w\|\le\alpha_{m_\delta}\|u\|\to 0$ as $\delta\to 0$. Now, since $R(F'(\hat{x}))$ is a dense subset of $N(F'(\hat{x}))^{\perp}$, it follows that $\|\alpha_{m_\delta}(A+\alpha_{m_\delta}I)^{-1}(x_0-\hat{x})\|\to 0$ as $\delta\to 0$. Using Assumption 2.5, we get

$\bigl\|\alpha_k(A+\alpha_kI)^{-1}(x_0-\hat{x})\bigr\|\le\varphi(\alpha_k).$
(6.3)

So, by (6.1) and (6.3), we obtain

$\|x_{k_\delta}^\delta-\hat{x}\|\le\tilde{\xi}\inf\Bigl\{\varphi(\alpha_k)+\frac{\delta}{\alpha_k}:k=0,1,\ldots\Bigr\}.$
(6.4)

Choose $\hat{k}_\delta$ such that

$\varphi(\alpha_{\hat{k}_\delta})\alpha_{\hat{k}_\delta}\le\delta<\varphi(\alpha_k)\alpha_k\quad\text{for }k=0,1,\ldots,\hat{k}_\delta-1.$
(6.5)

This also implies that

$\psi\bigl(\varphi(\alpha_{\hat{k}_\delta})\bigr)\le\delta<\psi\bigl(\varphi(\alpha_k)\bigr)\quad\text{for }k=0,1,\ldots,\hat{k}_\delta-1.$
(6.6)

From (6.4), $\|x_{k_\delta}^\delta-\hat{x}\|\le\tilde{\xi}\bigl\{\varphi(\alpha_{\hat{k}_\delta})+\frac{\delta}{\alpha_{\hat{k}_\delta}}\bigr\}$. Now, using (6.5) and (6.6), we get $\|x_{k_\delta}^\delta-\hat{x}\|\le 2\tilde{\xi}\frac{\delta}{\alpha_{\hat{k}_\delta}}\le 2\tilde{\xi}\mu\frac{\delta}{\alpha_{\hat{k}_\delta-1}}\le 2\tilde{\xi}\mu\,\psi^{-1}(\delta)=\xi'\mu\,\psi^{-1}(\delta)$. This completes the proof. □
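To see what the rate $\psi^{-1}(\delta)$ of Theorem 6.1 looks like in a concrete case, take the Hölder-type source function $\varphi(\lambda)=\lambda^{\nu}$ with $0<\nu\le 1$ (an illustrative choice not fixed by the paper). Then $\psi(\lambda)=\lambda\varphi^{-1}(\lambda)=\lambda^{1+1/\nu}$, so $\psi^{-1}(\delta)=\delta^{\nu/(\nu+1)}$, the familiar Hölder rate. A numerical inversion confirms this:

```python
nu = 0.5                              # Hoelder exponent in phi(lam) = lam**nu
phi_inv = lambda lam: lam ** (1.0 / nu)
psi = lambda lam: lam * phi_inv(lam)  # psi(lam) = lam**(1 + 1/nu)

def psi_inverse(delta, lo=0.0, hi=1.0):
    # psi is strictly increasing on (0, 1], so bisection applies
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if psi(mid) < delta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

delta = 1e-6
print(psi_inverse(delta), delta ** (nu / (nu + 1)))  # both approximately 0.01
```

For $\nu=1/2$ this gives the error bound $O(\delta^{1/3})$, markedly better than no rate at all, which is the best one can say without a source condition.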

7 Numerical examples

We provide two numerical examples where $K_0<K$.

Example 7.1 Let $X=\mathbb{R}$, $D(F)=\overline{U(0,1)}$, $\hat{x}=0$ and define the function $F$ on $D(F)$ by

$F(x)=e^x-1.$
(7.1)

Then, using (7.1) and Assumptions 2.1(2) and 2.2, we get

$K_0=e-1<K=e.$
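The two constants of Example 7.1 can be estimated numerically (this grid computation is only an illustrative check, not part of the analysis): $K$ is approximated by the largest difference quotient of $F'(x)=e^x$ on $[-1,1]$, and $K_0$ via the one-dimensional form of Assumption 2.1(2), where $\phi=(F'(0)-F'(u))v/F'(u)$, so that $K_0=\sup_{u\ne 0}|1-e^u|/(e^u|u|)$:

```python
import math

dF = lambda x: math.exp(x)  # F'(x) for F(x) = e^x - 1 on [-1, 1]

grid = [-1.0 + 2.0 * i / 4000 for i in range(4001)]

# K: Lipschitz constant of F'; the grid slopes approach sup|F''| = e from below.
K_est = max(abs(dF(grid[i + 1]) - dF(grid[i])) / (grid[i + 1] - grid[i])
            for i in range(4000))

# K_0: central quotient |1 - e^u| / (e^u |u|), maximised at u = -1.
K0_est = max(abs(1.0 - dF(u)) / (dF(u) * abs(u)) for u in grid if u != 0.0)

print(K0_est, K_est)  # approximately e - 1 = 1.718... and e = 2.718...
```

The estimates reproduce $K_0=e-1<K=e$, the gap that the weaker hypothesis of this paper exploits.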

Example 7.2 Let $X=C[0,1]$ (the space of continuous functions defined on $[0,1]$, equipped with the max norm) and $D(F)=\overline{U(0,1)}$. Define the operator $F$ on $D(F)$ by

$F(h)(x)=h(x)-5\int_0^1x\theta\,h(\theta)^3\,d\theta.$
(7.2)

Then the Fréchet derivative is given by

$F'(h)[u](x)=u(x)-15\int_0^1x\theta\,h(\theta)^2u(\theta)\,d\theta$
(7.3)

for all $u\in D(F)$. Using (7.2), (7.3) and Assumptions 2.1(2), 2.2 for $\hat{x}=0$, we get $K_0=7.5<K=15$.

Next, we provide an example where $K/K_0$ can be arbitrarily large.

Example 7.3 Let $X=D(F)=\mathbb{R}$, $\hat{x}=0$ and define the function $F$ on $D(F)$ by

$F(x)=d_0x-d_1\sin 1+d_1\sin e^{d_2x},$
(7.4)

where $d_0$, $d_1$ and $d_2$ are given parameters. Note that $F(\hat{x})=F(0)=0$. Then it can easily be seen that, for $d_2$ sufficiently large and $d_1$ sufficiently small, $K/K_0$ can be arbitrarily large.

We now present two examples where Assumption 2.2 is not satisfied but Assumption 2.1(2) is satisfied.

Example 7.4 Let $X=D(F)=\mathbb{R}$, $\hat{x}=0$ and define the function $F$ on $D(F)$ by

$F(x)=\frac{x^{1+\frac{1}{i}}}{1+\frac{1}{i}}+c_1x-\frac{c_1i}{i+1},$
(7.5)

where $c_1$ is a real parameter and $i>2$ is an integer. Then $F'(x)=x^{1/i}+c_1$ is not Lipschitz on $D(F)$. Hence Assumption 2.2 is not satisfied. However, the central Lipschitz-type condition in Assumption 2.1(2) holds for $K_0=1$. We also have $F(\hat{x})=0$. Indeed, we have

$\bigl|F'(x)-F'(\hat{x})\bigr|=\bigl|x^{1/i}-\hat{x}^{1/i}\bigr|=\frac{|x-\hat{x}|}{\hat{x}^{\frac{i-1}{i}}+\cdots+x^{\frac{i-1}{i}}},$

and so

$\bigl|F'(x)-F'(\hat{x})\bigr|\le K_0|x-\hat{x}|.$

Example 7.5 We consider the integral equation

$u(s)=f(s)+\lambda\int_a^bG(s,t)u(t)^{1+1/n}\,dt,\quad n\in\mathbb{N},$
(7.6)

where $f$ is a given continuous function satisfying $f(s)>0$ for all $s\in[a,b]$, $\lambda$ is a real number and the kernel $G$ is continuous and positive on $[a,b]\times[a,b]$.

For example, when $G(s,t)$ is the Green kernel, the corresponding integral equation is equivalent to the boundary value problem

$u''=\lambda u^{1+1/n},\qquad u(a)=f(a),\qquad u(b)=f(b).$

These types of problems have been considered in [16–20]. Equations of the form (7.6) generalize equations of the form

$u(s)=\int_a^bG(s,t)u(t)^n\,dt,$
(7.7)

which were studied in [16–20]. Instead of (7.6), we can try to solve the equation $F(u)=0$, where

$F:\Omega\subseteq C[a,b]\to C[a,b],\qquad \Omega:=\bigl\{u\in C[a,b]:u(s)\ge 0,\ s\in[a,b]\bigr\}$

and

$F(u)(s)=u(s)-f(s)-\lambda\int_a^bG(s,t)u(t)^{1+1/n}\,dt.$

The norm we consider is the max-norm. The derivative $F'$ is given by

$F'(u)v(s)=v(s)-\lambda\Bigl(1+\frac{1}{n}\Bigr)\int_a^bG(s,t)u(t)^{1/n}v(t)\,dt$

for all $v\in C[a,b]$. First of all, we notice that $F'$ does not satisfy the Lipschitz-type condition of Assumption 2.2 in $\Omega$. Let us consider, for instance, $[a,b]=[0,1]$, $G(s,t)=1$ and $y(t)=0$. Then we have $F'(y)v(s)=v(s)$ and

$\bigl\|F'(x)-F'(y)\bigr\|=|\lambda|\Bigl(1+\frac{1}{n}\Bigr)\int_0^1x(t)^{1/n}\,dt.$

If $F'$ were Lipschitz, then we would have

$\bigl\|F'(x)-F'(y)\bigr\|\le L_1\|x-y\|$

or, equivalently, the inequality

$\int_0^1x(t)^{1/n}\,dt\le L_2\max_{s\in[0,1]}x(s)$
(7.8)

would hold for all $x\in\Omega$ and some constant $L_2$. But this is not true. Consider, for example, the functions

$x_j(t)=\frac{t}{j}$

for all $j\ge 1$ and $t\in[0,1]$. If these are substituted into (7.8), then we have

$\frac{1}{j^{1/n}(1+1/n)}\le\frac{L_2}{j}\quad\Longleftrightarrow\quad j^{1-\frac{1}{n}}\le L_2\Bigl(1+\frac{1}{n}\Bigr)$

for all $j\ge 1$. This inequality is not true when $j\to\infty$. Therefore, Assumption 2.2 is not satisfied in this case. However, Assumption 2.1(2) holds. To show this, suppose that $\hat{x}(t)=f(t)$ and let $\gamma:=\min_{s\in[a,b]}f(s)>0$. Then, for all $v\in C[a,b]$, we have

$\bigl\|[F'(x)-F'(\hat{x})]v\bigr\|=|\lambda|\Bigl(1+\frac{1}{n}\Bigr)\max_{s\in[a,b]}\Bigl|\int_a^bG(s,t)\bigl(x(t)^{1/n}-f(t)^{1/n}\bigr)v(t)\,dt\Bigr|\le|\lambda|\Bigl(1+\frac{1}{n}\Bigr)\max_{s\in[a,b]}\int_a^bG_n(s,t)\,dt,$

where

$G_n(s,t):=\frac{G(s,t)\,|x(t)-f(t)|}{x(t)^{\frac{n-1}{n}}+x(t)^{\frac{n-2}{n}}f(t)^{\frac{1}{n}}+\cdots+f(t)^{\frac{n-1}{n}}}\,\|v\|.$

Hence it follows that

$\bigl\|[F'(x)-F'(\hat{x})]v\bigr\|\le\frac{|\lambda|(1+1/n)}{\gamma^{(n-1)/n}}\max_{s\in[a,b]}\int_a^bG(s,t)\,dt\,\|x-\hat{x}\|\,\|v\|\le K_0\|x-\hat{x}\|\,\|v\|,$

where $K_0=\frac{|\lambda|(1+1/n)N}{\gamma^{(n-1)/n}}$ and $N=\max_{s\in[a,b]}\int_a^bG(s,t)\,dt$. Then Assumption 2.1(2) holds for sufficiently small $\lambda$.

In the following remarks, we compare our results with the corresponding ones in [10].

Remark 7.6 Note that the results in [10] were shown using Assumption 2.2, whereas we have used the weaker Assumption 2.1(2) in this paper. Next, our Proposition 2.3 was shown with $3K_0$ replacing $K$. Therefore, if $3K_0<K$ (see Example 7.3), then our result is tighter. Proposition 2.4 was shown with $K_0$ replacing $K$; hence, if $K_0<K$, our result is tighter. Theorem 3.2 was shown with $6K_0$ replacing $2K$; hence, if $3K_0<K$, our result is tighter. Similar observations in our favor can be made for Lemma 4.1, Theorem 4.2 and the rest of the results in [10].

Remark 7.7 The results obtained here can also be realized for operators $F$ satisfying an autonomous differential equation of the form

$F'(x)=P\bigl(F(x)\bigr),$

where $P:X\to X$ is a known continuous operator. Since $F'(\hat{x})=P(F(\hat{x}))=P(0)$, we can compute $K_0$ in Assumption 2.1(2) without actually knowing $\hat{x}$. Returning to Example 7.1, we see that we can set $P(x)=x+1$.

References

1. Binder A, Engl HW, Groetsch CW, Neubauer A, Scherzer O: Weakly closed nonlinear operators and parameter identification in parabolic equations by Tikhonov regularization. Appl. Anal. 1994, 55: 215-235. 10.1080/00036819408840301
2. Engl HW, Hanke M, Neubauer A: Regularization of Inverse Problems. Kluwer, Dordrecht; 1996.
3. Engl HW, Kunisch K, Neubauer A: Convergence rates for Tikhonov regularization of nonlinear ill-posed problems. Inverse Probl. 1989, 5: 523-540. 10.1088/0266-5611/5/4/007
4. Jin Q, Hou ZY: On the choice of the regularization parameter for ordinary and iterated Tikhonov regularization of nonlinear ill-posed problems. Inverse Probl. 1997, 13: 815-827. 10.1088/0266-5611/13/3/016
5. Jin Q, Hou ZY: On an a posteriori parameter choice strategy for Tikhonov regularization of nonlinear ill-posed problems. Numer. Math. 1999, 83: 139-159.
6. Scherzer O, Engl HW, Kunisch K: Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems. SIAM J. Numer. Anal. 1993, 30: 1796-1838. 10.1137/0730091
7. Tautenhahn U: Lavrentiev regularization of nonlinear ill-posed problems. Vietnam J. Math. 2004, 32: 29-41.
8. Tautenhahn U: On the method of Lavrentiev regularization for nonlinear ill-posed problems. Inverse Probl. 2002, 18: 191-207. 10.1088/0266-5611/18/1/313
9. Bakushinskii A, Smirnova A: Iterative regularization and generalized discrepancy principle for monotone operator equations. Numer. Funct. Anal. Optim. 2007, 28: 13-25. 10.1080/01630560701190315
10. Mahale P, Nair MT: Iterated Lavrentiev regularization for nonlinear ill-posed problems. ANZIAM J. 2009, 51: 191-217. 10.1017/S1446181109000418
11. Bakushinskii A, Smirnova A: On application of generalized discrepancy principle to iterative methods for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 2005, 26: 35-48. 10.1081/NFA-200051631
12. Bakushinskii A, Smirnova A: A posteriori stopping rule for regularized fixed point iterations. Nonlinear Anal. 2006, 64: 1255-1261. 10.1016/j.na.2005.06.031
13. Jin Q: On the iteratively regularized Gauss-Newton method for solving nonlinear ill-posed problems. Math. Comput. 2000, 69: 1603-1623. 10.1090/S0025-5718-00-01199-6
14. Mahale P, Nair MT: General source conditions for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 2007, 28: 111-126. 10.1080/01630560701189929
15. Semenova EV: Lavrentiev regularization and balancing principle for solving ill-posed problems with monotone operators. Comput. Methods Appl. Math. 2010, 4: 444-454.
16. Argyros IK: Convergence and Application of Newton-Type Iterations. Springer, New York; 2008.
17. Argyros IK: Approximating solutions of equations using Newton's method with a modified Newton's method iterate as a starting point. Rev. Anal. Numér. Théor. Approx. 2007, 36: 123-138.
18. Argyros IK: A semilocal convergence analysis for directional Newton methods. Math. Comput. 2011, 80: 327-343.
19. Argyros IK, Hilout S: Weaker conditions for the convergence of Newton's method. J. Complex. 2012, 28: 364-387. 10.1016/j.jco.2011.12.003
20. Argyros IK, Cho YJ, Hilout S: Numerical Methods for Equations and Its Applications. CRC Press, New York; 2012.
21. Tautenhahn U, Jin Q: Tikhonov regularization and a posteriori rules for solving nonlinear ill-posed problems. Inverse Probl. 2003, 19: 1-21. 10.1088/0266-5611/19/1/301


Acknowledgements

Dedicated to Professor Hari M Srivastava.

This paper was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant Number: 2012-0008170).

Author information

Correspondence to Yeol Je Cho.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Argyros, I.K., Cho, Y.J. & George, S. Expanding the applicability of Lavrentiev regularization methods for ill-posed problems. Bound Value Probl 2013, 114 (2013). https://doi.org/10.1186/1687-2770-2013-114
