
# Expanding the applicability of Lavrentiev regularization methods for ill-posed problems

Ioannis K Argyros1, Yeol Je Cho2* and Santhosh George3

Author Affiliations

1 Department of Mathematical Sciences, Cameron University, Lawton, OK, 73505, USA

2 Department of Mathematics Education and the RINS, Gyeongsang National University, Jinju, 660-701, Korea

3 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Karnataka, 575 025, India


Boundary Value Problems 2013, 2013:114  doi:10.1186/1687-2770-2013-114

The electronic version of this article is the complete one and can be found online at: http://www.boundaryvalueproblems.com/content/2013/1/114

Received: 29 January 2013; Accepted: 18 April 2013; Published: 7 May 2013

© 2013 Argyros et al.; licensee Springer

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

In this paper, we are concerned with the problem of approximating a solution of an ill-posed problem in a Hilbert space setting using the Lavrentiev regularization method and, in particular, with expanding the applicability of this method by weakening the popular Lipschitz-type hypotheses considered in earlier studies such as (Bakushinskii and Smirnova in Numer. Funct. Anal. Optim. 26:35-48, 2005; Bakushinskii and Smirnova in Nonlinear Anal. 64:1255-1261, 2006; Bakushinskii and Smirnova in Numer. Funct. Anal. Optim. 28:13-25, 2007; Jin in Math. Comput. 69:1603-1623, 2000; Mahale and Nair in ANZIAM J. 51:191-217, 2009). Numerical examples are given to show that our convergence criteria are weaker, and our error analysis tighter and computationally cheaper, than those of the corresponding works cited above.

MSC: 65F22, 65J15, 65J22, 65M30, 47A52.

##### Keywords:
Lavrentiev regularization method; Hilbert space; ill-posed problems; stopping index; Fréchet-derivative; source function; boundary value problem

### 1 Introduction

In this paper, we are interested in obtaining a stable approximate solution for a nonlinear ill-posed operator equation of the form

$$F(x) = y, \qquad (1.1)$$

where $F : D(F) \subseteq X \to X$ is a monotone operator and $X$ is a Hilbert space. We denote the inner product and the corresponding norm on a Hilbert space by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively. Let $B(x, r)$ stand for the open ball in $X$ with center $x \in X$ and radius $r > 0$. Note that $F$ is a monotone operator if it satisfies the relation

$$\langle F(x) - F(y),\, x - y \rangle \ge 0 \qquad (1.2)$$

for all $x, y \in D(F)$.
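The following one-line consequence of (1.2) (our own added remark) makes explicit why monotonicity matters here: for any $\alpha > 0$, the shifted operator $F + \alpha I$ is strongly monotone, which is what makes regularized equations of Lavrentiev type uniquely solvable:

```latex
\langle (F+\alpha I)(x) - (F+\alpha I)(y),\; x-y \rangle
   = \langle F(x)-F(y),\; x-y \rangle + \alpha\,\|x-y\|^{2}
   \;\ge\; \alpha\,\|x-y\|^{2}
   \qquad \text{for all } x,\, y \in D(F).
```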

We assume, throughout this paper, that $y^\delta$ is the available noisy data with

$$\|y^\delta - y\| \le \delta \qquad (1.3)$$

and that (1.1) has a solution $\hat{x}$. Since (1.1) is ill-posed, its solution need not depend continuously on the data, i.e., small perturbations in the data can cause large deviations in the solution, so regularization methods are used [1-8]. Since $F$ is monotone, Lavrentiev regularization is used to obtain a stable approximate solution of (1.1). In Lavrentiev regularization, the approximate solution is obtained as a solution of the equation

$$F(x) + \alpha(x - x_0) = y^\delta, \qquad (1.4)$$

where $\alpha > 0$ is the regularization parameter and $x_0 \in X$ is an initial guess for the solution $\hat{x}$.
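As a minimal numerical sketch (our own illustration, not taken from the paper), consider the scalar monotone operator $F(x) = x^3 + x$ on $X = \mathbb{R}$, and assume (1.4) has the standard form $F(x) + \alpha(x - x_0) = y^\delta$. The regularized solution can then be computed by bisection, since monotonicity of $F$ plus the $\alpha$-shift makes the left-hand side strictly increasing:

```python
def lavrentiev_solve(F, y_delta, alpha, x0, lo=-10.0, hi=10.0, tol=1e-12):
    """Solve F(x) + alpha*(x - x0) = y_delta by bisection.

    Works because F monotone plus alpha > 0 makes the left-hand side
    strictly increasing, so the regularized equation has a unique root.
    """
    g = lambda x: F(x) + alpha * (x - x0) - y_delta
    assert g(lo) < 0 < g(hi), "root must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

F = lambda x: x**3 + x          # monotone on all of R
x_true = 1.0                    # so the exact data is y = F(1) = 2
delta = 1e-3
y_delta = F(x_true) + delta     # noisy data with |y_delta - y| <= delta
x_reg = lavrentiev_solve(F, y_delta, alpha=1e-2, x0=0.0)
print(abs(x_reg - x_true))      # small regularization error
```

The choice of operator, noise level and $\alpha$ here are illustrative only; the point is that the regularized problem is stably solvable even though inverting $F$ directly may not be.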

In [9], Bakushinskii and Smirnova proposed the iteratively regularized method

$$x_{n+1}^\delta = x_n^\delta - \bigl(F'(x_n^\delta) + \alpha_n I\bigr)^{-1}\bigl(F(x_n^\delta) - y^\delta + \alpha_n (x_n^\delta - x_0)\bigr), \qquad (1.5)$$

where $x_0^\delta := x_0$ and $\{\alpha_n\}$ is a sequence of positive real numbers satisfying $\alpha_n \to 0$ as $n \to \infty$. It is important to stop the iteration at an appropriate step, say $n = n_\delta$, and show that $x_n^\delta$ is well defined for $0 \le n \le n_\delta$ and $x_{n_\delta}^\delta \to \hat{x}$ as $\delta \to 0$ (see [10]).
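A runnable sketch of the iteration (our own illustration, assuming the step has the standard iteratively regularized Lavrentiev form $x_{n+1}^\delta = x_n^\delta - (F'(x_n^\delta) + \alpha_n I)^{-1}(F(x_n^\delta) - y^\delta + \alpha_n(x_n^\delta - x_0))$), applied to the toy monotone operator $F(x) = x^3 + x$:

```python
def irl_iterate(F, dF, y_delta, x0, alphas):
    """Run the (assumed) iteratively regularized Lavrentiev method on X = R.

    Each step solves the linearization of F(x) + alpha_n*(x - x0) = y_delta
    at the current iterate; dF is the derivative of F.
    """
    x = x0
    for alpha_n in alphas:
        x -= (F(x) - y_delta + alpha_n * (x - x0)) / (dF(x) + alpha_n)
    return x

F = lambda x: x**3 + x                       # monotone on R, so dF(x) >= 1 > 0
dF = lambda x: 3 * x**2 + 1
y_delta = 2.0 + 1e-3                         # noisy version of y = F(1) = 2
alphas = [0.5 * 0.7**n for n in range(25)]   # positive, decreasing to 0
x_final = irl_iterate(F, dF, y_delta, x0=0.0, alphas=alphas)
print(x_final)                               # close to the exact solution x = 1
```

The geometric choice of $\{\alpha_n\}$ is ours; any positive sequence decreasing to zero at a controlled rate plays the same role.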

In [9,11,12], Bakushinskii and Smirnova chose the stopping index by requiring it to satisfy

for and . In fact, they showed that as under the following assumptions:

(1) There exists such that for all ;

(2) There exists such that

(1.6)

for all ;

(3) , where

However, no error estimate was given in [9] (see [10]).

In [10], Mahale and Nair, motivated by the work of Qi-Nian Jin [13] on an iteratively regularized Gauss-Newton method, considered an alternative stopping criterion which not only ensures convergence, but also yields an order optimal error estimate under a general source condition on . Moreover, the condition that they imposed on is weaker than (1.6).

In the present paper, we are motivated by [10]. In particular, we expand the applicability of the method (1.5) by weakening one of the major hypotheses in [10] (see Assumption 2.1(2) in the next section).

In Section 2, we consider some basic assumptions required throughout the paper. Section 3 deals with the stopping rule and the result that establishes the existence of the stopping index. In Section 4, we prove results for the iterations based on the exact data and, in Section 5, the error analysis for the noisy data case is proved. The main order optimal result using the a posteriori stopping rule is provided in Section 6.

### 2 Basic assumptions and some preliminary results

We use the following assumptions to prove the results in this paper.

Assumption 2.1

(1) There exists such that and is Fréchet differentiable.

(2) There exists such that, for all , and , there exists an element, say , satisfying

for all and .

(3) for all .

(4) for all .

The condition (2) in Assumption 2.1 weakens the popular hypotheses given in [10,14] and [15].

Assumption 2.2 There exists a constant such that, for all and , there exists an element denoted by satisfying

Clearly, Assumption 2.2 implies Assumption 2.1(2), but not necessarily vice versa. Note that $K_0 \le K$ holds in general and $K/K_0$ can be arbitrarily large [16-20]. Indeed, there are many classes of operators satisfying Assumption 2.1(2), but not Assumption 2.2 (see the numerical examples at the end of this study). Moreover, if $K_0$ is sufficiently smaller than $K$, which can happen since $K/K_0$ can be arbitrarily large, then the results obtained in this study provide a tighter error analysis than the one in [10].
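The gap between the two constants is easy to observe numerically. In the toy check below (our own construction, not one of the paper's examples), we take $F'(x) = x^p$ on $[0,1]$ with center $x_0 = 0$: the full Lipschitz constant of $F'$ grows like $p$, while the center-Lipschitz constant stays equal to $1$, so the ratio can indeed be made arbitrarily large:

```python
def lipschitz_constants(dF, x0, grid):
    """Estimate on a grid: K  = full Lipschitz constant of dF,
    K0 = center-Lipschitz constant of dF relative to the point x0."""
    K = max(abs(dF(a) - dF(b)) / (a - b)
            for a in grid for b in grid if a > b)
    K0 = max(abs(dF(a) - dF(x0)) / abs(a - x0) for a in grid if a != x0)
    return K, K0

p = 20
dF = lambda x: x ** p                   # derivative of F(x) = x**(p+1)/(p+1)
grid = [i / 200 for i in range(201)]    # D = [0, 1], center x0 = 0
K, K0 = lipschitz_constants(dF, 0.0, grid)
print(K, K0)    # K is close to p, while K0 equals 1 (attained at x = 1)
```

Here $K_0 = \max_{x \in (0,1]} x^{p}/x = 1$, whereas the slope of $x^p$ near $x = 1$ pushes $K$ toward $p$; increasing `p` makes the ratio as large as desired.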

Finally, note that the computation of the constant $K$ is more expensive than the computation of $K_0$.

We need the auxiliary results based on Assumption 2.1.

Proposition 2.3 For any and ,

Proof Using the fundamental theorem of integration, for any , we get

Hence, by Assumption 2.1(2),

Then, by (2), (3) in Assumption 2.1 and the inequality , we obtain in turn

This completes the proof. □

Proposition 2.4 For any and ,

(2.1)

Proof Let for all . Then we have, by Assumption 2.1(2),

for all . This completes the proof. □

Assumption 2.5 There exists a continuous and strictly monotonically increasing function with satisfying

(1) ;

(2) for all ;

(3) there exists with such that

(2.2)

Next, we assume a condition on the sequence considered in (1.5).

Assumption 2.6 ([10], Assumption 2.6)

The sequence of positive real numbers is such that

(2.3)

for a constant .

Note that the condition (2.3) on $\{\alpha_n\}$ is weaker than (1.6) considered by Bakushinskii and Smirnova [9] (see [10]). In fact, if (1.6) is satisfied, then (2.3) is also satisfied with , but the converse need not be true (see [10]). Further, note that for these choices of , is bounded, whereas as . Conditions such as (2) in Assumption 2.1 are used in the literature for the regularization of many nonlinear ill-posed problems (see [4,7,8,13,21]).
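The canonical sequence satisfying a bounded-ratio condition of this kind (our own illustration; the precise form of (2.3) is as in [10]) is the geometric sequence $\alpha_n = \alpha_0 q^n$ with $0 < q < 1$: the consecutive ratio $\alpha_n/\alpha_{n+1} = 1/q$ is constant, hence bounded, while $\alpha_n \to 0$:

```python
alpha0, q = 0.5, 0.7
alphas = [alpha0 * q**n for n in range(30)]          # positive, decreasing to 0
ratios = [alphas[n] / alphas[n + 1] for n in range(len(alphas) - 1)]
# for a geometric sequence every consecutive ratio equals the constant 1/q
print(min(ratios), max(ratios), alphas[-1])
```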

### 3 Stopping rule

Let and choose to be the first non-negative integer such that in (1.5) is defined for each and

(3.1)

In the following, we establish the existence of such a . First, we consider the positive integer satisfying

(3.2)

for all , where and .
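A runnable sketch of an a posteriori stop (our own stand-in: the classical discrepancy test $|F(x_n^\delta) - y^\delta| \le \tau\delta$ replaces the specific rule (3.1), whose displayed form is as in the paper, and the step is the assumed iteratively regularized Lavrentiev form):

```python
def stop_index(F, dF, y_delta, x0, alphas, delta, tau=1.5):
    """Iterate and return (n_delta, x at n_delta): the first index at which
    the discrepancy-type test |F(x_n) - y_delta| <= tau*delta fires.

    The test is a classical stand-in for rule (3.1), not the paper's rule.
    """
    x = x0
    for n, alpha_n in enumerate(alphas):
        if abs(F(x) - y_delta) <= tau * delta:
            return n, x
        x -= (F(x) - y_delta + alpha_n * (x - x0)) / (dF(x) + alpha_n)
    return len(alphas), x

F = lambda x: x**3 + x          # toy monotone operator on R
dF = lambda x: 3 * x**2 + 1
delta = 1e-3
n_d, x_nd = stop_index(F, dF, 2.0 + delta, 0.0,
                       [0.5 * 0.7**n for n in range(50)], delta)
print(n_d, abs(x_nd - 1.0))     # stops after finitely many steps, near x = 1
```

The run illustrates the two facts the section is after: the stopping index exists (the loop fires after finitely many steps) and the stopped iterate is close to the exact solution.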

The following technical lemma from [10] is used to prove some of the results of this paper.

Lemma 3.1 ([10], Lemma 3.1)

Let and be such that and . Let be non-negative real numbers such that and . Then for all .

The rest of the results in this paper can be proved along the same lines as the proofs in [10]. To make the paper as self-contained as possible, we present the proof of one of them and refer the reader to [10] for the proofs of the rest.

Theorem 3.2 ([10], Theorem 3.2)

Let (1.2), (1.3), (2.3) and Assumption 2.1 be satisfied. Let N be as in (3.2) for some and . Then is defined iteratively for each and

(3.3)

for all . In particular, if , then for . Moreover,

(3.4)

for .

Proof We show (3.3) by induction. It is obvious that (3.3) holds for . Now, assume that (3.3) holds for some . Then it follows from (1.5) that

(3.5)

Using (1.3), the estimates , and Proposition 2.3, we have

and

Thus we have

But, by (3.2), and so

which leads to the recurrence relation

where

From the hypothesis of the theorem, we have . It is obvious that

Hence, by Lemma 3.1, we get

(3.6)

for all . In particular, if , then we have for all .

Next, let . Then, using the estimates

and Proposition 2.3, we have

(3.7)

Therefore, we have , where . This completes the proof. □

### 4 Error bound for the case of noise-free data

Let

(4.1)

for all .

We show that each is well defined and belongs to for . For this, we make use of the following lemma.

Lemma 4.1 ([10], Lemma 4.1)

Let Assumption 2.1 hold. Suppose that, for all , in (4.1) is well defined and for some . Then we have

(4.2)

for all .

Theorem 4.2 ([10], Theorem 4.2)

Let Assumption 2.1 hold. If and , then, for all , the iterates in (4.1) are well defined and

(4.3)

for all .

Lemma 4.3 ([10], Lemma 4.3)

Let Assumptions 2.1 and 2.6 hold and let . Assume that and for some η with . Then, for all , we have

(4.4)

and

(4.5)

The following corollary follows from Lemma 4.3 by taking . This particular case of Lemma 4.3 is better suited for our later results.

Corollary 4.4 ([10], Corollary 4.4)

Let Assumptions 2.1 and 2.6 hold and let . Assume that and . Then, for all , we have

(4.6)

and

Theorem 4.5 ([10], Theorem 4.5)

Let the assumptions of Lemma 4.3 hold. If is chosen such that , then .

Lemma 4.6 ([10], Lemma 4.6)

Let the assumptions of Lemma 4.3 hold for η satisfying

(4.7)

Then, for all with , we have

where

Remark 4.7 ([10], Remark 4.7)

It can be seen that (4.7) is satisfied if .

Now, if we take , that is, in Lemma 4.6, then it takes the following form.

Lemma 4.8 ([10], Lemma 4.8)

Let the assumptions of Lemma 4.3 hold with . Then, for all , we have

where

### 5 Error analysis with noisy data

The first result in this section gives an error estimate for under Assumption 2.5, where .

Lemma 5.1 ([10], Lemma 5.1)

Let Assumption 2.1 hold and let , where , and N be the integer satisfying (3.2) with

Then, for all , we have

(5.1)

where

If we take in Lemma 5.1, then we get the following corollary as a particular case of Lemma 5.1. We make use of it in the following error analysis.

Corollary 5.2 ([10], Corollary 5.2)

Let Assumption 2.1 hold and let . Let N be the integer defined by (3.2) with . Then, for all , we have

where

Lemma 5.3 ([10], Lemma 5.3)

Let the assumptions of Lemma 5.1 hold. Then we have

Moreover, if , then, for all , we have

where

with and κ as in Lemma 5.1.

Theorem 5.4 ([10], Theorem 5.4)

Let Assumptions 2.1 and 2.6 hold. If and the integer is chosen according to stopping rule (3.1) with , then we have

(5.2)

where , with and κ as in Lemma 4.8 and Corollary 5.2, respectively, and , as in Lemma 5.3.

### 6 Order optimal result with an a posteriori stopping rule

In this section, we show the convergence as and also give an optimal error estimate for .

Theorem 6.1 ([10], Theorem 6.1)

Let the assumptions of Theorem 5.4 hold and let be the integer chosen by (3.1). If is chosen such that , then we have . Moreover, if Assumption 2.5 is satisfied, then we have

where with ξ as in Theorem 5.4 and is defined as , .

Proof From (4.6) and (5.2), we get

(6.1)

where . Now, we choose an integer such that . Then we have

(6.2)

Note that , so as . Therefore, by (6.2), to show that as , it is enough to prove that as . Observe that, for , i.e., for some , we have as . Now, since is a dense subset of , it follows that as . Using Assumption 2.5, we get that

(6.3)

So, by (6.2) and (6.3), we obtain that

(6.4)

Choose such that

(6.5)

This also implies that

(6.6)

From (6.4), . Now, using (6.5) and (6.6), we get . This completes the proof. □

### 7 Numerical examples

We provide two numerical examples, where .

Example 7.1 Let , , and define a function F on by

(7.1)

Then, using (7.1) and Assumptions 2.1(2) and 2.2, we get

Example 7.2 Let (: the space of continuous functions defined on equipped with the max norm) and . Define an operator F on by

(7.2)

Then the Fréchet-derivative is given by

(7.3)

for all . Using (7.2), (7.3), Assumptions 2.1(2), 2.2 for , we get .

Next, we provide an example where can be arbitrarily large.

Example 7.3 Let , and define a function F on by

(7.4)

where , and are the given parameters. Note that . Then it can easily be seen that, for sufficiently large and sufficiently small, can be arbitrarily large.

We now present two examples where Assumption 2.2 is not satisfied, but Assumption 2.1(2) is satisfied.

Example 7.4 Let , and define a function F on D by

(7.5)

where is a real parameter and is an integer. Then is not Lipschitz on D. Hence Assumption 2.2 is not satisfied. However, the central Lipschitz condition in Assumption 2.1(2) holds for . We also have that . Indeed, we have

and so

Example 7.5 We consider the integral equation

(7.6)

for all , where f is a given continuous function satisfying for all , λ is a real number and the kernel G is continuous and positive in .

For example, when is the Green kernel, the corresponding integral equation is equivalent to the boundary value problem

These types of problems have been considered in [16-20]. Equations of the form (7.6) generalize equations of the form

(7.7)

which was studied in [16-20]. Instead of (7.6), we can try to solve the equation , where

and

The norm we consider is the max-norm. The derivative is given by

for all . First of all, we notice that does not satisfy the Lipschitz-type condition in Ω. Let us consider, for instance, , and . Then we have and

If were a Lipschitz function, then we would have

or, equivalently, the inequality

(7.8)

would hold for all and for a constant . But this is not true. Consider, for example, the function

for all and . If these are substituted into (7.8), then we have

for all . This inequality is not true when . Therefore, Assumption 2.2 is not satisfied in this case. However, Assumption 2.1(2) holds. To show this, suppose that and . Then, for all , we have

where . Hence it follows that

where and . Then Assumption 2.1(2) holds for sufficiently small λ.

In the following remarks, we compare our results with the corresponding ones in [10].

Remark 7.6 Note that the results in [10] were shown using Assumption 2.2, whereas we used the weaker Assumption 2.1(2) in this paper. Next, our result, Proposition 2.3, was shown with $K_0$ replacing $K$. Therefore, if $K_0 < K$ (see Example 7.3), then our result is tighter. Proposition 2.4 was shown with $K_0$ replacing $K$, so again, if $K_0 < K$, then our result is tighter. Theorem 3.2 was shown with $2K_0$ replacing $2K$. Hence, if $K_0 < K$, our result is tighter. Similar observations, favorable to us, can be made for Lemma 4.1, Theorem 4.2 and the rest of the results in [10].

Remark 7.7 The results obtained here can also be realized for operators F satisfying an autonomous differential equation of the form

$$F'(x) = P(F(x)),$$

where $P$ is a known continuous operator. Since $F'(\hat{x}) = P(F(\hat{x})) = P(y)$, we can compute $F'(\hat{x})$ in Assumption 2.1(2) without actually knowing $\hat{x}$. Returning to Example 7.1, we see that we can set .

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors read and approved the final manuscript.

### Acknowledgements

Dedicated to Professor Hari M Srivastava.

This paper was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant Number: 2012-0008170).

### References

1. Binder, A, Engl, HW, Groetsch, CW, Neubauer, A, Scherzer, O: Weakly closed nonlinear operators and parameter identification in parabolic equations by Tikhonov regularization. Appl. Anal. 55, 215–235 (1994)

2. Engl, HW, Hanke, M, Neubauer, A: Regularization of Inverse Problems. Kluwer, Dordrecht (1996)

3. Engl, HW, Kunisch, K, Neubauer, A: Convergence rates for Tikhonov regularization of nonlinear ill-posed problems. Inverse Probl. 5, 523–540 (1989)

4. Jin, Q, Hou, ZY: On the choice of the regularization parameter for ordinary and iterated Tikhonov regularization of nonlinear ill-posed problems. Inverse Probl. 13, 815–827 (1997)

5. Jin, Q, Hou, ZY: On an a posteriori parameter choice strategy for Tikhonov regularization of nonlinear ill-posed problems. Numer. Math. 83, 139–159 (1999)

6. Scherzer, O, Engl, HW, Kunisch, K: Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems. SIAM J. Numer. Anal. 30, 1796–1838 (1993)

7. Tautenhahn, U: Lavrentiev regularization of nonlinear ill-posed problems. Vietnam J. Math. 32, 29–41 (2004)

8. Tautenhahn, U: On the method of Lavrentiev regularization for nonlinear ill-posed problems. Inverse Probl. 18, 191–207 (2002)

9. Bakushinskii, A, Smirnova, A: Iterative regularization and generalized discrepancy principle for monotone operator equations. Numer. Funct. Anal. Optim. 28, 13–25 (2007)

10. Mahale, P, Nair, MT: Iterated Lavrentiev regularization for nonlinear ill-posed problems. ANZIAM J. 51, 191–217 (2009)

11. Bakushinskii, A, Smirnova, A: On application of generalized discrepancy principle to iterative methods for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 26, 35–48 (2005)

12. Bakushinskii, A, Smirnova, A: A posteriori stopping rule for regularized fixed point iterations. Nonlinear Anal. 64, 1255–1261 (2006)

13. Jin, Q: On the iteratively regularized Gauss-Newton method for solving nonlinear ill-posed problems. Math. Comput. 69, 1603–1623 (2000)

14. Mahale, P, Nair, MT: General source conditions for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 28, 111–126 (2007)

15. Semenova, EV: Lavrentiev regularization and balancing principle for solving ill-posed problems with monotone operators. Comput. Methods Appl. Math. 10, 444–454 (2010)

16. Argyros, IK: Convergence and Applications of Newton-Type Iterations. Springer, New York (2008)

17. Argyros, IK: Approximating solutions of equations using Newton’s method with a modified Newton’s method iterate as a starting point. Rev. Anal. Numér. Théor. Approx. 36, 123–138 (2007)

18. Argyros, IK: A semilocal convergence analysis for directional Newton methods. Math. Comput. 80, 327–343 (2011)

19. Argyros, IK, Hilout, S: Weaker conditions for the convergence of Newton’s method. J. Complex. 28, 364–387 (2012)

20. Argyros, IK, Cho, YJ, Hilout, S: Numerical Methods for Equations and Its Applications. CRC Press, New York (2012)

21. Tautenhahn, U, Jin, Q: Tikhonov regularization and a posteriori rule for solving nonlinear ill-posed problems. Inverse Probl. 19, 1–21 (2003)