In this paper, we are concerned with the problem of approximating a solution of an ill-posed problem in a Hilbert space setting using the Lavrentiev regularization method and, in particular, with expanding the applicability of this method by weakening the Lipschitz-type hypotheses considered in earlier studies such as (Bakushinskii and Smirnova in Numer. Funct. Anal. Optim. 26:35-48, 2005; Bakushinskii and Smirnova in Nonlinear Anal. 64:1255-1261, 2006; Bakushinskii and Smirnova in Numer. Funct. Anal. Optim. 28:13-25, 2007; Jin in Math. Comput. 69:1603-1623, 2000; Mahale and Nair in ANZIAM J. 51:191-217, 2009). Numerical examples are given to show that our convergence criteria are weaker, and our error analysis tighter and obtained at lower computational cost, than those of the works just cited.
MSC: 65F22, 65J15, 65J22, 65M30, 47A52.
Keywords: Lavrentiev regularization method; Hilbert space; ill-posed problems; stopping index; Fréchet-derivative; source function; boundary value problem
In this paper, we are interested in obtaining a stable approximate solution for a nonlinear ill-posed operator equation of the form
$$F(x) = y, \qquad (1.1)$$
where $F : D(F) \subseteq X \to X$ is a monotone operator and $X$ is a Hilbert space. We denote the inner product and the corresponding norm on a Hilbert space by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively. Let $B(x_0, r)$ stand for the open ball in $X$ with center $x_0 \in X$ and radius $r > 0$. Note that $F$ is a monotone operator if it satisfies the relation
$$\langle F(x) - F(y),\, x - y \rangle \ge 0 \quad \text{for all } x, y \in D(F),$$
and we assume that (1.1) has a solution $\hat{x}$. Since (1.1) is ill-posed, its solution need not depend continuously on the data; i.e., small perturbations in the data can cause large deviations in the solution. So, regularization methods are used [1-8]. Since $F$ is monotone, Lavrentiev regularization is used to obtain a stable approximate solution of (1.1). In Lavrentiev regularization, the approximate solution is obtained as a solution of the equation
$$F(x) + \alpha (x - x_0) = y^\delta,$$
where $y^\delta$ denotes the available noisy data and $\alpha > 0$ is the regularization parameter.
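In a scalar toy setting this regularization can be sketched as follows. The operator $F(x) = x^3$, the starting point, and the coupling $\alpha = \delta$ are our own illustrative assumptions, not the paper's operator or parameter choice:

```python
# An illustrative scalar sketch of Lavrentiev regularization (our own toy
# example): F(x) = x**3 is monotone on the reals, and we solve the
# regularized equation F(x) + alpha*(x - x0) = y_delta.

def F(x):
    return x ** 3  # monotone: (F(a) - F(b)) * (a - b) >= 0

def lavrentiev_solve(y_delta, alpha, x0=0.0, tol=1e-12):
    """Solve F(x) + alpha*(x - x0) = y_delta by bisection; the left-hand
    side is strictly increasing in x, so the solution is unique."""
    g = lambda x: F(x) + alpha * (x - x0) - y_delta
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

# The exact solution of F(x) = 1 is x_hat = 1.  With noisy data
# y_delta = 1 - delta and the (assumed) coupling alpha = delta, the
# regularized solutions approach x_hat as the noise level shrinks.
errors = [abs(lavrentiev_solve(1.0 - d, d) - 1.0) for d in (1e-1, 1e-2, 1e-3)]
print(errors)
```

The point of the sketch is only the stability mechanism: the regularized equation remains uniquely solvable for every noise level, and the error decays as the noise and the regularization parameter are driven to zero together.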
Bakushinskii and Smirnova proposed the iterative method (1.5),
where $\{\alpha_n\}$ is a sequence of positive real numbers satisfying $\alpha_n \to 0$ as $n \to \infty$. It is important to stop the iteration at an appropriate step, say $n = N$, and to show that the iterates are well defined for $n \le N$ and converge to a solution of (1.1) as the noise level $\delta \to 0$.
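A scalar sketch of an iteratively regularized Newton-type scheme of this general shape is given below; the operator, the starting point, and the geometric choice $\alpha_n = \alpha_0 q^n$ are our own assumptions, while the precise method (1.5) and the conditions on $\{\alpha_n\}$ are those of the cited papers:

```python
# Scalar sketch of an iteratively regularized Newton-type scheme of the
# general shape discussed above.  F, x0, and the geometric sequence
# alpha_n = alpha0 * q**n are illustrative assumptions.

def iterate(y_delta, x0=0.5, alpha0=1.0, q=0.5, steps=20):
    F = lambda x: x ** 3
    dF = lambda x: 3.0 * x ** 2       # Frechet-derivative of F
    x, alpha = x0, alpha0
    for _ in range(steps):
        # Newton-type step on the regularized equation
        # F(x) + alpha*(x - x0) = y_delta
        x -= (F(x) - y_delta + alpha * (x - x0)) / (dF(x) + alpha)
        alpha *= q                    # alpha_n -> 0 as n -> infinity
    return x

print(abs(iterate(1.0) - 1.0))  # iterates approach the root of x**3 = 1
```

In practice the loop would be cut off at a data-dependent stopping index rather than a fixed step count; choosing that index is exactly the subject of the stopping rules discussed below.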
Mahale and Nair, motivated by the work of Qi-Nian Jin on an iteratively regularized Gauss-Newton method, considered an alternative stopping criterion which not only ensures convergence, but also yields an order optimal error estimate under a general source condition. Moreover, the condition that they imposed on the sequence $\{\alpha_n\}$ is weaker than (1.6).
In the present paper, we are motivated by these works. In particular, we expand the applicability of the method (1.5) by weakening one of their major hypotheses (see Assumption 2.1(2) in the next section).
In Section 2, we state the basic assumptions required throughout the paper. Section 3 deals with the stopping rule and the result establishing the existence of the stopping index. In Section 4, we prove results for the iterations based on exact data, and in Section 5 the error analysis for the noisy data case is carried out. The main order optimal result using the a posteriori stopping rule is provided in Section 6.
2 Basic assumptions and some preliminary results
We use the following assumptions to prove the results in this paper.
Clearly, Assumption 2.2 implies Assumption 2.1(2) with $K_0 = K$, but not necessarily vice versa. Note that $K_0 \le K$ holds in general and $K/K_0$ can be arbitrarily large [16-20]. Indeed, there are many classes of operators satisfying Assumption 2.1(2), but not Assumption 2.2 (see the numerical examples at the end of this study). Moreover, if $K_0$ is sufficiently smaller than $K$, which can happen since $K/K_0$ can be arbitrarily large, then the results obtained in this study provide a tighter error analysis than the earlier ones.
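The gap between the two constants can be seen numerically on a simple toy operator of our own choosing (not one from the cited papers): for $F(x) = e^x$ on $[0,1]$ with center $x_0 = 0$, the full Lipschitz constant of $F'$ is $K = e$, while the center-Lipschitz constant is $K_0 = e - 1 < K$:

```python
# Numeric illustration of K0 <= K: F(x) = exp(x) on [0, 1], center x0 = 0,
# so F'(x) = exp(x).  K bounds |F'(x) - F'(y)| / |x - y| over all pairs,
# while K0 only bounds the centered quotients |F'(x) - F'(x0)| / |x - x0|.
import math

grid = [i / 200 for i in range(1, 201)]
dF = math.exp

K_est = max(abs(dF(x) - dF(y)) / abs(x - y)
            for x in grid for y in grid if x != y)       # approaches e
K0_est = max(abs(dF(x) - dF(0.0)) / x for x in grid)     # equals e - 1 at x = 1

print(K0_est, K_est)
```

Here the ratio $K/K_0$ is modest; the examples at the end of the paper are precisely constructions where the gap can be made arbitrarily large.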
We need the auxiliary results based on Assumption 2.1.
Hence, by Assumption 2.2,
This completes the proof. □
Assumption 2.6 (, Assumption 2.6)
Note that the condition (2.3) on $\{\alpha_n\}$ is weaker than the condition (1.6) considered by Bakushinskii and Smirnova. In fact, if $\{\alpha_n\}$ satisfies (1.6), then it also satisfies (2.3), but the converse need not be true. Further, note that for these choices of $\{\alpha_n\}$ the relevant quantity in (2.3) remains bounded, whereas the corresponding quantity in (1.6) is unbounded. Assumption 2.1(2) is used in the literature for the regularization of many nonlinear ill-posed problems (see [4,7,8,13,21]).
3 Stopping rule
The following technical lemma is used to prove some of the results of this paper.
Lemma 3.1 (, Lemma 3.1)
The rest of the results in this paper can be proved along the same lines as the corresponding proofs in the cited work. To keep the paper as self-contained as possible, we present the proof of one of them and refer the reader to the cited work for the proofs of the rest.
Theorem 3.2 (, Theorem 3.2)
Thus we have
which leads to the recurrence relation
Hence, by Lemma 3.1, we get
and Proposition 2.3, we have
4 Error bound for the case of noise-free data
Lemma 4.1 (, Lemma 4.1)
Theorem 4.2 (, Theorem 4.2)
Lemma 4.3 (, Lemma 4.3)
Corollary 4.4 (, Corollary 4.4)
Theorem 4.5 (, Theorem 4.5)
Lemma 4.6 (, Lemma 4.6)
Let the assumptions of Lemma 4.3 hold for η satisfying
Remark 4.7 (, Remark 4.7)
Lemma 4.8 (, Lemma 4.8)
5 Error analysis with noisy data
Lemma 5.1 (, Lemma 5.1)
Corollary 5.2 (, Corollary 5.2)
Lemma 5.3 (, Lemma 5.3)
Let the assumptions of Lemma 5.1 hold. Then we have
Theorem 5.4 (, Theorem 5.4)
6 Order optimal result with an a posteriori stopping rule
Theorem 6.1 (, Theorem 6.1)
Proof From (4.6) and (5.2), we get
Note that , so as . Therefore by (6.2) to show that as , it is enough to prove that as . Observe that for , i.e., for some , we have as . Now since is a dense subset of , it follows that as . Using Assumption 2.5, we get that
So, by (6.2) and (6.3), we obtain that
This also implies that
7 Numerical examples
Then, using (7.1) and Assumptions 2.1(2) and 2.2, we get
Then the Fréchet-derivative is given by
We now present two examples where Assumption 2.2 is not satisfied, but Assumption 2.1(2) is satisfied.
where is a real parameter and is an integer. Then the Fréchet-derivative is not Lipschitz on D; hence Assumption 2.2 is not satisfied. However, the central Lipschitz condition in Assumption 2.1(2) holds. Indeed, we have
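The mechanism can be checked numerically on a toy analogue of our own construction (not the paper's exact operator): take $f(t) = t^{5/3}$ on $(0,1]$, whose derivative $f'(t) = \tfrac{5}{3} t^{2/3}$ has unbounded difference quotients near $t = 0$, yet satisfies a center-Lipschitz condition at $x_0 = 1$:

```python
# Toy analogue: f'(t) = (5/3) * t**(2/3) is not Lipschitz near t = 0,
# yet its centered difference quotients at x0 = 1 stay bounded (by 5/3).
df = lambda t: (5.0 / 3.0) * t ** (2.0 / 3.0)

# Difference quotients of f' between nearby points h and 2h blow up as h -> 0:
pair_quotients = [abs(df(2 * h) - df(h)) / h for h in (1e-2, 1e-4, 1e-6)]

# Centered quotients at x0 = 1 remain below 5/3:
ts = [i / 1000 for i in range(1, 1000)]
centered = max(abs(df(t) - df(1.0)) / abs(t - 1.0) for t in ts)

print(pair_quotients)  # grows without bound
print(centered)        # stays below 5/3
```

So the full Lipschitz constant of $f'$ does not exist, while the center-Lipschitz constant at $x_0 = 1$ is finite, which is exactly the situation Assumption 2.1(2) is designed to cover.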
Example 7.5 We consider the integral equation
or, equivalently, the inequality
In the following remarks, we compare our results with the corresponding ones in the papers cited above.
Remark 7.6 Note that the results in those papers were shown using Assumption 2.2, whereas we used the weaker Assumption 2.1(2) in this paper. Next, our Proposition 2.3 was shown with $K_0$ replacing $K$; therefore, if $K_0 < K$ (see Example 7.3), then our result is tighter. Proposition 2.4 was likewise shown with a smaller constant replacing $K$, and Theorem 3.2 with a smaller constant replacing $2K$; whenever the new constant is smaller, our result is tighter. Similar observations in our favor hold for Lemma 4.1, Theorem 4.2, and the rest of the results.
Remark 7.7 The results obtained here can also be realized for operators F satisfying an autonomous differential equation of the form $F'(x) = T(F(x))$, where $T$ is a known continuous operator; in that case $F'(\hat{x}) = T(F(\hat{x})) = T(y)$ can be evaluated without knowing the solution $\hat{x}$ itself.
The authors declare that they have no competing interests.
All authors read and approved the final manuscript.
Dedicated to Professor Hari M Srivastava.
This paper was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant Number: 2012-0008170).
Binder, A, Engl, HW, Groetsch, CW, Neubauer, A, Scherzer, O: Weakly closed nonlinear operators and parameter identification in parabolic equations by Tikhonov regularization. Appl. Anal. 55, 215–235 (1994)
Engl, HW, Kunisch, K, Neubauer, A: Convergence rates for Tikhonov regularization of nonlinear ill-posed problems. Inverse Probl. 5, 523–540 (1989)
Jin, Q, Hou, ZY: On the choice of the regularization parameter for ordinary and iterated Tikhonov regularization of nonlinear ill-posed problems. Inverse Probl. 13, 815–827 (1997)
Scherzer, O, Engl, HW, Kunisch, K: Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems. SIAM J. Numer. Anal. 30, 1796–1838 (1993)
Tautenhahn, U: On the method of Lavrentiev regularization for nonlinear ill-posed problems. Inverse Probl. 18, 191–207 (2002)
Bakushinskii, A, Smirnova, A: Iterative regularization and generalized discrepancy principle for monotone operator equations. Numer. Funct. Anal. Optim. 28, 13–25 (2007)
Mahale, P, Nair, MT: Iterated Lavrentiev regularization for nonlinear ill-posed problems. ANZIAM J. 51, 191–217 (2009)
Bakushinskii, A, Smirnova, A: On application of generalized discrepancy principle to iterative methods for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 26, 35–48 (2005)
Bakushinskii, A, Smirnova, A: A posteriori stopping rule for regularized fixed point iterations. Nonlinear Anal. 64, 1255–1261 (2006)
Jin, Q: On the iteratively regularized Gauss-Newton method for solving nonlinear ill-posed problems. Math. Comput. 69, 1603–1623 (2000)
Mahale, P, Nair, MT: General source conditions for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 28, 111–126 (2007)
Argyros, IK, Hilout, S: Weaker conditions for the convergence of Newton's method. J. Complex. 28, 364–387 (2012)
Tautenhahn, U, Jin, Q: Tikhonov regularization and a posteriori rule for solving nonlinear ill-posed problems. Inverse Probl. 19, 1–21 (2003)