Abstract
In this paper, we are concerned with the problem of approximating a solution of an ill-posed problem in a Hilbert space setting using the Lavrentiev regularization method and, in particular, with expanding the applicability of this method by weakening the popular Lipschitz-type hypotheses considered in earlier studies such as (Bakushinskii and Smirnova in Numer. Funct. Anal. Optim. 26:35–48, 2005; Bakushinskii and Smirnova in Nonlinear Anal. 64:1255–1261, 2006; Bakushinskii and Smirnova in Numer. Funct. Anal. Optim. 28:13–25, 2007; Jin in Math. Comput. 69:1603–1623, 2000; Mahale and Nair in ANZIAM J. 51:191–217, 2009). Numerical examples are given to show that our convergence criteria are weaker, our error analysis tighter and the computational cost lower than in the corresponding works cited above.
MSC: 65F22, 65J15, 65J22, 65M30, 47A52.
Keywords:
Lavrentiev regularization method; Hilbert space; ill-posed problems; stopping index; Fréchet derivative; source function; boundary value problem

1 Introduction
In this paper, we are interested in obtaining a stable approximate solution for a nonlinear ill-posed operator equation of the form

F(x) = y, (1.1)

where F : D(F) ⊆ X → X is a monotone operator and X is a Hilbert space. We denote the inner product and the corresponding norm on a Hilbert space by ⟨·, ·⟩ and ‖·‖, respectively. Let B(x, r) stand for the open ball in X with center x and radius r > 0. Note that F is a monotone operator if it satisfies the relation

⟨F(x) − F(y), x − y⟩ ≥ 0 for all x, y ∈ D(F).
We assume, throughout this paper, that y^δ ∈ X is the available noisy data with

‖y − y^δ‖ ≤ δ
and (1.1) has a solution x̂. Since (1.1) is ill-posed, its solution need not depend continuously on the data, i.e., small perturbations in the data can cause large deviations in the solution, so regularization methods are used [1–8]. Since F is monotone, Lavrentiev regularization is used to obtain a stable approximate solution of (1.1). In Lavrentiev regularization, the approximate solution is obtained as a solution of the equation

F(x) + α(x − x₀) = y^δ,

where α > 0 is the regularization parameter and x₀ is an initial guess for the solution x̂.
In [9], Bakushinskii and Smirnova proposed the iterative method

x⁽δ⁾_{n+1} = x⁽δ⁾_n − (F′(x⁽δ⁾_n) + αₙI)⁻¹[F(x⁽δ⁾_n) − y^δ + αₙ(x⁽δ⁾_n − x₀)], x⁽δ⁾_0 := x₀, (1.5)

where {αₙ} is a sequence of positive real numbers satisfying αₙ → 0 as n → ∞. It is important to stop the iteration at an appropriate step, say n = N, and show that x⁽δ⁾_N is well defined for each δ > 0 and x⁽δ⁾_N → x̂ as δ → 0 (see [10]).
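For orientation, an iteration of this type can be sketched numerically. The following is a minimal Python sketch on a toy one-dimensional problem, assuming the standard iteratively regularized Lavrentiev update; the operator F(x) = x³ (monotone on ℝ), the data y = 8, the noise level δ and the sequence αₙ = 2⁻ⁿ are all illustrative choices, not taken from the paper:

```python
def F(x):
    return x ** 3          # a monotone operator on the real line

def Fprime(x):
    return 3 * x ** 2      # its derivative

def lavrentiev_iterates(y_delta, x0, alphas):
    """x_{n+1} = x_n - (F'(x_n) + a_n)^{-1} (F(x_n) - y_delta + a_n (x_n - x0)).

    One-dimensional sketch of the assumed form of iteration (1.5);
    the operator (F'(x_n) + a_n I)^{-1} reduces to a scalar division.
    """
    x, out = x0, [x0]
    for a in alphas:
        x = x - (F(x) - y_delta + a * (x - x0)) / (Fprime(x) + a)
        out.append(x)
    return out

delta = 1e-3
y_delta = 8.0 + delta                   # noisy data: |y - y_delta| <= delta
alphas = [0.5 ** n for n in range(30)]  # positive reals with alpha_n -> 0
xs = lavrentiev_iterates(y_delta, x0=1.0, alphas=alphas)
print(abs(xs[-1] - 2.0))                # close to the exact solution x = 2
```

As αₙ → 0 the iterates approach a solution of F(x) = y^δ, so the final error is of the order of the noise level δ, which is why a stopping index matched to δ is needed.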
In [9,11,12], Bakushinskii and Smirnova chose the stopping index by requiring it to satisfy
In fact, they showed that x⁽δ⁾_N → x̂ as δ → 0 under the following assumptions:
(1) There exists such that for all ;
However, no error estimate was given in [9] (see [10]).
In [10], Mahale and Nair, motivated by the work of Qi-Nian Jin [13] on an iteratively regularized Gauss-Newton method, considered an alternative stopping criterion which not only ensures convergence, but also yields an order optimal error estimate under a general source condition. Moreover, the condition that they imposed on the sequence {αₙ} is weaker than (1.6).
In the present paper, we are motivated by [10]. In particular, we expand the applicability of the method (1.5) by weakening one of the major hypotheses in [10] (see Assumption 2.1(2) in the next section).
In Section 2, we state the basic assumptions required throughout the paper. Section 3 deals with the stopping rule and the result establishing the existence of the stopping index. In Section 4, we prove results for the iterations based on exact data and, in Section 5, we carry out the error analysis for the noisy data case. The main order optimal result using the a posteriori stopping rule is provided in Section 6.
2 Basic assumptions and some preliminary results
We use the following assumptions to prove the results in this paper.
Assumption 2.1
(1) There exists such that and is Fréchet differentiable.
(2) There exists such that, for all , and , there exists an element, say , satisfying
The condition (2) in Assumption 2.1 weakens the popular hypotheses given in [10,14] and [15].
Assumption 2.2 There exists a constant such that, for all and , there exists an element denoted by satisfying
Clearly, Assumption 2.2 implies Assumption 2.1(2) with , but not necessarily vice versa. Note that holds in general and can be arbitrarily large [16–20]. Indeed, there are many classes of operators satisfying Assumption 2.1(2) but not Assumption 2.2 (see the numerical examples at the end of this study). Moreover, if is sufficiently smaller than K, which can happen since can be arbitrarily large, then the results obtained in this study provide a tighter error analysis than the one in [10].
Finally, note that the computation of the constant K is more expensive than the computation of .
We need the auxiliary results based on Assumption 2.1.
Proof Using the fundamental theorem of calculus, for any , we get
Hence, by Assumption 2.2,
Then, by (2), (3) in Assumption 2.1 and the inequality , we obtain in turn
This completes the proof. □
Proof Let for all . Then we have, by Assumption 2.2,
for all . This completes the proof. □
Assumption 2.5 There exists a continuous and strictly monotonically increasing function with satisfying
(3) there exists with such that
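Although the specific source function is left general in Assumption 2.5, it may help to recall the two classes most commonly used with such assumptions in Lavrentiev-type analyses; the following choices of φ are illustrations only, not conditions imposed in this paper:

```latex
\varphi(\lambda) = \lambda^{\nu}, \qquad 0 < \nu \le 1
  \quad \text{(H\"older type)},
\qquad
\varphi(\lambda) = \Bigl(\ln \tfrac{1}{\lambda}\Bigr)^{-p}, \qquad p > 0,\ 0 < \lambda \le \lambda_0 < 1
  \quad \text{(logarithmic type)}.
```

Both functions are continuous, strictly monotonically increasing and vanish as λ → 0⁺, as required by the assumption.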
Next, we assume a condition on the sequence considered in (1.5).
Assumption 2.6 ([10], Assumption 2.6)
The sequence of positive real numbers is such that
Note that the condition (2.3) on {αₙ} is weaker than (1.6) considered by Bakushinskii and Smirnova [9] (see [10]). In fact, if (1.6) is satisfied, then (2.3) is also satisfied with , but the converse need not be true (see [10]). Further, note that, for these choices of , is bounded, whereas as . Condition (2) in Assumption 2.1 has been used in the literature for the regularization of many nonlinear ill-posed problems (see [4,7,8,13,21]).
3 Stopping rule
Let and choose to be the first nonnegative integer such that in (1.5) is defined for each and
In the following, we establish the existence of such a . First, we consider the positive integer satisfying
The following technical lemma from [10] is used to prove some of the results of this paper.
Lemma 3.1 ([10], Lemma 3.1)
Let and be such that and. Let be nonnegative real numbers such that and. Then for all.
The rest of the results in this paper can be proved along the same lines as those of the proof in [10]. In order for us to make the paper as selfcontained as possible, we present the proof of one of them, and for the proof of the rest, we refer the reader to [10].
Theorem 3.2 ([10], Theorem 3.2)
Let (1.2), (1.3), (2.3) and Assumption 2.1 be satisfied. Let N be as in (3.2) for some and. Then is defined iteratively for each and
for all. In particular, if, then for. Moreover,
Proof We show (3.3) by induction. It is obvious that (3.3) holds for . Now, assume that (3.3) holds for some . Then it follows from (1.5) that
Using (1.3), the estimates , and Proposition 2.3, we have
and
Thus we have
which leads to the recurrence relation
where
From the hypothesis of the theorem, we have . It is obvious that
Hence, by Lemma 3.1, we get
for all . In particular, if , then we have for all .
Next, let . Then, using the estimates
and Proposition 2.3, we have
4 Error bound for the case of noise-free data
Let
We show that each is well defined and belongs to for . For this, we make use of the following lemma.
Lemma 4.1 ([10], Lemma 4.1)
Let Assumption 2.1 hold. Suppose that, for all, in (4.1) is well defined and for some. Then we have
Theorem 4.2 ([10], Theorem 4.2)
Let Assumption 2.1 hold. If and, then, for all, the iterates in (4.1) are well defined and
Lemma 4.3 ([10], Lemma 4.3)
Let Assumptions 2.1 and 2.6 hold and let. Assume that and for some η with. Then, for all, we have
and
The following corollary follows from Lemma 4.3 by taking . We show that this particular case of Lemma 4.3 is better suited for our later results.
Corollary 4.4 ([10], Corollary 4.4)
Let Assumptions 2.1 and 2.6 hold and let. Assume that and. Then, for all, we have
and
Theorem 4.5 ([10], Theorem 4.5)
Let the assumptions of Lemma 4.3 hold. If is chosen such that, then.
Lemma 4.6 ([10], Lemma 4.6)
Let the assumptions of Lemma 4.3 hold for η satisfying
where
Remark 4.7 ([10], Remark 4.7)
It can be seen that (4.7) is satisfied if .
Now, if we take , that is, in Lemma 4.6, then it takes the following form.
Lemma 4.8 ([10], Lemma 4.8)
Let the assumptions of Lemma 4.3 hold with. Then, for all, we have
where
5 Error analysis with noisy data
The first result in this section gives an error estimate for under Assumption 2.5, where .
Lemma 5.1 ([10], Lemma 5.1)
Let Assumption 2.1 hold and let, where, and N be the integer satisfying (3.2) with
where
If we take in Lemma 5.1, then we get the following corollary as a particular case of Lemma 5.1. We make use of it in the following error analysis.
Corollary 5.2 ([10], Corollary 5.2)
Let Assumption 2.1 hold and let. Let N be the integer defined by (3.2) with. Then, for all, we have
where
Lemma 5.3 ([10], Lemma 5.3)
Let the assumptions of Lemma 5.1 hold. Then we have
Moreover, if, then, for all, we have
where
Theorem 5.4 ([10], Theorem 5.4)
Let Assumptions 2.1 and 2.6 hold. If and the integer is chosen according to stopping rule (3.1) with, then we have
where, with and κ as in Lemma 4.8 and Corollary 5.2, respectively, and, as in Lemma 5.3.
6 Order optimal result with an a posteriori stopping rule
In this section, we show the convergence as and also give an optimal error estimate for .
Theorem 6.1 ([10], Theorem 6.1)
Let the assumptions of Theorem 5.4 hold and let be the integer chosen by (3.1). If is chosen such that, then we have. Moreover, if Assumption 2.5 is satisfied, then we have
where with ξ as in Theorem 5.4 and is defined as, .
Proof From (4.6) and (5.2), we get
where . Now, we choose an integer such that . Then we have
Note that , so as . Therefore, by (6.2), to show that as , it is enough to prove that as . Observe that, for , i.e., for some , we have as . Now, since is a dense subset of , it follows that as . Using Assumption 2.5, we get
So, by (6.2) and (6.3), we obtain that
This also implies that
From (6.4), . Now, using (6.5) and (6.6), we get . This completes the proof. □
7 Numerical examples
We provide two numerical examples, where .
Example 7.1 Let , , and define a function F on by
Then, using (7.1) and Assumptions 2.1(2) and 2.2, we get
Example 7.2 Let (: the space of continuous functions defined on equipped with the max norm) and . Define an operator F on by
Then the Fréchet derivative is given by
for all . Using (7.2), (7.3) and Assumptions 2.1(2) and 2.2 for , we get .
Next, we provide an example where can be arbitrarily large.
Example 7.3 Let , and define a function F on by
where , and are the given parameters. Note that . Then it can easily be seen that, for sufficiently large and sufficiently small, can be arbitrarily large.
We now present two examples where Assumption 2.2 is not satisfied, but Assumption 2.1(2) is satisfied.
Example 7.4 Let , and define a function F on D by
where is a real parameter and is an integer. Then is not Lipschitz on D. Hence Assumption 2.2 is not satisfied. However, the central Lipschitz condition in Assumption 2.1(2) holds for . We also have that . Indeed, we have
and so
Example 7.5 We consider the integral equation
for all , where f is a given continuous function satisfying for all , λ is a real number and the kernel G is continuous and positive in .
For example, when is the Green kernel, the corresponding integral equation is equivalent to the boundary value problem
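To make the Green-kernel connection concrete, the following Python sketch assumes the standard Green kernel of −u″ = g, u(0) = u(1) = 0 (an assumption for illustration; the paper only requires the kernel to be continuous and positive) and checks numerically that the integral operator reproduces the solution of the boundary value problem for the illustrative right-hand side g(t) = π² sin(πt):

```python
import math

def G(s, t):
    # assumed Green kernel of -u'' = g with u(0) = u(1) = 0
    return s * (1.0 - t) if s <= t else t * (1.0 - s)

def u(s, g, m=2000):
    # u(s) = integral_0^1 G(s, t) g(t) dt via the composite midpoint rule
    h = 1.0 / m
    return h * sum(G(s, (k + 0.5) * h) * g((k + 0.5) * h) for k in range(m))

# for g(t) = pi^2 sin(pi t), the exact BVP solution is u(s) = sin(pi s)
g = lambda t: math.pi ** 2 * math.sin(math.pi * t)
err = max(abs(u(s, g) - math.sin(math.pi * s)) for s in [0.1 * i for i in range(11)])
print(err)  # small discretization error
```

The kernel vanishes at s = 0 and s = 1, so the boundary conditions are satisfied exactly; the remaining error is the quadrature error of the midpoint rule.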
These types of problems have been considered in [1620]. The equation of the form (7.6) generalizes the equation of the form
which was studied in [1620]. Instead of (7.6), we can try to solve the equation , where
and
The norm we consider is the max norm. The derivative is given by
for all . First of all, we notice that does not satisfy the Lipschitz-type condition in Ω. Let us consider, for instance, , and . Then we have and
If were Lipschitz, then we would have
or, equivalently, the inequality
would hold for all and for a constant . But this is not true. Consider, for example, the function
for all and . If these are substituted into (7.7), then we have
for all . This inequality is not true when . Therefore, Assumption 2.2 is not satisfied in this case. However, Assumption 2.1(2) holds. To show this, suppose that and . Then, for all , we have
where and . Then Assumption 2.1(2) holds for sufficiently small λ.
In the following remarks, we compare our results with the corresponding ones in [10].
Remark 7.6 Note that the results in [10] were shown using Assumption 2.2, whereas we used the weaker Assumption 2.1(2) in this paper. Next, our result, Proposition 2.3, was shown with replacing K. Therefore, if (see Example 7.3), then our result is tighter. Proposition 2.4 was shown with replacing K. Then, if , our result is tighter. Theorem 3.2 was shown with replacing 2K. Hence, if , our result is tighter. Similar observations in our favor hold for Lemma 4.1, Theorem 4.2 and the rest of the results in [10].
Remark 7.7 The results obtained here can also be realized for the operators F satisfying an autonomous differential equation of the form
where is a known continuous operator. Since , we can compute in Assumption 2.1(2) without actually knowing . Returning to Example 7.1, we see that we can set .
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors read and approved the final manuscript.
Acknowledgements
Dedicated to Professor Hari M Srivastava.
This paper was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant Number: 20120008170).
References

1. Binder, A, Engl, HW, Groetsch, CW, Neubauer, A, Scherzer, O: Weakly closed nonlinear operators and parameter identification in parabolic equations by Tikhonov regularization. Appl. Anal. 55, 215–235 (1994)
2. Engl, HW, Hanke, M, Neubauer, A: Regularization of Inverse Problems. Kluwer, Dordrecht (1993)
3. Engl, HW, Kunisch, K, Neubauer, A: Convergence rates for Tikhonov regularization of nonlinear ill-posed problems. Inverse Probl. 5, 523–540 (1989)
4. Jin, Q, Hou, ZY: On the choice of the regularization parameter for ordinary and iterated Tikhonov regularization of nonlinear ill-posed problems. Inverse Probl. 13, 815–827 (1997)
5. Jin, Q, Hou, ZY: On an a posteriori parameter choice strategy for Tikhonov regularization of nonlinear ill-posed problems. Numer. Math. 83, 139–159 (1990)
6. Scherzer, O, Engl, HW, Kunisch, K: Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems. SIAM J. Numer. Anal. 30, 1796–1838 (1993)
7. Tautenhahn, U: Lavrentiev regularization of nonlinear ill-posed problems. Vietnam J. Math. 32, 29–41 (2004)
8. Tautenhahn, U: On the method of Lavrentiev regularization for nonlinear ill-posed problems. Inverse Probl. 18, 191–207 (2002)
9. Bakushinskii, A, Smirnova, A: Iterative regularization and generalized discrepancy principle for monotone operator equations. Numer. Funct. Anal. Optim. 28, 13–25 (2007)
10. Mahale, P, Nair, MT: Iterated Lavrentiev regularization for nonlinear ill-posed problems. ANZIAM J. 51, 191–217 (2009)
11. Bakushinskii, A, Smirnova, A: On application of generalized discrepancy principle to iterative methods for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 26, 35–48 (2005)
12. Bakushinskii, A, Smirnova, A: A posteriori stopping rule for regularized fixed point iterations. Nonlinear Anal. 64, 1255–1261 (2006)
13. Jin, Q: On the iteratively regularized Gauss-Newton method for solving nonlinear ill-posed problems. Math. Comput. 69, 1603–1623 (2000)
14. Mahale, P, Nair, MT: General source conditions for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 28, 111–126 (2007)
15. Semenova, EV: Lavrentiev regularization and balancing principle for solving ill-posed problems with monotone operators. Comput. Methods Appl. Math. 4, 444–454 (2010)
16. Argyros, IK: Convergence and Application of Newton-Type Iterations. Springer, New York (2008)
17. Argyros, IK: Approximating solutions of equations using Newton's method with a modified Newton's method iterate as a starting point. Rev. Anal. Numér. Théor. Approx. 36, 123–138 (2007)
18. Argyros, IK: A semilocal convergence for directional Newton methods. Math. Comput. 80, 327–343 (2011)
19. Argyros, IK, Hilout, S: Weaker conditions for the convergence of Newton's method. J. Complex. 28, 364–387 (2012)
20. Argyros, IK, Cho, YJ, Hilout, S: Numerical Methods for Equations and Its Applications. CRC Press, New York (2012)
21. Tautenhahn, U, Jin, Q: Tikhonov regularization and a posteriori rule for solving nonlinear ill-posed problems. Inverse Probl. 19, 1–21 (2003)