
Two regularization methods for a class of inverse boundary value problems of elliptic type

Abstract

This paper deals with the problem of determining an unknown boundary condition $u(0)$ in the boundary value problem $u_{yy}(y) - Au(y) = 0$, $u(0) = f$, $u(+\infty) = 0$, with the aid of an extra measurement at an internal point. It is well known that such a problem is severely ill-posed, i.e., the solution does not depend continuously on the data. In order to overcome the instability of the ill-posed problem, we propose two regularization procedures: the first method is based on spectral truncation, and the second is a version of the Kozlov-Maz’ya iteration method. Finally, some other convergence results, including some explicit convergence rates, are also established under a priori bound assumptions on the exact solution.

MSC: 35R25, 65J20, 35J25.

1 Formulation of the problem

Throughout this paper, $H$ denotes a complex separable Hilbert space endowed with the inner product $(\cdot,\cdot)$ and the norm $\|\cdot\|$; $\mathcal{L}(H)$ stands for the Banach algebra of bounded linear operators on $H$.

Let $A : D(A) \subset H \to H$ be a positive, self-adjoint operator with compact resolvent, so that $A$ admits an orthonormal basis of eigenvectors $(\phi_n) \subset H$ with real eigenvalues $(\lambda_n) \subset \mathbb{R}_+$, i.e.,

$$A\phi_n = \lambda_n \phi_n, \quad n \in \mathbb{N}^*, \qquad (\phi_i, \phi_j) = \delta_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases}$$
$$0 < \nu \le \lambda_1 \le \lambda_2 \le \lambda_3 \le \cdots, \qquad \lim_{n \to \infty} \lambda_n = \infty,$$
$$\forall h \in H, \quad h = \sum_{n=1}^{\infty} h_n \phi_n, \quad h_n = (h, \phi_n).$$

In this paper, we are interested in the following inverse boundary value problem: find (u(y),u(0)) satisfying

$$\begin{cases} u_{yy} - A u = 0, & 0 < y < \infty, \\ u(0) = f, \quad u(+\infty) = 0, \end{cases}$$
(1.1)

where f is the unknown boundary condition to be determined from the interior data

$$u(b) = g \in H, \quad 0 < b < \infty.$$
(1.2)

This problem is an abstract version of an inverse boundary value problem; it generalizes inverse problems for second-order elliptic partial differential equations in a cylindrical domain, for instance the following problem.

Example 1.1 An example of (1.1) is the boundary value problem for the Laplace equation in the strip $(0,\pi) \times (0,\infty)$, where the operator $A$ is given by

$$A = -\frac{\partial^2}{\partial x^2}, \qquad D(A) = H_0^1(0,\pi) \cap H^2(0,\pi) \subset H = L^2(0,\pi),$$

which takes the form

$$\begin{cases} u_{yy}(x,y) + u_{xx}(x,y) = 0, & x \in (0,\pi),\ y \in (0,+\infty), \\ u(0,y) = u(\pi,y) = 0, & y \in (0,+\infty), \\ u(x,0) = f(x), \quad u(x,+\infty) = 0, & x \in [0,\pi], \\ u(x, y = b) = g(x), & x \in [0,\pi]. \end{cases}$$

To our knowledge, there are few papers devoted to this class of problems in the abstract setting, except for [1, 2]. In [3], the author studied a similar problem posed on a bounded interval; in that study, the algebraic invertibility of the inverse problem was established, but the regularization aspect was not investigated.

We note here that this inverse problem was studied by Levine and Vessella [2], where the authors considered the problem of recovering $u(0)$ from the experimental data $g_1, \dots, g_n$ associated to the internal measurements $u(b_1), \dots, u(b_n)$, in which the temperature is measured at various depths $0 < b_1 < \cdots < b_n$ as approximate functions $g_1, \dots, g_n \in H$ such that

$$\sum_{i=1}^{n} p_i \|u(b_i) - g_i\|^2 \le \varepsilon^2,$$

where $p_1, \dots, p_n$ are positive weights with $\sum_{i=1}^{n} p_i = 1$ and $\varepsilon$ denotes the noise level.

The regularizing strategy employed in [2] is essentially based on Tikhonov regularization and the conditional stability estimate $\|u_y(0)\| \le E$ for some a priori constant $E$.

In practice, the use of multiple measurements, or of the average of a series of measurements, is expensive and sometimes unrealizable. Moreover, the numerical implementation of the stabilized solutions produced by the Tikhonov regularization method for this class of problems is a very complex task.

For these reasons, we propose in our study a practical regularizing strategy. We show that we can recover $u(0)$ from the internal measurement $u(b) = g$ under the conditional stability estimate $\|u(0)\| \le E$ for some a priori constant $E$. Moreover, our investigation is supplemented by numerical simulations justifying the feasibility of our approach.

2 Preliminaries and basic results

In this section we fix the notation and the functional setting used throughout the paper, and we collect some material needed in our analysis.

2.1 Notation

We denote by $\mathcal{C}(H)$ the set of all closed linear operators densely defined in $H$. The domain, range and kernel of a linear operator $B \in \mathcal{C}(H)$ are denoted by $D(B)$, $R(B)$ and $N(B)$; the symbols $\rho(B)$, $\sigma(B)$ and $\sigma_p(B)$ are used for the resolvent set, spectrum and point spectrum of $B$, respectively. If $V$ is a closed subspace of $H$, we denote by $\Pi_V$ the orthogonal projection from $H$ onto $V$.

For the ease of reading, we summarize some well-known facts in spectral theory.

2.2 Spectral theorem and properties

By the spectral theorem, for each strictly positive self-adjoint operator $B$,

$$B : D(B) \subset H \to H, \quad \overline{D(B)} = H, \quad B = B^*, \quad (Bu, u) \ge \gamma \|u\|^2, \ \forall u \in D(B) \ (\gamma > 0),$$

there is a unique right-continuous family $\{E_\lambda,\ \lambda \in [\gamma, \infty[\} \subset \mathcal{L}(H)$ of orthogonal projection operators such that $B = \int_\gamma^\infty \lambda \, dE_\lambda$, with

$$D(B) = \Bigl\{ v \in H : \int_\gamma^\infty \lambda^2 \, d(E_\lambda v, v) < \infty \Bigr\}.$$

Theorem 2.1 [[4], Theorem 6, XII.2.5, pp.1196-1198]

Let $\{E_\lambda,\ \lambda \ge \gamma > 0\}$ be the spectral resolution of the identity associated to $B$, and let $\Phi$ be a complex Borel function defined $E$-almost everywhere on the real axis. Then $\Phi(B)$ is a closed operator with dense domain. Moreover:

(i) $D(\Phi(B)) := \bigl\{ h \in H : \int_\gamma^\infty |\Phi(\lambda)|^2 \, d(E_\lambda h, h) < \infty \bigr\}$;

(ii) $(\Phi(B)h, y) = \int_\gamma^\infty \Phi(\lambda) \, d(E_\lambda h, y)$, $\forall h \in D(\Phi(B))$, $\forall y \in H$;

(iii) $\|\Phi(B)h\|^2 = \int_\gamma^\infty |\Phi(\lambda)|^2 \, d(E_\lambda h, h)$, $\forall h \in D(\Phi(B))$;

(iv) $\Phi(B)^* = \bar{\Phi}(B)$. In particular, if $\Phi$ is a real Borel function, then $\Phi(B)$ is self-adjoint.

We denote by $S(y) = e^{-y\sqrt{A}} = \sum_{n=1}^{+\infty} e^{-y\sqrt{\lambda_n}} (\cdot, \phi_n)\phi_n \in \mathcal{L}(H)$, $y \ge 0$, the $C_0$-semigroup generated by $-\sqrt{A}$. Some basic properties of $S(y)$ are listed in the following theorem.

Theorem 2.2 (see [5], Chapter 2, Theorem 6.13, p.74)

For this family of operators, we have:

1. $\|S(y)\| \le 1$, $\forall y \ge 0$;

2. the function $y \mapsto S(y)$, $y > 0$, is analytic;

3. for every real $r \ge 0$ and $y > 0$, $S(y) \in \mathcal{L}(H, D(A^{r/2}))$;

4. for every integer $k \ge 0$ and $y > 0$, $\|S^{(k)}(y)\| = \|A^{k/2} S(y)\| \le c(k)\, y^{-k}$;

5. for every $h \in D(A^{r/2})$, $r \ge 0$, we have $S(y) A^{r/2} h = A^{r/2} S(y) h$.

Theorem 2.3 For $y > 0$, $S(y)$ is a self-adjoint, one-to-one operator with dense range ($S(y) = S(y)^*$, $\overline{R(S(y))} = H$).

Proof Let $\phi_y : [0, +\infty[ \to \mathbb{R}$, $s \mapsto \phi_y(s) = e^{-y\sqrt{s}}$. Then, by virtue of (iv) of Theorem 2.1, we can write $S(y)^* = \bar{\phi}_y(A) = \phi_y(A) = e^{-y\sqrt{A}} = S(y)$.

Let $h \in N(S(y_0))$, $y_0 > 0$; then $S(y_0)h = 0$, which implies that $S(y)S(y_0)h = S(y + y_0)h = 0$, $\forall y \ge 0$. Using analyticity, we obtain that $S(y)h = 0$, $\forall y \ge 0$. Strong continuity at $0$ now gives $h = 0$. This shows that $N(S(y_0)) = \{0\}$.

Thanks to

$$\overline{R(S(y_0))} = N\bigl(S(y_0)^*\bigr)^{\perp} = \{0\}^{\perp} = H,$$

we conclude that R(S( y 0 )) is dense in H. □

Remark 2.1 For $y = b$, this theorem ensures that $S(b)$ is a self-adjoint, one-to-one operator with dense range $R(S(b))$. We can therefore define its inverse $S(b)^{-1} = e^{b\sqrt{A}}$, which is an unbounded, self-adjoint, strictly positive definite operator in $H$ with dense domain

$$D\bigl(S(b)^{-1}\bigr) = R\bigl(S(b)\bigr) = \Bigl\{ h \in H : \bigl\|e^{b\sqrt{A}} h\bigr\|^2 = \sum_{n=1}^{+\infty} e^{2b\sqrt{\lambda_n}} |(h, \phi_n)|^2 < +\infty \Bigr\}.$$

Let us consider the following problem: for $\xi \in H$, find $v \in C^1(]0,+\infty[; H) \cap C([0,+\infty[; H) \cap C(]0,+\infty[; D(A))$ such that

$$v'(y) + \sqrt{A}\, v(y) = 0, \quad 0 < y < +\infty, \qquad v(0) = \xi.$$
(2.1)

Theorem 2.4 [[6], Theorem 7.5, p.191]

For any ξH, problem (2.1) has a unique solution, given by

$$v(y) = S(y)\xi = \sum_{n=1}^{\infty} e^{-y\sqrt{\lambda_n}} (\xi, \phi_n) \phi_n.$$
(2.2)

Moreover, for every integer $k \ge 0$, $v \in C^\infty(]0,+\infty[; D(A^{k/2}))$. If, in addition, $\xi \in D(A^{j/2})$, then $v \in C([0,+\infty[; D(A^{j/2})) \cap C^j([0,+\infty[; H)$ and

$$\forall k, j \in \mathbb{N}, \qquad \Bigl\| \frac{d^{k+j}}{dy^{k+j}} v(y) \Bigr\| = \bigl\| A^{k/2} v^{(j)}(y) \bigr\| \le c(k)\, y^{-k} \bigl\| A^{j/2} \xi \bigr\|.$$

On the other hand, Theorem 2.4 provides smoothness results with respect to $y$: $v \in C^\infty(]0,+\infty[; H) \cap C^j([0,+\infty[; H)$ whenever $\xi \in D(A^{j/2})$, $j \in \mathbb{N}$. Under this same hypothesis, we also have smoothness in space: $v \in C([0,+\infty[; D(A^{j/2})) \cap C^{j-k}([0,+\infty[; D(A^{k/2}))$, $k \le j$.

Here we recall a crucial theorem in the analysis of the inverse problems.

Theorem 2.5 [[7], Generalized Picard theorem, p.502]

Let $B : D(B) \subset H \to H$ be a self-adjoint operator in the Hilbert space $H$, and let $E_\mu$ be its spectral resolution of the identity. Let $\theta \in C(\mathbb{R}, \mathbb{R})$ and $Z(\theta) := \{ t \in \mathbb{R} : \theta(t) = 0 \}$. We suppose that the set $Z(\theta)$ either is empty or contains isolated points only. Then the vectorial equation

θ(B)φ=ψ

is solvable if and only if

$$\int_{\mathbb{R}} |\theta(\lambda)|^{-2} \, d\|E_\lambda \psi\|^2 < \infty.$$

Moreover,

$$N\bigl(\theta(B)\bigr) = \{0\} \iff \sigma_p(B) \cap Z(\theta) = \emptyset.$$

On the basis $\{\phi_n\}$, we introduce the Hilbert scales $(H_s)_{s \in \mathbb{R}}$ and $(E_s)_{s \in \mathbb{R}}$ induced by $A$ as follows:

$$H_s = \Bigl\{ h \in H : \sum_{n=1}^{\infty} \lambda_n^{2s} |(h, \phi_n)|^2 < +\infty \Bigr\}, \qquad E_s = \Bigl\{ h \in H : \sum_{n=1}^{\infty} e^{2bs\sqrt{\lambda_n}} |(h, \phi_n)|^2 < +\infty \Bigr\}.$$

2.3 Non-expansive operators

Definition 2.1 A linear operator $M \in \mathcal{L}(H)$ is called non-expansive if

$$\|M\| \le 1.$$

Theorem 2.6 [[8], Theorem 2.2]

Let $M \in \mathcal{L}(H)$ be a positive, self-adjoint operator with $\|M\| \le 1$. Putting $V_0 = N(M)$ and $V_1 = N(I - M)$, we have

$$\operatorname*{s\text{-}lim}_{n \to +\infty} M^n = \Pi_{V_1}, \qquad \operatorname*{s\text{-}lim}_{n \to +\infty} (I - M)^n = \Pi_{V_0},$$

i.e.,

$$\forall h \in H, \quad \lim_{n \to +\infty} M^n h = \Pi_{V_1} h, \qquad \lim_{n \to +\infty} (I - M)^n h = \Pi_{V_0} h.$$

For more details concerning the theory of non-expansive operators, we refer to Krasnosel’skii et al. [[9], p.66].

Let us consider the operator equation

Sφ=(IM)φ=ψ
(2.3)

for non-expansive operators M.

Theorem 2.7 Let $M$ be a linear self-adjoint, positive and non-expansive operator on $H$. Let $\hat{\psi} \in H$ be such that equation (2.3) has a solution $\hat{\varphi}$. If 1 is not an eigenvalue of $M$, i.e., $(I - M)$ is injective ($V_1 = N(I - M) = \{0\}$), then the successive approximations

$$\varphi_{n+1} = M\varphi_n + \hat{\psi}, \quad n = 0, 1, 2, \dots,$$

converge to $\hat{\varphi}$ for any initial data $\varphi_0 \in H$.

Proof From the hypothesis and by virtue of Theorem 2.6, we have

$$\forall \varphi_0 \in H, \quad M^n \varphi_0 \to \Pi_{V_1} \varphi_0 = \Pi_{\{0\}} \varphi_0 = 0.$$
(2.4)

By induction with respect to n, it is easily seen that φ n has the explicit form

$$\varphi_n = M^n \varphi_0 + \sum_{j=0}^{n-1} M^j \hat{\psi} = M^n \varphi_0 + (I - M^n)(I - M)^{-1} \hat{\psi} = M^n \varphi_0 + (I - M^n)\hat{\varphi},$$

and (2.4) allows us to conclude that

$$\|\hat{\varphi} - \varphi_n\| = \bigl\|M^n(\varphi_0 - \hat{\varphi})\bigr\| \to 0, \quad n \to \infty.$$
(2.5)

 □

Remark 2.2 In many situations, ill-posed boundary value problems for partial differential equations can be reduced to Fredholm operator equations of the first kind of the form $B\varphi = \psi$, where $B$ is a compact, positive, self-adjoint operator in a Hilbert space $H$. This equation can be rewritten in the following way:

$$\varphi = (I - \omega B)\varphi + \omega\psi = L\varphi + \omega\psi,$$

where $L = (I - \omega B)$ and $\omega$ is a positive parameter satisfying $\omega < 1/\|B\|$. It is easily seen that the operator $L$ is non-expansive and 1 is not an eigenvalue of $L$. It follows from Theorem 2.7 that the sequence $\{\varphi_n\}_{n=0}^{\infty}$ converges and $(I - \omega B)^n \zeta \to 0$ for every $\zeta \in H$ as $n \to \infty$.
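The fixed-point scheme of Remark 2.2 is easy to test numerically. The following Python/NumPy sketch (the paper's own experiments use MATLAB; the matrix, seed and iteration count here are illustrative) runs $\varphi_{n+1} = (I - \omega B)\varphi_n + \omega\psi$ for a small symmetric positive definite $B$ with $\omega < 1/\|B\|$:

```python
import numpy as np

# Illustrative SPD stand-in for a compact, positive, self-adjoint operator B.
rng = np.random.default_rng(0)
C = rng.standard_normal((4, 4))
B = C @ C.T + np.eye(4)                 # symmetric positive definite
psi = rng.standard_normal(4)

omega = 0.9 / np.linalg.norm(B, 2)      # positive parameter with omega < 1/||B||
phi = np.zeros(4)                       # arbitrary initial guess phi_0
for _ in range(5000):
    phi = (phi - omega * (B @ phi)) + omega * psi   # phi_{n+1} = L phi_n + omega psi

residual = np.linalg.norm(B @ phi - psi)
print(residual)                         # tends to 0 as n grows
```

Here $0 < I - \omega B < I$, so $L = I - \omega B$ is non-expansive with 1 not an eigenvalue, which is exactly the situation of Theorem 2.7.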

3 Ill-posedness and stabilization of the inverse boundary value problem

3.1 Cauchy problem with Dirichlet conditions

Consider the following well-posed boundary value problem:

$$\begin{cases} v_{yy} - A v = 0, & 0 < y < \infty, \\ v(0) = \xi, \quad v(+\infty) = 0, \end{cases}$$
(3.1)

where $\xi$ is a given element of $H$.

Definition 3.1 [[10], p.250]

  • A function $v : [0,+\infty[ \to H$ is called a generalized solution of equation (3.1) if $v \in E_g = C([0,+\infty[; H) \cap C^2(]0,+\infty[; H) \cap C^1([0,+\infty[; H_1)$, and for all $y \in ]0,+\infty[$, $v(y) \in D(A)$ and $v$ obeys equation (3.1) on the interval $]0,+\infty[$.

  • A function $v : [0,+\infty[ \to H$ is called a classical solution of equation (3.1) if $v \in E_c = C^1([0,+\infty[; H) \cap C^2(]0,+\infty[; H)$, and for all $y \in ]0,+\infty[$, $v(y) \in D(A)$ and $v$ obeys equation (3.1) on the interval $]0,+\infty[$.

Theorem 3.1 Problem (3.1) admits a unique generalized (resp. classical) solution if and only if $\xi \in H$ (resp. $\xi \in H_1$).

Proof By using the Fourier expansion and the given Dirichlet boundary conditions

$$v(y) = \sum_{n=1}^{+\infty} v_n(y)\phi_n, \qquad v(0) = \sum_{n=1}^{+\infty} v_n(0)\phi_n = \xi = \sum_{n=1}^{+\infty} \xi_n \phi_n, \qquad v(+\infty) = \sum_{n=1}^{+\infty} v_n(+\infty)\phi_n = 0,$$

we obtain

$$\begin{cases} v_n'' - \lambda_n v_n(y) = 0, & 0 < y < \infty, \\ v_n(0) = \xi_n, \quad v_n(+\infty) = 0. \end{cases}$$
(3.2)

This differential equation admits two linearly independent fundamental solutions

$$\varphi_n^{+}(y) = e^{+y\sqrt{\lambda_n}}, \qquad \varphi_n^{-}(y) = e^{-y\sqrt{\lambda_n}}.$$

Thus, its general solution can be written as

$$v_n(y) = c_n^{+} e^{+y\sqrt{\lambda_n}} + c_n^{-} e^{-y\sqrt{\lambda_n}}, \quad c_n^{+}, c_n^{-} \in \mathbb{R}.$$

Applying $v_n(+\infty) = 0$ and $v_n(0) = \xi_n$ yields $c_n^{+} = 0$ and $c_n^{-} = \xi_n$. Finally, the solution of (3.2) is

$$v(y) = S(y)\xi = e^{-y\sqrt{A}}\xi = \sum_{n=1}^{+\infty} e^{-y\sqrt{\lambda_n}} \xi_n \phi_n, \quad \xi_n = (\xi, \phi_n).$$
(3.3)

Remark 3.1 It is easy to check that the expression (3.3) solves the problem

$$v'(y) + \sqrt{A}\, v(y) = 0, \quad y \in ]0,+\infty[, \qquad v(0) = \xi.$$

If $\xi \in H$ (resp. $\xi \in H_1$), by virtue of Theorem 2.4 and Remark 3.1, we easily check the inclusion $v \in E_g$ (resp. $v \in E_c$) and $v(y) \in D(A)$ for $y \in ]0,+\infty[$. □

3.2 Inverse boundary value problem

Our inverse problem is to determine $v(0) = f$ from the supplementary condition $v(b) = g$; then we get

$$v(b) = \sum_{n=1}^{+\infty} e^{-b\sqrt{\lambda_n}} f_n \phi_n = g = \sum_{n=1}^{+\infty} g_n \phi_n.$$
(3.4)

We define

$$K = S(b) : H \to H, \qquad h \mapsto Kh = \sum_{n=1}^{+\infty} e^{-b\sqrt{\lambda_n}} h_n \phi_n.$$
(3.5)

The operator equation (3.5) is the main instrument in investigating problem (3.4). More precisely, we want to study the following properties:

1. injectivity of $K$ (identifiability);

2. continuity of $K$ and the existence of its inverse (stability);

3. the range of $K$.

It is easy to see that $K$ is a linear compact self-adjoint operator with singular values $(\sigma_k = e^{-b\sqrt{\lambda_k}})_{k=1}^{+\infty}$, and by virtue of Remark 2.1, we have

$$1.\ N(K) = \{0\}, \qquad 2.\ R(K) = \Bigl\{ h \in H : \bigl\|e^{b\sqrt{A}} h\bigr\|^2 = \sum_{n=1}^{+\infty} e^{2b\sqrt{\lambda_n}} |(h, \phi_n)|^2 < +\infty \Bigr\}, \qquad 3.\ \overline{R(K)} = H.$$

Now, to conclude the solvability of problem (3.4) it is enough to apply Theorem 2.5.

Corollary 3.1 The inverse problem (3.4) is uniquely solvable if and only if

$$u(b) = g \in R(K) = \Bigl\{ h \in H : \sum_{n=1}^{+\infty} e^{2b\sqrt{\lambda_n}} |(h, \phi_n)|^2 < +\infty \Bigr\}.$$
(3.6)

In this case, we have

$$f = u(0) = K^{-1} g = \sum_{n=1}^{+\infty} e^{b\sqrt{\lambda_n}} g_n \phi_n.$$
(3.7)

In other words, the solution $f$ of the inverse problem is obtained from the data $g$ via the unbounded operator $L = K^{-1}$ defined on the subspace

$$D(L) = \Bigl\{ g \in H : \sum_{n=1}^{+\infty} e^{2b\sqrt{\lambda_n}} |g_n|^2 < +\infty, \ g_n = (g, \phi_n) \Bigr\}.$$

Corollary 3.2 Problem (1.1)-(1.2) admits a unique solution $u \in C([0,+\infty[; H)$ if and only if

$$u(0) \in H \iff g \in R(K) = \Bigl\{ h \in H : \sum_{n=1}^{+\infty} e^{2b\sqrt{\lambda_n}} |(h, \phi_n)|^2 < +\infty \Bigr\}.$$

In this case, we have

$$u(y) = e^{(b-y)\sqrt{A}}\, g = \sum_{n=1}^{+\infty} e^{(b-y)\sqrt{\lambda_n}} g_n \phi_n.$$
(3.8)

From this representation, we see that:

  • $u(y)$ is stable on the interval $[b,+\infty[$ ($\sup_{y \in [b,+\infty[} \|u(y)\| \le \|g\|$);

  • $u$ is unstable on $[0,b[$. This follows from the high-frequency amplification factor $\omega_n = e^{(b-y)\sqrt{\lambda_n}} \to +\infty$ as $n \to +\infty$.
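The instability can be quantified. In Example 1.1 ($\lambda_n = n^2$, so $\sqrt{\lambda_n} = n$), a data error in the $n$th mode is amplified by $e^{(b-y)n}$ when solving back to $y = 0$. A small Python computation (the values of $b$ and $n$ are illustrative):

```python
import math

b, y = 1.0, 0.0                 # recover u(0) from data measured at b = 1
for n in (1, 5, 10, 20):        # Example 1.1: sqrt(lambda_n) = n
    amp = math.exp((b - y) * n) # amplification factor omega_n = e^{(b-y) sqrt(lambda_n)}
    print(n, amp)
```

Already for $n = 20$ the factor exceeds $10^8$, so unfiltered inversion of the data is hopeless; this is exactly what the truncation of Section 3.3 suppresses.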

3.3 Regularization by truncation method and error estimates

A natural way to stabilize the problem is to eliminate all the components with large $n$ from the solution and instead consider (3.7) only for $n \le N$.

Definition 3.2 For N>0, the regularized solution of problem (1.1)-(1.2) is given by

$$f_N = \sum_{n \le N} e^{b\sqrt{\lambda_n}} g_n \phi_n, \quad g_n = (g, \phi_n),$$
(3.9)
$$u_N(y) = \sum_{n \le N} e^{(b-y)\sqrt{\lambda_n}} g_n \phi_n, \quad g_n = (g, \phi_n).$$
(3.10)

Remark 3.2 If the parameter $N$ is large, $f_N$ is close to the exact solution $f$. On the other hand, if the parameter $N$ is fixed, $f_N$ is bounded. So the positive integer $N$ plays the role of a regularization parameter.
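In the setting of Example 1.1, where $\lambda_n = n^2$ and $\phi_n(x) = \sqrt{2/\pi}\sin(nx)$, the truncated solution (3.9) is a finite sum that can be evaluated directly. A Python/NumPy sketch (illustrative, not the paper's MATLAB code; the grid size and $N$ are arbitrary choices), using the noise-free data of the example in Section 5:

```python
import numpy as np

b, N = 1.0, 4
x = np.linspace(0.0, np.pi, 201)
h = x[1] - x[0]

def inner(u, v):
    """Trapezoidal approximation of the L^2(0, pi) inner product."""
    w = u * v
    return h * (w.sum() - 0.5 * (w[0] + w[-1]))

phi = lambda n: np.sqrt(2.0 / np.pi) * np.sin(n * x)  # eigenfunctions of A

g = np.sqrt(2.0 / np.pi) * np.sin(x)                  # exact data u(., b) of Section 5

# f_N = sum_{n <= N} e^{b sqrt(lambda_n)} (g, phi_n) phi_n, with sqrt(lambda_n) = n
f_N = sum(np.exp(b * n) * inner(g, phi(n)) * phi(n) for n in range(1, N + 1))

f_exact = np.sqrt(2.0 / np.pi) * np.e * np.sin(x)     # u(., 0) of Section 5
err = np.max(np.abs(f_N - f_exact))
print(err)                                            # small: g is noise-free here
```

With noise-free data the truncation recovers $f$ essentially exactly, since $g$ has only one Fourier mode; the interplay with noise is quantified in Theorem 3.2.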

Remark 3.3 In view of

$$\|u(y) - u_N(y)\| = \bigl\|S(y)(f - f_N)\bigr\| \le \|f - f_N\|,$$

and if $g \in E_1$, i.e., $\sum_{n=1}^{\infty} e^{2b\sqrt{\lambda_n}} |(g, \phi_n)|^2 < \infty$, then

$$\|f - f_N\| \to 0, \quad N \to \infty,$$

implies

$$\|u - u_N\|_\infty = \sup_{y \in [0,+\infty[} \|u(y) - u_N(y)\| \to 0, \quad N \to \infty.$$

Since the data $g$ are based on (physical) observations and are not known with complete accuracy, we assume that $g$ and $g^\delta$ satisfy $\|g - g^\delta\| \le \delta$, where $g^\delta$ denotes the measured data and $\delta$ denotes the noise level.

Let $(f_N^\delta, u_N^\delta)$ denote the regularized solution of problem (1.1)-(1.2) with measured data $g^\delta$:

$$f_N^\delta = \sum_{n \le N} e^{b\sqrt{\lambda_n}} g_n^\delta \phi_n, \quad g_n^\delta = (g^\delta, \phi_n),$$
(3.11)
$$u_N^\delta(y) = \sum_{n \le N} e^{(b-y)\sqrt{\lambda_n}} g_n^\delta \phi_n, \quad g_n^\delta = (g^\delta, \phi_n).$$
(3.12)

As usual, in order to obtain a convergence rate, we assume that there exists an a priori bound for problem (1.2):

$$\|A^{r/2} f\|^2 \le E^2 < +\infty \ \Longleftrightarrow\ \sum_{n=1}^{+\infty} \lambda_n^{r} e^{2b\sqrt{\lambda_n}} |g_n|^2 \le E^2,$$
(3.13)

where E>0 is a given constant.

Remark 3.4 Given two exact data functions $g_1$ and $g_2$, let $f_{1,N}$ and $f_{2,N}$ be the corresponding regularized solutions. Then

$$\|f_{2,N} - f_{1,N}\|^2 = \sum_{n \le N} e^{2b\sqrt{\lambda_n}} \bigl|(g_2 - g_1)_n\bigr|^2 \le e^{2b\sqrt{\lambda_N}} \|g_2 - g_1\|^2.$$
(3.14)

The main theorem of this method is as follows.

Theorem 3.2 Let $f_N^\delta$ be the regularized solution given by (3.11), and let $f$ be the exact solution given by (3.7). If $\|A^{r/2} f\| \le E$, $r > 0$, and if we choose $\sqrt{\lambda_N} = \frac{\theta}{b}\log(\frac{1}{\delta})$, $0 < \theta < 1$, then we have the error bound

$$\|f - f_N^\delta\| \le \Bigl(\frac{b}{\theta}\Bigr)^{r} \Bigl(\frac{1}{\log(1/\delta)}\Bigr)^{r} E + \delta^{1-\theta}.$$
(3.15)

Proof From direct computations, we have

$$\Delta_1 = \|f_N - f_N^\delta\| \le e^{b\sqrt{\lambda_N}} \|g - g^\delta\| \le e^{b\sqrt{\lambda_N}} \delta,$$
$$\Delta_2^2 = \|f - f_N\|^2 = \sum_{n=N+1}^{+\infty} e^{2b\sqrt{\lambda_n}} |g_n|^2 = \sum_{n=N+1}^{+\infty} \frac{1}{\lambda_n^{r}}\, \lambda_n^{r} e^{2b\sqrt{\lambda_n}} |g_n|^2 \le \frac{1}{\lambda_{N+1}^{r}} \sum_{n=N+1}^{+\infty} \lambda_n^{r} e^{2b\sqrt{\lambda_n}} |g_n|^2 \le \Bigl(\frac{1}{\sqrt{\lambda_N}}\Bigr)^{2r} E^2.$$

Using the triangle inequality

$$\|f - f_N^\delta\| \le \|f - f_N\| + \|f_N - f_N^\delta\| = \Delta_2 + \Delta_1,$$

we obtain

$$\|f - f_N^\delta\| \le \Bigl(\frac{1}{\sqrt{\lambda_N}}\Bigr)^{r} E + e^{b\sqrt{\lambda_N}} \delta.$$
(3.16)

By choosing $\sqrt{\lambda_N} = \frac{\theta}{b}\log(\frac{1}{\delta})$, $0 < \theta < 1$, so that $e^{b\sqrt{\lambda_N}}\delta = \delta^{-\theta}\delta = \delta^{1-\theta}$, we obtain

$$\|f - f_N^\delta\| \le \Bigl(\frac{b}{\theta}\Bigr)^{r} \Bigl(\frac{1}{\log(1/\delta)}\Bigr)^{r} E + \delta^{1-\theta}.$$

 □

Finally, from (3.4) and (3.15), we deduce the following corollary.

Corollary 3.3 Let $u_N^\delta$ be the regularized solution given by (3.12), and let $u$ be the exact solution given by (3.8). If $\|A^{r/2} f\| \le E$, $r > 0$, and if we choose $\sqrt{\lambda_N} = \frac{\theta}{b}\log(\frac{1}{\delta})$, $0 < \theta < 1$, then we have the error bound

$$\|u - u_N^\delta\|_\infty = \sup_{y \in [0,+\infty[} \|u(y) - u_N^\delta(y)\| \le \|f - f_N^\delta\| \le \Bigl(\frac{b}{\theta}\Bigr)^{r} \Bigl(\frac{1}{\log(1/\delta)}\Bigr)^{r} E + \delta^{1-\theta}.$$
(3.17)

4 Regularization by the Kozlov-Maz’ya iteration method and error estimates

In [11, 12] Kozlov and Maz’ya proposed an alternating iterative method to solve boundary value problems for general strongly elliptic and formally self-adjoint systems. Since then, the idea of this method has been successfully used for solving various classes of ill-posed (elliptic, parabolic and hyperbolic) problems; see, e.g., [13-15].

In this section we extend this method to our ill-posed problem.

4.1 Description of the method

The iterative algorithm for solving the inverse problem (1.1)-(1.2) starts by letting $f_0 \in H$ be arbitrary. The first approximation $u_0(y)$ is the solution of the direct problem

$$\begin{cases} (u_0)_{yy} - A u_0 = 0, & 0 < y < \infty, \\ u_0(0) = f_0, \quad u_0(+\infty) = 0. \end{cases}$$
(4.1)

If the pair $(u_k, f_k)$ has been constructed, let

$$(\mathcal{P})_{k+1} : \quad f_{k+1} = f_k - \omega\bigl(u_k(b) - g\bigr),$$
(4.2)

where $\omega$ is such that

$$0 < \omega < \frac{1}{\|K\|} = e^{b\sqrt{\lambda_1}}, \qquad \|K\| = \sup_n e^{-b\sqrt{\lambda_n}} = e^{-b\sqrt{\lambda_1}} < 1.$$

Finally, we get $u_{k+1}$ by solving the problem

$$\begin{cases} (u_{k+1})_{yy} - A u_{k+1} = 0, & 0 < y < \infty, \\ u_{k+1}(0) = f_{k+1}, \quad u_{k+1}(+\infty) = 0. \end{cases}$$
(4.3)

We set $G = (I - \omega K)$. If we iterate backwards in $(\mathcal{P})_{k+1}$, we obtain

$$f_k = G^k f_0 + \omega \sum_{i=0}^{k-1} G^i g = G^k f_0 + (I - G^k) K^{-1} g = G^k f_0 + f - G^k f.$$
(4.4)

This implies that

$$f_k - f = G^k (f_0 - f), \qquad u_k(y) - u(y) = S(y)\, G^k (f_0 - f).$$
(4.5)
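In the eigenbasis $\phi_n$, the operator $K = S(b)$ acts diagonally by $e^{-b\sqrt{\lambda_n}}$, so iteration (4.2) decouples into scalar recursions on the Fourier coefficients. A Python/NumPy sketch with illustrative coefficients ($\lambda_n = n^2$ as in Example 1.1, noise-free data; the paper's experiments were done in MATLAB):

```python
import numpy as np

b, omega, steps = 1.0, 1.0, 2000            # omega < 1/||K|| = e^{b sqrt(lambda_1)} = e
lam = np.array([1.0, 4.0, 9.0, 16.0])       # lambda_n = n^2
K = np.exp(-b * np.sqrt(lam))               # diagonal action of K = S(b)
f_true = np.array([1.0, 0.5, 0.25, 0.125])  # illustrative Fourier coefficients of f
g = K * f_true                              # exact data g = K f

f = np.zeros_like(f_true)                   # arbitrary initial guess f_0
for _ in range(steps):
    f = f - omega * (K * f - g)             # f_{k+1} = f_k - omega (K f_k - g)

err = np.max(np.abs(f - f_true))
print(err)
```

Each mode contracts by the factor $1 - \omega e^{-b\sqrt{\lambda_n}} \in\, ]0,1[$; the high modes converge very slowly, which is the source of the logarithmic rate in Theorem 4.1.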

Proposition 4.1 The operator $G = (I - \omega K)$ is self-adjoint and non-expansive on $H$. Moreover, 1 is not an eigenvalue of $G$.

Proof The self-adjointness follows from the definition of $G$ (see Theorem 2.1). Since $0 < 1 - \omega e^{-b\sqrt{\lambda}} < 1$ for $\lambda \in \sigma(A)$, we have $\sigma_p(G) \subset\, ]0,1[$; hence 1 is not an eigenvalue of $G$. □

In general, the exact solution $u(0) = f \in H$ is required to satisfy a so-called source condition; otherwise, the convergence of the regularization method approximating the problem can be arbitrarily slow. Since our problem is exponentially ill-posed (the eigenvalues $s_n = e^{-b\sqrt{\lambda_n}}$ of $K$ converge exponentially to 0), it is well known in this case [16, 17] that the best choice to accelerate the convergence of the regularization method is to use a logarithmic-type source condition, i.e.,

$$(f_0 - f) = \Psi_\beta(\omega K)\,\xi, \quad \xi \in H, \ \|\xi\| \le E,$$
(4.6)

where

$$\Psi_\beta(t) = \begin{cases} \bigl(\ln\frac{e}{t}\bigr)^{-\beta}, & 0 < t \le 1, \\ 0, & t = 0, \end{cases}$$

with β>0.

Remark 4.1 [[16], p.34]

The logarithmic source condition $\zeta = (f_0 - f) \in R(\Psi_\beta(\omega K))$ is equivalent to the inclusion $\zeta \in R(A^{-\beta/2}) = D(A^{\beta/2})$.

Proof The proof is based on the following equivalence:

$$\sum_{n=1}^{\infty} \Bigl(\ln\frac{e}{\omega} + b\sqrt{\lambda_n}\Bigr)^{2\beta} |\zeta_n|^2 < +\infty \ \Longleftrightarrow\ \sum_{n=1}^{\infty} \lambda_n^{\beta} |\zeta_n|^2 < +\infty.$$

 □
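The computation behind this equivalence, written out on the eigenbasis (a sketch; constants are absorbed into $\asymp$):

```latex
\Psi_{\beta}\bigl(\omega e^{-b\sqrt{\lambda_n}}\bigr)
   = \Bigl(\ln \frac{e}{\omega e^{-b\sqrt{\lambda_n}}}\Bigr)^{-\beta}
   = \bigl(\ln(e/\omega) + b\sqrt{\lambda_n}\bigr)^{-\beta}
   \asymp \lambda_n^{-\beta/2}, \qquad n \to \infty,
```

so membership of $\zeta$ in $R(\Psi_\beta(\omega K))$ is decided by the same summability condition as membership in $R(A^{-\beta/2}) = D(A^{\beta/2})$.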

Lemma 4.1 [[18], Appendix, Lemma A.1]

Let $\beta > 0$ and $k \in \mathbb{N}$, $k \ge 2$. Then the real-valued function $\tau(t) = (1-t)^k \bigl(\ln\frac{e}{t}\bigr)^{-\beta}$ defined on $[0,1]$ satisfies

$$\tau(t) \le C (\ln k)^{-\beta}.$$
(4.7)

Remark 4.2 Let $k \in \mathbb{N}^*$. Then the real-valued function $\varrho(t) = 1 - (1-t)^k$ defined on $[0,1]$ satisfies

$$\varrho(t) \le kt.$$
(4.8)

Proof Using the mean value theorem, we can write

$$\varrho(t) - \varrho(0) = (t - 0)\,\varrho'(\hat{t}\,), \quad 0 < \hat{t} < t,$$

hence

$$\varrho(t) = t\, k (1 - \hat{t}\,)^{k-1} \le kt.$$

 □

Let us consider the following real-valued functions:

$$Q(\lambda) = \bigl(1 - \omega e^{-b\sqrt{\lambda}}\bigr)^k \Bigl(\ln\frac{e}{\omega e^{-b\sqrt{\lambda}}}\Bigr)^{-\beta}, \quad \lambda \in [\lambda_1, +\infty[,$$
$$P(\lambda) = \omega \sum_{i=0}^{k-1} \bigl(1 - \omega e^{-b\sqrt{\lambda}}\bigr)^i = \omega\, \frac{1 - (1 - \omega e^{-b\sqrt{\lambda}})^k}{\omega e^{-b\sqrt{\lambda}}}, \quad \lambda \in [\lambda_1, +\infty[.$$

Using the change of variables $t = \vartheta(\lambda) = \omega e^{-b\sqrt{\lambda}}$, we obtain

$$\hat{Q}(t) = Q\bigl(\vartheta^{-1}(t)\bigr) = (1-t)^k \Bigl(\ln\frac{e}{t}\Bigr)^{-\beta}, \quad t \in [0,1],$$
$$\hat{P}(t) = P\bigl(\vartheta^{-1}(t)\bigr) = \begin{cases} \omega\, \dfrac{1 - (1-t)^k}{t}, & t \in\, ]0,1], \\ \omega k, & t = 0. \end{cases}$$

Now we are in a position to state the main result of this method.

Theorem 4.1 Let $g \in E_1$, let $\omega$ satisfy $0 < \omega < e^{b\sqrt{\lambda_1}}$, let $f_0$ be an arbitrary initial element for the iterative procedure suggested above, and let $u_k$ be the $k$th approximate solution. Then we have

$$\sup_{y \in [0,+\infty[} \|u(y) - u_k(y)\| \to 0, \quad k \to \infty.$$
(4.9)

Moreover, if $(f_0 - f) \in H_\beta$ ($\beta > 0$), i.e., $(f_0 - f) = \Psi_\beta(\omega K)\xi$, $\xi \in H$, $\|\xi\| \le E$, then the rate of convergence of the method is given by

$$\sup_{y \in [0,+\infty[} \|u(y) - u_k(y)\| \le C E \Bigl(\frac{1}{\ln k}\Bigr)^{\beta}, \quad k \ge 2.$$
(4.10)

Proof By virtue of Proposition 4.1 and Theorem 2.7, it follows immediately that

$$\sup_{y \in [0,+\infty[} \|u(y) - u_k(y)\| \le \bigl\|G^k(f_0 - f)\bigr\| \to 0, \quad k \to \infty.$$

We have

$$\|u(y) - u_k(y)\|^2 = \bigl\|S(y)\, G^k (f_0 - f)\bigr\|^2 \le \bigl\|G^k(f_0 - f)\bigr\|^2 = \sum_{n=1}^{\infty} Q(\lambda_n)^2 |(\xi, \phi_n)|^2 \le \Bigl(\sup_{t \in [0,1]} \hat{Q}(t)\Bigr)^2 \|\xi\|^2 \le \Bigl(\sup_{t \in [0,1]} \hat{Q}(t)\Bigr)^2 E^2,$$

and by virtue of Lemma 4.1 (estimate (4.7)), we conclude the desired estimate. □

Theorem 4.2 Let $g \in E_1$, let $\omega$ satisfy $0 < \omega < e^{b\sqrt{\lambda_1}}$, let $f_0$ be an arbitrary initial element for the iterative procedure suggested above, and let $u_k$ (resp. $u_k^\delta$) be the $k$th approximate solution for the exact data $g$ (resp. for the inexact data $g^\delta$) such that $\|g - g^\delta\| \le \delta$. Then, under condition (4.6), the following inequality holds:

$$\sup_{y \in [0,+\infty[} \|u(y) - u_k^\delta(y)\| \le C E \Bigl(\frac{1}{\ln k}\Bigr)^{\beta} + \varepsilon(k)\,\delta,$$

where $\varepsilon(k) = \omega \bigl\| \sum_{i=0}^{k-1} (I - \omega K)^i \bigr\| \le k\omega$.

Proof Using (4.4) and the triangle inequality, we can write

$$f_k = G^k f_0 + \omega \sum_{i=0}^{k-1} G^i g, \qquad u_k(y) = S(y) f_k,$$
(4.11)
$$f_k^\delta = G^k f_0 + \omega \sum_{i=0}^{k-1} G^i g^\delta, \qquad u_k^\delta(y) = S(y) f_k^\delta, \qquad \|u(y) - u_k^\delta(y)\| \le \|u(y) - u_k(y)\| + \|u_k(y) - u_k^\delta(y)\| = \Delta_1 + \Delta_2,$$
(4.12)

where

$$\Delta_1 = \|u(y) - u_k(y)\| \le C E \Bigl(\frac{1}{\ln k}\Bigr)^{\beta}, \quad k \ge 2,$$
(4.13)

and

$$\Delta_2 = \|u_k(y) - u_k^\delta(y)\| = \bigl\|S(y)(f_k - f_k^\delta)\bigr\| = \Bigl\|\omega\, S(y) \sum_{i=0}^{k-1} G^i (g - g^\delta)\Bigr\| \le \omega \Bigl\|\sum_{i=0}^{k-1} G^i\Bigr\| \|g - g^\delta\| \le \omega \Bigl\|\sum_{i=0}^{k-1} G^i\Bigr\| \delta = \hat{\Delta}_2.$$

By using inequality (4.8), the quantity $\hat{\Delta}_2$ can be estimated as follows:

$$\hat{\Delta}_2 \le \omega k \delta.$$
(4.14)

Combining (4.13) and (4.14) and taking the supremum with respect to $y \in [0,+\infty[$, we obtain the desired bound.

Remark 4.3 Choosing $k = k(\delta)$ such that $k(\delta) \to +\infty$ and $\omega k(\delta)\delta \to 0$ as $\delta \to 0$, we obtain

$$\sup_{y \in [0,+\infty[} \|u(y) - u_{k(\delta)}^\delta(y)\| \to 0 \quad \text{as } \delta \to 0.$$

 □

5 Numerical results

In this section we give a two-dimensional numerical test to show the feasibility and efficiency of the proposed methods. The numerical experiments were carried out in MATLAB.

We consider the following inverse problem:

$$\begin{cases} u_{yy}(x,y) + u_{xx}(x,y) = 0, & x \in (0,\pi),\ y \in (0,+\infty), \\ u(0,y) = u(\pi,y) = 0, & y \in (0,+\infty), \\ u(x,0) = f(x), \quad u(x,+\infty) = 0, & x \in [0,\pi], \end{cases}$$
(5.1)

where $f(x)$ is the unknown boundary condition and $u(x,1) = g(x)$ is the supplementary condition (i.e., $b = 1$).

It is easy to check that the operator

$$A = -\frac{\partial^2}{\partial x^2}, \qquad D(A) = H_0^1(0,\pi) \cap H^2(0,\pi) \subset H = L^2(0,\pi)$$

is positive, self-adjoint with compact resolvent (A is diagonalizable).

The eigenpairs ( λ n , ϕ n ) of A are

$$\lambda_n = n^2, \qquad \phi_n(x) = \sqrt{\frac{2}{\pi}} \sin(nx), \quad n \in \mathbb{N}^*.$$

In this case, formula (3.7) takes the form

$$f(x) = u(x,0) = K^{-1} g(x) = \frac{2}{\pi} \sum_{n=1}^{+\infty} e^{n} \Bigl( \int_0^\pi g(x) \sin(nx)\, dx \Bigr) \sin(nx).$$
(5.2)

Truncation method

We use the trapezoidal rule to approximate the integral and truncate the series at its first $N$ terms. Considering an equidistant grid $0 = x_1 < \cdots < x_{M+1} = \pi$, $x_j = (j-1)h = (j-1)\frac{\pi}{M}$, $j = 1, \dots, M+1$, we get

$$f(x_j) = \frac{2}{\pi} \sum_{i=1}^{M+1} \sum_{n=1}^{+\infty} e^{n} \bigl( h\, g(x_i) \sin(n x_i) \bigr) \sin(n x_j),$$
(5.3)
$$f_N(x_j) = \frac{2}{\pi} \sum_{i=1}^{M+1} \sum_{n=1}^{N} e^{n} \bigl( h\, g(x_i) \sin(n x_i) \bigr) \sin(n x_j),$$
(5.4)
$$f_N^\delta(x_j) = \frac{2}{\pi} \sum_{i=1}^{M+1} \sum_{n=1}^{N} e^{n} \bigl( h\, g^\delta(x_i) \sin(n x_i) \bigr) \sin(n x_j).$$
(5.5)

In the following, we consider an example which has an exact expression for the solution $(u(x,y), f(x))$.

Example

If $u(x,0) = \sqrt{\frac{2}{\pi}}\, e \sin(x)$, then the function $u(x,y) = \sqrt{\frac{2}{\pi}}\, e^{1-y} \sin(x)$ is the exact solution of problem (5.1). Consequently, the data function is $g(x) = u(x,1) = \sqrt{\frac{2}{\pi}} \sin(x)$.

Adding a random distributed perturbation (obtained by the Matlab command randn) to each data function, we obtain the vector g δ :

$$g^\delta = g + \varepsilon\, \mathrm{randn}\bigl(\mathrm{size}(g)\bigr),$$

where ε indicates the noise level of the measurement data and the function ‘randn()’ generates arrays of random numbers whose elements are normally distributed with mean 0, variance σ 2 =1, and standard deviation σ=1. ‘randn(size(g))’ returns an array of random entries that is the same size as g. The bound on the measurement error δ can be measured in the sense of Root Mean Square Error (RMSE) according to

$$\delta = \|g^\delta - g\| = \Bigl( \frac{1}{M+1} \sum_{i=1}^{M+1} \bigl( g(x_i) - g^\delta(x_i) \bigr)^2 \Bigr)^{1/2}.$$

Using g δ as a data function, we obtain the computed approximation f N δ by (5.5). The relative error E r (f) is given by

$$E_r(f) = \frac{\|f_N^\delta - f\|}{\|f\|}.$$
(5.6)
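The whole truncation experiment — noisy data, formula (5.5), the RMSE noise bound $\delta$ and the relative error (5.6) — can be sketched in Python/NumPy (the paper used MATLAB; the seed, $M$, $N$ and $\varepsilon$ below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, eps = 40, 4, 0.01                     # grid intervals, truncation term, noise level
x = np.linspace(0.0, np.pi, M + 1)          # x_j = (j-1)h, h = pi/M
h = np.pi / M

f_exact = np.sqrt(2.0 / np.pi) * np.e * np.sin(x)   # u(x, 0)
g = np.sqrt(2.0 / np.pi) * np.sin(x)                # data u(x, 1)
g_delta = g + eps * rng.standard_normal(g.shape)    # noisy measurement

# formula (5.5): f_N^delta(x_j) = (2/pi) sum_i sum_{n<=N} e^n h g^delta(x_i) sin(n x_i) sin(n x_j)
f_N = np.zeros_like(x)
for n in range(1, N + 1):
    coeff = h * np.sum(g_delta * np.sin(n * x))     # quadrature of the n-th Fourier integral
    f_N += (2.0 / np.pi) * np.exp(n) * coeff * np.sin(n * x)

delta = np.sqrt(np.mean((g - g_delta) ** 2))        # RMSE noise bound
rel_err = np.linalg.norm(f_N - f_exact) / np.linalg.norm(f_exact)
print(delta, rel_err)
```

Raising $N$ beyond a few terms makes the factor $e^n$ amplify the noise and the relative error deteriorates, which is the trade-off Remark 3.2 describes.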

Kozlov-Maz’ya iteration method

By using central differences with step length $h = \frac{\pi}{N+1}$ to approximate the second derivative $u_{xx}$, we can get the following semi-discrete problem (a system of ordinary differential equations):

$$\begin{cases} u_{yy}(x_i, y) - (A_h u)(x_i, y) = 0, & x_i = ih,\ i = 1,\dots,N,\ y \in (0,+\infty), \\ u(x_0 = 0, y) = u(x_{N+1} = \pi, y) = 0, & y \in (0,+\infty), \\ u(x_i, 0) = f(x_i), \quad u(x_i, +\infty) = 0, & x_i = ih,\ i = 1,\dots,N, \end{cases}$$
(5.7)

where $A_h$ is the discretization matrix stemming from the operator $A = -\frac{d^2}{dx^2}$:

$$A_h = \frac{1}{h^2}\, \mathrm{Tridiag}(-1, 2, -1) \in \mathcal{M}_N(\mathbb{R})$$

is a symmetric, positive definite matrix. We assume that the discretization is fine enough so that the discretization errors are small compared to the uncertainty $\delta$ in the data; this means that $A_h$ is a good approximation of the differential operator $A = -\frac{d^2}{dx^2}$, whose unboundedness is reflected in the large norm of $A_h$ (see [[19], p.5]). The eigenpairs $(\mu_k, e_k)$ of $A_h$ are given by

$$\mu_k = 4\Bigl(\frac{N+1}{\pi}\Bigr)^2 \sin^2\Bigl(\frac{k\pi}{2(N+1)}\Bigr), \qquad e_k = \Bigl(\sin\Bigl(\frac{jk\pi}{N+1}\Bigr)\Bigr)_{j=1}^{N}, \quad k = 1,\dots,N.$$

The discrete iterative approximation of (4.12) takes the form

$$f_k^\delta(x_j) = \bigl( (I - \omega K_h)^k f_0 \bigr)(x_j) + \omega \sum_{i=0}^{k-1} \bigl( (I - \omega K_h)^i g^\delta \bigr)(x_j), \quad j = 1,\dots,N,$$
(5.8)

where $K_h = e^{-\sqrt{A_h}}$ and $\omega < \frac{1}{\|K_h\|} = e^{\sqrt{\mu_1}} = 2.7881$.
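A hedged Python/NumPy sketch of the discrete procedure (the paper used MATLAB; $N$, the safety factor on $\omega$ and the iteration count are illustrative): build $A_h$, check the eigenvalue formula, form $K_h = e^{-\sqrt{A_h}}$ through the spectral decomposition, and run iteration (5.8) with the exact data of the example.

```python
import numpy as np

N = 39
h = np.pi / (N + 1)
# A_h = (1/h^2) tridiag(-1, 2, -1), discretizing A = -d^2/dx^2 with Dirichlet conditions
A_h = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

mu, V = np.linalg.eigh(A_h)                   # ascending eigenvalues, orthonormal eigenvectors
k = np.arange(1, N + 1)
mu_formula = 4.0 * ((N + 1) / np.pi) ** 2 * np.sin(k * np.pi / (2.0 * (N + 1))) ** 2
gap = np.max(np.abs(mu - mu_formula))         # matches the closed-form eigenvalues

K_h = V @ np.diag(np.exp(-np.sqrt(mu))) @ V.T # K_h = exp(-sqrt(A_h)), here b = 1
omega = 0.9 * np.exp(np.sqrt(mu[0]))          # omega < 1/||K_h|| = e^{sqrt(mu_1)}

x = h * k
g = np.sqrt(2.0 / np.pi) * np.sin(x)          # exact data of the Section 5 example
f = np.zeros(N)                               # f_0 = 0
for _ in range(200):                          # iteration (5.8) with exact data
    f = f - omega * (K_h @ f - g)

f_exact = np.sqrt(2.0 / np.pi) * np.e * np.sin(x)
rel = np.linalg.norm(f - f_exact) / np.linalg.norm(f_exact)
print(gap, rel)                               # small discretization/iteration errors
```

Since the data of this example lie along the first eigenvector of $A_h$, the iteration converges quickly here; with noisy data one would stop early, in the spirit of Remark 4.3.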

Figures 1-4 and Table 1 show the comparisons between the exact solution and its computed approximations for different values of $N$, $M$ and $\varepsilon$.

Figure 1 TM with (noise level = 0.01, truncation term = 4, grid points of TR = 21).

Figure 2 TM with (noise level = 0.01, truncation term = 4, grid points of TR = 41).

Figure 3 TM with (noise level = 0.001, truncation term = 4, grid points of TR = 21).

Figure 4 TM with (noise level = 0.001, truncation term = 4, grid points of TR = 41).

Table 1 Truncation method: Relative error E r (f)

Figures 5-12 and Table 2 show the comparisons between the exact solution and its computed approximations for different values of $N$, $k$, $\omega$ and $\varepsilon$.

Figure 5 ω = 1.3941.

Figure 6 ω = 1.8587.

Figure 7 ω = 1.9517.

Figure 8 ω = 2.2305.

Figure 9 ω = 1.3941.

Figure 10 ω = 1.8587.

Figure 11 ω = 1.9517.

Figure 12 ω = 2.2305.

Table 2 Kozlov-Maz’ya method: Relative error E r (f)

Conclusion

The numerical results (Figures 1-4) are quite satisfactory. Even with the noise level ε = 0.01, the numerical solutions are still in good agreement with the exact solution. In addition, the numerical results (Figures 5-12) are best for (ω = 2.2305, ε = 0.01) and (ω = 1.9517, ε = 0.001), and the other values are also acceptable.

In this study, a convergent and stable reconstruction of an unknown boundary condition has been obtained using two regularization methods: the truncation method and the Kozlov-Maz’ya iteration method. Both theoretical and numerical studies have been provided.

Future work will address the effect of errors in the computed eigenfunctions and eigenvalues of the operator A on the truncation method. The question is how to obtain an optimal balance between the accuracy of the eigensystem and the noise level of the input data.

References

  1. Cosner C, Rundell W: Extension of solutions of second order partial differential equations by the method of quasireversibility. Houst. J. Math. 1984, 10(3):357-370.

  2. Levine HA, Vessella S: Estimates and regularization for solutions of some ill-posed problems of elliptic and parabolic type. Rend. Circ. Mat. Palermo 1985, 34:141-160.

  3. Ivanov DY: Inverse boundary value problem for an abstract elliptic equation. Differ. Equ. 2000, 36(4):579-586.

  4. Dunford N, Schwartz J: Linear Operators, Part II. Wiley, New York; 1967.

  5. Pazy A: Semigroups of Linear Operators and Application to Partial Differential Equations. Springer, New York; 1983.

  6. Brezis H: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Springer, New York; 2011.

  7. Prilepko AI, Orlovsky DG, Vasin IA: Methods for Solving Inverse Problems in Mathematical Physics. Monographs and Textbooks in Pure and Applied Mathematics 222. Marcel Dekker, New York; 2000.

  8. Shlapunov A: On iterations of non-negative operators and their applications to elliptic systems. Math. Nachr. 2000, 218:165-174.

  9. Krasnosel’skii MA, Vainikko GM, Zabreiko PP, Rutitskii YB: Approximate Solutions of Operator Equations. Wolters-Noordhoff, Groningen; 1972.

  10. Krein SG: Linear Differential Equations in Banach Space. Am. Math. Soc., Providence; 1971.

  11. Kozlov VA, Maz’ya VG: On iterative procedures for solving ill-posed boundary value problems that preserve differential equations. Leningr. Math. J. 1990, 1:1207-1228.

  12. Kozlov VA, Maz’ya VG, Fomin AV: An iterative method for solving the Cauchy problem for elliptic equations. U.S.S.R. Comput. Math. Math. Phys. 1991, 31(1):45-52.

  13. Bastay G: Iterative Methods for Ill-Posed Boundary Value Problems. Linköping Studies in Science and Technology, Dissertations No. 392. Linköping University, Linköping; 1995.

  14. Baumeister J, Leitao A: On iterative methods for solving ill-posed problems modeled by partial differential equations. J. Inverse Ill-Posed Probl. 2001, 9(1):13-29.

  15. Maxwell D: Kozlov-Maz’ya iteration as a form of Landweber iteration. arXiv:1107.2194v1 [math.AP]; 2011.

  16. Bakushinsky AB, Kokurin MY: Iterative Methods for Approximate Solution of Inverse Problems. Springer, Dordrecht; 2004.

  17. Hohage T: Regularization of exponentially ill-posed problems. Numer. Funct. Anal. Optim. 2000, 21:439-464.

  18. Deuflhard P, Engl HW, Scherzer O: A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions. Inverse Probl. 1998, 14:1081-1106.

  19. Eldén L, Simoncini V: A numerical solution of a Cauchy problem for an elliptic equation by Krylov subspaces. Inverse Probl. 2009, 25: Article ID 065002.


Acknowledgements

The authors would like to thank the editor and the anonymous referees for their valuable comments and helpful suggestions that improved the quality of our paper. This work is supported by the DGRST of Algeria (PNR Project 2011-code: 8\92 u23\92 997).

Author information

Correspondence to Nadjib Boussetila.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors have contributed equally. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Bouzitouna, A., Boussetila, N. & Rebbani, F. Two regularization methods for a class of inverse boundary value problems of elliptic type. Bound Value Probl 2013, 178 (2013). https://doi.org/10.1186/1687-2770-2013-178
