On a Neumann boundary control in a parabolic system

Abstract

In this paper we deal with the control of a boundary condition in a one-dimensional parabolic system. The aim of the control is to find the right-hand side boundary function for which the solution of the system at the final time is as close as possible to a desired target. Since problems of this type are ill posed, we use a regularized solution. We test the theoretical results by numerical examples.

1 Introduction

We consider the following one-dimensional parabolic partial differential equation:

$$\begin{aligned}& \frac{\partial u}{\partial t} = k\frac{\partial^{2}u}{\partial x^{2}} + h ( x,t ), \quad ( x,t ) \in Q = ( 0,l ) \times ( 0,T ), \end{aligned}$$
(1.1)
$$\begin{aligned}& u ( x,0 ) = u_{0} ( x ),\quad x \in \Omega = ( 0,l ), \end{aligned}$$
(1.2)
$$\begin{aligned}& \frac{\partial u}{\partial x} ( 0,t ) = 0,\qquad \frac{\partial u}{\partial x} ( l,t ) = g ( t ), \quad t \in ( 0,T ), \end{aligned}$$
(1.3)

where \(k > 0\) and \(h ( x,t )\), \(u_{0} ( x )\) are given functions satisfying the following conditions:

$$ u_{0} ( x ) \in H^{1} ( \Omega ),\qquad h ( x,t ) \in L_{2} ( Q ). $$
(1.4)

We want to obtain a suitably sized boundary function \(g ( t ) \in H^{1} ( 0,T )\) that brings the solution of the problem (1.1)-(1.3) close to the desired target \(y ( x ) \in L_{2} ( 0,l )\) at the final time \(t = T\).

To this end we use the following cost functional:

$$ J ( g ) = \bigl\Vert u ( x,T;g ) - y ( x ) \bigr\Vert _{L_{2} ( 0,l )}^{2} $$
(1.5)

and solving the problem

$$ J_{*} = \inf J ( g ) = J ( g_{*} ). $$
(1.6)

On the other hand, we know that the problem (1.6) is numerically ill posed. In other words, quite different functions \(g ( t )\) can make the value of the functional (1.5) nearly minimal. Therefore, instead of the functional (1.5), we introduce the new functional

$$ J_{\alpha} ( g ) = \bigl\Vert u ( x,T;g ) - y ( x ) \bigr\Vert _{L_{2} ( 0,l )}^{2} + \alpha \Vert g \Vert _{H^{1} ( 0,T )}^{2} $$
(1.7)

and solve the problem

$$ J_{\alpha *} = \inf J_{\alpha} ( g ) = J_{\alpha} ( g_{*} ). $$
(1.8)

Here \(\alpha > 0\) is a regularization parameter which ensures both the uniqueness of the solution and a balance between the norms \(\Vert u ( x,T;g ) - y ( x ) \Vert _{L_{2} ( 0,l )}^{2}\) and \(\Vert g \Vert _{H^{1} ( 0,T )}^{2}\). We show the ill-posedness for \(\alpha = 0\) by a numerical example. Detailed information on the regularization parameter can be found in [1].

2 Some previous works and the distinguishing aspects of this work

Neumann boundary control problems with different objective functionals have received a great deal of attention in recent years [2-5]. Important studies involving final-time targets include the following.

In his famous work, Lions [6] considered the control u in the parabolic system

$$\begin{aligned}& \frac{\partial}{\partial t}y ( u ) + A ( t )y ( u ) = f \quad \mbox{in } Q, \\& y ( x,0;u ) = y_{0} ( x ),\quad x \in \Omega, \\& \frac{\partial}{\partial \nu} y ( u ) = u \quad \mbox{on } \Sigma \ (\mbox{boundary of } \Omega) \end{aligned}$$

minimizing the cost function

$$J ( u ) = \int_{\Omega} \bigl( y ( x,T;u ) - z_{d} \bigr)^{2}\, dx + ( Nu,u )_{L_{2} ( \Sigma )} $$

with target \(z_{d}\) and operator N. Taking \(f \in L_{2} ( Q )\), \(y_{0} \in L_{2} ( \Omega )\), \(u \in L_{2} ( \Sigma )\), he gave the optimality conditions.

Hasanoğlu [7] considered the boundary value problem

$$\begin{aligned}& u_{t} = \bigl( k ( x )u_{x} \bigr)_{x} + F ( x,t ),\quad ( x,t ) \in \Omega_{T}: = ( 0,l ) \times ( 0,T ], \\& u ( x,0 ) = \mu_{0} ( x ), \quad x \in ( 0,l ), \\& u_{x} ( 0,t ) = 0, \qquad - k ( l )u_{x} ( l,t ) = \nu \bigl[ u ( l,t ) - T_{0} ( t ) \bigr],\quad t \in ( 0,T ] \end{aligned}$$

and investigated the determination of the pair \(w: = \{ F ( x,t ),T_{0} ( t ) \}\) in the set

$$F ( x,t ) \in H^{0} ( \Omega_{T} ),\qquad T_{0} ( t ) \in H^{0} [ 0,T ],\quad 0 < T_{0*} \le T_{0} ( t ) \le T^{0*} \mbox{ a.e. }\forall t \in [ 0,T ] $$

minimizing the functional

$$J ( w ) = \int_{0}^{l} \bigl[ u ( x,T;w ) - \mu_{T} ( x ) \bigr]^{2}\, dx. $$

Hasanoğlu obtained the Fréchet derivative of the functional, constructed a minimizing sequence, and showed that this sequence converges weakly to the quasi-solution of the problem.

Dhamo and Tröltzsch [8] investigated controllability aspects of optimal parabolic boundary control problems of the type

$$\min J ( y,u ) = \frac{1}{2}\int_{0}^{1} \bigl( y ( x,T ) - y_{d} ( x ) \bigr)^{2}\, dx $$

subject to the one-dimensional heat equation

$$\begin{aligned}& y_{t} ( x,t ) = y_{xx} ( x,t ),\quad ( x,t ) \in ( 0,1 ) \times ( 0,T ), \\& y ( x,0 ) = 0,\quad x \in ( 0,1 ), \\& y_{x} ( 0,t ) = 0,\qquad y_{x} ( 1,t ) = u ( t ), \quad t \in ( 0,T ) \end{aligned}$$

on the set of feasible controls

$$U_{\mathrm{ad}} = \bigl\{ u \in L_{2} ( 0,T ):\vert u \vert \le 1 \mbox{ a.e. in } [ 0,T ] \bigr\} . $$

Altmüller and Grüne [9] studied the stability properties of model predictive control without terminal constraints applied to the heat equation,

$$\begin{aligned}& y_{t} ( x,t ) = y_{xx} ( x,t ) + \mu y ( x,t ) \quad \mbox{on } \Omega \times ( 0,\infty ), \\& y ( x,0 ) = y_{0} ( x ) \quad \mbox{on } \Omega, \\& y ( 0,t ) = 0,\qquad y_{x} ( 1,t ) = v ( t ) \quad \mbox{on } ( 0, \infty ) \end{aligned}$$

by the cost functional

$$l ( y,v ) = \frac{1}{2}\bigl\Vert y (\cdot,nT ) \bigr\Vert _{L_{2} ( \Omega )}^{2} + \frac{\lambda}{2}\bigl\Vert v ( nT ) \bigr\Vert _{L_{2} ( \Omega )}^{2} $$

on the control set \(L_{\infty} ( [ 0,T ] )\).

This work chooses more regular controls than the previous works [6, 8, 9]: we take the controls in the closed and convex set \(G_{\mathrm{ad}} \subset H^{1} ( 0,T )\). This choice adds the \(H^{1} ( 0,T )\)-norm of the control to the functional. When the control lies in the space \(L_{2}\), the Fréchet derivative contains only the solution of the adjoint equation; in the \(H^{1} ( 0,T )\) case it contains, in addition, the solution of a second-order ordinary differential equation.

Numerical examples are rarely encountered in the literature, so this work contains a detailed numerical investigation: both the ill-posedness for \(\alpha = 0\) and the regularizing effect of the parameter for \(\alpha > 0\) are illustrated in detail.

3 A motivation for the problem

In this section we give a motivation for the problem. Consider a wire with diffusivity constant k, heated by a discontinuous heat source h. The initial temperature distribution is \(u_{0}\), the left end is insulated, and the right end is subject to a heat flux \(g ( t )\). The heat flux intensity function \(g ( t )\) produces the heat distribution \(u ( x,t;g )\), which is the solution of the problem (1.1)-(1.3).

We want to control, via α, both the magnitude of the heat flux function \(g ( t )\) and the distance between the heat distribution u at the final time T and \(y ( x )\). The optimal values are denoted by \(g_{*}\) and \(J_{*}\) (see Figure 1).

Figure 1. Scheme for the problem.

4 Existence and uniqueness of optimal solution

In this section we prove the existence and uniqueness of optimal solution. Let us define the closed and convex subset \(G_{\mathrm{ad}} \subset H^{1} ( 0,T )\) of admissible controls.

First of all we know from [10], p.33, that for every \(u_{0} ( x ) \in H^{1} ( \Omega )\), \(h ( x,t ) \in L_{2} ( Q )\), and \(g ( t ) \in H^{1} ( 0,T )\), the boundary value problem (1.1)-(1.3) admits a unique solution \(u \in H^{2,1} ( Q )\) that depends continuously on h, \(u_{0}\), and g by the following estimate:

$$ \Vert u \Vert _{H^{2,1} ( Q )}^{2} \le c_{1} \bigl( \Vert h \Vert _{L_{2} ( Q )}^{2} + \Vert u_{0} \Vert _{H^{1} ( \Omega )}^{2} + \Vert g \Vert _{H^{1} ( 0,T )}^{2} \bigr), $$
(4.1)

where \(c_{1}\) is a constant independent of h, \(u_{0}\), and g. Before giving the existence and uniqueness theorem for an optimal solution, we rewrite the cost functional \(J_{\alpha} ( g )\) given by (1.7) as follows:

$$ J_{\alpha} ( g ) = \int_{0}^{l} \bigl[ u ( x,T;g ) - u ( x,T;0 ) + u ( x,T;0 ) - y ( x ) \bigr]^{2}\, dx + \alpha \int_{0}^{T} \bigl[ g^{2} ( t ) + \bigl( g' ( t ) \bigr)^{2} \bigr]\, dt. $$
(4.2)

Here we have added and subtracted the term \(u ( x,T;0 )\) in the functional \(J_{\alpha} ( g )\) in order to exploit the linearity of the transform \(g \to u [ g ] - u [ 0 ]\).

If we define the auxiliary functionals

$$\begin{aligned}& \pi ( g,g ) = \int_{0}^{l} \bigl[ u ( x,T;g ) - u ( x,T;0 ) \bigr] \bigl[ u ( x,T;g ) - u ( x,T;0 ) \bigr]\,dx \\& \hphantom{\pi ( g,g ) ={}}{}+ \alpha \int _{0}^{T} \bigl[ g^{2} ( t ) + \bigl( g' ( t ) \bigr)^{2} \bigr]\,dt, \end{aligned}$$
(4.3)
$$\begin{aligned}& Lg = \int_{0}^{l} \bigl[ u ( x,T;g ) - u ( x,T;0 ) \bigr] \bigl[ y ( x ) - u ( x,T;0 ) \bigr]\,dx, \end{aligned}$$
(4.4)
$$\begin{aligned}& b = \int_{0}^{l} \bigl[ y ( x ) - u ( x,T;0 ) \bigr]^{2}\,dx, \end{aligned}$$
(4.5)

then \(J_{\alpha} ( g )\) in (4.2) is briefly written as

$$ J_{\alpha} ( g ) = \pi ( g,g ) - 2Lg + b. $$
(4.6)

Due to the linearity of the transform \(g \to u [ g ] - u [ 0 ]\), it can easily be seen that the functional \(\pi ( g,g )\) defined by (4.3) is bilinear, coercive, symmetric, continuous, and strictly convex. In addition, the functional Lg is linear, continuous, and convex.

Now, we give the following theorem for the existence and uniqueness in view of [11].

Theorem 4.1

Let \(\pi ( g,g )\) be a coercive, bilinear, continuous, and symmetric form and let Lg be a linear and continuous functional. Then there is a unique element \(g_{*} \in G_{\mathrm{ad}}\) such that

$$ J_{\alpha} ( g_{*} ) = \inf_{g \in G_{\mathrm{ad}}}J_{\alpha} ( g ) $$
(4.7)

for the functional given in (4.2).

Proof

Let \(\{ g_{k} \} \subset G_{\mathrm{ad}}\) be a minimizing sequence for \(J_{\alpha} ( g )\). By this we mean that

$$ J_{\alpha} ( g_{k} ) \to \inf_{g \in G_{\mathrm{ad}}}J_{\alpha} ( g ) $$
(4.8)

as \(k \to \infty\). Since \(b \ge 0\), the coercivity of \(\pi ( g,g )\) and the continuity of Lg give

$$ J_{\alpha} ( g ) \ge \alpha \Vert g \Vert _{H^{1} ( 0,T )}^{2} - 2c_{2}\Vert g \Vert _{H^{1} ( 0,T )}. $$
(4.9)

Combining (4.8) with (4.9) we conclude that

$$ \Vert g_{k} \Vert _{H^{1} ( 0,T )} \le c_{3}. $$
(4.10)

Then the sequence \(\{ g_{k} \}\) has a subsequence \(\{ g_{k_{m}} \}\) that converges weakly to an element \(g_{*} \in H^{1} ( 0,T )\). The set \(G_{\mathrm{ad}}\) is weakly closed, since it is closed and convex. Hence

$$ g_{*} \in G_{\mathrm{ad}}. $$
(4.11)

Moreover, the transform \(g \to J_{\alpha} ( g )\) is weakly lower semicontinuous, since \(g \to \pi ( g,g )\) is weakly lower semicontinuous and \(g \to Lg\) is weakly continuous. Then by the definition of lower semicontinuity, we have

$$J_{\alpha} ( g_{*} ) \le \liminf J_{\alpha} ( g_{k_{m}} ). $$

We can write the following using (4.8):

$$J_{\alpha} ( g_{*} ) \le \inf_{g \in G_{\mathrm{ad}}}J_{\alpha} ( g ) $$

and by (4.11) we obtain

$$J_{\alpha} ( g_{*} ) = \inf_{g \in G_{\mathrm{ad}}}J_{\alpha} ( g ). $$

Hence the existence of a solution of the problem (4.7) is established.

For uniqueness we use the strict convexity of \(J_{\alpha} ( g )\), since for all \(g_{1} \ne g_{2} \in H^{1} ( 0,T )\) and \(\beta \in ( 0,1 )\),

$$\begin{aligned}& J_{\alpha} \bigl( \beta g_{1} + ( 1 - \beta )g_{2} \bigr) \\& \quad = \pi \bigl( \beta g_{1} + ( 1 - \beta )g_{2},\beta g_{1} + ( 1 - \beta )g_{2} \bigr) - 2L \bigl( \beta g_{1} + ( 1 - \beta )g_{2} \bigr) + b \\& \quad < \beta \pi ( g_{1},g_{1} ) + ( 1 - \beta )\pi ( g_{2},g_{2} ) - 2 \bigl( \beta Lg_{1} + ( 1 - \beta )Lg_{2} \bigr) + b \\& \quad = \beta \bigl\{ \pi ( g_{1},g_{1} ) - 2Lg_{1} + b \bigr\} + ( 1 - \beta ) \bigl\{ \pi ( g_{2},g_{2} ) - 2Lg_{2} + b \bigr\} \\& \quad = \beta J_{\alpha} ( g_{1} ) + ( 1 - \beta )J_{\alpha} ( g_{2} ). \end{aligned}$$

Now let \(g_{1}\) and \(g_{2}\) be two elements satisfying

$$J_{\alpha} ( g_{1} ) = J_{\alpha} ( g_{2} ) = \inf_{g \in G_{\mathrm{ad}}}J_{\alpha} ( g ). $$

Since the set \(G_{\mathrm{ad}}\) is convex,

$$\frac{1}{2} ( g_{1} + g_{2} ) \in G_{\mathrm{ad}} $$

and since \(J_{\alpha} ( g )\) is strictly convex and \(g_{1} \ne g_{2}\), we get

$$J_{\alpha} \biggl( \frac{1}{2} ( g_{1} + g_{2} ) \biggr) < \frac{1}{2}J_{\alpha} ( g_{1} ) + \frac{1}{2}J_{\alpha} ( g_{2} ) = \inf_{g \in G_{\mathrm{ad}}}J_{\alpha} ( g ) $$

and this is a contradiction. Then we must have \(g_{1} = g_{2}\), so the minimum element is unique. Theorem 4.1 is proved. □

5 Well-posedness of the problem

In Section 4, we proved the existence and uniqueness of the optimal solution. In this section, we show that for \(\alpha > 0\), any minimizing sequence \(\{ g_{k} ( t ) \}\) with \(J_{\alpha} ( g_{k} ) \to J_{\alpha} ( g_{*} )\) satisfies \(\Vert g_{k} - g_{*} \Vert _{H^{1} ( 0,T )} \to 0\) as \(k \to \infty\).

For this purpose we must show that the functional \(J_{\alpha} ( g )\) is strongly convex.

Theorem 5.1

The functional \(J_{\alpha} ( g )\) is strongly convex with the convexity constant α.

Proof

By the definition of strong convexity of a functional, we must prove that

$$ J_{\alpha} \bigl( \beta g_{1} + ( 1 - \beta )g_{2} \bigr) \le \beta J_{\alpha} ( g_{1} ) + ( 1 - \beta )J_{\alpha} ( g_{2} ) - \chi \beta ( 1 - \beta )\Vert g_{1} - g_{2} \Vert _{H^{1} ( 0,T )}^{2} $$
(5.1)

for some constant \(\chi > 0\).

First, let us show that the functional \(\alpha \Vert g \Vert _{H^{1} ( 0,T )}^{2}\) is strongly convex. For all \(g_{1},g_{2} \in G_{\mathrm{ad}}\) and \(\beta \in [ 0,1 ]\), we can write

$$\begin{aligned}& \alpha \bigl\Vert \beta g_{1} + ( 1 - \beta )g_{2} \bigr\Vert _{H^{1} ( 0,T )}^{2} \\& \quad = \alpha \int_{0}^{T} \bigl[ \bigl( \beta g_{1} + ( 1 - \beta )g_{2} \bigr)^{2} + \bigl( \beta g'_{1} + ( 1 - \beta )g'_{2} \bigr)^{2} \bigr]\,dt \\& \quad = \alpha \int_{0}^{T} \bigl[ \bigl( \beta g_{1}^{2} + ( 1 - \beta )g_{2}^{2} - \beta ( 1 - \beta ) ( g_{1} - g_{2} )^{2} \bigr) \\& \qquad {}+ \bigl( \beta \bigl( g'_{1} \bigr)^{2} + ( 1 - \beta ) \bigl( g'_{2} \bigr)^{2} - \beta ( 1 - \beta ) \bigl( g'_{1} - g'_{2} \bigr)^{2} \bigr) \bigr]\,dt \\& \quad = \alpha \beta \Vert g_{1} \Vert _{H^{1} ( 0,T )}^{2} + \alpha ( 1 - \beta )\Vert g_{2} \Vert _{H^{1} ( 0,T )}^{2} - \alpha \beta ( 1 - \beta )\Vert g_{1} - g_{2} \Vert _{H^{1} ( 0,T )}^{2}. \end{aligned}$$

Hence \(\alpha \Vert g \Vert _{H^{1} ( 0,T )}^{2}\) is strongly convex with the convexity constant \(\chi = \alpha\). Recalling the expression of \(\pi ( g,g )\) and using the above equality, we have

$$\begin{aligned}& \pi \bigl( \beta g_{1} + ( 1 - \beta )g_{2},\beta g_{1} + ( 1 - \beta )g_{2} \bigr) \\& \quad = \int_{0}^{l} \bigl[ \beta \bigl( u ( x,T;g_{1} ) - u ( x,T;0 ) \bigr) + ( 1 - \beta ) \bigl( u ( x,T;g_{2} ) - u ( x,T;0 ) \bigr) \bigr]^{2}\, dx \\& \qquad {}+ \alpha \beta \Vert g_{1} \Vert _{H^{1} ( 0,T )}^{2} + \alpha ( 1 - \beta )\Vert g_{2} \Vert _{H^{1} ( 0,T )}^{2} - \alpha \beta ( 1 - \beta )\Vert g_{1} - g_{2} \Vert _{H^{1} ( 0,T )}^{2}. \end{aligned}$$

On the other hand, since the transform \(g \to u [ g ] - u [ 0 ]\) is linear and the square function is convex, we get

$$\begin{aligned}& \pi \bigl( \beta g_{1} + ( 1 - \beta )g_{2},\beta g_{1} + ( 1 - \beta )g_{2} \bigr) \\& \quad \le \beta \int_{0}^{l} \bigl[ u ( x,T;g_{1} ) - u ( x,T;0 ) \bigr]^{2}\, dx + ( 1 - \beta )\int _{0}^{l} \bigl[ u ( x,T;g_{2} ) - u ( x,T;0 ) \bigr]^{2}\, dx \\& \qquad {}+ \alpha \beta \Vert g_{1} \Vert _{H^{1} ( 0,T )}^{2} + \alpha ( 1 - \beta )\Vert g_{2} \Vert _{H^{1} ( 0,T )}^{2} - \alpha \beta ( 1 - \beta )\Vert g_{1} - g_{2} \Vert _{H^{1} ( 0,T )}^{2} \\& \quad \le \beta \pi ( g_{1},g_{1} ) + ( 1 - \beta )\pi ( g_{2},g_{2} ) - \alpha \beta ( 1 - \beta )\Vert g_{1} - g_{2} \Vert _{H^{1} ( 0,T )}^{2}. \end{aligned}$$

Hence the functional \(\pi ( g,g )\) is strongly convex with the convexity constant α. For \(J_{\alpha} ( g )\) we then get

$$\begin{aligned} J_{\alpha} \bigl( \beta g_{1} + ( 1 - \beta )g_{2} \bigr) \le& \beta \pi ( g_{1},g_{1} ) + ( 1 - \beta )\pi ( g_{2},g_{2} ) - \alpha \beta ( 1 - \beta )\Vert g_{1} - g_{2} \Vert _{H^{1} ( 0,T )}^{2} \\ &{}- 2 \bigl( \beta Lg_{1} + ( 1 - \beta )Lg_{2} \bigr) + b \end{aligned}$$

and this implies (5.1). Hence \(J_{\alpha} ( g )\) is strongly convex with the convexity constant \(\chi = \alpha\). □

Theorem 5.2

Let \(J_{\alpha} ( g )\) be strongly convex with the convexity constant α. Then any minimizing sequence \(\{ g_{k} \}\) converges strongly to the minimizer \(g_{*}\) and satisfies the following inequality:

$$ \Vert g_{k} - g_{*} \Vert _{H^{1} ( 0,T )}^{2} \le \frac{2}{\alpha} \bigl( J_{\alpha} ( g_{k} ) - J_{\alpha} ( g_{*} ) \bigr). $$
(5.2)

Proof

The proof is similar to that in [12]. Taking \(\beta = \frac{1}{2}\) in (5.1) gives

$$J_{\alpha} \biggl( \frac{1}{2}g_{k} + \frac{1}{2}g_{*} \biggr) \le \frac{1}{2}J_{\alpha} ( g_{k} ) + \frac{1}{2}J_{\alpha} ( g_{*} ) - \alpha \frac{1}{4}\Vert g_{k} - g_{*} \Vert _{H^{1} ( 0,T )}^{2}. $$

On the other hand, since

$$J_{\alpha} ( g_{*} ) \le J_{\alpha} \biggl( \frac{1}{2}g_{k} + \frac{1}{2}g_{*} \biggr) $$

we get

$$J_{\alpha} ( g_{*} ) \le \frac{1}{2}J_{\alpha} ( g_{k} ) + \frac{1}{2}J_{\alpha} ( g_{*} ) - \alpha \frac{1}{4}\Vert g_{k} - g_{*} \Vert _{H^{1} ( 0,T )}^{2} $$

and

$$\Vert g_{k} - g_{*} \Vert _{H^{1} ( 0,T )}^{2} \le \frac{2}{\alpha} \bigl( J_{\alpha} ( g_{k} ) - J_{\alpha} ( g_{*} ) \bigr). $$

This completes the proof. □

6 Obtaining the optimal solution

Up to now we have seen that the limit of a minimizing sequence is the solution of the optimal control problem. In this section, we investigate how to construct such a sequence. To do this, we must obtain the adjoint problem and the Fréchet derivative of the functional.

6.1 Adjoint problem and Fréchet derivative of the functional

The Lagrange functional for the problem can be written as follows:

$$\begin{aligned} L ( u,g,\eta ) =& \int_{0}^{l} \bigl[ u ( x,T;g ) - y ( x ) \bigr]^{2}\, dx + \alpha \int_{0}^{T} g^{2} ( t )\, dt + \alpha \int_{0}^{T} \bigl( g' ( t ) \bigr)^{2}\, dt \\ &{}+ \int_{0}^{T} \int_{0}^{l} \eta \biggl( \frac{\partial u}{\partial t} - k\frac{\partial^{2}u}{\partial x^{2}} - h ( x,t ) \biggr)\, dx\, dt. \end{aligned}$$

The stationarity condition \(\delta L = 0\) gives the adjoint problem

$$\begin{aligned}& \frac{\partial \eta}{\partial t} + k\frac{\partial^{2}\eta}{\partial x^{2}} = 0, \end{aligned}$$
(6.1)
$$\begin{aligned}& \eta ( x,T ) = - 2 \bigl[ u ( x,T;g ) - y ( x ) \bigr], \end{aligned}$$
(6.2)
$$\begin{aligned}& \eta_{x} ( 0,t ) = \eta_{x} ( l,t ) = 0. \end{aligned}$$
(6.3)

Let \(\Delta g ( t )\) be an increment of the function \(g ( t )\); then the difference function \(\Delta u ( x,t ) = u ( x,t;g + \Delta g ) - u ( x,t;g )\) is the solution of the difference problem:

$$\begin{aligned}& \frac{\partial \Delta u}{\partial t} = k\frac{\partial^{2}\Delta u}{\partial x^{2}}, \quad t \in ( 0,T ), x \in ( 0,l ), \end{aligned}$$
(6.4)
$$\begin{aligned}& \Delta u ( x,0 ) = 0,\quad x \in ( 0,l ), \end{aligned}$$
(6.5)
$$\begin{aligned}& \frac{\partial \Delta u}{\partial x} ( 0,t ) = 0, \qquad \frac{\partial \Delta u}{\partial x} ( l,t ) = \Delta g ( t ),\quad t \in ( 0,T ). \end{aligned}$$
(6.6)

Furthermore the difference function \(\Delta u ( x,t )\) satisfies the following estimate for \(t \in [ 0,T ]\):

$$ \bigl\Vert \Delta u (\cdot,t ) \bigr\Vert _{L_{2} ( 0,l )}^{2} \le c_{4} \Vert \Delta g \Vert _{H^{1} ( 0,T )}^{2}. $$
(6.7)

On the other hand, the increment of the functional corresponding to \(\Delta g ( t )\) is

$$\begin{aligned} \Delta J_{\alpha} ( g ) =& \int _{0}^{l} \bigl\{ \bigl[ u ( x,T;g + \Delta g ) - y ( x ) \bigr]^{2} - \bigl[ u ( x,T;g ) - y ( x ) \bigr]^{2} \bigr\} \,dx \\ &{}+ 2\alpha \int_{0}^{T} g ( t )\Delta g ( t ) \,dt + \alpha \int_{0}^{T} \Delta g^{2} ( t ) \,dt \\ &{}+ 2\alpha \int_{0}^{T} g' ( t ) \Delta g' ( t )\,dt + \alpha \int_{0}^{T} \bigl( \Delta g' ( t ) \bigr)^{2}\,dt \\ =& 2\int_{0}^{l} \bigl[ u ( x,T;g ) - y ( x ) \bigr]\Delta u ( x,T )\,dx + \int_{0}^{l} \Delta u^{2} ( x,T )\,dx \\ &{}+ 2\alpha \int_{0}^{T} g ( t )\Delta g ( t ) \,dt + \alpha \int_{0}^{T} \Delta g^{2} ( t )\,dt \\ &{}+ 2\alpha \int_{0}^{T} g' ( t ) \Delta g' ( t )\,dt + \alpha \int_{0}^{T} \bigl( \Delta g' ( t ) \bigr)^{2}\,dt. \end{aligned}$$
(6.8)

We can obtain the following equality using the adjoint and difference problems (a sketch of the computation is given after the display):

$$\begin{aligned}& \int_{0}^{l} 2 \bigl[ u ( x,T;g ) - y ( x ) \bigr] \Delta u ( x,T )\, dx \\& \quad = - \int_{0}^{T} k\eta ( l,t ) \Delta g ( t )\, dt. \end{aligned}$$
(6.9)
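Indeed, multiplying the difference equation (6.4) by η and integrating over Q gives

$$\int_{0}^{T} \int_{0}^{l} \eta \frac{\partial \Delta u}{\partial t}\,dx\,dt = k\int_{0}^{T} \int_{0}^{l} \eta \frac{\partial^{2}\Delta u}{\partial x^{2}}\,dx\,dt. $$

Integrating by parts in t on the left (using (6.5)) and twice in x on the right (using (6.3) and (6.6)), we get

$$\int_{0}^{l} \eta ( x,T )\Delta u ( x,T )\,dx - \int_{0}^{T} \int_{0}^{l} \eta_{t}\,\Delta u\,dx\,dt = k\int_{0}^{T} \eta ( l,t )\Delta g ( t )\,dt + k\int_{0}^{T} \int_{0}^{l} \eta_{xx}\,\Delta u\,dx\,dt. $$

By (6.1) the double integrals cancel, and substituting the final condition (6.2) yields (6.9).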

Substituting (6.9) into (6.8), we get

$$\begin{aligned} \Delta J_{\alpha} ( g ) =& - \int _{0}^{T} k\Delta g ( t )\eta ( l,t )\, dt + \int _{0}^{l} \Delta u^{2} ( x,T )\, dx \\ &{}+ 2\alpha \int_{0}^{T} g ( t )\Delta g ( t )\, dt + \alpha \int_{0}^{T} \Delta g^{2} ( t )\, dt \\ &{}+ 2\alpha \int_{0}^{T} g' ( t ) \Delta g' ( t )\, dt + \alpha \int_{0}^{T} \bigl( \Delta g' ( t ) \bigr)^{2}\, dt. \end{aligned}$$
(6.10)

In order to express the increment through the inner product of the space \(H^{1} ( 0,T )\), we introduce the function ξ, the weak solution of the following problem:

$$ \begin{aligned} &\xi '' - \xi = k\eta ( l,t ), \\ &\xi ' ( 0 ) = \xi ' ( T ) = 0. \end{aligned} $$
(6.11)

Then, substituting \(k\eta ( l,t ) = \xi '' - \xi\) from (6.11), we write

$$\begin{aligned} \Delta J_{\alpha} ( g ) =& - \int _{0}^{T} \xi ''\Delta g ( t )\, dt + \int_{0}^{T} \xi \Delta g ( t ) \,dt + \int _{0}^{l} \Delta u^{2} ( x,T )\,dx \\ &{}+ 2\alpha \int_{0}^{T} g ( t )\Delta g ( t ) \,dt + \alpha \int_{0}^{T} \Delta g^{2} ( t )\,dt \\ &{}+ 2\alpha \int_{0}^{T} g' ( t ) \Delta g' ( t )\,dt + \alpha \int_{0}^{T} \bigl( \Delta g' ( t ) \bigr)^{2}\,dt \end{aligned}$$

and, after integration by parts using \(\xi ' ( 0 ) = \xi ' ( T ) = 0\),

$$\begin{aligned} \Delta J_{\alpha} ( g ) =& \int _{0}^{T} \xi '\Delta g' ( t )\, dt + \int_{0}^{T} \xi \Delta g ( t ) \,dt + \int _{0}^{l} \Delta u^{2} ( x,T )\,dx \\ &{}+ 2\alpha \int_{0}^{T} g ( t )\Delta g ( t ) \,dt + \alpha \int_{0}^{T} \Delta g^{2} ( t )\,dt \\ &{}+ 2\alpha \int_{0}^{T} g' ( t ) \Delta g' ( t )\,dt + \alpha \int_{0}^{T} \bigl( \Delta g' ( t ) \bigr)^{2}\,dt. \end{aligned}$$

Rearranging this, we obtain the equality

$$\begin{aligned} \Delta J_{\alpha} ( g ) =& \int _{0}^{T} \bigl( \xi + 2\alpha g ( t ) \bigr)\Delta g ( t ) \,dt \\ &{}+ \int_{0}^{T} \bigl( \xi ' + 2\alpha g' ( t ) \bigr)\Delta g' ( t )\,dt \\ &{}+ \int_{0}^{l} \Delta u^{2} ( x,T ) \,dx + \alpha \biggl[ \int_{0}^{T} \Delta g^{2} ( t )\,dt + \int_{0}^{T} \bigl( \Delta g' ( t ) \bigr)^{2}\,dt \biggr]. \end{aligned}$$
(6.12)

We now take into account (6.7) and the definition of the Fréchet derivative:

$$\Delta J_{\alpha} ( g ) = \bigl\langle J'_{\alpha} ( g ),\Delta g \bigr\rangle _{H^{1} ( 0,T )} + o \bigl( \Vert \Delta g \Vert _{H^{1} ( 0,T )} \bigr). $$

Since the remaining terms in (6.12) are of order \(\Vert \Delta g \Vert _{H^{1} ( 0,T )}^{2}\) by (6.7), we obtain the Fréchet derivative of the functional:

$$ J'_{\alpha} ( g ) = \xi + 2\alpha g. $$
(6.13)
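In a computation, evaluating (6.13) requires the boundary trace of the adjoint state and the solution ξ of the Neumann problem (6.11). As an illustration (not part of the original derivation), the following minimal Python sketch solves (6.11) by central finite differences on a uniform grid; the function name `solve_xi` and the placeholder right-hand side are our own choices, and the grid values of \(k\eta ( l,t )\) are assumed to be given.

```python
import numpy as np

def solve_xi(rhs, T=1.0):
    """Solve xi'' - xi = rhs(t) on (0, T) with xi'(0) = xi'(T) = 0,
    cf. (6.11); rhs holds grid values of k*eta(l, t)."""
    n = rhs.size                       # nodes t_0, ..., t_{n-1}
    dt = T / (n - 1)
    A = np.zeros((n, n))
    for i in range(1, n - 1):          # interior rows: central differences
        A[i, i - 1] = 1.0 / dt**2
        A[i, i] = -2.0 / dt**2 - 1.0
        A[i, i + 1] = 1.0 / dt**2
    # Neumann ends via ghost nodes (xi_{-1} = xi_1, xi_n = xi_{n-2})
    A[0, 0], A[0, 1] = -2.0 / dt**2 - 1.0, 2.0 / dt**2
    A[-1, -1], A[-1, -2] = -2.0 / dt**2 - 1.0, 2.0 / dt**2
    return np.linalg.solve(A, rhs)

# toy usage with a placeholder boundary trace k*eta(l, t)
t = np.linspace(0.0, 1.0, 201)
xi = solve_xi(np.cos(np.pi * t))
```

The zeroth-order term \(-\xi\) in (6.11) makes the discrete system invertible despite the pure Neumann conditions.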

6.2 Constituting a minimizing sequence

In this section, we construct a minimizing sequence using the gradient method. Taking an initial element \(g_{0} \in G_{\mathrm{ad}}\), we generate a minimizing sequence by the rule

$$ g_{k + 1} = g_{k} - \beta_{k} J'_{\alpha} ( g_{k} ),\quad k = 0,1, \ldots, $$
(6.14)

where \(J'_{\alpha} ( g_{k} )\) is the Fréchet derivative at the element \(g_{k}\). The \(\beta_{k}\) are sufficiently small positive numbers satisfying

$$ J_{\alpha} ( g_{k + 1} ) - J_{\alpha} ( g_{k} ) = \beta_{k} \biggl[ - \bigl\Vert J'_{\alpha} ( g_{k} ) \bigr\Vert ^{2} + \frac{o ( \beta_{k} )}{\beta_{k}} \biggr] < 0. $$
(6.15)

The step sizes \(\beta_{k}\) can be computed by one of the methods described in [12]. Since the functional is weakly lower semicontinuous, we have

$$J_{\alpha *} \le J_{\alpha} ( g_{*} ) \le \lim_{k \to \infty} J_{\alpha} ( g_{k} ) = J_{\alpha *}. $$

Iteration can be stopped by one of the following criteria:

$$ \Vert g_{k + 1} - g_{k} \Vert < \varepsilon_{1}, \qquad \bigl\vert J_{\alpha} ( g_{k + 1} ) - J_{\alpha} ( g_{k} ) \bigr\vert < \varepsilon_{2}, \qquad \bigl\Vert J'_{\alpha} ( g_{k} ) \bigr\Vert < \varepsilon_{3}. $$
(6.16)
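To make the procedure concrete, here is a minimal Python sketch of the iteration (6.14) with the stopping rules (6.16). It assumes user-supplied routines `J_alpha` and `J_alpha_grad`, the latter returning the grid values of \(\xi + 2\alpha g\) from (6.13) (assembled, e.g., from an adjoint solve and `solve_xi` above); the helper names, the fixed step \(\beta_{k} = \beta\), and the discrete \(H^{1}\) norm are our own choices, not prescribed by the text.

```python
import numpy as np

def h1_norm(g, dt):
    """Discrete H^1(0, T) norm: sqrt of int g^2 + (g')^2 dt, by trapezoids."""
    dg = np.gradient(g, dt)
    y = g**2 + dg**2
    return np.sqrt(dt * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1]))

def minimize(J_alpha, J_alpha_grad, g0, dt, beta=0.2,
             eps1=1e-6, eps2=1e-6, eps3=1e-6, max_iter=1000):
    """Gradient iteration (6.14), stopped by any of the criteria (6.16)."""
    g = g0.copy()
    for k in range(max_iter):
        grad = J_alpha_grad(g)           # xi + 2*alpha*g, cf. (6.13)
        g_new = g - beta * grad          # step (6.14) with fixed beta_k
        if (h1_norm(g_new - g, dt) < eps1
                or abs(J_alpha(g_new) - J_alpha(g)) < eps2
                or h1_norm(grad, dt) < eps3):
            return g_new, k + 1
        g = g_new
    return g, max_iter
```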

7 A numerical example

Let us consider the following problem on the domain \(( x,t ) \in Q = ( 0,1 ) \times ( 0,1 )\), choosing \(k = 1\), \(l = 1\), \(T = 1\):

$$\begin{aligned}& u_{t} = u_{xx} + \left \{ \textstyle\begin{array}{l@{\quad}l} - x^{3}\sin t - 2x^{2}\sin t - 6x\cos t - 4\cos t, & 0 \le x < \frac{1}{2}, 0 \le t \le 1, \\ - x^{3}\sin t - x^{2}\sin t - x\sin t \\ \quad {}+ \frac{1}{4}\sin t - 6x\cos t - 2\cos t, & \frac{1}{2} < x \le 1, 0 \le t \le 1, \end{array}\displaystyle \right . \end{aligned}$$
(7.1)
$$\begin{aligned}& u ( x,0 ) = \left \{ \textstyle\begin{array}{l@{\quad}l} x^{3} + 2x^{2}, & 0 \le x \le \frac{1}{2}, \\ x^{3} + x^{2} + x - \frac{1}{4}, & \frac{1}{2} \le x \le 1, \end{array}\displaystyle \right . \end{aligned}$$
(7.2)
$$\begin{aligned}& u_{x} ( 0,t ) = 0,\qquad u_{x} ( 1,t ) = g ( t ). \end{aligned}$$
(7.3)

We use the cost functional

$$ J_{\alpha} ( g ) = \int_{0}^{1} \left [ u ( x,1;g ) - \left \{ \textstyle\begin{array}{l@{\quad}l} \cos ( 1 ) ( x^{3} + 2x^{2} ), & 0 \le x \le \frac{1}{2} \\ \cos ( 1 ) ( x^{3} + x^{2} + x - \frac{1}{4} ), & \frac{1}{2} \le x \le 1 \end{array}\displaystyle \right . \right ]^{2} \, dx + \alpha \Vert g \Vert _{H^{1} ( 0,1 )}^{2} $$
(7.4)

and want to solve the problem

$$ J_{\alpha *} = J_{\alpha} ( g_{*} ) = \inf J_{\alpha} ( g ). $$
(7.5)

We write the solution of the parabolic problem (7.1)-(7.3) as \(u = u_{1} + u_{2}\) with \(u_{2} = \frac{x^{2}}{2l}g ( t )\); note that \(\frac{\partial u_{2}}{\partial x} ( 0,t ) = 0\) and \(\frac{\partial u_{2}}{\partial x} ( l,t ) = g ( t )\). Then the following problem with homogeneous boundary conditions for the function \(u_{1}\) is obtained:

$$\begin{aligned}& \frac{\partial u_{1}}{\partial t} = k\frac{\partial^{2}u_{1}}{\partial x^{2}} + h ( x,t ) + \frac{k}{l}g ( t ) - \frac{x^{2}}{2l}g' ( t ), \end{aligned}$$
(7.6)
$$\begin{aligned}& u_{1} ( x,0 ) = u_{0} ( x ) - \frac{x^{2}}{2l}g ( 0 ), \end{aligned}$$
(7.7)
$$\begin{aligned}& \frac{\partial u_{1}}{\partial x} ( 0,t ) = 0,\qquad \frac{\partial u_{1}}{\partial x} ( l,t ) = 0. \end{aligned}$$
(7.8)

The weak solution for the problem (7.6)-(7.8) can be defined as follows:

$$\begin{aligned}& \int_{0}^{l} \frac{\partial u_{1}}{\partial t} v ( x )\,dx + \int _{0}^{l} k\frac{\partial u_{1}}{\partial x}\frac{\partial v}{\partial x} \,dx \\& \quad = \int_{0}^{l} h ( x,t )v ( x )\,dx + \frac{k}{l} \int_{0}^{l} g ( t )v ( x )\,dx - \int_{0}^{l} \frac{x^{2}}{2l}g' ( t )v ( x )\,dx\quad \forall v \in H^{1} ( \Omega ) \end{aligned}$$

and the solution of this equality can be approximated by the Faedo-Galerkin method using the sum

$$ u_{1}^{N} ( x,t ) = \sum_{k = 1}^{N} c_{k} ( t )\varphi_{k} ( x ). $$
(7.9)

Here the functions \(\varphi_{k} ( x )\) form an orthogonal basis of \(H^{1} ( \Omega )\). Normalized in \(L_{2} ( 0,l )\) (so that the matrix M below is the identity) and compatible with the boundary conditions, they can be taken as

$$\biggl\{ \frac{1}{\sqrt{l}},\sqrt{\frac{2}{l}} \cos \frac{\pi}{l}x,\sqrt{\frac{2}{l}} \cos \frac{2\pi}{l}x, \ldots,\sqrt{\frac{2}{l}} \cos \frac{ ( N - 1 )\pi}{l}x \biggr\} . $$

The unknown functions \(c_{k} ( t )\) in (7.9) are found from the system of first-order ordinary differential equations

$$ \begin{aligned} &M\frac{dC}{dt} + AC = H, \\ &C ( 0 ) = C_{0}. \end{aligned} $$
(7.10)

In this system,

$$C = \left [ \textstyle\begin{array}{@{}c@{}} C_{1}^{N} ( t ) \\ C_{2}^{N} ( t ) \\ \vdots \\ C_{N}^{N} ( t ) \end{array}\displaystyle \right ] $$

is the vector of unknowns,

$$C_{0} = \left [ \textstyle\begin{array}{@{}c@{}} \int_{0}^{l} [ u_{0} ( x ) - \frac{x^{2}}{2l}g ( 0 ) ]\varphi_{1} ( x )\, dx \\ \int_{0}^{l} [ u_{0} ( x ) - \frac{x^{2}}{2l}g ( 0 ) ]\varphi_{2} ( x )\, dx \\ \vdots \\ \int_{0}^{l} [ u_{0} ( x ) - \frac{x^{2}}{2l}g ( 0 ) ]\varphi_{N} ( x )\, dx \end{array}\displaystyle \right ] $$

is the vector of initial data. The coefficient matrices M and A are

$$M = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 \end{array}\displaystyle \right ],\qquad A = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} 0 & 0 & \cdots & 0 \\ 0 & ( \frac{\pi}{l} )^{2} & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & ( \frac{ ( N - 1 )\pi}{l} )^{2} \end{array}\displaystyle \right ] $$

and the right-hand side matrix H is

$$H = \left [ \textstyle\begin{array}{@{}c@{}} \int_{0}^{l} h ( x,t )\varphi_{1} ( x ) \,dx + \int_{0}^{l} \frac{k}{l}g ( t )\varphi_{1} ( x )\,dx - \int_{0}^{l} \frac{x^{2}}{2l}g' ( t )\varphi_{1} ( x )\,dx \\ \int_{0}^{l} h ( x,t )\varphi_{2} ( x ) \,dx + \int_{0}^{l} \frac{k}{l}g ( t )\varphi_{2} ( x )\,dx - \int_{0}^{l} \frac{x^{2}}{2l}g' ( t )\varphi_{2} ( x )\,dx \\ \vdots \\ \int_{0}^{l} h ( x,t )\varphi_{N} ( x ) \,dx + \int_{0}^{l} \frac{k}{l}g ( t )\varphi_{N} ( x )\,dx - \int_{0}^{l} \frac{x^{2}}{2l}g' ( t )\varphi_{N} ( x )\,dx \end{array}\displaystyle \right ]. $$

Since M and A are diagonal, the system (7.10) decouples into scalar first-order ordinary differential equations, and the functions \(c_{k} ( t )\) can be found exactly by an integrating factor.
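As an illustration (our own sketch, not from the original text), the decoupled solve can be coded as follows. Here `h`, `g`, and `dg` are assumed to be callables for the source term, the control, and its derivative; the names `phi`, `H_j`, and `c_j` are ours; SciPy quadrature stands in for the exact integrals; and the basis is the \(L_{2}\)-orthonormalized cosine family, so that M is the identity in (7.10).

```python
import numpy as np
from scipy.integrate import quad

l, k_diff, N = 1.0, 1.0, 10   # domain length, diffusivity, basis size

def phi(j, x):
    """L2-orthonormal cosine basis on (0, l)."""
    return (1.0 / np.sqrt(l) if j == 0
            else np.sqrt(2.0 / l) * np.cos(j * np.pi * x / l))

def H_j(j, t, h, g, dg):
    """j-th entry of H in (7.10): projection of the right-hand side
    of (7.6) onto phi_j."""
    f = lambda x: (h(x, t) + (k_diff / l) * g(t)
                   - x**2 / (2.0 * l) * dg(t)) * phi(j, x)
    return quad(f, 0.0, l)[0]

def c_j(j, c0, t, h, g, dg):
    """Integrating-factor solution of c' + a_j c = H_j(t), c(0) = c0,
    where a_j = k (j pi / l)^2 is the j-th diagonal entry of A."""
    a = k_diff * (j * np.pi / l) ** 2
    integral = quad(lambda s: np.exp(-a * (t - s)) * H_j(j, s, h, g, dg),
                    0.0, t)[0]
    return np.exp(-a * t) * c0 + integral
```

The initial values \(c_{j} ( 0 )\) are obtained by projecting the initial condition (7.7) onto \(\varphi_{j}\) with the same quadrature.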

First, let us take \(\alpha = 0\) and consider the functional

$$J ( g ) = \int_{0}^{1} \left [ u ( x,1;g ) - \left \{ \textstyle\begin{array}{l@{\quad}l} \cos ( 1 ) ( x^{3} + 2x^{2} ), & 0 \le x \le \frac{1}{2} \\ \cos ( 1 ) ( x^{3} + x^{2} + x - \frac{1}{4} ), & \frac{1}{2} \le x \le 1 \end{array}\displaystyle \right . \right ]^{2} \, dx. $$

The minimum value of this functional is \(J_{*} = 0\), attained at \(g_{*} = 6\cos t\): the corresponding solution of (7.1)-(7.3) is \(u ( x,t ) = \cos t ( x^{3} + 2x^{2} )\) for \(0 \le x \le \frac{1}{2}\) and \(u ( x,t ) = \cos t ( x^{3} + x^{2} + x - \frac{1}{4} )\) for \(\frac{1}{2} \le x \le 1\), which coincides with the target at \(T = 1\). Taking \(N = 10\) in the Faedo-Galerkin approximation, we obtain the minimum value \(J_{*} = 0.27 \times 10^{ - 8}\).
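This choice of \(g_{*}\) can be verified symbolically; a small SymPy check (our own illustration):

```python
import sympy as sp

x, t = sp.symbols('x t')
u_left = sp.cos(t) * (x**3 + 2 * x**2)                       # 0 <= x <= 1/2
u_right = sp.cos(t) * (x**3 + x**2 + x - sp.Rational(1, 4))  # 1/2 <= x <= 1

# u_t - u_xx reproduces the two branches of the source term in (7.1)
print(sp.simplify(sp.diff(u_left, t) - sp.diff(u_left, x, 2)))
print(sp.simplify(sp.diff(u_right, t) - sp.diff(u_right, x, 2)))

# the flux at x = 1 is the optimal control g_*(t) = 6*cos(t)
print(sp.diff(u_right, x).subs(x, 1))
```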

In this case the problem is ill posed, since quite different functions \(g ( t )\) nearly attain the minimum value, as the following computations show.

Starting with the initial element \(g_{0} = \cos t\) and constructing a minimizing sequence by (6.14) with \(\beta_{k} = 0.2\), we obtain the following element after 101 iterations:

$$\begin{aligned} g_{101} =& \cos t + 4.186083 + 0.140257\cos ( 3.141592t ) - 0.030318 \cos ( 6.283185t ) \\ &{}+ 0.010470\cos ( 9.424777t ) - 0.004550\cos ( 12.566370t ) \\ &{}+ 0.002299\cos ( 15.707963t ) - 0.001294\cos ( 18.849555t ) \\ &{}+ 0.000791\cos ( 21.991148t )- 0.000514\cos ( 25.132741t ) \\ &{} + 0.000351\cos ( 28.274333t ). \end{aligned}$$

The value of the functional at the element \(g_{101}\) is \(J ( g_{101} ) = 0.020786\), yet the norm of the difference from the exact minimizer is \(\Vert g_{101} - g_{*} \Vert _{H^{1} ( 0,1 )} = 2.354540\). A graph of this solution is given in Figure 2.

Figure 2. Two quite different functions that give nearly the same functional value for \(T = 1\).

If we start from another initial element, \(g_{0} = 1\), and construct a minimizing sequence by (6.14) with \(\beta_{k} = 0.2\), we obtain the following element after 101 iterations:

$$\begin{aligned} g_{101} =& 5.023377 + 0.171417\cos ( 3.141592t ) - 0.037049\cos ( 6.283185t ) \\ &{}+ 0.012793\cos ( 9.424777t ) - 0.005558\cos ( 12.566370t ) \\ &{}+ 0.002807\cos ( 15.707963t )- 0.001580\cos ( 18.849555t ) \\ &{} + 0.000965\cos ( 21.991148t )- 0.000627\cos ( 25.132741t ) \\ &{} + 0.000428\cos ( 28.274333t ). \end{aligned}$$

The value of the functional at the element \(g_{101}\) is \(J ( g_{101} ) = 0.029751\), yet \(\Vert g_{101} - g_{*} \Vert _{H^{1} ( 0,1 )} = 2.817847\).

These examples show that the problem is numerically ill posed for \(\alpha = 0\).

We now take \(\alpha > 0\) as a regularization parameter and minimize the functional (7.4) using the minimizing sequence (6.14) with \(\beta_{k} = 0.2\).

The values of \(\int_{0}^{1} [ u ( x,1;g ) - y ( x ) ]^{2}\, dx\) and \(\Vert g \Vert _{H^{1} ( 0,1 )}^{2}\) obtained with the stopping criterion \(\vert J_{\alpha} ( g_{k + 1} ) - J_{\alpha} ( g_{k} ) \vert < 1 \times 10^{ - 6}\) are given in Table 1.

Table 1. Some values of α, \(\Vert u ( x,1;g ) - y ( x ) \Vert _{L_{2} ( 0,1 )}^{2}\), and \(\Vert g \Vert _{H^{1} ( 0,1 )}^{2}\).

In Figure 3 we can see that the values of \(\int_{0}^{1} [ u ( x,1;g ) - y ( x ) ]^{2}\, dx\) become smaller and the values of \(\Vert g \Vert _{H^{1} ( 0,1 )}^{2}\) become larger as α decreases; the opposite occurs as α increases.

Figure 3. The results for some regularization parameters.

The problem is well posed for any \(\alpha > 0\). For example if we take \(\alpha = 0.6\) we get the functional

$$J_{0.6} ( g ) = \int_{0}^{1} \left [ u ( x,1;g ) - \left \{ \textstyle\begin{array}{l@{\quad}l} \cos ( 1 ) ( x^{3} + 2x^{2} ), & 0 \le x \le \frac{1}{2} \\ \cos ( 1 ) ( x^{3} + x^{2} + x - \frac{1}{4} ), & \frac{1}{2} \le x \le 1 \end{array}\displaystyle \right . \right ]^{2} \, dx + ( 0.6 ) \Vert g \Vert _{H^{1} ( 0,1 )}^{2}. $$

Let us construct a minimizing sequence by (6.14) for \(\beta = 0.2\) and stop the iteration by the criterion \(\vert J_{\alpha} ( g_{k + 1} ) - J_{\alpha} ( g_{k} ) \vert < 1 \times 10^{ - 6}\). If we start with the initial element \(g_{0} = 0\), we get the minimum value \(J_{0.6*} = 9.565356\) and the minimum element

$$\begin{aligned} g_{15} =& 3.162742 - 0.003404\cos (3.141592t) + 0.000722\cos ( 6.283185t ) \\ &{}- 0.000242\cos ( 9.424777t ) + 0.000101\cos ( 12.566370t ) \\ &{}- 0.000049\cos ( 15.707963t ) + 0.000026\cos ( 18.849555t ) \\ &{}- 0.000015\cos ( 21.991148t ) \\ &{}+ 0.000097\cos ( 25.132741t ) - 0.000006\cos ( 28.274333t ). \end{aligned}$$

If we start with the initial element \(g_{0} = \cos (t)\), we get the minimum value \(J_{0.6*} = 9.565356\) and the minimum element

$$\begin{aligned} g_{27} =& 3.162081 + 0.000796\cos t - 0.003243\cos (3.141592t) \\ &{}+ 0.000687\cos ( 6.283185t ) - 0.000230\cos ( 9.424777t ) \\ &{}+ 0.000096\cos ( 12.566370t )- 0.000046\cos ( 15.707963t ) \\ &{} + 0.000025\cos ( 18.849555t ) - 0.000014\cos ( 21.991148t ) \\ &{}+ 0.000009\cos ( 25.132741t ) - 0.000006\cos ( 28.274333t ). \end{aligned}$$

The norm of the difference between these functions is \(\Vert g_{27} - g_{15} \Vert _{H^{1} ( 0,1 )} = 0.000841\).

It can be seen from Figure 4 that the minimum values and the minimum elements are close to each other. Hence the problem is numerically well posed.

Figure 4. Minimum elements corresponding to different initial elements.

References

  1. Vasilev, FP: Numerical Methods for Solving Extremal Problems. Nauka, Moscow (1988)

  2. Levaggi, L: Variable structure control for parabolic evolution equations. In: Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference, Seville, Spain, 12-15 December (2005)

  3. Qian, L, Tian, L: Boundary control of an unstable heat equation. Int. J. Nonlinear Sci. 3(1), 68-73 (2007)

  4. Elharfi, A: Output-feedback stabilization and control optimization for parabolic equations with Neumann boundary control. Electron. J. Differ. Equ. 2011, 146 (2011)

  5. Sadek, IS, Bokhari, MA: Optimal boundary control of heat conduction problems on an infinite time domain by control parameterization. J. Franklin Inst. 348, 1656-1667 (2011)

  6. Lions, JL: Optimal Control of Systems Governed by Partial Differential Equations. Springer, Berlin (1971)

  7. Hasanoğlu, A: Simultaneous determination of the source terms in a linear parabolic problem from the final overdetermination: weak solution approach. J. Math. Anal. Appl. 330, 766-779 (2007)

  8. Dhamo, V, Tröltzsch, F: Some aspects of reachability for parabolic boundary control problems with control constraints. Comput. Optim. Appl. 50, 75-110 (2011)

  9. Altmüller, N, Grüne, L: A comparative stability analysis of Neumann and Dirichlet boundary MPC for the heat equation. In: Proceedings of the 1st IFAC Workshop on Control of Systems Modeled by Partial Differential Equations - CPDE (2013)

  10. Lions, JL, Magenes, E: Non-Homogeneous Boundary Value Problems and Applications, vol. II. Springer, Berlin (1972)

  11. Lions, JL: Optimal Control of Systems Governed by Partial Differential Equations. Springer, Berlin (1971)

  12. İskenderov, AD, Tagiyev, RQ, Yagubov, QY: Optimization Methods. Çaşıoğlu, Bakü (2002)


Acknowledgements

The authors are thankful to all the referees for their suggestions.

Author information

Correspondence to Şule S Şener.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The main idea of this paper was proposed by ŞSŞ and MS. ŞSŞ and MS prepared the manuscript initially and performed all the steps of the proofs in this research. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Şener, Ş.S., Subaşi, M. On a Neumann boundary control in a parabolic system. Bound Value Probl 2015, 166 (2015). https://doi.org/10.1186/s13661-015-0430-5

