
Exact Minimizers in Real Interpolation

Some additional results

Japhet Niyobuhungiro^{a,b}

^a Division of Mathematics and Applied Mathematics, Department of Mathematics,

Linköping University, SE–581 83 Linköping, Sweden

^b Department of Mathematics, School of Pure and Applied Science, College of Science and Technology, University of Rwanda, P.O. Box 3900, Kigali, Rwanda

Abstract

We present some extensions of results presented in our recent papers. First we extend the characterization of optimal decompositions for a Banach couple to optimal decompositions for a Banach triple. Next we show that our approach also applies when complex spaces are considered instead of real spaces. Finally we compare the performance of the algorithm that we have proposed for the ROF model with the Split Bregman algorithm, which can in principle be regarded as a benchmark algorithm for the ROF model. We find that in most cases both algorithms behave in a similar way, and that in some cases our algorithm decreases the error faster with the number of iterations.

Keywords: Regular Banach triple, Optimal decompositions, Complex Banach couple, Real Interpolation, Convex Duality

2010 MSC: 46B70, 46E35, 68U10

1. Optimal decomposition for a Banach triple

1.1. Introduction

In the paper [7], duality in convex analysis was used to characterize optimal decompositions for functionals arising in real interpolation, and only real Banach couples were considered. However, in connection with applied problems in image processing (see, for example, the paper [1]), optimal decomposition for three spaces is sometimes needed. For example, models relying on the use of three different semi-norms, such as the total variation for the geometrical component, the negative Sobolev norm for the texture and the negative Besov norm for the noise, are often useful. For this reason we will investigate decomposition for three spaces.

Email address: jniyobuhungiro@ur.ac.rw, japhet.niyobuhungiro@liu.se (Japhet Niyobuhungiro)


However, the description in this situation becomes more complicated; for example, we no longer have the equality (X_0 + X_1 + X_2)^* = X_0^* ∩ X_1^* ∩ X_2^*, even for a regular triple. Let (X_0, X_1, X_2) be a compatible Banach triple, i.e. X_0, X_1 and X_2 are all Banach spaces and there exists a Hausdorff topological vector space in which they are linearly and continuously embedded. Consider the corresponding K-functional, defined for s, t > 0 and x ∈ X_0 + X_1 + X_2 by

\[
K(s, t; x; X_0, X_1, X_2) = \inf_{x = x_0 + x_1 + x_2} \big( \|x_0\|_{X_0} + s\|x_1\|_{X_1} + t\|x_2\|_{X_2} \big). \qquad (1)
\]

Its calculation is a difficult extremal problem (see J. Peetre [10] for Banach couples). More generally, we define the corresponding L_{p_0,p_1,p_2}-functional

\[
L_{p_0,p_1,p_2}(s, t; x; X_0, X_1, X_2) = \inf_{x = x_0 + x_1 + x_2} \left( \frac{1}{p_0}\|x_0\|_{X_0}^{p_0} + \frac{s}{p_1}\|x_1\|_{X_1}^{p_1} + \frac{t}{p_2}\|x_2\|_{X_2}^{p_2} \right), \qquad (2)
\]

where s, t > 0 and 1 ≤ p_0, p_1, p_2 < ∞. The following question arises.

Problem 1. Suppose that for a given element x ∈ X_0 + X_1 + X_2, 1 ≤ p_0, p_1, p_2 < ∞ and s, t > 0, there exists an optimal decomposition for the L_{p_0,p_1,p_2}-functional, i.e. a decomposition x = x_{0,opt} + x_{1,opt} + x_{2,opt} such that

\[
L_{p_0,p_1,p_2}(s, t; x; X_0, X_1, X_2) = \inf_{x = x_0 + x_1 + x_2} \left( \frac{1}{p_0}\|x_0\|_{X_0}^{p_0} + \frac{s}{p_1}\|x_1\|_{X_1}^{p_1} + \frac{t}{p_2}\|x_2\|_{X_2}^{p_2} \right) = \frac{1}{p_0}\|x_{0,\mathrm{opt}}\|_{X_0}^{p_0} + \frac{s}{p_1}\|x_{1,\mathrm{opt}}\|_{X_1}^{p_1} + \frac{t}{p_2}\|x_{2,\mathrm{opt}}\|_{X_2}^{p_2}.
\]

How can this optimal decomposition be characterized (constructed)?
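To make Problem 1 concrete, the following is a minimal numerical sketch (our own illustration, not taken from the paper): for the finite-dimensional triple (ℓ^2, ℓ^1, ℓ^∞) on R^n and p_0 = 2, p_1 = p_2 = 1, the L_{2,1,1}-functional is a convex problem, so an (approximately) optimal decomposition can be computed with a generic solver. The use of cvxpy and the choice of spaces and parameters are assumptions made purely for illustration.

```python
# Hypothetical illustration of Problem 1 for the triple (l^2, l^1, l^inf) on R^n,
# with p0 = 2 and p1 = p2 = 1.  Not the construction studied in the paper.
import numpy as np
import cvxpy as cp

def optimal_decomposition_L211(x, s, t):
    """Minimize 1/2*||x0||_2^2 + s*||x1||_1 + t*||x2||_inf subject to x = x0 + x1 + x2."""
    n = x.shape[0]
    x0, x1, x2 = cp.Variable(n), cp.Variable(n), cp.Variable(n)
    objective = 0.5 * cp.sum_squares(x0) + s * cp.norm(x1, 1) + t * cp.norm(x2, "inf")
    prob = cp.Problem(cp.Minimize(objective), [x0 + x1 + x2 == x])
    prob.solve()
    return prob.value, x0.value, x1.value, x2.value

rng = np.random.default_rng(0)
x = rng.normal(size=20)
value, x0, x1, x2 = optimal_decomposition_L211(x, s=0.7, t=0.7)
print("L_{2,1,1}(s, t; x) ≈", value)
```

Any DCP-compliant solver would do; the point is only that Problem 1 is computationally accessible in finite dimensions, while the goal of this section is an analytic characterization.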

1.2. Preliminaries

Below we present some well–known definitions and results from convex analysis that are needed for the proofs of our main results. Throughout, E will denote a Banach space with the normk·kEand E∗will denote its topological

dual space. We will consider convex functions F : E → R∪ {+∞}. The

effective domain or simply domain of the function F is a convex set denoted by dom F, and defined by

dom F={x∈E : F(x) < +∞}.

The function F is said to be proper if dom F ≠ ∅. If the epigraph of a function F, i.e. the set

epi F={(x, α) ∈E×R | F(x) ≤α},

is closed then the function F is called lower semicontinuous (l.s.c.).

The definition of the operation of infimal convolution is the following.

Definition 1. The infimal convolution of n functions F_i, i = 0, 1, ..., n−1, from E into R ∪ {+∞} is the function denoted by F_0 ⊕ F_1 ⊕ ... ⊕ F_{n−1} that maps E into R ∪ {−∞, +∞} and is defined by

\[
(F_0 \oplus F_1 \oplus \cdots \oplus F_{n-1})(x) = \inf_{x = x_0 + x_1 + \cdots + x_{n-1}} \{ F_0(x_0) + F_1(x_1) + \cdots + F_{n-1}(x_{n-1}) \}, \qquad (3)
\]

and it is exact at a point x ∈ E if the infimum is achieved, i.e.

\[
(F_0 \oplus F_1 \oplus \cdots \oplus F_{n-1})(x) = \min_{x = x_0 + x_1 + \cdots + x_{n-1}} \{ F_0(x_0) + F_1(x_1) + \cdots + F_{n-1}(x_{n-1}) \}.
\]

Suppose that (F_0 ⊕ F_1 ⊕ ... ⊕ F_{n−1})(x) is finite and exact. Then the decomposition x = x_0 + x_1 + ... + x_{n−1} on which the infimum is attained will be called optimal and denoted by x = x_{0,opt} + x_{1,opt} + ... + x_{n−1,opt}.
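A one-dimensional example (our own, for orientation): for E = R, F_0(x_0) = x_0^2/2 and F_1(x_1) = t|x_1| with t > 0, the infimal convolution is the classical Huber function and is exact at every point:

\[
(F_0 \oplus F_1)(x) = \min_{x = x_0 + x_1}\Big( \tfrac{1}{2}x_0^2 + t|x_1| \Big)
= \begin{cases} \tfrac{1}{2}x^2, & |x| \le t, \\[1mm] t|x| - \tfrac{t^2}{2}, & |x| > t, \end{cases}
\]

with optimal decomposition x_{1,opt} = sgn(x) max(|x| − t, 0) and x_{0,opt} = x − x_{1,opt} (soft thresholding).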

The following notion of conjugate function will be important for us.

Definition 2. The conjugate function of F is the function F* : E* → R ∪ {+∞} defined by

\[
F^*(y) = \sup_{x \in E} \{ \langle y, x \rangle - F(x) \}. \qquad (4)
\]

Moreover, we will say that y is dual to x (y is also called a subgradient of the convex function F at the point x) with respect to F if F*(y) = ⟨y, x⟩ − F(x) or, in symmetric form,

\[
F(x) + F^*(y) = \langle y, x \rangle.
\]

The set of all dual elements to x is denoted by ∂F(x), and the function F is called subdifferentiable at the point x ∈ E if the set ∂F(x) is not empty.

Note that (see [13], p. 24) for n = 2, if the set ∂F_0(x_0) ∩ ∂F_1(x − x_0) is nonempty then (F_0 ⊕ F_1)(x) is exact at the point x.

We will also need the next two propositions. The proofs are straightforward consequences of the definition and can be found in any standard book on convex analysis (see [11] or [4] for example).

Proposition 1. a) Let F(x) = (1/p)‖x‖_E^p where 1 < p < ∞. Then F*(y) = (1/p')‖y‖_{E*}^{p'}, where 1/p + 1/p' = 1.

b) Let F(x) = ‖x‖_E. Then

\[
F^*(y) = \begin{cases} 0 & \text{if } \|y\|_{E^*} \le 1, \\ +\infty & \text{if } \|y\|_{E^*} > 1. \end{cases}
\]

Proposition 2. If λ > 0 then

\[
(\lambda F)^*(y) = \sup_{x \in E} \{ \langle y, x \rangle - \lambda F(x) \} = \lambda F^*\!\left(\frac{y}{\lambda}\right).
\]
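For the reader's convenience, here is a sketch of the computation behind Proposition 1 a), writing x = r u with ‖u‖_E = 1 and r ≥ 0:

\[
F^*(y) = \sup_{x \in E}\Big( \langle y, x \rangle - \tfrac{1}{p}\|x\|_E^p \Big)
= \sup_{r \ge 0}\Big( r\|y\|_{E^*} - \tfrac{r^p}{p} \Big)
= \tfrac{1}{p'}\|y\|_{E^*}^{p'},
\]

where the supremum over r is attained at r = ‖y‖_{E^*}^{1/(p−1)}.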

The following nontrivial result (its proof is based on the Baire category theorem) is a very useful tool to check subdifferentiability (see [5]).

Theorem 1. Let F : E → (−∞, +∞] be a convex and lower semicontinuous function. Then F is subdifferentiable over the interior of its domain.

The next theorem demonstrates that the conjugate of infimal convolution is the sum of the conjugates.

(6)

Theorem 2. Let F_0, F_1, ..., F_{n−1} be convex functions from E into R ∪ {+∞}. Then

\[
(F_0 \oplus F_1 \oplus \cdots \oplus F_{n-1})^* = F_0^* + F_1^* + \cdots + F_{n-1}^*.
\]

The proof is a straightforward application of the definition and we omit it.

Remark 1. The dual result

\[
(F_0 + F_1 + \cdots + F_{n-1})^* = F_0^* \oplus F_1^* \oplus \cdots \oplus F_{n-1}^*
\]

is valid only under some additional restriction on F_0, F_1, ..., F_{n−1} (see [11] in the finite-dimensional setting).
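For completeness, the computation behind Theorem 2 in the case n = 2 is a single exchange of suprema (a sketch; the general case iterates the same argument):

\[
(F_0 \oplus F_1)^*(y) = \sup_{x = x_0 + x_1}\big( \langle y, x_0 \rangle - F_0(x_0) + \langle y, x_1 \rangle - F_1(x_1) \big)
= \sup_{x_0}\big( \langle y, x_0 \rangle - F_0(x_0) \big) + \sup_{x_1}\big( \langle y, x_1 \rangle - F_1(x_1) \big)
= F_0^*(y) + F_1^*(y).
\]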

1.3. Optimal decomposition for the L–functional

Let (X_0, X_1, X_2) be a regular Banach triple, i.e. X_0 ∩ X_1 ∩ X_2 is dense in each of X_0, X_1 and X_2. Let φ_j : X_j → R ∪ {+∞}, j = 0, 1, 2, be convex and proper functions. If by

\[
\varphi_j(u) = \begin{cases} \varphi_j(u) & \text{if } u \in X_j, \\ +\infty & \text{if } u \in (X_0 + X_1 + X_2) \setminus X_j, \end{cases} \qquad j = 0, 1, 2, \qquad (5)
\]

we define functions on X_0 + X_1 + X_2 (keeping, with a slight abuse, the same notation for the extended functions), then we can express the infimal convolution φ_0 ⊕ φ_1 ⊕ φ_2 on the space X_0 + X_1 + X_2 as follows:

\[
(\varphi_0 \oplus \varphi_1 \oplus \varphi_2)(x) = \inf_{x = x_0 + x_1 + x_2} \big( \varphi_0(x_0) + \varphi_1(x_1) + \varphi_2(x_2) \big), \qquad x_j \in X_j, \; j = 0, 1, 2. \qquad (6)
\]

We will use the following result of [3], which states that, for a compatible family of normed spaces X = (X_j)_{j∈J} such that, for all j ∈ J, ∩_{j∈J} X_j is dense in X_j, the dual (∑_{j∈J} X_j)^* is a closed subspace of ∩_{j∈J} X_j^*.

The next result provides a characterization of the optimal decomposition for the infimal convolution φ_0 ⊕ φ_1 ⊕ φ_2.

Theorem 3. Let φ_j : X_j → R ∪ {+∞}, j = 0, 1, 2, be convex proper functions. Suppose also that φ_0 ⊕ φ_1 ⊕ φ_2 is subdifferentiable at a given element x ∈ dom(φ_0 ⊕ φ_1 ⊕ φ_2). Then the decomposition x = x_{0,opt} + x_{1,opt} + x_{2,opt} is optimal for φ_0 ⊕ φ_1 ⊕ φ_2 if and only if there exists y* ∈ (X_0 + X_1 + X_2)^* ⊆ X_0^* ∩ X_1^* ∩ X_2^* that is dual to x_{j,opt} with respect to φ_j, j = 0, 1, 2, respectively, i.e.

\[
\begin{cases}
\varphi_0(x_{0,\mathrm{opt}}) = \langle y^*, x_{0,\mathrm{opt}} \rangle - \varphi_0^*(y^*), \\
\varphi_1(x_{1,\mathrm{opt}}) = \langle y^*, x_{1,\mathrm{opt}} \rangle - \varphi_1^*(y^*), \\
\varphi_2(x_{2,\mathrm{opt}}) = \langle y^*, x_{2,\mathrm{opt}} \rangle - \varphi_2^*(y^*).
\end{cases} \qquad (7)
\]

Proof. Let us assume that the decomposition x = x_{0,opt} + x_{1,opt} + x_{2,opt} is optimal for (φ_0 ⊕ φ_1 ⊕ φ_2)(x). This means that

\[
(\varphi_0 \oplus \varphi_1 \oplus \varphi_2)(x) = \inf_{x = x_0 + x_1 + x_2} \big( \varphi_0(x_0) + \varphi_1(x_1) + \varphi_2(x_2) \big) = \varphi_0(x_{0,\mathrm{opt}}) + \varphi_1(x_{1,\mathrm{opt}}) + \varphi_2(x_{2,\mathrm{opt}}). \qquad (8)
\]

Since φ_0 ⊕ φ_1 ⊕ φ_2 is subdifferentiable at x, there exists y* ∈ (X_0 + X_1 + X_2)^* ⊆ X_0^* ∩ X_1^* ∩ X_2^* such that

\[
y^* \in \partial(\varphi_0 \oplus \varphi_1 \oplus \varphi_2)(x),
\]

i.e.

\[
(\varphi_0 \oplus \varphi_1 \oplus \varphi_2)(x) = \langle y^*, x \rangle - (\varphi_0 \oplus \varphi_1 \oplus \varphi_2)^*(y^*). \qquad (9)
\]

From (8) and the formula for the conjugate of the infimal convolution (see Theorem 2), we have that (9) is equivalent to

\[
\varphi_0(x_{0,\mathrm{opt}}) + \varphi_1(x_{1,\mathrm{opt}}) + \varphi_2(x_{2,\mathrm{opt}}) = \langle y^*, x \rangle - \varphi_0^*(y^*) - \varphi_1^*(y^*) - \varphi_2^*(y^*),
\]

or equivalently,

\[
\big( \varphi_0(x_{0,\mathrm{opt}}) + \varphi_0^*(y^*) - \langle y^*, x_{0,\mathrm{opt}} \rangle \big) + \big( \varphi_1(x_{1,\mathrm{opt}}) + \varphi_1^*(y^*) - \langle y^*, x_{1,\mathrm{opt}} \rangle \big) + \big( \varphi_2(x_{2,\mathrm{opt}}) + \varphi_2^*(y^*) - \langle y^*, x_{2,\mathrm{opt}} \rangle \big) = 0. \qquad (10)
\]

From the definition of the conjugate functions (see (4)) we have

\[
\varphi_j(x_{j,\mathrm{opt}}) + \varphi_j^*(y^*) - \langle y^*, x_{j,\mathrm{opt}} \rangle \ge 0, \qquad \forall j = 0, 1, 2.
\]

Taking (10) into account, we see that, in fact, we have the equalities

\[
\varphi_j(x_{j,\mathrm{opt}}) + \varphi_j^*(y^*) - \langle y^*, x_{j,\mathrm{opt}} \rangle = 0, \qquad \forall j = 0, 1, 2. \qquad (11)
\]

Moreover, for y* ∈ X_0^* ∩ X_1^* ∩ X_2^* the conjugates computed on the sum reduce to

\[
\varphi_j^*(y^*) = \sup_{x \in X_0 + X_1 + X_2} \big( \langle y^*, x \rangle - \varphi_j(x) \big) = \sup_{x \in X_j} \big( \langle y^*, x \rangle - \varphi_j(x) \big), \qquad j = 0, 1, 2.
\]

Since x_{j,opt} ∈ X_j (j = 0, 1, 2), from (11) we obtain (7).

Conversely, let us assume there exist y* ∈ X_0^* ∩ X_1^* ∩ X_2^* and a decomposition x = x̃_0 + x̃_1 + x̃_2 such that (7) is satisfied. Then from the definitions of φ_0, φ_1, φ_2 and their conjugates we have

\[
\begin{cases}
\varphi_0(\tilde{x}_0) = \langle y^*, \tilde{x}_0 \rangle - \varphi_0^*(y^*), \\
\varphi_1(\tilde{x}_1) = \langle y^*, \tilde{x}_1 \rangle - \varphi_1^*(y^*), \\
\varphi_2(\tilde{x}_2) = \langle y^*, \tilde{x}_2 \rangle - \varphi_2^*(y^*).
\end{cases}
\]

Then

\[
\varphi_0(\tilde{x}_0) + \varphi_1(\tilde{x}_1) + \varphi_2(\tilde{x}_2) = \langle y^*, \tilde{x}_0 + \tilde{x}_1 + \tilde{x}_2 \rangle - \big( \varphi_0^* + \varphi_1^* + \varphi_2^* \big)(y^*).
\]

Since, by Theorem 2, the conjugate of the infimal convolution is the sum of the conjugates, we obtain

\[
\varphi_0(\tilde{x}_0) + \varphi_1(\tilde{x}_1) + \varphi_2(\tilde{x}_2) = \langle y^*, x \rangle - (\varphi_0 \oplus \varphi_1 \oplus \varphi_2)^*(y^*). \qquad (12)
\]

By definition of the infimal convolution (3), it follows that, in particular,

\[
(\varphi_0 \oplus \varphi_1 \oplus \varphi_2)(x) \le \varphi_0(\tilde{x}_0) + \varphi_1(\tilde{x}_1) + \varphi_2(\tilde{x}_2).
\]

Then

\[
(\varphi_0 \oplus \varphi_1 \oplus \varphi_2)(x) \le \langle y^*, x \rangle - (\varphi_0 \oplus \varphi_1 \oplus \varphi_2)^*(y^*).
\]

Moreover, from the definition of a conjugate function we have

\[
(\varphi_0 \oplus \varphi_1 \oplus \varphi_2)(x) \ge \langle y^*, x \rangle - (\varphi_0 \oplus \varphi_1 \oplus \varphi_2)^*(y^*).
\]

Combining this with the previous inequality we obtain

\[
(\varphi_0 \oplus \varphi_1 \oplus \varphi_2)(x) = \langle y^*, x \rangle - (\varphi_0 \oplus \varphi_1 \oplus \varphi_2)^*(y^*).
\]

From this and (12) we conclude that

\[
(\varphi_0 \oplus \varphi_1 \oplus \varphi_2)(x) = \varphi_0(\tilde{x}_0) + \varphi_1(\tilde{x}_1) + \varphi_2(\tilde{x}_2).
\]

Therefore, the decomposition x = x̃_0 + x̃_1 + x̃_2 is optimal for (φ_0 ⊕ φ_1 ⊕ φ_2)(x). □

Below we assume that the triple (X_0, X_1, X_2) is regular, i.e. X_0 ∩ X_1 ∩ X_2 is dense in each of X_j, j = 0, 1, 2. Let x ∈ X_0 + X_1 + X_2 and let s, t > 0 be fixed parameters. We consider the following L-functional:

\[
L_{p_0,p_1,p_2}(s, t; x; X_0, X_1, X_2) = \inf_{x = x_0 + x_1 + x_2} \left( \frac{1}{p_0}\|x_0\|_{X_0}^{p_0} + \frac{s}{p_1}\|x_1\|_{X_1}^{p_1} + \frac{t}{p_2}\|x_2\|_{X_2}^{p_2} \right), \qquad (13)
\]

where 1 ≤ p_0 < ∞, 1 ≤ p_1 < ∞ and 1 ≤ p_2 < ∞. If the functions φ_j : X_j → R ∪ {+∞}, j = 0, 1, 2, are defined by

\[
\varphi_0(u) = \frac{1}{p_0}\|u\|_{X_0}^{p_0}, \qquad \varphi_1(u) = \frac{s}{p_1}\|u\|_{X_1}^{p_1} \qquad \text{and} \qquad \varphi_2(u) = \frac{t}{p_2}\|u\|_{X_2}^{p_2}, \qquad (14)
\]

then (see (5) and (6)) the L-functional (13) can be written as the infimal convolution

\[
L_{p_0,p_1,p_2}(s, t; x; X_0, X_1, X_2) = (\varphi_0 \oplus \varphi_1 \oplus \varphi_2)(x)
\]

on the space X_0 + X_1 + X_2. Since the function L_{p_0,p_1,p_2} is convex and continuous on X_0 + X_1 + X_2, Theorem 1 shows that it is subdifferentiable over the interior of its domain, which is equal to X_0 + X_1 + X_2. Next we use Propositions 1 and 2 to calculate the conjugate functions φ_j^* of φ_j, j = 0, 1, 2, and apply Theorem 3 to obtain the following characterization of the optimal decomposition for the L-functional.

Corollary 1. Let 1 < p_0, p_1, p_2 < ∞ and s, t > 0. Then the decomposition x = x_{0,opt} + x_{1,opt} + x_{2,opt} is optimal for the L_{p_0,p_1,p_2}-functional (13) if and only if there exists an element y* ∈ (X_0 + X_1 + X_2)^* ⊆ X_0^* ∩ X_1^* ∩ X_2^* such that

\[
\begin{cases}
\dfrac{1}{p_0}\|x_{0,\mathrm{opt}}\|_{X_0}^{p_0} = \langle y^*, x_{0,\mathrm{opt}} \rangle - \dfrac{1}{p_0'}\|y^*\|_{X_0^*}^{p_0'}, \\[2mm]
\dfrac{s}{p_1}\|x_{1,\mathrm{opt}}\|_{X_1}^{p_1} = \langle y^*, x_{1,\mathrm{opt}} \rangle - \dfrac{s}{p_1'}\left\|\dfrac{y^*}{s}\right\|_{X_1^*}^{p_1'}, \\[2mm]
\dfrac{t}{p_2}\|x_{2,\mathrm{opt}}\|_{X_2}^{p_2} = \langle y^*, x_{2,\mathrm{opt}} \rangle - \dfrac{t}{p_2'}\left\|\dfrac{y^*}{t}\right\|_{X_2^*}^{p_2'},
\end{cases} \qquad (15)
\]

where 1/p_j + 1/p_j' = 1, j = 0, 1, 2.

Remark 2. From the inequality a^p/p + b^{p'}/p' ≥ ab, a, b ≥ 0, in which the equality is attained only when b = a^{p−1}, it follows that the first condition in (15) is equivalent to ‖y*‖_{X_0^*} = ‖x_{0,opt}‖_{X_0}^{p_0−1} and ⟨y*, x_{0,opt}⟩ = ‖y*‖_{X_0^*} · ‖x_{0,opt}‖_{X_0}. Similarly, the second condition in (15) is equivalent to ‖y*‖_{X_1^*} = s‖x_{1,opt}‖_{X_1}^{p_1−1} and ⟨y*, x_{1,opt}⟩ = ‖y*‖_{X_1^*} · ‖x_{1,opt}‖_{X_1}, and the third condition in (15) is equivalent to ‖y*‖_{X_2^*} = t‖x_{2,opt}‖_{X_2}^{p_2−1} and ⟨y*, x_{2,opt}⟩ = ‖y*‖_{X_2^*} · ‖x_{2,opt}‖_{X_2}.

Let us now consider the special case when p_1 = p_2 = 1. As in the previous corollary, we obtain in this case the following result.

Corollary 2. Let 1 < p_0 < +∞ and s, t > 0. Then the decomposition x = x_{0,opt} + x_{1,opt} + x_{2,opt} is optimal for the L_{p_0,1,1}-functional if and only if there exists y* ∈ (X_0 + X_1 + X_2)^* ⊆ X_0^* ∩ X_1^* ∩ X_2^* such that ‖y*‖_{X_1^*} ≤ s, ‖y*‖_{X_2^*} ≤ t and

\[
\begin{cases}
\dfrac{1}{p_0}\|x_{0,\mathrm{opt}}\|_{X_0}^{p_0} = \langle y^*, x_{0,\mathrm{opt}} \rangle - \dfrac{1}{p_0'}\|y^*\|_{X_0^*}^{p_0'}, \\[2mm]
s\,\|x_{1,\mathrm{opt}}\|_{X_1} = \langle y^*, x_{1,\mathrm{opt}} \rangle, \\[1mm]
t\,\|x_{2,\mathrm{opt}}\|_{X_2} = \langle y^*, x_{2,\mathrm{opt}} \rangle,
\end{cases} \qquad (16)
\]

where 1/p_0 + 1/p_0' = 1.

Remark 3. In a similar way as in Corollary 2, one can show that the decomposition x = x_{0,opt} + x_{1,opt} + x_{2,opt} is optimal for the K-functional (1) if and only if there exists y* ∈ (X_0 + X_1 + X_2)^* ⊆ X_0^* ∩ X_1^* ∩ X_2^* such that ‖y*‖_{X_0^*} ≤ 1, ‖y*‖_{X_1^*} ≤ s, ‖y*‖_{X_2^*} ≤ t and

\[
\begin{cases}
\|x_{0,\mathrm{opt}}\|_{X_0} = \langle y^*, x_{0,\mathrm{opt}} \rangle, \\
s\,\|x_{1,\mathrm{opt}}\|_{X_1} = \langle y^*, x_{1,\mathrm{opt}} \rangle, \\
t\,\|x_{2,\mathrm{opt}}\|_{X_2} = \langle y^*, x_{2,\mathrm{opt}} \rangle.
\end{cases}
\]

1.4. A geometry of optimal decomposition for the triple (ℓ^p, X_1, X_2) on R^n

Let us consider the space R^n with the norm ‖x‖_{ℓ^p} and some Banach spaces X_1 and X_2 on R^n with the norms ‖·‖_{X_1} and ‖·‖_{X_2}, respectively. By X_1^* and X_2^* we denote the Banach spaces on R^n with the dual norms

\[
\|y\|_{X_1^*} = \sup_{\|x\|_{X_1} \le 1} \langle y, x \rangle \qquad \text{and} \qquad \|y\|_{X_2^*} = \sup_{\|x\|_{X_2} \le 1} \langle y, x \rangle, \qquad \text{where } \langle y, x \rangle = \sum_{i=1}^{n} y_i x_i.
\]

We consider the L_{p,1,1}-functional for the triple (ℓ^p, X_1, X_2), i.e.

\[
L_{p,1,1}(s, t; x; \ell^p, X_1, X_2) = \inf_{x = x_0 + x_1 + x_2} \left( \frac{1}{p}\|x_0\|_{\ell^p}^{p} + s\|x_1\|_{X_1} + t\|x_2\|_{X_2} \right),
\]

where s, t > 0 and 1 < p < +∞. Let F_0, F_1 and F_2 be functions defined on R^n by

\[
F_0(u) = \frac{1}{p}\|u\|_{\ell^p}^{p}, \qquad F_1(u) = s\|u\|_{X_1} \qquad \text{and} \qquad F_2(u) = t\|u\|_{X_2}. \qquad (17)
\]

To formulate the result, let us define the sets Ω_{s,X_1}, Ω_{t,X_2} by

\[
\Omega_{s,X_1} = \left\{ u \in \mathbb{R}^n : \nabla F_0(u) \in s B_{X_1^*} \right\}, \qquad \Omega_{t,X_2} = \left\{ u \in \mathbb{R}^n : \nabla F_0(u) \in t B_{X_2^*} \right\}, \qquad (18)
\]

where s B_{X_1^*} (resp. t B_{X_2^*}) is the ball of the dual space X_1^* (resp. X_2^*) of radius s (resp. t) centered at the origin.

Remark 4. (i) Notice that if the element x_{0,opt} of the optimal decomposition x = x_{0,opt} + x_{1,opt} + x_{2,opt} is equal to zero, then (see Corollary 2) so must be y*, x_{1,opt} and x_{2,opt}, and therefore x = 0. We assume in the sequel that x_{0,opt} ≠ 0.

(ii) Note that the condition (1/p)‖x_{0,opt}‖_{X_0}^{p} = ⟨y*, x_{0,opt}⟩ − (1/p')‖y*‖_{X_0^*}^{p'} in Corollary 2 can in this case be rewritten as

\[
\frac{1}{p}\|x_{0,\mathrm{opt}}\|_{\ell^p}^{p} = \langle y^*, x_{0,\mathrm{opt}} \rangle - \frac{1}{p'}\|y^*\|_{\ell^{p'}}^{p'},
\]

i.e.

\[
\frac{1}{p}\sum_{i=1}^{n} |x_{0,\mathrm{opt};i}|^{p} = \sum_{i=1}^{n} y_{*,i}\, x_{0,\mathrm{opt};i} - \frac{1}{p'}\sum_{i=1}^{n} |y_{*,i}|^{p'}. \qquad (19)
\]

But (1/p)|a|^p + (1/p')|b|^{p'} ≥ ab, and the equality holds only when b = |a|^{p−1} sgn(a), so the equality (19) is equivalent to

\[
y^* = |x_{0,\mathrm{opt}}|^{p-1} \operatorname{sgn}(x_{0,\mathrm{opt}}).
\]

Therefore y* = ∇F_0(x_{0,opt}), where F_0(x) = (1/p)‖x‖_{ℓ^p}^{p}, and from Corollary 2 it follows that x_{0,opt} ∈ Ω_{s,X_1} ∩ Ω_{t,X_2}.


Theorem 4. Let x ∈ R^n with optimal decomposition x = x_{0,opt} + x_{1,opt} + x_{2,opt} for the L_{p,1,1}(s, t; x; ℓ^p, X_1, X_2)-functional. Then

(1) If x_{0,opt} ∈ int(Ω_{s,X_1} ∩ Ω_{t,X_2}), then the optimal decomposition for the L_{p,1,1}(s, t; x; ℓ^p, X_1, X_2)-functional is given by x_{0,opt} = x and x_{1,opt} = x_{2,opt} = 0.

(2) If x_{0,opt} ∈ int(Ω_{s,X_1}) ∩ bd(Ω_{t,X_2}), then the optimal decomposition for the L_{p,1,1}(s, t; x; ℓ^p, X_1, X_2)-functional is given by x = x_{0,opt} + 0 + x_{2,opt} and is such that

\[
\langle x_{2,\mathrm{opt}}, \nabla F_0(x_{0,\mathrm{opt}}) \rangle = \|x_{2,\mathrm{opt}}\|_{X_2}\, \|\nabla F_0(x_{0,\mathrm{opt}})\|_{X_2^*} = t\,\|x_{2,\mathrm{opt}}\|_{X_2}. \qquad (20)
\]

(3) If x_{0,opt} ∈ int(Ω_{t,X_2}) ∩ bd(Ω_{s,X_1}), then the optimal decomposition for the L_{p,1,1}(s, t; x; ℓ^p, X_1, X_2)-functional is given by x = x_{0,opt} + x_{1,opt} + 0 and is such that

\[
\langle x_{1,\mathrm{opt}}, \nabla F_0(x_{0,\mathrm{opt}}) \rangle = \|x_{1,\mathrm{opt}}\|_{X_1}\, \|\nabla F_0(x_{0,\mathrm{opt}})\|_{X_1^*} = s\,\|x_{1,\mathrm{opt}}\|_{X_1}. \qquad (21)
\]

(4) If x_{0,opt} ∈ bd(Ω_{s,X_1}) ∩ bd(Ω_{t,X_2}), then the optimal decomposition for the L_{p,1,1}(s, t; x; ℓ^p, X_1, X_2)-functional is given by x = x_{0,opt} + x_{1,opt} + x_{2,opt} such that

\[
\begin{cases}
\langle x_{1,\mathrm{opt}}, \nabla F_0(x_{0,\mathrm{opt}}) \rangle = \|x_{1,\mathrm{opt}}\|_{X_1}\, \|\nabla F_0(x_{0,\mathrm{opt}})\|_{X_1^*} = s\,\|x_{1,\mathrm{opt}}\|_{X_1}, \\[1mm]
\langle x_{2,\mathrm{opt}}, \nabla F_0(x_{0,\mathrm{opt}}) \rangle = \|x_{2,\mathrm{opt}}\|_{X_2}\, \|\nabla F_0(x_{0,\mathrm{opt}})\|_{X_2^*} = t\,\|x_{2,\mathrm{opt}}\|_{X_2}.
\end{cases}
\]

Proof. (1) Suppose that the element x_{0,opt} is such that ‖∇F_0(x_{0,opt})‖_{X_1^*} < s and ‖∇F_0(x_{0,opt})‖_{X_2^*} < t. Then, from Corollary 2, if we take the element

\[
y^* = |x_{0,\mathrm{opt}}|^{p-1}\operatorname{sgn}(x_{0,\mathrm{opt}}) = \nabla F_0(x_{0,\mathrm{opt}}),
\]

we have that

\[
s\|x_{1,\mathrm{opt}}\|_{X_1} = \langle x_{1,\mathrm{opt}}, \nabla F_0(x_{0,\mathrm{opt}}) \rangle \le \|x_{1,\mathrm{opt}}\|_{X_1}\|\nabla F_0(x_{0,\mathrm{opt}})\|_{X_1^*} < s\|x_{1,\mathrm{opt}}\|_{X_1}
\]

and

\[
t\|x_{2,\mathrm{opt}}\|_{X_2} = \langle x_{2,\mathrm{opt}}, \nabla F_0(x_{0,\mathrm{opt}}) \rangle \le \|x_{2,\mathrm{opt}}\|_{X_2}\|\nabla F_0(x_{0,\mathrm{opt}})\|_{X_2^*} < t\|x_{2,\mathrm{opt}}\|_{X_2},
\]

which implies that x_{1,opt} = x_{2,opt} = 0 since s, t > 0. Thus the optimal decomposition is x = x + 0 + 0.

(2) Suppose that the element x_{0,opt} is such that ‖∇F_0(x_{0,opt})‖_{X_1^*} < s and ‖∇F_0(x_{0,opt})‖_{X_2^*} = t. Then, from Corollary 2, if we take the element

\[
y^* = |x_{0,\mathrm{opt}}|^{p-1}\operatorname{sgn}(x_{0,\mathrm{opt}}) = \nabla F_0(x_{0,\mathrm{opt}}),
\]

we have that

\[
s\|x_{1,\mathrm{opt}}\|_{X_1} = \langle x_{1,\mathrm{opt}}, \nabla F_0(x_{0,\mathrm{opt}}) \rangle \le \|x_{1,\mathrm{opt}}\|_{X_1}\|\nabla F_0(x_{0,\mathrm{opt}})\|_{X_1^*} < s\|x_{1,\mathrm{opt}}\|_{X_1},
\]

which implies that x_{1,opt} = 0 since s > 0. Note that in this case x_{2,opt} ≠ 0. Indeed, if x_{2,opt} = 0 then x_{0,opt} = x. Therefore, from the characterization of the optimal decomposition, there exists y* ∈ ∂F_0(x) = {∇F_0(x)} such that F_1^*(y*) = F_2^*(y*) = 0, which implies, by the definition of the functions F_1 and F_2, that y* ∈ sB_{X_1^*} ∩ tB_{X_2^*}, i.e. x_{0,opt} = x ∈ int(Ω_{s,X_1} ∩ Ω_{t,X_2}). Therefore x_{2,opt} must be different from zero in this case, and

\[
\langle x_{2,\mathrm{opt}}, \nabla F_0(x_{0,\mathrm{opt}}) \rangle = \|x_{2,\mathrm{opt}}\|_{X_2}\|\nabla F_0(x_{0,\mathrm{opt}})\|_{X_2^*} = t\|x_{2,\mathrm{opt}}\|_{X_2}.
\]

Therefore x = x_{0,opt} + 0 + x_{2,opt} is the optimal decomposition for the L_{p,1,1}(s, t; x; ℓ^p, X_1, X_2)-functional.

(3) Suppose that the element x_{0,opt} is such that ‖∇F_0(x_{0,opt})‖_{X_2^*} < t and ‖∇F_0(x_{0,opt})‖_{X_1^*} = s. Then, from Corollary 2, if we take the element

\[
y^* = |x_{0,\mathrm{opt}}|^{p-1}\operatorname{sgn}(x_{0,\mathrm{opt}}) = \nabla F_0(x_{0,\mathrm{opt}}),
\]

we have that

\[
t\|x_{2,\mathrm{opt}}\|_{X_2} = \langle x_{2,\mathrm{opt}}, \nabla F_0(x_{0,\mathrm{opt}}) \rangle \le \|x_{2,\mathrm{opt}}\|_{X_2}\|\nabla F_0(x_{0,\mathrm{opt}})\|_{X_2^*} < t\|x_{2,\mathrm{opt}}\|_{X_2},
\]

which implies that x_{2,opt} = 0 since t > 0. As above, in this case x_{1,opt} ≠ 0 and

\[
\langle x_{1,\mathrm{opt}}, \nabla F_0(x_{0,\mathrm{opt}}) \rangle = \|x_{1,\mathrm{opt}}\|_{X_1}\|\nabla F_0(x_{0,\mathrm{opt}})\|_{X_1^*} = s\|x_{1,\mathrm{opt}}\|_{X_1}.
\]

Therefore x = x_{0,opt} + x_{1,opt} + 0 is the optimal decomposition for the L_{p,1,1}(s, t; x; ℓ^p, X_1, X_2)-functional.

(4) Suppose that the element x_{0,opt} is such that ‖∇F_0(x_{0,opt})‖_{X_1^*} = s and ‖∇F_0(x_{0,opt})‖_{X_2^*} = t. As above, in this case both x_{1,opt} and x_{2,opt} must be different from zero. Then, if we take the element y* = |x_{0,opt}|^{p−1} sgn(x_{0,opt}) = ∇F_0(x_{0,opt}), we have that

\[
\begin{cases}
\langle x_{1,\mathrm{opt}}, \nabla F_0(x_{0,\mathrm{opt}}) \rangle = \|x_{1,\mathrm{opt}}\|_{X_1}\|\nabla F_0(x_{0,\mathrm{opt}})\|_{X_1^*} = s\|x_{1,\mathrm{opt}}\|_{X_1}, \\[1mm]
\langle x_{2,\mathrm{opt}}, \nabla F_0(x_{0,\mathrm{opt}}) \rangle = \|x_{2,\mathrm{opt}}\|_{X_2}\|\nabla F_0(x_{0,\mathrm{opt}})\|_{X_2^*} = t\|x_{2,\mathrm{opt}}\|_{X_2},
\end{cases}
\]

and conclude that x = x_{0,opt} + x_{1,opt} + x_{2,opt} is optimal for the L_{p,1,1}(s, t; x; ℓ^p, X_1, X_2)-functional. □
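The case analysis of Theorem 4 is easy to carry out numerically. The sketch below (an illustration of ours, not part of the paper) takes p = 2, X_1 = ℓ^1 and X_2 = ℓ^∞ on R^n, so that X_1^* = ℓ^∞ and X_2^* = ℓ^1, and decides, for a candidate first component x_0, which of the four cases applies by testing ∇F_0(x_0) against the two dual balls in (18). The tolerance and the chosen spaces are assumptions.

```python
# Illustrative case analysis for Theorem 4 with p = 2, X1 = l^1, X2 = l^inf on R^n.
import numpy as np

def classify_case(x0, s, t, p=2, tol=1e-12):
    grad = np.abs(x0) ** (p - 1) * np.sign(x0)     # grad F0(x0) = |x0|^{p-1} sgn(x0)
    in_s = np.max(np.abs(grad)) <= s + tol         # grad F0(x0) in s*B_{X1*} (l^inf ball)
    in_t = np.sum(np.abs(grad)) <= t + tol         # grad F0(x0) in t*B_{X2*} (l^1 ball)
    on_s = abs(np.max(np.abs(grad)) - s) <= tol    # on the boundary of Omega_{s,X1}
    on_t = abs(np.sum(np.abs(grad)) - t) <= tol    # on the boundary of Omega_{t,X2}
    if in_s and in_t and not (on_s or on_t):
        return "case (1): x1_opt = x2_opt = 0"
    if in_s and not on_s and on_t:
        return "case (2): x1_opt = 0"
    if in_t and not on_t and on_s:
        return "case (3): x2_opt = 0"
    if on_s and on_t:
        return "case (4): both x1_opt and x2_opt nonzero"
    return "x0 cannot be the first component of an optimal decomposition"

x0 = np.array([0.3, -0.2, 0.1])
print(classify_case(x0, s=0.5, t=1.0))
```

In practice such a test can be used to check a computed decomposition, for instance one produced by the solver sketched after Problem 1.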


2. Optimal decomposition for complex Banach couples

In the paper [7] it was shown that, for a couple of real Banach spaces, convex analysis can be used to characterize optimal decompositions for the K-, L- and E-functionals of real interpolation. However, real interpolation is also used for complex spaces. Therefore the natural question that arises is how to characterize optimal decompositions for complex spaces. In this situation some difficulties appear, because convex analysis is developed mainly for real spaces. Nevertheless, the natural correspondence existing between the space of functionals on a real Banach space and the space of functionals on a complex Banach space makes it possible to use the same approach. Below we will outline this correspondence.

Let E_C be a complex Banach space and let E_R be the same space with the same norm but considered as a real Banach space, in the sense that we restrict multiplication by scalars to real numbers only, instead of complex numbers. Let (E_C)* (resp. (E_R)*) be the dual space of E_C (resp. E_R), consisting of complex-valued (resp. real-valued) linear and bounded functionals f : E_C → C (resp. g : E_R → R). We intend to illustrate in this paragraph that the spaces (E_C)* and (E_R)* are isometric in some sense. Since f ∈ (E_C)* is complex valued, we can write f(x) = f_1(x) + i f_2(x), x ∈ E_C, where f_1 and f_2 are real valued. If for the moment we regard E_C as E_R, then, since f is linear and bounded on E_C, f_1 and f_2 are real-valued linear and bounded functionals defined on E_R. Furthermore, since f is linear, we have

\[
i f(x) = i\,[f_1(x) + i f_2(x)] = f(ix) = f_1(ix) + i f_2(ix).
\]

By comparing real and imaginary parts, we conclude that f_2(x) = −f_1(ix), so that

\[
f(x) = f_1(x) - i f_1(ix). \qquad (22)
\]

Clearly the mapping f ↦ f_1 is linear, with norm not greater than 1, from (E_C)* considered as a real space to the space (E_R)*. Hence, for every g ∈ (E_R)*, set

\[
f(x) = g(x) - i g(ix) \quad \text{for all } x \in E_C, \qquad (23)
\]

and let T : (E_R)* → (E_C)* be the operator defined by (23).

Lemma 1. The operator T : (E_R)* → (E_C)* defined by (23) is a bijective isometry from (E_R)* to (E_C)*.

Proof. First let us show that, for every g ∈ (E_R)*, the function f = Tg is linear and has the same norm as g. To prove linearity, let λ = α + iβ ∈ C. Then, by definition of f,

\[
f(\lambda x) = g(\lambda x) - i\, g(i \lambda x).
\]

Since g is linear on E_R,

\[
f(\lambda x) = \alpha g(x) + \beta g(ix) - i\,[\alpha g(ix) - \beta g(x)]
= (\alpha + i\beta)\, g(x) - i\,(\alpha + i\beta)\, g(ix)
= (\alpha + i\beta)\,[g(x) - i g(ix)] = \lambda f(x).
\]

So f(λx) = λ f(x) for all λ ∈ C, and therefore the function f = Tg is linear on E_C.

To show that T preserves the norm, we need to prove that ‖g‖_{(E_R)*} = ‖f‖_{(E_C)*}. On one hand we have that

\[
\|g\|_{(E_R)^*} = \sup_{\|x\|_{E_C} \le 1} |g(x)| = \sup_{\|x\|_{E_C} \le 1} |\mathrm{Re}\, f(x)| \le \sup_{\|x\|_{E_C} \le 1} |f(x)| = \|f\|_{(E_C)^*}.
\]

On the other hand, using the polar form (where θ(x) means that θ depends on x),

\[
f(x) = |f(x)|\, e^{i\theta(x)}, \quad \text{so that} \quad |f(x)| = f(x)\, e^{-i\theta(x)}.
\]

So, from the linearity of f, we have that

\[
|f(x)| = f\big(e^{-i\theta(x)} x\big).
\]

Since |f(x)| is real, f(e^{−iθ(x)} x) is also real, which means, from the definition of f, that

\[
|f(x)| = f\big(e^{-i\theta(x)} x\big) = g\big(e^{-i\theta(x)} x\big).
\]

Therefore

\[
\|f\|_{(E_C)^*} = \sup_{\|x\|_{E_C} \le 1} |f(x)| = \sup_{\|x\|_{E_C} \le 1} g\big(e^{-i\theta(x)} x\big) \le \sup_{\|y\|_{E_C} \le 1} |g(y)| = \|g\|_{(E_R)^*}.
\]

We conclude that ‖g‖_{(E_R)*} = ‖f‖_{(E_C)*}. It is clear that if g_1, g_2 ∈ (E_R)* with g_1 ≠ g_2 then Tg_1 = f_1 ≠ f_2 = Tg_2. Moreover, the operator T is onto. Indeed, from (22), for each f ∈ (E_C)* there exists g ∈ (E_R)* such that f = Tg. Hence T is a bijective isometry from (E_R)* to (E_C)*. □
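The correspondence in Lemma 1 can be checked numerically in finite dimensions. The sketch below is our own illustration (E_C = C^n and the particular form of g are assumptions): it starts from a real-linear functional g on C^n viewed as R^{2n}, forms f = Tg as in (23), and verifies that f is complex-linear and that Re f = g, as in (22).

```python
# Finite-dimensional numerical check of the map T: g -> f(x) = g(x) - i*g(ix).
import numpy as np

rng = np.random.default_rng(1)
n = 5
a = rng.normal(size=n)       # real-linear functional g(x) = <a, Re x> + <b, Im x>
b = rng.normal(size=n)

def g(x):                    # real-valued, real-linear in x
    return a @ x.real + b @ x.imag

def f(x):                    # T g, as in (23)
    return g(x) - 1j * g(1j * x)

x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = rng.normal(size=n) + 1j * rng.normal(size=n)
lam = 0.7 - 1.3j

print(np.isclose(f(lam * x + y), lam * f(x) + f(y)))   # complex linearity of f
print(np.isclose(f(x).real, g(x)))                      # Re f = g, as in (22)
```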

2.1. Some preliminaries and formalizations

Below we recall some well-known results from convex analysis that are needed for the proofs of the main results for a complex Banach space E with the norm ‖·‖_E and dual space E*. We will consider convex functions F : E → R ∪ {+∞}. The domain of the function F is a convex set dom F, defined by dom F = {x ∈ E : F(x) < +∞}.

The function F is said to be proper if dom F ≠ ∅. If the epigraph of a function F, i.e. the set

epi F={(x, λ) ∈E×R | F(x) ≤λ},

is closed then the function F is called lower semicontinuous (l.s.c.). We will define the conjugate function as follows:


Definition 3. The conjugate function of F is the function F* : E* → R ∪ {+∞} defined by

\[
F^*(y^*) = \sup_{x \in E} \{ \mathrm{Re}\,\langle y^*, x \rangle - F(x) \}. \qquad (24)
\]

Hence the subdifferentiability of a function will be defined as follows.

Definition 4. The function F : E → R ∪ {+∞} is said to be subdifferentiable at x ∈ dom F if the following set, named the subdifferential of F at x,

\[
\partial F(x) = \{ y^* \in E^* : F(w) \ge F(x) + \mathrm{Re}\,\langle y^*, w - x \rangle, \ \forall w \in E \}, \qquad (25)
\]

is nonempty.

Proposition 3. For x ∈ E and y* ∈ E*,

\[
y^* \in \partial F(x) \quad \text{if and only if} \quad F(x) = \mathrm{Re}\,\langle y^*, x \rangle - F^*(y^*). \qquad (26)
\]

Proof. Assume that y* ∈ ∂F(x) for some x ∈ dom F. Then for any w ∈ E we have that

\[
F(w) \ge F(x) + \mathrm{Re}\,\langle y^*, w - x \rangle = F(x) + \mathrm{Re}\,\langle y^*, w \rangle - \mathrm{Re}\,\langle y^*, x \rangle, \qquad (27)
\]

or equivalently,

\[
\mathrm{Re}\,\langle y^*, x \rangle - F(x) \ge \mathrm{Re}\,\langle y^*, w \rangle - F(w), \qquad \forall w \in E.
\]

It follows that

\[
\mathrm{Re}\,\langle y^*, x \rangle - F(x) \ge \sup_{w \in E} \big( \mathrm{Re}\,\langle y^*, w \rangle - F(w) \big) = F^*(y^*). \qquad (28)
\]

But, by definition, we also have that

\[
F^*(y^*) = \sup_{w \in E} \{ \mathrm{Re}\,\langle y^*, w \rangle - F(w) \} \ge \mathrm{Re}\,\langle y^*, x \rangle - F(x). \qquad (29)
\]

Combining (28) and (29), we conclude that

\[
F(x) = \mathrm{Re}\,\langle y^*, x \rangle - F^*(y^*). \qquad (30)
\]

Conversely, let us assume that F(x) = Re⟨y*, x⟩ − F*(y*). Then it follows that

\[
\mathrm{Re}\,\langle y^*, x \rangle - F(x) = F^*(y^*) = \sup_{w \in E} \{ \mathrm{Re}\,\langle y^*, w \rangle - F(w) \} \ge \mathrm{Re}\,\langle y^*, w \rangle - F(w)
\]

for all w ∈ E. Therefore

\[
\mathrm{Re}\,\langle y^*, x \rangle - F(x) \ge \mathrm{Re}\,\langle y^*, w \rangle - F(w), \qquad \forall w \in E,
\]

or equivalently

\[
F(w) \ge F(x) + \mathrm{Re}\,\langle y^*, w - x \rangle, \qquad \forall w \in E, \qquad (31)
\]

i.e. y* ∈ ∂F(x). □


We will say that y∗is dual to x with respect to F if y∗∈∂F(x).

Proposition 4. Let φ_0 and φ_1 be functions from E with values in R ∪ {+∞}. Then

\[
(\varphi_0 \oplus \varphi_1)^*(y^*) = \varphi_0^*(y^*) + \varphi_1^*(y^*). \qquad (32)
\]

Proof. By definition,

\[
(\varphi_0 \oplus \varphi_1)^*(y^*) = \sup_{x \in E} \Big( \mathrm{Re}\,\langle y^*, x \rangle - \inf_{x_0 \in E} \big[ \varphi_0(x_0) + \tau_{x_0}\varphi_1(x) \big] \Big), \quad \text{where } \tau_{x_0}\varphi_1(x) = \varphi_1(x - x_0). \qquad (33)
\]

We have that

\[
\begin{aligned}
(\varphi_0 \oplus \varphi_1)^*(y^*) &= \sup_{x \in E} \Big( \mathrm{Re}\,\langle y^*, x \rangle - \inf_{x_0 \in E} \big[ \varphi_0(x_0) + \tau_{x_0}\varphi_1(x) \big] \Big) \\
&= \sup_{x \in E} \sup_{x_0 \in E} \Big( \mathrm{Re}\,\langle y^*, x \rangle - \big[ \varphi_0(x_0) + \tau_{x_0}\varphi_1(x) \big] \Big) \\
&= \sup_{x_0 \in E} \Big( -\varphi_0(x_0) + \sup_{x \in E} \big[ \mathrm{Re}\,\langle y^*, x \rangle - \tau_{x_0}\varphi_1(x) \big] \Big) \\
&= \sup_{x_0 \in E} \Big( -\varphi_0(x_0) + (\tau_{x_0}\varphi_1)^*(y^*) \Big).
\end{aligned}
\]

But

\[
(\tau_{x_0}\varphi_1)^*(y^*) = \sup_{x \in E} \big( \mathrm{Re}\,\langle y^*, x \rangle - \varphi_1(x - x_0) \big) = \sup_{w \in E} \big( \mathrm{Re}\,\langle y^*, w + x_0 \rangle - \varphi_1(w) \big) = \varphi_1^*(y^*) + \mathrm{Re}\,\langle y^*, x_0 \rangle.
\]

So

\[
(\varphi_0 \oplus \varphi_1)^*(y^*) = \sup_{x_0 \in E} \big( -\varphi_0(x_0) + \varphi_1^*(y^*) + \mathrm{Re}\,\langle y^*, x_0 \rangle \big) = \varphi_1^*(y^*) + \sup_{x_0 \in E} \big( \mathrm{Re}\,\langle y^*, x_0 \rangle - \varphi_0(x_0) \big) = \varphi_1^*(y^*) + \varphi_0^*(y^*).
\]

We conclude that

\[
(\varphi_0 \oplus \varphi_1)^*(y^*) = \varphi_0^*(y^*) + \varphi_1^*(y^*). \qquad (34)
\]

□

The following theorem still holds. It guarantees subdifferentiability over the interior of the domain of the function.

Theorem 5. Let E be a complex Banach space and let F : E → (−∞, +∞] be a convex and lower semicontinuous function. Then F is subdifferentiable over the interior of its domain.


The result which gives the characterization of optimal decomposition can then be stated as follows.

Lemma 2. Let E be a complex Banach space and let φ_0 : E → R ∪ {+∞} and φ_1 : E → R ∪ {+∞} be convex proper functions. Suppose also that φ_0 ⊕ φ_1 is subdifferentiable at a given element x ∈ dom(φ_0 ⊕ φ_1). Then the decomposition x = x_{0,opt} + x_{1,opt} is optimal for φ_0 ⊕ φ_1 if and only if there exists y* ∈ E* that is dual to both x_{0,opt} and x_{1,opt} with respect to φ_0 and φ_1, respectively, i.e.

\[
\begin{cases}
\varphi_0(x_{0,\mathrm{opt}}) = \mathrm{Re}\,\langle y^*, x_{0,\mathrm{opt}} \rangle - \varphi_0^*(y^*), \\
\varphi_1(x_{1,\mathrm{opt}}) = \mathrm{Re}\,\langle y^*, x_{1,\mathrm{opt}} \rangle - \varphi_1^*(y^*).
\end{cases} \qquad (35)
\]

Proof. Let us assume that the decomposition x = x_{0,opt} + x_{1,opt} is optimal for (φ_0 ⊕ φ_1)(x). This means that

\[
(\varphi_0 \oplus \varphi_1)(x) = \inf_{x = x_0 + x_1} \{ \varphi_0(x_0) + \varphi_1(x_1) \} = \varphi_0(x_{0,\mathrm{opt}}) + \varphi_1(x_{1,\mathrm{opt}}). \qquad (36)
\]

Since φ_0 ⊕ φ_1 is subdifferentiable at x, there exists y* ∈ E* such that

\[
y^* \in \partial(\varphi_0 \oplus \varphi_1)(x).
\]

This is equivalent to

\[
(\varphi_0 \oplus \varphi_1)(x) = \mathrm{Re}\,\langle y^*, x \rangle - (\varphi_0 \oplus \varphi_1)^*(y^*). \qquad (37)
\]

From (36) and the formula for the conjugate of the infimal convolution (see Proposition 4), we have that (37) is equivalent to

\[
\varphi_0(x_{0,\mathrm{opt}}) + \varphi_1(x_{1,\mathrm{opt}}) = \mathrm{Re}\,\langle y^*, x \rangle - \varphi_0^*(y^*) - \varphi_1^*(y^*),
\]

or equivalently,

\[
\varphi_0(x_{0,\mathrm{opt}}) + \varphi_0^*(y^*) - \mathrm{Re}\,\langle y^*, x_{0,\mathrm{opt}} \rangle + \varphi_1(x_{1,\mathrm{opt}}) + \varphi_1^*(y^*) - \mathrm{Re}\,\langle y^*, x_{1,\mathrm{opt}} \rangle = 0. \qquad (38)
\]

From the definition of the conjugate functions (see (24)) we have

\[
\varphi_0(x_{0,\mathrm{opt}}) + \varphi_0^*(y^*) - \mathrm{Re}\,\langle y^*, x_{0,\mathrm{opt}} \rangle \ge 0
\]

and

\[
\varphi_1(x_{1,\mathrm{opt}}) + \varphi_1^*(y^*) - \mathrm{Re}\,\langle y^*, x_{1,\mathrm{opt}} \rangle \ge 0.
\]

Taking into account (38) we see that, in fact, we have the equalities

\[
\begin{cases}
\varphi_0(x_{0,\mathrm{opt}}) = \mathrm{Re}\,\langle y^*, x_{0,\mathrm{opt}} \rangle - \varphi_0^*(y^*), \\
\varphi_1(x_{1,\mathrm{opt}}) = \mathrm{Re}\,\langle y^*, x_{1,\mathrm{opt}} \rangle - \varphi_1^*(y^*).
\end{cases} \qquad (39)
\]

Conversely, let us assume that there exist y* ∈ E* and a decomposition x = x̃_0 + x̃_1 such that (35) is satisfied. Then

\[
\varphi_0(\tilde{x}_0) + \varphi_1(\tilde{x}_1) = \mathrm{Re}\,\langle y^*, \tilde{x}_0 + \tilde{x}_1 \rangle - \big( \varphi_0^*(y^*) + \varphi_1^*(y^*) \big).
\]

Since, by Proposition 4, the conjugate of the infimal convolution is the sum of the conjugates, we obtain

\[
\varphi_0(\tilde{x}_0) + \varphi_1(\tilde{x}_1) = \mathrm{Re}\,\langle y^*, x \rangle - (\varphi_0 \oplus \varphi_1)^*(y^*). \qquad (40)
\]

By definition of the infimal convolution (3), it follows that, in particular,

\[
(\varphi_0 \oplus \varphi_1)(x) \le \varphi_0(\tilde{x}_0) + \varphi_1(\tilde{x}_1).
\]

Then

\[
(\varphi_0 \oplus \varphi_1)(x) \le \mathrm{Re}\,\langle y^*, x \rangle - (\varphi_0 \oplus \varphi_1)^*(y^*).
\]

Moreover, from the definition of a conjugate function we have

\[
(\varphi_0 \oplus \varphi_1)(x) \ge \mathrm{Re}\,\langle y^*, x \rangle - (\varphi_0 \oplus \varphi_1)^*(y^*).
\]

Combining this with the previous inequality we obtain

\[
(\varphi_0 \oplus \varphi_1)(x) = \mathrm{Re}\,\langle y^*, x \rangle - (\varphi_0 \oplus \varphi_1)^*(y^*).
\]

From this and (40) we conclude that

\[
(\varphi_0 \oplus \varphi_1)(x) = \varphi_0(\tilde{x}_0) + \varphi_1(\tilde{x}_1).
\]

Therefore, the decomposition x = x̃_0 + x̃_1 is optimal for (φ_0 ⊕ φ_1)(x). □

2.2. Characterization of optimal decomposition

2.2.1. Optimal decomposition for the L–functional

Below we assume that the complex Banach couple (X_0, X_1) is regular, i.e. X_0 ∩ X_1 is dense in both X_0 and X_1. Let x ∈ X_0 + X_1 and let t > 0 be a fixed parameter. We consider the following L-functional:

\[
L_{p_0,p_1}(t, x; X_0, X_1) = \inf_{x = x_0 + x_1} \left( \frac{1}{p_0}\|x_0\|_{X_0}^{p_0} + \frac{t}{p_1}\|x_1\|_{X_1}^{p_1} \right), \qquad (41)
\]

where 1 ≤ p_0 < ∞ and 1 ≤ p_1 < ∞. We are interested in a characterization of the optimal decomposition for this L_{p_0,p_1}-functional, i.e. a characterization of x_{0,opt} ∈ X_0 and x_{1,opt} ∈ X_1 such that x = x_{0,opt} + x_{1,opt} and

\[
L_{p_0,p_1}(t, x; X_0, X_1) = \frac{1}{p_0}\|x_{0,\mathrm{opt}}\|_{X_0}^{p_0} + \frac{t}{p_1}\|x_{1,\mathrm{opt}}\|_{X_1}^{p_1}. \qquad (42)
\]

Theorem 6. Let 1 < p_0 < ∞ and 1 < p_1 < ∞. Then the decomposition x = x_{0,opt} + x_{1,opt} is optimal for the L-functional (41) if and only if there exists an element y* ∈ X_0^* ∩ X_1^* such that

\[
\begin{cases}
\dfrac{1}{p_0}\|x_{0,\mathrm{opt}}\|_{X_0}^{p_0} = \mathrm{Re}\,\langle y^*, x_{0,\mathrm{opt}} \rangle - \dfrac{1}{p_0'}\|y^*\|_{X_0^*}^{p_0'}, \\[2mm]
\dfrac{t}{p_1}\|x_{1,\mathrm{opt}}\|_{X_1}^{p_1} = \mathrm{Re}\,\langle y^*, x_{1,\mathrm{opt}} \rangle - \dfrac{t}{p_1'}\left\|\dfrac{y^*}{t}\right\|_{X_1^*}^{p_1'}.
\end{cases} \qquad (43)
\]


Proof. The L-functional (41) can be written as the infimal convolution

\[
L_{p_0,p_1}(t, x; X_0, X_1) = (\varphi_0 \oplus \varphi_1)(x),
\]

where the functions φ_0 and φ_1 are both defined on the sum X_0 + X_1 as follows:

\[
\varphi_0(u) = \begin{cases} \dfrac{1}{p_0}\|u\|_{X_0}^{p_0} & \text{if } u \in X_0, \\ +\infty & \text{if } u \in (X_0 + X_1) \setminus X_0, \end{cases} \qquad (44)
\]

and

\[
\varphi_1(u) = \begin{cases} \dfrac{t}{p_1}\|u\|_{X_1}^{p_1} & \text{if } u \in X_1, \\ +\infty & \text{if } u \in (X_0 + X_1) \setminus X_1. \end{cases} \qquad (45)
\]

The conjugate functions φ_0^* of φ_0 and φ_1^* of φ_1 are defined on X_0^* ∩ X_1^* and are given by

\[
\varphi_0^*(z) = \frac{1}{p_0'}\|z\|_{X_0^*}^{p_0'}, \qquad z \in X_0^* \cap X_1^*, \qquad (46)
\]

\[
\varphi_1^*(z) = \frac{t}{p_1'}\left\|\frac{z}{t}\right\|_{X_1^*}^{p_1'}, \qquad z \in X_0^* \cap X_1^*. \qquad (47)
\]

As the function L_{p_0,p_1} is convex and lower semicontinuous on X_0 + X_1, it follows from Theorem 5 that it is subdifferentiable on the interior of its domain, which is equal to X_0 + X_1. So from Lemma 2 we obtain that the decomposition x = x_{0,opt} + x_{1,opt} is optimal for the L_{p_0,p_1}-functional if and only if there exists y* ∈ (X_0 + X_1)^* = X_0^* ∩ X_1^* such that

\[
\begin{cases}
\varphi_0(x_{0,\mathrm{opt}}) = \mathrm{Re}\,\langle y^*, x_{0,\mathrm{opt}} \rangle - \varphi_0^*(y^*), \\
\varphi_1(x_{1,\mathrm{opt}}) = \mathrm{Re}\,\langle y^*, x_{1,\mathrm{opt}} \rangle - \varphi_1^*(y^*).
\end{cases} \qquad (48)
\]

Taking into account the formulas (44) and (45) for φ_0, φ_1, the fact that x ∈ dom(φ_0 ⊕ φ_1), and the formulas (46), (47) for their conjugates, we see that the conditions (48) can be written as (43). □

Remark 5. The conditions (43) are equivalent to ‖y*‖_{X_0^*} = ‖x_{0,opt}‖_{X_0}^{p_0−1}, ‖y*‖_{X_1^*} = t‖x_{1,opt}‖_{X_1}^{p_1−1} and Re⟨y*, x_{0,opt}⟩ = ‖y*‖_{X_0^*} · ‖x_{0,opt}‖_{X_0}, Re⟨y*, x_{1,opt}⟩ = ‖y*‖_{X_1^*} · ‖x_{1,opt}‖_{X_1}. Let us now consider a special but very important case.


2.2.2. The case when 1 < p_0 < +∞ and p_1 = 1

Consider the L-functional

\[
L_{p_0,1}(t, x; X_0, X_1) = \inf_{x = x_0 + x_1} \left( \frac{1}{p_0}\|x_0\|_{X_0}^{p_0} + t\|x_1\|_{X_1} \right),
\]

where 1 < p_0 < +∞. Let φ_0 and φ_1 be functions defined on the sum X_0 + X_1 as follows:

\[
\varphi_0(u) = \begin{cases} \dfrac{1}{p_0}\|u\|_{X_0}^{p_0} & \text{if } u \in X_0, \\ +\infty & \text{if } u \in (X_0 + X_1) \setminus X_0, \end{cases} \qquad (49)
\]

and

\[
\varphi_1(u) = \begin{cases} t\|u\|_{X_1} & \text{if } u \in X_1, \\ +\infty & \text{if } u \in (X_0 + X_1) \setminus X_1. \end{cases} \qquad (50)
\]

Then the L-functional can be written as the following infimal convolution:

\[
L_{p_0,1}(t, x; X_0, X_1) = (\varphi_0 \oplus \varphi_1)(x).
\]

Theorem 7. Let 1 < p_0 < +∞. Then the decomposition x = x_{0,opt} + x_{1,opt} is optimal for the L_{p_0,1}-functional if and only if there exists y* ∈ X_0^* ∩ X_1^* such that ‖y*‖_{X_1^*} ≤ t and

\[
\begin{cases}
\dfrac{1}{p_0}\|x_{0,\mathrm{opt}}\|_{X_0}^{p_0} = \mathrm{Re}\,\langle y^*, x_{0,\mathrm{opt}} \rangle - \dfrac{1}{p_0'}\|y^*\|_{X_0^*}^{p_0'}, \\[2mm]
t\,\|x_{1,\mathrm{opt}}\|_{X_1} = \mathrm{Re}\,\langle y^*, x_{1,\mathrm{opt}} \rangle.
\end{cases} \qquad (51)
\]

Proof. As the function L_{p_0,1} is convex and lower semicontinuous on X_0 + X_1, we obtain from Theorem 5 that it is subdifferentiable on the interior of its domain, which is equal to X_0 + X_1. Then, by Lemma 2, the decomposition x = x_{0,opt} + x_{1,opt} is optimal for the L_{p_0,1}-functional if and only if there exists y* ∈ (X_0 + X_1)^* = X_0^* ∩ X_1^* such that

\[
\begin{cases}
\varphi_0(x_{0,\mathrm{opt}}) = \mathrm{Re}\,\langle y^*, x_{0,\mathrm{opt}} \rangle - \varphi_0^*(y^*), \\
\varphi_1(x_{1,\mathrm{opt}}) = \mathrm{Re}\,\langle y^*, x_{1,\mathrm{opt}} \rangle - \varphi_1^*(y^*).
\end{cases} \qquad (52)
\]

Taking into account the formulas (49) and (50) for φ_0, φ_1, the fact that x ∈ dom(φ_0 ⊕ φ_1), and the formulas for their conjugates, we see that the conditions (52) are the same as the conditions (51). Moreover, the condition ‖y*‖_{X_1^*} ≤ t follows from φ_1^*(y*) < ∞. □

Remark 6. Similarly to Remark 2, we can show that the condition (1/p_0)‖x_{0,opt}‖_{X_0}^{p_0} = Re⟨y*, x_{0,opt}⟩ − (1/p_0')‖y*‖_{X_0^*}^{p_0'} of Theorem 7 is equivalent to ‖y*‖_{X_0^*} = ‖x_{0,opt}‖_{X_0}^{p_0−1} and Re⟨y*, x_{0,opt}⟩ = ‖y*‖_{X_0^*} · ‖x_{0,opt}‖_{X_0}.


Remark 7. A proof similar to the proof of Theorem 7 shows that the decomposition x = x_{0,opt} + x_{1,opt} is optimal for the K-functional if and only if there exists y* ∈ X_0^* ∩ X_1^* such that ‖y*‖_{X_0^*} ≤ 1, ‖y*‖_{X_1^*} ≤ t and

\[
\begin{cases}
\|x_{0,\mathrm{opt}}\|_{X_0} = \mathrm{Re}\,\langle y^*, x_{0,\mathrm{opt}} \rangle, \\
t\,\|x_{1,\mathrm{opt}}\|_{X_1} = \mathrm{Re}\,\langle y^*, x_{1,\mathrm{opt}} \rangle.
\end{cases}
\]

We would like to note that for the E-functional

\[
E(t, x; X_0, X_1) = \inf_{\|x_1\|_{X_1} \le t} \|x - x_1\|_{X_0}
\]

the result is similar, but the proof is a bit different.

Proposition 5. The decomposition x = x_{0,opt} + x_{1,opt}, where ‖x_{1,opt}‖_{X_1} ≤ t, is optimal for the E-functional if and only if there exists y* ∈ X_0^* ∩ X_1^* such that ‖y*‖_{X_0^*} ≤ 1 and

\[
\begin{cases}
\|x_{0,\mathrm{opt}}\|_{X_0} = \mathrm{Re}\,\langle y^*, x_{0,\mathrm{opt}} \rangle, \\
\mathrm{Re}\,\langle y^*, x_{1,\mathrm{opt}} \rangle = t\,\|y^*\|_{X_1^*}.
\end{cases}
\]
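As a simple illustration of Proposition 5 (our own example), take X_0 = X_1 = R and |x| > t. Then

\[
E(t, x; \mathbb{R}, \mathbb{R}) = |x| - t, \qquad x_{1,\mathrm{opt}} = t\,\operatorname{sgn}(x), \qquad x_{0,\mathrm{opt}} = \operatorname{sgn}(x)\,(|x| - t),
\]

and y_* = sgn(x) satisfies ‖y_*‖_{X_0^*} = 1, ‖x_{0,opt}‖_{X_0} = |x| − t = Re⟨y_*, x_{0,opt}⟩ and Re⟨y_*, x_{1,opt}⟩ = t = t‖y_*‖_{X_1^*}; for |x| ≤ t one can take x_{1,opt} = x, x_{0,opt} = 0 and y_* = 0, so that E(t, x) = 0.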

3. ROF model on the graph and Split Bregman algorithms

A typical example of variational models for image and signal denoising based on the minimization of energy functionals is known in image processing as the total variation (TV) regularization technique due to Rudin, Osher and Fatemi (ROF) (see [12]), posed on a 2-dimensional domain Ω in R^2 (for example Ω = [0, 1]^2). It suggests taking as an approximation to the original image f* the function f_{opt,t} ∈ BV(Ω), which is the exact minimizer for the L_{2,1}-functional for the couple (L^2, BV), namely

\[
L_{2,1}(t, f_{ob}; L^2, BV) = \inf_{g \in BV} \left( \frac{1}{2}\|f_{ob} - g\|_{L^2}^{2} + t\|g\|_{BV} \right), \qquad (53)
\]

for some t > 0, i.e. f_{opt,t} ∈ BV is such that

\[
L_{2,1}(t, f_{ob}; L^2, BV) = \frac{1}{2}\|f_{ob} - f_{opt,t}\|_{L^2}^{2} + t\|f_{opt,t}\|_{BV}. \qquad (54)
\]

In [8, 9] we consider the discrete analogue of the model (53) on the graph and propose an algorithm to construct the element f_{opt,t}. In the literature there are many numerical methods for the ROF model (see for example [2]) and, in our view, the Split Bregman iteration algorithm introduced in [6] is probably one of the best of these methods. The Split Bregman method can solve a very broad class of L^1-regularized problems of the form

\[
\min_{g} \; H(g) + \|\phi(g)\|_{L^1}. \qquad (55)
\]
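For orientation, we recall (in our own words) the iteration from [6]: after the splitting d = φ(g), the Split Bregman method alternates

\[
(g^{k+1}, d^{k+1}) = \arg\min_{g,\,d}\; H(g) + \|d\|_{L^1} + \frac{\lambda}{2}\big\| d - \phi(g) - b^k \big\|_{L^2}^2,
\qquad
b^{k+1} = b^k + \phi(g^{k+1}) - d^{k+1},
\]

where the joint minimization is itself carried out by alternating between g and d, the d-update being an explicit soft-thresholding (shrinkage) step.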


Remark 8. This category includes many important problems in engineering, computer science and imaging science.

It is clear that the ROF denoising problem can be posed as an L^1-regularized optimization problem:

\[
\min_{g} \; \frac{\mu}{2}\|f - g\|_{L^2}^{2} + \|g\|_{BV}, \qquad (56)
\]

where the BV seminorm of a differentiable function g is equal to ‖g‖_{BV} = ∫ |∇g|.
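To fix ideas, here is a minimal sketch of an anisotropic Split Bregman iteration for a discrete version of (56) with periodic boundary conditions. It is an illustrative re-implementation based on [6], not the code used for the experiments reported below, and the values of mu, lam and the iteration count are assumptions.

```python
# Minimal anisotropic Split Bregman sketch for the discrete ROF problem (56).
import numpy as np

def dx(u):  return np.roll(u, -1, axis=1) - u          # forward difference in x (periodic)
def dy(u):  return np.roll(u, -1, axis=0) - u          # forward difference in y (periodic)
def dxT(p): return np.roll(p, 1, axis=1) - p           # adjoint of dx
def dyT(p): return np.roll(p, 1, axis=0) - p           # adjoint of dy

def shrink(z, gamma):                                   # soft thresholding
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def rof_split_bregman(f, mu, lam, n_iter=100):
    H, W = f.shape
    # Fourier symbol of mu*I + lam*(dxT dx + dyT dy), used to solve the u-subproblem.
    wx = 2.0 - 2.0 * np.cos(2 * np.pi * np.fft.fftfreq(W))
    wy = 2.0 - 2.0 * np.cos(2 * np.pi * np.fft.fftfreq(H))
    denom = mu + lam * (wx[None, :] + wy[:, None])
    u = f.copy()
    d_x, d_y = np.zeros_like(f), np.zeros_like(f)
    b_x, b_y = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_iter):
        rhs = mu * f + lam * (dxT(d_x - b_x) + dyT(d_y - b_y))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))   # exact u-update via FFT
        d_x = shrink(dx(u) + b_x, 1.0 / lam)                   # shrinkage d-updates
        d_y = shrink(dy(u) + b_y, 1.0 / lam)
        b_x = b_x + dx(u) - d_x                                # Bregman variable updates
        b_y = b_y + dy(u) - d_y
    return u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = rof_split_bregman(noisy, mu=20.0, lam=10.0, n_iter=100)
```

The u-subproblem is solved exactly with the FFT because the finite-difference operators are diagonalized by the discrete Fourier transform under periodic boundary conditions; Gauss-Seidel sweeps, as in [6], would work as well.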

We will compare our algorithm with the Split Bregman algorithm on some test examples. In our algorithm, the parameter used is denoted by s and is equal to s = N × t for an image of size N by N (see [8] or [9] for more details). We find that both algorithms are comparable in the quality of the reconstructed images and that, on average, our algorithm requires fewer iterations to reach the same tolerance.
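For reference, a small helper for the error metric used in Figures 3 and 6 might look as follows (hypothetical; in practice the reference solution u_star would be a very accurate run of either algorithm):

```python
import numpy as np

def normalized_log_error(u_k, u_star):
    # log(||u_k - u*||_2 / ||u*||_2), the quantity plotted in Figures 3 and 6
    return np.log(np.linalg.norm(u_k - u_star) / np.linalg.norm(u_star))
```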


Figure 1: Lenna, the original is a 512×512 image with intensity values ranging from 0 to 255. Top left: Original image f*. Top right: Noisy image f_ob = f* + ε×randn(512,512), i.e. an image with additive Gaussian noise of standard deviation ε = 25. Bottom left: Split Bregman reconstruction. Bottom right: ROF model on the graph reconstruction.


Figure 2: Top: Residual image, i.e. the difference between the observed image and the reconstructed image, f_ob − f_opt,t. Top left: Residual from Split Bregman reconstruction. Top right: Residual from ROF model on the graph reconstruction. Bottom: The white shows, matrix componentwise, the parts of the reconstructed image f_opt,t where |f*_{i,j} − (f_opt,t)_{i,j}| ≥ 15, i, j = 1, 2, ..., N. Left: in the Split Bregman reconstruction. Right: in the ROF model on the graph reconstruction.


[Figure 3: plot of log(normalized error) versus iterations; curves: ROF on Graph and Split Bregman.]

Figure 3: Error vs. iteration number for the Split Bregman ROF minimization algorithm and for our ROF model on the graph algorithm. The error at iteration k is defined as log(‖u_k − u*‖_{ℓ^2}/‖u*‖_{ℓ^2}), where u_k is the approximation at iteration k and u* is the exact solution. Convergence results are for the test image "Lena" in Figure 1 with Gaussian noise (ε = 22.05). We see that, for example, at iteration 20 our algorithm achieves a normalized error log(‖u_20 − u*‖_{ℓ^2}/‖u*‖_{ℓ^2}) = −10, while the same error for Split Bregman is log(‖u_20 − u*‖_{ℓ^2}/‖u*‖_{ℓ^2}) = −8, i.e. ‖u_20 − u*‖_{ℓ^2}/‖u*‖_{ℓ^2} approximately equal to 4.42×10^{−5}.


Figure 4: Geometric features with a cusp, the original is a 512×512 image with regions of intensity values 150 and 100. Top left: Original image f*. Top right: Noisy image f_ob = f* + ε×randn(512,512), i.e. an image with additive Gaussian noise of standard deviation ε = 15.3. Bottom left: Split Bregman reconstruction using μ = 18.068. Bottom right: ROF model on the graph reconstruction.


Figure 5: Geometric features with a cusp. Top: Residual image, i.e. the difference between the observed image and the reconstructed image, f_ob − f_opt,t. Top left: Residual from Split Bregman reconstruction. Top right: Residual from ROF model on the graph reconstruction. Bottom: The white shows, matrix componentwise, the parts of the reconstructed image f_opt,t where |f*_{i,j} − (f_opt,t)_{i,j}| ≥ 15, i, j = 1, 2, ..., N. Left: in the Split Bregman reconstruction. Right: in the ROF model on the graph reconstruction.


[Figure 6: plot of log(normalized error) versus iterations; curves: ROF on Graph and Split Bregman.]

Figure 6: Error vs. iteration number for the Split Bregman ROF minimization algorithm and for our ROF model on the graph algorithm. The error at iteration k is defined as log(‖u_k − u*‖_{ℓ^2}/‖u*‖_{ℓ^2}), where u_k is the approximation at iteration k and u* is the exact solution. Convergence results are for the test image "an artificial geometric image" in Figure 4 with Gaussian noise (ε = 15.30). We see that, for example, at iteration 20 our algorithm achieves a normalized error log(‖u_20 − u*‖_{ℓ^2}/‖u*‖_{ℓ^2}) = −9.6, while the same error for Split Bregman is log(‖u_20 − u*‖_{ℓ^2}/‖u*‖_{ℓ^2}) = −8.3, i.e. ‖u_20 − u*‖_{ℓ^2}/‖u*‖_{ℓ^2} approximately equal to 6.25×10^{−5}.


References

[1] Aujol, J. F. and Chambolle, A. (2005). Dual norms and image decomposition. International Journal of Computer Vision, 63(1):85–104.

[2] Dahl, J., Hansen, P. C., Jensen, S. H., and Jensen, T. L. (2010). Algorithms and software for total variation image reconstruction via first-order methods. Numerical Algorithms, 53:67–92.

[3] Dore, G., Guidetti, D., and Venni, A. (1982). Some properties of the sum and the intersection of normed spaces. Atti Sem. Mat. Fis. Univ. Modena, XXXI:325–331.

[4] Ekeland, I. and Témam, R. (1999). Convex Analysis and Variational Problems. SIAM.

[5] Fonseca, I. and Leoni, G. (2007). Modern Methods in the Calculus of Variations: L^p Spaces. Springer.

[6] Goldstein, T. and Osher, S. (2009). The split Bregman method for L1-regularized problems. SIAM Journal on Imaging Sciences, 2(2):323–343.

[7] Kruglyak, N. and Niyobuhungiro, J. (2014). Characterization of optimal decompositions in real interpolation. Journal of Approximation Theory, 185:1–11.

[8] Niyobuhungiro, J. and Setterqvist, E. (2014a). A new reiterative algorithm for the Rudin-Osher-Fatemi denoising model on the graph. In The 2nd International Conference on Intelligent Systems and Image Processing 2014, ICISIP2014, Kitakyushu, Japan, September 26-29, 2014.

[9] Niyobuhungiro, J. and Setterqvist, E. (2014b). ROF model on the graph. Technical Report LiTH-MAT-R-2014/06-SE, Department of Mathematics, Linköping University.

[10] Peetre, J. (1970). A new approach in interpolation spaces. Studia Math., 34:23–42.

[11] Rockafellar, R. T. (1972). Convex Analysis. Princeton University Press, Princeton, New Jersey.

[12] Rudin, L. I., Osher, S., and Fatemi, E. (1992). Nonlinear total variation based noise removal algorithms. Physica D, 60:259–268.

[13] Strömberg, T. (1994). A Study of the Operation of Infimal Convolution. Luleå University of Technology.
