
Total Variation-regularization in fluorescence microscopy

JONATHAN FRANSSON

Master of Science Thesis

Stockholm, Sweden

2013


Total Variation-regularization in fluorescence microscopy

JONATHAN FRANSSON

Master's Thesis in Mathematics (30 ECTS credits)
Master Programme in Mathematics (120 credits)
Royal Institute of Technology, year 2013

Supervisor at KTH was Ozan Öktem
Examiner was Jan-Olov Strömberg

TRITA-MAT-E 2013:05 ISRN-KTH/MAT/E--13/05--SE

Royal Institute of Technology
School of Engineering Sciences
KTH SCI
SE-100 44 Stockholm, Sweden
URL: www.kth.se/sci


Abstract

For a few years now, there has been a boom in the development of nano-scale imaging of cells and sub-cellular structures. In short, the technique consists of making the specimen under study self-luminous when irradiated by a laser of a fixed wavelength. Higher radiation intensity results in a higher signal-to-noise ratio, which gives a higher-quality image with finer details. The drawback is the damage done to the object when it is irradiated with a high-intensity laser. Especially when imaging live cells, one is interested in doing as little damage as possible. As a consequence, the recorded image becomes noisy. Retrieving information from a noisy signal is a mathematical problem (an inverse problem).

This thesis focuses on developing software for reconstructing noisy confocal images, utilizing the mathematical theory of regularization. More specifically, the reconstruction is based on total variation regularization and its extension, Bregman iterations. The thesis presents some performance results of the software and also investigates how the regularization method depends on the choice of regularization parameter for different noise levels, dictated by the radiation intensity of the illuminating laser. The work has been carried out for the Department of Mathematics at KTH Royal Institute of Technology and the Advanced Light Microscopy facility at the Science for Life Laboratory in Stockholm.


Acknowledgments

I would like to take the chance to thank the people who made it possible for me to write my thesis in such an intriguing and interdisciplinary field. I am very thankful to Jan-Olov Strömberg, Professor in Mathematics at KTH, for taking an early interest in me and my interest in studying mathematics. I thank Ozan Öktem, Docent in Mathematics at KTH, for equipping me with extremely valuable skills for my future career. I would also like to thank Lars Filipsson, who encouraged me to study mathematics and inspired me as a teaching assistant in mathematics at KTH. And finally, even if they haven't been directly involved in the work of this thesis, I thank all my students for teaching me the important stuff.


List of Symbols

Mathematics

Symbol Description

$f, g$  Real continuous functions
$\mathbf{f}, \mathbf{g}, \mathbf{x}, \mathbf{y}$  Discrete real functions or elements of $\mathbb{R}^{m \times n}$
$\mathbf{n}$  The normal vector
$I$  $(n \times n)$ identity matrix or $n$-dimensional interval
$\|\cdot\|$  Norm
$x, y, c, C$  Scalars. If $\mathbf{x}$ is a vector in the same context, then $x = \|\mathbf{x}\|$
$\langle \cdot\,, \cdot \rangle$  Inner product (scalar product)
$\kappa(\cdot)$  Condition number of a matrix, or the curvature function used to locate the point of highest curvature of the L-curve
$\mathbb{R}, \mathbb{E}, \mathbb{H}$  Real, Euclidean and Hilbert space respectively
$\operatorname{Re}(\cdot), \operatorname{conj}(\cdot)$  Real part and conjugate of a complex expression respectively
$\mathcal{T}(\cdot)$  Forward model
$\mathcal{D}(\cdot)$  Discrepancy functional
$\mathcal{S}(\cdot)$  Regularization functional
$TV(\cdot)$  Total variation functional
$BV(\cdot)$  Space of bounded variation
$\mathcal{B}_{\mathcal{S}}^p(\cdot\,, \cdot)$  Bregman distance with regularization functional $\mathcal{S}$ and subdifferential element $p$
$\mathcal{F}(\cdot)$  Fourier transform
$\mathcal{K}(\cdot)$  Compact operator
$*$  Convolution operator
$T$  Convolution matrix
$H, \delta$  Heaviside function, Dirac delta
$\operatorname{Prob}(X = x)$  Probability of occurrence evaluated at $x$
$p_X(x)$  Discrete probability density function
$N(\mu, \sigma)$  Normal distribution


Physics

Symbol Description

$I(\cdot)$  Intensity of light evaluated at $x$
$PSF, P$  Point Spread Function
$J_1(\cdot)$  Bessel function of the first kind, first order
$N_\alpha$  Numerical aperture

M Optical magnification

λ Wavelength of light

U Complex amplitude of spherical wave

E Electric field

H Magnetic field


List of Figures

1.1 Sketch illustrating the basic setup of a confocal microscope . . . 13
1.2 Synthetically created Stimulated Emission Depletion Microscopy (STED) illumination spots. Left image shows the excited region, center image shows the depleted region, and right image is the combined illumination spot. . . . 14
1.3 Left figure illustrates a confocal micrograph of a dendrite with spine structures from a rat striatal neuron. Dopamine 1 receptors are labeled green and the sodium-potassium pump (Na,K-ATPase) red. Right figure illustrates a STED micrograph of the same specimen as in the left figure. . . . 14
4.1 Two signals: one as a dotted red line and the other as a solid black line. Both have the same vertical variation, but the red dotted one makes this variation quickly. The slow variation of the black solid line mimics smooth contrast variations in an image. The Total Variation (TV) functional defined in (4.1) only measures the height of the jumps; hence both curves are equally penalized in the regularization scheme (4.2). . . . 27
4.2 Left figure illustrates data: a confocal microscopy image representing a 2D section through a mouse colon specimen prepared with hematoxylin and eosin. The image was acquired using 63× optical magnification, 51 µm pinhole aperture, 134.86 µm voxel size, wavelength λ = 488 nm, and 2% laser intensity; see §6 for details. Right figure illustrates the deconvolved image (see §5.1.1) obtained by means of TV-regularization (α = 0.1 as regularization parameter and 60 iterations to solve the optimization problem in (4.2)). TV-regularization produces images with large mono-colored areas. . . . 28
4.3 Geometrical illustration of the Bregman distance. . . . 29
5.1 The naive approach to image reconstruction. Upper left figure is the true image (signal); upper right figure is the true image convolved with a Gaussian kernel. Lower left figure is the reconstruction from the unperturbed convolved image; lower right figure is the reconstruction from the perturbed convolved image, where a single pixel value has been contaminated by noise. . . . 35
5.2 Zero boundary conditions (BCs) applied to given data . . . 37
5.3 Example of reflective boundary conditions . . . 38
5.4 An example of periodic BCs . . . 39
5.5 Left image shows an example of out-of-focus blur; right image shows a blurry image where the blur is caused by shot noise (see p. 25). . . . 40
6.1 Confocal microscope . . . 43
6.2 Illustration of the imaging chain in a confocal microscope . . . 44
7.1 Typical behavior of an L-curve. The red dot marks the point of interest: the point of highest curvature. . . . 60
8.1 White Stripes data: top left figure is unperturbed data and bottom left figure is its blurred and noisy representation. Top and bottom right figures are the 3D representations of the respective left figures. . . . 63
8.2 Stairway to Heaven data: top left figure is unperturbed data and bottom left figure is its blurred and noisy representation. Top and bottom right figures are the 3D representations of the respective left figures. . . . 63
8.3 White Stripes results: top left and right figures show TV-regularization, α = 0.1, 100 iterations. Middle left and right figures show TV-regularization, α = 0.001, 100 iterations. Bottom left and right figures show TV-regularization, α = 0.1, with 3 extended Bregman iterations. . . . 64
8.4 Stairway to Heaven results: top left and right figures show TV-regularization, α = 1, 25 iterations. Bottom left and right figures show TV-regularization, α = 0.001, 100 iterations. . . . 65
8.5 Mouse colon specimen prepared with hematoxylin and eosin. The optical magnification is 63× and the pinhole aperture is 51 µm. Voxel size is 134.86 µm and λ is 488 nm. The upper left, upper right, lower left and lower right figures show the specimen exposed to laser intensities of 0.1, 0.5, 1 and 30% respectively. . . . 66
8.6 The left figures illustrate, from top to bottom, the L-curves for 0.1, 0.5, 1 and 30% exposing laser intensity respectively. The right figures illustrate the κ function for each respective left figure. The red dot denotes the point of highest curvature. . . . 67
8.7 Optimal regularization parameter α as a function of exposing laser intensity . . . 68
8.8 Top figure: U2OS cells transfected with Green Fluorescent Protein (GFP), recorded with 10× optical magnification and wide-open pinhole aperture. Voxel size is 0.29 µm, λ is 488 nm and exposing intensity is 1%. U2OS is an osteosarcoma cell line (from a patient with bone cancer). Bottom figure: the result of TV-regularization, α = 1, 60 iterations. . . . 69
8.9 Top figure: MDA431 cells transfected with GFP, recorded with 63× optical magnification and a pinhole aperture of 1 Airy unit. Voxel size is 0.095 µm, λ is 488 nm and exposing intensity is 3%. MDA431 is a human breast carcinoma cell line. Bottom figure: the result of TV-regularization, α = 1, 60 iterations. . . . 70
8.10 Top left figure: Bovine Pulmonary Artery Endothelial cells (cells that make up the walls of a lung artery blood vessel in calf), recorded with 10× optical magnification and a pinhole aperture of 1 Airy unit. Voxel size is 0.233 µm; λ is 405, 488 and 561 nm for each channel respectively. Laser intensity is 2.6%, 1% and 0.6% for each channel respectively. Cell nuclei are stained with DAPI (which binds to DNA), the actin filaments in the cells are detected with Alexa Fluor 488 fluorochrome attached to phalloidin, and the cell mitochondria are detected with the MitoTracker Orange fluorochrome. Top right, bottom left and bottom right figures are the red, green and blue channel representations of the top left figure respectively. . . . 71
8.11 Top figure: the red channel of Figure 8.10. Bottom left figure: the result of TV-regularization, α = 1, 100 iterations; bottom right figure: the result of 3 extended Bregman iterations. . . . 72
8.12 Top figure: the green channel of Figure 8.10. Bottom left figure: the result of TV-regularization, α = 1, 100 iterations; bottom right figure: the result of 2 extended Bregman iterations. . . . 73
8.13 Top figure: the blue channel of Figure 8.10. Bottom left figure: the result of TV-regularization, α = 1, 100 iterations; bottom right figure: the result of 2 extended Bregman iterations. . . . 74
8.14 Top left figure: the same type of specimen as Figure 8.10, with 10× optical magnification and 1 Airy unit pinhole aperture. Voxel size is 0.24 µm; λ is 405, 488 and 561 nm for each channel respectively. Laser intensity is 0.4%, 1.5% and 0.5% for each channel respectively. Top right, bottom left and bottom right figures are the red, green and blue channel representations of the top left figure respectively. . . . 75
8.15 Top figure: the red channel of Figure 8.14. Bottom left figure: the result of TV-regularization, α = 1, 100 iterations; bottom right figure: the result of 2 extended Bregman iterations. . . . 76
8.16 Top figure: the green channel of Figure 8.14. Bottom left figure: the result of TV-regularization, α = 0.1, 100 iterations; bottom right figure: the result of 2 extended Bregman iterations. . . . 77
8.17 Top figure: the green channel of Figure 8.14. Bottom left figure: the result of TV-regularization, α = 0.75, 100 iterations; bottom right figure: the result of 2 extended Bregman iterations. . . . 78
8.18 Top left figure: the same type of specimen as Figure 8.10, recorded with 63× optical magnification and wide-open pinhole aperture. Voxel size is 0.08 µm; λ is 405, 488 and 561 nm for each channel respectively. Laser intensity is 0.2%, 1% and 0.1% for each channel respectively. Top right, bottom left and bottom right figures are the red, green and blue channel representations of the top left figure respectively. . . . 79
8.19 Top figure: the green channel of Figure 8.18. Bottom left figure: the result of TV-regularization, α = 0.5, 60 iterations; bottom right figure: the result of 4 extended Bregman iterations. . . . 80
8.20 Top figure: the green channel of Figure 8.18. Bottom left figure: the result of TV-regularization, α = 1, 60 iterations; bottom right figure: the result of 4 extended Bregman iterations. . . . 81
8.21 Top figure: the green channel of Figure 8.18. Bottom left figure: the result of TV-regularization, α = 0.75, 60 iterations; bottom right figure: the result of 4 extended Bregman iterations. . . . 82

Contents

1 Introduction . . . 12
1.1 Fluorescence Microscopy . . . 12
1.1.1 Confocal Microscopy . . . 12
1.1.2 Stimulated Emission Depletion Microscopy (STED) . . . 13
1.2 Image reconstruction . . . 14

2 Inverse problems . . . 16
2.1 Definition . . . 16
2.2 Ill-posedness . . . 17
2.3 Discretization . . . 17

3 Regularization . . . 20
3.1 Concept of regularization . . . 20
3.2 Variational regularization . . . 21
3.3 Connection to statistical regularization . . . 22
3.3.1 The inverse problem in statistical regularization . . . 22
3.3.2 The Maximum a Posteriori probability (MAP) estimator . . . 23
3.3.3 Connection to variational regularization . . . 24

4 Total Variation-type of regularization . . . 26
4.1 Loss of contrast . . . 27
4.2 Iterated regularization . . . 28
4.2.1 The Bregman distance . . . 28
4.2.2 Bregman iterations . . . 29
4.2.3 Increasing contrast by Bregman iterations . . . 31

5 Image reconstruction . . . 32
5.1 Imaging as an inverse problem . . . 32
5.1.1 Ill-posedness of deconvolution . . . 33
5.2 Discretizing the convolution . . . 34
5.2.1 Boundary conditions . . . 36
5.3 Convolution, blurring and noise . . . 39

6 The forward model in confocal microscopy . . . 41
6.1 The inverse problem . . . 41
6.2 The forward model . . . 42
6.2.1 The experimental approach . . . 44
6.2.2 The theoretical approach . . . 46
6.3 Derivation of the Bessel PSF . . . 47
6.3.1 Maxwell's equations and the scalar wave equation . . . 47
6.3.2 Diffraction by an aperture . . . 49
6.3.3 Approximations to the Fresnel-Kirchhoff integral by Fresnel and Fraunhofer . . . 50
6.3.4 Fraunhofer diffraction over a circular aperture . . . 51

7 Implementation . . . 53
7.1 Applying the experimental Point Spread Function (PSF) as a Gaussian kernel . . . 53
7.2 Solving the inverse problem . . . 54
7.2.1 EM algorithm . . . 55
7.2.2 EM-TV algorithm . . . 56
7.2.3 Bregman-EM-TV algorithm . . . 57
7.2.4 Discrete derivatives . . . 58
7.3 Selecting the regularization parameter . . . 59
7.3.1 L-curve method . . . 60

8 Results . . . 62
8.1 Synthetic data - a first attempt . . . 62
8.1.1 Data . . . 62
8.1.2 Results . . . 62
8.2 L-curves . . . 65
8.3 Live cells . . . 68

9 Conclusions . . . 83
9.1 Future work . . . 84


Chapter 1

Introduction

1.1 Fluorescence Microscopy

To produce a magnified image of an object through a microscope, one needs to probe the object under study with an external light source. In a traditional setup, this is done with a light bulb illuminating the object from below.

An alternative technique is based on utilizing the phenomenon of fluorescence, which is the emission of light by a molecule (fluorophore) after it has absorbed light or other electromagnetic radiation. The idea is to add fluorophores to the object; when exposed to light of a specific wavelength, they are excited and emit light. The emitted light, which is usually of a different (longer) wavelength, is recorded. The light source used for illuminating the object needs to be coherent.

Next, we describe two fluorescence microscopy techniques, confocal and STED microscopy.

1.1.1 Confocal Microscopy

Figure 1.1 shows a schematic sketch of a confocal microscope, an instrument used for e.g. imaging live cells. The coherent light source used for illumination is a laser. To obtain a focused, bright illuminating spot, the light from the laser (the green rays in Figure 1.1) travels through a pinhole and passes a color filter. This light is refracted through a dichroic mirror (a mirror that only lets light within a small range of wavelengths pass [13]) before entering the microscope's objective, which focuses it onto the specimen. As described above, the specimen is prepared with fluorophores that in turn act as sources emitting light when excited by the illumination from the laser beam. The emitted light from the fluorophores is focused back through the objective, where it passes the dichroic mirror and is focused onto a detector in the image plane.

Just in front of the detector there is another pinhole whose size has a substantial impact on the image: it governs the amount of out-of-focus light that reaches the detector. The light that passes through this pinhole is recorded by a digital detector, usually a Photo Multiplier Tube (PMT), that is capable of detecting single photons. The PMT converts the incoming photons into an intensity, which becomes the intensity value of a single pixel. As the entire setup moves, new parts of the object are scanned, resulting in further pixels that eventually make up a digital image representing the distribution of fluorophores in the object (specimen). Such images are of interest since fluorophores are often designed to chemically bind to specific molecules within the specimen. Another advantage of using a pinhole is that it allows for optical sectioning: a confocal microscope essentially registers light emitted from a 2D section within an object. Thereby, one can image 2D sections of 3D objects and successively build up a 3D image of the object. For further reading on confocal and fluorescence microscopy, see e.g. [16, 20, 22, 18].

Figure 1.1: Sketch illustrating the basic setup of a confocal microscope

1.1.2 Stimulated Emission Depletion Microscopy (STED)

STED is a fluorescence microscopy technique that has revolutionized high-resolution imaging thanks to its ability to overcome the diffraction barrier. As in the case of confocal microscopy, described in the previous section, fluorophores are added to the specimen. When the specimen is illuminated by a laser, the illuminated fluorophores become excited and emit light during their excited state. However, due to diffraction, fluorophores in the vicinity of the focus point will also absorb energy and reach the excited state. The pinhole placed before the detector (Figure 1.1) excludes most of the light emitted from the neighboring fluorophores, but choosing the pinhole size involves a delicate trade-off between signal strength and image resolution. Now, instead of a pinhole,


Figure 1.2: Synthetically created STED illumination spots. Left image shows the excited region, center image shows the depleted region, and right image is the combined illumination spot.

STED microscopy uses the technique of stimulated emission to improve image resolution. During the excited state, another laser illuminates the same point of the object at the fluorescence wavelength, forcing excited fluorophores to their ground state. The STED illumination has a doughnut-shaped bright spot, only affecting the surroundings of the focus point, thereby reducing the size of the spot of emitted light. This phenomenon is illustrated in Figure 1.2, where the left image may also be seen as the confocal excitation spot. For further reading, see e.g. [19, §16.1].

Figure 1.3: Left figure illustrates a confocal micrograph of a dendrite with spine structures from a rat striatal neuron. Dopamine 1 receptors are labeled green and the sodium-potassium pump (Na,K-ATPase) red. Right figure illustrates a STED micrograph of the same specimen as in the left figure.

1.2 Image reconstruction

In many imaging experiments, including fluorescence microscopy, the data is the recorded pixel values in the microscope image. By fundamental signal theory, a certain amount of noise and distortion is always present. Hence, imaging can be seen as recording a noisy and distorted variant of the true image, the latter henceforth referred to as the signal.


Image reconstruction is in this context a procedure that seeks to recover the signal (true image) from the data (the microscopy image). It should not be confused with image enhancement, which is about emphasizing certain features of the image in order to please the observer. In contrast to image reconstruction, image enhancement does not necessarily take the scientific meaning of the image data into consideration.

Two features often pursued in image reconstruction are the reduction of image distortion and of noise. Reducing image distortion basically means reducing the image blur, which may be caused by numerous physical phenomena. The lack of sharpness is often the result of the optical objective not providing a perfectly refracted image. Image noise may originate from the detector during image collection, or it may be the result of a low Signal-to-Noise Ratio (SNR). The resulting image from the reconstruction process is called the reconstructed image, or simply the reconstruction.

Image reconstruction is a typical example of an inverse problem (see §2), where one wishes to extract, from available measured data, a better realization of some object under study. The methods for dealing with this type of problem originate from quite sophisticated mathematical theory, which is presented in §3.1.


Chapter 2

Inverse problems

This chapter presents the concept of an inverse problem from a mathematical point of view. The notions of an inverse and a forward problem are provided in §2.1, ill- and well-posedness are discussed in §2.2, and the concept of discretization is formally presented in §2.3.

2.1 Definition

In many fields of science, such as astronomy and medical imaging, one is in- terested in finding the cause for observed behavior of a system under study.

Formally, this can be stated as an inverse problem where we seek to recover information about a signal $f_{\text{true}} \in \mathcal{F}$ given data $g \in \mathcal{G}$, in which

$$g = \mathcal{T}(f_{\text{true}}) + g_{\text{noise}} \quad \text{with } \mathcal{T} : \mathcal{F} \to \mathcal{G}, \tag{2.1}$$

and where $\mathcal{T} : \mathcal{F} \to \mathcal{G}$ is the forward model and $g_{\text{noise}}$ is the noise component of the measured data. Hence, the forward model is the mathematical model for how the signal gives rise to data in the absence of noise and measurement errors.

Note that the concept of a signal is here used in the wide sense: it represents any quantifiable aspect of the system under study. Examples of signals are scalar parameters in a system of equations modeling the response of the system, or real- or complex-valued functions.

The set $\mathcal{F}$ in (2.1) is referred to as the reconstruction space and it contains all possible signals. It is often some infinite-dimensional Hilbert, or Banach, space of functions representing feasible signals. The set $\mathcal{G}$ is the data space and it contains all possible data. It is always a finite-dimensional vector space of real numbers, since one can only measure a finite number of data points. Hence, we can without loss of generality assume that $\mathcal{G} \subset \mathbb{R}^m$. The inverse problem is said to be linear if $\mathcal{T}$ is a linear mapping. If $\mathcal{T}$ is also unknown, then the corresponding inverse problem is referred to as system identification. A nice introduction to the theory of inverse problems is given in e.g. [1].

On a final note, we mention that the forward problem is to calculate the data caused by the signal, i.e. to find $g$ given $f_{\text{true}}$ and $\mathcal{T}$ by evaluating (2.1). This is e.g. required if one seeks to simulate a system's response to a given input.

Forward problems are often easier than inverse problems.

2.2 Ill-posedness

A naive approach to solving an inverse problem given as in (2.1) is to try to solve the equation for $f_{\text{true}}$. This approach commonly fails for a variety of reasons. First, the measured data $g$ may be outside the range of $\mathcal{T}$, meaning that (2.1) has no solution (non-existence). This can be due to modeling error ($\mathcal{T}$ is only an approximate model for the relation between signal and data) and/or to the data $g$ being distorted by $g_{\text{noise}}$, i.e. by noise and measurement errors. Next, even if (2.1) has a solution, it may not be unique (non-uniqueness), or it may be highly sensitive to perturbations in the data $g$ (instability). If any of these issues occur, we say that the inverse problem (2.1) is ill-posed. The formal definition, taken from [21, §2.1], is given below.

Definition 1. The problem of recovering $f_{\text{true}} \in \mathcal{F}$ given $g \in \mathcal{G}$ and the relation (2.1) is said to be well-posed if all of the following hold:

Existence & uniqueness: For each $g \in \mathcal{G}$ there exists a unique $f \in \mathcal{F}$, called a solution, such that (2.1) holds.

Stability: The solution $f$ depends continuously on $g$ in some reasonable topology, i.e. if $\mathcal{T}(f) = g$ and $\mathcal{T}(h) = g + \Delta g$, then $f \to h$ in $\mathcal{F}$ whenever $\Delta g \to 0$ in $\mathcal{G}$.

2.3 Discretization

When doing computations, we can ultimately only work with finite-dimensional objects (elements of vector spaces). As described in §2.1, data is always such a finite-dimensional object, since $\mathcal{G} \subset \mathbb{R}^m$ where $m$ is the total number of data points. The reconstruction space $\mathcal{F}$ can however be infinite-dimensional. In that case, an implementation must at some point consider a finite-dimensional realization of the reconstruction space. This is the process of discretization. Formally, discretization simply relates an element in $\mathcal{F}$ to a vector in $\mathbb{R}^n$, and


vice versa, by means of two mappings:

$$\pi : \mathcal{F} \to \mathbb{R}^n \quad \text{and} \quad \tau : \mathbb{R}^n \to \mathcal{F}.$$

Note that $\pi$ is not 1-1, since $\mathcal{F}$ is infinite-dimensional whereas $\mathbb{R}^n$ is $n$-dimensional. The mapping $\tau$ above is still a kind of "inverse" to $\pi$ in that it relates an element in $\mathbb{R}^n$ to an element in $\mathcal{F}$. We will require that

$$\lim_{n \to \infty} \big\| (\tau \circ \pi)(f) - f \big\|_{\mathcal{F}} = 0.$$

This simply states that, in the limit as the dimension of the discretization tends to $\infty$, the discretized element converges to the non-discretized one in the norm.

Given the mappings $\pi$ and $\tau$ from the discretization of $\mathcal{F}$, we can then define the corresponding discretized forward model

$$T : \mathbb{R}^n \to \mathbb{R}^m \quad \text{as} \quad T := \mathcal{T} \circ \tau.$$

Then, by definition, the following diagram commutes:

$$\begin{array}{ccc}
\mathcal{F} & \overset{\mathcal{T}}{\longrightarrow} & \mathcal{G} \subset \mathbb{R}^m \\
{\scriptstyle \pi}\big\downarrow & \nearrow{\scriptstyle T} & \\
\mathbb{R}^n & &
\end{array}$$

The discretized inverse problem corresponding to (2.1) is now given as follows:

Definition 2 (Discretized inverse problem). Recover (an estimate of) $f_{\text{true}} \in \mathbb{R}^n$ given $g \in \mathbb{R}^m$ where

$$g = T(f_{\text{true}}) + g_{\text{noise}} \quad \text{with } T : \mathbb{R}^n \to \mathbb{R}^m. \tag{2.2}$$

When $T$ is a linear operator, it is natural to identify it with its matrix representation, resulting in the following matrix equation formulation of the inverse problem:

$$g = T \cdot f_{\text{true}} + g_{\text{noise}}. \tag{2.3}$$

A computational issue is that the matrix $T$ is often too large to be kept in memory. We will avoid methods for solving (2.3) that assume that $T$ can be stored in memory.

Note now that the concept of stability in Definition 1 can be phrased in terms of the condition number of $T$. The linear inverse problem in Definition 2 is ill-posed if the condition number $\kappa$ of $T$ is high; $\kappa$ describes the relationship between the relative errors in the data before and after the action of $T$. For an invertible matrix $T$ it is given by

$$\kappa = \|T\| \cdot \|T^{-1}\|,$$

where $\|\cdot\|$ is the matrix norm. A numerical illustration is given below.
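As a quick numerical illustration (not from the thesis software; the image size and Gaussian kernel width are arbitrary choices here), one can build a small 1D convolution matrix and inspect its condition number:

```python
import numpy as np

# Build a small 1D convolution matrix T for a normalized Gaussian kernel
# (periodic boundary conditions) and inspect its condition number.
n = 64
x = np.arange(n)
sigma = 2.0
kernel = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
kernel /= kernel.sum()                     # normalize: the kernel sums to one

# Row i of T holds the kernel centered at pixel i.
T = np.array([np.roll(kernel, i - n // 2) for i in range(n)])

print(np.linalg.cond(T))   # kappa = ||T|| * ||T^{-1}||; huge => ill-posed
```

Even for this tiny, noise-free example the condition number is astronomically large, so a direct solve of (2.3) would amplify data errors beyond recognition.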


Example from imaging. In the case of imaging, elements in $\mathcal{F}$ are functions defined on some subset $\Omega \subset \mathbb{R}^k$ with $k = 2$ (2D images) or $k = 3$ (3D images). Images are usually digitized by means of pixels/voxels. This provides a discretization given by sampling a function at a fixed finite set of points $\Sigma := \{x_1, \ldots, x_n\} \subset \Omega$, i.e.

$$\pi(f) := \big( f(x_1), \ldots, f(x_n) \big) \in \mathbb{R}^n.$$

The points in $\Sigma$ are here simply points associated to pixels/voxels (e.g. midpoints), so the image of $\pi$ is simply the digitized image. A natural "inverse" is then given by

$$\tau(f)(x) := \sum_i f_i \, \phi_i(x) \quad \text{for a fixed set } \{\phi_i\}_i \subset \mathcal{F}.$$

Note here that $\tau(f) \in \mathcal{F}$ is a function constructed as a finite linear combination of the functions $\phi_i$. The coefficient $f_i$ denotes the $i$:th component of the vector $f$, and as such it represents the value of the function at the $i$:th pixel/voxel. The function $\phi_i$ is the characteristic function of the $i$:th pixel/voxel.


Chapter 3

Regularization of ill-posed inverse problems

Here, basic concepts from general regularization theory are introduced. The concept of a regularization method is defined in §3.1, variational regularization methods are introduced in §3.2, and their relation to statistical regularization is given in §3.3.2.

3.1 Concept of regularization

The inverse problem in §2.1 was stated as the problem of solving

$$g = \mathcal{T}(f_{\text{true}}) + g_{\text{noise}} \quad \text{with } \mathcal{T} : \mathcal{F} \to \mathbb{R}^m$$

for $f_{\text{true}} \in \mathcal{F}$ given $g \in \mathcal{G} \subset \mathbb{R}^m$. If this problem is ill-posed, the above equation might not have any solution, e.g. if $g$ is not in the range of $\mathcal{T}$ due to the presence of $g_{\text{noise}}$. A common approach is therefore to relax the notion of a solution, e.g. by considering least-squares solutions:

$$\min_{f \in \mathcal{F}} \big\| \mathcal{T}(f) - g \big\|_{\mathcal{G}}.$$

More generally, we could consider Maximum Likelihood (ML) solutions:

$$f_{\text{ML}} := \operatorname*{argmin}_{f \in \mathcal{F}} \, \mathcal{D}\big(\mathcal{T}(f), g\big) \tag{3.1}$$

where $\mathcal{D} : \mathcal{G} \times \mathcal{G} \to \mathbb{R}_+$ is the negative log-likelihood of the random variable with values in $\mathcal{G}$ that models measured data. This might take care of the issue of existence, but the issues of uniqueness and stability remain. Since (2.1) is ill-posed, the above minimization problem could have infinitely many solutions. Furthermore, these do not necessarily depend continuously on data. A solution of the above kind therefore often has undesired features due to ill-conditioning, and we need a method for selecting a unique solution that is robust against noise and errors in data.

A regularization method for an ill-posed problem of the above kind constructs a family of well-posed problems whose solutions "converge" to an ML-solution of the ill-posed problem as the data error goes to zero. The formal definition is as follows:

Definition 3 (Regularization method). Consider the inverse problem in Definition 2 and assume the forward operator $T$ is linear and bounded. A regularization method for this problem is a parametrized family of linear and bounded operators $\mathcal{R}_\alpha : \mathbb{R}^m \to \mathbb{R}^n$ ($\alpha > 0$) with the following properties:

(a) For all $g \in \mathcal{G}$ and any $\alpha > 0$, there exists a unique $f_\alpha \in \mathcal{F}$ such that $\mathcal{R}_\alpha(g) = f_\alpha$.

(b) $\displaystyle \lim_{\alpha \to 0} \mathcal{R}_\alpha\big(T(f)\big) = f$ for all $f \in \mathcal{F}$.

(c) Define $g_{\text{exact}} := T(f_{\text{true}})$, i.e. $g_{\text{exact}} \in \mathcal{G}$ is the exact data in the absence of noise and measurement errors. Next, assume

$$\big\| g - T(f_{\text{true}}) \big\|_{\mathcal{G}} \leq \delta \quad \text{for some } \delta > 0.$$

Then, for all $\alpha > 0$ there exists an $\epsilon > 0$, depending on $\alpha$, such that

$$\big\| \mathcal{R}_\alpha(g_{\text{exact}}) - \mathcal{R}_\alpha(g) \big\|_{\mathcal{F}} \leq \epsilon.$$

We refer to e.g. [21, Definition 2.18] for a generalization of the concept of a regularization method that does not assume a linear and bounded operator $\mathcal{T}$. Regularization methods may be constructed quite differently, and their suitability for the problem at hand depends on their ability to account for a priori knowledge about the true signal $f_{\text{true}}$. In §3.3 we present some examples and point to different kinds of a priori knowledge about $f_{\text{true}}$.

3.2 Variational regularization

Variational regularization of the inverse problem in (2.1) is given by (see e.g. [21, §3])

$$\mathcal{R}_\alpha(g) := \operatorname*{argmin}_{f \in \mathcal{F}} \Big\{ \mathcal{D}\big(\mathcal{T}(f), g\big) + \alpha \mathcal{S}(f) \Big\} \tag{3.2}$$

where $\mathcal{D} : \mathcal{G} \times \mathcal{G} \to \mathbb{R}_+$ is called the data discrepancy functional, $\mathcal{S} : \mathcal{F} \to \mathbb{R}_+$ is called the regularization functional (or penalty term), and $\alpha > 0$ is called the regularization parameter. The entire functional in the above optimization problem is often called the Tikhonov functional.


The data discrepancy functional in (3.2) describes the "distance" between the synthetic data $\mathcal{T}(f)$ related to a solution candidate $f$ and the measured data $g$. It should be chosen as the negative log-likelihood of the random variable with values in $\mathcal{G}$ that models measured data. In that case, solving (3.2) with $\alpha = 0$ is equivalent to seeking an ML-solution, i.e.

$$\mathcal{R}_0(g) = f_{\text{ML}}, \quad \text{where } f_{\text{ML}} \text{ is defined in (3.1)}.$$

Now, the role of $\mathcal{S}$ in (3.2) is to enforce uniqueness and robustness, thereby enabling $\mathcal{R}_\alpha$ to fulfill the formal criteria of a regularization method as defined in §3.1. The regularization parameter $\alpha$ weights the regularization term against the data discrepancy term: a large $\alpha$ regularizes the problem heavily, while letting $\alpha$ equal zero brings us back to (3.1). The choice of $\alpha$ is thereby essential in solving inverse problems, and the notion of a "large" $\alpha$ is highly problem-dependent.

When $\mathcal{F}$ is a function space, as is the case in imaging, it is a subset of some suitably chosen infinite-dimensional Hilbert, or Banach, space. The mathematical theory for regularization, and in particular variational regularization, then becomes very technical: the optimization problem in (3.2) is an optimization over an infinite-dimensional space, and functional analysis combined with convex analysis is needed to properly check whether the criteria for a regularization method are fulfilled. An alternative is to first discretize $\mathcal{F}$ (see §2.3) and then regularize. We can then assume a finite-dimensional reconstruction space $\mathcal{F}$, which makes the mathematical analysis much less technical. This is the approach we take.

3.3 Connection to statistical regularization and the MAP estimator

Consider the case when $\mathcal{F} \subset \mathbb{R}^n$, i.e. if necessary we have discretized the reconstruction space $\mathcal{F}$ (see §2.3). The corresponding inverse problem is then given as in Definition 2. We will now explore the connection between variational regularization and the MAP estimator in statistical regularization.

3.3.1 The inverse problem in statistical regularization

In statistical regularization one has a somewhat different take on what is meant by a solution to an inverse problem.

Definition 4 (Statistical reconstruction problem). Let $F$ be an $n$-dimensional random variable defined on $\mathcal{F} \subset \mathbb{R}^n$, so its sample values represent possible signals. Next, let $G$ be an $m$-dimensional random variable defined on $\mathcal{G} \subset \mathbb{R}^m$, so its sample values represent possible data. Assume furthermore that we are given the following:

(a) A single sample $g \in \mathcal{G}$ of $G$.


(b) The likelihood of data $g \mapsto \pi_{\text{likelihood}}(g \mid f)$, which is the probability distribution function of $G$ conditioned on $F = f$, i.e.

$$\operatorname{Prob}\big(G \in U \mid F = f\big) = \int_U \pi_{\text{likelihood}}(g \mid f) \, dg \quad \text{for } U \subset \mathcal{G}.$$

Note that the forward model $T$ in (2.2) will arise in the likelihood of data.

(c) The prior distribution of the signal $f \mapsto \pi_{\text{prior}}(f)$, which is the probability distribution of $F$, i.e.

$$\operatorname{Prob}\big(F \in V\big) = \int_V \pi_{\text{prior}}(f) \, df \quad \text{for } V \subset \mathcal{F}.$$

Then we seek to recover the posterior probability distribution $f \mapsto \pi_{\text{posterior}}(f \mid g)$, which is the probability distribution function of $F$ conditioned on $G = g$, i.e.

$$\operatorname{Prob}\big(F \in V \mid G = g\big) = \int_V \pi_{\text{posterior}}(f \mid g) \, df \quad \text{for } V \subset \mathcal{F}.$$

3.3.2 The MAP estimator

Often it is not feasible to recover the entire posterior probability distribution. This is especially the case when $\mathcal{G} \subset \mathbb{R}^m$ and $\mathcal{F} \subset \mathbb{R}^n$ are high-dimensional. It is also desirable that the solution be a signal in the reconstruction space $\mathcal{F}$ instead of a probability density defined on that space. In such cases one has to settle for an estimator, and in this setting the Maximum a Posteriori probability (MAP) estimator has proven useful.

The MAP estimator $f_{\text{MAP}} \in \mathcal{F}$ is simply defined as the element in $\mathcal{F}$ that maximizes the posterior probability distribution, i.e.

$$f_{\text{MAP}} := \operatorname*{argmax}_{f \in \mathcal{F}} \, \pi_{\text{posterior}}(f \mid g). \tag{3.3}$$

This choice of estimator corresponds to selecting the signal that is most likely given the measured data. In comparison, the ML estimator $f_{\text{ML}}$ is given as

$$f_{\text{ML}} := \operatorname*{argmax}_{f \in \mathcal{F}} \, \pi_{\text{likelihood}}(g \mid f),$$

which corresponds to selecting the signal that gives rise to the most likely data. In the case of ill-posed problems, this often results in overfitting.


3.3.3 Connection to variational regularization

Now, $f_{\text{MAP}}$ can be calculated using Bayes' law:

$$f_{\text{MAP}} = \operatorname*{argmax}_{f \in \mathcal{F}} \left\{ \frac{\pi_{\text{likelihood}}(g \mid f)\, \pi_{\text{prior}}(f)}{\pi_{\text{data}}(g)} \right\} = \operatorname*{argmax}_{f \in \mathcal{F}} \Big\{ \pi_{\text{likelihood}}(g \mid f)\, \pi_{\text{prior}}(f) \Big\} \tag{3.4}$$

whenever the probability distribution of the data $G$ is non-zero, i.e. $\pi_{\text{data}}(g) \neq 0$. Note that (3.4) reduces to the ML estimator if the prior distribution of $F$ is uniform.

For the prior probability distribution, a common choice is a Gibbs prior:

$$\pi_{\text{prior}}(f) \propto \exp\big(-\alpha \mathcal{S}(f)\big). \tag{3.5}$$

The time is now ripe to see the connection between variational regularization, given as in (3.2), and the MAP estimator given in (3.4). If in (3.2) we set

$$\mathcal{D}\big(T(f), g\big) := -\log \pi_{\text{likelihood}}(g \mid f), \qquad \alpha \mathcal{S}(f) := -\log \pi_{\text{prior}}(f), \tag{3.6}$$

with $\pi_{\text{prior}}$ given as a Gibbs prior in (3.5), then

$$f_{\text{MAP}} = \operatorname*{argmin}_{f \in \mathcal{F}} \Big\{ \mathcal{D}\big(T(f), g\big) + \alpha \mathcal{S}(f) \Big\},$$

i.e. $f_{\text{MAP}} = \mathcal{R}_\alpha(g)$ with $f_{\text{MAP}}$ given as in (3.4) and $\mathcal{R}_\alpha(g)$ as in (3.2). Note that the choice of data discrepancy functional $\mathcal{D}$ depends on the statistical properties of the data, whereas the choice of regularization functional depends on the a priori assumptions on the signal. In the following two paragraphs we explicitly write down $\mathcal{D}$ for two commonly occurring cases.

Additive Gaussian noise. Here we assume that measured data is contaminated with additive Gaussian noise, so

$$G = T(f_{\text{true}}) + \epsilon$$

where $\epsilon$ is a normally distributed random vector. Assuming that all components of the noise vector $\epsilon$ are independent and identically distributed with zero mean and standard deviation $\sigma$, i.e. $\epsilon_i \sim N(0, \sigma)$, yields

$$\pi_{\text{likelihood}}(g \mid f) \propto \left( \frac{1}{\sigma\sqrt{2\pi}} \right)^{\!n} \prod_{i=1}^n \exp\!\left( -\frac{1}{2} \left( \frac{T(f)_i - g_i}{\sigma} \right)^{\!2} \right).$$


Combining with (3.6) gives us

$$\begin{aligned}
\mathcal{D}\big(T(f), g\big) &:= -\log \pi_{\text{likelihood}}(g \mid f) \\
&= -\log \left[ \left( \frac{1}{\sigma\sqrt{2\pi}} \right)^{\!n} \prod_{i=1}^n \exp\!\left( -\frac{1}{2} \left( \frac{T(f)_i - g_i}{\sigma} \right)^{\!2} \right) \right] \\
&= -\sum_{i=1}^n \left[ \log\!\left( \frac{1}{\sigma\sqrt{2\pi}} \right) - \frac{1}{2} \left( \frac{T(f)_i - g_i}{\sigma} \right)^{\!2} \right] \\
&= \frac{1}{2} \sum_{i=1}^n \left[ \left( \frac{T(f)_i - g_i}{\sigma} \right)^{\!2} - C \right] \tag{3.7}
\end{aligned}$$

for a suitably chosen constant $C$. Since (3.7) enters into a minimization, we can drop additive and multiplicative terms that do not depend on $f$. Thus, we get

$$\mathcal{D}\big(T(f), g\big) = \sum_{i=1}^n \left( \frac{T(f)_i - g_i}{\sigma} \right)^{\!2}. \tag{3.8}$$

Shot noise. A common situation is when data is contaminated by Poisson distributed noise, also known as shot noise, i.e. the measured data $g$ is assumed to be a sample of a Poisson distributed random variable $G$ with mean $T(f_{\text{true}})$. This model is realistic when the data represent images acquired under low-dose conditions.

Assuming that each component of the data vector $g$ is independent, we get

$$\pi_{\text{likelihood}}(g \mid f) \propto \prod_{i=1}^n \frac{T(f)_i^{\,g_i} \exp\big(-T(f)_i\big)}{g_i!}.$$

Combining with (3.6) gives us

$$\begin{aligned}
\mathcal{D}\big(T(f), g\big) &:= -\log \pi_{\text{likelihood}}(g \mid f) \\
&= -\log \left( \prod_{i=1}^n \frac{T(f)_i^{\,g_i} \exp\big(-T(f)_i\big)}{g_i!} \right) - C \\
&= -\sum_{i=1}^n \Big[ g_i \log\big(T(f)_i\big) - T(f)_i - \log(g_i!) \Big] - C \\
&= \sum_{i=1}^n \Big[ T(f)_i - g_i \log\big(T(f)_i\big) \Big] + C' \tag{3.9}
\end{aligned}$$

for suitably chosen constants $C$ and $C'$. Since (3.9) enters into a minimization, we can drop additive and multiplicative terms that do not depend on $f$. Thus, we get

$$\mathcal{D}\big(T(f), g\big) = \sum_{i=1}^n \Big[ T(f)_i - g_i \log\big(T(f)_i\big) \Big]. \tag{3.10}$$

Note that $\mathcal{D}$ is not a norm on $\mathbb{R}^m$, but rather (up to terms independent of $f$) the Kullback-Leibler (KL) distance.
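On discrete data, the two discrepancy functionals (3.8) and (3.10) are one-liners. The sketch below is an illustration, not the thesis software; the small constant guarding the logarithm is an implementation detail added here:

```python
import numpy as np

def gaussian_discrepancy(Tf, g, sigma=1.0):
    """L2 data fit (3.8) for additive Gaussian noise."""
    return np.sum(((Tf - g) / sigma) ** 2)

def kl_discrepancy(Tf, g, eps=1e-12):
    """Kullback-Leibler-type data fit (3.10) for Poisson (shot) noise;
    eps guards the logarithm where the forward-projected image vanishes."""
    return np.sum(Tf - g * np.log(Tf + eps))

g = np.random.poisson(lam=5.0, size=(64, 64)).astype(float)  # mock data
Tf = np.full_like(g, 5.0)                                    # mock T(f)
print(gaussian_discrepancy(Tf, g), kl_discrepancy(Tf, g))
```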


Chapter 4

Total Variation-type of regularization

Here we give the definition of total variation and show how it may be used to regularize an ill-posed problem. We introduce the Bregman distance in §4.2.1 and show how it may be used in regularization to compensate for the loss of contrast that total variation regularization causes.

Consider inverse problems where the signal is a real-valued function defined on $\mathbb{R}^k$. Furthermore, assume that this function has bounded variation, so $\mathcal{F}$ is a subset of the functions of bounded variation on $\mathbb{R}^k$. Then the following functional is well-defined:

$$\mathcal{S}_{\text{TV}}(f) := \int \big| \nabla f(x) \big| \, dx, \tag{4.1}$$

and Total Variation (TV) regularization of the inverse problem in (2.1) is given as

$$f_{\text{TV}} := \operatorname*{argmin}_{f \in \mathcal{F}} \Big\{ \alpha \, \mathcal{S}_{\text{TV}}(f) + \mathcal{D}\big(\mathcal{T}(f), g\big) \Big\}, \tag{4.2}$$

where the data discrepancy functional $\mathcal{D}$ is chosen according to the statistics of $g_{\text{noise}}$ (see §3.3.3). A discrete sketch of (4.1) is given below.
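In the discrete setting, (4.1) is usually approximated with finite differences. A minimal sketch (isotropic TV with forward differences, one of several common discretization choices):

```python
import numpy as np

def tv(f):
    """Discrete isotropic total variation of a 2D image: the sum over
    pixels of |grad f|, using forward differences and a replicated
    last row/column (zero gradient at the border)."""
    dx = np.diff(f, axis=1, append=f[:, -1:])
    dy = np.diff(f, axis=0, append=f[-1:, :])
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))

# A sharp step and a slow ramp with the same total rise have the same TV,
# which is why TV regularization keeps edges but flattens smooth gradients.
step = np.zeros((1, 100)); step[:, 50:] = 1.0
ramp = np.linspace(0.0, 1.0, 100).reshape(1, 100)
print(tv(step), tv(ramp))   # both equal 1.0
```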

This choice of regularization functional penalizes signals that oscillate, yet it allows for discontinuities. Therefore, TV-regularization is also known as an edge-preserving regularization and has become popular in a variety of inverse problems associated with imaging. It was first used in imaging for denoising by Rudin, Osher and Fatemi [17]. In that context it is also referred to as the Rudin-Osher-Fatemi (ROF)-model.

The definition of the TV semi-norm above is however older than its usage as a regularization functional. In real analysis (see e.g. [21, 5, 6]), the total variation of a function is defined as follows:


Definition 5. Let $\Omega \subset \mathbb{R}^k$ be an open subset. Given a function $f \in L^1(\Omega)$, the total variation of $f$ in $\Omega$ is defined as

$$V(f, \Omega) := \sup_{\substack{\varphi \in C_c^1(\Omega, \mathbb{R}^k) \\ \|\varphi\|_\infty \leq 1}} \int f(x) \operatorname{div}(\varphi)(x) \, dx,$$

where $C_c^1(\Omega, \mathbb{R}^k)$ is the set of continuously differentiable vector-valued functions with compact support contained in $\Omega$, and $\|\cdot\|_\infty$ is the essential supremum norm.

Note that this definition does not require the domain $\Omega \subset \mathbb{R}^k$ to be bounded. If however $f$ is differentiable over a bounded domain $\Omega$, then it can be shown that $V(f, \Omega) = \mathcal{S}_{\text{TV}}(f)$.

4.1 Loss of contrast

Even though regularization by means of (4.2) efficiently reduces noise and clutter, the result tends to have low contrast, as weakly varying regions in the signal are flattened. This is due to the inability of the TV-regularization functional (4.1) to separate noise from those weakly varying portions of the signal. This adverse effect is called staircasing (or cartooning) and is illustrated in Figure 4.1 for the 1D case and in Figure 4.2 for the 2D case.

Figure 4.1: The figure shows two signals, one as a dotted red line and the other as a solid black line. Both have the same vertical variation, but the red dotted one makes this variation quickly. The slow variation of the black solid line mimics smooth contrast variations in an image. The TV functional defined in (4.1) only measures the height of the jumps; hence both curves are equally penalized in the regularization scheme (4.2).

A variety of methods have been developed for dealing with staircasing. Some involve modifying the TV-regularization functional. The one we will look closely at in the following section is based on iterated regularization.


Figure 4.2: Left figure illustrates data: a confocal microscopy image representing a 2D section through a mouse colon specimen prepared with hematoxylin and eosin. The image was acquired using 63× optical magnification, 51 µm pinhole aperture, 134.86 µm voxel size, wavelength λ = 488 nm, and 2% laser intensity; see §6 for the details. Right figure illustrates the deconvolved image (see §5.1.1) obtained by means of TV-regularization (using α = 0.1 as regularization parameter and 60 iterations to solve the optimization problem in (4.2)). We see that TV-regularization produces images with large mono-colored areas.

4.2 Iterated regularization for contrast enhancement

Iterated regularization methods are a class of methods where one solves a sequence of regularization problems. They should not be confused with iterative methods, which in regularization theory are given by fixed-point iterations converging towards an ML-solution, and where regularization is obtained by means of early stopping.

The relevance of iterated regularization for contrast enhancement in TV-regularization was first observed in [15]. There, Bregman distances were used successfully for iterative regularization methods. Particularly for the case of $L^2$ data discrepancy terms, such as in (3.7), it has been shown that such methods can significantly improve reconstruction results. For example, in the case of total variation, iterative Bregman regularization enhances contrast in the reconstructions by adding back residuals.

4.2.1 The Bregman distance

The Bregman distance was introduced in 1967 in [3] and is defined as follows:

Definition 6(Bregman distance). LetH be a Hilbert-space with inner-product h · , · iH. Also, let S : H → R be a functional that is Frechet differentiable at h∈ H . Then, the Bregman distance is defined as

BS(f, h) :=S(f) − S(h) −∇S(h), f − h H where∇S(h) is the gradient of S at h.


Intuitively, the Bregman distance can be interpreted as the difference between the value of $\mathcal{S}$ at $f$ and the value of the first-order Taylor expansion of $\mathcal{S}$ around $h$ evaluated at $f$; see Figure 4.3.

Figure 4.3: Geometrical illustration of the Bregman distance $\mathcal{B}_{\mathcal{S}}^p(f, h)$ with $p \in \partial\mathcal{S}(h)$.

The above definition can be extended to the case when $\mathcal{S}$ is a general convex, not necessarily differentiable, functional. This is e.g. the case with the TV-regularization functional $\mathcal{S}_{\text{TV}}$ in (4.1), which is non-smooth. The general definition of the Bregman distance, see e.g. [4, §3.2] or [7], makes use of subdifferential calculus. Using this more general definition allows one to calculate the Bregman distance for the TV-regularization functional; see [4, §3.2] for the details.

The Bregman distance is not a metric, for two reasons: the triangle inequality does not hold, and symmetry is generally not satisfied. Still, if $\mathcal{S}$ is a convex functional, then

$$\mathcal{B}_{\mathcal{S}}(f, q) \leq \mathcal{B}_{\mathcal{S}}(f, h) \quad \text{for } q := t f + (1 - t) h \text{ with } 0 \leq t \leq 1$$

(so $q \in \mathcal{H}$ is a point on the line segment in the Hilbert space $\mathcal{H}$ connecting $f$ and $h$). Moreover, $\mathcal{B}_{\mathcal{S}}(f, h) \geq 0$ for all $f, h \in \mathcal{H}$, which captures a meaningful feature of what one means by the notion of a distance.
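For a differentiable functional, Definition 6 is straightforward to evaluate numerically. A small sketch (illustration only), using $\mathcal{S}(f) = \frac{1}{2}\|f\|^2$, for which the Bregman distance reduces to $\frac{1}{2}\|f - h\|^2$:

```python
import numpy as np

def bregman_distance(S, grad_S, f, h):
    """B_S(f, h) = S(f) - S(h) - <grad S(h), f - h> (Definition 6)."""
    return S(f) - S(h) - np.dot(grad_S(h), f - h)

# Sanity check: for S(f) = 0.5 * ||f||^2 the Bregman distance equals
# 0.5 * ||f - h||^2.
S = lambda f: 0.5 * np.dot(f, f)
grad_S = lambda f: f
f = np.array([1.0, 2.0])
h = np.array([0.0, 1.0])
print(bregman_distance(S, grad_S, f, h))   # 1.0
print(0.5 * np.linalg.norm(f - h) ** 2)    # 1.0
```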

4.2.2 Bregman iterations

The Bregman iteration was originally proposed in [15] for TV-denoising. It has since been generalized to many convex variational regularization problems. The idea is to solve a sequence of variational regularization problems where, in each step, the signal removed in the previous step (the residual) is added back. This has been shown to alleviate the loss-of-contrast problem discussed in §4.1.


So, consider the variational regularization problem (3.2), which we restate here for convenience:

$$\min_{f \in \mathcal{F}} \, \mathcal{H}(f) + \alpha \mathcal{S}(f) \quad \text{with } \mathcal{H}(f) := \mathcal{D}\big(\mathcal{T}(f), g\big). \tag{4.3}$$

The Bregman iteration of (4.3) is then defined as follows:

$$f^{(k+1)} := \operatorname*{argmin}_{f \in \mathcal{F}} \Big\{ \mathcal{H}(f) + \alpha \, \mathcal{B}_{\mathcal{S}}\big(f, f^{(k)}\big) \Big\}. \tag{4.4}$$

We introduce the following variant of (4.4),

$$\begin{cases}
f^{(k+1)} := \operatorname*{argmin}_{f \in \mathcal{F}} \Big\{ \mathcal{H}(f) + \alpha \, \mathcal{B}_{\mathcal{S}}^{p^{(k)}}\big(f, f^{(k)}\big) \Big\}, \\[6pt]
p^{(k+1)} := p^{(k)} - \dfrac{1}{\alpha} \nabla \mathcal{H}\big(f^{(k+1)}\big),
\end{cases} \tag{4.5}$$

which is the form commonly used in the literature. We also show that the two algorithms are, in some sense, analytically equal.

Note that in the updating scheme for $f^{(k+1)}$ in (4.5) we use a relaxed variant of the Bregman distance:

$$\mathcal{B}_{\mathcal{S}}^{p}(f, h) := \mathcal{S}(f) - \mathcal{S}(h) - \langle p, f - h \rangle_{\mathcal{F}} \quad \text{for } f, h, p \in \mathcal{F}.$$

The formal definition of the Bregman distance given in Definition 6 requires that $p = \nabla \mathcal{S}(h)$. Hence, in order for (4.4) and (4.5) to provide the same solution, the following must hold:

$$p^{(k)} = \nabla \mathcal{S}\big(f^{(k)}\big) \quad \text{for all } k. \tag{4.6}$$

But $f^{(k+1)}$ in (4.5) solves

$$\min_{f \in \mathcal{F}} \Big\{ \mathcal{H}(f) + \alpha \, \mathcal{B}_{\mathcal{S}}^{p^{(k)}}\big(f, f^{(k)}\big) \Big\},$$

and $\mathcal{H}$ and $\mathcal{S}$ are convex and differentiable, so

$$\nabla \Big[ \mathcal{H}(\cdot) + \alpha \, \mathcal{B}_{\mathcal{S}}^{p^{(k)}}\big(\cdot\,, f^{(k)}\big) \Big]\big(f^{(k+1)}\big) = 0.$$

This is equivalent to

$$\frac{1}{\alpha} \nabla \mathcal{H}\big(f^{(k+1)}\big) + \nabla \mathcal{S}\big(f^{(k+1)}\big) - p^{(k)} = 0.$$

Rearranging, and using the update rule for $p^{(k+1)}$ in (4.5), gives

$$p^{(k+1)} := p^{(k)} - \frac{1}{\alpha} \nabla \mathcal{H}\big(f^{(k+1)}\big) = \nabla \mathcal{S}\big(f^{(k+1)}\big), \tag{4.7}$$

which implies that (4.6) holds.


4.2.3 Increasing contrast by Bregman iterations

Let us now consider the case when $\mathcal{D}$ is the $L^2$-distance on the data space $\mathcal{G}$, as in (3.7). One can then show that the Bregman iteration (4.5) simplifies to the following iterative minimization algorithm:

$$\begin{cases}
f^{(k+1)} := \operatorname*{argmin}_{f \in \mathcal{F}} \Big\{ \big\| \mathcal{T}(f) - g^{(k)} \big\|^2 + \alpha \, \mathcal{S}_{\text{TV}}(f) \Big\}, \\[6pt]
g^{(k+1)} := g^{(k)} + g - \mathcal{T}\big(f^{(k+1)}\big).
\end{cases} \tag{4.8}$$

Hence, the data $g$ is updated in the iterates by adding the term $g - \mathcal{T}(f^{(k+1)})$, which in some sense equals the noise content of $g$. This causes the slowly varying portions of the signal to be added back, but also the noise. The noise component is however reduced in the next iterate, whereas hopefully some portion of the signal component survives. The loop structure is sketched below.
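The outer loop of (4.8) is only a few lines. The sketch below is an illustration; `solve_inner` stands in for any solver of the TV-regularized least-squares subproblem, and `T` for the discrete forward operator:

```python
import numpy as np

def bregman_iterations(g, T, solve_inner, alpha, n_outer=3):
    """Contrast-restoring Bregman iteration (4.8) for an L2 data term.

    g           -- measured (blurred, noisy) image
    T           -- discrete forward operator, a function f -> T(f)
    solve_inner -- solver for argmin_f ||T(f) - g_k||^2 + alpha * TV(f)
    """
    g_k = g.astype(float).copy()
    f_k = np.zeros_like(g_k)
    for _ in range(n_outer):
        f_k = solve_inner(g_k, alpha)   # TV-regularized subproblem
        g_k = g_k + (g - T(f_k))        # add the residual back to the data
    return f_k
```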

For general data fidelities $\mathcal{D}$, particularly those of non-$L^2$ type (i.e. not related to additive Gaussian noise), it is not yet known whether one can always find a Bregman regularization strategy that can be realized by a simple shift in the data fidelity. Finding a Bregman iteration for general data fidelities that simultaneously enhances contrast and is easy to implement is still a research topic; see e.g. [4].


Chapter 5

Image reconstruction as a deconvolution problem

Here we introduce image reconstruction as a practical example of an inverse problem.

In §5.1 we discuss under which premises convolution can be used as a model for blurring, and explain why this renders the inverse problem ill-posed. We close this chapter by stating the discrete version of convolution and different choices of boundary conditions.

5.1 Imaging as an inverse problem

We begin by formalizing the notion of an image. From a user's viewpoint, an image is any quantity that is spatially distributed in 2D/3D. Mathematically, this can be represented by a function in 2D/3D. Grey-scale images are real-valued functions, i.e. $f : \mathbb{R}^n \to \mathbb{R}$ with $n = 2$ or $3$, and color images are vector-valued functions, i.e. $f : \mathbb{R}^n \to \mathbb{R}^k$ with $n = 2$ or $3$, where $k$ is the number of color channels (e.g. $k = 3$ for red, green & blue). Since we treat each color channel separately, we can without loss of generality assume that images are grey-scale.

In imaging we seek to recover an image $f_{\text{true}} : \mathbb{R}^n \to \mathbb{R}$ representing the spatial distribution in 2D/3D of some physical property of an object. The imaging device we have at our disposal generates imaging data $g$. Since all imaging devices introduce distortions, the data $g$ are indirect measurements of $f_{\text{true}}$. Formally, the goal in imaging can be stated as an inverse problem of the form (2.1), i.e. to recover the signal $f_{\text{true}} \in \mathcal{F}$ from data $g \in \mathcal{G}$ where

$$g = \mathcal{T}(f_{\text{true}}) + g_{\text{noise}}.$$


In the context of imaging, the forward operator $\mathcal{T}$ above is usually called the model for image formation, since it describes how an image is formed by the imaging device in the absence of noise. This also includes the actual finite sampling of the imaging data, so even though the signal is a function, the data is a vector.

Let us for the time being disregard the finite sampling of the imaging data; henceforth, the imaging data and its noise component are not vectors but functions $g$ and $g_{\text{noise}}$, respectively. The issue of including the finite sampling of images into the forward model is discussed in §5.2. We have linear image formation when $\mathcal{T}$ is linear. Then there exists a function $K : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ such that

$$\mathcal{T}(f)(x) = \int K(x, y) f(y) \, dy.$$

The function $K$ above is called the impulse response of the imaging system. Furthermore, the imaging system is translation invariant whenever $K(x, y)$, as introduced above, only depends on $x - y$. Then there exists $P : \mathbb{R}^n \to \mathbb{R}$ such that

$$K(x, y) = P(x - y),$$

and the forward operator $\mathcal{T}$ becomes a convolution operator:

$$\mathcal{T}(f)(x) = (P * f)(x) := \int P(x - y) f(y) \, dy. \tag{5.1}$$

The function $P$ above is called the PSF, and its Fourier transform is called the Contrast Transfer Function (CTF). The PSF is commonly normalized, i.e. its total integral equals one, in which case it is referred to as a kernel.

To summarize, whenever an imaging system is linear and translation invariant, the associated inverse problem for image recovery can be formulated as the problem of recovering the signal $f_{\text{true}} \in \mathcal{F}$ from imaging data $g \in \mathcal{G}$ where

$$g = (f_{\text{true}} * P) + g_{\text{noise}}. \tag{5.2}$$

In the above, $P$ is the PSF of the imaging system. This inverse problem is referred to as deconvolution (sometimes the term deblurring is also used).

5.1.1 Ill-posedness of deconvolution

We will here show that deconvolution is an ill-posed problem. Let $\mathcal{T}$ be given by (5.1). Then, by elementary Fourier analysis one can show that, under certain circumstances, $\mathcal{T}^{-1}$ exists and is given by

$$\mathcal{T}^{-1}(g) := \mathcal{F}^{-1}\!\left( \frac{\mathcal{F}(g)}{\mathcal{F}(P)} \right).$$

Now set $f_{\text{naive}} := \mathcal{T}^{-1}(g)$ and let us investigate the relation between $f_{\text{naive}}$ and $f_{\text{true}}$ when $g$ is given by (5.2), i.e.

$$g = (f_{\text{true}} * P) + g_{\text{noise}}.$$

First, the definition of $f_{\text{naive}}$ combined with the convolution theorem gives us

$$\mathcal{F}(f_{\text{naive}}) := \frac{\mathcal{F}(g)}{\mathcal{F}(P)} = \frac{\mathcal{F}\big((f_{\text{true}} * P) + g_{\text{noise}}\big)}{\mathcal{F}(P)} = \mathcal{F}(f_{\text{true}}) + \frac{\mathcal{F}(g_{\text{noise}})}{\mathcal{F}(P)}.$$

It is now clear that if $g_{\text{noise}} \equiv 0$, then $f_{\text{naive}} = f_{\text{true}}$. But what happens when $g_{\text{noise}} \not\equiv 0$? Note first that $\mathcal{T}^{-1}$ does not exist at $\xi$ whenever $\mathcal{F}(P)(\xi) = 0$, so $\mathcal{T}^{-1}$ is not well defined. This is the issue of lack of existence (see §2.2). Furthermore, even when $\mathcal{F}(P)(\xi) \neq 0$ for all $\xi$, every realistic imaging system is band-limited. This means that $\mathcal{F}(P)(\xi) \to 0$ as $|\xi| \to \infty$. Since the noise is independent of the PSF, $\xi \mapsto \mathcal{F}(g_{\text{noise}})(\xi)$ will hardly cancel the decay of $\xi \mapsto \mathcal{F}(P)(\xi)$. Hence, unless $g_{\text{noise}} \equiv 0$,

$$\left| \frac{\mathcal{F}(g_{\text{noise}})(\xi)}{\mathcal{F}(P)(\xi)} \right| \to \infty \quad \text{as } |\xi| \to \infty.$$

This is the issue of instability (see §2.2).

In Figure 5.1 we illustrate this ill-posedness. The lower right figure is reconstructed from the same convolved image as the lower left reconstruction, but with a single pixel value changed from 223 to 222 (the image is an 8-bit image with pixel values between 0 and 255). The reconstruction from this minimally altered data has no recognizable features.
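The instability is easy to reproduce numerically. A 1D sketch (illustration only; the Gaussian PSF, box signal and perturbation size are arbitrary choices here), mirroring the one-pixel perturbation of Figure 5.1:

```python
import numpy as np

n = 128
f_true = np.zeros(n)
f_true[40:80] = 1.0                                  # a 1D "image": a box

# Normalized Gaussian PSF; periodic convolution via the FFT
x = np.arange(n)
P = np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2)
P /= P.sum()
FP = np.fft.fft(np.fft.ifftshift(P))                 # F(P)
g = np.real(np.fft.ifft(np.fft.fft(f_true) * FP))    # blurred data

g_pert = g.copy()
g_pert[64] += 1e-3                                   # perturb a single sample

# Naive deconvolution: divide by F(P) in the Fourier domain
f_naive = np.real(np.fft.ifft(np.fft.fft(g) / FP))
f_bad = np.real(np.fft.ifft(np.fft.fft(g_pert) / FP))

print(np.abs(f_naive - f_true).max())  # small: exact data deconvolves fine
print(np.abs(f_bad - f_true).max())    # enormous: the tiny perturbation explodes
```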

5.2 Discretizing the convolution

The signal $f_{\text{true}}$ in our imaging problem has so far been considered a function, whereas the imaging data $g$ recorded by the imaging device involves sampling and is therefore represented by an array. Discretization here refers to the sampling of the signal, i.e. to the representation of the image as a digital image. Hence, the function $f_{\text{true}}$ is replaced by a sampled digital image $f_{\text{true}}$, an array that does not necessarily have the same dimensions as the measured imaging data $g$. Once we have discretized the signal, the linear forward operator $\mathcal{T}$ (the convolution operator in (5.1)) has a discretized counterpart given by a matrix $T$. Hence, if $f_{\text{true}}$ represents the discretized signal given by an array, then the action of the forward operator $\mathcal{T}$ can be represented by multiplication with a convolution matrix $T$:

$$\mathcal{T}(f) = T \cdot f_{\text{true}}.$$

The discretized deconvolution problem can be stated as recovering the digital image given by the array $f_{\text{true}}$ from imaging data given by the array $g$, where

$$g = T \cdot f_{\text{true}} + g_{\text{noise}}. \tag{5.3}$$

We now focus on how to construct the convolution matrix $T$ so that it is a good approximation of the actual convolution operator $\mathcal{T}$. For simplicity we consider the 2D case and assume that the dimensions of the discretized signal and the measured imaging data agree.

Figure 5.1: The naive approach to image reconstruction. Upper left figure is the true image (signal); upper right figure is the true image convolved with a Gaussian kernel. Lower left figure is the reconstruction from the unperturbed convolved image; lower right figure is the reconstruction from the perturbed convolved image, where a single pixel value has been contaminated by noise.

Since both $f$ and $g$ represent 2D images, it makes sense to arrange the corresponding arrays as matrices. For the case of a $3 \times 3$ image, this becomes

$$f = \begin{pmatrix} f_{1,1} & f_{1,2} & f_{1,3} \\ f_{2,1} & f_{2,2} & f_{2,3} \\ f_{3,1} & f_{3,2} & f_{3,3} \end{pmatrix} \quad \text{and} \quad g = \begin{pmatrix} g_{1,1} & g_{1,2} & g_{1,3} \\ g_{2,1} & g_{2,2} & g_{2,3} \\ g_{3,1} & g_{3,2} & g_{3,3} \end{pmatrix}.$$

Let $P$ be the matrix representation of the PSF,

$$P = \begin{pmatrix} p_{1,1} & p_{1,2} & p_{1,3} \\ p_{2,1} & p_{2,2} & p_{2,3} \\ p_{3,1} & p_{3,2} & p_{3,3} \end{pmatrix}, \tag{5.4}$$

which is essentially a digital image of the PSF $P$.

To construct the convolution matrix $T$, or more precisely an equation for $g_{i,j}$ in terms of the 2D discretized convolution between $f$ and $P$, one simply rotates the PSF matrix $P$ by 180 degrees. Next, the rotated matrix sweeps through the matrix $f$ from left to right. This results in a total of nine equations, one for each entry $g_{i,j}$, as sketched below.
