
Programming and Optimisation of a Digital Holographic Microscope for the Study of Eye Tissue

LUCIE DILHAN

Laboratoire d’Optique Appliquée, ENSTA, École Polytechnique, CNRS UMR7639, Palaiseau, France

Supervisor: Karsten Plamann
Examiner: Linda Lundström, Department of Applied Physics, School of Engineering Sciences, KTH


TRITA-SCI-GRU 2018:001


Abstract

The objectives of the present project were to set up, optimise and characterise a digital holographic microscopy (DHM) laboratory set-up designed for the study of eye tissue and to implement and optimise digital data processing and noise reduction routines. This work is part of a collaborative project aiming to provide quantitative methods for the in vitro and in vivo characterisation of human corneal transparency.

The laboratory set-up is based on a commercial laboratory microscope with zoom function (a “macroscope”). In continuation of previous work, we completed, optimised and extended software for holographic signal processing and numerical propagation of the wavefront.

To characterise the set-up and quantify its performance in standard operation and in its DHM configuration, we compared the magnification and resolution to theoretical values for a given set of parameters. We determined the magnification factor and the rotation angle between the object and camera planes. With a laser wavelength of 532 nm, a x1 objective and a zoom setting of x2.9 (which corresponds to a plane sample wavefront), we measured a magnification of 1.68. With the same parameters, we measured a holographic resolution of about 11 µm. The wavefront phase could be determined with a precision of a fraction of the wavelength.

We subsequently performed an analysis of the relative contribution of coherent noise and implemented and evaluated several noise reduction routines. While the impact of coherent noise remained visible in the amplitude image, interferometric precision was obtained for the phase of the wavefront and the set-up was considered qualified for its intended use for corneal characterisation.

A first test measurement was performed on primate cornea.

Subsequent work will address the further quantitative characterisation of the set-up for the full set of parameters (objectives, zoom positions, wavelengths), test measurements on samples with known transmission and light scattering properties (e.g. solutions of PMMA beads) and the comparison of the results with the predictions of a theoretical model, and measurements on animal and human tissue.


Acknowledgement

I am thankful to my supervisor, Karsten Plamann, whose guidance and support from the first meeting to the last oral presentation enabled me to develop an understanding of the subject and the scope of the project. I would also like to show my gratitude to Romain Bocheux, PhD student at the LOA, who supported me and brainstormed with me to understand the small details that I encountered along the way. I would also like to thank Kristina Irsch, researcher at the CIC of the Quinze-Vingts, who helped carry the project forward.

We gratefully acknowledge financial support by the Fondation de l’Avenir, without which this project would not have been possible.


Contents

Abstract

Acknowledgement

Contents

Introduction

1 Background
  1.1 The Cornea
  1.2 Theory of Holography
    1.2.1 Principle
    1.2.2 Digital holography
    1.2.3 Demodulation

2 Material and methods
  2.1 Optical setup
    2.1.1 Macroscope
    2.1.2 Camera
    2.1.3 Motors
  2.2 Set-up

3 Computer interface and signal processing
  3.1 Computer interface
    3.1.1 Standard interface
    3.1.2 Labview interface
    3.1.3 Matlab interface
  3.2 Signal processing: specifications
  3.3 Software
    3.3.1 Off-line mode
    3.3.2 On-line mode

4 Performances
  4.1 Characterisation of the macroscope
    4.1.1 Magnifying power
    4.1.2 Resolution
  4.2 Imaging system
  4.3 Magnifying power and resolution of the holograms
    4.3.1 Magnifying power
    4.3.2 Resolution before propagation
    4.3.3 Resolution after propagation
  4.4 Artefacts on the Fourier transform of the hologram
  4.5 Noise analysis
    4.5.1 MTF and noise analysis through Matlab
    4.5.2 Speckle analysis through Image J
    4.5.3 Size of zone of interest and noise

5 Phase-shift demodulation
  5.1 Implementation
    5.1.1 Correlation of object and image plane coordinates
    5.1.2 Reference beam angle and fringe pattern distance and orientation
    5.1.3 Protocol
  5.2 Further experiments
    5.2.1 Reference and object beams
    5.2.2 Demodulation beyond one period
  5.3 Normalisation

6 Measurements on eye tissue

Summary and conclusion

List of Figures

List of Tables

List of Symbols and Notations

References

A Matlab Code
  A.1 Noise analysis
  A.2 Cross-correlation
  A.3 Demodulation program
    A.3.1 Function shift_calc
    A.3.2 Function unshift
    A.3.3 Function demodulatePiOver2PhaseShift
  A.4 Normalisation and filters

B Set-up
  B.1 Lasers
  B.2 Beam expander and beam splitter
  B.3 Macroscope and camera


Introduction

One of the leading causes of blindness worldwide is the lack of anterior segment transparency, with corneal blindness affecting 10 million people. Blindness could be prevented by early diagnosis and quantitative follow-up. This is why there is a critical need for reliable and user-friendly clinical tools for the objective and quantitative characterisation of corneal transparency. The project I joined, a collaboration between the Laboratory of Applied Optics (LOA), the Clinical Investigation Centre (CIC) of the Quinze-Vingts National Eye Hospital and the Institut Langevin of ESPCI ParisTech, aims to address this need.

The main goal of the whole project is to provide clinical diagnostic devices capable of providing objective and quantitative data, expressed in terms of the parameters of a physical transparency model, in order to improve diagnostics, optimise treatment and obtain a better predictability of the further evolution of the patient’s eyesight.

Within this project, measurements of corneal transparency are performed on laboratory set-ups; a mathematical model for corneal transparency is being developed with Rémi Carminati, Institut Langevin, ESPCI ParisTech; and Quantitative Polarized Slit-lamp Biomicroscopy (QPSB), a transformative approach for objective and quantitative cornea assessment in vivo, is being developed.

Two techniques were identified as optimal optical characterisation methods in a laboratory setting: full-field optical coherence tomography (FF-OCT) and digital holographic microscopy (DHM). These techniques permit high-resolution measurements and characterisation of eye tissue in a backscattering and a transmission geometry, respectively.

I contributed to the conception of a dedicated multi-wavelength set-up for digital holographic microscopy for corneal transparency measurements.

At the first stage of its development, the DHM was adapted from an existing three-wavelength set-up to a laboratory microscope with enhanced zoom functions in order to benefit from its resolution and zoom functionalities. The aim of this work is to develop an objective and quantitative method for the determination of corneal transparency by completing, testing and validating a laboratory set-up based on the


principle of digital holographic microscopy and to perform transparency measurements on human and/or animal corneal tissue.

The first holographic technique was proposed in 1948 by Dennis Gabor [1] as a lensless imaging process. He pointed out that when a coherent reference wave is present simultaneously with light diffracted by or scattered from an object, the interference pattern created by the recombination of the two waves holds information about the phase and amplitude of the diffracted or scattered waves. The interference pattern can be recorded by a medium sensitive to light intensity, and reconstructed by a reference wave identical to the one used during recording. Dennis Gabor received the Nobel Prize in Physics in 1971 for his invention. The drawback of Gabor’s invention is however that the real and virtual images are superimposed when reconstructing the hologram, as can be seen in Fig. 2.

Figure 1: Recording of Gabor hologram, reprinted from [2]

Figure 2: Formation of twin images from a Gabor hologram, reprinted from [2]

In the 1960s, E. N. Leith and J. Upatnieks [3] saw similarities between their synthetic-aperture-radar problem and the lensless imaging process described by Gabor.

Using the emerging technology of lasers, they performed lensless three-dimensional photography. The main improvement of the Leith-Upatnieks technique was the introduction of an angle θ between the reference beam and the object beam, allowing the virtual and real images to be reconstructed separately.

At the beginning of digital holographic microscopy in the early 1970s, the ideas were there but the computation capacity was not sufficient to render a good quality digital hologram. The field was put on hold until the 1990s, when computers were beginning to be powerful enough to perform all the calculations necessary for the reconstruction of the holograms. The development of inexpensive high-resolution sensors and high quality lasers allowed recordings of reasonable quality in the early 2000s. Digital holographic microscopy now has many applications, including oceanography, metrology and, what is of most interest to us in this thesis, life science.


Figure 3: Recording a Leith-Upatnieks hologram, reprinted from [2]

Figure 4: Reconstruction of images from a Leith-Upatnieks hologram, reprinted from [2]

Our project is based on the Leith-Upatnieks technique, as we use a set-up with an angle θ between the reference beam and the object beam.

This thesis is organised in six chapters. In the first, I present the background on the subject, particularly the physiology of the cornea and an overview of digital holography. In the second chapter, I present the material and methods used for the work, from the optical set-up to a first assessment of it. In the third chapter, I present the implementation I performed in the Matlab programming language. In the fourth chapter, I characterise the physical performances of the set-up. In the fifth chapter, the phase-shift demodulation technique I implemented is studied and its performances are evaluated. The last chapter is about measurements on eye tissue.


Chapter 1

Background

In this chapter, I will first present the human cornea and its structure; I will then describe the concept of holography and how it is used in our context.

1.1 The Cornea

The cornea and sclera are the interfaces of the eye with the exterior; they also form the supporting walls of the eye. A healthy cornea is a transparent tissue that serves as the strongest lens of the eye, representing two thirds of its total optical power. The sclera is opaque and surrounds the cornea.

Sclera and cornea have similar compositions, mostly consisting of collagen fibrils, but different structures. The cornea is completely avascular and thus has to obtain its nutrients by diffusion through the endothelium and from the tear fluid through the epithelium.

Figure 1.1: Anatomy of the cornea, reprinted from [4]

As shown in Fig. 1.1, the cornea consists of five different layers. Three are


cellular layers, the epithelium, stroma and endothelium, and two are interfaces, Bowman’s layer and Descemet’s membrane, which are both collagenous tissues [5].

The stroma accounts for 85% of the corneal thickness, and is structured in a way that gives the cornea its transparency. It is composed of microstructures, lamellae, which are themselves composed of nanostructures, the collagen fibrils. Lamellae are arranged in a parallel manner, like the pages of a book. The disposition of the collagen fibrils is what ensures the transparency of a healthy cornea, as can be seen in Fig. 1.2. It has been suggested by Maurice [6] that the fibrils form a strongly organised structure. They are almost regularly spaced, and each fibril is surrounded by 6 fibrils in a hexagonal pattern. This is represented in the zoomed part of Fig. 1.2: the fibril marked in red has six equidistant neighbours, indicated in green. Keratocytes are found sparsely in the stroma layer; they ensure its maintenance.

Figure 1.2: Healthy cornea and its structure, Aptel F., LOA, Palaiseau, Salvoldelli M., Hôtel Dieu, Paris, 2010

Figure 1.3: Oedematous cornea and its structure, Aptel F., LOA, Palaiseau, Salvoldelli M., Hôtel Dieu, Paris, 2010

If the hexagonal structure is lost, as shown for an oedematous cornea in Fig. 1.3, the transparency is lost. Here, the interstitial space between the collagen fibrils has increased due to an increase of the liquid present in the stroma. When the structure of the fibrils is disrupted, so is the transparency. An oedematous cornea generally presents lakes, regions completely devoid of fibrils, that have dimensions of several micrometres. The scattering by these lakes perturbs the propagating wave as


strongly as fluctuations in the fibril density and therefore has to be taken into account as an additional, independent scattering process [7].

1.2 Theory of Holography

Holography is relevant when studying corneal transparency, as it allows us to retrieve the phase of the wave traversing the cornea.

1.2.1 Principle

When imaging an object, typical detectors like photographic film or CMOS and CCD cameras record only the intensity of the incoming wavefront. In this thesis, we are interested in retrieving both the phase and the amplitude of the signal.

The camera records the average intensity I(x, y), given by the formula:

$$I(x, y) = |R(x, y)|^2 + |O(x, y)|^2 + O(x, y)R^*(x, y) + O^*(x, y)R(x, y), \tag{1.1}$$

where R(x, y) is the reference signal and O(x, y) the signal of the object beam. To retrieve the phase, we need to isolate one of the last two terms. They both contain the amplitude and phase of the signal, and are called twin signals.

The principle of off-axis holography is that we introduce an angle θ between the reference beam and the object beam. The tilted reference wave is described by the following equation: $E_R = \exp\!\left(-j\,\frac{2\pi}{\lambda}\,x\sin\theta\right)$. The angle will induce two twin images in the Fourier space, as shown in Fig. 1.4.

Figure 1.4: Spatial frequency representation of off-axis holograms, reprinted from [8]

We retrieve the wavefront by selecting a zone of interest in the Fourier space, either the real or virtual image, and inverse Fourier transforming it. This way, we select only one of the last two terms of equation 1.1.
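As an illustration of this retrieval, the following Matlab sketch performs the three steps described above (Fourier transform, selection of a zone of interest, inverse transform). It is a minimal sketch: the file name and the zone-of-interest coordinates are assumed example values, not those of our set-up.

```matlab
% Minimal off-axis wavefront retrieval (illustrative sketch).
H  = double(imread('hologram.tif'));   % recorded intensity I(x,y)
FH = fftshift(fft2(H));                % centred Fourier transform

% Crop a zone of interest around the real image (+1 order);
% centre (cx, cy) and half-width w are example values.
cx = 1800; cy = 1200; w = 200;
ZOI = FH(cy-w:cy+w, cx-w:cx+w);

% The inverse transform of the selected order gives the complex
% wavefront, i.e. one of the two twin terms of equation 1.1.
E = ifft2(ifftshift(ZOI));
amplitudeImage = abs(E);
phaseImage     = angle(E);
```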


1.2.2 Digital holography

The main advantage of digital holography is that we can take an image outside of the image plane and numerically propagate the wavefront back to it.

For instance, if the imaged object contains small structures, the image of one point on the camera would cover only a few pixels, leading to a loss of information. If we capture the image outside of the image plane, we will have a large point spread function on the sensor instead of a narrow one. Then we can propagate the signal back to the image plane in order to recover the correct image.

The propagation can be calculated using several algorithms. Here, I will describe the four propagation methods that I implemented into the software. The methods are described by Verrier [8] and Kim [9], and I will call them the Fresnel transform method (FTM), the Huygens convolution method (HCM), the angular spectrum method (ASM) and digital lens propagation. They are adapted to different problems and can be combined.

Fresnel Transform Method

This method involves the use of a single Fourier transform. It is used on large holograms that are recorded far from the image plane.

There is a minimum propagation distance set by this method, described by Kim [9], given as $z_{min} = \frac{X_0^2}{N\lambda}$, where $X_0$ is the width in µm of the image, N is the width in pixels and λ is the wavelength. This distance is required to obtain a valid diffraction pattern and avoid aliasing. This is because the output plane needs to be at least as large as the input plane.

Furthermore, with this method, the pixel resolution depends on the propagation distance. This dependence is due to the pixel distances, ∆ξ, of the reconstructed image being proportional to the reconstruction distance z, with $\Delta\xi = \frac{\lambda z}{N\Delta x}$. Here, ∆x is the distance between pixels on the sensor in the horizontal direction [10].
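As an order-of-magnitude illustration (a worked example under the assumption that the full sensor width of our camera forms the hologram): with N = 3000 pixels of pitch ∆x = 3.5 µm, the frame width is $X_0 = N\Delta x = 10.5$ mm, and at λ = 532 nm the minimum distance becomes $z_{min} = X_0^2/(N\lambda) = N\Delta x^2/\lambda \approx 6.9$ cm.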

Under the Fresnel approximation, we get the propagation equation 1.2 [8]:

$$E(p) = \frac{\exp(jkz)}{\sqrt{j\lambda z}}\,\exp\!\left(j\,\frac{\pi\lambda z\,p^2}{N^2\Delta x^2}\right)\mathcal{F}\!\left\{E(n)\,\exp\!\left(j\,\frac{\pi}{\lambda z}\,(n\Delta x)^2\right)\right\} \tag{1.2}$$

Huygens Convolution Method

With this method, we calculate a spatial convolution of the field’s distribution with the impulse response given as $h_z(x) = \exp\!\left(j\,\frac{\pi}{\lambda z}\,x^2\right)$ [9]. When going into the Fourier space, the convolution becomes a simple multiplication, which yields equation 1.3 [8].

This method is typically used for small holograms near the image plane.

$$E(\xi) = \frac{e^{jkz}}{\sqrt{j\lambda z}}\,\mathcal{F}^{-1}\!\left[\mathcal{F}\{E(x)\} \times \mathcal{F}\{h_z(x)\}\right] \tag{1.3}$$


This method has the same limitation on the propagation distance as the Fresnel transform method, meaning that there is a minimum propagation distance given as $z_{min} = \frac{X_0^2}{N\lambda}$. This limit is reached when the curvature of the propagated spherical wavefront is too close to the sensor array, causing the local fringe frequency on the CCD plane to be higher than the Nyquist frequency [9].

It also has a limitation on long propagation distances. When the propagation distance is larger than $z_{max} = \frac{X_0^2}{\lambda}$, the fringe period becomes larger than the CCD array, making the recording of the diffraction information impossible. This implies a loss of high-frequency structures when approaching the distance $z_{max}$, until complete loss of signal [9].

Angular Spectrum Method

It is a convolution-based method as well, which requires two Fourier transforms. Here, we propagate the angular spectrum of the hologram, where the angular spectrum transfer function is given by [11]:

$$H(f_x) = \exp\!\left[j\,\frac{2\pi z}{\lambda}\left(1 - \frac{1}{2}f_x^2\lambda^2\right)\right] \tag{1.4}$$

This method is used for short distances, including z = 0 [9].

$$E(\xi) = \frac{1}{\sqrt{j\lambda z}}\,\mathcal{F}^{-1}\!\left[\mathcal{F}\{E(x)\} \times H(f_x)\right] \tag{1.5}$$

There is a numerical instability for very large distances z. This is of no concern for holography, as we do not propagate over such distances.
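A compact Matlab sketch may make the method concrete. It extends the transfer function of equation 1.4 to two dimensions; as simplifying assumptions, the constant prefactor of equation 1.5 is omitted (it only affects a global scaling), and grid size, pixel pitch and distance are example values.

```matlab
% Angular spectrum propagation over a distance z (illustrative sketch).
% Two-dimensional extension of the transfer function of eq. 1.4.
N      = 1024;        % grid size (pixels), example value
dx     = 3.5e-6;      % pixel pitch (m), example value
lambda = 532e-9;      % wavelength (m)
z      = 5e-3;        % propagation distance (m), example value

f = (-N/2:N/2-1) / (N*dx);    % spatial frequency axis (1/m)
[FX, FY] = meshgrid(f, f);

% Paraxial angular spectrum transfer function, cf. eq. 1.4
H = exp(1j*2*pi*z/lambda * (1 - (lambda^2/2)*(FX.^2 + FY.^2)));

E0 = ones(N);                 % input wavefront (placeholder)
Ez = ifft2(ifftshift(fftshift(fft2(E0)) .* H));
```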

Digital lens propagation

There are two ways of using a digital lens. One can either use the Huygens convolution method, eq. 1.6, or the angular spectrum method, eq. 1.7 [8]. They can both be used to propagate a wavefront to any other plane. These methods allow the user to choose the magnifying power of the lens, Γ. We now have z′ = zΓ.

$$E(\xi) = \frac{e^{jkz'}}{\sqrt{j\lambda z'}}\,\mathcal{F}^{-1}\!\left[\mathcal{F}\{E(x) \times L(x)\} \times \mathcal{F}\{h_{z'}(x)\}\right] \tag{1.6}$$

$$E(\xi) = \frac{e^{jkz'}}{\sqrt{j\lambda z'}}\,\mathcal{F}^{-1}\!\left[\mathcal{F}\{E(x) \times L(x)\} \times H(f_x)\right] \tag{1.7}$$

where $L(x) = \exp\!\left(-j\,\frac{\pi}{\lambda R_c}\,x^2\right)$ is the lens, with a curvature radius $R_c$.

The difference between the two approaches resides in the number of FFTs used, and thus the computational cost.


Comparison of the methods

| | FTM | HCM | ASM |
| --- | --- | --- | --- |
| Number of FFTs needed | 1 | 3 | 2 |
| Propagation distance limits | minimum propagation distance z_min, otherwise aliasing | minimum propagation distance z_min (otherwise aliasing) and maximum propagation distance z_max (otherwise loss of high-frequency structures) | no limit for propagation distances typically used in holography |
| Works with | parabolic approximation of spherical wavefronts | spherical wavefronts | plane waves |
| Magnification | changes with z | constant | constant |

Table 1.1: Comparison of the propagation methods, according to [9], [10]

1.2.3 Demodulation

The principle of this method is to remove the constant terms and reduce the coherent noise by taking several measurements at known phases and performing a calculation to retain only the signal coming from the object.

The objective in using a demodulation approach is to change the phase between the reference beam and the object beam, in order to move the interference fringes over the sample. This is usually done by moving a mirror in the path of one of the beams. By moving the mirror by defined distances, we can make π/2 phase-shift images, meaning that we move the fringes by a quarter of the fringe spacing over the sample.

The usual technique consists in acquiring 5 images: one of the reference beam alone, to get $\varepsilon_R$, the amplitude of the reference beam, and one for each phase shift α = 0, π/2, π, 3π/2, to get their respective intensities.

We use the following equation, from Kim [9]:

$$E_O(x, y) = \frac{1}{4\varepsilon_R}\left[(I_0 - I_\pi) + i(I_{3\pi/2} - I_{\pi/2})\right] \tag{1.8}$$

Using the theory of this method, we thought of another approach to it. Instead of moving the fringes over the sample, we want to move the sample under the fringes.

This means that we need to move the sample four times by a quarter of the fringe spacing, to cover a whole period and be able to use the same equation 1.8. This will be further explained in Chapter 5.
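A Matlab sketch of the calculation of equation 1.8 is given below; the four intensity images and the reference amplitude are assumed to be already loaded as matrices of identical size.

```matlab
% Four-step phase-shift demodulation, cf. eq. 1.8 (illustrative sketch).
% I0, Ipi2, Ipi, I3pi2: intensity images at phase shifts 0, pi/2, pi,
% 3*pi/2; epsR: reference beam amplitude. All assumed preloaded.
Eo = ((I0 - Ipi) + 1i*(I3pi2 - Ipi2)) ./ (4*epsR);
amplitudeImage = abs(Eo);     % object amplitude, constant terms removed
phaseImage     = angle(Eo);   % object phase
```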


Chapter 2

Material and methods

2.1 Optical setup

Figure 2.1: Our digital holographic microscopy set-up


2.1.1 Macroscope

Figure 2.2: AZ100M Nikon macroscope

A "macroscope" is a microscope with an efficient zoom function designed for biology laboratory use: one can image a whole sample and then zoom in on the specimen to see finer details.

The macroscope we use in our laboratory setup is an AZ100M produced by Nikon, Tokyo, Japan.

It is composed of commutable objectives, a x1 to x8 zoom function and a tube lens sending the image to a given image plane, whichever objective is used. In our set-up, we use objectives x1, x2, x4 and x5.

The zoom control function is designed for manual use. We observe a hysteresis in the controlled zoom functionality, meaning that we can never be sure of the actual value of the zoom. This induces variations in the measurements.

2.1.2 Camera

We use the Pixelink PL B686CU, 3000x2208 pixels with a pixel pitch of 3.5 µm, produced by Pixelink, Ottawa, Canada. It has a colour CMOS sensor and a frame rate of 5 frames per second.

2.1.3 Motors

We use three step motors, referenced GTS150, produced by Newport Corporation, Irvine, California, USA. Each motor is dedicated to an axis, thus allowing the sample stage to move along the x, y and z axes. Each motor has a range of 150 mm, with a positioning precision of 0.05 µm. They are controlled from a computer via an internet interface provided by the manufacturer.


2.2 Set-up

In our experimental set-up, shown in Fig. 2.1, we have the choice between three coherent light sources:

– a blue laser at 457 nm (model PSU-H-FDA, New Industries Optoelectronics Tech. Co., Ltd., Changchun, China),
– a green laser at 532 nm (model LD-WL206, New Industries Optoelectronics Tech. Co., Ltd., Changchun, China),
– a red laser at 633 nm (model HNL210L, Thorlabs, Newton, NJ, USA).

The laser beam is expanded by an objective focusing it onto a small aperture with a diameter of 25 µm that is imaged by a convergent achromatic lens of focal length f = 20 cm. This collimates the laser beam wavefront. The laser beam is then split into an object and a reference beam, using a non-polarising beam-splitter. The reference beam goes directly to the camera, while the object beam is projected through the sample, which is fixed on the sample stage, and through the macroscope. The reference and object beams are recombined to form the hologram on the sensor of the camera.

The last recombining cube is where we control the angle θ between the two beams; modifying its position changes the fringe spacing.


Chapter 3

Computer interface and signal processing

3.1 Computer interface

3.1.1 Standard interface

An interface is provided with the Pixelink camera. It allows us to record an image with the camera. With it, we can set several parameters, such as the exposure time of the capture, the size of the image or its name.

We use the macroscope’s interface to set the zoom. It should be noted that the zoom has a positioning hysteresis. This means that even when using the macroscope’s program to set the zoom, we cannot be sure that it has reached the intended value. This has to be taken into account when characterising the set-up.

The motors’ interface allows us to precisely control the position of the platform in the x, y and z directions, with a 0.05 µm precision.


3.1.2 Labview interface

Figure 3.1: Labview’s interface

The Labview software is used to monitor the live feed from the camera, mostly during set-up alignment.

This program was coded by Romain Bocheux.

We display the Fourier transform of the image live, as shown at the bottom of Fig. 3.1, to make the necessary set-up adjustments. In the configuration chosen for the present work, the holograms should be recorded while imaging in the Fourier plane, meaning that the images of the sample in the Fourier space should be small spots as shown in Fig. 3.2a, not squares or rectangles as shown in Fig. 3.2b. To be at the right focus, we can change the zoom through the macroscope’s software or change the position of the sample or camera.

(a) FFT of a hologram recorded at the image plane
(b) FFT of a hologram recorded outside of the image plane

Figure 3.2: Fourier transforms of holograms recorded in different planes

To set the fringe spacing, we adjust the position of the recombining cube. In the Labview software, the further the images are from the central peak, the smaller the fringe spacing is.


3.1.3 Matlab interface

The objective is to implement the Matlab program on the laboratory computer and control the acquisition of holograms in real time.

Figure 3.3: Matlab interface

3.2 Signal processing: specifications

We want to control the DHM set-up with a single computer interface. The required tasks are specified as follows:

– control the sample stage step motors,
– capture images with the camera, either live feed or recording,
– use the macroscope’s zoom functionality,
– retrieve the wavefront from the intensity images, recorded or live from the camera,
– propagate the wavefronts if necessary,


– execute demodulation routines.

The Matlab program was partially written by previous interns, Ali Ndiaye [12], Damien Huet and Franz Glessgen [13]. I corrected and optimised the interface and wrote additional propagation algorithms to be used within the program.

3.3 Software

The software is intended to operate either in an off-line mode or an on-line mode. I completed the implementation of the off-line mode.

The camera, motors and macroscope are not yet controllable with Matlab.

3.3.1 Off-line mode

In the off-line mode, we can either import holograms or import wavefronts that were previously recorded.

If a hologram is imported, the actions are as follows:

– Fourier transform the imported image
– Display the Fourier transform for zone-of-interest selection
– Select a zone of interest (by default the software selects the whole image)
– Inverse Fourier transform the selected area
– Select the propagation to perform on the wavefront and fill in the parameters (a sketch of this step is given after this list):
  – FTM: enter the distance z, with respect to z_min
  – HCM: enter the distance z, with respect to z_min
  – ASM: enter the distance z
  – Digital lens: fill in the distance z and the magnifying power Γ
– Propagate the wavefront
– Choose to save either the amplitude of the signal or the complex matrix containing the whole wavefront
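The propagation-selection step can be pictured as a simple dispatch. The propagate* function names below are hypothetical stand-ins for the routines of Appendix A, used only to show the control flow and the distance checks:

```matlab
% Illustrative dispatch of the propagation step; the propagate*
% function names are hypothetical stand-ins, not the actual routines.
switch method
    case 'FTM'    % minimum distance z_min, otherwise aliasing
        assert(abs(z) >= zmin, 'FTM requires |z| >= z_min');
        E = propagateFTM(E, z, lambda, dx);
    case 'HCM'    % valid between z_min and z_max
        assert(abs(z) >= zmin, 'HCM requires |z| >= z_min');
        E = propagateHCM(E, z, lambda, dx);
    case 'ASM'    % no practical limit, works at z = 0
        E = propagateASM(E, z, lambda, dx);
    case 'lens'   % digital lens with magnifying power Gamma
        E = propagateLens(E, z, Gamma, lambda, dx);
end
```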

If a wavefront is imported, the steps are as follows:

– Select the propagation to perform on the wavefront and fill in the parameters:
  – FTM: enter the distance z, with respect to z_min
  – HCM: enter the distance z, with respect to z_min
  – ASM: enter the distance z
  – Digital lens: fill in the distance z and the magnifying power Γ
– Propagate the wavefront
– Choose to save either the amplitude of the signal or the complex matrix containing the complete wavefront

3.3.2 On-line mode

Using the on-line mode can be useful for alignment purposes and for adjusting the parameters in real time. The program will be used as follows:

– Get the live feed from the set-up camera
– Fourier transform the image
– Display the Fourier transform for zone-of-interest selection
– Select a zone of interest (by default the software selects the whole image)
– Inverse Fourier transform the selected area
– Select the propagation to perform on the wavefront and fill in the parameters:
  – FTM: enter the distance z, with respect to z_min
  – HCM: enter the distance z, with respect to z_min
  – ASM: enter the distance z
  – Digital lens: fill in the distance z and the magnifying power Γ
– Propagate the wavefront
– Choose to save either the amplitude of the signal or the complex matrix containing the complete wavefront

Fig. 3.4 presents the flowchart, according to the American National Standards Institute (ANSI) standards and symbols [14].


Figure 3.4: Flowchart of the software


Chapter 4

Performances

4.1 Characterisation of the macroscope

4.1.1 Magnifying power

The classical definition is given as $G = \frac{\alpha'}{\alpha}$, with α the angle at which the object is observed at 25 cm from the eye and α′ the angle at which the object is observed at infinity through the microscope.

As the macroscope’s specifications are provided for a given set of objectives and oculars and we are not using oculars, we do not expect to find a magnifying power of 2.9 for the objective x1 with zoom x2.9.

For our system, the magnifying power is defined as the effective size of a pixel in the camera plane, $P_{cam}$, over the effective size of a pixel in the sample plane, $P_{sample}$:

$$\Gamma = \frac{P_{cam}}{P_{sample}} \tag{4.1}$$

We find $P_{cam}$ = 3.5 µm in the camera data sheet. We need to measure $P_{sample}$. We use a reference target, the U.S. Air Force target, to measure the effective size of one pixel in the sample plane. This is done by measuring the size in pixels of a known element of the target, and deducing the size, in micrometres, of one pixel in the sample plane.
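For illustration, this measurement reduces to a simple ratio; in the sketch below the element width and its measured extent in pixels are example values (62.5 µm is, for instance, the line width of a USAF group 3, element 1 pattern):

```matlab
% Effective sample-plane pixel size from a known USAF target element
% (illustrative sketch; element width and pixel count are examples).
elementWidth = 62.5;    % known width of the target element (µm)
measuredPx   = 32;      % measured width of that element (pixels)

Psample = elementWidth / measuredPx;   % µm per pixel, sample plane
Pcam    = 3.5;                         % camera pixel pitch (µm)
Gamma   = Pcam / Psample;              % magnifying power, eq. 4.1
```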

The measurements are done at λ = 532 nm, with a zoom of x2.9, while changing the objective. The numerical aperture (NA) changes with the zoom. The evaluated values of the numerical aperture for the zoom position x2.9 are given with the results in table 4.1.


| Objective | NA | P_sample (µm) | Magnifying power |
| --- | --- | --- | --- |
| x1 | 0.07 | 1.953 ± 0.004 | 1.79 |
| x2 | 0.14 | 0.9837 ± 0.0005 | 3.56 |
| x4 | 0.28 | 0.4924 ± 0.0004 | 7.11 |

Table 4.1: Magnifying power measurements, made for several objectives and zoom x2.9

As expected, we do not find a magnifying power of 2.9 for the x1 objective with zoom 2.9. However, we do have a factor of 2 between consecutive objectives, which is consistent with the macroscope specifications.

4.1.2 Resolution

Here, we image the U.S. Air Force target with an incoherent white light illumination, objective x1, for three different zooms: x1, x3 and x8. The resolutions for the different zoom values are given in table 4.2.

| Zoom | Resolution (µm) |
| --- | --- |
| x1 | 11.05 |
| x3 | 4.38 |
| x8 | 2.46 |

Table 4.2: Measurement of the resolution of the macroscope for different zooms, objective x1

As expected for this type of imaging device, the resolution improves with increasing zoom setting.

If we tentatively calculate the effective numerical aperture as $NA_{eff} = \frac{0.61\,\lambda}{d}$, with λ = 550 nm and d being the smallest resolvable distance as measured above, we observe an increasing effective NA with increasing zoom setting, as shown in Fig. 4.1. The nominal numerical aperture is NA = 0.1.

Figure 4.1: Evolution of the effective numerical aperture when changing the zoom of the macroscope


4.2 Imaging system

The experimental protocol for hologram acquisition consists of several steps.

First, a laser wavelength has to be chosen among the three available wavelengths (see Appendix B.1); the chosen laser then illuminates the sample, and the alignment of the beams can start.

The fringe spacing may be adjusted by aligning the separator cube; this can be monitored with the Labview interface, as the fringe spacing in the sample plane is inversely proportional to the distance between the central peak and the image in the Fourier space.

We can then record a hologram that will be imported into the Matlab program for further signal processing, see Fig. 3.3.

When the hologram is imported (an example is shown in Fig. 4.2a), we can select the size of the zone of interest, the blue rectangle on the Fourier transform (in Fig. 3.3, image on the right). The program then calculates the inverse Fourier transform of the zone of interest and displays its amplitude in the bottom left corner.

We then have the choice to propagate the wavefront. The user can fill in the desired parameters and propagate the wavefront to a plane of his or her choice. The resulting wavefront can then be saved: either its amplitude, as shown in Fig. 4.2b, or the complex matrix representing the phase and amplitude of the signal. The phase is represented in Fig. 4.2c.

(a) Hologram of the US Air Force target before processing
(b) Amplitude of the US Air Force target after processing
(c) Phase of the US Air Force target after processing

Figure 4.2: Different stages of image processing


4.3 Magnifying power and resolution of the holograms

4.3.1 Magnifying power

I made measurements to assess the magnifying power of the system in the configuration used for recording holograms. I made these measurements with the objective x1 and zoom position x2.9 (providing a plane sample wavefront) on the macroscope.

The method is similar to the one used to characterise the macroscope’s magnifying power. The U.S. Air Force target was imaged with the 532 nm green laser, and the effective size of a pixel in the sample plane was measured. The magnifying power is defined as the effective size of a pixel in the camera plane, $P_{cam}$, over the effective size of a pixel in the sample plane, $P_{sample}$.

| $P_{cam}$ (µm) | 3.5 |
| --- | --- |
| $P_{sample}$ (µm) | 2.08 ± 0.01 |
| Magnifying power | Γ = 1.68 |

Table 4.3: Measurements of magnifying power, made for objective x1 and zoom x2.9

4.3.2 Resolution before propagation

The ideal resolution of the set-up is given by the cut-off frequency of the detector, $f_{co}$, and the cut-off frequency related to the size of the zone of interest, $f_{zoi}$.

The cut-off frequency of the detector is given as the inverse of twice the effective size of a pixel in the sample plane, hence $f_{co} = \frac{1}{2 P_{sample}}$. With $P_{sample} = 2.08 \times 10^{-6}$ m as given in the previous part, we have $f_{co} = 2.4 \times 10^5\ \mathrm{m^{-1}}$.

The cut-off frequency of the zone of interest of the Fourier transform of a hologram is given as $f_{zoi} = \frac{\sqrt{2}}{3+\sqrt{2}} \times f_{co} = 7.7 \times 10^4\ \mathrm{m^{-1}}$.

The actual resolution of the system is measured by identifying the smallest resolved object on the image of the normalised USAF reference chart.

Before propagation, when retrieving the phase and amplitude of a recorded hologram, we reach a nominal resolution of d = 11.05 µm, according to the U.S. Air Force chart.

4.3.3 Resolution after propagation

The resolution depends on the size of the zone of interest selected before the propagation. If the zone of interest is too small, the amplitude will be blurred. Moreover, selecting a zone that is too wide is not necessary, as beyond a certain size we collect only noise and not signal.


When propagating a wavefront with the previously described algorithms over any distance compatible with the numerical stability of the algorithm used, the resolution is still d = 11 µm. The propagation does not affect the resolution when the size of the zone of interest is chosen carefully.

4.4 Artefacts on the Fourier transform of the hologram

On the Fourier transform of an off-axis hologram, we should only observe the central peak and two lateral peaks corresponding to the Fourier transforms of the real and virtual images. However, when we perform our measurements we see additional artefactual peaks at specific positions in the Fourier plane. In Fig. 4.3, the black arrow originating in the centre points to an image at a frequency $f_i$. We can see that it has a counterpart at the frequency $f_{art} = f_{max} - f_i$, with $f_{art}$ the frequency of the artefact image.

Figure 4.3: Fourier transform of a USAF target

The artefacts are not located at multiples of the distance between the first peak and the centre, and thus are not harmonics caused by non-linearities of the camera sensor. Given the mirrored position of the additional peaks, they are likely caused by aliasing effects, the origin of which could not be elucidated during this work.

Future experiments are planned using different cameras.


4.5 Noise analysis

While performing phase retrieval and wavefront analysis, we met a known problem of digital holographic microscopy: speckle [2]. The images resulting from the retrieval of information from a hologram were blurred by speckle, as shown in Fig. 4.4a: small spots, larger than the finest resolvable details, thus reducing the resolution. To better understand this phenomenon, we conducted a systematic noise analysis.

(a) Amplitude (b) Phase

Figure 4.4: Zoom on the speckle in the signal

The first approach is to compare the noise of an image to the modulation transfer function (MTF) of the whole set-up.

4.5.1 MTF and noise analysis through Matlab

The first step is to plot the theoretical amplitude modulation transfer function (MTF) of the set-up and compare it to the noise level. As our set-up is lit by coherent light, the ideal amplitude MTF is constant up to the coherent cut-off frequency $f_c$, shown as the dashed black line in Fig. 4.5. On the same graph is also plotted the effective amplitude MTF of the device, given for $NA_{eff}$ = 0.8, in solid black. On this plot we superimpose the noise of an image, to see how it reduces the resolution.

For that, I recorded the hologram of the wavefront of the device. I then retrieved the phase and amplitude information by putting the image through the Matlab program. This gives us the wavefront of the device, $W_d$.


In order to evaluate the noise, I did a 1D Fourier transform of $W_d$. I then took the average over the rows, getting a vector called $V_r$, and did the same for the columns, getting a vector $V_c$. The two vectors $V_r$ and $V_c$ thus contain the averages of the rows and columns of the Fourier transform of $W_d$.

In order to have a good representation of the noise, I need to normalise $V_r$ and $V_c$ by the average amplitude over the wavefront $W_d$. To do that, I coded a short Matlab program, see Appendix A.1, that calculates $I_0$, the average intensity over the image, through equation 4.2, and then calculates $A_0$, the average amplitude, given by $A_0 = \sqrt{I_0}$.

$$I_0 = \langle |A|^2 \rangle \tag{4.2}$$

By normalising $V_r$ and $V_c$ by $A_0$, I was able to plot on the same graph the MTF of the set-up and the noise for rows and columns, as represented in Fig. 4.5.

Figure 4.5: Ideal and effective amplitude MTF with noise of the device

We can see that from the frequency $f_{noise} = 1.30 \times 10^5\ \mathrm{m^{-1}}$ onwards, the noise overcomes the signal. This means that it is difficult to discern noise from signal for details finer than 7.7 µm on the image. Moreover, the noise is already at 0.94 for $f_{noise} = 1.18 \times 10^5\ \mathrm{m^{-1}}$, which implies that details finer than 8.5 µm are most likely noise rather than signal.
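A condensed Matlab sketch of this analysis, in the spirit of the routine in Appendix A.1 ($W_d$ is assumed to be the complex device wavefront, already reconstructed; averaging the spectral magnitudes row by row is one possible reading of the procedure described above):

```matlab
% Row and column noise spectra of the device wavefront Wd (sketch,
% in the spirit of Appendix A.1). Wd: complex wavefront matrix.
Sr = abs(fftshift(fft(Wd, [], 2), 2));   % 1-D FFT along each row
Vr = mean(Sr, 1);                        % average row spectrum
Sc = abs(fftshift(fft(Wd, [], 1), 1));   % 1-D FFT along each column
Vc = mean(Sc, 2);                        % average column spectrum

I0 = mean(abs(Wd(:)).^2);   % average intensity, eq. 4.2
A0 = sqrt(I0);              % average amplitude
Vr = Vr / A0;               % normalised noise spectra, to be
Vc = Vc / A0;               % superimposed on the MTF plot
```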


4.5.2 Speckle analysis through Image J

In order to confirm the results obtained through Matlab, I did some image analysis with the software Image J. I measured the apparent size of the speckle noise observed in the same image analysed with Matlab. A zoom on the speckle noise is shown in Fig. 4.4.

I measured the size of several speckles, and found an average of 11 ± 2 µm per speckle. This result is consistent with the measurement of noise made with Matlab.

4.5.3 Size of zone of interest and noise

As the images used in our program are not square, we found ourselves with different x and y spatial sampling.

The cut-off frequency of the system should be taken into account when deciding what size of ZOI should be used in the Matlab program. It was determined that the cut-off frequency was $f_c = 3.8 \times 10^5\ \mathrm{m^{-1}}$. Since one pixel in the Fourier plane corresponds to a frequency step of $1/(N P_{sample})$, a frequency f corresponds to $n = f N P_{sample}$ pixels along an axis of N pixels; $f_c$ thus represents 1740 pixels in the x direction and 2365 pixels in the y direction. The cut-off frequency sets an upper limit to the zone of interest (ZOI). However, this limit is never reached when using the software, as we try to limit the ZOI around the image we want to select.

The frequency at which the noise overcomes the signal, $f_{noise} = 1.30 \times 10^5\ \mathrm{m^{-1}}$, represents 595 pixels along the x direction and 809 pixels along the y direction. This means that the zone of interest should not be larger than 595 by 809 pixels. This yields table 4.4, which lists the maximum sizes of the zone of interest according to the frequency limitations.

| Frequency (m⁻¹) | x_max (pixels) | y_max (pixels) |
| --- | --- | --- |
| 3.8 × 10⁵ | 1740 | 2365 |
| 1.30 × 10⁵ | 595 | 809 |
| 1.18 × 10⁵ | 545 | 740 |

Table 4.4: Frequency and equivalent size of ZOI


Chapter 5

Phase-shift demodulation

5.1 Implementation

In order to dissociate the sample signal from the coherent noise induced by the optical set-up, we implement a demodulation method using sample displacement. A similar method is used by Pan [15], who moves the sample randomly instead of with the controlled translation that we perform. We move the sample by a defined step in a given direction, to obtain a total stage translation of one fringe spacing.

To get the values needed for the step and direction, we have to determine the orientation and spacing of the fringes. Then, we move the sample in a direction perpendicular to the fringes, as shown in Fig. 5.1.

Figure 5.1: Direction of sample translation according to fringe orientation

The sample is positioned on a translation stage that is driven by three motors, to move along the x, y and z axes. We control the motors through a computer interface provided by the manufacturer, as it is not possible to control them with Matlab.

In the first part of this section, I will present the calculations necessary for the characterisation of the angle φ between the sample plane and the camera plane, and


its measurement on the set-up.

5.1.1 Correlation of object and image plane coordinates

In our set-up, there is an angle φ between the x-axis of the camera and the x-axis of the motors. We want to determine that angle in order to take it into account for further measurements, and normalise the system.

From the angle between the sample plane and the camera plane, we have the following rotation matrix:

$$R(\varphi) = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix} \tag{5.1}$$

Given x and y the coordinates in the sample plane, x′ and y′ the coordinates in the camera plane and Γ the magnification between the two planes, as shown in Fig. 5.2, we have:

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \Gamma R \begin{pmatrix} x \\ y \end{pmatrix} \tag{5.2}$$

We first want to determine the rotation matrix. To do that, we perform a cross-correlation measurement. This consists in taking two images of a sample. In the first one, we simply image the sample in the object plane. In the second one, we translate the sample by moving the platform with the motors by, for example, 100 µm along the x-axis.

When we have recorded the two images, we perform a cross-correlation of the two images, which is a multiplication, in the Fourier space, of one image with the complex conjugate of the other. We do that through a Matlab program, see Appendix A.2.

When performing the cross-correlation, we have the first signal I(x, y) and the second signal, translated by $\Delta_x$ and $\Delta_y$, called I′(x, y). The translated signal is given as $I'(x, y) = I(x, y) * \delta(x - \Delta_x) * \delta(y - \Delta_y)$. This yields equation 5.3:

$$CC = \mathcal{F}^{-1}\{\tilde{I} \times \tilde{I}^* \times e^{-jk_x\Delta_x} \times e^{-jk_y\Delta_y}\} = AC(I) * \delta(x - \Delta_x) * \delta(y - \Delta_y) \tag{5.3}$$

where CC and AC are respectively the cross-correlation and auto-correlation signals. As stated in equation 5.3, the cross-correlation result is constituted of two spots. One is the autocorrelation signal, which appears as a bright central peak; the other is the cross-correlation peak, shifted by $\Delta_x$ and $\Delta_y$. When plotting the cross-correlation matrix, if the camera is well aligned with the image plane, we see two spots on the x-axis, separated by Γ × 100 µm.

We measure the angle φ between the x-axis and the direction of the second dot, as shown in Fig. 5.2 on the right.


Figure 5.2: Cross-correlation - determination of φ and Γ
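In Matlab, the whole procedure takes a few lines, in the spirit of the code in Appendix A.2; here I1 and I2 are assumed to be the two recorded images, and the mean is subtracted to suppress the constant background before correlating:

```matlab
% Fourier-domain cross-correlation of two images (sketch, in the
% spirit of Appendix A.2). I1, I2: the two recorded images.
I1 = I1 - mean(I1(:));                 % remove the constant background
I2 = I2 - mean(I2(:));
CC = fftshift(ifft2(fft2(I1) .* conj(fft2(I2))));   % cf. eq. 5.3

[~, idx] = max(abs(CC(:)));            % locate the shifted peak
[py, px] = ind2sub(size(CC), idx);
ctr      = floor(size(CC)/2) + 1;      % position of the central peak
shiftPx  = [px - ctr(2), py - ctr(1)]; % shift in camera-plane pixels
phiDeg   = atan2d(shiftPx(2), shiftPx(1));   % angle of the shift
```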

To measure φ on our set-up, I performed the measurements by moving the sample along the y-axis. I took 4 images, between which I moved the sample by 100 µm along the y-axis. The measurements were done with the objective x1 and zoom x2.9. I performed the cross-correlation calculation with the Matlab code, see Appendix A.2, and got the results of table 5.1, measured in the camera plane.

A zoom on the centre of the cross-correlation plot can be seen in Fig. 5.3; this cross-correlation was done between the first and the second images taken, meaning that the motors moved by 100 µm along the y-axis.

Figure 5.3: Zoom on the result of cross-correlation

Figure 5.4: Profile of the zoom, to verify the distance

To measure the distance between the two peaks of Fig. 5.3, I plotted the profile of a line traversing both peaks. As can be seen in Fig. 5.4, the measured distance is 168 µm in the camera plane, as expected for a 100 µm translation in the sample plane.

For the angle φ, we can see that the cross-correlation spot is along the x-axis and not the y-axis. The measurements state that φ = 90° for this particular set-up (the exact right angle observed here being a coincidence).


| Images used | Sample plane ∆x (µm) | Sample plane ∆y (µm) | Camera plane ∆x′ (µm) | Camera plane ∆y′ (µm) | Γ | φ |
| --- | --- | --- | --- | --- | --- | --- |
| 1 and 2 | 0 | 100 | 168 | 0 | 1.68 | 90° |
| 1 and 3 | 0 | 200 | 336 | 0 | 1.68 | 90° |
| 1 and 4 | 0 | 300 | 504 | 0 | 1.68 | 90° |

Table 5.1: Cross-correlation results

The cross-correlation procedure needs to be repeated whenever the camera is moved, to normalise the calculations of the movement of the translation stage.

5.1.2 Reference beam angle and fringe pattern distance and orientation

Two parameters need to be determined in order to have the motors execute the correct movement under the fringe pattern. First, we need to know the direction in which to move, and then the distance of the step. Both pieces of information can be extracted from the Fourier transform of the image of the sample.

When performing off-axis holography, we find ourselves with several images in the Fourier space. We have a central peak, which we do not presently exploit, and we have virtual and real images of the sample. We will now focus on only one of those two, the real image.

When plotting the Fourier transform, we can measure the distance between the central peak and the centre of the real image. The coordinates of the centre of the real image, fx and fy, are all the information we need to make our calculations.

Figure 5.5: Fourier transform - determination of θ


The spatial coordinates $f_x$ and $f_y$ are given relative to the central autocorrelation peak of the image, see Fig. 5.5.

$$\theta = \arctan\!\left(\frac{f_y}{f_x}\right) \tag{5.4}$$

This gives us information on the orientation of the fringes in the camera plane, as they are perpendicular to the direction of θ, meaning that we want to move the sample in the direction of θ. We must not forget to take into account the angle φ to go back to the object plane. We then have the angle β, representing the orientation of the fringes in the object plane, as shown in Fig. 5.6.

Figure 5.6: Interference fringes in the object plane

Now, we want to get the information about the fringe spacing. The distance between the central peak and the centre of the real image is the inverse of the fringe spacing. This gives us the fringe spacing $d_f$:

$$d_f = \frac{1}{\sqrt{f_x^2 + f_y^2}} \tag{5.5}$$

With our set-up, we can, in the Fourier space, measure the distance, in pixels, between the centre and either the real or virtual image. From this measurement, we can calculate θ and df, as can be seen in Fig. 5.7.

Having θ and $d_f$, we can calculate the distances $\Delta_x$ and $\Delta_y$, which are the distances the motors have to cover in order to move the sample by a whole 2π phase under the fringes.

We have:

$$\Delta_x = d_f \sin\beta \quad \text{and} \quad \Delta_y = d_f \cos\beta \tag{5.6}$$

where β is the angle in the object plane between the x-axis and the fringes, thus β = π/2 − θ − φ.


Figure 5.7: Fourier space, measurement of df

After the measurements of $f_x$ and $f_y$ in the Fourier space, we have all the information necessary to make our phase-shift images.

How to make the images

Having $\Delta_x$ and $\Delta_y$, we can calculate the coordinates we need to use for controlling the motors to make our phase-shift demodulation images. For example, with $d_f$ = 8.89 µm, φ = 50° and θ = 9.9°, we get $\Delta_x$ = 4.5 µm and $\Delta_y$ = 7.7 µm. We will make four measurements with specific coordinates for the motors, considering that the coordinates of the first image, with phase shift 0, are (0, 0, 0). We get table 5.2:

| Phase shift | 0 | π/2 | π | 3π/2 |
| --- | --- | --- | --- | --- |
| x (µm) | 0 | 1.1 | 2.2 | 3.3 |
| y (µm) | 0 | 1.9 | 3.8 | 5.8 |
| z (µm) | 0 | 0 | 0 | 0 |

Table 5.2: Example of motor coordinates
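A Matlab sketch of this calculation, in the spirit of the shift_calc function (Appendix A.3.1); the input values reproduce the example above, with angles in degrees:

```matlab
% Motor coordinates for pi/2 phase-shift steps (sketch, in the spirit
% of shift_calc, Appendix A.3.1). Input values from the example above.
df    = 8.89;               % fringe spacing in the object plane (µm)
phi   = 50;                 % camera/motor axis angle (degrees)
theta = 9.9;                % fringe direction in Fourier space (degrees)

beta = 90 - theta - phi;    % fringe orientation in the object plane
Dx   = df * sind(beta);     % full-period translation, eq. 5.6
Dy   = df * cosd(beta);

steps  = (0:3).' / 4;                        % phase shifts 0 .. 3*pi/2
coords = [steps*Dx, steps*Dy, zeros(4,1)];   % x, y, z coordinates (µm)
```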

With our set-up, we record the wavefront of the device as the reference, see Fig. 5.8b, as it can be used in the demodulation algorithm. Then, having determined the translation to make between the images, we make 4 measurements corresponding to phase shifts of 0, π/2, π and 3π/2.

With those 5 images, we can run the Matlab program, see Appendix A.3, and save the result of the demodulation.


(a) Phase of the wavefront of the device
(b) Wavefront of the device, without sample
(c) Profile over width, at half-height
(d) Phase of the one-period demodulation result
(e) Demodulation result, going over one period
(f) Profile over width, at half-height
(g) Profile over height, at half-width

Figure 5.8: Amplitudes and phases of device wavefront and demodulation result, with associated noise

We can see that the result of the demodulation, in Fig. 5.8e, has less noise than the wavefront of the device, in Fig. 5.8b. We measure a resolution of 8.8 µm.

We observe some residual noise in the background. We see slow variations, similar to a spherical aberration, and speckle patterns. The speckle is analysed in the noise section of this report. The next section is focused on trying to reduce the background’s slow variations.

5.1.3 Protocol

Here I will describe the protocol used to make demodulation measurements. The calibration of the angle φ must be done beforehand. We then need to take an image of the reference and object beams, without sample, to get the information on the interference fringes; we here call it the calibration image (CI). In order to obtain the information needed for the motors, we then plot the Fourier transform of CI and determine the pixel coordinates of the centre of the real image, $f_{x0}$ and $f_{y0}$.

We can then enter the coordinates into the Matlab program, shown in Appendix A.3, in order to get the matrix of coordinates to give the motors using the function shift_calc, found in Appendix A.3.1.

The next step is to record the 4 phase-shift images, with the sample in place. One can also take a reference image, to normalise the signal, if needed. The four images are then processed with the same Matlab program, using the function unshift, seen in Appendix A.3.2, which shifts the images back to the plane of the 0 phase shift image. We can then make the demodulation calculation, using the function demodulatePiOver2PhaseShift found in Appendix A.3.3, and save the amplitude associated with it.

In order to simplify this protocol, I wrote functions to calculate the fringe spacing, and thus the movement of the motors, and the shift to use for each image to be able to superimpose them. There is also a demodulation function, and a routine to save the amplitude of the signal.

5.2 Further experiments

In this part, we investigate how to further reduce the noise we still have in the demodulated image.

5.2.1 Reference and object beams

In this part, we make the same experiments as before, but this time without any object in the path. This means that we only get the signal from the noise on both reference and object beams.

First we use the object beam as the reference for the demodulation calculation, and then we use the reference beam.

Comparing the results, we see that Fig. 5.9, normalised by the reference beam alone, has more pronounced interference patterns due to dust than Fig. 5.11, which shows data normalised by the object beam, without sample. From that, we can conclude that the reference arm is cleaner than the object arm. Looking at the profiles of the images shown in Fig. 5.10 and 5.12, we can see that the noise level is similar in both images. The profiles plotted are the average pixel intensities of 5 lines, taken at half-height of the images.


Figure 5.9: Image normalised by the reference beam

Figure 5.10: Profile at half-height of Fig. 5.9

Figure 5.11: Image normalised by the object beam

Figure 5.12: Profile at half-height of Fig. 5.11

5.2.2 Demodulation beyond one period

In this part, we try to reduce the slow background variations. One idea is to translate the sample over more than one period, and average the demodulation results of each period.

As before, we take a first image to determine the orientation and spacing of the fringes. From this, we calculate the translation to make between two images in order to have a π/2 phase shift. This time, we take more than 5 images.

We still take the image of the reference beam, and 4 images for the first 2π shift, but we then move the sample further, to cover more fringe spacings. We start by doing this over 10π, meaning that we take 20 images: one of the reference beam, and 19 while moving the object in its plane.


(a) Amplitude (b) Phase

Figure 5.13: Demodulation result over 10π

The result of this five-period demodulation is shown in Fig. 5.13. By going over 5 periods and averaging, we can see an improvement of the background speckle. The resolution is 7.8 µm, better than when doing the demodulation over a single period.

5.3 Normalisation

A way to reduce the noise is to normalise the image of the two beams with a sample, called $I_s$, by the image of the two beams without sample, called $I_{ns}$. When we image a sample, what we get on $I_s$ is a multiplication of the transmittance of the sample, T, and the object beam without sample, $O_0$.

Equation (1.1) becomes:

$$I_s(x, y) = |R(x, y)|^2 + |O_0(x, y)T(x, y)|^2 + O_0(x, y)T(x, y)R^*(x, y) + O_0^*(x, y)T^*(x, y)R(x, y) \tag{5.7}$$

This means that if we want to get only the information about the transmittance of the sample T(x, y), we need to divide the wavefront of the two beams with sample by the wavefront of the two beams without sample, as stated in equation 5.8, with $W_s = O_0(x, y)T(x, y)R^*(x, y)$ the wavefront of the device with sample and $W_d = O_0(x, y)R^*(x, y)$ the wavefront of the device without sample. It is important to do the calculation on the reconstructed holograms, meaning the matrices containing the phase and amplitude of the signals.

$$T(x, y) = \frac{W_s}{W_d} \tag{5.8}$$

When normalising, we have to be cautious about dividing by 0. To make sure that no value of the matrix $W_d$ is equal to zero, we use low-pass filters on $W_d$. We can see the results of the different filters in Fig. 5.14. I use a Butterworth low-pass filter applied to the Fourier transform of $W_d$ in Fig. 5.14f; I work directly on $W_d$ by applying a moving average filter in Fig. 5.14b, a Gaussian filter in Fig. 5.14d, a median filter in Fig. 5.14c, or by setting all the values between −0.5 and 0.5 to 1 in Fig. 5.14e. The code for the normalisation with those filters can be found in Appendix A.4.

(a) No low-pass filter applied (b) Moving average filter (c) Median filter

(d) Gaussian filter (e) W_d(−0.5 : 0.5) = 1 (f) Butterworth filter, order 3

Figure 5.14: Amplitude and phase results of normalisation by different low-pass filters

With or without filter, we measure a resolution of 11 µm. This technique does not improve the amplitude resolution, although the phase is correctly rendered.
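For illustration, here is a minimal Matlab sketch of the Fourier-domain variant; this is an assumed implementation, not the Appendix A.4 code, and the normalised cut-off frequency 'fc' is a hypothetical parameter:

% Minimal sketch (hypothetical names): order-3 Butterworth low-pass filter
% applied to the Fourier transform of Wd before the division in eq. (5.8).
function T = normalise_butterworth(Ws, Wd, fc)
    [ny, nx] = size(Wd);
    [u, v] = meshgrid((-nx/2 : nx/2 - 1) / nx, (-ny/2 : ny/2 - 1) / ny);
    r = hypot(u, v);                          % radial spatial frequency grid
    Hlp = 1 ./ (1 + (r / fc).^6);             % order-3 Butterworth response
    WdF = fftshift(fft2(Wd));                 % centred spectrum of Wd
    WdSmooth = ifft2(ifftshift(Hlp .* WdF));  % smoothed Wd, no zeros left
    T = Ws ./ WdSmooth;                       % transmittance, equation (5.8)
end

% Example use (assumed data): T = normalise_butterworth(Ws, Wd, 0.1);
% imagesc(abs(T)); figure; imagesc(angle(T));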


Chapter 6

Measurements on eye tissue

We measured the phase and amplitude of a human cornea. Fig. 6.1 shows the retrieved amplitude and phase of the cornea. The study was conducted according to the tenets of the Declaration of Helsinki and the French legislation for the scientific use of human corneas.

Figure 6.1: Amplitude and phase of a human cornea

When dealing with a cornea, the amplitude becomes less interesting, as the tissue is transparent. We are mostly interested in the phase and its behaviour as the light passes through the corneal tissue. In Fig. 6.1, we see a pattern in the phase, characterising the signal transmitted through the tissue.


The measurements indicate that the cornea is strongly scattering, which is not surprising taking into account its conservation conditions and its fixated state.

Systematic measurements on animal and human corneas preserved under defined conditions will be undertaken during subsequent projects.


Summary and conclusion

The objectives of this work were to set up, optimise and characterise a digital holographic microscopy laboratory set-up designed for the study of eye tissue and to implement and optimise digital data processing and noise reduction routines.

The device is based on a laboratory microscope with a powerful zoom function (a “macroscope”), to which a home-made three-wavelength Mach-Zehnder interferometer and a stepper-motor translation stage for sample displacement have been added.

During this work, we characterised the magnification and the resolution of the device for a given parameter set and compared them to theoretical values.

In continuation of previous work, we extended and optimised a Matlab program with a numerical interface for the processing of holograms and the numerical propagation of electromagnetic wavefronts. We implemented and tested three algorithms based on the discretisation of the Huygens-Fresnel principle, implemented a “numerical lens” and programmed the numerical reconstruction of the complex electromagnetic wavefront from the hologram.

Experiments on test targets using the given parameter set showed satisfactory performance of the device concerning the reconstruction of the amplitude, and very good performance when reconstructing the phase of the wavefront. The resolution proved to be within the expected range.

Raw data suffered from the presence of coherent noise caused by imperfections and dust within the optical device. We tested a series of normalisation and demodulation strategies. Demodulation based on measurements performed while moving the sample in defined steps permitted us to dissociate the sample signal from the coherent noise and to suppress much of the latter.

We concluded our experimental campaign with a measurement on human cornea.

The experimental set-up is now operational and ready for further study of eye tissue. For future work, the device will need to be serviced and operated in a clean environment; this is planned for the month of March, when the group will move to a new and dedicated laboratory environment.

Further work will need to explore parameter settings with different objectives and different zoom settings. The latter will need to take into account non-planar sample wavefronts, for which the necessary routines (numerical lens, numerical propagation) were implemented during the present internship. Further characterisation will be performed by imaging known substances with specific parameters, focusing on the phase and its behaviour.

The set-up will then be used as a laboratory characterisation method for the optical transparency of the cornea, as part of the host work-group's activity in quantifying and modelling corneal transparency, in view of developing quantitative clinical characterisation devices for in vivo measurements on patients.


List of Figures

1 Recording of Gabor hologram, reprinted from [2]
2 Formation of twin images from a Gabor hologram, reprinted from [2]
3 Recording a Leith-Upatnieks hologram, reprinted from [2]
4 Reconstruction of images from a Leith-Upatnieks hologram, reprinted from [2]
1.1 Anatomy of the cornea, reprinted from [4]
1.2 Healthy cornea and its structure, Aptel F., LOA, Palaiseau, Salvoldelli M., Hôtel Dieu, Paris, 2010
1.3 Oedematous cornea and its structure, Aptel F., LOA, Palaiseau, Salvoldelli M., Hôtel Dieu, Paris, 2010
1.4 Spatial frequency representation of off-axis holograms, reprinted from [8]
2.1 Our digital holographic microscopy set-up
2.2 AZ100M Nikon macroscope
3.1 Labview's interface
3.2 Fourier transform of holograms in different image Fourier planes
3.3 Matlab interface
3.4 Flowchart of the software
4.1 Evolution of the effective numerical aperture when changing the zoom of the macroscope
4.2 Different stages of image processing
4.3 Fourier transform of a USAF target
4.4 Zoom on the speckle in the signal
4.5 Ideal and effective amplitude MTF with noise of the device
5.1 Direction of sample translation according to fringe orientation
5.2 Cross-correlation - determination of φ and Γ
5.3 Zoom on the result of cross-correlation
5.4 Profile of the zoom, to verify the distance
5.5 Fourier transform - determination of θ
5.6 Interference fringes in the object plane

References
