Speckle imaging and image processing applied to life sciences


How can one use the full potential of speckle signal in coherent imaging with a single snapshot or a short stack of images?

XAVIER BERTHELON

Master’s Thesis at Creatis,

CNRS UMR 5220 – INSERM U1044 – Université Lyon 1 – INSA Lyon

Supervisor (France): David ROUSSEAU, Creatis
Supervisor (Sweden): Martin VIKLUND, KTH
Examiner (Sweden): Jerker WIDENGREN, KTH

20 January 2015


This Master Thesis project focuses on the use of speckle imaging, obtained by shining coherent light on objects. Speckle imaging has a wide range of applications, among which is bio-speckle imaging: a non-invasive technique used to characterize the activity of biological tissues. The guiding question of this project is the following: how can one use the full potential of the speckle signal in coherent imaging with a single snapshot or a short stack of images? I divided my project into two parts:

• Time recording of a speckle pattern with a simple optical setup over a short period of time carries several pieces of information. How can one use this information to depict objects in a 3D scene?

• Speckle imaging is widely used in biological sciences. We are going to apply our imaging techniques to biomedical experimental data.


Contents

Introduction
1 A multi-modal speckle imaging technique
  1.1 Optical setup
  1.2 Distance from speckle
    1.2.1 Principle
    1.2.2 Results
  1.3 Degree of polarization map from speckle
    1.3.1 Principle
    1.3.2 Results
  1.4 Bio-speckle activity map from speckle
    1.4.1 Principle
    1.4.2 Results
  1.5 Experiments with a 3D scene
    1.5.1 Analysis
2 Applications: Bio-speckle imaging
  2.1 Interest of monitoring blood flow with laser speckle contrast imaging
  2.2 Experimental setup
    2.2.1 Setup for small animal imaging
    2.2.2 Surgical protocol
  2.3 Results
    2.3.1 Image acquisition
    2.3.2 Image processing
A Stereovision
  A.1 Theory
  A.2 Matlab interface
  A.3 Details and example with template images
  A.4 Results with images acquired by the setup
  A.5 Comparison with Minoru3D Webcam and speckle illumination
B Intensity registration
C Cameras and Optical zoom
  C.1 Cameras
  C.2 Optical zoom
D Camera Interface
E User Guides


Speckle is a phenomenon observed when coherent light passes through a randomly fluctuating medium or is reflected off a rough surface. As soon as continuous-wave lasers became widely used and commercially available, this fine-scale granular pattern could be easily observed. But what exactly lies behind the term "speckle imaging"? The speckle signal is the pattern created by coherent waves when they interact with a rough surface bearing micro-structures. In optics, the coherent light used is a monochromatic laser, which is reflected by the surface in a random process due to the phase shift introduced by the micro-structures (see Figure 1a). The reflected light waves interfere with each other, sometimes constructively, creating bright grains, and sometimes destructively, creating dark areas, as shown in Figure 1b.


Figure 1: Panel (a) is an illustration of the speckle principle and (b) a speckle pattern of an apple recorded with a monochromatic CCD camera.

Macroscopic movements of the entire sample or microscopic fluctuation processes within the sample modify the speckle pattern. Recording the evolution of the speckle pattern over time with a CCD camera therefore allows a non-invasive study of biological phenomena such as plant growth. Moreover, other information can be extracted from the speckle signal, such as the degree of polarization of an area or depth information. New 3D technologies such as the Kinect use a pattern of coherent light, similar to the speckle pattern, to reconstruct a 3D scene.


We noticed that these different imaging methods are rarely used simultaneously, making speckle imaging a mono-modal technique. This is mostly due to constraints such as time in bio-medical applications or distance in military uses.

As each image contains complementary information on the speckle signal, we propose in this report to build a close-detection setup that simultaneously makes use of the different pieces of information contained in the speckle pattern.

In the first part we show how to use speckle as a multi-modal imaging technique with a simple optical setup, creating speckle size maps, degree of polarization maps and bio-speckle activity maps. Thanks to this supplementary information we manage to highlight contrast in an image that initially appears almost uniform in intensity.

The second part is dedicated to the application of our imaging techniques to biomedical experimental data.

A multi-modal speckle imaging technique

Some 3D scenes can appear poorly contrasted in intensity even though many other contrasts actually exist within the image (depth differences, polarization or even biological activity). In this section, we present our basic imaging setup and explain how it is possible to go beyond sole luminance mapping.

1.1 Optical setup

The speckle imaging setup requires a CCD camera and a laser, placed beside the camera, with a diffuser (a piece of tape) in front of it. The laser focuses on a point around 1 cm away from its opening: the closer the diffuser is to this point, the bigger the grains; the further away it is, the smaller they are. We fixed the diffuser on a micro-metric screw in order to be able to tune the speckle grain size. Differences in speckle grain size will be useful for the shape-from-texture reconstruction process as well as for bio-speckle acquisition. Figure 1.1 shows the complete setup used to record images of the scene over time.

Figure 1.1: Setup using a CCD camera, a monochromatic laser and a diffuser.


1.2 Distance from speckle

1.2.1 Principle

Due to the diffusion of the light, objects that are closer to the camera show areas of smaller speckle grain size, whereas the background shows areas with the largest grains. As it is possible to compute the grain size for a specific area, we propose to use this geometrical fact to estimate a distance map from speckle grain size measurements. This technique is similar to shape from texture, a method that determines depth from the deformation of a known structured pattern.

The speckle grain size is computed according to the paper by Nassif et al., using the normalized auto-correlation function c_I(x, y) of the speckle intensity pattern, which we denote I(x, y), acquired in the observation plane (x, y) of the camera. The auto-correlation function has a base level of 0 and its width provides a reasonable measurement of the average size of a speckle grain. The computation rests on the Wiener-Khinchin theorem and is described by the following equation

c_I(x, y) = ( FT^-1[ |FT[I(x, y)]|^2 ] − <I(x, y)>^2 ) / ( <I^2(x, y)> − <I(x, y)>^2 ),   (1.1)

where FT is the Fourier transform and < > the spatial average. The horizontal dimension of the speckle grain is given by the full width at half maximum (FWHM) of the horizontal profile of c_I(x, y), and the vertical dimension by the FWHM of its vertical profile. Speckle grain size depends on

• the wavelength λ

• the distance of observation between the camera and the illuminated sample D

• the diameter D_e of the circularly illuminated area as seen by the camera.

Hence, for a circularly illuminated surface, one has

d_x = 1.22 λ D / (D_e cos θ),   (1.2)

d_y = 1.22 λ D / D_e,   (1.3)

θ being the angle of observation between the camera and the optical axis. We can therefore compute D from the measured values of d_x and d_y by averaging the two values of D they yield.
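For concreteness, the grain-size measurement described above can be sketched in a few lines (a Python/numpy stand-in for the thesis' Matlab routines; the square window and the pixel-count FWHM are our simplifying choices):

```python
import numpy as np

def grain_size(window):
    """Estimate the (horizontal, vertical) speckle grain size in pixels from
    the normalized auto-correlation of an intensity window, as in Eq. (1.1)."""
    I = window.astype(float)
    mean, var = I.mean(), I.var()
    # Wiener-Khinchin: the inverse FFT of the power spectrum is the circular
    # auto-correlation; dividing by the pixel count gives <I(x)I(x+tau)>.
    ac = np.fft.ifft2(np.abs(np.fft.fft2(I)) ** 2).real / I.size
    c = np.fft.fftshift((ac - mean ** 2) / var)  # normalized, peak = 1 at center
    cy, cx = c.shape[0] // 2, c.shape[1] // 2
    fwhm = lambda profile: int(np.count_nonzero(profile >= 0.5))  # peak value is 1
    return fwhm(c[cy, :]), fwhm(c[:, cx])
```

The two FWHM values can then be inserted into equations (1.2) and (1.3) and the two resulting estimates of D averaged.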

1.2.2 Results

First, we use an object closer to the camera than the background. For now the object is flat and perpendicular to the observation direction. The aim is to discriminate the object's location using the horizontal speckle grain size: as the grain size is proportional to the distance from the camera, as shown in equation (1.2), we should measure smaller grains on the object. The image with the object is shown in Figure 1.2.

Figure 1.2: Speckle pattern projected on an object (left) and background (right).

To have a good estimate of the speckle grain size it is necessary to compute an average size over several small windows inside each region. The windows should be larger than the speckle grain size. For our experiments we used small windows of 80x80 pixels and an average of the speckle grain size has been performed over 10 small windows chosen randomly in both regions. The average values of horizontal speckle grain size for both regions are shown in table 1.1.

Table 1.1: Horizontal speckle grain size measured in pixels on object and background

Object Background

Experiment 1 5.8 6.5

Experiment 2 5.8 7.6

Experiment 3 5.2 7.8

Experiment 4 5.3 6.3

We observe that the speckle grain size is smaller on the object which means that it appears closer to the camera than the background. However the size varies a lot from one experiment to another, hence the need to perform several experiments to have a better idea of the mean speckle size.

We perform a new experiment using a tilted plane. We divide the image into 4 regions, as shown in Figure 1.3, and perform the same type of acquisition. Results are shown in Table 1.2.


Figure 1.3: Tilted plane depicting the four regions of interest.

Table 1.2: Horizontal speckle grain size measured in pixels for tilted plane

Region 1 Region 2 Region 3 Region 4

Experiment 1 7.5 6.9 6.3 5.2

Experiment 2 6.5 5.6 4.7 4.5

Experiment 3 6.4 5.7 4.6 5.0

Experiment 4 7.0 6.6 6.2 4.8

Results are as expected, giving bigger values for the furthest region (Region 1) and smaller values for the closest one (Region 4). In experiment 3, though, region 3 shows lower values than region 4. This suggests that using only one direction for the computation of the speckle grain size is not robust enough.

We improved the algorithm by also computing the vertical speckle grain size, thereby gaining access to the average speckle grain area in a region (measured in pixels). Assuming the grains are ellipses, their area is

A = πab (1.4)

with a and b the half axes of the ellipse, which correspond to half the FWHM measured with the auto-correlation algorithm. A new set of 3 experiments was performed to determine the area of speckle grains in 3 regions:

• region 1: upper part of the image

• region 2: middle of the image

• region 3: bottom of the image
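The area computation of equation (1.4) is a one-liner; a hypothetical helper, shown in Python for illustration:

```python
import math

def grain_area(fwhm_h, fwhm_v):
    """Area of a speckle grain (Eq. 1.4), modelled as an ellipse whose half
    axes a and b are half the measured horizontal and vertical FWHM."""
    return math.pi * (fwhm_h / 2.0) * (fwhm_v / 2.0)
```

For example, the Region 1 values of Experiment 1 below (horizontal 6.4 px, vertical 7.0 px) give about 35.2 px², consistent with the 35.4 px² reported in Table 1.3 up to rounding of the FWHMs.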


Table 1.3: Area of speckle grain measured in pixels in different regions

Experiment 1 Region 1 Region 2 Region 3

Vertical 7.0 6.6 5.1

Horizontal 6.4 5.0 4.2

Area 35.4 26.0 17.1

Experiment 2 Region 1 Region 2 Region 3

Vertical 6.2 4.5 4.7

Horizontal 7.1 5.7 5.3

Area 34.7 20.5 19.7

Experiment 3 Region 1 Region 2 Region 3

Vertical 6.8 4.6 4.9

Horizontal 7.3 6.0 5.4

Area 39.0 22.6 21.0

Results are shown in Table 1.3.

As explained before, speckle grain sizes can be hard to tell apart. Using both directions, however, compensates for this ambiguity and we obtain consistent values for each region.

As we can see, the computed speckle grain size, whatever the direction, varies from one experiment to another. We found empirically that below a depth difference of 3 cm, the speckle grain size difference measured between two regions is not large enough to assure the user of a real disparity. Significant results can be obtained only if the disparities between objects, i.e. the depth differences, are large enough.

The limitations here are:

• The camera resolution, which prevents the use of very small speckle grains and thus more accurate computations.

• Non-uniform lighting, which is responsible for a low signal-to-noise ratio at the edges of the picture; our measurements are therefore polluted by noise. Work on this point is presented in Appendix B.

We will now consider the case where objects are located in the same plane of our 3D scene. Depth is no longer a valid criterion to discriminate them and new methods need to be investigated.

1.3 Degree of polarization map from speckle

1.3.1 Principle

According to the paper from Réfrégier et al., if the speckle is fully developed, the intensity on each pixel of the detector is the sum of two independent random intensities that have exponential distributions with different means. The probability density function (pdf) of the resulting intensity, which we will denote I, can therefore be written

p_I(I) = (1 / (P I_T)) { exp(−2I / ((1 + P) I_T)) − exp(−2I / ((1 − P) I_T)) },   (1.5)

where P is the degree of polarization of the reflected light and I_T is the mean intensity. Note that for fully polarized light, i.e. P = 1, the pdf is exponential, while it is a gamma density of order 2 for P = 0. This is illustrated in Figure 1.4.

Figure 1.4: Probability density function of the intensity of a partially polarized speckle pattern as a function of the intensity and degree of polarization. Extract from Goodman.

The first step is to select a small region of interest; from the pdf of this region we fit the curve with a double exponential of the form

fit(x) = a e^(bx) + c e^(dx).   (1.6)

Finally, identifying the fitted coefficients with those of equation (1.5) allows us to compute P. Indeed, we have

P = (d − b) / (d + b).   (1.7)

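A sketch of this estimation (Python with scipy as a stand-in for the thesis code; the histogram binning and the starting point p0 are our own choices). The degree of polarization is obtained by combining the fitted exponents as (d − b)/(d + b), which follows from identifying equation (1.6) with equation (1.5); the absolute value makes the estimate independent of which exponent the fit happens to label b:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(x, a, b, c, d):
    # Eq. (1.6): fit(x) = a*exp(b*x) + c*exp(d*x)
    return a * np.exp(b * x) + c * np.exp(d * x)

def degree_of_polarization(intensities, bins=64):
    """Estimate P for a small ROI from the pdf of its pixel intensities."""
    pdf, edges = np.histogram(intensities, bins=bins, density=True)
    x = 0.5 * (edges[:-1] + edges[1:])
    mu = intensities.mean()
    # Starting point: two decaying exponentials of opposite amplitude.
    p0 = (pdf.max(), -1.0 / mu, -pdf.max(), -4.0 / mu)
    (a, b, c, d), _ = curve_fit(double_exp, x, pdf, p0=p0, maxfev=20000)
    return abs((d - b) / (d + b))
```

A quick check of the model: since Eq. (1.5) is the pdf of the sum of two independent exponential intensities, sampling such a sum with a known P and feeding it to this routine should return approximately the same P.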

1.3.2 Results

We used a piece of metal on which we stuck one piece of tape in region 2 and three pieces of tape on top of each other in region 3, as seen in Figure 1.5b. Tape is made of polymers oriented along the axis of elongation during manufacturing. We expect these fibers to be birefringent (birefringence is the property of a material whose refractive index depends on the polarization and propagation direction of light) and therefore to depolarize the light. Region 1 corresponds to the background (a metal tin). Figure 1.5a shows the image captured by the camera with the 3 different regions.

(a) Image in intensity (b) Same image with highlight of ROIs

Figure 1.5: Metal plate with pieces of tape: estimation of the degree of polarization.

The pdf was computed for each region and a fit was applied to each curve, as shown in Figure 1.6.

Figure 1.6: Probability density functions for the 3 ROIs


Laser light is partially polarized. When it hits the metal (region 1) the polarization is not changed; its value, obtained from the coefficients of the fitted curve, is P = 0.86. When hitting the tape, however, the polarization is altered by the micro-structure of the material (elongated fibers): with one layer of tape the degree of polarization drops to P = 0.64, and with three layers it is even lower, P = 0.42. As seen in Figure 1.7, computing the degree of polarization on small 5x5 windows highlights the location of both pieces of tape on top of the metal. The size of these windows was chosen arbitrarily: it must be as small as possible for the best spatial resolution, but large enough to provide sufficient data for the fitting operation.

Figure 1.7: Map of the degree of polarization. Values close to 1 mean a complete polarization and values closer to 0 mean a complete depolarization.

The result is a bit noisy due to artifacts in the fitting operation, so the computation of the degree of polarization is not fully accurate. The map as a whole, however, gives the user a good estimate with which to discriminate objects that are not well contrasted in intensity, as seen in Figure 1.5a.

1.4 Bio-speckle activity map from speckle

1.4.1 Principle

The word bio-speckle is used when the surface under study is a biological tissue. Laser light penetrates a few µm into biological materials. As biological processes locally change the refractive index and the path length within the tissue, the reflected light waves vary and therefore change the speckle pattern over time. The index n and the path length L are directly related to the phase of the reflected wave,

φ = 2π n L / λ.   (1.8)

Speckle movements can be easily visualized by comparing one image with a reference one. In this respect we compute normalized cross-covariance (NC) curves in small windows within our image. Each image acquired at time t is compared to the first image taken at t = t_0:

NC(t) = (1/N) Σ_{x,y} [ (f(x, y, 0) − f̄_0)(f(x, y, t) − f̄_t) / (σ_f0 σ_ft) ],   (1.9)

with f(x, y, 0) a pixel of the first image at t = t_0, f(x, y, t) a pixel of the image f acquired at time t, f̄_t and σ_ft the spatial mean and spatial standard deviation of the image at time t, and N the number of pixels in the window. If NC(t) = 1, the image f is identical to the reference image; the closer NC(t) is to 0, the more the images differ. Biological samples therefore show a decreasing cross-covariance over time, due to biological phenomena at the surface that affect the speckle pattern.

The initial decay rate gives a good estimate of how fast the speckle pattern is changing in time.
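Equation (1.9) amounts to a Pearson correlation between each frame and the reference frame; a minimal numpy sketch under our own naming, for a stack shaped (T, H, W):

```python
import numpy as np

def normalized_cross_covariance(stack):
    """NC(t) of each frame against the first one, as in Eq. (1.9):
    1 means an unchanged speckle pattern, values toward 0 mean the
    pattern has decorrelated (bio-speckle activity)."""
    stack = stack.astype(float)
    ref = stack[0] - stack[0].mean()
    nc = []
    for frame in stack:
        f = frame - frame.mean()
        nc.append((ref * f).sum() / (stack[0].std() * frame.std() * frame.size))
    return np.array(nc)
```

By construction NC(t_0) = 1, and mixing the reference pattern with increasing amounts of independent noise drives NC toward 0.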

1.4.2 Results

We took a scene with a static background (sheet of paper) and a biological object (apple). The image captured by the camera is displayed in Figure 1.8a.

We compute the NC in several small windows within the image: one located on the biological sample and another on the background. The resulting cross-covariance curves over time are shown in Figure 1.8b.


Figure 1.8: Panel (a) shows an apple (right) with bio-speckle activity and a paper background (left). Panel (b) shows the corresponding normalized cross-covariance curves, in red for the paper and in green for the apple.

Due to biological activity, the NC decreases exponentially for the biological sample, whereas the NC for the paper remains almost constant and close to one, showing that the speckle pattern is not altered in this region. It is therefore possible to discriminate the object from the background.

We performed a second experiment with an apple that is less contrasted in intensity with respect to the background (Figure 1.9a). To estimate the gain in contrast we use the Michelson contrast, which compares two patterns presenting equivalent features. It is defined as

c = |I_1 − I_2| / (I_1 + I_2).   (1.10)

Measuring the mean intensity over a small window gives I = 185 for the apple and I = 175 for the background, i.e. a contrast of about 3%. The box, however, is well contrasted in intensity with respect to the background. Figure 1.9b shows the map of the NC curves' time constants. It clearly highlights the location of the biological object, while static objects all appear the same (no speckle movement on their surface). The contrast between the apple and the background is now 99%.


Figure 1.9: (a) Image of an apple on a foam background and (b) NC time-constant map (in seconds) computed for 40x40-pixel windows.
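One possible way to build such a time-constant map (a sketch only: the 40x40 window matches the window size used in our experiments, but the log-linear exp(−t/τ) fit and the treatment of static windows are our own choices, not the thesis code):

```python
import numpy as np

def time_constant_map(stack, win=40, fps=1.0):
    """For every win x win window, compute NC(t) against the first frame and
    fit an exp(-t/tau) decay (least squares on log NC, forced through the
    origin). Small tau = fast decorrelation = high bio-speckle activity;
    perfectly static windows get tau = inf."""
    T, H, W = stack.shape
    t = np.arange(T) / fps
    tau = np.zeros((H // win, W // win))
    for i in range(H // win):
        for j in range(W // win):
            roi = stack[:, i*win:(i+1)*win, j*win:(j+1)*win].astype(float)
            ref = roi[0] - roi[0].mean()
            nc = np.array([(ref * (f - f.mean())).sum()
                           / (roi[0].std() * f.std() * ref.size) for f in roi])
            logs = np.log(np.clip(nc, 1e-6, None))    # log NC(t) = -t / tau
            slope = (t * logs).sum() / (t * t).sum()
            tau[i, j] = -1.0 / slope if slope < 0 else np.inf
    return tau
```

Windows covering an active sample decorrelate quickly and get a small τ, while windows on a static background keep NC ≈ 1 and a very large τ, which is exactly the contrast displayed in the map.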

1.5 Experiments with a 3D scene

We built a 3D scene that contains the three types of contrast we previously investigated. The goal is to see whether we can highlight these three contrasts using only a short stack of images. Figure 1.10a shows an apple held in a foam support; beside it is a box on which we stuck a piece of tape. We should be able to discriminate every single object, as in Figure 1.10b, thanks to the various contrasts that exist between the objects.


Figure 1.10: (a) 3D scene not highly contrasted in intensity and (b) schematic segmentation of the scene (a).

1.5.1 Analysis

The speckle size map shown in Figure 1.11a highlights three distinct regions. The white one represents the background, where speckle grains are the biggest. The light gray area represents the box, and the last, black region the foam with the apple. This last region is entirely black, which should denote very small speckle grains. As a matter of fact, the speckle grains there are not well contrasted and a proper measurement of their size is impossible; we need another method to see objects in this region.

The degree of polarization map in Figure 1.12a should highlight the piece of tape stuck on the left side of the box. In the left part of the image corresponding to the foam, the degree of polarization is equal to 0.

The apple is the only living object in our 3D scene. The bio-speckle activity map in Figure 1.13a therefore highlights it as we observe micro-movements only in this region.


Figure 1.11: Panel (a) is the speckle size map for the 3D scene, with the area of the speckle grains in pixels. Panel (b) represents the supposedly highlighted objects.


Figure 1.12: Panel (a) is the degree of polarization map of the 3D scene. Panel (b) represents the supposedly highlighted objects.


Figure 1.13: Panel (a) is the bio-speckle activity map of the 3D scene. Panel (b) represents the supposedly highlighted objects.


Applications: Bio-speckle imaging

2.1 Interest of monitoring blood flow with laser speckle contrast imaging

Monitoring blood flow is of high interest as it can serve as a crucial diagnostic indicator of tissue viability, injury or disease. Many methods exist to dynamically track and monitor changes in blood flow; laser Doppler flowmetry and scanning-based flow imaging are commonly used, but they are single-point detection methods.

In comparison, laser speckle contrast imaging has high spatial and temporal resolution, which makes this tool very attractive. The setup is moreover rather simple and inexpensive, as one only needs a camera and a laser source. This non-contact technique has the advantage of being easily transportable, unlike medical devices such as MRI or CT scanners. The region of study, however, has to be at the surface, as the technique cannot image deep into the tissue.

We acquire images at a relatively high frame rate (up to 400 fps).

2.2 Experimental setup

2.2.1 Setup for small animal imaging

We built a setup similar to the one used for our experiments on the 3D scene. A monochromatic laser (λ = 632 nm) is used with a diffuser in front of it. It was decided to buy a new camera and new optics for these experiments (see description and details in Appendix C). The interest of this new equipment is to work at very high frame rates: we are therefore able to observe fast micro-movements or displacements, for example during blood reperfusion after an ischemic stroke. Experiments were performed at the CERMEP (Centre d'Exploration et de Recherche Médicale par Emission de Positons) with the help of Elisa Cuccione, an Italian PhD student in biology. Figure 2.1 shows the setup and Figure 2.2 gives a schematic representation.


Figure 2.1: Experimental setup for image acquisition during a rat's cerebral ischemia.

Figure 2.2: Setup for Biospeckle imaging.

2.2.2 Surgical protocol

We performed two experiments on small animals, the first using a mouse (male C57/Bl6, about 30 g) and the second using a rat (male Wistar-Han, about 400 g).

Animals were housed in Plexiglas cages in a colony room maintained on a 12/12 h light/dark cycle (07:00 – 19:00). Both experiments were performed by a biologist who was authorized to manipulate and experiment on small animals.

The goal of the first experiment is to retrieve the animal's heartbeat from a short stack of images. The mouse was anesthetized with urethane throughout surgery and imaging. The animal's head was shaved and positioned in the stereotactic frame; the scalp was then exposed but no craniotomy was performed, so imaging was done through the intact skull. We do not need a direct view of the vessels, as the skull is very thin: the red laser has a penetration depth of a few micrometers, which is enough to get a signal from the vessels under the bone.

The rat was instead anesthetized with isoflurane. A large cranial window was opened on the right hemisphere, with frequent flushing with saline solution. We used a dental drill in order to remove the skull without damaging the brain tissue and the dura mater beneath. Finally, a drop of oil was placed on the cranial window to improve visibility. With a direct view of the brain vessels this time, we can perform laser speckle contrast imaging.

To alleviate pain, both animals received subcutaneous ibuprofen before starting the procedure. During surgery, body temperature was monitored continuously with a rectal probe and maintained at 37.0 °C by a feedback-regulated heating pad.

2.3 Results

2.3.1 Image acquisition

We acquired images with the HiSpec camera using the interface shown in Appendix D. I was tasked with writing user guides on how to use the camera interface and record stacks of images. I also wrote several notes on how to run the image processing codes and routines, which can be found in Appendix E.

Figures 2.3a and 2.3b show, respectively, the head of a mouse and the head of a rat acquired by the camera under laser light exposure.


Figure 2.3: Panel (a) shows the head of a mouse and (b) the open skull of a rat.

First, we need to pre-process the images. The stacks we recorded represent a huge amount of data (up to 4 GB) and only part of the image is of interest. The routine consists of several steps:

• As we are only using a monochromatic red laser as light source, most of the information will be contained within the red channel. Therefore we can split the channels and only keep this one.

• Because light exposure is low, due to a very small shutter time and a weak light power, we can enhance the contrast and brightness of our image.

• Finally we can crop the image to keep only the region of interest.

For an initial stack of 3.5 GB we obtain a stack of only 135 MB after pre-processing. This step is very important, as it considerably decreases the run time of the laser speckle contrast imaging algorithm. Figure 2.4 shows an image of a rat's head after pre-processing.

Figure 2.4: Rat’s head after pre-processing in ImageJ.

2.3.2 Image processing

Retrieving the heartbeat of the mouse.

We then plot the z-axis profile of a small region on top of the skull. The evolution of the grey-level intensity over time should give us an estimate of the heart rate. The heartbeat of a mouse is around 10 Hz, and slightly below that value under anesthesia, so we took a set of 150 frames recorded at 300 frames per second; we should see about 5 heartbeats during this time span. The z-axis profile over 0.5 s is shown in Figure 2.5.


Figure 2.5: Image of the mouse’s head (right) and the z-axis profil (left) of the region of interest (yellow circle) over 0.5s.

A Fourier analysis of this signal shows three distinct peaks: two around 100 Hz, which correspond to the neon lights, and one around 6 Hz, which is likely the heartbeat (see Figure 2.6). The neon lights impair our measurement here; in future experiments this should be avoided by placing an opaque box around the setup to perform proper acquisitions.

Figure 2.6: Fourier analysis of the z-axis profile. The frequency at 6 Hz corresponds to the heartbeat and the frequencies around 100 Hz to the neon lights.
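The frequency analysis above can be reproduced with a plain FFT; a sketch (restricting the search to a physiological band of 1-20 Hz is our choice, introduced so that the DC term and the ~100 Hz neon lines are ignored):

```python
import numpy as np

def dominant_frequency(profile, fps, fmin=1.0, fmax=20.0):
    """Strongest spectral peak of a z-axis profile inside [fmin, fmax] Hz."""
    sig = np.asarray(profile, float)
    spectrum = np.abs(np.fft.rfft(sig - sig.mean()))  # remove DC before the FFT
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(spectrum[band])]
```

Note that with only 150 frames at 300 fps the frequency resolution is 2 Hz, which is enough to separate a ~6 Hz heartbeat from the mains-related lines.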

Performing Laser Speckle Contrast Imaging on a rat's head.

We computed the speckle contrast in an image of a rat's head in order to retrieve a map of the blood vessels. The result is shown in Figure 2.7.

Figure 2.7: Mean intensity map (left panel) and the corresponding laser speckle contrast map (right panel).
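Spatial laser speckle contrast is the local ratio K = σ/μ of the intensity; a numpy sketch with a sliding window (the 7x7 window size and the summed-area-table implementation are our choices, not necessarily those of the thesis code):

```python
import numpy as np

def speckle_contrast_map(img, win=7):
    """Spatial laser speckle contrast K = sigma / mean over a sliding
    win x win window. Where speckle moves during the exposure (flowing
    blood) the pattern blurs, so vessels show a lower K than static tissue."""
    img = img.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')

    def box_mean(a):
        # Local win x win mean via a summed-area table (integral image).
        c = np.cumsum(np.cumsum(a, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))
        return (c[win:, win:] - c[:-win, win:]
                - c[win:, :-win] + c[:-win, :-win]) / win ** 2

    mu = box_mean(padded)
    var = np.clip(box_mean(padded ** 2) - mu ** 2, 0.0, None)
    return np.sqrt(var) / np.clip(mu, 1e-12, None)
```

The integral-image trick keeps the cost independent of the window size, which matters when processing stacks of thousands of frames.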

The vessels appear, although they are not well contrasted. This is because the whole image shares the same dynamic component: the animal's breathing makes the entire speckle pattern move and therefore prevents us from observing grey-level variations due to the blood flow alone. The setup should be improved by holding the rat's head in place during acquisition. Moreover, results are impaired by the neon lights and noise; as for the mouse's heartbeat, an opaque box will be needed for future acquisitions.

This work will be continued by Elisa Cuccione, an Italian PhD student working on the study of collateral blood flow during ischemia.

We saw that speckle imaging has a wide range of applications, among which are the monitoring of bio-speckle activity, the determination of a degree of polarization and depth investigation in a 3D scene. Our contribution has been the creation of an innovative optical setup that uses speckle imaging as a multi-modal technique.

Thanks to it we managed to create various segmentation maps from a short stack of images that initially appeared poorly contrasted in intensity.

This setup and the denoising method have been tested on bio-medical experimental data. We carried out a set of experiments at the CERMEP during which images of mice and rats were acquired and processed. Results are promising; however, the animal's breathing and the influence of neon lights remain major issues when running contrast imaging algorithms. Further work is needed to compensate for this macro-movement and to suppress the unwanted illumination.


Stereovision

By adding a second camera to our setup we increase the amount of data acquired and improve the reconstructions. It is also an opportunity to try out a new type of 3D reconstruction, stereo vision, which could improve the current segmentation. The principle is the following: from two images of the sample acquired at two distinct positions (spatial shift), the disparity can be evaluated at every point of the scene. The disparity is the difference in position of an object between the left and right camera images; this is how the human brain extracts depth information from the two-dimensional retinal images in stereopsis. The disparity is inversely related to the depth at which objects are located: close objects have a high disparity and distant objects almost none (same position in both images).

A.1 Theory

A binocular stereo vision system is composed of two cameras that observe the same scene, giving two images from two different angles of view. For any 3D point, its projection in one image must lie in the plane formed by its projection in the other image and the optical centers of the two cameras. This is known as the epipolar constraint, illustrated in Figure A.1.

Figure A.1: Scheme illustrating the epipolar geometry used during stereo-vision.


Assuming that a 3D point M is observed by two cameras with optical centers O_1 and O_2, we get the projections m_1 and m_2 of this point in the two image planes (cf. Figure A.1). We define the epipolar plane as the plane containing M, O_1 and O_2; the points m_1 and m_2 also belong to this plane.

Consider the case where m_2, O_1 and O_2 are given and we want to find the corresponding point of m_2 in the first image, i.e. m_1. The epipolar plane is determined by m_2, O_1 and O_2 (without knowing the position of M). Since m_1 must belong both to this epipolar plane and to the image plane of the first camera, it must lie on the line l_1 at the intersection of these two planes. We call l_1 the epipolar line associated with m_2. By symmetry, m_2 must lie on the epipolar line l_2 associated with m_1. This epipolar constraint is used to look for the corresponding point in one image given a point in the other: the search can be restricted to the epipolar line instead of the whole image.

The first step is therefore to rectify the stereo images so that the epipolar lines fall along the horizontal scan lines of the images, which makes the processing much easier. In this case, if we have a point m1 = (u1, v1) in one image, the corresponding point m2 = (u2, v2) in the other image is at the same height as m1, i.e. v1 = v2. In this context, the disparity is defined as:

d = u2 − u1    (A.1)

We can obtain the depth of a 3D point from its disparity, since the depth is inversely proportional to the corresponding disparity:

d = B f / z    (A.2)

with B the baseline (the distance between the cameras), f the focal length of the cameras and z the depth at which the object is located.
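Equation A.2 can be checked numerically. The sketch below is in Python/NumPy rather than the Matlab used elsewhere in this appendix; the baseline and focal length values are illustrative assumptions, not the thesis setup.

```python
import numpy as np

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Invert Eq. A.2 (d = B*f/z): depth z = B * f / d.

    Disparity d and focal length f are in pixels, the baseline B in metres.
    """
    d = np.asarray(disparity_px, dtype=float)
    z = np.full_like(d, np.inf)          # zero disparity -> object at infinity
    np.divide(baseline_m * focal_px, d, out=z, where=d > 0)
    return z

# Illustrative numbers (B = 6 cm, f = 700 px are assumptions):
print(depth_from_disparity([70, 35, 7], 0.06, 700))  # closer objects have larger disparity
```

Note how halving the disparity doubles the estimated depth, which is the inverse relation stated above.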

A.2 Matlab interface

Matlab is used to compute a disparity map and perform a 3D reconstruction of a scene. Part of the code was developed by Wim Abbeloos and was then adapted to fit our needs for reconstruction. The reconstruction is controlled via a GUI, as seen in Figure A.2.

Figure A.2: Matlab GUI interface for stereo-vision developed by Wim Abbeloos. It displays a disparity map and a 3D map computed from two 2D stereo images.

A.3 Details and example with template images

Images and code adaptation were done using data from Scharstein et al. On this interface the user selects the right and left images to be used for stereo-vision. The button "View images" displays the two images, the reference image and the target image, in a separate window.

For RGB images, a single channel is selected via the popup menu. The user must then select the filtering window size. This choice is important: a small window may not capture enough intensity variation and have a low signal-to-noise ratio, as seen in Figure A.3; on the other hand, a large window may cover regions with multiple disparities. The maximum disparity is also set by the user; it corresponds to the width of the disparity map's histogram.
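The window-based search described above can be sketched as a brute-force sum-of-absolute-differences (SAD) block matcher. This is a generic Python/NumPy illustration of the technique, not the code of the GUI used in the thesis; all names are mine.

```python
import numpy as np

def disparity_sad(left, right, window=7, max_disp=16):
    """Brute-force SAD block matching on rectified grayscale images.

    For each pixel, a window is slid along the same scan line in the right
    image; the horizontal shift with the smallest sum of absolute
    differences is kept as the disparity.
    """
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            costs = [np.abs(ref - right[y - half:y + half + 1,
                                        x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

Shifting a random image by a known number of pixels and matching it against the original recovers that shift everywhere in the interior, which also illustrates the trade-off discussed above: the window must be large enough to make the minimum of the cost unambiguous.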

(a) Window size = 2 (b) Window size = 6 (c) Window size = 21 (d) Window size = 43

Figure A.3: Influence of the filtering window size on the disparity map computation. From (a) to (d) the window size is increased, showing a reduction of noise at the cost of spatial resolution.


The push button "Match" displays the disparity map as a grayscale or color image, depending on the user's selection. If the box "Reconstruction" is ticked, a new window pops up with a 3D reconstruction of the scene.

A.4 Results with images acquired by the setup

Images of a 3D scene were acquired in Labview and then processed in Matlab. The first step in order to use the GUI described above is to retrieve "perfect" stereo images. As we compare intensity values during the search for disparities, it is important that our images have the same grey values. Histogram matching has therefore been performed for each image, as shown in Figure A.4. After this operation, both images have the same grey values in the same proportions.
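Histogram matching of this kind can be sketched with NumPy by aligning the cumulative grey-level distributions of the two images. This is a generic sketch assuming 8-bit grayscale inputs, not the thesis code.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap the grey levels of `source` so its histogram matches `reference`.

    Both images are assumed uint8. The look-up table is built by aligning
    the two normalized cumulative histograms (CDFs).
    """
    src_hist = np.bincount(source.ravel(), minlength=256)
    ref_hist = np.bincount(reference.ravel(), minlength=256)
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # For every source grey level, find the reference level with the closest CDF.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[source]
```

After remapping, the grey values of the source image are drawn into the range occupied by the reference image, which is exactly the precondition needed before intensity-based disparity search.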

Figure A.4: Histogram and image display of both stereo-images before matching (upper histograms) and after (lower histograms).

The next step is to register the images so that the epipolar lines fall along the horizontal scan lines of the images, as explained in A.1. To do this we use a small black square on the background: this object is supposed to remain static from one image to the other, since its disparity is 0. We use this property to register the images. The result of this registration is shown in Figure A.5.
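The shift needed for such a registration can also be estimated automatically by cross-correlating the two images (or just the patches around the static marker). This FFT-based sketch is a generic alternative to the manual procedure described above, not the thesis code.

```python
import numpy as np

def estimate_shift(img_a, img_b):
    """Estimate the integer (dy, dx) translation between two images by
    locating the peak of their FFT-based circular cross-correlation."""
    f = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
    corr = np.fft.ifft2(f).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices beyond half the size to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Once the shift is known, one image is translated by (−dy, −dx) so that the static marker (disparity 0) coincides in both views.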


Figure A.5: Image registration for the right and left images; a small spatial shift has been applied to the images so that the epipolar lines fall along the horizontal scan lines (cf. A.1).

Finally, the disparity map has been computed using those two images. The result, shown in Figure A.6, reveals several points of interest:

• The center of the apple appears closer (higher disparity shown in orange) than its edges (smaller disparity displayed in green/light blue).

• We observe artifacts in the background (on top of the apple) which should not be present. They are due to differences in the grey levels of the shadow between the two acquired images.

• A 'hole' with no disparity is seen on the apple (dark blue). This is probably due to the filtering window being too small to detect the proper disparity.

Another experiment was performed using a round box, while getting rid of the artifacts in the background. For that, the images were acquired with a high exposure, yielding a uniformly white background while preserving the details of the object under study. The acquired images are shown in Figure A.7.


Figure A.6: Disparity map of the apple image showing regions of high pixel disparity in hot colors (red, orange) and low pixel disparity in cold colors (blue, green).

Figure A.7: Stereo-images of a box under high-exposure.

The disparity map obtained for this scene is shown in Figure A.8 for two values of the filtering window size.

(a) Window size = 15, max. disparity = 50 (b) Window size = 20, max. disparity = 45

Figure A.8: Disparity maps of the box stereo-images computed with a window size of 15 in (a) and a window size of 20 in (b).

As we can see in the previous pictures, the disparity map is not as homogeneous as for the template images. One reason might be that the disparities are much larger in our images than in the template images. Another reason is that the cameras are not perfectly identical: histogram matching was performed to obtain the same grey-level distribution in both images, but the remaining slight differences can still cause errors in the disparity maps.

A.5 Comparison with the Minoru 3D Webcam and speckle illumination

Let us compare with the reconstruction obtained using a 3D camera, the Minoru 3D webcam, shown in Figure A.9. Acquisition is done simultaneously for the left and right images, and image registration is done automatically. We then used our algorithm to reconstruct the scene shown in Figure A.10a; the disparity map is displayed in Figure A.10b. We observe that the objects are well separated, although some small artifacts remain, mainly caused by shadows.

Figure A.9: Image of the Minoru 3D USB Webcam used to perform 3D imaging.

(a) (b)

Figure A.10: 3D scene to reconstruct, shown in panel (a). Panel (b) shows the disparity map with close objects in orange and distant objects in blue.

Under speckle illumination the computation of a proper disparity map becomes much harder. It relies on finding similarities to estimate the displacement from one image to the other. With a speckle pattern this similarity is very hard to find when the disparity is larger than the average grain size, which is typically a few pixels. As shown in Figure A.11b, the disparity map has many artifacts due to bad estimates of the displacement.

(a) 3D object with speckle (b) Disparity map, window size = 21

Figure A.11: Computation of the disparity map using the Matlab GUI. (a) An image of an apple is acquired and registered to give a disparity map (b), which shows many artifacts due to bad estimates of the disparities.

This 3D reconstruction method is therefore not well suited to random speckle illumination. More work should be done on the influence of the speckle grain size on the computation of disparity maps.


Intensity registration

When observing a speckle pattern projected on a uniform plane, we see that the intensity is not identical in every region of the image (regions 1 and 2), as shown in Figure B.1. According to the literature, the beam coming out of our laser source has a Bessel-like shape, resulting in darker edges and a brighter center.

Figure B.1: Speckle pattern of a uniform plane.

As the edges are darker than the center, we may wonder how this affects our measurements, in particular the speckle grain size. We performed 4 experiments in which the horizontal speckle grain size in pixels was computed from the previous image. The results are shown in Table B.1: the speckle grain size in the low-illuminated regions is significantly smaller than in the bright center, despite the fact that the observed scene is flat (i.e. the depth is the same at every point).
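The horizontal grain size is commonly taken from the width of the horizontal autocorrelation of the intensity. The sketch below uses the full width at half maximum of the row-averaged autocovariance as the criterion; this criterion, and the code itself, are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def horizontal_grain_size(img):
    """Estimate horizontal speckle grain size, in pixels, as the full width
    at half maximum of the row-averaged horizontal autocovariance."""
    rows = img - img.mean(axis=1, keepdims=True)
    n = img.shape[1]
    # Row-wise autocovariance via the Wiener-Khinchin theorem (zero-padded FFT).
    spec = np.abs(np.fft.fft(rows, n=2 * n, axis=1)) ** 2
    acov = np.fft.ifft(spec, axis=1).real[:, :n].mean(axis=0)
    acov = acov / acov[0]
    half = int(np.argmax(acov < 0.5))   # first lag below half maximum
    return 2 * half                     # FWHM of the symmetric correlation peak
```

Applied to the two regions of Figure B.1, such an estimator returns the per-region grain sizes reported in Table B.1.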

Table B.1: Horizontal speckle grain size in pixels, non-corrected illumination

               Region 1   Region 2
Experiment 1     14.8       15.8
Experiment 2     14.5       15.6
Experiment 3     14.7       15.9
Experiment 4     14.8       15.7

There is a need to compensate for this non-uniform intensity. The solution is to standardize the image using small windows: each small window is divided by the maximum intensity value within that window. We must however keep in mind that doing this enhances both the signal and the noise in those regions. The aim is not to obtain a "better" signal but a brighter one, so that our algorithm works with the same parameters in both regions. In this respect, the center will have a high SNR while the edges will keep a low SNR. Nevertheless, it helps the speckle grain size computation, as can be seen in Table B.2.
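The window-wise standardization described above can be sketched as follows; the window size is an assumed parameter (the thesis does not specify one), and the code is a generic illustration.

```python
import numpy as np

def standardize_by_window_max(img, window=32):
    """Divide each `window` x `window` tile of the image by its own maximum,
    flattening the slowly varying illumination envelope of the beam."""
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for y in range(0, h, window):
        for x in range(0, w, window):
            tile = img[y:y + window, x:x + window].astype(float)
            m = tile.max()
            out[y:y + window, x:x + window] = tile / m if m > 0 else 0.0
    return out
```

After this operation every tile peaks at 1, so dark edge regions are as bright as the center; as noted above, the local noise is amplified by the same factor as the signal.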

Table B.2: Horizontal speckle grain size in pixels, corrected illumination

               Region 1   Region 2
Experiment 1     15.4       15.5
Experiment 2     15.2       15.3
Experiment 3     15.6       15.1
Experiment 4     15.7       15.9

Several other sources of noise exist, such as the analog-to-digital conversion. The influence of intensity registration could be studied further in other experiments, for instance when computing the degree of polarization or the bio-speckle activity of a sample.


Cameras and Optical zoom

C.1 Cameras

We have been working with 2 different cameras. The DMK 42BUC03 from The Imaging Source, displayed in Figure C.1, is a low-cost camera with a resolution of 640 x 480 pixels that can work at a frame rate of 48 frames per second. It can acquire images in real time and is rather handy to use. The frame rate and resolution are very important, as we need to image small vessels down to 30 µm and to compensate for small movements not caused by the blood flow (respiration of the animal, air convection...), which requires a high frame rate. That is the reason why we decided to buy a new camera with a much higher frame rate. After a thorough investigation of what is available nowadays, I selected the HiSpec Lite, shown in Figure C.2. This camera, manufactured by Fastec Imaging, can record up to 1000 fps at a resolution of 704 x 528 pixels. Although it does not do real-time imaging, this camera has an internal memory of 4 GB which allows us to record up to 2 minutes at full frame rate.

Figure C.1: DMK camera
Figure C.2: HiSpec Lite


C.2 Optical zoom

On our camera we have mounted a macro zoom, shown in Figure C.3, manufactured by Edmund Optics.

Figure C.3: Edmund Optics Macro Zoom

The objective is to observe a bio-speckle signal coming from small vessels that can be as narrow as 30 µm. Considering that the smallest working distance for such a zoom is 15 cm at best and that the camera resolution is 704 x 528 pixels, this gives a spatial resolution of about 12 µm with a field of view of 8 mm. It corresponds perfectly to what we wanted, as the cranial window will not be larger than 7 mm. Moreover, even taking into account the loss of spatial resolution during the reconstruction process (resolution is divided by 3 due to the auto-correlation function), we still meet the specifications.
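The resolution budget above can be checked in a few lines; the numbers are those quoted in the text.

```python
# Quick check of the resolution budget quoted above.
field_of_view_um = 8000        # 8 mm field of view
pixels_across = 704            # sensor width in pixels
pixel_size_um = field_of_view_um / pixels_across
print(round(pixel_size_um, 1))                # prints 11.4 (µm per pixel, quoted as ~12 µm)

cranial_window_um = 7000
print(field_of_view_um >= cranial_window_um)  # prints True: the field of view covers the window
```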


Camera Interface

Figure D.1: HiSpec Lite camera interface.


User Guides

Below are the two user guides for acquiring images with the HiSpec Lite camera and processing them using ImageJ. I wrote both of them and they have been tested successfully by several doctors and biologists.


1. Connect the camera to the computer via the Ethernet cable. Make sure it is correctly plugged in by checking the orange light near the plug on the camera.
2. Open the software HiSpec Control Software located on the desktop.
3. In the top left-hand corner is a small window showing the available cameras. Select the HiSpec Lite color.
4. Once the connection is established you should see two more windows on the left side and the image display on the right.
5. In the Camera settings window, you can tune the frame rate and the shutter speed.
6. Nothing else needs to be changed; however, in the Advanced settings window it may be useful to tune the gain of the camera for low-light exposures.
7. To perform an acquisition:
   a. Click on the Record button.
   b. You can watch the recorded video afterwards by clicking Play.
   c. Save the video in the desired folder on the computer. The time needed for the data transfer depends on the size of your video.
8. You can use the toolbar to zoom in on the image, display the histogram and tune the white balance.
9. At the end, disconnect the camera by clicking on Disconnect before unplugging it.


1. Import a stack of images in Fiji: File -> Import -> Image Sequence…
   You may select only a few images if you don't need to import the whole stack.
2. Start by splitting the channels: Image -> Color -> Split Channels.
   Keep only the channel of interest, that is to say the one with the most signal (often the red one).
3. Adjust the brightness and contrast of the image if necessary: Image -> Adjust -> Brightness/Contrast…
4. Select a region of interest on the skull. The region should be big enough to cover several speckle grains.
5. Plot the Z-axis profile: Image -> Stacks -> Plot Z-axis Profile.
   On this profile you should be able to see both the frequency of the heart beat and the frequency of the breathing. In the example below we acquired 600 images at 300 fps; the heart beat of a mouse is around 10 Hz and its breathing around 2 Hz, so we should see around 20 heart beats and 3-4 breaths.
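The heart-beat and breathing frequencies mentioned in step 5 can also be read off a Fourier transform of the Z-axis profile. The sketch below uses a synthetic profile standing in for the exported data; the 2 Hz and 10 Hz components are the values quoted in the guide, and the function is a generic illustration, not part of the guide's workflow.

```python
import numpy as np

def dominant_frequencies(profile, fps, n_peaks=2):
    """Return the strongest frequency components (Hz) of a mean-intensity
    time profile, e.g. the Z-axis profile exported from Fiji."""
    x = np.asarray(profile, dtype=float)
    x = x - x.mean()                       # drop the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    strongest = np.argsort(spectrum)[::-1][:n_peaks]
    return sorted(float(freqs[i]) for i in strongest)

# Synthetic stand-in for the real profile: 600 frames at 300 fps with
# breathing at 2 Hz and heart beat at 10 Hz.
t = np.arange(600) / 300.0
profile = 1 + 0.5 * np.sin(2 * np.pi * 2 * t) + 0.2 * np.sin(2 * np.pi * 10 * t)
print(dominant_frequencies(profile, fps=300))   # -> [2.0, 10.0]
```

With 600 frames at 300 fps (2 s of recording) the frequency resolution is 0.5 Hz, ample to separate breathing from the heart beat.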

