
Development of Registration and Fusion Methods for the Jonasson Medical Imaging Center miniPET-microCT

Utveckling av bildregistrerings- och fusionsmetoder för ett miniPET-mikroCT vid Jonassons center för medicinsk avbildning

DIMITRIOS GKOTSOULIAS

KTH ROYAL INSTITUTE OF TECHNOLOGY

SCHOOL OF ENGINEERING SCIENCES IN CHEMISTRY, BIOTECHNOLOGY AND HEALTH


DIMITRIOS GKOTSOULIAS

Master in Medical Engineering
Date: June 28, 2018

Supervisor: Massimiliano Colarieti-Tosti
Co-supervisor: David Marlevi
Examiner: Philip Koeck

School of Engineering Sciences in Chemistry, Biotechnology and Health


First of all, I would like to thank Massimiliano Colarieti-Tosti, my supervisor in this thesis, for his important suggestions, help and support.

For his most valuable help with the design of the phantoms and for his suggestions, I wish to warmly thank Peter Arfert, technician at the KTH Medical Engineering Department.

For providing us with 18F-FDG every time we needed it, making the measurements for this thesis possible, I would like to express my gratitude to Mattias Karlsson, physicist at the Nuclear Medicine Department of Karolinska University Hospital.

For their valuable suggestions and guidance, I wish to thank my group supervisor, Rodrigo Moreno, and my reviewer for this thesis, Philip Koeck.

Any accomplishment would never have been possible without your unconditional love, your support in every way, your belief in me, and the best example in life anyone could ever have.

Thank you for everything, Mum and Dad.


Abstract

Multimodal image registration is essential when combining functional and structural imaging modalities. Among the most common combinations, numerous methods have been developed for co-registration of CT and PET, typically validated on human-size scanners. However, only a few registration studies have been performed for the combination of small animal miniPET and microCT imaging. The Jonasson Center for Medical Imaging at KTH possesses an integrated miniPET/microCT for pre-clinical research purposes. The motivation for this work is the need for fusion method(s) for combining the data of these two modalities. In this work, a novel pipeline registration method, employing image processing and Mutual Information (MI), is proposed and implemented. The method starts with a pre-alignment step before acquisition of the miniPET/microCT volumes, followed by scaling, binarization and processing of the two volumes and, finally, a registration procedure using Maximization of Mutual Information (MIM) as a voxel-based similarity metric. An established method based on intrinsic landmarks is also implemented for comparison. For the validation of the methods, volumes acquired from in-house designed, 3D printed polyethylene (PE) phantoms, filled with multiple concentrations of 18F-FDG, were used. The misalignment between corresponding points of the volumes after registration is analyzed and compared in terms of absolute spatial distance. The proposed method based on 3D processed volumes outperformed the landmarks based registration method, showing average misalignments of 0.5 mm. The registered volumes were also successfully visualized together using alpha blending. An automatic fusion method for miniPET/microCT has thus been implemented, presented and evaluated, raising prospects for multimodal imaging research at the Jonasson Center for Medical Imaging.


Sammanfattning

…validated co-registration methods are, however, not as common, and above all within research-based imaging (where custom-designed scanners occur) the need for effective methods is clear. At the Jonasson Center for Medical Imaging at KTH, a multimodal miniPET/microCT system has been developed, and it is precisely the need for co-registration that has formed the basis of this degree project. In this work, a co-registration method based on predefined image processing and analysis of mutual information (MI) has been implemented and tested. The method builds on an initial alignment step before microCT/miniPET image acquisition. After this, a series of image processing steps (scaling, binarization) is carried out before a final co-registration is performed using so-called Maximization of Mutual Information (MIM). An established landmark-based method was also implemented for comparison. The methods were then tested through multimodal imaging of custom-designed 3D-printed phantoms, filled with varying activities of 18F-FDG. The proposed MI method outperformed the established landmark method, with an average co-registration error of 0.5 mm. Visualization of the co-registered volumes was also carried out using so-called alpha blending. Through this, a co-registration method for the miniPET/microCT at the Jonasson Center for Medical Imaging has been implemented and tested, enabling future studies of multimodal imaging at KTH.


Contents

1 Introduction
2 Materials and Methods
2.1 Materials
2.1.1 Jonasson Center miniPET
2.1.2 Jonasson Center microCT
2.1.3 Motorized Linear Stage - Modalities bed control
2.1.4 Phantom Models
2.1.5 Software Development Tools
2.2 Methodology
2.2.1 Registration methods development
2.2.2 Fusion-Visualization
2.2.3 Experiments-Measurements
2.2.4 Error Metrics for quantitative validation
3 Results
3.1 Experiment 1 derived volumes (Phantom 1) using Landmarks based Registration
3.2 Experiment 2 derived volumes (Phantom 2) using Landmarks based Registration
3.3 Experiment 2 derived volumes (Phantom 2) using Image processing/MI metric based registration on central slice
3.4 Experiment 2 derived volumes (Phantom 2) using Image processing/MI metric based registration on 3D volumes
3.5 Quantitative errors analysis
3.5.1 Method: Intrinsic Landmarks based registration
3.5.2 Method: Automatic MI based registration (by slice)


A State of the art
A.1 Positron Emission Tomography (PET)
A.2 MiniPET
A.3 Computed Tomography (CT)
A.4 Multimodal Imaging - PET/CT Fusion
A.5 Steps of the general Image Fusion Procedure
A.6 Registration of Medical Images
A.6.1 Modalities involved
A.6.2 Subject
A.6.3 Dimensions of application
A.6.4 Nature of transformation
A.6.5 Registration Method basis
A.7 Fusion of Medical Images
A.7.1 Pixel-based fusion methods
A.7.2 Morphology based fusion methods
A.7.3 Knowledge based fusion methods
A.7.4 Wavelets based fusion methods
A.7.5 Components Analysis based fusion methods
A.7.6 Neural Networks (NN) based fusion methods
A.7.7 Other Methods
A.8 Phantom models design
A.9 18F-FDG
A.10 Elastix transform file parameters used for MI rigid registration
A.11 Colormaps
A.12 Geometrical planes
A.13 Mutual information (MI)
A.13.1 Information entropy
A.13.2 Shannon's entropy on images
A.13.3 Joint histogram and probability distribution function (pdf) calculation
A.13.4 From entropy and pdfs to Mutual Information metric
A.13.5 Pre-processing and MI metric


1 Introduction

"Structure without function is a corpse... function without structure is a ghost."

The most important and still-emerging step on the way to multimodal imaging is image registration. In all inter-modality registration cases, the difficulty lies in the fact that the volumes acquired by the two modalities differ strongly in voxel size in all dimensions, in contrast, in intensity/noise profiles, and in the information depicted in the final volume, as microCT provides anatomical information while miniPET provides functional information.

Many methods have been implemented to solve the registration problem, and there is a clear need for dedicated methods for each individual case, depending on the modalities involved, the imaging characteristics, the object imaged, system knowledge, etc. (for more, see Appendix, Section A.6).

The integrated miniPET-microCT at the Jonasson Center for Medical Imaging currently lacks multimodal fusion capability. The aim of this thesis is to propose an inter-modality fusion method that results in a satisfactory alignment and merging of the volumes acquired by the integrated miniPET-microCT modalities at the Jonasson Center, KTH.


2 Materials and Methods

The following sections describe the physical phantom models, technical equipment and software used to implement and validate the methods described in Section 2.2.

2.1 Materials

2.1.1 Jonasson Center miniPET

See Appendix, Section A.2.

2.1.2 Jonasson Center microCT

See Appendix, Section A.3.

2.1.3 Motorized Linear Stage - Modalities bed control

For moving the bed, a motorized linear stage by Zaber Technologies Inc. is used: the X-LSQ600A-E01. The stepper motor of the stage can provide a travel range of 750 mm, speeds up to 42 mm/s (minimum 0.000061 mm/s), a micro-step size of 0.09921875 µm and, most importantly for our experiments, a maximum accuracy of 150 µm. For more information on this part, please visit the Zaber site [2].


…that we do have a clear slice-by-slice difference in the data that the miniPET detects (1 mm spatial sensitivity). The design sheets of phantom 1 can be found in Section A.8.

Figure 2.1: Phantom model 1. The chamber for 18F-FDG filling is indicated with the notation A1.

Phantom Model 2

The material used for the second phantom (Figure 2.2) is again PE. It has three distinct chambers: two inner spherical ones of 0.15 ml and 0.65 ml (A2, B2), and an outer one of 8.5 ml (C2), which can be filled with multiple concentrations of 18F-FDG, another radiation source, or even contrast agents. Thus, the volume that gets filled with radiation is essentially the whole phantom volume.

This phantom is designed in a way that resembles cases closer to reality, where: 1. the radiation accumulates at different levels everywhere in the phantom or tissue; 2. the shape is not cylindrical; 3. we can clearly get partial structural information also from the PET volumes. The design sheets of phantom 2 can be found in Section A.8.


Figure 2.2: Phantom model 2. The three distinct chambers for 18F-FDG filling are indicated with the notations A2, B2, C2.

2.1.5 Software Development Tools

For the software development in this thesis, Python 3 was mostly used, in the Spyder development environment, in order to integrate with the rest of the software for the miniPET/microCT modalities. The 3D Slicer software and ParaView were also used for visualization.

For the intrinsic landmarks-based registration in Python, the scikit-image and OpenCV libraries were used. Scikit-image is an open-source collection of image processing algorithms written in Python [3]. OpenCV (Open Source Computer Vision Library) is a commonly used open-source computer vision and machine learning software library [4]. For the automatic methods based on Mutual Information, the aforementioned libraries were used, along with the Python wrapper for the Elastix tool: SimpleElastix.

Elastix Tool - Simple-Elastix Python Wrapper

Elastix is a software package dedicated to medical image registration, developed in the Netherlands by Stefan Klein and Marius Staring.

It is a collection of highly optimized implementations of algorithms commonly used in medical image registration problems, like Mutual Information, point distance, etc. It is based on the Insight Toolkit (ITK) library, written in C++, and it has a modular, command-line and parameter-file driven interface.

In our case we used SimpleElastix, which is a Python wrapper for Elastix that can be imported as a library in any Python module.


2.2 Methodology

The following subsections describe the development of the registration methods, the fusion/visualization approach, the error metrics, and the experimental procedures.

2.2.1 Registration methods development

Pre-alignment

For any registration method to work as expected, pre-alignment is essential, because almost all of the algorithms that will be described fail when the initial data are too misaligned. In almost all modern publications, either manual pre-alignment [9] or a fast algorithmic method (for example, PCA in [10]) is used. In our case, since we work on an integrated modality, where the acquisitions can happen one after the other, under known parameterizations, the system itself can give us a sufficient pre-alignment standard and reduce the degrees of freedom and iterations needed to accomplish a satisfactory registration transformation.

Figure 2.3: Our modalities system with indications for the axes and the FOV centers (Printed with permission from Peter Arfert, KTH).

Experimentally, the exact Field Of View (FOV) center with respect to the bed is found for each modality on the z-axis (see Figure 2.3), with mm precision. This is done by calculating the physical center of the FOV of both modalities and validating by image acquisition of the phantom at these points, to ensure that the object lies at the center. The table below shows the derived centers in terms of bed distance from its initial position, in mm.

Figure 2.4: Table of FOV centers of each modality in regard to the bed center.

The correspondence between the centers of our reconstructed volumes on the z-axis is thus ensured. The center positions on the z-axis constitute inputs to the motor (by Zaber Motors) that moves the bed (controlled from MATLAB/Python or the Zaber Motors console): the sample is placed properly on the bed and automatically moved exactly to the center of the FOV of each modality. This way, the two volumes' characteristics are quite similar even before registration; only scaling, small rotations and a translation are needed for alignment.
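As an illustration of this step, a minimal sketch using Zaber's zaber-motion Python library is given below. The serial port and the FOV-center values are hypothetical placeholders (the actual positions are the ones tabulated in Figure 2.4), and in the thesis setup the stage may equally have been driven from MATLAB or the Zaber console, as noted above.

```python
# Sketch: move the bed to each modality's FOV center (hypothetical values).
# Assumes Zaber's official Python library: pip install zaber-motion
from zaber_motion import Units
from zaber_motion.ascii import Connection

# Hypothetical FOV-center positions (mm from the bed's initial position);
# the real values are those tabulated in Figure 2.4.
FOV_CENTER_MM = {"microCT": 210.0, "miniPET": 545.0}

with Connection.open_serial_port("/dev/ttyUSB0") as connection:  # port is an assumption
    device = connection.detect_devices()[0]   # the X-LSQ600A-E01 stage
    axis = device.get_axis(1)
    axis.home()                               # establish the reference position
    for modality in ("microCT", "miniPET"):
        axis.move_absolute(FOV_CENTER_MM[modality], Units.LENGTH_MILLIMETRES)
        # ... trigger the acquisition for `modality` here ...
```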

By centering the sample this way, we know that by upsampling the PET image on the z-axis using bicubic interpolation (details can be found in [11]), or by downsampling the CT image, by the right factor, we can have our object imaged by the same number of slices in both modalities. We can then continue with the transformation on the x,y axes.
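A minimal sketch of this resampling step, assuming the miniPET volume is held as a numpy array; cubic spline interpolation (order=3) from SciPy stands in for the bicubic interpolation of [11], and the target slice count of 300 is the microCT's:

```python
# Sketch: upsample the miniPET volume on the z-axis so that the object spans
# the same number of slices as in the microCT volume.
import numpy as np
from scipy.ndimage import zoom

pet = np.random.rand(371, 371, 35)   # placeholder miniPET volume (x, y, z)

# order=3 gives cubic spline interpolation; zoom only along z here.
z_factor = 300 / pet.shape[2]        # bring 35 slices up to the CT's 300
pet_upsampled = zoom(pet, (1, 1, z_factor), order=3)
print(pet_upsampled.shape)           # -> (371, 371, 300)
```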

Landmarks-based Registration method

The first method implemented is an intrinsic landmarks-based method (see Appendix A.6.5). The main aim of this implementation is to have a baseline method with which the results of the automatic method presented in this thesis can be compared. Since we did not want to add weight to the bed, we excluded the idea of using extrinsic landmarks (fiducials), which require external structures, as in [12]. The workflow for this method assumes that the transformation is affine:


Figure 2.5: Point based registration proposed workflow.

As indicated, in this method we try to define, in one step, the affine transform between the corresponding point sets. An affine transformation is defined as:

$$T_A(x) = A(x - c) + t + c \tag{2.1}$$

where $T$ is the transform, $x$ is any voxel in the volume, $c$ is the center of rotation and $t$ the translation. The matrix $A$ has no restrictions, so the inputs can be rotated, translated, scaled and sheared. In our case, where we try to establish the affine transform between 2D point sets, the transform has six degrees of freedom and the (augmented) matrix becomes:

$$A = \begin{pmatrix} a_{1,1} & a_{1,2} & t_x \\ a_{2,1} & a_{2,2} & t_y \end{pmatrix}$$

The final matrix $A$ is derived iteratively, by minimizing the sum of the squared distances between the fixed point set and the transformed moving point set:

$$\arg\min_{A} \sum_{i=1}^{n} \left| T_A(x_i) - y_i \right|^2 \tag{2.2}$$

where $x_i$ is a point (voxel) of the moving point set, $y_i$ is the corresponding point of the fixed point set and $n$ is the number of points (3 in our case). The aforementioned procedure is implemented in Python, using the OpenCV library.
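To make Eq. (2.2) concrete, the sketch below estimates the six affine parameters by linear least squares from corresponding point pairs. The landmark coordinates shown are hypothetical, the center of rotation c is folded into the translation, and OpenCV's cv2.estimateAffine2D offers an equivalent, robust alternative:

```python
# Sketch: solve Eq. (2.2) for the 2x3 affine matrix A from point correspondences.
import numpy as np

def fit_affine_2d(moving_pts, fixed_pts):
    """Least-squares affine transform mapping moving_pts -> fixed_pts.
    moving_pts, fixed_pts: (n, 2) arrays of corresponding (x, y) points, n >= 3."""
    n = len(moving_pts)
    # Each correspondence gives two linear equations in the six unknowns
    # a11, a12, tx, a21, a22, ty.
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = moving_pts   # a11*x + a12*y + tx = x'
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = moving_pts   # a21*x + a22*y + ty = y'
    M[1::2, 5] = 1.0
    b = np.asarray(fixed_pts, dtype=float).ravel()
    params, *_ = np.linalg.lstsq(M, b, rcond=None)
    return params.reshape(2, 3)  # the 2x3 matrix A of the text

# Hypothetical landmark pairs picked on the central slices:
moving = np.array([[120.0, 88.0], [240.0, 90.0], [180.0, 200.0]])
fixed  = np.array([[131.5, 95.0], [252.1, 99.0], [190.0, 212.5]])
A = fit_affine_2d(moving, fixed)
# OpenCV equivalent (robust variant): cv2.estimateAffine2D(moving, fixed)
```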

Proposed Automatic method 1: Combining Image processing and Mutual Information on 2D slices

The second method implemented is based on taking advantage of the known information about the integrated modalities, image processing, and the Mutual Information (MI) maximization algorithm.


Please see Appendix, Section A.13 for the mathematics and explanation of MI on images, the implementation details, and the reasoning behind the choice of this metric.

An accurate pre-alignment step is still needed. After that step, the acquisition protocols are followed, and the two volumes are acquired and automatically interpolated on the z-axis, just like in the previous method. On the x,y axes of the miniPET volume, we apply scaling using bicubic interpolation, with factors that we can extract from the modalities' imaging characteristics and that are constant (they may vary slightly after each calibration): magnification 3.4 on the x,y axes, 16 on the z-axis.

The miniPET and CT volumes are then binarized:

$$f_{i,\mathrm{Binarized}}(\hat{x}) = \begin{cases} 1 & \text{if } f_{i,\mathrm{Init}}(\hat{x}) > \varepsilon \\ 0 & \text{otherwise} \end{cases}$$

where $\hat{x}$ is the position vector, $f_{i,\mathrm{Binarized}}(\hat{x})$ is the binarized image, $i = \mathrm{miniPET}, \mathrm{microCT}$, $f_{i,\mathrm{Init}}(\hat{x})$ is the input image and $\varepsilon$ is the threshold value, in the range 0-255 (18 for microCT, 20 for miniPET, respectively).

Knowing that the structures are not always homogeneous and can have holes inside, after binarization flood-fill processing is used so that the holes get "filled" with 1s. Information about the implementation used, from the scikit-image library, can be found in [13].
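A minimal sketch of this processing step, assuming the volumes are numpy arrays; SciPy's binary_fill_holes is used here as a stand-in for the scikit-image flood-fill implementation cited above, and the placeholder volumes are hypothetical:

```python
# Sketch: binarize a volume with a fixed threshold, then fill enclosed cavities.
import numpy as np
from scipy.ndimage import binary_fill_holes

def binarize_and_fill(volume, threshold):
    """volume: array with intensities in 0-255 -> 0/1 array with cavities filled."""
    binary = volume > threshold            # the binarization step defined above
    return binary_fill_holes(binary).astype(np.uint8)

# Placeholder volumes standing in for the acquired data:
ct_volume  = np.random.randint(0, 256, (64, 64, 32), dtype=np.uint8)
pet_volume = np.random.randint(0, 256, (64, 64, 32), dtype=np.uint8)

ct_bw  = binarize_and_fill(ct_volume, threshold=18)   # thresholds from the text
pet_bw = binarize_and_fill(pet_volume, threshold=20)
```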

The ultimate goal of this processing of the volumes is the most efficient and simplest possible input to the MI metric of the registration. Considering the intensities of the images as random variables, the randomness lies in the dispersion of their histograms (the more dispersed the histogram, the more randomness the image contains, and thus the higher the entropy); see Appendix, A.13. Ending up with only two histogram values (black/white) minimizes the randomness of the input variables/images (and of their joint histogram), making the MI considerably easier to maximize (see Appendix, A.13).

The Mutual Information metric is applied to the central slice of the binarized/filled volumes to extract the rigid transformation (rotation/translation). The derived transformation is then applied to all slices of the scaled miniPET volume. The procedure is briefly explained in the schematics below:


Figure 2.6: Automatic registration method proposed workflow using binarized/filled central slice to derive the transformation.

Figure 2.7: Registration iterative procedure basic modules, implemented in SimpleElastix. Corresponds to the box noted as SimpleElastix in Figure 2.6.

Details on the Elastix implementations and parameterizations of Adaptive Stochastic Gradient Descent, B-spline interpolation and the rigid transform can be found in [5]. The parameterization files that gave the optimum results in our methods can be found in Appendix, Section A.10.
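A minimal sketch of this step with the SimpleElastix Python wrapper, assuming ct_bw and pet_bw are the binarized/filled volumes from the earlier sketch and pet_scaled stands for the scaled miniPET volume; the tuned parameter values are those of Appendix A.10, not the defaults used here:

```python
# Sketch: rigid MI registration on the binarized central slices, then re-use of
# the derived transform on every miniPET slice via Transformix.
import SimpleITK as sitk

mid = ct_bw.shape[2] // 2   # central slice index
fixed  = sitk.Cast(sitk.GetImageFromArray(ct_bw[:, :, mid]),  sitk.sitkFloat32)
moving = sitk.Cast(sitk.GetImageFromArray(pet_bw[:, :, mid]), sitk.sitkFloat32)

elastix = sitk.ElastixImageFilter()
elastix.SetFixedImage(fixed)
elastix.SetMovingImage(moving)
elastix.SetParameterMap(sitk.GetDefaultParameterMap("rigid"))  # MI metric, ASGD
elastix.Execute()

# Apply the central-slice transform to every slice of the scaled miniPET volume:
transformix = sitk.TransformixImageFilter()
transformix.SetTransformParameterMap(elastix.GetTransformParameterMap())
registered = []
for k in range(pet_scaled.shape[2]):
    sl = pet_scaled[:, :, k].astype("float32")
    transformix.SetMovingImage(sitk.GetImageFromArray(sl))
    transformix.Execute()
    registered.append(sitk.GetArrayFromImage(transformix.GetResultImage()))
```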

Proposed Automatic method 2: Combining Image processing and Mutual Information on 3D volumes

On the same basis of binarizing and filling the volumes, another method was implemented. The steps of scaling, binarization and filling remain the same. Once we have the binarized/filled scaled volumes, we use MI to derive the transformation between the whole volumes, and not between separate slices. The derived transformation is then applied to the scaled miniPET volume. The accuracy of the pre-alignment can be relaxed here, as the MI also tries to align the z-axis. The procedure is briefly explained in the schematic below:

Figure 2.8: Automatic registration method proposed workflow using full 3D binarized/filled volumes to derive the transformation.

Figure 2.9: Registration iterative procedure basic modules, implemented in SimpleElastix. Corresponds to the box noted as SimpleElastix in Figure 2.8.

Details on the Elastix implementations and parameterizations of Adaptive Stochastic Gradient Descent, B-spline interpolation and the rigid transform (applied with the Elastix tool Transformix) can be found in [5]. The parameterization files (including all the parameters) that gave the optimum results in our methods can be found in Appendix, Section A.10.
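The 3D variant differs only in that whole volumes are passed to Elastix and the derived transform is applied to the scaled miniPET volume with Transformix; a sketch under the same assumptions as above:

```python
# Sketch: rigid MI registration directly on the 3D binarized/filled volumes.
import SimpleITK as sitk

elastix = sitk.ElastixImageFilter()
elastix.SetFixedImage(sitk.Cast(sitk.GetImageFromArray(ct_bw), sitk.sitkFloat32))
elastix.SetMovingImage(sitk.Cast(sitk.GetImageFromArray(pet_bw), sitk.sitkFloat32))
elastix.SetParameterMap(sitk.GetDefaultParameterMap("rigid"))
elastix.Execute()

# Transformix applies the derived 3D transform to the scaled miniPET volume,
# including the z translation that relaxes the pre-alignment requirement.
transformix = sitk.TransformixImageFilter()
transformix.SetTransformParameterMap(elastix.GetTransformParameterMap())
transformix.SetMovingImage(sitk.GetImageFromArray(pet_scaled.astype("float32")))
transformix.Execute()
pet_registered = sitk.GetArrayFromImage(transformix.GetResultImage())
```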


2.2.2 Fusion-Visualization

The details are explained in [14]. In 2D space, the final pixel values are given by:

$$\mathrm{Val}_{i,j} = \alpha_1 I^{CT}_{i,j} + \alpha_2 I^{PET}_{i,j} \tag{2.3}$$

where $i,j$ is the pixel position and $I^{CT}$, $I^{PET}$ are the CT and PET image intensities, respectively. The factors $\alpha_1$, $\alpha_2$ (alpha values) are manually set in our implementation to 0.4 and 0.6, respectively. In 2D (slice) visualization, the blending above is applied to the red channel, while the green and blue channels are given the CT values, so we get the sense of a CT "background". This mix of red for radiation areas and grayscale for the structural information from CT is commonly used.
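A minimal sketch of this blending scheme on a registered slice pair (the input arrays are hypothetical placeholders, normalized to [0, 1]):

```python
# Sketch: alpha blending of a registered slice pair into an RGB composite,
# with the PET signal in the red channel over a grayscale CT background.
import numpy as np

def fuse_slice(ct_slice, pet_slice, a_ct=0.4, a_pet=0.6):
    """ct_slice, pet_slice: 2D arrays normalized to [0, 1]; returns HxWx3 RGB."""
    rgb = np.zeros(ct_slice.shape + (3,))
    rgb[..., 0] = a_ct * ct_slice + a_pet * pet_slice  # red: blended (Eq. 2.3)
    rgb[..., 1] = ct_slice                             # green: CT 'background'
    rgb[..., 2] = ct_slice                             # blue:  CT 'background'
    return np.clip(rgb, 0.0, 1.0)

fused = fuse_slice(np.random.rand(256, 256), np.random.rand(256, 256))
```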

2.2.3 Experiments-Measurements

During this thesis, several experiments were conducted using the miniPET and microCT modalities of the Jonasson Center for Medical Imaging. The ones presented below are those whose results were used to produce the registration-fusion result images presented in this thesis.

Experiment 1

Figure 2.10: Experiment 1 Details

For experiment 1, we obtained liquid 18F-FDG from Karolinska University Hospital: a syringe of 0.7 ml at 34.8 MBq and one of 0.1 ml at 12.75 MBq. The second syringe was used as a point source for the calibration of the miniPET. Of the first syringe, 0.4 ml of 18F-FDG diluted in 1.8 ml of water was used for filling our phantom. At the time the measurements started, the total activity of the source used in the phantom was 4.5 MBq. The miniPET acquisition time was set to 10 min.

The aim of this experiment, as well as of the phantom's design, is to test the Landmarks based registration in a case where intrinsic landmarks are very easy to select.

Experiment 2

Figure 2.11: Experiment 2 Details

For experiment 2, we obtained liquid 18F-FDG from Karolinska University Hospital: one syringe of 0.8 ml at 31.4 MBq. When the decaying activity reached approximately 2 MBq/0.1 ml, we diluted 0.2 ml of 18F-FDG in 8.9 ml of water, 0.1 ml of 18F-FDG in 0.55 ml of water and 0.1 ml of 18F-FDG in 0.1 ml of water, to fill the phantom 2 chambers C2, B2 and A2, respectively, as indicated in Figure 2.2. The miniPET acquisition time was set to 5 min.

Phantom 2 is designed to resemble real cases of tissues and professional phantoms, where the 18F-FDG is distributed in different concentrations everywhere in the tissue/material. Structural information is present in the miniPET image in the form of lower intensities. The aim of this experiment, using this phantom, is to test the performance of the MI based methods described in the previous section.

2.2.4 Error Metrics for quantitative validation

For the quantitative validation of the registration, metrics of distance between corresponding points after transforming the moving image are used. Eight corresponding points are selected on axial, coronal and sagittal microCT and registered miniPET slices. Between them, the distance is measured along each axis, as well as the Euclidean distance on x,y, as defined below.

For the mean difference in the x- and y-axes:

$$MD_x = \frac{1}{n}\sum_{i=1}^{n} |x_{A_i} - x_{B_i}|\, v_x \tag{2.4}$$

$$MD_y = \frac{1}{n}\sum_{i=1}^{n} |y_{A_i} - y_{B_i}|\, v_y \tag{2.5}$$

For the mean difference in the z-axis:

$$MD_z = \frac{1}{n}\sum_{i=1}^{n} |z_{A_i} - z_{B_i}|\, v_z \tag{2.6}$$

For the mean distance error:

$$MDE_{xy} = \frac{1}{n}\sum_{i=1}^{n} \sqrt{\left((x_{A_i} - x_{B_i})\,v_x\right)^2 + \left((y_{A_i} - y_{B_i})\,v_y\right)^2} \tag{2.7}$$

where $A_i$, $B_i$ are the corresponding samples (points), $n$ is the number of point pairs and $v_i$ is the CT voxel dimension in direction $i = x, y, z$. In our case, with 600x600 pixels (x,y) depicting 50x50 mm in physical space, we have $v_x = v_y \approx 0.08$ mm/pixel. On the z-axis, 50 mm are depicted in 300 pixels, thus $v_z \approx 0.16$ mm/pixel.
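A minimal numpy sketch of these metrics; the point arrays would be the manually selected correspondences, and the voxel dimensions are the ones derived above:

```python
# Sketch: the error metrics of Eqs. (2.4)-(2.7) for n corresponding point pairs.
import numpy as np

VX = VY = 50.0 / 600.0   # ~0.08 mm/pixel on x, y (from the text)
VZ = 50.0 / 300.0        # ~0.16 mm/pixel on z

def misalignment_errors(pts_a, pts_b):
    """pts_a, pts_b: (n, 3) arrays of corresponding (x, y, z) pixel coordinates."""
    d = np.abs(np.asarray(pts_a, float) - np.asarray(pts_b, float))
    md_x = np.mean(d[:, 0]) * VX                             # Eq. (2.4)
    md_y = np.mean(d[:, 1]) * VY                             # Eq. (2.5)
    md_z = np.mean(d[:, 2]) * VZ                             # Eq. (2.6)
    mde_xy = np.mean(np.hypot(d[:, 0] * VX, d[:, 1] * VY))   # Eq. (2.7)
    return md_x, md_y, md_z, mde_xy
```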


3 Results

In this section, the results of each method are presented. The coronal, axial and sagittal planes indicated in the image labels of this section are defined for the phantom models in Appendix, Section A.12.

3.1 Experiment 1 derived volumes (Phantom 1) using Landmarks based Registration

Here, the volumes acquired in experiment 1 are used (Section 2.2.3). Intrinsic landmarks were selected on the central slice, and the procedure described in Section 2.2.1 (Landmarks based Registration method) was followed.

Figure 3.1: 3D rendered images of Phantom 1 from both modalities (ParaView visualization): (a) microCT (600x600x300); (b) miniPET (371x371x300). The miniPET image is presented after interpolation on the z-axis.


Figure 3.2: Central slice of Phantom 1 by both modalities: Unregistered and Registered cut - Coronal view (slice 150).

Visualization of the registration shows fine accuracy on all axes. This is expected, due to the very easy selection of registration points for phantom 1. In Figure 3.2 the reader can observe the before and after of the affine transformation of the miniPET central slice.

Figure 3.3: Registered cuts example: Phantom 1 volume - Axial view (slice 300).

In Figures 3.3 and 3.4, we can observe the good alignment between the microCT and miniPET volumes.

Figure 3.4: Registered cuts example: Phantom 1 volume - Mid-sagittal view (slice 300).


Figure 3.5: 3D rendered miniPET/microCT Phantom 1 volume registered with the intrinsic landmarks based method (ParaView). microCT is presented with the Xray colormap, miniPET with the Heat colormap (for the colormaps, see Appendix, Section A.11).

The successful alignment can be observed in the 3D rendered volumes. A notable detail is the radiation leak that can be seen at the bottom of the chamber, due to a 3D printing error: 18F-FDG filled the small gaps in the solid PE, making visible a defect in the structure that was barely visible by eye.

3.2 Experiment 2 derived volumes (Phantom 2) using Landmarks based Registration

Figure 3.6: 3D rendered images of Phantom 2 from both modalities (ParaView visualization): (a) microCT (600x600x300); (b) miniPET (371x371x300). The miniPET image is presented after interpolation on the z-axis.


Figure 3.7: Central slice of Phantom 2 by both modalities: Unregistered and Registered with intrinsic landmarks based method - Mid-sagittal view (slice 150)

Figure 3.7 shows the miniPET slice before and after the transformation (registration). The misalignment is observable at the walls of the inner phantom chambers; considering that the wall thickness is 0.5 mm, it is a small misalignment. Note that here the intrinsic landmarks are not as easy to select as in phantom 1.

Figure 3.8: Registered cuts example: Phantom 2 volume - Para-sagittal view (slice 132)


Figure 3.9: Registered cuts example: Phantom 2 volume - Axial view (slice 300).

In Figure 3.9, some misalignment in the z-axis can be observed, due to inaccurate pre-alignment.

Figure 3.10: Registered cuts example: Phantom 2 volume - Mid-coronal view (slice 300).

Figure 3.11: 3D rendered PET/CT Phantom 2 volume registered with the intrinsic landmarks based method (ParaView). microCT is presented with the Xray colormap, miniPET with the Heat colormap (for the colormaps, see Appendix, Section A.11).


3.3 Experiment 2 derived volumes (Phantom 2) using Image processing/MI metric based registration on central slice

Figure 3.12: 3D rendered images of Phantom 2 from both modalities (ParaView): (a) microCT (600x600x300); (b) miniPET (371x371x300). The miniPET image is presented after interpolation on the z-axis.

Figure 3.13: Left: the initial microCT central slice. Right: the same slice binarized and filled, as input to the registration algorithm.


Figure 3.14: Left: the initial miniPET central slice. Center: the same slice resized as explained in the Methods section. Right: binarized and filled, as input to the registration algorithm.

Figure 3.15: Central slice of Phantom 2 by both modalities: Unregis- tered and Registered with automatic MI based method - Mid-sagittal view (slice 150)

Axially, the automatic method seems to perform slightly better than the Landmarks based method on the same phantom, as indicated in Figures 3.15 and 3.16.


Figure 3.16: Registered cuts example: Phantom 2 volume - Para-sagittal view (slice 132).

Figure 3.17: Registered cuts example: Phantom 2 volume - Axial view (slice 300).

Figure 3.18: Registered cuts example: Phantom 2 volume - Coronal view (slice 320).

As in the Landmarks based registration for this phantom (where volumes derived from the same experimental measurement were used), we have a slight misalignment on the z-axis, due to inaccurate pre-alignment of the object on the bed.

Figure 3.19: 3D rendered PET/CT Phantom 2 volume registered with the automatic MI based method on the central slice (ParaView). microCT is presented with the Xray colormap (BW), miniPET with the Heat colormap (for the colormaps, see Appendix, Section A.11).

3.4 Experiment 2 derived volumes (Phantom 2) using Image processing/MI metric based registration on 3D volumes

Figure 3.20: 3D rendered images of Phantom 2 from both modalities (ParaView): (a) microCT (600x600x300); (b) miniPET (371x371x300). The miniPET image is presented after interpolation on the z-axis.

The miniPET volume is scaled and interpolated on the x, y, z axes. After binarization and filling (see the Methods section), we get the volumes below:


Figure 3.21: 3D rendered binarized/filled Phantom 2 volumes from both modalities (ParaView): (a) microCT binarized/filled volume; (b) miniPET binarized/filled volume. Note that the bed has been removed from the CT image for better visualization.

A very accurate axial alignment is easily observed in Figures 3.22, 3.23 and 3.24 below.

Figure 3.22: Registered cuts example: Phantom 2 volume - Para-sagittal view (slice 140).


Figure 3.23: Registered cuts example: Phantom 2 volume - Axial view (slice 300).

Figure 3.24: Registered cuts example: Phantom 2 volume - Coronal view (slice 302).

It is very important to notice in Figures 3.23 and 3.24 that the z-axis misalignment due to inaccuracies in pre-registration is corrected with this method, which also allows translation along the z-axis for the moving miniPET volume. This is discussed further in the Discussion section.


Figure 3.25: 3D rendered PET/CT Phantom 2 volume registered with the automatic MI based method on 3D volumes (ParaView). microCT is presented with the Xray colormap, miniPET with the Heat colormap (for the colormaps, see Appendix, Section A.11).

3.5 Quantitative errors analysis

Misalignment errors are analyzed here for each registration method presented, based on the interpolated, registered miniPET slices and the corresponding microCT slices of the two phantom models. The analysis is based on [15].

3.5.1 Method: Intrinsic Landmarks based registration

The error values presented are based on 8 selected corresponding points on the slices of both phantoms' registered volumes, acquired by the Intrinsic Landmarks based registration method.

Figure 3.26: Errors in mm for the Intrinsic Landmarks based registra- tion method.

3.5.2 Method: Automatic MI based registration (by slice)

The error values presented are based on 8 corresponding points on the slices of the phantom 2 registered volume, acquired by the MI based registration method on 2D slices. This method was not tested on phantom 1.

Figure 3.27: Errors in mm for the MI based registration method on 2D slices.

3.5.3 Method: Automatic MI based registration (direct on 3D volumes)

The error values presented are based on 8 corresponding points on the slices of the phantom 2 registered volume, acquired by the MI based registration method on 3D volumes. This method was not tested on phantom 1.

Figure 3.28: Errors in mm for the MI based registration method on 3D volumes.


…solution for volume fusion for the integrated miniPET/microCT of the Jonasson Center for Medical Imaging. The difficulty lies in the fact that the volumes acquired by the two modalities have strong differences in voxel dimensions, intensity profiles and the information depicted in the final volume, as microCT provides anatomical information and miniPET provides functional information.

A complete, automatic volume fusion method for the miniPET and microCT modalities, based on image processing and Mutual Information (MI), has been implemented, presented and evaluated, both individually and in comparison with another implemented method, the intrinsic landmarks based registration. The method starts with a pre-alignment step, positioning the object at calculated positions on the bed of the modalities, followed by processing of the acquired volumes to optimize the input to the MI based registration loop. The methods were validated on data from 3D printed phantoms designed in-house specifically for this purpose.

4.2 Discussion

The proposed MI based method takes full advantage of the knowledge of the modality system, the integration of the modalities, and the imaging characteristics of each modality, providing the simplest possible inputs (processed volumes) for the maximization of the MI metric.

The MI based method showed remarkable spatial registration accuracy, both on its own and in comparison with the best accuracy accomplished by the Landmarks based registration method, as well as relative to studies of similar registration problems in small animal imaging, indicatively [15], [12] and [16]. On phantom 2, the 3D MI based method outperformed both the 2D MI based and the Landmarks based method.

Moreover, applying the method to the 3D processed volumes rather than to slices outperformed the other methods on all axes for phantom 2, and it minimizes the dependence of the z-axis registration on the pre-alignment step, reducing errors from >1 mm to <0.4 mm. The change in accuracy on the z-axis is easily observable by comparing Figures 3.9, 3.17 and 3.23 and the error results in Section 3.5.

It is important to point out that, to achieve this level of accuracy using the intrinsic landmarks based method, the landmarks had to be selected very carefully, which is sometimes impossible and always time-consuming. This is also reflected in the fact that the Landmarks based method performed better on phantom 1, where the landmarks are easy to spot and select accurately. The automatic MI based method achieves similar accuracy without the need for user-selected landmarks, thus minimizing user-induced errors. The above can also easily be verified by simple visual observation of the resulting images or volumes and by knowing the dimensions of the phantom models.

The misalignments are measured after acquisition, scaling/interpolation and application of the registration transform. Thus, errors connected with each one of these steps may distort this criterion. It should also be noted that we are discussing misalignments/distance errors as they are depicted in the final volume of only one phantom/example.

4.3 Limitations - Future work

Future work is directly connected with the limitations and drawbacks of the newly proposed method. A major aspect that could be improved is automated derivation of the binarization threshold (Section 2), by statistical analysis of the volume intensity values.

The performance of the proposed method is based on the fact that the inputs to the registration are heavily simplified (binarized/filled); with such inputs, alternative similarity metrics could also be considered, such as Mean Squared Difference (MSD) or Normalized Correlation Coefficient (NCC) [8]. The addition of a non-linear transformation as a final step could possibly improve the results further. Also, allowing slight scaling during the registration could correct for any small scaling error of the volumes, which in the automatic method is handled before registration.

Finally, it is worth mentioning that more testing and misregistration error analysis (quantitative, by distances, and qualitative, by an expert) could be done for the new method on datasets of actual small animal PET/CT studies, for a more thorough statistical analysis.


A State of the art

A.1 Positron Emission Tomography (PET)

Positron emission tomography (PET) is a nuclear, tomographic, non-invasive imaging technique used to observe functional metabolic processes by computing the 3D distribution of radioactivity caused by positron-electron annihilation in tissue. When a positron-emitting radionuclide is inserted in tissue (or a phantom model), it emits positrons:

$${}^{A}_{Z}\mathrm{X} \rightarrow {}^{A}_{Z-1}\mathrm{Y} + \nu + e^{+} + E$$

where $\nu$ is the neutrino, $e^{+}$ is the positron and $E$ is an amount of energy.

The positrons emitted by this reaction travel a short path in the medium and finally, after a statistically constant amount of time, annihilate with an electron. The annihilation is followed by energy emission in the form of two antiparallel (180 degrees apart) gamma rays, each with an energy of 511 keV:

$$e^{+} + e^{-} \rightarrow 2\gamma$$

In PET imaging, the aim is the detection of all gamma ray pairs (coincidences) caused by the same annihilation event. This is typically done by several detectors (mostly scintillation crystals coupled to photomultiplier tubes) placed in a ring surrounding the object under scanning. These detectors detect many of the emitted photons, but during a PET scan only those detected within a narrow coincidence time window are registered as coincidence events, each defining a line of response (LOR) between the two detectors.


Figure A.1: 2D representation of a PET ring and an LOR. An annihilation occurs and the two gamma rays are detected by oppositely placed detectors of the ring.

Raw data in PET imaging are the coincidence event counts for each LOR in the 3D volume. Reconstruction of these data leads to 3D images of the radionuclide concentration within the scanned volume, inside the Field Of View (FOV). Characteristic of PET scanners is the good spatial resolution and the lack of temporal resolution [17] [18].

A.2 MiniPET

Small animal imaging is a vital part of modern medical research, since there is an increasing need for precise depiction of the physiological processes that occur in small animals; these functions serve as models for structures, functions and processes in human pharmacokinetics, drug testing, tumor growth pathways, etc. [19]. MiniPET scanners, with a smaller detector-ring radius, can give excellent spatial resolution, but with a small Field Of View (FOV) and poor temporal resolution. The STH miniPET consists of 12 detector modules made of pixelated LYSO scintillator crystals (1.27 x 1.27 x 12 mm3) in a 35x35-pixel matrix with a pitch of 1.347 mm. The crystals are coupled to SiPMs. The spatial resolution at one-fourth CFOV is 1.24 mm (tangential), 1.15 mm (radial) and 1.23 mm (axial). The 3D volume produced by the STH miniPET basic reconstruction procedure is 371x371x35 voxels in x, y, z, respectively.

A.3 Computed Tomography (CT)

Computed Tomography (CT) is a radiographic, tomographic imaging technique that uses X-ray measurements (projections from many angles) and combines them algorithmically, reconstructing a 3D volume of "slices", tomographic images of the subject [20] [21]. Its main use is the identification of structural characteristics of tissue, phantoms, etc. The method is fundamentally based on the electromagnetic radiation attenuation properties of the subject under investigation [22]. Due to its primary importance for this thesis, we proceed with more information about cone-beam CT, and more specifically micro-CT.

The cone-beam geometry is presented in Figure A.2:

Figure A.2: Representation of a cone-beam CT acquisition system. The X-ray source and detector are opposed and rotate around the object, acquiring projections at multiple angles.

A typical cone-beam CT modality consists of an X-ray source and a detector, opposed and attached to a gantry that allows them to rotate in orbit, as can be seen in Figure A.2. During the rotation, at multiple angles, X-rays are emitted by the source, attenuate through the subject and finally meet the flat panel detector (which uses a fluorescent crystal layer to convert the X-rays into light for the sensor).


…depicted are particularly small.

At the Jonasson Medical Imaging Center, the microCT is constructed from scratch out of commercial parts. It consists of a Hamamatsu CMOS flat panel detector C7942CA-22, a Hamamatsu 90 kV pulsed microfocus X-ray source MX-L10951-04, a stainless steel ring gantry by Mekanex, a stepper motor for the rotation of the gantry by Oriental Motor (stepping motor-gearhead-driver AR98MC-H100-3 + universal controller SCX11), a slip ring by Penlink (model PL025-56-4P-36S, with 4 x 10 A and 28 x 2 A plus 1 Gigabit RJ45 contacts), and a frame grabber by EPIX (PIXCI D2X-C7942, with the XCAP libraries); finally, the steering of the frame grabber and the source is done by a single-board computer, a PCM-9562N-S6A1E from Advantech. More information on the parts and a performance analysis of the microCT can be found in [24].

A.4 Multimodal Imaging - PET/CT Fusion

Multimodal imaging refers to methods of extracting valuable information from images from different modalities and ultimately combining them into a new image. Integration of the information is of great importance, especially when the combined images carry very different information [1] [25].

This is also indicated by the dramatic increase in recent years of papers on subjects connected to image fusion [26] [27]. In medical imaging, the combination of modalities offers promising potential for diagnosis and treatment planning in several cases. Specifically, with the combination of PET and CT, we get functional and structural information in the final image from the two modalities, respectively. Defining the exact position of the activity within the complex structure of tissues is of primary importance [28]. Since 2001, when the first PET/CT scanner was installed, huge advances have occurred, and now sales of simple PET scanners without dual PET/CT capabilities are close to zero [1].

A.5 Steps of the general Image Fusion Procedure

The image fusion procedure is generally divided into two main steps. The first is image registration. The goal of registration is to align the corresponding features of a dataset of images with respect to a reference image [29]. The basic components of a typical registration framework are presented in the schematic below (they can differ, but typically the ones presented exist in every workflow):

Figure A.3: Basic components of a typical registration framework.

The transform represents the spatial transformation of points between the fixed and the moving image space. Intensities at non-grid positions are derived by the interpolator. The metric (which is minimized or maximized) is a measure of how well the fixed image and the transformed moving image match. The optimizer receives the metric value and tries to optimize it over a defined transform parameter space.

The second step is image fusion. The aim of this part of the procedure is to produce and visualize a more informative image, by merging the corresponding parts of the input images [29].

A.6 Registration of Medical Images

As aforementioned, registration is the vital function of aligning the images with regard to corresponding features, and it is one of the most used procedures in medical image analysis. A basic categorization of registration methods is given in Figure A.4:


Figure A.4: Basic categorization of registration methods

In the next subsections, a more thorough review of the rationale behind the aforementioned method categories is given.

A.6.1 Modalities involved

Monomodal

Monomodal applications are those that include and register images taken by one and the same modality. A special case is formed when the images to be registered are taken over time, so that the characteristics of the image/volume may have changed. Applications include detection of movement, size change, anatomical change, etc. Almost all medical imaging modalities have monomodal registration applications, among them X-rays, CT, PET, SPECT, microscopy modalities, US, etc.

Multimodal

In multimodal applications, images or volumes from more than one modality are involved in the registration process. Multimodal registration can be further classified into anatomical-anatomical registration applications (CT-MR, CT-Xray, US-MR, US-CT, etc.), which aim to register structures only, from both modalities, and anatomical-functional applications, where the aim is to combine the structural characteristics of one modality's image with the functional characteristics of another: PET-CT, SPECT-CT, SPECT-MR, PET-MR, etc.

Modality acquired image to model

External models are computationally made, combining numerous acquisition data, other information and man-made models. Atlases are the most common external model. They are mostly used to determine shape/size changes (abnormalities) of a structure of the patient, or the exact place of a function when structural data are missing, with respect to the statistical norm. There are reported cases of the usage of such methods in CT, MR, SPECT, X-rays, etc.

A.6.2 Subject

Intra-subject

Registration of images/volumes of the same subject acquired at different times. It has several clinical applications, mostly detection/diagnosis by determining shape or intensity changes, and evaluation of surgery results.

Inter-subject

Registration of images/volumes of different subjects. Less commonly used in clinical practice, with applications in shape, size and intensity change detection.

External model to subject

See Section A.6.1 (Modalities involved: Modality acquired image to model).

A.6.3 Dimensions of application

As indicated in Figure A.4, the spatial dimensionality of a registration can vary between 2D-2D, 3D-3D and 2D-3D methods.

2D-3D registration is used when we want to register one or more 2D images to a certain slice of a 3D volume. In general, 2D-2D and 2D-3D registrations are easier to implement, and less demanding in computational resources, compared to 3D-3D registration.

A.6.4 Nature of transformation

Linear Transformations

In linear transformations, the whole 3D volume or 2D image is transformed (translated, rotated, scaled and sheared) and the final images preserve straight and parallel lines. Parallelism-preserving transformations are generally called affine and can be rigid or non-rigid.

Figure A.5: Rigid, Affine and Projective transformation examples in regard to a 2D reference. Reprinted with permission from [34].

In this thesis, the implementation details of the affine landmark registration are described in Section 2.2.1. More detailed reviews of these methods can be found in [33].


Non-Linear Transformations

Non-linear transformations can be global or local curved transformations. Parallelism is no longer preserved in these models. Their non-linearity prevents them from being expressed directly by simple transformation matrices, as in the rigid or affine case. They can be described by polynomial functions of the reference coordinates, or by local displacement (deformation) functions, and their complexity lies in the existence of regularization factors.

Figure A.6: Non-rigid transformation example in regard to a 2D reference. Licensed, modified image from [34].

Numerous algorithms have been developed over the last decade for non-rigid registration processes for medical and other applications. Compared to rigid transformations, they lack speed and require significantly more computational resources; however, further research on them will prove of great importance for applications that suffer from tissue deformation, movement, etc. Details on these methods can be found in [33].

A.6.5 Registration Method basis

Extrinsic registration methods

An extrinsic registration method includes the use of objects detectable by all participating modalities (present clearly in all images/volumes to be registered). These objects vary in medical imaging: patient markers, modality markers, frames, etc. [12].

Intrinsic registration methods

Intrinsic registration methods are those based on structural information from the image/volume, not on artificial landmarks.

…a cost function is minimized between the new images. This method can also be applied to elastic models. Intensity based methods rely on the criterion that the two registered images will have maximized similarity. These methods also minimize/maximize a function, like the Mean Square Error, a correlation function, Mutual Information (MI), etc. They have found a great range of uses (in 2D-2D and 3D-3D): in monomodal applications, where we have strong intensity correspondence between the images, as well as in dedicated multimodal registration cases, like PET/CT and MR/PET-SPECT, where they are mostly used after processing of the input images, so as to end up with more correlated intensities between them, for example non-linear low-intensity enhancement of the PET image [10]. Other relevant examples can be found in [9][35][36].

Extrinsic and landmark based intrinsic methods are of comparatively lower computational complexity, simpler, and offer accuracy that depends on the landmark selection accuracy. Segmentation based methods are also low in computational complexity. Intensity based methods work well in monomodal applications, but become more sophisticated and require more computational resources in multimodal ones. Combinations of the aforementioned techniques are also commonly used in clinical applications, for example landmarking followed by refinement using segmentation.

A.7 Fusion of Medical Images

As aforementioned, the goal of image fusion is to produce a more informative image by merging the corresponding parts of the input images. Throughout the last decades, as the fusion of data, especially between different modalities, has become crucial for medical practice, many algorithmic procedures have been established aiming to optimize the fused image's value, depending on the needs of each application. These methods have different rationales and, based on scientific and survey papers (with many different categorizations), we conclude with the following one [29][27][37][38][39][26]:

Figure A.7: Categorization of fusion methods used in medical imag- ing.

A.7.1 Pixel-based fusion methods

Pixel based methods are those using strictly pixel-wise operations, from simple arithmetic ones (subtraction, addition, multiplication, division, max-min, median, weighted median, etc.) to more complex operators. Despite the simplicity and limits of these methods, they often perform well, especially when there is a contrast difference between the input images.

Composite Images and Alpha Blending

A composite image is an image derived by intensity merging of the input images. Alpha blending is the most commonly used method for creating composites in clinical applications, due to its low complexity, acceptable results and the ease of changing parameters. It is based on the simple idea of combining, pixel-wise, the translucent color of one image with the color of the other image as background. The proportions of transparency are given by weights, called alpha values. Alpha ranges from 0 to 1, where 0 represents full transparency and 1 full opacity. There are also other, more sophisticated methods developed for blending, using non-linear functions [40], panchromatic sharpening [41][42] and other combinations.


A.7.2 Morphology based fusion methods

…for example in CT/MR fusion [43].

A drawback of these methods is that their success is strongly tied to the choice of the right structuring element (shape and size) that controls the basic opening and closing operations, specifically for each application. Another drawback is their strong dependency on pixel intensity values.

A.7.3 Knowledge based fusion methods

The term "knowledge based" indicates a method whose design is based on practical knowledge. For example, segmentation methods can be developed with respect to a doctor's view of which area would be useful to segment. The main advantage of a method developed this way is that it relies on specialized human knowledge, which is irreplaceable. The drawback, on the other hand, lies again in the limitations of human judgment of images, especially with regard to intensity changes.

A.7.4 Wavelets based fusion methods

The rationale behind applying wavelets is to identify and extract the primary information (valuable details) of a reference image and forward it to another, by decomposing the image into coefficients, basically by convolution with a finite decaying wave (a wavelet). As wavelet methods work in both frequency and time, they are able to detect most of the details (appearing as frequency changes) of an image.

A general workflow of these methods has both images decomposed by the Discrete Wavelet Transform (DWT). A fusion decision is then applied between the decomposed images, and the result undergoes the Inverse Discrete Wavelet Transform (IDWT), providing the final fused image, which will have maintained the strong characteristics and details of both input images at the same resolution. The fusion decision is a vital part of the procedure, and it varies from simple subtraction and other operators to more complex models [44][45].

As aforementioned, there are many existing applications of wavelet-based methods in research, indicatively [46][47][48] and many others (see the relevant references of [29][27][39]), as well as wavelet methods used in combination with others, for example PCA [49][50].

A.7.5 Components Analysis based fusion methods

Component analysis methods are based on the statistical compression of the data space to a subspace that captures the essence of the actual data. This is called dimensionality reduction. The logic is similar to the wavelet methods, where the decomposed data are combined to reconstruct another image. The subspaces, which maintain the important characteristics of the images, can be used to reconstruct the fused image with the coefficients derived by the component analysis. Principal Component Analysis (PCA) is a commonly used algorithm of this family and has been used in medical applications (MR/PET, CT/PET, etc.), alone and in combination with wavelets and other methods [50][51][52]. Other component analysis methods in use for medical applications are Independent Component Analysis (ICA) and Partial Least Squares (PLS).

A.7.6 Neural Networks (NN) based fusion methods

All neural network systems require training with big datasets of validated data in order to make decisions. Neural networks, due to their adaptive character, are seen as an appealing solution: medical modalities change fast, and the ability of a network to adapt just by retraining is very attractive. Many applications have already been published (see the references of [38]).

It is known, though, that the success of a neural network's predictive capability depends strongly on the training process (algorithm) and on the quality and quantity of the training data. Often the training data are limited, making it even more difficult to generalize the efficiency of an NN across different modalities and cases. There are also cases where NNs are combined with other fusion techniques for improvement, like fuzzy logic, wavelets, PCA, etc.


A.8 Phantom models design

In this section, the design characteristics of the phantom models can be found. The designs were finalized in SolidWorks by Peter Arfert, KTH.

Figure A.8: Design sheet and dimensions of phantom 1


Figure A.9: Design sheet and dimensions of phantom 2

A.9 18F-FDG

Fluorine-18 (18F) is the most commonly used β+ emitting radioisotope in clinical applications. Elemental fluorine is a diatomic molecule and a gas at room temperature; 18F has a half-life of approximately 109 min. Annihilation events following the β+ emission release two anti-diametrically coincident γ photons (511 keV each) that can be detected by detector systems like PET [53]. In our experiments, we used a chemical compound of 18F called fludeoxyglucose (18F-FDG) [54], liquid at room temperature. Molecules of deoxyglucose (a glucose analog) are chemically bound with molecules of Fluorine-18. In PET/miniPET examinations, this compound is used as a marker for tissue glucose uptake and constitutes the most commonly used radiopharmaceutical in clinical and pre-clinical routine. More details on 18F-FDG are out of scope for this thesis and can be found, indicatively, in [54]. Note that 18F-FDG is sometimes, for simplicity, referred to as just 18F in PET imaging routine.

A.10 Elastix transform file parameters used for MI rigid registration

The default parameter map "rigid" is loaded and parameterized in our Python module (via the SimpleElastix wrapper) by the following commands:
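The exact commands and tuned values are those shown in Figures A.10 and A.11; the sketch below only illustrates, with placeholder values, what loading and overriding the default "rigid" map looks like in SimpleElastix:

```python
# Sketch: load the default 'rigid' parameter map and override selected entries.
# The values shown are illustrative; the tuned values are those of Figs. A.10/A.11.
import SimpleITK as sitk

pmap = sitk.GetDefaultParameterMap("rigid")
pmap["Metric"] = ["AdvancedMattesMutualInformation"]
pmap["Optimizer"] = ["AdaptiveStochasticGradientDescent"]
pmap["MaximumNumberOfIterations"] = ["512"]       # illustrative value
pmap["NumberOfResolutions"] = ["4"]               # illustrative value

elastix = sitk.ElastixImageFilter()
elastix.SetParameterMap(pmap)
```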


Figure A.10: Parameters file for MI based metric on central slice


Figure A.11: Parameters file for MI based metric on 3D volumes

A.11 Colormaps

Figure A.12: Colormaps used in Paraview for visualization of the 3D volumes

A.12 Geometrical planes

Figure A.13: Planes indicated on Phantom 1 for better understanding of the slices presented in Results, Chapter 3. The planes' movement direction is also indicated.


Figure A.14: Planes indicated on Phantom 2 for better understanding of the slices presented in Results, Chapter 3. The planes movement direction is also indicated.

A.13 Mutual information (MI)

Several publications show various implementations of the MI metric to be the best-performing intensity-based similarity metric for multimodal registration and other applications [55]. MI is an information-entropy-based metric. In the next sections, an introduction to the mathematics and rationale of MI is presented: starting with entropy, continuing with an overview of the implementation details, and finishing with an explanation of the rationale behind the pre-processing of the images that we conducted within the registration pipeline.

A.13.1 Information entropy

Entropy, or information entropy (in information theory), is a measure of the amount of information or uncertainty of a random variable. First introduced by Claude Shannon, it is the average amount of information gained by a set of events. A very interesting overview of the development of information entropy, including examples, is given in [55].

A.13.2 Shannon’s entropy on images

One might ask: how are entropy, and random variables in general, associated with images? Shannon's entropy can be computed for an image if we consider its intensity distribution (grayscale values) as a random variable. The probability distribution of the image intensities is the count of occurrences of each grayscale intensity value (the histogram), divided by the total number of occurrences. The fewer intensities we have, the lower the entropy becomes: entropy here constitutes a measure of the dispersion of the image histogram.
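A minimal sketch of this computation, assuming a NumPy array as the image and adopting the convention 0 · log(0) = 0, could look as follows (the function name is ours):

    import numpy as np

    def image_entropy(img, bins=256):
        # Histogram of grayscale values -> discrete probability distribution
        hist, _ = np.histogram(img, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                 # drop empty bins (0 * log 0 := 0)
        return -np.sum(p * np.log2(p))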

A.13.3 Joint histogram and probability distribution function (pdf) calculation

The joint histogram of two images is a 2D plot of the combinations of gray values at all corresponding points, as explained in [56]. It can be used to estimate the joint probability distribution (again, after dividing each value by the total number of entries). Shannon's entropy can again be calculated, resulting in the joint entropy. The joint entropy constitutes a measure of the uncertainty associated with the two random variables (the image intensities). The more dispersed the joint histogram is, the higher the joint entropy becomes:


Figure A.15: In this figure we can observe the dispersion of the 2D histograms (joint histograms) for two example image pairs. The right one corresponds to registered images and its entropy is ≈ 3.2, while the left one corresponds to the same images, non-registered, and results in an entropy of ≈ 7.1.
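The joint entropy itself can be estimated from such a joint histogram; the sketch below uses NumPy's histogram2d (the function name is ours):

    import numpy as np

    def joint_entropy(img1, img2, bins=256):
        # 2D histogram over pairs of corresponding grayscale values
        h, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
        p = h / h.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))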

A statistical, non-parametric way to approximate a continuous pdf for a random variable (in our case, the image), rather than a discrete one like the histogram, is Kernel Density Estimation (KDE), also known as the Parzen windows method [57]. Assume that we have a dataset x_1, x_2, ..., x_n and want to approximate its pdf. We use a mix of known continuous distribution functions φ(x), called kernels, centered on x_1, x_2, ..., x_n, with a pre-selected bandwidth h:

f_h(x) = (1 / (n·h)) · Σ_{i=1..n} φ((x − x_i) / h)   (A.2)


Figure A.16: Example of KDE with Gaussian kernels (windows) in red. The final distribution is depicted in blue.
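A minimal sketch of equation A.2 with Gaussian kernels, as in Figure A.16, might look as follows (the helper name is ours; scipy.stats.gaussian_kde provides a ready-made equivalent):

    import numpy as np

    def kde(x_grid, samples, h):
        # Gaussian kernel phi, centered on each sample x_i (cf. equation A.2)
        phi = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
        n = len(samples)
        return np.array([phi((x - samples) / h).sum() for x in x_grid]) / (n * h)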

In Elastix, an implementation of B-spline Parzen windows is used for the estimation of the pdfs, not on the whole image/volume but on several randomly selected pixel regions within it. This yields the same result while reducing the computational effort, as indicated in [5].

A.13.4 From entropy and pdfs to Mutual Information metric

Definitions of Mutual Information vary, but all of them are based on Shannon's entropy. A common definition considers the intensity functions of the two input images/volumes as random variables I_1, I_2, with pdfs p_{I_1}, p_{I_2} and joint pdf p_{I_1,I_2}. In this definition, minimizing the joint entropy of the two variables maximizes their mutual information:

MI(I_1, I_2) = H_{I_1} + H_{I_2} − H_{I_1,I_2}   (A.3)

where H_{I_1} is the entropy of I_1, H_{I_2} the entropy of I_2, and H_{I_1,I_2} their joint entropy.
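Using the two helper sketches from sections A.13.2 and A.13.3 above (image_entropy and joint_entropy, our illustrative names), equation A.3 could be evaluated directly:

    mi = image_entropy(i1) + image_entropy(i2) - joint_entropy(i1, i2)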

Another definition, the one used in this thesis (as implemented in Elastix), follows the Kullback-Leibler dependence measure between two distributions p, q, based on Shannon's entropy [55], given by Σ_{i=1..n} p_i · log(p_i / q_i). Analogously, for the calculation of the dependence between the pdfs of two images I_1, I_2 we have:

MI(I_1, I_2) = Σ_{I_1,I_2} p_{I_1,I_2} · log( p_{I_1,I_2} / (p_{I_1} · p_{I_2}) )   (A.4)

This is a measure of the distance between the joint pdf of images I_1, I_2 and their joint pdf in the case of total independence: the dependence between the two images is measured, under the assumption that the dependence of the grayscale intensity distributions is maximal when the images are registered. Thus, MI is maximized when the images are registered. More details and the exact equations for the implementation of equation A.4 and the calculation of the probability distributions in Elastix can be found in [5] and [58].
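For intuition, equation A.4 can also be approximated directly from the joint histogram of two images. The sketch below is a plain NumPy illustration (the function name is ours), not the B-spline Parzen-window estimator that Elastix actually uses [5]:

    import numpy as np

    def mutual_information(img1, img2, bins=256):
        # Joint pdf from the joint histogram of the two images
        h, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
        p_joint = h / h.sum()
        p1 = p_joint.sum(axis=1)       # marginal pdf of img1
        p2 = p_joint.sum(axis=0)       # marginal pdf of img2
        p_indep = np.outer(p1, p2)     # joint pdf under total independence
        nz = p_joint > 0
        return np.sum(p_joint[nz] * np.log(p_joint[nz] / p_indep[nz]))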

A.13.5 Pre-processing and MI metric

In our case, the pre-processed images/volumes end up with only two intensity values, 0 and 1, taking advantage of the structural information depicted in miniPET images by lower intensities (the method is explained in Chapter 2, Methods). A similar technique was used in [15].

This procedure ensures a basic similarity between the images/volumes and gives the simplest possible input to the registration loop. For both images, as they come closer to alignment, the grayscale intensity values correspond, since there are only two values: 1 for object parts and 0 for background. In this way, we minimize the error caused by the totally different intensity profiles of the raw miniPET/microCT volumes.
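A simplified sketch of such a binarization step, assuming NumPy volumes and a modality-specific threshold chosen as described in Chapter 2, could look as follows (hole filling via the SciPy routine cited in [13]):

    import numpy as np
    from scipy.ndimage import binary_fill_holes

    def binarize(volume, threshold):
        # Everything above the threshold is object (1), the rest is
        # background (0); interior gaps in the mask are then filled
        mask = volume > threshold
        return binary_fill_holes(mask).astype(np.uint8)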


[4] "OpenCV (Open Source Computer Vision Library)."

[5] "Elastix software tool for medical image registration."

[6] S. Klein, M. Staring, K. Murphy, M. Viergever, and J. Pluim, "elastix: A toolbox for intensity-based medical image registration," IEEE Transactions on Medical Imaging, vol. 29, pp. 196–205, Jan. 2010.

[7] D. Shamonin, "Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer's disease," Frontiers in Neuroinformatics, vol. 7, 2013.

[8] "SimpleElastix - a Python wrapper for the Elastix software for medical image registration."

[9] J. J. Vaquero, M. Desco, J. Pascau, A. Santos, I. Lee, J. Seidel, and M. V. Green, "PET, CT, and MR image registration of the rat brain and skull," IEEE Transactions on Nuclear Science, vol. 48, pp. 1440–1445, Aug. 2001.

[10] S. Bricq, H. L. Kidane, J. Zavala-Bojorquez, A. Oudot, J.-M. Vrigneaud, F. Brunotte, P. M. Walker, A. Cochet, and A. Lalande, "Automatic deformable PET/MRI registration for preclinical studies based on B-splines and non-linear intensity transformation," Medical & Biological Engineering & Computing, Feb. 2018.

[11] R. Clouard, "Image team - the Pantheon project - tutorial: Image rescaling," June 2011.

[12] P. L. Chow, D. B. Stout, E. Komisopoulou, and A. F. Chatziioannou, "A method of image registration for small animal, multi-modality imaging," Physics in Medicine and Biology, vol. 51, pp. 379–390, Jan. 2006.

[13] The SciPy community, "scipy.ndimage.morphology.binary_fill_holes," 2015.

[14] A. R. Smith, "Image compositing fundamentals - technical memo 4," Aug. 1995.

[15] S. Bricq, H. L. Kidane, J. Zavala-Bojorquez, A. Oudot, J.-M. Vrigneaud, F. Brunotte, P. M. Walker, A. Cochet, and A. Lalande, "Automatic deformable PET/MRI registration for preclinical studies based on B-splines and non-linear intensity transformation," Medical & Biological Engineering & Computing, Feb. 2018.

[16] N. Hayakawa, K. Uemura, K. Ishiwata, Y. Shimada, N. Ogi, T. Nagaoka, H. Toyama, K. Oda, A. Tanaka, K. Endo, and M. Senda, "A PET-MRI registration technique for PET studies of the rat brain," vol. 27, pp. 121–125, 2000.

[17] M. N. Wernick and J. N. Aarsvold, Emission Tomography: The Fundamentals of PET and SPECT. San Diego: Elsevier Science, Dec. 2004.

[18] R. Badawi, "Introduction to PET physics."

[19] J. H. Botting and A. R. Morrison, "Animal research is vital to medicine," Scientific American, vol. 276, Feb. 1997.

[20] S. R. Cherry, J. A. Sorenson, and M. E. Phelps, Physics in Nuclear Medicine. Elsevier Saunders, 2012.

[21] E. Landis and D. Keane, "X-ray microtomography," Materials Characterization, vol. 61, pp. 1305–1316, Dec. 2010.
