
Master's thesis, 30 credits. March 2009

Characterization and Reduction of Noise in PET Data Using MVW-PCA

Per-Edvin Svensson

Department of Information Technology



Characterization and Reduction of Noise in PET Data Using MVW-PCA

Per-Edvin Svensson

Masked Volume-Wise Principal Component Analysis (MVW-PCA) is used in Positron Emission Tomography (PET) to distinguish structures with different kinetic behaviours of an administered tracer. In the article where MVW-PCA was introduced, a noise pre-normalization was suggested due to temporal and spatial variations of the noise between slices. However, the noise pre-normalization proposed in that article is only applicable to datasets reconstructed using the analytical method Filtered Back-Projection (FBP). This study aimed at developing a new noise pre-normalization that is applicable to datasets regardless of whether they were reconstructed with FBP or with an iterative reconstruction algorithm such as Ordered Subset Expectation Maximization (OSEM).

A phantom study was performed to investigate the differences in expectation values and standard deviations between datasets reconstructed with FBP and OSEM. A novel noise pre-normalization method named "higher-order principal component noise pre-normalization" (HOPC noise pre-normalization) was suggested and evaluated against other pre-normalization methods on both synthetic and clinical datasets.

Results showed that MVW-PCA of data reconstructed with FBP was much more dependent on an appropriate pre-normalization than analysis of data reconstructed with OSEM. HOPC noise pre-normalization showed an overall good performance with both FBP and OSEM reconstructions, whereas the other pre-normalization methods only performed well with one of the two reconstruction methods.

The HOPC noise pre-normalization has potential to improve the results of MVW-PCA on dynamic PET datasets independently of the reconstruction algorithm used.

Printed by: Reprocentralen ITC, IT 09 003

Examiner: Anders Jansson

Subject reviewer: Ewert Bengtsson

Supervisor: Pasha Razifar


Preface

This master's thesis is the result of a study performed in Uppsala, Sweden, at GE Healthcare in co-operation with the Centre for Image Analysis (CBA). The author is currently finishing his studies as a master's student in Engineering Physics and Electrical Engineering at the Institute of Technology at Linköping University. During the early phases of the project the author cooperated with two students who simultaneously performed their master's thesis work at GE Healthcare within the same area [1, 2]. There are therefore some similarities between the reports; this concerns the background chapter (chapter 2) and parts of the synthetic study presented in sections 3.2.5 and 4.2.2. Apart from the work presented in this thesis, work has also been done on an application used to view and analyse dynamic Positron Emission Tomography (PET) data.

The studies have resulted in four manuscripts, listed below, that either have been or are to be submitted for publication in scientific journals.

• P Razifar, H H Muhammed, F Engbrant, P-E Svensson, J Olsson, E Bengtsson, B Långström, and M Bergström, “Performance of principal component analysis and independent component analysis with respect to signal extraction from noisy positron emission tomography data — a study on computer simulated images”. Accepted for publication (2009)

• P-E Svensson, J Olsson, F Engbrant, E Bengtsson, B Långström, and P Razifar, “Characterization and reduction of noise in dynamic positron emission tomography data using masked volume-wise principal component analysis”. Manuscript (2009)

• J Olsson, R Oweinus, P-E Svensson, F Engbrant, B Långström, E Bengtsson, and P Razifar, “Automated Method for Generation of Input Function in Positron Emission Tomography Studies Using Masked Volume-Wise Principal Component Images”. Manuscript (2009)

• F Engbrant, P-E Svensson, J Olsson, B Långström, E Bengtsson, and P Razifar, “Application of Masked Volume-Wise Principal Component Analysis on In Vivo Animal Positron Emission Tomography Studies Using Fluorine”. Manuscript (2009)

I would like to thank a few people who helped me with this master's thesis:

• My supervisor, Pasha Razifar, for his great commitment and support throughout the whole project.

• My friends and colleagues, Johan Olsson and Fredrik Engbrant, with whom I have had a lot of interesting and fruitful discussions.

• My father, Per-Åke Svensson, for proofreading and giving valuable feedback on the report.


Acronyms and abbreviations

1 Introduction
  1.1 Setting
  1.2 Aim
  1.3 Structure of the thesis

2 Background
  2.1 Tomographical imaging modalities
    2.1.1 Overview
    2.1.2 Anatomical information
    2.1.3 Physiological information
    2.1.4 Integrated imaging modalities
    2.1.5 Differences and applications
  2.2 Positron emission tomography
    2.2.1 Overview
    2.2.2 Types of PET studies
    2.2.3 Acquisition
    2.2.4 Noise
    2.2.5 Corrections
    2.2.6 Reconstruction
  2.3 Principal component analysis
    2.3.1 Overview
    2.3.2 Algorithm
    2.3.3 Pre-normalizations

3 Materials and methods
  3.1 Characterization of noise
    3.1.1 Motive
    3.1.2 The acquisition
    3.1.3 Selection of samples
    3.1.4 Expectation value
    3.1.5 Standard deviation
    3.1.6 Correlation between slices
    3.1.7 Correlation between frames
    3.1.8 Correlation between samples within the same slice
    3.1.9 Correlation between samples within the same slice from one realization
  3.2 Reduction of noise
    3.2.1 Masked volume-wise PCA
    3.2.2 Reconstruction from selected principal components
    3.2.3 Background noise pre-normalization
    3.2.4 Higher-order principal component pre-normalization
    3.2.5 Synthetic images
    3.2.6 Clinical study

4 Results
  4.1 Characterization of noise
    4.1.1 Frames
    4.1.2 Slices
    4.1.3 Correlation between samples within the same slice
    4.1.4 Correlation between samples within the same slice from one realization
  4.2 Reduction of noise
    4.2.1 Higher-order principal component pre-normalization
    4.2.2 Synthetic images
    4.2.3 Clinical study

5 Closing remarks
  5.1 Discussion
    5.1.1 Interpretation of PC images
    5.1.2 Noise characteristics
    5.1.3 Reduction of noise in PET data using MVW-PCA
  5.2 Future work
  5.3 Conclusion

Bibliography

List of Figures

List of Tables


Acronyms and abbreviations

ACF     Auto-Correlation Function
BN      Background Noise
CT      Computed Tomography
FBP     Filtered Back-Projection
FDG     Fluorodeoxyglucose
fMRI    Functional Magnetic Resonance Imaging
FORE    FOurier REbinning
FOV     Field of View
FWHM    Full Width at Half Maximum
LOR     Line of Response
HOPC    Higher-Order Principal Component
MRI     Magnetic Resonance Imaging
MSE     Mean Squared Error
MVW     Masked Volume-Wise
OSEM    Ordered Subset Expectation Maximization
PCA     Principal Component Analysis
PC      Principal Component
PET     Positron Emission Tomography
PIB     Pittsburgh Compound-B
ROI     Region of Interest
ROM     Removal of Mean
SV      Standardized Variables
SPECT   Single Photon Emission Computed Tomography
TAC     Time–Activity Curve
VOI     Volume of Interest
WSS     Weak-Sense Stationary


1 Introduction

1.1 Setting

Positron Emission Tomography (PET) is a non-invasive imaging modality used to visualize the functionality of tissues and organs in vivo in medical and research applications [3]. PET is based on measuring the concentration of a molecule labelled with a radionuclide, known as a tracer, designed to follow a specific physiological or biochemical path. The scanner detects photons that result from positron-electron annihilation events, creating an image or a set of images showing the tracer concentration in the scanned object.

PET data is acquired either as a static image volume from the whole scan or as a dynamic sequence of image volumes from different times of the scan.

Whole-body acquisitions are mostly performed as static PET studies, often used to detect tumours, whereas studies of the brain are performed as dynamic PET studies, used to detect neurological disorders such as Parkinson's disease, Alzheimer's disease, phobia and schizophrenia by studying the kinetic behaviour of the tracer.

However, PET data suffers from noise and the different areas and tissues can be hard to discern. There are several methods for analysing PET data such as kinetic modelling, summation and multivariate image analysis methods.

It has been shown that Masked Volume-Wise Principal Component Analysis (MVW-PCA) can be used as a multivariate method that, without modelling assumptions, can separate tissues and organs with different kinetic behaviours of the PET tracer into different components [4]. In this approach, a new pre-normalization was suggested prior to application of PCA since the noise variance varies, both temporally and spatially, within a dataset. However, the proposed pre-normalization approach is only applicable to datasets reconstructed using analytical methods such as Filtered Back-Projection (FBP).

1.2 Aim

The aim of this project was to characterize noise in PET data reconstructed with FBP and Ordered Subset Expectation Maximization (OSEM), and to reduce the noise using MVW-PCA.


1.3 Structure of the thesis

This thesis is divided into five chapters, where the first is the introduction. The second chapter is a technical background to different tomographical imaging modalities with focus on PET. This chapter also contains an introduction to the multivariate analysis method PCA and a new application of it to dynamic PET data called MVW-PCA. The materials and methods used in this project are described in chapter 3. The results are presented in chapter 4 and discussed in chapter 5.


2 Background

2.1 Tomographical imaging modalities

2.1.1 Overview

In the context of medical imaging, an imaging modality is any of the various types of equipment used to acquire images of the body. Tomography is a technique based on generating two-dimensional slices through a section of a three-dimensional volume. Tomogram-generating imaging modalities include Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), and combinations and variations of these [3]. Each imaging modality has its own advantages, and is therefore used in different research areas and for diagnosing different disorders. These imaging modalities are used to obtain either anatomical information, physiological information or both, by integrating two imaging modalities.

2.1.2 Anatomical information

Computed tomography

Computed Tomography (CT) uses x-rays to visualize thin slices through the human body [5]. An x-ray tube and a detector are placed in the CT camera's gantry on opposite sides of the patient. The patient is gradually passed through the gantry as the system rotates, creating images from different angles and yielding a three-dimensional x-ray image of the patient. The images have low structural noise and high contrast between bone and soft tissue, see figure 2.1(a) [3].

Magnetic resonance imaging

Magnetic Resonance Imaging (MRI) is based on the relaxation properties of excited hydrogen nuclei (single protons) and generates images with high structural definition in soft tissue, see figure 2.1(b). The patient is placed in a stationary magnetic field, causing a small fraction of the spinning protons to line up in a parallel or anti-parallel direction compared to the field, which puts them in specific energy states. Radio frequency pulses are sent into the body, making the protons rotate in phase with the pulses and putting them in a state of higher energy. When the stimulation is turned off, the protons emit their excitation energy as a radio signal. This signal can generate a cross-sectional image, where different proton densities are represented by brighter or darker areas in the image. Different relaxation times generate images with different contrast. In brain imaging, T1-weighted images (using the shorter spin-lattice relaxation time) are used to differentiate between white and grey matter of the brain. T2-weighted images (using the longer spin-spin relaxation time) are used for investigation of diseased parts of the brain [3]. Applications in research include Diffusion MRI, which measures the diffusion of water molecules in biological tissues; Multinuclear Imaging, which uses relaxation properties of molecules other than hydrogen; and field research, where portable instruments use the magnetic field of the earth.

Figure 2.1: Anatomical images. (a) is a coronal CT image of a torso (image courtesy of Dr Jens Sörensen, Uppsala University Hospital, Sweden), whereas (b) is a sagittal MRI image of a head (Department of Mathematics, Uppsala University, http://www.math.uu.se).

2.1.3 Physiological information

Functional magnetic resonance imaging

Functional Magnetic Resonance Imaging (fMRI) is based on MRI and measures the haemodynamic response, the change in blood flow, related to neural activity in the brain. Haemoglobin has different magnetic properties depending on whether it is oxygenated or deoxygenated, making the signal dependent on the level of oxygenation. This allows mapping of the functionality of different parts of the brain, see figure 2.2(a). The patient needs to lie still during the scan, which usually takes between 15 minutes and 2 hours. Movement in excess of 3 millimetres will give unusable data. Images are usually taken every 1–4 seconds [6].

Single photon emission computed tomography

Single Photon Emission Computed Tomography (SPECT) is based on counting single photons that are emitted by gamma-emitting radiopharmaceuticals. The tracer is administered intravenously, and the patient is positioned in a gamma camera. Projection data is acquired covering 360 degrees around the patient. The distribution of the radiolabelled molecules is measured and gives functional or biochemical information, see figure 2.2(b) [3].

Positron emission tomography

Positron Emission Tomography (PET) is based on detection of positron-electron annihilation events. A PET scan is prepared by administering a radionuclide to the patient. In more than 95% of the studies this is done intravenously by injection, but the radionuclide can also be administered orally by taking a pill or by inhalation of gas. When the radionuclide decays, a positron is emitted. The positron travels a few millimetres until it has lost its energy through collisions and scatterings with the surrounding matter. When enough energy is lost the positron annihilates with an electron and generates two co-linear photons, each with 511 keV energy, in anti-parallel directions. These photons are detected approximately at the same time. Interaction between the photon and the crystal in the detectors generates light flashes that are converted to electronic pulses, which are in turn recorded by the camera's electronics. PET generates images with biological and functional information about the kinetic behaviour of the radiotracer [3], see figure 2.2(c). Since PET is the imaging modality used in this thesis, more details about PET are discussed in section 2.2.

Figure 2.2: Functional images. (a) is an fMRI image of a brain (Stanford Medicine, http://stanmed.stanford.edu), (b) a coronal SPECT image of a torso (image courtesy of Dr Pasha Razifar, GE Healthcare, Sweden) and (c) a coronal PET image of a torso (image courtesy of Dr Jens Sörensen, Uppsala University Hospital, Sweden).


2.1.4 Integrated imaging modalities

PET/CT

PET/CT is a combination of PET and CT [7, 8]. The CT data replace the transmission data from a regular PET scan, are used for corrections of the acquired data, and help with localization of the structures [3].

PET/MRI

Compared to PET/CT systems, PET/MRI not only offers improved soft-tissue contrast and reduced levels of ionizing radiation, but also MRI-specific information such as functional, spectroscopic and diffusion tensor imaging. PET/MRI has been successfully implemented for pre-clinical studies, but combining PET and MRI for clinical use has proven to be a very challenging task. Technical advances in this area are expected in the near future [9].

SPECT/CT

SPECT/CT is a device containing a CT system and a gamma camera on a single gantry [10]. The SPECT procedure is performed and then complemented with a CT transmission scan. The transmission data is used for corrections in the reconstruction of the projections [3]. Finally, the data from the two imaging modalities are merged into a composite image.

2.1.5 Differences and applications

CT scans are faster and more cost-efficient than MRI and PET scans. CT images contain less structural information and detail than MRI, but they have relatively low noise magnitude. CT images have high contrast between areas in the body with different densities, like bone and soft tissue, but nearly no contrast within the soft tissues. It can therefore be hard to distinguish pathological from healthy structures and different soft tissues from each other in CT images, and the high radiation dose limits the possibility of repeated scans [3]. CT is relatively unsuitable for diagnosing disorders in the brain. However, it is widely used in oncology and for diagnosing heart diseases.

MRI provides greater contrast between soft tissues than CT, and is therefore useful in neurological, musculoskeletal, cardiovascular and oncological imaging. MRI provides little information about the functionality of the brain. Medical or bio-stimulation implants such as pacemakers are considered a risk-increasing factor for MRI scanning because of the magnetic and radio-frequency fields. However, when using MRI or fMRI the patient is not exposed to ionizing radiation [3].

PET has a high level of statistical noise that limits its efficiency [3]. The information about the radio-chemicals used for particular functions can give important support to research and diagnosis. PET is often used in oncology and drug development, and can also provide diagnoses of several neurological disorders such as schizophrenia, Alzheimer's disease, Parkinson's disease and phobia [11–14].

PET/CT has the advantages of both PET and CT. PET has high sensitivity when it comes to functional and biochemical information, whereas CT gives high-quality structural images [7, 8]. The combination has proved to increase the diagnostic value compared to each imaging modality used separately [15]. PET/CT has an important role in whole-body imaging in oncology. It is faster and more accurate than PET or CT alone for the depiction of malignancy.

SPECT projections suffer from highly smoothed images and poor camera resolution. On the other hand, SPECT provides 3D information that can complement other studies. It can therefore provide information about localised function in internal organs, such as functional cardiac or brain imaging. Research areas include paediatrics [16].

SPECT/CT hybrid studies give accurate localization of tumours, measurement of invasion into surrounding tissues, and characterization of their functional status [10]. Two of SPECT's weaknesses are the long scanning time needed and the poor resolution of the resulting images.

2.2 Positron emission tomography

2.2.1 Overview

PET is a non-invasive tomographic technique used to obtain anatomical and physiological information in vivo in healthy and pathological organs and tissues. PET has proven to be a useful tool in diagnosing cancer and cardiac diseases, and has an increasingly important role in providing earlier diagnosis of several neurological disorders such as Alzheimer's disease, Parkinson's disease, phobia, epilepsy and cancer [11–13].

Modern PET cameras mainly consist of a translating bed surrounded by a set of detector rings. The detector rings contain crystal detectors (over 18,000 detectors) capable of creating a large number of trans-axial images with a resolution depending on the scanner and image reconstruction algorithm.

Figure 2.3: PET camera GE/Advance Nxi (General Electric, http://www.ge.com), where (a) shows the gantry and translating bed, whereas (b) shows the detector ring.


2.2.2 Types of PET studies

PET images used for investigations of the body are usually acquired as a set of stationary images across the body, called static imaging. PET studies of the brain are often performed dynamically, meaning that the acquired data is a set of images of the same volume but from different time sequences of the scan. This makes it possible to analyse the kinetic behaviour of the used tracer in different parts of the brain. These multivariate image sets can be used to obtain physiological, biochemical and functional information about the brain using analysis methods such as kinetic modelling, compartment modelling, summation or multivariate analysis [17–21].

2.2.3 Acquisition

A complete PET study consists of three different scans: a blank scan, a transmission scan and an emission scan.

The blank scan is performed every day to normalize the detectors of the camera, which are highly sensitive and therefore have different efficiencies. The blank scan is performed with no patient in the camera. Instead of a tracer, a radioactive rod source rotating around the camera's gantry yields the detectable photons [3].

The transmission scan is performed using the same source as in the blank scan but with the object in the Field of View (FOV). The data from the transmission scan and the blank scan are used for a so-called attenuation correction to compensate for the scanned object's geometry and the fact that the photons from the decay site have to travel through different amounts of tissue when going in different directions. In an integrated system such as a PET/CT camera, the CT data can replace the transmission scan [3].

The emission scan is based on detection of positron-electron annihilation events. When performing a PET emission scan, a molecule labelled with a short-lived positron-emitting radionuclide such as ¹¹C or ¹⁸F, called a tracer, is administered to the patient prior to or during the scan. There is a wide range of different tracers used in PET. A radionuclide is created using a cyclotron or generator and is incorporated into a compound designed to follow a specific physiological path. The radionuclides used when creating PET tracers are short-lived, which reduces the radiation dose to the patient, see table 2.1. This also makes it possible to perform several scans on the same patient using the same or different kinds of tracer (multi-tracer study) [3].

Nuclide   β⁺ energy [MeV]   β⁺ range [mm]   Half-life [min]
¹¹C       0.96              1.1             20.3
¹⁵O       1.70              2.5             2.04
¹⁸F       0.64              0.6             110
⁶⁸Ga      1.90              2.9             67.7

Table 2.1: Commonly used isotopes in clinical PET studies.

When the substance decays it emits a positron and a neutrino. After travelling a short distance, typically a few millimetres, the positron annihilates with an electron in the surrounding tissue. This annihilation event yields two co-linear, anti-parallel photons with 511 keV energy each. If detectors detect the photons on opposite sides of the camera's FOV within a timing window of about 10–12 ns (a photon travels ∼3 m in 10 ns), the event is registered as a true coincidence. When a 511 keV photon hits one of the crystals, a light flash is emitted and registered by a photo detector. The light flash is converted to an electrical pulse that is registered by the camera's electronics. After a detector has detected a photon it is paralysed for a short period of time. During this so-called dead-time the detector cannot detect any photons. Apart from dead-time there are a number of different factors that affect the precision of the scan result; two major factors are random coincidences and scattered coincidences. Random coincidences are detections originating from two different annihilations, but whose generated photons hit opposite detectors within the time window. Scattered coincidences are detections in which one or both of the photons have been scattered in the tissue before hitting the detector, resulting in a Line of Response (LOR) (the line through the detectors detecting the coincidence) that does not correspond to the position of the annihilation [3].

2.2.4 Noise

In traditional PET scanners, the main sources of noise are, in decreasing order of magnitude: the emission, transmission and blank scans [22]. With newer attenuation correction modes, e.g. CT, the noise from emission clearly dominates. The detector system only affects the magnitude of the noise, whereas the recording system, the various corrections, the image reconstruction method and its parameters also affect the distribution and correlation of the noise [3].

Radioactive decay, measured by PET detectors, obeys the Poisson distribution. With an expected number of counts µ during a given time interval, this distribution has the standard deviation √µ. The deviation from µ for each sample from this distribution is defined as noise.
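As a quick illustration of this relationship (not an experiment from the study), the following Python/NumPy sketch draws Poisson-distributed counts and checks that their standard deviation approaches √µ:

```python
import numpy as np

# Illustrative sketch: Poisson-distributed detector counts with
# expectation mu have standard deviation sqrt(mu).
rng = np.random.default_rng(0)
mu = 400.0
counts = rng.poisson(mu, size=100_000)

print(counts.mean())   # close to mu = 400
print(counts.std())    # close to sqrt(mu) = 20
```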

Apart from the statistical noise in the signal, there are many factors that affect the noise both during the acquisition and during the reconstruction. Some factors that affect noise during the acquisition are the choice of acquisition mode, scan duration, amount of administered tracer, geometry of the tracer distribution, detector efficiencies, attenuation, dead-time, random coincidences and scattered photon pairs that falsely have been registered as true coincidences. When reconstructing the PET data, the applied correction methods as well as the choice of reconstruction algorithm have a heavy impact on the statistical properties of the noise [3].

2.2.5 Corrections

Acquisition data from PET scans contain several errors that need to be compensated for prior to the reconstruction procedure. These compensations include corrections for differences in detector efficiencies, random coincidences, scattered coincidences, dead-time and attenuation, where attenuation correction is the main factor affecting the measured counts in PET acquisition. Since a scanned object usually is not symmetric, the photons need to travel through different amounts of tissue. By using the results from the blank scan and the transmission scan these differences in attenuation can be compensated for [3].


2.2.6 Reconstruction

The two most commonly used methods for reconstructing tomographic data are Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM). FBP is an analytical and computationally efficient inversion algorithm for the two-dimensional Radon transform that is both fast and easy to implement. OSEM, on the other hand, is an optimized iterative expectation maximization algorithm, which iteratively maximizes a target function in order to reconstruct the tomographic data. In practice, FBP produces relatively high noise variance in regions with low signal compared to OSEM. OSEM instead produces noise that is more signal-dependent, with high noise levels in high-signal regions and low noise levels in low-signal regions.

2.3 Principal component analysis

2.3.1 Overview

Principal Component Analysis (PCA), also known as the Hotelling or Karhunen-Loève (KL) transform, was discovered by Karl Pearson in 1901 [23]. It is a method that explains the variance-covariance structure through linear combinations of the original variables. Each linear combination, known as a Principal Component (PC), is picked in such a way that it maximizes the variance, which is the same as minimizing the Mean Squared Error (MSE). This is done under the constraint that the norm of its weight vector equals one and that the new PC is uncorrelated to all previous PCs. Even though there may be many variables, a large part of the system's total variability can often be accounted for by a small number of PCs. The PCs can then replace the variables without any significant loss of information. PCA's general objectives are data reduction and interpretation [24].

2.3.2 Algorithm

Each observation x_1, x_2, ..., x_p is stored as a row vector in the input matrix

$$X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{p1} & x_{p2} & \cdots & x_{pn} \end{pmatrix} = [x_1, x_2, \ldots, x_p]^T. \tag{2.1}$$

The unbiased estimate of the covariance matrix associated with X is

$$S_X = \begin{pmatrix} s_{11} & s_{12} & \cdots & s_{1p} \\ s_{21} & s_{22} & \cdots & s_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ s_{p1} & s_{p2} & \cdots & s_{pp} \end{pmatrix} \tag{2.2}$$

where

$$s_{ik} = \frac{1}{n-1} \sum_{j=1}^{n} (x_{ij} - \bar{x}_i)(x_{kj} - \bar{x}_k). \tag{2.3}$$

If S_X has the eigenvalue-eigenvector pairs (λ_1, e_1), (λ_2, e_2), ..., (λ_p, e_p), where λ_1 ≥ λ_2 ≥ ... ≥ λ_p ≥ 0, and the eigenvectors are stored in the matrix E = [e_1, e_2, ..., e_p]^T, the principal components are defined as

$$Y = EX = [y_1, y_2, \ldots, y_p]^T \tag{2.4}$$

where the variance is given by

$$\mathrm{Var}(y_i) = e_i^T S_X e_i = \lambda_i, \quad i = 1, 2, \ldots, p \tag{2.5}$$

and the covariance by

$$\mathrm{Cov}(y_i, y_k) = e_i^T S_X e_k = 0, \quad i \neq k. \tag{2.6}$$
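The algorithm translates almost directly into code. Below is a minimal NumPy sketch of equations 2.1–2.6 (the function name and conventions are this illustration's own, not taken from any PET software): observations are the rows of X, the covariance matrix is eigendecomposed, and the eigenvectors, sorted by decreasing eigenvalue, form the rows of E.

```python
import numpy as np

def pca(X):
    """PCA following equations 2.1-2.6. X is p x n with one observation per
    row. Returns the PCs Y = E X, the eigenvector matrix E (one eigenvector
    per row) and the eigenvalues sorted in decreasing order."""
    S = np.cov(X)                    # unbiased covariance estimate (eqs. 2.2-2.3)
    lam, vecs = np.linalg.eigh(S)    # eigenpairs of the symmetric matrix S
    order = np.argsort(lam)[::-1]    # lambda_1 >= lambda_2 >= ... >= lambda_p
    lam, E = lam[order], vecs[:, order].T
    return E @ X, E, lam             # eq. 2.4
```

On data pre-normalized as described in section 2.3.3, the variance of the i:th row of the returned PCs equals lam[i], as in equation 2.5.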

2.3.3 Pre-normalizations

Depending on the application, it is often preferred to normalize data prior to PCA. The most commonly used pre-normalizations prior to PCA are the Removal of Mean (ROM) and the Standardized Variables (SV) pre-normalization. When pre-normalizing a dataset X, the variables are mapped to a new set of pre-normalized variables Z.

Removal of mean

If observations are known to have a non-zero expectation value, it should be subtracted from the observation to avoid PC_1 always pointing in that direction. In many datasets the expectation value is unknown but can be estimated by the arithmetic mean. Removal of the arithmetic mean is the default option in most implementations of PCA. The pre-normalized variable with removed mean is defined as

$$z_{ik} = x_{ik} - \bar{x}_i \tag{2.7}$$

where x_{ik} is the kth input variable in observation i.

Standardized variables

Observations may have different variations in the measurements, and PCs from non-normalized data are in general not invariant to this. A common approach to handle this is to standardize the variables, giving each variable zero expectation value and unit variance. This is done by removing the arithmetic mean from the observations and then scaling them with their standard deviation. The standardized variable corresponding to x_{ik} is defined as

$$z_{ik} = \frac{x_{ik} - \bar{x}_i}{s_i} \tag{2.8}$$

where s_i is the estimated standard deviation of observation i. When pre-normalizing to standardized variables, PCA will choose eigenvectors based on correlation instead of covariance [24].
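Both pre-normalizations are one-liners in the same row-per-observation convention; a minimal sketch:

```python
import numpy as np

def rom(X):
    """Removal of Mean (eq. 2.7): subtract each observation's arithmetic mean."""
    return X - X.mean(axis=1, keepdims=True)

def sv(X):
    """Standardized Variables (eq. 2.8): zero mean, unit variance per observation."""
    return rom(X) / X.std(axis=1, ddof=1, keepdims=True)
```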


3 Materials and methods

3.1 Characterization of noise

3.1.1 Motive

A phantom study was performed in order to characterize the noise in PET data reconstructed with FBP and OSEM.

3.1.2 The acquisition

The study was performed on an eXplore VISTA Dual-Ring small-animal PET scanner (GE Healthcare), seen in figure 3.1. The unit contains 2 rings of 18 phoswich detector modules capable of performing 3D data acquisition with an axial Field of View (FOV) of 48 mm and an effective trans-axial FOV of 67 mm.

The spatial resolution is 0.8–1.0 mm for reconstructions made with OSEM and 1.5–1.8 mm for reconstructions made with FBP [25].

Figure 3.1: eXplore VISTA small animal PET scanner (General Electric, http://www.ge.com).

In the experiments a phantom with two cylindrical inserts was used. Both inserts were 15 mm in diameter. The insert in the upper part of the gantry was filled with 223 kBq/cm³ of ¹⁸F and the insert in the left part of the gantry with 73 kBq/cm³ of ¹⁸F. The duration of the emission scan was 90 minutes.


Acquired data was reconstructed from list mode to a dynamic dataset by first applying the FOurier REbinning (FORE) algorithm to produce 61 two-dimensional sinograms with a spatial resolution of 175 for 128 angles, spanning the axial FOV for 18 frames. Corrections were made to compensate for the decay of ¹⁸F. Two datasets were then reconstructed from the sinograms using OSEM and FBP in 2D mode. The dimensions of the reconstructed data were 175 × 175 × 61 × 18.

3.1.3 Selection of samples

Three circular Regions of Interest (ROIs) of equal size were selected: one for each insert and one for a region where no radioactive substance was present. Each ROI had a diameter of 11 mm, which gave N_ROI = 633 sample points per ROI. The 4 mm reduction in diameter compared to the inserts was chosen to avoid most of the spillover effects, since the spatial resolution measured as Full Width at Half Maximum (FWHM) is less than 2 mm for the eXplore VISTA scanner [25], see figure 3.2.

Figure 3.2: An FBP reconstruction of slice 31 in the first frame, showing a cross-section of the two inserts. The outlines of the three ROIs are drawn in white.

3.1.4 Expectation value

The expectation value was estimated for each ROI in all slices and frames for both FBP and OSEM using the arithmetic mean

$$\bar{x} = \frac{1}{N_{ROI}} \sum_{i=1}^{N_{ROI}} x_i. \tag{3.1}$$

3.1.5 Standard deviation

The sample standard deviation was calculated for each ROI in all slices and frames for both FBP and OSEM using the square root of the unbiased sample variance

$$s = \sqrt{\frac{1}{N_{ROI} - 1} \sum_{i=1}^{N_{ROI}} (x_i - \bar{x})^2}. \tag{3.2}$$
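In code, both estimates reduce to a mean and an unbiased standard deviation over the ROI samples. A hypothetical sketch (the array layout and names are assumptions of this illustration):

```python
import numpy as np

def roi_statistics(data, roi):
    """data: reconstructed dataset of shape (u, v, slices, frames);
    roi: boolean (u, v) mask selecting the N_ROI = 633 sample points."""
    samples = data[roi]                 # shape (N_ROI, slices, frames)
    mean = samples.mean(axis=0)         # arithmetic mean, eq. 3.1
    std = samples.std(axis=0, ddof=1)   # unbiased sample std, eq. 3.2
    return mean, std
```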

3.1.6 Correlation between slices

Before estimating the correlation between slices, all samples within each ROI were divided into 9 groups, where every third sample in the u and v directions was put in the same group. This was done in order to reduce correlation between samples within the same group, since such correlation would otherwise alter the correlation estimate. The distance between the samples was set to three, since a shorter distance would result in considerably more correlation between the samples within each group, and a longer distance would lead to too few samples in each group. It shall also be mentioned that even though the data was divided into groups, the arithmetic mean of all samples within a ROI was used to estimate the expectation value for all samples within that ROI.

For each group, the correlation matrix of the correlations between the m slices is

$$R_{Slices} = \begin{pmatrix} 1 & r_{12} & \cdots & r_{1m} \\ r_{21} & 1 & \cdots & r_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ r_{m1} & r_{m2} & \cdots & 1 \end{pmatrix}, \quad r_{ik} = \frac{s_{ik}}{\sqrt{s_{ii} s_{kk}}} \tag{3.3}$$

where the covariances are calculated with the sample covariance

$$s_{ik} = \frac{1}{p \cdot N_{Group} - 1} \sum_{n=1}^{p} \sum_{j=1}^{N_{Group}} (x_{ijn} - \bar{x}_{in})(x_{kjn} - \bar{x}_{kn}). \tag{3.4}$$

N_Group is the number of samples within the current group and p is the number of frames. x_{ijn} is the j:th sample in the current group and ROI in slice i (or k for the corresponding variable x_{kjn}) at frame n. x̄_{in} (or x̄_{kn}) is the mean of all samples in all groups in the current ROI in slice i (or k) and frame n.

The mean of the 9 correlation matrices was calculated, and the two diagonals closest to the main diagonal were plotted for each ROI and dataset.
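A sketch of this procedure under assumed inputs (an (N_ROI, m, p) array of ROI samples and the integer pixel coordinates of each sample): the 9 groups are formed from every third pixel in u and v, one slice-by-slice correlation matrix is estimated per group via equations 3.3–3.4, and the matrices are averaged.

```python
import numpy as np

def slice_correlation(x, uu, vv):
    """x: ROI samples of shape (N_ROI, m slices, p frames); uu, vv: integer
    (u, v) coordinates of each sample. Returns the mean of the 9 group-wise
    slice correlation matrices (section 3.1.6)."""
    xbar = x.mean(axis=0)                    # ROI-wide mean per slice and frame
    m = x.shape[1]
    mats = []
    for du in range(3):
        for dv in range(3):
            g = (uu % 3 == du) & (vv % 3 == dv)  # every third sample in u and v
            d = (x[g] - xbar).transpose(1, 0, 2).reshape(m, -1)  # one row per slice
            s = d @ d.T / (d.shape[1] - 1)       # sample covariance, eq. 3.4
            r = s / np.sqrt(np.outer(np.diag(s), np.diag(s)))    # eq. 3.3
            mats.append(r)
    return np.mean(mats, axis=0)
```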

3.1.7 Correlation between frames

The estimation of the correlation between frames was done in much the same way as for slices. The samples in each ROI were divided into 9 groups, and the sample correlation matrix of the correlations between the p frames is

$$R_{Frames} = \begin{pmatrix} 1 & r_{12} & \cdots & r_{1p} \\ r_{21} & 1 & \cdots & r_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ r_{p1} & r_{p2} & \cdots & 1 \end{pmatrix}, \quad r_{ik} = \frac{s_{ik}}{\sqrt{s_{ii} s_{kk}}} \tag{3.5}$$

where the covariances are calculated with the sample covariance

$$s_{ik} = \frac{1}{m \cdot N_{Group} - 1} \sum_{w=1}^{m} \sum_{j=1}^{N_{Group}} (x_{ijw} - \bar{x}_{iw})(x_{kjw} - \bar{x}_{kw}). \tag{3.6}$$

N_Group is the number of samples within the current group and m is the number of slices. x_{ijw} is the j:th sample in the current group and ROI in frame i (or k for the corresponding variable x_{kjw}) at slice w. x̄_{iw} (or x̄_{kw}) is the mean of all samples in all groups in the current ROI in frame i (or k) and slice w.

The mean of the 9 correlation matrices was calculated, and the diagonal closest to the main diagonal was plotted for each ROI and dataset.

3.1.8 Correlation between samples within the same slice

A PET slice can be seen as a stochastic process with properties that can be estimated if enough independent realizations are available. One way of acquiring independent realizations of a slice or a whole scan is to perform gated scans [26]. In this study only one scan was available, and the different slices in each observation had to be used. Since 2D reconstruction was used, the differences in the statistical properties of the different slices were small enough to treat the slices as separate realizations of the same stochastic process of a slice. The calculations are similar to those for the correlation between slices and frames, but there is one sample correlation matrix with the lags (k_1, k_2) for each choice of coordinates (u, v), i.e. the correlation matrix is four-dimensional with elements given by

$$r_{u,v,u+k_1,v+k_2} = \frac{s_{u,v,u+k_1,v+k_2}}{\sqrt{s_{u,v,u,v}\; s_{u+k_1,v+k_2,u+k_1,v+k_2}}} \tag{3.7}$$

with covariances estimated from the sample covariance

$$s_{u,v,u+k_1,v+k_2} = \frac{1}{p \cdot m - 1} \sum_{n=1}^{p} \sum_{w=1}^{m} (x_{u,v,w,n} - \bar{x}_{u,v})(x_{u+k_1,v+k_2,w,n} - \bar{x}_{u+k_1,v+k_2}). \tag{3.8}$$

x_{u,v,w,n} is the sample at coordinate (u, v, w, n) and x̄_{u,v} is the mean of the samples at coordinate (u, v) over all frames and slices.

The correlation at different coordinates (u, v) within a slice for FBP and OSEM reconstructed data was investigated, both inside the three ROIs and outside the inserts. The differences in extent, orientation and isotropy of the correlation in datasets reconstructed with FBP and OSEM were studied.

3.1.9 Correlation between samples within the same slice from one realization

The Auto-Correlation Function (ACF) is a property of a stochastic process that describes its correlation between different points in time, or space, depending on the unit of the dimensions in the stochastic process. So far the different slices have been treated as different realizations of a stochastic process of a slice when estimating the ACF between points within a slice. If the stochastic process is Weak-Sense Stationary (WSS), the ACF can also be estimated from a single realization. WSS is a weaker form of stationarity that requires the mean and the ACF to be independent of time, or of the coordinate within the slice in this case. If X[u, v] is the stochastic process of a slice, these two criteria are written as

$$E\{X[u, v]\} = m \tag{3.9}$$

and, with the ACF depending only on the lag (k_1, k_2),

$$E\{X[u + k_1, v + k_2]\, X[u, v]\} = R(k_1, k_2). \tag{3.10}$$

If a region A in a slice is WSS, the samples within that region can be used to estimate the ACF, which is the same for every point within this region. To do this, a smaller region B inside region A is used. The estimate used for the auto-correlation coefficient (normalized auto-correlation) is defined as

$$r_{k_1,k_2} = \frac{\displaystyle\sum_{(u,v) \in A} (x[u,v] - \bar{x}_{k_1,k_2}) \left( b[u-k_1, v-k_2] - \bar{b} \right)}{\sqrt{\displaystyle\sum_{(u,v) \in A} (x[u,v] - \bar{x}_{k_1,k_2})^2 \cdot \displaystyle\sum_{(u,v) \in A} \left( b[u-k_1, v-k_2] - \bar{b} \right)^2}} \tag{3.11}$$

where b is the template region within B with arithmetic mean b̄, x is the realization of the slice, and x̄_{k_1,k_2} is the mean of x in a region of size B placed at the coordinate (u, v). Even though only a single realization x is used, the estimate will approach the autocorrelation function as the sizes of A and B are increased.

The estimate has at least two applications. If used on WSS signals it estimates the ACF, and if used on non-WSS signals it identifies similarities and periodicities within the signal (how well B is correlated with A).

This estimate was used in different parts of a trans-axial slice and the results were compared to the ACF estimates in section 3.1.8.

To compare this estimate to the sample correlation, the region A was set to 21 × 21 pixels and placed on the centre of each ROI in the centre slice of the middle frame in the dataset. This estimated the ACF for the 5 neighbouring samples in each direction for each ROI (B was set to an 11 × 11 pixel region).
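A rough sketch of the estimate for this setup (A passed as a 21 × 21 array, the template b taken as its central 11 × 11 part; in effect a plain normalized cross-correlation):

```python
import numpy as np

def acf_estimate(a, half_b=5):
    """Single-realization ACF estimate (eq. 3.11) on a square region A,
    using its central (2*half_b+1)^2 part as the template region b."""
    c = a.shape[0] // 2
    b = a[c - half_b:c + half_b + 1, c - half_b:c + half_b + 1]
    b0 = b - b.mean()
    r = np.zeros((2 * half_b + 1, 2 * half_b + 1))
    for k1 in range(-half_b, half_b + 1):
        for k2 in range(-half_b, half_b + 1):
            w = a[c + k1 - half_b:c + k1 + half_b + 1,
                  c + k2 - half_b:c + k2 + half_b + 1]  # window of A at lag (k1, k2)
            w0 = w - w.mean()
            r[k1 + half_b, k2 + half_b] = (w0 * b0).sum() / np.sqrt(
                (w0 ** 2).sum() * (b0 ** 2).sum())      # normalized correlation
    return r
```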


3.2 Reduction of noise

3.2.1 Masked volume-wise PCA

PCA can be performed on a set of images of any dimension, for example images of an object from different points in time. PCA will then separate the images into PC-images where early PC-images describe most of the variance within the input set of images. This approach has been used on dynamic PET data resulting in images with high contrast between structures with different kinetic behaviours of a tracer. PCA can be performed with either slices or volumes as observations, and either on the whole dataset or on selected parts [4].

Since it is common that only a limited part of a dataset contains information that is actually of interest, the dataset can be masked to only include data within this Volume of Interest (VOI). This procedure reduces memory usage and computation time, and also has the advantage that the directions of the eigenvectors are only dependent on data inside the VOI and not influenced by noise or other disturbing signals in the background.

Masked Volume-Wise PCA (MVW-PCA) is performed in a number of steps: A mask representing the scanned object is created from either transmission images from PET, or from CT if a PET/CT study is performed. The mask is used to extract the VOI from the background and can also be used to perform background noise pre-normalization, described in section 3.2.3. PCA is performed on the data within the VOI. PCs created with MVW-PCA are referred to as MVW-PCs [4]. To view the MVW-PCs they are placed back into the mask, see figure 3.3.

  

Figure 3.3: Illustration of the MVW-PCA procedure: the PET data frames (frame 1 to p) are pre-normalized and masked into masked data volumes, PCA produces the MVW-PCs, and the MVW-PCs are finally unmasked.
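A hypothetical sketch of these steps, reusing the pca and pre-normalization sketches from chapter 2 (the array layout is assumed; the projection is applied to the non-masked pre-normalized data, the variant described in the next paragraph):

```python
import numpy as np

def mvw_pca(frames, mask, prenormalize, pca):
    """frames: dynamic PET data of shape (p, nx, ny, nz); mask: boolean VOI
    mask of shape (nx, ny, nz). `prenormalize` and `pca` are the sketches
    from sections 2.3.3 and 2.3.2 (observations as rows)."""
    p = frames.shape[0]
    Z = prenormalize(frames.reshape(p, -1))  # one volume (observation) per row
    _, E, lam = pca(Z[:, mask.ravel()])      # eigenvectors from VOI data only
    Y = E @ Z                                # project the non-masked data
    return Y.reshape(frames.shape), E, lam   # MVW-PCs, one volume per row
```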


In this report a slightly different approach is used when creating the unmasked MVW-PCs. Instead of projecting the masked pre-normalized data onto the eigenvectors retrieved from the PCA, the pre-normalized non-masked PET data is projected onto the eigenvectors. This gives an identical result inside the mask, as seen in figure 3.4; the memory usage and speed are the same, but the removed background will still be visible instead of the sharp mask border and the padded zeroes seen in figure 3.4(a). Another advantage of using this method is that no information in the PET dataset is lost during the MVW-PCA, and the whole original dataset can therefore be reconstructed using the method described in section 3.2.2.

Figure 3.4: A slice from MVW-PC_1 showing the differences between masked data (a) and non-masked data (b) projected onto the first eigenvector. The study was performed on the brain of a patient with Alzheimer's disease injected with the tracer ¹¹C-Pittsburgh Compound-B.

3.2.2 Reconstruction from selected principal components

Since the signal in PET datasets is temporally correlated whereas the noise is not, MVW-PCA can be used to reduce the dimensionality of datasets. In the space spanned by the MVW-PCs, the signal is mostly described by lower-order MVW-PCs whereas noise is described by higher-order MVW-PCs. It is therefore useful to be able to separate data spanned by the low-order MVW-PCs from data spanned by the higher-order MVW-PCs in the original frame-space.

Since PCA and MVW-PCA with non-masked data merely perform a change of basis of the pre-normalized data, no quantitative information is lost during the MVW-PCA and the full original dataset or parts of it can be reconstructed from the MVW-PCs.

As described in section 2.3.2, PCs are created from the equation

$$Y = EZ,$$

where Z is the pre-normalized data. To retrieve the pre-normalized data from all MVW-PCs, one can simply use the inverse of E:

$$Z = E^{-1}Y.$$

In order to calculate a selected number of MVW-PCs from the pre-normalized data, rows that correspond to unwanted MVW-PCs are removed from E, creating \tilde{E} and \tilde{Y}. In the same way, columns that correspond to unwanted MVW-PCs are removed from E^{-1} in order to calculate the modified pre-normalized data \tilde{Z} from the selected number of MVW-PCs in \tilde{Y}. Something to notice is that E is an orthogonal eigenvector matrix with the property E^T = E^{-1}, which saves computation time. MVW-PCs may now efficiently be removed from Z without any unnecessary calculations using

$$\tilde{Z} = \widetilde{E^{-1}} \tilde{Y} = \widetilde{E^{-1}} \tilde{E} Z = \tilde{E}^T \tilde{E} Z = AZ,$$

where A = \tilde{E}^T \tilde{E}. The modified input matrix \tilde{X} is then retrieved from \tilde{Z} by doing the inverse normalization. A flow chart of the procedure is shown in figure 3.5.

Figure 3.5: Removal of MVW-PCs from X: pre-normalization X → Z, change of basis Z → Y, removal of MVW-PCs Y → \tilde{Y}, change of basis back \tilde{Y} → \tilde{Z}, and un-normalization \tilde{Z} → \tilde{X}.
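As a sketch, the whole removal collapses to one matrix product (with the row-per-eigenvector convention used earlier; the names are this illustration's own):

```python
import numpy as np

def remove_pcs(Z, E, keep):
    """Keep only the MVW-PCs listed in `keep`: Z_tilde = A Z with
    A = E_tilde^T E_tilde, where the rows of E_tilde are the kept
    eigenvectors (figure 3.5)."""
    E_tilde = E[keep, :]
    A = E_tilde.T @ E_tilde      # orthogonal projection onto the kept subspace
    return A @ Z

# Example: drop MVW-PC 1 and keep MVW-PC 2..p.
# Z_tilde = remove_pcs(Z, E, keep=np.arange(1, E.shape[0]))
```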

3.2.3 Background noise pre-normalization

When performing 2D reconstructions of PET data, each slice tends to have a varying level of noise. Since PCA cannot separate variance due to signal from variance due to noise, it is desirable to have each slice scaled with the standard deviation of the noise to get unit noise variance in every observation. In PET data reconstructed with FBP, the background contains a large amount of noise. Background noise pre-normalization uses a mask to separate the background from the VOI in order to estimate the standard deviation of the background in each slice. The pre-normalization is performed using the equation

$$z_{ik} = \frac{x_{ik} - \bar{x}_i}{s_w} \tag{3.12}$$

where s_w is the estimated standard deviation of the samples within the background of the slice w where the sample x_{ik} is located. Background noise pre-normalization is only used on PET data reconstructed with FBP, since the noise in data reconstructed with OSEM is too signal-dependent, which results in noise magnitudes close to zero outside the VOI [4].

3.2.4 Higher-order principal component pre-normalization

Higher-Order Principal Component (HOPC) pre-normalization is a novel method presented for the first time in this report. Much like Background Noise (BN) pre-normalization, it retrieves an estimate of each slice's standard deviation, which is then used for pre-normalizing the slices. Since the reconstruction of the early MVW-PCs is a good approximation of the expectation value in a dataset, it can be removed, and the standard deviation of the reconstruction of the remaining MVW-PCs should then approximately be that of the noise.

The pre-normalization can be divided into three steps. The first is to perform MVW-PCA on the whole FOV (it can also be performed on the VOI if speed is of high importance) without any pre-normalization and set the first MVW-PC to zero. The second step is to reconstruct the MVW-PCs and estimate the standard deviation in each slice of the reconstruction. The third step is to perform the pre-normalization described in equation 3.12, but with the new estimated noise standard deviation s_w.

Figure 3.6: Late principal component pre-normalization: lower-order PCs are removed from X, the noise standard deviation s_w is estimated from the remainder \tilde{X}, and Z is obtained as (x_{ik} − x̄_i)/s_w.

The advantage of HOPC pre-normalization compared to background noise pre-normalization is that it depends neither on the presence of noise in the background nor on that background noise being proportional to the amount of noise in the rest of the slice.
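A sketch of the three steps, reusing the hypothetical pca helper above and removing only MVW-PC 1 (the choice of how many early MVW-PCs to remove is evaluated below):

```python
import numpy as np

def hopc_prenormalize(frames, pca):
    """HOPC pre-normalization sketch; frames has shape (p, nx, ny, nz)."""
    p = frames.shape[0]
    X = frames.reshape(p, -1)
    Xc = X - X.mean(axis=1, keepdims=True)
    Y, E, _ = pca(Xc)                        # step 1: MVW-PCA, no pre-normalization
    noise = (E[1:].T @ Y[1:]).reshape(frames.shape)  # reconstruct MVW-PC 2..p only
    s_w = noise.std(axis=(0, 1, 2), ddof=1)  # step 2: noise std per slice w
    z = frames - frames.mean(axis=(1, 2, 3), keepdims=True)
    return z / s_w                           # step 3: eq. 3.12 with the new s_w
```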

In order to decide the number of early MVW-PCs to use in the reconstruction of late MVW-PCs, the standard deviation of the estimated noise was compared to the standard deviation of the background in datasets reconstructed with FBP. To retrieve a quantitative measure of how well the two curves fit, one of the curves was multiplied by a scalar value that minimizes the MSE. This was done since PCA picks the same eigenvectors no matter the scale of the input data. The MSE for the two curves was then calculated. The comparisons were made on two clinical datasets reconstructed with FBP, where the first was retrieved from a full-body scan with the tracer Fluorodeoxyglucose (FDG), described in section 3.2.6, and the second was a brain study with the tracer Pittsburgh Compound-B (PIB).

3.2.5 Synthetic images

Motive

To get an understanding of how PCA acts on actual dynamic PET data, synthetic images were produced. This was chosen as a starting point for the project since realizations of noise and signal are known prior to the analysis and can be used to validate the results. Another advantage of using synthetic images is the possibility to modify and study one parameter at a time, to get a better understanding of how the analysis method reacts to different input.

Signal

In this study MATLAB (The MathWorks Inc., Natick, Massachusetts) was used to create various datasets of dynamic synthetic images. The synthetic images had one slice per observation with a size of 128 × 128 pixels that included four geometric structures. The spatial size and the kinetic behaviour of the structures were chosen to resemble the cerebellum (CBL), occipital cortex (Occip), frontal cortex (FrntCx) and white matter (WhitM) in PET studies with the tracer ¹¹C-labelled PIB, as seen in figure 3.7.

Figure 3.7: Spatial and temporal behaviour of the signal structures: (a) the spatial structures (CBL, WhitM, FrntCx and Occip), (b) the temporal expectation values and (c) the temporal standard deviations, both as concentration of activity [Bq/cm³] over time [min]. The spatial background mask used in background noise pre-normalization is shown in black.

The temporal behaviour of the four Time–Activity Curves (TACs) was calculated with the kinetic function

$$k(t) = \alpha e^{-\beta t} \cdot (1 - e^{-\gamma t}), \tag{3.13}$$

using the parameters found in table 3.1. Spatially, all regions had a constant value. The signal was then convoluted with a point-spread function to create a spillover effect similar to that of images retrieved from a PET scanner. The time interval between the different frames was set to an interval used in 24-frame (60 min) PET studies with the tracer PIB, and the values in the time vector t were set to the centre of each interval, see table 3.2.

Structure       Function   α     β      γ
CBL (mean)      k_1(t)     4.0   0.04   0.8
FrntCx (mean)   k_2(t)     3.0   0.01   1.0
WhitM (mean)    k_3(t)     1.5   0.007  0.7
Occip (mean)    k_4(t)     3.5   0.02   1.0
Noise (std.)    k_n(t)     0.7   0.03   1.5

Table 3.1: Parameters for the temporal functions.
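For concreteness, equation 3.13 with the table 3.1 parameters in a few lines (a sketch; the study used MATLAB, and NumPy is substituted here purely for illustration):

```python
import numpy as np

def k(t, alpha, beta, gamma):
    """Kinetic function of equation 3.13."""
    return alpha * np.exp(-beta * t) * (1.0 - np.exp(-gamma * t))

params = {                      # (alpha, beta, gamma) from table 3.1
    "CBL":    (4.0, 0.04,  0.8),
    "FrntCx": (3.0, 0.01,  1.0),
    "WhitM":  (1.5, 0.007, 0.7),
    "Occip":  (3.5, 0.02,  1.0),
    "Noise":  (0.7, 0.03,  1.5),
}

t = np.array([0.25, 0.75, 1.25, 1.75])   # first four frame centres, table 3.2
print(k(t, *params["CBL"]))              # start of the cerebellum TAC
```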

Noise

Raw PET data is usually considered to be Poisson distributed, but after reconstruction the noise is often approximated as normally distributed. The standard deviation, k_n, of the noise was calculated with equation 3.13 with parameters from table 3.1. The standard deviation of the added noise was spatially constant, which is a simplification of the noise typically seen in data reconstructed with FBP. Both spatially correlated and uncorrelated noise was studied. To correlate the noise, it was convoluted with a low-pass filter, followed by a multiplication with a scalar for each frame to compensate for the loss of standard deviation.

Frame   Length [min]   Time, t [min]
1       0.5            0.25
2       0.5            0.75
3       0.5            1.25
4       0.5            1.75
5       1              2.5
6       1              3.5
7       1              4.5
8       1              5.5
9       1              6.5
10      1              7.5
11      1              8.5
12      1              9.5
13      1              10.5
14      3              12.5
15      3              15.5
16      5              18.5
17      5              22.5
18      5              27.5
19      5              32.5
20      5              37.5
21      5              42.5
22      5              47.5
23      5              52.5
24      5              57.5

Table 3.2: Time protocol for a 24-frame scan (60 min) with the tracer PIB.

The same realization of the noise was used for both uncorrelated and correlated noise.

The synthetic datasets, x(t), were defined with the model

$$x(t) = \left( \sum_{i=1}^{4} k_i(t) \cdot v_i \right) * \rho_v + c_t \left( k_n(t) \cdot n \right) * \rho_n, \tag{3.14}$$

where v_i is a vector defining the spatial structure i, seen in figure 3.7(a). k_i(t) and k_n(t), also shown in figure 3.7, are the functions defined by equation 3.13 with parameters from table 3.1. n is a noise vector for the entire slice that has zero mean and unit variance. ρ_v and ρ_n are point-spread functions (low-pass filters) and c_t is a constant used to compensate for the reduction in standard deviation caused by the convolution.
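A compact sketch of the model, assuming Gaussian point-spread functions for ρ_v and ρ_n with illustrative widths, binary 128 × 128 masks v_i in `structures`, and the k/params sketch above; a single noise realization n is reused for all frames, reading equation 3.14 literally:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_dataset(structures, t, rng, sigma_v=1.5, sigma_n=1.0):
    """Synthetic dynamic dataset following equation 3.14 (sketch)."""
    n = rng.standard_normal((128, 128))   # noise with zero mean, unit variance
    smooth = gaussian_filter(n, sigma_n)  # spatially correlated noise, n * rho_n
    c = n.std() / smooth.std()            # c_t: restore the std lost in the convolution
    frames = []
    for ti in t:                          # one frame per time point (table 3.2)
        signal = sum(k(ti, *params[name]) * v for name, v in structures.items())
        frames.append(gaussian_filter(signal, sigma_v)
                      + c * k(ti, *params["Noise"]) * smooth)
    return np.stack(frames)
```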

Generated datasets

Three synthetic datasets were generated using the kinetic functions and structures described in section 3.2.5. The first set consisted of nothing more than the signal, which is the four regions convoluted with a point-spread function that gave the edges a blurry appearance, see figure 3.8. The second and third datasets had the same signal components as the first dataset, but with added Gaussian noise. The second dataset had uncorrelated noise, as seen in figure 3.9, and the third dataset had correlated noise, shown in figure 3.10.

Figure 3.8: Montage of the dataset without any noise. The sequence should be read along rows, starting from the upper left corner and ending in the lower right.

Pre-normalization

All datasets described in section 3.2.5 were pre-normalized with the ROM pre-normalization, pre-normalized to SV, BN pre-normalized with the background mask seen in figure 3.7(a), and HOPC pre-normalized.

Estimates

Since the signal in the synthetic datasets was known in advance, it was used to retrieve accurate estimates of the noise.

The standard deviation of the noise in the pre-normalized noisy datasets was calculated by subtracting the non-noisy dataset that had been pre-normalized with the same coefficients as the two noisy datasets.

In order to get a quantitative evaluation of the performance of the different pre-normalizations, reconstructions of early PCs were used and compared to the correct signal. To measure the error in the reconstructions, the MSE was calculated with

$$MSE = \frac{1}{N \cdot p} \sum_{(u,v,n) \in S} (x[u,v,n] - \tilde{x}[u,v,n])^2, \tag{3.15}$$

where N is the number of samples in an observation, p is the number of observations, (u, v, n) are the coordinates within a synthetic dataset S, x[u,v,n] is the signal dataset and x̃[u,v,n] is the reconstructed dataset.
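Equation 3.15 is simply the mean of the squared voxel-wise differences; as a one-function sketch:

```python
import numpy as np

def mse(x, x_tilde):
    """Eq. 3.15: x is the known signal dataset and x_tilde the dataset
    reconstructed from early PCs; both of shape (p, u, v)."""
    return np.mean((x - x_tilde) ** 2)
```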
