Improved Spatial Resolution in Segmented Silicon Strip Detectors


DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2019

Improved Spatial Resolution in Segmented Silicon Strip Detectors

EVA BERGSTRÖM IDA JOHANSSON

KTH ROYAL INSTITUTE OF TECHNOLOGY

SCHOOL OF ENGINEERING SCIENCES IN CHEMISTRY, BIOTECHNOLOGY AND HEALTH


This project was performed in collaboration with Prismatic Sensors AB.

Supervisor at Prismatic Sensors AB: Mats Danielsson

Improved Spatial Resolution in Segmented Silicon Strip Detectors

Förbättrad spatiell upplösning i segmenterade kiselstrippdetektorer

EVA BERGSTRÖM IDA JOHANSSON

Degree project in medical engineering, first level, 15 hp

Supervisors at KTH: Christel Sundberg, Mattias Mårtensson, Tobias Nyberg Examiner: Mats Nilsson

School of Engineering Sciences in Chemistry, Biotechnology and Health KTH Royal Institute of Technology

SE-141 86 Flemingsberg, Sweden
http://www.kth.se/cbh

2019


Abstract

Semiconductor detectors are attracting interest for use in photon-counting spectral computed tomography. In order to obtain a high spatial resolution, it is of interest to find the photon interaction position. In this work we investigate if machine learning can be used to obtain a sub-pixel spatial resolution in a photon-counting silicon strip detector with pixels of 10 µm.

Simulated charge distributions from events in one, three, and seven positions in each of three pixels were investigated using the MATLAB® Classification Learner application to determine the correct interaction position. Different machine learning models were trained and tested in order to maximize performance. With pulses originating from one and seven positions within each pixel, the model was able to find the originating pixel with an accuracy of 100% and 88.9% respectively. Further, the correct position within a pixel was found with an accuracy of 54.0% and 29.4% using three and seven positions per pixel respectively. These results show the possibility of improving the spatial resolution with machine learning.

Keywords: computed tomography, photon-counting, silicon strip detector, spatial resolution, machine learning, classification


Sammanfattning

Semiconductor detectors are of growing research interest for use in photon-counting computed tomography with spectral resolution. In order to obtain a high spatial resolution, it is of interest to find the original interaction position of the photon. This work investigates whether machine learning can be used to obtain a spatial resolution at the sub-pixel level in a photon-counting silicon strip detector with 10 µm pixels. The charge distributions from simulated interactions in one, three, and seven positions within each of three pixels were examined using the Classification Learner application in MATLAB® to determine the correct interaction position. Different machine learning models were trained and tested in order to maximize performance. When pulses from one and seven positions within the pixel were used, the model could find the correct pixel with an accuracy of 100% and 88.9% respectively. Furthermore, the correct position within a pixel could be determined with an accuracy of 54.0% and 29.4% when three and seven positions within each pixel were used, respectively. The results show that it would be possible to improve the spatial resolution with the help of machine learning.

Keywords: computed tomography, photon-counting, silicon strip detector, spatial resolution, machine learning, classification


Acknowledgements

We wish to thank Mats Danielsson for the opportunity to do this project. We would also like to express our gratitude to Christel Sundberg for all the help and support during the work.


Contents

1 Introduction
  1.1 Aim
  1.2 Limitation
2 Background
  2.1 Principles of CT
  2.2 Semiconductors
  2.3 Silicon Strip Detectors
    2.3.1 Detector Design
  2.4 Supervised learning
    2.4.1 Validation
    2.4.2 Different classifiers
3 Method
  3.1 Input signal
    3.1.1 Simulation
    3.1.2 Data processing
  3.2 Finding the interaction point
    3.2.1 Classification Learner
4 Results
  4.1 Analysis of electron tracks
  4.2 Classification results
5 Discussion
6 Conclusion
7 References
Appendices
  Appendix 1: Results of model evaluation
  Appendix 2: Code


1 Introduction

Medical X-ray imaging plays an essential role in today's healthcare as X-ray images are used to diagnose many different medical conditions. X-ray images are based on the difference in photon attenuation between different tissues in the body. Unlike X-ray radiography, in which two-dimensional projectional images are generated, computed tomography (CT) is used to create cross-sectional images of the body. The commercial CT detectors used today are energy-integrating, integrating the signal from many photons during a certain time span. However, there are multiple problems associated with this type of detection [1]. Firstly, it limits the spatial resolution. Secondly, the loss of energy information from each photon makes it difficult to differentiate between different types of soft tissue. Thirdly, in order to obtain X-ray images with good contrast, the radiation dose delivered to the patient is relatively high. As X-rays are ionizing, the risk of cancer induction increases with the radiation dose.

These problems are believed to be solved using photon-counting detectors (PCDs): detectors in which each X-ray photon is detected individually together with its energy. This would enable a higher signal-to-noise ratio and a higher spatial resolution [2]. The improved image quality could further enable 1) using a lower dose while maintaining the same image quality as an energy-integrating system, or 2) obtaining images at the same dose as an energy-integrating system but with better contrast [2]. Depending on the application it is sometimes desirable to have a high contrast, while in other cases it is desirable to lower the dose. Current research in photon-counting detectors revolves around semiconductor detectors [3], and in this thesis the focus is on silicon strip detectors.

When a photon interacts with a semiconductor detector, energy is deposited. The energy is deposited along an electron track and results in released electron-hole pairs [4, 5]. The electron-hole pairs move towards the detector electrodes due to a high voltage bias applied across the semiconductor material [1]. This induces a measurable current in the electrodes which is proportional to the photon energy [1]. The induced current is essentially the detected signal which after processing is used to create the image. As the electrons and holes are transported through the detector material, the electron track and exact interaction position are only known within the volume of a pixel. This limits the spatial resolution.

1.1 Aim

The aim of the project was to investigate if the spatial resolution in a silicon strip detector could be improved using smaller pixels in combination with machine learning. More specifically, the goal was to, based on the electron track and the energy deposited along it, investigate if the electron track could be detected by very small pixels in order to find the initial point of interaction of the photon.

1.2 Limitation

The project was based on simulations in two dimensions.


2 Background

The background research for this thesis consists of two main parts. The first part is about the hardware: the principles of CT detection, and more specifically silicon strip detectors. The second part is about the software: machine learning, supervised learning, and more specifically the Classification Learner application found in MATLAB®.

2.1 Principles of CT

X-ray imaging is based on measurements of X-ray attenuation. The transmission of the X-ray photon beam through a body is described by the Beer-Lambert law:

$$ I = I_0 \, e^{-\mu l} \qquad (1) $$

where I_0 is the initial intensity of the photon beam, I is the intensity of the photon beam after passing through the body, l is the length of the photon path through the body, and µ is the linear attenuation coefficient, which is material specific and energy dependent. Since different tissues have different attenuation properties it is possible to create an image of the patient by measuring µ along the photon path. In conventional X-ray imaging, the patient is illuminated with an X-ray beam from one angle, while in CT the patient is illuminated from many different angles. This results in multiple projections that, when computationally combined, form a three-dimensional image of the patient. In order to obtain accurate images, the spatial resolution of the detector plays a key part. The spatial resolution determines the level of detail in the images, e.g. with a high spatial resolution, small structures can be distinguished.
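As a minimal numerical illustration of equation (1), the MATLAB sketch below computes the transmitted intensity over a range of path lengths; the intensity and attenuation coefficient values are assumed for illustration only and are not taken from the thesis.

% Minimal sketch of the Beer-Lambert law, equation (1).
% I0 and mu are assumed illustrative values, not measured data.
I0 = 1e6;               % initial beam intensity (photons per second, arbitrary)
mu = 0.2;               % linear attenuation coefficient (1/cm), assumed
l  = 0:0.5:20;          % path lengths through the body (cm)
I  = I0 * exp(-mu * l); % transmitted intensity after each path length
semilogy(l, I);
xlabel('Path length l (cm)');
ylabel('Transmitted intensity I');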

The commercial CT detectors used today are energy-integrating and have an effective pixel width ranging from 0.5 to 0.625 mm [6]. They consist of either a scintillator combined with a photodiode or an ionization chamber [7]. In energy-integrating detectors the signals from multiple photons are added together during a certain time span [6]. This leads to the loss of energy information, and low energy photons, which carry more contrast information compared to high energy photons, can thereby not be differentiated in order to improve the image quality [7]. Energy-integrating detectors further add electronic noise, which leads to increased noise in the image [1]. A result of the decreased contrast is that different types of soft tissue become more difficult to distinguish. This is one of the major limitations of conventional CT technology. Other limitations are a relatively high required dose, and that different tissues can have resembling pixel values [1].

In photon-counting spectral CT every photon is counted individually and information about the energy is included. This improves the energy resolution and enables a better separation of different tissues [1]; furthermore, the dose to the patient can be decreased [6]. A PCD translates the signal from a photon into a pulse with a pulse height that corresponds to the photon energy, and to measure the pulse energy, the pulse height is compared to a set of energy thresholds [1].

Among the challenges for PCDs is the risk of pulse pileup, which describes how two or more photon events occur so close in time that the resulting pulses are not well-separated [8]. This can lead to count loss or spectral distortion [9]. Another challenge is charge-sharing: when two pixels detect the same event and register it as two different events [1]. To avoid the former, a fast detector that can handle a high count rate is needed. One way of doing this is by using a detector that is segmented along the direction of the incident X-rays [8].


2.2 Semiconductors

The current research on PCDs is focused on semiconductor detectors of either silicon or cadmium telluride/cadmium zinc telluride (CdTe/CZT) [3]. When a photon interacts in a semiconductor detector, a high energy electron is released from one of the atoms in the material [5]. This electron collides with other electrons in the semiconductor material, resulting in a cascade of released electrons. For each released electron, a positively charged hole is left in the valence band of the atom; together they form an electron-hole pair [4, 5]. The energy required to create one electron-hole pair is material dependent, e.g. 3.6 eV for silicon, and the total number of released electron-hole pairs is proportional to the deposited photon energy [4, 5]. A high voltage bias is applied across the semiconductor material which causes the charge carriers to drift: electrons to electrodes on one side of the detector and holes to electrodes on the opposite side [1]. As the charge carriers drift towards each respective electrode, a current is induced in each electrode [4]. The current signal is then processed and compared to the energy thresholds: if the pulse height is higher than the lowest threshold value, a count is registered [1].
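To make the proportionality concrete, a short sketch of the pair count for a 50 keV photon in silicon, using the 3.6 eV pair-creation energy quoted above:

% Number of electron-hole pairs released by a 50 keV photon in silicon,
% using the 3.6 eV per pair quoted in the text [4, 5].
E_photon = 50e3;                     % deposited photon energy (eV)
E_pair   = 3.6;                      % energy per electron-hole pair in silicon (eV)
N_pairs  = round(E_photon / E_pair)  % approximately 13 900 pairs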

As the charge carriers drift through the detector they are also affected by Coulomb repulsion and thermal diffusion effects [10]. This increases the risk of charge sharing which occurs when charge carriers are located at the border between two pixels and the signal is split between the electrodes and registered by both [1]. Each electrode then measures a signal that is smaller than the actual deposited energy [10].

The signal is created by two different types of interactions between the incident photons and the semiconductor material: photoelectric interaction and Compton interaction. In the photoelectric effect the photon deposits all its energy to the released electron, while a Compton interaction only results in the photon depositing some of its energy. After a photoelectric or Compton interaction in the semiconductor, one electron with a high kinetic energy is released at first. This high energy electron then interacts with other atoms, resulting in an electron track along which electron-hole pairs are created [4, 5]. In figure 1, three simulated electron tracks are shown. All three electron tracks have been simulated with the same initial conditions.

Figure 1: Examples of simulated electron tracks resulting from 50 keV photon interactions. The x direction is defined as along the silicon wafer while the y direction is the wafer thickness. The electron tracks were simulated by the Physics of Medical Imaging group at KTH [11].


According to the Shockley-Ramo theorem [12] the induced current i(t) in the electrode can be calculated as

$$ i(t) = -q \, \vec{v} \cdot \vec{E}_W \qquad (2) $$

where q is the charge, v is the velocity of the charge carrier, and E_W is the weighting field, which is determined by setting the electrode used for measurement to unit potential and all other electrodes to zero [12, 13]. The velocity of the charge carrier is calculated as

$$ v = \mu E \qquad (3) $$

where µ is the carrier mobility in the material and E is the electric field [13].
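A one-dimensional sketch of equations (2) and (3) follows. The wafer thickness and bias voltage match the values used later in the simulations (500 µm, 300 V), while the planar weighting field E_W = 1/d and the textbook electron mobility are simplifying assumptions made here for illustration.

% One-dimensional sketch of the Shockley-Ramo theorem, equations (2)-(3).
% Assumes a homogeneous field and a planar weighting field E_W = 1/d;
% the mobility is a textbook value for electrons in silicon.
q    = 1.602e-19;       % elementary charge (C)
d    = 500e-6;          % wafer thickness (m)
E    = 300 / d;         % electric field from a 300 V bias (V/m)
mu_e = 0.135;           % electron mobility in silicon (m^2/(V s))
v    = mu_e * E;        % drift velocity, equation (3)
E_W  = 1 / d;           % planar weighting field (1/m), assumed
i_t  = q * v * E_W      % magnitude of the induced current (A), equation (2)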

2.3 Silicon Strip Detectors

A challenge in the development of PCDs is handling the high count rates in the detector [1]. This problem can be solved in different ways depending on which semiconductor material is used.

Silicon, with atomic number 14, is a semiconductor with 4 electrons in the valence band, shared between atoms via covalent bonds [14]. Silicon has several advantages as a detector material. One is the mature manufacturing process and another is the lower production cost compared to other semiconductor materials [1, 3]. Another is that the relatively high mobilities of holes and electrons result in a fast charge collection time [3]. However, due to the low atomic number, there is a high fraction of Compton interactions as opposed to photoelectric interactions at the energies used in CT. In Compton interactions only a part of the photon energy is deposited. This results in a loss of spectral information, as the entire photon energy is not registered [12]. However, counts from Compton and photoelectric interactions are well separated spectrally, since the energy deposited in Compton interactions is too low to overlap with the photoelectric part of the spectrum [3]. Therefore, the Compton interactions do not interfere with the spectral resolution of the counts from photoelectric interactions. The low atomic number also reduces the photon attenuation and thereby the dose efficiency, which however can be counteracted by using a relatively thick detector [10]. For silicon, the relatively low photon attenuation enables segmentation of the detector in the direction of the incident X-rays [1]. The segmentation can be seen as multiple readout layers in which each layer handles only a fraction of the incident X-ray flux [1, 10]. This largely mitigates pileup effects.

2.3.1 Detector Design

A photon-counting silicon strip detector with spectral resolution consists of a number of silicon strip wafers [10], placed in the CT gantry as shown in figure 2a. A schematic illustration of a silicon wafer is shown in figure 2b. In [10] a wafer geometry is presented in which the wafer thickness is 500 µm and the width of each wafer is 20 mm, divided into 50 columns, where each column corresponds to one pixel in the image. This creates a pixel window of 0.4 x 0.5 mm². The columns are divided into 16 depth segments, with one electrode in each segment and column. The total depth is 30 mm, where the depth of each segment increases exponentially to enable an equal distribution of counts across all segments. This results in a total of 800 detector elements, each with a p-type electrode detecting the induced currents. Each electrode is connected to a channel in one of the application-specific integrated circuits (ASICs). Each ASIC has 160 channels and 8 energy thresholds per channel that are used to discretize the energy of the photon signal. The ASICs amplify and integrate the signal, prevent counts from electronic noise, and in general enable the photon-counting and energy registration [1, 8, 10].
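As an illustration of why exponentially increasing segment depths equalize the counts, the sketch below divides a 30 mm deep detector into 16 segments that each absorb an equal share of the incident flux under Beer-Lambert attenuation; the attenuation coefficient is an assumed illustrative value, not a thesis parameter.

% Sketch: choose 16 segment boundaries over 30 mm so that each segment
% absorbs an equal share of the incident flux (Beer-Lambert model).
mu   = 0.05;                    % attenuation coefficient (1/mm), assumed
L    = 30;                      % total detector depth (mm)
N    = 16;                      % number of depth segments
fTot = 1 - exp(-mu*L);          % fraction absorbed over the full depth
F    = (1:N)/N * fTot;          % cumulative absorbed fraction at each boundary
z    = -log(1 - F) / mu;        % segment boundaries (mm)
depths = diff([0, z])           % segment depths grow exponentially with depth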


Figure 2: (a) Schematic illustration of how the wafers are placed in a CT gantry. V is the applied bias voltage. (b) Schematic illustration of a segmented silicon strip wafer for spectral CT. Each column corresponds to a detector pixel and each segment within a column contains an electrode.

2.4 Supervised learning

Supervised learning is a type of machine learning that, as opposed to unsupervised learning, uses training data with known responses to train a model to make predictions [15]. After training a model, it can be used to make predictions of the response variables for new data. Supervised learning algorithms can be divided into classification and regression algorithms. Classification is used when the responses are discrete categories called response classes, and regression is used when the response is continuous [15], as shown in figure 3.

Figure 3: Example of (a) classification and (b) regression.

In MATLAB, several different tools and built-in applications that create and train models for machine learning are provided. The input data consists of predictor and response variables [16]. They can for example be numeric or string vectors, cell arrays of character vectors, or character arrays. The predictors are the extracted features from the raw data representing measurements of each attribute to be analyzed, and the response is the desired output [16].

The Classification Learner is an application in the MATLAB Statistics and Machine Learning Toolbox™ that trains models to classify data using a chosen classification algorithm [17]. Different settings can be used during training to alter the performance of the model, such as various validation schemes to estimate the model performance during training, or different classifiers using various algorithms for training the model [17].


2.4.1 Validation

The information in this section is retrieved from the MATLAB documentation [18] unless otherwise stated.

When training a model, the desired result is a model that is good at generalization, meaning it finds a pattern in the input training data connected to the correct response [19]. This enables predictions of the response for new, unseen data based on the pattern in the input data. With a good generalization it is possible to obtain a high accuracy, meaning a large proportion of correctly classified responses. When the model simply learns the training data, without generalization, it is called overfitting. Overfitting will likely result in a high accuracy during training but a low accuracy when testing with unseen data.

One way to avoid overfitting is by using validation while training the model. Validation is an analysis of the accuracy of the model during training, by testing the model on unseen data without letting the model know the responses. Validation can thus be a help when selecting which model to use, by estimating the model's performance on unseen data compared to the data used for training. There are different types of validation schemes, and by choosing the validation scheme before training the models it is possible to use the same validation scheme for comparison of all models in one session.

Cross-validation is one type of validation scheme, where the input data is divided into a selected number of folds. If k folds are selected, the data is divided into k separate folds, working as k separate data sets. For each fold, the Classification Learner trains the chosen model using the data from the other folds and then tests and estimates model performance using the data in the current fold. Finally, the Classification Learner calculates the average error over all folds. Cross-validation is recommended for small data sets.
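The same k-fold scheme can also be reproduced programmatically outside the app. A minimal sketch using a hypothetical predictor matrix X and response vector y (not the thesis data):

% Minimal sketch of 20-fold cross-validation from the command line.
% X (observations x features) and y (class labels) are hypothetical.
tree   = fitctree(X, y);               % train a decision tree classifier
cvTree = crossval(tree, 'KFold', 20);  % partition the data into 20 folds
cvAcc  = 1 - kfoldLoss(cvTree)         % average accuracy over the folds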

Another type of validation is holdout validation, where a selected percentage of the data is used as a test set during training of the model. The Classification Learner first uses the training set to train the model and then estimates the model’s performance using the test set. Holdout validation is recommended for large data sets.

If no validation is used, the Classification Learner trains the model with all the data and then computes the error on the same data. Therefore, it is not known whether the model has achieved generalization or simply remembers the specific data, where the latter results in overfitting.

2.4.2 Different classifiers

The information in this section is retrieved from the MATLAB documentation [20] unless otherwise stated.

The Classification Learner can train models of various types of classifiers using different training algorithms, including decision trees and ensemble classifiers. By using the Parallel Pool tool from the MATLAB Parallel Computing Toolbox™ it is possible to train multiple models at the same time.

Decision trees are fast and easy to interpret, with small memory usage. They predict a response from the input data by starting in the root node and following the decisions in the tree down to a leaf node carrying the response. Each step includes examining the value of one predictor and then continuing to the next node depending on that value. The decision trees used in the Classification Learner are binary. See figure 4 for an example tree.

Figure 4: Example of a decision tree.

Ensemble classifiers merge the results from many simpler models into one more advanced and complex ensemble model. There are different types of ensemble classifiers, and depending on which algorithms are used the model obtains different qualities. One type of ensemble classifier is bootstrap-aggregated trees, also called bagged trees, which combine many decision trees to decrease overfitting and increase generalization [21].
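A bagged-trees model equivalent to the app's "Bagged Trees" preset can also be trained from the command line; a minimal sketch with hypothetical training data X, y and new observations Xnew:

% Sketch of a bootstrap-aggregated (bagged) tree ensemble.
% X, y and Xnew are hypothetical; they stand in for the thesis data.
ens   = fitcensemble(X, y, 'Method', 'Bag');  % bag many decision trees
label = predict(ens, Xnew);                   % classify new observations

Bagging trains each tree on a bootstrap resample of the training set and takes a majority vote over the trees, which is what reduces overfitting relative to a single deep tree.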

When a satisfying model has been obtained it is possible to export the model or automatically generate a MATLAB function for further use of the model.


3 Method

To investigate how the spatial resolution in a silicon strip detector can be improved using smaller pixels, four different classification tasks were addressed using machine learning:

• Pixel classification based on data with one interaction position in each pixel: finding the pixel in which the interaction occurred.

• Pixel classification based on data with seven interaction positions in each pixel.

• Sub-pixel classification based on data with three interaction positions in each pixel: finding the position within the pixel in which the interaction occurred.

• Sub-pixel classification based on data with seven interaction positions in each pixel.

The accuracy was calculated for each of the models used for the classification. A subset of the electron tracks was also analyzed and a mean electron track length was calculated.

3.1 Input signal

3.1.1 Simulation

The input signal used throughout the work is based on the simulated signals from photons of 50 keV that interact with the detector material through photoelectric interaction at different positions. The interaction positions were chosen in order to obtain a set of equally distributed positions in the x direction as shown in figure 5b. The interaction in the y direction was chosen as 250 µm for all simulated pulses (see figure 5a). The width of each pixel was 10 µm.

Figure 5: (a) A silicon wafer with the pixels (electrodes) in the x direction and the wafer thickness in the y direction. Photon interactions were simulated in pixel 24, 25 and 26 at y = 250 µm with a bias voltage of 300 V across the wafer. (b) The relative positions of the simulated photon interactions within each pixel.

Data was provided from an ongoing research project conducted by the Physics of Medical Imaging group at KTH. The electron tracks resulting from each interacting photon were simulated in Penelope [11]. In the simulations, the bias voltage between the detector electrodes was set to 300 V and a simplified model of the electric field was used, in which the electric field is homogeneous over the entire silicon diode. The induced current resulting from the electron-hole transport through the material was simulated using the Shockley-Ramo theorem. Each pulse was simulated with a discretization of 2 ns and a total of 500 pulses were simulated for each position. The simulation included 50 pixels, where photon interactions were simulated in three neighboring pixels: pixel 24, 25, and 26 (see figure 5a).


3.1.2 Data processing

Before classifying the pulses, the data was processed in MATLAB (The MathWorks, Inc., Natick, Massachusetts, USA, version R2019a) to obtain the desired training data and corresponding responses. The training data was obtained by calculating the total charge measured in each pixel following a photon interaction. The charge Q was obtained by integrating the measured current i(t) over the time span t according to:

$$ Q = \int_0^t i(\tau) \, d\tau \qquad (4) $$
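With the 2 ns time discretization used in the simulations (section 3.1.1), the integral in equation (4) reduces to a sum over time samples. A minimal sketch, where pulse is a hypothetical matrix of simulated currents (rows = time samples, columns = pixels):

% Sketch of equation (4) for a discretized pulse: total charge per pixel.
% 'pulse' is a hypothetical current matrix, sampled at dt = 2 ns.
dt = 2e-9;               % simulation time step (s)
Q  = sum(pulse, 1) * dt; % charge collected in each pixel (C)
% The Appendix 2 scripts use sum(...) directly, i.e. charge in units of dt.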

From a set of 250 simulated electron tracks, a mean electron track length was calculated in the x direction (along the wafer) and in the y direction (wafer thickness).

3.2 Finding the interaction point

For the first classification task, simulated interactions from the position in the middle of each pixel (position 0.5 in figure 5b) were used, resulting in three response classes for the classification model, one for each position. In the second task, seven positions within each pixel were used (all positions in figure 5b), but with only one response class per pixel, resulting in a total of three response classes. In the third task, three positions in each pixel were used (positions 0.25, 0.5, and 0.75 in figure 5b), resulting in a total of 9 response classes. In the fourth and final classification task, all seven positions were used in each pixel, resulting in 21 response classes.

In each of the classification tasks the input was the total charge measured in each pixel for a specific position and interaction, and the response variable was the correct position. A model was then trained using these inputs and responses, and tested on unseen data to return a predicted response.

3.2.1 Classification Learner

For the classification tasks the MATLAB Classification Learner application was used. A set of classification models were trained using different classification algorithms and validated using cross-validation and holdout validation.

Cross-validation was first tested using k = 5, k = 10, k = 15, and k = 20 folds. Holdout validation was then tested with 15%, 25% and 35% of the data for validation. All classification algorithms were tested for all validation settings. They were trained in parallel and the three with the highest performance were listed in a table. The procedure of evaluating the different settings was first done with one simulated interaction position in each pixel, then repeated with seven positions with only one response class per pixel, and then with three and seven positions per pixel respectively. During this step, 98% of the data (490 samples per position) was used for training.

After training models of all types of classifying algorithms available in the Classification Learner with the different validation settings, the model performances were compared for the different numbers of positions per pixel. The model with the algorithm and validation settings that generated the best overall performance was then retrained using 90% of the data (450 samples per position) for one, three and seven positions per pixel respectively. The resulting models were exported and further tested with the unseen remaining 10% of the data (50 samples per position) by letting the models predict the responses for the data without knowing the correct, true responses (for full code, see Appendix 2). The predicted responses were then compared with the true responses and the accuracy was calculated as the number of correct classifications divided by the total number of unseen samples. This was done both for pixel 25 separately and for pixels 24, 25 and 26 combined.
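The accuracy computation described here amounts to one line of MATLAB; a sketch with hypothetical vectors prediction (model output) and trueResponses (known labels):

% Sketch of the accuracy measure: fraction of correctly classified samples.
% 'prediction' and 'trueResponses' are hypothetical label vectors.
accuracy = sum(prediction == trueResponses) / numel(trueResponses);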


4 Results

The results are presented first for the analysis of the electron tracks and then for the classification with models from the Classification Learner.

4.1 Analysis of electron tracks

Figure 6 shows three examples of the charge distribution in a pulse from a simulated photon interaction. The three interactions in this example were all simulated with the same initial conditions: a photon of energy 50 keV interacting in the middle position of pixel 25.

Figure 6: Example of pulses from three simulated photon interactions, all occurring in the middle of pixel 25. The total charge in each of the three simulated interactions corresponds to a photon energy of 50 keV.

Table 1 shows the mean length of an electron track resulting from a photon of 50 keV. The mean length is given in both the x and y direction as defined in figure 5a.

Table 1: Mean length for an electron track created by a photon with energy 50 keV. The x direction is defined as along the silicon wafer while the y direction is the wafer thickness.

Direction    Mean length [µm]
x            8.0
y            8.0

4.2 Classification results

From all of the tested models, the best performance was obtained with a model of type bagged trees, trained with cross-validation with 20 folds (BT20). The full test results are presented in Appendix 1.

Table 2 shows the calculated accuracy in percent for pixel 25, along with the combined accuracy for pixels 24, 25, and 26. The classifier model was of type BT20, trained with 90% of the original data and tested with the remaining 10% (unknown to the model), respectively for one position, seven positions with one response class, three positions, and seven positions per pixel.


Table 2: Accuracy for pixel 25 along with the accuracy for pixels 24, 25 and 26 combined, for a model of type bagged trees trained with cross-validation using 20 folds, for one position, seven positions with one response class, three positions, and seven positions per pixel.

Positions per pixel    Response classes per pixel    Accuracy, pixel 25    Accuracy, pixels 24, 25 and 26 combined
1 position             1 response class              100%                  99.3%
7 positions            1 response class              88.9%                 90.3%
3 positions            3 response classes            54.0%                 60.7%
7 positions            7 response classes            29.4%                 31.6%


5 Discussion

After training all the available models with different settings and 490 samples per position, the model with the best accuracy varied between the tasks. However, for both tasks three and four, performing sub-pixel classification, the best model was of type bagged trees trained with cross-validation with 20 folds (BT20). Since the main interest was to classify the positions within each pixel rather than to just find the pixel, the further analyses were performed using BT20 models. To be able to test the models with a larger amount of unseen data, a new model was trained for each of the four tasks with the BT20 settings, but with 450 samples per position. The remaining 50 samples per position were later used for testing of the model.

The first task was to use simulated photon interactions in one position per pixel and train models in the Classification Learner to find in which pixel each interaction occurred. The final trained model, of type BT20 trained with 450 samples, obtained a very high accuracy when tested on unseen data, ~99% for pixels 24, 25 and 26 combined, as shown in table 2. All data used for this training and testing was from simulations of photon interactions exactly in the middle of each pixel, which is an ideal case for the model to differentiate the different pixels.

In reality, photons can interact anywhere within the pixel, so to obtain a more authentic classification of the pixels, the second task was to train the model with photon interactions from all seven positions within each pixel. The goal was to tell in which pixel the interaction occurred, and to calculate the accuracy of the trained model to affirm that the pixel size was large enough for the pulses to be separated between different pixels. The trained model obtained a relatively high accuracy, ~90% for pixels 24, 25 and 26 combined as shown in table 2, showing that the model was capable of correctly classifying ~90% of the pulses when using 10 µm pixels. As expected, the accuracy for this task was slightly lower than when training with only the middle position, since this was not an ideal case.

The third and fourth tasks were to use simulated interactions at first three and then seven positions per pixel and train a model to find at which point within the pixel the interaction occurred. For three positions per pixel the calculated overall accuracy was ~61% for pixels 24, 25 and 26 combined. For seven positions this number was ~32%. One factor that could possibly have affected these values is that both pixel 24 and pixel 26 are edge pixels. Since no interactions occurred in pixel 23 or 27 (nor in any other pixels than pixels 24, 25 and 26), no response categories were created for those pixels during training. Therefore, interactions within pixels 24 and 26 cannot be confused with interactions in the adjacent pixels, 23 and 27 respectively. This could lead to a greater proportion of correct classifications in these pixels compared to pixels that have adjacent pixels containing possible response categories on both sides, such as pixel 25 in this case. The possibly increased accuracy in these pixels might affect the overall accuracy, since two out of three pixels are edge pixels in this case. If so, it must be taken into consideration, since in reality photon interactions can occur in all pixels in the detector and only two out of 50 pixels would be edge pixels.

To find an accuracy for each model which was not affected by the possibly increased accuracy in the edge pixels, the accuracy of each model was also calculated for pixel 25 alone, the pixel in the middle. The results are presented in table 2. With one position per pixel there was essentially no difference in accuracy between all three pixels combined and pixel 25 alone. Because of the size of the data set, the difference between 100% for pixel 25 and 99.3% for all three pixels combined was due to only one incorrect classification, which can be considered random. The results can therefore be equated, showing no signs of edge effects. In the other cases, the accuracy was slightly, but not significantly, lower than for all three pixels combined. This shows that the edge effects for the positions used in this work are small and can be considered negligible. If the difference in accuracy instead had been large, it would have been preferable to include classes for pixels 23 and 27 as well. This could be done by recycling some of the data for the three original pixels: translating the interaction positions into the two new pixels. Pixels 24 and 26 would then no longer be edge pixels and would therefore contribute more representative results to the calculated accuracy.

Figure 6 displays the charge distribution from three interactions with the same interaction position. The pulses originating from the electrons and holes look quite different from each other and are spread out over multiple pixels, even when the photons interact with the detector in the exact same position and with the same energy. There are several reasons for this. Firstly, when the incoming photon hits the first electron in the detector, its high energy causes the electron to move around in random directions until it has lost its energy to other electrons, creating a charge cloud along the electron track. The size of the charge cloud depends on the length and shape of the electron track. Since the direction of each electron track is random, the charge cloud is not necessarily centered around the photon interaction point but rather around a random point depending on the electron track. Also, the distribution of created electrons and holes is not necessarily the same along the electron track, further affecting the size and location of the charge cloud. Secondly, the charge cloud grows as it moves towards the electrode due to diffusion effects, further smearing the charge distribution. Therefore, the detected pulses are spread out over multiple pixels and the pixel with the highest measured charge is not always the pixel in which the interaction occurred, making it more complicated for the model to find the real interaction point.

As described before, the main reason for the initial size of the charge cloud and the difference between its center and the interaction point is that the electron tracks have a length, pointing in random directions from the initial interaction point. The mean length for an electron track of 50 keV is about 8 µm in both the x and y direction (see table 1), while the width of each pixel is 10 µm. Since the length of the electron track is of the same order of magnitude as the width of the pixel, it complicates the process of finding the initial point of interaction within the pixel and makes it difficult to obtain a spatial resolution better than the length of the electron track. However, if the electron tracks were very short or point-like, the charge distribution in the pixels could be more similar between different pulses, and thereby the classification of the pulses could be easier, possibly resulting in a higher accuracy. Especially when classifying three and seven positions within each pixel, this would be of great importance to further improve the spatial resolution. The length of the electron tracks decreases as the energy decreases; therefore, it would be of interest to investigate the same classification tasks with tracks resulting from a lower photon energy.

The size of the charge cloud is also affected by the diffusion effect. As the amount of diffusion increases with the time it takes for the charge carriers to drift through the detector, the thickness of the silicon wafer affects the amount of diffusion. However, if a thinner wafer were used to decrease diffusion, other problems could arise, such as the need for a higher number of wafers in the CT gantry to cover an equally large detecting area.

Another factor affecting the accuracy of the classification is the amount of data used for training. A better generalization might be possible with a larger set of training data, and thereby possibly a higher accuracy. Using a more advanced algorithm for the classification might also further improve accuracy.

The results of this thesis indicate the possibility of improving the spatial resolution with the help of machine learning. Therefore, it would be reasonable to continue the research in this area, using larger data sets and more advanced machine learning algorithms in order to achieve results that could be clinically useful. If the continued research leads to a practical solution implementable in the healthcare system, it could be possible to improve healthcare and make more accurate diagnoses.

6 Conclusion

We have investigated a number of classification tasks using machine learning in order to improve the spatial resolution in a segmented photon-counting silicon strip detector. We show that it is possible to separate 10 µm pixels with photon events originating in seven different positions per pixel with an accuracy of 88.9%. We also show that it is possible to differentiate between photon events that occur in three different regions of a 10 µm pixel with an accuracy of 54.0%. The results indicate that it is possible to further improve the spatial resolution without further reducing the pixel size by using a more advanced machine learning algorithm to analyze the measured charge distribution in each pixel.


7 References

[1] K. Taguchi and J. S. Iwanczyk, "Vision 20/20: Single photon counting x-ray detectors in medical imaging," Medical Physics, vol. 40, no. 10, 2013.

[2] M. Persson, Spectral Computed Tomography with a Photon-Counting Silicon-Strip Detector. PhD thesis, KTH Royal Institute of Technology, 2016.

[3] H. Bornefalk and M. Danielsson, "Photon-counting spectral computed tomography using silicon strip detectors: A feasibility study," Physics in Medicine and Biology, vol. 55, no. 7, pp. 1999–2022, 2010.

[4] M. Brigida, C. Favuzzi, P. Fusco, F. Gargano, N. Giglietto, F. Giordano, F. Loparco, B. Marangelli, M. N. Mazziotta, N. Mirizzi, S. Rainò, and P. Spinelli, "A new Monte Carlo code for full simulation of silicon strip detectors," Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 533, no. 3, pp. 322–343, 2004.

[5] C. Xu, M. Danielsson, and H. Bornefalk, "Validity of spherical approximations of initial charge cloud shape in silicon detectors," Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 648, pp. 190–193, 2011.

[6] W. Zhou, J. I. Lane, M. L. Carlson, M. R. Bruesewitz, R. J. Witte, K. K. Koeller, L. J. Eckel, R. E. Carter, C. H. McCollough, and S. Leng, "Comparison of a photon-counting-detector CT with an energy-integrating-detector CT for temporal bone imaging: A cadaveric study," American Journal of Neuroradiology, vol. 39, no. 9, pp. 1733–1738, 2018.

[7] J. Hsieh, Computed Tomography: Principles, Design, Artifacts, and Recent Advances. Bellingham, WA: SPIE, 3rd ed., 2015.

[8] C. Xu, M. Danielsson, S. Karlsson, C. Svensson, and H. Bornefalk, "Preliminary evaluation of a silicon strip detector for photon-counting spectral CT," Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 677, pp. 45–51, 2012.

[9] A. S. Wang, D. Harrison, V. Lobastov, and J. E. Tkaczyk, "Pulse pileup statistics for energy discriminating photon counting x-ray detectors," Medical Physics, vol. 38, no. 7, pp. 4265–4275, 2011.

[10] X. Liu, H. Bornefalk, H. Chen, M. Danielsson, S. Karlsson, M. Persson, C. Xu, and B. Huber, "A silicon-strip detector for photon-counting spectral CT: Energy resolution from 40 keV to 120 keV," IEEE Transactions on Nuclear Science, vol. 61, no. 3, pp. 1099–1105, 2014.

[11] F. Salvat, J. M. Fernández-Varea, and J. Sempau, "PENELOPE-2006: A code system for Monte Carlo simulation of electron and photon transport," 2006.

[12] H. Bornefalk, C. Xu, C. Svensson, and M. Danielsson, "Design considerations to overcome cross talk in a photon counting silicon strip detector for computed tomography," Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 621, no. 1-3, pp. 371–378, 2010.

[13] H. Spieler, Semiconductor Detector Systems. New York: OUP Oxford, 2005.

[14] "Kisel," in Nationalencyklopedin. [Online]. Available: http://www.ne.se/uppslagsverk/encyklopedi/lång/kisel [Accessed: March 23, 2019].

[15] The MathWorks, Inc., "Supervised Learning." mathworks.com. [Online]. Available: https://se.mathworks.com/discovery/supervised-learning.html# [Accessed: April 19, 2019].

[16] The MathWorks, Inc., "Supervised Learning Workflow and Algorithms." mathworks.com. [Online]. Available: https://se.mathworks.com/help/stats/supervised-learning-machine-learning-workflow-and-algorithms.html [Accessed: May 16, 2019].

[17] The MathWorks, Inc., "Classification Learner." mathworks.com. [Online]. Available: https://se.mathworks.com/help/stats/classificationlearner-app.html [Accessed: April 19, 2019].

[18] The MathWorks, Inc., "Select Data and Validation for Classification Problem." mathworks.com. [Online]. Available: https://se.mathworks.com/help/stats/select-data-and-validation-for-classification-problem.html [Accessed: April 19, 2019].

[19] The MathWorks, Inc., "Machine Learning with MATLAB." mathworks.com. [Online]. Available: https://se.mathworks.com/campaigns/offers/machine-learning-with-matlab.confirmation.html?elqsid=1554187971305&potential_use=Student [Accessed: April 2, 2019].

[20] The MathWorks, Inc., "Choose Classifier Options." mathworks.com. [Online]. Available: https://se.mathworks.com/help/stats/choose-a-classifier.html [Accessed: April 22, 2019].

[21] The MathWorks, Inc., "TreeBagger." mathworks.com. [Online]. Available: https://se.mathworks.com/help/stats/treebagger.html [Accessed: May 8, 2019].


Appendices

Appendix 1: Results of model evaluation

This appendix contains the results of the test of different models in the Classification Learner.

1 position per pixel

Tables A1 and A2 show the results for the three best models with different settings for cross-validation and holdout validation respectively, trained with 1 position per pixel.

Table A1: Cross-validation, 1 position

Number of folds    Algorithm used for training    Accuracy according to the Classification Learner
5 folds            Boosted trees                  0.969
                   Fine tree                      0.959
                   Medium tree                    0.954
10 folds           Boosted trees                  0.969
                   Fine tree                      0.959
                   Medium tree                    0.954
15 folds           Boosted trees                  0.969
                   Fine tree                      0.956
                   RUSBoosted trees               0.948
20 folds           Boosted trees                  0.970
                   Fine tree                      0.965
                   Medium tree                    0.952
                   RUSBoosted trees               0.952


Table A2: Holdout validation, 1 position

Percentage of data for validation    Algorithm used for training    Accuracy according to the Classification Learner
15%                                  Boosted trees                  0.941
                                     RUSBoosted trees               0.914
                                     Medium tree                    0.914
25%                                  Boosted trees                  0.967
                                     RUSBoosted trees               0.956
                                     Fine tree                      0.946
                                     Medium tree                    0.946
35%                                  Boosted trees                  0.981
                                     RUSBoosted trees               0.963
                                     Medium tree                    0.963


7 positions and 1 response class per pixel

Tables A3 and A4 show the results for the three best models with different settings for cross-validation and holdout validation respectively, trained with 7 positions and 1 response class per pixel.

Table A3: Cross-validation, 7 positions and 1 response class

Number of folds    Algorithm used for training    Accuracy according to the Classification Learner
5 folds            Bagged trees                   0.899
                   Fine tree                      0.889
                   Boosted trees                  0.885
10 folds           Bagged trees                   0.900
                   Fine tree                      0.889
                   Boosted trees                  0.888
15 folds           Bagged trees                   0.902
                   Fine tree                      0.888
                   Boosted trees                  0.885
20 folds           Bagged trees                   0.899
                   Fine tree                      0.886
                   Boosted trees                  0.884

Table A4: Holdout validation, 7 positions and 1 response class

Percentage of data for validation    Algorithm used for training    Accuracy according to the Classification Learner
15%                                  Bagged trees                   0.896
                                     Fine tree                      0.879
                                     Boosted trees                  0.873
25%                                  Bagged trees                   0.902
                                     Boosted trees                  0.886
                                     Fine tree                      0.871
35%                                  Fine tree                      0.894
                                     Bagged trees                   0.886
                                     Boosted trees                  0.884


3 positions per pixel

Tables A5 and A6 show the results for the three best models with different settings for cross-validation and holdout validation respectively, trained with 3 positions per pixel.

Table A5: Cross-validation, 3 positions

Number of folds    Algorithm used for training    Accuracy according to the Classification Learner
5 folds            Bagged trees                   0.571
                   Fine tree                      0.571
                   Boosted trees                  0.495
10 folds           Bagged trees                   0.588
                   Fine tree                      0.566
                   Boosted trees                  0.501
15 folds           Bagged trees                   0.591
                   Fine tree                      0.573
                   Boosted trees                  0.500
20 folds           Bagged trees                   0.594
                   Fine tree                      0.568
                   Boosted trees                  0.500

Table A6: Holdout validation, 3 positions

Percentage of data for validation    Algorithm used for training    Accuracy according to the Classification Learner
15%                                  Bagged trees                   0.570
                                     Fine tree                      0.552
                                     Boosted trees                  0.496
25%                                  Bagged trees                   0.561
                                     Fine tree                      0.554
                                     Boosted trees                  0.474
35%                                  Bagged trees                   0.566
                                     Fine tree                      0.545
                                     Boosted trees                  0.479


7 positions per pixel

Tables A7 and A8 show the results for the three best models with different settings for cross-validation and holdout validation respectively, trained with 7 positions per pixel.

Table A7: Cross-validation, 7 positions

Number of folds    Algorithm used for training    Accuracy according to the Classification Learner
5 folds            Bagged trees                   0.325
                   Fine tree                      0.302
                   Subspace KNN                   0.230
10 folds           Bagged trees                   0.326
                   Fine tree                      0.305
                   Subspace KNN                   0.231
15 folds           Bagged trees                   0.331
                   Fine tree                      0.305
                   Subspace KNN                   0.230
20 folds           Bagged trees                   0.336
                   Fine tree                      0.300
                   Subspace KNN                   0.229

Table A8: Holdout validation, 7 positions

Percentage of data for validation    Algorithm used for training    Accuracy according to the Classification Learner
15%                                  Bagged trees                   0.307
                                     Fine tree                      0.279
                                     Boosted trees                  0.227
25%                                  Bagged trees                   0.319
                                     Fine tree                      0.286
                                     Subspace KNN                   0.226
35%                                  Bagged trees                   0.319
                                     Fine tree                      0.300
                                     Boosted trees                  0.234


Appendix 2: Code

The following code was written in MATLAB by Eva Bergström and Ida Johansson and was used for processing and analysis of the simulated data. The code is divided into four scripts, one for each classification task.

1 position per pixel

% Load data matrix.
% Load trained model.

% Creates one empty matrix for every pixel position: (columns = pixels, rows = pulses)
NoOfPulses = size(Current_pixel_24_middle, 1);
Mat_24_middle = zeros(NoOfPulses, size(Current_pixel_24_middle{1,1}, 2));
Mat_25_middle = Mat_24_middle;
Mat_26_middle = Mat_24_middle;

% Total charge put into matrices:
for i = 1:NoOfPulses
    Mat_24_middle(i,:) = sum(Current_pixel_24_middle{i,1});
    Mat_25_middle(i,:) = sum(Current_pixel_25_middle{i,1});
    Mat_26_middle(i,:) = sum(Current_pixel_26_middle{i,1});
end

% Creates response vectors:
Resp_24_middle = ones(NoOfPulses,1)*24.5;
Resp_25_middle = ones(NoOfPulses,1)*25.5;
Resp_26_middle = ones(NoOfPulses,1)*26.5;

n = 450; % Number of pulses used for training out of 500 (450 or 490).

% Creates training matrix including training data and responses:
TrainingMat_CL_1_pos = [Mat_24_middle(1:n,:); Mat_25_middle(1:n,:); Mat_26_middle(1:n,:)];
Responses_CL_1_pos = [Resp_24_middle(1:n,:); Resp_25_middle(1:n,:); Resp_26_middle(1:n,:)];
TrainingMat_CL_1_pos = [TrainingMat_CL_1_pos, Responses_CL_1_pos];

% Trains model:
classificationLearner

% Tests model on unseen data:
TestMat_CL_1_pos = [Mat_24_middle(n+1:500,:); Mat_25_middle(n+1:500,:); Mat_26_middle(n+1:500,:)];
prediction = TrainedModel.predictFcn(TestMat_CL_1_pos);

7 positions and 1 response class per pixel

% Load data matrix.
% Load trained model.

% Creates one empty matrix for every pixel position: (columns = pixels, rows = pulses)
NoOfPulses = size(Current_pixel_24_025, 1);
ZeroMat = zeros(NoOfPulses, size(Current_pixel_24_025{1,1}, 2));
Mat_24_0125 = ZeroMat; Mat_24_025 = ZeroMat; Mat_24_0375 = ZeroMat; Mat_24_middle = ZeroMat; Mat_24_0625 = ZeroMat; Mat_24_075 = ZeroMat; Mat_24_0875 = ZeroMat;
Mat_25_0125 = ZeroMat; Mat_25_025 = ZeroMat; Mat_25_0375 = ZeroMat; Mat_25_middle = ZeroMat; Mat_25_0625 = ZeroMat; Mat_25_075 = ZeroMat; Mat_25_0875 = ZeroMat;
Mat_26_0125 = ZeroMat; Mat_26_025 = ZeroMat; Mat_26_0375 = ZeroMat; Mat_26_middle = ZeroMat; Mat_26_0625 = ZeroMat; Mat_26_075 = ZeroMat; Mat_26_0875 = ZeroMat;

% Total charge put into matrices:
for i = 1:NoOfPulses
    Mat_24_0125(i,:)   = sum(Current_pixel_24_0125{i,1});
    Mat_24_025(i,:)    = sum(Current_pixel_24_025{i,1});
    Mat_24_0375(i,:)   = sum(Current_pixel_24_0375{i,1});
    Mat_24_middle(i,:) = sum(Current_pixel_24_middle{i,1});
    Mat_24_0625(i,:)   = sum(Current_pixel_24_0625{i,1});
    Mat_24_075(i,:)    = sum(Current_pixel_24_075{i,1});
    Mat_24_0875(i,:)   = sum(Current_pixel_24_0875{i,1});

    Mat_25_0125(i,:)   = sum(Current_pixel_25_0125{i,1});
    Mat_25_025(i,:)    = sum(Current_pixel_25_025{i,1});
    Mat_25_0375(i,:)   = sum(Current_pixel_25_0375{i,1});
    Mat_25_middle(i,:) = sum(Current_pixel_25_middle{i,1});
    Mat_25_0625(i,:)   = sum(Current_pixel_25_0625{i,1});
    Mat_25_075(i,:)    = sum(Current_pixel_25_075{i,1});
    Mat_25_0875(i,:)   = sum(Current_pixel_25_0875{i,1});

    Mat_26_0125(i,:)   = sum(Current_pixel_26_0125{i,1});
    Mat_26_025(i,:)    = sum(Current_pixel_26_025{i,1});
    Mat_26_0375(i,:)   = sum(Current_pixel_26_0375{i,1});
    Mat_26_middle(i,:) = sum(Current_pixel_26_middle{i,1});
    Mat_26_0625(i,:)   = sum(Current_pixel_26_0625{i,1});
    Mat_26_075(i,:)    = sum(Current_pixel_26_075{i,1});
    Mat_26_0875(i,:)   = sum(Current_pixel_26_0875{i,1});
end

% Creates response vectors:
Resp_24_0125 = ones(NoOfPulses,1)*24;
Resp_24_025 = ones(NoOfPulses,1)*24;
Resp_24_0375 = ones(NoOfPulses,1)*24;
Resp_24_middle = ones(NoOfPulses,1)*24;
Resp_24_0625 = ones(NoOfPulses,1)*24;
Resp_24_075 = ones(NoOfPulses,1)*24;
Resp_24_0875 = ones(NoOfPulses,1)*24;
Resp_25_0125 = ones(NoOfPulses,1)*25;
Resp_25_025 = ones(NoOfPulses,1)*25;
Resp_25_0375 = ones(NoOfPulses,1)*25;
Resp_25_middle = ones(NoOfPulses,1)*25;
Resp_25_0625 = ones(NoOfPulses,1)*25;
Resp_25_075 = ones(NoOfPulses,1)*25;
Resp_25_0875 = ones(NoOfPulses,1)*25;
Resp_26_0125 = ones(NoOfPulses,1)*26;
Resp_26_025 = ones(NoOfPulses,1)*26;
Resp_26_0375 = ones(NoOfPulses,1)*26;
Resp_26_middle = ones(NoOfPulses,1)*26;
Resp_26_0625 = ones(NoOfPulses,1)*26;
Resp_26_075 = ones(NoOfPulses,1)*26;
Resp_26_0875 = ones(NoOfPulses,1)*26;

n = 450; % Number of pulses used for training out of 500 (450 or 490).

% Creates training matrix including training data and responses:
TrainingMat_CL_7_pos_1_pix = [Mat_24_0125(1:n,:); Mat_24_025(1:n,:); Mat_24_0375(1:n,:); Mat_24_middle(1:n,:); Mat_24_0625(1:n,:); Mat_24_075(1:n,:); Mat_24_0875(1:n,:); Mat_25_0125(1:n,:); Mat_25_025(1:n,:); Mat_25_0375(1:n,:); Mat_25_middle(1:n,:); Mat_25_0625(1:n,:); Mat_25_075(1:n,:); Mat_25_0875(1:n,:); Mat_26_0125(1:n,:); Mat_26_025(1:n,:); Mat_26_0375(1:n,:); Mat_26_middle(1:n,:); Mat_26_0625(1:n,:); Mat_26_075(1:n,:); Mat_26_0875(1:n,:)];
Responses_CL_7_pos_1_pix = [Resp_24_0125(1:n,:); Resp_24_025(1:n,:); Resp_24_0375(1:n,:); Resp_24_middle(1:n,:); Resp_24_0625(1:n,:); Resp_24_075(1:n,:); Resp_24_0875(1:n,:); Resp_25_0125(1:n,:); Resp_25_025(1:n,:); Resp_25_0375(1:n,:); Resp_25_middle(1:n,:); Resp_25_0625(1:n,:); Resp_25_075(1:n,:); Resp_25_0875(1:n,:); Resp_26_0125(1:n,:); Resp_26_025(1:n,:); Resp_26_0375(1:n,:); Resp_26_middle(1:n,:); Resp_26_0625(1:n,:); Resp_26_075(1:n,:); Resp_26_0875(1:n,:)];
TrainingMat_CL_7_pos_1_pix = [TrainingMat_CL_7_pos_1_pix, Responses_CL_7_pos_1_pix];

% Trains model:
classificationLearner

% Tests model on unseen data: (pixel 24, 25 and 26 respectively)
TestMat24_7_pos_1_response = [Mat_24_0125(n+1:500,:); Mat_24_025(n+1:500,:); Mat_24_0375(n+1:500,:); Mat_24_middle(n+1:500,:); Mat_24_0625(n+1:500,:); Mat_24_075(n+1:500,:); Mat_24_0875(n+1:500,:)];
prediction24_7_pos_1_response = TrainedModel.predictFcn(TestMat24_7_pos_1_response(:,:));

TestMat25_7_pos_1_response = [Mat_25_0125(n+1:500,:); Mat_25_025(n+1:500,:); Mat_25_0375(n+1:500,:); Mat_25_middle(n+1:500,:); Mat_25_0625(n+1:500,:); Mat_25_075(n+1:500,:); Mat_25_0875(n+1:500,:)];
predTest25_7_pos_1_response = TrainedModel.predictFcn(TestMat25_7_pos_1_response(:,:));

TestMat26_7_pos_1_response = [Mat_26_0125(n+1:500,:); Mat_26_025(n+1:500,:); Mat_26_0375(n+1:500,:); Mat_26_middle(n+1:500,:); Mat_26_0625(n+1:500,:); Mat_26_075(n+1:500,:); Mat_26_0875(n+1:500,:)];
predTest26_7_pos_1_response = TrainedModel.predictFcn(TestMat26_7_pos_1_response(:,:));

3 positions per pixel

% Load data matrix.
% Load trained model.

% Creates one empty matrix for every pixel position: (columns = pixels, rows = pulses)
NoOfPulses = size(Current_pixel_24_025, 1);
ZeroMat = zeros(NoOfPulses, size(Current_pixel_24_025{1,1}, 2));
Mat_24_025 = ZeroMat; Mat_24_middle = ZeroMat; Mat_24_075 = ZeroMat;
Mat_25_025 = ZeroMat; Mat_25_middle = ZeroMat; Mat_25_075 = ZeroMat;
Mat_26_025 = ZeroMat; Mat_26_middle = ZeroMat; Mat_26_075 = ZeroMat;

% Total charge put into matrices:
for i = 1:NoOfPulses
    Mat_24_025(i,:)    = sum(Current_pixel_24_025{i,1});
    Mat_24_middle(i,:) = sum(Current_pixel_24_middle{i,1});
    Mat_24_075(i,:)    = sum(Current_pixel_24_075{i,1});
    Mat_25_025(i,:)    = sum(Current_pixel_25_025{i,1});
    Mat_25_middle(i,:) = sum(Current_pixel_25_middle{i,1});
    Mat_25_075(i,:)    = sum(Current_pixel_25_075{i,1});
    Mat_26_025(i,:)    = sum(Current_pixel_26_025{i,1});
    Mat_26_middle(i,:) = sum(Current_pixel_26_middle{i,1});
    Mat_26_075(i,:)    = sum(Current_pixel_26_075{i,1});
end

% Creates response vectors:
Resp_24_025 = ones(NoOfPulses,1)*24.25;
Resp_24_middle = ones(NoOfPulses,1)*24.5;
Resp_24_075 = ones(NoOfPulses,1)*24.75;
Resp_25_025 = ones(NoOfPulses,1)*25.25;
Resp_25_middle = ones(NoOfPulses,1)*25.5;
Resp_25_075 = ones(NoOfPulses,1)*25.75;
Resp_26_025 = ones(NoOfPulses,1)*26.25;
Resp_26_middle = ones(NoOfPulses,1)*26.5;
Resp_26_075 = ones(NoOfPulses,1)*26.75;

n = 450; % Number of pulses used for training out of 500 (450 or 490).

% Creates training matrix including training data and responses:
TrainingMat_CL_3_pos = [Mat_24_025(1:n,:); Mat_24_middle(1:n,:); Mat_24_075(1:n,:); Mat_25_025(1:n,:); Mat_25_middle(1:n,:); Mat_25_075(1:n,:); Mat_26_025(1:n,:); Mat_26_middle(1:n,:); Mat_26_075(1:n,:)];
Responses_CL_3_pos = [Resp_24_025(1:n,:); Resp_24_middle(1:n,:); Resp_24_075(1:n,:); Resp_25_025(1:n,:); Resp_25_middle(1:n,:); Resp_25_075(1:n,:); Resp_26_025(1:n,:); Resp_26_middle(1:n,:); Resp_26_075(1:n,:)];
TrainingMat_CL_3_pos = [TrainingMat_CL_3_pos, Responses_CL_3_pos];

% Trains model:
classificationLearner

% Tests model on unseen data:
TestMat_CL_3_pos = [Mat_24_025(n+1:500,:); Mat_24_middle(n+1:500,:); Mat_24_075(n+1:500,:); Mat_25_025(n+1:500,:); Mat_25_middle(n+1:500,:); Mat_25_075(n+1:500,:); Mat_26_025(n+1:500,:); Mat_26_middle(n+1:500,:); Mat_26_075(n+1:500,:)];
prediction = TrainedModel.predictFcn(TestMat_CL_3_pos);

7 positions per pixel

% Load data matrix.
% Load trained model.

% Creates one empty matrix for every pixel position: (columns = pixels, rows = pulses)
NoOfPulses = size(Current_pixel_24_025, 1);
ZeroMat = zeros(NoOfPulses, size(Current_pixel_24_025{1,1}, 2));
Mat_24_0125 = ZeroMat; Mat_24_025 = ZeroMat; Mat_24_0375 = ZeroMat; Mat_24_middle = ZeroMat; Mat_24_0625 = ZeroMat; Mat_24_075 = ZeroMat; Mat_24_0875 = ZeroMat;
Mat_25_0125 = ZeroMat; Mat_25_025 = ZeroMat; Mat_25_0375 = ZeroMat; Mat_25_middle = ZeroMat; Mat_25_0625 = ZeroMat; Mat_25_075 = ZeroMat; Mat_25_0875 = ZeroMat;
Mat_26_0125 = ZeroMat; Mat_26_025 = ZeroMat; Mat_26_0375 = ZeroMat; Mat_26_middle = ZeroMat; Mat_26_0625 = ZeroMat; Mat_26_075 = ZeroMat; Mat_26_0875 = ZeroMat;

% Total charge put into matrices:
for i = 1:NoOfPulses
    Mat_24_0125(i,:) = sum(Current_pixel_24_0125{i,1});
    Mat_24_025(i,:)  = sum(Current_pixel_24_025{i,1});
    Mat_24_0375(i,:) = sum(Current_pixel_24_0375{i,1});
