
Institutionen för systemteknik

Department of Electrical Engineering

Examensarbete

Design, evaluation and implementation of a pipeline

for semi-automatic lung nodule segmentation

Examensarbete utfört i Datorseende

vid Tekniska högskolan vid Linköpings universitet

av

Lukas Berglin

LiTH-ISY-EX--16/4925--SE

Linköping 2016

Department of Electrical Engineering

Linköpings tekniska högskola

Linköpings universitet


Design, evaluation and implementation of a pipeline

for semi-automatic lung nodule segmentation

Examensarbete utfört i Datorseende

vid Tekniska högskolan vid Linköpings universitet

av

Lukas Berglin

LiTH-ISY-EX--16/4925--SE

Handledare:

Marcus Wallenberg

isy, Linköpings universitet

Toms Vulfs

Sectra Imaging IT AB

Examinator:

Maria Magnusson

isy, Linköpings universitet


Abstract

Lung cancer is the most common type of cancer in the world and always manifests as lung nodules. Nodules are small tumors that consist of lung tissue. They are usually spherical in shape and their cores can be either solid or subsolid. Nodules are common in lungs, but not all of them are malignant. To determine if a nodule is malignant or benign, attributes like nodule size and volume growth are commonly used. The procedure to obtain these attributes is time consuming, and therefore calls for tools to simplify the process.

The purpose of this thesis work was to investigate the feasibility of a semi-automatic lung nodule segmentation pipeline including volume estimation. This was done by implementing, tuning and evaluating image processing algorithms with different characteristics to create pipeline candidates. These candidates were compared using a similarity index between their segmentation results and ground truth markings to determine the most promising one.

The best performing pipeline consisted of a fixed region of interest together with a level set segmentation algorithm. Its segmentation accuracy was not consistent for all nodules evaluated, but the pipeline showed great potential when dynamically adapting its parameters for each nodule. The use of dynamic parameters was only briefly explored, and further research would be necessary to determine its feasibility.


Acknowledgements

I would like to thank Sectra Imaging IT AB for giving me the opportunity to do my master thesis in such a challenging but interesting field. Thank you to everyone at Sectra for the amazing work environment and positive energy. A special thank you to my supervisors Toms Vulfs and Marcus Wallenberg and examiner Maria Magnusson for the continuous help and feedback.

I would also like to thank Malin Bergqvist for always supporting and inspiring me to do my best.


Contents

1 Introduction
  1.1 Motivation
  1.2 Purpose
  1.3 Problem statements
  1.4 Limitations
  1.5 Outline of report
2 Background theory
  2.1 Computed tomography
  2.2 Lung nodules
  2.3 Hounsfield units
  2.4 DICOM
  2.5 Jaccard index
3 Algorithms
  3.1 Region of interest
    3.1.1 Fixed size
    3.1.2 Derivative search
  3.2 Pre-processing
    3.2.1 Gaussian filtering
    3.2.2 Edge filtering
    3.2.3 Adaptive filtering
    3.2.4 Median filtering
    3.2.5 Cylinder filtering
    3.2.6 Local contrast enhancement filtering
    3.2.7 Anti-geometric diffusion
  3.3 Segmentation
    3.3.1 Region growing
    3.3.2 Standard deviation thresholding
    3.3.3 Lassen threshold
    3.3.4 Ensemble segmentation
    3.3.5 Level set
    3.3.6 Otsu's method
  3.4 Post-processing
4 Method
  4.1 Solution layout
    4.1.1 Seed point
    4.1.2 Region of interest
    4.1.3 Isotropic voxels
    4.1.4 Pre-processing
    4.1.5 Segmentation
    4.1.6 Post-processing
    4.1.7 Volumetric analysis
  4.2 Tuning process
  4.3 Pipeline evaluation process
5 Result
  5.1 Test data
  5.2 Tuning
  5.3 Pipeline evaluation
6 Discussion
  6.1 Pipeline
    6.1.1 Region of interest
    6.1.2 Pre-processing
    6.1.3 Segmentation
    6.1.4 Post-processing
  6.2 Tuning process
  6.3 Pipeline evaluation process
7 Conclusions
A Datasets


List of Figures

2.1 Radiography, anatomy planes and CT
2.2 Examples of nodules handled in this thesis
3.1 All fourteen directions used in the derivative search
3.2 Examples to illustrate the effect of selected pre-processing algorithms
3.3 Visualization of level set contours
3.4 Examples of Otsu's method for determining thresholds
4.1 Layout of the final solution for the nodule analysis
5.1 Axial slices of the selected tuning nodules
6.1 Example of different complexity for subsolid nodules
6.2 Example of increased performance using dynamic parameters
6.3 Example of decreased performance using dynamic parameters
6.4 Example of intensity leakage for small nodules close to chest walls
6.5 Example of low performance for a small nodule


List of Tables

5.1 List of abbreviations and section references
5.2 Results for the tuning process
5.3 Results for the first run with the centered seed point
5.4 Results for the second run with the seed point slightly off center
5.5 Results for the third run with the seed point slightly off center


Abbreviations

AGD    Anti-Geometric Diffusion
CT     Computed Tomography
DICOM  Digital Imaging and Communications in Medicine
HU     Hounsfield Unit
IDRI   Image Database Resource Initiative
LCE    Local Contrast Enhancement
LIDC   Lung Image Database Consortium
PACS   Picture Archiving and Communication System


Chapter 1

Introduction

This document is written as a master of science thesis at the Department of Electrical Engineering at Linköping University, with Maria Magnusson as examiner and Marcus Wallenberg as supervisor. The work was performed at Sectra Imaging IT AB, with Fredrik Häll as requirement specifier and Toms Vulfs as supervisor. Sectra Imaging IT AB is a subsidiary of Sectra AB, which was founded in 1978 and specializes in both medical IT and secure communication.

1.1 Motivation

Lung cancer is the leading cause of cancer death in the world. In 2012, over 1.8 million new cases were diagnosed as lung cancer, which corresponded to 13.0% of all new cancer cases that year.

Lung cancer manifests as small tumors called lung nodules [1]. Manually assessing these lung nodules is a time consuming task, taking 5-10 minutes per lung scan [2]. This is why efficient tools for diagnosing lung nodules in lung volumes are of the highest importance.

1.2 Purpose

The aim of this thesis is to investigate the feasibility of a semi-automatic lung nodule segmentation system. This was done by evaluating the accuracy and the computational cost of multiple existing pre-processing and nodule segmentation algorithms. The best performing algorithm combination was implemented and optimized for Sectra's proprietary picture archiving and communication system (PACS).


1.3 Problem statements

The work consisted of two main parts: a design and an implementation part. For the design part, the goal was to find the most feasible pipeline for lung nodule segmentation. The following two questions were therefore explored:

• What pipeline of existing pre-processing, segmentation and post-processing algorithms is the most promising one?

• What factors limit the pipelines not chosen among those evaluated?

For the implementation part, the goal was to implement and optimize the selected pipeline into Sectra’s proprietary PACS. When doing this, the layout and limitations of the existing software had to be taken into account.

1.4 Limitations

The limitations for this work were:

• The end user input is limited to a single click on a nodule. The click is assumed to be approximately in the center of mass of the nodule.

• The computational time for the whole pipeline has to be below two seconds per nodule.

1.5 Outline of report

Following this introductory chapter, the report includes five more chapters. Chapter 2 introduces related theory and concepts used, followed by chapter 3 which presents all algorithms used. Chapter 4 includes the method and describes the pipeline layout and all tests performed for the pipeline evaluation. Chapter 5 presents the results from the evaluation. Chapter 6 discusses the results and motivates the final pipeline chosen for implementation. Finally, chapter 7 contains conclusions and suggestions for future work.


Chapter 2

Background theory

This chapter introduces the necessary background theory. This includes computed tomography, lung nodules, the medical image standard used and the Jaccard index.

2.1 Computed tomography

Multiple techniques exist in medical diagnostic imaging: computed tomography (CT), magnetic resonance imaging (MRI), ultrasound and radiography, to mention a few. The most common technique for lung scanning today is CT, which is based on X-ray scanning. In classical X-ray radiography, X-rays are emitted from an X-ray tube and sent through the patient. Radiation not absorbed by the body exposes a digital flat panel detector, creating the image. The image intensities represent the inside of the body, due to the different radiodensity factors of different anatomical structures. X-ray images are projections that show overlaid structures, see figure 2.1a.

CT is also based on X-rays but the flat panel detector is replaced with an arc-shaped detector. The X-ray tube and the detector rotate around the patient in a helical pattern. This way, readings are obtained from all angles around the patient which enables 3D reconstruction of the body. The 3D reconstruction is usually visualized as sagittal, coronal and axial slices, see figure 2.1b. In this work, a slice or an image refers to an axial slice, see figure 2.1c.

2.2 Lung nodules

Lung nodules are lung tissue abnormalities. They are overall round or oval-shaped and 3-30 mm in diameter [4]. As mentioned in section 1.1, lung cancer always manifests as lung nodules.


Figure 2.1: Example of radiography, anatomy planes and CT. (A) Radiography image. (B) Anatomy planes. A = axial, B = coronal, C = sagittal [3]. (C) CT axial slice.

However, the opposite is not true, as most lung nodules are not cancerous. Most lung nodules are benign, and may instead be the result of inflammation and scars from fungal or bacterial infections in the lung. They are very common and exist in at least 50% of people by the age of 50. Despite most nodules being benign, they are still potential manifestations of lung cancer, and there is therefore a big challenge in determining whether or not they are cancerous. Two attributes commonly used to determine this are nodule size and volume growth.

For this work, both solid nodules and subsolid nodules are handled, see figure 2.2. Solid nodules are well-defined, bright and round in shape. They have a uniform intensity distribution with high contrast to air, which makes them relatively easy to detect in CT scans. Subsolid nodules do not have uniform shapes and/or intensity distributions, and their intensity distributions are closer to the air in the lung than those of solid nodules [5], which makes them harder to separate from the background. The nodules can be well-circumscribed or connected to blood vessels, but they are assumed not to be connected to the chest walls.

2.3 Hounsfield units

The Hounsfield unit (HU) value is a quantity used for describing the radiodensity in CT. It is a linear transformation of the linear attenuation coefficient µ and is used to present the attenuation in a standardised and convenient form. The conversion from µ to H is

H = 1000 · (µ − µwater) / (µwater − µair),   (2.1)


Figure 2.2: Examples of nodules handled in this thesis. (a) A solid, well-circumscribed nodule. (b) A subsolid, well-circumscribed nodule. Images from the LIDC-IDRI database [6].

where µwater and µair are the linear attenuation coefficients of water and air, respectively. Note that Hwater = 0 HU and Hair = −1000 HU.

2.4 DICOM

Digital Imaging and Communications in Medicine (DICOM) is an international standard in medical imaging. It is used for handling, storing, distributing and viewing any kind of medical images, regardless of their origin.

There are hundreds of attributes stored in each DICOM image, but only a few are interesting for the scope of this thesis, namely:

• Slice location: Defines where the slice is located.

• Slice thickness: Defines the thickness of the slice in millimeters.

• Pixel spacing: Defines the width and height of a pixel in millimeters.

• Rescale slope and intercept: Two variables used for linear transformation from voxel intensity to HU.

Voxel intensities in CT slices vary for different image acquisition techniques. This means that equal voxel intensities in two sets of images could correspond to different radiodensities and vice versa. To enable algorithms to include radiodensity characteristics, HU are used instead of voxel intensities. The conversion from voxel intensity to HU is


H(x) = I(x) · Rs + Ri,   (2.2)

where I(x) is the voxel intensity at image coordinate x, Rs is the rescale slope and Ri is the rescale intercept.
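As an illustration, the conversion in equation (2.2) is a single array operation. The sketch below is in Python with NumPy (the design phase of this work used Matlab); the function name and argument names are illustrative, with the slope and intercept taken from the DICOM attributes listed above.

import numpy as np

def to_hounsfield(pixels, rescale_slope, rescale_intercept):
    # Equation (2.2) applied element-wise to an array of stored pixel values.
    return pixels.astype(np.float32) * rescale_slope + rescale_intercept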

2.5 Jaccard index

The Jaccard index is used to compare similarities and diversities between two sets, see equation (2.3). It is calculated for a pair of sets by dividing the size of their intersection by the size of their union,

J(A, B) = |A ∩ B| / |A ∪ B|,   (2.3)

where J(A, B) is the similarity of set A and set B. Note that 0 ≤ J ≤ 1, where A = B gives J = 1 and A ∩ B = ∅ gives J = 0.
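For two segmentations stored as boolean voxel masks, equation (2.3) reduces to logical operations. A minimal Python/NumPy sketch (illustrative, not the thesis code):

import numpy as np

def jaccard_index(a, b):
    # Equation (2.3) for two boolean masks of equal shape.
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty; treat as identical
    return np.logical_and(a, b).sum() / union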


Chapter 3

Algorithms

In this chapter, all algorithms used in the thesis are presented. They will be categorized according to which step of the pipeline they are part of, and a detailed description for each algorithm will be presented. There is a multitude of algorithms within this field, and evaluating all of them would not be feasible. The algorithm selection is based on two reviews [7, 8] performed on papers in the field together with recommendations from supervisors.

3.1 Region of interest

Listed below are the two region of interest algorithms selected for this work.

3.1.1 Fixed size

The simplest and most intuitive algorithm for choosing a region of interest is to use a fixed size cube. The cube is centered around the user seed point with an edge length of 45 mm, 1.5 times the largest nodule diameter of 30 mm [4]. The additional 50% is to ensure that any nodule fits inside the region even if the seed point is chosen slightly off-center.

Fixed size is based on the region of interest algorithm used by Lassen et al. [5], who use a user defined stroke to define a cubic region with side length 1.6 times the stroke length. Since this solution is limited to a seed point as the only user input, the algorithm was modified to fulfill this requirement.
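A sketch of the fixed-size region of interest, assuming the volume is indexed (z, y, x) and that the voxel size per axis is known from the DICOM attributes. Function and argument names, and the clipping behaviour at volume borders, are illustrative assumptions:

import numpy as np

def fixed_roi(volume, seed, spacing_mm, edge_mm=45.0):
    # Half the cube edge, converted from millimeters to voxels per axis.
    half = np.round(edge_mm / (2.0 * np.asarray(spacing_mm))).astype(int)
    lo = np.maximum(np.asarray(seed) - half, 0)          # clip at borders
    hi = np.minimum(np.asarray(seed) + half + 1, volume.shape)
    roi = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    return roi, lo  # also return the cube's offset within the volume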


Figure 3.1: All fourteen directions used in the derivative search.

3.1.2 Derivative search

The second region of interest algorithm is a low-pass filter together with a derivative search. The low-pass filter smooths the nodule body to remove small inhomogeneities in the nodule. After that, intensity derivatives are calculated in fourteen different directions starting from the seed point, see figure 3.1. Expansion is performed in all these directions until, for each of them, a point is encountered where:

• The derivative is non-negative.

• The derivative has been negative at least once.

The first condition is set to stop the progression for a direction when it approaches nearby structures outside the nodule. The second condition handles seed points chosen slightly off-center. A search starting from an off-center seed point and traveling towards the center of the nodule is likely to initially have a non-negative derivative, which would otherwise stop the progression. When all derivatives have stopped, the region of interest is selected as the bounding box surrounding all end points.
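The walk along one of the fourteen directions can be sketched as follows (Python, illustrative; details such as step length and the low-pass filter are not reproduced here). The two stop conditions above appear directly in the loop:

import numpy as np

def walk_direction(volume, seed, step, max_steps=30):
    # Walk from `seed` along the integer direction `step` on low-pass
    # filtered data. Stop at a non-negative derivative once a negative
    # derivative has been seen. Returns the end point of the ray.
    pos = np.asarray(seed)
    prev = float(volume[tuple(pos)])
    seen_negative = False
    for _ in range(max_steps):
        nxt = pos + np.asarray(step)
        if np.any(nxt < 0) or np.any(nxt >= volume.shape):
            break  # ran into the volume border
        cur = float(volume[tuple(nxt)])
        if cur - prev < 0:
            seen_negative = True
        elif seen_negative:
            break  # derivative non-negative AND has been negative once
        pos, prev = nxt, cur
    return pos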

3.2 Pre-processing

Listed below are all pre-processing algorithms selected for this work. For visualization of performance for each algorithm, see figure 3.2.

3.2.1 Gaussian filtering

The Gaussian filter is a weighted low-pass filter. The filter is constructed by sampling a 3D Gaussian function

G(x, y, z) = 1 / (2πσ²)^(3/2) · e^(−(x² + y² + z²) / (2σ²)),   (3.1)

where x, y and z are the distances from the center of the filter and σ is the standard deviation of the filter.


Figure 3.2: Four different examples to illustrate the effect of all selected pre-processing algorithms. The first row uses a test case, the following three rows use real nodule data. A = original image, B = Gaussian filtering, C = edge filtering, D = adaptive filtering, E = median filtering, F = cylinder filtering, G = local contrast enhancement, H = anti-geometric diffusion.

The result of Gaussian filtering is a smoothed image, see figure 3.2, column B.

Gaussian filters were used by multiple authors according to [8], and Matlab's existing implementation has been used during the design phase of this work. The tunable parameters are the filter size and σ.
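A minimal equivalent in Python, where SciPy's Gaussian filter plays the role of Matlab's implementation (the sigma value is illustrative, not the tuned one):

from scipy import ndimage

def gaussian_preprocess(roi, sigma=1.5):
    # sigma (in voxels) and the implicit kernel size are the tunable
    # parameters.
    return ndimage.gaussian_filter(roi, sigma=sigma)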

3.2.2 Edge filtering

Edge filters are used to enhance image features, such as the edges of objects. The filter uses an unsharp masking method: a low-pass filtered version of the image is created and subtracted from the original image, leaving only high frequency structures. These are then amplified by an amplification parameter and added to the original image. The result of edge filtering is an image with enhanced contrast near edges and borders, see figure 3.2, column C.

Edge filtering is a common technique for enhancing edges. The tunable parameters are the low-pass filter size and the amplification parameter for the high frequencies.
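A sketch of unsharp masking as described above, assuming a Gaussian as the low-pass filter (the text leaves the low-pass filter type open; the sigma and amplification values are illustrative):

import numpy as np
from scipy import ndimage

def edge_filter(image, lp_sigma=2.0, amplification=1.5):
    image = image.astype(np.float32)
    low = ndimage.gaussian_filter(image, sigma=lp_sigma)
    # Add the amplified high-frequency residual back to the original.
    return image + amplification * (image - low)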


3.2.3 Adaptive filtering

The adaptive filter is the sum of a position invariant low-pass filter and a position variant band-pass filter [9]. The coefficients of the band-pass filter are controlled by a local control tensor, which is constructed using a local structure tensor and its eigenvectors and eigenvalues. This way the band-pass filter can adapt its coefficients to structures in the data. The result of adaptive filtering is an image with sharper edges and smoothed areas where there is little to no structure, see figure 3.2, column D.

The adaptive filter used is based on Signal Processing for Computer Vision by Granlund and Knutsson [9]. Tunable parameters are the filter size and the standard deviation σ of the averaging filter used when calculating the local structure tensor. It is also possible to choose whether the structure tensor should be averaged and/or normalized.

3.2.4 Median filtering

The median filter sets each pixel to the median value in a surrounding neighbourhood. It is used to remove salt and pepper noise and for edge preserving smoothing. The result of the median filtering is a smoothed image where small vessels and structures have been removed, see figure 3.2, column E.

Median filters were used by multiple authors according to [8], and Matlab's existing implementation has been used during the design phase of this work. The tunable parameter is the mask size.

3.2.5 Cylinder filtering

Cylinder filters are used to suppress vessel-like structures in the lung. The filter uses template matching with cylinder templates to find vessel-like structures, where higher correlation between data and template gives a stronger filter response. The strongest filter response is then subtracted from the original image to suppress vessels. The implemented algorithm uses seven templates, each with a length of seven pixels, expanding in the same directions as the derivative search, see section 3.1.2. The result of cylinder filtering is an image with reduced intensities for cylinder shaped structures, see figure 3.2, column F.

Cylinder filters are used by Chang et al. [10], who define the filter response Fcyl(x) as

Fcyl(x) = max_θ ( min_{y ∈ Ωθx} I(y) ),   (3.2)

where Ωθx is the domain of the cylinder filter centered around coordinate x with orientation θ, and I(y) is the pixel intensity at coordinate y. Although the implemented filter is not equivalent to that of Chang et al., it uses the same principle of suppressing vessel-like structures in the data. The tunable parameters are the kernel size and the cylinder radius.

3.2.6 Local contrast enhancement filtering

Local contrast enhancement (LCE) is used to improve the details and the local contrast. For each pixel, the operation removes a local mean and divides the result by the local standard deviation,

O(x, y) = (I(x, y) − µ(x, y)) / σ(x, y),   (3.3)

where O(x, y) is the output image, I(x, y) is the input image, (x, y) are the image coordinates, µ is the local mean and σ is the local standard deviation. µ is given by

µ(x, y) = I(x, y) ∗ h(x, y),   (3.4)

where h(x, y) is a Gaussian low-pass filter and ∗ denotes convolution. σ is given by

σ(x, y) = √( I²(x, y) ∗ h(x, y) − µ²(x, y) ).   (3.5)

The result of LCE is an image with high contrast between fast and slow varying structures, see figure 3.2, column G.

This LCE filter is used by Messay et al. [11], who use two different window sizes depending on the size of the nodule. The algorithm implemented for this work uses a single window size regardless of the nodule size. The operation is done in 2D for each slice, and the tunable parameters are the window size and the standard deviation σ of the Gaussian low-pass filter.
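Equations (3.3)-(3.5) translate directly into array operations on one axial slice. A Python sketch (the sigma value is illustrative; eps guards against division by zero in flat regions):

import numpy as np
from scipy import ndimage

def local_contrast_enhancement(slice2d, sigma=8.0, eps=1e-6):
    f = slice2d.astype(np.float32)
    mu = ndimage.gaussian_filter(f, sigma)                 # eq. (3.4)
    var = ndimage.gaussian_filter(f * f, sigma) - mu * mu  # eq. (3.5)
    local_std = np.sqrt(np.maximum(var, 0.0))
    return (f - mu) / (local_std + eps)                    # eq. (3.3)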


3.2.7 Anti-geometric diffusion

Anti-geometric diffusion uses first and second derivatives along both the x and y axes for each point in the image to calculate a new image

IAD = (Ix² Ixx + 2 Ix Iy Ixy + Iy² Iyy) / (Ix² + Iy²),   (3.6)

where Ix and Iy are the first order derivatives and Ixx, Ixy and Iyy are the second order derivatives of the image. 3 × 3 Sobel operators were used to calculate the derivatives. The result of anti-geometric diffusion is an image only highlighting edges, see figure 3.2, column H.

Anti-geometric diffusion is used by Ye et al. [12] as a pre-step to calculate a geometry feature called the shape index. Different shape index values correspond to different shapes, which is used to identify sphere-like structures in the data. The shape index was excluded from this work due to insufficient information on how to calculate the principal curvature necessary for it. There are no tunable parameters for this algorithm. Other choices of derivative operators are possible, but this was not explored further.
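A sketch of equation (3.6) on a single slice, using repeated 3 × 3 Sobel operators as an approximation of the second derivatives (illustrative, not the thesis code; eps avoids division by zero where the gradient vanishes):

import numpy as np
from scipy import ndimage

def anti_geometric_diffusion(slice2d, eps=1e-6):
    f = slice2d.astype(np.float32)
    ix = ndimage.sobel(f, axis=1)    # first derivatives
    iy = ndimage.sobel(f, axis=0)
    ixx = ndimage.sobel(ix, axis=1)  # second derivatives via repeated Sobel
    iyy = ndimage.sobel(iy, axis=0)
    ixy = ndimage.sobel(ix, axis=0)
    return (ix**2 * ixx + 2 * ix * iy * ixy + iy**2 * iyy) / \
           (ix**2 + iy**2 + eps)     # equation (3.6)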

3.3 Segmentation

Listed below are all segmentation algorithms selected for this work. After the image data has been pre-processed, the segmentation algorithms are used to find the nodule.

3.3.1 Region growing

Region growing is a threshold-based segmentation algorithm. It classifies voxels that lie within set thresholds and are connected to a specified seed point as part of the segmented object. Starting from the seed point, it checks whether adjacent voxels in a 6-connective neighbourhood are within specified upper and lower thresholds. Voxels within the thresholds are classified as part of the object, and the procedure is repeated for all newly classified voxels. The algorithm stops when no additional voxels are added. This is the baseline for the threshold-determining segmentation algorithms below.
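The procedure can be sketched as a breadth-first traversal over the 6-connective neighbourhood (Python, illustrative):

import numpy as np
from collections import deque

def region_grow(volume, seed, t_lower, t_upper):
    # Grow a boolean mask from `seed` (z, y, x) between two thresholds.
    mask = np.zeros(volume.shape, dtype=bool)
    if not (t_lower <= volume[seed] <= t_upper):
        return mask
    mask[seed] = True
    queue = deque([seed])
    steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in steps:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                    and not mask[n] and t_lower <= volume[n] <= t_upper:
                mask[n] = True
                queue.append(n)
    return mask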


3.3.2 Standard deviation thresholding

Standard deviation thresholding is a threshold-determining algorithm. It uses intensity characteristics in a local neighbourhood of the seed point to calculate thresholds for the region growing algorithm. The thresholds are defined as

Tlower = I(x) − a · σcube,   (3.7)

Tupper = I(x) + a · σcube,   (3.8)

where I(x) is the intensity at the seed point x, a is a constant and σcube is the standard deviation of a local neighbourhood surrounding the seed point. Tunable parameters are the window size of the local neighbourhood and the constant a.
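Equations (3.7)-(3.8) reduce to a few lines whose result feeds directly into the region growing of section 3.3.1. A sketch with illustrative values for the window size and the constant a:

import numpy as np

def stdev_thresholds(volume, seed, a=2.0, half_window=2):
    z, y, x = seed
    cube = volume[max(z - half_window, 0):z + half_window + 1,
                  max(y - half_window, 0):y + half_window + 1,
                  max(x - half_window, 0):x + half_window + 1]
    s = cube.std()                       # sigma_cube in eqs. (3.7)-(3.8)
    return volume[seed] - a * s, volume[seed] + a * s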

3.3.3 Lassen threshold

This algorithm is a threshold-determining algorithm. It uses intensity characteristics of the selected nodule along with general nodule characteristics. The upper threshold is set as the sum of the mean and standard deviation of the seed point and its local neighbourhood. The lower threshold is determined by

Tlower = (P + L)/2  if (P + L)/2 < −600,  and  Tlower = −600  otherwise,   (3.9)

where L is the typical nodule intensity and P is the typical background intensity. Both are determined from histogram analysis, where L is set as the highest peak in the histogram generated from the seed point’s local neighbourhood, and P is set as the highest peak in the region of interest excluding the region used to calculate L. When determining P , voxel intensities above -200 HU are excluded from the histogram to ignore large vessels and chest walls.

This approach is based on the threshold-determining algorithm used by Lassen et al. [5]. As previously mentioned in section 3.1, Lassen et al. use a user defined stroke to mark the nodule and create the region of interest instead of a seed point. This stroke is also used for specifying the volumes necessary to determine L and P: L is calculated from the volume generated by dilating the stroke by one voxel in each direction, and P excludes the volume generated by dilating the stroke by four voxels. The tunable parameter for this algorithm is the radius of the small cubic volume used in the calculation of L and P.
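A sketch of the threshold computation, assuming the caller has already split the region of interest into the seed point's cubic neighbourhood (core_hu) and the remaining voxels (rest_hu), both in HU, and that the region contains background voxels below −200 HU; the bin count is an illustrative choice:

import numpy as np

def _hist_peak(values, bins=100):
    # Intensity at the highest peak of the histogram.
    counts, edges = np.histogram(values, bins=bins)
    i = int(counts.argmax())
    return 0.5 * (edges[i] + edges[i + 1])

def lassen_thresholds(core_hu, rest_hu):
    # Upper threshold: mean + standard deviation around the seed point.
    t_upper = core_hu.mean() + core_hu.std()
    L = _hist_peak(core_hu)                     # typical nodule intensity
    P = _hist_peak(rest_hu[rest_hu < -200.0])   # background, walls excluded
    t_lower = min((P + L) / 2.0, -600.0)        # equation (3.9)
    return t_lower, t_upper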


3.3.4 Ensemble segmentation

Ensemble segmentation combines multiple uses of region growing. Starting from a seed point, region growing is performed with upper and lower thresholds set as a multiple of the standard deviation of a small cubic region around the seed point, the same as in standard deviation thresholding, see section 3.3.2. The initial segmentation region is eroded to ensure that the region lies inside the nodule. Ten new seed points are selected at random from inside this region and the same region growing procedure is applied to each of them, creating ten new regions. The intersection of all these regions defines the nodule core.

From the nodule core, eight regions are specified using the sagittal, coronal and axial planes all intersecting at the nodule core center. A random seed point is selected in each region. These eight seed points together with the nodule core center and a randomly selected seed point within the nodule core form ten parent seeds.

For each parent seed point, eight child seed points per slice are selected in a three slice neighbourhood around the parent seed point, giving a total of 24 child seed points. For each of these seed points, the region growing procedure is applied, creating 24 child regions. A child region is included in the parent region if it satisfies the following conditions:

if mean_child < mean_parent − 3 · stdev_parent then
    exclude child region;
else if intersection(child, parent) / area_child > 0.2 then
    include child region;
else
    exclude child region;
end

Algorithm 1: Conditions for keeping a child region.

The parent and the included child regions together form a parent tumor segmentation. All ten parent tumor segmentations are added together, and a voxel is assigned to the final nodule segmentation if at least half of the voxels in its 3 × 3 × 3 neighbourhood were included in at least half of the parent masks.

Ensemble segmentation is used by Gu et al. [13]. Gu et al. use a different region growing algorithm called "Click & Grow", which differs somewhat from the one implemented in this thesis. Not every aspect of their "Click & Grow" implementation is presented in the paper, but it appears similar to the previously mentioned standard deviation thresholding algorithm, which is therefore substituted for it. Gu et al. also use an additional condition for including or excluding child regions, called the roundness feature. This feature is only mentioned and not explained, and is therefore excluded.


Figure 3.3: Visualization of level set contours in 2D [14]. The intersection of the surface φ with the zero level set creates the contour.

Tunable parameters for this algorithm are the same as for standard deviation thresholding, see section 3.3.2.

3.3.5 Level set

Level set is a segmentation algorithm that estimates the contour of an evolving surface φ(x(t), t) over time t [14]. The contour is defined as all points where the surface intersects the plane of zero height, i.e. the zero level set φ = 0, see figure 3.3.

Any point x changes over time with φ, and φ can be any function as long as its zero level set gives a contour. A surface height map is calculated as the distance d to the surface,

φ(x, t = 0) = ±d,   (3.10)

where d is positive outside the contour and negative inside it.

Given an initial φt0, φt can be calculated for any t using the motion equation ∂φ/∂t, which gives

dφ(x(t), t)/dt = (∂φ(x(t), t)/∂x(t)) · xt + φt.   (3.11)

Here, the gradient ∂φ(x(t), t)/∂x(t) = ∇φ and the speed xt = F(x(t)) · n, where F is a force and n = ∇φ/|∇φ| is the normalized gradient. With this, the motion equation (3.11) can be rewritten as


φt + ∇φ · xt = φt + ∇φ · F n = φt + F |∇φ|.   (3.12)

To adjust the shape and the contour to the object, the force F should be derived from the image data. This is achieved by using gradients from the image. The contour should stop at edges of the object (F ≤ 0) and expand inside the object (F > 0), which corresponds to the inverse gradient information. With φ also comes the possibility to compute the surface curvature κ,

κ = ∇ · (∇φ / |∇φ|),   (3.13)

which is useful when controlling the surface smoothness.

The implementation used is based on Wang et al. [15, 16]. Their papers present segmentation results for much larger objects such as a brain, an aorta and a liver, and the implementation is not adjusted specifically for nodule segmentation. The algorithm is initialized using a sphere as the initial φt0 together with a vector for intensity analysis. This intensity analysis is used for an additional stop criterion. Tunable parameters are the sphere radius, κ and the vector used for initialization.

3.3.6 Otsu’s method

Otsu's method is a threshold-determining algorithm. It divides all pixels in an image into a specified number of classes using thresholds, see figure 3.4. The thresholds are determined by minimizing the variance within each class or, equivalently, maximizing the variance between classes. To minimize the effect of high intensity chest walls when determining the thresholds, an upper pixel intensity limit is set, excluding pixels above the limit when calculating the thresholds. When using more than two classes, it is necessary to specify which classes should be considered as selected classes. Their corresponding thresholds are then used in the region growing algorithm.

Otsu’s method is a common method used for automatic thresholding [17]. Tunable parameters are the number of classes to find thresholds for, which classes to include in the segmentation and the upper limit for chest wall exclusion.
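A sketch using scikit-image's threshold_multiotsu, which operates on the flattened voxel values. The chest-wall limit and the choice of selected classes are illustrative assumptions; class k covers the value range between thresholds k−1 and k:

import numpy as np
from skimage.filters import threshold_multiotsu

def otsu_limits(roi_hu, classes=4, selected=(2, 3), wall_limit=200.0):
    # Thresholds computed only from voxels below the chest-wall limit.
    thresholds = threshold_multiotsu(roi_hu[roi_hu < wall_limit],
                                     classes=classes)
    # Bounds of all classes: [min, t_0, ..., t_{classes-2}, max].
    bounds = np.concatenate(([roi_hu.min()], thresholds, [roi_hu.max()]))
    # Lower/upper limits spanning a run of adjacent selected classes,
    # ready to be passed to the region growing algorithm.
    return bounds[selected[0]], bounds[selected[-1] + 1]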


Figure 3.4: Examples of Otsu's method for determining thresholds. (A) Original image. (B) Otsu's method with two classes. (C) Otsu's method with four classes.

3.4 Post-processing

All region growing based algorithms are susceptible to leakage since they only include intensity based characteristics and do not take the shape of the segmented object into account. To minimize this leakage effect, all region growing based segmentation algorithms are followed by some morphological operations as post-processing. This includes three steps:

1. Erode the object to remove small structures such as vessels.

2. Remove objects not connected to the core object.

3. Dilate the object to return it to its original size.

Morphological operations are used by Messay et al. [11]. Both erosion and dilation are performed multiple times with 6-connective neighbourhoods.
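The three steps map onto standard morphological routines. A Python sketch with SciPy (the number of iterations is illustrative; the component kept is the one still containing the seed point, i.e. the core object):

from scipy import ndimage

def postprocess(mask, seed, iterations=2):
    structure = ndimage.generate_binary_structure(3, 1)  # 6-connectivity
    # 1. Erode to cut away thin vessels and small structures.
    eroded = ndimage.binary_erosion(mask, structure, iterations=iterations)
    # 2. Keep only the connected component containing the seed point.
    labels, _ = ndimage.label(eroded, structure)
    if labels[seed] == 0:
        return mask  # seed eroded away; fall back to the input mask
    core = labels == labels[seed]
    # 3. Dilate back towards the original size.
    return ndimage.binary_dilation(core, structure, iterations=iterations)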


Chapter 4

Method

This chapter describes the methods used to achieve the results of the thesis. First, an overview of the evaluated solution layout is provided. After that, the algorithm tuning process and the automated test descriptions follow.

4.1 Solution layout

According to reviews by Dhara et al. and Lee et al. [7, 8], most automatic analysis solutions include common operations: pre-processing, lung field segmentation, nodule detection, false positive reduction, nodule classification and volumetric analysis.

• Pre-processing enhances the contrast between foreground and background.

• Lung field segmentation locates the lung region in the scans.

• Nodule detection finds a set of nodule candidates.

• False positive reduction removes candidates not considered to be nodules.

• Nodule classification determines if a nodule is benign or malignant.

• Volumetric analysis estimates volume and growth rate of the nodule.

An adjusted solution layout will be used in this work, see figure 4.1. The solution is semi-automatic, which lets the user specify a seed point inside the nodule where the region of interest should be centered. This enables the algorithms to work only within a local neighbourhood of the nodule. Lung field segmentation will not be included since it is used for separating well-attached nodules from surrounding wall tissue, which is an optional task for this work.


Figure 4.1: Layout of the final solution for the nodule analysis.

False positive reduction will not be included since the seed point is assumed to be inside a nodule. Nodule classification and volumetric analysis will not be included, except for the volume estimate, because they lie beyond the scope of this thesis. In summary, the solution contains the following steps:

1. A seed point is provided by the user.

2. A region of interest is defined around the seed point.

3. Image data is processed to consist of isotropic voxels.

4. A pre-processing algorithm is performed on the region of interest.

5. A segmentation algorithm is performed on the processed region of interest.

6. Morphological operations are performed on the segmented volume.

7. Volumetric analysis is performed on the final volume.

4.1.1 Seed point

As a first step in the solution, a seed point is provided by the user. Since the solution is semi-automatic, it includes end user interaction. For the solution to work, the seed point is assumed to be close to the center of the nodule.


4.1.2 Region of interest

Working on a small region of interest has two major advantages. Firstly, it reduces the workload of the algorithms. Instead of applying algorithms to 512 × 512 pixels for every slice, they can be applied to a small fraction of the voxels. Secondly, it places an upper bound on the segmentation error, as the segmented region is limited by the region of interest. For instance, errors caused by leakage into surrounding tissue are limited by the region-of-interest boundaries.

4.1.3 Isotropic voxels

CT scans usually have different voxel resolutions in different image dimensions. The pixel spacing in a data set is equal in the x- and y-directions, but the distance between slices differs; the slice thickness is usually larger than the pixel distance. Due to the anisotropy of the voxels, 3D operations are harder to implement. To simplify the use of 3D operations on the CT scans, isotropic voxels are preferable. The image data is therefore resampled through linear interpolation to get a slice thickness equal to the pixel spacing, ensuring isotropic voxels.
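A sketch of the resampling step, assuming axis 0 is the slice axis; order=1 gives the linear interpolation mentioned above:

from scipy import ndimage

def to_isotropic(volume, slice_thickness, pixel_spacing):
    # Stretch the slice axis so its spacing matches the in-plane spacing.
    zoom = (slice_thickness / pixel_spacing, 1.0, 1.0)
    return ndimage.zoom(volume, zoom, order=1)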

4.1.4 Pre-processing

Prior to the segmentation, the image data is pre-processed. There are multiple reasons for this:

• To make diffuse nodules more distinct.

• To suppress blood vessels connected to the nodule.

• To enhance nodule edges.

Diffuse subsolid nodules usually have intensity inhomogeneities in their cores. These irregularities introduce problems in intensity- or derivative-based segmentation algorithms. To make the nodule core more homogeneous, it can be smoothed using Gaussian filtering and median filtering. Blood vessels connected to the nodules are the main source of segmentation leakage. The HU intensity values for vessels are similar to those of the nodules, which causes region growing based algorithms to continuously expand outside the nodule. One way to reduce this vessel leakage is to suppress vessel-like structures using median filtering and cylinder filtering. Nodules usually show gradually decreasing HU intensities close to their borders, which complicates the process of estimating where the actual contour lies.


To simplify this, one wants to enhance edges to get a clearer nodule border using edge filtering, adaptive filtering, local contrast enhancement and anti-geometric diffusion.

4.1.5 Segmentation

Segmentation is the main task of determining which voxels belong to the nodule and which do not. The process uses the pre-processed image data as input and returns a 3D region of the segmented volume. Most algorithms today use intensity based segmentation. To contrast that, a method using surface and shape characteristics is also included in the evaluation.

4.1.6 Post-processing

Post-processing is performed on the segmented volume. The purpose of this process is to fill holes in the nodule body and remove blood vessels and other surrounding structures connected to the nodule. The result of this step is the final segmentation.

4.1.7 Volumetric analysis

As an additional step in the solution, the volume is estimated by counting the segmented voxels and scaling with the pixel spacing and slice thickness.
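With the resampled data this is a voxel count times the voxel volume (a sketch; the spacing order is an assumption):

def nodule_volume_mm3(mask, spacing_mm):
    # spacing_mm = (z, y, x) voxel size in millimeters.
    return float(mask.sum()) * spacing_mm[0] * spacing_mm[1] * spacing_mm[2]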

4.2 Tuning process

Before the algorithm evaluation, all algorithms underwent a tuning process. For the tuning process, four nodules were selected. The nodules were chosen to give a good representation of different nodule characteristics, and to show some of the challenging aspects of the segmentation process.

The tuning is done as an empirical and qualitative study, and the procedure is as follows.

1. Tunable parameters were identified.

2. Multiple parameter setups were created.

3. All setups for each pre-processing algorithm were evaluated together with all segmentation algorithms for all nodules. Equivalently, all setups for each segmentation algorithm were evaluated together with all pre-processing algorithms.


4. The setup with the highest mean Jaccard index over all four nodules was chosen as the optimal setup for that specific algorithm.

5. Algorithm combinations performing below 0.2 in Jaccard index for any of the four nodules were excluded from the evaluation.

As a first step, tunable parameters were identified for each algorithm. Combinations of different parameter values formed multiple parameter setups. Each of these setups was then evaluated separately for all nodules. Parameter value ranges were chosen within reasonable limits to reduce the number of runs performed per algorithm. The performance of each setup was measured using the Jaccard index.

For the pre-processing algorithms, the parameter setup which resulted in the highest mean Jaccard index over all four nodules, in combination with any segmentation algorithm, was selected. Equally, the best performing setup for each segmentation algorithm, in combination with any pre-processing algorithm, was selected.

In addition to selecting parameter setups, the tuning process indicates whether a pre-processing/segmentation algorithm combination is compatible or not. A combination with a Jaccard index less than 0.2 for any nodule is excluded from the final evaluation.

The post-processing algorithm was not included in the tuning process. It had been tuned during the implementation phase of pre-processing and segmentation algorithms.

4.3 Pipeline evaluation process

For the pipeline evaluation process, the selected parameters were used together with a larger set of nodules. The performance of each algorithm combination for each nodule was measured using the Jaccard index and the computation time. The mean and standard deviation of both measurements were used when comparing different algorithm combinations against each other.

The evaluation process was run three times on the same data set. For the first run, the seed point for each nodule was selected as the nodule center. For the two following runs, noise was added to the seed point location to simulate a seed point chosen a few pixels off center.


Chapter 5

Result

This chapter presents the results achieved from the methods presented in the previous chapter. First, the database used for both the tuning and the evaluation is presented, followed by the results from both the tuning process and the pipeline evaluation.

Abbreviations and section references for all algorithms are presented in table 5.1.

Table 5.1: List of abbreviations and section references.

Algorithm                    Abbreviation  Section
No filter                    NF
Gaussian filtering           GF            3.2.1
Edge filtering               EF            3.2.2
Adaptive filtering           AF            3.2.3
Median filtering             MF            3.2.4
Cylinder filtering           CF            3.2.5
Local contrast enhancement   LCE           3.2.6
Anti-geometric diffusion     AGD           3.2.7
Standard deviation           Stdev         3.3.2
Lassen threshold filtering   Lassen        3.3.3
Ensemble segmentation        Ensemble      3.3.4
Level set                    Level-set     3.3.5
Otsu's method                Otsu          3.3.6

5.1 Test data

The test data used has been selected from the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI). LIDC-IDRI is a public database of CT scans [6]. It includes 1018 cases with DICOM images from thoracic CT scans. Each case includes at least one nodule, and provides ground truth borders for all of them.


Table 5.2: Results for the tuning process.

           NF GF EF AF MF CF LCE AGD
Stdev      X  X
Lassen     X  X  X
Ensemble   X  X  X  X  X
Level set  X  X  X  X  X  X  X
Otsu

The ground truth is produced from an annotation process performed by four radiologists, where each radiologist individually does readings of the data. The radiologists' markings do not always correspond, which results in each nodule having a set of one to four slightly different ground truth borders.

The database was used for evaluation of the algorithms. The DICOM images were used as image input to the system and the ground truth markings were used to both extract a seed point and to calculate the ground truth volume. This volume, together with the results from the segmentation, was used to calculate the Jaccard index, see section 2.5.

For the tuning process, a set of four nodules, with one reading each from a single patient case, was used, see figure 5.1. They were chosen for their different characteristics. Nodule 1 is a small, solid and spherical nodule close to but not connected to the chest wall. Nodule 2 is a large solid nodule with large vessels connected to it. Nodule 3 is a large solid nodule with small vessels connected to it. Nodule 4 is a small subsolid nodule close to, but not connected to, the chest wall.

For the pipeline evaluation process, a larger set of 13 patient cases was used. These cases are a subset of the cases used by Lassen et al. [5], which all include at least one subsolid nodule. Each case was inspected before the evaluation to exclude nodules that did not satisfy the conditions of this study, i.e. were connected to a chest wall. Each scan includes between one and twenty nodules, with one to four markings per nodule. See appendix A for a specification of the cases used in the evaluation data set.

5.2 Tuning

The results from the tuning process are presented in table 5.2. Algorithm combinations selected for the pipeline evaluation are indicated with X.


Figure 5.1: Axial slices of the selected tuning nodules. (A) Nodule 1, small separated nodule. (B) Nodule 2, large nodule with vessels. (C) Nodule 3, large nodule with small vessels. (D) Nodule 4, small subsolid nodule.

5.3 Pipeline evaluation

Presented in tables 5.3, 5.4 and 5.5 are the mean Jaccard index results for the selected algorithm combinations from the pipeline evaluation process. Table 5.3 shows the results from the first run, where the seed point for each nodule was selected as the nodule center. Tables 5.4 and 5.5 show the results from the second and third runs, where the seed point was chosen a few pixels off center.

Table 5.3: Results for the first run with centered seed point.

NF GF EF AF MF CF LCE

Stdev      0.185  0.152
Lassen     0.361  0.275  0.311
Ensemble   -      -      -      -


Table 5.4: Results for the second run with the seed point slightly off center.

           NF     GF     EF     AF     MF     CF     LCE
Stdev      0.138  0.185
Lassen     0.250  0.190  0.234
Ensemble   -      -      -      -      -
Level set  0.337  0.315  0.316  0.310  0.312  0.332  0.254

Table 5.5: Results for the third run with the seed point slightly off center.

           NF     GF     EF     AF     MF     CF     LCE
Stdev      0.132  0.176
Lassen     0.244  0.147  0.226
Ensemble   -      -      -      -


Chapter 6

Discussion

This chapter presents thoughts and reflections on the results and the methods used to achieve them. All segmentation results shown in this chapter's figures come from the No filter/Level set pipeline.

6.1 Pipeline

6.1.1 Region of interest

After implementing both region of interest algorithms, I noticed that for a lot of nodule cases the region of interest did not include the whole nodule when using the derivative search. Nodules, especially large and subsolid ones, usually contain multiple local minima and maxima inside their cores, which would stop the derivative search before it reached the borders of the nodule. Because of this sensitivity to local minima in the data, I considered the algorithm too unreliable to guarantee that the whole nodule was always inside the region of interest. Since region of interest selection is a small and less important part of the pipeline compared to pre-processing and segmentation, the fixed size algorithm was chosen for its reliability.

6.1.2 Pre-processing

Cylinder filtering has the best results of all the pre-processing filters when combining all three runs, but its results are only slightly higher than Gaussian, adaptive and median filtering. Its results were also slightly lower than using no filter at all. To use cylinder filtering to its full potential, templates in multiple sizes and additional orientations would be needed. This would be very computationally expensive, and not feasible for this work.


An alternative could be to find vessel-like structures using gradients [18] and the shape index [12] instead of template matching.

Both local contrast enhancement and anti-geometric diffusion have quite large drawbacks. While they enhance edges and structures, the results no longer represent HU values. Anti-geometric diffusion only takes derivatives into account, which results in static areas, such as nodule cores and background, having identical output. The same applies to the local contrast enhancement filter, but since the Gaussian filter used to remove the local mean was quite large, only large nodules were affected.

Also, Ye et al. [12] used the anti-geometric diffusion algorithm as the first part of their pre-processing. The second part, the shape index, was not included in this work. The fact that anti-geometric diffusion was not intended to pre-process the data exclusively could explain its low performance results.

6.1.3 Segmentation

Ensemble segmentation was excluded due to its long computation time. A single segmentation using this algorithm occasionally took a few minutes per nodule, which was not feasible for this work. Its computation time is very susceptible to leakage through vessels and other structures connected to the nodules, since it performs the region growing algorithm multiple times. Its performance was similar to both standard deviation thresholding and Lassen thresholding in the tuning process, and it would have been a feasible algorithm to evaluate with a faster region growing algorithm or better vessel suppression.

An interesting approach to the segmentation part of the pipeline would be, instead of trying to segment the nodules, to segment the blood vessels and other structures in the lung and exclude that content. The level set algorithm shows potential for this operation in the study by Wang et al. [19] and the review by Kirbas et al. [20].

It would also have been interesting to explore machine learning systems and their potential as segmentation algorithms. There is plenty of research in the field [21-23] and there are multiple large databases to use as training data, so the strategy would have been viable. It was excluded from the work due to time limitations.


Figure 6.1: Example of different complexity for subsolid nodules. (a) The nodule used for tuning. (b) Example of a larger, more diffuse nodule.

6.1.4 Post-processing

Very little time was spent on evaluating post-processing algorithms in comparison to both the pre-processing and the segmentation. This mirrors the small focus on the area in the two reviews [7, 8] used.

6.2 Tuning process

When identifying nodules that were hard to segment, I found that diffuse nodules in general were both larger and more inhomogeneous than the ones used for the tuning process, see figure 6.1. This led me to the conclusion that I should have used better representatives of subsolid nodules in the tuning process.

It also would have been beneficial to use a larger test data set, so that more nodule characteristics could have been more accurately represented. It would then have been easier to tune the algorithms to handle a larger variety of nodules, which most likely would have increased the performance.

Using only a single seed point as input, no additional information such as size, sphericity, orientation or texture could be provided by the user. All of these features, or just rough estimates of them, would be useful for tuning the algorithms. During my implementation of the selected pipeline into Sectra's PACS, I used visual inspection to compare its results to the results from the evaluation. For some nodules, the difference was substantial. The inspection led to the conclusion that two data dimensions were flipped between the two systems, which impacted my parameter settings. Manually adjusting the parameters to fit the nodules provided a significantly better result.


Figure 6.2: Example of increased performance using dynamic parameters. (a) Static parameters. (b) Dynamic parameters. Green = ground truth, blue = segmentation.

Figure 6.3: Example of decreased performance using dynamic parameters. (a) Static parameters. (b) Dynamic parameters. Green = ground truth, blue = segmentation.

The tuning process always determined static parameters that were the same for all nodules. The inspection indicated that dynamic parameters, adjusted for each nodule, could improve the performance. To try this idea, I modified my derivative search, see section 3.1.2, to include conditions based on HU characteristics. The resulting nodule expansion estimates could then be used for setting parameters such as the initial radius and the initialization vector. This procedure was tested on a handful of the nodules and resulted in both better and worse performance, see figures 6.2 and 6.3 for examples.

6.3 Pipeline evaluation process

The best performing pipelines in the evaluation were:

• No filter/Level set

• Cylinder filtering/Level set

• Adaptive filtering/Level set

• Median filtering/Level set

No filter had the highest total performance over all three runs. Cylinder filtering had a slightly lower variance in Jaccard index than the other pipelines, but a much higher computational cost. Adaptive filtering had the highest performance when no noise was included, but also the highest Jaccard index variance of all selected pipelines. Median filtering never outperformed all other pipelines in a single run, but performed just as well in general. Even though the mean results for the highest performing pipelines were quite even over all nodules, the performance per nodule sometimes fluctuated. This indicates that selecting the pre-processing algorithm for each nodule would increase performance.

Comparing table 5.3 with tables 5.4 and 5.5 gives an indication that the evaluated pipelines are not very robust. Noise added as small shifts in seed point position resulted in a notable decrease in performance. The decrease was more apparent for small nodules, since the shift represented a larger portion of the nodule radius. I think the idea presented in section 6.2, estimating some nodule characteristics as part of determining the pipeline parameters, would make the pipeline more robust.

The performance of all pipelines was quite low. Even the best performing pipelines scored around 0-0.1 in Jaccard index for 15-20% of the nodules, which is very unreliable. Presented below are some examples of difficult nodules with low segmentation performance.

Nodules close to chest walls often included intensity leakage, which blended the nodule and the wall together, see figure 6.4. This often caused the segmentation to include wall tissue outside the nodule, resulting in very low test results.

Really small nodules had low test results in general, see figure 6.5. Since the segmentation parameters were static for all nodules, the initial φt0 was sometimes larger than the nodule the algorithm was trying to segment. Since φt0 is supposed to start inside the object and then expand, the segmentation operated under conditions it was not constructed for. I think dynamic parameters would have helped a lot in this case.

Another problem with small nodules was that the level set function did not have time to adapt to the object before hitting its stop conditions. This led to the segmentation of small nodules always being close to spherical, due to the sphere used as φt0 in the level set initialization.


Figure 6.4: Example of intensity leakage for small nodules close to chest walls. (a) Ground truth for the nodule. (b) Segmentation result using level set.

Figure 6.5: Example of low performance for a small nodule. Green = ground truth, blue = segmentation.



Chapter 7

Conclusions

Segmenting nodules is a very difficult task due to their varying characteristics. Using a single click approach prevents the user from giving additional information from visual inspection beyond position, which implies that the pipeline has to figure out those characteristics on its own. This work did not focus on that area of the pipeline and used static parameters instead. I think exploring the concept of dynamically adjusting pipeline parameters, maybe even the pipeline itself, for each nodule is necessary for semi-automatic segmentation, due to the large differences in characteristics. This would be a good direction for future work on this pipeline.

The level set algorithm is a very powerful tool for segmentation. Its results were not outstanding for every nodule used in this work, but it had good compatibility with multiple of the pre-processing algorithms evaluated and showed great potential when used with parameters adjusted to the nodule. Considering that the implementation used is also used to segment significantly larger objects, it would be interesting to further explore the performance of a level set implementation adjusted for small objects such as nodules.


Appendix A

Datasets

This appendix specifies which cases from the LIDC-IDRI database were used for each data set. Nodules connected to the chest wall were excluded from the evaluation process.

Cases used in the tuning data set: 0111. Nodules used in the tuning data set: 4. Total number of ground truth markings used in the tuning data set: 4.

Cases used in the evaluation data set: 0003, 0045, 0076, 0305, 0392, 0402, 0476, 0497, 0686, 0730, 0752, 0805, 0850. Nodules used in the evaluation data set: 59. Total number of ground truth markings used in the evaluation data set: 170.


Bibliography

[1] Lung Cancer Alliance. Lung nodules, May 2014. URL http://www.lungcanceralliance.org/Educational%20Materials/Nodules%20Brochure%202014.pdf/.

[2] B. de Hoop, H. Gietema, S. van de Vorst, K. Murphy, R. van Klaveren, and M. Prokop. Pulmonary ground-glass nodules: increase in mass as an early indicator of growth. Radiology, 255, 2010.

[3] Wikipedia. Human anatomy planes signatures, 2010. URL https://sv.wikipedia.org/wiki/Anatomiska_termer_f%C3%B6r_l%C3%A4ge#/media/File:Human_anatomy_planes_signatures.svg.

[4] Peter Mazzone. Pulmonary nodules, May 2014. URL http://www.clevelandclinicmeded.com/medicalpubs/diseasemanagement/hematology-oncology/pulmonary-nodules/.

[5] B. C. Lassen, C. Jacobs, J-M. Kuhnigk, B. van Ginneken, and E. M. van Rikxoort. Robust semi-automatic segmentation of pulmonary subsolid nodules in chest computed tomography scans. Physics in Medicine & Biology, 60(3), 2015.

[6] S. G. Armato III, G. McLennan, L. Bidaut, M. F. McNitt-Gray, C. R. Meyer, A. P. Reeves, B. Zhao, D. R. Aberle, C. I. Henschke, E. A. Hoffman, E. A. Kazerooni, H. MacMahon, E. J. R. van Beek, D. Yankelevitz, et al. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A completed reference database of lung nodules on CT scans. Medical Physics, 38, 2011.

[7] A. K. Dhara, S. Mukhopadhyay, and N. Khandelwal. Computer-aided detection and analysis of pulmonary nodule from CT images: A survey. IETE Technical Review, 29(4), 2012.

[8] S. L. A. Lee, A. Z. Kouzani, and E. J. Hu. Automated detection of lung nodules in computed tomography images: a review. Machine Vision and Applications, 23(1), 2010.


[9] G. H. Granlund and H. Knutsson. Signal Processing for Computer Vision. Kluwer Academic Publishers, 1995.

[10] A. Chang, H. Emoto, D. N. Metaxas, and L. Axel. Pulmonary micronodule detection from 3D chest CT. MICCAI, LNCS 3217, 2004.

[11] T. Messay, R. C. Hardie, and S. K. Rogers. A new computationally efficient CAD system for pulmonary nodule detection in CT imagery. Medical Image Analysis, 14, 2010.

[12] X. Ye, X. Lin, and J. Dehmeshki. Shape-based computer-aided detection of lung nodules in thoracic CT images. IEEE Transactions on Biomedical Engineering, 56(7), 2009.

[13] Y. Gu, V. Kumar, L. O. Hall, D. B. Goldgof, C-Y. Li, R. Korn, C. Bendtsen, E. Rios Velazquez, A. Dekker, H. Aerts, P. Lambin, X. Li, J. Tian, R. A. Gatenby, and R. J. Gillies. Automated delineation of lung tumors from CT images using a single click ensemble segmentation approach. Pattern Recognition, 46, 2013.

[14] Herve Lombaert. Level set method: Explanation, March 2006. URL http://step.polymtl.ca/~rv101/levelset/.

[15] C. Wang, H. Frimmel, and Ö. Smedby. Level-set based vessel segmentation accelerated with periodic monotonic speed function. Medical Imaging, 7962, 2011.

[16] C. Wang, H. Frimmel, and Ö. Smedby. Fast level-set based image segmentation using coherent propagation. Medical Physics, 41(7), 2014.

[17] Nobuyuki Otsu. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man and Cybernetics, 1979.

[18] R. Wiemker, T. Klinder, M. Bergtholdt, K. Meetz, I. C. Carlsen, and T. Bülow. A radial structure tensor and its use for shape-encoding medical visualization of tubular and nodular structures. IEEE Transactions on Visualization and Computer Graphics, 2013.

[19] P. Wang, A. DeNunzio, P. Okunieff, and W. G. O’Dell. Lung metastases detection in ct images using 3d template matching. Medical Physics, 34(3), 2007.

[20] C. Kirbas and F. Quek. A review of vessel extraction techniques and algorithms. ACM Computing Surveys (CSUR), 2004.

[21] C. C. Reyes-Aldasoro and A. L Aldeco. Image segmentation and compression using neural network. Advances in Artificial Perception and Robotics CIMAT, 2000.


[22] T. Messay, R. C. Hardie, and T. R. Tuinstra. Segmentation of pulmonary nodules in computed tomography using a regression neural network approach and its application to the Lung Image Database Consortium and Image Database Resource Initiative dataset. Medical Image Analysis, 22, 2015.

[23] M. J. Moghaddam and H. Soltanian-Zadeh. Medical image segmentation using artificial neural networks. INTECH Open Access Publisher, 2011.
