
Linköping University | Department of Computer Science
Master thesis, 30 ECTS | Datateknik
2017 | LIU-IDA/LITH-EX-A--17/009--SE

Intelligent boundary extraction for area and volume measurement
Using LiveWire for 2D and 3D contour extraction in medical imaging

Intelligent konturmatchning för area- och volymsmätning

Oscar Nöjdh

Supervisor: Christer Bäckström
Examiner: Peter Jonsson

Linköpings universitet, SE-581 83 Linköping


Upphovsrätt

Detta dokument hålls tillgängligt på Internet – eller dess framtida ersättare – under 25 år från publiceringsdatum under förutsättning att inga extraordinära omständigheter uppstår. Tillgång till dokumentet innebär tillstånd för var och en att läsa, ladda ner, skriva ut enstaka kopior för enskilt bruk och att använda det oförändrat för ickekommersiell forskning och för undervisning. Överföring av upphovsrätten vid en senare tidpunkt kan inte upphäva detta tillstånd. All annan användning av dokumentet kräver upphovsmannens medgivande. För att garantera äktheten, säkerheten och tillgängligheten finns lösningar av teknisk och administrativ art. Upphovsmannens ideella rätt innefattar rätt att bli nämnd som upphovsman i den omfattning som god sed kräver vid användning av dokumentet på ovan beskrivna sätt samt skydd mot att dokumentet ändras eller presenteras i sådan form eller i sådant sammanhang som är kränkande för upphovsmannens litterära eller konstnärliga anseende eller egenart. För ytterligare information om Linköping University Electronic Press se förlagets hemsida http://www.ep.liu.se/.

Copyright

The publishers will keep this document online on the Internet – or its possible replacement – for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.

© Oscar Nöjdh


Abstract

This thesis investigates whether a semi-automatic tool can speed up the process of segmenting tumors to find the area of a slice of the tumor or the volume of the entire tumor. A few different 2D semi-automatic tools were considered; the final choice was to implement live-wire. The implemented live-wire was evaluated and improved upon through hands-on testing by developers. Two methods were found for extending live-wire to 3D bodies. The first method was to interpolate the seed points and create new contours using the new seed points. The second method was to let the user segment contours in two orthogonal projections; the intersections between those contours and planes in the third orthogonal projection were then used to create automatic contours in this third projection. Both tools were implemented and evaluated. The evaluation compared the two tools to manual segmentation on two cases posing different difficulties. Time-on-task and accuracy were measured during the evaluation. The evaluation revealed that the semi-automatic tools could indeed save the user time while maintaining acceptable (80%) accuracy. The significance of all results was analyzed using two-tailed t-tests.


Acknowledgments

I would like to begin by thanking everyone at Sectra for their support during this thesis. Especially Daniel Forsberg, Fredrik Häll, and Jacob Bernhard, whose time and interest put this thesis into a larger perspective, creating the motivation needed to complete it. A very special thanks to Lukas Gillsjö, my supervisor; I could not have asked for a better person to help me find my way at Sectra. Extra thanks to those who participated in my user evaluations. Last, I would like to thank all my friends and family who have supported me during my time at Linköping University. I have had a great time and I could not have done it without all of you.


Contents

Abstract iii

Acknowledgments iv

Contents v

List of Figures vii

List of Tables viii

List of Algorithms ix

1 Introduction 1
1.1 Motivation 1
1.2 Aim 1
1.3 Research Questions 1
1.4 Delimitations 2

2 Theory 3
2.1 Color Terminology 3
2.2 Segmentation Techniques 3
2.3 Dijkstra's Algorithm 6
2.4 Edge Detection 6
2.5 3D Live-Wire 8
2.6 Turtle Map 10
2.7 Usability Evaluation 11
2.8 Related Work 12

3 Method 13
3.1 Pre-Study 13
3.2 Implementation 13
3.3 Evaluation 14

4 Results 17
4.1 Pre-Study 17
4.2 Implementation 19
4.3 Evaluation 23

5 Discussion 27
5.1 Results 27
5.2 Method 28
5.3 The Work In A Wider Context 28


6 Conclusion 30
6.1 Answers To Research Questions 30
6.2 General Conclusion 30


List of Figures

2.1 Region growing process 4
2.2 Mumford-Shah functional example 4
2.3 Process of GrabCut 5
2.4 Live-Wire line sticking to contour 6
2.5 Path cooling example 9
2.6 Example of 3D live-wire through mapping 9
2.7 Seed point interpolation 10
2.8 Special cases in turtle map 11
2.9 Example Turtle Map 12
3.1 Samples from case 1 16
3.2 Samples from case 2 16
4.1 Region growing cases 18
4.2 Smoothing comparison 19
4.3 Live-Wire flowchart 20
4.4 Result of the Douglas-Peucker algorithm 21
4.5 Sobel kernels used in Canny edge detection 21
4.6 Example results on lung tumor using 3D live-wire 22
4.7 Results from case 1 24
4.8 Results from case 2 25
4.9 Mean time-on-task in case 1 and 2 25
4.10 Mean accuracy and relative error in case 1 26


List of Tables

4.1 Comments gathered from the intermediate evaluation and actions taken based on those comments 22
4.2 Mean values of the results of case 1 24
4.3 Mean values of the results of case 2 24
4.4 P-values of the t-tests when comparing time-on-task 26
4.5 P-values of the t-tests when comparing accuracy 26


List of Algorithms

1 Example of Dijkstra's Algorithm 7
2 Relaxation Algorithm in Dijkstra's Algorithm 7
3 Creating the turtle map 11


1 Introduction

1.1 Motivation

Contour outlining in medical imaging can be a monotonous and time-consuming task. Outlining the left ventricle of the heart takes a radiologist on average 15 seconds per slice [1]. A study suggests that a semi-automatic segmentation tool can speed up this process, saving valuable time [1]. When calculating volumes, if the radiologist needs to draw contours in every slice containing the left ventricle, those 15 seconds per slice quickly multiply into a much longer period of time. This thesis investigates whether a semi-automatic tool could save time when extracting contours in two and three dimensions.

1.2 Aim

The aim of this work was to create and evaluate a tool to be used when extracting 2D and 3D bodies in medical imaging. This tool was divided into two parts. First a 2D tool was created and evaluated, then two variants of 3D tools were created based on the first tool. The tools were evaluated based on user tests, measuring time and accuracy on extracted contours. The 2D and 3D tools were evaluated independently.

1.3 Research Questions

The work was divided into two parts. The first part was creating and evaluating a tool used for 2D contour extraction. The first question is based on the segmentation techniques used in that tool. The tool needed to be accurate enough to be useful in medical imaging.

Question 1: How can image segmentation techniques be used to create a 2D contour extraction tool suitable for medical imaging?

The second part of the work was extending the 2D tool into three dimensions. Since 3D images can be difficult to maneuver, one challenge was to make the tools interactive. The tools were based on contours extracted with a 2D tool, and these contours were used to create a 3D body. The accuracy restriction was important in this case as well.

Question 2: How can a 2D contour extraction tool be extended to a 3D contour extraction tool?

The first two research questions ask whether such tools can be created. The third question asks how these tools perform. User interaction was the most important factor. The computing performance of the tool factored into this, and it was important to create a smooth user experience. Accuracy was measured as the percentage difference between a segmented volume and the ground truth volume. The acceptable limit of 80% was chosen to represent a "general sense of the size" of the segmented body, which is what physicians are generally looking for when taking these kinds of measurements.

Question 3: Can the described tools save users time compared to manual contour outlining, within acceptable (80%) accuracy?

1.4 Delimitations

While several 2D segmentation techniques were researched and considered, only one was implemented and evaluated. There may be more ways to extend the 2D tool to 3D than described in this thesis. Only the two described methods were considered.

Time was budgeted to improve the tools based on the feedback from the evaluation. Beyond that predefined time-frame, possible improvements were documented rather than implemented.


2 Theory

2.1 Color Terminology

This section explains how some words having to do with color and color representation in imaging are used in this thesis. The first important distinction to make is the one between ’color’ and ’texture’. Color is a full color representation of a pixel or point in an image. This can be represented in many different ways, for example as an RGB value. Texture on the other hand refers to a gray scale value of a pixel or point in an image. This is simply represented by a number of bits where 0 represents black, and the max value represents white. Both these expressions explain the value of a pixel but with the clear distinction that one is in color and the other one is in gray scale. In medical imaging, it is rare to represent images with color. The input images in this work were all gray scale.

An alpha value is an opacity value used in imaging to create transparency. An alpha value can, for example, be added to the RGB model to create transparent colors in images. The medical images used as input in this work did not have alpha values; if a method requiring alpha values were to be used, those would have to be supplied through user interaction. This opacity value should not be confused with the alpha value used in t-statistics, where the alpha value represents the significance level at which the null hypothesis can be discarded. T-statistics are explained further in section 2.7.

2.2 Segmentation Techniques

Several segmentation techniques from the literature were considered in this thesis. This section explains the theory behind those techniques.

Region Growing

Region growing is an automatic, region-based segmentation technique. The user selects some seed points inside the object to be segmented. Those seed points serve as models for what included points should look like. Starting from one of those points, the area grows to adjacent points based on rules derived from the seed points. This process can be seen in figure 2.1. The inclusion rules are often based on color or texture value [2]. Since each point is handled individually, the region growing technique is highly suitable for parallel programming. The union of the results from these individually handled points is the result of the algorithm. One well-known implementation of this technique is the Magic Wand in Adobe Photoshop [3].

Figure 2.1: The process of the region growing algorithm. The left image shows a selected seed point and how it will expand to adjacent points. The image to the right shows progress after a few iterations. Image from Feng [2]

Figure 2.2: The process of Mumford-Shah. To the left is the original image. In the center-left image, areas of high gradient are marked in black. The center-right image shows the boundaries of the Mumford-Shah model. The image to the right is the segmented image.
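The growth step described above can be sketched as a breadth-first flood fill. This is an illustrative sketch, not the implementation used in this thesis: the function name and the inclusion rule (a fixed tolerance around the mean of the seed values) are assumptions for the example.

```python
from collections import deque
import numpy as np

def region_grow(image, seeds, tolerance):
    """Grow a region from seed points: a neighbour is included when its
    grey value lies within `tolerance` of the mean of the seed values."""
    mean = np.mean([image[p] for p in seeds])
    h, w = image.shape
    included = np.zeros((h, w), dtype=bool)
    queue = deque(seeds)
    for p in seeds:
        included[p] = True
    while queue:
        y, x = queue.popleft()
        # visit the 8-connected neighbourhood of the current point
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not included[ny, nx]:
                    if abs(float(image[ny, nx]) - mean) <= tolerance:
                        included[ny, nx] = True
                        queue.append((ny, nx))
    return included
```

Growing from a single seed inside a bright square on a dark background then returns exactly that square as the segmented region.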

Mumford-Shah Functional

The Mumford-Shah functional is a segmentation technique used to divide an image into sub-regions. It uses energy functions and gradient magnitudes to find similar areas to group into a sub-region [4]. Since this method does not select an area but segments the entire image, the user interaction would be to select the area or areas that should be included in the segmented object. The process of this method can be observed in figure 2.2.

GrabCut

GrabCut is a graph-cut based segmentation technique. The user selects an area of interest surrounding the object to be segmented. Given a trimap, the algorithm computes a "hard" segmentation in this area using graph cut. This trimap is a map of the image with only three levels of alpha values: 0 for background, 1 for foreground, and a set value in [0, 1] for areas around the border. Alpha values are opacity values where α = 1 is foreground and α = 0 is background. The trimap is obtained from user input, where the user selects areas that are foreground and background. The graph cut method is an energy optimization based on the alpha values, and it is iterated to refine the border of the segmentation. During the "hard" segmentation the alpha values can only take on three values, but during the following iterations each pixel is assigned an alpha value α ∈ [0, 1]. After this, the user can use a brush to select areas along the border where the initial segmentation was inaccurate. The brush can be used both to add areas that were not included in the initial segmentation and to remove areas that were wrongly included [3]. The GrabCut process can be observed in figure 2.3.

Figure 2.3: The process of GrabCut. Image from Rother et al. [3].

GrowCut

GrowCut is a segmentation technique based on cellular automata. The user manually assigns labels to some points (seed points) and the algorithm propagates these labels to all other points through automaton evolution [5]. This method is essentially a variation of region growing that uses cellular automata.

Live-Wire

Live-Wire is a semi-automatic segmentation technique based on Dijkstra's shortest path algorithm. Dijkstra's algorithm is used by defining a graph G(V, E) where the vertices are the pixels (points) in the image and the edges connect each pixel to its surrounding (neighbouring) pixels. The user only needs a few clicks to draw contours around an object in an image.

The image is preprocessed and each pixel is assigned a weight based on how likely it is that the pixel lies on an edge. A point on the edge of the object will be cheap, making it preferred for traversal in Dijkstra's algorithm. The edge weights used by Dijkstra's algorithm are defined using these pixel values together with the difference in texture values of the points the edge connects. When the user clicks on the image, a starting point (seed point) is placed. Dijkstra's algorithm is run, calculating the shortest path from the seed point to every other point. When the user moves the mouse, the shortest path from the seed point to the current location is displayed. Another click freezes the current path and places another seed point. This is continued until two lines cross each other and the area is complete. Figure 2.4 shows how the line stuck to the edge when the mouse pointer was moved along the dashed line.
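As a rough sketch of that preprocessing step, gradient magnitude alone can be turned into a per-pixel cost so that strong edges become cheap to traverse. Live-wire implementations combine several features; the function name and the one-minus-normalised-magnitude formula here are illustrative assumptions.

```python
import numpy as np

def edge_costs(image):
    """Assign each pixel a cost that is low on likely edges: one minus
    the normalised gradient magnitude (an illustrative cost choice)."""
    gy, gx = np.gradient(image.astype(float))  # finite-difference gradients
    mag = np.hypot(gx, gy)                     # gradient magnitude per pixel
    if mag.max() > 0:
        mag = mag / mag.max()                  # normalise to [0, 1]
    return 1.0 - mag                           # strong edges -> cheap
```

On an image with a vertical intensity step, pixels on the step get a lower cost than pixels in flat regions, which is exactly what steers the shortest path onto the contour.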

One famous implementation of live-wire, called Intelligent Scissors, was published in 1998. Intelligent Scissors uses Laplacian zero-crossings, gradient magnitudes, gradient directions and pixel values (colors) to find the edges in the image [6]. This algorithm was implemented in Adobe Photoshop under the name Magnetic Lasso.

Figure 2.4: Line sticking to contour in live-wire. The dashed line shows the mouse's movement.

2.3 Dijkstra's Algorithm

Dijkstra's algorithm is a single-source shortest path algorithm developed by Edsger Dijkstra in 1959. The result of the algorithm is the shortest paths from the starting vertex to all other vertices. The algorithm maintains a set S of all visited vertices and requires a graph G(V, E) and a weight function w. Each iteration chooses a vertex u ∈ V \ S with the shortest path and relaxes that vertex. The algorithm is finished when V \ S = ∅. Dijkstra's algorithm has a time complexity of O(V²). If the graph is very sparse, E = o(V²/lg V), the algorithm can be improved to run in O((V + E) · lg V) by using a min-heap to locate the u ∈ V \ S with the shortest path. In an image graph each pixel is connected only to its neighbours, so E = O(V); if all vertices are reachable from the seed point, O((V + E) · lg V) = O(V · lg V) [7]. An example implementation can be found in algorithm 1. The relax method can be found in algorithm 2.

2.4 Edge Detection

There are many edge detection methods. This section describes some of those most commonly used in live-wire algorithms.

Laplacian Zero-Crossing

One common way of finding edges in live-wire is marking Laplacian zero-crossings [6, 8]. Detection of zero-crossings is divided into three main steps:


Algorithm 1 Example of Dijkstra's Algorithm
 1: function DIJKSTRA(G, w, s)
 2:     Initialize-Single-Source(G, s)
 3:     S ← ∅
 4:     Q ← G.V
 5:     while Q ≠ ∅ do
 6:         u ← ExtractMin(Q)
 7:         S ← S ∪ {u}
 8:         for each vertex v ∈ G.Adj[u] do
 9:             Relax(u, v, w)
10:         end for
11:     end while
12: end function

Algorithm 2 Relaxation Algorithm in Dijkstra's Algorithm
1: function RELAX(u, v, w)
2:     if u.d + w(u, v) < v.d then    ▷ v.d is the weight of the currently shortest path to v
3:         v.d ← u.d + w(u, v)
4:         v.π ← u                    ▷ v.π is the predecessor of v in the shortest path to v
5:     end if
6: end function
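Algorithms 1 and 2 can be combined into a short runnable sketch using a min-heap, as in the O((V + E) · lg V) variant. This is an illustrative adjacency-list version, not the thesis's implementation; the function name and graph representation are assumptions.

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths with a min-heap, mirroring
    algorithms 1 and 2. `adj` maps vertex -> list of (neighbour, weight)."""
    dist = {v: float("inf") for v in adj}  # v.d in algorithm 2
    pred = {v: None for v in adj}          # v.pi in algorithm 2
    dist[source] = 0
    heap = [(0, source)]                   # the priority queue Q
    visited = set()                        # the set S of visited vertices
    while heap:
        d, u = heapq.heappop(heap)         # ExtractMin(Q)
        if u in visited:
            continue                       # stale heap entry, skip
        visited.add(u)
        for v, w in adj[u]:
            if d + w < dist[v]:            # the relaxation test of algorithm 2
                dist[v] = d + w
                pred[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, pred
```

The predecessor map `pred` is what live-wire walks backwards to display the shortest path from the mouse position to the seed point.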

1. Convolve the image with a directional second derivative of a Gaussian filter. These filters are also known as Laplacian of Gaussian filters.

2. Localize the zero-crossings by finding neighbouring pixels with different signs (+/−).

3. Check the alignment and orientation of the zero-crossings.

Since the third step is relatively expensive and yields little gain, it can be excluded [9]. The found zero-crossings will lie on potential edges. A smoothing (Gaussian) filter can be applied to the image before filtering for zero-crossings to lower the noise level. A lower noise level reduces the number of false positives when looking for edges, as demonstrated in figure 4.2.
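Step 2 can be sketched as follows, assuming the Laplacian-of-Gaussian response from step 1 has already been computed; the function name and the choice to mark the first pixel of each sign change are illustrative assumptions.

```python
import numpy as np

def zero_crossings(lap):
    """Mark pixels where the (smoothed) Laplacian response changes sign
    between horizontal or vertical neighbours (step 2 above)."""
    zc = np.zeros(lap.shape, dtype=bool)
    sign = lap >= 0
    # horizontal neighbours with different signs
    zc[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    # vertical neighbours with different signs
    zc[:-1, :] |= sign[:-1, :] != sign[1:, :]
    return zc
```

A Laplacian response that flips from negative to positive along a column then yields a single column of marked zero-crossing pixels.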

Gradients

The image gradients can be used to find edges. A strong gradient (i.e. a gradient with high magnitude) indicates a probable edge.

Since contours rarely have sharp turns, the gradient direction can be compared to the direction between two neighbouring points. Neighbouring points in images are defined as the eight pixels surrounding each pixel. Large differences between the direction between neighbouring pixels and the normal of the gradient direction can be made expensive, filtering out sharp turns [6].

Canny Edge Detection

Some newer implementations of live-wire have used Canny edge detection to find the edges in the images. This method is more suitable for medical images because it is less sensitive to image noise [10]. It was presented by John Canny in 1986 and is another widely used edge detection technique [11]. It is a relatively simple algorithm that can be divided into five steps [12].

1. Convolve the image with a Gaussian filter.

2. Find the image gradients, for example with a Sobel filter.

3. Apply non-maximum suppression to thin lines.

4. Apply a double threshold to filter out very weak edges and mark strong edges.

5. Track edges to remove weak edges not connected to a strong edge.

As in Laplacian zero-crossing detection, the first step suppresses noise in the image. A Sobel filter is then applied to find the gradients, which consist of a magnitude and a direction. The gradients have high magnitude on the edges, with a direction from low to high pixel values. In the third step, each pixel's gradient magnitude is compared to its two neighbours in the positive and negative direction of its gradient. If it is not the highest of the three, it is suppressed. This thins the lines to make sure that only the true edge is left. In the fourth step, a double threshold is applied to the gradient magnitudes. Gradients with a magnitude lower than the lower threshold are suppressed. Gradients with magnitudes between the thresholds are considered weak edges. If the magnitude is above the higher threshold, the gradient is considered a strong edge. In the fifth step, all weak edges that are not connected to a strong edge are suppressed, removing leftover noise. This is done by promoting a weak edge to a strong edge if and only if it neighbours a strong edge, iterated so that chains of weak edges connected to a strong edge become strong edges. All points that are still weak edges when this iterative process is complete are suppressed [12].
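Steps 4 and 5 can be sketched as a double threshold followed by a breadth-first promotion of weak edges. This is an illustrative sketch: the function name, the concrete threshold values, and the 8-connectivity choice are assumptions.

```python
from collections import deque
import numpy as np

def hysteresis(mag, low, high):
    """Steps 4-5: keep strong edges (>= high) plus any weak edges
    (>= low) reachable from a strong edge through weak neighbours."""
    strong = mag >= high
    weak = (mag >= low) & ~strong
    out = strong.copy()
    h, w = mag.shape
    queue = deque(zip(*np.nonzero(strong)))   # start from all strong pixels
    while queue:
        y, x = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and weak[ny, nx] and not out[ny, nx]:
                    out[ny, nx] = True        # promote weak edge to strong
                    queue.append((ny, nx))
    return out
```

A weak edge chained to a strong edge survives, while an isolated weak edge of the same magnitude is suppressed, which is exactly the noise-removal behaviour described above.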

Path Cooling

Some implementations of live-wire use path cooling to minimize the number of clicks required from the user [6]. If two shortest paths share a common point, they also share the path between that common point and the seed point. This is demonstrated in figure 2.5. When a point has been included in many user-created paths, or has been included for a set time, it is probable that the user sees it as correct, and an automatic seed point can then be created at that point. In figure 2.5, the paths marked in red and blue are "frozen" and will not change when the user moves the mouse, even if a shorter path is found. On simple contours, like a circle, this technique makes it possible to extract the contour without placing any manual seed points except the first one.
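The shared-path observation can be sketched as finding the common prefix of two paths that start at the same seed point. This is illustrative only; a real implementation would additionally track how long each shared point has been stable before freezing it.

```python
def common_prefix(path_a, path_b):
    """Return the prefix shared by two paths that start at the same seed
    point; its points are candidates for automatic seed points."""
    prefix = []
    for p, q in zip(path_a, path_b):
        if p != q:
            break          # paths diverge here
        prefix.append(p)
    return prefix
```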

Training Segments

On-the-fly training can be used in live-wire to make pixels cheaper when they have properties similar to the ones selected by the user. These properties can include the color/texture value or the color of the pixels in the direction of the gradient. Segments that the user has locked in are used as a template for what properties other contour pixels should have; similar pixels can be made cheap when calculating the shortest paths [6].

2.5 3D Live-Wire

The basic idea of 3D live-wire is to use a few user-made contours to find the contours of a 3D object. One way of doing this is described in the literature on the subject; a second way is presented in this thesis. The first method was described by Poon et al. [13] in 2008. The idea is to map the seed points of user-created contours to another projection and create new contours there [13]. The method created for this thesis uses interpolation of seed points to create contours in one projection only.

3D Live-Wire Through Mapping

A first way of implementing 3D live-wire was described by Poon et al. [13]. The idea is to let the user create manual 2D live-wire contours from two orthogonal angles and then automatically segment several contours from the third orthogonal angle. Each orthogonal plane that intersects the manually created contours is automatically segmented, creating contours in several parallel planes. These intersections are used as seed points for the automatically segmented contours. This is shown in figure 2.6. The intersections (seed points) have to be ordered in the new plane before they can be connected as a contour. For this purpose an algorithm based on L-system turtle graphics is used [13].

Figure 2.5: The seed point is marked in red. The two paths have the blue part in common in their shortest path to the seed point.

Figure 2.6: The red and blue contours are manually segmented. The green contour is automatically segmented using seed points where the other contours intersect with the green plane. The image on the right shows the plane where the green contour was segmented, with intersections marked with white circles. Several planes parallel to the green plane are automatically segmented with this method to create a 3D body. Image from Poon [13]


Figure 2.7: Seed points are blue, manually segmented (2D live-wire) contours are red, automatically segmented contours are green. The seed points from the manually segmented contours are adapted to the contour in the next slice and then connected with live-wire techniques.

3D Live-Wire Through Interpolation

The second way of extending live-wire to 3D is to let the user create several 2D live-wire contours with some slices between them. The seed points from these contours can be interpolated to the other slices and connected there using the same techniques as in 2D live-wire. This process is demonstrated in figure 2.7. One notable disadvantage of this method is that the adaption of the seed points is imperfect: the interpolated seed points may, especially on irregular shapes, not be placed on the contour in the next slice. This error could accumulate into a substantial error in the segmented volume.

This method is faster than manually placing contours due to a number of reasons. One reason is that the user does not have to manually place more seed points. This removes the reaction time from the user, which is generally the slowest part of the live-wire algorithm.

Another reason is the nature of Dijkstra's algorithm. In 2D live-wire, Dijkstra's algorithm will, on every click, find the shortest path from the clicked point to every other point. When implemented with a min-heap, this is an O(V · lg V) operation, where V is the number of vertices [7]. When connecting the interpolated seed points, however, Dijkstra's algorithm can be interrupted as soon as the next seed point is visited. This does not reduce the worst-case time of the algorithm, but since the seed points should be placed on a contour, they will be visited early in the process.
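The interpolation step can be sketched as a linear blend of corresponding seed points between two manually segmented slices. This is illustrative: it assumes both contours carry the same number of seed points in matching order, and the subsequent edge-adaption step is omitted.

```python
def interpolate_seeds(seeds_a, seeds_b, slice_a, slice_b, slice_k):
    """Linearly interpolate corresponding (y, x) seed points from two
    manually segmented slices to obtain start points for slice_k."""
    t = (slice_k - slice_a) / (slice_b - slice_a)  # 0 at slice_a, 1 at slice_b
    return [((1 - t) * ya + t * yb, (1 - t) * xa + t * xb)
            for (ya, xa), (yb, xb) in zip(seeds_a, seeds_b)]
```

The interpolated points are then connected with the same shortest-path machinery as in 2D live-wire, interrupting Dijkstra's algorithm once the next seed point is reached.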

2.6 Turtle Map

The points where the plane of an automatically segmented contour intersects the existing contours are used as seed points in mapping-based 3D live-wire. When found, these points are unordered in the new plane. The L-system turtle is an algorithm that can be used in 3D live-wire to order the seed points of the automatically segmented contours. The algorithm is essentially a depth-first search that orders the points in clockwise order in the new plane [13].

The new plane intersects each manually segmented contour in an even number of places. These points are paired with the opposing point from the same contour. Each pair is then placed on the turtle map with a straight "road" drawn between the points. This is done for all intersecting contours. The algorithm then traverses the map by placing a "turtle" on a starting point. The turtle follows the road straight ahead, turning left when it reaches an intersection and turning 180 degrees when reaching another seed point. The algorithm is finished when the turtle has returned to the starting point. The order of the points is the order in which the turtle visited them on the map. A pseudo implementation can be found in algorithm 3, and figure 2.9 shows an example map and how it would be traversed. There are special cases in the turtle map algorithm not handled in algorithm 3; some of these are displayed in figure 2.8. Note that such cases appear when a seed point happens to be placed on an intersection.

Figure 2.8: Special cases possible in the turtle map. Threes represent seed points, ones represent roads. Note that in all three cases, a seed point has been placed where an intersection would have been.

Algorithm 3 Creating the turtle map
 1: function TURTLE(P)
 2:     M ← 0                    ▷ M is the map, starts out empty
 3:     for each pair p in P do
 4:         M[p.point1] ← 3
 5:         M[p.point2] ← 3
 6:         for each point r between p.point1 and p.point2 do
 7:             M[r] ← M[r] + 1
 8:         end for
 9:     end for
10:     TraverseMap(M)
11: end function
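The map-construction part of algorithm 3 can be sketched as below, restricted to axis-aligned roads for brevity; the function name and grid representation are assumptions for the example, and the traversal step is omitted.

```python
def build_turtle_map(shape, pairs):
    """Algorithm 3 sketch: endpoints of each intersection pair are marked 3,
    and every cell on the straight road between them is incremented, so a
    cell crossed by two roads becomes 2 (an intersection)."""
    h, w = shape
    M = [[0] * w for _ in range(h)]
    for (y1, x1), (y2, x2) in pairs:
        M[y1][x1] = 3                      # seed point
        M[y2][x2] = 3                      # opposing seed point
        if y1 == y2:                       # horizontal road between the pair
            for x in range(min(x1, x2) + 1, max(x1, x2)):
                M[y1][x] += 1
        elif x1 == x2:                     # vertical road between the pair
            for y in range(min(y1, y2) + 1, max(y1, y2)):
                M[y][x1] += 1
    return M
```

Two crossing pairs produce a cell with value 2 where their roads meet, which is where the turtle would turn left during traversal.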

2.7 Usability Evaluation

Usability evaluation can be used to improve a system under development or to evaluate a complete system. User tests should normally be performed by users with similar background and experience as end users [14]. During these tests, subjects are exposed to a series of tasks. Users should think out loud while performing these tasks, so that thoughts and impressions can be gathered. Metrics can be collected to evaluate the system or compare versions of the same system.

Sample Size

Studies have shown that five participants are sufficient to uncover about 80% of the issues in a system, and that these are usually the most severe issues [15]. When collecting metrics, more participants are usually better. Sauro argues in his article [16] that a small sample size is fine, as long as significance is analyzed using t-statistics.

Two-tailed t-statistics take sample size into account and give a probability measurement of how likely it is that a difference exists between two groups of samples (e.g. a group of test samples and a group of control samples). A null hypothesis is presented to the test, usually stating that no difference in mean values can be found when comparing the two groups. The test then returns a probability value stating how probable it is that this null hypothesis can be discarded. The more samples that are used, the more certain the returned probability value will be.

Figure 2.9: Example turtle map with arrows showing traversal. Ones represent road, twos represent intersections, threes are seed points. The green border marks the starting/ending point. The small numbers by the arrows give the order of traversal.

Time-On-Task

Time-on-task is a widely used usability metric: simply a measurement of how long it takes a user to complete a specified task. This metric has been used since the nineteenth century and is reliable and well established [17]. When measuring time-on-task with smaller sample sizes (< 30), t-statistics should be used to analyze the result because they take the sample size into account [16].
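For a small illustration of the statistic behind these tests, the pooled two-sample t value can be computed as below; the two-tailed p-value is then read from the t distribution with n_a + n_b − 2 degrees of freedom (e.g. via a statistics library). The equal-variance pooled form and the function name are illustrative choices, not necessarily the exact test variant used in the thesis.

```python
import math
from statistics import mean, variance

def t_statistic(sample_a, sample_b):
    """Pooled two-sample t statistic (equal-variance form) for comparing
    the means of two groups, e.g. time-on-task of two tools."""
    na, nb = len(sample_a), len(sample_b)
    # pooled sample variance over na + nb - 2 degrees of freedom
    pooled = ((na - 1) * variance(sample_a) +
              (nb - 1) * variance(sample_b)) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))  # standard error of the difference
    return (mean(sample_a) - mean(sample_b)) / se
```

Identical samples give t = 0 (no evidence against the null hypothesis), while a group with a larger mean gives a positive t value whose magnitude grows with the sample size.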

2.8 Related Work

The 2D tool in this thesis is based on Intelligent Scissors, presented by Mortensen and Barrett [6], as well as the techniques presented by Kimmel and Bruckstein [18]. The 3D tools are mainly inspired by the work done by Poon et al. [13].


3 Method

This chapter describes the method that was used in this work. The method was divided into three parts: pre-study, implementation, and evaluation.

3.1 Pre-Study

Since the goal of this work was to implement and evaluate one segmentation tool, the goal of the pre-study was to research existing tools and choose one to implement. Tools were initially evaluated based on a few requirements that Sectra provided as goals for this thesis. The tool should:

1. Be fast
2. Require little interaction from the user
3. Be accurate
4. Work well on well-defined lesions and objects in CT/MR data

Speed and accuracy are well defined metrics used in most literature on this topic. Because those metrics were well defined, the corresponding requirements did not have to be defined further. The amount of interaction required from the user was estimated as mouse button presses ("clicks") since this is something that medical IT wants to limit [19]. To find out if a tool or technique was well suited for CT/MR images, earlier applications were studied and considered.

3.2 Implementation

This section describes the implementation process of the semi-automatic tools.

Algorithm Input

Input to the algorithm consisted of two things: texture values that represented the image pixels, and window levels. The texture values were arrays of 16-bit values where 0 represented


black and the max value represented white. The window levels were a double threshold set by the user. The texture values in the range [lower threshold, upper threshold] would be displayed in the black/white spectrum. All values lower than the lower threshold would be displayed as black, and all values over the upper threshold would be displayed as white. This double threshold made it possible for the user to highlight different tissues and textures in the body.

2D Semi-Automatic Segmentation

Once a suitable tool was selected, that tool was implemented on a branch of Sectra’s PACS (Picture Archiving and Communication System). The system is an image handling and viewing system, created for radiology and pathology. Implementing directly into Sectra’s system gave the implementation a good framework of already implemented elements. Image rendering, mouse and keyboard handling, and drawing tools were all already in this framework. Sectra’s system had a tool where the user could place points using the mouse pointer, and straight lines would be drawn between these points. This tool served as the foundation for the new tools. All code was written in C#. The system had a testing environment in place that was used during implementation of these tools as well.

3D Semi-Automatic Segmentation

The next step was to extend the 2D tool to three dimensions. Both of the considered methods were implemented and evaluated. Since the 3D tools were implemented with a coordinate system where 1 unit equaled 1 millimeter, contours were auto-segmented one millimeter apart. To calculate the volume of the segmented object, the areas of all the contours were summed:

V = Σ_{n=1}^{c-1} D_{n,n-1} · (A_n + A_{n-1}) / 2    (3.1)

where c is the number of contours, V is the volume, A_n is the area of contour n, and D_{n,n-1} is the distance between contours n and n-1. As mentioned, D_{n,n-1} was usually one millimeter, but the user could manually delete contours to create a wider gap between two contours. Since the distances between contours were so small, this linear method was used rather than a more advanced method like spline interpolation.
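Equation 3.1 can be sketched in code as follows. The thesis tools are written in C#, so this Python version, with names invented for illustration, is only a sketch of the trapezoidal summation, not the actual implementation.

```python
# Illustrative sketch of equation (3.1). Contours are (area, position) pairs,
# ordered along the axis orthogonal to the image plane; positions are in mm.

def volume_from_contours(contours):
    """Sum trapezoidal slabs between consecutive contours.

    contours: list of (area_mm2, position_mm) tuples.
    Returns the approximated volume in cubic millimeters.
    """
    volume = 0.0
    for (a_prev, z_prev), (a_cur, z_cur) in zip(contours, contours[1:]):
        distance = abs(z_cur - z_prev)               # D_{n,n-1}, usually 1 mm
        volume += distance * (a_cur + a_prev) / 2.0  # trapezoidal slab
    return volume

# Three 1 mm-spaced contours with areas 10, 20 and 10 mm^2:
print(volume_from_contours([(10.0, 0.0), (20.0, 1.0), (10.0, 2.0)]))  # 30.0
```

Deleting a contour simply widens one `distance`, which is exactly how the text says manual deletion is handled.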

3.3 Evaluation

Intermediate Evaluation

The 2D tool was evaluated by a small group of users before completion and was then improved based on the feedback from that evaluation. During this intermediate evaluation, 10 developers at Sectra got to try the tool. First, the users got the chance to familiarize themselves with the tool to minimize the effect of lack of experience. After the users were familiar with the tool, time was recorded while the users segmented three different shapes. The users used both the semi-automatic tool and a manual drawing tool to segment the shapes. Time and accuracy of these segmentations were compared. The purpose of this evaluation was mainly to give the users a sense of purpose while their comments on the tool were recorded. The comments were used to improve the 2D semi-automatic tool.

Final Evaluation

Each tool was evaluated in four steps:


2. Let the user familiarize themselves with the tool.

3. Let the user segment case 1.

4. Let the user segment case 2.

All tools, including the drawing tool, were evaluated using the same procedure. Due to lack of time, some subjects skipped the final step. Both metrics presented in this section were collected for every segmented case.

Metrics

During the evaluation, two metrics were used: time-on-task and the accuracy of the calculated volume.

Time-on-task was measured from when the tool was selected in the drop-down menu to when the user said that they were satisfied with the result. The user was allowed to choose the end time because the auto-segmented volumes could be interactively improved by the user, and that time/improved accuracy should be considered in the result. The metric Z_T was calculated as the mean time-on-task of one tool applied on one case. The result of the time-on-task metric was analyzed using t-statistics because of the limited (< 20) sample size [16]. Since the purpose of this thesis was to save time when measuring volumes, this was a natural choice of metric when evaluating the segmentation tools.

Accuracy was calculated as Z_A = V_seg / V_true, where V_seg is the segmented volume and V_true is the ground truth volume. The ground truth volume was extracted using manual segmentation in very slow and precise sessions. Acceptable accuracy was chosen as 80%-120% (0.8 ≤ Z_A ≤ 1.2), which is higher than in some similar studies on medical images [20]. Accuracy was also presented as relative error, calculated as Z_E = |Z_A - 1|, making Z_E ≤ 0.2 an acceptable score.
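The two metrics can be expressed directly in code. This is an illustrative Python sketch with hypothetical names, not the evaluation code used in the thesis.

```python
# Sketch of the two accuracy metrics Z_A and Z_E from the text.

def accuracy(v_seg, v_true):
    """Z_A = V_seg / V_true; values above 1 mean over-segmentation."""
    return v_seg / v_true

def relative_error(v_seg, v_true):
    """Z_E = |Z_A - 1|; acceptable when at most 0.2."""
    return abs(accuracy(v_seg, v_true) - 1.0)

# A segmentation of 90 mm^3 against a 100 mm^3 ground truth:
z_a = accuracy(90.0, 100.0)
z_e = relative_error(90.0, 100.0)
print(round(z_a, 3), round(z_e, 3))  # 0.9 0.1  (both within acceptable range)
```

Note that Z_A keeps the sign of the deviation (bigger or smaller than ground truth) while Z_E only keeps its magnitude, which is exactly why the thesis reports both.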

The reason that two metrics based on accuracy were presented was that each captured an interesting property. While relative error gave the magnitude of the error, the accuracy measurement showed whether the segmented volume was bigger or smaller than the ground truth.

Sample Size

Because of the lengthy user evaluations combined with a limited time budget, a small (< 20) sample size was used. Using a small sample size can still give reliable results if the results are analyzed using t-statistics, since t-statistics takes sample size into account [16].

Two-tailed t-tests were performed comparing the accuracy and time-on-task results between implemented tools and manual segmentation, as well as between the two implemented tools. The alpha value of these tests was chosen as 0.05 as this is common practice in clinical studies.
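The test statistic behind such a comparison can be sketched as follows. The thesis does not specify which t-test variant was used; this Python sketch computes Welch's t statistic (a common unequal-variance form) with standard-library tools only. In practice the two-tailed p-value would then come from the t distribution, for example via `scipy.stats.ttest_ind(a, b, equal_var=False)`. The sample values below are invented, chosen only so that their means match the case 1 means reported later (290 s and 116 s).

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    diff = statistics.mean(a) - statistics.mean(b)
    return diff / (va / len(a) + vb / len(b)) ** 0.5

# Hypothetical time-on-task samples (seconds) for two tools:
manual = [250.0, 310.0, 290.0, 330.0, 270.0]
tool = [100.0, 140.0, 120.0, 90.0, 130.0]
print(welch_t(manual, tool))  # large positive value: manual is slower
```

A |t| this large against the critical value for the Welch-Satterthwaite degrees of freedom would give a p-value far below the 0.05 threshold used in the thesis.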

Cases

All tools were evaluated on two cases. The first case was one that the tools were known to handle well; the second case was more difficult.

The first case was a lung tumor that ranged through 10 images. Some samples from this case can be seen in figure 3.1. Due to the black background and clear contours, this was an easy case that the semi-automatic tools should handle well. The tumor in this case split into two towards the top images (seen to the right in figure 3.1). A tool that did not handle split cases would therefore have a lower accuracy in this case.

Figure 3.1: Samples from case 1, a lung tumor. In the rightmost image, one can see how the tumor split into two figures.

The second case was a brain tumor that ranged through 42 images. Some samples from this case can be seen in figure 3.2. This tumor varied in color and shape through the images. The color of the tumor in a single image was often speckled. These properties made this second case much tougher than the first one. This tumor never split into two, meaning that the tools would not have to handle splitting to get a perfect accuracy rating.

Figure 3.2: Samples from case 2, a brain tumor. As shown, this case varied greatly in shape and color.


4 Results

This chapter presents the results from the process described in the method chapter. First, the results from the pre-study will be presented as a list of possible techniques with pros and cons, and which technique was chosen for this thesis. Secondly, the implementation of those techniques will be presented; this section includes technical choices made during implementation. Thirdly, the results of the evaluations, both the intermediate and the final evaluation, will be presented.

4.1 Pre-Study

Several techniques were researched and considered according to the requirements described in the method chapter. The techniques are described below with pros and cons explaining why each technique was or was not suitable.

Region Growing

Because of its suitability for parallel execution, region growing can be made very fast. This technique also requires very little input from the user, as only the initial seed points have to be manually placed. The accuracy of this algorithm depends heavily on the object to be segmented. A single-colored object with a distinct edge on a differently colored background will be segmented very accurately. In some applications on CT and MR images, the textures in the object are very similar to the textures of the background, and this makes region growing unsuitable for some medical imaging. The examples in figure 4.1 are captured from an earlier implementation in Sectra’s system. As can be seen in the figure, this implementation did not handle the more difficult brain tumor case very well. While a good candidate, the fact that a 3D version of this technique had already been implemented and tested by Sectra ruled it out for this thesis.

Figure 4.1: Region growing works well in the image to the left (a lung tumor), not very well on the white object in the image to the right (a brain tumor).

Mumford-Shah Functional

According to the work done by Chen, Yang and Cohen [21], the method proposed in Mumford [4] is slow and not suitable for medical imaging. With the changes they propose, however, it was a valid candidate. Those changes made the technique faster and more accurate in medical imaging, and it requires little user interaction.

GrabCut

According to Rother [3], GrabCut is a very fast technique. Test images required less than a second for initial segmentation on an image of 450x300 pixels and then 0.12 seconds per brush mark from the user. It is, however, not accurate enough for medical imaging, as shown in Kalshetti [22].

GrowCut

GrowCut has a speed comparable to GrabCut, making it fast enough for this thesis. The algorithm only requires a few brush marks from the user to give an accurate result. Although the authors of the paper presenting this technique show a few examples from medical images, it has not been proven to work well on high-detail medical images [5].

Live-Wire

Live-wire is slower and requires more user interaction than all of the above-mentioned techniques [23, 3]. What it does that the others do not is give the user instant feedback. While the other techniques are automatic after the user initiates the segmentation process, live-wire shows the user what the result of their next click will be and lets the user decide if that is satisfactory. Live-wire is accurate and has been proven to work well in medical imaging [6, 24, 23].

Final Choice

The final choice was to use live-wire. This was because of the interactivity the instant feedback brings. Because medical imaging presents very differing scenarios that the algorithm needs to handle, giving the user an idea beforehand of what the result will be helps to handle a wider array of cases. The result of a process that the user only initializes can be difficult to anticipate. Another deciding point was that the contour nature of the result from live-wire is very suitable for extending this technique to three dimensions.

Figure 4.2: Left is the original image. Middle is zero-crossings found without applying a smoothing filter first. Right is zero-crossings found when applying a smoothing filter first.

4.2 Implementation

2D Live-Wire

The implemented live-wire tool initially only used Laplacian zero-crossings to detect edges and assign weights, since it is a fast and accurate way to detect edges. To limit the influence of noise, a Gaussian filter was applied to blur the image before using the zero-crossings technique. This removes some of the noise but also some detail. The result of applying zero-crossings with and without blurring the image can be seen in figure 4.2.
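The blur-then-zero-crossing idea can be illustrated in one dimension. This is not the thesis's C# code, and the 7x7 and 5x5 2D kernels are reduced here to a 3-tap blur and a discrete 1D Laplacian, but the principle (smooth first, then look for sign changes in the second derivative) is the same.

```python
# 1D illustration of Laplacian-of-Gaussian zero-crossing edge detection.

def convolve1d(signal, kernel):
    """Valid-mode 1D convolution (kernels here are symmetric, so no flip)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def zero_crossings(values):
    """Indices where consecutive values flip sign: candidate edge locations."""
    return [i for i in range(len(values) - 1)
            if values[i] * values[i + 1] < 0]

signal = [0, 0, 0, 0, 10, 10, 10, 10]             # a step edge
smoothed = convolve1d(signal, [0.25, 0.5, 0.25])  # small Gaussian blur
laplacian = convolve1d(smoothed, [1, -2, 1])      # discrete second derivative
print(zero_crossings(laplacian))  # [1]: one crossing, at the step
```

Without the blur, pixel-level noise would create many spurious sign flips, which is the artifact visible in the middle image of figure 4.2.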

The Laplacian zero-crossings technique is by itself not suitable for medical imaging because of the high noise levels in those images [24]. Because of those limitations, Canny edge detection was added when calculating the weights. Path cooling was used to limit the number of clicks required from the user. When a segment was more than 30 pixels long and had been included in 80% of the calculated paths since the last seed point, the segment from that point back to the seed point would be locked in the same way as if the user had placed a seed point there. A locked segment means that the segment will not be changed even if the user moves the mouse.
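The path-cooling rule just described can be sketched as a small predicate. The 30-pixel and 80% thresholds come from the text; the bookkeeping (how often the candidate segment appeared in recent paths) and all names are my own simplification, not the thesis code.

```python
# Path-cooling sketch: lock a segment back to the seed once it is both long
# and stable across recently computed paths.

MIN_LENGTH = 30     # pixels, from the text
STABILITY = 0.80    # fraction of recent paths, from the text

def should_cool(segment_pixels, appearances, paths_since_seed):
    """True when the segment should be locked like a user-placed seed point."""
    if paths_since_seed == 0:
        return False
    stable = appearances / paths_since_seed >= STABILITY
    return len(segment_pixels) > MIN_LENGTH and stable

# A 31-pixel segment seen in 9 of the last 10 computed paths is locked:
print(should_cool(list(range(31)), 9, 10))  # True
```

Each cooled segment removes one click the user would otherwise have had to make, which is the point of the feature.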

A "trained" factor was added to the weight calculations. This factor compares all pixel values to those of the locked segments. Points with similar values are made cheaper and points with distant values are made more expensive. The pixel values in the direction of the gradients are also compared, to try to keep the same value in the direction of the gradient at all times. This is because segmented objects are usually uniform in color.


Figure 4.3: A flowchart describing the process of the implemented Live-Wire.

A flowchart of the algorithm can be found in figure 4.3.

To make the resulting area easier to interact with, it was simplified using the Douglas-Peucker algorithm [25]. This algorithm reduces the number of points required to represent each contour, giving the user fewer points to move if the result was unsatisfactory. The Douglas-Peucker algorithm takes a starting point and an ending point (in this case two seed points) and all the ordered points between these. A line is drawn between the starting and ending point, and the intermediate point furthest from this line is found. If this "worst" point is further away than a set value e, that point has to be included to keep the precision of the curve. If it is closer than that, it can be removed, and the line between the starting point and ending point will be an appropriate approximation of the curve. If the point should be included, the algorithm is recursively called with the subsets [starting point, worst point] and [worst point, ending point]. The result is a subset of the original points. An example result of the Douglas-Peucker algorithm can be found in figure 4.4.
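The recursion just described can be sketched as follows. This is an illustrative Python version, not the C# implementation; `epsilon` plays the role of the tolerance e.

```python
import math

def _point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    if (ax, ay) == (bx, by):
        return math.hypot(px - ax, py - ay)
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    return num / math.hypot(bx - ax, by - ay)

def douglas_peucker(points, epsilon):
    """Return a subset of points approximating the curve within epsilon."""
    if len(points) < 3:
        return list(points)
    worst_i, worst_d = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _point_line_distance(points[i], points[0], points[-1])
        if d > worst_d:
            worst_i, worst_d = i, d
    if worst_d <= epsilon:          # every point close enough: drop them all
        return [points[0], points[-1]]
    left = douglas_peucker(points[:worst_i + 1], epsilon)
    right = douglas_peucker(points[worst_i:], epsilon)
    return left[:-1] + right        # merge, avoiding a duplicated worst point

polyline = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(polyline, 1.0))
```

With epsilon 1.0, the eight-point polyline above collapses to four points, which is the kind of reduction figure 4.4 shows on a contour.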

The user could move points by dragging them. Lines were also draggable, moving the two points on either side of the line. The entire contour could be moved by dragging a point placed in the middle of the area.

Kernels And Weights

The Laplacian of Gaussian kernel used in the implementation was a 7x7 kernel with standard deviation 2.4. The Gaussian kernel used in blurring the image was a 5x5 kernel with standard deviation 1.4. A Sobel filter was used to find the gradients of the image, the kernels used in that convolution can be found in figure 4.5.

Figure 4.4: The image to the left is the segmented contour before applying the Douglas-Peucker algorithm. To the right is the result of the algorithm.

Figure 4.5: Sobel kernels used in Canny edge detection. X and Y direction kernels respectively.

Weights were calculated according to this formula: (1 + 19·Z_L) + (1 + 9·Z_C) + 20·Z_T, where Z_L and Z_C are 0 if an edge was found and 1 if one was not found, using Laplacian zero-crossings and Canny edge detection respectively. The weighting factors 19, 9 and 20 were chosen based on extensive testing on different cases; no further justification is claimed for these exact values. It could be possible to find better weighting constants, but these were good enough and no better ones could be found during testing. Z_T is the training factor, a number between 0 and 1 described in section 4.2.

The reason that the weights were chosen with an offset of 1 was so that no pixels would be "free" to traverse. This means that no "fake" edges will be followed far out of the way of the real contour. It also, however, means that some small shapes will be missed, since taking shortcuts may be cheaper than strictly following the contour.

Intermediate Evaluation

A smaller evaluation was performed towards the end of the 2D live-wire development. While some metrics were gathered, t-tests on these showed that no significant conclusions could be drawn with respect to time-on-task and accuracy. The main purpose of this evaluation was to gather comments and improvement ideas on the 2D live-wire tool. The comments gathered during the intermediate evaluation, and the actions taken to improve based on them, can be found in table 4.1.


Comment: Long loading times.
Solution: Implemented threaded convolution.

Comment: Sometimes difficult to know when loading is finished.
Solution: Improvements were made to the mouse icon updates.

Comment: Users will follow the contour with the mouse even if they do not have to.
Solution: Possible to make points passed over by the mouse cheaper; this was not implemented, however.

Comment: Expect a seed point on the contour no matter where the click is.
Solution: Possibly because users were unaccustomed to live-wire. Nothing was implemented.

Comment: Some areas of objects are easier to segment with manual segmentation.
Solution: Added functionality to hold down the mouse button and manually segment parts of objects.

Comment: Too many clicks.
Solution: Path cooling was implemented as an answer to this comment.

Comment: Too many points to move when not happy with the result.
Solution: A possible solution would be to lower the sensitivity of the Douglas-Peucker algorithm.

Comment: Too few movable points to be able to get the right contour when not happy with the result.
Solution: A possible solution is to add the ability to add a point on the contour by holding a key down and clicking.

Table 4.1: Comments gathered from the intermediate evaluation and actions taken based on those comments.

Figure 4.6: Example results from using auto-wire. The images to the far left and right were manually segmented in an added time of 15 seconds. The middle contours were auto-segmented with auto-wire in a total time of 3.5 seconds.

3D Auto-Wire

Auto-wire, as the 3D live-wire tool is referred to in this thesis, was implemented using the interpolated seed point method described in chapter 2.5. Auto-wire works in five steps:

1. User segments two contours with 2D live-wire.

2. From the two contours, each seed point creates a new seed point in the next slice. This is done by creating a seed point 1/-1 unit in the direction of the vector normal to the viewed plane.

3. All new seed points are moved to the cheapest point in a small surrounding area.

4. The new seed points are connected using the same techniques as in 2D live-wire.

5. Steps 2-4 are repeated until every slice between the manually segmented contours has been processed.

Steps 1-4 are summarized in figure 2.7.
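Steps 2 and 3 can be sketched as a small search around the pushed-forward seed. Here `cost` stands in for the live-wire weight map of the next slice; the function, the window radius, and all names are illustrative assumptions, not the thesis code.

```python
# Sketch of seed propagation: push a seed one slice along the view normal,
# then snap it to the cheapest pixel in a small surrounding window.

def propagate_seed(seed, cost, radius=2):
    """Return the cheapest (x, y) position near `seed` in the next slice."""
    x0, y0 = seed
    candidates = [(x0 + dx, y0 + dy)
                  for dx in range(-radius, radius + 1)
                  for dy in range(-radius, radius + 1)]
    return min(candidates, key=lambda p: cost(*p))

# Toy cost surface whose minimum sits at (3, 4):
cheapest = propagate_seed((2, 3), lambda x, y: (x - 3) ** 2 + (y - 4) ** 2)
print(cheapest)  # (3, 4)
```

Snapping to a local cost minimum is what lets the seed points track a tumor boundary that drifts slightly from slice to slice.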

Note that while the user always begins with two manual live-wire segmentations, these do not have to be on the very top and bottom slices of the tumor. These two can be a few slices apart. Adding a third manual live-wire segmentation a few slices away from the first two would again begin an automatic segmentation process, connecting the new contour with the previous. When the process was complete, the user could add more manually segmented contours outside the segmented volume to get more auto-wired contours. It was also possible to remove inaccurate contours and replace them with manually segmented contours. The auto-wired contours could be interacted with in the same way as the manually segmented ones. The example in figure 4.6 is segmented using auto-wire.

3D Turtle-Wire

The mapping-based 3D live-wire tool is called turtle-wire in this thesis because it uses the turtle map algorithm. This tool worked in five steps:

1. The user segments one or more contours using 2D live-wire.

2. The user rotates the image 90 degrees around the y-axis to view the image from an orthogonal angle.

3. The user segments one or more contours using 2D live-wire.

4. The user rotates the image 90 degrees around the x-axis to view the image from a third orthogonal angle.

5. Mapping-based 3D live-wire auto-segments the object from that third angle.

A contour was segmented every millimeter in the image, so the volume was calculated by adding the areas together. The automatically segmented contours could be interacted with in the same way as the 2D live-wire contours. As in the auto-wire tool, the automatically segmented contours could be removed and replaced with manually segmented 2D live-wire contours.

4.3 Evaluation

The two tools were evaluated according to the method described in section 3.3. The results from these tests were compared to manual segmentation performed on the same cases. All results were analyzed using t-statistics. All collected samples can be observed in figure 4.7 (case 1) and figure 4.8 (case 2).

Time-On-Task Results

In the first case, the two semi-automatic tools had a better mean time-on-task than manual segmentation. Manual segmentation had a mean time-on-task of 290 seconds, auto-wire 116 seconds, and turtle-wire 93 seconds. This is equal to 40% and 32% (auto-wire and turtle-wire respectively) of the mean time-on-task of manual segmentation. In the second case, only turtle-wire had a better mean time-on-task than manual segmentation. Manual segmentation had a mean time-on-task of 680 seconds, auto-wire 690 seconds, and turtle-wire 364 seconds. The auto-wire samples had one extremely slow value, which can be observed in figure 4.8. All mean times-on-task can be observed in figure 4.9 and tables 4.2 and 4.3.

Accuracy Results

In the first case, manual segmentation had a mean accuracy of 1.019, auto-wire 0.832, and turtle-wire 0.868. None of the semi-automatic tools had better accuracy than manual segmentation, but all were within acceptable range. Manual segmentation had a mean relative error of 0.042, auto-wire of 0.168, and turtle-wire of 0.132. All were within acceptable range. In the second case, manual segmentation had a mean accuracy of 1.114, auto-wire 0.936, and turtle-wire 0.989. In this case, both semi-automatic tools had better mean accuracy than manual segmentation. Manual segmentation had a mean relative error of 0.114, auto-wire of 0.074, and turtle-wire of 0.107. In this regard, manual segmentation and turtle-wire were roughly equal while auto-wire was more accurate. Mean accuracy and relative error can be observed in figures 4.10 and 4.11. All mean results can be observed in table 4.3.

Figure 4.7: Results from case 1, the less difficult of the cases.

                     Mean Time  Mean Accuracy  Mean Relative Error
Manual Segmentation  290        1.019          0.042
Auto-Wire            116        0.832          0.168
Turtle-Wire          93         0.868          0.132

Table 4.2: Mean values of the results of case 1. Relative error was calculated as |Z_A - 1|.

                     Mean Time  Mean Accuracy  Mean Relative Error
Manual Segmentation  680        1.114          0.114
Auto-Wire            691        0.936          0.074
Turtle-Wire          364        0.989          0.107

Table 4.3: Mean values of the results of case 2.

Evaluation Result Significance

Two-tailed t-tests were used to analyze the significance and reliability of the results. The alpha value was selected as 0.05 as this is the common practice in clinical studies. All probability values from the t-tests on the time-on-task results can be found in table 4.4. The probability values tell how likely the observed differences would be if there were no real difference between the groups. A very small probability value indicates that there is a difference. When telling if a significant result can be observed, a probability value of 0.05 is commonly used as a threshold. This gives 95% confidence that a significant difference can be observed.

Figure 4.8: Results from case 2, the more difficult of the cases.

Figure 4.9: Mean time-on-task in case 1 and 2.

In the first case, significant differences in means were found when comparing time-on-task between manual segmentation and semi-automatic segmentation. Both tools performed better than manual segmentation. However, no significant difference was found when comparing time-on-task between the two semi-automatic tools in the first case. In the second case, turtle-wire had a significant difference in time-on-task compared to the other two methods, outperforming both. No significant difference in time-on-task was found when comparing manual segmentation with auto-wire in this case. All probability values from the t-tests on the accuracy results can be found in table 4.5.

When comparing accuracy in the two cases, significant differences in means were found when comparing manual segmentation with the two semi-automatic tools. The semi-automatic tools outperformed manual segmentation in the more difficult case 2, while manual segmentation was more accurate in case 1. No significant differences in accuracy could, however, be observed between the two tools. When comparing relative error, significant results were only found when comparing manual segmentation to the two semi-automatic tools in the first case. Manual segmentation was far more accurate in this case. All probability values from the t-tests on relative error can be observed in table 4.6. All significant results, except the second case comparison of time-on-task between the two semi-automatic tools, were significant by a very wide margin, making them reliable results.

Figure 4.10: Mean accuracy and relative error in case 1.

Figure 4.11: Mean accuracy and relative error in case 2.

        Manual vs Auto-Wire  Manual vs Turtle-Wire  Auto-Wire vs Turtle-Wire
Case 1  1.266E-5             1.803E-6               0.176
Case 2  0.937                3.635E-3               0.023

Table 4.4: P-values of the t-tests when comparing time-on-task.

        Manual vs Auto-Wire  Manual vs Turtle-Wire  Auto-Wire vs Turtle-Wire
Case 1  1.026E-8             1.944E-6               0.1
Case 2  1.336E-4             2.655E-2               0.324

Table 4.5: P-values of the t-tests when comparing accuracy.

        Manual vs Auto-Wire  Manual vs Turtle-Wire  Auto-Wire vs Turtle-Wire
Case 1  1.394E-7             0.301E-3               0.100
Case 2  0.184                0.782                  0.306

Table 4.6: P-values of the t-tests when comparing relative error.


5 Discussion

This chapter will discuss the result and the method. The work in this thesis will also be discussed in a wider context.

5.1 Results

The results show that a semi-automatic 3D live-wire tool can be faster than manual segmentation. In three out of four comparisons, the semi-automatic tools performed better than manual segmentation. In the fourth, no significant difference could be observed. That fourth comparison compared the auto-wire tool to manual segmentation on the difficult test case. One sample among the tests in this group had a significantly longer time-on-task than the rest, clocking in at 1341 seconds compared to the second longest time of 918 seconds. Discounting this sample from the results would lower the mean value of this case to 598 seconds. T-tests show, however, that still no significant difference could be observed between these groups.

One thing that made the semi-automatic tools drop in accuracy in case 1 was that the tumor was split in one end. None of the tools were made to handle such cases. This would especially influence the result of auto-wire, since the tumor was not split in the final view segmented by turtle-wire. The split part missed by auto-wire summed up to 4.8% of the total volume. Adding this to the mean accuracy of the results from auto-wire puts that tool at an improved 0.89, with a relative error of 0.11. Since the tumor was not split in the third orthogonal view, turtle-wire did not receive this penalty. Had the tumor been split in the final view, turtle-wire would have received a similar penalty.

The results also show that while accuracy does not always drop when using the semi-automatic tools, when it does drop it remains above the acceptable limit. These two main points indicate that the use of semi-automated live-wire tools can, indeed, make the process of segmenting tumors faster while staying within acceptable accuracy. The results in this thesis say nothing about semi-automatic tools in general, however. It is possible that some other tool (e.g. GrabCut) could perform equally well, but this would have to be tested in a similar fashion.


5.2 Method

While the semi-automatic base was chosen from a few well-defined bullet points, a longer research period could have revealed a base better suited than live-wire. The author had a pre-defined time frame for research and stuck to that. Implementing the tools directly into Sectra’s PACS system gave the tools a lot of pre-built functionality such as interaction, image rendering, and drawing functions. It did, however, bring some overhead with it. Implementing the tools from a clean slate could influence the result, but that would have been too large an undertaking for this thesis. The evaluation method used proven metrics and two-tailed t-tests to solidify the results. This is discussed further in the following subsections.

Replicability

The tools created in this thesis were implemented in Sectra’s PACS system, a very vast system. Implementing the same tools somewhere else could introduce less overhead, affecting the result. The filters used and the parameters set are clearly defined in section 4.2; using the same kernels should give the same results. The evaluation was performed on two highly varying cases, returning somewhat differing results. Evaluating on other cases than the two used in this thesis may outline the effect that case difficulty has on the result.

Reliability

Because of the limited sample size, all results were analyzed using two-tailed t-tests, which take sample size into account when analyzing whether a significant difference between two results can be found. All results were presented with these levels of significance in mind. A negative result, that no significant difference was found, does not mean that one cannot be found. More extensive research on the results where no significant difference was found could either confirm that there is no difference or reveal significant differences.

Validity

The choice of metrics in this thesis fit the research questions well. Time-on-task is a widely used way of measuring how easy a tool is to use. Accuracy and relative error are straightforward ways of measuring how precise the segmentations were. Since the participating test subjects got a chance to familiarize themselves with the tools before evaluation, unfamiliarity should not be a major factor in the result. Higher validity could have been achieved if the test subjects had been actual physicians instead of developers.

5.3 The Work In A Wider Context

Responsibility and reliability are very important in medicine. A semi-automatic tool, as opposed to a fully automatic tool, gives the users a greater impact on the result. In the end, the result will be what the user chooses it to be, even if the tool helped by providing partial results to choose from. Letting the users choose the result instead of giving one to them is more easily defensible ethically. In the end, the users will be responsible for the result and how they use it. A probable use case for the tools created in this thesis would be to get a sense of the size of something, rather than the exact volume. That use case is represented in this thesis by the accepted accuracy level of 80%, rather than by striving for perfect accuracy.

5.4 Future Work And Improvements

One main improvement, and probably the first point to address, is split objects. The fact that the tools created in this thesis did not handle such cases had a heavy influence on the result.


A second improvement could be to merge the two tools. One could let turtle-wire be the main tool and then let the user improve the result using auto-wire. This would work very well since the result of turtle-wire is contours in one projection only. This could speed up the process of correcting the result of turtle-wire, saving precious seconds of segmentation time.

Adding other edge detection techniques could potentially improve accuracy. Most edge detection techniques require convolution of the image, which is very costly. ReSharper’s analysis tools showed that over 40% of the waiting time in turtle-wire was due to convolution [26]. Refactoring the code to move convolution from C# to C++ could allow for faster loading times or adding more edge detection techniques without a time penalty.


6 Conclusion

To conclude, the answers to the research questions in section 1.3 are presented below, along with a few general conclusions.

6.1 Answers To Research Questions

Here are the answers to the research questions stated in section 1.3, as concluded from this work.

Question 1: How can image segmentation techniques be used to create a 2D contour extraction tool suitable for medical imaging?

There are several techniques usable in medical imaging: the Mumford-Shah functional, region growing, GrabCut, GrowCut, and live-wire can all be used if the noise typical of medical imaging is handled. Of these, live-wire is the most proven technique.
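As context for why live-wire is well proven: its core is a shortest-path search (Dijkstra's algorithm) over the pixel grid, with low traversal costs on strong edges, so the extracted contour segment snaps to object boundaries. The sketch below is a minimal, hypothetical illustration of that core, not the implementation used in this thesis; real live-wire tools use 8-connectivity and richer cost terms such as gradient direction and Laplacian zero-crossings.

```python
import heapq

def livewire_path(cost, seed, target):
    """Minimal live-wire core: Dijkstra's algorithm on a pixel grid.
    cost[y][x] should be low on strong edges (e.g. inverted gradient
    magnitude); returns the minimum-cost pixel path seed -> target."""
    h, w = len(cost), len(cost[0])
    dist = {seed: 0.0}
    prev = {}
    heap = [(0.0, seed)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == target:
            break
        if d > dist.get((y, x), float("inf")):
            continue  # stale queue entry, already settled cheaper
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connectivity
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny][nx]
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(heap, (nd, (ny, nx)))
    path, node = [], target
    while node != seed:  # backtrack recorded predecessors
        path.append(node)
        node = prev[node]
    path.append(seed)
    return path[::-1]
```

In an interactive tool the search is run from the latest seed point, and the path to the cursor position is redrawn live as the user moves the mouse.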

Question 2: How can a 2D contour extraction tool be extended to a 3D contour extraction tool?

Live-wire can be extended either through seed point interpolation or through the use of a mapping algorithm like the one used by Poon [13].
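The interpolation variant can be sketched very simply: given matching seed points on two user-annotated slices, seeds for the slices in between are produced by linear interpolation and can then be snapped to edges by live-wire. This is a hypothetical simplification (names and the equal-count/matching-order assumption are mine), not the thesis implementation.

```python
def interpolate_seeds(seeds_a, seeds_b, slice_a, slice_b):
    """Linearly interpolate corresponding (x, y) seed points between two
    annotated slices, returning {slice_index: seed_list} for every slice
    strictly in between. Assumes both slices carry the same number of
    seeds in matching order."""
    assert len(seeds_a) == len(seeds_b)
    result = {}
    for s in range(slice_a + 1, slice_b):
        t = (s - slice_a) / (slice_b - slice_a)  # fractional position
        result[s] = [
            (round(ax + t * (bx - ax)), round(ay + t * (by - ay)))
            for (ax, ay), (bx, by) in zip(seeds_a, seeds_b)
        ]
    return result
```

Each interpolated seed list then serves as input to a per-slice 2D live-wire pass, which corrects the straight-line estimate toward the actual boundary.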

Question 3: Can the described tools save users time compared to manual contour outlining, within acceptable (80%) accuracy?

Both a 3D live-wire tool based on seed point interpolation and one based on seed point mapping can improve segmentation times on tumors while maintaining an acceptable level of accuracy. Accuracy is, especially in easier cases, better with manual segmentation, but accuracy when using the semi-automatic tools is still within the acceptable range. More research is needed on what makes a case easy or difficult for manual and semi-automatic segmentation.

6.2 General Conclusion

Live-wire can be extended to three dimensions using seed point interpolation or seed point mapping. These two methods can save time when segmenting bodies in medical images while maintaining an acceptable accuracy of at least 80%. How well the tools perform is case dependent, but a 3D live-wire tool using seed point mapping generally performs better than one using seed point interpolation.


Bibliography

[1] Martin Urschler, Heinz Mayer, Regine Bolter, and Franz Leberl. “The LiveWire Approach for the Segmentation of Left Ventricle Electron-Beam CT Images”. In: 26th Workshop of the Austrian Association for Pattern Recognition: Vision with Non-Traditional Sensors 160 (2002), pp. 319–326.

[2] Xiaoli Zhang, Xiongfei Li, and Yuncong Feng. “A medical image segmentation algorithm based on bi-directional region growing”. In: Optik - International Journal for Light and Electron Optics 126.20 (2015), pp. 2398–2404.

[3] Carsten Rother, Vladimir Kolmogorov, and Andrew Blake. “"GrabCut": interactive foreground extraction using iterated graph cuts”. In: ACM Transactions on Graphics 23.3 (2004), pp. 309–314.

[4] D. Mumford and J. Shah. “Optimal approximation by piecewise smooth functions and associated variational problems”. In: Comm. Pure Appl. Math. 42 (1989), pp. 577–685.

[5] Vladimir Vezhnevets and Vadim Konouchine. “GrowCut - Interactive Multi-Label N-D Image Segmentation By Cellular Automata”. In: Graphicon (2005), pp. 150–156.

[6] Eric N. Mortensen and William A. Barrett. “Interactive Segmentation with Intelligent Scissors”. In: Graphical Models and Image Processing 60.5 (1998), pp. 349–384.

[7] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. 3rd edition. Massachusetts Institute of Technology, 2009.

[8] Jun Zhang, Rong Yang, Xiaomao Liu, Hao Yue, Hao Zhu, Dandan Tian, Shu Chen, Yiquan Li, and Jinwen Tian. “Livewire based single still image segmentation”. In: Proceedings of SPIE - The International Society for Optical Engineering. Vol. 8002. 2011, pp. 80021M-1–80021M-7.

[9] D Marr and E Hildreth. “Theory of Edge Detection”. In: Proceedings of the Royal Society of London . Series B 207.1167 (1980), pp. 187–217.

[10] Zewei Zhang, Yue Ma, and Li Guo. “An interactive method based on livewire for segmentation of facial nerve in NMR images”. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 9428 (2015), pp. 607–614.

[11] Bing Wang and ShaoSheng Fan. “An Improved CANNY Edge Detection Algorithm”. In: 2009 Second International Workshop on Computer Science and Engineering (2009), pp. 497–500.
