
DEGREE PROJECT IN COMPUTER SCIENCE, SECOND LEVEL
MONTPELLIER, FRANCE 2015

Needle Localization in Ultrasound Images

FULL NEEDLE AXIS AND TIP LOCALIZATION IN ULTRASOUND IMAGES USING GPS DATA AND IMAGE PROCESSING

KILIAN DEMEULEMEESTER

KTH ROYAL INSTITUTE OF TECHNOLOGY

SCHOOL OF COMPUTER SCIENCE AND COMMUNICATION (CSC)


Kilian DEMEULEMEESTER kiliande@kth.se

January 12, 2016

Master's thesis at CSC
Supervisor: Patric JENSFELT
Examiner: Stefan CARLSSON

Degree Project in Computer Science, Second Level
Master's program in Systems, Control & Robotics

Master's thesis carried out at the Montpellier Laboratory of Informatics, Robotics and Microelectronics (LIRMM)
34000 Montpellier, FRANCE


Abstract

Many medical interventions involve ultrasound-based imaging systems to safely localize and navigate instruments into the patient's body. To facilitate visual tracking of the instruments, we investigate the techniques and methodologies best suited for solving the problem of needle localization in ultrasound images.

We propose a robust procedure that automatically determines the position of a needle in 2D ultrasound images. Such a task is decomposed into the localization of the needle axis and its tip. A first estimation of the axis position is computed with the help of multiple position sensors, including one embedded in the transducer and another in the needle. Based on this, the needle axis is computed using a RANSAC algorithm. The tip is detected by analyzing the intensity along the axis and a Kalman filter is added to compensate for measurement uncertainties.

The algorithms were experimentally verified on real ultrasound images acquired by a 2D scanner imaging a portion of a cryogel phantom that contained a thin metallic needle. The experiments show that the algorithms are capable of detecting a needle with millimeter accuracy. A computational time on the order of milliseconds permits real-time needle localization.


Contents

1 Introduction
  1.1 Motivation
  1.2 Thesis organization

2 Background
  2.1 Technical background
  2.2 State of the art
    2.2.1 Ransac
    2.2.2 Machine-learning
    2.2.3 Kalman
    2.2.4 PIP
    2.2.5 Speckle tracking
    2.2.6 Doppler
    2.2.7 Sensor
    2.2.8 Tip localization methods

3 Input Data
  3.1 Acquisition
  3.2 Assumption

4 Region of interest
  4.1 System description
    4.1.1 Frames
    4.1.2 Conversion
  4.2 ROI extraction
  4.3 Algorithm

5 Axis localization
  5.1 Normalization and thresholding
    5.1.1 Normalization
    5.1.2 Thresholding
  5.2 RANSAC
    5.2.1 Line fitting
    5.2.2 Cost function
    5.2.3 Algorithm

6 Tip localization
  6.1 Blob filling
    6.1.1 Blob construction
    6.1.2 Tip detection
  6.2 Kalman filter
    6.2.1 Model
    6.2.2 Noise tuning

7 Experiments & Results
  7.1 Experiments
  7.2 Results
    7.2.1 Ground truth data
    7.2.2 Results generation
    7.2.3 Scenario 1: Needle insertion tracking
    7.2.4 Scenario 2: Poor quality acquisition
    7.2.5 Scenario 3: Breathing
    7.2.6 Performances

8 Conclusion

Bibliography

A Appendix
  A.1 Basic functions
    A.1.1 Distance to line
    A.1.2 Histogram construction

List of Figures

2.1 Visualization of the axis error ε_axis and the tip error ε_tip
2.2 Needle being inserted into a phantom (image courtesy of www.ultrasonix.com)
2.3 Ultrasound transducers (image courtesy of www.ultrasonix.com)
2.4 SonixGPS scheme (image courtesy of www.ultrasonix.com)
2.5 Line fitting with outliers using RANSAC (blue – inliers | red – outliers)
2.6 The PIP transform
2.7 Doppler ultrasound method
2.8 Sensor method
3.1 A 2D ultrasound image
4.1 Region Of Interest
4.2 GPS Frames
4.3 Euler angles
5.1 Normalization and thresholding of the image
5.2 Needle axis computed by the RANSAC algorithm (blue – axis | red – inliers); computed with 500 iterations in ∼350 ms
6.1 Steps of tip detection
7.1 Experimental setup
7.2 Tip localization in Scenario 1: needle insertion tracking. The figure depicts the evolution of the tip position during the insertion of the needle. The error between the ground-truth position and the estimated position is smaller (on average) with the Kalman filtering, which discards the few misestimated positions.
7.3 Tip localization in Scenario 3: breathing (the needle moves back and forth almost periodically). The figure depicts the evolution of the tip position during an almost periodic movement. The error between the ground-truth position and the estimated position is larger (on average) with the Kalman filtering, as it tends to “resist” the shakiness of the movement. Signal-to-noise ratio ≈ 90%.

Acknowledgements

I am grateful to a number of people who helped me to pursue this research. First and foremost, my profound thanks and appreciation are addressed to my supervisor Nabil Zemiti for his encouragement, help and kind support. His invaluable technical and editorial advice, discussions and guidance were a real support in completing this work.

I would like to thank Prof. Philippe Poignet, the head of the DEXTER team and head of the Robotics Department at LIRMM, who was actively interested in my work. His skills and uncompromising quest for excellence create an inspiring working atmosphere.

My sincere thanks belong to Prof. Patric Jensfelt from KTH Royal Institute of Technology in Stockholm, who was my advisor in Sweden.

The research presented in this thesis was carried out at LIRMM in Montpellier within the DEXTER team. My grateful thanks go to all of its members for their help and the nice atmosphere inside the laboratory.


Chapter 1

Introduction

A wide variety of medical diagnostic and therapeutic procedures involve inserting long thin objects into the human body (needles, electrodes, tubes, etc.) to perform different tasks: sampling cells or tissues, introducing substances into tissues, measuring electrical activity in the cortex, etc. Such procedures require highly accurate localization of the tool.

Different types of apparatus have been developed to solve the problem of tool localization inside the human body.

Stereotactic surgery makes use of a rigid structure fixed to the patient's body to create a new frame of reference relative to the patient, delivering recognizable landmarks in the images and serving as a stable mounting base and instrument guide. However, difficulties in setting up a reliable frame of reference have limited most of its applications to brain surgery.

To avoid the limitations and patient discomfort associated with the stereotactic frame, frameless techniques have been developed. Computed tomography (CT) allows the practitioner to see inside the human body: a large series of 2D radiographic images is taken around a single axis of rotation and combined to create 3D images. However, CT-guided organ punctures require a great number of CT scans (increasing procedure time) and the scanner takes up a lot of space around the patient (blocking access to the body). The same situation is encountered with MRI imaging, with, in addition, the incompatibility with metal objects.

Studies have shown the benefit of ultrasound-guided puncture [6], [16]. Ultrasound imaging is particularly suited for tool localization and tracking because of its short acquisition time, its compatibility with metal objects and the absence of ionizing radiation¹. Moreover, high-resolution ultrasound scanners are portable and affordable.

The objective of this thesis is to solve the problem of needle localization in ultrasound images by proposing robust algorithms. We provide a summary of existing localization methods and propose three algorithms that together identify the needle. Our principal contribution is to combine position sensors with image analysis to localize the needle axis.

¹Ionizing radiation is radiation that carries enough energy to free electrons from atoms or molecules, i.e., it is harmful and potentially lethal to living organisms.

1.1 Motivation

Tools such as metallic electrodes, injection needles and miniature tips that serve for sampling tissue cells, introducing fluids into the body, or measuring cell activity are routinely employed. In order to access deeper structures, practitioners traditionally relied on surface markings to guide the needle into the correct position.

One common intervention is brachytherapy, in which a radiation source is placed inside or next to the area requiring treatment. It is commonly used as an effective treatment for cervical, prostate, breast, and skin cancer. Another example is the ultrasound-guided breast biopsy, which is performed to remove some cells and examine them under a microscope to determine a diagnosis.

In those interventions, needle misplacement has the potential to damage structures like arteries, nerve bundles and the pleura. Accurate measurement of the location of objects in medical images is key to improving the quality of such procedures.

In this context, ultrasound image guidance offers several advantages, including a greater likelihood of success and better visualization of the target. Moreover, images are produced at video rate, allowing either real-time analysis by the practitioner or automatic steering. Lastly, the purchase and operational costs are relatively modest (compared to other medical imaging modalities) and the compactness of the probes facilitates integration into the practitioner's workspace.

1.2 Thesis organization

This thesis is organized as follows. We start in Chapter 2 with a description of the research context and of the current methods achieving needle localization within ultrasound images. Chapter 3 consists of a description of the data acquisition process. The core of the thesis consists of Chapters 4 to 6. First, we describe how sensor positions are used to compute the region of interest inside the ultrasound images. We then deal with the axis localization and explain its main algorithm (RANSAC). Once the needle axis is known, the tip localization is achieved with the method described in Chapter 6. The experimental setups and their results can be found in Chapter 7. The thesis is concluded with Chapter 8.


Chapter 2

Background

2.1 Technical background

Medical ultrasound makes it possible to view internal body structures including tendons, muscles, joints, vessels and internal organs by exploiting the backscattering of energy at the boundaries between tissues. Sound waves are typically formed by a piezoelectric transducer and transmitted into the body along pre-determined trajectories, each of them forming a narrow ultrasound beam. The returning sound wave vibrates the transducer, which turns the vibrations into electrical pulses that are processed and transformed into a digital image. Typically, the frequencies used in medical practice are in the range from 1 to 20 MHz and the image acquisition rate (frame rate) is usually in the range from 5 to 80 frames per second [2].

However, object detection in ultrasound images is made difficult by the following issues.

Absorption Some structures or objects absorb sound energy when sound waves collide with them. Part of the absorbed energy is transformed into heat; that energy is said to have been “lost”.

Artifacts Reverberation and refraction of ultrasound waves create artifacts which distort the object's appearance.

Attenuation The intensity of the reflected ultrasound waves decreases with the travel distance. Thus, the deeper the object, the lower the intensity of the reflected ultrasound waves.

Speckle Medical ultrasound images are characterized by a granular noise pattern called speckle, which reduces the signal-to-noise ratio.


Resolution Practitioners use needles with a diameter of several hundred microns. Hence, their dimension is comparable to the resolution of the ultrasound system.

Lastly, this thesis will often use the following notions.

Phantom A phantom is a tissue-mimicking object with well-known properties (homogeneous, known density) used in medical research (see Figure 2.2).

2D-3D transducers Almost all ultrasound transducers (also called probes) are composed of an array of small piezoelectric crystals. The physical layout of the crystals on the transducer determines the size of the field of view (FOV) and its dimensionality (2D or 3D). Motorized 3D probes consist of a rectangular layout of crystals sweeping back and forth (see Figure 2.3).

GPS The Ultrasonix company develops the SonixGPS technology, which consists of a combination of position sensors: one is situated in the probe, one is inside the needle (along the axis) and the last is fixed to the ultrasound machine (providing the frame of reference) (see Figure 2.4). Ultrasonix refers to these sensors as GPS devices; we will use the same terminology.

Localization accuracy The results given experimentally by the localization algorithms have to be measured. We will refer to:

• Axis error: $\varepsilon_{\mathrm{axis}} = \max\left(\left\|\overrightarrow{Q_1 E}\right\|,\ \left\|\overrightarrow{Q_2 \hat{T}}\right\|\right)$, where $\|\cdot\|$ is the Euclidean distance and $Q_1$, $Q_2$, $E$, $\hat{T}$, $T$ are as in Figure 2.1.

• Tip error: $\varepsilon_{\mathrm{tip}} = \left\|\overrightarrow{T \hat{T}}\right\|$, the Euclidean distance between the actual tip position (ground truth) and the estimated one.
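To make the two metrics concrete, here is a minimal Python sketch (assuming, following Figure 2.1, that the estimated axis endpoint E is paired with Q_1 and the estimated tip T̂ with Q_2; the pairing and all function names are illustrative):

```python
import numpy as np

def axis_error(q1, q2, e, t_hat):
    """eps_axis: the larger of the endpoint distances Q1->E and Q2->T_hat."""
    return max(np.linalg.norm(e - q1), np.linalg.norm(t_hat - q2))

def tip_error(t, t_hat):
    """eps_tip: Euclidean distance between true and estimated tip."""
    return float(np.linalg.norm(t_hat - t))
```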

2.2 State of the art

Many techniques have been developed to achieve needle localization in ultrasound images. In this section, we summarize the principle of each technique and discuss experimental results in terms of localization accuracy. Needle localization can be seen as a two-step process: localization of the needle axis (a line inside the US image) and localization of its tip (a point inside the US image). Image processing algorithms rely on the fact that the needle is the brightest object in the image.

2.2.1 Ransac

The RANSAC (RAndom SAmple Consensus) algorithm is by far the most used technique for axis localization. It is an iterative method to estimate the parameters of a mathematical model from a set of observed data which contains outliers (see Figure 2.5).

Figure 2.1: Visualization of the axis error ε_axis and the tip error ε_tip

It is a non-deterministic algorithm because it produces a reasonable result only with a certain probability, this probability increasing as more iterations are allowed.

The algorithm was first published by Fischler and Bolles in 1981 [10] and has been widely used in signal and image processing. It is frequently used in axis localization because of its few assumptions about the data (direction, bending) and its good results [2, 3, 7, 19, 20, 21, 22]. The algorithm is discussed further in Chapter 5.

The results show that the needle axis can be determined by the algorithm with an accuracy of around 0.5 mm (depending on the setup, whether the acquisition is 3D or not, the depth, etc.) with a processing time smaller than 1 s.

2.2.2 Machine-learning

Machine-learning algorithms have been proved useful in the context of image recognition and classification.

Charles R. Hatt et al. introduced a machine-learning based method for needle seg- mentation in 2D ultrasounds images in 2014 in [12]. They successfully localized the needle in 86.2% of the images, with a mean targeting error 0.48 mm – assuming that the needle orientation was known a priori. Their method was based on a statistical boosting approach to train a pixel-wise classifier for needle segmentation and their results showed enhanced localization accuracy (compared to other methods such as RANSAC).

However, machine-learning methods relies on a training set of annotated images.

Having such a set (at least ≈ 1000 images annotated by an expert) at disposal is really difficult.

2.2.3 Kalman

A Kalman filter [17] is often added to increase the stability of the detection algorithms and to enable applications in dynamic situations. The Kalman filter is one of the most used filters among the Bayes filters. It is an optimal recursive data processing algorithm.

Figure 2.2: Needle being inserted into a phantom (image courtesy of www.ultrasonix.com)

Figure 2.3: Ultrasound transducers: (a) 2D probe, (b) 3D probe (image courtesy of www.ultrasonix.com)

It was first published in 1960 by Kalman [13]. It represents beliefs by the moments representation: at time t, the belief is represented by the mean μ_t and the covariance Σ_t. This filter can predict the future state of a system, so it is very useful for predicting the position of the needle. It is widely used to improve needle detection accuracy [1, 7, 21, 22]. The algorithm is discussed further in Chapter 6.

Troy K. Adebar et al. [1] report, over 10,000 simulation trials, an average error (mean ± standard deviation) in final tip placement of 2.26 ± 1.30 mm using a Kalman filter, and 11.85 ± 7.36 mm using the unfiltered segmentation results obtained with simple Doppler image segmentation, that is to say an improvement of ≈ 80% in precision.

2.2.4 PIP

PIP (Parallel Integral Projection) is a technique used to localize straight cylindrical objects in 3D images. It is a transform that maps an image function $I : \mathbb{R}^3 \rightarrow \mathbb{R}$ representing volume data to a function $P_I : \mathbb{R}^4 \rightarrow \mathbb{R}$ describing its projection as a function of the 2D displacement $(u, v)$ and the projection direction determined by two angles $(\alpha, \beta)$:

$$P_I(u, v, \alpha, \beta) = \int_{-\infty}^{+\infty} I\left(R(\alpha, \beta) \cdot (u, v, \tau)^T\right) d\tau$$

where $R(\alpha, \beta)$ is the rotation matrix representing a rotation around the x-axis by an angle $\alpha$ and around the y-axis by an angle $\beta$.

The function $P_I$ can be used to find a straight electrode. Since the intensity along the electrode axis is larger than the background intensity, the PIP transform is maximal for the $(u, v)$ and $(\alpha, \beta)$ depicting the axis. More intuitively, it consists of integrating the image function along every possible line of the 3D space.

This method has proven to give good results but has a huge processing time: in [4], the average error in axis estimation is 0.15−0.2 mm with a computational time of 60 s. In [18, 20], the speed-up is threefold compared to the original PIP method with the same accuracy. Therefore this technique achieves really good axis positioning, but its high computational time makes it unusable in real-time scenarios.

Figure 2.6: The PIP transform – the integral of the image intensity function I is calculated along a line given by the point Q = (u, v) and the angles (α, β)
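To make the transform concrete, here is a minimal numerical sketch of P_I for a single line (assuming a NumPy volume, linear interpolation, zero intensity outside the volume, and sampling bounded by the volume size; the exhaustive maximization over (u, v, α, β) is omitted and all names are illustrative):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def pip(volume, u, v, alpha, beta, n_samples=200):
    """Integrate the volume intensity along the line given by the offset
    (u, v) and the direction obtained by rotating around x by alpha and
    around y by beta; samples outside the volume contribute 0."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    R = Ry @ Rx
    # Sample points R(alpha, beta) . (u, v, tau)^T along the line.
    tau = np.linspace(-max(volume.shape), max(volume.shape), n_samples)
    pts = R @ np.stack([np.full_like(tau, float(u)),
                        np.full_like(tau, float(v)), tau])
    vals = map_coordinates(volume, pts, order=1, mode='constant', cval=0.0)
    return vals.sum() * (tau[1] - tau[0])
```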

2.2.5 Speckle tracking

Speckle tracking methods were first published by Robinson et al. in 1982 [8] and completed by Bohs et al. in 2000 [5]. These methods track the backscattered echoes produced by ultrasonic scatterers in blood or tissues. Such speckle patterns remain relatively constant as the blood or tissue moves, and can thereby be tracked using pattern-matching techniques. They were first used to measure blood velocity with ultrasound and then adapted to measure the speed of the needle, used as an input for a filter such as the Kalman filter [21, 22]. The pattern matching can be performed in one, two, or three dimensions, depending on the dimensions of the ultrasound data available.

First, a small region of the image is chosen as the kernel region, selected according to the coordinates of the estimated tip position. Then, a larger region is chosen as the searching region in the second volume. During the tracking procedure, the kernel region slides voxel by voxel in the searching region, and the normalized cross-correlation (NCC) is used to compare the similarity between the kernel region and the searching region.

The NCC (in 3D) is given by, with $\rho$ the correlation coefficient, $X_0$ the kernel region and $X_1$ the searching region:

$$\rho(a,b,c) = \frac{\displaystyle\sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k=1}^{p} \left[X_0(i,j,k) - \bar{X}_0\right]\left[X_1(i+a, j+b, k+c) - \bar{X}_1\right]}{\sqrt{\displaystyle\sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k=1}^{p} \left[X_0(i,j,k) - \bar{X}_0\right]^2 \;\sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k=1}^{p} \left[X_1(i+a, j+b, k+c) - \bar{X}_1\right]^2}}$$
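As a sketch, the NCC and the sliding search could be written as follows (assuming NumPy arrays for the kernel and searching regions, and taking the mean of the shifted sub-block for X̄₁; all names are illustrative):

```python
import numpy as np

def ncc(kernel, search, a, b, c):
    """Correlation coefficient rho(a, b, c) between the kernel region X0
    and the same-sized sub-block of the searching region at shift (a, b, c)."""
    m, n, p = kernel.shape
    block = search[a:a + m, b:b + n, c:c + p]
    x0 = kernel - kernel.mean()
    x1 = block - block.mean()
    return (x0 * x1).sum() / np.sqrt((x0 ** 2).sum() * (x1 ** 2).sum())

def track(kernel, search):
    """Slide the kernel voxel by voxel over the searching region and
    return the shift (a, b, c) with the highest correlation."""
    shape = tuple(s - k + 1 for s, k in zip(search.shape, kernel.shape))
    scores = np.array([ncc(kernel, search, *idx) for idx in np.ndindex(shape)])
    return np.unravel_index(scores.argmax(), shape)
```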


2.2.6 Doppler

An alternative approach to image processing algorithms is to vibrate the needle, usually with a piezoelectric buzzer, and detect the motion using Doppler ultrasound [11] (see Figure 2.7). The irregular Doppler response around the vibrating needle indicates the shape and location of the needle section. Segmenting the needle from the Doppler ultrasound data can be accomplished using simple image analysis techniques, unlike standard grayscale ultrasound data, which generally requires complex segmentation techniques [1]. Moreover, this method can isolate needles from other linear structures.

Joseph D. Greer et al. [11] report that this method differs from manual segmentation by 0.71 ± 0.55 mm in needle tip location and takes 5−10 ms to run on a standard PC.

Figure 2.7: Doppler ultrasound method

2.2.7 Sensor

One unusual technique for tip localization is to use a sensor embedded in the needle [14]. The acoustic energy emitted by the ultrasound transducer is converted into an electrical signal by a sensor located at the tip of the needle (see Figure 2.8). The precise location of the needle can be estimated from this signal. In [14], two different types of materials are proposed for use as the sensor: co-polymer and piezoelectric. In their experiments, the technology achieved a tracking accuracy of 0.14 ± 0.03 mm, significantly superior to competing technologies.

The advantages of this technique are its feasibility and its high accuracy. However, the needle axis cannot be computed with this method alone.

Figure 2.8: Sensor method

2.2.8 Tip localization methods

Once the needle axis is computed using one of the methods depicted above, finding the needle tip can be achieved using one of the following methods:

• Intensity drop: the intensity along the axis presents an abrupt drop at the needle tip. This method is the simplest one and is often used as the preferred method [7, 19, 22].

• Prior probability: based on a training set, the distribution of the intensity drop around the needle tip is modeled [3, 4, 18]. Here, the same drawback as for the machine-learning techniques can be mentioned: having at one's disposal a set of annotated images to train the algorithm on is really difficult.

• Deepest point: the deepest point labelled as part of the needle is considered to be the tip [9, 15, 20]. This method is quite similar to the first one; however, it can provide better results when part of the needle is not visible in the image. On the other hand, it requires knowing the needle insertion direction.

• Normalized cross-correlation: see subsection 2.2.5, [22].


Chapter 3

Input Data

3.1 Acquisition

The ultrasound images used in this work were acquired using an UltraSonix SonixTouch Ultrasound System¹. It is a high-quality ultrasound system allowing 2D and 3D imaging and color/power Doppler (detecting blood flow and flow directions), and it is equipped with the SonixGPS technology.

In this work, our objective was to use both image processing and position sensor information. The GPS-equipped probe at our disposal was the C5-2/60 GPS probe (by UltraSonix), which is a 2D probe (see Figure 2.3).

¹http://www.analogicultrasound.com/

Images can be described as a function, with $I, J \subset \mathbb{N}$:

$$I : \begin{cases} I \times J \rightarrow [0, 1] \\ (x, y) \mapsto I(x, y) \end{cases} \qquad (3.1)$$

An example of an ultrasound image is shown in Figure 3.1.

The data were recorded to local files and the processing was performed offline in order to validate the techniques and the algorithms. However, online processing is foreseen for future work.

Each file was composed of the sequence of images and of the following parameters:

n_frames   Number of frames in the file
w          Image width (= |I|)
h          Image height (= |J|)
ss         Data sample size in bits
id_probe   Probe identifier
txf        Transmit frequency in Hz
sf         Sampling frequency in Hz
dr         Data rate (frames per second)

The positional information from both the transducer and the needle was recorded to a different file during the acquisition. The collected data were organized as follows:

x, y, z   Space coordinates
S         Rotation matrix
t         Timestamp

Figure 3.1: A 2D image acquired by the ultrasound scanner scanning a cryogel phantom

3.2 Assumption

Almost all the methods depicted in Section 2.2 rely on the following assumptions. First, the diameter of the needle is of the same order as the ultrasound wavelength and is much smaller than the needle length visible in the image. Secondly, the voxel intensity along the needle is higher than the intensity of the background voxels. Finally, in-plane insertion is assumed, i.e., almost all of the needle is visible in the image, as opposed to orthogonal insertion, where only a cross section of the axis is visible.

The RANSAC algorithm described in Chapter 5 has been implemented under the assumption that the needle is not bending (straight axis) and can be modeled by a linear function. However, it can be modified to fit higher-degree curves. When working with a non-straight axis, a cubic curve is usually used because curves of lower order have too little flexibility, whereas curves of higher order are excessively complex to fit [2].


Chapter 4

Region of interest

The method presented in the following chapter permits the definition of a ROI (Region Of Interest) in the 2D ultrasound image. The ROI is a small bounded area supposed to contain the whole needle. The principal interest in focusing on a smaller area is to reduce the computation time and to eliminate the other bright parts of the image. An example can be seen in Figure 4.1.

Here, we first focus on properly characterizing the system and explaining the meaning of each datum. Then, we propose a way of extracting the ROI in the ultrasound image.

Figure 4.1: Region Of Interest

4.1 System description

4.1.1 Frames

The ROI extraction is based on the positional information retrieved by the GPS system. Each image has associated GPS data from the needle position sensor, the probe position sensor and the reference position sensor. Both probe and needle provide their space coordinates and their rotation matrix with respect to the frame of reference (the reference position sensor). Figure 4.2 depicts the different frames used: the reference frame (O_R, x_R, y_R, z_R), the probe frame (O_P, x_P, y_P, z_P), the needle frame (O_N, x_N, y_N, z_N) and the image frame (O_I, x_I, y_I, z_I).

Figure 4.2: GPS Frames

A vector v can thus be expressed in different frames. To make clear which one, we denote by v^i the coordinates of the vector v in the frame (O_i, x_i, y_i, z_i), for i ∈ {R, P, N, I}.

4.1.2 Conversion

Translation

The space coordinates give the translation from the reference coordinate system to the local coordinate system. For instance, for the needle sensor, $[x, y, z]^T = \overrightarrow{O_R O_N}$.


Rotation

Assuming $\theta$, $\phi$ and $\psi$ are the Euler angles (see Figure 4.3), the rotation matrix $S$ is composed as follows:

$$S = R_{z_1}(\phi)\, R_{x_0}(\theta)\, R_{z_2}(\psi) = \begin{pmatrix} \cos\theta\cos\phi & \sin\psi\sin\theta\cos\phi - \cos\psi\sin\phi & \cos\psi\sin\theta\cos\phi + \sin\psi\sin\phi \\ \cos\theta\sin\phi & \sin\psi\sin\theta\sin\phi + \cos\psi\cos\phi & \cos\psi\sin\theta\sin\phi - \sin\psi\cos\phi \\ -\sin\theta & \sin\psi\cos\theta & \cos\psi\cos\theta \end{pmatrix} \qquad (4.1)$$

The matrix S is a rotation matrix between the reference coordinate system and the local coordinate system. For instance, for the needle sensor, S_N is the rotation matrix between (O_R, x_R, y_R, z_R) and (O_N, x_N, y_N, z_N).

Figure 4.3: Euler angles – the transformation is decomposed as 3 rotations (φ around z_1, θ around x_0 and ψ around z_2)
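A small sketch of equation 4.1, building S directly from its entries as printed (NumPy; the function name is illustrative):

```python
import numpy as np

def euler_to_matrix(phi, theta, psi):
    """Rotation matrix S of equation (4.1) from the Euler angles."""
    ct, st = np.cos(theta), np.sin(theta)
    cf, sf = np.cos(phi), np.sin(phi)
    cp, sp = np.cos(psi), np.sin(psi)
    return np.array([
        [ct * cf, sp * st * cf - cp * sf, cp * st * cf + sp * sf],
        [ct * sf, sp * st * sf + cp * cf, cp * st * sf - sp * cf],
        [-st,     sp * ct,                cp * ct],
    ])
```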

From 2D ultrasound image to reference frame

To reconstruct the 3D position of each pixel of the ultrasound image, all 2D ultrasound image pixels need to be converted to the reference coordinate system. This requires:

1. Transferring from the ultrasound image coordinate system to the probe coordinate system through a probe calibration matrix (see below),

2. Transferring from the probe coordinate system to the reference coordinate system through the rotation matrix and translation vector.

The calibration matrix¹ for the C5-2 probe is:

$$C = \begin{pmatrix} 0.087 & -0.0033 & -1.00 \\ -29.7558 & 0.9433 & -0.0034 \\ -0.7053 & 0.0132 & -0.0087 \end{pmatrix} \qquad (4.2)$$

¹The calibration matrix is provided by Ultrasonix.


Lastly, a point $M = [i, j]^T$ of the ultrasound image is given the 3D coordinates $[1, j, i]^T$.

4.2 ROI extraction

We shall now use the GPS information to extract estimates of the needle parameters (axis and tip positions). As seen in Figure 4.2, the needle tip is O_N. The coordinates of O_N^R are given directly by the position sensor in the needle.

Secondly, we can convert a point M^I of the image frame to a point in the reference frame as:

$$M^R = S_P^{-1}\, C\, M^I \qquad (4.3)$$

Therefore:

$$O_N^I = C^{-1}\, S_P\, O_N^R \qquad (4.4)$$

As seen in Figure 4.2, the needle axis is aligned with x_N. The coordinates of x_N^R are:

$$\vec{x}_N^{\,R} = S_N \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \qquad (4.5)$$

And therefore, x_N^P is:

$$\vec{x}_N^{\,P} = S_P\, S_N \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \qquad (4.6)$$

Using the calibration matrix, we can express x_N^I:

$$\vec{x}_N^{\,I} = C^{-1}\, S_P\, S_N \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \qquad (4.7)$$

Let R be the radius of the ROI and D be the line:

$$D : \left\{ O_N^I + t\, \vec{x}_N^{\,I},\ t \in \mathbb{R} \right\} \qquad (4.8)$$

Based on equation 4.4 and equation 4.7, we can construct the ROI:

$$\mathrm{ROI} = \left\{ M = [i, j]^T \in I,\ (i, j) \in I \times J \;\middle|\; d(M', D) \leq R,\ M' = [1, j, i]^T \right\} \qquad (4.9)$$

One can notice here that the values provided by equation 4.4 and equation 4.7 are estimations of the needle tip and axis positions and cannot be considered as ground truth for the needle parameters.


4.3 Algorithm

Algorithm 1 defines the ROI in the ultrasound image. Since the insertion is continuous and rather slow, the ROI does not need to be updated every frame; this operation is only performed every Δ_f frames. It returns 4 points, R_1, R_2, R_3 and R_4, the four corners of the ROI. The distanceToLine function is detailed in Appendix A.1.1.

Algorithm 1: ROI extraction algorithm
Data: GPS data, R
1   Let S be the set of points inside the ROI;
2   Compute O_N^I and x_N^I;
3   for (i, j) ∈ I × J do
4       Let M' = [1, j, i]^T;
5       Let d = distanceToLine(M', O_N^I, x_N^I);
6       if d ≤ R then
7           Add M = [i, j]^T to S;
8       end
9   end
Result: the four corners R_1, ..., R_4 of the bounding box of S, given by the extreme values min_i S, min_j S, max_i S, max_j S
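A vectorized sketch of Algorithm 1 (assuming NumPy arrays, with O_N_ref the tip position O_N^R from the needle sensor and x_N_ref the axis direction S_N [1, 0, 0]^T in the reference frame; the corners are returned as the bounding box of the in-ROI pixels, and all names are illustrative):

```python
import numpy as np

def roi_corners(O_N_ref, x_N_ref, S_P, C, shape, R):
    """Project the needle tip (eq. 4.4) and axis direction (eq. 4.7) into
    the image frame, then bound all pixels within distance R of the line D."""
    C_inv = np.linalg.inv(C)
    o = C_inv @ S_P @ O_N_ref                # O_N in the image frame
    d = C_inv @ S_P @ x_N_ref                # axis direction in the image frame
    d = d / np.linalg.norm(d)
    i, j = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing='ij')
    pts = np.stack([np.ones_like(i), j, i], axis=-1).astype(float)  # M' = [1, j, i]^T
    diff = pts - o
    # Perpendicular distance of each M' to the line D = {o + t d}.
    perp = diff - np.tensordot(diff, d, axes=([-1], [0]))[..., None] * d
    dist = np.linalg.norm(perp, axis=-1)
    ii, jj = np.nonzero(dist <= R)
    return (ii.min(), jj.min()), (ii.max(), jj.max())  # opposite ROI corners
```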

Chapter 5

Axis localization

This chapter introduces an algorithm to automatically detect the axis of a needle in a 2D image. We assume that the axis can be approximated by a straight line, and from now on we will call “image” the image restricted to its ROI. The image is initially segmented using thresholding (Section 5.1). Then, the parameters of the axis are estimated by the RANSAC algorithm (Section 5.2).

5.1 Normalization and thresholding

Because of the hypothesis that the needle voxels are much brighter than the background voxels, we can classify the image voxels into two disjoint sets. The goal of this operation is to reduce the number of voxels that will be processed by the RANSAC algorithm and thus to decrease the computational time.

5.1.1 Normalization

Each 2D ultrasound image is normalized before being further processed. The normalization changes the pixel intensity values in order to enhance the contrast of the image and bring the image into the range [a, b]. This guarantees that all the images share a similar range of grayscale.

Let I be the raw image and I_N be the normalized image. I_N is computed according to the following formula:

$$I_{N,(i,j)} = \frac{\left(I_{(i,j)} - \min_{i,j} I\right)(b - a)}{\max_{i,j} I - \min_{i,j} I} + a \qquad (5.1)$$

In our case, we set [a, b] = [0, 255] for reasons of consistency with our display software. Figure 5.1 shows an example of a normalized picture.
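Equation 5.1 is a one-line operation in practice; a minimal NumPy sketch (names illustrative):

```python
import numpy as np

def normalize(img, a=0.0, b=255.0):
    """Min-max normalization of equation (5.1): stretch the grayscale
    of `img` into the range [a, b]."""
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) * (b - a) / (hi - lo) + a
```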


5.1.2 Thresholding

Once normalized, we classify each voxel into two different sets: S_n (needle voxels) and S_b (background voxels). To do so, we use a thresholding approach:

$$\forall x = (i, j) \in I \times J : \begin{cases} x \in S_n & \text{if } I_N(x) \geq T \\ x \in S_b & \text{if } I_N(x) < T \end{cases} \qquad (5.2)$$

where T is the threshold constant.

The value of T has a great impact on the classification. For too small a value, almost all voxels will be labelled as needle voxels, leading to a substantial number of false positives. Conversely, for too big a value, only a small number of real needle voxels will be labelled correctly.

Instead of fixing T for every image, we decided to use an adaptive threshold: T is set such that a fraction α of the brightest pixels is kept (see Algorithm 2). Once T is determined, the image is binarized using this value (see Figure 5.1). We tuned the value of α manually on a set of ∼150 images and settled on α = 7.5%.

Algorithm 2: Adaptive threshold algorithm
Data: I_N, α
1   Compute a 256-bin histogram H of I_N (see Appendix A.1.2);
2   Let count_brightest = 0;
3   Let index = 255;
4   while count_brightest < (α × |I| × |J|) / 100 do
5       count_brightest = count_brightest + H[index];
6       index = index − 1;
7   end
8   Let T = index;
Result: Threshold value T
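A sketch of Algorithm 2 in Python (assuming the normalized image lies in [0, 255]; names illustrative):

```python
import numpy as np

def adaptive_threshold(img_norm, alpha=7.5):
    """Find T such that roughly the alpha percent brightest pixels are
    kept, then binarize: True marks needle pixels (the set S_n)."""
    hist, _ = np.histogram(img_norm, bins=256, range=(0, 256))
    target = alpha / 100.0 * img_norm.size
    count, index = 0, 255
    while count < target and index > 0:
        count += hist[index]
        index -= 1
    T = index
    return img_norm >= T, T
```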

Figure 5.1: Normalization and thresholding of the image – (a) raw image, (b) normalized image, (c) binary image (white – S_n | black – S_b)


5.2 RANSAC

5.2.1 Line fitting

A straight line can be represented as:

$$c : \begin{cases} \mathbb{R} \rightarrow \mathbb{R}^2 \\ t \mapsto c(t, H) = \begin{pmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{pmatrix} \begin{pmatrix} 1 \\ t \end{pmatrix} \end{cases} \qquad (5.3)$$

Given two points $D_1 = [x_1, y_1]^T$ and $D_2 = [x_2, y_2]^T$, $D_1 \neq D_2$, we want to compute the matrix H of the line going through these two points. Let us assume that $c(0, H) = D_1$ and $c(1, H) = D_2$. Then:

$$H \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} x_1 & x_2 \\ y_1 & y_2 \end{pmatrix} \qquad (5.4)$$

Leading to:

$$H = \begin{pmatrix} x_1 & x_2 \\ y_1 & y_2 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{-1} \qquad (5.5)$$

$$H = \begin{pmatrix} x_1 & x_2 - x_1 \\ y_1 & y_2 - y_1 \end{pmatrix} \qquad (5.6)$$
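Equations 5.4–5.6 amount to two lines of code; a small sketch (names illustrative):

```python
import numpy as np

def line_through(d1, d2):
    """Matrix H such that c(0, H) = D1 and c(1, H) = D2 (eq. 5.6):
    first column D1, second column D2 - D1."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    return np.column_stack([d1, d2 - d1])

def c(t, H):
    """Evaluate the line c(t, H) = H [1, t]^T (eq. 5.3)."""
    return H @ np.array([1.0, t])
```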

5.2.2 Cost function

Assuming the image has been segmented, we now have a set S_n of needle voxels. Due to noise, reflections, and potentially other bright objects in the image, the set S_n does not contain only voxels belonging to the needle.

Let us assume that c(t, H) is one approximation of the needle axis. We need to establish a criterion that measures how well the curve c(t, H) fits the set S_n. This criterion is taken as the number of points in S_n consistent with the curve c(t, H). A point x is consistent with the curve c(t, H) if:

$$x \text{ consistent with } c(t, H) \Leftrightarrow d(c(t, H), x) \leq \rho \qquad (5.7)$$

where ρ is a parameter. We tuned the value of ρ manually on a set of ∼150 images and settled on ρ = 4.

Using this criterion, we can define a cost function C(H) as:

$$C(H) = -\left|\left\{x \in S_n \mid d(c(t, H), x) \leq \rho\right\}\right| \qquad (5.8)$$


The curve c(t, H*) that minimizes C(H) is selected as the best approximation of the needle axis:

$$H^* = \arg\min_{H \in \mathbb{R}^4} C(H) \qquad (5.9)$$

5.2.3 Algorithm

The problem of axis localization can be summed up as solving equation 5.9. However, this problem has no closed-form solution, and therefore a numerical method must be implemented. We decided to use the RANSAC algorithm, an iterative algorithm for estimating the parameters of a model given a set of observed data (see Algorithm 3). The main assumption is that the set of observed data contains “inliers” (i.e., data whose distribution can be explained by the model parameters) and “outliers” (i.e., data which do not fit the model). This assumption is clearly verified by the set S_n.

As mentioned in Chapter 2, the axis localization should be achieved in real time. In order to come close to this criterion, we stop the RANSAC algorithm after a maximum time t_max. We also stop the algorithm when a maximal number of iterations J has been reached. Since the insertion is continuous and rather slow, the axis parameters H do not need an update every frame; this operation is only performed every Δ_f frames.

Based on experimental values, the RANSAC algorithm provided really good results with the following parameters:

$$J = 500, \qquad t_{max} = 1\,\text{s}, \qquad \Delta_f = \bar{t}_R \cdot dr \approx 0.30 \times 33 \approx 10 \qquad (5.10)$$

where $\bar{t}_R$ is the mean time to run the RANSAC algorithm and dr is the data rate.


Algorithm 3: RANSAC algorithm
Data: S_n, J, t_max
1   Let j = 0;
2   Let C_best = ∞;
3   while t_elapsed ≤ t_max and j < J do
4       Select randomly 2 points D_1 and D_2, D_1 ≠ D_2, from S_n;
5       Let H be such that c(t, H) goes through D_1 and D_2;
6       Let C = 0;
7       foreach x ∈ S_n do
8           Let d = distanceToLine(c(t, H), x);
9           if d ≤ ρ then
10              Let C = C − 1;
11          end
12      end
13      if C < C_best then
14          Let C_best = C; Let H* = H;
15      end
16      Let j = j + 1;
17  end
Result: Best model H*
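A compact sketch of Algorithm 3 (assuming S_n is an (N, 2) NumPy array of pixel coordinates; all names are illustrative):

```python
import time
import numpy as np

def point_line_distance(points, p0, direction):
    """Perpendicular distance of each 2D point to the line p0 + t*direction."""
    d = direction / np.linalg.norm(direction)
    diff = points - p0
    return np.abs(diff[:, 0] * d[1] - diff[:, 1] * d[0])

def ransac_axis(S_n, J=500, t_max=1.0, rho=4.0, rng=None):
    """Fit the needle axis to the needle-voxel set S_n, stopping after
    J iterations or t_max seconds; returns H in the form of eq. (5.6)."""
    rng = rng or np.random.default_rng()
    best_H, best_C = None, np.inf
    start = time.time()
    for _ in range(J):
        if time.time() - start > t_max:
            break
        i1, i2 = rng.choice(len(S_n), size=2, replace=False)
        d1, d2 = S_n[i1], S_n[i2]
        if np.array_equal(d1, d2):
            continue
        # Cost = minus the number of inliers within distance rho (eq. 5.8).
        C = -np.count_nonzero(point_line_distance(S_n, d1, d2 - d1) <= rho)
        if C < best_C:
            best_C, best_H = C, np.column_stack([d1, d2 - d1])
    return best_H
```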

Figure 5.2 shows the result of the RANSAC algorithm overlaid on the raw image. We can see that the computed axis seems to match the real needle axis. However, in the right part of the image, some speckle data are considered inliers simply because they lie on the line.

Figure 5.2: Needle axis computed by the RANSAC algorithm (blue – axis | red – inliers). This axis was computed with 500 iterations in ∼350 ms.


Chapter 6

Tip localization

This chapter introduces two algorithms to automatically detect the tip of a needle in a 2D image. We assume that the axis has been approximated by a straight line and that we have at our disposal an estimate c(t, H). The task of tip localization given the needle axis is not trivial due to speckle noise and possible signal drop-outs, which result in apparent breaks along the needle. The position of the tip is estimated using a blob-filling algorithm (Section 6.1). Then, the accuracy of the estimation is improved by the use of a Kalman filter (Section 6.2).

6.1 Blob filling

6.1.1 Blob construction

In this section, we assume that the axis of the needle c(t, H) has been computed, that we are working on the cropped normalized picture I_N, and that the set S_n has been computed (as in Section 5.1).

Let us write N(x, x') if x and x' are neighbors:

$$\forall x = [i, j]^T,\ x' = [i', j']^T : \quad N(x, x') \Leftrightarrow |i - i'| \leq 1 \ \text{and}\ |j - j'| \leq 1 \qquad (6.1)$$

Let us write x ∼ x' if there is a path of bright pixels between x and x':

$$x \sim x' \Leftrightarrow \exists (x_1 = x, \ldots, x_n = x') \ \middle|\ \begin{cases} \forall i \in (1, \ldots, n-1),\ N(x_i, x_{i+1}) \\ \forall i \in (1, \ldots, n),\ x_i \in S_n \end{cases} \qquad (6.2)$$

The first idea behind our algorithm is to detect each blob of the image. A blob B is a region of connected pixels of the image within which the intensity is constant:

$$B = \{x \mid \forall x' \in B,\ x \sim x'\} \qquad (6.3)$$

The creation of each blob is done by a flood-fill algorithm: starting from a point, every point connected to the first one (in the sense of ∼) is added to the blob (see Algorithm 4).

Algorithm 4: Creation of the n blobs of the image I_N
Data: I_N
1   Let S_B be the set of blobs;
2   foreach x ∈ S_n do
3       if ∀B ∈ S_B, x ∉ B then
4           Create a new blob B_new;
5           foreach x' ∼ x do
6               Add x' to B_new;
7           end
8           Add B_new to S_B;
9       end
10  end
Result: The set S_B

6.1.2 Tip detection

Once the set of blobs S_B has been computed, tip detection is achieved with the following approach: the biggest blob intersecting the line c(t, H) is kept and considered to be the needle blob. Indeed, even if there is speckle or an intensity drop along this line, we assume that most of the needle is visible and therefore that, along the line, the biggest blob is the one depicting the needle.

According to the direction of insertion (set manually as “left to right”, “up to down”, etc.), the tip is chosen as the point where the line c(t, H) intersects the blob's boundary.

Figure 6.1 shows the different steps of the tip detection: normalization, binarization, flood filling, blobs intersecting the line, and tip detection. A sketch of the blob construction and tip selection is given below.
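The following sketch implements Algorithm 4 as a breadth-first flood fill over the binary mask, plus the tip selection; line_pixels is assumed to be the list of (i, j) pixels of the axis c(t, H) inside the image, and all names are illustrative:

```python
from collections import deque
import numpy as np

def blobs(mask):
    """Flood-fill the binary needle mask into 8-connected blobs and
    return a label image (0 = background)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        current += 1
        labels[seed] = current
        queue = deque([seed])
        while queue:
            i, j = queue.popleft()
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    n = (i + di, j + dj)
                    if (0 <= n[0] < mask.shape[0] and 0 <= n[1] < mask.shape[1]
                            and mask[n] and not labels[n]):
                        labels[n] = current
                        queue.append(n)
    return labels

def tip(labels, line_pixels, left_to_right=True):
    """Keep the biggest blob crossed by the axis line and return its
    extreme on-line point along the insertion direction."""
    sizes = np.bincount(labels.ravel())
    hit = {labels[p] for p in line_pixels if labels[p]}
    needle = max(hit, key=lambda lab: sizes[lab])  # biggest blob on the line
    on_line = [p for p in line_pixels if labels[p] == needle]
    key = lambda p: p[1]                           # column index = insertion depth
    return max(on_line, key=key) if left_to_right else min(on_line, key=key)
```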

6.2 Kalman filter

Due to the speckle pattern that can appear along the line c(t, H), the tip detection is uncertain and can sometimes “jump” from one position to another. In order to decrease this shaking and improve the stability of the tip detection, we decided to add a Kalman filter. Kalman filtering is based on two main assumptions (more in [17]):

1. The next state probability $p(x_t \mid u_t, x_{t-1})$ is a linear function of its arguments with added Gaussian noise:

$$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t \qquad (6.4)$$

where:

• x_t is the state at time t,
• u_t is the control input,
• A_t is the transition matrix,
• B_t is the control matrix,
• ε_t is a random variable with zero mean and covariance denoted Q_t.

2. The measurement probability $p(z_t \mid x_t)$ is a linear function of its arguments with added Gaussian noise:

$$z_t = C_t x_t + \delta_t \qquad (6.5)$$

where:

• z_t is the measurement at time t,
• C_t is the observation matrix,
• δ_t is a random variable with zero mean and covariance denoted R_t.

6.2.1 Model

In order to apply a Kalman filter to our case, we created the following model:

$$x_t = [x, y, \dot{x}, \dot{y}]^T \qquad (6.6)$$

$$z_t = [x, y]^T \qquad (6.7)$$

$$A_t = \begin{pmatrix} 1 & 0 & \Delta_t & 0 \\ 0 & 1 & 0 & \Delta_t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (6.8)$$

$$B_t = 0 \qquad (6.9)$$

$$C_t = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} \qquad (6.10)$$

where:

• $[x, y]^T$ is the estimated tip position,
• $[\dot{x}, \dot{y}]^T$ is the estimated tip velocity,
• $\Delta_t$ is the time elapsed between two calls of the Kalman filter.

One can notice that we set B_t = 0. Indeed, since the insertion is performed manually, there is no control input. Therefore, it was decided that the input would be considered part of the noise ε_t.

Based on equations 6.4 and 6.5, we can apply our Kalman filter. The algorithm is summed up in Algorithm 5.


Algorithm 5: Kalman filter algorithm
Data: μ_{t−1}, Σ_{t−1}, u_t, z_t
1   μ̄_t = A_t μ_{t−1} + B_t u_t;
2   Σ̄_t = A_t Σ_{t−1} A_t^T + Q_t;
3   K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + R_t)^{−1};
4   μ_t = μ̄_t + K_t (z_t − C_t μ̄_t);
5   Σ_t = (I − K_t C_t) Σ̄_t;
Result: μ_t, Σ_t
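A sketch of one predict-correct step of Algorithm 5 with the constant-velocity model of Section 6.2.1 (state [x, y, ẋ, ẏ]^T, measurement [x, y]^T, B_t = 0; the default noise levels mirror equations 6.11 and 6.12 without their small random perturbation, and all names are illustrative):

```python
import numpy as np

def kalman_step(mu, Sigma, z, dt, Q=None, R=None):
    """One iteration of Algorithm 5 for the tip tracker."""
    A = np.eye(4)
    A[0, 2] = A[1, 3] = dt                         # transition matrix (eq. 6.8)
    C = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.]])               # observation matrix (eq. 6.10)
    Q = Q if Q is not None else 1e-5 * np.eye(4)   # process noise (cf. eq. 6.12)
    R = R if R is not None else 1e-1 * np.eye(2)   # measurement noise (cf. eq. 6.11)
    mu_bar = A @ mu                                # predict (B_t u_t = 0)
    Sigma_bar = A @ Sigma @ A.T + Q
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + R)
    mu_new = mu_bar + K @ (z - C @ mu_bar)         # correct with measurement z_t
    Sigma_new = (np.eye(4) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```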

6.2.2 Noise tuning

One of the most challenging parts of implementing a Kalman filter is the tuning of Q_t and R_t. R_t can be obtained from training data (covariance of the uncertainties in the measurement). However, Q_t is harder to obtain, and therefore we decided to tune both matrices manually. The two following matrices produced satisfying results in our experiments:

$$R_t = \begin{pmatrix} 10^{-1} & 0 \\ 0 & 10^{-1} \end{pmatrix} + \mathcal{R}(10^{-4}) \qquad (6.11)$$

$$Q_t = \begin{pmatrix} 10^{-5} & 0 & 0 & 0 \\ 0 & 10^{-5} & 0 & 0 \\ 0 & 0 & 10^{-5} & 0 \\ 0 & 0 & 0 & 10^{-5} \end{pmatrix} + \mathcal{R}(10^{-9}) \qquad (6.12)$$

where $\mathcal{R}(x)$ is a random matrix of maximal intensity x.


Figure 6.1: Steps of tip detection – (a) normalized image, (b) binary image, (c) blobs of the image, (d) blobs intersecting the line c(t, H), (e) biggest blob selection, (f) tip detection

Chapter 7

Experiments & Results

In the previous chapters, our method for needle axis and tip detection was presented. We explained the acquisition process and the assumptions made about the input data set. In this chapter, we present our results and discuss the performance of our algorithms.

A series of experiments on 2D ultrasound images was carried out to determine the localization accuracy of our process. The proposed methods were tested on a portion of a cryogel phantom. The experiments are presented in Section 7.1. The results are summarized in Section 7.2.

7.1 Experiments

The needle insertion was performed in a cryogel phantom immersed in a water tank. Water was required around the phantom in order to ensure acoustic coupling between the probe and the phantom; in practice, a water-based gel is placed between the patient's skin and the probe. The probe and the needle were both equipped with a SonixGPS tracker.

Ultrasound scanner The UltraSonix SonixTouch Ultrasound System was used with the following parameters:

• Zoom: 100% (no digital zoom)
• Frequency: 5.5 MHz
• Depth: 10 cm
• Gain: 50% (amplifies the reflected signal; however, it increases both signal and noise)
• Frame rate: 30 fps

Ultrasound probe The GPS-equipped probe at our disposal was the C5-2/60 GPS probe. It was positioned on top of the phantom so that the field of view covered the phantom.

Needle The needle inserted had a 0.9 mm radius and 70 mm length. It is commonly used in vascular access procedures. It was equipped with a Reusable Needle Sensor
