
Department of Science and Technology
Linköping University
SE-601 74 Norrköping, Sweden

LiU-ITN-TEK-A--08/087--SE

Tools and Algorithms for Classification in Volume Rendering of Dual Energy Data Sets

Andreas Sievert

Master's thesis in scientific visualization, carried out at the Institute of Technology, Linköping University

Supervisor: Patric Ljung
Examiner: Anders Ynnerman

Norrköping, 2008-06-09, 13:00


Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances.

The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/



Abstract

In the last few years, technology within the medical imaging sector has made many advances, which in turn have opened many new possibilities. One such recent advance is the development of imaging with data from dual energy computed tomography, CT, scanners. However, with new possibilities come new challenges. One challenge, discussed in this thesis, is rendering images created from two volumes in an efficient way with respect to the classification of the data. Focus lies on investigating how dual energy data sets can be classified in order to fully use the potential of having volumes from two different energy levels. A critical asset in this investigation is the ability to utilize a transfer function description that extends into two dimensions. One such transfer function description is presented in detail. With this two-dimensional description comes the need for a new way to interact with the transfer function. How the interaction between a user and the transfer function description is implemented for Siemens Corporate Research in Princeton, NJ is also discussed in this thesis, as well as the classification results of rendering dual energy data. These results show that it is possible to classify blood vessels correctly when rendering dual energy computed tomography angiography, CTA, data.


Acknowledgments

I would like to thank Gianluca Paladini and his team of experts for allowing me to do my visualization project at Siemens Corporate Research. I would especially like to thank Tomas Moeller and Patric Ljung who, even though they have been bombarded with strange questions, have almost always had a straight, solid response to help me. Most of all I want to thank my beloved Anneli, who has endured way too much in order to allow me to finish this thesis. Thank you.


Contents

1 Introduction
  1.1 Motivation
  1.2 Problem description
  1.3 Objectives
    1.3.1 Limitations
  1.4 Thesis Outline
  1.5 Reader Prerequisites
  1.6 Abbreviations
2 Background
  2.1 Volumetric Data
  2.2 Volume Acquisition
    2.2.1 Computed Tomography
    2.2.2 Computed Tomography Angiography
    2.2.3 Dual Energy Computed Tomography
  2.3 Volume Rendering
    2.3.1 Image Order Techniques
    2.3.2 Model of Light Transport
    2.3.3 Volume Rendering Integral
  2.4 Transfer Functions
    2.4.1 Lookup Table
    2.4.2 Classification Scheme
3 Related Work
  3.1 Using Derived Attributes
  3.2 Multidimensional Transfer Functions
  3.3 Multidimensional Transfer Functions for Dual Energy Data Sets
4 Approach
  4.1 Current Implementation Issues
  4.2 Separation of Functionality
    4.2.1 Separation of GUI
    4.2.2 LUT Generation
    4.2.3 File Format
  4.3 Basic Primitives
    4.3.1 1D Primitives
    4.3.2 2D Primitives
  4.4 Histogram and Gradient Generation
  4.5 Rendering
    4.5.1 Rendering Issues
    4.5.2 Rendering Strategy 1
    4.5.3 Rendering Strategy 2
5 Implementation
  5.1 Overview
  5.2 Data Structure
  5.3 File Format
  5.4 GUI
  5.5 LUT Generation
  5.6 Histogram Generation
    5.6.1 Gradient Generation
  5.7 Rendering
    5.7.1 Dual Energy Data with Gradient Magnitude
6 Results
  6.1 New Data Structure
  6.2 New GUI
    6.2.1 Histograms
  6.3 Rendering
    6.3.1 Dual Energy
    6.3.2 Gradient Magnitude
    6.3.3 Dual Energy with Gradient Magnitude
7 Discussion
  7.1 Conclusions
    7.1.1 Rendering
    7.1.2 Editor
    7.1.3 Data Structure
    7.1.4 Histograms
  7.2 Analysis
  7.3 Future Work
Bibliography
A Manual for User Interface
  A.1 Scene Graph
  A.2 Interaction
    A.2.1 Fields
    A.2.2 Selection
    A.2.3 Moving
    A.2.4 Color Selection
    A.2.6 1D Editor
    A.2.7 2D Editor


Chapter 1

Introduction

The first chapter contains a background overview and a motivation for this thesis, as well as a problem description and the issues that it addresses. The main objectives and a thesis outline are also presented in this chapter.

1.1 Motivation

Volume rendering of medical data is an increasingly important tool for the physicians of today. It is used in numerous examinations with many different types of data. Ultrasound, magnetic resonance imaging and computed tomography all produce volumetric data that is commonly visualized through volume rendering.

However good these renderers are, the images produced are greatly dependent on the transfer function used. Without the right transfer function to map a data value to a color and opacity value, the image will most likely be less useful. For many years now there have been volume rendering transfer functions based on the mapping of a single density value to a specific color, as well as transfer functions that use derived attributes such as the magnitude of the gradient to improve the classification of each sample. Many such algorithms have been used to successively improve the quality of volume rendering techniques.

Many of the volume rendering tools we see in our general hospitals today are developed at Siemens Corporate Research, SCR. Although their products are state of the art, they currently do not use the full potential of the dual energy data sets that the CT scanners can produce. This master's thesis project addresses this issue by presenting a new set of tools for classifying samples in dual energy CT data sets.

1.2 Problem description

With the introduction of dual energy data sets comes a new challenge: how to classify a sample when there are two different density values associated with it. It is possible to simply use two different transfer functions, one for each volume. This causes each sample to be classified twice, and the two values have to be interpolated to give the final result.

This way of classifying does not utilize the fact that this is a two-dimensional problem; it tries to find a solution with two one-dimensional transfer functions, as illustrated in figure 1.1. If the transfer function is instead specified in a two-dimensional space, the possibilities of correctly classifying a sample should greatly increase. With this approach, the data values in the two volumes together classify a sample.

Figure 1.1. Classifying a dual energy data set with two 1D transfer functions, one TF for each volume, and then interpolating or multiplying the two results, is equivalent to a rectangular shape in 2D. This is illustrated above by placing the result of one transfer function on the horizontal axis and that of the other 1D TF on the vertical axis. Notice how their combined results give a rectangular shape. Data, however, tends to have irregular shapes, leaving either areas uncovered or extra areas covered, which results in less adequate classification.
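The rectangular-support argument can be made concrete in a few lines of code. The sketch below is illustrative only, not from the thesis; the window limits and the diagonal-band rule are made-up examples. It shows that multiplying two 1D opacity functions can only ever produce an axis-aligned rectangle in the 2D value domain, while a genuinely 2D transfer function can follow an arbitrary region:

```python
def tf_a(v):
    # hypothetical 1D opacity window for the first volume
    return 1.0 if 100 <= v <= 200 else 0.0

def tf_b(v):
    # hypothetical 1D opacity window for the second volume
    return 1.0 if 150 <= v <= 250 else 0.0

def combined(v_a, v_b):
    # the product is non-zero exactly on the rectangle
    # [100, 200] x [150, 250], whatever shape the data has
    return tf_a(v_a) * tf_b(v_b)

def tf_2d(v_a, v_b):
    # a true 2D transfer function can instead cover an arbitrary
    # region, e.g. a band where the two responses clearly differ
    return 1.0 if abs(v_a - v_b) > 30 else 0.0
```

No combination of 1D windows can reproduce the diagonal band of `tf_2d`, which is precisely the shape irregular data tends to occupy.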

1.3 Objectives

Given the problem described above, the main objective of this project is to investigate the possibilities of rendering dual energy data with two-dimensional transfer functions. This includes:

• Investigating how dual energy data, specifically CTA data, can be rendered with two-dimensional transfer functions.

• Creating a transfer function editor to suit the dual energy data.

• Creating a general and extendable transfer function structure.

• Investigating different ways of presenting the dual energy histogram.

The implementation will be done in Siemens Corporate Research's development software platform XIP.

1.3.1 Limitations

The thesis does not include the following:

• Creating the best possible UI, user interface, from a user's perspective.

• Creating a complete transfer function framework.

The framework should be functional and act as a starting point toward a complete framework, but it is not intended to be complete.

1.4 Thesis Outline

The thesis is divided into seven chapters and two appendices. The first chapter presents the thesis problem to the reader. The next two chapters discuss the theory that the thesis work is based on. Understanding this information is vital to understanding what has been accomplished. Chapters four and five present the methods and ideas on how the problem is solved and how it is implemented. The final two chapters conclude the thesis by presenting the results and then giving an in-depth discussion of what has been accomplished.

1.5 Reader Prerequisites

For the reader of this document to fully understand and appreciate it, a basic knowledge of computer graphics is recommended. This includes areas such as scene graphs, OpenGL, Open Inventor and GPUs. Even though the basics will be presented, an understanding of volume rendering and its concepts will also help in understanding the content. The reader is also assumed to have some basic programming experience as well as a basic understanding of signal processing phenomena such as aliasing.

1.6 Abbreviations

• 1D - One-dimensional.
• 2D - Two-dimensional.
• 3D - Three-dimensional.
• 4D - Four-dimensional.
• CT - Computed Tomography.
• CTA - Computed Tomography Angiography. Study of blood vessels with CT.
• FBO - Frame Buffer Object.
• GLSL - OpenGL Shading Language.
• GPU - Graphics Processing Unit.
• GUI - Graphical User Interface.
• LUT - LookUp Table.
• MRI - Magnetic Resonance Imaging.
• OpenGL - Open Graphics Library.
• PET - Positron Emission Tomography.
• Qt - A software framework used for GUI design.
• RAD - A graphical development environment used at SCR before XIP.
• TF - Transfer Function.
• UI - User Interface.
• XIP - eXtensible Imaging Platform. Siemens Corporate Research's graphical development software, based on Open Inventor, allowing future development through the building of scene graphs of existing nodes via a graphical user interface.


Chapter 2

Background

This chapter presents some of the basic concepts one needs to understand to follow the topics discussed in this thesis. The chapter starts out by describing what the term volume refers to within volume rendering and then goes on to describe how these volumes are acquired. With the concept of volumes clarified the chapter then covers the basics of volume rendering as well as of transfer functions.

2.1 Volumetric Data

Volumetric data is most often a set of samples that represent a certain value in a 3D space [2]. Each sample can be described by its position (x, y, z) and the data value or values associated with this position. The term voxel is often used to refer to this kind of sample. As shown in figure 2.1, the voxels partition the volume into equally spaced sections, one for each sample. The data stored in each voxel is often a density value or a vector representing movement at a certain point.

Figure 2.1. A volume is made up of numerous voxels. [Image source: http://www.vrvis.at/via/resources/course-volgraphics-2004]
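As a minimal illustration (not the data structure used in the thesis), such a regular grid of voxels can be stored as a flat array and addressed by its (x, y, z) position:

```python
W, H, D = 4, 4, 4                # hypothetical volume dimensions
volume = [0.0] * (W * H * D)     # one density value per voxel

def index(x, y, z):
    # x varies fastest, then y, then z
    return x + y * W + z * W * H

volume[index(1, 2, 3)] = 42.0    # write a single voxel
```

A time-varying (4D) data set would simply add one more index, t, on top of this scheme.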

Volumetric data may also be time-varying, in which case the data will actually be a 4D data set with each sample being dependent on (x, y, z, t). In most cases, the sample positions are spaced regularly along orthogonal axes. These types of data sets are called isotropic data sets and are used in this thesis.

2.2 Volume Acquisition

There are many areas where volume rendering is applicable, but one of the most widely used is within medical imaging. Here, volume rendering is used to render different kinds of images, ranging from ultrasound data of babies to detailed CT scans of corpses that are used in virtual autopsies. For whatever purpose, volume rendering needs a volume of data to render. The most common ways to acquire images within the medical field are through the above mentioned CT scan and ultrasound, as well as through magnetic resonance imaging, MRI, and positron emission tomography, PET, scans.

2.2.1 Computed Tomography

Computed tomography uses the properties of x-rays to reconstruct a density image representing a slice of the person being scanned. The x-rays are sent from a source through the body to a detector array. This setup can now be used from several positions spread 360 degrees around the body. The amount of energy that is received at the detector for each location can be used to reconstruct each slice by a method called back projection. Figure 2.2 illustrates this method of reconstruction.

Figure 2.2. As more views from different directions are taken into account, a reconstructed image starts to form. [Image source: http://www.dspguide.com/ch25/5.htm]

The reconstructed slice has different densities in different parts of the image. These densities are measured in Hounsfield Units and represent how much the x-rays are attenuated at a specific location. As illustrated in figure 2.3, if enough slices are acquired, a volume can be represented by stacking the slices on top of each other.


Figure 2.3. To create a volume, the reconstructed slices can be stacked on top of each other. [Image source: http://barre.nom.fr/medical/these/index.html]

2.2.2 Computed Tomography Angiography

Also referred to as CTA, Computed Tomography Angiography is used when studying the vessels of the body. To create a better data set, clearly showing the blood vessels, a contrast medium is introduced into the body that is to be scanned. The contrast agent gives the vessels a stronger response to the radiation, giving a clearer view of the vessels. CTA data does, however, produce density values for blood vessels that lie within the large density value range of bone tissue, since the contrast agent places the response of the blood in the vessels within that of bone.

2.2.3 Dual Energy Computed Tomography

Dual energy CT is a CT scan where two different energy levels of radiation are simultaneously used to scan the body. The dual energy CT scan produces two volumes of data, one for each energy level.

Dual energy CT utilizes the fact that different materials can respond differently to different energy levels: a material that has one response at the first energy level can have a different response at the other energy level. This in itself does not give the rendering more information, but if the relationship between the two responses is nonlinear, the difference between them does add useful information to the rendering. For example, bone and blood vessels have very similar response values in a CTA scan. With contrast agents in the blood vessels, a certain energy level gives a slightly different value for the vessels than the value expected from a linear relationship. This allows for better classification possibilities.
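The idea can be sketched with a toy example. All numbers and the ratio threshold below are made up for illustration, not clinical values: the two materials are indistinguishable at one energy level, but the pair of responses separates them.

```python
# Hypothetical (low-energy, high-energy) responses for two materials
bone            = (1200.0, 800.0)    # response drops strongly at the
                                     # second energy level
contrast_vessel = (1200.0, 1100.0)   # nearly linear between the levels

def classify(low, high):
    # at the first energy level alone the two materials look identical;
    # the ratio between the two responses separates them
    return "bone" if low / high > 1.3 else "vessel"
```

With a single-energy scan, `classify` would have only the value 1200.0 for both materials and could not tell them apart.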


2.3 Volume Rendering

"Volume rendering or direct volume rendering is the process of creating a 2D image directly from 3D volumetric data" [2]. The keyword in this description is the word directly. There are many techniques that render a volume by first applying a surface extraction algorithm and then rendering the resulting surface. Volume rendering does not have a middle step of generating intermediate graphic primitives and is therefore often referred to as direct volume rendering.

Direct volume rendering can be achieved in many ways, which generally fit into one of three categories: object-order, image-order and domain-based. In this thesis, image-order techniques have been used, and this will henceforth be assumed when volume rendering is discussed. Image-order techniques rely on a backward mapping scheme where rays are cast from each pixel of the image out through the volume. Along the way, the volume values are used to determine the final value of each pixel. Even though this can be performed in the other direction, the rays are still bound to a certain pixel and not to the volume data itself, making it an image-order technique.

2.3.1 Image Order Techniques

Image order techniques range from x-ray rendering and maximum intensity projection to isosurface rendering and full volume rendering [2]. All of these methods can be described by rays being cast from an image pixel, and all of them interpolate the value of the volume where it is sampled. However, the methods differ in how they combine the sampled values to produce the final image. In x-ray rendering all values are simply summed along the ray, while maximum intensity projection uses the largest value along the ray. Isosurface rendering is just a special case of full volume rendering, i.e. scattering is not considered.
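The difference between the two simplest combination rules fits in two lines. This is only a sketch of the per-ray accumulation step, with made-up sample values:

```python
# values sampled (after interpolation) along a single cast ray
samples = [0.1, 0.4, 0.9, 0.3, 0.2]

xray = sum(samples)   # x-ray rendering: sum all values along the ray
mip  = max(samples)   # maximum intensity projection: keep the largest
```

Full volume rendering replaces these one-liners with the compositing of the volume rendering integral described in section 2.3.3.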

2.3.2 Model of Light Transport

Full volume rendering is based on an optical model where light transport is determined as a function of the sampled value. The light model is based on the assumption that light travels in straight lines unless it interacts with the participating medium. Three basic phenomena are taken into account in the optical model: emission, absorption and scattering. Emission adds light energy, absorption reduces it, and scattering can do both. All of the light phenomena can be taken into account by the volume rendering integral, but in many applications an emission and absorption model is used.

2.3.3 Volume Rendering Integral

I_0 \, e^{-\int_{s_0}^{D} \kappa(t)\,dt} + \int_{s_0}^{D} q(s)\, e^{-\int_{s}^{D} \kappa(t)\,dt}\, ds \qquad (2.1)


Equation 2.1 [7], the volume rendering integral, describes how light is transported and affected by the participating medium from a starting point s = s_0 to an endpoint s = D. It is often more intuitive to think about the rendering integral by considering the analogy of light going through a cloud. How much of the light from the other side can a viewer see? How much light is added along the way through reflections from the sun and other phenomena? This is what the integral describes and what figure 2.4 tries to illustrate.

Figure 2.4. The figure displays how different light phenomena add together to create a final model of light transport. Imagine the viewer to be positioned at the eye and think of which light phenomena would appear in the cloud. (A) describes light going through the cloud that is only attenuated by the density value of the cloud. (B) and (C) illustrate single scattering and multiple scattering effects that can be taken into account. (D) illustrates how the cloud itself can contribute by adding light.

On the other side of the cloud there is an initial amount of light of a certain color. As the light passes through the cloud, some of it is absorbed. How much of the light is attenuated inside the cloud depends on the density of the medium through which the ray of light passes. How much of the initial light reaches the viewer is calculated by the first term in equation 2.1, where I_0 is the initial amount of light that comes in at s = s_0.

I_0 \, e^{-\int_{s_0}^{D} \kappa(t)\,dt}

As well as being absorbed, light can be emitted and scattered along the way. In the analogy with the cloud, this could be the bright areas on the cloud where the sun is being reflected. Even though light can be added along the way, the added light can be absorbed by the medium after it has been added. Colors may also react differently within a different medium. Such effects are taken into account by the second term in equation 2.1. This term represents the contribution of the source terms attenuated by the medium along the remaining distance to the viewer. For a more detailed description of the volume rendering integral see [1].

\int_{s_0}^{D} q(s)\, e^{-\int_{s}^{D} \kappa(t)\,dt}\, ds

In many rendering scenarios there is no background, and the only light in the scene is the light emitted by the participating medium. How much light is emitted, and what color it has, can be adjusted with a transfer function. In the integral, the transfer function affects the resulting image through the optical properties κ(t) and the source term q(s).
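In a ray caster the integral is evaluated numerically. With the emission and absorption model and samples taken at regular steps along the ray, the two terms above turn into a front-to-back compositing loop. The sketch below uses a single intensity channel and is an illustration of the standard scheme, not the thesis implementation:

```python
import math

def composite(samples, step):
    """samples: (source q, absorption kappa) pairs ordered from the
    viewer into the volume; step: distance between samples."""
    intensity = 0.0
    transparency = 1.0                         # e^(-integral of kappa) so far
    for q, kappa in samples:
        alpha = 1.0 - math.exp(-kappa * step)  # opacity of this small slab
        intensity += transparency * q * alpha  # source term, attenuated by
                                               # the medium in front of it
        transparency *= 1.0 - alpha
        if transparency < 1e-4:                # early ray termination
            break
    return intensity
```

A background light I_0 would be added at the end as `transparency * I_0`, which corresponds to the first term of equation 2.1.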

2.4 Transfer Functions

Within visualization, since there is no natural way of obtaining the absorption and emission coefficients of the data, the user assigns optical properties to the sampled data values to decide how the different structures within the volume should appear. The mapping of sample values to optical properties, as described by equation 2.2, is specified in a transfer function. The process of using a transfer function to identify features of interest based on abstract data values is often referred to as classification [1].

TF(density) ⇒ RGBA    (2.2)

In direct volume rendering the simplest case is the mapping of a specific density value to a color and an opacity value, usually RGBA. This mapping allows a user to change the result of the volume rendering integral and therefore view many different aspects of the volume. For example, a CT scan rendered without a transfer function does not give the viewer much to look at, but rendered with a transfer function that maps the density of bone to one color and opacity and all other density values to zero, an image of the skeleton appears and the rendered image looks like figure 2.5.

The transfer function allows the viewer to explore the volume, giving him or her control over the visualization tool [2]. This interaction is usually done in a transfer function editor, a program or widget that allows the user to set the color and opacity of the transfer function. Choosing an effective transfer function is not always an easy task, especially when transfer functions become multidimensional. Adding a view of the histogram showing where the data is located can be of great assistance when editing the transfer function.
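The simplest mapping in equation 2.2 can be sketched as a small one-dimensional table. The "bone" threshold of 180 and the colors below are arbitrary illustration values, not a real Hounsfield window:

```python
def make_lut(size=256, bone_start=180):
    lut = []
    for i in range(size):
        if i >= bone_start:
            lut.append((1.0, 1.0, 0.9, 1.0))  # bone-like color, opaque
        else:
            lut.append((0.0, 0.0, 0.0, 0.0))  # everything else invisible
    return lut

lut = make_lut()
r, g, b, a = lut[200]   # classify a sample of density 200
```

A renderer evaluates such a table once per sample along every ray, which is why it is usually stored as a texture (see section 2.4.1).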

2.4.1 Lookup Table

When rendering, transfer functions are most often represented by a lookup table, LUT. A lookup table is a discrete representation of the transfer function which is easy to represent with a texture. Textures are commonly used within computer graphics and are therefore easy to use in a volume rendering implementation. Because of this discrete representation, there is a risk of introducing aliasing artifacts. Furthermore, high frequencies in the transfer function can also introduce aliasing artifacts in the rendered image. Aliasing can be reduced by choosing the right classification scheme.

Figure 2.5. The transfer function is set to show the density values around bone. [Image source: http://barre.nom.fr/medical/these/index.html]

2.4.2 Classification Scheme

The mapping of a sample to an RGBA value can be done either before or after the sample value is interpolated [1]. If the mapping is performed after the sampled value has been interpolated, it is called post-classification. Mapping through the transfer function before the sample is interpolated, called pre-classification, may avoid artifacts at object boundaries. However, pre-classification loses the high-frequency detail in the transfer function that post-classification retains, and therefore does not fulfill the sampling theorem, causing the resulting image to have blocky artifacts. Pre-classification can, however, be useful in segmented volumes where interpolation between the segments is unwanted. The difference in rendering with these two methods is displayed in figure 2.6.

As well as pre- and post-classification, there are also pre-integrated transfer functions. Pre-integrated transfer functions do produce the best images, but require a new integration of the volume for every new transfer function. This makes interaction with the transfer function more difficult and time consuming [1]. Hence, the renderings produced in this thesis are rendered with a post-classification scheme. Assigning a good transfer function can be a difficult task, due to the difficulties in identifying the features of interest in the transfer function domain. Displaying the histogram in the same view as the transfer function editor, as seen in figure 2.7, helps in identifying interesting parts of the volume.
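The order of operations is the whole difference between the two schemes, which a one-dimensional sketch makes visible. The hard-threshold transfer function below is hypothetical; its high frequency is exactly what exposes the difference at a boundary between two voxels:

```python
def tf(v):
    # hypothetical binary opacity transfer function (a hard threshold,
    # i.e. a high-frequency TF)
    return 1.0 if v >= 100.0 else 0.0

def lerp(a, b, t):
    return a * (1.0 - t) + b * t

v0, v1, t = 0.0, 200.0, 0.5        # sample halfway between two voxels

pre  = lerp(tf(v0), tf(v1), t)     # pre-classification: classify, then interpolate
post = tf(lerp(v0, v1, t))         # post-classification: interpolate, then classify
```

Pre-classification smears the threshold into an intermediate opacity of 0.5, while post-classification keeps the sharp step of the transfer function.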


Figure 2.6. The left image is rendered with a pre-classification scheme and the right image is rendered with a post-classification scheme. [Image source: http://www.cse.ohio-state.edu/ wangcha/cis78814Q-au02-lab2.htm]

Figure 2.7. The figure displays how a histogram can help in the transfer function editing process.

Chapter 3

Related Work

Many articles have addressed the topic of transfer functions within volume rendering. Some use several one-dimensional transfer functions to separate different properties of the volume. Others use derived properties and statistical analysis to generate the "optimal" solution. Many approaches combine these two methods to render a better image. In this thesis, two-dimensional transfer functions are used, and therefore the following chapter describes work relating to that area of transfer function editing.

3.1 Using Derived Attributes

The number of samples in data sets seems to grow with each technological advance, and with it, the amount of information that is to be displayed. To give a good view of a volume, the boundaries usually contain sufficient information and are therefore a key aspect of transfer function editing. The information between the boundaries is uniform and gives very little extra information. Therefore Levoy [6] introduced two transfer functions based on gradient magnitude. One of them, based on density and gradient magnitude, can be motivated by looking at the relationship between these two factors, as seen in figure 3.1.

Plotted against the original values, the gradient magnitude of a blurred circle in two dimensions maps to an arch. A blurred circle refers to a circle with edges that are not a step function, but a soft shift from the circle to the background. In three dimensions, the same technique is applied to a blurred sphere, and the same arches appear in the plot. Using this technique on a whole volume of blurred spheres results in several arches on top of each other. In theory, each of these arches represents a uniform part of the volume. If referring to a volume of the human body, a uniform part of the volume is usually the same organ. Thus, this method enables the detection of organ boundaries.
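This value/gradient-magnitude relationship can be sketched with a simple one-dimensional model. The example below is illustrative and not from the thesis: a blurred boundary is modeled as a logistic step, for which the gradient magnitude depends on the value alone as v(1 - v)/s, tracing an arch that peaks at the boundary midpoint.

```cpp
#include <cassert>
#include <cmath>

// Model of a blurred boundary between two materials: a logistic step
// v(x) = 1 / (1 + exp(-x/s)). Its derivative can be written purely in
// terms of the value itself, dv/dx = v * (1 - v) / s, so gradient
// magnitude plotted against density traces an arch peaking at v = 0.5.
double blurredStep(double x, double s) {
    return 1.0 / (1.0 + std::exp(-x / s));
}

// Central-difference estimate of |dv/dx| at x.
double gradientMagnitude(double x, double s, double h = 1e-4) {
    return std::fabs(blurredStep(x + h, s) - blurredStep(x - h, s)) / (2.0 * h);
}

// Sample (value, gradient magnitude) pairs across the edge and return
// the value at which the gradient magnitude peaks.
double valueAtPeakGradient(double s) {
    double bestV = 0.0, bestG = -1.0;
    for (double x = -5.0; x <= 5.0; x += 0.01) {
        double g = gradientMagnitude(x, s);
        if (g > bestG) { bestG = g; bestV = blurredStep(x, s); }
    }
    return bestV;
}
```

Sampling many such edges in a volume stacks one arch per boundary, which is exactly the structure visible in figure 3.1.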

Even though Levoy used the gradient magnitude to visualize the boundaries it was still difficult to utilize the plotted arches made by the data. Therefore Kindlmann et al. [3] proposed using the first and second derivative in the gradient direction to semi-automatically generate the transfer function.


3.2

Multidimensional Transfer Functions

Multidimensional transfer functions are useful tools for extracting material boundaries. The problem is that a good transfer function is difficult to identify, and a multidimensional transfer function is even more problematic. Kniss et al. [4] introduced an editing tool with widgets that helps create three-dimensional transfer functions. They expanded Kindlmann's idea of using the density value, the gradient magnitude, as seen in figure 3.1, and the second directional derivative, with an intuitive editing tool based on widgets. The widgets enabled interactive design of multidimensional transfer functions, which in turn enabled subtle surface properties to be visualized.

Figure 3.1. Gradient magnitude histogram. Notice how the different parts of the volume form arches.

Later, Kniss et al. [5] expanded these ideas to encapsulate multivariate data. Here, modern graphics hardware was used to interactively render multivariate data with multidimensional transfer functions to provide interactive shadows for a volume. The multivariate data contained, at each sample point, several scalar values representing different simulated or measured quantities. The data could be a list of quantities from a single scanning modality or a combination of modalities, such as PET, MRI and CT.

Utilizing the gradients and second directional derivatives of a sample in CT data, as Kniss et al. [5] presented, has the potential to produce great images, but it does not perform as well on CT angiography data. In CT angiography, separation of the blood vessels from the nearby bone tissue is essential to a successful classification. However, according to Vega et al. [9], a closer inspection shows that the difference in intensities of the bone tissue and the vessels prevents the differentiation of the neighboring tissues. A method was presented where transfer functions are automatically aligned with the arc of the gradient magnitude for a better separation of CT angiography data.

Next, Vega et al. [10] used the structure of the gradient magnitude histogram for segmentation of blood vessels by automatic generation of transfer functions. Here, the transfer function is automatically adjusted from a known template to better suit the current 2D histogram. The adjustment is based on image registration techniques.

3.3

Multidimensional Transfer Functions for Dual

Energy Data Sets

Earlier, dual energy data has been used in many segmentation algorithms to successfully classify desired parts of the volume. The segmentation is performed in a time consuming preprocessing step before the actual rendering. This operation requires prior information about the tissue classes being scanned. In line with new volume acquisition techniques producing more than one data set of the same volume, Vega et al. [8] introduced a new method to classify a sample from two volumes. The method requires no knowledge of which tissues have been scanned and allows for interactive classification of the volume.

Just as the two-dimensional transfer function of gradient magnitude relies on features of the data set appearing in a histogram as arches, in dual energy data sets the differences in the density response appear as structures diverging from a linear relationship between the values. Vega et al. plotted these histograms and utilized an interactive 2D editor with direct volume rendering to show the separation possibilities in dual energy data. The editor used morphological operations to automatically detect areas of interest. Even though automatically detected, the transfer function still needed adjustments. The type of histograms produced is illustrated in figure 3.2.

Vega et al. also experimented with utilizing the two-dimensional transfer function as a binary mask of which voxels are to be rendered. The color and opacity were then set by a one-dimensional transfer function.


Figure 3.2. Dual CT histogram. The figure shows where certain tissue is found within the histogram. The area for the lungs does not show up very well in this histogram. This is because the volume that generated the histogram included only a small part of the lungs.


Chapter 4

Approach

This chapter begins by describing in more detail some of the issues of the current implementation of transfer functions in the SCR framework. Current implementation refers to the implementation in the framework before this thesis work. The chapter then continues by focusing on how these issues are solved in this thesis project. Studying dual energy data sets requires a framework for two-dimensional transfer functions. How the transfer functions are intended to be edited to efficiently classify dual energy samples is therefore presented.

4.1

Current Implementation Issues

To enable adequate support for transfer functions in current as well as future applications, a few requirements need to be addressed. For the transfer function structure to be usable by the current applications, the structure needs to be a general framework describing only the basic underlying structure.

Being able to use the implementation in future applications sets a requirement on which APIs and other programs the transfer function structure is linked to. Preferably, the implementation is not linked to any other API.

Adding and removing primitives in a smooth manner is essential to transfer function exploration. This can be thought of as a UI issue, but without a good transfer function representation the task can become a complicated one. The representation needs to support this as well as the ability to load and save a TF to file. The ability to save to file allows transfer functions to be saved outside the context in which they are used. For example, in the case of a scene graph, the transfer function can be represented outside the scene graph and not in values stored in different fields. Thus, transfer functions can be used by a larger set of different applications.

Many TF editing tools tend to make transfer function editing a tedious and time consuming task. This creates the need for a simple and clear way of editing the transfer function. The transfer function editing also needs to be interactive. From this base it is possible to develop more complicated viewers. By addressing these issues the framework can support current applications as well as allowing the structure to contribute to future applications.

4.2

Separation of Functionality

To fulfill the requirement that the transfer function implementation be reusable, functionality needs to be modularized and made independent of other APIs. The idea is that the new editor is based on a toolkit that controls all interaction with the transfer function. This toolkit depends only on standard C functions and basic OpenGL commands, making it a separate library that can be used for editing transfer functions outside of frameworks such as XIP, RAD and Qt. This separation should allow the tool to be a "living entity" that is easily updated to suit current and future use cases.

Figure 4.1. In order for the design to be reusable, a separation of functionality is needed. Here the diagram shows the main structures that will achieve this separation.

4.2.1

Separation of GUI

Since the GUI should be able to have many different appearances, it is separated from the transfer function description. The idea is that a new editor should be able to edit transfer functions created in another editor. This becomes possible by separating the description of the transfer function from the editor. Each use case can now have its own editor which all rely on the same transfer function description.

4.2.2

LUT Generation

Since a lookup table, LUT, is a discrete representation of the transfer function and has very little to do with the actual UI, its generation is also placed in the TF library. This separation allows the LUT to be generated elsewhere in the scene graph.


LUT generation is slow in the previous implementation. To remedy this problem, the LUT is rendered to an FBO using simple OpenGL primitives. This allows for adding complexity by using a shader during the rendering process. Allowing the primitives to be rendered instead of manually interpolated transfers the workload from the CPU to the GPU, and a performance increase can be achieved. Since the LUT is generated more quickly, the rendering allows for interactive updates of the 2D transfer function.

Since the size of the 1D LUT is much smaller, it allows for interactive LUT generation on the CPU using a simple linear interpolation scheme.
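A minimal sketch of such CPU-side 1D LUT generation is shown below, assuming hypothetical names and RGBA control points sorted by position; it is not the thesis code, only an illustration of the linear interpolation scheme.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// A control point on the 1D transfer function: a position along the
// density axis in [0,1] plus the RGBA value assigned there.
struct ControlPoint {
    float pos;
    float rgba[4];
};

// Fill an RGBA-packed table of `size` entries by linearly interpolating
// between neighboring control points (assumed sorted by pos).
std::vector<float> generateLut1D(const std::vector<ControlPoint>& pts,
                                 std::size_t size) {
    std::vector<float> lut(size * 4, 0.0f);
    for (std::size_t i = 0; i < size; ++i) {
        float x = (size > 1) ? float(i) / float(size - 1) : 0.0f;
        // Find the segment [pts[k], pts[k+1]] containing x.
        std::size_t k = 0;
        while (k + 1 < pts.size() && pts[k + 1].pos < x) ++k;
        const ControlPoint& a = pts[k];
        const ControlPoint& b = pts[(k + 1 < pts.size()) ? k + 1 : k];
        float t = (b.pos > a.pos) ? (x - a.pos) / (b.pos - a.pos) : 0.0f;
        if (t < 0.0f) t = 0.0f;
        if (t > 1.0f) t = 1.0f;
        for (int c = 0; c < 4; ++c)
            lut[i * 4 + c] = a.rgba[c] + t * (b.rgba[c] - a.rgba[c]);
    }
    return lut;
}
```

A typical 1D LUT has only a few hundred to a few thousand entries, which is why this loop is cheap enough to rerun on every edit.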

4.2.3

File Format

The ability to store and reuse transfer functions requires a specified file format. To be able to interpret and write transfer function files, this format needs its own parser and tree generator. To make the file format easier to understand, it is designed in a similar way to the file format describing the XIP scene graph. Parse tree generation is separated from the interpreting part of the parsing process. This separation allows the tree generation to be general and used in other areas as well. In order to be easily read by future users, the file format is text based. To allow an easy understanding of the file format, it is closely related to the data structure of the TF definitions. It basically uses the variable names from the data structure and then states the value associated with each variable name.

4.3

Basic Primitives

For the transfer function description to be general, it needs to support some type of basic structure to build upon. In this thesis, this structure is referred to as a transfer function primitive. There can be, and are, several types of primitives. Many things can separate one type of primitive from another, for example the number of points needed to describe it, or the order in which the points are rendered. Another quality separating primitive types could be the number of values associated with each point. In the most common use case, a point has a one- or two-dimensional coordinate as well as an RGBA value. However, in the future this may not be adequate, as users may need to specify more than just an RGBA value. A schematic of this principle is displayed in figure 4.2. There are also cases where primitives have the exact same description, but differ in how the user is allowed to edit them. For example, moving a certain point of a trapezoid may not be allowed, yet if the primitive is not interpreted as a trapezoid the constraint is lifted.
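The idea of points carrying coordinate and data vectors of arbitrary length can be sketched as follows; the struct and accessor names are illustrative, not the actual XipTFPrimitive interface.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative primitive point storage: both the coordinate and the
// per-point data can have arbitrary length, as in figure 4.2. All
// points share one flat array; accessors compute offsets from the
// per-point stride.
struct TFPrimitiveData {
    std::size_t coordDim;        // e.g. 1 or 2 coordinate components
    std::size_t valueDim;        // e.g. 4 for RGBA, or more
    std::vector<float> data;     // point-major: [coord..., value...] per point

    std::size_t stride() const { return coordDim + valueDim; }
    std::size_t pointCount() const { return data.size() / stride(); }

    float coord(std::size_t point, std::size_t c) const {
        return data[point * stride() + c];
    }
    float value(std::size_t point, std::size_t v) const {
        return data[point * stride() + coordDim + v];
    }
};
```

With this layout, adding a new per-point quantity only changes `valueDim`; none of the accessors need to change.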

4.3.1

1D Primitives

To accommodate the most common use cases, a few simple 1D transfer function primitives are included: a triangle peak, a trapezoid and a general structure called an envelope that can contain an arbitrary number of points. The first two come with several constraints. One cannot add points to a triangle peak or a trapezoid and still view them as such, which means adding and/or removing points is not allowed. For a trapezoid, certain movement restrictions are imposed: since the middle points must form a horizontal line, they are only allowed to move vertically as a pair.

Figure 4.2. The figure displays how the coordinate values, as well as the data values, can be of arbitrary length.
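The pairing constraint on the trapezoid's middle points can be sketched as below; the point ordering and function name are hypothetical, chosen only to illustrate the rule that a vertical move of one top point drags the other with it.

```cpp
#include <cassert>

struct Point2 { float x, y; };

// Assumed point order: left base, left top, right top, right base.
// Horizontal moves stay independent; a vertical move applied to one
// top point is applied to both, keeping the top edge horizontal.
void moveTrapezoidPoint(Point2 pts[4], int idx, float dx, float dy) {
    pts[idx].x += dx;
    pts[idx].y += dy;
    if (idx == 1) pts[2].y = pts[1].y;
    if (idx == 2) pts[1].y = pts[2].y;
}
```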

4.3.2

2D Primitives

Since there are countless ways to represent a 2D primitive, one is chosen. This representation of a 2D primitive closely resembles contour lines. The contour lines are not restricted in the way that each point in a level must have the same value, but the name does describe the shape of the primitive. Except for the center point, each point in the primitive is part of an edge of an enclosed area. The edge may not be moved outside the next level or inside its previous level. As seen in figure 4.3, the lines act in a similar way to a height map displayed with isolines.

Figure 4.3. The figure displays a schematic image of how a 2D primitive may look. The inner level may never be moved outside the outer level.

4.4

Histogram and Gradient Generation

When editing a transfer function the histogram can be displayed behind the primitives. The histogram generation is, however, connected to neither the editor nor the transfer function framework. Instead, a separate scene graph node generates the histogram image. Using a separate node allows future developers to optimize the histogram generation code as well as use the same histogram for other visualization tools.


The same approach is used for the calculation of the gradient volume. A separate node outputs the gradient volume needed to produce a gradient magnitude histogram. The gradient volume can be used in the rendering process, but since there are already two volumes on the graphics card, the rendering gradient is generated on the fly with the same algorithm as the gradient calculation algorithm used for the histogram.

4.5

Rendering

With a functional UI suited for transfer function editing, the process of separating blood vessels from bone in real time classification faces new problems. One important issue is how to optimally utilize the classification possibilities of the two-dimensional transfer function.

4.5.1

Rendering Issues

When rendering data sets with more than one volume, several issues arise. One issue that is easily seen in dual CT data is that one volume is smaller. Since the energy levels are measured simultaneously, the mechanical structure of the CT scanner can only acquire volumes of two different sizes, one larger volume and one smaller volume inside the larger volume. This appears in the rendered volume when the sample only contains a value from the larger volume. This causes the lookup in the transfer function to be at one of the edges and therefore probably not classified as anything, unless the transfer function has been set to cover this area. Either way, a clear and apparent edge will be visible if measures are not taken to address this issue.

When rendering dual energy CT data, there is a strong tendency for classification to leave large amounts of speckle. In this case speckle refers to the small dots that result from classifying samples that are separated from likewise classified samples. Figure 4.4 shows how the classification of a blood vessel has been compromised by the speckle on the hip bone.

4.5.2

Rendering Strategy 1

The first strategy is to simply use the two-dimensional transfer function to classify a sample in the rendering process. This is just regular volume rendering with a different type of transfer function. However, there are still problems regarding how to treat the edge of the smaller volume.

One simple way of attacking the problem of the edges from the different sized volumes is to only use the samples where both volumes exist, otherwise classifying the sample as undefined. Since the values in the volumes are mostly nonzero, an approximation of classifying the sample as undefined is to simply ignore samples where the smaller volume is zero. This could however introduce strange artifacts within the region where both volumes are to be rendered. There could be valuable information that is then excluded from the rendering.
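A sketch of this approximation is given below, with hypothetical names and a stub transfer function standing in for the real 2D lookup; it only illustrates the rule of treating samples outside the smaller volume as undefined.

```cpp
#include <cassert>

struct RGBA { float r, g, b, a; };

// A stand-in for the real two-dimensional transfer function lookup.
RGBA demoTf(float largeVal, float smallVal) {
    return RGBA{largeVal, smallVal, 0.0f, 1.0f};
}

// Strategy 1 classification step: if the smaller volume has no data
// at this sample (value zero), approximate "undefined" by returning
// a fully transparent color; otherwise use the 2D transfer function.
RGBA classifyDual(float large, float small, RGBA (*tf2d)(float, float)) {
    if (small == 0.0f)
        return RGBA{0, 0, 0, 0};
    return tf2d(large, small);
}
```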


Figure 4.4. When trying to segment bone from blood vessels there is often speckle appearing on the bone. The figure displays speckle on the hip bone. This is unwanted and is hopefully minimized with dual energy.

One question that needs to be investigated is how much information is actually lost in the final rendered image. Most of the time a reconstructed volume has nonzero values representing empty voxels. The values are close to zero, but still nonzero, suggesting that it is very unlikely that samples within both volumes are discarded.

Instead of discarding samples where one volume is zero, one can mathematically express the cylinder volume that contains the smaller volume. However, this requires knowledge of the volume before rendering, which may not be available in all cases.

4.5.3

Rendering Strategy 2

Another approach to rendering the two volumes is to use one volume to calculate the gradient. In the case of dual energy CT, the larger volume is appropriate. Using the gradient to plot the gradient magnitude against the density is well known within volume rendering. With an appropriate transfer function, edges are clearly detected and rendered. However, the samples are not very easily classified. A new approach is to render the opacity using the gradient magnitude of the larger volume. Then, each sample is classified with a color value through the use of the dual energy transfer function. This combination of the two methods does not need to be binary. One can use both values and appropriately weight them for a suitable rendering.

The gradient magnitude transfer function is easily adjusted to suit the overall opacity. Then the user switches to rendering with the combined approach and fine tunes the dual energy transfer function to get the classification of the rendered image. This method leaves an edge where the smaller volume ends. The edge is the result of little or no color being set in the voxels outside the smaller volume.
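The binary and weighted combinations described in strategy 2 can be sketched as one function: color always comes from the dual energy classification, while the opacity is blended between the two transfer functions with a weight w in [0,1] (w = 1 gives pure dual energy opacity, w = 0 pure gradient magnitude opacity). Names are illustrative, not the thesis API.

```cpp
#include <cassert>

struct Rgba { float r, g, b, a; };

// Combine the two classifications: dual energy supplies the color,
// and the alpha is a weighted mix of the dual energy alpha and the
// gradient-magnitude alpha. w = 0 reproduces the binary variant
// (opacity purely from the gradient magnitude transfer function).
Rgba combineOpacity(const Rgba& dual, const Rgba& grad, float w) {
    return Rgba{dual.r, dual.g, dual.b,
                w * dual.a + (1.0f - w) * grad.a};
}
```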


Chapter 5

Implementation

This chapter covers how the new transfer function toolkit is implemented. Initially, the chapter presents an overview of the design as well as how the transfer function structure is implemented. Thereafter, the chapter covers the GUI implementation and several other external scene graph nodes which use the toolkit.

Figure 5.1. The implementation of the design created the structure shown above to achieve a sufficient separation of functionality.

5.1

Overview

The XipTFEngine object acts as the transfer function toolkit that all interaction goes through. However, there are many parts of the transfer function editing process that require information that is not controlled by the XipTFEngine. A good example is the XIPTFGenerateHistogram engine. This node generates an image that the transfer function editor uses as a background during its rendering.


As well as the histogram engine, several other processes are totally separate from the actual toolkit. The implemented separation is displayed in figure 5.1.

Figure 5.2. The XipTFEngine includes an array of XipTFDefinitions as well as its own attributes and member functions. XipTFDefinition in turn includes an array of XipTFPrimitives as well as its own attributes and member functions.

5.2

Data Structure

The data structure has a basic design as shown in figure 5.2. The encapsulating structure is the XipTFEngine, which contains a list of TF definitions as well as all the functions that alter them. The XipTFEngine class is the channel through which all external interaction with the TF data structure passes. This way there is a possibility of implementing a list of previous actions that allows an undo command. The XipTFEngine not only includes the functions that alter the TF definitions, but also the functions that use them, such as rendering and saving of TF definitions.

The XipTFEngine is designed as a singleton object, meaning that it is guaranteed that only one object of this class exists at a time. In this way, all the nodes and engines that interact with the TF definition interact with the same object.
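The singleton guarantee can be illustrated with a minimal sketch; the real XipTFEngine has a much richer interface, and the member shown here is a stand-in.

```cpp
#include <cassert>

// Minimal singleton sketch: a single shared instance obtained through
// a static accessor, with copying disabled so no second instance can
// be created by accident.
class TFEngine {
public:
    static TFEngine& instance() {
        static TFEngine theInstance;   // created once, on first use
        return theInstance;
    }
    TFEngine(const TFEngine&) = delete;
    TFEngine& operator=(const TFEngine&) = delete;

    int definitionCount = 0;           // stand-in for real state

private:
    TFEngine() {}
};
```

Every node or engine calling `TFEngine::instance()` receives a reference to the same object, which is exactly the property the thesis relies on.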

Each TF definition contains a list of the primitives that together can be utilized to create one LUT. As well as encapsulating the primitives, the XipTFDefinition class handles the creation of selection masks, binary arrays indicating which primitives in a definition are to be used. For example, if the user wants to render only the 1D trapezoid shaped primitives in a certain definition, the other primitives would be masked out.

The TF data structure is designed in a scheme that allows several levels of interaction. The structure is based on TF primitives. Each primitive is described by a set of points and by attributes describing, among other things, which shape and which type the primitive is. In this case, type refers to how the point values are to be interpreted. For example, one type is SCALAR_COLORALPHA, which means that the coordinate is described by a scalar value and the remaining values describe the color and alpha values in that order.

Figure 5.3. The XipTFPrimitive includes an array of all the points as well as its own attributes and member functions. The points are actually stored in one long array that get functions know where to access for a certain point.

The shape attribute instead describes how the points themselves are to be interpreted. One could say it describes the topology of the primitive, both in the sense of the actual rendering and in the way that the primitive is interacted with. Each time a point in a primitive is moved, the editor must check to see if this movement is allowed. The shape variable then allows the editor to know which constraints are to be given to a specific primitive.

Figure 5.4. The XipTFEngine includes function tables pointing to the corresponding functions for each primitive shape.

The primitive specific functions, such as how to render a certain primitive and how intersection is calculated for each primitive, are implemented outside the data structure. The functions are called through a function table that points to the correct function for each primitive shape. This separation allows new primitives to be added to the transfer function library by adding an entry in the function table and implementing the corresponding rendering and intersect functions. Figure 5.4 describes this method of separation. To use the existing editor with added primitives, a few more additions to the editor code need to be made.
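A minimal sketch of this dispatch scheme follows, with stand-in shapes and trivial functions rather than the thesis code; the point is that adding a shape only means appending one table entry plus its functions.

```cpp
#include <cassert>

// Each primitive shape gets one table entry holding pointers to its
// shape-specific operations (here simplified to a point count and a
// trivial hit test standing in for render/intersect).
enum Shape { TRIANGLE_PEAK = 0, TRAPEZOID = 1, SHAPE_COUNT };

struct ShapeFuncs {
    int  (*pointCount)();
    bool (*intersect)(float, float);
};

int  trianglePoints() { return 3; }
int  trapezoidPoints() { return 4; }
bool triangleHit(float, float) { return false; }   // stub
bool trapezoidHit(float, float) { return true; }   // stub

static const ShapeFuncs g_funcTable[SHAPE_COUNT] = {
    { trianglePoints,  triangleHit  },   // TRIANGLE_PEAK
    { trapezoidPoints, trapezoidHit },   // TRAPEZOID
};

// Callers dispatch through the table instead of switching on shape.
int pointCountFor(Shape s) { return g_funcTable[s].pointCount(); }
```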

5.3

File Format

An example of the implemented file format is attached in appendix B. It describes each attribute of the TF definition and its primitives. As described in figure 5.5, the XipTFParser handles the tree generation of a general file format closely resembling that of the scene graph files. General here refers to the fact that the tree generation has nothing to do with the actual information specified in the file; it just interprets the structure of the file. The XipTFParser implementation is not a product of this thesis, but the tree it creates is utilized and interpreted in the XipTFEngine class. Here each primitive and definition attribute is translated to the data structure, or from the data structure to a parse tree.

Figure 5.5. The loading process involves generating a general parse tree from a file and then interpreting this tree to the corresponding values in the TF definition. The saving process generates the parse tree from the TF definition and lets the tree be written to file.

When saving a TF definition, XipTFEngine goes through all the attributes of the definition and each primitive with its attributes while adding these properties as nodes in a parse tree. The generated parse tree is then traversed and printed to a specified file by XipTFParser. When loading from a file, XipTFParser generates a tree of nodes containing the name, argument and value of each property. This tree can now be traversed by XipTFEngine, and each node is interpreted and a value is set for the specified definition.

5.4

GUI

With the TF structure separated from the UI, the editor uses the functions defined in the TF structure to edit the TF in suitable ways. This is achieved by allowing the UI to have a pointer to an instance of the TF structure object. Through this object the editor now manipulates different TF definitions according to how the user interacts with the UI.

The interaction of the UI is described in appendix A. The interaction is, however, limited to using the fields created in the node as well as the mouse, the Shift key and the Ctrl key. This design was chosen for the following reasons. First, no keyboard interaction is currently being sent through the scene graph as events. This means that any event resembling a pressed key cannot be caught by the editor node. Secondly, the UI should be able to perform its tasks through the use of its fields. That way, if one wanted to make a user friendly UI with for example HTML or Qt, the fields need only be connected and used properly.

5.5

LUT Generation

The lookup table generation is handled by the nodes SoXipTFGenerateLUT1D and SoXipTFGenerateLUT2D. Both nodes contain a pointer to the same instance of the transfer function engine object. In the case of the 1D LUT, the node simply uses a predefined LUT generation operator that returns a pointer to an array containing the LUT. Since the generation is taken care of by the TF library, no other node is needed in the scene graph except for one making the updates lazy. SoXipLazyGroup will only traverse the underlying nodes if the node's ID has changed, which only occurs if any of the node's fields or children have been altered. A typical scene graph where a one-dimensional transfer function is used can be viewed in figure A.3.

In the case of a 2D LUT, the LUT texture is rendered. In order to render to texture, an FBO must be set up as the render target. This is done using the existing FBO nodes in the scene graph. The actual LUT generation node will, like the 1D variant, receive a pointer to an instance of the transfer function engine object. With this object it can use the existing render functions to draw the primitive values to a texture. Figure A.1 displays a typical scene graph of how a two-dimensional transfer function is used.

5.6

Histogram Generation

SoXipTFGenerateHistogram is an engine that takes two volumes and generates a selected 2D histogram image. The output image field needs to be connected to the editor manually for it to be viewed in the TF editor. To enable a variety of different histograms to be generated, an enum decides how each input volume is to be interpreted. A variety of different histograms can be generated, for example, ones describing the density of volume one against volume two, or the difference in density between the two volumes against either density.

Figure 5.6. The above scene graph is a typical example of how the transfer function can be used in the 1D case. The important nodes are the XipTFEditor1D and XipTFGenerateLUT. The rest is normal volume rendering and OpenGL. The package 1DtransferFunc is a group of nodes setting up a texture and uniform variables for the LUT texture.
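The core of a joint two-volume histogram, of the kind SoXipTFGenerateHistogram produces, can be sketched as follows. This is illustrative code, not the engine source; values are assumed pre-normalized to [0,1] and the bin layout is row-major.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Joint (2D) histogram of two co-registered volumes: every voxel
// pair (v1, v2) increments one bin, so structures diverging from the
// diagonal become visible exactly as in the dual energy histograms.
std::vector<int> jointHistogram(const std::vector<float>& vol1,
                                const std::vector<float>& vol2,
                                std::size_t bins) {
    std::vector<int> hist(bins * bins, 0);
    for (std::size_t i = 0; i < vol1.size() && i < vol2.size(); ++i) {
        std::size_t bx = std::size_t(vol1[i] * float(bins - 1) + 0.5f);
        std::size_t by = std::size_t(vol2[i] * float(bins - 1) + 0.5f);
        ++hist[by * bins + bx];
    }
    return hist;
}
```

The difference histograms of figure 6.4 follow the same pattern with `vol2[i] - vol1[i]` (shifted into range) on one axis.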

5.6.1

Gradient Generation

There are also a few modes where gradient magnitude histograms can be generated. The gradient magnitude modes do, however, only calculate the magnitude of a gradient volume. In many cases where gradient volumes are used, the same gradient needs to be calculated for the histogram and the actual rendering. Even if the rendering uses a gradient calculated on the fly, the values will differ if the user does not implement the same gradient calculation method outside the shader. To allow this flexibility the gradient calculation was implemented in a separate engine, SoXipTFGenerateGradient.

The SoXipTFGenerateGradient engine uses a simple algorithm to calculate the gradient. It consists of calculating the central difference in all three voxel directions, and thereby approximating the gradient value.
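The central difference scheme can be sketched as follows; this illustrates the standard formula rather than the SoXipTFGenerateGradient source, and skips border voxels for brevity.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Central difference at an interior voxel: each gradient component is
// half the difference of the two neighbors along that axis.
Vec3 centralDifference(const std::vector<float>& vol,
                       std::size_t dimX, std::size_t dimY,
                       std::size_t x, std::size_t y, std::size_t z) {
    auto at = [&](std::size_t i, std::size_t j, std::size_t k) {
        return vol[(k * dimY + j) * dimX + i];
    };
    Vec3 g;
    g.x = 0.5f * (at(x + 1, y, z) - at(x - 1, y, z));
    g.y = 0.5f * (at(x, y + 1, z) - at(x, y - 1, z));
    g.z = 0.5f * (at(x, y, z + 1) - at(x, y, z - 1));
    return g;
}

float magnitude(const Vec3& g) {
    return std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
}

// 3x3x3 test volume whose value equals the x index, so the exact
// gradient is (1, 0, 0) everywhere in the interior.
std::vector<float> rampVolumeX() {
    std::vector<float> vol(27);
    for (std::size_t k = 0; k < 3; ++k)
        for (std::size_t j = 0; j < 3; ++j)
            for (std::size_t i = 0; i < 3; ++i)
                vol[(k * 3 + j) * 3 + i] = float(i);
    return vol;
}
```

Running the same `centralDifference` for both the histogram and the on-the-fly shader gradient keeps the two value domains consistent, which is the flexibility the separate engine is meant to provide.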

Figure 5.7. The above scene graph is a typical example of how the transfer function can be used in the 2D case. Compared to the 1D case, the 2D case needs the LUT to be generated by rendering to an FBO. As shown above, this can be done at the rendering portion of the scene graph.

5.7

Rendering

GPU based texture slicing is used to render the dual energy CT data. A simple volume rendering slicer takes the two data values from the current position and looks up the value in the transfer function texture. This lookup gives the optical properties of the current sample. The color is multiplied by the alpha value and OpenGL blending does the actual compositing.
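The blend stage can be emulated on the CPU to show what the compositing computes. This sketch assumes back-to-front slice order with premultiplied colors, which typically corresponds to glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); it is an illustration, not the framework's shader.

```cpp
#include <cassert>

struct Color { float r, g, b, a; };

// Multiply a classified sample's color by its alpha, as the slicer
// does before handing the fragment to the blend stage.
Color premultiply(const Color& c) {
    return Color{c.r * c.a, c.g * c.a, c.b * c.a, c.a};
}

// Back-to-front "over" blending with premultiplied source color:
// dst = src + (1 - src.a) * dst.
Color blendOver(const Color& srcPremult, const Color& dst) {
    float k = 1.0f - srcPremult.a;
    return Color{srcPremult.r + k * dst.r,
                 srcPremult.g + k * dst.g,
                 srcPremult.b + k * dst.b,
                 srcPremult.a + k * dst.a};
}
```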

5.7.1

Dual Energy Data with Gradient Magnitude

To render a sample using both dual energy and gradient magnitude, an implementation is used where the gradient determines the opacity and the dual energy data sets the color. The separation of these optical properties forces the program to utilize two transfer functions, one for the gradient magnitude and one for the dual energy data. The gradient magnitude can be computed from either of the two volumes, giving slightly different renderings.


Chapter 6

Results

This chapter presents the results of the thesis project. The chapter covers the data structure and then displays some images of the GUI. The GUI relies on the ability to show different histograms, which are also presented. Finally the results of rendering the dual energy data sets with two-dimensional transfer functions are presented.

6.1

New Data Structure

The basic implemented structure allows interaction with transfer functions. This includes saving, loading, and adding new primitives to a definition as well as removing primitives from a definition. This functionality did not exist in the older implementations. The transfer function is described in a file format instead of fields, allowing a parametric description to be saved on disk.

The design of how to handle transfer functions is now implemented in a way that future employees, interns or full-time, can edit or add to the existing modules. The separation into modules also allows a good separation of the code. In order to separate the external libraries needed in the scene graph case, the functionality is divided into two dynamically linked libraries: one for the data structure and all its operations, and one for all the Open Inventor scene graph nodes that are to be used in XIP. This separation keeps the data structure, as far as possible, independent of other application interfaces, with the exception of OpenGL.

6.2

New GUI

To enable editing of transfer functions, the implemented framework contains two UIs: one for the one-dimensional case and one for the two-dimensional case. The possibility of connecting them and making a user friendly UI that suits physicians lies in the hands of future developers.


The UI implemented has capabilities for adjusting colors and opacities. In the two-dimensional case primitives can be rotated and stretched within the limits of keeping the internal triangles convex. Since the UI is based on fields, almost all interaction can be done from external connections. The fields are displayed in figure 6.1. However, interaction with the mouse is directly captured by the editor node. The UIs are shown in figures 6.2 and 6.3.

Figure 6.1. The editor contains these fields.

6.2.1 Histograms

Figures 6.4, 6.5 and 6.6 show the different histogram variations that can be generated and were found applicable in the dual energy case.

Notice the strange artifacts in several of the histograms. As can be seen in figure 6.6, the artifacts only appear in the second energy level, implying that something is wrong in that volume. The artifacts appear in all the tested data sets whose slices have been downsampled.
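The joint histograms of the kind shown in figures 6.4 and 6.5 can be built by letting each voxel contribute one count to a 2D bin indexed by its density in the first volume and the density difference between the two energy levels. The following sketch assumes integer densities in a known range; bin counts and normalization are illustrative.

```cpp
#include <vector>
#include <cstddef>

// Builds a 2D histogram: horizontal axis is the density in vol1,
// vertical axis is the difference (vol2 - vol1), shifted so that a
// zero difference lands in the middle row (as in the 2D editor view).
std::vector<std::vector<int>> jointHistogram(
    const std::vector<int>& vol1, const std::vector<int>& vol2,
    int maxDensity, int bins) {
    std::vector<std::vector<int>> hist(bins, std::vector<int>(bins, 0));
    for (std::size_t i = 0; i < vol1.size(); ++i) {
        int x = vol1[i] * (bins - 1) / maxDensity;
        // difference ranges over [-maxDensity, maxDensity]; shift to >= 0
        int diff = vol2[i] - vol1[i] + maxDensity;
        int y = diff * (bins - 1) / (2 * maxDensity);
        ++hist[y][x];
    }
    return hist;
}
```

With identical volumes, all counts fall in the middle row, since every difference is zero.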


Figure 6.2. A view of the 1D editor. The horizontal axis describes the density and the vertical axis describes the resulting opacity. A single one-dimensional primitive is shown above.

Figure 6.3. A view of the 2D editor. The horizontal axis describes the density of the first volume and the vertical axis describes the difference in density between the two energy levels, zero being in the middle of the vertical axis. The figure displays three 2D primitives rendered on top of a histogram.


Figure 6.4. Histograms over the difference between the volume densities plotted against one volume's density. (Left: vertical axis Vol2-Vol1, horizontal axis Vol1. Right: vertical axis Vol1-Vol2, horizontal axis Vol2.)

Figure 6.5. Histograms over the volume densities plotted against each other. (Left: vertical axis Vol1, horizontal axis Vol2. Right: vertical axis Vol2, horizontal axis Vol1.)


Figure 6.6. Histograms over the two energy levels' gradient magnitude plotted against the same volume's density. In the right image, several artifacts appear at the top of one of the arches. The artifacts are believed to come from the downsampling process of the second energy level's volume. (Left: Vol1. Right: Vol2.)

6.3 Rendering

The initiative that started this thesis was the idea that utilizing two-dimensional transfer functions would allow better classification when rendering CT angiography data. Since there were several possible ways to render the dual energy data, there are also several rendering results to present. The results are separated by the method used to render them, the number of slices used, and the volume that is rendered. Not only does each of these characteristics give different renderings, they also differ in rendering speed and allow different amounts of interaction. The figures described as high resolution, typically with more than 1024 slices, have frame rates below 5 frames per second. The renderings with 1024 slices or fewer have higher frame rates, and interaction is possible.

The dual energy renderings presented are from two different volume data sets, where each data set has two volumes. Since both volumes need to reside on the graphics card, as well as the two-dimensional transfer function lookup tables, managing graphics memory is an issue. Because the original volumes do not fit on the graphics card used, a GeForce 8800 GTS, all renderings of dual CT data use downsampled volumes.
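A back-of-the-envelope calculation shows why the memory constraint forces downsampling. Assuming 16-bit voxels (a common CT representation, not stated in the text), a single 512³ volume already occupies 256 MB, so two energy levels plus lookup tables quickly exceed the memory of the GeForce 8800 GTS, which shipped with 320 or 640 MB.

```cpp
#include <cstddef>

// Memory footprint of one volume in bytes, given its dimensions and
// bytes per voxel. Purely illustrative sizes; the thesis volumes are
// downsampled from larger originals.
std::size_t volumeBytes(std::size_t w, std::size_t h, std::size_t slices,
                        std::size_t bytesPerVoxel) {
    return w * h * slices * bytesPerVoxel;
}
```

Two 512³ 16-bit volumes come to 512 MB before any transfer function tables or framebuffers are accounted for.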

The impact of using a higher resolution when rendering can be seen in figure 6.20, where the renderings show smaller vessels. These smaller vessels are classified correctly.

6.3.1 Dual Energy

Dual energy rendering classifies each sample using only the dual energy transfer function. Several renderings below show these results.

Figure 6.7. This figure shows a 512 slice dual energy rendering of a downsampled volume. A gradient calculation is used to shade the volume accordingly. (No gradient magnitude; shading only.)


Figure 6.8. This figure shows a 512 slice dual energy rendering of a downsampled volume. No gradient is used to shade the volume. This rendering is from another data set.
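The per-sample classification used in these dual energy renderings can be sketched as a 2D table lookup: the first volume's density selects the horizontal coordinate and the density difference the vertical one, with zero difference in the middle, matching the editor layout in figure 6.3. Table size, density range, and the RGBA struct below are illustrative assumptions, not the actual shader code.

```cpp
#include <vector>

struct Rgba { float r, g, b, a; };

// Looks up the color and opacity for one sample from a square 2D
// transfer function table. d1 and d2 are the densities of the sample
// in the two energy levels.
Rgba classify(int d1, int d2, const std::vector<std::vector<Rgba>>& lut,
              int maxDensity) {
    int size = static_cast<int>(lut.size());
    int x = d1 * (size - 1) / maxDensity;
    // difference spans [-maxDensity, maxDensity]; shift so zero maps
    // to the middle row of the table
    int y = (d2 - d1 + maxDensity) * (size - 1) / (2 * maxDensity);
    return lut[y][x];
}
```

In the GPU implementation this lookup would be a texture fetch; the sketch only shows the indexing scheme.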
