Illustrative Visualization of Anatomical Structures

LiU-ITN-TEK-A--11/045--SE

Illustrative Visualization of Anatomical Structures

Erik Jonsson

2011-08-19

Department of Science and Technology
Linköping University
SE-601 74 Norrköping, Sweden

LiU-ITN-TEK-A--11/045--SE

Illustrative Visualization of Anatomical Structures

Master's thesis in Media Technology at the Institute of Technology, Linköping University

Erik Jonsson

Examiner: Karljohan Lundin Palmerius

Norrköping, 2011-08-19

Copyright

The publishers will keep this document online on the Internet, or its possible replacement, for a considerable time from the date of publication barring exceptional circumstances. The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/

© Erik Jonsson

Abstract

Illustrative visualization is a term for visualization techniques inspired by traditional technical and medical illustration. These techniques are based on knowledge of human perception and provide effective visual abstraction to make visualizations more understandable. Within volume rendering, such expressive visualizations can be achieved using non-photorealistic rendering that combines different levels of abstraction to convey the most important information to the viewer. In this thesis I look at illustrative techniques and show how they can be used to visualize anatomical structures in medical volume data. The result of the thesis is a prototype of an anatomy education application that uses illustrative techniques to provide a focus+context visualization with feature enhancement, tone shading and labels describing the anatomical structures. This results in an expressive visualization and interactive exploration of the human anatomy.

Acknowledgements

I would like to thank my supervisor Karl-Johan Lundin Palmerius and Lena Tibell at the Department of Science and Technology, Linköping University for their help and assistance throughout the thesis work. Thanks also to Daniel Forsberg at the Department of Biomedical Engineering, Linköping University for providing the human body data set together with the segmented data.

Contents

1 Introduction
   1.1 Motivation
   1.2 Purpose & Goal
   1.3 Limitations
   1.4 Outline

2 Background
   2.1 Anatomy Education
      2.1.1 Dissections
   2.2 Volume Rendering
      2.2.1 Volume Rendering Integral
      2.2.2 Segmented Volume Data
      2.2.3 Ray Casting
      2.2.4 GPU-based Ray Casting
      2.2.5 Transfer Functions
      2.2.6 Local Illumination
   2.3 Illustrative Visualization
      2.3.1 Medical Illustrations
      2.3.2 Visual Abstraction
      2.3.3 Cut-away Views and Ghosted Views
      2.3.4 Visibility Control
      2.3.5 Textual Annotations
   2.4 Voreen

3 Theory
   3.1 The Importance-aware Composition Scheme
   3.2 The Tone Shading Model

4 Implementation
   4.1 Illustrative Ray Casting
      4.1.1 Segmentation Classification
      4.1.2 Tone Shading
      4.1.3 Importance-aware Composition
   4.2 Labeling of Segmented Data
      4.2.1 Segment Description File
      4.2.2 Layout Algorithm
      4.2.3 Rendering
   4.3 Anatomy Application
      4.3.1 Design and User Interface
      4.3.2 Focus+Context Widget
      4.3.3 Labeling Widget

5 Conclusion
   5.1 Results
      5.1.1 Result of the Importance-aware Composition
      5.1.2 Result of the Tone Shading
      5.1.3 Result of the Anatomy Application
      5.1.4 Performance
   5.2 Discussion
      5.2.1 The Illustrative Techniques
      5.2.2 The Anatomy Application
   5.3 Future work
      5.3.1 Additional Features

List of Figures

2.1 The front and back face from the bounding box of the volume
2.2 The ray casting technique via rasterization
2.3 A transfer function represented by a 1D texture
2.4 Cut-away and ghosted illustration of a sphere
2.5 Medical illustrations by Leonardo da Vinci
2.6 The standard workspace in VoreenVE
3.1 Tone shading of a red object with blue/yellow tones
4.1 1D TF textures stored in a 2D segmentation TF texture
4.2 Tone shading parameters
4.3 Importance measurement parameters
4.4 Convex hull: A set of points enclosed by an elastic band
4.5 The placement of labels
4.6 The network of the anatomy application
4.7 Layout of the Labeling widget
5.1 The intensity measurement
5.2 The gradient magnitude, silhouetteness and background measurement
5.3 Focus+context visualization
5.4 Comparison of Blinn-Phong shading and Tone shading
5.5 The Anatomy Application: Selection on Pericardium
5.6 The Anatomy Application: The Digestive and Urinary System

List of Tables

5.1 Performance measurements of front-to-back composition and importance-aware composition with different settings on importance measurements (IM) and early ray termination (ERT)
5.2 Performance measurement of tone shading and Blinn-Phong shading using front-to-back composition

Chapter 1. Introduction

In this Master's thesis an illustrative volume rendering system has been developed at the division for Media and Information Technology, Department of Science and Technology at Linköping University. Illustrative techniques are used in the system to achieve an expressive visualization of anatomical structures. The thesis serves as a fulfillment of a Master of Science degree in Media Technology at Linköping University, Sweden.

1.1 Motivation

The study of medicine and biology has always relied on visualizations to learn about anatomical relationships and structures. In these studies, dissections are often used to support anatomical learning with both visual and tactile experience. However, the use of dissection is declining in schools that offer anatomical education [14]. High schools and universities increasingly use other aids such as textbooks, plastic specimens and simulators to support their anatomy education. The computerized aids offer many new possibilities, where simulators and educational software let the user explore anatomical structures in three dimensions. Often these applications use surface rendering to render pre-modeled 3D models. However, through a technique called volume rendering, the structures can be rendered directly from the medical data. Volume rendering has long been considered much slower than surface rendering, but with newer GPUs it is possible to achieve interactive frame rates. With volume rendering it is possible to acquire renderings that correspond better with the real material. The density values in the medical data sets are directly mapped to RGBA values for the pixels in the rendered images. This allows for fuzzy surfaces with varying opacity, where surface and internal details can be rendered together, for example materials such as soft tissue and blood vessels.

1.2 Purpose & Goal

In this thesis an interactive volume visualization system for illustrative visualization and exploration of medical volume data is proposed. The purpose of the thesis is to develop a volume rendering application for anatomy education, which allows the user to interactively explore anatomical structures in a medical data set. The goal of using illustrative techniques is to achieve an expressive visualization, where complex data is conveyed in an intuitive and understandable way. Otherwise, the information can quickly overwhelm the user, making it harder to take in. The goal of the thesis is to achieve illustrative visualization of anatomical structures and to show its use in an application for anatomy education.

1.3 Limitations

The application in this thesis is based on research material and is developed as a proof of concept, where the potential of the methods is evaluated. This means that user satisfaction is not evaluated and no user requirements are collected; in a full development process, the users' needs and opinions regarding such an application would have been surveyed. The potential users are medical students and other medical experts who would be interested in an application for anatomy education.

1.4 Outline

The structure of the thesis is outlined as follows.

Chapter 1: Introduction
Describes the motivation, purpose, goal and limitations of the thesis.

Chapter 2: Background
Presents anatomy education and how it is performed at schools. Explains the theory and background behind volume rendering and illustrative visualization.

Chapter 3: Theory
Explains the theory behind the illustrative methods used in the thesis.

Chapter 4: Implementation
Explains the implementation of the illustrative methods and how they have been used in the anatomy application.

Chapter 5: Conclusion
Presents the result of the implementation and the performance of the methods. Discusses the results and the future work arising from the thesis.

Chapter 2. Background

2.1 Anatomy Education

In medicine and biology education, the anatomy of animals and humans is studied to learn about anatomical structures, functions and relationships. With this knowledge we can understand how our bodies work and how evolution has shaped us and other living creatures. Textbooks are often used as an aid in anatomy education, where illustrations give a better understanding of the anatomical structures. Another aid is the use of dissections, which add both visual and tactile experience to anatomy education. Dissections can be traced back to the Renaissance [9], when they were performed on human cadavers. In modern times, dissections are often introduced in high school, where animal cadavers are studied. In veterinary and medical school, the studies are done on both animal and human cadavers. However, this has started to change, and dissections are declining as an aid in anatomy education, as described by Winkelmann [14].

2.1.1 Dissections

The role of dissection as an anatomy teaching tool for medical students is described by McLachlan et al. [9] as an opportunity to study real material as opposed to textbooks and other teaching material. Dissections also give an important three-dimensional view of the anatomy, where knowledge from lectures and tutorials can be applied. Moreover, McLachlan et al. [9] mention that it increases self-directed learning and team working. However, the use of dissections also has its shortcomings, where practical problems concern ethical and moral issues, cost-effectiveness and safety. Cadavers might be dealt with improperly, preservation is expensive and there can be potential health risks. Other problems concern the educational value, where the major consideration is whether dissections are the most suitable way for high school students to study anatomy, and also for those medical and veterinary students who will not work with real material in their future careers. These students may only encounter anatomy through medical imaging, and then the knowledge from dissections would be hard to translate to the views produced by imaging, as described by McLachlan et al. [9].

2.2 Volume Rendering

Volume rendering is a technique to visualize three-dimensional data and has grown into a major field in scientific visualization. Volume data can be acquired from many different sources, such as simulations of water, wind, clouds, fog, fire or other natural phenomena. However, the major application area for volume visualization is medical imaging, where the data is acquired from computed tomography (CT) or magnetic resonance imaging (MRI). These techniques use either x-ray beams or magnetic fields to image scanned bodies or objects. With modern graphics hardware, more efficient volume rendering techniques have evolved, which make it possible to achieve volume rendering at interactive frame rates. Graphics processing units (GPUs) allow for hardware-accelerated volume rendering techniques that take advantage of the parallelism of modern graphics hardware. Ray casting techniques in volume rendering benefit especially from this parallelism, since multiple rays can be processed at the same time, achieving real-time volume rendering. In this section I explain the fundamental parts of volume rendering and how it can be produced efficiently on modern graphics hardware. Most of this material follows the book by Engel et al. [5], which presents the fundamentals of real-time volume graphics.

2.2.1 Volume Rendering Integral

The volume rendering integral is the physical description of the volume rendering technique. The integral uses an optical model to find a solution to the light transport, where the flow of light is followed to produce the virtual imagery. In the light transport, light can interact with participating media and be emitted, absorbed and scattered. However, after a number of interactions the light transport becomes very complex and the complete solution becomes a computationally intensive task. Simplified optical models are therefore often used to achieve more efficient volume rendering. The most common models are the following.

• Absorption only (light can only be absorbed)
• Emission only (light can only be emitted)
• Emission-absorption model (light can be absorbed and emitted)
• Single scattering and shadowing (local illumination)
• Multiple scattering (global illumination)

In the classic volume rendering integral (2.1) the emission-absorption model is used. In this model light can be emitted and absorbed, but it cannot be scattered as in more complete illumination models.

I(D) = I_0 e^{-\int_{s_0}^{D} \kappa(t) dt} + \int_{s_0}^{D} q(s) e^{-\int_{s}^{D} \kappa(t) dt} ds    (2.1)

In the volume rendering integral the light flow is followed from the background of the volume at s_0, through the volume, toward the eye position D.

The result is the total outgoing intensity I(D). In equation 2.1 the optical properties of emission and absorption are described by the terms q(s) and \kappa(t), respectively. To simplify the integral, the term

\tau(s_1, s_2) = \int_{s_1}^{s_2} \kappa(t) dt    (2.2)

is defined as the optical depth between the positions s_1 and s_2, and the corresponding transparency is defined as

T(s_1, s_2) = e^{-\tau(s_1, s_2)} = e^{-\int_{s_1}^{s_2} \kappa(t) dt}    (2.3)

With these definitions of optical depth and transparency the following volume rendering integral can be obtained.

I(D) = I_0 T(s_0, D) + \int_{s_0}^{D} q(s) T(s, D) ds    (2.4)

In the first term of equation 2.4 the initial intensity I_0 from the background is attenuated through the volume, where the optical depth \tau controls the transparency T of the medium. For small values of \tau the medium is rather transparent, and for larger values the medium is more opaque. In the second term, the integrated contribution of the emission source term q(s) is attenuated by the participating media along the remaining path through the volume to the viewer. To be able to compute the volume rendering integral 2.1 it needs to be discretized. This is commonly done by partitioning the integration domain into several parts and thus approximating the integral with a Riemann sum. The discrete volume rendering integral can then be written as

I(D) = \sum_{i=0}^{n} c_i \prod_{j=i+1}^{n} T_j    (2.5)

with c_0 = I(s_0), where the integral is approximated from the starting point s_0 to the eye point D using n intervals.
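Before turning to compositing, it helps to spell out the link between equation 2.5 and the per-sample opacities used below. This is a standard derivation sketch, not verbatim from the thesis: over an interval of length \Delta s,

T_j = e^{-\tau(s_{j-1}, s_j)} \approx 1 - \kappa(s_j) \Delta s = 1 - \alpha_j,
\qquad c_i \approx q(s_i) \Delta s

so each sample contributes its color c_i attenuated by the transparencies T_j = 1 - \alpha_j of all samples in front of it; this opacity \alpha_i is exactly the quantity accumulated by the composition schemes that follow.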

2.2.2 Segmented Volume Data

The volume data consists of a 3D scalar field and is represented on a discrete uniform grid, where each cubic volume element is called a voxel. In segmented volume data, each voxel is tagged as belonging to a segment. The segments can be seen as individual objects that have been separated from the volume by a process called volume segmentation. This is done before the actual volume rendering and is used to distinguish individual objects. In medical visualization this can, for example, be used to visualize a specific organ in a human body data set.

2.2.3 Ray Casting

Ray casting is an image-based volume rendering technique, where the volume rendering integral is evaluated along rays through the volume data. Usually the traversal order is front-to-back, where the volume data is traversed from the eye into the volume. For each pixel in the image to be rendered, a ray is cast into the volume and data is sampled at discrete positions along the ray. At each sample point on the ray, an interpolation is done to get the data value at the sample position. Transfer functions are then used to map the scalar data values to optical properties such as color and opacity. In the last step the samples are composited together to get the resulting pixel color.

With the discrete volume rendering integral in equation 2.5 the composition schemes can be obtained, where the illumination I is represented by RGBA components, with the color as C and the opacity as \alpha. The composition equations for front-to-back traversal are given as follows.

C'_i = C'_{i-1} + (1 - \alpha'_{i-1}) C_i
\alpha'_i = \alpha'_{i-1} + (1 - \alpha'_{i-1}) \alpha_i    (2.6)

The new values C'_i and \alpha'_i are calculated from the color C'_{i-1} and opacity \alpha'_{i-1} at the previous location i-1, and the color C_i and opacity \alpha_i at the current location i. With these steps the color and opacity are accumulated along the ray, which results in an RGBA value for the current pixel. In a similar way the back-to-front composition scheme is obtained as follows,

C'_i = (1 - \alpha_i) C'_{i+1} + C_i    (2.7)

where the value of C'_i is calculated from the color C_i and opacity \alpha_i at the current location i, and the color C'_{i+1} from the previous location i+1. In the back-to-front composition the opacity \alpha'_i is not updated as in the front-to-back composition (2.6), because in a back-to-front traversal the color contribution C'_i can be determined without the accumulated opacity. However, a major advantage of the front-to-back traversal is that it can be terminated once the accumulated opacity \alpha'_i \in [0, 1] approaches one. At that point the most opaque material along the ray has been evaluated and further traversal is unnecessary. This technique is called early ray termination; it is an efficient optimization and is easily executed in the ray casting loop. For this reason, front-to-back composition is the most commonly used composition scheme in volume rendering.
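Anticipating the GPU implementation described in section 2.2.4, a minimal GLSL sketch of one front-to-back compositing step (equation 2.6) with the early ray termination test might look as follows. The function name and the 0.95 termination threshold are illustrative assumptions, not taken from the thesis implementation.

// One front-to-back compositing step (equation 2.6). acc.rgb holds C'_{i-1}
// and acc.a holds alpha'_{i-1}; s is the classified sample, where s.a * s.rgb
// forms the opacity-weighted color C_i. Returns true when the ray can stop.
bool compositeFTB(inout vec4 acc, vec4 s) {
    acc.rgb += (1.0 - acc.a) * s.a * s.rgb;  // accumulate color
    acc.a   += (1.0 - acc.a) * s.a;          // accumulate opacity
    return acc.a >= 0.95;                    // early ray termination test
}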

2.2.4 GPU-based Ray Casting

In GPU-based ray casting the entire volume is stored in a 3D texture. The texture is transferred to a fragment shader and the rays are cast through the volume on a per-pixel basis. To calculate the ray direction, different approaches can be used. The most basic solution is to compute the direction from the camera position and the screen-space coordinates, but another way is to use rasterization [8]. In this technique, the positions where a ray enters and exits the volume are computed in a ray setup prior to the ray casting. This yields the front face and the back face of the bounding box of the volume, as can be seen in figure 2.1. The front and back face coordinates can then be used to compute the ray direction as follows,

D(x, y) = T_exit(x, y) - T_entry(x, y)    (2.8)

where the coordinates can be seen as the entry and exit points of the ray traversing the volume.

Figure 2.1: The front and back face from the bounding box of the volume. (a) Front face. (b) Back face.

After the ray setup, the ray casting is performed in a ray casting loop, where equation 2.8 is used to determine when a ray has reached the exit point of the volume. The ray casting technique via rasterization can be seen in figure 2.2, where the rays (r) are traversed from the front faces (f) to the back faces (b).

Figure 2.2: The ray casting technique via rasterization.

In the ray casting loop the rays are cast through the volume, where each ray is stepped through iteratively and the 3D texture is sampled using tri-linear interpolation. The sample is then used to apply the transfer function and obtain the color and opacity of the given sample. Finally, a composition scheme is used to blend the samples together. When the last sample has been reached, the final RGBA value of the pixel has been computed and can be returned from the fragment shader.
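Putting the ray setup and the loop together, a skeletal fragment shader for ray casting via rasterization could look like the following sketch. It reuses compositeFTB from the previous sketch; the uniform names are illustrative assumptions and do not reflect Voreen's actual interface.

#version 330

uniform sampler2D entryPoints;  // rasterized front faces (figure 2.1a)
uniform sampler2D exitPoints;   // rasterized back faces (figure 2.1b)
uniform sampler3D volume_;      // the volume stored as a 3D texture
uniform sampler1D tf;           // 1D transfer function (section 2.2.5)
uniform float stepSize;         // sampling distance in texture space

in vec2 texCoord;
out vec4 fragColor;

void main() {
    vec3 entry = texture(entryPoints, texCoord).xyz;
    vec3 dir   = texture(exitPoints, texCoord).xyz - entry;  // equation 2.8
    float len  = length(dir);
    if (len == 0.0) discard;  // the ray misses the volume entirely
    dir /= len;

    vec4 acc = vec4(0.0);
    for (float t = 0.0; t < len; t += stepSize) {
        float value = texture(volume_, entry + t * dir).r;  // tri-linear sample
        vec4 s = texture(tf, value);                        // classification
        if (compositeFTB(acc, s)) break;                    // composition + ERT
    }
    fragColor = acc;
}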

The expensive stage in this algorithm is the actual ray casting loop, and therefore many optimization techniques have been developed to make it more efficient. Early ray termination, already presented in section 2.2.3, is one such technique; another powerful technique is called empty space skipping. This technique avoids sampling empty space in the volume, which occurs when the visible parts of the volume do not fill up the entire bounding box. If the volume is subdivided into smaller blocks, we can determine for each block whether it is empty or not. We can then use the front and back faces of the non-empty blocks to obtain a much tighter bounding geometry that more closely resembles the visible parts of the volume.

2.2.5 Transfer Functions

The transfer functions are applied in the ray casting process as explained in section 2.2.3. They are used to map optical properties such as absorption and emission to the scalar values in the volume and thereby evaluate the volume rendering integral. In medical volume data the scalar values most commonly represent material density. The transfer functions classify the data and map it to color contributions, where each scalar value between 0 and 255 corresponds to a color and opacity. Transfer functions are commonly applied with the use of lookup tables, which contain discrete samples of the transfer function and are stored in a 1D or 2D texture. An example of a transfer function stored in a 1D texture can be seen in figure 2.3.

Figure 2.3: A transfer function represented by a 1D texture.

2.2.6 Local Illumination

The emission-absorption model presented in section 2.2.1 does not involve local illumination. However, the volume rendering integral in equation 2.1 can be extended to handle local illumination by adding an illumination term to the emission source term q(s):

q_extended(s) = q_emission(s) + q_illumination(s)    (2.9)

where q_emission(s) is identical to the emission source term in the emission-absorption model. The term q_illumination(s) describes the local reflection of light that comes directly from the light source. With this term it is possible to achieve single scattering effects using local illumination models similar to traditional methods for surface lighting. In these, the surface normal is used to calculate the light reflection. To use the local illumination models in volume rendering, the normal is substituted by the normalized gradient vector of the volume. The gradient is computed in the fragment shader using finite differencing schemes. These are based on Taylor expansion and can estimate the gradient using forward, backward or central differences.

The most common approach in volume rendering is central differencing, as seen in equation 2.10, which has a higher-order approximation error term than forward and backward differences and thus gives a better estimate.

f'(x) = (f(x + h) - f(x - h)) / (2h)    (2.10)

With the central difference formula in equation 2.10 the three components of the gradient vector \nabla f(x, y, z) are estimated and can be used in a local illumination model, for example the Blinn-Phong model. This model is the most common shading technique and computes the light reflected by an object as a combination of three terms: ambient, diffuse and specular reflection.

I_BlinnPhong = I_ambient + I_diffuse + I_specular    (2.11)

The ambient term I_ambient is used to compensate for the missing indirect illumination. This is achieved by modeling a constant global ambient light that prevents the shadows from being completely black. With the diffuse and specular terms the reflected incident light is modeled to create matte and shiny surfaces. The diffuse term I_diffuse corresponds to the light that is scattered in all directions, and the specular term I_specular to the light that is scattered around the direction of the perfect reflection. The local illumination model in equation 2.11 can be integrated into the emission-absorption model by adding the scattered light to the emission term as explained in equation 2.9. This means that the illumination of the volume can be determined by adding the local illumination to the emission of the volume.
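As a sketch of how the gradient of equation 2.10 and the Blinn-Phong terms of equation 2.11 come together in a fragment shader: the names volume_ and voxelSize, the ambient constant and the specular exponent are illustrative assumptions, not the thesis's actual values.

uniform sampler3D volume_;
uniform vec3 voxelSize;  // distance h between neighbouring voxels, per axis

// Gradient by central differences (equation 2.10), one axis at a time.
vec3 gradient(vec3 p) {
    float dx = texture(volume_, p + vec3(voxelSize.x, 0.0, 0.0)).r
             - texture(volume_, p - vec3(voxelSize.x, 0.0, 0.0)).r;
    float dy = texture(volume_, p + vec3(0.0, voxelSize.y, 0.0)).r
             - texture(volume_, p - vec3(0.0, voxelSize.y, 0.0)).r;
    float dz = texture(volume_, p + vec3(0.0, 0.0, voxelSize.z)).r
             - texture(volume_, p - vec3(0.0, 0.0, voxelSize.z)).r;
    return vec3(dx, dy, dz) / (2.0 * voxelSize);
}

// Blinn-Phong (equation 2.11) with the normalized gradient as the normal n;
// l is the light vector, v the view vector, kt the sample color.
vec3 blinnPhong(vec3 n, vec3 l, vec3 v, vec3 kt) {
    vec3 h = normalize(l + v);                    // halfway vector
    float diff = max(dot(n, l), 0.0);             // diffuse term
    float spec = pow(max(dot(n, h), 0.0), 32.0);  // specular term
    return 0.2 * kt + diff * kt + spec * vec3(1.0);
}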

2.3 Illustrative Visualization

Volume rendering is often concerned with photorealistic rendering, where the goal is to produce highly realistic images. This is important for many applications, but photorealism can also prohibit the effective depiction of features of interest, as described by Rautek et al. [10]. Important features may not be recognizable among the other visual content. Non-photorealistic rendering (NPR) has therefore emerged to visualize features that cannot be shown using a physically correct light transport. These techniques have been inspired by the artistic styles used in pen-and-ink drawings, hatching, stippling and watercolor paintings. The techniques that are inspired by technical and medical illustrations are called illustrative visualization techniques [10]. These make use of visual abstraction to effectively convey information to the viewer, where the techniques concern what and how to visualize features in order to achieve an expressive visualization.

2.3.1 Medical Illustrations

Scientific illustrations are often used for educational purposes to instruct and explain complex technical information. They can illustrate the mounting of a table, a surgical procedure, the anatomy of an animal or a technical device. Medical illustrations are used extensively in the medical field to represent anatomical structures in a clear and informative way. An illustration of a heart can, for example, give insight into its function and relation to other organs. These illustrations are often drawn using traditional or digital techniques and used in textbooks, advertisements, presentations and many other contexts. The illustrations can also be three-dimensional and be used as material in educational applications, instructional films or medical simulators. In the educational process the illustrations have an impact on learning, where they provide insight by effectively conveying the information.

The main goal of scientific illustrations is to convey information to the viewer, which is done by letting the viewer focus on the important parts instead of the parts that are not interesting. This approach is called visual abstraction and is commonly used in medical illustrations to emphasize important structures without removing them completely from their context. Visual abstraction and its use in medical illustrations are further explained in sections 2.3.2 and 2.3.3.

2.3.2 Visual Abstraction

Visual abstraction is an important component in illustrative visualization and is inspired by the abstraction techniques of traditional illustration. With abstraction the most important information is conveyed to the viewer, where the visual overload is reduced by letting the viewer focus on what is important. This is often done by emphasizing certain structures and suppressing others, to ensure the visibility of the important structures and reduce visual clutter. The different ways to provide abstraction can be divided into low-level and high-level abstraction techniques, as described by Rautek et al. [10], where the low-level techniques deal with how to visualize features of interest and the high-level techniques deal with what to visualize.

The low-level techniques represent the artistic style of the illustration. Some examples of handcrafted techniques are silhouettes (or contours), hatching and stippling. The silhouette technique draws lines along the contours to enhance the shape depiction. Hatching and stippling are handcrafted shading techniques, which draw the illustration using only strokes or small points. In computer graphics many of these techniques have been simulated to provide computerized stylized depiction; most effective is the silhouette technique, which is often used in surface and volume rendering.

The high-level abstraction techniques concern the visibility in the illustration, where the most important information is uncovered to provide visibility of the more important features. Some examples of illustration techniques used in technical and medical illustrations are cut-away views, ghosted views and exploded views. These change the level of abstraction or the spatial arrangement of features to reveal the important information. These techniques are also called focus+context techniques and are referred to by Viola et al. [12] as smart visibility techniques.

2.3.3 Cut-away Views and Ghosted Views

In technical and medical illustrations it is often important to visualize the interior to understand the relation between different parts. However, without the context it is hard to see the spatial relationships and put together a mental picture of how parts are related. Different techniques have therefore been developed to be able to focus on important features while still maintaining the context.

In the area of information visualization these are often called focus+context techniques, and they have become a key component in illustrative visualization [10]. Cut-away views and ghosted views are techniques used in traditional illustrations to apply this sort of abstraction to the data. The techniques use different approaches to reveal the most important parts of an illustration. In cut-away views the occluding parts are simply cut away to make the important parts visible, whereas ghosted views remove the occluding parts by fading them away. Both techniques deal with the occluding parts, which are either removed or faded. This results in an illustration with focus on the important parts, which are not completely removed from their context. An example of a cut-away and a ghosted view can be seen in figure 2.4.

Figure 2.4: Cut-away and ghosted illustration of a sphere. (a) Whole sphere. (b) Cut-away sphere. (c) Ghosted sphere.

Medical illustrations have used cut-away views, ghosted views and other similar techniques for centuries, where they helped the viewers recognize what they were looking at. They were already used by Leonardo da Vinci in the beginning of the 16th century in his drawings of anatomical structures, as shown in figure 2.5. Nowadays these techniques are frequently used, since we still gain most information from unknown data by seeing only small portions of the data, as described by Krüger et al. [7].

Figure 2.5: Medical illustrations by Leonardo da Vinci (Courtesy of 'The Royal Collection © 2005, Her Majesty Queen Elizabeth II').

2.3.4 Visibility Control

High-level visual abstraction, as explained in section 2.3.2, is one of the main components in illustrative visualization. This abstraction technique reveals the most important features by controlling the visibility in illustrations. In order to achieve similar effects in illustrative visualization, the visibility needs to be controlled during volume rendering. Importance-driven volume rendering is introduced in the work by Viola et al. [11], where importance is defined as a visibility priority that determines the visibility of features in the rendering. In their work a high importance gives a high visibility priority in the rendering, which ensures the visibility of the important features. The rendering is based on segmented data, uses two rendering passes and consists of the following steps:

1. Importance values are assigned to the segmented data
2. The volume is traversed to estimate the level of sparseness
3. The final image is rendered with respect to the object importance

Objects occluding more important structures in the rendering are rendered more sparsely to reveal the important structures. With this feature enhancement it is possible to achieve both cut-away and ghosted views. Another approach is context-preserving volume rendering, introduced by Bruckner et al. [2]. This approach modulates the opacity based on volume illumination, where regions that receive little illumination are emphasized, for example the silhouettes of an object. The opacity is modulated using the shading intensity, gradient magnitude, distance to the eye and previously accumulated opacity. This makes it possible to explore the interior of a data set without the need for segmentation. The context-preserving volume rendering can be implemented in a single-pass fragment shader on the GPU, which makes this approach much more efficient than importance-driven volume rendering, which requires multiple rendering passes. However, both of these approaches depend on data parameters and give only indirect control over the focus location. A different approach, called ClearView, was introduced by Krüger et al. [7]: a context-preserving hot spot visualization technique. With this approach a user has direct control over the focus and can interactively explore the data sets. The technique uses a context layer and a focus layer, which are rendered separately and composed into a final image. The contents of the layers are defined by the user together with a focus point, which allows for an interactive focus+context exploration.

2.3.5 Textual Annotations

Textual annotations, labels or legends are techniques often seen in illustrations. They are used to describe the illustration and thus make the identification of different parts easier. This creates more meaningful illustrations, which the viewer can relate to and understand better. It is often used in medical illustrations, where educational material uses textual annotations to explain anatomical structures. In anatomy education this is used to help medical students identify structures and see their relation to other structures. In the work by Bruckner et al. [3] an illustrative volume rendering system called VolumeShop was developed with the intent to provide a system for medical illustrators.

Within this system, textual annotations were implemented to match traditional illustrations and simplify the orientation in the interactive system. In this implementation an anchor point is connected with a line to a label, which is placed according to the following guidelines for all objects in the data.

• Labels shall not overlap
• Lines connecting a label and an anchor point shall not cross
• A label shall be placed as close as possible to its anchor point

Moreover, the annotations are placed along the silhouette of the object, in order not to be occluded by the object. To do this, the algorithm approximates the silhouette and places the labels at the closest distance to their anchor points, but outside of the silhouette. This results in rendered textual annotations that, for example, describe a medical data set with a text label for each anatomical structure.

2.4 Voreen

Voreen [13] is a volume rendering engine developed by the Visualization and Computer Graphics Research Group (VisCG) at the Department of Computer Science at the University of Münster. The software is open source and built in C++. Voreen provides a framework for rapid prototyping of ray-casting-based volume visualizations, where a data-flow network concept is used to provide flexibility and reusability to the system. The network consists of processors, ports and properties. The nodes in the network are called processors, which have ports of different types (e.g. volume, data and geometry) to transfer data between them. The properties are used to control the processors and can be linked between different processor nodes. An example of a Voreen network is shown in figure 2.6.

Figure 2.6: The standard workspace in VoreenVE.

The environment in figure 2.6 is called VoreenVE and is developed together with Voreen.

VoreenVE provides an environment to visualize the network, where processors are visualized as nodes and can be interactively added, removed or connected to other processors. The environment also simplifies changing user-defined parameters with interactive GUI widgets, for example sliders, color pickers and transfer function widgets.

Chapter 3. Theory

This chapter describes the theory behind the illustrative techniques: the composition scheme, importance-aware composition, and the shading model, tone shading. These have been chosen to achieve both high- and low-level abstraction in the illustrative visualization of medical data.

3.1 The Importance-aware Composition Scheme

In order to have visibility control in the visualization, the method importance-aware composition [4] was chosen. This method is closely related to the visibility control techniques presented in section 2.3.4. In the importance-aware composition method, the front-to-back composition equation 2.6 is modified to also measure sample importance, which makes it possible to achieve importance-based visibility control in a single rendering pass.

In the composition equation 2.6 the visibility (transparency) can be obtained as one minus the accumulated opacity, 1 - \alpha'_i, where the visibility is a value in [0, 1]. This means that the visibility of a sample i can be controlled by modulating the accumulated opacity \alpha'_{i-1} of the previous samples. Obviously, a sample would be fully visible if all previous samples were invisible. To modulate the opacity based on importance, we thus need to control the visibility through sample importance and accumulated importance, as explained by Pinto et al. [4]. This is done with a visibility function of the sample importance I_i and the accumulated importance I'_{i-1}, which computes the minimum visibility required for a sample i.

vis(I_i, I'_{i-1}) = 1 - e^{I'_{i-1} - I_i}    (3.1)

Using the visibility function in equation 3.1, the opacity and color can be modulated with a scale (modulation) factor m as follows,

m_i = 1                                           if I_i <= I'_{i-1}
m_i = 1                                           if 1 - \alpha'_{i-1} >= vis(I_i, I'_{i-1})
m_i = (1 - vis(I_i, I'_{i-1})) / \alpha'_{i-1}    otherwise    (3.2)

which is applied for each sample i in the composition step. The scale factor m in equation 3.2 modifies the accumulated opacity and color when the sample importance is greater than the accumulated importance and the visibility given by the accumulated opacity is less than the required minimum visibility.

With this, we can obtain an importance-aware composition scheme that is valid for opaque samples, as described by Pinto et al. [4],

C'_i = m C'_{i-1} + (1 - m \alpha'_{i-1}) C_i
\alpha'_i = m \alpha'_{i-1} + (1 - m \alpha'_{i-1}) \alpha_i
I'_i = max(I'_{i-1}, I_i)    (3.3)

where the accumulated opacity is always one for opaque samples. However, for translucent samples the accumulated importance computation also needs to involve the sample opacity. This can be seen as a measurement of sample relevance, where a sample with zero opacity should not have any influence on the visualization, as explained by Pinto et al. [4]. This leads to equation 3.4, where the accumulated importance I'_i is computed from the previously accumulated importance I'_{i-1}, the current sample importance I_i and the sample opacity \alpha_i.

I'_i = max(I'_{i-1}, ln(\alpha_i + (1 - \alpha_i) e^{I'_{i-1} - I_i}) + I_i)    (3.4)

With this we can finally write the complete importance-aware composition scheme:

\hat{C}'_i = m C'_{i-1} + (1 - m \alpha'_{i-1}) C_i
\hat{\alpha}'_i = m \alpha'_{i-1} + (1 - m \alpha'_{i-1}) \alpha_i
\alpha'_i = \alpha'_{i-1} + (1 - \alpha'_{i-1}) \alpha_i
C'_i = 0   if \hat{\alpha}'_i = 0
C'_i = (\alpha'_i / \hat{\alpha}'_i) \hat{C}'_i   otherwise
I'_i = max(I'_{i-1}, ln(\alpha_i + (1 - \alpha_i) e^{I'_{i-1} - I_i}) + I_i)    (3.5)

In equation 3.5 the opacity \alpha'_i is used to scale up the accumulated color C'_i to ensure a desired composition of a low-opacity/high-importance sample followed by a high-opacity/low-importance sample, as described by Pinto et al. [4]. To use the importance-aware composition scheme, the sample importance needs to be measured for each sample in the composition step. Several importance measurements are presented in the work by Pinto et al. [4], but only some are used in this implementation: the measurements of intensity, gradient and silhouetteness. The implementation of these measurements together with the importance-aware composition scheme is further explained in section 4.1.3.
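Read as code, one step of the scheme in equation 3.5 could be sketched in GLSL as follows, assuming opacity-weighted sample colors. Variable names are illustrative; this is a direct transcription of the equations, not the thesis implementation (which is described in section 4.1.3).

// One step of the importance-aware composition (equations 3.1, 3.2 and 3.5).
// acc.rgb = C'_{i-1}, acc.a = alpha'_{i-1}, accImp = I'_{i-1};
// s = (C_i, alpha_i) with opacity-weighted color, sImp = I_i.
void composeImportanceAware(inout vec4 acc, inout float accImp,
                            vec4 s, float sImp) {
    float m = 1.0;
    if (sImp > accImp) {
        float vis = 1.0 - exp(accImp - sImp);     // required visibility (3.1)
        if (1.0 - acc.a < vis && acc.a > 0.0)
            m = (1.0 - vis) / acc.a;              // modulation factor (3.2)
    }
    vec3  cHat = m * acc.rgb + (1.0 - m * acc.a) * s.rgb;
    float aHat = m * acc.a   + (1.0 - m * acc.a) * s.a;
    float aNew = acc.a + (1.0 - acc.a) * s.a;     // unmodulated opacity
    acc.rgb = (aHat == 0.0) ? vec3(0.0) : (aNew / aHat) * cHat;
    acc.a   = aNew;
    accImp  = max(accImp, log(s.a + (1.0 - s.a) * exp(accImp - sImp)) + sImp);
}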

3.2 The Tone Shading Model

Tone shading, presented by Gooch et al. [6], is a non-photorealistic (NPR) shading technique based on technical illustrations, where surfaces are often shaded in both hue and luminance. From observation it is known that we perceive warm tones, such as red, orange or yellow, as closer to us, and cool tones like blue, purple or green as farther away. Shadows are, for example, perceived in a bluish tone closer to the horizon, due to the scattering effect. It is therefore possible to improve the depth perception by interpolating from a warm tone to a cool tone in the shading, and in this way a clearer picture of shapes and structures can be obtained, as described by Gooch et al. [6].

Tone shading can be used either as a variant of, or an extension to, an existing local illumination model. Most commonly it is used to modify the diffuse term in the Blinn-Phong model described in section 2.2.6. The diffuse term of the Blinn-Phong model determines the intensity of diffusely reflected light with (n · l), where n is the surface normal and l is the light vector. This gives the full range of angles [-1, 1] between the vectors, but to avoid surfaces being lit from behind, the model uses max((n · l), 0), which restricts the range to [0, 1]. By doing this, however, the shape information in the dark regions is hidden, which makes the actual shape of the object harder to perceive. Unlike this, the tone shading model uses the full range [-1, 1] to interpolate from a cool color to a warm color, as shown in the following equation,

I = ((1 + (n · l)) / 2) k_a + (1 - (1 + (n · l)) / 2) k_b    (3.6)

where l is the light vector and n is the normalized gradient of the volume. In equation 3.6 the terms k_a and k_b are derived from a linear blend between the colors k_cool and k_warm and the color of the transfer function k_t, as shown in the following equations,

k_a = k_cool + \alpha k_t    (3.7)
k_b = k_warm + \beta k_t    (3.8)

where the factors \alpha and \beta are parameters between 0 and 1 that control the contribution of the sample color k_t. With these equations the tone shading model can be evaluated in a fragment shader, where the shading is applied to the samples on a per-fragment basis. An example of tone shading is shown in figure 3.1.

Figure 3.1: Tone shading of a red object with blue/yellow tones.
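A minimal GLSL sketch of equations 3.6 to 3.8, with the warm and cool colors and the two blending factors as user parameters; all names are illustrative assumptions.

uniform vec3  kCool;   // cool tone, e.g. blue
uniform vec3  kWarm;   // warm tone, e.g. yellow
uniform float aFactor; // contribution of the sample color to k_a (alpha in 3.7)
uniform float bFactor; // contribution of the sample color to k_b (beta in 3.8)

// Tone shading (equation 3.6); n is the normalized gradient, l the light
// vector and kt the sample color from the transfer function.
vec3 toneShade(vec3 n, vec3 l, vec3 kt) {
    vec3 ka = kCool + aFactor * kt;     // equation 3.7
    vec3 kb = kWarm + bFactor * kt;     // equation 3.8
    float t = (1.0 + dot(n, l)) * 0.5;  // full range [-1, 1] mapped to [0, 1]
    return t * ka + (1.0 - t) * kb;     // equation 3.6
}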

Chapter 4. Implementation

In this thesis, an application for anatomy education has been developed using the Voreen volume-rendering engine and its visualization environment VoreenVE [13]. In Voreen, a module with a new set of processors has been developed to create an illustrative visualization of anatomical structures. The application uses the Qt framework for the graphical user interface, and the graphics library OpenGL and the shading language GLSL for the visualization and volume rendering implementation. The following sections describe the implementation of the chosen illustrative methods.

4.1 Illustrative Ray Casting

This section describes the ray casting process and how it was implemented to achieve an illustrative visualization. The ray caster was constructed to allow both non-segmented and pre-segmented data. The ray caster processor renders the segmented volume data by receiving entry and exit points as well as volume data and segmented volume data. The ray casting loop is performed in a single-pass fragment shader implemented in the OpenGL Shading Language (GLSL). The entry and exit points are used to utilize rasterization in the GPU ray casting process, as explained in section 2.2.4. In the fragment shader, each ray is cast from the eye towards the volume, where the ray direction is computed from the entry and exit points of the volume. The ray casting loop then iteratively steps through the volume along the ray, samples the 3D volume texture using tri-linear interpolation, applies the transfer function and shading, and performs composition to achieve the final rendering.

To achieve illustrative visualization of the segmented data, a couple of additional methods have been implemented: a segmentation transfer function, tone shading and importance-aware composition. These have been implemented as described in the following sections.

4.1.1 Segmentation Classification

In the classification step the data is mapped to the optical properties in the volume rendering integral. This is often done using a transfer function, which maps the samples to color and opacity. In this implementation the ray casting is done on the GPU and the transfer functions are stored as 1D textures. The textures are passed to the fragment shader, where RGBA samples are found through texture lookups, as described in section 2.2.5.

However, with segmented volume data this becomes somewhat different. In this case each segment can have its own transfer function, so that a different color and opacity can be applied to each segment. The implementation of the segmentation classification is based on the SegmentationRaycaster processor in Voreen. In this processor the volume and the segmented volume are sent as 3D textures to the shader, whereas the multiple transfer functions are sent as a single 2D texture in which all the segments' 1D transfer functions are stored, as shown in figure 4.1.

Figure 4.1: 1D TF textures stored in a 2D segmentation TF texture.

In figure 4.1, each row in the 2D texture corresponds to a segment ID. In order to find the correct 1D transfer function for a segment, the implementation determines the segment ID from the 3D texture of the segmentation volume. This is then used to look up the 1D TF texture stored in the 2D segmentation TF texture. The transfer function of a segment can thus be found and used in the rendering. In the implementation of the importance-aware ray caster, this is used to achieve rendering of segmented volume data with a different transfer function for each segment.

4.1.2 Tone Shading

Tone shading is implemented as a shading technique in the fragment shader to achieve illustrative shading in the visualization. The technique uses a warm and a cool color to increase the perception of depth and shapes, as described in section 3.2. It is implemented similarly to the other shading techniques in Voreen, such as Blinn-Phong shading: the diffuse term of the Blinn-Phong model is replaced with the tone shading model in equation 3.6. This way, the ambient and specular terms are kept as part of the model. To make the ray casting processor more flexible, the tone shading technique was added to a drop-down box containing the existing shading techniques. The parameters of the technique were also made updatable, where the blend factors and the warm and cool colors can be defined in VoreenVE through sliders and color pickers. The parameter setup for the tone shading technique can be seen in figure 4.2.

Figure 4.2: Tone shading parameters.
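Returning to the segmentation classification of section 4.1.1, the per-segment lookup of figure 4.1 could be sketched in GLSL as below. The assumption that segment IDs are stored as 8-bit normalized values, and the row addressing, are illustrative guesses rather than the actual Voreen code.

uniform sampler3D segmentation;    // segment ID per voxel
uniform sampler2D segmentationTF;  // stacked 1D TFs, one row per segment (figure 4.1)
uniform float numSegments;

// Look up the transfer function of the segment that the voxel at p belongs to.
vec4 classifySegmented(vec3 p, float intensity) {
    float id  = texture(segmentation, p).r * 255.0;        // assumes 8-bit IDs
    float row = (id + 0.5) / numSegments;                  // center of the segment's row
    return texture(segmentationTF, vec2(intensity, row));  // that segment's TF
}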

4.1.3 Importance-aware Composition

The importance-aware composition method was implemented to provide visibility control in the illustrative visualization. This was achieved by replacing the traditional composition method in the GPU ray casting loop with one based on sample importance, as described in section 3.1. The composition method was implemented in the fragment shader as shown in algorithm 1.

Algorithm 1 Importance-aware Composition
  Set the accumulated importance (I'_i) to zero
  for all samples do
    Compute the sample importance (I_i)
    Set the scale factor (m)
    Perform the composition scheme and scale the result
    Accumulate the importance (I'_i)
  end for

The sample importance can be computed with several measurements, as proposed by Pinto et al. [4]. These are calculated for each sample during rendering and are used to emphasize features in different ways based on their importance. However, these measurements do not depend on segmented data, which this thesis also covers. For this reason, a measurement for segmented data has been implemented alongside measurements proposed by Pinto et al. [4], such as the intensity, gradient and silhouetteness measurements, and measurements for suppressing structures and achieving focus+context visualizations.

The importance measurement is implemented in the fragment shader, where the individual measurements are combined in a weighted sum to compute the final importance value. This is done together with a global weight that scales the weighted sum, I_i = W_global · (W_0 I_0 + ... + W_n I_n), so that a global weight of zero results in zero importance and thus a traditional composition. In the implementation, every weight is passed to the shader and can be changed in VoreenVE with sliders from 0 to 1, as seen in figure 4.3.

Figure 4.3: Importance measurement parameters.

The following importance measurements are the ones that have been implemented.

Intensity: The intensity measurement was implemented as described by Pinto et al. [4]. In this measurement, visibility is ensured for samples with high intensity,

I_S = W_intensity · intensity    (4.1)

where W_intensity is the corresponding weight and can be used to control the sample importance. This was implemented using the intensity of the samples.

Gradient Magnitude: The gradient magnitude measurement ensures the visibility of the strongest boundaries with

I_S = W_gradient · |gradient|    (4.2)

where the magnitude of the gradient is obtained from the sample in the implementation.

Silhouetteness: The silhouetteness measurement was implemented to emphasize the silhouettes in the rendering. It measures how much a sample belongs to a silhouette using the normalized view vector V, the normalized gradient N and the gradient magnitude m_G, as described by Pinto et al. [4].

sil = m_G^p · smoothstep(s_1, s_2, 1.0 - abs(V · N))    (4.3)

I_S = W_sil · sil    (4.4)

By changing the influence of the gradient magnitude (p) or the slope of the smoothstep function (s_1, s_2), the look of the silhouettes is controlled. However, this only ensures the visibility of the silhouettes, so to make them more distinguishable the sample color C_i is scaled with a factor ρ, exp(-ρ · sil), which darkens the silhouettes; the silhouettes become darker as the silhouetteness importance increases. This was implemented in the composition step using the gradient obtained from the sample and the normalized view vector, obtained as the difference between the view position and the sample position.

Segment Visibility: Another measurement was implemented to be able to control the visibility of segments in segmented volume data. First, a 1D texture is generated in the ray caster processor that stores the visibility value [0, 1] of each segment. This is passed to the shader, where the segment ID of the sample is used to look up the corresponding visibility value. The sample importance is then measured from the visibility value and a visibility weight, which is used to control the importance of the visible segments.
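A sketch of how the individual measurements might be combined into the weighted sum described above; the uniform names, and the restriction to three of the measurements, are illustrative assumptions.

uniform float wGlobal;               // global weight W_global
uniform float wIntensity, wGradient, wSil;
uniform float silP, silS1, silS2;    // exponent and smoothstep slope (equation 4.3)

// Combined sample importance: weighted sum of intensity (4.1),
// gradient magnitude (4.2) and silhouetteness (4.4), scaled globally.
float sampleImportance(float intensity, vec3 grad, vec3 viewDir) {
    float mG  = length(grad);
    vec3  N   = (mG > 0.0) ? grad / mG : vec3(0.0);
    float sil = pow(mG, silP)
              * smoothstep(silS1, silS2, 1.0 - abs(dot(viewDir, N)));
    return wGlobal * (wIntensity * intensity
                    + wGradient  * mG
                    + wSil       * sil);
}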

Background: The intensity, gradient and silhouetteness measurements can be used to emphasize important structures. However, to get a much clearer picture of the important structures, the unimportant structures can be de-emphasized and suppressed. As described by Pinto et al. [4], this can be accomplished by considering the background as a layer of opaque samples that have an importance assigned to them together with an adjustable weight. This is implemented by considering the last sample in the ray traversal as the background, where the sample color is set to the background color. When a background sample is reached, the sample importance is set to the background weight. By adjusting the weight, the background is made visible through the volume, as it suppresses the less important structures.

Focus+Context: The combined weighted sum is scaled with a global weight to control all the weights with the same variable, as described previously. However, we can also scale the weights using a per-ray global weight, where the weights are scaled differently for each ray. This can be used to achieve focus+context visualizations, as described by Pinto et al. [4]. In the implementation of the widget, a focus is defined by a circular area, which can be interactively dragged and resized in the view. This is implemented as a global weight in the composition step, where the weight is obtained from the current ray (pixel) P_coord and the position P_focus and radius r_focus of the focus area, as shown in algorithm 2 (see also the sketch after the algorithm).

Algorithm 2 Focus+context weight
  P = P_focus · D
  r = min(D_x, D_y) · r_focus
  l = length(P_coord - P)
  Compute the weight W_focus = smoothstep(r - 1, r + 1, l)

In algorithm 2 the viewport dimensions D are used together with the min() function to make it possible to have a tall and narrow viewport as well as a wide and low one. By using -1 and +1 to adjust the step width of the smoothstep() function, the circle is anti-aliased independently of the viewport resolution. It is also possible to achieve a softer circle by widening the step.
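As a GLSL reading of algorithm 2, the per-ray focus weight could be sketched as follows; the uniform names are illustrative.

uniform vec2  viewport;     // viewport dimensions D
uniform vec2  focusPos;     // focus center P_focus in [0, 1]^2
uniform float focusRadius;  // radius r_focus as a fraction of the viewport

// 0 inside the focus circle, 1 outside, with a ~2 pixel anti-aliased edge.
float focusWeight() {
    vec2  p = focusPos * viewport;
    float r = min(viewport.x, viewport.y) * focusRadius;
    float l = length(gl_FragCoord.xy - p);
    return smoothstep(r - 1.0, r + 1.0, l);
}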

4.2 Labeling of Segmented Data

Textual annotations (or labels) are implemented to add descriptions to the visualization of segmented data, as described in section 2.3.5. This allows a user to identify the different segments in the visualization more easily. The labeling implementation is based on the Labeling processor in Voreen, which can be used to add illustrative labels to visualizations of segmented data. This processor has, however, been extended and modified to be more interactive and to include an information panel (or labeling widget), where the segments are presented in a hierarchical list together with an information view. The implementation of this panel is further explained in section 4.3.3. The labeling process consists of the following steps: read the segment description file, generate the labels, position the labels and render the labels to the screen. These steps are described in the following sections.

4.2.1 Segment Description File

The labeling processor in Voreen uses an XML file to describe the segments with information about id and caption. This was extended with group, name and info nodes to allow a tree hierarchy of labels and label groups, as shown in the following example file.

...
<group>
  <name>Top Level</name>
  <info>The top level item</info>
  <group>
    <name>Node</name>
    <info>The node item</info>
    <label>
      <id>0</id>
      <caption>Leaf</caption>
      <info>The leaf item</info>
    </label>
  </group>
</group>

Listing 4.1: Example of a segment description file

Here the Top Level item is a parent of the Node item, and the Node item is a parent of the Leaf item. This results in the tree structure Top Level → Node → Leaf. In this way a tree hierarchy can be built, which is used in the labeling widget presented in section 4.3.3.

4.2.2 Layout Algorithm

The Labeling processor uses an IDRaycaster that renders an ID map used to position the labels. The IDRaycaster receives the entry and exit points of the volume, the segmented volume data and the first-hit points of the volume ray casting result. The resulting ID map is a color-coded map, where the segment IDs are stored in the alpha channel and the first-hit positions in the three color channels. The Labeling processor uses this map to place the labels at the correct positions, since the ID map tells which segments are currently visible. To place the labels the processor applies a distance transform (or distance map) to the ID map, which stores for each pixel the closest distance to the segment border. This is used to place the anchor points according to the size of the segment and the distance from the particular pixel to the segment border. The labels are then placed according to the guidelines in section 2.3.5: near their anchor points but outside the objects' borders, without overlapping another label or intersecting another connection line. This is done by approximating the silhouette of the object with a convex hull algorithm, which computes the convex shape of a set of points. The convex hull can be thought of as an elastic band that is stretched open and released to fit the boundary of the object, as seen in figure 4.4. The convex hull is calculated in the Labeling processor, where the silhouette points of the ID map are used to give an approximation of the silhouette.
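The thesis does not spell out which convex hull algorithm is used, so as an illustration, a standard monotone chain computation over the extracted silhouette points could look like the following sketch.

#include <algorithm>
#include <vector>

struct Point { float x, y; };

// Cross product of (a - o) and (b - o); positive for a counter-clockwise turn.
static float cross(const Point& o, const Point& a, const Point& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Andrew's monotone chain; returns the hull in counter-clockwise order.
static std::vector<Point> convexHull(std::vector<Point> pts) {
    std::sort(pts.begin(), pts.end(), [](const Point& a, const Point& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    });
    if (pts.size() < 3) return pts;
    std::vector<Point> hull(2 * pts.size());
    size_t k = 0;
    for (size_t i = 0; i < pts.size(); ++i) {               // lower hull
        while (k >= 2 && cross(hull[k-2], hull[k-1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    for (size_t i = pts.size() - 1, t = k + 1; i-- > 0; ) { // upper hull
        while (k >= t && cross(hull[k-2], hull[k-1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    hull.resize(k - 1);  // the last point repeats the first
    return hull;
}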

Figure 4.4: Convex hull: a set of points enclosed by an elastic band

The convex hull is then used in the placement of the labels, where the labels are placed outside of the hull at the closest distance to their anchor points. Finally the label positions are corrected for line intersections and label overlaps, after which the labels can be rendered to the screen. The placement of labels is illustrated in figure 4.5.

Figure 4.5: The placement of labels

4.2.3 Rendering

In the rendering step the labels, anchor points and connection lines are rendered. This is done in two rendering passes. The first pass renders halos around the anchor points and connection lines, using thicker lines and points colored with a specified halo color. The second pass renders the anchor points and connection lines with normal thickness and colors them with the same color as the label text. After that, the pass renders quads at the label positions and maps the font textures onto them. A font texture is pre-generated for each label in the XML file: the caption of the label is rendered to a bitmap using the font rendering library FreeType [1] and bound to a texture. To also be able to mark certain labels, a selection color was added to the labeling processor. It is used in the font rendering to highlight selected labels with a different font color than the specified label color. How the label selection is implemented is further explained in section 4.3.3.
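A rough sketch of the two-pass halo rendering described above for the connection lines, using legacy OpenGL calls of the kind in use at the time; the colors, line widths and data layout are illustrative, not the actual Voreen code.

#include <GL/gl.h>

struct Line { float x0, y0, x1, y1; };

static void drawLines(const Line* lines, int n) {
    glBegin(GL_LINES);
    for (int i = 0; i < n; ++i) {
        glVertex2f(lines[i].x0, lines[i].y0);
        glVertex2f(lines[i].x1, lines[i].y1);
    }
    glEnd();
}

static void renderConnectionLines(const Line* lines, int n) {
    // Pass 1: halos -- thicker lines drawn first, in the halo color.
    glColor4f(1.0f, 1.0f, 1.0f, 1.0f);  // e.g. white halo
    glLineWidth(4.0f);
    drawLines(lines, n);

    // Pass 2: the lines themselves, normal thickness, label text color.
    glColor4f(0.0f, 0.0f, 0.0f, 1.0f);  // e.g. black, matching the labels
    glLineWidth(2.0f);
    drawLines(lines, n);
}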

4.3 Anatomy Application

Dissections, plastic models and textbooks are often used as aids in anatomy education, as explained in section 2.1. However, computerized technology offers new possibilities for how the teaching can be done, and for this purpose a prototype of an anatomy application has been implemented. Textbooks with medical illustrations provide abstraction, which is crucial for effective illustrations; the prototype is for this reason based on illustrative visualization techniques to achieve abstraction in the visualization. The prototype uses a pre-segmented human body data set, which has been provided by the Center for Medical Image Science and Visualization (CMIV) in Linköping.

4.3.1 Design and User Interface

In the design of the anatomy application the illustrative ray casting and labeling components are used together, with the Compositor processor blending the renderings. The network of the system, taken from VoreenVE, can be seen in figure 4.6. The user interface is designed to let a user interactively explore the anatomical structures, control the illustrative rendering and view information about the segmented data. This is achieved by the focus+context and labeling widgets described in the following sections.

4.3.2 Focus+Context Widget

The focus+context widget is implemented to interactively control the position and radius of the focus area. It is implemented as a geometry renderer in Voreen and is used together with a geometry processor, which makes it possible to have multiple geometry rendering processors, as seen in figure 4.6. The widget renders a draggable and resizable 2D circle on the view plane. The circle is rendered and made clickable through the methods render() and renderPicking(), derived from the GeometryRendererBase class in Voreen. The render() method renders the outer border of the circle using its color, position and radius, with the lines anti-aliased using GL_LINE_SMOOTH. The renderPicking() method renders the pickable regions to an IDManager object, which color-codes the pickable regions and stores them in a render target. The method performs a rendering similar to the render() method, but renders the inner part of the circle instead of the outer border lines. This allows the user to pick the circle by clicking anywhere within it. The circle can then be dragged or resized by checking isHit() on the IDManager object to see whether the circle has been picked. The circle is dragged by saving its initial position (p_x, p_y) and the mouse coordinates (x_0, y_0) when the circle is hit; the circle position P is then updated according to the new mouse coordinates (x, y) until the user releases the circle. To resize the circle, the initial radius r is saved instead, and the circle radius R is updated according to the change in the y direction: a positive change enlarges the circle and vice versa. The drag and resize computation can be seen in algorithm 3.

Algorithm 3 Drag and resize circle with mouse
  if isClicked then
    Δx = (x − x_0) / D_x
    Δy = (y_0 − y) / D_y
    if dragCircle then
      // Update circle position P
      P_x = p_x + Δx
      P_y = p_y + Δy
    else if resizeCircle then
      // Update circle radius R
      offset = (Δx, Δy)
      resizeDir = (0, 1)
      factor = 1 / (1 + (offset · resizeDir))
      R = r · factor
    end if
  end if
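A compact C++ sketch of the update in algorithm 3; the event plumbing and all names are illustrative, and in Voreen this would sit in the widget's mouse event handler, with the initial state saved when the circle is picked.

struct Vec2 { float x, y; };

struct CircleWidget {
    Vec2  pos;       // current circle position P (normalized)
    float radius;    // current circle radius R (normalized)

    Vec2  p0;        // position saved when the circle was picked
    float r0;        // radius saved when the circle was picked
    Vec2  mouse0;    // mouse coordinates saved when the circle was picked
    bool  dragging = false, resizing = false;

    void onMouseMove(float x, float y, float Dx, float Dy) {
        float dx = (x - mouse0.x) / Dx;
        float dy = (mouse0.y - y) / Dy;   // screen y grows downward
        if (dragging) {
            pos = { p0.x + dx, p0.y + dy };
        } else if (resizing) {
            // offset . resizeDir with resizeDir = (0, 1) is just dy;
            // a real implementation would clamp dy to keep the factor positive.
            radius = r0 * (1.0f / (1.0f + dy));
        }
    }
};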

Figure 4.6: The network of the anatomy application

4.3.3 Labeling Widget

A labeling widget was created to be able to interactively change which organs are important to see in the visualization. It is added as a processor widget to the labeling processor. The widget is created as an abstraction layer in the Labeling class and is implemented in VoreenQt, the Qt GUI library of Voreen. In the Qt implementation a view is set up to hold a text label, a text area, a tree view and three buttons, as seen in figure 4.7. The tree view is implemented to organize the anatomical structures by the biological system they belong to and to group structures that belong together; for example, a group was created for the heart, and its different parts were included as children of that group.

Figure 4.7: Layout of the Labeling widget

The biological systems are groups of organs that work together to achieve certain tasks, for example the circulatory, digestive and respiratory systems; the heart, for instance, belongs to the circulatory system. These systems were chosen for the implementation since they are often studied in human anatomy. In the tree view a tree hierarchy is created, which is filled with the labels and label groups from the segment description file described in listing 4.1. When traversing the XML file the labels and label groups are added to the tree view according to their parents: if an item has a parent, the parent is found in the tree and the item is added as a child of it. This way the hierarchy of the XML file is translated to the tree view. For each item a checkbox is also added. The text area and text label are used to show information about the labels. They are updated when a label or label group is selected in the tree view, which is done by linking the selection in the tree view to the text area and text label. The selection is also linked to the labels in the rendering, where a selected label is highlighted with a chosen color.
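As an illustration, the tree view could be filled from the parsed hierarchy roughly as in the following Qt sketch; the Node type and the function names are assumptions, not the actual Voreen code.

#include <QTreeWidget>
#include <QTreeWidgetItem>
#include <QString>
#include <QStringList>
#include <vector>

struct Node {
    QString name;                 // group name or label caption
    QString info;                 // description shown in the text area
    std::vector<Node> children;   // empty for leaf labels
};

static QTreeWidgetItem* buildItem(const Node& node) {
    QTreeWidgetItem* item = new QTreeWidgetItem(QStringList(node.name));
    item->setCheckState(0, Qt::Checked);        // the visibility checkbox
    item->setData(0, Qt::UserRole, node.info);  // info for the text area
    for (const Node& child : node.children)
        item->addChild(buildItem(child));       // preserve the XML hierarchy
    return item;
}

static void fillTree(QTreeWidget* tree, const std::vector<Node>& roots) {
    for (const Node& root : roots)
        tree->addTopLevelItem(buildItem(root));
}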

The rendered labels were also made clickable through an IDManager object, similar to the focus+context widget in section 4.3.2. A render target for picking is used to render the quads of the labels as color-coded regions. This is then used to determine whether a label is hit, using the mouse coordinates and the isHit() method of the IDManager. A picked label is highlighted in the view and set as selected in the tree view. In order to change which organ or system should be visible, a process was implemented to toggle the visibility of segments (organs). Using the segment visibility measurement described in section 4.1.3, the visibility is changed by updating the 1D texture. The visibility can be changed in several ways: by selecting the checkbox of one of the items in the tree view, by pressing the button Show all segments, or by pressing one of the buttons Show/Hide or Hide others when an item is selected. When an action is performed on a label group or label it is propagated to the ray caster processor using property linkage. This linkage is done between two processors in VoreenVE and allows a property to be updated by another property of the same type. For example, when choosing an action on a selected label, the action and segment ID are set in the labeling processor, which automatically sets the same properties in the ray caster processor. The ray caster processor then updates the visibility texture according to the action and segment ID. If the action is Hide, for example, the corresponding segment ID is found in the visibility texture and set to 0, which means that the segment has no importance in the importance-aware composition and will not be rendered.
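A minimal sketch of the texture update, assuming one luminance float per segment and an already created 1D texture object; the names and data layout are illustrative.

#include <GL/gl.h>
#include <vector>

static void setSegmentVisibility(GLuint visibilityTex,
                                 std::vector<float>& visibility,
                                 int segmentId, bool visible) {
    // Setting 0 removes the segment from the importance-aware composition.
    visibility[segmentId] = visible ? 1.0f : 0.0f;

    // Re-upload the visibility values so the shader sees the change.
    glBindTexture(GL_TEXTURE_1D, visibilityTex);
    glTexSubImage1D(GL_TEXTURE_1D, 0, 0,
                    static_cast<GLsizei>(visibility.size()),
                    GL_LUMINANCE, GL_FLOAT, visibility.data());
}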

Chapter 5

Conclusion

5.1 Results

In this section the results of the implementation are presented. The importance-aware composition result is presented first, followed by the tone shading result. Finally, the anatomy application result is presented, which uses the two previous components together with the labeling component. A human hand data set is used as test data for both the importance-aware composition and the tone shading component, and a pre-segmented human body data set is used for the anatomy application.

5.1.1 Result of the Importance-aware Composition

The importance-aware composition is implemented with different sample importance measurements, as explained in section 4.1.3. The measurements for intensity, gradient magnitude, silhouetteness, background and focus+context are implemented together with a measurement for segmented data, which is used in the anatomy application. To combine multiple importance measurements a weighted sum with a global weight is used, where every measurement has its own weight to control its contribution to the visualization. The results of the different sample importance measurements are shown in the following figures. The intensity measurement is shown in figure 5.1, where the importance weight is changed from no weight (5.1a) to a moderate (5.1b) and a high weight (5.1c). The gradient magnitude, silhouetteness and background measurements are seen in figure 5.2. In figure 5.2a the gradient magnitude is combined with the intensity measurement in a weighted sum. This increases the importance of the boundaries, which makes the shape more distinct than using the intensity measurement alone. In figure 5.2b the silhouetteness and background measurements are also added to the weighted sum. The silhouetteness parameters s1 = 0.4, s2 = 3.0 and p = 0.5 are used to create the emphasized contours in the image, and the background is suppressed by using a non-zero background weight. In figure 5.3 the combined weighted sum of the result in 5.2b is scaled with a per-ray global weight. This produces a focus+context visualization, where the focus is defined as a circular area. The step width [radius/2, radius + 1] is used to achieve the soft circle area.

Figure 5.1: The intensity measurement: (a) no intensity weight, (b) moderate intensity weight, (c) high intensity weight

Figure 5.2: The gradient magnitude, silhouetteness and background measurements: (a) rendered with intensity and gradient measurements, (b) rendered as in (a) but combined with the silhouetteness and background measurements

Figure 5.3: Focus+context visualization

5.1.2 Result of the Tone Shading

The result of the tone shading implementation is seen in figure 5.4, where tone shading (5.4b) is compared with traditional Blinn-Phong shading (5.4a). In the figure the tone shading is set up using an orange warm tone with factor α = 0.8 and a blue cool tone with factor β = 0.3.

Figure 5.4: Comparison of Blinn-Phong shading and tone shading: (a) Blinn-Phong shading, (b) tone shading

5.1.3 Result of the Anatomy Application

The implementation of the anatomy application resulted in an educational tool for anatomy education. An illustrative visualization is achieved by using the importance-aware composition, tone shading and labeling implementations, which together increase the expressiveness of the volume rendering. Within the application a user can explore a human body through a focus+context technique and view information about selected organs. The application interface consists of a 3D canvas view and an information panel. In the canvas view the user controls the volume visualization by rotating, zooming and panning the view. Using the focus+context widget the user can also control the size and position of the circular focus area. The information panel holds the list of organ structures available in the human body data set and presents them in a hierarchical list based on their biological systems. Through the panel a user can hide and show specific organs or biological systems. In figure 5.5 the pericardium is selected, which contains the heart and belongs to the circulatory system. The anatomical structures that do not belong to the circulatory, digestive or respiratory systems, for example the skin, muscle and skeleton structures, have been hidden to give a clear view of the pericardium. The information panel is shown to the left in figure 5.5, where information about the pericardium is presented and its place in the tree list view is shown. The canvas view shows the visualization of the human anatomy, where the pericardium label is highlighted to show the current selection. Another view of the application is shown in figure 5.6, where the digestive and urinary systems are visualized. In this view the user has hidden the other systems in the data set to make only the current ones visible.

References
