Efficient Isosurface Visualization of Implicit Functions

André Eriksson 880722-8617 andre4@kth.se

SA104X Degree Project in Engineering Physics, First Level
School of Computer Science and Communication

KTH Royal Institute of Technology

Supervisor: Lars Kjelldahl

Abstract

Isosurface visualization is an important problem in computer graphics, with applications in fields such as medical imaging, fluid dynamics and chemistry. A very natural application for isosurface visualization is found in scientific visualization software, where efficient visualization of implicit functions is of great importance. This paper proposes an efficient algorithm for visualizing three-dimensional isosurfaces of implicit functions in real time, with topology guarantees, subject to constraints, and optimally with regard to some error measure. An implementation of the algorithm in the open source interactive geometry software GeoGebra is discussed and analyzed.

Contents

1 Introduction
2 Background
  2.1 Definitions
  2.2 History
  2.3 Related work
  2.4 Difficulties
3 Method
  3.1 Marching Cubes
    3.1.1 Introduction
    3.1.2 Details
    3.1.3 Limitations
    3.1.4 Extensions
  3.2 ROAM
    3.2.1 Introduction
    3.2.2 Details
    3.2.3 Extensions
  3.3 The Algorithm
    3.3.1 Overview
    3.3.2 Polygonization phase
    3.3.3 Subdivision phase
    3.3.4 Considerations
4 Implementation
  4.1 Polygonization Phase
  4.2 Subdivision Phase
  4.3 Future Improvements
5 Results
6 Future Work
7 Conclusion
  7.1 Acknowledgements
Bibliography

1 Introduction

Visualizing isosurfaces is an important problem in computer graphics, and has been a subject of active study since the 1970s. Isosurfaces are especially well suited for visualizing large volume datasets on graphics hardware optimized for polygon meshes. For this reason, isosurface visualization has found applications in such diverse areas as medical imaging, geophysics, and meteorology. Another area where the need for isosurface visualization arises is in three-dimensional graphing of mathematical functions. In information graphics software, being able to visualize three-dimensional implicit functions (of the form $f(\mathbf{r}) = 0$) in an accurate and effective way is critical, as it opens the door to visualization of several other objects, such as scalar fields and intersections or projections of higher-dimensional objects.

Since its original publication in 1987, the Marching Cubes algorithm [2] has become a standard method for isosurface visualization. This paper proposes a novel extension of the Marching Cubes algorithm in combination with a subdivision algorithm inspired by the ROAM algorithm [1], commonly used for efficient real-time visualization of detailed three-dimensional terrain.

This approach has several benefits. Firstly, it allows for faster computation; fewer triangles are required to achieve an error within specific bounds, and fewer computations are required for a given triangle count. Secondly, it provides a tool for evaluating the quality of the mesh produced, and allows for dynamic optimization of the produced mesh subject to user-defined constraints. Thirdly, the algorithm is well suited for real-time rendering, and for visualization of animated or otherwise dynamic surfaces.

This paper was written in conjunction with an implementation of the aforementioned algorithm in the open source interactive geometry software GeoGebra. For this reason, there is a heavy bias towards visualizing three-dimensional implicit functions. However, most of the discussion in this paper is applicable to the visualization of volume datasets as well, and the proposed algorithm can easily be extended to cover such applications.

2 Background

Figure 2.1: Four nested isosurfaces of the function $f(x, y, z) = x^2 + y^2 + z^2 + 2\cos 2x$, corresponding to $f(x, y, z) = 5, 10, 15, 20$.

2.1 Definitions

Before delving into the subject, we need the following definitions:

Definition 1 The level set of a differentiable function $f: \mathbb{R}^n \to \mathbb{R}$ corresponding to a real value $c$ is the set of points $\{(x_1, \ldots, x_n) \in \mathbb{R}^n : f(x_1, \ldots, x_n) = c\}$.

Definition 2 An isosurface is a level set in three dimensions.

A few different isosurfaces of one function, corresponding to different values of $c$, are shown in figure 2.1. In the remainder of the text, only isosurfaces corresponding to $c = 0$ will be discussed (as $f(\mathbf{r}) = c$ is equivalent to $g(\mathbf{r}) = f(\mathbf{r}) - c = 0$).

Definition 3 A polygon is a closed plane figure bounded by three or more line segments that terminate in pairs at the same number of vertices, and do not intersect other than at their vertices.

Definition 4 A polygon mesh is a set of polygons sharing edges such that no more than two polygons share any given edge.

Definition 5 A triangle mesh is a polygon mesh in which all the polygons are triangles.

In the remainder of the text, mesh is taken to mean triangle mesh, as the proposed algorithm is intended for triangle meshes. Furthermore, we denote the set of all triangle meshes by $T$.

Definition 6 A polygon mesh $M$ is closed if every polygon $P$ in $M$ shares each of its edges with another polygon.

Definition 7 A polygon mesh $M$ is a sub-mesh of $N$ if $N$ contains each vertex in $M$, and two vertices $v_i$ and $v_j$ in the set of vertices in $N$ are joined by an edge iff the corresponding vertices $v_i'$ and $v_j'$ are joined by an edge in $M$. We denote this by $M \subset N$.

Definition 8 A mesh $M$ is a subdivision of $N$ if $M$ can be obtained from $N$ by a finite number of operations in which one triangle is split into two triangles.

Definition 9 A triangulation is a sequence of polygon meshes $M_1, M_2, \ldots, M_n$ such that each $M_i$ is a subdivision of each $M_j$ when $i > j$.

Definition 10 The triangulation $T'$ is a sub-triangulation of $T$ if $T'$ can be obtained from $T$ by removing a finite number of meshes from its beginning, and a finite number of triangles from each mesh.

Definition 11 A cuberille is a dissection of space into equal cubes by three orthogonal sets of parallel equidistant planes.

Definition 12 A mesh is topologically correct with regard to a cuberille grid and an underlying function $f(\mathbf{r})$ if its topology coincides with that of $f$, in the sense that it is homeomorphic to $F^{-1}(c)$, where $F$ is a trilinear function that is equal to $f$ at the cuberille points.
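For concreteness (this is the standard trilinear interpolation formula, not part of the original text), the interpolant $F$ over a unit cube with corner values $f_{ijk}$, $i, j, k \in \{0, 1\}$, is

\[
F(x, y, z) = \sum_{i,j,k \in \{0,1\}} f_{ijk}\, x^i (1-x)^{1-i}\, y^j (1-y)^{1-j}\, z^k (1-z)^{1-k},
\qquad (x, y, z) \in [0, 1]^3.
\]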

By convention, the set of Boolean values (often $\{\text{true}, \text{false}\}$ or $\{0, 1\}$) is referred to as $\mathbb{B}$ in the remainder of this text.

2.2 History

Generating isosurface meshes first became a subject of study in the 1970s, as innovations in medical imaging led to a need for new data visualization methods. Early approaches to the problem include those by Keppel [4], in which contours of the isosurface were generated and subsequently connected with triangles, and Herman et al. [10], in which a mesh was constructed by sampling a cuberille grid. Other approaches, such as those proposed by Farrell [8] and Höhne [9], used ray tracing methods to visualize the data.

The Marching Cubes method, proposed in 1987 by Lorensen and Cline [2], was the first method to enable fast, accurate and (for the most part) artifact-free construction of isosurface meshes, and has since become the go-to method for isosurface construction. It uses a cuberille-based approach, dividing space into a set of cubes, within each of which a set of triangles is constructed according to the function values at its corners. By taking into account symmetries of the cube, the initial 256 configurations are reduced to 15 special cases.

While the Marching Cubes method produces much better results than earlier methods, it still produces topologically incorrect meshes in certain cases. It leaves room for ambiguity, which can lead to 'cracks' in the resulting mesh as well as incorrect topologies in volumes containing fine details. Initially pointed out by Dürst in 1988 [11], and subsequently tackled using various approaches ([12], [14], [13]), this problem was resolved by Chernyaev in 1995 [5]. In his paper, he categorizes the possible trilinear topologies of the cube, and proposes an extension (dubbed Marching Cubes 33) in which the 15 original configurations are increased to 33.

Another issue with the original Marching Cubes algorithm is that it fails to accurately approximate isosurfaces of objects with sharp or thin features. Extensions that address these shortcomings include Extended Marching Cubes, proposed in 2001 by Kobbelt et al. [15], and Dual Marching Cubes, proposed in 2005 by Schaefer and Warren [16].

Several alternative approaches to isosurface visualization have also been proposed. One of the more popular is Dual Contouring, originally proposed under the alternate name of Surface Nets in 1998 by Gibson [7]. Also a cuberille-based method, it takes an approach slightly different to Marching Cubes. Instead of considering polygonizations within the cubes, it constructs a mesh by using the dual cuberille graph (that is, the graph of midpoints of adjacent cubes). This mesh is then projected onto the surface. Dual Contouring usually leads to better results than Marching Cubes and several extensions have been proposed, such as [17], [20] and [18].

This paper uses an approach that combines a modified version of the Marching Cubes algorithm with a modified version of an algorithm used for real-time visualization, called ROAM. Originally proposed in 1997 by Duchaineau et al. [1], the latter has been extensively cited and extended.

2.3 Related work

Several methods with goals similar to those stated in this paper (optimizing an isosurface using trees w.r.t. an error measure) have been proposed. One of the first and most influential was proposed in 1996 by Shekhar et al. [19]. The authors of that paper described an approach in which the mesh is iteratively constructed within a cuberille grid, starting from a seed point. The cubes which the surface passes through are then inserted into an octree, from which a mesh is constructed in a bottom-up manner by replacing octree elements that have small error values with their parents. While this method leads to good results, it is mainly useful as a precomputation step, extracting a mesh from volume data for later use. Because of the top-down approach, it is poorly suited for real-time refinement.

To the author's knowledge, no similar two-phase method using a ROAM-like algorithm for refinement has been proposed. One method, proposed in 2002 by Gregorski et al. [3], does make the connection between isosurfaces and ROAM, but does so with different stated goals, and in a different manner than this paper. It uses ROAM to efficiently visualize isosurfaces of datasets in a view-dependent manner, and uses a three-dimensional, tetrahedron-based analogue of the original ROAM algorithm to represent volume data. While this approach is useful for its stated purpose, it can not be directly applied to the problem of real-time refinement of unknown data.

2.4 Difficulties

Accurately visualizing isosurfaces is a complicated endeavour with several intrinsic challenges, both numerical and complexity-oriented in nature. One of the most difficult obstacles one is faced with when trying to visualize isosurfaces is that of locating them in the first place.

While this might appear trivial at first, and several methods do exist, there is no 'best' method to do so. When it comes to volume datasets, which are discrete, isosurfaces do not really 'exist' in the same way as for continuous sets, since usually very few or none of the sampling points belong to a specific level set. Instead, one typically defines a continuous function that interpolates the discrete dataset, and looks for isosurfaces of this function instead. For such well-behaved functions, there can be no details finer than the resolution of the sampling grid, and so one can readily find all isosurfaces by simply comparing adjacent points. When it comes to isosurfaces of general functions, however, one can not make any such assumptions. Indeed, there is no algorithm that is guaranteed to always find an isosurface in the general case (piecewise continuous functions), even in a given bounding volume, as this is equivalent to finding all zeroes of such a function in an uncountable set. In addition, while this might be possible in special cases (such as when an explicit expression for the level set can be found), the problem is often intractable.

However, while the task of finding every isosurface is very difficult, there are useful heuristics that can help. The most common approach is to sample the underlying function at points in a cuberille grid, looking at the sign of the function at different points in order to determine where a surface might be located (if a continuous function does not have the same sign on the boundary of some connected set S, it has zeroes within S). Of course, one can also use any standard root-finding algorithm from numerical analysis to find a point on the isosurface.

Once a surface has been found, the next challenge is to construct a triangle mesh approximating it. If the cuberille approach is taken, constructing such a mesh is a fairly easy task: one can either use the calculated points to divide the space into a set of cuboids in which triangles are appropriately placed (as is done in Marching Cubes), or link up either the cuberille points themselves or points that have been suitably displaced (as is done in Dual Contouring).

In addition to the aforementioned difficulties, the task of plotting three-dimensional implicit functions comes with its own set of challenges and considerations. For one, it can not be expected that the user would only ever want to plot well-behaved functions. For example, users might attempt to plot functions that are discontinuous or too detailed to be realistically approximated. While one can not expect to be able to plot every possible implicit function, it is of course desirable for the algorithm to be able to plot as many functions as possible. Singular functions pose the biggest challenge: a function as simple as $f(x, y, z) = x^{-1}$ can be very difficult to accurately plot. Indeed, most cuberille-based approaches will incorrectly indicate an isosurface at $x = 0$ for $x^{-1} = 0$. In order to avoid plotting isosurfaces at singularities, one can probe for roots within each candidate cube using some numerical root-finding algorithm, and ignore cubes where no roots are found. Highly detailed functions (such as $f(x, y, z) = \sin(1/x)$) are generally impossible to accurately plot. However, as long as the sampling resolution is sufficiently high, one can achieve fairly visually appealing plots using cuberille-based methods.

3 Method

The method proposed consists of two phases. In the first phase, a modified version of the Marching Cubes algorithm is used to construct a crude but topologically correct mesh approximation to the isosurface. This approximation is then passed on to the second part of the algorithm, in which an algorithm similar to ROAM is used to dynamically refine the mesh in real time. This chapter describes the Marching Cubes and ROAM algorithms as well as the extensions and modifications made to them, and gives an outline of the proposed algorithm.

3.1 Marching Cubes

3.1.1 Introduction

The Marching Cubes algorithm works as follows. Suppose that we are trying to find an isosurface of the continuous function $f: \mathbb{R}^3 \to \mathbb{R}$ within the volume $V \subset \mathbb{R}^3$, and that we have a function $g: \mathbb{R}^3 \to \mathbb{B}$ such that $g(\mathbf{r}_0) = \neg g(\mathbf{r}_1)$ if $\mathbf{r}_0$ and $\mathbf{r}_1$ lie on different sides of the isosurface (for the problem of constructing the isosurface at value $w$ of $f$ we can take $g(\mathbf{r}) = \operatorname{sgn}(f(\mathbf{r}) - w)$).

The Marching Cubes algorithm starts by dividing the viewing volume $V$ into a set $C$ of $n^3$ identical cubes (or, more generally, cuboids). Next, it iterates through these cubes and, for each cube $c_i \in C$, evaluates $g(\mathbf{r})$ at the corners of $c_i$, obtaining a vector $t \in \mathbb{B}^8$. Based on the value of $t$, a set of triangles $S_i$ is constructed within $c_i$. As each cube in $C$ is evaluated, a mesh $S = \cup S_i$ approximating the isosurface is constructed.
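As an illustration of the corner-sampling step, here is a minimal Java sketch (all names are hypothetical, and the 256-entry triangle table is omitted):

```java
// Sketch of the corner evaluation described above. The 8-bit index t keys
// into the polygonization table; g(r) is taken to be f(r) > 0.
interface ScalarField { double value(double x, double y, double z); }

final class MarchingCubesSketch {
    // Builds the configuration index for one cube with minimum corner
    // (x, y, z) and side length h.
    static int cubeIndex(ScalarField f, double x, double y, double z, double h) {
        int[][] corners = {
            {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},
            {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}
        };
        int index = 0;
        for (int i = 0; i < 8; i++) {
            double v = f.value(x + h * corners[i][0],
                               y + h * corners[i][1],
                               z + h * corners[i][2]);
            if (v > 0) index |= 1 << i; // set bit i when the corner is 'inside'
        }
        return index; // 0..255, one of the 256 cube polygonizations
    }
}
```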

3.1.2 Details

The vector $t \in \mathbb{B}^8$ has $2^8 = 256$ possible values corresponding to 256 different cube polygonizations, a fairly cumbersome number. Fortunately, the number of unique polygonizations can be reduced to 15 by considering all elements of each orbit of the cube under rotations, reflections, and sign inversions to be identical. These 15 unique polygonizations are shown in figure 3.1.

It should also be noted that $g(\mathbf{r})$ need not be evaluated eight times for each cube. With some ingenuity, it is simple to construct an algorithm that evaluates only one vertex per cube and takes up $O(n^2)$ space.
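One common way to realize this, sketched below under the assumption of a regular grid (reusing the hypothetical ScalarField interface from the previous sketch), is to buffer one slab of samples at a time:

```java
// Sketch: evaluate every grid point exactly once by holding only two
// (n+1) x (n+1) slabs of samples in memory at a time.
final class SlabBuffer {
    static double[][] sampleSlab(ScalarField f, int n, double h, double z) {
        double[][] slab = new double[n + 1][n + 1];
        for (int i = 0; i <= n; i++)
            for (int j = 0; j <= n; j++)
                slab[i][j] = f.value(i * h, j * h, z);
        return slab;
    }

    static void march(ScalarField f, int n, double h) {
        double[][] below = sampleSlab(f, n, h, 0.0);
        for (int k = 0; k < n; k++) {
            double[][] above = sampleSlab(f, n, h, (k + 1) * h);
            // The corners of cube (i, j, k) are below[i..i+1][j..j+1] and
            // above[i..i+1][j..j+1]; polygonize each cube here. Only
            // O(n^2) samples are held in memory at any time.
            below = above; // the upper slab becomes the lower slab
        }
    }
}
```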

Figure 3.1: The 15 unique polygonizations used in Marching Cubes. Image courtesy of Wikipedia.

3.1.3 Limitations

Naturally, an algorithm as simple as Marching Cubes is bound to have several inherent limitations. As with most algorithms for implicit functions, the end result depends heavily on the sampling resolution, which is inversely proportional to the cube side length. If the sampling resolution is set too low, the resulting mesh will appear too rough. Worse still, surfaces with volumes smaller than the cuboid volume will typically be missed entirely. However, both space and time complexity grow with the cube of the resolution, so given time and space constraints, using a sufficiently fine resolution might not be feasible. When visualizing volume data, such as in medical applications, there is an underlying maximum resolution, which translates into a minimum element size. For mathematical functions, this limitation is not present. Indeed, for any given sampling resolution, even an isosurface as simple as $\alpha(|\mathbf{r}| + \beta) = 0$ can be made undetectable by Marching Cubes simply by selecting appropriate values of $\alpha$ and $\beta$.

3.1.4 Extensions

Octrees

One natural extension of the basic Marching Cubes algorithm involves octrees. An octree is a spatial tree data structure obtained by recursively subdividing a cubic volume into eight octants.

The basic idea is to start by evaluating only the root node of the octree (a cube containing all of $V$), and to proceed recursively down the octree, evaluating only cubes that contain the given isosurface, until a suitable level of detail is achieved. This can lead to dramatic gains in efficiency (for example, using octrees $g(\mathbf{r})$ will be evaluated $O(n^2)$ times for a plane $\mathbf{n} \cdot (\mathbf{r} - \mathbf{r}_0) = 0$, as opposed to $O(n^3)$ times for the regular Marching Cubes algorithm). However, these gains are not universal, and for some functions the octree approach will require the same number of evaluations as the regular approach (although such functions are not typically of interest when it comes to visualization). A comparison of the regular and octree approaches is shown in figure 3.2.

Figure 3.2: Comparison of the octree and regular approaches. Both images show the same mesh approximating $x^2 + y^2 + z^2 - 1 = 0$. The left image was obtained using the octree approach while the right was obtained using the regular approach. 240 cubes were evaluated to produce the left image, while 512 were evaluated to produce the right image. These cubes are shown in blue.

As in the regular approach, vertices need not be evaluated more than once. Firstly, each cube will inherit one corner from its parent, and will share the rest of its corners with its siblings, so plenty of duplicate evaluations can be avoided if some care is taken in designing the recursive algorithm. Secondly, each child will also share all but one of its corners with the children of its parent’s siblings; with some pointer arithmetic plenty of unnecessary evaluations can be avoided.

It should be noted that the recursive algorithm as outlined above will lead to issues for most functions, as the sampling resolution of the first few levels of the octree will be very low. Since surfaces that are not approximately linear within a cube might not be detected through sampling of its corners, there is a high likelihood that surfaces will remain undetected in the first few levels of the octree. In practice, a minimum level should be set above which all nodes in the octree are evaluated.
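A sketch of such a recursion with a forced minimum level might look as follows (hypothetical names, reusing ScalarField from above; not the thesis implementation):

```java
// Recursive octree refinement: a cube is split only if the surface appears
// to pass through it, except that all nodes shallower than minLevel are
// split unconditionally to reduce the risk of missing surfaces early on.
final class OctreeSketch {
    static void refine(ScalarField f, double x, double y, double z,
                       double size, int level, int minLevel, int maxLevel) {
        boolean mustSplit = level < minLevel;
        if (!mustSplit && !surfaceMayPass(f, x, y, z, size)) return; // prune
        if (level == maxLevel) {
            // ... polygonize this cube as in plain Marching Cubes ...
            return;
        }
        double h = size / 2;
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                for (int k = 0; k < 2; k++)
                    refine(f, x + i * h, y + j * h, z + k * h,
                           h, level + 1, minLevel, maxLevel);
    }

    // Heuristic: the eight corner values do not all share the same sign.
    static boolean surfaceMayPass(ScalarField f, double x, double y, double z, double s) {
        boolean anyPos = false, anyNeg = false;
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                for (int k = 0; k < 2; k++) {
                    double v = f.value(x + i * s, y + j * s, z + k * s);
                    if (v > 0) anyPos = true; else anyNeg = true;
                }
        return anyPos && anyNeg;
    }
}
```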

Projection

As noted above, the basic Marching Cubes approach leads to a crude and visually unappealing mesh with sharp and regular features. Therefore, once a sufficient level of detail has been achieved in the mesh, it is desirable to project its vertices onto the isosurface. Essentially, this is equivalent to the problem of finding the zero (if there is more than one, a sufficient level of detail has not yet been reached) of $f(\mathbf{r})$ along the cube edge on which the vertex is placed. This is a simple problem that can be solved using any standard numerical analysis algorithm, such as Newton's method. However, in most cases, linear interpolation of the cube corners will give acceptable results and will lead to significantly shorter run times.
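A minimal sketch of this projection along a cube edge, reusing the hypothetical ScalarField interface (with steps = 0 it reduces to plain linear interpolation of the corner values):

```java
// Finds the zero of f along the segment a-b, assuming f has opposite
// signs at the two endpoints. A few bisection steps tighten the bracket;
// the final estimate is a linear interpolation within it.
final class EdgeProjection {
    static double[] project(ScalarField f, double[] a, double[] b, int steps) {
        double fa = f.value(a[0], a[1], a[2]);
        double fb = f.value(b[0], b[1], b[2]);
        double lo = 0.0, hi = 1.0;
        for (int i = 0; i < steps; i++) {
            double m = 0.5 * (lo + hi);
            double fm = f.value(lerp(a[0], b[0], m), lerp(a[1], b[1], m), lerp(a[2], b[2], m));
            if ((fm > 0) == (fa > 0)) { lo = m; fa = fm; } else { hi = m; fb = fm; }
        }
        // Linear interpolation between the bracketing values.
        double t = (fa == fb) ? 0.5 * (lo + hi) : lo + (hi - lo) * fa / (fa - fb);
        return new double[] { lerp(a[0], b[0], t), lerp(a[1], b[1], t), lerp(a[2], b[2], t) };
    }

    static double lerp(double u, double v, double t) { return u + t * (v - u); }
}
```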

Topology guarantees

It is highly desirable to be able to offer some sort of guarantee that the resulting mesh will be topologically correct. The Marching Cubes 33 algorithm [5] guarantees that the resulting mesh is topologically correct given that the underlying function is trilinear within each cube. While this assumption is sound for many applications (such as volume data and distance fields), it does not generally hold for arbitrary continuous functions, unless the cuberille resolution is known to be fine enough that the function is approximately trilinear within each cube.

In order for our Marching Cubes implementation to be able to guarantee a topologically correct isosurface approximation, we must thus equip it with some mechanism to assess whether the cuberille resolution is fine enough. One possible method of achieving this is to attempt to count the number of times the isosurface passes through each edge of the cube. Since trilinear functions will appear to be linear over each edge of the cube and thus can have at most one zero along the edge, we can expect the trilinear approximation to be valid if there is at most one zero along each edge in the cuberille grid.

Assessing the number of zeroes along each edge must be done using numerical methods. While finding every root of a function in a given interval is generally intractable, functions for which one can not expect to quickly find every zero along a given interval probably can not be properly visualized anyway. Such an approach can be favorably combined with the octree refinement described above. If all intersections are found and stored for each of the 12 edges of the root node of the octree, there is no need to evaluate any more points lying on any of its edges for any of its descendants. When the root node is split, an additional 15 edges must be checked for roots.

In fact, such an approach essentially eliminates the need to evaluate the corners of each cube in the usual fashion. Thus, this approach should perhaps be seen as a replacement for, rather than an extension of, the usual Marching Cubes algorithm. It should be noted that this approach also provides the projections of the mesh's vertices without any separate projection step. Furthermore, this method specifically avoids the problem of detecting 'false' isosurfaces of functions such as $f(x, y, z) = x^{-1}$ described above.

3.2 ROAM

Figure 3.3: Example of the ROAM algorithm. Top: terrain rendered with and without wireframe. Bottom: the corresponding mesh, with visible portions highlighted in light grey. Image courtesy of the original ROAM paper [1].

3.2.1 Introduction

ROAM, or Real-time Optimally Adapting Meshes, proposed in 1997 by Duchaineau et al. [1], is an algorithm commonly used for efficient real-time visualization of complex terrain datasets. It uses two priority queues to drive a continuous chain of operations (called split and merge operations) on a triangle mesh (corresponding to a precomputed bintree). These operations can favourably be performed in real time, with a fixed number of operations between frames. This allows for efficient, real-time, view-dependent optimization of the terrain. Furthermore, the algorithm is flexible enough to be generalized to different kinds of data, and to be adapted to applications very different from terrain rendering.

3.2.2 Details

Assume that we are given a triangle bintree in which each triangle is associated with a precomputed error value, two priority queues $p_{split}$ and $p_{merge}$, and some level-of-detail criterion.

The ROAM algorithm starts by inserting the root of the bintree into $p_{split}$. Next, the main loop of the algorithm is iterated over, performing one split or merge operation (detailed below) with each iteration. Every $n$th iteration, a frame is drawn. The mesh eventually converges to one that is optimal with regard to the error measure (usually some view-dependent modification of the precomputed error value) within the confines of the triangle bintree structure. As the view is changed, the process begins anew, with a series of split and merge operations once again leading to an optimal mesh.

Split operations, which are performed when the level of detail is too low, are performed by taking the element of highest priority in $p_{split}$, replacing it in the mesh by its two children in the triangle bintree, and recursively performing split operations on adjacent triangles to maintain topological consistency. Any element that is split is moved from $p_{split}$ to $p_{merge}$. A split operation where several triangles are split is shown in figure 3.4.

Figure 3.4: A split operation performed on the triangle T, showing how a series of recursive splits results. Image courtesy of the original ROAM paper [1].

Merge operations are performed in a similar way, replacing the element of highest priority in $p_{merge}$, and its siblings, with their parent in the triangle bintree, an operation that is performed recursively in order to maintain topological consistency.

$p_{split}$ and $p_{merge}$ simply assign priority to triangles based on their error value (although they are sorted in opposite ways). When the view is changed, if the error value is view-dependent, both queues will usually have to be resorted. In addition, view changes might also lead to an overlap in error values between the two queues. Any such overlapping elements must be decimated before optimality can be ensured.

3.2.3 Extensions

Continuous and unknown data

As the original ROAM algorithm was designed to efficiently visualize terrain data, a few adaptations must be made to it in order for it to accept continuous and unknown data.

Much like the volume datasets discussed previously, the ROAM algorithm was created to efficiently visualize area datasets, in which points have been sampled at regular intervals in a grid. One advantage of such data, compared to the continuous and unknown data one is faced with in graphing applications, is that it can be exhaustively analyzed before it is presented to the user. As a result, the ROAM algorithm includes a precomputation phase. In this phase, a triangle bintree is constructed on top of the area dataset, with an error value associated with each triangle. In essence, the 'guaranteed optimality' of the algorithm hinges on this precomputation phase, as it enables the guarantee that the error values are monotonic (more on this later). If error values are not monotonic, triangles that should be split or merged might end up at the wrong place in the priority queues, leading to a non-optimal mesh. If one can always tell where the error measure is not monotonic, it is possible to achieve optimality anyway. However, this is not possible in graphing applications.

In applications such as this, where nothing is known about the data beforehand, a different approach is needed. With continuous data, one can find no 'maximum' level of detail to aim for (at least not without making some very limiting assumptions about the functions involved). Therefore, starting at a certain depth in the bintree and proceeding upwards will not guarantee optimality (although it might take the mesh much closer to this goal). In graphing applications, nothing is known about the data until the user enters a function, and thus any precomputation has to be performed after the function has been entered. This inevitably leads to extended execution times.

While it is always desirable to come as close to optimality as possible, an optimality guarantee is not essential to graphing applications. For this reason, the precomputation phase is skipped entirely, and the error measure is chosen to be one that will ‘almost always’ be monotonic (more on this in the subsection on error measures).

Dynamic data

Another issue with the original ROAM algorithm is that it is poorly adapted to dynamic data (i.e. data that varies over time). In interactive graphing applications, it is highly desirable to allow the user to modify the functions being graphed and view the results in real time. In order to allow for this, a few modifications must be made.

One naive approach is, every time the underlying function changes, to recompute the vertices and errors of every element in the tree, reinsert all elements into their respective queues, and let the mesh re-optimize itself. While simple, this approach has a major drawback: speed. By recomputing every element, one will inevitably recompute parts of the tree that will never again be displayed. Furthermore, the tree is bound to grow in size each time the function is changed, so the execution time of this update phase will grow with every change.

These hurdles can be overcome by associating a 'version' variable with each element, corresponding to the function for which it was last updated. In order for the ROAM algorithm to work, the elements that are currently visible in the mesh must be stored. Since only the visible elements need to be up to date when the function is changed, it is reasonable to update only these elements and defer updates of other elements to future split/merge operations or culling changes.
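A minimal sketch of this version-stamping scheme (hypothetical names, not the GeoGebra code):

```java
// Each element remembers the function version it was last computed for,
// and is refreshed lazily the next time it is touched.
final class MeshElement {
    double[] vertex;   // projected vertex position
    double error;      // cached error value
    int version = -1;  // function version this element was computed for
}

final class LazyUpdater {
    int currentVersion = 0;

    void onFunctionChanged(Iterable<MeshElement> visible) {
        currentVersion++;         // implicitly invalidates every element
        for (MeshElement e : visible)
            touch(e);             // eagerly refresh only the visible ones
    }

    void touch(MeshElement e) {
        if (e.version != currentVersion) {
            // ... recompute e.vertex and e.error for the new function ...
            e.version = currentVersion;
        }
    }
}
```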

Fast approximate priority queues

Since the two priority queues that drive split and merge operations make up the backbone of ROAM, it is essential that they perform well. For most applications, heap-based priority queues are used. These have a complexity of $O(\log n)$ for both insertions and deletions. Since these two operations are performed very frequently in ROAM, and the queues will often have to hold a very large number of elements, it is natural to look for faster priority queue implementations. While such implementations do exist for discrete or integer data, one quickly runs out of options when looking for fast priority queues supporting continuous priorities.

However, it is possible to achieve a complexity of nearly $O(1)$ for all three operations used in ROAM (insert, delete, peek) if the priority queues are allowed to occasionally give incorrect results. One way to achieve this is to use a bucket-based approximate priority queue. Such a queue consists of a fixed number $N$ (preferably a large number) of 'buckets', each consisting of a linked list of elements. The queue also contains an array of pointers to the first element of each bucket. When an element is inserted into the queue, its priority is evaluated through some (monotonic) function $f: \mathbb{R} \to \{1, 2, \ldots, N\}$, and the element is inserted at the front of the corresponding bucket. A reference to the highest non-empty bucket is kept and updated for peek operations. Delete operations simply remove the specific element from the linked list corresponding to its bucket. However, in order for the reference for peek operations to not be invalidated when the final element of the highest non-empty bucket is removed, one has to redirect it to the next highest non-empty bucket in such cases. This leads to a complexity that is not always $O(1)$ for delete operations (a trivial upper bound is $O(N)$). However, such operations can be performed very quickly (typically through a small number of pointer evaluations). Hence, this loss in performance is essentially negligible.
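A compact sketch of such a queue (simplified: it omits the hash-map-based arbitrary deletion discussed in chapter 4):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Bucket-based approximate priority queue: priorities are mapped to a
// fixed number of buckets, and peek/pop operate on the highest non-empty
// bucket.
final class BucketQueue<T> {
    private final Deque<T>[] buckets;
    private int highest = -1; // index of the highest non-empty bucket

    @SuppressWarnings("unchecked")
    BucketQueue(int n) {
        buckets = new Deque[n];
        for (int i = 0; i < n; i++) buckets[i] = new ArrayDeque<>();
    }

    // The caller maps the continuous priority to a bucket index.
    void insert(T element, int bucket) {
        buckets[bucket].addFirst(element);
        if (bucket > highest) highest = bucket;
    }

    T peek() { return highest < 0 ? null : buckets[highest].peekFirst(); }

    T pop() {
        if (highest < 0) return null;
        T e = buckets[highest].pollFirst();
        // Redirect the reference to the next highest non-empty bucket.
        while (highest >= 0 && buckets[highest].isEmpty()) highest--;
        return e;
    }
}
```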

Lack of a natural tree-like structure

The original ROAM paper assumes that the mesh being handled can be represented as a triangle bintree. In addition, the algorithm can easily be extended to allow for meshes that can be represented by any tree-like structure. However, not all meshes can be represented by such a structure. Since we can't make any assumptions about the representability of the output of Marching Cubes, we must assume that it can not be represented by any tree-like structure.

Fortunately, it is possible to adapt the ROAM algorithm to handle arbitrary meshes, although this necessitates certain limitations. Indeed, if the dual graph of the mesh (where each triangle corresponds to a node, and the nodes corresponding to adjacent triangles are joined by edges) is stored, one can perform split operations which maintain the mesh topology by recursively following this graph, splitting triangles wherever needed, and updating the graph.

When there is a tree-like structure, merge operations can be performed in a natural way by replacing a set of siblings with their parent (and recursively handling any cracks in the mesh). When such a structure does not exist, merge operations become rather complicated. In a sense, the simplest way to perform merge operations is to perform split operations in reverse. If one records which triangles are modified, and how, by each split operation, reversing such an operation is a simple task. However, one then has to take into consideration the order in which merges are performed, as a split operation must, in general, be undone before any other split involving the same triangles is undone. Alternatively, one can define a kind of 'multi-rooted' tree by regarding each triangle in the input mesh as the root of its own tree. This allows merge operations to be performed in any order.

As has been demonstrated, performing merge operations when no tree-like structure is available is a complex endeavour. Fortunately, merge operations are not really a necessity when it comes to plots. Compared to the original ROAM paper, a stated goal of which was to approximately visualize very detailed data at acceptable frame rates, the goal in plotting surfaces is to simply obtain a sufficiently detailed mesh. For this reason, we might as well do away with merge operations entirely, and simply subdivide the mesh through split operations until a sufficient level of detail is obtained, and then terminate the algorithm. This is the approach that will be taken in this paper, in order to avoid unnecessarily complicating the algorithm. Note, however, that merge operations might be useful in certain scenarios, such as when scaling the mesh (if the view is zoomed in on a particular part of the mesh, that particular part should be subdivided further, but on zooming out again, the mesh should be merged back to its original state).

Culling

The original ROAM paper outlines a very efficient procedure for frustum culling. However, as with the mesh operations, it relies on the existence of an underlying tree structure, and is therefore not applicable in this case. If the mesh is regarded as a multi-rooted tree in the sense described in the previous paragraph, a similar algorithm can be applied. Otherwise, one must resort to standard culling methods (using octrees, for example). However, just as with merge operations, culling is not a necessity for this particular application, and so will not be discussed in detail in this paper.

Error Measures

The error measure used is pivotal to the quality of the end result, and for this reason is worthy of some serious consideration. There are several aspects that must be weighed against each other before a suitable error measure can be decided on. First, a definition:

Definition 13 An error measure is a function $\varepsilon: T \to \mathbb{R}^+$.

The first consideration one must make is whether to use a view-dependent error measure. This can be very useful when visualizing large datasets, such as terrain, and indeed the original ROAM paper describes such a measure. However, it has generally been the impression of the author that, when visualizing functions, having a mesh that is constantly shifting (which is the inevitable result of such a measure) is unnecessarily distracting. For this reason, this paper does not treat view-dependent error measures.

The next step is to define a target error measure; in other words, an ideal error measure that one can then attempt to approximate. A natural candidate for this would be the total volume between a triangle and its projection on the surface. That is, given a surface defined by an implicit function $f(\mathbf{r})$ and a triangle $T$: $\varepsilon_G(T) = \int_T \left| dT - \operatorname{proj}_f(dT) \right|$.

This error measure has several desirable properties. Minimizing it will lead to a mesh that is as ‘close’ as possible to the underlying isosurface, in the sense that the volume difference is minimized. In addition, it is both convergent and (nearly) monotonic. These two properties are invaluable in a good error measure, and thus will be discussed in greater detail.

Definition 14 An error measure $\varepsilon$ is monotonic w.r.t. a triangulation $T = T_1, T_2, \ldots$ if $\forall i, j: \varepsilon(T_i) \geq \varepsilon(T_j)$ when $i < j$.

Definition 15 An error measure is monotonic if it is monotonic w.r.t. every possible triangulation.

Definition 16 An error measure $\varepsilon$ is convergent w.r.t. a triangulation $T = T_1, T_2, \ldots$ if $\varepsilon(T_i) \to 0$ as $i \to \infty$.

Definition 17 An error measure is convergent if it is convergent w.r.t. every possible triangulation.

If an error measure is not monotonic, the ROAM algorithm can not guarantee optimality w.r.t. it. Informally, if the error of a parent triangle $T$ is smaller than the compound error of all its children, it might 'hide' in the split queue, and the children might never be split even though they should be. Basically, ROAM is guaranteed to always find a local minimum of the error measure, but unless the error measure is monotonic w.r.t. the given triangulation, there is no guarantee that the minimum found will be a global minimum.

Similarly, if an error measure is not convergent, it is very difficult to place any restrictions on the global error, since it can never be guaranteed that functions do not exist for which the restrictions can not be fulfilled. Even worse, if for some triangulation $T$ the error measure is not convergent w.r.t. some sub-triangulation $T' \subset T$, ROAM might get stuck refining $T'$ indefinitely.

Despite these useful properties, $\varepsilon_G$ also has some drawbacks. Firstly, while $\varepsilon_G$ can be approximated arbitrarily closely through numerical integration, doing so with any decent precision is far too costly to be done in real time. Secondly, any numerical approximation that is fast enough for real-time computation will not be monotonic w.r.t. most triangulations. Additionally, any approximation necessarily involves projection, and since we're working with implicit functions, this requires numerical root-finding.

So what can we do about this? Not much, as it turns out. However, in practice, even very crude approximations to $\varepsilon_G$ tend to perform rather well. One approximation that involves almost no extra computation (at least for triangles that are eventually split) is to use the volume of the tetrahedron defined by the three corners of the triangle and the projected midpoint of the triangle's base edge. This error measure, henceforth referred to as $\varepsilon_4$, is fast and convergent, and while it is not monotonic for many meshes, it will tend to be monotonic often enough to lead to satisfactory end results.
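Computing $\varepsilon_4$ amounts to one scalar triple product; a sketch (hypothetical names, points as {x, y, z} arrays):

```java
// epsilon_4: the volume of the tetrahedron spanned by the triangle's
// three corners and the projected midpoint of its base edge. The
// projected midpoint is assumed to be supplied by the caller.
final class Epsilon4 {
    static double error(double[] a, double[] b, double[] c, double[] projectedMid) {
        // Volume = |(b - a) . ((c - a) x (m - a))| / 6
        double[] u = sub(b, a), v = sub(c, a), w = sub(projectedMid, a);
        double[] cross = {
            v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0]
        };
        return Math.abs(u[0] * cross[0] + u[1] * cross[1] + u[2] * cross[2]) / 6.0;
    }

    static double[] sub(double[] p, double[] q) {
        return new double[] { p[0] - q[0], p[1] - q[1], p[2] - q[2] };
    }
}
```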

Constraints

As with many heuristic algorithms, a speed-accuracy tradeoff is inherent to the ROAM algorithm. Left to split indefinitely, the mesh produced by the ROAM algorithm will tend to converge to the given isosurface, as long as one assumes that points can be projected perfectly onto the surface and that an appropriate error measure has been used. However, the number of triangles in the mesh will tend towards infinity in the process. Clearly, it is desirable for the algorithm to terminate at some point, possibly to be resumed when the view changes. For this reason, it is useful to introduce the concept of a constraint function. Informally, a constraint function is a Boolean function that evaluates to true for any mesh that is not sufficiently detailed, and false otherwise. More formally:

Definition 18 A constraint function is a function $\delta: T \to \mathbb{B}$ such that if $\delta(M) = \text{false}$, then $\delta(M') = \text{false}$ for each subdivision $M'$ of $M$.

Basically, our modified ROAM algorithm will evaluate a provided constraint function after each subdivision, and terminate when it returns false.

It is not difficult to imagine a few useful constraint functions. For example, if one wishes to place a constraint on the mesh error (as is proposed in the original ROAM paper), one can use the function $\delta(M) = \max_{T_i \in M} \varepsilon(T_i) > \beta$ for some $\beta \in \mathbb{R}$ to place a constraint on the maximum local error. Similarly, $\delta(M) = \sum_{T_i \in M} \varepsilon(T_i) > \beta$ places a constraint on the global error.

New constraint functions can be created by joining together multiple constraint functions through conjunction, disjunction and negation. For real-time applications, it is usually useful to place constraints both on the mesh triangle count and on the global or local error.
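A sketch of how such composable constraints might look (the Mesh interface and its accessors are hypothetical):

```java
// Composable constraint functions: delta(M) = true means "keep splitting".
interface Mesh { double maxError(); int triangleCount(); }
interface Constraint { boolean test(Mesh m); }

final class Constraints {
    // Keep refining while the largest local error exceeds alpha.
    static Constraint maxError(double alpha) {
        return m -> m.maxError() > alpha;
    }
    // Keep refining while the triangle budget has not been exhausted.
    static Constraint triangleBudget(int n) {
        return m -> m.triangleCount() < n;
    }
    // Conjunction; and(maxError(alpha), triangleBudget(n)) mirrors the
    // combined count/error constraint used in the implementation chapter.
    static Constraint and(Constraint a, Constraint b) {
        return m -> a.test(m) && b.test(m);
    }
}
```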

3.3 The Algorithm

3.3.1 Overview

As explained above, the algorithm is divided into two phases. The first phase (henceforth referred to as the polygonization phase) takes as input an implicit equation, and outputs a rough but (hopefully) topologically accurate triangle mesh approximation to the isosurface defined by the equation.

The second phase (henceforth referred to as the subdivision phase) takes as input the triangle mesh from the polygonization phase, and outputs a mesh that has been optimized w.r.t. an error measure $\varepsilon$ and a constraint function $\delta$.

3.3.2 Polygonization phase

The goal of this phase is to create a rough mesh approximating the isosurface in a topologically correct way. This is accomplished through a version of the Marching Cubes algorithm with the extensions outlined above. This phase is divided into two subphases. The first subphase attempts to find a set of cubes enclosing the surface, such that the isosurface passes through no cube more than once. Within these cubes, the second subphase creates a triangle mesh approximating the surface.

The first step is performed using the octree approach outlined in the Marching Cubes subsection. The viewing volume is recursively subdivided into eight octants. The recursion does not continue past elements through which no isosurface appears to pass, and stops when the isosurface passes through each element at most once. Note that this leads to a set of identical cubes arranged within a cuberille grid, and therefore the usual Marching Cubes polygonization scheme can be applied.

In the second step, the isosurface mesh is constructed within the cubes as per the Marching Cubes 33 polygonization scheme described above. The vertices of the mesh are then projected onto the surface.

The subdivision phase requires an input mesh whose triangles are linked in a graph. This can be done efficiently by linking up each cube with its neighbors in the first subphase, and in the second subphase, applying a slightly expanded polygonization scheme in which information on the triangles in the cube is stored. Furthermore, it is useful if in the subdivision phase we know which edge of each triangle is the longest. For this reason, the length of the triangles’ sides should be measured post-projection, and the base edge marked appropriately.

3.3.3 Subdivision phase

In this phase, a modified version of ROAM, with the extensions discussed in the ROAM subsection, subdivides the mesh according to the specified error measure $\varepsilon$ and the specified constraint $\delta$. We assume that we are given as input a mesh $M$, formatted as in the last paragraph of the previous subsection.

A priority queue $p_{split}$ for split operations is created. Initially, each triangle $T \in M$ is placed in $p_{split}$ according to $\varepsilon(T)$. Next, a series of split operations (ideally a fixed number per frame) is performed on the element of highest priority in $p_{split}$, as follows:

Algorithm 1 The main loop
1: while δ(M) = true do
2:   T ← pop(p_split)
3:   M ← split(M, T, ∅)
4:   render a new frame every nth split
5: end while

Algorithm 2 split(M, T, T') - M is the mesh, T is the triangle being split, and T' is the triangle that initiated the split.
1: split(M, T, T'):
2:   T1 and T2 are set to the two triangles obtained when T is split
3:   M ← (M ∪ {T1, T2}) \ {T}
4:   update edges in the dual graph
5:   if (T' ≠ ∅) ∧ (T' does not share a base edge with T) then
6:     set T'' to whichever of T1 or T2 shares an edge with T'
7:     split(M, T'', T)
8:   end if
9:   insert T1 and T2 into p_split

What happens in line 2 of split deserves elaboration. In order for the algorithm to produce an attractive and accurate result, each triangle should be split at the center of its base edge. The new vertex resulting from this split should be projected onto the surface, and new error values should be computed for T1 and T2.
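A sketch of this step, reusing the hypothetical ScalarField, Epsilon4 and projection helpers from the earlier sketches:

```java
import java.util.Queue;

// The base edge of t is bisected, the new vertex is projected onto the
// surface, and both children receive fresh error values before being
// queued for further splits.
final class SplitStep {
    static final class Triangle {
        final double[] apex, base0, base1; // base0-base1 is the base edge
        double error;
        Triangle(double[] apex, double[] base0, double[] base1) {
            this.apex = apex; this.base0 = base0; this.base1 = base1;
        }
    }

    static void split(ScalarField f, Triangle t, Queue<Triangle> pSplit) {
        double[] v = projectToSurface(f, mid(t.base0, t.base1)); // new shared vertex
        // Each child has the new vertex as apex; its base edge runs from
        // an old base vertex to the old apex.
        Triangle t1 = new Triangle(v, t.base0, t.apex);
        Triangle t2 = new Triangle(v, t.apex, t.base1);
        t1.error = Epsilon4.error(t1.apex, t1.base0, t1.base1,
                                  projectToSurface(f, mid(t1.base0, t1.base1)));
        t2.error = Epsilon4.error(t2.apex, t2.base0, t2.base1,
                                  projectToSurface(f, mid(t2.base0, t2.base1)));
        pSplit.add(t1);
        pSplit.add(t2);
    }

    static double[] mid(double[] a, double[] b) {
        return new double[] { (a[0]+b[0])/2, (a[1]+b[1])/2, (a[2]+b[2])/2 };
    }

    // Placeholder: e.g. Newton's method along a numerically estimated
    // gradient, as sketched in chapter 4.
    static double[] projectToSurface(ScalarField f, double[] p) { return p; }
}
```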

Of course, some work has to be performed in order to maintain the graph structure of the mesh when triangles are split. For reasons of brevity, such details are omitted from this report.

3.3.4 Considerations

The algorithm in the polygonization phase may be replaced with any alternative algorithm that takes as input an implicit equation and provides as output a mesh approximating the isosurface defined by the equation. I chose Marching Cubes because it is fast, simple, well known, and easily extensible. Naturally, it does have drawbacks compared to other isosurface algorithms, and substituting any such algorithm in its place should not be very difficult.

One drawback of the Marching Cubes approach taken above (in which the triangles are projected onto the surface) is that it can result in thin, nearly degenerate triangles. In most cases, subdividing such triangles will not increase the level of detail to any noticeable degree, and so including them in the input mesh to the subdivision phase is in general not desirable. For this reason, it can be advantageous to perform a 'mid-phase' simplification, in which the output mesh from the polygonization phase is transformed using some topology-preserving simplification algorithm. Such algorithms will generally produce a simpler and more regular mesh, which can be very useful in the subdivision phase.

For brevity, merge operations and frustum culling are omitted from the algorithm description. While these two tools can be of great utility in some cases, it is the view of the author that they are generally not required for rendering implicit functions. The reasoning behind this decision is explained in the subsection on ROAM. The one application where they can be very useful is in optimizing the application for zooming: if the end user should be able to view small portions of the same isosurface in very high detail, such tools can be used to facilitate exploration in real time. However, the application of these tools to the algorithm is not discussed in detail in this paper.

4 Implementation

The algorithm described above was implemented in the geogebra.geogebra3D.euclidian3D.plots package of GeoGebra. This section discusses this implementation in detail along with the issues that were faced and the solutions that were implemented. As all work was conducted by the author, the remainder of this chapter is written in the first person.

4.1 Polygonization Phase

Initially, I wrote only a very basic implementation of the Marching Cubes algorithm. This was then extended with octrees and projection.

The first major issue I faced involved making the link between the corner points of a cube and the triangles to be placed within it. As noted earlier, there is a total of 256 different such combinations. I decided to hard-code these 256 combinations for efficiency reasons. Because writing the 256 cases by hand would have been a tedious and undoubtedly error-prone exercise, I instead hand-coded the 15 special cases shown in figure 3.1, then wrote a small program in Ruby that utilized regular expressions to modify these cases by performing rotations along three axes, as well as reflections and inversions. Next, I encoded each possible corner combination as a two-digit hexadecimal value, sorted a hash containing the different combinations, and outputted the values of the hash. This produced over two thousand lines of commented code, which I then compressed to a 256-line switch statement using another Ruby program.

At this point, I had a simple but functional Marching Cubes implementation. However, it was neither fast nor pretty. I decided to first improve speed. Initially, I had simply iterated through the viewing volume and, for each cube, evaluated the function at its eight corners. In order to improve performance, I opted to instead buffer the values of all points on one level in the grid. With some refactoring, this reduced the number of function evaluations per cube from eight to one.

Next, I turned my attention to octrees. The octree refinement was set up to run after an initial ordinary pass, in order to give a greater degree of customizability. To reduce the number of function evaluations performed, the octrees were set up so that grid points that had been evaluated at higher levels were not reevaluated, and so that no extra calculations were performed for siblings sharing corners.

My initial approach to creating the dual graph was to supplement the original set of cube triangle data with information on how to link together the triangles. This would then have been combined with pointers to adjacent cubes to create the dual graph in constant time. However, due to a computer crash, I had to abandon this approach in favor of a simpler, ad-hoc approach, in which the triangles are linked together through trial-and-error after they have been generated. While not optimal, this approach works well in practice.

For projection, I settled on a numerical gradient evaluation followed by Newton's method in the gradient direction. This process stops when the function value is within a certain interval. Normals for shading are computed using the same numerical gradient evaluation method.
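A sketch of this projection scheme (an illustration under assumptions, reusing the hypothetical ScalarField interface; not the GeoGebra source):

```java
// Projects a point onto the isosurface f = 0: estimate the gradient by
// central differences, then take Newton steps along it until |f| is small.
final class GradientProjection {
    static double[] project(ScalarField f, double[] p, double eps, double tol, int maxIter) {
        double[] q = p.clone();
        for (int it = 0; it < maxIter; it++) {
            double v = f.value(q[0], q[1], q[2]);
            if (Math.abs(v) < tol) break;   // close enough to the surface
            double[] g = gradient(f, q, eps);
            double g2 = g[0]*g[0] + g[1]*g[1] + g[2]*g[2];
            if (g2 == 0) break;             // flat spot: give up
            double step = v / g2;           // Newton step along the gradient
            for (int i = 0; i < 3; i++) q[i] -= step * g[i];
        }
        return q;
    }

    // Central-difference gradient; also usable for shading normals.
    static double[] gradient(ScalarField f, double[] p, double eps) {
        double[] g = new double[3];
        for (int i = 0; i < 3; i++) {
            double[] hi = p.clone(), lo = p.clone();
            hi[i] += eps; lo[i] -= eps;
            g[i] = (f.value(hi[0], hi[1], hi[2]) - f.value(lo[0], lo[1], lo[2])) / (2 * eps);
        }
        return g;
    }
}
```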

4.2 Subdivision Phase

I wrote an implementation of the bucket-based priority queue described in the section on ROAM, inspired by the container classes in the Java Class Library. This queue has a variable number of buckets, and takes as a constructor argument a special class whose job it is to associate a bucket number with each element of the generic type. In order to allow any class to be inserted into the queues, I had to introduce an inner class (called Link), an instance of which is associated with each object inserted into the queue. In order to facilitate fast lookups, I implemented an underlying hash map with the generic type as key and a Link as value. While this approach might seem contrived, it led to a flexible and clean result.

I also created a specialized data structure called a display list. In essence, this keeps track of the currently visible parts of the mesh, storing the triangles and their normals as part of a coherent data buffer. This is done in order to facilitate future use of Vertex Buffer Objects or similar techniques. Insertion, deletion and data access are all performed in constant time for this data structure.

I chose to implement the error measure $\varepsilon_4$ outlined in section 3.2.3, as I found it produced sufficiently accurate results without requiring evaluation of any points not part of the final mesh. I also settled on a constraint function that limits both triangle count and local error: $\delta(M) = (\max_{T_i \in M} \varepsilon_4(T_i) > \alpha) \wedge (|M| < N)$, for some maximum error $\alpha$ and some maximum triangle count $N$.

4.3 Future Improvements

There is plenty of room for improvement and tweaking. In terms of missing features, merge operations, topology guarantees and support for dynamic data have yet to be implemented.

Furthermore, situations in which the projection method fails to find a root have not been taken into account. Such situations occur with both singular functions and functions that exhibit locally non-continuous or non-trilinear behaviour. A better root-finding algorithm should be implemented for such cases.

It is likely that the performance of the algorithm can be significantly improved. While the current implementation is adequately fast for most applications, it was intended as a proof of concept rather than a finished product. There are likely to be many unnecessary function evaluations, especially in the Marching Cubes part of the implementation. Projection and normal calculations are performed separately, each requiring at least four function evaluations per vertex. A clever merging of the two calculations could quite possibly improve performance.

Figure 4.1: Screen captures of the GeoGebra implementation, showing the real-time refinement of $|\mathbf{r}| + 2\cos 2x + 2\cos 2y + 2\cos 2z = 20$. From left to right: mesh before subdivision (8 triangles), mesh at 100 triangles, mesh at 1 000 triangles, mesh at 10 000 triangles.

5 Results

The GeoGebra implementation, while very basic, highlights some of the areas in which the method proposed in this paper outperforms standard methods. This implementation was evaluated and compared to a pure Marching Cubes implementation in terms of speed, accuracy and flexibility.

In terms of speed, the proposed algorithm largely outperforms the basic Marching Cubes implementation. Since the algorithm works in real time, a rough approximation can be presented quickly, regardless of the final level of detail. This results in a product that appears more fluid and responsive to user input. As an example, a test was performed in which an isosurface mesh of $|\mathbf{r}| = c$ with roughly 2000 polygons was produced using both the octree Marching Cubes approach and the method proposed in this paper. While the former approach took an average of 2.2 s to produce a visible result, the latter took an average of only 9 ms.

Furthermore, the proposed algorithm produces an end result faster than Marching Cubes. On the same function as above, Marching Cubes took an average of 16% longer than the proposed algorithm to reach the given polygon count.

In terms of accuracy, the proposed algorithm performed much better than Marching Cubes on many isosurfaces. Since it refines the mesh selectively, giving higher detail to areas with high curvature, while Marching Cubes gives a uniform level of detail, the difference in accuracy is most apparent on highly non-uniform isosurfaces. As an example, the two algorithms were compared on the oblate spheroid given by $10x^2 + y^2 + z^2 = 20$. For 1080 triangles, Marching Cubes gave a total error ($\sum_{T_i \in M} \varepsilon_4(T_i)$) of 5.1, while the proposed algorithm gave a total error of only 4.1. The results of this test are displayed in figure 5.2.

It should be noted, however, that the current implementation does not always give a smaller total error than the Marching Cubes implementation for highly regular isosurfaces. As an example, the Marching Cubes algorithm gives a slightly smaller error in the spherical isosurface test described above (however, the proposed algorithm still outperforms Marching Cubes when left to run for the same amount of time). The reason that Marching Cubes gives a smaller error for such an isosurface is that the proposed triangulation scheme is suboptimal when the mesh comprises many equilateral (or nearly equilateral) triangles. Suggestions for mitigating this issue are proposed in the next chapter.

One of the greatest strengths of the proposed algorithm, as compared to Marching Cubes (and other traditional methods), lies in its flexibility. While it is generally impossible to get anywhere near a given triangle count with Marching Cubes without extensive trial and error, the algorithm proposed in this paper will almost always get within one triangle of the target triangle count. The same is true for both maximum and total error.

As Marching Cubes sometimes produces meshes with smaller error, it is reasonable to question the assumption that the polygonization phase gives the best results given a coarse input mesh.

However, while the error might initially be smaller for more detailed input meshes, the error tends to decrease more rapidly for coarse input meshes, as is seen in figure 5.1. This figure also shows that while Marching Cubes sometimes gives a slightly smaller error for a given triangle count on highly regular isosurfaces, the error difference is essentially negligible.

These results, in conjunction with the speed and visual quality of the implementation, even at such an early stage, show that the approach outlined in this paper serves its stated purpose well, and is a viable alternative to traditional methods when it comes to real-time visualization of isosurfaces.

Figure 5.1: Global error compared to polygon count in the subdivision phase for a spherical isosurface, given input meshes of different triangle counts from the polygonization phase. Triangle count is plotted on the x axis, and log Σ_{T_i ∈ M} ε_4(T_i) on the y axis.


Figure 5.2: The same oblate spheroid rendered using Marching Cubes (left) and the proposed algorithm (right). The mesh on the left has uniform detail, while the mesh on the right has more detail in the areas of high curvature around the brim.


6 Future Work

The method outlined could be extended in several directions. For one, the Marching Cubes-based approach to constructing the initial mesh has several disadvantages. It requires a uniform sampling grid, which leads to extraneous work for isosurfaces that are highly detailed in certain areas but lack detail in others. This also has the disadvantage of possibly sending a mesh that is too high in detail to the subdivision phase. Since the subdivision algorithm performs best on crude meshes, this might lead to an end result that appears unnecessarily irregular. As the polygonization phase of the algorithm is replaceable, one possible direction would be to compare and contrast the Marching Cubes extension outlined in this paper with other previously suggested algorithms, such as Dual Marching Cubes [16] or Manifold Dual Contouring [18].

It might be fruitful to explore the effects of various changes to the subdivision phase. One could easily implement and test other error measures or subdivision algorithms. As merge operations were not fully explored in this paper, it may be worthwhile to identify more efficient merge schemes, especially for applications involving dynamic data. Furthermore, different triangulation schemes should be investigated. While the scheme suggested in this paper performs fairly well for most applications, it cannot accurately reproduce sharp features. In addition, other subdivision schemes could potentially lead to faster convergence. One such idea would be to split equilateral or nearly equilateral triangles into three or four identical triangles, leading to better results when approximating regular shapes such as spheres; a sketch of such a split is given below.
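The following Java sketch illustrates such a one-to-four split under stated assumptions: the Projector interface is a hypothetical stand-in for the projection step of the subdivision phase, and the tolerance test deciding whether a triangle counts as nearly equilateral is left out.

import java.util.ArrayList;
import java.util.List;

final class FourWaySplit {

    /** Hypothetical hook for the projection step that moves a point onto the isosurface. */
    interface Projector { double[] project(double[] p); }

    static double[] midpoint(double[] a, double[] b) {
        return new double[] { (a[0] + b[0]) / 2, (a[1] + b[1]) / 2, (a[2] + b[2]) / 2 };
    }

    /** Replaces the near-equilateral triangle (a, b, c) by four children whose new
     *  vertices are the edge midpoints projected onto the isosurface. */
    static List<double[][]> split(double[] a, double[] b, double[] c, Projector proj) {
        double[] ab = proj.project(midpoint(a, b));
        double[] bc = proj.project(midpoint(b, c));
        double[] ca = proj.project(midpoint(c, a));
        List<double[][]> children = new ArrayList<>();
        children.add(new double[][] { a, ab, ca });   // corner triangle at a
        children.add(new double[][] { ab, b, bc });   // corner triangle at b
        children.add(new double[][] { ca, bc, c });   // corner triangle at c
        children.add(new double[][] { ab, bc, ca });  // central triangle
        return children;
    }
}

As with the binary splits used in the subdivision phase, neighbouring triangles would need to split their shared edges as well; otherwise the projected midpoints introduce T-junctions and visible cracks in the mesh.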

Of course, the algorithm could also be adapted to applications other than real-time visualization of implicit functions. For applications such as visualization of volume data, the algorithm could be used either as a pre-computation step (in order to achieve a high-detail mesh subject to triangle count constraints) or as a real-time visualization tool (combined with a view-dependent error measure, it could be used to visualize very complex datasets in real time).

One topic that is not discussed at length in this paper is numerical root finding. Several steps in the algorithm (most notably projection) hinge on the ability to accurately detect roots on some closed interval. If roots are not found, the algorithm breaks down. If the condition of approximate trilinearity discussed in the section on topology is fulfilled, however, this will not be a problem. Nevertheless, evaluating different methods of root finding could lead to significant improvements in efficiency, quality and stability; one possible fallback is sketched below.
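As an illustration, the Java sketch below shows a plain bisection fallback; the Line interface (the implicit function restricted to a parameterized segment) is a hypothetical name, and a sign change between the endpoints is assumed.

final class SegmentRoot {

    /** The implicit function restricted to a segment p(t), t in [0, 1]. */
    interface Line { double eval(double t); }

    /** Plain bisection on [t0, t1]; assumes f(t0) and f(t1) differ in sign.
     *  Slow but unconditionally convergent once a sign change is bracketed. */
    static double bisect(Line f, double t0, double t1, int iters) {
        double f0 = f.eval(t0);
        for (int i = 0; i < iters; i++) {
            double tm = 0.5 * (t0 + t1);
            double fm = f.eval(tm);
            if (fm == 0.0) return tm;
            if (f0 * fm < 0.0) {
                t1 = tm;              // sign change in the left half
            } else {
                t0 = tm;              // sign change in the right half
                f0 = fm;
            }
        }
        return 0.5 * (t0 + t1);
    }
}

Note that bisection only finds roots at which the function crosses zero; tangential roots, where the function touches zero without changing sign, would still require denser sampling or interval arithmetic to detect.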


7 Conclusion

A new algorithm for visualizing isosurfaces, with a focus on real-time display of continuous implicit functions, has been presented. The algorithm has been implemented and shown to perform better than traditional methods in many respects. A combination of ideas from the Marching Cubes and ROAM algorithms led to a flexible algorithm that allows for real-time refinement of an isosurface subject to constraints, and optimally with regard to a triangulation scheme and an error measure. The algorithm, consisting of a polygonization and a subdivision phase, has been covered extensively, and various issues concerning its implementation have been discussed.

7.1 Acknowledgements

I would like to thank Lars Kjelldahl for his suggestions on the project, Mathieu Blossier from GeoGebra for his support with the GeoGebra implementation, as well as Christopher Conley and Tomas Eriksson for their substantial input and feedback on the report.


Bibliography

[1] Duchaineau, M. et al. ROAMing terrain: real-time optimally adapting meshes. VIS '97. Proceedings of the 8th conference on Visualization, 1997.

[2] Lorensen, W. E., and Cline, H. E. Marching cubes: A high resolution 3D surface construction algorithm. SIGGRAPH ’87. Proceedings of the 14th annual conference on Computer graphics and interactive techniques, 1987.

[3] Gregorski, B. et al. Interactive view-dependent rendering of large isosurfaces. VIS '02. Proceedings of the conference on Visualization, 2002.

[4] Keppel, E. Approximating complex surfaces by triangulation of contour lines. IBM Journal of Research and Development. Volume 19 Issue 1, 1975.

[5] Chernyaev, E. V. Marching Cubes 33: Construction of Topologically Correct Isosurfaces. Graphicon, 1995.

[6] Nielson, G. M., and Hamann, B. The asymptotic decider: resolving the ambiguity in marching cubes. VIS '91. Proceedings of the 2nd conference on Visualization, 1991.

[7] Gibson, S. F. F. Constrained elastic surface nets: Generating smooth surfaces from binary segmented data. MICCAI ’98. Proceedings of the First International Conference on Medical Image Computing and Computer-Assisted Intervention, 1998.

[8] Farrell, E. J. Color Display and Interactive Interpretation of Three-Dimensional Data. IBM Journal of Research and Development. Volume 27 Issue 4, 1983.

[9] Höhne, K. H., and Bernstein, R. Shading 3D-Images from CT Using Gray-Level Gradients. IEEE transactions on medical imaging. Volume 5 Issue 1, 1986.

[10] Herman, G. T., and Udupa, J. K. Display of 3D Digital Images: Computational Foundations and Medical Applications. IEEE Computer Graphics and Applications. Volume 3 Issue 5, 1983.

[11] Dürst, M. J. Additional reference to Marching Cubes. Computer Graphics. Volume 22 Issue 2, 1988.

[12] Wilhelms, J., and Van Gelder, A. Topological considerations in isosurface generation. VVS '90. Proceedings of the 1990 workshop on Volume visualization.

[13] Montani, C. et al. A modified look-up table for implicit disambiguation of Marching Cubes. The Visual Computer. Volume 10 Issue 6, 1994.


[14] Moore, D., and Warren, J. Compact isocontours from sampled data. Graphics Gems III. Academic Press, New York, 1992.

[15] Kobbelt, L. P. et al. Feature sensitive surface extraction from volume data. SIGGRAPH '01. Proceedings of the 28th annual conference on Computer graphics and interactive techniques, 2001.

[16] Schaefer, S., and Warren, J. Dual Marching Cubes: Primal Contouring of Dual Grids. PG '04. Proceedings of the 12th Pacific Conference on Computer Graphics and Applications, 2004.

[17] Ju, T. et al. Dual contouring of Hermite data. SIGGRAPH '02. Proceedings of the 29th annual conference on Computer graphics and interactive techniques, 2002.

[18] Schaefer, S. et al. Manifold Dual Contouring. IEEE Transactions on Visualization and Computer Graphics. Volume 13 Issue 3, 2007.

[19] Shekhar, R. et al. Octree-based decimation of marching cubes surfaces. VIS '96. Proceedings of the 7th conference on Visualization, 1996.

[20] Zhang, N. et al. Dual Contouring with Topology-Preserving Simplification Using Enhanced Cell Representation. VIS '04. Proceedings of the conference on Visualization, 2004.
