
LiU-ITN-TEK-A--11/052--SE

Fast and Adaptive Polygon Conversion By Means Of Sparse Volumes

Mihai Aldén

Master's thesis in Media Technology at the Institute of Technology, Linköping University

Examiner: Reiner Lenz



Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances.

The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/


Abstract

This thesis describes the implementation of sparse volume conversion methods as used in movie production. The work consists of three closely related projects. The first project presents the implementation of a signed distance field converter for polygonal models that can handle self-intersecting meshes and non-manifold surfaces. The second project presents the implementation of an adaptive surface extraction algorithm for high-resolution volumes, in which a closed adaptive surface is extracted without the need for any post-processing steps. The third project examines and describes the implementation of a directed distance field based fracturing technique that uses explicit information from the original surface to generate seamless cuts between the fractured pieces. All of the projects focus on memory efficiency and fast parallel algorithms. They are developed specifically for the new highly efficient volume data structure VDB originating from DreamWorks Animation.

Acknowledgements

I would like to express my gratitude and sincere thanks to my supervisor Ken Museth, FX R&D Supervisor and Principal Engineer at DreamWorks Animation, for his guidance, discussions and for generously sharing his expertise. I would also like to sincerely thank Peter Cucka, FX R&D Senior Software Engineer at DreamWorks Animation, for discussions and for helping me further develop my programming skills. A special thanks goes to my examiner Reiner Lenz, Linköping University, for his support and help. My appreciation and thanks also go to Michael Josefsson for carefully reading my text and providing invaluable feedback.


Contents

1 Introduction
  1.1 Scope and contribution
  1.2 Organization of the thesis
  1.3 Prerequisites
  1.4 Glossary

2 Background
  2.1 Surface properties
  2.2 Explicit surfaces
  2.3 Implicit surfaces
  2.4 The uniform data grid
  2.5 Difference between voxels and cells
  2.6 Narrow band level-sets
  2.7 Directed distance fields
  2.8 Hermite data

3 Sparse volume data structure
  3.1 Dynamic block allocation
  3.2 Efficiency
  3.3 Intel Threading Building Blocks
  3.4 Parallelization

4 Data compression
  4.1 Quantized normals
  4.2 Quantized Hermite data object

5 Signed distance field conversion
  5.1 Overview of the algorithm
  5.2 Polygon rasterization
  5.3 Determining the sign
  5.4 Cleaning up voxels
  5.5 Narrow band expansion
  5.6 Fog-volume conversion

6 Adaptive surface extraction
  6.1 Overview of the algorithm
  6.2 The quadratic error function
    6.2.1 Singular value decomposition
  6.3 Topology tests
  6.4 Adaptivity
  6.5 Polygon generation

7 Fracturing
  7.1 Overview of the algorithm

8 Implementation overview
  8.1 Code libraries
  8.2 Houdini integration

9 Results
  9.1 Signed distance field conversion
  9.2 Adaptive surface extraction
  9.3 Fracturing

10 Discussion
  10.1 Signed distance field conversion
  10.2 Adaptive surface extraction
  10.3 Fracturing

11 Conclusion
  11.1 Future work
    11.1.1 Signed distance field conversion
    11.1.2 Adaptive surface extraction

1 Introduction

Polygonal models are by far the most popular surface representation in computer graphics today. The main reason for this is that polygonal models offer a very compact boundary representation and are well supported in hardware.

There are many different polygon primitives; some of the most common are quadrilaterals (quads) and triangles. Polygonal models encode the surface explicitly, separating topological information, such as the connectivity of vertices, edges and faces, from geometric information, such as the surface embedding in 3D space defined by the vertices, surface normals and curvature properties. This structure makes certain types of operations inherently hard to perform on polygonal meshes. For instance, topology changes such as boolean operations between two polygonal models are quite difficult and require a lot of calculations [24] [16], while geometric changes such as scaling, rotation and translation transformations are quite straightforward and more easily performed [8] [20].

Many natural phenomena such as fog, smoke, clouds and dust are not representable as surfaces, and the surface of a simulated fluid might undergo many complex topological changes during the simulation which are very difficult to track using the polygonal mesh representation. Therefore it is common to treat these cases as volumes or particles instead [18] [25] [6].

Volume data has for a long time been used in the context of scientific visualization. However, the increasing use of natural effects and simulations in feature films has driven the development of more efficient volumetric representations, and thus the need for efficient conversion algorithms between the different formats.

1.1 Scope and contribution

In this thesis three different algorithms are developed, namely:

• Signed distance field conversion of polygonal models.

• Adaptive surface extraction from volumes.

• Directed distance field based fracturing of polygonal models.

The first two projects have been developed into working production tools. The third project focuses more on examining directed distance fields for seamless fracturing and has not yet been developed into a complete solution for production.

1.2 Organization of the thesis

The thesis is organized as follows:

• Chapter 2 gives a short introduction to some of the key terms and concepts.

• Chapter 3 gives a high-level description of the sparse volume data structure VDB and explains some useful parallel design patterns.

• Chapter 4 explains two compression schemes used for normal and Hermite data.

• Chapter 5 describes the implementation of the signed distance field converter.

• Chapter 6 describes the implementation of the adaptive surface extraction algorithm.

• Chapter 7 describes the implementation of the fracturing tool.

• Chapter 8 describes how the different projects are integrated into a commercial 3D package.

• Chapter 9 presents some performance and visual results.

• Chapter 10 summarizes the results and points out the strengths and weaknesses of the different algorithms.

• Chapter 11 concludes the thesis and outlines some possible directions for future work.

1.3 Prerequisites

The reader is assumed to have knowledge of computer graphics at the upper-level undergraduate course level. Familiarity with concepts such as C++ programming, polygonal models, vector graphics and linear algebra, as well as some introductory knowledge of implicit surfaces and modeling, is also assumed.


1.4 Glossary

2D  Two-dimensional
3D  Three-dimensional
CPU  Central processing unit
CSG  Constructive solid geometry
DB+Grid  Sparse volume data structure developed by Ken Museth [19]
ℝ³  Three-dimensional Euclidean space
TBB  Threading Building Blocks
UI  User interface
VDB  Another name for DB+Grid
VFX  Visual effects


2 Background

The aim of this chapter is to introduce some key terms and concepts that will recur in the later chapters and also provide some further reading references.

2.1 Surface properties

The desired surface type for a 3D computer graphics object is a closed, orientable two-manifold surface embedded in 3D space.

• A surface is classified as a two-manifold meaning that the mesh can be unfolded into a continuous flat surface without overlapping pieces.

• The surface is considered to be oriented if all surface normals are contiguous and all surface patches maintain the orientation of their respective normal.

• A surface is closed if the embedding space can be partitioned into disjoint inside, on-surface and outside sets.

If these requirements are not met, certain modeling operations become very difficult or even impossible to perform, and certain geometrical properties are not well defined. Figure 2.1 below shows two typical non-manifold cases: the first demonstrates a T-junction, meaning that more than two faces share an edge, and the second shows a surface with non-contiguous normals.

Figure 2.1: Non-manifold cases: a T-junction and inconsistent normals.

Non-manifold surfaces are unfortunately very common in artist-created 3D models. The reason is that the artist does not have time to care about the surface classification of the model: if the model looks good from an artistic standpoint, the artist's work is considered done.


2.2 Explicit surfaces

In computer graphics, explicit surfaces are usually represented by height fields, parametric functions, subdivision surfaces or polygonal models, where polygonal models are the preferred and most common representation.

Many different formats and data structures for polygonal models have been developed through the years, see [7] for a survey. The choice of data structure usually depends on different needs, such as memory restrictions or fast neighborhood search capabilities. In this thesis a low memory footprint is the primary need, so the decision was made to use a compact representation that stores the polygonal data in simple contiguous memory arrays and sacrifices fast neighborhood search capabilities.

Each polygon consists of multiple vertices that refer to points; the most common setup is three or four vertices per polygon, forming a triangle or a quad, as in figure 2.2.

Figure 2.2: A set of points connected together to create a triangle and a quad.

Vertices are unique for each polygon while points can be shared between multiple polygons. One reason for distinguishing between vertices and points is to enable surface attributes to be defined at different rates. For instance, attributes that are local to the polygon, such as vertex normals (used to create sharp edges), are defined at a per-vertex rate. Attributes that are shared between neighboring polygons, such as point normals (used for smooth transitions between different surface patches and texture seams), are defined at a per-point rate.

Another reason for sharing points is that it reduces the memory footprint substantially. When constructing polygonal models by connecting together multiple points to form a polygonal mesh, as shown in figure 2.3, the same point will be used multiple times. Points in three-dimensional Euclidean space are defined by three components each, while a vertex is usually defined as an unsigned index that points into an array of shared points.


Figure 2.3: The points are shared between neighboring polygons to enable smooth attribute transitions and reduce memory cost.

2.3 Implicit surfaces

A closed surface in three-dimensional space ℝ³ can be seen as a two-dimensional contour that separates the ℝ³ domain into different subdomains. Unlike explicit surface representations that directly define the contour, the implicit approach offers an indirect surface representation defined by a function φ(x) that maps every point in space x to a scalar value. The geometrical contour is found by selecting all points in space that correspond to a particular constant value φ(x) = C; the subset defined by C is usually called a level-set or iso-value.

Figure 2.4 illustrates a simple 2D example of the difference between explicit and implicit representations through an analytically defined circle. The explicit function is only defined on the contour: for every x the equation gives the corresponding value of y.

Figure 2.4: Explicit and implicit circle equations. The explicit equation is only defined on the contour while the implicit equation is defined for all points in space. Here the constant r represents the circle's radius.


The implicit circle function maps all (x, y) ∈ ℝ² to a scalar. The circle's contour can be found by setting C = 0, giving the so-called zero level-set of φ. The zero level-set enables a simple and intuitive way of classifying each point in space as being inside, on or outside the contour:

Inside: φ(x, y) < 0   (2.1)
Outside: φ(x, y) > 0   (2.2)
On the surface: φ(x, y) = 0   (2.3)

The implicit representation of surfaces offers a very efficient and intuitive modeling concept called constructive solid geometry (CSG) [8] [20]. This modeling concept can be used to construct more complex geometry by combining simpler primitives in different ways. This is achieved through a set of boolean operators, namely union, intersection and difference. These operations are illustrated in figure 2.5 with two implicit circles A and B.

Figure 2.5: Boolean (CSG) operations are used to construct new geometric objects by combining existing primitives.

Assuming the negative/positive inside/outside classification, the boolean operators are defined as follows:

Union(A, B) = A ∪ B = min(A, B)
Intersection(A, B) = A ∩ B = max(A, B)
Difference(A, B) = A \ B = max(A, −B)
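Applied to two discretized signed distance fields sampled on the same grid, these operators reduce to per-sample min/max computations. The following C++ sketch illustrates this under the assumption of a simple flat-array storage (hypothetical, not the sparse DB+Grid/VDB structure introduced in chapter 3):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    enum class CsgOp { Union, Intersection, Difference };

    // CSG between two signed distance fields sampled on the same grid.
    std::vector<float> csg(const std::vector<float>& a,
                           const std::vector<float>& b, CsgOp op)
    {
        std::vector<float> out(a.size());
        for (std::size_t i = 0; i < a.size(); ++i) {
            switch (op) {
            case CsgOp::Union:        out[i] = std::min(a[i], b[i]);  break;
            case CsgOp::Intersection: out[i] = std::max(a[i], b[i]);  break;
            case CsgOp::Difference:   out[i] = std::max(a[i], -b[i]); break;
            }
        }
        return out;
    }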


2.4 The uniform data grid

Figure 2.6: A 2D uniform data grid represented as a set of discrete points.

In order to work with implicit functions on a computer, it is common to discretize them into discrete points. The uniform data grid illustrated in figure 2.6 offers a simple way to discretize space into a set of regularly positioned sample points. These discrete sample points are used to represent a desired continuous field, for example a density field or a velocity field, in two-dimensional or three-dimensional Euclidean space. The most common way of implementing the uniform data grid is to allocate a contiguous memory array and use an indexing function to map the data. For example, a 200 × 400 two-dimensional grid can be implemented by allocating a one-dimensional array of 80 000 elements, where the 2D sample point at position (x, y) is accessed by indexing the array as array[x + y*200] in a C++ implementation.
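A minimal sketch of this layout, with illustrative names that are not taken from the thesis:

    #include <cstddef>
    #include <vector>

    // A dense 2D uniform grid backed by one contiguous array.
    class UniformGrid2D {
    public:
        UniformGrid2D(std::size_t width, std::size_t height)
            : mWidth(width), mData(width * height, 0.0f) {}

        // Map the 2D sample coordinate (x, y) to a 1D array offset.
        float& at(std::size_t x, std::size_t y)
        {
            return mData[x + y * mWidth];
        }

    private:
        std::size_t mWidth;
        std::vector<float> mData;
    };

    // Usage: UniformGrid2D grid(200, 400) allocates all 80 000 samples
    // up front, regardless of how many are actually of interest.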

A uniform data grid is straightforward to implement using a contiguous memory array, but this simple approach is unfortunately also very memory inefficient. As table 2.1 shows, the memory footprint increases rapidly with the resolution of the uniform grid. This memory inefficiency has been one of the biggest drawbacks of volumetric data in general, and many research departments and VFX companies have spent a lot of effort trying to overcome this limitation.

Grid resolution (voxels)   Data               Memory cost
64³                        3 × 32-bit float   3 MB
128³                       3 × 32-bit float   24 MB
256³                       3 × 32-bit float   192 MB
512³                       3 × 32-bit float   1.5 GB
1024³                      3 × 32-bit float   12 GB

Table 2.1: Memory cost of a dense uniform grid storing three 32-bit floats per voxel.


2.5 Difference between voxels and cells

Both voxels and cells define the smallest unit of space and span a cubical region in 3D Euclidean space; they are equal in size but partition the discrete space differently. Figure 2.7 shows the interpretation used in this thesis: voxels are centered around the discrete sample points of a uniform grid, while cells are made up of 4 sample points in 2D.

Figure 2.7: Voxel and cell interpretation in 2D, where dx = dy.

In 3D the cells are made up of 8 sample points.

2.6 Narrow band level-sets

This idea is based on the fact that most practical level-set applications are only interested in tracking the contour defined by a small subset of the domain of φ(x). Values far away from this contour are therefore of no practical interest and can safely be discarded. The narrow band scheme introduced by [2] restricts the computation to a small neighborhood around φ(x) = 0. This improves the computational efficiency of most level-set operations, and by combining the narrow band approach with sparse data storage the overall memory footprint is drastically reduced [18].

Figure 2.8: Narrow band representation for level sets.


2.7 Directed distance fields

The directed distance field can be seen as a different discretization of the distance field. As explained in [13], this approach stems from the observation that surface extraction algorithms like the Marching Cubes method [15] compute surface samples only at the edges of the cells. Thus, in order to extract a surface from the distance field, only information on the cell edges is required. Storing the discrete intersection distances along the cell edges triples the memory consumption. The main advantage of this approach, as illustrated in figure 2.9, is that the extracted surface crosses the cell edges at exactly the same positions as the original surface. Unfortunately it is still not possible to reconstruct sharp features that are lost when the original surface is discretized.

Figure 2.9: The first figure shows the original contour and the explicit intersection points with the edges of the grid. The second figure shows the contour reconstructed from a level set field; this reconstruction estimates the intersection points by linear interpolation. The third figure shows the contour reconstructed from a directed distance field, where the exact intersection points are computed.

The signed distances are stored in an up-wind fashion on the grid. This is done by only considering the edges created between the current voxel at position (i, j, k) and its positive up-wind neighbors, as shown in figure 2.10.

Figure 2.10: The up-wind edges of the voxel at (i, j, k) connect it to its positive neighbors, such as (i+1, j, k) and (i, j+1, k).


2.8 Hermite data

The directional distance field algorithm does not solve the sampling issue. By extracting more local information from the original surface, namely exact intersection points and normals, sharp surface features that are lost when the original surface is undersampled can be reconstructed through extrapolation [13]. This approach is illustrated in figure 2.11. In [12] this extended information is named Hermite data.


Figure 2.11: The first figure shows the original contour and the explicit intersection points with the edges of the grid. The second figure shows the contour reconstructed from a level set field; this reconstruction estimates the intersection points by linear interpolation. The third figure shows the contour reconstructed from a Hermite data field; this reconstruction computes exact intersection points and extrapolates an accurate representation of the original surface.

As figure 2.11 shows, with explicit intersection points and normals the original contour can be accurately reconstructed. As table 2.2 shows, this representation increases the memory cost quite drastically for three-dimensional grids.

Data type               Scalar components   Memory cost
Level set               1                   4 bytes
Directional distances   3                   12 bytes
Hermite data            12                  48 bytes

Table 2.2: Memory cost per voxel for different volumetric representations of contours. Each scalar is represented with a 32-bit float.


3 Sparse volume data structure

As previously mentioned, the high memory cost is one of the biggest drawbacks of volumetric data, and a lot of research effort has gone into different data structures that try to overcome this memory inefficiency. Most recently, Ken Museth has developed an efficient dynamic block based data structure dubbed DB+Grid [19] that exploits the spatial coherency of narrow band level sets and volumes using a sparse representation. DB+Grid combines the dynamic-block approach with some characteristics of B+trees. B+trees are typical for databases and file systems and can be seen as a generalization of binary search trees where the internal nodes can have a variable number of child nodes and all leaf nodes are required to be at the same level.

3.1 Dynamic block allocation

Subdividing the volume into small data blocks that fit into the CPU's level 1 or 2 cache is an efficient way of improving the cache hit rate when frequently accessing neighboring voxels. As shown in [14] and [11], this has a significant impact on performance because these memories are located close to the CPU with very low latency. Also, by dynamically allocating blocks in places of interest and leaving other regions empty, a huge amount of memory can be saved. This strategy is the core of most sparse volume data structures. Which regions are of interest depends on the application; for narrow band level sets the regions of interest are usually defined by the blocks that intersect the narrow band of data.

3.2 Efficiency

Because DB+Grid is a tree it also inherits some of the algorithmic benefits associated with trees, and many of the volume processing operations have been implemented to take advantage of these properties. For instance, when merging two trees, entire branches can be moved, as opposed to always moving single values or blocks.

The DB+Grid separates the encoding of data from the topology in order to provide fast sequential data access. Each internal node of the tree encodes the topology of its children in different masks. These masks are used to implement efficient sparse iterators that skip over large empty areas and only visit the actual data samples.

A simple illustration of the DB+Grid data structure is given in figure 3.1. The fan-out factors in the illustration are 2 × 2 × 2 × 4; these do not match the actual fan-outs of DB+Grid but serve to convey the basic features.

Figure 3.1: Simple 1D illustration of the DB+Grid topology.

The example illustrated in figure 3.2 refers back to the implicit circle introduced in chapter 2 to demonstrate how level-set functions can be discretized into a sparse representation. It is important to note the active/inactive value classification: an active value belongs to the narrow band and an inactive value does not.

Figure 3.2: The implicit circle from chapter 2 discretized into the sparse representation. Inactive values inside allocated blocks contain a predefined value; for level sets two predefined values are used, one for voxels inside the contour and another for voxels outside. Inactive values defined at node levels point to a predefined value, which reduces memory and enables the CSG operations to work on narrow band level sets.


3.3 Intel Threading Building Blocks

Intel Threading Building Blocks (TBB) is an open-source C++ library used to develop threaded applications. It was decided to use the TBB library to parallelize the methods presented in this thesis. The main reason was that, while TBB abstracts some of the low-level threading details, it still provides the control needed for writing optimal multi-core applications.

3.4 Parallelization

Because DB+Grid is a tree it is generally not possible to parallelize topology changes: new values cannot be added to the grid in parallel, and it is only safe to read and update already existing values. Fortunately, using TBB it is possible to implement several parallel design patterns [1] that can be used to break down and organize algorithms to permit parallel processing.

The parallel reduction scheme proved very useful for overcoming the parallelization restrictions of DB+Grid. It is implemented by giving each thread its own DB+Grid. The work is then performed in parallel over separate regions of the domain. When the work is completed the data is merged together through a series of associative reduction operations until only one final grid remains.

Sometimes it is not possible to guarantee that the different grids contain completely disjoint partial results, in which case the final grid cannot simply be constructed by merging together the different parts. To combine overlapping regions the reduction scheme needs to perform some kind of element-wise selection in these regions. This action is different for different data types; for level-sets/signed distance fields it usually amounts to a min/max operation.
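As a concrete illustration, the per-thread-grid reduction maps naturally onto tbb::parallel_reduce. The sketch below uses hypothetical Grid and Triangle stand-ins for the thesis' types; only the reduction pattern itself is taken from the text above:

    #include <tbb/blocked_range.h>
    #include <tbb/parallel_reduce.h>
    #include <cstddef>
    #include <vector>

    struct Triangle { /* vertex positions etc. */ };

    struct Grid {
        void rasterize(const Triangle&) { /* write into this grid */ }
        // Merge another grid into this one; where regions overlap, an
        // element-wise min() would be applied for signed distance fields.
        void merge(const Grid&) { /* ... */ }
    };

    Grid convert(const std::vector<Triangle>& tris)
    {
        return tbb::parallel_reduce(
            tbb::blocked_range<std::size_t>(0, tris.size()),
            Grid(),  // identity value: an empty grid per task
            [&](const tbb::blocked_range<std::size_t>& r, Grid grid) {
                // Each task writes into its own private grid, so no
                // locking is needed even though topology changes are
                // not thread-safe on a shared grid.
                for (std::size_t i = r.begin(); i != r.end(); ++i)
                    grid.rasterize(tris[i]);
                return grid;
            },
            [](Grid a, const Grid& b) {  // associative reduction
                a.merge(b);
                return a;
            });
    }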


4 Data compression

Compression is useful because it reduces the memory consumption of the surface extraction algorithm, making it practical to use for high-resolution volume data. This chapter gives a thorough description of the compression schemes used for normal and Hermite data quantization.

4.1 Quantized normals

The surface normal of a polygonal primitive embedded in three-dimensional Euclidean space is defined as a three-component vector of unit length. The general equation of a plane through the point x = (x₀, y₀, z₀) is:

a·x₀ + b·y₀ + c·z₀ + d = 0   (4.1)

where d ≡ −a·x₀ − b·y₀ − c·z₀ is the closest distance from the origin to the plane and n = (a, b, c) is the nonzero normal vector that points outward and is perpendicular to the plane. The unit length requirement is expressed as follows:

a² + b² + c² = 1   (4.2)

Representing the normal vector using 32-bit floats amounts to 12 bytes of data. The Hermite data surface extraction algorithm presented in chapter 6 stores three normals in each voxel of a high-resolution grid. With the 12-byte normal representation the memory footprint increases rapidly and restricts the resolution of the grid. In order to save memory the normals are compressed using the technique presented in [4].

This compression method efficiently represents a normal using 2 bytes, i.e. 16 bits of data, by storing only two quantized components. The quantized components are later used to construct a vector that is collinear with the original normal; the collinear vector is then normalized to obtain a very accurate reconstruction of the original normal. The algorithm used to pack the normals is outlined below:

1. The signs of the three components are first stored using 3 bits, after which the signs can safely be discarded. This maps the normals into the first octant and leaves 16 − 3 = 13 bits to represent them, see figure 4.1 for an overview.

2. The normals are then projected onto the two-dimensional plane that goes through the points X₀ = (1, 0, 0), Y₀ = (0, 1, 0) and Z₀ = (0, 0, 1). The projective transform is given by the following equations:

   a₁ = a/(a + b + c)   (4.3)
   b₁ = b/(a + b + c)   (4.4)

   where a₁ + b₁ ≤ 1.

3. a₁ and b₁ are quantized into the 0 to 126 range (2⁷ − 1 = 127 quantization levels) by:

   a_q = ⌊a₁ · 126⌋   (4.5)
   b_q = ⌊b₁ · 126⌋   (4.6)

   where 0 ≤ a_q ≤ 126, 0 ≤ b_q ≤ 126 and a_q + b_q ≤ 126.

4. The quantized components cannot be stored directly in the remaining 13 bits, since allowing both a_q and b_q to be in the 0 to 126 range requires 7 bits each, which amounts to 14 bits in total. The data is instead mapped to fit into 13 bits as shown in figure 4.1, where a_q is represented using a 7-bit slot and b_q using a 6-bit slot. Both numbers can still cover the full range by checking whether b_q requires more than 6 bits and, if so, storing the complements:

   a_q = 127 − a_q if b_q ≥ 64, a_q otherwise
   b_q = 127 − b_q if b_q ≥ 64, b_q otherwise   (4.7)

   Now both components are guaranteed to fit into their respective slots.

Figure 4.1: A 16-bit word is used to store the quantized normal: 3 bits for the signs, a 7-bit slot for a_q (≤ 126) and a 6-bit slot for b_q (≤ 63).

Given the quantized components a_q and b_q, a vector that is collinear with the original normal is reconstructed as follows:

1. Check if the components need to be mapped back:

   a_q = 127 − a_q if a_q + b_q ≥ 127, a_q otherwise
   b_q = 127 − b_q if a_q + b_q ≥ 127, b_q otherwise   (4.8)

2. The third quantized component is calculated by:

   c_q = 127 − a_q − b_q   (4.9)

3. A normalization weight is calculated to reconstruct a collinear unit vector:

   w = 1/√(a_q² + b_q² + c_q²)   (4.10)
   a = a_q · w   (4.11)
   b = b_q · w   (4.12)
   c = c_q · w   (4.13)

Calculating the normalization weight each time a quantized normal is unpacked requires significant computation time: the calculation performs three multiplications, two additions and a square root. In order to save time, a lookup table with all the weights is constructed at program start-up. Using 13 bits to represent both components yields 2¹³ = 8192 possible normalization weights. Caching these values as 32-bit floats requires 8192 × 4 bytes = 32 kilobytes, which fits well in the processor's data cache, making this approach very efficient.

Compressing a normal from 12 bytes to 2 bytes uses two multiplications, two additions and one division, while the reconstruction requires three multiplications, two additions and a lookup, all while using only 1/6th of the memory.
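A minimal C++ sketch of the pack/unpack logic described above, omitting the 3 sign bits for brevity (the input normal is assumed to already lie in the first octant) and computing the weight directly instead of reading the precomputed table; illustrative code, not the thesis' implementation:

    #include <cmath>
    #include <cstdint>

    std::uint16_t packFirstOctant(float a, float b, float c)
    {
        const float sum = a + b + c;                    // project (4.3, 4.4)
        std::uint16_t aq = std::uint16_t((a / sum) * 126.0f);
        std::uint16_t bq = std::uint16_t((b / sum) * 126.0f);
        if (bq >= 64) {      // bq would need 7 bits: store complements (4.7)
            aq = 127 - aq;
            bq = 127 - bq;   // now bq <= 63, fits in its 6-bit slot
        }
        return std::uint16_t((aq << 6) | bq);           // 7 + 6 = 13 bits
    }

    void unpackFirstOctant(std::uint16_t w16, float& a, float& b, float& c)
    {
        std::uint16_t aq = (w16 >> 6) & 0x7F;
        std::uint16_t bq = w16 & 0x3F;
        if (aq + bq >= 127) { // complements were stored: map back (4.8)
            aq = 127 - aq;
            bq = 127 - bq;
        }
        const std::uint16_t cq = 127 - aq - bq;         // (4.9)
        // In production this weight comes from the lookup table.
        const float w = 1.0f / std::sqrt(float(aq*aq + bq*bq + cq*cq));
        a = aq * w; b = bq * w; c = cq * w;             // (4.10)-(4.13)
    }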

4.2 Quantized Hermite data object

Storing Hermite data on the three up-wind edges of a voxel, as shown in figure 2.10, requires three normals and three edge distances. It is also necessary to store a sign flag for the voxel that encodes the inside/outside notion and a valid-data flag per edge. The necessary data for a single voxel is:

• 3 normals
• 3 edge distances
• 4 flags

Using quantized normals requires 6 bytes of data, and the remaining distances and flags can be quantized into 4 bytes, making the total cost for a Hermite data object 10 bytes. The edge distances and flags are quantized in the following manner:


• The edge distances are normalized and stored using 10 bits each, as x_dist, y_dist and z_dist respectively. This representation allows for 1024 quantization levels, with an error of less than 0.1% of the voxel edge length.

• 1 bit encodes whether the up-wind z-edge contains valid data, i.e. is not null.

• 1 bit encodes the inside/outside notion, since all up-wind edges of a voxel have the same sign.

The subdivision of the 32-bit word is illustrated in figure 4.2. The up-wind x and y edges can be validated by checking whether their respective quantized normals are non-zero: the value zero maps to n = (0, 0, 1), and in an up-wind configuration this normal can only be extracted at intersection points along the z-edge.

Figure 4.2: The quantized edge distances and flags fit into a 32-bit word: three 10-bit slots for x_dist, y_dist and z_dist, one valid-z_dist bit and one in/out flag bit.

Packing and unpacking the edge distances into the 32-bit word requires one multiplication and some bit shifting, while the flags are set using bit operations directly. The quantized Hermite data object is implemented as a custom data type where many of the operations are performed directly on the compressed data. For instance, min/max operations, comparison operations and boolean combinations of different Hermite data objects are performed directly on the quantized data without unpacking. In figure 4.3 the total memory cost for a quantized Hermite data object is shown; this data object is stored within the voxels of the Hermite data grid.
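A minimal sketch of the 32-bit packing, with an assumed bit order (the thesis does not specify which slots come first):

    #include <cstdint>

    constexpr std::uint32_t kInOutBit  = 1u << 31;  // inside/outside flag
    constexpr std::uint32_t kValidZBit = 1u << 30;  // up-wind z-edge has data

    // The distances are normalized edge offsets in [0, 1].
    std::uint32_t packDistances(float x, float y, float z,
                                bool inside, bool validZ)
    {
        const std::uint32_t xq = std::uint32_t(x * 1023.0f);  // 10 bits each
        const std::uint32_t yq = std::uint32_t(y * 1023.0f);
        const std::uint32_t zq = std::uint32_t(z * 1023.0f);
        std::uint32_t word = (xq << 20) | (yq << 10) | zq;
        if (inside) word |= kInOutBit;
        if (validZ) word |= kValidZBit;
        return word;
    }

    float unpackXDist(std::uint32_t w) { return ((w >> 20) & 0x3FF) / 1023.0f; }
    float unpackYDist(std::uint32_t w) { return ((w >> 10) & 0x3FF) / 1023.0f; }
    float unpackZDist(std::uint32_t w) { return (w & 0x3FF) / 1023.0f; }
    bool  isInside(std::uint32_t w)    { return (w & kInOutBit) != 0; }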

Figure 4.3: The quantized Hermite data object uses 10 bytes to encode three normals, three distances and two flags explicitly. Two additional flags can be derived by interpreting the quantized x and y normals.

5 Signed distance field conversion

A polygonal mesh converter is a tool that efficiently constructs a discrete signed distance field on a high-resolution grid from a large polygonal model. Such a converter has to be fast and robust to non-manifold polygonal surfaces with inconsistent normals and self-intersections.

The algorithm presented in this thesis is inspired by the approach in [5], where a signed distance field is generated within a narrow band around the polygonal surface. This method effectively generates a narrow-band distance field in two passes. The first pass consists of a custom rasterization technique that accurately locates all boundary voxels that intersect the polygonal model, calculates their closest distances and stores a list of intersecting triangles for each voxel. In a second pass the boundary voxels are revisited and the triangle lists are used to calculate closest distances for the non-boundary neighboring voxels. The method presented in [5] has some limitations:

• For computing the sign of the distance field the technique presented in [3] is used. However, this approach is not robust to non-manifold surfaces and cannot be used to resolve self-intersections.

• In order to expand the narrow band and compute signed distance values within a desired region, the initial information is propagated using a fast marching [23] or fast sweeping [26] method. This approach only works for signed distance fields.

In order to overcome these limitations and parallelize the signed distance field conversion, some changes and extensions were introduced to the original method; these are presented in the next section.

5.1 Overview of the algorithm

The method presented in this thesis introduces some changes and new features to the signed distance converter presented in [5], namely:

• A parallel rasterization method that computes the initial narrow band of signed distances in one pass.

• A sign propagation method that can handle non-manifold polygonal surfaces with inconsistent normals and self-intersections.

• A parallel narrow band dilation with separate controls for the interior and exterior bands.

• The ability to convert the original polygonal model into one of several volume types: signed distance field, closest point field, Hermite data field and fog volume.

• Attribute transfer from the polygonal model to a volume grid.

The conversion algorithm is divided into six main stages, of which stages 3 to 6 are optional depending on the desired output. A short overview of the main stages is given below:

1. The method is initialized by copying the polygonal data from the host application into a local representation. Here quads are also subdivided into triangles and the points are transformed to the grid's local space.

2. The polygons are rasterized into voxels, and closest points and closest distances are calculated. Hermite data and attributes can also be extracted during this stage.

3. The correct inside/outside sign is determined for voxels in the narrow band.

4. Voxels generated by self-intersections are removed.

5. The narrow band of voxels is expanded to a desired width.

6. The signed distance field is converted into a fog volume.

5.2 Polygon rasterization

In this stage triangles are rasterized onto the grid and closest distances are calculated. Given a polygonal mesh consisting of a set of M triangles, the first task is to construct a local distance field around each triangle. The set of triangles M is further divided into m unique sub-sets that are processed in parallel on the CPU. This stage is parallelized using a parallel reduction scheme where each thread is given its own grid.

Figure 5.1: Rasterizing a triangle by finding the intersecting boundary voxels. The missed voxels will not produce holes in the voxel surface, see text.

A triangle T = (p₀, p₁, p₂), as illustrated in figure 5.1, is rasterized by first subdividing it into a set of line segments that are further subdivided into a set of points. This is done by first determining a metric d_T that indicates how large the triangle is with respect to the voxel size. The metric is calculated by rounding up the largest component of the longest edge, d_T = ⌈maxᵢ ||vᵢ||∞⌉, where:

v₀ = p₁ − p₀
v₁ = p₂ − p₁
v₂ = p₀ − p₂   (5.1)


As the triangles are already in local grid space:

• if d_T ≤ 1 the triangle fits into a voxel and the boundary voxels can be found by rounding the triangle's vertex positions;

• if d_T > 1 the triangle does not fit into a voxel and the metric is used to subdivide the triangle into a set of points that are rounded to the closest integer values to find the corresponding boundary voxels.

This is a simple and fast approach that rapidly finds almost every boundary voxel that intersects the triangle.
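A small sketch of the metric computation, assuming a simple Vec3 type (the triangle points are already in the grid's local index space):

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3 sub(const Vec3& a, const Vec3& b)
    {
        return {a.x - b.x, a.y - b.y, a.z - b.z};
    }

    static float maxNorm(const Vec3& v)  // infinity norm ||v||_inf
    {
        return std::max({std::fabs(v.x), std::fabs(v.y), std::fabs(v.z)});
    }

    // d_T = ceil(max_i ||v_i||_inf) over the three triangle edges (5.1).
    int subdivisionMetric(const Vec3& p0, const Vec3& p1, const Vec3& p2)
    {
        const float m = std::max({maxNorm(sub(p1, p0)),
                                  maxNorm(sub(p2, p1)),
                                  maxNorm(sub(p0, p2))});
        return int(std::ceil(m));
    }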

As can be seen in figure 5.1 for a single triangle in 2D, some of the voxels are missed using this rasterization approach. For a mesh, the missed voxels are going to be situated around the corners and edges of the model and will not produce holes in the voxel surface. The inside/outside notion cannot be represented using only the initial set of boundary voxels; therefore the boundary voxels are used to add one more layer of neighboring voxels. Two algorithms are presented for computing boundary and non-boundary voxels. Algorithm 1 is called for all boundary voxels that are found during the triangle rasterization, by passing in the voxel and triangle references. This routine calculates and stores the following attributes on the grid: closest point, closest distance and the ID of the triangle that updated the voxel last.

Algorithm 2 is called for all non-boundary voxels. This routine calculates and stores the closest point, closest distance and the ID of the triangle that updated the voxel last.

In figure 5.2 a small portion of the resulting narrow band of distances is illustrated. All distances are stored with negative sign, meaning that all voxels are assumed to be inside the model for now. The correct sign will be determined in the next stage.

In this stage it is also possible to transfer attributes from the original polygonal model to the grid by performing barycentric interpolation at the closest point. Hermite data can also be derived by intersecting the up-wind edges of the voxels.

Note specifically that the number of triangle sub-sets m is not equal to the number of CPU cores; this allows for efficient load balancing. The TBB framework will create more triangle sets than cores and enable cores that finish prematurely to "steal" more work.


Algorithm 1 Compute boundary voxel
Input: voxel coordinate (i, j, k), triangle ID

  tag voxel as boundary voxel
  if the voxel does not contain a distance then
    store the new triangle ID
    calculate the closest point on the triangle
    store the closest point and the distance
  else
    if the new triangle ID is not the same as the old then
      calculate the closest point on the triangle
      if the new distance is smaller then
        update the closest point and the distance
        update the triangle ID
      end if
    end if
  end if
  for all face and edge adjacent neighbors do
    call algorithm 2
  end for

Algorithm 2 Compute non-boundary voxel
Input: voxel coordinate (i, j, k), triangle ID

  if the voxel is set then
    if the old distance < 1/2 or the voxel has the same triangle ID then
      abort this algorithm
    end if
  end if
  calculate the closest point on the triangle
  if the new distance is smaller then
    update the closest point and the distance
    update the triangle ID
  end if
Figure 5.2: The resulting narrow band of negative distances around the polygonal surface.

5.3 Determining the sign

In this stage the correct sign is determined for the narrow band of negative distances. First the outer shell of non-boundary voxels is updated to have positive distances instead. This is done using a flood fill method that propagates the sign along the surface, leaving the boundary voxels untouched: the flood fill is restricted from flowing past the boundary voxels.

Figure 5.3: Propagating the positive sign on the outer shell of local voxels using a flood fill approach.

Once the positive sign information has been propagated on the outer shell of local voxels, the sign of the boundary voxels can also be determined. This step is done in parallel by evaluating each boundary voxel as follows:

• Use the current voxel's center and the closest surface point to calculate a vector that points from the voxel to the closest surface point.

• Do the same for the edge and face adjacent voxels that are marked as outside.

• Check the angle difference between the current voxel's vector and the vectors of the outside-marked neighbors to determine whether the current voxel lies on the same side of the surface.

Figure 5.4: Determining the correct sign for all boundary voxels by evaluating the closest point information, see text.

5.4 Cleaning up voxels

If the original model contains internal self-intersections, triangles that lie inside the model will create a set of voxels that need to be removed. A simple two-step approach is illustrated in figure 5.5.

Figure 5.5: Cleaning up voxels that represent self-intersections: (1) some of the voxels are created from self-intersections; (2) removed boundary voxels; (3) removed non-boundary voxels.

The first step, presented in algorithm 3, removes all boundary voxels that are not adjacent to an outside-marked voxel.


Algorithm 3 Remove unwanted boundary voxels

  for all boundary voxels do
    set remove = true
    for all face and edge adjacent voxels do
      if the voxel is marked as outside then
        set remove = false
        exit loop
      end if
    end for
    if remove is true then
      delete current voxel
    end if
  end for

The second step, presented in algorithm 4, removes all non-boundary voxels that are not adjacent to a boundary voxel.

Algorithm 4 Remove unwanted non-boundary voxels

  for all non-boundary voxels do
    set remove = true
    for all face and edge adjacent neighbors do
      if the neighbor is a boundary voxel then
        set remove = false
        exit loop
      end if
    end for
    if remove is true then
      delete current voxel
    end if
  end for

This simple two-step method can be computed in parallel and cleans up the initial narrow band of voxels. Removing the voxels generated by self-intersections reduces the memory footprint and allows the narrow band to be expanded properly.


5.5 Narrow band expansion

Sometimes it is desirable to expand the initial narrow band of voxels to a given width. As shown in figure 5.6, the expansion algorithm adds new layers of voxels.

Figure 5.6: Expanding the narrow band of voxels.

Algorithm 5 expands the narrow band to a desired width by adding more layers of voxels. In the initial step all non-boundary voxels are visited and a new layer of voxels is added. The algorithm also keeps track of the newly added voxels in a mask grid. This process is repeated until the desired width is reached, each time using the previous mask grid for traversal.

Algorithm 5 Add new voxel

  for all face and edge adjacent voxels do
    find the set neighbor N_closest which has the closest surface point
  end for
  get the triangle that updated N_closest last
  calculate the closest point on the triangle
  calculate the distance d, and get the sign from N_closest
  if d is within the interior or exterior band limit then
    store the triangle ID
    store the closest point and the distance
  end if


5.6 Fog-volume conversion

As a final optional stage the signed distance field can be converted into a fog volume. A fog volume is a variant of constant-density volume object where the border has a smooth transition from zero to the desired density value.

Figure 5.7: Fog volume representation; the interior is constant and the narrow band is used to create a smooth gradient.

Fog volumes are generated from level sets by using the narrow band to define the smooth transition. The exterior portion of the narrow band can be discarded if the fog volume needs to be contained within the original surface.
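A minimal sketch of such a mapping, assuming negative-inside signed distances and a simple linear ramp across the band (the thesis does not specify the exact falloff):

    #include <algorithm>

    // Remap a signed distance sample to a fog density in [0, 1]:
    // deep interior -> 1, the surface and beyond -> 0, with a smooth
    // transition across the narrow band of width bandWidth.
    float fogDensity(float phi, float bandWidth)
    {
        return std::clamp(-phi / bandWidth, 0.0f, 1.0f);
    }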


6 Adaptive surface extraction

The most popular and well known surface extraction method is the Marching Cubes (MC) algorithm originally developed by Lorensen and Cline [15] in 1987. The method has been further developed over the years to resolve topological ambiguities and prevent holes in the surface [22], [17].

The basic MC algorithm is as follows: for each cell that exhibits a sign change, vertices are created on those edges that intersect the iso-surface and a triangle patch is then generated inside the cell. One of the biggest issues with the MC algorithm is that it generates a large number of triangles, and the memory consumption increases rapidly with the resolution of the volumetric grid.

In order to overcome the memory limitations of the MC algorithm, several adaptive surface extraction algorithms have been developed. The method implemented here is a slightly modified version of the dual contouring method [12].

In order to efficiently and successfully extract surfaces from extremely high resolution level-sets, the following requirements have to be fulfilled:

1. The surface extraction algorithm has to construct a geometrically accurate polygonal mesh that is as close to the original discretized surface as possible and preserves small features.

2. The extracted surface has to meet the requirements of a closed, oriented two-manifold surface embedded in 3D space.

3. To preserve a low memory footprint, the sampling rate of the extracted polygonal model must adapt to the local geometric properties, i.e. it must be representation efficient.

6.1 Overview of the algorithm

In order to lower the memory cost and parallelize the surface extraction algorithm, some changes were introduced to the method proposed in [12]:

• A sparse and compressed data representation, using quantized Hermite data and the sparse VDB grid, as opposed to a uniform grid with uncompressed data.

• The adaptivity is created by merging cells together instead of constructing an octree. This approach enables the adaptivity to be generated in parallel.

The main stages are given below; the important details are then explained more thoroughly.

• First a sparse Hermite data grid is constructed. This is done by tagging all grid edges that exhibit a sign change with exact intersection distances and their normals.

• Second, the Hermite data is used to evaluate a quadratic error function (QEF). The QEF can be used to place a vertex inside each cell containing a Hermite data edge, or to merge cells into larger regions to enable adaptivity.

• Third, the polygonal surface is constructed by visiting all edges that have been tagged with Hermite data and connecting together vertices from the adjacent cells.

6.2 The quadratic error function

The quadratic error function is constructed by defining a geometric error as the sum of squared distances to a set of tangent planes. These planes are defined by the Hermite data, i.e. intersection points with normals, and the QEF can be formulated as follows:

E(x) = Σᵢ (nᵢ · (x − pᵢ))²   (6.1)

where E(x) is the quadratic error for a vertex position x and the pair (nᵢ, pᵢ) corresponds to the normal and intersection point of edge i. This function can be rewritten in matrix form as an inner product:

E(x) = (Ax − b)ᵀ(Ax − b)   (6.2)

where A is a matrix whose rows are the normals nᵢ and b is a vector whose entries are nᵢ · pᵢ. Equation 6.2 can then be expanded as follows:

E(x) = xᵀAᵀAx − 2xᵀAᵀb + bᵀb   (6.3)

Finding an optimal vertex position x that minimizes the quadratic error E(x) is equivalent to solving the normal equation:

AᵀAx = Aᵀb   (6.4)


6.2.1 Singular value decomposition

Solving the linear system defined by equation 6.4 directly may produce unwanted results. The reason is that the system can be underdetermined in cases where the sample normals are nearly coplanar, resulting in a nearly singular matrix A. If this is ignored, the obtained optimal position might lie outside the current region, resulting in overlapping polygons. This issue is solved as proposed in [13], where the singular value decomposition (SVD) of A is calculated and then used to compute the pseudo-inverse by explicitly setting the minimal singular values to zero.
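A minimal sketch of this truncated-SVD solve, here expressed with the Eigen library (an assumption; the thesis does not name a linear algebra package). Rows of A are the Hermite normals nᵢ and the entries of b are nᵢ · pᵢ:

    #include <Eigen/Dense>

    Eigen::VectorXd solveQEF(const Eigen::MatrixXd& A, const Eigen::VectorXd& b)
    {
        Eigen::JacobiSVD<Eigen::MatrixXd> svd(
            A, Eigen::ComputeThinU | Eigen::ComputeThinV);
        // Treat small singular values as zero (pseudo-inverse) so nearly
        // coplanar normals do not push the vertex outside the region.
        svd.setThreshold(0.1);
        return svd.solve(b);
    }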

6.3 Topology tests

Relying only on the quadratic error function during simplification may change the surface topology and even produce non-manifold surfaces. In order to ensure that the topology is preserved during simplification, two topology tests need to be performed:

1. Test whether the contour of the coarse region is a manifold.

2. Test whether the fine contour is topologically equivalent to the coarse contour on each of the sub-faces of the coarse region.

The first test is performed as proposed in [10], where the corners of the cubical region are collapsed into a single point if the endpoints have the same sign. The contour associated with a region is then considered to be manifold if only a single edge remains. This is implemented using a precomputed lookup table.

The second test is performed as proposed in [12] by performing a sequence of sign comparisons:

• The sign in the middle of a coarse edge must agree with the sign of at least one of the edge’s two endpoints.

• The sign in the middle of a coarse face must agree with the sign of at least one of the face’s four corners.

• The sign in the middle of a coarse cube must agree with the sign of at least one of the cube's eight corners.

Performing these tests ensures that the adaptive surface maintains the topology of the original surface.

6.4 Adaptivity

The adaptivity method presented in this thesis uses a mask grid to encode areas with small geometrical differences. The mask is implemented using a sparse VDB grid of 32-bit unsigned integers. The unsigned integers represent different things at different stages of the algorithm. Initially, the first 31 bits encode the adaptivity level and the last bit is used as a flag to indicate whether the cell is mergeable. Before outlining the algorithm, it is necessary to explain the adaptivity level concept.

The adaptivity level encodes the subdivision of a VDB data block into different fixed regions. The typical block size is 8 × 8 × 8 cells, which allows for four different adaptivity levels, as illustrated in figure 6.1. For instance, level zero (L0) subdivides the block into 512 regions corresponding to the actual cells, L1 subdivides the block into 64 regions, etc. Since the regions are fixed within a block, it is only necessary to store the adaptivity level for a cell to encode which region it belongs to.

Figure 6.1: Block subdivision: L0 yields 512 regions (single cells), L1 yields 64 regions of 2×2×2 cells each, L2 yields 8 regions of 4×4×4 cells each, and L3 is a single region of 8×8×8 cells.

In order to efficiently thread the adaptivity algorithm, the merging process is performed individually and in parallel for each VDB block, without modifying topology. The algorithm is outlined in four steps:

1. Create the mask grid: visit all edges in the Hermite data grid that have been tagged with data and set an initial value in the four cells of the mask grid that share each edge. The initial value has only the last bit turned on, indicating that the cell is mergeable and of level zero type.

2. For each block in the mask grid, start by going through the L1 regions. If the corresponding cells are mergeable then:

   • Evaluate the topology test from chapter 6.3 using the eight corner values of the region.

   • Evaluate the QEF from chapter 6.2 using the Hermite data of the corresponding cells.

3. If both the topology and QEF tests pass, the cells in the mask grid are updated with the current adaptivity level. If not, the mergeable flag is turned off.

4. When all regions at the current adaptivity level have been computed, the process is repeated for the next adaptivity level.
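A small sketch of the mask value encoding used above, with the bit assignment (mergeable flag in the most significant bit) as an assumption:

    #include <cstdint>

    constexpr std::uint32_t kMergeableBit = 1u << 31;

    // Step 1 stores this initial value: level zero with the flag on.
    constexpr std::uint32_t kInitialMaskValue = kMergeableBit;

    inline std::uint32_t adaptivityLevel(std::uint32_t v)
    {
        return v & ~kMergeableBit;   // low 31 bits
    }

    inline bool isMergeable(std::uint32_t v)
    {
        return (v & kMergeableBit) != 0;
    }

    inline std::uint32_t makeMaskValue(std::uint32_t level, bool mergeable)
    {
        return (level & ~kMergeableBit) | (mergeable ? kMergeableBit : 0u);
    }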

This algorithm also keeps track of the number of unique regions within each block. This information is then used to allocate the shared points array and to define a start index into it for each block.

Next, unique point positions are created for each region using the Hermite data, and the sub-cells of each region are updated with point list indices. The optimal positions are calculated by solving equation 6.4. This is illustrated in figure 6.2.

Figure 6.2: Regions of different sizes where all sub-cells share the same unique point.

When the adaptivity algorithm has finished, all sub-cells belonging to a region will have the same index into the unique point list.


6.5 Polygon generation

This stage generates the adaptive surface using the point indices stored in the mask grid. For each edge tagged with Hermite data, a quad is generated by connecting together the vertices associated with the four distinct cells containing the edge. In the case where the edge is shared by only three distinct cells, a triangle is generated instead. This process is illustrated in figure 6.3. The orientation of the primitive is determined by the normal at the current position; this information can be obtained by simply looking at the sign of the current sample, since the data is up-wind.

Figure 6.3: Connecting together the vertices of the four adjacent cells that share an edge (for a z+ edge at (i,j,k), the cells (i,j,k), (i,j−1,k), (i−1,j,k) and (i−1,j−1,k)). The orientation is defined by the sign at the current position.
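To make the connection concrete, a sketch of the quad emission for a tagged z-edge at (i, j, k) follows; pointIndex, insideSample, emitQuad and emitTriangle are hypothetical helpers, and the winding convention is only illustrative:

```cpp
#include <cstdint>

// Vertices of the four cells sharing the z+ edge at (i,j,k).
const uint32_t v0 = pointIndex(i,     j,     k);
const uint32_t v1 = pointIndex(i - 1, j,     k);
const uint32_t v2 = pointIndex(i - 1, j - 1, k);
const uint32_t v3 = pointIndex(i,     j - 1, k);

// The sign of the up-wind sample decides the primitive orientation.
if (insideSample(i, j, k))
    emitQuad(v0, v1, v2, v3);
else
    emitQuad(v3, v2, v1, v0);

// In transitional areas two of the four cells may belong to the same
// merged region; the duplicate index is dropped and a triangle is
// emitted via emitTriangle instead.
```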

In figure 6.4 the mesh connectivity for regions of different sizes is illustrated.

Figure 6.4: The adaptive surface constructed by connecting together the unique vertices of the different regions.

This procedure will generate quads in homogeneous areas and triangles in transitional areas, i.e. when going from one level of adaptivity to another.


7 Fracturing

The task is to implement an artistically directed tool for fracturing polygonal models by means of volumetric CSG operations. This approach is similar to the methods presented in [21] and [9]. The method has to produce a set of disjoint polygonal fragments with arbitrary topology and seamless boundaries.

7.1 Overview of the algorithm

The algorithm starts by converting the original geometry into a sparse Hermite data grid. Then, disjoint fragments are created and meshed using the adaptive surface extraction method presented in the previous chapter to create seamless boundaries between the pieces.

The fragments are created by performing boolean operations on Hermite data grids. These operations are almost identical to the level set operations:

Union(A, B) = A ∪ B = min(A, B)
Intersection(A, B) = A ∩ B = max(A, B)
Difference(A, B) = A \ B = max(A, −B)

The only difference is that the min/max computations have to be applied component-wise to the directed distances. For example, the max computation between two Hermite data voxels H1 and H2 is calculated as:

max(H1, H2) = ( max(H1.xdist, H2.xdist),
                max(H1.ydist, H2.ydist),
                max(H1.zdist, H2.zdist) )    (7.1)

Once the Hermite data grid has been obtained from the original surface, a set of cutter objects is placed on it. Next, the cutter objects are processed in turn as follows:

• Convert the current cutter object into a Hermite grid.

• Perform a boolean intersection with the original Hermite grid and mesh the result using the adaptive surface extraction method.

• Perform a boolean difference with the original Hermite grid and replace the original grid with the result.


This process is repeated for all cutter objects. The cutter object placement can be completely automated by scattering a set of points within the original object and placing randomly transformed cutter objects at these positions.
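A minimal sketch of this loop, assuming hypothetical wrappers around the thesis tools (convertToHermite, csgIntersection, csgDifference, extractAdaptiveSurface are stand-ins, not the actual API):

```cpp
#include <vector>

// Each cutter carves out one fragment; the remainder becomes the
// input for the next cutter.
HermiteGrid original = convertToHermite(inputMesh);
std::vector<Mesh> fragments;
for (const Mesh& cutter : cutterObjects) {
    HermiteGrid c = convertToHermite(cutter);
    fragments.push_back(extractAdaptiveSurface(csgIntersection(original, c)));
    original = csgDifference(original, c); // replace with the remainder
}
// The remaining grid holds the last fragment and can be meshed as well.
fragments.push_back(extractAdaptiveSurface(original));
```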


8 Implementation overview

First, the different algorithms were implemented as stand-alone C++ libraries. Then the libraries were integrated into a commercial 3D animation package.

8.1 Code libraries

Three different C++ libraries were developed in this thesis:

• Voxelizer: Library that contains all of the classes associated with the signed distance field converter.

• Hermite: Quantized Hermite data type, threaded CSG methods and threaded level set grid to Hermite data grid converter.

• Mesher: Library that contains all of the classes associated with the adaptive surface extraction method.

8.2 Houdini integration

Houdini¹ from Side Effects Software is a procedural 3D animation package that is widely used by various visual effects companies. Houdini offers an open environment that can be extended with proprietary tools through the Houdini Development Kit (HDK) API². The different project libraries were embedded into custom Houdini tools using the Surface Operator (SOP) construct in Houdini 11. Each tool exposes control of its parameters through an integrated UI panel in Houdini.

¹ More information about Houdini can be found at: http://www.sidefx.com
² API documentation available at: http://www.sidefx.com/docs/hdk11.0/
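For context, a minimal SOP skeleton in the style of the HDK samples is sketched below; the class, operator name and cook body are hypothetical and only indicate where a thesis library would be invoked:

```cpp
#include <UT/UT_DSOVersion.h>
#include <OP/OP_OperatorTable.h>
#include <PRM/PRM_Include.h>
#include <SOP/SOP_Node.h>

// Hypothetical SOP wrapping one of the thesis libraries.
class SOP_VdbMesher : public SOP_Node {
public:
    static OP_Node* myConstructor(OP_Network* net, const char* name, OP_Operator* op)
    {
        return new SOP_VdbMesher(net, name, op);
    }
    static PRM_Template templateList[];
protected:
    SOP_VdbMesher(OP_Network* net, const char* name, OP_Operator* op)
        : SOP_Node(net, name, op) {}

    virtual OP_ERROR cookMySop(OP_Context& context)
    {
        if (lockInputs(context) >= UT_ERROR_ABORT) return error();
        duplicateSource(0, context);
        // ... run e.g. the Mesher library on the input geometry here ...
        unlockInputs();
        return error();
    }
};

PRM_Template SOP_VdbMesher::templateList[] = { PRM_Template() };

// DSO entry point: register the operator with Houdini.
void newSopOperator(OP_OperatorTable* table)
{
    table->addOperator(new OP_Operator(
        "vdb_mesher", "VDB Mesher",
        SOP_VdbMesher::myConstructor,
        SOP_VdbMesher::templateList,
        /*min inputs=*/1, /*max inputs=*/1));
}
```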


The figures below show the interfaces of the Houdini tools:

Figure 8.1: The interface of the polygon conversion tool.


9 Results

In this chapter, some performance and visual results for the different algorithms are presented. The tests were conducted on an HP Z800 workstation equipped with a quad-core Intel Xeon processor (4 cores, no Hyper-Threading) and 12 GB of main memory.

9.1 Signed distance field conversion

Figure 9.1: Correct sign and voxel cleanup for a self-intersecting model. Green = outside, red = inside.


Figure 9.2: Interior fill by expanding the interior narrow band.

Figure 9.3: Simple cloud. First a polygonal model is converted into a level set; then the level set is converted into a fog volume with smooth boundaries. Polygonal model courtesy of Jeff Budsberg, DreamWorks Animation.


Voxel size   Level-set resolution (voxels)   Conversion time (sec)
4.0          120 × 200 × 104                  4.09
3.0          159 × 266 × 138                  4.37
2.0          237 × 398 × 205                  5.71
1.0          473 × 794 × 409                  7.60
0.7          674 × 1133 × 582                12.77
0.5          943 × 1586 × 814                19.93
0.36         1308 × 2202 × 1131              41.27

Table 9.1: SDF conversion performance, Thai Statue model from The Stanford 3D Scanning Repository (8,990,336 triangles).

Figure 9.4: Signed distance field conversion of the Thai Statue model from The Stanford 3D Scanning Repository. Resolution = 120 × 200 × 104 voxels, conversion time = 4.09s.


Figure 9.5: Signed distance field conversion of the Thai Statue model from The Stanford 3D Scanning Repository. Resolution = 943 × 1586 × 814 voxels, conversion time = 19.93s.


9.2 Adaptive surface extraction

Figure 9.6: Low-resolution performance tests on level set representations of the Happy Buddha model from The Stanford 3D Scanning Repository. Here, the old method is a modified version of the surface extraction algorithm provided by Houdini (modified to take advantage of the new VDB data structure). The point count for the new method is measured from the adaptive version. The point count for the old method is measured from the non-threaded version, because the threaded version does not produce unique points and the cost of consolidating the points is higher than the surface extraction itself.


Figure 9.7: Surface extraction using Hermite data. The first case uses the average vertex position and the second case uses the optimal vertex position. The surface produced using the average position corresponds to extracting the surface from a level set.

Method       QEF threshold   Points      Polygons    Time (sec)
Houdini      –               7,857,493   7,873,279   32.38
New method   0               7,918,578   7,918,809   14.61
New method   50              1,889,460   2,185,352   13.93
New method   150             1,138,554   1,320,249   11.40
New method   250             900,261     1,043,377   10.82
New method   500             672,221     778,715     10.20
New method   2000            370,424     427,738      9.24

Table 9.2: Adaptive surface extraction from an 815 × 1982 × 816 level set representation of the Happy Buddha model from The Stanford 3D Scanning Repository. The time to convert the level set into a Hermite field is included. (The new method spends 60–70% of the time on copying the data from an internal representation to Houdini.)


Figure 9.8: Adaptive surface extraction from a 815 × 1982 × 816 level set representation of the Happy Buddha model. QEF threshold = 50, computed in 13.93s (Including the level set to Hermite data conversion time.)


Figure 9.9: Adaptive surface extraction from a 815 × 1982 × 816 level set representation of the Happy Buddha model. QEF threshold = 500, computed in 10.20s (Including the level set to Hermite data conversion time.)


Figure 9.10: Adaptive surface extraction from a 815 × 1982 × 816 level set representation of the Happy Buddha model. QEF threshold = 2000, computed in 9.24s (Including the level set to Hermite data conversion time.)

Mask type                    Points      Time
None                         2,366,433   10.86s
Max simplification outside   1,227,972    7.05s
No mesh outside              1,052,087    5.64s

Table 9.3: Adaptive surface extraction from a 716 × 928 × 2671 level set using a frustum mask with different options for the outer regions. The frustum mask is represented as an 84 × 72 × 63 level set, converted in 159ms. The QEF threshold is set to 250. Extracting a surface with Houdini's method produces 6,762,593 points and takes 28.72s to complete.


Figure 9.11: Noise smoothing. As can be seen in the second image, the adaptive surface extraction reduces the surface noise from the particle-based fluid simulation. Fluid simulation courtesy of Jeff Budsberg, DreamWorks Animation.


Mask type                    Points      Time
Max simplification outside   6,845,410   37.99s
No mesh outside              6,196,708   29.60s

Table 9.4: Adaptive surface extraction from a 1426 × 1850 × 5337 level set using a frustum mask with different options for the outer regions. The frustum mask is represented as an 84 × 72 × 63 level set, converted in 159ms. The QEF threshold is set to 250.

Figure 9.12: Adaptive surface extraction using a frustum mask and the maximum simplification outside option (716 × 928 × 2671 level set). Fluid simulation courtesy of Jeff Budsberg, DreamWorks Animation.


Figure 9.13: Adaptive surface extraction using a frustum mask and the maximum simplification outside option, showing the camera view. The adaptive mesh that respects the error threshold is only generated within the frustum. Fluid simulation courtesy of Jeff Budsberg, DreamWorks Animation.


Figure 9.14: Adaptive surface extraction using a frustum mask and the maximum simplification outside option, showing the entire mesh. The adaptive mesh that respects the error threshold is only generated within the frustum. Fluid simulation courtesy of Jeff Budsberg, DreamWorks Animation.


9.3 Fracturing

Figure 9.15: Fractured rock: Hermite data field resolution = 404 × 200 × 357 voxels. Hermite data conversion takes between 874ms and 3.12s for the different pieces. The fracturing computation takes between 80ms and 190ms for the different pieces. The adaptive mesh extraction with optimal vertex position takes between 450ms and 779ms. Image courtesy of DreamWorks Animation.


Figure 9.16: Fractured statue: Hermite data field resolution = 506 × 563 × 465 voxels. Hermite data conversion takes between 1.03s and 5.84s for the different pieces. The fracturing computation takes between 215ms and 714ms for the different pieces. The adaptive mesh extraction with
