
Institutionen för systemteknik
Department of Electrical Engineering

Master's Thesis

AUTOMATIC MESH DECOMPOSITION FOR
REAL-TIME COLLISION DETECTION

Master's thesis in Media Technology
at the Institute of Technology, Linköpings universitet

by

Henrik Bäcklund and Niklas Neijman

LiTH-ISY-EX--14/4755--SE

Linköping 2014

Department of Electrical Engineering
Linköpings tekniska högskola
Linköpings universitet


AUTOMATIC MESH DECOMPOSITION FOR
REAL-TIME COLLISION DETECTION

Master's thesis in Media Technology
at the Institute of Technology, Linköpings universitet

by

Henrik Bäcklund and Niklas Neijman

LiTH-ISY-EX--14/4755--SE

Supervisors:
Jens Ogniewski (ISY, Linköpings universitet)
Ulrik Lindahl (Donya Labs)

Examiner:
Ingemar Ragnemalm (ISY, Linköpings universitet)

Linköping, May 5, 2014


ABSTRACT

Intersection tests between meshes in physics engines are time-consuming and computationally heavy tasks. In order to speed up these intersection tests, each mesh can be decomposed into several smaller convex hulls, where the intersection test between each pair of these smaller hulls becomes more computationally efficient.

The decomposition of meshes within the game industry is today performed by digital artists and is considered a boring and time-consuming task. Hence, the focus of this master's thesis lies in automatically decomposing a mesh into several smaller convex hulls and approximating these decomposed pieces with bounding volumes of different complexity. Together, these bounding volumes represent a collision mesh that is fully usable in modern games.


CONTENTS

1 introduction
  1.1 Problem formulation
  1.2 Report structure
2 background
  2.1 3D mesh decomposition
  2.2 Related work
  2.3 Bounding volumes
    2.3.1 Bounding sphere
    2.3.2 Bounding capsule
    2.3.3 Object oriented bounding box
    2.3.4 Convex hull
  2.4 Variational mesh decomposition
3 method
  3.1 Hierarchical approximate convex decomposition
    3.1.1 Dual graph
    3.1.2 Edge collapse
    3.1.3 Cost function
  3.2 Primitive estimation
    3.2.1 Sphere
    3.2.2 Capsules
    3.2.3 Object Oriented Bounding Box
    3.2.4 Combined primitives
  3.3 Segmentation of skinned meshes
  3.4 Segmentation using feature detection
    3.4.1 Clustering With K-Means
    3.4.2 Laplacian matrix
    3.4.3 Eigenvectors of Laplacian matrix
4 result and discussion
  4.1 HACD
  4.2 Primitive estimation
    4.2.1 Sphere estimation
    4.2.2 Capsule estimation
    4.2.3 OBB estimation
    4.2.4 Combining the primitives
  4.3 Bone-data utilization
    4.3.1 Bone-data vs. no bone-data
  4.4 Segmentation using feature detection
5 conclusion
  5.1 Hierarchical approximate convex decomposition
  5.2 Primitive estimation
  5.3 Variational mesh decomposition
6 future work
  6.1 Interpenetration of convex hulls
  6.2 Improve standard primitive estimation
  6.3 Semi-automatic convex decomposition

LIST OF FIGURES

Figure 1   Decomposed Stanford bunny
Figure 2   Decomposition comparisons of different methods
Figure 3   Performance of different bounding volumes
Figure 4   Illustration of two cases of bounding spheres
Figure 5   Bounding capsule's storage requirement
Figure 6   Four scenarios of line intersections
Figure 7   Four different objects encapsulated by capsules
Figure 8   OBB structure
Figure 9   Two examples of OBB encapsulation
Figure 10  Convex hull encapsulation
Figure 11  Convex hull around convex object
Figure 12  The dual graph
Figure 13  Edge collapse on dual graph
Figure 14  Segment combinations
Figure 15  Concavity cost
Figure 16  Aspect ratio
Figure 17  Finding the sphere center
Figure 18  Capsule length problem
Figure 19  Capsule length solution
Figure 20  Capsule orientation
Figure 21  Desired capsule orientation
Figure 22  Steps in capsule orientation fix
Figure 23  Eigenvectors of covariance matrix
Figure 24  Bounding rectangle of uneven distribution
Figure 25  First step of the MABR iteration
Figure 26  Optimal solution of a MABR iteration
Figure 27  Combined bounding volumes
Figure 28  Reduced number of bones for performance
Figure 29  Initialization step of the K-means algorithm
Figure 30  Update step of the K-means algorithm
Figure 31  Numbered dual graph for the Laplacian matrix
Figure 32  Skinned model
Figure 33  HACD sequence
Figure 34  Problem with the HACD method
Figure 35  Sphere primitive encapsulation of a single convex hull
Figure 36  Sphere primitive encapsulation of a complex object
Figure 37  Capsule primitive encapsulation of a single convex hull
Figure 38  Adjusted capsule orientation
Figure 39  Capsule primitive encapsulation of a complex object
Figure 40  OBB primitive encapsulation of a single convex hull
Figure 41  OBB primitive encapsulation of a single complex object
Figure 42  Combined primitives encapsulating a complex object
Figure 43  Improved HACD with 21 hulls
Figure 44  Manually improved HACD with 21 hulls
Figure 45  Error at the lower back
Figure 46  HACD vs. bone-data utilization
Figure 47  Bone data vs. no bone data
Figure 48  Result of using pure K-means
Figure 49  Feature detection of teddy
Figure 50  Feature detection of axe
Figure 51  Comparison between an axe and a teddy
Figure 52  Intersection between convex hulls

LIST OF TABLES

Table 1  Projection of vertices onto a plane
Table 2  A stepwise explanation of the MABR method

LISTINGS

Listing 1  Sphere variables
Listing 2  Capsule variables
Listing 3  Object oriented bounding box variables

ACRONYMS

ACD   Approximate Convex Decomposition
HACD  Hierarchical Approximate Convex Decomposition
OBB   Object Oriented Bounding Box
PCA   Principal Component Analysis
MABR  Minimum-Area Bounding Rectangle

1 INTRODUCTION

The application of automatic optimization algorithms has always been attractive within the field of computer graphics. That is also what this master's thesis is about: automatically generating physics meshes for use in, e.g., collisions in the gaming industry. By decomposing a mesh into convex parts and then converting those parts into standard primitives or bounding volumes, the computations can be made more efficient. This is the approach used in this thesis, i.e., decomposing a mesh into separate segments in order to make collisions more efficient and visually convincing.

The way physics meshes are created today is mostly manual work, done by the 3D artist. In order to create the physics meshes, the artist has to divide the mesh into convex parts by hand and place the bounding volumes manually. This workflow is inefficient, and just creating the physics meshes can cost a company a lot of money. From the artist's point of view, the task is boring and not creative at all.

The good aspect of the manual work is that the standard meshes can be perfectly encased by the physics meshes, since the artist has full control over the result. However, the workload and time required have a huge impact on a company's production pipeline, especially in the gaming industry where deadlines are very tight. Hence, an automatic method to generate the meshes would reduce the man-hours spent, hours that could be used for more important work.

The above problem of decomposing a mesh has been developed into a research project, i.e., this master's thesis at Donya Labs AB. Other methods for decomposing a mesh into convex parts have been developed before, but those methods have produced poor results with too many convex parts or too high a polygon count (discussed in Section 2.2). There is a method from 2010 proposed by Mamou and Ghorbel [8] which handles the decomposition well, and this article is also the starting point of the thesis.

1.1 problem formulation

Below follow the problem formulations that define the guidelines for the investigations and development of the thesis work.

• How much better is HACD compared to older convex decomposition methods?

• How adaptable to changes is the HACD method?

• Is it possible to utilize the bone-data of skinned meshes in order to improve HACD?


• Standard primitives are a cheaper way of computing collisions. Can we use the convex hulls to obtain a primitive for each one of them?

• Can HACD be improved by using feature detection?

• Is it possible to use semi-automatic methods to generate better convex hulls?

1.2 report structure

The report starts off by introducing the background of the thesis work in Chapter 2, which also includes related work. In Chapter 3 all the implemented methods are described, with each method covered in a separate section. Chapter 4 then presents the results achieved with these methods. It is followed by Chapter 5, where we draw the final conclusions of the thesis work. The final Chapter 6 describes future work, i.e., further improvements that could be made to the overall algorithm.


2 BACKGROUND

This chapter contains basic knowledge regarding the main topics of this report; its purpose is to prepare the reader so that the rest of the report is easier to understand. The chapter also includes related work, which explains why we investigate the topic of mesh decomposition.

2.1 3D mesh decomposition

Throughout the report mesh decomposition is discussed, but what is it and why is it needed?

Mesh decomposition is a way to divide a 3D polygon mesh into smaller parts, as in figure 1.

Figure 1: The Stanford bunny has been decomposed into eight different parts, which can be used for various applications, e.g. converting each part to a convex hull and using the hulls for collisions. Image source: [13]

This may sound like an easy task, and it is if you do it manually. Doing it automatically, on the other hand, is more complicated, because a computer has no information about the features of the model being processed.

There are various fields of application for mesh decomposition; the major one for this thesis work is to convert the different parts of the decomposed mesh into convex hulls. The reason for converting each part into a convex hull is to obtain a collision mesh for the object.


2.2 related work

The first and most important work that needs to be mentioned is that by Backenhof [1], a former thesis worker at Donya Labs who also researched the problem of automatically decomposing a 3D mesh into several convex parts. His work is based on the method "Approximate Convex Decomposition" introduced by Lien et al. [5], an iterative divide-and-conquer method governed by three major steps:

1. Build a convex hull of each current mesh. For the first iteration, when no divisions have been computed, a convex hull is created for the whole input mesh.

2. Process each convex hull by comparing the hull to the corresponding mesh it encapsulates in order to compute the concavity. The concavity is used in a ranking system which decides which mesh to divide.

3. Decompose the mesh into two meshes according to the rankings. If the iteration has not converged, go back to (1) and start a new iteration. The iteration converges when a certain concavity or a maximum number of hulls is reached.

The mesh is divided by the use of cut-planes, where a detailed analysis of the mesh is computed in order to place the cut-plane in the desired position. This method sounds promising, but the analysis of the cut-planes is also where the method's drawbacks lie. There exist several ways to decide the placement of the cut-plane; Backenhof uses the notch between triangles, where the notch with the highest concavity is chosen. Using the notch as an error unit does however result in a collision mesh with too many convex hulls when the aim is to keep the features of the mesh. On the other hand, if the number of hulls is restricted to a more manageable number, the hulls do not keep the features at all, as seen in the right image in figure 2.

Therefore, one of his conclusions was to recommend another approach that Backenhof encountered during his work, i.e. "Hierarchical Approximate Convex Decomposition" developed by Mamou and Ghorbel [8], shown in the center image in figure 2.

Another method worth mentioning is "Model Composition from Interchangeable Components" introduced by Kreavoy et al. [4], where the decompositions are computed by an incremental constrained Lloyd-type method to achieve the optimal number of segments for the mesh. The main idea of the method is to decompose the mesh into perceptually meaningful parts by defining the concavity as the area-weighted average of the distances from the segments to their corresponding convex hulls.

According to Lien et al., this type of concavity definition does not capture important features of the model. That is why they introduced "Approximate Convex Decomposition", which was discussed earlier in this section.


Figure 2: Left: Standard model. Center: Result from Mamou and Ghorbel [8], with 12 hulls. Right: Result from Backenhof [1], with 12 hulls.

2.3 bounding volumes

A time-consuming task in physics engines is the intersection test between meshes. A regular mesh often consists of several thousand triangles, and a naive implementation could check the intersection between each possible pair of faces, which would be inefficient and time consuming. In order to reduce the time spent on intersection tests, bounding volumes can be used. A bounding volume is a geometric object whose purpose is to encapsulate a more complex object. The bounding volume can then be used as a more efficient object for collision detection. In order for a bounding volume to be effective, the following properties are desired:

• Inexpensive intersection tests
• Tight fitting
• Inexpensive to compute
• Easy to rotate and translate
• Low memory consumption

Figure 3 below shows four different types of bounding volumes, the original mesh, and how they correlate with the desired properties.

Figure 3: The choice of bounding volume affects the final performance. Bounding volumes that are fast during the intersection test generally give lower-quality visual results, while the bounding volumes that give the best visual results are slower during the intersection test. The original mesh gives the best visual results, but consumes the most time and memory.

A bounding volume is used to determine early whether two objects intersect, leading to early outs that drastically increase performance. There is no such thing as the "best" bounding volume; there are just different situations in which each is most useful. A sphere has a low storage requirement and fast intersection tests, but only encapsulates roughly spherical objects well. The capsule requires slightly more storage than the bounding sphere, but it encapsulates oblong objects with better results. The bounding volume that encapsulates an object best is the convex hull; its drawback is that it requires the most storage (still significantly less than the original mesh), but it gives a tight encapsulation that leads to more accurate collision detection.

The use of bounding volumes can be further improved by splitting the original mesh into several smaller parts, where each part gets its own bounding volume. When an intersection has been confirmed between the bounding volumes, one can easily determine between which two regions of the meshes the collision has occurred. To further improve the visual result, the collision test can go even deeper and use the encapsulated parts of the original mesh to check collisions against each other, which gives the maximum quality. Below follows an example of how the use of bounding volumes can significantly reduce the computation time for collisions:

Two meshes A and B, both of n faces, typically have an O(n²) complexity. By using two bounding volumes per mesh, the intersection test can be reduced to only consider half of the faces of each mesh. The result will be an intersection test that has its computational cost reduced by 75%.

- Christer Ericson

The use of bounding volumes could be further improved by introducing a pruning hierarchy, where each level contains a different set of bounding volumes, with an increasing number of volumes per level. This would allow for early outs as well as fine-grained collision areas.

For this thesis, four different bounding volumes will be presented in the following sections: the bounding sphere, the object oriented bounding box, the bounding capsule and the convex hull.


2.3.1 Bounding sphere

As mentioned earlier, desired properties of a bounding volume are low storage requirements and inexpensive intersection tests. The sphere fulfils both of these properties, and since it only requires a position vector and a radius to be stored, it is the most memory-efficient bounding volume.

Listing 1: Sphere variables

struct Sphere
{
    Vec3<Real> pos;
    double r;
};

A drawback of using spheres as bounding volumes is that they easily encapsulate more volume than necessary for flat objects. This leads to more frequent indications of collisions than necessary. Figure 4 below shows how the bounding sphere is a better choice for round objects than for flat objects:

Figure 4: Two cases of a sphere encapsulating an object. (a) Since the encapsulated object is flat, the major part of the encapsulated volume will be additional volume. (b) The additional encapsulated volume is significantly less when the object is somewhat round.

In both cases the bounding sphere has managed to encapsulate the object, but the quality of the results differs. The use of bounding spheres for encapsulating flat surfaces might however be a good choice for some applications, in order to save memory and computational power. If the object is far back in a scene, such inaccurate collisions might be acceptable since the observer might not be able to tell the difference.
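To make the cheapness of the sphere's intersection test concrete, below is a minimal sketch built on the Sphere struct from listing 1 (the function itself is our illustration, not code from the thesis): two spheres overlap exactly when the squared distance between their centers is at most the square of the sum of their radii.

// Sketch: sphere-sphere overlap test.
// Comparing squared distances avoids a square root.
bool SpheresIntersect(const Sphere& a, const Sphere& b)
{
    double dx = a.pos.x - b.pos.x; // vector between the centers
    double dy = a.pos.y - b.pos.y;
    double dz = a.pos.z - b.pos.z;
    double rsum = a.r + b.r;
    // Overlap if the centers are closer than r1 + r2.
    return dx*dx + dy*dy + dz*dz <= rsum * rsum;
}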


2.3.2 Bounding capsule

The capsule is a geometric shape that can be seen as a sphere of radius r sliding between two points, start and end. This is illustrated in figure 5 below:

Figure 5: The capsule can be seen as a sphere of radius r being swept between the two points start and end.

The capsule requires two position vectors and a radius to be stored.

Listing 2: Capsule variables

struct Capsule
{
    Vec3<Real> start;
    Vec3<Real> end;
    double r;
};

As can be seen from listing 2, the capsule is quite memory efficient and only requires one position vector more than the bounding sphere.

The intersection test is also relatively cheap and consists of:

1. Find the two closest points p1 and p2 between the two capsules.
2. Compute the distance d between the two points (p1, p2).
3. If the distance d < r1 + r2, the two capsules are colliding.

Finding the closest points between two capsules can be seen as finding the closest points between two line segments. When finding the closest points there are four different scenarios that can occur, as illustrated in figure 6 below:


Figure 6: Four scenarios when finding the closest points between two lines. (a) The two lines are intersecting, resulting in identical closest points. (b), (c) One endpoint is within the other line. (d) The closest points are both endpoints.

As can be seen from figure 6, finding the closest points between two lines is not as trivial as computing the distance between endpoints. 6(a) illustrates that the points could lie between the endpoints on both lines (notice that the lines might be at different heights and hence not intersecting), 6(b) shows that there could exist an infinite number of closest points, 6(c) illustrates that a point may be found anywhere along the line, and 6(d) shows that the closest points might be two actual endpoints.
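Below is a minimal sketch of the test, assuming the Capsule struct from listing 2 (the helper is our own illustration, following the closest-point-between-segments formulation described by Ericson [3]); the endpoint scenarios of figure 6 are handled by clamping the segment parameters s and t:

#include <algorithm>

// Squared distance between segments p1q1 and p2q2 (after Ericson [3]).
static double SegSegDist2(const Vec3<Real>& p1, const Vec3<Real>& q1,
                          const Vec3<Real>& p2, const Vec3<Real>& q2)
{
    double d1x = q1.x - p1.x, d1y = q1.y - p1.y, d1z = q1.z - p1.z;
    double d2x = q2.x - p2.x, d2y = q2.y - p2.y, d2z = q2.z - p2.z;
    double rx  = p1.x - p2.x, ry  = p1.y - p2.y, rz  = p1.z - p2.z;
    double a = d1x*d1x + d1y*d1y + d1z*d1z;   // |d1|^2
    double e = d2x*d2x + d2y*d2y + d2z*d2z;   // |d2|^2
    double b = d1x*d2x + d1y*d2y + d1z*d2z;   // d1 . d2
    double c = d1x*rx  + d1y*ry  + d1z*rz;    // d1 . r
    double f = d2x*rx  + d2y*ry  + d2z*rz;    // d2 . r
    double denom = a*e - b*b;                 // >= 0; near zero for parallel segments
    double s = (denom > 1e-12) ? std::clamp((b*f - c*e) / denom, 0.0, 1.0) : 0.0;
    double t = (e > 1e-12) ? (b*s + f) / e : 0.0;
    // Clamp t to [0,1] and recompute s (the endpoint cases of figure 6).
    if (t < 0.0)      { t = 0.0; s = (a > 1e-12) ? std::clamp(-c / a, 0.0, 1.0) : 0.0; }
    else if (t > 1.0) { t = 1.0; s = (a > 1e-12) ? std::clamp((b - c) / a, 0.0, 1.0) : 0.0; }
    double wx = rx + d1x*s - d2x*t;           // vector between the closest points
    double wy = ry + d1y*s - d2y*t;
    double wz = rz + d1z*s - d2z*t;
    return wx*wx + wy*wy + wz*wz;
}

// Two capsules collide if their core segments are closer than r1 + r2.
bool CapsulesIntersect(const Capsule& a, const Capsule& b)
{
    double rsum = a.r + b.r;
    return SegSegDist2(a.start, a.end, b.start, b.end) < rsum * rsum;
}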

The capsule suits a wider range of shapes than the bounding sphere; it can encapsulate oblong objects by placing the endpoints start and end at a suitable distance apart, and it acts as a bounding sphere when the endpoints are placed at the same location (figure 7(b)). Troublesome shapes are oblong objects with flat sides, for which the capsule will produce additional volume at the endpoints (figure 7(c)). An even worse situation is when the object to be encapsulated has a triangular shape, where the tip of the triangle produces a large amount of additional volume (figure 7(d)).

Figure 7: Four different objects encapsulated by capsules. (a) The capsule does a good job encapsulating the object with little additional volume. (b) The capsule can give the same good result as a sphere by putting the start and end points at the same location. (c) The capsule is not able to encapsulate objects with flat sides in a good manner. (d) One of the worst types of objects to encapsulate with a capsule is an object with a triangular shape.


Even though the capsule will always produce results at least as good as the bounding sphere, the bounding sphere can be more attractive for applications where fast intersection tests are the highest priority.

2.3.3 Object oriented bounding box

The object oriented bounding box (also referred to as OBB) is simply a box whose orientation matches the object it encapsulates. The OBB differs from the bounding sphere and bounding capsule by not having any round features; this makes the OBB a strong bounding volume, in the sense of low additional volume, for objects with hard edges and flat surfaces.

The OBB is built up by a total of five vectors: one vector for the center of the OBB (center), three vectors describing the local coordinate system (rot1, rot2, rot3), and a last vector (e) describing the extent of the OBB along each local coordinate axis.

Listing 3: Object oriented bounding box variables

struct OBB
{
    Vec3<Real> center;
    Vec3<Real> rot[3];
    Vec3<Real> e;
};

With five vectors, the OBB is quite an expensive bounding volume to store. It may however be worth storing, since it can encapsulate objects with sharp edges better than the bounding sphere and the bounding capsule.

Figure 8 illustrates the visual representation of each component of the OBB.

Figure 8: The OBB is built up by a center point center, three local coordinate axes (rot1, rot2, rot3) and a vector e describing half the length along each local axis.

As can be seen from figure 8, each component of the vector e describes half the length along one local axis: e1 describes half the length along direction rot1, and correspondingly e2 and e3 describe the half lengths along the local axes rot2 and rot3.
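To make the center/rot/e representation concrete, the following small sketch (our own illustration, not thesis code) reconstructs the eight corners of an OBB by stepping plus or minus e along each local axis from the center:

// Sketch: the eight corners of an OBB from its five vectors (listing 3).
void ObbCorners(const OBB& box, Vec3<Real> corners[8])
{
    for (int i = 0; i < 8; ++i)
    {
        double sx = (i & 1) ? box.e.x : -box.e.x; // signed extent along rot[0]
        double sy = (i & 2) ? box.e.y : -box.e.y; // signed extent along rot[1]
        double sz = (i & 4) ? box.e.z : -box.e.z; // signed extent along rot[2]
        corners[i].x = box.center.x + box.rot[0].x*sx + box.rot[1].x*sy + box.rot[2].x*sz;
        corners[i].y = box.center.y + box.rot[0].y*sx + box.rot[1].y*sy + box.rot[2].y*sz;
        corners[i].z = box.center.z + box.rot[0].z*sx + box.rot[1].z*sy + box.rot[2].z*sz;
    }
}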

The goal when placing a bounding volume is always to achieve the best fit. For an OBB this means that at least one side of the encapsulated object should be parallel with one side of the OBB. Below follows a figure of two different objects being encapsulated by their object oriented bounding boxes.

Figure 9: Two cases of object oriented bounding boxes. (a) The OBB has aligned itself with the object and managed to achieve the best fitting box. (b) The OBB has encapsulated a spherical object; note that any rotation of the box would work since the encapsulated object is a sphere.

In figure 9(a) the OBB has managed to make not only one side parallel with the object but three of them, resulting in a very good usage of the OBB. Figure 9(b) shows an OBB that has encapsulated a spherical object; since the object is a sphere, the orientation of the OBB does not matter.

2.3.4 Convex hull

The bounding volume that produces the tightest fit is the convex hull. It takes the point set of the segment and encapsulates it with the smallest convex point set possible. A convex hull can be thought of as a balloon stretched out by the segment inside of it. The convex hull, or the "balloon", eliminates the concave parts of the segment and keeps the bounding volume as tight as possible, as can be seen in figure 10.

Figure 10: The convex hull has encapsulated the object and produced a tight fitting bounding volume.


A drawback of convex hulls is that if they are to encapsulate an object that is already convex, the convex hull will be an exact replica of the original object, as illustrated in figure 11. Since the convex hull is identical to the original object, no collision detection improvements will be made for such an object. To get some improvement in these situations, the convex hull can be simplified afterwards in order to reduce its number of points, thereby increasing the performance of the collision detection.

Figure 11: Since the object the convex hull is to encapsulate is convex, the convex hull will be identical to the original object.

Once the convex hulls are generated, they can be used together with Minkowski sums in order to detect collisions in an effective manner.

2.4 variational mesh decomposition

In an attempt to obtain a more natural and meaningful segmentation of the object, the method "Variational mesh decomposition" developed by Zhang et al. [13] was considered. The method uses the eigenvectors of a dual graph Laplacian matrix to obtain global information about the object, such as the spectral attributes of the underlying model. From this data it is then possible to perform a segmentation of the model.


3 METHOD

This chapter gives a detailed description of all the methods that were implemented during the thesis work. The methods are presented in roughly the same order as we implemented them, which makes it easy to follow the progress and the reasoning behind each improvement. Methods that turned out not to improve our main implementation are also presented in this chapter.

3.1 hierarchical approximate convex decomposition

Hierarchical approximate convex decomposition (HACD) [8][7] is a method used to segment a mesh into several parts, where each part is convex. Since every part in the decomposed mesh is convex, the collision detection becomes significantly less complicated.

The HACD method consists of the following steps:

1. Compute the dual graph of the mesh
2. Simplify the dual graph
3. Compute the convex hulls
4. Repeat from step 2 until the desired number of segments is achieved

3.1.1 Dual graph

The dual graph is a graph describing how the faces in the mesh are connected. A vertex in the dual graph corresponds to a face in the mesh, and an edge between two dual graph vertices indicates that the two corresponding faces are neighbours sharing an edge. This can be seen in figure 12 below:

The dual graph is the initial step in the process of dividing a mesh into several segments. One might think that the process starts by splitting the mesh into two segments, then three, then four and so on, but in fact it is just the opposite. Every face in the mesh is initially its own segment, and the number of segments is then reduced by merging segments together. The vertices in the dual graph represent the segments, and the goal is to collapse these vertices down to a desired number.

3.1.2 Edge collapse

As mentioned earlier, each vertex in the dual graph is a segment, and the aim is to reduce the number of segments by performing edge collapses on the dual graph.

Figure 12: The original mesh and its computed dual graph. It can be seen that the vertices in the dual graph represent faces in the mesh and that an edge in the dual graph indicates that the two corresponding faces are neighbours.

Figure 13 below illustrates how the process of collapsing edges in the dual graph affects the number of segments.

Figure 13: Initially each face in the mesh is a segment; as edge collapse operations are performed on the dual graph, the number of clusters is reduced.

In order to keep track of the vertices belonging to a specific segment, an ancestor list is required. For each edge collapse that is performed, the ancestor list gets updated accordingly. If an edge collapse is performed between vertex v and vertex w, the ancestors of v are updated as follows:

    A(v) ← A(v) ∪ A(w) ∪ {w}    (1)

where the ancestor list of v gets updated with the vertex w and its corresponding ancestors.
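A minimal sketch of this bookkeeping (our own illustration; the thesis does not show its data structures) stores each segment's ancestor set in a hash map and merges the sets on collapse, exactly as in equation 1:

#include <unordered_map>
#include <unordered_set>

using VertexId = int;
// A(v): for each dual graph vertex, the set of collapsed-in ancestors.
std::unordered_map<VertexId, std::unordered_set<VertexId>> ancestors;

// Collapse edge (v, w): w is merged into v, per equation (1).
void CollapseEdge(VertexId v, VertexId w)
{
    auto& av = ancestors[v];
    const auto& aw = ancestors[w];
    av.insert(aw.begin(), aw.end()); // A(v) <- A(v) union A(w)
    av.insert(w);                    // ... union {w}
    ancestors.erase(w);              // w no longer exists as a segment
}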

3.1.3 Cost function

When collapsing edges it is desirable to first collapse the edges that have the least negative effect on the visual result. In figure 14, three segments are to be combined into two, with two possible choices of combination: one drastically changes the visual appearance while the other preserves it.


Figure 14: The three segments can be combined in two possible ways. Combination 1 combines the blue and yellow segments; combination 2 combines the yellow and green segments. Combination 2 is preferred since it produces significantly less additional volume.

Even though it is trivial for a human to determine which of the two combinations gives the best result, an automatic process like the HACD method still needs to be guided by a cost function.

The cost function E(S(v, w)) guiding the edge collapse is based on a concavity cost C(S(v, w)) and an aspect ratio Eshape(S(v, w)) of the combined surface S(v, w):

    E(S(v, w)) = (1/D) · C(S(v, w)) + (α/(10D)) · Eshape(S(v, w))    (2)

where D is a normalization factor equal to the diagonal of the bounding box of S, α is a weight controlling the contribution of Eshape with respect to the concavity cost, and S(v, w) is the unification of the vertices v, w and their ancestors:

    S(v, w) = A(v) ∪ A(w) ∪ {v, w}    (3)

3.1.3.1 Concavity cost

The concavity cost C(S(v, w)) indicates how concave the combination of two surfaces is. Figure 15 below illustrates how the concavity is measured when combining surface v with surface w:


Figure 15: Illustration of the concavity cost. (a) The two surfaces that are to be combined. (b) The concavity cost for combining surface v with surface w is equal to the distance from point M0 to point P(M0).

The concavity cost is based on the distance between a point M0 (one of the active dual graph vertices) and the point P(M0), where P(M0) is the projection of M0 along its normal onto the convex hull of the surface S(v, w):

    C(S(v, w)) = max_{M0 ∈ S(v, w)} ||M0 − P(M0)||    (4)

The concavity cost is high for surfaces with high concavity, while it is zero for convex surfaces.

3.1.3.2 Aspect ratio

The purpose of the aspect ratio term is to favor the generation of compact clusters rather than irregular clusters. The difference between compact and irregular clusters is illustrated in figure 16 below:

Figure 16: An example of a compact cluster and an irregular cluster; it is preferred to generate compact clusters since they produce more meaningful segmentations.

Compact clusters have the good property of producing tight fitting convex hulls, while irregular clusters lead to loose fitting convex hulls.


The HACD method defines the aspect ratio as follows:

    Eshape(S(v, w)) = ρ²(S(v, w)) / (4π · σ(S(v, w)))    (5)

where ρ(S(v, w)) and σ(S(v, w)) are respectively the perimeter and the area of the surface S(v, w). The lowest possible aspect ratio is that of a disk-shaped cluster, for which it equals one; the more irregular a cluster is, the higher the aspect ratio.
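As a small illustration (our own sketch), equation 5 translates directly into code once the perimeter and area of the surface are available:

#include <cmath>

// Aspect ratio of a surface patch, equation (5). rho is the perimeter
// (boundary length) of the patch and sigma its area, both assumed to
// be computed elsewhere. The result is 1 for a disk and grows as the
// cluster becomes more irregular.
double AspectRatio(double rho, double sigma)
{
    const double pi = 3.14159265358979323846;
    return (rho * rho) / (4.0 * pi * sigma);
}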

3.2 primitive estimation

Although the convex hulls generated by the HACD method form the tightest fitting bounding volumes, they might not be the best choice. Convex hulls require large amounts of memory and can be expensive to operate upon. For this reason it is desirable to estimate the convex hulls with appropriate primitives.

The use of primitives leads to lower memory usage and quicker intersection tests. In the following subsections the three implemented primitives are described, namely the object oriented bounding box, the capsule and the sphere.

3.2.1 Sphere

The sphere is the simplest primitive to estimate. The hardest part of estimating the sphere is finding its center. One might initially think that the center of the sphere should equal the mean position of the convex hull's vertices. In most cases, however, this results in a loose fitting sphere primitive, since the vertices of the convex hull are most likely not uniformly distributed. This is illustrated in figure 17:

Figure 17: (a) Using the mean value of the non-uniformly spread vertices results in a center position that is not optimal. (b) By first calculating a bounding capsule for the convex hull, the capsule's center can be used to position the sphere's center in order to obtain a more accurate result.


A better solution when setting the center of the sphere is to first calculate a bounding capsule and then use its center to position the center of the sphere. The radius of the sphere is then set to the largest distance from the center to any of the points.
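A minimal sketch of this estimation (our own illustration, reusing the Sphere and Capsule structs from listings 1 and 2 and assuming a capsule has already been fitted as in section 3.2.2):

#include <vector>
#include <cmath>

// Sketch: estimate a bounding sphere for a convex hull, using the
// midpoint of an already fitted capsule as the sphere center.
Sphere EstimateSphere(const Capsule& capsule,
                      const std::vector<Vec3<Real>>& hullVertices)
{
    Sphere s;
    s.pos.x = (capsule.start.x + capsule.end.x) * 0.5; // capsule midpoint
    s.pos.y = (capsule.start.y + capsule.end.y) * 0.5;
    s.pos.z = (capsule.start.z + capsule.end.z) * 0.5;
    double maxD2 = 0.0;
    for (const Vec3<Real>& v : hullVertices) // radius reaches the farthest vertex
    {
        double dx = v.x - s.pos.x, dy = v.y - s.pos.y, dz = v.z - s.pos.z;
        double d2 = dx*dx + dy*dy + dz*dz;
        if (d2 > maxD2) maxD2 = d2;
    }
    s.r = std::sqrt(maxD2);
    return s;
}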

3.2.2 Capsules

When estimating a capsule that will encapsulate a segment, it is desirable to keep the volume of the capsule minimal while still encapsulating the whole segment. Keeping the volume at a minimum will in general give fewer false collisions, while encapsulating the whole segment makes sure that no collisions are missed. In the following subsections, two important aspects of generating a capsule are presented, namely the length and the orientation of the capsule.

Capsule length

When generating a capsule from a convex hull, one might think that the length of the capsule should be the same as the length of the convex hull. This is not true, since the capsule only has its maximum length along its center line. A convex hull that does not have its extreme points on the center line of the capsule can never be encapsulated by simply setting the capsule's length equal to the length of the convex hull. The problem is illustrated in figure 18 below:

Figure 18: The left side of the capsule has managed to encapsulate the convex hull since the left extreme point of the convex hull happens to be at the center of the capsule. The extreme point p of the right side of the convex hull is not at the center of the capsule; therefore the capsule fails to encapsulate the whole convex hull. d is the distance between the capsule and the point p.

Although setting the length of the capsule equal to the length of the convex hull does not work in all situations, it is a good starting point. When the capsule is not able to encapsulate the whole convex hull, the length of the capsule needs to be expanded by moving the Start and End points. In order to find the distance d by which to move End, a ray-casting operation is required: by ray-casting from point p in the direction (Start − End), the distance d can be found. The final result after moving point End the distance d along the direction End − Start can be seen in figure 19 below:


Figure 19: The final shape of the capsule after the point End has been moved by the distance d along the direction End-Start.

By moving point End by the distance d, the capsule is not only guaranteed to encapsulate the segment but also does so with the tightest fitting capsule for the given orientation.

Capsule orientation

The orientation of the capsule greatly influences how tight fitting the capsule will be. Figure 20 shows four examples of capsules with different orientations that encapsulate the same object.

Figure 20: Four capsules with different orientations. All of the capsules manage to encapsulate the object, although (d) has produced the tightest fit.

As a starting point for generating an appropriate orientation of the capsule, the result from the OBB can be used. The OBB has three local axes (x, y, z), and since it is object oriented it has already computed a good orientation for a bounding box. The longest local axis of the bounding box can then be used as the orientation for the capsule. This choice of orientation is in general a good one; there are however some objects for which it is not optimal, namely objects that are pointy on one side and blunt on the other. A problematic convex hull is shown in figure 21; the figure illustrates both the capsule estimated using the OBB's longest axis and the desired orientation of the estimated capsule.


Figure 21: Two capsules with different orientations encapsulating the same object. The top capsule uses the orientation from the longest local axis of the OBB. The bottom capsule illustrates the desired orientation of the capsule. The red and green lines represent the lengths of the top and bottom capsules; it can be seen that the bottom capsule is shorter and hence encapsulates less additional volume.

It can be seen in figure 21 that the longest axis of the OBB may not be the best capsule orientation for some convex hulls. The desired capsule has the pointy feature of the convex hull centred on the capsule axis, thereby almost removing the need to expand the capsule. This leads to less encapsulated volume, which is desired.

In order to achieve the desired orientation of the capsule, some additional steps have to be performed. Using the orientation given by the OBB, split the convex hull into two pieces along the plane perpendicular to that orientation. Compute an axis aligned bounding box (AABB) for each half. The new orientation is then the line between the centers of the two AABBs, from which the new capsule can be obtained. The steps of the process are illustrated in figure 22 below:

Figure 22: The steps performed to obtain a better orientation of the capsule for problematic convex hulls.


3.2.3 Object Oriented Bounding Box

This section explains how the OBB has been implemented and improved (for more information about its properties, the reader is referred to section 2.3.3).

To achieve a nearly perfectly fitted OBB, the implemented method is a hybrid between Principal Component Analysis (PCA) and the Minimum-Area Bounding Rectangle (MABR), presented by Christer Ericson in [3].

The reason for using a hybrid method is a drawback of PCA: the method depends on the distribution of the triangles of the mesh to be encapsulated. If the distribution happens to be uneven, we could end up with a badly fitted bounding box. To fix this problem, only one axis is taken from the PCA and the remaining two axes are computed by the MABR.

Principal Component Analysis

The PCA method is a statistical method that uses a covariance matrix and the eigenvectors of that matrix to find the orientation of the OBB, i.e. the principal components of the OBB. The data used are the faces of each convex hull generated by HACD, where each face is given by [pk, qk, rk] and k is the face id going from 0 to n − 1 (n is the number of faces). With this information we can now introduce the covariance matrix seen in equation 6:

    Cij = (1/aH) Σ_{0≤k<n} (ak/12) · (9·mk,i·mk,j + pk,i·pk,j + qk,i·qk,j + rk,i·rk,j) − mH,i·mH,j    (6)

where aH is the total area of the convex hull, ak is the area of face k, mk is the centroid of face k, mH is the centroid of the convex hull, and i, j correspond to the coordinate components, i.e. [x, y, z].

To compute the area of the convex hull, we sum the areas of its triangles, as seen in equation 7:

    aH = Σ_{0≤k<n} ak    (7)

The area of each face is computed according to equation 8:

    ak = ||(qk − pk) × (rk − pk)|| / 2    (8)


The centroid of the convex hull is computed as in equation 9:

    mH = (1/aH) Σ_{0≤k<n} ak·mk    (9)

which corresponds to the mean of the face centroids weighted by their areas, where the centroid of each face is defined in equation 10:

    mk = (pk + qk + rk) / 3    (10)

The resulting matrix C in equation 6 is a 3 × 3 matrix, which is the first step towards finding the principal components. But what information does the matrix give? Normally, the variance is represented as a single value, which is not enough in this case, since we want the variance in three dimensions. That is why the covariance matrix is needed: to represent the variance in additional dimensions. For our problem we need the variance corresponding to the faces (the variance can be thought of as the spread of the faces), where each face is represented in three dimensions; thereby a 3 × 3 matrix is needed in order to gather all the variance information, as seen in equation 11:

        | cov(x, x)  cov(x, y)  cov(x, z) |
    C = | cov(y, x)  cov(y, y)  cov(y, z) |    (11)
        | cov(z, x)  cov(z, y)  cov(z, z) |

where cov(x, x) = Cxx, cov(x, y) = Cxy, etc., computed from equation 6.
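A minimal sketch of equations 6-10 (our own illustration; Face is a hypothetical triangle type, and Vec3<Real> is only assumed to have x, y, z members): the face areas and centroids are accumulated first, then the covariance entries.

#include <vector>
#include <cmath>

struct Face { Vec3<Real> p, q, r; }; // hypothetical triangle type

static double comp(const Vec3<Real>& v, int i) // i = 0,1,2 -> x,y,z
{
    return i == 0 ? v.x : (i == 1 ? v.y : v.z);
}

// Sketch: covariance matrix of a convex hull surface, equations 6-10.
void HullCovariance(const std::vector<Face>& faces, double C[3][3])
{
    size_t n = faces.size();
    std::vector<double> a(n);        // face areas, eq. (8)
    std::vector<Vec3<Real>> m(n);    // face centroids, eq. (10)
    double aH = 0.0;                 // hull area, eq. (7)
    double mH[3] = {0.0, 0.0, 0.0};  // hull centroid, eq. (9)
    for (size_t k = 0; k < n; ++k)
    {
        const Face& f = faces[k];
        double ux = f.q.x - f.p.x, uy = f.q.y - f.p.y, uz = f.q.z - f.p.z;
        double vx = f.r.x - f.p.x, vy = f.r.y - f.p.y, vz = f.r.z - f.p.z;
        double cx = uy*vz - uz*vy, cy = uz*vx - ux*vz, cz = ux*vy - uy*vx;
        a[k] = std::sqrt(cx*cx + cy*cy + cz*cz) * 0.5;   // eq. (8)
        m[k].x = (f.p.x + f.q.x + f.r.x) / 3.0;          // eq. (10)
        m[k].y = (f.p.y + f.q.y + f.r.y) / 3.0;
        m[k].z = (f.p.z + f.q.z + f.r.z) / 3.0;
        aH += a[k];
        mH[0] += a[k] * m[k].x; mH[1] += a[k] * m[k].y; mH[2] += a[k] * m[k].z;
    }
    for (int i = 0; i < 3; ++i) mH[i] /= aH;             // eq. (9)
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
        {
            double sum = 0.0;
            for (size_t k = 0; k < n; ++k)               // eq. (6)
            {
                const Face& f = faces[k];
                sum += (a[k] / 12.0) *
                       (9.0 * comp(m[k], i) * comp(m[k], j) +
                        comp(f.p, i) * comp(f.p, j) +
                        comp(f.q, i) * comp(f.q, j) +
                        comp(f.r, i) * comp(f.r, j));
            }
            C[i][j] = sum / aH - mH[i] * mH[j];
        }
}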

The last step of the PCA method is to find the eigenvectors and eigenvalues of the covariance matrix. The eigenvectors provide information about the pattern of the faces, which can be seen in figure 23, where the eigenvectors are shown as the red vectors (notice that the figure illustrates a two dimensional example, where the data are vertices instead of faces).

Figure 23: Illustration of the red eigenvectors of a data set’s covariance matrix, where the data set is represented by the yellow dots. In the figure we can also see that the eigenvectors could be used to orientate a bounding rectangle with a tight fit.


In figure 23 we can see how the eigenvectors give the information (i.e. the orientation) of the pattern, as mentioned before. The eigenvector with the highest eigenvalue is the line aligned along the vertices (rot1), and the second eigenvector (rot2) is perpendicular to rot1. These components can now be used to orientate the bounding box, and the eigenvalues are used to scale the box to the proper volume.

The PCA method cannot be used on its own though, because it depends strongly on the distribution of the data. In figure 23 the distribution is even, and a box encapsulating it would have minimal volume. In figure 24 it is not, due to the higher densities of vertices at the right and left of the graph. Hence, the bounding box generated from the components in figure 24 would not be the optimal solution for the data.

Figure 24: The vertices of the rectangle are unevenly distributed in the two dimensional space, which results in components rot1 and rot2 with incorrect directions that do not lead to a tight fitting bounding rectangle. The bounding rectangle is visualized by the dashed lines.

To solve the problem of uneven distributions, we add another algorithm to the bounding box estimation method, i.e. the MABR, which is the next topic to discuss.

Minimum-Area Bounding Rectangle

There exist several methods to optimize the PCA method, one of them being a brute force approach: use all the axes of the PCA and then rotate each axis to find the smallest volume of the bounding box.

A more efficient method is the minimum-area bounding rectangle [3], where one axis is chosen to generate a plane onto which all vertices are projected. The plane is then used to find the rectangle with the smallest area that encapsulates all the projected vertices. The orientation of this rectangle produces the two remaining axes of the bounding box.


The first step in the MABR process is to decide which axis to use as the normal of the plane mentioned above. According to [3], there are three options to choose between:

• All principal component box
• Maximum principal component box
• Minimum principal component box

These three methods have been evaluated by Barequet et al. [2], who concluded that the minimum principal component box performed best. The difference between the methods is that the all principal component box uses all principal components to orientate the OBB, which is similar to pure PCA. The maximum principal component box uses the axis with the largest eigenvalue, and vice versa for the minimum principal component box, where the principal component with the smallest eigenvalue is used. The reason the minimum principal component box gives better results is that the maximum principal component is based on the eigenvector with the maximum variance (largest eigenvalue), and therefore has a higher chance of producing a larger volume than expected.

Using the minimum principal component as the normal n̂ and the centroid r0 (computed in an earlier step, equation 9) as a point on the plane, the hyperplane for the projection can now be defined according to equation 12:

    n̂ · (r − r0) = 0    (12)

To project the vertices onto the plane, we use the three steps seen in table 1:

    Step  Description
    1.    v = r0 − r, where r is the vertex to be projected.
    2.    vsp = v − n̂(n̂ · v), where vsp is the scalar projection of v.
    3.    rp = r0 − vsp, where rp is r projected onto the plane.

Table 1: The table shows, step by step, how the projection of vertices onto a plane is computed.


In step 1, a vector between the point on the plane and the vertex to be projected is created. In step 2, a projection is computed in order to know how to move the vertex. In the third step, the final projection is computed, i.e. the vertex is "moved" onto the plane.
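As a small sketch of table 1 (our own illustration), the three steps map directly onto vector operations:

// Sketch: project vertex r onto the plane through r0 with unit normal n,
// following the three steps of table 1.
Vec3<Real> ProjectOntoPlane(const Vec3<Real>& r,
                            const Vec3<Real>& r0,
                            const Vec3<Real>& n) // n is assumed normalized
{
    double vx = r0.x - r.x, vy = r0.y - r.y, vz = r0.z - r.z;    // step 1
    double nv = n.x*vx + n.y*vy + n.z*vz;
    double sx = vx - n.x*nv, sy = vy - n.y*nv, sz = vz - n.z*nv; // step 2
    Vec3<Real> rp;                                               // step 3
    rp.x = r0.x - sx; rp.y = r0.y - sy; rp.z = r0.z - sz;
    return rp;
}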

The final step of the whole process in this section is to compute the actual MABR, which consists of an iteration through the projected vertices; each step is summarized in table 2.

    Step  Description
    1.    ê0 = pn − p(n−1), where 1 ≤ n ≤ number of vertices and p represents the projected vertices.
    2.    ê1 = n̂ × ê0, where n̂ is the plane normal.
    3.    Project each vertex pn onto ê0 and ê1 in order to find the maximum extents of the vertices on the axes.
    4.    Compute the area and store the smallest. Also update the centroid, size and components of the bounding box.

Table 2: The MABR method is divided into several steps in order to generate an optimized OBB.

The main iteration considers two vertices at a time, as seen in the first step of table 2, where a normalized vector ê0 is computed. This is also visualized in figure 25, where the active vertices are marked in orange. In the second step we compute ê1, an axis perpendicular to ê0, as seen in figure 25. These two vectors are then used in a projection (step 3 in table 2), where all the vertices are projected onto each of the vectors ê0 and ê1 in order to find the maximum extents ext0 and ext1, as seen in figures 25 and 26. From the extents, an area is computed in order to find the minimum-area rectangle that the extents generate. This can be seen in figures 25 and 26, where figure 26 shows the result four additional clockwise iterations later. We can see that the extents computed in figure 26 would be the best for generating a tight fitting rectangle. When the smallest area is found, the last thing to do is to update the remaining components of the OBB with ê0 and ê1, and also to update the centroid and size of the box (step 4 in table 2).


Figure 25: The visual representation of table 2, giving an overview of the MABR iterations. In (a) we can see how each step of the iteration (orange vertices) computes two axes and then projects each vertex onto them. Figure (b) is a visualization of the results from (a), where the green dashed lines are the bounding rectangle. For this iteration step, the bounding volume in (b) is not the optimal solution, since the area of the bounding rectangle could be much smaller.

Figure 26: Compared to figure 25, figure (a) has been iterated four more times and has reached the vertices marked in orange. Here we can see that the axes are aligned along the model, which produces a tight fitting bounding rectangle, seen in figure (b) as dashed green lines.

At this point, all the vertices belonging to the convex hull have been projected twice: the first time when the vertices were projected onto the plane generated by the minimum principal component (table 1), and the second time when the vertices on the plane were projected onto the axes ê0 and ê1 (table 2).
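Below is a compact sketch of the MABR iteration of table 2, operating on vertices already projected into the plane and expressed in 2D plane coordinates (our own illustration; Vec2 is a hypothetical helper type). Each polygon edge is tried as candidate axis ê0, all points are projected onto ê0 and ê1, and the smallest-area rectangle is kept:

#include <vector>
#include <cmath>
#include <limits>

struct Vec2 { double x, y; };              // point in the projection plane
static double Dot2(Vec2 a, Vec2 b) { return a.x*b.x + a.y*b.y; }

// Sketch: minimum-area bounding rectangle over 2D points (table 2).
// Returns the best axes; the extent/centroid bookkeeping of step 4 is
// omitted for brevity.
void MinAreaRect(const std::vector<Vec2>& pts, Vec2& bestE0, Vec2& bestE1)
{
    if (pts.size() < 3) return;
    double bestArea = std::numeric_limits<double>::max();
    for (size_t n = 1; n <= pts.size(); ++n)
    {
        // Step 1: candidate axis from two consecutive vertices.
        Vec2 prev = pts[n - 1], cur = pts[n % pts.size()];
        Vec2 e0{cur.x - prev.x, cur.y - prev.y};
        double len = std::sqrt(Dot2(e0, e0));
        if (len == 0.0) continue;
        e0.x /= len; e0.y /= len;
        Vec2 e1{-e0.y, e0.x};              // step 2: perpendicular axis
        // Step 3: extents of all points along e0 and e1.
        double min0 = 1e300, max0 = -1e300, min1 = 1e300, max1 = -1e300;
        for (const Vec2& p : pts)
        {
            double u = Dot2(p, e0), v = Dot2(p, e1);
            if (u < min0) min0 = u; if (u > max0) max0 = u;
            if (v < min1) min1 = v; if (v > max1) max1 = v;
        }
        // Step 4: keep the smallest area.
        double area = (max0 - min0) * (max1 - min1);
        if (area < bestArea) { bestArea = area; bestE0 = e0; bestE1 = e1; }
    }
    // Lifted back to 3D, bestE0/bestE1 give the two remaining OBB axes;
    // the plane normal is the third.
}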


3.2.4 Combined primitives

As discussed in section 2.3, each primitive has its pros and cons; each has a type of shape that it is best at encapsulating. In order to bring out the best of each primitive, different primitive types can be used for different parts of an object, as illustrated in figure 27:

Figure 27: Different primitives encapsulate different objects. The combined primitives take the best choice of primitive for each convex hull.

To measure the quality of the encapsulation, the volume difference between the primitive and the convex hull is used. The primitive with the lowest volume error can then be chosen. The primitives can also be weighted with respect to how expensive they are to use in intersection tests.

3.3 segmentation of skinned meshes

Section 3.1 introduced a method to segment a mesh into several segments. For skinned meshes, the method can be improved. Skinned meshes contain a predefined segmentation in the form of bones: each bone in the mesh is a parent to a part of the mesh that is supposed to follow the movement of the bone. The bone data is used in such a way that if an edge in the dual graph is about to be collapsed and the vertices of that edge belong to different bones, that collapse gets a higher penalty: the collapse is deprioritized and put further back in the priority queue of collapses. Skinned meshes often have two bones attached to each vertex, and hence the data must first be processed so that each vertex belongs to only one bone. This is done by assigning the vertex to the bone with the biggest contribution to the vertex; in the case of equal contributions between two bones, either can be used.
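A minimal sketch of this dominant-bone assignment (our own illustration; SkinnedVertex is a hypothetical type, and real meshes may store more than two influences per vertex):

#include <vector>

// Hypothetical per-vertex skinning data: two influencing bones.
struct SkinnedVertex
{
    int    bone[2];    // indices of the two influencing bones
    double weight[2];  // their skinning weights
};

// Sketch: keep only the bone with the largest skinning weight per vertex.
std::vector<int> DominantBones(const std::vector<SkinnedVertex>& verts)
{
    std::vector<int> dominant(verts.size());
    for (size_t i = 0; i < verts.size(); ++i)
    {
        const SkinnedVertex& v = verts[i];
        // On a tie, either bone can be used; this picks the first.
        dominant[i] = (v.weight[0] >= v.weight[1]) ? v.bone[0] : v.bone[1];
    }
    return dominant;
}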

A drawback of using skinned meshes is that the mesh might have a lot of small bones, e.g. in the hands of a human character. A solution to the problem of having too many bones is to reduce their number by collapsing each leaf bone into its parent; this way the animation will still look good while the collision test cost is reduced. Figure 28 shows both a skinned model with all its bones left and a model where the leaf bones have been removed.

Figure 28: Skinned meshes can be segmented using the bone data. In order to achieve good performance the number of bones is reduced.

3.4 segmentation using feature detection

3.4.1 Clustering With K-Means

The method in [13] uses a combination of different algorithms to segment a mesh. One algorithm with a big impact on the method is the clustering algorithm K-means [11]. K-means decides which parts of the mesh belong to which cluster. Since the time of this thesis work was not enough to implement the whole method in [13], we decided to implement a standard K-means algorithm to see what it was capable of.

The standard K-means algorithm uses the positions of the vertices to generate clusters. The first step is to randomly assign each vertex to a cluster as an initial state. From this initialization, centroids can be generated, each corresponding to a cluster position among the vertices. The centroids are placed at the mean of the vertices belonging to each cluster; the whole initialization is visualized in figure 29.

Figure 29: Illustration of the initialization step of K-means, where each color corresponds to a unique cluster. The left image shows the vertices that are to be clustered. In the center, each vertex has been randomly assigned to a cluster. The right image visualizes the centroids of each cluster as squares and their areas with their colors.

In the right image of figure 29, we can see that the vertices need to be updated and assigned to new clusters. This is done in the iteration step of the algorithm. To update which cluster a vertex belongs to, equation 13 is used:

    Si = { xp : ||xp − mi|| ≤ ||xp − mj|| ∀ 1 ≤ j ≤ k }    (13)

where Si is the set of vertices belonging to cluster i and can be thought of as the colored areas in figure 29, xp is a vertex, mj is the centroid of cluster j (corresponding to a square in figure 29) and k is the number of clusters. The last part of the iteration step is to update the centroid positions, which is done according to equation 14:

    mi = (1/|Si|) Σ_{xj ∈ Si} xj    (14)

where the variables are the same as in equation 13. What equation 14 calculates is the mean of the vertices belonging to a specific cluster. These iterations are visualized in figure 30.

The iterations stop when the algorithm converges, i.e. when no vertex changes cluster.
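A minimal sketch of this standard K-means loop over vertex positions (our own illustration; only the x, y, z members of Vec3<Real> are assumed):

#include <vector>
#include <cstdlib>

// Sketch: standard K-means over vertex positions (equations 13 and 14).
// Returns, for each vertex, the index of the cluster it belongs to.
std::vector<int> KMeans(const std::vector<Vec3<Real>>& verts, int k)
{
    std::vector<int> cluster(verts.size());
    for (size_t i = 0; i < verts.size(); ++i)  // random initial assignment
        cluster[i] = std::rand() % k;

    bool changed = true;
    while (changed)  // converged when no vertex changes cluster
    {
        // Update step, eq. (14): each centroid is the mean of its vertices.
        // (A production version would reseed clusters that become empty.)
        std::vector<double> cx(k, 0.0), cy(k, 0.0), cz(k, 0.0);
        std::vector<int> count(k, 0);
        for (size_t i = 0; i < verts.size(); ++i)
        {
            int c = cluster[i];
            cx[c] += verts[i].x; cy[c] += verts[i].y; cz[c] += verts[i].z;
            ++count[c];
        }
        for (int c = 0; c < k; ++c)
            if (count[c] > 0) { cx[c] /= count[c]; cy[c] /= count[c]; cz[c] /= count[c]; }

        // Assignment step, eq. (13): move each vertex to its nearest centroid.
        changed = false;
        for (size_t i = 0; i < verts.size(); ++i)
        {
            int best = cluster[i];
            double bestD = 1e300;
            for (int c = 0; c < k; ++c)
            {
                double dx = verts[i].x - cx[c];
                double dy = verts[i].y - cy[c];
                double dz = verts[i].z - cz[c];
                double d2 = dx*dx + dy*dy + dz*dz;
                if (d2 < bestD) { bestD = d2; best = c; }
            }
            if (best != cluster[i]) { cluster[i] = best; changed = true; }
        }
    }
    return cluster;
}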

The content of this section is part of the method Variational Mesh Decomposition introduced by Zhang et al. [13]. In theory, we wanted to use this method to find distinct features in meshes (e.g. the legs and arms of a human character) in order to improve HACD even further.


Figure 30: These images show the necessary steps of one iteration of the algorithm. In the left image, the vertices have been assigned to their current clusters. The center image illustrates equation 14, where the dashed squares are the new positions of the centroids. The right image shows the updated centroids and their areas.

3.4.2 Laplacian Matrix

The Laplacian matrix [12] can be used to find different properties of graphs. In this case the focus lies on the spectral information, which according to [13] reveals the feature information of the input mesh.

The Laplacian matrix is a combination of a degree matrix [10] and an adjacency matrix [9]. The total size of the matrix depends on the number of dual graph nodes, and the matrix is symmetric. In figure 31 below, the dual graph with numbered nodes is shown along with its corresponding Laplacian matrix, which has been generated from the information in the dual graph.

Figure 31: The dual graph used to build the Laplacian matrix, where each numbered node in the left image corresponds to a row and a column of the matrix.

As mentioned earlier, the Laplacian matrix holds the information about the degrees and the adjacencies of a graph. In the matrix in figure 31, the diagonal corresponds to the number of incident edges for each node in the graph, where the first diagonal value 2 corresponds to the node with index 0 in the dual graph in figure 31. In the example above, the node with index 0 has two nodes connected to it, which can be seen in both the matrix and the dual graph in figure 31.


The adjacency information in the Laplacian matrix corresponds to the −1 entries, which indicate which nodes in the graph are connected to the current node.

The last detail needed in order to understand the Laplacian matrix is to keep in mind that each row and column correspond to a graph node. Since the matrix is symmetric, both the first row and the first column hold the information for the node with index 0, and the third row and column belong to the node with index 2.
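The construction described above is compact enough to state directly. The sketch below (Python with numpy; all names are our own) builds the matrix L = D − A from a node count and an edge list of the dual graph, so that the diagonal holds the degrees and the off-diagonal entries are −1 for adjacent nodes, as in figure 31.

import numpy as np

def combinatorial_laplacian(num_nodes, edges):
    # edges is a list of (i, j) node pairs of the dual graph.
    L = np.zeros((num_nodes, num_nodes))
    for i, j in edges:
        L[i, j] -= 1.0          # adjacency entries become -1
        L[j, i] -= 1.0
        L[i, i] += 1.0          # the diagonal counts incident edges (the degree)
        L[j, j] += 1.0
    return L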

With the information stored in the Laplacian matrix, features can be found. This is what the next section explains, i.e. how to extract the feature information from the Laplacian matrix by computing its eigenvectors.

3.4.3 Eigenvectors of Laplacian Matrix

The process of obtaining the eigenvectors of the Laplacian matrix starts with defining the difference d_1 as the squared distance between the face normals of faces \tau_i and \tau_j according to equation 15. The difference between two adjacent triangles is also influenced by whether the edge between the two faces is convex or concave: for concave edges the constant \eta equals 1.0, while for convex edges \eta equals 0.2.

d_1(\tau_i, \tau_j) = \frac{\eta}{2}\, \|N(\tau_i) - N(\tau_j)\|^2 \quad (15)

The difference d_1 is then used to define the weight for each pair of adjacent faces, as can be seen in equation 16

w_{ij} = |\mathrm{Edge}(\tau_i, \tau_j)|\, e^{-\frac{d_1(\tau_i, \tau_j)}{d_{average}}} \quad (16)

where |\mathrm{Edge}(\tau_i, \tau_j)| is the length of the edge between faces \tau_i and \tau_j, and d_{average} is the average of all differences d_1.

Once the weights are calculated, the Laplacian matrix can be defined as in equation 17.

L_{i,j} = \begin{cases} -w_{ij} & i \neq j \text{ and } \tau_i, \tau_j \text{ share a common edge} \\ \sum_k w_{ik} & j = i \\ 0 & \text{otherwise} \end{cases} \quad (17)

Notice that the definition of the Laplacian matrix in equation 17 is not the same as the one in figure 31, but the purpose is the same.

The final step in the process is to use the Laplacian matrix to obtain its eigenvalues and eigenvectors. The eigenvalue with value zero is a false solution and is hence ignored along with its corresponding eigenvector. The other eigenvectors are sorted in ascending order \vec{v}_0, \vec{v}_1, \vec{v}_2, \vec{v}_3, \ldots, \vec{v}_{|T|-1} with respect to their eigenvalues. Each element of the eigenvectors corresponds to a triangle, and each eigenvector indicates a feature of the mesh. If k segments are desired, the first k eigenvectors should be used in order to find the k most appropriate features to convert into segments.
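Putting equations 15-17 together, a compact sketch of the whole pipeline could look as follows (Python with numpy; all names are illustrative, and the sign of the exponent in equation 16 is our assumption, since larger normal differences should give weaker edge weights):

import numpy as np

def feature_eigenvectors(normals, edges, edge_lengths, edge_is_convex, k):
    # normals: (num_faces, 3) unit face normals.
    # edges: list of (i, j) pairs of adjacent faces; edge_lengths and
    # edge_is_convex hold |Edge(ti, tj)| and the convexity flag per pair.
    eta = np.where(np.asarray(edge_is_convex), 0.2, 1.0)
    # Equation 15: d1 = (eta / 2) * ||N(ti) - N(tj)||^2
    diffs = np.array([normals[i] - normals[j] for i, j in edges])
    d1 = 0.5 * eta * np.sum(diffs ** 2, axis=1)
    # Equation 16: edge-length-scaled affinity, normalized by the mean d1.
    w = np.asarray(edge_lengths) * np.exp(-d1 / d1.mean())
    # Equation 17: assemble the weighted Laplacian.
    n = len(normals)
    L = np.zeros((n, n))
    for (i, j), wij in zip(edges, w):
        L[i, j] = L[j, i] = -wij
        L[i, i] += wij
        L[j, j] += wij
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return vecs[:, 1:k + 1]          # skip the zero-eigenvalue vector, keep k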


4 RESULT AND DISCUSSION

Below follow the results of the methods described in the previous chapter, each result presented in the same order as in chapter 3. The D-man mesh, which is frequently used throughout this chapter to illustrate the results of the implemented algorithms, is shown in figure 32.

Figure 32: D-man contains: 14948 faces, 8736 vertices and 50 bones.

4.1 hacd

The results of how the Hierarchical Approximate Convex Decomposition method decomposes a model into convex hulls are illustrated in figure 33 below.

Figure 33: The original model gets decomposed into convex hulls. The number of convex hulls in each decomposition is specified below each figure.

The HACD method can in some cases have trouble when it comes to ordering which part should be decomposed. Figure 34 shows a situation where the HACD method is deciding whether the arm or the foot should be decomposed first, and how the method makes a bad decision on the ordering.

The top part illustrates the desired concavity measure, where the distance is measured along the vector perpendicular to the convex hull at the edge between the two decomposed parts. The bottom part, on the other hand, visualizes the actually calculated concavity, where the misleading measure is the result of measuring the distance along a suboptimal direction. This direction is the face normal; the distance is traced from each vertex of that face, and the longest distance is then used for the calculation of the concavity. Since the foot has a normal that is almost parallel to the convex hull, the resulting concavity becomes large. The foot is therefore unfortunately decomposed before the arm.

Figure 34: The segmentation decision is based on the concavity measure (the distance between the underlying object and the convex hull). The figure illustrates how the algorithm decides which part should be subdivided. The upper part of the figure illustrates the concavity measure seen from a human perspective, where the arm would get decomposed into two pieces. The bottom part of the figure illustrates how the actual HACD method makes its decision on which part should be decomposed. The HACD method can produce this unnatural decomposition order since the distance is measured from the face vertices along the face normal. The result is that the foot gets decomposed before the arm.

It is not a surprise that the direction along which the concavity is measured is badly aligned with respect to the convex hull. The HACD method prefers to decompose the object along surfaces that are not flat in relation to each other, which is a good way to get nice decompositions. But since the surfaces are not flat in relation to each other, the direction of the concavity measure will not be perpendicular to the convex hull around them.
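To make the discussed measure concrete, the sketch below traces a ray from each vertex of a face along the face normal and clips it against the convex hull, represented here as a set of outward plane equations; the longest hit distance is the concavity. This is a schematic reconstruction under our own hull representation, not the actual HACD implementation.

import numpy as np

def concavity_along_normal(face_vertices, face_normal, hull_planes):
    # hull_planes: list of (n, d) with outward unit normal n and offset d,
    # so every point p inside the hull satisfies dot(n, p) <= d.
    longest = 0.0
    for p in face_vertices:
        t_exit = np.inf
        for n, d in hull_planes:
            denom = np.dot(n, face_normal)
            if denom > 1e-9:                       # only planes the ray can exit through
                t_exit = min(t_exit, (d - np.dot(n, p)) / denom)
        longest = max(longest, t_exit)             # keep the longest distance
    return longest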


4.2 primitive estimation

The content of this section presents results regarding primitive estimation, where the results show how the different standard primitives managed to encapsulate the target mesh. The theory behind the estimation can be read in section 3.2.

4.2.1 Sphere estimation

Figure 35 shows how the primitive estimator behaves on the chosen mesh; only one vertex of the object is in contact with the bounding sphere. However, for this object three vertices should be in contact with the bounding sphere in order to obtain a perfect bounding sphere.

Figure 35: The marked region in the figure is in contact with the sphere. The object is not outside of the sphere at any point and is thus successfully encapsulated.

The reason for the single object-sphere contact point is that it is hard to find the center of the object (even with the improved method for finding the sphere center discussed in section 3.2.1). The object is, however, well centered when viewed from the top, although it could be lowered a bit when viewed from the bottom and the side.
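For comparison, the simplest centroid-based bounding sphere (a baseline sketch, not the improved center search of section 3.2.1) is shown below; such a center typically touches the hull at only the single farthest vertex, which matches the single contact point discussed above.

import numpy as np

def naive_bounding_sphere(vertices):
    center = vertices.mean(axis=0)                            # centroid as center
    radius = np.linalg.norm(vertices - center, axis=1).max()  # reach the farthest vertex
    return center, radius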


How the sphere estimation behaves on its own on a game character can be seen in figure 36, which clearly illustrates that the sphere estimator should not be used on its own.

Figure 36: A collision mesh consisting of 21 spheres viewed from three different directions (front, side and top). The underlying model consists of 21 convex hulls, where the conversion from convex hulls to the estimated spheres took 3 ms to compute.

Figure 36 shows how a complex model that has been decomposed into 21 convex hulls gets estimated with bounding spheres. The usage of bounding spheres as a collision mesh has led to a large amount of additional volume, since the bounding spheres do not resemble the complex model very well. Most of the decomposed model's pieces are oblong and are thus not well suited to be estimated with bounding spheres. This problem was discussed in section 2.3.1. One of the few pieces that is well suited to be encapsulated by a bounding sphere is the head, since it has round features, resulting in less additional volume.

4.2.2 Capsule estimation

The results of the capsule encapsulation in figure 37 show that the bounding capsule has managed to encapsulate the whole object with the minimum capsule length, thereby having one object-capsule contact point at each endpoint of the capsule. The capsule has managed to center the object well, as can be seen from the top, bottom and side views. For an optimal capsule encapsulation of the object, at least two object-capsule contact points should exist in the middle section of the capsule instead of the current single object-capsule contact point.


Figure 37: The marked regions in the figure are in contact with the capsule, thereby making the capsule fit tightly. The object is, however, not outside of the capsule at any point and is thus successfully encapsulated.

Alternative capsule orientation

The left image in figure 38 shows a badly oriented capsule, while the right image shows a capsule with the adjusted orientation. The left capsule's orientation is poor since the first step of the process is to align the capsule according to the OBB. It can be seen that the orientation of the adjusted capsule has been greatly improved by using the method illustrated in figure 22, which made it possible to center the object within the capsule. The success of the capsule with the adjusted orientation also shows in the reduced volume error.

Figure 38: Two possible orientations of the capsule that can be obtained. The final capsule to be used as bounding volume is the capsule with the lowest volume error. Left: Initial orientation of the capsule when using the OBB's orientation as guidance for the capsule's orientation; the resulting volume error is 441%. Right: The adjusted orientation of the capsule manages to reduce the volume error to 425% for this specific convex hull.


Figure 39 shows how a complex model that has been decomposed into 21 convex hulls gets estimated with bounding capsules. The capsules encapsulate the complex model with a dramatically smaller volume error than the bounding-sphere encapsulation. The estimated capsules have a tight fit for most convex hulls, and thus the resemblance between the complex model and the bounding capsules is high. Most of the convex hulls are oblong, and hence the capsule is an appropriate estimate of them. This can be seen in the side view, where the capsules are aligned and tightly fitting around the legs and feet.

Figure 39: A collision mesh consisting of 21 capsules viewed from three different directions (front, side and top). The underlying model consists of 21 convex hulls. The conversion from convex hulls to estimated capsules took 4 ms to compute.


4.2.3 OBB estimation

The OBB encapsulation in figure 40 has managed to encapsulate the whole object with six object-OBB contact points, which indicates that the OBB has a tight fit. This good result is possible since the object has a quite evenly distributed set of vertices.

Figure 40: The marked regions in the figure are in contact with the OBB. Notice that four points are always in contact with the OBB in each view, thereby making the OBB fit tightly. The object is not outside of the OBB at any point and is thus successfully encapsulated.

Figure 41 shows how a complex model that has been decomposed into 21 convex hulls gets estimated with OBBs. The OBBs are tightly fitting and appropriately aligned to the convex hulls in order to minimize the encapsulated volume. The OBB of the right foot in figure 41 (blue foot) is probably of little use, since the foot is likely to be in contact with the ground. The estimated OBB should instead be aligned with the bottom side of the foot at the cost of more encapsulated volume; this way the model would not indicate collisions until the foot actually gets close to the ground. This is, however, not an implemented feature. Although the OBBs for the feet might not be useful in an application, they do encapsulate the convex hulls appropriately.


Figure 41: A collision mesh consisting of 21 OBBs viewed from three different directions (front, side and top). The underlying model consists of 21 convex hulls, where the conversion from convex hulls to estimated OBBs took 3 ms to compute.

4.2.4 Combining the primitives

Figure 42 shows how a complex model that has been decomposed into 21 convex hulls gets estimated using the best choice from the three different primitive types (bounding sphere, bounding capsule and OBB) for each convex hull.

Figure 42: A collision mesh consisting of 21 different types of primitives viewed from three different directions (front, side and top). The choice of primitive for each convex hull is guided by the volume error between convex hull and primitive and also a primitive weight. The underlying model consists of 21 convex hulls, where the conversion from convex hulls to estimated primitives took 4 ms to compute.

For the head part the bounding sphere has been selected, for the feet the OBB has been used, and for the legs the capsule primitive has been used. The best choice of primitive is based on the expanded volume multiplied by a primitive weight, where the primitive weights are 0.4, 0.7 and 1.0 for the bounding sphere, bounding capsule and OBB respectively. The actual
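Our reading of this selection rule can be sketched in a few lines of Python. Whether the weight multiplies the expanded volume (primitive volume minus hull volume, as assumed here) or some other error measure is an assumption on our part, and the function name is our own; the weights favour primitives with cheaper intersection tests.

def pick_primitive(hull_volume, sphere_volume, capsule_volume, obb_volume):
    # Weighted expanded volume per candidate primitive; lower is better.
    scores = {
        "sphere":  0.4 * (sphere_volume  - hull_volume),
        "capsule": 0.7 * (capsule_volume - hull_volume),
        "obb":     1.0 * (obb_volume     - hull_volume),
    }
    return min(scores, key=scores.get)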
