Department of Science and Technology
Linköping University
SE-601 74 Norrköping, Sweden

LiU-ITN-TEK-A--17/054--SE

Automatic LOD selection

Isabelle Forsman


LiU-ITN-TEK-A--17/054--SE

Automatic LOD selection

Thesis work carried out in Computer Engineering

at the Institute of Technology,

Linköping University

Isabelle Forsman

Supervisor: Robin Skånberg

Examiner: Mark Eric Dieckmann



Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances.

The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/


Abstract

In this paper a method to automatically generate transition distances for LOD, improving image stability and performance, is presented. Three different methods were tested, all measuring the change between two levels of detail using the spatial frequency. The methods were implemented as an optional pre-processing step in order to determine the transition distances from multiple view directions. During run-time both view direction based selection and the furthest distance over all directions were tested in order to measure the performance and image stability. The work was implemented in the Frostbite game engine and tested using data from DICE. The result is a method that generates transition distances by calculating the spatial frequency of the exclusive or comparison between the silhouette of a ground truth mesh and each level of detail. The transition distances generated using the automatic LOD selection tool were evaluated using visual empirical tests and performance measurements comparing a scene using automatically generated distances against one using manually selected distances. The tests show that the resulting method produces good transition distances for meshes where the reduction is visible in the mesh silhouette. The values might need a bit of tweaking in order to account for image artifacts other than silhouette changes, but they provide good guidance to the artists.


Preface

I would like to thank DICE for suggesting this master's thesis and making it possible. I thank my supervisor at DICE, Daniel Johansson, for his help and patience with my questions, and Mikael Uddholm, Johannes Deligiannis and Charles de Rousiers for their suggestions, help and interest in my thesis. I would also like to thank my family and friends for their support throughout my studies.


Acronyms

CSF    Contrast sensitivity function
FOV    Field of view
LOD    Level of detail
RSF    Relative spatial frequency
SF     Spatial frequency
SH     Spherical harmonics
SIMD   Single instruction multiple data
slerp  Spherical linear interpolation


Contents

Abstract
Preface
Acronyms
List of Figures
List of Tables

1 Introduction
  1.1 Background
  1.2 Purpose
  1.3 Questions
  1.4 Restrictions
  1.5 Structure of report

2 Related work

3 Background
  3.1 Spatial frequency
  3.2 Ray-tracing
  3.3 Flood fill
  3.4 Logical operations

4 Rendering the images
  4.1 Ray-tracing
  4.2 Object distance
  4.3 Viewpoints

5 Using the silhouette
  5.1 Creating the image
  5.2 Calculating the relative spatial frequency

6 Using the features
  6.1 Creating the image
  6.2 Feature comparison
    6.2.1 Calculating the spatial frequencies
    6.2.2 Analyzing the spatial frequencies
  6.3 Depth change of each feature
    6.3.1 Depth mask
    6.3.2 Calculating the spatial frequencies

7 Using the spatial frequencies
  7.1 Determining the transition distances
  7.2 View direction
    7.2.1 Interpolation
    7.2.2 Spherical harmonics
    7.2.3 SH vs slerp

8 Result and discussion
  8.1 Result
  8.2 Discussion

9 Conclusions and future work
  9.1 Conclusions


List of Figures

3.1 Contrast grating to the left with a small spatial frequency and to the right with a high spatial frequency. Image from [9]
3.2 Contrast sensitivity function (CSF) generated for static detail positioned in the viewer's fovea. Image from [9]
3.3 Calculating the FOV for an arbitrary display device. Image from [9]
3.4 Performing flood fill on (a) using the pixel marked with an inner border as starting pixel results in (b) or (c).
3.5 Visual representation of the logical operators AND, OR and XOR.
5.1 The silhouette of the ground truth mesh (a), and the generated exclusive or between it and a LOD representation of it (b).
5.2 Corner pixel (dark gray) causing problems when counting the longest continuous line of pixels.
5.3 Performing erosion on the image in (a) results in the image in (b).
6.1 Segmenting the features; the line represents the mesh surface, the dashed lines are the vectors created between the intersection points and the dashed vector is its normal. The non-dashed vectors are the surface normals. Both cases shown in the example will be determined to belong to different features.
7.1 1D visualization of the projection process. Image from [5]
7.2 1D visualization of scaling the basis functions in order to approximate a small part of the function. Image from [5]
7.3 1D visualization of summing the scaled functions. Image from [5]
7.4 Visualization of the basis functions; l is the band index, green (light gray) represents positive values whilst red (dark gray) represents negative values. Image from [5]
8.1 Performance measurements comparing render frame time for automatic and manual transition distances.
8.2 Measurements of the shader dispatch time for manually selected and automatically generated transition distances. During the measurements of automatically generated values slerp was used for view direction based selection.
8.3 Performance measurements comparing the use of transition distances adapting to view direction and using the worst case distance.
8.4 Measurements of the render dispatch time for manually selected and automatically generated transition distances. During the measurements of automatically generated values slerp was used for view direction based selection.
8.5 Average LOD selection cost per frame.
8.6 Object rendered at the distance just before switching LOD and then after the switch has been made.
8.7 Example of a feature (light gray) being split in two by another feature (dark gray) between LOD levels.


List of Tables

3.1 Description of the logical operators used in this paper.
8.1 Results of the automatically generated LOD transition distances given in meters for the mesh shown in fig. 8.6.
8.2 Difference between generated and manually selected transition distances, rounded to meters. A negative value indicates that the transition using the automatic distances occurs before the manual transition. Dist 1, in the table, refers to the transition distance between lod 0 and lod 1 and so on.


Chapter 1

Introduction

1.1 Background

A common problem within computer graphics is the trade-off between complexity and performance. Level of detail (LOD) is a technique used to reduce the rendering complexity of a scene by rendering a mesh with less detail instead of the original mesh. For interactive real-time computer graphics applications such as computer games the different LOD levels are pre-computed offline, leaving the selection to be processed during run-time. Switching between different LOD levels can cause a distinct popping effect, causing image instabilities. These effects are caused by visible details being removed or added between the LOD levels. There exist several methods to reduce the visibility of the popping effect. A common technique, used in the Frostbite engine, is to base the selection on the distance from the camera to the mesh; this relies on the fact that as details become smaller they will be harder to perceive. A question that arises is how to determine at what distances to transition between different LOD levels without losing efficiency or user-perceived quality.

Manually selecting distances for LOD transitions can result in a loss of efficiency or visual artifacts due to overdue and premature transitions respectively. Finding transition distances manually that are good from a performance and quality perspective is also a time consuming process. Automatically selecting LOD transition distances based on metrics could ensure optimal transition distances and save the time spent on selection.

1.2 Purpose

The purpose of this master's thesis is to plan, implement and evaluate a system for computing optimal view distances for mesh LOD transitions. The work will be carried out in the Frostbite game engine and the goal is to accelerate the current work-flow, to improve image stability and to obtain LOD distances that are more optimal from a performance perspective.

1.3 Questions

• The most common detectable image artifact for LOD transitions is changes to the silhouette. Is it possible to calculate transition distances that generate image stable results based entirely on the silhouette, or will texture deviations and/or light changes still be visible?

• Is it possible to use the depth information of a mesh in order to account for de-tail changes that are not visible in the silhouette and does it improve the image stability compared to only using the silhouette?

• Is it possible to adapt the transition distances to the view direction without causing image artifacts and is it profitable for performance?

• Is it possible to use depth and mesh surface information in order to correctly segment the mesh in an image, if so does this method improve the image quality and how does the methods compare in processing time cost?

1.4 Restrictions

The LOD selection algorithm should be implemented as a pre-processing step in order to not affect the run-time performance. The entire algorithm should operate on the CPU only since the build servers are not equipped with a GPU.

The meshes that the LOD selection algorithm should operate on cannot be guaranteed to be manifold meshes, that is, meshes where each triangle edge spans exactly two triangles. This limits the use of most mesh error metrics, since most use a mesh data structure that requires the mesh to be manifold. Information about the scene lighting that a mesh will be placed in is not available either.

1.5 Structure of report

This paper will first present some previous work in the area and then describe technical concepts that are needed to understand the paper. Then the different methods tested in order to calculate LOD transition distances will be described. Lastly the results will be presented, discussed and possible future work suggested.

Throughout the paper the different LOD levels will be referred to as lod 0, lod 1 and so on where detail decreases with increasing level.


Chapter 2

Related work

The LOD selection criteria can either be based on the geometric definition of the mesh, as presented by Brown et al. [3] and Yu et al. [12], or on the rendered image of the object, as presented by Reddy [9] and Lindström et al. [7]. Both approaches have their pros and cons. Previous work based on the geometric properties of the mesh usually requires the mesh to be manifold in order to calculate the error metric used. It usually also uses information about frame rate in combination with the distance to the camera or the mesh screen size in order to select the LOD to render. Thus the transition distances are dependent on the complexity of the scene and the hardware used to render. Using an image based method the actual visual change can be measured using multiple rendered views of the model in order to measure the error. But an image based method is dependent on the scene lighting and resolution used to render.

Reddy [9, 10, 11] presented a method for selecting when a LOD should be transitioned such that the user is unaware of any visual change. He also presents a method to evaluate the extent of performance improvements possible in an application using a LOD scheme, finding that the size or distance selection criteria constitute approximately 95% of the performance improvement. It also showed that the velocity and eccentricity selection criteria produce synergic speedup and should be implemented together if used. The method Reddy [9] presented in order to calculate transition distances used the spatial frequency of an image rendered from multiple viewpoints. The image was rendered using the light properties that are used in the run-time scene. The spatial frequency is then calculated by first segmenting the image into features based on the color properties of the pixels. Then the length of the longest continuous row of pixels belonging to each feature at any angle is calculated, which is used to calculate the SF. By determining what frequencies overlap for different LODs one can determine what features exist in one LOD but not the other. By finding the feature with the least spatial frequency that does not exist in the next LOD level, a threshold can be set to determine at what distance the LOD can swap without there being any visual artifacts. The highest spatial frequency perceivable to a human can then be calculated using the resolution and field of view (FOV) properties. By using the maximum displayable spatial frequency and the run-time information about eccentricity and velocity the optimal LOD can be selected. This method however requires that the changes between LODs are such that either a feature exists or it does not. Deciding a transition distance based on a feature that decreases in detail is not possible, since the method will determine the distance at which the largest non-overlapping feature in the detailed LOD is no longer visible.

J. Cohen et al. [4] presented a method to estimate the error between meshes using the texture deviation. This was done by comparing the texture coordinates for each vertex in the lower resolution mesh to the vertex in the original mesh that is spatially closest. The distance between texture coordinates is then used as an error metric in order to approximate the texture deviation introduced with the LOD.

Yu et al. [12] presented a method based on the span of levels changed, calculating the level changed between mesh LODs based on the geometric properties of the mesh. An object importance value is then calculated using run-time information about position or screen size. The object's level of detail is then selected based on the importance value and a given frames-per-second requirement.

Pastor [8] presented two LOD selection techniques that use spatio-temporal error in order to achieve an emphasis on temporal quality when the user interaction speed is high and visual quality otherwise. Thus the method uses real time information about speed and at what angle the object is placed within the field of view in order to determine the transition distance.

Larkin et al. [6] evaluated the three most common perceivable artifacts introduced by mesh simplification: texture, silhouette and lighting artifacts. They found that changes in the object silhouette are the easiest and most common artifact to detect. Lighting artifacts are more commonly detected than texture artifacts, but the response time for detecting texture artifacts at close ranges is less than for lighting artifacts. Larkin et al. [6] also found that animations do not accentuate or mask any of the artifacts.


Chapter 3

Background

In order to get a good understanding of the paper, knowledge about some technical concepts is needed. This chapter will present and describe these concepts. Later on in the paper it will be described how they have been used within the project.

Since the aim of this project is to determine transition distances, using a pre-processing algorithm that improves the image stability by reducing the visible LOD-popping, it was decided to use an image based method based on the work by Reddy [9]. However, because the method should operate entirely on the CPU and since information about scene lighting is not available, the method has been adapted to fit the problem at hand.

3.1 Spatial frequency

A contrast grating is a sinusoidally changing contrast pattern creating light and dark bars, see fig. 3.1, that is used to measure the visual acuity in humans. The visual acuity is a measure of the smallest detail a viewer is able to perceive. The measurement only considers the size of the object, not the contrast; as such it is performed under optimal light conditions. The appearance of a contrast grating is dependent on its spatial frequency and its contrast. The distance between the bars of a contrast grating gives the spatial frequency, which is measured in units of cycles per degree of visual field (c/deg). A large distance between the bars of a contrast grating implies a small spatial frequency; thus the spatial frequency is inversely proportional to size. The contrast measures the difference in luminance between adjacent light and dark bars. Empirical research has shown that the visibility of the details is dependent on the contrast grating's orientation, its velocity across the retina, at what degree it is placed in our field of view and the background illumination.

Figure 3.1: Contrast grating to the left with a small spatial frequency and to the right with a high spatial frequency. Image from [9]

A graph called the contrast sensitivity function (CSF) was generated using data from empirical contrast grating measurements [9], see fig. 3.2. The CSF indicates the threshold of vision for a number of spatial frequencies. Spatial frequency and contrast combinations below the CSF curve will be visible to the viewer whilst combinations above the curve will not. The CSF shown in fig. 3.2 was generated using data from static detail. CSFs for moving objects show that the viewer will be able to perceive less detail compared to if the object had been static. The visual acuity in the CSF is given by a threshold contrast value of 1.

Figure 3.2: Contrast sensitivity function (CSF) generated for static detail positioned in the viewer's fovea. Image from [9]

By considering a display device as a contrast grating the highest displayable frequency can be calculated. As previously mentioned the highest perceivable frequency is dependent on the field of view it occupies in the viewer's visual field. The horizontal and vertical field of view for an arbitrary display can be calculated using eq. 3.1 and eq. 3.2 respectively, where the width and height are the measurements of the screen and the distance is the distance between the viewer and the display, see fig. 3.3.

$$FOV_{horiz} = 2 \arctan\left(\frac{width}{2 \cdot distance}\right) \quad (3.1)$$

$$FOV_{vert} = 2 \arctan\left(\frac{height}{2 \cdot distance}\right) \quad (3.2)$$

Figure 3.3: Calculating the FOV for an arbitrary display device. Image from [9]

Each pixel in a display device can be considered as half a contrast cycle. Thus the highest displayable spatial frequency (ε) is equal to half of its pixel resolution. By using the horizontal and vertical angular resolution of the display, calculated in eq. 3.1 and 3.2, the highest displayable spatial frequency can be calculated using eq. 3.3.

$$\varepsilon = \max\left(\frac{resolution_{horiz}}{2\, FOV_{horiz}},\ \frac{resolution_{vert}}{2\, FOV_{vert}}\right) \quad (3.3)$$

The highest frequency that a viewer is expected to perceive in an object can be calculated using the CSF and the highest displayable frequency (ε). This is done by thresholding the CSF with the highest displayable frequency ε.

The relative spatial frequencies (RSF) of details rendered in an image can be calculated by measuring the length of the longest continuous line of pixels within each detail. The relative spatial frequency is then calculated by:

$$RSF(\theta) = \frac{1}{2\, l(\theta)} \quad (3.4)$$

where θ is the angle of the continuous line and l(θ) is the length at that angle. The length is multiplied by two since the frequency is a measurement between two sinusoidal bars; the length of the continuous line of pixels represents the width of one sinusoidal bar. This gives the RSF in the unit of pixels per degree (p/deg). In order to compare the measured RSF with the calculated highest displayable spatial frequency (ε) it needs to be converted into units of cycles per degree (c/deg). The values can be converted by extracting the horizontal and vertical components of the relative spatial frequency; this is done by considering RSF(θ) as the hypotenuse of a triangle. The horizontal and vertical components can then be extracted using the following equations:

$$C_{horiz} = RSF(\theta)\cos(\theta) \quad (3.5)$$

$$C_{vert} = RSF(\theta)\sin(\theta) \quad (3.6)$$

The extracted components need to be scaled depending on the display resolution and FOV. The scaling factors are calculated by:

$$S_{horiz} = \frac{width}{FOV_{horiz}} \quad (3.7)$$

$$S_{vert} = \frac{height}{FOV_{vert}} \quad (3.8)$$

where width and height are the pixel resolution of the display and $FOV_{horiz}$ and $FOV_{vert}$ are calculated using 3.1 and 3.2 respectively. The spatial frequency in units of (c/deg) is then given by:

$$SF(\theta) = \sqrt{(S_{horiz} C_{horiz})^2 + (S_{vert} C_{vert})^2} \quad (3.9)$$
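To make the conversion concrete, the following sketch (all names are illustrative and not taken from the thesis implementation; angles are handled in degrees and a flat display is assumed) computes the highest displayable frequency of eq. 3.3 and converts a measured pixel-line length into a spatial frequency in cycles per degree via eqs. 3.4-3.9.

```cpp
#include <cmath>
#include <algorithm>

// Display description: physical size, viewing distance and pixel resolution.
struct Display {
    double widthMeters, heightMeters, distanceMeters;
    int    resolutionX, resolutionY;
};

const double kRadToDeg = 57.29577951308232;  // degrees per radian

// Eq. 3.1 / 3.2: horizontal and vertical field of view in degrees.
double FovHorizDeg(const Display& d) {
    return 2.0 * std::atan(d.widthMeters / (2.0 * d.distanceMeters)) * kRadToDeg;
}
double FovVertDeg(const Display& d) {
    return 2.0 * std::atan(d.heightMeters / (2.0 * d.distanceMeters)) * kRadToDeg;
}

// Eq. 3.3: highest displayable spatial frequency in cycles per degree,
// treating each pixel as half a contrast cycle.
double HighestDisplayableFrequency(const Display& d) {
    return std::max(d.resolutionX / (2.0 * FovHorizDeg(d)),
                    d.resolutionY / (2.0 * FovVertDeg(d)));
}

// Eqs. 3.4-3.9: convert the longest continuous pixel line at angle thetaDeg
// into a spatial frequency in cycles per degree.
double SpatialFrequency(const Display& d, double lineLengthPixels, double thetaDeg) {
    const double theta = thetaDeg / kRadToDeg;
    const double rsf = 1.0 / (2.0 * lineLengthPixels);   // eq. 3.4 (p/deg)
    const double cH  = rsf * std::cos(theta);            // eq. 3.5
    const double cV  = rsf * std::sin(theta);            // eq. 3.6
    const double sH  = d.resolutionX / FovHorizDeg(d);   // eq. 3.7
    const double sV  = d.resolutionY / FovVertDeg(d);    // eq. 3.8
    const double h = sH * cH, v = sV * cV;
    return std::sqrt(h * h + v * v);                     // eq. 3.9
}
```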

3.2 Ray-tracing

Ray-tracing is the process of tracing a ray from the camera through each pixel on the view-plane and out into the scene. The ray is used in order to determine where in the world coordinate system the ray intersected an object, if any, and details about the surface it intersected. In order to determine the point of intersection the ray is intersection tested against all objects within the scene. The intersection tests can either be done implicitly or, for arbitrary meshes, for each triangle in the mesh. Ray-tracing is commonly used in physically based rendering in order to gather information about how light traverses the scene.
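As a rough illustration of this ray set-up (not taken from the thesis or the Frostbite engine; the pinhole camera model and all names are assumptions), a primary ray through a pixel can be generated as follows.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 Normalize(Vec3 v) {
    const double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

struct Ray { Vec3 origin; Vec3 dir; };

// A primary ray through the center of pixel (px, py) for a pinhole camera at the
// origin looking down -z with a vertical field of view fovY given in radians.
Ray PrimaryRay(int px, int py, int width, int height, double fovY) {
    const double aspect  = double(width) / double(height);
    const double tanHalf = std::tan(fovY * 0.5);
    // Map the pixel center to [-1, 1] normalized device coordinates.
    const double x = (2.0 * (px + 0.5) / width - 1.0) * aspect * tanHalf;
    const double y = (1.0 - 2.0 * (py + 0.5) / height) * tanHalf;
    return { {0.0, 0.0, 0.0}, Normalize({x, y, -1.0}) };
}
```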

3.3 Flood fill

Flood fill is an algorithm that, given a starting node, determines the area surrounding it: if the data in the surrounding nodes are the same, the nodes belong to the same area. The algorithm operates by comparing the data in the nodes surrounding the starting node against the value of the starting node; all nodes that satisfy the requirement are given a color and the neighbors of those new nodes are tested in turn. Depending upon the situation the requirement might be that the nodes should be equal to, or approximately equal to, the starting node. The algorithm can be 4-way based, considering the horizontal and vertical neighbors, or 8-way based, which also accounts for the diagonal neighbors, see fig. 3.4. The implementation is done using either a stack or a queue.

Figure 3.4: Performing flood fill on (a) using the pixel marked with an inner border as starting pixel results in (b) or (c). Panel (a): before flood fill; panel (b): 4-way flood fill; panel (c): 8-way flood fill.
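A minimal sketch of a stack-based 4-way flood fill as described above (the integer-label image and the exact-equality requirement are assumptions; an 8-way variant would additionally push the diagonal neighbors):

```cpp
#include <vector>
#include <stack>
#include <utility>

// 4-way flood fill: starting at (sx, sy), recolor every pixel connected to the
// start pixel that holds the same value. 'pixels' is a row-major width*height grid.
void FloodFill4(std::vector<int>& pixels, int width, int height,
                int sx, int sy, int fillValue) {
    const int target = pixels[sy * width + sx];
    if (target == fillValue) return;                 // nothing to do

    std::stack<std::pair<int, int>> pending;
    pending.push({sx, sy});
    while (!pending.empty()) {
        auto [x, y] = pending.top();
        pending.pop();
        if (x < 0 || y < 0 || x >= width || y >= height) continue;
        int& p = pixels[y * width + x];
        if (p != target) continue;                   // outside the area
        p = fillValue;                               // color the pixel
        pending.push({x + 1, y});                    // horizontal neighbors
        pending.push({x - 1, y});
        pending.push({x, y + 1});                    // vertical neighbors
        pending.push({x, y - 1});
    }
}
```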

3.4 Logical operations

Logical operators are operators that compare two binary values and return a binary value. Computers can perform logical operations in a bit-wise fashion, comparing each corresponding bit in two values and returning a value of the same type with the result. There exist multiple types of logical operators; those used in this paper can be found in table 3.1.

Logical operator     Explanation                                                         Bit-wise operation
And (∧)              Returns true if both values are true and false otherwise.           AND (&)
Inclusive or (∨)     Returns true if one or both values are true and false otherwise.    OR (|)
Exclusive or (⊕)     Returns true if exactly one value is true, false if none or both.   XOR (^)

Table 3.1: Description of the logical operators used in this paper.

Figure 3.5: Visual representation of the logical operators (a) AND, (b) OR and (c) XOR.


Chapter 4

Rendering the images

4.1 Ray-tracing

In order to measure the change in spatial frequency between the LODs the meshes first need to be rendered using the resolution and field of view (FOV) settings used during run time. Since the algorithm is used as a preprocessing step that should be able to run entirely on the CPU, the image is rendered using ray-tracing, explained in sec. 3.2. By ray-tracing the scene the data available are the world coordinates of the intersection and the surface normal at the intersection point.

4.2 Object distance

Before the ray-tracing stage can be executed we need to determine the shortest distance to the camera at which the entire mesh will fit the image from every view angle. By using the minimum bounding radius of the mesh we know the minimum size that needs to fit the image when the mesh is rendered in the center. By projecting a sphere, with a radius equal to the minimum bounding radius of the mesh, onto clip space the radius size in pixels can be evaluated, see eq. 4.1, where D is the distance between the camera and the object and FOV_y is the vertical field of view. Since the desired radius size in pixels, r_pixels, is known we can calculate the distance between the camera and the bounding sphere that produces the desired result, see eq. 4.2. Since the field of view can be changed in the settings by the user, as well as changed during run time when the user is aiming down sights (ADS), a default value for the FOV is used. This is then compensated for by calculating a scaling factor for the distance values based on the actual FOV that is in use.

$$r_{pixels} = \frac{r_{world}}{\tan\!\left(\frac{FOV_y}{2}\right) D} \cdot \frac{resolution_y}{2} \quad (4.1)$$

$$D = \frac{r_{world}}{\tan\!\left(\frac{FOV_y}{2}\right)} \cdot \frac{resolution_y}{2} \cdot \frac{1}{r_{pixels}} \quad (4.2)$$
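As an illustration of eq. 4.2, the sketch below computes the camera distance that makes a bounding sphere cover a desired pixel radius (function and parameter names are placeholders; the desired pixel radius and the default FOV are inputs chosen by the tool, not values stated here).

```cpp
#include <cmath>

// Eq. 4.2: distance D at which a bounding sphere of radius rWorld projects to
// approximately rPixels pixels, given a vertical field of view fovY (radians)
// and a vertical resolution resolutionY.
double RenderDistance(double rWorld, double rPixels, double fovY, int resolutionY) {
    return rWorld / std::tan(fovY * 0.5) * (resolutionY * 0.5) / rPixels;
}

// Example (hypothetical values): a 1 m bounding radius rendered to roughly
// 400 pixels at 1080p with a 60 degree vertical FOV.
// double d = RenderDistance(1.0, 400.0, 60.0 / 57.2957795, 1080);
```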

4.3 Viewpoints

In order to capture all features on the mesh the image is rendered from multiple viewpoints. Selecting uniformly distributed viewpoints on the sphere can be done using spherical tessellation as presented by Borgefors [2]. The number of viewpoints needed to generate a good result is dependent on the complexity of the mesh. During this project a set of six viewpoints was used in order to keep the processing time to a minimum.


Chapter 5

Using the silhouette

One method used to determine transition distances was measuring the spatial frequency of the exclusive or between the original ground truth mesh and a LOD level. This chapter will present and explain how this was implemented.

5.1 Creating the image

By using the data available from the ray-tracing step an image of the mesh silhouette can be rendered. By comparing the silhouette of the mesh with the highest level of detail against the silhouette of the other LODs, the change between them is saved as an image using the exclusive or operation; an example image can be seen in fig. 5.1.

Since the ray-tracing step is performed using perspective projection the silhouette will be the perspective silhouette, which in some cases differs between opposite viewpoints. Thus opposite viewpoints are used both during the processing step and during view direction based selection.

5.2 Calculating the relative spatial frequency

In order to determine the relative spatial frequency of the largest silhouette change the length of the longest continuous line of pixels that was marked as changed needs to be determined. This is done by determining the length of the longest continuous horizontal and vertical pixel line. However, the length of the longest continuous pixel line cannot be calculated by simply counting the pixels in the image, since if the silhouette contains a change vertically and a change horizontally that merge, the pixels residing in the merge region will generate large continuous pixel counts, see fig. 5.2. Thus the length of the longest continuous line of pixels is measured by eroding the silhouette mask.

Figure 5.1: The silhouette of the ground truth mesh (a), and the generated exclusive or between it and a LOD representation of it (b).

Figure 5.2: Corner pixel (dark gray) causing problems when counting the longest continuous line of pixels.

Erosion is a binary process where each pixel is compared against its neighboring pixels to determine whether it is an edge pixel or not. All edge pixels are then assigned the value of zero, see fig. 5.3. The number of erosions needed in order to remove all pixels with a value of one is the pixel radius of the largest circle that will fit in the silhouette change. Thus the length of the longest continuous line of pixels is the number of erosions multiplied by two. The erosion is performed by comparing each pixel with its horizontal and vertical neighbors; if one of the neighboring pixels contains the value zero the current pixel should also be zero. This is evaluated using bit-wise operations: first the pixel is compared against the neighboring pixels using the AND operator, then the results from the different comparisons are merged using the same operator. In order to determine whether the current erosion pass eroded any pixels the resulting pixel value is compared against the old value; if they are not the same a flag is set to indicate that something changed in the image. The bit-wise XOR can be used to determine whether the pixels contain the same value whilst bit-wise OR is used in order to assign the flag value. The formula used can be seen in eq. 5.1, where H and V are the horizontal and vertical neighboring pixels and P is the center pixel. When the length of the longest continuous pixel line is known the RSF can be calculated using eq. 3.4.

$$flag = flag \vee \left( P_o \oplus \left( (H_r \wedge P) \wedge (H_l \wedge P) \wedge (V_a \wedge P) \wedge (V_b \wedge P) \right) \right) \quad (5.1)$$

Figure 5.3: Performing erosion on the image in (a) results in the image in (b).
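A sketch of one erosion pass implementing the bit-wise formulation of eq. 5.1 (a brute force, pixel-by-pixel version; treating out-of-image neighbors as zero is an assumption):

```cpp
#include <vector>
#include <cstdint>

// One erosion pass over a binary mask (0 or 1 per pixel), writing into a
// separate destination buffer of the same size. Returns true if any pixel
// changed, mirroring the 'flag' in eq. 5.1. Out-of-image neighbors count as 0.
bool ErodePass(const std::vector<uint8_t>& src, std::vector<uint8_t>& dst,
               int width, int height) {
    auto at = [&](int x, int y) -> uint8_t {
        if (x < 0 || y < 0 || x >= width || y >= height) return 0;
        return src[y * width + x];
    };
    bool changed = false;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const uint8_t p = at(x, y);
            // (Hr & P) & (Hl & P) & (Va & P) & (Vb & P)
            const uint8_t eroded = (at(x + 1, y) & p) & (at(x - 1, y) & p) &
                                   (at(x, y + 1) & p) & (at(x, y - 1) & p);
            changed = changed | ((p ^ eroded) != 0);   // flag |= (Po xor result)
            dst[y * width + x] = eroded;
        }
    }
    return changed;
}

// Calling ErodePass repeatedly (swapping src/dst) until it returns false gives
// the erosion count; the longest continuous pixel line is twice that count (sec. 5.2).
```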

The method is performed in a brute force pixel-by-pixel fashion, but other methods have been implemented and compared in order to find an implementation with good performance. It was tested to store the image data using Morton code in order to increase the cache coherency during image data access. Morton code does not store the pixel data linearly but rather in a Z pattern, increasing the cache coherency for operations where neighboring data is required, such as interpolation, and it is the default way the GPU stores images in memory. Details about Morton code are beyond the scope of this report and the curious reader is referred to the explanation presented by Baert [1]. Morton code did however not result in an improvement due to the introduced overhead of indexing. It was also tested to utilize the single instruction multiple data (SIMD) capabilities of the CPU. The tested SIMD implementation operated on a six by six pixel grid at a time, in order to reduce the amount of discontinuous memory reads during vertical erosion. In order to sort the data for vertical erosion, shuffles were performed on the vectors. However, shuffle is a costly operation and thus resulted in a slower solution than the brute force one. The method can be implemented in SIMD entirely without shuffle operations by transposing the source image before vertical erosion. Transposing the image can be done in parallel by storing the transposed data into a new image. This also increases the cache coherency since values do not need to be swapped and can thus be read coherently. The overhead for transposing the image is however large, requiring more time than one erosion pass with the brute force implementation. Since the image needs to be transposed for each new erosion pass this method would not present a performance improvement either.


Chapter 6

Using the features

Two methods for determining the transition distances based on the spatial frequency of features in an image as presented by Reddy [9] have been implemented and tested. The difference between them is how the frequencies are calculated. This chapter will first describe the common factors of both methods and then present the two different ways of measuring the frequencies.

6.1 Creating the image

Using the data from the ray-tracing step, see sec. 3.2, the mesh can be segmented into features; a feature is in this case an area on the mesh surface with similar structure. The data from the ray-tracing step is first stored into an image. In order to generate a segmented image of the mesh features the stored data is searched in order to find a pixel containing valid data. Once a pixel with valid data has been found its adjacent pixels are evaluated to determine if they belong to the same feature. Two pixels are considered to contain information about the same feature if the vector between the intersection points and the surface normals are approximately orthogonal and if the angle between the surface normals is less than a given threshold, see fig. 6.1. This is then continued on the adjacent pixels, causing a flood fill. Every detected feature is given a unique ID that is saved in an image and the data from the ray-tracing stage is removed. In order to support meshes containing multiple shells the pixel values of already segmented features are marked as invalid data and the process is continued until all pixels in the image are invalid.

Figure 6.1: Segmenting the features; the line represents the mesh surface, the dashed lines are the vectors created between the intersection points and the dashed vector is its normal. The non-dashed vectors are the surface normals. Both cases shown in the example will be determined to belong to different features.
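A sketch of the pixel-similarity test used when growing a feature (fig. 6.1): two neighboring samples belong to the same feature if the vector between their intersection points is roughly orthogonal to both surface normals and the normals are similar. The thresholds and names are illustrative, not the thesis' actual values.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 Sub(const Vec3& a, const Vec3& b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 Normalize(Vec3 v) {
    const double len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// True if two ray-traced samples (intersection point plus surface normal) should
// be assigned to the same feature. Thresholds are placeholders.
bool SameFeature(const Vec3& pointA, const Vec3& normalA,
                 const Vec3& pointB, const Vec3& normalB,
                 double orthoThreshold = 0.1,       // |cos| of edge vector vs. normal
                 double normalCosThreshold = 0.9) { // cos of normal vs. normal
    const Vec3 edge = Normalize(Sub(pointB, pointA));
    const bool edgeOrthogonal = std::fabs(Dot(edge, normalA)) < orthoThreshold &&
                                std::fabs(Dot(edge, normalB)) < orthoThreshold;
    const bool normalsSimilar = Dot(normalA, normalB) > normalCosThreshold;
    return edgeOrthogonal && normalsSimilar;
}
```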

6.2 Feature comparison

6.2.1 Calculating the spatial frequencies

As explained in sec. 3.1 the RSF is measured from the length of the longest continuous line of pixels. In this case the RSF is measured for each feature by counting the length of the longest continuous line of pixels in the segmented image. The pixels are counted using brute force methods in order to support arbitrary meshes. Given the length of the longest line of pixels the RSF is calculated using eq. 3.4. The difference in RSF for a feature between different LODs is then used to estimate how much each feature changed between LODs.

6.2.2 Analyzing the spatial frequencies

Connecting features between LODs

In order to determine the change in frequency between features on the mesh the features must first be connected between LODs. This is done by iterating through every pixel of each feature of a LOD and saving each unique feature id stored in the corresponding pixels of the LOD that is one level more detailed. This process is performed along with the segmentation step, described in sec.6.1. The frequencies for the more detailed LOD can then be accessed using the saved feature ID values.

If the feature is only contained within one feature in the detailed LOD and the difference in SF between the two is large the object is assumed to have been removed. The same is true if the feature is contained within multiple features, all of which have a large difference in SF in comparison to the less detailed feature. If the difference in SF between the feature and any of the detailed features it is contained within is lower than a threshold it is considered to have been reduced in quality but not removed entirely. Thus the SF of the change is stored to represent how the feature changed for the LOD.

Some feature changes can cause incorrect frequency change measurements due to merging features. A merge of features is defined such that the combined features in the detailed LOD build up a feature in the less detailed LOD. For example, for a pyramid viewed from above, each side of the pyramid will be segmented into a feature. If the pyramid in the next LOD is collapsed into a quad the difference in RSF between LODs will be equal to the RSF of each triangle side, causing a small spatial frequency. From this view however the only change visible to the viewer will be the change in lighting and texture. Thus the change is incorrectly measured, which causes problems with transition distances being too far away. In order to avoid this problem merges of features are detected and the RSF is not measured. Merges are detected by counting the number of occurrences of each detailed feature in the less detailed mesh. If a feature is only contained within one other feature it has either been removed or merged. A merge is assumed to have occurred if there are multiple high detailed features that are contained within a less detailed feature and each feature represents some part of the less detailed feature's edge. If it does not represent some part of a feature's edge it is considered to have been removed. The frequency of a low detailed feature that contains a removed feature is not measured since the change might have been due to the removed feature. Using this method each feature can be classified as removed, changed or merged and its appropriate frequency can be calculated.

6.3 Depth change of each feature

6.3.1 Depth mask

In order to be able to determine how much each feature has changed a depth change mask is created. The depth change mask is generated by comparing the depth recorded in the ray-tracing step for the mesh with the highest detail against the depth recorded for the other LODs. In other words the depth map of the highest LOD is compared against the depth map of the other LODs. If the difference between the depth values stored in the depth maps is larger than a threshold the pixel is considered to contain a part of the mesh that has changed between LODs and the corresponding pixel in the depth mask is assigned the value of one, otherwise it is assigned the value of zero.
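A sketch of the depth change mask generation described above (names and the depth threshold are placeholders; both depth maps are assumed to share the same resolution and units):

```cpp
#include <vector>
#include <cmath>
#include <cstdint>

// Build a binary depth change mask: 1 where the LOD depth deviates from the
// ground truth depth by more than 'threshold', 0 otherwise.
std::vector<uint8_t> DepthChangeMask(const std::vector<float>& groundTruthDepth,
                                     const std::vector<float>& lodDepth,
                                     float threshold) {
    std::vector<uint8_t> mask(groundTruthDepth.size());
    for (size_t i = 0; i < groundTruthDepth.size(); ++i) {
        mask[i] = std::fabs(groundTruthDepth[i] - lodDepth[i]) > threshold ? 1 : 0;
    }
    return mask;
}
```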

6.3.2 Calculating the spatial frequencies

As described in sec. 3.1 the spatial frequency is calculated by determining the length of the longest continuous line of pixels, but instead of calculating the RSF from the silhouette change the depth map is used. In order to determine the change of each feature, the depth mask is used in combination with the images containing the segmented features. The longest continuous line of pixels for each feature is then counted by checking if the pixel in the depth mask indicates that the mesh at that position has changed. If there was a change the feature ID in the corresponding pixel in the segmented image is fetched and the image is traversed horizontally and vertically, counting the pixels until a pixel that was not changed or belongs to another feature is reached.

When the length of the longest continuous pixel line for each feature is known, the relative spatial frequencies are calculated using eq. 3.4.


Chapter 7

Using the spatial frequencies

7.1 Determining the transition distances

In order to determine the transition distances between the LODs from the measured minimum frequency, the rule of thumb that the frequency increases linearly as the size decreases is used. Thus the required size decrease can be calculated from the required frequency increase. The required frequency increase is determined by defining a highest displayable frequency and calculating an increase factor using the minimum frequency change found. The transition distance is then determined by multiplying the scaling factor with the render distance calculated in eq. 4.2.
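Expressed as code, the scaling described above might look as follows (a sketch under the stated linear rule of thumb; function and parameter names are illustrative):

```cpp
// Rule of thumb from sec. 7.1: moving an object further away scales its measured
// spatial frequencies up linearly, so the LOD may swap once the smallest measured
// frequency change reaches the target (e.g. highest displayable) frequency.
double TransitionDistance(double renderDistance,        // distance used when rendering (eq. 4.2)
                          double minMeasuredFrequency,  // smallest SF change between the LODs (c/deg)
                          double targetFrequency) {     // quality threshold (c/deg)
    const double frequencyIncrease = targetFrequency / minMeasuredFrequency;
    return renderDistance * frequencyIncrease;
}
```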

7.2 View direction

Since the detail reduction on a mesh is not uniform, different views will generate different sets of transition distances. By using the distance to the camera in combination with the run-time view direction the transition distances can accommodate this non-uniform reduction and save on level complexity at the cost of selection overhead. Since transition distances cannot be calculated for all possible angles, spherical linear interpolation (slerp) or spherical harmonics (SH) can be used to approximate values for the view directions in between.

7.2.1 Interpolation

Given the run-time view direction the transition distance can be interpolated from the transition distances calculated during the pre-processing step. By determining the angle between the run-time view direction and the pre-processing view directions, the angle can be used as a scaling factor for each pre-processed transition distance. The offline view direction that is closest to the run-time view direction will contribute the most to the final transition distance. The scaling factor is calculated using the dot product between the normalized run-time view direction and the pre-processing view direction, giving the cosine of the angle between the vectors. If the result is negative the vectors point in different directions; the contribution from that direction is zero, but the absolute value of the result is the cosine of the angle to the opposite view direction. Thus the transition distance for an arbitrary view direction is given by finding the three offline view directions with the smallest angle, scaling and summing their transition distances. Since six view directions, the axes of a Cartesian coordinate system, were used for this method the sum of the scaling factors will be equal to one and thus there is no need for normalization.
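A sketch of this weighted run-time selection, assuming the six pre-processed view directions are the positive and negative Cartesian axes as in this project (names and data layout are illustrative, not the engine's actual implementation):

```cpp
#include <algorithm>

struct Vec3 { double x, y, z; };

// Transition distances pre-computed for the six axis view directions,
// ordered +x, -x, +y, -y, +z, -z.
struct AxisDistances { double d[6]; };

// Weighted selection for an arbitrary normalized view direction. Each axis
// contributes its distance scaled by the clamped cosine of the angle to the
// view direction; axes facing away from the viewer contribute zero.
double SelectTransitionDistance(const AxisDistances& dist, const Vec3& viewDir) {
    const double axes[6][3] = { { 1, 0, 0 }, { -1, 0, 0 }, { 0, 1, 0 },
                                { 0, -1, 0 }, { 0, 0, 1 }, { 0, 0, -1 } };
    double result = 0.0;
    for (int i = 0; i < 6; ++i) {
        const double cosAngle = viewDir.x * axes[i][0] +
                                viewDir.y * axes[i][1] +
                                viewDir.z * axes[i][2];
        result += std::max(cosAngle, 0.0) * dist.d[i];   // negative => opposite side
    }
    return result;
}
```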

7.2.2 Spherical harmonics

Spherical harmonics is a method for representing a 2D function on the surface of a sphere; this is done by summing and scaling basis functions [5]. Basis functions are pieces of signal that can be used to approximate a function by scaling and summing multiple basis functions. In order to be able to approximate a function using basis functions the scaling factor for each basis function needs to be determined; this is done through projection. The projection process works by determining how good of an approximation each basis function is to the original function. This is done by integrating the product f(x)B_i(x) and results in a coefficient c_i for each basis function B_i, see fig. 7.1. Scaling each basis function with the resulting coefficient results in an approximation of the function over the basis function span, see fig. 7.2. Summing the product c_i B_i(x) for all basis functions results in an approximation of the original function f(x), see fig. 7.3. The number of coefficients and basis functions depends on the order of the SH. Increasing the order results in a better approximation of the 2D function, but an increased amount of coefficients and basis functions is needed. As the order of the SH increases a new band of basis functions is added. There are 2l + 1 basis functions for band l, and a SH of order n evaluates all the basis functions through order n − 1, see fig. 7.4. The basis functions are calculated as follows:

$$y_l^m = \begin{cases} \sqrt{2}\, K_l^m \cos(m\phi)\, P_l^m(\cos\theta), & m > 0 \\ \sqrt{2}\, K_l^m \sin(-m\phi)\, P_l^{-m}(\cos\theta), & m < 0 \\ K_l^0\, P_l^0(\cos\theta), & m = 0 \end{cases}$$

where $P_l^m$ is the associated Legendre polynomial and $K_l^m$ is a scaling factor used to normalize the basis functions:

$$K_l^m = \sqrt{\frac{(2l + 1)(l - |m|)!}{4\pi (l + |m|)!}}$$

Figure 7.1: 1D visualization of the projection process. Image from [5]

Figure 7.2: 1D visualization of scaling the basis functions in order to approximate a small part of the function. Image from [5]

Figure 7.3: 1D visualization of summing the scaled functions. Image from [5]

The projection step is performed during pre-processing. The coefficients are saved and then used during run-time in order to calculate the approximate function value by scaling the basis functions. By using SH a large amount of sample values can be recalculated using a small set of coefficients and basis functions. Thus SH is a good method for compressing diffuse data defined on the surface of a sphere.

Figure 7.4: Visualization of the basis functions; l is the band index, green (light gray) represents positive values whilst red (dark gray) represents negative values. Image from [5]
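For illustration, the sketch below reconstructs a scalar value such as a transition distance from a low-order SH representation (bands l = 0 and l = 1, i.e. four coefficients); the constants are the standard real SH basis values, while the coefficient layout and names are assumptions rather than the thesis' implementation.

```cpp
struct Vec3 { double x, y, z; };   // assumed to be a normalized view direction

// Reconstruct a scalar function on the sphere from four SH coefficients
// (band 0 has one basis function, band 1 has three). The constants are the
// standard real spherical harmonic basis values y_0^0 and y_1^{-1,0,1}.
double EvaluateSH2(const double coeff[4], const Vec3& dir) {
    const double y00 = 0.282095;   // 1 / (2 sqrt(pi))
    const double y1  = 0.488603;   // sqrt(3) / (2 sqrt(pi))
    return coeff[0] * y00 +
           coeff[1] * y1 * dir.y +  // y_1^{-1}
           coeff[2] * y1 * dir.z +  // y_1^{ 0}
           coeff[3] * y1 * dir.x;   // y_1^{ 1}
}
```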

7.2.3 SH vs slerp

In comparison, by using SH the function over the sphere can be approximated using a rather small number of coefficients and basis functions, which is useful to compress a large sample set. However, if the number of sample points is small, slerp is likely to require less stored data and less computation, depending on the SH order used.

If no run-time selection should be added by the method, the furthest transition distance can be selected. Since some view directions are rarely viewed, the transition distances are preferably weighted by a given view direction importance value.

In this project both run-time view dependent selection, using SH and slerp, and non-view dependent selection will be tested and compared.


Chapter 8

Result and discussion

8.1 Result

In order to measure the quality of the generated transition distance values a visual comparison was performed to compare the image stability between the automatically generated transition distances and the manual distances. The visual comparison was empirically tested by letting a viewer traverse a level using both automatically generated distances and manually selected distances in order to approximate the visible LOD popping effect that would be perceivable in the in-game environment. The tests showed that the viewer was in some cases still able to detect lighting and/or texture artifacts introduced when switching LOD. Silhouette changes however were not detected by the viewer. In order to compare the performance between manual and automatic transition distances and the effect of view direction based transition distances, performance tests were performed. The performance tests were executed by placing the camera at the same position and direction for each method and capturing performance data. This was executed at multiple positions in order to gather a good overall representation of the quality vs. performance. The performance tests were measured using the PlayStation 4 console. During the performance tests the impact on frame time, dispatch time and time spent on LOD selection was measured. All performance measurements are given in milliseconds throughout the paper. The results showed that the cost difference in frame time was negligible, see fig. 8.1. The results for dispatch time showed that the automatically generated LOD transition values using view direction based slerp selection used on average 0.745 ms less than the manually selected transition distances, see fig. 8.2. Not using view direction based selection cost on average 0.11 ms more than using it, see fig. 8.3. The render dispatch time cost on average 0.15 ms more using the automatically generated values, see fig. 8.4. As for the cost of LOD selection, slerp required less time than SH, on average 0.004 ms per mesh, see fig. 8.5a. The average time spent on LOD selection per frame for slerp versus not using view direction based selection for a large scene can be seen in fig. 8.5b. Overall the automatically generated transition distances cost on average 0.7 ms more per frame in GPU processing time.

Figure 8.1: Performance measurements comparing render frame time for automatic and manual transition distances. Panels (a)-(d) show the first through fourth tests.

Figure 8.2: Measurements of the shader dispatch time for manually selected and automatically generated transition distances. During the measurements of automatically generated values slerp was used for view direction based selection. Panels (a)-(d) show the first through fourth tests.

Figure 8.3: Performance measurements comparing the use of transition distances adapting to view direction and using the worst case distance. Panel (a): average shader dispatch time.

Figure 8.4: Measurements of the render dispatch time for manually selected and automatically generated transition distances. During the measurements of automatically generated values slerp was used for view direction based selection. Panels (a)-(d) show the first through fourth tests.

Figure 8.5: Average LOD selection cost per frame. Panel (a): per mesh; panel (b): average measured within a level.


As a comparison, the automatically generated distances for a set of meshes were compared with the manually set values by calculating the difference. The automatically generated transition distance values for the mesh seen in fig. 8.6 can be seen in table 8.1, and the difference from the manually selected values in table 8.2. As can be seen in the tables, the generated transition distance values differ between view directions. Using the largest transition distance instead of view direction based selection will cause details to be rendered that might not be perceivable from the view direction. View direction based LOD transitions add some overhead to the selection phase and require the different transition distances for each view direction to be stored or compressed using SH, but can ensure image stability for each sample view direction and save on the detail needed to render.

view direction:   x      -x     y      -y     z      -z
dist 1            0.71   0.71   0.45   0.45   0.35   0.57
dist 2            1.03   1.03   4.1    1.6    0.58   3.8

Table 8.1: Results of the automatically generated LOD transition distances given in meters for the mesh shown in fig. 8.6.

view direction:   x      -x     y      -y     z      -z
dist 1            -0.29  -0.29  -0.35  -0.35  -0.65  -0.43
dist 2            -1.97  -1.97  1.1    -1.4   -2.42  0.8

Table 8.2: Difference between generated and manually selected transition distances, rounded to meters. A negative value indicates that the transition using the automatic distances occurs before the manual transition. Dist 1, in the table, refers to the transition distance between lod 0 and lod 1 and so on.

The pre-processing time for the final silhouette method is approximately four seconds to process one mesh with six LOD levels using six different view directions. However, the time is dependent on the degree of change between the LODs; the larger the change, the longer the time needed to process the mesh. The quality criterion for the automatic LOD selection algorithm can be changed to an acceptable amount of pixel change in the silhouette instead of using the calculated highest displayable frequency. This was done in order to be able to find a good balance between image stability and performance cost and to allow different objects to have different quality criteria.

Figure 8.6: Object rendered at the distance just before switching LOD and then after the switch has been made. Panel (a): from lod 0 to lod 1; panel (b): from lod 1 to lod 2.

8.2 Discussion

The resulting algorithm is able to approximate transition distances for mesh LODs in order to minimize the silhouette artifacts that are introduced when changing between LOD levels. The algorithm does not support meshes containing an alpha channel since detail changes behind the alpha channel are not measured. Support for alpha channels can be implemented but because of the complexity of supporting different materials it was determined to be out of scope for this project.

Since the mesh quality is a trade-off between level complexity and performance cost, it was decided to test and use a definable highest displayable frequency that is independent of the display and user setup instead of calculating the actual highest displayable frequency. This allows the artists to change the quality threshold and thus adapt the transition distances to the importance of a mesh. It also makes it possible to trade quality for performance cost, which is important in order to meet the FPS requirement and deliver an enjoyable experience to the end user. Because of this only the length of the longest continuous line for the horizontal and vertical angles needs to be measured. This can be done since when the shortest of the two can no longer be rasterized into a pixel the detail will no longer be visible.

The methods presented in this paper have focused on generating LOD transition distances that reduce the artifacts of silhouette changes, which are the most common and easiest artifacts to detect [6]. Texture deviations and light artifacts that are introduced during LOD transitions are in some cases still perceivable by the viewer. Currently the generated values need to be checked and might need to be adjusted in order to remove these artifacts, but the algorithm could also be improved upon in order to account for them. The texture deviation could be measured by comparing the change in texture coordinates between each LOD level using the highest detailed level as a ground truth. This could be done by comparing the texture coordinates of each vertex in the less detailed mesh to the vertex that is spatially closest in the highest detailed mesh. The light change could be measured in a similar fashion, but by comparing the change in angle between the vertex normals instead.

The results of the feature based selection algorithms are heavily dependent on the feature segmentation; one incorrectly detected feature can result in bad transition distances. Since the method should work on arbitrary meshes with arbitrary changes between the mesh LODs, there are countless cases the segmentation step needs to handle in order to work properly. For instance, if a feature becomes thinner in a less detailed LOD, another feature that resides within it might split the feature into two features, resulting in a small frequency (a large change) if not handled correctly, see fig. 8.7. This problem is avoidable by accounting for the depth change in the image as well, but only using the depth change causes problems with measuring the frequency of a change that is not visible from the current view point. If the change only occurs in depth, that is it does not affect the silhouette of the detail alone, the viewer will not be able to perceive a geometrical difference from that view direction; it is however possible to notice lighting and texture deviations. This can cause incorrect frequencies to be calculated, producing invalid data. Thus a hybrid of the two would be preferred in order to handle these problems. But due to the processing time needed and the added complexity compared to the silhouette method it was determined to scrap the method.

Figure 8.7: Example of a feature (light gray) being split in two by another feature (dark gray) between LOD levels. Panel (a): lod 0; panel (b): lod 1.

Both slerp and SH were tested in order to adapt the LOD transition distances to the view direction. SH allows multiple sample points to be compressed into a set of coefficients, which is useful in order to compress a large set of sample view directions. It is however not possible for an artist to adjust the values afterwards, since the individual distances cannot be changed through the coefficients. Storing the sample values and calculating the coefficients in the pipeline is also impractical and removes the benefit of compressing a large amount of sample points. Using slerp limits the number of sample view directions that can be output to the run-time; thus the interpolated transition distances will contain more error than when using SH. During the performed tests six sample points were used in combination with slerp, which produced a sufficient result with no noticeable visual differences compared to the use of SH. Using slerp also had less of an impact on the run-time performance and was thus selected to be used in the end result.

The results from the performance tests can contain errors since the time elapsed before measuring differed between tests. When testing the automatically generated distances, the tests were also performed using automatically generated distances for all meshes within the scene, including meshes that the algorithm is not intended to be used on, e.g. meshes with an alpha channel. Some of the manually selected distances had not been properly set either. These factors affect the validity of the test results but can still give an indication of how using the automatically generated transition distances affects the performance.

Chapter 9

Conclusions and future work

9.1 Conclusions

The most common detectable image artifact for LOD transitions is a change to the silhouette; is it possible to calculate transition distances that generate image-stable results based entirely on the silhouette, or will texture deviations and/or light changes still be visible?

By using the silhouette one can determine at what distances the transition can occur without any change to the silhouette being visible to the viewer. Geometrical changes that do not belong to the silhouette of the object cannot be accounted for. This can cause problems with lighting and textural artifacts being visible to the viewer.
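
A simplified sketch of the underlying idea is given below: the exclusive or of two silhouette masks is scanned for its widest differing run of pixels, and the distance at which that run shrinks below one pixel is used as a first estimate. The pixel-extent proxy and the linear distance scaling are simplifications of the spatial-frequency measure used by the method, and all names are illustrative.

#include <algorithm>
#include <cstdint>
#include <vector>

// Given two binary silhouette masks (ground truth and LOD) rendered from the
// same camera, measure the widest horizontal run of differing pixels.
int widestSilhouetteChange(const std::vector<uint8_t>& truthMask,
                           const std::vector<uint8_t>& lodMask,
                           int width, int height)
{
    int widest = 0;
    for (int y = 0; y < height; ++y) {
        int run = 0;
        for (int x = 0; x < width; ++x) {
            const bool differs = (truthMask[y * width + x] != 0) !=
                                 (lodMask[y * width + x] != 0);   // exclusive or
            run = differs ? run + 1 : 0;
            widest = std::max(widest, run);
        }
    }
    return widest;   // in pixels, at the distance used for rendering the masks
}

// If the change spans N pixels when rendered at distance d0, it shrinks to
// about one pixel at roughly d0 * N, which can serve as a first estimate of
// the transition distance for this view direction.
float estimateTransitionDistance(float renderDistance, int changePixels)
{
    return changePixels > 0 ? renderDistance * static_cast<float>(changePixels)
                            : renderDistance;
}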

Is it possible to use the depth information of a mesh in order to account for detail changes that are not visible in the silhouette, and does it improve the image stability compared to only using the silhouette?

By determining whether a pixel changed depth between the LOD levels it is possible to detect all regions on the mesh that changed. If a measured depth change does not affect the silhouette of the detail, the change will not be geometrically noticeable to the viewer from that view direction. The changes that the viewer can perceive in such cases are texture and light changes, so only changes that affect the silhouette of the detail should be measured. However, classifying the depth changes as affecting the silhouette of the detail or not has not been successful.
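
A minimal per-pixel classification along these lines could look like the following sketch, assuming two depth buffers rendered from the same camera with a far-plane value stored where nothing was drawn; the buffer layout, the far-plane sentinel and the epsilon are illustrative assumptions.

#include <cmath>
#include <cstddef>
#include <vector>

// Classify each pixel as unchanged, changed only in depth (same coverage), or
// changed in coverage (one mesh covers the pixel, the other does not).
enum class PixelChange { Unchanged, DepthOnly, Coverage };

std::vector<PixelChange> classifyChanges(const std::vector<float>& depthTruth,
                                         const std::vector<float>& depthLod,
                                         float farPlane, float epsilon)
{
    std::vector<PixelChange> result(depthTruth.size(), PixelChange::Unchanged);
    for (std::size_t i = 0; i < depthTruth.size(); ++i) {
        const bool coveredTruth = depthTruth[i] < farPlane;
        const bool coveredLod = depthLod[i] < farPlane;
        if (coveredTruth != coveredLod)
            result[i] = PixelChange::Coverage;           // silhouette of the mesh changed
        else if (coveredTruth && std::fabs(depthTruth[i] - depthLod[i]) > epsilon)
            result[i] = PixelChange::DepthOnly;          // geometry moved but coverage is the same
    }
    return result;
}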

Another problem with using the depth is that it is not possible to determine whether the changes belong to the same detail, causing different detail changes to merge and thus larger spatial frequencies to be measured. One could try to use the depth information to segment the depth change by detecting a gradient in the original depth values. This is however error prone, and a bad segmentation results in incorrect frequency measurements.

Thus using the depth to account for changes to the silhouette of each detail on the mesh has not been possible without errors, causing overdue transition distances.

Is it possible to adapt the transition distances to the view direction without causing image artifacts and is it profitable for performance?

By using either interpolation or spherical harmonics the transition distances can be adapted to the view direction. Determining the transition distance for the run-time view adds some overhead to the CPU, but generally improves the transition distances and limits the amount of work the GPU needs to process to visible details. No LOD transition image artifacts caused by view direction changes have been noticed.

Is it possible to use depth and mesh surface information in order to correctly segment the mesh in an image? If so, does this method improve the image quality, and how do the methods compare in processing time cost?

The mesh surface and depth information can be used to segment the image for simple meshes. For more complex meshes, however, the segmentation is error prone, causing incorrect frequency measurements and thus incorrect transition distances. The method works well for non-diffuse data but not for smooth meshes. The processing cost compared to the simpler silhouette version is also significantly larger, taking on average 25 s per mesh compared to 2 s per mesh. Due to these factors the method was scrapped at an early stage in order to focus on improving the silhouette method instead.

9.2 Future work

Eccentricity and velocity of an object affect the CSF such that the largest visible spatial frequency will be lower, meaning only larger details remain perceivable. Thus real-time information about object placement in the view and object velocity could reduce the needed level complexity further. These coefficients could be added in order to determine the highest perceivable frequency instead of the highest displayable frequency currently used. It would be interesting to test how these factors would affect the average run-time performance, and whether the extra selection cost is worth the gain in reduced level complexity. However, according to Reddy [11] eccentricity and velocity do not produce much of an improvement, only 5% combined, so the cost of the implementation needs to be quite small in order to be cost effective.
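
Structurally, such a selection criterion could look like the sketch below, where the highest usable frequency is the minimum of the display limit and perceptual limits derived from eccentricity and velocity. The falloff shapes and constants are placeholders, not Reddy's published coefficients; only the overall structure follows the text.

#include <algorithm>
#include <cmath>

// Cap the highest usable spatial frequency by eccentricity (degrees from the
// gaze point) and angular velocity (degrees per second). All constants below
// are hypothetical.
float highestPerceivableFrequency(float displayLimitCpd,   // cycles/degree the display can show
                                  float eccentricityDeg,
                                  float velocityDegPerSec)
{
    const float fovealLimit = 60.0f;   // placeholder foveal acuity in cycles/degree
    const float eccLimit = fovealLimit / (1.0f + 0.2f * eccentricityDeg);
    const float velLimit = velocityDegPerSec <= 1.0f
        ? fovealLimit
        : std::max(0.0f, fovealLimit - 25.0f * std::log10(velocityDegPerSec));
    return std::min({displayLimitCpd, eccLimit, velLimit});
}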

Texture and lighting deviations are in some cases still noticeable to the viewer, and artists will have to adjust the generated transition distances in order to remove these artifacts. In order to reduce the maintenance performed by the artists, methods that measure the texture and lighting deviations could be implemented. An automatic algorithm should however never be trusted blindly, and verification of the generated distances should always be performed.

Another method that would be interesting to test is to use a volume representing the change between meshes. By voxelizing the meshes and taking the exclusive or of the two volumes, all changes on the mesh would be extracted. This way, geometrical changes that do not reside in the silhouette could be measured. By eroding the volume, the transition distance can be calculated. Depth changes could still cause a problem by measuring changes that only affect the texture and lighting, which might cause overdue transitions, but since the problem of separating depth changes is removed this might not be as large of a problem.
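
A sketch of how such a volume measure could be computed is given below. The dense VoxelGrid representation and the 6-neighbour erosion are illustrative assumptions, and the number of erosion passes stands in for the size of the largest change, which could then be mapped to a transition distance in the same way as the silhouette extent.

#include <cstdint>
#include <vector>

// Dense occupancy grid; both grids are assumed to have identical dimensions.
struct VoxelGrid {
    int nx, ny, nz;
    std::vector<uint8_t> cells;   // 1 = occupied
    uint8_t at(int x, int y, int z) const {
        if (x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz) return 0;
        return cells[(z * ny + y) * nx + x];
    }
};

// XOR the two occupancy grids and erode the result until it vanishes.
int erosionStepsUntilEmpty(const VoxelGrid& a, const VoxelGrid& b)
{
    VoxelGrid diff{a.nx, a.ny, a.nz, std::vector<uint8_t>(a.cells.size())};
    for (std::size_t i = 0; i < a.cells.size(); ++i)
        diff.cells[i] = a.cells[i] ^ b.cells[i];   // exclusive or of the two meshes

    int steps = 0;
    bool anySet = true;
    while (anySet) {
        anySet = false;
        VoxelGrid next = diff;
        for (int z = 0; z < diff.nz; ++z)
            for (int y = 0; y < diff.ny; ++y)
                for (int x = 0; x < diff.nx; ++x) {
                    if (!diff.at(x, y, z)) continue;
                    anySet = true;
                    // Erode: a voxel survives only if all 6 neighbours are set.
                    const bool keep = diff.at(x - 1, y, z) && diff.at(x + 1, y, z) &&
                                      diff.at(x, y - 1, z) && diff.at(x, y + 1, z) &&
                                      diff.at(x, y, z - 1) && diff.at(x, y, z + 1);
                    next.cells[(z * diff.ny + y) * diff.nx + x] = keep ? 1 : 0;
                }
        if (anySet) { diff = next; ++steps; }
    }
    return steps;   // roughly the half-thickness of the largest change, in voxels
}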

Mesh detail reduction is done both manually by artists and automatically using software. Automatic software-based reduction is usually based on error metrics, removing the triangles that produce the least error. Manual reduction is not based on any error metric, so reduction done manually will not introduce a uniform error across the mesh; for example, the reduction of one detail can be large compared to other detail reductions in the same LOD, causing the transition distance to be adapted to the largest change. If one could detect such reductions and identify the affected feature, the artist could be notified and take action, such as reducing the amount of detail removed from that feature in the specific LOD and/or moving the detail reduction to a lower resolution LOD level. If all detail changes between LOD levels are approximately the same, the transition distance can be better approximated and contribute to better performance.


Bibliography

[1] J. Baert. URL: http://www.forceflow.be/2013/10/07/morton-encodingdecoding-through-bit-interleaving-implementations/ (visited on 05/23/2017).

[2] G. Borgefors. "A hierarchical 'square' tessellation of the sphere." In: Pattern Recognition Letters 13.3 (1992), pp. 183–188. ISSN: 01678655.

[3] R. Brown, L. Cooper, and B. Pham. "Visual Attention-based Polygon Level of Detail Management". In: GRAPHITE '03 Proceedings of the 1st International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia (2003), 55–ff.

[4] J. Cohen, M. Olano, and D. Manocha. "Appearance-preserving simplification." In: Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1998. University of North Carolina, 1998, pp. 115–122.

[5] Robin Green. Spherical Harmonic Lighting: The Gritty Details. 2003. URL: https://basesandframes.wordpress.com/2016/05/11/spherical-harmonic-lighting-the-gritty-details/.

[6] M. Larkin and C. O'Sullivan. "Perception of Simplification Artifacts for Animated Characters". In: APGV '11 Proceedings of the ACM SIGGRAPH Symposium on Applied Perception in Graphics and Visualization (Aug. 2011), pp. 93–100.

[7] P. Lindström and G. Turk. "Image-Driven Simplification". In: ACM Transactions on Graphics (TOG) 19.3 (July 2000), pp. 204–241.

[8] Oscar Ernesto Meruvia Pastor. "Level of detail selection and Interactivity". MA thesis. University of Alberta, 1999.

[9] M. Reddy. "Perceptually Modulated Level of Detail for Virtual Environments". PhD thesis. University of Edinburgh, 1997.

[10] M. Reddy. "Perceptually optimized 3D graphics". In: IEEE Computer Graphics and Applications 21.5 (Aug. 2002), pp. 68–75.

[11] M. Reddy. "Specification and evaluation of level of detail selection criteria." In: Virtual Reality 3.2 (1998), pp. 132–143. ISSN: 13594338.

[12] Z. Yu, X. Liang, and Z. Chen. "A Level of Detail selection method for multi-type objects based on the span of level changed." In: 2008 IEEE International Conference on Robotics, Automation and Mechatronics, RAM 2008. State Key Lab. of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 2008, pp. 1102–1107.
