
Department of Science and Technology
Linköpings universitet
SE-601 74 Norrköping, Sweden

Examensarbete (Master's thesis)
LITH-ITN-MT-EX--07/028--SE

In-game Interaction with a snowy landscape

Jesper Carlson
Stellan Garhammar

LITH-ITN-MT-EX--07/028--SE

In-game Interaction with a snowy landscape

Thesis work carried out in Media Technology
at Linköping Institute of Technology, Campus Norrköping

Jesper Carlson
Stellan Garhammar

Supervisor: Linus Blomberg
Examiner: Matt Cooper


Division, Department: Department of Science and Technology
Date: 2007-05-28
Language: English
Report category: Examensarbete (Master's thesis)
ISRN: LITH-ITN-MT-EX--07/028--SE

Title: In-game Interaction with a snowy landscape
Authors: Jesper Carlson, Stellan Garhammar

Abstract:
We present a method for the creation of, and interaction with, snow in a computer game engine. This is achieved by generating a local transformable mesh around the player camera and offsetting this mesh by a height map. This height map is used both for smoothing the terrain into a random snow landscape and for modifying the snow where prints are made. Compared to previous implementations, our method has the advantage of everlasting prints in the vicinity of the player. To enhance the visual immersion we make use of lighting effects such as glitter and bloom on a highly detailed surface. The main advantages of our method are that it is fast and that the performance loss when adding more dynamic models in the local area around the player is minimal.


Upphovsrätt

This document is made available on the Internet – or its possible future replacement – for a considerable time from the date of publication, provided that no exceptional circumstances arise.

Access to the document implies permission for anyone to read, to download, to print out single copies for personal use, and to use it unchanged for non-commercial research and for teaching. Subsequent transfer of copyright cannot revoke this permission. All other use of the document requires the author's consent. To guarantee authenticity, security and accessibility, there are solutions of a technical and administrative nature.

The author's moral rights include the right to be mentioned as the author, to the extent required by good practice, when the document is used as described above, as well as protection against the document being altered or presented in such a form or context as is offensive to the author's literary or artistic reputation or character.

For additional information about Linköping University Electronic Press, see the publisher's home page: http://www.ep.liu.se/

Copyright

The publishers will keep this document online on the Internet – or its possible replacement – for a considerable time from the date of publication barring exceptional circumstances.

The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/


In-game interaction with a snowy landscape

Master Thesis in Media Technology

Supervisor: Matthew Cooper

Authors: Jesper Carlson, Stellan Garhammar

Department of Science and Technology
Linköping University, Norrköping, Sweden


Abstract

We present a method for the creation of, and interaction with, snow in a computer game engine. This is achieved by generating a local transformable mesh around the player camera and offsetting this mesh by a height map. This height map is used both for smoothing the terrain into a random snow landscape and for modifying the snow where prints are made. Compared to previous implementations, our method has the advantage of everlasting prints in the vicinity of the player. To enhance the visual immersion we make use of lighting effects such as glitter and bloom on a highly detailed surface. The main advantages of our method are that it is fast and that the performance loss when adding more dynamic models in the local area around the player is minimal.


Acknowledgements

We would like to specially thank the following people for guidance and constructive criticism during the course of our thesis work:

Linus Blomberg and Gustav Taxén

and naturally all of the staff at Avalanche Studios for a fun and educational environment. We would also like to thank our supervisor Matthew Cooper for his patience and feedback on our work.


Contents

Abstract
Acknowledgements
Contents
List of Figures
Acronyms & Abbreviations
Notation

1 Introduction
  1.1 Problem specification
  1.2 Games with snow
  1.3 Method

2 Theory
  2.1 Snow
    2.1.1 Snow buildup and generation
    2.1.2 Snow interaction
  2.2 Render blocks
  2.3 Terrain generation
  2.4 Particles
  2.5 Shaders
    2.5.1 Vertex shaders
    2.5.2 Pixel shaders
  2.6 Mapping techniques
    2.6.1 Bump mapping
    2.6.2 Parallax mapping
    2.6.3 Relief texturing
    2.6.4 Real displacement mapping
  2.7 Lighting models
    2.7.1 Phong reflectance model

3 Implementation
  3.1 Snow terrain
  3.2 Generation of the local height map
  3.3 Particles
    3.3.1 Particle system control
    3.3.2 Soft particles
  3.4 Footprints
  3.5 Shading and visuals
    3.5.1 Lighting
    3.5.2 Glitter
    3.5.3 Border fade

4 Result
  4.1 The landscape
  4.2 Adding realism
  4.3 Performance

5 Discussion / Future Work
  5.1 Discussion
    5.1.1 What mapping technique is the best for snow?
    5.1.2 Everlasting prints
    5.1.4 Glitter
    5.1.5 Physics
    5.1.6 Additional geometry
  5.2 Improvements
    5.2.1 Mesh resolution
    5.2.2 Tracks
    5.2.3 Particles
    5.2.4 Weather effects

6 Conclusion

Bibliography


List of Figures

1.1  Our snow with footprints
1.2  Snow in games: (a) Lost Planet (b) Chromehounds (c) SSX (d) World of Warcraft
2.1  Snowfall projected from above
2.2  A level of detail with 8 x 8 patches
2.3  Point light calculated per: (a) vertex, (b) pixel
2.4  Bump mapping: (a) no bump mapping (b) bump mapping
2.5  Parallax mapping: (a) ordinary bump mapping (b) parallax mapping
2.6  Phong reflectance model: (a) ambient (b) diffuse (c) specular (d) result
3.1  Local height map: (a) snow height map and (b) snow normal map
3.2  Arbitrary height map
3.3  The snow mesh displaced to resemble a real footprint
3.4  Scene processed with: (a) bloom effect, (b) no bloom effect
3.5  Bright spots simulating the sun glittering in the snow
3.6  Blended alpha transition between terrain patches: in figures (a) and (b) there is no fade; figures (c) and (d) have active blending
4.1  The snow mesh rendered in wire frame
4.2  Visual result: (a) graininess of the surface and (b) smoothness


Acronyms & Abbreviations

CPU Central Processing Unit

GPU Graphics Processing Unit

PS Pixel Shader

VS Vertex Shader

SM Shader Model

LOD Level Of Detail


Notation

~V                   vector in three dimensions
|~V|                 length of vector ~V
ceil(a)              smallest integer value that is greater than or equal to a
cross(a, b)          cross product of vectors a and b
distance(a, b)       Euclidean distance between points a and b
dot(a, b)            dot product of a and b
exp2(a)              base-2 exponential of a
float3x3(a, b, c)    matrix containing row vectors a, b and c
log2(a)              base-2 logarithm of a
max(a, b)            maximum of a and b
pow(a, b)            a raised to the power of b
smoothstep(a, b, c)  smooth Hermite interpolation of c into the range [0, 1]; a is the lower edge and b the upper edge of the input range
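Several of these functions are shader-language intrinsics (e.g. in HLSL). As a reference, smoothstep can be sketched in plain Python; this is an illustrative sketch, not the GPU implementation:

```python
def smoothstep(a, b, c):
    """Reference for the smoothstep intrinsic above (assumes a < b)."""
    t = min(max((c - a) / (b - a), 0.0), 1.0)  # normalize c into [0, 1] and clamp
    return t * t * (3.0 - 2.0 * t)             # Hermite curve: zero slope at 0 and 1
```

The Hermite polynomial 3t² - 2t³ has zero derivative at both edges, which is why it is preferred over a linear ramp for fading effects in and out.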


Chapter 1

Introduction

Snow is not seen in many games today, and where it is, it looks very different from the snow we see on the streets in winter. Not only is the snow surface in most games visually unconvincing; the interaction between the surface and all sorts of dynamic objects, including the characters, is non-existent.

Figure 1.1: Our snow with footprints

We did this thesis work at Avalanche Studios, who have established themselves, since the release of ”Just Cause”, as a serious competitor in the computer games market. Avalanche Studios needs to be able to create snow-covered landscapes for future game titles, and there is no present solution implemented in the graphics engine. Our final result is shown in figure 1.1.

1.1 Problem specification

This master thesis consists of two major parts. The first is to visualize snow in a virtual environment, and the second is to make it possible to dynamically interact with the snow surface in the vicinity of the avatar.


The project is to be integrated into the existing game engine at Avalanche Studios, in which there is already a terrain model for the snow to accumulate on. To fulfill the first part of the thesis it is essential that the snow has similar characteristics to real snow. The interaction is to be shown with feedback from the snow surface when traversing it, and small snow clouds appearing around the character's feet when walking in deep snow. Also, the snow is to be compressed and displaced downwards into the footprints following the character.

To reach our goal the following problems should be solved:

• Write efficient vertex and pixel shaders for the visualization of the snow surface;

• Generate a denser mesh above the original terrain mesh; this will be the snow surface;

• Add and remove particles around footprints; if possible, small meshes should be included in the snow clouds to simulate chunks of snow;

• Make footprints remain visible for a certain amount of time.

1.2 Games with snow

There are not many existing games with snowy landscapes in their virtual worlds. The recently released ”Lost Planet” takes place in a world full of snow, where the cover is simply done with a texture. When the avatar is moving in the snow, interaction is shown with small pieces of geometry that are spawned around the feet, and some occasional decals (a decal is a textured polygon placed on the surface). The new chunks of snow are added at the same time as a particle effect is played, so they only appear after the particles are gone. The chunks are then faded away based on the time since they were created. Since both the chunks and the decals are time dependent, they leave no long-lasting imprints. There are also some weather effects in ”Lost Planet”, giving the player a more convincing atmosphere.

Other games, such as ”RalliSport Challenge” and ”Battlefield 2142”, have levels where there is snow around the player, but there is no real interaction with it beyond decals. The same is true for ”World of Warcraft”, where the decals from the footprints come with the occasional small particle system. Snowboard games like ”SSX” and ”Amped” make use of less advanced lighting algorithms, giving the snow surface strange characteristics, and they use a decal following the snowboarding avatar down the hill. One game that makes use of an advanced shading technique for snow is ”Chromehounds”. In this game the player controls a ten-thousand-kilogram machine leaving tracks in the snow that are made up of decals, see figure 1.2.


Figure 1.2: Snow in games: (a) Lost Planet (b) Chromehounds (c) SSX (d) World of Warcraft

1.3 Method

Here is a short description of how we solve the problem:

1. Generate a high resolution terrain mesh around the active camera and generate a local height map and a normal map for it.

2. If the camera moves away from the center of the mesh, adjust the mesh so that the camera always stays approximately in the middle of it.

3. When a character or any other dynamic object moves around, intersecting the mesh, update the local height map and its normal map accordingly. Also, play the particle systems associated with the object moving in the snow.

4. Offset the mesh as described by the local height map and render it with light effects, such as glitter and bloom.
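Step 3 above, updating the local height map where a print is made, can be sketched on the CPU as a simple stamping operation. This is a minimal illustration; the grid resolution, print radius, falloff shape and depth values are our own assumptions, not the engine's parameters:

```python
import numpy as np

def stamp_print(height_map, center, radius, depth):
    """Press a circular print of the given depth into the local snow
    height map, with a linear falloff from the centre to the rim."""
    rows, cols = np.indices(height_map.shape)
    dist = np.hypot(rows - center[0], cols - center[1])
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)      # 1 at centre, 0 at rim
    return np.maximum(height_map - depth * falloff, 0.0)  # never below the ground

snow = np.full((9, 9), 1.0)  # flat snow layer, 1 unit deep
snow = stamp_print(snow, (4, 4), radius=3.0, depth=0.8)
```

A real footprint would use a shaped stamp texture rather than a radial falloff, but the principle of subtracting from the local height map is the same.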


Chapter 2

Theory

In this chapter we deal with the different theoretical aspects of creating landscapes with snow. We also explain how some techniques for mapping and lighting work, and give a basic description of rendering with the use of shaders. All this is important for the reader's understanding of the subsequent chapters.

2.1 Snow

Snow has existed in games since the early 1980s, with varying results, from the early ”Ice Climber” on the Nintendo Entertainment System (NES) to ”Lost Planet” on the Xbox 360. Since representing and rendering snow is a complex problem consisting of several individual parts, game creators have often been forced to focus on only a few of these parts. This thesis will focus solely on the 3D phenomena of snow.

2.1.1 Snow buildup and generation

The physical equations and solutions that come with any large-scale dynamic system often need to be precalculated. For terrain, this can be done in modern games at the cost of longer loading times. In most games that have snow, piles and buildup have been generated by a level designer and not by a physical simulation. Snow buildup in reality depends on several factors, e.g. wind, snowfall and the density of the snowflakes, to mention only a few. Previous work in the field of snow buildup in computer games is sparse, primarily due to the complex interaction between snow and wind and the difficulty of representing this efficiently in a game.

However, there are ways to fake this buildup. One of them is to calculate the snow coverage of a given surface by projecting the snow in the direction of the snowfall. This is most likely to be a direction closely aligned with the 'up' axis in the game. After projecting the snow, that information can be used with the camera projection to calculate snow coverage. This is described by Ohlsson and Seipel [10] and further discussed by Dudash [1]. The technique can be compared to shadow mapping and uses a similar approach. Imagine a flat surface with a table on it, with the snowfall coming perpendicular to the flat surface. In this case, the snow would build up on the table and everywhere on the flat surface not covered by the projection of the table. Directly under the table there should be no snow, if we assume that there is no wind affecting the snowfall. This technique is applicable in real-time with low computational complexity, see figure 2.1.

Figure 2.1: Snowfall projected from above
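The projection idea can be sketched on a height-field grid: build a depth map as seen from above and let snow land only on the surface that the snowfall hits first. The toy scene (a raised 'table' top over flat ground) and all values below are our own illustration, not code from [10] or [1]:

```python
import numpy as np

def snow_mask(depth_from_above, surface_height, eps=1e-4):
    """A surface cell receives snow only if it is the first thing the
    vertical snowfall hits, i.e. its height matches the top-down depth map."""
    return surface_height >= depth_from_above - eps

# Toy scene: flat ground at height 0 with a raised 'table' top at height 1.
ground = np.zeros((8, 8))
table = np.full((8, 8), -np.inf)       # -inf marks cells with no table geometry
table[2:5, 2:5] = 1.0
depth_map = np.maximum(ground, table)  # first surface seen from above, per cell

ground_snow = snow_mask(depth_map, ground)  # shadowed under the table
table_snow = snow_mask(depth_map, table)    # snow collects on the table top
```

The analogy with shadow mapping is direct: the depth map plays the role of the shadow map, with the snowfall direction as the light.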

The above technique calculates the snow coverage of a surface given that it has already been snowing and that snow has accumulated on the surface. If we instead look at the process of individual snowflakes accumulating on a surface, we come closer to the work presented by Fearing [3]. Fearing introduces a particle-based technique: in short, the scene is ”sprayed” with a large number of particles that are then used to calculate the snow coverage. This is not applicable in real-time rendering since the algorithm is quite expensive. That is the common denominator of particle-based solutions: they all demand a great deal of computational power and time to give correct and good results, and having a surplus of power and calculation time in computer games is a rare sight.

Another way to simulate the snow buildup would be to use a technique called level sets (Hinks et al. [5]). This gives a correct and complete model taking physical phenomena into consideration. The level set method would, however, need to calculate a distance field from a given triangulation of the scene, perform calculations on this distance field and then triangulate the results back to a terrain mesh. Thus, level sets are too time consuming to be a strong alternative for use in games.

2.1.2 Snow interaction

As we know, snow exists in many forms (e.g. powder, ice, crust). The snow particles can have different densities and reflect light in different ways depending on this. Snow can be light as powder or packed into hard ice. It can dazzle you when you step out into the sun, or show transparency when you look at ice. In games this is achieved by assigning different materials and different lighting models to different surfaces. This is, in a sense, contradictory, since snow particles are essentially the same material: crystallized water. The desirable solution would consist of only one material with several parameters to define what type of snow we have in our game.
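Such a single parameterized snow material might look like the following sketch; the parameter set and its defaults are entirely hypothetical, chosen only to illustrate the idea:

```python
from dataclasses import dataclass

@dataclass
class SnowMaterial:
    """One material for every kind of snow; the parameters select the type.
    All fields and defaults are hypothetical, not taken from any engine."""
    density: float = 0.3      # powder (low) .. packed ice (high)
    glossiness: float = 0.2   # sharpness of the specular highlight
    translucency: float = 0.0
    glitter: float = 0.5

powder = SnowMaterial(density=0.1, glitter=0.8)
ice = SnowMaterial(density=0.95, glossiness=0.9, translucency=0.6, glitter=0.0)
```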

By snow interaction we mean the physical interaction with snow particles. Particles get packed under feet when walking over snow, they swirl over ridges when the wind blows, and they have different interaction models depending on what state they are in. In games this has been the most problematic issue to solve. Ideally, one would like snow to behave exactly as we are used to, but in practice this is unrealistic. Modifying existing terrain geometry often requires re-tessellation around the point of interaction. A car speeding through snow would plow tracks, and chunks of snow would splash from the wheels. These chunks would interact with the layer of snow around them in several ways, all depending on the characteristics of the chunks themselves and the surroundings. The chunks should also be able to split into new small chunks or merge with other chunks. A simple version of this technique can be seen in the recent game ”Lost Planet”.

One of the most frequently used methods for producing prints or marks in terrain is decals. A decal is a quad or polygon with a textured print on it, which is blended with the underlying terrain. In its basic form this does not support any depth information and gives a flat impression.

As with all interaction we demand some sort of physical connection between objects that interact with each other. A stone hitting water would splash water around it at impact and then sink. A stone hitting soft snow would, in a similar way, splash snow around it and then sink into the snow. There is no current solution to this problem, only crude physical models that fake the behavior. With current hardware it is still problematic to do massive calculations on particles in games.

2.2 Render blocks

To understand the basics of how things are rendered in the Avalanche Engine, one has to understand one of its fundamental building blocks: render blocks. A render block is a code block that is added to the rendering pipeline. The render block can be accessed during construction, and once or multiple times per rendered frame.

Each render block has a draw function, which is responsible for the actual drawing of the object it represents. Render blocks are sorted to increase computational efficiency, and added to the render pipeline. This pipeline is traversed and the draw function in each render block is executed.


2.3 Terrain generation

There are several ways to produce large-scale terrain in computer games. As users demand larger and more interactive worlds, game producers need to keep up with this demand. This has given rise to worlds that today have infinite terrain (procedurally generated, with no content) and worlds as large as 32 x 32 kilometers (with content), as in the game ”Just Cause”.

The current way to generate terrain in the Avalanche engine is built on a system of terrain patches that are created and moved around the world. A patch is simply a 2D area whose dimension is given by the level of detail (LOD). The LOD is decided by the distance between the camera and the ground. In Just Cause this system is built on powers of two, so LOD level 3 (2^3) implies that each patch is 8 x 8 meters. Each LOD level, in turn, has 8 x 8 terrain patches, see figure 2.2.

Figure 2.2: A level of detail with 8 x 8 patches

Since the camera is movable in the artificial world, the patches also need to be able to follow the camera. This is done by moving the outermost line of patches ahead of the camera in the direction it is heading. Consider moving the camera along the X axis of a coordinate system while watching along the Z axis. The LOD contains eight columns of patches, and when moving along the X axis the first column is moved next to the 8th column. A similar procedure is applied when moving along the Y axis, but then row one is moved ahead of row eight. The opposite holds, of course, when moving in a negative direction.
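The wrapping arithmetic can be sketched as follows. The 8 x 8 grid and the power-of-two patch side come from the text above; the half-grid centering offset and the function names are our own assumptions:

```python
def patch_origin(cam_x, cam_z, lod):
    """World-space origin of the 8 x 8 patch grid, snapped to whole patches
    so the camera sits near the middle (patch side = 2**lod meters)."""
    side = 2 ** lod
    ox = (int(cam_x // side) - 4) * side  # back up half the grid (4 patches)
    oz = (int(cam_z // side) - 4) * side
    return ox, oz

def wrapped_column(world_patch_index):
    """Which of the 8 reusable patch columns stores this world column."""
    return world_patch_index % 8
```

Moving the camera one patch side forward shifts the snapped origin by one patch, and the world column that scrolled off the back is reused at the front via the modulo, which is exactly the "move column one next to column eight" behavior described above.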

2.4 Particles

In almost every modern computer game there is some kind of particle system. A single particle is, in most cases, made out of a transformed sprite/billboard with an animated texture attached to it. This texture typically stores alpha in one channel in order to be able to blend the particle against the background. Particle systems are often used to display things such as bullets, smoke, explosions, fireworks and water splashes. Although clouds, fog, distant trees, grass, the sun, the moon and the stars are not generally referred to as particle systems, the only differences are that they have an infinite lifetime, are not emitted from a source and might not be as densely clustered. The first to introduce particle systems to computer graphics was Reeves [13].

2.5 Shaders

Shaders have become standard in the gaming industry during the 21st century. The potential of writing your own shaders and diverging from the fixed functionality of the graphics pipeline opens up possibilities for more advanced rendering algorithms.

While shaders can provide an opportunity to gain in performance and quality, there is always a danger of assigning too much workload to the graphics processing unit (GPU). The tradeoff between GPU and central processing unit (CPU) work is an important aspect when designing games. The GPU is primarily used for rendering graphics, though other possibilities include physics and lighting calculations. Today practically all lighting calculations are done on the GPU.

Texture lookups are simple to do but cost time, and one should strive to keep the number of lookups to a minimum. The user should favor mathematical operations over texture lookups, mainly because GPU hardware is designed to handle such operations and does not suffer from texture cache misses or bandwidth stalls.

2.5.1 Vertex shaders

Vertex shaders (VS) are used to process the vertices that are sent from the CPU to the GPU. The VS is responsible for outputting a vertex position in screen space. In a VS all operations are done one vertex at a time, so the shader only has knowledge about the current vertex it is processing.

Since the introduction of shader model (SM) 3.0 it has also been possible to do texture lookups in the VS; this is used mainly in different displacement techniques.

2.5.2 Pixel shaders

Pixel shaders are also known as fragment shaders; they operate on the image at the pixel level. The pixel shader processes a group of pixels at a time, typically 2 x 2. We will refer to pixel shaders as PS. Typically, most shading calculations are done in the pixel shader, but there are situations when this is not efficient: computing a certain operation for a given fragment can be costly compared to computing the same calculation in the vertex shader. The reason for doing calculations per pixel is that it is more precise and that it avoids the color interpolation between vertices. This can be seen in figure 2.3.


Figure 2.3: Point light calculated per: (a) vertex, (b) pixel

2.6 Shading techniques

In computer graphics, various techniques exist to enhance the visual appearance of a surface. Most of them work by fooling the eye: the surface appears to have depth information and therefore looks realistically displaced.

As we view the real world we seldom reflect upon shadows, occlusions or highlights. When modeling a 3D scene for a computer game it is crucial that these different phenomena occur where we are used to seeing them. It does not take much divergence or error to make the scene become unreal and non-believable. All scenes are in some way non-realistic, but there is a thin line between what the user accepts and what the user objects to. Naturally, these methods in their basic implementations cannot give cavities, and when looking at a surface at grazing angles the viewer often sees the obvious lack of height and depth.

The different techniques described below are used frequently and appear in most modern computer games. Bump mapping has been used since the early 1990s. Relief texture mapping is used in games such as Quake 3 with little penalty to in-game performance (Policarpo et al. [11]). In games, the designer can switch between, or combine, these methods to create a realistic-looking scene.

2.6.1 Bump mapping

Originally, bump mapping was presented by Blinn in 1978 (Ebert et al. [2]). Blinn used a grey-scale image to simulate a height map. To create a normal from this image one has to calculate the gradient of the surface parametrization (typically the texture coordinates) and the height map values. When this normal is acquired it can be used in a lighting model to produce a better result than an ordinary color texture alone. Bump mapping versus standard texture mapping is displayed in figure 2.4.


Figure 2.4: Bump mapping: (a) no bump mapping (b) bump mapping

Instead of calculating gradients for each point in a height map, one can precalculate the normals (using Blinn's technique) and store them in a texture map, known as a normal map. Each color channel in the normal map stores one dimension of data; typically the U and V axes are chosen to correspond to the R and G channels of the texture. Since the normal map is a color texture, the values are stored in the range 0 to 255 per channel. This means that before the normal stored in the three channels of the texture can be used, it must be mapped back to the range [-1, 1]. Since the normal map is generated using the texture coordinates, these normals are said to exist in a new space, typically referred to as tangent space. When doing calculations with bump mapping techniques, one often has to transform all vectors into this tangent space. The tangent space matrix is the matrix that maps a vector from model space to tangent space. The matrix consists of three vectors, ~N, ~T and ~B: the normal, tangent and bitangent. The normal is the normal of the surface being mapped, and the tangent and bitangent are calculated from the texture coordinates. The three vectors often form an orthogonal basis but are not required to do so; it depends on the texture coordinates. The normal from the normal map is then used in lighting calculations to give the look of a bumpy surface.
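The height-map-to-normal-map pipeline described above can be sketched in Python/NumPy. In a game the decode step would run in the shader; the axis conventions and the gradient-based reconstruction here are illustrative assumptions:

```python
import numpy as np

def normal_from_height(height, du=1.0, dv=1.0):
    """Blinn-style tangent-space normals from a grey-scale height map:
    the normal is perpendicular to the two surface gradients."""
    dh_du = np.gradient(height, du, axis=1)
    dh_dv = np.gradient(height, dv, axis=0)
    n = np.stack([-dh_du, -dh_dv, np.ones_like(height)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def encode(n):
    """Pack [-1, 1] normals into 0..255 color channels (a normal map)."""
    return np.round((n * 0.5 + 0.5) * 255).astype(np.uint8)

def decode(texel):
    """Unpack a color texel back to a unit normal, as a shader would."""
    n = texel.astype(np.float64) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)

flat = normal_from_height(np.zeros((4, 4)))  # every normal is (0, 0, 1)
texels = encode(flat)                        # (0, 0, 1) packs to (128, 128, 255)
roundtrip = decode(texels[0, 0])
```

The 8-bit quantization is why the decoded normal is only approximately unit length and is renormalized after the lookup.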

2.6.2 Parallax mapping

Parallax mapping, or virtual displacement mapping, is a variation of the bump mapping technique. The bump mapping described in the previous section does not take geometric depth into consideration, and therefore it exhibits no parallax. The parallax mapping technique uses the height map of an object to calculate an offset (to simulate parallax) that is used when sampling from the texture (Kaneko et al. [6]). Kaneko's work has been developed further by Tatarchuk [15] as an alternative way to give the viewer a sense of depth in a scene and to create visually highly detailed scenarios. This differs from standard parallax mapping by having soft shadowing and occlusion. The effect is displayed in figure 2.5.


Figure 2.5: Parallax mapping: (a) ordinary bump mapping (b) parallax mapping

Parallax mapping gives realistic results; the only requirement is that the user has access to a height map of the object. It should be noted that the additional cost of the more advanced variants comes from ray tracing, often on the GPU. For large-scale scenes this is a computationally heavy technique.
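The basic parallax offset, shifting the texture coordinate along the tangent-space view direction by an amount read from the height map, can be sketched as follows. The scale and bias constants are typical illustrative values, not taken from [6] or [15]:

```python
import numpy as np

def parallax_uv(uv, height, view_ts, scale=0.04, bias=-0.02):
    """Classic parallax offset: shift the texture coordinate along the
    tangent-space view direction by an amount read from the height map."""
    h = height * scale + bias              # remap the stored height
    view = view_ts / np.linalg.norm(view_ts)
    return uv + h * view[:2] / view[2]     # larger shift at grazing angles

uv = np.array([0.5, 0.5])
straight = parallax_uv(uv, 1.0, np.array([0.0, 0.0, 1.0]))  # head-on: no shift
oblique = parallax_uv(uv, 1.0, np.array([1.0, 0.0, 1.0]))   # 45 degrees: shifted
```

The division by the z component of the view vector makes the shift grow at grazing angles; "offset limiting" variants drop that division to reduce swimming artifacts.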

2.6.3 Relief texturing

Displacement mapping, or relief texturing, is a method proposed by Policarpo et al. [11]. The results look very good at close range but, as with parallax mapping, rays from the eye through each pixel/fragment have to be traced. Policarpo uses a linear search to find the displacement of a certain pixel, and when the height is found he also gains information about what is in shadow and what is not. Still, ray tracing and search algorithms are time consuming and will most likely only be successful at small distances with highly detailed objects. Though the method is expensive, it can be worth using for close-ups of objects.
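The linear search can be sketched on the CPU as a fixed-step march through the depth layer. The step count and parameterization are illustrative assumptions:

```python
def linear_search_depth(height_at, entry, direction, steps=32):
    """March a ray through the [0, 1] depth layer in fixed steps and return
    the first sample at or below the height-field surface.

    height_at(u, v) gives the surface depth in [0, 1] (0 = top of the layer);
    `entry` is (u, v, depth) where the ray enters the layer, and `direction`
    is its total travel through the layer, split into `steps` samples."""
    u, v, d = entry
    du, dv, dd = (c / steps for c in direction)
    for _ in range(steps):
        if d >= height_at(u, v):   # ray has reached or passed the surface
            break
        u, v, d = u + du, v + dv, d + dd
    return u, v, d

# A flat surface at depth 0.5, viewed straight down:
hit = linear_search_depth(lambda u, v: 0.5, (0.5, 0.5, 0.0), (0.0, 0.0, 1.0))
```

Policarpo follows the linear search with a binary refinement between the last two samples to sharpen the intersection; that step is omitted here.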

2.6.4 Real displacement mapping

Displacement mapping is a method that displaces the vertices themselves. Before SM 3.0 was introduced, all 'real' displacement had to be done on the CPU, which limited the GPU to shading only. When SM 3.0 made texture lookups in the vertex shader available, it opened up a new dimension of shader possibilities. It is trivial to sample from a height map in the vertex shader and use this value to alter the position of a vertex. This gives a truly displaced surface since it alters the geometry of an object. However, to achieve very high detail one needs to increase the tessellation of the object, which is one of the issues the various bump mapping techniques avoid. This increased vertex density introduces problems that the image-space techniques do not suffer from, mainly sending very large numbers of vertices to the GPU.
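The vertex-texture displacement described above amounts to a filtered height-map sample added along the vertex normal. A CPU sketch, where the bilinear filter mimics what a linearly filtered vertex-texture fetch would do (array layout and normalized-coordinate convention are our assumptions):

```python
import numpy as np

def sample_bilinear(height_map, u, v):
    """Bilinear height-map sample at normalized (u, v) in [0, 1]."""
    h, w = height_map.shape
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * height_map[y0, x0] + fx * height_map[y0, x1]
    bot = (1 - fx) * height_map[y1, x0] + fx * height_map[y1, x1]
    return (1 - fy) * top + fy * bot

def displace(vertex, normal, height_map, u, v, scale=1.0):
    """Offset a vertex along its normal by the sampled height."""
    return vertex + scale * sample_bilinear(height_map, u, v) * normal

hm = np.array([[0.0, 0.0], [1.0, 1.0]])
v_out = displace(np.zeros(3), np.array([0.0, 0.0, 1.0]), hm, 0.5, 0.5)
```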

2.7 Lighting models

The interaction between light and different surface materials is something we observe every day. The way light rays bounce and reflect, emit or transcend a certain material will alter its wavelength and change the color we perceive. Much research within computer graphics is concerned with finding ways to mimic this behavior, and this has led to numerous lighting algorithms.

The problem with many of these different algorithms is that they are seldom general; instead they tend to be very specific to the material they are trying to imitate. This is, however, a natural consequence of the materials themselves, since they differ considerably in light interaction.

2.7.1 Phong reflectance model

The Phong reflectance model is a de facto standard for basic shading. It is simple and efficient to use for most materials. There are naturally cases when this model is not suitable, but it is a good model to use as a basis in a lighting algorithm.

The Phong reflectance model in its original version does not take second-order reflections into consideration, in contrast to radiosity and ray tracing methods. This is compensated for by an ambient term. The ambient term is one of three terms in the model; the other two are called specular and diffuse. This means that to calculate the full illuminance of a scene, all three terms need to be evaluated. The different parts are shown in figure 2.6 ([12]).



Figure 2.6: Phong reflectance model: (a) ambient (b) diffuse (c) specular (d) result

The mathematical formula for calculating the reflection of a certain point is as follows:

Ip = ka ia + Σ_{lights} ( kd (~L · ~N) id + ks (~R · ~V)^α is )    (2.1)

where

ka = ambient reflection constant
kd = diffuse reflection constant (Lambertian reflectance)
ks = specular reflection constant
α = shininess constant

In equation 2.1, ~L is the direction from the point towards the light source, ~N is the normal at that point, ~R is the mirror reflection of the light vector and ~V is the view direction from the point to the eye (camera). This is calculated for all the light intensities:

is = specular intensity
id = diffuse intensity
ia = ambient intensity

The ambient term does not depend on any direction of either light or view and is therefore a direct product of the two constants. The diffuse term only depends on the light direction and the normal at the surface point. A light direction aligned with the normal thus gives a strong contribution of diffuse light. The specular term, on the other hand, depends on both the view vector and the reflection vector. If they are aligned, the result is a strong specular effect. The value of the α constant regulates the size of specular highlights. With large values of α, the term (~R · ~V)^α falls off rapidly away from perfect alignment, and this gives a very sharp specular highlight.
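The evaluation of equation 2.1 for a single light can be sketched in C++ as follows. The Vec3 helper and the clamping of negative dot products to zero are our own illustrative additions (the clamping is standard practice but is not spelled out in the equation itself):

```cpp
// Sketch of equation (2.1) for a single light. Vec3 and the clamping of
// negative dot products to zero are illustrative additions, not engine code.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// I_p = ka*ia + kd*(L . N)*id + ks*(R . V)^alpha*is
float phong(const Vec3& L, const Vec3& N, const Vec3& R, const Vec3& V,
            float ka, float kd, float ks, float alpha,
            float ia, float id, float is) {
    float diffuse  = std::max(dot(L, N), 0.0f);             // no light from behind
    float specular = std::pow(std::max(dot(R, V), 0.0f), alpha);
    return ka * ia + kd * diffuse * id + ks * specular * is;
}
```

For several lights, the diffuse and specular contributions are simply summed per light, as the summation in the equation indicates.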

A modification to the original Phong model is the Blinn-Phong model. Since the Phong model is dependent on the reflection vector ~R, this vector must be recalculated for each vertex (or pixel). If we instead define ~H as the half vector between ~L and ~V, we can replace the expression ~R · ~V with ~N · ~H.

~H = (~L + ~V) / |~L + ~V|    (2.2)

This may seem to offer no computational gain, since it involves computing a square root. This is true, but if we assume that the light and view vectors are directional (light and camera are considered to be at an infinite distance from the object), the computation of the half vector only has to be done once per light. In other words, the half vector does not depend on surface curvature or position like the Phong model does. The exponent α in (~R · ~V)^α can be replaced by β so that (~N · ~H)^β approximates the behavior of the Phong model. The difference in the specular term affects the visual result minimally.
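Under the directional-light assumption above, the half vector of equation 2.2 can be computed once per light. A minimal C++ sketch, with Vec3 and normalize as illustrative helpers:

```cpp
// Sketch of the half vector in equation (2.2). Under the directional-light
// assumption, this needs to run only once per light, not per vertex.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// H = (L + V) / |L + V|
Vec3 halfVector(const Vec3& L, const Vec3& V) {
    return normalize({ L.x + V.x, L.y + V.y, L.z + V.z });
}
```

Per shaded point, only the dot product ~N · ~H then remains to be evaluated.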


Chapter 3

Implementation

This chapter deals with our implementation of snow in the Avalanche engine. It is based on, and follows directly from, the theory discussed in chapter 2.

3.1 Snow terrain

Since we need a high resolution mesh but do not want to pay the price of massive amounts of vertices to process, we chose to reduce the LOD. Our current implementation is based on LOD level 3, which means that each snow patch is 8 x 8 meters. Since our snow is something that should be rendered to the scene, we created a corresponding render block. The render block is responsible for each individual snow patch in our LOD. For each patch we have a mesh of 64 x 64 vertices, an arbitrary number chosen to give sufficient resolution.

Stepping down in LOD would give a higher density of vertices and also smaller patches to modify. We found that the LOD we chose was suitable for our needs. It provides a reasonably large area where footprints can remain and different objects can interact with our surface. The vertices in our LOD are displaced in height by a fixed value to create the base snow surface.

3.2 Generation of the local height map

In addition to displacing the surface with a constant offset, a local height map is used to offset the snow surface further (each vertex is displaced by the value given in the height map). From this height map, a normal map is generated to give correct normals at each vertex, see figure 3.1. The obvious choice of resolution for the height map would be 64 x 64 (since each patch is 8 x 8 meters and we have 8 x 8 patches on each LOD), but this gives a visually repetitive pattern in the patches. Therefore the resolution was increased to 256 x 256. This means that the height map spans 4 x 4 snow patches and gives much less repetition.



Figure 3.1: Local height map: (a) snow height map and (b) snow normal map

The height map is generated from two different textures: the first contains height data in one color channel, and the second is the normal map generated from the first texture. The height map was constructed using the clouds filter in Adobe Photoshop CS2 and scaled appropriately. Using Nvidia's texture plugin [9], a normal map can be generated from a given texture. The values in the height map range from 0 to 255, which is enough precision in our case.

In the application on the CPU, the textures are read into memory and each pixel value corresponds to a matching position in an array. These values are then normalized and available to the user through a get function. The reason for storing the values in an array instead of just accessing the texture itself is speed. Writing to and reading from textures is typically a slow process, while the same operations on an array are very fast. The cost of reading the whole texture once, when creating the patches, and copying the values to an array is significantly less than constantly accessing the texture for reads and writes. Since we are running SM 2.0 and do not have access to textures in the vertex shader stage, we need to be able to modify and read our height map on the CPU and not on the GPU.

In short pseudo code, our implementation for a vertex height can be described as:

for( each vertex )
    calculate linear height from underlying terrain
    add a fixed value displacement
    add the value from the height map
end

Since the height is generated from a pre-made map, it is simple to use an arbitrary height map with a corresponding normal map. An example of this is seen in figure 3.2.


Figure 3.2: Arbitrary height map

3.3 Particles

When the character is moving around in the snow covered landscape, we want to show some feedback from the snow with which it is colliding. Just as when a person runs around in deep powder snow, there will be small clouds of snow splashing around them. Therefore we use two particle systems at every footstep the character takes.

3.3.1 Particle system control

To decide when to play a certain particle system, we track how far into a character animation file we are when the character puts her feet down. The impact is considered to be when the character's foot hits our snow surface. When we have received the time of the impact, we fetch the matrices describing the positions of the right and left ankle joints in the character's skeleton. Time and position for every footstep are saved and pushed down into the render blocks for each snow terrain patch. The information is updated once every frame. Since the update functions of the terrain render blocks and the character movements are separated into different threads, we may get some sync errors, although nothing major.

3.3.2 Soft Particles

To ensure that the particles do not ”cut” into the terrain mesh as sharply, we have implemented a variation of the soft particles algorithm. Our implementation is based on the ideas of Umenhoffer et al. [17], but we use a simplified model. In the method used by Umenhoffer et al., the particle cloud is interpreted as a spherical volume. A ray is then traced from the camera through the volume until it hits the terrain. Then, the density of the cloud is calculated from the distance traversed through the sphere volume. If the density of the particle system is known, it is easy to fade the edges and achieve a smooth transition between the volume data (i.e. the particles) and the surrounding scene. In our case, we decided to use a simplified algorithm, both because we found it to be ”good enough” and because it is less expensive to compute. We make use of the depth value of the actual billboard of the particles and compare that with the depth value of the surrounding snow mesh. If the depth of the billboard is close to the depth of the mesh, we fade the alpha channel of the billboard and blend it together with the rest of the scene.

The biggest problem we had with the implementation of soft particles was to send the depth value of the mesh at the specific pixel to the particle shader. The only way to do this in DirectX 9 is to render the depth of the mesh to a separate render target at the same time as we render the snow mesh to the back buffer. When this is done, we can send the rendered depth surface to the particle shader and fetch its value. Since the depth was rendered to a target with the same dimensions as the back buffer, we simply fetch the data from the same coordinates in the texture as the current pixel has on the screen. When using a render target as a single depth texture, the precision of the retrieved depth values depends on the format of the render target. There are several different depth formats; however, the hardware limited us to 8 bits per channel rather than the more appropriate 16-bit floating point format. We found that a range of only eight bits was not enough to get a visually good result, and we needed to find a way to keep more of the information from a single floating point value. We used equations (3.1) and (3.2) to save the calculated floating point value into two eight-bit color channels, in this case the green and red channels of the depth texture.

exponent = ceil(log2(depth_value))    (3.1)

base_depth = depth_value / exp2(exponent)    (3.2)

When extracting the depth value from the texture in the particle pixel shader, we use equation (3.3) to recover the exponent from the color range and (3.4) to obtain the actual depth value saved.

exponent = depth_texture.g ∗ 255 − 128    (3.3)

depth_value = depth_texture.r ∗ exp2(exponent)    (3.4)
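The round trip of equations (3.1)-(3.4) can be sketched in C++ as below. The sketch only illustrates the arithmetic: the (exponent + 128) / 255 encoding for the green channel is what (3.3) implies, and the quantization to actual 8-bit channel values in the render target is omitted. The Packed struct is our own stand-in for the red and green channels:

```cpp
// Sketch of equations (3.1)-(3.4): packing one floating point depth into two
// color channels. The green channel encoding (exponent + 128) / 255 is
// implied by (3.3); 8-bit quantization of the render target is omitted.
#include <cmath>

struct Packed { float r, g; }; // red and green channel values in [0, 1]

Packed packDepth(float depth) {
    float exponent = std::ceil(std::log2(depth));   // (3.1)
    float base     = depth / std::exp2(exponent);   // (3.2)
    return { base, (exponent + 128.0f) / 255.0f };
}

float unpackDepth(const Packed& p) {
    float exponent = p.g * 255.0f - 128.0f;         // (3.3)
    return p.r * std::exp2(exponent);               // (3.4)
}
```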

3.4 Footprints

Since we have information about where the character's feet are located at every frame and at every footstep, we can easily calculate positions for where footprints should be placed in the world. By transforming from world space into ”patch space” we can decide where in the actual patch to place the footprint. Creating a footprint is done by adjusting the values in the local height map, see chapter 3.2. For each frame, we send the vertex positions to the vertex shader. If we have made a new footprint, we modify the local height map and update the vertex buffer (a buffer that stores the individual information about vertices, e.g. position and normal) with the data in the local height map before sending it to the GPU and making the draw call. This way, the lifetime of a footprint is ”infinite” in the sense that as long as the snow patch it exists in is not removed, the footprint stays. At the same time as we change the coordinates in the vertex buffer, we also modify the normals sent with the buffer.

The creation of a footprint is done by altering the values in the local height map so that the vertices are pushed down to a base height slightly above the original terrain. To make a footprint more realistic, we push down a few surrounding vertices in a certain formation based on the direction of the character (see figure 3.3), to resemble a real footprint of a moving person.

Figure 3.3: The snow mesh displaced to resemble a real footprint

3.5 Shading and visuals

To enhance the feeling of being in a real landscape covered in snow, it is important that the lighting of the objects in the scene is realistic.

3.5.1 Lighting

For the diffuse lighting factor of the snow mesh we decided to use the diffuse part (3.5) of the standard Phong model (chapter 2.7). The normal in the equation is, in our case, the final normal of the surface after being displaced and bump mapped (chapter 2.6.1).


When calculating lighting, it is of great importance that it is done in the correct coordinate system. For instance, bump mapping is always done in the tangent space of the model. This is because the final normals for the model depend both on the orientation of the mesh in the world and on the detailed pattern of the normal map. In our case, we start out with a base normal generated from the gradient of the global terrain. This normal is added to the normal from the height map, which gives us a coarse normal of our terrain. Calculating the lighting with this coarse normal alone would yield a satisfactory result. Still, we apply a highly detailed bump map, giving us the final normal of the surface. The coarse normal follows the mesh geometry roughly and will therefore give a feeling of correct lighting. The final normal is just a trick to make the viewer believe that the snow surface is grainy rather than smooth when looking at it closely. When applying the bump mapping technique, we take advantage of the fact that our mesh is always generated in perfect squares, so finding the tangent (3.6) and bitangent (3.7) for the tangent space matrix (3.8) is simple.

tangent = normalize(cross(normal, float3(0, 1, 0)))    (3.6)

bitangent = normalize(cross(normal, tangent))    (3.7)

tangent_space_matrix = float3x3(tangent, bitangent, normal)    (3.8)
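Equations (3.6)-(3.8) translate almost directly into code. The sketch below uses illustrative Vec3 helpers rather than engine types, and, like the equations themselves, assumes the normal is never parallel to the fixed (0, 1, 0) axis:

```cpp
// Sketch of equations (3.6)-(3.7) with illustrative helpers. Assumes the
// normal is never parallel to the fixed (0, 1, 0) axis.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// The rows of the tangent space matrix (3.8) are tangent, bitangent, normal.
void tangentBasis(const Vec3& normal, Vec3& tangent, Vec3& bitangent) {
    tangent   = normalize(cross(normal, Vec3{0.0f, 1.0f, 0.0f})); // (3.6)
    bitangent = normalize(cross(normal, tangent));                // (3.7)
}
```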

We have not implemented the specular component of the Phong model, since it gives the surface a more plastic appearance, and snow does not really have specular highlights. But snow does have a specular effect in the sense that it reflects light very sharply at certain light angles. This effect is, of course, heavily dependent on the weather. Simulating this effect in a computer game is a bit tricky. One way would be to use a tone mapping filter (Larson et al. [8]) that brightens and darkens the image based on the angle of the sun relative to the viewer, and by calculating the total illuminance of the view port. This makes it possible to achieve the effect of a really bright scene when travelling from an indoor area into a bright outdoor area (just as sunlight would reflect and scatter in the snow on a sunny day and leave the spectator almost blind from the very bright light when stepping outside). Larson et al. also discuss a glare effect: the effect that appears when bright sources in the visual periphery scatter light in the lens of the eye, obscuring foveal vision. To achieve an extra brightness factor, we tried to simulate this effect with a blooming method similar to Ueda et al. [16]. Such a method extrapolates a higher brightness value for all light pixels in the scene and smooths the brightest pixels around light areas with a blurring filter. This is what we use in our implementation to give the spectator the feeling of being in a very bright environment. It also gives the snow particles an extra dimension in appearance. Figure 3.4 demonstrates the effect of applying the bloom post effect.



Figure 3.4: Scene processed with: (a) bloom effect, (b) no bloom effect

3.5.2 Glitter

When the sun reflects in snow, small bright spots or ”sparkles” appear. These sparkles are dependent on the viewer's position and the angle to the light source, i.e., the sun. We have implemented this effect by using a noise texture that we apply over the snow patch. To make sure that we only get sparkles where we want them, we constrain them to lie inside the specular area found by applying the specular part (3.9) of Phong's lighting model (chapter 2.7). By constraining the glittering sparkles to the specular area, we also make sure that the effect never appears where there are shadows.

specular_highlight = pow(max(dot(normal, half_vector), 0), shininess_factor)    (3.9)

The sparkle effect is illustrated in figure 3.5. Of course, the effect is more visible when the scene is in motion. The idea of how to implement the sparkles is a combination of what was done by Rosado [14] and the ”glitter” example included with Render Monkey [4].

Figure 3.5: Bright spots simulating the sun glittering in the snow
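The gating idea can be sketched as follows: a thresholded noise value masks the specular term of equation (3.9), so sparkles only show up inside the specular region and never in shadow. The threshold parameter and the hard boolean gate are our own simplifications of the noise-texture lookup:

```cpp
// Simplified sketch of the glitter gating: a thresholded noise value masks
// the specular term (3.9). The threshold and the hard gate are our own
// simplifications of the noise texture lookup, not the thesis shader.
#include <algorithm>
#include <cmath>

float glitter(float ndotH, float shininess, float noise, float threshold) {
    float specular = std::pow(std::max(ndotH, 0.0f), shininess); // (3.9)
    float sparkle  = (noise > threshold) ? 1.0f : 0.0f;          // noise gate
    return specular * sparkle; // sparkles only inside the specular region
}
```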

3.5.3 Border fade

To avoid a sharp contrast between our offset mesh and the underlying terrain we use an alpha fade towards the borders of our mesh. This effect is most visible when static and dynamic objects intersect the edge of our mesh. The fade makes this transition less noticeable.

By sending the camera position to the pixel shader it is easy to implement a fade based on the distance between the vertices and the camera.

r = distance(WorldPosition.xz, EyePosition.xz)    (3.10)

fade = 1 − smoothstep(near_fade, far_fade, r)    (3.11)

In equation 3.10 we calculate the distance from the vertex position to the camera position. This is used with a smoothstep function in equation 3.11. The result of this smoothstep is a value between 0 and 1, which determines the alpha of the pixel and gives our fading. In figure 3.6 we see the border of the snow patch and a stone on both sides of it, both with and without fade enabled.
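Equations (3.10)-(3.11) map directly to a small function. The sketch below reimplements the HLSL distance and smoothstep intrinsics in C++ for illustration; parameter names are our own:

```cpp
// Sketch of equations (3.10)-(3.11); the HLSL distance and smoothstep
// intrinsics are reimplemented in C++ for illustration.
#include <algorithm>
#include <cmath>

static float smoothstep(float edge0, float edge1, float x) {
    float t = std::clamp((x - edge0) / (edge1 - edge0), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// wx/wz: world position of the pixel, ex/ez: camera (eye) position
float borderFade(float wx, float wz, float ex, float ez,
                 float nearFade, float farFade) {
    float r = std::hypot(wx - ex, wz - ez);          // (3.10)
    return 1.0f - smoothstep(nearFade, farFade, r);  // (3.11)
}
```

Inside the near distance the mesh is fully opaque; beyond the far distance it is fully transparent, with a smooth ramp in between.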



Figure 3.6: Blended alpha transition between terrain patches: in figures (a) and (b) there is no fade; figures (c) and (d) have active blending


Chapter 4

Result

This chapter describes the results of our project. First we focus on the landscape and the interaction with it; then we go into what was done to add realism to the scene.

4.1 The landscape

We have created a mesh with high resolution to use as terrain around the active camera in a game. The mesh is generated in several small patches that always follow the camera at a specific level of detail. Figure 4.1 shows the mesh rendered in wire frame to illustrate the resolution. The distance between two vertices is roughly ten centimeters. Since terrain generation is efficient, it is possible to move seamlessly around the world in a free flying mode. In figure 4.1 it is also possible to see the tracks left by the character when running through the mesh. For a closer look at a footprint from the character, see figure 3.3. These are generated by displacing vertices using a given pattern in a local height map. We then generate the height displacement of our mesh by offsetting it from the underlying terrain and adding the values in the local height map.

Figure 4.1: The snow mesh rendered in wire frame


4.2 Adding realism

To give the spectator a feeling of presence in the scene it was important to have a visually good looking snow surface. This was done by adding a layer of smooth noise to give the impression of a rolling landscape. On top of this we added a bump map and a matching color texture to give the impression of fine details and graininess. Figure 4.2 shows two images of the final result, including the applied glitter effect.


Figure 4.2: Visual result: (a) graininess of the surface and (b) smoothness

As a way of making the transition between our snow mesh and the terrain softer, we implemented a fading algorithm. By default the transition is sharp, i.e. it is clearly visible where our mesh begins and ends. To make this less noticeable, we implemented an alpha blending between our snow mesh and the underlying terrain and objects, see figure 3.6.

4.3 Performance

Giving exact numbers for how much performance we lose by adding our snow mesh is impossible without creating a benchmark sequence and running the application several times on different systems, with snow both on and off.

Even though our implementation is far from optimized, it is a plausible solution for snow in computer games. One of the biggest advantages of using our method is that the performance loss when adding multiple dynamic interactions will be very small. Another advantage is that we can add several cars and big groups of people passing through the snow landscape next to our character and we would still not lose much more performance than we did by including the snow terrain in the first place.

At the moment, we use some texture memory for our different textures. If memory is limited, it would be possible to remove some of our textures. This would result in a slightly less detailed terrain, perhaps without the glitter. Using smaller versions of the textures is also a way to reduce the memory cost, but the drawback is a more repetitive look.


Chapter 5

Discussion / Future Work

In this chapter we will discuss the different choices we have made when creating our implementation, and ways to improve the algorithm.

5.1 Discussion

Even though snow is still a fairly unexplored area in computer graphics, and even more so in games, there are a few solutions available now. Since snow, just like other weather effects, has a great influence on the rest of the environment in a scene, it is of interest to find a solution for many or all of the problems arising when applying this kind of effect. So far, there is no solution that covers all of them; in fact, there is no solution that even claims to cover most of the problems. Some of the methods will accumulate the snow, while others will create prints in it, but none will change the topology of the snow by wind or movement after it has fallen to the ground. If the method is to be used in a computer game, it is important that everything can be done in real time. In other words, the calculations needed should be kept to a minimum and be efficient to compute, since performance is one of the main issues in games. The reason we chose to implement our current method as we did is that the more complex methods, e.g. Fearing's, are too slow for real-time applications.

5.1.1 What mapping technique is the best for snow?

When deciding what technique to use for displacing the prints in the snow mesh, we chose a displacement method where all the displacement calculations are done on the CPU. This way, we get a mesh that is fully displaced, and after that we do not have to worry about any other artifacts. Using a standard bump map method was never an option, since it gives no feeling of depth. Other methods, like relief mapping and parallax occlusion mapping, use ray casting in the shaders, which makes them computationally demanding for the GPU. Relief mapping gives a good visual appearance at certain angles, but is not a full solution when it comes to an entire scene. Parallax mapping, on the other hand, gives very impressive results. Although both these techniques suffer from aliasing, it would be of interest to try the parallax mapping technique on our snow to see how much performance we would lose and how much better looking footprints we would gain. Since the prints could be modified per pixel instead of per vertex, this might result in more realistic prints.

5.1.2 Everlasting prints

Since we use a method with true displacement of the vertices to achieve footprints, we get everlasting prints. This is preferable to using decals, since decals in games disappear shortly after they have been applied, which does not imitate the way a footprint acts in the real world. Although we lose our prints if we move away from the area and then return to the same place, this is not such a bad thing. It is easy to imagine that some snow has fallen, or that the wind has filled the prints, while we were gone. But seeing a print dissolve in front of you is not believable. Another thing that would increase the realism of the prints would be the possibility to fill in already existing prints, for example if a character walks in the same area just next to another print, or if the wind swirls by and hides the old prints.

5.1.3 Where there is no snow

Another problem with including a snow coverage is that you may not want snow everywhere. Perhaps there should only be snow around mountains, but not on the very steep slopes. Then it is important that the snow fades out when moving over to another material. This can lead to problems with how the game physics should be implemented. It is preferable to simulate a crust on the snow cover that has a different physical model. Of course, the characteristics of the material differ greatly depending on the rest of the weather. Many types of snow exist in the real world, and depending on the type (powder, wet, icy) the features of the surface can be quite different. In a game this is very hard to mimic and, in our experience, one type is always chosen for the entire game.

Combining the information about the thickness of the snow and where we change towards a warmer climate could help us establish a criterion for switching between different materials, both snow and other types of terrain.


5.1.4 Glitter

Seeing snow with sparkles in computer games is rare. We believe that including a glitter effect is a great way to heighten the immersion of the player.

Getting the snow to sparkle as real snow does is difficult, especially since the glitter behaves differently based on the temperature and the consistency of the snow. For instance, if the snow is a bit wet and about to thaw, it glitters intensely with very small sparkles, while cold powder snow produces sparkles that appear as larger, spread out light sources. Another feature of the glitter is that it does not appear in shadows and that it is strictly view dependent. We tried to simulate this as far as possible, but there are still ways to improve our method. Right now our glitter is difficult to see, because the blooming around it is so bright that it becomes hard to see the individual sparkles. Also, we went back to using non-view-dependent glitter, since the results were not convincing. Perhaps our sparkles should have been larger, since the rest of the climate visualized is cold and the snow and its sparkles should therefore act less like wet snow. To enhance our glitter, it would have been possible to create an alpha mask containing only the glitter, use this mask to blur the sparkles, and then brighten them up as when doing bloom. That way the sparkles would be more visible.

5.1.5 Physics

When including snow in a computer game, the first problem we run into is that our visual perception does not concur with what we believe snow to look like. But there are other senses that are of equal importance. Even though we cannot directly feel and touch the graphics in a normal computer game, we want to see that the character we are playing is experiencing these factors. So we want to see our avatar having more trouble with his or her balance when crossing a slippery surface, as well as moving slower when traveling through deep snow. The feeling of inertia when crawling through a huge pile of snow is definitely present in real life. All of this could be implemented in a game, but it takes computational power and time. This is something we would want to implement in the future.

In a real landscape, snow does not cover all places equally, and after accumulating in different small hills and valleys it is swirled around by the wind, resulting in new topology. This is something very hard to mimic in a computer game and so far we have not seen anyone even attempt to do this in real time.

It is difficult to produce physically correct accumulation of snow in real time. But perhaps it might be possible to do some sort of pre-calculation of the snow coverage before offsetting and rendering. If the topology of the scene is fully known in advance, it might be possible to construct a simple algorithm for building up snow in piles along the different static objects in the scene, as well as around hills and valleys. By using this information, it would be possible to generate a landscape with different snow depths depending on more physical patterns.

5.1.6 Additional geometry

One way to enhance the feeling of a living landscape is to add chunks of snow and other extra geometry. The game ”Lost Planet” already implements this around the feet of the character when moving around. This makes you feel that you are running around in the snow, even though the additional geometry does not look that great if you focus on it. Having chunks of snow coming up from the terrain and covering your legs when pressing down your own feet might not be so realistic. Adding chunks of snow in the vicinity of the footprint would feel more correct. This is something we would like to implement in the future.

5.2 Improvements

There is a deadline on every project, and we have decided not to pursue any more of our ideas for this version. Still, we have many interesting thoughts about what we could do if there were more time. A few of them are presented in the following sections.

5.2.1 Mesh resolution

One obvious improvement to our method would be to increase the number of vertices in each patch. The current resolution of 64 vertices over 8 meters yields 1.25 decimeters between each vertex. This in turn gives a very coarse grid to modify when interacting and leaving a track. Since the triangles in each patch are generated in a specific way (see figure 3.3), all quadrants need different models for tracks. By increasing the resolution of the grid, these irregularities between tracks in different quadrants would become less visible. The drawback of increasing mesh resolution is performance. Doubling the resolution in each direction quadruples the number of vertices for the GPU to process. A larger height map would also have to be used to avoid a repetitive pattern, as well as a larger vertex buffer to store the data.

5.2.2 Tracks

At the current mesh resolution, each object that interacts with the snow should have its own deformation model when it comes to leaving an imprint. A car would leave tracks from its wheels, while a non-player character could use a footprint as its model.


Taking this further, one could alter these snow deformation models depending on the height of the snow. In shallow snow, a character would leave tracks that closely resemble an ordinary footprint. But if the snow is deep, the same character would produce tracks that are longer and perhaps connected to each other, since the feet never rise above the snow surface. All these factors could be determined by the height and slope of the snow surface.

In our current implementation, the vertex normals that are being displaced are set to a predetermined fixed value. Ideally, these normals should reflect the correct normal of the vertex in a track, but doing so made the tracks virtually invisible. This is not unexpected, since the normals in the track have the same general direction as the surrounding surface and hence do not give any realistic shading. Instead, normals could be determined by trial and error in order to give visually good results. This would, in turn, give rise to a problem with the light direction: as the sun changes direction, the normals in the track would have to compensate for this change. Ideally we want some sort of ambient occlusion model (Kontkanen and Laine [7]) for the tracks, so that the vertex normals in the tracks can be correct while the shading of the tracks still looks good.
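One pragmatic middle ground between the fully correct normal and a fully hand-tuned one is to blend the two and renormalize. The following is our own illustrative helper, not part of the engine:

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def blend_track_normal(surface_normal, tuned_normal, weight):
    """Linearly interpolate between the geometrically correct vertex
    normal and a hand-tuned track normal, then renormalize.
    weight = 0 keeps the correct normal, weight = 1 gives the tuned one."""
    blended = tuple((1.0 - weight) * s + weight * t
                    for s, t in zip(surface_normal, tuned_normal))
    return normalize(blended)

# Tilt a flat-ground normal halfway toward a hypothetical tuned track normal.
n = blend_track_normal((0.0, 1.0, 0.0), (0.0, 0.7, 0.7), 0.5)
```

The weight could itself vary with the sun direction, which would localize the trial-and-error tuning to a single scalar.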

5.2.3 Particles

In our method, the particle systems do not take wind or any other physical aspect into account, except for gravity. To create a sense of snow spreading in a given direction when objects interact with it, these particle systems have to be modified. When the character traverses the snowy landscape, the particles should depend on how and in what direction the character is moving; for example, they could spread more in front of and behind the direction of travel. Ideally, this should be possible for the game designer to specify.
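One way to expose such a bias to the designer is a pair of spread parameters, one along the movement direction and one perpendicular to it. A minimal sketch on the ground plane, with hypothetical names and default values:

```python
import random

def emit_velocity(move_dir, speed, spread_along=1.5, spread_side=0.5):
    """Initial particle velocity on the ground plane, biased along the
    character's (unit) movement direction (dx, dz). The spread parameters
    are hypothetical designer-tunable values: a larger spread_along
    scatters particles mostly in front of and behind the character."""
    dx, dz = move_dir
    px, pz = -dz, dx                           # perpendicular on the ground plane
    along = random.gauss(0.0, spread_along)    # forward/backward scatter
    side = random.gauss(0.0, spread_side)      # sideways scatter
    return (speed * dx + along * dx + side * px,
            speed * dz + along * dz + side * pz)
```

Sampling many emissions for a character moving along +x then shows a noticeably wider scatter along x than along z.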

5.2.4 Weather effects

Adding weather effects would further enhance the visual stimulus of our proposed method. We have only discussed the aspect of interacting with snow, but effects such as snow storms, snow blowing over ridges and snow whipping around the character would increase the immersion in a game a great deal.


Chapter 6

Conclusion

Snow remains a difficult problem in games. Achieving visually convincing snow while maintaining high performance is not trivial, even with modern hardware. We have chosen to focus our work on the interaction with snow and not on snow buildup or weather effects.

We have presented a robust implementation for introducing a modifiable snow surface in the Avalanche engine. The principle of adding another layer of high-resolution terrain that can be modified is a fast and stable solution to our problem. The final method combines several different algorithms to realize the interaction with the snow.

By examining our problem specification in chapter 1.1, we can see that we have satisfied all those criteria except for one, the addition of new geometry. We are pleased with our result, and while the additional geometry would enhance our method, the lack of it does not affect the result substantially. We can also see that we have avoided footprints that are visible for only a certain amount of time. Using our method they remain visible forever, as long as the player stays in the same area.


Bibliography

[1] Bryan Dudash. ShaderX 4 - Advanced Rendering Techniques, chapter 7.1 - Winter wonderland. Charles River Media Inc., 2006.

[2] D. Ebert, F. Musgrave, D. Peachey, K. Perlin, and S. Worley. Texturing and modeling, chapter 6 - Practical methods for texture design. Morgan Kaufman., 2003.

[3] Paul Fearing. Computer modelling of fallen snow. In Proc. of SIGGRAPH, 2000.

[4] ATI 3D Application Research Group. Render monkey. SOFTWARE APPLICATION found on www.ati.com. Version 1.62.

[5] Tommy Hinks. Animating wind-driven snow buildup using an implicit approach. Master’s thesis, University of Linköping, 2006.

[6] Tomomichi Kaneko, Toshiyuki Takahei, Masahiko Inami, Naoki Kawakami, Yasuyuki Yanagida, Taro Maeda, and Susumu Tachi. Detailed shape representation with parallax mapping. Technical report, The University of Tokyo, 2001.

[7] Janne Kontkanen and Samuli Laine. Ambient occlusion fields. In Proc. of SIGGRAPH, Symposium on Interactive 3D Graphics and Games. ACM Press, 2005.

[8] Gregory Ward Larson, Holly Rushmeier, and Christine Piatko. A visibility matching tone reproduction operator for high dynamic range scenes. Technical report, University of California, 1997. Article available at http://radsite.lbl.gov/radiance/papers/ last visited 2007-03-05.

[9] Nvidia. Nvidia normal map filter. SOFTWARE PLUGIN found on www.nvidia.com. Version 7.83.

[10] Per Ohlsson and Stefan Seipel. Real-time rendering of accumulated snow. Technical report, Uppsala University, 2004.

[11] Fábio Policarpo, Manuel M. Oliveira, and João L. D. Comba. Real-time relief mapping on arbitrary polygonal surfaces. In Proc. of SIGGRAPH, Transactions on Graphics (TOG). ACM Press, 2005.

[12] Various public authors. Phong reflection model. Article published on http://en.wikipedia.org/wiki/Phong_reflection_model, last visited 2007-03-05, 2006.


[13] W. T. Reeves. Particle systems - a technique for modeling a class of fuzzy objects. In ACM Transactions on Graphics (TOG), 1983.

[14] Gilberto Rosado. ShaderX 4 - Advanced Rendering Techniques, chapter 7.1 - Rendering snow cover. Charles River Media Inc., 2006.

[15] Natalya Tatarchuk. Practical parallax occlusion mapping with approximate soft shadows for detailed surface rendering. In Proc. of SIGGRAPH, Advanced Real-Time Rendering in 3D Graphics and Games. ACM Press, 2006.

[16] Fumito Ueda, Hajime Sugiyama, Takuya Seki, and Masanobu Tanaka. The making of ”Shadow of the Colossus”. Article published on http://edusworld.org/ew/ficheros/2006/paginasWeb/making_of_sotc.html, last visited 2007-03-05, 2006.

[17] Tamás Umenhoffer, László Szirmay-Kalos, and Gábor Szíjártó. ShaderX 5 - Advanced Rendering Techniques, chapter 5.1 - Spherical Billboards for Rendering Volumetric Data. Charles River Media Inc., 2007.
