
Visualisation of a simulated dispersion cloud based on a stochastic particle modelling

and Volume Rendering in OpenGL

Per Björklund

November 2, 2012

Master’s Thesis in Computing Science, 30 credits
Supervisor at CS-UmU: Mikael Rännar

Examiner: Fredrik Georgesson

Umeå University

Department of Computing Science
SE-901 87 UMEÅ

SWEDEN


Abstract

When visualising a natural phenomenon, such as a gas cloud, the constant movement and turbulence can pose a complex problem. With the use of a stochastic particle dispersion model, a gas cloud is simulated and stored in a datafile. This thesis report describes possible methods for structuring and rendering such a datafile in an efficient and realistic manner. This is achieved by using interpolation, the octree data structure and volume rendering.

The end product will be used in a larger project at FOI, with the goal of facilitating the training of personnel. Three different rendering methods are implemented and compared against each other; the best of these will form the foundation for the FOI project. The results show that the methods can be used with satisfactory results, but further development is needed.


Contents

1 Introduction
  1.1 Background
  1.2 Problem Description
  1.3 Goals
  1.4 Structure of the thesis

2 Volume data
  2.1 Particles
    2.1.1 Dispersion model
    2.1.2 Interpolation
  2.2 Voxelisation
  2.3 Data structure
    2.3.1 Basic structure

3 Volume rendering
  3.1 What is volume rendering?
  3.2 Indirect Volume Rendering - Surface extraction
    3.2.1 Marching cube
    3.2.2 Marching Tetrahedrons
  3.3 Direct Volume Rendering - Volume Ray Casting
    3.3.1 Ray tracing
    3.3.2 Ray casting

4 Results
  4.1 Implementation
    4.1.1 Parsing phase
    4.1.2 Octree phase
    4.1.3 Rendering phase

5 Conclusions and Discussion

References


List of Figures

2.1 Linear vs. polynomial interpolation
2.2 From voxel to a regular grid
2.3 Left: Recursive subdivision of a cube into octants. Right: The corresponding octree. Figure from Wikipedia, article Octree.

3.1 A single cube with known values at its eight corners
3.2 The different cases the marching cube algorithm can generate
3.3 How to split a cube according to the marching tetrahedron algorithm
3.4 The cases the marching tetrahedron uses to generate triangles
3.5 The ray tracing algorithm builds an image by extending rays into a scene. Figure from Wikipedia, article Ray Tracing.
3.6 The four basic steps of volume ray casting: (1) Ray Casting (2) Sampling (3) Shading (4) Compositing. Figure from Wikipedia, article Volume Ray Casting.

4.1 A basic flowchart of the program
4.2 A dispersion cloud rendered with the Point spheres method
4.3 The corresponding neighbours to a corner in a 2D voxel
4.4 A dispersion cloud rendered with the Marching Tetrahedron method
4.5 A dispersion cloud rendered with the Ray Casting method


Acknowledgements

I would like to thank Håkan Grahn, Richard Thorell Billing and all the other co-workers at FOI for all the help with the project and this thesis report. I would also like to thank my fiancée and my friends for all their support throughout this project.


Chapter 1

Introduction

Visualising a natural phenomenon such as gas or smoke is a challenging problem in the field of computer graphics. The main reason for the complexity of this problem is the constant movement and turbulence in the volume. This thesis describes the data structures and rendering techniques used in visualising such a phenomenon.

The movement of the dispersion cloud is acquired by using a stochastic particle dispersion model. This provides a specific position, for any given model particle, for each time step.

To structure the acquired dataset, a simple octree structure is used. The octree spatially partitions the volume of the cloud into voxels, short for volumetric pixels, to ease the rendering phase. There are many different techniques for rendering a voxel grid; in this thesis, volume rendering is used. Volume rendering can be split into two main areas, direct and indirect rendering. Indirect volume rendering uses a volume dataset to create a geometric form. The direct methods use the dataset to calculate an RGBA colour value that is projected onto the correct pixel in the frame buffer, and in this manner represent the dataset.

Volume rendering techniques can be very beneficial when visualising a three-dimensional dataset. These techniques can enhance internal features and/or add transparency to others. They are also used to extract surface information from solid objects.

1.1 Background

In today’s society, a large amount of hazardous chemicals is used by industry. If some of these chemicals were to be released, on purpose or by accident, the effects would in many cases be severe and sometimes even devastating. When one of these hazardous emissions occurs, it is vital to know how the gas is behaving. Today the Swedish Armed Forces use an instrument called a RAPID for measuring the concentration of gas particles in the air. The Swedish Defence Research Agency (FOI) now wants to develop a tool to simulate and visualise a gas cloud in the same manner as one would see it through a RAPID. This tool would then be used to better understand how to deal with these emissions and to facilitate the training of personnel.

FOI already has numerical particle models to calculate the dispersion of gas emissions from a point source. The output from this calculation model is to be the foundation for the visualisation of the dispersed cloud.


1.2 Problem Description

The main purpose of this thesis project is to create a simple way to visualise the events in a datafile. The datafile comes from a numerical dispersion model used by FOI. This model uses model particles to describe the dispersion of the gas emission from a point source. Each model particle represents only a sample point in a point cloud, not a real particle, as it has no radius. These datafiles consist of a large number of model particles: approximately 10,000 are needed to obtain good resolution, but as many as two million could be used.

The high number of model particles makes it vital for the system to be structured in a way that eases the updating of positions and the drawing of particles. Handling these massive files without introducing latency into the system is also a challenge. The time step between the positions of a model particle within the datafile does not follow a fixed standard; the time difference between two positions can vary from several seconds to less than one second. This would make for a rough simulation, so the system must be able to derive the sub-positions between these points from their individual timestamps.

The program will be developed in C++ and should use Open Scene Graph and OpenGL to meet the requirements. C++ was chosen to facilitate further development of the system, as it is often used within the FOI organisation.

There are already toolkits that can handle visualising the data to the extent FOI is looking for, e.g. VTK1 or ParaView2. The problem with these solutions is their license terms, as they require the end product to be open source.

1.3 Goals

This thesis report concentrates on giving an overview of how different volume rendering techniques can be used to visualise a cloud in real or interactive time, but it also produces a program in which these techniques can be viewed.

The main project, of which this thesis is a part, has a much larger end goal. In the final product, the user will be able to choose between different views: one god view, where the user can move freely in the world, and a RAPID view, where the user sees the world as through the RAPID instrument. The end product should be able to handle a large number of model particles and should run in real time. An optional requirement is to implement an infrared view of the gas cloud. It is also a wish from FOI that the program should be executable from Matlab, and, if time permits, that the dispersion model be integrated within the system.

The main goal of this thesis project is to lay the foundation for this simulator and to integrate the previously mentioned techniques with the datafile. To build a good foundation, a quick and effective way of parsing the file into a dataset containing the model particles' positions for each time step is needed. Once this dataset has been created, a good data structure should be implemented to contain the information and ease the rendering stage. Lastly, the rendering techniques are developed into a realistic and versatile solution.

1http://www.vtk.org/

2http://www.paraview.org/


1.4 Structure of the thesis

Chapter 1 of this thesis report includes an overall introduction, the general purpose and the target objectives of the project. Chapter 2 explains the acquisition of the model particle dataset, the basic data types used and the data structure used to handle the obtained data. Chapter 3 contains a review of the rendering techniques used in this project. In Chapter 4 the results and achievements of the project are presented. The report ends with Chapter 5, which discusses the general outcome of the project, evaluates the goals and considers whether the target requirements were met.


Chapter 2

Volume data

This thesis uses the dynamic movement of model particles to represent a dispersion cloud. These model particles are discrete sample points from the calculation model and are used to determine the cloud's density in the volume. The following chapter introduces the most basic components of the system: the model particle system, the voxelisation and the octree data structure.

2.1 Particles

There are a number of different methods to determine the movement of the model particles. The method used in this thesis is briefly explained in this section.

2.1.1 Dispersion model

To simulate the movement of density in a dispersion cloud there are different methods to choose from. To generate the dataset used in this thesis, a Lagrangian[4] particle method is used. This method gives a number of discrete sample points that represent the cloud's movement. To derive the position of a model particle at any given time step, the dispersion model uses a mean wind and a stochastic distribution.

The dispersion model calculates new positions for the model particles using a variant of the RDM1 equation. This equation uses the mean wind at the model particle's specific location to derive a new position. The new position is then nudged in some direction by a stochastic value drawn from the distribution; this operation represents the turbulence in the wind. The stochastic distribution needed for this equation is calculated from the Fokker-Planck equation. The mean wind profiles are produced from samples of real wind measurements, which have been transformed into a distribution using similarity theory[14]2. The formula for calculating the model particle positions is:

$$
\begin{aligned}
dx_i &= u_i\,dt \\
u_i  &= u_{i,\mathrm{old}} + du_i, \qquad i, j = 1, 2, 3 \\
du_i &= a_i(\vec{x}, \vec{u}, t)\,dt + b_{ij}\,dW_j
\end{aligned}
\tag{2.1}
$$

1Random Displacement Model

2See pages 347-404


In Equation 2.1 the term $b_{ij}\,dW_j$ simulates the turbulence of the air, and the term $a_i(\vec{x}, \vec{u}, t)\,dt$ represents the mean wind. The outputs of this equation, $u_i$, $du_i$ and $dx_i$, are the new velocity, the change in velocity and the change in position of the model particle for this time step. These terms are derived from the Fokker-Planck equation, as described in Schönfeldt[12].

2.1.2 Interpolation

Because the final product will be simulating the model particles in real time, or at least at an interactive frame rate, the system needs to update the positions several times a second. The problem is that the provided dataset may only give one position per second, which would make for a very crude simulation. To avoid this, the positions are interpolated for every time step. This increases the complexity of the program, but the result is a more lifelike visualisation. The positions are interpolated using a linear model, which produces a straight line from point to point. A polynomial model calculates a smoother curve between the points, but the cost of each evaluation is higher. The linear equation[3] also remains stable within the data range, which a higher-order equation tends to overshoot.

The difference can be seen in Figure 2.1.

Figure 2.1: Linear vs. polynomial interpolation

Since the difference in a particle's position from one time step to the next is very small, the cruder form of interpolation was selected; the smoother curve would not be noticeable unless the particles travel at very high speed.
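As a concrete illustration, the linear model amounts to the following sketch (the Vec3 type and the function name are hypothetical, not taken from the thesis code):

```cpp
// Minimal sketch of linear interpolation between two sampled positions.
struct Vec3 { float x, y, z; };

// t is the normalised time in [0, 1] between the previous sample p0
// and the next sample p1.
Vec3 lerp(const Vec3& p0, const Vec3& p1, float t) {
    return { p0.x + t * (p1.x - p0.x),
             p0.y + t * (p1.y - p0.y),
             p0.z + t * (p1.z - p0.z) };
}
```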

2.2 Voxelisation

A volume dataset consists of information tied to specific locations in a specified space. The information within the dataset can be of a number of different types, e.g. scalar or vector. The space, which is often 3D, can be sampled into a regular or anisotropic grid of so-called voxels. A voxel, short for volumetric pixel[10]3, is a volume element. If the volume element size is identical for all voxels, the grid is regular; if the size differs, the grid is anisotropic. An example can be seen in Figure 2.2.

3See pages 502-504


Figure 2.2: From voxel to a regular grid.

Structuring information in this manner is common in the medical field, when rendering volumetric images such as CT, MRI or ultrasound. The technique is also used in the computer game industry when creating terrain or height maps. Voxelisation[15] is the stage concerned with converting geometric objects into a grid of voxels. The final image produced from such a voxelisation will be very blocky, as the data is based on cubes. In this thesis, however, instead of converting a geometric object to a set of voxels, the algorithm divides the bounding box of the cloud. This creates several smaller volume elements inside the three-dimensional regular grid. The voxels used in this thesis hold information about the density of model particles at that specific point in the grid, but a voxel can easily be extended to include other attributes. To voxelise the gas dispersion in this simulation, the model particles are inserted into an octree, which is described in the next section.
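To make the grid mapping concrete, a sketch of mapping a world-space position into a regular grid cell might look as follows (a hypothetical helper, not taken from the thesis):

```cpp
#include <cmath>

// Hypothetical sketch: mapping a world-space position into a regular
// n x n x n voxel grid spanning the cloud's bounding box.
struct GridIndex { int x, y, z; };

GridIndex worldToVoxel(float px, float py, float pz,
                       float minX, float minY, float minZ,
                       float cellSize, int n) {
    // Clamp so positions on the max boundary fall into the last cell.
    auto cell = [&](float p, float mn) {
        int i = static_cast<int>(std::floor((p - mn) / cellSize));
        return i < 0 ? 0 : (i >= n ? n - 1 : i);
    };
    return { cell(px, minX), cell(py, minY), cell(pz, minZ) };
}
```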

2.3 Data structure

An octree is a hierarchical decomposition of a specified three-dimensional space. In this section the basic technique and structure of the octree are described.

2.3.1 Basic structure

The octree is a tree data structure, like the binary tree, but instead of having only two children per internal node it has eight. The octree structure is mostly used to partition a three-dimensional space by recursively subdividing the bounding volume of the root node. The bounding volume of a node is simultaneously split along three axes through the centre of the volume. This creates eight new boxes to insert into the next layer of the tree structure, hence the name. Since the split of the volume originates from the centre, the final structure becomes regular4, which can make some queries to the structure more efficient[10]5. How many times the starting volume is split varies, but most often the subdivision continues until a stopping criterion is met, e.g. a maximum depth is reached or a certain number of primitives remain in a box. The last level is the leaf level, where the leaf elements, or voxels, are stored. The octree structure can be viewed in Figure 2.3.

There are numerous variations of octree implementations, each suited to its application, but the main purpose of the octree is to provide a structure with a small memory footprint that can encapsulate a volume. The cells can then be used to extract a mesh of geometric primitives, e.g. with Marching Cubes, but there are also other uses for the structure, e.g. collision mapping.

4See section 2.2

5See pages 654-656


Figure 2.3: Left: Recursive subdivision of a cube into octants. Right: The corresponding octree. Figure from Wikipedia under article Octree.

If every cell on every level points to a certain space, the octree is called a "full octree"[13]. A full octree with n levels is built with the same number of voxels as a regular 3D grid (e.g., a full octree of 5 levels is constructed out of 32 x 32 x 32 voxels). In this extreme case the octree takes more room than an ordinary 3D grid. However, when the voxels represent surfaces instead of volumes, i.e. hollow objects, most of the space is empty and is left unused by the octree. Then the octree is very economical, allowing high-resolution voxel spaces with low memory consumption. A regular 3D grid must store every voxel in the volume even if most of the volume is empty, which becomes costly at high resolution. The most common operations on a voxel grid are finding a specific position or a voxel with certain coordinates. The main advantage of the octree is that the voxel coordinates are not stored but are implicit in the way the tree is traversed.
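As an illustration, the recursive subdivision might be sketched as follows (a hedged example; the types, names and stopping criterion are assumptions, not the thesis implementation):

```cpp
#include <array>
#include <memory>

// Sketch of an octree node. Internal nodes own eight children; leaf
// nodes act as voxels and store a particle density count.
struct AABB { float min[3]; float max[3]; };

struct OctreeNode {
    AABB bounds;
    int  depth = 0;
    int  density = 0;                       // used only at leaf level
    std::array<std::unique_ptr<OctreeNode>, 8> children;

    bool isLeaf() const { return !children[0]; }

    // Split this node's bounding volume at its centre along all three
    // axes, creating the eight octants.
    void split(int maxDepth) {
        if (depth >= maxDepth) return;      // stopping criterion
        float c[3];
        for (int a = 0; a < 3; ++a)
            c[a] = 0.5f * (bounds.min[a] + bounds.max[a]);
        for (int i = 0; i < 8; ++i) {
            auto child = std::make_unique<OctreeNode>();
            child->depth = depth + 1;
            for (int a = 0; a < 3; ++a) {
                bool hi = (i >> a) & 1;     // bit a selects low/high half
                child->bounds.min[a] = hi ? c[a] : bounds.min[a];
                child->bounds.max[a] = hi ? bounds.max[a] : c[a];
            }
            child->split(maxDepth);
            children[i] = std::move(child);
        }
    }
};
```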


Chapter 3

Volume rendering

In this chapter volume rendering will be described, first indirect volume rendering and then direct volume rendering.

3.1 What is volume rendering?

The term volume rendering is often used to describe the techniques which allow three-dimensional data to be visualised. Over the last decades these techniques have become a very popular visualisation tool in medicine, biotechnology, engineering and many other sciences. While they give an invaluable advantage, the lack of interactive frame rates limited their use in the past. Advances in hardware have since led to interactive, and in some cases even real-time, volume rendering performance[6]1.

A high frame rate is always essential when visualising, but especially when the user, as in this case, will study and try to understand the rendered dataset in front of them. Since the particles change location with every time step, the volume data is highly dynamic, which also puts a strain on the system.

There are a number of different volume rendering methods used to visualise a dataset. These methods can be split into direct and indirect methods. The difference is that the indirect methods create a geometric representation of the dataset, while the direct methods only imitate the dataset by, e.g., colouring pixels in the frame buffer in a certain way. A few examples of methods can be seen in Table 3.1.

Table 3.1: Some examples of volume rendering methods

    Indirect              Direct
    Contour tracing       Ray casting
    Marching Cubes        Splatting
    Marching Tetrahedra   Texture Mapping

1See pages 229-258


3.2 Indirect Volume Rendering - Surface extraction

Rendering a volume dataset using surface-extraction, or indirect, algorithms is a standard technique in scientific visualisation. Most of these methods involve approximating a surface contained within the data using geometric primitives. The main drawback of the algorithms in this area is that the meshes of primitives they produce, often triangular, are in many cases small and ill-shaped. These meshes usually need improvement through decimation, smoothing or remeshing, which is costly. In this section marching cubes and marching tetrahedrons are described.

3.2.1 Marching cube

The Marching cubes algorithm, published by Lorensen and Cline[9] in 1987, creates a triangular mesh by computing an iso-surface from volume or discrete data. By voxelising2 the data and computing the connecting patches for every cube, the algorithm can extract the surface of an object. The algorithm works in both 2D and 3D.

There are two primary steps to the marching cube algorithm. The first is, given a cube with eight corners, to generate one or more triangles. The second is to calculate the normals of the new primitives, to ensure the quality of the image.

Marching cubes uses a divide-and-conquer approach to generate the surface of an object. The algorithm calculates the triangles for one cube and then moves (or marches) on to the next. Each of the eight corners of a given cube contains a specified value. If this value is below the threshold, the corner is outside the object; if the value is above, it is considered to be inside the object. With this definition we can determine the intersection of the surface with the cube, see Figure 3.1.

Figure 3.1: A single cube with known values at its eight corners

A cube has eight corners, and each of these points can take one of two states: inside or outside the object's surface. This means a cube can be in one of 256 (= 2^8) different combinations. For each of these combinations there is a set of triangles to generate and draw. The 256 cases can, because of symmetry and cases with no contribution to the final image, be reduced to 14 unique cases. These cases can be viewed in Figure 3.2.

2See section 2.2


Figure 3.2: The different cases the marching cube algorithm can generate.

The algorithm assigns an index to each cube based on the state of its corners, and uses this index to tell which edges the surface intersects. From this information it interpolates a set of triangles; one case can generate from one to five triangles per cube. The final step is simply to use the generated triangles' vertices to determine the face normals.
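As an illustration, computing the 8-bit case index might look like the following sketch (a hedged example, not the thesis code; the full 256-entry edge and triangle lookup tables are omitted, the index would address them):

```cpp
#include <cstdint>

// Each of the eight corner densities is compared against the
// iso-threshold and contributes one bit, yielding an index in [0, 255]
// that addresses the precomputed edge/triangle lookup tables.
std::uint8_t cubeCaseIndex(const float corner[8], float isoThreshold) {
    std::uint8_t index = 0;
    for (int i = 0; i < 8; ++i)
        if (corner[i] > isoThreshold)   // corner i is inside the surface
            index |= std::uint8_t(1u << i);
    return index;
}
```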

A problem with the original 14 unique cases was that in some configurations where two cases are placed next to each other, a hole is generated. This problem was later fixed by Chernyaev[2] in 1995. Another obstacle was that the algorithm was patented in 1987; the patent expired after 17 years. A similar algorithm, called Marching Tetrahedrons, was developed to circumvent the patent and also to fix the original hole-generation problem. It is described in the next section.

3.2.2 Marching Tetrahedrons

The Marching Tetrahedron algorithm, first developed by Guéziec and Hummel in 1995[5], is similar to the marching cube algorithm in many respects. Both use voxelised3 data to generate a triangular mesh that defines the surface of an object. The difference is that the marching tetrahedron algorithm has a third step in the calculation. The first step is to divide each voxel into six tetrahedrons, as seen in Figure 3.3.

In step two, each of these tetrahedrons is used to calculate a case index, which in turn is used to generate the appropriate triangles. A tetrahedron has four points and, as in the marching cube algorithm, each point is considered either outside or inside the object. This gives 16 unique cases, which can be reduced to 7 because of symmetry and cases with no contribution to the final image. One tetrahedron can generate from zero to two triangles.

3See section 2.2


Figure 3.3: How to split a cube according to the marching tetrahedron algorithm.


The different cases for this algorithm can be seen in Figure 3.4.

Figure 3.4: The cases the marching tetrahedron uses to generate triangles.

The third and final step is to use the newly generated triangles to calculate normals, which are used in the illumination stage. Unlike the marching cube algorithm, this method ensures that there are no holes in the generated surface. It also uses a much smaller lookup table, which makes it easier to implement. The drawback is that the generated surface consists of more triangles: the marching cube algorithm generates from zero to five triangles per cube, while this algorithm creates zero to two triangles per tetrahedron, i.e. zero to twelve triangles per cube. This will slow down the system.


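As a sketch of the first two steps, one commonly used decomposition splits the cube into six tetrahedrons around a main diagonal; the corner numbering and index sets below are assumptions for illustration, not taken from the thesis:

```cpp
// One common way to split a cube into six tetrahedrons around the main
// diagonal (corners 0 and 6), plus the 4-bit case index per tetrahedron.
static const int kTetra[6][4] = {
    {0, 5, 1, 6}, {0, 1, 2, 6}, {0, 2, 3, 6},
    {0, 3, 7, 6}, {0, 7, 4, 6}, {0, 4, 5, 6},
};

// corner: the eight voxel corner densities; returns the case index
// (0..15) for tetrahedron t, analogous to the marching cubes index.
int tetraCaseIndex(const float corner[8], float iso, int t) {
    int index = 0;
    for (int v = 0; v < 4; ++v)
        if (corner[kTetra[t][v]] > iso)
            index |= 1 << v;
    return index;
}
```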

3.3 Direct Volume Rendering - Volume Ray Casting

The first algorithm using ray casting was presented by Arthur Appel[1] in 1968. This article introduced the basic idea of sending rays from the camera, one per pixel; each pixel is then coloured by determining whether its ray intersects an object. The first method that can be defined as volume ray casting appeared in 1984 with Kajiya[7], but it took until 1988, when Levoy[8] published his paper, for the modern definition of the term to be established.

The basic idea of volume ray casting is to use a three-dimensional dataset to represent the object directly. This only models the object, without forcing it into a geometric structure. Where other algorithms, e.g. marching cubes, only extract the surface mesh of an object, volume ray casting can visualise information within the object. This is particularly useful when visualising medical images, where specified data can be enhanced, or fluids, where the data is partially transparent.

Both ray casting and ray tracing can create highly realistic renderings, but with high quality comes high cost. This makes these techniques better suited for rendering images ahead of time than for real-time rendering[10].

3.3.1 Ray tracing

Ray tracing, of which volume ray casting is a variant, is basically an attempt to imitate nature. The colours seen by the eye come from rays of light cast by the sun, bouncing around the world and finally hitting the eye. Ray tracing performs this operation the other way around: rays are sent from the eye into the scene, and when a ray hits a point on an object, another ray is sent. This ray, a shadow ray, travels from the point on the object towards the light source. If the shadow ray reaches the light source without intersecting another object, the point is lit by the source; if an object is in its path, the point is in the shadow of the light source. This can be seen in Figure 3.5.

Figure 3.5: The ray tracing algorithm builds an image by extending rays into a scene. Figure taken from Wikipedia under article: Ray Tracing.

3.3.2 Ray casting

In the volume ray casting algorithm, when a ray intersects an object the ray continues to traverse through the volume. The algorithm also requires a volume dataset in order to know what is inside the volume. It is constructed from four primary steps, which can be seen in Figure 3.6.



Figure 3.6: The four basic steps of volume ray casting: (1) Ray Casting (2) Sampling (3) Shading (4) Compositing. Figure taken from Wikipedia under article: Volume Ray Casting.

The first step is to send out a ray for each pixel, which may intersect the volume, just as in ray tracing. The next step is to create discrete points along the path of the ray at which to sample the volume data. These points are usually at a predetermined distance from each other, are not aligned with the voxel grid, and often fall between voxels. To adjust for this, the values of the surrounding voxels are interpolated to obtain the correct value at each point.

This value is then used in the next step, shading. The voxel is shaded according to the location of the light source and the density of the volume between the voxel and the light source. Since the location is known, the only thing needed is the density. As in the ray tracing algorithm, a shadow ray is sent towards the light source to see if the voxel is obscured, but also to traverse the volume and calculate the density. With this information all sample points along the ray's path are shaded. This produces a three-dimensional grid of colour values which is used in the next step.

The final step of the algorithm is to merge all these sample points and the background into one final colour for the pixel. This step is known as compositing, and it produces a two-dimensional image of the three-dimensional voxel values. For every voxel the ray traverses, two values are needed: the shaded colour from the illumination model and the opacity value for the specific contents of that voxel. These two values are combined into a final pixel intensity.
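A front-to-back compositing loop along one ray might be sketched as follows (the Sample type, names and termination threshold are assumptions for illustration; early ray termination is one of the speed-up techniques mentioned at the end of this section):

```cpp
#include <vector>

// A shaded sample (colour + opacity) produced by the sampling and
// shading steps, accumulated from the camera towards the back.
struct Sample { float r, g, b, a; };

Sample composite(const std::vector<Sample>& samples) {
    Sample out{0.f, 0.f, 0.f, 0.f};
    for (const Sample& s : samples) {
        float w = (1.f - out.a) * s.a;  // remaining transparency times sample opacity
        out.r += w * s.r;
        out.g += w * s.g;
        out.b += w * s.b;
        out.a += w;
        if (out.a > 0.99f) break;       // early ray termination
    }
    return out;
}
```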

This technique also has some other features that are beneficial when rendering the simulation[11]. It essentially projects the voxels visible from the specified direction, which means the volume data can be visualised from any direction. Also, since the sample points exist inside the volume and all data are coloured separately, the user can decide that certain data should be enhanced or removed. This makes differences in data values visible, and surfaces that would otherwise be occluded can be seen, e.g. a pearl inside a box.

The speed of the ray casting can be influenced by many predetermined factors, such as screen resolution, volume size and the step size within the volume. These are mostly static variables that can greatly affect the performance of the algorithm. There are also techniques developed to speed up the process, e.g. early ray termination and empty-space skipping.


Chapter 4

Results

The goal of the program, using the previously presented techniques, was to produce an efficient, high-quality rendering tool for the gas phenomenon. In this chapter the results of combining these techniques are described.

4.1 Implementation

The system is made up of three primary steps, which can be seen in Figure 4.1. The first step is parsing, where information is collected from the datafile. The second step is to spatially structure the collected data and transform it into a density dataset. The third and final step is to visualise the transformed dataset, using the techniques described in the previous chapter; how they are applied is described in this section.

Figure 4.1: A basic flowchart of the program.

4.1.1 Parsing phase

When the program starts, the first thing that needs to be done is to collect the position data from the selected datafile. The datafile contains a number of time step segments, sorted from first to last. Within these segments, the position and ID number of each living particle are given in an unordered fashion. If a particle is not present in a segment, the particle has either been deposited on the ground or not yet been emitted from the source.

In the parsing phase the program creates a particle object for the maximum number of model particles used in the simulation model and stores these in a vector. Into each particle object the position from every time step where that model particle is alive is inserted. In a time step where the model particle is not present, the particle object holds a position at the origin, which in this system is where every particle originates from.



When a particle is located at the origin, the program treats that particle as deposited or not yet emitted. This parsing is done once at the beginning of the program, and the result is then used throughout the entire run.

Update

Because this is a dynamic simulation and not a static picture, the positions of the model particles must be updated. This is done every time a new frame begins to render. Every particle object has a list of positions, acquired in the parsing phase. The objects also hold two indices into this list, the first referring to the previous position and the second to the next position of the particle.

At the beginning of a new frame the time difference from the previous frame is calculated. This time difference is used to interpolate1 between the two referenced positions, which creates the current position of the specific particle object. If the accumulated time difference grows greater than the sample time of the calculation model, the reference indices in the particle objects are increased by one. One such position interpolation is referred to as a time step in this chapter.
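A sketch of this per-frame update might look as follows (a hedged example; the types, names and fixed sample interval are assumptions, not the thesis code):

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Particle {
    std::vector<Vec3> samples;   // positions from the parsing phase
    std::size_t prev = 0, next = 1;
    float local = 0.f;           // seconds elapsed inside [prev, next]
};

void update(Particle& p, float dt, float sampleTime) {
    p.local += dt;
    while (p.local >= sampleTime && p.next + 1 < p.samples.size()) {
        p.local -= sampleTime;   // advance to the next sample interval
        ++p.prev; ++p.next;
    }
    float t = p.local / sampleTime;  // normalised time within the interval
    if (t > 1.f) t = 1.f;            // clamp at the end of the data
    const Vec3 &a = p.samples[p.prev], &b = p.samples[p.next];
    Vec3 current{a.x + t * (b.x - a.x),
                 a.y + t * (b.y - a.y),
                 a.z + t * (b.z - a.z)};
    (void)current;               // would be handed to the octree phase
}
```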

4.1.2 Octree phase

When the parsing phase is done and has provided the program with a dataset of particle objects that can be interpolated, the rendering phase can begin. This is mainly done by the two remaining primary steps mentioned previously in this chapter. The first step is to structure and transform the data from an array of discrete sample positions into a density dataset. This is done because, as in the volume ray casting algorithm2, it is the density of the cloud at a specific location that determines the colour and transparency.

To transform the particle objects' locations into a cloud density dataset, the octree data structure is used. The octree is created at the start of the program and uses a predetermined depth as a stopping criterion. Even though the octree only needs to be created once, its volume and voxels need to be updated for each new frame, so that all particles fit inside its bounding box. The update of the octree is performed when all particle objects have been updated with a new interpolated position. During the update, the root node receives a new bounding volume and performs the same splits as in the creation of the tree. When the volume has been split, the root sends the new volumes to the corresponding children, which execute the same operation. This produces an updated octree that can hold every particle in the cloud.

When the update has been completed, it is time to introduce the particle objects to the octree. Each particle object is inserted into the octree and traverses the tree structure according to where in the cloud's bounding box its position is located. When the object has traversed the tree and found the leaf node for that location, the node simply increases its internal density.

Since the leaf node does not store a reference to the object and does not change it, the particles can be seen as moving freely in the world while the octree examines their positions and creates a new dataset from them. When all objects have been inserted, the leaf nodes of the octree hold a voxelised version of the density dataset. This new dataset can then be used by the different techniques to visualise the cloud.

1See section 2.1.2

2See section 3.3.2


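A hedged sketch of the density accumulation follows; the node layout and names are assumptions for illustration, not the thesis code:

```cpp
// A particle position descends the octree towards the octant containing
// it, and the reached leaf increments its density counter.
struct Node {
    float minB[3], maxB[3];
    Node* children[8] = {nullptr};   // all null at leaf level
    int   density = 0;
};

void insert(Node* node, const float p[3]) {
    while (node->children[0]) {      // descend until a leaf is reached
        float c;
        int octant = 0;
        for (int a = 0; a < 3; ++a) {
            c = 0.5f * (node->minB[a] + node->maxB[a]);
            if (p[a] >= c) octant |= 1 << a;  // bit a: high half of axis a
        }
        node = node->children[octant];
    }
    ++node->density;                 // leaf voxel: count this particle
}
```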

4.1.3 Rendering phase

The last step of the program is to actually render the transformed data. To do this, two different approaches to volume rendering have been used: direct and indirect. How these two methods have been used in the program is described in this section. For debugging purposes, a point sphere method was also implemented.

Point Spheres

This rendering method simply takes the interpolated positions of the various particle objects and renders them as spheres at those positions. This makes for very fast rendering, since the octree phase does not have to be executed, but with low realism. This rendering method can be seen in Figure 4.2. The spheres are not actual spheres; they are billboarded squares with a transparent texture. This method was used only to compare against the other two methods and to check how well they follow the paths of the particle objects.

Figure 4.2: A dispersion cloud rendered with the Point spheres method.

Marching Cubes/Tetrahedrons

In this implementation a number of things differ from what Lorensen and Cline wrote in their original paper. The main difference is that instead of using values for each corner to indicate whether each is inside or outside, the implemented algorithm uses the voxelised density dataset from the octree phase. This dataset can only tell whether a voxel contains any particles; if it does, that is true for every corner of the voxel.


To adjust for this, the algorithm, prior to determining the case of each voxel, traverses the octree to find empty voxels. When it finds an empty voxel, the algorithm checks the neighbours of that voxel to see if they contain any particles. If a neighbour containing particles is found, the corresponding corner is set; see Figure 4.3 for an example.

Figure 4.3: The corresponding neighbours to a corner in a 2D voxel.

When every empty voxel has been traversed, the algorithm runs the original marching cube algorithm on the empty voxels3 and produces a triangular mesh. The produced mesh represents the surface of the cloud. A cloud rendered with this method can be seen in Figure 4.4.

Figure 4.4: A dispersion cloud rendered with the Marching Tetrahedron method.

3See section 3.2.1


Ray casting

The ray casting implementation is generally the same as the method described in the previous chapter4. The only thing that differs from the theory of the algorithm is how the volume dataset is produced and managed. This method uses the same dataset, collected from the leaf level of the octree, as the indirect method. But as this method is implemented in GLSL5 to run on the graphics card, the dataset must be transformed into a texture so that the data can be read properly. When the texture has been created, the shader program is applied to a square that is always in front of the camera. The shader sends out a ray for every pixel of the square. If the ray hits the volume, it is traversed and the pixel is coloured by the density; if the ray misses, the pixel is discarded, showing the scene behind the square. In this method the colouring of the pixels has been made to look like a heat map of the density. A cloud rendered with this method can be seen in Figure 4.5.

Figure 4.5: A dispersion cloud rendered with the Ray Casting method.

4See section 3.3

5http://www.opengl.org/documentation/glsl/
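As an illustration of the texture step, uploading an n x n x n density grid gathered from the octree leaves as a single-channel 3D texture might look like the following sketch (assuming an OpenGL 3.x context with loaded extension headers; the buffer layout is an assumption):

```cpp
#include <GL/gl.h>   // assumes a current context and GL 3.x headers/loader

// Upload the density grid as a float 3D texture for the ray casting
// shader to sample. 'density' is assumed laid out x-fastest, then y, z.
GLuint uploadDensityTexture(const float* density, int n) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    // Linear filtering smooths interpolation between voxels; clamping
    // avoids sampling outside the cloud's bounding box.
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_R32F, n, n, n, 0,
                 GL_RED, GL_FLOAT, density);
    return tex;
}
```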


Chapter 5

Conclusions and Discussion

When I started this project and began to search the Internet and books for information, I was surprised by how many different methods and approaches there were. None was exactly like the one I needed, but there was a lot of information, almost too much.

As I continued into the implementation phase of the project, it was not the algorithms or the theory that were the problem; it was the Open Scene Graph (OSG) platform that caused the most trouble. I found the platform poorly documented and not very user-friendly. If the problems with OSG and other surrounding issues could have been avoided, I might have achieved more satisfying results.

The marching cubes algorithm runs in real time but is very susceptible to the increasing size of the cloud, as this increases the number of voxels that generate triangles. The algorithm is also heavily affected by the depth of the octree, because the depth controls the size of the voxels, which in turn determines the size of the triangles generated by the marching cube algorithm. The depth also affects the performance of the algorithm, since a deeper tree means more voxels. Since the update of the octree changes the volume of the voxels as the cloud spreads, the cloud loses detail throughout the simulation.

The ray casting method is also able to run in real time, but as the method is currently implemented there is no depth perception. Hence no object can be displayed in the foreground of the cloud, which is a major disadvantage in a simulator. The use of the octree also gives this method a blocky look that depends on the size of the voxels.

The algorithm that was most intuitive and effective in retrospect was the point sphere method. This is because it does not depend on the octree data structure, hence is not blocky, and the multitude of points floating around gives an easy-to-understand view of the cloud.

My opinion, however, is that if the ray casting method is further developed with a simple illumination model and a solution to the depth problem, it would be the best algorithm for this problem.


References

[1] Arthur Appel. Some techniques for shading machine renderings of solids. AFIPS ’68 (Spring), pages 37–45, 1968.

[2] Evgeni V. Chernyaev. Marching cubes 33: Construction of topologically correct isosurfaces. Technical Report CN/95-17, CERN, 1995.

[3] Ronald Fedkiw, Jos Stam, and Henrik Wann Jensen. Visual simulation of smoke. Computer Graphics Proceedings, Annual Conference Series, pages 15–22, August 2001.

[4] Håkan Grahn. Modeling of dispersion, deposition and evaporation from ground deposition in a stochastic particle model. Thesis project, December 2000.

[5] André Guéziec and Robert Hummel. Exploiting triangulated surface extraction using tetrahedral decomposition. IEEE Transactions on Visualization and Computer Graphics, 1:328–342, 1995.

[6] Christopher Johnson and Charles Hansen. Visualization Handbook. Academic Press, Inc., Orlando, FL, USA, 2004.

[7] James T. Kajiya and Brian P. Von Herzen. Ray tracing volume densities. SIGGRAPH Comput. Graph., 18(3):165–174, January 1984.

[8] Marc Levoy. Display of surfaces from volume data. IEEE Comput. Graph. Appl., 8(3):29–37, May 1988.

[9] William E. Lorensen and Harvey E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. SIGGRAPH Comput. Graph., 21(4):163–169, August 1987.

[10] T. Möller, E. Haines, and T. Akenine-Möller. Real-Time Rendering. A K Peters, 2002.

[11] John Pawasauskas. Volume visualization with ray casting. Advanced Topics in Computer Graphics, February 1997.

[12] Fredrik Schönfeldt. A Langevin equation dispersion model for the stably stratified planetary boundary layer. Thesis project, December 1997.

[13] Nilo Stolte. Octree - geometrical and computational aspects of octrees. http://nilo.stolte.free.fr/octree.html. Accessed: May 2, 2012.

[14] Roland B. Stull. An Introduction to Boundary Layer Meteorology. Kluwer Academic Publishers, 1988.


[15] Unknown. Fundamentals of voxelization. http://www.cs.sunysb.edu/vislab/wordpress//projects/volume/Papers/Voxel/index.html. Accessed: May 4, 2012.
