Improving rendering times of Autodesk Maya Fluids using the GPU


LiU-ITN-TEK-A--08/121--SE

Improving rendering times of Autodesk Maya Fluids using the GPU

Jonas Andersson
David Karlsson

2008-12-08

Department of Science and Technology
Linköping University
SE-601 74 Norrköping, Sweden

Master's thesis in media technology, carried out at the Institute of Technology, Linköping University.

Jonas Andersson
David Karlsson

Supervisor: Fredrik Averpil
Examiner: Mark Eric Dieckmann

Norrköping, 2008-12-08

Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances. The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its home page: http://www.ep.liu.se/

© Jonas Andersson, David Karlsson.

Abstract

Fluid simulation is today a hot topic in computer graphics. New, highly optimized algorithms have allowed complex systems to be simulated at high speed. This master's thesis describes how the graphics processing unit, found in most computer workstations, can be used to optimize the rendering of volumetric fluids. The main aim of the work has been to develop software that is capable of rendering fluids in high quality and with high performance using OpenGL. The software was developed at Filmgate, a digital effects company in Göteborg, and much time was spent making the interface and the workflow easy to use for people familiar with Autodesk Maya. The project resulted in a standalone rendering application, together with a set of plugins to exchange data between Maya and our renderer. Most of the goals have been reached when it comes to rendering features. The performance bottleneck turned out to be reading data from disc, and this is an area suitable for future development of the software.

Keywords: Volumetric rendering, Fluid simulation, Maya, GPU, OpenGL, GLSL


Acknowledgements

We would like to thank our supervisor Fredrik Averpil and our examiner Mark E Dieckmann. We would also like to thank Håkan Blomdahl and Mikael Håkansson at Filmgate for helping us out whenever we got stuck.


Contents

Acknowledgements

1 Introduction
  1.1 Purpose and motivation
  1.2 Problem description

2 Background
  2.1 Maya
    2.1.1 Scene Graph
  2.2 Fluid simulation
  2.3 Volume rendering
    2.3.1 Ray casting
    2.3.2 Marching cubes
    2.3.3 Shear warp
  2.4 Rendering polygonal objects
  2.5 OpenGL
    2.5.1 GLSL
  2.6 OpenEXR

3 Implementation
  3.1 Maya plugins
    3.1.1 Fluid data
    3.1.2 Scene data
    3.1.3 MEL interface
  3.2 Ray casting
    3.2.1 Multiple fluids
  3.3 Shadows
    3.3.1 Fluid shadowing
    3.3.2 Sampling density from fluids
    3.3.3 Sampling density from meshes
    3.3.4 Mesh shadowing
  3.4 Hold out objects
  3.5 Motion blur
  3.6 Rendering
  3.7 User interface
    3.7.1 Custom controls
    3.7.2 Main layout
    3.7.3 File-menu options
    3.7.4 The viewport
    3.7.5 The timeline
    3.7.6 Object selection
    3.7.7 Fluid attribute editor
    3.7.8 Mesh attribute editor
    3.7.9 Render settings
    3.7.10 Caching
    3.7.11 The console

4 Results
  4.1 Performance
  4.2 Limitations
    4.2.1 Differences from Maya
    4.2.2 Technical issues
    4.2.3 Design
    4.2.4 Performance

5 Discussion
  5.1 Conclusion
  5.2 Future work

A The test system
  A.1 Specifications

Chapter 1. Introduction

This report covers a thesis done at Linköping University in 2008. The thesis project began with us contacting Filmgate[1], a visual effects studio located in Göteborg. Filmgate consists of a group of talented artists, who work mainly with compositing and matte paintings, but also quite a few 3D effects. Filmgate has been working on films including Arn - The Knight Templar, Arn - The Kingdom at Road's End and The Descent.

At the time when we contacted Filmgate, they had several incoming film projects that would require three-dimensional fluid effects to be rendered. They did not have a complete workflow for this type of effect, because they did not like the way of rendering fluids offered by Autodesk Maya - their only 3D application at the time. It was therefore decided that we should develop a renderer that could make use of the complexity of Maya's fluid engine but with better rendering times. The rest of this chapter will define the purpose, motivations and goals for the project.

1.1 Purpose and motivation

Fluid simulation is a hot topic in computer graphics today. New, highly optimized algorithms have allowed complex systems to be simulated and rendered in real time, which was not possible a couple of years ago[2]. Large modeling and animation packages now usually have their own, very general, fluid simulation implementation, allowing for a large amount of customization. There has also been a lot of progress in the field of GPU programming and in moving tasks traditionally done by the CPU to the graphics processor[10].

The purpose of this thesis project has been to develop a standalone program that combines the strength of the Maya Fluids solver with the rendering power of the GPU. The simulation is performed inside Maya, but is shaded and rendered separately. This allows the user to tweak the look of the fluid in a near real-time environment without the need to re-render to see the result. This, together with reducing the time needed to render the final animation, will have a positive effect on the creative process.

1.2 Problem description

The program should be able to render fluids with results equal to, or better than, Maya's. Rendering features such as motion blur, shadows from geometry and self-shadowing have a big impact on the result and should be supported. The workflow should be optimized so that the complexity added by using a standalone program is kept to a minimum. Where user interaction is needed, the interface should look and behave similarly to Maya, to make it as easy as possible to use both programs in parallel.

Chapter 2. Background

In this chapter we give a brief background to some programs and concepts that will make it easier to understand the rest of the report.

2.1 Maya

Maya is a high-end 3D computer graphics and 3D modeling package (Figure 2.1) developed by Autodesk. Maya is used in the film and TV industry, as well as for video games, architectural visualization and design. It was awarded an Academy Award in 2003 for "scientific and technical achievement".

Figure 2.1: Screenshot of Maya 2008 running on Windows Vista.

The open architecture that Maya is built upon is one strong reason for its success. A scripting language called Maya Embedded Language (MEL) is provided, and a lot of the built-in functionality and tools are written in this scripting language. It is possible to save projects, including models and animation, as a sequence of MEL commands (Maya ASCII) that can be opened and edited in any text editor. Besides MEL, Maya supports compiled plugins through an API, and large parts, like the renderer or the user interface, can be customized or completely replaced to suit the workflow of the user.

2.1.1 Scene Graph

Every object, and every modification done to the scene, is described by a node in Maya's scene graph (Figure 2.2). All nodes are attached by some path to the scene's root node, and all nodes that have to be passed to get from an object to the root node affect that object.

Figure 2.2: The scene graph keeps track of the relationship between all objects and modifiers in the scene.

The scene graph and information about all nodes are fully accessible using the Maya API.

2.2 Fluid simulation

Fluid simulation is a tool used in the field of computer graphics to generate realistic water, smoke, explosions and related effects. Most fluid simulations use the Navier-Stokes equations[4] to describe the motion of the fluid over time. The equations are usually heavily simplified to allow for a quick and stable numerical solution[3]. This can significantly increase the speed of the simulation, which suits computer graphics applications, where a good-looking result is a good result (Figure 2.3), as opposed to areas such as physics, engineering, or mathematics, where a physically correct simulation is needed.

Figure 2.3: Fluid simulation demo from NVIDIA[5] running in real-time on the GPU.

A fluid is often represented by a rectangular grid in either two or three dimensions. The time needed for simulation is then directly linked to the resolution of the grid. It is especially important to use as low a resolution as possible when dealing with three-dimensional fields; for example, a cube with side length N results in N³ voxels. The basic version of a fluid simulator contains a velocity field and a density field:

x, y, z ∈ ℕ, 0 ≤ x, y, z < field resolution
velocity(x, y, z) = V, V ∈ ℝ³
density(x, y, z) = k, k ∈ ℝ

The simulation is then often extended to contain other elements such as temperature or color. Maya Fluids supports four different attributes that affect the simulation: density, velocity, fuel and temperature.
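To make the representation above concrete, here is a minimal C++ sketch of such a grid holding a density and a velocity field. It is our own illustration; the names and layout are not taken from Maya or from the renderer described later in this report.

#include <vector>

struct Vec3 { float x, y, z; };

// A rectangular fluid grid: one scalar density and one 3D velocity
// vector per voxel, stored flattened in row-major order.
struct FluidGrid {
    int nx, ny, nz;                 // grid resolution per axis
    std::vector<float> density;
    std::vector<Vec3>  velocity;

    FluidGrid(int nx, int ny, int nz)
        : nx(nx), ny(ny), nz(nz),
          density(nx * ny * nz, 0.0f),
          velocity(nx * ny * nz, Vec3{0, 0, 0}) {}

    // Flatten (x, y, z) into a 1D index; a cube of side N holds N^3 voxels.
    int index(int x, int y, int z) const { return x + nx * (y + ny * z); }

    float d(int x, int y, int z) const { return density[index(x, y, z)]; }
    Vec3  v(int x, int y, int z) const { return velocity[index(x, y, z)]; }
};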

2.3 Volume rendering

Volume rendering is a set of techniques used to visualize two-dimensional projections of three-dimensional data sets. It is mainly used to display images of X-ray data from CT or MRI scanners, but the techniques also have other applications, such as terrain visualization and fluid rendering. Several methods used for volume rendering are described below.

2.3.1 Ray casting

In volume ray casting, a ray is cast for each pixel in the output image. These rays originate from the eye point of the camera and are shot through their respective pixels, in an imaginary image plane, and on through the volume of data. While inside the volume, the ray samples data at regular or adaptive intervals. At each of these samples, a ray is cast towards each light source in order to calculate the shading at that particular point of the volume. The ray ends when it has passed through the volumetric object or once its accumulated density has reached a value where nothing beyond that point will be seen. Ray casting provides results of high quality, but can be quite slow in software. However, since each ray is independent, the algorithm is well suited for parallel architectures such as the GPU, where real-time rendering speeds have been achieved.

2.3.2 Marching cubes

The marching cubes algorithm[6] is used to render isosurfaces from volume data. The algorithm proceeds through the volume by taking the eight diagonal neighbors of each voxel, forming an imaginary cube. The value of each of these neighbors is then checked against the isosurface threshold and assigned a 1 if it is inside the surface and a 0 otherwise. The resulting cube is then checked against a precalculated array containing 256 possible polygon surface configurations (Figure 2.4).

Figure 2.4: The marching cubes algorithm uses 15 unique cubes. All 256 possible variations can then be obtained by rotation and mirroring.

2.3.3 Shear warp

The shear warp algorithm[7] is a relatively new approach, which tries to simplify the projection step by dividing it into a shear and a projection step (Figure 2.5). The algorithm starts by transforming the view transformation such that the camera gets an axis-aligned view of the nearest side of the volume bounding box. The slices of the volume are then sheared to allow the camera to still see all the voxels it would have seen from its original position. The volume is then sampled into a so-called intermediate image. This intermediate image is distorted and must be warped to achieve the correct final image. The reason for moving the camera is that the volume is stored in a run-length-encoded version for each axis-aligned viewing direction. This results in a faster rendering step, but with heavy memory consumption.

Figure 2.5: The shear step of the shear warp algorithm. The slices of the volume are sheared according to the camera transformation.

2.4 Rendering polygonal objects

One common way of representing a 3D object is by using polygons. Any surface can be approximated by dividing it into a number of planar polygons (Figure 2.6). Many renderers, such as the hardware-accelerated APIs OpenGL and Direct3D, are limited to rendering triangular polygons. This makes the rendering process much simpler and very fast. Although some volume rendering techniques, like marching cubes, use polygons to render their data, polygon rendering is not suitable for gaseous fluids, such as smoke and fire.

Figure 2.6: A three-dimensional object represented by planar polygons.

2.5 OpenGL

OpenGL is an environment for developing interactive 2D and 3D graphics applications. It was released in 1992 and is today the industry's most widely used application programming interface (API) for real-time graphics. It can be used on most desktop and workstation platforms and provides high visual quality and performance. OpenGL consists of over 250 different function calls and provides functionality to create, transform and render polygonal objects. The rendering can be customized, and objects can be assigned texture maps, lighting properties, colors, etc. The API is supported and accelerated by all of today's GPUs from NVIDIA and ATI, which leads to high performance on almost every desktop computer.

2.5.1 GLSL

Traditionally, OpenGL has always used a fixed pipeline for rendering. A large number of functions allowed customization to a certain degree and worked well for most cases. Missing features could be added by supplying an extension to the OpenGL Architecture Review Board[9] for validation, and it would perhaps eventually appear in the API.

The inspiration for GLSL came from shading languages such as the very popular RenderMan shading language[8], used extensively in the visual effects industry. A shading language gives low-level control of how the rendering is performed. Instead of using built-in functions to specify lighting, textures, etc., a shading language uses a shader to describe the rendering process. A shader is basically a small program, a piece of text, that calculates the color of the rendered pixel.

Example - a vertex shader that uses a directional light source to calculate a diffuse color (lightly adapted so that it compiles as a vertex shader):

void main()
{
    // Transform the vertex normal into eye space.
    vec3 normal = normalize(gl_NormalMatrix * gl_Normal);

    // Direction towards the directional light source.
    vec3 lightDir = normalize(vec3(gl_LightSource[0].position));

    // Lambertian term: cosine of the angle between normal and light.
    float NdotL = max(dot(normal, lightDir), 0.0);

    vec4 diffuse = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse;

    gl_FrontColor = NdotL * diffuse;   // per-vertex diffuse color
    gl_Position = ftransform();        // standard fixed-function transform
}

Shaders allow OpenGL to be used in ways it was not originally designed for. Using OpenGL for general-purpose computation on the GPU (GPGPU[10]) is an area that has grown rapidly in recent years and has provided a lot of out-of-the-box thinking about what GPU acceleration can be used for.

2.6 OpenEXR

OpenEXR is a high dynamic range image file format developed by the visual effects company Industrial Light and Magic[11]. It has been used in many movies, including Harry Potter and Men in Black 2, and is currently the main image file format in all of ILM's motion picture productions. ILM released OpenEXR to the public in 2003, and it is now supported by most of the standard applications used in the business.

OpenEXR supports 32-bit floating point, 32-bit integer and a 16-bit floating-point format called "half". The format also includes three types of lossless image compression: run-length encoding, ZIP compression and PIZ compression. A great feature of OpenEXR is its support for an arbitrary number of channels. It can store specular, alpha, RGB, normals, etc. in the same file, which makes it very useful for compositing. The OpenEXR C++ API is free to use and includes functionality to read and write OpenEXR images, which makes it simple to support the format in your own software.
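To illustrate how little code is involved, here is a minimal sketch that writes an RGBA image with the OpenEXR C++ API's RGBA convenience interface. It is an illustration of the library, not code from the renderer described later; error handling is omitted.

#include <ImfRgbaFile.h>   // OpenEXR RGBA convenience interface

// Write a width x height block of RGBA pixels to an OpenEXR file.
// A real exporter would add error handling and extra channels
// (e.g. a shadow layer) via the general interface.
void writeExr(const char* fileName, const Imf::Rgba* pixels,
              int width, int height)
{
    Imf::RgbaOutputFile file(fileName, width, height, Imf::WRITE_RGBA);
    file.setFrameBuffer(pixels, 1, width);  // x-stride one pixel, y-stride one row
    file.writePixels(height);               // write all scan lines at once
}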

Chapter 3. Implementation

3.1 Maya plugins

Maya is built around an open system where programmers can change existing features or add new ones using plugins. There are several tools available for developing Maya plugins:

• Maya Embedded Language (MEL) - a powerful scripting language that allows access to the most common operations.
• Python - a scripting language that provides an interface to MEL commands.
• API - a C++ interface that provides better performance than MEL, together with low-level access to Maya's scene graph.
• Maya Python API - a scripting interface based around the functionality of the API.

We needed a way to export the fluid simulation together with information about the scene that contains the fluid container. Using the API turned out to be the best option, because we needed the performance to be able to access a lot of data. C++ was also a familiar environment for both of us.

3.1.1 Fluid data

A command plugin for Maya called exportFluid was developed, which exports the selected fluid container for each frame in the simulation. Each file contains information about the current transformation, size, resolution and content of the fluid container. The command is executed from Maya's command line, together with arguments specifying the range of frames, which attributes should be exported, a file path and a name. Example:

exportFluid -atr density -atr temperature -startFrame 1 -endFrame 100 -path "e:/fluids" fluid01;

Output:

e:/fluids/fluid01_frame00001.bin

e:/fluids/fluid01_frame00002.bin
...
e:/fluids/fluid01_frame00100.bin

The size of the exported files can easily get very large. Exporting the density (8 bytes per grid point) for 100 frames of a 128x128x128 grid would consume 1.6 GB of disc space and would result in very long loading times. This made us think about ways to lower the size of the files, and led to the following decisions:

• An option to specify which attributes to export. A fluid simulation can contain density, velocity, temperature and fuel, but might only need the density and temperature when rendering.

• A kind of run-length encoding (RLE) is used when exporting density, temperature or fuel. It encodes a sequence of zeros as the length of the sequence preceded by a minus sign (see the sketch at the end of this section). Example:

  Raw encoding: 1, 0, 0, 0, 1, 3, 5, 5, 1, 0, 0, 0, 0
  With RLE: 1, -3, 1, 3, 5, 5, 1, -4

  This has a significant effect on the file size, since large sections, especially in the beginning of the simulation, usually don't contain any fluid. A threshold is used to determine when a value should be considered to be zero, since it is rare that a value is exactly zero.

• Exporting the attributes as an array of 32-bit floats. Maya uses 64-bit doubles internally for representation, but this precision wasn't considered necessary in our application.
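The zero-run scheme above can be sketched in a few lines of C++. This is our own illustration of the idea, not the plugin's exact on-disk format; it assumes the exported attribute values are non-negative (as density, temperature and fuel are), so negative numbers are free to act as run markers.

#include <vector>

// Encode a buffer using the zero-run scheme described above: a run of
// values below the threshold is stored as a single negative count.
std::vector<float> rleEncode(const std::vector<float>& data, float threshold)
{
    std::vector<float> out;
    std::size_t i = 0;
    while (i < data.size()) {
        if (data[i] < threshold) {
            std::size_t run = 0;                       // count the "zero" run
            while (i < data.size() && data[i] < threshold) { ++run; ++i; }
            out.push_back(-static_cast<float>(run));   // e.g. three zeros -> -3
        } else {
            out.push_back(data[i++]);                  // copy significant values
        }
    }
    return out;
}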

3.1.2 Scene data

A similar plugin named sceneDataExport was developed to export the scene. The command doesn't have any options about what to export; instead it just takes a range of frames, a path and a scene name as arguments and exports everything in the scene. Example:

sceneDataExport -startFrame 1 -endFrame 100 -path "e:/scenes" scene01;

Output:

e:/scenes/scene01_frame00001.sce
e:/scenes/scene01_frame00002.sce
...
e:/scenes/scene01_frame00100.sce

The plugin is limited to exporting polygonal objects, cameras and point lights. This provides basic functionality, but at the same time prevents the handling of the scene from getting too complicated.

3.1.3 MEL interface

Besides the command-line plugins created with the API, Maya also provides an easy way to create a corresponding user interface through the MEL scripting language (Figure 3.1, Figure 3.2). A script is bound to the export button that generates and executes a command string based on the settings in the interface.

Figure 3.1: The dialog for exporting scene data.

Figure 3.2: The dialog for exporting fluid data.

3.2 Ray casting

Volume ray casting is an algorithm where data is sampled along rays through a volume of data. Since the density outside the volume of data is always zero and therefore does not contribute to the final output, it would be optimal to only sample inside the volume. To achieve this, both points where the ray intersects the surface of the volume's bounding box must be found, i.e. the entry and exit point.

Both the entry and exit points can be found by constructing a vertex-colored bounding box around the volume of data. Each vertex in this bounding box is colored based on its position in the volume's local space, i.e. the minimum vertex vmin = (xmin, ymin, zmin) is colored black, rgb(0,0,0), the vertex vmax = (xmax, ymax, zmax) is colored white, rgb(1,1,1), and so on. This bounding box will then be rendered twice, in the same resolution as the final

output. The first time it is rendered with back face culling activated, which results in an image showing the sides of the box facing the camera. Since this image has the same resolution as the final output, and since one ray will be cast for every pixel in the final image, each pixel in this rendering of the bounding box represents the starting point for a ray. The second time the bounding box is rendered with front face culling activated, resulting in an image representing the exit point for each ray (Figure 3.3).

Figure 3.3: The bounding box used in the ray casting process rendered twice - once with back face culling (left) and once with front face culling (right). Each pixel's color represents a ray's entry point into the volume (left) or exit point (right).

Once the entry and exit points for a ray are known, the length, direction and number of sampling steps of that ray can be calculated and the actual ray casting can begin. During this process the volume data is sampled at even intervals along the ray in front-to-back order. At each step the opacity and the lighting are sampled and accumulated into the pixel's final color:

N = number of samples

color_final = Σ (i = 0 .. N) density_i · factor_i · color_i

factor_0 = 1
factor_i = factor_(i-1) · (1 - density_i)

A feature of sampling front to back is that the ray can be terminated once the factor gets below a certain threshold, since the rest of the data will not significantly affect the final output. The ray will also be terminated if it passes the depth sampled from the hold out depth map, or once it reaches the exit point.

The rendering of the exit points and the actual ray casting process are combined in one GLSL fragment shader, while the entry points are rendered in a separate shader and stored in a texture, as can be seen in Figure 3.4. The exit points could have been rendered separately as well, but then an extra texture would have had to be stored and sent to the GPU.

Figure 3.4: The ray casting is done by two GLSL shaders. The first one calculates the entry point for each ray and stores it in a 2D texture. This texture is then sent, together with additional data, to the second GLSL shader, which calculates the exit point for each ray and performs the actual ray casting.
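Written out as a loop, the accumulation above looks as follows. This CPU-side C++ sketch mirrors what the fragment shader does for one ray; the two sampling functions stand in for the 3D texture lookups and are assumed helpers, and Vec3 is the grid sketch's vector type from section 2.2 extended with the obvious operators.

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator/(float s) const { return {x / s, y / s, z / s}; }
};
struct Color { float r, g, b; };

float sampleDensity(Vec3 p);      // assumed helper: density texture lookup
Color sampleShadedColor(Vec3 p);  // assumed helper: lit color at a point

// Front-to-back compositing along one ray, following the formula above.
Color castRay(Vec3 entry, Vec3 exit, int numSamples)
{
    Vec3 step = (exit - entry) / float(numSamples);
    Vec3 pos = entry;
    Color result = {0, 0, 0};
    float factor = 1.0f;                    // how much light still gets through
    for (int i = 0; i < numSamples; ++i) {
        float d = sampleDensity(pos);
        Color c = sampleShadedColor(pos);
        result.r += d * factor * c.r;
        result.g += d * factor * c.g;
        result.b += d * factor * c.b;
        factor *= 1.0f - d;
        if (factor < 0.01f) break;          // early ray termination
        pos = pos + step;
    }
    return result;
}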

3.2.1 Multiple fluids

If the scene contains more than one fluid, a new bounding box is created, large enough to contain all volume bounding boxes in the scene. This new, larger box is the one rendered in the first two steps of the ray casting algorithm.

3.3 Shadows

Shadows are essential in volume rendering to give an impression of depth in the image (Figure 3.5). We implemented support for fluids shadowed by themselves, by other fluids and by polygon objects. We also added support for shadows cast by fluids onto polygon objects, for use in post-processing.

Figure 3.5: The resulting rendering of a fluid lit by two light sources.

This section gives an introduction to how shadows are calculated from volumes using ray casting. We also present our algorithm for converting polygon

objects to a sampled volume representation.

3.3.1 Fluid shadowing

In ray casting, shadows are calculated by tracing a ray from the point we are currently sampling to every light source in the scene (Figure 3.6). The amount of light is then used as an illumination factor for that point.

Figure 3.6: The amount of incoming light is sampled for every point along a ray to calculate the final color.

This can be a very time-consuming process if the step size used for the ray casting is small. Since we are rendering objects with a fixed resolution that are limited in space, we decided to precompute the amount of light for each point and store it in a shadow texture. This texture has the same resolution as the fluid and is calculated by tracing a ray from every grid point to the scene's light sources. The amount of light is calculated by sampling the density along the ray to find out how much light is blocked.

3.3.2 Sampling density from fluids

Sampling the density of a fluid is straightforward, since we already have it stored in a 3D texture. The actual sampling just consists of transforming from world space to texture space and reading the value from the density texture.
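A sketch of that precomputation, reusing the FluidGrid type from the sketch in section 2.2, is given below. It is our own illustration, with several simplifications: the light is treated as directional (the renderer supports point lights), the march happens in grid space, and nearest-voxel lookups replace proper trilinear sampling.

#include <algorithm>
#include <vector>

// Precompute how much light reaches each grid point: march from every
// voxel toward the light, attenuating by the density passed through.
// (lx, ly, lz) is the per-step offset toward the light, in voxel units.
std::vector<float> computeShadowGrid(const FluidGrid& g,
                                     float lx, float ly, float lz,
                                     float shadowDensity)
{
    std::vector<float> light(g.nx * g.ny * g.nz, 1.0f);
    for (int z = 0; z < g.nz; ++z)
    for (int y = 0; y < g.ny; ++y)
    for (int x = 0; x < g.nx; ++x) {
        float transmittance = 1.0f;
        float px = x + lx, py = y + ly, pz = z + lz;
        // Step toward the light until the grid is left.
        while (px >= 0 && px < g.nx && py >= 0 && py < g.ny &&
               pz >= 0 && pz < g.nz) {
            float d = g.d(int(px), int(py), int(pz));   // nearest-voxel sample
            transmittance *= std::max(0.0f, 1.0f - shadowDensity * d);
            px += lx; py += ly; pz += lz;
        }
        light[g.index(x, y, z)] = transmittance;
    }
    return light;
}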

3.3.3 Sampling density from meshes

Knowing when we are sampling inside a polygonal object is problematic, since we only have a representation of the shell around the object. We would have to test for intersection with every polygon to see if we have penetrated the object. This is not easily done in OpenGL, since there is no "scene", just knowledge about the current primitive being rendered.

To solve this problem, we would like to sample the mesh and represent it using a 3D texture. This way we can use the same approach that we used for fluids. A number of algorithms exist for doing this, but we decided to develop our own algorithm that uses the GPU to increase the performance. The outline of the algorithm is presented below.

Mesh voxelization

The algorithm starts with a mesh and a 3D texture of a user-defined size. The mesh is then rendered once for every slice, every step in z, of the texture. Each pass renders the object using orthographic projection[12], with the far clipping plane located at infinity and the near clipping plane located at a depth corresponding to the current texture slice (Figure 3.7).

Figure 3.7: The camera set up for the mesh voxelization algorithm.

In the first version of the algorithm, the shading of the object was set to render front-facing polygons black and back-facing polygons white, on a black background. This way, everything of the mesh that is cut open by the near clipping plane ends up as white in the 3D texture. Rendering the middle z-slice of a 128x128x128 voxelization of a torus would generate something similar to Figure 3.8 (colors changed for illustration purposes).

Figure 3.8: A torus intersected by the near clipping plane gives a cross-section of the object.

This worked well until we tried it with objects that were self-intersecting. In that case, a front-facing polygon rendered inside of the object would render black and close the volume. This wasn't a limitation that we wanted to enforce on the artists, because of the time-consuming task of fixing the models. To solve this problem, we used the stencil buffer to get more information at each pixel than we could get from our first binary white/black approach. We configured the stencil buffer to increase when rendering back-facing polygons and decrease when rendering front-facing polygons. This allows the algorithm to check if the stencil value is greater than 0 and in that case draw the pixel as white.
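In C++/OpenGL, the stencil setup for one slice might look like the sketch below. This is our reconstruction of the idea, not the thesis code; drawMesh() and drawFullScreenWhiteQuad() are assumed helpers, and the GL 2.0 entry point glStencilOpSeparate is assumed to be available (e.g. through an extension loader).

#include <GL/gl.h>

void drawMesh();                 // assumed helper: renders the mesh
void drawFullScreenWhiteQuad();  // assumed helper: covers the viewport

// Voxelize one z-slice: back faces increment the stencil value, front
// faces decrement it, and a pixel is inside the mesh where the count
// ends up greater than zero - even for self-intersecting models.
void voxelizeSlice()
{
    glClearColor(0, 0, 0, 0);
    glClearStencil(0);
    glClear(GL_COLOR_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    glEnable(GL_STENCIL_TEST);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);                 // both face types must rasterize

    // Pass 1: count surface crossings behind the near plane.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, ~0u);
    glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_KEEP, GL_INCR_WRAP);
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_KEEP, GL_DECR_WRAP);
    drawMesh();

    // Pass 2: draw white where the stencil value is greater than zero.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_LESS, 0, ~0u);          // passes where 0 < stencil
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawFullScreenWhiteQuad();

    glDisable(GL_STENCIL_TEST);
}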

3.3.4 Mesh shadowing

Shadows received by meshes are calculated by tracing a ray from the point being rendered to all light sources in the scene. The process is divided into two steps. The first step renders all geometry using a shader that sets the color of each pixel to the position of the point being rendered (Figure 3.9). The result from the first step is then used to determine the shadowing of each pixel using ray casting.

Figure 3.9: The mesh shadowing shader is given a texture where the position of each pixel in space is mapped from (x, y, z) to RGB.

3.4 Hold out objects

Hold out objects are objects that block the camera's view of the fluid or parts of the fluid (Figure 3.10). If the hold out object is positioned in front of the fluid, the part of the fluid that gets blocked from view can easily be masked out in compositing. But if the object, or parts of the object, is inside the fluid, that is not possible. Therefore the hold out objects had to be included in the rendering process.

Figure 3.10: A hold out object is used to block out parts of the volume.

We first attempted to implement hold out objects by voxelizing the polygonal objects and then including them in the ray casting algorithm (see 3.3.3, Sampling density from meshes, for a description of the voxelization algorithm). Although this worked, it did not yield an appealing quality unless unreasonably high voxel resolutions were used.

Instead, we decided to implement this by rendering a depth texture from the camera and sending this texture to the ray casting shader. The ray casting

algorithm then samples the depth value and exits the ray once a pixel's depth value is reached. Since OpenGL's built-in depth map isn't linear, it could not be used as the depth texture. To solve this, a simple GLSL shader was implemented that transforms the objects into eye coordinates and renders the depth value, normalized between the near and far clipping planes.

3.5 Motion blur

Motion blur is the streaky, blurry effect that is often seen in photos of fast-moving objects (Figure 3.11). The effect occurs because a photo does not represent an instantaneous point in time, but rather a period of time, depending on the camera's exposure settings. Motion blur is a very important part of computer animation, because without it movements tend to look staggered. This is especially true when dealing with compositions of animation and real video footage.

Figure 3.11: Motion blur appearing in a photo.

In computer animation, motion blur can be approximated by applying it as a 2D post-effect[13], since this is faster to calculate. We decided to use true 3D motion blur instead, since this was what the artists at Filmgate mainly used in Maya. Our implementation of motion blur is controlled by three parameters: shutter angle, offset and number of samples (Figure 3.12). The shutter angle ranges between 0 and 360 degrees and determines the length of the exposure time, which affects the amount of blur. The offset parameter ranges from 0.0 to 1.0 frames and offsets the exposure period. This is used to control whether the motion blur should be calculated using the next or the previous frame, or parts of both. The number of samples determines how many interpolation steps the exposure time is divided into. An example of a motion-blurred hold out object can be seen in Figure 3.13.

The motion blur algorithm will, for each frame in the render sequence, step through the exposure period with a step size equal to the exposure time divided by the number of samples. For each step the scene is interpolated and rendered. The output is then weighted by the number of samples and added to the final image.

Figure 3.12: The offset and the shutter angle control how adjacent frames are used to create the blur.

Motion blur pseudocode:

for each frame {
    finalOutput = 0;
    for each sample {
        interpolateScene();
        tempOutput = render();
        finalOutput += tempOutput / numberOfSamples;
    }
    writeToImageFile(finalOutput);
}

Figure 3.13: A hold out object without motion blur compared to the same object rendered using 20 motion blur samples (shutter angle 270, offset 0.5).

3.6 Rendering

Once the user has configured all the rendering settings and clicked render, the rendering loop starts. It iterates through the time range specified in the render settings dialog. While rendering, three frames of data - the current, the next and the previous - are kept in memory in order to make motion blur calculations possible. For each frame in the sequence, the fluids are lit and rendered to an off-screen OpenGL framebuffer. The data in the framebuffer is then written to an OpenEXR file. The next step calculates all shadows cast by fluids, in a similar manner, and stores them in an additional layer of the file. If motion blur is enabled, each frame is interpolated and rendered several times and then blended together before being written to file.

3.7 User interface

The user interface of our application was designed to be fairly easy to use, but also to have a somewhat modern and appealing look. Its layout and individual components are similar to Maya's. This will help users familiar with Maya to use the program. The development of the user interface has been done using Windows Forms[14], which is an API included as a part of Microsoft's .NET framework. It works as a wrapper around the existing Windows API and provides access to most native Windows interface elements. The Microsoft Visual Studio Express editions come with a designer interface for Windows Forms.

3.7.1 Custom controls

Throughout the project quite a few custom GUI controls had to be built, most of them because we chose not to use the standard Windows look. We developed new styles for existing controls such as group boxes, text fields, menus, etc. We also developed some new components, since we needed to add functionality to the GUI. These components were a 1D graph, an RGB ramp, a color picker, a timeline and a flow layout panel. All custom controls were developed in C# using Microsoft Visual Studio C# 2008 Express.

1D ramp

The 1D graph control maps floating point values along a graph, which can be controlled by the user (Figure 3.14). With mouse clicks the user can add or remove control points and drag them around. The graph is interpolated linearly between these control points. The control was designed to be similar to Autodesk Maya's corresponding graph control.

Figure 3.14: Ramp used for one-dimensional input.

RGB ramp

The RGB ramp control maps floating point values to a color (Figure 3.15). The user can add, remove or drag color control points along the ramp. The color of the control points can be changed using the color picker (see next section). The color is interpolated linearly between the control points. As for the 1D graph, this control was designed to resemble the corresponding ramp in Autodesk Maya.

Figure 3.15: Ramp used for color input.
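Evaluating such a ramp is a piecewise-linear lookup between sorted control points. The sketch below shows the idea in C++ (the controls themselves were written in C#); it is our own illustration, not the application's code.

#include <vector>

// One ramp control point: its position along the ramp and its value.
struct RampPoint { float pos; float value; };

// Evaluate a ramp at x by interpolating linearly between the two
// surrounding control points; values outside the range are clamped.
// The points are assumed to be sorted by pos.
float evalRamp(const std::vector<RampPoint>& pts, float x)
{
    if (pts.empty()) return 0.0f;
    if (x <= pts.front().pos) return pts.front().value;
    if (x >= pts.back().pos)  return pts.back().value;
    for (std::size_t i = 1; i < pts.size(); ++i) {
        if (x <= pts[i].pos) {
            float t = (x - pts[i - 1].pos) / (pts[i].pos - pts[i - 1].pos);
            return pts[i - 1].value + t * (pts[i].value - pts[i - 1].value);
        }
    }
    return pts.back().value;  // unreachable for sorted input
}

For the RGB ramp, the same interpolation is simply applied per color channel.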

Color picker

The color picker is displayed when clicking a control point in the RGB ramp (Figure 3.16). A color can be chosen either by typing in RGB or HSL values or by using the hue/saturation circle together with the lightness ramp. In the circle, the saturation goes from 0.0 at the center to 1.0 at the rim. When changing the lightness, the colors in the circle get brighter or darker, except at the rim, where the colors always stay at medium lightness to better show the user which hue corresponds to which angle within the circle. It is also possible to affect only the saturation, by holding down the right mouse button and dragging the marker. To affect only the hue, drag the arrow just outside the rim of the circle.

Figure 3.16: The color picker and the different input methods: 1. RGB and HSL values, 2. Hue/saturation circle, 3. Lightness ramp.

3.7.2 Main layout

Figure 3.17 shows the main layout of the application. The layout consists of:

• Viewport - this is where all scene data, fluids, previews, shadows, etc. are shown.
• Timeline - an interactive control that shows all imported fluids and their respective time ranges.
• Menu - contains options like save/open projects, open attribute editors and viewport settings.
• Controls - control the range of the timeline and the playback of the scene.

• Console - displays messages, warnings and errors.

Any additional content, like the attribute editors, is displayed in pop-up windows.

Figure 3.17: The main layout of the application.

3.7.3 File-menu options

This menu contains all the commands needed to import files to the project, as well as options to save and load an existing project.

Import fluids

Fluids are added using the command File / Import fluid. This brings up a dialog where a fluid data file can be imported. Any file of a simulation sequence can be chosen, and all files that are part of the same simulation will be imported.

Use scene

The command File / Use Scene brings up the scene selection dialog. The scene sequence is selected in the same way as fluids; any file that is part of the sequence can be chosen and the entire scene will be imported. Only one set of scene data can be active at a time. Loading a second scene will replace the first.

Save / open project

The current state of the program, including scene files, fluid data, shaders and workspace settings, can be saved as a project file. The file contains references to the imported scene and fluids, instead of storing the actual data. The format follows the XML standard and can be edited in a text editor.

3.7.4 The viewport

The viewport offers a perspective view of the currently imported data in the scene. It can be set to view the scene from an interactive user camera or from any camera imported from Maya. While in user camera mode, the view can be

rotated, zoomed and panned. Usage:

• Rotate: ALT + left mouse button
• Pan: ALT + middle mouse button
• Zoom: mouse wheel

The active camera can be set from the main menu, under Viewport / Camera.

Change view

The viewport can be set to three different view modes: orientation view, preview and shadow view (Figure 3.18). These options are available from the main menu, under Viewport / View mode.

The orientation view displays all imported objects. Polygonal objects are rendered, while only the bounding boxes of the fluids are shown. In this mode all objects can be selected by clicking them in the viewport.

In preview mode, fluids are rendered as in the final rendering, with shading, hold out objects, etc., although the sampling rate of the ray casting algorithm might be set lower than in the final rendering process. Motion blur is not shown in the preview. While in preview mode, all frames visited are cached to allow for faster playback.

The shadow view mode displays a preview of all shadows cast by fluids on polygonal objects. This is what will be rendered to the shadow layer of the final output.

Figure 3.18: The same scene viewed in orientation, preview and shadow mode.

Set background

A feature within the viewport is the possibility to set its background in preview mode. This can be done from the main menu, under Viewport / Prev. background. There are three available options (Figure 3.19):

• Checker - sets the background to a checkerboard pattern in two shades of gray.
• Set background color - displays a color picker from which the background color can be chosen.

• Set background image - lets the user choose a Targa image sequence as background. This feature was implemented to make it possible to preview the fluid together with the rest of the animation before rendering.

Figure 3.19: Example of the three available background styles.

3.7.5 The timeline

An important part of the application is the timeline (Figure 3.20). It is a control that allows the user to set the current time frame by dragging the handle. The timeline also displays all currently imported fluids as individual layers. These layers show for which time interval the imported fluids exist. Each layer can be selected from the timeline control, which results in its corresponding fluid being selected in the viewport. When a layer is selected in the timeline, the user can offset it in time by dragging it with the mouse. While dragging a layer, its current starting frame is shown next to the mouse cursor to help the user position it precisely. Another piece of functionality within the timeline control is the possibility to lock the currently selected fluid at a certain frame. When a fluid is locked, it will not change from frame to frame but rather stay as it was at the frame where it was locked.

Figure 3.20: The timeline shows the currently imported fluids and their location in time.

3.7.6 Object selection

In order to control the attributes of an object, it must first be selected. This can be done in different ways:

• Selection component - the selection component can be accessed from the menu. It contains a list of all selectable objects currently in the scene (Figure 3.21).

• Picking - objects can be selected by clicking on them in the viewport. This can only be done when the viewport is in orientation mode.

  The picking algorithm works by transforming the objects' 3D world coordinates into 2D screen space. For polygonal meshes, the cursor's position is then checked against each 2D triangle. If the cursor resides inside any of the triangles, the object gets selected. For fluids, each line of their respective bounding boxes is checked against the mouse position. If the cursor is close enough to any of the lines, the fluid gets selected. If the cursor is pointing at more than one object, the one closest to the camera is selected.

• The timeline - a simple way to select fluids is by clicking on their respective layers in the timeline.

Figure 3.21: The selection component.

3.7.7 Fluid attribute editor

The fluid attribute editor contains settings for how the fluid's density, temperature, fuel and velocity affect the rendering. It was designed to look and behave similarly to Maya's shading settings (Figure 3.22) and is divided into four areas:

• Opacity - controls the thickness of the fluid.
• Color - controls how an attribute will affect the color of the fluid.
• Incandescence - allows for assigning a color that is invariant to the amount of light that illuminates the fluid. This is used to give the impression that the fluid emits light (fire, glow, etc.).
• Shadow density - a factor that describes how good the fluid is at absorbing light. A high value will result in a sharper-looking fluid with lots of shadows.

The opacity, color and incandescence all have the same set of options that, together with the ramp, control the value used for rendering the fluid. The example below uses the opacity ramp, but the approach is the same for color and incandescence as well (Figure 3.23). The value that goes into the ramp is chosen from the input menu and can be either density, temperature, fuel or velocity.

The value is first multiplied by the input scale, and the input bias is then subtracted. This yields a value between 0.0 and 1.0 that is mapped to the x-axis of the ramp. The corresponding ramp value is then multiplied by the output scale to get the resulting value (a sketch of this mapping is given at the end of this section).

Figure 3.22: The fluid attribute dialog next to Maya's fluid shading settings.

Figure 3.23: A ramp is additionally controlled by specifying input scale, input bias and output scale.

Exporting settings

A fluid's render settings can be quite complex and time consuming to reproduce if the same settings are to be used for several fluids. We therefore implemented the option to save all settings as a shader XML file using File / Save shader. The shader can then be loaded from any other fluid's attribute editor using File / Open shader.
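Put together with the ramp evaluation sketched in section 3.7.1, the mapping above amounts to a few lines. As before, this is our C++ illustration with invented names, not the application's code.

#include <algorithm>
#include <vector>

// Map a raw attribute value (density, temperature, fuel or velocity)
// to a shading value: input scale, input bias, ramp lookup, output
// scale. Reuses RampPoint and evalRamp from the earlier sketch.
float shadeValue(float input, float inputScale, float inputBias,
                 float outputScale, const std::vector<RampPoint>& ramp)
{
    float x = input * inputScale - inputBias;  // map the input into [0, 1]
    x = std::min(1.0f, std::max(0.0f, x));     // clamp for out-of-range input
    return evalRamp(ramp, x) * outputScale;
}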

3.7.8 Mesh attribute editor

The mesh attribute editor is used to control the polygonal mesh objects in the scene (Figure 3.24). It can be opened from the main menu under Edit / Attribute editor. From the mesh attribute editor the following settings are available:

• Cast shadows - controls whether the currently selected object should cast shadows.
• X-res, y-res, z-res - the resolution of the voxelized object.
• Density - the density of the voxelized object. Determines how much light the object will let through.
• Hold-out - controls whether the currently selected object should be used as a hold out object.

Figure 3.24: The mesh attribute editor.

3.7.9 Render settings

The render settings dialog can be accessed from the main menu under Render / Render settings (Figure 3.25). The following settings are available:

• Range:
  Start frame - the first frame to be rendered.
  End frame - the last frame to be rendered.

• Quality:
  Raycasting step size - the length of each step in the ray casting algorithm. Shorter steps equal better quality.
  Shadow step size - the length of each step in the lighting process. Shorter steps equal better quality.

• Output:
  Width - the width of the output image.
  Height - the height of the output image.
  Format - output image format.
  File prefix - the name of the rendered files. (Files will be named: prefix.frameNumber.format)
  Directory - the directory to which the rendered files will be written.

• Motion blur:
  Motion blur - enables motion blur.
  Shutter angle - affects the amount of blur in the rendered images. Larger values equal more blur. The valid range is ]0, 360].
  Offset - determines how much the previous and the next frame will affect the blur. A value of 0.0 will result in an interpolation between the previous and the current frame, while a value of 1.0 will use the current and the next frame. Any value between 0.0 and 1.0 will result in a weighted interpolation between all three frames.
  Samples - the number of interpolation steps for each frame. More steps equal better quality.

Figure 3.25: The render settings dialog.

3.7.10 Caching

The process of viewing a simulation can be slow if the size of the scene or the size of the fluid data is large. The time it takes to render one frame can easily make the update frequency lower than the frame rate the animation is supposed to run at, for example 25 frames per second. This makes it hard to get an impression of how the simulation is actually going to look when rendered. To speed up the playback, we implemented a caching system for the display in the viewport.

The caching works by saving an image file for each frame, with a special MD5[15]-hashed filename. MD5 is an algorithm that takes a string as input and generates a 128-bit hash value, typically represented as a sequence of

32 hexadecimal digits. Even small changes in the string will, with a very high probability, result in a totally new hash value. Example:

MD5("The quick brown fox jumps over the lazy dog") = 9e107d9d372bb6826bd81d3542a419d6
MD5("The quick brown fox jumps over the lazy dog.") = e4d909c290d0fb1ca068ffaddf22cbd0

The string that is used to generate the hash for the state of the program is a concatenation of several factors:

• All numerical and boolean settings found in the fluid and mesh attribute editors.
• The current frame number.
• A cache version number.

The renderer checks, before rendering anything, whether there exists a file with the same name as the MD5 string generated from the current state, and if so, shows this image instead of rendering anything. The reason we also add the fluid and mesh attributes to the string is that it allows us to have several versions of the simulation cached at the same time. A fluid simulation can, for example, be cached for both a shadow density of 0.5 and 0.2, and then be switched between those values without the need to redo the caching. The cache version is a number that is increased every time the cache should be completely truncated, for example when changing the view in the viewport or updating a ramp in the attribute editor.

The actual image files are saved in a simple binary format that contains the content of the viewport, compressed using RLE encoding. The files are stored in a cache directory under the program location and kept there until the program is restarted or the disc cache limit is reached (2 GB by default). A list of cache files and their corresponding sizes is kept within the program and is used to delete old files if new ones can't be created without violating the limit.
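Assembling such a cache filename can be sketched as below. The helper md5Hex() stands in for any MD5 implementation returning the 32 hex digits, and the file extension is made up for the example; both are assumptions, not the application's actual code.

#include <sstream>
#include <string>

std::string md5Hex(const std::string& s);  // assumed helper: 32-digit MD5 hash

// Build a cache filename from everything that influences the displayed
// image: the attribute-editor state, the frame number and the cache
// version described above.
std::string cacheFileName(const std::string& attributeSettings,
                          int frameNumber, int cacheVersion)
{
    std::ostringstream key;
    key << attributeSettings << '|' << frameNumber << '|' << cacheVersion;
    return md5Hex(key.str()) + ".img";
}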

3.7.11 The console

A console was added to the program to provide a convenient way to send information from the program to the user. The console can be viewed when desired and can otherwise be left in the background. Different kinds of information are shown in the console (Figure 3.26):

• Warnings and errors - for example, trying to render a frame that doesn't have a corresponding scene.
• General information - for example, the resolution and attributes of an imported fluid.
• Progress info - for example, how many frames of a sequence have been rendered.

Figure 3.26: Screenshot showing different console messages.

The console is a stand-alone application that is executed automatically on startup. The application is very simple; the only thing it does is listen for either an incoming text message or a shutdown message. Its user interface is designed using the same custom controls as the main interface, to keep a consistent look.


Chapter 4. Results

4.1 Performance

It is hard to say something about the performance in general. The time it takes to read data from disc can easily dominate any performance test. The playback speed for a large scene, without caching, is usually below one frame per second. The test below only measures the time needed for the actual rendering; all data was loaded, both in Maya and in our program, before the measuring started. See the discussion chapter for a more thorough discussion of the performance problems related to reading data from disc.

Performance test

In order to test the performance of our application in comparison to Maya, a basic test scene was constructed. The scene contains a fluid container with resolution 120x120x120, a point light source and a camera. The scene was rendered, from the same view, using Maya with the Mental Ray[16] renderer and with our application. The rendering resolution was 800x600, and the system specifications can be found in appendix A. The time needed to render the frame was 2 m 31 s with Maya and around 0.5 s with our program, roughly a factor of 300. The resulting images can be seen in Figure 4.1.

Figure 4.1: Fluid rendered with our program (left) and with Maya (right).

4.2 Limitations

We knew when we started the project that we were not going to be able to complete, within the assigned time period, all the project components we initially intended to. Our approach was therefore to get a wide set of features working and not focus too much on every detail. In this section we point out some of the limitations that we noticed while developing the software.

4.2.1 Differences from Maya

One of our primary goals was to be able to render fluids in the same way as Maya. There were some aspects in this area that we missed out on:

• Higher-order filtering - the highest degree of texture filtering supported by OpenGL is linear filtering. Higher-order filtering has to be performed manually in the shader and would be significantly slower. Renderings done in Maya show a noticeably smoother result, especially for low-resolution grids, than we were able to produce.

• Texturing - our application does not support mapping of textures to an attribute, such as opacity or color. This can be used to give the impression of a higher-resolution fluid without the need for a more detailed grid. This was partly left out on purpose, since it is in most cases hard for the user to make it look good for non-static fluids, it is much slower to render, and implementing a complete texturing system would be very time consuming. Having said that, there are still times when a correctly tweaked 3D texture would provide more detailed and appealing results.

• Better interpolation between frames - since we export the scene at a fixed frame rate, we had to use interpolation to get the subframes used for the motion blur. This interpolation is always performed linearly and doesn't take into account the kind of interpolation used in Maya. This could have been avoided if we had used the FBX format. We originally intended to use this format, but the FBX API turned out to be incompatible with the managed C++ code required by Windows Forms, which we used to design the GUI.

• Anti-aliasing - there will often be hard edges in the rendered image when using hold-out objects, since we only take one sample per pixel without any anti-aliasing technique. Our method of using a depth map for the hold-out objects turned out to be hard to combine with anti-aliasing. This results in hard edges around mesh objects.

4.2.2 Technical issues

Most of the technical issues came from our approach of using the GPU to accelerate the rendering. Some are limitations of today's hardware, and some are problems with the way the GPU and GLSL work:

• Texture units - all current video cards have a limit on how many textures (texture units) can be used at the same time. Our program typically uses 2 + 3 * (number of fluids) texture units. The GeForce 8800 that we used for testing is able to use up to 64 textures; this limits the number

4.2.2 Technical issues

Most of the technical issues came from our approach of using the GPU to accelerate the rendering. Some are limitations of today's hardware and some are problems with the way the GPU and GLSL work:

• Texture units - all current video cards have a limit on how many textures (texture units) can be used at the same time. Our program typically uses 2 + 3n texture units for a scene with n fluids. The Geforce 8800 that we used for testing can use up to 64 textures, which limits the number of fluids used simultaneously in a scene to (64 - 2) / 3 = 20. This will probably not be a problem on future generations of video cards.

• Color space - OpenGL supports floating-point frame buffers, but unfortunately only for values in the range [0, 1]. This makes it harder to render images with a high dynamic range, since all values will be clamped to 1.0. We tried to work around this by dividing the values stored in the frame buffer by the sum of the intensities of all lights in the scene and then, before writing an OpenEXR image, multiplying by the same value; a sketch of the rescaling step appears after this list. This works to some extent, but very bright areas together with incandescence can still cause values exceeding 1. This could perhaps have been solved by dividing all values by some large constant instead of the sum of the intensities, but this was not investigated further.

• GLSL - the GLSL shading language has improved a lot recently but is, in our experience, still fairly immature and contains several bugs. There are commands that do not follow the specification and forced us to design our own work-arounds. This unfortunately made the program crash on some machines when the instruction count of a shader got too high, but it does not seem to be a problem with newer hardware like the Geforce 8800.
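A CPU-side sketch of the rescaling step described in the color space bullet could look as follows. It assumes the shader has already divided all radiance by the summed light intensity, and that the pixels have been read back from the frame buffer with glReadPixels (GL_RGBA, GL_FLOAT) before being written to OpenEXR; the Rgba struct and function name are illustrative only.

    #include <vector>

    // One pixel as read back from the GL frame buffer; every channel
    // lies in [0, 1] because the shader divided radiance by lightSum.
    struct Rgba { float r, g, b, a; };

    // Undo the shader-side division so the OpenEXR file regains the
    // original dynamic range. lightSum is the summed intensity of all
    // lights in the scene, the same constant the shader divided by.
    void expandRange(std::vector<Rgba>& pixels, float lightSum)
    {
        for (size_t i = 0; i < pixels.size(); ++i) {
            pixels[i].r *= lightSum;      // colour channels were compressed...
            pixels[i].g *= lightSum;
            pixels[i].b *= lightSum;
            // ...but alpha was never divided, so it is left untouched.
        }
    }

Note that this only recovers values the shader managed to keep below 1.0; anything brighter than lightSum, such as strong incandescence, is already clamped by the frame buffer and cannot be restored afterwards, which is exactly the residual problem mentioned above.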
4.2.3 Design

The program was intended to fit into a rendering workflow without adding too much complexity compared to doing the rendering in Maya. The way we designed the program turned out to cause some problems in this area that we were not able to foresee when we started:

• Many fluids - we designed the program and the Maya plugins to export every fluid container separately. This worked fine for smaller scenes but soon became problematic for scenes with many fluid containers, since it forces the user to go back to the File / Import fluid command many times.

• Fluid data and transformation together - the plugin used to export fluids exports a fluid's simulation together with information about its current translation, rotation, scaling and so on. This takes up a lot of disc space and is not very flexible in cases where only the transformation of the fluid container changes, or where the same simulation is to be used in several containers. A better design would be to export information about the transformations of all fluid containers together with the scene and keep the simulation data separate.

Merge fluid

A standalone tool called FluidMerge (Figure 4.2) was developed to circumvent the data/transformation issue. It took the simulation data from one fluid and the transformation data from another, and created a new combined fluid. This tool was not intended to be part of the workflow, but rather served as a last-minute solution that let us test the program on scenes with many fluid containers.
Figure 4.2: The FluidMerge interface.

4.2.4 Performance

The performance of the actual rendering is overall very good, but reading the data from disc can sometimes drastically increase the time needed to prepare a frame for rendering. This turned out to be a much bigger bottleneck than we expected and is in almost all cases linked to our way of handling the scene: the entire scene is read from disc and processed every frame (multiple scene files are loaded and interpolated when motion blur is used), regardless of whether an object has changed since the previous frame.
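The re-processing part of this cost could be reduced with per-object change detection. The sketch below is a hypothetical improvement, not something the shipped program does: it hashes each object's raw bytes (FNV-1a here, standing in for a stronger digest such as MD5 [15]) and skips re-processing and re-uploading objects whose data is identical to the previous frame. It would not by itself remove the disc reads; avoiding those requires keeping the scene in memory, as discussed in chapter 5.

    #include <cstdint>
    #include <map>
    #include <string>
    #include <vector>

    // FNV-1a hash of an object's raw bytes; a cheap stand-in for a
    // stronger digest such as MD5 [15].
    static std::uint64_t fnv1a(const std::vector<char>& bytes)
    {
        std::uint64_t h = 1469598103934665603ULL;
        for (size_t i = 0; i < bytes.size(); ++i) {
            h ^= static_cast<unsigned char>(bytes[i]);
            h *= 1099511628211ULL;
        }
        return h;
    }

    // Returns true if this object's data changed since the previous
    // frame and therefore needs to be re-processed and re-uploaded.
    bool needsUpdate(const std::string& objectName,
                     const std::vector<char>& rawData,
                     std::map<std::string, std::uint64_t>& lastSeen)
    {
        const std::uint64_t h = fnv1a(rawData);
        std::map<std::string, std::uint64_t>::iterator it = lastSeen.find(objectName);
        if (it != lastSeen.end() && it->second == h)
            return false;                 // identical to last frame: skip it
        lastSeen[objectName] = h;
        return true;
    }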

Chapter 5. Discussion

This chapter presents our thoughts on how the project turned out, whether we reached our goals and, finally, some suggestions for future development.

5.1 Conclusion

In this project we have implemented a ray caster for volumetric fluid data exported from Autodesk Maya. The ray caster uses the GPU's parallel structure to quickly produce high-quality renderings of the volumes. When it comes to the ray casting algorithm, we have met most of our goals. Our algorithm can render volumes at high resolutions with self-shadowing and shadows from other objects, which is discussed in chapter 3.3 Shadows. The algorithm also supports hold-out objects, as discussed in chapter 3.4, as well as motion blur (chapter 3.5), and the result can easily be saved as an HDR image, ready to be composited into an existing animation. As discussed in chapter 3.1 Maya plugins, a whole scene, including cameras with their respective focal lengths, aspect ratios and so on, can be imported from Maya, which makes it easy to align an animation rendered in our program with one rendered in Maya.

We have developed a complete graphical user interface to control how the fluids are rendered. The user interface and its features are described in chapter 3.7 User interface. We also developed two small user interfaces for exporting data from Maya. These are discussed in chapter 3.1 Maya plugins.

5.2 Future work

The area where we have not quite reached our goals is performance. The program still offers an appealing real-time environment for tweaking shading parameters and viewing a fluid from different angles, but it is not as fast as we wanted it to be when rendering an entire sequence. This is largely because of the time it takes to read the data from disc. The program consisted of more than 12 000 lines of source code when we left the project and would serve as a good foundation for future development.

During the final development and test stages of our project we came up with several ideas on how to improve performance and work-flow in our application. For example, the way the scene data is handled is far from perfect.

When using large scenes, reading each frame of scene data from disc takes much longer than the actual rendering. By using the FBX format it would be possible to keep the scene data in memory at all times and thereby drastically reduce the time spent reading. But, as mentioned earlier, the FBX API is not compatible with the Windows Forms API that we used to develop the GUI. There might be a solution to this, though: creating a DLL-based interface to access the FBX API instead of including it directly in the project. Unfortunately we did not have enough time to test this.

Another idea is to redesign the way fluids are exported and keep the transformation and simulation data separate; one possible data layout is sketched below. It would then be possible to create some kind of library of effects that could be imported and placed into any fluid container in the scene. Another obvious way to improve the performance is to optimize and continue the development of the ray casting algorithm, for example by implementing higher-order filtering. There are also some features left unimplemented from the wish list that we received from Filmgate, for example support for depth of field, reflective objects and a command-line rendering interface. One could also implement support for programs other than Maya: any program that uses a grid to simulate fluids and offers a low-level API can be supported by writing plugins to export its data.
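To make the separate-export idea concrete, the following structures sketch one possible layout; all names are illustrative and this is not the format of the existing exporter. The heavy simulation caches would be loaded once into a shared library, while the per-frame scene file only carries lightweight instance records, so re-reading the scene each frame stays cheap and the same simulation can be reused in several containers.

    #include <map>
    #include <string>
    #include <vector>

    // One simulation cache, loaded once and shared; holds the
    // per-frame voxel data for a single simulated effect.
    struct SimulationCache {
        int nx, ny, nz;
        std::vector<std::vector<float> > densityPerFrame;  // [frame][voxel]
    };

    // Per-frame, per-container data: just a transform and the name
    // of the simulation it plays back.
    struct FluidInstance {
        std::string simulationName;   // key into the shared cache library
        float       transform[16];    // column-major 4x4 world matrix
    };

    // The "library of effects": simulations stay resident in memory,
    // while the lightweight scene file only updates instance records.
    struct SceneFrame {
        std::map<std::string, SimulationCache>* library;   // shared
        std::vector<FluidInstance> fluids;                 // cheap to re-read
    };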

Bibliography

[1] Filmgate. http://www.filmgate.se/, acc. 2008-11-12
[2] Keenan Crane, GPU 3D Fluid Solver. http://www.cs.caltech.edu/~keenan/project_fluid.html, acc. 2008-11-12
[3] Jos Stam, Stable Fluids. http://www.dgp.toronto.edu/people/stam/reality/Research/pdf/ns.pdf, acc. 2009-01-03
[4] Eric W. Weisstein, Navier-Stokes Equations. http://scienceworld.wolfram.com/physics/Navier-StokesEquations.html, acc. 2008-11-12
[5] NVIDIA, World Leader in Visual Computing Technologies. http://www.nvidia.com/page/home.html, acc. 2008-11-12
[6] Paul Bourke, Polygonising a scalar field. http://local.wasp.uwa.edu.au/~pbourke/geometry/polygonise/, acc. 2008-11-14
[7] Sebastian Zambal, Shear Warp Implementation. http://www.cg.tuwien.ac.at/courses/projekte/vis/finished/SZambal/basic.html, acc. 2008-11-14
[8] Pixar, RenderMan. https://renderman.pixar.com/, acc. 2008-11-12
[9] SGI, About the OpenGL Architecture Review Board Working Group. http://www.opengl.org/about/arb/, acc. 2008-11-12
[10] GPGPU, History. http://www.gpgpu.org/data/history.shtml, acc. 2008-11-12
[11] Lucasfilm Ltd. Companies, Industrial Light & Magic. http://www.ilm.com/, acc. 2008-11-12
[12] Wikipedia, Orthographic projection. http://en.wikipedia.org/wiki/Orthographic_projection, acc. 2008-11-12

[13] Illuminate Labs, 2D Motion Blur. http://www.illuminatelabs.com/support/tutorial-folder/tutorials-turtle-3/2d-motion-blur/, acc. 2008-11-12
[14] Microsoft Corporation, The Official Microsoft WPF and Windows Forms Site. http://windowsclient.net/, acc. 2008-11-12
[15] FAQS.ORG, The MD5 Message-Digest Algorithm. http://www.faqs.org/rfcs/rfc1321, acc. 2008-11-12
[16] Autodesk, Mental ray. http://usa.autodesk.com/adsk/servlet/index?siteID=123112&id=6837573, acc. 2008-11-14

Appendix A. The test system

A.1 Specifications

Intel Core 2 Duo E6850, 3.00 GHz
2 GB 800 MHz DDR2 RAM
GeForce 8800 GTS, 320 MB
7200 RPM HDD
Windows XP Professional SP2
