Real-time rendering of subsurface scattering and skin
Department of Science and Technology (Institutionen för teknik och naturvetenskap)

Linköping University (Linköpings universitet)

SE-601 74 Norrköping, Sweden

LiU-ITN-TEK-A--17/019--SE

Realtidsrendering av hud (Real-time rendering of skin)

Daniel Holst

2017-06-09

Master's thesis carried out in Media Technology at the Institute of Technology, Linköping University

Daniel Holst

Supervisor: Apostolia Tsirikoglou
Examiner: Jonas Unger


Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances. The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/

LINKÖPING UNIVERSITY

Real-time rendering of skin

by

Daniel Holst


Abstract

by Daniel Holst

In this thesis the topic of subsurface scattering is examined. A couple of methods to accomplish this in real-time applications are explained and implemented in the game engine Stingray. The main goal is to achieve realistic skin for human characters and to let an artist obtain the desired look in the editor. The implementation is divided into two parts: one takes care of the light from light sources in front of the object in order to create a smooth look, while the other handles light sources behind the object, which contribute through light that passes through it. These two contributions then have to be combined correctly to create a physically plausible result. The last step is to optimize the solution so that it works well in a real-time game engine.


Acknowledgements

I would like to thank everyone at Autodesk for a great time and much support during my time there, and especially my supervisor Jim, who has helped me and been a great support throughout the entire thesis work.


Contents

Abstract
Acknowledgements
List of Figures

1 Introduction
  1.1 Human skin
  1.2 Real-time vs offline rendering
  1.3 Stingray
  1.4 Used content
  1.5 Related work

2 Background
  2.1 Light transport
  2.2 Subsurface scattering

3 Method
  3.1 Front light scattering
  3.2 Density dependent color
  3.3 Back light scattering
    3.3.1 Object depth and transmittance profile

4 Implementation
  4.1 Subsurface effect
  4.2 Back light scattering
  4.3 Optimization steps

5 Results
  5.1 Front light effect
  5.2 Translucency

6 Discussion
  6.1 Optimization
  6.2 Different skin colors
  6.3 Future work

List of Figures

1.1 When light passes through thin parts of the skin the red color is very visible. Image from [7].
1.2 The model head from the beginning.
1.3 Diffusion profile for human skin. It can be seen here how the red color gets scattered more than green and blue. Image from [5].
2.1 Difference between the normal BRDF model and the BSSRDF model. Image from [7].
2.2 An illustration of how the light can bounce around inside the skin. Image from [7].
3.1 Illustration of how the blur method works. Image from [2].
3.2 Illustration of the density method.
3.3 Overview of direct and translucency lighting vectors. Image from [9].
3.4 The color range that can be obtained from the transmittance profile based on the distance the light travels through the object. Image from [8].
4.1 Too big a kernel causes some unwanted artifacts, visible for example at the edges of the ear.
4.2 UI for the density dependent subsurface scattering method.
4.3 The color contribution from translucency using depth and the transmittance profile, tested on a cube.
5.1 The blur method applied to the model on the right, compared to the original model on the left.
5.2 The human skin represented with the density dependent method on the right, compared to the original model on the left.
5.3 The difference between passing the specular light through as normal (left) or through the blurring pass (right).
5.4 Two different translucency gradients achieved from the input colors. The top one is created to mimic the gradient of human skin.
5.5 The translucency contribution for the two methods compared to without. Original left, Frostbite method middle, method using depth right.
5.6 The different methods tested on the Stanford bunny.
5.7 The result achieved with the Unreal Engine.
6.1 The color change from light Caucasian skin to dark African skin. Image from [12].


Chapter 1

Introduction

In computer graphics the demands for visual realism increase year by year. When rendering scenes it is desirable that the light behaves as physically correctly as possible. For some surfaces this is rather easy to accomplish with normal lighting models, but for many materials it is not feasible, as the light scatters in a complex way. For materials like milk, leaves, wax and human skin, not all light reflects directly at the surface; some of it is scattered and enters the interior before it exits again. The way the light traverses the medium depends on various physical factors like density and interaction with other parts of the body, such as bones. This effect is referred to as subsurface scattering and can be complex to imitate. Materials that have this property are called translucent, and most of this report examines how such materials can be rendered in a good and efficient way for real-time applications.

1.1

Human skin

The human skin has many interesting functions and properties. It is the outer covering of our body; it protects us against infections and regulates our body heat. As the skin is translucent and consists of different layers, light enters the skin and bounces around between the layers before it leaves the surface. The different layers in the skin have different purposes and properties, causing the light to behave differently in each. In a lower layer of the skin lie the blood vessels, causing the hint of red. This is especially visible where a strong light source is behind a thin part of the skin, see image 1.1. In order to render human skin with realistic results, these layers need to be considered carefully. The properties of the skin also differ somewhat from area to area on the body, which causes the appearance to vary. Because of its many special properties it is hard to render skin in a realistic way.


Figure 1.1: When light passes through thin parts of the skin the red color is very visible. Image from [7].

1.2

Real-time vs offline rendering

The difference between rendering human skin in real-time and offline applications is very big. As the light transport inside skin is so complex, it is not possible to do it in a physically correct way in real time. Real-time implementations require a lot of cheating and simplification, but good results can still be achieved. In these cases the subsurface scattering effect is often accomplished with post effects after a normal light shading pass. When rendering movies, on the other hand, the demands on visual realism are extremely high, and complex methods are used since the frames do not have to be rendered in real time; with these methods it can take minutes or hours to render one frame. Methods used for offline rendering often use some sort of ray casting algorithm, where the light is refracted and traced inside the object in a physically correct way. There are different methods that use the BSSRDF model and the diffusion approximation to simulate the light transport; more about this can be read in the papers by Jensen et al. [6] and by Donner and Jensen [11].

1.3

Stingray

The implementation in this project is done in the game engine Stingray, originally called Bitsquid. The game engine was later bought by Autodesk and integrated into their games development toolset alongside 3ds Max and Maya. The engine has been used by different studios for many years and several games have been released with it.


1.4

Used content

To make this project easier to test, a model of a human head made by a professional was used, see image 1.2. For this model an albedo map was used to get the correct colors over the face, together with a normal map. A specular map to create realistic highlights for the specular light was also available. This made the testing phase much easier compared to working with, for example, a cube and trying to visualize the effects on a human.

Figure 1.2: The model head from the beginning.

1.5

Related work

There has been a lot of research on the subject of subsurface scattering, and over the years many different methods have appeared, each with its specific benefits. Most of the work in this area has been done for offline rendering, since it was long seen as too expensive for real-time applications. But over the last years many methods for real-time rendering have started to appear as well. Jimenez et al. [1] presented a solution for subsurface scattering in real time using diffusion kernels that mimic the diffuse reflectance profile of a translucent material. A diffuse reflectance profile defines how the light behaves when traversing inside an object and how the intensity of the light decays with distance from the incidence point. A profile for human skin can be seen in image 1.3. This is done as a post-rendering effect, so the translucent textures


are blurred with the kernel, which creates a softer look. For the different passes the size of the kernel changes, and different color weights are also used in order to make the color scattering more interesting. This method is similar to the one mostly used in this paper.

Figure 1.3: Diffusion profile for human skin. It can be seen here how the red color gets scattered more than green and blue. Image from [5].


Chapter 2

Background

2.1

Light transport

When rendering scenes in computer graphics, the lighting in the scene is the most important part. This is mostly done using ray optics, where the light moves from the light source in a straight line and can then be reflected or refracted when reaching a surface. The light can either reflect directly at a surface, or refract and enter the object, causing it to scatter multiple times. If it reflects directly at the surface, this is referred to as single scattering. For this type of reflection the direction and position of the incoming light are of high importance. The opposite is multiple scattering, which is when the ray goes inside the material and scatters around before leaving the object again. These different types of reflection models are often referred to as BRDF (bidirectional reflectance distribution function) and BSSRDF (bidirectional scattering-surface reflectance distribution function); illustrations of both can be seen in image 2.1. The BRDF can be expressed with equation 2.1 below, and more about this can be read in the paper by Kajiya [3]. The BSSRDF is a bit more complex than the BRDF; more about it can be read in the paper by Jensen et al. [6].

L_o(x, w_o) = L_e(x, w_o) + ∫ L_i(x, w_i) p(x, w_i, w_o) (w_i · N) dw_i    (2.1)

For human skin the multiple scattering is of higher importance, as the skin contains several layers which affect the light in a complex way. For multiple scattering, the position and direction of the incoming ray are not related to the outgoing position and direction in the same way as for direct reflection, since the ray scatters so much that it leaves at a new position and direction. This is why we can use a blur filter to simulate this


randomness in the light transport. By using a blur filter we move from the single-scattering way of simulating light transport to the multiple-scattering way.

Figure 2.1: Difference between the normal BRDF model and the BSSRDF model. Image from [7].

2.2

Subsurface scattering

Subsurface scattering refers to the way light scatters and traverses in a translucent material. For many types of objects not all light reflects directly at the surface; instead some of it is refracted into the interior of the material. Inside the surface it is reflected at different angles and then exits the surface at a different point. These properties are very important for some types of material, like human skin. In order to render skin in a realistic way, the structure of the skin and its properties need to be known so that the flow of the light inside it can be simulated. As mentioned earlier, the skin contains different layers, which causes the light to bounce around between the layers before it leaves the surface; see image 2.2 for how the light can move inside the skin.

Figure 2.2: An illustration of how the light can bounce around inside the skin. Image from [7].


Chapter 3

Method

For skin shading two different approaches are needed. The first one takes care of light sources in front of the object, and the second handles lights behind the object, which contribute since light in translucent materials gets transmitted through thin areas. The two approaches and the methods used are explained in this chapter.

3.1

Front light scattering

Jimenez et al. [1] proposed a post-processing method for real-time applications that simply uses a filter to blur the diffuse light from a translucent object in texture space. This works well but has its limitations. The method used in this paper builds on it but is applied in screen space instead, which avoids some of those limitations [2]. With the texture-space method the blur is applied to the textures at the same resolution all the time. When moving to screen space this can be optimized by basing the blurring on the distance to the object: if the object is far away, the blur is only applied to a small part of the screen, and beyond a fixed distance no blur is applied at all, since there will be no visible difference. This makes a huge difference in performance, especially if a large number of objects use the effect. With the screen-space method a stencil buffer [4] can be used to mask out all pixels where we want to apply the blur filter, which are then passed through the blurring passes. By doing this we can apply the effect to all objects in the same pass, as opposed to texture space where each object texture is processed in a separate pass. It is also much easier to use the depth value so that only the parts of the object visible from the camera are blurred.


To create this subsurface scattering in the skin we want to blur the object in a way that mimics the real reflection model of human skin shown in image 1.3. To do this a Gaussian blur filter can be used. This is done four times as a full-screen pass with different kernel sizes for the best result. To optimize the filtering, 1D filters with a horizontal and a vertical pass are used instead of a 2D filter. So it becomes eight passes to create the blurring effect, and one more pass to combine the results correctly afterwards. The blurring filter in each pass uses the values below, where each value is the weight of a sample around the current pixel; the value 0.382 is how much we take from the current position. The resulting pixel is the sum of the sample values multiplied with the kernel weights, see equation 3.1. The final color after all the passes is a blended value of these, normalized in order for it to be energy conserving.

0.006   0.061   0.242   0.382   0.242   0.061   0.006

resColor = Σ_{i=0}^{6} pixelSample[(i − 3) ∗ stepSize] ∗ weight[i]    (3.1)
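As an illustration of equation 3.1, the 7-tap kernel above can be applied as a single 1D pass. The sketch below is mine, not from the thesis: the name blur_1d and the edge-clamped boundary handling are assumptions.

```python
# Illustrative sketch of equation 3.1: one 1D pass of the 7-tap kernel
# listed above. The edge-clamped boundary handling is an assumption.

WEIGHTS = [0.006, 0.061, 0.242, 0.382, 0.242, 0.061, 0.006]

def blur_1d(samples, step_size=1):
    """Apply the 7-tap kernel along one row of pixel values."""
    n = len(samples)
    out = []
    for x in range(n):
        acc = 0.0
        for i, w in enumerate(WEIGHTS):
            # sample at offset (i - 3) * step_size, clamped to the row
            j = min(max(x + (i - 3) * step_size, 0), n - 1)
            acc += samples[j] * w
        out.append(acc)
    return out

# The weights sum to 1, so a flat row is unchanged (energy conservation).
assert all(abs(v - 0.5) < 1e-9 for v in blur_1d([0.5] * 8))
```

Running this kernel once horizontally and once vertically approximates the 2D Gaussian, which is what makes the eight-pass scheme cheaper than four full 2D filters.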

We also want to affect the different color channels differently for a more interesting blur effect. For human skin a wide scattering of the red channel is desired, which is handled by having different color weights for the different passes. An input color is also available for the user in order to adjust the scattering color to the desired look. For each pass the width of the kernel also increases to get different ranges of scattering. The algorithm works as image 3.1 shows.

3.2

Density dependent color

A variation of the method presented in [2] is to take the density map of the model into more consideration. So instead of having one input value that changes all channels, this method has three different input color values for the different layers, and the color to use is based on the density at that position. In an area of high density the top skin color is used the most, and the opposite holds for areas with low density. This makes areas like the ear on a human face turn more into the color of the lowest layer, which should be red in this case. This of course puts demands on a good density map for the model. With this method two other input values are also introduced, which are the depths of the transitions between two color layers; by adjusting these two values the artist can decide at what depth the different colors lie, see image 3.2. As before, the different layers also have different scattering ranges, so if the density value is


Figure 3.1: Illustration of how the blur method works. Image from [2].

low, the lowest color channel is used with the widest blur kernel. In order to make this look good there has to be an interpolation between these layers so that no hard edges occur. The three interpolation factors, based on the density and the two layer depths, are calculated as equations 3.2 - 3.4 below show. These factors determine how much each layer contributes. The final color is then obtained with equation 3.5, where the blur passes are multiplied with the different layer colors and the factors that determine how much of each color we want, and then added together.

w1 = max(1.0 − 0.5/(1.0 − l1) ∗ (1.0 − density), 0.0)    (3.2)
w2 = max(1.0 − 0.5/(l1 − l2) ∗ |(l1 + l2)/2.0 − density|, 0.0)    (3.3)
w3 = max(1.0 − 0.5/(l1 − l2) ∗ |(2 ∗ l2 − l1) − density|, 0.0)    (3.4)

final_color = blur1 + blur2 ∗ top_color ∗ w1 + blur3 ∗ mid_color ∗ w2 + blur4 ∗ low_color ∗ w3    (3.5)
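The layer blending can be sketched as follows. Note that the printed weight equations are partially garbled in this copy, so the tent-shaped weights below are an assumed reading of equations 3.2 - 3.4, with l1 and l2 as the two user-set transition depths.

```python
# Sketch of the density-dependent blend in equation 3.5. The weight
# formulas are an ASSUMED reading of the garbled equations 3.2-3.4:
# tent functions over the density range, with 1 > l1 > l2 > 0.

def layer_weights(density, l1, l2):
    """Return (w1, w2, w3): top, middle and lowest layer weights."""
    w1 = max(1.0 - 0.5 / (1.0 - l1) * (1.0 - density), 0.0)
    mid = (l1 + l2) / 2.0                    # assumed centre of mid layer
    w2 = max(1.0 - 0.5 / (l1 - l2) * abs(mid - density), 0.0)
    w3 = max(1.0 - 0.5 / (l1 - l2) * abs(2.0 * l2 - l1 - density), 0.0)
    return w1, w2, w3

def final_color(blurs, top, mid, low, density, l1=0.7, l2=0.3):
    """Equation 3.5 with scalar 'colors' for brevity; blurs = [blur1..blur4]."""
    w1, w2, w3 = layer_weights(density, l1, l2)
    return blurs[0] + blurs[1] * top * w1 + blurs[2] * mid * w2 + blurs[3] * low * w3

# Dense areas favour the top color, thin areas (like the ear) the lowest.
assert layer_weights(0.95, 0.7, 0.3)[0] > layer_weights(0.05, 0.7, 0.3)[0]
assert layer_weights(0.05, 0.7, 0.3)[2] > layer_weights(0.95, 0.7, 0.3)[2]
```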


Figure 3.2: Illustration of the density method.

3.3

Back light scattering

For light sources behind the object a different approach is needed. For each point on a surface we want to know if there is a light source behind the object that will give any contribution to this point.

To achieve this effect in real-time applications like video games, Frostbite presented a solution that does not need much memory or computation [9]. Some information about the varying thickness of the object is needed in order to know how much of the light gets through. The view and light directions are also needed for the correct attenuation of the light. To get the distance the light has travelled inside the object, depth maps can be used, but this puts demands on memory, which is to be avoided. Instead, the thickness of the object can be computed offline as a normal-inverted ambient occlusion and stored as a texture. Ambient occlusion is normally used to see how much a pixel is in shadow by checking how much the spot is occluded by its surroundings [10]. Here it is instead computed into the object, so the normal is inverted and points into the object to get a good thickness estimate. This gives a good estimate of the light transport happening inside the object. For this implementation a density map was used that holds information about the density of the object in different areas; for the ears, for example, the density is lower, letting more light through. In image 3.3 an illustration of the method and the different vectors can be seen.


Figure 3.3: Overview of direct and translucency lighting vectors. Image from [9].

To get the final contribution from the light behind the object, the following equations can be used, where the different vectors, the local thickness and some translucency parameters are combined. First we want the relationship between the viewing angle, the surface and the light source. This, together with a small distortion factor that shifts the surface normal, is raised to the power of t_power, a translucency power factor for the direct translucency. This is then multiplied with a scale factor and added to a translucent ambient term that is a constant value for the entire object. The result is multiplied with the light attenuation, the density of the object and the colors of both the light and the object.

light_vec = light + normal ∗ t_distortion    (3.6)
v_inv_l_dot = saturate(dot(eye, −light_vec))^t_power ∗ t_light_scale    (3.7)
t = attn ∗ (v_inv_l_dot + t_ambient) ∗ saturate(1.0 − density)    (3.8)
t_final = t ∗ light_color ∗ base_color    (3.9)
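A minimal sketch of equations 3.6 - 3.9, with plain tuples standing in for shader vectors; the default parameter values are placeholders, not the thesis's tuned values.

```python
# Sketch of equations 3.6-3.9; tuples stand in for shader vectors and
# the default parameter values are placeholders.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def saturate(x):
    return min(max(x, 0.0), 1.0)

def translucency(eye, light, normal, attn, density,
                 t_distortion=0.2, t_power=4.0, t_light_scale=1.0, t_ambient=0.0):
    # eq. 3.6: shift the light vector by the distorted surface normal
    light_vec = tuple(l + n * t_distortion for l, n in zip(light, normal))
    # eq. 3.7: view against the inverted light vector, raised to t_power
    v_inv_l_dot = saturate(dot(eye, tuple(-c for c in light_vec))) ** t_power * t_light_scale
    # eq. 3.8: attenuate, add the ambient term, scale by thinness
    return attn * (v_inv_l_dot + t_ambient) * saturate(1.0 - density)

# eq. 3.9 would then multiply this by light_color and base_color.
# Looking straight at a light behind a thin area gives a strong term;
# a fully dense area gives none.
t_thin = translucency((0, 0, -1), (0, 0, 1), (0, 0, -1), attn=1.0, density=0.1)
t_dense = translucency((0, 0, -1), (0, 0, 1), (0, 0, -1), attn=1.0, density=1.0)
assert t_thin > 0.0 and t_dense == 0.0
```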


3.3.1 Object depth and transmittance profile

The other approach, based on the work by Jimenez et al. [8], focuses more on actually using the distance the light has travelled inside the object to determine the amount of light passing through. This is a very important factor, as the light gets absorbed and scattered inside the object, so the further it travels the less light should come out on the other side. To get this information the shadow map is used. In the shader, each position is transformed into light view space so that the two positions are in the same space. Then we can get the shadow position from the shadow map in the direction towards the light source from the current point. When both positions are known we can simply take the distance between them; it is important that the depth values for both positions are linear in order to get a correct distance. To get the contribution color based on this distance, a transmittance profile is used. In image 3.4 the color gradient can be seen; based on the depth value a color is obtained from it. For a large depth the contribution is black, which means no contribution; for thinner parts the desired yellow or red color is obtained. This contribution is also affected by the light strength and the angles between the viewer, the surface normal and the direction to the light source. The equations below show how the color is obtained. First the distance inside the object is computed and multiplied with a scale factor. This value is then used to fetch a color from the transmittance profile. The final color is the fetched color multiplied with the light color and intensity, and a factor that depends on the relation between the inverted normal and the light vector. This color is added to the pixel value, once for each light source in the scene.

distance = (sampleBackPos − sampleFrontPos) ∗ scaleFactor    (3.10)
depthColor = transmittanceProfile(distance)    (3.11)
res = depthColor ∗ light ∗ max(0.3 + dot(−N, L), 0)    (3.12)
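The depth-based lookup of equations 3.10 - 3.12 can be sketched like this; the transmittance_profile function is a hypothetical stand-in for the gradient texture of image 3.4, not the thesis's actual profile.

```python
# Sketch of equations 3.10-3.12. transmittance_profile is a hypothetical
# stand-in for the gradient texture in image 3.4: red survives the
# longest travel distance, blue the shortest.

def transmittance_profile(d):
    r = max(1.0 - d, 0.0)
    g = max(1.0 - 2.0 * d, 0.0)
    b = max(1.0 - 4.0 * d, 0.0)
    return (r, g, b)

def translucency_color(back_depth, front_depth, scale, light_color, n_dot_l):
    # eq. 3.10: linear depths from the shadow map, scaled
    distance = (back_depth - front_depth) * scale
    # eq. 3.11: color that survives that travel distance
    depth_color = transmittance_profile(distance)
    # eq. 3.12: modulate by the light and the dot(-N, L) angle term
    k = max(0.3 + n_dot_l, 0.0)
    return tuple(c * l * k for c, l in zip(depth_color, light_color))

thin = translucency_color(0.52, 0.50, 10.0, (1.0, 1.0, 1.0), n_dot_l=0.5)
thick = translucency_color(0.80, 0.50, 10.0, (1.0, 1.0, 1.0), n_dot_l=0.5)
assert thin[0] > thin[2]         # thin parts come out reddish
assert thick == (0.0, 0.0, 0.0)  # large depth: no contribution
```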

Figure 3.4: The color range that can be obtained from the transmittance profile based on the distance the light travels through the object. Image from [8].


Chapter 4

Implementation

The practical part of this work was to implement skin shading in the game engine Stingray, developed by Autodesk. As this is a game engine a real-time solution was needed, so a physically correct method could not be used. The engine already had a working rendering pipeline but no good way of simulating subsurface scattering and skin, which was desired. The implementation was based on the work by Jimenez et al. [2] as mentioned earlier. The first step was to divide the color components into different buffers, one for the specular part and one for the diffuse part. This was done since the blurring method should only be applied to the diffuse part, as we think of the specular light as directly reflected at the surface.

4.1

Subsurface effect

In table 4.1 below it can be seen how the scattering radius increases and how it correlates with the color weights for human skin. It is clear that for the red channel we want a wider scattering in the last pass, which has the biggest scattering radius. How the stepSize is used can be seen in equation 3.1.

stepSize   R       G       B
0.0064     1.0     1.0     1.0
0.0516     0.3251  0.45    0.3583
0.2719     0.34    0.1864  0.0
2.0062     0.46    0.0     0.0402

Table 4.1: The different color weights for the different blur passes with different sizes.

First the image was blurred with the horizontal blur and passed to a second target. Then this horizontally blurred image was vertically blurred and sent back to both the first


target and the final target. When writing to the final render target an alpha blending operation was used to mix the values correctly with the different color weights seen in table 4.1. This was then repeated four times with different scattering radii and color weights.
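The accumulation over the four passes can be sketched as below, with the per-pass radii and color weights of table 4.1; the blur argument is a caller-supplied stand-in for the separable Gaussian pass, and the normalization step is an assumed reading of how the energy-conserving blend is achieved.

```python
# Sketch of the four-pass accumulation: each pass blurs at a wider
# radius (the stepSize column) and is blended into the final target
# with the per-channel weights of table 4.1.

PASSES = [  # (stepSize, (R, G, B)) from table 4.1
    (0.0064, (1.0, 1.0, 1.0)),
    (0.0516, (0.3251, 0.45, 0.3583)),
    (0.2719, (0.34, 0.1864, 0.0)),
    (2.0062, (0.46, 0.0, 0.0402)),
]

def accumulate(diffuse, blur):
    """diffuse: list of RGB tuples; blur(step, img) returns a blurred copy."""
    final = [(0.0, 0.0, 0.0)] * len(diffuse)
    current = diffuse
    total = (0.0, 0.0, 0.0)
    for step, weights in PASSES:
        current = blur(step, current)        # ping-pong between targets
        final = [tuple(f + c * w for f, c, w in zip(fp, cp, weights))
                 for fp, cp in zip(final, current)]
        total = tuple(t + w for t, w in zip(total, weights))
    # normalise by the summed weights so the blend is energy conserving
    return [tuple(c / t for c, t in zip(px, total)) for px in final]

# With a trivial "blur" a flat image must come out unchanged.
out = accumulate([(0.5, 0.5, 0.5)] * 4, lambda step, img: img)
assert all(abs(c - 0.5) < 1e-9 for c in out[0])
```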

An important thing for this to work correctly is the size of the kernels. In figure 4.1 it is easy to see the problem when the kernel width is too big. For the method to work correctly, the size of the kernel must depend on the distance from the camera to the point and on the width of a pixel. To handle this, a couple of stretch factors s_x and s_y are created and multiplied with the kernel width, so that it works wherever the object is in the scene, see the equations below. The correction term handles the fact that a surface at a steep angle with respect to the camera needs a narrower kernel. If the difference in depth is too big, the correction factor is multiplied with the constant maxdd instead of the difference in depth. The stretch factors are also divided by the depth value of the point, so the kernel size gets smaller if the object is far away.

correction = correction_scale ∗ min(|d/dx(depth)|, maxdd)    (4.1)
s_x = (sss_strength ∗ pixel_size) / (depth + correction) ∗ (1.0, 0.0)    (4.2)
s_y = (sss_strength ∗ pixel_size) / (depth + correction) ∗ (0.0, 1.0)    (4.3)
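A sketch of the stretch-factor computation together with the half-pixel early-out described below; the exact grouping of the printed fraction is ambiguous in this copy, so the (depth + correction) denominator here is an assumption consistent with the surrounding text (steeper surfaces and farther objects both get a narrower kernel).

```python
# Sketch of the kernel stretch factors (equations 4.1-4.3) plus the
# half-pixel early-out. The (depth + correction) denominator is an
# ASSUMPTION matching the text, not a verbatim copy of the equations.

def stretch_factor(depth, ddx_depth, pixel_size=1.0,
                   sss_strength=1.0, correction_scale=1.0, maxdd=0.001):
    # eq. 4.1: clamp the depth derivative so silhouettes don't blow up
    correction = correction_scale * min(abs(ddx_depth), maxdd)
    # eqs. 4.2/4.3: same scalar, applied along x or y respectively
    base = sss_strength * pixel_size / (depth + correction)
    s_x = (base, 0.0)
    s_y = (0.0, base)
    return s_x, s_y

def skip_blur(kernel_width_pixels):
    """All samples land in one pixel, so the pass can be skipped."""
    return kernel_width_pixels < 0.5

near, _ = stretch_factor(depth=1.0, ddx_depth=0.0)
far, _ = stretch_factor(depth=10.0, ddx_depth=0.0)
steep, _ = stretch_factor(depth=1.0, ddx_depth=0.5)
assert near[0] > far[0]    # farther away: narrower kernel
assert near[0] > steep[0]  # steeper angle: narrower kernel
assert skip_blur(0.3) and not skip_blur(2.0)
```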

Figure 4.1: Too big a kernel causes some unwanted artifacts, which can be seen for example at the edges of the ear.


In cases where the object is far away from the camera, the samples in the Gaussian filter will land in the same pixel, which can be exploited to save computation. This can be checked by testing whether the kernel width at the current pixel is less than 0.5, i.e. half a pixel; then the blur can be skipped since all samples would be the same. In order to expose this in a user-friendly way, a couple of input values were created. First, a color parameter that the user can change, which affects the color weights in the blur pass. This enables the user to create different appearances: if an alien creature is desired, a greener color can be entered, for example; or for smaller changes, a more pale-looking color can be used if the person is supposed to be ill, or the red can be increased if the person has just worked out. The other input value the user can set is the scattering radius. This changes the range of the scattering, so a higher value results in a smoother-looking surface.

For the density method a few other input values were created and used in the way explained in the previous chapter. The user interface for this method can be seen in image 4.2, where the user can change the two density depths and the three layer colors. This gives the artist more freedom to change the appearance of the skin in a more advanced way.

Figure 4.2: UI for the density dependent subsurface scattering method

4.2

Back light scattering

Another important part of really accomplishing a subsurface effect is to handle back light scattering and not only front light scattering. Light that comes from behind the object might pass through it and affect how it looks from the front, as can be seen in image 1.1. As we are working in screen space we lose the irradiance not seen from the camera, so this type of light would normally be neglected. But if the object is translucent this effect is very important for achieving realistic results, so some assumptions can be made in order to fake it [8]. For this effect the


method presented by Frostbite was first tested. So in addition to the normal light, the light passing through the object also needed to be added. This was done with the equations presented in the previous section.

For the translucent light we also want to tint it with the desired color so that it matches the other method. Light that passes through human skin and is scattered out should not be white like the light source, but should be affected by the interior of the skin. To achieve this, the translucency color is multiplied with the subsurface scattering color attribute that the artist can change, and which also affects the scattering for the front light method.

The other method that was implemented used the distance the light travels inside the object to decide the contribution. The color obtained from the transmittance profile can be seen clearly when tested on a cube, see image 4.3. The final contribution from light sources behind the object was then obtained with the equations below. The equation is applied for each light source and added to the existing surface color. This contribution is added to the diffuse light and passed through the blur step to get a softer look here as well, and is then merged with the specular light, see the code snippet below.

Figure 4.3: The color contribution from translucency using depth and the transmittance profile tested on a cube.

L = light_color * light_attenuation;

// get contribution from light sources behind the object
back_light = L * object_depth * (1.0 - density);

// blur diffuse light as a post process
blurred_color = blur(diffuse_color + back_light); // final color of that point
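The snippet above can also be expressed as a small runnable sketch; the function name and the per-channel tuple representation are assumptions for illustration, not the engine's shader code:

```python
# Sketch of the back light contribution (assumed names): a light source behind
# the object contributes its attenuated color, scaled down by the depth the light
# has to travel through the object and by the material density.
def back_light(light_color, light_attenuation, object_depth, density):
    L = tuple(c * light_attenuation for c in light_color)
    return tuple(c * object_depth * (1.0 - density) for c in L)
```

At density 1.0 the object is fully opaque and the back light contribution vanishes, which matches the `(1.0 - density)` factor in the snippet.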


4.3 Optimization steps

For this to work well and efficiently in a real-time application, optimization is important. After all the mentioned effects were implemented, the computation time was a bit high, so some optimization steps were required. The first step was to decrease the resolution when doing the blur passes instead of increasing the number of sampling steps; the same result could be achieved. After each blur pass the buffer was down-sampled to half of the resolution with a 2x2 average kernel. This was done between the three passes, so the last blur pass ran at 1/8 of the original resolution. This decreased the computation time somewhat, but more optimization was still needed.

A very effective way of reducing computation time in this kind of situation is a stencil buffer, which can stencil out the objects that should be affected. A stencil buffer is an additional buffer on the GPU alongside the normal color and depth buffers, and it works per pixel. The first step is therefore to mark all pixels that contain an object with a skin material. When doing the later blur passes, the stencil buffer is used in a highly optimized way on the GPU to run our calculations only for skin materials.
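The 2x2 average down-sampling between passes can be sketched as follows; `downsample_2x2` is a hypothetical helper assuming even image dimensions, not the engine's implementation:

```python
import numpy as np

# Sketch: halve the resolution of an (H, W, C) buffer with a 2x2 average kernel,
# as done between the blur passes. Assumes H and W are even.
def downsample_2x2(img):
    h, w = img.shape[:2]
    # Split each axis into (blocks, 2) and average over the 2x2 block axes.
    return img.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))
```

Each output pixel is the mean of a 2x2 block of input pixels, so a full-resolution blur kernel covers twice the screen-space distance on the down-sampled buffer.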

(27)

Chapter 5

Results

After the contributions from light sources both in front of and behind the object were handled, some good results were achieved. This chapter shows the results for each of the approaches.

5.1 Front light effect

The blur method gave a smooth-looking surface that resembles skin more closely. Together with a good color weight function it looks realistic, and with a good color map and the scattering color input the artist can achieve realistic results for different types of skin. The result of this method can be seen in image 5.1.

The method that uses the density map in a more specific way also gave nice results and feels appropriate for a game engine. Since the method is driven by user input, it gives the artist a lot of freedom to create the desired appearance. These attributes can be changed to alter the age of the skin, or for example to make it paler if the person is tired and redder if the person has just worked out. Results for this method can be seen in image 5.2.

A test was also made where the specular part was blurred by passing it through the blur pass as well. This made the specular highlights somewhat more diffuse and blurred; the difference can be seen in image 5.3.


Figure 5.1: The blur method applied on the model to the right compared to the original model to the left.

Figure 5.2: The human skin represented with the density dependent method to the right compared to the original model to the left.

5.2 Translucency

The translucency is a tricky part, especially for skin, as its structure is so complex. It is hard, though, to create the transition so that the light does not also pass through the rest of the face equally much. This puts a lot of weight on the quality of the density map used for the final result. The tested implementation from Frostbite [9] showed that it was not appropriate for human skin, as the method has too many limitations. It works well for large models without many details that have a constant density throughout the volume, but for a human model with a lot of details the results are not as good, since the light passing through relies only on the density value at the surface and not on the volume itself.

Figure 5.3: The difference between passing the specular light through as normal (left) or through the blurring pass (right).

With the method presented by Jimenez et al. [8] the distance the light travels through the object was obtained, and with this a more realistic result was achieved. The problem of light passing through thick objects disappeared, and a contribution could only be seen in thin parts such as the ears.

To make the transmittance profile more interesting and more general, a profile was instead created from the three layer colors used for the density method. This way many more types of objects and creatures can use the subsurface effect and get a good translucency from it. In image 5.4 a couple of different profiles have been created from these input colors. With the translucency scale factor the user can scale the gradient so the correct color is obtained at the desired depth.
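A minimal sketch of how such a gradient could be built, assuming piecewise-linear interpolation between the three layer colors indexed by scaled depth; the function and parameter names are hypothetical, not the thesis implementation:

```python
# Sketch (assumed behavior): a transmittance gradient built by piecewise-linear
# interpolation between the layer colors. 'depth' is the travel distance, and
# 'scale' is the user's translucency scale factor that stretches the gradient.
def transmittance_profile(depth, layer_colors, scale=1.0):
    t = min(max(depth * scale, 0.0), 1.0) * (len(layer_colors) - 1)
    i = min(int(t), len(layer_colors) - 2)  # segment index
    f = t - i                               # position inside the segment
    a, b = layer_colors[i], layer_colors[i + 1]
    return tuple((1 - f) * x + f * y for x, y in zip(a, b))
```

At depth 0 the first layer color is returned, and at the maximum scaled depth the last layer color, with smooth blends in between.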

In image 5.5 the difference between the methods can be seen. The left image is without any contribution. The middle one uses the Frostbite method, and it can be seen that the effect is not as concentrated to the ear as with the other method, shown to the right. In the right image, where the distance the light travels inside the object is used, the contribution is more specific and realistic, and with the transmittance profile a nice color is also obtained.

The different methods were also tested on the Stanford bunny; the results can be seen in image 5.6, where the left one uses the standard material, the middle one the Frostbite method and the right one the second method.


Figure 5.4: Two different translucency gradients achieved from the input colors. The top one is created to mimic the gradient of human skin.

Figure 5.5: The translucency contribution for the two methods compared to without. Original left, Frostbite method middle, method using depth right.

Due to the translucency contribution, the light intensity at the surface sometimes seemed too bright. This is because we consider the light passing through from behind, but not the light entering and leaving on the other side. To deal with this, an absorption factor was added that makes the color intensity look more realistic.
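A minimal sketch of such an absorption factor, assuming a simple Beer-Lambert style exponential falloff with the path depth; the names and the exponential form are assumptions for illustration:

```python
import math

# Sketch (assumption): attenuate the back light with an exponential absorption
# factor so that thicker light paths come out darker. 'absorption' is a
# hypothetical user-tunable parameter.
def absorb(back_light, depth, absorption):
    k = math.exp(-absorption * depth)
    return tuple(c * k for c in back_light)
```

At zero depth nothing is absorbed, while longer paths through the object darken the contribution smoothly.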

Figure 5.6: The different methods tested on the Stanford bunny.

To be able to compare my results in a better way I tried out Unreal Engine [15], another game engine, using the same model and applying its skin shading. This was done to see which parts of my implementation might need improvement. The result achieved with Unreal Engine can be seen in image 5.7.


Chapter 6

Discussion

6.1 Optimization

For a game engine it is of high importance that everything is optimized to reduce any kind of latency for the user. After most of the skin shading implementation was done, an analysis of the computation time showed that it was too expensive. The post-processing blur for the subsurface scattering caused by lights in front of the object took 1.505 ms on the GPU. This can be compared to the global lighting pass, responsible for the lighting of the entire scene, which took 1.344 ms. Based on this it was established that the skin shading part was too expensive and needed some optimization. The blur passes were done for every pixel, and the kernel size was varied to create different ranges of scattering. This could instead be done by decreasing the resolution of the pixel pass and using the same kernel size for each pass, which decreases the amount of computation without affecting the results too much. By decreasing the resolution by four for each pass, the GPU time for the skin shading part dropped to 0.641 ms. By decreasing the resolution by 16 for the last two passes, it dropped to 0.51 ms, but here some artifacts occurred in the image as the kernel got too big; to avoid this, the kernel width would have to be changed between the passes.

Method                  Blur passes (ms)   Recombine (ms)   Total time (ms)
Without down-sampling   1.137              0.368            1.505
Down-sampling 2-2-2-2   0.404              0.237            0.641
Down-sampling 2-2-4-4   0.262              0.248            0.510

Another optimization was to use a stencil buffer that marks all pixels containing a skin material. This made it possible to run the blur passes only for these pixels, instead of performing the check for each pixel inside the blur passes. Because of the down-sampling between the blur passes, the stencil buffer did not work well for the down-sampled passes, so it was only used for the first pass, which has the most pixels; it is therefore still a good optimization. In the table below some measurements can be seen with the stencil buffer enabled. The different camera distances make the number of skin pixels increase or decrease. For the close distance the stencil buffer does not help much, as almost every pixel on screen is skin material anyway. For the other distances it is clear that the computation time for the first two passes decreased a lot.

Distance   Pass 1-2 (ms)   Pass 3-8 (ms)   Total time (ms)
close      0.30            0.463           0.763
middle     0.05            0.30            0.35
far        0.02            0.28            0.30

6.2 Different skin colors

As there are a lot of different types of skin, a method like this needs to be rather general. Most of the result is, however, based on the color map used for the skin, but in combination with the density based method one can alter the colors to achieve really good results for all types of skin, provided the color map is good. For this project only a color map for Caucasian skin was tested, and trying to create a very different skin type from it, such as really dark skin, will not look good, as the starting point is too far from the desired result. For that, another color map is needed; image 6.1 shows just how much skin colors can vary. One also has to make sure that the color of the texture map is fairly similar to the subsurface scattering profile. More about how different color types can be achieved can be read in the papers by Donner and Jensen [12] and by Jimenez et al. [13].

Figure 6.1: The color change from light Caucasian skin to dark African skin. Image from [12].


6.3 Future work

The implementation has only been tested on PC, so it could be further adapted for the engine to work on all platforms, including VR platforms. The optimization could also be improved: the stencil buffer currently acts only on the first blur pass, as it would otherwise have to change resolution between passes, which would require some implementation time. This would be a good optimization step for future work.


Chapter 7

Conclusion

The final results gave a realistic appearance of human skin and compared well with the results achieved in Unreal Engine. With the added flexibility of the density based method, the artist can alter the appearance of the material in a better way. The contribution from light sources behind the object was improved by calculating the actual distance the light traveled inside it. This proved to be an important factor and, together with the transmittance profile, gave a nice color based on this distance. With the optimization steps the calculation time was decreased a lot without lowering the quality in any way. It can also be concluded that the method could be used with great results for materials other than skin, as it is user friendly and can be adjusted to look like many things.


Bibliography

[1] J. Jimenez, K. Zsolnai, A. Jarabo, C. Freude, T. Auzinger, X.-C. Wu, J. von der Pahlen, M. Wimmer and D. Gutierrez, Separable Subsurface Scattering, Computer Graphics Forum, 2015 (presented at EGSR 2015)

[2] J. Jimenez, V. Sundstedt and D. Gutierrez, Screen-space perceptual rendering of human skin, ACM Transactions on Applied Perception, 2009

[3] J. T. Kajiya, The rendering equation, Proceedings of ACM SIGGRAPH, 1986

[4] Stencil buffer, Microsoft documentation, https://msdn.microsoft.com/en-us/library/bb976074.aspx

[5] E. d'Eon and D. Luebke, Advanced techniques for realistic real-time skin rendering, in GPU Gems 3, H. Nguyen (Ed.), Addison-Wesley, 2007

[6] H. W. Jensen, S. R. Marschner, M. Levoy and P. Hanrahan, A Practical Model for Subsurface Light Transport, Proceedings of ACM SIGGRAPH, 2001

[7] N. Blevins, Translucency and Sub-Surface Scattering, 2001, http://www.neilblevins.com/cg_education/translucency/translucency.htm

[8] J. Jimenez, D. Whelan, V. Sundstedt and D. Gutierrez, Real-Time Realistic Skin Translucency, IEEE Computer Graphics and Applications, Vol. 30(4), 2010

[9] C. Barré-Brisebois, Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look, Game Developers Conference, 2011

[10] M. McGuire, Ambient Occlusion Volumes, High Performance Graphics, 2010

[11] C. Donner and H. W. Jensen, Rendering Translucent Materials Using Photon Diffusion, ACM SIGGRAPH 2008 Classes, ACM, 2008

[12] C. Donner and H. W. Jensen, A spectral BSSRDF for shading human skin, in Rendering Techniques 2006, pp. 409-417, 2006

[13] J. Jimenez, T. Scully, N. Barbosa, C. Donner, X. Alvarez, T. Vieira, P. Matts, V. Orvalho, D. Gutierrez and T. Weyrich, A practical appearance model for dynamic facial color, 2010

[14] E. d'Eon, D. Luebke and E. Enderton, Efficient Rendering of Human Skin, 2007

[15] Unreal Engine, https://www.unrealengine.com/
