
Department of Science and Technology
LiU-ITN-TEK-A--18/036--SE

Design Tools for Sketching of Dome Productions in Virtual Reality

Andreas Kihlström

2018-08-28

Thesis work carried out in Media Technology at the Institute of Technology, Linköping University

Supervisor: Patric Ljung
Examiner: Daniel Jönsson


Linköping University | Department of Science and Technology
Master thesis, 30 ECTS | Medieteknik
2018 | LIU-ITN/LITH-EX-A--18/2018--SE

Design Tools for Sketching of Dome Productions in Virtual Reality
Designverktyg för Sketchning av Dom Produktioner i Virtuell Verklighet


Copyright

The publishers will keep this document online on the Internet, or its possible replacement, for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.


Abstract

This report presents a problem faced by designers working on new productions for fulldomes. The back-and-forth process of moving between a workstation and the fulldome is time-consuming, and a faster alternative would be useful. This thesis presents one option: a virtual reality application where a user can sketch the new environment directly on a virtual representation of a fulldome. The result can then be exported directly to the real fulldome to be displayed.

The application is developed using Unreal Engine 4. The virtual dome is constructed using a procedurally generated mesh, with a paintable material assigned to it. All painting functionality is implemented manually, as are all other tools.

The final product is fully usable, but requires additional work if it is to be used commercially. Additional features can be added, including certain features discussed here that were cut due to time constraints, as well as improvements to existing features. Application stability is currently a concern that needs to be addressed, as are optimizations to the software.

Keywords


Contents

Abstract
1 Introduction
1.1 Background
1.2 Purpose
1.3 Workflow
1.4 Methods
1.5 Question formulations
1.6 Limitations
2 Theoretical Background
2.1 Computer Graphics
2.2 Painting Theory
3 Execution
3.1 Unreal Engine 4
3.2 Procedural Mesh Generation
3.3 Dynamic Textures and Materials
3.4 Saving and Exporting
3.5 Virtual Reality
3.6 Sketching Controls and Implementation
4 Results
4.1 Workflow
4.2 Mesh Generation
4.3 Interaction
4.4 Painting
5 Discussion
5.1 Questions
5.2 Future Work
6 Conclusion
Bibliography


List of Figures

2.1 A sphere represented by triangles.
2.2 Possible UV unwraps of a sphere.
2.3 Mapping comparisons between the standard and custom formats.
2.4 Sphere with cubic vertex mapping.
2.5 Cubesphere UVW mapping.
2.6 Triangle refinement grid. Left side is before subdivision, right side is after.
2.7 Types of subdivision.
2.8 Vertex position weights.
2.9 Two different examples of the normal blending mode.
2.10 Two different examples of the multiply blending mode.
2.11 Two different examples of the screen blending mode.
2.12 Two different examples of the overlay blending mode.
2.13 Two different examples of the darken blending mode.
2.14 Two different examples of the lighten blending mode.
2.15 Two different examples of the colour dodge blending mode.
2.16 Two different examples of the colour burn blending mode.
2.17 Two different examples of the hard light blending mode.
2.18 Two different examples of the soft light blending mode.
2.19 Two different examples of the difference blending mode.
2.20 Two different examples of the exclusion blending mode.
2.21 Illustrations of the simple marking tools.
2.22 Illustrations of the wand marking tool, using different threshold values.
2.23 A before and after comparison of a Gaussian blur operation.
2.24 A before and after comparison of a sharpen operation.
3.1 Blueprint nodes. When the L key is pressed, the program executes according to the nodes connected to the event.
3.2 Material template. The resulting material is interpolated depending on the Alpha value of the lower texture parameter.
4.1 The mesh used for the dome.
4.2 Both controllers and their menus.
4.3 Menu screens.
4.4 Different brush colours, some with different opacity values.
4.5 The Canvas texture used, and how it looks on the virtual dome.


Chapter 1

Introduction

A fulldome is a dome-based video projection environment, consisting of a spherical projection surface surrounding the viewer and filling their entire field of view. The fulldome can be used to immerse the viewer in a virtual environment. The design process for creating new environments for a fulldome production is not without obstacles, however. This chapter describes these obstacles, as well as possible solutions.

1.1

Background

When a new production is to be made for a fulldome, a vital part of the design process is to sketch a mock-up of the environment and display it on the fulldome itself. This usually involves sketching the environment on a flat surface, likely in a digital arts program on a desktop [1], then exporting the texture, applying a half-cube distortion to it and displaying it in the fulldome. The problem with this method is that the designers are unable to see any errors they make in the sketch until it is displayed on the fulldome surface. Therefore, the designers have to go back and forth between the sketching workstation and the fulldome to correct any errors made [2][1].

This is a time-consuming process, and it also requires physical access to the fulldome itself. Most fulldome environments are fully booked during most days, either displaying existing productions or being used for other projects. Also, not all designers can be physically present at all times. Because of these issues, there is no way to make this work portable, and the design process therefore takes more time than it needs to. If these issues could be solved, much time could be saved and used for other purposes [2].

1.2

Purpose

The purpose of this project is to find and create a solution to the issues described in Section 1.1. An alternative sketching surface is required to solve the first issue, that of the back and forth between the workstation and the fulldome itself. Preferably, the surface provided should produce a result that looks identical when shown on the actual fulldome. Finally, the solution needs to be portable. The designer should not require access to the fulldome at all during the design process. The optimal scenario is if the designers can work on the same production from completely different locations.

1.3

Workflow

A schedule with project deadlines will be created at the start of the thesis work. This schedule will provide all necessary information about when features should be implemented. It also determines the order in which features should be implemented. Work on one deadline should be fully completed before moving on to the next.

As for the development itself, the best way to approach this issue is to have continuous feedback from a designer who requires such a solution. Their input would provide not only the list of required features, but also whether or not their implementation is satisfactory. They will provide their opinions on each step of the implementation.

Therefore, every other Friday during the project a meeting will be held with the designer and the thesis supervisor. Both the designer and the supervisor can provide feedback on the current state of the project. When the application is ready, a proper demonstration will be held.

This development model borrows ideas from both agile development and the waterfall model [3]. While the development schedule itself is more akin to the waterfall model of development, the constant communication with a designer is more similar to certain aspects of Scrum [3], where a functional piece of software is to be provided to the customer for feedback at regular intervals.

1.4

Methods

With these necessities in mind, the proposed solution is to create a virtual reality application for the HTC Vive, allowing designers to paint on a virtual dome surface. This provides a portable solution. It can also produce a result that looks identical on both surfaces, assuming that the virtual dome uses the same texture mapping as the fulldome uses for projection.

Creating an application for the HTC Vive headset requires a programming platform able to interface with it. The two main prospects considered for this thesis work are the Unity game engine and Unreal Engine 4. Both of these game engines make it very easy to develop applications for the HTC Vive, with many tools and assets already available.

For this project, the engine chosen is Unreal Engine 4. Its internal scripting language, called Blueprints, can be used for rapid prototyping, and it can also be combined with regular programming to create more advanced features quickly.

1.5

Question formulations

The following questions were posed at the beginning of this thesis work:

• Can the fulldome be represented by an arbitrary spherical shape?

• What type of navigation is optimal for the virtual fulldome environment?

• What kind of controls should be available other than the sketching tool, and how should they be implemented?

Not all fulldomes are created the same way. Different fulldome environments use different sizes, and have varying degrees of visible surface for the audience. Some fulldome productions have more than 180° of visible surface area, and thus it would be beneficial if the virtual dome could be represented by an arbitrary spherical shape. This way, the surface can be procedurally generated, and fit potentially any fulldome setup.

Navigation can often pose a significant problem for virtual reality applications. In this case, the application is meant to be portable, so that a design sketch can be continued at multiple locations, potentially in places where space is limited. Since the dome is meant to be procedurally generated, its resulting size may not correspond to the room setup available. Thus, another way of navigating the environment is needed.


Most painting and sketching programs come with many features beyond the basics most people associate with painting. The virtual environment also offers additional opportunities, as full 3D movement can assist in changing the perspective of the sketch. This poses the question: what kind of features should be added to the application, and what can be added within the time constraints of the project?

1.6

Limitations

The first limitation for a user is that they require an HTC Vive virtual reality headset to use the application. The headset, while portable, is not perfectly so. Before the headset can be used, the user must set up trackers in a sufficiently large room. The headset must then be connected to a desktop powerful enough to handle virtual reality applications. Laptops are in general not powerful enough for this task.


Chapter 2

Theoretical Background

Before work can begin, an understanding of the theory behind it is required. There are several aspects that need explaining, especially in the realm of 3D computer graphics. These are explained below.

2.1

Computer Graphics

The first question posed was whether or not the fulldome could be represented by an arbitrary spherical shape. The means of answering this is to create a procedurally generated sphere, where the user can control all necessary parameters relevant to the application. Procedural generation within Unreal Engine 4 requires a set of parameters in order to create any shape. These parameters are as follows:

• A list of vertices.

• A list of triangle indices. This list is three times as long as the number of triangles, since each triangle is defined by three indices.
• A list of normal vectors, one for each vertex.
• A list of tangent vectors, one for each vertex.
• A list of 2D texture coordinates, one for each vertex.

All of these lists provide all the necessary information to create a procedural mesh.

Procedural Generation

Assigning each vertex individually is a time-consuming process and altogether unnecessary for this project. Since the mesh to be generated is a sphere, each vertex can be calculated using the mathematical definition of a sphere, as shown in Equation 2.1.

(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 = r^2        (2.1)

where (x, y, z) refers to three-dimensional coordinates, (x_0, y_0, z_0) is the centre of the sphere and r is the radius. Using this equation, the vertex locations can be calculated and assigned.

In computer graphics, a sphere is defined by a pair of poles, a number of horizontal rings where vertices are located and a number of vertical slices intersecting the rings. On these intersections, the vertices are placed. This can be seen in Figure 2.1.

The detail level of the sphere is determined by the number of rings and the number of slices intersecting them. In order to make the sphere look symmetrical, the number of slices should be twice that of the number of rings. The rings closest to the two poles will have all of their vertices connected to the pole vertices.


Figure 2.1: A sphere represented by triangles.

The creation of each vertex will be done one ring at a time. The vertices have to be assigned in a specific order. When creating the list of triangle indices, each triangle needs to be defined by its vertices in the proper order, in order to define which side of the triangle points out of the sphere. The rings are defined by Equation 2.2, a formula to calculate the radius of a spherical cap [4].

r_r = sqrt(2 · h · r_s - h^2)
h = r_s - p_y = 2 · r_s - ((N_s - k_r + 1) / N_s) · (2 · r_s)
N_s = N_r + 1        (2.2)

The radius of the ring is represented by r_r, while the radius of the sphere is represented by r_s. The variable h is the height of the spherical cap created by clipping the sphere at the selected ring, and p_y is the height of that ring. The variable N_s represents the number of segments created by the rings, always the number of rings, N_r, plus one. Finally, k_r indicates which ring is being worked on.

Most of the other lists can be applied at the same time. The normal vectors, the tangent vectors and the texture coordinates can be assigned simultaneously to make sure that they all correspond to the appropriate vertex.

The normal vector is calculated by drawing a vector from the centre of the sphere to the vertex being assigned. This way, the normal vector will always point out from the sphere.

The tangent calculation is done a different way. A tangent vector is defined as any vector parallel to the surface, implying that the tangent is always rotated 90° from the normal vector. This tangent vector can be computed by taking the cross product of the normal vector and any other vector not on the same line. These two vectors define a plane, the normal of which is always rotated 90° from both vectors.

v_t = v_n × v_o        (2.3)

The terms are defined as v_t, the tangent vector, v_n, the normal vector, and v_o, the other vector used. The final parameter to set at this point is the texture coordinate for the vertex.
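To make the ring-based construction above concrete, the following sketch generates vertex positions, normals and simple texture coordinates for such a sphere in plain C++. It is a minimal illustration of Equations 2.1 and 2.2, not the thesis implementation; the struct names and the exact ring spacing are assumptions made for this example, and the pole vertices and the triangle index list are omitted.

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

struct SphereMesh {
    std::vector<Vec3> vertices;
    std::vector<Vec3> normals;
    std::vector<Vec2> uvs;
};

// Build a UV sphere one ring at a time; slices = 2 * rings keeps it symmetrical.
SphereMesh buildSphere(float radius, int rings, int slices) {
    SphereMesh m;
    const float PI = 3.14159265f;
    for (int kr = 1; kr <= rings; ++kr) {
        // Height of the spherical cap cut off at this ring (cf. Equation 2.2).
        float h = 2.0f * radius * kr / (rings + 1);
        float py = radius - h;                             // y coordinate of the ring
        float rr = std::sqrt(2.0f * h * radius - h * h);   // ring radius
        for (int ks = 0; ks < slices; ++ks) {
            float phi = 2.0f * PI * ks / slices;
            Vec3 p { rr * std::cos(phi), py, rr * std::sin(phi) };
            m.vertices.push_back(p);
            // The normal points from the centre of the sphere through the vertex.
            float len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
            m.normals.push_back({ p.x / len, p.y / len, p.z / len });
            // Simple spherical texture coordinates.
            m.uvs.push_back({ (float)ks / slices, (float)kr / (rings + 1) });
        }
    }
    return m;
}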

Texture Mapping

Textures are saved in a strictly square format, usually in an image file whose size is a power of 2. Usually this does not present a problem when mapping the texture to a flat surface, but it gets considerably more difficult for more complex shapes. The sphere has always been a problem, due to the inevitable distortions created when mapping flat textures to a curved surface.

The way that texture mapping works in Unreal Engine 4 is that each vertex represents a point on the texture being applied. In order to apply a texture onto an entire object, all polygons need to be mapped onto the texture with texture coordinates, a process called UV mapping [5]. Since the application requires painting, all polygons need to be attached to each other in the UV map in order to maintain a consistent stroke. Otherwise, the stroke will be cut off when close to an edge in the UV mapped texture. However, since this is a sphere, this is not a possibility. There is no way to UV map a sphere to a texture in a way that allows for simple continuous painting across the entire surface. Examples can be seen in Figure 2.2.

(a) Single side unwrap of a sphere

(b) Both sides of a sphere unwrapped, disconnected from each other.

(c) Complex unwrap, automatically generated by Blender.

Figure 2.2: Possible UV unwraps of a sphere.

Another feature evident from these UV mappings is that polygons near the poles are smaller than polygons near the edges. Unless the texture used compensates for this, it will result in distortion near these areas, where the texture is warped. This kind of distortion is called pincushion distortion.

Both of these issues must be considered for the application of texture coordinates. Most textures are applied to spheres using spherical mapping, a method that calculates the texture coordinates for each vertex by converting their positions from Cartesian coordinates to polar coordinates. The resulting texture map places the poles at the top and bottom of the texture, with all other polygons in between. While this approach produces a visually superior result, it is a difficult approach for the purposes of this project. Painting at the poles themselves is a difficult task, as all triangles at these locations are spread far apart from each other in the texture. The brush only affects a small subset of triangles this way. A side-by-side comparison of the single-side approach and the spherical mapping can be seen in Figure 2.3.

Attempting to compensate for the standard mapping would be difficult, as in order to paint around the pole itself, the brush would have to expand its width to cover all triangles at the top of the texture. This means that in order for the brush to look uniform as it paints on the sphere, it would have to adapt its width depending on where on the sphere it was painting. Even then, the result may not look especially pleasing.


(a) Single-side mapping. The texture is mirrored on both sides.

(b) Spherical mapping. The standard format used in most programs.

Figure 2.3: Mapping comparisons between the standard and custom formats.

A More Complex Sphere

With some of the issues brought up in the previous sections in mind, it becomes clear that the regular type of sphere has some problems when used for painting. Since the size of each polygon is not uniform, especially not at the poles, the brush size would not stay constant across the entire sphere. However, there exist alternative forms of spheres with different kinds of vertex mapping. These alternatives attempt to deal with these issues by removing the poles altogether. The example examined here is the cube sphere, illustrated in Figure 2.4.

Figure 2.4: Sphere with cubic vertex mapping.

Instead of a central pole at the top and the bottom, this alternative sphere maps its vertices in a way that mimics a cube. The corners of the cube are where the sides are joined, with the polygons resembling squares instead of triangles. The resulting UV unwrap has a much more uniform size across the polygons, reducing the amount of distortion. This can be seen in Figure 2.5.

Figure 2.5: Cubesphere UVW mapping.

While this type of sphere is better suited, it is also harder to generate procedurally. Since the vertices do not follow a simple ring pattern, placement is more difficult. There is a workaround, however: if a cube is created with a sufficient number of vertices, its vertices can be displaced, creating a sphere with the correct vertex mapping. For this to work, the cube must be refined with several iterations of mesh subdivision [6].

Mesh Subdivision

Mesh subdivision is a method used to easily increase the level of detail of any mesh, regardless of shape. This is done by subdividing existing polygons into smaller ones. If all triangles are divided equally, a single triangle turns into four new triangles. The effect of this can be seen in Figure 2.6, with new vertices inside a triangle shown in green on the right side.

Figure 2.6: Triangle refinement grid. Left side is before subdivision, right side is after.

The simplest subdivision schemes leave it at that, creating more polygons and leaving the shape of the mesh unchanged. While this method could indeed be used, since all vertices will be displaced onto a sphere anyway, the resulting vertex placement on the sphere may not be optimal. A comparison is made in Figure 2.7, where a simple subdivision algorithm results in greater distortion along the lines that used to be corners. The complex subdivision algorithm displayed on the right side is a better option for painting in this case.

More complex subdivision schemes not only add more vertices to the mesh, they also transform existing vertices in order to create a smoother surface. Multiple different subdivision algorithms have been implemented over the years, some working with splines and others with triangles. Since the procedural mesh is triangle-based, it is best to utilise a triangle-based subdivision algorithm.


(a) Simple subdivision, vertices are not moved with each iteration.

(b) Complex subdivision, vertices are moved with each iteration.

Figure 2.7: Types of subdivision.

Such an algorithm was proposed by C. Loop [6]. This method can be used to smooth an arbitrary triangle-based surface, which is what is needed here. The algorithm consists of two parts: adding new vertices to the mesh along the edges of each triangle, and moving the old vertices to create a smoother surface. The new positions are calculated using weighted averages of previous locations. These weighted averages are calculated before any new points are added. The value of the weights depends on the number of connecting edges of the vertex, the so-called valence value. The general idea behind it is illustrated in Figure 2.8.

Figure 2.8: Vertex position weights.

There are two different cases displayed here: either a point is an interior point (top left and right), or a boundary point (bottom left and right). The left side displays new points in green, while the right side displays the calculated weight values for existing points. If the mesh being subdivided is a so-called manifold surface, with no holes in it, all vertices are interior points. If this is not the case, the boundary rules have to be considered. Since the subdivision is to be used on a cube, the mesh is a manifold surface and only the interior rules will be applied.

The value of β is calculated using the following equation, where k is the valence value of the vertex:

β = 3 / (8k)   if k > 3
β = 3 / 16     if k = 3        (2.4)

Cube To Sphere

With a subdivided cube, all that is left is to displace the cube's vertices to create a sphere. The centre of the cube is used as the centre of the sphere. For each vertex, a vector is created pointing from the centre to the vertex. The vertex is then moved to the point on this vector where the distance between the vertex and the centre equals the radius of the sphere. This process can be seen in Equation 2.5.

V_CtPo = P_o - C
P_n = (V_CtPo / |V_CtPo|) · r        (2.5)

An explanation of the terms is as follows: V_CtPo is the vector going from the centre to the old point on the cube. As such, P_o is the old point and C is the centre point. P_n is the new point, and r is the radius of the new sphere; the vector is normalised before being scaled by the radius.
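As a small illustration of the displacement step in Equation 2.5, the sketch below projects the vertices of an already subdivided cube onto a sphere in plain C++. It assumes the Loop subdivision (with the β weights of Equation 2.4) has already been applied; the names are chosen for this example and it is not the engine code used in the thesis.

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Push every vertex along the direction from the centre until its distance to
// the centre equals the sphere radius (Equation 2.5).
void cubeToSphere(std::vector<Vec3>& vertices, const Vec3& centre, float radius) {
    for (Vec3& v : vertices) {
        Vec3 dir { v.x - centre.x, v.y - centre.y, v.z - centre.z };
        float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
        if (len == 0.0f) continue;               // skip a degenerate vertex at the centre
        v.x = centre.x + dir.x / len * radius;
        v.y = centre.y + dir.y / len * radius;
        v.z = centre.z + dir.z / len * radius;
    }
}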

2.2

Painting Theory

In order to implement more advanced features than simple painting, brought up in the third question, more research is required for various other features and tools. There are many advanced tools that can be implemented, some of which are listed below.

• Layers and blending modes.
• Opacity.
• Marking tools.
• Filters.

Some of these tools are easier to implement than others. Opacity is intrinsically tied with blending modes, but is also used when painting normally, when the user wants to apply a lighter overlay of colour instead of replacing whatever lies beneath.

Layers and Blend Modes

Whenever different elements need to be separated from each other in a picture, layers are used. Using these layers, a designer can pick and choose which parts of the image to manipulate, leaving others unaffected. Layers are implemented as separate images with unique settings that can blend with other layers to produce different effects. There are multiple types of blend modes [7] that can be used, picked at will by the designer.

The resulting image depends on the blend function that is used, described here as B(C_b, C_s) = C_r. In this blend function, the term C_b represents the backdrop layer, C_s represents the source layer and C_r represents the result. This means that the source layer is situated above the backdrop. The various kinds of blend functions are described below.

Normal

The normal blend function simply selects the source colour as the result, as displayed in Equation 2.6.

B(c_b, c_s) = c_s        (2.6)

Assuming that the opacity value for the layer is at 100%, the displayed colour will always be picked from the source layer. If this is not the case, Equation 2.7 is used instead. The resulting value is calculated depending on the opacity of the two layers, known as the α value. This method is known as alpha compositing [8].

B(c_b, c_s) = alpha(c_s, c_b)
alpha(c_s, c_b) = c_s(α) · c_s + (1 - c_s(α)) · c_b(α) · c_b        (2.7)

where c_s(α) and c_b(α) denote the opacity values of the source and backdrop colours. The effect of this blending mode can be seen in Figure 2.9.


(a) With a solid red background. (b) With a blue gradient background.

Figure 2.9: Two different examples of the normal blending mode.

Multiply

This blend function multiplies the backdrop and source colour values.

B(c_b, c_s) = c_b × c_s        (2.8)

The resulting colour is always at least as dark as one of the two layers. Multiplication with black produces black, and multiplication with white leaves the colour unchanged. Multiplication with any other colour results in a darker colour. The effect of this blending mode can be seen in Figure 2.10.

(a) With a solid red background. (b) With a blue gradient background.

Figure 2.10: Two different examples of the multiply blending mode.

Screen

This blend function multiplies the complements of the backdrop and source colour values, and then complements the result.

B(c_b, c_s) = 1 - [(1 - c_b) × (1 - c_s)] = c_b + c_s - (c_b × c_s)        (2.9)

This blend function has the opposite effect compared to Multiply, in that the result is at least as light as either of the two used layers. Screening with white always produces a white result, and screening with black leaves the colour unchanged. The effect of this blending mode can be seen in Figure 2.11.


(a) With a solid red background. (b) With a blue gradient background.

Figure 2.11: Two different examples of the screen blending mode.

Overlay

This blend function uses either Multiply or Screen, depending on the colour value of the backdrop. Source colours overlay the backdrop while preserving its highlights and shadows. Therefore, the result is mixed with the source colour to reflect the luminosity of the backdrop.

B(c_b, c_s) = HardLight(c_s, c_b)        (2.10)

The effect of this blending mode can be seen in Figure 2.12.

(a) With a solid red background. (b) With a blue gradient background.

Figure 2.12: Two different examples of the overlay blending mode.

Darken

This blend function selects the darker colour when comparing the source with the backdrop.

B(c_b, c_s) = min(c_b, c_s)        (2.11)

If the source is darker, the backdrop is replaced. Otherwise, the result is unchanged. The effect of this blending mode can be seen in Figure 2.13.


(a) With a solid red background. (b) With a blue gradient background.

Figure 2.13: Two different examples of the darken blending mode.

Lighten

This blend function selects the lighter colour when comparing the source with the backdrop.

B(c_b, c_s) = max(c_b, c_s)        (2.12)

If the source is lighter, the backdrop is replaced. Otherwise, the result is unchanged. The effect of this blending mode can be seen in Figure 2.14.

(a) With a solid red background. (b) With a blue gradient background.

Figure 2.14: Two different examples of the lighten blending mode.

Colour dodge

This blend function brightens the backdrop to reflect the source layer. Painting with black produces no change.

B(c_b, c_s) = min(1, c_b / (1 - c_s))   if c_s < 1
B(c_b, c_s) = 1                          if c_s = 1        (2.13)

The effect of this blending mode can be seen in Figure 2.15.


(a) With a solid red background. (b) With a blue gradient background.

Figure 2.15: Two different examples of the colour dodge blending mode.

Colour burn

This blend function darkens the backdrop colour to reflect the source colour. Painting with white produces no change.

B(c_b, c_s) = 1 - min(1, (1 - c_b) / c_s)   if c_s > 0
B(c_b, c_s) = 0                              if c_s = 0        (2.14)

The effect of this blending mode can be seen in Figure 2.16.

(a) With a solid red background. (b) With a blue gradient background.

Figure 2.16: Two different examples of the colour burn blending mode.

Hard light

This blend function Multiplies or Screens the colours, depending on the source colour value.

B(c_b, c_s) = Multiply(c_b, 2 × c_s)       if c_s ≤ 0.5
B(c_b, c_s) = Screen(c_b, 2 × c_s - 1)     if c_s > 0.5        (2.15)

The effect of this blending mode can be seen in Figure 2.17.


(a) With a solid red background. (b) With a blue gradient background.

Figure 2.17: Two different examples of the hard light blending mode.

Soft light

This blend function Darkens or Lightens the colours, depending on the source colour value.

B(c_b, c_s) = c_b - (1 - 2 × c_s) × c_b × (1 - c_b)    if c_s ≤ 0.5
B(c_b, c_s) = c_b + (2 × c_s - 1) × (D(c_b) - c_b)     if c_s > 0.5

where D(x) = ((16 × x - 12) × x + 4) × x   if x ≤ 0.25
      D(x) = sqrt(x)                        if x > 0.25        (2.16)

The effect of this blending mode can be seen in Figure 2.18.

(a) With a solid red background. (b) With a blue gradient background.

Figure 2.18: Two different examples of the soft light blending mode.

Difference

This blend function subtracts the darker of the two layer colours from the lighter colour.

B(c_b, c_s) = |c_b - c_s|        (2.17)

Painting with black produces no change, while painting with white inverts the colour of the backdrop. The effect of this blending mode can be seen in Figure 2.19.


(a) With a solid red background. (b) With a blue gradient background.

Figure 2.19: Two different examples of the difference blending mode.

Exclusion

This blend function produces a similar effect to that of Difference, but lower in contrast. Painting with black produces no change, while painting with white inverts the colour of the backdrop.

B(c_b, c_s) = c_b + c_s - 2 × c_b × c_s        (2.18)

The effect of this blending mode can be seen in Figure 2.20.

(a) With a solid red background. (b) With a blue gradient background.

Figure 2.20: Two different examples of the exclusion blending mode.
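Since all of the blend functions above operate per colour channel on values normalised to [0, 1], several of them can be gathered into a single dispatch function. The sketch below is an illustrative C++ example covering Equations 2.6, 2.8, 2.9, 2.11, 2.12, 2.15, 2.17 and 2.18; the enum and function names are chosen for this sketch and are not taken from the thesis implementation.

#include <algorithm>
#include <cmath>

enum class BlendMode { Normal, Multiply, Screen, Darken, Lighten, HardLight, Difference, Exclusion };

// Blend one colour channel; cb is the backdrop value, cs the source value, both in [0, 1].
float blend(BlendMode mode, float cb, float cs) {
    switch (mode) {
        case BlendMode::Normal:     return cs;                              // Eq. 2.6
        case BlendMode::Multiply:   return cb * cs;                         // Eq. 2.8
        case BlendMode::Screen:     return cb + cs - cb * cs;               // Eq. 2.9
        case BlendMode::Darken:     return std::min(cb, cs);                // Eq. 2.11
        case BlendMode::Lighten:    return std::max(cb, cs);                // Eq. 2.12
        case BlendMode::HardLight:                                          // Eq. 2.15
            return cs <= 0.5f ? cb * (2.0f * cs)
                              : cb + (2.0f * cs - 1.0f) - cb * (2.0f * cs - 1.0f);
        case BlendMode::Difference: return std::fabs(cb - cs);              // Eq. 2.17
        case BlendMode::Exclusion:  return cb + cs - 2.0f * cb * cs;        // Eq. 2.18
    }
    return cs;
}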

Marking tools

The marking tool is a simple concept. The designer marks a section of the image that they want to paint on. If they try to paint or manipulate any part of the image outside the marked area, nothing happens. This way, the designer can focus on a section of the image without fear of accidentally applying something to another part of the image.

There are four main tools used for marking, with some variations. These are as follows:

• Rectangular Selection
• Elliptical Selection


• Lasso Selection
• Magic Wand Selection

Most of these tools are rather simple. The user drags a simple geometric shape on the image, and all pixels inside are selected. The Rectangular and Elliptical selection tools can be used to quickly apply this to the image. The lasso tool is used to draw a more complex shape on the image, selecting all pixels inside when the shape is completed. Examples of these can be seen in Figure 2.21.

(a) Marking with the rectangular marking tool.

(b) Marking with the elliptical marking tool.

(c) Marking with the lasso marking tool.

Figure 2.21: Illustrations of the simple marking tools.

Magic Wand Selection

Magic Wand selection selects an area of the image based on the colour of the point picked by the designer. The algorithm examines each neighbouring pixel and compares its colour to that of the root pixel. If the colour difference lies below a certain threshold, the neighbour is selected. This process spreads outwards, until neighbours are found that lie outside of the threshold, creating a border where neighbours are no longer selected. This is illustrated in Figure 2.22.

(a) Marking with the wand, using a threshold of 200.

(b) Marking with the wand, using a threshold of 100.

(c) Marking with the wand, using a threshold of 50.

(d) Marking with the wand, using a threshold of 10.

Figure 2.22: Illustrations of the wand marking tool, using different threshold values.

This selection tool requires an algorithm for comparing colours between pixels. While a colour distance can be calculated between RGB values, this is not an accurate measurement. The RGB colour space does not properly reflect how humans perceive colour, and while thresholding using this method is simple, the result may be unpredictable. Alternatives exist, in the form of different colour spaces.
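The magic wand itself can be sketched as a breadth-first flood fill, as below. The colour-distance function is passed in as a parameter so that either a simple RGB distance or the CieLAB distance described in the next subsection can be used; the image representation and the names are assumptions made for this example.

#include <cstdint>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

struct Pixel { std::uint8_t r, g, b, a; };

// Select all pixels connected to (startX, startY) whose colour distance to the
// clicked (root) pixel lies below the threshold. Returns a per-pixel mask.
std::vector<bool> magicWand(const std::vector<Pixel>& image, int width, int height,
                            int startX, int startY, float threshold,
                            const std::function<float(const Pixel&, const Pixel&)>& distance) {
    std::vector<bool> selected(image.size(), false);
    std::queue<std::pair<int, int>> frontier;
    const Pixel root = image[startY * width + startX];
    selected[startY * width + startX] = true;
    frontier.push({startX, startY});
    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};
    while (!frontier.empty()) {
        auto [x, y] = frontier.front();
        frontier.pop();
        for (int i = 0; i < 4; ++i) {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            int idx = ny * width + nx;
            if (selected[idx]) continue;
            if (distance(root, image[idx]) < threshold) {   // compare against the root colour
                selected[idx] = true;
                frontier.push({nx, ny});
            }
        }
    }
    return selected;
}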


Colour spaces

There are two main colour space alternatives to RGB: one is the CIE 1931 (CieXYZ) colour space, and the other is the CIE L*a*b* (CieLAB) colour space [9]. CieXYZ was created from a series of experiments to determine the links between light wavelength distribution and human colour perception. CieLAB was created as a uniform colour space, designed to measure the differences in colours. It is device independent, and can therefore be used on any device. The optimal way to compare colour distance is with the use of CieLAB, due to its uniformity. While there is no direct conversion from RGB to CieLAB colour coordinates, there is a conversion from CieXYZ to CieLAB. Therefore, in order to compare a colour with the threshold, the RGB value must first be converted to CieXYZ, and then to CieLAB. The algorithm for this can be found in Equations 2.19 and 2.20.

v_rgb = [r, g, b]

m_rgb2xyz = | 0.412453  0.357580  0.180423 |
            | 0.212671  0.715160  0.072169 |
            | 0.019334  0.119193  0.950227 |

v_xyz = m_rgb2xyz × v_rgb = [x, y, z]        (2.19)

The matrix used for Equation 2.19 depends on the RGB working space being used. This matrix is taken from the sRGB colour working space [10].

v_xyz = [x, y, z]
D65 = [0.950456, 1.0, 1.088754]
v_xyzN = v_xyz / D65 = [x_n, y_n, z_n]
v_exp = [x_n^(1/3), y_n^(1/3), z_n^(1/3)] = [x_exp, y_exp, z_exp]
T = 0.008856

L = 116 · y_exp - 16   if y_exp > T
L = 903.3 · y_n        if y_exp ≤ T

v_xyzF = [7.787 · x_n + 16/116, 7.787 · y_n + 16/116, 7.787 · z_n + 16/116] = [x_f, y_f, z_f]

a = 500 · (x_exp - y_exp)   if x_n > T, y_n > T
a = 500 · (x_f - y_exp)     if x_n ≤ T, y_n > T
a = 500 · (x_exp - y_f)     if x_n > T, y_n ≤ T
a = 500 · (x_f - y_f)       if x_n ≤ T, y_n ≤ T

b = 200 · (y_exp - z_exp)   if y_n > T, z_n > T
b = 200 · (y_f - z_exp)     if y_n ≤ T, z_n > T
b = 200 · (y_exp - z_f)     if y_n > T, z_n ≤ T
b = 200 · (y_f - z_f)       if y_n ≤ T, z_n ≤ T

v_Lab = [L, a, b]        (2.20)

The second conversion involves many more steps than the first. The first step is to normalize the XYZ values with a given whitepoint. In this case, D65 is used, which represents normal daylight at noon. The variable T is used as a threshold, controlling whether the calculation uses values from v_exp, v_xyzN or v_xyzF. After the CieLAB values have been calculated, the colour distance is compared with a threshold set by the user, and this determines whether a pixel is selected or not. The colour distance is calculated using Equation 2.21.

f(L_p, a_p, b_p, L_t, a_t, b_t) = sqrt((L_p - L_t)^2 + (a_p - a_t)^2 + (b_p - b_t)^2)        (2.21)

The variables with subscript p are the values from the pixel being tested, while the variables with subscript t are the values from the colour picked by the magic wand.
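As a compact illustration of Equations 2.19 to 2.21, the sketch below converts a linear RGB colour in [0, 1] to CieXYZ and then to CieLAB using the D65 whitepoint, and computes the distance between two CieLAB colours. It is an example that closely follows the equations above, not the thesis code.

#include <cmath>

struct Lab { float L, a, b; };

// Convert a linear RGB colour in [0, 1] to CieLAB (D65 whitepoint).
Lab rgbToLab(float r, float g, float b) {
    // RGB -> XYZ with the sRGB working-space matrix (Equation 2.19).
    float x = 0.412453f * r + 0.357580f * g + 0.180423f * b;
    float y = 0.212671f * r + 0.715160f * g + 0.072169f * b;
    float z = 0.019334f * r + 0.119193f * g + 0.950227f * b;
    // Normalise by the D65 whitepoint (Equation 2.20).
    float xn = x / 0.950456f, yn = y / 1.0f, zn = z / 1.088754f;
    const float T = 0.008856f;
    // Cube root above the threshold T, linear segment below it.
    auto f = [T](float t) { return t > T ? std::cbrt(t) : 7.787f * t + 16.0f / 116.0f; };
    float fx = f(xn), fy = f(yn), fz = f(zn);
    Lab lab;
    lab.L = yn > T ? 116.0f * std::cbrt(yn) - 16.0f : 903.3f * yn;
    lab.a = 500.0f * (fx - fy);
    lab.b = 200.0f * (fy - fz);
    return lab;
}

// Colour distance used against the magic-wand threshold (Equation 2.21).
float labDistance(const Lab& p, const Lab& t) {
    float dL = p.L - t.L, da = p.a - t.a, db = p.b - t.b;
    return std::sqrt(dL * dL + da * da + db * db);
}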

Filters

Filters are image processing methods that can apply specific effects to an entire image. This is done by applying a filter kernel that moves across the entire image. The effect of this depends on the composition of the kernel. Examples of filters can be found below.

Gaussian Blur

Gaussian blur is a lowpass filter that applies a Gaussian function to the filter kernel, with the aim of blurring or smoothing an image. It is a popular method to reduce image noise. There exist two ways of implementing the filter: either in one dimension as a line kernel, or in two dimensions as a box. The function applied in one dimension can be seen in Equation 2.22, while the function applied in two dimensions can be seen in Equation 2.23.

G(x) = (1 / sqrt(2πσ^2)) · e^(-x^2 / (2σ^2))        (2.22)

G(x, y) = (1 / (2πσ^2)) · e^(-(x^2 + y^2) / (2σ^2))        (2.23)

The x and y coordinates represent the coordinates relative to the filter kernel midpoint. Thus, G(0, 0) would be in the middle of the filter kernel. The value of σ is the standard deviation, which controls the width of the kernel, with higher values resulting in greater amounts of blur. The function is then applied to the filter kernel. A default size often used is a 7x7 filter kernel, an example of which is found in Equation 2.24.

| G(-3, 3)  G(-2, 3)  G(-1, 3)  G(0, 3)  G(1, 3)  G(2, 3)  G(3, 3)  |
| G(-3, 2)  G(-2, 2)  G(-1, 2)  G(0, 2)  G(1, 2)  G(2, 2)  G(3, 2)  |
| G(-3, 1)  G(-2, 1)  G(-1, 1)  G(0, 1)  G(1, 1)  G(2, 1)  G(3, 1)  |
| G(-3, 0)  G(-2, 0)  G(-1, 0)  G(0, 0)  G(1, 0)  G(2, 0)  G(3, 0)  |
| G(-3, -1) G(-2, -1) G(-1, -1) G(0, -1) G(1, -1) G(2, -1) G(3, -1) |
| G(-3, -2) G(-2, -2) G(-1, -2) G(0, -2) G(1, -2) G(2, -2) G(3, -2) |
| G(-3, -3) G(-2, -3) G(-1, -3) G(0, -3) G(1, -3) G(2, -3) G(3, -3) |        (2.24)

The usual method of applying the Gaussian kernel is to use two separate passes of the one-dimensional kernel, one vertical and one horizontal. Despite having to make two passes over the image, this method still requires fewer calculations compared to the 2D kernel. An example of the Gaussian blur operation can be seen in Figure 2.23.
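The separable application mentioned above can be sketched as two passes with the one-dimensional kernel of Equation 2.22, one horizontal and one vertical. The example below works on a single-channel image with simple edge clamping and is written for clarity rather than performance.

#include <cmath>
#include <vector>

// Build a 1D Gaussian kernel of the given radius (kernel size = 2 * radius + 1).
std::vector<float> gaussianKernel(int radius, float sigma) {
    std::vector<float> k(2 * radius + 1);
    float sum = 0.0f;
    for (int i = -radius; i <= radius; ++i) {
        k[i + radius] = std::exp(-(i * i) / (2.0f * sigma * sigma));
        sum += k[i + radius];
    }
    for (float& w : k) w /= sum;                  // normalise so the weights sum to 1
    return k;
}

// Apply the kernel horizontally, then vertically (two separable passes).
std::vector<float> gaussianBlur(const std::vector<float>& img, int w, int h, int radius, float sigma) {
    std::vector<float> kernel = gaussianKernel(radius, sigma);
    std::vector<float> tmp(img.size()), out(img.size());
    auto clampi = [](int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float s = 0.0f;
            for (int i = -radius; i <= radius; ++i)
                s += kernel[i + radius] * img[y * w + clampi(x + i, 0, w - 1)];
            tmp[y * w + x] = s;                   // horizontal pass
        }
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float s = 0.0f;
            for (int i = -radius; i <= radius; ++i)
                s += kernel[i + radius] * tmp[clampi(y + i, 0, h - 1) * w + x];
            out[y * w + x] = s;                   // vertical pass
        }
    return out;
}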

Laplacian of Gaussian

A filter with the opposite effect of the Gaussian filter is the so-called Laplacian of Gaussian filter. Instead of blurring the image, this filter type sharpens the edges of an image, making them more distinct. The filter is divided into two different parts: the Laplacian and the Gaussian blur. The Laplacian filter is an edge detection filter, and is very sensitive on its own. This can result in minor edges that the user does not care about being enhanced.


(a) Image with no filter operations performed.

(b) Image with a Gaussian blur filter operation performed.

Figure 2.23: A before and after comparison of a Gaussian blur operation.

Therefore, a Gaussian blur is applied to the image first, to smooth out all minor edges and leave only the significant ones.

The Laplacian of a function is defined as the divergence of its gradient ∇ [11]. Mathematically, the gradient operator is applied twice to the function to accomplish this, ∇ · ∇ = ∆ [12]. The calculation for these two functions can be found below.

∇ · f(x, y) = ∂/∂x f(x, y) + ∂/∂y f(x, y)

∇ · ∇ f(x, y) = ∆ f(x, y) = ∂²/∂x² f(x, y) + ∂²/∂y² f(x, y)        (2.25)

Using the formulae in Equation 2.25, the Laplacian of the Gaussian can be calculated. The Gaussian from Equation 2.23 is used. The resulting equation can then be applied to a filter kernel in a similar manner as in Equation 2.24.

∆ G(x, y) = ∂²/∂x² G(x, y) + ∂²/∂y² G(x, y) = ((x^2 + y^2 - 2σ^2) / σ^4) · e^(-(x^2 + y^2) / (2σ^2))        (2.26)

The result of the LoG filter is not a sharpened image, however. The edge detection filter produces an image highlighting all edges in the image. In order to sharpen the original image, the values produced by the edge detection filter must be subtracted from the original. The resulting image is sharpened. The effect of a sharpening filter can be seen in Figure 2.24.

(a) Image with no filter operations performed.

(b) Image with a sharpen filter operation performed.

Figure 2.24: A before and after comparison of a sharpen operation.


Chapter 3

Execution

With the background behind the application explained, the implementation process can begin. This chapter provides an overview of the implementation within the engine itself, discussing the various parts brought up in Chapter 2, and how they are implemented. These parts include the implementation of the virtual dome surface, navigation in the virtual environment, texture mapping, painting and other sketching controls.

3.1

Unreal Engine 4

When working within the engine, there are two main ways to implement functionality. One way is working with code directly, using C++ to create new functionality. Since many classes and objects already exist within the engine, it is easy to inherit functionality and start from a code base. While this method is very flexible, as the developer can create whatever functionality they require, it is a slow approach that requires time to implement new features.

The other method uses the existing scripting language of the engine, called Blueprints. This method is constrained to already existing methods within the engine, but is far easier and faster to work with. This means that new prototypes of the application can be created at a much faster rate compared to raw code. An example of this system can be seen in Figure 3.1.

Figure 3.1: Blueprint nodes. When the L key is pressed, the program executes according to the nodes connected to the event.

In the end, a mix of both approaches is used. It is possible to create functions in code that can be called from Blueprint nodes. Therefore, mixing the two provides both the flexibility and the speed required for creating a new application.

All non-character objects in the scene will be implemented as Actors. Actors are inherent to Unreal Engine 4, and are used to create objects with custom functionality. This is done either by creating a Blueprint Actor or by creating an entirely new class inheriting from the Actor superclass. Due to the need to create new features, the second of these options is chosen.

The user will be placed in the scene as a custom Pawn character. This type of object is used to create player- or AI-controlled characters that can interact with the environment. Using this custom Pawn character, custom controls can be implemented, allowing the player to use all of the custom functionality in the Actors present.

3.2

Procedural Mesh Generation

Procedural mesh generation is done in Unreal Engine 4 using the ProceduralMeshComponent plugin [13]. This plugin provides the functionality necessary to create a procedural mesh. The main function used to create a new mesh is the CreateMeshSection function [14]. This function takes the lists of parameters displayed previously in Section 2.1.

In order to create these lists, a class to generate them must be created. This class will be called Spheroid, and will be responsible for all mesh-related mathematics. This class provides the lists using the equations derived in Section 2.1, Equations 2.1 to 2.5. The main exception is the list of vertex colours, as this parameter is unnecessary when a material is applied.
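To illustrate how the generated lists are handed to the engine, the sketch below calls CreateMeshSection on the procedural mesh component. The actor class and the Spheroid::Generate signature are assumptions made for this example; only the CreateMeshSection call itself corresponds to the plugin interface [14].

// Sketch of feeding the generated geometry to a UProceduralMeshComponent (UE4).
// Assumes Mesh is a UProceduralMeshComponent* created in the actor's constructor
// and Spheroid::Generate fills the arrays according to Equations 2.1-2.5.
#include "ProceduralMeshComponent.h"

void ADomeActor::BuildDome(float Radius, int32 Subdivisions)
{
    TArray<FVector> Vertices;
    TArray<int32> Triangles;
    TArray<FVector> Normals;
    TArray<FVector2D> UV0;
    TArray<FProcMeshTangent> Tangents;
    TArray<FColor> VertexColors;              // left empty, a material is used instead

    Spheroid::Generate(Radius, Subdivisions, Vertices, Triangles, Normals, UV0, Tangents);

    // Section index 0; collision is left disabled in this sketch.
    Mesh->CreateMeshSection(0, Vertices, Triangles, Normals, UV0,
                            VertexColors, Tangents, /*bCreateCollision=*/false);
}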

3.3

Dynamic Textures and Materials

With the mesh completed and implemented, the next step is to apply a dynamic texture that the user can paint on [15]. There are multiple ways to apply a material to a mesh in Unreal Engine 4, but only one way to apply a material that is modifiable. This is done with the MaterialInstanceDynamic object [16]. This is an instance created from a material asset template that can be modified on its own, with the template remaining unaffected. The material template used can be seen in Figure 3.2, defined using Blueprint scripting.

Figure 3.2: Material template. The resulting material is interpolated depending on the Alpha value of the lower texture parameter.

There are a variety of ways to create and apply one of these dynamic instances to a mesh. The method used here mixes two variants, applying it through both C++ and Blueprints. Using Blueprints, a new instance can be created targeting the mesh directly, ensuring that the instance is applied correctly to it. Then, the new instance is passed to the code as a pointer, making sure that the various functions necessary for painting can access it properly.

With the instance now accessible in the code, the variables necessary for painting can be created as well. The first of these is a structure for the texture. The texture structure, called Texture2D, has access to an internal function used for changing the colour values of pixels. This function is called UpdateTextureRegions [17], and it will be used for the painting function. The second structure required is a texture region structure, which determines what part of the texture will be accessed when it is to be updated. Due to the way the UV coordinates have been mapped, only one region is necessary, one covering the entire texture.

Finally, an array of texture colours is required. This array will be accessed when assigning new colours to the texture. The painting function writes new values into this array, which is then passed as a parameter to the UpdateTextureRegions [17] function.

Layers

While the structures tied to the mesh itself will be used for the painting, they are not tied directly to the layers themselves. Instead, these are kept separate in the code, as only one texture can be tied to the mesh itself and displayed on it. Therefore, the painting function will make use of the layers first, and then calculate the result from them and copy it to the mesh texture.

The layers have a separate set of properties. These are listed below.

• A separate dynamic material instance, using the same template as the mesh.
• A texture structure, tied to the texture parameter of the separate instance.
• A unique array of texture colours.
• A unique layer ID, to identify its location in the layer structure.
• A unique opacity value for the layer.
• A visibility modifier, to determine if it is to be displayed at all.
• A variable defining which blend mode is being used.

Half of these properties exist to mimic the material being used on the mesh, so that they can be updated themselves. Both the material instance and the texture structure can be used to update the displayed image in the layer menu. The blend mode of the layer is used during the final calculation of the resulting colour.
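A layer holding the properties listed above could be represented by a small record such as the following sketch. The names and types are illustrative (Unreal Engine 4 types are used for familiarity) and do not necessarily match the thesis implementation.

// Illustrative layer record matching the properties listed above.
struct FPaintLayer
{
    UMaterialInstanceDynamic* MaterialInstance = nullptr; // preview material for the layer menu
    UTexture2D* Texture = nullptr;                        // texture bound to that instance
    TArray<FColor> Pixels;                                // the layer's own colour array
    int32 LayerId = 0;                                    // position in the layer stack
    float Opacity = 1.0f;                                 // layer opacity in [0, 1]
    bool bVisible = true;                                 // whether the layer is composited
    uint8 BlendMode = 0;                                  // index into the blend modes of Section 2.2
};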

Paint Region

The painting functionality is divided into three parts. The first part applies new colour to the texture, by replacing values in the array of texture colours. This is done using Equation 2.7, comparing the current colour as the backdrop with the brush colour as the source. The area of pixels affected depends on the current brush size, acting as the brush radius. In order to create a circular brush, the pixels are selected based on their Euclidean distance to the picked texture coordinate. This painting function only affects the currently selected layer.
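The first part can be sketched as a brush-stamping function like the one below, which writes into the colour array of one layer using the compositing of Equation 2.7. It reuses the illustrative FPaintLayer record from above and is an example, not the exact thesis code.

// Stamp a circular brush of the given radius (in pixels) at texture coordinate
// (U, V) on one layer, blending the brush colour over the existing pixels.
void StampBrush(FPaintLayer& Layer, int32 TexWidth, int32 TexHeight,
                float U, float V, int32 Radius, FLinearColor Brush, float BrushAlpha)
{
    const int32 Cx = FMath::RoundToInt(U * TexWidth);
    const int32 Cy = FMath::RoundToInt(V * TexHeight);
    for (int32 Y = Cy - Radius; Y <= Cy + Radius; ++Y)
    {
        for (int32 X = Cx - Radius; X <= Cx + Radius; ++X)
        {
            if (X < 0 || Y < 0 || X >= TexWidth || Y >= TexHeight) continue;
            const float Dist = FMath::Sqrt(float((X - Cx) * (X - Cx) + (Y - Cy) * (Y - Cy)));
            if (Dist > Radius) continue;                  // keep the brush circular
            FLinearColor Backdrop = FLinearColor(Layer.Pixels[Y * TexWidth + X]);
            // Equation 2.7: source blended over the backdrop using the brush opacity.
            FLinearColor Result = Brush * BrushAlpha + Backdrop * (1.0f - BrushAlpha);
            Layer.Pixels[Y * TexWidth + X] = Result.ToFColor(/*bSRGB=*/false);
        }
    }
}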

The second part is used to calculate the resulting colour on the mesh depending on how many layers are active. If only one layer is currently active, the mesh uses the colour values directly from that layer. If two layers are active, the resulting colour uses both layers, calculated according to the blend mode of the upper layer. If more than two layers are active, this process is repeated for all visible layers, until a final set of colours has been calculated.
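The second part can be outlined as a fold over the visible layers, from bottom to top, as sketched below. The Blend() helper is assumed to implement the blend functions of Section 2.2 per channel; as before, the names are illustrative rather than taken from the thesis code.

// Composite all visible layers, bottom to top, into the output array that is
// later pushed to the mesh texture with UpdateTextureRegions.
void CompositeLayers(const TArray<FPaintLayer>& Layers, TArray<FColor>& OutPixels)
{
    bool bFirst = true;
    for (const FPaintLayer& Layer : Layers)            // assumed ordered bottom to top
    {
        if (!Layer.bVisible) continue;
        if (bFirst)
        {
            OutPixels = Layer.Pixels;                  // lowest visible layer is used directly
            bFirst = false;
            continue;
        }
        for (int32 i = 0; i < OutPixels.Num(); ++i)
        {
            FLinearColor Backdrop(OutPixels[i]);
            FLinearColor Source(Layer.Pixels[i]);
            // Blend() applies the layer's blend mode per channel (Section 2.2),
            // then Equation 2.7 applies the layer opacity.
            FLinearColor Blended = Blend(Layer.BlendMode, Backdrop, Source);
            FLinearColor Result = Blended * Layer.Opacity + Backdrop * (1.0f - Layer.Opacity);
            OutPixels[i] = Result.ToFColor(false);
        }
    }
}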


The final part is the updating of the texture itself. This function uses the stored colour values for the mesh itself, and applies them to the texture using the UpdateTextureRegions function [17]. This function accepts a number of parameters, listed below.

• Mip Index, the mip number to update.

• Number of Regions, the number of TextureUpdateRegions supplied.
• Regions to update, a list of all TextureUpdateRegions to use.
• SrcPitch, the pitch of the source data in bytes.
• SrcBpp, the size of a single pixel in bytes.

• SrcData, an array containing all texture colours to update the texture with.

• Data Cleanup Function, a function defining how and when to delete variables to prevent memory leaks.

The first parameter defines which mip number to update. A mipmap is a series of textures with decreasing size and resolution, used to create different levels of detail as the distance to an object increases. The mip index refers to which of these textures is to be updated. No level of detail is necessary, and thus only one image is used; the index is therefore always zero. The second parameter decides how many separate regions are to be updated. If more than one is defined, the third parameter is provided as an array. The fourth parameter is calculated as the width of the texture times the number of bytes necessary to store one pixel. The number of bytes per pixel is used in the fifth parameter as well; for RGBA values it is 4. The source data array is the same one defined earlier, in Section 3.3. The final parameter is provided as a custom class with an overloaded () operator, where all texture regions are deleted after use.

The texture regions themselves may also present a solution to a problem posed in Section 2.1. A problem with using the single-side or double-side UV mapping scheme was that there was no way to paint with a proper transition between the sides of the sphere. The brush would be cut off as it reached an edge of the UV map, until it crossed over to the other side. This is not a good option for a canvas.

If the two sides of the UV map are moved to separate spots on the texture, multiple texture regions can be defined. When the brush approaches the edge, the painting function may be called twice, once for each texture region. The coordinates would be moved so that the brush hits two spots, one near the edge of the first region, and the other just outside the edge of the other texture region. The two places would correspond to the same spot on the sphere, overlap with each other and provide a clean transition for the brush.

3.4

Saving and Exporting

Finally, the application needs to be able to export the resulting texture on the dome surface. This process is in and of itself rather simple. Unreal Engine 4 can take the colour values used in the structure attached to the dome itself, and use them to create an image file of the same size. This feature can also be expanded to export separate layers, should the user wish to do that. The resulting image file is currently exported in the PNG format.

Another useful feature is the ability to save all of the work to a file, and load it at a later time to continue working on it. This process is somewhat more complicated. A customised saving system can be created that converts all of the necessary information into a binary format [18]. The information to be saved is all the data necessary to reconstruct the layers and their settings, as the mesh structure itself is based entirely on these. The extension used for these binary files is picked by the developer, and as such can be named anything. In this case, the saved binary file is designated with the .dsvrsf extension, an acronym for Dome Sketch Virtual Reality Save File.

Not all attributes of the layers can be stored easily. Pointers and advanced structures pose difficulties when attempting to convert them to binary, which makes three of the attributes contained in the layers ineligible. These are the material instance, the texture structure and the array of texture colours. The first two of these do not pose a problem, as they can be rebuilt using the rest of the data. The final dynamic array does need to be handled, though. Thus, its contents are copied into an internal array structure, not unlike a C++ vector.

With all of the necessary parameters properly stored, they can be converted into binary form. This is done using an overloaded << operator together with the engine's internal FArchive object. These archive objects are used to work with files, both for saving and loading data. The overloaded operator works both ways, ensuring that all data is always saved and loaded in the same order. If this were not the case, the program would crash [18].
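
A sketch of what such a layer structure and its serialisation operator could look like is shown below; the field names are illustrative and not the actual attributes used in the application.

    struct FLayerSaveData
    {
        FString        LayerName;
        float          Opacity;
        int32          BlendMode;
        TArray<FColor> Pixels;   // the copied colour array mentioned above

        // The same operator both saves and loads, depending on whether the
        // archive is writing or reading, which is why the order of the
        // fields must never change between saving and loading.
        friend FArchive& operator<<(FArchive& Ar, FLayerSaveData& Layer)
        {
            Ar << Layer.LayerName;
            Ar << Layer.Opacity;
            Ar << Layer.BlendMode;
            Ar << Layer.Pixels;
            return Ar;
        }
    };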

One final hurdle exists. While an arbitrary number of layers can be stored in a binary format such as this, a prepared structure is necessary for loading them again. This structure needs to contain the same number of layers as was saved previously. However, it is not possible to read from the saved binary file how many layers were stored, as the only information one can obtain is the exact size in bytes. A work-around is required.

A second, much smaller file is created, whose only purpose is to store the number of saved layers. This file is loaded first and converted back into data. With this information, the program can create a list of the correct size, into which the saved layers can be loaded. The second binary file is designated with the .dsvrc extension, an acronym for Dome Sketch Virtual Reality Count.
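
A sketch of the resulting two-file save is shown below, reusing the hypothetical FLayerSaveData type from the previous sketch; the count file is written first so that the loader can size the layer list before reading the main file.

    #include "Serialization/BufferArchive.h"
    #include "Misc/FileHelper.h"

    void SaveCanvas(TArray<FLayerSaveData>& Layers, const FString& BasePath)
    {
        // The small count file (.dsvrc) only stores the number of layers.
        int32 LayerCount = Layers.Num();
        FBufferArchive CountArchive;
        CountArchive << LayerCount;
        FFileHelper::SaveArrayToFile(CountArchive, *(BasePath + TEXT(".dsvrc")));

        // The main save file (.dsvrsf) stores the layers themselves, in order.
        FBufferArchive LayerArchive;
        for (FLayerSaveData& Layer : Layers)
        {
            LayerArchive << Layer;
        }
        FFileHelper::SaveArrayToFile(LayerArchive, *(BasePath + TEXT(".dsvrsf")));
    }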

3.5 Virtual Reality

An important end goal is for the application to function in VR, and thus the project must be created with this in mind. While the technical aspects surrounding this are less important, several design decisions are influenced by it. The first of these is interaction with the environment. The HTC Vive uses a pair of controllers to interact with the virtual environment. The precision afforded by these controllers when pointing at things is inferior to that of a mouse, and the number of buttons present on the controllers is far smaller than on a keyboard.

This necessitates the creation of a menu that the user can interact with. This menu provides access to all advanced features that cannot be mapped onto the controllers themselves. Preferably, this menu should always be available to the user, as they can move around the environment and need access to it regardless of where they are standing. The usual method for creating a menu in applications with mouse control is a Heads Up Display (HUD), a menu attached to the screen itself that is then interacted with. Since a HUD cannot be interacted with in virtual reality, an alternative is needed. One possibility is to attach a menu onto one of the controllers and use the other one to interact with it. Since the controllers are always present, this solves the issue.

Navigation

An issue brought up in the second question in Section 1.5 is the matter of navigation. Since the dome itself can be scaled up and down according to the user's desires, the user may wish to walk to different places and view the dome from different perspectives, to make sure that everything looks good from all angles. However, as room space is always limited, an alternative to physical navigation is necessary. This is a task in and of itself; tying movement to the controllers is usually not a good idea, due to the VR sickness that results from the avatar moving while the user is stationary [19].

There are multiple ways to implement navigation that limit VR sickness. The simplest is teleportation: instantaneous movement from one spot to another is useful when perfect immersion is not a concern, as it is easy to implement. Alternatives such as walking in place to simulate physical movement can reduce VR sickness further, but such methods require tracking equipment that can register body movement, or trackers tied to the limbs [19]. Motion can also be simulated using the movement of the controllers themselves, which requires no extra equipment.

The method of navigation chosen for the application is teleportation. It is easily implemented in comparison to the other alternatives, and immersion is a secondary concern since the purpose of the application is not to provide an immersive experience.
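
A minimal sketch of such a teleport action is given below, assuming a pawn reference VRPawn and a motion controller component RightController; both names are illustrative and the trace logic is simplified compared to a full implementation.

    void Teleport()
    {
        // Trace along the direction the right controller is pointing.
        const FVector Start = RightController->GetComponentLocation();
        const FVector End = Start + RightController->GetForwardVector() * 10000.0f;

        FHitResult Hit;
        if (GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility))
        {
            // Instantly move the pawn to the spot that was hit.
            VRPawn->SetActorLocation(Hit.Location);
        }
    }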

3.6 Sketching Controls and Implementation

The various controls brought up in relation to the third and final question in Section 1.5 need to be implemented as well. The implementation of some of these will be simpler than others. While layers, blending modes and opacity have been covered, the others have not.

Marking Tools

There is a simple way to check whether or not a pixel is marked. If a structure of the same size as the texture is created, it can be used to represent marked and unmarked pixels. If the pixel at a certain location is selected, that index in the structure is set to true; if not, it is set to false. Whenever the painting function is called, it checks the pixel in question against the structure and decides whether or not to paint on it.
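
A sketch of this check is given below, assuming a boolean array SelectionMask of the same size as the texture and a flag bHasSelection indicating whether anything is currently marked; the names are illustrative.

    bool IsPaintable(int32 X, int32 Y) const
    {
        // With no active marking, every pixel may be painted.
        if (!bHasSelection)
        {
            return true;
        }
        // The mask uses the same row-major indexing as the texture itself.
        const int32 Index = Y * TexWidth + X;
        return SelectionMask[Index];
    }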

Marking an area depends on the specific tool used. A rectangular or elliptical marking tool simply draws its shape from the point where the user presses the trigger to the point where the user releases it. The magic wand, in comparison, uses the algorithms presented in Equations 2.19 to 2.21.

Filters

There is little in the way of advanced implementation here. The filter kernel simply moves along the picture and applies its values to create the filter effect. Texture padding can be skipped by simply letting the kernel extend outside the texture and applying only the values of the pixels that are actually covered.
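
A sketch of such a kernel pass for a single pixel is given below, assuming the colours are stored in a row-major TArray<FColor>; samples that fall outside the texture are skipped and the result is normalised by the weights actually used, which is one way of avoiding an explicit padding step. Names and the normalisation choice are illustrative.

    FColor FilterPixel(const TArray<FColor>& Src, const TArray<float>& Kernel,
                       int32 KernelSize, int32 X, int32 Y,
                       int32 TexWidth, int32 TexHeight)
    {
        const int32 Half = KernelSize / 2;
        float R = 0.0f, G = 0.0f, B = 0.0f, WeightSum = 0.0f;

        for (int32 Ky = 0; Ky < KernelSize; ++Ky)
        {
            for (int32 Kx = 0; Kx < KernelSize; ++Kx)
            {
                const int32 SampleX = X + Kx - Half;
                const int32 SampleY = Y + Ky - Half;

                // The kernel sticks out of the texture: skip the sample.
                if (SampleX < 0 || SampleX >= TexWidth ||
                    SampleY < 0 || SampleY >= TexHeight)
                {
                    continue;
                }

                const float Weight = Kernel[Ky * KernelSize + Kx];
                const FColor& Sample = Src[SampleY * TexWidth + SampleX];
                R += Weight * Sample.R;
                G += Weight * Sample.G;
                B += Weight * Sample.B;
                WeightSum += Weight;
            }
        }

        // Normalise by the weights actually applied, so that pixels near the
        // border keep their brightness (assumes a kernel whose weights do not
        // sum to zero, such as a blur kernel).
        if (WeightSum != 0.0f)
        {
            R /= WeightSum; G /= WeightSum; B /= WeightSum;
        }

        return FColor((uint8)FMath::Clamp(FMath::RoundToInt(R), 0, 255),
                      (uint8)FMath::Clamp(FMath::RoundToInt(G), 0, 255),
                      (uint8)FMath::Clamp(FMath::RoundToInt(B), 0, 255));
    }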

Chapter 4

Results

The following is a description of the resulting application. The first section describes the workflow of the project itself, and how a user may interact with the application. The other sections describe different portions of the application, and how they were implemented.

4.1 Workflow

The implementation was an iterative process, as contact was maintained with the designer throughout. Communication took place at regular intervals, through meetings where progress was discussed. Whenever new features were added, they could be demonstrated easily, making the constant feedback a valuable resource. With this feedback came the ability to easily prioritise which features should be implemented first.

For instance, the tools brought up in section 2.2 were prioritised higher or lower depending on the wishes of the designer. The implementation of layers was assigned the highest priority of these, as it is a baseline for a multitude of other features, and almost a prerequisite for advanced painting.

Another instance of useful feedback was the decision not to use the standard sphere shown in Figure 2.1. Concerns about the resolution at the poles were raised as feedback, along with the idea to use a cube sphere instead.

Application Workflow

The user starts with a blank canvas surrounding them as a dome. The user is equipped with two controllers, with different functionality tied to them. The left controller has several menus attached to it, providing all of the functionality that cannot be mapped onto the limited number of buttons. The menus clearly display their purpose with descriptive icons and titles. The right controller is used for all interaction, both with the left controller's menus and with the canvas itself.

The user can paint on the canvas, change the colour and size of the brush, and add multiple layers that can be painted on separately. The opacity of both the brush and the layers can be changed easily through the menus and the right controller. When the user is satisfied with their work, they can save it using the left controller's menu. They can also create a blank canvas to start something new, or load an existing file to continue with previous work.

4.2 Mesh Generation

The procedural generation was implemented by creating two classes, Spheroid and DynamicActor. The Spheroid class handles all the mathematical calculations for the procedural generation, providing all of the lists required to generate a mesh. The second class is the representation of the dome in the application itself, which the user can interact with.

An implementation of procedural generation of an ordinary sphere, as illustrated in Section 2.1, was created first. The result is displayed on the left side of Figure 2.3. As stated earlier, this results in problems with distortion on the surface. A simple way of compensating for this was implemented in the painting algorithm, scaling the brush size depending on the distance to the pole of the sphere. While the brush size remained consistent, it became clear that the resolution near the poles was poor in comparison to the rest of the sphere.

Currently, the DynamicActor uses an imported half-cubesphere with custom UV mapping. It became quite clear that procedurally generating such a shape would take too long to implement, and this method therefore became its replacement. The method can be seen in Figure 3.1, where a blueprint node retrieves all the necessary information to generate a mesh procedurally and feeds it into the DynamicActor. The mesh used can be seen in Figure 4.1.

Figure 4.1: The mesh used for the dome. (a) Half cubesphere shown in colour. (b) Half cubesphere shown in wireframe mode.

The drawback with this method is that, if the user requires a different level of detail on the mesh itself, they would have to create a new mesh with an external program and re-import it to the engine. This lacks the flexibility provided by the procedural generation method.

4.3 Interaction

The user has access to two HTC Vive controllers. Since the number of buttons on these controllers is limited, most functionality is located in a menu attached to the left controller, visible in the virtual environment. See Figure 4.2 for an image of what the controllers look like in the virtual environment.

The menu connected to the left controller can be interacted with, and all advanced features can be found on it, with the exception of brush size and brush opacity, which are mapped to the trackpad of the right controller. The menu consists of three separate screens surrounding the left controller. These screens can be rotated around it using the left controller's trackpad buttons, so that the user does not need to rotate the controller itself to interact with the menu. The different menu screens can be seen in Figure 4.3.

Figure 4.3: Menu screens. (a) Control menu. (b) Layer menu. (c) Colour picker menu. (d) Status menu.

The control menu offers the following options:

• A "new" button, displayed in the top left corner. This button resets the canvas and all layers completely, allowing the user to start over.

• An "open" button, top middle. This button is used to select a previously saved canvas, using the method described in Section 3.4.

• A "save" button, top right corner. This button is used to save the current canvas, using the method described in Section 3.4.

• The brush icon, middle left, allows the user to select the brush tool. Compared to the pencil, this tool has soft edges when painting.

• The eye dropper icon, middle, allows the user to select the colour picker tool. When active, the colour picker is used to select a colour from any point on the canvas.

• The middle right icon is used to select the teleport tool. When active, the user can teleport in the virtual environment.
