
MdH University

IDT

Bachelor Thesis

Mobile Virtual Reality Environment in Unity 3D

Patrick Sjöö

Supervisor: Daniel Kade
Examiner: Rikard Lindell

Västerås, Sweden, November 2014


Abstract

This report is a contribution to an existing research project that works on supporting motion capture actors by improving their immersion and experience through an augmented reality setup. The problem definition is that a motion capture studio, as of today, does not provide any large scenery for an actor. A motion capture scene is made almost entirely out of props and actors wearing motion capture (mocap) suits. To set the stage and environment, a director usually explains what the props are and what the situation is. The rest lies in the hands of the actors to imagine the scene.

This project provided the controls for viewing a virtual environment using a smartphone with an Android operating system. The result was an application containing a virtual world that the user could look around and walk around in, using the smartphone's gyroscope and accelerometer respectively. This helps the actor get a better view of the surrounding world in which he or she is supposed to act. The phone was connected to a pico projector, and both devices were mounted on the user's head to get all needed input such as turning, tilting and physical movements. The phone can also be mounted in several positions, which can be changed in real time. Some user testing was carried out to see how users handled the devices and what they thought of the application.


Acknowledgements

There are two people I would like to give thanks to:

First of all, Daniel Kade, for answering all the questions I had during the course of this project. Also a big thanks for providing me with an awesome level to walk around in and lots of other fixes and advice.

Secondly, I would like to give a hearty thanks to Sofie Allared for her never-ending support and motivation before, during and surely after this project. I would probably never have gotten this project if she hadn't motivated me into it. My deepest thanks to you.

I would also like to thank Pepper, the finest dog I've ever had in my life, for putting up with me during the long days of coding, testing and problem solving, always showing constant love and loyalty, and for getting me out of my chair when things became tiresome.

Västerås, November 2014
Patrick Sjöö


Table of Contents

Abstract
Acknowledgements
Table of Contents
List of Figures
1 Introduction
2 State-of-the-art
3 Research question and problem analysis
4 Method
5 Terms
   5.1 Vectors
   5.2 Quaternions
   5.3 Euler angles
   5.4 C#
   5.5 Character controller
   5.6 Blender
   5.7 Frame rate and frames
   5.8 High-pass and low-pass filters
   5.9 Gyroscope
   5.10 Accelerometer
6 Discussion of choices
   6.1 Phone
   6.2 Game engine
      6.2.1 UDK
      6.2.2 Unity
   6.3 Mounting the device and projector
      6.3.1 Position
      6.3.2 Mounting
7 Implementation
   7.1 Setup
   7.2 Code design
   7.3 Program flow
      7.3.1 Initialization
      7.3.2 Input
      7.3.3 Update
      7.3.4 Draw
   7.4 Step algorithm
   7.5 Rotation algorithm
   7.6 Finding phone mounting positions
   7.7 Interface
   7.8 Level
8 User tests
9 Conclusion
10 Future Work
Bibliography


List of Figures

2.1 The Oculus VR showing its capability to separate the user's vision from everything else. Source: WikiMedia Commons
2.2 Picture showing the similarities between the Avegant VR Retinal display and a regular pair of sunglasses. Source: CNET
2.3 The Durovis Dive mounting device, pictured to show how the phone is mounted and how the lenses are placed. Source: Durovis
5.1 Picture to better visualize how a gyroscope works. All the circles in the picture represent an axis that you can spin around. Source: WikiMedia Commons
6.1 Picture showing the phone mounted on the side of the head with the pico projector on top
6.2 One of the ideas on how to mount the device. The picture shows a head strap with two straps for added support. Source: GoPro
6.3 The finished version of the prototype, used in all later tests. Featuring the creator of the cap as well
7.1 UML (Unified Modeling Language) diagram over how all objects and scripts are connected, including the scripts' variables and methods with their respective types and return types
7.2 Picture showing how a game loop is built. To break the loop and exit the program, the user needs to shut it down, most likely through some kind of input. Created with an online UML diagram tool
7.3 The two different algorithms; the arrows show which acceleration is being measured. The left side has the regular algorithm active and the right side has the WIP algorithm active. The gray area represents the smartphone
7.4 Picture showing roughly where the positions are in relation to the user's head: the phone on the left side, on the right side, on top of the head (view slightly tilted), and on the back, close to the neck
7.5 The making of the test level. The top half shows the first plane created and cut according to the size of an actual shooting floor; the bottom half shows the final level with textures and lighting
7.6 A scene from the final test map featuring a guard tower with a special set of trees (they have a different kind of foliage)
7.7 Another scene featuring a car by a small jungle of bamboo trees
8.1 Diagrams showing what the testers answered to the questions shown above them, to showcase possible connections between them
8.2 Diagram of how the testers felt about the realism in our application
8.3 Picture of a user testing the application in a small conference room


1 Introduction

Motion capture today makes animation more realistic than ever before, and it is commonly used in media, on the internet and in video games[18]. Motion capture means that real actors with markers attached to them (usually on a suit similar to a wet suit) are used to track their movements, and sometimes facial expressions, to create animations which are later put onto something else, like a video game character. One of the problems that comes up during the acting is that, unlike a real movie set, there are usually no real props to tell the actors anything about the location or the happenings around them; there are only the instructions from a director. The aim of this project was to contribute to a bigger research project whose goal was to create a more immersive environment for motion capture actors. This thesis was made to provide the controls for displaying a virtual scenery for a motion capture environment and to see if it was possible to achieve this using a regular phone. It describes our project in terms of code, design and the development of the resulting application. It also covers the algorithms used for both rotation and movement, issues with design, discussions and comparisons with other projects and tools, the implementation and the eventual cuts of features.

We wrote this application to make the whole setup very flexible and compact, whilst trying to make it as powerful as possible, to provide a visual contribution to the bigger project. One requirement for this project was that it had to be made using a game engine. With this program, you can take almost any Android phone (not counting those that lack a gyroscope), install the application and have it up and running in a couple of minutes. Because it all runs on a smartphone, there are nearly no limits to where it can be used. Everything is done with free software.

At the end, we had a working prototype application which could be used to walk and look around using only the phone. With the phone connected to a pico projector mounted on top of the user's head, a bigger picture could be displayed in front of him or her. The phone could be mounted on any side of the head, and the application could be adjusted to the new position while running, with the push of a button.


2 State-of-the-art

There exist some technologies and products that have similarities with this project. The field of virtual and augmented reality evolves all the time, always producing better hardware and new technologies[23][24].

For instance, there is the Oculus VR[10], which is a virtual reality headset designed for gaming. It features a screen that is mounted in front of the user's eyes. It uses several sensors, such as a gyroscope, an accelerometer and a magnetometer, to supposedly give it absolute head tracking relative to earth, without drift. It shuts out the rest of the user's view so only the device's screen is visible. To date there hasn't been an official consumer release, but there are developer kits, and games need to be specifically programmed to work with the Oculus VR. Compared to this project, it uses similar types of sensors to achieve its intense atmosphere, but as mentioned before it lacks an official release. Also, the few games that support it use a controller to move around, and the device is only for looking around in the world. This project wants to be able to move around using the accelerometer as well, and not depend on a controller. The Oculus VR also has to be powered by a source other than its own, which a phone solves by having its own battery. It was also an inspiration for how to mount the device in our own project.

Figure 2.1: The Oculus VR showing its capability to separate the user's vision from everything else. Source: WikiMedia Commons


Another product that's out there is the Avegant Virtual Retinal Display[11], which seems to put its focus on displaying images. Just like with the Oculus VR, the user's field of view gets shut out so only the display is visible. The difference here is that it uses a retinal display, which in simple words projects an image straight onto the user's retina. This is a very complex process and requires a lot of fine-tuning to get right, as it tries to replicate the way eyes receive and handle light, rather than displaying an image in front of them. The device itself looks like a pair of glasses, and there don't seem to be any sensors on it, like a gyroscope. This is where using a phone becomes a strength, because its sensors can be used as a way to control a character or an object in the world.

Figure 2.2: Picture showing the similarities between the Avegant VR Retinal display and a regular pair of sunglasses. Source: CNET


Yet another implementation, for both Android and iOS, is a project called Dive[12]. It uses a regular smartphone and a plastic frame in which you place it. The frame contains two lenses which provide the user with a better field of view. It makes use of the phone's sensors to look around in different worlds. There are even versions that support phones not equipped with a gyroscope, although those provide a very limited (if any at all) experience. There exists a game for Dive which is described as a "Run and Jump" game. Looking at how the movement is handled, it's done by looking at a certain object long enough for it to turn green. At that point, the user starts moving forward at a constant speed in the game. Also, Dive can only be played in landscape mode on the phone, given that it attempts to show two separate images to create a 3D effect. In our project, we wanted to be able to run the application in both landscape and portrait mode.

Figure 2.3: The Durovis Dive mounting device is pictured to show how the phone is being mounted and how the lenses are placed. Source: Durovis


A project that has a few similarities with this one is named "AR Street View"[7], which uses Google Maps' street database matched with a smartphone's GPS and camera to create a live view of the user's surroundings. The phone itself provides the image, and the data from the database gives information about certain street names and key locations. This project has a positional problem at times, and uses a gyroscope and accelerometer to create a pedometer that counts the user's steps and in what direction they go. The difference between our project and this one is that their sensors are not placed in the phone, but at the waist. Also, there are four required components for "AR Street View", namely the software, the database server, an external sensor (which most likely refers to the gyroscope and accelerometer) and a UMPC (or laptop PC) with a Bluetooth adapter. In comparison, our project only requires our software installed on a smartphone equipped with an accelerometer and a gyroscope.

A project more focused on motion capture is named "Practical Motion Capture in Everyday Surroundings"[16]. Like our project, it uses sensors mounted on the user to track their movements, and it's designed to capture regular everyday motions, from casual walking to cooking food. It has good replication of movements and uses gyroscopes, accelerometers and ultrasonic sources to measure the movements. The first two are bundled together on a single chip, whilst the ultrasonic source works on its own. The chips are placed on the user's joints and the rest is placed on different parts of the upper body. All of this data is then stored and processed in a backpack containing a driver box with a hard drive attached through a USB port. One of the more interesting aspects of this project is the number of sensors needed to obtain a good quality of movement, as nearly every motion in the user's body has to be tracked. This project gave an idea of some of the complexity involved in using gyroscopes and accelerometers for tracking movement.

The project REFLCT[19] aims to provide training programs using pico projectors and retroreflective surfaces to display images. It utilizes a helmet with one or more pico projectors, one or more reflective surfaces and some form of tracking, to display individual images for the users. The tracking is done by using a motion capture system to get the camera's position, which is then fed to a PC, which provides the correct image. REFLCT mentions having a version developed for smartphones using Unity, although tracking is still done through WiFi and possibly through the motion capture system as well. Our project does not rely on any other system than the phone itself, even for defining its position, which would be the major difference between the two.

Another take on making a cheap and portable solution for the motion capture market is the MuVR[25]. It features a mobile augmented experience and aims to create a cheap solution. The hardware it utilizes includes an Oculus Rift[10], a Raspberry Pi computer, a smartphone, a Razer Hydra tracking system and a battery pack (with some minor connection cables and adapters). The Oculus Rift handles all the visuals, sounds and head movement. The smartphone provides all locomotion for moving through the virtual world, and the Razer Hydra uses electromagnetic tracking to keep track of the body. The sensors are read via WiFi through the Raspberry Pi, and the system also supports multiple users joining the same world. Just like our project, this one used Unity to develop the virtual world to walk around in, and does so by making it stereoscopic so everything renders in 3D. Another similar thing is the use of the smartphone's accelerometer to determine whether or not a user is walking, although the actual results are not mentioned, only speculated about.


One of the bigger differences is notably the amount of equipment (our project uses two devices, whilst the MuVR uses more than six, including cables). This naturally leads to a larger range of sensors, as both the Oculus Rift and the smartphone provide separate sets of data, potentially leading to higher accuracy in movement tracking. One downside to using an Oculus Rift for translating actual walking into a virtual world is that the user's vision is strictly limited to what is on its screen, which could lead to hardships if you would want objects to interact with (like boxes to climb, or other things to go under).


3 Research question and problem analysis

Like every other project, this thesis aims to solve a problem, and this section will give a more detailed description of that.

The problem at hand for this thesis comes from an already existing research project, which is trying to create a more immersive environment for motion capture actors to act in, both through graphics and audio. Usually on a motion capture stage, the actors are put in a scene where everyday items are used as props. A director tells them the environment they're in, what the props are supposed to resemble (if anything) and the events that happen (like explosions). The actors have to take these directions and imagine it all in front of them, unlike a real movie set where the props are more or less real and look just like they are supposed to look in the final product. In order to try and make the motion capture stage more real, audio and graphics are added to the stage. This project focuses on the graphics part, more specifically by providing the controls for an application to view the environment that the actor is supposed to see. The problem we're facing comes down to the question: "Is it possible to make the controls for displaying a virtual scenery used in a motion capture environment?"

What we had to create was some kind of application which could be used in motion capture, possibly something mobile that would provide the actor with something visual without interrupting the regular acting.


4 Method

When working on this project, we followed a simple pattern of idea gathering, implementation and testing.

As a starting point, we needed to see what else was out there: has anybody done something just like this, or at least close to what we are trying to achieve? There existed some projects with similarities to our own, although in some cases not designed for our target group. At the same time as we searched for similar projects, we sat down during the first few meetings and had a few brainstorming sessions. We already knew the project had to be made for a phone, so we discussed a lot of ideas on how the application could be designed and how it would be mounted on the user. When we had decided on a design we were happy to make a prototype of, we started implementing the application. The implementation work mainly followed agile project methods. Agile methods are commonly used in projects where time is of the essence, and they are often used in software projects. Right from the start, we decided to go for SCRUM[6], which focuses on getting a working prototype out as fast as possible to enable testing on users early. As the project makes progress, new features are added with the user tests in mind. This process of constant evaluation keeps the final product close to what the user wants, although a lot of the time goes to testing. This method suited us best, as we wanted to get something usable quickly and test it as soon as possible; by having a dynamic work method where the essential features were done first, we could focus on more fun and experimental ones if there was any time left over. It's important to note that SCRUM is a work method and not a scientific research method.

At the start of the implementation, it was all about getting familiar with the development environment, and after a few weeks the actual development began. Because we were working with our own version of SCRUM, we had weekly meetings to constantly check up on the progress of the project. As soon as we had a version that ran properly, we started testing it inside the group to find bugs, design flaws and better algorithms. Eventually a final prototype was finished, ready to be tested by regular users. The test was carried out at our university, Mälardalens Högskola, with six different persons. All of them were given time to try out the device and were asked to fill in a form afterwards. These results gave us a very clear picture of how users react to the idea of walking around in a virtual world and how people handle the application the first time. The users also had suggestions on areas where they would like to see the application being used. The tests were evaluated on qualitative grounds; as there unfortunately wasn't more time for testing, a quantitative evaluation simply didn't have enough material. In a research ethics statement, we made sure that the testers understood that the test was entirely voluntary and that they could abort it at any time. We also made sure the testers knew we weren't testing them, but the application itself.

The argumentation method used for this project leans towards inductive reasoning, which means that we worked from a question and then tried various approaches to reach a conclusion in the end. The reason it's an inductive method is the exploratory nature of this project. That other projects exist with their own conclusions doesn't necessarily mean that we will reach the same ones, if anything like them at all. This is why we conducted the tests and tried a lot of algorithms and different things with our application. In our project, the activities connected to this type of reasoning have been brainstorming, prototype building and testing, both on ourselves in the work group and with real users. Through brainstorming we got our ideas on how to start the project, which we backed up with what we had found in our state-of-the-art section. To approach the ideas at hand, we constructed them and tested them ourselves, as well as discussing the results. If the conclusion was that an idea was good, we developed it further until we had a working prototype, continuously testing it and discussing our findings. The prototype gave us a lot of information on what the limits were for both the hardware and the software, and the user tests provided us with information on whether the application could perform according to our problem at hand: if it is possible to make the controls for displaying a virtual scenery used in a motion capture environment.


5 Terms

This section of the report explains some of the terms used in this project in detail, to provide a common understanding of them.

5.1 Vectors

Vectors are used all over the world of mathematics, physics and video games. A vector is usually described as a line between points and can have an infinite number of dimensions, but in almost all cases 2- or 3-dimensional vectors are used. They are usually written as a collection of values and also have a direction and a length. For instance, the 2-dimensional vector (1, 1) can describe a vector that goes from point (0, 0) to point (1, 1). To calculate this, let's call the first point A and the second point B; to calculate the vector AB, we subtract the points in reverse order. Subtracting basically means that you subtract the corresponding components from each other, which leads to AB = (1-0, 1-0) = (1, 1). However, a vector does not always start at (0, 0): if A = (0, 2) and B = (2, 2), then we get AB = (2-0, 2-2) = (2, 0), which describes a vector that goes straight to the right for two steps along the X-axis. Vectors always show what direction they travel in and how far they go, but do not define where they, for example, start. For this reason, you cannot rely solely on vectors to acquire a position. However, if you have a point that describes a position, then a vector can be added (or subtracted) to change said position. This is why it's important to understand the difference between points and vectors. Vectors are also used immensely in graphics calculations, but that will not be discussed in this section.
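As a minimal illustration of the point/vector distinction described above, here is a small sketch using Unity's Vector3 type (the class and variable names are ours, chosen only for this example):

using UnityEngine;

public class VectorExample : MonoBehaviour
{
    void Start()
    {
        // Two points A and B, matching the example in the text.
        Vector3 a = new Vector3(0f, 2f, 0f);
        Vector3 b = new Vector3(2f, 2f, 0f);

        // The vector AB = B - A holds a direction and a length, not a position.
        Vector3 ab = b - a;                                       // (2, 0, 0)
        Debug.Log("direction " + ab.normalized + ", length " + ab.magnitude);

        // Adding the vector to point A moves the position to B.
        Vector3 moved = a + ab;
        Debug.Log(moved);                                         // (2, 2, 0)
    }
}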

5.2 Quaternions

To describe an orientation in 3D, or a rotation if you will, one way is to use quaternions[1, 2]. They are among the more complex mathematical concepts, and when it comes to using them in this project, the need to know exactly what they are is not as important as knowing how to use them. In order to understand quaternions fully, extensive knowledge of complex numbers is needed, but what one needs to know for this project is that quaternions are used for every rotation done with the camera.

5.3 Euler angles

When speaking of Euler angles, the easiest way to think of them is as regular degrees in geometry. Euler angles are used to keep track of a rotation along the X-, Y- and Z-axis, and just like in regular geometry there are 360 degrees to a full lap. So if a model has a rotation vector with the Euler angles (0, 0, 0), it is not rotated at all from its native or default rotation, and should it have something like (90, 0, 180), we can read that it is rotated 90 degrees along the X-axis and 180 degrees along the Z-axis. Euler angles can be used instead of quaternions.
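In Unity, an Euler-angle rotation can be converted to a quaternion and back; a minimal sketch of that conversion, using the angles from the example above, could look like this (class name is ours):

using UnityEngine;

public class EulerExample : MonoBehaviour
{
    void Start()
    {
        // The Euler angles (90, 0, 180) from the example above, expressed as a quaternion.
        Quaternion q = Quaternion.Euler(90f, 0f, 180f);
        transform.rotation = q;

        // Reading the rotation back as Euler angles. Unity may report an equivalent
        // combination of angles, since several Euler triples describe the same rotation.
        Debug.Log(transform.rotation.eulerAngles);
    }
}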

5.4 C#

C# is an object-oriented programming language created by Microsoft. It is used in many programs and has its roots in the C/C++ languages. It focuses heavily on object-oriented programming, which means that the program is structured with classes that act as templates. From those classes, objects are created and interacted with during the actual runtime of the program or application. C# was the programming language used for scripting in this project.

5.5 Character controller

A character controller[4] is a special kind of game object in Unity. It features a collider shaped like a capsule, predefined methods for movement and other useful settings, like changing the step height, which decides how high an edge can be in order to step over it without having to jump. It doesn't come with a model, but one can be assigned if needed. For this project, a character controller is used to move around in the environment, as a simulated user.

5.6 Blender

Blender[5] is a free, open source 3D model creator. With Blender you can create 3D models with animations and textures. It has some very powerful features, but has a reputation for a steep learning curve. It has been used in both movie projects and television commercials. This program was used to create the first test/debug map.

5.7 Frame rate and frames

Frame rate is the speed at which something is updated (commonly used about games) and is measured in frames per second (fps). Most games to date use either 30 or 60 fps.

5.8 High-pass and low-pass filters

Just as the name suggests, a high-pass or low-pass filter is a filter that only lets certain values pass. The reason for having such a filter is to get a more consistent set of data from a constant stream; they are widely used in the electronics and audio industries. In this project, a high-pass filter is used to get rid of jittery shaking when looking around in the world.
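A minimal sketch of the kind of threshold-based high-pass filter described above: only changes larger than a threshold are let through, which removes small jitter. The class name and the threshold value are ours and only illustrative, not the project's actual values.

using UnityEngine;

public class HighPassFilter
{
    const float ThresholdDegrees = 0.5f;        // illustrative threshold

    Quaternion lastApplied = Quaternion.identity;

    public Quaternion Filter(Quaternion incoming)
    {
        // Keep the old value unless the new one differs by more than the threshold.
        if (Quaternion.Angle(lastApplied, incoming) > ThresholdDegrees)
            lastApplied = incoming;
        return lastApplied;
    }
}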


5.9 Gyroscope

A gyroscope is a device that measures orientation. Gyroscopes are used all over the world where the need for stability is high and you don't want things to fall over: trains, planes, rockets, ships, pointing devices and a lot more use them. For this project, it makes the phone aware of how it is turned, which is essential for looking around in the world.

Figure 5.1: Picture to better visualize how a gyroscope works. All the circles in the picture represent an axis that you can spin around. Source: WikiMedia Commons

5.10 Accelerometer

An accelerometer[8] is a device that measures the force applied to it and is used in many areas, ranging from vibrations in machines to safety features like airbag deployment. The values it produces can be translated into a vector, which can be used to get a sense of direction and acceleration. The accelerometer is affected by g-force and can detect both very small and very big changes in force. For instance, an accelerometer lying still on a flat surface will register the earth's gravity pulling it down. Therefore, gravity always has to be taken into consideration when reading values from the accelerometer. The ones used in modern phones can pick up extremely small changes (like the small trembles in your hand when you try to hold the phone perfectly still) as well as very big ones (like very violent shakes). In this project, the accelerometer is used to get the user's direction.
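In Unity, the raw accelerometer reading (which includes gravity) and the gravity-compensated user acceleration can be read as shown in this small sketch (class name is ours):

using UnityEngine;

public class AccelerometerExample : MonoBehaviour
{
    void Start()
    {
        // userAcceleration is exposed through the gyro interface, so enable it first.
        Input.gyro.enabled = true;
    }

    void Update()
    {
        Vector3 raw = Input.acceleration;            // includes gravity: roughly (0, -1, 0) when lying flat and still
        Vector3 user = Input.gyro.userAcceleration;  // gravity removed, only the user's own motion

        Debug.Log("raw " + raw + "  user " + user);
    }
}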


6 Discussion of choices

This chapter discusses the different choices that were made, everything from game engines to how to fasten the device on the intended user.

6.1 Phone

When the project started, we were handed a Samsung Galaxy S4 Mini. The reason for this is that it offers one of the best performance-to-cost ratios at the time of writing. There are other phones that have the same kind of sensors, and even more to boot, but they are either more expensive or bigger in size. The S4 Mini provides us with the sensors we need, has the specifications needed to run the application, has a size that is very easy to manage and is also very lightweight. The S4 Mini also runs Android (version 4.2.2, Jelly Bean) as its operating system, which was a requirement to begin with.

6.2 Game engine

6.2.1 UDK

UDK[9] (Unreal Development Kit) is a piece of software used to create 3D applications and uses the Unreal Engine to render its graphics. The Unreal Engine has received awards for its advanced graphics and can be seen in many video games and movies. It exists for most available platforms and is in many ways similar to Unity. The reason for choosing Unity over UDK is that Unity seemed the easiest to develop in, as the amount of learning required was practically none at all. One of the advantages Unity has is the method of importing models, scripts or basically any asset you want: you simply drag and drop what you want into your project, and the program refreshes your folders and keeps track of it. Unity also keeps track of any changes made from outside programs, such as editing a script or texture with another program. UDK uses its own packaging format whenever you import your assets, which may not be self-explanatory to somebody using UDK for the first time. UDK also has the disadvantage of not being able to read as many file formats as Unity. Both programs have a free license for educational purposes and non-commercial publishing, but if you want to publish your work commercially (i.e. make money off it), there are prices to be paid. You are allowed to publish games with Unity's free version, provided that you are not a commercial entity with an annual gross revenue over a certain limit, although it comes with limited graphics and some limited features. For instance, there are no real-time shadows in Unity's free version. The Pro version of Unity provides you with all features, but at a price. UDK requires you to get a license from the start if you want to publish something, and if you make more than a certain amount of money you will have to pay a royalty fee.


6.2.2 Unity

Unity[3] is a scene creator. The layout has similarities to some 3D modeling programs, but the key difference is that Unity is mainly used to create and connect environments using models built in a 3D modeling program, such as Blender[5] or Maya. Unity can import almost any model of almost any known file type and features a project viewer that automatically refreshes every time something is imported or changed outside the program itself. The program is free for educational and non-commercial purposes, it can be developed on with PC and Mac, and it can publish to almost any known operating system or modern game platform. This was the software used to develop the project, mainly for its easy learning curve and publishing, but also because it's free in almost every aspect. It has some pretty powerful tools and graphics as well, although it may not always match the cutting-edge technology as of writing this (which would then be the Unreal Engine). Another thing that was good about Unity was that the scripts could be written in C#, which was the language the group had the most experience with.

6.3 Mounting the device and projector

There were a few ideas on where and how to place the mobile device and the pico projector on the head. Naturally, it had to be the head, otherwise the movements of looking around would be impossible to imitate. As for actually getting the devices to stick to their places, there were some ideas as well.

6.3.1 Position

There were three positions in general that made the most sense in terms of performance, the first one being flat on top of the head with the screen facing up. This position gives a very good set of data from the user's movements, as the phone clearly follows all the turns and tilts. The top of the head is also (in most cases) a flat surface to strap the phone to. When we started tests with this position, it quickly became clear that if you wanted to have a good center of gravity for your head, you would probably have to stack the phone on top of the projector. The good thing about having them tight together is that you get something very compact to use; the downside is that it becomes a bit bulky to look around with (at least in the ways we tested it). Another thing worth noting is that the compass of your device goes haywire due to the small magnetic field the projector produces whilst running.


The second position that was thought up was to place the phone on one side of your head and the projector on the other. This position gives a good translation of tilts and turns, as well as a good weight distribution, as you have one device on either side of the head. Just as with the top of the head, the side is also a flat place where you can secure the phone and projector somewhat easily. The phone is placed in landscape view in this position, and another plus is that the Android OS automatically corrects the axes so they point the right way (meaning their default directions).

Figure 6.1: Picture showing the phone mounted on the side of the head with the pico projector on top

The third position was to place the phone on the back of your head, near the neck. This position gives an average reading of tilts and turns, but a very good upside to it is that you get a lot of weight off your head. The biggest problem is probably when you want to look up, because the phone gets stuck and hinders the head from moving freely.


6.3.2 Mounting

When it comes to mounting the devices, they should sit tight enough that they don't get thrown around too much, but they also shouldn't cause the wearer any harm or discomfort, as that could affect the performance of the user. We sat down and brainstormed some simple, different ideas to use. One of the ideas that came up early was to put the phone under a cap, or perhaps sew a "pocket" onto a cap where the phone would be placed. Testing with the phone completely unsecured under a cap showed that it worked, but a regular cap provides little to no grip for the actual phone. This was expected, but gave a good idea of how a simple cap could prove to be a simple solution to the problem of mounting the device.

Another idea was to use a simple strap of the kind that usually comes with normal headlamps. Most of those straps sit on your head using only one band, or two, which makes a wire-frame cap. This idea could be extended to secure the device from different directions; for instance, you could have one strap going around the side of your head and another going underneath your chin.

Ultimately, when deciding position and method of mounting, the decision was made to put the phone on the side of the head. It yielded better and more stable results in our testing, and also allowed putting a strap around the head via the chin. For the prototype that was used in our tests, we used a wool cap instead of a head strap. As for the projector, it had to be on top of the head (as in the picture above); this was due to the projector providing a better image when it was on top. When it was on the side, the eye closest to the projector had a sharp and clear image, whilst the other had a more faded one. This ultimately led to a bad image to look at, and the decision was made to always have the projector on top.

Figure 6.2: One of the ideas on how to mount the device. The picture shows a head strap with two straps for added support. Source: GoPro

We chose the cap because we wanted to try it out and it sounded simple enough to either sew a pocket into, or even use duct tape to hold everything in place. What we ended up with as a prototype was a regular wool cap that had holes in the folded end to make it easier to plug in cables. The projector was taped to the top of the cap, and the phone was secured on whatever side was needed, also using a tiny amount of tape.


The reason why the device has to be mounted securely (and preferably comfortably) is the activity it is supposed to be used in. The user (ultimately a motion capture actor) should be able to turn his or her head quickly, walk, run or even climb without the device falling off or being displaced. The wool cap was created by Daniel Kade, a PhD student at Mälardalens Högskola, the university this project was carried out at.

Figure 6.3: The finished version of the prototype to get a better understanding of how it looks. This was used in all tests afterwards. Featuring the creator of the cap as well


7 Implementation

7.1 Setup

In order to start writing scripts, we needed a program to write the code in. Unity has the comfortable option of having MonoDevelop[13] built into it, and you can write all the code in three different languages: C#, JavaScript (UnityScript) or Boo (which is based on the more popular language Python). Another benefit of using the built-in version of MonoDevelop is that all libraries connected to Unity[3] are at the programmer's disposal; through these you can access the phone's sensors and read their values. In layman's terms, the connection between the phone's sensors and the program is all in the code. As for drawing the world on the phone's screen, that's all done in Unity.

When it comes to holding everything together, we have Unity. It uses scenes to hold everything that is used in an application, everything from models, objects and entire levels to scripts and code. Apart from that, it also handles organization by keeping track of the objects and their relations. Such a relation can be that one object is a child of another object, which makes the child keep its position, scale and other properties relative to the parent should the parent be moved or changed. These relations are viewed as a hierarchy with the parents always placed higher than their children.

At the end of it all, when you want to build the program with levels, models and code into one big package, Unity handles that part as well by bundling everything (called assets) and creating the necessary installation file.

7.2 Code design

The design for the actual code was at first meant to take advantage of the strong object orientation in C#. When later faced with the problem at hand, it turned out that this would most likely be either illogical or too much work. Usually when coding with object orientation, the goal is to have logical objects that contain logical variables. An example would be a class called "Person"; some logical variables could be things like "Name" and "Age", and not "Fur color" or "Brand". Also, when using object orientation, the strength lies in being able to create unique objects from that particular class.

In this project's case, there are only three things being worked with: the character controller[4], the main camera and the environment (or level). As there won't be any creation of another character controller or camera, the need for specific classes doesn't exist, because we only want one of each. So to solve the problem of designing the code, the decision was made to create one script for each of the controllable objects, which ended up being three: the character controller, the main camera and the menu. The character controller's script handles everything connected to the movement of the character itself using the accelerometer, including all the eventual calculations. The camera's script has all the code for looking around in the world using the gyroscope; it also does all the offset calculations so that the camera is turned right when switching between different views. The menu's script displays the HUD with all the menus and debug values when testing is done. One thing of importance with this script is that it doesn't have its own object to be attached to, but on the other hand it doesn't matter where it's placed, because the script only makes use of the "OnGUI" method, which is a global method available in any script created. As such, the menu's script is attached to the character controller. The menu's script is also responsible for input from the phone's buttons, like exiting the application when a certain button on the phone is pressed.

Figure 7.1: This picture shows a UML (Unified Modeling Language) diagram over how all objects and scripts are connected. It also shows the scripts' variables and methods with their respective types and return types

Some other thoughts and ideas that went into the design of the code were to keep it as clean as possible and to have sufficient commentary to make the code understandable should anyone want to modify it. Optimizing as much as possible for performance reasons was always considered during coding.

7.3 Program flow

All running programs have a specific "flow", a certain pattern they follow from start to end. In the gaming industry, most applications have what is called a "game loop", which refers to when the actual game is running. This process goes in the order input -> update -> draw -> input -> ... In the input phase, all input from the user is received; this means all values from the accelerometer and the gyroscope are read and stored into the correct variables. Any button pressed on the phone itself is read here as well. After all input has been read and stored, the update phase follows. In this part of the loop, the input is used to update values in the game, which can range from changing position and speed to making simple choices in a menu. Other things like ongoing animations are advanced to their next frame, objects that already have a velocity applied are updated with their new position, and objects that should no longer exist (like defeated enemies) get deleted or cleaned up. Then the last phase steps in, where everything gets drawn. This takes the newly updated objects in the world and draws them correctly onto the screen. The methods for calculating what to draw differ between 2D and 3D applications.
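A schematic game loop of the kind described above might look like the following sketch. It is only an illustration: Unity hides this loop and instead calls methods such as Update() and OnGUI() on every script, and the method names here are placeholders of our own.

// Schematic only: Unity hides this loop and calls Update()/OnGUI() on scripts instead.
public class GameLoopSketch
{
    bool running = true;

    public void Run()
    {
        while (running)
        {
            ReadInput();    // read sensors and buttons; an exit command sets running = false
            UpdateWorld();  // apply the input, advance animations, move objects, clean up
            Draw();         // render the updated world to the screen
        }
    }

    void ReadInput()   { }
    void UpdateWorld() { }
    void Draw()        { running = false; }  // placeholder: stop after one pass in this sketch
}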

Figure 7.2: Picture showing how a game loop is built. To break the loop and exit the program, the user needs to shut it down - most likely through some kind of input. Picture was created with an online UML diagram tool

This application has a program flow similar to a game loop, which will be explained in the following sections:

7.3.1 Initialization

This part is the first one that runs when the program starts. It creates all variables used in the program and in some cases assigns a starting value. In the scripts, the initialization of variables is done either at the moment they are created or in the mandatory Start() method. It only runs once.

7.3.2 Input

The values from the accelerometer[8] and gyroscope are read and stored into variables. The gyroscope's values are multiplied by a quaternion, which is a rotation offset that makes the view come out correctly. Depending on which orientation the phone is used in, a different offset is used.

7.3.3 Update

Two out of the three active scripts in this application use a method called Update. The camera's update script starts by checking how much the new rotation differs from the previously stored rotation value; the difference has to be over a certain limit for the camera to apply the new rotation. This is the high-pass filter for the rotation. The accelerometer value is read from an internal variable provided by Unity called "userAcceleration", which is part of the gyroscope interface but gives the relative acceleration, that is, the actual acceleration the phone has in a direction rather than the total force being applied. The script then checks if the acceleration upwards is greater than a certain value and, if it is, the user is taking a step. Based on the relative acceleration that was stored earlier, the value is checked to see if the character controller should be moved forwards or backwards. The character controller is then moved accordingly and a step counter is increased.
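A condensed sketch of the movement half of this update phase is shown below. The thresholds, the step length and the choice of which axis decides forwards or backwards are illustrative assumptions of ours, not the thesis' exact code; the camera's rotation filtering is sketched separately under section 7.5.

using UnityEngine;

public class UpdateSketch : MonoBehaviour
{
    public CharacterController controller;   // the simulated user
    public float stepThreshold = 0.4f;       // upward acceleration needed for a step (illustrative)
    public float stepLength = 0.5f;          // distance moved per detected step (illustrative)

    int stepCounter;

    void Update()
    {
        // Relative (gravity-compensated) acceleration, read through the gyro interface.
        Vector3 accel = Input.gyro.userAcceleration;

        // A strong enough upward jolt is treated as a step ...
        if (accel.y > stepThreshold)
        {
            // ... and the sign of the forward component decides forwards or backwards.
            float direction = Mathf.Sign(accel.z);
            controller.Move(transform.forward * direction * stepLength);
            stepCounter++;
        }
    }
}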

7.3.4 Draw

As Unity handles all the drawing automatically, all graphics are displayed correctly from the start. However, this project has its own HUD that is drawn on top of the phone's screen. The positions and sizes of the different controls are read and drawn; some of the menus and options can be hidden, so a boolean value is checked to see whether they are supposed to be visible before they are created and ultimately drawn. This is done as soon as something in the GUI changes. Unlike the Update method, which runs once every frame, the OnGUI() method can at times be called several times per frame. Only the menu script utilizes the OnGUI() method, the reason being that the menu and debug values exist in the HUD.


7.4 Step algorithm

Defining a step using an accelerometer is easy in theory. Each axis of the accelerometer has its own value showing how much it is being affected by force, including gravity. If you check for a certain difference in force between the last check and the current one, and it is greater than your set limit, you can assume that the phone is shaking or moving with enough force to count as a step. In this application, the step algorithm is one of the most critical areas, and as such it received the most attention when it came to solutions and work hours. The actual algorithm grew through several versions; the first one was programmed to directly translate the accelerometer's values to the character controller. What you get here is easiest described as tilting a platform to make a ball roll, with the character controller being the actual ball. In other words, if you lean forward the acceleration will take you forward, if you lean to the side, you move left or right, and so on. This first idea was implemented as a test to access the phone's sensors, to get a feel for how sensitive they are and, especially, how they are aligned. Starting from that point, we had to keep testing our new implementations to find improvements and come up with a new design to evaluate. Designing our application through testing and evaluation put everything into a chain in the form of: Implementation → Testing → Evaluation → Redesign → Implementation...

As we made progress, more emphasis was put on the actual requirements for a step. A limit was set for how much the acceleration had to be along the axis pointing upwards. When somebody takes a step, they need to hoist themselves up a little and then shuffle the other leg in front of them. This creates an acceleration upwards, followed by an acceleration downwards. If either one of the forces goes above or below a certain value, we can assume the user has either lifted himself up, or stomped his foot on the ground. This version had some issues with multiple registrations, as it could record several steps when only one was actually taken; this was due to the fact that the user might be running. When a step eventually was taken, the formula to produce the actual movement took the current acceleration the accelerometer showed, multiplied it by a speed and then multiplied that by the vector pointing forward in relation to the character controller. This meant that if the user produced a weak force the movement would be very small, and a strong force produced a big movement.

Another idea for improving the algorithm was an attempt to also use the axis pointing sideways, following the same principle as before, although it kept the same issues. At this point, we attempted to solve a recurring problem: tilting the phone produces false positive values. Tilting the phone affects the accelerometer, making the values very big or very small. This problem made it possible to simply tilt the phone and shake it; the shake is seen as a step, and because the value was large, you would move at a very high speed in whatever direction the values pointed. It made the character very hard to control and required a solution that tracked your previous movements. The solution that was thought up and tested was to store the previous acceleration in a variable and then subtract those values from the current acceleration. The value produced is the direction the character is supposed to travel in.


An example: assume our current acceleration along the Z-axis is -0.2, and our previous acceleration was -0.1. The values suggest the phone has gained an acceleration in the negative direction, as the current acceleration is smaller than the previous one.

To get the direction the phone is traveling in, we subtract the previous value from the current value, which gives -0.2 - (-0.1) = -0.1. The value tells us the phone is traveling in the negative direction.

Another example, crossing between positive and negative values: assume our current acceleration along the X-axis is -0.4, and our previous one was -0.5. The values tell us that the phone is having a force applied in the positive direction, because -0.4 is greater than -0.5. As before, we use the same method to calculate the direction: -0.4 - (-0.5) = 0.1, so the phone is traveling in the positive direction. Note that our current acceleration is negative, which is the reason this problem needed solving.

One aspect we wanted to try was to implement changes of stance, such as crouching or jumping. The hardest part with those features is recognizing when they are being executed. At the start, it seemed natural to simply look for a violent movement upwards and interpret that as a jump, and the opposite direction for crouching down. After some testing with these ideas, it quickly became clear that they would not fit into the application. The character's movements were extremely unpredictable: many times when a step was expected, the character jumped, and when stomping down, the character sometimes crouched. The reason for this is that, for instance, running and jumping are too close to each other in terms of upward force. Therefore there is no real way to tell the difference between the two, and because the movement became so unstable and unpredictable, the stances were cut out.

In another attempt to further improve the accuracy of movement speed and overall smoothness, we tried to have a large number of small movements in the direction the phone is moving. The way it was supposed to work was by entering a "walking state" through a regular valid step. As long as you remained in this state, the direct input received from the accelerometer moved the character. The movements became very fluid, but had the same fatal flaw as before: having the phone tilted produced extreme values, which made the character take off.

Getting a consistent set of data to determine which direction the phone was heading was also a big issue. When the application was tested for the first prototype, the output wasn't consistent enough: many times when the user took a step forward it was recorded correctly, but the direction appeared random. One of the reasons the values appear random could be the nature of a human step; when the foot stomps down it creates a violent shake. It registers as a valid step, but as it is violent (and there is basically no way to control it), the direction is wherever the phone points when the user stomps down. To solve this problem, the algorithm was reworked from scratch and redesigned. The changes that were made were a set step limit, a more accurate step counter, a time limit between steps and the ability to use the phone flat on the head. The new algorithm worked as a regular pedometer, which recognizes a shake just like before. When a step is taken, the character moves the set distance and a timer starts. The user could not take another step while the timer was below a certain limit. The limits were fine-tuned and, after more testing, a good average was found that recorded almost every regular step taken. Still, the problem with the random directions remained, but this algorithm was stable enough to be fit into the very first prototype.

After extensive testing and reading, it was discovered that Unity has built-in support for the aforementioned calculation to get rid of the problem with tilting. This led to much cleaner code and made the values more readable and accurate. After implementing the algorithm using the built-in support, more testing began. After more tests had been performed, it appeared as if the values produced when actually moving forward didn't go above a certain limit (give or take a few random ones). After changing the code to match this limit, the results turned out much better than before.

As a last addition, we added another way to take steps. This feature was an algorithm that had the user walk on the spot to move forward in the same direction as the user is looking. The user can switch between the usual walking algorithm and the "Walk-In-Place" (WIP) algorithm with the press of a button on the HUD. At the end of it all, we had two algorithms that could be switched between: one that responds to a motion upwards combined with a motion forwards/backwards, and one that only responds to a motion upwards.
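A sketch of the final pedometer-style behaviour described in this section, with a cooldown timer between steps and a toggle between the regular and the Walk-In-Place mode, could look like the following. All constants and names are illustrative assumptions of ours, not the project's actual values.

using UnityEngine;

public class StepAlgorithmSketch : MonoBehaviour
{
    public CharacterController controller;
    public bool walkInPlace;               // toggled from the HUD button
    public float upThreshold = 0.4f;       // upward jolt needed for a valid step (illustrative)
    public float forwardThreshold = 0.2f;  // forward/backward jolt used by the regular mode (illustrative)
    public float stepLength = 0.5f;
    public float stepCooldown = 0.3f;      // minimum time between steps, in seconds

    float timeSinceStep;

    void Update()
    {
        timeSinceStep += Time.deltaTime;
        Vector3 accel = Input.gyro.userAcceleration;

        // Ignore the reading if the cooldown timer is still running,
        // or if there is no strong enough upward motion.
        if (timeSinceStep < stepCooldown || accel.y < upThreshold)
            return;

        if (walkInPlace)
        {
            // WIP mode: any valid step moves the user in the viewing direction.
            controller.Move(transform.forward * stepLength);
            timeSinceStep = 0f;
        }
        else if (Mathf.Abs(accel.z) > forwardThreshold)
        {
            // Regular mode: the forward/backward component decides the direction.
            controller.Move(transform.forward * Mathf.Sign(accel.z) * stepLength);
            timeSinceStep = 0f;
        }
    }
}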

Figure 7.3: A picture showing the two different algorithms; the arrows show which acceleration is being measured. The left side has the regular algorithm active and the right side has the WIP algorithm active. The gray area represents the smartphone

7.5 Rotation algorithm

The rotation script for the camera was completed fairly quickly, mainly due to the fact that once the camera rotated the way it was supposed to, there was not much to actually change. The first idea that sprang to mind came from early testing and playing around with Unity. Early in the project, a very simple first-person application was created through a tutorial, and Euler angles were used for turning the character controller and looking around. As with the algorithm for the steps, this script followed the same chain of evaluation.


So why didn't the project's rotation script use Euler angles? Euler angles are easier to understand than quaternions[1] (as you can actually visualize something turned a certain number of degrees), and quaternions can be translated to Euler angles. There are two reasons why quaternions were chosen over Euler angles. The first reason is that the translation between quaternions and Euler angles wouldn't work: it produced a lot of compilation errors and, despite trying to solve the problem using different approaches, none worked properly. A few solutions made the application run, but produced completely wrong results, such as not being able to rotate at all. The second reason is the amount of code it saves. The gyroscope returns a quaternion when passing its values, so by not having to convert that quaternion to Euler angles nearly every single time the application updates (which comes to several times per second), one can save some performance. Also, using quaternions gets rid of a Euler angle problem called gimbal lock[14]. This problem basically means that two or more axes in a gimbal are parallel to each other, and you therefore get the exact same rotation from two different axes; this means that you have lost the ability to rotate around one or more axes.

The rst implementation of the rotation script was more or less a tutorial example borrowed from Unity's forum[15], the reason for this was to get a better understanding of quaternions and how their values looked. When we had a better understood how to use and work with quaternions, we enhanced the rst code further by adding a high-pass lter for smoother camera movements and also made sure the rotation oset was correct in relation to the phone. The rst rotation oset had the phone laying down with the screen facing up, but was now congured so it worked like looking through a camera. The high pass lter got rid of some of the jittery movements as well, and works by only allowing values over a certain limit to be applied to the camera. The gyroscope picks up nearly every single movement made, even the extremely small shakes humans do when keeping their head still, therefore the high-pass lter was implemented. We also implemented a method called "slerp" to even further smooth out the movements of the camera. Slerp works by interpolating between two points, meaning it has a start and a destination, then it calculates small even movements to make a smoother transition.
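The sketch below outlines such a rotation script under stated assumptions: the offset quaternion and smoothing factor are placeholders, and the right-handed to left-handed conversion shown is a commonly used recipe rather than the exact code of the project.

    using UnityEngine;

    // Rough sketch of the rotation script: the gyroscope quaternion is combined with a
    // fixed mounting offset and smoothed with Slerp. Offset and smoothing values are
    // placeholders, not the ones from the project.
    public class GyroCamera : MonoBehaviour
    {
        public float smoothing = 0.2f;                                    // 0..1, lower = smoother/slower
        private Quaternion mountOffset = Quaternion.Euler(90f, 0f, 0f);  // example offset

        void Start()
        {
            Input.gyro.enabled = true;    // the gyroscope must be switched on explicitly
        }

        void Update()
        {
            // Input.gyro.attitude is right-handed; flipping z and w converts it to
            // Unity's left-handed coordinate system (a commonly used conversion).
            Quaternion q = Input.gyro.attitude;
            Quaternion converted = new Quaternion(q.x, q.y, -q.z, -q.w);

            Quaternion target = mountOffset * converted;

            // Slerp interpolates between the current and the target rotation,
            // which evens out the small jitters the gyroscope picks up.
            transform.rotation = Quaternion.Slerp(transform.rotation, target, smoothing);
        }
    }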

7.6 Finding phone mounting positions

As the phone was going to be used to support motion capture actors, the application had to be more dynamic and fit into different locations on the user's head. This led to another problem for us to solve. As a start, we wanted something simple to look at, and the simplest solution was to use the phone as you would use a regular camera. As the gyroscope on Android had a different default orientation, an offset was found, and after some tweaking and testing the screen was turned correctly. The screen was in portrait mode in this version.

To further test the application and the mounting device, we started making the positions where the phone would actually sit, which were the side of the head and the top of the head. Both of these positions required new offset quaternions, which were found and tested. Only the left side of the head was implemented at this point, and it was also the only position to feature the wider landscape mode.


We later decided to implement two new positions: the right side of the head and the back of the head. Even though the idea of having the phone on the back of the head was more or less scrapped during our discussion on how to mount the devices, we thought it would be better to have too many positions than too few. Another change was to have all positions in landscape mode to get every view in wide screen. To better fit with the other positions, we decided to always have the top of the phone pointing forward, both to make it easier to use and for the sake of the accelerometer's and the gyroscope's axes. When all quaternions were found and tested, this became the version used for the user tests.
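A possible way to organize the position-dependent offsets is sketched below; the actual offset quaternions used in the project are not listed in this report, so the values here are placeholders only. The camera script would then combine the chosen offset with the converted gyroscope rotation before applying it to the camera.

    using UnityEngine;

    // Sketch: one offset quaternion per mounting position, switchable at run time
    // from the HUD. All Euler values below are placeholders, not the project's values.
    public enum MountPosition { LeftSide, RightSide, Top, Back }

    public static class MountOffsets
    {
        public static Quaternion For(MountPosition mount)
        {
            switch (mount)
            {
                case MountPosition.LeftSide:  return Quaternion.Euler(90f, 0f,  90f);
                case MountPosition.RightSide: return Quaternion.Euler(90f, 0f, -90f);
                case MountPosition.Top:       return Quaternion.Euler(90f, 0f,   0f);
                default:                      return Quaternion.Euler(90f, 0f, 180f);  // Back
            }
        }
    }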

Figure 7.4: Picture showing roughly where the positions are in relation to the user's head. The first one shows the user with the phone on the left side. The second shows the right side. The third shows the phone on top of the user's head (note that the view is slightly tilted). The last picture shows the phone on the back, close to the neck.

7.7 Interface

When creating the interface, the initial idea was a simple design with a menu containing the features. Creating menus in Unity is done by coding the HUD (Heads-Up Display) inside a specific GUI method, which is global for all scripts and can therefore be implemented anywhere. In this method you specify the type of control you want, where to place it, what content it should have (such as button text) and, optionally, a style to apply to it. The style can be created in a separate skin file, which determines how the menu will look; similarities can be drawn to the CSS files used in HTML websites.
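A minimal example of this immediate-mode style is shown below; the class name, button labels and actions are illustrative and not taken from the project's code.

    using UnityEngine;

    // Minimal illustration of Unity's immediate-mode GUI: controls are declared inside
    // OnGUI() each frame, with a Rect for placement and an optional GUISkin for styling.
    public class SimpleHud : MonoBehaviour
    {
        public GUISkin skin;   // created as a separate asset, comparable to a CSS file

        void OnGUI()
        {
            if (skin != null)
                GUI.skin = skin;

            if (GUI.Button(new Rect(10, 10, 150, 40), "Reset"))
            {
                // a soft restart would go here
            }

            if (GUI.Button(new Rect(10, 60, 150, 40), "Exit"))
            {
                Application.Quit();
            }
        }
    }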

The features that were sought after in the GUI (Graphical User Interface) were a way to change the orientation of the device, a way to exit the application, options for switching the sensors on and off, and some kind of reset button to be able to make a soft restart (meaning the application restarts without being turned off and having to load everything again).

The design of the interface went through a lot of changes during the course of the project, all driven by usability. Every time a new button or feature was introduced, there usually had to be a change to the overall design. One example is the toggle buttons for enabling and disabling the sensors: they started out as regular radio buttons, but were later changed into buttons acting like radio buttons, and they also changed size a lot during the course of the project.


Even though the design looked good in a test run on the computer, the resulting design could be completely unusable on a phone, for example by turning out very small. Similar things would happen when running the application on different phones, because the phones do not have the same resolution. The positions of the GUI controls are set in the code, which makes it very hard to get a menu that is scaled and positioned in relation to the phone's own resolution, including the font size.

After a lot of time with this HUD design in place, a decision was made to make the HUD completely responsive, meaning that the HUD adapts to different screen sizes and makes sure the menu fits. At the start, the buttons and labels all had fixed sizes, which made the menu very small and hard to read on phones with a high resolution. When the responsive design was later made, all buttons and labels used values relative to the screen size to make sure the menu would look the same on all screens.
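The sketch below illustrates the idea, with all sizes expressed as fractions of Screen.width and Screen.height; the percentages are illustrative only.

    using UnityEngine;

    // Sketch of the responsive layout: every Rect is derived from Screen.width and
    // Screen.height instead of fixed pixel values, so the menu keeps its proportions
    // on phones with different resolutions. The percentages are illustrative only.
    public class ResponsiveHud : MonoBehaviour
    {
        void OnGUI()
        {
            float w = Screen.width;
            float h = Screen.height;

            // A button that always spans 25 % of the screen width and 8 % of its height.
            Rect resetRect = new Rect(0.05f * w, 0.05f * h, 0.25f * w, 0.08f * h);

            // Scale the font with the screen height as well.
            GUI.skin.button.fontSize = (int)(0.04f * h);

            if (GUI.Button(resetRect, "Reset"))
            {
                // a soft restart would go here
            }
        }
    }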


7.8 Level

The first level created for testing was based on a real motion capture studio and its shooting floor. It contained very few details all through the project, as it was mainly used for testing. The main idea was to create the outline of the shooting floor with measurements as accurate as possible. To make sure the player was not able to cross the outline of the map, the area outside it was made into an inescapable pit; this prevented the character controller from falling off the edge. After some testing, some blocks were added to give a better sense of direction. Later, a tall block with an attached script was added to see how the lights were displayed on the phone and whether the collision handling worked correctly.

Figure 7.5: The making of the test level, shown in more detail. The top half shows the first plane created and cut according to the size of an actual shooting floor. The bottom half shows the final level with textures and lighting.


A more advanced map, created by Daniel Kade, was used to see the differences between the computer and phone versions in terms of lighting, shadows and overall performance.

As one might suspect, a computer is a lot more powerful when it comes to processing power and graphics. Despite the phone being the weaker machine, we were able to produce graphics similar to those on the PC and also managed to maintain a playable frame rate (somewhere above 30 frames per second). The new level started out as a golf course, but was later turned into a piece of land with different areas to walk around in. We used this map to see if all components would fit together and if the code would work correctly when put into something a bit more demanding. This map also had a different set of cameras, which we had to make sure the scripts would work with. The menu had locations programmed into it for easy access without any need for walking, as a means to show some nice scenery. Another reason to make a bigger and more detailed map was to see whether there would be any reason to scale the user in relation to the environment.

Figure 7.6: A scene from the final test map featuring a guard tower with a special set of trees (they have a different kind of foliage).


8 User tests

To make the application better and more understandable to real users, we had the program tested by 6 people, who completed 3 simple tasks and later filled out a form with a couple of questions. The testers were taken into a room and were told to put on a modified wool cap, which had a pico projector taped to the top and a compartment for the phone on the side (as seen in Figure 5.3). They were given a very brief description of how the application works and, after the correct settings had been set, they were told to perform simple tasks such as exploring the world. As mentioned earlier, we made clear to the testers that we were testing the application, and not them.

The first task had the users just look around and familiarize themselves with the application. By focusing only on looking, there was no chance of accidental steps being taken. The second task was to try to walk around in the world using the regular stepping algorithm, which allows walking in both directions. Lastly, the third task was to use the Walk-In-Place algorithm while exploring more of the world. After all tasks were done, the users were handed a form with questions to fill in.

The questions in the form first had the user fill in their gender and age. After that, the questions became more focused on the device, its mounting and how well it was attached to the user. The users were asked to answer some of these questions on a scale of 1 to 5; others used a scale with text instead, just like the regular scale of 1 to 5 but without the numbers.

An example question with a scale: "On a scale of 1 to 5, how realistic was the environment?"
1 - Not realistic at all
2 - Not realistic
3 - Somewhat realistic
4 - Realistic
5 - Highly realistic

Lastly, the users had some space where they could fill in their own comments on how they would use the device and program, or anything else that was not covered by the questions in the form or during the test.

The questions asked during the test were mainly about how the user felt while using the program, along the lines of: "Is there any problem you're experiencing?", "Does the walking and looking feel natural to you?" and "Does the environment look good to you?".


After the tests were done, the results were evaluated and studied to see whether the testers felt the application worked in the intended way, and whether there were any connections between the questions. Because of the limited number of tests, we made sure the results we had were thoroughly studied, making the evaluation a qualitative one rather than a quantitative one. After putting the results into diagrams, we could see a possible connection between three of the questions in our form: whether the equipment was disturbing, whether the equipment was noticed, and whether the tester experienced nausea. In the cases where the user did not take too much notice of the equipment (answering either "sometimes" or "rarely"), they did not feel that the equipment was disturbing, nor did they experience nausea.

Figure 8.1: The diagrams show the testers' answers to the questions shown above them, to showcase possible connections between them.


Another question on the form was how realistic the testers found the world. As we wanted to immerse the tester as much as possible, this question carried considerable weight. As Figure 8.2 shows, the testers rated the application's graphics "Somewhat realistic" or higher, which in this case means average or above average. As for the result on realism, it is possible that the testers rated the graphics as average because they were fully aware of the kind of application they were using, but it may also be due to the colorful landscape or perhaps the lack of sounds. Despite the average result (and in one instance higher than that), the graphics are a secondary concern for this project, but could possibly be tweaked for increased realism by adding more detail to the environment.

Figure 8.2: Picture shows a diagram of how the testers felt about the realism in our application.


The conclusion that could be drawn from the tests was that the device seemed to be in the way. This is most likely because a wool cap does not sit as tightly on the head as one would want. With some weight attached to it, it can be hard to control the movements of the head in a comfortable manner when the equipment slides out of its designated spot, which happened quite often. This was expected, though, given the very simplistic nature of the headgear; a wool cap with duct tape is certain to have flaws when it comes to keeping things in place. During the tests we could clearly see that all users had issues with the cap, as they either had to adjust it during testing or even hold it in place as they went. Two questions in the questionnaire were especially interesting: whether the user experienced any nausea during the test, and whether the equipment disturbed or hindered them in any way. Half of the people who said they had experienced nausea also wrote that they felt the equipment was restricting or disturbing. Judging by those results, it seems very important that the device sits comfortably on the user, as a poor fit could be a cause of nausea while using the device. Another interesting finding was that the Walk-In-Place algorithm received a much better response than the regular step algorithm. When asked how they liked the regular algorithm, the answer almost always included the fact that as they moved closer to or further away from a wall, the image of the projector shrank and grew accordingly.


9 Conclusion

This project showed us that it is possible to make an application that displays a virtual world using a smartphone. A user can look and walk around in the virtual world using only the phone as the controlling device, which was the problem we set out to solve from the beginning. Although we wanted to support a wider range of movements, like jumping and crouching, our hardware unfortunately limited us to just the essential movements. The reason for this was that the phone's internal sensors produced very uncontrollable values, even when it was lying still, making it extremely tough to distinguish actual movement from the noise. We made the application able to be placed in several positions to better fit our intended target group of motion capture actors.

The user tests that were conducted showed that how well the smartphone is mounted on the user seems to be very important, as those who felt the devices were poorly fitted also experienced nausea. This may be due to our own way of mounting the device on the user, but it would have been worth looking further into the fastening of the device. The rest of the test produced above-average results, so it would seem that the users were fairly pleased with the usability of the program itself, although they did not handle any of the settings and menus themselves. They all gave positive feedback on the general idea of using this kind of approach for viewing a three-dimensional world.

