
Department of Electrical Engineering (Institutionen för systemteknik)

Master's thesis (Examensarbete)

Design of a graphical user interface for virtual reality with Oculus Rift

Master's thesis in Information Coding, carried out at the Institute of Technology, Linköping University

by Robin Silverhav

LiTH-ISY-EX--15/4910--SE

Linköping 2015

Design of a graphical user interface for virtual reality with Oculus Rift

Master's thesis in Information Coding carried out at the Institute of Technology, Linköping University by Robin Silverhav, LiTH-ISY-EX--15/4910--SE

Supervisors: Jens Ogniewski, ISY, Linköping University; Jonathan Nilsson, Voysys
Examiner: Ingemar Ragnemalm

Division, Department: Information Coding, Department of Electrical Engineering, SE-581 83 Linköping
Date: 2015-10-01
Language: English
Report category: Master's thesis (Examensarbete)
URL for electronic version: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-XXXXX
ISRN: LiTH-ISY-EX--15/4910--SE
Title (Swedish): Design av ett användargränssnitt för virtual reality med Oculus Rift
Title (English): Design of a graphical user interface for virtual reality with Oculus Rift
Author: Robin Silverhav


Abstract

Virtual reality is a concept that has existed for some time, but the recent advances in the performance of commercial computers have led to the development of different commercial head mounted displays, for example the Oculus Rift. With this growing interest in virtual reality, it is important to evaluate existing techniques used when designing user interfaces. In addition, it is also important to develop new techniques to be able to give the user the best experience when using virtual reality applications.

This thesis investigates the design of a user interface for virtual reality using the Oculus Rift combined with the Razer Hydra and the Leap Motion as input devices. A set of different graphical user interface components was developed and, together with the different input devices, evaluated with a user test to try to determine their advantages. During the implementation of the project, the importance of giving the user feedback became clear. Adding both visual and aural feedback when interacting with the GUI increases the usability of the system.

According to the conducted user test, people preferred using the Leap Motion even if it was not the easiest input device to use. It also showed that the current implementation of the input devices was not precise enough to be able to draw conclusions about the different user interface components.


Acknowledgments

I would like to thank Voysys and its employees for letting me do my thesis project at the company and for dedicating their time and resources to helping me. Their guidance made the result of this thesis possible. I would also like to thank Ingemar Ragnemalm for the support throughout the implementation of this thesis project and Jens Ogniewski for the guidance when writing this thesis.


Contents

Notation
1 Introduction
  1.1 Aim
  1.2 Constraints
2 Background
  2.1 Virtual Reality best practice
  2.2 User Testing
  2.3 OpenCV
3 Application
  3.1 Existing System at Voysys
    3.1.1 Scene graph
    3.1.2 Voyage file
  3.2 Oculus Rift
  3.3 Available input devices
    3.3.1 Leap Motion
    3.3.2 Razer Hydra
4 Method
  4.1 Input devices
  4.2 GUI components
  4.3 User testing
    4.3.1 Head tracking
    4.3.2 Razer Hydra
    4.3.3 Leap Motion
5 Result
  5.1 Implementation
  5.2 User test
    5.2.1 First iteration
    5.2.2 Second iteration
    5.2.3 Third iteration
    5.2.4 Questionnaire
6 Discussion
  6.1 Implementation
  6.2 User test
  6.3 Conclusion
  6.4 Future work
Bibliography

Notation

Abbreviations

VR: Virtual Reality
HMD: Head Mounted Display
GUI: Graphical User Interface
HUD: Heads Up Display

1 Introduction

Virtual reality is a concept that has existed for a long time. It is only recently that the computational power of today's computers has become capable of creating a sufficiently immersive experience, and that the cost and quality of the displays used have become suitable for commercial use. This has in turn increased the development of virtual reality devices, with the goal of making affordable commercial products. With the increased development of virtual reality hardware, applications that utilize virtual reality are also being developed. These applications have to be designed differently than ordinary applications, taking things like depth, distances, eye strain and simulator sickness into consideration. This does not only affect the rendering of the application but also how the user interacts with it. User interfaces and usability for ordinary applications have been researched for years. However, user interfaces in virtual reality have not been studied to the same extent.

This master thesis project is focused on the interaction with and design of the GUI for a HMD in combination with different input devices. The aim is to determine how to design different GUI elements and how the user can use a set of selected input devices to interact with them. The project was done at the company Voysys in Norrköping. The choice of input devices was based on their features and how compatible they are with the virtual environment in which they will be used.

1.1 Aim

This master thesis project aims to answer the following questions:

• What input device is best combined with Oculus Rift in an environment where the camera’s position is fixed?

• ... virtual environment?

1.2 Constraints

The design of user interfaces is a very broad area. To be able to investigate the challenges with virtual reality, this master thesis focuses on working with Voysys's existing system, implementing the different GUI elements inside their application. This will not give a general conclusion about what always works, as some parts of different systems may work in a different way. The camera that is used in Voysys's system has a static position in the world, which will not provide cases where the user's position may affect the user experience with the input device or the GUI elements.

(17)

2

Background

This chapter includes relevant background information about the main topics of this report.

2.1 Virtual Reality best practice

In this section, two different best practices that should be considered when developing virtual reality applications are summarized. These best practices come from two articles, Oculus Best Practices [Oculus, 2015] by the Oculus Rift team and VR Best Practices Guidelines [Leap, 2015] from the Leap Motion team. This summary only covers the best practices that affect this project. For example, there are best practices regarding movement and rendering which are not interesting here, as the camera has a static position in the world and the rendering is already implemented in the existing system. The best practices mentioned in this section are the ones that could affect the design or implementation of the GUI components and input devices.

When designing the GUI in virtual reality it is important to take into consideration the hardware that is used. The Oculus Rift, with its displays, lenses and rendering technique, places some unique constraints on the GUI. This, in combination with the fact that the GUI exists in 3-dimensional (3D) space, makes the best practice guidelines differ from the traditional guidelines.

Depth is a new dimension that does not exist in traditional 2-dimensional (2D) user interfaces but is an important aspect of the GUI of a virtual reality application. Because of this, it is discouraged to use a traditional heads up display (HUD). A traditional HUD has to be rendered at a certain distance from the user's eyes and occludes everything. If an object moves closer to the user than the HUD, it will still be occluded by the HUD even though it is closer to the user. This creates a contradiction in depth perception. It is instead encouraged to have the GUI as a part of the 3D virtual environment as much as possible [Oculus, 2015].

When placing a GUI component it is important not to require the user to swivel their eyes in their sockets to be able to see the GUI. Ideally, the GUI should fit inside the middle third of the view area [Oculus, 2015]. This is especially important when implementing a HUD-like GUI that follows the movement of the head, as the user cannot move their head to explore it.

If a crosshair or a cursor is needed in the GUI, it is important to draw it at the same depth as the object that it points at. If not, the eyes of the user will focus on the object behind it, making the cursor appear as a doubled image [Oculus, 2015].

Due to the optics of the Oculus Rift, the GUI components should be placed within a range of 0.75 to 3.5 meters from the user's eyes. Objects may exist outside of this range, but converging the eyes on objects closer than the comfortable distance range can cause the lenses of the eyes to misfocus. This makes clearly rendered objects appear blurry and can also lead to eye strain [Oculus, 2015]. Due to the resolution limitations of current HMDs, only the text in the middle of the field of view may appear clear, while text along the periphery may be blurry. To be able to read it, the user has to turn their head, moving the center of the view area along the GUI component containing the text. For this reason, avoiding long lines of text is recommended [Leap, 2015].

If a GUI component containing text is a flat surface, the start of the text will be closer to the user's eyes than the end of the text. When reading the text, the user has to change the focus of the eyes, which can cause distortion and blurring. This problem can be solved by making the GUI component a slightly concave surface, facing the user. This reduces the difference in distance between the start and end of the text and significantly improves readability [Leap, 2015].

With virtual reality, it is possible to have input that is based on a virtual representation of the user inside the virtual environment, a motion controller. For example, tracking the user's hands and having virtual hands inside the world that represent the real hands of the user.

This is a new way of interacting with a system that most users are not used to, which makes it important to make it clear to the user how to interact and with what. Creating good affordance is critical to achieve this. The term affordance refers to the perceived and actual properties of an object, primarily the properties that determine how the object can be used. For example, a button should look like a piston to show the user that a pushing motion should be used. It should also only react to user input when the user moves their hand in the correct direction, pushing the button down [Leap, 2015].
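The following is a minimal sketch of how such a piston-style button could ignore sideways strokes and only register a press along its push axis. The Button type, the world-space fingertip positions and the 10 mm press depth are illustrative assumptions, not the thesis implementation, and a real button would also check that the fingertip is within the extents of the button face.

    #include <glm/glm.hpp>

    // Hypothetical piston-style button: it reacts only when the fingertip
    // moves along the button's push axis (against the surface normal).
    struct Button {
        glm::vec3 surfaceCenter;        // resting position of the button face (world space)
        glm::vec3 normal;               // unit normal pointing out of the button, towards the user
        float     pressDepth = 0.01f;   // metres of travel needed to register a click (assumed)
        bool      pressed = false;

        // Call once per frame with the current and previous fingertip positions.
        // Returns true exactly once per completed press.
        bool update(const glm::vec3& tipNow, const glm::vec3& tipBefore) {
            // Signed distance of the fingertip "behind" the button face.
            float depthNow    = glm::dot(surfaceCenter - tipNow, normal);
            float depthBefore = glm::dot(surfaceCenter - tipBefore, normal);

            // Only count movement that pushes into the button, not sideways
            // strokes or the finger being pulled back out again.
            bool pushingIn = depthNow > depthBefore;

            if (!pressed && pushingIn && depthNow >= pressDepth) {
                pressed = true;         // fire the click event here
                return true;
            }
            if (depthNow < 0.0f)
                pressed = false;        // finger has left the button, re-arm it
            return false;
        }
    };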

With traditional inputs, such as a mouse, you get tactile feedback when pressing a button, as that button is a physical object that your finger touches. If the GUI only exists in the virtual world along with virtual representations of the user's hands, no tactile feedback exists, as there is no real object to touch. This makes it important to give the user other feedback when interacting with the GUI.

Visual feedback should be used for any casual movement. For example, if a user moves their hand close to a button, the button should rescale or move even if the button is not fully pressed. When this happens, the kinetic response of the object coincides with the mental model that the user has, making the interaction easier to anticipate [Leap, 2015].

Combining the visual feedback with aural feedback can create the sensation of tactile feedback. This can be a click sound when the button is fully pressed, indicating that the input has been registered, which fits into the mental model the user has of pressing a button [Leap, 2015].

Having the correct scaling of the GUI components is important. When designing a traditional GUI that uses a mouse as input, the buttons can be smaller, as the precision of a mouse is higher than that of a motion controller. The inexperience of most users and the physical strain that a motion controller places on the body also greatly decrease the precision.

Furthermore, it is also important to think about how the user should interact with the GUI when scaling the components. If the user uses the whole hand to interact with the GUI, the buttons should be big enough so that the hand does not accidentally click two buttons at once. If, instead, the user only uses the tip of a finger to interact, the button size can be reduced.

In the real world, people interact with objects that are obscured by their hands on a regular basis. This normally works because touch provides feedback. As the virtual GUI does not exist in the real world, touch cannot be used to help the user interact with occluded objects. Two techniques that can help the user are to make the hands semi-transparent so that the user can see what is behind them, or to never make the object that the user interacts with smaller than the part of the hand that interacts with it [Leap, 2015].

A way to reduce the error rate when having a virtual representation of the user's hand in the virtual environment is to have only one part of the hand that is able to interact with the GUI, for example the tip of the index finger [Leap, 2015]. This representation opens up new possibilities when it comes to placement of the GUI components. They could be placed stationary in the world, stationary relative to the user's movement, at a fixed position inside the user's field of view, or placed on the user's hand. It is important to consider these new options when building a scene for the virtual environment [Leap, 2015].

When implementing the Leap Motion as an input device in a system it is important to consider how the hardware works. The user should be encouraged to keep their fingers splayed, with the hands perpendicular to the Leap Motion controller, whenever possible, as this pose is the most reliably tracked by the tracking system [Leap, 2015].

The Leap Motion software comes with a Confidence API which can be used to get a confidence value for the tracked hands. The lower the value, the more implausible the hand is. For example, when one hand moves in front of the other, the confidence value for the occluded hand will go from 1.0 towards 0.0. With this, especially when using a head mounted device, it is possible to find implausible hands that are still tracked by the system and discard them [Leap, 2015]. Avoiding interactions when the hands are at the edge of the field of view of the Leap Motion controller is encouraged, as it can result in spontaneous hand

2.2 User Testing

User testing is a tool for evaluating the usability of a system. It has existed for a long time and today it is a part of almost every development project that involves a user interface or user interaction in general. This section is a review of a paper by [Nielsen, 1993] and a book by [Shneiderman, 1992].

User testing is done by setting up a test environment in the system that consists of a set of tasks that span the system's expected use. A group of users (often around 10 people) that have never seen the system before are then presented with the tasks and should complete them with minimum guidance from the observing developer. While a user completes the tasks, the developer observes how he/she interacts with the system.

The practical outcome of a user test is a list of usability problems and suggestions for interface improvements. This means finding bad designs and getting an estimation of the users' subjective satisfaction when using the system. To be able to measure usability we first have to define it. Nielsen [Nielsen, 1993] says that a system is usable if it is:

• easy to learn: users can go quickly from not knowing the system to doing some work

• efficient: letting the expert user attain a high level of productivity

• easy to remember: infrequent users can return after a period of inactivity without having to learn everything all over

• relatively error-free or error-forgiving: users do not make many errors, and when they do, the errors are not catastrophic

• pleasant to use: satisfying users subjectively, so they like to use the system

It is important to remember that not all of these attributes are equally important to all systems. For a system where the common user only tries it once and never again, it is more important that the system is easy to learn than that it is easy to remember.

To be able to evaluate these attributes, questionnaires and surveys are commonly used. They are a way of gathering more data than the observation of the user test. Most use a Likert-like scale, where the answer to a question is a scale between two opposites (e.g. easy/difficult). The answer has the same amount of positive and negative choices and the distance between values is the same.

Designing user interfaces with user testing is often done iteratively. Each iteration consists of completing a design, having a user test the system and observing all problems with the design. After this, the developer fixes the observed problems. Another user test is then conducted to see if the fixes solved the problems that the users experienced and to find new usability problems introduced by the changes.

In general, the first few iterations result in major usability gains, as the developer finds the true usability catastrophes [Nielsen, 1993]. For each iteration, the developer learns from the previous one, making it easier to find and fix more system-breaking design flaws.

User testing is not the only way to evaluate the usability of a system. There are alternatives to user testing where the developer evaluates the user interface without users, for example heuristic evaluation and cognitive walkthrough.

A heuristic evaluation is an informal method and involves having usability specialists judge whether an interface follows a set of general principles or rules of thumb. The list of usability rules that are followed is based on the experience of different developers [Lewis and Rieman, 1993].

A cognitive walkthrough is when you have a prototype or a detailed design description of the interface of a system and you try to tell a believable story of how to complete a task in that system. To make the story believable you have to motivate each of the user's actions based on their general knowledge. If a believable story cannot be told, a problem with the interface has been found [Lewis and Rieman, 1993].

2.3 OpenCV

OpenCV is a cross-platform open source computer vision library available online. The development started in 1998 at Intel and the library is now supported by Willow Garage and Itseez [itseez, 2015]. It is written in optimized C/C++ that takes advantage of multi-core processing and has interfaces for C++, C, Python and Java. It contains a mix of low-level image-processing functions and high-level algorithms [Pulli et al., 2012]. These algorithms can be anything from matrix and vector manipulation with linear algebra, to basic image filtering with edge detection and corner detection, to motion analysis with optical flow and tracking.

In 2010 a new module was released for OpenCV that added support for Graphics Processing Unit (GPU) accelerated computing without requiring any knowledge of GPU programming. The GPU API is consistent with the CPU API, making it easy to take an algorithm that works on the CPU and run it on the GPU instead.
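As a rough illustration of this CPU/GPU symmetry, the sketch below converts an image to grayscale first with the ordinary CPU call and then with the corresponding call from the gpu module. It assumes an OpenCV 2.4-era build with CUDA support, which matches the time frame of this thesis but is not taken from its implementation, and the input file name is hypothetical.

    #include <opencv2/opencv.hpp>
    #include <opencv2/gpu/gpu.hpp>   // CUDA-accelerated module (OpenCV 2.4 naming)

    int main() {
        cv::Mat frame = cv::imread("input.png");   // hypothetical input image
        cv::Mat grayCpu;

        // CPU version.
        cv::cvtColor(frame, grayCpu, cv::COLOR_BGR2GRAY);

        // GPU version: the same operation with the same parameters, but the
        // data lives in GPU memory and has to be uploaded/downloaded explicitly.
        if (cv::gpu::getCudaEnabledDeviceCount() > 0) {
            cv::gpu::GpuMat gpuFrame, gpuGray;
            gpuFrame.upload(frame);
            cv::gpu::cvtColor(gpuFrame, gpuGray, cv::COLOR_BGR2GRAY);

            cv::Mat grayFromGpu;
            gpuGray.download(grayFromGpu);
        }
        return 0;
    }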

A GPU can deliver an increase in speed of 30 times for low-level functions and up to 10 times for high-level functions, which include more overhead

3 Application

This chapter describes the existing system at Voysys and the hardware that was used during the project.

3.1 Existing System at Voysys

Voysys has an existing system for rendering a 360 degree panorama video in a virtual environment. The camera has a static position in the middle of the scene and is surrounded with a sphere that has a video feed projected on it. This video feed consists of the video from 6 different cameras which are positioned in a cluster to cover the whole spherical view of a scene. They overlap each other to create a stitched panorama video which is projected on the inside of the sphere that surrounds the camera. This system will be used for the development and testing of GUI components.

To render the system in virtual reality, Voysys is using a HMD called Oculus Rift (see 3.2 for more technical information). The rendering is written by Voysys in OpenGL using the provided Oculus Rift shaders to get the correct stereoscopic 3D.

Voysys has developed a simple input device for the Oculus Rift which is based on its viewing direction. A ray is cast in the viewing direction and an input handler checks if the ray collides with any collision surfaces. When the ray collides with the surface of a GUI component, a timer starts. When the timer exceeds a defined click time, a click event is triggered by that GUI component.
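The sketch below shows one way such a gaze-and-dwell input handler could be structured. The GuiComponent interface, the intersection test and the 1.5 second dwell time are assumptions for illustration, not Voysys's actual code.

    #include <glm/glm.hpp>
    #include <vector>

    // Hypothetical interface for anything the gaze ray can click.
    struct GuiComponent {
        virtual bool intersects(const glm::vec3& origin, const glm::vec3& dir) const = 0;
        virtual void onClick() = 0;
        virtual ~GuiComponent() {}
    };

    class GazeDwellInput {
    public:
        explicit GazeDwellInput(float clickTimeSeconds = 1.5f) : clickTime(clickTimeSeconds) {}

        // Call once per frame with the head pose and the frame time in seconds.
        void update(const glm::vec3& headPos, const glm::vec3& viewDir,
                    float dt, std::vector<GuiComponent*>& components) {
            GuiComponent* hit = nullptr;
            for (GuiComponent* c : components)
                if (c->intersects(headPos, viewDir)) { hit = c; break; }

            if (hit != nullptr && hit == lastHit) {
                timer += dt;                 // still looking at the same component
                if (timer >= clickTime) {
                    hit->onClick();          // dwell time exceeded: trigger the click
                    timer = 0.0f;
                    lastHit = nullptr;       // reset so another full dwell is needed
                    return;
                }
            } else {
                timer = 0.0f;                // gaze moved to another (or no) component
            }
            lastHit = hit;
        }

    private:
        float clickTime;
        float timer = 0.0f;
        GuiComponent* lastHit = nullptr;
    };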

A problem with this system is that if you move your eyes, you can look at an object without the Oculus Rift facing it. The user thinks that a click should trigger, but the system does not trigger one, as it cannot track the user's eyes.

3.1.1 Scene graph

A scene is a collection of objects that exist in a virtual environment. The scenes in Voysys's system are built from scene graphs. The scene graph is a tree data structure where all the nodes of the tree are objects in the scene. Each node can have many children but only one parent. A change made in the parent will also be applied to its children. This means that an operation made on a node automatically propagates through the group of nodes that depend on this operation. This is mostly used to apply the correct transform (translation, rotation and scale) to different objects in the scene. Using this technique, different GUI components can easily be placed in groups that move, scale and rotate together.

Figure 3.1: An illustration of the tree structure of the scene graph. There are two panorama videos, each of which has one GUI component.

As is shown in figure 3.1, each of the panorama videos, p1 and p2, has a GUI component, c1 and c2. If, for example, both c1 and c2 are placed at the position [0.0, 0.0, 0.0], they will not end up at the same location. Their positions in the world are relative to their parents, which will place c1 at the same location as p1 and c2 at the same location as p2. The GUI component c2 in figure 3.1 has a collision mesh as a child. This means that if the GUI component moves, scales or rotates, the collision mesh will follow it and transform with it.
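As a generic illustration of how a parent's transform propagates to its children, the sketch below composes each node's local translation, rotation and scale with the parent's world matrix while walking the tree. The node layout and the use of GLM are assumptions, not the structure of Voysys's scene graph.

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/quaternion.hpp>
    #include <vector>

    struct SceneNode {
        glm::vec3 position{0.0f};
        glm::quat rotation{1.0f, 0.0f, 0.0f, 0.0f};
        glm::vec3 scale{1.0f};
        glm::mat4 worldTransform{1.0f};        // filled in by update()
        std::vector<SceneNode*> children;

        glm::mat4 localTransform() const {
            return glm::translate(glm::mat4(1.0f), position)
                 * glm::mat4_cast(rotation)
                 * glm::scale(glm::mat4(1.0f), scale);
        }

        // Recursively combine the parent's world transform with each node's
        // local transform, so that moving p1 also moves its GUI component c1.
        void update(const glm::mat4& parentWorld = glm::mat4(1.0f)) {
            worldTransform = parentWorld * localTransform();
            for (SceneNode* child : children)
                child->update(worldTransform);
        }
    };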

3.1.2 Voyage file

To define the scene graph described in section 3.1.1, the system uses a file format named .voyage. This file contains the definitions of the scene that is to be rendered, in a simple way that makes it easy to create new scenes with different content. The voyage file is then parsed by the system on startup.

Here is an example of what an object looks like in the voyage file:

    {
        type: "panorama_video";
        id: "p1";
        on_end: "loop";
        position: [0.0, 0.0, 0.0];
        rotation: [0.0, -90.0, 0.0];
        config: "lobby/ericsson_lobby.pv";
        children: ( ... );
    }

This object is a panorama video that loops when it ends. It is placed at world coordinates [0.0, 0.0, 0.0] and is rotated -90 degrees around the y-axis. It also has a set of children, which are defined with the same structure inside the children attribute of the object.
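For illustration, a parser for this format might fill a structure like the following for each object before the scene graph is built. The field names mirror the example above, but the struct itself and the recursive storage of children are assumptions about how such a parser could be written, not Voysys's code.

    #include <array>
    #include <string>
    #include <vector>

    // Hypothetical in-memory representation of one object in a .voyage file.
    struct VoyageNode {
        std::string type;                  // e.g. "panorama_video"
        std::string id;                    // e.g. "p1"
        std::string on_end;                // e.g. "loop"
        std::array<float, 3> position{};   // [x, y, z] in world coordinates
        std::array<float, 3> rotation{};   // Euler angles in degrees
        std::string config;                // e.g. "lobby/ericsson_lobby.pv"
        std::vector<VoyageNode> children;  // nested objects with the same structure
    };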

3.2 Oculus Rift

Oculus Rift Development Kit 2 (DK2) is the head mounted display that Voysys uses to display their virtual reality. It is the second development kit released and it has a higher resolution screen, lower latency, a higher refresh rate and positional tracking compared to the original Development Kit 1 (DK1).

Figure 3.2: The Oculus Rift DK2 head mounted display and its positional infrared sensor [Oculus, 2015 (accessed May 6, 2015)].

of view thanks to the size of the screen. It uses a gyroscope, an accelerometer and a magnetometer to track the orientation of the user's head. This lets the user look around in the virtual world by panning, tilting and rolling the head. The Oculus Rift DK2 has an array of LEDs spread across the surface of the HMD. The near-infrared CMOS sensor (see figure 3.2, at the bottom right of the figure), which is used to track the position of the Oculus Rift, makes it possible to lean forward and inspect objects or look around corners in the virtual world. The CMOS sensor is connected to the computer with a USB cable and is also connected to the HMD to be able to synchronize them, making the recorded movement of the head as close to the real movement of the head as possible.

The HMD is connected to the computer with an HDMI and a USB cable. There are two modes that the Oculus can be used in: extended mode and direct mode. Direct mode is less compatible and only a few demos use this mode at the moment. Extended mode is when the Oculus Rift acts as a second display and is more compatible than direct mode.

Figure 3.3: Rendering to the Oculus Rift screen.

Figure 3.3 shows what rendering for the Oculus Rift looks like. The whole scene is rendered twice, once for each eye. The cameras are offset from each other by the same distance as between the eyes of a person, facing the same point in the world. This technique gives stereoscopic 3D rendering, which enhances the sense of depth, emulating how the eyes see the world. To increase the sense of field of view for the Oculus Rift, lenses are placed between the eyes and the displays inside it.
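The sketch below builds the two per-eye view matrices in the way just described, offsetting each camera by half the eye separation along the camera's right vector while keeping both facing the same point. The 64 mm eye separation and the use of GLM are illustrative assumptions, not Voysys's implementation, which uses the shaders provided with the Oculus Rift.

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Build one view matrix per eye from a single head position.
    // eyeSeparation is the distance between the eyes; 0.064 m is a common
    // assumption, not a value taken from the thesis.
    void stereoViewMatrices(const glm::vec3& headPos, const glm::vec3& target,
                            const glm::vec3& up, float eyeSeparation,
                            glm::mat4& leftView, glm::mat4& rightView) {
        glm::vec3 forward = glm::normalize(target - headPos);
        glm::vec3 right   = glm::normalize(glm::cross(forward, up));

        // Offset each eye half the separation along the camera's right vector;
        // both eyes keep facing the same point, as described above.
        glm::vec3 leftEye  = headPos - right * (eyeSeparation * 0.5f);
        glm::vec3 rightEye = headPos + right * (eyeSeparation * 0.5f);

        leftView  = glm::lookAt(leftEye,  target, up);
        rightView = glm::lookAt(rightEye, target, up);
    }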

These lenses distort the image with a pincushion distortion (see A in figure 3.4). If the scene were rendered with the same technique that is used for standard desktop rendering, a flat image, the image would look more distorted further away from the center of the screen. To compensate for the lens distortion, a distortion shader is used when rendering the images. This shader applies a barrel distortion (see B in figure 3.4) which counteracts the distortion from the lenses [Niemelä, 2014]. The end result is seen in figure 3.3, where each image for the different eyes is rendered with a barrel distortion.

Figure 3.4: A visualization of distortions. A is pincushion distortion and B is barrel distortion.

3.3 Available input devices

There are a lot of different input devices available for virtual reality with the Oculus Rift. Each of these has different advantages and is suited for different applications. Two input devices commonly used for hand tracking with the Oculus Rift are the Leap Motion and the Razer Hydra. The Leap Motion has an article written by its developers about best practices when implementing the Leap Motion with virtual reality [Leap, 2015]. It also has a user friendly API with good documentation [LeapAPI, 2015]. The Razer Hydra has been used, for example, in a project to create a training environment for a surgical robot [Grande et al., 2013] with good results and can also be used in combination with the Oculus Rift in video games such as Half Life 2 and Minecraft.

3.3.1 Leap Motion

The Leap Motion is a motion controller that is used to track the hands of a user, or objects that the user holds in their hands, in 3D space in front of him/her. The controller can either be placed on a surface, facing up, or placed on the Oculus Rift with a special mount. To track the hands of the user it uses three infrared light (IR) emitters and two IR cameras. This allows it to find the positions of the

Figure 3.5: The Leap Motion controller with all components visible [Leap, 2015 (accessed May 7, 2015)].

The effective range of the Leap Motion controller extends from approximately 25 to 600 millimeters above the device, in a field of view of about 150 degrees [LeapAPI, 2015]. The manufacturer states that the sensor's accuracy in position detection is about 0.01 mm. Recent research has shown that an accuracy below 0.2 mm for static setups and 0.4 mm for dynamic setups is obtained in realistic scenarios [Weichert et al., 2013]. With this kind of precision, the limiting factor for the obtainable user performance in pointing tasks is not the accuracy and precision of the Leap Motion but the human motor system itself [Bachmann et al., 2014].

This field of view and tracking distance means that you can only use your hands in a limited space in front of the controller. If it is mounted on the Oculus Rift, you will not be able to point to the edge of your periphery, as it is outside of the field of view of the Leap Motion controller.

The Leap Motion has an API that gives easy access to the positions of tracked objects in millimeters relative to the controller. These positions are given in the coordinate system shown in figure 3.6. The coordinate system of OpenGL is the same as the Leap Motion's when it is placed on a surface facing up: the x-axis is to the right, the y-axis up and the negative z-axis into the screen. This is useful as the developer does not have to handle the raw data. The API also makes it easy to track gestures with the hands.

As the Leap Motion tracks the user's hands and fingers, no physical controllers have to be held in the hands of the user.

Positioning the Leap Motion on an Oculus Rift is easy with the optional Oculus mount. This makes the Leap Motion move with the Oculus Rift when the head moves, allowing the hands to be tracked even when the user is turning around. An important thing to note is that when the Leap Motion controller is placed on the Oculus Rift, the coordinate system is rotated. A transformation has to be done to map the object coordinates to world coordinates.
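The sketch below reads the index fingertip through the Leap Motion C++ API and maps it into world space. The API calls (Controller, frame(), hands(), fingers(), tipPosition(), confidence()) exist in the Leap SDK v2, while the millimetre-to-metre scaling, the confidence threshold and the mountRotation and headPose matrices are assumptions standing in for whatever calibration the real system uses.

    #include <Leap.h>
    #include <glm/glm.hpp>

    // Returns true and writes the index fingertip position in world space.
    // mountRotation: fixed rotation compensating for the controller being mounted
    //                on the HMD instead of lying on a desk (assumed, needs calibration).
    // headPose:      current head transform from the Oculus tracking (assumed).
    bool indexTipWorld(const Leap::Controller& controller,
                       const glm::mat4& mountRotation,
                       const glm::mat4& headPose,
                       glm::vec3& outWorldPos) {
        const Leap::Frame frame = controller.frame();
        for (const Leap::Hand& hand : frame.hands()) {
            if (hand.confidence() < 0.3f)        // discard implausible hands (threshold assumed)
                continue;
            for (const Leap::Finger& finger : hand.fingers()) {
                if (finger.type() != Leap::Finger::TYPE_INDEX)
                    continue;
                Leap::Vector tip = finger.tipPosition();   // millimeters, Leap coordinates
                glm::vec4 local(tip.x * 0.001f, tip.y * 0.001f, tip.z * 0.001f, 1.0f);
                outWorldPos = glm::vec3(headPose * mountRotation * local);
                return true;
            }
        }
        return false;
    }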

Figure 3.6: The coordinate system that the Leap Motion uses when tracking objects [Leap, 2015 (accessed May 7, 2015)].

The Leap Motion controller has difficulty maintaining accuracy when the whole hand is not in direct line of sight of the controller. This means that if a hand is rotated from the palm facing towards the Leap Motion to the palm being perpendicular to it, the detection can lose track of the hand [Potter et al., 2013]. This leads to either noise in the tracking or an untracked hand.

3.3.2 Razer Hydra

The Razer Hydra is a game controller that consists of two handheld controllers, one for each hand, that are connected with a cord to a base unit. The base station uses a low-powered magnetic field to position the controllers. This gives each controller 6 degrees of freedom: positioning along the x-axis, y-axis and z-axis, as well as pitch, roll and yaw when moving them around in space.

The controllers have an analog trigger button which can be used for pinching of fingers for grabbing objects in the world. Furthermore, each controller has an analog joystick, a digital bumper button and four digital buttons [Grande et al., 2013].

To track the two handheld controllers, the base station of the Razer Hydra emits a low energy magnetic field that both hand controllers record to estimate their precise position and orientation. The controllers and the base station each contain three magnetic coils. These coils work in tandem with the amplification circuitry, digital signal processor and positioning algorithm to translate field data into position and orientation [Mei et al., 2014].

The Razer Hydra is able to track the controllers with sub-millimeter positioning accuracy and millidegree orientation accuracy [Mei et al., 2014] when the controllers are closer than 50 cm to the base unit; otherwise the tracking becomes unstable [Kuntz and Cíger, 2012].

Figure 3.7: The Razer Hydra controller [Razer, 2015 (accessed May 8, 2015)].

device. It is a cross-platform minimal C interface for getting data and sending commands to and from the Razer Hydra.
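The C interface referred to above is presumably the Sixense SDK, which is commonly used to read the Razer Hydra; a polling loop over its API could look like the sketch below. How Voysys actually initialises and reads the device is not described here, so treat the loop structure and the scaling from millimetres to metres as assumptions.

    #include <sixense.h>
    #include <cstdio>

    int main() {
        if (sixenseInit() != SIXENSE_SUCCESS)    // start the Sixense driver
            return 1;
        sixenseSetActiveBase(0);                 // use the first (only) base station

        for (int frame = 0; frame < 1000; ++frame) {
            sixenseAllControllerData data;
            sixenseGetAllNewestData(&data);      // latest sample for every controller

            for (int c = 0; c < 2; ++c) {        // the Hydra has two hand controllers
                const sixenseControllerData& ctl = data.controllers[c];
                // Positions are relative to the base station, in millimeters
                // (assumed here); convert to meters for the virtual world.
                float x = ctl.pos[0] * 0.001f;
                float y = ctl.pos[1] * 0.001f;
                float z = ctl.pos[2] * 0.001f;
                std::printf("controller %d: (%.3f, %.3f, %.3f) trigger %.2f\n",
                            c, x, y, z, ctl.trigger);
            }
        }
        sixenseExit();
        return 0;
    }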

4 Method

4.1 Input devices

To improve the existing input system that Voysys has, changes were made to the head tracking input device. The problem was that the user looks around inside the Oculus and this leads to confusion about where the center of the view is. There are eye tracking devices for the Oculus that could be used to solve this problem, but as Voysys did not have this extension to the Oculus, another solution was implemented instead. This solution was to add a small crosshair in the middle of the screen to help the user see where the head is facing. The crosshair is drawn at the same distance as the object the user is looking at, to reduce eye strain.

Two new input devices were implemented and evaluated in the system, both compared to each other and compared to the improved version of Voysys's existing simple input system. These new input devices are the Razer Hydra and the Leap Motion, as they are commonly used in combination with the Oculus Rift. By testing three different input devices, the advantages and disadvantages in different situations can be found for each device. This gives an idea of which device is most suited for the existing system at Voysys.

The input device system was designed to make it easy to switch between the different input devices and to change the offset of the placement of the hands in the world. The Razer Hydra, which has physical controllers that the user holds in their hands, does not only have spatial tracking but also physical buttons. A use for these buttons was not implemented, as the concept of hand tracking for input was the focus of the project.

The tracking of the user's hands is translated to world coordinates and represented in the world as virtual hands. Each hand consists of a sphere for the palm of the hand and a number of smaller spheres for the joints of the fingers. Only one of these joints is able to interact with GUI components, to reduce the error rate when the user clicks on smaller buttons that are grouped together. The Leap Motion tracks the actual hands of the user and has hands with 3 joints per finger, where the tip of the index finger is the joint that can interact with the GUI. This is a very simple but natural visualization of a hand. The Razer Hydra's hands have only one joint per hand, which is able to interact with the GUI. This is because it does not track the actual hand, but only the controller that is held in the hand.

Figure 4.1: To the left is the Razer Hydra hand which only has one sphere for the palm and one sphere for the index finger. To the right is the Leap Motion hand which has one sphere for the palm and three spheres for each finger.

Both the Leap Motion and the Razer Hydra are mapped as closely to the movement of the actual hands of the user as possible, to ensure as high an immersion into the virtual world as possible. Each of the input devices provides different data about the position of the hands, for example in different coordinate systems. They are converted to the same format so that the system can move the hands accordingly.
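One way to express such a common format is a small device-agnostic interface that every input backend fills in. The type names and fields below are illustrative assumptions rather than the types used in Voysys's code; they only show the idea of normalising both devices to the same hand representation.

    #include <glm/glm.hpp>
    #include <vector>

    // Device-agnostic hand pose in world coordinates (meters).
    struct HandPose {
        glm::vec3 palm;                    // palm sphere position
        std::vector<glm::vec3> joints;     // finger joint spheres (15 for Leap, 1 for Hydra)
        glm::vec3 interactionPoint;        // the single joint allowed to click the GUI
        bool tracked = false;
    };

    // Every input backend (head tracking, Razer Hydra, Leap Motion) converts its
    // own coordinate system and units into this format.
    class InputDevice {
    public:
        virtual ~InputDevice() {}
        // Returns up to two hands in world space for the current frame.
        virtual std::vector<HandPose> poll(const glm::mat4& headPose) = 0;
    };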

4.2 GUI components

This section describes the different GUI components that were implemented in Voysys's existing system. A GUI component consists of a GUI element and a GUI container, where the element handles the interaction and rendering and the container handles the positioning, scaling, rotation and animation. Three different containers were implemented:

• Fixed view position: fixed in the view space. When the user moves their head, the container follows.

• Fixed world position: fixed at a position in the virtual world.

• Fixed body position: follows the user's wrist.

These containers update their own positions based on what type they are. The fixed view container moves as the user moves their head. It has a fixed distance from the user's eyes and the position of the container is a 2-dimensional vector where (0, 0) is the middle of the screen.

The fixed world container has its position defined as a 2-dimensional vector where the values of the vector are angles. This is because the container is placed in the world based on spherical coordinates, with the radius set to a fixed distance. The x-value of the vector is the rotation in the x-plane (around the y-axis) and the y-value is the rotation in the y-plane (around the x-axis). The container is created facing the camera. In the previous system you had to define a position in space coordinates (x, y, z) and a rotation of the object. As the GUI should always face the user to maximize usability, the manual definition of the rotation was removed.
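A minimal sketch of this placement is given below: the two angles and the fixed radius are turned into a world position, and the container's orientation is derived so that it faces the camera. The angle conventions (yaw around the y-axis, pitch around the x-axis) follow the description above, but the exact formulas are an assumption, not Voysys's code.

    #include <cmath>
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Place a fixed-world GUI container from two angles (degrees) and a fixed
    // radius, and orient it so that it faces the camera at cameraPos.
    glm::mat4 fixedWorldPlacement(float yawDeg, float pitchDeg, float radius,
                                  const glm::vec3& cameraPos) {
        float yaw   = glm::radians(yawDeg);    // rotation around the y-axis
        float pitch = glm::radians(pitchDeg);  // rotation around the x-axis

        // Spherical coordinates: start on the -z axis at the given radius.
        glm::vec3 offset(
             radius * std::cos(pitch) * std::sin(yaw),
             radius * std::sin(pitch),
            -radius * std::cos(pitch) * std::cos(yaw));
        glm::vec3 position = cameraPos + offset;

        // A model matrix whose -z axis points back at the camera, so the
        // container always faces the user.
        return glm::inverse(
            glm::lookAt(position, cameraPos, glm::vec3(0.0f, 1.0f, 0.0f)));
    }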

The fixed world container has a simple animation system implemented, hiding it from the user until his/her hands are tracked by the Leap Motion. Then the GUI container moves from behind the camera to its position in the world. This simple animation system gives the possibility to hide part of the user interface from the user until he/she actually interacts with the system.

The fixed body container is only implemented to work with the Leap Motion and is placed on the wrist of the user's virtual hand. It moves and rotates as the user moves their hand.

To be able to test how a user interacts with a real system, the following types of GUI elements were implemented:

• Image: an image that does not react to user interaction.

• Transport: moves the user to a different camera location when clicked. • Slider: lets the user set a value between 0.0 and 1.0. Updates when finger is

close enough to the slider.

• Text input: shows a keyboard on click. Interactions with the keyboard will update the text of the text input.

The image GUI element is only used to show information to the user and does not react to interaction.

The transport GUI element moves the user from one camera location to another. Each of the camera locations is surrounded by a panorama video sphere and can have different GUI elements in it.

The slider is a GUI element that has a value between 0.0 and 1.0. As the hand of a user gets close enough to interact, the whole slider lights up, indicating that it is being manipulated. To set the value of the slider, the user just moves their finger along the surface of the GUI element. Moving the slider to the left decreases the value and moving it to the right increases it.
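The sketch below shows one way to turn a fingertip position into such a slider value by projecting it onto the slider's axis and clamping the result to the range 0.0 to 1.0. The proximity threshold and the Slider description are assumptions for illustration, not the implemented component.

    #include <glm/glm.hpp>

    struct Slider {
        glm::vec3 leftEnd;      // world position of the 0.0 end
        glm::vec3 rightEnd;     // world position of the 1.0 end
        float     value = 0.5f;
        float     activationDistance = 0.03f;   // meters; assumed threshold

        // Update the slider from the fingertip position; returns true if the
        // finger is close enough to manipulate the slider.
        bool update(const glm::vec3& fingertip) {
            glm::vec3 axis = rightEnd - leftEnd;
            float length2  = glm::dot(axis, axis);

            // Project the fingertip onto the slider axis and clamp to [0, 1].
            float t = glm::clamp(glm::dot(fingertip - leftEnd, axis) / length2, 0.0f, 1.0f);

            // Distance from the fingertip to the closest point on the slider.
            glm::vec3 closest = leftEnd + t * axis;
            if (glm::distance(fingertip, closest) > activationDistance)
                return false;           // too far away: leave the value untouched

            value = t;                  // finger is on the slider: update the value
            return true;
        }
    };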

Figure 4.2: The slider GUI element inside a fixed world GUI container with the Leap Motion hands in front of it.

The text input GUI element consists of two parts: a clickable text field that holds a text and a keyboard that is hidden at first. When the user clicks on the text field, the keyboard moves its keys from behind the text input to the bottom of it, at the same time as the text field moves up. When the user clicks the keys on the keyboard, the text in the text field is updated.

Figure 4.3: The text input GUI element inside a fixed world GUI container with the Leap Motion hands in front of it.

With these components, a realistic scenario can be created so that a user test can be conducted.

4.3 User testing

A user test was used to evaluate the different input devices and GUI components. This method was used as it is important to involve users in the interface design process [Lewis and Rieman, 1993].

The user test had a total of 8 users participating, as this is enough to get a good result [Nielsen, 1993]. Three different test environments were used to be able to evaluate the different input devices. Each of these test environments consisted of different phases which presented different GUI components and challenged the user in different ways. Some of the phases were identical for the different input devices as, for example, the head tracking input and the Leap Motion both work independently of the rotation of the user. The Razer Hydra, on the other hand, has to have some phases specialized, with the GUI elements present only in front of the user, as the input device is based on the base station that is placed in front of the user. The precision loss when moving the Hydra controllers too far from the base station, in combination with the cords that connect the controllers to the base station, hinders the user's possible movement.

The goal of the different phases of the user test was to test the different GUI components and their behavior. In each phase, two colors were used for the different GUI components. The goal for the user was to interact only with the green components and to avoid the red components. If the user accidentally clicked, or was forced to click, a red component by not finding the green component, it was noted as an error. Even if an error was made, the user still moved on to the next phase.

After testing an input device, the user was asked to answer short questions about the usability of the different kinds of GUI components. The questions used a Likert scale ranging from 1 to 7 for each kind of GUI component that was presented during the test, and a comment field where the user could leave comments about the tested environment and input device. After testing the last input device, the user was asked which input device he/she would use if he/she were to try the system again.

During the test, the observer watched the user while they tried to complete the given task for each phase. During this, a technique called think aloud was used. When using this technique, the user is invited to talk about what they do, why they do it and to react verbally while doing the test. This affects the task time, as verbalizing the thought process creates additional cognitive load, and the user might even pause the task activity to talk about something that they react to [Shneiderman, 1992]. This does not affect the user tests of this system as much as it would in the case of a system where productivity is the main concern.

To get a better GUI system, an iterative approach was used when doing the user tests. This means using a cycle of testing and implementing fixes for problems that are observed. The observations and the fixes that are implemented are documented for each phase.

First, the system was tested by 1 user to find the largest bugs and usability issues. After this there was a smaller implementation phase fixing these problems,

impossible. Then another test phase was performed, with a slightly larger group of two users. This found more of the larger problems with the system and was followed by another implementation phase. With the improved version of the system, a larger-scale user test was done. This consisted of five users.

4.3.1 Head tracking

The phases of the head tracking test consist of:

1. The user gets a chance to test the virtual environment, having a button almost in front of the user, making it easy to locate.

2. The user is introduced to turning the head to find components, to determine if GUI components that are not placed in front of the user are harder to interact with.

3. The user has to enter the word "oculus" in a text field with a virtual keyboard. This shows how well the input device performs in a precision-based task.

4.3.2 Razer Hydra

For the Razer Hydra, two phases were added to test the interaction with GUI elements fixed in the view. This would be impossible to test with the head tracking, as the click is triggered when the user looks at an object long enough. If the object moves with the movement of the head, the user cannot look at it and therefore cannot interact with these kinds of components.

1. The user has to click correctly on the green button when 4 buttons (3 of them red) are presented close to one another.

2. The user is introduced to the fixed view component. In this phase, the green button is fixed in the world and the red buttons are fixed in the view. This shows whether placing interactable GUI elements in a fixed view position interferes with the task that the user tries to complete, for example accidentally moving the head and clicking something that was not intended.

3. The user is presented with one green GUI component fixed in the view and three red GUI components fixed in the world, to see how hard it is for the user to interact with something that moves with the head.

4. This phase is the same as phase 3 for head tracking but with an added slider. It further tests the precision of the input device.

4.3.3 Leap Motion

As both the Razer Hydra and the Leap Motion are positional controllers, the tests for them are very similar. The only difference is that the placement of GUI elements is restricted for the Razer Hydra due to the base station and the range of the controllers.

The Leap Motion also adds a phase where a GUI element is fixed to the user's wrist. This phase only exists for the Leap Motion, as it is the only input device for which this is implemented.

1. This phase is the same as phase 1 for the Razer Hydra but it has animations on the buttons. These animations were only implemented for the Leap Motion at the time and could only be tested with this input device. Animations could add value beyond usability and this phase evaluates that.

2. The user is presented with 3 red buttons that are fixed in the world and one green button that is fixed on the wrist of the user. This phase is only available for the Leap Motion and tests if the user can find the green button without any guidance. It also shows whether the user thinks that it is intuitive to interact with a GUI element that follows the movement of the body.

3. Same as Razer Hydra phase 2.

5 Result

This chapter presents the results of this thesis, both the result of the implementa-tion and the user test.

5.1 Implementation

This section presents the result of the implementation of the GUI system in Voysys's existing system. Not everything that was planned to be implemented exists in the final product. This was due to lack of time and underestimation of other implementation tasks. The wrist component for the Razer Hydra was never fully implemented, as the API was not as well defined as the Leap Motion API. A fourth type of GUI container, the dynamic world container, was also supposed to be implemented. This GUI container was supposed to track objects in the panorama video feed, but was never implemented. A smaller test was done with OpenCV where a simple pattern recognition was implemented to analyse the image rendered by the system before it was sent for display on the Oculus Rift. The image was copied from a buffer on the GPU to the CPU to be processed with existing functions in OpenCV. This drastically reduced the frames per second, to the point where the system was not usable any more. The tracking worked with a lot of instability and errors, but it could in some cases render a red dot on the pattern that was defined for this test case. As this implementation only made the experience unusable, and an optimized implementation was not possible with the resources that were available, it was removed from the system.
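For illustration, the sketch below shows the kind of synchronous GPU-to-CPU readback followed by OpenCV template matching that such a test implies; glReadPixels stalls the pipeline until the copy has finished, which is consistent with the frame rate drop described above. The matching method, the flip and the function signature are assumptions, not the actual test code.

    #include <GL/gl.h>
    #include <opencv2/opencv.hpp>

    // Find a predefined pattern in the frame that was just rendered.
    // width/height describe the framebuffer; pattern is the template image (BGR).
    cv::Point findPatternInFramebuffer(int width, int height, const cv::Mat& pattern) {
        // Synchronous readback: the CPU waits here until the GPU has finished
        // rendering and copied the pixels, which is what kills the frame rate.
        cv::Mat frame(height, width, CV_8UC3);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);              // rows are tightly packed
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, frame.data);
        cv::flip(frame, frame, 0);                        // OpenGL rows are bottom-up
        cv::cvtColor(frame, frame, cv::COLOR_RGB2BGR);    // match OpenCV's BGR order

        cv::Mat result;
        cv::matchTemplate(frame, pattern, result, cv::TM_CCOEFF_NORMED);

        double minVal, maxVal;
        cv::Point minLoc, maxLoc;
        cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);
        return maxLoc;   // top-left corner of the best match; draw the red dot here
    }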

5.2 User test

This section is divided into subsections for each iteration of the user test and the results of the questionnaire. Each subsection presents the observations made during an iteration of the user test and also states any changes made in that iteration before the next iteration was tested. The experience of the users ranges from novice (never worked with programming, never used the Oculus Rift or Leap Motion) to advanced (worked with programming, tried the Oculus Rift and Leap Motion).

5.2.1 First iteration

The user test of this iteration consisted of one user. The user was 23 years old, studies biology and had no experience with the Leap Motion or the Oculus Rift.

Observations

During the user test of this iteration it was observed that the user did not have any major problems when using the head tracking as input device. When using the Razer Hydra and the Leap Motion as input devices the user had a hard time determining the distance to the GUI components. This was observed more often with the Razer Hydra than with the Leap Motion, as the user accidentally clicked on buttons more often.

Accidental interaction with the GUI was also observed with the Leap Motion, mostly due to the shape of the hand the user had when interacting, but also due to confusion about which part of the virtual hand interacted with the GUI. When shaping the hands, the user retracted all fingers except the index finger and tried to click with it. This caused instability in the tracking of the hand, which became noisy, misshaped or could not be tracked at all. Knowing which part of the hand could interact with the GUI was not as big a problem with the Razer Hydra, as it only has one joint sphere, the click point. With the Leap Motion, which has 15 joint spheres per hand, the user got confused about which one of these did the actual interaction, which led the user to think that the palm sphere was the click point.

The mapping between the real world and virtual reality for the Razer Hydra did not seem correct, as the user was observed raising their hands too much or having them too close to the base station. When the Razer Hydra controllers got too close to the base station, their tracking became incorrect.

It was also observed that the text input GUI element did not hide the keyboard when the user was finished, which made the user think that something was wrong with the text or that something more had to be done with the text input element to finish the test.

When the user did not finish the last phase before the panorama video ended, the user was transported to the first phase.


Implementation

Looping of the last phase was implemented so that the user can spend as much time in this phase as needed.

To reduce the number of accidental clicks and the confusion about how to interact with the GUI with the Leap Motion and Razer Hydra, the color of the last joint sphere of the index finger, which is used for interaction, was changed to green. This matches the color of the GUI elements that the user is supposed to interact with.

5.2.2 Second iteration

This iteration was conducted with two users. They were 23 years old, study information technology and had experience using and working with the Leap Motion; they had also tried the Oculus Rift before with a few different tech demos.

Observations

The sense of depth was the main problem, especially with the Razer Hydra. The users got used to it quickly, but there were still a lot of double clicks. This was because when the user clicks through a button, the click sound plays once when moving through it and triggering the click event, and once again when moving the finger back through the button. This was a good way of observing the precision of a click.

Clicking a button from the wrong side was observed when the users were typing on the virtual keyboard for the text input GUI element. They pressed the correct button, but when moving the hand back they sometimes accidentally clicked adjacent buttons. This was also observed when the user clicked a transport button, which changes the scene and moves the user to the next phase of the test. One of the users clicked through the button, triggering the transport event, and instead of moving the hand back, kept it in place until the next scene was loaded. When the user then moved the hand, it was behind one of the buttons, which accidentally interacted with the GUI component.

The users were observed having their hands too far away from the face (with the Leap Motion tracker mounted on the Oculus Rift) or having them at the edge of the field of view of the Leap Motion controller. This led to unstable tracking of the hands, making them sometimes disappear or move in unexpected ways. The hand shape with only the index finger extended was observed in both users when using the Leap Motion. This caused the same problems as in the first iteration.

Implementation

After some measurements of the implementation of the Razer Hydra, it turned out that it was mapped correctly to world coordinates. No changes to the mapping were therefore made; instead, the test was adjusted to keep the controllers away from the base station and to avoid forcing the user to extend their arms too much.

A done-state was implemented for the text input, so that when the correct text is entered, the keyboard is hidden to tell the user that they are done with the interaction of this component and can move on to the next.

To guide the user towards a better way of using the Leap Motion, an image was placed in the first phase of the Leap Motion test showing two hands with their fingers spread out and the back of the hands facing the face.

5.2.3 Third iteration

This iteration was conducted with six users. One of these users was 35 years old and the ages of the rest ranged from 23 to 25. All users study master's programs, including computer science (5 users), electronic design (1 user), biology (1 user) and psychology (1 user). None of these users had previous experience using the Leap Motion or the Oculus Rift. As this was the last iteration of the user test, no further implementations were made to improve the system.

Observations

Setting the value of the slider to exactly 0.8 was not too hard with either of the input devices. What seemed to be the problem with precision tasks, like the slider, was that when the user moved his/her hand away from the slider, it was easy to accidentally change the value.

The addition of an image of two hands at the beginning of the first phase of the Leap Motion test helped users realize how to interact with the system.

It was again observed, as in the second iteration, that some of the users extend their arms when clicking the transport buttons and leave their arms extended. As stated before, this led to accidental interaction with the GUI. This time it happened more frequently in the first phase of the Leap Motion test, where the image of two hands was shown.

The users had problems finding and interacting with the GUI component that was placed on the wrist. Most users tried to find the green button in the world by looking around with their hands down. With no hands tracked, the wrist element was not rendered and the user could not find the correct GUI component to interact with.

The users who found the button on the wrist failed to interact with it because they thought it was placed on the back of the arm instead of the wrist. When trying to click the button they instead clicked on their arm, making the button inaccessible. The users were only able to click the button by twisting the hand of the arm on which the element was placed, rotating the button until it could be pressed.

When using the Razer Hydra, most users “punched” the GUI components instead of trying to interact with them with precision. This punching behaviour led to more errors while writing text in the text input.


When using the Leap Motion, it was not clear that the tip of the index finger was the only point on the hand that could interact with the GUI, even after its color had been changed to green. Users still tried to click with other fingers or even the palm of the hand, leading to accidental interactions with the GUI.

5.2.4 Questionnaire

The result of the questionnaire was only based on the 6 users of the last iteration of the system, as the changes made to the system in earlier iterations could change the outcome of the later tests. The result can be seen in table 5.1, where the mean value on the Likert scale for each GUI component and input device is presented. The value can range from 1 to 7 and cells with an "x" represent no value, as that type of GUI component was not tested for that input device.

                 Fixed world       Fixed view        Fixed body (wrist)
Head tracking    5.33 (2 - 7)      x                 x
Razer Hydra      4.5 (2 - 7)       4.66 (3 - 7)      x
Leap Motion      3.833 (1 - 6)     4.33 (2 - 6)      4 (1 - 6)

Table 5.1: Results of the questionnaire for the user test. The mean value for each GUI component is presented for each input device. Cells with an "x" represent no value. Each mean value is followed by a parenthesis where the first number is the minimum value recorded and, separated by a "-", the second number is the maximum value recorded.

The result of the last question in the questionnaire was based on all 8 users of the user test, not just the last iteration. It asked which input device the user would choose if they were to try the system again, and showed that the majority of users (62.5 %, 5 users) would use the Leap Motion; the remaining three users chose one of the other input devices.


6 Discussion

This chapter evaluates the result of the implementation and the user test conducted during this thesis project.

6.1 Implementation

From the beginning of this project, the plan was to implement four different GUI containers, but only three of them were implemented: fixed world, fixed view and fixed body. The fourth would have been a dynamic world GUI container, which was supposed to be able to track objects that move in the panorama video feed surrounding the user. With this, stickers could be placed in the real world when filming the panorama video, which would then automatically become GUI components in the virtual world. A test of this was implemented, but the result was a very low frame rate with unstable tracking. With more time, a solution might have been found that worked in the system without diminishing the user experience.

The text input GUI element that, on interaction, revealed a virtual keyboard worked surprisingly well. Even though the buttons of the keyboard were the same size as the interacting sphere of the virtual hand, most users had no problem writing the intended word during the user test. I did not evaluate any other way of inputting text, but had plans to use the slider component. This could have been implemented with the letter a to the left, z to the right and the rest of the alphabet in between, which would base the text input on a sliding movement of the hand instead of a clicking movement.
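As a sketch of this alternative slider-based input, the mapping from a normalized slider value to a letter could look like the following; the function is a hypothetical illustration and assumes the slider reports a value between 0 and 1.

#include <algorithm>
#include <iostream>

// Map a normalized slider value (0.0 = 'a', 1.0 = 'z') to a letter.
// Hypothetical helper, not part of the actual system.
char sliderToLetter(float value)
{
    value = std::clamp(value, 0.0f, 1.0f);
    int index = static_cast<int>(value * 25.0f + 0.5f);  // round to nearest of the 26 letters
    return static_cast<char>('a' + index);
}

int main()
{
    std::cout << sliderToLetter(0.0f) << '\n';  // a
    std::cout << sliderToLetter(0.5f) << '\n';  // n
    std::cout << sliderToLetter(1.0f) << '\n';  // z
}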

During the implementation of the GUI components the importance of having visual and aural feedback became very obvious. Before the scaling was implemented it was almost impossible to determine when the interaction with a button began. After the scaling was implemented, making the buttons smaller as the interaction sphere of the hand came close, the feeling of interaction became much more obvious. This, in combination with playing a sound when the button is fully clicked, fits well with the mental model that the user has of how a button should work.
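A minimal sketch of this kind of feedback is shown below: the button shrinks as the fingertip sphere approaches and a sound is triggered once when the press distance is reached. The distances, the Vec3 type and playClickSound() are illustrative assumptions rather than the system's actual code.

#include <algorithm>
#include <cmath>
#include <iostream>

struct Vec3 { float x, y, z; };

static float distance(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Placeholder for the engine's real audio call.
static void playClickSound() { std::cout << "click\n"; }

struct Button
{
    Vec3  position{};
    float pressDistance = 0.02f;   // metres: closer than this counts as a press
    float feedbackRange = 0.15f;   // metres: start shrinking inside this range
    bool  pressed       = false;

    // Returns the scale to render the button with this frame.
    float update(const Vec3& fingertip)
    {
        float d = distance(position, fingertip);

        // 1.0 when the finger is far away, shrinking towards 0.8 as it gets close.
        float t = std::clamp((d - pressDistance) / (feedbackRange - pressDistance),
                             0.0f, 1.0f);
        float scale = 0.8f + 0.2f * t;

        if (d < pressDistance && !pressed) {
            pressed = true;
            playClickSound();      // aural feedback on the full press
        } else if (d >= pressDistance) {
            pressed = false;       // re-arm once the finger moves away
        }
        return scale;
    }
};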

Observations made during the user tests pointed out things that could have been implemented differently or better.

Users had a hard time interacting with GUI components that were fixed on the user's wrist. They either did not find them or thought they were fixed on top of the arm, which led them to try to click a button through their arm. Implementing tracking of the arm and rendering it in the virtual world would solve this problem and could open up more possibilities when it comes to fixed body GUI containers.

In the implementation phase of the first iteration, the color of the tip of the index finger was changed to guide the user to only interact with the GUI with that point of the hand. An existing shader was used to draw the spheres of the hands, and this shader had the light direction pointing towards the camera. This resulted in the green color of the index finger tip not standing out from the rest of the joints. Most users did not notice the different color of the index finger tip and tried using other parts of the hand when clicking in the later iterations of the user test. This could have been fixed with a small change to the shader, making the green color brighter, which might have reduced the number of errors made by the users.

The most challenging aspect of interacting with the motion controllers in the virtual environment was the perception of depth. This was observed with both motion controllers, but was worse with the Razer Hydra. Having 3D objects surrounded by a panorama video sphere that also contains objects is part of the problem. An object in the video could seem to be as close to the user as a button, but as the user tries to move their virtual hand towards it, it seems further away. The distance to objects would have been easier to determine in an environment where all objects are 3D objects.

Most users started the Razer Hydra test with slow hand movements towards the button they wanted to click, looking for the visual feedback as they had no other sense of how close they were to it. Other users punched the GUI components instead of using smaller precision movements, which could be because they were not able to determine the distance. It could also be due to the physical controllers that have to be held by the user and how the controllers are tracked.

The mapping of the coordinates of the Razer Hydra controllers to the virtual world was not as easy as it was with the Leap Motion. Even when it was done correctly it did not feel as direct as the Leap Motion. Measurements were made to try to determine what was wrong, but no error in the translation from input coordinates to world coordinates was found. The fact that the hand for the Razer Hydra only had one joint in the virtual world could have been part of the problem. With the Leap Motion you get a realistic hand that moves as your real hand moves, but the hand with the Razer Hydra looks nothing like a real hand. This could have a negative effect on the user's immersion and on the mental model that the user has of how their hand should look and feel in the world. Changing how the hand is represented in the world to have as many joints as the Leap Motion hand, but with a pose that looks close to how the user's hand holds the controller, might increase the immersion.
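For reference, the kind of translation involved is sketched below: a controller position, assumed here to be reported in millimetres relative to its base station, is converted to world-space metres using an assumed base-station pose (position and yaw). This is only an illustration of the transform, not the actual implementation in the system.

#include <cmath>

struct Vec3 { float x, y, z; };

// Map a controller position, given in millimetres relative to the base
// station, into world-space metres. The base-station pose is an assumed
// input that the application itself has to define.
Vec3 controllerToWorld(const Vec3& rawMillimetres,
                       const Vec3& baseStationPositionWorld,
                       float baseStationYawRadians)
{
    // Millimetres to metres.
    Vec3 p { rawMillimetres.x * 0.001f,
             rawMillimetres.y * 0.001f,
             rawMillimetres.z * 0.001f };

    // Rotate around the vertical axis by the base station's yaw.
    float c = std::cos(baseStationYawRadians);
    float s = std::sin(baseStationYawRadians);
    Vec3 rotated {  c * p.x + s * p.z,
                    p.y,
                   -s * p.x + c * p.z };

    // Translate by the base station's position in the world.
    return { rotated.x + baseStationPositionWorld.x,
             rotated.y + baseStationPositionWorld.y,
             rotated.z + baseStationPositionWorld.z };
}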

The result with the Leap Motion was better, as people had an easier time determining the distance to the object they wanted to interact with in the world. Some users clicked through objects and left their hands there until the next scene was loaded, without realising it. As they moved their hands back they accidentally clicked on buttons. This could be a problem with the implementation of the click event on the buttons. Having the click trigger on the way back from the fully pushed button would possibly guide the user to move their hand back after each click, preventing an incorrect starting placement of the hand in a new scene.
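A possible shape of that alternative click behaviour is sketched below: the button arms itself when the fingertip pushes past a press depth and fires the click only once the fingertip has been withdrawn again. The depths and the callback are illustrative assumptions.

#include <functional>
#include <iostream>

// Sketch of a "click on withdrawal" button: pushing through the button
// arms it, and the click event fires only when the fingertip comes back
// out, encouraging the user to retract the hand after each click.
class WithdrawClickButton
{
public:
    explicit WithdrawClickButton(std::function<void()> onClick)
        : onClick_(std::move(onClick)) {}

    // fingertipDepth: how far the fingertip has pushed into the button,
    // along the button's normal (0 = at the surface, positive = inside).
    void update(float fingertipDepth)
    {
        const float pressDepth   = 0.02f;   // fully pushed
        const float releaseDepth = 0.005f;  // considered withdrawn again

        if (!armed_ && fingertipDepth > pressDepth) {
            armed_ = true;                  // pushed through: arm the click
        } else if (armed_ && fingertipDepth < releaseDepth) {
            armed_ = false;
            onClick_();                     // fire only on the way back
        }
    }

private:
    std::function<void()> onClick_;
    bool armed_ = false;
};

int main()
{
    WithdrawClickButton button([] { std::cout << "clicked\n"; });
    button.update(0.03f);   // push through: arms, no click yet
    button.update(0.0f);    // withdraw: click fires once
}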

Rendering the hands transparent could help the user with the depth perception for both motion controllers. It would let the user see what they interact with as they are interacting. Combining this with another depth cue, such as a shadow of the approaching hand on the GUI component, could be a next step to improve the sense of depth. However, there was not enough time to implement such a change in the system. Also, an additional user test would have had to be conducted to see the result of the changes.
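As an illustration of the shadow cue, the sketch below projects the fingertip onto the plane of a GUI component to find where a simple shadow quad could be drawn; the vector type and plane representation are assumptions made for the example.

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Project the fingertip onto the GUI component's plane (given by a point
// on the plane and a unit normal). The returned point is where a shadow
// of the approaching hand could be rendered; the distance along the
// normal doubles as a depth cue for how far away the finger still is.
Vec3 projectOntoGuiPlane(const Vec3& fingertip,
                         const Vec3& planePoint,
                         const Vec3& planeNormal /* assumed normalized */)
{
    Vec3 toFinger { fingertip.x - planePoint.x,
                    fingertip.y - planePoint.y,
                    fingertip.z - planePoint.z };
    float depth = dot(toFinger, planeNormal);
    return { fingertip.x - depth * planeNormal.x,
             fingertip.y - depth * planeNormal.y,
             fingertip.z - depth * planeNormal.z };
}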

The Leap Motion has a confidence API that could have been used to filter out implausible hands. These could be two left hands or an object that the Leap Motion thinks is a hand but is not. Such objects are often barely tracked, have an unnatural form and are unstable. This filtering is not implemented in the current implementation of the Leap Motion in Voysys's system.
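If such filtering were added, it could look roughly like the sketch below, assuming the Leap Motion V2 C++ SDK; the confidence threshold of 0.2 is an arbitrary value that would need tuning, and this code is not part of the existing integration.

#include "Leap.h"
#include <vector>

// Keep only hands that the Leap Motion itself considers plausible.
// confidence() ranges from 0 to 1; low values tend to correspond to
// barely tracked or implausible hands.
std::vector<Leap::Hand> plausibleHands(const Leap::Controller& controller)
{
    std::vector<Leap::Hand> result;
    const Leap::Frame frame = controller.frame();
    const Leap::HandList hands = frame.hands();

    for (int i = 0; i < hands.count(); ++i) {
        const Leap::Hand hand = hands[i];
        if (!hand.isValid() || hand.confidence() < 0.2f)
            continue;                      // skip implausible hands
        result.push_back(hand);
    }
    return result;
}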

During the testing of the Leap Motion controller it was observed that many users held their hands in a pose where all fingers were contracted except for the index finger. The Leap Motion has a hard time tracking hands in this pointing pose when mounted on the Oculus Rift. In the implementation phase of the second iteration, an image was added to the first phase of the Leap Motion test that showed two hands perpendicular to the camera with all fingers spread. This simple image helped users keep their hands in a more spread out pose. However, the image was placed at a greater distance from the camera than the rest of the GUI so that it would not occlude it. This led to users putting their hands up too far away and accidentally clicking buttons that appeared once the hands were tracked. Having this non-standard distance for one of the GUI components only confused the users, and it should have been placed at the same distance as the rest of the GUI.

Another observation was that some users who used the Leap Motion tried interacting with the GUI when their hands were at the edge of the field of view of the Leap Motion. As was mentioned in chapter 2, the Leap Motion has unstable tracking at the edge of the field of view, which led to input errors during the tests. A solution could be to implement visual feedback when the user has the hands in the area right in front of the controller, encouraging them to always keep them there. It could be, for example, a lower transparency when moving the hands away from the middle of the field of view, or a change of color. The same technique could be used to encourage the user to keep their hands at the correct distance from the tracker.
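A minimal sketch of the fading idea is shown below: the rendered hand's opacity is computed from the hand's angular offset from the tracker's forward axis. The angle thresholds are rough assumptions, not values taken from the device specification.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// handPosition is given in the tracker's own coordinate frame, with the
// tracker looking along +z. Returns an alpha in [0.2, 1.0]: fully opaque
// near the centre of the field of view, faint towards the edge.
float handAlpha(const Vec3& handPosition)
{
    const float comfortableHalfAngle = 0.5f;  // radians, assumed "safe" zone
    const float fadeBand             = 0.4f;  // radians over which to fade out

    float lateral = std::sqrt(handPosition.x * handPosition.x +
                              handPosition.y * handPosition.y);
    float angle   = std::atan2(lateral, handPosition.z);

    float t = std::clamp((angle - comfortableHalfAngle) / fadeBand, 0.0f, 1.0f);
    return 1.0f - 0.8f * t;
}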


6.2 User test

As an iterative approach was used to get the best possible implementation of both the GUI components and the input devices, fewer users were used in the earlier iterations. This was because the usability problems were so obvious and had to be fixed early to get better results in the later iterations. As the system required a Leap Motion, an Oculus Rift, a Razer Hydra and a computer with decent performance to be able to run, running a larger user test was never a possibility. The result of this is that the number of testers was only eight.

During the user test the users had to answer questions about the usability of the different input devices and GUI components in the form of a questionnaire. This turned out not to be as relevant as hoped. As the users were encouraged to say out loud what they were thinking and doing, the comment sections of the questionnaire only repeated what had already been observed. It was also hard to draw conclusions from the usability questions using a Likert scale as the number of testers was so low. This was not a problem, however, as the focus of the earlier iterations was to fix the large usability problems and not to compare quantitative data.

Most of the observations made during the user test concern the input devices, not the GUI components. The problem was that to be able to interact with the GUI, the user had to use the input device. As the implementation of the input devices had issues with usability, stability and depth perception, the user test did not give any good results for the actual GUI components. The number of users who participated in the user test did not give enough quantitative data to be able to draw conclusions from the questionnaire questions. With input devices implemented with fewer usability issues and a larger number of users in the user test, a better evaluation of the GUI components could have been done.

6.3 Conclusion

This section presents the conclusions of this project, drawn from the results.

When implementing GUI components in a virtual environment it is very important to give the user both visual and aural feedback on interaction, as no tactile feedback is given when using motion controllers. This is something that should be done in some way for all GUI components.

I would recommend using the text input GUI element that was implemented during this project, as it worked very well. It had the layout of a standard keyboard, which most users recognize. The buttons on the keyboard also showed that GUI components do not have to be large; they can be as small as the interacting sphere of the user's hand. What is important to remember is to have some distance between GUI components, to reduce the number of errors that occur as most users are still inexperienced with virtual reality and the input devices.

The Leap Motion was the controller that most users would use if they were to use the system again. This was interesting as it was not the input device that the users had the easiest time using.
