
LiU-ITN-TEK-A--17/009--SE

Realistic virtual hands: Exploring how appearance affects the sense of embodiment

Master's thesis carried out in Media Technology at the Institute of Technology, Linköping University

Johan Nordin

Supervisor: Ali Samini

Examiner: Karljohan Lundin Palmerius

Norrköping 2017-03-17


Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances.

The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/


Abstract

How would you react if you looked down at your hands and they had been replaced with someone else's hands?

This is the case in virtual reality applications that incorporate virtual hands: each application has its own representation of virtual hands, but none takes the user's own hands into account. For this reason, we have created a framework that makes it possible to customize geometric features of existing hand models towards a more personal hand representation.

We have designed and conducted an experiment to study the sense of ownership of four virtual hand representations. It was found that participants pay attention to the size of the hand and the length of the fingers, but do not necessarily consider the virtual hands to be their own. We believe that a virtual hand that truly creates the impression of one's own hand may benefit educational, training or rehabilitation virtual reality applications. Our conclusion is that, in order to achieve this, the size, shape and appearance of the user's hand need to be considered to some extent.


Acknowledgments

The work for this master's thesis was carried out at Gleechi in Stockholm, Sweden. Writing a thesis is like putting together a giant puzzle, and all who have contributed with pieces or helped me understand how the pieces fit together have my utmost gratitude. First, I would like to thank my supervisor Kai Huebner, Co-founder and CTO at Gleechi, for support and supervision. Thanks to Dan Song, Co-founder at Gleechi, and Jakob Johansson, Co-founder and CEO at Gleechi, for believing in me and for all your support. Thanks to Prasanth Korada for being the other thesis student and a great friend to discuss things with. Thanks to my academic supervisor Ali Samini and my examiner Karljohan Lundin Palmerius at Linköping University. I would also like to thank Pontus Valsinger; I am grateful that you took the time to proof-read my report, chase my linguistic habits and delete countless unnecessary commas. Finally, a big thanks to my family, who have always encouraged and supported me throughout my education.

Stockholm, March 2017
Johan Nordin


Contents

Abstract
Acknowledgments
List of Figures
List of Tables
List of Acronyms

1 Introduction
1.1 Background
1.2 Motivation
1.3 Research Questions
1.4 Aim and Purpose
1.5 Thesis Outline

2 Virtual Reality
2.1 Definition
2.2 Sense of Embodiment
2.2.1 Sense of Self-Location
2.2.2 Sense of Agency
2.2.3 Sense of Ownership
2.3 Body Ownership Illusion
2.4 Visual Representation

3 Related work

4 Human and Virtual Hands
4.1 Human Hand Anatomy
4.2 Virtual Hand

5 Hand Customization Framework
5.1 Motivation
5.2 User Interface
5.3 Implementation
5.3.1 System Overview
5.3.2 Mesh Deformation Methods
5.3.3 Recompute Rig

6 User Studies
6.1 Virtual Hand Representation
6.1.1 Individualized Hand
6.1.2 Oversized Hand
6.1.3 Cartoon Hand
6.1.4 Opposite Masculine/Feminine Hand
6.2 Experimental Setup
6.3 Experimental Design
6.3.1 Environment
6.4 Participants
6.5 Experimental Protocol
6.5.1 Procedure

7 Results
7.1 Questionnaire
7.2 Interview

8 Discussion
8.1 Experiment Design
8.2 Experiment Results

9 Conclusion
9.1 Research Question
9.2 Future Work

A Look-up Table
B Hand Customization Framework

List of Figures

4.1 Bone structure of a human hand
4.2 The chimpanzee's hand compared to the human hand
4.3 A simplified rig structure
5.1 Four arbitrary hand models from Unity Asset Store
5.2 Slider interface of the framework
5.3 User interface
5.4 Example when the underlying rig is not updated by blend shapes
5.5 A step by step illustration of how the new joint position is calculated for a finger
6.1 Virtual hand representation
6.2 Example of individualized hands
6.3 Hardware setup for the experiment
6.4 The virtual environment used in the experiment
6.5 Locations used for measurements of the hands
7.1 Boxplot of the questionnaire results
C.1 Pre-questionnaire
C.2 User comments
C.3 Information sheet


List of Tables

6.1 Post-test questionnaires
7.1 Statistical summary for the questionnaire responses
A.1 Look-up table
A.2 All individualized hand configurations
B.1 Implemented features


List of Acronyms

3D Three-dimensional
API Application Programming Interface
VR Virtual Reality
DoF Degrees of Freedom
HMD Head Mounted Display
RHI Rubber Hand Illusion
VHI Virtual Hand Illusion


Chapter 1

Introduction

1.1 Background

Gleechi is a Stockholm-based startup that has developed software that enables grasping interactions in real-time. Their solution VirtualGrasp creates realistic hand animations based on the available information about an object's shape and the kinematics of the hand. It is not a hand tracking algorithm but a solution that takes available inputs (from an arbitrary hardware device) and animates the desired output (the virtual hand). The solution is therefore hardware-agnostic (independent of the hardware device), hand-agnostic (independent of the kinematic structure) and object-agnostic (independent of the object shape).

1.2 Motivation

With the recent growth of virtual reality (VR) applications, there is a demand to create highly immersive environments in which the avatar that the user embodies reflects the user's actions in the virtual world as precisely as possible. One of the main actions that humans use to interact with the real world is grasping objects.

However, the visual representation of hand-held input devices in VR has not been properly addressed, since it has been a minor concern in a technology under rapid development. It has therefore often been resolved through intermediate solutions that are less than ideal. Initial experiments have shown that virtual hands that do not match the player's expectations in appearance or behavior often lead to a loss of the feeling of presence [23]. Conversely, virtual hands that look highly realistic but move almost, but not exactly, like human hands evoke a response of eeriness among participants [20].

One of the challenges in designing handheld products in the real world is accommodating variance in hand size. For example, when designing a product it is equally important that it works well for both small and large hands.

In a virtual world, the issue of hand-size variability is reversed, since the virtual hands themselves can take on any size. The spatial dimension of VR means that people instantly get a feeling of whether a virtual hand is too big or too small compared to their own hand. The remaining problem is that the size needs to be consistent with both the user's perceived hand size and the actual scale of the virtual world. So instead of designing tools for different hand sizes, the virtual world can to the utmost extent be customized according to the user's hands.

Accordingly, the optimal type of hand appearance and behavior depends not only on the specific application or game, but also on the specific user. Thus, there is motivation for a generic framework for creating hands for virtual environments, in which hands can easily be customized towards the end user's real hands in terms of features such as size, skin tone and proportions. As a research question, it is of great interest to understand which types of hands are adequate for which kinds of VR scenarios, and a generic framework will greatly simplify such studies.

1.3 Research Questions

All the observations discussed thus far have led to the following research question:

• Can a realistic virtual hand that is visually close to the user’s real hand provide an increased sense of embodiment compared to a generic virtual hand that does not take the user’s hand into account?

Among realistic virtual hands:

– How is the sense of ownership affected with respect to anatomical features, such as finger length and hand shape?

– How is the sense of ownership affected with respect to visual features, such as skin color and level of realism?

1.4 Aim and Purpose

To answer this question we need to study how different virtual hands alter the user experience in a virtual environment. For this reason, a tool to create hand models with different shapes is needed. One way is to use one of the many modeling and sculpting applications: Autodesk Maya¹, Pixologic ZBrush² and Blender³. For users who are not used to working with modeling software, it can be difficult to create realistic 3D models. Therefore, a simple and intuitive framework for tuning the features of a human hand has to be developed.

Finally, the customized hand models will be examined in user studies. It is of interest to study whether virtual hands with individual features can improve the overall user experience. This can inform future design decisions for developers in virtual reality.

¹ http://www.autodesk.com/products/maya/overview
² http://pixologic.com/
³ https://www.blender.org/

1.5 Thesis Outline

In order to have a common understanding of why interaction is important in virtual reality, we will (chapter 2) give a brief overview of embodiment, body ownership, tracking devices, and how hands are represented in virtual reality. With this understanding, we will (chapter 3) present the related research in the area of body ownership and the virtual hand illusion. In chapter 4, we will review the biological structure of the human hand and describe the key features needed to create realistic virtual hands.

Once it has been explained how virtual hands are structured, we can explain how our hand customization framework is implemented (chapter 5) and how it is used in the user studies that were conducted (chapter 6). In chapter 7 we report the results of the user studies. In chapter 8 we discuss the outcome of the experiment and the design decisions we made for it. Finally, in chapter 9, the report is summarized with a short conclusion that revisits the research question and gives an outlook on future work.


Chapter 2

Virtual Reality

This chapter will give a brief introduction to virtual reality and the concepts related to virtual hands, such as embodiment, body ownership and tracking devices.

2.1 Definition

Virtual reality (VR) is a technology that replicates an environment and simulates the user's physical presence by enabling user interaction. Several terms — virtual environments, virtual worlds, VR and artificial reality — are commonly used interchangeably to denote virtual reality. According to [27], the four cornerstones of a VR experience are:

• A virtual world
• Immersion
• Interactivity
• Sensory feedback

A virtual world is the illusion of a three-dimensional space that is presented to the user through stereoscopic images. Immersion is the feeling of being present inside an environment, rather than observing it from the outside on a two-dimensional computer screen. Interactivity is the ability to move in a virtual world and to interact with objects and other users. Finally, sensory feedback on the user's actions and position completes the illusion of a virtual reality by tricking the human senses.

Real-time feedback from sensors that updates the user's perspective of the virtual world is essential for VR. For example, continuous 6 degrees-of-freedom (DoF) head tracking is a key factor in creating presence, the feeling of being inside a virtual world and not just viewing it through a camera. It is this feeling of being relocated into another world that is unique to VR technology, and that one seeks to maximize in all aspects. One of the aspects that can alter the feeling of presence is the existence of a virtual body. If this virtual body matches the user's expectations and the user feels embodied, it can enhance presence and the overall experience.


2.2 Sense of Embodiment

How we experience ourselves inside a virtual reality is a manifold question. The term embodiment is used in various contexts and can easily be misinterpreted; it is therefore useful to have a terminology to describe the phenomenon of embodiment. In this context, embodiment is the gathered experience of sensations that arise when being inside and controlling a virtual body in a head-mounted display (HMD). [12] define the “sense of embodiment (SoE) toward a body B as the sense that emerges when B's properties are processed as if they were the properties of one's own biological body”. While the term embodiment denotes a specific type of information processing, SoE is a working definition of the everyday experiences associated with one's biological body [5]. SoE is decomposed into three subcomponents: self-location, agency and ownership. For the rest of this document, we will use these three terms to describe the subjective experience of embodiment in VR.

2.2.1 Sense of Self-Location

The sense of self-location refers to one’s spatial experience of being inside a body. Self-location shall not be confused with the closely related term presence. Presence can be described as the feeling of “being there” or more explicitly, the feeling of one’s self being located inside a physical or virtual environment [12]. The distinction between self-location and presence is that presence does not require a body representation.

2.2.2 Sense of Agency

The sense of agency refers to the feeling of being in control of one's own actions. Agency relates to the perception of our body in action, which includes the motor functions and locomotor system of our bodies. In a virtual environment it can be seen as the perceived feeling of being in control of a virtual body.

2.2.3 Sense of Ownership

The sense of ownership refers to the degree to which one perceives a body to be one's own. It has been suggested that there are two processes behind the sense of body ownership: either bottom-up influences alone, or a combination of top-down and bottom-up influences [30] [29]. Bottom-up influences refer to sensory information from visual, tactile and proprioceptive input. Proprioception is the unconscious sensory flow of the position and orientation of our body; we are constantly aware of our actions and know where our body is located without having to look at it. Top-down is the secondary flow of information from cognitive processing of sensory stimuli, for example the existence of sufficient human likeness to presume that an artificial body can be one's own body [12]. The sense of ownership can also be observed when a body part is exposed to dangerous situations [5].


2.3 Body Ownership Illusion

There are infinite possibilities to alter the virtual representation of our bodies in VR. Consequently, comprehensive research on the topic of body ownership has been done with various body shapes, structures and appearances. The well-known rubber hand illusion (RHI) experiment showed that self-location can be altered when synchronous touch is applied to a rubber hand [3]. The RHI is known as the virtual hand illusion (VHI) when it takes place in virtual environments; the rubber hand is then replaced by a virtual hand that is controllable with tracking devices such as 6-DoF controllers or gloves.

It has been shown that a basic morphological similarity between the real and virtual body is needed to induce the VHI. Furthermore, it has been suggested that increasing the similarity between one's biological body and the virtual body could strengthen ownership by promoting top-down influences [12]. For this reason, personalized avatars could enhance ownership, since they would also promote self-recognition.

The similarities and differences between hands can be described in many ways. Sexual dimorphism is the scientific term for physical differences between males and females. Men and women are more physically similar than they are different; nonetheless, there are a few distinctions in our physiques and the shapes of our hands. Moreover, studies have shown that people behave and perceive things differently in virtual reality from a gender perspective [34].

Furthermore, in situations where the user's visual representation is missing, audio cues such as breathing or footsteps can be important in maintaining an ownership illusion.

2.4 Visual Representation

Because the real world is blocked from the user’s view, interaction while wearing an HMD requires some graphic representation of the hands. The first question that comes up when developing in VR is probably going to be: how are we going to represent the user’s hands? At the moment, there are two common approaches to represent hand-held controllers in VR: either showing the actual controllers or a pair of virtual hands.

Showing the physical controllers is a practical solution because it provides a one-to-one mapping between the real and virtual world. This avoids breaking immersion when the pose of the virtual hand does not match the pose of the real hand, and it makes it easy to show which buttons are being pressed by indicating them on the virtual controller. Although there are many advantages to showing the actual physical devices, interaction in a virtual world should ideally be done in the same way as in the real world, i.e. using our hands.

Within traditional computer games it is common to have a visual representation of the user, an avatar. In some cases the user has no control over the appearance of the avatar, but there are games in which the user is allowed to customize their own avatar. Users can adjust features to reflect themselves, or in some other way enhance the experience and feel more immersed.

The eventual goal for a visual representation in VR would clearly be to show a full-body avatar. However, we can only represent the parts of our body that can be accurately tracked. As existing tracking solutions are not capable of tracking the full arm, it is common to represent only virtual hands that have been cut off at the wrist. In this way, the issue of a virtual arm being oriented differently than the real arm is avoided.

Nevertheless, the relationship between the user and the virtual avatar can be even stronger due to the feeling of presence that virtual reality brings. Previous studies indicate that people accept all sorts of avatars and virtual bodies. However, a problem may occur if the virtual body tries to mimic the user's own body but does not correspond to the user's expectations. Presence can be reduced if the virtual hands are similar to, but still differ from, the user's own hands [23].

Consequently, we believe that allowing users not only to embody a pair of virtual hands but also to customize them according to their own preferences would improve the feeling of presence in educational, training or rehabilitation virtual reality applications.


Chapter 3

Related work

This chapter presents the related research in the area of body ownership and the virtual hand illusion. Most embodiment studies in the past have utilized mannequins or a passive fake limb to demonstrate that visuo-tactile correlations can induce an illusion of ownership. However, such studies do not provide information about the influence of the sense of agency on embodiment. VR technology has opened up new possibilities to study how we perceive and relate ourselves to a virtual body that we are actively controlling. The fact that the VHI can be induced by active movements and continuous visual feedback alone is important for interaction in virtual reality applications.

Researchers in neuroscience and psychology consider VR a useful tool for conducting experimental studies. Components such as material, anatomy, size, hand pose and the spatial relation between the virtual and real hand have been shown to modulate the induction of the VHI and the experience of body ownership. For example, in a study of virtual hands representing different levels of realism and rendering styles, the VHI was suggested to be perceived more strongly for more realistic-looking virtual hands [13].

Research related to skin tone and skin color has shown that a reddened arm decreased the pain threshold compared with normal skin tone [16]. An interesting observation is the design choice of Oculus to avoid materials that look too much like human skin for their avatars. Instead, they use a global color to create a translucent material and highlight contours and details of the geometry with view-dependent lighting. The reason for this is the perceptual problems of natural skin appearance: although the skin color can be close to the user's real skin color, small differences can reduce the believability of the avatar [23]. On the other hand, the body ownership illusion decreases when the virtual body becomes more transparent [17].

Another study showed that realistic rendering produces a stronger emotional response than lower-quality or stylized rendering [18]. Moreover, a study that explored the effects of varying degrees of realism of virtual hands for interaction tasks in VR found that an abstract representation produced a greater sense of agency than the realistic representation [1]. This is in line with the theory of the uncanny valley: people expect less accurate animation when the object is more abstract [20].

In psychology, the relationship between personality and nonverbal behavior is being studied. In our case, especially the question of how hand poses and motions relate to personality in a virtual environment is of interest. Research suggests that static hand poses and motions impact a virtual avatar's perceived personality [33]. For example, all 6 DoF devices that lack finger tracking will have a static rest pose for the virtual hand, and all finger movements are artificial. This suggests that any artificial hand and finger movements that are introduced may need to be adjusted for individuals. Studies have also been conducted on how people adopt virtual hands with a different structure than the human hand. It was found that participants, to some extent, accepted a six-digit virtual hand as their own [9].

In regard to size, it has been shown that a hand larger than one's own is embodied to a greater extent than a hand smaller than one's own [24]. Furthermore, research suggests that by changing the size of a virtual body, our perception of the virtual environment changes, and objects are perceived according to the new size of the virtual body [8].


Chapter 4

Human and Virtual Hands

To get a better understanding of the complexity of a human hand, we have to review its biological structure. This chapter describes the key anatomical features of human hands and the corresponding features of realistic virtual hands.

4.1 Human Hand Anatomy

The bones of the hand and wrist are shown in Figure 4.1. Each hand contains 27 distinct bones that give the hand an incredible range and precision of motion. There are eight bones located in the wrist called carpals, five bones in the palm called metacarpals and three in each finger called phalanges. The thumb has only two bones, which are also called phalanges. This specific configuration of the bones makes it possible to manipulate objects in many different ways. The hands alone have more joints than the rest of the body, and the fact that the thumb opposes the other four fingers is essential for the human ability to perform grasping motions [15].

Besides the anatomical structure of the hand, it is relevant to study the proportions and dimensions of hands. There are certain characteristics that make the human hand different from that of other mammals sharing the same anatomical structure. For example, the chimpanzee's hand has much longer fingers, a longer palm and a shorter thumb than the human hand, see Figure 4.2. Human fingers have a certain length in relation to the palm, and the lengths between the joints of the fingers are fixed in proportion to the whole finger. By knowing the length of one bone, the others can be determined close to the anatomical measures [2].

Figure 4.2: The chimpanzee's hand (left) compared to the human hand (right) [32].

The overall size of the hand is important when it comes to dynamic abilities. For example, a large hand with long fingers generates different motion paths and grasps in a different way than a small hand with short fingers. Since we only have our own hands to relate to, the size of a virtual hand is highly relevant.

Furthermore, the shapes and sizes of hands vary depending on age, gender and genetics. Although the overall proportions of hands remain the same with respect to size, it is possible to distinguish the features of male and female hands, as well as the hands of children, babies and the elderly. The average male hand is generally larger than the average female hand because the underlying bones and muscles are larger. In male hands the fingernails and the fingers are slightly broader compared to the more tapered fingers of females.

Another attribute of hands, that is favored among researchers, is the 2D:4D digit ratio, which is the relative length of the index finger and the ring finger. It has been known for many years that this ratio varies according to sex. Males tend to have a longer ring finger relative to the index finger when compared to females.

4.2 Virtual Hand

A rig structure is essentially a hierarchy of articulated rigid links that represent the bones of the wrist, palm and fingers. Hand models vary in complexity; a high-resolution anatomical model can be overly complicated and too computationally expensive for many applications. Thus, in general, simplifications are made in order to keep models only as complicated as necessary.

Without an underlying skeleton, the hand model would just be a static 3D mesh unable to move or be animated. Both the position and orientation of the joints are important for the fingers to rotate at the correct angles so that they do not intersect with each other. Figure 4.3 shows an example of a simplified adaptation of the real human skeleton that is still sufficient for performing most of the movements.

Figure 4.3: A simplified rig structure.

Because of the degrees of freedom of the underlying skeleton, a hand mesh requires more than a visually appealing shape to animate and deform well. Instead of a uniform distribution of polygons, a topology made of edge loops along the creases of the hand is critical for good deformations. An edge loop is a group of polygons that follows a specific path around a model, preferably mirroring the anatomy of a real hand. Moreover, keeping the polygon count of the mesh down is also important, considering that the hands are rendered twice (once per eye) in a VR application.

Finally, “skinning” is the process of binding the rig to a mesh. Each vertex is weighted according to nearby bones, so that when the bones move, the vertices move with them, like a skin. Hence the name “skinned meshes”.
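To make the skinning computation concrete, the following minimal Python sketch (using NumPy) shows linear blend skinning with general 4x4 bone matrices; the data layout is an assumption for illustration, not the thesis implementation.

    import numpy as np

    def linear_blend_skinning(rest_positions, bone_matrices, weights):
        # rest_positions: (V, 3) vertex positions in the bind pose.
        # bone_matrices:  (B, 4, 4) bind-pose-to-current-pose transforms.
        # weights:        (V, B) skinning weights, each row summing to 1.
        homo = np.hstack([rest_positions, np.ones((len(rest_positions), 1))])
        deformed = np.zeros_like(rest_positions)
        for b, M in enumerate(bone_matrices):
            # Every bone moves every vertex; the weight decides how much
            # of that motion the vertex actually follows.
            deformed += weights[:, b:b + 1] * (homo @ M.T)[:, :3]
        return deformed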


Chapter 5

Hand Customization Framework

In the previous chapter, we described the biological structure of the human hand and gave an introduction to how hands are represented in a virtual environment. In this chapter, we present the framework that was implemented to customize hand models and generate more user-specific virtual hands. The framework has been developed using Blender's Python API.

5.1 Motivation

Modeling a realistic humanoid hand requires perhaps more practice to master than any other part of the human body except the face. Modeling a human hand with natural shapes and proportions is a difficult task even for a professional artist. This implies that a hand model generated from geometric primitives is not visually satisfying and does not give the impression of a real human hand.


The alternative to modeling hands oneself is to buy or download hand models that are available at online marketplaces such as the Unity Asset Store¹ or the UE4 Marketplace². There is a variety of full-body avatars and hand models of varying quality and appearance available, see Figure 5.1. But none of these hand models takes the user's hand into account: the fingers have a certain length, the fingernails look a certain way, and they cannot be customized further. In general, the hand models that are available in an asset store are impersonal. The fact that humans are able to pick up on insignificant inconsistencies and subtle variations is important for bringing realism into an animated model [21].

¹ https://www.assetstore.unity3d.com/
² https://www.unrealengine.com/marketplace/

5.2 User Interface

As we have seen, hands have various dimensions and characteristics, but all with the same underlying structure. As a first step, all imaginable features of hands were collected. We made a distinction between the geometric and surface-level features of a hand model. Each feature was ranked according to how important it was and how difficult it was estimated to be to implement. Moreover, we also considered how these features could be evaluated in user studies. In this way a set of core features was determined to be implemented in the framework, see Table B.1.

Figure 5.2: The slider interface of the framework.

In our implementation the user controls the features of a hand through a slider interface, see Figure 5.2. A basic feature is composed of two blend shapes, one that represents an inner target shape and one that represents an outer target shape. This implies that the Euclidean distance of the slider is an approximation of the distance between the target shapes. The geometric transformations that achieve the target shapes have been created with various modifiers that are available in the Blender API. This enables one to interpolate between the inner and outer target shapes, with the default mesh lying in between. In addition to these basic features, we implemented hierarchical features that are compositions of two or more blend shapes. While basic features consist of immediate deformations, such as changing the length of the fingers, the length of the palm or the thickness of the wrist, the hierarchical features consist of more abstract character traits such as female/male or old/young. Hence, the abstract features can be seen as linear regressions in the subspace of the basic features.
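As an illustration of how a single slider can drive such a basic feature, the sketch below maps a signed slider value onto the pair of blend shapes; the shape-key naming scheme is a hypothetical example, not the framework's actual convention.

    def apply_basic_feature(shape_keys, feature, slider):
        # slider in [-1, 1]: -1 gives the inner target shape, +1 the
        # outer one, and 0 leaves the default mesh untouched.
        inner = shape_keys[feature + "_inner"]  # hypothetical key names
        outer = shape_keys[feature + "_outer"]
        inner.value = max(0.0, -slider)  # only one target shape is
        outer.value = max(0.0, slider)   # active at a time

    # e.g. apply_basic_feature(obj.data.shape_keys.key_blocks,
    #                          "finger_length", 0.4)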

The initial idea was to include dynamic features in the framework, to be able to modify properties such as friction or how fast the fingers move. However, the solution chosen with Blender did not allow us to integrate these kinds of features into the framework. In addition, because of the difficulty of combining dynamic and static features while keeping the user studies manageable, we chose to limit the user studies to the shape and appearance features of virtual hands.

We compute three measures to obtain real-world dimensions and proportions of hand models: the middle finger length, the palm length and the palm width. This allows the user to either enter a value or adjust the slider to a desired length, as long as the length is within the valid range of the blend shapes. The measures are calculated between vertices that are located at the measurement origins on the mesh, see Figure 6.5. Since the final vertex position is influenced by both the deformation of the rig and the weights of the blend shapes, both of these must be taken into account to determine the current vertex position. This means that the final vertex position is obtained by adding the contributions of the rig deformation and the blend shapes to the original vertex position.
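A minimal sketch of that computation, with hypothetical inputs, could look as follows; a measure such as the middle finger length is then simply the distance between two evaluated marker vertices.

    import numpy as np

    def current_vertex_position(base_pos, rig_offset, displacements, weights):
        # Final position = rest position + rig deformation
        #                  + weighted blend shape displacements.
        return base_pos + rig_offset + sum(
            w * d for w, d in zip(weights, displacements))

    def measure(marker_a, marker_b):
        # Real-world distance between two measurement-origin vertices.
        return np.linalg.norm(marker_a - marker_b)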

Figure 5.3: User interface.

All the main functionalities are accessible through a panel located in the object properties section, see Figure 5.3. The first two buttons, “Add” and “Delete”, load the active model into the framework and remove a model that has been loaded into the framework.

There are then two buttons for exporting, “Prepare Export” and “Export”. The first button, “Prepare Export”, mirrors and recomputes the skinning weights of the hand model before the model is exported. The button “Reset Values” resets all blend shapes, and “Generate random” creates a set of random blend shape weights within the valid ranges. The “Update rig” button is used to recompute the positions of the joints, as explained in section 5.3.3, and “Debug” allows one to quickly print a debug message. Functionality to save a configuration of blend shape weights and load it later was also added to the addon. To the left of the “Save config” button there is a text input box where a name for the current values can be entered; the saved configurations are then available in a drop-down list to the left of the “Load config” button. Furthermore, there are two buttons to manage the dimensions of a model: a “Refresh” button to obtain the actual dimensions of the hand model and an “Apply” button that applies the new measures. The current measures and the new measures to be applied are shown by sliders below these buttons.

5.3 Implementation

Our goal with the framework is to change the geometric features of existing hand models, for example making the fingers longer, the wrist thinner or the palm wider. In short, the framework is able to import hand models, change their characteristics and export them with different proportions and dimensions.

5.3.1 System Overview

Before the development of our framework started, we evaluated existing frameworks for mesh processing, modeling and character customization. This was done in order to learn more about mesh deformation methods and also to explore the possibility of developing our interface within an existing framework. The core requirements were that the framework should be open source and support FBX files. The following frameworks were evaluated: Blender, OpenFlipper³, MeshLab⁴, MEPP⁵ and MakeHuman⁶.

³ http://openflipper.org/
⁴ http://meshlab.sourceforge.net/
⁵ https://liris.cnrs.fr/mepp/
⁶ http://www.makehuman.org/

We decided to use Blender's Python API to create our hand customization framework. Blender is a modern open-source 3D modeling software that runs cross-platform, and the Python API allows one to control most functionality in Blender. The other frameworks were either too focused on mesh processing or lacking in usability and documentation.

Our framework takes advantage of the Autodesk Filmbox (FBX) file format to handle mesh data and modify existing mesh models. This is done by adding geometric deformations to a mesh that are then baked into the model and stored as animation data in an FBX file. Our user interface in Blender allows the user to tune and tweak hands into different dimensions. In addition, it is possible to export a chosen configuration as a new hand model stored in an FBX file, or to import the FBX file into game engines such as Unreal Engine and Unity and tweak the hand model at run-time. Since there is no standard for how hand models are saved, an arbitrary FBX file containing a hand model can consist of one or two hands, have a different size and be oriented in various ways. Therefore, the first step when a hand model is imported into our framework is to normalize the hand: all hand models are scaled uniformly and then transformed to have a consistent orientation in the framework.
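A rough sketch of this normalization step through Blender's Python API might look as follows; the target size and rest orientation are assumptions, since the thesis does not specify the framework's internal convention.

    import bpy

    def import_and_normalize(filepath, target_length=0.19):
        # Import an arbitrary FBX hand model; the imported objects are
        # left selected by the operator.
        bpy.ops.import_scene.fbx(filepath=filepath)
        hand = bpy.context.selected_objects[0]

        # Uniform scale so every imported hand has a comparable size
        # (target_length in metres is an assumed canonical hand length).
        longest = max(hand.dimensions)
        hand.scale = (target_length / longest,) * 3

        # Rotate into an assumed canonical rest orientation.
        hand.rotation_euler = (0.0, 0.0, 0.0)
        return hand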

Regardless of how many hands the original file contains, only one hand, either the right or the left, is loaded into the user interface. It is easier to modify the features of one hand, and there is no point in doing redundant operations for two hands. When a hand model has been customized, two hands are created by mirroring the modified hand. Furthermore, it is possible to export the hands either individually or together; when both hands are exported together, an underlying rig structure that connects the hands is also created. The interface is decoupled from the FBX file and works with any mesh that contains the appropriate set of blend shapes. The Blender interface retrieves the blend shapes stored in the FBX file and connects them to the user interface.
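Retrieving the blend shapes from the imported mesh is straightforward in the Blender API, roughly as in this sketch:

    import bpy

    def collect_blend_shapes(obj):
        # Return the shape keys stored on a mesh object, keyed by name.
        keys = obj.data.shape_keys
        if keys is None:
            return {}  # the mesh carries no blend shapes
        # key_blocks[0] is the relative basis shape; the rest are the
        # deformation targets that the sliders will drive.
        return {kb.name: kb for kb in keys.key_blocks[1:]}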

5.3.2 Mesh Deformation Methods

Our framework uses a combination of rig deformation and blend shapes to manipulate the mesh. These two deformation techniques are widely used throughout the animation industry, and existing character customization software uses them to adjust the features of a full-body avatar or face (MakeHuman, FaceGen, Morph3D).

Blend skinning is a rig deformation method that deforms a mesh in the skeleton subspace. Each vertex on the mesh surface is transformed using a weighted influence of its neighbouring bones, usually as a translation, rotation or scaling operation on the rig. We have only used translation operations, as the other two easily cause undesirable artifacts in the mesh. For example, rig deformation is used to translate the origins of the finger bones, which indirectly changes the length of the palm.

A blend shape model is the weighted sum of a number of topologically coincident mesh objects. In contrast to blend skinning, deformation by blend shapes applies directly to the mesh. By varying the weights, a range of virtual hands can be generated. Blend shapes are particularly useful for small deformations that are not easy to parametrize. By default, a polygonal mesh model with m vertices has 3m degrees of freedom. For a polygonal mesh with n blend shapes, the mesh movement is restricted to a subspace of n dimensions.

Hence, a blend shape model expresses the geometry of a hand h from a generic hand shape h_0, vectors h_k of concatenated vertex displacements and a weight vector w. All blend shapes are linear models in which the individual basis vectors are not orthogonal but instead represent the individual target shapes. The resulting shape h is the linear combination of the target shapes:

    h = h_0 + \sum_{k=1}^{n} w_k h_k
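In code, evaluating this model amounts to one weighted sum of displacement arrays, as in this minimal NumPy sketch:

    import numpy as np

    def evaluate_blend_shapes(h0, displacements, w):
        # h0:            (V, 3) generic hand shape.
        # displacements: (n, V, 3) per-target vertex displacements h_k.
        # w:             (n,) blend shape weights.
        return h0 + np.tensordot(w, displacements, axes=1)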


5.3.3 Recompute Rig

When the weights of the blend shapes are changed, the positions of the bones remain constant. This is not a problem for small deformations, but for larger deformations the rig needs to be updated and re-skinned to the mesh, for example when the fingers are deformed along their direction, see Figure 5.4. The positions of the joints relative to the mesh are critical for obtaining correct motion and animation of the fingers.

(a) Original rig structure. (b) Recomputed rig structure.

Figure 5.4: Example when the underlying rig is not updated by blend shapes.

In order to resolve this problem, we have implemented a function that updates the positions of the joints when the blend shapes are changed. To determine the new position of a joint, we need to find out how the mesh surface moves when it is deformed by the blend shapes. How a mesh surface moves can be computed from the displacement of its vertices, and to avoid the process of searching for vertices close to a bone, we can take advantage of the information stored in the skinning of the mesh.

All vertices have a skinning weight between 0 and 1, created when the rig was skinned to the mesh. By comparing this weight to a threshold value and keeping only the vertices with high weights, we obtain the vertices that are closest to a bone. We then calculate a displacement vector between the original position and the current position of each of these vertices. The average of these vectors is an approximation of the real displacement of the vertices closest to a bone for the current blend shape weights.
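A compact sketch of this step, assuming the weights and vertex arrays have already been extracted from the mesh:

    import numpy as np

    def bone_displacement(orig_verts, curr_verts, bone_weights, threshold=0.9):
        # bone_weights: (V,) skinning weights of one bone; vertices
        # weighted above the threshold count as "closest to the bone".
        close = bone_weights > threshold
        # Average displacement of the strongly skinned vertices
        # approximates how the surface around the bone has moved.
        return (curr_verts[close] - orig_verts[close]).mean(axis=0)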


Figure 5.5: A step by step illustration of how the new joint position is calculated for a finger. The circles denoted J1-J4 represent the joints and the lines between them are the interconnected bones. Orange visualizes the original positions while green represents the updated positions.

By using the algorithm described above to determine the movement of the surface closest to a bone, we have created an ad hoc solution that updates all the joints of a finger by only calculating the displacement of the fingertip joint J4, see Figure 5.5. We exploit the fact that the bones of a finger form a coherent hierarchy and that the position of the joint J1 is fixed when the length of the finger varies.

In (A) we have the case where the finger length has been extended and the joints remain at their original locations. We then calculate the new position of joint J4 according to the algorithm described above (B). We thus have the locations of both joint J1 and J4, and now need to determine the locations of the joints between them. To determine the new position of joint J3, we first calculate the direction from the new position of J4 to the original position of J3. We then reuse the original ratio of the bones to determine the offset from J4 to the new position of J3 (C). This is then repeated for the remaining intermediate joint J2 (D). Once this is done, we have all the new joint positions (E).
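The sketch below is one plausible reading of these steps: J4 is moved by the measured surface displacement, J1 stays fixed, and the intermediate joints are placed along the directions toward their original positions with the original bone-length ratios preserved.

    import numpy as np

    def recompute_finger_joints(orig_joints, new_tip):
        # orig_joints: (4, 3) original positions J1..J4; new_tip is the
        # recomputed position of the fingertip joint J4.
        new = orig_joints.copy()
        new[3] = new_tip
        # Original bone lengths J1-J2, J2-J3, J3-J4 and the overall
        # scale change of the J1-J4 span.
        lengths = np.linalg.norm(np.diff(orig_joints, axis=0), axis=1)
        scale = (np.linalg.norm(new_tip - orig_joints[0])
                 / np.linalg.norm(orig_joints[3] - orig_joints[0]))
        for j in (2, 1):  # J3 first, then J2; J1 (index 0) stays fixed
            # Aim from the already updated child joint toward the
            # original joint position, then step the rescaled length.
            direction = orig_joints[j] - new[j + 1]
            direction /= np.linalg.norm(direction)
            new[j] = new[j + 1] + direction * lengths[j] * scale
        return new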


Chapter 6

User Studies

In order to answer our research question, we designed and conducted user studies using the hand customization framework presented earlier. Our independent variable for the experiment is the representation of the virtual hands. The dependent variable is the feeling of embodiment, measured with questions testing ownership and implications or signs of ownership. The experiment had a within-group design, which means that all participants were shown the same representations of virtual hands. To reduce learning effects, the order of the virtual hands was randomized for each participant.

6.1 Virtual Hand Representation

We hypothesize that the sense of ownership is strengthened when the virtual hand resembles the user's real hand. For this reason, different virtual hand representations were used to explore features that have been suggested in previous studies to affect the sense of embodiment. As we have seen in section 2.2, the sense of embodiment is promoted when virtual hands are processed as if they were properties of one's own real body. One of the virtual hands is therefore going to mimic the dimensions and proportions of the user's hand.

The size of a virtual body has been suggested to influence the sense of embodiment. Hands that are bigger than one's own have been shown to increase the feeling of embodiment compared to smaller hands [24]. However, that experiment was performed by displaying a projected image of the user's own hand on a screen; we therefore want to explore whether this holds for virtual hands in VR too. We also wanted to examine a representation that is more abstract and not fully realistic, since this is a common way to represent virtual hands. The last representation is intended to explore how ownership is affected by virtual hands that have traits of the opposite sex. Previous experiments indicate that the user's behavior is influenced by the skin color of a virtual body [11]. We therefore wanted to investigate how virtual hands that are not generic, but instead have female or male characteristics, affect the sense of embodiment.

The four virtual hand representations that we used in the experiment, see Figure 6.1, were: individualized, oversized, cartoon and opposite sex. Each representation is described in more detail below.


Figure 6.1: Virtual hand representation. A: Individualized hand. B: Oversized hand. C: Cartoon hand. D: Opposite Masculine/Feminine hand.

All hands are based on the same generic mesh equipped with blend shapes, except the two hands that represent the opposite sexes. As a result of the pilot study, the male and female hands from the hand customization framework were not perceived to portray sufficiently masculine and feminine attributes. Instead we used the male and female hand models from the Leap Motion SDK.

However, all mesh models have been uniformly scaled to the same dimensions using the hand customization framework, and they all have the same underlying rig structure, so the movements of the fingers are identical.

6.1.1 Individualized Hand

The first hand is intended to mimic the user's real hand in terms of size and proportions, see Figure 6.2. This means that this representation changes depending on the user. The three variables used to generate this hand were the length of the middle finger, the palm length and the palm width.

The whole process of exporting hands from the framework, importing the file into Unreal Engine and configuring the active hands in the experiment environment takes around 2-3 minutes. In order to avoid this effort during the user studies and the need to repeat this process for each user test, 18 hands were created in advance. We then used a look-up table, see Table A.1, to identify the hand model that was closest to the user's own dimensions.
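Selecting the pre-built hand then reduces to a nearest-neighbour lookup over the three measures, as in this sketch (the configuration values shown are illustrative placeholders, not the entries of Table A.1):

    import numpy as np

    # (middle finger length, palm length, palm width) in cm -- placeholders.
    CONFIGS = {
        "config_01": (7.0, 9.5, 8.0),
        "config_09": (8.0, 10.5, 8.5),
        "config_13": (9.0, 9.5, 8.5),
    }

    def closest_config(finger_len, palm_len, palm_width):
        # Pick the pre-exported hand whose measures best match the user.
        user = np.array([finger_len, palm_len, palm_width])
        return min(CONFIGS,
                   key=lambda name: np.linalg.norm(user - CONFIGS[name]))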


Figure 6.2: Examples of the individualized hands. Config 1, denoted A, shows a virtual hand with short finger length, short palm length and normal palm width; Config 9, denoted B, shows a virtual hand with normal finger length, normal palm length and normal palm width; Config 13, denoted C, shows a virtual hand with long finger length, short palm length and normal palm width. See Table A.2 for all configurations of individualized hands.

6.1.2 Oversized Hand

In general, the scaling of a virtual world is important, and this also applies to the representation of hands. Virtual hands that are either bigger or smaller than one's own create a proprioceptive mismatch between the real and virtual hands. This implies that virtual hands that deviate in size should be perceived as different from one's own, and perhaps even clumsy. This pair of virtual hands is larger than the average hand and is supposed to be unnaturally big. The generic mesh model has been scaled by a factor of 1.4 and has a total length (palm length + finger length) of 24.22 cm.

6.1.3 Cartoon Hand

With this representation, we want to explore how abstraction affects the feeling of ownership. The hand is characterized by a white color with black outlines, which gives the impression of a cartoon hand. Furthermore, the hand is slightly smaller than the other representations, while the shapes of the fingers are rounder and the wrists are disproportionately slimmer.

6.1.4 Opposite Masculine/Feminine Hand

We want to explore how a pair of hands with typical male or female traits influences ownership. This is done by providing a pair of hands with typical male characteristics to the female participants and typical female hands to the male participants.


6.2 Experimental Setup

Interactivity has been defined as one of the cornerstones of VR, and a wide range of tracking systems exists to enable user interaction. Our main focus is hand tracking solutions, both with and without finger tracking. As users are completely immersed in an HMD, the types of input devices that can be used are limited. One of the most important aspects of input devices is to provide a correspondence between the physical and virtual environment. As a result, accurate tracking is a vital part of making interaction techniques usable within VR applications. The differences between various tracking systems imply differences in how virtual hands are represented.

A typical VR system consists of a head-mounted display and two 6 DoF devices that track the positions and orientations of the user's hands. The fact that users need to actively hold the controllers in their hands can to some extent be considered limiting, since it restricts finger movements and the ability to interact freely with the hands. Although 6 DoF devices can have high accuracy, position and orientation are not enough to control all the joints of a hand, and such devices cannot reproduce the posture of the user's hands.

To get more detailed tracking information about the user's hand posture and finger flexion, a pair of gloves or an optical tracking system can be used. The major disadvantage of optical tracking solutions is occlusion: in many cases, the camera is unable to observe fingers that are occluded.

Figure 6.3: Hardware setup for the experiment. A Leap Motion is mounted on the HTC Vive, together with a developer version of Manus VR from 2016.

The setup for the user studies consisted of an HTC Vive, a Leap Motion and a pair of Manus VR gloves. Together with the Oculus Rift, the HTC Vive is a high-end virtual reality headset, which means a higher resolution, a wider field of view and 6-DoF head tracking that give an increased feeling of presence compared to low-end or mobile virtual reality headsets. Leap Motion is a hand tracking device that uses infrared cameras to track both the position of the hands and the finger movements. Although Leap Motion supports tracking of more than one hand, it is not always robust when two hands are tracked. For this reason, we used Gleechi's VirtualGrasp sensor library to combine Leap Motion and Manus VR to track both hands more robustly. With this setup, Leap Motion was only used to track the positions of the wrists, while the Manus VR gloves tracked the finger movements. As shown in Figure 6.3, the Leap Motion was mounted on the front of the HMD to track the user's wrists and hands.

6.3 Experimental Design

One is faced with plenty of design decisions when developing for VR, not least when designing an experiment for user studies. The purpose of this experiment is to examine the sense of ownership for different representations of virtual hands. It was considered important not to introduce any artificial hand movements. For this reason we combined Leap Motion and the Manus VR gloves to track the full movement of the user's real hands, and designed the experiment not to involve any grasping interaction.

We believe that by showing a one-to-one mapping of the user's real hand movements, the user will pay more attention to the appearance of the virtual hands and less attention to their movements. It has been shown that subtle abnormalities in movement can be distracting and decrease presence [10].

Another key design decision to promote ownership is that both hands are tracked and represented in the virtual world. The fact that people have two hands in the real world implies that the ownership illusion is best supported when the virtual body has the same number of limbs [5].

Because some users find it more difficult to read text in VR, we let the participants take off the HMD in order to answer the questionnaire. There was no time limit for answering the questionnaire, so the participants could read the statements at their own pace on a computer screen and provide their answers via a keyboard. We tried to minimize the time that users spent wearing the HMD in order to keep the total time of the experiment reasonable, while assuming that participants still had enough time to form an opinion about the virtual hand. The time that each pair of hands was shown (60 seconds) was determined by the pilot study.

With the kind of questionnaire that is intended to investigate the users' subjective experience, it was considered important to let the user answer the questions as soon as possible after the exposure. If users were allowed to first see all representations and then answer a questionnaire, their answers could be biased, since our questionnaire is intended to explore their immediate experience. The alternative of reading the questionnaire to the users and letting them answer orally while still wearing the HMD was rejected, because the pilot study showed that questions then had to be repeated and explained more frequently.

The infrared cameras of the Leap Motion device work better when not facing a direction with highly reflective objects such as windows or glossy computer screens. We therefore designed the experiment so that participants were rotated 90 degrees away from the desk with the computer screen during the experiment. In this way they were also not at risk of bumping into the table or the computer screen.

According to our experimental design, our three hypotheses for the experiment were:

H1. Increased sense of ownership for the individualized hand.
H2. Increased sense of agency for the cartoon hand.
H3. Decreased sense of ownership for the opposite-sex hand.

6.3.1 Environment

We used Unreal Engine 4.14 to construct the 3D environment for the user studies. It consists of a room with four windows and a door, as depicted in Figure 6.4. The user is located in the center of the room, behind a table. The table's function was both to hold objects and to act as a boundary. Furniture was added to create a sense of well-being, and the scene was kept simple to avoid distractions. All the objects in the scene are either free assets from the UE4 Marketplace, 3D models under Creative Commons licenses or objects from the NVIDIA VR Funhouse demo.

Figure 6.4: The virtual environment used in the experiment.

On the opposing wall in front of the user, there is a basketball hoop and a blackboard. While the test is running, the blackboard displays the score: the number of balloons that have been placed in the basketball hoop. If the test is paused, a text indicating this is displayed instead. The basketball hoop is located 1.28 m from the user's position, which means it is not possible to reach forward and put a balloon directly in the basket. The scoreboard is incremented whenever the user manages to get a balloon into the hoop by carefully pushing it in the right direction.
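As an illustration of how such scoring is typically wired up in Unreal Engine, the following is a minimal sketch using an overlap event on a trigger volume inside the hoop. The class and member names (ABasketballHoop, HoopTrigger, ABalloon, Score) are assumptions for illustration, not taken from the actual implementation.

#include "Components/BoxComponent.h"
#include "GameFramework/Actor.h"

// Minimal sketch: increment the score when a balloon passes through a
// trigger volume placed inside the hoop. In real UE4 code the handler
// must be declared as a UFUNCTION() so it can be bound dynamically.
void ABasketballHoop::BeginPlay()
{
    Super::BeginPlay();
    HoopTrigger->OnComponentBeginOverlap.AddDynamic(
        this, &ABasketballHoop::OnHoopOverlap);
}

void ABasketballHoop::OnHoopOverlap(UPrimitiveComponent* OverlappedComp,
    AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
    bool bFromSweep, const FHitResult& SweepResult)
{
    // Only balloons count towards the score shown on the blackboard.
    if (OtherActor && OtherActor->IsA(ABalloon::StaticClass()))
    {
        ++Score;
    }
}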

Furthermore, there are three main objects on the table that users are supposed to interact with during the experiment, all placed within comfortable reach. A metal tube that inflates the balloons is located on the left side of the table. The balloon can be detached from the metal tube if the user applies enough force.

The balloons were given a lower gravitational value than is plausible in reality and therefore move slower than solid objects in the real world. Virtual objects that move slightly slower and stay longer in the air are more forgiving, in the sense that the VR experience benefits when users have more time to interact with objects whilst they are in the air. For balloons this still creates a natural mapping between the physics of the real world and the virtual world, since balloons are expected to drift slowly. If, on the other hand, a rock or a metal object moved as a low-gravity object in the virtual world, we would consider the virtual environment non-realistic, since we are not used to being in a low-gravity environment.

In addition to the balloons, four cubes are placed on the right side of the table. These cubes were added as a result of the pilot study, where it was observed that users were not stimulated enough when they only had the balloons to interact with. The cubes were also given a lower gravitational value than is plausible in reality, which means that they too hover a little longer in the air, making them easier to interact with.
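A minimal sketch of one way to achieve this reduced-gravity behaviour in Unreal Engine, assuming engine gravity is disabled for the body (SetEnableGravity(false) at startup) and a fraction of standard gravity is applied manually each tick; GravityFraction and MeshComponent are illustrative names, not from the actual implementation:

void ABalloon::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);
    // 980 cm/s^2 is standard gravity in Unreal units; bAccelChange = true
    // applies the force as a mass-independent acceleration.
    MeshComponent->AddForce(FVector(0.f, 0.f, -980.f * GravityFraction),
                            NAME_None, /*bAccelChange=*/true);
}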

There are two yellow buttons on the right side of the table. One button resets the position of the balloon and the other restacks the cubes into their original tower. This functionality was added after the pilot study in order to give the user more control over the environment.

The experiment has intentionally been designed so that users are not able to grasp the objects. We therefore selected objects that are not associated with one-handed grasping: people rarely or never grab balloons or large cubes with one hand, which makes them appropriate for our experiment. We used the built-in physics engine of Unreal Engine to simulate the physics and to detect collisions between the virtual hands and the objects. All fingers had dynamic box colliders that were scaled according to the length and size of the virtual fingers, and all box colliders were updated in real time along with the transformation of the rig. This made it possible to touch objects with high precision using the fingertips.
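The per-finger collider update could look roughly as follows; this is a sketch under the assumption that each collider is a UBoxComponent following a named bone of the hand rig, with HandMesh, FingerBoneNames and FingerExtents as illustrative member names:

// Assumed members (illustrative):
//   USkeletalMeshComponent* HandMesh;        // the animated hand rig
//   TArray<UBoxComponent*>  FingerColliders; // one dynamic box per finger
//   TArray<FName>           FingerBoneNames; // e.g. fingertip bones
//   TArray<FVector>         FingerExtents;   // scaled to the hand model
void AVirtualHand::UpdateFingerColliders()
{
    for (int32 i = 0; i < FingerColliders.Num(); ++i)
    {
        // Follow the current bone transform of the animated rig.
        const FTransform BoneTransform =
            HandMesh->GetSocketTransform(FingerBoneNames[i], RTS_World);
        FingerColliders[i]->SetWorldTransform(BoneTransform);
        // Extents scaled to the individualized finger length and size.
        FingerColliders[i]->SetBoxExtent(FingerExtents[i]);
    }
}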

If a balloon was pushed outside the volume above the table, a new balloon was spawned 50 centimeters above the table. However, only one balloon was allowed to be in the volume above the table at a time.
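The respawn rule could be expressed as a simple bounds check; a sketch assuming the volume above the table is stored as an FBox (TableVolume) and the single active balloon is moved rather than duplicated, with all names illustrative:

void ABalloonManager::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);
    if (ActiveBalloon &&
        !TableVolume.IsInside(ActiveBalloon->GetActorLocation()))
    {
        // Only one balloon may exist above the table at a time, so the
        // same actor is respawned 50 cm above the table surface
        // (Unreal units are centimeters).
        ActiveBalloon->SetActorLocation(
            TableCenter + FVector(0.f, 0.f, 50.f));
    }
}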

6.4 Participants

Thirteen participants were recruited through social media, see Table C.1. Eight male and five female participants took part in the experiment, aged from 24 to 33 years (M=28.61; SD=2.50). Four subjects did not have any previous experience of virtual reality and nine had tried VR before. Six subjects considered themselves experienced in playing computer games and seven had little experience of computer games.

6.5 Experimental Protocol

When the participants arrived, they were asked to read an information sheet and were then instructed to take three measurements of their own hand. The information sheet, see Figure C.3, contained information about the purpose of the study as well as the risks and the handling of the data collected during the user tests. The measurements of the user's hands were done using a transparent ruler with an accuracy of 0.5 cm and were monitored by the experimenter to make sure that the measurements were done correctly.

Figure 6.5: Locations used for measurements of the hands.

As shown in Figure 6.5, all fingers were kept extended during the measurements. Palm length was measured as the straight distance between the distal crease of the wrist joint (C) and the root of the middle finger (B). Finger length was measured from the root of the middle finger (B) to the tip of the middle finger (A). Palm width was measured as the distance between the index finger (D) and the pinky finger (E). Using these measurements, the lookup table was then used to select the closest of the 18 pre-generated individualized hands.
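The lookup itself amounts to a nearest-neighbour search over the three measured dimensions. A minimal, self-contained sketch (the HandEntry structure and table contents are assumptions, and since the exact matching metric is not specified here, squared Euclidean distance is used):

#include <array>
#include <limits>
#include <vector>

struct HandEntry {
    int id;                    // index of the pre-generated hand model
    std::array<float, 3> dims; // palm length, finger length, palm width (cm)
};

// Returns the id of the pre-generated hand closest to the measured values.
int ClosestHand(const std::vector<HandEntry>& table,
                const std::array<float, 3>& measured)
{
    int best = -1;
    float bestDist = std::numeric_limits<float>::max();
    for (const HandEntry& e : table) {
        float d = 0.f;
        for (int i = 0; i < 3; ++i) {
            const float diff = e.dims[i] - measured[i];
            d += diff * diff; // squared Euclidean distance
        }
        if (d < bestDist) { bestDist = d; best = e.id; }
    }
    return best;
}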

The users were asked to enter general information, physical measurements and previous experience in a pre-questionnaire, see Figure C.1. All experiments were performed at the same location and under the same conditions. The participant was then asked to sit down in a chair and put on the equipment for the experiment. The experimenter asked if the equipment fitted well and helped to adjust it if necessary. In order to avoid the users' hands colliding with the table or the computer screen in front of them, all participants were asked to rotate 90 degrees away from the table.

Before the experiment started, the experimenter explained to the participants that they were going to see four hand representations and that, after each one, they would be asked to take off the HMD and answer a questionnaire, see Table 6.1, related to the corresponding virtual hand representation. The questionnaire addressed a number of statements related to the sense of ownership, either direct ownership questions or questions about implications or signs of ownership. The questionnaire was inspired by the work of [14] and refined in the pilot study.


This was repeated until the participant had seen all of the virtual hands. When the test was completed, participants were given a short debrief to explain the purpose of the experiment, see Figure C.4. The entire process took around 20-25 minutes to complete.

During the pilot study, it was observed that people with no previous experience of Leap Motion needed instructions on how to deal with tracking issues. Therefore, before the experiment began, all participants were instructed to hold their real hands in front of them with the palms towards their face if the virtual hands disappeared during the experiment. They were not given any further instructions about the experiment.

Table 6.1: Post-test Questionnaires.

ID | Questionnaire Item | Corresponding Concept
Q1 | I felt as if the virtual hands were a part of my body. | Ownership
Q2 | I felt that the size of the virtual hands was close to my own hands. | Visual similarity
Q3 | I felt that the shape of virtual fingers were similar to my own fingers. | Visual similarity
Q4 | I felt that the length of virtual fingers were similar to my own fingers. | Visual similarity
Q5 | I felt satisfied with the visual appearance of the virtual hands | Realism
Q6 | I felt as if the virtual hands belonged to the opposite sex | Ownership
Q7 | I felt as if I was in control of the virtual hands. | Agency
Q8 | I felt discomfort when the virtual hands went through an object. | Ownership

6.5.1 Procedure

The experiment began when the participant was seated wearing the HMD and gloves. Initially, all participants were given 15 seconds to look around and become familiar with the environment. During this time, the users did not see any visual representation of their hands.

When the 15 seconds had passed, the experiment was paused and the participant was asked by the experimenter to hold up their hands in front of them. This was done to make sure that the Leap Motion registered the user's hands. Then, the first pair of hands became visible to the user for 60 seconds. When the 60 seconds had passed, the virtual hands were hidden again and the experiment was paused. The participant removed the HMD and answered the questionnaire. This procedure was repeated four times.


Chapter 7

Results

In this chapter, we present the results of the user studies. Our results consist mainly of the questionnaire answered by the participants during the user study, complemented by comments from the short interview that followed the experiment.

7.1 Questionnaire

A statistical summary of the questionnaire responses is shown in Table 7.1. In addition to the statistical summary, we have visualized the questionnaire data using boxplots, see Figure 7.1. A boxplot summarizes data using five values: the median, the lower and upper quartiles, and the minimum and maximum values. On each box, the central mark indicates the median, while the bottom and top edges of the box indicate the 25th and 75th percentiles, respectively. Thus, the box contains 50% of the values and each vertical line contains 25% of the values. Values outside this range are considered outliers and marked with a triangle or rhombus.
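For completeness, a minimal sketch of how the quartiles behind such a boxplot can be computed, assuming linear interpolation between sorted samples (the exact quartile convention used by the plotting tool is not stated here):

#include <algorithm>
#include <cstddef>
#include <vector>

// Returns the q-quantile (q in [0, 1]) of a non-empty sample,
// interpolating linearly between sorted values.
double Quantile(std::vector<double> v, double q)
{
    std::sort(v.begin(), v.end());
    const double pos = q * (v.size() - 1);
    const std::size_t lo = static_cast<std::size_t>(pos);
    const double frac = pos - lo;
    if (lo + 1 >= v.size()) return v[lo];
    return v[lo] * (1.0 - frac) + v[lo + 1] * frac;
}
// Median: Quantile(v, 0.5); box edges: Quantile(v, 0.25), Quantile(v, 0.75).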

The following observations were made by analyzing the boxplots visually. Overall, there is a large spread among the answers for all statements except (Q7) and (Q8). We observe that participants gave a neutral answer to both (Q1) and (Q3). In addition, statement (Q3) has a large interquartile range for all hands. In general, participants felt satisfied with the visual appearance of all virtual hands (Q5). Furthermore, for all virtual hands, participants perceived being in control (Q7) and did not feel any discomfort when the virtual hand went through an object (Q8).


[Figure 7.1 consists of eight boxplot panels, one per questionnaire item Q1-Q8.]

Figure 7.1: The vertical axes are the questionnaire responses. On the horizontal axes are the four virtual hands, numbered as follows: 1 (blue) represents the individualized hand, 2 (green) is the oversized hand, 3 (red) is the cartoon hand and 4 (purple) the opposite hand.

In addition, we analyzed the questionnaire data (Q1-Q8) using a one-way ANOVA. This was done in order to determine whether our dependent variable differs between the virtual hands. The null hypothesis in ANOVA is that all average values are equal. If the ANOVA test provides a significant result, we can reject the null hypothesis and say with a certain confidence that at least one of the average values differs from the others in a way that is not due to chance. The significance level was set at p < 0.05. Post-hoc analysis was conducted with Tukey tests. In addition, we report the effect size for each statement that yielded a significant difference; this value describes the magnitude of the difference between the average values (the test statistic and effect size measure are written out after the list below). We found significant differences between the virtual hands in three statements, (Q2), (Q4) and (Q6):

- The first statement that yielded a significant difference was Q2 (F(3, 48) = 3.53, p < 0.05; η² = 0.18). The size of the individualized hand was perceived to be closer to the participants' own hands than that of the cartoon hand.

- The second statement that showed a significant difference was Q4 (F(3, 48) = 4.15, p < 0.05; η² = 0.21). The length of the fingers was rated as less similar for the cartoon hand compared to both the individualized hand and the oversized hand.

- The third significant result was Q6 (F(3, 48) = 7.45, p < 0.01; η² = 0.31). A post-hoc test showed a significant difference between the oversized hand and the opposite hand: the opposite hand was to a greater extent ranked as being associated with the other sex compared to the oversized hand.
