
Linköpings universitet SE–581 83 Linköping

Linköping University | Department of Computer and Information Science

Master thesis, 30 ECTS | Datateknik

2018 | LIU-IDA/LITH-EX-A--18/053--SE

Use of head mounted virtual reality displays in flight training simulation

VR-glasögons användbarhet för pilotträningssimulering

Anders Gustafsson

Examiner : Zebo Peng



Copyright

The publishers will keep this document online on the Internet – or its possible replacement – for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/hers own use and to use it unchanged for non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.


Abstract

The purpose of this thesis was to evaluate currently commercially available head mounted virtual reality displays for potential use in pilot training simulators. For this purpose a commercial simulator was modified to display the virtual environment in an Oculus Rift DK2 headset. A typical monitor based setup was used to provide a set of hardware requirements which the VR implementation had to meet or exceed to be considered potentially usable for pilot training simulators. User tests were then performed with a group of users representative of those normally using pilot training simulators, including both pilots and engineers working with simulator development. The main focus of the user tests was to evaluate some potential weaknesses found in the technical comparison (such as when a measured parameter was close to the lower limit defined by the monitor based setup) and to measure the usability of the VR implementation.

The results from the technical comparison showed that the technical requirements were met and in most cases also exceeded. There were however some potential weaknesses revealed during the user tests, which included the screen resolution and the field of view. One main critical deficiency was found during the user tests: the lack of interaction with the aircraft, as users were only able to interact with the flight stick and throttle lever. While this enabled the users to control many aspects of the aircraft (using buttons and other controls fitted on the flight stick/throttle), in a training scenario a user also has to be able to interact with other switches and/or monitors in the cockpit. This was however a known limitation of the implementation and thus didn't affect the tested parts of the simulator.

The user tests also confirmed that the resolution was a potential problem, but that the overall usability was high. Thus the VR implementation had potential for use in a pilot training simulator, if the critical issues found during the user tests were solved.


Acknowledgments

I would like to thank all Saab and Vricon employees that I came in contact with while writing this thesis: all those who participated in the user tests, without whom there would have been no results at all; all those who were helpful, answered any questions I had and explained the complicated world of pilot training simulators; and finally all those who helped me in other ways, be it with administrative issues or the smaller things, such as inviting me out to lunch or making sure that I was invited to activities for employees, making me feel welcome and making the task of writing this thesis much more enjoyable.

A special thanks also to the following: To Stefan Furenbäck, my supervisor at Saab who helped organize everything, who answered any questions I had or pointed me in the right direction when other people could give a better answer.

To Erik, with whom I could share ideas and discuss problems, and who pushed me forwards and pushed me to make decisions even when they were hard to make. To Alexander, for all the help and support in the beginning. Even if the initial development track had to be shut down, it provided useful insights when modifying the simulator later used during the project. To Zebo Peng, my examiner at Linköping University, who was very supportive and easy to get in contact with when needed, even though I was not physically located at the university while writing this thesis.


Contents

Abstract

Acknowledgments

Contents

List of Figures

List of Tables

1 Introduction
1.1 Motivation
1.2 Aim
1.3 Research questions
1.4 Delimitations

2 Background and Theory
2.1 Virtual Reality
2.2 Flight training simulators
2.3 Health issues
2.4 Measures of usability in Virtual Reality
2.5 Related work

3 Method
3.1 Technical comparison
3.2 Implementation
3.3 Preparations for testing
3.4 User tests
3.5 Data analysis

4 Results
4.1 Technical comparison
4.2 User tests

5 Discussion
5.1 Results
5.2 Method
5.3 The work in a wider context

6 Conclusion
6.1 Technical requirements
6.2 Usability

Bibliography
Appendices


List of Figures

2.1 Hardware setup of SCALAB simulator.
2.2 Hardware setup of VRFS prototype.
3.1 The relationship between the distance d and the angle α.
3.2 Block diagram for the VR HMD simulator prototype.
3.3 Simulated cockpit environment with instruments used in Task 1 highlighted.
3.4 Václav Havel Airport runways.
3.5 Task 3 flight path example.
3.6 Comparison scale used in user test questionnaire.
4.1 Comparison scale results, test average marked with a red x.


List of Tables

2.1 Computation of SSQ Scores.
3.1 Calculated FoV angles depending on the distance from the monitor.
3.2 Absolute and preferred technical specifications.
3.3 Margin of error allowed for each instrument.
3.4 Task 3 flight path checkpoint descriptions.
4.1 Comparison between the HMD, the lowest requirements and preferred values. HMD resolution is per eye.
4.2 Screen properties for both setups.
4.3 Results from Task 1 with min, max and mean completion time for the respective groups.
4.4 Total amount of incorrect answers.
4.5 Average symptom score before and after user tests.
4.6 Average total and per category SSQ score with and without category weights as described in table 2.1.


1 Introduction

Flight simulators are today an essential part of pilot training, as simulating flight has several benefits over using a real aircraft. Using a simulator is more cost efficient, not only because operating an aircraft is very expensive compared to simulating one, but also because a simulator can focus on certain parts of a flight.

Cost efficiency grows ever more important as scenarios get larger, involving more aircraft and more advanced systems. It is both hard and time consuming to plan and set up an advanced training scenario, which may include both friendly and opposing forces as well as different tactical systems such as radar and electronic warfare (EW) systems.

The second benefit is efficiency: while a real aircraft always has to take off and land, a simulator can skip these steps if they are not relevant for the exercise. This makes time spent in the simulator more efficient than time in an actual aircraft, as the relevant part of the flight can be repeated over and over without spending time on landings or on flying back and forth from the air base.

1.1 Motivation

Flight training simulators exist in several versions of varying fidelity, from advanced CAVE setups with a complete cockpit environment to simpler systems using one or more screens and a simplified cockpit environment. There are also simulators that allow several pilots to share the environment, enabling them to train not only on operating the aircraft but also on interacting with each other, both in the air and through different on-board systems (such as communication).

With recent advances in virtual reality (VR), especially regarding head-mounted displays (HMDs), the interest in using VR HMDs to supplement current simulators has increased. Possible future uses include both pilot training simulators and the role playing stations which are used to support a pilot training simulation.


1.2 Aim

The aim of this thesis is to compare the current capabilities of commercially available VR HMDs with the flight training simulators used today. To evaluate current VR HMD capabilities, a proof-of-concept prototype simulator with basic controls, using an Oculus DK2 VR HMD to present the virtual environment to the user, will be constructed. This prototype will be evaluated with regard to usability (see section 2.4 for a definition) and compared to a low-fidelity flight training simulator with simpler instruments and controls using a single screen to show the virtual environment. The results of this evaluation, covering both usability and the technological differences between the two setups, will provide insight into how technical differences between current screen-based and VR HMD based simulators affect the users, as well as some guidelines on how to best utilize simulators using HMDs.

1.3 Research questions

There are two research questions:

• Do contemporary VR HMDs meet the technical requirements of a flight training simulator?

• Can VR HMDs be used in pilot training with retained, or improved, realism compared to current systems?

1.4 Delimitations

The focus of the thesis will be limited to professional, low-fidelity, screen-based flight training simulators and how a VR HMD can be used to complement and improve such a setup. For this purpose a prototype simulator will be implemented and tested with a group of potential users. This user group will contain people who normally work with the current simulators: engineers working with the development of the simulator and/or the simulated aircraft, and pilots who use the simulator for training purposes.

The prototype will consist of a flight simulator integrated with a commercially available HMD. The interaction required to control the simulated aircraft will be centered around the HOTAS concept (i.e., Hands On Throttle And Stick). In the prototype this will include the main flight controls, throttle and aileron control. Rudder controls will also be available, although not by using pedals as in a real aircraft but rather as a set of analogue buttons located on the throttle lever. Other interaction with the aircraft will not be covered in this thesis. Future ideas and suggestions on where to start researching this will be presented in section 6.3.

Due to time constraints the prototype will not be an integration with an actual flight training simulator; instead a commercial simulator will be used. The use of a commercial simulator does not in itself pose a problem for the results of this thesis, as the purpose is to provide insight into the current capabilities and problems of VR HMDs to serve as a base for future development, rather than an actual implementation to be used. It does however introduce a set of differences in the environment that are not related to the VR HMD experience, which will have to be taken into consideration during testing.

There is also a delimitation regarding usability (section 2.4). Although accessibility is something that normally should be considered, in this thesis it will only be mentioned briefly. The reason is the nature of the potential users of the kind of simulators discussed. The


users can be divided into two groups: pilots training with the simulators, and engineers or instructors working with the simulator.

In the case of the latter group, there may well be cases where a person has some sort of disability which should be considered. However, as the only extra equipment used with a VR HMD compared to the current simulators is the headset itself, the most interesting question is if and how the headset can be adjusted to fit the user. An example is whether a user wearing glasses can adjust the HMD so that it fits comfortably without being forced to remove the glasses. Other aspects, such as what is rendered on the HMD screen, will not be discussed in any detail, as the use of an HMD in a simulator is a way of showing the user the simulated environment, not a question of how to show it. While this could in a commercial context include navigating through menus, which could require an HMD-focused design, in this case such things are not done by the pilot. The simulator pilot (engineer as well as real pilot) will only interact with the aircraft and its controls, and these have, for the purpose of training, to be as realistic as possible; thus interaction with the simulator itself is done either by someone other than the pilot, or by the simulated pilot under the same requirements as in a real aircraft. How to present information in a cockpit is also a large research area in itself and, as such, out of the scope of this thesis.

In the case of pilots, there will be few (if any) cases where a user has any disability, as there are high requirements regarding both fitness and health that a person has to pass in order to become a pilot [13]. An example is eyesight (which has to be near perfect); in some cases even corrective eye surgery is not enough to meet the requirements [5]. It would thus be meaningless to focus on cases where a person with a disability would have problems using the simulator equipment, if that disability would disqualify the person as a pilot.


2 Background and Theory

This chapter presents some historical notes regarding both virtual reality and flight simulation, as well as the theory on which this thesis is based. This includes theory used directly in this thesis, such as research regarding simulator sickness and usability, as well as other related work.

2.1 Virtual Reality

While the exact origins of the term "virtual reality" are unknown, one of the first implementations was the Sensorama, patented by Morton Heilig in 1962 and with the first working prototype built the same year. The Sensorama was a machine able to show the user a short film in stereoscopic 3D, augmented with stereo sound and scents. It also had some ability to move the user's body to make the experience feel as immersive as possible. It was however a commercial failure, as Heilig was unable to secure funding and the prototype was the only device built.

The first example of an HMD was created by Ivan Sutherland in 1968. The device, capable of head-tracking, could show the user "wire frame" line drawings of objects in 3D [15].

The first products aimed at the consumer market came during the late 1980s, with devices such as the Eyephone, an HMD launched in 1988. Another device worth mentioning is the Virtual Boy, a game console manufactured by Nintendo which used a headpiece similar to an HMD. It was the first consumer product capable of displaying content in stereoscopic 3D, although only in red, as full color LCDs would have made the device too expensive. But even with the monochrome screens the device was still expensive, and combined with the lack of head-tracking and the poor reception of both the games and the 3D effects, the Virtual Boy was a commercial failure.

The first prototype version of the Oculus Rift was shown to the public at E3 in June 2012, followed by a crowdfunding campaign later that year [9]. The campaign resulted in the first Developer Kit, DK1, which was released in March 2013 [9]. One of the primary strengths of the


Rift DK1 was the relatively high field of view (FOV) which, at 110 degrees, was considerably higher than in previous commercial devices. The Rift DK2 was released in mid-2014; its improvements over the DK1 included a higher screen resolution and a different motion-tracking system. The first consumer version of the Rift, the CV1, was released in 2016. Also released in 2016 was the HTC Vive, a device similar to the Oculus Rift, produced and developed by the communications equipment manufacturer HTC and the video game developer Valve Corporation. There are also several headsets designed to be used with smartphones, such as the Google Cardboard (released in 2014) and the Samsung Gear VR (2015).

2.1.1 Output hardware

Output hardware is the hardware used to display the virtual environment to the user. There are several different technologies but this section will focus on two types: head mounted displays and CAVE-systems.

Other technologies include spatially and temporally multiplexed displays, which are techniques to create an illusion of 3D on a single screen by manipulating the image shown. Temporally multiplexed displays work by alternating each frame between the left and right eye view, while a pair of glasses blocks the view of the other eye [12]. Spatially multiplexed displays show the left and right eye views superimposed on the same screen but through different polarizing filters; the user then wears polarized glasses to separate the superimposed views from each other [12].
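As a minimal illustration of the temporal scheme, the display simply alternates which eye's view each frame carries, and the shutter glasses open the matching eye (a toy sketch; the function name and the 120 Hz figure are this example's own assumptions, not from the thesis):

```python
def frame_eye(frame_index):
    """Temporal multiplexing: even frames carry the left-eye view, odd
    frames the right-eye view; shutter glasses open the matching eye."""
    return "left" if frame_index % 2 == 0 else "right"

# e.g. a 120 Hz display alternating eyes gives each eye an effective 60 Hz
print([frame_eye(i) for i in range(4)])  # → ['left', 'right', 'left', 'right']
```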

Cave Automatic Virtual Environment systems

A CAVE system utilizes the walls (and possibly the ceiling and floor) of a room to create the virtual environment. An advantage of a CAVE system is that, combined with feedback systems such as surround sound and motion systems, it can create a very convincing environment. Another advantage is that other hardware can easily be integrated with the virtual environment; in the case of flight training simulators this could mean anything from simple flight instruments to a complete cockpit with real instruments. The drawback of such a system is that it requires a large physical space (compared to a VR HMD) and can be very expensive, both in initial cost and in maintenance.

Head-mounted displays

Instead of projecting the environment on walls as is done in a CAVE setup, a head mounted display uses a smaller screen (or screens) mounted on a headset close to the user's eyes. The illusion of depth is created through parallax, similar to temporally or spatially multiplexed displays, but instead of alternating the left and right view every other frame or superimposing them on the same screen, HMDs often show both views at the same time. This is done by only allowing each eye to see one of the views, so that the right eye cannot see the left view and vice versa. By tracking the user's movements (see section 2.1.2), the view shown on the HMD changes with those movements, allowing the user to look around in the environment.
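The two per-eye views can be pictured as two virtual cameras offset half an interpupillary distance (IPD) to each side of the tracked head position. The sketch below illustrates the idea for yaw-only rotation; the 64 mm IPD, the coordinate convention and the function name are assumptions for this example, not values from the thesis:

```python
import math

IPD = 0.064  # assumed interpupillary distance in metres

def eye_positions(head_pos, yaw):
    """Place the left/right virtual cameras half an IPD to each side of
    the tracked head position, along the head's local "right" axis."""
    x, y, z = head_pos
    rx, rz = math.cos(yaw), -math.sin(yaw)  # local right direction for this yaw
    half = IPD / 2
    left = (x - half * rx, y, z - half * rz)
    right = (x + half * rx, y, z + half * rz)
    return left, right

# head at eye height, looking straight ahead: the cameras end up 64 mm
# apart, and the slightly different images provide the parallax depth cue
left, right = eye_positions((0.0, 1.7, 0.0), yaw=0.0)
print(left, right)
```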

2.1.2 Input methods

This section presents some of the available methods for users to provide input to the virtual environment. The number of input methods is large, ranging from the traditional keyboard and mouse to hand gestures and body movement. The focus here is on the three areas of particular interest in this thesis: tracking, consumer devices and the flight control devices used in current simulators.


Tracking

An important part of creating a convincing virtual environment is allowing the user to move about in it. The number of ways a user can do this is called degrees of freedom (DoF). Tracking the user's movements can be done in two main ways: internal and external tracking. Internal tracking uses sensors such as accelerometers and gyroscopes to measure a movement, while external tracking relies on external cameras.
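A common way to combine the two sensor types in internal tracking is a complementary filter: the gyroscope reading is integrated for a fast but slowly drifting orientation estimate, while the accelerometer's gravity reading pulls the estimate back. A single-axis sketch (the 0.98 blend factor, function name and sample values are illustrative assumptions, not from the thesis):

```python
def fuse_pitch(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """One complementary-filter step for a single axis (degrees).
    gyro_rate: angular velocity from the gyroscope (deg/s)
    accel_pitch: pitch implied by the accelerometer's gravity vector (deg)
    """
    # trust the integrated gyro for fast motion, blend in a little of the
    # accelerometer reading to cancel the gyro's slow drift
    return alpha * (pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch

# a gyro with a constant 0.5 deg/s bias would drift 5 degrees in 10 s of
# pure integration; the accelerometer correction keeps the estimate
# bounded (near 0.245 deg with these values) instead
pitch = 0.0
for _ in range(1000):
    pitch = fuse_pitch(pitch, gyro_rate=0.5, accel_pitch=0.0, dt=0.01)
print(round(pitch, 3))
```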

Most often 3DoF in this context means that the user can control roll, pitch and yaw, thus being able to look around in the environment. With 6DoF the user is also capable of controlling movement along the X, Y and Z axes.

A third method, or rather an extension of internal tracking, is to use sensors not only in the HMD but also on the body part or item that needs tracking. Several approaches to this exist, the most common today being the hand-held controllers that often come bundled with commercial VR HMDs. Another approach is to use clothing items, such as gloves with sensors sewn in to measure any movements made. An advantage of this is that small movements are captured more easily than with other techniques such as external tracking (where a movement might be too small to be noticed, or obscured from the sensors) or the consumer devices mentioned below (which track the device itself rather than the actual movement). The disadvantage is that items such as gloves are more complicated both to manufacture and to use (as the sensors must be properly positioned with regard to the movement they are supposed to track) and, since more sensors are needed, most likely also more expensive than simpler hand-held controllers.

Consumer devices

Consumer devices refers to physical devices available to consumers rather than to an input technique in itself: some of the devices specially developed for use with VR HMDs use a method of tracking as described above, while more traditional devices such as joysticks or game pads provide input data in other ways.

There is a clear difference between tracking devices developed for mobile VR solutions and their desktop equivalents regarding the technology used for tracking. Most mobile devices use internal tracking both for the headset (since smartphones already have the sensors needed) and for the controllers themselves, while desktop devices tend to use external tracking.

Flight training simulator controls

The most basic controls needed in a flight simulator are the primary flight controls, allowing the pilot to adjust the throttle, the rudder (controlling yaw) and the stick controlling the ailerons and elevator (roll and pitch). Even though these controls can be combined in a single device, as with many consumer joysticks, they are often separated into their own physical devices to mimic the real controls of the simulated aircraft.

2.2 Flight training simulators

Ever since the first manned flight the importance of proper training has been recognized, and with it the need to simulate flying, allowing a pilot to become experienced with an aircraft before actually flying it. The first digital simulators were developed during the 1950s and 60s as contemporary computers became powerful enough to handle the requirements of flight simulation. These early simulators were, however, very diverse due to the lack of standards to follow during development. This eventually led to the


formation of the Flight Simulator Technical Sub-Committee (FSTSC) under the International Air Transport Association (IATA) in 1973 [11].

2.2.1 Pilot simulation

Pilot simulation refers to simulation from the pilot's perspective. In this thesis this will usually refer to training scenarios with actual pilots, but can also mean simulations done by engineers to test a new function.

The fidelity of a pilot training simulator can vary from a high fidelity CAVE-setup down to simulators with a simple cockpit mock-up and several monitors. Low fidelity solutions with only simple controls and a single screen are seldom used for this purpose but may be used in the role playing simulations described below.

2.2.2 Role playing

Role playing is when a simulator is used as support for, or opposition to, a pilot training session. The person flying is then not necessarily a real pilot, and the focus of the role playing station is not the training of its current user. Instead the role playing station can be used to control another aircraft, for example to support a pilot in a pilot training simulator during an exercise. What is done in a role playing simulation depends on the scenario in the training simulator; it may include friendly aircraft, such as other fighters or a tanker aircraft, as well as hostile forces. Normally other aircraft are flown by an AI, but in some cases manual control is wanted, and for such scenarios a role playing station is used.

The fidelity of a role playing station can technically be as high as a CAVE-setup but will in almost all cases be a very simple setup with a single screen and only the most necessary controls.

2.3 Health issues

Earlier research has shown that users immersed in a virtual environment are prone to developing symptoms or effects similar to motion sickness. While similar, the number of symptoms used to describe these effects is smaller than the number used to describe motion sickness [6]. The reason is that these symptoms, called simulator sickness or virtual reality induced symptoms and effects (VRISE), are based on the same symptoms used to describe motion sickness; it was however discovered that some of the motion sickness symptoms did not occur or were inappropriate [7].

2.3.1 Simulator Sickness Questionnaire

The Simulator Sickness Questionnaire (SSQ) is a method to measure the presence and severity of simulator sickness. It was derived from the Pensacola Motion Sickness Questionnaire (MSQ), which was developed during the late 1960s [6] to study motion sickness. During the development of the SSQ, over 1100 MSQs with data from 10 US Navy aircraft simulators were used [7]. The main reason for developing the SSQ was the difference in objectives: while simulator sickness is similar to motion sickness, there are differences that made the MSQ less than ideal. Some of the symptoms described in the MSQ almost never occurred when measuring simulator sickness, and others were removed as they were not appropriate for that purpose. One example of the latter is drowsiness. While important when measuring motion sickness, it did not necessarily indicate simulator sickness, as it was also reported without any other symptoms; in those cases the reason for the drowsiness could be tiresome simulator exercises rather than simulator sickness [7]. The SSQ


                          Weight
SSQ symptom               N   O   D
General discomfort        1   1
Fatigue                       1
Headache                      1
Eyestrain                     1
Difficulty focusing           1   1
Increased salivation      1
Sweating                  1
Nausea                    1       1
Difficulty concentrating  1   1
Fullness of head                  1
Blurred vision                1   1
Dizzy (eyes open)                 1
Dizzy (eyes closed)               1
Vertigo                           1
Stomach awareness         1
Burping                   1
Total                     [1] [2] [3]

Score: N = [1] × 9.54, O = [2] × 7.58, D = [3] × 13.92, TS = ([1] + [2] + [3]) × 3.74

Table 2.1: Computation of SSQ Scores

has been used to measure VRISE [3], as simulator sickness and VRISE are similar, even though the virtual environment might not be that of a simulator.

The SSQ includes 16 symptoms (as opposed to 28 in the MSQ), described in table 2.1. These are grouped in three symptom clusters: Nausea (nausea, stomach awareness, increased salivation and burping), Oculomotor (eyestrain, difficulty focusing, blurred vision and headache) and Disorientation (dizziness and vertigo) [7]. In table 2.1 these clusters are represented as columns (N, O and D). Symptoms included in each cluster are indicated with a 1 (as all symptoms have the same weight in the SSQ, as opposed to the MSQ).

Each symptom is rated 0, 1, 2 or 3 depending on how severe it is, where zero indicates that the symptom was not reported at all. The score for each cluster is obtained by adding the ratings of all symptoms in that cluster/column (denoted [1], [2] and [3] in table 2.1) and multiplying the sum by the cluster weight (9.54, 7.58 and 13.92 respectively). The total score is obtained by adding the three unweighted cluster sums and multiplying the result by 3.74.
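The scoring procedure can be sketched as follows; cluster membership follows table 2.1, while the function name, the dictionary layout and the lower-case symptom labels are this example's own choices:

```python
# cluster membership per table 2.1: which symptoms count towards
# Nausea (N), Oculomotor (O) and Disorientation (D)
CLUSTERS = {
    "N": ["general discomfort", "increased salivation", "sweating", "nausea",
          "difficulty concentrating", "stomach awareness", "burping"],
    "O": ["general discomfort", "fatigue", "headache", "eyestrain",
          "difficulty focusing", "difficulty concentrating", "blurred vision"],
    "D": ["difficulty focusing", "nausea", "fullness of head", "blurred vision",
          "dizzy (eyes open)", "dizzy (eyes closed)", "vertigo"],
}
WEIGHTS = {"N": 9.54, "O": 7.58, "D": 13.92}

def ssq_scores(ratings):
    """ratings: dict mapping symptom name to its 0-3 severity rating.
    Returns the three weighted cluster scores and the total score TS."""
    sums = {c: sum(ratings.get(s, 0) for s in syms)   # [1], [2], [3]
            for c, syms in CLUSTERS.items()}
    scores = {c: sums[c] * WEIGHTS[c] for c in sums}
    scores["TS"] = sum(sums.values()) * 3.74          # total uses the raw sums
    return scores

# mild eyestrain (1, cluster O) and moderate nausea (2, clusters N and D)
print(ssq_scores({"eyestrain": 1, "nausea": 2}))
# → {'N': 19.08, 'O': 7.58, 'D': 27.84, 'TS': 18.7}
```

Note that the total score multiplies the raw (unweighted) cluster sums by 3.74, as in the formula under table 2.1, not the already-weighted cluster scores.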


2.4 Measures of usability in Virtual Reality

For the evaluation it is important to have a clear definition of what usability is. At first glance it may seem simple: either something is usable or not. It is however much more complicated than that; two devices can be equally usable but for two different reasons. The definition mainly used in this thesis is the one given in [14], which describes six attributes that together describe the usability of a device or program: usefulness, efficiency, effectiveness, learnability, satisfaction and accessibility. There are other definitions, such as the one given in [8], which in a similar way describes five attributes: learnability, efficiency, memorability, errors and satisfaction. These attributes are very similar to some of those described in [14]. The errors attribute, for example, is identical to effectiveness, and learnability in [14] includes both learnability and memorability in [8].

2.4.1 Usefulness

The usefulness attribute describes how well a program or device enables a user to accomplish a goal or task. This is one of the most important attributes, as it is also a measure of how likely a user is to continue using the program or device. Even if a product gets high scores on all other attributes, a low score on usefulness makes it unlikely to be used, as it will not help the user accomplish his or her task. It is also one of the attributes most often forgotten in the early stages of developing a product. If not considered early, it can create problems during development, as developers are forced to try to take the user's point of view, or even to use themselves as user models, which might lead to a more system-oriented product rather than the user-oriented one that was wanted [14].

2.4.2 Efficiency

The definition of efficiency varies depending on the source. In [14] it is defined as a measurement of how quickly a user can complete a task correctly. This does not take the user's expertise with the system into account, which is the main difference compared to the definition in [8], where the user is considered to be an expert. An advantage of measuring a device or program according to the definition in [8] is that it also provides a way to measure a user's expertise: by continuously measuring a user's performance on a certain task, a learning curve for the device or program can be plotted. The disadvantage is that the user group needed for testing has to be considered experts, which can be a problem if what is being tested is new and no users have had time to learn to use it to a level where they can be considered experts [8].

2.4.3 Effectiveness

Effectiveness is a measurement of how well a program or device behaves compared to how a user expects it to. This is equivalent to the errors attribute described in [8]. Errors can be categorized depending on their severity. For example, a small user error that does not affect the result of a task, other than how much time it takes the user to complete it, should not be counted together with more catastrophic errors that can result in an incomplete or incorrectly finished task [8].

2.4.4 Learnability

Learnability describes how well a user is able to operate the program after a predetermined amount of training. It could also be a measurement of how well users who use the program infrequently are able to relearn how to use it [14], although that could also be its own attribute, memorability [8]. Separating learnability from memorability could lead to finding problems and weaknesses that would otherwise have been missed: a device could get a good grade in memorability but a low one in learnability, and by separating them, good ratings in one attribute cannot hide lower ratings in the other.

Learnability may seem similar to effectiveness, but where effectiveness is focused on how the program behaves, learnability is focused on how well a user can use it.

2.4.5 Satisfaction

Satisfaction is a subjective attribute referring to what the user thinks and feels about the device or program: whether it is pleasant to use. Satisfaction can be measured in several ways, either by written or oral questioning [14] or by more psychophysiological measures such as EEG, heart rate and blood pressure [8]. Satisfaction can be a very important attribute for some applications, especially those targeted at non-work situations, such as games and other entertainment applications. For these types of devices or applications, other attributes might not be as important or even relevant. Efficiency, for instance, might not play an important role when evaluating a game, as a user might want to spend time playing a game without caring much about how efficient he or she is [8].

2.4.6 Accessibility

The broadest definition of accessibility refers to how accessible a device or program is to a user, so that the user can complete a task. It might however also be interesting to narrow the definition down and, instead of all possible users, look at users with some sort of disability, as focusing on making a device usable for a person with a particular disability can make the device better for persons without that disability as well [14].

2.5 Related work

While there are several examples of similar research, few were done after the recent change in availability of affordable HMDs that came with the release of the Oculus Rift. This section covers other relevant research regarding the use of VR HMDs for flight training purposes.

In 2009, Yavrukuk et al. presented a helicopter simulator using a VR HMD, called the SCALAB Virtual Reality simulator [17]. This simulator was built with the purpose of creating a low-cost alternative to more advanced simulators by using a VR HMD instead of more expensive equipment. The simulator was built on FlightGear and used several computers to track movements and render a stereoscopic view to the user. For interaction, a joystick setup was used along with tracking gloves to track the user's hand movements. The total cost of the simulator setup shown in figure 2.1 was $15,000, far less than a CAVE setup. The conclusions were that the low-cost simulator had a high feeling of realism and that interacting with the simulator through the tracking gloves felt more natural than interacting with the real controls without any visual feedback. It was also found that the field of view, being somewhat limited, had room for improvement.

Figure 2.1: Hardware setup of SCALAB simulator.

In 2011, Aslandere et al. presented a generic virtual reality flight simulator (VRFS) [1]. The purpose of creating a generic simulator was to avoid having to build an expensive cockpit mock-up for every aircraft to be simulated. Earlier work had suggested using already available virtual tools created for prototyping and various evaluation purposes. However, as such tools were created for purposes strictly inside an aircraft, they lacked both an external environment and simulation capabilities. As both are required, the prototyping tools were not usable, thus creating the need for a generic simulator. The result was a simulator using an HMD with stereoscopic 3D and optical tracking. The optical tracking was able to register both head and hand movements, thus eliminating the need for a separate tracking system such as the gloves used in [17]. The VRFS was built as a distributed system (figure 2.2), both due to the high computing power needed (more than one computer could handle) and because network connectivity was needed when several users were using the same system. A disadvantage of this setup was the increased system latency.

The conclusions from the VRFS were that the simulator as such had potential and that the virtual hand interaction was suitable for visual prototyping. It was however not suitable for pilot training.

A similarity between these simulators is the hardware setup: in both cases several computers were used. In the SCALAB simulator setup (figure 2.1) three computers were used, one for an instructor station and the other two to render the right and left eye views respectively. In the VRFS prototype (figure 2.2), four were used for motion tracking, simulation and rendering the environment to the HMD.


3 Method

This chapter describes the method used. The method consisted of several parts:

• A technical comparison between the HMD used and a low-fidelity monitor-based simulator setup.

• An implementation of a flight simulator prototype using a VR HMD.
• Designing and preparing for user tests.

• Analyzing the results from the user tests and technical comparison.

3.1 Technical comparison

The purpose of the technical comparison was to provide key differences between the VR HMD and the hardware currently used, and to answer the first research question. To do so, the technical requirements of flight training simulators (which had to be met or exceeded) had to be clearly defined. There was an almost endless number of possible configurations, ranging from CAVE setups using a realistic cockpit mock-up down to desktop simulators with a single monitor and simple controls. As the technical requirements should be seen as a lower bound, a simpler single-monitor setup was used during the comparison. This setup used a 27" monitor with a 1440p resolution and a refresh rate of 60 Hz. There were however cases where a 1080p monitor had been used. Thus the preferred lower limit regarding resolution was set to 1440p, with 1080p as the absolute lower limit of what could be considered acceptable. In a similar way the lowest acceptable refresh rate was 60 Hz. In this case there was no explicitly specified preferred refresh rate, but it should be noted that a higher refresh rate could give a smoother experience. Exceeding the 60 Hz refresh rate should therefore be seen as something both positive and preferred for future development. Another useful measure is the field of view (FoV), but defining it is not as straightforward as defining a lower bound for the resolution or frame rate. First, which FoV is being


Figure 3.1: The relationship between the distance d and the angle α

d        FoV angle (2α)
100 cm   33.4°
50 cm    61.9°

Table 3.1: Calculated FoV angles depending on the distance from the monitor.

referred to, and second, whether the FoV is defined as the software FoV or as the viewing angle towards the monitor.

FoV can be measured in three ways: horizontally, vertically, or diagonally. Having both a high vertical and a high horizontal FoV is of value, but as the HMD screen (for each eye) has an aspect ratio close to 1 (8:9), the two values will be similar. This can be compared to the monitor, where the aspect ratio is different (16:9), leading to a larger difference between horizontal and vertical FoV. Because of this, the horizontal FoV will be measured, as the monitor will have a larger horizontal FoV than vertical, thus increasing the requirements that have to be met by the HMD.

The software FoV refers to the in-simulator camera view angle. This can be set arbitrarily for both monitor and HMD setups (as it is done in software), although some settings may be less useful than others. Since the software FoV can be changed and set to fit the user's preferences, it would not be meaningful to use it for a comparison between two hardware setups. Another measure is the viewing angle towards the monitor; this angle depends on the size of the screen and its distance from the user's eyes. While the user can be positioned at different distances from the monitor, a typical distance can still be defined with reasonable precision. Thus, the viewing angle was chosen as the measure when comparing the field of view in different setups (and unless otherwise stated, FoV will refer to the viewing angle). The lower bound of the FoV was obtained by calculating the angle α as shown in figure 3.1 and multiplying by 2. As the distance d could vary, two values were used: a maximum and a minimum distance between which a user might typically be located. The monitor width was 60 cm, resulting in the two FoV angles shown in table 3.1.

The lower bound for the HMD FoV was set to the highest value in table 3.1, which was obtained when d = 50 cm. Here, an upper bound can also be noted: while a high FoV is preferred, a FoV angle much beyond 210 degrees would be meaningless as this would exceed the capabilities of the human eye.
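The viewing-angle calculation behind table 3.1 can be sketched as follows (a minimal Python sketch; the function name is my own):

```python
import math

def viewing_angle_deg(screen_width_cm: float, distance_cm: float) -> float:
    """Horizontal viewing angle 2*alpha for a flat screen viewed head-on,
    where alpha = arctan((width/2) / distance), as in figure 3.1."""
    alpha = math.atan((screen_width_cm / 2.0) / distance_cm)
    return 2.0 * math.degrees(alpha)

# 60 cm wide monitor at the two typical distances from table 3.1
print(round(viewing_angle_deg(60, 100), 1))  # 33.4
print(round(viewing_angle_deg(60, 50), 1))   # 61.9
```

As expected, the shorter viewing distance gives the larger (and therefore binding) lower bound.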


Parameter          Absolute   Preferred
Resolution         1080p      1440p
Refresh rate (Hz)  60         >60
Field of view (°)  61.9       210

Table 3.2: Absolute and preferred technical specifications.

A summary of the requirements (both what was absolutely needed and what was preferred) that an HMD had to meet or exceed can be seen in table 3.2.

3.2 Implementation

The implementation consisted of integrating the HMD software development kit (SDK) with the flight simulator code. This could be done in several ways depending on how the simulator was written and how much time was available. The following ways of implementing the VR HMD were available:

• Using a game engine supported by the SDK
• Integrating the SDK with the rendering API
• Integrating the SDK with OpenSceneGraph

For the purpose of this thesis the first alternative was discarded. The advantage of using an SDK-supported game engine would have been that the HMD integration would be very simple, as everything needed would already be present in the game engine. It would however require a whole new simulator to be built, as there were no suitable open source simulators available. This disadvantage was too large for it to be considered a relevant solution, as building a decent flight simulator from scratch would require much more time than was available for this project. Thus only two methods were left: integrating the SDK at the rendering API level or (as was later chosen) integrating the SDK with OpenSceneGraph.

Initially the method chosen was to integrate the HMD SDK with a rendering API, in this case OpenGL, as the image generator (IG) needed to be platform independent and as such was already using OpenGL. The advantage of this was that there was official documentation on how to proceed with the implementation. There were however two main disadvantages. First, much more work would be needed to fully implement all functions, as everything had to be done manually (as opposed to using a supported game engine where much of the functionality is already implemented). Second, this method required a good knowledge of how the rendering was done in the simulator or image generator. This method initially seemed the most straightforward, as the disadvantages were not seen as too large and as it required no dependencies other than the simulator code and the SDK. It was however quickly found that the time required to get a good enough understanding of how the rendering was done in the image generator was so long that the risk of delaying the whole project had to be considered. As this was seen as unacceptable, another solution had to be found, which resulted in using another flight simulator instead of the image generator first chosen.

This simulator used OpenSceneGraph (OSG), a rendering middleware used to represent the graphical scene as a graph. This adds a layer of abstraction between the rendering API and the objects in the scene. The advantage of this was that instead of integrating the HMD SDK with the rendering API, it could be integrated into OSG as a set of cameras. These OSG cameras could then be integrated with the simulator's scene graph. Thus the implementation could be divided into two parts:

• Integrating the HMD SDK with OSG

• Integrating that result with the flight simulator.

The first part was solved by using a third party library which had functionality to add a set of two OSG cameras (left and right eye) and methods to manipulate them with the head tracking from the HMD SDK.

3.2.1 Implementation using OpenSceneGraph and third party libraries

The flight simulator chosen was FlightGear [4] which is an open source flight simulator using OpenSceneGraph [10].

The integration of the HMD with FlightGear consisted of two parts: integrating the HMD API with OSG, and integrating the result of the first part with FlightGear. For the first part a third party library was found that, with minor modifications, could be used [2]. This library supplied two things: first, a way to render to the HMD and, second, a way to manipulate the OSG cameras with the tracking done by the HMD.

The first part was done by using two Render To Texture (RTT) cameras. These two cameras pointed in the same direction but were spaced a couple of centimeters apart, creating a stereoscopic 3D effect when using the HMD. This distance, called the interpupillary distance (IPD), is different for every human being, but as the HMD had no adjustment for it the value was constant. The cameras would however still be unable to react to any movements made by a user wearing the HMD; to solve this, an update callback was added to connect the tracking part of the HMD API to the RTT cameras' view and projection matrices. The result from this third party library, henceforth called the HMD Viewer, could then be used in the second step, consisting of integrating the HMD Viewer with FlightGear.

The last part of the implementation was to integrate the HMD Viewer with FlightGear. This was in theory simple, as the HMD Viewer acted as the root node to which the whole scene graph was added. Thus, in theory, the only thing needed was to add the HMD Viewer as a child node to the root node in FlightGear. However, FlightGear used a separate struct for its cameras which included the OSG cameras as well as some FlightGear-related data. This data included a register to control whether certain terrain/model overlays, such as runway lights or wire frames showing interactable objects in the cockpit, should be rendered or not. This register was integrated with the HMD Viewer cameras, thus allowing the different overlays to be controlled in the same way as with the original FlightGear cameras. Although the prototype was working as specified, a decision was made to also add the update callback that let the HMD Viewer cameras react to a user's movements to the FlightGear main camera. This made that camera act as a mirror of the HMD view, without the stereoscopic 3D and barrel distortion. The reason this was added was so that the FlightGear main camera could show the same thing as the HMD user saw, both as help for the test moderator and as a view to show other people during demonstrations.

3.2.2 Simulator prototype

The implementation resulted in a prototype consisting of two parts: a computer running the modified version of FlightGear and the Oculus Rift HMD with its tracking system. A block diagram of the hardware setup of the prototype can be seen in figure 3.2. To this prototype any USB-compatible controls could be connected. During the user tests a flight stick with a separate throttle control was used, as this gave the participant better throttle control and could be placed to replicate the flight stick and throttle locations of the simulated aircraft.

Figure 3.2: Block diagram for the VR HMD simulator prototype.

The prototype had three views: the left and right HMD views as well as a view shown on a monitor. The latter was a modified version of the view normally present in the unmodified simulator. This third view affected performance negatively, but it was kept as it provided a way for the test moderator to see what the participant saw and did.

3.3 Preparations for testing

Before the actual user tests could start, a test plan had to be established and a pilot test using the test plan had to be performed to find out whether there were any errors in the plan or the process described in it. The test plan included the following parts:


• Main objectives of the user tests
• A description of the research questions
• Participants

• Test environment, including location of the test and equipment used

• The method used during testing, including a description of the tasks to be performed by the participants

• A description of what data was to be collected
• How the collected data was to be analyzed and presented

In addition to the test plan the following documents were created:

• An introduction script
• A test questionnaire

The purpose of the introduction script was to ensure that all participants got the same information. This included a background to the tests and information regarding the tasks and the available aircraft controls. The test questionnaire consisted of the form with the SSQ and the other questions the participant had to answer during the different tasks described in section 3.3.1.

3.3.1 Test plan

The test plan was the document describing the user tests. This included the objectives of the tests, the method used to complete the tests, as well as what data was to be collected.

Objectives and research questions

Since the test plan was connected to this thesis, the research questions were the same as described in section 1.3. The objectives of the test plan were to gather data on how the participants reacted to the differences between the HMD and screen based setups. This included both the technical differences found in the technical comparison as well as more subjective experiences regarding usability and immersion.

Participants

For the usability test to have any relevance, the group of test participants had to reflect the group of potential users defined in section 1.4. Thus, the test participant group consisted of both pilots and engineers.

Since pilot resources (the number of available pilots) were scarce, only one pilot was able to participate. The number of participating engineers was higher. Including the participants in the informal test sessions described further in section 3.4.1, the total number of participants was around 30. Out of these, the majority performed the more informal test.

Test environment

The test environment consisted of a personal computer running the simulator software, an Oculus Rift DK2 HMD with an IR camera for motion tracking, a flight stick and a throttle lever. The computer showed a mirrored view of what the test participant could see. This enabled the test moderator both to see what the participant saw and to interact with the simulator to set up the test and help the participant if needed, without having to wear the HMD.

Test plan method

The test consisted of six parts:

• A pre-test health check to assure that the participant wasn't sick or had any other problems that could affect the result of the SSQ
• Task 1: Reading instruments
• Task 2: Gentle flying
• Task 3: Maneuvering
• Task 4: Gentle flying
• Task 5: SSQ

Before the participant arrived, the simulator environment had to be set up in the following way:

First, the simulator had to be started with the chosen location and aircraft. Then the VR HMD had to be rotated and moved around so that the in-game cameras had seen everything around the aircraft. This was done to assure that the environment around the aircraft had been loaded into the computer's memory, as the initial loading affected performance (the simulator could freeze for almost a second several times during this phase). The aircraft then had to be positioned in the air and the simulation paused. This was to assure that the instruments the participant had to read in the first part showed something other than on-ground values, as some instruments would otherwise show obvious values which could affect the results negatively (the participant could, for example, easily determine the speed to be zero just by looking out of the cockpit canopy, even without being able to read the instrument values themselves). Pausing the simulator also assured that the instrument values were kept constant during the first part and that the aircraft didn't crash during the time between setup and when the participant took over the controls.

Since the aircraft could not be set up in exactly the same way for all tests (as a prerecorded flight was not preferable), the respective instrument values needed in the first part had to be noted down as a key to the answers given by the participant. After the setup was completed, the participant could be greeted and informed in accordance with the introduction script.

Before the first task, a check of the participant's health was done. This was to assure that the participant was at a normal health level, as this was a requirement of the SSQ, and that he or she didn't feel any of the symptoms described in the SSQ. This was done by asking the participant if he or she was at a normal health level and by filling in an SSQ.

The purpose of the first task was to evaluate the difference in pixel density found during the technical comparison in section 3.1. While the resolution in the two setups was similar, the physical screen sizes were not, which created a large difference in pixel density. The task consisted of five sub tasks where the participant had to find and read the value of the following instruments (for their location in the cockpit, see figure 3.3):

1. Indicated airspeed (IAS) in knots
2. IAS in kilometers per hour


Figure 3.3: Simulated cockpit environment with instruments used in Task 1 highlighted.

3. Fuel amount (presented as percent remaining)
4. Compass heading

5. Indicated altitude in meters

During this task the simulator was kept in a paused state so that the indicated values would not change; the participant was however still able to look and move around in the cockpit. These instruments were chosen for two reasons. First, some of the instruments were small (such as the IAS in knots, indicated by a small number in the HUD), so there was a risk that the participant would have problems reading the indicated value due to the lower resolution of the VR HMD compared to a normal screen. Second, the instruments showed important information that could help the participant during the next tasks, especially if the participant did not have prior experience with the simulated aircraft.
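The resolution concern can be made concrete with a rough pixels-per-degree (PPD) estimate. The monitor figures follow from the comparison in section 3.1; the per-eye resolution and FoV used for the HMD below (960 px over roughly 90°) are nominal DK2 figures and should be read as approximations, not measured values:

```python
def pixels_per_degree(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    """Rough angular pixel density, assuming pixels spread evenly over the FoV."""
    return horizontal_pixels / horizontal_fov_deg

# 1440p monitor (2560 px wide) at the worst-case 61.9 degree viewing angle
monitor_ppd = pixels_per_degree(2560, 61.9)
# Oculus Rift DK2, nominal: 960 px per eye over roughly 90 degrees (assumed)
hmd_ppd = pixels_per_degree(960, 90.0)
print(round(monitor_ppd, 1), round(hmd_ppd, 1))  # 41.4 10.7
```

Even as a rough estimate, this shows an angular pixel density several times lower in the HMD, which is why reading small instrument markings was expected to be harder there.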

The participant had to complete each sub task before starting the next. Each sub task was timed, and the answer was written down and compared with the keys collected during the setup. The time it took the participant to complete each sub task was later compared to the results of a control group, which performed task 1 on a normal computer screen with a resolution comparable to the VR HMD. Due to the design of the instruments, there had to be upper and lower limits to what would be considered a correct answer; these limits are shown in table 3.3. This was because some instruments had a low precision: the gauge showing the IAS in kilometers per hour, for instance, had markers every 50 km/h and numbers only every 400 km/h. It would be unfair, and could possibly even cause errors unrelated to what was actually researched, if a precise value was required.
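The correctness criterion amounts to a simple per-instrument tolerance check against the key noted during setup. A minimal sketch, with margins taken from table 3.3 (the dictionary key names are my own):

```python
# Margins of error from table 3.3; the instrument key names are illustrative only.
MARGINS = {
    "ias_kts": 0,
    "ias_kmh": 50,
    "fuel_pct": 10,
    "heading_deg": 5,
    "altitude_m": 50,
}

def answer_correct(instrument: str, answer: float, key: float) -> bool:
    """True if the participant's answer is within the allowed margin of the
    value noted down during setup. Heading wrap-around at 360 degrees is
    ignored here for simplicity."""
    return abs(answer - key) <= MARGINS[instrument]

print(answer_correct("ias_kmh", 430, 400))      # True: off by 30, margin 50
print(answer_correct("heading_deg", 188, 180))  # False: off by 8, margin 5
```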

Instrument        Margin
IAS (kts)         ±0
IAS (km/h)        ±50
Fuel amount (%)   ±10
Compass heading   ±5
Altitude (m)      ±50

Table 3.3: Margin of error allowed for each instrument.

The second task, called "gentle flying", was performed with the simulator in an unpaused state, with the participant in control of the simulated aircraft. The purpose of this task was to test the participant's ability to look around in the environment outside the cockpit while flying the aircraft. The task consisted of two parts: the participant first had to locate the nearby airport (the airport from which the aircraft started during setup) and then find all runway numbers. The airport used during the test was Václav Havel Airport just outside Prague in the Czech Republic. The airport has three physical runways (see figure 3.4) with one runway number in each direction.

Although runways 22 and 04¹ are currently not in use, the runway numbers were present in the simulator and were therefore used anyway, so as not to make the test unnecessarily confusing. The runway numbers were not the critical part, as they were just a way of forcing the user to look around outside. Runway 12 was however omitted due to an error in the simulated version: the texture of the crossing runways 24 and 06 was placed over the texture for runway 12, obscuring the actual number and making it impossible for the participant to see. The participant was recommended to keep the aircraft at an altitude of 1000 meters but could perform this task at whichever altitude they preferred, as long as they kept it as constant as possible and never descended below 500 meters, as this would increase the risk of the participant performing a controlled flight into terrain (CFIT).

Before starting the third task, the participant was asked if he or she had developed any symptoms during the earlier tasks. If simulator sickness symptoms were present and severe, the participant was given the explicit option to skip the remaining tasks involving the VR HMD and go directly to the last part of the test, where the SSQ was filled in. The reason the last two tasks involving the VR HMD could be skipped lies in their purpose: as both tasks were meant to examine whether the participant developed any lingering symptoms of simulator sickness/VRISE, they were redundant if the participant already had severe symptoms before starting them.

The purpose of the third task, "maneuvering", was to force the participant to perform more acrobatic maneuvers with the aircraft and thus increase the risk of developing simulator sickness symptoms/VRISE. The task consisted of two sub tasks. During the first, the participant was asked to test how the aircraft responded to large flight stick inputs. This gave the participant a basic knowledge of how the aircraft would behave before performing the second sub task, which included flying at a low altitude where the risk of a CFIT was high for a participant with no prior experience of flying the aircraft in a similar way. The participant was then asked to line up the aircraft with the starting point of the second sub task (point A in figure 3.5). The second sub task consisted of flying the aircraft according to a predetermined flight plan. This flight plan had 15 checkpoints, including start and end points, lettered A to O (figure 3.5). At each checkpoint the participant had to perform a maneuver (see table 3.4 for a detailed description), either to continue flying according to the flight path or to do some acrobatics to try to induce simulator sickness symptoms. The second-to-last checkpoint consisted of the participant circling the airport and was used as a transition to the fourth task.

¹Runways are named after their magnetic heading in decadegrees; thus the heading of runway 22 is 220° and that of runway 04 is 040°.


Figure 3.4: Václav Havel Airport runways.


CP  Maneuver                                    Notes
A   Entry point: line up with runway 24
B   Climb to 3500 metres
C   Split-S, level off at approx. 700 metres
D   Hard aileron roll (left or right)           Caution: altitude
E   Hard turn left
F   Hard turn right
G   Inside loop
H   Right turn, line up parallel to runway 30
I   Climb to 3500 metres
J   Split-S, level off at approx. 700 metres
K   Gentle aileron rolls (right)                Caution: altitude
L   Immelmann turn                              Might cause spatial disorientation
M   Right turn
N   Circle airport
O   End of task

Table 3.4: Task 3 flight path checkpoint descriptions.

The purpose of the fourth task, also called "gentle flying", was to allow any temporary simulator sickness symptoms to subside. The reason for this was that some symptoms measured in the SSQ could be normal after performing some of the maneuvers in the third task, without actually being symptoms of simulator sickness. For example, after performing a continuous hard aileron roll at low altitude and high speed, the participant may feel some discomfort and other symptoms described in table 2.1. These may however be unrelated to the VR HMD and simulator and more related to the performed maneuver. By allowing the user to fly more gently for a short period after these more acrobatic maneuvers, the most immediate and temporary effects might subside, leaving the longer lasting symptoms caused by the simulator or the VR HMD. As such, this task gave the participant a large degree of freedom to explore the virtual environment, as long as any maneuvers were kept at a casual level. The participant was given the opportunity to try to land the aircraft but could also fly more freely. If the participant did not choose to land, the fourth task had a time limit of no more than 5 minutes of flying. For time reasons the participant only got one landing attempt, which was done without ILS² or guidance systems other than hints from the test moderator.

After the completion of the fourth task, the simulation was ended and the participant could remove the VR HMD. The fifth and final task was the SSQ described in section 2.3.1. The participant was asked to rate the presence and, if present, the severity of each symptom in the SSQ in accordance with the scale described in section 2.3.1. The result for each participant was saved for later analysis.

Measures and presentation of data

The user test could be divided into three parts regarding the data that was collected.

First, the pre-test health check and the final task, which measured whether the participant developed any symptoms of simulator sickness. This was done with the SSQ described in section 2.3.1. Second, the first task, whose purpose was to evaluate whether the difference in screen resolution between the VR HMD and the compared simulator setup using a normal screen affected the user experience. The data collected during this task was each instrument value and the time it took the participant to read it. This task also had a control group performing the same tasks but with a normal screen instead of the VR HMD. A comparison between the two user groups provided two important things: correctness and efficiency. Correctness was a measure of how well the participants were able to read the instruments. Since some of the instruments were analogue gauges, where the exact value could be hard to specify even in perfect conditions, errors were expected in both groups. The interesting data was not the error rate per se but rather whether there was a noticeable difference between the two groups. Efficiency was a measure of how quickly the user was able to read each respective instrument value; again, the interesting data was not how quickly the participants in the VR HMD group were able to read the instruments but whether there was any difference between them and the control group.
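The group comparison for these two measures can be sketched as follows. The data structures and numbers below are hypothetical placeholders, not the actual test results; the real values come from the task 1 logs.

```python
from statistics import mean

# Hypothetical instrument-reading results: (correct?, seconds to read)
# per reading. Real values would come from the task 1 measurements.
vr_group = [(True, 4.2), (False, 6.1), (True, 3.8), (True, 5.0)]
control_group = [(True, 3.5), (True, 4.0), (False, 4.4), (True, 3.9)]

def correctness(readings):
    # Share of instrument readings that were correct (effectiveness).
    return sum(ok for ok, _ in readings) / len(readings)

def efficiency(readings):
    # Mean time to read an instrument value (efficiency).
    return mean(t for _, t in readings)

# The interesting figure is the difference between the groups,
# not the absolute rate within either group.
print(correctness(vr_group) - correctness(control_group))
print(efficiency(vr_group) - efficiency(control_group))
```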

Third, the data collected during the other tests, which was coupled to usability as described in section 2.4 (although correctness and efficiency above can be coupled to the usability attributes effectiveness and efficiency, respectively). This data was collected with a questionnaire where the test moderator first answered some factual questions connected to the participant's performance and then asked the participant to answer some questions connected to the usability attributes.

The questions answered by the moderator were designed to be as brief as possible to avoid creating any ambiguity in the answers. This meant that each question, although a motivation was also needed, could be answered either with a yes or no or with an already specified alternative. The questions answered by the test moderator were the following:

• Was the participant able to complete the tasks?

This question regarded the usefulness attribute, and while it may seem oversimplified it was coupled with, and mainly answered by, the data from task 1 described above; its purpose was mainly to catch any unforeseen errors made by the participant.

• Did the participant make any errors related to the VR HMD experience?

This question was partly answered by the data regarding effectiveness from task 1 but also included observations made during the other tasks. Examples of errors a participant could make were deliberately cheating or doing something that would be impossible in a real aircraft, such as moving outside of the cockpit boundaries (this was possible because the area where the tracking worked was larger than the simulated cockpit environment). The last question, regarding learnability, evaluated the participant's use of the extra degrees of movement provided by the VR HMD compared to the screen-based simulator setup. That is, did the participant use these extra degrees of freedom:

• Straight away or intuitively?
• After getting reminded of it?
• Not at all or very rarely?
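When the moderator has placed each participant in one of these three categories, the observations reduce to a simple tally. A minimal sketch, with illustrative category labels and counts:

```python
from collections import Counter

# Hypothetical learnability observations, one category per participant,
# as recorded by the test moderator. The labels are illustrative only.
observations = [
    "intuitively", "reminded", "intuitively",
    "rarely", "intuitively", "reminded",
]

tally = Counter(observations)
for category in ("intuitively", "reminded", "rarely"):
    print(f"{category}: {tally[category]}")
```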

While usability was in part answered by the evaluation in task 1 and the questions answered by the test moderator, the participant also got to add a more subjective opinion by answering the following questions. Regarding usefulness:

• Did the participant and test moderator agree on the result from the first task and other results regarding usefulness?


And, if the result were less than perfect:

• Did the participant have any input on why the result was what it was and what might have caused it?

The participant was also asked to evaluate the accessibility of the VR HMD setup. Although accessibility was not the main focus of the thesis (as mentioned in section 1.4), there were important parts regarding accessibility that had to be taken into consideration. These were things primarily connected to the physical HMD and how it could be adjusted to fit the person using it. The participant was asked the following questions:

• Did the headset fit in a comfortable way?

If not, what caused the problems?

• Was some adjustable setting missing or not good enough?

There were some limitations with the HMD that were known before testing, and thus the last question was of high interest. One of these known limitations was the interpupillary distance (IPD); while it was known that this could not be changed on the HMD, it was not known whether, and how much, it would affect a participant. Another was the distance between the lens and the wearer's eye. This distance could be adjusted, but it was not known if the adjustable settings were good enough. Would, for instance, a participant wearing glasses still be able to fit the HMD comfortably without having to remove the glasses?

The last questions the participant was asked were regarding the overall satisfaction with the VR HMD setup. The initial question was simple: What did the participant think?

• What was good?
• What was bad?

Were some things worse than others?

Were some things critical?

• If critical problems are fixed, does the participant think a HMD could be a useful addition to current simulators?

As a complement/replacement for current activities?

As an introduction of something not currently done?

While broad questions such as these can be dangerous to ask a participant (who might find it hard to give a good answer), the expertise of the participants has to be considered. The participants in this test were either actual pilots or engineers working with aircraft simulators on a daily basis; thus this sort of question was still useful, as the participants could give technical details on both limitations of the HMD and any advantages or disadvantages of the setup. None of the questions above regarded efficiency. This attribute was answered by the evaluation performed during task 1.

Then the participant was asked to rate, if at all possible, the immersion of the VR HMD setup compared to other simulators such as the low fidelity one used as comparison in this thesis as well as the much more advanced CAVE setups also used. This was done by asking the
