Design and Evaluation of a 3D Map View using Augmented Reality in Flight Training Simulators

Linköpings universitet SE–581 83 Linköping

Linköping University | Department of Computer and Information Science

Master thesis, 30 ECTS | Datateknik

2018 | LIU-IDA/LITH-EX-A--18/041--SE

Design and Evaluation of a 3D Map View using Augmented Reality in Flight Training Simulators

Design och utvärdering av en kartvy i 3D med hjälp av förstärkt verklighet i flygträningssimulatorer

Philip Montalvo

Tobias Pihl

Supervisor: Sahand Sadjadee
Examiner: Erik Berglund


Upphovsrätt

Detta dokument hålls tillgängligt på Internet – eller dess framtida ersättare – under 25 år från publiceringsdatum under förutsättning att inga extraordinära omständigheter uppstår. Tillgång till dokumentet innebär tillstånd för var och en att läsa, ladda ner, skriva ut enstaka kopior för enskilt bruk och att använda det oförändrat för ickekommersiell forskning och för undervisning. Överföring av upphovsrätten vid en senare tidpunkt kan inte upphäva detta tillstånd. All annan användning av dokumentet kräver upphovsmannens medgivande. För att garantera äktheten, säkerheten och tillgängligheten finns lösningar av teknisk och administrativ art. Upphovsmannens ideella rätt innefattar rätt att bli nämnd som upphovsman i den omfattning som god sed kräver vid användning av dokumentet på ovan beskrivna sätt samt skydd mot att dokumentet ändras eller presenteras i sådan form eller i sådant sammanhang som är kränkande för upphovsmannens litterära eller konstnärliga anseende eller egenart. För ytterligare information om Linköping University Electronic Press se förlagets hemsida http://www.ep.liu.se/.

Copyright

The publishers will keep this document online on the Internet – or its possible replacement – for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/hers own use and to use it unchanged for non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.

© Philip Montalvo, Tobias Pihl


Abstract

The ability to visualize and manipulate an airspace is an important tool for an instructor controlling a flight simulator mission. Usually, airspaces are observed and manipulated through 2D and 3D views on a computer screen. A problem with using computer screens is spatial perception, which is significantly limited when observing a 3D space. Technologies such as AR and VR provide new possibilities for presenting and manipulating objects and spaces in 3D, which could be used to improve spatial perception.

Microsoft's HoloLens is a see-through head-mounted display which can project 3D holograms into the real world to create an AR experience. This thesis presents a prototype for displaying a 3D map view using HoloLens, designed to improve spatial perception while maintaining high usability. The prototype has been evaluated through heuristic and formative evaluations and was well received by its potential users. The prototype has also been used to derive suggestions for improving spatial ability in similar applications.


Acknowledgments

Several people have helped realize this project, and we would like to extend our gratitude to them. We would like to thank our examiner Erik Berglund for his help and valuable feedback during this project. We want to thank our supervisor at Saab, Stefan Furenbäck, for his help, important feedback and for making this thesis possible. We would also like to thank all of our testers at Saab, who have provided crucial feedback. We want to thank Peter Sköld for his help and continuous feedback throughout the development. We also want to thank our supervisor Sahand Sadjadee for his feedback during our thesis presentation. Finally, we would like to thank our opponents Patric Lundin and Henrik Lundström for their feedback during our presentation and on this thesis.


Contents

Abstract
Acknowledgments
Contents
List of Figures
List of Tables

1 Introduction
   1.1 Motivation
   1.2 Aim
   1.3 Research Questions
   1.4 Delimitations

2 Background
   2.1 Existing System
   2.2 Microsoft HoloLens

3 Theory
   3.1 Virtual Reality
   3.2 Augmented Reality
   3.3 AR Technology and Devices
   3.4 Spatial Perception
      3.4.1 Spatial Mapping
      3.4.2 Geographic Information Systems
   3.5 AR and Education
   3.6 Usability Metrics
      3.6.1 Knowability
      3.6.2 Operability
      3.6.3 Efficiency
      3.6.4 Subjective Satisfaction
   3.7 Usability in AR
      3.7.1 Simulator Sickness
      3.7.2 The State of Human-Computer Interaction Principles in AR
      3.7.3 HCI Methodologies
         User Task Analysis
      3.7.4 Evaluating Usability
         Evaluation Issues
         Evaluation Methods
      3.7.5 Test Subjects

4 Method
   4.1 Pre-Study
   4.2 Iterative Development
   4.3 Design
      4.3.1 Use Cases
         Planning
         Real-time Observation and Manipulation
         Reviewing and Educational
      4.3.2 Requirements
      4.3.3 User Task Analysis
   4.4 Implementation
      4.4.1 Unity3D
         HoloLens Toolkit
      4.4.2 Graphics
      4.4.3 Graphical User Interface
      4.4.4 Input
      4.4.5 Features
         Manipulation of Objects
         Voice Commands
         Additional Information
         Pathing and Tracing
         Radar
   4.5 Testing and Evaluation
      4.5.1 Methods
      4.5.2 Heuristic Evaluation and Cognitive Walkthrough
      4.5.3 Formative Evaluation
         Test 1: Spatial Perception
         Test 2: Areal Perception
         Test 3: Functionality and Input
      4.5.4 Post-Test Interview
      4.5.5 Post-Test Questionnaire

5 Results
   5.1 Implementation: First Iteration
      5.1.1 Design
         Models
         Terrain
         Textures and Colors
         Lighting and Shadows
      5.1.2 Features
         Manipulation
         Voice Input
         Waypoints
         Ground Projections
         Trails
         Radar
         Behaviour
         Tooltip
         Placement and Scaling
      5.1.3 GUI
         Entities
         Help
         Selected Object Panel
   5.2 Evaluation: First Iteration
      5.2.1 User Tests
         Spatial Awareness
         Areal Awareness
         General
   5.3 Implementation: Second Iteration
      5.3.1 Design
         Models
         Waypoints
         Radar
         Grid
         Terrain
      5.3.2 Features
         Time Management
         Trails
      5.3.3 GUI
   5.4 Evaluation: Second Iteration
      5.4.1 User Tests
         Spatial Awareness
         Areal Awareness
         General
      5.4.2 Questionnaire

6 Discussion
   6.1 Results
      6.1.1 Movement and Spatial Awareness
      6.1.2 Terrain Controls
      6.1.3 GUI Visibility and FOV
      6.1.4 Adjust
      6.1.5 Selection
      6.1.6 Functionality of Entity Buttons
      6.1.7 Comfort and Simulation Sickness
      6.1.8 Usefulness in Flight Simulation
      6.1.9 New Technology Influence
      6.1.10 Questionnaire
         AR and VR Familiarity
         Knowability
         Operability
         Efficiency
         Subjective Satisfaction
         Future
   6.2 Method
      6.2.1 Requirements and Functionality
      6.2.2 Iterative Development
      6.2.3 Formative Evaluation
         Test Users
         Issues with Testing in AR
         Test Structure
      6.2.4 Interviews and Questionnaire
      6.2.5 Heuristic Evaluation and Cognitive Walkthrough
      6.2.6 Replicability
      6.2.8 Validity
   6.3 Sources of information
   6.4 The Work in a Wider Context

7 Conclusion
   7.1 Future Work

List of Figures

2.1 Microsoft HoloLens
4.1 Iterative Development Process
4.2 Test 1 - Spatial perception
4.3 Test 2 - Areal perception
5.1 Early stage of the prototype in the first iteration
5.2 Comparison tooltip and manipulation spheres
5.3 GUI for adding entities
5.4 Selected object panel
5.5 Second iteration
5.6 Waypoints and radar areas
5.7 The GUI in its default state
5.8 Time panel
5.9 Questionnaire Result - Average and Standard Deviation

List of Tables

4.1 Qualitative Evaluation Methods
4.2 Questionnaire Questions
5.1 Questionnaire Abbreviations

1 Introduction

1.1 Motivation

Recent developments in technologies such as augmented reality (AR) and virtual reality (VR) have introduced new ways to present and manipulate information. AR can be presented in multiple ways: either using computer or mobile screens to project objects into the real-world environment, or using headsets with lenses to project the objects directly into the user's view. Previously, only the type of AR using computer or mobile screens had been widely adopted by consumers, as mobile phones are now AR-capable through their screens and cameras. As an alternative to displaying applications and programs on different types of screens, it is now possible to present them directly in the user's field of view (FOV). Objects projected into the user's view are called holograms and can be either 2D or 3D objects in the real-world 3D space.

A technology which enables the use of AR and holograms through a wearable headset is Microsoft HoloLens. This new headset is what Furht et al. [10] would call a see-through head-mounted display (HMD). HoloLens not only lets the user view holograms, it also offers the possibility to manipulate holograms using hand gestures captured by multiple cameras. This offers a new way to view and interact with applications in 3D, rather than through a regular computer screen controlled with a mouse and keyboard. A use case for this new way of interaction is in flight simulators, where missions are observed on a computer screen by an instructor. In Saab's flight training simulators, missions are planned by the instructor in a 2D map view presented on a computer screen. The map view and entities on the map can be manipulated to, for example, move entities or place new ones. The instructor leading the training session can then follow the mission in the map view as the mission is executed. However, with technologies such as AR it could be possible to present this map as a hologram, projecting the map as a 3D model into the real world. The ability to view the map in 3D instead of 2D could lead to enhanced spatial perception, as the observer can see the actual 3D positions of objects, which is not possible in 2D. This thesis presents a prototype of an AR application for viewing a 3D map, how it has been evaluated, and how it has been designed to improve spatial perception.

1.2 Aim

The aim of this thesis is to develop a prototype which presents a 3D map view as a complement to Saab's 2D maps, to improve spatial perception and to present suggestions for future development of maps in AR. Another aim of this thesis is to evaluate the technology's possibilities as well as to present a roadmap for future use and development of AR and HoloLens in flight simulators. An additional aim is to determine in which other areas of flight simulation AR can be used, and what benefits might be gained from that.

1.3 Research Questions

To achieve the aims of this thesis, the following research question is formulated:

• How should a map view in Saab’s flight training simulator be designed, using AR, to enhance spatial perception?

1.4 Delimitations

This prototype will be developed as a standalone application, meaning that it will not be compatible with the real mission training simulator. By limiting the system to a standalone app, more effort can be devoted to testing basic interaction with the new technology. Additionally, the effect of spatial sounds, another important part of spatial understanding, will not be considered.

2 Background

This chapter provides context for the thesis based on the provided requirements, expectations and specifications.

2.1 Existing System

This thesis explores the possibilities of developing a complement to the map view in Saab's Instructor and Operator System (IOS). The IOS is a system where an instructor can observe and manipulate the environment in which a pilot in a training simulator is operating. An important part of the IOS is the map view: it allows the instructor to see the positions of entities in the simulation as well as to manipulate, add and remove entities. The instructor has a set of different views and controls through which the simulation can be observed and modified. These consist of a 2D map view, a 3D map view, the pilot's controls and a simulation control panel. The map is primarily presented in the 2D view; the 3D view is used less frequently. The prototype developed for this thesis is meant to explore possibilities to improve certain aspects, such as spatial perception, compared to the current map view in the IOS. The prototype has a very specific target user group; however, the concept of 3D maps in HoloLens is not limited to the IOS user group. As the prototype is developed as a complement to existing software, it already has a group of users familiar with the expected functionality. An analysis of the existing system provides a foundation for the functionality, features and user tasks of the AR prototype.

2.2 Microsoft HoloLens

Microsoft HoloLens is an HMD capable of presenting holographic 3D objects in the real world. HoloLens is a self-contained system, meaning it does not require another device to function. The headset contains see-through holographic lenses, one for each eye. It has four environmental understanding cameras, one depth camera and a 'regular' camera. It has an Intel Atom processor and a Holographic Processing Unit (HPU), together with 2 GB of RAM. It weighs just under 580 grams. As stated in the definitions by Azuma et al. [2], AR must be real-time interactive, which HoloLens handles through hand gestures or voice commands. It is also possible to use a hardware clicker provided with HoloLens for input without gestures. HoloLens can be seen in figure 2.1. [16]

3 Theory

This chapter aims to introduce some common theories in AR, VR and related technologies.

3.1 Virtual Reality

VR is the concept of simulating the physical presence of the user in a virtual computer-generated environment. However, to what extent real and virtual objects are mixed is not always clear. Milgram et al. [29] state that mixed reality (MR) is a subset of VR and that AR is the best known form of MR. They also define a "virtuality continuum", an axis along which VR technologies are categorized based on the extent to which they mix real and virtual content. At the leftmost end of the axis is the unmodified real environment and at the rightmost end is a purely computer-generated virtual environment (VE). On the left side of the axis (towards the real environment) sits AR, mixing virtual objects into the real world. This means that MR is anything in between the extremes of the axis.

However, VR commonly refers to the experience in which the user is completely immersed in a virtual world, using technologies such as the HTC Vive1 or the Oculus Rift2. The term 'AR' is often used for any MR experience. Even though this thesis is mainly concerned with AR, it can be beneficial to keep the different subsets of VR in mind when discussing to what extent an experience is real or computer-generated. The term immersive VR or AR is used to differentiate experiences which are not presented on a computer screen but instead in, for example, a head-mounted display.

3.2 Augmented Reality

AR is a technology which augments a view of the physical world by adding additional information to it. Furht et al. [10] define AR as "a real-time direct or indirect view of a physical real-world environment that has been enhanced or augmented by adding virtual computer generated information to it". AR can be interactive, allowing manipulation of the computer-generated objects, and combines real and virtual objects, in contrast to a VE which presents the user with a purely virtual environment. AR can therefore bring the user information which is not present in the user's real-world environment by augmenting it. Azuma et al. [2] provide a widely adopted definition of AR. It states that AR must combine real and virtual objects in a real environment. As in the definition by Furht et al., it must run interactively in real time. It must also align the real and virtual objects with each other; it must therefore not only place virtual objects in the real world but also consider the positions of real-world objects. For example, if we have a table in the real world, we would expect a computer-generated solid object placed on that table to stay there and not fall through.

1 https://www.vive.com/eu/product/
2 https://www.oculus.com/rift/

3.3 AR Technology and Devices

Although this thesis concerns using AR for visual augmentation, AR is not limited to sight and visual perception [2]. Nor is AR only used with HMDs; other alternatives for displaying AR include devices such as hand-held displays and projection displays. A popular form of AR is through smartphones. This technology uses the camera to capture the real world and the screen to display the camera's view with an augmented overlay. A systematic overview by Dey et al. [7] shows that in publications between 2005 and 2014, AR usage through hand-held displays is increasing while AR based on HMDs is decreasing.

There are many different ways to interact with an AR system; Furht et al. [10] show some examples such as gloves, wireless wristbands and smartphones. However, they do not bring up hand gestures (without gloves). This type of input had not been standardized prior to the release of HoloLens, which might change in the future, similar to how the touch interface changed interaction with phones. Game-driven VR headsets handle input through special hand controllers which are used to track the position of the user's hands and for input. However, some VR gaming experiences can also be controlled through a regular game controller, such as the Xbox 360 controller. These controllers are typically used to, for example, move the player around in a game world. Since the user needs to be immersed in a VE for movement to be controlled through a controller, this cannot be applied in AR.

Furht et al. [10] also discuss the computational power required to drive an AR system. They claim that an AR system requires a powerful central processing unit (CPU) and a large amount of random access memory (RAM). They also state that mobile computing setups currently consist of a laptop carried in a backpack, something they hope to see replaced. For most stationary VR systems, such as the HTC Vive or the Oculus Rift, the computations are done on a computer. In HoloLens, however, computations are done in the headset itself, creating a mobile system without a backpack, laptop or other setup limiting mobility.

In their paper from 2001, Azuma et al. [2] point out some issues regarding AR displays at that time, problems which may still be present today, 15 years later. The problems range from cost, size and weight to see-through displays not having enough brightness, resolution or field of view to blend real and virtual objects. As new HMDs are released to the consumer market, some of these issues may have been resolved. One of the recently released headsets is HoloLens, which appears to be one of the strong candidates to popularize mixed reality [11].

3.4 Spatial Perception

Spatial perception is the ability to perceive and visually understand spatial relationships with respect to the person's body orientation. A possible benefit of using AR to visualize a 3D environment is enhanced spatial perception. Kruijff et al. [21] and Luboschik et al. [23] address issues and cues in spatial perception. They discuss natural spatial cues, such as shadows and occlusion, which help improve spatial perception in AR, but also artificial cues such as grids aligned with the user's view. Issues with a limited field of view in HoloLens may make tasks more difficult but are not necessarily associated with poor depth estimation. Nor is poor graphical fidelity associated with poor depth perception. Common ways to increase graphical fidelity in computer games, such as anti-aliasing, might not be desired features in AR since they can lead to perceptual distortions. Examples of other issues with spatial cues in AR are reflections, latency, individual differences and poorly chosen color schemes.

McIntire et al. [27] point out some positive and negative effects of stereoscopic 3D display viewing compared to more traditional 2D displays. They also point out that 2D displays are unable to portray two important spatial cues. The first cue is self-motion parallax, which is the movement of the user in space over time. The second is binocular disparity, referring to the benefit of extracting depth information using two eyes. This means that a 2D screen will not benefit spatial perception even if it is viewed with two eyes. They conclude that 3D displays are a promising way to enhance spatial perception, primarily for tasks that are multidimensional in nature, for difficult, complex or unfamiliar tasks, and for tasks where other visual cues are lacking. The authors also describe positive and negative aspects of spatial 3D in the context of an air traffic control system. They find that the understanding of spatial relationships between objects can be improved in specific tasks that are naturally 3D, but see no improvement in tasks without a real need for a 3D understanding of the view. McIntire et al. [26] have also created a comprehensive review of user performance with stereoscopic 3D displays in different tasks. They find mostly positive results for tasks such as judging positions and distances; finding, identifying and/or classifying objects; and manipulating real or virtual spatial objects. However, tasks related to learning, training and planning had little benefit from stereoscopic 3D.
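The falloff of binocular disparity with distance can be illustrated with a small calculation (an illustrative sketch, not taken from the thesis; the function name and the 63 mm interpupillary distance are assumptions):

```python
import math

def binocular_disparity_deg(distance_m, ipd_m=0.063):
    """Angular disparity (in degrees) between the two eyes' lines of
    sight to a point at distance_m, assuming a typical interpupillary
    distance of about 63 mm."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

# Disparity shrinks roughly as 1/distance, so stereoscopic depth
# information is strongest for nearby objects:
for d in (0.5, 2.0, 10.0):
    print(f"{d:5.1f} m -> {binocular_disparity_deg(d):.3f} deg")
```

At half a metre the disparity is around 7 degrees, while at ten metres it has dropped below half a degree, consistent with the observation that stereoscopic 3D helps most for close-range spatial judgments.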

A. Schnabel et al. [34] find similar results in an experiment where participants built 3D cubes from 2D plans, screen-based 3D and immersive 3D. They find that the task performance of building the cubes is significantly better using the 2D plans, but that spatial perception is improved using 3D VEs. The paper is from 2003 and the authors also have concerns about the state of the technology, stating that the immersive VE tools are limited. However, despite these limitations, the immersive VE significantly contributes to the understanding of 3D volumes and spatial perception.

3.4.1 Spatial Mapping

A core feature of HoloLens is spatial mapping, which provides a representation of real-world surfaces in front of the user. This allows developers to create a mixed reality experience. Without spatial mapping, holograms would not consider any real-world surfaces, which could break immersion. When designing with spatial mapping in mind, applications should work while 'floating' in the air as well as in a space with real surfaces. Working with real surfaces offers possibilities such as placing user interface (UI) elements on walls, putting down objects on tables or hiding holograms behind real objects.
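Placing a hologram on a real surface reduces to intersecting a ray from the user's gaze with the reconstructed geometry. The sketch below is a simplified, hypothetical illustration (not the thesis's or HoloLens's actual implementation, which raycasts against the full spatial-mapping mesh): it intersects a gaze ray with a single horizontal plane, such as a tabletop.

```python
def ray_plane_intersection(origin, direction, plane_y):
    """Intersect a ray with the horizontal plane y == plane_y.
    Returns the hit point (x, y, z), or None if the plane is behind
    the ray or the ray is parallel to it."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if abs(dy) < 1e-9:           # ray parallel to the plane: no hit
        return None
    t = (plane_y - oy) / dy      # ray parameter at the intersection
    if t < 0:                    # plane lies behind the ray origin
        return None
    return (ox + t * dx, oy + t * dy, oz + t * dz)

# Gaze from head height (1.7 m) angled downward toward a table
# surface detected at 0.7 m; the hit lands roughly at (0.0, 0.7, 1.0):
hit = ray_plane_intersection((0.0, 1.7, 0.0), (0.0, -1.0, 1.0), 0.7)
```

A real spatial-mapping raycast works the same way in principle, but tests the ray against every triangle of the scanned mesh and returns the nearest hit.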

3.4.2 Geographic Information Systems

Geographic information systems (GIS) are computer systems displaying spatial data on Earth's surface. GIS can be used to display any kind of information at a specific location; the location can be expressed in, for example, latitude and longitude. Visualization of geospatial data is an important area of application for VR [15]. VR combined with GIS can provide new ways to analyze geospatial data. New interface technologies, environments and interaction methods may change how we acquire spatial knowledge, as 3D GIS visualizations have been moving towards VEs. A paper by Hedley et al. [15] presents questions which are relevant for this thesis, such as whether 3D AR maps could improve spatial decision-making over 2D maps and how interactions would be different.
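To place entities expressed in latitude and longitude onto a local 3D map, the geographic coordinates must be converted into map-space offsets. One lightweight approach, sketched below purely for illustration (this is not the thesis's implementation; the function and the example coordinates are assumptions), is an equirectangular approximation around a reference point, adequate for the limited area of a training mission:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres

def latlon_to_local(lat, lon, ref_lat, ref_lon):
    """Approximate (east, north) offsets in metres from a reference
    point, using an equirectangular projection. Good enough for small
    areas; the error grows with distance from the reference."""
    d_lat = math.radians(lat - ref_lat)
    d_lon = math.radians(lon - ref_lon)
    north = d_lat * EARTH_RADIUS_M
    east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(ref_lat))
    return east, north

# Example with rough coordinates: offset of central Linköping from a
# reference point near the university campus.
east, north = latlon_to_local(58.4109, 15.6216, 58.3984, 15.5767)
```

The returned offsets can be scaled directly into the map hologram's local coordinate system; a production system would use a proper map projection so that distortion stays consistent with the 2D map.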

3.5 AR and Education

There have been studies on using AR to assist in learning, claiming that AR can be used as an effective learning tool for specific topics [18]. A generalization is that interactive digital simulations are effective learning tools, and since AR is an interactive digital simulation it is also potentially beneficial for learning [32]. A recent study by Ibáñez et al. [19] suggests that using AR in a physics course had a positive impact on students' motivation. Billinghurst et al. [4] show that education is one of the most common AR application areas and that AR can help students learn more efficiently compared to traditional 2D desktop interfaces.

In his review, Radu [32] presents different areas in education and learning where AR is or is not effective. The author has also constructed a questionnaire which can be used to evaluate the learning potential of an AR medium. The review shows that HMD AR has many positive effects on learning and on displaying 3D and spatial information. The more negative aspects of AR include visualizing text and the limited general availability of HMDs. The review shows several areas in education where AR could be more effective than traditional learning media such as books, video or desktop experiences. One benefit is improved content understanding, specifically spatial understanding of, for example, geometrical shapes, chemical structures or astronomy. The author also points out that AR has been shown to be effective for long-term memory retention as well as for student motivation and collaboration. A final example of learning benefits using AR is improved physical task performance, indicating that solving educational 3D tasks may give better results than traditional media in areas such as task completion time. Negative aspects, such as attention tunneling, are also presented. The author shows that some studies report a problem with AR taking up too much attention and that cuing for the user's attention could be hazardous. Another negative possibility with educational AR lies in usability issues, where multiple papers have indicated that usability problems can lead to poor performance in AR systems, making traditional media more effective at given tasks.

Wu et al. [36] summarize the current status of AR in education and present multiple areas where AR can be beneficial. However, they also state that many of the empirical studies in their paper have limitations in their evidential validity. They also find that there are issues related to AR in education, such as technological, pedagogical and learning issues. The authors conclude that while many of their analyzed papers display potential for AR in education, its effects seem shallow compared to other, more mature technologies used in education. The authors suggest that there should be more evidence for AR's effect on education through more controlled and comprehensive studies with larger samples.

Martín-Gutiérrez et al. [24] present a paper about improving the spatial abilities of engineering students in an educational environment. The authors present an augmented book which displays 3D virtual models designed to help students perform visualization tasks that develop their spatial ability. The authors conclude that their application has a positive impact on the students' spatial abilities and performs well in regards to usability. The spatial abilities were measured through tests and usability was measured qualitatively using a questionnaire.

Kaufmann et al. [14] summarize three evaluations of AR used for education in geometry. The evaluations were done in an application which allows manipulation of 3D objects using tangible interactions and an HMD. The summary covers formative evaluations including more than 100 students over 6 years. The application's development process has been iterative, and the application has therefore improved over these years with the help of user feedback. In the final version of the application, usability was rated higher than in its desktop counterpart, but there were technical issues in robustness. The paper concludes that cognitive overhead can limit the educational potential of the application, and that it is thus important to focus the students on the actual task with little overhead.

3.6 Usability Metrics

This thesis investigates how a system should be designed in order to achieve better spatial perception. However, merely creating a system that allows the user to do so would be insufficient. The prototype developed for this thesis is based on an existing user experience, which means that a certain set of functionality is expected from a replacement system. In order to test whether the new system is sufficient in terms of usability, some usability metrics were considered.

There are several types of usability and not all of them are covered in this thesis. The usability metrics used in this thesis are the metrics Alonso-Ríos et al. [1] have termed knowability, operability, efficiency and subjective satisfaction.

3.6.1 Knowability

Knowability is defined as a measure of how well a user can learn, and also remember, how to use a system. Knowability contains a few subattributes.

How easily the system is perceived correctly is defined as clarity. Clarity is divided into three groups. Clarity of elements is the ease with which elements are perceived. Clarity of structure refers to having elements organized so that they are easily understood. Clarity in functioning describes the way user tasks are performed and how system tasks are executed. How well a user is able to remember the elements and functionality of a system is defined as memorability. Another subattribute is helpfulness, which is how well a system can provide help to users when they are not sure how to use it. The interface must change as the device changes, but an effort should be made to ensure that parts of the interface remain recognizable, such as layouts and element design. [1]

3.6.2 Operability

As described by Alonso-Ríos et al. [1], operability is the "capacity of the system to provide users with the necessary functionalities". In this thesis, it means that the users should be able to experience functionality similar to what already exists in the IOS. Low operability, in this case, could mean that important functionality has been removed in the AR prototype. There are several subcategories of operability. One of them is precision, which is the capability of the system to perform its tasks correctly. This is an important metric for the thesis: if the AR system performs its tasks worse than the existing system, it is probably not useful at all. Completeness is another operability metric, concerning whether users can perform all intended tasks in the system. If the AR system has less functionality than the existing system, its completeness would be low.


Another subattribute of operability is flexibility, which concerns adaptiveness and controllability. This concerns the system's ability to adapt to the users' preferences and needs. In this case, it can be seen in two different ways. First, in the AR application itself, adapting within the context of AR and usability in AR. Second, between the existing system and the AR system, it could be considered as flexibility to be able to choose whether to use the existing system or the AR system.

3.6.3

Efficiency

Efficiency is a measure of what a system is able to produce in proportion to the resources invested [1]. Efficiency contains four subattributes. Efficiency in human effort is the capacity of a system to produce results in return for user effort. Efficiency in task execution time describes how much time is invested by the user and the time taken for the system to respond. Efficiency in tied up resources refers to both material and human resources. Efficiency in economic costs concerns the cost of human resources, the cost of the system, the cost of required equipment and the cost of consumable items.

In this thesis, efficiency has been taken into consideration when drawing conclusions about how well an AR system compares to the existing system. It is important to measure the efficiency and weigh the result together with the other metrics in order to determine the usefulness of the system.

3.6.4

Subjective Satisfaction

Subjective satisfaction is the system’s ability to create interest and a feeling of pleasure [1]. If the AR prototype were less satisfying than the existing system, it would be a problem, especially if the AR system already had worse operability. Therefore, subjective satisfaction is an important metric for this thesis and needs to be kept at a high level. In the paper by Alonso-Ríos et al. [1], subjective satisfaction has two subcategories, interest and aesthetics. Both are relevant for this project and have therefore been taken into account when designing the experience.

3.7

Usability in AR

Although there are many similarities between desktop computer applications and AR applications, there are some usability issues exclusive to VR and AR applications, such as simulator sickness.

Dey et al. [7] present a systematic review of several usability studies in AR between 2005 and 2014. The paper concludes that most studies are user studies and that the most popular method of collecting data is questionnaires. This led to subjective rating being the most used way of measuring usability. The authors suggest that there are opportunities for more types of evaluation methods rather than just filling out questionnaires. They also suggest that a wider range of the population be used for testing, since current test populations are dominated by young, educated males.

Kaufmann et al. [7] have written a paper summarizing usability evaluations of an educational AR application using a HMD. The evaluations were done in the early 2000s, but the results are transferable to modern AR experiences. The authors improved their application through several iterations according to user feedback. In the end, the AR application’s usability was rated higher than the counterpart desktop application, suggesting that AR applications have the possibility to surpass desktop applications in some cases. The authors address problems with latency and tracking in their HMDs, which caused difficulties for the users. However, in their case, only a few percent of the users had strong discomfort symptoms, such as headache and eye strain. The authors identify low frame rates, lag and badly fitted HMDs as possible reasons. Although all of these issues are worth considering, it is also possible that some might be resolved simply by using modern AR hardware, such as HoloLens.

3.7.1

Simulator Sickness

As mentioned in section 3.7, Kaufmann et al. showed that simulator sickness had a negative impact on usability. A paper from 2000 by LaViola [22] discusses problems with ’cybersickness’, which has symptoms similar to motion sickness. The paper brings up multiple factors that could be causing cybersickness, both individual factors in humans such as gender, age and illness, and technological factors. The technological factors that the author brings up are position tracking errors, lag and flicker. The issues are not only problematic because they can cause different types of symptoms (eye strain, headache, nausea etc.) but because they are difficult to predict individually. However, this paper is now several years old and the author suggests that some of these technological issues could be resolved by better technology in the future. Yet, it is still important to consider these factors even if HoloLens may have improved some of them. McIntire et al. [26] also suggest, from their comprehensive review, that comfort is important for the user’s acceptability and should thus be taken into consideration.

3.7.2

The State of Human-Computer Interaction Principles in AR

Although AR is not a brand new technology, most of the development in AR is technology-driven and not focused on human-computer interaction (HCI) and usability. In 2004, Gabbard et al. [20] conducted a survey which concluded that only 3% out of 1100 papers addressed HCI. In 2007, Dünser et al. [9] investigated how well HCI principles related to AR application design. They state that there is clearly a need for more usability research in the AR field and that most AR research at this time was trying to overcome technical problems rather than design-related problems. A conclusion is that no specific design guidelines exist for AR and that designers of AR systems will need to develop specific solutions to their problems. However, that does not mean that the general HCI guidelines should be ignored; some might still be useful, although many of them assume interaction through a mouse, a keyboard and a computer screen.

Dünser et al. [8] present a survey of evaluation techniques used in AR papers between 1993 and 2007. They classify the publications’ evaluation types into four different categories, describing what they evaluated: perception, user performance, collaboration and usability. The latest result, from 2007, shows that user performance is the most frequent evaluation type, followed by usability. The papers were also classified by evaluation method, describing how they performed the evaluation. The methods were split into the categories: objective measurement, subjective measurement, qualitative analysis, usability evaluation techniques and informal evaluations. Considering the entire time span, objective measurement is the most popular method in almost every year. However, the latest result shows that all the methods are well represented, with the exception of usability, which has far fewer publications than the rest. The authors point out that the number of AR user evaluations appears to be increasing.

3.7.3

HCI Methodologies

In 1999, Gabbard et al. [12] presented an iterative method for developing usable and useful interfaces for VE prototypes. The method consists of four steps which are to be repeated.


1. User task analysis

2. Expert guideline-based evaluation
3. Formative user-centered evaluation
4. Summative comparative evaluation

The first step describes all the tasks and subtasks a system needs to perform its intended function. This step involves identifying the user’s needs and goals. The second step involves identifying potential usability issues by comparing against current user interface standards or guidelines. In our case, this could be done by comparing against current HoloLens applications and Microsoft’s design guidelines [28] regarding MR. The third step encourages the developers to involve users early and keep them involved throughout development. In this step, the developers create user task scenarios based on the user task analysis, which the users then try to perform. The developers then observe the users trying to perform the tasks and collect usability data, which is used for comparison and for suggesting design changes. The last step involves trying out different interaction designs and comparing them to each other. It should be the same user tasks, but performed in different ways through different interaction designs.

User Task Analysis

User task analysis is very important as input to early system design [30]. It should be based on the users’ goals and how they currently approach the task. It is used to define usability requirements and to suggest ways of meeting these requirements [25]. The goal of the task analysis is to provide new input to the user interface design and to improve usability. The task analysis should focus on information processing limitations and comparisons to the current models. The user task analysis can be done in the following steps:

1. Gather background information about the work

2. Collect and analyze data from the existing system and how it is being used
3. Construct and validate a model of the users’ current task organization

3.7.4

Evaluating Usability

There are issues and methods specific to evaluating usability in VR and AR environments, as described by Bow et al. [5]. Although the paper is from 2002 and some of the discussed issues have been solved by newer technology, some of the issues and evaluation methods are still very relevant.

Evaluation Issues

Some issues which Bow et al. [5] address relate to the fact that AR is still a relatively new technology, where some potential users might never have used a HMD. Therefore, results may vary a lot simply based on how experienced the user is with AR technologies. Some technologies might not allow for multiple viewers; if the users are wearing HMDs, it might be a problem to observe what the user is actually seeing. Other problems include fatigue and simulator sickness, which can vary a lot from user to user. Due to this potential problem, the authors suggest that some kind of subjective measurement, such as user comfort, be used during the evaluation. Another problem is the lack of published and verified design guidelines; however, HoloLens does have defined guidelines [28].


Evaluation Methods

Bach et al. [3] determine that while there are few evaluation methods available specifically for MR, it is possible to use some general evaluation methods. The suggested methods are questionnaires, interviews, inspection methods and user testing. Bow et al. [5] also suggest different evaluation methods for VR and AR, such as cognitive walkthroughs, formative evaluations, heuristic expert evaluation, post-hoc questionnaires, interviews and comparative evaluation. Some methods do not require user involvement and some require experts; additionally, the methods can be either formal or informal. In this case, informal methods requiring few users are preferred due to the limited number of users and possible testers for the system developed for this thesis. The methods considered most relevant for this thesis are therefore cognitive walkthroughs, formative evaluations and interviews.

Heuristic Evaluation: In this method, an interface is evaluated by applying relevant heuristics and/or design guidelines to it. This method requires no users, but instead requires ’usability experts’ or ’evaluators’ to apply the guidelines. Nielsen [30] recommends 5 evaluators, with a minimum of 3, to find a high proportion of the usability issues in an interface. Each evaluator first performs the evaluation individually, so that every evaluator remains unbiased until all evaluators have finished. The outcome of a heuristic evaluation is a list of usability problems.
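Nielsen's recommendation of 3-5 evaluators can be illustrated with the problem-discovery model by Nielsen and Landauer, where the proportion of problems found by a group of independent evaluators is 1 - (1 - λ)^i. The sketch below assumes the literature-average per-evaluator detection rate λ = 0.31; the actual rate varies between projects:

```python
def proportion_found(evaluators, problem_rate=0.31):
    """Expected proportion of usability problems found by a number of
    independent evaluators, following Nielsen and Landauer's model
    1 - (1 - lambda)^i. The default rate of 0.31 is a literature
    average, not a figure measured in this thesis."""
    return 1.0 - (1.0 - problem_rate) ** evaluators

for n in (1, 3, 5):
    print(f"{n} evaluator(s): {proportion_found(n):.0%} of problems found")
```

With these assumptions, 3 evaluators find roughly two thirds of the problems and 5 evaluators roughly 84%, which matches the recommendation of 3-5 evaluators.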

Cognitive Walkthrough: This method is performed by stepping through common user tasks. The usability is evaluated by the system’s ability to support each step. The authors claim that this is especially useful for evaluating usability for novice users of the system and for users in an exploratory learning mode.

Formative Evaluation / Testing: This method is observational and empirical, and involves having users perform task-based scenarios and evaluating usability by the users’ ability to perform the scenarios. Formative evaluation can be both formal and informal, the informal type providing qualitative results such as incidents, comments and general reactions. The formal version can produce quantitative results but requires more users.

User testing is a fundamental part of usability [30]. It provides direct information about how users interact with the system and what problems they might have with it. Tests can be conducted by having users perform test tasks, which should be representative of what might be in the final system. These tasks should cover a sensible amount of important functionality in the system. The tasks should have a result which the user is informed about. The test session should begin with easier tests in order to boost the user’s morale, and the final test should make the user feel some kind of accomplishment. To collect data from the test, quantifiable measurements can be used, such as task completion time, number of errors, time spent recovering from errors or the number of times the user expresses joy or frustration. Test data can also be collected by more qualitative methods such as thinking aloud [30], which Nielsen calls one of the most valuable usability engineering methods. Thinking aloud only needs a few users and involves having the test users say whatever they are thinking while they are using the system and performing tasks. It is useful for identifying users’ misconceptions and major problems, as it shows what they are doing and why while they are doing the task, rather than trying to figure out later why they did a specific thing. Thinking aloud could be an especially important testing method for AR due to the difficulties of observing an AR user.
Interview and Questionnaires: Interviewing is a technique used to evaluate usability by talking to users after a test session. Questionnaires are a data collection method which provides a standardized way to collect information [33]. Questionnaires can be carried out by having users fill out questions with a predetermined set of answers, for example, 1-5. Interviews can go into a deep level of detail, providing information a post-hoc questionnaire might not have. Interviews are a good way to gather subjective reactions and opinions, which could include things such as any discomfort caused by simulator sickness or fatigue caused by the head-worn AR system. Interviews may have a set of prepared questions and/or be more open-ended. Interviews can happen both during a demonstration of the system and after a testing session.
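With a predetermined 1-5 answer scale, questionnaire data can be summarized per usability metric. A minimal sketch, where all response values are invented for illustration and grouped by the metrics used in this thesis:

```python
from statistics import mean, stdev

# Hypothetical 1-5 Likert responses from five test users, grouped by
# the usability metrics used in this thesis (all values are invented).
responses = {
    "knowability": [4, 5, 3, 4, 4],
    "operability": [3, 4, 3, 3, 4],
    "efficiency": [4, 4, 5, 3, 4],
    "subjective satisfaction": [5, 4, 4, 5, 4],
}

for metric, scores in responses.items():
    print(f"{metric}: mean {mean(scores):.1f}, spread {stdev(scores):.2f}")
```

The mean indicates how well each metric was rated overall, while the spread hints at disagreement between users, which an interview can then follow up on.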

3.7.5

Test Subjects

In order to conduct a proper evaluation by testing, a target group for testing should be carefully chosen. Gerard [13] suggests that novice users and expert users find similar problems; however, novice users tend to find more problems than expert users. Therefore, it could be beneficial to perform user tests using subjects both with and without experience of the existing system in order to reveal as many problems as possible with the new interface.

3.8

Prototyping

Nielsen [30] states that it is desirable to perform usability evaluation on prototypes, which can be developed much faster, and at a lower cost, than a final system. A prototype can therefore change a lot during development in response to user feedback. Nielsen also describes two types of prototyping, vertical and horizontal. Vertical prototyping reduces the number of features compared to the full system, meaning that the system will have few, but fully functional, features. Horizontal prototyping means having a lot of features, but with shallow functionality behind each of them. There are also additional ways of speeding up prototyping, such as: not emphasizing efficiency, accepting poorer code quality, and using simplified algorithms and models.

The prototyping design process is inspired by the paper by Buchenau et al. [6] describing "Experience Prototyping". They state that a prototype is useful for identifying issues and design opportunities; additionally, prototypes are created to achieve a representation of a possible complete experience. They propose two interesting questions for prototypes, which were considered when creating the prototype for this thesis. The questions are very relevant to a prototype intended to replace or complement an existing experience.

• What is the essence of the existing user experience?

• What are essential factors that our design should preserve?

The second question raises an important point: which parts of the existing system should behave the same (in a positive way), and what can be improved with this new technology? It could be very important for a user of the existing system to feel familiarity in the new prototype, while observing and interacting with it in a new and possibly improved way.


4

Method

This chapter presents the methodology used when designing the prototype. It presents the methods and guidelines which were used during the development’s design phases, how the implementation was done and how the prototype was tested and evaluated.

4.1

Pre-Study

In order to start designing a prototype, there needs to be some fundamental understanding of the topics required for this thesis. Therefore, a pre-study was conducted on the necessary topics, such as AR, usability (specifically usability in AR), HoloLens, development tools and Saab’s IOS. Additionally, there was some time to get to know the equipment, such as HoloLens, the development tools and the IOS, before designing and implementing the prototype.

4.2

Iterative Development

A circular iterative process was used to develop the prototype. This particular process includes four steps, as seen in figure 4.1. The process is meant to be repeated multiple times to achieve a better result based on feedback from previous iterations of the loop. In this thesis the process was repeated twice, starting in the design phase.

4.3

Design

The design phase describes the prototype’s functionality, features and requirements and how they were decided. The section details what should be included in the prototype and the thought process behind it.

4.3.1

Use Cases

In this section the potential use cases of the AR system are presented. Multiple use cases can be relevant and one does not exclude any other. Although the main purpose is to better understand and observe a 3D space, this can be done in various phases and in multiple areas of flight training simulators.


[Figure: a cycle through the four phases Design, Implementation, Testing and Evaluation]

Figure 4.1: Iterative Development Process

Planning

A potential use of the prototype is in the planning stage of a flight training mission. The prototype could then be used to place entities and to set their behaviors. A potential benefit of performing the planning part of the mission in AR is the ability to quickly build a scenario in 3D with natural and accurate 3D manipulations. Currently, the entities are placed in 2D and height is set separately. A possible drawback is limitations in the graphical user interface (GUI) and functionality, meaning that the AR prototype would only have simplified functionality and not provide functionality as extensive as the existing system.

Real-time Observation and Manipulation

An obvious use case is observing missions in real-time, allowing for a 3D representation of an airspace during a mission which can be both observed and manipulated. Potential benefits from this real-time observation are improved perception of the airspace and the ability to perform simpler manipulations of the airspace and its entities in 3D. Additionally, if two instructors with HoloLenses are observing the same airspace, it is possible to gain multiple perspectives, something which is not possible when two instructors are observing the same computer screen.

Reviewing and Educational

A third potential use is reviewing a mission in retrospect for educational purposes. This would be used to gain insight into the airspace as a mission played out. The functionality would be very similar to the real-time observation but with the addition of suitable tools for moving backward and forward through the mission’s timeline. This use case would have limited manipulation, as changing or adding entities in a mission that has already been carried out would make little sense. Therefore the focus in this use case would be visualization. This use case could greatly benefit from having two HMDs, one for the instructor and one for the pilot or student, both observing the same mission.


4.3.2

Requirements

When designing the prototype, it was important to focus on the most essential functionality. After a walkthrough of the IOS with an expert, it became clear that there were certain key features which the interactive map view requires. The expert pointed out which features were the most important to the map view, and the first thing mentioned was the ability to get an overview. The map view presents easily interpreted information about things such as airplane locations, radar distances and flight paths. The second key feature mentioned was the drag-and-drop functionality. This kind of interaction is used for placing new entities or moving existing ones. Drag and drop can also be used to manipulate the map, allowing the user to both pan and zoom. Based on the expert’s input and Saab’s expectations, the following features were set as fundamental requirements for the prototype:

• The ability to view a 3D map as a hologram.
• The ability to place and manipulate entities.

• The ability to view height data, flight paths and trails.

4.3.3

User Task Analysis

Here we describe some common user scenarios which are used to shape the design of the prototype and evaluate its performance. The user tasks are meant to represent core functionality of the system. These tasks are also tightly coupled with the user tests, as the users will be performing some of the user tasks in various forms during the tests. Based on the requirements, interviews and features of the IOS and HoloLens, the following user tasks were developed:

• Observe an aircraft and its relevant data, such as name, altitude and velocity.
• Create and place aircraft and waypoints.
• Manipulate entities’ positions.
• Remove entities.
• View radar.
• Edit the map’s size.
• Move the map.
• Use voice commands.
• Change behavior of aircraft.
• Pause and resume the simulation.
• Rewind and fast-forward time.

4.4

Implementation

The implementation section details how the features and requirements in the design phase have been realized. It describes in detail how the system was built and what decisions have been made to fulfill the requirements and user tasks.


4.4.1

Unity3D

The prototype in its entirety, and thus the implementation of the holographic representation of a map, was done in Unity3D (Unity), a popular and widely used engine for game development and other 3D applications. Unity allows development for several target platforms and is currently recommended by Microsoft for developing HoloLens applications.

HoloLens Toolkit

HoloLens Toolkit (HoloToolkit) is a collection of resources developed by Microsoft which aims to accelerate development for HoloLens and other mixed reality platforms. It was used to lower development time and create consistency with HoloLens guidelines and other HoloLens applications. The toolkit was used to implement, for example, buttons, sliders, scaling manipulation functionality, cursors and other aesthetics. The toolkit also provides some useful additions to Unity, such as the ability to preview HoloLens applications directly in Unity’s preview window without having to build the application to HoloLens or the HoloLens emulator, which is a much more time-demanding process.

4.4.2

Graphics

Since the aim of this work is not to create a graphically advanced simulation, time spent on visual elements was kept to a minimum. These visual elements include things such as models, textures and effects. Even though the visuals were kept simple, it was desirable to maintain a consistent theme, meaning that different objects throughout the simulation should have a similar style. Since HoloLens is restricted in its hardware, it had to be ensured that the graphics were limited to the extent that a high frame rate could be maintained.

4.4.3

Graphical User Interface

Since HoloLens is limited in its field of view, it is important to think about the layout of the GUI. As the guidelines state, GUI elements should be grouped into the user’s field of view [28]. This was taken into account when designing the GUI. The GUI should be well connected to the expected features, and it should be clear and intuitive how to navigate it to find them. There are no specific requirements on the visual design of the GUI; it should however be simple, easy to understand and consistent in its style. If the GUI is developed according to the requirements above, it should provide a high level of knowability and learnability.
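Whether a GUI element actually lies within the user's field of view can be checked with simple vector math: the angle between the gaze direction and the direction to the element must fall within half the field of view. A platform-agnostic sketch, where the 30-degree horizontal field of view is an assumed approximation of first-generation HoloLens:

```python
import math

def in_field_of_view(head_pos, head_dir, element_pos, fov_deg=30.0):
    """Return True if an element lies inside a viewing cone around the
    user's gaze direction (head_dir is assumed to be a unit vector).
    The 30 degree field of view is an assumption, not a measured value."""
    to_element = [e - h for e, h in zip(element_pos, head_pos)]
    norm = math.sqrt(sum(c * c for c in to_element))
    if norm == 0:
        return True  # element at the head position is trivially "visible"
    cos_angle = sum(d * c for d, c in zip(head_dir, to_element)) / norm
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= fov_deg / 2

# An element almost straight ahead is inside the cone; one 45 degrees
# off to the side is not.
print(in_field_of_view((0, 0, 0), (0, 0, 1), (0.1, 0, 2.0)))  # True
print(in_field_of_view((0, 0, 0), (0, 0, 1), (2.0, 0, 2.0)))  # False
```

A GUI layout routine could use such a test to reposition or group elements that would otherwise fall outside the narrow viewing cone.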

4.4.4

Input

One restriction of HoloLens is the limited input options. Aiming is done using a cursor which follows the user’s head movements (gaze). Possible ways to interact with HoloLens are a pinching gesture in the air in front of the device (air tap), an air tap followed by a dragging motion, gaze, and voice commands. It was important to keep these limitations in mind during development.

4.4.5

Features

In this section we describe certain features that should be included in the prototype. Some issues presented by the limitations of HoloLens, as well as relevant guidelines, are described along with how the prototype should be developed considering these.

Manipulation of Objects

One requirement for the prototype was to be able to manipulate objects. The manipulation included interactions such as selecting as well as dragging and dropping entities. Users of the prototype were most likely to have a lot of experience in using a mouse to manipulate a cursor. However, in this prototype the cursor was manipulated using gaze. Seeing that HMDs were not as widely used as a standard computer setup at this time, aiming the cursor would probably be less precise. Therefore, it had to be ensured that elements which the user can interact with were modeled at a size which is comfortable to aim at using gaze.
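How large an interactive element needs to be in world space can be derived from the visual angle it should subtend: s = 2 d tan(θ/2) for distance d and angular size θ. A small sketch, where both the 2-degree angular target size and the 2 m viewing distance are design assumptions for illustration:

```python
import math

def required_target_size(distance_m, angular_size_deg):
    """World-space diameter an element needs in order to subtend a given
    visual angle at a given distance: s = 2 * d * tan(theta / 2).
    The comfortable angular size itself is a design assumption."""
    return 2.0 * distance_m * math.tan(math.radians(angular_size_deg) / 2.0)

# A target meant to subtend 2 degrees at an assumed viewing distance of
# about 2 m needs to be roughly 7 cm across.
print(f"{required_target_size(2.0, 2.0) * 100:.1f} cm")  # 7.0 cm
```

Sizing gaze targets by visual angle rather than fixed world size keeps them equally easy to hit regardless of how far away the map hologram is placed.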

Voice Commands

When developing for HoloLens, it is possible to use voice input. Since the input is as limited as it is, it was desirable to include voice as a way to expand the interaction with the prototype. However, voice input should only be used for simple keywords such as "pause" and "resume" to minimize voice recognition errors. According to the guidelines, functions controlled by voice should also be designed so that it is always possible to reverse the action [28], thus making it possible to correct mistaken voice commands.
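The reversibility requirement can be met by pairing each keyword with an inverse action and keeping a history of executed commands. A minimal, platform-agnostic sketch; the Simulation class and its methods are hypothetical stand-ins for the prototype's actual simulation control:

```python
class Simulation:
    """Hypothetical stand-in for the prototype's simulation control."""

    def __init__(self):
        self.running = True

    def pause(self):
        self.running = False

    def resume(self):
        self.running = True


class VoiceCommands:
    """Keyword commands paired with inverse actions, so that a
    misrecognized voice command can always be reversed."""

    def __init__(self, simulation):
        self.history = []  # stack of inverse actions
        self.commands = {
            "pause": (simulation.pause, simulation.resume),
            "resume": (simulation.resume, simulation.pause),
        }

    def on_keyword(self, keyword):
        action, inverse = self.commands[keyword]
        action()
        self.history.append(inverse)

    def undo_last(self):
        if self.history:
            self.history.pop()()


sim = Simulation()
voice = VoiceCommands(sim)
voice.on_keyword("pause")
print(sim.running)  # False
voice.undo_last()   # reverse the mistaken "pause"
print(sim.running)  # True
```

On HoloLens itself the keyword recognition would come from the platform's speech APIs; the point of the sketch is only the undo stack that makes every voice action reversible.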

Additional Information

In flight simulators, there is a lot of information that is interesting to display, especially from the instructor’s point of view. In this prototype such information included names, altitudes and velocities. If all such information were displayed simultaneously, the prototype would quickly become cluttered. Therefore, it was preferable to let the user choose what information should be displayed at what time. This could be achieved by showing the user additional information on special events, such as when gazing at or selecting entities.

Pathing and Tracing

One requirement was to be able to set paths for aircraft. This could be done with waypoints. When two or more are combined, waypoints can describe a path for an aircraft to follow. To implement this, waypoints needed a visual representation. The relation between waypoints also needed to be clear, so that a user can determine in which order aircraft fly towards them. It was also interesting to visualize earlier positions of an entity, which could be done by implementing a trail. This way the user can analyze its movement and more clearly determine its direction.
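The two ideas above, an ordered waypoint list and a bounded trail of recent positions, can be sketched with a plain list and a fixed-length buffer. All names here are illustrative, not the prototype's actual classes:

```python
from collections import deque

class TrackedAircraft:
    """Illustrative aircraft with an ordered waypoint path and a bounded
    trail of recent positions (oldest samples drop off automatically)."""

    def __init__(self, waypoints, trail_length=50):
        self.waypoints = list(waypoints)  # visited in list order
        self.next_index = 0
        self.trail = deque(maxlen=trail_length)

    def record_position(self, position):
        self.trail.append(position)

    def current_target(self):
        if self.next_index < len(self.waypoints):
            return self.waypoints[self.next_index]
        return None  # path completed

    def waypoint_reached(self):
        self.next_index += 1


aircraft = TrackedAircraft(waypoints=[(0, 0, 5), (10, 0, 5)], trail_length=3)
for p in [(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, 3)]:
    aircraft.record_position(p)
print(list(aircraft.trail))       # only the 3 most recent positions remain
print(aircraft.current_target())  # (0, 0, 5)
```

Rendering the trail is then a matter of drawing a line through the buffered positions, and the waypoint order follows directly from the list index.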

Radar

It should be possible to see the coverage of radars for aircraft and other entities. This could be achieved by highlighting areas with more or less complex geometry. The user should be able to clearly tell when entities are located inside such an area. To achieve this, the geometry must be transparent in some way, allowing the user to still see entities while they are inside these areas.
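Determining whether an entity is inside a radar coverage volume reduces to a containment test. A sketch assuming the simplest case, a spherical coverage volume; real radar shapes may of course be more complex:

```python
import math

def inside_radar(radar_pos, radar_range, entity_pos):
    """Simple spherical radar coverage test: an entity is inside the
    coverage volume if its distance to the radar source is within the
    radar's range. A real radar lobe may need more complex geometry."""
    return math.dist(radar_pos, entity_pos) <= radar_range

print(inside_radar((0, 0, 0), 50.0, (30, 0, 20)))  # True
print(inside_radar((0, 0, 0), 50.0, (60, 0, 20)))  # False
```

The same test can drive the highlighting: entities for which it returns True are marked as detected while the transparent coverage geometry is rendered around them.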

4.5

Testing and Evaluation

This section describes how data about the usability metrics and functionality was collected. Testing and evaluation were used to further develop the prototype. In order to produce a result that meets the specified requirements and achieves the best possible usability, the prototype was tested and changed continuously throughout the period of the thesis.

4.5.1

Methods

Finding a larger group of users was challenging since the testers had to be found within Saab. The number of expert users of the existing system or AR was too small to conduct a proper quantitative test, and therefore, performing qualitative tests with a smaller number of participants was preferable in this case. However, to obtain some measurable quantitative data, an evaluation using a questionnaire was performed as well for the final testing. Although a large set of test users is preferred for this kind of evaluation, we asked each final user to fill out the questionnaire to get as much data as possible. All of the qualitative methods require between 0 and 5 users [30], which is a reasonable number of users for this thesis, as shown in table 4.1. As the first two evaluation methods do not require any users, they were performed internally without any user interaction, both before and after user test sessions.

Table 4.1: Qualitative Evaluation Methods

Method                                            Min req. users
Heuristic Evaluation                              None
Cognitive Walkthrough                             None
Formative Evaluation/Observation/Thinking Aloud   3
Interviews                                        5

4.5.2

Heuristic Evaluation and Cognitive Walkthrough

Throughout the entire development cycle, heuristic evaluation and cognitive walkthroughs have been applied. When new functionality had been implemented, a comparison could be made with any relevant guidelines. However, as described in the theory section, there is a lack of guidelines for designing user interfaces for AR HMDs. Although some guidelines from scholarly papers exist, it is not certain how well they apply to HoloLens. Therefore, most design principles used in this thesis come from Microsoft’s HoloLens design guidelines [28] and best practices, as well as some AR guidelines which were found applicable. Additionally, general HCI guidelines and, to some extent, guidelines and design principles from VR experiences can also be used.

As the prototype is developed for HoloLens, it is natural to follow Microsoft’s guidelines. Some of the fundamental interaction guidelines for HoloLens that were followed are: not transporting the user, letting the user adjust content, using short animations, and avoiding abrupt movement or camera acceleration. Microsoft also recommends grouping UI elements together in the center of the user’s view, to avoid users failing to find them due to both HoloLens’s field of view and the user’s attentional cone of vision. [28]

Some more general HCI principles that were applied and evaluated come from Norman et al. [31]. They describe some attributes that were considered when designing the user interface:

• Visibility
• Feedback
• Consistency
• Non-destructive operations
• Discoverability
• Scalability
• Reliability

As features and functionality were developed alongside the user tasks, cognitive walkthroughs were performed to ensure an intuitive approach to the UI design. The cognitive walkthrough was done by identifying user scenarios associated with the specific functionality which was implemented, and stepping through each subtask in the scenario.

4.5.3

Formative Evaluation

The formative evaluation was conducted as user tests where different aspects of usability are tested to evaluate how well the prototype performs with regard to the metrics. The evaluation can be done with regard to different target groups. The first group consists of users of the existing system, who can draw conclusions about the usability in contrast to that system. Another possible target group is users who have never or rarely used the existing system, thereby forming a first impression of the system through this new way of interaction. If the system meets the specifications and the usability requirements, both of these groups should get a positive impression of the prototype. The formative evaluation was used in the iterative process as suggested by Gabbard et al. [12]. They describe the formative evaluation in 5 steps, where steps 2-5 are repeated:

1. Develop user task scenarios

2. Perform test scenarios using thinking aloud

3. Collect usability data

4. Suggest improvements for user interaction

5. Refine user task scenarios

The testing was performed in an informal way, meaning that there were no strict requirements for each test session. The test sessions made use of techniques such as thinking aloud and interviews to capture the subject's opinions about the usability and functionality of the system. The tests were performed spontaneously throughout development, at key points in the development process, and as a concluding final test at the end of the thesis period. This allowed for instant feedback on experimental changes in the prototype development, as well as more detailed and comprehensive feedback at milestones.

Observation in HMD AR has some limitations which needed to be considered, making it more difficult to monitor the test session than, for example, in tablet AR. HoloLens provides the functionality to see what the user is seeing through a web interface which streams the user's view with a short delay. However, streaming causes the visual performance on HoloLens to decline by displaying a lower resolution, thereby making objects blurrier and text more difficult to read. Due to network restrictions at Saab, streaming had to be done via a USB connection between HoloLens and a computer, which also restricted the user's movement. Considering these limitations, the choice was made to let the users have the full visual quality experience at the cost of insight into the user's actions. This meant having to depend more on thinking aloud and interviews.

As most of our users had not used a HoloLens before, each tester was given the chance to go through a tutorial to get familiar with its basic concepts. This was done using the 'Gestures' application on HoloLens, which teaches the basic concepts of holograms and input gestures. The tutorial takes about 5 minutes. The tester then performed the test scenarios in order to evaluate the system's performance in functionality and usability. After the test scenarios were completed, a short and informal interview was held to ask the user about topics that might not have come up during the scenarios through, for example, thinking aloud. Additionally, in the final tests, after the interview was completed, the users were asked to fill out a short questionnaire.

The tests were split into three different parts, each with a specific purpose and goal. The tests were designed to assess the system's different features and usability. The estimated total time for each test session was around 40 minutes, distributed roughly as 5 minutes of introduction, 20 minutes for the tests, and 15 minutes for the interview and questionnaire.

Test 1: Spatial Perception

The goal of this test is to see how well the user is able to make decisions about the positions of entities using spatial awareness. The test was carried out by placing groups of entities on the map at different altitudes and distances from each other, see figure 4.2. For example, one red aircraft surrounded by three blue aircraft. The tester was asked to first determine which of the blue aircraft has the highest altitude, and then which one is closest to the red aircraft, while being allowed to move around to gain spatial understanding of the scene. Conclusions about spatial perception could then be drawn by observing the time required to perform the task and the accuracy of the answers. During this test, the scene was paused, meaning all aircraft were standing still in the air.

Figure 4.2: Test 1 - Spatial perception
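The ground truth for both judgements in this test follows directly from the entities' coordinates. The sketch below is a minimal illustration in Python (the prototype itself was built for HoloLens, not in Python), using hypothetical positions in which the second component is altitude:

```python
import math

# Hypothetical positions (x, altitude, z) for the red and blue aircraft.
red = (0.0, 300.0, 0.0)
blues = {
    "blue1": (50.0, 350.0, 20.0),
    "blue2": (-30.0, 280.0, 60.0),
    "blue3": (10.0, 400.0, -90.0),
}

# Which blue aircraft has the highest altitude (the y component)?
highest = max(blues, key=lambda name: blues[name][1])

# Which blue aircraft is closest to the red one (3D Euclidean distance)?
closest = min(blues, key=lambda name: math.dist(blues[name], red))

print(highest)  # → blue3
print(closest)  # → blue2
```

Comparing a tester's spoken answers against such computed ground truth is what allows the accuracy of the answers, and not only the time taken, to be scored.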

Test 2: Areal Perception

The second test continues the theme of testing spatial understanding and perception, but this time by letting users determine whether entities are located inside certain areas, see figure 4.3. The test consists of several aircraft as well as anti-aircraft units on the ground. As opposed to the first test, the aircraft and the anti-aircraft units now have radar areas. An aircraft's radar is shaped like a cone, while an anti-aircraft unit has a radar area shaped like a sphere with the unit itself at its center. As in the first test, groups of entities were placed on the map. The users were tasked with figuring out which aircraft were located inside other entities' radar areas.
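The correct answers in this test reduce to two geometric containment checks: a point-in-sphere test for the anti-aircraft radar and a point-in-cone test for the aircraft radar. The following Python sketch (with hypothetical positions and radar parameters; the prototype itself is not written in Python) illustrates both:

```python
import math

def inside_sphere(point, center, radius):
    """Anti-aircraft radar: a sphere centered on the unit itself."""
    return math.dist(point, center) <= radius

def inside_cone(point, apex, axis, half_angle_deg, max_range):
    """Aircraft radar: a cone from the aircraft along its heading.

    `axis` must be a unit vector; `half_angle_deg` is the cone's
    half opening angle and `max_range` the radar's reach.
    """
    v = [p - a for p, a in zip(point, apex)]
    dist = math.sqrt(sum(c * c for c in v))
    if dist == 0.0 or dist > max_range:
        # A point at the apex or beyond radar range is treated as outside.
        return False
    # Angle between v and the cone axis, via the dot product.
    cos_angle = sum(vc * ac for vc, ac in zip(v, axis)) / dist
    return cos_angle >= math.cos(math.radians(half_angle_deg))

# Hypothetical scene: one anti-aircraft unit and one aircraft facing +z.
aa_pos, aa_radius = (0.0, 0.0, 0.0), 100.0
ac_pos, ac_axis = (0.0, 50.0, 0.0), (0.0, 0.0, 1.0)

print(inside_sphere((30.0, 40.0, 0.0), aa_pos, aa_radius))            # → True
print(inside_cone((0.0, 50.0, 80.0), ac_pos, ac_axis, 30.0, 200.0))   # → True
print(inside_cone((0.0, 50.0, -80.0), ac_pos, ac_axis, 30.0, 200.0))  # → False (behind)
```

As in the first test, such checks give an objective key against which the users' judgements about radar coverage can be scored.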

Test 3: Functionality and Input

The third test aims to determine how well the prototype performs in terms of usability and intuitiveness. Instead of having a pre-built scene, this test lets the user interact with the user interface to create a scene containing aircraft, waypoints and anti-aircraft units. The test starts with an empty scene, as shown in figure 5.7. In this test, the users must navigate the interface themselves, meaning that we do not provide step-by-step instructions on how to perform a task but rather state the task and let the user figure out the rest. Conclusions can then be drawn about the
