
DEGREE PROJECT IN THE FIELD OF TECHNOLOGY MEDIA TECHNOLOGY
AND THE MAIN FIELD OF STUDY COMPUTER SCIENCE AND ENGINEERING,
SECOND CYCLE, 30 CREDITS

STOCKHOLM, SWEDEN 2019

Route learning and user attention in a mobile augmented reality navigation application

OSCAR STRÖM

KTH ROYAL INSTITUTE OF TECHNOLOGY

SCHOOL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE


Abstract

Augmented Reality (AR) is a technology that adds a virtual layer to the physical world. The most promising future for AR is in personal computing. Several studies have found that the usage of AR guidance leads to problems with attention to objects that are not directly addressed by the AR guidance.

This study explores how AR guidance affects route learning. To do so, a Mobile Augmented Reality application for navigation was developed. The application was developed for Android using ARCore and the Sceneform API. Route learning was evaluated by comparing the performance of participants navigating with the application to a reference group that got verbal instructions for the same route. Route learning performance was measured by the participants' memory of details of the route, e.g. the number of sharp turns, specific landmarks they had seen along the route and their ability to retell where they had seen the distractions in relation to the final point of the route.

The results of the study show that route learning was not noticeably different for the participants using the application compared to the participants that got verbal instructions. Using the application did not reduce the participants' ability to navigate safely. Due to the small number of participants, the results should not be considered statistically definitive but rather as trends.


Sammanfattning

Augmented Reality (AR) is a technology in which a virtual layer is added on top of the physical world. The most promising application area for AR is in personal computing. The results of several studies show that the use of AR for guidance leads to reduced attention to objects in the user's surroundings.

This study explores how navigating in AR affects route learning. To investigate this, a mobile AR application for navigation was developed. The application was developed for Android and uses ARCore and the Sceneform API. Route learning was evaluated by comparing users of the application with a reference group of users who navigated by verbal instructions. Route learning was compared through the participants' memory of details of the route they had navigated, for example the number of sharp turns and specific landmarks along the route, as well as their ability to recount where these landmarks were located along the route in relation to the route's end point.

The results of the study show that route learning was not noticeably affected for the users of the application; hence the application was a safe means of navigation.


Route learning and user attention in a mobile augmented reality navigation application

Oscar Ström
KTH Royal Institute of Technology
Stockholm, Sweden
osst@kth.se


ABSTRACT

Augmented Reality (AR) is a technology that adds a virtual layer to the physical world. The most promising future for AR is in personal computing. Several studies have found that the usage of AR guidance leads to problems with attention to objects that are not directly addressed by the AR guidance.

This study explores how AR guidance affects route learning. To do so, a Mobile Augmented Reality application for navigation was developed. The application was developed for Android using ARCore and the Sceneform API. Route learning was evaluated by comparing the performance of participants navigating with the application to a reference group that got verbal instructions for the same route. Route learning performance was measured by the participants' memory of details of the route, e.g. the number of sharp turns, specific landmarks they had seen along the route and their ability to retell where they had seen the distractions in relation to the final point of the route.

The results of the study show that route learning was not noticeably different for the participants using the application compared to the participants that got verbal instructions. Using the application did not reduce the participants' ability to navigate safely. Due to the small number of participants, the results should not be considered statistically definitive but rather as trends.

1 INTRODUCTION

Augmented Reality (AR) is the interface of the future. By merging virtual and real information, AR can guide a user through daily tasks, giving context-specific information and guidance on top of the user's sensory input in real time.

1 https://www.statista.com/statistics/330695/number-of-smartphone-users- worldwide/

There is no general definition of what AR is. El Sayed et al. define AR as a technology for adding virtual objects to real scenes, enabling the addition of missing information in real life [18]. Azuma defines AR systems based on three criteria: combination of real and virtual, interactivity in real time, and 3D registration of virtual and real objects [1]. AR is classified as a mixed reality technology and is generally described as an augmentation of the real world with computer-generated information [2].

The history of Augmented Reality dates back to the early 1940s, when AR was used for head-up displays (HUDs) in aircraft. Since then, AR has been used for a wide variety of purposes such as military equipment, teleprompters, weather visualizations, sports broadcasts and much more. The first Mobile Augmented Reality (MAR) system, the Touring Machine, was released in 1996. It combined a head-mounted display, a handheld tablet, and a backpack containing a computer, GPS and an internet connection [3]. Surveys suggest that the most promising future for AR is in personal computing, where it would serve as an assistant that can do everything from giving navigational guidance to managing personal information related to a specific location or people [4]. Recently, AR made its entrance into the smartphone market, a market with over 2.5 billion users.1 Google and Apple have released their own software development kits, ARCore and ARKit respectively, to let developers create MAR applications for their corresponding operating systems.2

As with any technology, AR has its problems. Several studies have shown that the use of AR for guidance leads to problems with user attention to objects not directly addressed by the AR interface [6, 7, 8]. These problems should be further investigated for AR to become a suitable interface for personal computing in the future.

2 https://medium.com/@vieyrasoftware/comparing-google-arcore-and-apple- arkit-81b4727132ad


One usage area of MAR is navigational guidance [3]. To move around safely, a user must pay attention to their surroundings. One of the most influential processes controlling how humans perform in recall tasks is attention [16]. One method to evaluate route learning is to let a person familiarize themselves with a route and thereafter ask about information related to the route, such as landmarks along the route, estimates of the route distance and sketches of the route [5]. The main study of this paper is heavily based on this method of evaluating route learning. In this study, the route learning of users navigating with a MAR application is compared to that of a reference group navigating by verbal instructions. To do that, a MAR navigation application was created.

Figure 1: Picture of the navigation application.

1.1 Research Question

Every time we move around, we rely on our navigation ability. During navigation we think about a route’s starting point, its goal and landmarks along the way.

Learning a route relies on paying attention to various features of the environment [5]. One of the most influential processes controlling how humans perform in recall tasks is attention [16]. Several studies have shown that AR guidance has a negative effect on the user's attention to details that are not directly addressed by the AR guidance. A study of an enhanced navigation system developed for surgeons found that the technology caused inattentional blindness: surgeons were less likely to identify significant unexpected findings, clearly within view, during surgery when using the system [6]. Several studies of head-up displays have shown slower response times to various unexpected obstacles [7, 8].

This study aims to explore whether similar problems occur for AR navigational guidance and how it affects route learning.

The following research question was formulated for the paper:

How does using a MAR navigation application affect route learning?

To help answer the main question two sub-questions were formulated.

- How is the user's attention affected when using the application during navigation?

For MAR navigation to be and feel safe, users should be aware of their surroundings during navigation. The results of other studies mentioned earlier in this section have shown that AR guidance leads to problems with attention. To complement the quantitative data from the performance of route learning, attention related questions were brought up in qualitative interviews with the participants of the study (see 3.2).

- Did the usability of the application interfere with the navigation?

Usability is evaluated to distinguish between problems related to MAR and problems related to interface design of the application.

Based on the results of other studies presented earlier in this section, the hypothesis for this study is that the application will have a negative impact on route learning.

1.2 Delimitations

Technical instruments for measuring attention, such as mobile eye trackers, were not available for this study.

2 THEORY AND RELATED RESEARCH

This chapter presents related work and establishes AR as a tool for navigation.

2.1 Route Learning

In a study by Ineke J. M. van der Ham et al., route learning was measured by first letting participants familiarize themselves with a route. After the learning phase, participants were tested on their memory for landmarks, route properties and the layout of the environment. First the participants were shown 12 objects. For each object they were asked to indicate if they had seen it during navigation. Next the participants were asked to estimate the route distance in meters. Thereafter the participants were asked to place each object they had seen during navigation on a horizontal slider to estimate where on the route they had encountered the object. Then the participants were asked to point to the beginning and end point of the route while imagining passing a landmark. Lastly the participants were asked to draw the route they had walked and to estimate the position of the landmarks they had seen along the route.

The study measured differences in route learning by comparing real-life performance to virtual routes and hybrid routes (using a GPS application).

The results of the study indicate that real-life route learning was better than under purely virtual conditions, and that application-aided navigation hardly differed from real-life performance [5]. Other studies of route learning when using GPS-based mobile applications have shown the same results [19].

The method used for this paper is based on the method described in this section.

2.2 Augmented Reality as an Interface

Augmented Reality is a multimodal interface. A multimodal interface is an interface based on humans' natural ways of interacting [9]. In 2004, Reeves et al. defined guidelines for multimodal interface design.

One of those guidelines states that a multimodal interface should maximize human cognitive and physical abilities, based on an understanding of users' information processing abilities and limitations [10]. Van Krevelen et al. suggest that interfaces for Augmented Reality must follow guidelines so as not to overload the user with information, and to prevent users from relying on the system so heavily that environmental cues are missed [4].

One commonly occurring problem with AR as an interface is the effect it has on user attention. In 2012, Dixon et al. studied AR navigational tools developed for medical procedures to look for inattentional blindness in a surgical context. The results of the study show that the group using AR navigation during surgery was less likely to identify significant unexpected findings clearly within view [6]. Similar problems have been found in studies of head-up displays. Robert S. McCann et al. tested, in a runway simulator, the hypothesis that visual attention can be focused on either the head-up display or the world beyond it, but not on both simultaneously. The results of the study confirm the tested hypothesis: pilots had a faster response time to events occurring in the same perceptual group (an event occurring on the HUD while the pilot was focusing on the HUD) than to events occurring in a different perceptual group (something happening on the runway while the pilot was focusing on the HUD) [7]. NASA conducted a similar study on cognitive issues in head-up displays, where pilots' performance when using a head-up display was compared to using conventional instruments in a flight simulator. The results show mean response times to obstacles on the runway of 4.13 seconds with the HUD and 1.75 seconds without it [8].

3 https://developers.google.com/ar/

2.3 Augmented Reality for Navigation

In March 2018, Google released ARCore, a software development kit that allows MAR applications to be built using Google's own algorithms. ARCore uses concurrent odometry and mapping for its environmental understanding.

Concurrent odometry and mapping understands the surroundings by using sensory input, primarily image capture, to detect key features of the environment and compare them to previously stored inputs in order to localize the device and improve its understanding of the environment. The device navigates an environment while simultaneously constructing or augmenting a map of that environment. When ARCore understands its environment, it uses that information to align a virtual camera with the position of the device's camera, which allows a virtual image to be rendered on top of the image obtained from the device's camera so that the virtual object appears to be part of the real world.3 Apple's ARKit uses visual-inertial odometry, which is a similar technique to concurrent odometry and mapping in that it also relies on identifying features in the environment.4 The techniques used in ARCore and ARKit have many technical similarities to simultaneous localization and mapping, which is used by autonomous robots to understand unknown environments [11]. These similarities make ARCore a suitable technology for AR navigation that works independently of GPS input. ARCore was used for the application built for this study (see 3.1).
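To illustrate how this environmental understanding is exposed to developers, the sketch below polls the camera pose and tracking state once per frame through the ARCore Java API. The PoseLogger class and the surrounding session setup are illustrative assumptions, not code from the application built for this study.

```java
import com.google.ar.core.Camera;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;
import com.google.ar.core.exceptions.CameraNotAvailableException;

public class PoseLogger {

    // Called once per rendered frame; the Session is assumed to be
    // created and resumed elsewhere (for example by an ArFragment).
    public void onFrame(Session session) throws CameraNotAvailableException {
        Frame frame = session.update();   // advances ARCore's concurrent odometry and mapping
        Camera camera = frame.getCamera();

        if (camera.getTrackingState() == TrackingState.TRACKING) {
            // The pose aligns the virtual camera with the physical camera, so content
            // rendered with it appears fixed in the real world.
            Pose pose = camera.getPose();
            android.util.Log.d("PoseLogger",
                    "Device position (m): x=" + pose.tx() + " y=" + pose.ty() + " z=" + pose.tz());
        }
    }
}
```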

2.4 Usability

To properly measure the effect a MAR navigation application has on route learning, the usability of the application should be taken into consideration. As discussed in section 2.2, the interface of an AR application should be designed to maximize human cognitive abilities; thus, the usability of the application should be considered for the validity of the results of the study.

4 https://developer.apple.com/arkit/

Usability is traditionally associated with five attributes [12]:

• Learnability: the system should be easy to learn.

• Efficiency: the system should be easy to use.

• Memorability: the system should be easy to remember.

• Errors: the system should have a low error rate.

• Satisfaction: the system should be pleasant to use.

User interfaces are commonly evaluated using empirical methods; one such method is the System Usability Scale (SUS) questionnaire [13]. The System Usability Scale questionnaire consists of 10 questions that are answered on a Likert scale from 1 (Strongly disagree) to 5 (Strongly agree). The SUS score is the sum of the contributions from each question multiplied by 2.5, which gives a score between 0 and 100 [14].
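As an illustration of this scoring rule, the sketch below follows Brooke's formulation [14], in which odd-numbered items contribute the response minus 1 and even-numbered items contribute 5 minus the response. The class and the example responses are illustrative and are not taken from the analysis done for this study.

```java
/** Minimal sketch of System Usability Scale scoring [14]. */
public final class SusScore {

    /**
     * @param responses ten Likert responses in questionnaire order, each between 1 and 5
     * @return SUS score between 0 and 100
     */
    public static double compute(int[] responses) {
        if (responses.length != 10) {
            throw new IllegalArgumentException("SUS has exactly 10 items");
        }
        int sum = 0;
        for (int i = 0; i < 10; i++) {
            // Odd-numbered items (index 0, 2, ...) contribute response - 1;
            // even-numbered items (index 1, 3, ...) contribute 5 - response.
            sum += (i % 2 == 0) ? responses[i] - 1 : 5 - responses[i];
        }
        return sum * 2.5; // each item contributes 0..4, so the total maps to 0..100
    }

    public static void main(String[] args) {
        // Example: a fairly positive set of answers yields a score above the 68 average.
        System.out.println(compute(new int[] {5, 2, 4, 1, 5, 2, 4, 2, 4, 1})); // 85.0
    }
}
```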

For this study, each participant that used the application answered a system usability questionnaire to evaluate if the usability of the application affected navigation (see 3.2.2).

3 METHOD

The purpose of this study is to compare route memory performance for users of a MAR navigational application to a reference group navigating by verbal instruction.

This chapter goes through the development process of the application that was used for the study and the creation of the main study design.

3.1 Application

The application was designed with usability and minimalism in mind, as van Krevelen et al. suggested that one of the biggest problems with AR interface design is overloading the user with information [4]. The development of the prototype was an iterative process in which two pilot studies were conducted before the final design was decided. The interface of the application uses the camera input with an AR overlay of green spheres hovering along a route.

For the first iteration of the application, ARCore Anchors (tracking points that ARCore uses to understand its environment) were evenly distributed along a route with six meters between each anchor. A green sphere was attached to each anchor to display the route to the user; for readability, the green spheres will be referred to as checkpoints in the remainder of this paper. By following the checkpoints, the users could navigate a given route.

When starting the application, the user is first asked to scan a known area so that ARCore can find its reference point (see figure 2). When a reference point is found, the checkpoints are displayed on top of the camera feed with a message asking the user to follow the green spheres (see figure 2). When the final checkpoint has been passed, a message is shown telling the user that the navigation is completed.
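A minimal sketch of how such checkpoints could be placed with ARCore and Sceneform is shown below. The class and method names are illustrative assumptions rather than the prototype's actual source; the sketch only shows the pattern of creating one anchor per checkpoint and attaching a green sphere renderable to it.

```java
import android.content.Context;

import com.google.ar.core.Anchor;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.sceneform.AnchorNode;
import com.google.ar.sceneform.Scene;
import com.google.ar.sceneform.math.Vector3;
import com.google.ar.sceneform.rendering.Color;
import com.google.ar.sceneform.rendering.MaterialFactory;
import com.google.ar.sceneform.rendering.ModelRenderable;
import com.google.ar.sceneform.rendering.ShapeFactory;

/** Sketch of the first iteration: one ARCore anchor per checkpoint, each with a green sphere. */
public class CheckpointPlacer {

    /** Builds a green sphere renderable and attaches it to an anchor at each route pose. */
    public void placeRoute(Context context, Session session, Scene scene, Pose[] routePoses) {
        MaterialFactory.makeOpaqueWithColor(context, new Color(0f, 1f, 0f))
                .thenAccept(material -> {
                    ModelRenderable sphere =
                            ShapeFactory.makeSphere(0.1f, new Vector3(0f, 0f, 0f), material);
                    for (Pose pose : routePoses) {
                        Anchor anchor = session.createAnchor(pose); // ARCore tracks this world position
                        AnchorNode node = new AnchorNode(anchor);
                        node.setRenderable(sphere);
                        node.setParent(scene);                      // render the checkpoint in the scene
                    }
                });
    }
}
```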

3.1.1 First Pilot Study. The first draft of the application was used in a pilot test with one person. The participant was a student with experience of ARCore development. During the pilot test it was discovered that the application crashed when it was used to navigate large distances, due to the number of anchors that the phone had to keep track of. Tracking an anchor is a performance-heavy task.

Another issue that was brought up during the pilot test was that displaying all the checkpoints along the whole route at once occasionally made it hard to determine where to go, due to the checkpoints visually interfering with each other.

3.1.2 Second Iteration. To prevent crashes, the application was altered so that only the start point of a route was an anchor and the rest of the checkpoints were added in relation to that first anchor. This means that the application only keeps track of one existing point in the real world and places the rest of its points in relation to that reference point. This made the prototype stable, with only a small loss in accuracy of the checkpoints' positioning. As a precaution against future crashes, Cloud Anchors were implemented in the application. Cloud Anchors are stored online, so that if the application crashes, the same session can be rebuilt by requesting the anchor id through the Cloud Anchor API.
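Under the same assumptions as the previous sketch, the relative placement and the Cloud Anchor precaution could look roughly as follows: only the route's start pose becomes a (cloud-hosted) ARCore anchor, and every other checkpoint is a child node offset from it. Names and offsets are illustrative.

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.sceneform.AnchorNode;
import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.Scene;
import com.google.ar.sceneform.math.Vector3;
import com.google.ar.sceneform.rendering.ModelRenderable;

/** Sketch of the second iteration: one tracked anchor, checkpoints positioned relative to it. */
public class RelativeRoute {

    public AnchorNode buildRoute(Session session, Scene scene, Pose startPose,
                                 Vector3[] offsetsFromStart, ModelRenderable greenSphere) {
        // Host the start anchor in the cloud so the session can be rebuilt after a crash
        // by resolving the returned cloud anchor id later (sketch only).
        Anchor startAnchor = session.hostCloudAnchor(session.createAnchor(startPose));

        AnchorNode routeRoot = new AnchorNode(startAnchor);
        routeRoot.setParent(scene);

        // Only the root is tracked by ARCore; every checkpoint is a child node
        // placed at a fixed offset (in meters) from the start point.
        for (Vector3 offset : offsetsFromStart) {
            Node checkpoint = new Node();
            checkpoint.setParent(routeRoot);
            checkpoint.setLocalPosition(offset);
            checkpoint.setRenderable(greenSphere);
        }
        return routeRoot;
    }
}
```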

To fix the interface issue of checkpoints visually interfering with each other, the application was altered so that the checkpoints were displayed one at a time along the route during navigation.

3.1.3 Second Pilot Study. The application was used in a second pilot study with two people. The participants were developers from a company who had previous experience with AR development. The participants got to try the prototype and were asked to give feedback on the design and functionality in a semi-structured interview. The purpose of the pilot study was to find design flaws in order to improve usability. Three issues were addressed during the pilot study. The first issue was that the checkpoints were hard to find. The second issue was that there was no indication of direction. A third issue that was brought up was that the six-meter distance between checkpoints was too large and should be reduced.

3.1.4 Third Iteration. To make the checkpoints easier to find, a complementary green line was added on the ground between each checkpoint, aimed towards the next checkpoint (see figure 2). The prototype was also adjusted so that, instead of disappearing, checkpoints that had been walked past changed to a red color, indicating to the user that they are walking the wrong way if they see those checkpoints. The distance between each checkpoint was reduced from six meters to two meters.
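The per-frame behaviour described above can be sketched roughly as below, assuming the checkpoint nodes from the earlier sketches and a pre-built red sphere renderable; the pass radius and all names are illustrative assumptions rather than values taken from the prototype.

```java
import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.Scene;
import com.google.ar.sceneform.math.Vector3;
import com.google.ar.sceneform.rendering.ModelRenderable;

import java.util.List;

/** Sketch of the third iteration: mark passed checkpoints red and reveal the next one. */
public class CheckpointTracker {

    private static final float PASS_RADIUS_METERS = 1.0f; // assumed threshold, not from the thesis

    private final List<Node> checkpoints;    // ordered along the route, 2 m apart
    private final ModelRenderable redSphere; // shown for checkpoints already walked past
    private int nextIndex = 0;

    public CheckpointTracker(List<Node> checkpoints, ModelRenderable redSphere) {
        this.checkpoints = checkpoints;
        this.redSphere = redSphere;
    }

    /** Call once per frame, e.g. from Scene.addOnUpdateListener. */
    public void onFrame(Scene scene) {
        if (nextIndex >= checkpoints.size()) {
            return; // navigation completed
        }
        Vector3 user = scene.getCamera().getWorldPosition();
        Node next = checkpoints.get(nextIndex);
        float distance = Vector3.subtract(user, next.getWorldPosition()).length();

        if (distance < PASS_RADIUS_METERS) {
            next.setRenderable(redSphere);                   // passed checkpoints turn red
            nextIndex++;
            if (nextIndex < checkpoints.size()) {
                checkpoints.get(nextIndex).setEnabled(true); // show one checkpoint at a time
            }
        }
    }
}
```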

1. First iteration: Each checkpoint is an anchor; green spheres are attached to the anchors, displaying the path to the user.
2. First pilot study: Problems with crashes and checkpoint visual interference.
3. Second iteration: Implemented Cloud Anchors and limited the application to keep track of only one anchor.
4. Second pilot study: No indication of direction, checkpoints hard to find, and too large a distance between checkpoints.
5. Main study iteration: Added a line on the ground in the direction of the next checkpoint; changed the color of spheres that have been passed to give direction; reduced the distance between checkpoints from 6 to 2 meters.

Table 1: Overview of the application development process.

3.1.5 Application Limitations. The application was developed solely for this study and was limited to keeping track of one route only. The prototype was designed so that the user could only walk a premade path; if a user were to take an alternative route to the same end point, the application would not adjust its checkpoints according to the alternative route.

Figure 2: Pictures of the final iteration of the application when asking the user to scan the start point (left) and during navigation (right).

3.2 Main Study Design

3.2.1 Route Learning. The method used in this paper was heavily based on [5]. Each route used for this study was approximately 300 meters long and had four sharp turns. For each route, two distractors (Coca-Cola bottles filled with colored water) were placed directly in the path of navigation. First the participants were asked to navigate a route using the application. After the navigation, participants were tested on their memory for landmarks (distractions), route properties and the layout of the environment. First the participants were asked if they had seen any distractions along the route they had navigated.

Then the participants were asked to draw or explain the route and where they had seen the distractions along the route. Distractions are also used in other studies related to how AR affects attention (see 2.2).

The main study design was tested in a pilot test with two people. After the pilot test it was decided that only two distractors would be used in the main study, since more distractions led to the participants figuring out the purpose of the study during navigation, which could affect the results.

To reduce learning effects, a between-group study design was used, where one group of participants navigated using the prototype and the other group got verbal instructions on where to navigate. To be able to recruit as many participants as possible for the study, the test was conducted at five different locations in Stockholm with similar routes to navigate (see figure 3).

Each test was conducted outdoors in sunny weather conditions. The participants were not told the purpose of the study; however, they were informed that they were not being timed and that they should navigate as they would when using other means of navigation. This was done so that the participants would feel relaxed and not try to compete. The purpose of the study was withheld for a similar reason, so that the participants would not intentionally pay as much attention as possible to the environment and distort the results.

3.2.2 Semi-Structured Interviews and Usability Tests. After the route learning test, participants in the application test group were asked to evaluate the usability of the application through a System Usability Scale questionnaire (see 2.4). Thereafter a semi-structured interview was held, focusing on the usability of the application and the attention of the participant during navigation.

3.3 Recruitment and tools

The target group for the study was users of any navigation application. The participants of the main study were recruited through school and social media. The reference group consisted of eight people (six male, two female) with ages ranging from 20 to 25 years. The test group that tested the application consisted of eight people (four male, four female) with ages ranging from 20 to 53 years. The participants were assigned to groups randomly. Five of the participants in the test group used applications to navigate regularly, while the rest used them occasionally. Seven of the participants in the reference group used applications to navigate regularly, whereas one used them occasionally.

The prototype was developed using Android Studio with the ARCore SDK and the Sceneform API; the code was written in Java.

4 RESULTS

This chapter presents the results from the final user study, separated into route learning, usability and interviews.

4.1 Route Learning

Figure 3: Pictures of each route that was navigated (Tallåsvägen, Vallentuna; Gläntstigen, Vallentuna; Kungshamra, Solna; Krassevägen, Solna; Tallåsvägen, Huddinge).

Both the reference group and the prototype user group were tested on their route memory. The route memory results are divided into three categories: distractions seen, route positioning and route turns.

Distractions seen is the number of distractions that were seen by the participants during navigation. Route positioning is the participant's ability to retell where along the route they had seen a distraction. Route turns is the participant's ability to retell how many sharp turns they made during the navigation, either by drawing the route or explaining it.

User Group          Result (Max = 2)   Standard Deviation   Median
Application users   1.5                0.5                  1.5
Reference group     1.875              0.33                 2.0

Table 2: Results for distractions seen. Two distractors were used in the test, so the result ranges from 0 to 2.

The mean of the reference group for seeing the distractions was 1.875 out of 2.0. The mean of the prototype user group for seeing the distractions was 1.50 out of 2.0 (see table 2). An unpaired two-tailed t-test gives a p-value of 0.12, which indicates a high level of uncertainty in the results.

User Group          Result (Max = 2)   Standard Deviation   Median
Application users   1.5                0.5                  1.5
Reference group     1.875              0.33                 2.0

Table 3: Results for route positioning. The points for route positioning range from 0 to 2. To get a point, a participant must remember where they had seen the distraction on the route in relation to the end point.

The results for route positioning were identical to the results for distractions seen. This means that for each distraction that a participant saw during navigation, they were also able to retell where they had seen it in relation to the end point of the route after navigation (see table 3).

The mean of the reference group for route turns was 3.75 out of 4.0. The mean of the prototype user group for route turns was 3.63 out of 4.0 (see table 4). An unpaired two-tailed t-test gives a p-value of 0.61, which indicates a high level of uncertainty in the results.

User Group          Result (Max = 4)   Standard Deviation   Median
Application users   3.63               0.48                 4
Reference group     3.75               0.43                 4

Table 4: Results for route turns. The points for route turns range from 0 to 4. To get a point, a participant must remember the number of sharp turns on the route, either by drawing the route or explaining it.
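For reference, p-values like the ones reported in this chapter can be obtained with an unpaired two-tailed t-test as sketched below using the Apache Commons Math library; the per-participant scores in the example are placeholders chosen to match the reported group means, not the study's raw data.

```java
import org.apache.commons.math3.stat.inference.TTest;

/** Sketch of the unpaired two-tailed t-test used to compare the two groups. */
public class GroupComparison {

    public static void main(String[] args) {
        // Placeholder per-participant scores (0-2 distractions seen); not the actual raw data.
        double[] applicationGroup = {2, 1, 2, 1, 2, 1, 2, 1};
        double[] referenceGroup   = {2, 2, 2, 2, 2, 2, 2, 1};

        // tTest(double[], double[]) returns the two-sided p-value for an unpaired
        // two-sample t-test (it does not assume equal variances).
        double pValue = new TTest().tTest(applicationGroup, referenceGroup);
        System.out.println("p = " + pValue);
    }
}
```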

4.2 Usability

Most participants had no problem understanding the prototype and every participant completed the navigation. Only one participant decided to intentionally go the wrong way to see how the application would respond.

Figure 4: Box diagram of SUS scores. The bar represents the mean value.

Figure 5: Individual SUS score for each participant that tested the application.

The results of the System Usability Scale questionnaire show a mean SUS score of 84.1 (see figure 4). Research suggests that a SUS score above 68 is considered above average in usability, while anything below 68 is considered below average. The lowest score obtained in this study was 72.5, which is still above average.5

4.3 Interviews

The notes taken during the navigation and the semi-structured interviews have been structured into statements that were shared by at least two participants. If a participant made a similar statement, it was counted as the same statement in the results. The statements are grouped into two categories: usability and attention.

4.3.1 Statements about usability.

Statements made by application users (1-8):
- I would use it on short distances
- Navigation felt intuitive
- I always knew where to go
- The distance between the checkpoints was too short
- The application lacks a sense of overview

Table 5: Semi-structured interview statements with regard to usability. A cell is colored blue if the participant made a similar statement to the one listed.

The design of the interface is closely related to both user attention during navigation and the usability of the application. Overall the participants of the study were satisfied with the interface of the application.

Seven out of eight participants stated that navigating with the application felt intuitive. Four participants stated that they would only use MAR navigation for short distances and other means of navigation for longer distances. Four participants stated that they missed the feeling of overview that you get with regular map navigation; these participants wanted to be able to take an alternative route and get a sense of the remaining distance of the navigation. Four participants brought up that the checkpoints were too close to each other, which forced them to pay more attention than necessary to the phone.

5 https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html

4.3.2 Statements about attention.

Statements made by application users (1-8):
- I spent most time looking at the phone
- I did not have good attention to my surroundings
- The application should warn you of danger
- More interface elements would have a negative impact on my attention
- The camera input made me aware of my surroundings

Table 6: Semi-structured interview statements with regard to attention. A cell is colored blue if the participant made a similar statement to the one listed.

The results of the semi structured interviews show that all participants of the study felt that they had spent most of their time during navigation looking at the cellphone.

Five participants stated that they were less aware of what was happening around them during navigation.

Three participants suggested implementation of features that were directly related to safety. Two participants suggested that the application should warn the user before sharp turns since there could be oncoming traffic.

Another participant suggested that the user should be prompted to put the phone down regularly. Five participants stated that the camera input of the interface made them more aware of their surroundings.

5 DISCUSSION

This chapter discusses the results of the study in relation to previous research. Since the number of participants in the study was small, the results should be considered trends rather than statistically conclusive findings.


5.1 Route learning

The purpose of this study was to answer the research question: How does using a MAR navigation application affect route learning after navigation?

The results of this study do indicate a difference in route memory when comparing the reference group to the test group, but the difference is of low significance and is highly likely to be the result of chance. Thus, it can be assumed that the application did not notably affect route learning after navigation and that the participants were able to navigate safely using the application. Other studies of route learning have shown that the use of a GPS application for navigational guidance had no significant impact on route learning.

This study yields similar results, indicating that MAR navigation performs on par with GPS-based navigational guidance [5, 19].

The results for distraction recognition indicate a difference in performance, but with low statistical significance and a high likelihood of the difference being the result of chance. This means that the application user group and the reference group performed equally well at recognizing unexpected objects (distractions) along the route they were navigating. These results differ from several other studies, where users of AR guidance were less likely to identify unexpected findings [6, 7, 8]. The reason why the results in this study differ might be a difference in the cognitive load experienced by the participants of the studies being compared.

Cognitive load increases with the complexity of the performed task [17]. In this case, the cognitive load of navigation is compared to the cognitive load of a surgeon during surgery or a pilot during flight. One can assume that the cognitive load put on a surgeon during surgery or a pilot during flight is higher than during navigation, resulting in them being less likely to identify unexpected findings.

Another reason for the difference in results could be the distractors used. The red color of the Coca-Cola bottles might have stood out too much along the route and thus been easier to spot than typical obstacles encountered when navigating, such as pavement height differences.

5.2 Attention

As discussed in section 5.1 the results of this study indicate that navigational guidance in AR had no significant effect on the participant’s route learning. This suggests that the participants paid good attention to their surroundings during the navigation.

The results of the semi-structured interviews suggest that the participants of the study either had or could see potential issues with the application related to attention.

Five participants stated that they felt they were not aware of what was going on around them during navigation, and three participants suggested the addition of features directly related to safety. One participant noticed, when walking back along the same route that she had navigated, that she had been walking on soil and grass during navigation without noticing it. The fact that this was instantly noticed when walking the route without navigational guidance indicates a difference in attention.

Seven participants stated that the number of features and interface design would have an impact on their attention during navigation, which correlates with design guidelines suggested in other studies [4].

An interesting result of the study is that five participants felt that the camera input used in the interface made them more aware of their surroundings.

Studies of HUD interfaces in aircraft show that visual attention can be focused on either the head-up display or the world beyond it, but not on both simultaneously [7]. With the use of camera input in a MAR application, both the interface and the world around the user can be viewed at the same distance on the phone's display, making the experience different from using HUD interfaces in aircraft. This result is further discussed in section 5.5.

5.3 Usability

The results of the SUS questionnaire suggest that the application's interface imposed a low cognitive load and did not interfere with the participants' ability to navigate.

There might be several reasons why the application used in this study received a high usability score. AR is a multimodal interface that uses and simulates how we naturally see things. Since the interface is natural, it was assumed that it would receive a high usability score [9].

A second reason for the high usability score might be that the application had few features and a minimalistic interface. Stripping the application of features that were not AR was intentional, so that those features would not interfere with the results of the study.

A third reason for the high usability score is the context in which the application was used. Four participants stated that they would use AR navigation for short distances but regular map navigation for longer distances. The paths in the study were approximately 300 meters; if the participants had tested the prototype over longer distances, it might have had an impact on the usability score. As brought up in section 5.1, the cognitive load put on the participants of this study might be low relative to other studies, which might have been reflected in the usability score for the application.

5.4 Methodology Criticism

This section brings up areas of the method that could affect the results of the thesis.

As brought up earlier, the study had a low number of participants and thus the results should be considered as trends.

When conducting a study outdoors, it is impossible to enforce identical conditions for all participants because of disturbing factors such as weather conditions, traffic and noise [15]. As mentioned in section 3.2, this study was conducted at several different locations; each route was carefully selected to have conditions as similar as possible to the others.

To measure attention, one would typically think of using mobile eye trackers. Since a mobile eye tracker was not available for this study, data for attention relies on results for route learning and answers from the semi structured interviews.

The small number of distractors used in this study was the direct result of a pilot test in which it was discovered that using too many distractions caused the participants to see a pattern and start looking for more distractions during navigation. The consequence of using few distractors was that it was hard to compare the performance of the reference group to the application user group in detail for route learning. An alternative solution would have been to use a variety of distractors that differ from each other, making it less obvious to the participant what is being tested. Another solution would be to time the participants during navigation so that there would be less potential focus on the distractors.

5.5 Future Work

Attention when using AR should be further researched. The results of this study show that there is no significant difference in route learning or distraction recognition when navigating using AR guidance, while other studies have shown that AR guidance affects attention negatively [6, 7, 8].

The results of this study only apply to the recognition of static objects; examples of such objects are stairs or curbstones. An area that could be further researched is user attention and reaction time related to moving objects such as cars or cyclists.

The effect on route learning when using AR navigational guidance under high cognitive load should also be studied further, as the results from this study differ from the results of other studies where the cognitive load might have been higher. Even if the cognitive load of a MAR application itself is low, it should be evaluated in a context of use where the cognitive load is high.

Another area that could be further explored is how to design AR applications so that they feel safe to users. The results for route learning in this study suggest that the application users were navigating safely. However, the results of the semi-structured interviews show that several users had concerns about safety when navigating with the application and felt that they did not know what was going on around them. For AR to be a suitable interface for personal computing, users should be safe and feel safe when using it.

6 CONCLUSIONS

The results of this study indicate that using a MAR application for navigational guidance does not affect route learning and thus does not interfere with the user's navigational ability. This means that users can navigate safely while using the application for navigational guidance. However, the results of the semi-structured interviews suggest that the users had concerns about attention and safety during navigation.

The hypothesis for this study was that the use of MAR guidance for navigation would have a negative impact on route learning. The results of the study did not support the hypothesis. The results of this study are generalizable to MAR navigation in environments with a cognitive load similar to the routes used in the study. The most promising future for AR is in personal computing, where it would serve as a personal assistant [4]. To get there, attention issues related to the use of the technology must be further researched for the safety of users.

ACKNOWLEDGEMENTS

I would like to thank my supervisor at KTH, Christopher Peters, for providing me with continuous feedback and help, and my supervisor at The Mobile Life, Dan Isacson, for helping me establish a subject to explore. Lastly, thank you to everyone who participated in the user study.


REFERENCES

[1] Azuma. 1997. A survey of augmented reality. Presence: Teleoperators and Virtual Environments, volume 6, pp. 355-385.

[2] Milgram, Paul & Kishino, Fumio. 1994. A Taxonomy of Mixed Reality Visual Displays. IEICE Transactions on Information and Systems, volume E77-D, pp. 1321-1329.

[3] Peddie, J. 2017. Historical Overview. In: Augmented Reality. Springer, Cham, pp. 59-86.

[4] van Krevelen, D.W.F., Poelman, R. 2010. A Survey of Augmented Reality Technologies, Applications and Limitations. The International Journal of Virtual Reality, volume 9, pp. 1-20.

[5] van der Ham, Faber, Venselaar, van Kreveld, Löffler. 2015. Ecological validity of virtual environments to assess human navigation ability. Frontiers in Psychology, 6:637.

[6] Dixon, Daly, Chan et al. 2013. Surgeons blinded by enhanced navigation: the effect of augmented reality on attention. Surgical Endoscopy, volume 27, pp. 454-461.

[7] McCann, Foyle, Johnston. 1993. Attentional limitations with head-up displays. Proceedings of the Seventh International Symposium on Aviation Psychology, pp. 70-75.

[8] Fischer, Haines, Price. 1980. Cognitive Issues in Head-up Displays. NASA-TP-1711.

[9] Turk. 2014. Multimodal interaction: A review. Pattern Recognition Letters, volume 36, pp. 189-195.

[10] Reeves, Lai, Larson, Oviatt, Balaji, Buisine, Collings, Cohen, Kraal, Martin, McTear, Raman, Stanney, Su, Wang. 2004. Guidelines for multimodal user interface design. Communications of the ACM, volume 47, pp. 57-59.

[11] Durrant-Whyte, Bailey. 2006. Simultaneous localization and mapping: part I. IEEE Robotics & Automation Magazine, volume 13, pp. 99-110.

[12] Nielsen. 1993. Usability Engineering. Elsevier, pp. 23-48.

[13] Nielsen. 1994. Usability Inspection Methods. In: Conference Companion on Human Factors in Computing Systems, ACM, pp. 413-414.

[14] Brooke. 1996. SUS - A quick and dirty usability scale. In: Usability Evaluation in Industry, pp. 189-194.

[15] Gillner, Mallot. 1998. Navigation and acquisition of spatial knowledge in a virtual maze. Journal of Cognitive Neuroscience, volume 10, pp. 445-463.

[16] Underwood. 1976. Attention and Memory. Pergamon, pp. 169-207.

[17] Haji, Rojas, Childs, de Ribaupierre, Dubrowski. 2015. Measuring cognitive load: performance, mental effort and simulation task complexity. Medical Education, volume 49, pp. 815-827.

[18] El Sayed, Zayed, Sharawy. 2011. ARSC: Augmented reality student card - an augmented reality solution for the education field. Computers & Education, volume 56, pp. 1045-1061.

[19] Coutrot, Schmidt, Pittman, Hong, Wiener, Hölscher, Dalton, Hornberger, Spiers. 2019. Virtual navigation tested on a mobile app is predictive of real-world wayfinding navigation performance. PLoS ONE, 14(3).


www.kth.se

TRITA-EECS-EX-2019:455
