
Making use of the environmental space in

augmented reality

Jesper Sjöberg

January 10, 2019

Master's Thesis in Interaction Technology and Design, 30 credits
Supervisors: Arvid Bräne, Håkan Gulliksson

Examiner: Thomas Mejtoft

Umeå University

Department of Applied Physics and Electronics
SE-901 87 UMEÅ

SWEDEN


Abstract

Augmented reality (AR) is constantly moving forward and pushing its boundaries. New applications and frameworks for mobile devices are developing rapidly, and head-mounted displays are evolving and making an impact on industries and people. In this thesis, we evaluate the concept of how to make use of the environmental space in augmented reality. Within the environmental space, we focus on secondary elements: elements and objects that are not in the focus of the user. Augmented reality on both smartphones and head-mounted displays is considered. Through an evaluation conducted with four participants during a week, we find use cases and scenarios where this type of concept could be used and where it can be applied. The results of this thesis show where and how a concept such as this can be used.


Contents

1 Introduction
1.1 Objective
1.2 Limitations

2 Background
2.1 North Kingdom
2.2 Not the Real Reality
2.2.1 Physical Environment
2.2.2 Augmented Reality
2.2.3 Augmented Virtuality
2.2.4 Virtual Reality
2.2.5 Mixed Reality
2.2.6 Cross Reality
2.3 Secondary elements
2.3.1 Advertisement

3 Theory
3.1 Augmented Reality
3.1.1 Mobile Augmented Reality
3.1.2 Head Mounted Displays
3.1.3 Objects
3.1.4 Information distribution
3.1.5 Environment and Audience
3.2 Secondary objects in real life
3.2.1 Targeted ads
3.2.2 Interactive
3.2.3 Augmented Advertisement
3.3 Visual Perception
3.3.1 Depth
3.3.2 Design thinking
3.4 ISO 9241-210:2010
3.4.1 UX in AR
3.5 Attention
3.5.1 Attract attention
3.5.2 Repeated exposure
3.5.3 Sensory stimuli
3.6 Human needs
3.6.1 Maslow's pyramid
3.6.2 UXellence framework
3.7 Creating Experiences
3.7.1 Designing for AR
3.7.2 Interactions for AR
3.7.3 Visuals & how to use them
3.7.4 Visual indicators
3.7.5 Advertisement design
3.8 Frameworks & Tools
3.8.1 Unity
3.8.2 Vuforia
3.8.3 AR Core
3.8.4 AR Kit
3.9 Existing AR solutions

4 Methods
4.1 Discovery phase
4.1.1 Literature study
4.1.2 Interview
4.1.3 Workshop
4.2 Implementation
4.2.1 Tools used
4.2.2 Prototype
4.3 Concept Evaluation
4.3.1 Concepts
4.3.2 Brief
4.3.3 Field exploration
4.3.4 Interview
4.3.5 Data analysis
4.3.6 What to expect from the evaluation

5 Results
5.1 Workshop
5.2 Prototype
5.2.1 Lo-fi
5.2.2 Hi-fi
5.3 Evaluation

6 Discussion
6.1 Workshop
6.2 Advertisement
6.3 Prototype
6.4 Evaluation

7 Conclusion

8 Future Work


Chapter 1

Introduction

"It consists of this pair of spectacles. While you wear them every one you meet will be marked upon the forehead with a letter indicating his or her character. The good will bear the letter 'G,' the evil the letter 'E.' The wise will be marked with a 'W' and the foolish with an 'F.' The kind will show a 'K' upon their foreheads and the cruel a letter 'C.' Thus you may determine by a single look the true natures of all those you encounter."

(L. Frank Baum, The Master Key, 1901 p.94)

The quote is from L. Frank Baum's fiction novel The Master Key [22], published in 1901. It is the first documented vision of augmented reality (AR) and gave us the first use case of what to do with it. At the time, more than 115 years ago, this was of course considered science fiction.

In 1957, a cinematographer named Morton Heilig invented the Sensorama [40, 45], a machine that is considered to be the first multimodal technology combining sound, vibration, and smell to let the user experience motion pictures in a more immersive way. In 1968, Ivan Sutherland, a computer scientist, developed the very first head-mounted display (HMD); his system was created to display simple wireframe drawings [2][62]. All of what these distinguished scientists, writers, and thinkers came up with has influenced how we use and perceive our environment today.

In the last couple of years, usage of mobile augmented reality has exploded. Pokemon Go¹, Snapchat² and IKEA Place³ have brought the technology of AR into the everyday use of smartphones.

¹ Pokemon Go: https://www.pokemongo.com/en-us/
² Snapchat: https://www.snapchat.com
³ IKEA Place: https://www.ikea.com/gb/en/customer-service/ikea-apps/

The development of new augmented reality solutions is rapidly increasing, with constant support from smartphone manufacturers. Apple released their new AR platform ARKit 2⁴ in September of 2018, and Google released their platform ARCore⁵ just before Apple, at their Google I/O conference. Both of these support new ways to implement and use AR. Two of the main things these platforms have developed are object recognition, including surface recognition, and the ability to share experiences through augmented reality.

Along with the platforms developed for AR applications, the hardware is also continually evolving, with sharper, more pixel-dense screens, faster processors and better cameras [69], all of which help improve the AR experience. In AR, objects are most often placed in the center of the screen, in other words, the center of the field of view. The rest of the environment, on the sides and behind the primary object, is most often left untouched, leaving an opportunity to explore what can be viewed and placed there.

Mobile AR is a field that is constantly pushing boundaries forward, developing and broadening its market. Major companies continually release new development kits⁶ ⁷, and new apps constantly appear in the different app stores. A traditional AR application consists of the video stream from the back or front-facing camera of a smartphone shown on its screen. On top of the video stream, a layer is added; this layer consists of one or several augmented objects for the user to interact with. These objects are most commonly put directly in the user's field of view, leaving everything behind them as it was.

Here lies an opportunity to design for and make use of screen space that has previously been left untouched. This thesis investigates and shows some of the use cases for secondary objects in AR that could help us in our everyday lives.

⁴ https://developer.apple.com/arkit/
⁵ https://developers.google.com/ar/
⁶ developer.apple.com/arkit
⁷ developer.google.com/ar


1.1 Objective

The aim of this study is to investigate and evaluate some aspects of how objects are perceived in augmented reality. The thesis focuses on the objects behind the primary object and investigates whether there is a commercial use for the unexplored space in the background of applications. The specific goals are the following:

• Investigate current AR technologies

• Investigate AR design

• Investigate basic human needs

• Evaluate where secondary objects can be used in AR

• Design a prototype based on findings throughout this thesis

These goals will aid the further investigation of the use of secondary objects and help develop new use cases for this type of information distribution. To see where there is a need for secondary objects, the concept of secondary objects is going to be evaluated together with basic human needs, in order to see where it can be helpful and aid people in their everyday lives.

1.2 Limitations

The number of ways augmented objects can be put into a natural environment is large and consists of several different approaches. Even though a significant part of this study could be applied to a broader range of AR, this thesis focuses on handheld mobile augmented reality on smartphones. This thesis is 30 credits and stretches over 20 weeks. A prototype is going to be constructed; however, the time limit leaves little or no time for an iterative design and development process.


Chapter 2

Background

This chapter introduces the company in collaboration with which this thesis was written. It also introduces the different degrees of digitally altered environments, spanning from the unaltered world to an exclusively virtual world. An overview of secondary elements and advertisement is also presented.

2.1 North Kingdom

This thesis is produced in collaboration with North Kingdom (NK). NK is a global experience design company that started in Skellefteå and has expanded to Los Angeles and Stockholm. On their website, they describe themselves as follows:

We believe that new value can be created wherever people, business, and technology collide. We help our clients harness that value through the creation of experiences, products, and services that play a meaningful role in people's lives. Through human-centered design, we make the complex simple and relatable, no matter what medium or platform.¹

NK has worked on several augmented reality projects throughout the years and has immense knowledge about the subject. With teams consisting of designers, developers, and UX designers they have the ability and capacity to help move this project forward and give useful feedback.

¹ Read more: www.northkingdom.com


2.2 Not the Real Reality

There are several different ways of digitally modifying reality, with different levels of additions to the real world. In 1994, Paul Milgram introduced the reality-virtuality continuum, which can be seen as a scale describing the level of augmentation of the real environment, starting with no added objects and travelling all the way to virtual reality, where everything is digitally exhibited, as can be seen in Figure 2.1.

Figure 2.1: Milgram's reality-virtuality continuum [41]

2.2.1 Physical Environment

The real environment is the world around us as it is. With no added layers of augmentation and no digitally added objects to our vision.

2.2.2 Augmented Reality

Augmented Reality (AR) is where the main weight and focus of this thesis lies.

There are as many diverse definitions of AR as there are areas of use. Some argue that AR is the technology of adding virtual objects to real scenes, supplying information that is missing in real life [19], while others say that AR is real-time, computationally mediated perception [12].

What they have in common is that AR is the technology that brings digital objects into the real world in order to improve, enhance and broaden the environment [49][20]. Along with all the definitions, there are several different approaches to AR, with different levels of complexity and styles of visualizing the AR content.

Projection displays are used for viewing holograms and are often called spatial augmented reality [33].


Optical see-through is the kind of augmented reality that appears with the technology of head-mounted displays (HMDs)² ³.

Video see-through uses a device's camera and screen at the same time, broadcasting the video to the screen and making it possible to add digital objects into the real environment [14][65]. Applications that have the ability to process the information shown on the screen use, on a higher level, computer vision to get an understanding of what object is viewed. This type of technology can be considered artificial intelligence (AI) that interprets objects through the camera stream [43].

2.2.3 Augmented Virtuality

Augmented Virtuality (AV) uses more senses than vision to change the reality, with the addition of sounds, smell, and coordination. IMAX⁴ has an immersive movie experience in 3D that could be considered AV [65].

2.2.4 Virtual Reality

Virtual Reality is the far opposite of the real environment. All of the displayed objects are virtual and put in a virtual world, often using a head-mounted display (HMD). There are three different levels of HMDs: low, mid and high level. Low-level headsets normally consist of a box with a removable display, usually a phone that slides in; this type has limited interactions and no controller. Mid-level HMDs still use the same mobile device as a screen, but offer more interactions thanks to a connected controller; this type of device also has a more sophisticated solution for how the screen is viewed, making it a somewhat higher-resolution device than the low-level one. High-level headsets have built-in screens.

Interactions in high-level units consist of two controllers, making interaction a more matter-of-course process [5]. VR is typically used for playing games and watching movies. Since the screen covers the entire field of view, this technology creates a more immersive experience. Along with the recreational use cases, there are also several benefits of using this technology for commercial purposes.

When designing large objects, like cars or even trucks, VR can be used to get better visibility and ergonomics [40].

2.2.5 Mixed Reality

Mixed Reality (MR) is sometimes called hybrid reality. MR spans from an almost real reality to an almost virtual reality, and everything in between can be considered a mixed reality. Whether it is a handheld device or a head-mounted display does not matter, as long as the medium falls inside the brackets of MR [41]. Mixed reality is the broadest of all of the technologies; when AR is discussed in this thesis, it could just as well be deemed MR.

² Microsoft HoloLens: https://www.microsoft.com/en-us/hololens
³ Magic Leap: https://www.magicleap.com/
⁴ IMAX: https://www.imax.com

2.2.6 Cross Reality

Cross-Reality (XR), or extended reality, is a fifth technology that combines all of the above. It uses ubiquitous sensory networks and online virtual worlds to create more immersive experiences in the real world [47]. XR has the ability to map any environment and either make it a digital one or place augmented objects into that environment [4]. XR can be seen as a term that collects AR, VR, and MR together to make things less confusing.

2.3 Secondary elements

Secondary elements are the things we have around us that are not in the focus of our current field of view. They can come in all shapes and sizes, like text, 2D images, 3D objects or video. The key part is that they are secondary to something primary.

One way to describe and research this is to look at how advertisement works. When we walk outside or take a subway ride, the ads are always there, no matter whether we look at them or not. This concept could be translated into augmented reality and researched, to see what possibilities lie in this field.

Ordinary advertisement design and placement today might be able to adapt to new technology and be adjusted for augmented reality.

2.3.1 Advertisement

An advertisement, or an ad, can appear in many different ways and formats: on your phone, in a newspaper or on TV. Around 4000 BC, in India, a rock drawing was made for advertising purposes [6]. The first ever printed ad was made in 1468 by William Caxton, who wanted to promote his new book. Two hundred years later the word advertisement was introduced, and this new form of selling medium kept evolving. Ads took over billboards and newspapers. In 1941 the first TV ad aired. Today, some of the biggest corporations are funded by advertisements [59].

The purpose of ads is still the same: getting a message out to a broader audience. Even if technology has changed the way we consume ads, we still use them the same way as William Caxton did when he printed his ad.


Traditional advertisement is what one can find in magazines, newspapers and on television. This type of ad is a one-to-many communication and does not consider the recipient at a personal level. Target groups are strictly aimed at by presumptions and by physical location. Local newspapers can target their ads to the part of the country where the paper is distributed, and magazines can aim ads towards their target group.

In 1994 the first banner ad on a computer was published by Wired magazine [39] (and it had a click-through rate of 44%). With social media, new ways of making ads on the internet emerged. Corporations with substantial customer databases realized the vast perks of knowing a user's personal information when creating and targeting ads. Instead of wasting time and money promoting a product or service the old way, ads can be tailor-made.

Commercials in MAR applications are a bit more complicated and do not really exist in the same way. They are adjusted for mobile devices, but they are still in classic video mode. This interferes with the user's experience, since they get kicked out of their AR experience until the ad has stopped playing. This type of experience in AR is strictly against the design guidelines from the companies that provide the app stores. See more in section 2.4.1.


Chapter 3

Theory

This chapter introduces the literature study made for this thesis. It offers a deeper understanding of AR technologies and advertisement. Furthermore, it gives an introduction to how design, and especially user experience design, is used in the creation of MAR applications.

3.1 Augmented Reality

Augmented reality is the real environment with an additional layer of information [3]. Different technologies offer various levels of interaction and information. This augmented layer is put on top of the environment viewed through the camera and is meant to enhance the real world with relevant text, images or 3D objects [3]. Applications that use AR can serve educational, recreational or medical purposes, among others.

3.1.1 Mobile Augmented Reality

There are two types of mobile augmented reality (MAR) devices that are going to be considered in this thesis. This section is about MAR on phones and tablets, which will be the main focus; section 3.1.2 gives an introduction to head-mounted displays. MAR is the handheld version of AR, which uses a smartphone or a tablet to alter and enhance the surroundings.

This type of device uses the built-in screen and the built-in back camera, live streaming the camera feed to the screen and then adding a layer on top of it. This layer is what turns the reality into an augmented reality [64]. The augmented layer is typically text or an object placed in the focus area of the screen.

This layer can contain anything that is augmented: in the Pokemon Go application there is a monster that the user tries to catch, and in Snapchat there are layers that augment a user's face or surroundings.

Mobile augmented reality has been around for some time, with its primary focus on recreational, social and game-centered applications. More use cases are continuously being developed, and the development of functional and helpful applications is on the rise. Tape-measure applications, GPS applications, and aids for educational purposes are some of the new areas of use. In 2018, two of the major smartphone companies released new frameworks for developing AR experiences [1][27]. These contain software for easier detection and scanning of the environment, making it possible for the user to map their surrounding environment and place augmented objects with greater precision than before.
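As a minimal illustration of what these frameworks handle, the following sketch, assuming ARKit and SceneKit on iOS (it is not the prototype built later in this thesis, and the class name, geometry and dimensions are illustrative assumptions), streams the camera feed, detects horizontal surfaces and anchors a simple cube where the user taps a detected plane:

```swift
import UIKit
import ARKit
import SceneKit

// Minimal sketch: camera stream with an augmented layer, plane detection,
// and tap-to-place on a detected surface. Illustrative only.
class MinimalARViewController: UIViewController {

    private let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)

        // The "augmented layer": an (initially empty) SceneKit scene rendered
        // on top of the live camera feed that ARSCNView provides.
        sceneView.scene = SCNScene()

        let tap = UITapGestureRecognizer(target: self, action: #selector(placeObject(_:)))
        sceneView.addGestureRecognizer(tap)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking with horizontal plane detection, i.e. the kind of
        // surface recognition the 2018 frameworks introduced.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        sceneView.session.run(configuration)
    }

    @objc private func placeObject(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        // Hit test from the tapped screen point against detected planes.
        guard let hit = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first else { return }

        // A 10 cm cube stands in for the augmented object.
        let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0.005)
        let node = SCNNode(geometry: box)
        node.simdTransform = hit.worldTransform
        node.simdPosition.y += 0.05        // rest on top of the surface
        sceneView.scene.rootNode.addChildNode(node)
    }
}
```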

3.1.2 Head Mounted Displays

Head-mounted displays follow the same idea as MAR, but instead of using a phone with its screen and camera, HMDs are built like a pair of glasses that use a see-through screen instead of regular glass, on which an augmented layer can be added. In this case, the screen always covers the user's field of view. This type of device can be used for educational and other functional purposes [38].

Nadine Hachach-Haram discusses in her TED talk [32] how AR could change the future of surgery. She brings up the problem of how few educated specialist surgeons there are in the third world. She says that AR could be an aid for regular doctors, letting them get instructions and help from more educated and experienced surgeons living in other parts of the world. AR could be a tool for sharing information and knowledge, which in this case could help save lives.

In medical schools, this type of tool could be used to study human anatomy or learn more complex tasks and get better visual feedback on abstract subjects.

The aircraft company Boeing has started using HMD AR in their factories. This resulted in an up to 90% faster learning curve when workers were exposed to new tasks and up to 30% reduced overall time on the assembly line [7]. When an engineer in a factory uses this kind of product, they have both hands free for the task that needs to be performed and can get instructions for the task in real time.

3.1.3 Objects

When discussing objects in AR, there is a wide possible range of them, from text or images to video or 3D objects. Application navigation and UI are often a part of the augmented environment. When we discuss objects in this thesis, we refer to something that holds and is able to share information, or in some way enhances the environment. The main concept is that when a digital object is added to the real world, we consider it to be augmented reality. Face tracking algorithms and frameworks developed for detecting surfaces and planes are getting more accurate and faster [1], which opens up the AR world for new opportunities.

Figure 3.1: Layers of augmentation

Primary

For phone MAR there are essentially two directions to go. The first is the type of design implementation that applications like Pokemon Go use: the forward-facing camera streams to the screen and objects appear in front of the user. As seen in figure 3.1, a layer containing the digitally added objects is put on top of the environment. The second is using the other camera on the phone, facing the user, and then adding layers or artifacts around a human, much like the filters used on various social media platforms. Both of these are examples of how primary objects can be used in AR. The level of interactivity and the number of objects depend on the application in question. Design guidelines, as can be seen in section 3.7, suggest that user interfaces should be a part of the augmented world, where a 3D object can be both interactive and the interface of the application. When placing objects in the augmented environment, the customary suggestion is that the interactive object should be placed in the center of the user's field of view, making it the most obvious thing to interact with.
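A sketch of this customary centered placement, again assuming ARKit/SceneKit (the helper name, geometry and the half-metre distance are assumptions of this example, not part of the thesis prototype):

```swift
import SceneKit
import ARKit

// Sketch: place a primary object straight ahead of the camera, in the
// centre of the user's field of view. Distance and geometry are arbitrary.
func placePrimaryObject(in sceneView: ARSCNView) {
    guard let cameraNode = sceneView.pointOfView else { return }

    let sphere = SCNNode(geometry: SCNSphere(radius: 0.05))
    sphere.position = SCNVector3(0, 0, -0.5)   // half a metre in front of the camera

    // Parenting to the camera keeps it centred while the user moves; adding it
    // to scene.rootNode (in world coordinates) would instead pin it in place.
    cameraNode.addChildNode(sphere)
}
```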


Secondary

Objects in AR are in most cases primary and are put in the center of the screen, gaining as much attention as possible [28]. Other objects that are not in the center of the screen, or are complementary to the primary object, are considered to be secondary. This type of object is the same as a secondary object in the real environment. On a desk there is a computer, which is the primary object; beside it, there is a coffee cup. The coffee cup, in this case, is what we consider to be a secondary object, placed horizontally next to the computer.

There is another possibility, where an element is placed behind the primary object. This works the same way, with the element being in the distance instead of to the side. In digitally altered environments, the visual hierarchy can help determine which object we are supposed to look at and which is the complementary object.

There are a few basic things to consider. The larger an element appears, the more attention it will attract. Bright colors and high contrasts can be of help when emphasizing an element. In the absence of a visual hierarchy, users tend to fall back on a predictable reading path, left to right. Here lies the opportunity to break these standard ways of perceiving content and make users see what is essential in the order the designer wants.
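To make the idea concrete, here is a sketch that hangs a smaller, poster-like element beside and behind a given primary object (assuming the same ARKit/SceneKit setup; the offsets, sizes and billboard constraint are illustrative choices, not values from the thesis prototype):

```swift
import UIKit
import SceneKit

// Sketch: attach a smaller secondary element beside and behind a primary
// augmented object, so it never competes for the centre of the view.
func addSecondaryElement(to primary: SCNNode) {
    let poster = SCNPlane(width: 0.3, height: 0.2)        // a flat, poster-like element
    poster.firstMaterial?.diffuse.contents = UIColor.white
    let secondary = SCNNode(geometry: poster)

    // Offset relative to the primary object: 0.4 m to its right and 0.5 m
    // further back (assuming the primary object faces the viewer).
    secondary.position = SCNVector3(0.4, 0.0, -0.5)
    secondary.constraints = [SCNBillboardConstraint()]    // keep it facing the camera
    primary.addChildNode(secondary)
}
```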

3.1.4 Information distribution

Augmented reality is frequently used as an information distribution system, with the information being e.g. text, images or 3D objects. Since AR design guidelines propose contextual, volumetric interfaces, AR is a powerful tool for educating and sharing information. Schools around the world are adopting the new technology to offer a more immersive learning experience. This type of learning has been proven useful for understanding more complex and visual tasks, like introducing the solar system for the first time in school. Instead of reading about it, kids can walk around and get a better understanding of what it is [67]. Youth with concentration disabilities also benefit from AR information, since it is more immersive and easier to interact with.

There are some challenges for AR as an information distributor. The channel must be adjusted to, and made specific for, the targeted audience in each and every case.

Audience characteristics, physical context, and audience engagement are critical concepts for AR as an information distributor [48].

3.1.5 Environment and Audience

Physical context, or environment, is a big part of how information is presented. This means that the information needs to be adjusted based on the current location and altered depending on how and where the device is being used. However, there is one major issue with the environmental question when designing, building and testing AR applications. The responsiveness of traditional applications stretches between the different available screen sizes; in AR, this issue instead depends on where the user deploys the application and what environment he or she is in.

AR applications can be used at all places where there is an internet connection.

For the purpose of this thesis, we want to explore the outside environment with AR. There are several methods to get users to use an application outside. Gamification of ordinary tasks, such as working out or exercising in general, is a proven way to get users to go outside. Information-targeted activities, such as visiting historical places where there is a benefit to knowing more about the location, are also a documented use case with positive feedback. Furthermore, there is likewise a beneficial use of AR for locating and finding one's way in cities: instead of looking at a flat 2D map, an AR application can give directions in the streets.

For example, there is an application that helps with finding the right directions. Instead of just the traditional GPS interface, there is a little character visible in AR that the user can follow, helping them get to the right location [48]. New technology always takes time to reach the broad mass of users. The ones that adopt new technology, products, and concepts the fastest are traditionally those born into the digitalization age, between 1982 and 2000 [21]. Since they often use new technologies on a regular basis, they are the targeted group for the user studies, research and prototype testing in this thesis.

3.2 Secondary objects in real life

One way to think about secondary elements in real life, and how they are built, is to think about how the advertisement industry works. Advertisers have to make people notice their products and remember them in order to sell them. Therefore, some research was conducted on the topic of advertisement design, how advertisement works and how it is perceived. Since our smartphones contain so much more information about us and our surroundings, this is an important source of inspiration for how secondary objects work in real life.

The origin of the advertisement industry was a one-way mass communication medium, as in newspapers or on television [60]. This concept is starting to convert into a more personalized approach, with a focused audience for each ad [61].

As internet usage increases, the consumption of online content increases with it, and advertising markets develop from the traditional format to a more personalized and interactive version on social media platforms [21].


3.2.1 Targeted ads

Digital advertisement keeps evolving. Ads that once were a one-to-many communication are now one-to-one. This is all because social media platforms have information about who a user is and what that person is interested in [18]. This type of ad targets specific groups sorted by age, gender, demographics or simply by which website a user visits.

There are more complex advertising strategies as well, which consider a contextual perspective and only target certain groups with certain interests. Such information is collected from social media platforms and search engines, and the data is based on pages liked and interactions with others. Geotargeting, on the other hand, targets an audience from a location-based perspective.

Studies have shown that brand-personality-related content is associated with higher engagement from the end viewer [16]. Advertisements that create emotional or humorous content have a higher click-through rate than ads that do not [16].

3.2.2 Interactive

Interactive advertisement has been a part of the internet experience since the first clickable banner ad, more than twenty years ago. The level of interaction then was only a click that sent a user to a new page containing more information [54]. With the digitalization era, the opportunities for interactive ads increased, and several approaches are being used every day. The ability to introduce a storytelling perspective on ads created more customizable content, leading to better experiences and more clicks. Immediate feedback and custom content help in targeting and altering ads on the spot. One important thing in interactive ads is the feedback to the user: there has to be a level of involvement from both sides for this type of communication to work [61].

3.2.3 Augmented Advertisement

AR ads in the past have consisted of either AR applications intended to attract customers through new and immersive technology, or installations outside, at a bus stop for example. The first approach could be considered a product of the targeted brand rather than an ad. Since a user needs to download and install an application, there has to be a desire to use that kind of product; instead of viewing the product as an ad, we perceive it as an application or a gadget [46]. This means that fewer people see it, but the ones that do interact with it tend to spend more time with the product in question. When installing an augmented advertisement campaign outside, in a city environment, the camera part of the augmented application can be troubling. Many countries have policies on using stationary cameras outside, and this can be a problem for augmented reality campaigns.

Admented Reality

Google Glass was a test project that tried to augment the environment through an HMD. The device was the size of regular glasses and was supposed to be worn and used as a smartphone [58]. When Google Glass¹ was starting to be developed, some people saw new use cases; the opportunity to have augmented reality at all times was intriguing. When augmenting the everyday environment, all of the apps built today could be integrated, and the quote from L. Frank Baum could become a reality along with all of the available information. Along with the practical information distribution, containing useful data about the surrounding environment, the term admented reality was coined. This was about using Google ads in Google Glass, making it possible to push personalized, location-based ads into someone's augmented reality, in the context where the user is at the moment. For example, ads about discounts could be shown in or near the store in question. Admented reality was never introduced to any market, mainly because HMDs are too expensive for the public market and there was therefore no reason to develop this type of product. However, the concept holds a lot of useful information and can be of help when designing for smartphone AR.

The admented reality concept is highly influenced by Google's web design, using cards with opacity as the augmented layer². Just like on Google's website, ads appear when an item is searched for.

¹ Read more: google.com/glass
² Google design website

3.3 Visual Perception

Visual perception is much discussed in the psychology community. In 1966, James Gibson discussed this subject and argued that perception is not about figuring out what something is or does; there is no need to test a hypothesis about an object, but rather objects contain enough information to tell us what they are and how to interact with them. He claims that perception can be explained exclusively in terms of the environment [55, 24].

Richard Gregory presented his theory in 1970 and believes that perception is a constructive process which relies on top-down processing [29]. He argues that stimulus-triggered signals are processed by our brain, which needs previous knowledge of what it is looking at; we can then make inferences to decide what object we are seeing [55][29]. Gregory argues that we test hypotheses and that incorrect formation of these will result in perceptual errors or visual illusions [30].

Figure 3.2: A Necker cube

A lack of previous knowledge about an object, or something not looking like what someone is used to, tends to create illusions. The Necker cube, as seen in Figure 3.2, is a well-known optical illusion created by Louis Albert Necker in 1832. It is a three-dimensional cube with no visual cues, an ambiguous wire-frame, meaning that there are several right answers to what it is. When looking at it, both the bottom-right and the top-left rectangle can be considered to be the front of the cube [37].

How we perceive things and objects is as different as we are as humans. What one person sees or hears can appear different to someone else. Perception is the phenomenon of how we identify and interpret what we see, hear and smell [55][56].

3.3.1 Depth

In a real environment, we experience and perceive everything around us in three dimensions. Our binocular vision, our two eyes, gives us the ability to perceive depth and distance to objects.

On screens we perceive depth differently. Since the screen is static and does not move, visual cues need to be added to make objects appear at a depth of field.

Pictorial depth cues are how depth can be added to flat images. Interposition is to place objects that are further away behind nearer objects; in that way they will appear to be in the distance. Linear perspective is the use of object contours and of parallel lines converging towards the horizon. Aerial cues can indicate that an object is far away through loss of detail and clarity. Relative brightness cues help us understand where the source of light is; using shadows and loss of light in the distance can be helpful for depth of field [17].

Kinetic depth cues help provide a sense of depth when the viewpoint is changing. Motion perspective cues relate to experienced speed at different distances: cars near you appear to go faster than cars in the distance. Relative motion cues are like motion perspective cues but concern how fixed objects in the distance are perceived [17].
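As a small illustration of how two of these pictorial cues can be reproduced on a flat screen, here is a sketch using SceneKit (the fog distances, colours and light angle are arbitrary assumptions for the example):

```swift
import UIKit
import SceneKit

// Sketch: distance fog approximates the aerial cue (loss of detail and clarity
// with distance); a single directional light with shadows gives relative
// brightness cues. Values are arbitrary.
func addDepthCues(to scene: SCNScene) {
    // Aerial cue: geometry gradually fades towards a light background colour.
    scene.fogStartDistance = 1.0
    scene.fogEndDistance = 6.0
    scene.fogColor = UIColor(white: 0.9, alpha: 1.0)

    // Relative brightness cue: one light source casting shadows.
    let light = SCNLight()
    light.type = .directional
    light.castsShadow = true

    let lightNode = SCNNode()
    lightNode.light = light
    lightNode.eulerAngles = SCNVector3(-Float.pi / 3, 0, 0)   // angled downwards
    scene.rootNode.addChildNode(lightNode)
}
```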

3.3.2 Design thinking

Cognitive psychology has always been a big part of designing. In 1956, Miller introduced the magical number seven, saying that our information limit is 7 (±2); with that, he meant that we can only keep about seven pieces of information in our head without any memory techniques. Chomsky said in 1959 that languages are a set of rules rather than verbal behaviours [42]. The work made by these cognitive psychology scientists has laid a foundation for design thinking and designing for humans. Their research gives a designer the capability to predict a user's behavior and from that design a better product.

Gestalt principles

With its background in 1920s psychology, the gestalt principles are among the most commonly used psychological concepts in design [35]. There are seven principles describing different ways our visual perception organizes what we see; for example, the principle of similarity states that we tend to group similar objects together [63].

Affordance

The distinguished perceptual psychologist James J. Gibson introduced affordance in 1977, describing it as a relationship between man and thing: an object gives away clues about what to do with it. A steering wheel wants to be turned, and a button wants to be pushed [34].

3.4 ISO 9241-210:2010

According to the International Organization for Standardization (ISO), user experience is "a person's insight and response that is an outcome of usage or predicted usage of a system, product or service" [36]. UX is the process of designing products or services with the user in focus, including design, branding, usability, and function [44].


This means that UX design is to design something that predicts a user's behavior, making it easier and more intuitive to use, instead of just designing something that is visually pleasing. UX design considers the entire flow of a product, all the way from an easy onboarding, making it simple for first-time users to find what they are looking for, to making navigation and design easy and intuitive.

3.4.1 UX in AR

User experience design for AR applications is a relatively new subject in user-centered design thinking. Designers can no longer anticipate what environment the user is in. As discussed by the authors Amir Dirin and Teemu H. Laine (2018), there are eight challenges to consider for UX in MAR applications:

Physical challenges: there is often a higher level of interaction in MAR applications, as users need to use their entire body to look around and move, instead of traditional tap, swipe and pinch gestures. Since this is a newer technology, users often have mental restraints, and, on the note of new technology, MAR still does not have rapid prototyping tools. There are also hardware, or technical, challenges: smartphones still lack the battery power or processing capacity to run MAR applications over time. User interface (UI) design often consists of familiar metaphors and design heuristics, which might not work when presented in other contexts. Development challenges exist in that there are still some difficulties in developing the apps. Timing is another challenge: users get frustrated when AR objects take too much time. When it comes to UX design for MAR, designers need to adjust their ordinary design thinking, based in 2D applications, and instead iterate through a 3D design process [15].

3.5 Attention

In everyday environments, our field of view is crowded with objects and information, but we only have the capability to focus on a limited number of objects at a given time [9].


Figure 3.3: Broadbent's tube of attention

What we focus on is where our attention lies. This cognitive process gathers all information, winnows it and selects what is in focus and what is ignored. Subconsciously we sort out what is interesting and relevant for us [ref gibson], and we can then decide to focus on what we believe is most important. Broadbent explained this process using a Y-shaped tube, as can be seen in figure 3.3. The balls represent stimuli, and the lock is where we process something. Balls can come from different directions and are processed depending on several things, like the importance of the stimuli or the velocity with which they run down the pipe. If the pipe is not at an even angle, balls from one direction are going to enter faster; with this Broadbent means that we are more observant of certain objects. If two balls are dropped into the pipes simultaneously, there will be a jam and a cognitive distraction. However, if the balls are dropped asymmetrically, one by one, we process the stimuli effectively and can transport the gathered information to our short-term memory [10].

Cognitive load is a term used in the cognitive sciences. It refers to the effort of tasks performed in the brain. We can take this into consideration when designing, making tasks and products more comfortable to use and easier to understand. At the far edge of this concept we have information overload, which occurs when the brain cannot process everything that is being perceived [57].

The type of cognitive load that is relevant for this thesis is the extraneous cognitive load. This is the load generated by outside effects and information.


3.5.1 Attract attention

Ads use several techniques to get our attention aimed towards their product or service. On television, there can be a slightly louder volume during the commercials, or someone says something that we notice and that stands out in the crowd of sounds. We all tend to react more to products that stand out from the crowd and are somewhat different [52]. In outside environments, the placement of ads is important. Since there is no way to force visual attention towards an advertisement there, the placement needs to be appropriate and adjusted to the environment. The design choices also need to stand out in order to get our attention. This can be done in many ways: with contrast, colors or even the message sent by the advertiser. We all react differently to different ads, and that is a challenge when designing them. What some people notice and are interested by might not even be seen by others. This means that ads outside, which can be targeted to a location but not on an individual basis, might only get attention from some people.

3.5.2 Repeated exposure

Mere repeated exposure is a highly debated phenomenon that occurs when someone is repeatedly exposed to a stimulus. The theory says that the person will, in the long term, prefer this type of stimulus to other, similar ones. This works on many different levels, all the way from chicken eggs being played certain music and the chicks then enjoying it when they hatch, to humans being subconsciously exposed to something [68].

3.5.3 Sensory stimuli

Peripheral vision is what we notice on the sides of what we are looking at [11]. This stems from when humans lived outside and had the need to be aware of what happened around them, looking out for predators and other dangers. Today we do not have the same need for this, but we still react to what is in our peripheral field of view. Since our brain tends to add information that our vision has missed, we tend to see the wrong things here. For example, imagine someone riding their bike home from a friend through a forest after watching a horror movie. A cut-down tree of approximately the same height as a man can, in the peripheral vision, be seen as a man, often in a frightening way. This is because we have information about an object of a specific height and our brain fills in the rest in order for it to make sense. It can also help us react faster and escape from dangerous things or situations: since our brain works faster than the time it takes to move our head and see that it is a tree, we can avoid the potential danger. This is of course not as relevant as it was thousands of years ago, but it is still a major part of our attention span and our field of view. Peripheral vision can be, and is being, used within advertising in outside environments: designs that make us notice what is around us, implicitly making us focus on and perceive the ad.

3.6 Human needs

Our human needs are what define us and create a big part of our personalities. They are what drive us and make us take certain decisions in life. Some people value personal success and self-fulfillment highly and will therefore make their life choices accordingly. Some people value friends and family the highest and will spend more time with them, in that sense making life decisions based on what is most important to them.

3.6.1 Maslow's pyramid

When discussing human needs and candidate psychological needs, a great place to start is Maslow's hierarchy of needs. Maslow has divided five categories into three levels. He believes that you need to work from the bottom of the pyramid and up, meaning that the basic needs have to be fulfilled before we can start to work ourselves up the ladder, achieving more in relationships and work.

The first, bottom layer of the pyramid is what he calls the basic needs, and it consists of two levels. The most fundamental one, the lowest and biggest part of the pyramid, is the physiological needs; here we find food, water, warmth and rest, fundamental things that are never going to change. Above that are the safety needs. The next part, the middle of the pyramid, is the psychological needs, and this section is divided into two levels. First, there are the belongingness and love needs: intimate relationships and friends. After that, we have the esteem needs, which refer to prestige and the feeling of accomplishment. The top of the pyramid is the self-fulfillment needs, which is about achieving one's full potential.

Maslow argues that our needs are built from the bottom of the pyramid and up, which means that we need to have a solid first layer, a foundation, before we can start with the next layer. Our physiological needs, like eating and having a place to rest, need to be met before we can start working on the next level. The top layer of the pyramid, self-fulfillment, can only be achieved when we have the two underlying layers as a foundation. This means that we need to start building our lives from the bottom of the pyramid, and only when we have the bottom layer can we start working ourselves upwards.

3.6.2 UXellence framework

The UXellence framework was developed by Nora Fronemann and Matthias Peissner, and it makes use of how human needs connect to good user experience. They present a method for user-driven innovation and the possibility to explore concepts through user testing. Their work assumes that a positive UX can be created from basic human needs and that users should be included early in the exploration phase in order to generate use cases and issues with the concept. The experience of an application is more than the splash screen and the navigation. By using this framework, concepts can be evaluated against how people think and act but, first of all, against what is most important to them. The method is built in five stages: briefing, field exploration, user interviews, data analysis, and expert evaluation [23].

Basic needs in the UXellence framework, together with the corresponding basic needs according to Sheldon and Reiss:

Security
- Needing structure, the absence of danger and the independence of outer circumstances
- Security (Sheldon), Tranquility (Reiss), Order (Reiss)

Keeping the meaningful
- Collecting meaningful things
- Saving (Reiss)

Self-expression
- Developing one's own character and showing it to others (including: idealism)
- Self-actualization-meaning (Sheldon), Idealism (Reiss), Independence (Reiss)

Relatedness
- Feeling close to the ones who are important to someone
- Relatedness (Sheldon), Social Contact (Reiss), Family (Reiss), Romance (Reiss)

Popularity
- Being popular and appreciated by others (including: altruism)
- Popularity-influence (Sheldon), Status (Reiss), Acceptance (Reiss), Honor (Reiss)

Competition
- Being better than others
- Status (Reiss)

Physical health
- Supporting one's own well-being
- Physical thriving (Sheldon), Eating (Reiss), Physical exercise (Reiss)

Competence
- Feeling able to master challenges (including: autonomy)
- Competence (Sheldon), Autonomy (Reiss), Power (Reiss)

Influence
- Achieving something in my environment with others
- Popularity-influence (Sheldon), Power (Reiss)

Stimulation
- Curiosity and exploring new things
- Pleasure (Sheldon), Vengeance (Reiss)


The evaluation phase that the UXellence framework uses is based on the work in a research paper by Sheldon [53] and one by Reiss [50]. Both of them discuss candidate psychological needs and try to determine which ones are the most fundamental for humans and what drives us. Sheldon refers to them as needs, while Reiss refers to them as desires. Their work lays the foundation of the framework, and their combined needs as used within the UXellence framework can be seen above.

3.7 Creating Experiences

Compared to traditional 2D applications, the format of three dimensions and AR is fairly new. This means that there is a longer onboarding process and another way of thinking when designing applications and experiences. The way we think of an application needs to be expanded to the environment, and UIs need to be more adapted to the specific application [25].

3.7.1 Designing for AR

Designing for augmented reality is similar, but not equal, to designing for traditional applications. In a traditional application the screen is everything there is to the application; outside of the phone, there is nothing. In AR applications the UI spans wider and consists of everything around the user. Google's design team has identified five key concepts for designing AR applications in order to make them as good as possible for the users [25].

User Environment

The responsiveness of 2D applications is set by the screen size of a device, in the sense that objects appear according to a set of constraints within the screen of the device. In AR, responsiveness covers the entire surrounding environment where an application is being used. This means that there is no way for a designer to predict what environment an application is going to be used in, and they need to design accordingly. The application should early on give cues about what type of augmented objects there are and what kind of environment is needed.

When building AR applications, it is important to consider the surfaces available to a user. Surfaces are typically floors, streets, walls or tables. Applications should know what to deploy, where to deploy it and at what size. There are three levels of objects: table objects, which are sized to be viewed and interacted with on a table; life-size objects, created for indoor or outdoor use; and finally world-scale objects, where there is no limit on size or movement. Applications should also make use of and integrate augmented objects with real objects, to create a more immersive experience. In early prototype stages, it is important to throw away all the 2D application stencils and sketch environments instead of screens and buttons. When the environments are set, add full-size objects into them to get a better perspective of what the experience might be.
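A sketch of how an application might encode these three levels and check a detected surface before placing something (assuming ARKit; the enum, names and size thresholds are illustrative assumptions rather than guideline values):

```swift
import ARKit

// Sketch: the three object scales described above, with a check that a
// detected plane is large enough before placement. Thresholds are arbitrary.
enum ARObjectScale {
    case tableTop    // tens of centimetres, meant for a table
    case lifeSize    // roughly 1:1 with real objects, indoors or outdoors
    case worldScale  // no practical limit on size or movement

    var minimumPlaneSide: Float {
        switch self {
        case .tableTop:   return 0.3
        case .lifeSize:   return 1.0
        case .worldScale: return 0.0   // anchored to the world rather than one plane
        }
    }
}

func canPlace(_ scale: ARObjectScale, on plane: ARPlaneAnchor) -> Bool {
    // ARPlaneAnchor.extent is the estimated size of the detected surface.
    return plane.extent.x >= scale.minimumPlaneSide
        && plane.extent.z >= scale.minimumPlaneSide
}
```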

User Movement

It is often a challenge to get new users to move around in the augmented environment. Due to habits from traditional applications and a lack of experience with AR, users tend to stay stationary. Therefore it can be beneficial to design objects that reach out of the screen or move around. Designing AR beyond the bounds of the screen can make the experience feel more immersive and more lifelike. When objects fly off the screen, a marker showing where in the environment they are can help make the user move around and explore the surroundings; depending on the application, users need to move differently. Set the expectations early in the experience so that the user can plan and know how much room they need and how much space for moving around.

Onboarding

When initializing users into AR, standard AR icons should be used so that the user understands that there are opportunities to augment objects. Icons that represent the type of augmentation can be useful, to let the user know what kind of environment is preferred. Animations are helpful when telling a user to move the device in order to create depth in the viewport. When introducing new objects that a user has the ability to place in their environment, icons should show whether it is a stationary object or an object that is supposed to move around. For example, if it is a board game, the object should stay stationary even when interacted with; if it is a chair, it should be able to move around.

Object Interaction

Object interaction is one of the most important parts of designing for AR. It makes the difference between poorly placed and integrated objects and well-made experiences. When objects collide in AR, usually nothing happens; the object collided with disappears or acts unnaturally. Instead of just removing an object, it is more beneficial to add a filter or an effect on the screen, letting the user know that the object is still there, only collided with. When placing an object, both the object and the surface should give some sort of visual feedback to the user, communicating where the object is and where it is going to land when dropped or placed. Once placed, an object should have intuitive interactions, like dragging, pinching or twisting. The issue with this type of interaction is the limited feedback and the fact that fingers cover a lot of screen space. Instead of a traditional selection mechanism like a gallery or a listing interface, a reticle selection can be used. This leaves more space on the screen and gives the user feedback on the type of object interacted with.
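A sketch of such a reticle selection (again assuming ARKit/SceneKit; the helper is a simplified assumption, typically called once per rendered frame rather than on tap):

```swift
import UIKit
import ARKit
import SceneKit

// Sketch: reticle-style selection. Whatever virtual object sits under the
// centre of the screen is treated as the current selection, leaving the
// rest of the screen free of list or gallery controls.
func objectUnderReticle(in sceneView: ARSCNView) -> SCNNode? {
    let center = CGPoint(x: sceneView.bounds.midX, y: sceneView.bounds.midY)
    // SceneKit hit test against the virtual objects (not the detected planes).
    let hits = sceneView.hitTest(center, options: nil)
    return hits.first?.node
}
```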

Volumetric UI

The UI of an application is crucial to make it easily navigable and easy to use. Since the entire device is the viewport, controls and elements should be kept to a minimum, and 2D objects should be carefully considered before being added. Since users focus on the experience in the video stream, controls placed on top of it can be hard to see and access. Instead, the interface could be built into the environment. For example, in an application for discovering the solar system, instead of having a UI with buttons, the planets themselves can be pressed to display the relevant information.

Since objects are of different sizes and can move on the screen, some of them can sometimes be harder to interact with. Therefore the touch target connected to an object should always be the same size.

3.7.2 Interactions for AR

In May of 2018, Google hosted their annual I/O conference, with one talk discussing interactions in augmented reality [26]. There were three main headlines.

The first, context-driven superpowers, refers to the fact that humans already have a visual and physical understanding of the world an AR app is used in. This means that we can use context-driven signals to trigger experiences from the environmental context. Cues in the environment can be seen as a signal for what the application should do or how it should behave. The most important part of any AR application is its screen, and therefore some of the regular phone interactions are redundant, and in some cases a huge pain point. Users should be able to see the entire screen as much as possible, and apps should be as hands-free as possible, giving them a snackable feel. However, there are some limitations and issues with the environmental context regarding AR: if the environment is manipulated too much, there can be an uncanny feeling that something is not right.

Shared augmentation: Interactions are not limited to a user and his or her screen. In AR, interactions can consist of communication between users, in real time. The difference is much like watching a movie at home versus going to the movies: people can be immersed in the same thing at the same time. Furthermore, they discuss that applications do not have to be launched at the same location. As long as there is communication between the users and feedback is given about what the other person is doing and seeing, there is a feeling of community.

The third, expressive inputs, points out that traditional phone design has been limited to a fixed number of interactions, such as tap, pinch and swipe.


With the use of the camera, inputs can be far more varied than these traditional ones. User movement can be an expressive input that triggers certain actions. Another new controller can be the user's face: when using the front-facing camera, AR applications can trigger actions on facial expressions. This can be taken even further by connecting the UI to the entire body and the surroundings, so that a change of pose, movement or expression triggers a context relevant action. When interacting with objects in the real world, traditional interactions also become more real. Because an object is placed in an environment that we are used to seeing in everyday life, we expect the object to behave like one in the real world: if we push it, it should move in that direction.
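One way such an expressive input could be sketched in Unity (C#) is shown below, where a swipe across the screen pushes an augmented object in the corresponding direction. The object is assumed to have a Rigidbody, and the swipe threshold and push strength are arbitrary example values:

    using UnityEngine;

    // Lets the user flick an augmented object: the swipe direction on the
    // screen is mapped to a physical push in the corresponding world direction.
    [RequireComponent(typeof(Rigidbody))]
    public class SwipeToPush : MonoBehaviour
    {
        public Camera arCamera;
        public float pushStrength = 0.5f;   // impulse strength, example value
        public float minSwipePixels = 50f;  // ignore anything shorter (a tap)

        private Vector2 touchStart;

        void Start()
        {
            if (arCamera == null)
            {
                arCamera = Camera.main;
            }
        }

        void Update()
        {
            if (Input.touchCount == 0) return;
            Touch touch = Input.GetTouch(0);

            if (touch.phase == TouchPhase.Began)
            {
                touchStart = touch.position;
            }
            else if (touch.phase == TouchPhase.Ended)
            {
                Vector2 swipe = touch.position - touchStart;
                if (swipe.magnitude < minSwipePixels) return;

                // Map the 2D swipe onto the camera's right and forward axes so
                // the object moves the way the user dragged on the screen.
                Vector3 direction = arCamera.transform.right * swipe.x +
                                    arCamera.transform.forward * swipe.y;
                direction.y = 0f; // keep the push parallel to the ground

                GetComponent<Rigidbody>().AddForce(
                    direction.normalized * pushStrength, ForceMode.Impulse);
            }
        }
    }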

3.7.3 Visuals & how to use them

AR applications consist of the real environment with a layer of augmented objects placed on top. These augmented objects come in a wide range of shapes and are used differently depending on what kind of information is to be distributed. The most commonly used object in AR applications is the 3D object. This type of object can be anything from an airplane to a plant. It is either placed in the center of the screen, making it easy to interact with, or used as a visual that works together with the environment. 2D images or flat surfaces can be used for augmenting already existing images; this type is used to make images in newspapers come alive and become interactable. The third commonly used object is the interface itself. The UI of AR apps should always be kept to a minimum, making room for the main experience, and one solution has been to integrate the UI into the augmented objects.

3.7.4 Visual indicators

Visual indicators are signs that something about a certain element or object has changed. In the real world, this can be the American-style mailbox, where a flag is put up when there is mail. In applications, small visual features are used to indicate the same thing: in a mail application on a phone, a red dot indicates that there is mail to be read. One of Jakob Nielsen's ten heuristics for interface design is visibility of system status. He talks about giving the user appropriate feedback based on what actions are executed. For example, when selecting multiple things in a list, the system should show which objects have already been chosen and which are left. The feedback on the status of the system should be immediate, to prevent any uncertainty for the user. He continues by noting that when we use applications that give us feedback, we start to build a relationship with the application, trusting that it knows what is going on. This communication between the application and the user is fundamental for the experience to be as good as it can be [31].
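Applied to an augmented object, the same principle could be sketched as a small badge component in Unity (C#); the badge child object and the method names are purely illustrative:

    using UnityEngine;

    // Shows or hides a small badge above an augmented object to signal that
    // something about it has changed, mirroring the mailbox-flag idea above.
    public class StatusBadge : MonoBehaviour
    {
        public GameObject badge; // e.g. a small red sphere parented to the object

        public void MarkUpdated()
        {
            badge.SetActive(true);
        }

        public void MarkSeen()
        {
            badge.SetActive(false);
        }

        void LateUpdate()
        {
            if (badge.activeSelf && Camera.main != null)
            {
                // Keep the badge facing the camera so it stays readable.
                badge.transform.rotation = Quaternion.LookRotation(
                    badge.transform.position - Camera.main.transform.position);
            }
        }
    }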


3.7.5 Advertisement design

Just like any type of design, advertisement design is subjective: we all perceive it differently, and our taste in visuals is not the same. What is written here are therefore general rules and guidelines. From an interview conducted with an experienced advertisement designer, some basics of designing for attention were identified. A key part is to make the design feel consistent with and relatable to the brand: colors, fonts and image language should follow the guidelines of the company branding. No matter how good it looks, a Coca-Cola ad should never look like anything else. Furthermore, the ad has to draw attention to itself and then make the viewer focus on the message it is presenting. There are several ways of doing this, such as including a call to action or using visual metaphors that make people intrigued by what is there.

3.8 Frameworks & Tools

When developing and designing AR applications for mobile phones, there are several ways to go. This section presents some of the most used frameworks and tools; both cross-platform frameworks and platform-specific frameworks are considered.

3.8.1 Unity

Unity 3D is a game engine and a tool for creating cross-platform applications. AR and VR have been supported for a long time, and it is one of the most used tools for developing AR and VR applications and games. Unity combines scripts, usually written in C# (C-sharp), with the objects these scripts are attached to, which makes it easier to design and to understand where the code runs and what it does. As mentioned, Unity is cross-platform and can compile and deploy applications to everything from Android and iOS to PlayStation and Xbox.
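As a minimal illustration of this script-and-object model, the sketch below shows a C# component that, once attached to an augmented object in the scene, slowly rotates it; the class name and rotation speed are arbitrary example values:

    using UnityEngine;

    // A typical Unity component: once attached to a GameObject in the scene,
    // its Update method runs every rendered frame.
    public class SlowSpin : MonoBehaviour
    {
        public float degreesPerSecond = 20f;

        void Update()
        {
            // Rotate the object this script is attached to around its own up axis.
            transform.Rotate(Vector3.up, degreesPerSecond * Time.deltaTime);
        }
    }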


3.8.2 Vuforia

Vuforia3 is a software development kit (SDK) for AR applications in both 2D and 3D. It uses computer vision to recognize planes, images or objects. The Vuforia SDK can either be used with Vuforia Studio, their own platform for building AR applications, or with Unity, where it serves as an SDK and as a database for image recognition.
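When used inside Unity, Vuforia reports tracking state changes to scripts attached to the image target. The sketch below follows the pattern of the SDK's trackable event handler as it looked around 2018; exact type and method names may differ between Vuforia versions:

    using UnityEngine;
    using Vuforia;

    // Shows the augmented content only while Vuforia is actually tracking
    // the image target this script is attached to.
    public class ImageTargetHandler : MonoBehaviour, ITrackableEventHandler
    {
        public GameObject augmentedContent;

        void Start()
        {
            TrackableBehaviour trackable = GetComponent<TrackableBehaviour>();
            if (trackable != null)
            {
                trackable.RegisterTrackableEventHandler(this);
            }
            augmentedContent.SetActive(false);
        }

        public void OnTrackableStateChanged(
            TrackableBehaviour.Status previousStatus,
            TrackableBehaviour.Status newStatus)
        {
            bool tracked = newStatus == TrackableBehaviour.Status.DETECTED ||
                           newStatus == TrackableBehaviour.Status.TRACKED ||
                           newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

            augmentedContent.SetActive(tracked);
        }
    }

Vuforia's own sample handler follows essentially the same pattern; here it is reduced to toggling a single content object.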

3.8.3 AR Core

Released in February of 2018, ARCore4 is Google's framework for AR applications, developed to enable easier and better AR experiences. For instance, it provides environment recognition functions that find and detect surfaces such as streets and walls.
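From Unity, ARCore is commonly reached through Google's ARCore SDK for Unity or the AR Foundation package. The sketch below assumes the latter and simply logs every plane that is detected; an AR session is assumed to already be set up in the scene, and API details vary between package versions:

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Logs every plane (floor, wall, table top, ...) that the underlying
    // AR framework reports, i.e. the environment detection described above.
    public class PlaneLogger : MonoBehaviour
    {
        public ARPlaneManager planeManager; // assigned in the Inspector

        void OnEnable()
        {
            planeManager.planesChanged += OnPlanesChanged;
        }

        void OnDisable()
        {
            planeManager.planesChanged -= OnPlanesChanged;
        }

        void OnPlanesChanged(ARPlanesChangedEventArgs args)
        {
            foreach (ARPlane plane in args.added)
            {
                Debug.Log("New plane detected, size: " + plane.size);
            }
        }
    }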

3.8.4 AR Kit

ARKit5 is the framework for running AR applications on Apple devices and has been around for some time; in September of 2018, version 2.0 was released. One newly implemented feature is a tape measure function in AR, where a real object can be accurately measured through the AR application. This is made possible by the improving object and surface recognition in the framework. The main focus of the 2.0 release was the multiplayer function in AR games: objects can be set up in an augmented environment and viewed simultaneously through different devices, making room for playing board games together with friends, or even more advanced, animated games.
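The tape measure idea itself is straightforward to express: raycast against detected surfaces at two tapped points and compute the distance between the hits. The sketch below uses Unity's AR Foundation types, which wrap ARKit on iOS, and assumes an ARRaycastManager is present in the scene:

    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARSubsystems;

    // A crude tape measure: the first tap marks a start point on a detected
    // surface, the second tap marks an end point and logs the distance.
    public class TapeMeasure : MonoBehaviour
    {
        public ARRaycastManager raycastManager; // assigned in the Inspector

        private Vector3? startPoint;
        private readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

        void Update()
        {
            if (Input.touchCount == 0) return;
            Touch touch = Input.GetTouch(0);
            if (touch.phase != TouchPhase.Began) return;

            if (raycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon))
            {
                Vector3 point = hits[0].pose.position;
                if (startPoint == null)
                {
                    startPoint = point; // first tap: remember the start point
                }
                else
                {
                    float metres = Vector3.Distance(startPoint.Value, point);
                    Debug.Log("Measured distance: " + metres.ToString("F2") + " m");
                    startPoint = null; // ready for the next measurement
                }
            }
        }
    }

Placing small marker objects at the two hit points and drawing a line between them would turn the same logic into the visible tape measure described above.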

3.9 Existing AR solutions

In commercial campaigns, AR has been used in many markets and in many different ways. Blippar6, for example, has several case studies showing applications built for enhancing shopping experiences or promoting products.

Campaigns using gamification are a common type of advertisement experience: a game is built around a product to give users a reason to interact with the product being promoted. In 2009, the car brand Mini7 created a game where a car existed as an augmented object that could only be seen through the phone.

3 https://www.vuforia.com/
4 https://developers.google.com/ar/
5 https://developer.apple.com/arkit/
6 https://www.blippar.com
7 https://www.mini.com


To play the game, the user had to find the augmented car and hide it somewhere in the augmented environment so that other users could not find it. The users who had hidden the car for the longest time at the end of the campaign won a real version of it.

Together with Michael Kors, Facebook recently launched their new AR advertisement platform. The goal is to display and sell glasses directly in the Facebook news feed. The ads are interactive: when pressed, the camera starts, giving users the opportunity to try the glasses on themselves in AR, with the longer-term goal of making it possible to buy the glasses right in the feed. Since the ads appear in the middle of regular posts, the AR function has to be triggered manually, and several steps are needed before the front-facing camera feed is augmented.

On the 25th of September 2018, the football club Southampton8 released a new augmented reality advertisement system for their stadium. The system changes its advertisement content depending on which country the game is televised in, making the ads more personalized and location accurate. The fundamental technology behind this solution is a predefined surface adjusted and adapted for augmentation: when an augmented layer is put on top of the video stream, it is always in the same placement and looks the same, with only the content altered.

8 Southamptonfc.com
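In Unity terms, the underlying idea could be sketched as follows: the advertisement surface is a fixed, pre-placed quad, and only the texture applied to it changes with the broadcast region. The region codes and texture fields here are invented for the example:

    using System.Collections.Generic;
    using UnityEngine;

    // The ad board is a fixed, pre-placed quad; switching broadcast region
    // only swaps the texture, never the placement of the augmented layer.
    public class RegionalAdBoard : MonoBehaviour
    {
        public Texture ukAd;
        public Texture chinaAd;
        public Texture usAd;

        private Dictionary<string, Texture> adsByRegion;

        void Awake()
        {
            adsByRegion = new Dictionary<string, Texture>
            {
                { "UK", ukAd },
                { "CN", chinaAd },
                { "US", usAd }
            };
        }

        public void ShowAdFor(string regionCode)
        {
            Texture ad;
            if (adsByRegion.TryGetValue(regionCode, out ad))
            {
                GetComponent<Renderer>().material.mainTexture = ad;
            }
        }
    }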


Chapter 4

Methods

This chapter introduces the methods used in this thesis. It is divided into three parts. The first, discovery, aims to find out more about the topic and contains a literature study, a workshop and an information-gathering interview. The second part is the implementation of a prototype, where the tools, programming languages and environments used are presented. The third part is the concept evaluation phase, where we present how information was gathered and how the concept was evaluated.

4.1 Discovery phase

The discovery phase was a three-step process. The first step was to gather information about the topic, broadening the understanding of and knowledge about the subject. The second was an expert interview conducted with an expert in the field. The third was a workshop held in order to get new and broader perspectives and to find solutions to and problems with the topic.


4.1.1 Literature study

To gain a better understanding of the subject, a literature study was conducted. To find published articles and relevant information, several databases and search engines were used; the most frequently used were Google Scholar1 and Umeå University's DiVA portal2. The results of this phase lay the foundation of Chapter 3.

Primary searches include:

• AR technologies and where to use them

• User experience design for AR applications

• Perceptions on screens and visual depth perception

• Objects in AR

• Digital advertisement

• Outdoor advertisement

4.1.2 Interview

An interview was performed in order to gain more knowledge of the design process for advertisements. The interview was conducted with a graphic designer from one of Sweden's biggest clothing brands, with experience in both digital and traditional marketing. Rowley (2012) discusses how to conduct an interview and what questions need to be considered before, during and after it [51].

Before

When setting up an interview, there are several important questions to consider.

Why is there a need for an interview during the discovery phase?

- The topic of this thesis is broad, and scholarly research on advertisement design for AR in terms of secondary objects is hard to find. Therefore, an interview with an experienced graphic designer was performed.

Which type of interview is best?

- Different styles can be used depending on the interviewer's previous knowledge about the subject and what kind of information is to be obtained.

1 https://scholar.google.com
2 https://umu.diva-portal.org


For this interview, a semi-structured interview was performed, with predefined questions and time for open conversation. This type of interview is most often used when the interviewer has a basic to intermediate level of knowledge about the subject and therefore has the ability to ask relevant follow-up questions [66].

How to decide what questions to ask?

- The questions need, of course, to be adjusted to each interview and be relevant to the subject. Since the interview conducted was semi-structured, it was more important to have a good outline and to give the interviewee the opportunity to speak freely.

During

During the execution of an interview, we want to get the best possible result. The following questions address possible issues and help to achieve a good result.

How to ensure that the interviewee understands the questions?

- Make sure that the questions asked are thoroughly considered and relevant. In order to get unbiased answers, it is essential that all questions are asked in a neutral way. Questions should not be invasive or too vague, and should not invite a simple yes or no answer.

How to keep the conversation going? - In an ideal interview, the interviewee just keeps talking about the subject, almost answering the questions without them being asked, so that it becomes more of a conversation than an interview [8].

After

When organizing and interpreting the collected data, there are two steps for extracting as much relevant information as possible. The first is to create tags describing general phases or sections of the interview; this helps to locate statements and gives a basic structure of what has been said. The second step is to categorize: sorting among the tags, comparing them and grouping similar ones together. This gives a prioritized list of information, sorted with respect to the questions that needed to be answered [13].

4.1.3 Workshop

In order to get new perspectives and gather more information, a workshop was performed. The workshop was carried out together with NK and their UX team.

The main purpose of this was to enhance and broaden the idea generation and
