
Supervisor: Ole Norberg

Designing an interface for teach pendants, with focus

on novice robot users

Investigating users' experience in early usage of a teach pendant

Amanda Engström

M.Sc. Interaction Technology & Design, 300 credits
Master thesis, 30 credits

Department of Applied Physics & Electronics

Spring 2021


Abstract

The thesis aimed to investigate whether an interactive application could aid novice users' early usage of the OmniCore FlexPendant 1. The method followed the Double Diamond design model. An initial literature research was made, including a competitive analysis, followed by a design sprint that started the concept work for the application. Thereafter, several iterations of user research and prototyping were made. Final user tests were conducted on a hi-fi prototype. The result showed that a functional application based on the hi-fi prototype would aid novice users' early experience of the OmniCore FlexPendant. However, the result also showed that some of the functionality should not be limited to a specific application but should be available in the whole FlexPendant. Further, the functionality overall should become more interactive in order to provide the best user experience for novice users.

Keywords - Industrial robots, Collaborative industrial robots, Teach pendant, FlexPendant, OmniCore, Interactive training, Application, Prototype, Double Diamond design model.

1 The HMI device for the newest version of the controller at ABB


Sammanfattning

The aim of the thesis was to investigate whether an interactive application could ease novice users' early usage of the OmniCore FlexPendant. The method followed the Double Diamond design model. An initial literature study including a competitive analysis was carried out, followed by a design sprint that formed the starting point for creating different concepts for the application. Thereafter, several iterations of user research and prototyping were carried out, and final user tests were conducted on a hi-fi prototype. The results showed that a functional application based on the hi-fi prototype would help novice users at the beginning of their use of the OmniCore FlexPendant. However, the results also showed that parts of the functionality should not be confined to an application but should be available in the whole FlexPendant. Further, the functionality should overall be more interactive in order to offer the best user experience for the novice user.


Acknowledgements

Foremost, I need to thank my supervisor at ABB, Björn Löfvendahl, for all the support, great discussions and fun conversations. Björn, together with Jonas Brönmark and Elina Vartiainen, gave me great brainstorming sessions and made me feel welcome and appreciated at ABB.

Thank you to Ole Norberg for being my supervisor from Umeå University, helping me stay on the right path and always keeping his fingers crossed for me.

A special thank you to Evelina Malmqvist and Christina Metcalfe for providing great feedback and improvements for my report. Together with Lisa Fjellström and Frida Ylitalo, you have supported me throughout and given great advice along the way.

Last but not least, thank you Erik Viberg for always believing in me, reading my report over and over again and for only using my own words against me in emergency cases.


Contents

1 Introduction
  1.1 Problem Statement
  1.2 Objective
  1.3 ABB
  1.4 Delimitation
2 Theoretical Framework
  2.1 Industrial Robots
    2.1.1 Challenges for Industrial Robots
    2.1.2 Human Robot Interaction (HRI) Within Industrial Robots
    2.1.3 Future of Industrial Robots
  2.2 Collaborative Industrial Robots
    2.2.1 Different Approaches for Collaborative Industrial Robots
    2.2.2 Challenges for Collaborative Robots
    2.2.3 The Human Factor
  2.3 The Handheld Control for the Robots
  2.4 Interactive Training
    2.4.1 Designing and Developing for Interactive Learning
    2.4.2 Interactive Training in Industrial Environments
  2.5 Competitive Analysis
    2.5.1 FANUC
    2.5.2 KUKA
    2.5.3 Universal Robots
    2.5.4 Visualizing Quaternions
    2.5.5 TrainLab
    2.5.6 Resonate Learning
    2.5.7 Userlane
    2.5.8 Microsoft
3 Methodology
  3.1 Double Diamond Methodology
    3.1.1 Discover
    3.1.2 Define
    3.1.3 Develop
    3.1.4 Deliver
  3.2 Literature Research
  3.3 Interviews
  3.4 Design Sprint
    3.4.1 Day 1
    3.4.2 Day 2
    3.4.3 Day 3
    3.4.4 Day 4
  3.5 Prototyping
  3.6 Interviews Regarding the Prototype
    3.6.1 Pilot Testing of Hi-fi Prototype
  3.7 Forming Hypothesis
  3.8 User Tests
    3.8.1 Evaluation on the Usability of the Prototype
4 Result
  4.1 Discover - Iteration 1
    4.1.1 Literature Research
    4.1.2 Initial Interviews
  4.2 Define - Iteration 1
    4.2.1 Design Sprint
  4.3 Develop - Iteration 1
    4.3.1 Prototype
  4.4 Deliver - Iteration 1
  4.5 Discover - Iteration 2
    4.5.1 Interviews for Mid-fi Prototype
  4.6 Define - Iteration 2
    4.6.1 Hypothesis
  4.7 Develop - Iteration 2
  4.8 Deliver - Iteration 2
    4.8.1 User Tests
5 Discussion
  5.1 Hypotheses
6 Conclusion
  6.1 Future Work
Appendix A Concepts
  A.1 Tutorial Library
  A.2 On-boarding
  A.3 Simulation
  A.4 Interactive Walkthrough
  A.5 Interactive Manual
Appendix B Mid-fi prototype
  B.1 Version A
  B.2 Version B
Appendix C Hi-fi prototype
Appendix D User test questionnaire


1 Introduction

Ever since 1960, when the first robots were introduced to industrial production, they have constituted an essential part of the production line, relieving human workers of monotonous, repetitive, heavy and dangerous work tasks [1]. However, robots do not possess all the capabilities that we humans take for granted, such as the ability to act on unforeseen circumstances or changing environments. They are limited by how they are programmed and by the capability and input of their sensors. Even with these limitations, industrial robots have capabilities that cannot be achieved by humans [3]. High speed, predictability and precision, to name a few, are necessities for general robotic manufacturing systems [2, 3]. With industrial robots, costs can be reduced in different areas, such as operating and wastage costs. The capability of industrial robots has increased over time: robot manufacturers have made them safer, faster and more precise, increased their reach and improved their communication with external systems [3]. This advancement has made industrial robots more widespread, with usage ranging from manufacturing to healthcare [3]. Currently, collaborative robotics is in high demand in the industrial sector [4, 5, 6, 7].

Collaborative robots are intended to operate alongside the human operator in a dynamic, collaborative way, in close proximity [6, 8]. The benefits of using collaborative robots are plenty, for example more flexible and agile manufacturing processes [6] and re-configurable systems that can follow the requirements for more customized products [9, 10]. In contrast to traditional industrial robots, collaborative robots make it possible to work fenceless. Fenceless collaborative robots improve the production flow and allow the automation of new processes [6]. However, with fenceless robots the probability of hazardous situations increases [8].

The human operators' interaction with the collaborative robots needs to be safe in order for the collaboration to function. The interaction between the robot and the operator, HRI (Human Robot Interaction), can be perceived as the information exchange between the human and the robot. Robots are machines that can operate with high force and speed in relatively large workstations where hazardous situations may arise [9]. A key challenge in dynamic workflows is how the operator instructs the robot [6]. In cases where the robot operates in a predefined way, human behavior could result in a dangerous interaction [9]. Humans with supervisory control over robots, who do not maintain enough knowledge of the working robot, are not to blame if something happens. Nor is it the robots' fault; rather, it is the fault of insufficient feedback [11]. When designing systems where humans are needed, the human factor always needs to be considered [12]. The factors that need to be taken into account include the mental and physical workload [7].


The machine interfaces used for the interaction between humans and collaborative robots are not designed for non-expert operators. Operators spend a significant amount of time trying to remember or predict what the robot's next move is, instead of being able to focus on the actual task [9]. To enable a safe collaboration between industrial collaborative robots and humans, the communication between them should go through user-friendly interfaces; this would increase the operator's feeling of comfort while working with such powerful robots [9]. When novice and expert operators operated the same robot for the same task via teleoperated interfaces in a 3D environment, the differences were significant: novice operators experienced five times higher cognitive load than the amount considered necessary to perform the tasks, and they were 50% less efficient than an expert operator at completing the task [13]. The operator's experience heavily influences the operation of teleoperated robotic systems [13].

At the end of initial training sessions, novice users need to have enough understanding of the system to be able to use it. If the novice users cannot use the system, the teaching has not been efficient [14]. For human robot interaction, understanding the system implies a whole range of insights, which ensure that the users have built a correct mental or internal conceptual representation of the system [14].

On-the-job training (OJT) is an effective tool when a lot of independence is granted to the task performer. It does, however, come with some setbacks, such as being expensive and limited, and in some cases it also lacks the actual training context [15]. Interactive training affects the level of engagement and motivation by supporting the learning processes [15].

Interactive learning is a technique that often uses technology in order to get the trainees actively engaged in the learning process [16].

1.1 Problem Statement

It is a challenging task to bring human operators together with robots [9]. The main aspect for the collaboration between a robot and an operator to work lies in how the operator can instruct the robot [6].

The interfaces for controlling the robots are not designed for novice users [9]. The interface through which the robot and operator communicate needs to be user-friendly, so that the operator feels more comfortable while working with the robots [9]. One way to teach new operators to instruct robots is through interactive learning, which will help them feel engaged and motivated [15].


1.2 Objective

The objective of this thesis is to investigate if an interactive application on the teach pendant 2 can help novice users of OmniCore 3 to get an initial setup and insight into how to use the teach pendant.

1.3 ABB

ABB was founded in 1988 in Switzerland by the merger of the two electrical engineering companies ASEA and BBC [17]. Today ABB has four main areas: electrification, process automation, motion, and robotics and discrete automation [18]. The present thesis was done with the ABB Robotics R&D HMI 4 and UX team in Västerås.

ABB is located in over 100 countries and run by approximately 110,000 employees, all driven by the same values of courage, collaboration, care and curiosity, with a purpose to create superior value by pushing the boundaries of technology so performance can reach new levels. They energize the transformation of society and industry to achieve a more productive and sustainable future [18].

1.4 Delimitation

The delimitations of the thesis are divided into five subsections.

Time frame - One delimitation is the set time frame that exists for this work. The work is restricted to a 20 week limit.

Distance - On account of the current situation of the Covid-19 pandemic, most of the work was done out of office, which also complicated the process of getting access to the actual robots and FlexPendant. Therefore, the final user tests were not conducted on a real FlexPendant in person; instead, they had to be done over online meetings.

Implementation - The implementation ends with a hi-fi prototype that is user tested.

Knowledge adoption - One delimitation in the final hi-fi prototype is that not all texts and information are correct.

Users - The final delimitation was that not all of the users that were contacted fit the target group of novice users for OmniCore.

2 The handheld HMI device for the robot

3 The newest version of the controller at ABB

4 Human-machine interface


2 Theoretical Framework

The theoretical framework section presents the literature research, with an introduction to industrial robots and collaborative robots. A section on interactive training and how it could be used follows, together with a brief introduction of the handheld teach pendant 5 for the robots, as well as more humanistic studies.

2.1 Industrial Robots

Industrial robots have since their beginning formed a crucial part of the production line, easing human labour by taking over jobs that are heavy, repetitive and even hazardous [1]. The robots are designed for performing tasks repeatedly, quickly and with high accuracy [2, 3, 24]. With the help of industrial robots, expenses can be reduced in both operating costs and wastage costs [3].

Robotics is a broad field, and far from all robots, or even all industrial robots, share the same characteristics [24]. Industrial robots do not always operate in industrial environments; in fact, most businesses are called industries, for example the banking industry [25]. The advancement of industrial robots allows them to operate in a range of fields, from manufacturing environments to health care [3].

The capability of industrial robots has increased: they are now safer, more accurate, have a longer reach and communicate better with external systems [3].

Over the last decade, robot controllers 6 have shrunk more than 80% in size, their performance has improved and their cost has been reduced by at least a factor of four [25]. The processors have become more powerful, which combined with smart low-cost sensors makes industrial robots gain more intelligence [25].

A focus has been placed on HRI by reducing harm to people in collisions [26] and by making robots easier to program [27].

2.1.1 Challenges for Industrial Robots

Industrial robots have a history of being designed for static environments. If anything happens that is not accounted for in the robot's configuration, the robot will not take it into account and will not be able to give enough feedback about the state of the process back to the operator [24].

5 The interface that they are maneuvered from

6 The computer that controls the robot


Robots cannot act upon unforeseen circumstances or changing environments; they are limited by their sensing ability [3] and do not adapt well to dynamic environments [24]. Further, robots do not provide a well-established HRI and can be difficult for end-users to program.

There is a need to actively engage human operators in a constructive way, together with finding a balance of autonomy for the robots [24]. If the robots have too little autonomy, human operators will waste their time attending to the robot when they in fact need to see to their own tasks. On the other hand, if the autonomy is too high, the situational awareness of the robots' activity will disappear [24].

2.1.2 Human Robot Interaction (HRI) Within Industrial Robots

Ways to interact with or operate industrial robots are often limited. For example, the only interaction could be pushing a button in order to start a task. In some cases the interaction is nonexistent, with robots that act entirely in an automated system [24].

It is not always obvious for the human operator why all of their requests might not have been carried out [24]. It is not always visually clear that something abnormal is preventing the movement or that a joint cannot rotate further. With better interactions, robots can be controlled with more fluency, which could lead to two phases of interaction: a programming or teaching phase, and an execution phase [24]. In the latter phase, previously programmed requests are carried out, or the operator can perform direct manipulations taught in the previous phase [24].

Intuitive graphical user interfaces (GUIs) are trending by providing familiar interfaces, such as Windows-style menus, for robot teaching. PC-based front-end interfaces make it easier for users to customize applications without the help of the manufacturer, thanks to the familiar desktop format [25]. Programming robots has traditionally been done through specialized programming languages. However, the aim is now to make it easy to use regardless of the users' skills, which is accomplished with instructions that are "master-to-apprentice" styled [25].

2.1.3 Future of Industrial Robots

Academic literature about robotics is extensive and expands fast [25]. With the application of sensors, smart actuators, networks, and innovations in software functionality, industrial robot technology is moving towards enhanced robot intelligence [25]. With more intelligent industrial robots, it will be possible to adapt to unstructured environments and accommodate changes in those environments [25].

2.2 Collaborative Industrial Robots

The interest in collaborative robots has increased during recent years [4, 6, 7, 8, 10]. Robots are no longer considered a bulky part of the production line, but machinery that actively shares the workspace with the human operators [6]. Unlike traditional industrial robots, collaborative robots are meant to work alongside humans without fences [6, 8, 10, 19], an arrangement that increases the production flow and allows the automation of new processes [6].

Manufacturing companies see significant potential of industrial human robot collaboration (HRC) in enhancing productivity and product quality [23]. With collaborative robots, it becomes possible to combine the robots' endurance with the humans' flexibility and problem-solving [10]. Industrial HRC might also enhance ergonomics in the workplace by commissioning the heavy, monotonous and sometimes hazardous tasks to the robot [23].

2.2.1 Different Approaches for Collaborative Industrial Robots

There is not only one way to work with collaborative industrial robots [6, 7]. Establishing the different approaches, and the different levels of collaboration within them, is essential, as it helps prevent safety issues and form and evaluate the operators' aspects of HRI [7]. Some confusion can occur due to all the different levels and terminology when speaking about collaborative robots, because HRC is not well established and terms that are often used, such as coexistence, cooperation and collaboration, can mean different things [7].

Aaltonen et al. [7] presented a way to characterize the interaction in collaborative robotic cells, in the hope of fulfilling safety requirements and standards and ensuring that the human operators' interaction will be pleasant.

Aaltonen et al. [7] continue by mentioning three levels of workspace sharing between humans and robots: fixed safety fence; exclusive motion zone, which implies that the operator only has physical contact with standing robots; and shared workspace, which gives the robot and operator contact during simultaneous motions [7].

Further, Michalos et al. [20] describe the different levels of interaction as, "the robot and the human operator could have a common task and workspace (figure 1), a shared task and workspace (figure 3), or a common task and a separate workspace (figure 2)".

Figure 1: The robot and operator share a common task and workspace. Shows collaboration (based on Michalos et al. [20])

Figure 2: The robot and operator share a common task but not workspace. Shows parallel cooperation (based on Michalos et al. [20])

The characteristics of the different forms of co-work are divided into four groups. The first form is coexistence, where the robot and the human operator exist at the same time but in different workspaces. The second form is sequential cooperation, in which the operator and robot successively perform actions in the same workspace, and the robot is stopped when a human is working in that workspace. The third form is parallel cooperation, which occurs when both the human and the robot work simultaneously towards a shared goal but do not have any physical contact. Last is the collaboration form, where the human and the robot are joined in their action, work hand-in-hand and might even have physical contact.
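The four forms of co-work above can be read as a small decision rule. The sketch below is a hypothetical encoding for illustration only: the predicate names and the exact discriminating questions are assumptions distilled from the prose, not part of any cited formalism.

```python
from enum import Enum

class CoWork(Enum):
    """The four forms of human-robot co-work described above."""
    COEXISTENCE = "coexistence"
    SEQUENTIAL_COOPERATION = "sequential cooperation"
    PARALLEL_COOPERATION = "parallel cooperation"
    COLLABORATION = "collaboration"

def classify(shared_workspace: bool,
             simultaneous_motion: bool,
             physical_contact: bool) -> CoWork:
    # Hypothetical discriminators, checked in order of increasing closeness.
    if not shared_workspace:
        return CoWork.COEXISTENCE             # same time, different workspaces
    if not simultaneous_motion:
        return CoWork.SEQUENTIAL_COOPERATION  # robot stops while the human works
    if not physical_contact:
        return CoWork.PARALLEL_COOPERATION    # shared goal, no contact
    return CoWork.COLLABORATION               # joined action, hand-in-hand
```

Reading the taxonomy this way makes explicit that each form adds one dimension of closeness on top of the previous one.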

Figure 3: The robot and operator share a common task and workspace, but the robot is active and not active in some cases. Shows sequential cooperation (based on Michalos et al. [20])

Schou et al. [6] say that there are two main approaches for humans to collaborate with robots: truly collaborative robots and robot assistants. In the latter, the robot does not necessarily solve tasks together with the human but serves as an assistant working alongside the human operator. Truly collaborative robots, however, solve industrial tasks as a team with the human operator through direct collaboration.

2.2.2 Challenges for Collaborative Robots

The safety risks of collaborative robots are often brought up when talking about them [10, 23]. One main issue is the high risk of injuries and the need for safeguarding the workspaces [8]. Herrero et al. [4] argue that a workspace containing collaborative robot-human interactions must be able to ensure the operator's safety, which Matthias et al. [21] also imply by saying that direct interaction in a shared workspace between robots and humans can lead to additional safety risks.

In addition to the safety risks, complex interaction between humans and robots may also have adverse effects on the mental health of the human workers [22]. Korner et al. (2019) and Szalma and Taylor (2011) argue that this kind of relationship between humans and robots might act as an additional stress factor in the workspace (as cited in Gihleb et al., 2020) [19].

2.2.3 The Human Factor

Bringing humans and robots together is not easy, and Michalos et al. [9] have listed what they consider two main areas to cover when doing so. Firstly, the ergonomic impact, both physical and cognitive, needs to be reduced in assembly-line operations. Secondly, a safe collaboration must be ensured between humans and robots. This could be fulfilled by making the robots act as assistants and by letting the communication between them go through multiple user-friendly interfaces.

In this case, Michalos et al. [9] argue, academic literature points at two kinds of issues. The first is loss of situational awareness, which can occur at high or low levels of automation or due to interruption by other factors like noise, and can lead to a situation where an operator loses track of the state of a machine. The second issue is mode error, which happens when operators continue working under the assumption that the state of the robot system has not changed even when it has. This occurs when systems become complex and tasks such as monitoring are automated, which means the system's mode can change without the operator being aware of it. Lastly, tools that allow the immersion of the human in the workflow should be provided. By increasing awareness of the robot's operation status, such tools aim to increase the operators' feeling of comfort and acceptance.

In a Finnish study carried out in the spring of 2018, Aaltonen et al. [10] sent an online questionnaire to the members of The Robotic Society in Finland. Through that survey they gathered information about which barriers exist for making applications for collaborative robots. Lack of knowledge was one of the most significant reasons, but also ease-of-use. Many of the participants in the questionnaire made comments that indicated concerns about the risks and safety regulations among end-users. The respondents were also asked to select the development needs they valued most. Over one third thought that "new ways of allocating work between human workers and collaborative robots, and safety technologies should be especially developed". Other answers included new kinds of user interfaces, such as gestures and speech, and the utilization of machine vision and artificial intelligence.

Implementing HRC is not as easy as rolling out fully functioning robots in the workspace. The workforce is a huge part, and the robots need to be accepted by them for a successful implementation [23]. Involving operators in early stages, such as the concept phase, is an important learning opportunity for the people that will implement the system [23].

2.3 The Handheld Control for the Robots

The FlexPendant, ABB's teach pendant 7 (see figure 4), that will be used in this research is aimed at the OmniCore version, the seventh generation of controller released by ABB. The controller is the computer that controls the robot, and humans use the FlexPendant as an interface to the controller.

The OmniCore FlexPendant is used both for traditional industrial robots and for collaborative industrial robots. The programming language used on the OmniCore FlexPendant is RAPID, ABB's own programming language. The teach pendant is the most essential tool for humans to control an industrial robot arm: it controls the movements, speed and positions of the robotic arm and accomplishes teaching tasks [34].

7 An HMI device that controls the robot

Figure 4: The OmniCore FlexPendant [28]

2.4 Interactive Training

To get students more engaged and to increase their understanding of the course material, instructors can use interactive training/learning [16]. Interactive learning often uses some sort of technology to achieve this active learning technique [16].

Gan et al. [30] argue that the process of creating technological solutions has become easier. The focus has shifted from what can be built during a certain amount of time to what should be built, as well as how it will be ensured that the solutions are good for the students. Mishra and Sharma [29] agree, arguing that in the last two decades computer technologies have built a unique classroom with the development of interactive multimedia programs.

Web-based teaching and learning have been recognized as important tools to enhance the educational experience of students and their collaborative learning curve [29]. Interactive digital media can help students in group work by maximizing their learning both as individuals and as part of a group [29].

A more active learning environment, in contrast to the traditional way of learning, means that the students are more included in the learning process. This gives them a deeper understanding, and they feel more connected to the subject that is taught [31].

Students that practice more active learning feel that it is great for cooperative learning and group work. These methods make them feel more involved and help them in activities that involve thinking processes such as analyzing and information retrieval [32]. However, some students prefer the more traditional way because it feels more familiar; the interactive way is a big contrast to the traditional way, in which they can take a more passive role [32].

2.4.1 Designing and Developing for Interactive Learning

Mishra and Sharma [29] bring up interactive multimedia to describe computer software that primarily deals with the provision of information. It is a complex task to design and develop programs that are based on interactive multimedia. Mishra and Sharma [29] continue by saying that it needs a team of experts including at least one content provider, one multimedia developer, one graphic designer and one instructional designer.

Mishra and Sharma [29] note that the issues that can be encountered when designing for multimedia interactive learning reach beyond design and development issues: multimedia-enabled teaching and learning has to be integrated into an already existing system and practice.

Mishra and Sharma [29] provide seven principles that help individuals learn, present, and transfer information better. These are:

1. Multimedia principle, when the instructional environment involves words and pictures rather than words or pictures alone.

2. Modality principle, when the instructional environment involves audi- tory narration and animation rather than on-screen text and animation.

3. Redundancy principle, when the instructional environment involves narration and animation rather than on-screen text, narration, and ani- mation.

4. Coherence principle, when the instructional environment is free of ex- traneous words, pictures, and sounds.

5. Signaling principle, when the instructional environment involves cues, or signals, that guide an individual’s attention and processing during a multimedia presentation.

6. Contiguity principle, where words or narration and pictures or narration are presented simultaneously in time and space.


7. Segmentation principle, where individuals experience concurrent nar- ration and animation in short, user-controlled segments, rather than as a longer continuous presentation.

2.4.2 Interactive Training in Industrial Environments

Moniz and Krings [33] imply that if manufacturers want to achieve higher productivity levels they should start by using robotic systems, thereby gaining competence and quality. With the help of intuitive interaction, the accuracy of programming and planning can increase.

To make digital technologies effective for a business, they need to cover every company department and improve productivity and safety [35]. Lessons are useful for continuous improvement and safety; however, the learning process can be passive and unengaging, especially when the lessons are carried out at the end of a work shift [35].

However, technologies that can simulate risk scenarios could help to increase the concentration and ease the learning process according to Celentano [36] cited by Lanzotti et al. [35].

Something that should be kept in mind is that virtual reality (VR) technologies can be more realistic and involving, but that does not necessarily correspond to better usability [35]. Furthermore, real-life videos showing best practice and correct operational procedures are not as communicative as digital simulations. Ergonomic environments related to safe and unsafe conditions can be shown and explained to the workers in real time. This enhances the workers' awareness of the importance of proper working procedures [35].

Vergnano, Berselli, and Pellicciari [37] underline that the design of any training material starts from the realization that operator training is different from teaching. Teaching is focused on knowledge to be learned from written instructions, movies and tutorials. Training focuses on work and skills through guided experiences, not only information, in order to gain confidence in specific results in a short amount of time.

The practical on-the-job training (OJT) operators get is fundamental, even though it can be demanding in effort, time and cost [37]. The interactive features of interactive experiences are fundamental to virtually reproduce the actual experience, which is a necessity for training the operators. Vergnano, Berselli, and Pellicciari [37] continue by pointing out that the simulations must run in a time flow that represents reality as closely as possible.

A practice that has shown to be favorable is recording experienced operators performing tasks and then using this material when training new operators [37]. It has been noted that fundamental features while training operators are user interaction for both the trainees and the instructors [37].

2.5 Competitive Analysis

To be aware of what training tools already exist, a competitive analysis was conducted.

2.5.1 FANUC

FANUC is a leading supplier of robots, CNC systems and factory automation. Besides supplying robots, the company also works with training material [38].

The different training methods for robots at FANUC include e-learning, on-site training and virtual training [39]. The e-learning training is in the form of an eLearn course. The course is self-paced: users can move through it at their own pace and choose what area to focus on. The course structure is based on the skills the user would be taught in the on-site training at FANUC training facilities [39]. The eLearning includes challenge testing and saves each user's progress. Instructors can also request reports on how their specific students have progressed through the eLearn course [39]. The course itself gives instant feedback to the user, who thereby easily knows what needs to be reviewed [39]. The FANUC CNC software works exactly as it does in the actual hardware controls. The Levil CNC certification cart can bring the machine to the classroom, shop floor or events, thereby providing a real machine that can be used with the real FANUC control [41].

The virtual training courses are instructor-led robotics virtual training and are accessible for many of their robots [40]. They are taught live by a FANUC instructor using WebEx, and as a service product they use their own ROBOGUIDE. To participate in the training course the user needs a computer connected to the internet. The user must also be able to receive mail, because a paper manual is sent out. During the virtual training the user works with virtual robots using the cloud-based ROBOGUIDE products [40].

2.5.2 KUKA

KUKA is an automation corporation and a leading global supplier of intelligent automation solutions [42]. They offer their customers eLearning courses that the user can access wherever and whenever, and the content can be repeated over and over again [43]. KUKA offers multiple eLearning courses aimed at different kinds of users. The courses range from synoptic to more task specific [44].

2.5.3 Universal Robots

Universal Robots (UR) manufactures collaborative robots.

UR has free, open-to-all, hands-on online training modules that use interactive simulation in order to engage the user [45].

The course gives an introduction to mastering basic programming skills. The aim of the course is to reduce the gap in knowledge about collaborative robots that exists today. The course is available in multiple languages: English, Spanish, German, French and Chinese. The course itself consists of six e-learning modules, giving training in basic programming for UR robots [45].

The training includes tasks such as configuring end-effectors, connecting I/Os, creating basic programs (for example, making the collaborative robot place parts in a box arriving on a conveyor) and applying safety features to an application. Other tasks in the online course include adding safety zones around the robot application.

UR offers training through on-location courses, webinars and video tutorials in addition to the free online course [46].

2.5.4 Visualizing Quaternions

Grant Sanderson explains quaternions in an explorable video series built with technology that Ben Eater has created [47]. In these explorable videos Grant Sanderson explains quaternions through audio. In the video a coordinate system is displayed and a 3D sphere is shown. While he is talking he encourages the viewer to interact with the video by changing the rotation of the sphere. The viewer can do this either while he is talking or by pausing his voice and exploring without guidance [47]. Grant himself also interacts with the video by changing the same values the viewer can manipulate.

2.5.5 TrainLab

Transub has developed an interactive training tool app for tablets called TrainLab [48]. The application aims to get conductors and train drivers familiar with new rolling stock. The application can be used for different train models, and the user is immersed in a 3D reproduction of the rolling stock. The user also gets access to videos and theoretical content [48]. The application has two exploration modes, one guided and one unguided. The guided mode shows the different areas of the train and provides information throughout in the form of texts, videos, animations, pictures and audio. Meanwhile, the unguided mode lets the user explore the train without a wizard [48].

The application is changeable, and the clients of TrainLab can modify the specific training scenarios and even add new tasks [48].

TrainLab is both portable and user-friendly; once the user has downloaded the application it can be used both on- and offline. The application is very accessible: it is available for Android and iOS and can be converted into a web version [48].

2.5.6 Resonate Learning

Resonate Learning creates and designs support for training and performance. One form of support is interactive user manuals. These manuals include videos, animations, illustrations and interactive 3D equipment models. In the interactive manual users can search for specific content, bookmark, annotate and take notes they can go back to [54].

2.5.7 Userlane

As video tutorials have replaced handbooks, Userlane believes that within five years every application will include on-screen interactive guides. The guides will help users through different processes in real time [55].

Userlane is a software product that makes it possible for businesses to create their own custom on-screen interactive guides. The aim is to make it possible for anyone to use software immediately, without any instructions or training [55].

The Userlane guides appear directly in the actual application, so the user does not need to leave the process they are in to get instructions. The interactive aspect is that the user can accomplish tasks in the application while being guided through the process [55].

Userlane also offers guided on-boarding tours. These take new users through all the features while the novice user can test the program. The on-boarding also shows the necessary steps the user needs to go through in order to set up their environment [55].


2.5.8 Microsoft

Microsoft has an interactive walkthrough guide for using Microsoft Teams. It consists of an interactive video with audio, where the user can choose what kind of scenario they want a walkthrough of. The user can then pause the video at any time and repeat steps multiple times [56].

3 Methodology

The methodology of this thesis is divided into several sections. First the chosen design process, the Double Diamond, is introduced. After this, the events are gone through in chronological order. Throughout the method process the aim was to keep the users in focus. Interviews and user studies were carried out in order to get representative information and a basis for producing an interactive training tool. See figure 5 for an overview of the method process, divided into two iterations.

Figure 5: Graphic presentation of the two different iterations that were made.

3.1 Double Diamond Methodology

The Double Diamond is a design model that was created by the British Design Council [49] after a long study. They wanted to learn how different corporations such as Microsoft, Starbucks, Sony and Lego processed information in order to come up with solutions. What they found was that all companies went through the same steps in the search for innovation [49]. The different corporations had their own names for the process and carried it out in particular ways, but the stages they went through were all the same. The British Design Council took these stages and organized them into the Double Diamond design model [49].

Figure 6: Presentation of the Double Diamond design model graphically (based on [49])

The Double Diamond model has four stages: Discover, Define, Develop and Deliver. Together the four stages constitute a map of sorts that designers use in order to organize their thoughts, see figure 6. The model is not linear even though it might appear that way; going back and forth in numerous iterations is encouraged so that the problem can be fully understood and the best solution found [49]. The Double Diamond is what it sounds like: two diamonds that are linked together and give a representation of what steps should be taken. It also shows what the thought process should look like during the different steps. The thinking in the first and third sections should be divergent, and in the second and fourth it should be convergent [51].

The choice of the Double Diamond design model as the method for this work comes from different aspects. Firstly, the model is well tested and has remained reliable and robust even as demands have grown [50].

Secondly, it can be stretched and used in different ways depending on the focus of the project, which is valuable in a work like this where conditions may change [51], see figure 7.

Lastly, it is an easy tool to use when explaining how the design process will be conducted to people who are not as involved in the process. It also shows how to work with divergent and convergent thinking in a clear way [51].


Figure 7: Presentation of several ways the Double Diamond design model can look, graphically (Based on [51])

3.1.1 Discover

The first stage in the Double Diamond design model is Discover. This stage is about research and learning more about the different variables that affect the issue and about possible solutions for the problem [49]. In this stage the thinking should be divergent so that creativity and problem-solving are maximized [49, 51]. One common way for a company to start this process is to lay down the issue, present hypotheses and define ways that will aid in gathering information [49]. Common activities in this stage are market research and user testing. This stage generates a large amount of information, and managing and organizing all of it is crucial. It is recommended that the data that has been found is gathered in a project brief, making it easier to pass on. The British Design Council noticed that successful companies tended to get their designers face-to-face with users in the research process [49]. The aim of this first stage is to identify and contextualize the problem or opportunity. For this work, the Discover phase includes the literature research and interviews.

3.1.2 Define

After the first step of gathering data the second stage follows, which is the Define stage. In this stage, the findings from the first step are evaluated and defined. This stage is for filtering through all the information from the previous step and elaborating on it [49]. Examples of tasks are identifying bottlenecks and resource waste, finding hidden opportunities or setting a list of no-goes 8 . The thinking should be more convergent in this stage [51]: narrowing down and taking the finances of the company, its resources, logistics and market situation into account before designing anything [49]. It also sets the context, assesses the realism of what can actually be done, and analyzes how the project aligns with the corporate brand.

8 What the design team should not do

The aim of the stage is to elaborate the project and get everyone involved on the same page. This is done by making everyone understand the context of the project, both internal and external, as well as creating an understanding of what capabilities the company has in regard to the project [49]. The second stage comes to an end with a corporate sign-off, meaning that the project either gets approval and the resources needed to continue, or gets scrapped [49]. The design sprint and the making of hypotheses in this work correspond to the Define phase.

3.1.3 Develop

The third stage of the Double Diamond design model marks the start of the actual design process. In this stage, work on the solution to the problem defined in the previous stage begins. The work is mostly multi-disciplinary, putting the designers together with internal partners that possess the expert knowledge needed for the project. By putting different departments together, the problem-solving process becomes more efficient [49]. Some development methods that can be used in this stage are brainstorming, visualization boards and creating different scenarios, just to name a few [49]. The importance lies in the outcome of the used method: a prototype. Furthermore, the benefit of gathering people from different departments and making them part of the process is that fewer prototypes are needed and fewer problems are encountered during testing.

It is common during the Double Diamond process that testing and feedback happen continuously throughout the Develop stage [49]. In this paper, the prototyping in the different stages covers the Develop stage.

3.1.4 Deliver

Lastly is the Deliver stage, which includes the final testing of the product and the official sign-off to production [49]. During this stage a last look is taken and some final testing is done to make sure that there are no issues left. The testing is commonly done against regulations and legal standards; it can also be damage testing and compatibility testing. Customer satisfaction and the impact of the design are usually assessed in this stage to quantify the "value of good design for the brand" [49]. The final product in this thesis, together with the final user tests, corresponds to the Deliver phase.


3.2 Literature Research

Initially in the work process a variety of literature was studied and evaluated. The literature ranged from books, articles and previous studies to videos and blog posts, which gave a broad perspective and understanding of the field. The sources mainly came from Google Scholar and the Umeå University library.

The literature research covered numerous areas of interest for this study, from the characteristics and challenges of industrial robots and collaborative robots to interactive training and robot control. A competitive analysis was also made of interactive tool usage today. Together these areas laid the foundation for the theoretical framework (described in Section 2, Theory).

In the Double Diamond design method, presented in Section 3.1 as the chosen design process for this work, the literature research falls into the first stage, Discover.

3.3 Interviews

Qualitative interviews were conducted with robot users in order to get an understanding of what struggles robot users have when it comes to programming and maneuvering robots. The interviews were semi-structured, which opens up for follow-up questions and discussions.

Before the interviews were conducted with actual robot users, five different personas were created through brainstorming in order to categorize the target groups. The personas consisted of a developer, an operator, a fitter, an instructor and a student. These personas were then used to test the interview questions in a pilot test setup, where the pilot participants answered the questions according to the persona they were given. The pilot test was carried out in order to see what kind of data could be expected from the interviews.

The interview questions varied depending on the users' occupation. However, they had a common goal: figuring out the struggles and positive experiences that come with operating a robot with a teach pendant as a novice user.

All of the interviews were conducted in Swedish during the spring of 2021. None of the interviews were conducted in person, due to the delimitations discussed in section 1.4. The interviews were scheduled for an hour each. All interviews had the same introduction, where it was explained that the participants would be anonymous and that their responses could not be traced back to them. These interviews correspond to the first stage of the Double Diamond design process, Discover.

3.4 Design Sprint

In order to figure out the real problem and solutions during a limited time, a design sprint was conducted following the book Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days [52]. All the tasks are from the book, but since 2016 there is an updated version, Design Sprint 2.0, where the sprint activities last four days instead of five [53]. For a more in-depth description of the different tasks during the sprint, see [52]; for the Design Sprint 2.0 concept, see [53]. The design sprint corresponds to the Define phase in the Double Diamond design model.

3.4.1 Day 1

The first day of the sprint began with defining the challenge through expert interviews and "how might we" notes. The expert interviews in this case had already been conducted, in the form of the interviews with robot users. Instead of conducting new interviews, the ones already conducted were read through while "how might we" notes were created based on the responses.

The second task was to create a long-term goal and sprint questions. The long-term goal was created by figuring out where the product will be in two years if everything goes as planned. The sprint questions were then produced by looking at the long-term goal and thinking pessimistically about what could stand in the way of reaching it.

The third assignment for the first day was to create a map of how all the different users discover, learn about and use the product in order to reach the goal.

After the third assignment was completed, the "how might we" notes were placed on the map in order to see where the focus was. This finding ended the first part of the first day.

The second part of the day was for producing solutions. The first task was to conduct a lightning demo, where other sources are looked into to see what kinds of solutions are out there. This had already been done through the competitive analysis, see section 2.5. After the lightning demos the task was a 4-part sketch. This included note taking, which consisted of taking notes on the things that had been written down in the previous tasks. Doodling was the next part of the 4-part sketch, which consisted of sketching out ideas that came from the note taking. Thereafter it was time for Crazy 8's, which was done by sketching out eight different variations of one of the previous doodles. The final step was sketching out a complete concept as a solution to the problem. This concept sketch ended the first day.

3.4.2 Day 2

The second day should have begun with a vote on the concepts made during the previous day, but because the sprint was carried out by only one person that step was not needed. When the concept had been selected it was time to make a storyboard. First, six actions that would guide the user through the process were created; examples of actions could be clicking on an ad or pressing a button on a specific page. These actions were then placed in eight drawn cells to show when the steps needed to be completed. After this, the drawings and sketches created the previous day were used to fill the eight cells. The pictures helped show what content would be displayed in different parts of the process, and the placed actions served as a representation of the user flow. The storyboarding ended day two of the sprint.

3.4.3 Day 3

Day three consisted entirely of prototyping: mid-fi prototypes that could be shown to users.

3.4.4 Day 4

The last day of the sprint should be dedicated to user testing, where users get to test the prototypes and give feedback. In this specific work no user tests were conducted on the last day. The prototypes made the day before were, however, shown to the ABB supervisor, who gave input and helped out in a brainstorming session. The brainstorming session was conducted in order to figure out in which direction the project should proceed.

3.5 Prototyping

In the last days of the design sprint the prototyping stage started, and it then continued for three iterations: first as doodles and sketches on paper, and then in Figma (an interface design tool) in order to make a more interactive prototype.


The first prototype session in Figma was for creating lo-fi prototypes of different concepts. The concepts were shown to ABB UX employees, who gave input and helped figure out which path should be taken. These prototypes had no color or icons and were mainly used to show flows and concepts.

After the lo-fi concepts were completed and a path forward had been chosen in collaboration with the ABB UX employees, the mid-fi prototype took form. This prototype was also made in Figma but with more features. When the mid-fi prototype was done, interviews were held in order to gather input on the design and functionality before the last iteration of prototyping began.

The last prototyping session was for constructing a hi-fi prototype. This was the next step from the mid-fi prototype, making it clickable and containing all the features so that the hypotheses could be tested.

3.6 Interviews Regarding the Prototype

Between every prototype session, interviews and/or discussions were held in order to keep the work on the right track.

After the lo-fi prototype concepts were done, UX employees were consulted in order to choose a direction that would align best with what was reasonable to do.

After the mid-fi prototype, a salesperson from ABB and an end user were consulted in order to get input and see if the prototype was heading in the right direction.

3.6.1 Pilot Testing of Hi-fi Prototype

After the hi-fi prototyping phase ended, but before the actual user tests started, three pilot tests were conducted in order to spot errors both in the prototype and in the manuscript and scenarios for the user test.

Of the three pilot test participants, only one had previous experience of working with a teach pendant. Nevertheless, the other two still provided useful feedback.


3.7 Forming Hypothesis

The results from the previous interviews laid the foundation for the hypotheses that the final prototype would meet: both the interviews conducted with users about what they found difficult and what kind of aid they wanted, and the interviews conducted as feedback on the lo-fi and mid-fi prototypes. The hypotheses were constructed by going through the results from all the interviews and, together with employees at ABB, seeing what was realistic, but also what changes were already planned for the FlexPendant, and thereby finding appropriate hypotheses to test in the user test.

3.8 User Tests

The user tests of the hi-fi prototype followed a structured form, with the help of a manuscript, so that all participating users would get the same information. See Appendix D for the whole questionnaire, which has been translated from Swedish to English.

The manuscript contained basic information about the thesis work and what was expected of the participants in the current stage of the work. Their rights were read to them, stating that their participation was voluntary and that they would remain anonymous throughout the work.

Then they were asked some initial questions about their background with working with robots and how self-assured they felt when operating an OmniCore FlexPendant.

After these questions they were asked to go through four different scenarios in the prototype. Afterwards they were asked some follow-up questions. The scenarios and all questions can be accessed in Appendix D.

3.8.1 Evaluation on the Usability of the Prototype

One part of the follow-up questions after the scenarios was the After-Scenario Questionnaire (ASQ). The ASQ consists of three questions asked to the participant of the user test in order to gauge how easy or hard the test was for the participant. "Specifically, the ASQ covers a rating of the ease of a task, the amount of time the task took to complete, and the level of support received throughout the process." [57].

The ASQ questions are:

1. Overall, I am satisfied with the ease of completing the task in this scenario.


2. Overall, I am satisfied with the amount of time it took to complete the task in this scenario.

3. Overall, I am satisfied with the support information (on-line help, messages, documentation) when completing the task.

The participant answers each question on a scale from 1 to 7, where 1 represents not satisfied at all and 7 represents extremely satisfied.

After the participant had answered the ASQ questions, the ASQ score was calculated by taking the average of the responses.

The higher the ASQ score, the easier, more time efficient or more supported the user felt during the user test [57].
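As a small illustration, the averaging described above can be sketched in a few lines of Python. The function name and the range check are illustrative choices, not part of the ASQ specification [57]; the score is simply the mean of the three 1-7 ratings.

```python
def asq_score(responses):
    """Compute an After-Scenario Questionnaire (ASQ) score.

    responses: the three 1-7 ratings for one scenario
    (ease of task, time to complete, support received).
    The score is the mean of the ratings; higher means
    the participant was more satisfied.
    """
    if not all(1 <= r <= 7 for r in responses):
        raise ValueError("ASQ ratings must be on a 1-7 scale")
    return sum(responses) / len(responses)

# Example: a participant rates ease=6, time=5, support=4
print(asq_score([6, 5, 4]))  # -> 5.0
```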

4 Result

This section presents the results of the work that has been done, from the interviews to the final product. The section is divided into subsections representing the stages of the Double Diamond design model.

4.1 Discover - Iteration 1

This subsection presents the results from the Discover phase, which includes the literature study and the user interviews.

4.1.1 Literature Research

In addition to the general theory of industrial and collaborative robots, the literature study gave insight into interactive training both in general and in industrial environments. Furthermore, the competitive analysis gave an insight into what kinds of solutions already exist in different business areas. The biggest takeaways from the literature study are summarized in the following list:

• The seven principles from section 2.4.1 for designing for multimedia.

• Other robot manufacturers have e-learning courses, and some also have more targeted interactive courses for specific tasks.

• Other solutions include interactive manuals, interactive videos, walkthroughs, interactive guides and on-boarding.

• The practical training on the job is fundamental, even though it can consume both time and resources.

• VR technologies can be more realistic, but that does not automatically mean they have better usability than other solutions.

• Operator training is different from teaching. Teaching is focused on knowledge, while training focuses on work skills through guided experiences. Training uses not only information but also actual tasks in order to gain confidence in specific results in a short amount of time.

• Ergonomic environments showing safe and unsafe conditions in real time will enhance the workers' awareness of the importance of working procedures. It is better to show real environments than video recordings.


• Fundamental features when training operators are user interaction for both trainees and instructors, as well as real-time simulations interfaced with physical hardware.

• Opinions are divided on whether real-life videos are a better practice than digital simulations when it comes to interactive training.

4.1.2 Initial Interviews

In total, eight persons were interviewed; they were both external and internal robot users connected to ABB. They all had occupations that corresponded to the created personas, such as robot fitter, instructor, student and ABB engineer. One limitation of the interview part was that no interview was conducted with an operator. Another limitation is the fact that OmniCore is new, and it is harder to get in contact with users of a new product. Since it was a struggle to find users for the interviews, interviews were also conducted with users who did not have any experience of OmniCore. A summary of the interviews is listed below.

Struggles users have when using a teach pendant

All the interviewees were asked what they struggle with or what they see novice users struggle with when using a FlexPendant.

One struggle that came up in many interviews was that it is hard to navigate in the FlexPendant. Another difficulty most of the interviewees pointed out was the programming application: it is easier to program on the computer than on the FlexPendant. For novice users who are also inexperienced with programming, it is hard to understand the code.

Instructors had also noticed that it is harder for people with little to no previous computer experience to understand how to use the FlexPendant.

Another issue for novice users is knowing which way the robot will actually move, and fine-tuning the movements to the right speed and distance.

Furthermore, one interviewee pointed out that the manual to the FlexPendant was hard to read and understand completely.

The last struggle that was conveyed was that it is hard to know which coordinate system is active.

How they act when problems occur

When asked how they handle, or how they see others handle, problems they do not know how to solve, the interviewees answered quite heterogeneously. However, almost everyone contacted someone else: a coworker, a boss or even the salesperson who sold them the robot in the first place. Those who did not reach out to anyone else either got stressed and did nothing, or just tried to solve the problem through guessing.

What kind of aid do they want

When asked what they wanted in an application that could help them with the FlexPendant, plenty of different applications and solutions arose.

One proposal was an application that helps the user jog (move) the robot in different coordinate systems, as well as in different ways, such as axis by axis but also linearly and by reorienting.

Another proposal was an application that would make it easier to use the revolution counters, especially when calibrating them, because it is difficult to know how precisely the user should tune them and to verify how good the calibration turned out.

An additional solution, pointed out by three of the interviewees, was to have some sort of guide where the user could follow instructions step by step, in combination with pictures showing how to perform the tasks.

Another request was to be able to press a button so that the robot would go back to its default position or into some sort of service mode.

One solution was to have some sort of tutorials that the user does not need to follow to the end, so that they can see only what is relevant at the moment. Along the same lines, the FlexPendant should contain the tutorials for the specific robot the user wishes to operate.

One more proposal was to implement some sort of sensor that could sense where the user is in relation to the robot, so that the robot adjusts to the user and not the other way around.

A manual in Swedish, or at least in less technical English than the current one, was also requested.

Another idea was an application that could visualize what exists in the space around the robot, such as the safety zones. This visualization could also simulate whether the robot would collide with something if the actual program were run. An additional request was a cheat sheet for different tasks.

Another aspect that was brought up was that the program parameters could change color depending on whether the velocity is low or high.

The last proposition was a wizard that could show the user how to perform different tasks.


4.2 Define - Iteration 1

This subsection of the result presents the results from the Define phase which includes the design sprint.

4.2.1 Design Sprint

The design sprint resulted in five different concepts for an interactive tool that could help novice users along in the beginning. The five concepts were sketched out as lo-fi prototypes. In addition to the concepts, the sprint resulted in some other functions and ideas, which are listed below.

Concept 1 - Tutorial Library

The first concept, the tutorial library, would be an application where the user can go in and watch tutorials. Simultaneously, the user should be able to change tab and the tutorial would follow, so that the user could perform the tasks while watching the tutorial. See figure 8 in order to see one frame of the concept.

Figure 8: Displays the chosen tutorial when the user wants to perform a task while watching a tutorial. The video appears in the top right corner; see Appendix A.1 for the whole concept in pictures.

Concept 2 - On-boarding

The second concept took the form of an on-boarding: the first time the FlexPendant is turned on, the user can choose to be directed to an on-boarding. In the on-boarding, the user learns how to navigate through the FlexPendant and how to perform simple tasks. The on-boarding concept also contains a checklist that shows what needs to be done the first time the FlexPendant is used. See figure 9 in order to see one frame of the concept.

Figure 9: On-boarding that displays information about the applications in the OmniCore FlexPendant. See Appendix A.2 for the whole concept in pictures.

Concept 3 - Simulations

Simulation was the third concept, which aimed to help the user understand how the robot would move depending on where the user stood in relation to it. The user could choose where he stood in relation to the robot, then jog the virtual robot that would appear in the FlexPendant and see how it moved. See figure 10 in order to see one frame of the concept.

Figure 10: Displays the robot in a simulation; the robot is facing left. See Appendix A.3 for the whole concept in pictures.

Concept 4 - Interactive Walkthrough

The fourth concept was an interactive walkthrough. Here the user could choose what task he wanted to perform and then get an interactive guide to help him through it, from the home page to the finished task. The aim was to have the guide in the actual interface, so that the task was performed simultaneously as the user got a walkthrough of how to do it. See figure 11 in order to see one frame of the concept.

Figure 11: The walkthrough in the next step, showing where to press in the application. See Appendix A.4 for the whole concept in pictures.

Concept 5 - Interactive Manual

The last concept was an interactive manual. The manual in the FlexPendant would be a lighter version of the existing one. The language would be easy, and the user could bookmark and save parts of the manual to go back to later, and even highlight parts of the text. See figure 12 in order to see one frame of the concept.

Figure 12: Shows that the user can make notes in the manual. See Appendix A.5 for the whole concept in pictures.

Starter kit - A kit the users would receive with building blocks when ordering
