
LIU-ITN-TEK-A--15/016--SE

Evaluation and redesign of a mobile application for collecting diary data
(Utvärdering och omdesign av mobilapplikation för insamling av dagboksdata)

Johan Dagvall

Emma Wegelid

2015-06-15

Master's thesis carried out in Media Technology
at the Institute of Technology, Linköping University

Johan Dagvall
Emma Wegelid

Supervisor: Katerina Vrotsou
Examiner: Camilla Forsell


Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances.

The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/

Abstract

Activity diaries are used as a data source for studying how humans spend their time on a daily basis. To simplify the collection of these diaries, the PODD (POrtable Diary Data collection) project has been developed within the fields of information visualization and activity data. The project consists of a smart phone application which makes it easier and more effective to gather activity diaries.

The aim of this thesis is to evaluate and improve the usability of the PODD smart phone application. The evaluation has been done by conducting usability studies with one iteration of redesign. The first study was carried out on the original design of the application. Based on the usability issues discovered in this study, a redesign of the user interface was implemented. A second study was then conducted on the redesigned version of the application, to be able to compare results with the previous study.

For a comparison to be possible, the two studies were carried out using the same methods. Each study was divided into two parts with 10 participants each. In the first part, the participants were asked to use the application continuously for two days. This was followed up with a survey with questions regarding their perception of the usability of the application.

The second part of the study was conducted in a controlled lab environment, also with 10 participants. The participants were given fourteen tasks to complete, based on ten different areas of the application.


Acknowledgements

The work presented in this thesis has been carried out at the Division for Media and Information Technology, a part of the Department of Science and Technology at Linköping University.

First of all, we would like to thank our supervisor Katerina Vrotsou and our examiner Camilla Forsell for their support and feedback throughout the project. A special thanks to Mathias Bergqvist for helping us out with troubles of a technical nature. Finally, we want to thank our families for their support during our years at the university.

Johan Dagvall and Emma Wegelid
Norrköping, June 2015


Contents

1 Introduction
  1.1 Background
  1.2 Aim
  1.3 Method
  1.4 Limitations

2 Background
  2.1 Usability
  2.2 Usability testing
    2.2.1 Lab testing
    2.2.2 Beta testing
  2.3 Usability metrics
    2.3.1 Performance metrics
    2.3.2 Issues-based metrics
    2.3.3 Self-reported metrics
  2.4 Guidelines for designing user interfaces
    2.4.1 Consistency
    2.4.2 Factors for efficiency
    2.4.3 Structure
    2.4.4 User control
  2.5 Related work

3 Design and usage of PODD
  3.1 Start view
  3.2 Calendar view
  3.3 History view

4 Method
  4.1 Preparation
  4.2 General arrangement of studies
    4.2.1 Beta testing
    4.2.2 Lab testing
  4.3 Redesign

5 Study 1
  5.1 Results from beta testing
    5.1.1 Feedback from open-ended questions
  5.2 Results from lab testing
    5.2.1 Issues and feedback

6 Redesign
  6.1 Start view
    6.1.1 Issue: Misleading hierarchy
  6.2 Calendar view
    6.2.1 Issue: Too cluttered
    6.2.2 Issue: Hidden information
  6.3 History view
  6.4 Other issues
    6.4.1 Issue: Too many clicks when adding activity
    6.4.2 Issue: No feedback on activity tree depth

7 Study 2
  7.1 Results from beta testing
    7.1.1 Feedback from open-ended questions
  7.2 Results from lab testing
    7.2.1 Issues and feedback

8 Results and Discussion
  8.1 Results from beta testing
  8.2 Results from lab testing
  8.3 General discussion

9 Conclusions and future work
  9.1 Future work
  9.2 Final words

References

A Lab testing: Information for participant
B Lab testing: System Usability Scale, modified
C Beta testing: Information for participant
D Beta testing: Post-test questionnaire

List of Figures

1 Items appear grouped as rows.
2 Items appear grouped as columns.
3 Similar items appear grouped.
4 Start view.
5 Choosing category.
6 Choosing activity.
7 Example of choosing status variable.
8 Confirming new activity.
9 Start view after changing current activity.
10 Calendar view: 1. Header with date picker 2. Calendar with activities 3. Activity 4. Gap 5. Time after line representing current time 6. Zoom 7. Button for uploading data.
11 Editing an activity in calendar view.
12 Information about the activity.
13 Overview of editing activity.
14 Updated activity in calendar view.
15 Editing gap in calendar view.
16 Overview of editing gap.
17 New activity added in calendar view.
18 History view.
19 Test session with test leader on the right and participant on the left.
20 Test session with observer on the left and participant on the right.
21 Responses to "In what way have you mainly entered activities?".
22 Responses to "I learned how to use the start view quickly".
23 Responses to "The start view was easy to use".
24 Responses to "Have you used the calendar view in the application?".
25 Responses to "Have you used the history view in the application?".
26 Average time on task for study 1.
27 Task success for study 1.
28 Expectation measure for study 1.
29 Start view in design 1.
30 Start view in design 2.
31 Calendar view in design 1.
32 Calendar view in design 2.
33 Calendar view in design 1.
34 Calendar view in design 2.
35 History view in design 1.
36 History view in design 2.
37 "Place" variable when adding a new activity in design 1.
38 "Company and devices" variable when adding a new activity in design 1.
39 "Mood" variable when adding a new activity in design 1.
40 Overview page when changing the current activity in design 2.
41 Overview page when editing an activity in design 2.
42 Overview page when editing a gap in design 2.
43 Page for choosing company "Alone" in design 2.
44 Page for choosing company "With others" in design 2.
45 Activity tree in design 1.
46 Activity tree in design 2.
47 Responses to "In what way have you mainly entered activities?".
48 Responses to "I learned how to use the start view quickly".
49 Responses to "The start view was easy to use".
50 Responses to "Have you used the calendar view in the application?".
51 Responses to "Have you used the history view in the application?".
52 Average time on task for study 2.
53 Task success for study 2.
54 Expectation measure for study 2.
55 Comparison of responses to "In what way have you mainly entered activities?".
56 Comparison of responses to "I learned how to use the start view quickly".
57 Comparison of responses to "The start view was easy to use".
58 Comparison of responses to "Have you used the calendar view in the application?".
59 Comparison of responses to "Have you used the history view in the application?".
60 Comparison of the average SUS score (modified version) among participants in beta testings.
61 Comparison of average time on task.
62 Comparison of task success.
63 Comparison of expectation measure.
64 Comparison of average SUS score (modified version) among participants in lab tests.
65 Sketches of the start view
66 Sketch of the start view
67 Sketch of the calendar view
68 Sketches of the calendar view
69 Sketches of the calendar view
70 Sketches of the history view
71 Sketches of "Company and devices" and overview page

List of Tables

1 Demography for study 1.
2 Demography for study 2.

1 Introduction

In this chapter, the background of the master thesis is presented. Aim, method and limitations are also discussed.

1.1 Background

Statistics bureaus commonly use activity diaries as their data source for studying how individuals in a population spend their time [5]. Activity diaries are usually collected in handwritten forms and then manually translated into a digital code. This is time consuming and leads to inaccuracies in both time and interpretation of activities. The PODD (POrtable Diary Data collection) project is centered around a smart phone application aiming to make this process easier and more effective.

PODD has been developed by experts in the fields of information visualization and activity diaries, without any testing on users within the target group for the application. Due to the wide target audience, it is essential for the application to be intuitive and easy to use. This is the background and motivation for this master's thesis project.

1.2 Aim

The aim for this project is to evaluate and improve the usability of the PODD application by usability testing. A redesign of the application will be suggested based on research in the field of usability, focusing on the issues discovered during the evaluation.

This work aims to answer the following problem statements: Is the PODD application fulfilling the users’ needs from a usability perspective? How can the usability of the PODD application be improved and ensured based on an evaluation?

1.3 Method

The project started with research in relevant fields, such as how to plan and conduct usability tests. Two usability studies were conducted during this thesis work. The first one was conducted on the original design of the application. Based on the results of the study, a redesign of the application was performed. Following this, a second study was conducted on the redesigned application.

The studies were planned based on the application's functionality and research in the field of usability testing. Each study consisted of two parts, of which one was lab tests conducted in a controlled lab environment. The other part was usability testing in the form of a beta test, where the participants continuously used the application in a more natural environment. Both qualitative and quantitative data were collected to complement each other. The quantitative data was gathered to compare the usability between the new and the previous design, and to test specific functionality of the application. The feedback from the qualitative data would provide information about what the users thought of the application and what they wanted.


The redesign process consisted of making sketches of the planned changes, getting familiar with the code base and finally implementing the changes. The final step was to conduct another study to evaluate whether the usability had improved with the redesign. This study was organized in the same way as the first one to allow comparison.

1.4 Limitations

This thesis is limited to evaluating the smart phone application part of PODD, excluding the web site where users can register. The number of design iterations is limited to one, meaning two studies. To keep the comparison work from becoming too extensive, a limited number of usability metrics has been chosen as measures in the lab testing part of the studies.

2 Background

In this chapter, theoretical background relevant for this project is presented. It defines usability, several methods for usability testing, and finally gives a brief review of general guidelines for designing user interfaces.

2.1 Usability

Like functionality, usability is a property of each product [8]. If functionality refers to what a product can do, usability corresponds to how people use the product and what they think about it. Based on the aims and purpose of this work, establishing a definition of usability is therefore a core issue.

The definition of usability from ISO 9241-11 [9] reads as follows: "... the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use."

In a similar way, Dumas and Redish [8] define usability as meaning that people who use the product can do so quickly and easily to accomplish their own tasks.

Krug’s first law of usability says ”Don’t make me think!” [11]. In other words, for a product to have good usability it should be self-evident, obvious and self-explanatory.

2.2 Usability testing

There are several ways to improve and ensure usability [8]. One of the main factors is about keeping users and their needs in mind throughout the whole design process. By continuously testing versions of the product on potential users, the design can be iterated based on results from these tests. This is the general definition of usability testing, and is considered an appropriate way of evaluating and improving the usability of the PODD application.

The following five items are characteristic of usability testing [8]:

• The primary goal is to improve the usability of a system or product

• Participants of the test represent real users

• Participants of the test perform real tasks

• The behavior of the participants is observed

• The resulting data are analyzed in order to recommend changes

2.2.1 Lab testing

The traditional type of usability testing is lab tests conducted in a controlled lab environment [14]. The test participants are observed as they perform given tasks on the product.

Planning is crucial to get a valuable result from lab tests, and involves several activities [8]. Defining goals and concerns is the first step. It is a good idea to go from more general concerns and make them specific. Sources from which goals and concerns can be drawn include quantitative usability goals, previous tests and expert reviews.


When defining and recruiting participants it is important to look at the future users of the product or system. These should be represented among the group of test participants.

The appropriate tasks to test are the ones that reveal usability problems for the system or product. They should also represent tasks which real users will perform with the finished product.

The chosen tasks to test have to be translated into task scenarios, which are used to tell the test participants what to do during the test session. The scenarios should be unambiguous and give the user the right amount of information to perform the task.

Usability measures can be divided into performance and subjective measures. The former are quantitative; task time and number of errors are two examples. Subjective measures concern the participants' perceptions and opinions. For more details on usability metrics, see section 2.3.

The last step of planning is to prepare materials and testing environment, together with assigning roles for the test team.

The think-aloud protocol is a method in lab usability testing where the participant is asked to think out loud while using the application [8]. This makes it easier to find features in the product that are not intuitive. The information received through this depends very much on the participant, and on whether he or she is comfortable with sharing their thoughts or not.

Retrospective thinking out loud is a version of the method where the participant is filmed performing tasks and afterwards describes how they completed certain tasks. In this work, retrospective thinking out loud was discarded, even though it has been shown to be more beneficial, because it increases test times by 80 %.

2.2.2 Beta testing

A beta test is a type of usability test, conducted with an early release of a product. The test participants use the product in real environments, conducting real tasks [3]. For this reason, beta testing is considered relevant in the work of evaluating the PODD application.

The purpose of a beta test is to get feedback and find the last bugs before the launch. Through testing the product in its real environment, functional bugs can be found that a controlled usability test would have missed [8].

The results from beta tests partly depend on the people testing the product. Users may forget to write down issues that appeared and the steps they took in the process. The test is not controlled, meaning the user chooses how to use the product. This can lead to some functions not being tested, or to functions being tested but potential issues not being reported.

Beta testing generally takes place at the end of development, on a completed or nearly completed product. Being that close to the release could mean that the discovered usability issues will not get fixed in time for release. This is the main disadvantage of beta testing.

2.3 Usability metrics

Using usability metrics when talking about usability adds structure to the design and testing process [15]. For this project, usability metrics are used as a tool for comparing results from the two studies.


This section addresses three categories of usability metrics: performance metrics, issues-based metrics and self-reported metrics.

2.3.1 Performance metrics

Task success measures how well users can complete a set of tasks [15]. It is important that each task given to the user has a clear end-state, which requires that each task has a definition of what is considered success. Task success can be divided into two categories: binary success and levels of success. Binary success gives a user two possible outcomes on whether he or she completed a task: pass or fail. Levels of success can be useful when there is a gray area associated with task success. Using three levels is a common approach: complete failure, partial success, or complete success.
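To make the two approaches concrete, the small sketch below (not taken from the thesis) averages binary outcomes across participants, and scores the three success levels with one common 0 / 0.5 / 1 mapping; the mapping itself is an assumption made for illustration.

    # Hypothetical illustration of task success scoring; the 0 / 0.5 / 1 mapping
    # of the three levels is a common convention, not something prescribed here.
    LEVEL_SCORES = {"complete failure": 0.0, "partial success": 0.5, "complete success": 1.0}

    def binary_success_rate(outcomes):
        """outcomes: one boolean per participant, True if the task was completed."""
        return sum(outcomes) / len(outcomes)

    def leveled_success_rate(levels):
        """levels: one of the LEVEL_SCORES keys per participant."""
        return sum(LEVEL_SCORES[level] for level in levels) / len(levels)

    print(binary_success_rate([True, True, False, True]))    # 0.75
    print(leveled_success_rate(["complete success", "partial success",
                                "complete failure"]))        # 0.5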

Time-on-task is equal to the time it takes for a user to perform a task. This metric is especially important to measure for systems or products where tasks are frequently performed by users, and time efficiency is essential.

Errors can be defined as incorrect actions that may result in task failure, or that simply make the process of completing tasks inefficient for a user.

2.3.2 Issues-based metrics

In order to address usability issues, it can be valuable to introduce specific metrics for them [15]. For example, measuring issues by category or by task gives information on which parts or functions to focus design improvements on.

When discussing usability issues, it is important to be aware of the possible risks of bias. Sources of bias can be participants, tasks, artifacts and environments.

2.3.3 Self-reported metrics

Collecting self-reported data is a source of information about the user's perception of a system [15]. This gives a user-centered perspective on the matter of usability.

Post-task ratings are used to see which tasks are perceived as the most difficult. The expectation measure is based on users' expected difficulty for a task, compared to the experienced difficulty. The outcome of this measure is plotted in a scatter plot and can be used to prioritize tasks for improvement. For example, if a task is on average expected to be easy but is experienced as very difficult, this task is important to take care of.
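As a rough sketch of how such a comparison can be computed (not taken from the thesis; the 1-5 scale with 5 meaning "very easy" matches the scale used later in this work, while the midpoint threshold and function name are assumptions for illustration):

    def expectation_measure(expected, experienced, midpoint=3.0):
        """expected, experienced: per-participant ratings for one task,
        1 (very difficult) to 5 (very easy)."""
        avg_expected = sum(expected) / len(expected)
        avg_experienced = sum(experienced) / len(experienced)
        # A task expected to be easy but experienced as difficult is the kind
        # of point that ends up in the "fix fast" region of the scatter plot.
        fix_fast = avg_expected > midpoint and avg_experienced <= midpoint
        return avg_expected, avg_experienced, fix_fast

    print(expectation_measure([4, 5, 4], [2, 2, 3]))   # (approx. 4.33, 2.33, True)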

Post-session ratings are used to get an overall judgement of a user's experience, by collecting data at the end of a test session. The System Usability Scale, SUS, includes statements of which half are negatively worded and the other half positively. A user is asked to rate the level of agreement from one to five, strongly disagree to strongly agree respectively. Each statement contributes to a total SUS score, ranging from 0 to 100.
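A minimal sketch of how a standard SUS score is computed from one participant's ten responses follows; the alternating positive/negative wording and the 2.5 weight belong to the standard SUS, and the weight parameter is only included because the modified questionnaire used later in this thesis (section 4.2.1) weights its contributions by 2.67 instead. The exact statements of that modified version are in the appendices and are not reproduced here.

    def sus_score(responses, weight=2.5):
        """responses: ratings 1-5 for the ten SUS statements, in order.
        Odd-numbered statements are positively worded, even-numbered negatively."""
        total = 0
        for i, r in enumerate(responses, start=1):
            total += (r - 1) if i % 2 == 1 else (5 - r)
        return total * weight

    print(sus_score([3] * 10))   # 50.0, a neutral set of answers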

2.4 Guidelines for designing user interfaces

This section addresses some of the important areas to take into consideration when designing user interfaces, which is done in the redesign part of this work.


2.4.1 Consistency

By acting and looking the same throughout all components, a user interface can be considered consistent. When a user performs an action, it should always give the same result. [12]

How we perceive things we see is primarily biased by three things: our experience, the current context, and our goals. Our experience makes us look for familiar patterns. If a product or system is not designed the way we are used to, it makes us confused. It may also take longer to find what we are looking for, i.e. our goals. When designing user interfaces it is therefore important to avoid ambiguity and be consistent. Understanding what goals a user may have when using the interface is crucial, in order to ensure usability. [10]

2.4.2 Factors for efficiency

Visual structure plays an important role when it comes to finding relevant information quickly. By using a visual hierarchy in terms of font size, font style and sections of information, the user is allowed to focus on what is relevant for him or her. [10]

Unlike spoken language, reading is not something our brains are naturally fitted for. With this in mind, there are things to avoid when designing user interfaces. One is uncommon or unfamiliar vocabulary, such as technical terms. Tiny or cluttered fonts will also disrupt the users' reading. In general, the amount of text presented to the user should be kept to a minimum. [10]

2.4.3 Structure

The Gestalt principles are a number of theories which describe how we as humans perceive what we see. They exemplify how we automatically see structures such as shapes and figures, rather than edges and lines. In this section, the principles relevant for this work are described.

The proximity principle is about how objects that are close together appear grouped, while those farther apart do not (figures 1 and 2). The similarity principle is also about grouping: if objects look similar to each other, they appear grouped (figure 3). [10]


Figure 3: Similar items appear grouped.

2.4.4 User control

Being in control when interacting with the interface gives the user a feeling of also being in charge. The opposite makes the user frustrated. To achieve this, a user interface should be designed in a simple, predictable and flexible way. [12]

2.5 Related work

By conducting iterative usability tests and thereby targeting the needs of end users, Cristancho-Lacroix et al. [6] developed a website with user-centered design. Both qualitative and quantitative data were collected, similar to the evaluation of PODD. Cristancho-Lacroix et al. conclude that questionnaires offer answers about which usability issues a product might have, but leave out why users have these issues. For this reason, interviews and think-aloud protocols provided valuable additional data to the questionnaires.

Becker and Yannotta [4] developed a user-centered website by iterative usability testing, using the think-aloud method. By conducting usability tests on the original version of the web page, redesigning based on the results, and then performing another round of tests on the new design, Becker and Yannotta could ensure user-centered design based on real users' behavior.

Perrin et al. [2] performed a usability study to evaluate a library search tool. A test session consisted of two stages: first the participant completed a series of tasks, to gather the participant's observations and comments about the program. After the tasks the participants filled out a SUS survey to evaluate the system based on self-reported user experience. The tasks were sorted in order of ascending difficulty to test learnability and minimize user frustration. The test design would indicate whether the participants learned to use the program, and could remember the process of certain tasks. During the test, the participants were timed, and each error or problem that occurred was noted.

Kaikkonen et al. [1] performed usability tests on a mobile application for transferring files between a computer and the mobile phone. The aim of this project was to compare the effect of the environment when performing a usability test. Therefore, one usability test without distractions was performed in a laboratory, and the other in the field, in a more real environment. The study had identical tasks in the field and in the laboratory, since the aim was to compare the two settings. Each participant was filmed, observed and followed by a moderator who supplied the participants with new assignments after they were completed.

This thesis differs from the mentioned related work in the sense that the first usability study will be conducted on a nearly finished product. In addition, the articles mentioned above, except Becker and Yannotta [4], did not conduct an iterative study to verify improved usability, as will be the case in this work.

3 Design and usage of PODD

PODD (POrtable Diary Data collector) is a digital platform developed for handheld devices, used for collecting diary data for research purposes. The collection is done with a smart phone application, developed for the Android operating system at the Department of Science and Technology at Linköping University.

A user should be able to enter all activities that are performed during a day. When adding a new activity, the user chooses from a pre-defined set of activities and status variables. A status variable is additional information about an activity, such as place, company and devices, and mood. Activities can be managed through one of the three main views: start view, calendar view and history view. Each of these views is described below with screen shots and a description of usage.
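Purely as an illustration of the information a diary entry carries, the sketch below models an entry as an activity chosen from the predefined hierarchy plus its status variables and a time interval. This is not the application's actual Android implementation; the class, field names and example values are hypothetical.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class DiaryEntry:
        # Path in the predefined activity tree: a category followed by a more
        # specific activity.
        activity: List[str]
        start: datetime
        end: Optional[datetime]              # None while the activity is ongoing
        place: str = ""
        company_and_devices: List[str] = field(default_factory=list)
        mood: str = ""

    entry = DiaryEntry(
        activity=["Individual relaxation", "Listen to radio"],
        start=datetime(2015, 6, 15, 8, 0),
        end=None,
        place="Home",
        company_and_devices=["Alone"],
        mood="Happy",
    )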

3.1 Start view

Figure 4 shows the start view of the application. Navigation tools can be found in the upper right corner, above the start view. Here the user can navigate to either the start view or the calendar view by using the corresponding icons. These can also be reached, together with the history view, by using the menu to the right of the icons. The navigation tools can be reached at any time in the application.

Figures 5 - 9 describe the interface when a user changes the ongoing activity and its associated information. By clicking the red-outlined button in figure 4, the user is navigated to figure 5 with the main categories for different kinds of activities. When choosing one of these categories, the user comes to figure 6 with a list of more specific activities. Some of these have their own subcategories, depending on the pre-defined set of activities. After choosing an activity, status variables are selected (see an example of this in figure 7). Each status variable has its own view that the user has to go through. Figure 8 shows the pop up dialog with information about the chosen activity, which the user has to confirm in order to save. After saving and thereby changing the ongoing activity, the user is navigated back to the start view, see figure 9.


Figure 4: Start view. Figure 5: Choosing category. Figure 6: Choosing activity.

Figure 7: Example of choos-ing status variable.

Figure 8: Confirming new ac-tivity.

Figure 9: Start view after changing current activity.

3.2 Calendar view

The calendar view and its different parts are shown in figure 10. Here it is possible to edit activities and gaps. During the use of the application, all activities entered by the user are saved locally on the device. When done collecting diary data, the user has to upload the data to the PODD server. This is also done in the application's calendar view.

Figure 10: Calendar view: 1. Header with date picker 2. Calendar with activities 3. Activity 4. Gap 5. Time after line representing current time 6. Zoom 7. Button for uploading data.


When clicking on an activity in the calendar view (figure 11), a pop up is presented with information about this particular activity (figure 12). From here, the user can choose to edit the activity, and is then navigated to figure 13. In this view, the user can edit the activity itself, but also start time and end time for the activity. When saving the changes made, the user is navigated back to the calendar view (figure 14).

Figure 11: Editing an activity in calendar view. Figure 12: Information about the activity.


In the case of changing start or end time for an activity, gaps can occur between activities, visualized in the calendar view (figure 15). By clicking one of these gaps, the user is navigated to figure 16, where it is possible to add a new activity to fill the gap. The user can also edit start and end time. After saving, the new activity can be seen in the updated calendar view (figure 17).

Figure 15: Editing gap in cal-endar view.

Figure 16: Overview of edit-ing gap.

Figure 17: New activity added in calendar view.

3.3 History view

Figure 18 shows the history view. Here the user can see all activities listed per day. It is possible to edit gaps in this view, but not activities. Gaps are edited by clicking the ”Edit” button for the corresponding gap, and the user is then navigated to the same procedure as when editing gaps from the calendar view (figure 16).

4 Method

This chapter describes the steps of the implementation, from preparation via usability studies to redesign.

4.1 Preparation

Decisions concerning the arrangement of the studies were based on guidelines in relevant literature, together with the aim of the project. Since the future use of the application is intended to be on an everyday basis and quite frequent, it was important to perform testing in an environment with similar conditions; one part of the studies therefore consisted of a beta test. To be able to use usability metrics for comparison of the two studies, the beta testing was supplemented with lab tests. This part made it possible to test specific functionality of the application in a controlled environment.

Since the target group for PODD is very diverse, the recruitment process was simplified. Friends, family and people in the department were therefore asked to participate. Regarding the size of each test group, ten participants per group were considered enough [15].

4.2 General arrangement of studies

Two studies were carried out during this thesis. The first was conducted on the original design of the application. The second was done on the redesign with the implemented changes, to evaluate if the usability had improved.

The following terms are defined here and will be used from now on in the report:

• Design 1: Original design of the application

• Study 1: Study conducted on design 1

• Design 2: Redesign with changes based on results from study 1

• Study 2: Study conducted on design 2

Each study had two parts. One of these was lab tests conducted in a controlled lab environment. The other part consisted of beta tests, where participants used the application in a more natural environment.

4.2.1 Beta testing

In each beta testing session, ten people participated. In cases where the participants owned an Android smart phone, the application was installed on their device. Other participants borrowed a smart phone with the application installed beforehand. The test participants were given a document with information and instructions, see appendix C. They were instructed to use PODD for two days, and to continuously enter data about their daily activities.

After a beta testing session was completed, the participants were given a survey (see appendix D) by e-mail, with questions regarding their user experience of the application.


Questions 1-20 were quantitative and qualitative questions to specify how the users used the application and why they did, or did not, use a specific view for entering their data.

Statements 21-32 are a modified version of the System Usability Scale (SUS) in [15, p. 138]. Due to the modifications made, an equivalent to the SUS score had to be calculated differently. To make the scores range from 0 to 100, the contributions were weighted with 2.67 instead of 2.5 as in the original SUS. The way a statement contributes to the score, depending on whether it is positively or negatively worded, was used in the same way as in the original SUS.

The open-ended questions 34-37 and 39 in the survey are inspired by the post-session interview questionnaire in [7, p. 99].

4.2.2 Lab testing

A total of twenty people participated in the lab tests, ten in each study. Each test was recorded with a camera, with only the mobile phone and the participants' hands showing on tape. From these videos, the time spent on each task was collected. The test was led by a test leader (see figure 19), who controlled the pace of the test, presented the participant with new questions, and handled software bugs. The test leader also encouraged or assisted the participant if necessary, which added uncertainty to the collected usability metrics. An observer observed the test and took notes on important issues that could have been missed by the camera (see figure 20).

The functionality of the application was categorised into ten different areas (A-J). The test consisted of fourteen tasks (1-14), distributed over these areas.

A. Starting a new activity

1. You’re reading a book at home in your couch. Add this as your current activity.

B. Add an activity in retrospect on the current day

2. 06:30 this morning you went out to the kitchen and prepared breakfast, which took 15 minutes. Enter the activity in the calendar view.

3. Go to the history view and fill the gap from 00.00 to 06.30 today, when you slept alone at home.

4. One of your friends slept over in the sofa bed, and you started eating breakfast at 06.45. You were done at 08.00. Add the activity.

C. Edit current activity

5. You are moving out to your garden and continue with your current activity. Change your current location.

6. You got a text message from a friend who declines your invitation to tonight’s party. Change your current mood to ”Sad”.

D. Change current activity


E. Start two activities at the same time

8. You’re done with your current activity and start to clean the kitchen. At the same time you’re singing your favourite song. Add these two activities. You’re also happy. Change your mood.

F. Add a secondary activity in retrospect

9. When you and your friend had breakfast this morning, you listened to the radio. Add this activity.

G. Edit an already added activity

10. Change the start time for your lunch yesterday to 12.00.

11. For yesterday's lunch you entered the company "Alone". You now remember that a colleague kept you company. Change the company for the activity.

H. End a secondary activity

12. You stop singing. Change this in the application.

I. Add an activity in retrospect on the previous day

13. You went shopping for groceries with a friend at the supermarket yesterday, between 17.00 and 18.00, but forgot to enter this in the application. Add the activity.

J. Upload data

14. Upload data for XX/XX.

In order to analyze the expectation measure, each participant was asked to rate the expected difficulty for each area, and thereafter rate how it was experienced. Below are the statements given to the user for rating, before and after each area:

How easy/difficult do you expect this to be in the application?
Very Easy  o o o o o  Very Difficult

How easy/difficult did you experience this to be in the application?
Very Easy  o o o o o  Very Difficult

With all tasks completed, the participant filled out a survey (appendix B). They were also asked about their overall experience and additional opinions.


Figure 19: Test session with test leader on the right and participant on the left.

Figure 20: Test session with observer on the left and participant on the right.

The following usability metrics were collected during the test to measure usability; a small sketch of how they can be computed follows the list:

• Average time on task - To identify which tasks are the most time consuming. The geometric mean was used due to the small sample size [13].

• Task success - To see which tasks are difficult to perform.

• Expectation measure - To gain an understanding of which tasks are expected to be easy, but are perceived as difficult. Tasks in that category need to be fixed fast.

• Error rate - The total number of errors made by participants per task, in relation to the total number of error opportunities per task.
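The following sketch shows, under stated assumptions, how these quantities can be computed. The thesis reports geometric means of time on task with 95 % confidence intervals but does not spell out the formulas; computing the interval on log-transformed times, as here, is one common approach and is an assumption, as are the example numbers and the helper names.

    import math
    from statistics import mean, stdev

    def geometric_mean(times):
        return math.exp(mean(math.log(t) for t in times))

    def geometric_mean_ci95(times, t_value=2.262):
        # t_value = 2.262 is the two-sided 95 % t quantile for 9 degrees of
        # freedom, matching groups of ten participants.
        logs = [math.log(t) for t in times]
        half = t_value * stdev(logs) / math.sqrt(len(logs))
        m = mean(logs)
        return math.exp(m - half), math.exp(m + half)

    def task_success_rate(successes, participants):
        return successes / participants

    def error_rate(total_errors, error_opportunities, participants):
        return total_errors / (error_opportunities * participants)

    times = [35, 42, 28, 60, 31, 45, 38, 52, 29, 210]   # seconds, with one outlier
    print(geometric_mean(times), geometric_mean_ci95(times))
    print(task_success_rate(8, 10), error_rate(6, 2, 10))   # 0.8 0.3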

4.3 Redesign

Each usability issue recognized in study 1 was prioritized from 1 to 4, 1 being the most important. Prioritization was based on how frequently an issue had occurred, both in the lab testing sessions and in the beta testing.

The implementation process began with sketching. With paper and pen, sketches were made to make rough decisions on how to solve the usability issues. From these sketches, programming tasks were created to make changes in the code base of the application. The tasks were prioritized based on which usability issue they belonged to.

5 Study 1

Ten individuals participated in each part of study 1. Their demographic characteristics can be seen in table 1.

              Male   Female   Age range   Median age
Beta testing     3        7     22 - 50         24.5
Lab testing      6        4     23 - 49         30.5

Table 1: Demography for study 1.

5.1 Results from beta testing

This section presents a selection of answers to the questionnaire (appendix D), given to participants in the beta testing part of study 1.

As seen in figure 21, a majority of the participants entered their activities continuously, which means they have been using the start view (figure 4 in section 3.1) the most. Figures 22 and 23 show the participants' perception of the start view.

Figures 24 and 25 present the usage of calendar view (figure 10 in section 3.2) and history view (figure 18 in section 3.3), respectively. Those who did not use the calendar view reported they had no need of it. The two participants who had not used the history view both replied they did not know it existed.

The average of the modified SUS score for participants in the beta testing of study 1 was 39. Ranging from 0 to 100, this is considered a quite low score.


Figure 22: Responses to ”I learned how to use the start view quickly”.


Figure 24: Responses to ”Have you used the calendar view in the application?”.

Figure 25: Responses to ”Have you used the history view in the application?”.

5.1.1 Feedback from open-ended questions

Start view, see figure 4 in section 3.1

Four out of ten participants thought the start view's hierarchy was unclear. The button for adding a secondary activity was perceived as being part of the status variables, due to the dotted lines. The purpose of the button for changing the ongoing activity was also unclear, and was misinterpreted as editing the activity.

Calendar view, see figure 10 in section 3.2

The calendar view was perceived as very cluttered by six out of ten participants, and it was too difficult to see and edit shorter activities. This was made worse when used on devices with smaller screens. The number of columns in the calendar was confusing and unclear to some. One test participant did not use the calendar view for these reasons.


History view, see figure 18 in section 3.3

Three out of ten participants wanted to be able to edit activities in the history view, because they thought it was easier to get an overview of the day there than in the calendar view. Two out of ten did not know that the history view existed.

Other feedback

A reminder to use the application in some way was requested by seven out of ten participants, for example a notification of some sort, asking if you are finished with an activity that has lasted for a while. Shorter activities in particular were hard to remember to change. Splitting an existing activity in two, to fill in a forgotten lunch break for example, was requested because it would simplify this otherwise troublesome task.

Lessening the number of clicks required to complete a task in the application was requested by three out of ten participants, because filling out activities during the day was very time consuming. A solution to this, requested by three out of ten participants, was to implement buttons for the most frequently used activities. A search function in the activity hierarchy, and that the app could suggest activities that are likely to follow another, was requested by one participant.

The activity hierarchy was difficult to navigate because there was no indication of how deep each category went, so activities that were believed to be "close enough" were chosen instead of the correct one. Three out of ten participants requested that this be improved.

5.2 Results from lab testing

This section presents the results of the usability metrics measured in the lab tests in study 1.

Figure 26 shows the average time that participants put on each task, with a 95 % confidence interval. The large confidence interval for task 12 is due to one extreme outlier among the collected times.

Task success for each task can be seen in figure 27. Comparing this to figure 26 explains the rather large confidence intervals for tasks 2, 5 and 9. For these, the task success is lower than average, and the participants who failed are the outliers causing the large confidence intervals.

Figure 28 shows the average expectation rating versus the average experience rating for each area A-J (see the list of areas in section 4.2.2). The scale ranges from 1, "Very Difficult", to 5, "Very Easy". Most areas were experienced similarly to how they were expected, which corresponds to the upper right corner of the graph. Area "F. Add a secondary activity in retrospect" corresponds to the dot in the lower right corner, and was perceived as more difficult than expected. It was therefore considered a prioritized usability issue.

The average of the modified SUS score for participants in the lab testing of study 1 was 60.5.


Figure 26: Average time on task for study 1.


Figure 28: Expectation measure for study 1.

5.2.1 Issues and feedback

Start view, see figure 4 in section 3.1

The buttons for the status variables belonging to the primary activity were overlooked by some participants. This was due to the dotted lines, separating the primary activity from the status variables, and the placement of the buttons. Several of the participants misunderstood the purpose of the button for changing the current activity, and thought it was for editing the currently ongoing activity. This was due to the button text being misleading, and it should be changed to something that indicates its actual purpose.

Calendar view, see figure 10 in section 3.2

In general, the calendar view was perceived by participants as being too cluttered. Different methods for setting the start and end time for activities were suggested, for instance that the end time would automatically be set to the start time, to lessen the amount of scrolling required. Another desire was to set the start and end time depending on where in the calendar the user clicked.

The header in the calendar view was requested to be fixed in place, so it does not become hidden when scrolling in the view. One big contributor to this issue was the automatic scrolling occurring when editing or adding activities or entering the view. Apart from feeling annoying and unnecessary, it made the column for the secondary activity difficult to find. Adding the secondary activity to a past activity was difficult because the column was difficult to see. Many participants incorrectly thought a secondary activity could be added by editing the primary activity it ”belonged to”.

Another issue was that the application didn’t set the start and end time for the secondary activity according to the related primary activity.

History view, see figure 18 in section 3.3

As in the beta test, the ability to edit activities through the history view was requested. To have only one day visible at a time was another idea, because the list of activities will become very long when the application is used continuously.


Other feedback

As with the results from the beta test, there was an issue with too many clicks when adding an activity. For example, participants did not want to be forced to go through status variables that they did not plan to change, and they wanted an indication of how deep each activity hierarchy was. They also suggested a search function to find the activity they were looking for more quickly.

The view for choosing company and devices was perceived as cluttered, and a suggestion was made that company and devices should be separated from each other, to make the view less confusing and cluttered.

Some participants wished for some sort of filter for the status variables, removing options not logical for the chosen activity. For example, if the user has chosen ”Sleep” as activity, it is not very likely he or she will set ”Grocery store” as value of the ”Place” variable.

For task 9 (see the task list in section 4.2.2), it was very difficult for most participants to find the activity "Listen to radio". This was due to the relaxation activities being separated into the categories "Individual relaxation" and "Relaxation with other", with "Listen to radio" placed in the former. Since the task specified the company of a friend, it was very confusing for some participants to find the activity. This explains the high average time on task for task 9 (figure 26).

6 Redesign

This chapter describes the implemented changes, based on a prioritization of the usability issues found in study 1. The parts of the application and their associated issues are divided into separate sections below.

Remaining usability issues that were not addressed in the redesign process are discussed in section 9.1.

6.1 Start view

Figure 29 shows the start view of design 1, before redesign.

6.1.1 Issue: Misleading hierarchy

Study 1 showed that the start view had a weak hierarchy. Information about the current activity is placed at the bottom instead of next to the activity itself, see figure 29. This made it unclear what belonged to what. Rough sketches of solutions and different design ideas can be seen in appendix E, figures 65 and 66.

The implementation made in the redesign can be seen in figure 30. Information related to the current activity has been moved to clarify the relations.

Figure 29: Start view in design 1. Figure 30: Start view in design 2.

6.2 Calendar view

The calendar view of design 1, before redesign, can be seen in figure 31.

6.2.1 Issue: Too cluttered

The general opinion of the calendar view among participants of the beta testing in study 1 was that it was too cluttered, and that it was easy to press in the wrong place.


Many different ideas surfaced when sketching on the calendar view. Amongst them was a simple version with only two columns, one for the main activity containing the related status variables, and the other for the secondary activity (figure 69 in appendix E). This made the calendar easier to understand, but for short activities it would become very hard to see the related status variables. On smaller screens the column titles were too long, so an idea was to use icons instead of text (figures 67 and 68 in appendix E). However, there was a chance that the icons would be interpreted differently depending on the user.

To make the calendar less cluttered, the red-checkered background was removed and spacing was added between the date boxes at the top, see figure 32.

Figure 31: Calendar view in design 1. Figure 32: Calendar view in design 2.

6.2.2 Issue: Hidden information

In the lab test of study 1 it was discovered that many of the participants missed the column titles, due to the auto scroll functionality in combination with dynamic positions for the titles. See figure 33.

In the redesign, auto scroll was removed and the position of the calendar headers was fixed to the top. See figure 34.


Figure 33: Calendar view in design 1. Figure 34: Calendar view in design 2.

6.3 History view

Figure 35 shows the history view of design 1, before redesign.

6.3.1 Issue: Unclear purpose

Several of the participants in the beta testing of study 1 requested functionality to edit activities from the history view. There were also opinions on the list getting quite long with a large number of activities. The sketch in figure 70 shows an idea to make the activities in the history view editable and expandable, revealing the related status variables. In the redesign, functionality to edit an activity was implemented. A drop-down list with the possibility to choose which day to show was also added. See figure 36.


6.4 Other issues

This section addresses other usability issues found in study 1.

6.4.1 Issue: Too many clicks when adding activity

When adding a new activity with design 1, each status variable had its own view, see figures 37, 38 and 39. According to participants in both beta and lab tests, this caused a quite large number of clicks when adding a new activity, the most common action in the application.

Figure 37: ”Place” variable when adding a new activity in design 1.

Figure 38: ”Company and de-vices” variable when adding a new activity in design 1.

Figure 39: ”Mood” variable when adding a new activity in design 1.

The sketches in figure 72 and the bottom sketch in figure 71 show various versions of the overview page. This view contains all the status variables for the related activity in the same view.

The overview page was introduced in design 2 to reduce the number of clicks required. For consistency, this view is used when adding a new activity, editing an existing activity, or editing a gap. See figures 40, 41 and 42.

In combination with the new overview page, the redesign involved implementing an updated version of the "Company and devices" page. The previous version was perceived as cluttered and difficult to operate, and it was confusing that "Alone" could be picked together with any sort of company.

The top two sketches in figure 71 show two approaches of a new version of ”Company and devices”. Choosing no company would filter out impossible combinations like ”Alone” and ”With friend”. The results for the new version can be seen in figures 43 and 44.


Figure 40: Overview page when changing the current ac-tivity in design 2.

Figure 41: Overview page when editing an activity in de-sign 2.

Figure 42: Overview page when editing a gap in design 2.

Figure 43: Page for choosing company ”Alone” in design 2.

Figure 44: Page for choosing company ”With others” in design 2.

6.4.2 Issue: No feedback on activity tree depth

When adding a new activity, the user must go through an activity tree with all possible choices of activities, see figure 45. Feedback from study 1 showed that the lack of feedback on how far down in the tree the user had gone was frustrating and gave the user a feeling of not being in control.

With this in mind, an arrow was added to the buttons for all activities with a subgroup of activities, indicating further steps. See figure 46.


7 Study 2

Ten individuals participated in each part of study 2. Their demographic characteristics can be seen in table 2.

              Male   Female   Age range   Median age
Beta testing     6        4     21 - 61         25
Lab testing      8        2     22 - 25         23.5

Table 2: Demography for study 2.

7.1 Results from beta testing

This section presents a selection of answers to the questionnaire, see appendix D, given to participants in the beta testing part of study 2.

As seen in figure 47, there was a fairly even distribution in the ways participants had entered activities.

Participants’ perception of the start view can be seen in figures 48 and 49. Distribution of usage of calendar view and history view is shown in figures 50 and 51, respectively.

The average of the modified SUS scores among participants of the beta testing in study 2 was 70.3.


Figure 48: Responses to ”I learned how to use the start view quickly”.


Figure 50: Responses to ”Have you used the calendar view in the application?”.

Figure 51: Responses to ”Have you used the history view in the application?”.

7.1.1 Feedback from open-ended questions

Start view, see figure 30 in section 6.1

Six out of ten participants perceived the start view as good. Four out of ten thought it was unclear and confusing. Several participants requested functionality to set the start time manually when switching activity, in case they had forgotten to change it at the right time. The application would then automatically change the end time of the previous activity. One participant did not find the placement of the "Change activity" button intuitive, but would have wanted it at the bottom. There were also requests for a list of common activities, for quicker input.


Calendar view, see figure 34 in section 6.2

Of the six participants who had used the calendar view, three found it to be the best part of the application due to the possibility of getting an overview of one's activities. Two of the six found it too cluttered, and had difficulties seeing short activities.

History view, see figure 36 in section 6.3

Four of those six participants who had used the history view perceived it as good.

Other feedback

Six out of ten participants would have wanted reminders in the form of notifications, for example when an activity had lasted a certain time, in case the user had forgotten to switch activity. Four out of ten did not request reminders. Two out of ten participants wished for a search function, to find the proper activity more quickly. Three out of ten participants felt it irritating to have to go to the "Company and devices" page and click "OK", even though they had no company. They proposed "Alone" as a default value, in order to avoid unnecessary clicks.

7.2 Results from lab testing

This section presents the results of the usability metrics measured in the lab tests in study 2.

Figure 52 shows the average time on task for each task, with a 95 % confidence interval. Comparing it to figure 53, the quite large confidence intervals for tasks 2 and 3 can be explained by a low success rate, where participants spent a lot of time on these tasks without completing them.

The average expectation rating versus average experience rating for areas A-J (see the list of areas in section 4.2.2) is presented in figure 54. The scale ranges from 1, ”Very Difficult”, to 5, ”Very Easy”. Most areas were experienced as expected, represented in the upper right corner. As seen in figure 54, area F was experienced as more difficult than expected. Figure 52 confirms this by showing one of the highest average times on task for task 9, which belonged to area F.


Figure 52: Average time on task for study 2.


Figure 54: Expectation measure for study 2.

7.2.1 Issues and feedback

Overview page, see figures 40 - 42 in section 6.4.1

Five out of ten participants did not immediately understand how the ”Company and devices” page (figures 43 - 44 in 6.4.1) worked. As with the beta participants, they found it illogical to be forced into this page even though they were alone during an activity. One participant also found it confusing how the radio buttons were combined with the check boxes; it seemed mandatory to select a check box. There was a request to hide the check boxes for company when selecting the ”Alone” radio button, instead of only disabling them.

Several participants appreciated the pop-up dialogs for confirming a save; they gave a feeling of control and reassurance. Some did, however, consider them unnecessary and excessive after a few tasks.

Calendar view, see figure 34 in section 6.2

Considering the calendar view, some participants wished for the application to sense the approximate time of where a user has clicked in the calendar and automatically set the start time to this. The end time could then be set by default to one hour after the start time. This would make input more efficient in terms of not having to scroll as much.
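A minimal sketch of this suggestion, assuming the vertical tap position within the day column can be expressed as a fraction of the day; the snapping granularity and the one-hour default are illustrative choices, not part of the current application:

// Minimal sketch: derive an approximate start time from where in the day column the
// user tapped, snap it to the nearest quarter hour and default the end time to one
// hour later. A real Android implementation would read the position from a touch event.
public class CalendarTapToTime {
    // yFraction: 0.0 = top of the day column (00:00), 1.0 = bottom (24:00).
    public static int tapToStartMinutes(double yFraction) {
        int minutesOfDay = (int) Math.round(yFraction * 24 * 60);
        int snapped = (int) (Math.round(minutesOfDay / 15.0) * 15);
        return Math.min(snapped, 23 * 60 + 45); // keep the start within the day
    }

    public static int defaultEndMinutes(int startMinutes) {
        return Math.min(startMinutes + 60, 24 * 60); // default duration: one hour
    }

    public static void main(String[] args) {
        int start = tapToStartMinutes(0.54); // a tap slightly past the middle of the day
        int end = defaultEndMinutes(start);
        System.out.printf("start %02d:%02d, end %02d:%02d%n",
                start / 60, start % 60, end / 60, end % 60); // start 13:00, end 14:00
    }
}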

Other feedback

When given the task of uploading data, some participants stated a wish to have this functionality not only in the calendar view but also in the history view. There were suggestions to have a specific page for the uploading function alone. To some, the text ”Upload data” instead of just ”Upload” would have made the purpose of the button even clearer.

There were also desires to have a search function for finding activities, as described in other sections on feedback.

In task 9 (see the task list in section 4.2.2), when asked to add a secondary activity in retrospect, several participants tried to edit the primary activity to which the secondary activity should ”belong”. When they realized this was not the way to perform the task, they requested an ”Add secondary activity” button for each primary activity.


8 Results and Discussion

In this chapter the results from studies 1 and 2 are compared and discussed. The discussion is based on the chosen methods, the conditions during the tests, and other factors that could have affected the results.

8.1 Results from beta testing

Figure 55 shows the differences in how beta participants of studies 1 and 2 entered activities. This graph, together with figures 58 and 59, can give an understanding of how users preferred to use the application.

The start view was perceived as easier to learn in study 2 compared to study 1 (figure 56), but more difficult to use (figure 57).

The beta testing contained very few quantitative post-session questions, which made an objective comparison quite difficult. Another factor is that no bug testing was done prior to the beta test in study 2, which led to unfortunate bugs that might have affected the participants' opinions about the application. Additional quantitative questions regarding the application would have eased a comparison between the beta testings. Thorough bug testing would have made the application more robust and less disturbing to the participants of study 2.


Figure 56: Comparison of responses to ”I learned how to use the start view quickly”.


Figure 58: Comparison of responses to ”Have you used the calendar view in the application?”.

Figure 59: Comparison of responses to ”Have you used the history view in the application?”.

A comparison between the average SUS score for the beta testings can be seen in figure 60.


Figure 60: Comparison of the average SUS score (modified version) among participants in beta testings.

8.2 Results from lab testing

As seen in figure 61, study 2 had a lower time on task in general compared to study 1. This could be explained by the overview page being introduced in design 2, resulting in fewer clicks and therefore making the application more time efficient.

Another difference is the time on task for task 9, which participants in study 1 perceived as confusing due to misleading placement of activities in the activity tree. Rearranging the activities led to less misunderstanding and thereby a lower time on task in study 2.

The age distributions of the two studies were quite different. For example, the median age in study 1 (table 1 in section 5) was seven years higher than in study 2 (table 2 in section 7). This could be a contributing factor to the improved results on time on task and task success in study 2, as younger people are assumed to be more accustomed to mobile applications. In hindsight it would have been interesting to collect data on how accustomed participants were to mobile applications. This could have been compared to the collected usability metrics and possibly shown a correlation between experience with mobile applications and better results, such as lower time on task and higher task success.

For some tasks the task success was particularly low. This was partly due to participants reading the given task carelessly, which led to wrong input and thereby task failure, such as entering an incorrect place, for example ”Another store” instead of the ”Grocery store” given in the task. Another common reason for failing was executing tasks in a view other than the one specified.

Unclear phrasing was also a reason for low task success, especially regarding task 2. Looking at the observations made during the lab tests of both studies, it was clear that a majority of the twenty participants in total misunderstood which activity was intended to be entered. Instead of the given ”Prepare breakfast”, several chose ”Eat breakfast”. The resulting task success for task 2 should also be considered biased for another reason: some participants got hints to get the activity right, whereas other participants did not get help.

The effect of the extreme outlier from study 1 can be seen clearly when comparing the confidence intervals in figure 61. There was a discussion whether to remove this outlier from the results to improve the data from the study. The decision was made to keep it in order to be able to compare the studies with each other. Given more time, a better choice would perhaps have been to replace the outlier with a new tester.

The areas of the lab testing (see the list in section 4.2.2) were experienced as more similar to the expectations in study 2 than in study 1. That is, participants in study 2 both expected and experienced the tasks to be easy.

Figure 61: Comparison of average time on task.


Figure 63: Comparison of expectation measure.

Figure 64 shows a comparison of the average SUS score (modified version) among participants in the lab tests of study 1 and 2, respectively.

Figure 64: Comparison of average SUS score (modified version) among participants in lab tests.

8.3 General discussion

As seen when comparing the results of the two studies, there is not always that big a difference between the feedback in studies 1 and 2. This applies especially to the beta testing sessions. One reason might be the quite small adjustments made in the redesign, due to lack of time. Since the application was developed in advance of this work, the initial part of the redesign involved getting familiar with the code base and its structure. The time needed for this part was underestimated, which led to less time for actual implementation. Several usability issues therefore had to be discarded, resulting in some feedback in study 2 being the same as for study 1.

In retrospect, the participants could have been more consciously selected, to obtain a more heterogeneous group of testers. This regards both age and how accustomed to smart phones each person was. The latter should also have been collected in conjunction with the test sessions, in order to analyze the results based on this parameter.

During the lab tests, the error rate for each task was gathered. Due to inconsistent test monitoring, some participants in study 1 received help if they were struggling with a task, while participants in study 2 did not receive help. Instead of relying on the error rate and task success to measure usability, the issues participants had during the tests laid the foundation for the redesign.


9 Conclusions and future work

It is of utmost importance to keep the intended users in mind when designing the user interface of a system or product. The initial evaluation in this work was carried out on an already existing and completed application, developed without testing on potential users. Accordingly, the results of study 1 revealed several usability issues concerning the interface.

Analyzing the results of the studies conducted in this work shows the importance of planning when designing a usability evaluation. Appropriate participants, usability metrics, goals and concerns driving the evaluation are all parameters that have to be carefully thought through. The more thorough the planning is, the more valuable the results of the evaluation will be.

Redesigning the user interface of a finished product based on an evaluation can be time consuming if the evaluation reveals severe usability issues concerning the core structure of the application. By implementing iterated redesigns based on user tests, usability can instead be ensured throughout the entire development process.

9.1 Future work

This section presents usability issues discovered in the studies that were not addressed in the redesign process.

Upload data

Results from the lab tests showed that it was desired to be able to upload data not only in the calendar view but also in the history view. Additionally, it was requested to make it possible to upload several days at a time.

Notifications

Different types of notifications to help the user remember to change activity were suggested by several participants in the studies, ranging from a discreet notification in the status bar at the top of the screen indicating that the app is running, to a sound and a pop-up with a reminder to change activity. This is something that should be added as an option for users who struggle to remember to update frequently.
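A minimal sketch of the reminder logic, independent of how the notification itself is shown; the class and method names are illustrative, and in the application the callback would post an Android notification rather than print:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Minimal sketch: once an activity has been running for a configurable amount of
// time, trigger a reminder callback. Here it just prints to stay self-contained.
public class ActivityReminder {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pendingReminder;

    public void onActivityStarted(String activityName, long remindAfterMinutes) {
        cancelReminder(); // a new activity replaces any previous reminder
        pendingReminder = scheduler.schedule(
                () -> System.out.println("Reminder: \"" + activityName
                        + "\" has been running for " + remindAfterMinutes
                        + " minutes - did you forget to switch activity?"),
                remindAfterMinutes, TimeUnit.MINUTES);
    }

    public void cancelReminder() {
        if (pendingReminder != null) {
            pendingReminder.cancel(false);
        }
    }
}

Here, onActivityStarted would be called whenever the user switches activity, and cancelReminder whenever the diary is updated in time.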

Smart functions

Many users found the activity hierarchy frustrating to search through when looking for a specific activity. The ability to search for an activity would ease the frustration and speed up the process of finding the right activity. It could also be used to assist new users in finding the correct path to where the activity is located, by highlighting the path through the activities.
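A minimal sketch of such a search, assuming the hierarchy is a tree of named nodes (the example tree and names are illustrative, not the actual PODD hierarchy); the returned path could be used both to jump to the activity and to highlight the route for new users:

import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Minimal sketch of a case-insensitive search over the activity hierarchy that
// returns the full path from the root to the first matching activity.
public class ActivitySearch {
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
        Node child(String childName) {
            Node c = new Node(childName);
            children.add(c);
            return c;
        }
    }

    // Depth-first search; returns the path root -> ... -> match, or null if not found.
    static List<String> findPath(Node node, String query) {
        if (node.name.toLowerCase(Locale.ROOT).contains(query.toLowerCase(Locale.ROOT))) {
            List<String> path = new ArrayList<>();
            path.add(node.name);
            return path;
        }
        for (Node child : node.children) {
            List<String> subPath = findPath(child, query);
            if (subPath != null) {
                subPath.add(0, node.name);
                return subPath;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Node root = new Node("Activities");
        Node household = root.child("Household");
        household.child("Cooking").child("Prepare breakfast");
        household.child("Cleaning");
        root.child("Leisure").child("Watch TV");

        System.out.println(String.join(" > ", findPath(root, "breakfast")));
        // prints: Activities > Household > Cooking > Prepare breakfast
    }
}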

The list of available places should depend on the chosen activity and filter out places where the chosen activity cannot take place.
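A minimal sketch of such a filter, where the mapping from activity to allowed places is illustrative and would in practice come from the activity and place data used by the application:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch: only show places where the chosen activity can take place.
public class PlaceFilter {
    // Illustrative mapping from activity to the places where it can occur.
    private static final Map<String, List<String>> placesForActivity = new HashMap<>();
    static {
        placesForActivity.put("Prepare breakfast", Arrays.asList("Home"));
        placesForActivity.put("Grocery shopping", Arrays.asList("Grocery store", "Another store"));
        placesForActivity.put("Watch TV", Arrays.asList("Home", "Friend's home"));
    }

    // Unknown activities fall back to the full list so the user is never left without options.
    public static List<String> placesFor(String activity, List<String> allPlaces) {
        List<String> allowed = placesForActivity.get(activity);
        return allowed != null ? new ArrayList<>(allowed) : allPlaces;
    }

    public static void main(String[] args) {
        List<String> allPlaces = Arrays.asList("Home", "Grocery store", "Another store", "Friend's home", "Work");
        System.out.println(placesFor("Prepare breakfast", allPlaces)); // [Home]
        System.out.println(placesFor("Unknown activity", allPlaces));  // the full list
    }
}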

Overview page

As the overview page was implemented in the redesign, it would be desirable to make improvements based on the issues from study 2 and perform another set of lab tests. The same goes for the ”Company and devices” page, which was lacking in several areas according to participants of study 2.

Secondary activities

Adding a secondary activity from the calendar view is too difficult; a solution to this problem would be to add a button for it on the overview page.

9.2 Final words

The studies conducted in this work have provided a foundation of knowledge regarding the usability of the PODD application. Certain important usability issues have been dealt with through a redesign, but as seen in the sections above there are several areas of the application that still lack usability. Further iterations of usability studies and associated redesign processes would ensure an even higher level of user-centered design.

