Linköpings universitet SE–581 83 Linköping

Linköping University | Department of Computer and Information Science

Bachelor thesis, 16 ECTS | Datavetenskap

2018 | LIU-IDA/LITH-EX-G--18/004--SE

Usability guided development of a participant database system

Användbarhetsledd utveckling av ett deltagardatabassystem

Joakim Falk

Supervisor : Anders Fröberg Examiner : Henrik Eriksson

Copyright

The publishers will keep this document online on the Internet – or its possible replacement – for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.


Abstract

This project consisted of the development and evaluation of a web based participant database system to replace a spreadsheet based one for the project “Vi Ses! Språkvänner i Linköping”. The design and implementation was done iteratively, using a collection of usability guidelines as well as the results of a set of user tests. User tests were also used for the evaluation of the web database system. The task during the user tests was participant matching and the main measurement taken was task completion time.

The project resulted in a web database system that could fully replace the spreadsheet database system. The evaluation tests showed that this system was both faster to match participants in and less error prone.


Contents

Abstract
Contents
List of Figures
List of Tables
1 Introduction
1.1 Motivation
1.2 Aim
1.3 Research questions
1.4 Delimitations
2 Background
3 Theory
3.1 Usability basics
3.2 Studying the users
3.3 User testing
3.4 Usability guidelines
3.5 Parallel and iterative design
4 Method
4.1 Analysis
4.2 Implementation
4.3 Evaluation
5 Results
5.1 Analysis
5.2 Implementation
5.3 Evaluation
6 Discussion
6.1 Results
6.2 Method
6.3 Ethical and social aspects
7 Conclusion
7.1 Future work


List of Figures

5.1 Screenshot of the spreadsheet database
5.2 Screenshot of the web database
5.3 Screenshot of the web database


List of Tables

5.1 First status set
5.2 Final status set
5.3 Participant information attributes
5.4 About test 1
5.5 About test 2
5.6 About test 3
5.7 About test 4
5.8 About test 5
5.9 About test 6
5.10 About test 7

1 Introduction

This project consisted of developing and evaluating a web based database system to replace the spreadsheet based database system that the project “Vi Ses! Språkvänner i Linköping” was using.

The system’s main functionality was to manage the project’s participants. This includes, for example, adding/removing participants, editing participant information and matching established participants with non-established participants.

1.1 Motivation

Vi Ses! Språkvänner i Linköping were having several problems managing their current participation database (a Microsoft Excel spreadsheet), such as data duplication, data inconsistencies and trouble viewing several participants at the same time. Therefore they requested a “better” database without, or at least with fewer of, these problems.

1.2 Aim

The purpose of this thesis project was to develop a more usable database system for managing Vi Ses!’s participants than their current spreadsheet database.

1.3 Research questions

“More usable” was too broad and subjective to measure accurately. Therefore, it was decided to focus on how much time it takes to do certain things in the system. More specifically, the focus should be on matching participants.

The research question would then become:

How can a new database system be realised such that it (1) can replace Vi Ses! Språkvänner i Linköping’s current database and (2) is faster to match participants in? (Assuming such a system can be realised within the scope of this project.)

In this context, matching participants means suggesting which participants might like to become language friends with each other.


Note that, even though the main focus would be on matching participants, it was still important to design and develop the whole system and not just a participant matching system in a vacuum. Otherwise it might have been overly simplified, not representative of how such a feature would actually function and therefore not fairly comparable to the old system. Also, a participant matching system without the rest of the database system would be significantly less useful to Vi Ses!.

1.4 Delimitations

As previously mentioned, the project was delimited to focus measurements only on participant matching.

The system should work on any modern desktop or laptop computer, as these were Vi Ses!’s primary work machines. Therefore, support for older computers or mobile devices was not necessary.

To avoid any additional costs, the new database should be hosted on Vi Ses!’s existing hosting provider (described in the background chapter 2). To protect the privacy of the project’s participants and prevent sabotage, the database should also require some kind of authorization.

2 Background

Vi Ses! Språkvänner i Linköping (shortened to Vi Ses! in this thesis) is a project by Linköpings Södra rödakorskrets. The project aims at bridging the gap between immigrants and established people in Sweden.

Participants who apply to partake in the project are assigned as established or non-established in Sweden. Established participants are then matched with non-established participants to form language friend pairs. The goals of these pairs are for the participants to do things together, learn from each other and tie bonds of friendship. For the non-established participants this usually means training their Swedish language skills and learning about Swedish culture.

The Vi Ses! project’s staff consists of a few volunteers and one employee, the project leader.

There exist similar language friend projects in many other parts of Sweden, run by various organisations. The homepage for the language friend project in Eskilstuna¹, run by Eskilstuna municipality, lists over one hundred places where such projects exist [17].

Prior to this project Vi Ses! already had a homepage hosted by a hosting provider. The site was using WordPress, which in turn used MySQL and PHP.

¹ The language friend project in Eskilstuna works, since 2014, on a national level to organise and support local language friend projects.

3 Theory

Since the usability of two particular systems is the main focus of this study, the theory chapter will also focus on usability. Specifically, this chapter will present the theory that is used as the basis for many design and implementation decisions of the new web database system as well as for the usability evaluation of both database systems.

There exist many methods for usability evaluation. These can be grouped into two main categories, namely empirical testing and inspection methods. [15]

The most popular inspection methods seem to be heuristic evaluation, cognitive walkthrough and pluralistic walkthrough. [3]

Among empirical testing methods, user testing, presented in 3.3, is probably the most commonly used one. [15]

3.1 Usability basics

Jakob Nielsen defines usability by five quality components [9][14]:

• Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?

• Efficiency: Once users have learned the design, how quickly can they perform tasks?

• Memorability: When users return to the design after a period of not using it, how easily can they reestablish proficiency?

• Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors?

• Satisfaction: How pleasant is it to use the design?

Nielsen also states that there are many other important quality attributes, such as utility, which refers to the functionality of the design, i.e. what the users actually need. He further defines a design as useful if it has both usability and utility. [14]

Jonas Löwgren defines usability slightly differently, using what he calls the REAL approach, consisting of four components [6]:


• Relevance: How well the system serves the users’ needs.

• Efficiency: How efficiently the users can carry out their tasks using the system.

• Attitude: The users’ subjective feelings towards the system.

• Learnability: How easy the system is to learn for initial use and how well the users remember the skills over time.

Comparing these definitions, one can identify that relevance is the same as utility, that attitude is mostly the same as satisfaction and that memorability is incorporated into learnability. Since making errors usually is time consuming, one can also presume that errors are incorporated into efficiency in the REAL definition.

3.2 Studying the users

When developing a new system there are several possible scenarios; for example, it could replace previous routines or introduce new procedures that would restructure the existing work. To understand the impact of the new system, and what the new system needs to support, it is important to study how the users’ work is carried out. [6]

There are many ways and methods to study how the users work. Löwgren sees observations and interviews as the easiest. He suggests starting with the former to get a feeling for the people and their tasks, which helps to structure the interviews. He also notes that people who talk about their work do not always give an accurate description of what they actually do. There can be various reasons for this, but it indicates the importance of not relying only on interviews. [6]

3.3 User testing

Nielsen sees user testing as the most basic and useful method of studying usability and states that it consists of three components: representative users, representative tasks and observation [14]. Dumas and Redish identify five components: the same three as Nielsen and two more. The first additional component is having a test goal, where the primary goal is to improve usability. The other is analysing the data from the test, as well as diagnosing the real problems [2].

Löwgren refers to this type of testing as empirical testing (or field testing if the location is the users’ workplace) and says that it is primarily used to measure efficiency and learnability through errors and task completion times [6].

Representative users and tasks

Jakob Nielsen argues that the main reasons that user testing works are user engagement and suspension of disbelief. The latter means that the user can effectively pretend that the scenario is real, even though it is purely artificial. He compares this to a television show, which viewers can engage in even while knowing it is just actors pretending to be those characters. However, for this to work the test needs realistic tasks and representative users who might actually perform similar tasks in the real world. [8]

Dumas and Redish also state that the tasks should be as realistic as possible and give examples where test team members act as customers (or whatever is appropriate for the scenario) to help strengthen the realism for the participants. The importance of representative test participants is summarized by: “If the participants in the usability test do not represent real users, you are not seeing what will happen when the product gets to the real users.” [2]


Observation

Three observation techniques are listed by Löwgren: in-person observation, using video cameras and having the system itself log everything relevant. For subjective assessments of the users’ attitudes he also mentions interviews, questionnaires and asking users to “think aloud” while performing the tasks. [6]

Nielsen stresses the importance for the observer to just observe; helping the users or directing their attention to any particular part of the screen will contaminate the test result [14]. It is also important for the observer to be quiet and preferably stay out of sight (e.g. behind the user) to keep the user’s suspension of disbelief [8].

Dumas and Redish state that it is the observation component that distinguishes usability user testing from focus groups, surveys and beta testing [2].

Formative and summative evaluation

Löwgren states that usability testing can be divided into formative and summative evaluation, the difference being where in the process the tests are done and what their goals are. Formative evaluation is used during development to provide input for the next iteration of the system. In contrast, summative evaluation is used when the system is finished to determine how good it is. [6]

3.4 Usability guidelines

There exist many usability guidelines to be considered when designing a system.

Consistency

Consistency means that the same thing works the same way in similar situations. It is important at several different levels. [6]

Nielsen argues that consistency is one of the most powerful usability principles. When things always behave the same, users know what will happen based on earlier experience and do not have to worry about it. They feel in control of the system and will like it more. In turn, if the system breaks the users’ expectations they will feel insecure. [13]

Steve Krug agrees that consistency is something to strive for. He does, however, warn about seeing consistency as an absolute good. He states that if something can be made significantly clearer by making it slightly inconsistent one should choose in favour of clarity. [5]

Responsiveness and feedback

Responsiveness is important for the efficiency and the satisfaction/attitude of the system. Nielsen presents three important response time limits and how they affect the user [16]:

• 0.1 second or less gives the user the feeling of instantaneous response and that the user is directly manipulating the data.

• 0.1 to 1 second keeps the user’s flow of thought seamless. The user can sense a delay but still feels in control.

• 1 to 10 seconds creates an unpleasant user experience but still keeps the user’s attention. The user feels at the mercy of the computer but can still handle it. After 10 seconds the user starts thinking about other things, and such delays will often make the user leave the system.

The cited sources of these numbers are two papers. The first one is specifically about response times and was written by Miller, a behavioural scientist. Miller presents seventeen categories of responses, with response times based on those categories and the psychological needs of users. For control actions given by the user, such as pressing a key, Miller also states the 0.1 second limit, as these actions should feel immediate. In a scenario where the user is reading text spanning several pages, Miller states that flipping to the next page should not take longer than one second, otherwise the delay “will seem intrusive to the continuity of thought”. For a user to keep his/her frame of mind Miller states a maximum limit of 15 seconds, a five second difference from Nielsen’s 10 second limit. [7]

The other cited source is a paper describing an experimental information retrieval system. It contains a table of exactly the limits presented by Nielsen. These are, in turn, cited from two books by Allen Newell and others. [1]

Nielsen suggests displaying a progress indicator, preferably a percent-done one, for operations taking more than 10 seconds, to provide feedback that the system is working and has not crashed, to indicate how long the user has to wait and to provide something for the user to look at. For operations that take about 2 to 10 seconds, a less conspicuous progress feedback can be displayed, for example a busy cursor. Operations that take less time require no special progress feedback. [11]
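To make these thresholds concrete, the following is a minimal sketch in client-side JavaScript (not code from the thesis implementation) of how the two feedback levels could be wired up; runWithFeedback, showProgressBar and hideProgressBar are hypothetical names:

```javascript
// Minimal sketch of Nielsen's feedback thresholds; all names here are
// hypothetical, not taken from the thesis implementation.
function showProgressBar() { /* e.g. reveal a percent-done element */ }
function hideProgressBar() { /* hide it again (no-op if never shown) */ }

function runWithFeedback(startOperation) {
  // Modest feedback (busy cursor) if the operation exceeds ~2 seconds.
  var busyTimer = setTimeout(function () {
    document.body.style.cursor = "progress";
  }, 2000);
  // Percent-done indicator if it exceeds 10 seconds.
  var progressTimer = setTimeout(showProgressBar, 10000);

  startOperation(function onDone() {
    clearTimeout(busyTimer);
    clearTimeout(progressTimer);
    document.body.style.cursor = "";
    hideProgressBar();
  });
}
```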

Löwgren states that the system should give feedback for every user action. The response to frequent and minor actions could be very modest. An extreme example is the rm command¹ in Unix, where the only feedback is a new prompt and a lack of error messages (assuming no errors occur). In contrast, unusual and major actions should be indicated as such. [6]

Undo

If the system provides undo functions, it becomes easier for the user to explore the system’s capabilities. However, if the system contains functions that cannot be undone, it is very important to inform the user of this. [6]

Shortcuts

As a user gains more experience with a system, the user typically wants to work faster and reduce the number of interactions. To achieve this, shortcuts can be implemented in different ways depending on the system. Examples are command abbreviations, special keys and keyboard shortcuts. [6]

Design for error

Löwgren suggests that one should try to design one’s system such that there are no serious errors for a user to make. If that is not possible, one should make sure that the user can back out of an error state without having to redo a lot of work. [6]

3.5 Parallel and iterative design

Parallel and iterative design are two different ways to consider design alternatives.

Iterative design is when a single design is continuously improved and refined. After each usability evaluation (such as user testing, see 3.3) the design is revised based on the usability findings.

In contrast, with parallel design, multiple alternative designs are created and then evaluated in parallel.

Nielsen favours iterative design and sees it as simpler, cheaper and stronger [10]. He presents four case studies using iterative design in which the median usability² increased by 45% after only the first iteration and 165% after two to four³ iterations [9].

¹ The rm command in Unix (or Unix-inspired systems) is used to remove a file in the filesystem.

² This was a numeric value calculated based on several usability measurements, such as task completion times, user satisfaction and number of errors.


Nielsen, however, recognizes one limitation of iterative design: it could encourage hill-climbing towards a local maximum when a superior solution in another area of the design space might be available. He does not see that as a huge problem for many designers, as most are working on features that already have a massive number of well-documented best practices. However, for those willing to invest more into usability he suggests combining parallel and iterative design. In particular, he suggests starting with parallel design, then creating a merged version of the best features of the designs (based on the usability evaluations) and after that continuing to refine this merged version with iterative design. [10]

4 Method

The project consisted of three main parts: analysis, implementation and evaluation. While the main work of these parts occurred in that order, the parts were often worked on simultaneously. See the respective sections for further details.

Most of the main project work was done in Vi Ses!’s office space, usually with the Vi Ses! project leader present.

4.1 Analysis

The main methods used to specify the requirements of the database system were interviews and meetings with the project leader of Vi Ses!. Other methods used were short questions and examining the spreadsheet database.

At the start of the project the broader and harder-to-change-later requirements were specified, such as what data would be mandatory for each participant. Later, usually in conjunction with its implementation, smaller and more specific details were specified, such as which particular statistics should be recorded.

4.2 Implementation

In the early and middle stages of development, design decisions were based on the requirement specification and theory, mainly the usability guidelines described in 3.4. Later, design decisions could be based on the results of the user tests, following iterative design. Parallel design, or a combination of the two, was deemed too time-consuming for this project. For more about parallel and iterative design, see 3.5.

The development was done iteratively; code changes were made in small portions, then informally tested and evaluated. For the sake of consistency (see 3.4), some parts of the implementation were changed and iterated over several times.

During the development of the client, any time a more complex feature was to be implemented, a library with that feature was searched for. This was to save time instead of “reinventing the wheel”. Additionally, a feature provided by a library would likely be better overall than what was possible to implement during the time of this project.


After the formative tests (see 4.3), the project leader gained permanent access to the database, allowing her to try out features other than participant matching. This was to find bugs and allow her to make suggestions.

4.3 Evaluation

Evaluation was done using user tests, described in 3.3. Both formative and summative tests were performed on both the old spreadsheet database and the new web database separately.

The only participant in these user tests was the Vi Ses! project leader. While this was far from ideal, she was the only representative user available during this project. To avoid confusion between the test participant and the participants in the database(s), she will in the context of the user tests still be addressed as the project leader instead of the test participant.

The tests were done in a separate room with only the test leader and the project leader present. The observation was done in person by the test leader in accordance with the presented theory. The main measurement taken was task completion time, but errors and other significant information were also noted. It was considered to make the new database system automatically record task times, but since the same could not be done for the spreadsheet database this idea was abandoned in favour of making the tests as similar as possible.

Task

As a result of one of the research questions (see 1.3), the task of the tests was to match participants in the database.

The participants in the databases used in the tests were made up, but designed to look like real participants. This made it possible to publish the test databases without violating any real participant’s privacy. The reason it was still important to make them look like real participants was to preserve the suspension of disbelief (see 3.3).

After each test the participants in the test database were changed so that the project leader would not be able to make the same match during the next test, as this likely would have affected the task completion time. In all tests there were six matchable established participants and a larger number of non-established participants. This was to represent the real-life distribution of established and non-established participants. The reason for having exactly six matchable established participants in each test was to make the resulting number of matches decently similar across all the tests.

Introduction

At the beginning of each test session, before the tasks, the test leader held a short introduction about the test. As it was always the same test participant, the project leader, the introduction was somewhat shorter after the first test, since she had already heard it before. The test introduction consisted of:

• Presenting the test and the purpose of it.

• Emphasising that the test was meant to test the system and not her.

• Explaining what the test leader would be doing during the test (measuring time and taking notes).

• Explaining that the test leader would not answer any questions during the test, unless necessary. Questions could be answered after the test instead.

• Presenting the task and asking her to make matches until she felt like she was done.

• Noting that she could make several matches for the same participant if she wanted to.

• Asking if there were any questions before starting the test.


Test conclusion

After the task the test leader answered any questions the project leader had. The project leader was also asked to state any comments she had about the task. Additionally, during some of the tests the project leader was asked a few questions.

After the test

After the test, the matches the project leader had made were reviewed by the test leader. The test leader also scanned the tested database for any errors made during the test that had not been detected.

5 Results

The following sections describe the results of this project, organised in the same manner as the method chapter.

5.1 Analysis

Several meetings were held to determine Vi Ses!’s workflow regarding participants and how to translate this into statuses representing where in the process a given participant was. The process could be summarised as follows:

1. The participant sends in an application using a paper form or through the Vi Ses! web page.

2. The participant is contacted. The project is presented to them and some questions are asked about matching preferences and (if needed) general information about the participant to complement the information from the application.

3. The participant is matched with one or several other participants.

4. Matched participants are contacted and asked if they would like to meet and see if they want to become language friends.

5. A meeting is booked at a time when both participants and someone from the Vi Ses! project can attend.

6. The meeting is held.

7. After being language friends for six months the participants are asked to partake in an evaluation study. Afterwards the participants are removed from the project.

Note that the actual process was not always as straightforward as the list. For example, if a participant was not content with his/her match, the process would jump back to step 3 in the list. Also, several steps were often done at the same time. For example, steps 2, 3 and half of 4 could be done during the same phone call.


Status          Description
1 New           New application
2 To be matched Contacted and is waiting for a match
3 Suggestion    Has at least one suggestion for a language friend
4 Feedback      Has met a suggested language friend
5 Active        Active language friend

Table 5.1: First status set

Later these statuses were revised: 4 Feedback was removed, as it was deemed unnecessary, and 2 To be matched was renamed to 2 Contacted. Additionally, another status called 0 Red marked was added to be able to mark someone as subject to removal from the database, but temporarily kept in case things change. This could for example be someone who sent in an application without any contact information or someone who could not be contacted after several attempts. If the imaginary participants in these examples later contacted Vi Ses!, the red marked status could simply be changed instead of them having to resubmit their whole application to Vi Ses!. Table 5.2 displays this updated status set.

Status       Description
1 New        New application
2 Contacted  Contacted and is waiting for a match
3 Suggestion Has at least one suggestion for a language friend
4 Active     Active language friend
0 Red marked To be removed

Table 5.2: Final status set

Participant information

An important part of the analysis was to determine what information about the participants needed to be stored in the database and which of these information attributes should be considered mandatory in the new system. This was sketched out quite early in the project, based on the spreadsheet database, and later revised several times as new improvements were realised and the Vi Ses! project leader made new requests. The final version of this set of participant information attributes can be found in table 5.3. These attributes were what would later be implemented as columns in the participant table in the MySQL database.

For flexibility, and particularly to be able to add participants who apply with incompletely filled in applications, not many participant information attributes could be mandatory. As seen in table 5.3, only five attributes were required by the database system to have assigned values. One of them (id) would be automatically assigned a value by the database system. The others could easily be set by the user adding the participant to the database, no matter how incomplete the received application was. The only slight complication was that the application date would not be known when receiving a paper application without this information. However, it was not very important that the application date be entirely accurate, so the date when the user added the participant to the database would suffice.

The established attribute could be seen as semi-mandatory. While it would not be required to add a participant, it would be required to match the participant. This was because established participants were always matched with non-established participants. It would not make sense to match a participant with no established value, since one would not know if the participant should be matched with an established or a non-established participant.


Attribute            Description
id*                  A unique identifier for each participant.
status*              A representation of where in the process the participant is.
status notes         Status related notes regarding the participant.
application date*    The date of application.
application mean*    If the participant applied using a paper form or through the web page.
accepted*            If the participant has explicitly accepted to have his/her information stored in the database.
established          If the participant is established or new in Sweden.
gender               The gender of the participant. Can have the values woman, man and other.
civil status         If the participant is single or in some kind of relationship.
age of children      The age/ages of the participant’s child/children.
language friend with If the participant wants to be language friends with his/her family or by himself/herself.
name                 The participant’s name.
address              The participant’s address.
phone                The participant’s phone number.
e-mail               The participant’s e-mail.
age                  The participant’s age.
background           Some background information about the participant, e.g. education, prior and current occupations etc.
language             The language/languages that the participant speaks and possibly his/her country of origin.
interests            The interests of the participant.
notes                Notes about the participant.

Table 5.3: Participant information attributes. Attributes marked with * must have a value.
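As an illustration of the mandatory-value rules above, here is a hypothetical client-side sketch (not the thesis code) of a participant object and a check of the required attributes before submission; id is omitted because the database assigns it automatically:

```javascript
// Hypothetical sketch: required attributes from table 5.3, minus id,
// which the database system assigns automatically.
var REQUIRED_FIELDS = ["status", "application_date", "application_mean", "accepted"];

// Returns the names of required fields that are missing or empty.
function missingRequiredFields(participant) {
  return REQUIRED_FIELDS.filter(function (field) {
    return participant[field] === undefined || participant[field] === "";
  });
}

var example = {
  status: "1 New",
  application_date: "2017-06-12", // the date of entry can stand in for paper forms
  application_mean: "paper",
  accepted: true,
  established: null,              // semi-mandatory: needed only for matching
  name: "Test Person"
};
console.log(missingRequiredFields(example)); // -> []
```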

The spreadsheet database contained the attributes occupation, education and work experience. These were all combined into the background attribute. The reason was that these attributes often contained the same values (e.g. when someone had an education in a field and was working in that same field) or no value at all. Additionally, the attributes were not considered important enough to warrant three separate attributes.

Using the same reasoning, the attributes language and country of origin were combined into language.

The age of children attribute somewhat corresponds to family situation in the spreadsheet database. However, family situation was broader and could for example contain information about the participant’s partner. It was considered to have a binary field called children as well. Eventually it was decided that age of children was sufficient by itself.

The application mean and accepted attributes were the most recently added ones and did not have any equivalent attributes in the spreadsheet database. These attributes were added to ease the management of stored applications and make it easier for Vi Ses! to comply with privacy laws.

To get an idea of what the spreadsheet database looked like, see figure 5.1. It is a screenshot of the spreadsheet database used in the user tests. Apart from the actual participant information, and a few unnecessary columns, it was identical to the real spreadsheet database.


Figure 5.1: Screenshot of the spreadsheet database in Microsoft Excel (with test participants). 17 out of 24 columns can be seen. The non-selected sheets were only used for displaying statistics.

5.2 Implementation

The new Vi Ses! web database was implemented in three main parts: the server, the client and the database tables. The server was written in PHP for easy integration with the existing WordPress installation. The database tables used by the server were implemented in the existing MySQL database. The client was implemented in JavaScript, HTML and CSS.

For increased responsiveness, AJAX¹ was used. The reason that AJAX was chosen over WebSockets was that WebSockets was thought to take longer to implement, since PHP did not have any native support for WebSockets.
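The following sketch shows the AJAX pattern described above using jQuery; the endpoint URL, parameters and rendering helper are illustrative assumptions, since the thesis does not show the actual client code:

```javascript
// Hypothetical rendering helper; the real client code is not shown in the thesis.
function renderParticipantTable(participants) { /* fill the table rows */ }

// Fetch participant data from the PHP server without reloading the page.
$.ajax({
  url: "/participants/list.php", // hypothetical endpoint URL
  type: "POST",
  data: { status: 2 },           // e.g. only participants waiting for a match
  dataType: "json"
}).done(function (participants) {
  renderParticipantTable(participants);
}).fail(function () {
  window.alert("Could not load participants."); // simple error feedback
});
```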

Authentication was done using WordPress and a custom capability created using the Members plug-in. The reason for this was to simplify account management while still allowing for accounts with specialised capabilities².

Client libraries

To improve the interface, improve browser support and simplify the implementation of the client, a few libraries were used.

One of the main libraries used was jQuery. It mainly simplified the implementation by providing an API for “HTML document traversal and manipulation, event handling, animation, and Ajax (...) that works across a multitude of browsers”[4]. It was also required by several other libraries used for the client.

For sorting, filtering and displaying tables, tablesorter (the fork by Mottie) was used. The original had not been updated since 2015, but the fork was actively being developed and contained many improvements compared to the original.
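A minimal sketch of initialising such a table with Mottie's tablesorter fork is shown below; the element id and the chosen options are illustrative, not taken from the thesis code:

```javascript
// Illustrative initialisation of the participant table with tablesorter
// (Mottie's fork); the id and options are assumptions, not the thesis code.
$(function () {
  $("#participant-table").tablesorter({
    widgets: ["zebra", "filter"], // row striping and per-column filtering
    sortList: [[0, 0]]            // initially sort ascending by the first column
  });
});
```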

¹ AJAX, originally an abbreviation of Asynchronous JavaScript and XML (but it does not have to involve XML), is a technique used to allow JavaScript to communicate with the web server without reloading the entire current web page.

² Capability is the terminology used in WordPress for a permission. Examples of other capabilities in WordPress are edit pages, delete users and read private posts.


For many other interface improvements jQueryUI was used. For example, it was used for displaying calendars (for picking dates), accordions (expandable/collapsible areas) and pop-up dialogues.

On the statistics page, Chart.js was also used for displaying a pie chart.
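As an illustration, such a pie chart could be created as follows, assuming the Chart.js 2.x API that was current at the time; the canvas id, labels and values are made up:

```javascript
// Illustrative pie chart of participants per status (made-up numbers),
// assuming a <canvas id="status-chart"> element on the statistics page.
var ctx = document.getElementById("status-chart").getContext("2d");
new Chart(ctx, {
  type: "pie",
  data: {
    labels: ["New", "Contacted", "Suggestion", "Active"],
    datasets: [{
      data: [12, 30, 8, 45],
      backgroundColor: ["#e74c3c", "#f1c40f", "#3498db", "#2ecc71"]
    }]
  }
});
```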

Client pages

The client was implemented as a set of pages navigated between through a navigation menu, just like a normal web site. The navigation menu had a couple of categories, but the only one of interest for this thesis is the participant category. It contained six pages, which will now be described in further detail.

The first page was the Add participant page. It was basically just a large form where users could add a participant and all its associated data.

The second page was the Edit participant page. From there a user could view all participants in the database and edit almost everything about any given participant.

The table displaying the participants would by default only display a few participants at a time, so as to not take up too much vertical space on the page. The participants could be navigated through using next, previous, first and last buttons. The number of participants shown could be changed using a simple drop-down list. The table was sortable by any column or columns and could be filtered using drop-down lists for columns with few possible values (such as status) and free text for others. There also existed an (initially) collapsed accordion for additional table settings, where the user could select which columns should be visible or hidden. This was important because there were too many columns (too much participant data) to effectively display all at once. Additionally, many columns contained data that was only useful for the user in some situations. For example, emails and phone numbers were usually only useful if the user intended to contact the participants.

When a participant in the table was selected, it was highlighted and its corresponding data was displayed beneath the table in editable forms. Using these, a user could change almost all participant data. Some restrictions applied to ensure database consistency. For example, established could not be changed for a participant that was an active language friend (status 4 Active) and participant id could never be changed. From here, a participant could also be removed.

The third page was the Match participants page. Using this, a user could match participants, creating suggestions for which participants should form language friend pairs.

The page consisted of two tables; one containing participants established in Sweden, the other containing participants new in Sweden. The tables were identical to the one on the Edit participant page, except for the restrictions on which participants were displayed in the tables. The table settings accordion also had a few extra settings, such as which table should be displayed first and whether participants with status 1 New should be displayed as well. The tables were placed in accordions with headers that displayed a summary of the selected participant’s data. The idea here was that when a user selected a participant in one table, the user could collapse that table to find a match for that participant in the other table while still easily being able to see the first participant’s data and not having to scroll through a bunch of irrelevant participants in the process.
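A sketch of how such collapsible table accordions could be set up with jQuery UI follows; the element ids are illustrative, not from the thesis code:

```javascript
// Illustrative setup of the two table accordions with jQuery UI;
// the ids are assumptions, not taken from the thesis code.
$("#established-accordion, #non-established-accordion").accordion({
  collapsible: true,      // allow the user to collapse a table entirely
  heightStyle: "content"  // let each panel grow with its table
});
```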

Figure 5.2 is a screenshot displaying this page, with both table accordions expanded. Figure 5.3 displays the same page with the top table accordion collapsed.

Figure 5.2: Screenshot of participant matching in the web database with both table accordions expanded. Due to screen space limitations only the top table is fully seen.

Figure 5.3: Screenshot of participant matching in the web database with the top table accordion collapsed. Since a participant is selected in the top table, its accordion’s caption text is a summary of that participant’s data.

The matching was done simply by selecting one participant in each table and clicking the Add matching suggestion button. Any given participant could have any number of suggestions.

The fourth page was the Suggestions page. Here the user could view all matching suggestions, remove any given suggestion or upgrade it to an active language friend pair.

The suggestions were displayed in a table much like those previously described, only simpler. Each suggestion contained brief information about both participants and a button for each participant which, when clicked, displayed a pop-up dialogue with complete information about the participant. This table did not have any table settings accordion as it did not need the extra options.

Here the user could also see and change status notes for the participants involved in the selected suggestion.
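The pop-up dialogue with complete participant information could be realised with jQuery UI's dialog widget, as in this illustrative sketch (the element id, title and size are assumptions):

```javascript
// Illustrative pop-up with a participant's complete information,
// assuming a hidden <div id="participant-details"> filled in beforehand.
$("#participant-details").dialog({
  title: "Participant information",
  modal: true,
  width: 500,
  buttons: {
    Close: function () { $(this).dialog("close"); }
  }
});
```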

The fifth page was the Active pairs page. At this page the user could see all active language friend pairs and remove them, either if the participants did not want to be language friends any longer or if they had been language friends long enough that they were considered done in the eyes of the project.

The active pairs were displayed in a table exactly like the one on the Suggestions page, except that it also included a date for when they became language friends.

Here the user could also see and change the status notes for the participants, just like on the Suggestions page.

The sixth and final page was the Statistics page. Here various statistics were displayed, grouped into current/total and monthly statistics.

Usability guideline realisations

The usability guidelines described in the theory chapter (3.4) were realised in the implementation in several different ways.

As previously mentioned, AJAX was used for increased responsiveness (3.4). While no formal tests were conducted on response times, a rough approximation could be estimated through repeated usage of the system. Loading whole pages took from about one second to a few seconds. Actions on a page that required an AJAX call took from 0.1 seconds to a little over one second. Actions on a page that only required JavaScript took less than that, appearing instantaneous. The longer AJAX response times were usually experienced when no server requests had been sent for a while. On rare occasions response times spiked at more than 10 seconds. Since response times were not the focus of the project, the reason for these differences was not examined, though a guess could be that the former had to do with some kind of server cache and the latter was just unrelated, temporary network issues.

Feedback (3.4) given to the user was implemented to let the user know the system was working as well as to indicate when the system was done. During all AJAX calls the cursor was changed to a busy cursor until the response from the server had been received. User actions that did not result in any natural feedback were given feedback in the form of a message that stated the action had been performed or, in the case of error, what went wrong.
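One way to realise the busy cursor, shown here only as a sketch since the thesis does not include the actual code, is jQuery's global AJAX events:

```javascript
// Sketch: toggle a busy cursor for the duration of any AJAX call.
$(document)
  .ajaxStart(function () { document.body.style.cursor = "progress"; })
  .ajaxStop(function () { document.body.style.cursor = ""; });
```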

Consistency (3.4) was implemented throughout the system by making similar elements look and behave similarly. For example, this was the reason for making the tables across the different pages similar even though they contained different types of data.

The system was designed for error (3.4) in two main ways. The first was to validate all user-entered data. The second was to issue a warning whenever the user had altered data, but not saved it, and tried to perform an action that would discard the unsaved data.
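The unsaved-data warning could, for example, be built around a dirty flag, as in this hypothetical sketch (the form id is an assumption):

```javascript
// Hypothetical sketch of the unsaved-data warning around a dirty flag.
var unsavedChanges = false;

// Any edit in the participant form marks the data as unsaved.
$("#participant-form").on("change", "input, select, textarea", function () {
  unsavedChanges = true;
});

// Saving the form clears the flag.
$("#participant-form").on("submit", function () {
  unsavedChanges = false;
});

// Leaving the page with unsaved edits triggers the browser's confirmation.
window.onbeforeunload = function () {
  if (unsavedChanges) {
    return "There is unsaved participant data.";
  }
};
```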

While the system was not implemented with an explicit undo (3.4) function, most actions could be undone using other actions. Actions that could not be completely undone were those that resulted in the removal of a participant and those that explicitly altered the statistics. The reason that statistics-altering actions could not be undone was that no functionality was implemented to adjust the statistics. This functionality was not implemented mainly because of time constraints.

Shortcuts (3.4) were not explicitly implemented, but were supported through keyboard shortcuts in the browser and in the client libraries used.

5.3 Evaluation

The following subsections describe the user tests performed with the project leader as well as a small interview with her.

The project leader described herself as a very competitive individual and expressed several times that she had trouble ignoring the time-taking aspect of the tests. The test leader noticed that she often appeared to hurry through the tasks, especially the more tests she participated in. How this possibly affected the results of the tests is discussed in 6.1.


The task preparation was a time-consuming process, mainly due to designing realistic participants. In total, 150 participants were created for these tests.

Test 1

Short information about this test can be found in table 5.4.

The project leader began the task by writing down short summaries (including IDs) of all the established participants on paper. This was a method she had started using while matching participants in the real database to allow her to view several participants at the same time.

Type          Formative
Database      Spreadsheet
Date          2017-07-03
Total time    21.5 minutes
Final matches 6

Table 5.4: About test 1

After about 6 minutes and 20 seconds there was a short break for a few minutes because the project leader was disturbed and had to help someone else in another room. The time of the break was excluded from the total time.

During the task the project leader matched participants that had status 1 New. Previously it was not considered that users would like to do this, so only participants with status 2 Contacted and 3 Suggestion could be matched in the web database. After this test an option was added to show, and thus be able to match, participants with status 1 New as well.

When the task ended the project leader had made the following mistakes:

• Almost missed that a participant had status 0 cancelled.

• Wrote the wrong ID. She later realised and fixed this.

• Missed writing ID and changing status. She later realised and fixed this.

• Started writing status notes for the wrong participant.

After the test the project leader was asked how the matching speed during the test felt compared to when she was doing it in the real database. The answer was that it felt like usual. She was also asked if the test database felt like a good representation of the real database. The answer was yes, but that it was somewhat simpler due to fewer participants. At this point the test database had 85 participants and the real database had about 400 participants.

Test 2

Short information about this test can be found in table 5.5.

Type          Formative
Database      Web
Date          2017-07-04
Total time    11 minutes
Final matches 6

Table 5.5: About test 2

This was the first time the project leader saw the matching interface and the first time she used the web database. Since the test was focused on matching participants, and not page navigation, the test leader showed which pages would be used during the task so that the project leader did not have to look through the different pages during the task to figure out which ones would be relevant.

During the task the project leader initially had some trouble finding the match button. She did not open the table settings, therefore only using the default settings. As a result, she did not match any participants with status 1 New. She also did not notice that she could collapse the table accordions.

After the task, the project leader said that she was amazed by how fast it went compared to the spreadsheet database.

At the end of the session, the test leader showed the project leader the features that she had missed so that they could be evaluated during the next test.


Test 3

Short information about this test can be found in table 5.6.

This was the second time the project leader used the web database. The test took place quite a long time (about two months) after the previous test due to vacations and the project leader’s schedule.

Type          Formative
Database      Web
Date          2017-09-11
Total time    9.5 minutes
Final matches 6

Table 5.6: About test 3

During the task the project leader used the table settings to show an additional column (status notes). She did, however, miss the setting to show participants with status 1 New. She also forgot that she could collapse the table accordions.

Something notable that the project leader did during the task was to remove a matching suggestion she had previously added. If she had not done this, a non-established participant would have gotten two suggestions. This was not something she had done in previous tests.

After the task the project leader commented that it was a little harder to match because there were few non-established participants. Usually, in the real database, she had about 100 to choose from, but here there were only 8. Note that this would have been less of a problem if she had enabled the setting to show status 1 New participants, increasing the number to 24. Another comment was that in the real database there were usually notes about whether the participants had any preferences regarding what kind of persons they would like to be matched with.

At the end of the session the project leader was reminded of the features she had missed. She also requested that the status notes column be shown by default, so this change was implemented shortly after the test.

Test 4

Short information about this test can be found in table 5.7.

Based on the comments after test 3 more participants had matching preferences than in previous tests. This was also true for the following tests.

Type          Summative
Database      Spreadsheet
Date          2017-10-03
Total time    11.5 minutes
Final matches 6

Table 5.7: About test 4

Just like in the first test, the project leader began the task by writing down short summaries about all the established participants on paper. One difference was that she also did the matching part on paper, using the database as a read-only view. Only after she had made all the matches did she add them to the database.

While entering the matches she mistakenly wrote the matches for the established participants in the wrong column. This was not realised during the task and thus was not corrected.

After the test the project leader was asked why she thought that the task completion time for this test was much lower than for the first test, the previous test using the spreadsheet database. Her answer was that it probably was partly because she was more used to the testing at this point and partly because her method for matching participants in the spreadsheet database had been refined since the first test.

Test 5

Short information about this test can be found in table 5.8.

As mentioned in the method chapter (4.2), the project leader gained permanent access to the database after the formative tests. This had resulted in several changes for the participant matching. The order of the table columns had been completely reordered according to the project leader’s wishes, the application date and phone columns were changed to be shown by default and the option to show participants with status 1 New was now enabled by default.

Type          Summative
Database      Web
Date          2017-10-25
Total time    5.5 minutes
Final matches 6

Table 5.8: About test 5

The task was completed without any complications. At one point she opened the table settings to show the age of children column, because one of the participants she was matching requested to be matched with someone who had a child of the same age as hers.

During the task, the project leader did not make use of the collapsible table accordions.

At the end of the task the project leader spent almost a minute reviewing her matches before she decided she was done.

After the test the project leader had only the comment “Felt good!” and said that it was easier now that participants with status 1 New were shown by default.

Test 6

Short information about this test can be found in table 5.9.

Immediately at the beginning of the task, before even looking at any participants, the project leader enabled the age of children column in the table settings. After the test she was asked why and explained that she had started to take children more into account during the matching process.

Type          Summative
Database      Web
Date          2017-10-31
Total time    4.5 minutes
Final matches 7

Table 5.9: About test 6

During the task she also enabled the language column because one of the participants she was matching had said that he would like to be matched with someone who knows an unusual language. She was a bit unsure what languages were meant by that, so she matched him with two other participants, one speaking Polish and one speaking Korean. Whether this fictitious person would consider those languages unusual could be up for debate, but her reasoning was that it was rare that participants in the project spoke those languages. However, the participant who spoke Korean had requested that she would like to be matched with another woman. This was something the project leader missed.

Just like in the last test, the project leader did not use the collapsible table accordions. The project leader did not take much time to review her matches as she felt quite confident about them.

After the test the project leader said that she thought that the test was good and conformed to reality.

Test 7

Short information about this test can be found in table 5.10.

The project leader used the same technique as in test 4: writing down short summaries about all the established participants on paper, as well as the matches.

Type          Summative
Database      Spreadsheet
Date          2017-11-01
Total time    11.5 minutes
Final matches 6

Table 5.10: About test 7

When entering the matches the project leader mistakenly started to write in the wrong column, overwriting other data. However, she noticed this somewhat quickly and corrected her mistake before continuing by using the undo button.

The project leader did not take any additional time to review her matches as she felt sure about them and confident she had not made any mistakes. As far as the test leader could determine, this was correct.


Like in the last test, she thought that the test was good and conformed to reality. This time she elaborated a little, saying that the participant test database was now closer to the real database compared to the first tests.

Overall thoughts

On 2017-11-14 a small interview with the project leader was conducted to let her express her overall thoughts on the new web database. Here follows a summary of her thoughts expressed during that interview.

The project leader thought that the web database was very good and said that she was very pleased with it. She thought that it gave her a good overview, displaying all the information she needed.

She continued to explain that it made her more efficient. The biggest difference was felt using the participant matching system. She also expressed that it was great that she could have several pages opened at once, using web browser tabs. In that way she could work with several things at once, which was especially useful when she got interrupted and had to do something different in the database system. Previously, with the spreadsheet database, this was usually handled by writing down the thing she had to do on paper and then doing it at a more convenient time, assuming she remembered it and did not lose the paper.

The project leader felt that with the web database she did not have to be afraid of accidentally deleting anything or causing any inconsistencies with participant IDs, since IDs are handled automatically in the web database. She also expressed that she no longer feared “losing someone”. As a result she felt less stressed and less nervous.

6 Discussion

6.1 Results

The results of this thesis are both the test results from the user tests and the web database system itself.

Test results

The task completion times from the usability tests varied somewhat. Tests with the spreadsheet database ranged from over 20 minutes down to about 11 minutes, and tests with the web database ranged from over 10 minutes down to under 5 minutes. The summative tests, however, were more uniform, at about 11 minutes for the spreadsheet database and about 5 minutes for the web database.

There are several possible explanations for the variation in test times. One likely explanation is that the project leader refined her matching methods over time, thus performing the tasks more effectively. This is, for example, backed up by the observation described under Test 3.

Another likely explanation is that the test situation itself affected the project leader’s behaviour, and thereby her task completion times, and that this effect changed as she got more used to being tested. This is backed up by the observation that she appeared more competitive the more tests she performed.

Yet another explanation is that the tasks themselves, or more accurately the database participants in the tasks, might have affected the task completion times. While the tasks were designed to be similar, they obviously could not be exactly alike, as discussed in 4.3. Also, since the database participants were always designed to look like real participants, the participant creation process might have produced more realistic participants over time. Comments made by the project leader during the test sessions support this.

An explanation regarding test time variations for the web database tests is that the project leader got more used to the interface and learnt how it worked. This should be especially true after Test 3, when the project leader used the system outside of the tests. However, this would not apply as much to the spreadsheet database, as the project leader had already used it for quite some time before any of the tests.


It is entirely possible that the reason for the variation in task completion times is a combination of all these explanations.

Despite the variation in task completion times, the tests still indicate that participant matching in the web database is faster: every single spreadsheet database test took longer than any of the web database tests. Granted, one web database test was only 30 seconds faster than the fastest spreadsheet database test, which is not that much. However, this was the first test with the web database, before the project leader had even seen what participant matching in the web database looked like. Additionally, the participant matching page in the web database changed several times after that. Looking at the other extreme, the fastest web database test was more than five times faster than the slowest spreadsheet database test.

Counting only the summative tests, the task completion times show that participant matching is about twice as fast in the web database as in the spreadsheet database. However, because there were only a few such tests, one should perhaps not focus too much on exactly how much faster the web database’s participant matching is, but rather on the fact that it is faster. As this is indicated by the formative tests as well, this seems to be a rather safe conclusion.

Even though task completion time was the main measurement during the tasks, it was not the only one. Errors made were also noted. While the number of errors varied throughout the tests, they tended to be fewer (depending on what is counted as an error) and less severe in the web database. The most likely reason is that some errors that can be made in the spreadsheet database are simply impossible to make in the web database. For example, in the web database the user cannot accidentally write matches in the wrong column or accidentally delete participant information.
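As a minimal illustration of this error-prevention-by-design principle, consider the following TypeScript sketch (all names and types are hypothetical, not taken from the actual implementation): when matches can only be created through a dedicated operation on structured data, rather than through free-form cell editing, whole classes of spreadsheet mistakes become impossible.

```typescript
// Hypothetical sketch: participants are structured records and matches
// can only be created through a dedicated, validating operation.
interface Participant {
  id: number; // assigned by the server, never typed in by the user
  name: string;
}

interface Match {
  establishedId: number;
  newcomerId: number;
}

const matches: Match[] = [];

// The only way to record a match; there is no free-text column to
// mistype into, and participant records themselves are never touched.
function createMatch(established: Participant, newcomer: Participant): Match {
  if (established.id === newcomer.id) {
    throw new Error("A participant cannot be matched with themselves");
  }
  const match: Match = {
    establishedId: established.id,
    newcomerId: newcomer.id,
  };
  matches.push(match);
  return match;
}
```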

Database system

Although the Vi Ses! project leader was very pleased with how the web database turned out, it was certainly not perfect. It could always offer more features and could most likely be made more usable, especially in those parts that were not usability tested. Ideas for improvements are presented in 7.1.

6.2 Method

The method used in this project was heavily influenced by resource limitations. The most obvious of these limitations was time. With more time, more designs could have been explored, for example by using the combined parallel and iterative design approach described by Nielsen [10], likely yielding a more usable final design. More time could also have been spent on performing more user tests, yielding more reliable test results.

Another limitation was manpower. Increased manpower could, to some extent, have been a substitute for time.

Yet another limitation was the number of user test participants. More representative user test participants would likely have resulted in both a more usable final design and more reliable test results. Representative users, however, would have been much harder to come by than time or manpower, as there was no one else who worked with participant matching in the Vi Ses! project.

Theoretically, one could have tried to increase the number of representative user test participants by contacting people who worked with participant matching in other language friend projects. This would have required much more time/manpower and possibly other resources, since these people would have been located in other areas. It is also unclear whether these other users would have been truly representative, as other language friend projects might work with their participants in an entirely different way.

Another way to counteract the problem with the low number of representative user test participants could have been to perform user tests with random user test participants that are not as representative of the system’s users. This should then have been a complement to, not a replacement for, the user tests with representative users. Naturally, this would have required more time/manpower as well.

Given that the user tests were always performed with the same participant, the validity of the results could certainly be questioned. While the goal was to measure and compare the usability of the old spreadsheet database and the new web database, it is entirely possible that the Vi Ses! project leader was a statistical outlier and that she did not think about or perform the tasks in the way most users would. In that case the measurements would not reflect the usability for most users. However, for the Vi Ses! project this would not be a problem until the project leader was replaced or someone else started to perform participant matching. Even in this unlikely case, the web database system’s participant matching would retain some objective usability advantages, such as making certain errors impossible.

Sources

Most of this thesis’ sources are sources on usability in general. Some of them are fairly old, which could raise concerns that they are outdated. However, Nielsen states that “Usability is a very stable field” and that “[many] keep finding the same result study after study” [12]. This, together with the fact that the sources concern usability in general, which should be better researched than more specific topics, greatly lessens this concern.

Among the thesis’ sources there is a rather large number of web sources. While web sources are generally considered less reliable, this should not be much of a problem in this case, since they are mainly web articles by Jakob Nielsen, a renowned usability expert. Most of these articles appear to remain unaltered after publication, and those that have been updated clearly indicate what was updated and when. Under those conditions, Nielsen’s web articles can be considered about as reliable as books on the subject (or possibly more reliable, since outdated information would probably have prompted an update to a web article, but not necessarily to a book).

More relevant sources on more specific usability issues, such as studies about having only one user test participant, would have been preferable. Unfortunately, no such sources could be found.

6.3 Ethical and social aspects

As this work concerns the development of a database for the Vi Ses! project’s participants, the main ethical aspect is the privacy of these participants.

Their privacy was protected from unauthorised users of the web database by requiring authentication via WordPress. While it was outside the scope of this project to evaluate the security of WordPress, it was assumed to be reasonably good.
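A minimal sketch of this gatekeeping principle could look as follows; the isLoggedIn check is a hypothetical stand-in for the session handling that WordPress performs in reality, and none of the names are taken from the actual implementation:

```typescript
// Hypothetical sketch: participant data is only reachable through a
// function that first verifies the (WordPress-provided) session.
type Session = { userId: number } | null;

// Stand-in for the session check that WordPress performs in reality.
function isLoggedIn(session: Session): boolean {
  return session !== null;
}

function getParticipants(session: Session): string[] {
  if (!isLoggedIn(session)) {
    // Unauthenticated requests never reach the participant data.
    throw new Error("Authentication required");
  }
  return ["...participant records would be fetched here..."];
}
```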

Another way the privacy of the participants affected the web database was the implementation of the accepted participant information attribute (see 5.1). However, the value of this attribute did not affect any other functionality; a participant with the accepted value set to false could still become a language friend and partake in the whole process. It was thus entirely up to the user to ensure the participants’ consent. Note that this was still an improvement over the spreadsheet database, which had no such field at all.
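Expressed as a data-model sketch (field and function names are assumptions for illustration only), the attribute is stored on the participant but deliberately left out of the matching logic:

```typescript
// Hypothetical sketch of the described behaviour: the accepted flag is
// recorded on the participant but does not gate any functionality.
interface Participant {
  id: number;
  name: string;
  accepted: boolean; // has the participant accepted information handling?
}

// Matching eligibility intentionally ignores the accepted flag; the
// flag is informational, and compliance is left to the user.
function canBeMatched(_p: Participant): boolean {
  return true;
}
```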

The social aspect of this work is that, with the help of the new web database, the Vi Ses! project can hopefully work more efficiently, effectively aiding their goal.

(32)

7 Conclusion

The resulting web database system was realised as a set of web pages, each with a specific functionality, iteratively designed and developed using a collection of usability guidelines and the results of user tests.

The web database succeeded in replacing all essential functionality in the spreadsheet database. The web database also included additional features, such as automatic data validation and integration with WordPress. It also succeeded in outperforming the spreadsheet database in participant matching, based on the user tests. This, together with how pleased the Vi Ses! project leader was with the web database (see 5.3), means that the aim of this project was fully achieved.

Compared to the old spreadsheet database, the new web database will likely increase the Vi Ses! project leader’s work efficiency, prevent many previously common errors and generally help in managing the project’s participants. Effectively, the web database will likely aid the goal of the Vi Ses! project in the long run by allowing the project leader to spend less time and energy managing the participants in the database. It might also help reduce the project leader’s overall stress.

7.1 Future work

The web database system could likely be improved in several ways. This section describes a few ideas on what these improvements could involve.

One improvement would be to implement a wider range of authentication levels. This could be as simple as adding read-only access, or as involved as conducting a complete study on which user roles would be suitable and desirable for this system and implementing authentication levels based on that.
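As an illustration of the simpler variant, a read-only level essentially amounts to mapping each role to a set of permitted actions and checking that set before any operation runs. The following sketch uses assumed role and action names, not ones from the actual system:

```typescript
// Hypothetical sketch of a read-only authentication level: each role
// maps to the set of actions it may perform.
type Role = "admin" | "readonly";
type Action = "view" | "edit" | "match" | "delete";

const permissions: Record<Role, Action[]> = {
  admin: ["view", "edit", "match", "delete"],
  readonly: ["view"],
};

// Checked before every operation; read-only users may only view.
function canPerform(role: Role, action: Action): boolean {
  return permissions[role].includes(action);
}

console.log(canPerform("readonly", "view")); // true
console.log(canPerform("readonly", "edit")); // false
```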

Another possible improvement would be to change the communication from AJAX to WebSockets. This would most likely improve the response times significantly. However, since response times were already usually fairly low (see 5.2) and the spikes might still occur with WebSockets, it is unclear whether the gain of this change would be worth the work required.
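To make the difference concrete, the sketch below contrasts the two approaches; the endpoint paths are assumptions, not the system’s actual URLs. With AJAX every update costs a full HTTP round trip, whereas a WebSocket keeps one connection open over which the server can push changes:

```typescript
// Current approach (sketch): one HTTP request per update.
async function fetchParticipantsAjax(): Promise<unknown> {
  const response = await fetch("/api/participants"); // assumed endpoint
  return response.json();
}

// Suggested approach (sketch): a single long-lived connection; the
// server pushes updates without a new request each time.
function listenForParticipants(onUpdate: (data: unknown) => void): WebSocket {
  const socket = new WebSocket("wss://example.org/participants"); // assumed URL
  socket.onmessage = (event) => onUpdate(JSON.parse(event.data));
  return socket;
}
```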

The system could also be adapted for other language friend projects. A more ambitious approach would be to generalise the system so that it could be used by all (or most) language friend projects in Sweden. Since no contact was made with other language friend projects, it is unclear how realistic that idea is.


Bibliography

[1] Stuart K Card, George G Robertson, and Jock D Mackinlay. “The information visualizer, an information workspace”. In: Proceedings of the SIGCHI Conference on Human factors in computing systems. ACM. 1991, pp. 181–186.

[2] Joseph S Dumas and Janice Redish. A practical guide to usability testing. Intellect books, 1999.

[3] Tasha Hollingsed and David G Novick. “Usability inspection methods after 15 years of research and practice”. In: Proceedings of the 25th annual ACM international conference on Design of communication. ACM. 2007, pp. 249–255.

[4] jQuery. https://jquery.com/. Accessed 2017-11-15.

[5] Steve Krug. Don’t make me think, revisited: a common sense approach to web and mobile usability. New Riders, 2014.

[6] Jonas Löwgren. Human-computer interaction. Studentlitteratur, 1993.

[7] Robert B Miller. “Response time in man-computer conversational transactions”. In: Proceedings of the December 9-11, 1968, fall joint computer conference, part I. ACM. 1968, pp. 267–277.

[8] Jakob Nielsen. Authentic Behavior in User Testing. https://www.nngroup.com/articles/authentic-behavior-in-user-testing/. Accessed 2017-11-15. 2005.

[9] Jakob Nielsen. “Iterative user-interface design”. In: Computer 26.11 (1993), pp. 32–41.

[10] Jakob Nielsen. Parallel Iterative Design + Competitive Testing = High Usability. https://www.nngroup.com/articles/parallel-and-iterative-design/. Accessed 2017-11-15. 2011.

[11] Jakob Nielsen. Response Times: The 3 Important Limits. https://www.nngroup.com/articles/response-times-3-important-limits/. Accessed 2017-11-15. 1993.

[12] Jakob Nielsen. Risks of Quantitative Studies. https://www.nngroup.com/articles/risks-of-quantitative-studies/. Accessed 2017-11-15. 2004.

[13] Jakob Nielsen. Top 10 Mistakes in Web Design. https://www.nngroup.com/articles/top-10-mistakes-web-design/. Accessed 2017-11-15. 2011.

[14] Jakob Nielsen. Usability 101: Introduction to usability. https://www.nngroup.com/articles/usability-101-introduction-to-usability/. Accessed 2017-11-15. 2012.


[15] Jakob Nielsen. “Usability inspection methods”. In: Conference companion on Human factors in computing systems. ACM. 1994, pp. 413–414.

[16] Jakob Nielsen. Website response times. https://www.nngroup.com/articles/website-response-times/. Accessed 2017-11-15. 2010.

[17] Språkvän & Flyktingguide Eskilstuna - Verksamheter i Sverige. http://sprakvan.se/verksamheter/. Accessed 2017-11-15.
