
Linköping University | Department of Computer Science
Bachelor thesis, 18 HP | Computer and Information Science at the Institute of Technology
Spring 2018 | LIU-IDA/LITH-EX-G--18/036--SE

MittÄrDitt - Sharing is Caring

A case study in developing simple web applications that

are perceived as trustworthy by their users

MittÄrDitt - Delad Glädje är Dubbel Glädje

- En fallstudie i utveckling av simpla webbapplikationer som

uppfattas som trovärdiga av sina användare

Kazem Bahadori

Herman Eklund

Carl Göransson

Henrik Johansson

Jakob Lindau

Martin Seller

Linnea Sjögren

Matilda Wolf

Supervisor: Susanna Dahlgren
Examiner: Aseel Berglund



Copyright

The publishers will keep this document online on the Internet – or its possible replacement – for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/hers own use and to use it unchanged for non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.

© Kazem Bahadori, Herman Eklund, Carl Göransson, Henrik Johansson, Jakob Lindau, Martin Seller, Linnea Sjögren, Matilda Wolf


Abstract

The purpose of this study was to create a web application providing a peer-to-peer rental solution for students and, while doing so, to maximize the usability of the application with regard to simplicity and online trust. An initial market survey was conducted, see appendix B, and the overall results were promising. The few survey takers who were hesitant towards the idea thought that using a rental solution would not be worth their while. Hence, the research question focused on maximizing usability with regard to simplicity and online trust in order to make the e-shop an easy rental solution to use. The application was developed in three iterations, and throughout the development process user tests were conducted and metrics regarding simplicity and perceived online trust were collected. The tests made use of the concurrent think aloud procedure, retrospective probing, surveying, and time data used for calculating the effectiveness and efficiency of the test participants. The test results improved throughout the development process, and the application was perceived as both simple and trustworthy by test participants. By the end of the study, an application providing a peer-to-peer rental solution that was trusted by its users and simple to use had been realized. The study concluded that the evaluation methods used were good indicators of whether a web application is simple and trustworthy, both by identifying issues with the application and through the improvements reflected in the test results. However, they should have been applied on separate test occasions.


Contents

Abstract

1 Introduction
 1.1 Aim
 1.2 Research Question
 1.3 Delimitations

2 Theory
 2.1 E-commerce
 2.2 Usability
  2.2.1 Simplicity
  2.2.2 Online Trust
 2.3 Method Theory
  2.3.1 Survey Method
  2.3.2 User Stories
  2.3.3 Brainwriting
  2.3.4 The NABC method
  2.3.5 SWOT
  2.3.6 STP
  2.3.7 Prototype
  2.3.8 Evaluation Metrics
  2.3.9 User Testing

3 Method
 3.1 Sprint 0
  3.1.1 Sprint 0 - Market Analysis
  3.1.2 Sprint 0 - User Stories
  3.1.3 Sprint 0 - Prototyping
 3.2 Sprint 1
  3.2.1 Sprint 1 - Development
  3.2.2 Sprint 1 - Test
 3.3 Sprint 2
  3.3.1 Sprint 2 - Development
  3.3.2 Sprint 2 - Test
 3.4 Sprint 3
  3.4.1 Sprint 3 - Development
  3.4.2 Sprint 3 - Test

4 Results
 4.1 Sprint 0
  4.1.1 Sprint 0 - Survey Results
 4.2 Sprint 1
  4.2.1 Sprint 1 - Development
  4.2.2 Sprint 1 - Concurrent Think Aloud Protocol and Retrospective Probing
  4.2.3 Sprint 1 - System Usability Scale
  4.2.4 Sprint 1 - Efficiency and Effectiveness
 4.3 Sprint 2
  4.3.1 Sprint 2 - Development
  4.3.2 Sprint 2 - Concurrent Think Aloud Protocol and Retrospective Probing
  4.3.3 Sprint 2 - System Usability Scale
  4.3.4 Sprint 2 - Efficiency and Effectiveness
 4.4 Sprint 3
  4.4.1 Sprint 3 - Development
  4.4.2 Sprint 3 - Concurrent Think Aloud Protocol and Retrospective Probing
  4.4.3 Sprint 3 - System Usability Scale
  4.4.4 Sprint 3 - Efficiency and Effectiveness

5 Discussion
 5.1 Results
  5.1.1 Simplicity
  5.1.2 Online Trust
  5.1.3 Usability
 5.2 Method
  5.2.1 Concurrent Think Aloud Protocol
  5.2.2 System Usability Scale
  5.2.3 Retrospective Probing
  5.2.4 Efficiency
  5.2.5 Effectiveness
  5.2.6 Source Criticism
  5.2.7 Social and ethical aspects

6 Conclusion
 6.1 Aim and Research Question
 6.2 Recommendations for Future Studies

B Market Survey
C Test Protocols
D Test Result Sprint 1
E Test Result Sprint 2


List of Figures

4.1 The first half of the index page after the first sprint
4.2 The second half of the index page after the first sprint
4.3 Footer on the index page
4.4 New design of the registration form
4.5 My page layout design and updated advert card design
4.6 Create advert page at the end of Sprint 3
4.7 Contact us pop-up
4.8 The confirmation page
4.9 Success page


List of Tables

3.1 SUS survey questionnaire
3.2 SUS survey questionnaire
3.3 SUS survey questionnaire
4.1 Average SUS score Sprint 1
4.2 Average SUS score Sprint 2
4.3 Average SUS score Sprint 3
E.1 Overall effectiveness and efficiency


1 | Introduction

Many people often find themselves in need of an object which they are not interested in owning, but rather in using for a short period of time for a specific purpose. This is typically a situation where a short-term lease would be the preferred solution.

The process of borrowing a specific thing today is often tedious and complicated and it requires a lot of manual searching. There are some limited contemporary solutions to these problems within housing compounds, university associations or other closed circuits. However, this seems unreasonably inefficient.

Suppose the existence of a digital platform which connected these people with different but coinciding interests by providing efficient organization of rental activities, where owners could earn money from leasing their rarely used belongings while people in temporary need of things could save money by renting instead of buying. Such a solution could also help reduce the demand for and production of special-purpose objects while engaging more people in an environmentally friendly sharing economy.

An initial market and user verification study was conducted to quantify and better understand the need for a potential solution to this problem, see appendix B. The study showed that 67% of the participants had things they would like to lease out for some sort of compensation, while 69% of the remaining participants said they did not because they did not think it would be worth their while. 93% of the participants said that they had experienced situations where they would have preferred renting instead of buying an object. If there existed a digital platform organizing rental activities between peers, 41% said they would use it and 56% said they might use it. The predominant reason for not using the prospective website was concern that the process would be too complicated, leading to a general mindset among the respondents that it might simply not be worthwhile to use the application. As studies show, a high level of usability is crucial for a website's success in a competitive market [1]. With this in mind, it became clear that in order to attract and keep as many customers as possible one needs to concentrate on eliminating complicated processes and on developing a website that is as customer friendly as possible. Thus, a decision was made to focus on maximizing the website's usability by choosing simplicity as a focal point.

Furthermore, the study positively verified that there is a need for a solution which could take the form of a digital platform, though it also showed significant hesitance towards the suggested solution due to it potentially being too tedious and inefficient and thus not worth the participants' while. Therefore, it was decided to focus on and examine how to construct a digital application so that it is perceived as simple to use. The web application was constructed around the pretend company "MittÄrDitt" in order to make the experience as real as possible for the users. Furthermore, users abstain from using a website if they do not deem it to be safe [2], safe in the sense of submitting their bank credentials to the website before renting an item. With the desire to attract as many users as possible, it was decided to examine how to increase the level of perceived online trust in addition to simplicity.


1.1 Aim

The purpose of this study is to determine how a web application providing peer to peer rental solutions should be constructed in order to maximize usability so that it is a preferred method for organizing rental activities.

1.2 Research Question

How can a web application providing peer to peer rental solutions be constructed, with regards to its simplicity and online trust in order to maximize usability?

With regards to the term simplicity, we will be looking at making the web application as intuitive as possible by simplifying workflows and reducing visual noise. We will not consider aspects such as learnability, loading time optimization and linguistics.

Online trust within the usability scope refers to the trust perceived by the user when using the application rather than the actual safety, security and reliability of the application. These aspects will only be considered to a limited extent. These delimitations have been made due to a lack of sufficient competence and time.

1.3 Delimitations

The metrics used to test and evaluate the usability of the web application referred to in this report will be based on those suggested in the ISO 9241-11 standard, a widely accepted method of measuring usability. There are numerous different standalone and composite methods for measuring usability in software, but we have decided to primarily look at the widely accepted and standardized metrics effectiveness, efficiency and satisfaction, to maintain consistency in comparable results and to be able to observe results from different tests coherently.

The web application will be tested solely on students from Campus Valla and US at Linköping University, and thus the results will most likely not be representative of a more diverse user base. The lack of socioeconomic diversity in the testing groups will be disregarded.


2 | Theory

This chapter mainly covers the underlying theory for answering the research question. Firstly, some general points about e-commerce are described. Simplicity and the perceived trustworthiness of an online application are then defined. Moreover, in order to measure the usability of the application, the usability metrics effectiveness, efficiency and satisfaction are presented along with relevant methods of measurement. Lastly, the underlying theory for the methods used in this case study is reviewed.

2.1 E-commerce

The market for e-commerce in Sweden has grown steadily over the last couple of years. The market grew 9% during 2017 and has grown a total of 45% over the last 4 years, reaching a total turnover of 109.5 billion SEK in 2017 [3]. This can be compared to retail sales, which increased 2.4% in 2017 [4].

Students shop online more than the average Swede: 77% did so in the last year compared to 68% among all Swedes between 16 and 85 years old, and in the first quarter of 2017, 73% of students had purchased something online compared to 63% for everyone between 16 and 85 [5]. In a survey conducted by YouGov for DIBS it was concluded that the main reason for choosing e-commerce over regular stores among consumers has shifted, from lower prices in earlier years to the simplicity and time-saving aspects of it today. It was also concluded that 44% of consumers have aborted a purchase because of shortcomings in the e-shop. The most common reason, at 34%, was that the e-shop did not offer the preferred payment method; 25% cited complicated registration and 28% unclear terms and conditions. The most preferred payment method among consumers is card payment [3].

2.2 Usability

One of the most vital factors for a website's long-term survival is its usability. If a website is difficult to use and thus fails the usability requirements, the customers will leave for a better solution. One of the most straightforward and effective methods of improving the usability of a web application is user testing. When performing the tests, the observer should ask the user to perform specified tasks while carefully observing the process. It is of great importance that the observer only observes the process and does not in any way interact with the user, for instance by giving hints or being a distraction. To be able to identify most of the usability problems that might be present in the design, five test participants in each test are usually enough. Using a greater number of persons might make the process too difficult and expensive, while using fewer than five might mean that not all of the usability problems are discovered. To further improve, usability should be taken into consideration at each stage of the development process: before beginning with a new design, when making prototypes, when redesigning the ideas and, lastly, when deciding on the final design before implementation. [1]

To further understand the usability concept and how it can be measured in a meaningful way, one must begin by looking at the definition. The International Organization for Standardization (ISO) defines usability as the 'Extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use' [6]. Since the usability concept is built upon these three key factors, an opportunity also arises to assess the overall perceived usability of the application. This is achieved simply by looking at, and measuring, the three key factors individually in order to evaluate the application, track progress during different iterations and determine its overall usability.

As stated in the introduction, developing with simplicity and online trust in mind became crucial once the importance of those concepts was understood through the market survey; both concepts have a strong positive correlation with usability [7, 8]. With this basis explained, this section aims to describe and present the underlying theories regarding how an application can be developed in order to maximize its usability, with emphasis on the two traits simplicity and online trust.

2.2.1 Simplicity

According to Nielsen, users are very goal driven and strive to complete the task for which they sought out the web page. In doing this, they want as few distractions as possible [9]. There is also the issue of having too much interactivity and feedback: providing more of this than the average user easily understands will result in an interface failing in its most basic objective, to be understandable to the user. It is further stated that it is in fact overstimulation and excessive feedback, the direct opposite of simplicity, that result in this failure [10]. When creating an interface and incorporating simplicity, you also increase the chances of the product standing out in a more complex market [11].

It is stated by Lee et al. that simplicity can be divided into four different areas, each of which will improve the overall simplicity of an interface, leading towards increased perceived usability [8]:

• Reduction
• Organization
• Integration
• Prioritization

Reduction is to break down as many aspects of the design into their essential parts as possible. Organization is to arrange the parts and make sure that the interface is tidy, with every part where it belongs. 'Organization makes a system of many appear fewer' [11]. Integration is to make sure that all the parts that belong together, and that the user might need at the same time or in close proximity to one another, are presented simultaneously to the user. Prioritization is to make sure that the interface shows what the user wants to see and nothing else; no feature should be added just because the user might want to see it, and in order to maintain simplicity, no feature should be redundant [8].


Lee et al. further state that 'Simplicity is considered a crucial concept for a successful UI design'. This is, according to Lee et al., because in a world of technology which is getting more and more complex, simplicity is required in order not to get lost when using modern and future interfaces [8]. According to Maeda, knowledge and simplicity go very well together and one cannot have one without the other; in fact, good design is achieved when knowledge is presented in a simple way [11].

Furthermore, Karvonen states that the use of simplicity and clarity in design strengthens trust towards the web site, since a clear message and a clear design seem to have nothing to hide and thus look transparent: what you see is what you get. This is especially true in the case of e-commerce, where visual pleasantness in the form of "clean" and "clear" design, a result of simple design, has led to increased trust towards the company. [12]

Lee et al. also theorize that simplicity has a positive effect on usability and satisfaction. Usability, because simplicity makes needed functions and features more accessible as well as easier to understand. Satisfaction is increased as an effect of increased usability, which increases the chances that the user's overall objective will succeed, eventually resulting in higher trust towards the company and higher overall satisfaction with the device or interface. [8]

To evaluate the simplicity of a web application, the usability metrics effectiveness and efficiency will be used. These are further explained in section 2.3.8, but in essence they tell how well a user could perform given tasks. The two metrics are well suited since a simple web application should aid users in reaching their intended goals.

2.2.2 Online Trust

Trust has been identified as one of the key variables in creating a successful platform for e-commerce, where users submit financial and personal information to the merchant via the Internet [2]. Generally, the following four characteristics can be used to define online trust [13]:

• The trustor and the trustee serve as the most important parties in establishing a trusting relationship. The trustor is typically a consumer who is browsing an e-commerce web application, and the trustee is the e-commerce web application or, more specifically, the vendor that the web application represents.

• The vulnerability of consumers. The trustor needs to be confident about what kind of information the web application is storing, not only when making a transaction but also when visiting the web application without making a transaction. Furthermore, the trustor needs to be confident that the trustee will not misuse the information that is collected.

• The produced actions of the consumers. If the consumers are confident that they have more to gain than to lose when using the web application, it is likely that they will use it. The produced actions of the consumers in other words mean that the consumers have to trust the web application enough to be willing to visit it for browsing or making a purchase.

• The subject matter. Every individual has their own conception of what is accepted as a trusted web application. People hold different attitudes towards machines and technology, and there are individual and situational factors that affect the level of perceived trust.


Trust is gained incrementally through people's own past experiences as well as through third-party recommendations [14]. Also, there are factors within the web application that have been identified to significantly increase people's perception of trust. Ease of placing orders as well as easy access to product descriptions and pictures are two such criteria [15]. Good aesthetics has also been shown to increase the online trust of a web application [12].

A web application is perceived as more trustworthy if the users are pleased with their experience after visiting it. In order to make a web application likable, the ease with which the users navigate the web application, learn the basic functions and manage the system is important. Additionally, the degree of error avoidance affects the users' general experience of trust in the web application. [16]

The perceived competence of the e-commerce site influences the users' trust [17]. Hence, the design of the e-commerce site needs to be professional and credible in order to gain trust. Using eye-catching graphics captures the interest of the user and gives a professional impression; however, overuse of graphics may damage the professional impression of a web application. Cool colors are preferable as the tone of the interface, while the main color should be a moderate pastel color [13].

The quality of the content on the web application has a strong relationship to the perceived trust in the web application. This means that not only the web design should give a trustworthy impression, but also the information on the web application. [18]

Kim et al. argue that the perceived trust in an e-commerce site highly influences the consumers' purchase intentions. In their study they found that if the perceived risk of the e-commerce site is high, the consumers' purchasing intentions decrease, while if the perceived benefit is high, the consumers' purchase intentions are positively affected. Furthermore, Kim et al. found that if the consumers feel familiar with the e-commerce site, the perceived trust will increase. On the other hand, they found that third-party logos, i.e. verifications for secure payments, do not affect the perceived trust in the e-commerce site but do decrease its perceived risk. [19]

Just like Kim et al., Hu et al. found that trust highly affects the consumers' purchasing intentions and argue that an e-commerce site must be perceived as trustworthy by its users in order to succeed. Hu et al., though, found that third-party logos in fact do affect the perceived trust in an e-commerce site. [20]

2.3 Method Theory

This section aims to describe the theory behind the chosen methods, beginning with a brief explanation of the different concepts and later discussing how one should implement them in the most effective way.

2.3.1 Survey Method

Surveys are tools used to gather information from the future users of the service. When conducting e-surveys, representativeness and a high response rate are important factors. To increase the number of responses, personalized correspondence is important [21]. The e-survey format is usable when the target group is accessible over the Internet [22]. For more accurate responses, the survey questions shall be ordered logically, and the questions shall not be too long nor use ambiguous wording [22].


A commonly used survey format is Likert questions. The questions are answered on a numbered scale, usually from 1-5 or 1-7, where 1 corresponds to strong disagreement and 5 (or 7) to strong agreement. A strength of Likert questions is, according to Cooper and Johnson, that users are familiar with the structure of these questions and comfortable completing them. Furthermore, the format can be altered to evaluate multiple aspects of the questioned subject. A weakness of the Likert format, however, is the neutral middle option, '3': the survey designers must phrase this point on the scale unambiguously. [23]

2.3.2 User Stories

In order to specify desirable functionality to implement in a software development project, user stories can be used. A user story describes functionality of the project that would be useful for either the user or the owner of the system. Each user story shall target a specific piece of functionality, not be too extensive, and be testable so that it can be determined whether the story is complete. A user story shall be of a size such that it is reasonable for one or a pair of programmers to code and test it over a period of half a day up to approximately two weeks. [24]

2.3.3 Brainwriting

For rapid generation of ideas during a short period of time, brainwriting can be used. The method is deemed to generate more ideas than brainstorming. Furthermore, brainwriting includes all participants in the activity. [25]

Brainwriting can be performed using the 6-3-5 method. The 6 participating group members receive a piece of paper each. For 5 minutes, each participant writes down 3 ideas. After the 5 minutes, every paper is passed on to the next person. The round is repeated six times, so by the end of 30 minutes over 100 ideas (6 participants × 3 ideas × 6 rounds = 108) have been written down. [26]

2.3.4 The NABC method

The NABC method is a method developed by the Stanford Research Institute to be used as a tool to describe, communicate and develop ideas. It focuses on the value propositions a specific idea brings to its intended customers and users. The name of the method represents the aspects that are important regarding an idea: N for need, A for approach, B for benefits and C for competition. When using the NABC method, the idea is presented by describing the need it answers to, the approach it uses to fulfill that need, how users will benefit from the proposition and who the competition is. [27]

2.3.5 SWOT

SWOT analysis is used to determine the strategic positioning of an organization or idea with regards to internal and external circumstances. The SWOT analysis focuses on how to utilize strengths and opportunities effectively and how to handle weaknesses and threats satisfactorily. SWOT stands for Strengths, Weaknesses, Opportunities and Threats. Usually the organization or the idea-makers organize internal and external circumstances under these terms to understand the general situation. It is also used as a baseline for developing long-term corporate strategies or deciding how to position offerings. [28]


2.3.6 STP

STP stands for segmentation, targeting and positioning; it is a widely used marketing model for determining the optimal marketing mix that a company should offer based on who the customers are. The method consists of three main steps. Segmentation means dividing the market into several homogeneous blocks based on different variables such as geography or demography. Targeting means selecting one or several of these blocks of customers. The final step, positioning, is about how to position your offer towards the selected blocks in relation to other competitors addressing the same customers. [29]


2.3.7 Prototype

A prototype is defined as a representation of a design idea and plays a vital role in developing and exploring different states of a system or a product [30]. Furthermore, the process of prototyping is very beneficial in terms of testing design ideas and fixing usability problems at a very low cost instead of implementing something useless. Nielsen also argues that the 'biggest improvements in user experience come from gathering usability data as early as possible in a design project', which the process of prototyping allows [31].

To distinguish between prototypes with regards to their complexity, the term fidelity is used. A prototype with high fidelity is very close to the final product or system: it is interactive, complex and highly functional. A low-fidelity prototype, on the other hand, only represents some parts of the final design, with low complexity and with little or no functionality. [32]

Beginning with high-fidelity prototypes, a number of benefits can be identified. When using a complex prototype, the interactivity is very realistic in terms of loading times and responses; therefore, when it is tested by users, the risk of failure or inconsistency is low. A high-fidelity prototype will also allow for extensive workflow tests and examination of different parts of the user interface such as menus, graphical content and accordions. However, a low-fidelity prototype has some advantages as well, the major one being the short preparation time associated with a static prototype. Instead of focusing on a workable interface, one has more time to work on designs, addressing more pages, menus and content. Furthermore, a design change can be implemented more quickly, and a low-fidelity prototype puts less pressure on the user. [31]

Although pros and cons can be identified with both, studies suggest that low-fidelity prototyping is more efficient and identify numerous problems with high-fidelity prototyping, such as the time required, developers resisting changes because they have become too attached to their design, and the creation of high expectations that might later be hard to meet [32]. Therefore, one should focus on using low-fidelity prototyping when developing a web application.

2.3.8 Evaluation Metrics

Usability is defined as an overall aggregation of the metrics effectiveness, efficiency and satisfaction. To get a better understanding of the usability concept, one can look at the different attributes in the definition and describe them in greater detail, which this section aims to do.

Effectiveness

The first key component in the usability definition is effectiveness, which simply means how often users achieve their goals using the system; in other words, out of a set of tasks specified by the observer, how many the user manages to complete. One method used to measure usability in terms of effectiveness is the completion rate. To measure the completion rate, one begins by assigning binary values: if the user completes the task successfully, the observer indicates this with the number "1"; if the user fails to complete the task, the observer indicates this with a "0". The results are later expressed as a ratio as shown in equation 1. [33]

\[ \text{Effectiveness} = \frac{\text{Number of tasks completed successfully}}{\text{Total number of tasks undertaken}} \times 100\% \quad (1) \]

Efficiency

The second component in the usability definition is efficiency. Efficiency is measured as the time it takes for a user to successfully complete a task [34]. The time is simply the number of seconds from starting the task until its completion. Efficiency can be calculated in two ways: as an overall relative efficiency or as a time-based efficiency.

All the participating users are given a list of tasks to complete, and for each task the time taken in seconds is recorded together with a "1" if the task was successfully completed and a "0" if not. The time-based efficiency is calculated according to equation 2, where $N$ is the total number of tasks, $R$ is the number of users, $n_{ij}$ indicates whether user $j$ successfully completed task $i$, and $t_{ij}$ is the time user $j$ spent on task $i$.

\[ \text{Time-Based Efficiency} = \frac{\sum_{j=1}^{R}\sum_{i=1}^{N}\frac{n_{ij}}{t_{ij}}}{NR} \quad (2) \]

The result of this function is the number of goals completed per second. The overall relative efficiency calculates the ratio of the time taken by the users who successfully complete a task to the time taken by all users. The result is expressed in percent and is calculated using equation 3.

\[ \text{Overall Relative Efficiency} = \frac{\sum_{j=1}^{R}\sum_{i=1}^{N} n_{ij}\, t_{ij}}{\sum_{j=1}^{R}\sum_{i=1}^{N} t_{ij}} \times 100\% \quad (3) \]
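To make equations 1-3 concrete, the following sketch computes the three metrics from the kind of per-task completion and timing data collected during the user tests. The data layout, values and variable names are purely illustrative assumptions and are not taken from the report.

```python
# Illustrative sketch (not the report's actual code): computing effectiveness,
# time-based efficiency and overall relative efficiency (equations 1-3).
# results[j][i] = (n_ij, t_ij): completion flag (1/0) and time in seconds
# for user j on task i. The numbers below are made up.
results = [
    [(1, 12.0), (1, 30.5), (0, 60.0)],
    [(1, 10.0), (0, 45.0), (1, 25.0)],
]

R = len(results)       # number of users
N = len(results[0])    # number of tasks

# Equation 1: percentage of tasks completed successfully.
effectiveness = 100 * sum(n for user in results for n, _ in user) / (N * R)

# Equation 2: average number of goals completed per second.
time_based_efficiency = sum(n / t for user in results for n, t in user) / (N * R)

# Equation 3: time spent on successful tasks relative to total time, in percent.
overall_relative_efficiency = 100 * (
    sum(n * t for user in results for n, t in user)
    / sum(t for user in results for _, t in user)
)

print(effectiveness, time_based_efficiency, overall_relative_efficiency)
```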

Satisfaction

User satisfaction depends on how well the product or service has met or surpassed the user's expectations [34]. The result of a satisfaction survey is built on the user's expectations before usage and their assessment afterwards. Satisfaction is essentially the subjective reaction to using a system [35]. User satisfaction in web applications is measured through standardized satisfaction questionnaires. In this report we discuss the System Usability Scale, a Likert scale (further explained in section 2.3.1) for measuring user satisfaction, developed by Brooke [36].

The System Usability Scale (SUS) is quick and easy to use because the questionnaire consists of only 10 questions, which makes it easy to administer. As mentioned, SUS is a Likert scale rating method: each answer is rated from 1 to 5, where 1 means "strongly disagree" and 5 means "strongly agree".

The structure of the questionnaire follows a certain pattern where every odd-numbered question is posed positively and every even-numbered question is posed negatively. This pattern plays a significant role in producing the total score of the System Usability Scale test. The score is calculated using an algorithm based on the previously mentioned pattern. The algorithm is as follows [36]:

1. For questions 1, 3, 5, 7 and 9, subtract 1 from the score.
2. For questions 2, 4, 6, 8 and 10, subtract the score value from 5.
3. Obtain a total sum by adding these new values, then multiply the sum by a factor of 2.5. This gives a total system usability score ranging from 0 to 100.
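As a concrete illustration of the three steps above, the sketch below scores one participant's answers in Python. The function and the example ratings are assumptions added for illustration; the report does not include any scoring code.

```python
# Illustrative sketch of the SUS scoring algorithm described above.
def sus_score(answers):
    """Return the SUS score (0-100) for a list of ten ratings on a 1-5 scale."""
    total = 0
    for number, rating in enumerate(answers, start=1):
        if number % 2 == 1:     # odd-numbered, positively worded questions
            total += rating - 1
        else:                   # even-numbered, negatively worded questions
            total += 5 - rating
    return total * 2.5

# Made-up ratings for questions 1-10 of one participant.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # prints 85.0
```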


A study observing the average SUS scores of different kinds of applications shows a total mean SUS score of 70 across more than 3000 applications. The study also constructed an adjective-based scale and found that it correlated well with the numerical scale, which gives an indication of the usability of a system: a SUS score around 50 means "OK", a score around 70 means "Good" and a score around 85 means "Excellent" [37].

2.3.9 User Testing

All web applications need to be, in some way, useful for their end users, and the design of a usable website is pivotal to e-business success [17]. There are several ways in which the usability of a web application can be evaluated, but the main goal of any evaluation method is to improve the usability of the product being studied [38]. One of these evaluation methods is user testing, a process involving users who test and evaluate the product. For the tests to be as valid as possible, the users testing the application should be real users performing real tasks [39]. Having user tests as part of the development process is also significant, since identifying and correcting usability problems at an early stage limits the time and cost it takes to correct them [39]. With five test users, around 80% of the problems will be detected, and additional tests are less likely to reveal new information [40].
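The "five users" figure cited from [40] is commonly traced back to Nielsen and Landauer's problem-detection model. The model is not reproduced in the report itself; the sketch below is added here only as background:

\[ \text{Problems found}(n) = T\left(1 - (1 - L)^{n}\right) \]

where $T$ is the total number of usability problems in the design, $n$ is the number of test users and $L$ is the probability that a single user detects a given problem. With Nielsen's often-quoted average of $L \approx 0.31$, five users give $1 - 0.69^{5} \approx 0.84$, roughly the proportion of problems quoted above.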

Concurrent Think Aloud Protocol

The concurrent think aloud (CTA) protocol is a user testing method in which the participants of the test verbalize their actions and thoughts while trying to complete a set of predefined tasks. This method is a good tool for discovering and illustrating usability problems [41, 42, 43]. The CTA protocol has its foundation in Ericsson and Simon's Protocol Analysis: Verbal Reports as Data [44]. In this framework, the researcher should not engage with a person who is in the process of testing; the researcher should only intervene if the tester falls silent for a longer period of time (15-60 seconds), and then only with a reminder to "keep talking".

There are researchers who say that they use the theoretical background set up by Ericsson and Simon when conducting their CTA procedures; however, there is evidence that these researchers deviate from the strict definitions presented by Ericsson and Simon [45]. One of the reasons why researchers who conduct CTA procedures do not strictly adhere to these theories is that the theories themselves were not developed in the domain of usability testing. They were developed in the context of cognitive psychology and examine fundamentally different things. In cognitive psychology the subject under examination is the mind and its processes, whilst in usability testing it is the software, interface, document etc. that is under examination [45].

Boren and Ramey have therefore presented an alternative definition, based on speech communication theory, upon which the CTA protocol could rely [45]. Boren and Ramey argue that the intervention of the researchers does not have to be as strict as defined by Ericsson and Simon, and that such intervention does not render the information gathered useless. This framework allows for a more natural interaction, where the researcher takes the role of active listener and the tester the role of speaker. The framework of Boren and Ramey has, however, a strong disadvantage if the usability metric under examination is navigability, since the extra interaction between researcher and tester has been shown to result in testers being less lost and successfully completing more tasks [43].


Retrospective Probing

Another way of collecting both qualitative and quantitative data from user tests is to let the user go through a retrospective procedure by answering questions immediately after a test or whenever a specific task is completed. This method is called retrospective probing.

Retrospective probing procedures can contain both open and closed questions, which provide qualitative and quantitative data respectively to aid design decisions. Open questions let the users express their experience more freely and in greater detail, whereas closed questions often imply a simple yes or no answer or an answer in the form of a Likert scale rating. [46]

Birns and her colleagues argue that one of the main advantages of using a retrospective approach is that the user responds to questions in the context of the entire experience they had when using the application interface. The questions are also usually posed so that the results can be compared and quantified, which makes it easier to get an overall picture of how the application performed in the tests. [46]

One important aspect to consider when using this method is that users, when looking back at their experience retrospectively, may rationalize their behavior and develop theories to explain or justify certain actions. These thoughts may very well just be a construction of their mind in the process of trying to remember what they thought and felt, and might thus give a less accurate picture of the actual experience. This feature of the human brain is very important to consider and is also why this method should be used in combination with the concurrent think aloud protocol, to get a comprehensive usability test result [46]. This combination of methods is supported by Holzinger in his paper describing the usability engineering process [47].


3 | Method

This section aims to describe the methods, procedures and measurements used in the report in order to investigate and obtain important data regarding the development process and the research question. Below, the methods are described in detail, including what they are and how they were used. This is of great importance in terms of replicability, and the methods used are therefore described as thoroughly as possible. The work was carried out in multiple iterations, called sprints, where the first sprint, Sprint 0, aimed to analyze the market and find out what should be implemented. The following sprints, Sprint 1-3, were dedicated to developing the web application. By working iteratively, in sprints, time for reflection and further decision making was given during the development process.

3.1 Sprint 0

The methods in Sprint 0 were conducted in order to examine the market and document the needs and requirements of the end customers. Furthermore, the process of developing a prototype is described in detail.

3.1.1 Sprint 0 - Market Analysis

In order to verify that the market actually needed the planned web application, a market analysis was conducted. As a first step, a NABC analysis was made to gather useful insight into what the developers of the service thought they solved for their customers and why the customers would use their service. After this, a current-situation analysis was conducted in which the competition was analyzed with the help of Porter's five forces, and the market was further analyzed using an STP analysis. A SWOT analysis was conducted to get a deeper understanding of the company 'MittÄrDitt's' inner strengths and weaknesses as well as of the opportunities and threats from external factors. For the complete market analysis, see appendix A.

When all this internal work was completed, a survey consisting of 7 questions was created as a Google Form. The link to the Google Form was sent to different student groups on social media, and some students on campus were also asked in person by members of our group. After a couple of days the form was closed and the results were gathered, see appendix B.


3.1.2 Sprint 0 - User Stories

To produce initial ideas for the project, a brainwriting session was conducted. This method was chosen as it allows the group to produce a large number of ideas in a short time period, as described in section 2.3.3 above. It particularly gives all members the same chance to contribute ideas, unlike a brainstorming session where a few individuals produce most of the ideas. All the produced ideas were ranked on a three-point scale based on how important they were for the application to be functional. The ideas were then reworked into user stories of the form "As a < type of user >, I want < some goal >", for example "As a user I want to be able to see if another user is a student". This was to make clear to all members of the group how each of the ideas should work when implemented and what the point of implementing it was.

3.1.3 Sprint 0 - Prototyping

Before the development process of the web application was initiated, it was of great importance to first establish the essentials in terms of design and layout. In order to gain an understanding of these concepts, the method of design prototyping was utilized, further explained in section 2.3.7. To create an understanding of the project, establish a starting point before the programming phase and outline the purpose of the application, a simple low-fidelity prototype sketch was developed using Adobe Illustrator. The first iteration of the design draft focused only on the homepage of the application. Later in the development process, the prototype was extended to show more features and design aspects of the website. The rough design drafts were beneficial in terms of understanding the basic structure and being able to correct mistakes and make improvements along the way without any coding.

3.2 Sprint 1

This section specifies the methods used for the initial development process and the corresponding user test.

3.2.1 Sprint 1 - Development

The first implementation of 'MittÄrDitt' consisted of the basic features necessary for submitting an object for rent, renting an object and creating a user profile. With regards to simplicity and online trust, the implementation was done with careful consideration of colors and logical placement of the different sections.

The server side was built with Python and the microframework Flask. The website's client side was created with HTML, CSS, jQuery and the framework Bootstrap. The website's users, adverts and other data were stored in a database. Retrieving and providing data between the database and the web application was done using SQLAlchemy, and the database itself used SQLite.
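As an illustration of how this stack fits together, the sketch below shows one minimal way a Flask application, SQLAlchemy models and an SQLite database could be wired up. The model, its fields, the route and the template name are hypothetical assumptions; the report does not reproduce its actual source code.

```python
# Hypothetical sketch of the described stack: Flask + SQLAlchemy + SQLite.
# The Advert model, its fields and the /adverts route are illustrative only.
from flask import Flask, render_template
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///mittarditt.db"
db = SQLAlchemy(app)

class Advert(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(120), nullable=False)
    price_per_day = db.Column(db.Integer, nullable=False)  # price in SEK per day

@app.route("/adverts")
def list_adverts():
    # Fetch all adverts and render them with a Bootstrap-styled Jinja template.
    adverts = Advert.query.all()
    return render_template("adverts.html", adverts=adverts)

if __name__ == "__main__":
    with app.app_context():
        db.create_all()  # create the SQLite tables on first run
    app.run(debug=True)
```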


3.2.2 Sprint 1 - Test

By the end of Sprint 1 the final implementation was tested. The importance and purpose of user testing is explained in section 2.3.9. Since the web application only had basic features the test focused on general perception of the website. New ideas from the test group were also audited and added to user stories if deemed fit.

The test group was drawn from the web application's target segment, students at Campus Valla and US, Linköping University. It consisted of five students from different majors, in order to obtain results representing the mix on campus. No students from the computer science bachelor's programme were chosen, in consideration of their unique insight into the development process.

The test participants were to complete a task list whilst performing the concurrent think aloud procedure, described in section 2.3.9. The task list was created a few days before the user test to best match the features implemented at that point in time.

The test participants were first introduced to the project and the purpose of the test. They were briefed on CTA, what it is and how it is performed. For the full test procedure see appendix C. The following task list was to be completed.

1. Show adverts
2. Go to an advert and identify its price
3. Create an account
4. Login
5. Find the advert on the advert page
6. Go to "my pages"
7. Delete your advert
8. Log out

Two observers were present during the test. One took notes and conducted the CTA. The other had a stopwatch with the objective of collecting quantitative data, for example timing each individual task and noting whether the test participant was able to complete the task or not. This data was necessary to evaluate the application with the usability metrics presented in section 2.3.8.

After the task list was completed, the test participants filled in the SUS survey (table 3.1) in order to measure and collect evaluation data on user satisfaction and online trust, with questions 1-6 targeting satisfaction and questions 7-10 targeting online trust.


Table 3.1: SUS survey questionnaire

#  Question (rated 1-5)
1  I think that I would like to use this system frequently
2  I found the system unnecessarily complex
3  I thought the system was easy to use
4  I think that I would need the support of a technical person to be able to use this system
5  I found the various functions in this system were well integrated
6  I thought there was too much inconsistency in this system
7  I would imagine that most people would learn to use this system very quickly
8  I do not trust and would feel unsafe using this web application
9  I felt very confident using the system
10 I would feel unsafe submitting my credit details on this site

After the survey, the participants proceeded with a retrospective probing session. In this session, open questions were asked by one of the observers about the test participant's general perception of the web application, with the aim of obtaining further results on user satisfaction and online trust. When all five tests were completed, the results were summarized and evaluated, see appendix D. Feedback with a strong correlation to simplicity and online trust was prioritized for future development and improvement.


3.3 Sprint 2

This section specifies the methods used for the second development process and the corresponding user test.

3.3.1 Sprint 2 - Development

The information gathered from the user testing laid the basis for the development during Sprint 2; a few features implemented in Sprint 1 were expanded upon and more features were introduced. All new features were implemented in accordance with the theories presented in chapter 2.

The focus of this sprint was styling and getting the website into a state where a user would want to use it to actually make purchases. This included general styling of elements such as the navigation bar, front page, login screen and advert page, among other things. Glyphicons from Font Awesome (https://fontawesome.com/) were also introduced to highlight certain elements, such as the login button, and to make it easier to recognize what the user should input in certain input fields. All styling was done according to the principles described in the usability section 2.2.

During the sprint, additional features were also added, such as database support for leasing an item that had been uploaded to the website and the ability to search for uploaded items. The checkout process was also implemented this sprint with the help of Stripe (https://stripe.com/), which allows a user to pay for their items in a secure way. Furthermore, a way of contacting the MittÄrDitt team was introduced to improve the online trust of the website.
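As an illustration of the checkout step, the sketch below shows one common Flask pattern from that period for charging a card through Stripe's Charges API, where a client-side Stripe Checkout form posts a one-time card token to the server. The route name, form fields and placeholder key are assumptions; the report does not show its actual payment code.

```python
# Hypothetical sketch of a server-side Stripe charge (legacy Charges API),
# not the report's actual implementation.
import stripe
from flask import Flask, request, redirect

app = Flask(__name__)
stripe.api_key = "sk_test_..."  # placeholder secret key

@app.route("/charge", methods=["POST"])
def charge():
    token = request.form["stripeToken"]       # card token from Stripe Checkout
    amount_ore = int(request.form["amount"])  # amount in the smallest unit (öre)
    try:
        stripe.Charge.create(
            amount=amount_ore,
            currency="sek",
            source=token,
            description="MittÄrDitt rental payment",
        )
    except stripe.error.StripeError:
        return redirect("/payment-failed")
    return redirect("/success")
```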

3.3.2 Sprint 2 - Test

When the development of Sprint 2 was completed, a second round of user tests was conducted. As before, simplicity and online trust were the two aspects being evaluated. A few days before the user tests were conducted, a new task list was produced, see appendix C, which included the tasks from the previous tests with additional tasks testing the newly implemented features. Test participants were chosen randomly from Campus Valla and US with the constraints that they had to be students, had not participated in the previous test, and were not currently pursuing, nor had completed, a bachelor's (or master's) degree in computer science. In total, five participants were chosen for this round of testing.

Just as in Sprint 1, the evaluation methods used were the concurrent think aloud protocol and a SUS survey, followed by a retrospective probing session. The test procedure was conducted according to the standard approach developed for all user tests throughout the project, see appendix C. The tasks to be completed in this test were:


1. Show adverts
2. Go to an advert and identify its price
3. Create an account
4. Login
5. Create an advert
6. Find the created advert on the advert page
7. Go to "my pages"
8. Choose an advert you have not created from the front page
9. Choose what period you want to rent the item and go to payment
10. Pay for your order
11. Logout

As before, two observers were present during the test: one took notes and conducted the CTA, the other had a stopwatch with the objective of collecting quantitative data, for example timing each individual task and noting whether the test participant was able to complete the task or not. Just as in Sprint 1, when the test participants had finished their tasks they were given a SUS survey to complete (table 3.2), with questions 1-6 targeting satisfaction and questions 7-10 targeting online trust. After that, a retrospective probing session was conducted, and just as in the previous test one of the observers asked open questions about the test participants' perception of the web application. When the tests had been completed, the information gathered was summarized and evaluated by the group, see appendix E. This information then laid the basis for further development of the application with regards to the research question.

Table 3.2: SUS survey questionnaire

#  Question (rated 1-5)
1  I think that I would like to use this system frequently
2  I found the system unnecessarily complex
3  I thought the system was easy to use
4  I think that I would need the support of a technical person to be able to use this system
5  I found the various functions in this system were well integrated
6  I thought there was too much inconsistency in this system
7  I would imagine that most people would learn to use this system very quickly
8  I do not trust and would feel unsafe using this web application
9  I felt very confident using the system
10 I would feel unsafe submitting my credit details on this site


3.4 Sprint 3

This section specifies the methods used for the third and final sprint with regards to the development process and the corresponding user test.

3.4.1 Sprint 3 - Development

The last sprint of the project focused on expanding the functionality for creating receipts and on creating a way for the user to view order history and make complaints. Additional functionality included a way for the user to give feedback and a page where information about the creators can be found alongside a guide to the page. Furthermore, the bugs found during previous user tests and development were handled. Feedback generated through user tests from previous sprints was considered and actions were taken to improve the web application. In addition to the added functionality, the overall design of the web application was improved in order to increase the simplicity and online trust of the application.

3.4.2 Sprint 3 - Test

At the end of the third sprint, the last user tests were conducted. Simplicity and user trust were tested, as before, using the concurrent think aloud protocol, a SUS survey and a retrospective probing session. A few days before the tests were conducted, a test plan was developed with the plan from Sprint 2 as the base, with a few additional tasks, see appendix C. The participants of this test round were, like before, selected randomly under the conditions described in section 3.3.2. There were again five test participants. The following tasks were performed during this test round:

1. Show adverts
2. Go to an advert and identify its price
3. Create an account
4. Login
5. Create an advert
6. Find the created advert on the advert page
7. Go to "my pages"
8. Delete your advert
9. Choose an advert you have not created from the front page
10. Choose what period you want to rent the item and go to payment
11. Pay for your order
12. Find the advert you just paid for on my page
13. Find the page with information about the creators of the website
14. Find and read about step 3 in the renting process
16. Send a complaint
17. Logout

As before, two observers were present during the test: one took notes and conducted the CTA, the other had a stopwatch with the objective of collecting quantitative data, for example timing each individual task and noting whether the test participant was able to complete the task or not. When the CTA was done, the test participants filled in a SUS survey (table 3.3), with questions 1-6 targeting satisfaction and questions 7-10 targeting online trust. After that, just as in the previous sprints, a retrospective probing session was conducted. When the tests had been completed, the information gathered was summarized and evaluated by the group, see appendix F.

Table 3.3: SUS survey questionnaire

#  Question (rated 1-5)
1  I think that I would like to use this system frequently
2  I found the system unnecessarily complex
3  I thought the system was easy to use
4  I think that I would need the support of a technical person to be able to use this system
5  I found the various functions in this system were well integrated
6  I thought there was too much inconsistency in this system
7  I would imagine that most people would learn to use this system very quickly
8  I do not trust and would feel unsafe using this web application
9  I felt very confident using the system
10 I would feel unsafe submitting my credit details on this site

4 | Results

This chapter presents the results reached in sprints 0-3, both in terms of how far the development process had evolved and in terms of the user tests concluding each of sprints 1-3.

4.1 Sprint 0

Sprint 0 was mainly dedicated to planning and to conducting the market survey, in order to find out whether the application was viable from an economic perspective, that is, whether the people considered key customers were in fact interested in the product.

The rest of the sprint focused on writing the market plan and the project plan, and on planning further how the application was to be built by creating a backlog.

4.1.1 Sprint 0 - Survey Results

The market survey was conducted and 96 people took it; for the complete survey results, see appendix B. The compiled results are as follows:

Do you have items in your possession that you might consider leasing out for compensation?
• Yes - 66.7%
• No - 33.3%

If you chose no, why not? (Multiple choices allowed)
• I do not own anything which I want to lease - 28.8%
• I do not have any way of leasing items comfortably - 38.5%
• Leasing feels unsafe - 42.3%
• I do not feel like it is worth the effort - 69.2%

If yes, rank the following reasons why you find renting to be good.
• Because it is cheaper than buying new: 1st place - 44%, 2nd place - 18%, 3rd place - 28%
• Because I have insufficient room in my home: 1st place - 27%, 2nd place - 35%, 3rd place - 38%
• Because it is better for the environment to rent: 1st place - 38%, 2nd place - 24%, 3rd place


If you need an item for an occasional use, how do you acquire it?
• Buy a new item: Always - 5%, Usually - 60%, Seldom - 35%, Never - 0%
• Borrow from a friend: Always - 18%, Usually - 68%, Seldom - 12%, Never - 0%
• Buy from a Facebook page: Always - 0%, Usually - 5%, Seldom - 45%, Never - 45%

If there existed an application which handled renting between students, would you use it?
• Yes - 40.6%
• No - 3.1%
• Maybe - 56.3%

There was a follow-up question for those who answered "no", asking why they would not want to use this type of application. Their answers were all similar: they thought it would not be worth the effort.

How long have you studied at Linköping University?
• 1 year - 2.7%
• 2 years - 37.5%
• 3 years - 47.9%
• 4 years - 10.6%
• 5 years - 1.3%
• Over 5 years - 0%
• I do not study at Linköping University - 0%

4.2 Sprint 1

This sprint was the first of three development sprints. The initial version of the web application and first user test results are presented below.

4.2.1 Sprint 1 - Development

The initial build mostly focused on getting the base of the application ready to use. The pages implemented first were index, account, adverts, unique advert, create advert, login and register. These pages laid the foundation for the complete build of the website. This build also included a base template which all of the pages shared. The base contained a navigation bar that eased navigation of the website and could be used to reach the advert, login, account and create advert pages. The navigation bar was fixed, meaning that it was shown at all times on the website. It also included a search bar, which was not yet functional. The base also included a footer which displayed the company name "MittÄrDitt" and a "contact us" button that was not yet functional.
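The page structure described above can be sketched as a set of Flask routes. This is a minimal sketch for illustration only; the route paths and template names are assumptions, not taken from the project.

```python
# Sketch of the initial page structure as Flask routes. Route paths and
# template names are illustrative; the templates are assumed to exist
# under templates/ and to extend a shared base template.
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def index():
    return render_template("index.html")

@app.route("/adverts")
def adverts():
    return render_template("adverts.html")

@app.route("/advert/<int:advert_id>")
def unique_advert(advert_id):
    return render_template("unique_advert.html", advert_id=advert_id)

@app.route("/create-advert", methods=["GET", "POST"])
def create_advert():
    return render_template("create_advert.html")

@app.route("/login", methods=["GET", "POST"])
def login():
    return render_template("login.html")

@app.route("/register", methods=["GET", "POST"])
def register():
    return render_template("register.html")

@app.route("/account")
def account():
    return render_template("account.html")
```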


Figure 4.1: The first half of the index page after the first sprint.

Figure 4.2: The second half of the index page after the first sprint.

Index Page

The index page was the page that the user first visited upon going to the "MittÄrDitt" web application. It consisted of a header displaying the name "MittÄrDitt" with a slogan underneath. Further down the page there were two rows which displayed adverts, one with the header "Utforska populära annonser" (explore popular adverts) and the other "Utforska nyligen tillagda annonser" (explore recently uploaded adverts). Each row contained four adverts, shown in the same advert cards that were also used on the advert page. After these two advert sections there was a section which displayed information about why a user should rent items. This part had animations which swept the bullet points into view from the right. These animations were used to draw the user's attention to the information and made it hard to overlook. The page was divided into clear sections, each with its own background color.

Advert Page

The advert page displayed the uploaded adverts as cards, each showing the uploaded image if the user had provided one or a default image otherwise. Each card contained information about the item, with a title and a description, as well as a "Läs mer" (read more) button which took the user to the page of that particular item. The advert cards were displayed in rows of four with some space between them to make them easy to tell apart.

Unique Advert Page

The unique advert page did not display much information at this stage. It only displayed the title of the advert, its uploaded image and the price with no additional styling.

Create Advert Page

The create advert page displayed input fields centered on the page: title, description, price, condition, image and pickup. These were not yet styled, and each input field consisted only of the input and its title. Below the inputs, a green button with the text "Ladda upp" (upload) was displayed for the user to click when finished filling out the form. If the user tried to submit an incomplete form, each empty field would display red text underneath it indicating what was wrong with that field.

Login Page

To be able to authenticate users and have other useful functionality coupled with authentication, the module Flask-Login was used.

The login page displayed two fields for the user to fill in, email and password. It also featured four buttons: "Logga in" (login), "Ny användare? Skapa konto här!" (New user? Create account here), "Avbryt" (abort) and "Glömt lösenord?" (forgot password?). These buttons were all green except the abort button, which was red to emphasize its meaning to the user. The user was also prompted with red text near the input fields when trying to log in with incorrect information. When the user successfully logged in, they were directed to the index page.
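A minimal sketch of the authentication flow with Flask-Login is shown below. The in-memory user store, the route paths and the password check are placeholders for illustration; they are not the project's actual implementation.

```python
# Minimal Flask-Login sketch of the login/logout flow described above.
# The user store and credential check are stand-ins, not the project's code.
from flask import Flask, redirect, request, url_for
from flask_login import LoginManager, UserMixin, login_user, logout_user, login_required

app = Flask(__name__)
app.secret_key = "change-me"  # required for session handling

login_manager = LoginManager(app)
login_manager.login_view = "login"

# Stand-in user store; the real application reads users from its database.
USERS = {"1": {"email": "user@example.com", "password": "secret"}}

class User(UserMixin):
    def __init__(self, user_id):
        self.id = user_id

@login_manager.user_loader
def load_user(user_id):
    return User(user_id) if user_id in USERS else None

@app.route("/login", methods=["POST"])
def login():
    email = request.form["email"]
    password = request.form["password"]
    for user_id, data in USERS.items():
        if data["email"] == email and data["password"] == password:
            login_user(User(user_id))
            return redirect(url_for("index"))  # successful login returns to the index page
    return "Fel e-post eller lösenord", 401  # shown as a red prompt in the real UI

@app.route("/logout")
@login_required
def logout():
    logout_user()
    return redirect(url_for("index"))

@app.route("/")
def index():
    return "index"
```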

Registration Page

To facilitate user input, the module Flask-WTForm was used. This module was also used on the login and create advert pages.

The create account page collected the information we wanted from a user signing up: email, name, address, telephone number and password. The input fields were the same as on the login page and had minimal styling. The page also featured a green button labelled "Registrera dig!" (register!).
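A registration form of this kind can be sketched with Flask-WTF/WTForms as below. The field names, labels and validators are illustrative assumptions; rendering form.errors in a template is one way to produce the red error text described for the create advert page.

```python
# Sketch of a registration form using Flask-WTF / WTForms. Field names,
# labels and validation rules are illustrative, not the project's code.
from flask_wtf import FlaskForm
from wtforms import StringField, PasswordField, SubmitField
from wtforms.validators import DataRequired, Length

class RegistrationForm(FlaskForm):
    email = StringField("E-post", validators=[DataRequired()])
    name = StringField("Namn", validators=[DataRequired()])
    address = StringField("Adress", validators=[DataRequired()])
    phone = StringField("Telefonnummer", validators=[DataRequired()])
    password = PasswordField("Lösenord", validators=[DataRequired(), Length(min=8)])
    submit = SubmitField("Registrera dig!")

# In a route, form.validate_on_submit() runs the validators on POST; any
# failing field exposes its messages in form.errors, which a template can
# render as red text beneath the corresponding input.
```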


Account Page

The account page displayed the adverts that the user had uploaded, using slightly modified advert cards which, instead of the "Läs mer" (read more) button, had two buttons: "Redigera" (edit) and "Ta bort" (delete). These buttons had different colors to emphasize the meaning of each: the delete button was red and the edit button was blue. The page also included information about the logged-in user, such as name, email, phone number and address. At this point the displayed user information had no styling and was simply shown as a bullet point list. The page also included a green logout button.

Style Choices

At this stage the styling of the page was minimal and only the core functionality of the web application was implemented. However, a few conscious decisions had been made regarding the styling. One of these was setting the background color to #f7ff89, which contrasts with the other content in a good way. Another was the font, which was the default sans-serif font used in Bootstrap. The buttons in the navigation bar were colored according to the "MittÄrDitt" logo color, #2CB297, a light green, with white text. The light green was also chosen as a natural color for information buttons; when these buttons were hovered, they changed color to #25846D, a slightly darker green.

4.2.2 Sprint 1 - Concurrent Think Aloud Protocol and Retrospective Probing

The CTA was conducted with five users while they performed nine carefully chosen tasks. This section is divided by the tasks that were performed, and only the tasks that received constructive comments are presented; the whole CTA protocol is presented in appendix D. After the CTA was performed, the user was asked to take part in retrospective probing. The results from the retrospective probing are presented after the CTA tasks, and all gathered comments are presented in appendix D.

Go to an Advert and Identify its Price

The whole test group found the task easy to complete, reaching the unique advert page by clicking on an advert card. Three out of five test participants commented on the layout of the unique advert page. The overall input was that the design could be improved and that more content should be added; one test participant expressed a desire to see more information about the advert, such as its description and condition.

Create an Account

All test participants succeeded in finding the registration page, though confusion occurred regarding how best to reach the registration form, since neither the "Logga in" (login) button nor the "Skapa annons" (create advert) button felt like the right one to choose. Furthermore, a confirmation upon successful registration was desirable according to one test participant.


Log in

All five test participants completed the task and deemed the login process intuitive and simple to complete, with the login button placed where they expected it, in the top right corner.

Go to Account Page

The account page was easy to find, though the layout could be more in line with the rest of the web application. According to two test participants, the adverts were too big.

Delete Advert

The test participants executed the task with ease, locating the "Ta bort annons" (delete advert) button quickly thanks to its size and bright red color. The test participants expressed a wish for a pop-up confirmation upon clicking the delete button, to minimize the risk of deleting an advert accidentally.

Log out

The test participants deemed the logout process intuitive and easy to perform. One test participant suggested that logout should be reachable from the navigation bar, via a drop-down menu on the account button.

Retrospective Probing

An important input from three out of five test participants was that they did not feel safe or confident whilst using the web application. They suggested adding more information about the creators, about how the process works and about the payment method being verified, in order to increase the perceived online trust. Furthermore, the test participants were positive towards the general design and basic functionality, though they were all eager to see more functionality and better integrated functionality.


4.2.3 Sprint 1 - System Usability Scale

As shown in table 4.1, the average test result of sprint one was 84, which indicates a high level of usability.

Table 4.1: Average SUS score Sprint 1

Question - Rating
I think that I would like to use this system frequently - 2.6
I found the system unnecessarily complex - 1
I thought the system was easy to use - 4.8
I think that I would need the support of a technical person to be able to use this system - 1.6
I found the various functions in this system were well integrated - 4.4
I thought there was too much inconsistency in this system - 1.8
I would imagine that most people would learn to use this system very quickly - 5
I do not trust and would feel unsafe using this web application - 1.2
I felt very confident using the system - 4.4
I would feel unsafe submitting my credit details on this site - 2
Average SUS Score - 84
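The average score can be reproduced from the per-question averages above, assuming the standard SUS scoring scheme: each positively worded (odd-numbered) item contributes its rating minus 1, each negatively worded (even-numbered) item contributes 5 minus its rating, and the sum is multiplied by 2.5. A minimal sketch:

```python
# Reproduce the average SUS score from the per-question averages in table 4.1,
# assuming the standard SUS scoring scheme.
ratings = [2.6, 1, 4.8, 1.6, 4.4, 1.8, 5, 1.2, 4.4, 2]  # questions 1-10

total = 0.0
for number, rating in enumerate(ratings, start=1):
    if number % 2 == 1:          # positively worded item
        total += rating - 1
    else:                        # negatively worded item
        total += 5 - rating

print(round(total * 2.5, 1))     # 84.0
```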

4.2.4 Sprint 1 - Efficiency and Effectiveness

All of the participating users in the user tests after the first sprint of programming were able to complete all of the given tasks. This means that the total effectiveness after the first sprint was 100%. The overall relative efficiency was also calculated; since all of the attempted tasks were successfully completed, this ratio was 1, or 100%.

Another method used to measure the efficiency was Time Based Efficiency. The results show that users were able to complete approximately 0.12 tasks per second. To measure the spread of the results, the standard deviation was calculated, and for the first sprint it was 16.7%.
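These figures follow the commonly used definitions: effectiveness is the share of attempted tasks that were completed, overall relative efficiency is the time spent on successfully completed tasks divided by the total time spent, and time based efficiency is the average number of completed tasks per second. The sketch below illustrates the calculations with hypothetical timing data; the numbers are not the recorded test data.

```python
# Sketch of the usability metrics above, computed from hypothetical data.
# completed[i][j] is 1 if participant i completed task j (0 otherwise) and
# time_spent[i][j] is the time in seconds; the numbers are illustrative only.
completed = [
    [1, 1, 1],
    [1, 1, 1],
]
time_spent = [
    [7.0, 9.0, 8.0],
    [10.0, 6.0, 9.0],
]

participants = len(completed)
tasks = len(completed[0])

# Effectiveness: share of attempted tasks that were successfully completed.
effectiveness = sum(map(sum, completed)) / (participants * tasks)

# Overall relative efficiency: time spent on completed tasks / total time spent.
relative_efficiency = sum(
    completed[i][j] * time_spent[i][j]
    for i in range(participants) for j in range(tasks)
) / sum(sum(row) for row in time_spent)

# Time based efficiency: average number of completed tasks per second.
time_based_efficiency = sum(
    completed[i][j] / time_spent[i][j]
    for i in range(participants) for j in range(tasks)
) / (participants * tasks)

print(f"{effectiveness:.0%}, {relative_efficiency:.0%}, {time_based_efficiency:.2f} tasks/s")
```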


4.3 Sprint 2

This sprint was the second of three development sprints. The current version of the web application and the corresponding user test results are presented below.

4.3.1 Sprint 2 - Development

During the second development sprint, a lot of the work went into style improvements, designing views and workflows to be as simple and intuitive as possible. Some new functionality was implemented, including the lease entity in the database, which captures and stores the relation between two users making a deal. Other major functionality implemented during the sprint was Stripe payment, text search navigation, the start of the order history presentation and a major refactoring of the code and project structure. Further development events and progress are described in the following sections.
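The lease entity mentioned above can be illustrated with a small sketch. The use of Flask-SQLAlchemy and all column names below are assumptions made for illustration; they are not taken from the project.

```python
# Sketch of a lease entity capturing the relation between two users making a
# deal. Flask-SQLAlchemy and the column names here are assumptions.
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Lease(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    advert_id = db.Column(db.Integer, nullable=False)    # the rented advert
    lender_id = db.Column(db.Integer, nullable=False)    # user who owns the item
    renter_id = db.Column(db.Integer, nullable=False)    # user renting the item
    start_date = db.Column(db.Date, nullable=False)      # rental period start
    end_date = db.Column(db.Date, nullable=False)        # rental period end
    amount_paid = db.Column(db.Integer, nullable=False)  # amount charged via Stripe
```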

Code Refactoring

The initial task of the second development sprint was to restructure the project and its code. This was done in order to simplify the overall structure and to increase modularity. Functionality, styling and HTML code were completely separated to make the code easier to understand and oversee. All pages were also divided into smaller sections, defined in different files, in order to increase modularity and make the code easier to work with. Corresponding CSS styling and functional JavaScript code was divided accordingly into separate files. The Jinja2 framework played a pivotal part in the restructuring: its macro functionality was used to create "objects" that could be used in several places in similar ways without having to write the code more than once. The advert card was defined as a macro and all pages displaying adverts simply used this macro. The design of the advert card was updated and improved as well.
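The macro pattern can be sketched as follows. The markup and field names are purely illustrative, and render_template_string is used only to keep the example self-contained; in the project the macro would live in its own template file and be imported by the templates that display adverts.

```python
# Illustration of defining an advert card as a Jinja2 macro and reusing it.
# The markup, route and field names are illustrative, not the project's code.
from flask import Flask, render_template_string

app = Flask(__name__)

ADVERT_LIST_TEMPLATE = """
{% macro advert_card(advert) %}
  <div class="card">
    <img src="{{ advert.image_url }}" alt="{{ advert.title }}">
    <h5>{{ advert.title }}</h5>
    <p>{{ advert.description }}</p>
    <a href="/advert/{{ advert.id }}">Läs mer</a>
  </div>
{% endmacro %}

{% for advert in adverts %}{{ advert_card(advert) }}{% endfor %}
"""

@app.route("/annonser")
def advert_page():
    adverts = [
        {"id": 1, "title": "Cykel", "description": "Hyrs ut per dygn",
         "image_url": "/static/default.png"},
    ]
    return render_template_string(ADVERT_LIST_TEMPLATE, adverts=adverts)
```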

Base

A couple of changes were made to the web application's base template, which all other pages inherited from. The footer, containing relevant and necessary information, which previously only existed on the landing page, was moved to the base template and was as a result displayed on all pages. The text search field functionality was completed, and the navigation bar was slightly altered by removing the "Alla annonser" (all adverts) button. A "contact us" button and modal were implemented, and the design of the advert cards was also updated and improved.


Figure 4.3: Footer in the index page
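The completed text search mentioned above can be illustrated with a minimal sketch. The route name, the in-memory advert list and the simple substring matching are illustrative assumptions rather than the project's implementation.

```python
# Minimal sketch of a text search over advert titles. The route name, the
# in-memory data and the matching logic are illustrative assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)

ADVERTS = [
    {"id": 1, "title": "Cykel"},
    {"id": 2, "title": "Partytält"},
    {"id": 3, "title": "Cykelpump"},
]

@app.route("/search")
def search():
    query = request.args.get("q", "").strip().lower()
    hits = [a for a in ADVERTS if query and query in a["title"].lower()]
    return jsonify(hits)
```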

General Styling

A lot of the development during the sprint went into style alterations and improvements. Examples include the implementation of glyph icons to make the interface more intuitive, the final styling of the registration and login forms, the general page header design, the unique advert page design, the my pages design and the advert card design.
