Linköpings universitet SE–581 83 Linköping

Linköping University | Department of Computer Science

Bachelor thesis, 18 ECTS | Cognitive Science

2017 | LIU-IDA/KOGVET-G--17/002--SE

When Should Feedback be Provided in Online Forms?

Using Revisits as a Measurement of Optimal Scanpath Disruption and Re-evaluating the Modal Theory of Form Completion

Isabella Koniakowski

Supervisor: Felix Koch
External Supervisor: Nils-Erik Gustafsson
Examiner: Shahram Moradi


Copyright

The publishers will keep this document online on the Internet – or its possible replacement – for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/hers own use and to use it unchanged for non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.


Abstract

In web forms, feedback can be provided to users at different points in time: instantly, as users leave a field, or after form submission. This study investigates these three ways of providing feedback to find which results in the shortest completion time, which results in the lowest number of gaze revisits to input fields, and which type of feedback users prefer. This was investigated through the development of prototypes that were tested with 30 participants in a within-group design, after which they were interviewed about their experiences. Providing feedback instantly or after form submission resulted in significantly shorter completion times than providing feedback after users left a field. Providing feedback instantly also resulted in significantly fewer revisits to input fields compared to providing feedback after leaving a field. Through a thematic analysis, users’ experiences were shown to be the most negative when given feedback after form submission, while the most positive experiences occurred when users were given feedback immediately. The results indicate that providing feedback immediately may be an equally good or better alternative to earlier research recommendations to provide feedback after form submission.

Acknowledgments

Firstly, I would like to thank my advisers Felix Koch, Linnea Berg Timm and Nils-Erik Gustafsson for your excellent feedback, your answers to all of my questions and your constant encouragement. Secondly, I would also like to thank Jasmina Jahic, Elin Sjöström and my family for your support, no matter the time of day or night. You are, have been, and will always be, invaluable to me. Finally, I would like to thank all of the participants in the study for their opinions and reactions.


Contents

Abstract
Acknowledgments
Contents
List of Figures
List of Tables

1 Introduction
   1.1 Background
      1.1.1 Earlier Studies on Web Forms
      1.1.2 Eye Tracking in Usability
   1.2 Aim
   1.3 Research Questions
   1.4 Delimitations

2 Theory
   2.1 Design of Web Forms
   2.2 Feedback in Web Forms
      2.2.1 Modal Theory of Form Completion
   2.3 Genre Analysis
   2.4 Eye Tracking
      2.4.1 Areas of Interest
      2.4.2 Scanpaths and Revisits
   2.5 Observation and Interview

3 Method
   3.1 Pre-study
      3.1.1 Genre Analysis
      3.1.2 Prototype Development
   3.2 Pilot Study
   3.3 Main Experiment
      3.3.1 Participants
      3.3.2 Apparatus
      3.3.3 Procedure
      3.3.4 Analysis

4 Results
   4.1 Quantitative Results
      4.1.1 Completion Time
      4.1.2 Revisits
   4.2 Qualitative Results
   4.3 Summary

5 Discussion
   5.1 Results
   5.2 Method
   5.3 The Work in a Wider Context

6 Conclusion

References


List of Figures

3.1 The developed prototype.
3.2 The developed prototype with error icons and messages to the right.
3.3 AOIs marked on the developed prototype.
4.1 Thematic map showing the participants’ experiences of Instant, where feedback was provided instantly.
4.2 Thematic map showing the participants’ experiences of On Leave, where feedback was provided as participants left a field.
4.3 Thematic map showing the participants’ experiences of After, where feedback was provided after form submission.


List of Tables

2.1 Guidelines for usable web form design.
3.1 Variables in registration processes.
3.2 The order of conditions for the participants, ordered vertically from first to last with context stated to the left.
3.3 Size information about the AOIs marked in BeGaze 3.6; percentage area is the percentage of the total screen area.
4.1 Average validation deviation values for each condition.
4.2 Median completion time expressed in seconds for each condition.
4.3 Median number of revisits to AOIs for each condition.
B.1 Number of participants that were coded to each theme in Instant.
B.2 Number of participants that were coded to each theme in On Leave.

1 Introduction

In general, users hate web forms, but they are useful for organisations (Jarrett & Gaffney, 2009; Wroblewski, 2008). Web forms are an effective way for companies to gather information about their users, for users to be able to communicate with companies, and for gathering data in research. Some examples are providing bank or shipping information when shopping online, contacting a company when questions about products arise, or gathering population data for a study. Web forms are also often used in registration processes, where they can be an effective way to provide extra content to, and gain detailed information about, recurring users. However, if users do not understand why they should register for a website or service, the web form can also be the reason why users leave a website, or intentionally enter incorrect information (Jarrett & Gaffney, 2009). There are several ways to make the users’ experience of a web form as straightforward and smooth as possible; only asking users for relevant information (Wroblewski, 2008), grouping relevant information together (Graham, 2008; Jarrett & Gaffney, 2009), having a clear visual path to completion (Wroblewski, 2008), and providing validation or feedback to users (Al-Saleh et al., 2012) are some of them.

These factors all contribute to the experience of the web form and the website as a whole, but no matter how little information a web form asks for, how well-grouped that information is, or how visually pleasing a web form is, some type of validation or feedback is necessary to make sure that the correct information is entered. This ensures that users know when they have entered incorrect information and provides opportunities for them to re-enter the information.

1.1 Background

In the following subsections, earlier studies on web forms are presented, as well as when it is appropriate to use eye tracking in usability studies. An introduction to the current thesis is also provided.

1.1.1 Earlier Studies on Web Forms

There have been a couple of studies examining how, and when, feedback should be given in web forms. In a first study (J. Bargas-Avila & Oberholzer, 2003), users were found to make significantly fewer errors when given embedded feedback (that is, feedback put into the website) as well as feedback in dialogues one entry at a time, after submitting the form. The results confirmed the validity of the common approach of presenting errors embedded in the form and suggested that it might not be optimal for users to get feedback as they leave a field.

A second study (J. Bargas-Avila, Oberholzer, Schmutz, de Vito, & Opwis, 2007) again showed that providing embedded feedback after submission of the form led to significantly fewer errors than providing embedded feedback as users leave a field. It was also stated in their discussion that completion times seemed to differ between the different conditions, but no significant differences between the conditions could be found. Based on these results, J. Bargas-Avila et al. argue that form completion is done in a modal way, stating that the completion mode, where users enter information, lies before the revision mode, where users correct their errors, and that this could be the reason that users make fewer errors when given feedback after submission of a form.

The findings of these two studies contradict findings from a later study, in which Al-Saleh et al. (2012) showed that most users did, in fact, respond to feedback while in the completion mode when the feedback was related to errors in the current field. More research is therefore needed to investigate whether immediate feedback should be recommended.

1.1.2 Eye Tracking in Usability

Eye tracking can be used to find usability problems where other methods fail. Bojko (2013) recommends using eye tracking to find the source of a usability problem, especially problems such as why an action takes longer than expected, or problems that involve interface layout, affordances, or messaging. Through views of users’ gazes, knowledge can be gained about where the gaze is interrupted from the optimal path. In this study, it is proposed that the number of revisits to areas of interest (AOIs), such as input fields, can be used as a measurement of the disruption of the optimal gaze path. Another way to measure comprehension or efficiency of an interface is through completion time (Tullis & Albert, 2013), and this measurement will therefore be used as a control variable. In addition, eye tracking can be used to explain qualitative results from interviews, or to aid observation methods by allowing a live view of participants’ gaze. The qualitative methods provide insight into the experiences of the users, while the eye tracking can help in understanding which elements of a web page improve, or impair, the experience.

Feedback in forms is important to help users get through processes like registrations without the experience of a whole website being worsened. It is also important for providing information about users to website owners. inUse Experience AB is a user experience consultancy that develops registration processes on a regular basis. To provide the best user experience for their end-users, they are interested in knowing more about how registration processes can be improved.

In collaboration with inUse Experience AB, this study further explores when feedback should be given in registration forms to make the registration process as smooth and straightforward as possible. The study compares three points in time at which feedback can be provided: immediately as users write, as they leave a field, and after submitting a form. Knowing which type of feedback is preferred will lessen the amount of work that needs to be done when designing and creating registration processes, and improve the users’ experience when registering on websites.

1.2 Aim

The study aims to find whether there is a relation between giving feedback instantly, as users leave a field, or after form submission when registering through a registration form on a website, and completion time as well as the number of revisits to AOIs. It also aims to investigate users’ attitudes towards, and experiences of, these types of feedback.


1.3 Research Questions

1. When should feedback be provided in registration forms to minimise:
   a) completion time?
   b) number of revisits to AOIs?

2. When should feedback be provided in registration forms to facilitate a positive user experience?

1.4 Delimitations

The study will not address registration forms on mobile devices or for multicultural websites, nor will it address all of the different information that could be asked for in a registration form. Instead, the registration form will be distilled to core attributes that most registrations have in common.

2 Theory

The theory section describes guidelines to follow when designing web forms and when to provide different types of feedback. A brief description of genre analysis is then followed by an account of how eye tracking and revisits may be used to measure how the optimal scanpath of users can be disrupted. Finally, observations and interviews are introduced as a complement to completion times and revisits to further explore the users’ personal experiences of feedback.

2.1 Design of Web Forms

Web form design differs between websites, as different requirements and terms apply to each website. For example, some websites might need several form pages while others might only need one, depending on how much information is needed from the users. There are, however, design guidelines that can be applied to all web forms. J. Bargas-Avila et al. (2010) have compiled a list of 20 guidelines for usable web form design based on best practices and empirical studies. Out of the empirically founded guidelines, guidelines 5, 13, and 14 are relevant for the purposes of this thesis. These are shown in Table 2.1. Guideline 5 is based on Penzo’s (2006) non-peer-reviewed study recommending vertically aligned labels, as horizontally left-aligned labels, and to some extent horizontally right-aligned labels, impose a higher workload on users. Penzo’s study is taken at face value as the entirety of the guidelines has been validated in a master’s thesis by Hänggli (2012), who found a trend that forms were perceived as more usable if they had been developed implementing J. Bargas-Avila et al.’s 20 guidelines. Guideline 13 is based on findings from J. A. Bargas-Avila, Orsini, Piosczyk, Urwyler, and Opwis (2011), whose results show that required input formats should be stated in advance to minimise the number of errors and trials, and that examples of correct inputs do not affect the number of errors or trials. Linderman and Fried (as cited by J. Bargas-Avila et al., 2010) showed that error messages should be written in a familiar language, clearly state what the error is and how it can be corrected, and be easily noticeable through the use of colour, icons, and text.

2.2 Feedback in Web Forms

Norman (2013) writes that feedback is an important tool to bridge the gulf of evaluation. The gulf reflects the effort needed to understand the state of a system, and whether the performed actions have led to the goal. In forms, feedback minimises this effort by showing users whether they entered incorrect information, and how this can be amended. To help users complete a form, error messages need to be noticeable. Wroblewski (2008) recommends using a "double visual emphasis", i.e. combining two or more of changing the colour of a label, adding instructions, changing the background colour, or adding an icon. He argues that there is a greater probability that an error will be noticed by all users when a form uses double visual emphasis, making sure that, among others, colour-blind users also notice errors.

Table 2.1: Guidelines for usable web form design as described by J. Bargas-Avila et al. (2010).

Guideline 5: To enable people to fill in a form as fast as possible, place the labels above the corresponding input fields.

Guideline 13: If answers are required in a specific format, state this in advance, communicating the imposed rule (format specification) without an additional example.

Guideline 14: Error messages should be polite and explain to the user in familiar language that a mistake has occurred. Eventually the error message should apologize for the mistake, and it should clearly describe what the mistake is and how it can be corrected.

The double visual emphasis technique is also utilised in J. Bargas-Avila and Oberholzer’s (2003) embedded feedback, with an error message at the top of the page and error messages to the right of the respective input field.

Al-Saleh et al. (2012) state that inline validation, providing feedback as the user is entering information, is essential when the complexity of fields in a form increases the likelihood that users enter invalid data. Complex fields could be fields asking for usernames, passwords or values that need a specific format, but inline validation is less useful when entering e.g. a first name, as people tend to know what their first name is. This validation should also be confirmatory, so that the users know when they have succeeded with a task, not only when they have failed. However, as stated in the introduction, J. Bargas-Avila et al. (2007) do not share this view. They recommend providing the user with feedback after form submission instead.
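As an illustration of the kind of field-level rules that make inline validation worthwhile, the logic can be sketched as follows. This is a minimal sketch: the concrete formats and messages are invented for illustration and are not taken from the thesis prototypes. Note that, per Al-Saleh et al. (2012), the success branch returns a confirmatory message rather than staying silent.

```python
import re

def validate_field(name: str, value: str) -> tuple[bool, str]:
    """Return (ok, message); the message is confirmatory on success and
    corrective on failure. The rules below are hypothetical examples."""
    if name == "username":
        if re.fullmatch(r"[A-Za-z0-9_]{3,20}", value):
            return True, "Username looks good"
        return False, "Use 3-20 letters, digits, or underscores"
    if name == "password":
        if len(value) >= 8 and any(c.isdigit() for c in value):
            return True, "Password accepted"
        return False, "Use at least 8 characters including a digit"
    # Simple fields such as a first name rarely need inline validation.
    return True, ""

print(validate_field("username", "anna_k"))  # (True, 'Username looks good')
print(validate_field("password", "short"))   # (False, 'Use at least 8 characters including a digit')
```

In an actual web form, such a function would run on each keystroke (for instant feedback), on blur (on leave), or once on submit, which is precisely the manipulation compared in this thesis.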

2.2.1 Modal Theory of Form Completion

Based on their findings, J. Bargas-Avila et al. (2007) postulate a "modal theory of form completion". Users start in the "Completion Mode", where the goal is to fill in the correct information and where error messages are often ignored. As users finish filling out the form, they switch to "Revision Mode" when the first error message appears, and the necessary actions are taken to correct the errors. This theory aligns with providing feedback after form submission, and does not clash considerably with immediate feedback, as it states that users will simply ignore such feedback until they enter the revision mode.

2.3 Genre Analysis

A genre analysis compares the form, content, and purpose of several websites to find similarities and abstract design principles. The content, elements, and sub-elements of each website are mapped out, and the purposes of the web pages and the elements are identified. The hierarchy and layout of the web pages are described, making note of movement, appearance, texture, sound, and affordances. Then, information about the web pages is compared, and characteristics that are necessary for users to understand what genre the website belongs to, what the purpose of the website is, and potential sub-genres are identified (Arvola, Lundberg, & Holmlid, 2010). In this study, genre analysis was used to characterise registration forms and account creation as a genre, to develop prototypes that reflect the registration forms that users are exposed to naturally.


2.4 Eye Tracking

The eye alternates between resting on something, called a fixation, and moving from fixation to fixation, called a saccade. A saccade lasts between 10 and 100 ms, and the image from the eye is too blurred during it to provide any information. A fixation, on the other hand, lasts between 100 and 500 ms, and during a fixation there is time to process information (Nielsen & Pernice, 2010).

Eye trackers measure the movements of the eye in hope of gaining knowledge about the processes of the mind. This belief is founded in the mind-eye hypothesis, which states that what people are looking at and what they are thinking about is related (Nielsen & Pernice, 2010; Pashler, 1998). Furthermore, Holmqvist et al. (2011) state that attention, processing and fixations are closely related. They do, however, present studies that show that these three are not always that closely correlated; attention can arrive up to 250 ms earlier than the fixation in a laboratory setting and processing of a fixated item may continue for long after a fixation has ended.

There are several kinds of eye tracking equipment, differing in frequency of data collection, accuracy, precision, and mobility of the system itself. Most eye trackers today use the pupil-and-corneal-reflection method, where infra-red light is used both to illuminate the eye and to record it. This avoids all natural light reflections and therefore makes the corneal reflection created by the infra-red light easier to record. The geometrical centre of the pupil is identified by pinpointing the darkest part of the eye and is used together with the corneal reflection to estimate the gaze of the participant. Lid occlusion of, or eyelashes in front of, the pupil can reduce the ability of the eye tracker to accurately locate the centre of the pupil and can therefore affect its ability to track eye movements (Holmqvist et al., 2011).

The eye tracker compiles saccades and fixations into gaze patterns for further analysis, but these gaze patterns do not tell the full story. Humans only see with high resolution in 2 % of their visual field (foveal vision), the rest being blurred in what is called the peripheral vision. This makes a strong case that the fixations recorded by the eye tracker are what the user sees, but it cannot be assumed that people have not perceived something in their peripheral vision. Elements of the surroundings that are not fixated on can still be perceived, albeit not in as high resolution (Nielsen & Pernice, 2010).

Eye trackers’ frequency of data collection varies; stationary eye-trackers generally have a higher sampling frequency than mobile ones. A 50 Hz eye-tracker has a window of no sampling that is 20 ms long, while with a 500 Hz eye-tracker the window is 2 ms long, increasing the precision of on- and offsets of saccades, fixations and other, more delicate, eye events (Holmqvist et al., 2011).

Tullis and Albert (2013) describe that an eye tracking session starts off with a calibration where the participant is shown a series of dots. The eye tracker software then uses the image of the pupil together with the location of the dots to calculate where the participant is looking. The quality of the calibration is typically expressed as the number of degrees that the actual gaze could deviate from the calculated gaze. This means that, with a bad calibration, the calculated gaze will show the participant looking somewhere that they actually are not. One degree of deviation in calibration corresponds to one centimetre on the screen and is generally considered acceptable (Tullis & Albert, 2013), but if the study employs AOIs, the size of these has to be taken into account.
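The two rules of thumb above, the sampling "blind window" and the one-degree-to-one-centimetre correspondence, can be checked with a few lines of arithmetic. The 57.3 cm viewing distance below is an assumption, chosen because it is the classical distance at which one degree of visual angle subtends roughly one centimetre.

```python
import math

def blind_window_ms(sampling_hz: float) -> float:
    # Gap between two consecutive samples: a 50 Hz tracker records
    # nothing for 20 ms at a time, a 500 Hz tracker for only 2 ms.
    return 1000.0 / sampling_hz

def deviation_cm(degrees: float, viewing_distance_cm: float) -> float:
    # On-screen error produced by an angular calibration error.
    return math.tan(math.radians(degrees)) * viewing_distance_cm

print(blind_window_ms(50))                # 20.0
print(blind_window_ms(500))               # 2.0
print(round(deviation_cm(1.0, 57.3), 2))  # 1.0
```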

2.4.1 Areas of Interest

AOIs are defined as areas of the stimulus that are more interesting to gather data about. By sectioning one or more parts of the stimulus as AOIs, statistics about these areas can be collected (Holmqvist et al., 2011). The calibration needs to be exact enough that an actual gaze in one AOI cannot be recorded as a calculated gaze in another AOI. Holmqvist et al. (2011) describe measures that can be calculated for AOIs; for example, the number of AOI hits, transitions, and revisits can be gathered. An AOI hit is whenever a participant’s gaze lands in a specified AOI, an AOI transition is when a participant’s gaze switches from one AOI to another, and a revisit is when an earlier visited AOI is visited again after the gaze has moved to another area. Any area that is not covered by an AOI is referred to as whitespace, and if a participant spends a lot of dwell time in the whitespace, it can indicate processing or afterthought.
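These three AOI measures can be made concrete with a small sketch. It assumes fixations have already been classified and labelled with the AOI they landed in (None for whitespace); the field names and gaze sequence are invented for illustration.

```python
def aoi_measures(fixation_aois):
    """Compute AOI hits, transitions, and revisits from a sequence of
    AOI labels, one per fixation (None = whitespace)."""
    # Collapse consecutive fixations in the same area into single runs.
    runs = []
    for label in fixation_aois:
        if not runs or runs[-1] != label:
            runs.append(label)
    visits = [r for r in runs if r is not None]  # drop whitespace runs
    hits = len(visits)                           # each visit starts with a hit
    transitions = sum(1 for a, b in zip(visits, visits[1:]) if a != b)
    seen, revisits = set(), 0
    for aoi in visits:
        if aoi in seen:
            revisits += 1                        # AOI seen before: a revisit
        seen.add(aoi)
    return hits, transitions, revisits

# Username -> Email -> whitespace -> Password -> back to Email -> Password
gaze = ["Username", "Username", "Email", None, "Password", "Email", "Password"]
print(aoi_measures(gaze))  # (5, 4, 2): two of the five visits are revisits
```

Whether a glance into whitespace and back to the same AOI counts as a revisit is a definitional choice; the sketch counts it, since the gaze has moved to another area in between.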

2.4.2 Scanpaths and Revisits

When users search for specific content on the web, they actively search for information using a strategy or plan (Bergstrom & Schall, 2014). In these tasks, the optimal way for the users’ gaze to move, called the optimal scanpath, is a straight line to the desired target (Goldberg & Kotval, 1999). A longer scanpath results in longer completion time and less effective task completion. When using scanpaths to measure disruption from the optimal scanpath, all participants’ scanpaths need to be unified. This does, however, demand that a mathematical analysis be performed on the dataset (Eraslan, Yesilada, & Harper, 2015), which all eye tracking researchers might not be able to conduct, or which may take more time than is available.
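A lightweight per-participant alternative, in the spirit of Goldberg and Kotval's (1999) scanpath measures, is to compare the travelled scanpath length against the straight-line distance of the optimal path: a ratio of 1.0 means no deviation. The coordinates below are invented for illustration; this is not the analysis used in this thesis.

```python
import math

def path_length(points):
    # Total Euclidean length of a scanpath given fixation centres (x, y).
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def deviation_ratio(points):
    # Scanpath length divided by the straight line from first to last
    # fixation; 1.0 means the gaze followed the optimal straight path.
    return path_length(points) / math.dist(points[0], points[-1])

straight = [(0, 0), (0, 100), (0, 200)]            # field by field, downwards
look_back = [(0, 0), (0, 100), (0, 50), (0, 200)]  # one glance back up the form
print(deviation_ratio(straight))   # 1.0
print(deviation_ratio(look_back))  # 1.5
```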

In eye tracking research about reading on computer screens, words can be coded as AOIs, and backtracking eye movements to these AOIs, called regressive saccades, can occur because of oculomotor aiming errors or failures in word identification (Kennedy, 2000), or they can be used as a measure of processing difficulty during encoding (Ghaoui, 2006; Kennedy, 2000). Rayner, Chace, Slattery, and Ashby (2006) do, however, state that reading time has typically been used to measure comprehension processes or discourse comprehension in reading.

Revisits to AOIs have so far been used, outside of reading, as a measure of ’stickiness’ of an AOI: how much attention that AOI grabs from the users (Tullis & Albert, 2013). That is, however, not their only use. As scanpaths can be a demanding choice for assessing task search efficiency, and both search efficiency and reading comprehension have been measured with completion time, I propose using the number of revisits to AOIs as a measure of how much the optimal scanpath has been disrupted. Looking back at an earlier input field is, in the context of web forms, an interruption of the optimal scanpath, or what Wroblewski (2008) calls the clear path to completion. The optimal scanpath in forms would be looking at, and entering information in, one input field at a time to finally arrive at the sign up button. Presenting errors at different points in time would disrupt this scanpath to different degrees. Having more revisits would therefore, according to this theory, be negative, as a simple scanpath is desired.

2.5 Observation and Interview

Bojko (2013) writes that eye tracking can help validate insights from qualitative data. This data could be interview data, observational data, or ideally both, as what people remember doing and what people actually do does not always match (Goodwin, 2009). While observing, Arvola (2014) states that special attention should be paid to areas or events where problems arise, and that the underlying reasons for these should then be further analysed. Observing users’ interactions with an interface can be difficult: their reactions and verbal responses can be easily seen or heard, but what triggers these reactions is harder to deduce. Using live eye tracking can alleviate this by letting the observer know what the user is looking at and paying attention to while reacting in different ways. Observation does not, however, even with live eye tracking, result in an interpretation of what is observed or knowledge about what the user experiences, and should therefore be combined with other methods, such as interviews, and further analysis (Goodwin, 2009).

To understand users’ personal experience, reasoning, or opinions, qualitative interviews can be used. Howitt (2010) describes that qualitative interviews are characterised by open-ended questions and a semi-structured organisation, where a list of questions or areas that should be touched upon is prepared. He writes that these questions can be expanded or skipped depending on what the interviewee spontaneously brings up during the interview. While interviewing, the researcher needs to listen actively, absorbing what is being said while formulating further questions to "fill the gaps". If there is time and resources, the interview is recorded and the audio is later transcribed to facilitate further analysis of the conversation.

To identify, analyse, and report patterns and themes within data, Braun and Clarke (2006) recommend using thematic analysis. They state that it can be used to achieve a rich description of an entire dataset or a nuanced account of one particular theme or research question. A thematic analysis can be focused on either surface meaning or latent meaning, gaining knowledge about experiences and meanings or about sociocultural contexts, respectively. Braun and Clarke (2006) also describe the procedure of a thematic analysis. The first step is to familiarise oneself with the data and generate initial coding. The search for themes then starts, sorting the coded passages into coherent themes distinct from each other. From these themes, thematic maps are created and reviewed, and the motivations behind the themes are formulated and written down.
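The bookkeeping behind counts such as those in Tables B.1 and B.2 (how many participants were coded to each theme) can be sketched as below; the participant identifiers and theme names are invented for illustration and do not come from the thesis data.

```python
from collections import defaultdict

# Hypothetical coding result: participant -> themes coded in their interview.
codings = {
    "P01": {"felt in control", "interrupted flow"},
    "P02": {"felt in control"},
    "P03": {"interrupted flow", "felt in control"},
}

# Invert the mapping: theme -> set of participants coded to it.
participants_per_theme = defaultdict(set)
for participant, themes in codings.items():
    for theme in themes:
        participants_per_theme[theme].add(participant)

for theme in sorted(participants_per_theme):
    print(theme, len(participants_per_theme[theme]))
# felt in control 3
# interrupted flow 2
```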

3 Method

The method chapter begins with a description of the pre-study that was conducted to develop prototypes for the main experiment. A description of the pilot study and what was learned from it is then presented. Finally, the design, participants, apparatus, and procedure of the main experiment are described, as well as the methods used to analyse the results.

3.1 Pre-study

A pre-study was conducted to develop the prototypes for the study. Firstly, the genre analysis conducted to find industry standards is described and secondly, the developed prototypes are described.

3.1.1 Genre Analysis

A genre analysis was conducted with the goal of finding the characteristics, form, and elements that characterise a registration form. This knowledge was then used to give the prototypes ecological validity by anchoring their variables and layouts in real-world registration forms.

Model Websites

As model websites for the genre analysis, www.google.se, www.facebook.com, www.reddit.com, www.wikipedia.org, and www.twitch.tv were used. These websites were the five most visited websites with distinct registration forms in Sweden, as stated by Alexa.com (2017). The top list was determined by the number of daily visitors and page views over the last month at the point of retrieval.

The genre analysis was applied only to the registration processes of the websites, finding what characterises a registration form rather than the whole websites. The most visited websites were chosen as they are likely to represent registrations that users, at one point or another, encounter while browsing the internet.

Analysis

The first part of a genre analysis usually consists of identifying elements and their sub-elements in a hierarchical breakdown across the website. As the registrations of all websites were single pages, this step was not applied. The purpose of the registration processes was identified as "having the user registered on the website". Next, the form of the websites was described, considering movement, space, time, appearance, texture and sound. Of special interest in the descriptions were the ways in which the different pages provided feedback to the users. Finally, the websites were compared to each other, placing them side by side to find differences and similarities in elements, characteristics, and form. The similarities were summarised into a blueprint depicting the necessary characteristics of a registration form.


Results

All of the registration processes had a form structure where labels or inline labels described what information the user should fill in. The most common variables were email, username, and password. The full list of variables is shown in Table 3.1. In addition to the labels and input fields, all of the websites had a button for form submission. The text on this button varied from "Register" or "Join" to "Next step" or "Create an account". Drop-down lists were the most common option for the birthday variable and were also used in one of the two gender variable cases, while the other case used radio buttons. In three of the cases, a CAPTCHA or reCAPTCHA function was used to minimise the possibility of accounts created by bots by asking the users either to choose pictures with a certain object in them, or to check a checkbox stating that they were not a robot.

Table 3.1: Variables in registration processes.

Variable Google Facebook Reddit Wikipedia Twitch

Name x x
Email x x x x x
Username x x x x
Password x x x x x
Confirm password x x x*
Birthday x x x
Gender x x
Phone number x x
Location x

*Wikipedia provided a "View password" function in association with the "Confirm password" variable.

There was a variation between the websites in how they indicated required fields and how they provided feedback in general. All of the websites provided double visual feedback in different forms; Wikipedia provided it at the top of the page and next to the relevant input field after form submission, while Google and Facebook provided it with a red border around the field and an error message to the right of the field or a warning icon inside the field respectively as users left a field. Reddit and Twitch provided feedback in the form of an icon inside the field and an error message to the right of the field or at the top of the page respectively. Their feedback was provided instantly with a short delay for processing.

Wikipedia indicated that one field was optional with additional text next to the label. Google and Facebook gave no indication of which fields were required, although all fields were required for Facebook and all but a few for Google. Reddit, and to some extent Twitch, provided negative and positive inline verification but did not proactively indicate required fields.

3.1.2

Prototype Development

To test the different types of feedback given in forms, prototypes were created using Axure RP8. They were developed using the relevant guidelines from J. Bargas-Avila et al. (2010) and the results from the genre analysis as a foundation. The prototypes were identical in most aspects, differing in when feedback was provided, the surrounding context, and the font.

Figure 3.1 shows the central elements of the prototypes. All contained the labels "Username", "Email", "Password" and "Confirm Password", followed by a button labelled "Sign up". The labels were positioned above the input fields, and the components were vertically grouped to allow for an easy overview. The registration fields and labels were all centred on the screen to facilitate eye tracking data of as high quality as possible.


3.1. Pre-study

Figure 3.1: The developed prototype.

Three conditions were developed for the study, providing feedback at different points in time. In "Instant", feedback was provided instantly; in "On Leave", feedback was provided as the participant left the input field; and in "After", feedback was provided after form submission. The feedback was provided in the form of an error icon and an error message, both shown for all variables in Figure 3.2.

To ensure that participants received error messages, the prototypes were created so that the participants would enter incorrect information. The first time a participant entered a username, it was always occupied, and email addresses had to be over 15 characters. The conditions also had different requirements on passwords: Instant required at least one upper-case letter, On Leave required at least one special character, and After required at least one numeral. All of the conditions required the password to be a minimum of 10 characters long.
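The condition-specific password rules can be expressed as simple validation checks. The sketch below is a hypothetical implementation of the rules described above, not the prototypes' actual Axure logic:

```python
import re

# Condition-specific password requirements as described above.
PASSWORD_RULES = {
    "Instant": r"[A-Z]",          # at least one upper-case letter
    "On Leave": r"[^A-Za-z0-9]",  # at least one special character
    "After": r"[0-9]",            # at least one numeral
}

def password_ok(password, condition):
    """Minimum 10 characters plus the condition-specific requirement."""
    return (len(password) >= 10
            and re.search(PASSWORD_RULES[condition], password) is not None)

print(password_ok("Secret1234", "After"))    # True: 10 chars, contains digits
print(password_ok("secret1234", "Instant"))  # False: no upper-case letter
```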


For each condition, three prototypes were created, resulting in a total of nine prototypes. The difference between prototypes for the same condition was the surrounding web page, called the context, which was modelled on the three most visited websites according to Alexa.com (2017). The three contexts were "Facebook-inspired", "Reddit-inspired", and "Google-inspired", all of them resembling and using elements inspired by the registration pages of the real websites. The font of the prototypes and the style of the sign up button were also styled to match the contexts.

3.2

Pilot Study

A pilot study was conducted with three participants, all women between 21 and 22 years of age. After the first participant, the experiment design was changed to correctly separate participants into counterbalanced groups to counteract learning effects. When all three participants had taken part, the instructions for the study were changed to clarify what the participants would be asked to do during the session. Finally, the interview questions were specified further to give continuity to the post-test interviews. The subsequent study is described in the following sections.

3.3

Main Experiment

The main experiment was conducted with eye tracking, using the created prototypes as stimuli. As eye tracking measurements can vary a lot from individual to individual, a within-group design was used. There were in total three conditions, corresponding to the three types of feedback: instant feedback (Instant), feedback when leaving a field (On Leave), and feedback after form submission (After). For each of the three conditions, there were three different contexts: "Facebook-inspired", "Reddit-inspired", and "Google-inspired". This resulted in a total of nine prototypes, three for each condition.

To counterbalance learning effects, participants were randomly assigned to a group and exposed to the conditions in the order shown in Table 3.2. The order of the contexts did not change between participants.

Table 3.2: The order of conditions for the participants ordered vertically from first to last with context stated to the left.

Context Group 1 Group 2 Group 3

"Facebook-inspired" Instant On Leave After "Reddit-inspired" On Leave After Instant "Google-inspired" After Instant On Leave
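The assignment in Table 3.2 is a 3 × 3 Latin square: each condition appears exactly once per group and once per context. A minimal sketch of how such an order can be generated by cyclic rotation:

```python
# Build a 3x3 Latin square of feedback conditions by cyclic rotation,
# reproducing the group orders in Table 3.2.
conditions = ["Instant", "On Leave", "After"]
contexts = ["Facebook-inspired", "Reddit-inspired", "Google-inspired"]

orders = {f"Group {g + 1}": [conditions[(i + g) % 3] for i in range(3)]
          for g in range(3)}

# Print one row per context, columns corresponding to Groups 1-3.
for ctx_index, ctx in enumerate(contexts):
    row = [orders[group][ctx_index] for group in orders]
    print(ctx, row)
```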

3.3.1

Participants

Out of the 35 participants recruited for the study, 5 participants were excluded from the eye tracking part of the experiment due to loss of data, technical problems, or the calibration not being within the threshold values. Out of the remaining 30, 16 were women and 14 were men. The age ranged between 19 and 31 (M = 22.9, SD = 2.66). As the study used a within-subject design, all participants were exposed to all conditions.

In total, 33 participants were observed interacting with the prototypes and subsequently interviewed about their experiences (17 women, 16 men), ranging in age from 19 to 31 (M = 23, SD = 2.57).

The participants were recruited from Linköping University using convenience sampling, with the restrictions that participants must not need vision correction to use a computer at 60-70 cm distance and that their calibrations resulted in a deviation below one degree. All of the participants were students, and informed consent was obtained from all of them.

3.3.2

Apparatus

The eye tracking data was collected using a SensoMotoric Instruments RED500 remote eye tracking device with a sampling rate of 500 Hz and iViewX 2.8 software on a Windows 7 PC. The eye tracker uses the pupil-and-corneal-reflection method to track the eyes. The stimuli presented consisted of the prototypes developed earlier. They were presented through SMI Experiment Center 3.6.44's web-stimulus setting, using Internet Explorer 10 as the browser and appearing on a 1680 x 1090 flat screen. The participants also had access to a keyboard and a mouse.

3.3.3

Procedure

As a session started, the participants were first given an informational paper and a consent form. After reading and filling out the informational paper (Appendix A), they were verbally walked through the procedure of the session, informed that their personal information would be kept confidential, and then asked to read and sign the consent form. The participants were then asked to perform a trial nine-point calibration to familiarise them with the process. For each website, the participants were first shown a promotional video introducing the website that they were to sign up for. An informational text was then shown, asking the participant to sign up for the website. The data gathering then started with a calibration, after which the participants were presented with the website to sign up for. The calibration was validated and accepted only if the eye tracking was not off by more than 1 degree. This procedure was then repeated for the other two websites with the associated video, informational text, and calibration. During the session, observational data was collected with pen and paper. When all three sign-ups had been performed, the different feedback types were verbally explained and a short interview was conducted. In the interview, participants were asked if they liked or disliked any feedback alternative compared to the others, and if they had noticed anything in particular during the session. Answers from the interviews, as well as observations made during the study, were jotted down for later analysis.

3.3.4

Analysis

The eye tracking data was initially analysed using BeGaze 3.6 to mark AOIs and summarise statistics about the participants' eye movements. There were in total four AOIs per prototype, all of the same size (640 px in the x-direction and 71 px in the y-direction), each centred around a label-input field pair, as seen in Figure 3.3. Additional information about the size of the AOIs is shown in Table 3.3.

As the data was not normally distributed, non-parametric ANOVAs for within-group design (Friedman tests) were conducted. This allowed discovery of significant differences in variance within participants' results caused by feedback type (Field, 2013). This was followed up with Wilcoxon signed-rank tests comparing the conditions pairwise to find where the significant differences lay. To correct for the number of tests, Bonferroni correction was applied to the alpha value, correcting it to .0167 (Field, 2013).
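The analysis pipeline described above (Friedman test, pairwise Wilcoxon signed-rank tests, Bonferroni-corrected alpha) can be sketched with SciPy. The data below is randomly generated placeholder data, not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical completion times (seconds) for 30 participants per condition.
instant = rng.normal(50, 10, 30)
on_leave = rng.normal(66, 12, 30)
after = rng.normal(54, 10, 30)

# Friedman test: non-parametric within-group ANOVA across the conditions.
chi2, p = stats.friedmanchisquare(instant, on_leave, after)
print(f"Friedman chi2 = {chi2:.3f}, p = {p:.5f}")

# Post hoc pairwise Wilcoxon signed-rank tests with Bonferroni-corrected
# alpha = .05 / 3 = .0167.
alpha = 0.05 / 3
pairs = {"Instant vs On Leave": (instant, on_leave),
         "Instant vs After": (instant, after),
         "On Leave vs After": (on_leave, after)}
for name, (a, b) in pairs.items():
    w, p = stats.wilcoxon(a, b)
    print(f"{name}: p = {p:.4f}, significant = {p < alpha}")
```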

Table 3.3: Size information about the AOIs marked in BeGaze 3.6; percentage area is the percentage of the total screen area.

Area [px] Percentage area [%]

AOI 45440 2.2

Total AOI 181760 8.8

Whitespace 1891840 91
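The figures in Table 3.3 can be verified with simple arithmetic; the total screen area is inferred here as the sum of the AOI and whitespace areas:

```python
# Check the AOI area figures in Table 3.3.
aoi_w, aoi_h = 640, 71   # AOI size in px (x- and y-direction)
n_aois = 4

aoi_area = aoi_w * aoi_h              # 45440
total_aoi = n_aois * aoi_area         # 181760
whitespace = 1891840                  # from Table 3.3
screen_area = total_aoi + whitespace  # 2073600

print(round(100 * aoi_area / screen_area, 1))   # 2.2
print(round(100 * total_aoi / screen_area, 1))  # 8.8
```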

The interviews and observations were analysed together through a thematic analysis. The analysis was conducted using the second research question as a starting point and with the purpose of finding themes for the three different forms of feedback. The thematic analysis did not account for latent meanings in the dataset, but focused on the surface meaning to gain knowledge about the experiences of the participants.

Figure 3.3: AOIs marked on the developed prototype.


4

Results

The results chapter begins with a very brief account of tracking ratio and validation deviation during the experiment, followed by the statistical results for completion time and the number of revisits made to AOIs. Three thematic maps are then presented with short data excerpts from the observations and interviews that support the thematic analysis. Finally, the results are summarised.

4.1

Quantitative Results

The tracking ratio for participants ranged between 42.7 % and 72.6 % (M = 59.7 %, SD = .07). The average validation deviation was .50 degrees in the x-direction and .48 degrees in the y-direction; values for each condition are shown in Table 4.1.

Table 4.1: Average validation deviation values for each condition. Condition Deviation X [°] Deviation Y [°]

Instant .53 .50

On Leave .55 .47

After .43 .48

4.1.1

Completion Time

There was a statistically significant difference in completion time depending on the timing of feedback, as shown by a Friedman test, χ2(2) = 18.467, p = .00009. Post hoc analysis with Wilcoxon signed-rank tests was conducted with Bonferroni correction applied, resulting in a significance level of p < .0167. Median completion times for the conditions are shown in Table 4.2. Completion time was significantly shorter in Instant compared to On Leave (Z = -3.651, p = .00026), and in After compared to On Leave (Z = -3.116, p = .002). However, no significant difference was found in completion time between Instant and After (Z = -.710, p = .478).

Table 4.2: Median completion time expressed in seconds for each condition. Condition Median [s]

Instant 49.7

On Leave 66.4

After 53.7

4.1.2

Revisits

There was a statistically significant difference in the number of revisits depending on the timing of feedback, as shown by a Friedman test, χ2(2) = 9.548, p = .008. Post hoc analysis with Wilcoxon signed-rank tests was conducted with Bonferroni correction applied, resulting in a significance level of p < .0167. Median numbers of revisits for the conditions are shown in Table 4.3. The number of revisits was significantly lower in Instant compared to On Leave (Z = -2.621, p = .0088). However, no significant differences were found in the number of revisits between On Leave and After (Z = -2.284, p = .022), or between Instant and After (Z = -.113, p = .910).

Table 4.3: Median number of revisits to AOIs for each condition. Condition Median

Instant 16

On Leave 21

After 15

4.2

Qualitative Results

The thematic analysis resulted in the three thematic maps shown in Figures 4.1, 4.2 and 4.3, each corresponding to one of the three feedback types. The exact number of participants coded to each theme is not shown in these maps, as some participants made statements and acted in ways that counted towards several, conflicting, themes. Additionally, in qualitative research the goal is not to count how many participants made certain statements or acted in certain ways, but to reflect the different experiences of the participants. The number of participants coded to each theme is therefore of less interest. The values can, however, be seen in Appendix B.

The themes developed from the participants' experiences of instant feedback in Instant (Figure 4.1) were mostly positive, producing statements like "[instant feedback] was most intuitive as it lets you know immediately" and "[instant feedback] is faster, one does not have to think as much". However, some participants were more negative towards the feedback, stating that it was an annoyance to be given feedback before they had finished entering information in a field. Other participants did not notice the feedback until they had finished entering information in a field, or did not look at the feedback until the sign up button had been pressed.

Figure 4.1: Thematic map showing the participants’ experiences of Instant, where feedback was provided instantly.

The themes developed from interviews and observations about On Leave (Figure 4.2) were more diverse than those from Instant. Several participants found this form of feedback irritating, while some found it straightforward. One salient theme was that participants had trouble understanding the mechanics of the feedback, not realising that they had to leave the field for the feedback to be updated. This resulted in statements such as "[in On Leave] it was still showing wrong, even though one had changed it" and "[On Leave] was the most frustrating". However, some participants stated that they did not feel much of a difference between Instant and On Leave.

Figure 4.2: Thematic map showing the participants’ experiences of On Leave, where feedback was provided as participants left a field.

From interviews and observations of interactions with After, the themes were mostly negative. A few participants did not understand the mechanics of the feedback straight away, although not nearly as many as in On Leave. Several participants made dissatisfied facial expressions when given feedback, or stated that it was irritating to be given feedback so late in the process. Examples of this opinion are "in [After] you think that you are done [but you are not]" and "it is easier when you can do it little by little".

Figure 4.3: Thematic map showing the participants’ experiences of After, where feedback was provided after form submission.

4.3

Summary

The statistical analysis showed that significantly less time was spent in Instant and After compared to On Leave. There were also significantly more revisits to AOIs in On Leave compared to Instant. The qualitative analysis showed a mostly positive experience of Instant, while After was experienced mostly negatively. Several participants had problems understanding the mechanics of On Leave, but experiences of the condition were mixed, ranging from mostly negative or neutral to some positive experiences.

(31)

5

Discussion

The discussion is divided into three parts: discussion of results, discussion of methods, and possible implications of the study. The results section discusses completion times, revisits and qualitative results together, to arrive at recommendations for feedback types in forms, as well as future research. The method discussion follows the overall structure of the paper, starting with a discussion of the validity of the prototypes, continuing with a discussion of how the eye tracking study was performed and ending with a discussion of the validity of the results. Finally, the work is discussed from a wider perspective, along with current trends and possible futures.

5.1

Results

Completion times are often used as a measure of how good an interface is: shorter completion times mean that less time, and therefore less effort, was needed to complete a task. In this study, research question 1a was "When should feedback be provided in registration forms to minimise completion time?". The results show significantly shorter completion times with instant feedback and with feedback after form submission, compared to feedback when leaving a field. This indicates that providing feedback instantly as the users type may be a valid alternative to J. Bargas-Avila et al.'s (2007) suggestion of providing feedback after a form has been submitted. This would also be in line with Al-Saleh et al.'s (2012) findings. Furthermore, the results support J. Bargas-Avila et al.'s conclusion that providing feedback as users leave a field is not to be recommended.

Research question 1b, "When should feedback be provided in registration forms to minimise the number of revisits to input fields?", also found a straightforward answer. There were significantly fewer revisits when feedback was provided instantly, compared to when it was provided as the participant left a field. This reinforces the completion time conclusions, showing instant feedback to be an alternative to feedback after form submission. The fact that the analyses of completion time and number of revisits correspond also reinforces the use of revisits as a measure of comprehension and task difficulty of an interface, and as a measure of optimal scanpath disruption.

There is, however, a slight dissonance between the results from the statistical analysis and those from the thematic analysis. Research question 2, "When should feedback be provided in registration forms to facilitate a positive user experience?", therefore requires a more nuanced answer. Being given feedback instantly as the user types and being given feedback after form submission did not result in the same user experience. On the contrary, the users had a much more positive experience of instant feedback than of feedback after submission, while being given feedback as they left a field resulted in mixed experiences and sometimes confusion.

The users' experience of instant feedback contradicts the modal theory of J. Bargas-Avila et al. (2007), which states that users fill in forms through a completion mode followed by a revision mode: several users were not only positive towards the feedback, but were irritated by being given feedback after form submission. In Instant, there were only a few users who did not notice or act on the feedback when it was given, something that does not fit with the modal theory of form completion. The results indicate that instant feedback allows the modal form theory to be an iterative process rather than a sequential one. In an iterative version of the theory, the completion mode would last as long as it takes to enter information into a field, and the revision mode would begin when feedback is given. When the error has been corrected, a second iteration would begin with a new completion mode. The process would iterate until form completion.

Nonetheless, the qualitative results also show that some users had a negative experience of instant feedback, as they felt interrupted by it. This does not clash as much with J. Bargas-Avila et al.'s modal theory and suggests that the theory is, in this regard, valid. To investigate this, I suggest performing studies with a fourth feedback option. As users expressed that they liked being given feedback immediately but did not like being interrupted, and as many users had problems understanding the mechanics of being given feedback as they left a field, a middle ground might be to provide immediate feedback with a short temporal delay. With this kind of feedback, users would not be interrupted by the feedback, but they would still understand its mechanics. To my knowledge, this kind of feedback has not yet been researched and could therefore be a potential future study. It would also be interesting to conduct studies directly comparing the relationship between revisits to AOIs and scanpaths as measurements of optimal scanpath disruption.

5.2

Method

The prototypes created had variables that were aggregated from five websites. This resulted in a case where the "Facebook-inspired" context asked users to fill in a username, which is not usually asked for when signing up for a social network. As this made some of the users pause, it could have affected the results had the conditions not been properly counterbalanced.

Another potential problem with the prototypes was the requirements put on the users' input. It is hard to measure reactions to feedback if no feedback is needed. Therefore, the prototypes had built-in faults: for example, the first time users entered their username it was always rejected, and the demands on passwords differed from condition to condition. Some users noticed this, which could have affected both their gaze patterns and how "real" the prototypes felt to them. This might have changed their behaviour and affected the validity of the study.

A third potential problem with the prototypes is the requirements set in the different conditions. Specifically, the password field had different requirements depending on condition. The most unusual requirements were in the On Leave condition, which required users to use at least one special character in their password. The fact that these requirements were tied to the conditions opens the possibility that the users' experiences may have been influenced by the different requirements. This problem could also have affected the quantitative results, making On Leave more likely to cause participants problems. The results concerning On Leave were, however, in line with earlier research and can therefore be seen as valid, although there is a need to validate them further.

The validity of the study could also have been affected by the laboratory setting and the knowledge that the researcher knew exactly where the participants were looking. This, in combination with an uncomfortable work position (sitting very still), may very well have affected how the users interacted with the prototypes. This is, however, something that is hard to avoid when using eye tracking as the requirements on both the environment and the participants are quite high, not allowing windows and requiring the participants to not move at all.

The representativeness of the participants could be seen as problematic for this study, as gaze behaviour varies considerably with age. In addition, all of the participants studied at Linköping University, which makes it safe to assume that they regularly used computers. The same behaviours or opinions might therefore not appear in a differently aged population, or in a population with less computer experience, which in turn makes generalisation of the findings problematic.

As for the results, one could react to the low tracking ratio. This has, however, been adequately accounted for: videos of the participants were manually screened, and it was determined that most of the data loss was due to participants looking at the keyboard while entering information. With participants who have more computer experience or typing proficiency this problem could be mitigated, but at the expense of the sample being less representative of the whole population.

The final problem to discuss is the calibration and its effect on the validity of the revisits results. The average validation deviation across all conditions was 0.50 degrees in the x-direction and 0.48 degrees in the y-direction. In practice, this means a mean deviation of roughly 0.5 cm and 0.48 cm respectively in either direction at the centre of the screen. The AOIs spanned a total of 2 cm in the y-direction. A gaze at the centre of an AOI would therefore always be counted as an AOI hit, while a gaze at the lowest fourth of an AOI could be counted as a hit in the upper part of the AOI below. This is a general problem with AOIs, and a reason that AOIs should not be placed in too close proximity to each other. For this study, it means that some gazes at one AOI could have been recorded as hits on the AOI above or below. This would not, however, have affected the results much, as the revisits to AOIs for each condition were counted together; a revisit to one AOI would simply have been counted as a revisit to another.
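The conversion from visual angle to on-screen distance follows from basic trigonometry; the 65 cm viewing distance below is an assumed midpoint of the 60-70 cm range used in the study:

```python
import math

def visual_angle_to_cm(angle_deg, distance_cm):
    # On-screen extent subtended by a visual angle at a given viewing distance.
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

# 0.5 degrees at an assumed 65 cm viewing distance comes to about 0.57 cm,
# close to the approximate 0.5 cm figure used above.
print(round(visual_angle_to_cm(0.5, 65), 2))  # 0.57
```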

5.3

The Work in a Wider Context

I think it is reasonable to assume that the findings from the registration processes in this study may be translated to other web forms. A question one might ask, though, is whether web forms are going to be around much longer anyway. Registration as a concept is currently changing, with more websites using solutions like social login, where users can use already existing social media accounts to log into other websites. This does, however, give the social media websites a lot of information about the users, which some might disapprove of for privacy reasons, and to be able to use these accounts, users first need to register on the social media website. Nonetheless, other kinds of web forms are currently used in a number of situations that cannot be replaced by social login: when applying for a job, when doing your taxes (in some countries), or when conducting social, academic, or market research (Bergstrom & Schall, 2014). In these cases, and probably others as well, I believe that the findings from this study could help minimise users' negative experiences and therefore decrease the number of users who do not complete, or have a negative experience of, important forms.


6

Conclusion

To summarise: providing feedback instantly as the users type, or after form submission, resulted in significantly shorter completion times than providing feedback as they left an input field. No significant difference in completion time was found between providing feedback instantly and providing feedback after form submission. There were also significantly fewer revisits to AOIs when providing feedback instantly compared to providing feedback as users left an input field. The similarities in these findings suggest that revisits to AOIs may be used as a measure of optimal scanpath disruption. Users did, however, experience the conditions differently. Being given feedback immediately resulted in mostly positive experiences, while being given feedback as they left a field resulted in mixed experiences, and being given feedback after submitting a form resulted in mostly negative experiences.

The results of the study indicate that providing instant feedback as the users type may be an equally good, or better, alternative to J. Bargas-Avila et al.'s (2007) recommendation that feedback should be given after submission of a form. Based on the thematic analysis, providing feedback instantly could improve users' experiences of web forms compared to feedback after form submission and feedback after leaving a field. There is, however, a need to validate these conclusions in additional studies, controlling for differences in the requirements put on different input variables.


References

Alexa.com. (2017). Alexa top sites in Sweden. Retrieved 2017-03-28, from http://www.alexa.com/topsites/countries/SE

Al-Saleh, M., Al-Wabil, A., Al-Attas, E., Al-Abdulkarim, A., Chaurasia, M., & Alfaifi, R. (2012). Inline immediate feedback in Arabic web forms: An eye tracking study of transactional tasks. 2012 International Conference on Innovations in Information Technology, IIT 2012, 333–338. doi: 10.1109/INNOVATIONS.2012.6207761

Arvola, M. (2014). Interaktionsdesign och UX: om att skapa en god användarupplevelse. Lund, Sweden: Studentlitteratur.

Arvola, M., Lundberg, J., & Holmlid, S. (2010). Analysis of precedent designs: Competitive analysis meets genre analysis. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries (pp. 23–31).

Bargas-Avila, J., Brenzikofer, O., Roth, S., Tuch, A., Orsini, S., & Opwis, K. (2010). Simple but crucial user interfaces in the World Wide Web: Introducing 20 guidelines for usable web form design. User Interfaces (May), 1–11. doi: 10.5772/9500

Bargas-Avila, J., & Oberholzer, G. (2003). Online form validation: Don’t show errors right away. Human–Computer Interaction . . . (c), 848–851.

Bargas-Avila, J., Oberholzer, G., Schmutz, P., de Vito, M., & Opwis, K. (2007). Usable error message presentation in the World Wide Web: Do not show errors right away. Interacting with Computers, 19(3), 330–341. doi: 10.1016/j.intcom.2007.01.003

Bargas-Avila, J. A., Orsini, S., Piosczyk, H., Urwyler, D., & Opwis, K. (2011, 1). Enhancing online forms: Use format specifications for fields with format restrictions to help respondents. Interacting with Computers, 23(1), 33–39. doi: 10.1016/j.intcom.2010.08.001

Bergstrom, J. R., & Schall, A. J. (2014). Eye tracking in user experience design. Waltham, Massachusetts: Elsevier. doi: 10.1016/C2012-0-06867-6

Bojko, A. (2013). Eye Tracking the User Experience: A Practical Guide to Research. New York: Rosenfeld Media.

Braun, V., & Clarke, V. (2006, 1). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. doi: 10.1191/1478088706qp063oa

Eraslan, S., Yesilada, Y., & Harper, S. (2015). Eye tracking scanpath analysis techniques on web pages: A survey, evaluation and comparison. Journal of Eye Movement Research.

Field, A. (2013). Discovering statistics using IBM SPSS statistics (3rd ed.). London: Sage.

Ghaoui, C. (2006). Encyclopedia of human computer interaction. Idea Group Reference.

Goldberg, J. H., & Kotval, X. P. (1999). Computer interface evaluation using eye movements: methods and constructs. International Journal of Industrial Ergonomics, 24(6), 631–645. doi: 10.1016/S0169-8141(98)00068-7

Goodwin, K. (2009). Designing for the digital Age. Indianapolis, Indiana: Wiley. doi: 10.1075/ idj.19.3.09ehr

Graham, L. (2008). Gestalt Theory in Interactive Media Design. Journal of Humanities & Social Sciences, 2(1), 1–12. doi: 10.1145/1754393.1754394

Hänggli, P. (2012). Testing 20 guidelines for usable web forms: How do they perform when used together? (Unpublished doctoral dissertation). University of Basel.

Holmqvist, K., Nyström, N., Andersson, R., Dewhurst, R., Jarodzka, H., & van de Weijer, J. (2011). Eye-tracking: A comprehensive guide to methods and measures. Oxford University Press.

Howitt, D. (2010). Introduction to qualitative methods in psychology. Harlow: Prentice Hall.

Jarrett, C., & Gaffney, G. (2009). Forms that work: Designing web forms for usability. Burlington, Massachusetts: Morgan Kaufmann.

Kennedy, A. (2000). Reading as a perceptual process. Elsevier.

Nielsen, J., & Pernice, K. (2010). Eyetracking web usability. Berkeley, California: New Riders. Norman, D. A. (2013). The design of everyday things. New York: Basic Books.

Pashler, H. (1998). Attention. Hove, England: Psychology Press/Erlbaum (UK).

Penzo, M. (2006). Label Placement in Forms. Retrieved from http://www.uxmatters.com/ mt/archives/2006/07/label-placement-in-forms.php

Rayner, K., Chace, K. H., Slattery, T. J., & Ashby, J. (2006, 7). Eye Movements as Reflections of Comprehension Processes in Reading. Scientific Studies of Reading, 10(3), 241–255. doi: 10.1207/s1532799xssr1003{\_}3

Tullis, T. T., & Albert, B. W. (2013). Measuring the user experience : collecting, analyzing, and presenting usability metrics. Elsevier.

Wroblewski, L. (2008). Web form design : filling in the blanks. New York: Rosenfeld Media.

(39)


Bachelor's thesis: Investigating forms

In this study, you will be presented with three websites where you will be asked to sign up for an account. Eye tracking data will be collected during the session. To make sure that the eye tracking data is as accurate as possible, try to sit still, moving your eyes rather than moving your head. If you want to speak to the researcher, please keep looking at the screen. After you have performed a trial calibration, the experiment will be conducted as follows:

• You are shown an introductory video about the website
• An instruction will be shown, asking you to sign up with your own information and not to use any password that you have used before
• A calibration is performed where you will be asked to follow a dot with your eyes
• The registration process of the website will be shown and you will have as long as you need to sign up

The process will then be repeated for the remaining two websites.

If you, at any point, no longer wish to take part in the project, notify the researcher and you will be withdrawn from the study. Please remember that it is the websites that are being tested, and not you as a participant.

How old are you?

Gender:

Do you have normal vision at 60cm distance?

Do you usually look down at the keyboard when typing?


Appendix B

Tables B.1, B.2, and B.3 show the number of participants that were coded into the discussed themes. Note that some participants were coded to several themes and that the fairest picture of the relevance of the different themes is given in the results chapter (Section 4.2).

Table B.1: Number of participants that were coded to each theme in Instant.

Theme Number of participants

Positive 25

Negative 3

Not used as intended 6

Table B.2: Number of participants that were coded to each theme in On Leave.

Theme Number of participants

Irritating 7

Straightforward 3

Did not understand mechanics 5

No difference 6

Table B.3: Number of participants that were coded to each theme in After.

Theme Number of participants

Did not understand mechanics 2
Dissatisfied facial expression 3
