

Microinteractions

The effects on web-based forms

Main subject area: Informatics

Author: Albin Lagerquist & Andreas Samuelsson
Supervisor: Martin Lindh


This final thesis has been carried out at the School of Engineering at Jönköping University within informatics. The authors are responsible for the presented opinions, conclusions and results.

Examiner: Ulf Seigerroth
Supervisor: Martin Lindh

Scope: 15 hp


Abstract

Introduction

In an age where users are becoming more dependent on digital tools and digital literacy, there is an increasing demand for efficient and approachable communication between users and interfaces. Microinteractions are functional and interactive details of a specific product whose purpose is to enhance its usability. A microinteraction can be a subtle or prominent detail, e.g. an animation or transition, solely dedicated to providing the user with feedback in response to a performed action.

Purpose

The purpose of this study was to examine how microinteractions influence users’ ability to complete web-based forms according to the data requirements and through this also provide an understanding of how the user experience is affected.

Method

To achieve the purpose, a qualitative case study was conducted which consisted of two components, user tests and follow-up interviews. These were conducted with 10 participants currently studying at Jönköping University. The collected data was then put through a comparative analysis with a deductive logic, which consisted of a thematic analysis with a latent approach.

Results

The result of this study suggests that microinteractions can improve users’ ability to complete web-based forms according to data requirements and, as such, improve the perceived experience. The vast majority of the participants considered the dynamic presentation of information more approachable, which supported our assumptions.

Limitations

Due to the small sample size, the generalisability of the results is limited. Since the research was performed on a relatively small number of participants, it should rather be seen as a framework and an inspiration for further research to validate the results on a larger scale.

Keywords

Microinteractions, user interface, user experience, usability, human-computer interaction.


Table of contents

1. Introduction
1.1 Background
1.2 Problem statement
1.3 Purpose and research questions
1.4 Scope and limitations
1.5 Disposition
2. Method and implementation
2.1 Case study
2.2 Participants
2.3 Data collection
2.3.1 User tests
2.3.2 Test structure
2.3.2.1 Test with microinteractions
2.3.2.2 Test without microinteractions
2.3.3 Interviews
2.3.4 Interview questions
2.4 Data analysis
2.4.1 User test analysis
2.4.2 Interview analysis
2.5 Validity and reliability
2.6 Considerations
3. Theoretical framework
3.1 Structure of microinteractions
3.2 User experience
3.3 User interface
3.4 Usability
4. Results
4.1 Presentation of data collected
4.1.1 Test variant without microinteractions & interview
4.1.2 Test variant with microinteractions & interview
4.2 Result analysis
5. Discussion
5.2 Method discussion
6. Conclusions and further research
6.1 Conclusions
6.2 Implications
6.3 Further research


1. Introduction

This chapter presents the background of the thesis subject and the area that it addresses. Further, the purpose and research questions are presented together with the scope and limitations of the study. Lastly, a disposition outline of the thesis is drawn.

1.1 Background

As the term implies, microinteractions are various “micro”, or small, interactions that a user has with a product. Saffer (2013), who describes microinteractions in his book, defines them as follows: “A microinteraction is a contained product moment that revolves around a single use case.... Every time you change a setting, sync your data or devices, set an alarm, pick a password, turn on an appliance, log in, set a status message, or favorite or Like something, you are engaging with a microinteraction.” (p. 2).

Microinteractions can therefore refer to a vast number of different interactions. The subject of our research, however, is more specifically the microinteractions found on the web. These have become increasingly common in recent years as their influence on the user experience has become more widely recognized (Joyce, 2018). In fact, you most likely encounter them daily even if you do not notice them. Often, the only time they become noticeable is when something goes wrong.

Take the example brought up by Saffer (2013), in which a member of the audience at a classical music concert had his iPhone alarm go off and interrupt the performance, an incident that the New York Times (Wakin, 2012) wrote about. The person in question had just gotten a new iPhone and had, prior to the show, toggled on the mute switch, a microinteraction in itself, thinking it would mute all sound. In reality, the switch only mutes sound not initiated by the user, and an active alarm counts as a user-initiated interaction according to the set rules of the iPhone (Apple, 2020). As such, the alarm went off, infuriating the whole concert hall. This is one example of a microinteraction that would otherwise have gone largely unnoticed, but when something goes wrong like this, people start noticing. The same goes for most microinteractions, including the ones found on the web.

Microinteractions have four different components, one of which is feedback (Saffer, 2013). Normally, when you do something, you expect something to happen in return. This comes down to human psychology (Cooper et al., 2014), and you might recognize it from websites that do not provide instant feedback when you perform a task, like clicking a button. You might have pressed a button only to be kept waiting for a second or two before anything happens and, in that time, clicked it a few more times, ending up with far more than you hoped for. Such mishaps could easily be prevented by implementing microinteractions (Joyce, 2018). A small piece of feedback confirming that you have successfully pressed the button, i.e., performed the task you aimed to do, prevents you from doubting whether you actually pressed it.

The benefits of microinteractions, when used correctly, have been demonstrated before (Hernia, 2020). However, they have mostly been demonstrated in broad terms or in limited areas such as specific applications and use cases. Given the number of use cases, many areas remain generalized and relatively untouched. One such area is web-based forms and how microinteractions influence users’ ability to complete them. Web-based forms largely resemble regular physical forms in that they require some type of information from the user. They are an important part of many websites, as this is where much of the users’ personal data is collected (Malheiros & Preibusch, 2013). By studying the effects of microinteractions in this area, more in-depth insights are gained, allowing the use of these practices to be validated.

1.2 Problem statement

Even if a user interface or form is well designed, it is important to provide feedback to the user to guide them and let them know that their actions have been acknowledged (Boyl, 2019). As mentioned in [1.1 Background], we as humans expect interactions to go both ways since, most of the time, when we do something there is a response to our actions. Something happens. In web-based forms, microinteractions can be used at multiple points throughout, instead of only giving feedback at the end of the form. This way, the user can be sure that each part is correctly filled in before moving on to the next input, which eliminates the frustration that can arise when no feedback is received until the very end. These small clues or nudges along the way result in a better user experience, mainly because they prevent errors and frustration (Saffer, 2013). The same principles can be applied to many different tasks that the user aims to accomplish. The small amount of research that touches upon the area mainly states that such principles improve the user experience in general but does not specify how in any depth. This is the gap in knowledge that this paper sought to provide insight into.

1.3 Purpose and research questions

Expanding upon the gap in knowledge mentioned above, this paper aims to provide more in-depth knowledge about how microinteractions influence users’ ability to complete web-based forms according to data requirements, where data requirement refers to the format in which data needs to be entered. Through this, the study also aims to provide a better understanding of the effects that such microinteractions have on the user experience when completing the task. This part of the process is of interest since the influence on users’ ability might not be correlated with their experience. Such an understanding is therefore important to better allow user interface and web designers to decide whether to implement such microinteractions, and it can act as a pointer as to whether the implementation is worth the time, cost and effort. To achieve the purpose, it has been formulated into two research questions, where the first focuses on the users’ ability and the second on their experience:

1. How do microinteractions influence users' ability to complete web-based forms according to data requirements?

2. How do microinteractions affect the user experience of such forms?

1.4 Scope and limitations

As mentioned in [1.1 Background], microinteractions are found in many different instances and can be implemented in a vast number of products, where some are even built entirely around one specific microinteraction (Saffer, 2013). Because of this, the study is delimited to exclusively cover the use of microinteractions found in web-based forms. Further, the study only investigates how microinteractions influence users’ ability to complete such forms according to specific data requirements and how they affect their experience of doing so. This also means that microinteractions in general, or those designed for other areas of the user interface, are not studied, and neither is the general visual design of the interface. There are also other kinds of forms used on mediums other than the web, such as in mobile applications; such forms are not studied either. Web-based forms can also be accessed from various types of devices. Therefore, the study is delimited to forms accessed on a desktop and not on touch-enabled devices, such as smartphones and tablets. This is because microinteractions might differ in application across devices (Saffer, 2013), which would have required the study to cover multiple devices, whereas the purpose of the study is delimited to providing more specific in-depth insight into the effects of microinteractions on web-based forms accessed on desktop. The results of this paper could, however, act as a hint for other areas of user interfaces, processes and devices, but should not be considered conclusive there, though they could provide insight to inspire further research.


Due to time limitations and the confined scope of a bachelor thesis, the paper does not delve deeply into the theories of microinteractions and user experience themselves. It does, however, touch upon such theories, limited to their fundamentals, because briefly explaining them is of value when building upon their effects as applied in the specific study case.

1.5 Disposition

The following part of the paper consists of the method and implementation of the study, where the approach is explained together with the data collection and analysis as well as the validity, reliability and considerations. Afterwards, the theoretical framework is laid out, followed by a presentation of the results and a discussion. Lastly, a conclusion is made.


2. Method and implementation

This chapter explains the methodology used to answer the research questions. The approach is to use the most comprehensive and suitable way to extract information regarding microinteractions. The chapter is divided into three main parts: an introduction where the methods used are explained along with how they are suited to answer the research questions, a more detailed explanation of the method approach itself, and lastly a closer look at the structure and design of the components used in the methods.

2.1 Case study

To collect data regarding the subject and answer the research questions, we have utilized a qualitative method in the form of a case study. The qualitative case study consisted of a remote, observed user test followed by a semi-structured interview. This is a suitable approach when a topic does not have a lot of previous research and needs to be understood better (Creswell, 2009), which was the case for the first research question on how microinteractions can influence users' ability to complete web-based forms according to data requirements. A case study is also suitable since it observes a specific case thoroughly within a task bound by time (Leavy, 2014). As the purpose was also to understand how microinteractions influence the user experience regarding such forms, a case study provides a closer look at how different individuals perceive and receive this cognitive feedback (Creswell, 2009). Since another aim was to identify changes and any potential impacts among the participants as the influence of microinteractions was investigated, a case study was also suitable in that it allowed the researchers to identify patterns and sequences in the user experience (Baškarada, 2014; Jorgensen, 1989). A quantitative method would not have allowed the researchers to capture psychological aspects, such as more in-depth thoughts and feelings among the participants, to the same extent and would instead have produced more statistical generalization (Baškarada, 2014; Jorgensen, 1989). It was therefore more beneficial to utilize a qualitative method, like a case study, which better achieves the task of collecting the participants’ thoughts and impressions, which were then condensed into patterns and recurring themes.

The two components of the case study are a user test and a follow-up interview. In the user test, the participant was given a certain task to complete whilst being observed doing so. This was a suitable approach for testing usability, such as in the case of web-based forms, and it also allowed for observation of the users’ behavior and for note-taking (Creswell, 2009). Furthermore, user tests are beneficial for measuring and monitoring how well a user can complete a particular task as well as for highlighting what types of problems they might encounter (Cooper et al., 2014).

The interviews were intended to further enlighten the researchers about the participants’ experience and perception of the task performed. As Barnum (2020) states, interviews are a great way to gain further knowledge about participants’ experience. They also allowed the researchers to be adaptive and in control of the questioning.

After the user tests and interviews, the results were analysed, compared and further interpreted whilst looking for recurring themes, behaviors, and deviations. Through this, the data collected through the case study allowed the research questions to be answered.

2.2 Participants

Due to the topic and limitations of this paper, the researchers chose to focus on students from Jönköping University (JU). The participants for the case study were chosen irrespective of gender and nationality but within the age group of 20-30 years. The decision to conduct the research solely on JU students in this age group was a matter of selecting participants who were familiar with digital processes and as such had an established digital literacy. These assumptions were also later validated through an added question in the interviews. The chosen target group was initially also a matter of logistics, since the initial hope was to conduct the user tests at a physical location where JU could provide appropriate lab environments for the participants to perform the tests in; conducting tests in such an environment would have allowed for greater control (Barnum, 2020). However, considering the, at the time, still ongoing pandemic, these plans were reconsidered. Instead, it was decided that the study would take place entirely online, through the digital platform Discord.

Using participants who felt comfortable browsing on computers was significant because the test was evaluated through parameters that could vary depending on participants’ digital literacy and skill levels. Furthermore, the young audience also increased the probability of them being comfortable using the web.

2.3 Data collection

Case studies normally imply a qualitative method which, as mentioned, is the case here as well (Rebolj, 2013). To further build upon this, user tests and in-depth interviews also fall into this methodology. As such, it is necessary to mention that qualitative research done in environments where the participant feels more comfortable can be beneficial, like environments that they normally reside in (Creswell, 2009). This is mainly due to an unwillingness to influence the participants and make them act out of the ordinary, which in turn could result in response bias since human subjects rarely respond passively to stimuli (Orne, 1962). Considering this and the situation at the time, with restrictions due to the pandemic, together with time constraints, our tests were carried out entirely online. Observed user tests and interviews are commonly carried out in person, and thus some adjustments had to be made. Firstly, it was problematic in the sense that we could not have total control of the environment, like that of a test lab which could have been used under other circumstances. To counteract this, certain instructions were given to limit potential distractions. The participants were told to put their phones on mute and disable notifications for the duration of the tests and interviews. By providing such instructions, some potential interruptions were prevented, which is one of the benefits normally associated with being in a lab environment (Barnum, 2020). This enabled us to maintain some control of the environment while the participants remained in a familiar environment that they normally reside in, which has been shown to be a beneficial context for experiments (Merriman et al., 2016). This also meant that there were additional requirements to be met by the participants, like having access to a computer of their own as well as the software used for the communication. Taking this into consideration, some extra time had to be put aside to find the right participants, mentioned in [2.2 Participants], as well as to find the best-suited software, both in terms of its suitability for the tests and interviews and in terms of its availability. Given the instructions provided and the carefully selected participants, the case study was made possible and could provide sufficiently reliable results to answer the research questions given the circumstances.

2.3.1 User tests

As for the user test, it was, as mentioned above, conducted entirely online. Firstly, a voice chat was set up between the moderators and the participants where initial instructions were given. Discord was used because of its broad availability and functions that met the requirements for the test. The main requirement was the ability for the participants to share their screen which they, after the initial instructions, were told to do. They were then tasked with going onto a website specifically created for the test where they were presented with a sign-up form. Their task was to complete and fill in the form like they normally would when signing up for a service on the web.


The form consisted of the following input fields:
● First Name
● Last Name
● E-mail
● Phone
● Date of birth
● Password
● Confirm Password

Upon completion of the form, i.e. the sign-up process, the user is tasked with pressing a sign-up button, at which point the user test ends.

For the user tests to provide sufficient data, we created two variants of the form. This was a necessity since the purpose of the research was to investigate any influence microinteractions might have on the users’ ability to complete a task and on their experience whilst doing so. Thus, the need for two variants arose. Both variants contain the same inputs, and the only difference between them is the use of microinteractions. The first one does not utilize microinteractions at all and is hence static, whereas the second one is more dynamic thanks to the implementation of microinteractions. This allows for a comparative analysis to be made, which is important to bring insight into how the users’ ability differs. The tests could arguably have been longer and more complex, but due to the scope and limitations of the research we deemed a sign-up form process to provide sufficient data for the analysis.

When deciding how many participants were needed for the data to be sufficient, reliable and possible to draw conclusions from, we deemed a total of ten participants to be enough. These were then split into two groups of five, where each group conducted the test variants in a different order, which is further expanded upon below; this has also been shown to be a preferred number in terms of the benefit/cost ratio (Nielsen & Landauer, 1993). There is, however, always a risk of biased results because of the small number of participants (Cope, Clifford, French & Gillespie, 2016), even with preventative measures taken. But through carefully selected participants, and with results in line with what was expected as well as with the findings of Nielsen (2000), we are confident that the reliability of the tests is satisfactory.

Furthermore, the two different variants of the test brought up the question of whether the participants should complete only one or both. After some discussion, we concluded that they should complete both variants. It is important to mention, however, that only the first variant taken was used for the thematic analysis and compared using a constant comparative method. This decision was mainly due to the two variants being very similar: by completing one, the results for the second would have been greatly affected, which would most likely have affected the reliability. By comparing only the first variant taken, we minimized the risk of the participants learning the purpose of the test, which lies in natural human behavior, thus allowing for greater validity of the tests (Barnum, 2020). Had they been aware that the microinteractions, or the lack thereof, were being studied, they would have paid extra attention to them and possibly acted out of the ordinary, which in turn would have made the results less reliable. Furthermore, given the length of the tests and the nature of microinteractions, the purpose of the first test variant taken was not obvious to the participants. This phenomenon is called inattentional blindness and implies that users are focused on the given task, which here is to fill out the fields in the form, making them overlook small details such as the microinteractions (Jensen, Yao, Street, & Simons, 2011). As for the second test variant taken, it was only used to enable more in-depth questions in the follow-up interview.

Before commencing the user tests, a scenario was presented to the participants to introduce them to the task, which was as follows: “You want to sign-up for an online service and thus are required to fill out a sign-up form. Your task is now to complete this form according to the data requirements specified, thus signing up for the service.”

As the test structure itself was fairly straightforward, we concluded that there was no need for any rules or tips to be presented before or during the test. Moreover, the participants were selected under the assumption that they possessed good digital literacy. The participants were instructed to go ahead and complete the presented task right away after being introduced to it.

2.3.2 Test structure

When designing the test, we built it purely in code to allow for greater flexibility and control. Since the tests were also carried out online and remotely, we had to make sure they could be hosted and made accessible to the participants. Hence, after some research we concluded that a suitable way would be to use the online code editor CodePen. This platform allows for coding directly on the web as well as direct hosting through its website, which could be shared with the participants through a link. This link was provided to the participants prior to the test and after the scenario had been explained. Depending on which test the participants were selected to do first, they were given the corresponding link. The overall design of the tests was kept minimal so that key aspects could be studied without distractions or effects caused by different design choices. Instead, the goal was to keep the design as simple as possible to allow for greater control, with the only difference between the two variants being the use or lack of microinteractions. Moreover, for our comparative analysis we had to make sure both variants contained the same elements and requirements. The only difference was how the requirements were presented: either as feedback resulting from a microinteraction or as static text.

As for the elements of interest, which were the input fields, we used the most common ones associated with web-based sign-up forms. However, due to the fairly basic structure of the test, we had to make sure that the inputs were not so straightforward that the participants could complete the form without any hiccups. Hence, we included some less common data requirements. The input fields of interest were mainly phone number, date of birth and password. These fields all had some special data requirements and needed to be entered in specific formats, as can be seen below:

● Phone number: +46XX-XXX-XXXX
● Date of birth: YYYY-MM-DD
● Password: had to contain a lowercase letter, a capital (uppercase) letter and a number, and be at least 8 characters long.

These fields were especially of interest due to the requirements and as such were the parts of the test where we expected to see the most difference in terms of the participants’ ability and experience when the two variants were compared. As previously mentioned, the required formats were the same between the two test variants, with the only difference being the presentation of the data requirements.
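As an illustration only, the three requirements above can be captured as validation rules along the lines of the following sketch. The regular expressions and names here are our own assumptions based on the formats listed, not the actual code used in the tests.

// Hypothetical validation rules derived from the formats listed above;
// the actual patterns used in the CodePen tests may have differed.
const rules: Record<string, RegExp> = {
  phone: /^\+46\d{2}-\d{3}-\d{4}$/,                  // +46XX-XXX-XXXX
  dateOfBirth: /^\d{4}-\d{2}-\d{2}$/,                // YYYY-MM-DD
  password: /^(?=.*[a-z])(?=.*[A-Z])(?=.*\d).{8,}$/, // lowercase, uppercase, digit, min. 8 characters
};

function meetsRequirement(field: keyof typeof rules, value: string): boolean {
  return rules[field].test(value);
}

// Examples:
meetsRequirement("phone", "+4670-123-4567");   // true
meetsRequirement("dateOfBirth", "1998-02-28"); // true
meetsRequirement("password", "abc123");        // false (no uppercase letter, too short)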

2.3.2.1 Test with microinteractions

For the dynamic test utilizing microinteractions, the data requirements were presented dynamically at each input field, triggered once an input was focused by the participant. There was also placeholder text within each field to give clues as to the required formats. Furthermore, colored and direct feedback was given at each input in this variant to let the participant know whether that input was filled in correctly or not. The colors followed convention: green when the input was correct and yellow when some additional attention was required. There were also small microinteractions used within the format areas to let the participant know that they had met the data requirement; this took the form of a red cross turning into a green checkmark. Lastly, some additional microinteractions comprising different stages were added to the sign-up button. After it is clicked, the text changes to Submitting and the checkmark turns into a spinning cog to provide additional feedback that something is loading, which is normally done when clicking a submit button (Dziuba, 2019). After two seconds of loading, the button changes again and turns green to let the participant know that they have successfully completed the test. See the figures below for screens from the test variant with microinteractions.
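The following minimal sketch illustrates how this kind of direct, per-field feedback can be wired up. The element ids, class names and validation pattern are assumptions made for illustration and are not the actual test code.

// Sketch of direct, per-field feedback; ids, classes and the pattern are hypothetical.
function attachFieldFeedback(
  input: HTMLInputElement,
  hint: HTMLElement,
  isValid: (value: string) => boolean
): void {
  // Trigger: focusing the field reveals its data requirement.
  input.addEventListener("focus", () => hint.classList.add("visible"));

  // Feedback: each keystroke re-checks the rule and updates colour and icon.
  input.addEventListener("input", () => {
    const ok = isValid(input.value);
    input.classList.toggle("valid", ok);       // green border when correct
    input.classList.toggle("attention", !ok);  // yellow border when more attention is needed
    hint.textContent = ok ? "✓ Requirement met" : "✗ " + (hint.dataset.requirement ?? "");
  });
}

// Hypothetical usage for the phone field:
const phoneInput = document.querySelector<HTMLInputElement>("#phone");
const phoneHint = document.querySelector<HTMLElement>("#phone-hint");
if (phoneInput && phoneHint) {
  attachFieldFeedback(phoneInput, phoneHint, v => /^\+46\d{2}-\d{3}-\d{4}$/.test(v));
}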

Figure 1 Test variant with microinteractions overview.


Figure 3 Focused password input field (test variant with microinteractions).


2.3.2.2 Test without microinteractions

As for the second variant of the test, it did not contain any microinteractions, in order to allow for the comparative analysis to be made. Hence, the data requirements had to be presented differently than in the first variant. Where the first test variant presented the required formats dynamically, this one displayed them statically in the form of an informational box beside the form, at the top. This meant that instead of only providing the required format for the input currently in focus, all data requirements were displayed at all times in a fixed location beside the form. By using this approach, we made sure that the same information was available to participants of both tests, with the only difference being the way it was presented, which in the other variant was through the use of microinteractions. The placeholder text within the inputs was also removed to make the second variant, without microinteractions, even more static. In the absence of microinteractions there was no direct feedback when entering data into the inputs. Instead, this feedback was only given once the sign-up button at the end had been triggered. At this point, the inputs that had been correctly filled in according to the requirements turned green, whereas the incorrect ones turned yellow. We consciously used the same conventional colors in both tests to allow for similar experiences. The same colors applied to the sign-up button at the end. When pressed, it briefly changes to yellow if any field is incorrect, and if all fields are correct it turns green. The yellow aims to let the participant know that there are one or more errors in the form. The green illustrates that the form has been successfully filled in and that the test is completed. This can all be seen in the figures shown below.
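By contrast, the static variant can be sketched roughly as below, where validation only runs once the sign-up button is pressed. The selectors and rule set are again illustrative assumptions rather than the actual test code.

// Sketch of submit-only validation for the static variant (names are hypothetical).
function validateOnSubmit(
  form: HTMLFormElement,
  rules: Record<string, (value: string) => boolean>
): boolean {
  let allValid = true;
  for (const [name, check] of Object.entries(rules)) {
    const field = form.elements.namedItem(name) as HTMLInputElement | null;
    if (!field) continue;
    const ok = check(field.value);
    field.classList.toggle("valid", ok);       // green when correct
    field.classList.toggle("attention", !ok);  // yellow when incorrect
    if (!ok) allValid = false;
  }
  return allValid; // the caller turns the sign-up button green, or briefly yellow, based on this
}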


Figure 6 Invalid inputs (Test variant without microinteractions).

Figure 7 Successfully completed test variant without microinteractions.

2.3.3 Interviews

After conducting the user tests, interviews were held to gain further knowledge of the participants’ experience and perception. These interviews were semi-structured with open-ended questions to create a dialogue between the interviewer and the participants, enabling follow-up questions where the participants were asked to elaborate on their answers (Cope, Clifford, French & Gillespie, 2016). This gave the researchers the ability to explore the participants’ thoughts and feelings and to collect data in order to look for recurring themes and differences, so as to answer the second research question regarding how microinteractions affect the user experience, given its comparative nature. The purpose of having a semi-structured interview rather than an unstructured one was to make sure that the data gathered from different interviews was comparable while still maintaining a flexible conversation with the opportunity for follow-up thoughts and feelings among participants, which could vary and be difficult to predict. By still having the control of a semi-structured interview, the researchers could also ensure that the conversation maintained focus on the topic, that relevant information was gathered and that key questions were answered.

As the research consisted of two tests, one static which lacked the use of microinteractions, and one more dynamic utilizing them, the interviews had some minor variations of questions depending on which test variant the participant took first.

To further build upon this, we decided to hold an interview right after the first test had been taken, to get a better understanding of the participants’ initial experience, followed by a second interview conducted after the second test was taken. This was done since one of the main purposes of the tests was to enable a comparative analysis between the two. Hence, we decided that an interview after each test was the better option compared to having only one interview conducted after both test variants were taken, which could have resulted in biased answers due to the participants figuring out the purpose of the study. Through the first interview, the main proportion of the data was gathered, while the second interview provided valuable supplementary data to draw conclusions from and to validate findings if, for any reason, the first interview’s data was not sufficient to answer our research question about how the user experience was affected.

2.3.4 Interview questions

In this subchapter we briefly explain each question used in the interviews and the reasoning behind them. The questions asked after each test started in the same way but then branched off with different goals. Below, the interview questions are highlighted and grouped depending on which test they were asked after. Afterwards, the questions are explained in chronological order.

Questions to be asked after the first test variant taken:
1. How did you experience this test?
2. Did the process feel familiar? If yes, in what way?
3. How often do you see yourself using registration forms?
4. Did you perceive any of the input fields as more or less difficult? If yes, which and in what way? If not, were they all equally difficult?

Questions to be asked after the second test variant taken:
1. How did you experience this test?
2. How was your experience compared between the two variants? In what way?
3. Could you identify any specific differences between the tests? If yes, which ones?
4. What do you think the test was about?
5. If you were to make any changes, what would they be?

1. How did you experience this test? This was the initial question asked and one of great value due to its relevance to our second research question, which was as follows: How do microinteractions affect the user experience of such forms? As such, this question aims to find out the participants’ initial thoughts and overall experience of the test.

2. Did the process feel familiar? If yes, in what way? Where the first question was directly related to providing insights for our second research question, this one aimed to get a better understanding of the participants’ prior experience of web-based forms.

3. How often do you see yourself using registration forms? Related to the prior question, this one aims to find out further information regarding the participants’ experience and familiarity with such forms. This data increased the validity of the research in terms of the participants matching the expectations of prior experience under which the target audience was selected.

4. Did you perceive any of the input fields as more or less difficult? If yes, which and in what way? This one was included to receive more in-depth information regarding their perceived experience. This was an area of interest due to its nature in terms of positive and negative experiences, which relates directly to answering our research questions and to the comparative nature of the analysis.

5. How did you experience this test? Asked again after the second test variant had been conducted, to hear the participants’ initial experience of this one as well.

6. How was your experience compared between the two variants? In what way? After hearing their initial experiences of the second variant, a follow-up question was asked regarding their experience compared between the two variants. Finding out some of their own thoughts and comparisons was of interest in order to validate and confirm some of the comparisons we made ourselves.

7. Could you identify any specific differences between the tests? If yes, which ones? Whereas the last question focused on the participants' experience compared between the two tests, this one builds upon that in a more specific sense to provide valuable insight as to how perceptive they were of the details in the tests.

8. What do you think the test was about? As the interview drew to its end, this question was intended to provide valuable insights related to validity, namely whether the participants had figured out or become aware of what was being studied in the tests. As stated by Nichols & Maner (2008), if that were the case it could result in biased actions and answers, commonly referred to as demand characteristics and the good-subject effect. This question was therefore included as an attempt to make sure bias was avoided and to increase validity.

9. If you were to make any changes, what would they be? The last question of the interview was intended to get further understanding as to what the participants disliked about the tests. As such, we indirectly received further valuable data regarding their perceived experiences.

2.4 Data analysis

As stated by Thorne (2000), data analysis is the most complex phase of qualitative research and thus needs to be carefully considered. To be able to analyze the qualitative data gathered in an adequate way, we started off with deductive logic. Since microinteractions in general usually enhance and improve the user experience (Boyl, 2019), our initial assumption was that the result of this data analysis would show that microinteractions do so in the case of web-based forms as well. To validate and analyze the data, we employed a thematic analysis with a latent approach, which is well suited when doing a comparative analysis of empirical data (Braun & Clarke, 2006). Throughout the analysis we utilized the constant comparative method, which, as suggested by Glaser and Strauss (1967), can be applied to research of many different sizes. By using this method we were able to compare the test results and observations of each participant, as well as their answers from the interview, in a systematic and categorical way, which is essential when using a thematic analysis in a qualitative study like ours (Alhojailan, 2012). We were then able to expand upon the deductive approach and conclude whether our initial assumptions were validated or not. The observed and recorded tests were labeled and categorized by which of the variants was used. This was done mainly to facilitate our approach and allowed the data to be sorted in a similar way. When presented, themes and patterns were identified, which were then used to further code the data. Furthermore, an additional benefit of a thematic analysis is that outliers can be identified (Alhojailan, 2012).

2.4.1 User test analysis

Before commencing the evaluation and analysis of the user tests, we predefined what we were looking for in order to facilitate the analysis and allow a more focused review. However, attention was still allowed to be broad enough to pick up unexpected occurrences. After the areas of focus had been defined, they were labeled and categorized. The test sessions were then reviewed one by one, and notes were taken of the actions taken, the time elapsed and any issues encountered. These notes were then added to defined categories such as errors, time elapsed and expressed experience.

The error categories were further divided into two parts: the number of direct errors, which refers to input fields left incorrect, and the number of rewrites, which refers to the number of times an input field had to be rewritten in order to get it right without leaving it. This divide was needed since rewriting inputted text does not always equal an error but may instead only demonstrate that a participant had a slight issue with the specific field. Furthermore, these two categories were connected to each input field throughout the tests, together with each participant encountering them, which allowed for a more detailed analysis. For example, any errors were noted at each input field where they occurred for each participant. All of these notes for all participants were then summarized into broader categories defined by the two variants of the tests. As such, the summarized totals of notes for both test variants and their participants could be compared, which was essential in order to draw any conclusions from the user tests. The categorized factors that were eventually compared were errors, rewrites, time elapsed and any noted experiences.
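As a conceptual sketch of how the two error categories relate, the snippet below tallies them per field. The actual tallying was done manually from the recorded sessions, and the type and field names here are illustrative assumptions only.

// Conceptual sketch of how errors and rewrites were distinguished per field;
// the real tallying was done by hand from the recordings.
interface FieldObservation {
  field: string;          // e.g. "phone" or "dateOfBirth"
  leftIncorrect: boolean; // the participant left the field while the requirement was unmet (a direct error)
  rewrites: number;       // times the value was corrected without leaving the field (rewrites)
}

function summarise(observations: FieldObservation[]): { errors: number; rewrites: number } {
  return observations.reduce(
    (totals, o) => ({
      errors: totals.errors + (o.leftIncorrect ? 1 : 0),
      rewrites: totals.rewrites + o.rewrites,
    }),
    { errors: 0, rewrites: 0 }
  );
}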

2.4.2 Interview analysis

After having reviewed the user tests, the focus shifted to the interviews, where the first step was to transcribe them in order to become familiar with the data and allow the thematic analysis to be as efficient as possible. Once all interviews had been transcribed and familiarization was complete, the process of coding the data commenced, structured into three parts: open coding, axial coding and selective coding. However, these procedures should only, as stated by Corbin & Strauss (2008), be considered tools and should not hinder the dynamic and fluid nature of a qualitative analysis. Moreover, due to the deductive and latent approach, the coding process was adapted to accommodate preconceived, expected themes as well as reading into the subtext and underlying data.

Firstly, small components of data implying any experiences the participants had were highlighted. This data was the main focus of the interview transcriptions due to its relevance to answering our second research question regarding how microinteractions affect the user experience of web-based forms. The latent approach also allowed the researchers to read into the subtext of what was said during the interviews. This was an important part because of the semi-structured interviews and the open-ended answers they allowed. By reading into the subtext, vital data could be extracted and codes found where participants did not specifically express any directly positive or negative experiences, even when asked to do so. Furthermore, when there was insufficient data even when reading into the subtext of the interview, the corresponding user test and notes were retrieved and reviewed. In line with the process of axial coding, connections and potential relationships between codes were investigated. These were then put into broader categories, which allowed credible assumptions about the participants’ experience to be made. As such, assumptions and codes could be highlighted across all interviews and participants. To further validate such assumptions, more coded data was gathered from the second interviews, where the participants were asked to reflect on their own experience of both tests taken. This acted as a fail-safe and a validation of our assumptions from reading into the subtext where no obvious statements could be derived.

Lastly, after having gone through selective coding and its iterative process, the transcriptions painted an overall clear picture. Some interviews, however, required us to connect the coded themes to categories and notes derived from the prior user tests in order to come to reliable assumptions and conclusions. As such, our analysis of both the user tests and interviews provided valuable data which in some cases could not be assessed reliably without the other.

2.5 Validity and reliability

As mentioned earlier in this chapter, various precautions were taken to ensure the validity and reliability of the methods used, as well as of the research itself. These measures also extended to the researchers and included regular communication to allow for greater comprehension of the analysis and conclusions. Whenever recorded tests and interviews were transcribed, this was done by both researchers to prevent any mistakes, such as mishearings, from occurring.

Since the qualitative data was collected through a variety of methods, like user test recordings, interview recordings and respective transcripts, triangulation was used to ensure its validity. Attention was also put into applying a systematic process for coding data as described by Corbin & Strauss (2008). Thick descriptions of context and peer debriefings were also used to add reasoning and validity, as well as reliability to the overall study.

When it comes to the participants of the study, certain measures were taken to ensure that they did indeed possess the digital literacy expected of them, as previously mentioned in [2.2 Participants]. During the interviews, an additional question was added for this purpose, which also confirmed our initial expectations.

2.6 Considerations

As with all research, there are various aspects that must be considered. The ones that we deemed relevant for our study include, but are not limited to, scientific and ethical aspects. Firstly, we had to consider the fact that microinteractions have been shown to enhance the general user experience of digital products (Saffer, 2013). This led us to investigate to what extent this has been researched for the specific use case of web-based forms. We found that while some research had already been done (Falkowska et al., 2019), there was little to none examining the correlation between microinteractions and users’ ability to complete such forms, or the impact they have on their experience.

Secondly, there are some disadvantages of user tests that need to be considered. An example is that the participants know that they are doing a test, which could influence their behavior and persistence in the test (Spannagel et al., 2005). There are also the related demand characteristics to consider, mentioned and briefly described in [2.3.4 Interview questions] for question number eight; that question was included to validate that this factor did not influence the participants’ actions in a significant way.

Lastly, because of the nature of the methods applied, we had to consider some ethical aspects. The user tests and follow-up interviews conducted were all recorded, which required us to inform each participant and obtain their approval beforehand. We gave assurances that the recordings would not be used for anything besides gathering data for the study, which is important to reassure participants of given the growing business of selling data (Jerome, 2013).


3. Theoretical framework

This chapter describes the theoretical foundation used to answer the research questions stated in [1.3 Purpose & Research questions].

3.1 Structure of microinteractions

Saffer (2013) explains how the effectiveness of microinteractions is determined by their contained size as well as their form, and that they can be dissected into four different parts: trigger, rules, feedback, and loops and modes, all of which should be accounted for when designing and creating a microinteraction. He continues by describing a thoughtful and good microinteraction as something that gracefully handles each of these four stages.

The ‘trigger’ is the first part of a microinteraction and can be either user-initiated, meaning that the user does something to initiate the microinteraction, or system-initiated, meaning that the application or system meets a certain predefined condition and thus initiates the microinteraction (Boyl, 2019).

The ‘rules’ is the second stage and determines what happens when a microinteraction has been activated, meaning a sequence of behavior that is engaged when triggered (Boyl, 2019). This could for example be a functionality that is turned on or a display of a certain status of the system. The ‘trigger’ should always turn on at least one ‘rule’. Boyl (2019) continues by saying that when using digital systems or applications, ‘rules’ can be a lot more nuanced and difficult to understand. Therefore, they rely on the third stage, ‘feedback’.

‘Feedback’ refers to the action of letting users know what is happening and can be received in different ways such as visual, aural, and haptic. Everything the users see, hear or feel to help them understand the ‘rule’ is by definition ‘feedback’ (Saffer, 2013). This could for example be an animation filling up a button to inform the user that his or her click-action is being processed. Saffer (2013) furthermore describes this stage as the moment where there is an opportunity to express personality and design regarding the brand’s image.

The fourth and last stage, ‘loops and modes’, touches upon duration and what happens to the microinteraction over time. It is referred to as the meta-rules (Saffer, 2013): What is the optimal duration for a particular microinteraction? Does it vary due to changes of conditions? How and when does it end? This could for example be how the appearance of a submit button changes, depending on whether the user has filled in all the mandatory information needed in a form.
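To make the four stages concrete, the sketch below maps them onto the sign-up button used in our test variant with microinteractions. The two-second delay mirrors the description in [2.3.2.1 Test with microinteractions], while the element id, class names and the formIsComplete helper are hypothetical stand-ins rather than the actual test code.

// Illustrative mapping of the four stages onto the sign-up button (names are hypothetical).
function formIsComplete(): boolean {
  // Hypothetical helper: re-checks that every input meets its data requirement.
  return true;
}

const signUp = document.querySelector<HTMLButtonElement>("#sign-up");

signUp?.addEventListener("click", () => {   // Trigger: a user-initiated click
  if (!formIsComplete()) return;            // Rule: only proceed when every field is valid
  signUp.textContent = "Submitting";        // Feedback: text and spinner signal that something is loading
  signUp.classList.add("loading");
  setTimeout(() => {                        // Loop/mode: after two seconds the interaction
    signUp.classList.remove("loading");     // enters its completed state and the button turns green
    signUp.classList.add("done");
    signUp.textContent = "Done";
  }, 2000);
});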


Consequently, these stages were vital in our case and in the creation of reliable and suitable microinteractions for the user tests. The structure of microinteractions provided parameters to account for when validating the user tests performed, where a lack of this knowledge could have affected the result and the reliability of the data.

3.2 User experience

User experience (UX) describes how a user interacts with and experiences a particular interactive product, service, or system. It is the result of how an individual perceives the usage or anticipated usage (Kraft, 2012). Kraft also explains how users’ emotions regarding usability are often the result of the UX and determine whether they fulfill a specific task or end up rejecting the system and leaving the task incomplete. For example, a token of good UX is when users perceive an interaction, website, or system as relevant and can complete it with ease and with a clear understanding of what they have done.

Since this study examined how microinteractions influenced the user experience, this knowledge was important in the creation of user tests and the conducting of interviews with appropriate questions, providing us with comparative parameters and allowing the participants to express and acknowledge their experience. Due to the importance of feelings and thoughts in our test, this was something that validated the decision of conducting semi-structured interviews.

3.3 User interface

The user interface is where human-computer interaction takes place (Bødker, 1987). Essentially, it is how a user interacts with a digital product or service, e.g. through navigational menus and buttons. In relation to this research, it can be described as the layout and elements of a website and is sometimes referred to as a graphical user interface. Moreover, the various parts or elements that the user interacts with are part of the user interface. In our case, the user test design itself, i.e. the inputs and the button, is the user interface that the user interacts with.

3.4 Usability

Usability refers to how well users comprehend a specific product or design and accomplish a specific task. It is often used as a measurement of the effectiveness, efficiency, and satisfaction associated with the task performed. Nielsen (2012) defines usability by five quality components: learnability, efficiency, memorability, errors, and satisfaction. In our case, we focused on three of these components: learnability, errors and satisfaction.

Learnability refers to how easy it is for the user to accomplish the task at first approach. Errors measure how many errors the user made; in our study, an error was by definition when a user left an input incomplete, whereas when users changed a specific input without unfocusing it, it was not counted as an error but as a rewriting. Satisfaction is the result of how pleasant the user’s experience was.

These components were reflected in our evaluation of the results of the user tests and interviews. By taking measurements of these components into account, the study could extract data that could be compared and analyzed.


4. Results

This chapter aims to present the results of the conducted case study as well as answer the research questions stated in [1.3 Purpose and research questions] through the presentation of the collected data put through the analysis. It is divided into two sections, beginning with a presentation of the collected data, followed by the analysis of it performed in accordance with the methods explained in [2.4 Data analysis].

Since all the interviews were conducted in Swedish, some parts were translated into English for presentational purposes in this chapter. However, the interviews were not translated for the actual thematic analysis of the data, and as such there is no risk of bias due to data getting lost in translation.

4.1 Presentation of data collected

For this first part, all data from the conducted case study is presented objectively. It has been further divided by the two test variants conducted and the respective interviews.

4.1.1 Test variant without microinteractions & interview

When compiling the data regarding the user test without microinteractions, all five participants showed uncertainty based on the metrics of errors and time. The majority of the subjects also expressed annoyance at not being able to interpret what to type in some of the inputs. This was further underlined by the fact that none of the participants was able to accomplish the test without any errors. As seen in Table 1, three out of five encountered multiple errors and had to revisit multiple inputs, which also resulted in an increase in time elapsed. The total number of errors among the participants of this test variant amounted to 11, together with a total of 21 rewrites. As for the time elapsed, the results varied, with participant five completing the form within 1 minute and 16 seconds compared to participant four, who completed the test in 2 minutes and 44 seconds. The total combined time elapsed for the participants amounted to 8 minutes and 54 seconds.

Further, there was an evident pattern found in the thematic analysis of the follow-up interviews, where most participants explained that they did not notice the static presentation of information disclosing the required formats until they had pushed the submit button and thereby received the feedback regarding any errors. The vast majority also argued that this was a usability flaw, and one participant even described it as something that could result in abandoning the web-based form altogether.


Table 1 Results derived from the test variant without microinteractions.

Participant   Errors   Rewritings   Time elapsed
#1            1        1            01:21
#2            4        5            01:34
#3            3        11           01:59
#4            2        3            02:44
#5            1        1            01:16
Total         11       21           08:54

4.1.2 Test variant with microinteractions & interview

The data derived from this test showed assurance among four of the five participants who conducted it, as reflected in the pre-defined metrics of encountered issues and time elapsed. This stood in clear contrast to the results retrieved from the test variant without microinteractions. Four out of five subjects also verbally described their experience as positive and appreciated the direct feedback throughout the test. Nonetheless, one of the subjects, namely participant number one, did not acknowledge this type of direct feedback as something positive. Instead, it was described as something that seemed frivolous and led to difficulties interpreting what the required format was supposed to be. However, the result of this participant is considered an outlier. As seen in Table 2, a total of 6 errors were observed, i.e. the number of times a user did not meet the data requirements for a specific input, together with a total of 10 rewritings for all the participants. Similar to the static test variant, the time elapsed varied considerably, which can also be seen in Table 2, with participant one completing the form in 1 minute and 40 seconds compared to participant three, who completed the task in 31 seconds. The total time elapsed for all participants ended up being 5 minutes and 52 seconds. As such, all defined metrics were lower in total when compared to the test variant without microinteractions.

Table 2 Results derived from the dynamic test with microinteractions.

Participant   Errors   Rewritings   Time elapsed
#1            3        7            01:40
#2            2        1            01:16
#3            0        0            00:31
#4            0        1            01:28
#5            1        1            00:57
Total         6        10           05:52
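
To illustrate what this kind of direct feedback can look like in practice, the following is a minimal sketch in TypeScript of an inline validation microinteraction, built around a trigger (each keystroke), rules (a format check) and feedback (an updated hint), in the spirit of [3.1 Structure of microinteractions]. The selectors, messages and phone format are illustrative assumptions and not the actual prototype presented in [2.3.2 Test structure].

```typescript
// A minimal sketch of an inline validation microinteraction, assumed to
// resemble the dynamic test variant: feedback is given while the user is
// still typing. Selectors, messages and the phone format are illustrative
// assumptions, not the actual prototype.

const phoneInput = document.querySelector<HTMLInputElement>("#phone");
const phoneHint = document.querySelector<HTMLElement>("#phone-hint");
const phoneFormat = /^\+\d{1,3}\s?\d{6,12}$/; // hypothetical, e.g. "+46 701234567"

if (phoneInput && phoneHint) {
  phoneInput.addEventListener("input", () => {
    // Feedback is updated on every keystroke, instead of waiting for submit.
    if (phoneFormat.test(phoneInput.value)) {
      phoneHint.textContent = "Looks good";
      phoneHint.classList.remove("invalid");
    } else {
      phoneHint.textContent = "Use an international prefix, e.g. +46 701234567";
      phoneHint.classList.add("invalid");
    }
  });
}
```

Because the hint updates while the user is still in the input, a mismatched format can be corrected immediately, which corresponds to the assurance and lower error counts observed in this test variant.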


4.2 Result analysis

Nearly all of the participants were familiar with the process of signing up for and utilizing web-based forms. The majority of the participants, nine out of ten, estimated a monthly or weekly usage of forms such as the one presented in the test.

The following is the summarized result and analysis of the initial five participants, who were asked to begin with the test variant without microinteractions. All of the participants who were presented with this test described uncertainty and often referred to the formatting of the inputs as an issue, rather than mentioning the lack of direct feedback. Through observation and audio confirmation, all five participants also showed different degrees of irritation and distress as a result of only receiving feedback when actually pressing the submit button and expecting completion. It was noticeable how some of the inputs with commonly used format requirements, namely names and email, were completed rapidly without any hesitation or errors due to their familiarity. It was rather our elements of interest, which utilized less common format requirements such as phone number and date of birth, that distinguished whether the participants acknowledged the information provided and understood the specified requirements. Since the format of these inputs was less common, they were often referred to as the problem, as mentioned above, with participants arguing that details such as country or region prefixes in the phone number input are usually automated and do not normally require the user to type them. It was also noticeable how the presented information was overlooked.

Further, some of the participants described the completion of web-based forms as a habitual behavior requiring little or no thinking, due to the often automated configuration of inputs, and as a result they paid little to no attention to the data requirements. Considering the included participants' experience, error ratio and time to completion, there was a clear pattern showing that the statically presented information was not sufficient to make the participants aware of the format requirements.

Further, the data regarding the more dynamic test utilizing microinteractions, together with the corresponding interviews, is put through the analysis. As mentioned in [4.1.2 Test variant with microinteractions & interview], all of the five participants who executed this test were familiar with the process of web-based forms. In this test there was a significant difference in the assurance of the participants compared to those who carried out the test without microinteractions. Most of these tests were executed without any serious hesitation by the participants, and it was evident how the active feedback and cues, the microinteractions, caught their attention. Four out of five participants noticeably comprehended the requirements for the more challenging inputs and could make adjustments accordingly while still entering data in the specific input. When asked how they experienced the test, four participants said they favored the layout and usage of active indications. However, as previously stated, one subject expressed confusion and preferred more static information. When asked to identify any differences between the tests, this subject argued that the test variant without any microinteractions was more pedagogical, whereas the other participants perceived the variant utilizing microinteractions as the better model, pinpointing the value of live interaction with the users and how it made them feel more assured and confident in their actions and approach.

When comparing the results among all ten participants included in the study, it was evident that those exposed to microinteractions were left with a better experience in terms of understanding, assurance and errors, see Figure 8 below. This was strengthened by a recurring theme where the vast majority referred to that user test as the form with more plain and straightforward feedback, often describing it with words such as simple and practical. Nonetheless, as mentioned in [4.1.2 Test variant with microinteractions & interview], there was one outlier who saw microinteractions as inconvenient and described the informative box as the most efficient way of instructing the user.

Further, none of those who began with the test lacking microinteractions displayed the equivalent confidence seen among the subjects asked to begin with the other form. Nine out of ten participants also verbally expressed that they preferred the test utilizing microinteractions when asked to compare their experience between the two variants.

In relation to our first research question, how microinteractions influence users' ability to complete web-based forms according to data requirements, it was apparent that microinteractions influenced learnability, errors and satisfaction in a positive way. When comparing the results and data from the tests conducted, seen in Figure 8 and in Tables 1 and 2, it was evident that parameters such as time to completion, errors and rewritings differed significantly between the two variants.

In correspondence with our second research question, how microinteractions affect the user experience of such forms, there was a clear theme of the participants subjected to the test with microinteractions perceiving the interaction as more relevant and fulfilling the task more easily. Further, when asked to compare their experience between the two variants, nine out of ten subjects referred to the web-based form with microinteractions as the preferable one.


Figure 8 Total number of errors and rewritings noted from both test variants.


5. Discussion

In this chapter, the results from the study as well as the methods used to retrieve those results will be discussed. This will be done both in relation to the results and methods themselves and in relation to other previously performed studies in the area. Limitations previously mentioned in [1.4 Scope and limitations] will also be expanded upon, together with the implications of the study.

5.1 Result discussion

Through the results providing more in-depth knowledge about how microinteractions influence the users' ability to complete web-based forms according to data requirements, a better understanding of whether they are of value to implement is provided. The results also allow for an understanding of how the user experience is affected. While the initial starting point was that these two aspects of the process did not necessarily have to be correlated, the results made it clear that they are indeed connected.

As such, the results of the case study are directly relevant in allowing for such in-depth knowledge and understanding. The fact that a clear correlation between the use of microinteractions and the users' error frequency can be shown is also relevant to the purpose and the first research question of the study. Furthermore, the results provide sufficient data for reliable assumptions to be made about how the user experience is affected by the use of microinteractions.

Firstly, the results will be evaluated in relation to the first research question, concerning how microinteractions influence the users' ability. The results obtained from the user tests were mainly presented in the form of issues encountered as well as additional notes regarding each participant's experience, which were derived either from what they expressed or from the subtext of their observed actions. By comparing these results from both test variants, their relevance to the first research question became apparent. A correlation was shown between the use of microinteractions and the number of issues a user encountered, which in turn translates well into how the users' ability to complete the task was influenced. This also relates to usability, described in [3.4 Usability], whose components include learnability, errors and satisfaction (Nielsen, 2012). These are all relevant to the results found, in the sense that the comparisons of the two user tests showed a difference both in terms of the learnability and the error component. The effects on the last component, satisfaction, were shown to an extent in the user tests but to a greater extent in the results of the interviews, which are discussed further below. Furthermore, the results from the conducted user tests also reflect those of another study, which reported results showing that microinteractions shorten the time spent on various elements whilst completing a web-based form (Falkowska et al., 2019). This in turn implies an increased learnability and as such presumably also a decrease in errors, which was also something that our results showed, thus increasing the validity of the results from the user tests.

Continuing with the results relevant to the second research question, which aims to answer how the user experience was affected, these were mainly obtained through the follow-up interviews. These results were more complex in nature and had to be carefully analyzed in order to provide a clear and applicable result. Reliable assumptions could only be made after the thematic analysis had been conducted. After this process, the relevance of the results to the second research question was made apparent. The results of this study show that microinteractions improve the user experience, as the participants could fulfill their tasks with more ease and seemed more likely to finish the task, which aligns with the definition of user experience made by Kraft (2012). These findings were made possible mainly by the open-ended questions of the interviews, where participants were encouraged to elaborate and speak freely. As such, the results were not constrained by predefined parameters of what was expected to be found, but were instead derived through an open-ended approach. This also allowed for potentially unexpected results to be studied and conclusions to be drawn from them. Furthermore, the results align with those of another study examining the effects that microinteractions have on users in another use case, namely the Instagram application (Hernia, 2020). Those results showed a correlation between microinteractions and positive emotions, which is the same correlation shown in the results derived from this case study, namely the correlation between the use of microinteractions and the usability component satisfaction, which seeks to capture how pleasant the users' experience was. Altogether, the results obtained were sufficient for the research to act as a general pointer as to whether to implement microinteractions within web-based forms, which was part of the purpose. Through this, the two research questions could also be answered.


5.2 Method discussion

As initially envisioned, the chosen methods served the purpose of the research in a satisfactory way. The use of a case study was well-founded due to the small case scenario being studied. Furthermore, the two components of the study complemented each other well. This was something that we had hoped would be the case, but in some ways it exceeded our expectations. Instead of looking at the methods used as two separate entities, it became clear that they were in fact parts of one case study and could be treated as such. While we had initially thought that the user tests would mainly provide insights into the users' ability and the interviews into the user experience, corresponding to the two research questions, it became apparent that the division was not that black and white. Instead, the interviews also provided valuable insights into how the users' ability was influenced, and vice versa. While this could be considered a strength, it also highlights some weaknesses of the individual methods. For example, in some cases it might not have been possible to make credible assumptions without data from the other method. Given these insights, it was of great value to utilize both methods, which allowed for more credible and better results.

Therefore, the application of the case study worked very well. The chosen and later applied methods eventually enabled the purpose to be fulfilled, which was to provide more in-depth knowledge about the users' ability and their experience in the specific case examined. Through the purpose being fulfilled, the two research questions could be answered in a satisfactory way. It has to be mentioned, though, that there are aspects of the methods that could potentially have produced biased results. One such aspect is that the nature of the user test itself might have influenced the participants to try unusually hard to complete the given task. As such, they might have paid more attention to the given feedback than they normally would have. That in turn could be a factor resulting in better results in the test variant with microinteractions, due to its more prominent feedback. This would pose a risk of biased results and a weakness of the executed method. There are, however, arguments supporting that this was not the case, namely the results of the test variant without microinteractions, in which such tendencies were not observed to the same extent. That is, if the nature of the test itself had indeed influenced the participants to try harder and pay more attention than normal, it should be assumed that the results for the test variant without microinteractions would have seen the same effect, with results closer to those of the variant utilizing microinteractions and very low error frequencies. But since this was not the case, we are confident enough that this factor did not impact the test results in a significant way that would render them biased. Instead, the results point to the use of microinteractions as the genuine cause of the observed differences between the two test variants.

References
