
DEGREE PROJECT IN COMPUTER SCIENCE AND ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2020

Interactive user onboarding and its effect on activation rates

A statistical study of feature introductions in applications with complex interfaces

FILIP STÅL

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE


Abstract

User onboarding procedures and digital feature introductions are still in their infancy. Consequently, hard data on their implications and their effects on user behavior is scarce and difficult to come by. This study was conducted to contribute to this field of research, within the context of HCI. It studies the implementation of a contextually sensitive step-by-step feature introduction on a digital platform, presented through a complex and composite interface that provides access to a large number of features. It then proceeds to statistically evaluate the effects of the onboarding implementation on the activation rates that the onboarding was designed to improve.

The study’s findings were that users who received the step-by-step feature introduction were associated with statistically significantly lower mean times spent completing the activation requirements, compared to users who did not. Additionally, users partaking in the feature introduction had higher overall activation rates. The study also provides insight into how the time users spent completing the different steps leading up to a successful activation was divided among those steps, and discusses its significance for the design of future user onboarding experiences.


Sammanfattning

Processer för onboarding av användare till digitala applikationer och dess funktioner är fortfarande ett relativt nytt och ostuderat forskningsämne. Följaktligen är kvantitativ data om dess påverkan på användares beteende både knapphändig och svårfunnen. Studien genomfördes därför i syfte att bidra till detta forskningsområde, inom ramen för HCI. Den studerar implementeringen av en kontextuellt medveten steg-för-steg-onboarding på en digital plattform, som representeras genom ett komplext och sammansatt gränssnitt som portal till ett stort antal olika användarfunktioner. Därefter fortsätter den att statistiskt utvärdera effekterna som implementeringen av onboardingtjänsten för med sig, samt den förändrade aktiveringsgrad hos nya användare som den bidrar till.

Resultaten av studien var att användare som fick ta del av den nya interaktiva och stegvisa onboardingprocessen förknippades med statistiskt signifikant lägre genomsnittliga tider för att fullborda aktiveringskraven, jämfört med användare som inte fick ta del av den nya onboardingprocessen. En förbättrad aktiveringsgrad för användare som fått ta del av onboardingen kan också observeras. Studien ger även inblick i hur användarnas tid spenderades mellan de olika stegen i processen som ledde till en lyckad aktivering, och diskuterar resultatens betydelse för utformningen av onboardingprocesser för digitala applikationer i framtiden.


Interactive user onboarding and its effect on activation rates

A statistical study of feature introductions in applications with complex interfaces

Filip Stål

KTH Royal Institute of Technology Stockholm, Sweden

filsta@kth.se

ABSTRACT

User onboarding procedures and digital feature introductions are still in their infancy. Consequently, hard data on their implications and their effects on user behavior is scarce and difficult to come by. This study was conducted to contribute to this field of research, within the context of HCI. It studies the implementation of a contextually sensitive step-by-step feature introduction on a digital platform, presented through a complex and composite interface that provides access to a large number of features. It then proceeds to statistically evaluate the effects of the onboarding implementation on the activation rates that the onboarding was designed to improve.

The study’s findings were that users who received the step-by-step feature introduction were associated with statistically significantly lower mean times spent completing the activation requirements, compared to users who did not. Additionally, users partaking in the feature introduction had higher overall activation rates. The study also provides insight into how the time users spent completing the different steps leading up to a successful activation was divided among those steps, and discusses its significance for the design of future user onboarding experiences.

KEYWORDS

User Onboarding; Software Onboarding; Feature Tours; Feature Introduction; User Activation

1. INTRODUCTION

There are countless strategies for addressing and improving user activation in digital applications, such as newsletters, gamification, loyalty programs, and targeted ads. The process from user signup to initial successful use of a platform, henceforth referred to as the onboarding process, is an important step in the user experience journey, and one which potentially has great effects on customer activation rates, but for which hard data is scarce and difficult to come by. The onboarding process is also where a significant portion of software-as-a-service (SaaS) customers get stuck and are lost [4].

As an online platform grows and evolves, it is not uncommon for its user interface to become increasingly bloated and complex; Facebook and Twitter are two prime examples of this trend. This introduces usability challenges in the form of information overload and cluttered interfaces, both of which are common and serious concerns for digital products implementing new features over time. The Challengermode esports platform is no exception. Back in 2016, more than four years ago, a master’s thesis study labeled it a complex real-time web user interface [20]. Since then, a considerable number of new features, types of users, and technologies have been added, making the interface increasingly difficult for new users to navigate and grasp.

When someone starts playing a new computer or console game, they frequently get to play some kind of introductory, interactive tutorial, which teaches the new player the movement and interaction controls of the game. The impact of such tutorials and step-by-step feature introductions on user activation and retention appears to vary with game complexity [1]. Similar to this is the concept of product or feature tours, a user onboarding concept for sequentially teaching Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) users how to use a product or accomplish a task.

The concept of user onboarding within HCI is argued by some to still be in its infancy [12], and best practices within the industry are still being worked out. This study aims to examine the implementation process, and the effects on user activation, of a step-by-step user onboarding procedure for a web application presented through a complex interface. The study attempts to do so by answering the following research question.

[RQ] How does the implementation of a step-by-step feature introduction affect new user activation rates, for a specific task, on services with complex web interfaces?

2. BACKGROUND

In this section, previous work relevant to the study is presented, along with the context of the study and definitions and descriptions of the concept of user onboarding.

2.1 Case description

This study was conducted in collaboration with Challengermode AB. Challengermode is an online platform with a mission to make competitive esports available to amateurs and professionals alike. The platform provides a large number of features for its users, including social and community communications, a marketplace for selling esports-related services, and the ability to host and participate in tournaments. Although this thesis was conducted with Challengermode, the methodology is a general one, applied in a specific context. Therefore, hereinafter, the Challengermode platform is referred to as The Company, The Platform, or The Product. This is done in order not to over-emphasize the specific setting and its significance for the study’s results and findings, which are thought to be of a general nature. The context is generalized to a website presented through a complex web user interface. This is elaborated on further in the discussion section.

2.3 Teaching a product’s functionality

For a long time, the common practice for teaching a digital product’s functionality was, with few exceptions, through frequently asked questions sections (FAQs), user manuals and documentation [5, 18]. However, these methods of conveying instructions have been found both unappealing [26] and insufficiently helpful [32].

1 https://www.businessofapps.com/data/app-statistics/#7

As a result, more sophisticated solutions have been invented and developed throughout the years. Modern onboarding experiences often instead incorporate welcome messages, product tours and progress tracking [2]. There is a trend towards adaptive and contextually aware documentation [7], utilizing action-driven tooltips and textboxes [2, 10].

2.4 Onboarding defined

Onboarding is a term most frequently used in business contexts and within organizational theory, where it refers to an organization's practice of facilitating and introducing new employees [3, 29]. The field of human-computer interaction has, to some extent, adopted this terminology, but instead refers to new users of a product and their first impressions and experiences with said product. The subject of user onboarding within HCI is argued by some to still be in its infancy [12], and its definition and terminology are yet to be adequately established within the HCI community and field of study [14, 28]. One might therefore occasionally stumble upon analogous terminologies, where parts of the HCI community tend to use the more general term “learnability” [14] or the verbose yet ambiguous “new user experience”.

2.5 Why Onboarding

The percentage of returning users varies greatly between different types of applications, but in general, Hulick estimates that between 40 and 60 percent of users will return a second time to a digital product [17]. Eleven individual usage sessions are frequently used as the benchmark for having achieved application retention, and according to a statistics report provided by businessofapps.com¹, using data provided by Statista [8], the average app retention rate in 2019 was a meager 32 percent. The same report also states that 25 percent of applications downloaded worldwide are only used once.

Lincoln Murphy, a well-known growth architect in the onboarding industry, strongly argues that the main reason behind this is poor user onboarding experiences [22]. Several other publications also highlight the importance of having users realize a product's value as soon as possible [17, 22, 23].

2.6 Best practices in user software onboarding

The onboarding process should strive to make the user realize the value of the product as soon as possible, while also easing the user into its functionalities and interface at a comfortable pace [17, 18, 23]. It goes without saying that the optimal onboarding experience varies greatly on a case-by-case basis for different features and products. While video documentation might be the preferred solution in some cases, in-product tours might be the more suitable option in others. A substantial number of factors need to be considered, ranging from what types of users there are and what is known about them, to what the onboarding implementation tries to achieve. This makes the actual design and planning of the onboarding procedure a complex but immensely important task.

2.7 Product- and Feature Tours

Within the user onboarding industry there are a few major players that sell SaaS products allowing companies to, allegedly without much effort, implement in-product user experiences. These often allow for the creation of in-product tours, A/B testing and different options for collecting user feedback (e.g. trychameleon.com or intercom.com). With industry standards being ill-defined and best practices not yet agreed upon, the knowledge, results and findings that these companies share are among the few sources of information that exist.

Fortunately, some of these companies regularly release guidelines for implementing onboarding experiences and maximizing user impact. They also publish collected statistics and benchmarks on user engagement for their different products, one example being their product tours [6, 19]. One must take into consideration that these companies have a financial incentive to make product tours appear as impactful as possible. Even so, they have access to significant amounts of data, and thus make for an interesting source of insight and statistics when it comes to user onboarding.

Chameleon published a benchmark report on their product tours in 2019, in which they shared insights gathered from 15 million user interactions [6]. They themselves state that the motivation behind the benchmark report is that hard data on engagement, performance and conversion rates of user onboarding and product tours is scarce. Some of their key findings were that:

• The average completion rate for a product tour is 61%.

• Keeping tours short is of utmost importance. A sharp drop-off could be observed when tour length exceeded 4 steps. Completion rates for 4-step tours were 46%, while 5-step tours had a meager 23%.

• Users averaged 12 seconds to complete the tours.

• Design matters. Modals, tooltips and progress indicators improved completion rates of tours.

• Context mattered a lot. Users turned out to be 123% more likely to complete a tour which they triggered themselves, compared to one that was triggered automatically.

3. METHODOLOGY

This section describes and motivates the different methods used in the research. It communicates what research was conducted, what data was collected and how the data was collected. It also describes how the data was analyzed, as well as the materials and software that were used in the study.

The goal of this thesis was to design, implement and evaluate the implementation of a step-by-step feature introduction on a PaaS presented through a complex user interface. The study is inherently multifaceted and takes a developmental and statistical approach to its methodology, where an attempt is made to measure artifactual impacts on a composite system. Therefore, the Design Science paradigm [15] was highly relevant for the study and, consequently, a variant of the Design Science Research Methodology (DSRM), described by Peffers et al. [25], was adopted. The methodology focuses on six distinct phases and brings specific guidelines on iteration and evaluation for research projects.


Figure 1. Different steps of A3, the implemented feature introduction. Larger-resolution images can be found in Figures 2-6.

3.1 The Design Science Methodology

1. Problem and motivation identification: Define the problem and describe the need for a solution.

2. Defining the objective for a solution: Define what the artifact should accomplish.

3. Design and development: Creation of the artifact.

4. Demonstration: Use of the artifact and demonstration of how it solves the problem.

5. Evaluation: Evaluation of how well the artifact solves the problem. Iterate back to development and improve on the solution if necessary.

6. Communication: Communicate the findings of the study to relevant audiences.

3.2 Identification of the problem and its motivation

The first and preparatory part of the study consisted of a thorough literature study on the subject of user onboarding, combined with iterative meetings with The Company. The goal of the meetings was primarily to define the research problem and to justify the value of a solution to the user onboarding problem. The purpose of the literature study was to immerse in the onboarding subject within the context of HCI and to examine different possible approaches to easing users into the different functionalities of the platform. A contextually relevant step-by-step feature introduction was concluded to be the desired solution for teaching users the steps necessary to participate in esports tournaments on the platform. This feature was selected because it was identified as one of the more important Key Performance Indicators (KPIs) for The Company. It included connecting external game accounts to the user's PaaS account, which is a necessity for taking advantage of many of the platform's features.

3.3 Defining the objectives for a solution

Following the identification and definition of the research problem was a phase of defining the desired effects of the onboarding implementation. Qualitative and quantitative objectives were defined for comparison with the outcome of the onboarding implementation.

The implementation should strive to statistically significantly reduce the time users spend on the platform before the activation requirement (connecting a game account) is met. The qualitative aspects first and foremost required the artifact to be, to the greatest extent achievable, seamlessly embedded into the visual identity of the platform. It should also fit well with the existing software codebase and follow company code conventions.

3.4 Design and development

This was an iterative design and development process that produced, arguably, three artifacts.

The desired artifact was to be an interactive feature tour, utilizing lightboxes (modals), tooltips and progress indicators to visually guide users, instructing them how to set up their account in order to participate in tournaments in a game of their choice. Contextually sensitive feature introductions like these are frequently argued by researchers to have greater potential for successful user onboarding than static or external instructions [2, 3]. The artifact should also be reusable, in the sense that it should be designed and implemented in such a way that it can be utilized for teaching and introducing other features in future iterations.
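The thesis does not publish A3's source code, but the reusable, data-driven step sequence it describes can be sketched in a language-agnostic way. Python is used here for consistency with the statistical examples later in the document; the step names, CSS selectors and event names below are hypothetical illustrations, not the platform's actual identifiers.

```python
from dataclasses import dataclass

@dataclass
class TourStep:
    """One step of a feature tour: where to anchor the tooltip and what it says."""
    anchor_selector: str   # CSS selector the tooltip points at (hypothetical)
    text: str              # instruction shown to the user
    completion_event: str  # event name that marks this step as done

# Hypothetical definition of the four-step tournament tour described above.
TOURNAMENT_TOUR = [
    TourStep("#welcome-modal", "Would you like a guided setup?", "tour_accepted"),
    TourStep("#game-sidebar", "Pick a game from the sidebar.", "game_selected"),
    TourStep("#connect-account-btn", "Connect your game account.", "account_connected"),
    TourStep("#tournaments-tab", "Browse upcoming tournaments.", "tab_opened"),
]

def advance(step_index: int, event: str, tour=TOURNAMENT_TOUR) -> int:
    """Move to the next step only when the current step's completion event fires,
    which is what makes the tour contextually sensitive rather than timer-driven."""
    if step_index < len(tour) and event == tour[step_index].completion_event:
        return step_index + 1
    return step_index
```

Keeping the steps as data rather than hard-coded UI logic is one way to satisfy the reusability objective: a tour for a different feature is then just a different list of steps.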

Three different implementations were explored. The first two artifacts (A1 and A2) were deemed unsatisfactory solutions, while the third and last (A3) was considered satisfactory.

3.4.1 Artifact 1 (A1)

A1 was implemented using a third-party SaaS, namely Intercom's product tours. This almost entirely pre-built solution left much to be desired when it came to flexibility and options for customization, and was later disregarded after a heuristic evaluation concluded that it could not be seamlessly embedded into the platform's established visual identity.

3.4.2 Artifact 2 (A2)

A2 was developed using the third-party, open-source JavaScript library React-Joyride. While it proved more customizable than the first iteration of the solution, it was mutually agreed that the library's fit with the platform codebase rendered it suboptimal.

3.4.3 Artifact 3 (A3)

The final iteration of the feature tour, A3, was developed from scratch using JavaScript (ECMAScript 2019) and TypeScript (version 3.8) with React.js (versions 16.13.0 and 16.13.1), and .NET (.NET Core 3.1) for the backend.

3.5 Demonstration

The solution was demonstrated to and tested by employees at The Company as part of the later stages of the iterative development process. Feedback was collected, and the solution was revisited and improved upon. Some feedback, while valuable and valid, was decided to be outside the scope of the study, and was thus noted as potential future improvements to the work.

3.6 Evaluation

In order to evaluate the implications of a feature tour on The Platform, a contextually aware and interactive feature introduction was developed. After deployment to the live production environment, a subset (n=2000+) of randomly selected users were presented with the choice of being shown the feature introduction for participating in tournaments, or opting out of it. User data from interaction with the implemented artifact was collected over a span of a couple of weeks and stored in an anonymized database. The collected data consisted of an anonymized user identification code, timestamps of interaction, and a code representing which step of the feature tour the interaction belonged to. Meanwhile, some data were also collected from a subset of users not being offered the feature introduction. This data included unassisted activation rates for the same tasks that the feature introduction aspired to improve during the same period of time, namely the rates at which new users set up their account in order to be able to participate in tournaments.
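A minimal sketch of what one such anonymized interaction record might look like, together with how an activation time could be derived from it. The field and function names are hypothetical; the thesis does not publish its database schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TourEvent:
    """One anonymized interaction record, mirroring the three fields the study
    describes. Field names are illustrative assumptions."""
    user_code: str    # anonymized user identification code
    timestamp: float  # Unix time of the interaction
    step_code: int    # which step of the feature tour the event belongs to

def activation_time(events: list[TourEvent], signup_ts: float) -> float:
    """Seconds from platform signup to the last recorded step, i.e. the
    quantity later compared between the two sample groups."""
    return max(e.timestamp for e in events) - signup_ts
```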

A Welch's t-test was performed to test whether the mean activation times of the independent sample groups significantly differed from one another. The timestamps of the different steps were analyzed in order to better understand user behavior within the feature tour and to identify key parts where improvements could be made.
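The test itself can be sketched on synthetic data; `scipy.stats.ttest_ind` with `equal_var=False` performs Welch's t-test. The generated times below are illustrative stand-ins (the study's analysis was run in SPSS on the real, anonymized activation times), with group sizes and scales only loosely mirroring the samples described later.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic, right-skewed activation times in seconds; illustrative only.
non_students = rng.exponential(scale=255, size=20000)
students = rng.exponential(scale=207, size=1900)

# Welch's t-test does not assume the two groups share a variance, which
# suits unequal group sizes and spreads.
t_stat, p_value = stats.ttest_ind(non_students, students, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```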

4. RESULTS

In this results section, the users that received the step-by-step feature introduction will be referred to as Students. This is not to be confused with Student's t-test [31], the common name for a statistical test referenced later in this section. The users not receiving the feature introduction will be referred to as NonStudents.

4.1 Data collection

Data concerning the activation rates of more than 25,000 newly registered users was collected. The two independent sample groups were made up of users not being offered the feature introduction, NonStudents, and users given the option of receiving the interactive feature introduction, referred to as Students. Activation in this instance was defined as a user having successfully connected their account on the platform with a game account of their choice. The data cases in the respective groups where users did manage to fulfill the activation requirements (N=20551 for NonStudents, N=1926 for Students) were used for further analysis in section 4.5 and onward.

Table 1: Data collected from newly registered users that completed the activation requirements.

Group        Cases total
NonStudent   20551
Student      1926


4.2 Acceptance rates for the optional feature introduction

More than three-fourths (78.33%) of users presented with the option (see Figure 2) to get the feature introduction accepted it. 21.27% therefore opted out of the user onboarding procedure for demonstrating how to connect a user's account on The Platform with a desired game account.

4.3 Activation rates of students versus normal users

30.95% of newly registered users not offered the feature introduction ended up successfully fulfilling the activation requirements on their own during the span of the study's data collection phase. In comparison, 40.54% of users that were offered the feature introduction fulfilled the requirements. Among only the users that accepted the feature introduction, this number is instead 54.07%.

4.4 Completion rates of the feature introduction steps

4.4.1 Welcome prompt (Step 1)

After having signed up for an account on the PaaS, and finding themselves on the platform’s landing page, users were prompted with a modal. The modal (See Fig. 2) asked the user whether they would like guidance on the steps necessary in order to participate in tournaments on the platform. 78.33% accepted and 21.27% declined.

Figure 2. The welcome modal where users could opt-in or opt-out of the feature introduction.

4.4.2 Games (Step 2)

97.02% of the users accepting the onboarding also completed the second step of the tour (see Fig. 3); in other words, they followed the tooltip and navigated to their game of choice. The remainder navigated elsewhere on the platform or left the site.

Figure 3. A tooltip with some instructions for the user, pointing at a list of available games in the sidebar. The tooltip is paired with a progress indicator with three steps.

4.4.3 Connect Account (Step 3)

72.74% of the total tested users made it through the third step as well, meaning they followed the tooltip and pressed the connect-account button shown in Figure 4. This took them to a modal with instructions on how to connect the accounts, paired with instructions for connecting to that specific game.

Figure 4. A tooltip pointing at a button. The button content is different for different games, and so is the content of the modal it triggers.

4.4.4 Browse Tournaments (Step 4)

The final step in the feature introduction required users to successfully log in with the credentials for the external account, thereby connecting their account on the PaaS with their desired game account (see Fig. 5). If they succeeded in doing so, they were presented with a tooltip highlighting a tab which, when clicked, allowed them to browse the different listed tournaments for that game. This last step was completed by 54.07% of users that started the onboarding process for tournaments.
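The per-step percentages above are reported on mixed bases (some relative to users offered the tour, some relative to users who accepted). A small helper that separates cumulative retention from step-to-step retention makes such funnels unambiguous; the counts below are illustrative, chosen only to roughly echo the reported rates.

```python
# Hypothetical raw step counts for a four-step tour; the study reports only
# percentages, so these absolute numbers are invented for illustration.
step_counts = {
    "welcome_accepted": 2000,
    "game_selected": 1940,
    "account_connect_clicked": 1455,
    "tournaments_browsed": 1081,
}

def funnel(counts: dict[str, int]) -> list[tuple[str, float, float]]:
    """For each step: share of tour starters still present (cumulative) and
    share of the previous step retained (step-to-step)."""
    names = list(counts)
    first = counts[names[0]]
    out, prev = [], first
    for name in names:
        n = counts[name]
        out.append((name, n / first, n / prev))
        prev = n
    return out

for name, cumulative, conditional in funnel(step_counts):
    print(f"{name:24s} cumulative={cumulative:6.1%} step-retention={conditional:6.1%}")
```

Reporting both columns side by side avoids exactly the base ambiguity seen in sections 4.4.2-4.4.4.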


Figure 5. After successfully connecting a game account, this tooltip directs users to a tab with upcoming tournaments they can sign up for.

Figure 6. Completion banner for the feature introduction.

4.5 Deep dive into comparisons of Students and NonStudents with respect to activation times

In order to answer the question of how the feature introduction affected user activation times for newly registered users, the activation times for the Students and NonStudents that did fulfill the activation requirements were examined. Time is presented either in seconds or in the MM:SS or HH:MM:SS formats, representing minutes and seconds, and hours, minutes and seconds, respectively. The group of users that did not receive the feature introduction but completed activation, the NonStudents (N=20551), was associated with an activation time of M=06:35:25 (SD=30:39:17) and MED=00:03:11. The corresponding values for the Student group (N=1926) are the numerically smaller M=04:42:22 (SD=26:29:16) and MED=00:02:11.

The drastically different mean and median values are indicative of highly skewed data; the mean being higher than the median points to a positively, or right-skewed, distribution. This makes the median the better and more robust indicator of central tendency, and thus better suited for characterizing the distribution of both samples. Furthermore, it is evident that extreme outliers are present in the data, likely the cause of the skew; a resulting skewness value of 8.398 and a kurtosis of 80.712 can be observed from a descriptive analysis performed in SPSS (IBM SPSS Statistics for Windows, version 26.0. Armonk, NY: IBM Corp.). Plotting the distribution using bar graphs and visually inspecting it, although difficult without zooming, indicates an approximately exponential distribution of activation times.
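The interplay of mean, median, skewness and kurtosis described above can be reproduced on synthetic right-skewed data. Note that `scipy.stats.kurtosis`, like SPSS, reports kurtosis relative to the normal distribution's value of 0 (the exact small-sample corrections differ slightly between the two); the data below is illustrative, not the study's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
times = rng.exponential(scale=300, size=20000)  # right-skewed, like the raw data

mean, median = times.mean(), np.median(times)
skew = stats.skew(times)        # ~2 for an exponential distribution
kurt = stats.kurtosis(times)    # excess kurtosis, ~6 for an exponential

# For a right-skewed distribution the mean sits above the median, which is
# why the median is the more robust indicator of central tendency here.
print(f"mean={mean:.0f}s median={median:.0f}s skew={skew:.2f} kurtosis={kurt:.2f}")
```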

4.6 Handling Outliers

Tukey introduced the interquartile range (IQR) multiplier approach to detecting outliers in data sets [30], which is the default used in the IBM SPSS Statistics software. This method of identifying and eliminating outliers suggests using a factor of 1.5 times the IQR and disregarding or transforming data points that fall outside this range. This does in fact produce a much more normal-like distribution, even if it in this case creates what resembles a positive-side amputee normal distribution [13]. However, there are studies indicating that this approach is inaccurate approximately 50% of the time, suggesting a multiplier of 2.2 instead [16]. Therefore, in order to be on the safer side, a more careful approximate multiplier of 2.64 on the IQR, applied above the median of the group with the largest range, which in this case is the NonStudent group (IQR=00:10:10), was chosen to detect extreme outliers. This conveniently makes 30 minutes the upper limit for the time taken to connect an account after having signed up on the platform, which pairs well with the heuristic assumption that it is highly uncommon for a user to spend more than 30 minutes in succession trying to accomplish the steps required to meet the activation requirements. As can be observed from the descriptive statistics of the dataset rid of the identified outliers in Table 2, the mean and standard deviation are significantly diminished by this process, while the median values were comparatively unaffected.
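The fence arithmetic can be made concrete. Tukey's classic rule anchors the fence at the third quartile, while the study's 2.64 × IQR fence is anchored so that it lands at roughly 30 minutes; the sketch below covers both variants and checks the 30-minute figure against the reported NonStudent summary values (median 00:03:11 = 191 s, IQR 00:10:10 = 610 s).

```python
import numpy as np

def upper_fence(values, multiplier=1.5, anchor="q3"):
    """Upper outlier fence: anchor + multiplier * IQR. Tukey's classic rule
    uses anchor='q3' with multiplier 1.5; anchor='median' illustrates a
    median-based variant of the same idea."""
    q1, q3 = np.percentile(values, [25, 75])
    base = q3 if anchor == "q3" else np.median(values)
    return base + multiplier * (q3 - q1)

# Check against the reported NonStudent summary values (in seconds):
fence = 191 + 2.64 * 610
print(f"median-anchored fence = {fence:.0f} s, i.e. about {fence / 60:.1f} min")
```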


Figure 7 & 8. The figures illustrate the sample distributions for NonStudents and Students after outlier reduction. The horizontal axes hold the time delta from platform signup to account pairing with a game account of the user’s choice.

Table 2. Descriptive statistics of the independent samples after outlier removal, using the IQR multiplier approach with a multiplier of 2.64 [30].

Statistic    NonStudents        Students
N            16843              1669
Mean         04:15 (SE 00:02)   03:27 (SE 00:06)
Median       02:24              01:47
Std. Dev.    05:04              04:37
Min          00:08              00:12
Max          29:59              29:54
IQR          03:40              02:44
Skewness     2.491 (SE .019)    3.068 (SE .060)
Kurtosis     6.713 (SE .038)    10.731 (SE .120)

4.7 Resulting differences

In order to test the hypothesis that the Students and NonStudents were associated with statistically significantly different mean activation times, an independent-samples Welch's t-test was performed. As can be observed in Table 3, Levene's test for equality of variances yields a significance value far below 0.05, meaning we can reject the null hypothesis that the samples have equal variances, and therefore opt for Welch's t-test, which does not assume homogeneity of variances. This is supported by Table 2 as well, considering that Levene's test for equality of variances becomes increasingly sensitive to differences in variances with unequally proportioned sample sizes.
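The decision rule described here (run Levene's test, then choose between Student's and Welch's t-test) can be sketched on illustrative data with unequal spreads and unequal group sizes; the real inputs were the activation times behind Tables 2-4.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Illustrative groups with clearly unequal variance and unequal sizes.
group_a = rng.normal(loc=255, scale=300, size=16843)
group_b = rng.normal(loc=207, scale=200, size=1669)

# Levene's H0: the groups share a variance. A small p-value rejects H0,
# which is the cue to use Welch's t-test rather than Student's.
levene_stat, levene_p = stats.levene(group_a, group_b)
equal_var = levene_p >= 0.05
t_stat, t_p = stats.ttest_ind(group_a, group_b, equal_var=equal_var)
print(f"Levene p = {levene_p:.2e} -> equal_var={equal_var}")
```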

Table 3. Levene's test for equality of variances.

                          F       Sig.
Equal variances assumed   28.98   .000

The Student and NonStudent groups are considered sufficiently normally distributed for the purpose of conducting Welch's t-test (i.e., N of smallest sample = 1669, max skew ≤ |3.068| and max kurtosis ≤ |10.731|).

The independent-samples Welch's t-test was associated with a statistically significant effect, t(2087.372)=6.72, p = .000. Thus, with p << .050, the NonStudents were associated with a statistically significantly longer mean activation time than the Students who received the step-by-step onboarding procedure. Cohen's d was estimated at 0.17, which is a small effect based on Cohen's guidelines [9].
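The reported effect size can be approximately reproduced from the summary statistics in Table 4 alone; small discrepancies are expected because the MM:SS values are rounded to whole seconds.

```python
import math

# Summary statistics from Table 4, converted to seconds.
n1, m1, s1 = 16843, 4 * 60 + 15, 5 * 60 + 4   # NonStudents: mean 04:15, SD 05:04
n2, m2, s2 = 1669, 3 * 60 + 27, 4 * 60 + 37   # Students:    mean 03:27, SD 04:37

# Cohen's d with a pooled standard deviation.
s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
d = (m1 - m2) / s_pooled
print(f"pooled SD = {s_pooled:.1f} s, d = {d:.2f}")  # close to the reported 0.17
```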

Table 4. Descriptive statistics of the independent samples after outlier removal.

Group        N      Mean   Std. Dev.  Std. Error Mean
NonStudent   16843  04:15  05:04      00:02
Student      1669   03:27  04:37      00:06

Table 5. Welch's t-test for equality of means.

                              t      df        Sig. (2-tailed)  Mean Diff.
Equal variances not assumed   6.715  2087.372  .000             00:48

In an attempt to be especially thorough, a logarithmic transformation of the dependent time values was carried out, to make sure that Welch's t-test's assumption of normality within the groups had not been violated. The transformation successfully translated the positively skewed data to better conform to an approximately normal distribution. The results can be observed in Figure 9 and Figure 10.

Figure 9. Histogram of the logarithmically transformed NonStudent sample distribution.

Figure 10. Histogram of the logarithmically transformed Student sample distribution.

For the logarithmically transformed data, Levene's test for equality of variances shows that the variances are no longer significantly different, which would permit a standard Student's t-test. For the sake of consistency, and because the two tests produce next to identical outcomes in this case, the arguably more robust [11] Welch's t-test was reused. The student and non-student groups are considered sufficiently normal, even more decidedly than in the previous instance, for the purpose of conducting the Welch's t-test (i.e., N of the smallest independent sample group = 1669, |skew| << 2.0 and |kurtosis| << 9.0 [27]). This yielded similar results to the previous test: the independent samples Welch's t-test was once again associated with a statistically significant effect, t(2020.122) = 9.343, p < .001. Thus, the non-students were again associated with a statistically significantly longer mean activation time than the students who received the step-by-step onboarding procedure.
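The transformation step above can be illustrated as follows. The data here is synthetic lognormal noise standing in for the positively skewed completion times; the before/after skewness check is the point of the sketch.

```python
# Log-transforming positively skewed times pulls them toward normality,
# which can be verified by comparing sample skewness before and after.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
times = rng.lognormal(mean=5.0, sigma=0.8, size=5000)  # skewed "seconds"

log_times = np.log(times)
raw_skew = stats.skew(times)      # strongly positive
log_skew = stats.skew(log_times)  # near zero
```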

4.8 How was the feature introduction interacted with?

This part of the analysis was conducted on a subset of the users that received the feature tour (n = 783), after disregarding outliers. Not all users who received the feature introduction were included, firstly because the data was delivered in two batches, and this analysis was conducted before the second batch had been received. Secondly, because of an illegitimate assumption that all newly registered users would always initially be routed to the landing page, which is where the first step of the tutorial was set to be triggered. As a consequence, a small portion of the newly registered users ended up not being given the contextual feature tour in the right order.

This also meant that this fragment of users did not get the immediate option of opting out of the feature tour, which is why the subset where such a pattern was observed was excluded from this part of the analysis. With a significant difference in mean time spent before reaching the activation requirements already concluded, a decision was also made to place an upper limit of twelve minutes on the completion of the four steps of the feature introduction, in order for a user to be included in this analysis.
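The twelve-minute inclusion rule amounts to a simple filter; a minimal sketch with hypothetical per-user completion totals:

```python
# Keep only users who completed all four tour steps within twelve minutes.
TIME_LIMIT_S = 12 * 60  # 720 seconds

completion_seconds = [95, 310, 640, 1500, 410, 760, 130]  # hypothetical totals
included = [t for t in completion_seconds if t <= TIME_LIMIT_S]
```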

Figure 11. The mean and median times spent between different steps of the feature introduction. The label 1 represents time spent between steps One and Two; the label 2, between Two and Three; and the label 3, between Three and Four.

As shown in Figure 11, the shortest mean and median times for completing a step were, by a compelling margin, the times from having accepted the tutorial to completing the second step (Figure 2). A considerable gap can be observed in the mean time relative to the next group, labeled 2, which represents the times spent between steps two and three (Figure 3). Lastly, the mean and median times spent between the last two steps of the tutorial are significantly longer than the rest (M = 192.5 s, MED = 60 s).
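The per-step timings behind Figure 11 can be summarized with plain mean and median statistics. The per-user durations below are illustrative numbers, with the label-3 values chosen to reproduce the reported M = 192.5 s and MED = 60 s:

```python
# Mean and median time spent between consecutive tour steps, keyed by the
# same labels as Figure 11. Per-user durations (seconds) are hypothetical.
import statistics

step_times = {
    1: [4, 6, 5, 8],        # between steps One and Two
    2: [20, 35, 25, 40],    # between steps Two and Three
    3: [45, 55, 65, 605],   # between steps Three and Four
}
summary = {label: (statistics.mean(t), statistics.median(t))
           for label, t in step_times.items()}
# a single long-tailed step drags the mean far above the median
```

A gap between mean and median like the one for label 3 is itself informative: it signals that a minority of users struggled with that step for a long time, rather than all users being uniformly slow.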

5. DISCUSSION

This study set out to investigate how the implementation of a step-by-step feature introduction affects user activation rates on a platform with a complex interface. The results of the study indicate that step-by-step onboarding procedures do indeed reduce the mean time spent completing the steps that users would otherwise have to spend time exploring and finding on their own.

5.1 Users not completing a step

Users not completing a step does not necessarily mean that the step did not accomplish what it intended to. This is assuming the goal was to teach the user how to do something, rather than making them do something. A step in the feature introduction was set as complete once the user clicked the element, or one of the elements, that the contextual tooltip pointed at. If the user instead clicked the close symbol on the tooltip, they were still considered to have taken part in and completed that step, as the tooltip had still fulfilled its purpose of informing the user where to find something or how to get to the next step, even if they decided not to navigate there right away.

5.2 Entering credentials for their game account

The third to fourth step in the feature introduction was, perhaps unsurprisingly, the part where the most users dropped off and where the average and median times spent were significantly longer. This could be due to user attention dropping off after a set number of steps in a feature introduction, as implied by Chameleon [6]. However, it could also be due to several other factors. It was an objectively more complex step, in which users had to recall the credentials for the game account in question, enter them correctly, consider the trustworthiness of the procedure and much more, which is substantially different from the "click here, click there" nature of the three preceding steps. One takeaway from this, however, is that feature introductions like the one implemented in this study allow for a more thorough evaluation of how users interact with a feature. One could argue that this has provided new insight into the pain points of the activation goal set by the company, which in turn can be used to improve that user experience.

5.3 Different users, different goals

As concluded in the pre-study and briefly touched on in the introduction, there exist many different users with different goals on the platform [24]. A portion of users likely uses the Challengermode platform for purposes other than participating in tournaments. This effectively renders the implemented feature introduction less fruitful, imaginably even useless, for those users. This strengthens the argument for contextual sensitivity in onboarding procedures. It also strengthens the argument for allowing onboarding procedures to be optionally triggered. Thus, an opt-in centered user experience design approach, rather than opt-out, should be adopted for the sake of the users.

A general recommendation springing from this is thus to implement a checklist of sorts for the user to complete on platforms or apps with complex interfaces. The checklist should either be filled with tasks to complete based on preferences set by the users during their account registration on the platform, or alternatively with a general set of tasks that should be of interest to all users. The critical part is that the checklist allows users, at their own pace and only if they want to, to trigger a feature introduction for the selected feature or task.

It is, however, apparent that the features considered the most central and critical to a platform should be the most streamlined ones. Occasionally, instead of figuring out how to best shepherd users where the application wants them, it might be an equally valuable approach to rethink and reevaluate the steps required to reach that goal, in order to make them as visible, accessible and effortless as possible.

5.4 Method Discussion

The study and its method are mostly of a quantitative nature, where data on observed user behaviors were collected and analyzed using statistical methods. The benefit of this, coupled with the large sample sizes, is the ability to actually substantiate and verify general hypotheses about feature introductions. Limitations of the study include, for example, the sorts of data collected in the data collection phase. While more data on the users, e.g. preferred genre of games, gender or age, would allow for deeper analysis and hopefully other interesting findings, there is also the aspect of user integrity that needs to be considered.

Illegitimate assumptions about user and technology behavior were touched on in section 4.8, where some unaccounted-for edge cases led to fragments of data that were unusable for their intended purpose. Fortunately, given the large sample sizes, the amount of discarded user data was negligible.

The outbreak of the COVID-19 pandemic did affect and delay the study, mostly due to office spaces being closed down, restricting access to development environments, hardware resources and colleagues. Fortunately, the user data collection remained unaffected apart from some delay.

5.4.1 Selection Bias

There are grounds to believe that a sampling bias may exist, where students partaking in the feature introduction are more inclined to "activate" than the NonStudents. This could explain the different mean and median values observed and, in turn, lessen the external relevance of the study's findings [21]. However, it should be noted that the comparisons made from section 4.5 of the results onward only included NonStudents that, despite opting out of the feature introduction, eventually did meet the activation requirements. In other words, they did still end up connecting a game account within the time frame during which the A/B testing was conducted.

6. FUTURE WORK

An interesting extension of this study would be to investigate how a successful first visit to a platform affects the user's motivation to return. It is not difficult to imagine that there could be benefits to letting users figure out how to do tasks themselves, through exploration of the platform, something also worth considering for future studies. This would likely require the study to run over a longer time, while perhaps also incorporating qualitative user data. Studying the users that opted out of the onboarding but completed the steps on their own is also a potential topic for an extension of this study.

Differences in user behavior that can be observed depending on demographic information could also be immensely interesting. One could try to figure out whether it is possible to successfully and sufficiently accurately predict which users visit the platform for what purpose, thus creating a great incentive for deeper and smarter personalization of the user onboarding experience on an individual level. Incorporating some sort of qualitative part is likely beneficial for most quantitative studies, as it provides greater depth and understanding as to why something turned out the way it did, which this study would probably also benefit from.

If this study in particular were to be extended, I would highly suggest delving deeper into the areas of gamification and user psychology, which are tightly connected to that of user onboarding, even though this study focused primarily on filling the gaps in non-commercially available hard data on the implications of user onboarding.

7. CONCLUSION

A good number of users chose to partake in the feature introduction and, of those, 50 percent ended up completing it. Users that received the step-by-step feature introduction were associated with statistically significantly lower mean times spent completing the activation requirements set up by the company than users that were left to figure out the necessary steps on their own.

Additionally, users partaking in the feature introduction had higher overall activation rates. The implemented artifact also provided insight into the pain points associated with the activation goal, as well as how the users' time was distributed across the different sub-steps of the onboarding procedure.

8. REFERENCES

[1] E. Andersen et al., "The Impact of Tutorials on Games of Varying Complexity," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2012, pp. 59–68, doi: 10.1145/2207676.2207687.

[2] K. Balboni, "We categorized over 500 user onboarding experiences into 8 UI/UX patterns," 2020. https://www.appcues.com/blog/user-onboarding-ui-ux-patterns (accessed Jun. 19, 2020).

[3] T. N. Bauer and B. Erdogan, "Organizational socialization: The effective onboarding of new employees," in APA handbook of industrial and organizational psychology, Vol 3: Maintaining, expanding, and contracting the organization. Washington, DC, US: American Psychological Association, 2011, pp. 51–64.

[4] A. Benlian, M. Koufaris, and T. Hess, The role of SAAS service quality for continued SAAS use: Empirical insights from SAAS using firms. 2010.

[5] S. Camacho Herrero, "Gamified Learning of Software Tool Functionality: Design and implementation of an interactive in-app learning interface for complex software," 2019.

[6] Chameleon, "Chameleon product tour benchmark report 2019," 2019. [Online]. Available: https://www.trychameleon.com/assets/chameleon-product-tour-benchmarks-report-2019.pdf.

[7] P. K. Chilana, A. J. Ko, and J. O. Wobbrock, "LemonAid: Selection-Based Crowdsourced Contextual Help for Web Applications," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2012, pp. 1549–1558, doi: 10.1145/2207676.2208620.

[8] J. Clement, "Mobile application user retention rate worldwide from 2012 to 2019," 2020. [Online]. Available: https://www.statista.com/statistics/751532/worldwide-application-user-retention-rate/.

[9] J. Cohen, "A power primer," Psychological Bulletin, vol. 112, no. 1, pp. 155–159, Jul. 1992, doi: 10.1037//0033-2909.112.1.155.

[10] A. Cooper, About Face: The Essentials of User Interface Design, 1st ed. USA: John Wiley & Sons, Inc., 1995.

[11] M. Delacre, D. Lakens, and C. Leys, "Why Psychologists Should by Default Use Welch's t-test Instead of Student's t-test," International Review of Social Psychology, 2017, doi: 10.31219/osf.io/sbp6k.

[12] A. Fedorov, "An intro to user onboarding, part 1," 2015. https://www.invisionapp.com/inside-design/an_intro_to_user_onboarding_part_1/ (accessed Jun. 12, 2020).

[13] G. Gignac, "Outliers Do's and Don'ts," 2011. http://www.how2stats.net/2011/09/outliers-dos-and-donts.html.

[14] T. Grossman, G. Fitzmaurice, and R. Attar, "A Survey of Software Learnability: Metrics, Methodologies and Guidelines," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2009, pp. 649–658, doi: 10.1145/1518701.1518803.

[15] A. Hevner et al., "Design Science in Information Systems Research," Management Information Systems Quarterly, vol. 28, pp. 75–105, 2004.

[16] D. C. Hoaglin and B. Iglewicz, "Fine-tuning some resistant rules for outlier labeling," Journal of the American Statistical Association, 1987.

[17] S. Hulick, The Elements of User Onboarding. useronboard.com, 2014.

[18] S. Hulick, D. Traynor, R. Galavan, J. Colman, R. Allan, A. Swanser, B. Irvine-Broque, S. Yang, and C. Chang, Intercom on Onboarding, 2nd ed. Intercom, 2019.

[19] J. Jenkins, "Best practices for using Intercom's Product Tours," 2020. https://www.intercom.com/help/en/articles/3095688-best-practices-for-using-intercom-s-product-tours.

[20] R. Larsson, "A user centric approach for evaluating and enhancing the usability of a complex real-time web user interface," 2016.

[21] J. G. Lynch Jr., "On the External Validity of Experiments in Consumer Research," Journal of Consumer Research, vol. 9, no. 3, pp. 225–239, Dec. 1982, doi: 10.1086/208919.

[22] N. Mehta, D. Steinman, and L. Murphy, Customer Success: How Innovative Companies Are Reducing Churn and Growing Recurring Revenue, 1st ed. Wiley, 2016.

[23] L. Murphy, "The Secret to Successful Customer Onboarding," 2016. https://sixteenventures.com/customer-onboarding (accessed Jun. 15, 2020).

[24] S. Ödegaard Jacobsson, "Deepening User Engagement on an Esports Platform Using Gamification: A Multi-Conceptual Study," 2019.

[25] K. Peffers, T. Tuunanen, M. Rothenberger, and S. Chatterjee, "A Design Science Research Methodology for Information Systems Research," J. Manage. Inf. Syst., vol. 24, no. 3, pp. 45–77, Dec. 2007, doi: 10.2753/MIS0742-1222240302.

[26] M. Rettig, "Nobody Reads Documentation," Commun. ACM, vol. 34, no. 7, pp. 19–24, 1991, doi: 10.1145/105783.105788.

[27] E. Schmider, M. Ziegler, E. Danay, L. Beyer, and M. Bühner, "Is it really robust? Reinvestigating the robustness of ANOVA against the normal distribution assumption," Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, vol. 6, pp. 147–151, 2010.

[28] J. Sjöberg and H. Winbäck, "Designing the user onboarding process in analytics software: Taking an omnichannel perspective," 2017.

[29] A. Snell, "Researching onboarding best practice: Using research to connect onboarding processes with employee satisfaction," Strategic HR Review, vol. 5, pp. 32–35, 2006, doi: 10.1108/14754390680000925.

[30] D. C. Hoaglin, B. Iglewicz, and J. W. Tukey, "Performance of Some Resistant Rules for Outlier Labeling," Journal of the American Statistical Association, 1986, doi: 10.1080/01621459.1986.10478363.

[31] R. E. Walpole and R. H. Myers, Probability & Statistics for Engineers & Scientists. New Delhi: Pearson, 2006.

[32] C. J. Welty, "Usage of and Satisfaction with Online Help vs. Search Engines for Aid in Software Use," in Proceedings of the 29th ACM International Conference on Design of Communication, 2011, pp. 203–210, doi: 10.1145/2038476.2038516.



TRITA-EECS-EX-2020:633
