
In the age of algorithms, what about the consumer?

A qualitative study of consumers' perceptions of and attitudes towards algorithms and how they affect the consumers' online behavior.

Isabelle Erlandsson

International Business and Economics, bachelor's level, 2021

Luleå University of Technology

Department of Social Sciences, Technology and Arts


Acknowledgements

This bachelor’s thesis is the final assignment in the three-year bachelor’s program in International Business at Luleå University of Technology. The thesis was written during the spring of 2021, and it has been an equally challenging, fun, instructive and fascinating process. The road to this finalized thesis has not been straight, and it has required copious amounts of coffee, snacks and brainstorming sessions. There have been ups and downs but, in the end, it has all been worth it.

It is a great feeling to finally complete my bachelor’s thesis, and I am proud to present it.

I would like to express my gratitude and say a massive thank you to my supervisor, Kerry Chipp, for her supervision, input and guidance throughout this process. This thesis would not have been the same if it were not for her. I would also like to thank all the participants in the study; your opinions and views have made this study what it is, and I am eternally grateful for your participation. I would also like to thank my friends and my family, who have supported me in my writing process by always being there by my side. Lastly, I would like to thank my partner, Alfred, who has been an absolute rock throughout this process and who has discussed several topics and questions with me, helped me structure and plan, and provided me with homecooked meals after long days of writing.

Thank you all for your support, it means the world.

Gothenburg, June 2021.

Isabelle Erlandsson


Abstract

We are moving towards a more digitalized society; we use smart devices and apps, and we can consume and search for things online. This digitalization provides massive benefits: our behaviors translate into patterns and information about us, and companies can use this to improve their performance and revenue. However, digitalization comes not only with benefits but also with drawbacks. As we move towards a more digitalized society, risks may arise along the way which may impair our personal autonomy, expose us to risky situations involving privacy and cybersecurity issues, and complicate our choices.

This thesis investigates consumers’ perceptions of and attitudes towards algorithms, and whether these affect their online behavior. The thesis uses an exploratory and qualitative method, and in order to fulfill this purpose, long, semi-structured interviews have been conducted with eight interviewees who have shared their thoughts and opinions on different matters related to the increasing presence of algorithms.

The main finding is that consumers’ perceptions of and attitudes towards algorithms are negative. A majority of the interviewees stated that they want their autonomy to remain intact, that they value their privacy and do not want to face a tradeoff between privacy and personalization, that they want their “online persona” to be representative, and that they want companies to communicate better and take more responsibility when handling their data. The findings also indicate that algorithms do not affect the consumers’ online behavior.

The study confirms previous studies which have stated that algorithms come with both benefits and drawbacks, and that even the benefits may impair the consumer’s well-being.

The study also confirms that there is a privacy and personalization tradeoff, and that in situations where privacy is prominent, trust may promote better marketing outcomes. This study also provides a suggested extension of the paradox of choice, by introducing the paradox of customization, and of the Technology Acceptance Model, enabling the investigation of acceptance of advanced technologies within devices rather than acceptance of the devices themselves.

Keywords: TAM, algorithms, perceptions, attitudes, autonomy, privacy, paradox of choice,

adaptive behavior.


TABLE OF CONTENTS

1. Introduction ... 1

1.1 Theoretical Background ... 1

1.1.1 Corporate Perspective ... 1

1.1.2 Consumer Perspective ... 3

1.2 Problem Discussion ... 4

1.3 Purpose ... 6

1.4 Delimitations ... 6

1.5 Thesis Structure and Overview ... 7

2. Literature Review ... 8

2.1 Technology Acceptance Model (TAM) ... 8

2.2 Algorithms ... 11

2.3 Attitude Towards Using (TAM) ... 12

2.3.1 Consumer Autonomy ... 12

2.3.2 Adaptive Behavior ... 14

2.3.3 The Paradox of Choice ... 14

2.3.4 Privacy ... 16

2.4 Conceptual Framework and Conceptual Adaptation of TAM ... 17

2.5 Propositions ... 18

3. Methodology ... 19

3.1 Research Purpose ... 19

3.2 Research Approach ... 19

3.3 Research Strategy ... 20

3.4 Data Collection ... 21

3.5 Sampling ... 21

3.5.1 Ethical Aspects ... 22

3.6 Data Analysis ... 22

3.7 Data Validation and Verification ... 23

4. Empirical Data ... 24

4.1 Introductory and defining questions ... 24

4.2 Algorithms ... 25

4.3 Autonomy ... 26

4.4 Paradox of Choice ... 28

4.5 Privacy ... 29

4.6 Summary of interviews ... 31

5. Data Analysis and Discussion ... 34

5.1 Algorithms and Adaptive Behavior ... 34

5.2 Autonomy ... 37


5.3 Paradox of Choice ... 39

5.4 Privacy ... 40

6. Findings and Conclusions ... 43

6.1 Summary and conclusion of findings ... 43

6.2 Conclusions for RQ1 ... 44

6.3 Conclusions for RQ2 ... 45

6.4 Implications for TAM ... 46

6.5 Theoretical implications ... 47

6.6 Managerial implications ... 48

6.7 Limitations with study ... 49

6.8 Suggestions for future research ... 49

Reference List ... 50

Appendix A – Interview questions ... I


LIST OF FIGURES

Figure 1: Conceptual definition to clarify the purpose and chosen perspective of this study. ... 6

Figure 2: Thesis Structure – chapter and content overview. ... 7

Figure 3: Literature Review – overview. ... 8

Figure 4: Basic Concepts of User Acceptance Models, adapted from Venkatesh (2003). ... 9

Figure 5: The Technology Acceptance Model, adapted from Davis et al. (1989, p. 985). ... 10

Figure 6: Conceptual Framework of the study. ... 18

Figure 7: Overview of research questions and corresponding concepts within the Attitude Towards Using level in TAM. ... 34

Figure 8: Suggested relationship between transparency and trust. ... 43

Figure 9: Suggested development and extension of TAM. ... 47

LIST OF TABLES

Table 1: Overview of interviewees and interviews ... 22

Table 2: Summary and KT’s from introduction ... 25

Table 3: Showing KT’s and responses from topic number two on algorithms ... 26

Table 4: Showing KT’s from the third concept of autonomy ... 27

Table 5: Showing KT’s from the fourth concept of choice and paradox of choice ... 29

Table 6: Showing KT’s from the fifth concept of privacy ... 30

Table 7: Overview and explanation of keywords mentioned ... 33


1. Introduction

This chapter provides background information about the chosen area of research and aims to evaluate current research and present the purpose and research questions. The chapter introduces the chosen topic with the help of a background and problem discussion, including theories on big data and algorithms.

1.1 Theoretical Background

The theoretical background provides an introduction to the area of big data and algorithms from both a corporate and a consumer perspective. The corporate perspective focuses on big data itself but also highlights GDPR and benefits for companies, while the consumer perspective focuses on benefits for the consumer and introduces consumer well-being and welfare in this time of big data.

1.1.1 Corporate Perspective

Companies worldwide try to tap the benefits of the information available on social media, as access to this information can help them both improve their performance and increase their revenue. The extraction of this valuable data is commonly known and referred to as big data, which has the capability of guiding a revolutionary transformation in research and invention, but also in business marketing (Alsghaier et al., 2017).

According to Hofacker et al. (2015, p. 89), big data was introduced when data storage costs dropped below the cost of deleting the data. There are many different definitions of the concept “big data”, which differ depending on who is asked, but most definitions express the growing technological ability to “capture, aggregate, and process an ever-greater volume, velocity, and variety of data” (Agnellutti, 2014, p. 3). Erevelles et al. (2015) state that big data is defined by three dimensions, generally referred to as “the three Vs”: Volume, Velocity, and Variety.

Sets of big data are large and complex datasets derived from, for example, instruments, click streams and Internet transactions. However, what matters in regard to big data is simply what it does. In addition to how big data is defined as a “technological phenomenon”, the ample array of potential uses for big data raises central questions in regard to legal, ethical, and social norms, and whether these are enough to protect privacy and other values in a world with big data. There is a remarkable computing power which opens up for new innovations and discoveries as well as advancements in our quality of life. Nonetheless, this can create an imbalance and inequality of power between those holding the data and those who intentionally or unintentionally supply the data (Agnellutti, 2014).

Big data enables data collection which opens up for remarkable insights for companies. The data can either be digital by nature, meaning that it is created for or by digital use including email or GPS location, or analogue which means that it is derived from the physical world and transformed into a digital format. This includes visual or verbal information captured by phones and cameras, or activity data from wearable equipment (Agnellutti, 2014). Big data also enables data-driven analytics and allows extracting and collecting relevant data which can be transformed into business insights (Anderl et al., 2016). Using big data allows researchers and companies to set up complex algorithms which can help discover behavioral patterns and trends (Allen, 2016).

When companies want to collect data, obtaining consent from the data subject is important. The safest and most flexible legal basis argued for the processing of personal data is explicit consent, but anonymization is also mentioned as a possible option. Anonymization cannot be applied fully, however, as it means removing details whilst introducing noise, which leads to decreased data quality and utility (Bonatti & Kirrane, 2019). As companies are not allowed to collect consumer data without the consent of the consumer, the consumer must “opt in”. If consumers opt out from being tracked, companies cannot follow their online behavior, which prevents them from monitoring the consumer (Choi et al., 2020). However, there is a range of approaches when collecting consent from data subjects. The two extreme approaches are (i) purely static, where consent is requested beforehand, resulting in a long document that users typically neither fully read nor comprehend; and (ii) purely dynamic, where consent is requested along the way prior to each step taken, meaning that the user is troubled with many requests, of which many are similar (Bonatti & Kirrane, 2019).

To regulate companies’ data collection, the European General Data Protection Regulation (GDPR) was introduced in 2018. GDPR controls and places strict restraint on the processing of personal information and data, and applies to all organizations tracking or providing services to European citizens (Bonatti & Kirrane, 2019). Personal data within GDPR is defined by GDPR.eu (2020) as:

“‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;”.

1.1.2 Consumer Perspective

Every time one shops online, details about the consumption are shared with retailers. These details are studied to figure out one’s preferences, likes and needs (Hill, 2012). These details, along with the collection of consumer data, open up for recommendations and personalized offerings as well as discounts and higher relevance in terms of marketing communications. With the information provided, marketers are now able to provide more benefits to the consumer (Martin & Murphy, 2016). The collection of data can also contribute to the well-being of the consumer by making choices easier, more practical and more efficient (André et al., 2018).

The European Consumer Organization (BEUC) (2018) argues that in the age of big data, consumers’ lives are now “dominated by products and technologies” which are all connected as well as increasingly automated and intelligent. This shift towards the increasing use of automated decision-making systems (ADS) will affect the consumer markets and the decision-making process. In an environment filled with artificial intelligence (AI) and automated decision making, consumers may be put in a vulnerable position where they run the risk of being manipulated into a certain purchase (BEUC, 2018).

The automated decision-making process differs from the more traditional consumer decision journey (BEUC, 2018), which usually, according to Court et al. (2009), initiates with a need to solve a problem and ends with a resolution or reevaluation of said need or problem. The consumer journey is a continual process where the consumer starts with considering different alternatives to satisfy a want or a need, assesses and elects among them, and then gets involved in consumption. Stankevich (2017) mentions that the traditional consumer journey is normally based on long and thorough processes which include searching for information, evaluation and comparison.

Consumers sometimes seek to engage in making tradeoffs, even if it could be better to adopt a satisficing strategy. This has implications for consumer well-being and welfare in this age of big data and AI. Personalized and targeted recommendations might increase the possibility that the first option presented to the consumer will meet the satisfaction threshold, and decrease the likelihood of the consumer engaging in comparison shopping. However, the multitude of search engines and comparison websites now makes it easier for the consumer to view and evaluate a larger range of products. This may both decrease the chances of a product being purchased and lower consumer satisfaction (André et al., 2018).

1.2 Problem Discussion

As mentioned, the collection of consumer data allows for recommendations and personalized offerings. However, this possibility has led to a focus shift where much of the focus now lies on consumer privacy (Martin & Murphy, 2016): how do consumers feel about big data and the fact that some data collection might be taking place without their knowing? Martin and Murphy (2016) argue that the topic of discussion has now shifted from whether consumers want to share their private information, to how consumers react when their private information is accessible and available to interested parties, such as marketers. A respondent to a survey conducted by Rainie and Duggan (2016, p. 9) replied that “I share data every time I leave the house, whether I want to or not […] The data isn’t really the problem. It’s who gets to see and use that data that creates problems. It’s too late to put that genie back in the bottle”.

With the large amounts of data provided by big data systems, opportunities and challenges come along. Traditionally, researchers have not been able to analyze the large data sets with the statistical methods developed in the 19th and 20th centuries, as these methods cannot handle the constant arrival of new data and information of such size and variety. To deal with this problem, researchers have developed what is called predictive analytics or user behavior analytics to handle big data. These analytical methods, which include an assortment of different statistical techniques, involve creating learning algorithms which, in short, aim to find patterns with the help of predictive power (Grable & Lyons, 2018).
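As an illustration of the kind of pattern-finding such learning algorithms perform, consider a deliberately minimal sketch in Python. It is not taken from any cited source; the function, the categories and the data are invented for illustration. It "predicts" a shopper's next purchase category purely from frequencies in that shopper's own history:

```python
from collections import Counter

def predict_next_category(history):
    """Toy 'learning algorithm': predict a shopper's next purchase
    category as the one that most often followed their latest one."""
    last = history[-1]
    followers = Counter(
        nxt for prev, nxt in zip(history, history[1:]) if prev == last
    )
    if not followers:
        # No observed pattern after the latest category: fall back to
        # the overall most frequent category in the history.
        return Counter(history).most_common(1)[0][0]
    return followers.most_common(1)[0][0]

history = ["diapers", "coffee", "diapers", "lotion", "diapers", "coffee", "diapers"]
print(predict_next_category(history))  # prints "coffee"
```

Real predictive analytics builds statistical models over millions of such histories, but the principle is the same: past behavioral patterns are turned into a prediction with commercial value.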

These algorithms have been fed with collected data, and the definition of an algorithm is “a sequence of steps and instructions that can be applied to data” (Agnellutti, 2014, p. 48). Algorithms help generate categories which then filter information whilst looking for patterns and relationships (Agnellutti, 2014). However, the use of the aforementioned ADS may pose a threat to both data and privacy protection in many ways, one being the mere quantity of data needed to train the algorithms (Castelluccia & Le Métayer, 2019). There is also a risk that the use of algorithms might impair the consumers’ sense of autonomy, which can then harm consumer well-being (André et al., 2018). Furthermore, the consumer might be unaware of which parameters have defined the algorithm’s choices or how much weight each parameter carries in the final choice presented by the algorithm (OECD, 2014).

As mentioned by Hill (2012), shopping equals leaving trails of one’s preferences and patterns.

One example of such trails is when the American low-price retailer Target found a way to develop “pregnancy predictions” based on customers’ Guest IDs. The pregnancy prediction was based on said Guest ID as well as purchasing patterns across 25 product categories; the resulting score was then used to send out personalized offerings connected to the customer’s pregnancy stage.

The strategy seemed perfect at first in terms of marketing, as it allowed Target to reach out to parents-to-be and turn them into loyal, returning customers. However, the strategy encountered problems when the father of a 16-year-old girl showed up at a Target store, angrily asking why Target had been sending out pregnancy-related offers and discounts. The father was under the impression that Target was trying to encourage his daughter to fall pregnant, but it later surfaced that his daughter was in fact pregnant and that she had not yet told anyone.

This is an example of a problematic situation where a company has collected data with the intention of helping and providing customization through suggestions, but has instead ended up crossing the line and being intrusive. This, once again, explicitly raises the question and highlights the importance of consumer privacy. Just as the respondent in Rainie and Duggan’s (2016, p. 9) survey stated: “the problem is not the data itself but rather who gets to see and use the data”.

As mentioned in this chapter, algorithms can provide benefits for both companies and consumers, but how do consumers perceive algorithms, and what is their attitude towards them? How do consumers feel in situations like the abovementioned faux pas with Target? There is a gap in the research on the consumer in the age of algorithms, and a gap in the literature on consumers’ perceptions of and attitudes towards algorithms.

Investigating consumers’ opinions, perceptions and attitudes in relation to algorithms is therefore of great importance, as these can both affect the consumer’s behavior and provide guidelines for algorithmic strategies. To contribute to this area of research with the help of a solid model, this study uses the Technology Acceptance Model to investigate the perception of, attitude towards, and acceptance of technology, i.e., algorithms.

1.3 Purpose

Based on the preceding introduction and problem discussion, the overall purpose of this thesis is to investigate the consumer in relation to big data and the use of algorithms. The scope of the purpose and perspective for this study is defined in Figure 1 below.

Figure 1: Conceptual definition to clarify the purpose and chosen perspective of this study.

Therefore, the main purpose is to investigate, with a qualitative method, consumers’ perceptions of and attitudes towards the presence of algorithms, and whether these affect the consumers’ online behaviors and, if so, how. In the interest of investigating this properly and to fulfill the purpose, the research questions (RQ’s) are stated as follows:

RQ1: What are consumers’ perceptions of and attitudes towards algorithms online?

RQ2: Do algorithms affect the consumer’s online behavior, and if so, how?

1.4 Delimitations

This qualitative study approaches the topic of big data, algorithms and online behaviors from a Swedish consumer perspective. The study is therefore limited to people from Sweden, with participants consisting of four females and four males within the Gen Y/Millennial generation, born between 1981 and 1996. The participants are individuals who regularly use computers, smartphones or other devices with internet connections for online purposes including shopping and consumption.

1.5 Thesis Structure and Overview

To facilitate reading, Figure 2 provides an overview of how the thesis is structured. C1-C6 present the headings in this thesis, where C is an abbreviation for chapter.

Figure 2: Thesis Structure – chapter and content overview.

C1 Introduction: Theoretical Background, Problem Discussion, Purpose, Delimitations.

C2 Literature Review & Framework: Algorithms, Autonomy, Adaptive Behavior, Paradox of Choice, Privacy, TAM, Conceptual Framework.

C3 Methodology: Research Purpose, Approach, Method, Strategy, Sampling, Ethics, Data Analysis and Validation.

C4 Empirical Data: Presentation of interviews, keywords, elaboration of keywords.

C5 Data Analysis: Analysis of interviews and keywords, connection to literature and theory.

C6 Findings & Conclusions: Conclusions, implications, limitations, suggestions.


2. Literature Review

The literature review presents relevant literature and is based on thorough research which helps define important theories, terms and concepts. The review starts with a presentation of the chosen key model for this study, the Technology Acceptance Model (TAM), followed by an introduction to the topic of Algorithms. Further, the chosen concepts including Consumer Autonomy, Adaptive Behavior, Paradox of Choice, and Privacy, are presented. The chapter then concludes with a conceptual framework and model of TAM, specified for this study. The chapter also presents the propositions (P) used in this study. Figure 3 below shows an overview of the literature review chapter.

Figure 3: Literature Review – overview.

2.1 Technology Acceptance Model (TAM)

Venkatesh (2003) defines the basic concept of user acceptance models as presented in Figure 4 below. This study focuses on the consumer in relation to the advanced technologies of algorithms, and therefore, TAM will be used as a tool for investigating factors such as the perception of said technologies as well as attitudes towards them. In this study, the definition of technology acceptance and the use of TAM is not limited to whether the consumer decides to use technology itself (e.g., computers, smartphones and so on) but rather whether they accept the advanced technologies of algorithms that these bring. Hence, the study assumes that the interview subjects have already accepted computers and smartphones, and seeks to investigate their acceptance of the advanced technologies that these come with. One distinction that needs to be made, however, is that the use of these technologies is sometimes involuntary rather than voluntary.


For example, as mentioned in the problem discussion, a customer might sign up for a “Guest ID” at a store like Target, which then uses their purchases and patterns to create suggestions, but the customer might not be aware of or fully understand what they are signing up for. Although the customer can collect points and rewards, the data collection building the suggestions might be taking place without their knowledge, and hence the use of the technology is involuntary.

Figure 4: Basic Concepts of User Acceptance Models, adapted from Venkatesh (2003).

Originally formulated by Davis et al. (1989), TAM is one of the most prominent models within the acceptance of technology. Davis et al. (1989) studied acceptance and why some firms may not be able to seize the entire value of technology as there might be acceptance barriers.

The two fundamental factors influencing an individual’s intention are: (i) perceived ease of use, and (ii) perceived usefulness. For example, someone who considers computer games to be difficult and a waste of time will most likely not adopt this technology, whereas someone who perceives computer games as easy and a way of adding mental stimulation will be more likely to adopt the technology of computer games (Charness & Boot, 2016). The perceived ease of use (EOU) is defined as the “degree to which the prospective user expects the target system to be free of effort”, while perceived usefulness (U) refers to the “prospective user’s subjective probability that using a specific application system will increase his or her job performance within an organizational context” (Davis et al., 1989, p. 985).


Figure 5: The Technology Acceptance Model, adapted from Davis et al. (1989, p. 985).

Some researchers have argued that TAM is too simple and that it fails to include potential barriers which may affect technology adoption, while other researchers consider TAM a solid model applicable across many different technology purposes. Although it does not weigh in factors such as subjective norms, the TAM framework can be used generally in information and communication technology research to understand and explain certain user behaviors (Alomary & Woollard, 2015).

Possible external variables within the TAM are argued to be Interaction, Information Offer, Personalization, Playfulness, and Instant Connectivity (Shin, 2016). As declared in both the Theoretical Background and Problem Discussion, Personalization is an important factor within big data, algorithms and the consumer’s use of these.

There are plenty of technology acceptance models including, but not limited to, the Unified Theory of Acceptance and Use of Technology (UTAUT) and the Theory of Reasoned Action (TRA). The UTAUT model adds factors such as effort expectancy, social influences and facilitating conditions, which may help explain the adoption and usage of technology. It has, however, been criticized for having some imperfections in integrating the different variables included (Bagozzi, 2007). The TRA has been criticized for being a limited model in terms of variables and variance (De Grove et al., 2012).

Despite the limitations of and critique towards the model, TAM is still considered a sturdy and trustworthy model. It is suitable for this study, which seeks to investigate the perceptions of and attitudes towards algorithms and see whether these affect the consumer’s online behavior. In the following section, more information about algorithms is presented.


2.2 Algorithms

Simply put, algorithms are defined by sequences of instructions and steps which can be applied to data. Algorithms help create categories which filter information, look for patterns and relationships, and aid in information analysis. Algorithms can help unlock the extensive amount of information available to companies, which can both empower the consumer and increase the potential for discrimination in automated decisions. They can increase companies’ ability to better target consumers through targeted marketing and advertisements, which is of great value as it helps companies reach the consumers who are most likely to purchase from them (Agnellutti, 2014). The presence of algorithms in consumers’ everyday lives is increasing, and they are able to understand and learn rapidly from experience (Castelo et al., 2019). Algorithms are, in one way, shaping the new social media landscape, and they help influence users by determining what content is relevant for each individual (Smith, 2018).
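To make this definition concrete, the following toy sketch in Python (invented for illustration, not drawn from the cited sources) shows an "algorithm" in exactly this sense: a fixed sequence of steps applied to data that filters and ranks content by its relevance to an individual user.

```python
def rank_feed(posts, interests):
    """Toy relevance algorithm: score each post by how many of the
    user's interest tags it carries, then sort so the most relevant
    content surfaces first (Python's sort is stable, so ties keep
    their original order)."""
    def score(post):
        return len(set(post["tags"]) & set(interests))
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "tags": ["sports", "news"]},
    {"id": 2, "tags": ["cooking", "travel"]},
    {"id": 3, "tags": ["travel", "sports", "cooking"]},
]
for post in rank_feed(posts, interests=["cooking", "travel"]):
    print(post["id"])  # prints 2, 3, 1
```

Production recommender systems are vastly more complex, but the essence is the same: a mechanical procedure decides, per individual, which content is shown and in which order.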

When an internet user is online, every step along the way creates digital footprints, also known as cookies. Cookies are a source of data which includes usage data and are not created intentionally by the user (Marian & Wamba, 2020). They are planted into the consumer’s system and are then used to track the consumer’s browsing habits and page visits. If user A visits a website with the intention of purchasing a certain product, the user will be exposed to an ad which A then clicks on. After this, when A visits other websites, A’s computer sends back notifications mapping out which websites A has visited. This allows for thorough detailing of what kind of websites A is visiting, which opens up for advertisements based on the browsing history. These practices must be disclosed in a privacy policy which should be accessible on the website, as they open up for potential liability risks (Younes, 2019).
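A rough sketch of the mechanism described above, in Python, may clarify it. This is a hypothetical illustration (real ad networks are far more elaborate): a third-party tracker sees the same cookie ID on every site that embeds its ads, stitches the visits into a browsing profile, and picks an ad from the most-visited category.

```python
from collections import defaultdict

# Server-side log of the toy tracker: cookie ID -> list of sites visited.
visit_log = defaultdict(list)

def record_visit(cookie_id, site):
    """Called each time a page embedding the tracker is loaded."""
    visit_log[cookie_id].append(site)

def pick_ad(cookie_id):
    """Serve an ad for the site category the user browses most often."""
    visits = visit_log[cookie_id]
    return max(set(visits), key=visits.count) if visits else "generic"

record_visit("user-A", "shoes.example")
record_visit("user-A", "news.example")
record_visit("user-A", "shoes.example")
print(pick_ad("user-A"))  # prints "shoes.example"
```

The names, sites and logic here are invented; the point is only that a handful of logged visits, keyed on a cookie, is already enough to target advertising at a specific browser.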

Gal and Elkin-Koren (2017) state that there are some benefits (virtues) as well as harms and risks with algorithms. They mention benefits such as: (i) speedier decision making, (ii) ability to make parallel decisions when faced with a large range of products and attributes, (iii) help reduce information as well as transaction costs, (iv) help avoid consumer biases which lead to non-optimal choices, (v) they can help overpower shrewd marketing efforts, and (vi) unburden consumers when faced with demanding decisions. To the contrary, harms and risks include (i) limiting the consumer choice and autonomy whilst increasing the vulnerability to faulty decisions, (ii) reduction of the consumer autonomy as they separate the consumer from their choice, (iii) they might not necessarily reflect the preferences of the consumer accurately as they may be based on incorrect assumptions, (iv) they might increase the exposure to certain harms such as privacy and cybersecurity issues, and (v) they have the potential of increasing economic and political inequalities.

Gal and Elkin-Koren (2017) mention relevant benefits and drawbacks, and consumers may perceive these differently as well as hold different attitudes toward them, but what do the different perceptions and attitudes stem from? In subchapter 2.3 below, possible factors influencing perception and attitude are presented.

2.3 Attitude Towards Using (TAM)

The following section, including headings 2.3.1-2.3.4, explains the concepts which will be used within the Attitude Towards Using-level in TAM in this study. The four concepts are:

Consumer Autonomy, Adaptive Behavior, Paradox of Choice, and Privacy, and these have been chosen as the author believes that the consumer might have opinions towards technology usage grounded in these concepts. These concepts will help analyze the consumers’ perceptions of and attitudes towards algorithms in the empirical data chapter later on.

2.3.1 Consumer Autonomy

Although constrained by factors such as price, time and information, consumers exert autonomy whenever they are presented with and choose freely from a set of possible options (Wertenbroch et al., 2020). Within contexts of consumer choice, autonomy is defined as “consumers’ ability to make and enact decisions on their own, free from external influences imposed by other agents” (Zwebner & Schrift, 2020).

André et al. (2018) depict a potential paradox which may characterize choice in this time of automation, AI and data-driven marketing. Despite all the welfare-enhancing benefits these technologies bring, they may result in reactance if the consumer feels undermined or deprived of their sense of autonomy in decision-making. Predictive algorithms based on habits, patterns and cookies are getting better at forecasting consumers’ preferences, and problems may arise if and when consumers feel deprived of the ability to control their own choices. Rather than feeling empowered by the choices they make, consumers may feel estranged from the ability to choose, and this technological change may impact the consumer’s well-being.

Younes (2019) states that personalized advertisements may appear harmless, but connecting the dots, they also entail predicting future behavior, where the consumer’s right to choose freely may be stripped away and companies might choose on the consumer’s behalf, thus imposing on their autonomy. A solution suggested by Younes (2019) is to always be aware of the digital footprints one leaves behind, as this could help put the consumer in control as well as in a better position in terms of privacy protection.

Consumers prefer to think of their decisions as self-determined, but the act of choosing also risks affecting consumers negatively. Choices often entail trying to pick the best option, be that a product or a service, from a set. The consumer reviews and compares different characteristics of the options, and the task itself can be considered relatively easy if a dominant or superior option emerges from the available set. If, on the other hand, there is no superior alternative, the experience and the result may be less satisfying than if the same product had been chosen without selecting it from a range of options. This is called tradeoff conflict, and it may occur when the consumer is faced with multiple options and choices (André et al., 2018).

To summarize, the consumer’s perception of autonomy may result in different consumer reactions and behaviors if their view of autonomy is not respected or represented. André et al. (2018) state that when consumers find out that a program can predict their choices, they might choose less-preferred options because they want their autonomy to remain intact. On the contrary, when a consumer is told that a program can establish how consistent their choices are with their preferences, consumers do not abandon their usual preferences. This finding may play an important role when designing algorithmic recommendations, as it tells us that outlining consumers’ preferences and choices as predictable could, paradoxically, result in different consumer behaviors which, in the end, make the algorithms and algorithmic suggestions less accurate.

2.3.2 Adaptive Behavior

When observed, people behave differently. Within social psychology, this is known as the Hawthorne effect, which states that people being observed are likely to be on their “best behavior”, i.e., behave according to standard procedures rather than using their usual methods. The effect was first described in the 1950s by Henry A. Landsberger (Baxter, 2015, p. 382).

Zwebner and Schrift (2020) found that consumers are particularly hesitant to being observed while constructing their own preferences, and that this hesitation is derived from a perceived threat to their autonomy. Even if the observing party, i.e., data collectors, may not see what the consumer ends up choosing, the simple act of being observed during the “preference-construction stage” (i.e., the decision-making process) makes the consumer feel exposed to outside influence, which cripples the sense of autonomy and independence. Consumers therefore try to avoid being observed in this stage. Zwebner and Schrift (2020) state that if observation does take place, consumers may alter their preferences and hence adapt their behavior, behaving in a way that allows them to resolve the decision with as little conflict as possible, such as choosing default options or not purchasing at all.

Zwebner and Schrift (2020) also found that this aversion to being observed stems not only from privacy-related issues, such as how companies use and share information, but also from how consumers perceive their ability to make a choice independently and free from external influences. In their study, Zwebner and Schrift (2020) found that individuals who were observed during their decision-making process were more likely to opt out compared to those who were not observed. They also found that consumers showed a much lower interest in continued use of an online platform if the platform tracked their process while making a decision.

2.3.3 The Paradox of Choice

When human beings face certain situations, analysis paralysis, or paralysis of analysis, might occur. It refers to overthinking and/or overanalyzing a situation to the extent that no decision or action is taken, resulting in paralysis of the outcome. Rather than trying something and changing course if something goes wrong, the decision is treated as overly complicated with too many options, resulting in a choice never being made. The paralysis may occur when someone is searching for the most optimal or perfect solution whilst trying to avoid faulty decisions (Kurien et al., 2014).

It is a myth that more choices equal more sales (Kurien et al., 2014). Rather, too many choices can result in no decision at all and hence no sales. Thus, the paradox of choice tells us that “the more choices available, the less likely a shopper is to make a decision” (Silverman, 2019, p. 9), and because of this, the consumer is more likely to experience post-purchase regret. This does not only affect the consumer but also retail businesses, since there is a plethora of options online with the possibility of shopping anytime, anywhere (Silverman, 2019).

To handle the problem of more, or too many, choices, mass customization (MC) was introduced (Toffler, 1970). MC aims to provide tools for the consumer to customize what they are looking for. The customizable features of goods are often called “product attributes”, and the options within these attributes are called “attribute levels”. For example, size and color are attributes, whereas the specific colors and sizes available are attribute levels (Piasecki & Hanna, 2011).

For companies to please the consumers, i.e., the users of the configuration and customization options, they must provide a wide variety of options, meaning that they need to offer the option of customizing many product attributes. All attributes also need a variety of levels to choose from (Piasecki & Hanna, 2011).

Piasecki and Hanna (2011) also state that while the paradox of choice can be assumed to take place when there is no option for customization, it is even more likely to occur when there is an option for MC; the MC users must first choose how they wish to configure and customize, but also specify a great number of levels of product features. Customers may not be able to match their own wants and needs with the product features available, and this is directly connected to the amount of choice available. Companies tend to respond to the customers’ wants and needs by giving them the option to customize among attributes, but they face the threat of exposing their customers to the burden of choice. On the other hand, if the choice of product attributes is limited, customers may think that the product attribute they seek to customize is not customizable (Piasecki & Hanna, 2011).

As mentioned, the paradox of choice can occur when there are too many products to choose from. Algorithms can provide a consumer with more options and recommendations based on their previous behaviors and purchases, but how does the consumer act when these alternatives are presented? Does the number of alternatives result in the consumer purchasing more items, or does the consumer choose not to engage at all? The possibility of customizing and/or filtering when browsing certainly opens up for fewer alternatives and the possibility of finding the right item quicker, but even when filters have been applied, the consumer still faces multiple alternatives, although fewer than without the filter. The research interest here is to investigate how consumers view the number of choices, and the different alternatives available for increasing or decreasing the number of alternatives presented to the consumer.

Customization is in one way a type of personalization, and this personalization requires the consumer to let go of and reduce their privacy (Martin & Murphy, 2016). Personalization can result in increased engagement from the consumer, but it requires clear information on the data collection, otherwise the consumer might be put in a vulnerable position. The effectiveness of personalization increases when the consumer’s feeling of trust overpowers their concerns about privacy (Bleier & Eisenbeiss, 2015).

2.3.4 Privacy

Privacy is a wide-ranging and conflicting idea (Younes, 2019) and traditionally, privacy has been a difficult concept to define and regulate. Privacy theories, laws and policies all share one common denominator: “conceptualizing personal information as static pieces of knowledge about someone” (Pan, 2016, p. 241). Information privacy was defined in 1968 as “the right to select what personal information about me is known to what people” (Younes, 2019, p. 137).

Privacy is not, contrary to popular belief, about having secrets and not sharing, but rather about being empowered to choose what kind of information is shared, and to what extent (Younes, 2019). Big data and the collection of consumer data come with the risk of exacerbating privacy concerns for consumers. Although convenient and relevant, personalization also carries serious privacy concerns, especially when the consumer is unaware that data collection is taking place (Aguirre et al., 2015). Data subjects may not understand to what extent their data is being generated and what the data is used for, despite platforms trying to cover this in their terms and conditions (Penneck, 2019). Online users tend to have incentives to accept interactive and preference-demanding websites, which translates into an extensive risk that online users give up their privacy (Younes, 2019).

According to Kotler and Armstrong (2016, p. 155), marketers must be cautious when collecting customer data so that the privacy line is not crossed. Growing consumer privacy concerns have become a problem for the marketing research industry, as there is a fine line between collecting data and possibly uncovering sensitive data while keeping consumer trust in mind. At the same time, the consumer is stuck with tradeoffs between personalization and privacy: consumers want to receive personalized and relevant offers but also worry that they might be tracked too closely. Failure to properly address privacy-related issues may result in less cooperative consumers.

Studies have shown that people respond more positively to personalization and targeted marketing when they have greater ability to control their personal privacy settings, and in situations where privacy is prominent, trust may promote more positive marketing outcomes. These include willingness to disclose information, purchase intent, click-through, less ad annoyance and more ad acceptance (Tucker, 2014; Martin & Murphy, 2016).

Although consumers may benefit from personalized marketing by receiving more personal suggestions, there is a risk that too much is found out about consumers’ lives, which can be used to take advantage of them. Policy makers and consumers raise the question of whether online merchandisers should be allowed to plant cookies in browsers; cookies which are then used to track and follow the consumer as well as deliver targeted ads and marketing efforts (Kotler & Armstrong, 2016, p. 556).

2.4 Conceptual Framework and Conceptual Adaptation of TAM

The concepts mentioned in the literature review, including the Technology Acceptance Model, create the foundation for the study to help fulfill the purpose and answer the research questions.

Implementing the TAM within the area of exposure to data collection and algorithms will provide indications of attitudes and of how the consumer perceives these factors. The TAM traditionally focuses on technology adoption in terms of utilization of technology in general, whereas this study applies the TAM to the utilization and presence of algorithms and their outcomes. The conceptual use of TAM for this study is presented in Figure 6 below.

Figure 6: Conceptual Framework of the study.

As can be observed in Figure 6, the conceptual framework defines the external variables as Algorithms, as these are the factors to which the consumer is exposed. Perceived Usefulness and Perceived Ease of Use are then transformed into Perception of External Variables, i.e., of the algorithms. As previously mentioned, personalization and algorithms also come with benefits, and the interest lies in investigating the perception of these. There is also an interest in investigating the Attitudes Towards Variables, using concepts such as Autonomy and how the consumer perceives their autonomy in relation to automated decision-making by algorithms; Adaptive Behavior, which may or may not occur when feeling observed by said algorithms; Privacy-related issues and how the consumer views their privacy in relation to algorithms; and the Paradox of Choice when presented with multiple options. This is then summarized in the Behavioral Intention to Use, where the attitudes and perceptions from stages 1-3 are analyzed in terms of Acceptance. The last step, Outcome, refers to the outcome of the previous steps and how the consumer chooses to act.

2.5 Propositions

For this study, three propositions (P1-P3) have been formulated. Bhattacherjee (2012) states that propositions are preliminary and tentative relationships between constructs, stated in a declarative form. Proposition statements do not have to be true but must be empirically testable using data, so that a result showing whether they are true or false can be derived. Compared to hypotheses, which are specified at an empirical level, propositions are specified at a theoretical level and are not rejected in the same way as hypotheses if they turn out to be false. The propositions are anchored in the research questions as well as the theoretical background and literature review. Thus, the propositions for this study are the following:

P1: Consumers’ perception of algorithms is negative.

P2: Consumers’ attitude towards algorithms is negative.

P3: Consumers’ perceptions of and attitudes towards algorithms affect their online behavior.

[Figure 6 components: External Variables (Algorithms) → Perception of External Variables → Attitude Towards Variables (Autonomy, Adaptive Behavior, Paradox of Choice, Privacy) → Behavioral Intention to Use (Acceptance) → Outcome.]

3. Methodology

In this chapter, research purpose, approaches, strategy and data collection methods will be presented, evaluated and discussed. The chapter also includes a presentation of the sample for the study as well as ethical aspects which have been taken into account in the process. Each heading includes a thorough explanation of the chosen alternative and strategy for this study.

3.1 Research Purpose

This study uses an exploratory research approach. Exploratory research does not aim to offer final or conclusive solutions, but rather to study a problem that lacks a clear definition (Saunders et al., 2012). Exploratory research opens up for the risk of biased information which is not necessarily representative or applicable in a larger context, but it allows for further research within the area as it simply seeks to explore the research questions (Dudovskiy, n.d.). Advantages of exploratory research according to Dudovskiy (n.d.) are: (i) flexibility and the possibility to adapt and change, (ii) effectiveness in laying the groundwork for further studies within the area, and (iii) timesaving, as it defines early what research is worth pursuing.

The research purpose and goal of this study is to examine consumers’ perceptions of and attitudes towards big data and algorithms, and if/how algorithms affect their online behavior. Furthermore, the study primarily uses exploratory methods to gain insight into the consumers’ opinions and views.

3.2 Research Approach

This study is a qualitative study with a deductive research approach. Qualitative methods focus on the collection and interpretation of non-numeric data, which can be retrieved from, for example, in-depth interviews (Malhotra & Birks, 2007). The deductive research approach, or deductive reasoning, is grounded in the reasoner’s beliefs. The reasoning takes inputs, propositions, which aim to guarantee the truth of the output, the conclusion. These propositions may be based on the beliefs or assumptions of the reasoner, which are to be explored (Schechter, 2013). The deductive method deducts conclusions from existing premises or propositions and, according to Babbie (2010), stems from an anticipated pattern. Dudovskiy (n.d.) claims that there are some advantages with a deductive approach and method: among others, deduction allows relationships between variables and concepts to be explained, and findings to be generalized to a certain extent.

As the author of this study had some initial thoughts and anticipated patterns, and since propositions were used, the deductive method was chosen. Caulfield (2020) also mentions that the deductive method allows for approaching the data with themes expected to be found in it, grounded in theory or existing knowledge, which further motivates the choice of a deductive approach.

3.3 Research Strategy

The chosen research strategy in this study was an interview strategy with long, semi-structured interviews. Saunders et al. (2012) declare that it is favorable to use semi-structured interviews when conducting exploratory research, and that semi-structured and in-depth interviews allow for probing answers where the researcher wants the interviewees to explain or further build on their responses. Different interviewees may use words in different ways, and probing the meanings of quotes increases the significance and depth of the data obtained.

The eight interviewees were interviewed with open questions, as this allowed them to answer in their own words.

McCracken (1988) states that long interviews are considered one of the most compelling methods within qualitative research. The long interview method can help give insight and “take us into the mental world of the individual” (p. 2) where patterns can be observed, and it can allow us to enter the mind of a person to both see and experience the world through their eyes.

Long interviews allow the interviewer access to interviewees without putting their patience to the test and, even better, without infringing on their privacy. The method allows for qualitative data collection without having to observe the participant, as it avoids both discreet observation and lingering contact.

3.4 Data Collection

Malhotra and Birks (2007) state that data collection can be conducted through either primary or secondary data sources. Secondary data has been collected previously and already used in a different context. Primary data, on the contrary, is data collected by the researcher for a specific purpose.

This study consists of both primary and secondary data. The secondary data laid the foundation for the preceding introduction and literature review chapters, whereas the primary data was collected by the author of this report through in-depth interviews. As previously mentioned, the interviews consisted of open questions rather than closed ones.

3.5 Sampling

Robinson (2014) mentions a four-step approach or four-point method when conducting qualitative sampling: (i) define and establish a sample based on inclusion and/or exclusion criteria, (ii) decide on sample size, (iii) formulate strategy and specify categories of who is to be included in sample, and (iv) source sample by recruiting participants from desired population.

The method for the sampling was quota sampling, which allows the researcher to choose how many people with certain characteristics are to be included as participants in the study. Characteristics may include age and gender, but also profession or marital status (Mack et al., 2005). With this in mind, the idea behind the sampling was to keep some diversity in mind and sample different individuals within the same age group who all regularly use computers and/or smartphones. The primary criteria for this study were to achieve a 50/50 balance between genders, with four females and four males, and to gather participants within the Gen Y/Millennial generation.

The interviewees, named I-1 to I-8, were contacted via email in late April, and all of those contacted agreed to participate in the study. The average duration of the interviews was 59 minutes. The final distribution and information about the interviewees can be found in Table 1 below.

Table 1: Overview of interviewees and interviews

ID    Gender (M/F)    Age    Duration (min)    Transcribed words    Date
I-1   F               27     74                3301                 2021-05-02
I-2   M               24     72                4452                 2021-05-04
I-3   M               25     43                3468                 2021-05-07
I-4   M               27     62                4396                 2021-05-10
I-5   F               28     64                4128                 2021-05-12
I-6   F               27     55                4010                 2021-05-14
I-7   M               25     48                3587                 2021-05-15
I-8   F               25     53                3680                 2021-05-17

3.5.1 Ethical Aspects

To remain ethical and keep the interviewees’ privacy in mind, the interviewees were informed that they would be anonymous. Each interviewee was given an ID (Interviewee 1, Interviewee 2, and so on), abbreviated I-1 to I-8 as can be observed in Table 1 above. Anonymization was applied as the goal was for the interviewees to be as honest, open and personal as possible. The information about the interviewees (name, given study ID, and age) was put into an Excel spreadsheet, and the only person with access to the spreadsheet was the author. When the responses from each interviewee were broken down and keywords were extracted, the interviewee IDs were used. In the keyword summary table, no IDs were used; only the words and the corresponding concept.

3.6 Data Analysis

All interviews were recorded, and the transcription tool in Word was used to support the writing process. After the interviews were conducted, each interview was transcribed and translated from Swedish to English, as all interview subjects were Swedish speakers. Each interviewee received their own Word document into which their responses were dictated, and key sentences and keywords were put into an Excel document. The Excel document also contained a spreadsheet with each of the concepts, the questions and the corresponding responses.

Lastly, the spreadsheet was translated into a table where keywords, codes and an elaboration of each keyword were provided. This table can be found in Table 7 in the Empirical Data chapter.

The analysis was conducted through a thematic analysis of the interview material, as this is a common method for analyzing qualitative data (Caulfield, 2020). Caulfield (2020) also states that thematic analysis is a good method for investigating and extracting views, perceptions or opinions from, for example, interview transcripts.

3.7 Data Validation and Verification

There are four aspects which can be taken into account when establishing trustworthiness within qualitative research: credibility, transferability, dependability, and confirmability (Guba, 1981). Credibility refers to the process of the study, concerns how both the data collection and the analysis are carried out, and aims to assure that no important data has been ruled out (Graneheim & Lundman, 2004). Graneheim and Lundman (2004) state that one way of increasing credibility is through agreement from fellow researchers, colleagues or the informants themselves. To increase the credibility of this study, interviews were conducted with different, unrelated participants; the participants did not know who the other participants were and could not affect each other’s opinions.

Transferability refers to the extent to which the findings of this study can be applied to other contexts and situations (Graneheim & Lundman, 2004), and to the number of study objects; the more representative the sample, the more generalizable the results (Krippendorff, 2004). This study has used a mixed sample of participants, but to increase the transferability further, the sample could have been larger and even more diversified.

Dependability, also classified as stability, concerns how the data collected can and/or will change over time (Graneheim & Lundman, 2004). To increase the dependability of this study while decreasing the risk of the data changing, the data collection and the interviews were all conducted within an interval of 14 days. Lastly, confirmability refers to the researcher’s ability to demonstrate that the responses are objective, neutral and free from the researcher’s own bias (Polit & Beck, 2012). To increase confirmability, quotes which illustrate the different themes found in the interviews have been provided.

4. Empirical Data

This chapter presents the data from the qualitative interviews in order to answer the research questions. The quotes come from eight interviewees, listed as I-1 to I-8, and the average interview duration was 59 minutes. The purpose of this study was to investigate consumers’ perceptions of and attitudes towards algorithms and whether these affect their online behavior. This chapter is divided into the following subheadings based on the interview questions: Introduction, Algorithms, Autonomy, Paradox of Choice, and Privacy. The concept of Adaptive Behavior falls under the Algorithms category. The interview guide can be found in Appendix A.

4.1 Introductory and defining questions

As an introduction, each interviewee was asked to talk about their online consumption habits and how frequently they purchase online. The interviewees were asked to state whether most of their purchases are made online or in-store, and to talk about their safety measures.

Key themes in the introduction were: convenient, timesaving, comparing, touch and feel, and familiarity. The convenience and timesaving factors concerned shopping online, where quantity and sheer convenience were the major draws: multiple options can be found within a second, compared to having to go to different physical stores. The comparing factor was mentioned as online shopping allows for quicker price and item comparison than in real life. However, touching and feeling the items was also mentioned, as some interviewees like to touch and feel the item or material before purchasing something. Lastly, familiarity was mentioned as some stated that they are creatures of habit who continue to purchase products they already own; since they know what they want, they can access and purchase it easily.

In Table 2 below, key takeaways (KT’s) from each interviewee’s response are presented.


4.2 Algorithms

The second part of the interviews consisted of questions about algorithms. The interviewees were asked five main questions on their perception of and attitude towards algorithms and their function, their decision-making process when intending to purchase, their feelings towards algorithms during this process, and whether they had ever adapted their behavior to avoid certain outcomes.

Key themes within the algorithms concept were: scary, two sides, self-efficacy, careful, and risky. The scary theme stemmed from a feeling of uncertainty: not knowing what the data will be used for, who has access to it, or for how long it is stored. The two sides factor refers to the feeling that there are both benefits and drawbacks to algorithms, and that there is both a company side and a consumer side to them. Self-efficacy refers to the ability to choose for oneself, and careful refers to the careful behavior some interviewees have developed. Lastly, risky refers to the feeling that algorithmic processes are risky and that something could go wrong.

The KT’s from these questions are presented in Table 3 below.


Table 2: Summary and KT’s from introduction.

ID Consumption habits, frequency of online purchases, safety measures online.

I-1 “I shop most of my things online. It’s convenient and there are so many shopping and shipping options. It feels safer now during covid as well.” Shops online regularly, no incognito mode or VPN.

I-2 “I don’t shop too often, but I feel that the assortment online is much better. But it depends on the product, some things I need to make sure that they fit, but if I know that something is good then I buy it online.” No VPN, only incognito when streaming sports online or when booking trips.

I-3 “Most of my shopping is in stores, I buy online maybe 10 times a year” – “I buy in store because I can see and feel the product”; “If I have a product in mind then I'll go to a store that sells that product.” No incognito or VPN, only incognito when booking trips/flights.

I-4 “I prefer online consumption over going to stores and I do most of my shopping online. It allows me to compare prices, compare items, and I can do it whichever time that fits me.” Shops online regularly, no incognito or VPN.

I-5 “I buy everything online, clothes, beauty products, pharmacy stuff, electronics. Everything but food. It saves me time and I can compare prices easily.” Shops online often, no incognito or VPN (only when working).

I-6 “I tend to buy more things in stores than online as it gives me a better experience when I can try, touch, and see things IRL and ask for help and opinions from staff. My online shopping is limited to when I feel that there’s no other way for me to get a product.” Doesn’t shop often, no incognito or VPN.

I-7 “I probably do about 10% of my purchases online. I’m not a big fan of buying without feeling what I buy, I think that’s something very underestimated when buying things.”; “I don’t prevent people from tracking me, I’m an open book, and I don’t think there’s anything I could do to stop them.” Doesn’t shop often, mostly in-store, does nothing to prevent tracking.

I-8 “I shop most of my things online, it’s convenient and it allows me to compare the same item in different stores at different prices. I don’t buy unnecessary things but maybe shop online 1-2 times a month, I mostly buy things I already know of, like good brands and formulas which I know are good for me.” Shops regularly, sometimes uses incognito mode.
