
Ethical views on the influence of interactive systems


Academic year: 2021



Technology and Society, Computer and Information Science

Degree Project (Examensarbete)

15 credits, undergraduate level

Ethical views on the influence of interactive systems

Etiska åsikter om inflytande i interaktiva system

Mirella Liljekvist, Ellen Lindberg

Degree: Bachelor's degree, 180 credits
Supervisor: Thomas Perderson
Main field: Computer Science
Examiner: Dipak Surie
Programme: Information Architecture


Abstract

Humans have always influenced each other's behavior in different ways, and today's technological development offers greater opportunities to analyze large amounts of data and find patterns that were previously difficult to detect. With artificial intelligence, machine learning and data mining techniques, it is becoming ever easier for companies to tailor individual messages to users, both to improve the user experience in interactive systems and to influence users to change their behavior in specific situations. This became clear in 2018 when the Cambridge Analytica scandal became known to the public: data had been extracted to influence users in the 2016 US presidential election. The scandal made it clear that the technical possibilities of collecting, analyzing and using data were greater than previously known.

This study was conducted to examine users' ethical views on influence through interactive systems. It builds on previous research on influence, ethics in IT and today's digital data analysis techniques. Through a survey and interviews, users' views on influence in different situations and through different devices were examined. The study also examined whether views differ between users of interactive systems and developers of interactive systems.

The results of the study show that users do not distinguish between influence through different devices; rather, the situation in which the influence occurs determines the user's ethical view of it. Developers generally feel more positive about influence through interactive systems than users do, but the results still show that the situations play a greater role than the technical devices. The interviews confirmed the survey results. It became clear that users of interactive systems do not want to be influenced, but that many of them also feel aware enough of influence to choose not to be influenced.

Keywords: Influence, Interactive Systems, Artificial Intelligence, Machine Learning, Ethics in IT.


Sammanfattning

Humans have always, in varying forms, influenced each other toward changed behavior, and with today's technological development there are greater possibilities to analyze large amounts of data and find patterns that were previously hard for humans to reach. With artificial intelligence, machine learning and data mining techniques it is becoming ever easier for companies to tailor messages to individual users, partly to improve the user experience in interactive systems, but also, by extension, to influence users to change their behavior in selected situations. This became clear in 2018 when the Cambridge Analytica scandal became known to the public as a situation in which data had been extracted to influence users in the 2016 US presidential election. It then became clear that the technical possibilities for collecting, analyzing and using data are greater than we previously believed. To examine users' ethical attitudes to influence via interactive systems, this study was conducted. The study builds on previous research on human influence, ethics in IT and today's digital techniques for data analysis. Through a survey and interviews with actual users, users' attitudes to influence in different situations and through different technical devices were examined. The study also examined whether there is any difference in attitude between users of interactive systems and developers of interactive systems.

The results of the study show that users do not distinguish between influence through different devices; rather, it is the situations themselves that determine the user's ethical attitude to the influence. Developers rate themselves as generally more positive toward influence via interactive systems than users do, but the results still show that the situations of influence matter more than the technical devices. The interviews confirmed the survey results, while it also became evident that users of interactive systems do not want to be influenced, but that many of them feel aware enough to opt out of being influenced.

Keywords: Influence, Interactive Systems, Artificial Intelligence, Machine Learning, Ethics in IT.


Abbreviations

GDPR - General Data Protection Regulation

AI - Artificial Intelligence

IT - Information Technology

SPSS - Statistical Package for the Social Sciences

Definitions

IEEE (Institute of Electrical and Electronics Engineers) - A technical professional organization dedicated to advancing technology for the benefit of humanity.

Interactive System - A system in a device that interacts with the user, for example a computer system.


Table of contents

1.0 Introduction
1.1 Background
1.2 Research Questions
1.3 Method
2.0 Related work
2.1 Artificial intelligence and Data Analysis
2.2 Ethics for computer science
2.3 Interactive systems, influence and manipulation
2.3.1 Cambridge Analytica
2.3.2 Facebook
2.3.3 OKCupid
3.0 Study
3.1 Survey
3.2 Interviews
3.3 Ethical Considerations for the study
4.0 Results
4.1 Results questionnaires
4.2 Results of Interviews
5.0 Analysis
5.1 Analysis of survey
5.2 Analysis of interviews
5.3 Conclusions
7.0 Future Work
References
Appendix
Appendix A - Questionnaire
Appendix B - Interview questions
Appendix C - Results of questionnaire
Appendix D - Calculations


1.0 Introduction

Humans have been communicating, storing, retrieving and manipulating information since the dawn of the written word. From the library at Alexandria to today's IT systems, information has been, and still is, a way for us to move forward as a society, to learn and to evolve. Through the many interactive systems available to us today we are met by information in one form or another. We are in contact with devices and interactive systems in ways we might not even be aware of, and many interactive products are in everyday use without us even thinking about them: smartphones, tablets, remote controls, ATMs, coffee machines, computers, electric toothbrushes and so on. The prevalence of IT systems in our society is obvious, and they influence our decision-making processes in large as well as small decisions. For example, what to make for dinner might be influenced by an article on the meat industry seen on social media, or a decision to switch careers could be influenced by an ad seen on TV. But the fact that individual decisions can be influenced by an interactive system may be a problem with ethical implications. Ethics is concerned with whether something is good or bad for individuals and society, but the judgement of what is ethical can be highly individual as well as cultural and societal. It is through our common opinion as a society that we decide whether something is ethical or not.

Influencing or manipulating each other is something humans have always done in one way or another, from changing the way we do something by observing someone else, to changing our minds when someone makes a compelling argument. This is especially evident when it comes to politics and advertising, where many different mediums vie for our attention and try to sway us. Until the advent of the Internet and the many ways we as users can be reached today, advertisers and other parties had to rely on more analog means of reaching us. As new technology has been developed, these parties have gotten better at finding new ways to present information and to potentially influence users through algorithms running in devices. When and how these algorithms are used to present information is interesting to study from an ethical point of view, as these aspects become increasingly important when we can no longer detect that we have been influenced. How then do we as a society view the potential to influence, and to be influenced, through interactive systems? And do we approve of some kinds of influence more than others?


1.1 Background

In 2018 news broke around the world that the data of approximately 80 million users of the social platform Facebook had been collected without the knowledge and full consent of those users. The data had in turn been processed and used to influence the behavior of the users in the 2016 presidential election in the United States of America [18]. This had been done in part by the political consulting firm Cambridge Analytica, making the events known to the world as the "Cambridge Analytica scandal".

The methods used in the scandal were new, but the tricks were old. What then was the difference, and why were the reactions so strong? The methods used were digital and sparked a conversation not only about what we can do through digital means, but how we do it, and whether we should. The event made it obvious to the public in general that the new way to our hearts and minds might be through machine learning, algorithms and predictions. Laws and regulations are socially and politically constructed and therefore rarely represent a potentially objective truth [1]. The fact that something is doable and does not violate the law does not necessarily mean that it should be implemented. The Cambridge Analytica scandal is a recent example of the questionable development and use of Artificial Intelligence (AI) and machine learning. With AI and machine learning comes not only the question of what is legal, but also what is ethical. What is ethically justifiable to develop and implement? Or rather: what is not ethically justifiable to develop and implement, even though it is legal?

Whether something is ethical may differ based on which ethical theory one adheres to and on its placement in time and space. What was once considered ethical in ancient Roman culture might not be considered ethical in a modern society. There are several ethical principles that can be used as guidelines when judging whether something is ethical [1]. In deontological ethics, for example, laws and rules are observed and adhered to in moral judgment. From the deontological perspective, one refrains from stealing because it is a duty not to commit theft, while consequentialist theory is based on not stealing because theft has negative consequences for a society in the long run. Unlike in deontological ethics, in consequentialism it may be ethically justifiable to violate the law if the good consequences of the act outweigh the violation itself. What is ethical therefore varies between theories, situations, environments and over time. Apartheid, female suffrage and the death penalty are three examples where humans' ethical judgment changes over time as well as between cultures.

With regard to Cambridge Analytica and what is possible to develop and implement with AI algorithms and machine learning, an ethical approach to these techniques should be included in their development and usage. Now that companies have the opportunity to influence users, when do users feel that influence through interactive systems is ethically justifiable?

1.2 Research Questions

This study focuses on public opinions on the ethical aspects of influencing user behavior through interactive systems in devices. The research questions are as follows:


● When is it ethically sound for interactive systems to influence user behavior?
○ In what situations do people approve of being influenced?
○ Do people perceive a difference in influence between different interactive devices?
○ What, if any, is the difference in ethical views between people who only use interactive systems and people who also develop them?

1.3 Method

The study conducted a review of related work to understand the area of interest. To find out whether there is a general consensus on the ethics of influencing users through interactive systems, a survey was chosen as a method for the study; two rounds of pilot tests were conducted to assure the quality of the questionnaire used in the survey. The questionnaire was also reviewed by an expert in the field to validate the questions. Interviews with the general public were also chosen as a method, to broaden the perspective on the effects of influence. With the combination of these methods the study was able to collect quantitative as well as qualitative data to answer the research questions.

With observations and an experiment, the study could have aimed to test what choices users actually make in interactive systems, and to directly link these choices with the users' answers about influence. Due to the global Covid-19 pandemic these two methods were deemed unsuitable. A survey and interviews were chosen to get the thoughts and opinions of actual users, since the research questions concern users' ethical views on interactive systems.

2.0 Related work

To get a sense of previous research in the area the study conducted a review of related work. The purpose of the review was to gain a deeper understanding of research already done. Through the review a few different topics were found and explored.

2.1 Artificial intelligence and Data Analysis

Artificial Intelligence (AI) is the ability of computer programs and robots to imitate the natural intelligence of humans and animals [2], mostly through cognitive processes such as sequence planning, learning from previous experiences and generalizing [3]. Through AI algorithms, systems can dynamically learn from data mining processes and take the actions that have the best chance of accomplishing a specific goal. Digitalization and the collection of large amounts of data from the internet make data mining processes possible. These algorithms are developed to recognize patterns in data and to group different entities together in specific ways [4]. Classification and regression are two types of data mining processes: classification is used for grouping into categories, while regression estimates a numeric value for an event. Similarity matching uses known data to find similarities between entities and is the method behind product recommendations. Profiling is the method used to analyze user behavior.
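The similarity-matching idea behind product recommendations can be sketched in a few lines: represent each user as a vector of item interactions, find the most similar user, and suggest what they liked. The item names and like-vectors below are invented purely for illustration; real systems work on far larger, sparser data.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Similarity between two users' 0/1 interaction vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(target, others, item_names):
    """Recommend items the most similar user liked that the target has not."""
    best = max(others, key=lambda u: cosine_similarity(target, u))
    return [name for name, t, b in zip(item_names, target, best) if b and not t]

items = ["book", "film", "game", "album"]
alice = [1, 1, 0, 0]
bob   = [1, 1, 1, 0]   # overlaps most with alice
carol = [0, 0, 0, 1]
print(recommend(alice, [bob, carol], items))  # -> ['game']
```

A production recommender would combine this with the classification, regression and profiling processes described above, but the core step of matching entities by vector similarity is the same.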


Data mining is not about how to collect data, but about how to manage the analyses made from the collected data. Large amounts of data can be analyzed to make target groups more specific and advertising more accurate for each individual. There are both positives and negatives to this kind of data analysis. Researchers in the field write about the importance of having principles for AI, but also about the difficulties in creating and following them [5]. The important aspect of this is to discuss the tensions that can arise when different ethical principles contradict each other. Researchers believe that the basic principle is that AI should be created and used "for the common good"; the problem is that the perception of what is and is not good differs substantially between populations, which makes it too general a principle to follow in practice.

Data analysts fear a data economy driven by agendas hidden from public view, biases and human prejudice [6]. They find that it is one thing to use analytics to objectively extract patterns from big data, and another to manipulate the analysis of the data to superimpose patterns. Data analyst Cathy O'Neil writes that "many poisonous assumptions are camouflaged by math and go largely untested and unquestioned."

The principle is illustrated through examples of the misuse of, and overconfidence in, data analytics in the United States, where misuse has corrupted teacher evaluation models and recidivism and jurisdiction modeling in prisons, and has influenced questionable data-driven college rankings. O'Neil calls this the dark side of big data, where there is an overreliance on and misuse of data analysis.

2.2 Ethics for computer science

To guide employees in the field of computer science, ethical codes have been created, for example those by the Institute of Electrical and Electronics Engineers, IEEE [7]. These guidelines were created to help individuals in their profession and should be seen as a form of rules when facing ethical dilemmas, including:

"to improve the understanding by individuals and society of the capabilities and societal implications of conventional and emerging technologies, including intelligent systems".

In the context of the ethics of influence through interactive systems such as mobile phones and computers, it is important to explore whether guidelines such as those set by the IEEE are indeed universal. Ethical considerations for computer science as a field, for data, data processing and for artificial intelligence algorithms can vary depending on many things, not least cultural differences. African researchers, for example, discuss the pitfalls of computation and data and advocate for ways for AI to be used in a more socially acceptable and fair capacity [8]. From this perspective, the social contracts, theories and ideas of ethical conduct used today have been developed with a mainly European perspective in mind. The cultural differences between Europe and Africa are great enough to make these social contract theories and ethical frameworks difficult to apply. If research in this field does not incorporate cultural differences, the risk of perpetuating the same injustices that were witnessed in colonial times is high. The data and computational power of


European countries and northern civilizations is equated to the guns, weaponry, steel and immunity to disease of colonial times. Due to these cultural differences there is a potential danger in imposing ethical views on a culture that does not view things like privacy and technology in the same way. In Africa, privacy does not mean the same as in the Western world; it takes on a different meaning and sometimes is even non-existent.

Humans sometimes make decisions based on laws and sometimes just because something feels right. It is therefore of the utmost importance that we are aware of various ethical theories when we develop and use AI [99]. Researchers in the field consider three ethical theories: consequentialism, deontological ethics and virtue ethics. Consequentialism, as a theory, is based on the consequences of different alternatives, which are always weighed against each other. Within consequentialism, decisions in ethical issues are based on which alternative is most beneficial to most people. Deontological ethics, unlike consequentialism, is based on laws and regulations. In deontology one considers the duty of each individual; it assumes the inherent goodness or evil of an act, for example that it is wrong to lie. Lastly there is virtue ethics, which also differs from the two previous theories. Virtue ethics is based on the characteristics of the individual rather than the action, and on the type of person one wants to be. Being a good person, being kind to others and being generous are generally viewed as admirable traits. This is the kind of person one wants to be, which is the reasoning that forms the basis of virtue ethics. Virtue ethics is about long-term conduct, not one-time actions.

In deontological ethics, laws, rules and regulations are the driving force behind a way of conduct where actions can be right or wrong [1]. Part of the ethical considerations for technology, and for the way that technology can influence us, lies in the way laws around it are shaped. Some researchers propose an international regulatory agency to grapple with laws and ethical questions concerning technology [9]. They urge the swift creation of such an organization, as issues such as autonomous weapons, cryptocurrencies, autonomous vehicles and personalized political ad hacking are already a reality and are affecting international trade, politics and war. There is concern about a growing legal vacuum in all domains affected by technological progress, and even though many organizations and governing bodies are working on laws surrounding technology, these often stop at borders. Data has no borders, which makes the need for an international governing body imperative.

In 2018, a few months after the Cambridge Analytica scandal was revealed, the European Union launched the General Data Protection Regulation (GDPR). The scandal was not the catalyst for the creation of the GDPR, which had been years in the making, but it drove home the need to regulate the privacy and protection of personal data. The General Data Protection Regulation attempts to protect users and their data through three main points [28]:

“This Regulation lays down rules relating to the protection of natural persons with regard to the processing of personal data and rules relating to the free movement of personal data.


This Regulation protects fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data.

The free movement of personal data within the Union shall be neither restricted nor prohibited for reasons connected with the protection of natural persons with regard to the processing of personal data.”

The GDPR applies to any company or entity that processes data as part of its activities in the EU, or a company established outside of the EU that offers goods or services to, or monitors the behavior of, individuals in the EU [29]. Although the GDPR does not apply to individuals outside of the European Union, it attempts to do what researchers have been urging: to create a base on which the world can discuss deontological ethics with a practical application.

The deontological point of view on user influence is to ensure that the influence follows laws and regulations [10]. If the influence can be considered both lawful and a "good" act, it is permissible according to deontology. A theoretical example could be an application on a mobile phone that reminds the user to move once every hour to promote a healthy lifestyle. From a consequentialist point of view, user influence must be judged by the consequences of the influence. If the theoretical health application has a negative effect on users, it might not be permissible according to consequentialist ethics. Conversely, when viewed through the lens of virtue ethics, which deals with who we want to be as people, the application might again be permissible, as it promotes a long-term healthy lifestyle.

2.3 Interactive systems, influence and manipulation

According to a Swedish study, "Svenskarna och internet" ("The Swedes and the Internet"), 98% of Sweden's population had internet access in their homes in 2019 and 95% said that they use the internet [21]. They use the internet in many different aspects of everyday life, such as work, education, as an encyclopedia and in social interactions. The study researched many different aspects of Swedes' use of the internet and compared the results to research done in previous years.

The study found that nearly half of the participants were worried that large companies such as Facebook and Google could breach their privacy, an increase of 17% since 2015. It also found that only about 4 out of 10 users have had any education in source criticism, and that those who had were mainly younger users [21]. This implies that a large group of the Swedish population has difficulty discerning between reliable and unreliable sources.

2.3.1 Cambridge Analytica

There are many examples of influencing and manipulating behavior for political or commercial reasons. From old commercials that wanted to convince the public of the health benefits of smoking [11][12][13] to political


pandering to gain votes, attempts to influence the behavior of a group abound in modern society as well as in our history. In US elections such influence has been a recurring companion, from Elbridge Gerry's early redistricting efforts [14] to Richard Nixon's dirty tricks against politicians he distrusted [15] and more recent attempts to sway public opinion in the "Defeat Crooked Hillary" project [16], which was part of the Cambridge Analytica scandal.

The Cambridge Analytica scandal started with an app called "thisisyourdigitallife", developed by Aleksandr Kogan for his company Global Science Research (GSR). The application was a personality test that collected data from users who took the test on Facebook, but it also gave GSR access to data from the users' Facebook friends. The total number of Facebook profiles collected is said to be somewhere around 80 million [18]. The data was later shared by GSR with Cambridge Analytica via Aleksandr Kogan, who was also an employee of Cambridge Analytica. The analysis carried out on the collected data resulted in a detailed personality mapping of the users and eventually allowed Cambridge Analytica to categorize and find users who could be influenced. In 2016, Cambridge Analytica ran a campaign with Donald Trump, in which the target group comprised users who were uncertain in their political position and who therefore could possibly be influenced by a politically angled message [18]. The Cambridge Analytica scandal is a clear example of what can be accomplished when collected data is made available for analysis.

The technology used in the scandal was developed by researchers for a study conducted at Cambridge University [17], a year before Aleksandr Kogan collected the data which later led to the Cambridge Analytica scandal. The study was based on Facebook profiles and Facebook likes collected from 58,000 participants who also took a personality test. The results showed that it was possible to predict gender and ethnic origin with over 90% accuracy. In addition to these attributes, the study was able to predict everything from political affiliation (85%) and sexual orientation (93% / 95%) to drug and alcohol consumption (65% / 70%) and whether the participants' parents were still together when the participant was 21 years old (60%). What is remarkable about this study is not the exact percentage predicted in each area, but that so much accurate information about the participants could be predicted from the data collected from their Facebook accounts. The researchers could see the potential pitfalls of this result and included a warning and a hope:

“It is our hope, however, that the trust and goodwill among parties interacting in the digital environment can be maintained by providing users with transparency and control over their information, leading to an individually controlled balance between the promises and perils of the Digital Age."

One year later, Kogan conducted the data collection through his application on Facebook; two years after that, the collected data formed the basis for a political influence campaign aimed at American citizens before the presidential election that led to Donald Trump's presidency.
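As a rough illustration of how traits can be predicted from binary like-data, a simple logistic regression can be fitted to a matrix where each column marks whether a user liked a given page. Everything below (pages, users, the trait label) is invented for illustration, and this bare-bones gradient-descent fit stands in for the far larger models used on real like-matrices, which are typically dimensionality-reduced before prediction.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    """Fit logistic-regression weights by gradient descent on log-loss.

    X: rows of 0/1 like-indicators; y: 0/1 trait labels.
    """
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            g = p - yi                        # gradient of log-loss w.r.t. z
            b -= lr * g
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, x):
    """Return True if the predicted probability of the trait is >= 0.5."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z)) >= 0.5

# Toy data: columns are three hypothetical pages; the label is a
# hypothetical trait that happens to correlate with liking page 0.
X = [[1, 0, 1], [1, 0, 0], [0, 1, 1], [0, 1, 0]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
print(predict(w, b, [1, 0, 1]))
```

The point mirrors the study's finding: once many such binary signals are available per person, even a simple linear model can recover surprisingly personal attributes.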


When applying the ethical principles of deontology, consequentialism and virtue ethics to the Cambridge Analytica scandal, different views emerge. Some aspects of the scandal could, from a deontological view, be interpreted as wrong. Chief amongst these was the fact that Facebook had allowed their users' data to be used in the way that it had. Under the European Union's General Data Protection Regulation (GDPR), Facebook may have broken the regulations by the way it mishandled its users' data. At the writing of this study, the UK Information Commissioner's Office (ICO) has handed Facebook a fine of £500 000, which is still being appealed by Facebook [31]. Cambridge Analytica's parent company, Strategic Communication Laboratories (SCL), was fined £15 000 for not handing over all data requested by journalist David Carroll [27]. According to deontology, therefore, Cambridge Analytica acted unethically as it did not follow the law. Consequentialist ethics would view the Cambridge Analytica scandal a bit differently. As consequentialism is concerned with the consequences of an act, one would have to look at the fallout of the scandal. This leaves much up to individual interpretation, as some might view the election of Donald Trump as a good thing and others might not. Consequentialism is therefore difficult to apply in this case. Virtue ethics, in turn, is based on persons instead of actions. In the case of the Cambridge Analytica scandal one would have to look at the persons behind the actions. Was, for example, Aleksandr Kogan wrong to use the methods researchers at Cambridge University had developed for commercial and political reasons? Do you want to be a person who makes influencing people possible? Again, this depends on an individual's point of view, which makes virtue ethics complex to apply.

An example of the complexity of the public's opinion on right or wrong can be seen in a study conducted in 2018 where researchers collected tweets from the social media application Twitter in both Spanish and English [18]. The researchers wanted to compare how people who spoke different languages reacted to data privacy breaches. The study found that perspectives on data security vary greatly between cultures and countries. Belgians, for example, reported less concern over sensitive information leakage than respondents in other countries in the study, including the US. The study postulates that this is because of differences in privacy laws between countries. English speakers in the study more often blamed governments and organizations for breaches of data privacy, whereas Spanish speakers tended to blame the individual users who had not been more careful in protecting their data. The difference in opinions on responsibility was hypothesized to stem from a cultural dimension defined by Hofstede known as "power distance": "the extent to which the less powerful members of institutions and organizations within a country expect and accept that power is distributed unequally" [19]. People in countries with a high level of power distance tend to accept hierarchies of power without questioning authority. Anglo and Nordic countries are distinguished by a low level of power distance, while the opposite was found in Latin American countries, which often score high in power distance. Culture, therefore, and the way we view power and responsibility, can have a great impact on the public's stance on the ethics of influence.

2.3.2 Facebook

In 2012 Facebook conducted an experiment where the data of approximately 690,000 users was collected and analyzed to study how emotional states can


spread through social networks [20]. The study divided the users into two groups, one positive and one negative, and exposed them to their friends' emotional content through their News Feed. The positive group was shown more positive content and less negative content, while the negative group was shown more negative content and less positive content. The findings indicated that the positive group tended to post more positive things themselves and the negative group posted more negative things. As the emotional state of the users was influenced, one can argue that this was manipulation, and that experiments of this kind illustrate both the risks and the potential of research through social media. This was a deeper and more advanced form of behavioral experimentation and intentional deception, as the programming code had been altered to manipulate information without forewarning. This is not the only example of content manipulation done by Facebook. In 2010 another experiment was conducted, with startling results. The experiment took place during the 2010 US congressional elections, where on Election Day Facebook researchers randomly divided 61 million users into three groups: one group received a social message, one an informational message, and one was a control group. Both message groups were presented with a non-political message encouraging them to vote. At the top of their News Feed, the group with the social message was shown a statement that encouraged them to vote, a link to local polling places and an "I Voted" button. With the button they could see a counter of how many other Facebook users had clicked it, along with six randomly selected profile pictures of the user's Facebook friends who had also clicked the button. The group that received the informational message was shown the same message, the poll information, the button and the counter, but no pictures of their Facebook friends. The control group did not receive any message at all. The results showed that the group that received the social message was more likely to vote than the users in the two other groups. This experiment provided evidence that there is a connection between online behavior and behavior in real life.

The research conducted by Facebook on its users was not done to outright manipulate them, but to help Facebook understand its users better and to create a better user experience. At the time of the emotion experiment, Facebook's data use policy did not explicitly allow the company to use its users' data for research purposes. This was amended a few months after the experiment, when the new data use policy was updated to include research for "internal purposes" [20]. The ethical implications of experimenting on your users are, as with the Cambridge Analytica example, difficult to assess. After the amendment of Facebook's data use policy, the experiments could be viewed as ethical according to deontological ethics, as they did not break any laws or regulations, as long as they were approved by the users through Facebook's user consent form. According to consequentialist ethics one would have to weigh the consequences of the experiments: if they gave the users a better user experience and generally made their lives better, they could be said to adhere to this principle. Virtue ethics would also be difficult to apply in this situation, as this principle is concerned with who we want to be as a society. Do we want a society where companies experiment on their users, even if it is for an outwardly good cause? These early Facebook experiments give a clear indication that these conversations are important, far-reaching in time, and can potentially have an impact not only on individuals and society, but on general policies for other social media platforms.


2.3.3 OKCupid

Other online companies have also conducted experiments on unsuspecting users. The online dating website OKCupid conducted its experiments at about the same time as the Facebook News Feed experiments [25]. In one experiment OKCupid altered the compatibility percentage that its matching algorithm automatically provides, to suggest that users were a better or worse match than their actual score indicated. The experiment took pairs of users with a poor matching score, a 30% compatibility level, and changed their score to 90%, suggesting that they were a good match for each other. The experiment showed that these users sent more first messages to potential matches when their compatibility score was higher. They were also more engaged in conversations, exchanging four messages or more. After the experiment was concluded, the affected users were notified of the mismatches. OKCupid's intention with the experiment was to find out whether its matching algorithm worked purely by the power of suggestion or whether it could in fact predict real compatibility. By deliberately mismatching users, OKCupid could test its algorithm on the unsuspecting users and study their reactions. The users' reactions to being influenced in this way were not taken into account, but could potentially have had life-altering consequences. Although the users were not aware of the experiment, as with the early Facebook experiments, the intent behind it was not malicious. On the contrary, the idea behind the experiment was to find out whether OKCupid's matching algorithm really was helpful in finding a partner. From a consequentialist ethical view, this seems like a good thing, and as OKCupid found through the experiments, its matching algorithm really was accurate and helpful. But in most instances, both for Facebook and for OKCupid, better knowledge of users' online behavior is a by-product of research conducted for a company's own benefit. This means that those who benefit the most from the research, through increased revenues, are the companies conducting the experiments, while the highest potential risks fall to the users [20]. The question of consequence is therefore a difficult one to unravel. According to deontological principles, OKCupid has done nothing wrong, as its users, just as in the case of Facebook, have signed a consent form. Again, the same questions arise surrounding virtue ethics. Do we as a society approve of this? Are we okay with being influenced in this way if the reasons behind it are objectively "for a good cause"? And for whom are they good?

Taking part in and being influenced by interactive systems in devices such as computers, tablets and smartphones is an everyday occurrence in our society. Therefore, it is important to be aware of the how, the why and the who behind the influence and to decide whether the way we use this technology aligns with the way we wish our society to be. As our review of related work has shown, it is possible to influence through interactive systems, and there are several examples of cases where this has happened. It has also shown that questions regarding ethics are complex and often depend on people's opinions.


3.0 Study

To find out about people's opinions on influence from interactive systems, the study comprised two data collection methods: a questionnaire and interviews. The combination of these methods provided quantitative as well as qualitative data.

3.1 Survey

The first data collection method used in the study was a survey through a questionnaire. The survey was conducted with the purpose of gaining a deeper understanding of the ethical views of the general public as well as of two different groups, "users" and "developers", and of answering the research questions with quantitative data. A user in this study was defined as a person who uses interactive systems but does not develop them. A developer was defined as a person who develops interactive systems, either for work or out of interest. A developer is also, by definition, a user of interactive systems, but a user is not always a developer. The purpose of the division between these groups was to see if there were any differences in ethical views between the two groups and what those differences were.

The questionnaire mixed open questions with closed questions answered on a Likert scale [22]. The open questions let the participants decide freely how to answer, while the closed questions gave a range of answers to choose from [23]. The questions covered five different situations and four different devices. The situations chosen were being influenced when purchasing a product, when choosing a travel destination, in health-related situations, in forming a political opinion and in altering an opinion on a current piece of news. The situations were chosen to reflect a scale of severity, where the study deemed purchasing a product to be a situation of potentially less consequence than altering an opinion on news or politics. The four devices were chosen to give a variety of technologies through which one could be influenced: smartphone, computer, smartwatch and smart speaker. Smartphone and computer were chosen because most participants were likely to have experience of using them. Smartwatch and smart speaker were deemed less likely to have been used by all participants, but added a challenge to participants' thinking when imagining influence through different devices.

To ensure the validity of the questionnaire, it was evaluated by the study's supervisor and two rounds of pilot tests were conducted. The first pilot questionnaire was sent to ten participants, the second to six. After adjustments, the finished questionnaire was distributed using the self-selection sampling method. This method is described as one where "researchers advertise their interest in a particular topic and their need for respondents and collect data from anyone who responds" [23]. The minimum number of responses for the study was set at 30 participants, a good rule of thumb for small-scale studies according to Oates [23]. With a total of 159 participants, a proper analysis could be done.


The questionnaire was shared via the social media platform Facebook to reach a wide variety of participants of different ages and backgrounds, and a total of 159 participants completed it. The questionnaire consisted of 16 questions, the first of which aimed to divide the participants into one of the two groups, users or developers.

The results of the questionnaire were analyzed to find patterns, similarities and differences. The results from the two different groups were compared. The full questionnaire can be found in Appendix A.

3.2 Interviews

The second data collection method used in the study was a series of semi-structured interviews. Semi-structured interviews were chosen because they allow the researcher to have a set of open-ended questions that can then be followed by clarifying questions [30]. As the subject matter of this study is based around the perceptions of the participants, a more free-flowing type of interview was deemed better suited. Had the interviews been fully structured, the study would not have been able to gain the deeper insights it needed to answer the research questions. The questions in the interviews were the same as in the questionnaire and were used to gain a deeper and broader understanding of the reasoning behind participants' positions on user influence in interactive systems, and to answer the research questions through qualitative data. The study conducted a total of five interviews with five participants: three users of interactive systems and two developers of interactive systems. The number of participants was chosen according to recommendations by Saunders [36] of between 5 and 25 samples. The participants were of different ages, technical backgrounds and genders. The results of the interviews were analyzed through thematic analysis to find similarities, differences and new insights. The interviews were conducted via telephone due to the restrictions of the global COVID-19 pandemic.

3.3 Ethical Considerations for the study

There are several ethical considerations concerning research. Research is an important part of today's society and as such must be appropriately adapted to the evolving society around us. Ethically conducted research needs to maintain the professional integrity of the researchers. Various interests and values must be accounted for and balanced, one of which is the pursuit of knowledge. Another is the research subjects' integrity and their protection against various risks and forms of harm. The protection of integrity-sensitive material collected during research and questions surrounding the ownership of the material are also important [24]. This study aimed to take these interests into consideration and to follow the rules and regulations of the General Data Protection Regulation (GDPR) [25]. The questionnaire was sent out via the social media platform Facebook to ensure that participation was completely voluntary. The data collected from the questionnaire was anonymous and in compliance with the GDPR. The interviews were conducted via telephone and were recorded with the participants' permission. The recordings were saved while the thesis was in progress and deleted immediately after.


4.0 Results

This section presents a summary of the results from the study's research methods. The methods chosen provided the study with qualitative as well as quantitative data, and the results are presented in two sections, one for the questionnaire and one for the interviews.

4.1 Results questionnaires

Two pilot tests were conducted to confirm that the participants understood the questions correctly and to find out whether anything had been missed that could be of interest to the study. Participants for the pilot tests were selected at random, and no personal information about them was gathered. The participants were presented with eleven questions from the questionnaire and three questions about the questionnaire to evaluate its quality.

In the first pilot test the Swedish word "påverkan" was used. In Swedish there are several words for influence, where "påverkan" has a more negative tone than the word "inflytande", which is more neutral. The results of the first pilot test revealed that the participants interpreted the word "påverkan" as a directly negative one, so the decision was made to use the word "inflytande" in the second pilot test. Participants in both pilot tests found it somewhat difficult to understand the concept of influence, and as a result, examples of influence and situations were added to the finished questionnaire.

The results of the final questionnaire were as follows:

S1 - Q1: Select your age group

Figure 1.

Figure 1 shows that 35,2% of participants were in the age group 26-35, which was the largest group. 23,3% were in the age group 36-45, 20,8% were under 25 years of age and 15,1% were 46-55 years old. 3,1% were 56-65 and 1,9% were over 76 years old. The smallest group was the 66-75-year-olds, at 0,6%.


S1 - Q2: Select your gender identity

Figure 2.

Figure 2 shows that 81,1% of the participants were women, 18,2% were men and 0,6% were non-binary.

S1 - Q3: Are you a user of interactive systems or a developer of interactive systems?

Figure 3.

Figure 3 shows that 82,4% of the participants consider themselves to be users of interactive systems, and 17,5% consider themselves to be developers (and users) of interactive systems.


S2 - Q1: How well informed do you consider yourself to be in regard to your personal integrity online?

Figure 4.

Figure 4 shows how informed the participants consider themselves to be about their personal integrity online, on a scale from 1-10 where 1=Not at all and 10=Very well. The figure shows that 3,1% rate themselves as a 1, 3,1% as a 2, 5,7% as a 3, 9,4% as a 4, 14,5% as a 5, 17,6% as a 6, 12,6% as a 7, 20,1% as an 8, 6,9% as a 9 and 6,9% as a 10. The most common rating was thus an 8.

S2 - Q2: How critical of a source’s reliability do you consider yourself to be?

Figure 5.

Figure 5 shows how critical of a source’s reliability the participants consider themselves to be on a scale from 1-10 where 1=Not at all and 10=Very well.


The figure shows that 0,6% rate themselves as a 4, 3,1% as a 5, 4,4% as a 6, 11,3% as a 7, 30,8% as an 8, 28,9% as a 9 and 20,8% as a 10. No participants rated themselves as a 1, 2 or 3.

S3 - Q1: How okay are you with allowing an interactive system to have an influence in your decision to buy a product?

Figure 6.

Figure 6 shows how okay the participants were with allowing an interactive system to influence them in their decision to buy a product, on a scale from 1-10 where 1=Not at all okay and 10=Completely okay. The figure shows that 15,7% rated themselves as a 1, 4,4% as a 2, 13,2% as a 3, 10,7% as a 4, 16,4% as a 5, 11,3% as a 6, 10,7% as a 7, 10,1% as an 8, 3,1% as a 9 and 4,4% as a 10. The largest groups were the 5 (16,4%) and the 1 (15,7%).


S3 - Q2: How okay are you with allowing the following interactive systems to have an influence in your decision to buy a product?

Figure 7.

Figure 7 shows how okay the participants were with allowing different interactive systems to have an influence in their decision to buy a product on a scale where 1=Not at all and 5=Completely okay, with the additional option to select “Don’t know”. For phones 24,5% selected a 1, 24,5% selected a 2, 22,6% selected a 3, 22,0% selected a 4 and 5,7% selected a 5. 0,6% selected “Don’t know”. For computers 23,3% selected a 1, 25,8% selected a 2, 22,6% selected a 3, 20,1% selected a 4 and 7,5% selected a 5. 0,6% selected “Don’t know”. For smartwatches 45,9% selected a 1, 20,8% selected a 2, 13,2% selected a 3, 9,4% selected a 4 and 3,1% selected a 5. 7,5% selected “Don’t know”. For smart speakers 50,9% selected a 1, 22,0% selected a 2, 11,9% selected a 3, 6,3% selected a 4 and 1,9% selected a 5. 6,9% selected “Don’t know”.

S3 - Q3: How okay are you with allowing an interactive system to have an influence in your travel destination decision?

Figure 8.

Figure 8 shows how okay the participants were with allowing an interactive system to have an influence in their travel destination decisions on a scale from 1-10 where 1=Not at all okay and 10=Completely okay. 17,6% rated themselves as a 1, 10,1% as a 2, 15,7% as a 3, 4,4% as a 4, 7,5% as a 5, 6,9% as a 6, 17,6% as a 7, 11,3% as an 8, 5,0% as a 9 and 3,8% as a 10.

S3 - Q4: How okay are you with allowing the following interactive systems to have an influence in your travel destination decision?

Figure 9.

Figure 9 shows how okay the participants were with allowing different interactive systems to have an influence in their travel destination decision on a scale where 1=Not at all and 5=Completely okay, with the additional option to select “Don’t know”. For phones 28,3% selected a 1, 23,3% a 2, 22,0% a 3, 17,6% a 4, 8,2% a 5 and 0,6% selected “Don’t know”. For computers 28,3% selected a 1, 24,5% a 2, 20,8% a 3, 17,6% a 4, 8,2% a 5 and 0,6% selected “Don’t know”. For smartwatches 56,0% selected a 1, 20,1% selected a 2, 10,1% selected a 3, 3,8% selected a 4 and 2,5% selected a 5. 7,5% selected “Don’t know”. For smart speakers 58,5% selected a 1, 20,1% selected a 2, 8,8% selected a 3, 3,8% selected a 4 and 1,9% selected a 5. 6,9% selected “Don’t know”.


S3 - Q5: How okay are you with allowing an interactive system to have an influence in your decisions around health?

Figure 10.

Figure 10 shows how okay participants were with allowing an interactive system to have an influence in their decisions around health on a scale from 1-10 where 1=Not at all okay and 10=Completely okay. 14,5% rated themselves as a 1, 10,1% as a 2, 8,2% as a 3, 7,5% as a 4, 15,1% as a 5, 6,3% as a 6, 11,9% as a 7, 12,6% as an 8, 5,7% as a 9 and 8,2% as a 10.

S3 - Q6: How okay are you with allowing the following interactive systems to have an influence in your decisions around health?

Figure 11.

Figure 11 shows how okay the participants were with allowing different interactive systems to have an influence in their decisions around health on a scale where 1=Not at all and 5=Completely okay, with the additional option to select “Don’t know”. For phones 28,3% selected a 1, 20,8% a 2, 18,9% a 3, 24,5% a 4, 6,9% a 5 and 0,6% selected “Don’t know”. For computers 36,5% selected a 1, 25,2% a 2, 17,0% a 3, 16,4% a 4, 3,8% a 5 and 1,3% selected “Don’t know”. For smartwatches 25,2% selected a 1, 17,0% selected a 2, 14,5% selected a 3, 20,8% selected a 4 and 17,0% selected a 5. 5,7% selected “Don’t know”. For smart speakers 56,6% selected a 1, 20,8% selected a 2, 9,4% selected a 3, 5,7% selected a 4 and 0,6% selected a 5. 6,9% selected “Don’t know”.

S3 - Q7: How okay are you with allowing an interactive system to have an influence on your opinions in politics?

Figure 12.

Figure 12 shows how okay participants were with allowing an interactive system to have an influence on their opinions in politics on a scale from 1-10 where 1=Not at all okay and 10=Completely okay. 58,5% rated themselves as a 1, 9,4% as a 2, 10,1% as a 3, 5,7% as a 4, 5,0% as a 5, 5,7% as a 6, 1,3% as a 7, 1,3% as an 8, 1,3% as a 9 and 1,9% as a 10.


S3 - Q8: How okay are you with allowing the following interactive systems to have an influence on your opinions in politics?

Figure 13.

Figure 13 shows how okay the participants were with allowing different interactive systems to have an influence in their opinions in politics on a scale where 1=Not at all and 5=Completely okay, with the additional option to select “Don’t know”. For phones 69,8% selected a 1, 15,1% a 2, 8,8% a 3, 3,8% a 4, 0,6% a 5 and 1,9% selected “Don’t know”. For computers 66,7% selected a 1, 17,6% a 2, 8,8% a 3, 4,4% a 4, 0,6% a 5 and 1,9% selected “Don’t know”. For smartwatches 82,4% selected a 1, 6,9% selected a 2, 3,8% selected a 3, 1,9% selected a 4 and 0% selected a 5. 5,0% selected “Don’t know”. For smart speakers 80,5% selected a 1, 7,5% selected a 2, 3,1% selected a 3, 2,5% selected a 4 and 0% selected a 5. 6,3% selected “Don’t know”.


S3 - Q9: How okay are you with allowing an interactive system to have an influence in altering your opinion around a current article in the news?

Figure 14.

Figure 14 shows how okay participants were with allowing an interactive system to have an influence in altering their opinion around a current article in the news on a scale from 1-10 where 1=Not at all okay and 10=Completely okay. 29,6% rated themselves as a 1, 11,3% as a 2, 11,9% as a 3, 6,3% as a 4, 15,7% as a 5, 5,7% as a 6, 4,4% as a 7, 7,5% as an 8, 2,5% as a 9 and 5,0% as a 10.

S3 - Q10: How okay are you with allowing the following interactive systems to have an influence in altering your opinion around a current article in the news?

Figure 15.

Figure 15 shows how okay the participants were with allowing different interactive systems to have an influence in altering their opinion around a current article in the news on a scale where 1=Not at all and 5=Completely okay, with the additional option to select “Don’t know”. For phones 42,8% selected a 1, 20,8% a 2, 17,6% a 3, 11,3% a 4, 5,7% a 5 and 1,9% selected “Don’t know”. For computers 44,7% selected a 1, 20,1% a 2, 15,7% a 3, 11,9% a 4, 5,7% a 5 and 1,9% selected “Don’t know”. For smartwatches 74,2% selected a 1, 9,4% selected a 2, 5,0% selected a 3, 4,4% selected a 4 and 0,6% selected a 5. 6,3% selected “Don’t know”. For smart speakers 71,1% selected a 1, 10,7% selected a 2, 5,7% selected a 3, 4,4% selected a 4 and 1,3% selected a 5. 6,9% selected “Don’t know”.

S4 - Q1: Do you have any other thoughts or comments regarding influence in interactive systems that the previous questions have not covered? Please elaborate!

Figure 16.

Figure 16 shows a sample of comments from the participants. Several of the participants expressed concern about the influence of interactive systems and the “Big Brother is watching you” phenomenon. Some participants were positive toward interactive systems giving them suggestions, but still did not want their choices influenced without their knowledge. Some participants had a hard time understanding the concept of influence, and many interpreted it as something negative.


4.2 Results of Interviews

All five participants claim to be well aware of their personal integrity on the Internet (Q1). Three of the participants mention GDPR in connection with their personal integrity and express that data collection has become more transparent since the introduction of GDPR. Three of the participants also expressed a change in their perception of their personal privacy on the Internet, where two of them mentioned that GDPR made it clearer what counts as personal data. Participant 3 put it as: “Before GDPR we cared more about social security numbers and telephone numbers. But I did not think about that my email or my IP-number also counts as personal information”. With regard to source criticism (Q2), all five participants respond that they are partly aware of source criticism, but four of them express that they feel they should be more critical of sources than they are.

When asked if participants notice that interactive systems have an influence on them (Q3), four of the participants respond that they clearly notice advertising/marketing of products. Participant 2 describes a recently experienced situation, where they ordered shoes online and two days later received advertising for the shoe brand on their Facebook page.

No participants minded interactive systems having an influence on their choice to purchase a product (Q4). Four of the participants said that this is because they feel that they still have the choice of purchasing the product or not. Participant 1 expressed this as “I choose if I want to”. The participants have the same attitude toward influence on the choice of travel destination (Q5). Participant 5 feels less okay with being influenced in the choice of travel destination than in purchasing products “because travel destinations feel more personal”, while the remaining four participants express that they experience the influence on travel destinations as fine, since they feel like they always have a choice. The participants’ attitudes towards influence via smartwatches (Q6) vary. One of the participants has no opinion, as they have no experience of a smartwatch, but two participants see the influence of a smartwatch as a positive thing and consider the smartwatch to be like a self-chosen health coach.

Four out of five participants are critical of the fact that interactive systems can influence political issues (Q7). Participant 5 said that "It feels like it becomes personal, and it affects our lives" and participant 1 said that they could see how political influence could be very serious and dangerous and referred to the political influence of the Nazi party on the German people during World War II.

When the participants are asked whether they feel that there is a difference in influence between different interactive systems, the responses vary (Q9). One participant expresses that they feel fine with being influenced through their smartwatch because they themselves have chosen the product to get information through. Participant 1 expresses that they do not want advertising at all, neither in regular mail nor in their interactive systems, and participant 4 believes that influence through images, such as those on a computer or smartphone, has a greater impact because more senses are affected.


5.0 Analysis

The results of the data collected from the survey and the interviews were analyzed to find patterns, correlations and statistical significance. The analysis was divided into two parts, one for the survey and one for the interviews, and different methods were used for each. The survey was analyzed with the statistical data analysis software SPSS (Statistical Package for the Social Sciences). The interviews were analyzed with thematic analysis to identify patterns in the qualitative data.

5.1 Analysis of survey

To analyze the results of the survey, based on the research questions, the study used the statistical data analysis software SPSS (Statistical Package for the Social Sciences). The study analyzed the results to find patterns, correlations and the statistical significance of the results. The total number of participants in the study was 159. The results were analyzed with univariate and bivariate analysis.

The first analysis conducted was an analysis of the population.

Figure 17.

Figure 18.

Figures 17 and 18 show the total number of participants in the study, and how many of these were categorized as users of interactive systems and as developers of interactive systems, where “Användare av interaktiva system” = users and “Utvecklare (och användare) av interaktiva system” = developers. 82,4% of the participants were users of interactive systems, and 17,6% were developers and users of interactive systems. The majority of the participants, 35,2%, were between the ages of 26-35. The smallest age group was the 66-75 group, which consisted of only 0,6%.

To answer the first research question, “In what situations do people approve of being influenced?”, the study compared the means of the different situations for influence: product, travel, health, politics and news. The answer to this question was found in survey questions 1, 3, 5, 7 and 9, where the participants selected an answer on a scale from 1-10, where 1=not okay and 10=completely okay.

The comparison of the means showed that on average the participants were more accepting of being influenced in situations surrounding purchasing a product, choosing a travel destination and health related influence than in situations with politics and news. All results were low on a scale of 1-10, where the highest mean was for health at 5.19 and the lowest was for politics at 2.38, see figure 19.

Figure 19.

When analyzing the statistical significance of these results, the study found that in the situations concerning products, travel and health the differences were not statistically significant, whereas in the situations concerning politics and news they were. The analysis was conducted by performing a one-way ANOVA test, with robust tests of the equality of means (Welch and Brown-Forsythe) to ensure the statistical significance, see figures 20 and 21.
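The one-way ANOVA used here compares variance between groups to variance within groups. As an illustrative sketch only, the F statistic that SPSS reports can be computed by hand; the two groups of Likert-style ratings below are invented for the example and are not the study's data.

```python
# Hypothetical example: one-way ANOVA F statistic computed by hand on
# invented 1-10 ratings for two situations ("politics" vs "health").
# The study's real analysis was run in SPSS on the full questionnaire data.

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of samples."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: how far group means sit from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

politics = [1, 2, 1, 3, 2, 1, 4, 1]   # invented ratings
health   = [5, 6, 4, 7, 5, 6, 3, 5]   # invented ratings
print(round(one_way_anova_f([politics, health]), 2))
```

A large F relative to the critical value of the F distribution (here with 1 and 14 degrees of freedom) is what SPSS translates into the significance values reported in the tables. The classic ANOVA assumes equal group variances; the Welch and Brown-Forsythe variants relax that assumption, which is why they are used as robustness checks.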

Figure 20.

Figure 21.

Through bivariate analysis of the correlations between the different situations, the study found a statistically significant correlation between all of the situations. This analysis was conducted to find out whether the differences in the scoring of the situations were accidental or not, as other analyses indicated the importance of situation over device.
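The bivariate correlations SPSS reports here are Pearson correlation coefficients, i.e. the covariance of two paired samples normalized by their standard deviations. A minimal sketch, with invented paired ratings standing in for participants' scores on two situations (not the study's data):

```python
# Hypothetical example: Pearson's r between two sets of paired ratings,
# the coefficient reported in a bivariate correlation analysis.
# The ratings below are invented for illustration.

def pearson_r(xs, ys):
    """Return Pearson's correlation coefficient for two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance numerator and the two variance terms for normalization
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

news     = [1, 3, 2, 5, 4, 2, 1, 3]   # invented ratings for "news"
politics = [1, 2, 2, 4, 5, 1, 1, 2]   # invented ratings for "politics"
print(round(pearson_r(news, politics), 2))
```

An r close to 1 (or -1) means participants who scored one situation high tended to score the other high (or low) as well, which is the pattern the study found across all situation pairs.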


Figure 22.

To answer the second research question, “Do people perceive a difference in influence between different interactive devices?”, the study compared the means of the participants’ opinions of influence through different devices. The devices in the study were telephone, computer, smartwatch and speaker, and could be rated on a scale from 1-5 where 1=not at all ok and 5=completely ok. The study found that the means for all devices were relatively low, none above an average score of 2.24. The lowest average score was found for speakers, at 1.43.

Figure 23.

Figure 23 shows the average means of all the devices. The means indicate that participants were overall more accepting of being influenced through a telephone and a computer than through a smartwatch or a speaker.

The study then compared the average means through a one-way ANOVA test to ensure that the differences between them were statistically significant.


Figure 24.

Figure 24 shows the statistical significance of the means of each device. The study found that the differences were statistically significant for all devices except between computer and phone. This implies that although computer and telephone got the highest mean scores, this result could be accidental, whereas the scores for the other devices could not.

The study also analyzed the correlations between situation, device and score, to determine whether the situation affected the participants’ scores for the different devices.


Figure 25.

Figure 25 shows the correlations between situation, device and score. The study found a statistically significant correlation between score and situation and between score and device, but not between situation and device. This indicates that the scores for the devices were not dependent on the situations in which they were used.

To answer the third research question, “What is, if there is, the difference in ethical views between people who only use interactive systems and people who also develop them?”, the study divided the participants into two groups: users of interactive systems and developers of interactive systems. The participants were asked to rate their knowledge of personal data integrity and source criticism on a scale from 1-10 where 1=not at all and 10=very well. The study also analyzed the differences in the answers to research questions 1 and 2. First, a comparison of means was conducted, where group 1=users of interactive systems and group 2=developers (and users) of interactive systems. The first set of questions that the study compared were the questions concerning knowledge of personal data integrity and source criticism.


Figure 26.

Figure 26 shows that the means of users (1) were lower than those of developers (2) on both questions, although the difference on the question on source criticism was very small, 8.36 for users and 8.46 for developers. Developers rated themselves higher on the question concerning personal data integrity, at an average of 7.32 against the users’ 5.99.

The study then compared the average means through a one-way ANOVA test to determine whether the differences were statistically significant. The comparison showed that the differences were statistically significant for the personal integrity question, but not for the source criticism question, as shown in figure 27.
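With exactly two groups, a one-way ANOVA is equivalent to a pooled two-sample t-test: the F statistic equals the squared t statistic and the p-values coincide. The sketch below demonstrates this equivalence on synthetic self-ratings, not the study's responses.

```python
from scipy import stats

# Hypothetical self-rated knowledge of personal data integrity (1-10)
# for the two groups (synthetic data, not the study's responses).
users = [5, 6, 7, 5, 6, 6, 7, 5, 6, 7]
developers = [7, 8, 7, 8, 6, 8, 7, 7, 8, 7]

f_stat, p_anova = stats.f_oneway(users, developers)
t_stat, p_ttest = stats.ttest_ind(users, developers)  # pooled-variance t-test

# With two groups, one-way ANOVA and the pooled t-test agree: F = t^2.
assert abs(f_stat - t_stat**2) < 1e-8
print(f"F = {f_stat:.2f}, t^2 = {t_stat**2:.2f}, p = {p_anova:.4f}")
```

Either test therefore answers the same question here: whether the difference between the group means is statistically significant.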

Figure 27.

The study continued to analyze the results from the questionnaires to find differences between the two groups, comparing their attitudes towards influence in different situations. First, the means of the different questions were compared.


Figure 28.

Figure 28 shows that users (group 1) on average gave lower scores on all situations than developers (group 2), although not by much.

When examining the average means of participants’ acceptance of being influenced by a device in a specific situation, the study found some differences between users and developers, see figure 29.

Figure 29.

Figure 29 shows the means for users (“Användare av interaktiva system”) and developers (“Utvecklare (och användare) av interaktiva system”) in the situation Product with the different devices. Here the study found that developers were generally more accepting of being influenced through all devices than users. The only exception, where users gave a slightly higher score, was being influenced to buy a product through a smartwatch.


Figure 30.

Figure 30 shows the means for users (“Användare av interaktiva system”) and developers (“Utvecklare (och användare) av interaktiva system”) in the situation Travel with the different devices. Developers again gave higher scores on all devices than users.

Figure 31.

Figure 31 shows the means for users (“Användare av interaktiva system”) and developers (“Utvecklare (och användare) av interaktiva system”) in the situation Health with the different devices. In this situation developers gave higher scores than users on the devices computer and speaker, and lower on telephone and smartwatch.


Figure 32.

Figure 32 shows the means for users (“Användare av interaktiva system”) and developers (“Utvecklare (och användare) av interaktiva system”) in the situation Politics with the different devices. The study found that developers gave generally higher scores than users on all devices in this situation.

Figure 33.

Figure 33 shows the means for users (“Användare av interaktiva system”) and developers (“Utvecklare (och användare) av interaktiva system”) in the situation News with the different devices. In this situation developers gave higher scores on all devices except the smartwatch, where users and developers had the same average score.

