Profiling Algorithms and Content Targeting - An Exploration of the Filter Bubble Phenomenon

Thesis Project 1

Interaction Design Master 2014 Malmö University

Sonja Rattay

sonja.rattay@gmail.com

Supervisor

Acknowledgments

I want to thank Linda Hilfling for her invaluable support and feedback throughout the project.

I would also like to thank Jörn Messeter for leading the thesis project and for giving valuable feedback during our seminar sessions.

CONTENT

1. INTRODUCTION 5
2. RESEARCH FRAMEWORK 8
2.1 Research Methods 8
2.2 Data Mining 8
2.3 Life pattern analysis 9
2.4 Digital footprint vs digital shadow 9
2.5 The acting parties 10
2.5.1 User - User Interaction 10
2.5.2 User - Provider Interaction 11
2.5.3 User - Governmental Interaction 12
2.5.4 Provider - Governmental Interaction 12
2.6 Related Work 14
2.6.1 The Life of Balthasar Glättli 14
2.6.2 American Psycho 15
2.6.3 Google NEST 16
3. PERSONAL EXPLORATION 17
3.1 Individual Experiments 18
3.1.2 Facebook 19
3.1.3 Google 21
3.1.4 Tracking patterns 24
3.2 Group Exploration 25
3.2.1 Personalized advertisement 25
3.2.2 Personalized search results 27
4. THEORETICAL DISCUSSION 29
4.1 Market asymmetry 29
4.1.1 Selective supply and shielded market perception 29
4.1.2 Free labour 30
4.2 Filter Bubble 31
4.3 Unreasonable algorithms 32
4.3.1 The right to forget 32
4.3.2 Rationality is not logic 34
5. DESIGN CONCEPT 37
5.1 Ideation 37
5.1.1 Critical Design Approach 37
5.1.2 Design intention 37
5.2 Realization 38
5.2.1 Artistic realization 38
5.2.2 Technical realization 42
5.3 Future Extensions 43
5.3.1 Profile picture implementation 43
5.3.2 Term prioritizing 44
6. CONCLUSION 45
6.1 Personal discussion 45
6.2 Final thought 48
References 49
Appendix 51


1. INTRODUCTION

The internet has undergone a profound transformation in the last couple of years. From a formerly rather abstract promise of information freedom, anonymity, strengthened possibilities for participation and transparency, it has evolved into an important tool, widely implemented on various levels. But due to its numerous functionalities it bears opportunities in the same measure as issues, such as privacy violation and surveillance abuse. Online services have extended their demands for identification, while Big Data has introduced new prospects of profiling. This has resulted in new extents of virtual identification and personalization, turning on its head what Assange et al. describe as one of the main promises of the internet from the beginning: “Privacy for the weak and transparency for the powerful” (2012:7). With its functionalities evolving to become both highly integrated and ubiquitous, the internet and its power configurations start to actively re-influence what Terranova calls the ‘Outernet, the network of social, cultural, and economic relationships that criss-crosses and exceeds the Internet’ (2000:34), and thus shape our perception of anonymity and privacy.

In 2013, Edward Snowden's revelations about the practices of the NSA, which collects and mines enormous amounts of data from all over the world, were published by various newspapers such as The Guardian (2013). From this it becomes clear that the internet is no longer only a tool for certain companies and individuals, but has become a completely integrated component on an economic and political level. This divides the participants of digital interaction into three main parties: the public body, meaning civilians with personal interests; the political body, meaning governments and associated organizations; and the economical body, meaning companies and businesses. While Snowden's revelations caused a large ongoing discussion, the extent of the consequences of these practices for the individual as well as the public body remains unclear, as the extent and nature of the participation of the economical and political body are not transparent.

To actually understand the nature, and so the impact, of personal data in the hands of someone unknown, users have to know the nature and functionality of the tools which are used to monitor, categorize, classify and target them. Constantly increasing storage capacities and evolving analysis tools seem to enable algorithms to evaluate every click and put it into context. These modern technologies are called life pattern analysis and data mining, and their function is the interpretation of a huge web of social data. In this respect the mass of data is


what makes technologies like life pattern analysis possible, since they rely on the principle of statistics, where everything is declared comprehensible if only enough data is available. With every action linked to any digital device, we add to this pool of behavioural data, extend and sharpen an online representation of ourselves, and create a personal profile composed of our internet search behaviour, up- and downloaded files, route requests, mouse clicks and messages. As the physical and digital world tend to merge more and more through ubiquitous computing and the approach towards an internet of things, and as online behaviour increasingly has real life consequences, the trails we leave in the digital realm will become bigger and gain increasing influence on our lives.

But not only do we create our own digital shadow on the internet; with our online activities we also contribute to the technical compilation of our society itself. Each fact shared about ourselves adds up to a more extensive interpretation of how humans in general behave in our present world. Based on these interpretations, companies as well as governments and governmental institutions (such as secret services) try to predict future actions, in order to upgrade the targeting of potential customers or conspicuous citizens. Digital information is thus exposed to heteronomous profiling activities. Following these targeting technologies, the access to information, services or goods can be modified according to the agenda and perception of the recipients of personal data. Without being aware of it, we as users might only be seeing a small part of the informational architecture of the internet beyond the web. We might exist in a bubble, which is defined by algorithms and computers based on personal behaviour patterns, and are permanently monitored on how we act in this bubble.

Because our interaction patterns become an important part of analysing technology, it is necessary to elaborate on how these aspects form our interaction habits from an interaction design perspective, in order to reflect on how the interaction with personal data can be shaped and influenced. Based on this, an interaction designer can help balance out the currently existing uneven power distribution (due to the described practices) between companies and governmental organizations on the strong side and users on the weaker side. Continuing from here, the ground can be laid for further interaction design concepts and tools


and economical theories, combined with design approaches such as critical design. This thesis therefore reflects, amongst others, on the philosophical thesis of “Discipline and Punish” by Foucault (1977), which addresses the consequences of a surveillance society and monitored behaviour in relation to the correlations between power and knowledge.

It also deliberates on the sociological thesis of “The Filter Bubble” by Eli Pariser (2011), who follows up on this topic in a more recent and observant approach, describing the influences of personalization trends and the polarizing effects of targeted content as practical implementations of monitored behaviour. From a computer science perspective it considers the approach of Weizenbaum (1976), who, as the first computer scientist to write a program aiming to mimic human behaviour, questions the technical feasibility of an intelligent algorithm capable of fully understanding human interaction, thus questioning the role of technology as a surrogate for human judgement in the knowledge-power correlation. From a design perspective I focus on a critical approach to provoke debate and to create a visualisation concept which enables users to see themselves from another perspective, from ‘inside the algorithm’.

As an outcome, a critical artefact is constructed to express the alienation between digital shadow and personality caused by profiling algorithms.

1.1 Research question

This thesis aims to explore the personal experience of a user being confronted with her categorized profile, and to examine it under consideration of relevant theories from related research areas as listed in the introduction. Though the scope of this thesis does not allow the development of a functioning tool kit which could be used against analysing and targeting technology, I believe that engaging users in exploring and questioning their digital shadow can lead to a more critical notion of the topic and thus more autonomy for the users. Critical design methods can help to create an experience which triggers emotions and curiosity as well as scepticism towards the system of profiling and targeting. These thoughts steer towards the question:

How do targeting and profiling manipulate our behaviour on both a personal and a public level, and how can this be translated into a query-provoking experience?


2. RESEARCH FRAMEWORK

2.1 Research Methods

The main part of the practical exploration process relies on personal exploration of myself as a user, and on experiments using my own data and browsing behaviour. I tracked and recorded this using Ghostery and Lightbeam as tracking tools to reconstruct algorithmic behaviour and to map the reactions of algorithms. Ghostery shows which third parties are tracking the current behaviour on each visited site and for which purposes, such as widgets, advertisement or analytics. Lightbeam records each query by third parties while browsing, and shows which parties place cookies as well as how visited and requesting sites are connected.
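To make concrete what such a tracking log contains, the following is a minimal, hypothetical sketch of how a Lightbeam-style list of site-to-third-party requests reveals which parties can observe browsing across several sites. The site and tracker names are invented example data, not taken from my recordings:

```python
# Hypothetical illustration of a Lightbeam-style tracker map.
# The log entries below are invented, not real recordings.
from collections import defaultdict

# (visited site, third-party request) pairs, as a tracking tool records them
request_log = [
    ("news.example", "tracker-a.example"),
    ("news.example", "ads.example"),
    ("shop.example", "tracker-a.example"),
    ("blog.example", "analytics.example"),
    ("blog.example", "tracker-a.example"),
]

# Invert the log: which visited sites does each third party observe?
sites_seen_by = defaultdict(set)
for site, third_party in request_log:
    sites_seen_by[third_party].add(site)

# A third party present on many sites can link browsing across them
for tracker, sites in sorted(sites_seen_by.items()):
    print(f"{tracker} observes {len(sites)} site(s): {sorted(sites)}")
```

A third party embedded on many unrelated sites can connect separate visits into one browsing history, which is exactly the kind of connection map Lightbeam draws.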

As a reference to observe impacts on virtual representation, I used the interest categorization which Google provides for every user of its services, as well as recorded advertisement and search results displayed in the browser. Alongside personal exploration experiments, I used questionnaires as well as group tasks and experiments to observe modified results based on personal interaction and individualized preference profiles. For the final design concept I used a metaphorical visualization as a critical design statement to frame the way of reflection within the practice of interaction in the field of profiling and targeting.

2.2 Data Mining

The American Association for Artificial Intelligence (1996) describes data mining, or knowledge discovery, as the practice of extracting interesting information and useful patterns from large volumes of data, or ‘data sets’. It combines statistics with artificial intelligence and database management to turn low-level data into a more comprehensive and compact form, in which it can be used to discover correlations and relationships between formerly separated aspects. Data mining has long been used in fields such as insurance, advertisement and related research, but current developments of online services and platforms have helped to create big data sets which are more extensive than ever.
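The statistical core of this practice can be illustrated with a deliberately small toy example. The sketch below, using invented shopping records, counts how often pairs of items co-occur; pairs with high support are the kind of low-level correlation that data mining surfaces. This only illustrates the principle, not any provider's actual algorithm:

```python
# Toy sketch of correlation discovery in data mining: count item pairs
# that co-occur in records. All records are invented example data.
from collections import Counter
from itertools import combinations

records = [
    {"running shoes", "fitness tracker", "water bottle"},
    {"running shoes", "fitness tracker"},
    {"water bottle", "yoga mat"},
    {"running shoes", "fitness tracker", "yoga mat"},
]

# Count every unordered item pair across all records
pair_counts = Counter()
for record in records:
    for pair in combinations(sorted(record), 2):
        pair_counts[pair] += 1

# 'Support' = fraction of records containing both items of the pair
for pair, count in pair_counts.most_common(3):
    print(pair, f"support={count / len(records):.2f}")
```

A pair like running shoes and fitness tracker appearing in most records is the kind of relationship that, at scale, lets formerly separated data points be placed into a common interpretive context.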


2.3 Life pattern analysis

Pattern analysis deals with the attempt to detect characteristic relations in data in an automatic manner, based on statistical and machine learning methods (Shawe-Taylor & Cristianini, 2004). Life pattern analysis applies this method to the behavioural patterns of human beings, to predict future behaviour and to understand and interpret habits and actions based on former behavioural patterns extracted from data sets of a specific target or a network of targets. Analysed information generally includes daily routines, family background, location data and personal preferences, but can include any available information about the target. Life pattern analysis is based on the premise that our lives can be categorized and analysed based on routines, and relies on recurrent decisions and habits as well as statistical correlations.

Life pattern analysis also rests on the assumption that enough personal data is available to create sufficiently comprehensive personal data sets to place decisions and actions into the right context and understand the causality of our actions.
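The routine-detection premise can be sketched with a minimal example: given invented (hour, location) observations over a few days, the most frequent location per hour already yields a predictable daily pattern. Real life pattern analysis works on far richer data, but the statistical principle is the same:

```python
# Minimal sketch of routine detection from location data.
# The (hour, location) observations are invented example data.
from collections import Counter, defaultdict

observations = [
    (8, "home"), (9, "office"), (12, "cafe"), (18, "gym"),
    (8, "home"), (9, "office"), (12, "cafe"), (18, "home"),
    (8, "home"), (9, "office"), (12, "office"), (18, "gym"),
]

# Tally observed locations per hour of day
by_hour = defaultdict(Counter)
for hour, place in observations:
    by_hour[hour][place] += 1

# Predicted routine: the most frequent location at each observed hour
routine = {hour: counter.most_common(1)[0][0] for hour, counter in by_hour.items()}
print(routine)
```

Even this naive frequency count produces a movement profile; with more data points the prediction becomes more confident, which is exactly why quantity of data matters more than any single observation.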

2.4 Digital footprint vs digital shadow

When looking at the digital profile created by data mining and life pattern analysis, I want to distinguish between actively shared information (which I will call ‘the digital footprint’ for the rest of the thesis) and passively shared information, which is generally referred to as the digital shadow. The digital footprint contains information such as social media profiles, shared pictures and active participation in online discussions. Data which is created passively, including recorded clicks on advertisements and how long we stay on certain websites, as well as data like pictures on surveillance cameras, GPS signals or stored email discussions, results in the digital shadow. Both kinds of behaviour and information can be used in attempts to reconstruct a digital profile mirroring interests, on which predictions about product preferences, needs, spending capacity and willingness to purchase can be based. As in real life, our shadow covers more ground around us than our footprint, and while we can set our feet consciously, we are not able to control where our shadow falls. In the future, ubiquitous computing and the internet of things will increase the amount of passively produced data and therefore extend our digital shadow and the potential for statistical analysis.


2.5 The acting parties

Digital interaction happens in different layers, not just in social networks or on websites where users interact and communicate openly with each other, which mainly describes the visible interaction on the front end side. Interaction between front end users and back end users tends to be invisible, meaning the result of the interaction is obscure for at least one of the participants, who is not aware of the context and the consequences of actions and is therefore manipulated by their counterpart.

These interactions can be mapped onto three main acting bodies, as mentioned in the introduction: the public body - citizens and front end users; the economical body - companies and agencies as providers of digital tools, technologies and services; and the political body - governments and military organizations. The interaction areas, as well as the problematic aspects between these bodies, can be mapped in the following four areas:

2.5.1 User - User Interaction:

Misuse of data in a social context on the front end side where both participants are members of the public body

This part describes complications which have their origin in social interactions and the misuse of data by other users violating personal rights and social integrity. This interaction between users generally only happens on the front end of websites or networks. The main aspects of these issues, besides obvious problematic social structures (such as lack of respect, racism, sexism, insults, and the like), are that data on the internet is accessible to far more people than most users realise, and that the anonymity some sites allow their users on the front end seems to evoke far more aggressive behaviour from some people than face to face interaction does. I will not focus on these interactions in this thesis, as my main focus is the power disparity between the people in charge of data mining tools and the users, but I want to stress that these interactions strongly contribute to the manner in which front end users publish and share personal data and are therefore an important factor in themselves.


2.5.2 User - Provider Interaction:

Contextual manipulations by the economical body based on economic decisions

These issues have their origin in economic decisions and business plans, which are based on data gathering and analysis. Gilles Deleuze (1992) describes the shift from a production based capitalism to a product based capitalism as the background of product defining methods. While production and industry have been moved to third world countries, marketing and selling the right products to the right customers has evolved into the tool for creating profits in our first world societies. This strategy however requires extensive knowledge about potential customers, which leads to business models solely based on data generation and interpretation, in order to enable companies to specifically target those users with the highest feasibility of becoming frequent and well paying customers.

Looking from that angle, the question arises: who are the actual customers of social networks such as Facebook or Twitter - the users or the advertisement agencies? The user ‘pays’ with personal data and circumscribed privacy to be able to use the offered service, sometimes to an extent that is not justified by the provided service, because the functions or services would also work without that kind of personal data (Assange et al., 2012). The user is rather ‘sold’ to businesses, which in turn pay actual money to buy more user attention from successful websites. These in turn rely on the user to keep them successful by engaging in offered services, and therefore close the circle of free labour in terms of the limited attention market (Terranova, 2000). The interaction between providers such as websites and advertisement agencies or companies is often unclear for the user, who is therefore unable to actually estimate her own value for the provider and thus not able to evaluate the fairness of the deal.

One of the results of these profiling methods is strongly targeted marketing and, in the long run, even personalized pricing, as researched by Choudhary et al. (2005), which only offers certain products and services to desired customers or only provides market share to companies with desired products. This causes an information asymmetry which discriminates against the user until she is aware of her value as a target and therefore able to influence the limitation by scrutinizing and determining the parameters of her own classification. I will address this in the context of the design concept.


2.5.3 User - Governmental Interaction:

Manipulation of content or data collection by the political body to strengthen governmental control and political power

Though a lot of surveillance technology is marketed by governments as protecting citizens from crime such as terrorism or black marketeering, it is also a tool to protect national interests, which may not always be the same as the public interest. While the internet enables political movements such as the Arab Spring, it also endangers the participants, who are traceable through their actions using digital devices. In her talk at the Theorizing the Web conference in April 2014, Janet Vertesi used the handling of social media during the uprising in Egypt as an example of how social media like Twitter and Facebook visualize and also influence the power structure between the political and the public body. While social networks were used in the beginning by the public to organize riots and demonstrations, they were also used by the government to identify organizers and agitators, who as a consequence were arrested and imprisoned. So while social networks had a catalysing role at the beginning of the protests, they also served the opposing side. Through the use of social media, the planned actions of protesters became apparent; the organizers eventually became aware of this and included orders NOT to distribute plans and briefs through social media in their messages, though the crowd ignored these orders. It was not until protesters' actions became too numerous to track, making it impossible to pick out certain individuals, that social networks were blocked, which in turn forced the revolting public to use other technologies hiding their activities, such as the anonymous TOR browser, or simply to go out on the streets to connect. This example can be transferred to the wider picture of governments using personal data to identify persons who do not conform to politically and socially approved behaviour, and the real life consequences that politically undesired behaviour, or even only the classification of being connected to those who show it, can trigger.

2.5.4 Provider - Governmental Interaction:

International law decisions and lobbying resulting in changing parameters of digital interaction under exclusion of the public body


Providers can be obliged to open their data pool if it is legally requested by officials such as secret services. It is very difficult to actually gain insight into how these interactions work, as by the nature of secret services their actions have to remain secret. Only through the actions of whistleblowers such as Edward Snowden can the public body gain comprehensive knowledge about these processes and manipulations, though recently published transparency reports can give an idea of the extent to which governments request user data, as published by Google (2013), Yahoo (2013), Microsoft (2013), LinkedIn (2013) and Facebook (2013).

A comparison between them shows that requests in general have strongly increased, with Yahoo being the provider with the most affected users (around 30,000) and LinkedIn the one with the fewest requests (around 250). Surprisingly, Facebook reports a comparably small number of 5,000 affected users as a result of up to 999 requests.


2.6 Related work

As this topic is not a new one, there are a number of projects which cover data mining and data analysis and look at them from different angles. The following projects visualize different explorations of the interaction between the acting bodies and therefore influenced my own exploration experiments.

2.6.1 The life of Balthasar Glättli

Just recently, the Swiss politician Balthasar Glättli allowed the agency ‘Open Data City’ to visualize his meta data collected by his mobile telecommunication provider, as well as public Tweets and Facebook updates (Open Data City, 2013), following the current discussion around the TTIP. The data is published as an online information visualization (see figure 1), presenting a detailed digital profile of the kind that exists for all of us in the databases of the political and economical body. This experiment was conducted to visualize how detailed and accurately our daily life can be reconstructed within the current legal scope, without even scanning the content of personal emails or recording private phone calls.

Figure 1: Screenshot of the visualization by Open Data City for the project “The Life of Balthasar Glättli”


Similar experiments have been carried out before, using the connection data of the German politician Malte Spitz (Zeit Online, 2009) and the British journalist Daniel Thomas (Financial Times, 2013). What is new in this example is that Glättli is not treated as a shielded individual - data about people close to Glättli was also integrated as far as it was publicly available, e.g. responses to public Facebook posts. His wife also contributed her meta data; it was therefore possible to reconstruct a part of the social network in which Glättli lives and to map and predict relationship patterns.

This experiment shows how informative even meta data (information about data, without factoring in the actual content) can be, if only it is available in sufficient quantity. Through the interpretation of connections between persons, like Glättli's wife, an almost solid profile can be created based only on facts like how long phone calls with certain people last, and at what times they call or are called. A movement profile illustrates daily routines and patterns as well as preferred places, which in turn allows a better understanding of general characteristics of the target. This experiment also explores how the interaction between public and political body can look and can be manipulated based on law creation and decisions.
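How little metadata is needed for such relationship patterns can be sketched with a toy example: from invented call records containing only the contact and the call duration (no content at all), cumulative talk time already ranks a target's presumably closest contacts:

```python
# Toy sketch of relationship inference from bare call metadata.
# Contacts and durations are invented example data, not Glättli's.
call_log = [  # (contact, call duration in minutes)
    ("contact_A", 25), ("contact_B", 2), ("contact_A", 40),
    ("contact_C", 7), ("contact_A", 15), ("contact_B", 3),
]

# Accumulate total talk time per contact
total_minutes = {}
for contact, minutes in call_log:
    total_minutes[contact] = total_minutes.get(contact, 0) + minutes

# Longest cumulative talk time plausibly marks the closest relationship
ranked = sorted(total_minutes.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```

Adding call times of day or cell tower locations to such records would extend the same counting logic into the daily routines and movement profiles the visualization displays.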

2.6.2 American Psycho

Mimi Cabell and Jason Huff retold the story of Bret Easton Ellis' American Psycho through Google advertisements, which they collected while sending each other the story page by page via Google Mail, to detect how Google would interpret this particularly violent and racist content and turn it into targeted advertisement. It is therefore a very good example of a project playing with the interaction between the economical and the public body. This project is especially interesting as it targets the view Google has of the user, and how it reduces our profiles to a collection of possible product preferences. It also displays how we can partly influence what we are seeing on the one hand, and how data from a single source such as email content can totally mislead an algorithm on the other:

“Google‘s choice and use of standard ads unrelated to the content next to which they appeared offered an alternate window into how Google ads function — the ad for Crest Whitestrips Coupons appeared the highest number of times, next to both the most graphic and the most mundane sections of the book, leaving no clear logic as to how it was selected to appear. This ‘misreading’ ultimately echoes the hollowness at the centre of advertising and consumer culture, a theme explored in excess in American Psycho.“ (Mimi Cabell, 2010)

It almost feels like a dialogue between the writers and Google, with the advertisements being the answers to the writers' actions, including misunderstandings and completely out of context contributions. It also shows that complex discourse structures cannot yet be interpreted in a sufficient manner by Google's algorithms, and that positive and negative attitudes cannot be distinguished - as long as a topic is mentioned, it seems to show up as a suitable result for targeting. These insufficient interpretation approaches, explored in the scope of this project, represent one of the most bizarre aspects of targeting and have therefore been readdressed in the design concept.

2.6.3 Google NEST

At the re:publica conference 2014, the German activist group PENG! Collective presented four fictive new Google products under the name Google Nest Labs. The campaign included a presentation with two fake Google employees, a prototype demonstration, as well as a website built in the style of Google's corporate design. The four services were Google Bee (a personal drone), Google Hug (an app which identifies emotional needs such as affection and sympathy and would lead individuals with similar needs towards each other), Google Bye (a service which would display a timeline of people who passed away, for their very own dignified remembrance) and Google Trust (an insurance against data misuse, since Google cannot promise that personal data will not be stolen or misused at any time). The campaign had a wide reach through newspapers and social networks, but the website only lasted a week before Google pushed its request to take the site down. The aim of the campaign was to make a point about how Google is slowly but surely undermining our understanding of privacy and will extend its attempts to collect and record increasing amounts of personal data into areas of our lives where it feels increasingly intrusive. The number of people who realized that the campaign was a hoax was small, which indicates what we consider Google capable of and what is perceived as realistic. The campaign seemed to strike a nerve, as most of the people who did not identify it as satire right away reacted very negatively and even with disgust.


3. PERSONAL EXPLORATION

Inspired by the experiments of Cabell & Huff and Glättli, I want to explore the scope of my current digital shadow. This approach has confronted me with quite a challenge, as it is very problematic to reconstruct, from an average user's perspective and without extended authority, the extent of the profile which governmental organizations have the ability to compile about me. Unfortunately I do not have, for example, the capacities Glättli used for the exploration of his digital traces, as it requires the cooperation of various mobile network providers and also an efficient analysing technology to filter, connect and visualize the coherence of my data set. I thus want to focus more on the interaction between the economical and the public body, meaning between providers and me as an exemplary user. Although this also required lengthy research to find suitable tools and traces, it is rather achievable considering the time frame and intended scope of this thesis. Glättli's experiment does however give valuable insight for the further direction of experiments, as it demonstrates how informative small pieces of meta data in high quantity can be. As this is one of the major characteristics of data mining and analysis, this aspect is applicable to every kind of profiling and thus also important for economical targeting. The fact that Cabell & Huff focussed on the interpretation of Google Mail influenced my later decision to explore further aspects of Google's categorization system beyond email content analysis. Google Search and Google Ads also affect more people and on a broader scope than Google Mail, and therefore give a more comprehensive picture of the field. As the reaction to the intervention by PENG! Collective showed, Google is also perceived as one of the main actors in the field of data mining and therefore has a strong symbolic role when it comes to illustrating the effects of profiling and targeting.


3.1 Individual Experiments

To explore the current state of my digital profile, I tried to find a way to see what kind of data is collected about me and how it categorizes me. As there is a variety of businesses collecting and selling data, belonging to the economical body and focussing on user-provider interaction, there seems to be no single place accessible to me that holds all my captured data together or can give an overview of where and what is collected and how it is used. The data farming companies which can be found on the internet by any user can be divided into two categories:

On the one hand there are businesses which offer the user services requiring the user to actively set up a profile and consciously share at least parts of her information, but which continue their behaviour tracking beyond the frame of the actual product. These companies tend not to sell their collected data but offer targeted services based on their extensive database. They focus on behavioural patterns on a bigger scale and try to identify habits indicating certain needs which can be fulfilled through consuming goods or services. The collected data is used to categorize users and put them into interest groups which are likely to act and consume in a similar way. Some of these companies have their own systems of collecting and analysing data, or even provide technology for other businesses to install their own collecting algorithms when a user visits their website. The best known examples of this strategy are Google and Facebook.

On the other hand there are businesses which act hidden from the user as third parties, placing cookies on partner sites and collecting information using, for example, bonus programs and the like. The compiled data sets mostly get sold directly, without further extensive interpretation. The biggest suppliers in this market are BlueKai and Acxiom in the US, with Acxiom also being active within the EU. Both services allow users to look into their databases and see how they are categorized. Unfortunately these services require an address in the US to gain insight into the digital database, and a request sent directly to Acxiom with my German address did not connect to any data set. Reports and articles from users who have checked out their available profiles, however, show hilariously inapplicable categorizations and assumed interests, as can be seen in an experiment by Jeffrey Rosen for the New York Times (2012) and an article from


The fact that there is currently no way to access my data set held by any third-party provider strengthened my initial decision to explore the categorization of my data set created by Facebook and Google, as the strongest representatives of the first category of data mining businesses.

3.1.2 Facebook

Before deciding to focus solely on Google, I tried to explore the extent of profiling by Facebook following this incident: on 22nd of April, I saw the advertisement constellation shown in figure 3 on my timeline. Since I am a vegetarian and mostly eat vegan, I have ‘liked’ a lot of sites which post related things like recipes, protest campaigns or projects. So in my timeline I found a post from a vegan recipe site. Right next to it, in the advertisement bar of Facebook, two totally mismatching ads were displayed: an ad for the “Monster Steakhouse” in Copenhagen and, below it, an ad for a three-day hunting course from the site “Huntinglife”. This obvious malfunction of the Facebook targeting algorithm caused me to wonder how Facebook uses our profile data to reconstruct our preferences and assumed needs.


According to the Facebook Advertisement terms from 2014, Facebook uses a categorization model which is based on the activities of the user, such as likes of pages, comments and status updates. Facebook uses simple correlation assumptions, such as: users who like a page named “Running” probably like running and therefore get to see ads for running equipment or a new gym. Facebook also tries to find more complicated correlations by combining location data, status content and likes to calculate broader preferences.
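The kind of simple correlation targeting described above can be illustrated with a minimal sketch. All category names, keyword rules and page names below are invented for illustration; this is not Facebook's actual system, only the general like-to-category mapping the terms describe.

```python
# Illustrative sketch of simple correlation targeting: page likes are
# mapped to ad categories via keyword rules. All rules and names here
# are hypothetical examples, not Facebook's actual mechanism.

AD_CATEGORIES = {
    "running": ["running shoes", "gym memberships"],
    "vegan": ["vegan recipes", "plant-based products"],
    "hunting": ["hunting courses", "outdoor gear"],
}

def target_ads(liked_pages):
    """Return ad products whose category keyword appears in a liked page's name."""
    ads = []
    for page in liked_pages:
        for keyword, products in AD_CATEGORIES.items():
            if keyword in page.lower():
                ads.extend(products)
    return ads

print(target_ads(["Running Club Malmö", "Vegan Recipes Daily"]))
```

A mismatch like the hunting ad above would arise the moment such a rule fires on an overly broad or mistaken correlation, since the rule sees only keywords, not the stance the user takes towards the topic.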

On the website of Facebook itself it is not possible to find out which actions or status updates are used for analysis purposes, nor which actions cause which kind of categorization.

As shown by the organisation Europe vs Facebook (2014), Facebook used to hand out personal data sets in the form of CDs, but after repeated requests has become hesitant to grant broad and easy access to individual persons. The activist group Europe vs Facebook operates from Ireland and tries to force Facebook to hand out personal data sets based on EU law, but it seems that the more people request their data, the more unwilling Facebook is to reveal it. Furthermore, to be entitled to receive an almost full copy of the personal data set, all personal data has to be given, such as full name, address, birth date et cetera, which contradicts the purpose, especially for people who have so far managed to hide their true full name or other personal data from Facebook. In case a user receives a requested copy of the personal profile, it does not seem to be as complete as Facebook pretends, according to Europe vs Facebook, and does not include data such as advertisement interaction, the history of liked pages and similar ‘small’ data fragments, which are in fact very valuable for Facebook to create patterns. Following some campaigns by Europe vs Facebook, Facebook has introduced a download module directly on the website to request a copy of the personal profile. My request on 11th of April 2014, though, has not resulted in any feedback or response so far, other than that my request is in process, even after repeated requests on 25th of April 2014 and 5th of May 2014. The copy of a data set available now also seems to be even less complete than the one which was available via CD before, according to Europe vs Facebook.


since the algorithm, which stands as a black box between action and result, cannot get every complex context right and therefore runs the risk of producing bizarre constellations and interpretations which anyone would most likely recognize as contradictory.

3.1.3 Google

After not finding a way to gain insight into the methods Facebook uses to generate a profile out of my data, and given the mentioned context of Google drawn from the project by Cabell & Huff, I decided to focus on my Google profile. Google offers a range of products and tools, for their own purposes and also for advertisers, to learn about the behaviour of users, and provides holistic services such as Google Analytics and Google Ads, which include a database of interests and categorizations from which advertisers can choose, paying Google to display their ads to a specific audience, similar to the service Facebook offers. According to Google, they use searches, emails sent with a Google Mail account, clicks on sites and the timespan people stay on certain pages for analytical purposes. The Google profile impacts displayed advertisement but also search results and suggested sites. Like Facebook, Google uses cookies.

Google does not provide any possibility to request the collected data, but it publishes the list of interests which can be assigned to users and also lets users see a list of their already assigned interest categories, as well as their estimated age, gender and languages. The profile differs in the number of identified interests when logged in and out, and is created based on website visits, Google searches and clicked advertisements, which are stored in the cookies of the used browser. Google provides the option to clean this list by deleting inappropriate interests, and to opt out of the system and deny any further collection of data and browsing behaviour.

Who does Google think I am?

After having a look at my profile and the personal interest list assumed by Google, it seems Google's algorithms do not do as good a job as claimed, since over half of my interest list was irrelevant to me; a few items even belong to things I actively dislike, which makes me wonder how Google got this impression of me. Next to the interests, Google states what they are based on, such as website visits, Google searches or YouTube videos. As I almost never use YouTube, I was surprised to find objects based on my YouTube habits in my list.


To get a clearer picture of how close my Google persona, my ‘Google-Me’, is to my actual preferences, I listed my 166 Google-defined interests and rated them on a scale from ‘-5’, being topics I really dislike and where related advertisement would bother me, to ‘5’, being topics I do identify with and would look for deals on myself (see appendix 1). The average score of the list was 0.8, which can be translated more or less to ‘I do not mind them but I also do not care’. The list includes 30 interests which do match my preferences and 21 interests I do not like that much. Of course this rating is very subjective and based on my own perception of myself, but it does leave an irritating impression when I as a user cannot identify with my virtually estimated profile any more strongly than I would with a randomly chosen profile. As a single data set is a non-viable and very biased source for generalized judgement, I repeated a similar, though not as detailed, approach with a bigger group of participants in a later step.
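The arithmetic behind this rating exercise can be sketched as follows. The handful of interests and scores below are invented sample values standing in for the full 166-item list in appendix 1, so the resulting numbers differ from the 0.8 average reported above.

```python
# Sketch of the rating exercise: each Google-assigned interest is scored
# from -5 (actively disliked) to 5 (strongly matching). The ratings below
# are invented sample values, not the actual data from appendix 1.

ratings = {
    "Coffee": -3, "Martial Arts": 4, "Body Building": -5,
    "Design": 5, "Business Software": 1, "Wrestling": -4,
}

average = sum(ratings.values()) / len(ratings)
matches = sum(1 for r in ratings.values() if r >= 3)      # clear matches
mismatches = sum(1 for r in ratings.values() if r <= -3)  # clear mismatches

print(f"average: {average:.1f}, matches: {matches}, mismatches: {mismatches}")
```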

I continued by clustering the interests into theme-related fields to extract possible interpretations from them (see figure 2). From this apportionment, an emphasis on technical and business-related themes can be seen, which are far stronger topics for my Google persona than, for example, design-related fields. From my estimation this is not accurate, since the focus of my professional life is design, containing elements of business and technology, and not the other way around.

Figure 2:

Chart of my Google interests, sorted by theme


Judging from the objects in my Google interest list, I am probably single, since there are no clues about anything related to relationships, family or children. Since my ‘Google-Me’ listens to a lot of music and is interested in games, but shows no sign of outdoor activities such as cultural events, cinema, concerts or similar, it seems to be a stay-at-home person, maybe even with pets, since they appear comparably often as well. None of these interpretations are true, but they could validly be drawn from this list, which can be explained by me not being online often while being outside. This aspect stresses the fragmented nature of a digital profile, which only includes online activity and therefore neglects a very important offline context in its profiling. This fragmentation will again be addressed in the following design concept, as it poses a potential entering point for interaction design concepts and holds a strong critical design potential.

The most striking aspect about my preference list, though, is my rather unhealthy lifestyle. According to my Google preferences, I preferably eat sweets, desserts, baked goods and fast food, and drink coffee. These conditions, paired with an interest in wrestling, martial arts, body building and extreme sports, which all carry a high risk of injury, would make health insurance quite costly for ‘Google-Me’, if this data were used for any estimation of my health status. This representation puts me in an unprofitable light, especially in contrast to my actual lifestyle, which is, as mentioned, vegetarian/vegan, containing a lot of vegetables and mostly fruits instead of sweets. I have not eaten in fast food restaurants for over 5 years and cook fresh food almost every day, I drink coffee around once a week, I exercise moderately, and though I have practiced and do have an interest in martial arts, taking part in wrestling, body building or any other kind of extreme sport would never cross my mind.

In order to have a list against which to compare my Google persona, and to see if the Google categories actually provide interests which would make it possible to mirror my interests accurately, I went through the whole list of general Google categories and created my own preference list which would mirror my interests more appropriately (see appendix 2). Without trying to do so, I ended up with exactly 166 interests on my personal list as well. Though it seems that Google offers more categories in the technical field, my self-made list includes a balanced amount of design-, business- and technology-related objects. It also includes a lot more ecologically related, as well as politically relevant, objects.


3.1.4 Tracking patterns

To see if and how my profile would change over time, and how my listed interests would react to my browsing activities, I observed my browsing behaviour and my Google profile over a couple of days, as well as my Facebook activities, advertisement and suggestions (before I decided to focus on Google solely).

To gain better insight into which actions cause which categorization, I deleted my complete interest list as well as my cookies, and disabled any blocking plug-ins for advertisement and cookies. I installed Ghostery and Lightbeam to record all tracking attempts and third-party connections, and listed which interests occurred after four days of browsing and how they could be reconnected to my browsing behaviour. After four days, I had visited 61 sites. Lightbeam showed me 427 third-party sites which had connected to my browser in this time (see figure 3 and appendix 3). Google listed 20 interests for me based on my new cookies, disregarding the Google profile connected to my profile ID. Of these, 14 interests could be connected to websites I had visited or searches I had made. Six interests, though, seemed not connected at all to my behaviour, which raises the question of how Google classifies certain sites and keywords. With one third of the assigned interests being of obscure origin after only four days of tracking, the relevance of Google's classification algorithms seems questionable. This aspect of obscure correlation between single elements of real interests and assumed interests also influenced the later design concept as an object of critical consideration, as it portrays the inability of the user to influence the connection between behaviour and interpretation results and thus blurs the actual correlation between certain elements.
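Deciding which assigned interests are "traceable" and which are "obscure" is itself a matching exercise, which can be sketched roughly as below. The visited sites, searches and interests are invented examples, not my actual four-day log, and the naive word-overlap check stands in for whatever classification Google actually performs.

```python
# Sketch of the attribution check: for each assigned interest, test whether
# any visited site or search plausibly explains it. Sites, searches and
# interests are invented examples; the matching rule is a naive stand-in.

visited = ["bbc.com/news/science", "allrecipes.com/vegan", "webhallen.com"]
searches = ["vegan dinner ideas", "thesis deadline stress"]
assigned_interests = ["Cooking & Recipes", "Science News", "Wrestling"]

def explained(interest, visited, searches):
    """Naive check: does any word of the interest appear in the history?"""
    words = [w.lower() for w in interest.replace("&", " ").split()]
    history = " ".join(visited + searches).lower()
    return any(w in history for w in words)

for interest in assigned_interests:
    status = "traceable" if explained(interest, visited, searches) else "obscure"
    print(interest, "->", status)
```

Even in this toy version, interests like "Wrestling" end up unexplained, mirroring the six obscure interests found in the actual four-day observation.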

Figure 3:

Recording of my first- and third-party connections using Lightbeam and Ghostery (round elements represent visited sites, triangles are third-party accesses)


3.2 Group Exploration

To examine whether it is just my Google profile which seems inapplicable, I expanded my experiments to group tasks.

3.2.1 Personalized advertisement

I first conducted a small group survey targeting the issue of personalized advertisement. I questioned a group of 30 anonymous participants about their opinions concerning personalized advertisement and their experience with it (results of the questionnaire can be found in appendix 4). Of these 30 participants, only one person was not aware of personalized advertisement, and 22 participants try to avoid advertisement in general by using ad blockers or similar add-ons to disable advertisement in their browser, which probably has a big influence on how advertisement on the internet is experienced. The opinions about and experiences with personalized advertisement are mixed within this group, with some participants calling it downright creepy, while others prefer it over general advertisement and see some positive aspects in it: one participant refers to it as a good reminder of what she still wants to buy. In general, the participants seem to worry about how the required data is collected, but also seem to be positive about the filter effect and appreciate the fact that they do not have to deal with things they are not interested in anyway. Half of the participants, though, claim to have experienced a situation where personalized advertisement or suggested sites made them feel uncomfortable (see figure 4). Interestingly, two thirds of the participants do not feel that personalized advertisement in general is doing a good job of covering their interests (see figure 5).

Figure 4 (left) and 5 (right):

Answers of participants from a conducted questionnaire. Results of the questionnaire can be found in appendix 4.


In the second part of the questionnaire, the participants were asked to check their interest list generated by Google and to evaluate it. The feedback on how they perceive their representation by this list is pretty mixed, but only a few state that their list is mostly accurate. Aside from a not so small number of people who were not able to access their list (sometimes due to blocking add-ons, sometimes without any obvious reason), a majority find mostly objects in their list which they describe as not relevant. Some participants expressed their surprise in the feedback section, wondering who Google thinks they are and why.

“Always odd: porn, hand made shoes, vodka - does Google think I’m a rich old man?!”

Quote of one participant (from results of the questionnaire, see appendix 3)

The Pew Research Centre (2012) came to a similar and more scientifically reliable result. In their study of the use of search engines and the perception of personalized advertisement, 68% of the participants were not okay with targeted advertising because they disapprove of tracked and analysed behaviour, while 28% approve.

The feedback from this survey motivated me to reflect on the consequences of mismatching profiles and the distortion they cause in our digital shadow. The reactions to the preference lists were similar to reactions to a really old image of oneself, in terms of recognizing familiar traits but not being able to identify with the bigger picture. On the other hand, it felt like a fun house mirror, where visitors are drawn in by the strangeness of the distortion in which they see themselves, just as participants were eager to examine particular character traits they supposedly have according to the preference lists.


3.2.2 Personalized search results

Another aspect of Google personalizing the appearance of what we see of the internet is the result of Google searches and suggested sites. Pariser (2011) describes an example where two friends of his receive totally different results when searching for the term ‘BP’, a big oil and gas company. While one friend received investment suggestions, the other saw articles about environmental pollution. To ascertain whether such effects occur in my environment as well, I set up a group activity and asked four friends of mine to search for two highly controversial terms, ‘proof of climate change’ and ‘consequences of abortion’, and compared the results (see table 1 and 2).

It turned out that the first three results mostly included the same three pages when using Google, though in a different order or with advertised pages placed before them or suggested scientific articles. Depending on the settings of the user, the first result would therefore differ between being an ad, a scientific source or an actual search result.

Table 1: Results of the search for ‘consequences of abortion’

participant | suggested sites? | scientific articles? | position 1 | position 2 | position 3 | position 4 | search results
part. 1 | no | yes | first-care.org | familyandlife.org | lifesitenews.com | christianvoicesforlife.org | 36,300,000
part. 2 | no | no | familyandlife.org | first-care.org | lifesitenews.com | americanpregnancy.org | 23,000,000
part. 3 | no | no | first-care.org | familyandlife.org | lifesitenews.com | americanpregnancy.org | 23,100,000
part. 4 | no | yes | familyandlife.org | lifesitenews.com | first-care.org | christianvoicesforlife.org | 23,000,000

Table 2: Results of the search for ‘proof of climate change’

participant | suggested sites? | scientific articles? | position 1 | position 2 | position 3 | position 4 | search results
part. 1 | no | yes | climate.nasa.gov | skepticalscience.com | google images | royalsociety.org | 175,000,000
part. 2 | yes | no | practicalaction.org | cat.org.uk | iiea.com | climate.nasa.gov | 99,100,000
part. 3 | no | no | climate.nasa.gov | skepticalscience.com | royalsociety.org | washingtontimes.com | 99,300,000
part. 4 | no | no | climate.nasa.gov | skepticalscience.com | ncdc.noaa.gov | en.wikipedia.org | 109,000,000
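How similar the participants' result lists actually are can be quantified. The sketch below computes the pairwise overlap (Jaccard index) of the top-four results for ‘proof of climate change’ as recorded in table 2; the metric choice is mine, not part of the original comparison.

```python
# Quantifying how similar the participants' top-4 results were, using the
# 'proof of climate change' data from table 2. The Jaccard index is an
# assumed metric choice: |intersection| / |union| of the two result sets.

from itertools import combinations

results = {
    "part. 1": ["climate.nasa.gov", "skepticalscience.com", "google images", "royalsociety.org"],
    "part. 2": ["practicalaction.org", "cat.org.uk", "iiea.com", "climate.nasa.gov"],
    "part. 3": ["climate.nasa.gov", "skepticalscience.com", "royalsociety.org", "washingtontimes.com"],
    "part. 4": ["climate.nasa.gov", "skepticalscience.com", "ncdc.noaa.gov", "en.wikipedia.org"],
}

def jaccard(a, b):
    """Overlap of two result lists as sets: 1.0 means identical, 0.0 disjoint."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

for (p1, r1), (p2, r2) in combinations(results.items(), 2):
    print(f"{p1} vs {p2}: {jaccard(r1, r2):.2f}")
```

Participant 2, who saw suggested sites, overlaps least with the others, which matches the qualitative observation that presentation settings reshape what each user sees first.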


A study by ‘digital relevance’ (2011) shows that the position as well as the arrangement of search results matters a lot for the likelihood of being clicked, and therefore influences the likelihood of us consuming the content of a site. So though the results differ mostly in order and representation rather than content, it can be said that every user has a differently designed interface to react to and a unique view on the information provided by internet search engines. The personalized design of the search results thus translates into a personalized perception of the answers to the asked question. In the study of the Pew Research Centre (2012), 65% of the users stated they would not want search results ranked according to their previous searching behaviour, as they worry about missing out on information.

This applied preference filter, which filters out particular information it assumes does not match the profile, blots out certain parts of the individual. Based on this observation, the aspect of a perforated representation will be revisited in the design concept as well.


4. THEORETICAL DISCUSSION

The results of the conducted experiments can be divided into four aspects, which are worth putting into a theoretical context with existing theories and approaches. The exploration of the advertisement algorithms opens up the questions of unreasonable classification by algorithms and the occurrence of market asymmetry in the area of interaction between the economic and the public body. These aspects can be extrapolated to the field of interaction between the political and the public body, and thus to the occurrence of filter bubbles and conforming behaviour.

4.1 Market asymmetry

Market asymmetry describes the power shift between provider and customer when the customer's perception of the market, and thus her ability to evaluate the market supply as a whole as well as her own value and position in the market, is limited (Akerlof, 1970). Data mining, and targeted advertisement based on it, can enhance these symptoms in certain ways:

4.1.1 Selective supply and shielded market perception

Targeted advertisement and personalized content limit our perception of the availability of products and services based on our digital shadow, which contains elements users are partly unable to control. This creates an uneven power relationship, with rules and conditions for the consumption of supply which are not transparent to the front-end user, as the extent of used data and the requirements to be targeted are obscure, allowing businesses to treat potential customers in very different ways according to their business needs instead of the needs of the customer. The user turns from a customer into a profile which gets sold as a product by the providers to the advertisement agencies, divided into user segments which live in separated online realities. Consequently, some ‘products’, meaning profiles, are going to be more valuable than others, depending on spending capacities and the like. Direct consequences can be differing prices, discounts, or in general redlining, which means limited access to the range of products available (Choudhary et al., 2005). This offers a strong entering point for critical design approaches to intervene with these impacts and scrutinize the dominant position of the economical body within the means of an interaction designer, as here the circumstances are directly bound to online interaction practices.

Another issue contributing to market asymmetry is, due to the unawareness of the extent to which this data is used to target advertisement, the limited signal effect advertisement normally has for a user. While users might agree to continuously receive deals and offers from certain suppliers, because they are already convinced of their quality, or to receive targeted advertisement on certain platforms to save time and effort, users also need a neutral space to compare the whole range of products available (Kihlstrom and Riordan, 1984). As long as these neutral spaces are not clearly distinguishable from the non-neutral ones, which use targeted advertisement and thus shield the user from certain offers, and the user is also not in control of which data is used to target advertisement, she is not able to identify the real signal of the advertisement, as the signal loses meaning by losing audience, and thus the whole potential of the market, which puts her into the weaker position when looking for suitable deals. This takes away the autonomy of the user as a consumer and thus also her right and ability to contribute to market-forming activities based on conscious consumption or renunciation of certain products (Verde Garrido, 2014).

In the personal exploration, this factor came into play, for example, when searching for the term ‘consequences of climate change’. In the first position for the participant who saw suggested sites in her search results appeared a featured site which asked for donations to fight climate change. Following the premises of Kihlstrom and Riordan, the participant would be more likely to donate if she could assume that every person searching for this term would see the same suggestion. The moment she became aware that just a limited and specifically framed number of people is seeing the ad, the odds of her donating would be smaller, because the ad loses its signal, and thus its relevance, when losing audience.

4.1.2 Free labour

Besides the limited overview of the market supply, current technologies of data mining also limit the awareness of their function and therefore limit the user's understanding of how far she is a part of the value-producing system. Users strongly contribute to the process of data mining by providing information about each other. While the concept of investigating targets by using persons who stand close to the target has been used in a lot of political systems, the informant used to be reimbursed in a certain way by the information-receiving party. An example is the investigation system of the old GDR, where one third of


social currencies like attention for adding to the information pool of the providers (example from Assange et al. 2012). The economical body hereby profits from the ignorance of users regarding their own value to the market, who thus provide free labour to the system (Terranova, 2000).

4.2 Filter Bubble

„We shape our tools and thereafter our tools shape us“ (McLuhan)

Simply put, life pattern analysis is based on our habits and the likelihood of people following certain patterns or making certain decisions, which are also mainly based on routines and familiarity. The resulting personalized content on the internet can help us navigate through the overwhelming information supply, but also creates a filter bubble around us, which makes it hard to discover aspects, products or news which might not conform to our predicted preferences. While we do prefer to receive information which conforms to our already existing mindset, we naturally come across new aspects in social life, as we have to deal with people who might not share our opinions and views, but maybe other interests, which encourages us to gain insight into the perspective of others (Pariser, 2012). This effect is weakened by personalisation, since it targets and benefits from predictable habits, both in selling us things we already like and in selling us new things, using pattern analysis to predict the best times to introduce us to new products.

As what we clicked in the past defines what we see in the future, we become defined by our once-set preferences, for which others choose relevant products and news for us to conform to this set profile extracted from our browsing history. The consequence of this personalization in the virtual world is a reinforcement of our existing opinions and preferences, a constructed virtual reality based on what we have already perceived and accepted as appropriate to us. Next to what we clicked in the past, what others with similar preferences have looked at also defines how our affections are predicted, which results in people becoming even more similar to others they are already similar to, turning individuals into categories and piles of properties, reinforcing divergences between differing interest groups, instead of allowing individualization and experiences which do not conform to our past behaviour. This thought also feeds into my reflections on how to stress the alienation from one's actual identity inside the design concept.
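The feedback loop described above, where past clicks steer future exposure, can be sketched as a toy simulation. The topics, weights and click rule are invented for illustration; the point is only the mechanism, not any real recommendation system.

```python
# Toy simulation of the filter bubble feedback loop: content the user
# clicked gets recommended more often, so exposure to other topics shrinks.
# Topics, weights and the click rule are invented for illustration.

import random

random.seed(1)
weights = {"design": 1.0, "politics": 1.0, "sports": 1.0}  # start neutral

for _ in range(200):
    topics, w = zip(*weights.items())
    shown = random.choices(topics, weights=w)[0]  # personalised selection
    if shown == "design":                         # user only clicks 'design'
        weights[shown] += 0.5                     # a click reinforces the topic

total = sum(weights.values())
share = {t: round(w / total, 2) for t, w in weights.items()}
print(share)
```

After a few hundred rounds, the clicked topic dominates the recommendation weights while the unclicked ones keep their initial weight, illustrating how early preferences crowd out everything else.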


In the personal exploration, this filter mechanism did not really show strongly in terms of personal advertising, which could be explained by users not realizing the limitations of the perceived content, while in contrast strongly noticing when things do not conform to what they are used to seeing; I myself reacted strongly to the hunting advertisement purely because it was so extraordinary for me to see something so contradictory to my personal mindset.

4.3 Unreasonable algorithms

As depicted in the exploration part, targeting algorithms seem not to have as clear a vision of who the users are as some might claim. Between somewhat accurate representations and bizarrely mismatched assigned preferences, it stands to question how accurate targeting, and to a broader extent life pattern analysis, can actually be, not only in individual cases but also on a wider level. While the occurrence of contradictory profiling can be caused simply by bad or insufficient programming, it also raises the question of how close interpreting algorithms can actually get to predicting human behaviour and which aspects influence these results. I want to set the results of my exploration into context with mainly two theoretical approaches, one by Mayer-Schöneberger and one by Weizenbaum.

4.3.1 The right to forget

Life pattern analysis is based on the premise that enough data can statistically validate every assumption (Shawe-Taylor & Christianini, 2004). This might be true for a really big group of people, but for single individuals the amount of data which needs to be ascertained has to be extensive as well. The question is: how is a computer supposed to know when to ‘forget’ certain facts about the target, and to assess from which point in time older information is not relevant anymore?
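One common engineering answer to this question, offered here only as an illustrative sketch and not drawn from the cited sources, is to weight each data point by its age so that old traces fade instead of counting forever. The 90-day half-life below is an arbitrary assumption; the point is that some human must still choose it.

```python
# Sketch of recency weighting as one possible answer to "when should a
# machine forget": each data point's weight halves every HALF_LIFE_DAYS.
# The 90-day half-life is an arbitrary assumption, not from any real system.

HALF_LIFE_DAYS = 90

def relevance(age_days):
    """Weight in (0, 1]: exponential decay with the chosen half-life."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

for age in (0, 90, 180, 365, 1825):  # today, 3 months, 6 months, 1 year, 5 years
    print(f"{age:>5} days old -> weight {relevance(age):.3f}")
```

Even such a scheme only dampens old data rather than deciding, as humans do, which episodes actually deserve to be discarded.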

As the human brain is not built to remember an extensive amount of detailed information, and our social network is not built on knowing everything about everyone, we filter and select our social interactions, we vary our actions according to our counterpart, and we revise our evaluations regularly. We develop and evolve because we learn new things, integrate them into our current mindsets and forget things when we do not need them anymore. We might remember certain single events from our past which had a strong influence on us or where


our perception of other people and things (Mayer-Schöneberger, 2009). This distinguishes us from computers, since we, on the one hand, are not able to clearly visualize the amount of small data fragments which can be stored about individuals, while computers, on the other hand, are not able to assess the importance of a piece of information on their own.

Besides the relevance of machines needing to eliminate certain facts about us after a certain time to be able to reconstruct our behaviour, it is also a matter of privacy and the freedom to overcome old episodes of one's life. Recently, the Court of Justice of the European Union decided that Google has to exclude certain personal information under certain circumstances from their search results, meaning they have to delete the connected links from their database (Spiegel, 20.05.2014): a vague injunction, but nevertheless an acknowledgement that human development also includes the discarding of past episodes, and thus a realization of what Warren and Brandeis (1890) demanded already over a hundred years ago with their claim to integrate the right to privacy into the jurisdictional sector, though applied to digital means today. Warren and Brandeis articulated their concerns as follows: “recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing the individual what Judge Cooley calls the ‘right to be left alone‘“ (1890:2), and thereby already take a stand for intervention on a political level to protect the development of the individual.

Even though they referred to technologies such as photography and recording, and to „the evil of invasion of privacy by the newspapers“ (1890:2) in 1890, the same can be said about current inventions such as social sharing platforms and networks, which can be seen as a modern extension to the methods of expression. Independently of the chosen method of expression, every person „generally retains the power to fix the limits of the publicity which shall be given them“ (Warren and Brandeis, 1890:3) (‘them’ being content in any form).

It is further not the question whether data should be collected or produced; the issue is how it shall be used. Neither the creation nor the act of sharing or publishing data or content allows the use of these objects in an unintended manner, according to the public codex of trust in decency and good intentions, which weakens more and more with the opportunities given nowadays, shielded by anonymity and hidden from the public body. Both the economical as well as the political body commit a breach of trust the moment they use data for something other than intended by the user, influencing a society in which distrust towards authorities has a solid breeding ground (Warren and Brandeis, 1890).


4.3.2 Rationality is not logic

Besides the ‘right to forget’, which questions the eligibility of algorithms using every available data trace of ours to reconstruct our digital shadow, it is also a matter of feasibility. The assumption that an algorithm is able to predict future actions relies on the premise that human behaviour can be either statistically or logically explained and therefore continued in its linear consequence.

Von Neumann and Morgenstern (1953) state that "the chief objection against using this very simplified model of an isolated individual for the theory of a social exchange economy is that it does not represent an individual exposed to the manifold social influences" (von Neumann and Morgenstern, 1953:9). They thereby refer to the fact that the social surroundings we live in influence our behaviour, while we in return also shape our environment. This creates a feedback loop which always has to be kept in mind when putting behaviour into context: every action of ours is a reaction to a previous trigger, which might not be obvious from the outside but is entirely rational from the emotional point of view of the individual. The awareness that others know about us therefore changes our behaviour in a profound way.

This contextual web of interactions depends on so many, sometimes so small, factors that we ourselves do not always understand how and why we act the way we do. Computers, though, only act according to a set of internal instructions, which are set by humans who perceive human nature based on their own logic (Weizenbaum, 1976). Rationality therefore cannot be equated with logic: rationality is based on the assumption of material optimization, whereas logical actions lead to personal optimization, which can mean entirely different things to different people. The principles of 'logic' and 'rationality' therefore cannot be interpreted by machines in a satisfactory manner, since no algorithm so far is in a position to reconstruct emotional logic, and probably never will be (von Neumann, 1953). This also lies in the character of logic being applied in a wider context including group dynamics, while rationality is more readily applied to individual decisions, which weakens the compatibility of one with the other: the further the attempt to learn about social group relationships and their correlations is pushed, the more blurred the characteristics of the individual become.


4.4 Conforming behaviour

Based on the possible consequences of information asymmetry, such as redlining and price discrimination, as well as in light of rational algorithms which cannot capture the shades of human logic, users are vulnerable to how algorithms classify them. Though current consequences seem limited to which kind of advertisement we see, on a more sophisticated level these stored behavioural patterns can be used for any kind of service, including banks or insurances, which currently use their own systems of data collection to estimate the qualification of a customer for certain products such as loans or bonuses. This touches on a more tangible and far more sensitive part of our lives, even more so when we consider surveillance practices and their use to identify 'threats' to national interests, which in extreme cases can mean matters of life and death.

The awareness of being constantly classified based on our behaviour, paired with the lack of understanding of how classification algorithms actually work and which aspects result in which kind of classification, creates a knowledge-power complex: knowledge about the counterpart is grounded in knowledge about how to gain that knowledge, that is, how to use the skill, technology or authority. Both types of knowledge thus determine each other, making power a result and a source at the same time. Data, as well as the technology to analyse and interpret it, therefore provides a base for the further extension of power, as it is knowledge in itself and a tool at the same time (Foucault, 1975).

This reinforced power disparity can lead to strong insecurity about which behaviour is risk-free, and therefore to a tendency towards conforming behaviour: adapting to what seems to be accepted by our society and therefore taught to the computers as valid reasons for certain actions, turning the internet into a platform of disciplining and thus into a kind of "postpanoptikum" (Kahmann, 2014). For the intention behind an action to be judged, the action has to be compared, interpreted and analysed against other actions, thereby rendering any action suspicious simply for taking place: in a surveillance society, communication is put under general suspicion, and the omnipresent feeling of being watched and judged changes the general behaviour of people, turning every personal area into a public space (Foucault, 1975).


Combining the impossibility of rationality matching human logic with the tendency to try to conform to the set frame of behaviour when feeling surveilled, I see it as impossible for humans to manage to adapt to that frame, since in a postpanoptic state the frame is strongly shaped by technical rationality and thus not attainable for humans. This aspect of impossible convergence also feeds into the design concept addressed in the following chapter, as it poses a contradiction inside the system that can be addressed in a critical way, exposing this bizarrely enforced correlation between rationality and logic.

To wrap up the theoretical exploration, I want to summarize the premises the theoretical elaboration has reflected upon. Using data mining and behavioural analysis, computers create a virtual mirror of ourselves, trying to capture our characteristics and predict our future actions. They thereby rely on algorithms and analysis technology programmed on the basis of an abstract conception of personality correlations, to recognize familiar aspects and classify the corresponding profile accordingly. These classifications can be used to create a digital landscape on the internet appealing to our assumed preferences by tailoring advertisement, search results and site suggestions, which leads to a polarization of the user's perception, even in ways which might not suit the actual preferences and interests of the user. This strengthens the power disparity between the public on the one hand and the economic and political body on the other, as it limits the autonomy of the user in both economic and political respects, and can have a reshaping effect on behaviour patterns, following the need to conform to desired patterns.
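
The tailoring mechanism summarized here can be illustrated with a minimal, hypothetical sketch (function, catalogue and tags are invented for illustration, not taken from any real platform): a filter that only surfaces content overlapping the assumed profile, so that unfamiliar content is never shown and each round of clicks narrows the profile further.

```python
def recommend(profile_interests, catalogue, k=3):
    """Hypothetical content filter: rank items by overlap with the
    assumed profile and drop everything outside it. Clicks on these
    recommendations then feed back into the profile, narrowing it
    further -- the filter-bubble loop in miniature."""
    scored = [(len(profile_interests & item["tags"]), item) for item in catalogue]
    scored = [pair for pair in scored if pair[0] > 0]  # unfamiliar content is never shown
    scored.sort(key=lambda pair: -pair[0])
    return [item["title"] for _, item in scored[:k]]

catalogue = [
    {"title": "Transfer rumours", "tags": {"sports"}},
    {"title": "Election analysis", "tags": {"politics"}},
    {"title": "Match highlights", "tags": {"sports", "video"}},
]

# A profile classified as 'sports' is never offered the politics item.
print(recommend({"sports"}, catalogue))
```

The polarization described above corresponds to the filtering line in the sketch: content with zero overlap is not ranked low, it is removed entirely from the user's view.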

Table 1 (top): Results of the search for 'consequences of abortion'
