
Study of a relationship

Designerly explorations of machine learning algorithms

Agnieszka Billewicz

Interaction Design

One-Year Master’s Programme (60 credits) 15 credits

Spring 2017 – Semester 2 Supervisor: Elisabet Nilsson


ABSTRACT

Study of a relationship is a 10-week Research by Design project that explores the space of intersection between Design and Machine Learning. It is a series of design engagements and experiments, heavily grounded in the present time and simple technology, that produces semi-abstract knowledge on the relationship that can be established between humans and Machine Learning artefacts. This research strives to propose an alternative designerly approach towards Machine Learning, one that would promote evoking positive emotions, usage for personal purposes and understanding of the basic principles behind the technology, thus putting the human in the position of control.

TABLE OF CONTENTS

1. INTRODUCTION: MACHINE LEARNING

1.1. Design opening

1.2. Aim of the research

2. THEORY

2.1. Background

2.2. State of the art

2.3. Existing research

3. ETHICAL CONSIDERATION

4. METHODOLOGY

4.1. Methods

4.2. Vocabulary

5. DESIGN PROCESS

5.1. Desk research

5.2. Survey

5.2.1. Knowledge of Machine Learning

5.2.2. Attributes of technology

5.2.3. The issue of control

5.3. Iteration I: Swearing Machine

5.3.1. The Concept

5.3.2. The Prototype

5.3.3. The Insights

5.4. Iteration II: Co-design workshop

5.4.1. Exploration

5.4.2. Training

5.4.3. Use

5.4.4. Sweater Weather

5.4.5. The insights and future steps

5.5. Iteration III: Silly Lamp

5.5.1. The concept

5.5.2. The prototype

5.5.3. The user-testing

6. STUDY OF A RELATIONSHIP

7. DISCUSSION

8. CONCLUSION


1. INTRODUCTION: MACHINE LEARNING

One of the first informal definitions of Machine learning (ML) is the one proposed by Arthur Samuel in 1959: the "field of study that gives computers the ability to learn without being explicitly programmed" (Ng, 2017). ML studies algorithms that allow machines to learn to perform specific actions based on provided examples, observations, and data, rather than being programmed step by step. The bigger the data set the machine is trained on, the greater the accuracy of performance that can be expected. There are several types of ML algorithms, the two main ones being supervised learning (which teaches the machine by providing examples – the one this thesis project focuses on) and unsupervised learning (where the machine learns by finding patterns in the data), while others include reinforcement learning or recommender systems (Ng, 2017).

Research in the field of ML is pursued by many scientists, mainly focusing on the development of algorithms with future applications in mind, like self-driving cars, medical diagnostics or human-like Artificial Intelligence systems (Ng, 2017). However, technology based on ML algorithms is already commonly used in products and services we use every day. Some of the best-known examples of applications are: e-mail spam filtering, speech recognition used by virtual assistants, web search, tools for studying the human genome, or advertising based on your web history. Considering the characteristics of the interaction, it can be claimed that in most of these applications it is possible for the subject/user to be completely unaware of the technology behind them. The interaction is somehow indirect – the machine is trained on data provided by the user, however the data is provided unintentionally as a byproduct of the user's actions. Stanford University's Massive Open Online Course on Machine Learning states in its course description that "Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it" (Stanford, 2017). What should also be mentioned is that machine learning is widely considered a subarea of the bigger field of Artificial Intelligence (Shapire, 2008). Due to this connection, as well as the future possibilities emerging from research in the field, ML has become a hot buzzword, with interest in it rising (see Figure 1) together with misconceptions, fears and fascinations.

Figure 1. The interest in "machine learning" over the last five years according to Google web search. Numbers indicate the frequency of searches for this term, where 100 is the highest popularity. Source: Google Trends, accessed on 14 May 2017. https://trends.google.com/trends/explore?q=%2Fm%2F01hyh_

1.1. Design opening

Machine learning has made its way into our everyday life. It is no longer only a technology of tomorrow but rather already a technology of today. However, even though the technology itself is not new, the opportunities for designers to work with it opened up quite recently – with the first open-source machine learning software, e.g. Wekinator (Fiebrink, 2009). This has resulted in the current state of the field, in which there seems to be a gap between technology-driven research with utility-focused application proposals, heavily grounded in the tradition of computer science, and the more recently emerged explorations of machine learning from an artistic point of view (for example the work of Rebecca Fiebrink (2009), Andreas Refsgaard (2016), William Anderson (2017) or Shinseungback Kimyonghun (2016)). Therefore, this project aims to explore the domain from the less common design perspective, as opposed to the commercial, technological or artistic ones.

In the context of this project, the design perspective is characterised by a focus on materiality, experience and interaction, in contrast to perspectives focusing on possible profit (commercial), the complexity and feasibility of the algorithm (technological), or making a statement, evoking emotions and provoking discussion (art). In other words, I want to bring attention to the actual singular human-computer interaction in real time, rather than discuss what may be its prerequisites (technology), results (art) or surrounding system (market).

1.2. Aim and scope of the research

How can we design machine learning products and services that empower humans? In general, this research strives to propose an alternative designerly approach towards machine learning, one that would promote evoking positive emotions, usage for personal purposes and understanding of the basic principles behind the technology, thus putting the human in the position of control. The aim is to provide initial research into the ML field from a design perspective through concepts, prototypes, and a study of a human-machine relationship that evolves over time.

This approach is based on grounding the research heavily in the present. It is not within the scope of this research to discuss the undeniable dangers or exciting possibilities of future technology, but rather to promote the use of very simple algorithms and technology that has already been accessible for a long time, in order to create simple artefacts that an individual can understand, adapt, and use for personal purposes. It is an attempt to address the tendency of researchers and practitioners in the field to design for the potential future. Dourish and Bell described this phenomenon as "the problem of the proximate future" (Dourish & Bell, 2011). Amongst many projects describing a future that seems to be almost here, this project aims no more and no less than to explore the now of machine learning. After all, "the best way to understand the future is to do your best to create a local approximation and try to use it every day" (Dourish & Bell, 2011).

2. THEORY

2.1. Background

At the core of this project lies curiosity about the relationship that can be formed between humans and technology based on ML algorithms. Artificial Intelligence (AI) is a field that generates a great deal of heated debate and is more often than not associated with negative emotions, ranging from mistrust to repulsion. The issue deepens with the negative portrayal of AI in culture, especially in the Sci-Fi movement. It is not without reason – although ML algorithms are widespread amongst products and services we use every day, they are mostly used for strictly commercial purposes. Even if users are in fact aware of the algorithms tracking and learning from their personal data, they do not have the ability to understand or control them. The black box of AI exists, and what is worse, it is created by someone who does not necessarily have our best interest in mind. Moreover, the algorithms are too complex for a human mind to understand the rules behind certain decisions and the various consequences of applying the algorithms in certain market sectors (including ethical or socio-political ones). Consequently, fear is an important factor to be noted when discussing the current relationship between people and ML algorithms at the macroscale.


On the other hand, the growing interest in the ML field has reached the point where the phrase exists in pop culture as a buzzword. Not being able to comprehend the algorithms becomes a source of fascination rather than fear. Especially in the light of recent advancements in AI development, machine learning presents many exciting new possibilities. No wonder that for a certain number of people ML stands for everything "shiny, technological and by default better". That could be especially true for the growing generation of kids – digital natives – who have already developed a diametrically different approach to technology, based on openness and familiarity (a great example is a recent viral video of a girl hugging a water hydrant and saying "Hi robot. I love you robot" (marxj1, 2017)). More and more people are finding themselves at this end of the spectrum – uncritically acclaiming new technologies, seeking value and salvation in their development, or failing to recognise the technology as "other" rather than human-like.

Both of these mindsets fuel design approaches with a very specific power relation. The user is not in the position of control, no matter whether they fear or uncritically embrace the technology. How (and if) can we change adults' relationship with AI? This TP1 project is an exploration of a possible design approach that could address the issue of the relationship between humans and technology based on machine learning. Can we re-negotiate it by proposing a more human-centred approach? If and how can we evoke positive emotions through ML-powered computing, and how would this relationship develop? Is it possible to empower people through these artefacts or environments? What are the other ways humans could experience the world through ML technology? How can the human-AI relationship develop over time as the agency of the artefact shifts? The issue of the "black box of AI" or the proclaimed human fear of machines are often stated as reasons why seamless and unobtrusive interaction paradigms cannot yet be implemented without ethical concerns. But can we reach a context-aware, behaviour-aware environment by gradually building trust and understanding? What would a human relationship with an ambient environment actually look like? How would the relationship with an entire infrastructure be gradually re-negotiated? Was Yvonne Rogers right when she wrote, back at the beginning of the discussion on ubiquitous computing in the HCI movement, that it "fails to create the kind of expansive, playful, and engaging experiences that promote human participation in the new domains" (Dourish & Bell, 2011)? Could we ensure democratic, inclusive, and engaging experiences through the first stages of personal "training" of ML algorithms? And how does an ambient environment mediate our relationship with the world? Can an alterity relation shift into a background relation, and does this change open the design space for exploration (Ohlin & Olsson, 2015)? What transition mechanism would be appropriate for such a shift? Lastly, how should we communicate with artefacts based on ML? Can we look for non-verbal semiotic resources to enable multimodal interaction that does not support anthropomorphism? Could those semiotic resources later supplement the meaning-making process when the technology finally masters the use of language? All of these are just some, out of a vast number, of questions that could be explored by applying a design perspective to ML research.

2.2. State of the art

To contrast with the current situation of applying ML algorithms mainly for the purpose of extracting and storing massive amounts of personal data in order to learn from it, create individual consumer profiles and customise provided services, I initially looked into state-of-the-art examples following the notion of Positive Design (Desmet & Pohlmeyer, 2013). This decision is based on the idea that one way of renegotiating the human relationship with technology could be to focus on the flourishing of the subject (either human or machine), rather than the benefit of the surrounding system (e.g. corporations). Consequently, a lot of Positive Design artefacts were especially relevant and inspiring for this research: the ones explicitly focused on use appropriated by an individual to reach personal goals, create subjective meaning, and develop a specific relationship with one particular product. An example is the Chocolate Machine created by Flavius Kehr, Matthias Laschke and Marc Hassenzahl (see Figure 2).


Figure 2. The Chocolate Machine. Copyrights: Matthias Laschke. Used with permission.

The Chocolate Machine (Kehr et al., 2012) is a desk machine that releases chocolate balls onto the user's desk every 40-60 minutes. Each time, the user can decide to put the candy back into the machine or eat it. This is a physical representation of the psychological theory of Ego Depletion, which claims that willpower can be trained similarly to physical strength. The user continuously faces temptation and can therefore use the machine to improve their own self-control. This artefact can be seen as a very playful exploration of personal use of technology. The approach, however, while not surprising in the areas of game design or tangible interaction, could be a novel way of exploring ML technology.

A state-of-the-art example that actually incorporates machine learning algorithms is Eye Conductor by Andreas Refsgaard – a musical instrument for the physically disabled that lets you create music with your facial expressions (see Figure 3).


Figure 3. Eye Conductor. Copyrights: Andreas Refsgaard. Used with permission.

The musical interface, built with an eye tracker, a webcam, and Processing code, empowers people who might not even be able to produce any sounds by themselves due to physical disabilities to compose music (Refsgaard, 2016). Most of Andreas' work is relevant background for this thesis project, as it addresses the power relations between people and technology based on ML algorithms, aiming at shifting the human role from passive consumer to active co-creator and coordinator of their personal technology. However, Andreas' work, similarly to many recent examples (including the work of Rebecca Fiebrink, creator of Wekinator), is an artistic exploration of machine learning. Following the same values, this research positions itself in parallel to those explorations.

The example closest in its idea to the research described in this paper, and the biggest inspiration behind this TP1, is Bjørn Karmann's Objectifier (see Figure 4). It is a device that lets you plug in any of your domestic appliances and turn them off and on by performing a gesture or movement in front of a camera. Thanks to machine learning algorithms, the user can themselves train the device on which gesture should trigger the on/off function.


Figure 4. Objectifier. Copyrights: Bjørn Karmann. Used with permission.

2.3. Existing research

As stated at the beginning of this text, there is a research gap when it comes to the intersection of design theory and ML in academic texts, in comparison to how established these technologies are in our daily lives and the massive coverage of the topic in online sources. It is not only my own point of view: only a few days ago (May 2017) an article, UX Design Innovation: Challenges for Working with Machine Learning as a Design Material, was published in which the authors also see this gap, as they state that their "objective is to initiate an innovative research and education agenda for UX design that explores what ML might be from a design perspective" (Dove, Halskov, Forlizzi & Zimmerman, 2017). In their review of literature concerning design and ML, the authors recognise an entire section of academic texts dealing with the control relations between users and technology (e.g. Schneiderman & Maes, 1997), but in a broader sense of seemingly "intelligent" systems rather than specifically targeting the ML field.

However, this situation is changing. The growing interest in ML is also becoming visible in the design community. Ideas of alternative designerly approaches to ML are emerging: during last year's CHI conference, a workshop on Human-Centered Machine Learning (Gillies et al., 2016) was held with the sole purpose of bringing together researchers in the "emerging field" (Gillies, 2015). The research described in this paper directly contributes to the notion of human-centered ML. Specifically, the thesis project builds on the theory of Interactive Machine Learning (Fails & Olsen, 2003), characterised in contrast to traditional ML processes by "rapid train-feedback-correct cycles, where users iteratively provide corrective feedback to a learner after viewing its output" (Amershi, Cakmak, Knox & Kulesza, 2014).

3. ETHICAL CONSIDERATION

The Machine Learning field, and the process of gathering data for training sets, is a topic that raises a lot of ethical concerns. Although those discussions are not within the scope of this research and, due to the short time frame, cannot be addressed sufficiently, I strove to take them into consideration during the design process. The ethical considerations of data privacy are physically reflected in the prototype (the choice of IR communication over Wi-Fi, or pixelating camera views).

4. METHODOLOGY

The described work should primarily be treated as research by design executed in a very short time frame, that is, an initial exploration into a larger research topic. As such, it strives to follow the scientific criteria proposed by Jonas Löwgren (2007):

- Novelty, as it addresses a gap in theoretical knowledge and the approach proposed by the project is not part of generally acclaimed and maintained strategies.

- Relevance, as it should be of interest to the interaction design community due to the apparent lack of established knowledge and skillsets for working with ML as a design material (Dove et al., 2017), and it is externally relevant to practitioners and users as ML technology becomes more and more pervasive and ever-present.

- Groundedness, as it is initially analytically grounded in reasoning, and over the course of the project it acquires theoretical grounding from literature and surveys, as well as empirical grounding from workshops and user tests.

- Criticizability, as it is an exploration of the topic rather than a hard statement, inviting discussion and raising questions, open to criticism of the assumptions the work is based on and of its conceptual construction.

Working within the domain of research by design, this thesis uses the strategy described as "exploring the potentials of a certain design material, design ideal or technology" (Löwgren, 2007). Therefore, the entire project experiments with the design material of ML and the space of possible new artefacts, with the aim of introducing a semi-abstract knowledge contribution. This contribution is a set of concepts and qualities that can be applied to ML artefacts by other designers and researchers, preferably with the aim of exploring the design space through these specific aesthetics.

The specific approach for the design work done is Concept-Driven Interaction Design Research as defined by Erik Stolterman and Mikael Wiberg (2010):

“1. The point of departure is conceptual/theoretical rather than empirical.

2. The research furthers conceptual and theoretical explorations through hands-on design and development of artifacts.

3. The end result—that is, the final design—is optimized in relation to a specific idea, concept, or theory rather than to a specific problem, user, or a particular use context.”

In opposition to an approach that focuses on the situation at hand, Concept-Driven Interaction Design Research focuses on the interaction itself. As the aim of the research is to conduct initial explorations into interaction with ML technology in general, it should be understood that the situational context of the specific tested concepts is not the focus. That is why the thesis covers a small repertoire of explored concepts.

4.1. Methods

Based on the methodologies of Research by Design and Concept-Driven Interaction Design Research, the design process incorporates a variety of methods also originating from other methodologies, e.g. Participatory Design. The incorporated methods are described in the later Design Process section and involve: desk research, Buxtonian sketching (Buxton, 2010), a survey of a sample of 75 people, prototyping (Houde & Hill, 1997), a co-design workshop, enactments, interviews, and user tests that could be considered Technology Probes (Hutchinson et al., 2003). Studying the relationship between human and technology, I chose methods that either engage human participants or explore the material of the technology.

4.2. Vocabulary

As crafting interactions is within the core interest of this project, I will write a lot about the aesthetics of interaction. To facilitate this process I will describe the interactions using attributes coming from the Interaction Vocabulary developed by Eva Lenz, Sarah Diefenbah and Marc Hassenzahl (2013).

5. DESIGN PROCESS

5.1. Desk research

The initial phase, as well as a big part of the entire process, was desk research. ML is a vast and complicated field, and one that is sadly neglected during design education. In order to reach a basic state of familiarity I enrolled in two online courses (see Figure 5).

Figure 5. Two online courses (Stanford, 2017)(Goldsmiths, 2017) used to gather knowledge of ML. Screenshot 14 April 2017


The courses followed in preparation for this research were: Machine Learning by Stanford University on the portal Coursera (Stanford, 2017) and Machine Learning for Musicians and Artists by Goldsmiths University of London on the portal Kadenze (Goldsmiths, 2017). These two resources describe fundamental knowledge about the ML field and can be traced back as the sources of my knowledge about its basic rules and technical vocabulary. What is interesting is that these resources contrast with each other in the same way as the theoretical research does, which could be treated as another argument for the existence of the gap described earlier. Clearly, the dichotomy between the technical and the artistic approach has consequences beyond the current state of research.

5.2. Survey

To even think about the possibility of proposing an alternative design approach that could re-negotiate the relationship between ML artefacts and people, the current relationship should be investigated. During the desk research phase it became apparent that it is not only research through design that is missing from the academic texts, but also research for design. Consequently, a survey was conducted to inform the prototype explorations and get a view of the design space (see appendix A).

The online survey was filled in by 75 people aged 18-65. Due to time restrictions and the focus of the entire project this group is small and could not be called representative. However, as much as the curated self-selecting participation model allowed, the survey was aimed at providing a general insight into the knowledge of and approach to ML technology in different cultures (mainly Sweden and Poland) and in different fields (Computer Science, Art, Design and other). Roughly a third of the participants come from Sweden (24 people, 32%), slightly more from Poland (32 answers, 43%), and the remaining 19 answers (25%) come from other countries (Austria, Denmark, France, Germany, Hungary, Netherlands, Slovakia, Spain, Switzerland, United Kingdom, United States of America). The sample consists of three groups when it comes to professional affiliation: 27 people coming from the field of Computer Science (36%), 21 connected to Design (28%) and 27 working/studying in other fields (36%). The survey contained two multiple-choice questions, four free-text questions and one that requested an assessment on a numerical scale. All the responses, collected over a period of one and a half weeks, were analysed with simple statistics. The free-text responses were categorized with a key to find patterns among the responses and work with the quantitative data. In the following sections I will analyse the gathered data. The described percentages of participants are always given in relation to all the answers (100% = 75 answers).

5.2.1. Knowledge of Machine Learning

Out of all the respondents, 25% claimed not to know what machine learning is, most of them coming from fields other than Computer Science or Design (19%). The fact that a massive part of the sample (75%) claimed to know what machine learning is can partly be explained by roughly half of those answers belonging to Computer Scientists (CS). Still, even in the CS-oriented group, 7 participants out of 27 (26%) admitted that they do not know how machine learning works. Moreover, the proclaimed knowledge of what machine learning is does not always seem to hold true. Out of the 75% of participants who answered yes to Do you know what machine learning is?, more than one third (29% of all answers and 39% of yes answers) admitted they do not know how it works. What is more, out of the 34 people (45%) who claimed to know not only what ML is but also how it works, only around half – 16 participants (21%) – were able to correctly identify the general idea behind it. Another 10 of them (13%) gave incomplete explanations, e.g. ones that are true for only one specific type of algorithm (which suggests they may think all types of algorithms work on this basis). In conclusion, knowledge of the machine learning field often seems to be merely proclaimed. Of all the participants who are not connected to the field of CS or Design (so the closest to the target group of people not connected to the ML field), half do not have an idea of what ML is or have never heard of it. This finding could suggest that there is space for developing a design approach that helps people understand the general idea behind ML. As the survey shows that actual knowledge of ML can be shaky even for those somehow connected to the field, the concepts and prototypes developed in later stages will focus on reflecting the very basic idea of ML: working without being explicitly programmed, with performance improving together with the amount of data.

5.2.2. Attributes of technology

One of the questions most interesting for the design process was What would you like the modern technologies to be like? This question was inspired by a recent international survey on how people would like to communicate with AI – Do you speak human? by SPACE10 (2017). While I do believe that asking the general public what kind of relationship they would like to establish with new technologies is tremendously important, I felt that the multiple-choice questions were very restrictive and suggestive. For instance, the question "How would you like the AI to behave?" has three pre-written choices: "Motherly and protective", "Autonomous and challenging" and "Obedient and assisting" (SPACE10, 2017). Obviously they do not cover the whole spectrum of possible relationships that can be established between humans and AI. That is why in my own survey, for the purposes of this research, I used free-text questions to explore what kind of relationship people would like to establish with ML technology. As the sample group is very small and not representative, these answers served as inspiration rather than strict guidance for the design process. Attributes written by the participants included: relevant, personal, predictive, transparent, empathic, forgiving, user-friendly, helpful, controlled, safe.

5.2.3. The issue of control

The last two questions in the survey focused on the power relations between users and technology (see Figure 6).

Figure 6. Answers to questions about power relations

Out of all the participants, more than half (65%) feel that they are in control of the technologies they use. Notably, in the sample of people connected to Computer Science, only 2 out of 27 people do not feel in control (8% of participants from the field of Computer Science). On the other hand, in the same-sized sample of people who are not connected to either Computer Science or Design, 14 participants out of 27 feel that they are the ones being controlled (50% of the Other category). Of course these numbers are also connected to age, country and other factors. Still, there are grounds for a theory that the greater the knowledge of and experience with the field, the easier it is to feel in control, which only seems natural. Such an assumption would support the idea that by exposing people to ML artefacts and providing basic knowledge, one is influencing the power relations and enhancing the feeling of control.

5.3. Iteration I: Swearing Machine

Building on the insights from the survey and striving to explore novel ways to experiment with ML artefacts, the concept of the Swearing Machine was born. Many of the state-of-the-art design examples researched at the beginning of the design process (like the Positive Design artefacts from DIoPD (2012) or the Pleasurable Troublemakers of Matthias Laschke (2017)) followed similar aesthetics of use, which I have represented in this paper by the example of the Chocolate Machine. Regardless of the specific situation at hand presented by the examples, they shared qualities like playfulness, engagement, and simplicity. They approached human interaction lightly, in a direct and non-serious way. This approach struck me as one that could allow me to explore the relationship between the human and the artefact and, foremost, as strikingly opposite to the mainstream approach to designing ML-based products and services. For the first iteration of the design work this playful strategy was mimicked by the Swearing Machine concept. The concept is the result of a session of Buxtonian sketching (Buxton, 2010) in the Wekinator software and theoretical discussions with supervisors and peers.

5.3.1. The concept

The Swearing Machine is a simple non-serious artefact that you can train to detect your swearing and react in an ambient way by changing colour (see Figure 7). It allows the user to appropriate a simple Machine Learning algorithm for their own, very personal, use in self-development. It is a designerly experiment that lets the user have a non-serious, direct interaction with ML. It can be trained with an example of a swear word the person would like to stop using habitually, but in fact it is very open-ended in its simplicity: one can easily imagine training it with the name of a former romantic partner or an artificial sound. It is seemingly useless, non-utilitarian and completely silly. However, it is also the simplest representation that lets the user experience two simple truths of ML: that it is only a vessel until fed with example data, and that the performance of the algorithm (in this case detecting a certain sound) improves with the number of training examples.


Figure 7. The Swearing Machine. Concept Sketch

5.3.2. The prototype

Due to the short timeframe of the project, the prototype for the Swearing Machine, made at the beginning of the process, was a digital solution instead of a tangible artefact. It was based on the Dynamic Time Warping method, which enables us to compute the similarity between two sequences of data over time – in this case the sequences being the speech recording of a single word and the recording in present time (or, to be exact, the most recent sequence of the recording made up to the present moment). The audio features used are MFCCs (features commonly used for speech recognition). They are extracted from the recording by an openFrameworks program and sent as an input of 13 values through an OSC message to Wekinator, an open-source machine learning software created by Rebecca Fiebrink (2009). For the purpose of rapid prototyping the output destination was a Processing sketch coming from an example bundle of Wekinator – which changed the background colour of the sketch (see Figure 8).
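
To make the output side of this pipeline concrete, the listing below is a minimal Processing sketch of my own, approximating the colour feedback rather than reproducing the code from the Wekinator example bundle. It assumes Wekinator's default behaviour of sending its output as an OSC message to the address /wek/outputs on port 12000; the received value is mapped onto the background colour, which then fades back towards a neutral grey.

// Minimal approximation of the output side of the Swearing Machine prototype.
// Assumes Wekinator sends its output to /wek/outputs on port 12000 (its defaults).
import oscP5.*;
import netP5.*;

OscP5 oscP5;
float match = 0;  // most recent output value received from Wekinator

void setup() {
  size(400, 400);
  oscP5 = new OscP5(this, 12000);  // listen on Wekinator's default output port
}

void draw() {
  // neutral grey when idle, red when a trained word has just been detected
  background(lerpColor(color(60), color(200, 30, 30), constrain(match, 0, 1)));
  match *= 0.95;  // fade back towards idle
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/wek/outputs")) {
    match = m.get(0).floatValue();  // first (and only) output of the model
  }
}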


Figure 8. Screenshot from the process of prototyping the Swearing Machine

By maximising this window to full-screen view I could use the entire monitor as part of the prototype. I trained it with one chosen word and left it on a desk for a day, to react to the sounds happening in the room. Therefore it could function as a role and implementation prototype according to Stephanie Houde & Charles Hill's (1997) model, but not as a look & feel prototype (see Figure 9).


5.3.3. The insights

First and foremost, I see design experiments as opening the design space for new questions and reflections. How can one create an intimate one-on-one relation with an ML-based artefact? What is the simplest way to embody the training experience? How does experimenting with ML alter your own relation with technology? Can ML be used to facilitate self-focused personal development? Could and should technology help us become better humans? Are there areas of life too human to be linked with technology?

Interaction with the Swearing Machine was engaging, spatially proximate and direct at the beginning, during the phase of training. Over time it drifted towards being inconstant, spatially separated, approximate and gentle. This process triggered different emotional responses over time, from curiosity and amusement to boredom and annoyance over falsely triggered output. Not only the testing but the entire process of set-up, training and experimenting had started to change my own relation with the ML field – it laid the foundations for thinking about ML as a design material rather than a black box of advanced technology. Reflecting critically on the prototype, the ML model proved to be quite ineffective, as random sounds kept activating it, which could not be completely eliminated by adjusting the threshold or providing a greater number of examples. Moreover, while the concept proved itself to be quite effective at reflecting the simplest assumptions of ML, it could not provide the user with more than that – it is harder for humans to think about patterns and similarities between speech sounds than, for example, between visuals. It could be harder to think about how the model compares two sound sequences. Therefore, I decided not to proceed with speech recognition in later experiments and to explore different types of inputs. I did like, however, how the colours proved themselves to be a well-suited output for an ambient display in the second phase of passive usage. This resulted in a desire to work with light as an output in the next iteration of the prototype, to allow the output to fall into the periphery of the user's attention in long-term use and hence to study the change of relation over time. The prototype seemed to succeed in following the specific aesthetics described in the concept, and its silly, non-serious premise, in combination with the socially relevant action of swearing, triggered some emotional reactions as expected: a group of young people reacted to the Swearing Machine with laughter, jokes and amusement, making it into a light, social situation of friends meeting. On a symbolic level, though, one can wonder whether self-development mediated by technology does not conflict with the idea of empowering people by putting them in direct control of training the algorithm. Who trains whom in this situation? The strong characteristics of this prototype's situation at hand resulted in a decision to choose a more universal situation for the next iteration. This could ease the understanding of the overall focus of the research, which lies in the explored approach rather than a specific context of use.

5.4. Iteration II: Co-design workshop

Having in mind a general idea for the second iteration of the concept to be based on light, the process of hardware sketching was started. The relevant parts of the technological explorations will be described in the prototype of the Silly Lamp (5.5.2). In the meantime, while struggling with the technical solutions, a co-design workshop was organised to inform later concepts in a more empirical way and to study more deeply the relationship between various people and ML algorithms. The entire workshop was divided into phases similar to the ones already established in the described design approach: exploration, training, use. The structure therefore mimicked the evolution of a relationship with an ML artefact over time, with exercises focused on the corresponding actions. The co-design session involved two participants: one male and one female, both with design backgrounds, who respectively assessed their own knowledge of the machine learning field as 3 and 4 on a scale from 1 to 10. Their initial emotions and associations with the field of machine learning were: search engines, Twitter bots, efficiency (acceptance) and The Matrix, Black Mirror, Google, Spotify (scepticism, lack of trust).

5.4.1. Exploration

The first of the design engagements introduced during the workshop was focused on studying how people explore ML algorithms and how they can build their understanding of them. The participants were situated on opposite sides of a small table. One of them was equipped with a pen and blank sheets of paper, whereas the other used a computer with a Processing sketch of a Nearest Neighbour Classifier with 3 classes powered by Wekinator. During the exercise, one of the participants would start marking points on the paper that he would like to serve as training examples. The other participant reacted by marking those points in the Processing sketch, adding them as training examples of classes of her choice. As a next step, the algorithm would be trained on these examples. Then the participant marking points would start marking control points. Each time, he was informed by his partner about the class of the point: Red, Green or Blue, according to the output of the model. His tasks were to try to understand and explain the rules behind the algorithm, as well as to try to conceptualise the decision boundaries (see Figure 10).

Figure 10. Understanding Nearest Neighbour Classifier Exercise
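
For readers unfamiliar with the algorithm behind this exercise, the sketch below is a self-contained Processing reconstruction of the general idea, not the Wekinator-powered sketch actually used in the workshop: clicking adds a training example of the currently selected class (keys 1-3), and every pixel of the canvas is then coloured by the class of its nearest training example, making the decision boundaries visible.

// A minimal nearest-neighbour classifier with 3 classes, approximating the
// sketch used in the workshop. Click to add a training example; press 1, 2
// or 3 to choose the class of the next examples.
ArrayList<float[]> examples = new ArrayList<float[]>();  // each entry: {x, y, class}
int currentClass = 0;
color[] palette;

void setup() {
  size(400, 400);
  palette = new color[] { color(220, 60, 60), color(60, 180, 90), color(70, 110, 220) };
  noLoop();  // only redraw when the training set changes
}

void draw() {
  // colour every pixel by the class of its nearest training example,
  // which makes the decision boundaries of the model visible
  loadPixels();
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      int c = classify(x, y);
      pixels[y * width + x] = (c >= 0) ? palette[c] : color(240);
    }
  }
  updatePixels();
  // draw the training examples on top
  stroke(0);
  for (float[] e : examples) {
    fill(palette[(int) e[2]]);
    ellipse(e[0], e[1], 10, 10);
  }
}

// return the class of the nearest training example, or -1 if there are none
int classify(float x, float y) {
  int best = -1;
  float bestDist = Float.MAX_VALUE;
  for (float[] e : examples) {
    float d = dist(x, y, e[0], e[1]);
    if (d < bestDist) { bestDist = d; best = (int) e[2]; }
  }
  return best;
}

void mousePressed() {
  examples.add(new float[] { mouseX, mouseY, currentClass });
  redraw();
}

void keyPressed() {
  if (key >= '1' && key <= '3') currentClass = key - '1';
}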

This engagement was not designed to explain to the participants the way the algorithm works, but rather to study how they approach the problem of finding the reasoning behind the model. The points the participant had been choosing were more or less structured, not placed randomly but in a pattern of similar distances and directions. His strategy was to start with re-checking all of the examples the model had been trained on for their classes, and only afterwards did he proceed with exploration of the empty space. Projecting this kind of strategy onto human-artefact interaction gives an important clue about designing the feedback of the interaction. Feedback responses from the artefact that signify whether a given input directly corresponds to one of the examples in the training set, or was placed in the class through more distant similarities inferred by the learning algorithm, could facilitate the user's process of sense-making.

As predicted, the user had trouble understanding the pattern behind the classifications and felt frustrated at times. This reinforces the insight from the survey that, for the purpose of creating an artefact that facilitates understanding of ML, it is better at this point to focus on a very simple reflection of the input/model/output relation, rather than any kind of more sophisticated understanding of the algorithm.

5.4.2. Training

The next phase of the workshop was based on the enactment method. After a simple introduction explaining the simplest premise ML is based on, the participants were asked to enact the simple model from the previous exercise in the real world. During the enactment they took turns, with one of them playing the user and the other the artefact that uses light as an output for the ML model (see Figure 11). In the first exercise the type of data in the training set was pre-defined: the coordinates of the points in the two-dimensional space of the paper/interface. This time the point of the exercise was to see what kind of features the participants would choose by themselves and which explorations would come to them naturally. The features explored by the participants were: speed, posture, height of a certain point in space and volume of sound. It is easy to notice that the participants mainly chose visual types of input, probably the most natural for human recognition. It was easiest for them to comprehend and train on the speed feature, which could be connected to the lack of noise in communication (everything in the environment except the user was still, while for example the loudness feature was met with confusion from the person enacting the algorithm, who did not know if he should also react to the sounds of the environment or just the voice of the user). The feature that resulted in the most positive emotional response was the volume of sound, which was mostly connected to the loudness of speech. "I feel empowered, like it reinforces what I am saying", said the participant while enacting. Possibly, the emotional response could be connected to the fact that it was one of the effortlessly controlled features – it was easy to control, while at the same time no additional action was needed other than talking, which everybody in the room was already doing to communicate, which made it feel less fake or artificial. All of the insights from this phase shaped the interaction design of the third iteration of the concept.


Figure 11. The enactment

5.4.3. Use

The third phase of the workshop focused on the long-term use of a possible artefact and the related feelings and situations. It was a lengthy session of open discussion and interviews. The structure was inspired by Interaction Driven Design techniques (Meang, Lim & Lee, 2012). Starting from interaction, not in the physical sense of movement but in the more general sense of the action of training, the conversation focused on the situation, needs and function respectively (see Figure 12).


Figure 12. Participants’ notes from discussion following Interaction Driven Design structure

The session finished with a re-labelling exercise in which the participants passed a prop, a colourful lightbulb, to each other, played with it and shared their thoughts, emotions and possible scenarios (see Figure 13). The participants showed curiosity about an artefact with more utilitarian functions and talked about different ways they could use the artefact to address particular personal needs, mainly by visualising and categorising data (e.g. stress level). All of these insights led to the next iteration of the concept: Sweater Weather.


Figure 13. With the aid of a prop, a participant comes up with his own scenario for an artefact that visualises his level of stress

5.4.4. Sweater Weather

Sweater Weather is a concept for a digital platform using ML to turn an objective weather app into a subjective one (see Figure 12). Since the first iteration of the concept (the Swearing Machine), the strategy has changed from trying to compute similarities in data that is easily and naturally perceived by humans (e.g. speech) to choosing input that might be difficult for humans to interpret (degrees of temperature, speed of the wind, humidity in percentages). After all, what we actually want to know when we check the weather is how many clothes we should put on. The issue is further complicated because every single person has a different perception of temperature and different dressing habits. Sweater Weather allows the user to appropriate ML technology for personal use, training the model with their own preferences and exploring how the algorithm finds patterns between different dates and classifies the weather in a more human way.
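
Since the concept was never prototyped (see 5.4.5), the sketch below is only a hypothetical Processing illustration of its core idea: a handful of days labelled by the user serve as training examples, and a new day's weather is classified by its nearest neighbour among them. The feature values and clothing labels are invented for the example; a real version would also need to normalise the features so that humidity percentages do not dominate the distance.

// Hypothetical sketch of the core idea behind Sweater Weather: classify a new
// day's weather by its nearest neighbour among days the user labelled herself.
String[] labels = { "t-shirt", "sweater", "winter coat" };

// training examples provided by the user: {temperature degC, wind m/s, humidity %}
float[][] days   = { {24, 2, 40}, {12, 6, 70}, {-3, 8, 85}, {18, 3, 55} };
int[]     choice = { 0,           1,           2,           0 };  // what she actually wore

String classify(float temp, float wind, float humidity) {
  int best = 0;
  float bestDist = Float.MAX_VALUE;
  for (int i = 0; i < days.length; i++) {
    float d = dist(temp, wind, humidity, days[i][0], days[i][1], days[i][2]);
    if (d < bestDist) { bestDist = d; best = i; }
  }
  return labels[choice[best]];
}

void setup() {
  // tomorrow's forecast, translated into the user's own terms
  println("Tomorrow feels like: " + classify(10, 5, 65));  // -> "sweater"
}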

5.4.5. The insights and future steps

This iteration of the concept, similarly to the previous and the next one, does not have the focal purpose of providing a plausible solution for the use of ML, but focuses on stimulating the study of the relationship between people and technology. How can one craft interactions that are playful and non-serious but simultaneously informative and utilitarian? Should we use our personal data or protect it? Why do we see the need to classify and label incoming data and information? Where is the line between technology providing suggestions and humans blindly following orders? What are the power relations in technologies that suggest a course of action to us? What if that suggestion is only based on our intentionally provided, direct input?

Due to the short timeframe of the thesis project I did not manage to user-test this solution. The decision to proceed to a third iteration of the concept instead was dictated by too many grey areas that would have to be addressed while prototyping Sweater Weather. The most important of them was that, similarly to the Swearing Machine, the context of use is still quite characteristic – while a more neutral situation at hand could bring the attention to the theoretical approach and the actual interaction.

Moreover, looking critically at the Sweater Weather concept, it could be said that it fails to promote human participation in an engaging way. It also raises certain ethical concerns, as the need to access weather data requires network access, in contrast to the previously and later explored concepts. It already falls into the Quantified Self category, thus opening a whole new set of questions beyond the concern of this short thesis project.

However, it should be noted that the possibility of user testing Sweater Weather could be very useful for this research because of its potential for studying trust over time. It would also allow exploring how humans rely on technology in the decision-making process. Finally, it has greater potential for visualising how the learning algorithm actually works.

5.5. Iteration III: Silly Lamp

5.5.1. The concept

Building heavily on the workshop insights as well as the results of the two previous iterations, the Silly Lamp concept is no longer about any specific situation or action, but about an artefact everybody has in their home: a lamp. The Silly Lamp is a lamp that will do nothing unless you teach it. The concept is meant to explore the relationship that develops with an ML artefact over time and how we can reach a state of context-aware environment that remains in our control. It is a lamp that you can train to recognise certain situations/actions and to react accordingly. The name is an obvious reference to the notion of smart lighting, and it is probably easiest to explain the concept by differentiating it from smart objects. Firstly, it is not an IoT product; it is not connected to a Wi-Fi network. Quite the contrary: since you train it with your data, which I consider a delicate material, it is especially important that it does not record or store the data, nor is it possible to hack it remotely to get access to its sensors. For remote communication it uses IR signals only detectable from a very short range of several steps away. Secondly, it is a bit like a tool or an empty vessel – it will not do anything until you train it, and nothing beyond that. Sure, it may get confused at the beginning, and you need to invest time in it, but there is nothing there that you did not deliberately and directly put inside. It will not automatically react to all your movements and behaviours, connect to your phone, personalise itself automatically based on your actions, anticipate your needs or ever suggest any solution to you. Lastly, it requires time and attention to train it, and therefore establishes a relationship that shifts over time. Its agency changes along with the phases of the relationship: from exploration, through training, to everyday use.

5.5.2. The prototype

The prototype of the Silly Lamp is especially important for testing the relationship between the user and the artefact, but also because of the will to situate the research in the now. A conscious decision on the technological side was to use simple algorithms, open-source software and easily accessible, cheap hardware, so that anybody could appropriate their own lamp at home into a silly one. The technological side of the design process proved to be quite a challenge for a person without experience in hardware and software sketching.

I started by assembling the simplest possible "lamp" by putting together an RGB LED lightbulb, a cable unmounted from an extension cord, a plug and an old recycled ceramic E27 bulb holder. To hack the lightbulb I had to capture the IR signals that control the changes of light intensity and colour (see Figure 14).

Figure 14. Receiving and sending IR signals with externally powered Arduino Uno boards

Receiving, decoding and sending the IR signals was an iterative process of trying out different components and debugging code. In the end, the solution that managed to control the lightbulb remotely was sending raw data from previously recorded IR signals (see Figure 15).


Figure 15. Receiving raw samples of IR signals through the Serial Monitor of the Arduino IDE (right) and comparing them to previously recorded signals in the RC5 protocol (bottom left)

Due to technical problems with connecting the Kinect sensor to the software, and bearing in mind the time constraints, I decided to prototype with the laptop camera. The recorded frames of the camera view are not stored and are simplified into 100 pixels, which makes the view obscure to the human eye but still lets the algorithm find patterns between frames. The pixelated frame was captured with Processing, which sent an OSC message with 100 inputs to Wekinator. The Wekinator software computes patterns between frames using a classifier with nine classes and sends one output through an OSC message back to Processing. A sketch in Processing displays the current class and sends a message to a port used by the Arduino IDE to control the IR LED pinned to the Arduino board (see Figure 16).
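
As an illustration of this pipeline, the listing below is a reduced reconstruction of the Processing side rather than the original sketch: it samples the camera frame down to a 10 x 10 grid of brightness values (100 inputs), sends them to Wekinator, listens for the returned class and forwards any change over the serial port to the Arduino driving the IR LED. It assumes Wekinator's default ports (inputs to /wek/inputs on 6448, outputs from /wek/outputs on 12000), and the serial port name is a placeholder.

// Simplified Processing side of the Silly Lamp prototype: 100-pixel camera
// input to Wekinator, class output forwarded to the Arduino over serial.
import processing.video.*;
import processing.serial.*;
import oscP5.*;
import netP5.*;

Capture cam;
OscP5 oscP5;
NetAddress wekinator;
Serial arduino;
int currentClass = 0;

void setup() {
  size(320, 240);
  cam = new Capture(this, 320, 240);
  cam.start();
  oscP5 = new OscP5(this, 12000);                    // listen for Wekinator's output
  wekinator = new NetAddress("127.0.0.1", 6448);     // Wekinator's default input port
  arduino = new Serial(this, "/dev/ttyACM0", 9600);  // placeholder port name
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  image(cam, 0, 0);

  // sample the frame down to a 10 x 10 grid of brightness values (100 inputs)
  OscMessage msg = new OscMessage("/wek/inputs");
  for (int gy = 0; gy < 10; gy++) {
    for (int gx = 0; gx < 10; gx++) {
      color c = cam.get(gx * cam.width / 10, gy * cam.height / 10);
      msg.add(brightness(c));
    }
  }
  oscP5.send(msg, wekinator);

  fill(255, 0, 0);
  text("class: " + currentClass, 10, 20);
}

// receive the classifier output from Wekinator and pass it on to the Arduino,
// which translates the class number into the matching IR command
void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/wek/outputs")) {
    int newClass = (int) m.get(0).floatValue();
    if (newClass != currentClass) {
      currentClass = newClass;
      arduino.write(str(currentClass));
    }
  }
}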


Figure 16. Prototyping process. The camera view seems useless to the human eye (top right) but the model recognises class 3 (a person sitting at the desk).

5.5.3. User testing

The tested prototype is a role prototype. The conducted user tests were concerned with studying the relationship with the artefact: its phases, the frequency and attributes of interactions, emotional responses and needs. Two user-test sessions were conducted, respectively one and three days long. The prototype did not include an entire lamp structure. Instead, only the lightbulb was replaced in lamps that were already in the room, so the study fully focused on the results of changing the interaction paradigm. The prototype had to be supported by a computer, and the user's own equipment was used for that purpose.

Installing the prototype in the room was followed by an explanation of how to use it and how to put in the training data. The classifier could be trained on up to 9 classes: on, off, white light, three colourful options and three degrees of strength. There was no specific instruction on how much time or attention should be given to the prototype. Before and after the session (depending on the user, 24 or 72 hours) the users filled out a survey (see appendix B). After the first session I conducted a short interview on the entire experience (see appendix C).

The users used the prototype in different ways. The first one explored it more playfully, testing its boundaries, for example training it with two contradicting examples. She also thought about the interaction in a more gestural way – training the lamp on examples heavily based on some body movement or position, some of them quite peculiar, like raising both arms. The second user had a very different strategy. She trained the different classes by space rather than gesture: recording examples of herself being in different parts of the room. Moreover, she used the different lighting options to define those spaces, in a more utilitarian way than just playing around – for example, training the basic white light for when she is working at the desk and a warm colourful glow for when she is resting in the armchair in the background. The first approach was defined by seeking new, direct ways of control, and the second by using the lamp as a tool for other purposes, like the organisation of space and activities. The blank space of the prototype lets the users fill it in in their own way and use their creativity. It seemed to succeed in opening up the space for new possibilities and giving agency to the artefact, while at the same time giving the user the power to control it, change it or erase it.

The agency given to the artefact by the user is plastic – it can be formed in real time – which facilitates thinking about ML as a design material. For example, during the second day of the test the user was extremely busy working and could not afford any distractions. She recorded additional training examples so that the shift from white to blue light would also happen when she was leaning from the chair to reach something, specifically her phone lying at the end of the table, and thus used the lamp to stop herself from habitual procrastination. As illustrated by this example, the approach to the usage of the artefact varies not only between users but also over time, thus changing the kind of relationship developed between human and technology. This aspect, along with a summary of the dynamics between the users and the artefact, will be described more carefully in the chapter Study of a relationship.

The characteristics of the prototype, even the unintentional ones, had a big impact on the test. The interface used to record a training example for a specific class was a GUI (although the concept assumed a tangible solution). It therefore allowed for easier control over the artefact, because of the preview of the output and input on the screen. It was easier for the user to conceptualise patterns in case of confusion. This signifies the importance of designing good feedforward and feedback for users' actions in future tests with a new prototype. The type of light emitted by the lightbulb is also crucial for the interaction. Even though the lightbulb emits colours, the selected colours were purposefully dimmed, so that the fast changes would not be annoying. Indeed, they were rather met with tenderness or affection from the users ("Oh, what's up with you?"). The blinking that occurred when the learning algorithm classified consecutive frames as two different classes and jumped back and forth between them was actually perceived as a natural signal of some kind of problem, and can thus be considered a good signifier of the need to enlarge the training set.

Similarly to the previous iterations, the Silly Lamp also brought new questions to my attention. What are the consequences of the ML model giving a "wrong" output? How can we educate people on what causes unexpected outcomes? How can we show users different ways of thinking about and interacting with technology? How can we make people aware of the power generated by influencing the selection of training data for ML algorithms? How can we give people a feel for the dynamics with which the model changes with the smallest change in the training set?

6. STUDY OF A RELATIONSHIP

The two different approaches to the purpose of the lamp and its training, briefly described above, resulted in differences in how the two users established a relationship with the artefact. The playful, exploratory strategy of the first user grew into an anthropomorphic relation. The user describes the feeling of training the lamp as "training a puppy". This results in a closer bond with a clear emotional response and attachment. However, an anthropomorphic approach is bound to create the feeling of an independent mind behind the artefact. This situation produces certain power relations of obedience/disobedience. The user described those in an interview:

“And then later sometimes the puppy simply doesn't wanna obey and does whatever it wants. But it didn't annoy me. It rather was like - what's up with you now? What part didn't you get? So it made me feel like creating a personal connection with the lamp. Like it's alive with its own mind. But also like a dog trying its best to serve me. Even if its best is not always what was expected.”


In contrast to the first user describing her attitude towards the lamp as “having a friend”, the more utilitarian strategy of the second user resulted in an approach she described as “solving a mind puzzle”. The relation established by the second user is not anthropomorphic; she describes the lamp in neutral terms: “exploratory, experimental, technical, unique”. However, this does not mean that the relation lacks attachment and emotional responses. Quite on the contrary, the second user writes about feelings of pride (over mastering a classification between similar situations), tenderness or irritation (over mistakenly classified outputs) and amusement (over confused blinking). She particularly liked the moment of coming back to the room and noticing the lamp signifying the lack of her presence by a cold blue light: “there is something about it lighting up when I come back to sit by the desk, something satisfying”.

The attachment described by both users is inevitably connected to the fact that the lamp simply requires attention. One has to invest time to train it, and this creates the attachment. The attention given to the lamp over time varies greatly, from maximum to none. During the second testing session the lamp was trained 9 times during the first day, with 36 sessions and 3813 recorded examples. In comparison, on the second day the user performed only 2 additional trainings, with 7 sessions and 366 frames. Between the trainings and interactions there are long periods of time when the user treats it like any other lamp, paying no attention to it.

The time invested in the training gives subjective value to the object. The attention given to the lamp deepens the understanding of how it works, and thus of the basic rules of ML. The knowledge of the ML field on a scale from 1 to 10, reported by the user before and after the test session, grew. But it is not really about self-reported knowledge as much as about tacit knowledge: a feeling, having experienced how the model changes with the training set.

Based on the insights gathered from the two users, as well as the knowledge gathered during previous iterations, it is possible to describe a general timeline of the relationship developing between a human and an ML-based artefact created within the aesthetics proposed by this paper. Firstly, there is a phase of exploration, when the user tests out the possibilities presented by the lamp with an explicit focus on its materiality, e.g. the various kinds of light emitted. The attributes characterising the interaction at this stage are: discovery, ambiguity, play, incidental, diverging. The second phase is the training. Varying in length, it is the phase when the user often interacts directly with the artefact, recording new training examples. The attributes of interaction at this stage could be: control, challenge, expression. It is inconsistent, stepwise, direct. In the third phase the model becomes more stable and the training sessions sporadic. The user may stop paying attention to the artefact or even get bored or irritated by it. The interaction becomes effortless, mediated, neutral, fluent and gentle. This is an iterative process in which the user can come back to the beginning of phase 2 at any time.

7. DISCUSSION

This entire research is just a very initial exploration of the design material of ML. While some of the outcomes suggest that the proposed approach is successful in empowering people to take control and educating them on the basic premises of ML, the sample group of survey participants, co-designers and test users is simply too small to make any solid claims. With a longer timeframe for the project, a representative sample for the survey could have been gathered, the user test could have become a long-term observation, and more prototypes of various concepts within the same aesthetics could have been tested. A subtle but powerful difference would also be to replace the participants involved in the study with ones coming from a non-technical and non-design background.

However, the research still opens up the design space for discussion, provides insights to continue with from every work stage and introduces a set of theoretical concepts, qualities and interaction attributes. This semi-abstract knowledge is valuable, as crafting the immaterial and abstract body of interaction with ML technology has proved to be a challenge for designers (Dove et al., 2017). Reflecting on the design process, it can be said that the theoretical approach evolved over time from a critical one towards a more scientific one, and that the most valuable aspects of the process were also the most troublesome. While involving users through participatory engagements enriched the study with many new perspectives, it also introduced factors beyond the control of the designer, which can affect the time plan of a short project tremendously. For example, the co-design workshop had to be re-arranged on the spot and moved to another place due to two fire alarms in the building. The planned long-term user test of the prototype had to be stopped after the first day due to external circumstances, thus splitting the test into two short-term ones and limiting the possibility of studying how the relationship evolves over time. Working with hardware and software enabled experimentation with the experience of training, but was a challenging process for a designer without a technical background. I consider the basic familiarity with the ML field and the technical skills acquired over the course of the project as the most valuable outcome.

8. CONCLUSION

How can we design machine learning products and services that empower humans? This research is just an initial take on what could be explored further as a design perspective on ML technology and the power relations it inflicts. However, the experiments on the design material of ML introduce a loosely-defined designerly approach, described by the set of concepts, qualities, attributes, insights and strategies appearing in this text. The approach succeeded in evoking positive emotions, inspiring users to change the agency of the artefact for personal purposes and increasing the understanding of the basic principles of machine learning, conveying a feeling for the field. Therefore, it is proposed as one of the directions that could result in ML products and services that put the human in the position of control, empowering them to form new kinds of relationships with technology.

The nature of this semi-abstract knowledge contribution allows it to be appropriated in relation to different concepts, contexts and designs. In this way it is intended to promote conceptual and theoretical discussions about the nature of the relationship between humans and ML technology. Finally, it should be noted that rather than proposing a completely novel strategy, the hands-on experiments described in this paper build on the work of the designers and artists mentioned in the Theory section, in the hope of making those alternative approaches to ML generally acclaimed and maintained strategies.

Were the project to continue, I see its potential in democratising the technology. Could the various concepts of silly prototypes be open-sourced as tutorials online? Would people participate in a study by building their own prototypes and reporting on the relationship paradigms that they inflict? Could we promote designerly ways of open-source tinkering with machine learning?


This project is not meant as a critique of current practices. However, while during the entire research process I strived for a scientific approach towards existing ways of interacting with ML technology, not assessing but merely exploring them, proposing alternatives based on gaps in the research and experimenting with new perspectives, the motivation that pushed me to start the research was far from objective. As MIT Media Lab’s associate director Andy Lippman (2017) said during his opening remarks at the spring member meeting, commenting on the recent political situation and the way it is influencing science and technology: “We don’t get mad, we get even.” It is my personal belief that, in the face of forces bigger than ourselves and the disturbing direction that the design of new technologies is heading in, we do not need protests, angry statements, murky science fiction, provocative speculative design and closed-classroom debates as much as we need to actually design alternatives, promote maker culture and get our hands dirty. Allow people to re-define their relationship with ML technology by themselves.

ACKNOWLEDGEMENTS

I would like to extend my gratitude to my supervisor Elisabet Nilsson for introducing me to the world of research, answering tons of my questions even when they were not related to this project, and making every supervision a pleasure. Further, I would also like to thank David Cuartielles for technological guidance, Lars Holmberg for inspiring conversations and the participants involved in the design process. Finally, I am most grateful for the proofreading of this paper done by Kevin Ong and for all the emotional support in the hard times of submission I received from Bianca Di Giovanni.

REFERENCES

Amershi, S., Cakmak, M., Knox, W. & Kulesza, T. (2014). Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4), 105-120.

Anderson, W. (2017, May 10). Using Machine Learning to Make Art. Magenta. Retrieved from https://magenta.as/using-machine-learning-to-make-art-84df7d3bb911

Buxton, B. (2010). Sketching User Experiences: Getting the Design Right and the Right Design. London: Morgan Kaufmann.

Desmet, P. M. A., & Pohlmeyer, A. E. (2013). Positive design: An introduction to design for subjective well-being. International Journal of Design, 7(3), 5-19.

DIoPD. (2012). Delft Institute of Positive Design. Retrieved April 7, 2017, from http://studiolab.ide.tudelft.nl/diopd/about-us/mission/

Dove, G., Halskov, K., Forlizzi J., & Zimmerman, J. (2017). UX Design Innovation: Challenges for Working with Machine Learning as a Design Material. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 278-288. https://doi.org/10.1145/3025453.3025739

Dourish, P., & Bell, G. (2011). Divining a digital future : mess and mythology in ubiquitous computing. MIT Press.

Fails, J. A., & Olsen Jr, D. R. (2003). Interactive machine learning. In Proceedings of the 8th International Conference on Intelligent User Interfaces, 39–45. New York: Association for Computing Machinery.


Fiebrink, R. (2009). Wekinator | Software for real-time, interactive machine learning. Retrieved April 7, 2017, from http://www.wekinator.org/

Gillies, M. (2015). Human-Centered Machine Learning. Retrieved May 21, 2017, from https://www.doc.gold.ac.uk/~mas02mg/MarcoGillies/category/interactive-machine-learning/

Gillies, M., Fiebrink, R., Tanaka, A., Garcia, J., Bevilacqua, F., Heloir, A., Nunnari, F., Mackay, W., Amershi, S., Lee, B., d’Alessandro, N., Tilmanne, J., Kulesza, T. & Caramiaux, B. (2016). Human-Centred Machine Learning. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, 3558-3565. https://doi.org/10.1145/2851581.2856492

Goldsmiths University of London. (2017). Machine Learning for Musicians and Artists [Online course]. Retrieved from https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists/info

Haude, S. & Hill, C. (1997). What do Prototypes Prototype?. In Helander, M., Laundauer, T. & Prabhu, P. (Eds.), Handbook of Human-Computer Interaction (2.ed.). Amsterdam: Elsevier Science B. V.

Hutchinson, H., Mackay, W., Westerlund, B., Bederson, B. B., Druin, A., Plaisant, C., Beaudouin-Lafon, M., Conversy, S., Evans, H., Hansen, H., Roussel, N., Eiderbäck, B., Lindquist, S., Sundblad, Y. (2003). Technology Probes: Inspiring Design for and with Families. In CHI ’03 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 17-24. https://doi.org/10.1145/642611.642616

Knight, W. (2016). AI’s Language Problem. Retrieved April 3, 2017, from https://www.technologyreview.com/s/602094/ais-language-problem/

Laschke, M. (2017). The Aesthetics of Friction. Retrieved May 20, 2017, from http://www.pleasurabletroublemakers.com/#/aesthetic-of-frcition/

Lenz, E., Diefenbach, S. & Hassenzahl, M. (2013). Exploring relationships between interaction attributes and experience. In Proceedings of the 6th International Conference on Designing Pleasurable Products and Interfaces, 126-135. https://doi.org/10.1145/2513506.2513520

Lippman, A. (2017, 13 April). Art, Science, and the Media Lab [Video file]. Retrieved from https://www.youtube.com/watch?v=eHRaF14kSKA

Löwgren, J. (2007). Interaction design, research practices and design research on the digital materials. Retrieved May 14, 2017, from webzone.k3.mah.se/k3jolo/

marxj1. (2017). Rayna meets a “robot” [Video file]. Retrieved April 7, 2017, from https://www.youtube.com/watch?v=h1E-FlguwGw&feature=youtu.be

Maeng, S., Lim, Y., & Lee, K. (2012). Interaction-Driven Design: a new approach for interactive product development. In Proceedings of Designing Interactive Systems Conference’12, 448-457. https://doi.org/10.1145/2317956.2318022

Ng, A. (2017). What is Machine Learning? [Video]. In Machine Learning by Stanford University [Coursera MOOC]. Retrieved from https://www.coursera.org/learn/machine-learning/lecture/B1CZ7/what-is-machine-learning

Ohlin, F., & Olsson, C. M. (2015). Beyond a utility view of personal informatics. Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers – UbiComp ’15, 1087–1092. https://doi.org/10.1145/2800835.2800965


Refsgaard, A. (2016). Eye Conductor. Retrieved May 11, 2017, from https://andreasrefsgaard.dk/project/eye-conductor/

Shapire, R. (2008). COS 511: Theoretical Machine Learning [Lecture notes]. Retrieved from https://www.cs.princeton.edu/courses/archive/spr08/cos511/scribe_notes/0204.pdf

Shinseungback Kimyonghun. (2016). Animal Classifier. Retrieved May 11, 2017, from http://ssbkyh.com/works/animal_classifier/

Shneiderman, B. & Maes, P. (1997). Direct manipulation vs. interface agents. Interactions. 42-61. SPACE10. (2017). Do you speak human?. Retrieved 2017 May 20 from

http://doyouspeakhuman.com/

Stanford University. (2017). Machine Learning [Online course]. Retrieved from https://www.coursera.org/learn/machine-learning

Stolterman, E. & Wiberg, M. (2010). Concept-Driven Interaction Design Research. Human–Computer Interaction, 25, 95–118.

APPENDIX LIST:

APPENDIX A – On-line Survey

APPENDIX B – Survey before/after user test
APPENDIX C – Interview

FIGURE LIST:

Figure 1. The interest in “machine learning” through the last five years according to Google web search
Figure 2. The Chocolate Machine. Copyrights: Matthias Laschke. Used with permission.
Figure 3. Eye Conductor. Copyrights: Andreas Refsgaard. Used with permission.
Figure 4. Objectifier. Copyrights: Bjørn Karmann. Used with permission.