
Broadcastr
What happens when we show to our surroundings what we do on our smartphones?

Jesper Hyldahl Fogh
Interaction Design Master's Programme (60 ECTS), 15 ECTS
Spring 2017, 2nd Semester
Supervisor: Dimitrios Gkouskos


ABSTRACT 

Smartphones are capable of a multitude of things, yet it is still common to hear the smartphone as a whole described as harmful to humans. In order to challenge the perception of smartphones as harmful, a concept was manifested in the form of seven iterations of prototypes. The concept, called Broadcastr, revolved around broadcasting to one's immediate surroundings what one is doing on one's smartphone. While continuously developing the prototypes, the concept was evaluated by the researcher, both in the process of prototyping itself and by exposing the prototypes to other people. The final design consisted of a Raspberry Pi Zero W connected to an Android app via Bluetooth. The Android app ran in the background and monitored whether a new app was activated by the smartphone user. When this happened, a message was sent to the Raspberry Pi, which would display an icon corresponding to the category of the activated app on a 0.9" 128x64 OLED display. The prototype showed indications of being capable of challenging perceptions of the smartphone as harmful, and of becoming a useful tool for others to know what the smartphone user was doing. Finally, two possible future research projects are presented: one would focus on broadcasting another type of device's activity, while the other would introduce the broadcasting device to a high school class to study its effects.

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
1 · INTRODUCTION
1.1 · Research Question
2 · THEORY
2.1 · Concept-driven design research
2.2 · Non-verbal communication in public spaces
2.3 · Feedback and feedforward
2.4 · Canonical examples
2.4.1 · Sound and LED notifications
2.4.2 · App use statistics
2.4.3 · Portable self-expressive projections
3 · METHOD
3.1 · Prototyping
3.2 · Action research
3.3 · Project plan
4 · DESIGN PROCESS
4.1 · Design constraints
4.1.1 · Single-user
4.1.2 · Portable device
4.1.3 · Automatic broadcasting
4.1.4 · Motivating the design constraints
4.2 · Version 1
4.2.1 · Iteration 1: Monitoring apps
4.2.2 · Iteration 2: Connecting the hardware over Bluetooth
4.2.3 · Iteration 3: Exploring outputs and finalizing the first prototype
4.2.4 · Evaluating the prototype
4.2.4.1 · Stereotypical smartphone use
4.2.4.2 · Body language
4.2.4.3 · Privacy
4.3 · Version 2
4.3.1 · Iteration 4: Moving to Raspberry Pi Zero W and changing the output
4.3.2 · Iteration 5: Using Google Play Store categories
4.3.3 · Iteration 6: Patchworking categorization & increasing wearability
4.3.4 · Iteration 7: Final usability considerations
5 · MAIN RESULTS AND FINAL DESIGN
5.1 · The final design in relation to design constraints
5.2 · Evaluation of the final design
6 · DISCUSSION
6.1 · Methodology
6.2 · Other possible designs
6.2.1 · Different categorizations
6.2.2 · Moving the prototype to other areas
6.2.3 · Multiple people
7 · CONCLUSION
7.1 · Perspective
7.1.1 · Device convergence
7.1.2 · Social ramifications of broadcasting
ACKNOWLEDGEMENTS
REFERENCES
APPENDICES

1 · INTRODUCTION

On my smartphone start screen, I have 31 apps. They range from apps for social media to mail to navigation to language learning to games to radio to books. In total, I have 116 apps with an even wider range of uses. It is clear that smartphones have many uses for many different contexts. Yet, it is still common to hear that smartphones as a whole are harmful. Huffington Post has an entire tag for articles related to smartphone addiction ("Smartphone Addiction", 2017). The music video for Moby & The Void Pacific Choir's "Are You Lost In The World Like Me?" is an overt criticism of the smartphone (Moby & The Void Pacific Choir, 2016). There are at least five TEDx talks about how smartphones are damaging (Butler, 2016; Dedyukhina, 2016; Frenkel, 2014; Makosinski, 2016; Wincent, 2016). Parts of the design research community have even taken it upon themselves to engage with harmful smartphone use by creating prototypes that "mitigate smartphone disturbance" (Choi, Jeong, Ko, & Lee, 2016; Ko, Choi, Yatani, & Lee, 2016). All in all, public debate surrounding the smartphone seems to be very focused on the negative impact it can have on us.

Part of this vilification of the smartphone might be due to the fact that we cannot easily see what people are doing with their smartphones. The physical form of the smartphone does not change when users change their activity. The only activity cue that the smartphone itself produces is through the screen, and occasionally via sound. The rest of the activity needs to be deduced from the body language of the user. For example, a smartphone held in landscape mode by a user wearing headphones may indicate that the user is watching a video. But then again, it might also mean that the user is reading a book while listening to music.

This project aims to challenge the perception of the smartphone as harmful, and to open up a discussion that perceives the smartphone as a complex entity. Through creating several prototypes, collectively forming a concept called Broadcastr, which enables smartphone users to broadcast their smartphone activity to their immediate surroundings, I investigate how people relate to both their own and others' use of smartphones in public spaces.

It is outside the scope of this paper to exhaustively document smartphone use in public spaces, or to see whether perceptions of smartphones change over a longer period of exposure to my prototype. This project should be seen as only an initial examination of the concept of broadcasting smartphone use.

1.1 · Research Question 

The research question for this project is: 

Can we challenge the perception of smartphones as harmful by enabling users to  broadcast their smartphone activity? 

2 · THEORY

In this section, I will briefly introduce a set of theories that form the theoretical grounding for my design process, as well as theories that will be employed in the evaluation of my prototypes.

2.1 · Concept-driven design research 

Stolterman and Wiberg (2010) introduce their design research approach Concept-driven Interaction Design Research [CDIDR] as a way of conducting interaction design research from a conceptual or theoretical point of departure instead of an empirical one. The research is mostly conducted through designing and developing artifacts. The final design should be evaluated in relation to a specific idea, concept or theory instead of a specific problem, user or use case. It is thus a way of focusing one's research process on developing concepts and theories in a designerly way.

The CDIDR approach fits well with this project in that I am developing a new concept. It is the theory surrounding the concept that is of interest to me, and not the possible user experience of the prototype. Engaging with the concept through developing artifacts means that I can materialize my concept in the real world. It becomes a knowledge-carrying object in itself, which means that there is a specific implementation of the concept that others can react to. This is likely to be more successful in provoking responses than any abstraction of the concept could be.

This project uses the CDIDR approach as the driving force for shaping the design process itself. In order to follow the CDIDR, this project relies heavily on prototyping and conceptual development instead of user-driven research. The various iterations of prototypes were partly evaluated by following an action research approach, which will be introduced in section 3.2.

The CDIDR approach is but one approach to research-through-design. Zimmerman, Forlizzi and Evenson (2007) describe a similar approach, whose focus is also on developing artifacts in order to generate knowledge. CDIDR differs by seeking to develop knowledge about concepts and theory, and not solutions to real-world issues, which makes it a good fit for my research.

2.2 · Non-verbal communication in public spaces 

According to Goffman (1966), individuals in a shared public space communicate non-verbally through bodily appearance and personal acts. He calls this body idiom. Goffman continues by stating that body idiom is intrinsically different from verbal communication, since one can stop talking, but not stop communicating non-verbally. Goffman relates this to social events, such as political rallies, and investigates how body idiom can show situated involvement, i.e. how involved an individual is in her situated activities. Actual involvement is a matter that only the individual herself can gauge. Goffman thus argues that actual involvement is of little use, since involvement can only be inferred through signs. For this reason, he is more interested in perceived involvement, which is intrinsically connected to body idiom. Our perceived involvement in a situation is dependent on signs that we communicate through verbal as well as non-verbal communication. Goffman also introduces the term involvement shields to describe barriers that individuals can use to block their activities from others. Goffman speaks of bathrooms and bedrooms as shielding places, and also of newspapers as portable shields able to hide a yawn. When these shields are put up, individuals can indicate to their surroundings that they are not engaged in the current situation. The appropriate shield to use must be gauged for each situation.

Smartphones relate to all these concepts in various ways. They become part of the body idiom when we use them in public spaces. They extend what we communicate to our surroundings. This is where it starts to become troublesome that the smartphone communicates very little to the surroundings by itself. This is also where the perceived involvement of a person using a smartphone can sometimes go awry. The smartphone can easily be interpreted as an involvement shield. This shield may be interpreted as signifying that the user is more involved in what is happening on her phone than in her physical surroundings, which may not be true. They might be looking up directions for a restaurant or trying to find an old email with more information that is relevant to the physical surroundings.

These concepts are used in this project mostly as motivation. Goffman helps describe why it is necessary to build on top of the smartphone's expression towards the physical surroundings. Furthermore, the theory is used to dive deeper into certain aspects of both the concept and the issues that arise when exploring the problem area.

While this text does not relate to smartphones directly, Goffman is a widely cited sociologist. Particularly his theory on backstage behavior and self-presentation has been used widely in online media studies (Hogan, 2010; Papacharissi and Mendelson, 2010; Walker, 2000). Furthermore, the concepts introduced in his text have been used as recently as 2016 to describe smartphone use in public spaces (Hatuka & Toch, 2016). For these reasons, I still find it useful to draw on his theory as theoretical grounding despite its age.

2.3 · Feedback and feedforward 

Feedback and feedforward are introduced in "But how, Donald, tell us how?" by Djajadiningrat, Overbeeke and Wensveen (2002) as an alternative to Donald Norman's concept of affordance (1988). They argue that there should be more focus on pre-action and post-action, respectively feedforward and feedback. Feedforward indicates what the user can expect from performing the action before the action is performed. Feedback indicates something about the action being carried out, either as it is being carried out or after it has been carried out. They flesh out the concept of feedback by focusing on four "unities": unity of location, unity of direction, unity of modality and unity of time. To achieve these unities, the action and the feedback must happen either in the same location, the same direction, the same modality or at the same time. According to Djajadiningrat, Overbeeke and Wensveen, fulfilling all of these means that an artifact has strong feedback.

This description of feedback and feedforward in "But how, Donald, tell us how?" is mostly focused on improving tangible interaction. However, in a later paper, the concepts are expanded upon and grow their area of application to include non-tangible interaction as well (Djajadiningrat, Overbeeke and Wensveen, 2004). The concepts of feedback and feedforward are split up into three separate types: augmented, functional and inherent. They also add two more unities, namely unity of modality and expression. While I will not rely on these distinctions in my analyses, they do show that the theory has a wider application than just tangible interaction.

2.4 · Canonical examples 

In this section, I will present a few concepts that shine some light on the general area of  interest that Broadcastr falls into.  

2.4.1 · Sound and LED notifications 

Notifications can give a smartphone user's surroundings an indication of what a smartphone is used for. In certain cases, such as the Facebook Messenger app or the Twitter app, the notification will come with a unique sound that allows one to identify the app that delivered it. This requires that the smartphone owner has turned on the sound on their phone. Another option that some phones come with is an LED notification light. Phones like the HTC One M8 or the Samsung Galaxy S6 have a tiny light which changes color depending on what app has sent a notification. Some apps, like Light Flow (2017), even allow users to customize these notification lights.

My concept shares one specific similarity with this: both Broadcastr and notifications communicate something about smartphone use to the surroundings. However, Broadcastr differs in two ways. Firstly, sound and LED notifications were not designed to show the spectators of a smartphone user what the smartphone is being used for. They were designed for the user herself to know what app wants her attention. Secondly, Broadcastr focuses on active use of a smartphone, whereas notifications only occur when new information is being pushed to the smartphone user.

2.4.2 · App use statistics 

Some apps allow smartphone users to get an overview of which apps they themselves use most. Apps like QualityTime (2016) and Moment (2016) are just a few of those that help you get an overview of which apps you are using and how much. Common to most of them seems to be that they want you to use your phone less. Both QualityTime and Moment include ways of forcefully limiting one's phone usage by restricting access to certain things, but they also provide softer alerts when the user has used an app for a certain amount of time.

An exception in the genre seems to be Menthal (2017). Contrary to QualityTime (2016) and Moment (2016), the Menthal app is more interested in gathering data than in reducing smartphone use. Menthal was built by a group of researchers who wanted to gather as much data as possible in order to "scientifically assess, how often people are actually using their mobile phones" (Menthal, 2017). However, while the app does not explicitly aim to reduce smartphone use, the introductory video for the app asks the question: "Are you in control of your smartphone? Or is your smartphone controlling you?" (Team Menthal, 2013) This implies that the aim of the app, once again, is to reduce smartphone use.

Broadcastr and these apps share an attention to smartphones as complex entities with multiple uses. It is not just about the smartphone, but also about what apps you use. However, the main difference between these apps and Broadcastr is that Broadcastr does not ask an individual to relate to their own smartphone use; rather, the focus is shifted onto the spectators of a smartphone user. Secondly, the app activity of Broadcastr is shown in real time, i.e. when the user is actively using their phone. There is no aggregation over time or statistics being gathered.

2.4.3 · Portable self-expressive projections 

Cowan, Griswold and Hollan (2010) researched different applications for projector phones in a social computing setting. They characterized both technical specifications and design considerations for the LG Expo, which acted as their main prototyping tool. They then envision six different scenarios where the social applications are the center of focus. These scenarios were illustrated through low-fi prototypes. Of these six scenarios, one is of particular relevance to this project. The scenario goes like this:

As Jerry walks to the subway, he listens to upbeat music and projects bright colors onto the ground around him, to brighten the overcast day and match his mood and outfit. His virtual business card is visible along the edge of the projected display. On the subway he sits next to a girl, dressed in black and surrounded by a projection of dark red flames swirling around her favorite musicians, who scowls at him (...). He moves over when her projected flames hit his pants leg. When he gets to work, his colleague Lauren notices his cheery projection and observes that he must be in a good mood. (Cowan, Griswold and Hollan, 2010, p. 4)

In this scenario, a heavy emphasis is put on self-expression and advertisement of personal or commercial interests. The user of the projection is able to display whatever he wants to the world as a sort of extension to his body idiom. Cowan, Griswold and Hollan (2010) acknowledge that this type of projection might be invasive for others for a variety of reasons.

Another example of self-expressive projections comes from Leung, Tomitsch and Moere (2011). They built a wearable prototype that would project the wearer's Facebook data onto a nearby ceiling. The data was organized as a visualization in two different modes: Likes and Friends. The Likes mode focused on the pages that a user had liked on Facebook, while the Friends mode would visualize the wearer's friends and interactions with them.

Broadcastr shares some similarities with both these projects in its focus on communicating extra information to one's surroundings via digitally augmented interfaces. With Cowan, Griswold and Hollan's (2010) project, Broadcastr shares an interest in bringing the content of a phone into its nearby physical surroundings. Similarly, Broadcastr shares an intention with Leung, Tomitsch and Moere's (2011) concept: both seek to expose the hidden digital sphere to the immediate surroundings. Yet there is still a difference in the manner in which activity is being broadcast. In neither of the two projects is the smartphone activity of a user broadcast in a non-intrusive manner that allows spectators to glean what the user is currently doing. Cowan, Griswold and Hollan do describe some scenarios wherein smartphone activity is projected onto a wall, but the purpose of this projection is to collaborate with others on the content of the projection. Broadcastr instead focuses on providing glanceable information to the surroundings, as an extra layer of body idiom that spectators can interpret.

3 · METHOD

The methodology behind this project is mostly rooted in prototyping and action research. In  this chapter, I will present what I mean by that and how I use it in the project.  

3.1 · Prototyping 

As this project is framed around the CDIDR approach, prototypes have played an integral part. To explain my prototyping process, I draw on both Houde and Hill (1997) and Bill Buxton (2007).

Houde and Hill (1997) divide prototyping into three different types that serve three different purposes: implementation, look and feel, and role. Implementation describes prototypes that investigate what is technically possible, whereas look and feel is about the sensory experience of using the prototype. Finally, role refers to what function the artifact serves in a user's life. The three dimensions are not mutually exclusive. In fact, prototypes are likely to be somewhere in between several dimensions. As an addition, Houde and Hill describe the integration prototype, which is to be found in the intersection of all three dimensions. These types of prototypes integrate all three elements of a concept into one coherent prototype. In some cases, these will constitute the "final design". I follow Houde and Hill's definition of prototype, and employ different prototypes for different purposes at various times in my design process.

Buxton (2007) thinks of prototypes in a slightly different manner. He does not distinguish between different prototype types. Instead, he distinguishes them from sketches in regards to fidelity: sketches are low fidelity, while prototypes are high fidelity. While I do not follow this definition of prototype, I do rely on his description of the sketching process to explain my own prototyping practice. To Buxton, sketching is a cyclical conversation between the mind and the sketch. As the mind reads the sketch, new knowledge is gathered, which spurs on the creation of a new sketch. Once again, this sketch is read and the cycle continues. In a similar fashion, I also conduct my prototyping as a cycle between mind and prototype.

3.2 · Action research 

When talking about action research, it is very important to distinguish what type of action research one is doing. There are multiple ways of practicing it, but in broad terms, action research is a social science approach where a researcher has two simultaneous goals: to effect change in the studied area and to increase knowledge in a certain research domain. Action research studies have in general been about working within an organization (Huang, 2010). My project differs from these studies in that it does not relate to any specific organization but rather to my own life. Ned Kock (2013) introduces a broader term when discussing HCI-related action research studies: the research client. This term better describes what I have done in this project, since it can be argued that my research clients are smartphone users and spectators of smartphone use.

Despite this slight incompatibility with typical action research, my project shares similarities with the approach by being reflexive, actionable and significant in nature (Huang, 2010). By reflexive, I mean that my research has relied on my own reflection on the concept and how it acts upon the world. By actionable, I refer to the fact that the prototype is being put to use in the world, and that this inspires new actions both concept-wise and methodologically. Finally, when referring to my research as significant, I argue that the project has relevance beyond just smartphones in my own life, an argument that will be briefly explored further in section 7.1. This goes for both the impact of the concept in action and the knowledge embodied in my prototypes.

There are elements of the CDIDR process that lend themselves well to action research. When working with new concepts in a rapid prototyping fashion, squashing bugs and making prototypes usable is usually not at the top of the to-do list. If one wanted to expose these prototypes to users, the research might become corrupted by the appearance of bugs or lack of usability. However, if one uses the designer of the prototype himself as the user, bugs and lack of usability can be circumvented. This has been particularly useful for my process, since much of the focus has been on the spectators of the concept, and not the user himself. This approach does not, however, come without downsides. Using myself as the object of research results in data that cannot be generalized. Furthermore, I am intrinsically more positively biased towards my own concept. This is something that I should actively be aware of in order to reduce bias-based decision making.

Action research has also had an impact on the way my work has been structured. While not following the specific Action Research Cycle as laid out by Kock (2013), my project follows the iterative nature of the cycle. The specifics of my process will be described in the following section.

3.3 · Project plan 

According to Kock (2013), the action research cycle has five steps: diagnosing, action planning, action taking, evaluating and specifying learning. I have adapted this process slightly to contain four steps instead: problem framing, prototyping, using the prototype and evaluating the prototype. The first step, problem framing, is about specifying the problem to be solved by the prototype. The next step is about building the prototype while investigating possible solutions to the problem in a Buxtonian fashion, i.e. through a conversation between mind and prototype. From here on, it is about actively using the prototype to understand whether it solves the framed problem, and to investigate what new questions it raises. Finally, an evaluation is performed on the prototype to formalize what should be done in the next iteration. This last step flows into the first step of the next iteration.

Specifically, my project has resulted in seven iterations. Some iterations have been more extensively evaluated than others, while some have seen larger jumps in conceptual and technological advancement. In the following section, I will go in depth with my process. In it, I talk about versions and iterations. Versions indicate larger steps in the process, wherein multiple iterations occur. Each version ends with an extensive evaluation, while each iteration is only evaluated briefly. An overview of my process is seen in figure 1.

Figure 1 · An overview of my process

4 · DESIGN PROCESS

The concept itself, broadcasting what one is doing on one's smartphone, can be imagined in several ways. In order to ensure some form of guided design process, I am introducing a few design constraints, followed by a motivation for the choice of constraints.

4.1 · Design constraints 

4.1.1 · Single-user 

There will only be one device per person. This means that one device will not show the smartphone activities of several users. Because of this constraint, there are two types of users relevant to my concept: broadcasters and spectators. Broadcasters are the wearers of the prototype, whose phones the device is linked to. Spectators are the people observing the broadcasters, who need to understand the broadcast being communicated to them.

4.1.2 · Portable device

In extension of the device being single-user, the device will also take the form of a wearable, portable device. It should be possible for the user to carry it around on their body, preferably in a manner non-intrusive to their pre-device behavior.

4.1.3 · Automatic broadcasting 

The information being broadcast should not have to be manually updated by the user when they change their smartphone activity. It should, however, be possible to turn the broadcast on or off. In this way, the main interaction can be said to be hidden from the user once the connection to the device is active. Interacting with the device is as such an extension of the normal smartphone experience.

4.1.4 · Motivating the design constraints 

Having a single-user device makes it easier to plan the research process. It means that we only need one person's phone to communicate with one person's device. Seeing as the research team consists of just one person, this is preferable. This constraint might have been dropped if it had not been technically possible to build a single-user device. In that case, I could have resorted to aggregating multiple users' smartphone activity.

The fact that the device should be wearable follows from the single-user constraint. By making the device wearable, it becomes clear whose activity the device is broadcasting. It means that there is a closer unity in location between the phone and the device.

Lastly, the requirement for automatic broadcasting is an extension of the underlying assumption in this paper: that smartphones are multi-use devices. It is assumed that smartphone users change their behavior quickly and frequently, and as such, it would be naïve to expect users to manually change their broadcast as they did so. If the research question were about making users more aware of their own smartphone use, having users choose their own broadcast could make sense.

With these design constraints in place, I will now describe the design process.  

4.2 · Version 1 

The first version of Broadcastr had the main goal of investigating the technical feasibility of the concept. The aim was to make a central Android app wirelessly connected to a piece of hardware. The central app would send a message to the hardware whenever a new app was put to use, and the hardware would then broadcast a message depending on the app.

4.2.1 · Iteration 1: Monitoring apps 

The first iteration was what Houde and Hill (1997) would call an implementation prototype. It consisted mainly of an Android app. The purpose was first of all to figure out whether it was possible for one central app to read what other smartphone apps a user was currently using. It turned out to be possible to start a service in the background of the Android system, which would then monitor what apps were used. To start monitoring what apps are being used, the app needs to be activated by the user. Once it is activated, it sends a notification to the user containing the name of the most recently activated app. This notification served no purpose other than as a debugging tool; it was not intended as a broadcast. Here it is important to stress that the app does not show what app the user is currently using, but which app has most recently been active. This is a technical limitation of the app, and not an intended outcome. It means that if a user is using Instagram while a Facebook notification is pushed, Broadcastr will show Facebook as the active app. This realization came after using the prototype for a day. While a sort of bug, this was deemed a minor issue for a first prototype. This first iteration was a necessary first step, because a failure to monitor app use would have required a complete restructuring of the research process, which would then have to be less driven by functioning prototypes.
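As an illustration of how such monitoring can be done on Android, the sketch below polls UsageStatsManager for the most recently foregrounded app. This is a minimal sketch, assuming the user has granted the "Usage Access" permission; the thesis does not document the actual implementation, so the class name, the polling interval and the event API used here are my assumptions.

```kotlin
// Minimal sketch of a background service reporting the most recently
// activated app. Assumes "Usage Access" has been granted; the service
// structure and polling interval are illustrative, not Broadcastr's code.
import android.app.Service
import android.app.usage.UsageEvents
import android.app.usage.UsageStatsManager
import android.content.Context
import android.content.Intent
import android.os.Handler
import android.os.IBinder
import android.os.Looper

class AppMonitorService : Service() {
    private var lastApp: String? = null
    private val handler = Handler(Looper.getMainLooper())

    private val poll = object : Runnable {
        override fun run() {
            val usm = getSystemService(Context.USAGE_STATS_SERVICE) as UsageStatsManager
            val now = System.currentTimeMillis()
            val events = usm.queryEvents(now - 5_000, now)
            val event = UsageEvents.Event()
            while (events.hasNextEvent()) {
                events.getNextEvent(event)
                if (event.eventType == UsageEvents.Event.MOVE_TO_FOREGROUND &&
                    event.packageName != lastApp
                ) {
                    lastApp = event.packageName
                    onNewApp(event.packageName)
                }
            }
            handler.postDelayed(this, 1_000) // check again in a second
        }
    }

    private fun onNewApp(packageName: String) {
        // Iteration 1 only surfaced this as a debugging notification;
        // later iterations would send a category message over Bluetooth.
    }

    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        handler.post(poll)
        return START_STICKY
    }

    override fun onDestroy() {
        super.onDestroy()
        handler.removeCallbacks(poll)
    }

    override fun onBind(intent: Intent?): IBinder? = null
}
```

Note that an event-based query like this still only reveals the most recently foregrounded app, which matches the limitation described above.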


Figure 2 · The first iteration of the app

4.2.2 · Iteration 2: Connecting the hardware over Bluetooth

The purpose of the next iteration was to connect an Arduino 101 wirelessly to the central app. The 101 was chosen because it is one of the official Bluetooth Arduino devices and was instantly available to me, which meant that the process could start quickly. Once again, this iteration was meant to investigate the technical feasibility of the concept, making it an implementation prototype. On the app side, a selection of apps was manually defined as belonging to a set of categories (see figure 3). Each category was then mapped to a number, which was sent to the Arduino 101 when the most recently activated app was recognized as belonging to one of the categories. The categories were based on a brief evaluation of the most used apps on my own smartphone. This categorization is not exhaustive, and it does not accurately define what an app's appropriate category is. Some apps, like Instagram, Snapchat and YouTube, are both media apps and social media apps. Most apps were also not present in the categories, which means that they did not register with the hardware. At this stage, however, the focus was just on connecting the hardware with the phone, so any lack in the categorization was not of importance.
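A sketch of this phone-side mapping is shown below. The package names, the single-byte protocol and the use of a classic Bluetooth socket are all assumptions made for illustration; the text does not specify how the category numbers were actually encoded or transmitted.

```kotlin
// Sketch of iteration 2's phone-side logic: hand-made categories map app
// package names to numbers, and the number is written to the board over a
// Bluetooth socket. Protocol and transport details are assumptions.
import android.bluetooth.BluetoothSocket

// Hypothetical excerpt of the manual categorization (see figure 3).
val appCategories = mapOf(
    "com.facebook.katana" to 1,         // Social Media
    "com.google.android.calendar" to 2, // Planning
    "com.Slack" to 3,                   // Chat
    "com.spotify.music" to 6            // Media
)

fun broadcastCategory(socket: BluetoothSocket, packageName: String) {
    val category = appCategories[packageName] ?: return // uncategorized: stay silent
    socket.outputStream.write(category) // one byte per category change
    socket.outputStream.flush()
}
```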

For this iteration, the Arduino was connected to a piezo buzzer, which emitted a short beep at  different frequencies depending on which category was recognized. A piezo buzzer was  chosen, because it was simple to hook up to the Arduino. An LED was also considered, but  the piezo buzzer had an advantage in communicating various categories. It was easy to simply  map different categories to different frequencies of beeping, whereas the LED would need to  be either multi-color or blink at different frequencies to communicate differences. For this  implementation prototype, the piezo buzzer served its purpose well.  

Figure 3 · An overview of the app categories and the corresponding apps implemented in the first prototype:

- Social Media: Facebook, Instagram, Twitter, Snapchat
- Planning: Calendar, Inbox, Google Maps, Zoho Mail, Contacts, Google Keep
- Chat: Slack, WhatsApp, Facebook Messenger, SMS App, Google Allo
- Games: Apollo Null, 5 Second Guess, Kintsukuroi, Monument Valley, Two Dots, Threes
- Finance: WeShare, Mobilbank Nordea, MobilePay
- Media: Spotify, Musixmatch, Vimeo, YouTube, TuneIn Radio, Photos
- Language: Duolingo, JED - Japanese Dictionary
- Browsers: Google

4.2.3 · Iteration 3: Exploring outputs and finalizing the first prototype 

The third iteration shifted towards role and look and feel prototyping. By working with different output modalities, I started exploring the look and feel of the concept. I built and tested the prototype in my parents' vacation house during Easter, where seven people, including myself, were present. This allowed me to mimic the impact my prototype would have on a public space. As mentioned, iteration 2 used a piezo buzzer with different frequencies for different app categories. This proved to be both quite annoying to listen to and uninformative. It was possible to differentiate between two different app categories, but more than that quickly became difficult. So instead I tried having each app type mapped to a three-note melody. It became much more pleasurable to listen to, and it was easier to distinguish between different categories. But still, the use of sound was quite invasive in the public space, because it was produced every time I changed to a new app. Furthermore, the sound was only present when I made changes. Thus, if someone was not present when I changed my app, they would not be able to know what I was doing. This was not my intention for the concept.

This made me shift to a visual output instead, and so I started working with a seven-segment display. This particular display was chosen because it was available to me and allowed for more complex communication than a single LED, meaning I could easily continue with the categories I had previously defined. The display was programmed to show the first letter of an active app's category. However, I quickly realized that it was quite a limited output. It could not produce an M, for instance, so Media was compromised by showing an n with a period following it. The final prototype of this version was then powered by a 9V battery and put on a piece of string, so it became wearable. This prototype was then ready for investigating the role of the concept in a larger evaluation.


Figure 4 · The third iteration

4.2.4 · Evaluating the prototype 

In order to evaluate the role of the prototype, five brief interviews were conducted at Nordic Game Jam 2017. Nordic Game Jam 2017 was a semi-public space with a lot of technologically savvy people, and fairly diverse, since game developers from more than 20 countries attended. Respondents were chosen by simply walking up to people who seemed free to answer questions. A friend of mine, acting as an exemplary user, was given the prototype and asked to stand close by while using her phone as I interviewed the respondents. The display of the prototype stuck out through her shirt, as shown in figure 5, thus broadcasting what app type was being used.

Figure 5 · The prototype, not broadcasting anything, as worn by the user

At first, the respondents were given a brief introduction to the project as being about smartphone use in public spaces. Then the age and smartphone model of the respondent were recorded, before the respondents were asked to guess what the user was doing on her phone. After this, they were informed that the project is more specifically about broadcasting what one does on one's phone. Respondents were introduced to the prototype, and to how different app types showed different letters on the display. They were asked whether they found it useful to know what other people were doing on their phones. And finally, they were asked whether they themselves would consider broadcasting their behavior. The specific phrasing of the questions and notes on the respondents' answers can be found in the appendix.

In general, three interesting themes came up during the interviews: stereotypical smartphone use, body language, and privacy.

4.2.4.1 · Stereotypical smartphone use 

There seemed to be a tendency for the respondents to think that the user was either chatting, on social media or browsing the web. Only one of the five respondents verbally acknowledged that smartphones can be used for many things. The exact reason for this tendency is difficult to gauge, but one respondent explained it as a matter of body language, which I will get into in the next section. Looking at reports on common smartphone use, the tendency can seem like the statistically safe answer (Nielsen, 2016; AudienceProject, 2016). Yet it could also be argued that the statistically safe answer is that smartphones can be used for many things, and this cannot easily be judged from simply looking at a smartphone user. However, it should be stated that there are some methodological issues with asking people about apps. How one person defines "social media" or "browsing the web" may differ from how another does. Despite this limitation, it can still safely be said that none of the respondents thought the user was reading a book, playing a game or learning a new language.

4.2.4.2 · Body language 

As mentioned, body language played a role in explaining why one respondent thought the user was chatting. He saw the user typing with two fingers and figured that she was chatting, when in fact she was using a Planning app. It was also mentioned by another respondent that body language is an already existing cue for figuring out what a user is doing with their phone. A third respondent mentioned that she found the broadcasting concept useful, as it was sometimes difficult for her to interpret the body language of others. To her, it would sometimes seem as if someone was taking a picture of her, even though it was more likely that they were just trying to see their phone screen better. A fourth respondent stated that the user was not doing something healthy, judging by the position of her neck. In sum, it can be said that body language exists as an added layer for understanding what others use their smartphones for. This echoes Goffman's idea of the body idiom, and shows that smartphones do, as speculated, play a role in it.

4.2.4.3 · Privacy 

When reacting to the concept, many respondents saw privacy as an issue. One respondent mentioned that the granularity of the communication was important in figuring out whether the concept could be useful to him. If it specified who he was chatting with, it would infringe on his privacy, but if it simply specified that he was chatting, it could be useful. Another respondent was self-conscious about her smartphone use, and was not interested in letting others know that she was playing games or on social media. On the other hand, she thought it could be useful to know if others were livestreaming her. The respondent who mentioned that it could be difficult to know whether others were taking pictures of her expressed a similar concern regarding her privacy. The fact that privacy is important to the respondents seems to suggest that there is anxiety about smartphone use. Smartphones can give users some degree of power over others by making it possible to take pictures at any time without it being obvious. For one respondent, there was even anxiety about her own smartphone use: she did not want others to be able to see that she was using social media and playing games. Interestingly, the very same respondent had three smartphones, because she was a journalist. It seems likely that she uses at least one of them for work purposes, yet she was wary of showing off her non-work use.

4.3 · Version 2

Having finished a larger evaluation, I was now back on the path of further exploring the concept. The previous iteration had three main issues that became the focus of this next version. First of all, the output of the seven-segment display was hard to decipher. The fact that it had trouble showing an M also pointed to a certain lack of flexibility, and the display was difficult to see in daylight. Secondly, not all apps were given a category, which resulted in a lot of apps lacking a broadcast. This particularly became an issue when the prototype was handed over to another user with entirely different apps installed. Thirdly, the prototype was still quite clunky, and not easily wearable without being conscious of it. Thus, this version also focused on reducing the size and improving the wearability of the prototype. My methodology also changed slightly at this stage: I started wearing the prototype myself, and thus continually evaluated it in collaboration with my environment. These evaluations will be briefly touched upon in this section, but a more thorough summary can be found in section 5.2.

4.3.1 · Iteration 4: Moving to Raspberry Pi Zero W and changing the output

For the first iteration of the second version, I went to work on reducing the size of the prototype. This iteration was mostly an implementation prototype, with some look and feel prototyping happening when I changed the output. In figuring out what hardware to use for this purpose, I only had two requirements: Bluetooth and a size smaller than an Arduino 101. While wi-fi could also potentially work for wireless connections, it is traditionally linked to a wireless router as the gateway between devices. Bluetooth is designed for communication between two devices, which matched my concept exactly. These two requirements, however, caused more trouble than expected. I was only able to find one Arduino smaller than the 101 with Bluetooth capabilities: the Bluno Beetle (DFRobot, 2017). However, since it was shipped from China, it would likely take a while to arrive by mail. Per recommendation from a friend, I turned to the Raspberry Pi Zero W instead. It fit my requirements perfectly by having Bluetooth capabilities and a small size. While moving to a Raspberry Pi would require me to rewrite my program, it would be shipped from the UK and thus arrive faster.

Having decided on the hardware, I also contemplated the output type. I wanted something flexible that was visible in daylight, and which could more easily be deciphered by spectators. I decided on a 0.9 inch 128x64 white monochrome OLED display, which I put to use by showing icons for each app category. Icons provide more specific information for a spectator to decipher. The icons were taken from Google's Material Icon set (Google, 2017). Using an established icon set meant that I could quickly set up and test the prototype without spending too much time designing my own icons. The Material Icon set is also a good choice for this concept, since it is designed to be used in Android apps. This means that many icons are available that align with common app categories. The icons assigned to the different categories can be seen in figure 6.


 

Figure 6 · The categories (Social Media, Planning, Chat, Games, Finance, Media, Language, Browsers) and their corresponding icons for iteration 4

Finally, I just needed a power source for the prototype. A USB power bank fit well, since it is capable of providing 5 volts to the Raspberry Pi. Having attached the Raspberry Pi to the battery with gaffer tape, I was capable of wearing the prototype out in the real world. I decided to wear it myself, and started to get a feel for how it was to broadcast one's smartphone behavior. The prototype was worn as shown in figure 7, with the power bank extending down into my breast pocket.

 

Figure 7 · Iteration 4 in my breast pocket not broadcasting anything 

When first powered on, the prototype would show the icon on the left in figure 8 to indicate that it was ready to start a connection with a phone. This acted as a sort of feedforward to the broadcaster. When a connection was successfully made, the icon would change to the one on the right as feedback showing that broadcasting could commence. In this way, this iteration had also become clearer at communicating with the broadcaster when setting up the device.


Figure 8 · Bluetooth status icons

The introduction of icons, however, introduced a new issue to the concept: how do I decide which icon to use for which category? Some categories are easier to boil down to just one icon than others. Games can be signified by a game controller, and chat can be signified by a chat bubble. However, social media in particular turned out to be a wicked problem. In this iteration, a share icon was chosen, yet this was changed later on. It was clear that finding the perfect icon for every category, an icon that everyone would recognize as belonging to its category, was unlikely to happen. This complexity had to be acknowledged and managed. A similar issue also existed with the previous output type, yet for the seven-segment display, the options were so limited that showing a single letter was one of few possible solutions. As such, the issue was not as striking at the time.

4.3.2 · Iteration 5: Using Google Play Store categories

For the next iteration, focus was put on categorizing the apps. The main issue with the categorization at this point was that many apps had no categories. It had become clear from the previous iteration that categorization has no single solution. Seeing as entire studies have been made in the attempt to solve this issue for the iTunes App Store (Vakulenko, Müller and Brock, 2014), it was unlikely that I would fix it. To solve the issue of apps lacking categories, however, I turned to the Google Play Store. A large portion of Android apps are listed on the Google Play Store, unless they are stock Android apps or only appear on other app stores like the Amazon App Store. Their online listing can easily be found from their package name. As such, it only takes an HTTP GET request to the proper URL in order to fetch the main category of an app. To facilitate this process, I added a single button to the app, which would create an internal list of all apps installed on the device with their corresponding Google Play Store category. From here on, the manual categories from iteration 2 were removed. This also meant that updating this internal list of apps and their categories became a necessary step when installing the app for the first time.
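As a sketch of such a lookup, the function below requests an app's Play Store page and extracts the category from the returned HTML. The URL scheme is real, but the parsing is an assumption: the store's markup changes over time, and the thesis does not document the actual parsing code.

```kotlin
// Sketch: fetch an app's main category from its Google Play Store listing.
// Category pages are linked as /store/apps/category/<CATEGORY>, so the
// regex grabs the first such link; this parsing detail is an assumption.
import java.net.HttpURLConnection
import java.net.URL

fun fetchPlayStoreCategory(packageName: String): String? {
    val url = URL("https://play.google.com/store/apps/details?id=$packageName")
    val conn = url.openConnection() as HttpURLConnection
    return try {
        val html = conn.inputStream.bufferedReader().readText()
        Regex("/store/apps/category/([A-Z_]+)")
            .find(html)?.groupValues?.get(1) // e.g. "COMMUNICATION"
    } catch (e: Exception) {
        null // not listed (stock app or other store): no category
    } finally {
        conn.disconnect()
    }
}
```

Running such a lookup over the packages returned by PackageManager.getInstalledApplications would, presumably, produce the internal list that the button builds.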


Figure 9 · The app interface in iteration 5

This change had a series of consequences for the broadcasting as well. With my own categories gone, I had to map the icons onto Google Play Store categories. Where there used to be Chat, there was now Communication. This quickly resulted in some quirky behavior, since Google Chrome was also marked as a Communication app; the Google Play Store has no category for browsers. While not necessarily an incorrect categorization, Google Chrome did not really match the chat bubble icon that was previously used for Chat. Still, this iteration managed to show a major improvement simply because almost all apps were now automatically categorized.

For the apps without categories, the device was initially designed so the broadcasting screen would go blank. However, it turned out that the USB power bank shut itself off if it was not drawing enough current. This is a mechanism that ensures a power bank does not keep supplying power to an already fully charged phone. For my prototype, this became a nuisance, since turning off the broadcasting screen meant that almost no current was being drawn. If the screen was left turned off, the Raspberry Pi itself would turn off shortly thereafter. Instead, three dots were shown on the broadcasting display. This maintained a certain amount of current draw, while not communicating anything specific. A few apps that had actually been categorized also showed three dots. The difference was that, at this point, their categories had not yet been assigned icons. Examples include "Art and Design", "Shopping" and "Weather". Common to all of them was that they were not among my most regularly used apps, so they were prioritized lower. Truly going all the way with the Google Play Store categories would have meant bringing in icons for these categories as well. Yet at this point, my main focus was to have categories for as many apps as possible, and not to find the correct categories. For this reason, the categories shown in figure 10 are also little more than translations of my old categories into the new Google Play Store-based system.

Figure 10 · Google Play Store categories (Social, Productivity/Tools/Business, Communication, Games, Finance, Music and Audio/Photography/Video Players, Education, Books and Reference) and their assigned icons in iteration 5

4.3.3 · Iteration 6: Patchworking categorization & increasing wearability

After having evaluated the prototype extensively by wearing it for five days and at various social occasions, I set about working on two goals: building upon the categorization and increasing the wearability of the prototype.

As mentioned previously, the categorization still suffered from some quirky behavior, e.g. with Google Chrome. This was fixed by looking at what I call internal Android categories. In the Android system, it is possible to categorize one's app with a few predefined types. These types are not the same as the Google Play Store categories, and they serve a different purpose: they exist to simplify communication between different apps. For instance, if the Facebook app wants to open a link to a webpage, it can send this link to the default browser app via the Android system. This default browser app is categorized internally, and not via the Google Play Store. This categorization is not required of all apps, and can therefore not replace the Google Play Store system. Instead, it is an extra layer that increases the complexity of the broadcasting. There are a handful of these categories, but only two were of interest to me at this point: Browser and Home. Browser indicates an app that is able to browse the web, while a Home app is the app that controls the home screen of a phone. In my case, this was the Google Now Launcher (2015), but other examples include Nova Launcher (2017) and Evie Launcher (2017). By checking for these categories as well, I was able to show more meaningful information for these two very important smartphone use cases. The home screen can be seen as the "default" state of a smartphone, since it is the gateway to all other apps. Therefore, it is important as a way of showing when one is not doing anything in particular on one's phone, but is merely in a state of limbo. Secondly, distinguishing a browser from a chat app was an important step, since a browser's purpose is more complex than mere Communication. It was also shown in the evaluation after iteration 3 that browsing the web was one of the most commonly anticipated smartphone activities.
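Checking such internal categories can be sketched with Android's standard intent resolution, as below. Whether Broadcastr queried them in exactly this way is not specified in the text, so treat the concrete checks as assumptions.

```kotlin
// Sketch: classify a package as Home or Browser by asking the Android
// system which apps can handle the corresponding intents. The exact checks
// Broadcastr used are not documented; this shows the general technique.
import android.content.Context
import android.content.Intent
import android.content.pm.PackageManager
import android.net.Uri

fun internalCategory(context: Context, packageName: String): String? {
    val pm: PackageManager = context.packageManager

    // Apps declaring CATEGORY_HOME control the home screen.
    val home = Intent(Intent.ACTION_MAIN).addCategory(Intent.CATEGORY_HOME)
    if (pm.queryIntentActivities(home, 0)
            .any { it.activityInfo.packageName == packageName })
        return "Home"

    // Apps that can open a generic web URL act as browsers.
    val browse = Intent(Intent.ACTION_VIEW, Uri.parse("http://example.com"))
        .addCategory(Intent.CATEGORY_BROWSABLE)
    if (pm.queryIntentActivities(browse, 0)
            .any { it.activityInfo.packageName == packageName })
        return "Browser"

    return null // fall back to the Google Play Store category
}
```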

The second part of fixing the categorization was reviewing all the categories that had so far been assigned. Productivity and Tools were lumped together at this point. This seemed incorrect, as the purposes of the apps were quite different: Tools usually refers to an app that can assist in some other activity, for instance a calculator or a translation app, while Productivity generally refers to management apps like calendars or note-taking apps. Productivity was now given the same icon as Business apps, while Tools were given their own category. Social was also given an icon showing two people to signify something social, since not everyone was able to recognize the share icon. This makes sense considering that the notion of a Share icon has been the subject of some debate within the design community for a while (ewooycom, 2015; Ming, 2014). Finally, the Media categories were split up from each other to afford spectators the ability to differentiate between various types of media.

Figure 11 · Google Play Store categories (Social, Productivity/Business, Tools, Communication, Games, Finance, Music and Audio, Photography, Video Players, Education, Books and Reference) and their assigned icons in iteration 6

The other focus of this iteration was on increasing wearability. So far, the battery was three to four times as large as the Raspberry Pi, and it was only possible to wear it in a pocket. The solution manifested itself in two ways: reducing size and making the prototype attachable. In order to reduce the size of the prototype as a whole, I had to start with the battery. While I would have preferred something rechargeable, my demand for increased wearability had me focus only on size. I acquired three different types of battery holders that could supply power with AA or AAA batteries. One was a holder for three AAA batteries, which would supply an average of 4.5 volts, and another was a similar holder for four AAA batteries with a total average output of 6 volts. The third holder contained four AA batteries and had a built-in USB output, which converted the voltage to 5 volts, as well as a clasp so one could easily attach it to clothes.


 

Figure 12 · Different battery types and Raspberry Pis for comparison. From top left: 4 x AAA holder, 4 x AA holder with USB and clasp, 3 x AAA holder, USB power bank from iteration 4, Raspberry Pi Zero W with attached OLED display and Raspberry Pi B+.

Before moving on to the other battery holders, I wore the prototype with the USB holder (seen in figure 12) on the neck of my shirt for an evening of drinking beers with friends. Powering the Raspberry Pi through USB provided built-in safety measures in case of faulty batteries or similar. It was quite clear that the holder was both clunkier and heavier than the USB power bank. However, the clasp made it much more relaxing to wear, since I was no longer worried about the prototype falling out of my pocket.

Figure 13 · First iteration of wearing the USB battery holder prototype

A friend suggested that I instead put the battery pack in my pocket and use a longer USB cable to reach the Raspberry Pi. I tried this out for a couple of hours the next day, but soon found it cumbersome to fill my pocket with a battery and have a USB cable wrapped around my belt rim, as shown in figure 13. Having all of the parts connected seemed to have
