
Emil Bergman

Microsoft Teams:

A qualitative usability study

Information Systems

Bachelor's thesis

Term: Autumn-20

Supervisor: Dr. Bridget Kane PhD


Abstract

Working together at a distance is not easy, and the usability of the digital workspaces used is of utmost importance. Companies providing these digital workspaces constantly need to evaluate that usability to find problems and improve their products using design principles for designing interactive systems. The purpose of this study is to contribute knowledge about the usability of the digital workspace MS Teams. There are several methods for usability evaluation, but the most fundamental is to test with real users: a usability test.

The method used to evaluate the usability of Microsoft Teams is a usability test, with participants as close to the intended end-users as possible and with a post-test interview after each test. During the usability test the participants were observed whilst using the product and thinking aloud. Thinking aloud means that the participant continuously verbalises their thoughts while using the product, which gives the researcher an opportunity to understand how the participants view the product and to identify any misconceptions they might have.

The main results show that there are several usability problems with Microsoft Teams, especially during the log-on process and when changing the output source. At the same time, the results show that sharing files and calling are some of Microsoft Teams' strengths with regard to usability, and that the perceived usability of Microsoft Teams is high.


Acknowledgments

I would like to thank my supervisor, Bridget Kane, for all the encouragement, support and help with the thesis. I also want to thank Adam Beskow, Mustapha Drammeh and Muhammad Abdirahman Omar for all their help during our many long discussions.

I also want to thank all my friends and family who participated in this study, were willing to discuss my subject and helped me complete the thesis.

I also want to give a special thanks to my mother who as always gave me support, strength and help to see this study through.

Without you all, this thesis would not have been possible.

Emil


Table of contents

1. Introduction ... 1

1.1 Background ... 1

1.2 Purpose ... 1

1.3 Target group ... 1

1.4 Ethical considerations ... 2

1.5 Scope ... 2

2. Literature overview ... 3

2.1 Designing interactive systems ... 3

2.2 PACT framework ... 3

2.3 Design principles ... 3

2.4 Usability ... 4

2.5 Usability evaluation ... 4

2.6 Communication ... 5

2.6.1 Verbal communication ... 5

2.6.2 Written communication ... 5

2.6.3 Non-verbal communication ... 5

2.7 Distance matters ... 6

2.8 Computer Supported Cooperative Work ... 7

2.8.1 Shared work spaces ... 7

2.8.2 Shared workspaces ... 7

2.9 MS Teams ... 7

3. Methodology ... 9

3.1 Usability testing ... 9

3.1.1 Pilot test ... 9

3.1.2 Metrics and measures ... 10

3.1.3 Test introduction script ... 10

3.1.4 Thinking aloud ... 10

3.1.5 Number of participants ... 11

3.1.6 Getting test users ... 12

3.1.7 Participant background ... 12

3.1.8 Test environment ... 13

3.1.9 Test plan ... 13

3.1.10 Tasks ... 13

3.2 Data collection in a usability test ... 14

3.2.1 Observation ... 14

3.2.2 Post-test interview ... 14


3.2.3 Recording ... 15

3.3 Study Design ... 15

3.3.1 The usability test ... 15

3.3.2 Preparations ... 15

3.3.3 Implementation ... 20

4. Results and analysis ... 22

4.1 Results ... 22

4.1.1 Task completed ... 22

4.1.2 Problems discovered through usability test ... 22

4.1.3 Remarks from the post-test interviews ... 30

4.1.4 Anonymously advised features of MS Teams ... 30

4.2 Analysis ... 32

4.2.1 Method discussion and reflection ... 32

4.2.2 Changes made to MS Teams ... 32

4.2.3 Reliability and validity ... 33

4.2.4 Usability problems found ... 34

5. Recommendations and conclusion ... 37

5.1 Recommendations ... 37

5.2 Limitations ... 38

5.3 Further research ... 38

5.4 Conclusion ... 39

Bibliography ... 40

Appendices ... 42

Appendix A: Pre-test questionnaire ... 42

Appendix B: Introduction script ... 43

Appendix C: Information letter ... 44

Appendix D: Consent form ... 45

Appendix E: Information notes for tasks ... 46

Appendix F: Individual results and post-test interview... 47


List of figures

Figure 1. Characteristics that contribute to achieving common ground. (Source: Modification of Olson & Olson 2000, p.160) ... 6

Figure 2. The space-time matrix. (Source: Modification of Benyon, 2014, p.368) ... 7

Figure 3. Usability problems found per participant. (Source: Modification of Nielsen 2000) ... 12

Figure 4. Current window and shared files ... 25

Figure 5. Chat symbol, start new chat symbol and video ... 26

Figure 6. Call symbol in chat box ... 26

Figure 7. Settings from activity ... 27

Figure 8. Settings from user avatar ... 27

Figure 9. Menu inside setting ... 28

Figure 10. Settings in Devices ... 28

Figure 11. Share content symbol ... 29

Figure 12. Minimized call at the right bottom corner of the PC ... 29


List of tables

Table 1. Information regarding test participants ... 18

Table 2. Successful or failed task according to SCC ... 22

Table 3. Usability problems discovered through usability tests ... 23

Table 4. Anonymously advised good and bad features of MS Teams ... 31


1. Introduction

The introduction contains a background of the problem area, the purpose of the thesis, the target group for the thesis, ethical considerations made and the scope of the thesis.

1.1 Background

A central tenet in the field of human-computer interaction (HCI) and interaction design is that systems should have high usability. The most well-known definition of usability, according to Barnum (2011, p.11), is the one from the International Organization for Standardization (9241-11): “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.” Usability is all around us, from how easy it is for a user to set an alarm for the next morning, regardless of whether it is set on a smartphone or an actual alarm clock, to the registration of a new customer in an enterprise resource planning system. At the time of writing this thesis the COVID-19 pandemic has forced many businesses to alter their way of conducting business, and a lot of people now work from home. Meetings that were previously held face to face now occur via video conferencing systems, and Microsoft Teams (hereinafter referred to as MS Teams) is a digital workspace that is often used. This places high demands on the usability of MS Teams to ensure that this transition and change in work procedures goes as smoothly as possible. In a well-known study called ‘Distance Matters’ (Olson & Olson, 2000, p.152) the authors bring up several problems with working together at a distance and explain how people working at a distance must change the way they work to achieve the same results as when they work in the same physical place.

1.2 Purpose

The purpose of this study is to contribute with knowledge of the usability of the digital workspace MS Teams.

This thesis addresses three Research Questions (RQ):

RQ1: What is the perceived usability of MS Teams?

RQ2: Are there any usability problems with MS Teams?

RQ3: Are there any particular strengths in the usability of MS Teams?

1.3 Target group

The target group for this thesis is the people working on MS Teams, since they can learn about the perceived usability of their product. Another target group is people working with other video conference systems or digital workspaces, or people planning to create a new video conference system or digital workspace, since they could learn which design principles to include in or exclude from their product. The study is also of interest to companies that want to conduct their own usability study of one of their systems, or to researchers that want to conduct a usability study of any kind of MS Teams.


1.4 Ethical considerations

There are four main requirements for conducting research: the information requirement, which means that affected persons are informed of the purpose of the research; the consent requirement, which means that participants in the research decide for themselves on their participation; the confidentiality requirement, which means that any data collected from participants shall be kept confidential and stored in a way that unauthorized persons cannot reach it; and the usage requirement, which means that the data collected from participants can only be used for research purposes (Patel & Davidson, 2011, p.63).

This thesis has been designed with these four main requirements and the Swedish Research Council's principles for good research practice in mind. The participants in the usability test in this study were all informed that they were the subjects of research and gave written consent, which is in accordance with good research practice (Swedish Research Council, 2017, p.26). In the written consent the participants were informed about the purpose of the study and the scope of their participation, including voice recordings during the test session. The written consent also included information that participation is voluntary and can be withdrawn at any point during the study without further consequences.

Since the recording of a person's voice constitutes handling of personal data, which are data that can be linked, directly or indirectly, to a physical person (Swedish Research Council, 2017, p.27 & p.71), these recordings were stored in a secure manner so that no unauthorized persons could reach them, and were destroyed after the final grading of the thesis.

1.5 Scope

Usability plays an important role in HCI and interaction design, and this thesis examines the perceived usability of MS Teams. MS Teams has different types of users, such as administrator roles, Team owners, Team members and guest users (Microsoft, 2018; Microsoft, 2020a; Microsoft, 2020b). Since MS Teams is not available from Karlstad University, this thesis only examined the perceived usability from a guest user's perspective and not from any other roles.

When improving the design of a product through usability tests you want to work iteratively: conduct a usability test with five participants, which will find 85% of the usability problems, fix these problems in a redesign and then do another usability test with five new participants (Nielsen 2000). Since this thesis does not involve any collaboration with Microsoft, any usability problems found in the first test will not be fixed, and there will therefore only be one iteration of tests.


2. Literature overview

This chapter explains fundamental concepts for the thesis in relation to its purpose and Research Questions. The overview provides an explanation of design principles used for designing interactive systems and information regarding distance collaboration. Most of the literature used for the design principles comes from Jakob Nielsen and the late David Benyon. Nielsen holds a PhD in HCI, has co-founded the Nielsen Norman Group and has invented several usability methods (Nielsen Norman Group, n.d.). Benyon was a Professor of Human-Computer Systems with decades of experience in HCI and over 150 refereed publications covering HCI, interaction design and intelligent user interfaces (Benyon, 2014). The key literature for distance collaboration comes from Olson and Olson, who are Professors in the School of Information and the Department of Psychology, and their study Distance Matters, which has been cited more than 2,400 times (Olson & Olson, 2000, p.152; Google Scholar, 2020).

2.1 Designing interactive systems

Designing interactive systems is about creating an interactive experience for people, and it is important to be human-centred. Being human-centred means thinking about what people want to do instead of what the technology can do, designing new ways for people to connect with each other, having people involved in the design process and designing for diversity. To ensure that the design truly is human-centred, the PACT framework (PACT is an acronym for people, activities, contexts, and technology) is a useful tool (Benyon, 2014, p.12 & p.25).

2.2 PACT framework

As stated in the previous section, PACT is an acronym for people, activities, contexts and technology, and a useful tool in designing interactive systems. People can differ in physical characteristics and in their five senses (sight, hearing, touch, smell, and taste), but they also have psychological differences and vary in their usage of the system. Activities vary in whether they are time sensitive, whether they involve cooperation, whether they are safety-critical, the nature of the content they require and their complexity. Contexts vary depending on physical, social, and organizational aspects, and technologies vary depending on the input, output, content, and communication they support (Benyon, 2014, p.27-43).

2.3 Design principles

According to Nielsen (1993, p.20) there are several usability guidelines and design principles created over the years, and they all list similar heuristics. Nielsen (1993, p.20) lists ten usability principles that, according to him, should be followed by all user interface designers. In more recent literature Benyon (2014, p.86) claims, however, that the level of abstraction provided by different people at different times is sometimes inconsistent and confusing. Benyon (2014, p.86-87) elaborates twelve design principles based on Nielsen's heuristics and groups them into three main categories: learnability, effectiveness, and accommodation, listed below.


Learnability:

• Visibility – Ensuring that users can see what functions are available and what the system is currently doing.

• Consistency – Consistency with design features, similar systems, and standard ways of working.

• Familiarity – Use a language and symbols that the intended users are familiar with.

• Affordance – Design things so it is clear what they are for, like making buttons look like clickable buttons.

Effectiveness:

• Navigation – Support which enables users to move around the system, like maps, directional signs and information signs.

• Control – Allow users to take control and make it clear who or what is in control.

• Feedback – Provide rapid feedback so the users know what effect their actions had.

• Recovery – Enable quick and effective recovery from actions, especially mistakes and errors.

• Constraints – Provide constraints so users do not try doing inappropriate things.

Accommodation:

• Flexibility – Allow doing things in different ways to accommodate users with different expertise and allow personalisation of the system.

• Style – Make the design stylish and attractive.

• Conviviality – Make the system polite, friendly, and overall pleasant.

2.4 Usability

As stated in the introduction chapter, the most well-known definition of usability is the one from ISO (9241-11): “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.” This definition focuses on three measures of usability, effectiveness, efficiency, and satisfaction, but at the same time clarifies that it applies within a specified context of use, by a specified user, with a specified goal. With high effectiveness and efficiency, a user can achieve their goal for using the product with accuracy and speed. The last measure, satisfaction, derives from the user's perception of the experience. This measure is so strong that if users feel that their overall experience was a positive one, they will still have an interest in using the product although problems regarding effectiveness and efficiency may be obvious (Barnum, 2011, p.11-12).

There are other definitions of usability, like the 5Es, which come from Whitney Quesenbery, a well-known usability consultant. The 5Es consist of effective, efficient, engaging, error tolerant and easy to learn (Quesenbery, n.d.). Another definition derives from Nielsen (2012a), who defines usability with five quality components: learnability, efficiency, memorability, errors, and satisfaction. Rubin and Chisnell (2008, p.4) state that what really makes something usable is the absence of frustration in using the product, and define usability as when the user can do what he or she wants, in the way that they expect to be able to do it, without any hindrance, hesitation or questions. They go on to describe that for a product to be useful it must be efficient, effective, satisfying, learnable and accessible.

2.5 Usability evaluation

“User testing with real users is the most fundamental usability method and is in some sense irreplaceable, since it provides direct information about how people use computers and what their exact problems are with the concrete interface being tested.” (Nielsen, 1993, p.165)


There are different methods for determining the usability of a product or conducting a usability evaluation. One common method early in the design process is expert evaluation, which is when a usability expert looks at the system and tries using it; this is a quick and effective method for finding usability issues. Another method is heuristic evaluation, which is when a person trained in HCI examines the design to see how it measures against a list of good heuristics. Even though both these methods are effective ways of determining the usability of a system, there is no substitute for getting real people to use the system (Benyon, 2014, p.217). Usability testing with real users is the most fundamental usability method and, in many senses, irreplaceable. It provides the researcher with direct information on how users use the product and what their exact problems with the interface are (Nielsen, 1993, p.165). Nielsen (1993, p.165) even states that the other methods for conducting a usability evaluation can act as supplements to usability tests to gather additional information, which further emphasises the importance of conducting a usability test.

To ensure that the usability evaluation is as effective as possible there are several things to consider. First of all, the researcher must ensure that the aims of the evaluation are established, who the intended participants are, and what the context of use and the state of the technology are, and must obtain or construct scenarios illustrating how the product will be used. Secondly, the researcher must decide on what evaluation method to choose, plan and recruit people, and organize the testing venue and equipment. After that they can carry out the evaluation and analyse and document the results (Benyon, 2014, p.224).

2.6 Communication

Communication is critical to being able to work together, and to understand communication means to understand theories of semiotics: the study of signs, how they function and how we exchange signs through some communication channel. Signs could take the form of words, either through speech or writing, or images, sounds, gestures, or objects (Benyon, 2014, p.369 & p.529).

2.6.1 Verbal communication

Speech has more characteristics than just the words spoken. Within linguistics the term prosody is concerned with the rhythm, stress, and intonation of speech where variations of pitch, the tone of speech or even the speed of delivery can affect the meaning that they convey. These variations are important for conveying emotions and subtle variations of meaning which could be lost in written communication (Benyon, 2014, p.530).

2.6.2 Written communication

Prosody can often be lost when communicating through written text, but italics, bold and other typographic cues have long been used to indicate emphasis. In more recent times, emoticons have been used as an additional cue for conveying emotions or variations of meaning in written communication (Benyon, 2014, p.530).

2.6.3 Non-verbal communication

Non-verbal communication refers to communication in the form of signs outside of the spoken channel, whether they are intentional or not. These signs could be facial expressions, gestures, or body language. Facial expressions, concerning changes in the eyes, mouth, cheeks, and other facial muscles, are an important component of non-verbal communication. It is also believed that a large portion of the brain is dedicated to understanding each other's facial expressions. Gestures can be movements of our head, hands or body and are important for displaying the structure of an utterance by showing how things are grouped, pointing at objects or people, and giving illustrations of size, shape, or movement. Body language is concerned with posture and movement, which express attitudes and moods. This could be leaning forward, folding the arms, or keeping eye contact (Benyon, 2014, p.531-532).

2.7 Distance matters

Olson and Olson (2000, p.139) explain that with the invention of groupware (which we can call a digital workspace), people expect to communicate easily with each other and accomplish difficult work even though they are remotely located. According to Olson and Olson (2000, p.152) these attempts to collaborate with distance technology often fail. When using an audio connection and a shared editor for real-time work, even people who are used to working together will not get the same quality of work as that done face to face. However, they discovered that those with video connections produced output with the same high quality as those working face to face. It is worth mentioning that those working with video connections had to change the process of their work and focus more on clarification and management overhead than those actually working face to face (Olson & Olson, 2000, p.152).

To achieve effective communication between people, the communication needs to take place with respect to some level of common ground. Common ground refers to the knowledge that the communicating participants have in common, and their awareness that they have it in common. The concept is, however, not only about a person's background; it is also about knowledge gained from a person's appearance and behaviour during the conversation. A facial expression or verbal reply could indicate that the receiver did not understand the transmitter, and the transmitter can then revise their assumptions of common ground, rephrase what they said and repair the misunderstanding. Establishing common ground is a subtle dance that adapts its steps to each new discovery and is constructed from whatever cues are available at the moment; the fewer cues we have, the harder common ground is to establish. The cues and their mediums can be seen in Figure 1 (Olson & Olson, 2000, p.157).

Figure 1. Characteristics that contribute to achieving common ground. (Source: Modification of Olson & Olson 2000, p.160)


2.8 Computer Supported Cooperative Work

Computer supported cooperative work (CSCW) is a domain linked with HCI, and the term is often associated with technology regarding social computing applications within the world of work. These applications support both remote and face-to-face collaboration and consist of video- and audio-conferencing, chat and application sharing, which provide same-time (synchronous) different-place communication, and e-mail and threaded discussions, which provide different-time (asynchronous) different-place communication. One way of categorizing CSCW technologies is with the space-time matrix, see Figure 2, which illustrates that people can collaborate whilst being co-present or at different locations and that their collaboration can be synchronous or asynchronous (Benyon, 2014, p.363-369).

Figure 2. The space-time matrix. (Source: Modification of Benyon, 2014, p.368)

2.8.1 Shared work spaces

Shared work spaces are technologies that support asynchronous working, such as bulletin boards, threaded discussions, news groups and shared folders, by giving users access to shared information. In essence, all it takes is that a user permits sharing and a set of permissions is established for those who should have access to the information (Benyon, 2014, p.370).

2.8.2 Shared workspaces

Shared workspaces are technology tailored for specific purposes that support synchronous working. These technologies could be very different, and instances include real-time shared text-editing systems, free hand sketching for architectural design or even technologies that support the illusion of collaborating in a three-dimensional space (Benyon, 2014, p.371).

2.9 MS Teams

MS Teams is a hub for teamwork where people both inside and outside of an organization can connect and collaborate synchronously. People can have one-on-one meetings or calls with fully integrated voice and video, have informal chats, co-author a document or work together in other apps and services. MS Teams offers a shared workspace for people to iterate quickly on a project, work together with team files and collaborate on shared deliverables. Every new team created generates a new Microsoft 365 group, a SharePoint Online site and document library, an Exchange Online shared mailbox and calendar, and a OneNote notebook, and ties into other Microsoft 365 and Office 365 apps such as Planner and Power BI (Microsoft, 2020c).

It is important that synchronous, different place communication and collaboration works smoothly, now more than ever due to the ongoing pandemic in the world. Since MS Teams is a rather complex system this places high demands on its usability. To examine the usability of a system one needs to study how efficient, effective, and satisfactory a product is. In addition to this it is important to address a specified context of use by a specified user with a specified goal.


3. Methodology

This chapter describes the method used for gathering empirical data and how the method was used in this study. The literature used for the method comes from Nielsen's book Usability Engineering and several articles from the Nielsen Norman Group, Barnum's book Usability Testing Essentials: Ready, Set… Test! and Rubin and Chisnell's book Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. Barnum is a UX research expert and an award-winning author and speaker on topics including usability testing and technical communication (Barnum, 2020). Rubin holds a degree in Experimental Psychology and has over 30 years of experience as a human factors/usability specialist, and Chisnell is an independent usability consultant and user researcher who has been doing usability research since 1982 (Rubin & Chisnell, 2008).

3.1 Usability testing

According to Barnum (2011, p.13), a usability test is the activity of observing users working with a product, performing tasks that are real and meaningful to them, and she divides tests into two different types. The types are formative testing, which is testing done iteratively during product development, and summative testing, which is done after the product is finished with the goal of validating that the product meets requirements (Barnum, 2011, p.14). Rubin and Chisnell (2008, p.29-35) have chosen to describe several different tests depending on what stage of development the product is in. Their definition of a formative test is the same as Barnum's, but they believe that the summative test is also done at an early or midway stage of development and instead call what Barnum describes as a summative test a validation or verification test. Rubin and Chisnell (2008, p.21) define a usability test as a process that employs people who are representative of the target group as testing participants to evaluate the degree to which the product meets specific usability criteria. The main goal of formative evaluation is to learn in detail what aspects of the design are good, what is bad, and how it can be improved, and this is often done through a thinking-aloud test (Nielsen, 1993, p.170).

A usability test can be big, with a large number of participants to provide different metrics or statistics, or smaller, to quickly find out what works best for the users (Barnum, 2011, p.18). Rubin and Chisnell (2008, p.21) claim that the range of tests is considerable, and a test could be complex in its design with large sample sizes or more informal and qualitative with a small number of participants, depending on the objective, time, and resources available.

3.1.1 Pilot test

“The importance of conducting one or more pilot tests cannot be overstated. Do not cut this step short, or you will find that your first one or two real participants will be used “to get the bugs out” of your testing process, essentially acting as the pilot test.” (Rubin & Chisnell, 2008, p.215)

When constructing a usability test there is always a risk that some of the tasks are incomprehensible, that they are misunderstood or that they are too difficult. That is why there should always be a trial of the test procedures on a few pilot subjects through pilot tests. For smaller tests one or two pilot subjects can be enough, unless severe deficiencies are discovered (Nielsen, 1993, p.174). Rubin and Chisnell (2008, p.215) explain that cutting the pilot step will always set the Test Administrator back, because the first one or two real tests carried out will go to waste getting the bugs out. They stress that the importance of conducting pilot tests cannot be overstated and that the test should be piloted in its entirety. By its entirety they mean the orientation script, the test scenario and conducting the entire test, which not only helps find tasks that might not be applicable, questionnaires that were misunderstood and new areas that need testing, but also gives the Test Administrator an opportunity to practice their data collection.

3.1.2 Metrics and measures

When the product is sufficiently developed it can be important and useful to set metrics for the study, which can be used by management to support business goals for the product or by user experience practitioners to help make a case for product improvements. It is highly effective to combine metrics with observations and comments from the participants (Barnum, 2011, p.137). Determining if a task has been achieved successfully or not is reasonably straightforward, but there are some difficulties in using metrics. One of the difficulties is deciding what the acceptable figure for the percentage of tasks successfully completed is. A lot of the time these percentages derive from comparative testing against previous versions, alternative designs, or rival products. Another difficulty for the researcher is determining if the metric is relevant (Benyon, 2014, p.226-227). In contrast to Benyon, who deems determination of success easy, Rubin and Chisnell (2008, p.80) argue that there is much disagreement and differing opinions on what represents successful completion of a task. Rubin and Chisnell (2008, p.80) suggest dealing with this problem by including a successful completion criterion (SCC) in every task description.
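To illustrate the completion-rate metric discussed above, the sketch below computes the percentage of tasks successfully completed from per-task pass/fail judgements made against each task's SCC. It is only a sketch; the task labels and outcomes are hypothetical examples, not data from this study.

```python
# Minimal sketch: percentage of tasks successfully completed.
# Each task is judged pass/fail against its SCC; labels and outcomes are hypothetical.
results = {
    "T1": True,
    "T2": True,
    "T3": False,  # e.g. assistance was needed, so the task counts as failed
    "T4": True,
}

completed = sum(results.values())
completion_rate = completed / len(results) * 100
print(f"Tasks completed: {completed}/{len(results)} ({completion_rate:.0f}%)")
```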

3.1.3 Test introduction script

Before the test starts, an introduction should be read to the participants, welcoming them, thanking them for participating and stating the purpose of the study. This introduction is a good place to inform the participants that every aspect, both positive and negative, of what the participant thinks and does is of interest, and to mention that you are not the developer (and will not take criticism personally) and are therefore open to any suggestions the participant wants to share (Barnum, 2011, p.168). Another important thing during the introduction is to assure the participants that it is the product, and not the participants themselves, that is being tested. It is important to always read from the script to ensure that every participant gets the exact same information, and to try to keep the speech short and professional. The researcher should also explain that the participants are allowed to ask questions but that these might not be answered, so as not to affect the study (Rubin & Chisnell, 2008, p.155-161).

3.1.4 Thinking aloud

A thinking-aloud test is when the participant continuously thinks out loud while using the product, and it might be the most valuable usability engineering method. When the participant thinks aloud it gives the researcher an opportunity to understand how they view the product, to identify misconceptions among the participants, and to see how the participants view each individual interface item (Nielsen, 1993, p.195). In a more recent article Nielsen (2012b) claims that he still stands by his statement that thinking aloud is the most valuable usability engineering method and continues with a list of think-aloud benefits. The benefits are:

o Cheap – There are no special equipment needed, all the researcher must do is sit next to the participant and take notes as he or she talks.

o Robust – Even though most people are poor facilitators the findings of the think-aloud method are still good even if the study was run poorly.

o Flexible – The researcher can use this method at any stage of the development and to evaluate any type of user interface.


o Convincing – Since it gives a direct exposure to how users think about the design it can convince most of the people involved in the design to pay attention to usability.

o Easy to learn – The researcher does not need much to run a basic think-aloud test.

Even though Nielsen (1993, p.195-196) endorses the think-aloud method, he brings up a few disadvantages with it. The main disadvantage is that it does not work very well with performance measurements. Other disadvantages are that when users think aloud, their own theories on what is causing a usability issue might cloud the real problem, and that thinking aloud feels unnatural to many people, so participants might have a hard time doing it while also carrying out the test, which might have an impact on the results.

3.1.5 Number of participants

“Some people think that usability is very costly and complex and that user tests should be reserved for the rare web design project with a huge budget and a lavish time schedule. Not true. Elaborate usability tests are a waste of resources. The best results come from testing no more than 5 users and running as many small tests as you can afford.” (Nielsen, 2000)

There is no universally agreed upon minimum number of participants needed to conduct a usability test. When conducting strict experimental methods it is more common to have a larger number of participants, and studies that are designed to produce statistics often fall within the 12-to-20 participant range. With that said, it is still possible to conduct a usability test with only five participants as long as they all fit within one subgroup of testers (Barnum, 2011, p.252 & p.116). When doing a less formal usability test, which will not result in any statistical knowledge but instead reveals general usability deficiencies, four or five participants are enough (Rubin & Chisnell, 2008, p.72).

Nielsen (2000) describes how using no more than five users, and instead running as many small tests as you can afford, is the best way to get the most results. He and his colleague Tom Landauer showed that the number of usability problems found can be modelled by the formula:

N(1 − (1 − L)^n)

N is the number of usability problems in the design, L is the proportion of usability problems found while testing a single user and n is the number of participants. This formula plotted as a curve is commonly found in usability-related studies, see Figure 3.


Figure 3. Usability problems found per participant. (Source: Modification of Nielsen 2000)
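As a worked illustration of the formula above, the sketch below computes the expected share of problems found for a few values of n. The value L = 0.31 is the typical single-user proportion Nielsen (2000) reports; it is an assumption here, not a figure measured in this study.

```python
# Share of the N usability problems expected to be found with n test users,
# following the Nielsen & Landauer model: found = N * (1 - (1 - L)^n).
# L = 0.31 is the typical single-user value reported by Nielsen (2000).
L = 0.31

for n in (1, 3, 5, 10, 15):
    share_found = 1 - (1 - L) ** n  # fraction of the N problems discovered
    print(f"{n:2d} users -> {share_found:.0%} of problems found")
```

With these assumptions, five users are expected to uncover roughly 84-85% of the problems, which is the figure cited in section 1.5.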

3.1.6 Getting test users

When recruiting people as participants in a usability study, the most important thing to remember is that they should be as representative of the intended end-users of the product as possible. Software that is intended for the general population can in principle use anybody as a participant in a usability test, but one thing that should be considered is that if the software is also intended to be used by older users, who might have different characteristics, they should be included (Nielsen, 1993, p.175-176). When the product is intended for the general population it is not uncommon to have friends and family that might be suitable to participate as test users in the study. When using friends and family it is important for the researcher to act professionally, do everything exactly as they would normally do, not talk about the study until after it is complete and not be overly friendly during the test (Rubin & Chisnell, 2008, p.134). One common way of categorizing users is by dividing them into novice and expert users. It is typical to test these two subgroups in separate tests since wrong conclusions could be drawn otherwise (Nielsen, 1993, p.177-178).

3.1.7 Participant background

Even though all the participants are representative of the intended end-users, they might have different characteristics that are useful to know. To get this relevant information from each participant a pre-test questionnaire can be used. The benefit of a pre-test questionnaire is that it can shed some light on a participant's specific actions (Barnum, 2011, p.173-174). Rubin and Chisnell (2008, p.174) describe 'pre-test questionnaires' as addressing specific test objectives such as the participants' first impression of the product. What Barnum (2011, p.173) calls pre-test questionnaires, Rubin and Chisnell (2008, p.162-163) call background questionnaires, and they state that it is important to ascertain any background about the participants that might affect their performance. In this thesis the term 'pre-test questionnaire' is used to describe the questionnaire that depicts the background of the participants.

3.1.8 Test environment

The idea that a usability test must be formal, highly structured and conducted in a usability lab is incorrect; in fact, some tests should never be done in a lab. To determine the appropriate location for a usability test the researcher must consider several factors. Some of these factors are what kind of test is being done, whether qualitative data will be collected from the participants, whether special equipment is needed to collect data and how easy it is for the participants to leave their daily routine to take part in the study (Rubin & Chisnell, 2008, p.94-95). Barnum (2011, p.25) goes as far as to state that all you need is a pad and pen, the product and the user. She continues by explaining that usability tests can be done in a lab, in the field, remotely or in any suitable space such as a conference room or office. The benefit of an informal lab is that it can be set up anywhere and at very little cost; if the room has a desk, all you need to add is a laptop (Barnum, 2011, p.37).

3.1.9 Test plan

When full reporting is required or expected, or when key stakeholders are absent during the planning, a test plan should be produced (Barnum, 2011, p.145). It is also an opportunity to clarify the purpose of the test and whether the test is a summative evaluation or a formative evaluation (Nielsen, 1993, p.170). According to Nielsen (1993, p.170-171) the test plan should address the following issues:

o The goal of the test

o Where and when the test will take place

o How long each test session is expected to take

o What technology will be needed for the test

o The state of the system and the network

o The person that will serve as experimenter

o The persons that will serve as participants and how they will be obtained

o The number of participants that will be needed

o Criteria to determine if tasks are successfully completed

o Any user aids that will be available to the participants

o To what extent the experimenter will help participants

o What data will be collected and how it will be analysed

o What the criterion for pronouncing the interface a success or not is

3.1.10 Tasks

When deciding on test tasks it is important to choose tasks that are as representative as possible of the actual tasks that the system will be used for and that cover the most important parts of the user interface. The tasks need to be small enough that the participants can complete them in a timely fashion, but not so small that they become trivial. Every task should have a clear goal, so the participants know when it is done and how that is different from just playing around. To ensure that every participant gets the same instructions for every task, the instructions should be given to them in writing, and the participants should be given a chance to ask questions about the task to avoid misinterpretations. The tasks should be business-oriented and never humorous, to avoid a nonserious feeling, and the very first task should be simple to ensure the participant completes it, which gives them a morale and confidence boost (Nielsen, 1993, p.185-187).

Developers will often disagree on what represents successful completion of a task, and an SCC should be added to the task description to avoid these disagreements (Rubin & Chisnell, 2008, p.80). Assistance with the tasks should only be given as a last resort, since it will affect the results in a major way. Times when assistance might be needed are when the participant is very lost or very confused, or when they are exceptionally frustrated and might give up (Rubin & Chisnell, 2008, p.211-212).

3.2 Data collection in a usability test

The following sub-sections describe how data are collected during a usability test. Much of the data originates from the observations and the participant thinking aloud but additional data can be acquired from the post-test interview.

3.2.1 Observation

When logging observations in a formative test it is common to use a rich qualitative logging process where quotes from participants, descriptions of actions and nonverbal observations, such as sighs, leaning forward or putting their head in their hands, are logged (Barnum, 2011, p.225-226). When the participants are prompted to think aloud, as discussed in sub-section 3.1.4 Thinking aloud, the quotes from participants add richness to the data collected. Rubin and Chisnell (2008, p.154) state that additional observers can be important since their notes can be very useful, and in their checklist for observers they also state that the observers should avoid laughing, grunting, aha-ing and distracting body language.

3.2.2 Post-test interview

To get additional data there should be a debriefing of the participant after the test is complete, which allows participants to share their experience in their own words (Barnum, 2011, p.187). It is common to ask the participants for any comments that they might have about the system and whether they have any suggestions for improvement. Even if these suggestions do not lead to any change in the design, they still give the researcher a rich source of ideas to consider. The post-test interview is also an opportunity for the researcher to ask the participant questions about events during the test that were hard for the researcher to understand (Nielsen, 1993, p.191). Barnum (2011, p.187) claims that the post-test interview should be semi-structured, using a few predetermined questions to get the interview going and then letting it take its own direction after that. A semi-structured interview is when the researcher creates a list of topics to discuss and the interviewee has the freedom to formulate the answers (Patel & Davidson 2011, p.82). Rubin and Chisnell (2008, p.232) believe that the post-test interview should start with open-ended questions like “How did that go?” or “So, what do you think?”. This allows the participants to vent if they are feeling frustrated, but it also lets the researcher learn about the topic or problem they choose to speak of, which is often the most prominent to them. After that the researcher should start asking questions related to the Research Questions and more specific questions regarding problems with tasks the participant might have had.

There are some guidelines for questioning participants in a study. Rubin and Chisnell (2008, p.230) bring up that the interviewer should never make the participant defensive about their actions or opinions, which means that the interview should feel like a discussion amongst peers. Rubin and Chisnell (2008, p.230) go on to state that the interviewer should not react to the participant's answers in any way. This is supported by literature regarding research methodology, where the authors state that if the interviewee feels judged or criticized it might evoke a defensive attitude, and a single gesture or facial expression is enough (Patel & Davidson, 2011, p.75).

One important aspect worth remembering is that when conducting an interview, one cannot always trust the participants' answers. The reason behind this is that people often give the answer that they think they ought to give, especially if the truthful answer feels embarrassing (Nielsen, 1993, p.214).

3.2.3 Recording

According to Nielsen (1993, p.203) there is normally no need to videotape a usability test since most of the usability problems will be found anyway; it is better to spend time on more testing instead of analysing videotapes. To ensure that the participant is given the researcher's full attention during the post-test interview, though, the interview should be audio recorded. The transcript from this interview can also be used for content analysis (Rubin & Chisnell, 2008, p.236).

3.3 Study Design

With regard to the purpose of this thesis, the method selected was a usability test, since testing with real users is the most fundamental usability method and, in many senses, irreplaceable (Nielsen, 1993, p.165). Since MS Teams is not free and not available through Karlstad University, the participants in this study were logged in as guest users and invited to a fictional team within an actual company. The reason that they were logged in as guests is that they would then not be able to access any company information, and to avoid any misadventures occurring that could affect the company.

To enable richer data, users of the full version of MS Teams were invited to submit anonymous comments on MS Teams, in the form of an informal interview where they were asked to comment on what they regarded as good features and what they regarded as bad features.

3.3.1 The usability test

Since there have not been any formally published usability tests of MS Teams, the usability test in this study covered the most common tasks in the system. The test was not a summative test or a validation test, since there are no previously documented usability issues or guidelines/goals from Microsoft to base it on. Neither was it a formative test, since there is no intended opportunity for these results to further develop the product. The method is built on Nielsen's (1993) guidelines for usability tests, with some modifications to fit this thesis considering the scope and resources available.

3.3.2 Preparations

The sections below describe all the preparations made before the actual usability tests were conducted. Among these preparations were creating suitable tasks, gathering participants, constructing a test plan, pre-test questionnaire and introduction script and conducting the pilot test.


3.3.2.1 Task descriptions

When creating the tasks for this study, the overall usability and the most common tasks were in focus. Furthermore, the points made by Nielsen (1993, p.185-187) regarding tasks, see sub-section 3.1.10 Tasks, were considered when constructing the tasks. To ensure that there is no confusion regarding whether a task has been successfully completed, an SCC has been added to every task. The participant has two minutes per task, far more than is needed by someone experienced with MS Teams. The reason for this is to give the participants time to work through hindrances and potentially get frustrated, which can be more revealing than at any other time, but also to stop them before they get so frustrated that they are no longer willing to try (Rubin & Chisnell, 2008, p.55). If a task was not completed within two minutes and the participant seemed lost, confused or very frustrated, help was offered to complete the task. This help was to ensure that the participant could continue with the next task, but the task would be marked down as failed. A minimal sketch of how each task outcome could be recorded is shown after the task list. The tasks (T) are:

T1. Log on to MS Teams

o The participant will log in to MS Teams with a given a username and a password.

o SCC: The participant logged in to MS Teams.

T2. Access shared files

o The participant will access the shared files in a given channel within their team.

o SCC: The participant accessed the shared files.

T3. Open a shared file and copy the sentence

o Open a given shared folder and a given word-file within that folder and copy the sentence.

o SCC: The participant opened the correct folder and the correct word-file and copied the sentence.

T4. Send a chat message

o The participant will send the copied sentence as a chat message to a given person within their team.

o SCC: The participant sent the correct chat message to the correct person.

T5. Call a person

o The participant will call a given person.

o SCC: The participant called the correct person.

T6. Change the output source

o The participant will change to the given output source and test it.

o SCC: The participant changed the output source to the correct working one.

T7. Share the screen

o The participant will share the given screen with the person in the call.

o SCC: The participant shared the correct screen.
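The following sketch illustrates how each task outcome could be recorded against its SCC and the two-minute limit. It is an illustration only: the structure and names are assumptions made for this sketch, not part of the actual test materials.

```python
from dataclasses import dataclass

# Illustrative sketch of logging a task outcome against its SCC and the
# two-minute limit; the structure and names are assumptions, not test materials.
TIME_LIMIT_SECONDS = 120

@dataclass
class TaskResult:
    task_id: str     # e.g. "T1"
    met_scc: bool    # did the outcome satisfy the task's SCC?
    seconds: float   # time taken on the task
    assisted: bool   # help was given, so the task counts as failed

    @property
    def passed(self) -> bool:
        return self.met_scc and not self.assisted and self.seconds <= TIME_LIMIT_SECONDS

# Hypothetical example usage:
results = [
    TaskResult("T1", met_scc=True, seconds=95.0, assisted=False),
    TaskResult("T6", met_scc=True, seconds=140.0, assisted=True),
]
for r in results:
    print(r.task_id, "passed" if r.passed else "failed")
```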


3.3.2.2 Participants

According to Microsoft, MS Teams is a hub for teamwork and an app for putting together a team and working together through chat instead of just emails, and through channels instead of just file folders (Microsoft, 2020d). The product is intended for the general population, as long as they engage in some kind of collaboration through technology. To ensure that the participants are as close to the intended end-users as possible, they should all work, or study to later work, at a place where they could engage in collaboration through technology. Other things that were considered when recruiting participants were the participants' age, computer skills, and previous experience with similar technology or MS Teams. The number of participants was discussed in sub-section 3.1.5 Number of participants, and since this study conducted a less formal usability test and examines general usability issues, the number of participants in the test was five, in accordance with what Rubin and Chisnell (2008) and Nielsen (2000) state.

Due to the Covid-19 pandemic it was hard to find participants who were willing to take part in the study, since a lot of people felt uncomfortable meeting new people face to face. As stated in sub-section 3.1.6 Getting test users, family and friends can act as participants if the guidelines mentioned in that sub-section are fulfilled. The participants in this study therefore consist of friends and family and were recruited by simply being asked to participate. Table 1 presents the information from the pre-test questionnaire that every participant filled in.


Table 1. Information regarding test participants


3.3.2.3 Pre-test questionnaire

To ascertain background information about every participant that might affect their performance, a small pre-test questionnaire was constructed. The questionnaire covers the participants' age, sex, education, work, experience with MS Teams, previous experience with similar technology and computer skills, and can be found in its entirety in appendix A.

3.3.2.4 Introduction script

The introduction script was read in its entirety to every participant to ensure that they all got the exact same information. All participants were invited to ask any questions regarding the test before it began. The introduction script can be found in appendix B.

3.3.2.5 The pilot test

A pilot test was conducted before the actual testing began. In the pilot test some uncertainties in the pre-test questionnaire were discovered, which led to some changes and clarifications. No uncertainties were discovered in the introduction script, but one paragraph was added to it: the participant should carry out the task and, when they felt that they had completed it, stop using the product and wait for further instructions. Regarding the test, the tasks were all kept the same, but the pilot participant explained in the post-test interview that the fact that the tasks themselves were written in English (the pilot participant's native language is Swedish) made the tasks harder to understand. To address this problem the tasks were translated to Swedish, and the other participants were given both Swedish and English versions of the tasks.

Another problem discovered during the pilot was that the participant did not think aloud to the extent that was hoped. To address this, the Test Administrator must prompt the participant to think aloud by asking questions like “What are you thinking now?” and “Is that what you expected would happen?” (Nielsen, 1993, p.197). Asking these questions while observing and typing observations, keeping track of time for every task and explaining to participants that a task was completed and that they could move on to the next one was too much to handle all at once. It was therefore decided not to type observations and instead just observe, prompt thinking aloud and keep track of time for every task.

3.3.2.6 Test plan

Before the tests actually began, a test plan was constructed in accordance with Nielsen's (1993, p.170-171) view on what a test plan should address:

• The goal of the test is to examine the perceived usability of MS Teams, examine if there are any usability problems or particular strengths in the usability of MS Teams and examine if there are any changes that could be made to MS Teams to improve its overall usability.

• The tests will be conducted in a home office with access to two computers and microphones for both the participant and the Test Administrator during the usability test and the interviews, and will be conducted without any disturbances.

• Each test session is expected to take 30 minutes.

• The technology needed is one computer for the participant and one for the experimenter.

• The participant's system will be started but not logged in, and the computer will be connected to the Internet.

• The participants will be family and friends, for convenience, since the intended end-users are the general population as long as they engage in some kind of collaboration through technology, and since the pandemic ongoing at the time of writing made it hard to gather random people as participants.

• The number of participants will be five.

• The criteria to determine if a task has been successfully completed will be made clear by the SCC accompanying every task.

• The participants will have no aids during the test.

• The experimenter will answer questions regarding the test, the tasks and the procedures if they do not interfere with the results. If it is determined that the participant will not be able to complete the task within the given time, the participant will be given help, and the task marked as failed, to be able to move on to the next task.

• The data that will be collected comes from the experimenter observing the participant, from the participant thinking aloud and from the post-test interview.

• For each task, an SCC was identified, and whether or not the product is considered a success is based upon the SCCs.

3.3.3 Implementation

The details of conducting the actual tests are described in the sub-sub-sections below and take their foundation from section 3.1 Usability testing and section 3.2 Data collection. The sub-sub-sections describe the test environment, the procedures of the usability test, how the data were collected, and the Information letter and the Consent form the participants were notified of in accordance with GDPR.

3.3.3.1 The test environment

All the tests were conducted at the Test Administrator's home office due to the ongoing Covid-19 pandemic and Karlstad University recommending students not to visit the university. The home office was set up as a test lab in the fashion Rubin and Chisnell (2008, p.101) describe as a Simple Single-Room Setup, but without the additional observers. There was no one else in the home office and the room was quiet and secluded. The participant sat at a desk with a PC running Windows 10, and the Test Administrator sat at an angle behind the participant to overview and observe the test.

3.3.3.2 Information letter and Consent form

Before the test started, every participant read an information letter and read and signed a consent form, since their voice was recorded. The Information letter can be found in its entirety in appendix C and the Consent form in appendix D.

3.3.3.3 The test

At the start of the test session, the participant was given the pre-test questionnaire and the consent form to read and sign. After that, the introduction script was read to the participant, who was given an opportunity to ask questions. Then the participant received the information notes with the tasks and the information necessary to complete them, which can be found in their entirety in appendix E.

After that, the audio recording was started, and the test began.

As stated in the introduction script, the participant read the instructions and information for one task at a time and, when they felt that the task was completed, stopped using the product and waited for further instructions. These instructions were either help to complete the task (which resulted in the task being considered failed) or "you can now continue with the next task".

During the entire test, the Test Administrator acted as a co-tester by answering messages and calls and talking to the participants. The think-aloud method was used throughout, and the participants were often prompted to express what they were thinking.

The time taken for each task was noted, and any difficulties the participant had during the test were observed so that questions regarding these difficulties could be asked during the post-test interview.

3.3.3.4 Think out loud/participant observations

The Test Administrator conducted the observations and acted as moderator. The participants were not assisted in any way during the test procedure. To ensure that no observation was missed and no spoken thoughts from participants were forgotten, the test was audio recorded. These recordings were later listened to and all the relevant parts transcribed; the transcriptions can be found in appendix F.

The recordings were stored in a safe manner, unavailable to unauthorized persons, and are destroyed after the thesis has received a passing grade.

3.3.3.5 Post-test interview

After every usability test there was a short interview, or debriefing, of the participant to gain an increased understanding of their experience with the system. In accordance with Rubin and Chisnell (2008, p.232), the post-test interview began with more open-ended questions and then narrowed down to the specific problems found during the test. The predetermined questions were:

 How did that go?

 You seem to have had some problems with ____. Could you explain what made it hard or what your opinion is regarding that task?

 What do you think about the usability of MS Teams?

 Are there any changes you would like to see on MS Teams?

Is there anything particular about MS Teams that you really like?


4. Results and analysis

In this chapter the results from the usability tests, together with the features of MS Teams anonymously reported by users of the commercial version, are presented and analysed.

4.1 Results

Described below are the results of the usability tests, presented in two tables, together with the features of MS Teams anonymously reported by users of the commercial version.

There are also several figures that depict parts of the graphical user interface of MS Teams to further help the reader. In appendix F the individual results and the post-test interviews are presented for a more in-depth view of the results.

4.1.1 Tasks completed

Most participants were quite successful during the usability test and completed at least five out of the seven tasks. One test participant (TP), TP1, completed every task, while TP5 deviated from the other participants by failing four out of seven tasks. Table 2 shows all the tasks the TPs performed and whether each was considered successful or failed according to the SCC; a short sketch summarising the completion rates per task follows the table.

Table 2. Successful or failed tasks according to the SCC

      | T1. Log in | T2. Access files | T3. Open file | T4. Send message | T5. Call   | T6. Change output | T7. Share content
TP1   | Successful | Successful       | Successful    | Successful       | Successful | Successful        | Successful
TP2   | Failed     | Successful       | Successful    | Successful       | Successful | Failed            | Successful
TP3   | Failed     | Successful       | Successful    | Successful       | Successful | Failed            | Successful
TP4   | Successful | Successful       | Successful    | Successful       | Successful | Failed            | Successful
TP5   | Failed     | Successful       | Successful    | Failed           | Successful | Failed            | Failed
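As a small summary of Table 2, the sketch below computes the completion rate for each task from the success/failure matrix above. The data are taken directly from Table 2; the Python script itself is only an illustration and was not part of the study.

```python
# Minimal sketch: per-task completion rates computed from the data in Table 2.
tasks = ["T1. Log in", "T2. Access files", "T3. Open file", "T4. Send message",
         "T5. Call", "T6. Change output", "T7. Share content"]

# True = Successful, False = Failed; one row per test participant (TP1-TP5), as in Table 2.
results = {
    "TP1": [True, True, True, True, True, True, True],
    "TP2": [False, True, True, True, True, False, True],
    "TP3": [False, True, True, True, True, False, True],
    "TP4": [True, True, True, True, True, False, True],
    "TP5": [False, True, True, False, True, False, False],
}

for i, task in enumerate(tasks):
    successes = sum(results[tp][i] for tp in results)
    print(f"{task}: {successes}/{len(results)} participants successful")
# T1 (2/5) and T6 (1/5) stand out, matching the problems with logging in and
# changing output source discussed below.
```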

4.1.2 Problems discovered through the usability tests

In total, five usability tests were conducted with participants who were either studying at Karlstad University or held a degree from a university or equivalent education. Table 3 shows each task, the mean time it took the participants to complete it and every usability problem discovered in connection with that task; a small worked example of how a mean time is computed is shown after the table. Each individual usability test, all the usability problems discovered and the time it took the participants to complete each task are available in appendix F.


Table 3. Usability problems discovered through the usability tests (task, mean time and observations)

Task 1. Log on to MS Teams (mean time 02:15)
TP1 and TP5 hesitated before clicking sign in since they had not yet entered the password, which the user is prompted to enter in a new box. TP5 also wrote the entire password without being in the text field in the new box where the password is entered. TP2, TP3 and TP5 had trouble typing some of the symbols in the password; of these three, only one found the password eye after looking for a while. (The screenshots in this row show the sign in box, the password box and the password eye.)

Task 2. Access shared files (mean time 00:53)
TP1, TP2, TP3 and TP5 did not realize that they were in the Current window, which is where they could find the shared files. They also tried pressing the channel name and the three dots next to the channel name, see Figure 4.

Task 3. Open a shared file and copy the sentence (mean time 00:36)
No problems were encountered, except for TP5 who went back and read the task description, hesitated for a while, but eventually copied the sentence.

Task 4. Send a chat message (mean time 01:56)
TP1 and TP5 watched the video which explained how to start a new chat; TP1 realized that it explained what he needed to know and found it useful. TP1 and TP2 pressed the start new chat symbol, see Figure 5. The other three participants, TP3, TP4 and TP5, searched in the search bar, see the top of Figure 4, for the name of the person they were supposed to send the message to. TP5 took over 4 minutes to complete the task and tried changing from Chat to Contacts under Chat in the main menu. (The screenshots in this row show the search bar at the top of the product and the switch from Chat to Contacts.)

Task 5. Call a person (mean time 00:41)
TP2, TP3 and TP4 pressed the call symbol straight from the chat box, see Figure 6. TP1 pressed the back arrow, found the person they were supposed to call from Activity and called them by pressing the person's avatar. TP5 pressed the back arrow, looked for a while and then navigated back to the chat box and called the correct person. (The screenshot in this row shows the person's avatar.)

Task 6. Change output source (mean time 02:27)
TP2, TP3, TP4 and TP5 had trouble finding the word "settings" or the cogwheel symbol. TP1 and TP3 found it from Activity, see Figure 7, and TP2 found it from the user avatar, see Figure 8. TP3, TP4 and TP5 did not complete the task and asked for or were given help. TP4 and TP5 needed help finding settings but were then able to complete the task on their own within the settings menu, see Figure 9. TP3 found settings but asked for help to complete the task within the settings menu, see Figure 9. TP2 completed the task, but it took over two minutes and the task was therefore considered failed. TP1 completed the task without help and within the given time. When in the correct place within settings, see Figure 10, the participants seemed unsure whether the task was completed or not and hesitated or had to be prompted to try it.

Task 7. Share the screen (mean time 01:15)
TP1, TP2, TP3 and TP4 completed the task, but it was obvious that they did not recognize the share content symbol, see Figure 11, since they had to hover over the symbols to find the correct one. TP2 and TP3 also had trouble finding their way back to the call, since it minimizes after settings has been opened, see Figure 12.
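As a worked example of how the mean time column in Table 3 is derived, the sketch below averages per-participant completion times for one task. The individual times used here are hypothetical placeholders (the real per-participant times are in appendix F); only the calculation itself reflects how a mean time such as 02:15 is obtained.

```python
# Minimal sketch of computing a mean task time as reported in Table 3.
# The per-participant times below are hypothetical placeholders; actual times are in appendix F.
hypothetical_times_seconds = {"TP1": 70, "TP2": 150, "TP3": 160, "TP4": 120, "TP5": 175}

mean_seconds = sum(hypothetical_times_seconds.values()) / len(hypothetical_times_seconds)
minutes, seconds = divmod(round(mean_seconds), 60)
print(f"Mean time: {minutes:02d}:{seconds:02d}")  # 02:15 with these placeholder values
```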

Figure 4. Current window and shared files


Figure 5. Chat symbol, start new chat symbol and video

Figure 6. Call symbol in chat box


Figure 7. Settings from activity

Figure 8. Settings from user avatar.


Figure 9. Menu inside settings

Figure 10. Settings in Devices


Figure 11. Share content symbol

Figure 12. Minimized call at the bottom right corner of the screen


4.1.3 Remarks from the post-test interviews

In this sub-section the comments and remarks given by the participants during the post-test interviews are presented. Transcriptions of each individual post-test interview can be found in appendix F.

All the participants stated that they felt that the test went well, and that the product had high usability.

TP1, TP2 and TP3 suggested some changes to the product. TP1 wanted the field for entering the password in the same box as the field for entering the username, colour changes to the content in the Current window and a confirmation button when changing output source.

TP2 wanted to see some sort of contact list, and TP3 wanted a tab called "ongoing call", or the call box fixed in a corner, to make it easier to navigate back to calls, as well as bigger or centred text in the Current window box.

TP2 claimed that the password eye symbol was weak but that she liked the share content function.

TP3 stated that she looked for a show-password symbol but could not find it. She also claimed that navigating to settings to change the output source was hard and that she would like to see the settings symbol in a more obvious place. She also stated that the fact that you can call, send messages and share files in one place is a good thing.

TP4 stated that he felt that MS Teams was easier to use than Skype and that he liked the share content function. He also claimed that the location of the settings symbol was logical to him and similar to other programs he had used before, even though he could not find it. He stated that he did not recognize the share content symbol. He also stated that MS Teams has lots of great functionality, that he liked the search bar and that the chat conversations are visible.

TP5 stated that he looked for something that would show the password but could not find it. He also claimed that he recognized the share content symbol, contradicting what he said during the test. He also mentioned that the product could be useful for communication, sharing content and lectures, and that he liked the share content function and how the product was easy to navigate and straightforward.

4.1.4 Anonymously reported features of MS Teams

Users of the commercial version of MS Teams were invited to advise, anonymously, on what they regard as good and bad features of MS Teams. In total, three persons (P) made comments, and their responses are presented in Table 4. They do not bring up the same problems or good features as the participants in this study, which can be explained by the fact that they use the product every workday. Their answers are part of the study to get a feeling for what actual users of MS Teams think and feel about the product.

References
