

Linköpings universitet SE–581 83 Linköping

Linköping University | Department of Computer Science

Master thesis, 30 ECTS | Datateknik

2017 | LIU-IDA/LITH-EX-A--17/038--SE

Designing for usability of 3D configuration in E-commerce

Interactive design of 3D in web applications

Design för användbarhet i 3D-konfigurering inom e-handel

Alfred Axelsson

Supervisor: Anders Fröberg
Examiner: Erik Berglund



Copyright

The publishers will keep this document online on the Internet – or its possible replacement – for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.

© Alfred Axelsson


Abstract

Mass production of consumer products is both wasteful and limits the consumers' power to influence the design of what they are buying. When shopping for a product, the customer must choose between a range of specific products with limited variations. What if the customer could make all the design choices, creating and buying the product according to his or her own needs?

A 3D product generator holding a configurable model of a product was created to replace static content in online stores and give creative power to customers. This work aimed at creating an effective 3D product generator by evaluating how users experience the design of and interaction with it, and finding general design goals when introducing interactive 3D content in static 2D environments.

A prototype of a 3D product generator in a generic online storefront was implemented in two iterations, improving on and evaluating the design through usability testing. The evaluation of the final prototype suggested that the interface was indeed effective in both design and interaction. This work concludes that user feedback is crucial to creating a successful user experience, which in turn is important when creating interfaces for product configuration.


Acknowledgments

First, I would like to thank everyone at SkyMaker AB for supporting me, with special shout-outs to my mentor Jonathan Brandtberg for his help throughout the project and our discussions, and to Kristofer Skyttner for giving me this opportunity. Furthermore, I want to thank Erik Berglund and Anders Fröberg for their supervision and help in completing this thesis.

I also want to thank everyone who helped me by participating in the usability tests. This could not have been done without your help, thank you.

Finally, I want to thank my family and the friends who have supported me both during my work on this project and during the entirety of my time at Linköping University.


Contents

Abstract
Acknowledgments
Contents
List of Figures
List of Tables
1 Introduction
  1.1 Aim
  1.2 Research questions
  1.3 Delimitations
  1.4 Project Context
2 Theory
  2.1 Web design practices
    2.1.1 Interface design
    2.1.2 Visual Hierarchy
    2.1.3 2D vs 3D
  2.2 Usability testing
    2.2.1 Planning before testing
    2.2.2 Collecting and analyzing the results
  2.3 Focus groups
  2.4 Confidence interval
3 Method
  3.1 Implementation
    3.1.1 Prototyping
    3.1.2 Iterating the design with usability testing
    3.1.3 Formative evaluation of the prototype interface
  3.2 Evaluation
    3.2.1 Recruiting participants
    3.2.2 Test session
    3.2.3 Data collection and analysis
  3.3 Motivation
4 Results
  4.1 Implementation
    4.1.1 Wireframes
    4.1.2 Initial prototype
    4.1.3 Formative usability test
    4.1.4 Final prototype
  4.2 Evaluation
5 Discussion
  5.1 Results
    5.1.1 Prototype
    5.1.2 Formative results
    5.1.3 Summative results
  5.2 Method
    5.2.1 An iterative process
    5.2.2 Usability testing
    5.2.3 Sources of information
  5.3 The work in a wider context
    5.3.1 Societal aspects
    5.3.2 Ethical aspects
6 Conclusion
  6.1 Future work
Bibliography
A Appendix A
  A.1 Wireframes
B Appendix B
  B.1 Test plan formative test
  B.2 Test plan summative test
C Appendix C
  C.1 Prototype 1
D Appendix D
  D.1 Prototype 2

List of Figures

3.1 Overview of the method
4.1 Final Wireframe
4.2 Screenshots of the first prototype (detailed images can be found in appendix C)
4.3 Task times from the formative study
4.4 Task success from the formative study
4.5 Screenshots of the final prototype (detailed images can be found in appendix D)
4.6 Average number of errors per task from summative study
4.7 Percentage of users that made an error per task from summative study
A.1 First wireframe
A.2 Second wireframe
A.3 Final Wireframe in more detail
C.1 Screenshot of Prototype 1 (initial screen)
C.2 Screenshot of Prototype 1 (modified screen)
D.1 Screenshot of Prototype 2 (initial screen)

List of Tables

3.1 Characteristics of participants in formative study
3.2 Characteristics of participants in validation test
4.1 List of issues found during the formative tests


1 Introduction

Millions of users interact with web applications every day. Websites are designed with a purpose and are adapted to the content, the functionality and, hopefully, their users. User Experience (UX) is the experience of this interaction. T. Tullis and B. Albert [19] believe that the user experience is defined by three characteristics. These can be summarized as the observable or measurable experience of a user interacting with an interface. Interface design, how interaction is performed and how the system responds are some aspects of creating the user experience.

When shopping for a product online, the customer has to choose between a range of specific products with various features and designs as well as different prices. Each product might also come with varied measurements and colors. But the choices are still limited to the range of products the producer chose to manufacture.

What if the customer could make all these choices him- or herself using a 3D product generator? First choosing a base product, then designing it using the available parameters and finally adding extra features to create the exact product that he or she was looking for. To enable this, we must make sure that it is easy for the customer to understand how to interact with the product generator and utilize all its features.

1.1 Aim

This work focuses on evaluating how users experience the design of and interaction with a 3D product generator in an online store. The goal is to create an effective user interface for the generator by iterating the design with usability testing. The process aims at finding general design goals when creating usable interfaces and introducing interactive 3D content in static 2D environments.

1.2 Research questions

By implementing a design based on research and then iterating it with usability tests, the design will be evaluated and used to answer the following research question.


1. How should the interaction with a 3D product generator in a web application be designed to achieve high effectiveness in terms of usability, allowing users to easily reach their goals and complete their tasks?

1.3 Delimitations

To limit the scope of this work, the following delimitations have been defined. This allows for a more precise focus and helps the decision process during implementation.

• The implementation only concerns the interface of a product page on a web store. The web store will not be implemented with full functionality.

• The implementation does not have to be browser independent.

• Color choices are not among the important aspects of the design.

1.4 Project Context

This document concerns a master thesis project in computer science at Linköping University. The project was carried out in cooperation with the company SkyMaker AB (www.skymaker.se), who encouraged and supported the project. SkyMaker is a small company situated in Linköping, Sweden. They develop 2D and 3D generators to help their customers configure their products.


2 Theory

The design of online stores has been shown to influence sales [20]. Users' emotions are influenced by their perceptions of the usability and aesthetics of the store; these emotions connect design attributes to whether they approach or avoid the store [15]. As this work focuses on the design of and interaction with an e-commerce interface using 3D elements, and on evaluating the usability of the design, the related work is spread across different fields of research: the design of user interfaces, web design, the use of 3D in websites and, most importantly, usability testing.

2.1 Web design practices

Eccher [7] details that studies have shown that users visiting a homepage spend only 10 to 20 seconds there, and suggests that usability is the key to conveying a message and keeping the user at the site. There are three areas of web design that impact the usability of a web site: architecture, layout and navigation.

The architecture of the site is the way pages and sections of the site are connected and how the user flow is designed. There are some things to consider when designing the architecture: using a site map to present the architecture; using a naming convention that users are familiar with; reducing the number of clicks needed for users to reach the content they are looking for; and not linking out of sections unexpectedly, which could confuse users. Breadcrumb techniques can help reduce confusion and give users a sense of location on the site. A flat architecture, where everything is cluttered and confusing for users, should be avoided in favor of a cascading one where content is organized in subsections [7].

The layout of the site should enable the user to locate information easily. It mainly refers to how elements on the site are positioned and can be split into two areas: scrolling and positioning. Scrolling is widely debated and there are pros and cons to it. Notably, a page with scrolling can fit more content, while some users might prefer clicking through content. Horizontal scrolling, however, is never acceptable according to Eccher [7] because it contradicts usability standards and requires more motion than scrolling vertically. Positioning content is about where on the page different items are placed: the menu, which is typically positioned on top or to the left, and the header, which is sometimes combined with the menu and could contain the logo, functional links and ads [7]. The rest of the space can be filled with different items: a feature area, content, a sidebar (often to the right) and a footer. There can also be ads displayed in these elements. [14]


Navigation is key when creating effective and usable websites. It should be intuitive to help users find what they are looking for. The menus should be consistent throughout the pages of the site. They should stay in the same locations, their content should be static and only change if subcategories are added when a user navigates, and there should be a limited number of menus, preferably no more than two [7]. When navigation is good it will go unnoticed; it helps users without them having to think about it. If the navigation doesn't work as expected, users will start to notice it [2]. The structure of a menu can be either horizontal or vertical. With a horizontal structure the width of the screen needs to be considered, while there is more room for content to stretch the width of the screen. For a vertical structure the concerns are the opposite: the horizontal room for content is smaller, while it is easier to add many items. The design of menu items is also important. Using text is preferred over images to make it clear what the navigation does. However, when designing the items, their length must be considered so that all items fit inside the length of the menu [7]. The menu items also need to be different enough not to confuse users [2]. All navigation, links and menus, must look like actual navigation to make it clear that they are interactive [2].

2.1.1 Interface design

Interface design is the design of the interactive elements that connect users to the functionality. When realizing interaction with the functionality, the interface elements are selected and arranged in a way that is intuitive and easy to use. This process is part of interface design because it enables users to do things [9]. The design of the interface should focus on the actions most users are likely to take and not give the same priority to rarely used functionality. In a successful interface, users should notice the important aspects while not seeing the unimportant parts. Some of the standard interface elements are: buttons, checkboxes, input fields and lists. [9]

2.1.2 Visual Hierarchy

The concept of hierarchy is among the most important aspects of design. The visual hierarchy within a design is the sequencing of its elements to define the most important element, the second most important elements, and so on.[14]

One tool used to draw the attention of users is contrast. Contrast between design elements of the interface can make the essential items stand out and catch the eye of users. Users pay attention when things are different, which won’t happen with a featureless interface without contrast.[9]

B. Miller and R. Black [14] tell us that information of almost any type can be broken down into three to four levels. More levels of importance make it difficult to contrast the differences between levels. A hierarchy is created through a design system, and the system can be developed through different methods. Alignment, shape, size, scale, color, texture and depth can all be used to define a hierarchy between design elements.

Alignment

In a system of alignment, the elements are logically grouped through their meaning or function. The hierarchy is created through elements that break this system. These elements are given more visual value as they don't fit into a cohesive unit with other elements. Part of organizing design elements is the organization of space. Intentionally created space, so-called white space, is essential when designing relationships between elements to define a system of hierarchy. [14]


Scale

Using scale to have dominant elements in a design is critical when creating hierarchical sequences. Scale is relative, which means elements of different sizes need to be included for comparison to get a dynamic feeling of scale. Elements can also stretch out of borders or outside of a page to enhance the sense of scale. [14]

Color

Color has the instructive qualities to guide, direct and persuade users while also being able to set a mood or tone for the design, which appeals to users' emotions. As color signifies meaning for people it can be used as a powerful tool in design. Colors can be recognized quickly, making them important when forming relationships between elements [14]. A company generally has a color palette used in all material made by the company. The colors in a palette are chosen to work well together and not to compete for the visual attention of users. [9]

Images

Images or iconography can be used to replace text, e.g. descriptions, and enable users to more easily get information from the layout of a page. All images on a page should be intentionally put there to add something. Too many images might prevent a good user experience as they add to the size of the web page. [14]

Texture

Texture can be anything from shiny and smooth to rough and dull. It helps connect users to the page’s content through the sense of a tactile experience. Type, images and similar elements on a page combine to create an overall texture that may be intended or not. This is present in every design and can be perceived subconsciously by the users.[14]

Depth

Creating the impression of depth can be done by overlapping design elements, adding gradient colors and shadows, or using three-dimensional design elements. Depth and dimension in a page make it more realistic, add visual interest and draw users into the design. [14]

2.1.3 2D vs 3D

In this work, photos are normally the two-dimensional content used in web design that could be replaced with three-dimensional content. Photos can be used to illustrate products, demonstrate what products do and answer users' questions at a glance. This makes it possible for users to understand what they are buying without reading a lot of text. In their research, J. Allen and J. Chudley [2] discovered that users want to see large photos to enable them to immerse themselves in the photos and examine details of the products. Content photos should be made as large as possible, or enlargeable if space is limited. [2]

Research has been conducted to examine how users' online shopping experience is affected by the use of 3D environments in the online store. The 3D environment in this case was a virtual store where the user could walk around and interact with books. The study proposes that the user experience is greatly influenced by the use of a 3D environment instead of a 2D storefront. The 3D store appeared harder to use, which suggests that one should be cautious when introducing new technology to online stores. [20]


2.2 Usability testing

Usability tests can be used to test almost anything from business processes to mobile applications. They test the usability of the product and not the users themselves. Conclusions can be drawn by observing how test users interact with and use the product and what problems arise [2]. By performing usability tests, you gain an understanding of your users: what pleases or frustrates them, what their goals and expectations are, and their experience with your product [4]. With this information the design process of your product can focus on the users, to support them in their goals and how they want to use the product. "We are testing the product, not you." is the central mantra in usability testing.

Usability can be defined as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use", from the International Organization for Standardization, ISO 9241-11. This definition names the measures of usability: effectiveness, efficiency and satisfaction [4]. These measures can be specified further and described as follows [4, 16, 19]:

• Usefulness - Determined by the degree to which a user's goals can be achieved by using the product. Usefulness estimates whether users will use the product or not.

• Effectiveness - Concerns the ease with which users reach their goals and how completely a task can be performed. Effectiveness can be quantified using error rate.

• Learnability - Involves how well a user has learned to use the product after a fixed time of training. This is a part of effectiveness.

• Efficiency - The speed with which a task is performed or the effort required to complete it. For the most part, efficiency is in some way measured in time.

• Satisfaction - A user is more inclined to use, and performs better with, a product that grants satisfaction. It is the extent to which using the product was a good experience and concerns the user's perceptions of the product. User satisfaction can be determined by asking users about their experience.

2.2.1 Planning before testing

The first part of the test process is planning, regardless of the test method. There are two main parts of planning for a usability test: choosing the test method and writing the test plan. The test plan defines the details of the test and can be used as support during the test process. Below, two kinds of usability studies are described, followed by an explanation of a test plan.

Formative studies

Formative studies are usually small informal tests that reveal what users like and produce a list of what should be fixed during development [4]. The main goal when performing formative studies is to improve the design concepts early, which is why they are usually carried out before the development of the design is finalized [16, 19]. The nature of these kinds of studies makes them suitable for iterative development [19]. A design can be evaluated, improved upon and evaluated again in a reasonably short amount of time. If the developers have incorrect assumptions about their users and their users' goals, the product will most likely suffer usability problems in the end [16]. The earlier the formative tests begin, the more impact they will have on the design [19]. This also implies that fewer assumptions have to be made.

When planning for a formative study it is important to identify which subgroup of the user population is to be tested. By recruiting participants from the same group based on a user profile, the results will be better. Different groups have different goals and needs which must be identified to interpret the test results. If the test is small, with around five test participants, one subgroup is enough. With a larger test, more subgroups can be used while reducing the number of participants in each group. The findings between tests with different subgroups will likely overlap. [4]

During formative testing the product's functionality does not need to be complete. Instead a prototype or model can be used to represent the product's basic design and functionality during testing. The prototype could be anything from a paper representation to a user interface [16]. To obtain useful results from small tests of this kind, the participants are given specific tasks to perform [4, 16]. The tasks represent realistic scenarios the participants can solve to accomplish some goal. Conducting all tests with the same set of tasks makes it possible to see recurring problems and common usage patterns [4]. Participants can also review the product and answer questions under guidance, either when it is too early in development for tasks, or while performing tasks. Either way, formative testing usually requires the moderator to interact considerably with the participant [16]. Much of the test data collected is of a cognitive nature and requires the moderator to explore the participant's thoughts. This is typically done by encouraging the participants to "think aloud" [4, 16]. This adds an additional dimension where users' thoughts, reactions and feelings are expressed, which makes it easier to understand their experience and how to make it better [4]. The most suitable testing technique depends on the stage of the development process and how refined the prototype is [16].

Summative studies

Summative studies are larger compared to formative ones and usually performed later in the development process [4]. Some literature separates summative tests and verification tests. J. Rubin and D. Chisnell [16] define the summative test as a more mature version of the formative test, performed early to midway through the development process. Their description of the verification test instead holds a definition similar to the one used for summative testing throughout this work.

The objective or goal of summative studies is to evaluate how well the product compares to its predetermined objectives or some benchmark [16, 19]. Their focus is to evaluate against some set of criteria and not to find ways to improve the product, as is the case with formative studies. Summative testing can also be used to compare features of two or more products [4]. Summative tests are often conducted when development of a product is completed or close to completion. They require a large number of participants, as the results are used to generate metrics. Success on tasks, failure on tasks, average time on task, completion rates and error rates are some examples of measures that can be obtained [4]. Not all larger studies are done in late stages of development. In some cases, the validity of small studies is challenged and more problems need to be uncovered by conducting larger studies. Specifically, this is true for large, complex websites where the user base consists of a variety of users, and for systems where substandard usability could result in personal risk or injury. [4]

The test plan

The test plan is the foundation of the test process and the documentation to support decisions about what to test, characteristics of participants, etc. [4]. It describes the process and communicates what has been decided and what resources are required to conduct the test [16]. A test plan typically consists of the following parts.

• Purpose or goal of the test. Explains the reason for conducting the test and any issues it will focus on. Can be defined at a high level, as later parts go further into detail [16].

• Research questions further describe the issues that will be resolved during the test. These questions must be accurate and clear to direct the research and describe what the test wants to find out. The answers to the research questions must be measurable or observable [16].

• Characteristics of test participants are specified in a user profile to describe the participants that best represent the end users or a subgroup of them [16]. These characteristics are used as criteria when recruiting test participants and should be as specific as possible. It is also important to define how many test users are required to get valid results [19]. When testing with more than one subgroup of users, each subgroup is described with its own user profile [4].

• Methodology that describes how the research and test session will be performed. This part should also describe the test's type and include what will be done and how much time each step will require. [4, 16] With a detailed description of the method the credibility of the results will be solidified [16].

• A task list of what participants will perform when testing. The tasks should be sequences of interactions future users will perform to achieve their goals when using the product. [4, 16] For each task, there should be a short description, the materials needed and which state the product needs to be in, a criterion for successful completion and any benchmarks that will be collected. [16]

• Test environment and equipment required to perform the test. This details what is needed to simulate realistic conditions and scenarios that correlate to actual situations where the product is expected to be used.[4, 16]

• Evaluation methods that describe what data will be collected and how it will be done. All data of interest, both quantitative measures and participants' preferences, should be listed [16]. Any questionnaire that will be used to gather data should also be included [4].

• Deliverables that explain how the results of the test will be delivered. Outlines what will be presented and dates for delivery. [4, 16] This part is particularly important in a large organization where different teams and management rely on the results of the test.

2.2.2 Collecting and analyzing the results

There are two types of data that can be collected from usability tests: performance data, the quantitative measurements, and preference data, collected from observations made during tests and from questionnaires. What is collected depends on the goal of the performed test, and the analysis is also done differently depending on the type of data and the research questions.

Preference data

The data collected from observing participants and using a "think aloud" process is known as qualitative data or preference data, as it describes what users feel when interacting with the product. These data are primarily collected during formative studies. If the design process is iterative with studies in each iteration, the easiest way to analyze these findings is by counting the number of issues. The number of unique problems, the frequency of issues per participant or the frequency of issues per task can be used to compare between iterations and determine whether the usability is improving or not. The frequency of participants that experience issues, or issues experienced in different areas of design, can also be used to compare, but is even better suited to finding which areas need the most work to improve the usability. [19]

Preference data can be collected in a multitude of ways, e.g. observations, questionnaires, participants' comments and your interpretation of their behavior. By comparing data from separate sources and looking for consistencies and inconsistencies, the strength of the findings can be assessed. This process is called triangulation. Triangulation might help discover conflicts between different data, where something that seemed an issue when observing actually wasn't as important to the user. Triangulation can also be used to compare between quantitative and qualitative data. [4]

Task success

In the field of usability, task success is the most common metric. It can be calculated for any kind of task for any kind of product that is tested. Tasks are defined with a clear end state, a goal, which is used to determine success. Task success can be either binary or defined by levels of success. [19]

Binary success has two results: success or failure. An average success rate from binary success can be calculated per task or per participant.[19]

If succeeding with part of a task is valuable, levels of success can be identified and used to measure the success rate. These levels must also be defined before conducting tests, as subtargets of achieving full success. [19]

Time on task

The time a user requires to complete a task, time on task or task time, is used to measure the tested product's efficiency. Time on task is defined as the time from when the user starts performing a task until the task is successfully completed. [19]

Time on task is commonly analyzed by task, by calculating the average time spent on the task across all participants. The median or geometric mean could also be calculated instead of the mean, depending on whether the collected data seem biased or there are potential outliers. [19]
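As a rough illustration (not code from the thesis, and with hypothetical sample data), the three averages mentioned above could be computed like this:

```javascript
// Minimal sketch: summary statistics for time-on-task data, with the
// geometric mean as a skew-resistant alternative to the arithmetic mean.
function mean(times) {
  return times.reduce((sum, t) => sum + t, 0) / times.length;
}

function median(times) {
  const sorted = [...times].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function geometricMean(times) {
  // Average the logarithms, then transform back.
  const logSum = times.reduce((sum, t) => sum + Math.log(t), 0);
  return Math.exp(logSum / times.length);
}

// Hypothetical example: task times in seconds across participants.
const taskTimes = [21.5, 48.0, 31.2, 65.9];
console.log(mean(taskTimes), median(taskTimes), geometricMean(taskTimes));
```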

Errors

Error rate, the number of errors made when interacting with a product, can be used to define the product's effectiveness [16]. An action that has a negative impact on success is an error. But errors are not typically defined as one thing; instead there are several types of errors. There are errors of omission, when you leave something out or fail to take an action required to reach a goal, and errors of commission, doing something unnecessary that does not contribute to reaching the goal. [19]

Error data is most often organized by task. If a given task presents a single error opportunity the data is binary: no error or one error. Analyzing binary error data is often done by task. The percentage of participants who experienced an error on a task can be compiled for each task. It is also possible to analyze the error rate by type of user and view the number of errors made by users from different user groups. This is typically useful when comparing results from different user groups that tested different prototypes or products.

If a given task presents multiple error opportunities the data is analyzed differently. One way is to calculate the error rate for each task by dividing the total number of errors for a task by the total number of error opportunities. Because different tasks have varying numbers of opportunities for errors, directly comparing error frequencies against each other could prove misleading. Another way to compare different tasks would be to calculate the average number of errors made by participants for each task. This also gives an indication of how a typical user would perform using the product. Calculating averages also reduces the bias of extremes, where some users might contribute most of the errors. In some cases, it might be important to consider how severe an error is. Applying a severity level to errors and weighting them accordingly would make it possible to make a fair comparison of error frequency or error rate. [19]
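The two summaries described above could be sketched as follows; this is illustrative code, not from the thesis, and the sample data is hypothetical:

```javascript
// Minimal sketch: two ways to summarize multi-opportunity error data per task.
function errorRate(totalErrors, errorOpportunities) {
  // Total errors observed on a task divided by total error opportunities.
  return totalErrors / errorOpportunities;
}

function averageErrorsPerParticipant(errorsByParticipant) {
  // errorsByParticipant: number of errors each participant made on the task.
  const total = errorsByParticipant.reduce((sum, e) => sum + e, 0);
  return total / errorsByParticipant.length;
}

// Hypothetical example: 12 participants, 3 error opportunities each.
const errors = [0, 1, 0, 2, 0, 0, 1, 0, 0, 3, 0, 1];
const totalErrors = errors.reduce((sum, e) => sum + e, 0);
console.log(errorRate(totalErrors, 12 * 3));
console.log(averageErrorsPerParticipant(errors));
```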

Sometimes a test will see if the product performs to some certain level. With error rate this would be a threshold, a limit on how many errors are acceptable. The threshold might differ between user groups or different tasks, and it should be established before evaluation. By simply comparing the results to the threshold it is possible to determine if the product is as effective as expected or if it needs more work. [19]

2.3 Focus groups

Focus group studies are conducted as a series of discussions, created to collect participants' perceptions of some areas of interest of the evaluated product, service, topic, etc. [12]. The use of focus group discussions is a qualitative research method where a group of people, the focus group, participates in an interactive discussion that focuses on specific issues in the areas of interest [11].

Focus groups can be used as an alternative or complement to usability testing when evaluating the user experience of a system.

2.4 Confidence interval

Confidence intervals are used frequently in research to estimate a range containing the unknown, true population parameter. A confidence interval is calculated for some sample data using some confidence level, typically 95% (other typical values include 99%, 90%, 80% and 85%). The confidence level determines how confident we are in the method of generating the confidence intervals. A calculated confidence interval either contains the true parameter or it does not. [18]

There are several formulas for calculating a confidence interval for sample data. If the data is binomial, as is the case when there is only one error opportunity or when measuring completion rates, the most commonly used method is the Wald method. The formula for the Wald method is seen in equation (2.1), where p is the sample proportion, n is the sample size and z_{\alpha/2} is the critical value of the normal distribution for the defined level of confidence.

p \pm z_{\alpha/2} \sqrt{\frac{p(1-p)}{n}} \qquad (2.1)

Intervals created using this method are inaccurate at small sample sizes or when the results are all close to 0 or 1. To create good intervals a large sample size is required. [18, 17]

Another formula is the "Exact" method. It was created to work for all sample sizes and to guarantee that the confidence interval provides at least 95% coverage. However, exact intervals suffer from being too conservative; their intervals become larger and cover more than the defined confidence level. This is especially true with small sample data. [18, 17]

A third method is the adjusted-Wald method. For 95% confidence intervals this method is really the same as the Wald method, but requires the addition of two successes and two failures to the observed sample data [18, 17]. More precisely, this means adding two to the numerator and four to the denominator, which is derived from the critical value of the normal distribution, 1.96 for 95% intervals [18]. This is done in equation (2.2), where x is the number of completions and n is the sample size. By inserting p = p_{adj} into equation (2.1) the confidence interval is calculated.

p_{adj} = \frac{x + z^2/2}{n + z^2} = \frac{x + 1.92}{n + 3.84} \qquad (2.2)

The adjusted-Wald method requires more work to calculate compared to the normal Wald method and is not used as much. However, research has shown that this method creates good intervals for most completion rates, even close to 0 and 1 [18, 17, 1]. At small sample sizes the method also has improved accuracy compared to the standard Wald method and the exact method. While, for larger sizes, adding two successes and two failures doesn't do much to help performance, it also does nothing to harm the process, either. [18]


When creating confidence intervals for sample data that is continuous or from some rating scale, the t-distribution is recommended. The distribution works almost the same as the normal distribution except that it considers the size of the sample data: it makes intervals wider when the sample size gets smaller, to adjust the estimate of the population. For larger sample sizes the confidence intervals using the t-distribution converge on those using the normal distribution. J. Sauro and J. Lewis [18] recommend using it for all sample sizes as it provides the best intervals regardless.

\bar{x} \pm t_{\alpha/2} \frac{s}{\sqrt{n}} \qquad (2.3)

The formula for creating t-confidence intervals can be seen in equation (2.3), where \bar{x} is the sample mean, n is the sample size, s is the standard deviation and t_{\alpha/2} is the critical value of the t-distribution.

3 Method

The method used to create and implement the interaction with, and design of, the interface, as well as the evaluation of said interface, is presented in this chapter. During implementation, an iterative method was used to improve the design using usability tests. The initial design could then be enhanced and evaluated using a more extensive usability test. An overview of the method can be seen in figure 3.1.


3.1 Implementation

Implementation of the interface was carried out using established technology for web development: HTML5, JavaScript, CSS and the JavaScript graphics library three.js. Development was performed using the WebStorm IDE (Integrated Development Environment) from JetBrains (https://www.jetbrains.com/).

HTML5

The standard language used to create web pages is Hypertext Markup Language, HTML. HTML5 is the latest version of HTML [5]. HTML allows web developers to structure and present the contents of the web page. [21]

JavaScript (JS)

JavaScript is a programming language commonly used to create functionality in the web browser. JavaScript is especially good for controlling the content of the document displayed on a web page. It can also be embedded in HTML as scripts. In the context of the web browser it is also called client-side JavaScript, as it is run in the client's browser and not on a server. [8]

CSS

Cascading Style Sheets (CSS) is a language used to define the visual style of elements defined in HTML. As HTML defines the structure and presentation of a web page, CSS controls the visual presentation of images, text and other elements. Sizes, fonts and colors are some of the design items CSS controls. [13]

three.js

Three.js is a JavaScript library that enables developers to easily create 3D scenes for web pages. It provides an extensive set of functions that in turn use WebGL to render the scenes [6]. WebGL is a graphics API (application programming interface) for advanced web-based 3D using JS and HTML5. [3]
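As a hedged illustration of the kind of scene setup three.js enables (this is not the thesis's code, and it assumes a modern module build of the library), a minimal lit scene might look like this:

```javascript
import * as THREE from 'three';

// Minimal sketch: a three.js scene with a lit cube, rendered to a canvas
// appended to the page.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 1000
);
camera.position.z = 5;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A simple box mesh with a material that reacts to light.
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial({ color: 0x2288ff })
);
scene.add(cube);

// Directional light plus ambient fill, similar in spirit to the lighting
// described for the prototype in chapter 4.
scene.add(new THREE.DirectionalLight(0xffffff, 1));
scene.add(new THREE.AmbientLight(0x404040));

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}
animate();
```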

3.1.1 Prototyping

Initial focus in the project was the creation of a prototype interface. A prototype is an early sample or model of a product. It is built to be tested and improved upon until it’s in a state where the actual product can be developed from it.[10] In this project the initial prototype was created using wireframes. Wireframes are simple schematics of the page that depict the layout of its information, navigation elements and interface elements [9]. They establish the simple basics of the page’s visual design and are used early in the design process to convey the basic shapes and form of the web store. The wireframes were evaluated by asking for input from colleagues at SkyMaker and the resulting design was implemented using the technologies listed in chapter 3.1.

3.1.2 Iterating the design with usability testing

The user interface design was improved in iterations using usability tests. A formative study was conducted on the prototype to improve its initial design, while the final design of the interface was later evaluated using a more extensive summative test. The tests helped determine what is most important from the user's point of view when interacting with the interface of a 3D web application. When conducting usability tests, design issues can be found that are not obvious to designers or developers during development. These issues can then be remedied and help enhance the design and usability further, resulting in an interface that is easy to use.

The following usability metrics were collected during tests to determine if the prototype interface was usable and what parts needed to be re-designed to achieve higher usability:

• Task success - Users' success rate when performing given tasks. Collected as binary success.

• Time on Task - How fast users could complete given tasks. Measured from starting a task until task success is achieved.

• Errors - How many mistakes users made when attempting to complete given tasks. Collected through counting errors of omission and errors of commission made.

• Self-reported issues - How users perceived the system. Collected by asking users what they thought about the product.

• Usability issues - Interpreted from how users behaved when solving tasks. These include confusion, misinterpretation and mistakes that did not help in completing tasks.

The learnability (how well a user learned to use the product) of the interface was determined by measuring the task success and time on task while performing the tasks.

The effectiveness (how completely tasks could be performed) was determined by measuring task success and error rate, while the efficiency (how quickly a task could be completed) was determined simply from time on task.

The users' level of satisfaction was answered by the self-reported issues. These issues also showed whether the users perceived the interface to be easy to use or not. Usability issues added further proofing of the product by identifying issues that were not obvious from test statistics and the users' answers.

3.1.3 Formative evaluation of the prototype interface

The purpose of the formative study was to find design issues in the prototype design and to get an overview of what users might think of interacting with a 3D interface to generate a product.

Planning was the first part of conducting the study and a test plan was written to detail the testing process. The different parts of a test plan are described in chapter 2.2.1. The test plan for this test is included as appendix B. To make sure the test plan was sound, a test session was held, not to test the prototype but to evaluate the procedures of the test. This helped improve the test plan and made the test sessions more structured.

Recruiting participants

Recruitment of participants was based on their previous experience with online shopping. Participants were recruited among students at Linköping University. Characteristics and the number of participants can be seen in table 3.1.

Table 3.1: Characteristics of participants in formative study
  Minimum number of participants: 4
  Experience with online shopping:
    Frequent shoppers (> 11 times/year): 1
    Moderate shoppers (6–11 times/year): 2
    Infrequent shoppers (< 6 times/year): 1

Formative test session

Each participant underwent a test session consisting of an introduction, a performance of tasks and a debriefing. The introduction prepared the participant before performing the test and explained what was expected of him or her. The moderator also asked what expectations the participant had toward the prototype. During the performance step the participant was encouraged to "think aloud" and the moderator both observed and interacted with the participant to create an understanding of how he or she perceived the interface, and the thought process involved in the participant's decisions. The following tasks were given to the participants during the test and performed in order. The description of each task and its success criterion can be seen in the test plan, appendix B.

• Task 1 - Basic interaction with the 3D model.

• Task 2 - First time configuration using interactive components.

• Task 3 - Second time configuration using interactive components.

• Task 4 - Free exploration of the interface and configuration space.

During the debriefing, uncertainties that remained were discussed and clarified to collect all possible issues of the interface. The participant also answered questions about the experience and could ask questions in return.

Metrics

There were several metrics collected during this test. Task success and time on task were collected as quantitative measurements. These metrics can be used to identify issues encountered during tasks. Task success was defined for each task in the test plan and a stopwatch was used to measure time on task. Task success was analyzed per task as a success rate, with the percentage of users that succeeded. Confidence intervals for these statistics were calculated using the adjusted-Wald method, described in chapter 2.4. Time on task was also analyzed by task, as the average time required to complete a task. The confidence interval of the results was calculated using the t-distribution.

Self-reported issues and usability issues were collected as qualitative metrics. These can be used to get a picture of how users think when they interact with the product, what issues they experience, and how they approach the tasks. Self-reported issues can also help find design issues that cause a task failure or high time requirements when solving tasks. Two methods were used to collect issues.

• The "think aloud" process encouraged the participants to voice their preconceptions and expectations as well as their preferences when conducting the tasks. Thinking aloud helps gather both self-reported and usability issues.

• Asking questions before and after the study to gather the participants’ expectations and preconceptions as well as how they perceived the experience.

Triangulation was used to find consistencies among the issues and to compare with the quantitative findings. The result was presented as a list of design issues that impact the usability of the prototype. The issues were ordered by severity, based on how frequently they occurred and how they affected the user.


3.2 Evaluation

When the design and implementation of the interface are completed it will be evaluated using a summative usability test. The purpose of the test is to find out if users can use the interface to complete their goals. This test will verify the effectiveness of the implementation and the results will be used to answer the research question.

How should the interaction with a 3D product generator in a web application be designed to achieve high usability in terms of effectiveness?

Planning for the summative test will be done using a test plan that details the process of the test. It will also ensure that tests are conducted equally and that data collection is performed equally for all participants. The test plan for this test can be seen in appendix B.

3.2.1 Recruiting participants

Participants will be recruited among students and through SkyMaker's customer contacts. None of the users should have participated in the formative study detailed in chapter 3.1.3. The recruitment will be based on their previous experience with online shopping. Participants will also be of varying age and gender. Characteristics and the desired number of participants can be seen in table 3.2.

Table 3.2: Characteristics of participants in validation test
  Minimum number of participants: 12
  Experience with online shopping:
    Frequent shoppers (> 11 times/year): 4
    Moderate shoppers (6–11 times/year): 4
    Infrequent shoppers (< 6 times/year): 4
  Age:
    18–24: 4
    25–30: 4
    > 30: 4
  Gender:
    Female: 6
    Male: 6

3.2.2 Test session

A test session will be structured into three parts: an introduction, a performance of tasks and a debriefing. During the introduction, the participant will be briefed on how the test works and why it is done, to ensure that he or she knows what is expected during the test. The participant will then be left to complete the following tasks. The tasks are identical to those performed in the formative test sessions. Exact descriptions of these tasks and their definitions of success can be found in the test plan, appendix B.

• Task 1 - Basic interaction with the 3D model.

• Task 2 - First time configuration using interactive components.

• Task 3 - Second time configuration using interactive components.

• Task 4 - Free exploration of the interface and configuration space.

To ensure that the participant uses the product in its finalized state without interference or support, the moderator simply observes. The post-test debriefing will be used to discuss any particular issues that might remain and to answer any of the participant's questions.

3.2.3 Data collection and analysis

The data collected during the test will consist of the number of errors made. This will be collected by counting all incorrect selections and all errors of omission. These data will give quantitative results of the interface’s effectiveness.

The error rate will be calculated by task, both as the average number of errors made by all participants and as the percentage of users that made an error. Confidence intervals will be calculated to ascertain the reliability of the statistics. In the case of multiple errors the confidence interval will be calculated using the t-distribution, while the adjusted-Wald method will be used for binary error rates.

3.3 Motivation

The decision to apply usability testing instead of focus groups was based on the fact that both qualitative and quantitative data would be collected. While focus groups are suitable for the collection of qualitative data, they are not as useful when doing a quantitative evaluation. As the process was iterative, using both usability testing and another evaluation method was not considered; instead the same evaluation method was used in both iterations, only collecting different types of data.


4 Results

In this chapter the results from each part of the project will be presented. The different results from the phases of implementation will be described first followed by the concluding results from the evaluation process. The results are presented chronologically and build upon each other. This is especially true for the implementation results.

4.1 Implementation

The results from the implementation phase include several items: the wireframes that were created as a base for the layout and design of the prototype, the initial prototype developed from these wireframes, the test results of the formative study and finally the improved prototype.

4.1.1 Wireframes

This section describes the results of the wireframes' creation process, detailing the final wireframe, seen in figure 4.1. Initially, two wireframes with small differences were produced, which were then combined and improved upon in a final version. The final wireframe was used to determine the basic layout of the interface.

The final wireframe consisted of a header bar, a product pane, an information pane, and a footer. The header contained the site logo, a search bar and a shopping cart followed by a navigation bar with global navigation items for different product categories and account management. Below the header bar the product pane was placed containing breadcrumbs revealing the users’ location on the website, the 3D canvas for the product model and the interactive components linked to the 3D model. The grouping of the interactive components was outlined with sliders on top, other objects below them and a larger button at the bottom for adding the product to the shopping cart. The third pane contained a tab bar and a content area for displaying information linked to the different tabs. The footer at the bottom of the page would be holding site specific information and navigation to such information.

The main changes from the first two wireframes were the restructured header bar and the removal of global navigation from the product pane to create better visual alignment. The early wireframes are included in appendix A.


Figure 4.1: Final Wireframe

4.1.2 Initial prototype

The following section details the elements of the first prototype interface. The functionality and graphic elements will be described to provide an understanding of how the interaction was designed. An overview of the prototype interface can be seen in figure 4.2 and a detailed version can be seen in appendix C.

Following the first limitation in chapter 1.3, the parts of the prototype interface that do not concern the product lack full functionality. However, the interface was designed to look and feel like a fully functional web store product page. To create visual hierarchy, neutral colors are used: borders are set in light gray to avoid messy lines, text is in black and the background is white. Important parts, interactive components and active items are colored bright blue to guide the users to notice and interact with them.

The header bar was designed with navigation buttons, a search bar and a shopping cart button as in the wireframe. The cart and account buttons were given icons suited to their functions. None of the items in the header bar had any functionality and it was designed with different neutral colors compared to the rest of the page to be perceived as important without gaining visual hierarchy over the rest of the page.

On top of the product pane there were breadcrumbs to give a sense of location and to provide navigation back to previously visited pages. These links also lacked functionality, as with the other navigation. Below the breadcrumbs, the product name was presented in a large font, sitting atop the canvas of the 3D model. In the canvas, a 3D model of a table was shown standing on a wooden floor. The tabletop was a rectangular mesh with a textured surface. Each table leg consisted of a cylinder running from the tabletop down to a rectangular foot, and the legs were given a metal-textured surface to resemble a metal material. Finally, the floor was a plane mesh with a wooden textured material. The scene in the canvas also contained three types of lights: directional light from above the table model, a spotlight following the camera, and ambient light to brighten the scene. The materials of the meshes were made more realistic through a color map and a normal map, where the normal map was used to calculate how light reflected in the material to give it a three-dimensional look while still being a two-dimensional surface.

At the bottom of the canvas, an overlay was placed with images of arrows intended to indicate interaction with the model. When a user hovered over the canvas with the mouse, the arrows became fully visible, and they became transparent when the mouse left the canvas. This design can be seen by comparing figures 4.2a and 4.2b. To strengthen the indication of interaction further, the mouse cursor was changed from the standard arrow into a hand when hovering over the 3D canvas. Additionally, the user could interact with the scene's camera in two ways: rotating it by holding the left mouse button and moving the mouse, and zooming in and out by scrolling the mouse wheel.

(a) Initial screen
(b) Modified model
Figure 4.2: Screenshots of the first prototype (detailed images can be found in appendix C)
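The thesis does not name the 3D library used. Below is a minimal sketch, assuming Three.js, of how a scene like the one described could be assembled: textured meshes, a normal-mapped material, and the three light types. All file paths, dimensions, and light intensities are illustrative, not the thesis's actual values.

// Minimal sketch of a comparable scene, assuming Three.js; all paths and
// numeric values are illustrative.
import * as THREE from 'three';

const scene = new THREE.Scene();
const loader = new THREE.TextureLoader();

// Tabletop: a box mesh with a color map plus a normal map, so that light
// reflections give the flat texture a three-dimensional look.
const wood = new THREE.MeshStandardMaterial({
  map: loader.load('textures/wood-color.jpg'),        // color map
  normalMap: loader.load('textures/wood-normal.jpg'), // per-pixel surface detail
});
const tabletop = new THREE.Mesh(new THREE.BoxGeometry(1.2, 0.05, 0.8), wood);
tabletop.position.y = 0.7;
scene.add(tabletop);

// One of the four legs: a cylinder ending in a rectangular foot, with a
// metal-like material.
const metal = new THREE.MeshStandardMaterial({ metalness: 0.8, roughness: 0.3 });
const leg = new THREE.Mesh(new THREE.CylinderGeometry(0.03, 0.03, 0.65), metal);
leg.position.set(0.5, 0.35, 0.3);
scene.add(leg);
const foot = new THREE.Mesh(new THREE.BoxGeometry(0.1, 0.03, 0.1), metal);
foot.position.set(0.5, 0.015, 0.3);
scene.add(foot);

// Floor: a plane mesh with a wooden texture.
const floor = new THREE.Mesh(
  new THREE.PlaneGeometry(4, 4),
  new THREE.MeshStandardMaterial({ map: loader.load('textures/floor-color.jpg') })
);
floor.rotation.x = -Math.PI / 2;
scene.add(floor);

// The three light types described above.
const sun = new THREE.DirectionalLight(0xffffff, 0.8);
sun.position.set(0, 5, 0);                              // directional light from above
scene.add(sun);
scene.add(new THREE.AmbientLight(0xffffff, 0.3));       // brightens the whole scene
const spot = new THREE.SpotLight(0xffffff, 0.6);        // repositioned every frame
scene.add(spot);

const camera = new THREE.PerspectiveCamera(50, 16 / 9, 0.1, 100);
camera.position.set(2, 1.5, 2);
camera.lookAt(tabletop.position);

function render(renderer: THREE.WebGLRenderer): void {
  spot.position.copy(camera.position); // spotlight follows the camera
  renderer.render(scene, camera);
}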


To the right of the canvas, the last part of the product pane held the configuration components, starting with sliders on top that stretched the width of the area. Maximum and minimum values were shown above the endpoints of each slider, and in the middle a title described what the slider did. Centered below each slider, an input field displayed the slider's current value, which was connected to a parameter of the 3D model. This value could be changed either through the slider thumb or through the input field. Below the sliders, a title in bold font indicated which part of the 3D model was concerned, followed by drop-down lists with material and color options for that part. Whenever a parameter of the configuration was changed, the model was updated and rendered with the new parameter. As a special case, when the size of the tabletop was changed, the size of the floor was also changed so that it stayed larger than the table. Below the configuration components, a large buy button was placed to be clearly visible. Clicking it put the configured product in the shopping cart, updating the text of the shopping cart label.

The information pane consisted of a menu bar with tabs that controlled the content of the segment below. The bar could display product information and specifications, customer reviews, and accessories. The menu had complete functionality, while the content was placeholder text.
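A hypothetical sketch of the two-way binding described above, reusing the tabletop and floor meshes from the previous sketch; the element IDs, the centimetre-to-scene-unit conversion, and the floor-scaling rule are assumptions, not the thesis's implementation.

// Hypothetical binding between a slider, its input field and one model
// parameter (here: tabletop width); IDs and helpers are assumptions.
const slider = document.getElementById('width-slider') as HTMLInputElement;
const field = document.getElementById('width-field') as HTMLInputElement;

function applyWidth(widthCm: number): void {
  slider.value = String(widthCm);   // keep both controls in sync
  field.value = String(widthCm);
  tabletop.scale.x = widthCm / 100; // update the 3D model parameter
  // Special case from the first prototype: keep the floor larger than the
  // table (the final prototype fixed the floor size instead).
  floor.scale.setScalar(Math.max(1, widthCm / 200));
}

slider.addEventListener('input', () => applyWidth(Number(slider.value)));
field.addEventListener('change', () => applyWidth(Number(field.value)));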

The footer at the bottom of the page also lacked any functionality. It was designed with space for a company logo, social media links, and links to customer service.

4.1.3 Formative usability test

The formative usability test was conducted as described in the test plan with no deviations. The actual characteristics of the participants were identical to table 3.1, with four participants. The test plan can be found in appendix B. Results from this test include the statistical analysis of participants' time on task and task success, followed by a list of design issues compiled from the qualitative data.

Participants' time on task gave an indication of how quickly users could perform different tasks using the prototype interface. As can be seen in figure 4.3, basic interaction with the 3D model for first-time users (task 1) was fast, while the first interaction with the configuration components (task 2) required more time and gave more uncertain results. The second time participants interacted with the configuration (task 3) was considerably faster, which could indicate that they learned quickly and that the prototype had good learnability. When given the freedom to explore (task 4), participants took longer than when performing specified tasks, which is logical. However, the confidence interval is narrow, which indicates that most users would require roughly the same amount of time to explore the interface.

Figure 4.3: Geometric mean of task times with 95% c.i. (task 1: 20.40 s, task 2: 49.31 s, task 3: 30.22 s, task 4: 68.92 s)
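The thesis does not show how the geometric means and their intervals were computed. A common approach, assumed here, is a t-based confidence interval on log-transformed times that is then exponentiated back; a minimal sketch:

// Sketch: geometric mean of task times with a 95% confidence interval,
// via a t-interval on log-transformed times (assumed method).
function geomMeanCI(times: number[], tCritical: number) {
  const logs = times.map(Math.log);
  const n = logs.length;
  const mean = logs.reduce((a, b) => a + b, 0) / n;
  const variance = logs.reduce((a, b) => a + (b - mean) ** 2, 0) / (n - 1);
  const halfWidth = tCritical * Math.sqrt(variance / n);
  return {
    geometricMean: Math.exp(mean),
    lower: Math.exp(mean - halfWidth),
    upper: Math.exp(mean + halfWidth),
  };
}

// Illustrative example with four observations: t(0.975, df = 3) is about 3.182.
console.log(geomMeanCI([18.2, 19.5, 21.0, 23.4], 3.182));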

Task success from the test can be seen in figure 4.4. These results show the prototype enabling users to complete the intended tasks. However, there was an issue understanding the zoom function in the 3D canvas, which made task 1 harder to complete. All task success data is uncertain because of the small sample size, which results in wide confidence intervals.

Figure 4.4: Completion rate per task with 95% c.i.
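With only four participants, completion-rate intervals are often reported with an adjusted-Wald (Agresti-Coull) interval, which behaves better than a plain Wald interval at small samples. The thesis does not state which method it used, so the sketch below is an assumption:

// Sketch: 95% adjusted-Wald (Agresti-Coull) interval for a completion rate;
// a common choice for small usability samples (assumed, not confirmed here).
function adjustedWald(successes: number, n: number, z = 1.96) {
  const nAdj = n + z * z;                    // add z^2 pseudo-trials
  const pAdj = (successes + (z * z) / 2) / nAdj;
  const half = z * Math.sqrt((pAdj * (1 - pAdj)) / nAdj);
  return {
    rate: successes / n,
    lower: Math.max(0, pAdj - half),
    upper: Math.min(1, pAdj + half),
  };
}

// Illustrative example: 3 of 4 participants completing a task gives a rate of
// 0.75 with an interval of roughly [0.29, 0.97].
console.log(adjustedWald(3, 4));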

The list of design issues compiled from the qualitative data can be seen in table 4.1.

4.1.4 Final prototype

In this section, the final prototype interface is presented. Based on the issues outlined in table 4.1, changes were made to the first prototype described in section 4.1.2. These changes will be described to provide an understanding of how the functionality, design, and interaction were improved. An overview of the final prototype can be seen in figure 4.5 and a detailed version can be seen in appendix D.

The first change was due to issue one, the lack of feedback on the buy button. Two animations were added to the interface for when a user clicks the button. The button itself was animated, changing color to white to appear pressed and changing its text to tell the user that the item was added to the cart, before slowly transitioning back to its original appearance. Simultaneously, the shopping cart in the header bar was updated and animated in the same color as the original buy button to show that an update to the cart occurred, before transitioning back to its original appearance. These effects can be seen in figure 4.5b.
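A minimal sketch of such press-and-revert feedback, assuming CSS classes carry the pressed and highlight styles with transitions back to the original appearance; all element IDs, labels, and timings are illustrative.

// Sketch of the buy-button feedback: toggle CSS classes, swap the label,
// then revert; names and timings are assumptions.
const buyButton = document.getElementById('buy-button') as HTMLButtonElement;
const cartLabel = document.getElementById('cart-label') as HTMLElement;
let cartCount = 0;

buyButton.addEventListener('click', () => {
  cartCount += 1;
  cartLabel.textContent = `Cart (${cartCount})`;

  // 'pressed' and 'highlight' would carry the white / bright blue styles,
  // with a CSS transition easing each element back.
  buyButton.classList.add('pressed');
  buyButton.textContent = 'Added to cart';
  cartLabel.classList.add('highlight');

  setTimeout(() => {
    buyButton.classList.remove('pressed');
    buyButton.textContent = 'Add to cart';
    cartLabel.classList.remove('highlight');
  }, 1500);
});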

Change number two was made to handle issue number two. A border was added to the 3D canvas to display which parts of the interface belonged to it. The border was designed as a thin, light gray line identical to the interface's already existing borders. This indicated where the boundary was without cluttering the interface. An effect was also added to the canvas to clearly display when the user hovered over it with the mouse cursor, changing the border color to the highlight color used in other parts of the interface. This effect can also be seen in figure 4.5b.

To handle the third issue, three changes were made. First, the layout of the configuration items was changed: items that configure the same part of the model were grouped vertically instead of horizontally, and these groups were placed next to each other. This clarified which items were connected. The second change was an increase in margin between sliders and between groups of connected configuration elements, which improved readability and made room for the third change. The last part of solving issue three was the addition of borders between sliders and between groups of connected configuration items. The borders were identical to the existing ones, thin and light gray. Their purpose was to define boundaries, clarifying which items belonged together while avoiding an increase of clutter in the interface.

The next change dealt with issue four, the misunderstood arrows on top of the canvas. Since a new effect already showed when the mouse cursor was hovering over the canvas, the arrows were removed to avoid creating confusion.

When dealing with issue six, users not using the ability to zoom in the 3D canvas, a new canvas overlay was added. This overlay held a horizontal slider to indicate the ability to zoom and two icons to show which direction of the slider zoomed in and which zoomed out. The slider moved when the mouse wheel was used to zoom, the icons were given click events for zooming, and the slider thumb could also be dragged to zoom if preferred. This gave users more choices and a clear indication that zooming in the 3D canvas was possible.
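A sketch of keeping the three zoom controls in sync, reusing the camera from the earlier scene sketch; the distance limits, element IDs, step sizes, and the plain dolly-zoom math are assumptions.

// Sketch: overlay slider, zoom icons and mouse wheel all drive one camera
// distance, and the slider always reflects the current state.
const canvasElement = document.getElementById('product-canvas') as HTMLCanvasElement;
const zoomSlider = document.getElementById('zoom-slider') as HTMLInputElement;
const zoomInBtn = document.getElementById('zoom-in') as HTMLElement;
const zoomOutBtn = document.getElementById('zoom-out') as HTMLElement;

const MIN_DIST = 1; // fully zoomed in
const MAX_DIST = 6; // fully zoomed out
let distance = 3;   // current camera distance to the model at the origin

function applyZoom(newDistance: number): void {
  distance = Math.min(MAX_DIST, Math.max(MIN_DIST, newDistance));
  camera.position.setLength(distance); // dolly along the viewing direction
  // Reflect the state in the overlay slider (0 = out, 100 = in).
  zoomSlider.value = String(
    Math.round(100 * (MAX_DIST - distance) / (MAX_DIST - MIN_DIST))
  );
}

// The three equivalent ways of zooming offered by the final prototype:
canvasElement.addEventListener('wheel', (e: WheelEvent) => {
  e.preventDefault();                                   // mouse wheel
  applyZoom(distance + Math.sign(e.deltaY) * 0.25);
}, { passive: false });
zoomSlider.addEventListener('input', () => {            // slider thumb
  const t = Number(zoomSlider.value) / 100;
  applyZoom(MAX_DIST - t * (MAX_DIST - MIN_DIST));
});
zoomInBtn.addEventListener('click', () => applyZoom(distance - 0.5));  // icons
zoomOutBtn.addEventListener('click', () => applyZoom(distance + 0.5));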

Issue seven was the use of standard HTML components that were perceived as out of place in the design. The input and drop-down components were redesigned with rounded corners to match the rounded design of the sliders, and an effect was added for when a component was interacted with. This effect gave the focused component a border in the highlight color, which appeared to glow through a colored shadow. This can be seen in figure 4.5a, where the effect is active on a focused drop-down list.

The next issue was the floor appearing unnatural when it changed size. This was remedied by setting its size to a fixed value, so that the model appeared to be placed in the middle of a room. The purpose of this change was to avoid the confusion and the unnatural feeling users experienced during the test.

To handle issue nine, the slider thumbs had their size increased enough to make them prominent without cluttering or overshadowing other parts of the interface.

The remaining issues were left more or less unattended. The issues connected to the sliders, five and ten, were mitigated by adding some explanatory text to the information tab. Issue eleven was not considered severe enough, and issue twelve was dealt with indirectly when more margin and borders were added between sliders.

No.  Issue                                                                   Severity
1    No obvious feedback when pressing the buy button                        9
2    Hard to know when the mouse cursor is inside the 3D canvas              8
3    Configuration items linked to one part do not seem to belong together   6
4    The arrow overlay purpose is not perceived as intended                  6
5    Sliders with the same value are not aligned                             6
6    User does not zoom                                                      4
7    Standard HTML components do not fit in                                  4
8    The floor changing size is perceived as unnatural                       3
9    Small slider thumbs                                                     2
10   No explanation of maximum and minimum slider values                     2
11   The camera position is locked                                           1
12   The input fields' function is not clear                                 1

Table 4.1: Design issues compiled from the formative usability test, with severity ratings


(a) Initial screen

(b) Modified model

Figure 4.5: Screenshots of the final prototype (detailed images can be found in appendix D)


4.2 Evaluation

The evaluation process was conducted as a summative test of the prototype interface, following the description in the test plan found in appendix B. However, the process deviated from the test plan regarding the characteristics of participants. The desired spread of participant characteristics and the minimum number of participants can be seen in table 3.2. In table 4.2 below, the actual characteristics of the test participants can be seen. The deviation was based on the number of potential participants willing to help and the assumption that collecting more data would yield better results.


Number of participants                     17

Characteristics
Experience with online shopping
  Frequent shoppers (>11 times/year)       9
  Moderate shoppers (6–11 times/year)      4
  Infrequent shoppers (<6 times/year)      4
Age
  18–23                                    5
  24–26                                    8
  >26                                      4
Gender
  Female                                   6
  Male                                     11

Table 4.2: Actual characteristics of participants in the summative test

Results from the evaluation include the statistical analysis of the number of errors made by participants during the test. The results are presented in two parts: as the average number of errors made per task and as the percentage of users that made an error per task. Errors are shown as the number of incorrect selections, the number of errors of omission, and the total number of errors.

Figure 4.6 displays the average number of errors made per task by participants of the summative test. Blue bars represent incorrect selections, gray bars represent errors of omission, and orange bars represent the sum of all errors. 95% confidence intervals are specified for each type of data per task. This graph indicates that interaction with the model in the 3D canvas (task 1) was effective for most users and very rarely resulted in any kind of error. Participants seemed likely to experience at least one error when using the configuration components for the first time (task 2), while errors made from repeated interaction (tasks 3 and 4) show a decrease in numbers. The number of incorrect selections decreased quickly, while users seemed to make roughly the same number of errors of omission. However, errors of omission were rarely encountered at all.

Figure 4.6: Average number of errors made per task with 95% c.i. (incorrect selections: 0.06, 1.12, 0.71, 0.53; errors of omission: 0.18, 0.12, 0.12, 0.06; total: 0.24, 1.24, 0.82, 0.59 for tasks 1 to 4)

Figure 4.7: Percentage of users that made an error per task from the summative study (incorrect selections: 5.9%, 70.6%, 47.1%, 35.3%; errors of omission: 11.8%, 5.9%, 11.8%, 5.9%; any error: 17.6%, 70.6%, 52.9%, 41.2% for tasks 1 to 4)

The next graph, figure 4.7, shows the percentage of participants who made an incorrect selection in blue, an error of omission in gray, and any kind of error in orange. These results are also displayed per task with 95% confidence intervals. The graph shows that a majority of participants made at least one error during their first configuration of the model (task 2). With repeated interaction, fewer participants made errors, specifically incorrect selections. The percentage of errors of omission increased when configuring the material and color of the model (task 3) but then decreased again. The results suggest that the prototype interface was indeed effective, allowing a large majority of users to perform tasks without encountering any errors, or at most a single error.
