
Linköpings universitet SE–581 83 Linköping

Linköping University | Department of Computer and Information Science

Master thesis, 30 ECTS | Datateknik

2021 | LIU-IDA/LITH-EX-A--21/016--SE

Development of a 3D viewer for showing of house models in a web browser

A usability evaluation of navigation techniques

Utveckling av en 3D-visare för visning av husmodeller i en webbläsare

Pål Kastman

Supervisor: Anders Fröberg
Examiner: Erik Berglund


Copyright

© 2021 Pål Kastman

This work is licensed under CC BY 4.0 https://creativecommons.org/licenses/by/4.0/, unless otherwise stated.


Abstract

The architectural industry today struggles with how to best show its models to prospective buyers, unlike the construction industry, which has the advantage of being able to build physical models of houses that it can then show. This is where BIM comes into the picture. By extracting the graphics from BIM models and visualising them in a web browser, this study has, by creating a viewer with different navigation techniques, sought to find out which techniques were most efficient for navigating models in the viewer. This was done with the help of user tests, whose results show that, when it comes to projections, users were more efficient with perspective projection than with orthographic projection. However, user interviews show that users could still find a use for orthographic projection, as it was better for displaying floor plans. The egocentric perspective was more efficient than the allocentric perspective, but most users preferred the egocentric perspective inside the models and the allocentric perspective outside of them. As for culling objects and using clip planes, it is a closer race: users completed the task faster with the clip plane but to a greater extent with culling of objects. However, most users wanted to use both techniques at the same time so that they could complement each other.


Acknowledgments

I would like to thank Erik Berglund and Anders Fröberg for their guidance and feedback during this work.

I also want to give special thanks to my family and friends for continually inspiring me to finish this work.


Contents

Abstract iii

Acknowledgments iv

Contents v

List of Figures vii

List of Tables viii

Glossary ix

1 Introduction 1
1.1 Background . . . 1
1.2 Motivation . . . 1
1.3 Aim . . . 2
1.4 Research questions . . . 2
1.5 Delimitations . . . 2

2 Theory 4
2.1 Spatial memory and virtual environments . . . 4

2.2 Navigation techniques . . . 5

2.2.1 Egocentric and Allocentric Perspectives . . . 5

2.2.2 Gimbal Lock . . . 6

2.3 Culling and Clipping . . . 6

2.4 Graphical projections . . . 6
2.4.1 Projection plane . . . 6
2.4.2 Ray . . . 7
2.4.3 Parallel projection . . . 7
2.4.4 Perspective projection . . . 8
2.5 Usability testing . . . 9
2.5.1 Summative studies . . . 9
2.5.2 Formative studies . . . 9

2.5.3 Usability testing metrics . . . 10

2.5.4 Planning the tests . . . 10

2.6 Usability measures . . . 11
2.6.1 Performance measures . . . 11
2.6.2 Perception-based measures . . . 12
2.7 Confidence intervals . . . 12
2.8 Test data . . . 13
2.8.1 Binary data . . . 13
2.8.2 Continuous data . . . 13


3 Method 15
3.1 Prestudy . . . 15
3.2 Implementation . . . 15
3.3 Usability study . . . 16
3.3.1 Test model . . . 16
3.3.2 Test users . . . 16
3.3.3 Test sessions . . . 16
3.3.4 User Tests . . . 16
3.3.5 User interviews . . . 17

4 Results 18
4.1 Prestudy . . . 18
4.2 Implementation . . . 18
4.2.1 Camera control . . . 20

4.2.2 Clip plane control . . . 20

4.2.3 Graphical projections . . . 21
4.3 Usability Study . . . 22
4.3.1 User Tests . . . 22
4.3.2 User Interviews . . . 24

5 Discussion 26
5.1 Method . . . 26
5.2 Results . . . 27
5.2.1 Prestudy . . . 27
5.2.2 Implementation . . . 28
5.2.3 Evaluation . . . 28

5.3 Societal and ethical aspects . . . 30

6 Conclusion 31

Bibliography 32

A Appendix A – Test plan 35
A.1 Purpose . . . 35

A.2 Research questions . . . 35

A.3 Method . . . 35

A.4 User profiles . . . 35

A.5 Tasks list . . . 36

A.6 Test environment & equipment . . . 36

A.7 Evaluation method . . . 36

A.8 Deliverables . . . 37


List of Figures

1.1 Architectural models . . . 2

2.1 Six degrees of freedom of a ship . . . 5

2.2 View Frustum . . . 6

2.3 Various projections of cube above plane . . . 7

2.4 Axonometric projections . . . 8

2.5 Vanishing Points . . . 8

2.6 Percentage of problems found against the number of users tested in formative testing . . . 10

4.1 GUI used for testing . . . 19

4.2 Right click menu. . . 19

4.3 Clip plane control. . . 21

4.4 Implemented graphical projections . . . 22

4.5 Task success results . . . 23


List of Tables


Glossary

BIM Building information modeling. 1, 2, 27

CAD Computer-aided design. 15

CSS Cascading style sheets. 15

DoF Degrees of freedom. 4, 5, 6

FPV First-person view. 5

glTF™ Graphics Language Transmission Format. 2, 28

GUI Graphical user interface. 15, 18, 20

HTML Hypertext Markup Language. 15

IFC Industry Foundation Classes. 27, 28

ISO International Organization for Standardization. 10

SaaS Software as a service. 1

VE Virtual Environment. 2, 4, 5, 31

VR Virtual Reality. 2, 27


1 Introduction

This master thesis was done in collaboration with Sandell Collection AB – an architecture firm based in Stockholm, Sweden. From now on, this firm will be referred to as the client.

1.1 Background

When buying a house in Sweden today, the customer normally contacts a building contractor, which in turn contacts an architecture firm. This process makes it difficult for architecture firms to influence the customer to choose their designs. If an architecture firm wants to show its designs to the customer directly, it basically has two alternatives: it can either construct a miniature model of the design (see figure 1.1), or it can show a computer model of the design.

Both of these alternatives mean that they would have to invite all customers, one by one, to view the models at their office. That would be very time-consuming, and hence very expensive too. If they were instead able to make the models available online, so that anyone interested could explore them at their own pace, they would not only save time but could also reach more potential buyers.

1.2 Motivation

Building Information Modelling (BIM) is the process of collecting all the information about a building in one place. The data is stored in a BIM file that works as a database for the building. The database can be used to visualise the building in 3D as it contains all the geometries and coordinates of all the objects in the building. This makes it possible for architects, engineers and contractors to work collaboratively and in real-time on the same model.

The software used by architects to work with BIM files consists of desktop applications that are very advanced and also very expensive; examples of these are Autodesk Revit¹ and Rhinoceros².

1. https://www.autodesk.com/products/revit
2. https://www.rhino3d.com/


(a) Interior model of a condo. (b) Exterior model of a building.

Figure 1.1: Physical architectural models that can be shown to potential customers to promote the sale of a building. In (a) the focus is on the interior of the building, whereas in (b) the exterior is shown instead.

There also exist some applications that are Software as a service (SaaS) applications (Autodesk Forge³, bimsync⁴). These use the Web Graphics Library⁵ (WebGL), a JavaScript⁶ API for visualising 2D and 3D graphics directly in a web browser without the need to install additional software. These could also be used to let customers explore models in a web browser, but the downside is that the pricing is often based on the number of views, which can make these options expensive as well.

However, information that is used for visualisation, such as geometries, can be exported from BIM files by converting them into other file formats such as the Graphics Language Transmission Format⁷ (glTF™) or the Wavefront OBJ file format [18]. These files can then be used to visualise the models in a web browser using WebGL.

1.3 Aim

The aim of this work is to build a prototype that uses several different techniques for navigating, exploring, and viewing 3D models in a virtual environment (VE). These techniques are then evaluated in a usability study to determine which techniques enable the users to explore the models as efficiently as possible.

1.4 Research questions

The research question of this thesis is the following:

How should a 3D viewer be designed in order to help users achieve high efficiency, in terms of usability, when navigating large-scale virtual environments?

1.5 Delimitations

The focus of this thesis lies on the usability of the application that will be built. The application is therefore considered a prototype, and no guarantees are made that it is compatible with any specific web browser, or with any particular version of one.

3. https://forge.autodesk.com/
4. https://bimsync.com/
5. https://www.khronos.org/webgl/
6. https://developer.mozilla.org/en-US/docs/Web/JavaScript


Virtual environments mentioned in this thesis are defined as 3D models on a computer screen, and not virtual reality (VR).


2 Theory

This chapter contains theory about spatial memory, virtual environments, navigation techniques, and different ways to project 3D objects onto two-dimensional surfaces. It also contains theory regarding usability testing.

2.1 Spatial memory and virtual environments

Spatial memory is the part of human memory where cognitive maps of environments that have been visited or studied on a map are stored. This information can be gathered when planning a route to, or navigating inside, an environment. The theory of a cognitive map was first proposed in 1948 by Edward C. Tolman [31]. He experimented with lab rats and made the observation that once the rats had learned a way out of a maze, they were able to switch to an alternate route if the one they had learned was blocked.

Since then, more research has been done in this area. James F. Herman et al. conducted tests with 20 children (10 girls and 10 boys) split into three different age groups up to five years of age. After first visiting a large-scale environment, they were asked to draw it on paper. Half of the children in each age group successfully drew a correct model after the first visit; the remainder managed it after the third visit [10].

K. Woollett et al. [36] conducted tests on two different groups of people. The first group contained taxi drivers with good knowledge of the streets of London; the second group did not have that knowledge. They showed that the taxi drivers found it easier to learn a new town than the second test group did. However, when a modified version of London was tested, the second test group performed better.

As computer graphics get better, the ability to build increasingly advanced VEs has made research focus on how well users are able to store these as cognitive maps, compared to maps and physical environments.

Richardson et al. [22] used maps, physical environments and VEs to test which was the most effective alternative. Their conclusion was that when environments only contained one level, there was no difference. However, when the number of levels was equal to or greater than two, VEs gave the worst result. Waller et al. [34] showed that short exposure to training in VEs was less effective than maps as a learning medium, but if the users were exposed for longer periods of time, it could even surpass visits to the real environment.


2.2 Navigation techniques

In a virtual environment there are six degrees of freedom (DoF). These can be split into three rotational DoF for rotating about the xyz axes and three translational DoF for movement along the xyz axes (see figure 2.1).

Since a regular computer mouse is a 2D input device, it is only able to control two DoF at a time in a 3D virtual environment. Over the years, several devices have been developed [3, 9, 32, 35] in order to increase the number of DoF that can be controlled.

Studies have shown that for some operations these devices are superior to the regular mouse [11, 16], although a more recent study [5] showed that regular mice were more efficient when performing object placement in a 3D virtual environment.

Figure 2.1: The six DoF of a ship. Three DoF (denoted 1-3) for translation along the xyz axes, and three DoF (denoted 4-6) for rotations around the xyz axes.

("Ship movements on the wave", by Brosen, licensed under CC BY-SA 3.0)

2.2.1 Egocentric and Allocentric Perspectives

Rotations and translations are usually split into egocentric and allocentric perspectives. In the egocentric perspective, rotations are made from the observer's perspective. This means that the rotational centre is at the location of the camera; this is usually called first-person view (FPV) in video games [13]. Translations in this perspective are naturally along the axes that are defined as left/right and up/down from the observer's perspective. In the allocentric perspective, the rotational centre is instead placed at a distant point that is external to the observer [13], conveniently at an object that is to be observed. Zooming can be used to translate the view closer to the object, whereas translations are usually done in the same axes as the projection plane.
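To make the distinction concrete, the sketch below is an illustrative example (not code from the thesis; the camera object with `pos` and `dir` fields is an assumption) showing yaw rotation about the camera's own position versus orbiting about an external pivot.

```javascript
// Minimal sketch of egocentric vs. allocentric yaw rotation (illustrative).

// Rotate a 2D point (x, z) by `angle` radians around the origin.
function rotate2D(x, z, angle) {
  const c = Math.cos(angle), s = Math.sin(angle);
  return [c * x - s * z, s * x + c * z];
}

// Egocentric: the camera stays in place; only its view direction turns.
function egocentricYaw(camera, angle) {
  const [dx, dz] = rotate2D(camera.dir[0], camera.dir[2], angle);
  camera.dir = [dx, camera.dir[1], dz];
}

// Allocentric: the camera orbits a pivot point and keeps looking at it.
function allocentricYaw(camera, pivot, angle) {
  const offX = camera.pos[0] - pivot[0];
  const offZ = camera.pos[2] - pivot[2];
  const [rx, rz] = rotate2D(offX, offZ, angle);
  camera.pos = [pivot[0] + rx, camera.pos[1], pivot[2] + rz];
  // Look back at the pivot after moving.
  const len = Math.hypot(rx, camera.pos[1] - pivot[1], rz) || 1;
  camera.dir = [-rx / len, (pivot[1] - camera.pos[1]) / len, -rz / len];
}
```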

Research has been done on how spatial learning differs depending on whether the egocentric or allocentric perspective is used. Thorndyke et al. [30] compared the egocentric perspective, based on navigation experience, with the allocentric perspective, based on studies of maps. They concluded that with the egocentric perspective users acquired better knowledge of routes in the environment, whereas users who had studied maps acquired better survey knowledge of the environment. This study was later replicated by Ruddle et al. [24] with almost the same result; the difference, however, was that they used VEs instead of the actual physical environment.

More recent studies have also been made. Münzer et al. [17] investigated how spatial knowledge acquisition differed depending on whether the egocentric or allocentric perspective was used. They tested several different groups with different aptitudes. Their conclusion was that the egocentric perspective is more natural for users, as it much more resembles our way of navigating, whereas the allocentric perspective could be used to acquire survey knowledge,


though it may need some time to learn depending on the aptitude of the user. Their recommendation was to give the user the opportunity to choose between the two options.

2.2.2 Gimbal Lock

Gimbal locking is when one of the rotational DoF discussed in section 2.2 is removed. It is usually the rotation around the axis that is perpendicular to the projection plane that is removed, hence removing the possibility to tilt the camera sideways [12].

2.3 Culling and Clipping

In computer graphics, the view frustum consists of six planes that define what is visible on the screen. In order to speed up rendering, objects that are outside the view frustum can be culled from view, which means that they are not rendered at all [12].

Objects that are partially inside the view frustum can instead be clipped, so that the part of the object that is not visible is not rendered. This is done using a clip plane, which is placed at the location of one of the planes that define the view frustum. Clip planes can also be placed inside the view frustum to create a cross section of whatever is inside the view frustum. This will make it possible to view the inside of objects. [12]

Figure 2.2: The view frustum when using a perspective graphical projection described in section 2.4.4. The planes that are not shown in colour are the left, bottom and far planes. ("View Frustum", by MithrandirMage, licensed under CC BY-SA 3.0)
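As an illustration of the plane test underlying both techniques, the following sketch (not from the thesis; the plane representation is an assumption) classifies points and bounding boxes against a clip plane using signed distances.

```javascript
// Illustrative sketch: classify geometry against a clip plane.
// A plane is given by a unit normal n and a point p0 on the plane.

function dot(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Signed distance from point p to the plane (n, p0).
// Positive: in front of the plane (kept); negative: behind it (clipped away).
function signedDistance(p, n, p0) {
  return dot(n, [p[0] - p0[0], p[1] - p0[1], p[2] - p0[2]]);
}

// Cull an object entirely if every corner of its bounding box is behind
// the plane; clipping, by contrast, removes only the part behind it.
function isCulled(corners, n, p0) {
  return corners.every((c) => signedDistance(c, n, p0) < 0);
}
```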

2.4 Graphical projections

Graphical projection is the process of illustrating a three-dimensional object on some kind of two-dimensional surface; this could for instance be a paper, a canvas, or a computer screen. There are many different ways to do this, but all of them are based on two different techniques, which are described in sections 2.4.3 and 2.4.4. In order to understand them, information about planes and rays needs to be provided first.

2.4.1 Projection plane

The projection plane, or image plane, is the surface onto which something is projected. Examples of this can be seen in figure 2.3, where objects are projected onto projection planes using different projection techniques.


2.4.2 Ray

A ray is a straight line drawn from one point of an object through the projection plane. A projection consists of a system of rays that together produce the projection of the object on the projection plane [29]. Examples of this can be seen in figure 2.3.

2.4.3 Parallel projection

Parallel projection can be divided into orthographic and oblique projection techniques. For both of these, the rays going from the object that is to be projected to the projection plane are parallel to each other. In orthographic projections, all the rays are also perpendicular to the projection plane, as opposed to oblique projection where this is not the case [29]. To get a better understanding of this, see figure 2.3. The focus here will be on orthographic projections.

Figure 2.3: "Various projections of cube above plane", by Datumizer, licensed under CC BY-SA 4.0

Orthographic projections can be split into axonometric and multiview projections. In multiview projections, the object is placed so that all its faces are either perpendicular or parallel to the projection plane. That means only one side of the object will be visible, making it a two-dimensional view; therefore, multiple views are needed to show more than one side of the object, hence the name. This can be seen in figure 2.3.

In axonometric projections, the object is rotated around one or several axes to display all three dimensions. There are three types of axonometric projections, even though the object can be rotated in infinitely many ways. The three types are defined by the angles between the three axes of space: when all angles are unequal, it is called a trimetric projection; when two angles are equal, a dimetric projection; and when all angles are equal (120°), an isometric projection.


Figure 2.4: "Trimetric projection", "Dimetric projection", and "Isometric projection", by Datumizer, licensed under CC BY-SA 3.0

Although the trimetric projection is said to be the most pleasing to look at for the viewer, it is the isometric projection that is most useful when it comes to engineering and architecture. This is because all angles and lengths that have one size in one place of the drawing will have the same size wherever they appear in the drawing (equal measure), and therefore it is possible to take measurements directly from the drawing (see figure 2.4) [6].

2.4.4 Perspective projection

In perspective projection, all the rays can be said to originate from a single point, and thus only one ray will be perpendicular to the projection plane (see figure 2.3). In figure 2.5, a cube is projected onto a projection plane; the lines on both sides of the cube are parallel to each other and will therefore converge to the same vanishing point. This enables us to see the depth of objects; in other words, we can determine whether objects are located at different distances [15].

Figure 2.5: Perspective projection with two vanishing points at the edges of the horizontal line on the projection plane.
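For reference, the following sketch shows how the two projection families are typically expressed as WebGL-style column-major 4×4 matrices. This is an illustrative example of the standard matrices, not code taken from the thesis or from Xeogl.

```javascript
// Illustrative WebGL-style projection matrices (column-major Float32Array).

// Orthographic projection: parallel rays, no foreshortening.
function orthoMatrix(left, right, bottom, top, near, far) {
  return new Float32Array([
    2 / (right - left), 0, 0, 0,
    0, 2 / (top - bottom), 0, 0,
    0, 0, -2 / (far - near), 0,
    -(right + left) / (right - left),
    -(top + bottom) / (top - bottom),
    -(far + near) / (far - near), 1,
  ]);
}

// Perspective projection: rays converge towards the eye, so distant
// objects appear smaller. fovy is the vertical field of view in radians.
function perspectiveMatrix(fovy, aspect, near, far) {
  const f = 1 / Math.tan(fovy / 2);
  return new Float32Array([
    f / aspect, 0, 0, 0,
    0, f, 0, 0,
    0, 0, (far + near) / (near - far), -1,
    0, 0, (2 * far * near) / (near - far), 0,
  ]);
}
```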


2.5 Usability testing

Usability testing is a type of testing where users are given a set of tasks to perform with a product. Observations can then be made of how the users interact with the product [14]. It is neither the user nor the product that is tested, but the relationship the user has to the product [14]. It is also a great opportunity to improve the development process, so that the same mistakes are not made over and over again [7].

There are many different ways to perform usability testing [14, 19]; two of these are described below.

2.5.1 Summative studies

Summative studies are large studies that are usually done on a fully developed product to validate its success [4].

Because these studies are larger, the tests are usually done in an unmoderated test setup, where the user is given a script that defines the different tasks that are to be performed, whereupon the user solves the given tasks.

The benefit of this method is that it speeds up the process, as a moderator does not need to be present and multiple tests can be done concurrently. One drawback is that perception-based data cannot be gathered with the speak-aloud method described in section 2.6.2 [2].

2.5.2 Formative studies

Formative studies, as opposed to summative studies, are not performed to prove something. Instead they are meant to provide insights into how users perceive the product before it is fully developed.

This is done by performing tests in iterations before and during the early stages of a product. The product can be evaluated, and changes can be made to it, which would not have been possible in a summative study. It can also help settle potential arguments that may arise about how the product should be developed [4].

Formative tests are usually carried out in a lab setting with a test user and a moderator. The moderator gives the user tasks to perform, and then observes the user's performance and behaviour during the test [2].

There has been a lot of research on how large the test group should be in formative testing. Jakob Nielsen and Tom Landauer showed that after the first user had tested the product, one third of the problems had been found. When five users had conducted the test, 85% of the problems had been found. They also showed that beyond five users, the tests would cost more in terms of time put into them than what would be earned in terms of errors found (see figure 2.6) [20, 21]. Another study also showed that with four or five users, 80% of the problems had been found [33].

There are several studies that oppose this idea. One study performed tests on several different products and showed that when using five test users, an average of only 35% of the problems in the product were found, though for some of the tests in the study the theory still held [27]. Another study tested 60 users and showed, by randomly selecting groups of 5 users, that the number of problems found ranged between 55% and 99%. When groups of 10 and 20 users were tested, the number of problems found did not go below 80% and 95% respectively, regardless of how the groups were selected [8]. A third study showed that the theory could fail spectacularly depending on which users were chosen to conduct the testing; but then again, for some user groups it still held [37].

This shows that in order to get a valid result, it is not only the size of the sample that is important, but also that the test group contains users from all the different target groups. A classic example of this is the 1936 presidential poll made by The Literary Digest, where they failed


Figure 2.6: The curve shows the percentage of problems found against the number of users tested.

by a large margin to anticipate the winner of the 1936 presidential election, by failing to ask several different groups of users [28].

2.5.3 Usability testing metrics

The metrics for usability testing are defined in International Organization for Standardization (ISO) standard 9241-11, which states the following about usability:

"The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use." Effectiveness, efficiency and satisfaction are further defined in the ISO as.

• Effectiveness - The accuracy and completeness with which users achieve specified goals.

• Efficiency - The resources expended in relation to the accuracy and completeness with which users achieve goals.

• Satisfaction - Freedom from discomfort, and a positive attitude towards the use of the product.

The meaning of these metrics can be described further. Effectiveness is the extent to which the product behaves in the way that the user expects it to. Efficiency is the speed at which the user is able to perform tasks in a well-executed manner. Satisfaction is the user's own feelings about the product; this cannot be measured directly but is instead gathered by interviewing the user. If a speak-aloud method is used, notes can be taken of what the user says during the tests. While the effectiveness and efficiency metrics can tell us that users have a problem using the product, satisfaction can tell us why the problem exists [23].

2.5.4 Planning the tests

The test plan can be seen as the blueprint for the tests, as it contains all the crucial information about them. Below are some parts that are typically included in a test plan [23].

• Purpose & goal of the test - This section should provide an overview of the test plan. It should not include any specific information, as it is meant to give the reader a grasp of what the test aims to do without reading the whole plan [4, 23].


• Research questions - Describes the issues in more detail than the purpose does. It is important that this part clearly states what the tests aim to find out [23].

• Method or test design - This section should describe in detail how the tests will proceed, from the user arriving until the user is done testing.

• User profiles - Describes the users for the test. If several test groups with different characteristics are used, they should all be described here. [4]

• Participant incentives - Defines if or what incentives are to be given to the users in the test [4]. This could be money, gift certificates or food. Giving participants a little something can make them more likely to show up on time and increase their desire to help [14].

• List of tasks - Specifies what the users should do in the test. It should consist of functionality that users may end up using in the product [4]. Each task should contain a brief description of the task, any special material or machine state of the product required to carry out the task, a description of what is considered to be a successfully completed task, and finally how the success should be measured [23]. Since all tasks may not be solved during the test, as time is a factor, they should be prioritised so that those believed to be most important are done first [23].

• Test environment & equipment - If certain equipment is needed to perform the tests, it can be described here [4]. The equipment listed should only be that which users need to carry out the test, and not any equipment that is needed to, for example, take notes or record the tests [23]. This section may also include settings on the equipment, if any such were necessary [4].

• Moderator role - Describes what the moderator will do during the test. This may be skipped if only the moderator and the user are involved in the test; however, if the test involves more people monitoring it, then the test plan should include this section so that nothing the moderator does during the test leads to confusion among the observers [23].

• Evaluation method - Describes the data that will be collected, both quantitative and qualitative, along with the method used to do so. Can also contain questionnaires, if such are used [4].

• Deliverables - Describes how and where the result of the tests will be reported [4]. The results can for instance be presented in a report, or in an oral presentation [23].

2.6 Usability measures

This section describes different types of data that can be collected when testing. It also explains different methods to calculate confidence intervals depending on the type of data in the sample.

2.6.1 Performance measures

Performance measures are information about how the user fares during the test; this can for instance be the number of errors the user makes during a task, or the time a user spent on a task. This information can tell whether there is a usability problem somewhere in the product [2]. The most common performance measures are listed below.

• Task success - This is the most commonly used metric, as it can be used for almost any type of product as long as the tasks have a clear goal. It can either be calculated as a


binary success (pass or fail) or in steps of success, where the different levels would have to be defined [2].

• Time on task - Can be used to measure the efficiency of a product. Measured as elapsed time, where a shorter time is usually better (with some exceptions) [2].

• Errors - Errors are incorrect operations by the user that can lead to failures. One or several errors can be the cause of a usability issue, and therefore it may be worth measuring the number of errors as a complement to issues, because if only issues are measured, it would be easy to miss a potential problem. Similarly to task success, in order to measure errors the goal of a task needs to be clearly stated [2].

• Amount of effort - Time on task was said to measure efficiency, but efficiency can also be measured by looking at how many steps the user took when performing a task. These steps can for example be the number of clicks a user had to perform in order to get to a certain menu [2].

• Learnability - The extent to which efficient use of a product can be learned. This is measured by, on a number of occasions, recording the elapsed time a task takes to perform, in order to see if any improvements are made over time [2].

2.6.2 Perception-based measures

Perception-based measures, or self-reported data, are comments from the user [2, 23, 26]. This data is much needed as a complement to performance data, because if a user is only observed, the data can be misinterpreted and incorrect assumptions may be drawn. For example, if it is taking a user a lot of time to perform a task, we might think we have a problem, as the time on task will be high, when in fact the user might just be enjoying the product [2].

There are several ways to collect this data. One way is the speak-aloud method, where users are encouraged to verbalise what they are thinking. This can give more detailed insights into why the user is experiencing a certain problem. However, a downside with this method is that tasks can take a lot more time, and it should therefore be avoided when using time on task as a performance measure [23].

It can also be gathered by asking the user predetermined questions about the product, or by making them fill out a questionnaire. When a questionnaire is used, data can either be collected after each task (post-task) or after the whole session (post-session). Post-task data is meant to give more insight into which tasks the user experienced as the most problematic, whereas post-session data is meant to give a more overall picture of how the users perceived the product.

If post-task data has been collected, post-session data can be calculated by taking the mean value of the post-task data. The questionnaire usually consists of questions with a scale going from disagree to agree; in order to give the users the option to take a neutral stance, the scale should contain an uneven number of steps. The questionnaire can also consist of, or contain, answer boxes without prefilled alternatives [2, 23].

2.7 Confidence intervals

Since we can almost never test the entire population of users, our result will not be one hundred percent true to the population. What we can do, however, is calculate an interval that the result will fall within a certain proportion of the time. This interval is called a confidence interval, and it is two times the margin of error. The proportion of times we can say that the result will be inside the interval is called the confidence level; it is set before we calculate the interval and can be set between 0 and 100%. If it is set to 95%, then we can say that 95 times out of 100, the result will be inside the interval. One common misunderstanding is to think that


the result will be inside the confidence interval with a 95% certainty, which is not the case. [26] Example 2.1 explains this in more detail.

Example 2.1: Confidence interval

If an election poll has one of two candidates as the winner with 64% of the votes, a margin of error of 4%, and a confidence level of 95%, then the confidence interval is eight percentage points wide, reaching from 60% to 68% (64 ± 4%). The confidence level tells us that this will be the case 95 times out of 100.

2.8 Test data

To calculate the confidence intervals, different methods should be used depending on whether the collected data are binary or continuous. Some of these methods are explained in the sections below.

2.8.1 Binary data

For binary data, such as completion rates, the Wald formula can be used. The formula can be seen in equation 2.1, where n is the sample size, p̂ the proportion of trials that were successes, and z the critical value from the normal distribution for the level of confidence [2].

$$\hat{p} \pm z_{1-\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \qquad (2.1)$$

One major problem with the Wald formula is that it is not very precise when working with small sample sizes or when the proportion is near 0 or 1, which is very common with small sample sizes [1]. This means that when one believes one has a confidence level of 95%, it is probably much lower than that. This has led people to believe that large sample sizes are always needed, which is false. Both the adjusted-Wald method and the exact method can be used for small sample sizes.

The difference between the Wald method and the adjusted-Wald method is that the small sample size is taken into account by adjusting the observed proportion of task successes. To achieve this, p̂ and n in equation 2.1 are replaced with p̂_adj and n_adj with the help of equations 2.2 and 2.3.

$$\hat{p}_{adj} = \frac{n\hat{p} + \frac{z^2}{2}}{n + z^2} \qquad (2.2)$$

$$n_{adj} = n + z^2 \qquad (2.3)$$
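A minimal JavaScript sketch of the adjusted-Wald interval (illustrative; the function name and the hard-coded z value for a 95% confidence level are assumptions, not from the thesis):

```javascript
// Adjusted-Wald confidence interval for a completion rate (binary data).
// successes: number of passed tasks, n: number of trials,
// z: normal critical value (1.96 for a 95% confidence level).
function adjustedWaldInterval(successes, n, z = 1.96) {
  const pAdj = (successes + (z * z) / 2) / (n + z * z); // equation 2.2
  const nAdj = n + z * z;                               // equation 2.3
  const margin = z * Math.sqrt((pAdj * (1 - pAdj)) / nAdj);
  return [Math.max(0, pAdj - margin), Math.min(1, pAdj + margin)];
}

// Example: 12 of 15 users completed a task.
console.log(adjustedWaldInterval(12, 15)); // ≈ [0.54, 0.94]
```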

The exact method can, as mentioned earlier, also be used for small samples, though it has some drawbacks. When calculating a 95% confidence interval, it provides at least 95% coverage, which means the actual coverage is more likely to be closer to 99% than 95%. This results in an unnecessarily wide confidence interval, which will only get wider as the sample size gets smaller. It is also mathematically heavy to calculate [26].

2.8.2 Continuous data

For continuous data, such as time on task, other methods must be used. Both the t-distribution and the z-distribution (normal distribution) can be used. The difference between them is that the t-distribution takes the sample size into account and makes the interval wider


as the sample gets smaller. For samples with a total of over 100, the difference between them is only fractions [26]. Therefore only the t-distribution is described here.

$$\bar{x} \pm t_{1-\alpha/2}\frac{s}{\sqrt{n}} \qquad (2.4)$$

Equation 2.4 shows the t-distribution interval, where x̄ denotes the sample mean, n the sample size, s the standard deviation, and t the critical value of the t-distribution for n − 1 degrees of freedom and the specified level of confidence [26].

One problem with mean values is that if one or several values diverge from the rest, they may make the mean diverge as well, especially if the sample size is small. Therefore it can be better to use the median value; if the sample size is even, the mean of the two most central values is used. This way we avoid the influence of extremely low or high values [26].

However, median values have been shown to have problems as well. In a study by Sauro et al. [25], it was shown that when sample sizes are smaller than 25, the median value was consistently overestimated, giving a biased statistic.

Another study also showed that median values tended to overestimate the sample centre. Furthermore, it showed that the geometric mean gave the smallest error when estimating the sample centre, compared with other methods (trim-top, trim-all, harmonic and Winsorized means).

The geometric mean can be found by converting the raw sample data with the natural logarithm, taking the arithmetic mean of these values, and finally inverting that mean using the exponential function [26].

An example of calculating a confidence interval with a t-distribution and a geometric mean for a sample containing continuous data can be seen in example 2.2.

Example 2.2: t-distribution with geometric mean

Consider a sample containing the following task times: a = {70, 60, 75, 63, 51}.

The geometric mean x̄ is first calculated:

$$A = \frac{1}{n}\sum_{i=1}^{n}\ln(a_i) = \frac{\ln(70)+\ln(60)+\ln(75)+\ln(63)+\ln(51)}{5} \approx 4.15$$

$$\bar{x} = e^{A} \approx 63.4$$

The standard deviation s is then calculated:

$$s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(a_i - \bar{x})^2} \approx 9.26$$

The final step is to insert all values into equation 2.4 along with the t-value. For a confidence level of 95% and n − 1 = 4 degrees of freedom we get t = 2.776.

$$\bar{x} \pm t_{1-\alpha/2}\frac{s}{\sqrt{n}} = 63.4 \pm 2.776\cdot\frac{9.26}{\sqrt{5}} = 63.4 \pm 11.5$$
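The same calculation can be sketched in JavaScript (illustrative; the t critical value is passed in, since computing it would require a t-table or a statistics library; the exact output differs slightly from the example above, which rounds A to 4.15 before exponentiating):

```javascript
// Confidence interval for task times using the geometric mean (example 2.2).
function geometricMeanInterval(times, t) {
  const n = times.length;
  // Geometric mean: arithmetic mean of logs, then back-transform.
  const logMean = times.reduce((sum, x) => sum + Math.log(x), 0) / n;
  const mean = Math.exp(logMean);
  // Sample standard deviation around the geometric mean, as in the example.
  const variance =
    times.reduce((sum, x) => sum + (x - mean) ** 2, 0) / (n - 1);
  const margin = t * Math.sqrt(variance / n);
  return { mean, low: mean - margin, high: mean + margin };
}

// t = 2.776 for 95% confidence and 4 degrees of freedom.
console.log(geometricMeanInterval([70, 60, 75, 63, 51], 2.776));
// ≈ { mean: 63.2, low: 51.7, high: 74.8 }
```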


3 Method

This chapter is split into three parts. The first part covers the prestudy that was done to discover the exact problem the client had and what they aimed to achieve with this project. The second part covers the implementation and what tools were used during that phase. The third and last part covers the evaluation of the product, which was done with usability testing.

3.1 Prestudy

A short prestudy was done with the intention of understanding the problem that the client had. During the fall of 2018, a meeting was held with the client, where they showed their process of showing customers their models. Relevant theory was also collected to show the client what could be done during this meeting.

3.2 Implementation

The implementation of the product was done in one iteration using web development tools such as JavaScript¹, cascading style sheets² (CSS) and HyperText Markup Language³ (HTML). Three JavaScript libraries were used: Xeogl⁴, dat.GUI⁵, and RequireJS⁶.

RequireJS is a file and module loader that makes it easier to work with many files during development, as it makes sure that files are not imported several times and are imported in the right order. dat.GUI is a lightweight graphical user interface (GUI) library that can be used to change variables in JavaScript without having to reload the web page.

Xeogl is the WebGL library that was used. It is specialised for models that have a large number of individual items, as it does not have any game-engine effects such as reflections and shadows, which can slow down rendering. Therefore it is a more suitable option for

1. https://developer.mozilla.org/en-US/docs/Web/JavaScript
2. https://developer.mozilla.org/en-US/docs/Learn/CSS
3. https://developer.mozilla.org/en-US/docs/Learn/HTML
4. http://xeogl.org/
5. https://github.com/dataarts/dat.gui
6. https://requirejs.org/


products aimed at industries such as CAD, medicine and architecture than, for example, the much more popular options three.js⁷ and BabylonJS⁸.

3.3 Usability study

In order to answer the research question, the product was evaluated at the end of the project with formative usability testing.

Even though the product was developed in one iteration, formative testing was preferred over summative testing. This decision had a number of reasons behind it; the main one was that, with a limited time frame for the project, it was considered too much work to prepare an unmoderated test setup. Also, even if this had been possible, a moderated test setup was preferred by the author, as any potential misunderstandings about the test or the product could then be directly eliminated.

3.3.1 Test model

The house model that was used for the tests is called Lilla Integralen⁹, a house that the client has built in the new city district Vallastaden¹⁰ in Linköping. The model was not the complete model that was used when constructing the house, but a simpler model that had been drawn in SketchUp¹¹ and used to 3D print a model of the house. This was because the real model contained a lot of information that was not needed for the tests.

3.3.2 Test users

Test users were not chosen on the basis of whether they were potential house buyers or whether they had any previous experience with similar products for 3D visualisation. This was because the product was intended to be used by anyone.

3.3.3 Test sessions

The test sessions were conducted either at a location chosen by the moderator or at a location chosen by the user. The main point was that the moderator and the user would not be disturbed during the test, and that the user would feel comfortable at the location.

Before the testing began, the moderator instructed the user on how the test worked and what was expected of the user during the test. When the test had begun, the user was given a set of tasks to do. The user could at any point during the test give up on a task or the whole test.

The tasks can be found in appendix A.5.

3.3.4 User Tests

Task success and time on task were collected as performance metrics to measure the efficiency of the product. Task success was based on whether the goal that was declared along with the task had been reached. Time on task was based on how long the task took for the user to complete. A confidence level of 95% was chosen for both these metrics, and confidence intervals for each task were then calculated.

For time on task, confidence intervals were calculated using the t-distribution with a geometric mean. The adjusted-Wald method was used to calculate confidence intervals for task success.

7. https://threejs.org/
8. https://www.babylonjs.com/
9. http://www.sandellsandberg.se/content/lilla-integralen-vallastaden/
10. https://www.vallastaden2017.se/
11. https://www.sketchup.com/


3.3.5 User interviews

Perception-based data was gathered by asking the users a series of questions after the tasks had been completed. This method was chosen over the speak-aloud method because of the problem, discussed in section 2.6.2, with using the speak-aloud method together with the time on task metric. A questionnaire could have been used, but as the moderator and the user were in the same room, it felt more natural to have a discussion afterwards instead. Also, Münzer et al. [17] experienced that, when comparing egocentric and allocentric perspectives using the speak-aloud method, the users only verbalised what they were seeing on the screen, and not what they were thinking. Therefore it would not provide any more information, as the moderator was sitting in the same room viewing the same screen.

If the user had any additional questions after the test and interview were done, the moderator would stay and discuss them until the user was satisfied.


4 Results

This chapter contains the outcome of the prestudy, implementation, and evaluation sections of the method chapter.

4.1 Prestudy

During the meeting described in section 3.1, the client showed the different processes they were currently using when showing customers their house models. There were primarily two different methods: the first was simply showing a physical model of the building, which they had either built themselves, 3D printed, or hired someone to build for them (see figure 1.1). The other option was to show the customer the model on a TV screen in a conference room.

The main problem with the physical model was that it showed either the interior or the exterior, so they either had to build two models or settle for one of them. The problem with computer models was that their program was hard to navigate, so the navigation had to be done by someone from the client.

The client's wish was to have a product built that their customers themselves could use to explore and navigate the models at home, at their own pace.

4.2 Implementation

The viewer was implemented with both an orthographic projection and a perspective projection. As there is no way to place the model so exactly that the sizes of the angles can be determined, the orthographic projection is most likely a trimetric axonometric one, as explained in section 2.4.

Xeogl contains functionality to mark, cull, and make objects transparent, and all of this functionality was used. Items can be selected by clicking on them; several objects can be selected by holding the shift key on the keyboard while clicking them. When one or more objects are selected, they can be deleted or ghosted through a menu that is accessed by right-clicking. The menu also contains functionality for saving the current camera position, restoring the last saved camera position, deselecting all selected objects, and displaying all objects selected for culling or ghosting. There is also the possibility to zoom in on the last selected item. If no objects are marked, the menu will not show the actions for culling and ghosting objects.
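The sketch below illustrates how such a selection menu might drive these states (illustrative JavaScript; the `selected`, `culled` and `ghosted` flags are assumptions modelled on the mesh states described here, not code from the thesis):

```javascript
// Illustrative selection state for a right-click menu (not thesis code).
const selection = new Set();

// Toggle selection; with shift held, add to the current selection.
function onObjectClicked(object, shiftKey) {
  if (!shiftKey) selection.clear();
  selection.has(object) ? selection.delete(object) : selection.add(object);
  object.selected = selection.has(object); // highlight flag on the object
}

// Menu actions applied to everything currently selected.
function cullSelected() {
  selection.forEach((o) => { o.culled = true; });  // not rendered at all
}
function ghostSelected() {
  selection.forEach((o) => { o.ghosted = true; }); // rendered transparent
}
```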


A simple and minimal GUI was implemented, positioned at the top right of the browser window, so that the settings for the camera control, projection type, and clip control could be changed quickly and easily during the tests.

Figure 4.1: The simple GUI used for the tests.


4.2.1 Camera control

To be able to navigate, a camera control was implemented with both an egocentric and an allocentric perspective, as explained in section 2.2.1.

For both of these perspectives, navigation can be done using the keyboard and the mouse. Translations are done using the W, A, S and D keys of the keyboard, and rotations can be done either with the arrow keys or by pressing the left mouse button and dragging in the desired direction. Translations with the W and S keys are in the forward/backward direction of view; however, for the egocentric perspective there is an option in the GUI to disable this, which makes forward and backward translations instead take place in the horizontal plane, i.e. the plane perceived as the one currently being stood on. This functionality is especially useful as it makes it possible to walk around in the model while looking up and down without going through the ceiling or floor.
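A sketch of this walk-mode translation (illustrative; it assumes a camera described by yaw and pitch angles in radians, which is not necessarily how the thesis implementation represents orientation):

```javascript
// Illustrative walk-mode translation: move in the horizontal plane only,
// regardless of whether the camera is pitched up or down.
function moveForward(camera, distance, walkMode) {
  const pitch = walkMode ? 0 : camera.pitch; // ignore pitch when walking
  const dx = Math.cos(pitch) * Math.sin(camera.yaw) * distance;
  const dy = walkMode ? 0 : Math.sin(pitch) * distance;
  const dz = Math.cos(pitch) * Math.cos(camera.yaw) * distance;
  camera.pos[0] += dx;
  camera.pos[1] += dy; // stays constant in walk mode
  camera.pos[2] += dz;
}
```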

The direction of the vertical rotations can also be inverted in a menu, so that dragging the mouse towards oneself makes the camera tilt upwards.

4.2.2 Clip plane control

Besides culling objects from view, a clip plane, described in section 2.3, was also used to be able to see inside models. To control it, a clip plane control was implemented that consists of two arcs, an arrow, and a rectangular outline of a plane (see figure 4.3).

By clicking and dragging the arcs, the control can be rotated around the axes it is currently facing. Note that only two rotational degrees of freedom are used, as there is no point in rotating the clip plane around its own normal; that would have no effect on which objects are clipped. The control can also be translated along the normal of the clip plane with the arrow.

The control is positioned at an offset in the direction of the clip plane normal; this is done so that the control will not be placed on top of, or in between, other objects in the model, which would make it more difficult to pinpoint. The size of the control is adjusted on every rotation, zoom and translation of the scene, by calculating the distance to the model and then updating the control. This makes it possible to use the control even when the model is zoomed out, as it will not be too small to use. In the same way, the control would be too large at a close distance to the model and would only be in the way of other objects. However, there is an upper limit beyond which the control is not made any larger, because when the model can no longer be seen there is no point in using it; this limit therefore serves as an indication to the user that it is better to zoom in.
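A sketch of such distance-based scaling (illustrative; the constants are assumptions, not values from the thesis):

```javascript
// Illustrative: scale the clip-plane control with camera distance,
// clamped so it stays usable when zoomed out but never dominates the view.
function updateControlSize(control, cameraPos, modelCenter) {
  const dist = Math.hypot(
    cameraPos[0] - modelCenter[0],
    cameraPos[1] - modelCenter[1],
    cameraPos[2] - modelCenter[2],
  );
  const SCALE = 0.15;        // fraction of the distance (assumed constant)
  const MIN = 0.5, MAX = 20; // world-space size limits (assumed constants)
  control.size = Math.min(MAX, Math.max(MIN, dist * SCALE));
}
```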

Hovering over the different parts of the control highlights them in yellow to indicate to the user that interaction is possible. When the user interacts with the control, it fires an event that is used to deactivate the camera control described in section 4.2.1; this is necessary because it would be impossible to operate both the camera control and the clip plane control at the same time.

The intention of the visual outline mentioned above is to help the user comprehend that it is a clip plane that is being rotated and translated, and not just the control. The size of the outline varies depending on the size of the loaded model and, as opposed to the control, is not adjusted depending on the distance to the model.

From the GUI, the clip plane can be positioned at the centre of the model, with the direction facing one of the six possible normal planes of the scene. The control can also be turned off from the GUI.


Figure 4.3: The control used to position the clip plane.

4.2.3 Graphical projections

Xeogl contains three different graphical projections, of which two were used.

The first projection was the perspective projection; examples of how a model looks in this projection can be seen in figures 4.4c and 4.4d, and the projection type is explained in section 2.4.4. The second projection was the orthographic projection; examples of how a model looks in this projection can be seen in figures 4.4a and 4.4b, and the projection type is explained in section 2.4.3.


Figure 4.4: The two different projections that were used. The orthographic projection in (a) and (b), and the perspective projection in (c) and (d).

4.3 Usability Study

The usability study was performed in two parts: the first part consisted of user tests, and the second part was an interview containing predefined questions about the tests, which the users answered when they were done with the tests.

4.3.1 User Tests

The user tests were performed according to the test plan in appendix A.

The model on which the tests were supposed to be carried out was intended to be a complete model that included an interior with complete textures. Unfortunately, that model could not be used, as it contained too many detailed objects and textures, making it impossible for the browser to load. Instead, a simpler model was used that only contained one floor, but that still had furniture.

As the product was intended to be used by anyone, there was no desired spread of characteristics such as age or gender among the test users. Yet it may still be worth noting that the characteristics of the users participating in this study vary in age, gender, and past experience with similar products such as video games. The distribution of the test users' characteristics can be seen in table 4.1.

The results from the tests include statistical data on task time and task success from the six different tests that were performed.


Table 4.1: Characteristics of test participants

Total number of participants: 15

Gender
  Female: 9
  Male: 6

Age
  20-30: 6
  30-40: 7
  >60: 2

Experience with video games
  None: 4
  Some: 5
  Great: 6

Task success can be seen in figure 4.5. It shows that more users passed task 1b than 1a, while for tasks 2a and 2b the number of users who passed was equal. The last pair of tasks is where the biggest gap between users who were able to complete the task can be seen: more users passed 3b than 3a.

Tasks 3a and 3b were slightly different from the other tasks, as the users themselves had to decide when they were finished. Some users simply wanted to remove more objects in the drawing than others, which meant they took longer on task 3b than others.

Figure 4.5: Task success results. Task 1a: Orthographic graphical projection, task 1b: Perspective graphical projection, task 2a: Egocentric navigation perspective, task 2b: Allocentric navigation perspective, task 3a: Clip plane, task 3b: Object culling.

The results for time on task can be seen in figure 4.6. For the first pair of tasks, task 1a took longer to finish than task 1b. It is also quite clear that the confidence interval was far greater for 1a than for 1b, which indicates that the spread of times was greater for task 1a.


Task 2b took longer for users to complete than 2a, but the confidence intervals for the two tasks are quite similar, indicating that the spread of times was more even between these two tasks.

The final two tasks, 3a and 3b, also had more even confidence intervals between them, but differed somewhat because the users themselves had to decide when they were finished with task 3a. As some users were more competition-oriented, they wanted to finish as quickly as possible and therefore did not remove as many objects as other users.

Figure 4.6: Time on task results. Task 1a: Orthographic graphical projection, task 1b: Perspective graphical projection, task 2a: Egocentric navigation perspective, task 2b: Allocentric navigation perspective, task 3a: Clip plane, task 3b: Object culling.

4.3.2 User Interviews

For the first two tests, which tested user performance using different projections, all users thought that perspective projection was preferable to orthographic projection when it came to navigating the model. However, many users liked the orthographic projection when the model was placed perpendicular to the view, showing one side of the house, as they thought the model looked more like a drawing then.

When it was described to the users why and how orthographic projections are normally used, as explained in section 2.4.3, almost all users wanted to look down on the house from above to compare how a floor plan looks in the different projections, even though this was not included in the tests. When doing so, several users reflected that they recognised that view from floor plans of houses and apartments they had previously seen. This fits well with how estate agents choose to present floor plans. However, one user pointed out that he had seen floor plans made with both orthographic and perspective projection; he had no preference for either of the two, and instead concluded that it is good to have both alternatives to compare with.

In the two tests where the egocentric and allocentric navigation perspectives were compared, all users preferred the allocentric perspective when navigating outside the model. But as they


got closer to the model, or were inside it, they preferred the egocentric perspective instead, as they thought it was too hard to rotate around the intended point; many of them accidentally picked the wrong object, so the point of rotation became something other than they first thought, which would in some cases place them outside the model again.

In the last two tests, preferences for the technique used to look inside the model while still being on the outside were more evenly split between the users. Some preferred to remove objects, while others preferred the clip plane.

Several users expressed a desire to be able to combine these techniques instead of having to choose one or the other.


5 Discussion

In this chapter, the method and result chapters are discussed, along with the societal and ethical aspects of this work.

5.1 Method

As described in sections 2.5 and 2.5.2, the biggest differences between summative and formative testing are that summative tests are done on products that are fully developed or in the final stages of development [4], while formative testing is normally done in iterations, which can then help influence the ongoing implementation of the product and make it more like the users would want it to be. As this was a prototype product, it might seem natural to use formative testing done in iterations. But it was deemed too time-consuming to perform tests in several iterations as well as to complete all the functionality inside the time frame of the project. It is possible that the functionality would have been better adapted to the users if formative testing had been done in iterations, but then there would also have been a risk that the product would not have been finished in time to perform the tests, and this was considered the worse scenario.

The number of users needed to get a valid result with a formative test method varies greatly depending on the literature, as mentioned in section 2.5.2. While some articles state that no more than five test users should be necessary, others argue that more users are needed in order to get a valid result. This led to the decision to use more than five users, but not as many as some studies claimed would be needed.

A more reliable result might have been achieved had more users been involved in the tests. Whether this is the case is unclear, as the articles contradict each other and it is hard to decide which of them should be considered more reliable. None of them, however, claims that using more than five users gives a worse result. Therefore, a number of users was chosen that was considered reasonable in order to finish in time.

Another argument against including more users was that a larger study would probably have had to be performed remotely, which would have ruled out face-to-face conversations with the users. These conversations did not affect the results of this project, as the product would not be taken further after the tests, but they may be of interest to the customer if they decide to continue the work on this prototype themselves.


Having users perform tests remotely would have meant that they themselves were responsible for measuring their own results. It would also have meant that each user in the study adds not only a result but also a margin of error in judging when a task is completed. Instead, the moderator measured the result for each user, so the decision of when each user had finished a task was the same for all users. This makes the results more valid.

Even though the test method used does not strictly follow how formative studies are normally performed, it is a mix between formative and summative studies, which would still make it possible for someone else to recreate the tests without any issues.

Learnability and amount of effort are two metrics that are described in section 2.6.1 and that could have been used instead of task success and time on task. However, learnability requires that tests are done in iterations in order to see a change in the test users' behaviour, and as discussed above this was not the case in this study, so the metric was not considered. The main argument against choosing amount of effort as a metric was that it was considered too complicated to keep track of all of a user's choices. It was also considered uncertain whether more steps taken by the user can really be regarded as a negative result.

In the same way that the thoughts of the users did not directly affect the results of this study, since it was not performed in iterations as explained above, the user interview held at the end of each test may still be of interest to the customer, should they choose to continue developing the prototype.

5.2 Results

This work has produced three different types of results. First, a preliminary study was made, which generated relevant theory. Then, with the help of that theory, a product was developed during the implementation phase. Last, during an evaluation phase, the product was evaluated with the help of test users. In this section, the results of these three phases are discussed.

5.2.1 Prestudy

There are many different techniques for navigating in a virtual world, and many of them require input devices of a kind that an ordinary user does not normally possess. There are also many articles from recent years that have performed tests with VR techniques. Theory involving these types of input devices has not been considered, as ordinary users do not usually have them at home, and as this product is meant to be usable by anyone who has a computer with the input devices that usually come with it.

Section 2.1, about spatial memory and virtual environments, was included to give the reader a better understanding of the concept, as it is referred to throughout the report. Some of the other material in the theory chapter has not been implemented in the product or used when conducting the user tests. This information has been regarded as adjacent to the theory that was used and has therefore been included in the hope that it gives the reader something to compare with and thereby a better understanding.

This information includes the axonometric and multiview projections in section 2.4; the amount of effort, errors, and learnability performance measures in section 2.6.1; the Wald formula and the exact method for calculating binary data in section 2.8.1; and the z-distribution in section 2.8.2.


5.2.2 Implementation

At the beginning of the project, the idea was to use BIM files in the IFC (Industry Foundation Classes) file format. IFC files work like a database and need to be hosted from a server. BIM files are described in more detail in chapter 1.

In order to do this, the IFC files must first be fetched from the server before they can be visualised in the web browser. BIMserver¹ was used to host the files, the models were fetched from it with the help of BIMserver-JavaScript-API², and, last but not least, BIMsurfer³ – which in turn uses Xeogl – was used to visualise the models.

The intent was to extend and modify the functionality of BIMsurfer. The problem, though, was that all of the above-mentioned products are open source software and hence under constant development. This made them unstable, and different versions had many compatibility problems with each other, so it was hard to find a combination of versions that ran stably. As a result, a lot of time at the beginning of this work was spent troubleshooting and switching between versions rather than developing new functionality, and since that troubleshooting was not even part of the work, a decision was made to abandon these products and instead use Xeogl directly.

In order to do this, the IFC files needed to be converted into static glTF™ files that only contain the graphics of the models. The possibility of doing this was discovered in the middle of the project; had it been known at an earlier stage, a different WebGL framework could have been used, since Xeogl had not been actively chosen but came as a consequence of using BIMsurfer, which in turn uses Xeogl.
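For reference, loading such a converted file in Xeogl amounts to little more than creating a GLTFModel component. The sketch below illustrates the general idea; the file name and camera coordinates are made up for the example and are not taken from the actual implementation.

    // Load a converted glTF model into xeogl's default scene.
    var model = new xeogl.GLTFModel({
        id: "house",
        src: "models/house.gltf" // hypothetical output of the IFC conversion
    });

    // Once loaded, aim the default camera at the model.
    model.on("loaded", function () {
        var camera = model.scene.camera;
        camera.eye = [-15, 10, 15];
        camera.look = [0, 0, 0];
    });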

Although the author of Xeogl claims⁴ that the framework performs better when rendering models that contain many objects, putting performance aside, a more popular framework could have had other advantages. Popular products can help in ways one might not think of at first: when more people encounter a problem, there is a greater chance that someone has already shared the problem and a solution on the internet. Also, as all these frameworks are open source projects, users usually contribute their own implementations, which means the frameworks tend to offer more functionality.

three.js and BabylonJS are two alternative frameworks that are much more popular than Xeogl; both have communities with forums where users gather to share problems along with possible solutions. Some problems in this work took a lot of time and effort to solve, and using a different framework could potentially have helped solve them quicker. For instance, developing the transform gizmo attached to the clip plane described in section 4.2.2 was probably the hardest and most time-consuming part of the implementation phase. three.js contains a transform control⁵ that could probably have been used for this, and some time could potentially have been saved there.
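To give an idea of what this would look like, the sketch below attaches the three.js transform control to an invisible handle object that drives a clip plane. It assumes an existing scene, camera and renderer; the variable names are illustrative, and this is a sketch of the approach rather than the implementation used in this work.

    import * as THREE from 'three';
    import { TransformControls } from 'three/examples/jsm/controls/TransformControls.js';

    // A global clip plane and an invisible handle object that drives it.
    const clipPlane = new THREE.Plane(new THREE.Vector3(0, -1, 0), 2);
    renderer.clippingPlanes = [clipPlane];

    const handle = new THREE.Object3D();
    handle.position.set(0, 2, 0);
    scene.add(handle);

    // The gizmo lets the user drag the handle; 'rotate' mode would tilt it.
    const gizmo = new TransformControls(camera, renderer.domElement);
    gizmo.attach(handle);
    gizmo.setMode('translate');
    scene.add(gizmo);

    // Re-derive the clip plane from the handle whenever it is dragged.
    gizmo.addEventListener('change', () => {
        const normal = new THREE.Vector3(0, -1, 0).applyQuaternion(handle.quaternion);
        clipPlane.setFromNormalAndCoplanarPoint(normal, handle.position);
    });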

5.2.3 Evaluation

As explained in section 4.3.1, the model that was intended to be used for the usability evaluation could not be used, as it contained too many objects, which made it impossible for the web browser to load. The original model was created in the SketchUp⁶ software, which does not support conversion directly to the glTF™ format. Conversions were therefore attempted with other software, such as obj2gltf⁷ and Autodesk 3ds Max⁸, but unfortunately without any success. It was therefore decided that a different model should be used for the tests.

It is possible that the result could have been different if the original model had been used. On the other hand, the product is intended to be used with many different models, which means that the result can still be seen as valid regardless of which model was used when evaluating it.

The first tests, which examined how users perform with different projections, showed that users managed to complete the task to a slightly greater extent with perspective projection than with orthographic projection. Most users were also faster completing the task with the perspective projection, and the confidence interval for perspective projection is significantly smaller, as the time it took users to complete that task was much more evenly distributed. This could be because the perspective projection is more similar to reality. It might also be because it resembles a video game more than the orthographic projection does.

The orthographic projection is not really similar to anything we as humans are used to, since it makes objects appear the same size regardless of their distance to the camera. This also means that a rotation has to be performed over a longer distance than with perspective projection, which may explain why the task took more time to complete with orthographic projection.
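The difference between the two projections comes down to the shape of the view volume. As an illustration in three.js (the viewer in this work used Xeogl, but the concepts are the same), the two camera types are set up as follows; all numeric values are arbitrary example values.

    import * as THREE from 'three';

    const aspect = window.innerWidth / window.innerHeight;

    // Perspective projection: a 60-degree vertical field of view makes
    // distant parts of the model appear smaller, as in reality.
    const perspectiveCam = new THREE.PerspectiveCamera(60, aspect, 0.1, 1000);

    // Orthographic projection: the view volume is a box, so an object is
    // drawn at the same size no matter how far from the camera it is.
    const size = 20; // height of the visible volume in world units
    const orthoCam = new THREE.OrthographicCamera(
        -size * aspect / 2, size * aspect / 2, // left, right
         size / 2, -size / 2,                  // top, bottom
         0.1, 1000                             // near, far
    );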

Although users performed worse with the orthographic projection, many of them still found the technique useful for looking at the model from above and thereby seeing its floor plans. This shows that the projection still has its uses and that the two projections could complement each other in different ways.

The second test, which examined whether users performed better with the allocentric or the egocentric perspective, showed that task success was the same for both techniques, as all users passed both tests. However, users completed the test much faster with the egocentric than with the allocentric perspective. This could be because of the issue several users mentioned in the user interviews: they found it difficult to navigate with the allocentric perspective when inside the model, as they would click on something they had not intended, so that the rotation took place around an unintended point.

This is not a problem with the egocentric perspective, as it does not matter where on the screen the user clicks when rotating; the rotation always takes place around the point where the camera is.
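The distinction can be summarised in two small functions. The following three.js sketch is illustrative only and uses hypothetical names; it is not the code used in the viewer.

    import * as THREE from 'three';

    const UP = new THREE.Vector3(0, 1, 0);

    // Allocentric: orbit the camera around a pivot, e.g. a picked point.
    // If the pick lands on the wrong object, the orbit centre is wrong too,
    // which is the problem the users described.
    function orbit(camera, pivot, angle) {
        const offset = camera.position.clone().sub(pivot);
        offset.applyAxisAngle(UP, angle); // swing the offset around the pivot
        camera.position.copy(pivot).add(offset);
        camera.lookAt(pivot);
    }

    // Egocentric: turn the view direction around the camera's own position,
    // so where the user clicks does not matter.
    function lookAround(camera, angle) {
        camera.rotateOnWorldAxis(UP, angle);
    }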

The confidence intervals were more even in this task than in the previous one, which shows less variation among the users' results. The results of this test were as expected and comply well with the conclusion of Münzer et al. [17] that the egocentric perspective is a more natural way for users to navigate a model.

In the last test, the users first had to cull objects and then use a clip plane to see the inside of the model from the outside. More users completed the test using culling than with the clip plane, although those who used the clip plane completed the test faster. The two users who did not complete the test with the clip plane were, as mentioned in section 4.3, the only two belonging to the highest age group; they also belonged to the category of users who had not played video games.

The reason they were not able to complete the task with the clip plane could of course be either of these factors, or a combination of the two, but it could also be a completely different reason that has not been taken into account here. Regardless, all users managed to complete the task with the help of culling, and judging by the user interviews, most users were positive about being able to choose which method to use for this task. This should help most people cope with the task, especially if the product allows the methods to be combined.
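Combining the two techniques is straightforward in principle, as they act on different levels: the clip plane slices geometry at draw time, while culling simply hides whole objects. A sketch of the combination in three.js, assuming an existing scene, camera and renderer, with illustrative names and values:

    import * as THREE from 'three';

    // 1. Clip plane: slice the model horizontally so the interior becomes
    //    visible from outside. Geometry above y = cutHeight is clipped away.
    const cutHeight = 2.5; // illustrative, in world units
    const clipPlane = new THREE.Plane(new THREE.Vector3(0, -1, 0), cutHeight);
    renderer.clippingPlanes = [clipPlane];

    // 2. Object culling: hide the object under the mouse cursor instead.
    const raycaster = new THREE.Raycaster();
    function cullAt(pointerNdc) { // pointer position in NDC, range -1..1
        raycaster.setFromCamera(pointerNdc, camera);
        const hits = raycaster.intersectObjects(scene.children, true);
        if (hits.length > 0) hits[0].object.visible = false;
    }

Because the two mechanisms are independent, enabling both at once – as several users requested – requires no extra work.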

¹ https://github.com/opensourceBIM/BIMserver
² https://github.com/opensourceBIM/BIMserver-JavaScript-API
³ https://github.com/opensourceBIM/BIMsurfer
⁴ https://stackoverflow.com/a/6965426
⁵ https://threejs.org/examples/?q=control#misc_controls_transform
⁶ https://www.sketchup.com/
⁷ https://github.com/CesiumGS/obj2gltf

References
