
Linköping University | Department of Management and Engineering
Master's thesis, 30 credits | Master of Science – Design and Product Development
Spring 2020 | LIU-IEI-TEK-A--20/03750—SE

Automating the CAD to Virtual Reality Pipeline for Assembly Simulation

Anton Engberg
Gustav Eriksson

Supervisor: David Beuger
Examiner: Johan Persson

Linköping University
SE-581 83 Linköping, Sweden
+46 013 28 10 00, www.liu.se


Abstract

Virtual reality (VR) is emerging as a valuable tool in the manufacturing industry, as it allows engineers to place themselves in a virtual environment in which they can inspect and evaluate their 3D designs, providing a sense of scale not available through a 2D screen. Siemens Industrial Turbomachinery AB currently uses physical prototypes to assess whether their designs work from an assembly perspective; these prototypes can be expensive and time-consuming to make and are often downscaled. An interest has therefore emerged in exploring virtual reality as a tool for simulating and evaluating assembly sequences, as well as for training operators on those sequences, which lays the foundation for this thesis work.

The thesis explores the possibility of using virtual reality to simulate assembly sequences using imported CAD models. Emphasis is put on automating the CAD to virtual reality pipeline, on how arbitrary CAD models can be presented in virtual reality, and on how assembly evaluation and training with these CAD models can be simulated in VR. An application is developed in Unreal Engine to explore the possibilities of using the program for virtual reality assembly simulation and to identify potential problem areas. A solution to each of the problems is proposed, and these solutions together make up the application. The application is evaluated with end users to identify areas of improvement. The general conclusions that can be drawn from the results are that there are differences in how CAD programs and Unreal Engine handle and make use of 3D geometry, which can cause issues, and that the number of parts and the size of those parts are the two most prominent parameters that can cause problems when importing, handling and using arbitrary CAD models in Unreal Engine.


Acknowledgements

We would like to thank Siemens Industrial Turbomachinery for providing the case on which our work is based, as well as for their continuous cooperation throughout the writing of this master's thesis. We especially thank our project owner Almir Ajkunic, as well as Pierre Fransson, who provided valuable insights during the development of the VR application.

We would also like to thank our supervisor David Beuger for the help and knowledge he has provided about working with Unreal Engine and VR, as well as our examiner Johan Persson for valuable feedback throughout the writing of the report.

Finally, we would like to thank our opponents Fredrik Källmén and Johan Dahl, as they were a continuous source of feedback and were invaluable for discussing problems and obstacles we faced throughout the thesis work.

Linköping, June 2020


Table of Contents

1 INTRODUCTION
  1.1 CONTEXT
  1.2 BACKGROUND
  1.3 PURPOSE
  1.4 RESEARCH QUESTIONS
  1.5 DELIMITATIONS
2 THEORETICAL BACKGROUND
  2.1 UNREAL ENGINE CONCEPTS
    Blueprints Visual Scripting
    Levels and Sublevels
    Static Mesh
    Actor
    Datasmith
    Visual Dataprep
    Events
    Blueprint Interface
    Collision
    Editor Scripting
  2.2 VR AS AN ENGINEERING TOOL
  2.3 VR AS AN EDUCATIONAL TOOL
  2.4 CAD TO VR/GAME ENGINE
3 METHOD
  3.1 PRE-STUDY
    Interview
    User stories
  3.2 APPLICATION DEVELOPMENT
    Brainstorming
    Tasks
    Testing
    Task board
  3.3 EVALUATION
    Usability context
    User observations
    System Usability Scale
    Expectation measures
4 IMPLEMENTATION
  4.1 PRE-STUDY
    Interview
    User stories
  4.2 APPLICATION DEVELOPMENT
    Brainstorming
    Tasks
    Testing
    Task board
  4.3 EVALUATION
    Usability context
    User observations
    System Usability Scale
    Expectation measures
    Interview
5 RESULTS
  5.1 PRE-STUDY
    Interview
    User stories
  5.2 APPLICATION DEVELOPMENT
    Application overview
    CAD data import
    Presenting CAD data
    Assembly sequence simulation
  5.3 EVALUATION
    Usability context
    User observations
    System usability scale
    Expectation measures
    Interviews
6 DISCUSSION
  6.1 METHOD DISCUSSION
  6.2 RESULT DISCUSSION
7 CONCLUSION
8 FUTURE STUDIES
9 REFERENCES
APPENDICES
  APPENDIX A REQUIREMENTS AND USER STORY COUNTERPARTS
  APPENDIX B TASK LIST
  APPENDIX C EXPECTATION MEASUREMENT FORM
  APPENDIX D EXPERIENCE MEASUREMENT FORM


Table of Figures

Figure 1. Example of a blueprint script
Figure 2. Level created in UE (Epic Games, 2014f)
Figure 3. Static Mesh in wireframe/rendered mode
Figure 4. Custom actors in a level
Figure 5. Event node On Component Begin Overlap
Figure 6. Interface message node and corresponding event node
Figure 7. Static mesh with collision geometry
Figure 8. Visualisation of Hit Event (Epic Games, 2014d)
Figure 9. Visualisation of Overlap Event (Epic Games, 2014d)
Figure 10. Process structure
Figure 11. General application development process
Figure 12. The four levels of software testing
Figure 13. Schematic illustration of how cards on the task board are handled
Figure 14. SUS adjective scale (Bangor, Kortum and Miller, 2009)
Figure 15. Plotting Evaluation measures result (Albert and Dixon, 2013)
Figure 16. Example from brainstorming on whiteboard
Figure 17. Example of user story
Figure 18. The usability testing session processes used
Figure 19. Tool menu
Figure 20. Tree hierarchy of assembly in Autodesk Fusion 360/UE
Figure 21. Example of part which splits on import (in UE, from Siemens NX)
Figure 22. Copies of the same asset with different rotations
Figure 23. Parts with different origin offset
Figure 24. Model adjustment menu
Figure 25. Example of part menu
Figure 26. Example of a part menu slot
Figure 27. Bound volume on static mesh asset (2D)
Figure 28. Scanner tool
Figure 29. Measuring tool
Figure 30. A fully visible ghost assembly of an imported model
Figure 31. A 2D example of the rotational symmetry problem


Table of Tables

Table 1. Interview types
Table 2. The project summary from the usability context analysis
Table 3. The stakeholder information from the usability context analysis
Table 4. The user context description from the usability context analysis
Table 5. The technical environment description from the usability context analysis

1 Introduction

In a time of increasing demands on industry for shorter and less expensive design-to-manufacturing processes, the importance of concept evaluation tools that can help detect faulty design decisions becomes apparent. When the time-consuming and costly iterative process between CAD software and physical prototypes fails to meet these requirements, more industries turn to virtual reality to review CAD models and experience the full-scale product in an interactive way. VR can also be used to train operators in assembly operations before the product is production ready, thus reducing the start-up time of a new production line. But even though VR has proven to be a viable shortcut in the design-to-manufacturing process, bringing CAD data into the VR world still requires a lot of manual labour, especially if more complex interactivity, such as testing assembly operations, is wanted. This thesis proposes an approach to automating this process, by creating an application that streamlines the CAD to VR pipeline and provides scripted functionality for simulating assembly operations on a generic CAD assembly within an interactive VR environment.

1.1 Context

This master's thesis project is carried out at Linköping University at the Department of Management and Engineering, in cooperation with Siemens Industrial Turbomachinery AB, located in Finspång, Sweden. Siemens is an international company that manufactures and sells gas and steam turbines, mainly used for electricity generation.

The application developed in this thesis is created with Unreal Engine 4, and many insights are closely linked to this software. The hardware used for the application is the HTC Vive, which consists of a head-mounted display (HMD) and two motion controllers.

1.2 Background

To weed out design flaws early in the process that affect the possibility of assembling a product, Siemens uses physical prototypes to evaluate the geometrical properties of an assembly. Siemens believes that there are several benefits to be had if a product in development can be evaluated in VR in addition to using physical prototypes. Physical prototypes are often heavily downscaled, and therefore they do not always give the engineers a good perception of the size of the product and its parts. The prototypes are also usually made to show only part of a product, for example one half or one quarter. This is partly due to the cost of having the prototypes made, but also so that the engineers can view the internals of the product. By evaluating products in VR, the limitations of using only physical prototypes are removed, and Siemens estimates that both lead times and costs for evaluating products during their development phase can be reduced.

Siemens' IT department has previously developed VR tools used for evaluating concepts in other departments of the company. Their use is mainly for static design reviews to get a sense of size and an overall impression of a product, and thus little to no interactive functionality has been implemented in these VR environments. More recently, a need for more interactive VR has emerged at Siemens, as a way of evaluating assembly operations on a product.

1.3 Purpose

The purpose of the thesis is to explore solutions for automating the import and preparation pipeline of CAD files for Unreal Engine, with the aim of simulating assembly of CAD models in a VR environment for design evaluation and training purposes. The goal is to create an application that can import an arbitrary CAD model (meaning that the size and number of parts cannot be predicted) into Unreal Engine and prepare it for usage (adapting the CAD data for Unreal Engine and bringing it into the VR environment), with tools for evaluating the feasibility of assembly sequences in terms of geometrical limitations and reachability, and additionally saving sequences for operator training.

By working with the case provided by Siemens, problem areas can be identified along with suggestions for how these problems can be solved. These insights can then be used in similar future projects in the subject area of VR and CAD to game engine pipeline automation.

1.4 Research questions

By developing and testing the application, the following research questions are answered:

1. Which problems can occur in automated import and preparation of arbitrary CAD models using Unreal Engine, and how can they be solved?

2. How can arbitrary CAD models be presented in a VR environment?

3. How can assembly operations of an arbitrary CAD model be simulated in VR for evaluation and training purposes?

1.5 Delimitations

The thesis focuses on exploring technical challenges with CAD import and VR, and how these can be solved. Some tools and UI elements are created to enable interaction with the technical systems, but they are only developed to the point where they can be used to test the system. The purpose is not to draw any conclusions about user interface design, and therefore this thesis should not be viewed as a guideline for how to create good user interactions in VR.

The level of detail of the assembly simulation in this thesis is limited to placing parts in the correct position, as well as defining subassemblies. It does not involve more complex operations such as tightening bolts, adjusting distances and applying oil to parts. All work is also done with bare hands; more complex tools such as screwdrivers and overhead cranes are not implemented in the application.

2 Theoretical background

This chapter aims to give the reader insight into topics relevant to the thesis, as well as add theoretical substance that will be referenced in later chapters.

2.1 Unreal Engine concepts

Unreal Engine (UE) is a game engine developed by Epic Games. Aside from some commercial limitations, where royalties must be paid for successful products, it is free to use. UE was first created for the first-person shooter Unreal, but it has since been used successfully in several genres. In recent years it has also seen more frequent use in industry, due to UE's real-time rendering capabilities and tools for importing and managing a wide range of CAD data formats. UE is free to use in industry as well, if the use is kept to internal projects and not sold externally.

This chapter aims to provide the reader with some understanding of the more common concepts encountered when using UE. This is deemed necessary as the use of UE is central to this master's thesis, even though the end goal is to provide insights on a more general level which can be applied to other development tools as well.

Blueprints Visual Scripting

Blueprints Visual Scripting, more commonly referred to as just Blueprints, is a complete gameplay scripting system within UE. The system uses a node-based interface where blocks are connected to create functionality. For example, the blueprint script seen in Figure 1 is run on each frame update (tick) and increments the integer variable Number of Items. It then checks if the variable has passed a certain threshold, and if so, prints a message to the screen. The blocks are visual representations of C++ concepts such as variables, functions and objects. Blueprints gives designers the ability to use a wide range of concepts and tools otherwise only available to programmers. In addition, C++ code can be made into custom Blueprints, allowing programmers to create new tools for the designers. (Epic Games, 2014c)

Figure 1. Example of a blueprint script
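Because Blueprint nodes are visual representations of C++ concepts, the script in Figure 1 has a direct C++ counterpart. Below is a minimal sketch of that tick logic, assuming a standard UE4 C++ project; the class name AItemCounter and the threshold value are illustrative, not taken from the thesis.

```cpp
// ItemCounter.h - hypothetical C++ counterpart to the Blueprint in Figure 1
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Engine/Engine.h"
#include "ItemCounter.generated.h"

UCLASS()
class AItemCounter : public AActor
{
    GENERATED_BODY()

public:
    AItemCounter() { PrimaryActorTick.bCanEverTick = true; }

    // Runs on every frame update (tick), like the Event Tick node
    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);
        ++NumberOfItems;  // increment the integer variable Number of Items

        // Check the threshold and print a message to the screen,
        // mirroring the Branch and Print String nodes
        if (NumberOfItems > Threshold && GEngine)
        {
            GEngine->AddOnScreenDebugMessage(-1, 2.0f, FColor::Green,
                TEXT("Number of Items passed the threshold"));
        }
    }

private:
    int32 NumberOfItems = 0;
    int32 Threshold = 100;  // illustrative value
};
```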

Levels and Sublevels

Everything that can be seen and/or interacted with in a videogame resides in a Level. In UE, a level is made up of Static Meshes, Actors, Lights etc. working together to bring the desired experience to the player (see Figure 2 for an example). The base is always a Persistent Level, and Sublevels can be used to add additional level content at runtime by streaming the Sublevel into memory. This can be used to enhance performance: for example, a room could be streamed into memory when a door is opened, and similarly be streamed out of memory when the door is closed. (Epic Games, 2014f)

Figure 2. Level created in UE (Epic Games, 2014f)
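The door example above maps onto the Load Stream Level / Unload Stream Level Blueprint nodes, or their UGameplayStatics equivalents in C++. A minimal sketch, called from inside an actor; the sublevel name RoomSublevel is hypothetical, and a fuller implementation would populate the FLatentActionInfo to receive a completion callback.

```cpp
#include "Kismet/GameplayStatics.h"

// Door opened: stream the room's sublevel into memory and make it visible
FLatentActionInfo LatentInfo;
UGameplayStatics::LoadStreamLevel(this, TEXT("RoomSublevel"),
    /*bMakeVisibleAfterLoad=*/true, /*bShouldBlockOnLoad=*/false, LatentInfo);

// Door closed: stream the sublevel out of memory again
UGameplayStatics::UnloadStreamLevel(this, TEXT("RoomSublevel"),
    LatentInfo, /*bShouldBlockOnUnload=*/false);
```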

Static Mesh

Static Meshes are the basic unit for creating geometry in UE. They are 3D models created in 3D modelling software such as 3ds Max or Maya, which are imported into UE and saved as packages, called Static Mesh Assets, that can be used in various ways to create renderable elements (see Figure 3 for an example). Placing a Static Mesh Asset directly into a level makes it a Static Mesh Actor, but it can also be used as a component in Actors etc. Static Meshes can be translated, rotated and scaled, but their individual vertices cannot be changed, due to them being cached in video memory for efficiency. A Static Mesh can use its own geometry for collision detection, or use simplified collision geometry, which allows for physics simulations. (Epic Games, 2014g)


Figure 3. Static Mesh in wireframe/rendered mode

Actor

An Actor is any object that can be placed into a Level. It is a generic class that supports transformation in 3D space. Note that the transformation is not saved in the Actor itself but rather in its Root Component. An Actor can be added to a Level manually, or spawned and destroyed through scripted functionality at runtime. Actors can hold components such as Static Meshes and perform actions through scripted functionality activated by Events such as Tick or Begin Overlap. Actors are core to adding playable content in UE. Custom created actors, which is how most unique content is added, become child classes of the Actor class. Examples of actors in a level can be seen in Figure 4. (Epic Games, 2014a)

Figure 4. Custom actors in a level
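Spawning and destroying actors through scripted functionality at runtime, as mentioned above, looks roughly like this in C++ (a sketch from inside an actor or other world-aware object; the class AAssemblyPartActor is a hypothetical custom actor):

```cpp
// Spawn a custom actor at a given location and rotation at runtime
FActorSpawnParameters SpawnParams;
AAssemblyPartActor* Part = GetWorld()->SpawnActor<AAssemblyPartActor>(
    FVector(0.f, 0.f, 100.f), FRotator::ZeroRotator, SpawnParams);

if (Part)
{
    // The transform lives in the Root Component; SetActorLocation moves it
    Part->SetActorLocation(FVector(50.f, 0.f, 100.f));

    // Remove the actor from the level again
    Part->Destroy();
}
```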

Datasmith

Datasmith is a collection of tools and plugins which can be used to bring CAD assets into UE. It is designed primarily for disciplines that want to utilize the real-time rendering and visualization capabilities of UE as a design tool, such as architecture, engineering and manufacturing. Datasmith allows the user to bring entire preconstructed scenes and assemblies into UE, keeping the relative layout of the assets on import. It brings in Static Meshes, Materials, Textures, Animations etc. and creates a single Datasmith Scene Asset which stores references to the imported data and the assembly hierarchy from the source application. Datasmith supports a wide variety of 3D design applications and native file formats, such as SolidWorks, CATIA and Siemens NX. Datasmith also provides a reimport workflow, allowing the user to reimport CAD models which have been changed in the source program. (Epic Games, 2018)

Visual Dataprep

Visual Dataprep is a tool that adds additional functionality to the Datasmith import workflow. It allows the user to create a reusable import pipeline that changes the imported data by cleaning, merging, replacing it etc. For example, all imported meshes whose names start with Placeholder_ can be found and removed. (Epic Games, 2019b)

Events

Events are a type of node used in Blueprints to start execution of other nodes. They allow a Blueprint to respond to actions that take place in the game, such as the game starting or objects overlapping. Common Events are Tick, On Component Begin Overlap (see Figure 5) and Begin Play, and custom execution nodes, called Custom Events, can also be created within a specific Blueprint. Events always have an execution pin, but can also output variables, such as references to overlapped actors. (Epic Games, 2014e)

Figure 5. Event node On Component Begin Overlap
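In C++, the same event is exposed as a dynamic multicast delegate on the component. A minimal sketch, assuming an actor class AMyActor with a UStaticMeshComponent named Mesh (both names illustrative); note how the bound handler receives the variables the event node outputs, such as a reference to the other actor.

```cpp
// In e.g. BeginPlay: subscribe to the component's overlap event
Mesh->OnComponentBeginOverlap.AddDynamic(this, &AMyActor::HandleBeginOverlap);

// Handler with the signature required by On Component Begin Overlap
void AMyActor::HandleBeginOverlap(UPrimitiveComponent* OverlappedComponent,
    AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
    bool bFromSweep, const FHitResult& SweepResult)
{
    UE_LOG(LogTemp, Log, TEXT("%s began overlapping %s"),
        *GetName(), *OtherActor->GetName());
}
```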

Blueprint Interface

A Blueprint Interface is a way to communicate between blueprints. An interface consists of functions without implementation, which can be called on any blueprint that implements the interface. New functions can be added to the interface and given variable inputs and outputs, allowing any kind of data to be sent between blueprints. Data is sent by using the message node in the sending blueprint, which takes a target blueprint reference as input and activates a corresponding event in the target blueprint (see Figure 6). (Epic Games, 2014b)

Figure 6. Interface message node and corresponding event node
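The same mechanism is available from C++ through a UINTERFACE, where the generated Execute_ wrapper plays the role of the message node: it takes a target object and triggers the corresponding implementation if the target implements the interface. A sketch with illustrative names (UAssemblyPart/IAssemblyPart, Highlight), not taken from the thesis:

```cpp
// AssemblyPart.h - hypothetical interface, callable from Blueprints and C++
#include "UObject/Interface.h"
#include "AssemblyPart.generated.h"

UINTERFACE(Blueprintable)
class UAssemblyPart : public UInterface
{
    GENERATED_BODY()
};

class IAssemblyPart
{
    GENERATED_BODY()

public:
    // Implemented as a Blueprint event or overridden in a C++ subclass
    UFUNCTION(BlueprintNativeEvent, BlueprintCallable, Category = "Assembly")
    void Highlight(bool bEnable);
};

// Sending side - the C++ equivalent of the message node in Figure 6;
// it silently does nothing if the target lacks the interface.
if (TargetActor->Implements<UAssemblyPart>())
{
    IAssemblyPart::Execute_Highlight(TargetActor, true);
}
```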

Collision

For an object to be able to collide in UE, it must have a simplified collision geometry or use the object's own geometry for collision calculations (see Figure 7; the green lines are the simplified geometry and the blue lines are the object's own geometry). It should be clarified that the latter can only be used for static objects, which is a limitation in UE.

Figure 7. Static mesh with collision geometry

An object's collision behaviour is determined by its own object type and a series of responses defining how it will interact with other object types. Based on this, there are a few different interaction cases (Epic Games, 2014d), listed below; a code sketch of how these settings are configured follows Figure 9:

• Blocking means that objects cannot move through each other, essentially the behaviour of real-world objects. This happens when both objects are set to block each other's type. Objects colliding in this fashion can produce a Hit Event to be used in blueprints. See Figure 8 for a visualisation.

• Ignore means that an object will pass through other objects, and vice versa.

• Overlap works similarly to Ignore but can create Overlap Events. See Figure 9 for a visualisation.

Figure 8. Visualisation of Hit Event (Epic Games, 2014d)

Figure 9. Visualisation of Overlap Event (Epic Games, 2014d)
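A minimal sketch of how these responses are configured and consumed in C++, assuming a physics-enabled UStaticMeshComponent named Mesh inside an actor class AMyActor (both names illustrative):

```cpp
// Configure the object type and collision responses described above
Mesh->SetCollisionEnabled(ECollisionEnabled::QueryAndPhysics);
Mesh->SetCollisionObjectType(ECC_PhysicsBody);
Mesh->SetCollisionResponseToAllChannels(ECR_Block);  // blocking behaviour
Mesh->SetGenerateOverlapEvents(true);                // required for Overlap Events
Mesh->SetNotifyRigidBodyCollision(true);             // required for Hit Events

// Hit Events fire when two objects block each other, as in Figure 8
Mesh->OnComponentHit.AddDynamic(this, &AMyActor::HandleHit);

// Handler with the signature required by the component's Hit Event
void AMyActor::HandleHit(UPrimitiveComponent* HitComponent, AActor* OtherActor,
    UPrimitiveComponent* OtherComp, FVector NormalImpulse, const FHitResult& Hit)
{
    UE_LOG(LogTemp, Log, TEXT("Blocked by %s"), *OtherActor->GetName());
}
```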

Editor Scripting

Editor scripting can be utilized to programmatically use tools and commands in the UE editor, thus automating work that would otherwise need to be done manually. The tools and commands in the editor are used outside of gameplay to change things in a level, manipulate assets, create gameplay interactions etc. The automation can be done either with a reusable script (an Editor Utility Blueprint), or by constructing a complete interface to control the automation with (an Editor Utility Widget). Editor scripting can be useful for minimizing repetitive tasks, creating custom asset import pipelines, communicating with third-party applications etc. Broadly speaking, it can help customize the editor for specific purposes where the standard editor falls short. (Epic Games, 2019a)

2.2 VR as an engineering tool

There are many examples of VR being used in industry by engineers as a tool for gathering insights and making educated design decisions. In a survey by Berg and Vance (2017), real-life cases were found where VR is used as an integrated part of the design process, while in other cases it is used to address issues as they come up. The most common use case is centred around the visibility of a human in a specific environment or posture, testing what the user can see and what obstructs the view. A good example of this being useful is the automotive industry, where the driver's visibility should be optimized without compromising the integrity of the car body. VR has proven to be a well-suited tool for this task as it integrates movement and interaction in the simulation. There are also examples of VR being used by ergonomic engineers to evaluate the size of components to ensure that they remain visible for operators during assembly.

The impact of physical tasks on human operators can be analysed by reviewing how they posture themselves when doing tasks in VR. By combining a head-mounted display with physical props and force sensors, ergonomic engineers can estimate the force required by operators to perform assembly operations in each posture. The insights from such simulations ensure that people of many heights and strengths can perform assembly tasks in a safe way. (Berg and Vance, 2017)

VR has proven to be a useful tool for communicating a sense of space within a virtual environment. Testimonies from engineers who took the survey (Berg and Vance, 2017) describe how design choices would feel strange when seen on a screen, but become obvious when reviewed in VR. They also describe the sensation of familiarity one gets from being in a real environment after having experienced it in VR.

Advancements in rendering quality have come to a point where objects can be aesthetically evaluated in VR. Improved lighting and material properties allow for a near realistic product to be rendered in full scale, and customization of materials and other properties can be done in real time to test out different looks and feels. This allows the engineers to fix visual aspects that otherwise might not have been noticed until production. (Berg and Vance, 2017)

The paper Workplace analysis and design using virtual reality techniques (Michalos et al., 2018) presents a method to analyse and enhance industrial workplaces by using VR. The authors state that layout planning of industrial settings is often oversimplified, neglecting constraints and actual human motion while being time-consuming to simulate, and suggest VR as a supportive tool for production engineers to tackle these problems. A case presented in the paper involving two human operators shows the value of VR as a layout planning tool, as evaluation with VR helped reduce both walking distance and task execution time.

2.3 VR as an educational tool

A study conducted by Murcia-López and Steed (2018) examines the value of manual assembly task training done in VR compared to more traditional real-life training. They argue for the value of VR training as a more cost-effective and safer alternative, and potentially even a more effective training method. In the study, 60 participants were trained to solve 3D burr puzzles in one of six conditions comprised of VR and real-life training elements. Afterwards the test subjects were tested on what they had learned, to evaluate the effectiveness of VR as a training method compared to real-life training. The results of the VR-trained participants showed great promise, as there was no statistically significant difference from the real-life training.

A paper by Dodoo et al. (2018) presents design suggestions for a VR assembly training application. It covers several different topics, such as interaction functionality, presenting assembly instructions, and limitations of bringing CAD models into a VR environment and simulating the assembly in a logical way. The application presented uses the controller's trigger button for all interaction, be it grabbing objects or interacting with interfaces using a ray pointer. A combination of colours and animations indicates to the user in what order the assembly tasks should be performed and where the parts should be placed. The application utilizes a snapping functionality, where parts snap in place once close enough to their correct position. This serves to help the user correctly place the parts. The authors discuss the problems that occur with this functionality, as testing is required to find good snapping tolerances for each unique part, which differ mainly due to differences in size. Another time-consuming task was to create custom guiding animations, as each unique part required its own animation. Problems with performance are also discussed, as the CAD model used initially contained many small fastening parts with a high vertex count that caused the frame rate to drop. As a VR application should run at a minimum of 30 frames per second, this had to be adjusted by decimating the smaller parts to more reasonable vertex counts. The authors argue that this does not lessen the user experience, as parts such as screws do not have to be highly detailed.
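The snapping functionality described by Dodoo et al. (2018) can be reduced to a distance check against a part-specific tolerance each time a part is released. A minimal sketch of that idea (not the authors' actual code); the tolerance is the per-part parameter they report having to tune by testing:

```cpp
// Called when the user releases a grabbed part: snap it into its
// target transform if it is within the part-specific tolerance.
void SnapIfClose(AActor* Part, const FTransform& TargetTransform, float ToleranceCm)
{
    const float Distance = FVector::Dist(
        Part->GetActorLocation(), TargetTransform.GetLocation());

    if (Distance <= ToleranceCm)
    {
        Part->SetActorTransform(TargetTransform);  // snap position and rotation
    }
}
```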

In the article Virtual reality training for assembly of hybrid medical devices (Ho et al., 2018) the authors argue for VR as a skill training tool in the medical device manufacturing industry that can optimize and expedite the efficiency of new workers. They suggest that real-life training causes issues with safety, long training periods, greater cost etc. and propose a VR training system that would solve many of these problems. The system is designed to be pedagogic for the user, eliminating the need for human assistance. The system consists of three steps, which are as follows:

1. First, the trainee can browse the work cell freely and, when stopping and focusing on a virtual part, is presented with additional information about that part, such as its purpose and real image(s).

2. Secondly, the trainee is no longer allowed to browse freely, and is instead guided through the assembly step sequence. Details about the assembly steps are presented.

3. Lastly, the trainee is supposed to perform the assembly sequence, with the program giving hints only on request from the trainee or when errors are made, allowing the user to explore and think critically about the assembly actions.

Through evaluation, Ho et al. (2018) demonstrate that their training method has significant advantages over both other VR training and conventional training methods, as users reported more effective skill/knowledge transfer as well as higher levels of confidence, enjoyment and comfort during the training.

2.4 CAD to VR/Game Engine

The article Two-Way Cooperation of Architectural 3D CAD and Game Engine (Kado and Hirasawa, 2018) explores the concept of bidirectionally updating 3D models between CAD programs and a VR application. The authors describe a system which allows the height of a window to be edited in real time within the VR application, storing changes in a relay database which serves as the connection between the VR application and the CAD program. Testing of the application gave good results according to the authors, but the complexity of updating meshes within the VR application in real time proved very computationally expensive.

PiXYZ (Metaverse Technologies France, 2020) is a software product that provides a plugin for game engines, allowing direct import of CAD models from a wide variety of file formats. It shares many similarities with UE's Datasmith importer (see 2.1.5), while additionally allowing import of CAD models at runtime and supporting import of metadata from CAD software. The downside of PiXYZ compared to Datasmith is the cost, as PiXYZ comes with a licensing fee for each computer that uses the software.

3 Method

In the following chapter, the overall methodology as well as the specific methods used are explained. The overall process is divided into three phases, which can be seen in Figure 10 together with when each method is used.

Figure 10. Process structure

3.1 Pre-study

This chapter presents the methods used for mapping out the user needs of Siemens, from which the application is later developed.

Interview

Abrahamsson et al. (2015) describe interviews as one of the most fundamental methods of collecting information about a person's experiences, opinions, thoughts and values. The result of an interview is always subjective; however, the resulting data can be both qualitative and quantitative depending on the type of interview. There are three main types of interviews: unstructured, semi-structured and structured. Table 1 shows the characteristics of each type of interview, based on how Abrahamsson et al. (2015) describe them.

Table 1. Interview types

Type: Unstructured
Implementation characteristics:
• Only open-ended questions
• Does not require the interviewer to have good knowledge of the subject
• Allows for follow-up questions and deep dives into certain subjects
• Results can be hard to compile and compare
Resulting data type: Qualitative

Type: Structured
Implementation characteristics:
• Predetermined list of questions and the order in which they are asked
• Does not allow for follow-up questions
• Requires the interviewer to have a good understanding of the subject and purpose of the interview
• Results are generally easy to analyse
Resulting data type: Quantitative

Type: Semi-structured
Implementation characteristics:
• Predetermined subjects or questions, but allows for follow-up questions
• Results are often easier to analyse than results from an unstructured interview
• In between structured and unstructured interviews in terms of formality
Resulting data type: Qualitative and quantitative


User stories

User stories are the most common tool for capturing user needs and requirements in a clear-cut and uniform manner in agile development projects (Lucassen et al., 2016). A user story is supposed to present the following information about a user need or requirement: who owns the need, what the need is and why the need exists (Lucassen et al., 2016; Agile Alliance, 2020d). The common format for user stories to express this information is "As a... I want to... so that..." (Lucassen et al., 2016; Agile Alliance, 2020d). An example of this would be: "As a Design Engineer, I want to import my CAD assembly into UE so that I can evaluate my design in VR".

Working with user stories can be broken down into the 3 C’s (What is User Story?, 2017), which are as follows:

• Card. The user’s need in the format who, what and why as described above. A summary of intent for which details remain to be defined.

• Conversation. A discussion between the team, users, product owner etc. to determine the more detailed behaviour needed to fulfil the user story. This stage helps to create a shared understanding of a user story.

• Confirmation. This represents the acceptance test, the confirmation from the customer or product owner that the story fulfils the intent and more detailed requirements.

The criteria of a good user story can be summarized by the acronym INVEST (What is User Story?, 2017):

• Independent. A user story should be independent from other user stories, meaning that it can be released and useful without the need of other features.

• Negotiable. Should not be over-defined, leaving room for discussion. It is a tool for development, not a strict contract.

• Valuable. It should have a clear value to the end user.

• Estimable. It should be possible to estimate the time and work needed to implement the user story.

• Small. A user story should be relatively small, being possible to finish in a few days.

• Testable. A user story is confirmed via pre-written acceptance criteria.

3.2 Application development

The procedure used for planning and carrying out the work during the development phase is an agile process, with some concepts, principles and practices taken from established agile frameworks and methods such as Scrum and Extreme Programming. The work is done in so-called sprints, each a time period of one week in which the aim is to produce a potentially shippable product increment (Agile Alliance, 2020b).

This chapter presents an iterative process for developing features for the application. An overview of this process can be seen in Figure 11, and the stages of the process are explained in detail in subsequent chapters.

Figure 11. General application development process

The idea of the process is to conceptualize solutions for the requirements and break them up into tasks that are implemented in the application. If a solution or implementation attempt fails, the faulty parts are iterated on until a satisfying result is achieved.

Brainstorming

Brainstorming is a method for using existing knowledge to generate solution concepts, and can be carried out by individuals or in groups of people working together. During brainstorming, evaluation of concepts should be avoided so as not to limit the number of generated ideas. By generating many ideas, it is more likely that the entire solution space will be explored, and each idea can additionally act as a stimulus for further idea generation. Ideas that may seem infeasible to one person might be improved by others, and it is therefore important not to throw away any ideas initially. Using graphical and physical media to express an idea can be helpful, as words can be insufficient for describing physical entities. (Ulrich and Eppinger, 2012)

Tasks

Tasks are used to decompose user stories to a finer level of detail, breaking down what needs to be done for a user story to be completed. This helps the team concretize and clear up any assumptions that have been made about the user story, giving a clear definition of when all criteria for a user story are fulfilled. Typically, a user story has multiple associated tasks, which in agile planning can grow in number as a project evolves. Tasks are kept small and should not take more than a day to finish. (Francino, 2015)

A user story is a functionality that will be visible to the end user, whilst tasks instruct the development team on what work needs to be done, e.g. code this, design that, create test data for this, automate that. As one understands from the task examples, tasks can and will most likely involve developers from multiple disciplines, or at least require work within different disciplines. A task is usually done by one person, but there is some ambiguity to that statement, as an evaluation meeting or other group activity could also be defined as a task. (Cohn, 2015)

Testing

When developing software, an important step of the process is testing the code for defects and checking that it works as expected. Testing can be done by the software developers or by independent testers, and can either be done manually by providing the software with inputs, or by automating tests with a computer, which allows for a larger quantity of tests.

Black Box Testing, also called behavioural testing, is a method where the tester is unaware of the internal structure, design and implementation of the tested item. This method can help to find incorrect or missing functions, interface errors, errors in data structures or external database access, behaviour or performance errors, and initialization and termination errors. A test is performed by a tester providing inputs to the software and comparing the output to the expected outcome. An advantage of black box testing is that the tester does not need knowledge of the programming language or of how the software has been implemented. It is also done from a user's perspective, which can help with avoiding developer bias. A disadvantage is that since tests are done manually, only a few tests can be run, which often leaves a lot of cases untested. (Software Testing Fundamentals, 2009a)

White Box Testing, as opposed to black box testing, is a method in which the internal structure, design and implementation of the software are known to the tester. This means that the tester can run a specific path through the code and predict the outcome, knowing exactly what should happen at each step. The tester's extensive knowledge of the software also allows for more exhaustive testing, by determining every valid and invalid input. An advantage of the white box method is that testing can be done at an early stage, as the tester does not need a graphical user interface to run the software. The very essence of the method is also a disadvantage though, as the tests can become very complex and therefore require highly skilled testers. (Software Testing Fundamentals, 2009d)

Grey Box Testing is a mix of black and white box testing, where the tester has partial knowledge of the internal structure of the software, such as its data structures and algorithms, allowing test cases to be designed with that knowledge, while the tests themselves are run on a black box level. (Software Testing Fundamentals, 2010a)

Figure 12. The four levels of software testing

Testing can be performed at four levels (see Figure 12) of the software development cycle:

• Unit Testing: Individual software components are tested to validate that they perform as designed. These tests are usually performed by the software developers themselves or their peers. Unit testing is done with the white box testing method. (Software Testing Fundamentals, 2009c)

• Integration Testing: Individual units are combined and tested together to identify faults in the interaction between the units. Integration testing is performed by the developers themselves or by independent testers. White, black or grey box testing can be used at this stage. (Software Testing Fundamentals, 2009b)

• System Testing: At this level, the complete and integrated software is tested, with the purpose of evaluating the system against the specified requirements. Black box testing is usually used, and tests are usually performed by independent testers. (Software Testing Fundamentals, 2010b)

• Acceptance Testing: The software is tested for acceptability, meaning that it is evaluated in terms of how well it fulfils the requirements set by the customer at the beginning of development. This is usually done with black box testing, but with a less strict, ad-hoc procedure, meaning that the system is tested at random, which can uncover interesting defects. This level of testing is performed internally but by other actors than the developers, or by the customer/end users.

Task board

A task board is a platform used for planning the team's daily activities. Task boards are commonly used in agile frameworks such as Scrum and Extreme Programming (Agile Alliance, 2020c), where they are used to track what needs to be done, what is being done and what has been finished.

The foundation of the task board consists of five buckets: a Suggestions bucket, a Product backlog, a Sprint backlog, an Evaluation bucket, and a Done bucket. Similar to a Kanban board, each bucket represents a state that the contained cards are in (Agile Alliance, 2020a). The different buckets and states are the following:

• The Suggestions bucket is where ideas and suggestions for features from both the project team and the project owner are placed

• The Product backlog contains all cards that should be processed before the end of the project and that are not currently being worked on

• The Sprint backlog contains all cards that are currently being worked on

• The Evaluation bucket contains all cards that need to be evaluated and discussed with the project owner before they can be considered done

• The Done bucket is where all cards that are done are placed

A schematic example of how a card can be handled using these buckets is shown in Figure 13.

Figure 13. Schematic illustration of how cards on the task board are handled

3.3 Evaluation

In this chapter, the methods used to evaluate the created application from a user perspective are presented.

Evaluation of computer systems through usability testing is common practice in software development projects. Through usability testing, software can be assessed and tested to determine if it behaves as expected and if it meets the user requirements (Dix et al., 2004). The purpose of usability testing can vary though, as Greenberg and Buxton (2008) describe: usability testing can be used during the development phase to find usability bugs, it can be used as an acceptance test where quantitative performance-based data is collected (e.g. task success rate, task success time, number of errors), or it can be used to prove that novel designs and ideas work as theorised.

The way of performing and documenting usability testing also varies, but it is usually based around measuring or predicting efficiency, effectiveness or satisfaction of users who use the system to complete one or several tasks (Greenberg and Buxton, 2008).

Usability context

Maguire (2001) describes tasks as activities performed to reach a certain goal and presents a procedure for a usability context analysis, which is used to prepare for usability testing sessions.

Step 1: Describe the product or system in terms of goal, reason, target marketplace and scope of system functions.

Step 2: Identify primary, secondary, and other users and stakeholders of the product or system. For every group of users and stakeholders, their main roles and task goals in relation to the product are identified.

Step 3: Describe the context of use. In this step, the main aspects of the final product are identified. These aspects can differ from project to project, but some examples are users, usage scenarios, the technical environment (hardware, software, network etc.), the physical environment and the organizational environment. Within these aspects, different contextual factors are identified. For example, age, level of training and language skills can be factors in the user aspect of the context, while goal, step-by-step breakdown and duration are typical factors of the usage scenario aspect.

Step 4: Identify the important usability factors by evaluating each factor identified in Step 3 and answering the question "Does this affect the usability?" with yes, no or maybe. If the answer is yes or maybe for a certain data point, it is likely that this data point must be considered when developing the usability tests or when evaluating their results.

Step 5: Document potential requirements on the test conditions for each component marked with yes or maybe. This results in requirements and conditions that the usability testing context should meet in order to produce data relevant to the context into which the product or system is planned to be implemented.

The results of this 5-step usability context analysis are summarized in several tables: project summary, stakeholder context description, user context description, task scenarios, task characteristics description, technical environment description, physical environment description and organizational environment description.

User observations

Observing how a user interacts with a product can help in understanding the usability of the product and the general user experience. Observations can produce both qualitative and quantitative data. Quantitative data is generally easier to analyse and can be reproduced in other studies. The most common user observation methods are controlled observation and naturalistic observation.

Controlled observations are often carried out in a laboratory environment and focus on quantitative data, though they can also produce qualitative data. It is often decided beforehand which observations should be made, but additional observations can also be made ad hoc to produce additional qualitative data. The user is aware of being observed in a controlled observation and is informed of the purpose of the observation. This can cause what is called the Hawthorne effect, meaning that a user may change their approach to carrying out a task if they know they are being observed.

Naturalistic observations are less structured and focus on observing users' interaction with a product in day-to-day life. Observations are recorded as the observer sees fit and tend to produce qualitative results. Users tend to behave more realistically during these observations and encounter problems closer to real-life use. Such observations are therefore more reliable and can be more useful for ideation. It is harder to get large sample sizes for this type of observation due to its time consumption and cost, and it is therefore better suited for idea generation. The smaller sample sizes also make the results more difficult to replicate.

It is important not to analyse during observations, but to simply record data. Analysis should be done at a later stage, and therefore detailed data is more valuable. Quantitative data can be condensed into graphs and tables. Qualitative data should be treated as indicators and not as absolute truths. (Interaction Design Foundation, 2020)

System Usability Scale

The System Usability Scale (SUS) is a standardized questionnaire template for evaluating a system's usability. The questionnaire was developed in 1986 by John Brooke at Digital Equipment Co. Ltd. to evaluate the perceived usability of computer programs, and was later made public and free for everyone to use (Brooke, 2006). Although it was originally developed to evaluate computer software, Brooke (2006) mentions that SUS has been shown to be independent of the technology evaluated, and that it can be used to evaluate not only software, but also hardware, websites, mobile phones and operating systems. The questionnaire consists of ten statements with which a user rates their level of agreement or disagreement on a 5- or 7-point scale (Brooke, 1996). The collected data can then be transformed into an overall system usability score, ranging from 0 to 100 (Brooke, 1996). Regarding the score of a SUS questionnaire, databases of SUS data (in total 11,855 completed questionnaires) show that the mean score is around 70 (Lewis, 2018). To create a better understanding of what level of usability the SUS score for an individual system implies, Bangor, Kortum and Miller (2009) translate the SUS scoring scale to an adjective scale, as seen in Figure 14.

Figure 14. SUS adjective scale (Bangor, Kortum and Miller, 2009)
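For the common 5-point variant, Brooke's scoring works as follows: odd-numbered (positively worded) statements contribute their rating minus 1, even-numbered statements contribute 5 minus their rating, and the sum is multiplied by 2.5 to land on the 0-100 scale. A small sketch of that calculation:

```cpp
#include <array>

// Computes a SUS score (0-100) from the ten ratings (1-5) of one
// completed questionnaire, following Brooke's standard scoring.
float SusScore(const std::array<int, 10>& Ratings)
{
    int Sum = 0;
    for (int i = 0; i < 10; ++i)
    {
        // Even indices are the odd-numbered, positively worded statements
        Sum += (i % 2 == 0) ? (Ratings[i] - 1) : (5 - Ratings[i]);
    }
    return Sum * 2.5f;
}
```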

Expectation measures

Expectation measures is a method for measuring satisfaction during usability testing by comparing a user's expectations with their experience of the system being tested. The tasks are presented to the participants before they try to solve them, and the participants rate how hard they think each task will be to perform (expectation) on a 5- or 7-point scale. After the participants have performed the tasks, they rate how difficult they thought each task was to complete (experience). By visualizing the comparison of the expectations and the experiences in a graph, where the two measures are plotted on the x- and y-axis respectively, these measures can support decisions regarding design choices and strategy. Four different design strategies can be found in the resulting graph, shown in Figure 15. (Albert and Dixon, 2013)

Figure 15. Plotting Evaluation measures result (Albert and Dixon, 2013)
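Plotting the per-task averages this way assigns each task to one of four strategy quadrants. A sketch of that classification, assuming the 5-point scale used later in this thesis (1 = very easy, 5 = very hard) with 3 as the midpoint; the quadrant labels are those commonly attributed to Albert and Dixon, not quoted from the thesis:

```cpp
#include <string>

// Classifies a task from its average expectation and experience ratings
// (1 = very easy, 5 = very hard); 3 is assumed as the scale midpoint.
std::string DesignStrategy(float AvgExpectation, float AvgExperience)
{
    const bool ExpectedHard    = AvgExpectation > 3.0f;
    const bool ExperiencedHard = AvgExperience  > 3.0f;

    if (!ExpectedHard &&  ExperiencedHard) return "Fix it fast";      // seemed easy, was hard
    if ( ExpectedHard &&  ExperiencedHard) return "Big opportunity";  // known pain point
    if ( ExpectedHard && !ExperiencedHard) return "Promote it";       // better than feared
    return "Don't touch it";                                          // easy as expected
}
```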

4 Implementation

In the following chapter, the implementation of the methods used in the thesis is described, explaining how they were practically applied during the different phases and what changes were made to the methods.

4.1 Pre-study

This chapter describes how the methods used for the pre-study phase were practically implemented.

Interview

The interview conducted during the pre-study was unstructured. The goal was to get a better understanding of Siemens' requirements for the application, and it resulted in a requirement specification where the requirements were prioritized by importance. The interviewees were the project owner, who was group manager of the division Whole Engine, Manufacturing & Assembly Support, and a software developer at Siemens IT who had previously worked with developing VR applications at Siemens.

User stories

Of the three C's, most of the effort was directed towards the first two, Card and Conversation. Since the project aims to explore possible concepts, the confirmation step was replaced with a series of evaluation methods, which can confirm or deny the validity of suggested solutions. The cards, i.e. the user stories, were used in this project to refine the functionalities established in the interview with the project owner, presented in chapter 5.1.1. Each functionality from the interview was rephrased as a user story, to emphasise the user perspective of the software. For example, the functionality "Perform simple measurements within the application" was rephrased as the user story "As a user, I want to take measurements within the application to understand the distance between parts". The conversation served to break down the user stories and create deeper understanding, and consisted of brainstorming sessions (described in chapter 4.2.1) and task definition (described in chapter 4.2.2). As the desired functionalities had already been discussed and established with the project owner during the initial interview, the conversation part of working with user stories was primarily done internally by the authors. This was done to understand the practical goals of the user stories and begin ideation on how to solve them.

4.2 Application development

This chapter describes how the methods used for the application development phase were practically implemented.

Brainstorming

Brainstorming was used in this project to conceptualize possible solutions for user stories. The idea was to create a wide variety of ideas and pick the solution that most effectively fulfils the user need. Due to the agile nature of the project and the multitude of user stories, there were several smaller brainstorming sessions throughout the project. Brainstorming sessions were performed both individually and together, and were mainly carried out on a whiteboard (see the example in Figure 16). Text was used in many cases to describe ideas, due to code often being functional without a physical form, but drawing was still useful in some cases, as VR and CAD functionalities often revolve around 3D objects. For brainstorming sessions done together, ideas were first generated individually and then presented to the other person, which often spawned additional ideas. Once a selection of ideas had been created, each idea was discussed and evaluated based on possible implementation time, difficulty of implementation and possible secondary problems that might occur. Given these parameters, the best idea was selected for further development.

Figure 16. Example from brainstorming on whiteboard

Tasks

Tasks are usually defined directly from a user story, but since brainstorming was used to produce solution concepts for the user stories, tasks were instead derived from the selected solution concept. The solution concept gave a general idea of the direction of the work, but tasks still defined what concrete work needed to be done. Examples of common tasks in the development process are:

• Initial research about UE concepts to determine subsequent tasks
• Adding features (code, user interface, 3D assets)
• Solving a specific problem/bug that had occurred
• Integrating new features into the system

More complex tasks were often broken up into smaller tasks as knowledge of the problem was gained, or spawned completely new tasks that needed to be resolved before addressing the original task. This also meant that tasks were added to user stories that were thought to be completed, as limitations in finished features caused problems with new features.


Testing

As all software testing was done internally, it should be considered white box testing, since the internal structure of the code was known when testing. The application was often tested using inputs available to the end user, and there was little point in scripting tests, as most interactive elements only have one possible input (a button being pressed etc.), so manual testing sufficed.

Testing was done on a unit level for functions, and in some cases entire blueprints. To omit the need for interactive interface elements at an early testing stage, the Begin Play event was used to execute the code as soon as the application was run, or as soon as the blueprint was spawned into the level. As some blueprints and functions used variable inputs produced by other blueprints/functions, temporary variables were created as placeholder inputs. Outputs were often checked with print string, a function that prints text to the screen/log for debugging purposes. Most datatypes could be converted directly to a string for printing, and for array-based data the for each loop function was used, looping through the array and printing data from each item.
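A sketch of this unit-testing pattern expressed in C++: executing the function under test from Begin Play, feeding it a placeholder input, and dumping the output with print string inside a for each loop. All names except the engine calls are illustrative, and SortPartNames stands in for whatever function is under test.

```cpp
#include "Kismet/KismetSystemLibrary.h"

void ATestHarnessActor::BeginPlay()
{
    Super::BeginPlay();

    // Placeholder input, standing in for data normally produced elsewhere
    TArray<FString> PartNames = { TEXT("Casing"), TEXT("Rotor"), TEXT("Bolt_M8") };

    // Run the function under test as soon as the application starts
    TArray<FString> Sorted = SortPartNames(PartNames);

    // "For each loop" + "print string": write every item to the screen/log
    for (const FString& Name : Sorted)
    {
        UKismetSystemLibrary::PrintString(this, Name,
            /*bPrintToScreen=*/true, /*bPrintToLog=*/true,
            FLinearColor::Green, /*Duration=*/5.0f);
    }
}
```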

Testing was done on an integration level by combining functions in larger blueprints and testing the communication between the functions as well as the communication between blueprints. As testing became more complex, it became harder to identify where errors occurred, as many different tracks of the code were often executed instantly. To tackle this, breakpoints (a typical debugging tool in software development) were used to pause the execution, allowing outputs, variables etc. to be analysed thoroughly at any point during runtime instead of relying only on print string.

On the system testing level, the software was evaluated from an end user's perspective, using the user interface elements provided in the application to test different scenarios, mostly within the VR environment. Usability was evaluated in terms of reachability, size of interface components etc., and whether the previously stated user stories had been fulfilled. As it was more tedious to perform tests with VR equipment, 2D interface items were sometimes added directly to the computer screen for testing in the desktop environment, where interaction inputs could be provided. A typical test played out common scenarios which the end user would encounter, but effort was also put into finding and testing possible fringe behaviour.

Acceptance testing was replaced by a series of usability evaluation methods, which are covered in chapter 3.3.

Task board

The task board used in the project was a digital task board made using Microsoft Planner, in which so-called buckets (groups) containing cards were created. These cards can contain text, images and checklists, and can be moved freely between buckets. The cards can also have different labels and be assigned to a person. Each card represented a user story and contained a checklist with the tasks that needed to be implemented for the user story to be fulfilled. An example of a card can be seen in Figure 17.


Figure 17. Example of user story

The task board was updated on a weekly basis through sprint meetings. Cards in the sprint backlog were moved to the Evaluate/Done bucket if they had been finished, or moved back to the product backlog if additional work was postponed. Cards could also remain in the sprint backlog if they needed additional work at the end of a sprint. If there was available space, new cards were added to the sprint backlog from the product backlog. New cards were also assigned to one of the authors, to clarify responsibilities and ensure a fair workload. Sometimes the sprint backlog was emptied before the end of a sprint, and then a new card was brought in from the product backlog. A suggestions bucket was used to manage new ideas and requests from the project owner, which were added to the product backlog if they were deemed feasible for the scope of the project.

4.3 Evaluation

This chapter describes how the methods used for the evaluation phase were practically implemented.

The application was evaluated during a usability testing session to identify issues and problems with it. The evaluation was done with three participants (Participant A, B and C) on separate occasions. Participants A and B were mechanical engineers at Siemens and had very little prior experience with VR. Participant C was a software developer at Siemens and had prior experience with both using and developing VR applications. Participants A and B were given a task list (see Appendix B) containing five tasks to solve, and data was collected before, during and after the participants carried out the tasks. The tasks were designed to include usage of all features and solutions where feedback was wanted. Both qualitative and quantitative data were collected with different methods, further explained in the following chapters. The evaluation with Participant C focused on qualitative data, where Participant C could freely use the application and give feedback. Since Participant C had prior knowledge of the application, there was no point in doing the structured analysis with tasks and questionnaires, as the data would not be comparable to that of Participants A and B. The overall evaluation process is illustrated in Figure 18.


Figure 18. The usability testing session process used

Usability context

The tables that together constitute the usability context were created prior to the usability testing session. Of the eight examples of tables originally mentioned in the method (chapter 3.3.1), four were created, while the rest were left out as they were deemed irrelevant or outside the scope of the project. The tables created were Project summary, Stakeholders, User context description and Technical environment description. The information contained in the tables was used to establish requirements on the conditions of the usability testing session. Two examples of such conditions are what kind of equipment was needed and what tasks the participants should perform during the usability testing session.

User observations

The user observation can be described as a controlled observation, as it was conducted in a specific VR room which was prepared for the tests. The users were briefed on the purpose of the observation and received instructions on what they should do in the form of a task list. As they performed the tests, qualitative data was collected by taking notes on events such as the users’ behaviour, what problems they encountered, how severe the problems were and if any system bugs were encountered.

System Usability Scale

The SUS questionnaire (Appendix E) was filled out by the participants after they had completed all tasks on the task list. The participants filled out the questionnaire before any general discussion about the testing session was carried out, and were encouraged to rate each statement without thinking too much about their answer.
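After collection, each participant's ten responses can be converted to a single 0-100 score using the standard SUS scoring scheme: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5. The C++ sketch below only illustrates this calculation; it is not part of the application.

```cpp
#include <array>
#include <stdexcept>

// Computes a System Usability Scale score (0-100) from one participant's
// ten responses (each 1-5), using the standard SUS scoring scheme.
double SusScore(const std::array<int, 10>& Responses)
{
    int Sum = 0;
    for (int i = 0; i < 10; ++i)
    {
        if (Responses[i] < 1 || Responses[i] > 5)
        {
            throw std::invalid_argument("SUS responses must be in the range 1-5");
        }
        // Index 0 corresponds to item 1, so even indices are odd-numbered items.
        Sum += (i % 2 == 0) ? (Responses[i] - 1) : (5 - Responses[i]);
    }
    return Sum * 2.5;
}
```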


Expectation measures

Before the participants started solving the tasks, they got a brief verbal description of each task as well as a written step-by-step description to read, and then rated how difficult they expected each task to be, purely based on how the task was described. After all tasks were completed, the participants rated how difficult each task had actually been to perform. A 5-point scale was used for both the expectation measure and the experience measure, where 1 was very easy and 5 was very hard. After the data had been collected, the average expectation and experience rating for each task was calculated and plotted in a graph. The expectation and experience forms can be found in Appendix B and Appendix D.
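The averaging itself is straightforward; the small sketch below illustrates it for one task's ratings (the thesis does not specify what tool was used for this calculation):

```cpp
#include <numeric>
#include <vector>

// Average rating (1 = very easy, 5 = very hard) for one task across all
// participants; applied once to the expectation ratings and once to the
// experience ratings before plotting the two curves.
double AverageRating(const std::vector<int>& Ratings)
{
    if (Ratings.empty()) { return 0.0; }
    return std::accumulate(Ratings.begin(), Ratings.end(), 0.0) / Ratings.size();
}
```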

Interview

For participants A and B, an unstructured interview was conducted while they were using the application to perform the tasks. They were encouraged to think out loud and asked to elaborate on problems they ran into, and how they would prefer that the system worked. The dialogue was documented by taking notes during the test session, but audio was also recorded for later transcription. Notes were taken both as the participants were solving tasks and after all tasks were finished.

For participant C, who did not follow the task list but rather used the application more freely, the dialogue was documented only by taking notes while the participant was trying out the different features of the application.


5 Result

In this chapter, the results of the thesis are presented. The results are divided into the different phases of the project, with pre-study and evaluation being divided by method, whilst application development is divided into the general areas relating to the research questions.

5.1 Pre-study

In this chapter, the results of the pre-study are presented, which includes a list of functionalities for the application as well as a user story conversion of said functionalities.

Interview

The interview with the project owner and a software developer at Siemens IT resulted in a list of functionalities that should exist in the application, divided into three categories. Core functionalities have the highest priority for implementation and are deemed essential to the application. Secondary functionalities are less important and have lower priority than the core functionalities. Some tasks are deemed Problematic to implement and are placed in this category based on the authors' experience and through discussion with the project owner.

Core functionalities

1. Possibility to test assembly of an imported model
2. Gravity simulation
3. Possibility to import any CAD model into UE
4. Possibility to define subassemblies and part-part bonds in VR
5. Training functionality with assembly instructions
6. Possibility to set up and save a custom assembly sequence

Secondary functionalities

7. Play back animations of assembly sequence
8. Adding tools such as fixtures or lifting tools (functional)
9. Perform simple measurements within the application
10. Compass function with vertical and lateral movement for precision
11. Instructional visual elements presenting key points before each subtask
12. Materials, graphical design, and other aesthetical aspects

Problematic to implement

13. Testing accessibility (collision for hands and tools) for operators. No need for interaction, only visual collision feedback
14. Possibility to import meta data from NX into UE
15. Possibility to import/export assembly sequence to/from NX Sequence
16. Possibility to import a new assembly during runtime

Besides the list of functionalities, the software developer also suggested some ideas for how assembly simulation could be done practically. This was done in an unstructured fashion, using a whiteboard to quickly sketch rough solution concepts. Most prominently, an idea was presented for using copies of parts as references to indicate where they should be assembled. By gathering the relative positions of parts in the assembly, correct assembly could be determined by comparing the transform of the held part to its reference part.
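A minimal Unreal Engine C++ sketch of this comparison is shown below. The tolerance values and function name are illustrative assumptions; the application itself implemented the corresponding logic with Blueprints.

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"

// Sketch of the reference-part idea: a held part counts as correctly placed
// when its transform lies within given tolerances of the (hidden) reference
// copy that marks its goal pose in the assembly.
bool IsWithinAssemblyTolerance(const AActor* HeldPart, const AActor* ReferencePart,
                               float MaxDistance = 5.f /*cm, UE's default unit*/,
                               float MaxAngleDeg = 10.f)
{
    const FTransform Held = HeldPart->GetActorTransform();
    const FTransform Ref  = ReferencePart->GetActorTransform();

    // Positional check: distance between the two part origins.
    const float Distance = FVector::Dist(Held.GetLocation(), Ref.GetLocation());

    // Rotational check: angular distance between the two orientations.
    const float AngleDeg = FMath::RadiansToDegrees(
        Held.GetRotation().AngularDistance(Ref.GetRotation()));

    return Distance <= MaxDistance && AngleDeg <= MaxAngleDeg;
}
```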

User stories

The result from the user story generation can be found in Appendix A, showing the link between each requirement and its user story counterpart.

5.2 Application development

This chapter presents an overview of the application as well as insights and problems identified during the development of the application. The result in this chapter is not divided by the methods used during the application development, but rather the holistic result from combining the methods. The insights and problems are divided into three categories, each corresponding to one of the research questions presented in chapter 1.4.

Application overview

This chapter briefly describes the application to give an overview of its capabilities. Results relevant to the research questions are explained further in subsequent chapters.

The application lets the user import any CAD model, using the built-in import features in the UE editor. In VR, the user can pick the model they want to work on, as well as a mode depending on what they want to accomplish:

• In engineering mode, the user can try out any assembly sequence and save one specific sequence for training purposes.

• In training mode, the user gets assembly instructions, based on the saved sequence from the engineering mode.

• In preview mode, the model is spawned in pre-assembled, and can be picked apart by the user for analysis.

Interactions are done with the VR motion controllers, simulating hands that can grab parts and various tools and menus. Parts can be assembled using proximity checks, where parts have a predetermined goal position to which the part snaps if it is dropped close enough to said position.

The parts of the imported model are presented in a dynamically created part menu, with a slot for each unique part. The user can spawn parts into the world by grabbing the slots, which can then be moved around the world with simulated physics. Picking up a part also shows a reference version of that part in an assembly preview, showing the user where the part should be assembled. If the part is then held close enough to its reference and released, it is assembled in place. It is also possible to create subassemblies by attaching parts to a base part which can then be assembled to the main assembly.
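A sketch of the snap step, again as illustrative Unreal Engine C++ (the application implements it in Blueprints): when a part is released within tolerance of its reference, physics simulation is turned off and the part is moved onto the goal pose.

```cpp
#include "CoreMinimal.h"
#include "Components/PrimitiveComponent.h"
#include "GameFramework/Actor.h"

// Snaps a released part onto its reference copy's pose. Actor/component
// names are illustrative; error handling is omitted for brevity.
void SnapPartToReference(AActor* HeldPart, const AActor* ReferencePart)
{
    // Stop simulating physics so gravity no longer acts on the assembled part.
    if (UPrimitiveComponent* Root = Cast<UPrimitiveComponent>(HeldPart->GetRootComponent()))
    {
        Root->SetSimulatePhysics(false);
    }

    // Teleport the part exactly onto the reference pose.
    HeldPart->SetActorTransform(ReferencePart->GetActorTransform(),
                                /*bSweep=*/false, /*OutSweepHitResult=*/nullptr,
                                ETeleportType::TeleportPhysics);
}
```

Attaching parts to a base part to form subassemblies can be handled in a similar way, for example with AActor::AttachToActor or the corresponding Blueprint node.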

2D menus are used for picking model and mode, and are interacted with by pointing with a hand. The user can spawn an adjustment menu with their left hand, which can be used to move and rotate the assembly as needed in the world. A tool menu (see Figure 19) can be spawned with the right hand, which provides the user with the following tools:

• Toggle gravity. Allows the user to turn off gravity for all the parts in the world.
• Toggle sub-assembly mode. Turns on sub-assembly mode, which allows the user to attach parts to a base part, forming a subassembly that can then be assembled onto the main assembly.
