
Department of Science and Technology

Institutionen för teknik och naturvetenskap

LiU-ITN-TEK-A--18/046--SE

Navigation and tools in a virtual crime scene

Oscar Komulainen

Måns Lögdlund


Master's thesis in Media Technology, carried out at the Institute of Technology at Linköping University


Supervisor: Karljohan Lundin Palmerius

Examiner: Camilla Forsell


Copyright

The publishers will keep this document online on the Internet – or its possible replacement – for a period of 25 years starting from the date of publication, barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/.


Abstract

Revisiting a crime scene is a vital part of investigating a crime. When physically visiting a crime scene there is, however, always a risk of contaminating the scene, and when working on a cold case, chances are that the physical crime scene has been altered. This thesis aims to explore what tools a criminal investigator would need to investigate a crime in a virtual environment, and whether a virtual reconstruction of a crime scene can be used to aid investigators when solving crimes. To explore these questions, an application has been developed in Unreal Engine that uses virtual reality (VR) to investigate a scene reconstructed from data obtained through laser scanning. The result is an application where the user is located in the court of Stockholm city, which was scanned with a laser scanner by NFC in connection with the terror attack on Drottninggatan in April 2017. The user can choose from a set of tools, e.g. a measuring tool and the ability to place certain objects in the scene, in order to draw conclusions about what has happened. User tests with criminal investigators show that this type of application might be of use to the Swedish police. It is, however, not clear how or when this would be possible, which is to be expected since this is a new type of application that has not been used by the police before.


Acknowledgments

We would like to thank our supervisor, Philip Engström, at NFC for making this project possible. We would also like to thank Camilla Forsell and Karljohan Lundin Palmerius at Linköping University.


Contents

Abstract
Acknowledgments
Contents
List of Figures

1 Introduction
1.1 Motivation
1.2 Aim
1.3 Research questions
1.4 Delimitations

2 Theory
2.1 Obtaining physical data
2.2 Development environment
2.3 Hardware for virtual visualization
2.4 Navigation and interaction in VR
2.5 Selection techniques for VR applications

3 Related work

4 Method
4.1 Creating a mesh from a given point cloud
4.2 Implementing VR navigation and interaction in Unreal Engine 4
4.3 Creating an interactive interface in the virtual world
4.4 Implementing a tool for measuring distances in the scene
4.5 Loading and instantiating objects into the scene
4.6 Highlighting selected objects
4.7 Creating a visualization of a mannequin's field of view
4.8 Creating and instantiating annotation widgets
4.9 Customizing poses for skeletal meshes
4.10 User tests

5 Results
5.1 Creating a mesh from a given point cloud
5.2 Implementing basic VR navigation in Unreal Engine 4
5.3 Implementing a tool for measuring distances in the scene
5.4 Creating an interactable interface in the virtual world
5.5 Loading and instantiating objects into the scene
5.6 Highlighting selected objects
5.7 Creating a visualization of a mannequin's field of view
5.8 Creating and instantiating annotation widgets
5.9 Customizing poses for skeletal meshes

6 Discussion
6.1 Results
6.2 Method
6.3 The work in a wider context

7 Conclusion

List of Figures

2.1 Simple blueprint used to capture HMD position and rotation.

4.1 Example of the UMG widget editor

4.2 Example of an On-Clicked event in the widget Blueprint

4.3 A selected object with the highlighting material applied to it.

4.4 Vertex 0, 1 and 2 form a triangle

4.5 An example of how the CSV file may look.

5.1 Example point cloud obtained through laser scanning

5.2 Mesh that was generated from a point cloud

5.3 Textured mesh

5.4 The teleportation arc and the circular teleport location indicator

5.5 An example measurement

5.6 The main menu that is attached to the left motion controller

5.7 The menu that holds the instantiable objects.

5.8 Highlighting an object that the user has selected

5.9 The menu that is attached to the mannequin

5.10 A visualization of the mannequin's field of view

5.11 The annotation menu and an instantiated annotation

5.12 The pose menu for the mannequin

6.1 A broken mesh due to the lack of points in the point cloud

1 Introduction

1.1 Motivation

The methods for solving and preventing crimes have been greatly improved during the last century as a result of scientific research and technological development. One example is the ability to analyze human DNA through a method called DNA profiling [14], which was developed by Alec J. Jeffreys in 1984. As much as 99.9% of human DNA is identical between individuals, but DNA profiling makes it possible to analyze small fragments of the DNA that are unique to each individual (except for monozygotic twins).

The development of technical equipment has made it easier to document crime scenes. The resolution and color representation of digital cameras are constantly improving, which makes it easier to find details in pictures from crime scenes. Remotely controlled drones have also made it possible to fly above outdoor crime scenes in order to get a better overview of the scene, and subsurface vehicles are used to obtain images of the ocean floor. This is a vast improvement compared to the time when documentation consisted of text and hand-made drawings of the crime scene.

The documentation methods currently in use are, however, sometimes not enough to make accurate assumptions about events at the scene, and criminal investigators often have to visit the crime scene several times during an investigation. This is not always practical, since the crime might have taken place at a location that is hard to revisit for some reason. In addition, the crime scene might change over time, for example if the weather conditions have changed or if an area has been rebuilt or otherwise altered.

The project discussed in this thesis presents an alternative crime scene visualization and documentation method where criminal investigators can virtually revisit the crime scene an unlimited number of times, without having to be physically present at the scene.

1.2 Aim

This project is about reconstructing a crime scene in a virtual environment to see if this form of immersive media can aid criminal investigators during investigations. It is not meant to replace current investigation methods, but to complement them. Furthermore, the idea is that the application can be used for reconstructing a crime based on, for example, a witness testimony.

1.3 Research questions

• What are the most important tools for investigators in 3D virtual reality navigation and exploration of scanned static data?
  – What are the most important investigation tools?
  – Are modelling tools also necessary and, if so, which modelling tools?
• How can the application be catered towards novice users?
  – Intuitive and immersive navigation.
  – Non-invasive but rich menus.
  – Modifying the scene.
• Can this type of application be used to make accurate assumptions about a crime?

1.4 Delimitations

During this project we have focused on implementing tools that can be useful for criminal investigators, even though this type of application could be used for other purposes as well. For example, it could be used to reconstruct crime scenes based on witness testimonies, and if that had been the focus, the functionality would probably have differed.

The application only supports one user at a time, but it could be a useful feature to implement support for multiple users, so that several investigators could collaborate with each other. This might be added in the future.

2 Theory

To reach the goal of an interactive environment for exploring a crime scene, we need a model of the crime scene and a model of interaction. In this chapter we describe the most important aspects of building these models and the challenges that come with building them.

2.1 Obtaining physical data

This section will describe the theoretical background for obtaining physical data from a crime scene and how to process the data in order to visualize it in a virtual environment.

2.1.1 Point clouds

A point cloud is a set of points, with each point containing spatial information, usually x, y, z values, representing a position in 3D space. Levoy and Whitted [16] define a point as a 7-tuple:

(x, y, z, r, g, b, α)

Each of the values is defined as an attribute. The x, y, z values are called the spatial attributes and the rest of the attributes are simply called the non-spatial attributes. Points can also contain normal vector data apart from spatial and colour information. If the points do not contain any data about their normals, there are algorithms that can be used to estimate the surface normals. Point clouds can be generated from laser scans, and laser-scanned data usually include the surface normals.
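To make the attribute layout concrete, a point like this could be represented by a simple struct; the sketch below is only an illustration of the 7-tuple plus an optional normal, not code from any particular scanning library.

// Minimal sketch of a colored point with an optional normal,
// following the 7-tuple (x, y, z, r, g, b, alpha) plus normal data.
#include <vector>

struct CloudPoint {
    float x, y, z;          // spatial attributes (position in 3D space)
    float r, g, b, alpha;   // non-spatial attributes (color and opacity)
    float nx, ny, nz;       // surface normal, if provided by the scanner
    bool  hasNormal;        // false if the normal must be estimated later
};

using PointCloud = std::vector<CloudPoint>;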

2.1.2 3D laser scanning

Laser scanners capture the surface of geometric objects and save the data as point clouds. Usually a camera is combined with the laser scanner in order to obtain a colourized point cloud. This report will focus on 3D range scanning techniques. Range scanning techniques are not in contact with the object they are scanning; they work by capturing range images, which are arrays of depth values for points on the scanned object [1].

Range scanning techniques are divided into two groups: passive and active. Passive sensing works from a set of photographs by taking advantage of visual cues present in the human visual system, also known as photogrammetry. It is called passive since it uses the light already present in the scene. The result of passive sensing is heavily reliant on the illuminance in the scene [1].

Active, or structured light, sensing works by emitting light or radiation and detecting its reflection. There are different approaches for capturing the reflection, and these are described below.

Time-of-flight systems

Time-of-flight systems are the preferred choice when scanning large structures at long range. They yield a relatively constant accuracy across the whole measurement volume. The range is determined by measuring the time it takes for the emitted light to reflect back. Drift and jitter in the electronics have the biggest impact on the measurements [3]. The resolution of the measurements is in the order of 0.5 to 1 cm.
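For reference, the range follows from the measured round-trip time through the standard time-of-flight relation (general background, not a formula taken from [3]):

d = \frac{c \, \Delta t}{2}

where d is the distance to the scanned surface, c is the speed of light and \Delta t is the measured round-trip time of the pulse.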

Triangulation

Triangulation techniques are the preferred choice for more close-up scanning. In contrast to time-of-flight systems, the measurement accuracy is not constant across all ranges; the range accuracy diminishes with the square of the distance between the scanned object and the scanner. The accuracy of triangulation scanning is usually in the order of 2 to 50 µm, depending on the distance to the scanned object.

Registration

When scanning a scene or a large object, multiple scans are required in order to obtain an accurate point cloud. This is done by scanning the scene from several different positions and then merging the scans into a single point cloud. To merge the point clouds, they must be aligned, or registered, into a common coordinate system [1].

Registration may be performed by accurate tracking. Coordinate measurement machines can be attached to the scanner in order to track its orientation and position. Optical tracking can also be used to track features in the scene and fiducial markers that have been placed in the scanning area. These tracking methods are not as accurate as the range measurements, which means that the initial alignment needs to be refined. The most successful approach to refining the registration is the Iterative Closest Point (ICP) algorithm [1], as proposed by Besl and McKay [2].

2.2 Development environment

When the spatial data has been acquired and processed it can be visualized in some sort of 3D environment. The environment in which this project has been developed is a game engine. This section describes the engine that was used and compares it to another, similar game engine.

2.2.1 Game engines

Commercial game engines are often used when developing computer games and so called serious games, i.e. games developed for other purposes than entertainment. The reason why many people use game engines is that the engines already contain a foundation to build upon, which saves a lot of development time compared to implementing your own engine. Game engines can also be used to create visualizations; for example, Feng et al. [10] present a crime scene visualization tool. This visualization tool was built in Unreal Engine [23], which is one of the biggest commercial game engines on the market. At the beginning of the project it was clear that some sort of game engine was to be used, since the application would have many game-like attributes, but it was not clear which one. The main candidates were Unreal Engine and Unity 3D, two of the biggest commercial game engines in the world. These are described and compared in the following sections.

Unreal Engine

Unreal Engine [23] is a game engine developed by Epic Games [11]. One of the first games developed in the engine was the first-person shooter Unreal, released in 1998. Before that, the engine had been under development for over three years in a basement owned by Tim Sweeney, the founder of Epic Games. Newer versions of the engine have been released since, and the most recent version at the time of writing is Unreal Engine 4. Its source code is written in C++. When using the engine, developers can either extend it by writing C++ code or use Unreal Engine's visual scripting language called Blueprints. The Blueprint Visual Scripting [4] system is a node-based interface designed to create gameplay elements. It is used to design object-oriented classes or objects in the game engine. An example of the Blueprint system can be seen in Figure 2.1.

Figure 2.1: Simple blueprint used to capture HMD position and rotation.
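The same information can be queried from C++; the snippet below is our own minimal sketch using UHeadMountedDisplayFunctionLibrary, not a node-for-node translation of the Blueprint in Figure 2.1.

#include "CoreMinimal.h"
#include "HeadMountedDisplayFunctionLibrary.h"

// Reads the current head-mounted display pose; call this e.g. from an actor's Tick.
static void GetHmdPose(FRotator& OutRotation, FVector& OutPosition)
{
    UHeadMountedDisplayFunctionLibrary::GetOrientationAndPosition(OutRotation, OutPosition);
}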

Unreal Engine supports several platforms, including Microsoft Windows, Linux and macOS. It also has support for virtual reality development. The engine communicates with an SDK, for example SteamVR or the Oculus SDK, which means that the developer does not need to handle tracking of the VR equipment. Unreal Engine also provides a Forward Renderer that is intended for VR experiences. It is designed to achieve a higher frame rate and to support MSAA, which is better suited for VR than the post-processing anti-aliasing techniques used in the Deferred Renderer. Unreal also provides Instanced Stereo Rendering for VR, which is a CPU optimization: there is only one draw call for each object in the scene instead of two, which is the case for the default stereo renderer.

Unity 3D

Unity 3D is a game engine developed by Unity Technologies [22]. It was first released in June 2005 and has since grown to become one of the most popular game engines of today. Unity is written in C++ and uses C# as its scripting language. It is a cross-platform engine that supports building to 27 different platforms. It supports virtual reality development, including the HTC Vive, Oculus Rift and Microsoft mixed reality headsets.

Choice of game engine

Both Unreal and Unity have support for VR, which is critical to this implementation. However, Unreal Engine provides more tools to optimize a VR application with its instanced stereo rendering and forward renderer.

Serious games, as mentioned before, is a term used to describe games that are developed for other purposes than entertainment. The term is often used for scientific visualizations and educational games. A game catered towards criminal investigators could therefore be classified as a serious game. Petridis et al. [20] presented a comparison of four popular game engines with a focus on serious games. Their results show that Unreal Engine outperformed the other engines, among them Unity.

The pros of Unity are its ease of use, excellent documentation and active community. However, these benefits do not outweigh the performance benefits of Unreal Engine. Unreal Engine was chosen for this implementation since it outperforms Unity when it comes to realistic visualization.

2.3 Hardware for virtual visualization

This section describes the hardware that is required to run a virtual reality application and get an immersive experience. Unreal Engine has support for two of the most popular head-mounted display systems, the HTC Vive and the Oculus Rift.

2.3.1 Head-mounted display systems

There are several types of display systems that can be used when working with virtual reality, and one of the most common is the head-mounted display (HMD). These displays are worn on the head and have one small display for each eye. The two displays show images from slightly different perspectives, corresponding to the positions of our left and right eyes, allowing us to experience the same stereoscopic vision in the virtual world as we do in the real world. Several manufacturers have developed HMDs that differ from each other in some ways. During this master's thesis, however, we have mainly used two different HMDs: the HTC Vive and the Oculus Rift. These HMDs are described in the following sections.

HTC Vive

HTC Vive is an HMD that incorporates a room-scale tracking technology developed by Valve, called Lighthouse. This technology uses two base stations equipped with IR LEDs and lasers, a head-mounted display equipped with sensors that detect the light from the base stations, and two hand-held controllers that are also equipped with sensors.

The base stations flash IR light at regular intervals, and two small rotating motors (one vertical and one horizontal) sweep laser beams across the room in between the LED flashes. This is the reason the technology is called Lighthouse: the base stations constantly flood the whole room with light, although we cannot see it since IR light is outside the visible spectrum for humans. Each time a sensor receives a bright LED flash it resets itself and waits to be hit by a laser beam. The sensor counts the time between the LED flash and the moment it is hit by a laser beam, and this information from all the sensors can be used to calculate where the HMD and the controllers are positioned in the room. This technique helps to combat the drift (dead reckoning) of the inertial measurements.

This is a flexible technology that allows the user to move freely in the room and have both position and rotation of the head and motion controllers accurately tracked. The workspace is limited by the maximum distance the base stations can be placed from each other, which is five meters, so the user can move around freely in a workspace with a diagonal length of five meters. Since all movement within the workspace is tracked, implicit navigation works within this space: the user's movement in the real world is tracked and copied into the virtual world, which is a natural and intuitive form of navigation that enhances the sense of immersion. If the user wants to move further in the virtual world, explicit navigation has to be used, meaning for example that the user presses a button on a controller in order to move around. Some other HMDs have a very limited workspace for implicit navigation and therefore rely heavily on explicit navigation.

Oculus Rift

Oculus Rift is an HMD developed by Oculus VR, which is a division of Facebook Inc. The headset itself is quite similar to that of the HTC Vive since it has the same resolution (1080x1200 per eye), the same refresh rate (90 Hz) and a similar field of view (about 90 degrees horizontally and 110 degrees vertically). The tracking technique, however, differs from the HTC Vive's Lighthouse technique. Instead of having base stations emitting light and an HMD equipped with sensors to register the light, the Oculus Rift has light emitters on the HMD and sensors in the room that detect the light. The early version of the Oculus Rift (DK1, Development Kit 1) did not include any external sensors to track the position of the HMD, since it was meant to be used in a fixed position (for example sitting in a chair). This brought a high risk of motion sickness if the user performed a move that the tracking system did not support. The later version of the Oculus Rift (DK2, Development Kit 2), however, includes external infrared sensors for positional tracking. The DK2 with only the HMD can be used with a single sensor, but if the user wants to use the Oculus Touch controllers (one tracked controller for each hand), at least two sensors are required, since the controllers could easily occlude the HMD if only one sensor is used. With two sensors it is hard to cover the whole 360-degree space around the user, which means that if the user turns around, the tracking is lost. However, it is possible to use multiple sensors and place them around the workspace in order to have full tracking around the user.

2.4 Navigation and interaction in VR

Navigation and interaction in VR have to be implemented in a way that feels natural to the user, in order to keep the user immersed in the virtual environment. If the navigation and interaction do not feel natural, there is a risk that the user will experience nausea and cybersickness.

[19] defines interaction in VR as movement, selection, manipulation and scaling, and each of these categories can be implemented in several ways. Two important aspects to consider when implementing interaction in VR are Direct User Interaction (implicit navigation/interaction) and Physical Controls (explicit navigation/interaction). Direct user interaction is the interaction that is handled by the tracking system and therefore becomes natural and intuitive to the user. Physical controls, on the other hand, are operated by for example pushing a button on a hand-held controller. Implicit interaction is often preferred since it is the most intuitive form of interaction, but the VR technology of today is somewhat limited, so it is not always possible to use implicit interaction. For example, the user's implicit movement is constrained by the limitations of the VR workspace in the real world. When the VR workspace is not large enough, explicit navigation has to be used. Some of the most common methods for explicit navigation in VR are described in the following section, which is based on [19].

2.4.1 Explicit navigation in VR

Pointing mode is a form of explicit navigation where the user simply points in a direction and holds a button on the controller in order to fly in that direction. This is an intuitive form of navigation that is easy to learn. However, some people tend to feel nauseous when using navigation forms that incorporate continuous motion like flying, especially when flying in a different direction than the gaze direction. This is because the brain gets confused when our eyes tell us that we are flying around (in the virtual world) while our legs tell us that we are standing still (in the real world).

Crosshair mode is similar to pointing mode, but the navigation direction is instead determined by the direction of a vector from the user's head to the motion controller. It is also common to scale the flying speed with the distance between the motion controller and the head: if the user pushes the flying button on the controller and then moves the controller further away from the head, the flying speed increases.

Dynamic scaling is a different kind of method where the user scales the virtual world until the desired location is within reach. The center of scaling can be changed so that the user can reach different locations. For example, the user can downscale the world with the scale center set to the user's position; when the whole virtual world is visible, the user can choose another position as the scale center and then upscale the world again. With this method, the user can swiftly travel long distances in the virtual world.

Another VR navigation method is to use virtual controls to move within the scene, i.e. to implement a virtual interface the user can interact with in order to move. However, this form of navigation is mostly used for VR applications where the user input is strictly limited, for example when using a cardboard HMD (a cardboard viewer that can be used to view VR applications on a smartphone) which only has one button for input. Compared to the navigation methods available with motion controllers (for example pointing mode, crosshair mode and teleportation), virtual controls are less intuitive to use.

Using teleportation as an explicit navigation method is common in modern VR applications, and it is easy to use. The user points somewhere in the scene and presses a button on the motion controller to instantly teleport to the selected location. In contrast to methods that involve continuous motion, like pointing mode and crosshair mode, teleportation has proven to cause less nausea for many users.

Goal-driven navigation is a system where the user has a list or a map of predefined locations in the scene to which the user can be teleported. This method might not be as intuitive as other methods, since the user has to access some sort of interface or menu in order to teleport. However, it can serve as a complementary navigation method combined with some other explicit navigation method. It can be practical to be able to teleport to locations that cannot be seen in the nearby environment, and for this, goal-driven navigation is a common solution.

Another important part of VR interaction is the selection of objects, which is described in the following section. That section is also based on [19].

2.5 Selection techniques for VR applications

[19] defines two separate types of selection techniques: Local Selection and At-a-distance Selection. Local selection means that the user can select objects that are within reach; this is often used for applications that incorporate VR gloves with attached trackers, so that the user can grab virtual objects with his/her hands, similar to how we interact with objects in the real world.

At-a-distance selection, on the other hand, means that the user is able to select objects at a distance by, for example, pointing at the object and then pressing a button on the motion controller.

Another possibility is to use the user's gaze direction to select objects, i.e. to send out a ray from the user's head in the forward direction and use the ray for object interaction. This method could be used in applications similar to ours, but sending out a ray from the motion controller instead is more flexible. If the ray is sent from the motion controller, the user can look in one direction and select objects by pointing in another. This also makes it possible to select objects that are behind other objects, which gaze-direction selection does not support.

[19] also mentions object selection by using voice input, which incorporates speech recognition to make it possible to choose an object by simply saying its name. A downside of this method is that the user has to remember the names of all objects in the scene, and all objects would have to be labeled intuitively, which could become cumbersome for large scenes with many objects. This method would also require a high-quality voice recognition algorithm, or else it could cause irritation for the user when the program is not able to determine what has been said. If the application is supposed to be used internationally, the algorithm would also have to support different languages.

List selection is a method where a list of the selectable objects is given to the user. This could be useful since the user does not have to see the objects in order to select them, but at the same time it would also require the user to remember the names of all the objects.

3 Related work

The documentation of crime scenes today mostly consists of text, audio, images or, in rare cases, videos of the crime scene. This kind of documentation is limited and could be largely improved with modern technology. Visualizing a crime scene in VR could give the investigators an improved sense of being at the crime scene and a much better view of the scene compared to looking at a picture.

During the preliminary investigation we found that there have been several attempts to create crime scene reconstructions in a 3D world, but many of them do not include VR. We believe that VR can be a powerful tool in this kind of application, since it enhances the sense of being present at the crime scene and gives a better visual representation of the scene: the stereoscopic vision makes it easier to determine distances, depth and the position of objects.

Buck et al. [5] use a laser scanner to obtain point clouds of a crime scene, and they also scan the victims of the crime. They use the software Cyclone [15] to create 3D models from the point clouds, and finally they make a reconstruction of the scene that they can analyze. This process is closely related to the reconstruction process in our project, but [5] do not apply VR as a tool to visualize the reconstructed crime scene.

Howard et al. [13] create a reconstruction of the crime scene using drawings and images. They focus heavily on lighting and texturing to make the scene look as realistic as possible, but they do not apply VR. The fact that they do not reconstruct the scene through point clouds or photogrammetry could result in a more visually appealing scene, since it can be modelled without any input noise. For our application, however, this method would not give a reconstruction that is accurate enough, since the model would lose details that might be of importance to the investigators. The laser scanner makes it possible to create an accurate mesh that shows exactly where objects were placed in the scene at the time, given that the point cloud contains minimal noise and covers all important areas. This would be hard to model by hand with only images as reference. In our case we also want to implement a measurement tool that can be used to measure the distance between two points. It is therefore of great importance that the reconstruction has the correct proportions and scale, which is also hard to achieve when modelling by hand.

Poelman et al. [21] have created an AR (Augmented Reality) platform for collaborative work between investigators at the crime scene and remote experts. They do not make a reconstruction of the crime scene; instead they use optical see-through glasses, which make it possible to see the real environment and at the same time add virtual 3D objects to the scene. The collaboration aspect is an interesting part that could be implemented in a VR application as well. However, this implementation requires the investigator to be physically present at the crime scene, which is not always possible or practical. Our approach requires scanning the scene only once; the scene can then be preserved digitally for an indefinite period of time, which makes the crime scene more accessible. It also preserves details of the environment that could be altered in the physical crime scene.

Catrin Dath [7] worked on her master's thesis at NFC one year before this project (2017) and explored a similar idea. She made a VR design for crime scene visualization, but she used 3D photos instead of reconstructing the scene from point clouds. This meant that the user could only stand at specified nodes, where the 3D camera had been placed, in order to get the correct perspective. This approach gives a much more detailed view of the scene, since the photos from the camera can be viewed at full resolution, but it also means that the user has restricted navigation capabilities. Dath focused on designing a program for analyzing a digital crime scene reconstruction rather than on implementing tools. We gathered inspiration from these designs when we implemented the tools for our project. Dath also interviewed criminal investigators and a prosecutor about what would be important to include in this type of project, which was a valuable source of information.

4 Method

This chapter will describe the methods used to achieve the results presented in this paper. The creation of the crime scene model will be described first. Then the methods used to create the interaction model will be described.

4.1 Creating a mesh from a given point cloud

The point clouds used in this project were scanned by NFC, using a laser scanner made by the company FARO [8]. The software FARO SCENE [9] was used to process the point clouds. From the software it was also possible to export all the scans separately in ptx format, which is an ordered point cloud format. This format contains information about the scans' position and orientation, which is practical when using the point clouds in other software. The point clouds that were used were dense and contained somewhere between 100 and 400 million points each, and to use them in a VR application the clouds had to be meshed and simplified. The meshing, filtering and simplification of the point clouds were done in a software called Reality Capture [6]. The developers are secretive about the implementation and methods used in the software, but they provide instructions on how to use their tools. The standard workflow in Reality Capture consists of the following steps:

Import scans: The scans have to be in either ptx or e57 format, which are both ordered point cloud formats. As mentioned before, the software FARO SCENE provides the possibility to export the point clouds in ptx format, which we did.

Align scans: Since the ordered point cloud formats contain information about the position and orientation of the scans, all the separate scans can be aligned perfectly. In Reality Capture, this can be done automatically without manual labor.

Normal/High detail reconstruction: When creating a mesh from a point cloud in Reality Capture, it is possible to change a lot of settings depending on the desired quality of the mesh. There are, however, two standard reconstruction settings called "Normal" and "High", where the Normal reconstruction uses 2x image downsampling (lower image resolution) and the High reconstruction uses no downsampling (original image resolution). Creating a mesh at higher resolution takes considerably more time, but if high detail is of importance, which it is in our case, this should be done anyway.

Simplification: Simplification can be made directly in the point cloud by reducing the number of points. There are also tools that remove "noise points", where points are classified as noise depending on their distance to adjacent points: if a single point is separated from the other clusters of points, i.e. the distance to the neighboring points is large, that point is classified as noise. When the mesh has been created, it is also possible to reduce the number of vertices, which is necessary if the mesh is to be used in an interactive environment such as a game.

Smoothing: If the mesh has spikes or unnaturally sharp edges it is possible to smooth these out by using a smoothing tool.

Unwrapping and texturing: The unwrapping tool can be used to calculate a new UV map (texture map) for the mesh, and it is possible to decide the maximum resolution of the texture. This was useful for us since we used Unreal Engine to create the application, and it supports textures with a maximum resolution of 8K.

Exporting: Unreal Engine supports 3D objects in either obj or fbx format, where fbx is the preferred format. Reality Capture supports export to both of these formats.

4.2 Implementing VR navigation and interaction in Unreal Engine 4

As mentioned earlier, it is very common to use Blueprints when working in Unreal Engine, but it is also possible to write classes in C++. There are many Blueprint examples on the Unreal Engine forums and the same goes for VR functionality, but we wanted to have full control of what is happening in the code, so we decided to write all the VR navigation code in C++ instead, which is considerably faster than using Blueprints, especially when doing matrix and vector calculations.

4.2.1 The choice of explicit navigation method

As mentioned earlier, implicit navigation is limited by the real-life workspace that the VR equipment is set up in, and this forces us to use some sort of explicit navigation method. Navigation methods that incorporate continuous movement have been shown to induce cybersickness and nausea in many people, and therefore we chose not to use methods like the previously described pointing mode and crosshair mode.

Dynamic scaling is a method that can be practical when traveling large distances as described in section 2.4.1. However, in our application it is of great importance that the scene has the correct scale and therefore we decided that dynamic scaling is not suitable for our application.

Teleportation is the method we have chosen to use in our application. However, we have also discussed implementing some form of goal-driven navigation, which would allow the user to manually select locations of interest and later teleport to these locations by selecting them in a menu. This is, however, not a priority and is therefore left as future work.

4.2.2 Implementing teleportation

We wanted intuitive visual feedback for the user to indicate where the teleportation will occur. For this, we gathered inspiration from popular VR applications and implemented an arc mesh that behaves as if a projectile is being shot from the motion controller. The arc collides with the environment mesh, and a circle is displayed on the target location if it is valid. The user is then teleported to the valid target location when the button is released. A location is classified as valid if it is relatively flat, which is determined by comparing the angle between the normal of the target location and the up vector. The angle is calculated from the dot product of the normal and the up vector, as can be seen in equation (4.1):

\theta = \arccos\left( \frac{n \cdot u}{\|n\| \, \|u\|} \right)   (4.1)

where θ is the angle, n is the normal vector of the target location and u is the up vector.
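A minimal sketch of this validity test in C++ is shown below; the function name and the slope threshold are illustrative assumptions and not the application's exact code.

#include "CoreMinimal.h"

// Returns true if the surface under the teleport arc is flat enough to land on.
// SurfaceNormal is the hit normal of the target location.
static bool IsValidTeleportTarget(const FVector& SurfaceNormal, float MaxSlopeDegrees = 30.0f)
{
    const FVector Up = FVector::UpVector;
    // Equation (4.1): angle between the surface normal and the up vector.
    const float CosAngle = FVector::DotProduct(SurfaceNormal.GetSafeNormal(), Up);
    const float AngleDegrees = FMath::RadiansToDegrees(FMath::Acos(CosAngle));
    return AngleDegrees <= MaxSlopeDegrees;
}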

Limiting the user to flat surfaces was done in order to prevent the user from teleporting into walls and the ceiling. A more common approach to limiting the areas to which a user can teleport is to use a navigation mesh. However, in this implementation the environment geometry is represented as one big, static mesh. For a navigation mesh to work, the different geometries contained in the mesh would need to be divided into individual meshes.

4.2.3 The choice of VR interaction method

The local selection method described in section 2.5 was not an alternative for us since this requires special VR equipment with finger tracking, which we did not have access to.

Selecting objects by voice input would require considerably more time for research and implementation than the other, simpler methods described in section 2.5. Furthermore, it would require some effort from the user to memorize the names of all selectable objects, and since we want the application to be as easy to use as possible, this selection method was discarded.

The list selection method described in section 2.5 was also discarded, since we wanted to give the user the possibility to select objects that are visible in the nearby environment. However, list selection could be a useful addition; it was simply not prioritized within the given time frame.

In our application we have implemented a form of at-a-distance selection where a ray is sent from the motion controller and, when the user presses a specific button, the object hit by the ray is selected. This technique is easy to use at both long and short range.
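In Unreal Engine such a ray query is typically a line trace; the sketch below illustrates the idea with our own names and an assumed maximum range, rather than the application's actual implementation.

#include "Engine/World.h"
#include "Engine/EngineTypes.h"
#include "CollisionQueryParams.h"

// Traces a ray from the motion controller and returns the first actor it hits, or nullptr.
static AActor* TraceForSelection(UWorld* World, const FVector& ControllerLocation,
                                 const FVector& ControllerForward, float MaxRange = 10000.0f)
{
    FHitResult Hit;
    const FVector End = ControllerLocation + ControllerForward * MaxRange;
    FCollisionQueryParams Params(FName(TEXT("SelectionTrace")), /*bTraceComplex=*/true);

    if (World->LineTraceSingleByChannel(Hit, ControllerLocation, End, ECC_Visibility, Params))
    {
        return Hit.GetActor();
    }
    return nullptr;
}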

4.3 Creating an interactive interface in the virtual world

Unreal Engine has a visual UI authoring tool called Unreal Motion Graphics UI Designer (UMG), which can be used to create menu interfaces. We have created several interactive menus within our application, for example the VR menu that is connected to the left motion controller. These menu interfaces were made using so called Widget Blueprints, which are connected to a visual editor with predefined widgets like buttons, text boxes and many other components. An example of the visual creation of widgets is shown in Figure 4.1, where the red box shows the resulting interface, the green box shows the widget components that can be used and the blue box shows all widget components that are used in the current interface. When this is done, the functionality for the menu can be written in a Blueprint, for example creating On-Clicked events and changing the colors of buttons when they are pressed. An example of how an On-Clicked event can be created through a widget Blueprint is shown in Figure 4.2.

Figure 4.1: Example of the UMG widget editor

Figure 4.2: Example of an On-Clicked event in the widget Blueprint

4.4 Implementing a tool for measuring distances in the scene

We implemented a class that makes it possible to measure the distance between two arbitrary points in the scene. The first point is registered when the user pushes a specific button on the motion controller, and the second point is registered when the user releases that button. While the button is held down, a line is drawn from the first registered point to the position where the motion controller is pointing, in order to give visual feedback on the measurement. When the button is released, the second point is registered and the measurement is complete.

The final measurement is then a fixed line in the scene between the first and the second point. The line also has an attached UMG widget that displays the length of the measurement. When the measurement is finished, the two points and a reference to the widget are saved. The widget contains a remove button, which removes the selected measurement from the array and deletes all references. When the measurement has been removed, the line and widget are no longer rendered.
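At its core the tool only needs to store the two world-space points and the distance between them; the hedged sketch below uses our own struct and function names to illustrate that bookkeeping, not the application's exact classes.

#include "CoreMinimal.h"

// One stored measurement: two points in world space and the distance between them
// (in cm, Unreal's default unit).
struct FMeasurement
{
    FVector Start;
    FVector End;
    float   LengthCm;
};

// Called when the measure button is released; the start point was saved on button press.
static FMeasurement FinishMeasurement(const FVector& Start, const FVector& End)
{
    FMeasurement M;
    M.Start    = Start;
    M.End      = End;
    M.LengthCm = FVector::Dist(Start, End); // displayed in the attached UMG widget
    return M;
}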

4.5 Loading and instantiating objects into the scene

We wanted to make a menu with a list of all the objects that are instantiable and to do this we had to make the list dynamic. When this had been done we wanted to make it possible to select an object from the menu and then instantiate it into the scene by pressing a button on the motion controller. These two steps are described in the following sections.

4.5.1 Creating the object menu

The dynamic object menu was created through UMG as described in section 4.3. The menu contains a widget item called Scroll Box, which is a list that can hold multiple child widgets and store them vertically. If the list contains many elements it is possible to scroll through them using a scroll bar. The functionality for this list is partially written in C++ and partially in Blueprints. First a C++ function (FindAssetsInFolder) is called, which searches a specific folder, finds all the objects in it and stores their names in an array that is returned. The rest is handled in the Blueprint for the menu: the array of object names is looped through and, for every name, a button is created and added to the scroll box widget. Every button has a text that shows the object name. All this is done in the menu's Event Construct, which is called as soon as the menu is activated in the scene. This way, the list will always contain buttons that correspond to the objects located in the specified folder in the game editor.
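A function like FindAssetsInFolder can be built on top of Unreal Engine's asset registry; the following rough sketch shows that approach, with the folder path and other details being illustrative assumptions rather than the thesis code.

#include "AssetRegistryModule.h"
#include "Modules/ModuleManager.h"

// Returns the names of all assets found under the given content folder, e.g. "/Game/Props".
static TArray<FString> FindAssetsInFolder(const FName FolderPath)
{
    TArray<FString> Names;

    FAssetRegistryModule& AssetRegistry =
        FModuleManager::LoadModuleChecked<FAssetRegistryModule>("AssetRegistry");

    TArray<FAssetData> Assets;
    AssetRegistry.Get().GetAssetsByPath(FolderPath, Assets, /*bRecursive=*/true);

    for (const FAssetData& Asset : Assets)
    {
        Names.Add(Asset.AssetName.ToString()); // shown as button labels in the scroll box
    }
    return Names;
}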

4.5.2 Instantiating objects into the scene

Each button in the scroll box list has an On-Clicked event which is written in Blueprints. The On-Clicked event simply searches the folder where the instantiable objects are located and tries to find the object with the same name as the text of the button. When the user points at a location in the scene and presses a specific button, the selected object is loaded and then instantiated at the chosen location.

4.6 Highlighting selected objects

An object becomes selected when a user aims the right motion controller at it and presses the selection button. When an object becomes selected, a post-process material is applied to it that highlights the edges of the selected object. An example of how the material looks when applied to an object can be seen in Figure 4.3.

The material works by applying a 3x3 Sobel filter to detect the edges of the selected object. A custom depth map is used to apply the filter. The custom depth is a Z-buffer that only contains objects that have the custom depth render parameter set to true. Objects are not rendered to the custom depth map by default, but when they are selected the custom depth render parameter is set to true. Therefore, the material is only applied to selected objects.

Figure 4.3: A selected object with the highlighting material applied to it.
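Toggling an object in and out of the custom depth pass is a single call on its primitive components; the sketch below shows the idea (the wrapper function is our own).

#include "Components/PrimitiveComponent.h"
#include "GameFramework/Actor.h"

// Enables or disables the outline highlight by rendering the actor into the custom depth
// buffer, which the Sobel post-process material reads from.
static void SetHighlighted(AActor* Actor, bool bHighlighted)
{
    TArray<UPrimitiveComponent*> Components;
    Actor->GetComponents<UPrimitiveComponent>(Components);

    for (UPrimitiveComponent* Component : Components)
    {
        Component->SetRenderCustomDepth(bHighlighted);
    }
}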

4.7 Creating a visualization of a mannequin's field of view

This is a tool for visualizing the field of view of a character (a mannequin) placed in the scene. The reason we implemented this tool is that our supervisor at NFC told us that it can sometimes be helpful to have an overview of what the people in the scene could have seen from their position. A form of visual field visualization was also designed by [7], although that was done as a 2D map. The field of view is in our case displayed as a green, transparent frustum that has its origin in the mannequin's head.

The frustum is created in our C++ class FieldOfViewDisplay, which is created as a scene component (it can be attached as a child to an actor in the scene). This component uses the Unreal Engine class ProceduralMeshComponent, which can be used to create custom meshes by defining the vertices, triangles, vertex normals, UV coordinates (texture coordinates), tangents and vertex colors. The calculations of these parameters are described in the following section.

4.7.1 Mesh creation calculations

As mentioned above, the class ProceduralMeshComponent requires predefined vertices (a vector), triangles (an array defining vertex connections), vertex normals (an array of 3D vectors), UV coordinates (an array of 2D vectors), tangents (an array of 3D vectors) and vertex colors (an array of RGBA colors). Alternatively, a custom material can be loaded and applied to the mesh. The tangents and UV coordinates are optional parameters which, if left empty, are calculated automatically. The calculations of the remaining parameters are described below.

The frustum is made of four triangles that all share a common origin located at the head of the mannequin, which makes five vertices in total. The four corner points were found by first creating vectors in the forward direction of the mannequin (the local x-axis) and then rotating them locally using a horizontal and a vertical rotation. All the vertices were then stored in a vector, Vertices, which is one of the input parameters for the procedural mesh creation.

The values of the horizontal and vertical angles of our frustum are based on the human near peripheral vision. The reason we chose to restrict the field of view boundary to the near peripheral vision is that outside this zone, the visual acuity (the clarity of vision) declines steeply.

When all the vertices had been found, we had to define their connections, i.e. define the triangles of the mesh. This was done by creating an array, Triangles, of integers representing vertex indices. If we, for example, add 0, 1 and 2 to Triangles, it means that vertex 0, 1 and 2 should form a triangle, as shown in Figure 4.4. The order in which the vertices are defined is of great importance since it determines the normal direction. In this example, when the vertex order is 0, 1, 2 the normal points towards us (out of the paper), but if the vertices are defined in the opposite order, 0, 2, 1, the normal points away from us (into the paper).

Figure 4.4: Vertex 0, 1 and 2 form a triangle

The vertex normal is a directional vector associated with a vertex and should not be confused with face normals (the normal of a face). The vertex normals are computed as the normalized sum of the face normals of the faces connected to that vertex. This is shown in equation (4.2), where n_{v_i} is the vertex normal and n_{f_i} is the normal of an adjacent face (triangle):

n_{v_i} = \frac{\sum n_{f_i}}{\left\| \sum n_{f_i} \right\|}   (4.2)

where the sum runs over the faces adjacent to the vertex.

As mentioned before, the UV coordinates and the tangents are optional input parameters that are calculated automatically if they are not provided. The material of the mesh can be set either by creating an array of colors, (R, G, B, A), for each vertex, or by applying a material created in the Unreal Engine editor. We chose to create a material in the editor and lower the alpha value in order to make it transparent.
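To make the steps above concrete, the sketch below builds a simplified four-sided frustum and hands it to a UProceduralMeshComponent; the angles, dimensions and function wrapper are illustrative assumptions rather than the actual FieldOfViewDisplay implementation.

#include "ProceduralMeshComponent.h"

// Builds a four-sided frustum (apex + four corners) and feeds it to a procedural mesh.
// HalfAngleH / HalfAngleV roughly correspond to the near peripheral field of view.
static void BuildFrustumMesh(UProceduralMeshComponent* Mesh,
                             float Length = 200.0f,
                             float HalfAngleH = 30.0f,
                             float HalfAngleV = 25.0f)
{
    TArray<FVector> Vertices;
    Vertices.Add(FVector::ZeroVector); // vertex 0: apex at the mannequin's head (local space)

    // Vertices 1-4: rotate the forward direction (local x-axis) by +/- the half angles.
    const float SignH[4] = { +1.0f, +1.0f, -1.0f, -1.0f };
    const float SignV[4] = { +1.0f, -1.0f, -1.0f, +1.0f };
    for (int32 i = 0; i < 4; ++i)
    {
        const FRotator Rot(SignV[i] * HalfAngleV /*pitch*/, SignH[i] * HalfAngleH /*yaw*/, 0.0f);
        Vertices.Add(Rot.RotateVector(FVector::ForwardVector) * Length);
    }

    // Four triangles, all sharing the apex; the winding order determines the normal direction.
    TArray<int32> Triangles = { 0, 1, 2,   0, 2, 3,   0, 3, 4,   0, 4, 1 };

    // Normals, UVs, colors and tangents are left empty here; the translucent green
    // material is applied separately in the editor.
    Mesh->CreateMeshSection(0, Vertices, Triangles,
                            TArray<FVector>(), TArray<FVector2D>(),
                            TArray<FColor>(), TArray<FProcMeshTangent>(),
                            /*bCreateCollision=*/false);
}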

4.8 Creating and instantiating annotation widgets

Annotations are created through UMG and consist of a title and a text field that contains important notes which might aid the criminal investigators during the crime scene analysis. When using the program, the annotations can be found in a list (similar to the object menu described before) in the main menu that is connected to the left motion controller. Since the annotations all have the same widget components (a title and a text field), we decided to use the same widget for all annotations and change the content depending on which widget should be instantiated. The content of the annotations is stored in a CSV file (Comma-Separated Values), which is a text file with several elements per line, where the elements are separated by a specific symbol. The elements are commonly separated by a comma, but since our elements represent text content for annotations, we want to allow commas to be part of the text. Therefore we used a semicolon as our element separator instead. An example of how the CSV file may look is shown in Figure 4.5, where each line represents an annotation. Each line contains 2-3 elements, where the first element is the title of the annotation, the second is the content text and the third is an image. The image is optional; if the element is left empty in the CSV file, no image is included in the annotation. If an image is specified, the program searches a specific folder in the game content directory and, if the image is found, includes it in the annotation.

Figure 4.5: An example of how the CSV file may look.
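The following sketch illustrates how such a semicolon-separated file could be read with Unreal Engine's file utilities. The struct and function names are assumptions made for this example and are not taken from our implementation.

```cpp
#include "CoreMinimal.h"
#include "Misc/FileHelper.h"

// Hypothetical container for one annotation parsed from the CSV file.
struct FAnnotationEntry
{
    FString Title;
    FString Text;
    FString ImageName; // optional, may be empty
};

// Reads the semicolon-separated annotation file, one annotation per line.
static TArray<FAnnotationEntry> LoadAnnotations(const FString& FilePath)
{
    TArray<FAnnotationEntry> Annotations;

    TArray<FString> Lines;
    if (!FFileHelper::LoadFileToStringArray(Lines, *FilePath))
    {
        return Annotations; // the file could not be read
    }

    for (const FString& Line : Lines)
    {
        TArray<FString> Elements;
        // Keep empty elements so a missing image field is preserved.
        Line.ParseIntoArray(Elements, TEXT(";"), /*InCullEmpty=*/false);

        if (Elements.Num() < 2)
        {
            continue; // a valid annotation needs at least a title and a text
        }

        FAnnotationEntry Entry;
        Entry.Title = Elements[0];
        Entry.Text = Elements[1];
        Entry.ImageName = Elements.Num() > 2 ? Elements[2] : FString();
        Annotations.Add(Entry);
    }
    return Annotations;
}
```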

4.9 Customizing poses for skeletal meshes

Customizing the poses of the mannequins was done using an animation blueprint. The animation blueprint was re-parented to a custom C++ animation instance class, which contains variables that the animation blueprint can access. These variables are rotation vectors and inverse kinematics positions for the modifiable bones of the skeletal mesh. Updating these variables is handled in a separate class which has access to the motion controller's position and orientation, which are used to update the animation variables.
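As a rough sketch of such an animation instance class, the properties below would be exposed to the re-parented animation blueprint; the class and property names here are hypothetical and not our actual ones.

```cpp
#include "CoreMinimal.h"
#include "Animation/AnimInstance.h"
#include "MannequinAnimInstance.generated.h"

// Hypothetical animation instance class that the animation blueprint is
// re-parented to. The blueprint reads these variables in its AnimGraph.
UCLASS()
class UMannequinAnimInstance : public UAnimInstance
{
    GENERATED_BODY()

public:
    // Inverse kinematics target positions for the hands and feet.
    UPROPERTY(BlueprintReadWrite, Category = "Pose")
    FVector LeftHandIKPosition;

    UPROPERTY(BlueprintReadWrite, Category = "Pose")
    FVector RightHandIKPosition;

    UPROPERTY(BlueprintReadWrite, Category = "Pose")
    FVector LeftFootIKPosition;

    UPROPERTY(BlueprintReadWrite, Category = "Pose")
    FVector RightFootIKPosition;

    // Look-at positions used by the Look At nodes for the head and spine.
    UPROPERTY(BlueprintReadWrite, Category = "Pose")
    FVector HeadLookAtPosition;

    UPROPERTY(BlueprintReadWrite, Category = "Pose")
    FVector SpineLookAtPosition;
};
```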

The hands and feet of the skeletal mesh are transformed by Two Bone IK nodes in the animation blueprint. Two Bone IK nodes apply inverse kinematics (IK) in order to move the hand/foot to a target location. The target location is updated with the wand's difference matrix, which is calculated with equation (4.3):

$$w_{\mathrm{diff}} = \left( w_{\mathrm{curr}}^{-1} \cdot w_{\mathrm{prev}} \right)^{-1} \qquad (4.3)$$

where $w_{\mathrm{diff}}$ is the difference between the wand's current matrix and the previous frame's matrix, $w_{\mathrm{curr}}^{-1}$ is the inverse of the current wand matrix and $w_{\mathrm{prev}}$ is the wand's matrix from the previous frame. The wand difference is then applied to the IK position of the selected bone, which means that the bone will follow the wand's movements.
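A minimal sketch of this update step is shown below, mirroring equation (4.3) with Unreal Engine's FMatrix; the function and variable names are hypothetical and the exact representation in our implementation may differ.

```cpp
#include "CoreMinimal.h"

// Applies the wand's frame-to-frame difference (equation 4.3) to an IK target
// position. WandCurr and WandPrev are the wand's transformation matrices for
// the current and previous frame.
static FVector UpdateIKTarget(const FMatrix& WandCurr,
                              const FMatrix& WandPrev,
                              const FVector& IKPosition)
{
    // w_diff = (w_curr^-1 * w_prev)^-1, as in equation (4.3).
    const FMatrix WandDiff = (WandCurr.Inverse() * WandPrev).Inverse();

    // Apply the difference to the current IK target so that the bone
    // follows the wand's movement.
    return WandDiff.TransformPosition(IKPosition);
}
```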

The head and the two spine bones can be transformed by changing the bones' look-at positions. These transformations are done in the animation blueprint by Look At nodes. The look-at position is the position in 3D space that the bone is facing, and it is stored as a three-dimensional vector with x, y and z values. The look-at position is updated in the same way as the IK positions described previously.

The animation blueprint takes a reference pose and then applies the IK and FK transformations on the skeletal mesh. In the pose menu of the mannequins, a reference pose can be chosen. The reference poses are manually created animation sequences made in Unreal Engine's editor. These reference poses act as the base pose from which the other transformations originate.

4.10 User tests

During the implementation, user tests were performed with staff members at NFC in order to get continuous feedback while developing the application. The participants were asked questions about tools that had recently been implemented and how they could be improved. At a later stage of the project, a think-aloud [17] user test was performed with criminal investigators of the Norrköping law enforcement. There were 10 participants; two of them had previous VR experience with HMDs and the other eight had no previous experience. The test was carried out in a large, open room suitable for VR testing. An HTC Vive was used as the HMD for the tests.

Since the majority of the users had little or no experience with HMDs, they were first asked to walk around while wearing the HMD in order to get more comfortable inside the VR environment. Then they were asked to perform a number of tasks while thinking aloud. The tasks they were asked to perform were:

• Use the teleportation tool to move around the scene.
• Make a measurement in the scene.
• Spawn a mannequin object.
• Move the mannequin.
• Scale the mannequin.
• Try animating the mannequin with the pose tool.
• Place a bullet trajectory in the scene and move it to intersect with the mannequin.

These tasks were constructed to yield feedback on the tools that had been implemented.


5 Results

This chapter presents the results that were obtained using the methods described in chapter 4. Presented here are the mesh that was created from a large, dense point cloud, the implementation of navigation in the scene and the virtual crime scene analysis tools that have been implemented.

5.1 Creating a mesh from a given point cloud

As described in the previous chapter, we were initially given several point clouds that NFC had obtained through laser scanning. The point clouds were imported into Reality Capture [6], where they were processed. A fragment of a point cloud, obtained from a residential area, is shown in Figure 5.1. This point cloud is not the one used in the final application; it is included for demonstration purposes only.

A highly detailed mesh was generated from the point cloud and the mesh was then simplified as much as possible without removing too much detail. The resulting mesh is shown in Figure 5.2.

When the mesh had been created, the unwrapping tool in Reality Capture was used to generate a UV map (texture coordinates). Once this was done, a texture could be generated using the images from the 3D camera on the laser scanner. The textured mesh is shown in Figure 5.3.

5.2 Implementing basic VR navigation in Unreal Engine 4

As mentioned in the previous chapter, the navigation was implemented in C++. It works in a way that is very similar to modern VR applications: the implicit navigation covers the physical space tracked by the base stations of the VR equipment, and the explicit navigation method is teleportation. The teleportation arc and the circular teleport location indicator mentioned earlier are shown in Figure 5.4.
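A teleportation arc of this kind is typically built by sampling a ballistic curve from the controller and tracing it against the scene geometry. The sketch below shows one way this could look; it is not the exact code of our implementation, and the function names, parameters and constants are assumptions made for the example.

```cpp
#include "CoreMinimal.h"
#include "Engine/World.h"
#include "Engine/EngineTypes.h"

// Samples a simple ballistic arc from the wand and returns the points of the
// arc plus the hit location (the teleport target) if the arc hits the scene.
static bool TraceTeleportArc(UWorld* World,
                             const FVector& WandLocation,
                             const FVector& WandDirection,
                             TArray<FVector>& OutArcPoints,
                             FVector& OutTeleportLocation)
{
    const float LaunchSpeed = 900.0f;   // cm/s, assumed value
    const float Gravity = -980.0f;      // cm/s^2
    const float TimeStep = 0.05f;       // seconds between samples
    const int32 MaxSteps = 60;

    FVector Velocity = WandDirection.GetSafeNormal() * LaunchSpeed;
    FVector Previous = WandLocation;
    OutArcPoints.Add(Previous);

    for (int32 Step = 0; Step < MaxSteps; ++Step)
    {
        // Integrate one step of the ballistic motion.
        Velocity.Z += Gravity * TimeStep;
        const FVector Current = Previous + Velocity * TimeStep;

        // Trace the segment against the scene geometry.
        FHitResult Hit;
        if (World->LineTraceSingleByChannel(Hit, Previous, Current, ECC_Visibility))
        {
            OutArcPoints.Add(Hit.ImpactPoint);
            OutTeleportLocation = Hit.ImpactPoint; // place the circular indicator here
            return true;
        }

        OutArcPoints.Add(Current);
        Previous = Current;
    }
    return false; // the arc did not hit any valid surface
}
```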


Figure 5.1: Example point cloud obtained through laser scanning

Figure 5.2: Mesh that was generated from a point cloud

5.3 Implementing a tool for measuring distances in the scene

As described earlier, the measurement tool makes it possible to measure the distance between two arbitrary points in the scene; the visual result is shown in Figure 5.5. As can be seen in the figure, the widget component also has a "Remove"-button which makes it possible to remove the measurement.


Figure 5.3: Textured mesh

Figure 5.4: The teleportation arc and the circular teleport location indicator

5.4 Creating an interactable interface in the virtual world

All of the visual interfaces were created using the UMG widgets as mentioned before. The design of the interfaces is simplistic and the texts and buttons are large to make them easy to understand and interact with. The buttons are initially bright in color, but when the user hovers over them with the motion controller they become darker to indicate that the button is interactable. Some of the buttons can be toggled; when such a button is active, it remains darker until the user pushes it again.


Figure 5.5: An example measurement

The menus share the same structure: each alternative that can be chosen is a button together with a text that describes what the button does. The "Remove"-button has a red text saying "Remove" and is always located in the upper right corner of the menu. The "Back"-button that leads back to the previous menu is a small button with an arrow symbol pointing backwards, and this button is always placed in the lower left corner. An example of this menu structure is shown in Figure 5.6.

5.5 Loading and instantiating objects into the scene

The user can open the main menu that is connected to the left motion controller and then navigate to the "Objects"-menu which was described in the previous chapter. This menu contains a list of all the objects that can be instantiated into the scene. The user can choose an object from the list, point somewhere in the scene and push the button again, and the object will be instantiated. The resulting object menu and an example of instantiating an object are shown in Figure 5.7.

5.6 Highlighting selected objects

When an object has been instantiated in the scene it can be selected by the user, and when an object is selected it is highlighted as shown in Figure 5.8.

5.7 Creating a visualization of a mannequin's field of view

When a mannequin has been instantiated in the scene it can be selected by the user, and when the mannequin is selected, a menu is displayed next to the mannequin object. This is shown in Figure 5.9. The menu contains actions that can be performed on the mannequin, and one of the alternatives is "Show FoV", which visualizes the mannequin's field of view.


Figure 5.6: The main menu that is attached to the left motion controller

the alternatives is "Show FoV" which will visualize the mannequins’ field of view. The button can be toggled on and off and when it is toggled on, the field of view is displayed as a green, transparent frustum which is shown in Figure 5.10.

5.8 Creating and instantiating annotation widgets

The annotation widgets have the same simplistic design as the other UMG widgets in the scene. Instantiating annotations is very similar to instantiating other objects into the scene, though they are not located in the same menu. The annotations have their own menu called "Annotations", which has the same structure as the object menu, i.e. it contains a list of all annotations that can be instantiated. The user can choose one of the annotations from the menu and then instantiate it in the same way as with the objects. All annotations have two buttons: one that hides/shows the widget and one that removes the annotation completely, which is shown in Figure 5.11.


Figure 5.7: The menu that holds the instantiatable objects.


Figure 5.9: The menu that is attached to the mannequin

5.9 Customizing poses for skeletal meshes

When a mannequin object has been instantiated in the scene and the user has selected it, an associated menu is displayed as described before. This menu contains an alternative to modify the pose of the mannequin. When this alternative is selected, the user is directed to another menu which contains alternative methods for altering the pose. This menu is shown in Figure 5.12.

The first alternative is to choose predefined poses, such as crouching and standing up, and the second alternative is to freely modify the pose by hand. As described in the previous chapter, the arms and legs are controlled by inverse kinematics, which makes it possible for the user to grab a hand or a foot and drag it to the desired location, and the arm/leg will follow the movement in a realistic way. The head and torso are instead controlled by forward kinematics (rotation only); the user can, for example, grab the head and move the wand to rotate it in different directions. It is also possible to combine the two pose modification tools, i.e. select a predefined pose and then modify it manually.


6 Discussion

This chapter will discuss the results received from the user tests that were performed, the methods used in the implementation and the work in a wider context.

6.1 Results

This section will cover the results of the user tests and how the user interaction was reworked to simplify the user interface.

6.1.1 Learning curve

Before the user tests we did not realize how difficult the application was to understand and use. Things like this are hard to see from a developer's perspective since the developers already know how everything works. During our first user tests we discovered that even when we thoroughly guided the users through the application, some things were hard for them to grasp. This mostly had to do with the fact that we initially used several different buttons on the motion controllers to perform different tasks, which resulted in an unnecessarily steep learning curve. After the user tests, however, we changed the input handling so that almost every operation is performed with the same button. Instead, the different tools are accessed through selection in the menu that is attached to the left motion controller. According to later user tests this seems to be easier to learn.

6.1.2 User tests with criminal investigators

Most of the participants had little or no VR experience prior to testing this application. This manifested itself in some participants becoming a bit overwhelmed by the VR experience. At times it was difficult to get them to focus on the task at hand while thinking aloud. Some of the participants experienced mild cybersickness. The guardian system of SteamVR should have been explained better, since the participants did not seem to move towards the centre of the room when they reached an edge. Instead they mostly relied on teleportation to navigate. However, this is common among novice users of VR. A solution could be to direct the user towards the centre of the workspace. For example, a glowing circle could be drawn at the centre of the workspace when a user gets to the edge of his/her workspace to indicate
