
Department of Science and Technology
Linköping University

Examensarbete (Master's thesis)
LITH-ITN-MT-EX--06/016--SE

Design and Implementation of a Virtual Environment for Treatment of Post Traumatic Stress Disorder

Johan Brännström

Thesis work carried out in Media Technology at Linköping Institute of Technology, Campus Norrköping

Supervisors: Anders Backman, Mehdi Ghazinour
Examiner: Matt Cooper

Norrköping, 2006-03-10



Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances.

The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/

Design and Implementation of a Virtual Environment for Treatment of Posttraumatic Stress Disorder

Johan Brännström

Umeå, January 30, 2006

Abstract

The aim of this report is to describe the process of designing and implementing a Virtual Environment (VE) to be used in the treatment of Posttraumatic Stress Disorder (PTSD). The work was divided into three steps. First, a background study was made to see what had previously been done in the field of Virtual Reality-based therapy for PTSD and other psychological conditions. Then a design of the VE was constructed based on the knowledge from the previous work, along with new ideas. Finally, an application was developed based on the design. The application is limited to the treatment of people who have experienced a specific type of traumatic event, and is also to be used as a proof of concept of a new model of treatment. The result of this work is the application that has been developed: it places the user in a fully immersive and interactive VE, and gives the therapist full control of the stimuli flow.

Preface

This diploma work project is based on the work done by the author at VRlab at the University of Umeå from January to August 2005. It fulfils the requirements for a Master of Science degree in Media Technology and Engineering at the Department of Science and Technology at the University of Linköping.

I would first like to thank my supervisors on this project, Anders Backman, Mehdi Ghazinour, and Marcus Maxhall, and my examiner Matt Cooper, for a challenging, interesting, and most of all, fun project. Big thanks to Daniel Sjölie at VRlab, who also helped out a lot. A big "Thank you!" also goes out to everybody who helped out reading, correcting, criticizing, and improving the report and the work in general. Finally I would like to thank my opponent David Andersson.

I've had a great time.

Johan Brännström

Contents

1 Introduction
1.1 Background
1.2 Purpose
1.3 Problem description and limitations
1.4 Document overview

2 Related work
2.1 Virtual Reality and Psychotherapy
2.2 Virtual Reality and Posttraumatic Stress Disorder

3 Theory
3.1 Virtual Reality
3.2 Posttraumatic Stress Disorder
3.3 Virtual Reality in clinical applications

4 Design
4.1 Designing Virtual Environments
4.2 Scenario
4.3 Environment
4.3.1 Residential area
4.3.2 Safety place
4.4 Interaction
4.4.1 User interface
4.4.2 Therapist interface

5 Implementation
5.1 Implementing virtual environments
5.2 Tools
5.2.1 Hardware
5.2.2 Software
5.2.3 Colosseum3D
5.3 Scenario
5.4 Environment
5.4.1 Residential area
5.4.2 Safety place
5.5 Interaction
5.5.1 User interaction
5.5.2 Therapist interaction

6 Results and discussion
6.1 Results
6.2 Discussion
6.3 Future work

List of Figures

2.1 Screenshot from acrophobia simulator.
2.2 Screenshot from airplane simulator.
2.3 Screenshot from arachnophobia simulator.
2.4 Screenshot from Vietnam simulator.
2.5 Screenshot of the WTC simulator.
2.6 Screenshot from the Iraq simulator.
2.7 Screenshot of the bus bombing simulator.
4.1 The five components of a VR-system.
4.2 Timeline of scenario.
4.3 Indication of interaction modes.
4.4 The wheelchair.
5.1 Hardware used.
5.2 Colosseum3D execution.
5.3 Soundnode attached to radio node.
5.4 SwitchNode alternating between buildings.
5.5 Overview of the position of the shaker in the scenegraph.
5.6 Creation of physical attributes in schematic view.
5.7 Structure of a composite object.
5.8 The two states of the building models.
5.9 Lightmap of a bathroom wall.
5.10 Floorpanel used to open door.
5.11 Overview of switch nodes.
6.1 Screenshots from state 2 and state 3.
6.2 Screenshot of residential area.
6.3 Screenshots of apartment.
6.4 Screenshot of safety place.

Figure information

The following figures are reproduced with the permission of their respective owners:

• Figure 2.1, Virtually Better (2005), http://www.virtuallybetter.com/virtual_heights.htm
• Figure 2.2, Virtually Better (2005), http://www.virtuallybetter.com/virtual_airplane.htm
• Figure 2.3, Hunter Hoffman (University of Washington) (2005), http://radio.weblogs.com/0105910/2003/11/01.html
• Figure 2.4, Virtually Better (2005), http://www.virtuallybetter.com/virtual_vietnam.htm
• Figure 2.5, Hunter Hoffman (University of Washington) (2005), http://www.hitl.washington.edu/projects/ptsd/
• Figure 2.6, Skip Rizzo & Jarrell Pair (University of Southern California Institute for Creative Technologies) (2005), http://graphics.usc.edu/hci/jarrell/transfer/ptsd_pics.zip
• Figure 2.7, Hunter Hoffman (University of Washington) (2005), http://www.imprintit.com/Creations/PTSD/


Chapter 1

Introduction

This chapter will give the reader an overview of the report and a background to the project that the report is based on. The purpose of the report is presented, as is the problem description.

1.1 Background

Posttraumatic Stress Disorder (PTSD) is a common anxiety disorder among people who have experienced traumatic events [Dyregrov, 2002]. A traumatic event can for example be war, a natural disaster, an accident or rape [Michel et al., 2001]. A recent example is the tsunami disaster in Southeast Asia. The standard treatment of PTSD, prior to the availability of therapy based on Virtual Reality (VR) applications, was imaginal exposure therapy [Rizzo et al., 2004]. Unfortunately, as one of the defining symptoms of the disorder is avoidance of reminders of the trauma, some patients are unable to visualize, imagine or describe the traumatic event. To address this problem, attempts were made to use VR in the treatment of PTSD. VR-based therapy of PTSD was first introduced by Rothbaum et al. [1999, 2001], and showed promising results. Good results were also shown in a case study by Difede and Hoffman [2002].

However, the focus in these studies has been on the psychological aspects, and very little has been documented and discussed about the decisions made when constructing the Virtual Environments (VEs) used in the treatment. As the design of the VE can have an impact on the success of the treatment, it is important to take a deeper look at what underlying factors contribute to a successful VE. This work will explore these underlying factors and then use them to implement a (hopefully) successful VE.

1.2 Purpose

The purpose of the work is to design and implement a VE to be used in treatment of PTSD. To do this we will first take a look at previous work in the field and at some background theory of VR and PTSD. The knowledge gathered in the first part will then be used along with our own ideas to specify a design for the VE. The design will then be used as a basis when writing an application which will implement our ideas.

1.3 Problem description and limitations

We are faced with multiple problems when designing and implementing a VE. First we need to explore what factors are important when constructing a successful VE. We will then need to find an efficient way to implement our design into a full VR application.

The implemented application will be used as a proof of concept, i.e. to try out the model of treatment. It will be used for further research, as no experiments have yet been made. The application cannot be used for treatment of every form of PTSD: it is aimed at a specific group of traumas and cannot be used with other types.

1.4 Document overview

The report is divided into six chapters:

• Chapter 1 - Introduction (an overview of the problem and the report)
• Chapter 2 - Related work (introduces the reader to what has previously been done in the field)
• Chapter 3 - Theory (background theory on VR and PTSD)
• Chapter 4 - Design (how the VE was designed)
• Chapter 5 - Implementation (implementation of the system into an application)
• Chapter 6 - Results and discussion (presents the results gained from the design and implementation, along with a discussion regarding the results)


Chapter 2

Related work

This chapter will introduce the reader to what has previously been done in the field of VR and psychotherapy. Then we examine the work done on VR-based treatment of PTSD.

2.1 Virtual Reality and Psychotherapy

Specific phobias, such as acrophobia, arachnophobia and fear of flying, were among the first psychiatric disorders where VR-based therapy was used. Phobias are usually treated with exposure therapy [Glanz and Durlach, 1997]. The exposure therapy can either be done "in vivo" (live exposure) or "in vitro" (imaginal exposure). This form of treatment was one of the main reasons why VR-based therapy could be an alternative; the available technology at the time could easily simulate the stimuli needed in exposure therapy, and the created applications could be used to treat many people [Glanz and Durlach, 1997].

The first known controlled study of VR-based therapy for a psychiatric disorder examined its efficacy in the case of acrophobia (fear of heights) [Rothbaum et al., 1995]. Twenty college students with acrophobia were randomly assigned to either VR-based therapy or a waiting-list comparison group. The patients were exposed to three different VEs: a scene with footbridges at different heights above water, a scene with outdoor balconies on different floors of a building, and a scene containing a glass elevator rising 49 floors. A screenshot from the glass elevator environment can be seen in figure 2.1. Significant differences between the two groups were found in all measurements. The group who received VR therapy showed significant improvement after 8 weeks of treatment, whereas the comparison group was unchanged. These findings suggested that it is possible for a user to experience presence in a VE to "the point that attitudes and behaviors in the real world may be changed as a result of experiences within a virtual world" [Anderson et al., 2001].

Figure 2.1: Screenshot from acrophobia simulator.

The first controlled study comparing VR-based therapy and standard in vivo exposure therapy was conducted by Rothbaum et al. [2000]. A waitlist comparison group was also used here. 49 patients with fear of flying were randomly assigned to one of the three groups. Inside the VE the user found him/herself sitting in a passenger seat by the window of a commercial airplane (figure 2.2 below). The virtual airplane was able to taxi, take off, land, and fly in both calm and turbulent weather. The results of the study indicated that there were no differences between the VR-based therapy and standard in vivo exposure therapy; by 6 months posttreatment, 93% of both groups had flown.

Figure 2.2: Screenshot from airplane simulator.

Another controlled study which showed good results is the study on arachnophobia (fear of spiders) by Garcia-Palacios et al. [2002]. 23 patients suffering from arachnophobia were assigned to VR treatment or a waiting list. Exposure was done in a VE representing a kitchen containing a Guyana bird-eating tarantula (see figure 2.3). To make the VE more convincing, tactile augmentation was introduced in the form of a spider prop which the user could touch in both the real and virtual world. 83% of those assigned to the VR treatment showed clinically significant improvement.

Figure 2.3: Screenshot from arachnophobia simulator.

Since specific phobias are an area adjacent to PTSD, the good results with phobias indicate that VR could be a useful tool in the treatment of PTSD.

2.2 Virtual Reality and Posttraumatic Stress Disorder

Previously, some attempts have been made to use VR in the treatment of PTSD. These studies have all shown promising results and are the main reason why this project is being done, but it is important to add that no controlled study has yet been made. It would be especially interesting to see a controlled study comparing VR-based therapy with regular in vivo or in vitro therapy.

VR-based therapy for PTSD was first introduced by Rothbaum et al. [1999, 2001] in a study examining the possibilities of treating chronic combat-related PTSD among Vietnam veterans. The treatment was carried out in two steps. In the first step the patients were just exposed to the VE. In the second step the patients, while immersed in the VE, were told to expose themselves by imagination to their most traumatic memories from their time in Vietnam. This was done by letting the patients describe, in detail, the memories triggered by the VEs, and to repeat them several times to allow habituation (a decreasing anxiety response). Since one of the most common complaints among Vietnam veterans with PTSD is a strong emotional response to the sound of helicopters, the first VE was designed as a 'Huey' helicopter, which was able to fly over various Vietnam terrains such as jungles, rivers and rice paddies. The second VE was designed as a jungle clearing, as seen in figure 2.4. All 8 participants interviewed at the 6-month follow-up reported reductions in PTSD symptoms ranging from 15% to 67%.

Figure 2.4: Screenshot from Vietnam simulator.

Difede and Hoffman [2002] were the first to study the possibilities of using VR-based exposure therapy in the case of acute PTSD (i.e., within a few months after the traumatic event). In this case study, concerning the treatment of a woman who had survived the terrorist attack on the World Trade Center (WTC), Difede and Hoffman got results that suggested that VR-based exposure therapy is a promising medium for treating acute PTSD. The patient reported an 83% reduction in depression, a 90% reduction in PTSD symptoms, and no longer met the criteria for any other psychiatric disorder after completing the therapy. The VE used was a model of Manhattan, complete with the two WTC towers. The therapist was able to control what the patient was experiencing in the VE through a number of action sequences. These ranged from just a jet flying over the WTC towers to the complete sequence with planes crashing into the towers and the collapse of the buildings. A screenshot of the exploding WTC towers is seen in figure 2.5 below.

One of the most recent applications where VR will be used in the treatment of PTSD is based on the Xbox game Full Spectrum Warrior [Rizzo et al., 2004, Sutliff, 2005, FSW, 2005]. This application was first developed, in conjunction with personnel from the US Army's Infantry School at Fort Benning, Georgia, to train soldiers in leadership and tactics, and was then developed into a commercial game. The application is aimed at military service personnel returning from the Iraq war, and will consist of a VE based on a Middle Eastern city which the patient can see from a number of perspectives (inside a vehicle, walking alone or with a patrol). The project is part of a five-year contract worth $100 million that the Institute for Creative Technologies at the University of Southern California signed with the US Army. That gives some perspective on the magnitude of the problem of PTSD, and how important it is to explore possible solutions to it. A screenshot from the simulator is found below in figure 2.6.

Besides the projects mentioned above, there are a few others whose results are yet to be published. Researchers at the University at Buffalo have developed a driving simulator which aims to help car-accident survivors recover from PTSD [Donovan, 2005]. Imprint Interactive Technology has begun to develop an application which deals with terrorist victims or witnesses [Imprint, 2005].


Figure 2.5: Screenshot of the WTC simulator.


Figure 2.6: Screenshot from the Iraq simulator.


Chapter 3

Theory

This chapter will give the reader the theory regarding both VR and PTSD that was used when designing the system. We also show why using VR-based therapy in the treatment of anxiety disorders is a good idea.

3.1 Virtual Reality

Although the term itself was first coined in 1989 by Jaron Lanier, VR dates back to an invention called the Sensorama [Burdea and Coiffet, 2003]. The Sensorama was invented by Morton Heilig in 1962, and simulated a motorcycle ride through New York. Even though VR has been around for about forty years, there still exist many different definitions of what VR really is. Interactivity and immersion are two essential features often mentioned in the definitions. Interactivity is pretty much self-explanatory: the virtual environment can take input from the user and modify the environment accordingly, preferably in real-time. Immersion describes "the extent to which the computer displays are capable of delivering an inclusive, extensive, surrounding, and vivid illusion of reality to the senses of a human participant" [Slater and Wilbur, 1997]. Inclusive indicates the extent to which physical reality is shut out. Extensive indicates the range of sensory modalities accommodated. Surrounding indicates the extent to which the virtual reality is panoramic rather than limited to a narrow field. Vivid indicates the resolution, fidelity, and variety of energy simulated within a particular modality.

Another key feature of VR is presence. Where immersion and interactivity are connected to the characteristics of the technology used, presence is a state of consciousness of the user [Slater and Wilbur, 1997]. Presence has almost as many definitions as VR itself, but in this report we have chosen one of the more widely used, by Lombard and Ditton [1997], who defined presence as "the perceptual illusion of nonmediation". "Perceptual" indicates that "this phenomenon involves continuous responses of human sensory, cognitive, and affective processing systems to objects and entities in a person's environment" [Lombard and Ditton, 1997]. An "illusion of nonmediation" occurs when "a person fails to perceive or acknowledge the existence of a medium in her communication environment and responds as she would if the medium were not there" [Lombard and Ditton, 1997]. When presence occurs, "the difference between 'in imagination' and in vivo disappears" [Riva et al., 2002]. This means that treatment methods using VR can be just as effective as any non-VR method, if the user experiences presence in the VE used in the treatment. Therefore it is extremely important for VR-based methods to strive for presence, or else there will be no improvement over regular imagination-based techniques.

In their article, Lombard and Ditton [1997] list a wide range of different factors on which presence depends. Not all of them are of interest to this report, but those that are we will now explore, and later try to incorporate in the design of the VE. The first factor listed in the article is "Number and consistency of sensory outputs", which derives from the general belief that "the greater the number of human senses for which a medium provides simulation, the greater the capability of the medium to produce a sense of presence". The next factor we will look at is "Visual display characteristics". This is a factor which depends on many different variables, e.g. image quality, image size, and camera techniques used in the simulation. Another factor is the "Aural presentation characteristics". This factor depends on two variables: sound quality and spatialization, i.e. 3D sound. So far we have only looked at factors depending on technology, but an important factor of presence is "Media user variables". Especially important are the user's prior experience and knowledge of the medium and the user's willingness to suspend disbelief. Even though we have mentioned quite a few factors here (and there are many more to be found in Lombard's paper), there is probably none as influential as the factor of interactivity. Important aspects which contribute to interactivity include the amount of change possible in each characteristic of the mediated experience, the degree of correspondence between the type of user input and the type of medium response, and the speed with which the medium responds to user input.

3.2 Posttraumatic Stress Disorder

One of the most internationally used handbooks for diagnosing mental disorders is the "Diagnostic and Statistical Manual of Mental Disorders (DSM-IV)" [APA, 1994]. DSM-IV defines PTSD through six criteria. According to Criterion A, the person must have experienced, witnessed or been confronted with an event, or series of events, which involved death, serious injury, or a threat to one's own or others' physical integrity. The person must also have reacted with intense fear or helplessness. There are three major categories of symptoms: reexperiencing; avoidance and numbing; and physiological hyperarousal. The criteria for these symptoms are found in Criteria B, C, and D of DSM-IV. Reexperiencing symptoms can, for instance, be recurring nightmares, intrusive memories related to the traumatic event, or flashbacks. The person may also react intensely, both psychologically and physiologically, when facing cues associated with the traumatic event [APA, 1994]. These cues are sometimes obvious and sometimes more complex [Resick and Calhoun, 2001]. Persons who have experienced a traumatic event may, as stated earlier, show symptoms of avoidance and numbing. These symptoms reflect the person's attempt to gain psychological and emotional distance from the trauma [Resick and Calhoun, 2001]. The person may actively avoid thoughts and feelings which are related to the trauma, as well as activities, places and persons that elicit memories of the trauma [APA, 1994]. Sometimes the person may forget important parts of the traumatic event. The person can also become numb and show limited affect (such as an inability to feel love), or feel that he or she does not have a future [APA, 1994]. The person may also show prolonged symptoms of physiological hyperarousal, which means that they can, for instance, suffer from sleep disturbances, decreased ability to concentrate, or irritability [APA, 1994].

PTSD is considered a common disorder among people who have experienced a traumatic event [Dyregrov, 2002]. In a study concerning the epidemiology of PTSD in Sweden [Frans, 2003], it was shown that the prevalence rate of PTSD is 5.6% in Sweden with a 2:1 female/male ratio. In the USA the numbers are almost the same; a prevalence of 7.8% with a 2:1 female/male ratio [NCPTSD, 2005].

Emotional processing theory is often used to describe anxiety disorders such as PTSD. The theory suggests that fear memories include information about stimuli, responses, and meaning [Foa and Kozak, 1986]. Therapy is aimed at facilitating emotional processing and modifying the structure of the fear memory [Anderson et al., 2001]. Foa and Kozak [1986] suggest that two conditions are required for the reduction of fear through therapy based on this theory. First, the memory structure needs to be activated. Exposure techniques use confrontation between the patient and the trauma-specific stimuli, which could be some kind of feared cue or the actual trauma memory itself [Michel et al., 2001]. These types of techniques have historically proven effective at activating the fear structure, since they elicit fearful responses [Rothbaum and Hodges, 1999]. Usually the intensity of the stimuli starts out at a low rate until the anxiety is lowered. The intensity and duration of the stimuli are then raised gradually. The exposure can be in vitro or imaginal (the patient uses his or her own memories of the trauma as stimuli), or in vivo (some form of live exposure is used). After the activation, Foa and Kozak [1986] propose that information incompatible with the associations between stimuli and anxiety response must be provided during the therapy. This can be done in many ways. A commonly used technique is habituation, i.e. the confrontation is prolonged in order to allow the anxiety responses to gradually decay [Anderson et al., 2001]. To sum up, any method capable of activating the fear structure and modifying it would be predicted to improve the symptoms of anxiety [Rothbaum and Hodges, 1999], which is one of the reasons why VR has been used and has shown good results in the treatment of anxiety disorders.

3.3 Virtual Reality in clinical applications

VR has a long history of usage in clinical applications, in terms of VR applications. Besides the previously mentioned applications in the field of anxiety disorders, VR has also been used successfully in areas such as eating disorders [Riva et al., 1999], pain distraction [Hoffman et al., 2000], and physical rehabilitation [Deutsch et al., 2001]. There are some advantages that are often mentioned when discussing VR in clinical applications, and here we will focus especially on the use of VR in psychotherapy. In their article about VR and psychotherapy, Glanz and Durlach [1997] suggest the following advantages:

• VR is safer than regular in vivo treatment.
• VR is cheaper than in vivo treatment.
• VR helps to preserve the patient's integrity, since the patient never has to leave the therapist's office.
• VR also gives the therapist complete control of the stimulus presented to the patient.
• VR is highly flexible; the VEs can easily be tailored to fit the needs of each patient.
• VR makes it easier to measure the responses of the patient.
• VR allows for treatment over long distances, i.e. in telemedicine.

As stated earlier, any method that can activate the structure of the fear memory can be used in exposure therapy. This concept has been taken a step further by Vincelli [1999] and Riva et al. [2002] by generalizing VR to an "advanced imaginal system". An advanced imaginal system is "an experience that is able to reduce the gap existing between imagination and reality" [Riva et al., 2002]. According to Vincelli [1999], it is our memory and imagination that present the "absolute and relative limits to individual potential". By using virtual experiences we are able to transcend these limits, since "the recreated world may be more vivid and real at times than the one that most subjects are able to describe in their own imagination and through their own memory". What he means is that through VR we can enhance the human ability of imagination and make this new imagined world as real as the real world. Thus, VR can be used to reconstruct the traumatic memories and expose the patient to these newly constructed memories, and it would be equal to in vivo treatment. This may also solve the problem that some patients have with visualizing the memories.


Chapter 4

Design

The previous chapters have given us an overview of what previously has been done and the necessary theory. In this chapter we will use that knowledge as guidelines when designing the VE. The design process is divided into three sections, each covering a part of the system, but first we will take a look at the VR system we will be using and the possibilities and limitations it gives us.

4.1 Designing Virtual Environments

Previously we have introduced some important concepts and ideas which we believe are crucial to make the system as effective as possible. These ideas and concepts, along with the limitations of our VR system and the proposed model of treatment, will now serve as guidelines in the design process. The design process will result in a blueprint for our VE, and this blueprint will then be used in the next chapter where we will construct our VE.

The system we will use to simulate the VE will give us some possibilities and some limitations. We will go into more detail about the system in the following chapter but we also need to go through some of the basic functionality here, in order to know what possibilities the system presents to us. The basic system can be seen as five different components (see figure 4.1); Software and databases, VR engine, I/O devices, user, and task [Burdea and Coiffet, 2003]. To be able to design the VE appropriately we need to match our design to the specifications of these components.

Software and databases consist of the programs and objects used to populate and build the VE. The VR engine presents the VE to the user, handles all the data sent from the input devices, and sends data to the output devices. Among the I/O devices we find the tracking system, the HMD, and interaction equipment such as pinch gloves and wands. Within the system we also find a user and a task which is to be performed by the user. The system we will develop will be built around an authoring framework for VEs called Colosseum3D [VRlab, 2005], which has been developed at the University of Umeå. This framework uses a number of different libraries to take care of rendering, I/O, sound, and physics. An HMD is used for displaying the graphics to the user, a tracker system to calculate the position of the HMD, a pair of tracked pinch gloves for interaction, and a wheelchair for navigation.


Figure 4.1: The five components of a VR-system.

A wheelchair might seem a strange choice of navigation model, but since the application was to be used as a proof of concept and an already implemented version of the wheelchair was at hand, it was decided that this type of navigation would be sufficient.

4.2 Scenario

The first thing to do in the design part of the project was to establish a scenario. The scenario is the script for our system; it tells us what will happen, when it will happen, how it will happen, and where it will happen. Since a guideline of the project was to focus on Middle Eastern civil war victims, an attack on a residential area was thought to be a good starting point for the scenario. During the attack the user will find him/herself in an apartment in this residential area. The attack would look like this:

1. Sounding of air-raid siren (State 1). The siren is heard through a radio in the apartment, and is played for about 5 minutes.

2. Air strike from F-16 aircraft using bombs (State 2). Two F-16 aircraft approach the residential area and drop two bombs each, which demolish buildings in the residential area.

3. Helicopter attack from Apache helicopters using machine guns and missiles (State 3). Two Apaches approach the residential area and then attack it using machine guns and missiles. The attack starts at the part of the residential area furthest away from the apartment, and then moves towards the apartment.

We chose this structure of the attack mainly for two reasons. First of all, the three different states are independent of each other, both in the VE and the real world. An air strike could occur without the sounding of the siren (it might simply be broken), and a helicopter attack can take place without the air strike taking place first. Secondly, the states give rise to a natural increase in stimulus intensity. State 1 contains only audio stimuli. State 2 contains both audio and visual stimuli, but without showing the cause of the stimuli (the F-16s will only be heard, not seen, but the effects of the attack will be seen by the user). State 3 contains both audio and visual stimuli and shows both the cause and effect of the stimuli to the user. This natural increase in intensity fits well with how exposure therapy is usually conducted, i.e. gradually rising in intensity. The states themselves are also designed to be somewhat gradual. The helicopter attack, for instance, starts with just the sound of two helicopters approaching for about two minutes before the actual attack starts. The independence of the states and the gradual gain in intensity lead to a flexible form of treatment where the therapist can choose freely how to construct the treatment sequences. After the attack sequence is over, the simulation proceeds to a state where the user is exposed to the effects of the attack: burning and exploded buildings, dead and hurt people, and ambulance sounds. During the attack sequence the user has the opportunity to move into the safety place at any time he or she feels uncomfortable with the situation. The safety place is an environment where the user should feel relaxed and safe. Examples of possible environments are a beach or a garden, but any place where the user feels comfortable can be used. When in the safety place, the user can at any time move back into the apartment and resume the treatment where he or she left off.

Since we wanted to make the user a more active participant in the therapy, a task was designed for the user to perform while in the VE. This was done for three reasons:

• We wanted the user to become acquainted with the VE and its possibilities of interaction and navigation (the task itself can be used when the user is first introduced to the VE, and then again in therapy to make it easier for the user to adapt to the VE).

• We wanted to enhance the sense of presence through interaction.

• We wanted a soft start of the treatment instead of rushing straight to activation of the stimuli.

As the user will be inside the apartment, this resulted in the task of preparing a meal. Different food objects available in the kitchen, along with the needed equipment, are to be put on the table by the user. The task is quite simple, yet it still accomplishes what we require. Besides the food-preparing task, the user is also allowed and encouraged to freely navigate the VE to get acquainted with it.

If we combine the attack sequence, its aftermath, and the interaction task we get five different states, as can be seen in the timeline in figure 4.2 below:

Figure 4.2: Timeline of scenario.

The timeline in figure 4.2 describes a scenario where all the states are triggered and the user chooses to enter the safety place twice. The user starts the simulation by preparing the meal in the apartment while listening to the radio. The radio then shifts to playing the air-raid siren, and the attack sequence starts. The first attack comes from the aircraft dropping bombs on the residential area. At this time the user chooses to move into the safety place for the first time. After the air strike the helicopters start attacking using missiles and machine guns, and the user visits the safety place a second time. As can be seen in the timeline, the estimated time of the attack sequence is approximately 20 minutes. This can vary depending on when the therapist triggers the states, whether the user needs to abort the therapy, and how many times the user decides to move into the safety place.

4.3 Environment

After the scenario had been established, work on the design of the environment began. By environment we mean the look of the world where the scenario takes place. As stated in the scenario, the user will find him/herself inside an apartment in a residential area, with the possibility to move into the safety place. This gives us two environments to be designed: a residential area and a safety place.

4.3.1 Residential area

As stated earlier, the scenario will take place in a Middle Eastern civil war context, thus the look of the environment will try to resemble a Middle Eastern residential area. First we have the apartment where the user will perform the task, be presented with the stimuli, and be given the possibility to move into the safety place. We want this apartment to be very neutral, since a neutral environment lets the user “fill in the blanks” with personal experience [Hodges et al., 2001]. The same goes for the residential area; it should also be neutral, yet resemble a Middle Eastern residential area.

Now that we have a residential area we need to populate it with the needed objects. First of all we need objects to perform the attacks. We will need helicopters that fly around the residential area and attack predefined targets. We will need to show the effects of the attack through explosions and dead and hurt people. We will need people to walk the streets, talk, and to get hurt or killed by the attacks. Besides the pure graphical objects we will need a number of sounds, both ambient sounds and pure sound effects. We need for instance a number of different sound effects for the weapons used in the attacks, and some music to be played on the radio. We also need to consider what objects need to be animated and how they should be animated. For example, a helicopter is not animated in the same way as exploding buildings.

4.3.2 Safety place

The safety place has to be designed to have a relaxing effect on the user. When used in regular therapy, a safety place could be a beach, a beautiful wood clearing, or a nice park. Any place where the user feels comfortable and relaxed can be used. It would be preferable if the safety place could be used by many users with different traumatic experiences, but it is also important to consider each individual’s own experiences. For instance, it is not suitable to use a beach in case the user is a victim of a tsunami disaster. Because of this we have decided to design two different safety places; a tropical beach and a forest clearing. We feel that these two safety places are different enough to cover a sufficient number of users. The tropical beach will be located on a deserted island in the middle of the ocean. The island will have a big mountain in the middle, and some vegetation surrounding it, such as palm trees. The forest clearing will lie next to a pond surrounded by forest.

The safety places also need to be populated with the needed objects. Both safety places will need water in some form, and some form of vegetation. The water should preferably be animated in some way to simulate waves. A key component in the safety places is sound, and more specifically ambient sound, to make the environment more vivid and appealing to the user. The sound of waves hitting the beach shore and the sound of birds singing are examples of ambient sounds that will be needed.

4.4 Interaction

Now that we have established an environment and a scenario, we will take a look at what we need for our interaction, what interaction possibilities the VR-system offers and how these can be used to fit our needs. Interaction is, as stated earlier, one of the key factors of presence and thus one of the key factors to the success of the whole system. Two different user interfaces have been designed; one for the user and one for the therapist.

4.4.1 User interface

The user interface consists of three parts: an object manipulation part, a navigation part, and a part concerning the physical controllers available to the user. Since the user is supposed to perform a task (preparing a meal), the user must be able to manipulate the objects needed for the task, e.g. lift a cup from the kitchen sink to the table. Such an interaction model is already implemented in our VR engine, so we will make use of the existing one instead of constructing a new one. This interaction model is constructed to correspond directly to the way we interact in our daily life [Backman, 2005a]. The user will be able to grab hold of physically simulated objects, move them around, have them interact with other physical objects, and drop them where needed. The user will have two virtual hands corresponding to the pinch gloves he or she wears, and it is through these virtual hands that the interaction with the physically simulated objects is performed. To guide the user when interacting, the hands have three different modes: neutral mode (see figure 4.3(a)), contact mode (see figure 4.3(b)), and grasp mode (see figure 4.3(c)).


(a) Hand in neutral mode. (b) Hand in contact mode.

(c) Hand in grasp mode.

Figure 4.3: Indication of interaction modes.

When in neutral mode (normal hand) the user is not near enough to any object to interact with it. When in contact mode (open hand) the user is close enough to interact with some object. When in grasp mode (closed hand) the user has indicated through the pinch gloves that the hand will grasp the object that touches it. Not all objects in our VE are physically simulated; we have only simulated those that are needed to perform the task.

The next part of the interaction is navigation, which consists of two parts: general navigation in the VE, and the navigation between the residential area and the safety place. The navigation in the VE is, as stated earlier, performed using a wheelchair (see figure 4.4). As with the interaction model for manipulation of physical objects, the wheelchair navigational model was already implemented in the VR engine. A benefit of using the wheelchair is that it does not require a lot of physical space, as the wheelchair is rigged in a fixed position with counters at the wheels to measure the movement of each wheel. Thus, the user will not move in the physical room, only in the VE. The use of a wheelchair can become a problem, since it might not feel natural for the user to navigate that way. But as this application will be used as a proof of concept to test if this method is a step in the right direction, we do not consider this a big problem. Besides the possibility to navigate in the residential area and the safety place, the user needs a way to move between the residential area and the safety place, and vice versa. Some form of instant transportation was needed, and from this need the idea of a portal system was born.

Figure 4.4: The wheelchair.

The idea was to place a portal of some sort in both the residential area (inside the apartment) and the safety place, and as soon as the user rolled into one of these portals he or she would instantly be transported to the other location. These portals were chosen to take the shape of elevator doors. Elevator doors are neutral, they look almost the same all over the world, and they have the inherent meaning of transporting their user. The elevator doors will be scripted to open automatically when the user is close enough, so the user does not have to open the elevator door him- or herself.

The user will also have the possibility to pause the simulation through a pause button, which will be placed on some form of physical controller (i.e. an interaction device). When the user presses the button the simulation will stop, and the user will see a black screen in the HMD. When the user feels comfortable enough, the simulation can be resumed just where it was left off. The reason for implementing such a function is that we want to give the user options for how to deal with the anxiety which can be experienced in the VE. The user will have the possibility either to pause the system or to enter the safety place. Either way the patient chooses, the therapist will understand that the user has experienced something which made him or her leave the traumatic scene.

4.4.2 Therapist interface

One of the great advantages of VR-based therapy over regular therapy is the possibility to control the actual flow of stimuli. Therefore it is important to design and develop an easy-to-use interface for the therapist. In our case this means that we want to give the therapist control over the different attack states described earlier in the scenario. Through a number of different buttons the therapist will be able to start and stop each state, and thus control the stimuli flow. Moreover the therapist will have access to a pause button similar to the one the user has available. The therapist will also have a button which sends the user directly to the safety place. This might come in handy if the user needs help or is stuck in some way when trying to move to the safety place.


Chapter 5

Implementation

The design in the previous chapter resulted in a blueprint for our system. This blueprint consisted of three different parts: scenario, environment, and interaction. These parts will now be implemented using the framework Colosseum3D. This chapter will review and discuss this implementation process.

5.1 Implementing virtual environments

This chapter will take us through the implementation of our VE. The implementation consisted of two parts: creating the content, and putting it all together. To create the content of our VE we used a number of different applications; for example, we used an application called Terragen (see below) to generate the needed skyboxes. When we had created all the content of our VE, we put all the objects together and created the final VE. For this part we used Colosseum3D.

5.2 Tools

To aid us in the implementation we have used a number of tools. In this section we will start by taking a look at the hardware setup that was used. After that we will review what software we have used. Finally we will take a closer look at Colosseum3D.

5.2.1 Hardware

To make the VE accessible to the user we needed some special types of hardware. In the last chapter we mentioned some of the components we were to make use of; these fall under the category of I/O devices in our VR-system model (see figure 4.1). To start with, we have the Virtual Research V8 HMD [Virtual Research, 2005] (see figure 5.1(a)), an HMD with built-in headphones. Through this HMD the user is able to look around in the VE and experience spatial sound. Next we have the tracking system, an Ascension MotionStar [Ascension Technology Corporation, 2005] (see figure 5.1(b)). The tracking system calculates the position of the user's head so that the correct graphics are shown. Interaction is provided through two components: a pair of Fakespace Pinch Gloves [Fakespace Labs, 2005] (see figure 5.1(c)) to let the user manipulate physical objects in the VE, and the wheelchair which lets the user navigate in the VE. The wheelchair setup has been developed at the University of Umeå. The system also uses a PC with the following specifications:

• Dual Intel 3.2 GHz CPUs
• 1 GB RAM
• GeForce FX 5900 PCI Express

(a) Head mounted display. (b) Tracking system.

(c) Pinch gloves.

Figure 5.1: Hardware used.

5.2.2 Software

The last section dealt with the I/O devices of our VR-system model; in this one we will review the Software & Databases part. To build all the graphical objects of our VE, we mainly used Autodesk 3D Studio Max r7 [Autodesk, 2005]. The content produced in 3D Studio Max was exported to a format which Colosseum3D can handle, using a modified version of OSGexp 0.9.2b [Jensen, 2005]. MultiGen-Paradigm Creator [MultiGen-Paradigm, 2005] was used to set up some physical attributes of the VE. Terragen 0.9.19 [Planetside Software, 2005] was used to produce skyboxes. All other 2D image manipulation was done in GIMP 2.0.5 [Kimball and Mattis, 2005]. Finally, Audacity 1.2.3 [Mazzoni, 2005] was used for sound manipulation.

5.2.3 Colosseum3D

Colosseum3D is a framework developed at VRlab, written in C++, which concentrates on the authoring process of virtual environments [Backman, 2005a]. The framework is modular and aims to use existing open source software. The most distinct features of Colosseum3D are the possibility to simulate a rich dynamic environment with a natural and intuitive interaction method. A simulation can be created using Colosseum3D's own descriptive file format (*.osv), the Lua scripting language [PUC-Rio, 2005], or a C++ API. We chose to use the descriptive file format and Lua scripting. Figure 5.2 shows the general execution process of Colosseum3D.

Figure 5.2: Colosseum3D execution.

Colosseum3D uses existing, mostly open source, libraries; the most important parts of Colosseum3D for our implementation are OpenSceneGraph (OSG) [OSG Community, 2005], Vortex [CMLabs Simulations, 2005], and OpenAL [Loki Entertainment, 2005]. Rendering in Colosseum3D is done using OSG, an open source high-performance 3D graphics toolkit which uses the popular scenegraph paradigm to describe the scene and render it. Rigid body dynamics is implemented by the toolkit Vortex, a commercial real-time dynamics engine (the only non open source part of the framework). OpenAL is used to handle 3D sound in Colosseum3D, and on top of this OpenAL++ [Hämälä, 2002] is used, an object-oriented abstraction layer written in C++. On top of OpenAL++ we find osgAL [Backman, 2005b], which lets the developer place sound sources directly in the scenegraph. Another important part of Colosseum3D is the already implemented components for interaction and for handling some specific hardware. We have already discussed these forms of interaction in the design chapter, and now we will attach these parts to our application.

5.3 Scenario

In the previous chapter we compared the scenario with a movie script; i.e. a series of events which we wish to control in some manner. The implementation of the scenario will be equivalent to setting up these events. Colosseum3D provides us with an abstraction called events. We can make these events start and stop, they can be scheduled to start at a specific time, and they can also be reset. We will break down our states into smaller parts, and each of these parts will be controlled by an event. This implementation was entirely done in Lua.

Our scenario was made up of five different states. These all differed from each other, so we had to construct unique events for each and every one of them, even though we were able to reuse the same ideas for several of them. The initial state and state 1 contain only different sounds, state 2 contains both sound and exploding buildings, state 3 contains sound, exploding buildings and helicopters, and the final state contains sounds and burning buildings.
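The report does not reproduce the Lua code, but the general pattern is easy to illustrate. The sketch below is C++ rather than Lua, and every name in it (the Event struct, makeState2, the part names and times) is invented purely for illustration; it mirrors the start/schedule/reset operations described above, not Colosseum3D's actual event API:

#include <functional>
#include <string>
#include <vector>

// One schedulable part of a state. This struct and everything below is
// hypothetical; Colosseum3D's real event abstraction is not shown in
// the report.
struct Event {
    std::string           name;
    double                startTime;  // scheduled simulation time (seconds)
    std::function<void()> action;     // fired once when the event starts
    bool                  done;       // resetting an event clears this flag
};

// State 2 ("air strike") broken down into smaller, individually
// controlled parts; the times are made up.
std::vector<Event> makeState2() {
    return {
        { "jetSound",   0.0,  []{ /* start approaching F-16 sound */ }, false },
        { "firstBomb",  8.0,  []{ /* explosion: particles + model switch */ }, false },
        { "secondBomb", 12.0, []{ /* second explosion */ }, false },
    };
}

// Trivial scheduler: fire every event whose start time has passed.
void update(std::vector<Event>& events, double simTime) {
    for (Event& e : events)
        if (!e.done && simTime >= e.startTime) { e.action(); e.done = true; }
}

Decomposing each state into independently schedulable events like this is what lets the therapist start, stop, and reorder the states freely during a session.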

We will start by looking at the implementation of sound. Since Colosseum3D uses osgAL, we were able to place sounds anywhere we wanted in our scene by inserting a sound node into our scenegraph. We could, for instance, attach a sound node containing music to the node containing the radio in our scene, which resulted in the music following the position of the radio (see figure 5.3). So if we were to move the radio from its original position, the music would also be moved. A sound can also be tagged as "ambient", which means that it is heard everywhere in the scene. The sound of waves in the beach scene is an example of an ambient sound.

Figure 5.3: Soundnode attached to radio node.
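As a rough illustration of this idea, the sketch below attaches a positional sound to the radio's transform. It is written from memory of osgAL's general design (a SoundState wrapped in a SoundNode that is parented into the scenegraph); the exact class names and signatures should be verified against the osgAL version used, additional setup (such as source allocation through the sound manager) may be required, and the file and variable names are invented:

#include <osg/MatrixTransform>
#include <osgAL/SoundNode>
#include <osgAL/SoundState>
#include <openalpp/Sample>

// Hook a music source onto the radio's transform node. Because the
// SoundNode sits below the transform in the scenegraph, the sound's
// position follows the radio wherever it is moved.
void attachRadioMusic(osg::MatrixTransform* radioXform)
{
    osgAL::SoundState* music = new osgAL::SoundState("radioMusic");
    music->setSample(new openalpp::Sample("radio.wav"));  // file name invented
    music->setAmbient(false);  // positional; true would make it heard
                               // everywhere, like the beach wave sounds
    music->setLooping(true);
    music->setPlay(true);

    radioXform->addChild(new osgAL::SoundNode(music));
}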


One way to animate objects is to use OSG's callback functions to manipulate objects over time. A NodeCallback is assigned to a transformation node, which lets the NodeCallback feed the transformation node with matrices. We used this technique to animate the rotors of the helicopters, as it was easy to use for cyclic animations.
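A minimal sketch of such a callback, using the standard OSG pattern (the angular speed and variable names are made up for the example):

#include <osg/MatrixTransform>
#include <osg/NodeCallback>
#include <osg/NodeVisitor>

// Spins a MatrixTransform around its Z axis every frame, the cyclic
// pattern used for the helicopter rotors.
class RotorCallback : public osg::NodeCallback
{
public:
    virtual void operator()(osg::Node* node, osg::NodeVisitor* nv)
    {
        osg::MatrixTransform* xform = static_cast<osg::MatrixTransform*>(node);
        double t = nv->getFrameStamp()->getReferenceTime();
        xform->setMatrix(osg::Matrix::rotate(t * 30.0, osg::Vec3(0.0f, 0.0f, 1.0f)));
        traverse(node, nv);  // let the visitor continue below this node
    }
};

// Usage: rotorXform->setUpdateCallback(new RotorCallback);

Because the callback derives the angle from the frame's reference time rather than counting frames, the rotor spins at a constant rate regardless of frame rate.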

Another way to animate objects is to use OSG's AnimationPaths. An AnimationPath allows the developer to specify a number of ControlPoints which describe the position (and scale and rotation if needed) of an object at a specific time. OSG then interpolates the position of the object between these ControlPoints.
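For example, a simple two-point flight could be set up as below (the times and coordinates are invented; in the actual application the paths were exported from 3D Studio Max, as described next):

#include <osg/AnimationPath>
#include <osg/MatrixTransform>

// Build a two-point path; OSG interpolates the transform between the
// ControlPoints over time.
osg::ref_ptr<osg::AnimationPathCallback> makeFlightPath()
{
    osg::ref_ptr<osg::AnimationPath> path = new osg::AnimationPath;
    path->setLoopMode(osg::AnimationPath::NO_LOOPING);
    path->insert(0.0,  osg::AnimationPath::ControlPoint(osg::Vec3d(0.0, 0.0, 50.0)));
    path->insert(10.0, osg::AnimationPath::ControlPoint(osg::Vec3d(400.0, 0.0, 60.0)));
    return new osg::AnimationPathCallback(path.get());
}

// Usage: helicopterXform->setUpdateCallback(makeFlightPath().get());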

However, it can be very impractical and tiresome to specify these ControlPoints manually if you need a more complex animation. For more complex animations, such as the flight paths of the helicopters, we first animated a dummy object in 3D Studio Max, then exported the animation to OSG using OSGexp. Since OSGexp exported the animations done in 3D Studio Max to an AnimationPath, the AnimationPath could then be extracted from the exported file and attached to whatever object in the code we wanted.

The exploding and burning buildings were made up of a combination of OSG's particle systems and SwitchNodes. The particle systems provided the fire, smoke, and explosions. The SwitchNode was used to switch between the original house model and the destroyed one, as shown in figure 5.4.

Figure 5.4: SwitchNode alternating between buildings.
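In OSG terms, the two-state building can be sketched as follows; setSingleChildOn hides all children except the chosen one:

    #include <osg/Switch>

    // Build the two-state building: child 0 intact, child 1 destroyed.
    osg::ref_ptr<osg::Switch> makeBuilding(osg::Node* intact, osg::Node* destroyed)
    {
        osg::ref_ptr<osg::Switch> building = new osg::Switch;
        building->addChild(intact, true);       // shown before the attack
        building->addChild(destroyed, false);   // hidden until the explosion
        return building;
    }

    // Triggered by the explosion event:
    void explode(osg::Switch* building)
    {
        building->setSingleChildOn(1);          // hide intact model, show destroyed one
    }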

To further add to the realism of the explosions, a shaker is triggered at each explosion. The shaker uses a noise function based on the Perlin noise function [Perlin, 1985] to create the trembles. The shaker is based on the MatrixTransform class found in OSG, and is placed above the three states in the scenegraph (see figure 5.5) so that it affects everything in the residential area. The shaker uses the MatrixTransform to translate everything below it in the scenegraph according to the noise function, thus creating the shake effect.
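A sketch of the shaker as an update callback on a MatrixTransform; here a simple sine-based function stands in for the Perlin noise used in the real implementation, and the amplitude is an example value:

    #include <cmath>
    #include <osg/MatrixTransform>
    #include <osg/NodeCallback>
    #include <osg/NodeVisitor>

    class ShakerCallback : public osg::NodeCallback
    {
    public:
        void setAmplitude(double a) { _amplitude = a; }   // 0 disables the shake

        virtual void operator()(osg::Node* node, osg::NodeVisitor* nv)
        {
            if (nv->getFrameStamp()) {
                double t = nv->getFrameStamp()->getSimulationTime();
                osg::MatrixTransform* xform = static_cast<osg::MatrixTransform*>(node);
                // Translate everything below this node by a small noisy offset.
                xform->setMatrix(osg::Matrix::translate(
                    _amplitude * noise(t * 17.0),
                    _amplitude * noise(t * 23.0),
                    _amplitude * noise(t * 29.0)));
            }
            traverse(node, nv);
        }

    private:
        // Smooth pseudo-noise in [-1, 1]; the real shaker uses Perlin noise.
        static double noise(double x) { return std::sin(x) * std::sin(1.3 * x + 0.7); }
        double _amplitude = 0.05;
    };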

The hardest part of the implementation of the scenario was synchronizing everything. Synchronizing sound and animations in particular turned out to be a somewhat tiresome trial-and-error process. Another problem was handling the rather large scenegraph we ended up with. This is a part we feel can definitely be restructured to make it easier to understand for new developers of the application.


Figure 5.5: Overview of the position of the shaker in the scenegraph.

5.4 Environment

The scenario left us with two different environments to be constructed: the residential area, and the safety place. These were constructed by modeling in 3D Studio Max. Scenes modelled in 3D Studio Max cannot be used directly in Colosseum3D but need to be exported to a format which Colosseum3D can understand, such as OSG's own formats .osg or .ive. In this project we have used the exporter OSGexp to perform this task. OSGexp was a bit tricky to use; procedural materials cannot be used in the scene, and large scenes are problematic to export. Even so, the advantages of using this exporter outweighed these problems. One very nice feature of OSGexp is the possibility to export lightmaps baked into the materials of the objects in the scenes (i.e. to create textures based on the objects' appearance in the scene). Another feature that speaks in the exporter's favour is the way it exports animations made in 3D Studio Max.

5.4.1 Residential area

The residential area was broken down into two parts: the apartment where the user will be, and the residential area outside the apartment. The apartment was modeled using reference photos of an actual apartment. This proved to be a convenient approach, as little effort was needed to come up with the design and interior of the apartment.

Our VE was made up of objects, most of them modeled in 3D Studio Max. Objects in Colosseum3D have both visual attributes (i.e. the graphical representation seen on the screen) and physical attributes (see example object below):

Object {
    Size [0.1 0.2]
    Position [0 0 0]
    Orientation [0 0 45]
    Dynamic 1
    VisualAttributes {
        Geometry {
            File "box.osg"
        }
    }
    PhysicalAttributes {
        Mass 1.0
        Material 1
        Geometry {
            Primitive "box"
        }
    }
}

Some objects in our world only needed visual attributes, as they were not used for interaction or collision detection. An example of an object with only visual attributes is the lamp on the table next to the bed. Other objects needed both visual and physical attributes in order to be used for interaction and collision detection. The most important physical attributes for us were mass, material (collision properties), and geometry. Geometry specifies what type of geometry will be used in collision handling, as it is often desirable to use a simpler collision geometry than the visual geometry (which leads to simpler calculations, often with almost no noticeable difference compared to using the more complex geometry).

These physical properties can be defined in two ways: either in the osv-file, or in 3D Studio Max. 3D Studio Max provides a view of the scene called the Schematic View (see figure 5.6), which shows the scene to the user as a hierarchical structure (almost like a scenegraph) and lets the user link objects together and edit the objects' properties. The object properties field, which all objects in 3D Studio Max have, is exported along with the object so that it can be accessed in Colosseum3D as well. Linking objects is used to connect the visual and physical attributes through a dummy object, and editing the object properties allowed us to set the mass and material of the object. An example of this structure is shown in figure 5.7, where the usage of a composite object is also shown. A composite object lets the user construct objects whose physical or visual attributes consist of two or more objects; in this example, the visual attributes use two different objects.

The residential area was modeled completely from scratch in 3D Studio Max. A few reference photos of Middle Eastern cities were used, but the majority was developed from discussions and our own ideas. A few of the buildings making up the residential area were selected to be attacked during the air strike and helicopter attack. These buildings were made in two versions: one normal version (see figure 5.8(a)), and one "destroyed" version to give the impression of an exploded building (see figure 5.8(b)).

Figure 5.6: Creation of physical attributes in the Schematic View.

A crucial part of the appearance of computer generated scenes is the lighting of the scene. A technique often used in, for instance, computer games is pregenerated lightmaps. OSGexp provides the possibility to export lightmaps generated in 3D Studio Max. Some form of global illumination algorithm is usually used to construct the lightmaps; in our case we chose 3D Studio Max's internal radiosity engine. The lightmaps were then rendered in 3D Studio Max using the function "Render to texture", which first unwraps the selected objects and then renders an image containing the rendered texture for each object, i.e. the lightmap (see figure 5.9 for an example). Even though the function works very well, the reader should be aware of the danger of combining it with 3D Studio Max's layers and groups, as objects in layers which belong to groups are often "forgotten" in the rendering, which can lead to strange results.

5.4.2 Safety place

Only the beach of the two original safety places was implemented, due to time limitations. The beach was modeled in 3D Studio Max, where the physical attributes of the scene were also added. The most interesting part of the beach scene is probably the water, which was done entirely in Colosseum3D. It uses an OpenGL shader combined with an animated noise texture: the shader reflects the skybox used in the scene, and the noise texture provides the waves. This is then applied to a plane, and the result is a reasonably realistic water surface. An effect of the animated water texture is that the safety place seems more real and alive. Lightmaps were also added to give the scene a more realistic look.
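The water shader itself is not listed in this report; the sketch below shows how such a shader pair and the animated noise texture can be attached to the water plane's state set in OSG. The shader file names are placeholders:

    #include <osg/Node>
    #include <osg/Program>
    #include <osg/Shader>
    #include <osg/StateSet>
    #include <osg/Texture2D>

    // Attach a reflection/wave shader pair and the animated noise texture
    // to the water plane's state set.
    void setupWater(osg::Node* waterPlane, osg::Texture2D* noiseTexture)
    {
        osg::ref_ptr<osg::Program> program = new osg::Program;
        program->addShader(osg::Shader::readShaderFile(osg::Shader::VERTEX,   "water.vert"));
        program->addShader(osg::Shader::readShaderFile(osg::Shader::FRAGMENT, "water.frag"));

        osg::StateSet* ss = waterPlane->getOrCreateStateSet();
        ss->setAttributeAndModes(program.get(), osg::StateAttribute::ON);

        // Texture unit 0 carries the animated noise texture that drives the waves.
        ss->setTextureAttributeAndModes(0, noiseTexture, osg::StateAttribute::ON);
    }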

Figure 5.7: Structure of a composite object.

Figure 5.8: The two states of the building models: (a) a building before the attack; (b) a building after the attack.

An area which caused some problems when modeling the environment was the lighting, especially the lighting of the apartment. The first problem was the radiosity engine. The engine is slow, which makes it tiresome to try different settings. It also produces strange results in some cases; for example, light leaked through the roof of the toilet into the kitchen (which might have been caused by non-welded vertices or too low a density in the radiosity mesh). Another problem with the lighting of the apartment was making the lights produce correct shadows. The shadows required a lot of tweaking and testing before reaching an acceptable appearance. Even the final shadows still look a bit strange in some places. Another area of the modeling which led to problems was the balance between reaching a sufficient level of realism and keeping the polygon count low. However, even though the final VE does suffer from a high polygon count and can definitely be improved, it still runs smoothly on the system used in the project.


Figure 5.9: Lightmap of a bathroom wall.

5.5 Interaction

We had two different interfaces to implement: one for the user and one for the therapist. We used parts already implemented in Colosseum3D as well as implementing some parts from scratch. The parts we built ourselves were written in Lua.

5.5.1 User interaction

We started by looking at what was already available for us to use, and we found implementations for the HMD, the Pinch Gloves, and the wheelchair. These implementations were easily attached to our application, even though some testing was needed to verify that everything worked correctly.

Next we needed to implement a few things ourselves. First we had the navigation between the apartment and the safety place. The navigation used an elevator door to let the user move between the two locations. A Lua script was written to handle this. The script used Colosseum3D's collision handling to test whether the user is in the vicinity of the door. Collision handling was done by first specifying which two objects should be tested, and then by describing what should happen when the objects make contact (we could also describe what should happen after the contact, and if the objects stay in continuous contact, but these were not needed to accomplish what we wanted in this case). In our case we needed two sensors: one that would handle the opening of the elevator door, and one that would make sure that the user was inside the elevator and then send the user to the new location. The first one was placed on the floor/ground in front of the elevator and the other one inside the elevator. In figure 5.10 we can see this panel displayed in front of the elevator in the apartment. The second one used a SwitchNode to hide the location which the user had just left and to unhide the location where he or she was going (see figure 5.11).

Figure 5.10: Floorpanel used to open door.
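The sensors were set up in Lua against Colosseum3D's collision handling, which is not reproduced here. Purely as an illustration of the two-sensor logic, a placeholder C++ sketch might look like this:

    #include <functional>

    // Placeholder for a Colosseum3D trigger volume that fires on contact with the user.
    struct TriggerSensor {
        std::function<void()> onContact;
        void contact() const { if (onContact) onContact(); }
    };

    void setupElevator(TriggerSensor& floorPanel, TriggerSensor& insideElevator)
    {
        // Sensor 1, on the floor in front of the elevator: open the door.
        floorPanel.onContact = [] { /* open the elevator door */ };
        // Sensor 2, inside the elevator: flip the SwitchNode to the other location.
        insideElevator.onContact = [] { /* hide the current location, unhide the other */ };
    }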

Besides the elevator navigation we also needed to implement a general pause function for the user and the therapist. To pause the simulation running in Colosseum3D we needed to pause several processes.

First, we needed to pause all the scheduled events so they did not fall out of synchronization with the rest of the processes. Events in Colosseum3D are handled by the library VRUtils, a utility library which handles, for example, operating-system-specific features and adds extra functionality to OSG. In VRUtils we find the ScaleTimer object, which allows us to scale the rate of the time which, for example, events are synchronized to. So by setting the scale factor to zero the simulation stops, and by bringing it back up to one the simulation progresses as before. Secondly, we needed to pause all animations, or at least all AnimationPath animations. We did not need to pause the callback animations, since these will look (almost) the same at any given time, which the AnimationPath animations will not. The AnimationPath had a pause function which we could use. Thirdly, we needed to pause the physical simulation. The physical simulation in Colosseum3D is handled by the World object. This object handles collisions, generates responses, and simulates the dynamic behaviour of the objects. The simulation of the physical system, created by the World object, can be enabled and disabled using the public method setEnable(). Finally, we also needed to pause all sounds playing at the time the pause button is pressed, which could be done by using the pause function found in the SoundState class.
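Collected in one place, the pause sequence amounts to something like the sketch below. setEnable() is the method named above and setPause() exists on OSG's AnimationPathCallback, while the ScaleTimer, World and SoundState stubs and their method names are assumptions standing in for the real VRUtils/Colosseum3D classes:

    #include <vector>
    #include <osg/AnimationPath>

    // Stand-ins for the real Colosseum3D/VRUtils objects (names assumed):
    struct ScaleTimer { void setScale(double) {} };   // scales the event clock
    struct World      { void setEnable(bool)  {} };   // the physics simulation
    struct SoundState { void setPause(bool)   {} };   // a playing sound

    void setPaused(bool paused, ScaleTimer& timer, World& world,
                   std::vector<osg::AnimationPathCallback*>& animations,
                   std::vector<SoundState*>& sounds)
    {
        timer.setScale(paused ? 0.0 : 1.0);               // 1. freeze scheduled events
        for (auto* a : animations) a->setPause(paused);   // 2. AnimationPath animations
        world.setEnable(!paused);                         // 3. rigid body dynamics
        for (auto* s : sounds) s->setPause(paused);       // 4. currently playing sounds
    }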

5.5.2 Therapist interaction

Figure 5.11: Overview of switch nodes.

As the therapist will be able to control the VE through a number of keys on the keyboard, we needed to handle these key presses in some way. Colosseum3D uses an abstraction called Interactors, which makes the handling of pressed keys very simple to implement. An Interactor simply triggers a certain event when a certain key is pressed. So when the therapist, for example, presses the key assigned to start state 1, the Interactor belonging to state 1 starts the air-raid siren and stops the music. The Pause button and Safety Place button also use this structure.
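The Interactor class itself is not shown in this report; the same key-to-action mapping can be sketched with OSG's standard event handler, with the key bindings and the three action stubs as placeholders:

    #include <osgGA/GUIEventHandler>

    class TherapistKeyHandler : public osgGA::GUIEventHandler
    {
    public:
        virtual bool handle(const osgGA::GUIEventAdapter& ea, osgGA::GUIActionAdapter&)
        {
            if (ea.getEventType() != osgGA::GUIEventAdapter::KEYDOWN)
                return false;
            switch (ea.getKey()) {
                case '1': startState1();     return true;  // siren on, music off
                case 'p': togglePause();     return true;  // the pause function above
                case 's': gotoSafetyPlace(); return true;  // elevator shortcut
                default:  return false;
            }
        }

    private:
        void startState1()     { /* trigger the events of state 1 */ }
        void togglePause()     { /* call the pause logic */ }
        void gotoSafetyPlace() { /* switch to the beach scene */ }
    };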


Chapter 6

Results and discussion

This last chapter will sum up the whole project and present what has and has not been accomplished, discuss the result, and finally give the reader an idea of what can be improved and extended.

6.1 Results

The result of this project is the application that has been developed. The application, which we have decided to call “Safety Place”, places the user in a fully immersive and interactive VE, and is to be used in treatment of PTSD. It also gives the therapist control over the different stimuli which the user will encounter in the VE.

The application is made up of a scenario, an environment, and two interaction interfaces. The scenario contains five different states, of which three can be controlled. These states contain stimuli of increasing intensity and simulate the course of events of an airstrike against a Middle Eastern residential area, where the user finds him- or herself in an apartment. State 1 contains an air-raid siren being played. State 2 contains an airstrike performed by aircraft (see figure 6.1(a)). State 3 ends the attacking sequence with an attack by two helicopters (see figure 6.1(b)). The scenario also contains a task for the user to perform, to add to the interactivity of the application.

Two different environments were constructed: a residential area and a safety place (in the form of a tropical beach). The residential area (see figure 6.2) was made as neutral as possible, both to suit as many users as possible and to let the users fill in the blanks with their own experiences. The residential area also contains the apartment where the user is situated (see figure 6.3). The main purpose of the safety place was to have a relaxing effect on the user. A tropical beach was constructed (see figure 6.4), as it is a place that many people associate with relaxation. A second safety place, a forest clearing, was also designed but never implemented due to insufficient time.

The most important factor for the sense of presence in a VE is the level of interactivity available to the user. By using some previously implemented types of interaction we have given our application a high level of interactivity. The user has the ability to interact with previously defined physical objects in the scene, and to move around freely using a wheelchair. To enter the safety place, the user moves into the elevator in the apartment.

References
