
Expression of Emotion in Virtual Crowds:

Investigating Emotion Contagion and Perception of Emotional Behaviour in Crowd Simulation

MIGUEL RAMOS CARRETERO

Master Thesis at CSC
Supervisor: Christopher Peters

Examiner: Olle Bälter


Abstract

Emotional behaviour in the context of crowd simulation is a topic of growing interest in the area of artificial intelligence. Recent efforts in this domain have sought to model emotional emergence and social interaction inside a crowd of virtual agents, but further investigation is still needed in aspects such as the simulation of emotional awareness and emotion contagion. Many questions also remain about how emotional behaviour is perceived in the context of virtual crowds.

This thesis investigates the current state-of-the-art of emotional characters in virtual crowds and presents the implementation of a computational model able to generate expressive full-body motion behaviour and emotion contagion in a crowd of virtual agents. As a second part of the thesis, this project presents a perceptual study in which the perception of emotional behaviour is investigated in the context of virtual crowds. The results of this thesis reveal some interesting findings in relation to the perception and modelling of virtual crowds, including relevant effects of emotional crowd behaviour on viewers, especially when virtual crowds are not the main focus of a particular scene. These results aim to contribute to the further development of this interdisciplinary area of computer graphics, artificial intelligence and psychology.


Referat

Emotional Behaviour in Simulated Crowds

Emotional behaviour in simulated crowds is a topic of growing interest within the area of artificial intelligence. Recent studies have looked at the modelling of social interaction within a group of virtual agents, but further investigation is still needed in aspects such as the simulation of emotional awareness and emotion contagion. Regarding the perception of emotions, many questions also remain about how emotional behaviour is perceived in the context of virtual crowds.

This study investigates the current state of the art of emotional characteristics in virtual crowds and presents the implementation of a computer model that can generate emotion contagion in a group of virtual agents. As a second part of this thesis, the project presents a perceptual study in which the perception of emotional behaviour is investigated in the context of virtual crowds.


Resumen

Simulation of Emotions in Virtual Crowds

The simulation of emotions in virtual crowds is a research area of great interest in the field of artificial intelligence. Recent studies have tried to develop virtual characters with emotional and social behaviour, but few studies have yet attempted to simulate emotional awareness and susceptibility to emotion contagion. On the other hand, certain questions remain regarding the perception of emotions in the domain of virtual crowds.

This project is the result of an investigation in the area of emotion simulation for virtual crowds, and includes a detailed description of a computational model capable of simulating virtual crowds with emotional awareness and susceptibility to emotion contagion. The investigation also includes a perceptual study focused on emotional behaviour in virtual crowds. The results of this project include interesting findings with respect to the perception of social behaviour and aim to contribute to the interdisciplinary development of the fields of computer graphics, artificial intelligence and psychology.


Acknowledgements

I would like to thank all the people who helped me with the development of this master thesis:

Thanks to Veronica Ginman, Yoann Gueguen and Cédric Morin for their work on the graphics scene.

Thanks to Hongjie Li and Anxiao Chen for their assistance with the eye-tracker set-up.

Thanks to Nadia Berthouze and her team at UCL for the use of the UCL Interaction Centre (UCLIC) Affective Body Position and Motion Database.

Thanks to Adam Qureshi for his help with the development of the perceptual study and the analysis of the results.

Thanks to my supervisor Christopher Peters for his invaluable help, his guidance and his contagious enthusiasm.

Thanks to my family and friends around the world for being there whenever I needed it.

Finally, special thanks to my mother, my father, my sister and my brother for their support and love day by day.

Thank you all!


Publications

The following papers are included as part of the work of this thesis:

Ramos, M., Peters, C., Qureshi, A. Modelling Emotional Behaviour in Virtual Crowds through Expressive Body Movements and Emotion Contagion. Presented at SIGRAD 2014.

Ramos, M., Qureshi, A., Peters, C. Evaluating the Perception of Group Emotion from Full Body Movements in the Context of Virtual Crowds. Presented at the ACM Symposium on Applied Perception 2014.


Contents

1 Introduction
  1.1. Aims of the Research
    1.1.1. Main Goals
    1.1.2. Research Question
  1.2. General Background
  1.3. Significance
    1.3.1. Industry
    1.3.2. Academia
    1.3.3. Societal
  1.4. Interdisciplinary Aspects
  1.5. Limitations
  1.6. Report Overview

2 Foundations and Theory
  2.1. Virtual Crowds and Multi-Agent Systems
    2.1.1. Multi-Agent Systems
    2.1.2. Virtual Crowds Based on Multi-Agent Systems
    2.1.3. A* Path-Finding Algorithm for Crowd Navigation
  2.2. Emotional Behaviour and Emotion Contagion
    2.2.1. Expressive Body Movements
    2.2.2. Animation of Expressive Behaviour
    2.2.3. Finite State Machines for Animation of Emotional Behaviour
    2.2.4. Emotional Awareness and Emotion Contagion in Crowds
  2.3. Perception of Emotional Expressive Behaviour
    2.3.1. Perceptual Experiments
    2.3.2. Perceptual Studies with Virtual Behaviour
  2.4. Theory Summary

3 Methodology
  3.1. Management Methods
    3.1.1. Specifications and Milestones
    3.1.2. Work Tracking
    3.1.3. Prototypes
  3.2. Implementation Methods
    3.2.1. Computational Modelling
    3.2.2. Off-The-Shelf Components and Software
  3.3. Research Methods
    3.3.1. Research for Computational Modelling
    3.3.2. Research for Experimentation
  3.4. Methodology Summary

4 Implementation
  4.1. Graphic Models
    4.1.1. Character Model
    4.1.2. Scenario Model
  4.2. Animation
    4.2.1. Annotated Affective Data Corpus
    4.2.2. Selection of Emotional Animation
    4.2.3. FSM for Individual Emotional Behaviour
    4.2.4. Steering Paths for Walking Characters
  4.3. Emotional Model
    4.3.1. Internal State for Emotional Characters
    4.3.2. Algorithm of the Emotional Model
  4.4. Architecture of the Simulation
  4.5. Implementation Summary

5 Evaluation
  5.1. Controlled Scenario Simulations
    5.1.1. Scenario 1: Strong Contagion
    5.1.2. Scenario 2: Contagion by Steps
  5.2. Perceptual Study
    5.2.1. Definition of the Study
    5.2.2. Stimuli Composition
    5.2.3. Design and Set-Up
    5.2.4. Experiments
    5.2.5. Analysis of the Data
    5.2.6. Discussion
  5.3. Evaluation Summary

6 Conclusions
  6.1. General Findings
  6.2. Outcomes
  6.3. Reflections
    6.3.1. Gained Experience
    6.3.2. Difficulties
    6.3.3. External Reviews
  6.4. Final Summary and Future Work

A Prototypes
  A.1. First prototype
  A.2. Second prototype
  A.3. Third prototype

B Participant Sheets
  B.1. Ethical Clearance
  B.2. Instructions of the Experiment

C Critique from Reviewers
  C.1. Commentaries from SIGRAD 2014
  C.2. Commentaries from the ACM Symposium on Applied Perception 2014

D Colour Figures

Bibliography


Chapter 1

Introduction

Expression of emotion in virtual characters is a challenging topic in the area of artificial intelligence (AI) and, recently, it has gained strong interest in the context of virtual crowds. The search for more believable artificial behaviour has always been an important issue in both industrial and scientific activities related to entertainment and virtual simulation and, nowadays, advances in computer science and engineering allow for the creation of hundreds of virtual characters and complex digital worlds filled with life [41] (see Figure 1.1). Recent advances in this area have also opened new paths of research in the modelling of virtual emotion for artificial characters. However, some issues still remain in relation to emotional expressive behaviour in the context of multiple characters and the way they interact with each other. Although the current state-of-the-art of virtual characters is broad and there has been much research on crowd behaviour, the simulation of emotional awareness and emotion contagion between artificial characters are two important aspects of virtual crowds that still need further investigation [41].

The development of perceptual experiments for testing how viewers perceive simulations and artificial behaviour is also of strong interest in many aspects of computer science, and the results of these experiments usually help in the search for better computational models and more satisfactory human-computer interaction.

In relation to crowd behaviour, developing better computational models for virtual crowds has become a matter of great interest not only in the areas of special effects for film and video-games, but also in other specific domains such as crisis-training simulation or psychology in relation to the perception of social behaviour. Especially for the latter, many questions still remain about how people perceive expression of emotion in a context of multiple individuals and, in particular, there are few studies dealing with the emotional effects of virtual crowd behaviour.

The purpose of this research is to shed more light on these matters and to contribute to the area of computer graphics related to emotional behaviour in virtual crowds.


Figure 1.1. These pictures display examples of virtual crowds from Monsters University (Pixar, 2013), The Lord of the Rings (New Line Cinema, 2001), and Assassin's Creed (Ubisoft, 2007).

1.1. Aims of the Research

In general terms, this thesis investigates the current state-of-the-art of emotional behaviour for virtual crowds, particularly in relation to emotion contagion between virtual characters. This work also intends to investigate some perceptual aspects related to emotional virtual crowds, particularly when they are part of the context of a certain virtual scene.

1.1.1. Main Goals

This thesis aims to accomplish the following goals:

To develop a computational model capable of generating a virtual crowd in which the characters are both able to convey different emotional behaviours and susceptible to being affected by other characters' emotions (see Chapter 4).

To design a perceptual study with the purpose of evaluating the computational model as well as investigating certain aspects of the perception of virtual emotional behaviour (see Chapter 5).

1.1.2. Research Question

In addition to the goals stated above, this work deals with a study of the effects of social context on the perception of emotions in a virtual scene. Specifically, this research aims to find how an emotional virtual crowd in the background context influences the perception of emotions in a virtual scene (see Chapter 5). As a problem statement, the research question is outlined as follows:

How is the emotional behaviour of virtual crowds perceived in virtual scenes?


1.2. General Background

The simulation of groups of virtual agents has been widely investigated since the capabilities of computer graphics technologies first allowed for the generation of multiple artificial individuals, and one of the most significant works in this area is Reynolds' 'boids' model [37] from 1987. From this early stage, later works have continuously improved the modelling of virtual agents in relation to more believable crowd behaviour [31], autonomous agents [39], collision avoidance [18] and impressions of personality (OCEAN) [10].

In relation to the perception of emotional behaviour in virtual characters, there has been recent research regarding the perception of crowds [14] and body language [32]. More recent studies have also investigated the generation [19] and mapping [3] of expressive motions between artificial agents and real people.

The social awareness and psychology of human crowds have also been a challenging topic in the understanding of these phenomena, and the emotion contagion effect and its significance in human behaviour have been investigated at length [20]. Especially for virtual characters and crowds, recent studies have developed some generic computational models in order to simulate the emotional awareness of virtual characters and the effects of emotion contagion [27, 35, 38, 6, 2].

Nevertheless, further investigation is needed in this area, since the modelling of virtual emotions is still a recent area of research and the current state-of-the-art is still rather basic and far from being mature [29].

1.3. Significance

Virtual crowds and emotional behaviour for virtual characters are topics of increasing interest in AI and computer graphics. The following sections present an overview of some of the most significant contributions of this research to industry and academia, as well as in relation to some societal aspects.

1.3.1. Industry

Believability in virtual crowds, that is, the portrayal of realistic behaviour in virtual characters, is a matter of great importance in several areas of the computer graphics industry. Activities in the entertainment domain, particularly films and video-games, increasingly rely on digitally generated crowds to fill the context of their stories. Famous examples such as the film trilogy The Lord of the Rings or game sagas like Assassin's Creed use a large number of virtual characters in order to represent crowded cities and big armies. Thus, in order to achieve better immersion of viewers into these virtual worlds, it is of great importance to reach more believability in the behaviour of their virtual characters, particularly when dealing with emotional behaviour. As stated by Lasseter [26], when bringing artificial characters to life, one of the most important things to bear in mind is personality and the simulation of a thought process, that is, the simulation of thinking to justify each motion and each posture of the character. Likewise, this thought process should also be emotionally portrayed in the behaviour of the characters.

On the other hand, understanding the effects of social context (the aim of the research question, see Section 1.1.2) on the perception of emotion from a virtual crowd is of great importance, particularly in a domain where the artificial behaviour of multiple agents may alter the perception of other main characters that intend to robustly convey certain emotions. It is of great significance to recognise the situations in a particular scene in which the perception of emotions is context sensitive, so that animators, directors and storytellers can take extra care when dealing with them.

1.3.2. Academia

In the academic domain, the implementation of computational models dealing with the emotional behaviour of multiple characters has been of great significance in the study of social behaviour, particularly for cases of emergency evacuation or panic simulations. Hence, the search for more realistic models of virtual crowd behaviour is of great interest in educational areas such as crisis management and training simulations [27]. Also, regarding emotion contagion, many studies have demonstrated, theoretically, the importance of this contagion effect in the way humans and other species behave [20, 36], which lends further importance to the investigation of this effect in the area of AI, as well as to the development of better computational models capable of simulating this phenomenon.

1.3.3. Societal

Regarding the study of social context in relation to the perception of emotional behaviour of virtual crowds, understanding the human mind has been one of the most important tasks in the area of psychology. Thus, by implementing AI models that simulate particular processes of the mind, certain knowledge about the complexity of the human mind can also be discovered, helping to gather more knowledge about how humans behave and the way emotions are perceived.

Economic and ecological aspects may also relate to the aim of this thesis: crowd behaviour in specific contexts such as urban environments and sustainable cities raises interesting research questions, since these environments are altered and modified to a large extent by the behaviour of human beings and their interaction with them.

1.4. Interdisciplinary Aspects

Although the main area of this thesis is computer graphics, the research domain also deals with several aspects of AI and psychology, mainly in relation to artificial agents, emotion contagion and the perception of emotional behaviour. Due to these interdisciplinary aspects, it is of great importance to clarify certain terms that may not be common for readers without a background in these areas. The following list presents a definition of some of the most important key terms of this thesis (more information about the foundations and theory of this project is given in Chapter 2):

Emotion: For this research, this term is defined as the particular state of mind from which an individual experiences a certain mood. While an emotion represents a state of mind in general (e.g. happiness), mood refers to the physiological and psychological aspect of that state (in the case of happiness, the mood would be well-being) [12].

Virtual character: An animated artificial agent represented by a visual model, usually with humanoid appearance. A large number of virtual characters considered together constitute a virtual crowd [34].

Emotional behaviour: All the movements, gestures and positions made by an individual from which it is possible to infer or deduce a certain emotion [8] (see Section 2.2.1 for more details).

Emotion contagion: The tendency to catch another person's emotions. It is the first step of empathy, the complex cognitive process that consists of evaluating a situation from another person's perspective [20] (see Section 2.2.4 for more details).

Affective appraisal: The human process by which events in the environment are evaluated for their emotional significance with respect to the individual's goals and decisions [23].

1.5. Limitations

The current state-of-the-art in the domain of virtual crowds and emotional behaviour is enormously large, so it is important to define some limitations for this research, since the scope of this thesis cannot cover every single aspect of the area. Hence, this research focuses on the perception of the emotional behaviour of virtual characters in relation to full-body movements, not considering other details such as facial expressions or finger motion. In particular, two emotions are handled: happiness and sadness, both taken from the list of Ekman basic emotions [11] (see Section 4.2.2). In relation to the computational model presented, the implementation focuses on the simulation of emotion contagion in virtual crowds, but it does not consider high-quality rendering strategies for virtual agents or other aspects of crowds such as character variety or advanced motion planning.


1.6. Report Overview

Chapter 1 has introduced the aim of this thesis, including an overview of the main areas of related research, its potential contributions and the boundaries within which the project has been carried out.

The next chapters of this report are organized as follows: Chapter 2 presents the theory of this project, describing the foundations on which this research is based; Chapter 3 details the methodology followed in this project in terms of management, implementation and research; Chapter 4 describes the implementation of the computational model for virtual crowds, including a detailed description of the algorithm for the simulation of the emotion contagion effect; Chapter 5 presents the evaluation of the computational model, first through a controlled simulation and second through a perceptual study; finally, Chapter 6 concludes this report with some comments about the findings of this research as well as some reflections and a final discussion of potential future work in related areas.


Chapter 2

Foundations and Theory

Chapter 2 details the foundations of the work and the theory on which this thesis rests, in order to better understand the basis of this research. The main areas of this research can be categorized into three main domains: virtual crowds, emotional behaviour and perception of emotions.

2.1. Virtual Crowds and Multi-Agent Systems

The first part of this research deals with multi-agent systems and the theory from which virtual crowds are created.

2.1.1. Multi-Agent Systems

Although it is not the intention of this thesis to go into deep detail on this matter, it is important to clarify the definition of this term, since it is the basis of the computational model of this project (see Chapter 4 for more details).

To define what a multi-agent system is, we first need to specify what an agent is. Although there are several definitions of this term, the one that best approximates the purpose of this thesis is the one coined by Jennings et al. [22], who propose that an agent is a computer-based system, situated in an environment, that acts autonomously and flexibly to attain the objectives for which it was created. For the purposes of this research, we will consider an agent as a virtual character represented by a graphic model which interacts with other virtual characters in the context of a virtual scenario.

Thus, a multi-agent system can be defined as an organized group of individual agents able to communicate and interact with each other following a set of rules [22]. The communication takes place through the exchange of messages using certain protocols of coordination and cooperation. In the context of this research, the multi-agent system will be a virtual crowd composed of characters represented by individual agents (see Figure 2.1).


Figure 2.1. Simplified diagram of a multi-agent system. Agent X perceives information from the environment (the scenario and the other agents) through its perceptors and generates responses through its actuators.

2.1.2. Virtual Crowds Based on Multi-Agent Systems

Multi-agent systems are of great importance in crowd simulation, and they contribute significantly to the functionality, individuality and autonomy of the individual virtual characters of a crowd, as discussed by Pelechano et al. [34]. In many computational models for virtual crowd generation, it is on these systems that the implementation of the AI and the interaction between virtual characters rely.

There have been some significant milestones in the area of multi-agent systems applied to crowd simulation. In academia, the pioneering work on multi-agent systems for flocking behaviour is Reynolds' 'boids' model [37].

In the industry, some of the most famous and successful software packages for virtual crowds are Massive SW (http://www.massivesoftware.com/) and Golaem Crowd (http://www.golaem.com/), two simulation systems based on multi-agent approaches that have been extensively used for crowd generation in famous films such as the trilogy The Lord of the Rings and in digital animation such as Pixar's Monsters University.

Other approaches apart from multi-agent systems have also attempted to simulate crowd behaviour with methods based on particle systems, rule-based simulations and social forces [34]. Since these approaches consider the crowd as a whole, they are very effective for general behaviour. However, they ignore the individuality of each member of the crowd, which makes them less effective when dealing with characteristics related to personality such as emotional behaviour. For these reasons, multi-agent systems seem more suitable for producing individual behaviour in the virtual characters of a crowd. This is also the main reason why this research has taken this approach (see Chapter 4).


Figure 2.2. This is an example of how the A* path-finding algorithm works for virtual agents. Supposing that the red rectangle is an obstacle, if the blue agent wants to reach the yellow agent, the A* path-finding algorithm will calculate the shortest path on the graph mesh of the scenario.

2.1.3. A* Path-Finding Algorithm for Crowd Navigation

An important aspect to take into account in crowd behaviour is the steering navigation of walking characters. The most basic behaviour for a character in a virtual crowd is to be able to find a path between an origin point and a destination point while avoiding any obstacles in between, such as walls, buildings or roads where a pedestrian cannot pass. There are already several methods in video-games which deal with this problem. One of the most common is the A* path-finding algorithm, which was used in the computational model of this thesis.

The A* path-finding algorithm is based on a graph mesh, and it calculates the optimal path between two single nodes. For the usual path-finding heuristics, the optimal path is the shortest path. The method follows the same core as Dijkstra's search algorithm [9], with some minor changes in the heuristics. Figure 2.2 shows an example of how this algorithm works for virtual agents. Additional features such as weighting the connections between nodes allow for more realism when calculating paths in environments with non-physical obstacles such as gardens or roads. Although this approach is not the most realistic for steering behaviour, since it always considers the shortest path, it is not the purpose of this thesis to go further into this matter; the reader can find more information about more complex path-finding techniques in the literature of this research [42, 34].
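To make the cost evaluation concrete, the following is a minimal A* sketch over a weighted grid, written in C# (the scripting language used for the simulation scripts, see Chapter 3). The class name, grid representation and cost values are illustrative assumptions and do not reproduce the third-party plug-in used in this thesis.

using System;
using System.Collections.Generic;

// Minimal A* sketch over a 4-connected grid (illustrative only).
// walkCost[y, x] < 0 marks a blocked cell; otherwise it is the cost of stepping onto that
// cell (e.g. 1 for pavement, a higher weight for grass), mirroring the weighted connections
// mentioned above.
static class AStarSketch
{
    public static List<(int x, int y)> FindPath(float[,] walkCost, (int x, int y) start, (int x, int y) goal)
    {
        int h = walkCost.GetLength(0), w = walkCost.GetLength(1);
        var g = new Dictionary<(int, int), float> { [start] = 0f };   // cost from start
        var parent = new Dictionary<(int, int), (int, int)>();
        var open = new List<(int x, int y)> { start };                // open set (linear scan, fine for a sketch)

        // Manhattan distance heuristic; admissible as long as each step costs at least 1.
        float Heuristic((int x, int y) n) => Math.Abs(n.x - goal.x) + Math.Abs(n.y - goal.y);

        while (open.Count > 0)
        {
            // Pick the open node with the smallest estimated total cost f = g + h.
            var current = open[0];
            foreach (var n in open)
                if (g[n] + Heuristic(n) < g[current] + Heuristic(current)) current = n;
            if (current == goal) return Reconstruct(parent, goal);
            open.Remove(current);

            foreach (var step in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
            {
                var next = (x: current.x + step.Item1, y: current.y + step.Item2);
                if (next.x < 0 || next.y < 0 || next.x >= w || next.y >= h) continue;
                if (walkCost[next.y, next.x] < 0f) continue;              // obstacle
                float tentative = g[current] + walkCost[next.y, next.x];  // weighted edge
                if (!g.ContainsKey(next) || tentative < g[next])
                {
                    g[next] = tentative;
                    parent[next] = current;
                    if (!open.Contains(next)) open.Add(next);
                }
            }
        }
        return null; // no path found
    }

    static List<(int x, int y)> Reconstruct(Dictionary<(int, int), (int, int)> parent, (int x, int y) node)
    {
        var path = new List<(int x, int y)> { node };
        while (parent.TryGetValue(node, out var p)) { path.Insert(0, p); node = p; }
        return path;
    }
}

In this sketch, weights above 1 on walkable cells model the non-physical obstacles mentioned above, making the algorithm prefer pavements over, for instance, gardens.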

2.2. Emotional Behaviour and Emotion Contagion

The second area of this thesis concerns the theory of emotional behaviour, the animation of emotional virtual characters and the effects of emotion contagion in relation to crowd behaviour.


2.2.1. Expressive Body Movements

The term body language refers to certain body and facial behaviours in humans and other species from which it is possible to infer intentions, thoughts or emotional states [8]. In relation to this, there have been many studies investigating body movements and the expression of emotions, both in facial expressions and in motion behaviour [11, 5, 8]. As stated in Section 1.5, this thesis focuses on the latter, since dealing with facial expressions requires much more extensive investigation and goes beyond the scope and limitations of this research.

According to Marco de Meijer [8], the inference of emotions from body movements is based on several aspects of the action, mainly characterized by the movements of the trunk and the arms and by the force, velocity and direction of the action. As this author discusses, certain postures and behaviours are strongly related to positive mood, such as openness in arm arrangement and head position. Likewise, other cues can be related to negative mood, such as drooping one's head and closeness in the movements. Although it is difficult to categorize objectively every single gesture, position and movement into separate emotional categories, the results of several studies [5, 25, 8] indicate this general relationship between body movements and emotions, despite minor differences that can be found in relation to gender, age or culture [13].

2.2.2. Animation of Expressive Behaviour

In relation to the animation of emotional behaviour for virtual characters, a great deal of research has aimed to map body movements from real actors onto virtual characters. This process, known as motion tracking or motion capture (MoCap), has become the leading animation technology for virtual characters in the entertainment industry as well as in academic research. Particularly for expressive body movements, recent research carried out at University College London [25] studied several acted and non-acted movements captured with MoCap technologies and categorized them emotionally according to the perception of people from three different cultures (Western, Middle East and Far East). As explained in Chapter 3, the aforementioned research and its MoCap library were the main references used for the animation of the virtual crowd in this thesis.

2.2.3. Finite State Machines for Animation of Emotional Behaviour

At this stage, it is important to clarify how the animation is implemented for several virtual characters. Due to the large number of characters in a crowd, it is not feasible to animate each single character in a traditional way. In these cases, when dealing with AI for animation, a very common and easy-to-use tool is the finite state machine (FSM).


Figure 2.3. Example of a finite state machine for a virtual character. The conditions of 'thirsty' and 'hungry' change the state of the FSM and, therefore, the behaviour of the character.

Figure 2.4. Example of a hierarchical finite state machine. In this case, the condition of 'tired' changes the hierarchy and, depending on it, the behaviour of the character is controlled by a different FSM.

In animation, an FSM is usually represented as a directed graph in which the nodes (states) represent the different animation behaviours that a character can portray and the connections (transitions) depict the conditions for moving from one behaviour to another [23]. Figure 2.3 illustrates an example of an FSM.

Depending on the needs of the simulation and the complexity of the behaviour of the virtual characters, an FSM can become hierarchical, that is, an FSM that allows states to be grouped together [23], enabling several layers of behaviour. Figure 2.4 shows an example of a hierarchical FSM.

In this research, the animation portrayed by the virtual characters of the crowd is controlled by hierarchical FSMs; a minimal sketch of the basic mechanism follows, and more details are explained in Chapter 4.
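As an illustration only, the following C# sketch implements the kind of simple FSM shown in Figure 2.3; it is not the Mecanim graph used in Chapter 4, and the enum and method names are assumptions made for the example.

// Hypothetical sketch of the FSM in Figure 2.3.
enum State { Awake, Eat, Drink }

class CharacterFsm
{
    public State Current = State.Awake;

    // Transition rules: each condition moves the machine to a new state, which in an
    // animation system would select the clip being played.
    public void Update(bool hungry, bool thirsty)
    {
        switch (Current)
        {
            case State.Awake:
                if (thirsty) Current = State.Drink;
                else if (hungry) Current = State.Eat;
                break;
            case State.Drink:
                if (!thirsty) Current = State.Awake;   // "Not Thirsty" condition
                break;
            case State.Eat:
                if (!hungry) Current = State.Awake;    // "Full" condition
                break;
        }
    }
}

A hierarchical FSM, as in Figure 2.4, simply nests a machine like this one inside the states of a higher-level machine.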

2.2.4. Emotional Awareness and Emotion Contagion in Crowds

Emotion contagion, as defined by Hatfield et al. [20], is the tendency to automatically mimic and synchronize expressions, vocalizations, postures, and movements with those of another person and, consequently, to converge emotionally. In relation to body language, Hatfield also proposes that emotion contagion takes place between two or more individuals through an unconscious mechanism of mimicry and synchronization [20].

The theory of emotion contagion has been deeply investigated, and the psychological processes and effects from which this phenomenon arises have a strong scientific base [20]. However, this effect is still missing, or lacks realism, in many current crowd simulations [41], although, as noted in Section 1.2, recent studies have developed the first approaches to imitating this phenomenon in virtual crowds [35, 33].

2.3. Perception of Emotional Expressive Behaviour

The third part of this thesis presents an overview of perceptual studies in relation to how people perceive and infer emotions from virtual behaviour.

2.3.1. Perceptual Experiments

Perceptual experiments deal with the investigation of the way humans respond to and interpret certain stimuli directly related to perception (mainly vision and hearing). With a concise and clear research question, the participants are asked to perform a certain task related to the problem to be solved. The data gathered then serve as the basis for a subsequent analysis in order to find an answer to the aforementioned research question [7].

2.3.2. Perceptual Studies with Virtual Behaviour

How humans perceive and infer emotions from others has been extensively investigated. According to Papelis et al. [33], there are mainly four channels for recognizing emotions from another individual: face, voice, posture and body movement. Particularly for the latter, many studies have investigated how humans recognize emotions through body language, movements and gestures [24], as well as the perception of expressive body movements and emotions in virtual characters [5].

Also, recent studies have dealt directly with computational models for virtual characters and have focused their evaluation on perceptual experiments that help to recognize, evaluate and map expressive motions onto artificial characters from motion capture technologies [4, 3, 19]. In a similar way, the computational model implemented in this thesis is also evaluated through a perceptual experiment.

More details about this are presented in Chapter 5.

2.4. Theory Summary

Chapter 2 has described the theory on which this project is based, including a general overview of the main algorithms and technologies used in the computational model, as well as the foundations needed to understand the basic theory of body language, emotion contagion and the design of perceptual studies.

The next chapter introduces the methodology followed in this research.


Chapter 3

Methodology

Chapter 3 describes the methodology followed for the development of this thesis, including an overview of the methods used for management, implementation and research.

3.1. Management Methods

Due to the amount of work required in a master thesis, it was critical to follow certain methods of management and control to keep track of the work and to obtain feedback on the progress during the development of the thesis. The next sections describe the management methods followed in this research. Additional comments and reflections about the progress of the work are detailed in Chapter 6.

3.1.1. Specifications and Milestones

At an early stage of the development of the master thesis, after the boundaries of the research were settled, the specifications of the project were established with the agreement of the supervisor and the examiner. In this document, a set of activities was created in order to divide the work into individual workable parts. In addition to these activities, several milestones were defined in order to set individual sub-goals to be achieved after each month of the project.

3.1.2. Work Tracking

In order to follow the progress of the specification agreement, several methods were used to keep track of the work monthly, weekly and daily. To track the progress of the work monthly, a time plan based on a Gantt chart was created and updated every month with the progress of the research. In addition to this, the activities of each month were tracked in a tracing log. A certain number of hours was assigned to each activity, aiming to complete them in a monthly workload of 120 solid hours (30 hours/week, 6 hours/day). Also, physical meetings were agreed with the supervisor every week and the important points were recorded in a meeting log. Lastly, individual notes on the progress of the research for each day were kept in a personal log.

3.1.3. Prototypes

Based on an iterative-incremental development process (see Section 3.2), three prototypes were defined for the implementation of the computational model, each of them constituting a software deliverable with incremental features (see Appendix A). In addition, a deadline week was defined for each of these prototypes in order to finish them on time and to keep track of the progress of the research.

3.2. Implementation Methods

Since a significant part of this research was focused on the development of a computational model for virtual crowds, it was essential to delineate certain implementation methodologies and to decide on the external software and programming tools from which the model would be built.

3.2.1. Computational Modelling

The implementation methodology used for the development of the computational model was an iterative-incremental development process drawing on Agile methodologies and Personal Software Development (Humphrey, W.).

The full development of the computational model was divided into three iterations, each consisting of initialization, analysis of the problem to solve, design of a solution, coding of the algorithms, evaluation and a final post-mortem. In each iteration, new features were added incrementally to the computational model according to the three prototypes defined in Section 3.1.3.

Since the computational model was intended to be used in a perceptual experiment, its implementation was always guided by this fact. This required certain decisions when dealing with the visual aspects of the crowd, such as the selection of a character model, the emotional animations, the colours of the scenario and the number of characters, among others. All these characteristics are further explained in Chapters 4 and 5.

3.2.2. Off-The-Shelf Components and Software

For the implementation of the computational model, several software tools were used to develop both the graphical and the internal aspects of the model. Since the project deals with the generation of an emotional virtual crowd in a virtual scenario, the model used to represent each of the individuals of the crowd was a free model of an androgynous mannequin downloaded from the online database TurboSquid (www.turbosquid.com). In addition to this, the emotional animation was taken from two motion capture libraries of acted emotions: the Carnegie Mellon Graphics Lab Motion Capture Database (http://mocap.cs.cmu.edu/) and the UCLIC Affective Posture and Body Motion Database [25]. These MoCap animations were mapped onto the aforementioned mannequin model in 3D Studio Max (www.autodesk.com/products/autodesk-3ds-max/). The virtual scenario used was based on a model developed by other students of KTH as part of a course project.

In relation to the behaviour, the crowd simulation was generated in the Unity game engine (www.unity3d.com), in which the steering behaviour was implemented with a free third-party plug-in based on the A* Pathfinding algorithm (www.arongranberg.com/astar/) (see Chapter 4). The emotional animation of the characters was controlled with Mecanim, Unity's animation system based on hierarchical FSMs (see Chapter 2 for more details). Finally, all the scripts containing the algorithms of the computational model and the emotional behaviour were coded in C# for Unity.

Regarding the perceptual study used in the evaluation (see Chapter 5), an eye-tracker camera (Tobii X1 Light Eye Tracker) was used as part of the experimental set-up, and the participant tests were designed with the software Tobii Studio (www.tobii.com). The sound of the scenes in the stimuli was taken from the open audio database Freesound (http://www.freesound.org/).

3.3. Research Methods

To gather information about the state-of-the-art of the areas closely related to the thesis, it was important to follow a research methodology to keep track of the important findings in each domain. For this purpose, six research logs were created, named: virtual crowds, emotional behaviour, emotion contagion, emotional model, perceptual experiments and a last additional category for general notes. During the first stage of the thesis, extensive preliminary research was carried out in the aforementioned areas, both in the form of articles and books. The result of this research is the Bibliography section that can be found at the end of this report.

3.3.1. Research for Computational Modelling

After reading the literature related to the different areas of this thesis, two recent research works were selected as the basis for the implementation of the algorithms of the computational model. The one with the strongest influence was the research of Pereira et al. [35], who propose an implementation of a computational model based on emotion contagion. The second work, whose theory also contributes to this thesis, was a study by Papelis et al. [33], which focuses on the implementation of the emotional behaviour of individuals using a cognitive approach.



3.3.2. Research for Experimentation

The perceptual study designed as part of the evaluation of the computational model (Chapter 5) has its basis in Cunningham's tutorial [7], which proposes a method for designing perceptual experiments. Following this methodology, the perceptual study was developed in order to test the model and to find an answer to the research question stated in Chapter 1. The considerations taken into account while designing the experiment were as follows:

The concise definition of the experiment: the domain of the research, the main objectives of the study and, most importantly, the research question that the experiment is addressing.

The target audience: the participants who are going to take part in the experiment, which could be limited by age, gender, occupation, nationality, etc.

The stimuli creation: the perceptual stimuli that are going to be presented to the participants during the experiment.

The design of the task for the participants: the clear definition of what the participants should do in the experiment and how the data are going to be collected from them (free description, rating, forced choice, etc.).

The ethical aspects implied by the experiment: this includes the conditions of the experiment and the guarantee of the physical and psychological integrity of the participants. This research follows the ethical guidelines of KTH (http://intra.kth.se/en/regelverk/policyer/etisk-policy-1.27141) (see Appendix B.1).

Two experiments were designed in this project, the first being a pre-study for testing and the second the main experiment, which provided the main results of the research. More information about them is given in Chapter 5.

3.4. Methodology Summary

Chapter 3 has described the main methodologies followed in this research: first, the management methods to outline a work-plan and track the work; second, the implementation methods and the software engineering approach for the development of the computational model; and, third, the research methods to collect the relevant information in the areas of related work and to develop the perceptual study.

The next chapter presents the details of the implementation of the computational model for virtual crowds and emotion contagion.


Chapter 4

Implementation

Chapter 4 describes the implementation of the generation of a virtual crowd with expressive behaviour and emotion contagion. This includes the details regarding the graphic models used, the emotional animation, the steering tools of the crowd and the computational model developed for the control of emotions.

4.1. Graphic Models

To construct the virtual world, two graphic models were used to represent the individuals and the scenario: an androgynous mannequin and a model of the main building of the KTH campus.

4.1.1. Character Model

As explained in Section 3.2.2, a model of an androgynous mannequin was used to represent graphically each of the agents of the crowd (see Figure 4.1). This model was chosen for its simplicity: since this project is limited to full-body motion behaviour, the absence of a face and other details (such as finger motion) suits the purpose of the research well. The only modification made to the model was the addition of a small protuberance on the front of the face to simulate a nose, allowing the direction of the face to be identified more easily.

Figure 4.1. Mannequin models representing virtual characters of the crowd.


Figure 4.2. KTH model representing the contextual scenario for the virtual crowd.

4.1.2. Scenario Model

To create a context in which the virtual crowd moves, a virtual scenario was used to set the steering paths (Section 4.2.4). The scenario chosen was a model of the main building of KTH (see Figure 4.2). This model was chosen, first, for its availability (which saved time, since the purpose of the thesis was not to model a scenario) and, second, for its suitability in setting a context for the perceptual study (Chapter 5).

4.2. Animation

The animation of the virtual crowd was made using motion capture technologies (see Section 2.2.2). Since the androgynous mannequin model was already rigged (that is, equipped with a virtual skeleton ready to be animated), the mapping of the MoCap animation in 3D Studio Max was very straightforward, and only some tweaks were needed to keep the animation clean and believable.

4.2.1. Annotated Affective Data Corpus

Given the interest in using emotional behaviour for the virtual crowd, the animations were selected from two annotated motion-capture libraries based on acted emotions for full-body motion, rated emotionally by different cultures (see Section 3.2.2). The animations were selected according to two types of agents in the crowd: standing characters for conversational groups and individual walking characters.


4.2.2. Selection of Emotional Animation

The emotions selected were focused on two of the six Ekman basic emotions [11]: happiness and sadness, in addition to a neutral emotional behaviour. Certain studies have shown that these emotions are easy to recognise [30], which was the main reason why they were selected. Other emotions such as anger and fear were considered, but they were finally dismissed due to the limits of the research. To clarify these emotional terms, which may be confusing and open to subjective meanings, the following lines present the definitions used in this research for these words:

Happiness: An emotional state in which the individual is in a happy mood and feeling joy, pleasure or contentedness.

Sadness: An emotional state in which the individual is in a sad mood and feeling sorrow, despair or grief.

Neutral: The absence of a noticeable emotional state in an individual, that is, the state of being in a regular mood, neither happy nor sad.

As explained in Section 3.2.2, the MoCap libraries used for the animation of the virtual characters were annotated, that is, each animation file included a description of the emotion or expression that it was intended to convey. The selection of the emotional animation was done following these annotations.

For the animation of walking characters, the tags from which the animations were chosen were happy walk, normal walk and sad walk, the three of them taken from the CMU MoCap database.

For the animation of standing characters the selection required additional work, since the conversations were made from a loop of several animation clips. For neutral conversations, several animations with tags that did not express any emotion were taken from the CMU MoCap database. In the case of happy conversations, the aforementioned neutral animations were mixed with emotional animations taken from the UCLIC Affective database, making use of five additional clips tagged as joyful or happy. The same process was followed for sad conversations, taking five additional clips from the UCLIC Affective database tagged as sad or depressed.

4.2.3. FSM for Individual Emotional Behaviour

To create the behaviour of each animation state, the animations were blended using hierarchical FSMs in Mecanim (Unity's animation system). Each of the three emotional states that the virtual characters are able to convey (happiness, neutral and sadness) is controlled by an individual FSM. In the second level of the hierarchy, the emotional state of each virtual character determines the FSM that controls its behaviour. Figures 4.3 and 4.4 show the hierarchical FSMs for standing and walking characters, respectively. As detailed in Chapter 5, a pre-study independently verified that the sad, happy and neutral expressions were perceived as such when mapped onto the mannequin.


Figure 4.3. Hierarchical FSM for the standing characters of the crowd. Note that, at the lowest level, there are no conditions for moving from one state to the next, so the transitions blend the states automatically when the animation clip reaches the end. Also, to reduce synchronized movements, the initial state is selected randomly for each character at the beginning of the simulation.

Figure 4.4. Hierarchical FSM for the walking characters of the crowd. Random velocities are set at the beginning of the simulation to reduce synchronized movements.
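As an illustrative sketch only (the Animator parameter name "Mood" and its -1/0/1 mapping are assumptions, not taken from the thesis code), the top level of such a hierarchy could be switched from a script by driving a Mecanim parameter on which the transitions between the sad, neutral and happy sub-machines are conditioned:

using UnityEngine;

// Minimal sketch of switching the top-level emotional sub-machine from a script.
// The parameter "Mood" is a hypothetical name for the condition used by the transitions.
public class MoodAnimatorDriver : MonoBehaviour
{
    Animator animator;

    void Awake() { animator = GetComponent<Animator>(); }

    // Called whenever the emotional model changes the character's mood (-1 sad, 0 neutral, 1 happy).
    public void SetMood(int mood)
    {
        animator.SetInteger("Mood", Mathf.Clamp(mood, -1, 1));
    }
}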

4.2.4. Steering Paths for Walking Characters

In the case of the walking characters, it was necessary to find a solution for implementing the steering behaviour, that is, the automatic calculation of feasible walking paths for each virtual agent during the crowd simulation. As commented in Section 3.2.2, a free plug-in for Unity based on the A* path-finding algorithm was used (see Section 2.1.3 for more details about the theory of this algorithm).

The graph from which the algorithm calculates the routes for each character was based on the virtual scenario. A navigation mesh of 280x240 nodes was mapped onto the KTH model and, from it, only the sidewalk pavements were considered pedestrian areas, in addition to three crosswalks (see Figure 4.5).

A total of twenty nodes of the graph mesh were tagged as destination points, and the virtual characters of the crowd were steered by a script to walk randomly from one destination point to another (see Section 4.4). This allowed for automatic steering behaviour of the crowd and gave the general impression of virtual agents walking around the scenario, as sketched below. The velocity of each character was also controlled by the script, with some random variability to steer the crowd more heterogeneously.
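The random wandering could be sketched as follows; this is not the actual thesis script, destinationPoints stands for the twenty tagged destination points, and RequestPath is a placeholder for the call into the A* plug-in, whose API is not reproduced here.

using UnityEngine;

// Illustrative sketch of the random wandering behaviour described above.
public class RandomWanderer : MonoBehaviour
{
    public Transform[] destinationPoints;        // the tagged destination points in the scene
    public float minSpeed = 0.8f, maxSpeed = 1.4f;
    float speed;                                 // consumed by the (omitted) movement update

    void Start()
    {
        speed = Random.Range(minSpeed, maxSpeed); // random velocity to desynchronize the crowd
        PickNextDestination();
    }

    void PickNextDestination()
    {
        var target = destinationPoints[Random.Range(0, destinationPoints.Length)];
        RequestPath(target.position);             // hand over to the path-finding plug-in
    }

    void RequestPath(Vector3 goal)
    {
        // Placeholder: the actual implementation would ask the A* Pathfinding plug-in for a path.
    }

    // Called by the movement code when the current destination point is reached.
    public void OnDestinationReached() { PickNextDestination(); }
}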


Figure 4.5. On the left, top view of the KTH scenario with the navigation mesh on it. The grey zones show the areas in which the virtual characters are able to walk. On the right, snapshot of the crowd simulation.

4.3. Emotional Model

After the virtual crowd was set up in the virtual scenario with the basic behaviour implemented (individual behaviour and steering paths), a computational model (called here the Emotional Model) was developed to make the agents behave emotionally and to implement the emotion contagion effect, by which the characters are aware of the mood of other agents and can also be affected by them (see Section 2.2.4 for more details about the theory). This emotional behaviour was integrated at the individual agent level.

As commented in Section 3.3.1, the work described here builds upon two previous research works in the area of emotion contagion for virtual agents [35, 33].

4.3.1. Internal State for Emotional Characters

Given that the emotional behaviour was implemented at the individual level, the definition of an internal state for each agent was essential. Thus, each character of the crowd has several internal parameters which define its emotional state and other additional characteristics.

The current mood of each agent was defined by an integer on a one-dimensional scale in the range [-1, 1] in order to numerically represent each of the three possible emotional states of the model, with -1 sad, 0 neutral and 1 happy. The emotional animations were blended according to this parameter using the FSMs described in Section 4.2.3. In addition, a secondary parameter was defined to determine the susceptibility of each agent to contagion. This parameter ranged on a float scale from 0.0 to 1.0, with higher values implying a higher probability of catching others' emotions. Thus, the emotional state of each agent was represented by the tuple <e, s>, where e is the current value of its mood and s the probability of being emotionally affected by others.
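A minimal C# sketch of this internal state could look as follows; the type and field names are illustrative and not taken from the thesis code.

using UnityEngine;

// Sketch of the per-agent emotional state <e, s>: mood e in {-1, 0, 1} and susceptibility s in [0, 1].
public struct EmotionalState
{
    public int Mood;              // e: -1 sad, 0 neutral, 1 happy
    public float Susceptibility;  // s: probability of catching a perceived emotion

    public EmotionalState(int mood, float susceptibility)
    {
        Mood = Mathf.Clamp(mood, -1, 1);
        Susceptibility = Mathf.Clamp01(susceptibility);
    }
}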


Figure 4.6. Simplified diagram of the Emotional Model: an emotional agent seen by the Perception Module is passed to the Appraisal Module, which checks susceptibility; if susceptible, the Contagion Module is triggered, followed by an immune time lapse.

Apart from the emotional state, other additional parameters were defined for each character in order to adjust, individually, visual aspects such as model scale, animation velocity and emotional colouring. As explained in Section 4.4, all these parameters were adjustable at the beginning of each simulation through a contextual menu.

4.3.2. Algorithm of the Emotional Model

According to the theory of the emotion contagion effect (see Section 2.2.4), changes in mood emerge from the emotional awareness between the different individuals of the crowd. Thus, to implement this phenomenon in the computational model, it was necessary to make each agent able to perceive, appraise and react to others' emotions. This behaviour was implemented in the model through three separate parts: the Perception Module, the Appraisal Module and the Contagion Module. Figure 4.6 shows a simplified diagram of the algorithm. The next sections present the details of the functional aspects of each part:

Perception Module

To implement the Perception Module, a field of view was defined for each agent in the Unity engine to represent what the character sees and its field of awareness of others' emotions. This field of view was implemented through a ray-casting algorithm. To do so, each virtual character has a capsule attached to its model, which represents its area of influence. In addition, at each time step t of the simulation, each character casts a finite ray in its frontal direction and checks for collision with the capsules of other characters. Perception occurs when the ray of an emotional agent, say character A, intersects the capsule of another agent, say character B. When this happens, character A receives the value e of the emotional state of character B.


while not intersection do
    cast ray R;
    if ray R intersects with capsule of character B then
        take emotional state of B;
        go to Appraisal Module;
    end
end

Algorithm 1: Pseudo-code of the Perception Module
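A minimal Unity C# sketch of this ray-casting step is given below, assuming the hypothetical EmotionalState component introduced earlier. The class name, the ray length and the use of Physics.Raycast are illustrative choices, not necessarily those of the original script.

using UnityEngine;

// Illustrative perception step: cast a finite ray forward and read the
// emotional state of the first agent whose capsule it intersects.
public class PerceptionModule : MonoBehaviour
{
    public float rayLength = 5f;   // finite length of the perception ray (assumed value)

    // Returns the perceived agent's emotional state, or null if nothing is seen.
    public EmotionalState Perceive()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit, rayLength))
        {
            // The capsule of another character was hit; read its emotional state
            // (null if the hit object carries no EmotionalState component).
            // A full implementation would also exclude the agent's own collider,
            // e.g. through layer masks.
            return hit.collider.GetComponent<EmotionalState>();
        }
        return null;
    }
}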

Appraisal Module

When an agent A has perceived the emotion of another agent B, the next step of the algorithm is handled by the Appraisal Module. This module is based on the affective appraisal theory briefly defined in Section 1.4. At this stage, if the emotional states of the two agents differ, an evaluation is made to determine whether emotional contagion takes place. This evaluation uses the susceptibility parameter s of agent A: through a random function based on a uniform distribution (random()), the model draws a float number between 0.0 and 1.0 and compares it with the parameter s. Agent A will be affected by the mood of the other agent if random() < s.

To prevent loops and make the behaviour of the crowd more believable, at this stage, and regardless of whether contagion occurs, agent A sets a time lapse t during which it will be immune to emotional contagion.

if emotional state A != emotional state B then
    start immune time lapse t;
    if random() < susceptibility A then
        go to Contagion Module;
    end
end

Algorithm 2: Pseudo-code of the Appraisal Module
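A corresponding sketch of the appraisal step, again with hypothetical names, is shown below; the duration of the immune time lapse is an assumed value, since the thesis does not specify it.

using UnityEngine;

// Illustrative appraisal step: compare moods, start the immunity time lapse
// and roll a uniformly distributed random number against the susceptibility s.
public static class AppraisalModule
{
    // Returns true if agent 'self' should be emotionally affected by 'other'.
    public static bool Appraise(EmotionalState self, EmotionalState other, float immuneLapse = 3f)
    {
        // Nothing to appraise if both agents already share the same mood.
        if (self.mood == other.mood)
            return false;

        // Regardless of the outcome, the agent becomes immune for a while
        // to prevent contagion loops.
        self.immuneTimeLeft = immuneLapse;

        // Random.value is uniformly distributed in [0.0, 1.0].
        return Random.value < self.susceptibility;
    }
}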

Contagion Module

In the last step, if the susceptibility check was positive in the previous module, then contagion occurs, and the Contagion Module handles the change of mood. In this module, two alternative approaches were implemented: strong contagion and contagion by steps. As explained in Section 4.4, the contagion approach can be selected before starting the simulation.

In the first approach (strong contagion), the character that has been affected simply copies the value of the emotional state of the perceived agent, so that both converge emotionally. Thus, for example, if a happy character (1) is affected by a sad character (-1), the happy character will change its emotional state to sad (-1).


The second approach (contagion by steps) is less aggressive in terms of contagion: the emotional state of the affected character is moved one step down or up (-1/+1) on the one-dimensional mood scale, depending on the emotional state of the perceived character. In this case, if a happy character (1) is affected by a sad character (-1), the happy character will change its emotional state one step down, in this case to neutral (0).

Once the contagion is done, the algorithm loops back to the Perception Module when the immune time lapse t set by the Appraisal Module is over.

if contagion by steps then
    if emotional state B > emotional state A then
        emotional state A := emotional state A + 1;
    else
        emotional state A := emotional state A - 1;
    end
    go to Perception Module when t = 0;
else
    emotional state A := emotional state B;
    go to Perception Module when t = 0;
end

Algorithm 3: Pseudo-code of the Contagion Module
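The two contagion variants could be expressed in Unity C# roughly as follows; the enum, class and method names are illustrative, and the clamp simply keeps the stepped mood on the [-1, 1] scale. Note that, with contagion by steps, a happy agent needs repeated exposures before becoming sad, which matches the behaviour described above.

using UnityEngine;

// Selectable contagion approach (see also the start menu in Section 4.4).
public enum ContagionMode { Strong, BySteps }

// Illustrative contagion step: update the affected agent's mood.
public static class ContagionModule
{
    public static void ApplyContagion(EmotionalState self, EmotionalState other, ContagionMode mode)
    {
        if (mode == ContagionMode.Strong)
        {
            // Strong contagion: copy the perceived mood so both agents converge.
            self.mood = other.mood;
        }
        else
        {
            // Contagion by steps: move one step towards the perceived mood.
            int step = other.mood > self.mood ? 1 : -1;
            self.mood = Mathf.Clamp(self.mood + step, -1, 1);
        }
    }
}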

4.4. Architecture of the Simulation

As commented in Section 3.2.2, the implementation of the computational model was built in the Unity engine. The algorithms for steering paths, animation behaviour and emotion contagion were coded following a scripting methodology in C#. A total of six scripts were written in order to implement the different functionalities of the model (see Figure 4.8):

Spawner: This script runs at the beginning of the simulation to generate the characters of the crowd. Mainly, it sets the initial positions of the walking characters and the standing groups, and it sets the initial mood of each character.

Walking controller: This is the main script for walking characters. It contains the algorithm to calculate the path from the current position of the character to a certain destination point in the scenario. This script is based on the A* path-finding algorithm (see Section 2.1.3). Each walking agent of the simulation has its own instance of this script (a minimal sketch of the destination-setting logic is given after this list).

Animation controller: This is the main script for the emotional behaviour. It contains the parameters regarding the emotional state of the character, as well as the emotion contagion algorithm. Each virtual agent (both walking and standing characters) has its own instance of this script.


Figure 4.7. This is an example of how the emotional model works. In image A, a sad agent (red) and a happy agent (yellow) are walking, neither seeing the other. Then, in image B, the sad agent spots the other when its ray intersects the capsule of influence of the happy agent. After checking for susceptibility, the sad agent is affected, so contagion takes place. Image C shows the result of strong contagion, in which case the sad agent changes its mood to happy. Likewise, image D shows the result of contagion by steps, changing the mood in this case to neutral (orange).

Emotional trigger: This script triggers an automatic spiral of emotion contagion from the centre of the scenario. It was mainly implemented for evaluation purposes, allowing the user to affect the crowd directly.

Simulation stats: This script just keeps the parameters that define the global state of the simulation.

Start Menu: To facilitate the adjustment of the parameters of the simulation from a user perspective, the final model was wrapped up with this script. It presents a contextual menu at the beginning of the simulation, allowing the user to adjust several parameters such as the number of walking and standing characters, the initial mood, the type of contagion and the emotional colouring, among others. The main purpose of this script was to facilitate the evaluation of the model and the generation of consistent scenes for the perceptual study.
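As referenced in the Walking controller item above, a minimal sketch of its destination-setting logic is given below. The thesis does not list the actual walkingController code; here Unity's built-in NavMeshAgent, which performs A*-based searches over the baked navigation mesh, stands in for the path-finding step, and the destination, speed and variability values are assumptions.

using UnityEngine;
using UnityEngine.AI;

// Illustrative walking controller: pick a speed with some random variability
// and ask the navigation system for a path to a destination point.
[RequireComponent(typeof(NavMeshAgent))]
public class WalkingController : MonoBehaviour
{
    public Transform destination;          // target point in the scenario
    public float baseSpeed = 1.2f;         // nominal walking speed (assumed value)
    public float speedVariability = 0.3f;  // random variation to make the crowd more heterogeneous

    void Start()
    {
        NavMeshAgent agent = GetComponent<NavMeshAgent>();

        // Small random change in velocity, as described for the crowd set-up.
        agent.speed = baseSpeed + Random.Range(-speedVariability, speedVariability);

        // Compute and follow a path over the navigation mesh.
        if (destination != null)
            agent.SetDestination(destination.position);
    }
}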


Figure 4.8. Diagram of the architecture of the computational model. The script startMenu generates the parameters of the simulation (simulationStats) and launches the spawner and the contagionTrigger. Then, the script spawner generates the walking characters and the standing groups according to the parameters defined in simulationStats. Each walking character has its own walkingController and animationController. Likewise, each standing character has its own animationController.

Figure 4.9. First screen of the program, where the user can adjust several parameters of the crowd simulation, such as the number of walking characters and the number of standing groups, among others (note that the number of walking characters is independent of the number of standing groups).


4.5. Implementation Summary

Chapter 4 has detailed the implementation of the computational model for the generation of virtual crowds and emotion contagion. The models and animations used have been presented, as well as the details of the emotion contagion algorithm and the general architecture of the final simulation program.

The next chapter presents the two evaluations carried out for the computational model: the controlled scenario simulations and the perceptual study.
