Simulating People Flow at an Airport: Case study: Arlanda Airport

N/A
N/A
Protected

Academic year: 2021

Share "Simulating People Flow at an Airport: Case study: Arlanda Airport"

Copied!
56
0
0

Loading.... (view fulltext now)

Full text

(1)

Degree Project in Computer Engineering, First Cycle, 15 credits

Stockholm, Sweden 2020

Simulating People Flow at an Airport

Case study: Arlanda Airport

LINUS BEIN FAHLANDER

MELKER MOSSBERG


Authors: Linus Bein Fahlander, Melker Mossberg

University: KTH Royal Institute of Technology

Supervisor: Anders Sjögren

Examiner: Fadil Galjic

Bachelor Thesis: Degree Project in Computer Engineering, First Cycle

Faculty: Electrical Engineering and Computer Science


Abstract

Companies that manage large numbers of people in public spaces, such as airports, would benefit from having the ability to accurately predict people flow in their facilities. However, creating high-performance crowd simulations in a context with continually changing timetables and gate locations is a complex problem. In this thesis we propose a simulation system that handles a large number of simulated agents whose behavior is based on scheduled flight data. The system allows for the visualization of people flows and congestion, as well as the export of statistics to benchmark against a real machine-operated counting system.

Our solution combines modern game development technologies for controlling ambient characters and visualizing the environment with traditional agent-based modeling methods. The simulation spawns human-like agents in the environment based on real (live) flight schedules and normally distributed behaviors.

The system was applied at Arlanda Airport, the largest airport in Sweden, which is owned and operated by Swedavia AB. Swedavia has provided us with knowledge about their processes as well as given access to data sources with live information about flight departures from the airport.

The result indicates that modern game engines, such as Unreal Engine, have the potential of being a convenient development environment for scalable crowd simulation systems. The prototype developed for this project is able to simulate all departing travelers at Arlanda Terminal 5 during a given day. The data set used for this project is based on historic flights from April to May 2019. With many optimizations left outside the scope of this project, the system has the capacity to speed up the simulation run-time by a maximum factor of 20. The historical flight data used for evaluating the model lacks information, which causes the prototype to consistently overestimate the number of agents to simulate. Yet, the prototype has an average accuracy of 79.4% when it comes to predicting the flow of people passing through security at Terminal 5.

The conclusion from this project is that it is possible to develop simulation tools using modern game development technologies that are useful for stakeholders managing travelers at airports. With that said, several optimizations have been identified that would potentially improve the prediction accuracy, the stability, and the usability of the software. These optimizations should be considered before deploying and relying upon this kind of system in a real airport.


Sammanfattning (Swedish abstract)

Companies responsible for public spaces can benefit from predicting flows of people. Unfortunately, it is not trivial to create software that simulates large crowds with sufficient accuracy. In this project we developed and evaluated a solution to the problem of simulating a large number of people whose behavior is based on dynamic data. The solution visualizes the flow of people in a virtual environment and stores statistics so that they can be compared against real-world observations.

Our solution combines game development tools with agent-based simulation strategies to visualize human models moving through the environment. The agents are generated in the simulation based on dynamic data from real-time systems.

The solution was applied at Arlanda Airport, the largest airport in Sweden, owned by Swedavia AB. As a partner, Swedavia has provided us with information about their processes as well as access to their real-time systems, which supply information about departures from the airport.

The result of the project is a solution that shows great potential for using game engines as development environments for this type of simulation. The developed prototype is built in the game engine Unreal Engine and can simulate all departing travelers moving through Terminal 5 at Arlanda during a given day. The data used to evaluate the prototype was historical flight data from April to May 2019. The solution has the capacity to simulate up to 20 times faster than real time.

The historical flight data used for the evaluation had certain gaps, which affected the result in a way that a real-time system would not have. This caused the solution to consistently overestimate the number of travelers in the measurements. Despite this, the prototype achieved an average accuracy of 79.4% for the time at which travelers arrived at the security checkpoint at Terminal 5. The conclusion of this project is that it is possible to use game engines to develop simulation tools that are valuable when managing large flows of people. However, the prototype developed during the project does not have sufficient accuracy or reliability to be used for decision support without further development.


Contents

1 Introduction
1.1 Background
1.2 Hypothesis
1.3 Purpose
1.4 Case study requirements
1.5 Limitations
1.6 Outline
2 Theoretical background
2.1 Simulating Crowds
2.2 Simulation Software
2.3 Swedavia API
2.4 Cloud Functions
2.5 Box-Muller Transformation
2.6 Related Work
3 Methodology
3.1 Methodology Theory
3.2 Methodology Implementation
3.3 Methodology limitations
3.4 Breaking down the research hypothesis
3.5 Prototype Case Study
3.6 Interview
3.7 Literary Study
3.8 Airport observations
3.9 Method for drawing conclusions
4 Simulation prototype
4.1 Functionality
4.2 Architecture
4.3 Challenges
5 Result
5.1 Prototype Case-Study
5.2 Interview
5.3 Literary study
6 Discussion
6.1 Answers to the hypothesis sub questions
6.2 Evaluating the Prototype and Case Study
6.3 Method


1 Introduction

The total number of flights per person is steadily rising in Sweden [3], which means that more and more people are traveling through the major airports. This leads to a growing demand from airport managers to be able to make accurate predictions about the "people flow" in their facilities. People flow can be defined as a quality measure of people's movement in different environments. "Good people flow" implies low congestion, high passability, and short queue times; parameters that are all highly important for any large airport to run its operations efficiently. [7]

Unfortunate scenarios such as flight delays can result in sudden high densities of people that create disturbances if operators are not well prepared. If decision-makers at airports had access to people-flow predictions, they could make optimizations in everything from staffing and flight scheduling to planning of the actual architecture of the airport. This would result in a smoother experience for both the visitors and the airport operators. [16] There are many other industries for which such simulations could be beneficial. What is special about pedestrians at an airport is that their behavior is largely determined by the arrivals and departures of their scheduled flights. However, if a tool was developed for predicting people flow at an airport, then a similar setup could be used for other public spaces such as train stations or offices.

1.1 Background

Simulations are generally considered to be one of the best support technologies for predicting complex systems that are dynamic and stochastic in nature. [13] However, accurately simulating crowd behaviour with hundreds of autonomous agents is a complex problem, especially in a constantly changing environment such as an airport, with gate changes and delays.

What "predicting people flow" really means in this context is to simulate crowd movement ahead of time. Modeling the behaviour of crowds of people is a task with many challenges, such as realistically modeling interactions between pedestrians, the collective motion of large-scale crowds, obstacle avoidance, and representing virtual humans in their environment. Creating a people-flow simulation tool would be a very large and complex task if it was built from scratch.

At the same time, modern game engines are making it more and more convenient to solve these problems. Not only do game engines provide environments for writing code and algorithms, but they also provide engines for rendering high-quality 3D graphics, physics and collision detection, animation, artificial intelligence, networking, streaming, memory management, threading, localization support, and more. All of this makes the challenge of scope mentioned earlier a lot more manageable.


1.2 Hypothesis

To explore the use of simulation technology as a means of predicting people flow at an airport, we propose the following research hypothesis:

It is possible to aid decision-making at an airport by simulating people flow.

1.3 Purpose

The objective of this thesis is to conduct a case study to answer the research question. The aim is to develop a scalable and accurate model of real-time pedestrian activity at an airport, specifically Arlanda Airport in Sweden. The approach presented combines game development technology and agent-based modeling to simulate people flow based on scheduled arrivals and departures from the airport.

1.4 Case study requirements

The case study is developed in co-operation with Swedavia AB, the company that operates the largest airport in Sweden, Arlanda. They have agreed to be the subject of our research and to provide the necessary information for the development of a prototype. The requirements for the prototype were decided together with stakeholders at Swedavia and can be summarized as:

• The system should simulate the movement of departing travellers at "Terminal 5" at Arlanda.
• It should be based on real departure times available from Swedavia's API.
• The simulation should measure the movement of simulated agents in a way that is comparable to Swedavia's autonomous people-counting system.
• Data collected from each simulation should be stored.

1.5 Limitations

One limitation of the thesis is that the simulation only accounts for travelers departing from Terminal 5 at Arlanda Airport. Arriving travelers, as well as staff, will be completely ignored for the scope of the project. This might of course impact the general "people flow" in the simulated building, but it should not have any significant impact on the measurements made, because of how the data collection is set up.

There was another substantial limitation to the project during the time it was conducted: the COVID-19 pandemic. This had an enormous impact on the travel industry and, consequently, Arlanda Airport. The number of passengers dropped to a tenth of the normal for some airline companies during the worst phase of the pandemic. [11] Since the prototype is supposed to rely on live data from scheduled flights, it was considered that the results might be skewed due to the extreme conditions. Therefore, historical flight schedules were used as the basis to benchmark the performance of the prototype. The official recommendation of 'social distancing', which was in place during the pandemic, hindered us from going to Arlanda to do field research when the project had just started.

1.6 Outline

Chapter 2 presents a theoretical background to the subject of crowd simulation and the technology investigated for building the project prototype.

Chapter 3 presents the approach and methods used to answer the research question.

Chapter 4 details the decisions guiding the architecture of the prototype and the different conclusions drawn along the way about the technology stack.

Chapter 5 covers the results of the prototype case study, the group interview, and the literary study.

Chapter 6 discusses the conclusions made in Chapter 5. It also discusses the methodology of the project and suggests improvements for future research.


2 Theoretical background

This chapter aims to lay out the theoretical background required to answer the research question. The major areas covered here are simulation theory, crowd simulation, existing technologies, and previous solutions to similar problems. In addition, a brief overview will be given of the data source (Swedavia's API), which is fundamental to the simulation itself.

2.1 Simulating Crowds

A simulation is a set of rules that define how a system changes over time given a current state. Thus, unlike analytical models, simulation models are not solved but run, and it is possible to observe the state at any point in the simulation life-cycle. This advantage makes it possible to estimate the performance of systems too complex for analytical solutions. [8]
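The "run, then observe" character of a simulation can be illustrated with a minimal time-stepped loop. This sketch is purely illustrative (the update rule, speeds, and goal are invented for this example; the prototype's update loop runs inside Unreal Engine):

```python
# A minimal time-stepped simulation: the state is advanced by a fixed rule
# each tick and can be inspected at any point during the run.

def step(positions, dt, speed=1.0, goal=10.0):
    """One update rule: every agent moves toward a shared goal at fixed speed."""
    return [min(p + speed * dt, goal) for p in positions]

state = [0.0, 2.0, 5.0]          # agent positions along a corridor (meters)
history = [state]                # the state is observable at every tick
for _ in range(20):
    state = step(state, dt=0.5)
    history.append(state)

print(state)  # [10.0, 10.0, 10.0] -- all agents have reached the goal
```

Nothing is "solved" here; the final state only emerges by running the rules forward, which is exactly what distinguishes simulation from an analytical model.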

Any model used for crowd simulation needs to take several components into account. This includes specifying a computer-based geometric model of the environment and taking into account the interactions amongst agents (human-like entities) and the environment. In this context, a crowd is composed of several agents, each with a set of goals, and obstacles that constitute the environment. [2]

There exist many techniques for simulating crowd movement and dynamics. Choosing a method depends on the type of crowd patterns or behaviour to be simulated and the surrounding environment. An important aspect is whether the simulated crowds are homogeneous or heterogeneous. Homogeneous crowds correspond to instances where each agent has very similar behaviors or goals. This coherence can be exploited to accelerate the overall simulation performance. Homogeneous crowd systems include "flow-based models", governed by differential equations that uniformly dictate the flow of crowds [10], and "continuum crowd models", which allow a small, fixed number of goals and aggregate dynamics for dense crowd simulations [17].

In contrast to these homogeneous methods, "agent-based modelling" allows for true heterogeneity, as it computes behaviour for each agent individually. The behaviour of actors is set explicitly on a micro level which, through the collective actions of many actors, generates new structures on the macro level.

In the field of game development, it is common that non-player characters (NPCs) are modeled using agent-based modeling, and the logic that defines the specific behaviour of each actor is referred to as the "AI strategic layer" ("AI" meaning artificial intelligence).

There are traditionally three approaches to implementing the AI strategic layer in agent-based modelling: finite-state machines, behaviour trees, and utility AI.


2.1.1 Finite-state machines

Figure 1: FSM [13]

The finite-state machine (FSM) is a commonly used model for representing and controlling the execution flow of non-player characters (NPCs). The logic controlling each agent is represented as a set of states, transitions between states, and conditions for each transition; see Figure 1. This strategy is highly applicable for agents with very restricted behavior. However, when the task becomes complex, an FSM becomes hard to maintain and the time spent on implementation grows too high.
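As a minimal illustration of the pattern, the control logic for a departing traveler could be expressed as a table of (state, event) → state transitions. The states and events below are hypothetical, chosen to fit the airport context of this thesis; they are not taken from the prototype:

```python
# Hypothetical FSM for a departing traveler. Unmatched events leave the
# agent in its current state.
TRANSITIONS = {
    ("entering", "reached_checkin"):     "queueing",
    ("queueing", "reached_security"):    "at_security",
    ("at_security", "cleared_security"): "walking_to_gate",
    ("walking_to_gate", "reached_gate"): "at_gate",
}

class Agent:
    def __init__(self):
        self.state = "entering"

    def handle(self, event):
        # Ignore events that have no transition from the current state.
        self.state = TRANSITIONS.get((self.state, event), self.state)

traveler = Agent()
for event in ("reached_checkin", "reached_security",
              "cleared_security", "reached_gate"):
    traveler.handle(event)
print(traveler.state)  # at_gate
```

The brittleness mentioned above shows up quickly in practice: every new behavior multiplies the number of (state, event) pairs that must be enumerated and kept consistent by hand.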

2.1.2 Behaviour trees

Behavior trees (BTs) are similar to FSMs, but more scalable, modular, and reusable. The agent logic is structured in a hierarchical tree of nodes representing states. After initializing a BT, nodes are run from left to right as long as the necessary conditions are met, until all nodes are depleted, at which point the BT can be run from the start again. Each node can hold complex subsets of tasks and interactions with the environment, which allows for flexible control of the agent strategy. The disadvantages of BTs are, first, that complex behavior trees can become massive and hard to maintain, and second, that the behavior of an agent can become very linear and predictable if designed poorly. Due to the stock implementation of BTs in the most popular game engines, the approach has become very popular and convenient to use. [6]

Figure 2: Unreal Engine’s Behaviour Tree implementation. This BT is the one used by the agents in 4.2.7 Prototype - Agent Behaviour
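The left-to-right tick semantics described above can be sketched in a few lines. This is a deliberately simplified, hypothetical model of the two most common composites (sequence and selector); it is not Unreal Engine's implementation, and the task names are invented:

```python
# Simplified behaviour-tree composites. Children are ticked left to right,
# matching the evaluation order described in the text.
SUCCESS, FAILURE = "success", "failure"

def sequence(*children):
    """Fails on the first failing child; succeeds if all children succeed."""
    def tick(agent):
        for child in children:
            if child(agent) == FAILURE:
                return FAILURE
        return SUCCESS
    return tick

def selector(*children):
    """Succeeds on the first succeeding child (a fallback node)."""
    def tick(agent):
        for child in children:
            if child(agent) == SUCCESS:
                return SUCCESS
        return FAILURE
    return tick

def has_boarding_pass(agent):
    return SUCCESS if agent.get("boarding_pass") else FAILURE

def check_in(agent):
    agent["boarding_pass"] = True
    return SUCCESS

def go_to_security(agent):
    agent["location"] = "security"
    return SUCCESS

# Get a boarding pass if we lack one, then head to security.
root = sequence(selector(has_boarding_pass, check_in), go_to_security)

agent = {}
result = root(agent)
```

Because subtrees like `selector(has_boarding_pass, check_in)` are self-contained, they can be reused under other parents, which is the modularity advantage BTs hold over flat FSMs.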


2.1.3 Utility AI

Utility AI is a technique based on the concept of utility theory [9], where an agent calculates which action will give the biggest payoff every time it makes a decision. Calculating the maximum payoff from the current state makes decision-making more abstract, so that multiple agents with identical intentions might behave differently. This gives an impression that feels more natural than more rigid approaches such as FSMs and BTs. Utility AI is gaining popularity as it allows for very scalable and flexible systems for creating agent behavior.
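The core idea can be shown with a hedged sketch: each candidate action gets a scoring function, and the agent simply picks the highest-scoring action. The actions and scoring formulas below are invented purely for illustration:

```python
# Utility-AI sketch: score every action, pick the best. Real systems add
# response curves, weights, and noise so that identical agents diverge.

def score_eat(agent):
    return agent["hunger"]                 # hungrier -> eating is worth more

def score_go_to_gate(agent):
    # Urgency grows as boarding time approaches.
    return 1.0 / max(agent["minutes_to_boarding"], 1)

def score_shop(agent):
    return 0.2                             # constant, low-priority filler

ACTIONS = {"eat": score_eat, "go_to_gate": score_go_to_gate, "shop": score_shop}

def choose_action(agent):
    return max(ACTIONS, key=lambda name: ACTIONS[name](agent))

relaxed = {"hunger": 0.1, "minutes_to_boarding": 90}
rushed = {"hunger": 0.1, "minutes_to_boarding": 2}
print(choose_action(relaxed), choose_action(rushed))  # shop go_to_gate
```

Adding a new behavior only requires a new scoring function, which is why the approach scales better than enumerating FSM transitions or restructuring a tree.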

2.1.4 Visual Scripting

Visual Scripting is a way to describe logic using graphical components instead of writing code. These graphical representations of logic are then translated to executable code. One goal of Visual Scripting is to reduce the amount of programming knowledge required to add logic to a system. The Visual Scripting language available through the Unreal Editor is called Blueprints. [20] Through Blueprints you can easily access and add functionality that integrates directly with the Unreal Engine.

Figure 3: Unreal Engine’s visual scripting language Blueprints. The right image is the Blueprints equivalent of the C++ code in the left image

2.2 Simulation Software

As previously stated, computer-run simulations can be defined as mathematical models running on a computer using some sort of software. Simulation software is often purpose-built for a specific domain, such as physics, climatology, chemistry, biology, and manufacturing, or for human systems such as economics, psychology, social science, and health care. Some of the world's most recognized crowd simulation software has been developed for the movie industry to create simulations of huge crowds and armies in blockbuster movies. A few renowned tools in this category are "Golaem Crowd", "Massive" and "Miarmy", all of which are built as plugins to the 3D computer graphics application "Autodesk Maya". These tools provide enormous capacity in the number of units simulated. However, they require that the simulation is fully rendered before it can be updated or information can be extracted.


They are all also focused on the visual rendition rather than on collecting data from the simulation.

There exist pure crowd simulation frameworks not inherently bound to a particular graphics engine. These frameworks can provide efficient implementations of large-scale multi-agent path-finding in real-time simulation, especially since they do not even have to be rendered. They would need to be implemented as plugins to some other graphics engine to be visualized with high-end graphics. These frameworks also have varying data formats for representing the environment. One such framework, called Menge, will be discussed later in the report. It uses XML to represent the geometry of the environment. This representation would need to be translated in order to be used in a game engine with a different representation of the geometry.

Lastly comes the option of using the tools built into game engines to run the simulation. Game engines are inherently built for real-time rendering and updates. They also make it simple to manage "agents" in their computer-generated environments. The game engines with the largest user bases are Unity, developed by Unity Technologies, and Unreal Engine, developed by Epic Games. Both platforms provide tools to build complex 3D scenes, manage a multitude of agents in the environment, and communicate over the internet for live updates. These game engines have built-in implementations of agent path-finding which, if used, save time for developers. The two game engines have different billing models. Both provide a version free of charge; the difference shows when the application developed becomes commercial. With Unity, developers currently need a paid license, a flat fee, to develop games commercially. [18] Unreal Engine is free to use for developing internal non-game software. [19]

2.3 Swedavia API

The data source that will be used to determine how many passengers will pass through Terminal 5 at Arlanda is an API platform managed by Swedavia. [1] Swedavia offers some of their data publicly free of charge. For this project, we were granted access to a limited-access API which contained extended data about departures and arrivals, as well as some people-flow parameters inside Arlanda.

2.4 Cloud Functions

When the term "cloud function" is mentioned in this report, it refers to a computing service hosted by Google. "Google Cloud Functions" is a serverless, event-driven computing service within Google Cloud Platform. Developers can use it to create and deploy programmatic functions within Google's public cloud, without having to provision the underlying cloud infrastructure such as servers, storage, and other resources. [15]


2.5 Box-Muller Transformation

A Box-Muller transformation maps two independent samples from a continuous uniform distribution on (0, 1] to two independent, normally distributed samples. [21] This can be used to develop algorithms that produce normally distributed values based on two random values, as will be exemplified in Chapter 4.2.8. The random input values must be uniformly distributed.
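The transformation can be sketched in a few lines of Python (the prototype itself runs in Unreal Engine; this is only to illustrate the math). Note that the logarithm requires the first uniform sample to be strictly greater than zero:

```python
import math
import random

def box_muller(u1, u2):
    """Map two independent uniform(0, 1] samples to two independent
    standard-normal samples (basic form of the Box-Muller transform)."""
    r = math.sqrt(-2.0 * math.log(u1))   # u1 must be > 0
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

def normal(mean, stddev):
    """A normally distributed value; an illustrative use would be spreading
    agent arrival times around a mean (not the prototype's exact code)."""
    u1 = 1.0 - random.random()           # in (0, 1], keeps log() defined
    z, _ = box_muller(u1, random.random())
    return mean + stddev * z
```

Scaling a standard-normal sample by `stddev` and shifting by `mean`, as `normal` does, is the standard way to obtain an arbitrary normal distribution from the transform's output.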

2.6 Related Work

2.6.1 Multi-agent crowd simulation on large areas with utility-based behavior models: Sochi Olympic Park Station use case

This study [14] investigates the difference in crowd movement at the Sochi Olympic Park station during and after the Olympic Games. The system is built using Unreal Engine, with agents using utility-based algorithms for decision making. They also implemented a third-party path-finding library as an improvement over Unreal Engine's default path-finding logic. This study demonstrates the potential of using Unreal Engine for simulating large crowds of agents, as well as extracting graphs and statistics directly from the simulation. The study does not include any connection to a real-world data source, nor do the agents have any simulation of sight to inform their decision making.


3 Methodology

This chapter aims to give an understanding of the methods used in this project.

3.1 Methodology Theory

The methodology used in the project is based on the work of Andersson et al.: "Vetenskaplighet – Utvärdering av tre implementeringsprojekt inom IT Bygg och Fastighet 2002" [4]. They made a generalization of the principles presented in "No. 1 Epistemology & Methodology I: Exploring the World", written by Mario Bunge (1982). [5] The original methodology by Bunge can be broken down into four discrete areas: problem identification, research, development, and evaluation. Andersson et al.'s generalized version was made to better suit problem-solving in modern technological research. The generalized version can be summarized in the following steps:

1. How can this problem be solved?

2. How can a technique or product be developed to solve the problem in an efficient way?

3. What research or data already exists for this type of technique?

4. Develop the new technique with step 3 as the standpoint. If this works, jump to step 6.

5. Try a new technique.

6. Create a model or simulation of the suggested technique or product.

7. Which consequences does the new technique or product bring?

8. Test the technique or product. If the test is satisfactory, jump to step 10.

9. Identify and fix shortcomings of the technique or product.

10. Evaluate the result in comparison with existing knowledge and identify new problems that occur for future research.

Adhering to this methodology ensures that the problem solving is done systematically and that the results from this project will have scientific value.


3.2 Methodology Implementation

This section aims to explain how the generalized version of Bunge’s methodology is implemented in this project. An overview can also be seen in Figure 4 below.

Figure 4: Implementation of Bunge’s methodology

3.2.1 Solution exploration strategy (Bunge Step 1-3)

The first step in finding solutions to our problem is to break down our research hypothesis. This is done in two steps. The first is to unravel the research hypothesis into multiple sub-questions. From this, we derive a hierarchy of smaller problems that need to be solved to find answers to the sub-questions. The second step is to find answers and technical solutions to each problem. This is done by carrying out a literary study and interviewing stakeholders at Swedavia. In this phase, we also make smaller efforts to try out the different technical solutions before adding them to our strategy.

3.2.2 Development Strategy (Bunge Step 4-8)

Steps 4 and 5 of Bunge's methodology say to develop a technical solution based on conclusions from the previous steps, then to test and improve the solution in a loop until satisfactory performance is achieved.

We implement this strategy by iteratively developing a prototype and having weekly meetings with our partner, Swedavia, to reevaluate the strategy and make new priorities along the way with regards to the time left for the project.


Steps 6-8 of Bunge's methodology are to develop and evaluate a "model/simulation" for testing the prototype. In our case, this translates to developing tools for comparing the performance of the prototype to reality, and evaluating the meaning of the data that is generated. If the performance is unsatisfactory, decisions are made with regard to what time investments should be made to improve the prototype given the time left for the project.

3.2.3 Evaluation Strategy (Bunge Step 9-10)

The final step of Bunge's methodology is to evaluate the result and to identify possibilities for further research. The results in this project will be acquired in parts: first by performing a case study of the prototype, and then by analyzing those results together with the interview answers and conclusions from the literary study.

The case study is focused solely on the prototype and its performance with regard to the requirements. It does not in itself aim to answer all sub-questions for the thesis. That is done shortly after when all results are summarized and evaluated against the research hypothesis.

The project results and the research method will be discussed in Chapter 6. Here we discuss the validity and reliability of the methods as well as suggest improvements for the prototype.

3.3 Methodology limitations

This thesis is based on qualitative methods of research: a case study, an in-depth group interview, field research, and literary studies.

According to the European Journal of Education [12], these qualitative methods provide a means to investigate complex situations with multiple variables under analysis. However, it can be difficult to establish a cause-effect connection to reach conclusions, and it can be hard to generalize, particularly when a small number of case studies is considered. In our case, this means that the case study and interview may argue the viability of our prototype for a limited set of requirements set by a limited number of stakeholders, but they cannot be used to argue any generalized truths about the field of simulation science as a whole.

3.4 Breaking down the research hypothesis

The hypothesis proposed in 1.2 is complex and requires analysis from several perspectives to be properly answered. It is therefore divided into the sub-questions below. The main research hypothesis will be answered by finding answers to each of the sub-questions.

• In which way can people-flow be represented and controlled?
• Which people-flow patterns* are important for decision-making?
• How can we collect relevant data and visualize these patterns?
• How can the accuracy of crowd simulations be evaluated?
• How can live measurements from external sources be used to improve the simulation accuracy in real-time?

* The phrase "people flow patterns" refers to macro-level phenomena that occur as a result of the movement and behavior of several individuals interacting with each other. Some examples are congestion, long queues, blocking, etc.

3.5 Prototype Case Study

The construction of a prototype is key to the research method of this project, because the prototype itself will be the focus of a case study at the end of the project, aiming to answer the research hypothesis. Thus, we are making the assumption that valuable insight into the research hypothesis can be derived from qualitative analysis of the prototype.

When it comes to evaluating simulations, particularly in engineering contexts, it is common to focus on verification and validation. [8] Verification can be divided into solution verification and code verification. Solution verification verifies that the output of the simulation algorithm approximates the true solutions to the differential equations of the original model. In our case, there is no differential equation to compare to, so no solution verification can be done. Code verification, on the other hand, verifies that the code carries out the intended algorithm. This can be done by comparing the nature of the algorithms implemented to the behavior of their real-world counterparts. Validation involves comparing model output with observable data. In our case, the model is constructed in such a way that the output is directly comparable to that of an automated people-counting system at Arlanda, thus making it possible to do scientifically reliable output validation.
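To make the idea of output validation concrete, one possible per-interval accuracy metric is sketched below: simulated counts are scored against observed counts by their relative error. This formulation is an assumption made for illustration; the thesis's exact metric behind the reported average accuracy may differ:

```python
# Illustrative output-validation metric: per-interval accuracy between
# simulated and observed counts of people passing a checkpoint.

def interval_accuracy(simulated, observed):
    """1.0 for a perfect match, falling toward 0.0 with relative error."""
    if observed == 0:
        return 1.0 if simulated == 0 else 0.0
    return max(0.0, 1.0 - abs(simulated - observed) / observed)

def average_accuracy(sim_counts, obs_counts):
    scores = [interval_accuracy(s, o) for s, o in zip(sim_counts, obs_counts)]
    return sum(scores) / len(scores)

sim = [120, 90, 60]   # agents through security per interval (hypothetical)
obs = [100, 90, 80]   # counts from the counting system (hypothetical)
print(round(average_accuracy(sim, obs), 3))  # 0.85
```

A metric of this shape is directly comparable across days and intervals, which is what makes the counting-system data usable as a validation baseline.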

3.6 Interview

We conducted an in-depth group interview with relevant stakeholders and employees at Swedavia. The goal of the interview was to reveal which aspects of people flow would be most beneficial to accurately predict for the decision-makers at Arlanda. The reason the interview was conducted in a group was to make it time-efficient and to let the participants reason about the different problems brought up, to maximize insights. The interview was conducted as a conference call. The interview questions were sent in advance for all participants to think about. The questions discussed in the interview covered what features would help the most in their respective roles, how they would like this data to be presented, etc. The interview can be found attached to the report.


3.7 Literary Study

The literary study was divided into two parts. The first was to gain knowledge about the different areas of science and technology that touch upon crowd simulation methods. We found one academic textbook and several research papers that were of much use. The articles we read were found on researchgate.com, sciencedirect.com, and arxiv.org.

The second part of the literary study aimed at mapping out the technologies that would be candidates for implementing our prototype. That included reading software documentation and watching tutorials about solving various challenges that we expected for the prototype.

3.7.1 Simulation methods

Scientific literature and previous research were studied on "computer simulation" in general and "crowd simulations" in particular. The purpose was to get a general understanding of the disciplines, as well as to explore different techniques and technologies in order to decide what development environment should be used for the prototype.

3.7.2 Existing crowd simulation tools

We researched existing crowd simulation tools by searching for relevant articles, papers, and discussions on Google. This was intended both to expose features that would be relevant for our prototype and to serve as a basis for deciding which technology stack to use.

3.8 Airport observations

In order to create a 3D representation of the environment for the simulation, we needed to make observations at the airport itself. This was mainly done by visiting the airport, as well as studying maps and reference images. We were also helped by staff at Swedavia who made some observations for us when we could not visit the airport ourselves. The airport observations gave us a better understanding of how agents would move within the environment and where they enter and exit the building.

3.9 Method for drawing conclusions

The final step of the project will be to infer conclusions about the research hypothesis. The basis for drawing these conclusions will be a qualitative analysis of the case study, the group interview, and the literary study. The methodology of the project will also be assessed with regard to its ability to provide reliable answers to the hypothesis.


4 Simulation prototype

This section will detail how the prototype was built, how key parts of the architecture are structured, and what challenges we faced along the way. The development of this prototype is a fundamental part of the research strategy, as explained in chapter 3.5.

From studying other simulation software we knew that a major decision had to be made with regards to what graphics engine to use. Either we could try to build our own simple graphics engine to visualize our simulations, or we could utilize one of the many alternatives that already exist.

We decided to use the game engine Unreal Engine, as it provides a 3D editor and ways to visualize human-like models moving without a considerable time investment. Its source code is available and based on the C++ language, so it allows for heavy customization if needed. This aspect is important not only for this project to be successful but also from the perspective of Swedavia, who has plans to expand this software once this project is complete.

We ran into several challenges when developing this simulation platform, but ultimately we implemented all features that were planned in one way or another. This link hosts a video demo: https://youtu.be/UWjTvcpqJpo

4.1 Functionality

4.1.1 Application Interface

The application starts with a screen where the user can either simply start the simulation using live data, or enter a date to use historical data for the simulation instead. It is also possible to offset the start time by a given number of hours. This view can be seen in Figure 5.

After the "Fetch Flight Data" button has been clicked, another graphical user interface shows up which allows the user to start, pause, resume, or "save and quit" the simulation. The interface also provides the option of toggling between some pre-defined camera angles in the environment, as is demonstrated in Figure 6 and Figure 7. There are also five different speed modes in the simulation, "1x", "2x", "5x", "10x", and "20x", plus an extra "skip" button which jumps to the point in time when the first agent will be spawned.


Figure 5: A screenshot from the user interface starting screen


Figure 7: A screenshot from the user interface using camera 4

The 3D model in the simulation covers the entire second floor of Terminal 5 at Arlanda airport, even though the agents only traverse a part of it. We think that showing the entire floor makes the scene more recognisable for the people who work at Arlanda.

It was decided to have pre-defined camera angles, rather than letting the user control the camera freely, to lower the learning curve of using the tool.

The agents in the simulation are color-coded according to which door they entered from.

4.1.2 Back-end Storage and Analysis

When the user presses the "Save and Exit" button in the interface, the application sends a payload of all the data collected during the simulation to a server hosted in the cloud. There the data is parsed and stored in a database for later analysis.

Separate from the application, we have also created a script which, given the simulation ID displayed while running a simulation, will extract the data from that session and export it as a CSV file. This file contains the number of people passing the ADK zone per 5-minute interval. The file is structured to allow for easy comparison to the real ADK data for analysis of the simulation's accuracy.
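The bucketing step of such an export can be sketched as follows (a minimal reconstruction; the function names and exact CSV layout are our assumptions, not the prototype's actual code):

```javascript
// Group simulated ADK-zone crossings into 5-minute buckets, matching the
// interval format of the real ADK data. `crossings` is a list of epoch
// timestamps (ms) recorded when agents pass the measurement zone.
function countPerFiveMinutes(crossings, startMs, endMs) {
  const bucketMs = 5 * 60 * 1000;
  const buckets = new Array(Math.ceil((endMs - startMs) / bucketMs)).fill(0);
  for (const t of crossings) {
    if (t >= startMs && t < endMs) {
      buckets[Math.floor((t - startMs) / bucketMs)] += 1;
    }
  }
  return buckets;
}

// One CSV row per interval: "intervalStartISO,count"
function toCsv(buckets, startMs) {
  return buckets
    .map((n, i) => `${new Date(startMs + i * 5 * 60 * 1000).toISOString()},${n}`)
    .join("\n");
}
```

With buckets aligned to the same interval boundaries as the ADK export, the two files can be compared row by row.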


4.2 Architecture

This section lays out the major components of the solution architecture.

4.2.1 Overview

The diagram in Figure 8 depicts a high-level overview of the architecture.


4.2.2 Isolated Simulation Time

The simulation architecture builds on having a virtual simulation time, which can run either at the same speed as real time or be sped up to a maximum of 20 times real-time speed. The simulation time runs in a separate layer from the regular game timer. It is configured to start according to the start time set by the user and ends when the simulation time reaches the end time set by the user.
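As a rough sketch (class and method names are ours, not the prototype's), such a virtual clock can be modeled as a wrapper that advances by the real elapsed time multiplied by a capped speed factor:

```javascript
// Minimal sketch of an isolated simulation clock. Each engine tick advances
// the virtual time by the real elapsed time times the speed factor, capped
// at 20x (the engine limit discussed in chapter 4.3.2).
class SimulationClock {
  constructor(startTime, endTime) {
    this.time = startTime; // virtual time, ms
    this.endTime = endTime;
    this.speed = 1;
  }
  setSpeed(factor) {
    this.speed = Math.min(factor, 20);
  }
  tick(realDeltaMs) {
    this.time = Math.min(this.time + realDeltaMs * this.speed, this.endTime);
    return this.time;
  }
  finished() {
    return this.time >= this.endTime;
  }
}
```

Keeping this clock separate from the engine's own game timer is what lets the user pause or re-speed the simulation without affecting rendering.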

The simulation requires a list of timestamps at which it will put a number of agents into the simulation environment. This list is derived from real, scheduled flight data coming from Swedavia's API and a cloud function that distributes the agents' arrivals on a normal distribution centered around 1 hour and 40 minutes before departure (in other words, anywhere between 40-240 minutes before departure). Agents are spawned at one of the airport's entrances, chosen at random. All agents are given a set of goals depending on what flights they are scheduled to depart on. The flight determines what check-in desk the agent will stand in line at and what security check they pass through.
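The derivation could look roughly like the sketch below (field and function names are assumptions on our part; the real cloud function also groups agents per timestamp, which is omitted here for brevity):

```javascript
// Sketch of deriving a spawn list from scheduled flights. Each passenger
// gets a spawn time drawn from a normal distribution centered 100 minutes
// before departure, clamped to the 40-240 minute window described above.
// `sampleNormal(mean, stdDev)` stands in for the Box-Muller sampler.
function buildSpawnList(flights, sampleNormal) {
  const entries = [];
  for (const flight of flights) {
    for (let i = 0; i < flight.passengers; i++) {
      const marginMin = Math.min(240, Math.max(40, sampleNormal(100, 35)));
      entries.push({
        spawnTime: new Date(flight.departure - marginMin * 60 * 1000).toISOString(),
        // Goals in visiting order: entrance, check-in desk, security, gate.
        goals: ["entrance", `checkin${flight.checkinDesk}`, flight.security, flight.gate],
      });
    }
  }
  // The engine expects the list sorted chronologically.
  return entries.sort((a, b) => a.spawnTime.localeCompare(b.spawnTime));
}
```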

4.2.3 Simulation controller

Our simulation controller is responsible for keeping the state of the simulation and delegating simulation-originated events to other parts of the program. It also parses the goals and entrance data provided by the API into references to objects in the 3D environment.

The controller is responsible for storing and updating the current simulation time. The controller is also responsible for keeping track of what agents should be spawned at a certain point in the simulation. To do that it interfaces with the entrance objects to instantiate the agents into the 3D environment.

4.2.4 APIs & Cloud Functionality

Data manipulation and data storage are done outside of the game engine, in "cloud functions" hosted on the "Google Cloud Platform". These functions are invoked by the Unreal Engine instance using HTTP GET requests, via a plugin to Unreal Engine called "VaRest". This keeps things simple, as "simulation logic" and "data handling" are kept separate. The game engine interfaces with two API endpoints: one for loading new agent spawn-times and one for uploading the data it has collected. All computation in the cloud is done in JavaScript, which enables proper testing and error management; this would be more cumbersome to code in Unreal Engine.


4.2.5 Startup routine

When the simulation is run, all assets that are required before the start of the simulation are loaded into a scene ("level"). These assets include the 3D environment and utility objects in the scene. When the level is created, an API call is made to the cloud service, which in turn makes an API call to Swedavia's API to fetch flight data, as can be seen in the illustration below in Figure 9. The flight data is used to create a list of "spawn times", which is sent back to the game engine. A "spawn time" includes an "ISO DateTime" and a list of agents, each with their own set of goals depending on their flight.

At this point the "Simulation Controller" is created. It takes the list of "spawn times" and generates a list of agents and assigns corresponding goals in the 3D-environment.

Figure 9: The startup routine

4.2.6 Simulation run-time

The basis of the simulation is achieved by instantiating agents with human-like models into the world and letting them navigate the 3D environment. This is done several times during the duration of the simulation, as the simulation time will correspond to the spawn times provided by the API.

The user is able to speed up or slow down the simulation during run-time, to get more detailed information about a part of the simulation.


4.2.7 Agent behaviour

Each agent has an attached Behaviour Tree (see 2.1.2), which is used to simulate its behavior. This behavior is based on a list of goals, which the agent has to move to before reaching the end of its simulation. Every time the agent reaches a point within a certain radius of the current goal object, the behavior logic iterates forward in the list of goals and selects a new goal object to use as the next destination the agent should reach. This is repeated until all goals have been reached, at which point the agent is removed from the simulation.
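In plain JavaScript rather than Blueprints, this goal-iteration logic amounts to something like the following sketch (the function name and the goal radius are our assumptions):

```javascript
// Advance an agent through its goal list: when the agent is within `radius`
// of its current goal, step to the next goal; when the list is exhausted
// the agent is due for removal from the simulation.
function updateAgentGoal(agent, radius = 100) { // radius in engine units (cm)
  const goal = agent.goals[agent.goalIndex];
  if (!goal) return "remove"; // all goals already reached
  const dx = agent.x - goal.x;
  const dy = agent.y - goal.y;
  if (Math.hypot(dx, dy) <= radius) {
    agent.goalIndex += 1;
    return agent.goalIndex >= agent.goals.length ? "remove" : "nextGoal";
  }
  return "moving";
}
```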

The goals are represented in the 3D world using an Unreal Engine object called Target Point. These objects are placed by hand and given a name to indicate which goal corresponds to what object in the world. For example, one Target Point is placed in front of the 3D model of the first check-in desk and given the name checkin0, which corresponds to a goal given by our API.

The Behaviour Tree itself is only responsible for deciding in which order the agent should do things; the actual logic to perform those actions is contained within so-called Tasks. These Tasks are Blueprints that define logic between two nodes, the Event Receive Execute event and the Finish Execute function. The event is called by the Behaviour Tree when it decides to execute a Task. The Behaviour Tree then waits for the Finish Execute function to be called by the Task, with a Success parameter indicating whether the task was successful or not. This makes it possible to execute asynchronous functions within a Task, for example the AIController's MoveTo function shown in Figure 10.

Figure 10: The MoveToGoal task used by the prototype’s Behaviour Tree

4.2.8 Normal Distribution of Arrival Behaviour

It was assumed that people's preferences, when it comes to time margin at the airport before departure, vary according to a normal distribution between a maximum of three hours and a minimum of 40 minutes before the departure of their flight. This behavioral difference was implemented in the cloud function that prepares the list of agents to be simulated. Before deciding the spawn time for each agent, a "Box-Muller" transformation (algorithm explained in chapter 2.5) is applied to the time margin before departure in order to create normally distributed spawn times.
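A minimal sketch of such a Box-Muller sampler (our reconstruction; the actual cloud function may differ in detail):

```javascript
// Box-Muller transform: two uniform samples become one standard-normal
// sample, which is scaled by the desired mean and standard deviation to
// give the spawn-time margin.
function boxMullerSample(mean, stdDev) {
  const u1 = Math.random();
  const u2 = Math.random();
  // 1 - u1 avoids log(0), since Math.random() is in [0, 1)
  const z = Math.sqrt(-2 * Math.log(1 - u1)) * Math.cos(2 * Math.PI * u2);
  return mean + stdDev * z;
}
```

The resulting margin would then be clamped to the allowed arrival window before being subtracted from the departure time.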


4.2.9 Building the environment

The goal for the simulation environment was to make a 3D model of "Terminal 5" at Arlanda airport. It should have the exact same dimensions and internal virtual measurements as the real building. This would require us to have an architectural drawing of the building. Getting access to such drawings turned out to be difficult since the level of security surrounding Arlanda is very high.

We found our solution in Google Maps, which quite accurately maps out not only the exterior walls of Arlanda but also the interior walls for every floor of the airport. We used this resource by stitching together many screenshots from this map, at a scale where 1 cm on the screen corresponds to 10 m in reality, to create a new map of floor 2 of Terminal 5 at Arlanda. This new map, and its measurements, was then used as a reference image to build an accurate, to-scale 3D model of Terminal 5, see Figure 11. The model details the outer and inner walls as well as each check-in desk. All shops and rooms that were non-essential for the simulation were simply filled with an appropriately shaped block to prohibit simulated agents from entering them.

Figure 11: Google Maps | Model in Blender | Unreal Engine environment

4.2.10 Data Collection

There are two major areas of data collection in the prototype. Both of these feed measurements to a central process in the code which manages all data collection. This global statistics module also uploads all data once the simulation is finished.

The first area of data collection relates to one of the prototype requirements, which is to record data that can be compared to the real-life measurements collected by an autonomous human counting system at Arlanda Terminal 5. To set this up, we consulted Swedavia's division of "capacity analysis". They delivered a data set for us to compare to, and instructions on where to set up our measurement area in order to mirror the real-world counterpart. We built sensors in the 3D environment that function like "trigger boxes". When an agent steps inside the trigger area, an event is fired which sends data to the global stats module.


The second part of data collection comes straight from each individual agent, transmitted as the last step before they are removed from the simulation. These stats include lifetime and agent "id".

Once the simulation is finished, all data is formatted in a suitable way and sent, as an HTTP POST request, to our cloud service. The cloud service’s main purpose is to store the data in a database.

4.2.11 Handling queues

Queuing behavior is required of each agent at two points on their simulated walk through the airport. We built custom logic for managing queues in the game engine. This was achieved by creating an alternative kind of "target" for the agent to walk to. This target has a special tag, indicating that the target is not only a destination to move to, but also a queue. When an agent arrives at a queue, they reach a part of their behavior tree which repeatedly evaluates whether to move forward in the queue or whether they have reached the front of the queue, at which point they are free to move on to their next goal.

The entity that keeps track of the queue state is an object representing the queue itself. Such a queue object holds multiple "targets" which function as vertices on a path. Every time the queue state updates, the agents in the queue are prompted to distribute evenly along this path, as can be seen in Figure 12. The time it takes for a person to finish their check-in, once they arrive first in line, is one minute.
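The even distribution along the path can be sketched as linear interpolation over the polyline defined by the queue's targets (our reconstruction, not the prototype's actual code):

```javascript
// Given the queue path's vertices and the number of queued agents, place
// each agent at an equal fraction of the total path length.
function queuePositions(vertices, agentCount) {
  // Cumulative length up to each vertex.
  const lengths = [0];
  for (let i = 1; i < vertices.length; i++) {
    const dx = vertices[i].x - vertices[i - 1].x;
    const dy = vertices[i].y - vertices[i - 1].y;
    lengths.push(lengths[i - 1] + Math.hypot(dx, dy));
  }
  const total = lengths[lengths.length - 1];
  const positions = [];
  for (let a = 0; a < agentCount; a++) {
    const d = agentCount === 1 ? 0 : (a / (agentCount - 1)) * total;
    // Find the segment containing distance d and interpolate within it.
    let i = 1;
    while (i < lengths.length - 1 && lengths[i] < d) i++;
    const t = (d - lengths[i - 1]) / (lengths[i] - lengths[i - 1] || 1);
    positions.push({
      x: vertices[i - 1].x + t * (vertices[i].x - vertices[i - 1].x),
      y: vertices[i - 1].y + t * (vertices[i].y - vertices[i - 1].y),
    });
  }
  return positions;
}
```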


4.2.12 Assumptions about Swedavia’s historical data

In order to understand the riskiest assumption we had to make when building the simulation, two data sources need to be explained.

Firstly, "ADK data" refers to real-life data from a system called "Automatiska DokumentKontrollen" (ADK). From this data source, we get the count of people scanning their tickets in order to pass through security throughout the day. We get two flows of people, the "fast-track" lane and the regular lane, which we add together in order to compare with our simulation. However, there are two security controls at Arlanda Terminal 5, and the ADK data we received only covers one of them, the biggest one. This security control is called "CB".

Secondly, because we wanted to simulate flights before the COVID-19 pandemic (see motivation in chapter 1.5), we asked Swedavia for data about historic flights. We had one issue with the historic data: it lacked information about which flights were assigned which check-in desks. This information is crucial for the simulation to run reliably, since the choice of check-in desk determines what security control is chosen, and thus whether or not the passengers will be counted in the ADK data.

This means that we had to make an assumption about what ratio of departing travelers go to security control "CB". We made this assumption by comparing the ratio of passengers that passed through "CB" on a sample of three different days for which we had historic data. On 2019-03-01 it was 50.93%, on 2019-03-06 it was 49.29%, and on 2019-03-11 it was 53.01%. We decided to round the average off to 50%. But a traveler cannot go to just any check-in desk; they have to go to the set of desks assigned to their particular flight. Thus, we cannot simply split the passengers 50/50 between the two sides; we have to make the split per flight.

We started with an assumption of a 50/50 split of flights, but this turned out to give very unreliable results when we ran the simulation several times, since the number of passengers varies a lot between different flights and we chose the distribution randomly. If we were unlucky, the balance would be off by thousands of agents. We tried to remedy this by keeping track of the balance between the two sides of the terminal as we populated them with agents, striving for an equal distribution. This had only a slight positive effect on the outcome, since huge flights can disrupt the balance enormously, which is then compensated for, leading to unnatural scheduling behavior.
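The balance-keeping heuristic described above can be sketched as a greedy assignment (our reconstruction; names are assumptions):

```javascript
// Assign each flight, in schedule order, to whichever side of the terminal
// currently has fewer passengers, aiming for the assumed 50/50 split
// between security control "CB" and the other control.
function assignSides(flights) {
  const totals = { CB: 0, other: 0 };
  const assignment = new Map();
  for (const flight of flights) {
    const side = totals.CB <= totals.other ? "CB" : "other";
    assignment.set(flight.id, side);
    totals[side] += flight.passengers;
  }
  return { assignment, totals };
}
```

The sketch also shows the weakness noted above: one very large flight tips the balance in a single step, which the heuristic then over-corrects on subsequent flights.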

It should be said, though, that the issue described above was not present at all when running the simulation using "live flight data", because that API contains information about check-in desks. However, for the reasons given in chapter 1.5, we still feel it is more relevant to use historic data as input to the simulation. If we could get historic data containing information about check-in desks, the results would be more reliable.


4.2.13 Assumptions about traveler behavior

The assumptions below were detected through code validation. Some of them clearly don’t reflect reality, but are required for the scope of this project, such as "nobody takes detours from the path we have programmed". Some of the assumptions would ideally be verified statistically, such as the distribution of walking speeds.

• The earliest people arrive 3 hours before departure. The latest people arrive 40 minutes before departure. The distribution is normal.

• Everyone visits a check-in desk before going to security.

• The process of checking in one's bag takes 30 seconds on average.

• People walk with a speed of 1.5 m/s, plus or minus 0.25 m/s.

• People always aim for taking the shortest path possible.

• Nobody takes detours from the list of goals we have programmed.

• People choose which entrance to the airport they use randomly.
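These assumptions can be summarized as a per-agent parameter sampler, sketched below (names are ours; only the parameters listed above are modeled, and the check-in duration is kept at its assumed average):

```javascript
// Draw the behavioral parameters for one agent according to the
// assumptions above: walking speed uniform in 1.5 +/- 0.25 m/s,
// entrance chosen uniformly at random.
function sampleAgentParameters(entrances) {
  return {
    walkingSpeed: 1.5 + (Math.random() * 0.5 - 0.25), // m/s
    entrance: entrances[Math.floor(Math.random() * entrances.length)],
    checkinSeconds: 30, // assumed average bag check-in time
  };
}
```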

4.3 Challenges

4.3.1 Path-finding frameworks

Inspired by the implementation of the Menge Crowd Simulation Framework in the Sochi report [14], our original plan was to integrate that library into Unreal. That proved more difficult than we expected, as no public implementation of the Menge framework running inside Unreal Engine existed. Due to our relative inexperience with C++ and Unreal Engine, we decided after four days of trying that it would be too time-consuming to complete this integration. We got to the point where we were able to successfully instantiate an instance of the Menge framework, but the prospect of integrating all the Menge functionality with Unreal Engine seemed too big a risk with respect to the project's time constraints.

4.3.2 Speeding up the simulation

When we evaluated Unreal Engine as a platform for our prototype, we ran small tests where we sped up the game time and measured how it performed. Those tests indicated that we would be able to achieve up to 50x real-time speed up, which we deemed reasonable, considering it would then take just under 10 minutes to simulate 8 hours.

What we later discovered was that Unreal Engine has a limit of 20x when speeding up the game time, although no error message is shown to the user when setting the value to anything above 20. It is only communicated as warnings in the debug log, which we noticed only when we were well into the process of building the prototype.

After having tried a solution of altering the movement of the agents when going above 20x, we decided that 20x would have to be enough for this prototype. It still allows you to simulate an entire day at Arlanda (roughly 20 hours) in about 1 hour.


5 Result

This chapter deals with the results of the research that has been done during the project. The result is split into four subsections: a case study of the implemented prototype, an analysis of the interview, results from the literary study, and a summary.

5.1 Prototype Case-Study

The purpose of the case study is to find answers to some of the hypothesis sub-questions through a qualitative analysis of the prototype. As stated in the methodology chapter, it is assumed that it is possible to derive insight into the hypothesis sub-questions from studying a prototype.

5.1.1 Delivery on Requirements

To recap, below are the requirements for the final product:

• The system should simulate the movement of departing travelers at "Terminal 5" at Arlanda.

• It should be based on real departure-times available from Swedavia's APIs.

• Measurements of simulated people-flow should mirror zones used for measurements in real life.

• Data collected from each simulation should be saved and stored.

From Chapter 4 we can derive that the system does simulate the movement of travelers, from their arrival at one of the entrances of "Terminal 5" all the way to their destination gate. The entire environment is modeled to life-size dimensions and covers all of "Terminal 5". The simulation is based on real departure-times fetched from Swedavia's own APIs at the time of the simulation. Each simulated "traveler" arrives at some point in time between 3 hours and 40 minutes before their flight. Thus, the first two requirements are fulfilled.

We can also conclude that the data collected in the simulation directly mirrors the "zones" used in Swedavia’s automated person counting system. This way, the data is comparable for performance testing the simulation. Finally, we can also conclude that the data being collected through the simulation is uploaded to a database when the simulation is finished. From all of the points made above, we can conclude that all the requirements for the prototype are sufficiently met.

5.1.2 Application Usability

The features detailed in 4.1 let the user experience the simulated people flow first-hand, free to draw their own conclusions about patterns based on what they can see on the screen.


The second way that people-flow patterns are visualized in the system is by collecting simulated statistics. This is achieved with the ability to export a CSV file with data that can be compared to the output of Swedavia's ADK system.

Another important parameter of usability is the speed of the simulation. As explained in chapter 4.3.2, speeding up the simulation comes at the cost of performance. From our testing, the simulation is able to run stably with a thousand agents in the scene at about twenty times faster than real time, without the software crashing or agents getting stuck.

The application is also essentially platform-agnostic, thanks to the export functionality of Unreal Engine. We can port the simulation to Windows, macOS, PlayStation, iOS, Android, and more. It remains unknown how the simulation would perform on other setups than the one we have tested, which is the Windows operating system on a somewhat high-performance PC.

The points made above about the prototype's usability give some insight into the ways that people-flow can be represented and controlled.

5.1.3 Simulation Output Validation

The most important measure of a simulation tool's worth is its ability to make accurate predictions. Since the virtual measurements made in this simulation are built to exactly mirror those in the automated human counting system at Arlanda Terminal 5, it is a straightforward task to validate the simulated output statistics against observable data.

Our simulation output is validated by comparing simulated statistics of people passing the big security control to real-life statistics of the exact same event. As explained in chapter 4.2.12, we received historical data for all flights from April to October 2019 (before COVID-19 started affecting flights). The data we received covered both the "Automatiska DokumentKontrollen" (ADK) and historical flight data of departures from "Terminal 5". We used the historical flight data to run our simulation and output statistics which we could later compare to the ADK data.


Figure 13: Multiple simulated days compared to ADK

Looking at Figure 13, one can observe the variation in the total number of passengers simulated between multiple runs. The simulation was run four times for the same day, which resulted in an average total close to four thousand agents over the ADK data. As previously stated, this is due to our inaccurate guessing of which flights are assigned to which check-in desks.

There is, however, something valuable to deduce from the graph. Looking at the simulation run that was closest to the ADK data (the lowest dashed line in Fig 13), we can see that the agents arrive too late in the simulation. If the normal distribution were stretched to the left, favoring earlier arrivals, the curves would line up better in this graph. This might indicate either that we should make people spawn even earlier (while keeping a normal distribution), or that the distribution is not entirely normal, but rather leaning toward early arrivals.


Figure 14: Average simulation spawn rates versus ADK rates

Figure 14 confirms that the simulation spawns too many agents. However, we can once again see the potential for improvement if agents "arrived earlier". Then the pattern of peaks and valleys would match up better with the ADK data, even if the total number of passengers is wrong.

Figure 15: Averages from multiple simulated days vs ADK averages

Figure 15 confirms that the simulation runs with too many agents even when averaging out several different simulated days. It also seems that the agents "arrive too late" even when we average out multiple days.


When inspecting the average difference between the simulation output and the ADK data per five minutes over four different days, we get an accuracy of 79.4%.
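The exact accuracy formula is not spelled out here; one plausible reading (an assumption on our part, not the thesis's stated method) is one minus the mean relative error per 5-minute interval:

```javascript
// Compare simulated and ADK counts per 5-minute interval and report
// accuracy as 1 minus the mean relative error over non-empty intervals.
function accuracy(simCounts, adkCounts) {
  let errorSum = 0;
  let intervals = 0;
  for (let i = 0; i < adkCounts.length; i++) {
    if (adkCounts[i] === 0) continue; // skip empty reference intervals
    errorSum += Math.abs(simCounts[i] - adkCounts[i]) / adkCounts[i];
    intervals += 1;
  }
  if (intervals === 0) return 0;
  return 1 - errorSum / intervals;
}
```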

The process of developing these tools for measurement, as well as the method for validating our simulation model described above, gives some answers to several of our sub-questions, such as: How can we collect relevant data and visualize these patterns?

5.1.4 Code Verification

The simulated environment matches reality with quite high accuracy. We were fortunate to have our contact person at Swedavia go out to Terminal 5 and take sample measurements of the distance between a few walls using laser measurement tools. We used these measurements to verify that the simulation model was on average correct to 97.2% (2.8 cm deviation on a 1 m distance).

Figure 16:

The walking paths of the simulated agents are, however, lacking in realism, which is obvious from simple observation. Using the current path-finding algorithm, all agents strive to take the shortest path possible, regardless of people standing in the way. This has proven not to be very efficient, as they collide with each other, nor does it reflect how most humans walk. The cause of this, and why our attempt at a solution failed, was explained in chapter 4.3.1.

In addition to unrealistic pedestrian behavior, there are many behavioral parameters that have not been taken into consideration. For example: is the margin between arrival at the airport and flight departure really normally distributed between people? Looking at the graphs in chapter 5.1.3, it seems that the pattern of peaks and valleys follows that of the ADK data to some extent, even though the total number of passengers is off. As mentioned before, there might be behaviors such as eating and shopping at the airport that skew the distribution. One way to research this would be to ask for more of Swedavia's data and analyze it, or to do a survey at Arlanda and ask people about their behavior.


5.2 Interview

The interview was held with three key stakeholders at Swedavia with decision-making roles relating to creating the best possible flow of passengers through the airport. Their responsibilities include staffing of personnel at check-in desks and security controls, commanding tasks, and evaluating and predicting needs at the airport to make sure passengers reach their flights in time.

The answers from the interview serve as the main way to answer the research sub-question: "Which people-flow patterns are important for decision-making?". They also give some indication about what live data from external sources can be used to improve the simulation accuracy in real-time.

From the interview it was clear that there exists a large demand for tools that improve their ability to predict the flow of people through the airport. All of the interviewees confirmed that accurate predictions would improve the quality of their own work, potentially save money for Swedavia, and give visitors a better experience.

5.2.1 Imagined Use-Cases

The interviewees were asked to imagine potential beneficial use-cases of a system that could predict people flow. They described the following use-cases:

• Making better estimations of the required staffing for a given day.

• Getting alerted about upcoming surges in numbers of passengers.

• Evaluating different constellations of open check-in desks in order to create even queues that do not block other passengers.

• Predicting the least intrusive time to do maintenance work.

• Seeing what effect construction work will have on the flow of people.

• Gaining a better understanding of the difference in people flow metrics outside and inside of the security control.

The interviewees were also asked to imagine use-cases for other departments of the airport. Their conclusion was that many departments would benefit from having relevant predictions as the basis for decision-making. For example, staff sitting in the airport control tower would benefit from seeing the consequences of where they park an airplane. The entire airport could essentially become better at cooperating toward the goal of creating smooth flows of people in any scenario. It was also said that having the ability to predict and adjust to challenging scenarios would make Swedavia a more dependable business partner and create trust between the company and its partners.


5.2.2 How should the predictions be presented?

The interviewees wanted fresh and relevant predictions to be available two hours in advance, in an app or a desktop dashboard. They wanted an overview of the full day as well as the ability to zoom in on a specific time of day. They also wanted to be able to isolate the flow of people from a specific flight.

5.2.3 Interview Conclusions

We obtained two major insights from conducting the interview, the first being that the demand for simulation software that can make accurate predictions is large at Arlanda airport in general. The second realization was that disturbances outside of the airport are one of the biggest issues that the interviewees face on a daily basis. Delayed trains, accidents on the freeway, and other major events separate from Arlanda regularly cause the biggest problems. Thus, there was an expressed demand for trying to integrate such parameters into the simulation.

5.3 Literature Study

5.3.1 Academic Literature

The book "Modeling, Simulation And Visual Analysis Of Crowds" [2] gave us a foundation for understanding modern crowd simulation principles and techniques. The book maps out relevant technologies, such as computer vision, graphics, and crowd dynamics, as tools for analyzing and synthesizing pedestrian movement in dense, heterogeneous crowds.

Another insightful source was the Stanford Encyclopedia of Philosophy, which contains an entry on "Simulation Science" that gives an overview of the field in general as well as a philosophical view on the scientific value of simulations [8].


6 Discussion

This final chapter will be dedicated to discussing the result and the method of the project. It will also suggest further research and development of the system.

6.1 Answers to the hypothesis sub-questions

The following subsections will dive into each sub-question and discuss the answers derived from the result chapter.

6.1.1 In which way can people flow be represented and controlled?

Through our own development of a simulation system, and reading about others' attempts, our conclusion is that people flow can be represented and controlled to a useful degree using agent-based modeling in Unreal Engine, given some conditions.

The first condition is that you have enough information about the nature of the reality you are simulating. In our case, not knowing which check-in desks had been assigned to a certain flight resulted in uncertainty. This put enormous pressure on our assumption about the distribution being correct; if wrong, it could skew the results across thousands of actors. There is also an effectively unlimited number of behavioral parameters that would need to be tweaked in order to mimic reality, such as people coming early to the airport to eat or shop before their flight, people traveling in groups, and people arriving according to the time-table of public transport.
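To make the distribution assumption concrete, the arrival-time behavior could be sketched as a normally distributed offset before the scheduled departure. This is a minimal sketch, not the prototype's actual implementation; the function name and the parameter values (a mean of 120 minutes before departure, a standard deviation of 30) are hypothetical and would need calibration against real data:

```python
import random

def spawn_times(departure_min, n_passengers, mean_offset=120.0, sd=30.0, seed=None):
    """Draw one arrival time (minutes since midnight) per passenger,
    normally distributed around `mean_offset` minutes before departure."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_passengers):
        # Clamp the offset at zero so nobody arrives after departure.
        offset = max(rng.gauss(mean_offset, sd), 0.0)
        times.append(departure_min - offset)
    return sorted(times)

# A 14:00 departure (840 minutes since midnight) with 150 passengers:
arrivals = spawn_times(840, 150, seed=1)
```

Each entry in `arrivals` would become a spawn event for one agent in the engine; traveling groups could be approximated by drawing one offset per group instead of per passenger.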

The second condition for representing and controlling people flow using agent-based modeling in a game engine is that the software algorithms and hardware capacity can handle the complexity of the situation. With agent-based modeling, complexity scales with the number of active agents in the environment. In our case, it seems that the complexity of simulating all departing travelers is within reach even at a twentyfold speedup, but it is uncertain whether our setup would cope if we increased the scope to simulating arriving travelers as well, or all of Arlanda instead of only "Terminal 5".

6.1.2 Which people flow patterns are important for decision-making?

To answer this question, we look to the result of the interviews with stakeholders at Swedavia. Our conclusion is that the single most important people flow pattern expressed by the interviewees is sudden surges in people arriving at the check-in desks or security controls. The primary use-case was to avoid bottlenecks by adapting the number of employees, their tasks, and their placement, depending on the expected people flow in different areas of the airport. Other people flow patterns were also brought up as beneficial to predict, but not as important as avoiding bottlenecks.


6.1.3 How can we collect relevant data and visualize these patterns?

Given the conclusions above, "relevant data" implies requirements such as being able to identify "surges in people at check-in and security control", and the data needs to have high validity. The method proposed for this is to collect simulated data from an event for which a lot of real-life data exists. In our case, this was the "ADK data", which offered us a detailed account of people passing through a certain area of the airport throughout the day. A comparison against this data has high validity since the ADK counts are direct measurements of reality.

The way this setup could be used to predict "surges of people" in the near future would be to regularly compare the simulated people flow with live ADK data. For example, if the simulation expects 100 out of 110 airplane passengers to have walked past the ADK zone at a given time, but only 50 people have actually done so according to the live ADK data, then the simulation system could judge whether the remaining 60 people will cause the queue lines to grow unacceptably. This could trigger a warning message to decision-makers, who would then have time to react before the "surge of people" even begins.
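The comparison described above can be sketched as a simple threshold check. This is illustrative only; the function name, the notion of a fixed `queue_capacity`, and the warning rule are our hypothetical simplifications, not part of the prototype:

```python
def surge_warning(expected_passed, actual_passed, total_passengers, queue_capacity):
    """Compare the simulated count against the live ADK count at one
    point in time. If the live count lags behind the simulation, the
    backlog may arrive as a surge; warn when the number of people
    still to come exceeds what the queue lines can absorb."""
    remaining = total_passengers - actual_passed
    lag = expected_passed - actual_passed
    return lag > 0 and remaining > queue_capacity

# The scenario from the text: 100 of 110 expected to have passed, only 50 observed.
surge_warning(100, 50, 110, queue_capacity=40)  # -> True, a warning is raised
```

In practice the check would run at every ADK reporting interval, and `queue_capacity` would itself be estimated from the number of open desks or lanes.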

6.1.4 How can the accuracy of crowd simulations be evaluated?

The answer to this question was reached through the literature study of "simulation" as a field of science. The result of that study indicated that two methods are useful when evaluating the accuracy of a simulation: verification and validation. We found these methods very useful in the development and evaluation of the prototype. If a simulation model consistently has high output accuracy no matter what day is simulated, then the simulation could be considered useful as a tool. But reaching that point of high validity is, as stated before, a potentially endlessly detailed task of making good assumptions. This is where verification through code analysis was helpful for us. We rated which assumptions would have the greatest effect on the simulation output, and tried to treat them with corresponding significance. The most significant assumptions need to be "the most correct". However, this method is not as straightforward as validation, since it is up to the developers to judge the rating dilemma described above.
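The validation step could, for instance, be expressed as an error metric between simulated and measured counts. The sketch below uses the mean absolute percentage error over hourly ADK counts; the function name and the example figures are illustrative, not results from the prototype:

```python
def mean_absolute_percentage_error(simulated, measured):
    """Average relative deviation between simulated and measured hourly
    counts. Hours with zero measured flow are skipped to avoid
    division by zero."""
    pairs = [(s, m) for s, m in zip(simulated, measured) if m != 0]
    return sum(abs(s - m) / m for s, m in pairs) / len(pairs)

sim  = [120, 300, 450, 380]   # simulated hourly ADK-zone counts (made up)
real = [110, 320, 440, 400]   # measured hourly ADK-zone counts (made up)
error = mean_absolute_percentage_error(sim, real)  # roughly 0.057, i.e. ~5.7 %
```

A model that keeps this error low across many simulated days, not just one, would meet the "high output accuracy no matter what day is simulated" criterion above.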

6.1.5 How can live measurements from external sources be used to improve the simulation accuracy in real-time?

Real-time data-sources could potentially improve the simulation immensely. One of these ways was described earlier: adapting the simulation to live ADK data in order to make more valuable predictions about congestion at check-in desks or security controls. Another valuable live data-source would be connected to the public transport to Arlanda, such as the train "Arlanda Express". This could be used as the spawning mechanism for a proportion of all simulated agents in order to mimic the people arriving by train to Arlanda. Yet another live data-source would be traffic information
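The train-based spawning idea could be sketched as follows. The 30 % train share, the function name, and the even distribution of passengers across trains are hypothetical assumptions; a real implementation would read arrival times and passenger shares from a live feed:

```python
def train_spawn_schedule(train_arrivals, passengers_per_day, train_share=0.3):
    """Distribute the share of travelers assumed to arrive by train
    evenly over the day's train arrivals, yielding (time, burst_size)
    spawn events for the simulation to consume."""
    per_train = round(passengers_per_day * train_share / len(train_arrivals))
    return [(t, per_train) for t in train_arrivals]

# Trains arriving at 06:00, 06:15 and 06:30 (minutes since midnight):
events = train_spawn_schedule([360, 375, 390], passengers_per_day=3000)
# -> [(360, 300), (375, 300), (390, 300)]
```

With a live feed, delayed trains would simply shift their spawn events later, letting the simulation reflect the kind of external disturbances the interviewees described.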

References
