Target Tracking in Decentralised Networks with Bandwidth Limitations



Tim Fornell and Jacob Holmberg



LiTH-ISY-EX--18/5172--SE

Supervisors: Per Boström-Rost, ISY, Linköping University
             Jonas Nygårds, Swedish Defence Research Agency, FOI
             Viktor Deleskog, Swedish Defence Research Agency, FOI

Examiner: Gustaf Hendeby, ISY, Linköping University

Division of Automatic Control
Department of Electrical Engineering
Linköping University, SE-581 83 Linköping, Sweden


The number and size of sensor networks, e.g., those used for monitoring of public places, are steadily increasing, introducing new demands on the algorithms used to process the collected measurements. The straightforward solution is centralised fusion, where all measurements are sent to a common node where all estimation is performed. This can be shown to be optimal, but it is resource intensive, scales poorly, and is sensitive to communication and sensor node failure. The alternative is to perform decentralised fusion, where the computations are spread out in the network. Distributing the computation results in an algorithm that scales better with the size of the network and that can be more robust to hardware failure. The price of decentralisation is that it is more difficult to provide optimal estimates. Hence, a decentralised method needs to be designed to maximise scaling and robustness while minimising the performance loss. This MSc thesis studies three aspects of the design of decentralised networks: the network topology, communication schemes, and methods to fuse the estimates from different sensor nodes. Results are obtained using simulations of a network consisting of radar sensors, where both the quality of the estimates (the root mean square error, RMSE) and the consistency of the estimates (the normalised estimation error squared, NEES) are compared. Based on the simulations, it is recommended that a 2-tree network topology should be used, and that estimates should be communicated throughout the network using an algorithm that allows information to propagate. This is achieved by sending information in two steps. The first step is to let the nodes send information to their neighbours with a certain frequency, after which a fusion is performed. The second step is to let the nodes indirectly forward the information they receive by sending the result of the fusion.
This second step is not performed every time information is received, but rather at an interval, e.g., every fifth time. Furthermore, three sub-optimal methods to fuse possibly correlated estimates are evaluated: Covariance Intersection, Safe Fusion, and Inverse Covariance Intersection. The outcome is to recommend using Inverse Covariance Intersection.


This thesis is the result of hard work and many long hours. Nevertheless, it would not have been possible without the help of our supervisors. To Jonas Nygårds and Viktor Deleskog at FOI: we are forever grateful for all your help and insightful comments.

From Linköping University: our supervisor Per Boström-Rost and examiner Gustaf Hendeby, many thanks for your weekly input.

Lastly, we would like to thank our families and friends; we are deeply grateful for the support they provided during our studies. Special thanks to our mutual friends Martin and Oscar, without whom these five years would have been much harder to get through.

Linköping, June 2018
Tim Fornell and Jacob Holmberg


Notation xi

1 Introduction 1

1.1 Motivation . . . 1

1.1.1 Centralised Data Fusion . . . 2

1.1.2 Decentralised Data Fusion . . . 2

1.2 Purpose . . . 3

1.3 Problem Formulation . . . 3

1.4 Limitations . . . 4

1.5 Contributions . . . 4

1.6 Thesis Outline . . . 5

2 Theory and Background 7

2.1 Target Tracking . . . 7

2.1.1 History of Tracking . . . 8

2.1.2 Modelling . . . 9

2.1.3 Extended Kalman Filter . . . 10

2.2 Tracking Architectures . . . 10

2.2.1 Centralised Fusion . . . 11

2.2.1.1 Centralised Kalman Filter . . . 11

2.2.2 Decentralised Fusion . . . 12

2.2.2.1 Background . . . 13

2.3 Fusion of Estimates . . . 13

2.3.1 Optimal Bayesian Fusion . . . 14

2.3.2 Sub-optimal Fusion Algorithms . . . 15

2.3.2.1 Safe Fusion . . . 16

2.3.2.2 Covariance Intersection . . . 17

2.3.2.3 Inverse Covariance Intersection . . . 18

2.4 Sensor Networks . . . 20

2.4.1 Sensor Network Topologies . . . 20

2.4.1.1 Rooted Trees . . . 21

2.4.1.2 K-trees . . . . 22

2.4.2 Number of Links . . . 23


2.4.3 Communication Limitations . . . 23

2.4.3.1 Bandwidth . . . 24

2.4.3.2 Sensor and Information Outage . . . 24

3 Simulation Study Description 27

3.1 Evaluation Approach . . . 27

3.2 Evaluation Methods . . . 28

3.2.1 Monte Carlo Simulations . . . 28

3.2.2 Evaluation of Performance . . . 29

3.2.2.1 RMSE . . . 29

3.2.2.2 Trace of Covariance Matrix . . . 29

3.2.2.3 NEES . . . 30

3.2.2.4 Compared Quantities . . . 30

3.2.2.5 Bandwidth Usage . . . 31

3.2.3 Simulation Environment . . . 31

3.2.3.1 Baseline . . . 32

3.3 Studied Scenario . . . 32

3.3.1 Sensor Node Operation . . . 32

3.3.1.1 Target Motion Model . . . 33

3.3.1.2 Sensor Measurement Model . . . 34

3.3.1.3 Sensor Operation . . . 35

3.3.2 Sensor Network Topologies . . . 36

3.3.2.1 Full Connection Topology . . . 36

3.3.2.2 2-Tree Topology . . . 36

3.3.2.3 Tree Topology with Width Prioritised . . . 36

3.3.2.4 Tree Topology with Depth Prioritised . . . 37

3.3.3 Communication Algorithms . . . 38

3.3.3.1 Fusion Order . . . 38

3.3.3.2 Fuse-Time . . . 39

3.3.3.3 Algorithm 0 . . . 40

3.3.3.4 Algorithm 1 . . . 41

3.3.3.5 Algorithm 2 . . . 43

3.3.3.6 Algorithm 3 . . . 45

3.3.4 Sub-optimal Fusion Algorithms . . . 47

3.3.5 Methods for Minimising Required Bandwidth . . . 47

3.3.5.1 Increase of Fuse-time . . . 48

3.3.5.2 Second Fusion . . . 48

3.3.5.3 Diagonal . . . 48

4 Evaluation of Simulated Scenarios 49

4.1 Overview of Simulation Results . . . 49

4.2 Sensor Network Topologies . . . 50

4.3 Communication Algorithms . . . 53

4.4 Sub-optimal Fusion Algorithm . . . 57

4.5 Minimising the Number of Sent Elements . . . 59


4.6.1 EKF Anomaly . . . 68

4.6.2 Switch of Optimal Sensor . . . 68

4.6.3 Effects of Fusion Order . . . 70

4.6.4 Cross-Covariance . . . 72

4.6.4.1 Cross-Covariance for SF . . . 72

4.6.4.2 Cross-Covariance for CI and ICI . . . 74

5 Conclusion and Future Work 77

5.1 General Conclusion . . . 77

5.2 Future Work . . . 78

A Derivations and Definitions 83

A.1 Jacobian . . . 83

A.2 Optimal Bayesian Fusion . . . 83

B Simulation 85

B.1 Initialisation . . . 85

B.2 General Parameters . . . 86

C Extra Figures 89

C.1 Switch of Optimal Sensor . . . 89

C.2 Effects of Fusion Order . . . 90

C.3 Minimising Used Bandwidth . . . 91

C.3.1 Original Parameters . . . 91

C.3.2 Fuse-time . . . 92

C.3.3 Second Fusion . . . 94

C.3.4 Diagonal . . . 96


Symbols

Notation        Meaning
x               States
x̂               Estimate of states
y               Measurement
Ts              Sample time
(px, py, pz)    Position in Cartesian coordinates
(vx, vy, vz)    Velocity in Cartesian coordinates
(r, θ, ϕ)       Spherical coordinates
(r, θ)          Polar coordinates
P               Covariance matrix
X               Stochastic variable with mean x̂ and covariance P
Si              Sensor i
p(y|x)          Conditional probability
∪               Set union
∩               Set intersection
\               Set difference
E[x]            Expected value


Abbreviations

Abbreviation Meaning

kf Kalman Filter

ekf Extended Kalman Filter

ckf Centralized Kalman Filter

ukf Unscented Kalman Filter

sf Safe Fusion

ci Covariance Intersection

ici Inverse Covariance Intersection

gimf Generalised Information Matrix Fusion

svd Singular Value Decomposition

rmse Root Mean Square Error

nees Normalised Estimation Error Squared

foi Swedish Defence Research Agency

fc Fusion Core


1 Introduction

This chapter begins with a motivation for why this thesis has been written. This is followed by two highly relevant definitions: centralised and decentralised data fusion. After this, the purpose, problem formulation and limitations of the thesis are presented. Lastly, there is a short description of what each author has contributed and an outline of the thesis.

1.1 Motivation

The development of sensors in the last decades has led to an increase of surveillance in public places, with the purpose of protecting people and preventing criminal behaviour [3]. One important component in these surveillance systems is tracking, which is used to gain increased situational awareness, especially during surveillance of vulnerable infrastructure. This is something society would benefit from, as it could be used to, e.g., prevent crimes or protect people in general. Despite this, surveillance of public spaces with the purpose of tracking people as they walk around is a sensitive area, as it is believed to pose a threat to privacy. This is due to a combination of several factors, one of which is that when a tracking system of this kind is implemented, it is often done using visual cameras. This thesis does not treat the concept of using visual cameras as sensors, but what is presented in the thesis might be implemented using cameras as sensors, and therefore the discussion about surveillance is relevant.

When implementing a tracking system of this sort it is common to use multiple sensors. The reason for this is that multiple sensors make it possible to cover a larger area and to combine information such that a better tracking result can be achieved. When using multiple sensors, the network topology, i.e., how the sensors communicate and how their information is used, can be implemented in a few different ways. Two of the most common are centralised and decentralised architectures. Centralised architectures are easier to implement but often require high bandwidth and are vulnerable to loss of information, making them inefficient for systems with a large number of sensors. Decentralised architectures are somewhat harder to implement, but do not require as much bandwidth and are less vulnerable to loss of information. Based on this it is of high interest to investigate the possibilities for a decentralised architecture and how it should be designed.

1.1.1 Centralised Data Fusion

As described in [13], sensors in centralised architectures send their raw measurements to a central hub, also known as a fusion core (fc), that fuses the data into an estimate. The most sought-after benefit of centralised architectures is typically the possibility to obtain the theoretically optimal estimate, under the assumption that the bandwidth is sufficient [5]. However, since the available bandwidth is often limited, the network structure needs to be scalable so that the required bandwidth does not increase drastically when sensors are added. Additional sensors also mean more work for the fc, which in turn leads to higher demands on computational power. These potential bottlenecks are much smaller issues in a decentralised architecture.

1.1.2 Decentralised Data Fusion

The goal of a decentralised architecture is to not be dependent on a single fc. The following section is based on [8]. Instead of relying on a single fc, every node in a decentralised architecture has its own processing facility. In other words, a decentralised architecture does not require a central node that performs the fusion or acts as a communication hub. As a consequence, every sensor itself performs the fusion of local estimates and information received from neighbouring nodes.

In [8], a decentralised architecture is defined as a system that fulfils the following requirements:

• No single central fusion core exists; the chance of the network operating successfully does not depend on a single node.

• The communication in the network must be kept strictly node-to-node; no node is central to the communication and nodes cannot broadcast information.

• Individual sensors do not have knowledge of the full network topology; they only know about the sensors that they are directly connected to.

If all the requirements are fulfilled, a couple of characteristics are obtained, of which the two most important are scalability and survivability. The network is much easier to scale when there are no centralised computational bottlenecks. Also, limited communication bandwidth is more manageable because there is no need to send all information to a central hub. The chance of the network operating successfully does not depend on a single node, which means that the loss or addition of a node does not result in system failure.

1.2 Purpose

The purpose of this thesis is to perform a pre-study to test new ideas regarding how information is transmitted in a decentralised network with bandwidth limitations while tracking a target. The system tracks a single target, represented by a person, as it moves in an urban environment. As described above, this can be used in security applications, e.g., following people of interest when a crime has been committed. The main focus is to thoroughly study methods to reduce the amount of communicated information without producing results incompatible with the statistical model.

1.3 Problem Formulation

As described above, a decentralised network has advantages compared to a centralised network in the case of bandwidth limitations, since it in most cases does not have to send as much data. This thesis investigates how the nodes in a decentralised network can communicate with each other such that each node has approximately the same estimate of the target, while at the same time minimising the total amount of data being sent. In order to do this, a few different questions are studied:

• How should the nodes be connected?

• What kind of data should they send to their neighbours?

• Which fusion algorithm should the nodes deploy?

• How often should they send this data?

To determine if any of the tested cases actually have any benefits, they are compared to a centralised network. In doing so, the accuracy of the estimates is compared.


1.4 Limitations

To make sure that the scope of the thesis does not become too large, some assumptions and boundaries have been set up.

The communication is considered ideal, which means that there is no noise on the communication channel. The limitations on the transmissions are set up beforehand and do not depend on the characteristics of the transmitter, the receiver, or the communication medium. This means that all information that is sent arrives at the desired location.

There is no time delay on the transmitted information at the local nodes; the information is considered available directly after it is sent. Processing information is considered not to require any time. In other words, sending and processing information between nodes can all be done within one time sample.

Only one target object is present. Thus, this thesis does not consider the target association problem.

1.5 Contributions

Since this thesis has two authors, the work has been divided between them. The report was divided so that each person contributes with theory, implementation and analysis; this means that each person has written one or more sections in each chapter. When choosing the overall layout of the thesis, i.e., how the experiment should be constructed and how it should be approached, both authors have contributed to the final design and result. Any choice made when designing the experiment was only made if both authors agreed on it. When implementing the experiment, i.e., programming the simulation environment, the work was divided so that each author is in charge of implementing the sections they have written in Chapter 2. However, this does not mean that they wrote all the code concerning that area; it only means that the author is in charge of making sure it works and that everything that should be implemented is implemented. Therefore, both authors have contributed in each area.

The different chapters in the report are divided as follows. Chapter 2 is divided so that Jacob was in charge of the sections describing how tracking has developed throughout history, centralised tracking and the limitations a real-world system can have. Tim was in charge of the sections describing modelling in general and decentralised tracking. In Chapter 3 the work is divided so that Jacob was in charge of the parts related to the simulation environment, while Tim was in charge of the parts related to the communication algorithms. In Chapter 4 the work is divided so that Jacob was in charge of the general analysis of the fusion algorithms together with the analysis of the bandwidth, while Tim was in charge of analysing the communication algorithms, the special cases and the general results for when the number of sent elements is minimised. In Chapter 5 the work was performed together, since it is in this chapter that the results are merged to form a conclusion. All topics not specifically mentioned here were written by both authors.

1.6 Thesis Outline

The outline of the thesis is as follows: Chapter 2 includes relevant theory on target tracking and sensor networks. This is followed by Chapter 3 that presents and discusses the methods that are used to answer the problem formulation. In Chapter 4 the results are presented and evaluated according to the evaluation method presented in the previous chapter. Lastly, Chapter 5 concludes the thesis and describes some suggestions for future work.


2 Theory and Background

This chapter starts by introducing the concept of target tracking and how it can be implemented. After this, there is a short description of how it has developed throughout history. Thereafter, an explanation is given of how sensor fusion is performed in two different architectures. Lastly, there is an explanation of what is meant by a network structure.

2.1 Target Tracking

This thesis treats the concept of target tracking, meaning that sensors, e.g., visual cameras, infrared cameras or radar sensors, are used to identify a target of interest, e.g., a vehicle, a person or a missile, and to estimate the state of this target as time progresses. A simple illustration of what is meant by target tracking can be seen in Figure 2.1. In the scenario depicted in Figure 2.1 there are three sensors and one target, and the objective of the sensors, in this case, could be to estimate the position of the target as it moves along its trajectory. As the target moves along its trajectory it will at some point enter the field of view of each sensor, and when any of the three sensors detects a target in its field of view it will attempt to follow it. How the detection is done depends on the type of sensor, e.g., a visual camera could use some kind of image processing algorithm, but this is not something covered in this thesis. This thesis only covers how the tracking is performed given the measurements, and the requirements to perform it are treated in the sections to come.

Figure 2.1: Illustration of target tracking; the three sensors S1, S2 and S3 estimate the position of the target X when it is in their field of view.

2.1.1 History of Tracking

Historically there have been several filters used for tracking, two of the earliest were the Wiener filter and the Alpha-beta tracker. The Wiener filter [11] was one of the first filters used for tracking problems, presented by Wiener in 1949. The filter was constructed to separate a signal from its noise in an observation with the use of the signal spectra. It can be seen as the start of the research in filter theory and the tracking problem [11].

The Alpha-beta tracker [26], first presented in 1957, comes in many forms and is a simplified form of an observer. It approximates the system by a model with two states. The first state is obtained by integrating the second state over time, which makes the filter suitable for estimating position and velocity of a target. The advantages are low complexity and fast computational time, but the use of only two states makes it very limited.

A big breakthrough for filter theory was the Kalman filter [17], presented by Kalman in 1960. The practical use of the Wiener filter was limited because the modelling and computational tools of that time were insufficient. Another thing that hampered the Wiener filter was its restriction to time-invariant signal models and scalar processes. The Kalman filter solves the same problem but uses state-space models for the signal instead of the signal spectra. Over the years many versions of the Kalman filter have been developed, some of which are briefly presented here.

• Kalman Filter, kf: estimates the states in a linear state-space model.

• Extended Kalman Filter, ekf: in engineering applications few things are truly linear, and the solution was the Extended Kalman Filter, first developed in 1962 [25]. It linearizes the system around the current state estimate with the help of Taylor expansions. The most common variants of the ekf are the first order ekf and the second order ekf, where the order of the ekf tells which order of Taylor expansion is used. As described in [10], during the linearization a Jacobian or a Hessian, depending on the order of the ekf, is computed.

• Unscented Kalman Filter, ukf: developed in 1995 [16], the ukf spreads out a couple of points in the state-space where a distribution is fitted for every time step. These points are obtained using a deterministic sampling technique called the unscented transform, which is a method to transform a Gaussian distribution through nonlinear mappings. According to [10], using the unscented transform instead of the linearization in the ekf makes the ukf more suitable for highly nonlinear systems.

As mentioned before, the major difference between the Wiener and Kalman filters is the use of state-space models. According to [10] this is what makes the Kalman filter especially useful for centralised tracking, since it can easily be modified to handle several measurements.

There are many solutions to the tracking problem and, to simplify, they can be divided into two major approaches: a centralised network structure with a fc, and a decentralised network structure where every sensor computes an estimate itself [5].

2.1.2 Modelling

Two of the most important concepts needed in a target tracking system are a model of what the sensors are measuring and a model describing the target. A general description of the dynamics of a target is

x_{k+1} = f(x_k, u_k, w_k),  (2.1)

and a general description of how the measurements are related to the target state is

y_k = h(x_k, u_k, e_k).  (2.2)

In the above equations x_k is the target state, u_k is the control signal, w_k and e_k are the process and measurement noise, respectively, and y_k is the measurement at time k. In these models the target state x_k could be the position of the target together with its velocity and acceleration. The control signal u_k could be a force that pushes on the target. The measurement noise e_k is a representation of the accuracy of the sensor, and the process noise w_k is a representation of the simplifications made in the model.
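As an illustration, the two models above can be written out for a simple case: a constant-velocity target in two dimensions observed by a range-bearing sensor. This is only a sketch; the state layout, the sample time and the noise handling are illustrative choices, not the models used later in the thesis.

```python
import numpy as np

Ts = 1.0  # sample time (illustrative value)

def f(x, w):
    """Motion model (2.1): constant velocity, state x = [px, py, vx, vy]."""
    F = np.array([[1.0, 0.0, Ts,  0.0],
                  [0.0, 1.0, 0.0, Ts ],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    return F @ x + w  # process noise w captures unmodelled manoeuvres

def h(x, e):
    """Measurement model (2.2): range and bearing to the target."""
    r = np.hypot(x[0], x[1])
    theta = np.arctan2(x[1], x[0])
    return np.array([r, theta]) + e  # measurement noise e captures sensor accuracy

# One noise-free step: propagate the state, then measure it
x0 = np.array([100.0, 50.0, 1.0, -0.5])
x1 = f(x0, np.zeros(4))
y1 = h(x1, np.zeros(2))
```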


2.1.3 Extended Kalman Filter

One way to implement a target tracking system is to use an Extended Kalman Filter. An ekf can be implemented when a sensor model and a target model have been defined according to (2.1) and (2.2) in the previous section. The algorithm for the ekf is described in [10] and is presented in Algorithm 2.1. It uses the first order Taylor expansion of the nonlinear functions h(x_k, u_k, e_k) and f(x_k, u_k, w_k) described in Section 2.1.2 and maintains an estimate x̂_k and a corresponding covariance matrix P_k.

Algorithm 2.1: DARE-based ekf [10]

The ekf for the model described in Section 2.1.2 with additive noises w_k and e_k is given by the following recursions, initialised with x̂_{1|0} and P_{1|0}.

Measurement update:
  S_k = R_k + h'(x̂_{k|k−1}) P_{k|k−1} (h'(x̂_{k|k−1}))^T
  K_k = P_{k|k−1} (h'(x̂_{k|k−1}))^T S_k^{−1}
  ε_k = y_k − h(x̂_{k|k−1})
  x̂_{k|k} = x̂_{k|k−1} + K_k ε_k
  P_{k|k} = P_{k|k−1} − P_{k|k−1} (h'(x̂_{k|k−1}))^T S_k^{−1} h'(x̂_{k|k−1}) P_{k|k−1}

Time update:
  x̂_{k+1|k} = f(x̂_{k|k})
  P_{k+1|k} = Q_k + f'(x̂_{k|k}) P_{k|k} (f'(x̂_{k|k}))^T
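One such recursion can be sketched in code as follows, assuming the Jacobians h' and f' are supplied as functions; the function name and interface are illustrative, not taken from the thesis.

```python
import numpy as np

def ekf_recursion(xhat, P, y, f, h, f_jac, h_jac, Q, R):
    """One EKF recursion as in Algorithm 2.1: measurement update of
    (xhat_{k|k-1}, P_{k|k-1}) with y_k, followed by the time update."""
    # Measurement update
    H = h_jac(xhat)                    # h'(xhat_{k|k-1})
    S = R + H @ P @ H.T                # innovation covariance S_k
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain K_k
    eps = y - h(xhat)                  # innovation eps_k
    xhat = xhat + K @ eps
    P = P - K @ S @ K.T                # equals P - P H^T S^-1 H P
    # Time update
    F = f_jac(xhat)                    # f'(xhat_{k|k})
    return f(xhat), Q + F @ P @ F.T    # (xhat_{k+1|k}, P_{k+1|k})
```

For a linear model the Jacobians are the constant system matrices, and the recursion reduces to the ordinary kf.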

2.2 Tracking Architectures

This section presents two common architectures that can be used when performing fusion in a network of sensors, and which fusion algorithms can be used in each architecture. The first, and the most commonly used, is the centralised architecture. The second, and the one this thesis focuses on, is the decentralised architecture. The difference between the two is that in centralised architectures the fusion is performed on measurements provided by the sensors, while in decentralised architectures the fusion is performed on estimates provided by the individual sensors.


2.2.1 Centralised Fusion

As described in Section 1.1.1, the sensors in a centralised network send their observations to a fc. The fc fuses these observations into an estimate x̂ under the assumption that all measurement errors are independent. An example of a centralised network structure is shown in Figure 2.2.

Figure 2.2: Illustration of how data flows in a centralised network.

As can be seen in Figure 2.2 there are three sensors, S1, S2 and S3, which send their measurements to a central fc. The fc then fuses these measurements and produces an estimate x̂. If any more sensors were to be added to the system, the only difference would be that the fc receives another set of measurements to be fused.

2.2.1.1 Centralised Kalman Filter

One of the most common ways to implement a centralised architecture is to use a centralised version of the ekf, the Centralised Kalman Filter (ckf). The ekf can easily be used when more than one measurement is available, and one way of doing this is by making some small modifications to Algorithm 2.1. Firstly, the measurements need to be collected into a single measurement equation. To illustrate the changes, it is assumed that m measurements according to the sensor equation in (2.2) are available. These measurements can then be stacked according to

  ȳ_k = [y_k^1; y_k^2; …; y_k^m] = [h^1(x_k); h^2(x_k); …; h^m(x_k)] + [e^1; e^2; …; e^m] = h̄(x_k) + ē.  (2.3)

Secondly, the variable R_k containing the measurement noise covariance needs to be modified according to

  R̄_k = diag(R_k^1, R_k^2, …, R_k^m),  (2.4)

where R_k^i is the covariance matrix of the measurement noise from sensor i at time k. Algorithm 2.1 can now be used with h = h̄ and R_k = R̄_k according to (2.3) and (2.4).
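The stacking in (2.3) and (2.4) can be sketched as follows; the helper name and its interface are illustrative assumptions, not part of the thesis.

```python
import numpy as np

def stack_sensors(ys, hs, Rs, x):
    """Build the stacked measurement, stacked measurement function value
    and block-diagonal noise covariance of (2.3)-(2.4) from per-sensor
    measurements ys, measurement models hs and covariances Rs."""
    y_bar = np.concatenate(ys)
    h_bar = np.concatenate([h(x) for h in hs])
    n = sum(R.shape[0] for R in Rs)
    R_bar = np.zeros((n, n))
    i = 0
    for R in Rs:  # place each R_k^i on the diagonal of R_bar
        m = R.shape[0]
        R_bar[i:i + m, i:i + m] = R
        i += m
    return y_bar, h_bar, R_bar
```

With the stacked quantities in place, a single ekf recursion processes all m sensors at once, exactly as if they were one large sensor.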

2.2.2 Decentralised Fusion

In a decentralised network there is no single fc as in a centralised network. Instead, a decentralised network consists of several sensors, each of which has its own fc. Each sensor is then connected to one or more other sensors. This means that each sensor in a decentralised network tracks targets locally and maintains a local estimate x̂ of the target state. These local estimates can be sent to neighbouring sensors with the goal of achieving an estimate closer to the true state of the target. Neighbouring sensors are those that are in direct contact with each other. An illustration of a decentralised network can be seen in Figure 2.3. In the figure, there are three sensors and each sensor produces a local estimate x̂_i, which is sent to each neighbour. When a sensor receives an estimate from a neighbour, it fuses this estimate with its own local estimate.

Figure 2.3: Illustration of how data flows in a decentralised network.

A problem in decentralised architectures is that optimal fusion is difficult because the estimates provided by the sensors may contain common information. What is meant by this, and a solution to this problem, is presented in the sections to come.


2.2.2.1 Background

Sensor fusion within distributed networks has been an active research area since the late 1970s. The first papers treated the use of a decentralised structure and filtering to control large-scale systems, which is discussed in [14]. The main problem in the beginning was to avoid double counting information, of which a thorough explanation is given in Section 2.3.1. In centralised network structures this is not a problem, because the fc has access to all information. However, in decentralised network structures the estimates often have unknown correlation, which may lead to an incorrect estimate as double counting of information may occur.

In [5] a short description of the development within decentralised tracking is given. It states that the solution to the distributed estimation problem can be divided into two different approaches. The first is to reconstruct the global estimate from local estimates; the second is to find the best linear fusion rule given the local estimates. The first approach includes a number of methods, one of the earliest being Information Decorrelation [28]. This method tries to achieve optimal fusion by identifying the new information in each local estimate and combining it with the global estimate. To summarise, the methods of this approach reconstruct the global estimate by straightforward modelling of the estimates' dependency due to previous communication and process noise. This information is used to identify the additional information needed to reach the optimal estimate. One of the main problems with this approach is its dependency on the system dynamics and communication facilities, and that local estimates from earlier times are needed.

This thesis treats the second approach, linear state estimate fusion, which in short tries to fuse two or more estimates into one without knowing the correlation between them. A more thorough explanation of this is given in Section 2.3.2. The advantage is a more robust system, but it does not produce the optimal estimate in all cases.

2.3 Fusion of Estimates

When performing a fusion of data of some sort, e.g., measurements or estimates, the goal is to achieve a final estimated state that is either identical to or as close as possible to the true state. This section presents the concept of optimal fusion and why it is not always possible to perform in a decentralised network. This is followed by a description of sub-optimal fusion, which is one solution to this problem, and a few sub-optimal algorithms. Sub-optimal fusion does not always produce the best estimate but can be used in a decentralised network.


2.3.1 Optimal Bayesian Fusion

The following section describes the concept of achieving an optimal result, also known as optimal Bayesian fusion, and is based on [12]. Suppose two sensors S1 and S2 have both created a set of measurements for n time instances according to

  N1 = {y1(t1), y1(t2), …, y1(tn)}
  N2 = {y2(t1), y2(t2), …, y2(tn)}.  (2.5)

If the measurements above, {N1, N2}, are assumed to be conditionally independent for a given x, the conditional probability can be written as

  p(N1, N2 | x) = ∏_{i=1}^{n} p(y1(ti) | x) · ∏_{i=1}^{n} p(y2(ti) | x),  (2.6)

which is valid if the measurement error of each sensor is independent over time and between sensors. The goal in a decentralised sensor network is to compute the probability p(x | N1 ∪ N2) from the local estimates of the sensors in the network. The local estimates are obtained by computing the local posterior conditional probabilities p(x | N1) and p(x | N2). However, the local estimates from the sensors may share some common information, as illustrated in Figure 2.4.

Figure 2.4: Illustration of when N1 and N2 have common information.

A fusion of these two sets of measurements can be represented by their union, given by

N1 ∪ N2 = (N1 \ N2) ∪ (N2 \ N1) ∪ (N1 ∩ N2),   (2.7)

where \ denotes the set difference. The above equation illustrates that the common information N1 ∩ N2 has to be taken into consideration when fusing the estimates. To achieve a fused estimate x from these two sets, the probability p(x | N1 ∪ N2) is calculated according to

p(x | N1 ∪ N2) = C^{-1} \frac{p(x | N1) p(x | N2)}{p(x | N1 ∩ N2)},   (2.8)


where

C = \frac{p(N1 ∩ N2) p(N1 ∪ N2)}{p(N1) p(N2)}   (2.9)

is a normalising constant. Equation (2.8) states that the probability p(x | N1 ∪ N2) is equal to the product of the local probabilities p(x | N1) and p(x | N2), divided by the common probability p(x | N1 ∩ N2). This means that the common information has to be found before performing a fusion. The derivation of (2.8) can be seen in Appendix A.
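For Gaussian densities, (2.8) takes a simple closed form in the information (inverse covariance) domain: the informations of the two local posteriors add, and the common information is subtracted once. The sketch below is a scalar illustration of this with made-up numbers; it is not part of the thesis itself.

```python
# Scalar Gaussian instance of (2.8): fuse p(x|N1) and p(x|N2) while
# removing the common information p(x|N1 ∩ N2) exactly once.
# Information form: I = 1/variance, i = mean/variance.

def bayes_fuse(mean1, var1, mean2, var2, mean_c, var_c):
    """Fuse two Gaussian posteriors that share a common Gaussian factor."""
    I = 1 / var1 + 1 / var2 - 1 / var_c              # fused information
    i = mean1 / var1 + mean2 / var2 - mean_c / var_c  # fused information state
    return i / I, 1 / I                               # fused mean and variance

# Hypothetical numbers: two local posteriors sharing a common prior factor.
mean, var = bayes_fuse(1.0, 2.0, 2.0, 2.0, 1.5, 4.0)
```

Note that if the common factor is ignored (set var_c to infinity), the fused variance shrinks too much; this is exactly the double-counting problem discussed next.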

Double Counting

To illustrate a common problem that can occur if the common information is not exactly known, an example is presented. The example consists of a sensor network with three sensors connected as in Figure 2.5. As illustrated in the figure, at time t1, S1 sends its estimate to S2, which combines it with its own local estimate. At time t2, S2 sends its estimate to S3, which at t3 sends to S1. If S1 is unaware that the information it receives from S3 contains the information it sent at time t1, it cannot perform an optimal fusion. This problem is known as double counting and is common, especially in decentralised networks. Double counting leads to the accuracy of the estimate being overestimated, which makes the estimate appear more accurate than it truly is.

Figure 2.5: Illustration of an information loop.
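The effect of double counting can be seen numerically in information (inverse covariance) form. In the toy sketch below (illustrative numbers, not from the thesis), two nodes both carry the same prior information; naively adding their informations counts the prior twice and yields a covariance that is smaller than the correctly fused one, i.e., an overconfident estimate.

```python
import numpy as np

# Toy double-counting demonstration in information form.
I_prior = np.eye(2) * 0.5            # information shared by both nodes
I1 = I_prior + np.eye(2) * 1.0       # node 1: prior + own measurement
I2 = I_prior + np.eye(2) * 2.0       # node 2: prior + own measurement

I_naive = I1 + I2                    # prior counted twice
I_correct = I1 + I2 - I_prior        # common information removed once

P_naive = np.linalg.inv(I_naive)
P_correct = np.linalg.inv(I_correct)
# The naive covariance is smaller than the correct one: overconfidence.
```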

2.3.2 Sub-optimal Fusion Algorithms

As described in Section 2.3.1, to perform optimal fusion the common information has to be known. According to [12] this can be achieved by, e.g., data tagging, which identifies and keeps track of the common information. However, methods like this rely on the possibility to transmit the information pedigree. Since most ad hoc networks have some kind of bandwidth limitation, sending this pedigree


is most likely not possible. Even if it were possible, it might not be practical because of possible failures and adaptive communication strategies within the network. As a solution to this problem, this thesis investigates a few sub-optimal fusion algorithms. These algorithms try to estimate the common information of the estimates before fusing them and, as a consequence, an optimal result cannot be guaranteed. Reference [20] defines two important characteristics that should be considered when evaluating sub-optimal fusion algorithms:

Consistency: An estimate (x̂, P) is consistent if the actual error covariance matrix is bounded by the reported covariance matrix P, i.e., P ≥ E[x̃ x̃^T], where x̃ = x̂ − x, and x is the ground truth.

Tightness: Assume the optimal fusion result is (x̂_Γ, P_Γ). Let Λ ≥ 0 be an upper bound for every possible P_Γ, i.e., P_Γ ≤ Λ for all admissible Γ. A fusion result (x̂_fus, P_fus) is tight if the implication P_Γ ≤ Λ ≤ P_fus ⇒ Λ = P_fus holds for all admissible Γ.

These two definitions are used to see if the presented sub-optimal fusion algorithms produce reasonable estimates. Consistency can be interpreted in the following way: since the covariance matrix is a measure of uncertainty it is, together with the estimated state, an indication of where the target can be located. If the actual error covariance is not bounded by the covariance matrix produced by the sub-optimal fusion algorithm, there is a possibility that the target is outside of the indicated area. This is an indication that the produced estimate is wrong or that the algorithm is overly confident. Tightness can be interpreted as an indication of how close the sub-optimal fused covariance is to the covariance produced by an optimal fusion, while still guaranteeing consistency.

The algorithms that are presented below all treat the same problem: how to fuse two estimates x̂1, x̂2 and their respective covariance matrices Cov(x̂1) = P1, Cov(x̂2) = P2 into a combined estimate x̂ and covariance matrix Cov(x̂) = P.

2.3.2.1 Safe Fusion

Safe Fusion (SF) is an algorithm that fuses two estimates by computing the covariance of the fused estimate, P, such that it corresponds to the smallest ellipsoid containing the intersection of P1 and P2 [10]. The Safe Fusion algorithm is presented in Algorithm 2.2.

Algorithm 2.2: Safe fusion [10]

Let I1 = P1^{-1} and I2 = P2^{-1}. The fused estimate of x̂1 and x̂2 is then computed as follows:

1. SVD: I1 = U1 D1 U1^T.
2. SVD: D1^{-1/2} U1^T I2 U1 D1^{-1/2} = U2 D2 U2^T.
3. Transformation matrix: T = U2^T D1^{1/2} U1^T.
4. State transformation: x̄1 = T x̂1 and x̄2 = T x̂2. The covariances of these are Cov(x̄1) = I and Cov(x̄2) = D2^{-1}, respectively.
5. For each component i = 1, 2, . . . , n_x, let

   x̄^i = x̄1^i if D2,ii < 1, and x̄^i = x̄2^i if D2,ii ≥ 1,   (2.10a)
   D_ii = 1 if D2,ii < 1, and D_ii = D2,ii if D2,ii ≥ 1,   (2.10b)

   where D is a diagonal matrix.
6. Inverse state transformation:

   x̂ = T^{-1} x̄,   (2.11a)
   P = T^{-1} D^{-1} T^{-T}.   (2.11b)

As described in [10] the algorithm can be interpreted geometrically as transforming the estimate from one sensor, (x̂1, P1), from an ellipsoid to a unit circle in the first step by decorrelating the different directions. In steps two to four, the coordinate system is rotated so that the estimate from the second sensor, (x̂2, P2), becomes aligned with the axes, without affecting the circle. The fifth step checks whether each semi-axis of the ellipsoid for (x̂2, P2) is less than one or not. The algorithm is illustrated in Figure 2.6, where the ellipsoid with thick lines represents the covariance matrix of the fused estimate.
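The six steps of Algorithm 2.2 can be sketched in Python/NumPy as below. A symmetric eigendecomposition is used in place of the SVD (equivalent for the symmetric positive definite matrices involved); this is an illustration under that assumption, not the implementation used in the thesis.

```python
import numpy as np

def safe_fusion(x1, P1, x2, P2):
    """Fuse (x1, P1) and (x2, P2) with Safe Fusion (Algorithm 2.2)."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    d1, U1 = np.linalg.eigh(I1)                  # step 1: I1 = U1 D1 U1^T
    S = U1 @ np.diag(d1 ** -0.5)                 # columns scaled by D1^{-1/2}
    d2, U2 = np.linalg.eigh(S.T @ I2 @ S)        # step 2
    T = U2.T @ np.diag(d1 ** 0.5) @ U1.T         # step 3
    xb1, xb2 = T @ x1, T @ x2                    # step 4: covariances I, D2^{-1}
    xb = np.where(d2 < 1, xb1, xb2)              # step 5: pick per component
    D = np.maximum(d2, 1.0)
    Tinv = np.linalg.inv(T)
    return Tinv @ xb, Tinv @ np.diag(1.0 / D) @ Tinv.T  # step 6

# Symmetric toy example: the fused covariance is the unit circle,
# the smallest ellipsoid containing the intersection of the two ellipses.
x, P = safe_fusion(np.zeros(2), np.diag([1.0, 4.0]),
                   np.zeros(2), np.diag([4.0, 1.0]))
```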

2.3.2.2 Covariance Intersection

As explained in [15], Covariance Intersection (CI) was specifically designed to deal with estimate fusion when the KF is insufficient. The KF requires at least a decent estimate of the cross-correlations, which in this case are completely unknown. Compared to SF, this algorithm fuses the two estimates by finding the smallest ellipsoid that passes through the intersections of P1 and P2 [10], as illustrated in Figure 2.7. The resulting algorithm is outlined in Algorithm 2.3.

Figure 2.6: Illustration of how Safe Fusion fuses two estimates. [22]

Algorithm 2.3: Covariance Intersection [15]

Combine the estimates (x̂1, P1) and (x̂2, P2) according to:

P^{-1} = ω P1^{-1} + (1 − ω) P2^{-1}
P^{-1} x̂ = ω P1^{-1} x̂1 + (1 − ω) P2^{-1} x̂2,   (2.12)

where ω is chosen according to

ω = argmin_ω J(P), 0 ≤ ω ≤ 1,   (2.13)

where J(P) is a function of P that is minimised and can be chosen in several different ways.

As stated in the algorithm, the function J(P) can be selected in many ways. In this thesis, the trace of P is minimised, which is one of the alternatives presented in [15]. Figure 2.7 illustrates how the fused estimate compares to x̂1, x̂2 and their respective covariance matrices P1 and P2. The covariance matrix P of the fused estimate x̂ is illustrated with the thick ellipsoid.
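Algorithm 2.3 with J(P) = tr(P) can be sketched as below. The thesis does not specify the optimiser, so a simple grid search over ω is an assumption made for illustration.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse (x1, P1) and (x2, P2) with CI (2.12), minimising tr(P) over omega."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        P = np.linalg.inv(w * I1 + (1 - w) * I2)   # candidate fused covariance
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)  # fused state, (2.12)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Symmetric toy example: the optimum is omega = 0.5 and P = 1.6 * I.
x, P = covariance_intersection(np.zeros(2), np.diag([1.0, 4.0]),
                               np.zeros(2), np.diag([4.0, 1.0]))
```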

2.3.2.3 Inverse Covariance Intersection

One of the most recent methods for dealing with fusion of locally computed estimates is Inverse Covariance Intersection (ICI). In [20], examples are given of when CI is not tight enough and SF is not consistent, so ICI was introduced to meet the requirements of both tightness and consistency.

Figure 2.7: Illustration of the result when Covariance Intersection fuses two estimates. Figure from [9].

Algorithm 2.4: Inverse Covariance Intersection [20]

Combine the estimates (x̂1, P1) and (x̂2, P2) according to:

x̂ = K x̂1 + L x̂2
P^{-1} = P1^{-1} + P2^{-1} − (ω P1 + (1 − ω) P2)^{-1}, ω ∈ [0, 1]   (2.14)

where the gains K and L are given by

K = P (P1^{-1} − ω (ω P1 + (1 − ω) P2)^{-1})
L = P (P2^{-1} − (1 − ω) (ω P1 + (1 − ω) P2)^{-1})   (2.15)

and ω is chosen according to

ω = argmin_ω J(P), 0 ≤ ω ≤ 1,   (2.16)

where J(P) is a function of P that is minimised and can be chosen in several different ways.

As stated in the algorithm, the function J(P) can be selected in many ways. In this thesis, the trace of P is minimised, which is one of the alternatives presented in [20]. Compared to SF and CI there is no intuitive way to geometrically describe the steps taken in ICI. However, the algorithm starts by assuming that the estimates are completely uncorrelated, i.e., P^{-1} = P1^{-1} + P2^{-1}. It then compensates for this by removing the common information, which is estimated by finding an optimal value for ω in (ω P1 + (1 − ω) P2)^{-1}. This results in an estimate that is more conservative than SF and tighter than CI, as proven in [20]. The algorithm has also been proven to produce consistent fusion results for problems where common process noise is present, as discussed in [21]. The result of a fusion is illustrated in Figure 2.8, where it can be seen that ICI is tighter than CI, since it has a smaller uncertainty, and more conservative than SF, since it has a larger uncertainty.


Figure 2.8: Illustration of how Inverse Covariance Intersection fuses two estimates compared to Safe Fusion and Covariance Intersection.
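The ICI rule (2.14)–(2.15) with J(P) = tr(P) can be sketched analogously to the CI case; again, a grid search over ω stands in for the unspecified optimiser. On a symmetric toy example the fused covariance lands between the SF and CI results, matching the ordering discussed above.

```python
import numpy as np

def inverse_covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse (x1, P1) and (x2, P2) with ICI (2.14)-(2.15), minimising tr(P)."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        M = np.linalg.inv(w * P1 + (1 - w) * P2)  # estimated common information
        P = np.linalg.inv(I1 + I2 - M)            # fused covariance, (2.14)
        if best is None or np.trace(P) < best[0]:
            K = P @ (I1 - w * M)                  # gains, (2.15)
            L = P @ (I2 - (1 - w) * M)
            best = (np.trace(P), K @ x1 + L @ x2, P)
    return best[1], best[2]

# Same symmetric toy example as for SF and CI: tr(P) lies between
# the SF result (2.0) and the CI result (3.2).
x, P = inverse_covariance_intersection(np.zeros(2), np.diag([1.0, 4.0]),
                                       np.zeros(2), np.diag([4.0, 1.0]))
```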

2.4 Sensor Networks

This section describes how sensor networks can be constructed and some limitations that exist in real-world systems. It starts by presenting the network topologies that are tested, followed by a description of possible limitations on the sensor network.

2.4.1 Sensor Network Topologies

The topology of a sensor network is a description of how the sensors in the network are connected to each other, how they talk to each other, and whether the communication is one-way or two-way. There are many different ways this can be done, as can be seen in the mathematical field of graph theory. The most straightforward approach is to let all sensors be connected to each other, which is likely to yield the best result. This topology is called full connection, i.e., all information is available to all sensors. However, this is often impossible due to restrictions such as limited available hardware or restrictions regarding how much data can be sent on the channel. This section gives a thorough description of the two types of tree topologies that are studied in this thesis, rooted trees and k-trees. This is followed by a comparison of the number of links each topology requires.


2.4.1.1 Rooted Trees

In [24] tree topologies are described as connected acyclic graphs, meaning that the nodes are connected such that no cycles occur (acyclic) and that any node can reach any other node in the graph (connected). There are several ways a tree can be constructed; the method used in this report is called rooted trees. As explained in [24], a rooted tree is a tree topology where one of the nodes is distinguished from the others. This node is called the root of the tree. Both trees and rooted trees are singly connected networks, since there is only one path between each pair of nodes.

This thesis considers two approaches to constructing a rooted tree, width first or depth first. These two approaches are described in Algorithms 2.5 and 2.6, respectively.

Algorithm 2.5: Constructing a tree with width prioritised.

1. Set the current node to the root node, set the depth to 1 and jump to step 2.
2. If there are unconnected nodes: jump to step 3. Otherwise jump to step 6.
3. If the current node does not have the maximum number of children: find the geographically closest one and connect it, and jump to step 2. Otherwise jump to step 4.
4. If there are nodes at the current depth without child nodes: set the closest one to the current node and jump to step 3. Otherwise jump to step 5.
5. Set the closest child to the current node, increase the depth by 1 and jump to step 2.
6. The tree is finished.

Algorithm 2.6: Constructing a tree with depth prioritised.

1. Set the current node to the root node, set the depth to 1 and jump to step 2.
2. If there are unconnected nodes: jump to step 3. Otherwise jump to step 6.
3. If the current node is not located at maximum depth: jump to step 4. Otherwise jump to step 5.
4. If the current node does not have the maximum number of children: find the geographically closest one and connect it, then set that node as the current node, increase the depth by 1 and jump to step 2. Otherwise jump to step 5.
5. Set the current node's parent to the current node, decrease the depth by 1 and jump to step 2.
6. The tree is finished.
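The width-prioritised construction can be sketched compactly as below. The node positions, the dictionary-based representation and the distance-based choice of "closest" are illustrative assumptions; the sketch fills each depth level before descending, like Algorithm 2.5.

```python
import math

def build_width_first(positions, root=0, max_children=3):
    """Connect nodes into a rooted tree, filling each depth level before
    descending (a sketch of Algorithm 2.5). positions: {node: (x, y)}."""
    def dist(a, b):
        return math.dist(positions[a], positions[b])

    unconnected = set(positions) - {root}
    edges, current_level = [], [root]
    while unconnected:
        next_level = []
        for parent in current_level:
            # Attach the geographically closest unconnected nodes as children.
            while unconnected and sum(1 for p, _ in edges if p == parent) < max_children:
                child = min(unconnected, key=lambda n: dist(parent, n))
                edges.append((parent, child))
                unconnected.discard(child)
                next_level.append(child)
        current_level = next_level
    return edges
```

With four collinear nodes and max_children=3, all nodes attach directly to the root; with max_children=1, the same nodes form a chain.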

To illustrate the difference between the two algorithms an example is studied. In this example, the maximum number of child nodes and the maximum allowed depth are both three, and there are four nodes to connect. The width-first algorithm results in a tree where the root node has three child nodes, as can be seen in Figure 2.9a. If depth-first is used, the resulting tree would look like the one presented in Figure 2.9b. It consists of S1 at depth one, with S2 as a child at depth two, which in turn has S3 and S4 as child nodes at depth three.

Figure 2.9: Illustration of trees produced by Algorithms 2.5 and 2.6. (a) Width prioritised. (b) Depth prioritised.

2.4.1.2 K-trees

The following section is based on [12], where the definition and the advantages of the k-tree are presented. One of the biggest weaknesses of singly connected networks is the lack of robustness, since the failure of a single node can have a huge impact on the performance. Structuring the network according to a k-tree topology is a way around this problem. The k-tree topology contains cliques, which consist of k + 1 nodes, and each pair of adjacent cliques overlaps at a separator. A separator can be constructed from any possible combination of k nodes inside the clique. Each separator divides the network into distinct parts, and if information is to be sent between two different parts it has to go through a separator. In practical terms this means that all nodes in a separator have to be non-functional for standalone parts to appear in the network.

Figure 2.10 shows an example of a k-tree with k = 2. One clique is highlighted; it is located inside the dashed line and consists of S2, S4 and S5. In this clique, S2 and S5 form a separator, and if information is to be sent from S3 or S7 to S1, S4 or S6 it has to go through S2 or S5.

The reason why k-tree topologies are used is to improve redundancy and dynamism, but also to maintain scalability and correctness [12]. For the decentralised architecture it is highly valuable to have high scalability and redundancy, as the communication often is a bottleneck. The k-tree topology is one of the simplest topologies that has a low complexity while still guaranteeing relatively high scalability [12].


Figure 2.10: Example of a 2-tree topology; the nodes inside the dashed line make up a clique.

2.4.2 Number of Links

The amount of information that is sent over the sensor network is directly affected by how many links the network consists of. In [12] a list with the number of links for different tree topologies is presented. This list is given below and shows the number of links for the treated communication topologies, where N is the number of sensors.

• Rooted trees, number of links: N − 1
• k-tree, number of links: kN − ½(k² + k)
• Full connection, number of links: ½(N² − N)

From the list, it is easy to see that the number of links in rooted trees and 2-trees increases linearly with N. However, for full connection the increase is polynomial, O(N²). Therefore, the total amount of information sent is much higher for full connection when N increases.
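The three expressions can be checked numerically. For example, with the eight-sensor setup used later in the thesis (N = 8) and k = 2, they give 7, 13 and 28 links, respectively.

```python
# Link counts for the three treated topologies, where N is the
# number of sensors and k is the k-tree parameter.

def links_rooted_tree(N):
    return N - 1

def links_k_tree(N, k):
    # k^2 + k is always even, so integer division is exact.
    return k * N - (k**2 + k) // 2

def links_full(N):
    # N^2 - N is always even, so integer division is exact.
    return (N**2 - N) // 2
```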

2.4.3 Communication Limitations

Almost all technical fields today use some form of communication medium, which can be wireless channels such as Wi-Fi or physical cables. One thing that can lead to a bottleneck is the amount of information needed to guarantee a satisfying result. The amount of information that is needed may be more than what the network is capable of sending and receiving, which may result in information loss [19]. This can in some cases result in a deterioration of the result and in others a completely useless outcome. In sensor networks, this is a very important aspect, as more information can, under certain conditions, give a more accurate and trustworthy result [1]. With the development of more accurate sensors, there is more available data to send, which might increase the demands on the communication. Therefore it is of high interest to use the available communication capacity to its fullest.

This thesis is based on simulation studies; no hardware setup is used as a reference. Instead, a general way to lower the amount of used bandwidth is sought. Still, it is important to keep the practical problems in mind to understand why this is important. Therefore the most common ones are discussed below.

2.4.3.1 Bandwidth

Bandwidth is, as described in [18], the bit-rate of the information sent in a network and is measured in bits per second. It should not be confused with the definition in signal processing, where it is the frequency range between the highest and lowest accessible frequency. All sensors have restrictions on how much data they can transmit and receive. If a sending sensor tries to send more data than the bandwidth of the channel allows, time delays might occur. A similar case is when a receiving sensor cannot handle the maximum bandwidth in the network, resulting in lost data.

2.4.3.2 Sensor and Information Outage

Sometimes sensors stop working. It can be temporary, such as power failures, or components that break and need to be fixed. In some cases this can take time to fix or even to notice, and it is therefore important to understand how a sensor network reacts to sensor outage. Figure 2.11 illustrates an example where five sensors are connected in a tree topology. S3 stops working, which results in three separate networks. If this is not noticed the overall quality drastically decreases, as S1 and S2 are only connected to each other while S4 and S5 work alone.

Figure 2.11: Five sensors connected in a tree topology, where S3 stops working.


Information outage, on the other hand, is something that happens all the time, especially when the communication is wireless. As described in [19], wireless links coexist in the same spectrum, which may cause interference and thereby information outage. In this thesis, information outage does not occur unless otherwise stated.


3 Simulation Study Description

As described in Section 2.2 there are two major approaches to constructing a sensor network capable of tracking targets: centralised and decentralised architectures. This chapter describes how a decentralised sensor network can be constructed, tested and evaluated.

Firstly, there is a description of how the problem is approached and which evaluation methods are used. Secondly, there is a description of how a decentralised sensor network can be constructed and which different configurations are tested.

3.1 Evaluation Approach

As described in Section 1.3 the problem considered in this thesis is how to track a target in a decentralised network with bandwidth limitations. The problem formulation has a very wide perspective, so to answer it and to reach a general conclusion a sensor setup is simulated. This sensor setup is representative of a network performing target tracking in an urban environment. It is simulated for different decentralised network structures, which are then evaluated and compared to each other. To evaluate which structure gives the best performance, four properties are studied:

• Error: how close the estimated state produced by the sensors is to the true state of the target.

• Estimated uncertainty: how uncertain the fusion algorithm believes the estimated state is.


• Error to uncertainty ratio: how large the uncertainty is in relation to the actual error of the estimate.

• Bandwidth: how much data the network transmits each second.

To construct the different decentralised network structures, three characteristics are varied:

• Network topology, how the sensors are connected to each other.

• Communication algorithm, how sensors in the network talk to each other, what they should send and how often.

• Fusion algorithm, how each sensor fuses the information their neighbours provide with their own estimate.

Each of these characteristics has several different options, and one option from each area is combined to create a decentralised network which can be simulated, evaluated and compared to other combinations. Each simulated scenario is compared to a baseline, consisting of a CKF, which produces results close to optimal. The combination that has the best performance, i.e., closest to the CKF, is then investigated further to see if the required bandwidth can be decreased. The general conclusion to the problem is then given by the combination that has the best performance while at the same time using the least amount of bandwidth.

3.2 Evaluation Methods

This section presents the methods that are used to evaluate the simulated results. Firstly, there is an explanation of the concept of Monte Carlo simulations and how it is used. This is followed by an explanation of how the different network topologies, communication algorithms and fusion algorithms are evaluated and compared to each other. After this follows a description of how the used bandwidth is calculated. Lastly, there is a description of the simulation environment.

3.2.1 Monte Carlo Simulations

When simulations are performed with stochastic variables the result differs for each simulation. The stochastic variables, in this case, are the process and measurement noise, which both are Gaussian. One way to get knowledge about the expected tracking performance is to use Monte Carlo simulations. Monte Carlo simulations are based on repeated random sampling to get numerical results [2]. If a simulation is run multiple times with one or more stochastic variables the result differs slightly each iteration based on the realisation of the noise. If the average value of all simulations is calculated the random extreme values from the noise fade out, meaning the mean of a number of observations approaches the expected value as the number of observations grows [4].


To get a result that can be trusted, all algorithms and evaluation methods are evaluated at each time instance M times, where M equals the number of Monte Carlo simulations. Then the mean over all M simulations at each time instance is calculated. An example of how this is used can be seen in (3.1).

3.2.2 Evaluation of Performance

This section presents the general methods that are used to evaluate the simulated results. The methods presented are used to compare all the different simulated configurations of decentralised networks to each other.

3.2.2.1 RMSE

The Root Mean Square Error (RMSE), defined in [2], gives a measure of how large the absolute error of the estimate is at each time instance compared to the real state. Since it uses the real state of the target it can only be used when the ground truth is known, which is the case in the simulation scenarios. To get a value that can be trusted the RMSE is calculated for each time instance k according to

RMSE_k = \sqrt{\frac{1}{M} \sum_{i=1}^{M} ||x_k − x̂_k||^2},   (3.1)

where M is the number of Monte Carlo simulations. The RMSE, calculated according to (3.1), describes how close to the true state the estimate is, in meters, making it possible to easily evaluate the accuracy of the estimated state. When the RMSE is calculated, only the estimated positions are used, not the estimated velocities.
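Computed over M Monte Carlo runs, (3.1) can be sketched as follows; the array layout (runs × time instances × position dimensions) is an illustrative assumption.

```python
import numpy as np

def rmse_per_time(x_true, x_est):
    """RMSE per time instance over M Monte Carlo runs, as in (3.1).
    x_true: (K, d) true positions; x_est: (M, K, d) estimated positions."""
    err = x_est - x_true[None, :, :]                 # (M, K, d) position errors
    # Squared norm per run and time instance, averaged over the M runs.
    return np.sqrt(np.mean(np.sum(err**2, axis=-1), axis=0))  # shape (K,)
```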

3.2.2.2 Trace of Covariance Matrix

The trace, defined in [2], is used to examine the covariance matrices of the estimates, P̂. The trace of a covariance matrix is defined as the sum of the diagonal elements of the matrix, according to

tr(P̂) = \sum_{i=1}^{n_x} P̂_{ii},   (3.2)

where P̂ is an n_x × n_x matrix and P̂_{ii} represents diagonal element i. If \sqrt{tr(P̂)} is less than the value given by the RMSE at a given time instance it is an indication that something is not right. Since the RMSE is an indication of the actual error of the estimate and \sqrt{tr(P̂)} is the estimated error calculated by the fusion algorithm, this means the algorithm has given a result that is overly confident, i.e., that the algorithm is not consistent. The relation between \sqrt{tr(P̂)} and the RMSE for a consistent algorithm is

\sqrt{tr(P̂)} ≥ RMSE,   (3.3)

where P̂ is the estimated covariance matrix calculated by the fusion algorithms or the EKF. When the trace is calculated, only the estimated positions are used, not the estimated velocities.

3.2.2.3 NEES

In [2] the Normalised Estimation Error Squared (NEES) is introduced as a method to test different filters for consistency. It is defined as

ε_k = (x_k − x̂_k)^T P̂^{-1} (x_k − x̂_k),   (3.4)

where x is the true state, x̂ is the estimated state and P̂ is the covariance matrix produced by the filter. If the filter that produces x̂ and P̂ is consistent, the test quantity should be chi-square distributed with n_x degrees of freedom, where n_x is the dimension of x. That is, the NEES should have an expected value of

E[ε(t)] = n_x.   (3.5)

By studying the NEES for different filters it is possible to determine if they are consistent. If a filter has a NEES above the expected value it is an indication that it is not consistent, e.g., an overconfident filter. Having a NEES below this value, on the other hand, indicates a filter that overestimates the uncertainty. Therefore, the NEES can be used to evaluate how the filter uses the available information, e.g., whether double counting occurs. When the NEES is calculated, only the estimated positions are used, not the estimated velocities.
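As a sketch (with assumed vector shapes), the NEES (3.4) for a single estimate can be computed as:

```python
import numpy as np

def nees(x_true, x_est, P):
    """NEES (3.4) for one estimate: error^T * inv(P) * error."""
    e = x_true - x_est
    # Solve P y = e instead of forming inv(P) explicitly.
    return float(e @ np.linalg.solve(P, e))
```

For a consistent filter, averaging this quantity over many Monte Carlo runs should approach n_x, the dimension of the state used.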

3.2.2.4 Compared Quantities

To be able to quickly get an overview of the results, two different quantities are used. They are calculated for the RMSE, the trace of the covariance matrix and the NEES for each tested scenario. The first quantity is a series of scalar values, where each value represents the mean over all sensors at one time instance. The second is a single scalar value that represents the average over all time instances and all sensors. It is calculated by taking the values produced for the first quantity and averaging them over all time instances. This produces a single scalar value, which is a representation of the desired property for each tested scenario. The quantities are referred to as average over sensors and average over sensors and time, respectively.
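With the per-sensor, per-time values stored in an array, the two quantities are simply successive means; a sketch assuming a (sensors × time instances) array of some hypothetical metric:

```python
import numpy as np

values = np.array([[1.0, 2.0, 3.0],    # hypothetical metric, sensor 1
                   [3.0, 4.0, 5.0]])   # sensor 2; columns are time instances

avg_over_sensors = values.mean(axis=0)               # one value per time instance
avg_over_sensors_and_time = avg_over_sensors.mean()  # single scalar
```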


3.2.2.5 Bandwidth Usage

As described in Section 2.4.3.1 the bandwidth is the amount of sent data per second and is measured in bits per second. In this thesis, however, the bandwidth is not measured in bits per second but rather in elements per fusion step. The number of steps is equal to the duration divided by the fuse-time, where the fuse-time decides how often a fusion occurs and is defined in Section 3.3.3.2. To compare the used bandwidth for each tested scenario, the total number of sent elements is counted and then divided by the total number of steps.

The reason why the total number of steps is used is to achieve a measurement that is independent of the frequency, i.e., the fuse-time, that the communication algorithm is operating with. The value can easily be converted to elements per second if the frequency and duration of the simulation are known. This value, sent elements per fusion step, is then compared between the tested scenarios, and the scenario with the least number of elements per step uses the least amount of bandwidth. The elements consist of the estimates and their accompanying covariance matrices, and in this thesis one element is made up of at most 64 bits. This makes it possible to approximate one element as 64 bits.
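The bookkeeping described above amounts to a couple of divisions; a sketch with illustrative numbers:

```python
def elements_per_fusion_step(total_elements, duration, fuse_time):
    """Average number of sent elements per fusion step."""
    steps = duration / fuse_time          # total number of fusion steps
    return total_elements / steps

def to_bits_per_second(elements_per_step, fuse_time, bits_per_element=64):
    """Convert to bits per second, using 64 bits per element as in the text."""
    return elements_per_step * bits_per_element / fuse_time
```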

3.2.3 Simulation Environment

As described in Section 3.1 different decentralised networks are simulated in a simulation environment. The simulation environment is capable of simulating a finite number of sensors tracking one target as it progresses along a predetermined track. The simulation environment simulates one scenario at a time, where one scenario is a combination of the sensor setup, which is described in Section 3.3, and one option from each of the following categories:

• Network topologies, which are described in Section 3.3.2.
• Communication algorithms, which are described in Section 3.3.3.
• Sub-optimal fusion algorithms, which are described in Section 2.3.2.

When a simulation of a scenario is performed it is done using Monte Carlo simulations, as described in Section 3.2.1. Each simulation starts with an initialisation, during which a couple of parameters are set. The complete list of parameters is presented in Appendix B. Thereafter, a target object is created with the motion model that is presented in Section 3.3.1.1 and its trajectory is calculated. After this, each sensor simulates measurements for the whole trajectory according to the measurement model that is presented in Section 3.3.1.2, after which white Gaussian noise is added. The variance of the noise can be seen in Appendix B and is the same for all scenarios. When the measurements have been generated, the next step is for each sensor to apply its local EKF to them, according to Section 2.1.3. After this, each sensor in the network sends the estimate produced by its local EKF and performs fusions according to the chosen network topology and communication algorithm. Lastly, the result is evaluated with the evaluation methods presented in Section 3.2.2.

3.2.3.1 Baseline

The result from each simulated scenario is compared to the result of a CKF, which is presented in Section 2.2.1.1. The CKF is, under linear conditions, the optimal filter, but this does not hold in this thesis since the sensor measurement model is nonlinear. However, the CKF produces results close to optimal, which makes it suitable to use as a reference. During the simulation, the CKF is evaluated by the methods described previously.

3.3 Studied Scenario

To compare the combinations of the different characteristics presented in Section 3.1 they are all tested on the same sensor setup. The sensor setup consists of eight sensors placed in two clusters around two areas, one in the shape of a triangle and one in the shape of a pentagon. The object that is tracked can, as discussed in Section 2.1, be many different things. However, in this thesis the tracked target is a human being travelling in an urban environment. Therefore, the sensors are mounted at different heights to represent cameras mounted on houses of different sizes. The path of the target is chosen so that it passes through both clusters of sensors in a straight line one meter above ground level, and the target is visible to all sensors at all times. The whole setup can be seen in Figure 3.1, where the path of the target is the dashed line; it starts at p = (−350, 90, 1) m and ends at p = (350, 30, 1) m. The placement of the sensors is chosen based on two things.

Firstly, to get the most general result it is important that the target travels through areas where the sensor coverage is good as well as areas where it is bad. Therefore, the two clusters are placed far apart from each other and the path of the target is chosen so that it passes through both clusters. This results in good coverage when the target is within each cluster and bad coverage when it is in between them. Secondly, it is of interest to see how many sensors a cluster should contain, and therefore the two clusters contain different numbers of sensors.

This section begins with a description of how each sensor in the sensor setup operates. This is followed by a presentation of the alternatives for each of the characteristics presented in Section 3.1.

3.3.1 Sensor Node Operation

This section describes how the sensors in the sensor setup work, so that they are able to track a target. Firstly, two models are defined: the first model is a description of how the target moves and the second is a description of what the sensors are measuring. Secondly, it is explained how each individual sensor works, how it uses the two models in an ekf and how it fuses information from neighbours with a fc.

Figure 3.1: Overview of the setup used in the simulation.

3.3.1.1 Target Motion Model

As described in Section 2.1.2, to implement an ekf a model of how the state of the target changes over time is needed. There are many different ways a target can be modelled, and this thesis is based on the Constant Velocity (cv) model. The Constant Velocity model is a linear model which defines f(x_k, w_k) as

x_{k+1} = f(x_k, w_k) =
\begin{pmatrix} I_n & T_s I_n \\ 0_n & I_n \end{pmatrix} x_k +
\begin{pmatrix} 0.5 T_s^2 I_n \\ T_s I_n \end{pmatrix} w_k.   (3.6)

In the above equation the state is x_k = (p_k, v_k)^T, where p_k is the position, v_k is the velocity, I_n is the identity matrix, w_k \sim \mathcal{N}(0, Q_k) is process noise with covariance Q_k, T_s is the sampling time and n is the dimension of p_k and v_k.

The focus is to evaluate the decentralised architecture under bandwidth limitations, not how the target model affects the result. This, combined with the fact that a person walking on a street can reasonably be approximated as moving in a straight line at constant speed, makes the constant velocity model suitable. The purpose is to evaluate the network structure rather than the tracking. Therefore, the process noise is chosen low: since the target moves in a straight line, the model describes the target well.
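A minimal sketch of the cv model in equation (3.6), for n = 3 (3D position and velocity), is given below. The sampling time and the zero process noise used in the example are illustrative choices, not the thesis parameters.

```python
import numpy as np

n = 3            # dimension of position and velocity
Ts = 1.0         # assumed sampling time [s]
In = np.eye(n)
Zn = np.zeros((n, n))

# State transition and noise gain matrices from equation (3.6).
F = np.block([[In, Ts * In],
              [Zn, In]])
G = np.vstack([0.5 * Ts**2 * In,
               Ts * In])

def cv_step(x, Qk, rng=None):
    """Propagate one step: x_{k+1} = F x_k + G w_k, with w_k ~ N(0, Qk)."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.multivariate_normal(np.zeros(n), Qk)
    return F @ x + G @ w

# With zero process noise the target moves in a straight line:
x0 = np.concatenate([np.array([-350.0, 90.0, 1.0]),   # position [m]
                     np.array([1.4, -0.12, 0.0])])    # velocity [m/s]
x1 = cv_step(x0, Qk=np.zeros((n, n)))
```

Noise-free propagation adds Ts times the velocity to the position and leaves the velocity unchanged, which is exactly the straight-line, constant-speed behaviour the low process noise assumes.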
