Designing schedules and Filter for cooperative localization

DAVID AGUILAR PÉREZ

Master's Degree Project
Stockholm, Sweden, October 2012


Acknowledgements

First, I would like to thank my thesis supervisor, Satyam Dwivedi, for giving me the opportunity to carry out my master thesis in the Department of Signal Processing at the School of Electrical Engineering of the Royal Institute of Technology in Stockholm, and for his excellent guidance and advice throughout the whole period I have worked on this project. I also want to thank him for his flexibility and availability, allowing me to work on my own, even while being in Spain during the last months.

I also want to thank all the good friends who have been by my side during the development of the thesis. They have been really good workmates who helped me whenever I needed it, and an incredible support in bad times. I thank them as well for all the good times spent during this period.

And finally, I want to express my biggest thanks to my family for giving me all the necessary support to carry out the master thesis in Sweden, although this entailed several difficulties.


Contents

1 Introduction and background
  1.1 Localization and historical perspective
  1.2 Cooperative localization
  1.3 3-way ranging
  1.4 Outline of the thesis

2 Scheduled cooperative localization
  2.1 The concept
  2.2 The schedule
  2.3 Schedule generator: Generating one schedule
  2.4 Schedule generator: Generating all possible schedules
  2.5 Processing time
  2.6 Summary

3 Kalman filter with scheduling
  3.1 3-node scenario simulation
  3.2 4-node simulation and 5-node simulation
    3.2.1 4 nodes
    3.2.2 5 nodes
  3.3 Noise model and behaviour

4 QR decomposition
  4.1 Gram-Schmidt orthogonalization
    4.1.1 The concept
    4.1.2 Computational complexity of Gram-Schmidt orthogonalization
  4.2 Householder transformation
    4.2.1 The concept
    4.2.2 Computational complexity of Householder transformation
  4.3 Givens Rotations
    4.3.1 The concept
  4.4 Comparison between methods

5 Complexity of computation at every node
  5.1 The filter
    5.1.1 Statement of the problem
    5.1.2 Innovations
    5.1.3 Estimation of the state
    5.1.4 Kalman gain
    5.1.5 Predicted state error correlation matrix
    5.1.6 Initial conditions
    5.1.7 Summary
  5.2 Kalman filter complexity
  5.3 Matrix inversion in Kalman gain
    5.3.1 Forward and backward substitution
    5.3.2 Complexity of backward substitution
    5.3.3 Decomposition and inversion
      5.3.3.1 Inversion by QR decomposition
      5.3.3.2 Inversion with Cholesky
      5.3.3.3 Complexity of Cholesky
  5.4 Total complexity of the filter
  5.5 Complexity with the increasing number of nodes

6 Conclusions


List of Figures

1.1 Scenario with four anchors and four agents
1.2 Transmissions in time in a 3 node scenario
2.1 Transmissions in time in a 3 node scenario
2.2 Connections in a 4 node scenario
2.3 Comparison of the number of pulse transmissions between the two systems
2.4 Scanning tree to form schedules
3.1 Timeline of transmissions for a three node scenario with schedule 1-2-3-2-1-3
3.2 Scenario with three nodes
3.3 Movement of the three nodes
3.4 Evolution of distances in 250 s measured in node 1
3.5 Evolution of distances in 250 s
3.6 Estimated movement of the three nodes
3.7 Timeline in a four node scenario
3.8 Estimated movement of the four nodes
3.9 Estimated distances in a 4-node scenario
3.10 Timeline in a five node scenario
3.11 Estimated movement of the four nodes
3.12 Estimated distances in a 4-node scenario
3.13 Noise power against distance
3.14 Distance error against noise
3.15 Position error against noise
3.16 Scenarios with different noise power
4.1 Complexity of Gram-Schmidt orthogonalization against matrix size
4.2 Complexity of Householder transformation against matrix size
4.3 Comparison of operations in different methods
4.4 Comparison of floating point operations in different methods
4.5 Comparison of floating point operations in Gram-Schmidt orthogonalization and Householder transformation


List of Tables

2.1 Set of equations at each node
2.2 Number of required transmissions against number of nodes
2.3 Number of required transmissions against number of nodes
2.4 Algorithm to generate a schedule
2.5 Number of combinations against number of nodes
2.6 Number of schedules given the number of nodes
3.1 Set of equations at each node
4.1 Complexity of calculating a vector projection
4.2 Complexity of calculating a vector normalization
4.3 Complexity of calculating each step of Gram-Schmidt
4.4 Complexity of the total Gram-Schmidt algorithm
4.5 Order of the complexity in the Gram-Schmidt algorithm for each operation
4.6 Complexity of each step in Householder
4.7 Complexity of the total Householder algorithm
4.8 Order of the complexity of the Householder algorithm for each operation
4.9 Order of the complexity of the Householder algorithm for each operation
4.10 Order of the complexity of the Householder algorithm for each operation, M = N
4.11 Order of the complexity of the Householder algorithm for each operation, M = N
5.1 Kalman filter summary
5.2 Size of the Kalman filter variables
5.3 Complexity of the predicted state estimation
5.4 Complexity of the predicted state error correlation matrix
5.5 Complexity of the Kalman gain
5.6 Complexity of the innovations
5.7 Complexity of the measurement update state estimation
5.8 Complexity of the updated predicted state error correlation matrix
5.9 Complexity of one iteration of Kalman filter
5.10 Complexity of obtaining each element of the inverse depending on its index difference
5.11 Complexity of backward substitution
5.12 Complexity of obtaining the inverse of a matrix decomposing it with Gram-Schmidt
5.13 Complexity of obtaining the inverse of a matrix decomposing it with Householder
5.14 Complexity of obtaining the inverse of a matrix decomposing it with Givens rotations
5.15 Complexity of obtaining the diagonal elements of D
5.16 Complexity of obtaining the diagonal elements of D
5.17 Complexity of obtaining the elements of L
5.18 Complexity of obtaining the elements of L
5.19 Complexity of Cholesky
5.20 Complexity of inversion with Cholesky
5.21 Total complexity of Kalman filter
5.22 Order of complexity when M = N
5.23 Order of operations with the number of measurements


1 Introduction and background

1.1 Localization and historical perspective

For thousands of years, humans have tried to find out where they were in order to reach their destinations. Navigation and localization soon became an important issue, so important that it is now considered a science.

Localization systems can be divided into two main groups: positioning systems and dead reckoning (1).

• Positioning systems: the position is provided directly by the localization system.
• Dead reckoning: the position is calculated from a set of measurements relative to the initial position, which must be known. Quantities that can be measured in order to estimate the current position include the velocity and the acceleration.

Both kinds of systems can coexist and work together.

Brief historical introduction

The first known localization system was astronomical localization. The stars were used as static references, and by looking at them the position could be estimated. With the help of an almanac, nautical charts, a sextant, and some other tools, ships could be positioned in the middle of the ocean, or guided to their destination. These principles are still used nowadays in some navigation systems.

With the appearance of radio aids, several new positioning systems were developed. The first system that used radio waves was radio-goniometry (1). It was based on a square antenna that was capable of determining the direction of the incoming signal thanks to its radiation pattern. Its references were several beacons situated on land, at known positions. Knowing the bearing to two beacons, the position could be calculated. Later, several similar versions of this system appeared. They used rotating antennas, multiple

antennas, or some other electronic aid to improve the accuracy. Nowadays this system is used in security systems.

Figure 1.1: Scenario with four anchors and four agents

Another kind of system that appeared later was the hyperbolic system (1). LORAN (Long Range Navigation) is the best known system of this type. These systems were based on measuring the difference between the arrival times of signals from several beacons situated at known positions. By measuring this time difference for a minimum of three beacons, localization could be achieved. LORAN is still used in ship navigation, and several beacons situated at many coastal sites are still working.

The most recent localization systems are the satellite based systems. The best known is GPS (Global Positioning System), which belongs to the US. In Europe, the very similar Galileo system is being developed at this moment. Both use satellites as references. Communication is one-way only, from satellite to receiver. They are based on spherical estimation, i.e., the receiver builds a sphere of possible positions from every satellite measurement. Thus, measurements from four satellites are needed to calculate the position.

At present, there are many different localization and positioning systems, each with its own properties and applications. References are still used. Some systems use static references with known fixed positions, known as anchors, and some also use mobile references without a known fixed position, known as agents, either in addition to these anchors or without anchors at all. For example, the anchors could be the stars in astronomical navigation, the beacons in radio-goniometry or hyperbolic navigation, or the satellites in GPS or Galileo, while the agents could be the devices to localize, for example GPS receivers that communicate with one another. Figure 1.1 shows a small scenario with anchors and mobile agents that communicate in order to perform localization.

Communication between agents in order to perform positioning and localization is called cooperative localization, since the agents cooperate by sharing measurements.


1.2 Cooperative localization

The main feature of cooperative localization systems is the presence of communication between agents, not only between agents and anchors. In this communication, relevant data related to each agent is transmitted to the others in order to perform localization and positioning. There are mainly two kinds of cooperative localization systems: systems with several anchors, where localization is performed by two types of communication, anchor - agent and agent - agent, and systems without anchors, where all the components are mobile agents that communicate with each other. Cooperative localization systems have several advantages compared with standard localization systems. The main advantage is that they work well in harsh environments where it is impossible to get any external communication, for example inside a cave or a building, where it is difficult to receive a signal from outside. Another advantage is that the energy consumption is lower. As these systems are used in relatively small scenarios, there is no need to establish a connection to a distant anchor, such as a satellite or some other beacon. Communication occurs over small distances, leading to a small expenditure of energy.

Cooperative localization can be used in several applications. These systems can be used in animal tracking (2): a sensor can be attached to the animals to track where they go, how they interact, and so on. GPS can also be used for this application, but its energy consumption is too high. Another application is logistics (2): tracking the position of goods, machinery or vehicles, or tracking the storage conditions in a big store or in a network of stores. One more situation where cooperative localization systems can be used is rescue teams, firefighters, police, the army, etc. During a rescue or a military operation, it is important to know the position of every member of the team, so that if anything happens, the whole team will quickly know.

The cooperative localization process can be divided into two phases (3), (4): the measurement phase and the location update phase. During the measurement phase, each agent obtains its own measurements and receives measurements from other nodes; these are called intra-node and inter-node measurements. Once the agent has obtained the corresponding measurements, the update phase begins. The update phase consists of running an algorithm to process the measured data. The accuracy obtained when processing the data depends on the quality of the measurements, the clock drift, and the algorithm.

In the case of localization, the position of each agent must be known. A way to get this exact position is to measure the distance between each specific node and the others, and also to measure the direction from which signals from other nodes arrive. The following part shows some methods to obtain these measurements.

Measuring the distance: Received signal strength (RSS)

The power of the received signal at each agent depends on the transmitted power and on the distance between the two agents, d. Signal power decays quadratically with the distance. By measuring the signal power at the receiver and comparing it with the reference power, the distance can be calculated (3; 4). Equation 1.1 is applied to calculate that distance (2):

P(d) = P0 − 10 np log(d / d0)     (1.1)

where np is the path-loss exponent and d0 is the reference distance at which P0 has been measured.

This technique is relatively simple, but it is not accurate: any variation in the environment significantly affects the path loss and hence the signal strength. Another aspect that affects the signal strength is the battery of the transmitter. If the transmitter's battery is low, the reference signal power will be lower than expected, and this will influence the received signal power. To correct these inaccuracies, additional information must be exchanged between agents so that each node can correct its measurement.
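As a concrete illustration, equation 1.1 can be inverted to estimate the distance from a received-power reading. The sketch below assumes the logarithm in equation 1.1 is base 10 and that powers are expressed in dBm; all numeric values are hypothetical.

```python
def rss_distance(p_rx, p0, n_p=2.0, d0=1.0):
    # Invert equation 1.1, P(d) = P0 - 10*np*log10(d/d0):
    #   d = d0 * 10^((P0 - P(d)) / (10*np))
    return d0 * 10 ** ((p0 - p_rx) / (10 * n_p))

# With path-loss exponent np = 2, receiving 20 dB below the reference
# power corresponds to 10 times the reference distance:
print(rss_distance(p_rx=-60.0, p0=-40.0, n_p=2.0, d0=1.0))  # -> 10.0
```

The free-space value np = 2 is only a default; in practice the path-loss exponent has to be calibrated for the environment, which is precisely the fragility described above.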

Measuring the distance: Time of arrival (TOA)

TOA is a more accurate technique than RSS for calculating the distance between agents (2; 3; 4). It is based on measuring the time signals take to travel from one node to another. The methodology is the following. A node broadcasts a signal. The other nodes receive that signal and each retransmits an answer to the first node, which has been waiting for these answers. With an internal timer, it measures the time between the transmission of the first signal and the reception of each answer. Then, using equation 1.2, that node can calculate the distance at which each of the other nodes is situated.

t = 2d/c + D     (1.2)

where t is the time the node measures, d is the distance between the nodes, D is the delay due to processing at the remote node, and c is the speed of light.

This technique has the problem of multipath. Usually the first received answer is the one that travels along the direct path between the nodes, but if the direct path is obstructed, a reflected signal may be the first one detected and the measurement will not be correct. Wideband signals can be used (2; 3; 4): they have better temporal resolution, so the line-of-sight component can be separated from the other contributions. Despite this, it is very difficult to deal with early-arriving multipath signals.

There is a variant of this technique called time difference of arrival (TDOA). It is based on the same principle, but instead of measuring the time between the node itself and the others, it measures the time difference between signals arriving from the other nodes, and distances are then calculated from those differences.


Figure 1.2: Transmissions in time in a 3 node scenario

Measuring the angle: Angle of arrival (AOA)

Knowing the range is not enough to have complete knowledge of the positions of the agents. With a known range d, the agent can only be placed somewhere on a circle of radius d centered at the position of the node. Measuring the angle of arrival of signals from other nodes allows the agent to determine the exact position on that circle. Letting the receiver be an array of m sensors, the node becomes capable of measuring the angle of arrival. The principle is similar to the human auditory system, which is an array of two sensors: with those two sensors, a person can tell the angle of arrival of every sound. The same principle can be applied to this problem to measure the AOA.

A brief introduction to cooperative localization has been given above. The next section presents the state of the art of the system that is treated in this thesis.

1.3 3-way ranging

Suppose that a scenario contains two nodes. When the system begins to work, node 1 transmits a pulse. After receiving the pulse from node 1, node 2 responds with a pulse after a known delay. Node 1 measures the time difference between its transmission and the moment it receives the response of node 2. When node 1 transmits again, responding to the response of node 2, the time of flight can be estimated at node 2 as well. At this moment both nodes know the range, after the transmission of three pulses; the name 3-way ranging comes from this fact. Figure 1.2 shows the 3-way ranging process between two nodes graphically.


D is the processing delay, t12 is the traveling time of the signal between the nodes, and RTT is the round trip time: the time from the transmission of the pulse until the reception of the answer at each node.

This can be extended to scenarios with a higher number of nodes. The methodology is to repeat the two-node process for every link between nodes. In a three node scenario, the process must be done three times, since three is the number of interconnections. With four nodes, the number rises to six, and so on. The number of times that 3-way ranging must be done for n nodes can be calculated with equation 1.3:

c_n^2 = n! / (2! (n − 2)!) = n(n − 1) / 2     (1.3)

Hence, as three pulse transmissions per interconnection are required to have complete knowledge, the number of necessary pulse transmissions is:

Ntx = 3 c_n^2 = (3/2) n(n − 1)     (1.4)

With this 3-way ranging technique, only the range is calculated. To have exact knowledge of the positions of the other devices in the scenario, some extra feature must be included. The angle of arrival technique can be implemented by attaching an array of receivers to the devices, so that every node knows the angle from which the pulse transmitted by node i arrives. Knowing the angle of arrival and the range, complete knowledge of the position is obtained. Another way to achieve this is to apply some signal processing and track the positions of the nodes.

A way to find all ranges in the network is proposed in (5). A few aspects of this method will be discussed later in the thesis.

1.4 Outline of the thesis

The next chapter deals with the suggested scheduling improvement. An algorithm based on several compulsory constraints is proposed in order to generate the required schedule depending on the size of the network. A first algorithm generates one schedule adapted to a particular network; starting from it, a second one is developed to obtain all the possible schedules for each network size. The following chapter deals with the signal processing needed at each node: a Kalman filter adapted to the scheduled approach is implemented at each device in the network. Simulations have been done with three, four and five nodes, and the corresponding results are shown in several graphs. The next chapter deals with the QR decomposition of matrices and its complexity. The results obtained there are used in the last part of the thesis, which deals with the complexity of the Kalman filter at each node. At the end, an expression for the complexity as a function of the network size is obtained.


2 Scheduled cooperative localization

2.1 The concept

In the previous chapter, the basic idea of localization based on three pulse transmissions per pair of nodes was introduced. This system works well both outdoors and indoors, and simplicity is its main feature. This simplicity, however, leaves room for improvement in other aspects. In 3-way ranging, each node knows the ranges between itself and the others, but remains completely unaware of the remaining ranges between other pairs of nodes. This can be improved so that every node has complete knowledge of all ranges in the network. Another aspect to improve is the reduction of wasted energy by reducing the number of pulse emissions. To address these two issues, a fixed schedule of pulse transmissions is proposed. With this scheduled pulse transmission plus additional signal processing at each node, a new positioning system can be developed. As the devices must have a signal processing unit, the simplicity of the system is reduced, but this disadvantage is minor compared with the advantages exposed above.

Figure 2.1 shows a basic timeline of a scenario composed of three nodes. The schedule used in this figure is 1 − 2 − 3 − 2 − 1 − 3, and this pattern is repeated every time. With this approach, what the nodes measure is not the traveling time of the pulse transmissions between nodes, but the time difference between the pulses received from different nodes. This time difference is shown in figure 2.1 as Tij, where i is the number of the node and j is the number of the measurement within the pattern; in this example, i, j = 1, 2, 3. Another aspect to be taken into account is the processing time D at each node, since it has to be included in the calculations in order to avoid wrong results. From now on, the configuration in figure 2.1 is used to explain the system.

From the measurements in this scheduled version, several equations are obtained, having the signal traveling times as unknowns. Table 2.1 gathers the equations corresponding to each node in this example; they can be deduced by observing figure 2.1. The unknown values tij correspond to the traveling time of the transmitted pulse


Figure 2.1: Transmissions in time in a 3 node scenario

between nodes i and j, and the value D corresponds to the processing time at each node.

Node 1:  T11 = 2t12 + D,             T12 = t13 + t23 − t12 + D,   T13 = t12 + t23 − t13 + D
Node 2:  T21 = 2t23 + 2D,            T22 = 2t12 + 2D,             T23 = t13 + t23 − t12 + D
Node 3:  T31 = t12 + t23 − t13 + D,  T32 = 2t23 + 2D,             T33 = t12 + t13 − t23 + D

Table 2.1: Set of equations at each node

This compilation of equations can be rewritten in matrix form:

Tm = Cm t + Dm     (2.1)

where Tm is the measurement vector at node m, Cm is the measurement matrix related to node m, Dm is the processing-time vector at node m, and t = [t12, t13, t23]^T holds the unknown range values. Hence, at node 1:

T1 = [T11, T12, T13]^T,   C1 = [  2   0   0 ;  −1   1   1 ;   1  −1   1 ],   D1 = [1, 1, 1]^T D     (2.2)

At node 2:

T2 = [T21, T22, T23]^T,   C2 = [  0   0   2 ;   2   0   0 ;  −1   1   1 ],   D2 = [2, 2, 1]^T D     (2.3)

At node 3:

T3 = [T31, T32, T33]^T,   C3 = [  1  −1   1 ;   0   0   2 ;   1   1  −1 ],   D3 = [1, 2, 1]^T D     (2.4)
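To make the matrix form concrete, the sketch below simulates node 1's measurements from assumed traveling times and then recovers them by solving equation 2.1. The processing time and range values are hypothetical, chosen only for illustration.

```python
import numpy as np

# Node 1's measurement matrix and processing-time vector (equation 2.2)
C1 = np.array([[ 2.0,  0.0, 0.0],
               [-1.0,  1.0, 1.0],
               [ 1.0, -1.0, 1.0]])
D1 = np.array([1.0, 1.0, 1.0])

D = 5e-6                                # processing time (assumed)
t_true = np.array([3e-7, 4e-7, 5e-7])   # t12, t13, t23 (assumed)

# Forward model (equation 2.1): T1 = C1 t + D1 D
T1 = C1 @ t_true + D1 * D

# Node 1 recovers the traveling times by solving the 3x3 linear system
t_est = np.linalg.solve(C1, T1 - D1 * D)
print(np.allclose(t_est, t_true))  # -> True
```

The recovery works because C1 is invertible; designing a schedule that keeps every node's matrix invertible is exactly the independence requirement discussed in the next section.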

With this set of equations, the system is defined for three nodes. The concept can be extended to any number m of nodes: as the number of nodes rises, so does the length of the scheduling pattern and hence the number of measurements and equations. Now the concept is almost defined. To complete the definition, it is necessary to know how to build a good schedule and, after that, to calculate the minimum number of transmissions in the scheduling pattern and check whether this number is lower than in 3-way ranging.

2.2 The schedule

Designing the schedule is an important aspect of this system, since not all schedules are valid. As can be seen from the equations in table 2.1, a linear system is obtained at each node, so the schedule must be designed to yield a consistent, independent system of equations.

Another aspect to be taken into account is the number of pulse transmissions per node in each scheduling pattern. These pulse transmissions must be uniformly distributed among all the nodes in the system. The reason is that if node i transmits more often than the other nodes, node i will get fewer measurements, since the more transmissions a node makes, the fewer receptions it gets compared to the other nodes. This leads to a consistent dependent system at node i. The opposite happens if its number of transmissions is lower than the others': node i will then get more measurements than the other nodes, and hence several linearly dependent equations.

So, the aim is to get a schedule with the number of transmissions uniformly distributed among nodes, and without any linear dependencies between the equations obtained at each node.

Avoiding repetition of pairs

First, it is useful to know how the equations at each node are generated. An equation is generated every time a pulse arrives at the receiver of the node. For example, taking the previous schedule 1 − 2 − 3 − 2 − 1 − 3, node 1 generates equations


by measuring the difference between the arrivals of the pulses from the pairs 1−2, 2−3, 3−2, and 2−3 with a transmission in the middle. Note that when counting the number of equations at node i, node i is transparent, because transmissions do not generate equations. In this example 1 − 2 − 3 − 2 − 1 − 3, the final pair 2 ... 3 (with node 1's own transmission in between) generates an equation: the difference between the arrival of the pulse of node 2 and that of node 3.

It can be seen that in this example pattern there is no repetition in the order in which nodes transmit; that is, there is no repeated sub-pattern. If any sub-pattern appears in the scheduling pattern, the generated equations will not be sufficient, because some equation will be duplicated and linear dependencies will appear. Let us look at an example. Suppose the pattern is 1 − 2 − 3 − 2 − 3 − 1. One can see that the sub-pattern 2 − 3 is repeated. Observing the generated equations at node 2:

T21 = 2t23 + 2D
T22 = 2t23 + 2D
T23 = 2t12 + D

One can see that the first two equations are equal. This happens because both are produced by the same transmission pattern. The conclusion is that if a pair is repeated in the schedule, equations are duplicated and linear dependencies appear; therefore, a correct schedule must not contain any repeated pair.
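This no-repeated-pair rule is easy to check mechanically. A minimal sketch, using the two example schedules from the text:

```python
def has_repeated_pair(schedule):
    # A schedule is invalid if any ordered pair of consecutive
    # transmitters occurs twice: the duplicated pair would generate
    # two identical equations, i.e. a linear dependency.
    pairs = list(zip(schedule, schedule[1:]))
    return len(set(pairs)) != len(pairs)

print(has_repeated_pair([1, 2, 3, 2, 1, 3]))  # -> False (valid schedule)
print(has_repeated_pair([1, 2, 3, 2, 3, 1]))  # -> True  (2-3 is repeated)
```

Only pairs inside one pass of the pattern are checked, since, as stated above, a pair may recur once the schedule ends and the whole pattern begins again.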

After this subsection, what is clear is that a correct scheduling pattern must not contain any repeated pair, and that the number of transmissions must be uniformly distributed among the nodes. The only remaining issue for an optimal schedule is to find the minimum number of transmissions: the minimum schedule length needed to obtain all the measurements.

Minimum number of transmissions

Before addressing the calculation of the minimum number of transmissions, it is necessary to know how many unknown ranging values tij are required. The number of ranging values equals the number of interconnections between nodes, which can be calculated with equation 2.5.

Nint = c_n^2 = n(n − 1) / 2     (2.5)

Figure 2.2 shows a 4 node scenario with its corresponding 6 interconnections.


Figure 2.2: Connections in a 4 node scenario

Counting the number of equations by the method exposed in the subsection above, treating as transparent the node where the equations are being counted, the values in table 2.2 are obtained. The table also includes a comparison with the 3-way ranging system; one can see a considerable reduction in the number of transmissions.

Number of nodes | 3-way ranging | Scheduled approach
       2        |       3       |         3
       3        |       9       |         6
       4        |      18       |         9
       5        |      30       |        14
       6        |      45       |        19
       7        |      63       |        26
       8        |      84       |        33
       9        |     108       |        42
       n        |   3 c_n^2     |  c_n^2 + ⌈n/2⌉ + 1

Table 2.2: Number of required transmissions against number of nodes

So, the minimum number of required transmissions is:

Ntx = c_n^2 + ⌈n/2⌉ + 1     (2.6)
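As a sanity check, equations 1.4 and 2.6 can be evaluated side by side; the sketch below reproduces the rows of table 2.2.

```python
from math import comb, ceil

def three_way_tx(n):
    # Equation 1.4: three pulses per pair, Ntx = 3 * C(n,2)
    return 3 * comb(n, 2)

def scheduled_tx(n):
    # Equation 2.6: Ntx = C(n,2) + ceil(n/2) + 1
    return comb(n, 2) + ceil(n / 2) + 1

for n in range(2, 10):
    print(n, three_way_tx(n), scheduled_tx(n))
# e.g. n = 4 gives 18 vs 9, and n = 9 gives 108 vs 42, as in table 2.2
```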

Figure 2.3 shows a comparison between the two systems. It can be seen how the difference in the number of transmissions grows quadratically with the number of nodes in the scenario: the more nodes in the scenario, the larger the relative energy saving.


Figure 2.3: Comparison of the number of pulse transmissions between the two systems

2.3 Schedule generator: Generating one schedule

Now that the features of a good scheduling pattern are known, the next step is to design a schedule generator. If the number of nodes is small, generating the schedule manually may be faster and simpler than using a generator, but if the number of nodes is considerably large, manual generation is not adequate. In this section, an algorithm to generate a large scheduling pattern is proposed. Three aspects must be taken into account: the minimum number of transmissions; the avoidance of linear dependencies, that is, of sub-patterns inside the main schedule; and fairness in the distribution of the pulse transmissions among the nodes.

Part 1: Minimum number of transmissions

The first thing to calculate is the minimum number of pulse transmissions for a given number of nodes, which is given by equation 2.6. The algorithm keeps adding transmissions until it reaches N_tx.

Part 2: Avoid linear dependencies

As explained above, no repetitions in the order of transmission may appear within the scheduling pattern: if node j transmits just after node i, node j cannot transmit after node i again until the schedule ends and the whole pattern restarts. One way to enforce this is to define an auxiliary n × n matrix A (the 'checkmatrix'), where n is the number of nodes, row i denotes the node that transmitted previously, and column j denotes the node to transmit at present time. Each entry a_ij is either 0, if the transmission is possible, or 1, if it is not possible because it would generate dependencies. For example, in an n = 4 scenario where node 2 has just transmitted,

(23)

the checkmatrix at present time is: A=       1 0 1 0 1 1 0 1 0 1 1 0 1 1 0 1      

Only node 3 is available to transmit, since a23 = 0 while a21 = a22 = a24 = 1. That means the sub-patterns 2 − 1 and 2 − 4 have already appeared in the schedule. The initial checkmatrix equals the identity, and every time a pulse is transmitted the checkmatrix is updated by storing a 1 in the corresponding a_ij. The diagonal values are initially 1 because the case of a node transmitting twice in a row is not considered, since it does not generate independent equations.

Part 3: Fairness in transmissions

Fairness in transmissions can be achieved by selecting, among the available nodes, the one with the fewest transmissions so far. The number of 1s in the column associated to each node in the checkmatrix (each recorded transmission i → j stores a 1 in column j) shows how many times that node has transmitted a pulse before. The idea is that, if there is more than one node available to transmit, the one with the fewest 1s in its associated column is selected; thus, every node will transmit roughly the same number of times. If after this restriction there is still more than one candidate, the node can be selected randomly, since there is no other requirement.

The algorithm

The first part of the algorithm is the initialization. As said before, the checkmatrix is initialized as an n × n identity matrix, and the sequence is initialized with the first transmission. It is recommended to set node 1 as the first transmitter, although any node could be used. At this point, knowing the minimum number of transmissions and having initialized all the elements, the algorithm can be started. It must be run N_tx − 1 times (the first transmission has already been set), and every run adds a new element to the pattern. The first step of each iteration is to find every node able to transmit at this time, by looking at the positions of the zeros in the row associated to the node that transmitted previously. These positions (nodes) are stored as candidate transmitters. If there is only one candidate, it is selected, the checkmatrix is updated, and nothing more is needed. If there is more than one candidate, the node with the fewest previous transmissions must be selected, in order to maintain fairness in transmissions. The


number of transmissions that each node has made can be calculated by summing the 1s in the column of the checkmatrix associated to that node. One convenient way to do this is to build a sub-matrix of the checkmatrix (the 'checksubmatrix') containing the columns of the candidate nodes. Once this checksubmatrix is constructed, the columns (nodes) with the most zeros (fewest 1s, fewest transmissions) are kept. If only one node remains, it is selected, added to the pattern, and the checkmatrix is updated. If more than one node remains, one of them is selected at random, added to the pattern, and the checkmatrix is updated as well. At the end of each iteration, the selected node is set as the last transmitting node for the next iteration.

Table 2.3 shows several schedules obtained with this algorithm for different numbers of nodes:

Nodes   Transmissions   Schedule
  3          6          1 2 3 2 1 3
  4          9          1 3 4 2 1 2 4 3 2
  5         14          1 2 1 3 4 5 1 4 2 3 5 2 4 1
  6         19          1 2 1 3 4 5 6 1 4 2 3 5 1 6 2 4 3 6 5
  7         26          1 2 1 3 4 5 6 7 1 4 2 3 5 7 6 1 5 2 4 3 6 2 7 3 1 6

Table 2.3: Schedules generated by the algorithm for different numbers of nodes

Table 2.4 shows a summary of the algorithm.

Algorithm to generate a schedule

1. Calculate the length of the schedule.
2. Initialize the checkmatrix as the identity and the schedule as [1].
3. Search the checkmatrix for possible nodes to transmit.
4. Choose the node with the fewest previous transmissions, i.e. the column with the fewest 1s.
5. Check that the selected row is not an all-ones vector; go back to step 4 if it is.
6. Add the node to the sequence.
7. Update the checkmatrix.
8. Set the selected node as the last transmitting node and go to step 3, until the length of the schedule matches step 1.

Table 2.4: Algorithm to generate a schedule


G = [g_ij]_{N×N};  g_ij ∈ {0, 1}

Algorithm 1: Schedule Construction
1: Initialize: n = 1, m = 1, N_tx = 1, g_11 = 1, T = {1}
2: while N_tx ≤ P do
3:     s_j = Σ_i g_ij,  j = 1, 2, 3, ..., N;  j ≠ m
4:     m = arg min{s_j}
5:     g_nm = 1,  T = {T, m},  N_tx = N_tx + 1,  n = m
6: end while

where G is the checkmatrix, T is the sequence of the schedule, N_tx is the number of transmissions so far, and P is the total length of the schedule, calculated by equation 2.6.
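A possible implementation of this generator is sketched below in Python (the thesis implementation is in MATLAB; here a restart-on-dead-end loop plays the role of step 5 of table 2.4, and all names are illustrative):

```python
import math
import random

def schedule_length(n):
    """Minimum number of transmissions, eq. (2.6)."""
    return math.comb(n, 2) + math.ceil(n / 2) + 1

def generate_schedule(n, max_tries=20000):
    """Greedy schedule generator using a checkmatrix and the fairness rule.

    check[i][j] == 1 means the ordered pair i -> j has already been used
    (the diagonal is 1 so a node never transmits twice in a row).
    Dead ends are handled by restarting with a fresh random draw."""
    P = schedule_length(n)
    rng = random.Random(0)
    for _ in range(max_tries):
        check = [[int(i == j) for j in range(n)] for i in range(n)]
        schedule = [0]                    # node 1 transmits first (0-based here)
        last = 0
        while len(schedule) < P:
            candidates = [j for j in range(n) if check[last][j] == 0]
            if not candidates:            # dead end: restart the whole pattern
                break
            # fairness: fewest previous transmissions = smallest column sum
            counts = [sum(row[j] for row in check) for j in candidates]
            best = [j for j, c in zip(candidates, counts) if c == min(counts)]
            nxt = rng.choice(best)
            check[last][nxt] = 1
            schedule.append(nxt)
            last = nxt
        if len(schedule) == P:
            return [s + 1 for s in schedule]   # back to 1-based node labels
    raise RuntimeError("no schedule found")
```

Every returned pattern has the minimum length of equation 2.6 and contains no repeated ordered sub-pattern.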

2.4 Schedule generator: Generating all possible schedules

After introducing this algorithm, the next step is to create another one that, given a fixed number of nodes, returns all possible schedules. At first sight, one can see that the number of schedules grows extremely fast with the number of nodes. Table 2.5 shows an upper bound on the number of combinations to check depending on the number of nodes.

N_schd ≤ n^(N_tx − 1)    (2.7)

Equation 2.7 gives the total number of combinations to check, where n is the number of nodes and N_tx is the number of transmissions.

Nodes   Transmissions   Combinations to check
  3          6          243
  4          9          65536
  5         14          1.22 × 10^9
  6         19          1.01 × 10^14

Table 2.5: Number of combinations against number of nodes

Nodes   Number of schedules
  3            6
  4          498
  5       438144

Table 2.6: Number of schedules given the number of nodes

Table 2.6 shows the number of valid schedules depending on the number of nodes. In these tables, the growth in the number of schedules can be appreciated.

The idea of this algorithm is to scan the possibilities as in a tree diagram: initialize the sequence with the first transmission, node 1, and then begin adding new nodes.

Figure 2.4: Scanning tree to form schedules

When the scan arrives at the end of the tree, the last part of the pattern, the one already scanned, is erased, and the scan continues along another branch of the tree. This idea can be better seen graphically in figure 2.4, where each circle represents a node.

The figure represents an example of a tree analysis with three nodes. First, the algorithm adds '2 - 1 - 3 - 2 - 3' to the first '1', one node at a time; when it reaches the end, it goes back one position and adds a '1'. Then, as that is also an end, it goes back to the previous branch again. This branch is also finished, so it jumps back once more to continue from an earlier branch. Proceeding in this way, the whole set of schedules can be obtained.

First of all, the first transmission (node 1) and the checkmatrix (the identity) are initialized; each branch has its own checkmatrix. Once initialized, a recursive function is run that adds one node to the schedule.

The first task of this function is to check the checkmatrix associated to the current branch, looking for nodes available to transmit, and to store them as candidates. If there are no available nodes, or the number of transmissions equals the minimum number of transmissions, the end of the branch has been reached and the function returns without adding a new node. If there are available nodes, the first candidate is added to the schedule and the checkmatrix is updated. As a new node has been added, a new branch has been created, and the function calls itself to add nodes to this branch until it reaches the end. When the branch is completed, control returns to the calling function, which chooses the next candidate node, and so on.

Using the example of the figure, the first call to the function gives possible nodes = {2,3}. It adds node 2 to the schedule, updates the checkmatrix associated to this branch, and then calls itself. The new call gives possible nodes = {1,3}; it adds node 1 to the schedule, updates the matrix and calls itself again, and so on until the last branch is reached. In the last branch of the example, node 2 has just transmitted; the call gives possible nodes = {3,1}, selects node 3 and calls itself. As this is the last branch, it returns to the calling function. Now node 1 is selected to transmit after node 2 and the function calls itself again. The same happens: it returns to the calling function. As there are no more available nodes here, it returns again to an upper level. Following this whole process, all the schedules are obtained.

This function yields many possible schedules, but the fairness property has not yet been taken into account. To select the correct schedules, the number of transmissions of each node in each schedule is counted. If the difference between the transmissions of the most active node and the least active node is greater than 2, fairness is not achieved and that schedule is not valid.

With all of this, all the valid schedules are obtained. One must be very careful when running this algorithm, since the number of schedules is huge for scenarios with more than 5 nodes.
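A compact sketch of this exhaustive scan (illustrative Python; a recursive depth-first search over the tree of figure 2.4, with the fairness check applied to each completed pattern) might look like:

```python
import math

def all_schedules(n):
    """Enumerate every valid minimum-length schedule starting at node 1."""
    P = math.comb(n, 2) + math.ceil(n / 2) + 1   # eq. (2.6)
    results = []

    def extend(schedule, used_pairs, counts):
        if len(schedule) == P:
            # fairness: most and least active node differ by at most 2
            if max(counts) - min(counts) <= 2:
                results.append([s + 1 for s in schedule])
            return
        last = schedule[-1]
        for j in range(n):
            if j != last and (last, j) not in used_pairs:
                used_pairs.add((last, j))
                schedule.append(j)
                counts[j] += 1
                extend(schedule, used_pairs, counts)   # descend into the branch
                counts[j] -= 1
                schedule.pop()
                used_pairs.discard((last, j))
        # reaching here means this branch is exhausted; backtrack

    counts = [0] * n
    counts[0] = 1
    extend([0], set(), counts)
    return results
```

For n = 3 this returns the 6 schedules of table 2.6.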

2.5 Processing time

One last issue that must be defined is the measurement processing time at each node. It is a really important aspect of this scheduled pulse-based problem. The processing time D must satisfy:

D > t_max    (2.8)

where t_max is the traveling time of a pulse between the two furthest points in the scenario. If the delay does not satisfy equation 2.8, pulses can be received in a different order at different nodes; the schedule would then be violated and the system would not work properly. It is therefore important to design this delay correctly.
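For instance, in the 1000 × 1000 scenario used in the simulations of the next chapter, the bound of equation 2.8 can be evaluated as follows (a sketch; the diagonal is taken as the largest possible separation):

```python
import math

C = 299_792_458.0   # speed of light in m/s

def min_processing_delay(max_distance_m):
    """Lower bound on D from eq. (2.8): the travel time between
    the two furthest points of the scenario."""
    return max_distance_m / C

# diagonal of a 1000 m x 1000 m scenario
t_max = min_processing_delay(math.hypot(1000.0, 1000.0))
# any D strictly greater than t_max (about 4.7 microseconds) is safe
```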

2.6 Summary

In this chapter, a new pulse-based cooperative localization approach, based on scheduling the transmissions, has been introduced. It has been shown that, although the complexity rises compared with the three-way ranging approach, the number of required transmissions drops considerably, so the energy saving is notable. After that, the schedule, its features and restrictions, has been described. An algorithm for a schedule generator has then been proposed and explained, followed by another algorithm to obtain all possible schedules for a given number of nodes. Finally, the processing time at each node has been mentioned as an important aspect of the system.


Chapter 3

Kalman filter with scheduling

Once the scheduling pattern is generated, it is necessary to design the algorithm that will process the measurements. As the system runs in a noisy and changing environment, an adaptive algorithm is required, able to estimate the distances regardless of movement and perturbations. The algorithm chosen for this task is the Kalman filter, which is based on two equations:

• Process equation:

x_{k+1} = F_k x_k + m_k    (3.1)

where x is the state vector, F is the state transition matrix, and m is the process noise, which has a zero-mean Gaussian probability density function.

• Measurement equation:

z_k = H_k x_k + n_k    (3.2)

where x_k is the state vector, H is the measurement matrix, and n is the measurement noise, which is also zero-mean Gaussian.

To adapt these two equations to this problem, the first thing to do is to define what each variable actually is. As the quantity being estimated is the distance between nodes, the state vector is x = [d_12, d_13, ..., d_1N, d_23, d_24, ..., d_{N−1,N}], where N is the number of nodes forming the system. This distance vector is directly related to the traveling time between nodes. The size of the state vector depends on the number of devices and equals the number of interconnections between them. This size M is:

M = n(n − 1)/2    (3.3)

The measurement vector z contains the times between the receptions of consecutive signals at each node, z_i = [T_i1, T_i2, ..., T_iN], where the index i indicates the node's number in the pattern, and N is the total number of equations. The number of equations can be calculated as:


Figure 3.1: Timeline of transmissions for a three node scenario with schedule 1-2-3-2-1-3

N = 1 + n(n − 1)/2 + ⌈n/2⌉    (3.4)

As there are no dependences between the current state and the following one, the state transition matrix is the identity, F_M = I_M, where M is the size of the vector x.

Obtaining the measurement matrix is not trivial. It is different at each node, since it depends on the equations obtained at that node. Once these equations are extracted from the schedule at each node, the matrix H can be obtained.

To illustrate the problem, simulations have been done with three, four and five nodes.

3.1 3-node scenario simulation

In a scenario with three nodes, three distances need to be calculated, so the state vector is x = [t_12, t_13, t_23]. Times are used in the state vector instead of distances; to transform the state vector into distances, it is multiplied by the speed of light. The measurement vector is z_i = [T_i1, T_i2, T_i3]. As can be seen, in this system M = N = 3. Figure 3.2 shows a standard three node scenario.

Looking at figure 3.1, the equations at each node can be obtained; they are gathered in table 3.1, where D is the processing delay at each node, a known value. Writing the equations in a general form:

Figure 3.2: Scenario with three nodes

Node 1                          Node 2                          Node 3
T11 = 2t12 + D                  T21 = 2t23 + 2D                 T31 = t12 − t13 + t23 + D
T12 = −t12 + t13 + t23 + D      T22 = 2t12 + 2D                 T32 = t23 + 2D
T13 = t12 − t13 + t23 + D       T23 = −t12 + t13 + t23 + D      T33 = t12 + t13 − t23 + D

Table 3.1: Set of equations at each node

T_i = H_i t + D_i + n_i    (3.5)

where D_i is a vector that contains the processing delays at node i, and n_i is the measurement noise at node i.

Using this equation and looking at table 3.1, the measurement matrices and delay vectors can be obtained:

H1 = [  2  0  0
       −1  1  1
        1 −1  1 ],   D1 = [1, 1, 1]ᵀ

H2 = [  0  0  2
        2  0  0
       −1  1  1 ],   D2 = [2, 2, 1]ᵀ

(the entries of the delay vectors are in units of the processing delay D).

Figure 3.3: Movement of the three nodes

H3 = [  1 −1  1
        0  0  1
        1  1 −1 ],   D3 = [1, 2, 1]ᵀ
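The rows of table 3.1 can be cross-checked numerically by simulating the timeline of figure 3.1, under the assumption that each node transmits D seconds after receiving the pulse that triggers it (an illustrative Python sketch, not the thesis's MATLAB code; the check below covers the node 1 and node 2 rows):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def travel_times(positions):
    """Pairwise pulse travel times t_ij between node positions."""
    n = len(positions)
    return [[math.dist(positions[i], positions[j]) / C for j in range(n)]
            for i in range(n)]

def measurements_at(node, schedule, t, D):
    """Inter-arrival intervals observed at `node` for a given schedule.

    Each transmission happens D seconds after the transmitter receives
    the previous pulse; the first node transmits at time 0 and uses that
    instant as its first reference event."""
    tx = [0.0]
    for k in range(1, len(schedule)):
        prev, cur = schedule[k - 1] - 1, schedule[k] - 1
        tx.append(tx[-1] + t[prev][cur] + D)
    # reception times at `node` of every pulse sent by another node
    rx = [tx[k] + t[schedule[k] - 1][node - 1]
          for k in range(len(schedule)) if schedule[k] != node]
    events = sorted(([0.0] if schedule[0] == node else []) + rx)
    return [b - a for a, b in zip(events, events[1:])]
```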

At this point the system is defined and it is time to generate a scenario: a 1000 × 1000 plane with three randomly located nodes. Once the scenario is generated, a random movement is applied to each node, by applying a random acceleration starting from a given initial state, either static or with an initial velocity.

Given the initial position of the node, the initial velocity, the strength of the random acceleration, the time step, the total time, and the time the node stays at rest before the movement begins, the complete motion of every node at each time step can be calculated. As can be seen in figure 3.3, every node starts from its initial position and moves randomly through the scenario.

Now that the complete scenario is created, the measurements at each node must be generated. The traveling times of the signals between nodes are needed in order to create measurements; therefore, the Euclidean distances between nodes are calculated and divided by the speed of light. These traveling times are then multiplied by the measurement matrix H and added to the processing delay vector in order to create a set of clean, noise-free measurements.


z_i^clean = H_i t + D_i    (3.6)

where the subscript i is the number of the node in the schedule.

The following step is to define the filter parameters. As the scenario is composed of three nodes and three equations are needed at each node, the size M of the state vector x equals the size N of the measurement vector, which is 3. The measurement matrix at each node is the scheduling matrix H_i defined above, and the state transition matrix is the identity matrix of size M. The process noise is Gaussian with zero mean and a standard deviation of one. The measurement noise is more complex: it is also zero-mean Gaussian, but its intensity depends on the distance, since measurements over longer distances are noisier than over shorter ones. This noise has a relative intensity of 10^−8, because the system works with magnitudes on the order of light travel times. Once this noise is generated, noisy measurements are created by adding it to the clean measurement vector z_i.

The last important step before running the filter is to initialize the state vector and the prediction error covariance matrix:

x0 = [0, 0, 0]ᵀ,   P0 = [ 1 0 0
                          0 1 0
                          0 0 1 ]

At this point the filter is ready to run. An important aspect to take into account is that the measurement noise is not stationary, so the measurement noise covariance matrix is different at each iteration of the filter; hence it has to be recalculated every time the distances are updated. That means that, inside the filter loop, there is a part that computes this covariance matrix before the filter equations are executed. After calculating the covariance matrices, the filter equations are computed:

x⁺_k = F_k x_k    (3.7)
P⁺_k = F_k P_k F_kᵀ + Q_M    (3.8)
K_k = F_k P⁺_k Hᵀ (H P⁺_k Hᵀ + Q_N)⁻¹    (3.9)
x_{k+1} = x⁺_k + K_k (z_k − H x⁺_k − d)    (3.10)
P_{k+1} = [I − K_k H] P⁺_k + Q_M    (3.11)

Equations 3.7 to 3.11 update the filter every N measurements, that is, every time a complete schedule finishes. It is preferable to design the filter so that it updates the distances every time a signal arrives at the node, not only when the whole schedule finishes. Redesigning the filter, the following equations are obtained:


Figure 3.4: Evolution of distances in 250 s measured in node 1

h = H_j    (3.12)
x⁺_k = F_k x_k    (3.13)
P⁺_k = F_k P_k F_kᵀ + Q_M    (3.14)
K_k = F_k P⁺_k hᵀ (h P⁺_k hᵀ + Q_N)⁻¹    (3.15)
α_k = z_k − h x⁺_k − d    (3.16)
x_{k+1} = x⁺_k + K_k α_k    (3.17)
P_{k+1} = [I − K_k h] P⁺_k + Q_M    (3.18)

where h is the jth row vector of H, corresponding to the measurement T_ij at node i, and α is the scalar innovation related to this measurement. The Kalman gain K is now an M × 1 column vector, adapted to a single measurement rather than the whole scheduling pattern. The MATLAB code of the two versions is given in the code appendix. This second version is run for a period of 250 seconds, updated every second, and the results are shown in figure 3.4.

In that figure, the actual distance and the estimated distance are shown in the same graph. The convergence time of the filter is relatively small and the estimate follows the actual distances. Running the filter at each of the three nodes, each one with its own scheduling matrix and its own measurements, figure 3.5 is obtained. It can be seen that the three nodes obtain the same results.
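The thesis implementation is in MATLAB (see the code appendix); as an illustrative Python/numpy sketch with F = I, one per-measurement update in the spirit of equations 3.12 to 3.18 could be written as:

```python
import numpy as np

def sequential_kalman_step(x, P, z, h, d, q_m, r):
    """One per-measurement Kalman update (F taken as the identity).

    x : (M,) state estimate, P : (M, M) covariance,
    z : scalar measurement, h : (M,) row of H for this measurement,
    d : scalar processing-delay term, q_m : process noise variance,
    r : measurement noise variance."""
    # predict (F is the identity, so the state prediction is unchanged)
    x_pred = x
    P_pred = P + q_m * np.eye(len(x))
    # gain and innovation for this single measurement
    S = h @ P_pred @ h + r            # scalar innovation variance
    K = P_pred @ h / S                # (M,) gain column
    alpha = z - h @ x_pred - d        # scalar innovation
    x_new = x_pred + K * alpha
    P_new = (np.eye(len(x)) - np.outer(K, h)) @ P_pred
    return x_new, P_new
```

Cycling such updates over the rows of H1 with noiseless measurements recovers the true travel times.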

Figure 3.5: Evolution of distances in 250 s (distances measured at nodes 1, 2 and 3)

At this point, it is convenient to transform the calculated distances into positions, in order to compare the actual trajectories with the estimated ones in the same figure. The position has to be calculated at every time step, so some extra code is included inside the filter loop, after the filtering part, once the new distance vector is obtained. The updated distances are stored in an M-element column vector; treating this vector with the functions squareform, cmdscale and procrustes, positions can be calculated from distances.

• Squareform: transforms the M-element distance column vector into a matrix S, where every element s_ij of S corresponds to d_ij, the distance between node i and node j:

d = [ d12        S = [  0   d12  d13
      d13              d12   0   d23
      d23 ]            d13  d23   0  ]

• Cmdscale: takes the matrix S and calculates a relative spatial distribution in a p-dimensional space, using position 0 as reference.

• Procrustes: performs a translation, rotation, reflection and scaling of the coordinates obtained with cmdscale, in order to fit those coordinates to the problem.
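The squareform and cmdscale steps can be sketched in Python with numpy (classical multidimensional scaling; the Procrustes alignment is omitted here, so the result is only determined up to rotation, reflection and translation):

```python
import numpy as np

def distances_to_positions(d_vec, n, dim=2):
    """Recover relative coordinates from the n(n-1)/2 pairwise distances.

    d_vec is ordered as [d12, d13, ..., d1n, d23, ...], matching the
    state vector of the filter."""
    # rebuild the symmetric n x n distance matrix (MATLAB squareform)
    S = np.zeros((n, n))
    iu = np.triu_indices(n, k=1)
    S[iu] = d_vec
    S += S.T
    # double-centering of squared distances (classical MDS, MATLAB cmdscale)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (S ** 2) @ J
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:dim]          # largest eigenvalues first
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))
```

The recovered configuration reproduces the input distances, which is all the filter comparison needs before the Procrustes fit.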

Figure 3.6: Estimated movement of the three nodes

After executing these functions on each filtered measurement, the positions of the nodes are obtained. Figure 3.6 shows these estimations in the generated scenario.

Several points do not fit the trajectory; this is due to the convergence time of the filter. At the beginning, before convergence, the filter is not able to estimate the true position and gives invalid results.

3.2 4-node simulation and 5-node simulation

In this section, simulations of scenarios with 4 and 5 nodes are presented. As the whole process has been described in the section above, only node one is observed here; the other nodes behave in the same way as node one, following the process explained in the previous section.

3.2.1 4 Nodes

With four nodes, the schedule 1 − 2 − 3 − 4 − 2 − 1 − 4 − 3 − 1 has been used, obtaining the timeline shown in figure 3.7.

Figure 3.7: Timeline in a four node scenario

The following 6 × 6 scheduling matrix and six-element processing delay vector are obtained at node one:

H = [  2  0  0  0  0  0
      −1  1  0  1  0  0
       0 −1  1  0  0  1
       1  0 −1  0  1  0
       0  0  2  0  0  0
       0  1 −1  0  0  1 ],   D = [1, 1, 1, 1, 2, 1]ᵀ

Results after running the filter are shown in figures 3.8 and 3.9.

3.2.2 5 Nodes

With five nodes, the schedule 1 − 2 − 3 − 4 − 5 − 1 − 3 − 5 − 2 − 4 − 1 − 4 − 2 − 5 has been used, obtaining the timeline shown in figure 3.10. The following 10 × 10 scheduling matrix and ten-element processing delay vector are obtained at node one:


Figure 3.8: Estimated movement of the four nodes

H = [  2  0  0  0  0  0  0  0  0  0
      −1  1  0  0  1  0  0  0  0  0
       0 −1  1  0  0  0  0  1  0  0
       0  0 −1  1  0  0  0  0  0  1
       0  2  0  0  0  0  0  0  0  0
       0 −1  0  1  0  0  0  0  1  0
       1  0  0 −1  0  0  1  0  0  0
      −1  0  1  0  0  1  0  0  0  0
       0  0  0  2  0  0  0  0  0  0
       0 −1  0  0  1  0  0  1  0  0 ],   D = [2, 1, 1, 1, 2, 1, 1, 1, 2, 1]ᵀ

Results after running the filter are shown in figures 3.11 and 3.12.

3.3 Noise model and behaviour

Having simulated these scenarios, it is time to study the behaviour of the system under noise. The noise has been modeled so that its power rises with the distance between nodes: the further apart the nodes are, the more likely there are to be

Figure 3.9: Estimated distances in a 4-node scenario

Figure 3.10: Timeline of transmissions in the five node scenario

Figure 3.11: Estimated movement of the five nodes

Figure 3.12: Estimated distances in a 5-node scenario


disturbances between them, and the longer a pulse travels, the more it is affected by external factors. The noise is modeled by the following equation:

r(t) = N(0, 0.3 σ0 e^{t/2})    (3.19)

where N denotes a normal distribution with zero mean and the given standard deviation, r is the non-stationary noise, σ0 is a constant that controls the noise power, and t is the traveling time of a transmitted pulse between the transmitting and the receiving node; t makes the noise power depend on the distance. Introducing this noise model at every node, a scenario with five nodes has been simulated: a 250 m × 250 m plane where a model of constant velocity and random acceleration has been applied to each node. To analyze the behaviour of the system under noise, several values of σ0 have been used:

σ0 = {0, 0.5 × 10^−8, 1 × 10^−8, ..., 1.5 × 10^−7}

As the noise model is not stationary, its average power is different for each scenario; it differs even for the same scenario with different trajectories. In this case, the power of the noise has been calculated for this particular scenario, under the assumption that the behaviour of the system would be almost the same:

P_noise = σ² = 0.09 σ0² e^t    (3.20)

Figure 3.13 shows the noise power against distance. Two kinds of errors have been analyzed: the distance error, which is the error of the filter when calculating the distances between nodes, and the position error, which is the distance between the exact position of a node in the plane and its estimated position in the same plane.

E_distance = |d − d̂|    (3.21)
E_position = √((x − x̂)² + (y − ŷ)²)    (3.22)

where d are the true distances between nodes, d̂ the estimated distances, x and y the coordinates of the node, and x̂ and ŷ the estimated coordinates in the plane.
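As a small sketch (function names are illustrative), the noise model of equation 3.19 and the two error metrics can be written as:

```python
import math
import random

def ranging_noise(t, sigma0, rng=random):
    """Zero-mean Gaussian noise whose standard deviation grows with the
    pulse travel time t, as in eq. (3.19): std = 0.3 * sigma0 * exp(t / 2)."""
    return rng.gauss(0.0, 0.3 * sigma0 * math.exp(t / 2))

def distance_error(d, d_hat):
    """Eq. (3.21): absolute error of an estimated inter-node distance."""
    return abs(d - d_hat)

def position_error(x, y, x_hat, y_hat):
    """Eq. (3.22): Euclidean distance between true and estimated position."""
    return math.hypot(x - x_hat, y - y_hat)
```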

Figures 3.14 and 3.15 show the distance error and the position error against noise power, and figure 3.16 shows the evolution of the scenario with different noise powers.

One can see that with a noise power greater than 1.2 × 10^−15, the position error becomes greater than 12.5 m, which is a considerable value in a 250 m × 250 m scenario (5%).

Figure 3.13: Noise power against distance

Figure 3.14: Distance error against noise power

Figure 3.15: Position error against noise power

Figure 3.16: Evolution of the scenario for noise powers from σ0 = 0 (noise power 0) up to σ0 = 1.45 × 10^−7 (noise power 1.8923 × 10^−15)

Chapter 4

QR decomposition

QR decomposition is a method to decompose an M × N matrix into a product of two matrices Q and R, each with different characteristics:

A = QR    (4.1)

where A = [a_1, a_2, ..., a_N] and the a_i are the column vectors composing the matrix. The matrix Q = [q_1, q_2, ..., q_M] is an M × M matrix whose column vectors are mutually orthogonal, that is, the scalar product between any two of them equals zero:

< q_i, q_j > = 0,  ∀ i ≠ j    (4.2)

Q also has the property that its inverse equals its transpose (its columns are in fact orthonormal). This is an important advantage, since inverting the matrix becomes computationally cheap:

Q⁻¹ = Qᵀ    (4.3)

The other matrix, R = [r_1, r_2, ..., r_N], is an M × N upper triangular matrix, where all elements below the main diagonal are zero. The main advantage of having an upper triangular matrix is that it can be used in methods like backward substitution, which have relatively low computational complexity.

Having defined the QR decomposition, three methods to perform it are explained in the following pages. As computational complexity is a very important issue when implementing algorithms in hardware, a detailed analysis of each method is given. The three proposed methods to calculate the matrices Q and R are Givens rotations, the Householder transformation, and Gram-Schmidt orthogonalization. The assumption M = N is made throughout this chapter.


4.1 Gram-Schmidt orthogonalization

4.1.1 The concept

The last technique introduced in this chapter to perform the QR decomposition is Gram-Schmidt orthogonalization. Here there is a difference in the decomposition: in the other methods it is done as (M × N) = (M × M)(M × N), whereas Gram-Schmidt decomposes the matrix A as (M × N) = (M × N)(N × N).

This method consists of generating orthonormal column vectors recursively, by calculating several vector projections. Recursively means that each new vector is calculated taking into account the previously calculated ones: the first column vector is independent, the second depends on the projection of the first vector onto it, the third depends on the projections of the first and the second onto it, and so on until the last vector. Equation 4.4 shows how to calculate the projection of a vector a onto another vector e:

proj_e(a) = < e, a > e    (4.4)

where <, > is the scalar product operator, defined as:

< e, a > = e_1 a_1 + e_2 a_2 + e_3 a_3 + ... + e_M a_M    (4.5)

With projections defined, the process to generate the orthogonal column vectors is the following:

u_1 = a_1
u_2 = a_2 - proj_{e_1}(a_2)
u_3 = a_3 - proj_{e_1}(a_3) - proj_{e_2}(a_3)
...
u_N = a_N - \sum_{j=1}^{N-1} proj_{e_j}(a_N)

where a_k is the kth column vector of the original matrix A = [a_1, a_2, a_3, ..., a_N], u_k is the new orthogonal vector associated with a_k, and e_k is the unit vector corresponding to each u_k, calculated as:

e_k = u_k / ||u_k||   (4.6)

where ||·|| is the norm operator. The Q matrix is then formed by concatenating each unit vector:


Q = [e_1, e_2, e_3, ..., e_N]   (4.7)
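The recursion above can be sketched in a few lines of NumPy (the function names are illustrative; the sketch assumes the columns of A are linearly independent, which classical Gram-Schmidt requires):

```python
import numpy as np

def proj(e, a):
    # Projection of vector a onto unit vector e (Eq. 4.4).
    return np.dot(e, a) * e

def gram_schmidt_columns(A):
    # Classical Gram-Schmidt: returns the orthonormal columns e_1..e_N.
    # Assumes the columns of A are linearly independent.
    M, N = A.shape
    E = np.zeros((M, N))
    for k in range(N):
        u = A[:, k].copy()
        for j in range(k):               # subtract proj_{e_j}(a_k) for j < k
            u -= proj(E[:, j], A[:, k])
        E[:, k] = u / np.linalg.norm(u)  # e_k = u_k / ||u_k|| (Eq. 4.6)
    return E

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt_columns(A)
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: columns are orthonormal
```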

To generate R, several scalar products are required. Equation 4.8 shows how to obtain R:

R = \begin{bmatrix}
<e_1, a_1> & <e_1, a_2> & <e_1, a_3> & \cdots \\
0          & <e_2, a_2> & <e_2, a_3> & \cdots \\
0          & 0          & <e_3, a_3> & \cdots \\
\vdots     & \vdots     & \vdots     & \ddots
\end{bmatrix}   (4.8)

where k scalar products are required to calculate the kth column vector of R.
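A small self-contained sketch of this construction (the orthonormal columns are recomputed via classical Gram-Schmidt, again assuming linearly independent columns):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
M, N = A.shape

# Orthonormal columns e_1..e_N via classical Gram-Schmidt.
E = np.zeros((M, N))
for k in range(N):
    u = A[:, k] - sum(np.dot(E[:, j], A[:, k]) * E[:, j] for j in range(k))
    E[:, k] = u / np.linalg.norm(u)

# Upper triangular R (Eq. 4.8): column j needs j+1 scalar products
# (0-indexed), everything below the diagonal stays zero.
R = np.zeros((N, N))
for j in range(N):
    for i in range(j + 1):
        R[i, j] = np.dot(E[:, i], A[:, j])

print(np.allclose(E @ R, A))   # True: Q (here E) times R reproduces A
```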

4.1.2 Computational complexity of Gram-Schmidt orthogonalization

To begin the analysis of the complexity, the first step is to look at the projection equation. Table 4.1 summarizes the number of operations needed to calculate the projection of one vector onto another.

Equation           Multiplications   Additions   Divisions   Square roots
<e, a>             M                 M-1         0           0
<e, a> e           M                 0           0           0
Total projection   2M                M-1         0           0

Table 4.1: Complexity of calculating a vector projection

The next step is to calculate the number of operations necessary to normalize a vector. Table 4.2 shows the results:

Equation                                     Multiplications   Additions   Divisions   Square roots
||u|| = sqrt(u_1^2 + u_2^2 + ... + u_M^2)    M                 M-1         0           1
u / ||u||                                    0                 0           M           0
Total normalization                          M                 M-1         M           1

Table 4.2: Complexity of calculating a vector normalization

Having discussed the complexity of the projections and the normalization, the analysis proceeds step by step. Table 4.3 shows the complexity at each step of the algorithm.

Vector                                                              Mults     Adds        Divs   Sqrts
u_1 = a_1;  e_1 = u_1/||u_1||                                       M         M-1         M      1
u_2 = a_2 - proj_{e_1}(a_2);  e_2 = u_2/||u_2||                     3M        3M-2        M      1
u_3 = a_3 - proj_{e_1}(a_3) - proj_{e_2}(a_3);  e_3 = u_3/||u_3||   5M        5M-3        M      1
...
u_N = a_N - \sum_{j=1}^{N-1} proj_{e_j}(a_N);  e_N = u_N/||u_N||    (2N-1)M   (2N-1)M-N   M      1

Table 4.3: Complexity of each step of the Gram-Schmidt algorithm


At this point, the number of operations at each step is known. To obtain the overall complexity of the algorithm, the contributions of every step must be summed.

Multiplications:

N_{mults} = \sum_{n=1}^{N} (2n-1)M = 2M \frac{N(N+1)}{2} - MN = MN^2   (4.9)

Additions:

N_{adds} = \sum_{n=1}^{N} \left[ (2n-1)M - n \right] = 2M \frac{N(N+1)}{2} - MN - \frac{N(N+1)}{2} = MN^2 - \frac{N^2}{2} - \frac{N}{2}   (4.10)

Divisions:

N_{divs} = \sum_{n=1}^{N} M = MN   (4.11)

Square roots:

N_{sqrts} = \sum_{n=1}^{N} 1 = N   (4.12)
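These closed forms can be cross-checked by summing the per-step counts of Table 4.3 directly, as in this plain-Python sanity check (not part of the algorithm itself):

```python
def per_step_counts(M, N):
    # Sum the per-step operation counts from Table 4.3.
    mults = sum((2 * n - 1) * M for n in range(1, N + 1))
    adds  = sum((2 * n - 1) * M - n for n in range(1, N + 1))
    divs  = sum(M for _ in range(N))
    sqrts = N
    return mults, adds, divs, sqrts

M, N = 7, 5
mults, adds, divs, sqrts = per_step_counts(M, N)
print(mults == M * N**2)                      # True, Eq. 4.9
print(adds  == M * N**2 - N * (N + 1) // 2)   # True, Eq. 4.10 (N^2/2 + N/2 = N(N+1)/2)
print(divs  == M * N)                         # True, Eq. 4.11
print(sqrts == N)                             # True, Eq. 4.12
```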

All these expressions are gathered in table 4.4:

Taking all kinds of operations as floating point operations (flops), the total complexity of the algorithm can be analyzed. The last row in table 4.4 shows the total complexity of the Gram-Schmidt algorithm.


Gram-Schmidt      Operations                   Order
Multiplications   MN^2                         MN^2
Additions         MN^2 - N^2/2 - N/2           MN^2
Divisions         MN                           MN
Square roots      N                            N
Flops             2MN^2 - N^2/2 + MN + N/2     2MN^2

Table 4.4: Complexity of the total Gram-Schmidt algorithm

[Figure: log-scale plot of the number of multiplications, additions, divisions and square roots against matrix dimension (0 to 100).]

Figure 4.1: Complexity of Gram-Schmidt orthogonalization against matrix size

Order of operations with M = N:

Operation         Order
Multiplications   M^3
Additions         M^3
Divisions         M^2
Square roots      M
Total             2M^3
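A quick numerical check of the square-matrix case: substituting N = M into the expressions of Table 4.4, the total flop count approaches 2M^3 as M grows, since the multiplications and additions dominate.

```python
# Evaluate the Table 4.4 expressions with N = M and compare the total
# against the leading-order term 2*M^3.
M = 1000
mults = M * M**2                      # M^3
adds  = M * M**2 - M**2 / 2 - M / 2   # ~ M^3
divs  = M * M                         # M^2
sqrts = M                             # M
total = mults + adds + divs + sqrts
print(abs(total / M**3 - 2.0) < 0.01)  # True: total grows as 2*M^3
```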

