

Extended Consensus Algorithms

June 20, 2013

SEBASTIAN VAN DE HOEF

Master’s Degree Project

Stockholm, Sweden 2013


January 2013 to June 2013

Extended Consensus Algorithms

— Master Thesis —

Sebastian van de Hoef


Automatic Control Laboratory

School of Electrical Engineering, KTH Royal Institute of Technology, Sweden


Abstract

An extension of the linear consensus protocol for agents moving in the plane is considered. For single integrator agents, the use of a feedback vector perpendicular to the standard consensus feedback leads to a large family of trajectories. If only the new perpendicular term is applied, sustained oscillations are facilitated. For special configurations the form of the system trajectories is given in terms of the eigenvalues and eigenvectors of the system matrix.

A proof is given that the additional term does not affect stability. On the other hand, it is motivated that robustness against discrete implementation and switching topologies can be decreased.

The control strategy is also applied to agents with double integrator dynamics. Stability can be achieved with sufficiently high velocity feedback, and the effect of this feedback on the system performance is discussed further.


Acknowledgments

First of all I want to thank my supervisor Dimos Dimarogonas for his excellent help and inspiration throughout the work on this thesis.

Furthermore I want to express my gratitude to everybody working at the automatic control lab at KTH for a nice and inspiring working atmosphere.

Thanks to Meng Guo and Philipp Heer for giving me feedback on my thesis and to Christian Steinmetz for checking grammar and spelling.

As this thesis marks the end of a two-year master programme at KTH for me, I also want to thank my fellow students, who have been an essential element in making this time a really exciting experience in which I have learned a lot and have had much fun. The “Studienstiftung des deutschen Volkes” and the “Dr. Jürgen Ulderup Stiftung” also have to be mentioned here, as they supported my studies at KTH.

Last but not least I want to thank my parents for their support.


Contents

List of Figures

1 Introduction
   1.1 Background
      1.1.1 Consensus Algorithms
      1.1.2 Vehicle Coordination
      1.1.3 Event- and Self-Triggered Control
   1.2 Contribution and Outline of This Thesis

2 Single Integrator Dynamics
   2.1 System Layout
   2.2 Linear Position Feedback
      2.2.1 Stability
      2.2.2 Simplification in the Case of Γ and B Proportional
      2.2.3 Simplification in the Case of B = 0
   2.3 Properties of the Matrices BL and ΓL
      2.3.1 General Properties Valid for all BL
      2.3.2 Eigenvalues and Eigenvectors of BL for Circulant Graphs
   2.4 Example Applications to Orbit Design
      2.4.1 Centering the Trajectories Around Zero
      2.4.2 Complete Graphs With One Different Weight
      2.4.3 Circulant Graphs with Alternating Weights
   2.5 Discrete Implementation
   2.6 Switching Topologies

3 Double Integrator Dynamics
   3.1 Velocity Feedback
   3.2 Proportional Damping
      3.2.1 Sufficient Condition for α
      3.2.2 Further Discussion on the Effect of α
      3.2.3 Consideration of All Eigenvalues
      3.2.4 Removing Drift
      3.2.5 No Damping

4 Transient Control For Single Integrator Dynamics
   4.1 Admissible Control Input
   4.2 Transient Control With Bounded Input Magnitude
      4.2.1 Self-Triggered Control
      4.2.2 Self-Triggered Control With Asymptotic Convergence
      4.2.3 Avoidance of Circular Regions in State Space

5 Conclusion and Future Work
   5.1 Conclusion
   5.2 Future Work

Notation

All scalars are written lower case and slanted. Vectors are lower case, bold and upright. All matrices are upper case, bold and upright.

Symbol: Explanation

0: vector or matrix with all entries zero
1: vector with all entries one
a: coefficient in the solution of a linear differential equation, element of the matrix A, or element of a special eigenvector
A: system matrix
A: label of a point
α: constant
b: abbreviation in a linear system or element of a special eigenvector
c: abbreviation in a linear system
C: set of complex numbers
d: element of a special eigenvector
D: incidence matrix
δ: angle
Δx: translation
e: unit vector
ε: (small) constant in R
G: communication graph
Γ, B: weight matrices
γ, β: weight vectors
γ, β: weights
h: inter-sampling time
i, j: indices
I: identity matrix
j: imaginary unit
k, l, m, n, p: variables in N
L: Laplacian matrix
λ: eigenvalue
M: set of m in a circulant graph
N: set of indices of neighboring agents
N: set of natural numbers
N: number of agents
π: circle constant
R: set of real numbers
r: real-valued variable in the solution of an eigenvector
R: rotation matrix
s: abbreviation for a sum or a distance
S: set of indices
S: skew-symmetric matrix
t: time
u: input magnitude or element of the vector u
u: input
v: general vector
V: matrix of eigenvectors
v: element of the vector v
x: state variable
z: error variable

Operator: Explanation

⊗: Kronecker product
diagonal(·): diagonal matrix with “·” on the diagonal
exp(·): exponential function
sin(·): sine function
cos(·): cosine function
˙(·): time derivative
(·)^T: transpose
(·)*: complex conjugate
|·|: Euclidean vector norm, or cardinality in the case of a set
Re(·): real part of “·”
Im(·): imaginary part of “·”
max_{a,b}(f(a, b)): maximum over a and b of the function f(a, b)
max(a, b): maximum of a and b
!=: set equal
A1A2: distance between the points A1 and A2
df(a)/da: total derivative of f(a) with respect to a
∂f(a)/∂a: partial derivative of f(a) with respect to a

List of Figures

2.1 Illustration of the system layout and the control law. The circles are the agents. For agents 1 and 2 it is illustrated how z_agent,2,1 is randomly picked as z_1,2 = z_1. For the other links only the error variable which is used to globally describe the system is drawn. For agent 4 it is shown how the local control law is composed of the direct term −γ_4(z_3 + z_4) and the perpendicular term β_4 S(z_3 + z_4).

2.2 Circulant graph with M = {1, 3} and N = 7 resulting in k = 4. The circles represent the nodes and the lines between the circles the edges.

2.3 Some of the modes for N = 12, β = [1 2 1 2 … 1 2]^T and M = {1, 2}. The agent positions are indicated by crosses and the links in the communication graph with dashed lines. The different colors indicate the different agents and the outgoing links from the corresponding agent. The eigenvalue corresponding to the mode is given on top of each plot.

2.4 Trajectories for the complete graph. The different colors belong to different agents. The dotted lines between the agents, marked as crosses at the agents' initial positions, indicate the edges of G.

2.5 Time evolution of the trajectory shown in figure 2.4. The crosses indicate the current agent positions.

2.6 Trajectories for the complete graph with different radii of the big circle. The different colors belong to different agents. The dotted lines between the agents, marked as crosses at the agents' initial positions, indicate the edges of G.

2.7 Trajectories for the complete graph with pairs of agents covering rings around the origin. The different colors belong to different agents. The dotted lines between the agents, marked as crosses at the agents' initial positions, indicate the edges of G.

2.8 Trajectories for a circulant graph triggering only two modes with the same eigenvalue. The different colors belong to different agents. The dotted lines between the agents, marked as crosses at the agents' initial positions, indicate the edges of G.

2.9 Effect of α on the sampling time if Γ = αB.

2.10 |z| for switching topology.

2.11 Trajectory with switching topology trying to increase |z| after t = 80.

3.1 Example of agent trajectories for double integrator dynamics and velocity feedback.

3.2 Geometric proof of sufficiently large α.

3.3 Effect of different α on the maximum real part of λ̃. The points where α = α_0 are indicated by crosses.

3.4 r_max for different α and the different eigenvalues of a ring graph with N = 9.

3.5 Simulation of the double integrator case with proportional damping and α = α_0. The different colors belong to different agents. The dotted lines between the agents, marked as crosses at the agents' initial positions, indicate the edges of G.

3.6 Simulation of the double integrator case with proportional damping, α = α_0 and B = 0. The different colors belong to different agents. The dotted lines between the agents, marked as crosses at the agents' initial positions, indicate the edges of G.

3.7 Simulation of the double integrator case with proportional damping and minimal α. The different colors belong to different agents. The dotted lines between the agents, marked as crosses at the agents' initial positions, indicate the edges of G.

4.1 Illustration of the control input for agent i which leads to d/dt(|z|²) < 0.

4.2 Agents' trajectories in the plane for the self-triggered control scheme.

4.3 First dimension of the agents' trajectories over time for the self-triggered control scheme.

4.4 Magnitude of the control inputs of the agents over time. Each cross marks a sampling instant.

4.5 Agents' trajectories in the plane for the self-triggered control scheme with communication of the current maximum speed.

4.6 First dimension of the agents' trajectories over time for the self-triggered control scheme with communication of the current maximum speed.

4.7 First dimension of the agents' trajectories over time for the self-triggered control scheme with communication of the current maximum speed from t = 1.2.

4.8 Magnitude of the control inputs of the agents over time. Each cross marks a sampling instant.

4.9 Plot for the derivation of the admissible discrete control input for bounded control input signal.

4.10 Agents' trajectories in the plane avoiding a circular region indicated in black.

1 Introduction

1.1 Background

1.1.1 Consensus Algorithms

Consensus algorithms have become a cornerstone of decentralized control in recent years. Whenever a number of agents have to agree on a value while every agent can only access information from a subset of the other agents, this class of algorithms can be applied. This value could be some physical quantity such as a position or a speed, but also a more abstract property such as the common center of a formation. Hence these algorithms can often serve as a building block of more complex control schemes.

There is so much work available on consensus that this introduction cannot give an exhaustive overview of everything done in the topic; it can only point the reader to some publications which allow entering the field and finding further information.

Important criteria to discriminate between the different algorithms are the properties of the consensus point, i.e. the value on which the agents finally agree with respect to the initial conditions. Average consensus [34], [36], [20] and average-min-max consensus [7] are to be mentioned here. For a large family of functions of the initial conditions see [8].

There are consensus algorithms for different agent dynamics. As in this work, often simple system dynamics such as single or double integrator dynamics are assumed, which can then be approximated by underlying controllers. Examples for single integrator dynamics are [34], [5], [30], [24], and for double integrator dynamics [39], [16], [35], [51], [23]. In some applications it is also possible to choose the system dynamics arbitrarily, for instance when the average of distributed measurements should be computed in a distributed way. There are also algorithms available which can deal with more complicated agent dynamics, such as non-holonomic agents [46], [26], [12], arbitrary controllable and observable linear systems [50], as well as other non-linear dynamics [51], [49], [25].

In many cases the communication topology between the agents can change over time. Many consensus algorithms still reach consensus with the desired properties as long as the network of agents stays connected [34], [18], [36].

Even though most modern applications work in discrete time, due to the omnipresence of digital processors which first made large distributed control systems feasible, many consensus algorithms are designed in continuous time [34], [24] and can, when implemented, be approximated by sufficiently fast sampling. There are, however, also explicitly discrete-time algorithms [46], [18], [36], [31].


A further distinction is whether the control strategy is linear [24], [18] or non-linear [41], [35] (bounded input), [23], [12], [25].

References [38] and [33] provide good overviews to enter the topic. Even more exhaustive information can be found in the books [37] and [29].

1.1.2 Vehicle Coordination

Another topic related to this report is the coordination of vehicles. The classical keywords here are of course flocking, swarming and formation. Here, too, a multitude of work exists, and it is out of the scope of this report to give an exhaustive overview. The reader may refer to [6], [37], [1] for an overview.

In this context the work on circumnavigation, for instance [22], [19], [42], is relevant as well. Here vehicles coordinate their relative motion while achieving some other motion pattern, such as circular motion around a target or parallel motion. In this thesis a control algorithm is presented which facilitates more complicated relative motion patterns based on relative state information.

1.1.3 Event– and Self–Triggered Control

As more and more control systems are part of larger systems which serve several tasks and use shared communication media, the need has arisen to limit the number of discrete control actions as much as possible without compromising the system's performance too much. Against this background, event- and self-triggered control schemes have been developed.

Event-triggered control [43], [11], [28] means that the state of the controlled plant is constantly monitored. If certain conditions are fulfilled, an event is triggered, i.e. the plant output is sampled and a control action is taken. This can drastically reduce the number of control actions compared to a periodic, time-driven sampling strategy. The disadvantage is that these strategies require the plant state to be constantly monitored.

This limitation is overcome by self-triggered control strategies [47], [3], [10], [2]. At each sampling instant they compute when the next sampling and control action has to take place.

1.2 Contribution and Outline of This Thesis

The work in this thesis is largely based on the results presented in the two conference papers [44] and [45]. The authors proposed an extension of the classical linear continuous-time consensus for plants with single integrator dynamics but with state vector in R². The novelty is that they treat the two dimensions, which could for instance be the position of a vehicle in the plane, not independently but also apply feedback perpendicular to the control input computed by classical linear consensus strategies.

This feedback strategy can produce consensus points outside the convex hull of the agents' initial positions. Additionally, the paths taken by the agents show great variety depending on the feedback coefficients, initial conditions and graph structure. If only perpendicular feedback is applied, the system exhibits sustained oscillations with some intriguing patterns in state space.

The authors of [44] and [45] analyzed the system behavior in great detail for systems with three agents where the number of possible communication graphs is relatively small.



In this thesis these results are extended to arbitrary initial conditions and numbers of agents. It is shown how these results can be employed to control the system trajectories in the case of sustained oscillations. Limitations of implementing this control scheme in a discrete way and with switching topologies are illustrated.

In chapter 3 the same feedback strategy is applied to agents with double integrator dynamics. First the feedback is applied to the velocity, and the results from chapter 2 can quite easily be transferred. Then in section 3.2 position and velocity feedback are applied. Here the focus lies less on the trajectories and sustained oscillations, but rather on stability. It is shown that sufficient velocity feedback, i.e. damping, is needed to stabilize the system.


2 Single Integrator Dynamics

2.1 System Layout

The system considered in this part of the thesis is similar to the one presented in [44]. There are N agents, each with state vector x_i ∈ R², i = 1, …, N, having single integrator dynamics:

ẋ_i = u_i,  i = 1, …, N,   (2.1)

with u_i ∈ R².

The agents are connected over an undirected graph G without loops; G is assumed to be connected. Every agent is a node, and if there is an edge between two agents they are mutually able to measure their relative position. M is the number of edges in G.

Every agent (index i) uses the position differences to its neighbors (index j), z_agent,ij = x_j − x_i, to compute its control signal. This means z_agent,ij = −z_agent,ji.

In order to globally describe the system behavior, for every edge in G connecting agents i and j either the error variable z_i,j = z_k = z_agent,ij or z_i,j = z_k = −z_agent,ij is picked. The orientation of z_k will not play a role in the analysis and is hence arbitrary. All z_k are stacked together into z = [z_1^T z_2^T … z_M^T]^T.

This asymmetry introduced by the mapping from z_agent,ij / z_agent,ji to z_k can be described by a directed graph G_d, where agent i is the head of the edge between agents i and j if z_i,j = −z_agent,ij and the tail if z_i,j = z_agent,ij. So whether z_agent,ij or z_agent,ji is picked for z_i,j is encoded in the direction of the edge.

D ∈ R^{N×M} is the incidence matrix of G_d. According to [44] the relation

z = (D^T ⊗ I_2) x   (2.2)

holds, with I_2 being the 2 × 2 identity matrix and x = [x_1^T x_2^T … x_N^T]^T.
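As a small numeric illustration of (2.2), the following sketch builds an incidence matrix and the stacked error vector. The graph, positions and edge orientations are assumed example data, not the example of figure 2.1.

```python
import numpy as np

# Assumed example data: a path graph 1-2-3 (N = 3 agents, M = 2 edges),
# not the graph of figure 2.1. Edge k runs from its tail (+1) to its head (-1).
D = np.array([[ 1,  0],    # agent 1: tail of edge 1
              [-1,  1],    # agent 2: head of edge 1, tail of edge 2
              [ 0, -1]])   # agent 3: head of edge 2

# Stacked planar positions x = [x1^T x2^T x3^T]^T
x = np.array([0.0, 0.0,    # x1
              1.0, 0.5,    # x2
              3.0, 1.0])   # x3

# Relative positions per edge, z = (D^T kron I2) x, equation (2.2)
z = np.kron(D.T, np.eye(2)) @ x
print(z)   # z1 = x1 - x2, z2 = x2 - x3 for this orientation
```

Flipping the sign of a column of D only flips the orientation of the corresponding z_k, which, as noted above, plays no role in the analysis.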

In figure 2.1 one can find an example illustration of the system layout. The incidence matrix D for this example would be


2.2 Linear Position Feedback

One of the goals of this thesis was to give further insight into the trajectories generated by the control law proposed in [44]:

u = −(ΓD ⊗ I_2) z + (BD ⊗ S) z   (2.4)

with Γ = diag(γ) ∈ R^{N×N}, where γ ∈ R^N, and B = diag(β) ∈ R^{N×N}, where β ∈ R^N. The vector u is the stack vector of the single agents' inputs, u = [u_1^T u_2^T … u_N^T]^T. Furthermore

S = [  0  1
      −1  0 ].   (2.5)

The first summand of (2.4) corresponds to classical consensus: each agent moves towards each neighbor with a speed proportional to the distance. The final control signal for an agent is the superposition of the control signals calculated for the single neighbors. The second term of (2.4) is similar to the first one, with the difference that the agents do not move towards their neighbors but perpendicular to that direction. So the second summand (BD ⊗ S)z creates a control signal u_i for each agent which is perpendicular to the signal generated by −(ΓD ⊗ I_2)z and scaled by the ratio of the two coefficients β_i and γ_i.

In figure 2.1 the control law for an agent is illustrated.

Using (2.4) gives, according to [44], for the positions of the agents:

ẋ = (−(ΓL) ⊗ I_2 + (BL) ⊗ S) x,   (2.6)

and for the position differences:

ż = (−(D^T Γ D) ⊗ I_2 + (D^T B D) ⊗ S) z.   (2.7)

The matrix L is the Laplacian of G and can be calculated as L = D D^T.

Equation (2.6) is an autonomous linear differential equation in state-space form. Hence the solutions depend only on the initial conditions and on the eigenvalues and eigenvectors of A_x = −(ΓL) ⊗ I_2 + (BL) ⊗ S.
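The matrix A_x can be assembled directly from a graph Laplacian. The sketch below uses assumed data (a ring graph with four agents and uniform weights; none of it appears in the thesis) and checks that no eigenvalue has positive real part, in line with the stability result of section 2.2.1.

```python
import numpy as np

# Assumed data: ring graph with N = 4 agents, gamma_i = 1, beta_i = 0.5.
N = 4
adj = np.roll(np.eye(N), 1, axis=0) + np.roll(np.eye(N), -1, axis=0)
L = np.diag(adj.sum(axis=1)) - adj           # graph Laplacian, L = D D^T

Gamma = np.eye(N)                            # Gamma = diag(gamma)
B = 0.5 * np.eye(N)                          # B = diag(beta)
S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])                  # the matrix S of (2.5)

# System matrix of (2.6): Ax = -(Gamma L) kron I2 + (B L) kron S
Ax = -np.kron(Gamma @ L, np.eye(2)) + np.kron(B @ L, S)

# With all gamma_i > 0 no eigenvalue has positive real part (up to roundoff)
print(np.linalg.eigvals(Ax).real.max())
```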

The solution of an autonomous linear differential equation has the form:

x(t) = Σ_{i=1}^{2N} a_i v_x,i exp(λ_x,i t)   (2.8)

with a_i ∈ C and v_x,i being the ith eigenvector of A_x with eigenvalue λ_x,i. The coefficients a_i determine the magnitude and phasing of the eigensolutions. They are determined by the initial condition x_0. Let a = [a_1 a_2 … a_2N]^T and V = [v_x,1 v_x,2 … v_x,2N]. Then setting t = 0 in (2.8) yields

x_0 = V a.   (2.9)

As A_x is a real-valued matrix, all its complex eigenvalues and eigenvectors come in complex conjugate pairs, and so do the corresponding coefficients a_i. See [15] for details.
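The relations (2.8) and (2.9) can be verified numerically: solve (2.9) for the coefficients a and check that the resulting x(t) satisfies ẋ = A_x x. The graph, weights and initial condition below are assumed example data.

```python
import numpy as np

# Assumed data: complete graph on N = 3 agents, Gamma = I, B = 0.3 I.
L = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
S = np.array([[0., 1.], [-1., 0.]])
Ax = -np.kron(L, np.eye(2)) + np.kron(0.3 * L, S)

lam, V = np.linalg.eig(Ax)                  # eigenvalues and eigenvectors of Ax
x0 = np.array([1.0, 0.0, -0.5, 2.0, 0.5, -1.0])
a = np.linalg.solve(V, x0)                  # coefficients from x0 = V a, (2.9)

t = 0.7
x_t = (V * np.exp(lam * t)) @ a             # sum_i a_i v_i exp(lambda_i t), (2.8)
dx_t = (V * (lam * np.exp(lam * t))) @ a    # term-wise time derivative of (2.8)
print(np.allclose(dx_t, Ax @ x_t))          # x(t) indeed solves x' = Ax x
```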


Figure 2.1: Illustration of the system layout and the control law. The circles are the agents. For agents 1 and 2 it is illustrated how z_agent,2,1 is randomly picked as z_1,2 = z_1. For the other links only the error variable which is used to globally describe the system is drawn. For agent 4 it is shown how the local control law is composed of the direct term −γ_4(z_3 + z_4) and the perpendicular term β_4 S(z_3 + z_4).


2.2.1 Stability

Proposition 1: The system described by (2.7) is stable if all γ_i > 0.

Proof. The time derivative of the square of the norm of z is:

d/dt(|z|²) = d/dt(z^T z) = ż^T z + z^T ż.   (2.10)

With (2.7):

d/dt(|z|²) = z^T (−(D^T Γ D) ⊗ I_2 + (D^T B D) ⊗ S)^T z + z^T (−(D^T Γ D) ⊗ I_2 + (D^T B D) ⊗ S) z.   (2.11)

Because Γ and B are diagonal matrices they are symmetric, and as S is skew-symmetric, S^T = −S. Therefore

d/dt(|z|²) = z^T (−(D^T Γ D) ⊗ I_2 − (D^T B D) ⊗ S − (D^T Γ D) ⊗ I_2 + (D^T B D) ⊗ S) z = z^T (−2 (D^T Γ D) ⊗ I_2) z.   (2.12)

Using (2.2) to express z in terms of x on the right-hand side gives

d/dt(|z|²) = x^T (−2 (D D^T Γ D D^T) ⊗ I_2) x = x^T (−2 (L Γ L) ⊗ I_2) x.   (2.13)

In the above equation the first and second dimensions of x are totally decoupled and identical due to the identity matrix in the Kronecker product. So it is sufficient to consider the matrix −2LΓL. As the diagonal elements of Γ are positive by assumption, Γ can be decomposed into two identical matrices, Γ = Γ̄Γ̄ = Γ̄^T Γ̄, where Γ̄ is again a diagonal matrix with diagonal elements γ̄_i = √γ_i. Recall that L = L^T. So −2LΓL = −2(Γ̄L)^T(Γ̄L), which is negative semidefinite and has the same null space as Γ̄L. The matrix Γ̄L is structurally identical to BL with all weights positive, therefore 1 spans its null space; see section 2.3.1 for details. Hence d/dt(|z|²) is only zero when all agents have the same position, meaning z = 0 and |z| = 0, and all other eigenvalues of −2LΓL are negative.

This means that d/dt(|z|²) < 0 as long as |z| ≠ 0. As |z|² ≥ 0, invoking LaSalle's invariance principle, it eventually has to converge to zero. |z|² = 0 implies z = 0, meaning that consensus is asymptotically reached, or in other words that all agents have the same state as time goes to infinity.

Proposition 2: |z|² is constant for system (2.7) if Γ = 0.

Proof. The result follows directly from (2.13).

Note that this does not imply that all elements of z are constant as the system evolves: as one of them gets shorter, at least one of the others has to get longer.
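A quick numeric check of Proposition 2, with assumed data (a ring graph with five agents and non-uniform weights): with Γ = 0 the system matrix of (2.7) is skew-symmetric, so z^T A_z z = 0 for every z and |z| is conserved.

```python
import numpy as np

# Assumed data: ring graph with N = 5 agents, Gamma = 0.
N = 5
D = np.zeros((N, N))                 # incidence matrix: edge k joins k and k+1
for k in range(N):
    D[k, k], D[(k + 1) % N, k] = 1.0, -1.0
S = np.array([[0., 1.], [-1., 0.]])
B = np.diag([1.0, 2.0, 1.5, 0.5, 1.0])

# Right-hand side of (2.7) with Gamma = 0: (D^T B D) kron S, skew-symmetric
Az = np.kron(D.T @ B @ D, S)

rng = np.random.default_rng(1)
z = rng.standard_normal(2 * N)
print(abs(z @ Az @ z))               # numerically zero: d/dt |z|^2 = 0
```

Since A_z is skew-symmetric, its matrix exponential is orthogonal, so the norm of z is conserved exactly along the continuous-time flow.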

2.2.2 Simplification in the Case of Γ and B Proportional



Proposition 3: Let Γ = αB with α ∈ R. Assume v = [v_1 v_2 … v_N]^T is an eigenvector of the matrix BL with eigenvalue λ. Then the vector v̄ = [(v_1) (±j v_1) (v_2) (±j v_2) … (v_N) (±j v_N)]^T ∈ C^{2N} is an eigenvector of A_x = −(ΓL) ⊗ I_2 + (BL) ⊗ S with eigenvalue (−α ± j)λ.

Proof. Define

BL = A = [a_ik], i, k = 1, …, N.   (2.14)

With Γ = αB, the matrix A_x consists of the 2 × 2 blocks a_ik [−α 1; −1 −α]. Applying A_x to v̄ gives for the ith block pair

Σ_{k=1}^{N} a_ik [−α 1; −1 −α] [v_k; ±j v_k] = Σ_{k=1}^{N} a_ik [(−α ± j) v_k; (−α ± j)(±j) v_k] = (−α ± j) λ [v_i; ±j v_i],   (2.15)

where the last step uses Σ_k a_ik v_k = λ v_i. Hence A_x v̄ = (−α ± j) λ v̄.

This means that if λ ∈ R and α = 0, all eigenvalues of A_x lie on the imaginary axis. In the case of real eigenvalues and eigenvectors of BL, the entries of the eigenvector of A_x are real for the first dimension and imaginary for the second dimension, and they come in complex conjugate pairs. That means that the trajectories of the agents are superpositions of circular movements: each complex conjugate pair of eigenvectors causes a circular movement with angular frequency according to the associated eigenvalue λ and radii according to v.
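Proposition 3 can be checked numerically. The sketch below uses assumed data (a ring graph with four agents, non-uniform positive weights, α = 1.5) and verifies that every eigenvalue μ of BL reappears in A_x as (−α ± j)μ.

```python
import numpy as np

# Assumed data: ring graph with N = 4 agents, Gamma = alpha * B.
N = 4
adj = np.roll(np.eye(N), 1, axis=0) + np.roll(np.eye(N), -1, axis=0)
L = np.diag(adj.sum(axis=1)) - adj
B = np.diag([1.0, 0.5, 2.0, 1.0])
alpha = 1.5
S = np.array([[0., 1.], [-1., 0.]])

Ax = -np.kron(alpha * (B @ L), np.eye(2)) + np.kron(B @ L, S)

mu = np.linalg.eigvals(B @ L)        # eigenvalues of BL
lam = np.linalg.eigvals(Ax)          # eigenvalues of Ax

# Every mu of BL should reappear as (-alpha + j) mu and (-alpha - j) mu
for m in mu:
    for target in ((-alpha + 1j) * m, (-alpha - 1j) * m):
        assert np.min(np.abs(lam - target)) < 1e-8
print("eigenvalues of Ax match (-alpha +/- j) * mu")
```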

2.2.3 Simplification in the Case of B = 0


Proposition 4: Assume v = [v_1 v_2 … v_N]^T is an eigenvector of the matrix ΓL with eigenvalue λ. Then the vector v̄ = [(α_1 v_1) (α_2 v_1) (α_1 v_2) (α_2 v_2) … (α_1 v_N) (α_2 v_N)]^T with α_1, α_2 ∈ R is an eigenvector of (ΓL) ⊗ I_2 with eigenvalue λ.

Proof. Define

ΓL = A = [a_ik], i, k = 1, …, N.   (2.16)

The matrix (ΓL) ⊗ I_2 consists of the 2 × 2 blocks a_ik I_2, so applying it to v̄ gives for the ith block pair

Σ_{k=1}^{N} a_ik [α_1 v_k; α_2 v_k] = λ [α_1 v_i; α_2 v_i],   (2.17)

hence ((ΓL) ⊗ I_2) v̄ = λ v̄.

It is easy to see that the two dimensions in which the agents move are decoupled in this case. The dynamics of the first dimension can be described by the N eigenvectors with α_1 = 1 and α_2 = 0, and those of the second dimension by the ones with α_1 = 0 and α_2 = 1.

2.3 Properties of the Matrices BL and ΓL

The relations presented in 2.2.2 and 2.2.3 motivate a further investigation of the eigenvalues of BL and ΓL. Both matrices have principally the same structure. Therefore only BL will be investigated; all statements are also true for ΓL with Γ in place of B.



2.3.1 General Properties Valid for all BL

Recall that B = diag(β) and β = [β_1 β_2 … β_N]^T.

Proposition 5: Let m be the number of zero diagonal entries of B. The rank of BL is N − 1 for m = 0 and m = 1, and N − m for m ≥ 2, if G is connected.

Proof. L has rank N − 1 if G is connected [34]. The multiplication with B just scales the row vectors and therefore does not change the rank if there are no zero entries in β.

As 1 spans the null space of L = L^T, all remaining N − 1 row vectors are needed to express the Nth row vector. Hence any set of N − 1 row vectors of L is linearly independent. Therefore setting one entry of β to zero will not change the rank of BL.

Any additional zero entry in β decreases the rank by one, as it replaces a linearly independent row vector by a row of zeros.
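The rank statement of Proposition 5 is easy to verify numerically; the ring graph and weight vectors below are assumed example data.

```python
import numpy as np

# Assumed data: ring graph with N = 5 agents.
N = 5
adj = np.roll(np.eye(N), 1, axis=0) + np.roll(np.eye(N), -1, axis=0)
L = np.diag(adj.sum(axis=1)) - adj

cases = [([1, 1, 1, 1, 1], N - 1),   # m = 0 zero weights -> rank N - 1
         ([1, 0, 1, 1, 1], N - 1),   # m = 1 zero weight  -> rank N - 1
         ([1, 0, 1, 0, 1], N - 2)]   # m = 2 zero weights -> rank N - m
for beta, expected in cases:
    BL = np.diag(np.array(beta, dtype=float)) @ L
    assert np.linalg.matrix_rank(BL) == expected
print("ranks as predicted by Proposition 5")
```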

Proposition 6: The vector 1 spans the null space of BL if β_i ≠ 0 for i = 1 … N and G is connected.

Proof. As the rank of BL is in this case N − 1, there is only one linearly independent solution v to the equation BLv = 0. As the vector of all ones 1 has the property L1 = 0 (see [13]), 1 spans the complete null space of BL.

Proposition 7: If β has m positive and N − m negative entries, then BL has m eigenvalues λ with Re(λ) ≥ 0 and N − m eigenvalues λ with Re(λ) ≤ 0.

Proof. L has the structure that all off-diagonal elements are −1 or 0, depending on whether there is a connection between the nodes associated with the column/row. The diagonal elements are the number of neighbors of the corresponding agent and hence the negative sum of the rest of the row. See [13].

From Geršgorin's theorem it follows that the eigenvalues lie in the union of N discs, where the ith disc has center β_i L_ii and radius |β_i L_ii|.

Following the logic of the proof of Geršgorin's theorem as presented in [17], it is possible to conclude that m eigenvalues lie in the union of the discs defined by positive β_i and N − m in the union of the discs defined by negative β_i.

To this end BL = A is split into a matrix A_d containing only the diagonal of A and a matrix A_od containing only the off-diagonal elements of A. Then for arbitrarily small positive ε the eigenvalues of A_d + (1 − ε)A_od are located in two disjoint regions, as the centers of the discs are still β_i L_ii but the radii are (1 − ε)|β_i L_ii|. As the eigenvalues of a matrix are continuous functions of its entries, the eigenvalues located in the negative half-plane for ε > 0 can only go to zero but not "jump" into the positive half-plane as ε goes to zero; likewise the eigenvalues located in the positive half-plane cannot jump into the negative half-plane.
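A numeric illustration of Proposition 7 with assumed data (a ring graph with six agents, four positive and two negative weights); the assertions only check the bounds the proposition gives, plus the single zero eigenvalue coming from BL·1 = 0.

```python
import numpy as np

# Assumed data: ring graph, N = 6, m = 4 positive and N - m = 2 negative weights.
N, m = 6, 4
adj = np.roll(np.eye(N), 1, axis=0) + np.roll(np.eye(N), -1, axis=0)
L = np.diag(adj.sum(axis=1)) - adj
beta = np.array([1.0, 2.0, 0.5, 1.5, -1.0, -0.8])

lam = np.linalg.eigvals(np.diag(beta) @ L)

# BL 1 = 0, so one eigenvalue is (numerically) zero; the remaining
# eigenvalues split between the two closed half-planes as in Proposition 7.
assert np.min(np.abs(lam)) < 1e-8
assert np.sum(lam.real > 1e-6) <= m
assert np.sum(lam.real < -1e-6) <= N - m
print(np.sum(lam.real > 1e-6), "right half-plane,",
      np.sum(lam.real < -1e-6), "left half-plane")
```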


Proposition 8: If all entries of β are strictly positive then all eigenvalues of BL are real.

Proof. Choosing S̄ = diag([√β_1 … √β_N]) gives

S̄^{-1} BL S̄ = S̄ L S̄ = (S̄ L S̄)^T,   (2.18)

which is a real-valued symmetric matrix and hence has real eigenvalues. As BL is similar to S̄ L S̄, all eigenvalues of BL are real.

Proposition 9: If all entries of β are strictly negative then all eigenvalues of BL are real.

Proof. If v is an eigenvector of −BL with eigenvalue λ, then v is an eigenvector of BL with eigenvalue −λ. All eigenvalues of −BL = (−B)L are real by Proposition 8, and the product of −1 and a real value is real. Therefore all the eigenvalues of BL are real.

In fact numeric experiments with random numbers indicate that the product of any real diagonal matrix with a symmetric, diagonally dominant matrix (which holds for L) has real eigenvalues.

2.3.2 Eigenvalues and Eigenvectors of BL for Circulant Graphs

Equal Weights

If B = αI then BL = αL. Obviously the eigenvectors are the same for αL and L, and the eigenvalues are simply multiplied by α. So it is sufficient to study the eigenvalues and eigenvectors of L in that case.

If G is a circulant graph then L is a circulant matrix. A k-regular circulant graph for even k can be constructed by placing all the nodes in a ring and connecting every node with the node m ∈ N positions further on. This can be repeated for different integers m. Mind that this does not necessarily lead to a connected graph if m > 1; here, connected graphs are always assumed. Let M denote the set of values m has to take, according to the above procedure, in order to construct a particular k-regular graph. These graphs are also known as crown graphs if M = {1, 2, …, k/2}. In figure 2.2 an example with M = {1, 3} and N = 7 can be found.

For all circulant matrices the eigenvectors are the same and there is a closed form expression for them and their eigenvalues. See for instance [9].

Let A ∈ R^{N×N} be a circulant matrix:

A = [ a_1  a_2  ⋯  a_{N−1}  a_N
      a_N  a_1  ⋯  a_{N−2}  a_{N−1}
       ⋮          ⋱          ⋮
      a_2  a_3  ⋯  a_N      a_1 ].   (2.19)

Then the ith eigenvector of A is v_i = [ω_i^0 ω_i^1 … ω_i^{N−1}]^T with ω_i = exp(2πji/N). The corresponding eigenvalue λ_i is given by λ_i = [a_1 a_2 … a_{N−1} a_N] v_i.

For a k-regular circulant graph, L is the circulant matrix with diagonal entries k and off-diagonal entries −1 or 0. (2.20)



Figure 2.2: Circulant graph with M = {1, 3} and N = 7 resulting in k = 4. The circles represent the nodes and the lines between the circles the edges.

Whether an off-diagonal entry is −1 or 0 depends on the choice of M. So the ith eigenvalue is:

λ_i = k − Σ_{p∈M} (ω_i^p + ω_i^{N−p})
    = k − Σ_{p∈M} ( exp(2πjip/N) + exp(2πji(N−p)/N) )
    = k − Σ_{p∈M} ( exp(2πjip/N) + exp(−2πjip/N) )
    = k − 2 Σ_{p∈M} cos(2πip/N).   (2.21)

The above results are summarized as:

Proposition 10: If G is a circulant graph and B = αI, then the eigenvalues of BL are α times the values given by (2.21), and the corresponding eigenvectors are v_i = [ω_i^0 ω_i^1 … ω_i^{N−1}]^T with ω_i = exp(2πji/N).

The eigenvectors v_i are complex-valued, which makes it hard to interpret them as positions in a plane. As the eigenvalues of L are real, as is L itself, there must be a set of real eigenvectors spanning the eigenspace of L.
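Formula (2.21) can be checked against a numerically computed spectrum; the sketch below rebuilds the graph of figure 2.2 (N = 7, M = {1, 3}, hence k = 4).

```python
import numpy as np

# The graph of figure 2.2: N = 7 nodes, chord offsets M = {1, 3}, k = 4.
N, M = 7, (1, 3)
k = 2 * len(M)
adj = np.zeros((N, N))
for p in M:
    for i in range(N):
        adj[i, (i + p) % N] = adj[i, (i - p) % N] = 1.0
L = k * np.eye(N) - adj                      # circulant Laplacian

# Closed-form eigenvalues from (2.21): k - 2 sum_p cos(2 pi i p / N)
lam_formula = np.array([k - 2 * sum(np.cos(2 * np.pi * i * p / N) for p in M)
                        for i in range(N)])
lam_numeric = np.linalg.eigvalsh(L)          # L is symmetric, sorted ascending

print(np.allclose(np.sort(lam_formula), lam_numeric))  # True
```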


Proposition 11: The eigenvector v_0 = 1 is real. In the case of even N there is one other real eigenvector, v_{N/2}; its kth element is

v_{N/2,k} = exp(2πj (N/2) k / N) = exp(πjk) = (−1)^k.   (2.22)

All the other eigenvectors have linearly independent real and imaginary parts, both unequal to zero.

Proof. The mth element of v_i is ω_i^{m−1} = exp(2πji(m−1)/N). This is a complex number of length 1 and argument 2πi(m−1)/N. The real (imaginary) parts of all elements of v_i are only zero if this argument is a multiple of π (a multiple of π plus π/2) for all m = 1 … N, which is obviously only the case for i = 0 and, if N/2 ∈ Z, i.e. if N is even, for i = N/2.

The real and imaginary parts of v_i are linearly independent, as the first element of v_i is always 1.

Hence a complete basis of eigenvectors can be constructed by considering $v_0$, $v_{N/2}$ if N is even, and $\mathrm{Im}(v_i)$ as well as $\mathrm{Re}(v_i)$ for $i = 1 \dots \lfloor\frac{N-1}{2}\rfloor$.

Proposition 12: The real and imaginary parts of the eigenvectors for $i = \lceil\frac{N+1}{2}\rceil \dots N-1$ are linearly dependent on the real and imaginary parts of the eigenvectors for $i = 1 \dots \lfloor\frac{N-1}{2}\rfloor$.

Proof. Let $v_{i,m}$ be the mth element of $v_i$. Then
$$\mathrm{Re}(v_{N-i,m}) = \cos\!\left(\frac{2\pi}{N}(N-i)m\right) = \cos\!\left(\frac{2\pi}{N}im\right) = \mathrm{Re}(v_{i,m}). \quad (2.23)$$
Similarly
$$\mathrm{Im}(v_{N-i,m}) = \sin\!\left(\frac{2\pi}{N}(N-i)m\right) = -\sin\!\left(\frac{2\pi}{N}im\right) = -\mathrm{Im}(v_{i,m}). \quad (2.24)$$

A consequence of this is that there are $\lfloor\frac{N-1}{2}\rfloor$ pairs of equal real eigenvalues. This is because L is a real-valued matrix, so when it is multiplied with a complex-valued vector the real and imaginary parts of the calculation can be separated. Hence the real and the imaginary part are each an eigenvector on their own, with the same eigenvalue. Because they are linearly independent, two eigenvalues of the same value are associated with these two eigenvectors.

As they are “created” by an exponential function, a pair of eigenvectors with the same eigenvalue, plotted against each other, gives points on a circle.
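A small numerical sketch (assuming the simplest circulant graph, the ring with M = {1}) illustrates both properties: the real and imaginary parts of $v_i$ are real eigenvectors sharing one eigenvalue, and plotted against each other their elements lie on a circle.

```python
import numpy as np

# Laplacian of the ring graph on N nodes: k = 2, neighbors at offsets +-1.
N = 8
I = np.eye(N)
L = 2 * I - np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)

i = 2
v = np.exp(2j * np.pi * i / N) ** np.arange(N)   # complex eigenvector v_i
lam = 2 - 2 * np.cos(2 * np.pi * i / N)          # eigenvalue from (2.21)

# Re(v_i) and Im(v_i) are each real eigenvectors with the same eigenvalue.
assert np.allclose(L @ v.real, lam * v.real)
assert np.allclose(L @ v.imag, lam * v.imag)
# Plotting Im(v_i) over Re(v_i) puts the N points on the unit circle.
assert np.allclose(v.real**2 + v.imag**2, 1.0)
```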

Alternating Weights

Instead of choosing all weights $\beta_i$ the same, analytical solutions for the eigenvectors and -values can be found for alternating weights and even N. That means that
$$\beta = [\beta_1\ \beta_2\ \beta_1\ \beta_2\ \dots\ \beta_1\ \beta_2]^T.$$


Proposition 13: If G is a circulant graph and $B = \mathrm{diag}(\beta)$ with $\beta = [\beta_1\ \beta_2\ \beta_1\ \beta_2\ \dots\ \beta_1\ \beta_2]^T$, then eigenvectors of BL are given by (2.25). The eigenvalues and the parameters $r_1$ and $r_2$ are given by (2.30) and (2.31). There are two cases for the solutions.

Case 1: If $s_{odd,i} = 0$ then $r_1$ or $r_2$ is zero and the other is arbitrary. The eigenvalues are in that case given by (2.35) and (2.36).

Case 2: If $s_{odd,i} \neq 0$ then the ratio $\bar r = \frac{r_1}{r_2}$ is given by (2.41) and the eigenvalues by (2.47). A complete basis of eigenvectors is given for $i = 1, \dots, \lfloor\frac{N}{4}\rfloor$.

Proof. In this case an approach similar to that in 2.3.2 is successful, i.e.
$$v_i = \left[r_1\omega_i^0\ \ r_2\omega_i^1\ \ r_1\omega_i^2\ \ r_2\omega_i^3\ \dots\ r_1\omega_i^{N-2}\ \ r_2\omega_i^{N-1}\right]^T \quad (2.25)$$
with $r_1, r_2 \in \mathbb{R}$ and $\omega_i$ as in 2.3.2.

Let $M_{odd}$ be the set of odd numbers and $M_{even}$ the set of even numbers in M.


Similarly, for even m in the interval $[0 \dots N-1]$,

$$\beta_2\Bigg[kr_2\exp\!\Big(\frac{2\pi j}{N}im\Big) - r_1\!\!\sum_{p\in M_{odd}}\!\Big(\exp\!\Big(\frac{2\pi j}{N}i(m+p)\Big) + \exp\!\Big(\frac{2\pi j}{N}i(m-p)\Big)\Big) - r_2\!\!\sum_{p\in M_{even}}\!\Big(\exp\!\Big(\frac{2\pi j}{N}i(m+p)\Big) + \exp\!\Big(\frac{2\pi j}{N}i(m-p)\Big)\Big)\Bigg]$$
$$= \beta_2\exp\!\Big(\frac{2\pi j}{N}im\Big)\Bigg[kr_2 - 2r_1\!\!\sum_{p\in M_{odd}}\!\cos\!\Big(\frac{2\pi}{N}ip\Big) - 2r_2\!\!\sum_{p\in M_{even}}\!\cos\!\Big(\frac{2\pi}{N}ip\Big)\Bigg] = \lambda r_2\exp\!\Big(\frac{2\pi j}{N}im\Big)$$
$$\Leftrightarrow\ \lambda r_2 = \beta_2\Bigg(kr_2 - 2r_1\!\!\sum_{p\in M_{odd}}\!\cos\!\Big(\frac{2\pi}{N}ip\Big) - 2r_2\!\!\sum_{p\in M_{even}}\!\cos\!\Big(\frac{2\pi}{N}ip\Big)\Bigg). \quad (2.27)$$

With the abbreviations

$$s_{odd,i} = \sum_{p\in M_{odd}}\cos\!\Big(\frac{2\pi}{N}ip\Big) \quad (2.28)$$
$$s_{even,i} = \sum_{p\in M_{even}}\cos\!\Big(\frac{2\pi}{N}ip\Big) \quad (2.29)$$

(2.26) and (2.27) become

$$\lambda r_1 = \beta_1\left(kr_1 - 2r_2 s_{odd,i} - 2r_1 s_{even,i}\right) \quad (2.30)$$
$$\lambda r_2 = \beta_2\left(kr_2 - 2r_1 s_{odd,i} - 2r_2 s_{even,i}\right). \quad (2.31)$$

So (2.30) and (2.31) are two equations for $r_1$ and $r_2$. Now there are two cases:

Case 1:
$$s_{odd,i} = 0. \quad (2.32)$$

Then (2.30) and (2.31) reduce to:
$$r_1\lambda = r_1\beta_1(k - 2s_{even,i}) \quad (2.33)$$
$$r_2\lambda = r_2\beta_2(k - 2s_{even,i}). \quad (2.34)$$

As $\beta_1 \neq \beta_2$ by assumption, (2.33) and (2.34) can only be solved with $r_1 = 0$ or $r_2 = 0$. The parameter ($r_2$ or $r_1$) that is unequal to zero can then take an arbitrary value, as this just scales the eigenvector. $r_1 = 0$ and $r_2 = 0$ gives just the trivial solution, the zero vector.

If $r_1 = 0$, $\lambda$ becomes
$$\lambda = \beta_2(k - 2s_{even,i}) \quad (2.35)$$
and for $r_2 = 0$
$$\lambda = \beta_1(k - 2s_{even,i}). \quad (2.36)$$


Case 2:
$$s_{odd,i} \neq 0. \quad (2.37)$$

Then neither $r_1 = 0 \wedge r_2 \neq 0$ nor $r_1 \neq 0 \wedge r_2 = 0$ is a solution. This is easily seen by setting $r_1 = 0$ in (2.30), which gives:
$$0 = -2r_2 s_{odd,i} \quad (2.38)$$
in contradiction with the assumptions for this case. Similarly, setting $r_2 = 0$ in (2.31) gives:
$$0 = -2r_1 s_{odd,i} \quad (2.39)$$
which contradicts the assumptions as well. So combining (2.30) and (2.31) gives:
$$\beta_1\left(k - 2\frac{r_2}{r_1}s_{odd,i} - 2s_{even,i}\right) = \beta_2\left(k - 2\frac{r_1}{r_2}s_{odd,i} - 2s_{even,i}\right). \quad (2.40)$$

Setting $\bar r = \frac{r_1}{r_2}$ gives a quadratic equation in $\bar r$:
$$0 = 2\beta_2 s_{odd,i}\bar r^2 + (\beta_1 - \beta_2)(k - 2s_{even,i})\bar r - 2\beta_1 s_{odd,i}. \quad (2.41)$$

With the standard formula for quadratic equations it is easy to see that (2.41) has real solutions if and only if
$$(\beta_1 - \beta_2)^2(k - 2s_{even,i})^2 + 4\beta_1\beta_2(2s_{odd,i})^2 \geq 0. \quad (2.42)$$

If $\beta_1\beta_2 \geq 0$, (2.42) is obviously fulfilled. If $\beta_1\beta_2 < 0$, the worst case occurs for
$$\max_{\beta_1,\beta_2}\left\{\frac{|\beta_1\beta_2|}{(\beta_1 - \beta_2)^2}\right\}\ \text{s.t.}\ \beta_1\beta_2 < 0 \quad (2.43)$$
$$= \max_{\beta_1,\beta_2}\left\{\frac{-\beta_1\beta_2}{(\beta_1 - \beta_2)^2}\right\}\ \text{s.t.}\ \beta_1\beta_2 < 0. \quad (2.44)$$

The constraint $\beta_1\beta_2 < 0$ means that either $\beta_1 > 0, \beta_2 < 0$ or $\beta_1 < 0, \beta_2 > 0$, and hence $|\beta_1 - \beta_2| = |\beta_1| + |\beta_2|$. So $-\beta_1\beta_2$ is maximized if $|\beta_1||\beta_2|$ is maximized.

Assume that $|\beta_1| + |\beta_2| = \alpha$. Then $|\beta_1||\beta_2| = |\beta_1|(\alpha - |\beta_1|)$. In order to maximize this:
$$\frac{\partial\big(|\beta_1|(\alpha - |\beta_1|)\big)}{\partial|\beta_1|} = \alpha - 2|\beta_1| \stackrel{!}{=} 0 \ \Leftrightarrow\ |\beta_1| = |\beta_2| = \frac{\alpha}{2} \quad (2.45)$$
which is a maximum, as $|\beta_1|(\alpha - |\beta_1|)$ is a downward-opening parabola in $|\beta_1|$. Inserting (2.45) into (2.42) gives the condition:

$$(k - 2s_{even,i})^2 - (2s_{odd,i})^2 \geq 0. \quad (2.46)$$

The eigenvalues are then easily derived from (2.31) as
$$\lambda = \beta_2\left(k - 2\bar r s_{odd,i} - 2s_{even,i}\right). \quad (2.47)$$

Given a graph where $M_{odd} \cup M_{even} = \{1 \dots n\}$, $n \in \mathbb{N}$: if n is even, $M_{even}$ and $M_{odd}$ each have $\frac{k}{4}$ elements. As $\cos(t) \leq 1$ for $t \in \mathbb{R}$, in this case $2s_{even,i} \leq \frac{k}{2}$ and $2s_{odd,i} \leq \frac{k}{2}$. Hence (2.46) is fulfilled.

If n is odd, $M_{odd}$ has $\frac{k+2}{4}$ elements but $M_{even}$ only $\frac{k-2}{4}$. Therefore (2.46) is fulfilled in this case as well.

Similar to 2.3.2, the imaginary and real parts of the eigenvectors are linearly independent as long as the eigenvectors are not purely real or imaginary. With the two different $\bar r$ for each i this would give 4N solutions, so some of them have to be linearly dependent. Indeed,
$$s_{odd,N-i} = \sum_{p\in M_{odd}}\cos\!\Big(\frac{2\pi}{N}(N-i)p\Big) = s_{odd,i} \quad (2.48)$$
as well as
$$s_{even,N-i} = \sum_{p\in M_{even}}\cos\!\Big(\frac{2\pi}{N}(N-i)p\Big) = s_{even,i}. \quad (2.49)$$
From (2.31),
$$\lambda = \beta_2\left(k - 2\bar r s_{odd,i} - 2s_{even,i}\right). \quad (2.50)$$
So $\lambda_i = \lambda_{N-i}$. Furthermore
$$s_{odd,i+N/2} = \sum_{p\in M_{odd}}\cos\!\Big(\frac{2\pi}{N}\big(i + \tfrac{N}{2}\big)p\Big) = -s_{odd,i} \quad (2.51)$$
and
$$s_{even,i+N/2} = \sum_{p\in M_{even}}\cos\!\Big(\frac{2\pi}{N}\big(i + \tfrac{N}{2}\big)p\Big) = s_{even,i}. \quad (2.52)$$
So the coefficients of the quadratic term and the constant term in (2.41) change sign.

A general quadratic equation $ax^2 + bx + c = 0$ has solutions
$$x_{1/2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}. \quad (2.53)$$
The quadratic equation $-ax^2 + bx - c = 0$ has solutions
$$\bar x_{1/2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2(-a)} = -x_{1/2}. \quad (2.54)$$

Hence the two solutions for $\bar r$ at $i + \frac{N}{2}$ are the negatives of those at i. Therefore
$$\lambda_{i+N/2} = \beta_2\big(k - 2(-\bar r)(-s_{odd,i}) - 2s_{even,i}\big) = \lambda_i. \quad (2.55)$$

So it is sufficient to consider $i = 1 \dots \frac{N}{4}$ if $\frac{N}{4} \in \mathbb{N}$, and $i = 1 \dots \frac{N-2}{4}$ if $\frac{N}{4} \notin \mathbb{N}$.

If $\frac{N}{4} \in \mathbb{N}$ then $s_{odd,N/4} = 0$, which gives one purely imaginary and one purely real eigenvector.


Example. To illustrate the above explanation, consider the case of N = 12, M = {1, 2} and two alternating weights, i.e. $\beta = [1\ 2\ 1\ 2\ \dots\ 1\ 2]^T$.

In this case the eigenvalues are {0, 6, 1.63, 1.63, 7.37, 7.37, 4.63, 4.63, 10.37, 10.37, 6, 12} and the corresponding eigenvectors, being the row vectors of V, are

$$V = \begin{pmatrix}
1 & -0.5 & 1 & 0 & -0.46 & 0 & 1 & 0 & -0.19 & 0 & 1 & 0 \\
1 & 1 & 0.69 & 0.46 & 1 & 0.5 & 0.19 & 0.37 & 0.5 & 1 & 0 & 1 \\
1 & -0.5 & 0.5 & 1 & -0.23 & -0.34 & -0.5 & 1 & 0.09 & -0.19 & -1 & 0 \\
1 & 1 & 0 & 0.91 & 0 & 1 & -0.37 & 0 & -1 & 0 & 0 & -1 \\
1 & -0.5 & -0.5 & 1 & 0.23 & -0.34 & -0.5 & -1 & 0.09 & 0.19 & 1 & 0 \\
1 & 1 & -0.69 & 0.46 & -1 & 0.5 & 0.19 & -0.37 & 0.5 & -1 & 0 & 1 \\
1 & -0.5 & -1 & 0 & 0.46 & 0 & 1 & 0 & -0.19 & 0 & -1 & 0 \\
1 & 1 & -0.69 & -0.46 & -1 & -0.5 & 0.19 & 0.37 & 0.5 & 1 & 0 & -1 \\
1 & -0.5 & -0.5 & -1 & 0.23 & 0.34 & -0.5 & 1 & 0.09 & -0.19 & 1 & 0 \\
1 & 1 & 0 & -0.91 & 0 & -1 & -0.37 & 0 & -1 & 0 & 0 & 1 \\
1 & -0.5 & 0.5 & -1 & -0.23 & 0.34 & -0.5 & -1 & 0.09 & 0.19 & -1 & 0 \\
1 & 1 & 0.69 & -0.46 & 1 & -0.5 & 0.19 & -0.37 & 0.5 & -1 & 0 & -1
\end{pmatrix}. \quad (2.56)$$

As $\frac{12}{4} = 3$ is a natural number, i = 0 and i = 3 give only two eigenvectors each, as the real or imaginary part is zero. i = 1 and i = 2 give four eigenvectors each. The eigenvectors with eigenvalues 1.63 and 7.37 correspond to i = 1, and the eigenvectors with eigenvalues 4.63 and 10.37 correspond to i = 2.

In figure 2.3 one can see the eigenvectors resulting from the imaginary parts plotted over the eigenvectors coming from the real parts of v1 and v2 (rows) for the two different solutions of (2.41) (columns). The links between the agents are drawn with dashed lines. Note that for i = 2 every position is occupied by two agents.

Simulations indicate that similar results could be achieved by having the same weight for every mth agent. For this, N/m has to be a natural number. Instead of two different r there then seem to be m different r.
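The example spectrum above can be reproduced numerically; the sketch below builds L and B exactly as in the example and compares the eigenvalues of BL with the listed multiset:

```python
import numpy as np

# Circulant graph with N = 12, M = {1, 2}: every node is linked to its
# neighbors at offsets +-1 and +-2, so k = 4 and L = 4I minus the shifts.
N = 12
I = np.eye(N)
L = 4 * I - sum(np.roll(I, s, axis=1) for s in (1, -1, 2, -2))
B = np.diag([1.0, 2.0] * (N // 2))            # alternating weights beta

eig = np.sort(np.linalg.eigvals(B @ L).real)  # spectrum is real here
expected = np.sort([0, 6, 6, 12,
                    1.6277, 1.6277, 7.3723, 7.3723,
                    4.6277, 4.6277, 10.3723, 10.3723])
assert np.allclose(eig, expected, atol=1e-3)  # matches the example values
```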

Complete Graphs

A special case of a circulant graph is the complete graph, where every node is connected to all the others. Let there be $l \leq N$ groups of agents with equal weights, so there are $n_m$ agents with weight $\alpha_m$. Hence $\beta$ has the form $\beta = [\alpha_1 \dots \alpha_1\ \alpha_2 \dots \alpha_2\ \dots\ \alpha_l \dots \alpha_l]^T$.

This section is summarized as:

Proposition 14: For the complete graph with l groups of weights in $\beta$ there is one eigenvector $\mathbf{1}$ with eigenvalue 0. For each of the l groups with weight $\alpha_m$ there are $n_m - 1$ eigenvectors with eigenvalue $\lambda = \alpha_m N$, where $n_m$ is the number of agents in the group. Furthermore there are $l - 1$ eigenvectors with the structure in (2.63) and eigenvalues given by (2.64).

Figure 2.3: Some of the modes for N = 12, $\beta = [1\ 2\ 1\ 2\ \dots\ 1\ 2]^T$ and M = {1, 2}. The panel titles give the corresponding eigenvalues λ = 1.63, 7.37, 4.63 and 10.37.


For m = 1 this would be, for instance:
$$v_{n_1,1} = \begin{pmatrix} a \\ b \\ \vdots \\ b \\ 0 \\ \vdots \\ 0 \end{pmatrix}. \quad (2.57)$$

In that case L has the structure:
$$L = \begin{pmatrix} N-1 & -1 & \cdots & -1 \\ -1 & N-1 & \cdots & -1 \\ \vdots & & \ddots & \vdots \\ -1 & \cdots & -1 & N-1 \end{pmatrix}. \quad (2.58)$$

So for $v_{n_1,1}$ to be an eigenvector with eigenvalue $\lambda$ it has to fulfill the equations
$$(N-1)a - (n_1 - 1)b = \frac{\lambda}{\alpha_1}a \quad (2.59)$$
$$-a + (N - n_1 + 1)b = \frac{\lambda}{\alpha_1}b. \quad (2.60)$$

These equations are derived from the first $n_1$ rows. This set of linear equations is fulfilled by $\lambda = \alpha_1 N$. Setting $b = 1$ gives $a = -(n_1 - 1)$.

The remaining $N - n_1$ rows give the equation $-\mathbf{1}^T v_{n_1,1} = 0$. The sum of the elements of $v_{n_1,1}$ is $(n_1 - 1)b + a = (n_1 - 1) - (n_1 - 1) = 0$, so the condition for $v_{n_1,1}$ to be an eigenvector from the remaining $N - n_1$ rows is fulfilled as well.

Now $n_1 - 1$ linearly independent eigenvectors can be constructed by permuting the first $n_1$ elements of $v_{n_1,1}$. For example, the second eigenvector for the first group would be:
$$v_{n_1,2} = \begin{pmatrix} b \\ a \\ b \\ \vdots \\ b \\ 0 \\ \vdots \\ 0 \end{pmatrix}. \quad (2.61)$$

has exactly the same structure as L. However, L has rank N − 1 (see [34]), and hence rank(P) = $n_1 - 1$. Therefore $n_1 - 1$ linearly independent eigenvectors can be constructed this way.

The same procedure can be repeated for the other groups of weights. This gives N − l eigenvectors.

There is obviously the eigenvector 1 with eigenvalue 0.

The remaining $l - 1$ eigenvectors can be constructed as vectors such that the first $n_1$ elements are the same, then the next $n_2$ elements are the same, and so on. Such an eigenvector looks like
$$v = [d_1 \dots d_1\ \ d_2 \dots d_2\ \ d_3 \dots d_3\ \dots\ d_l \dots d_l]^T. \quad (2.63)$$

The vector can be normalized such that $d_1 = 1$. This gives for $m = 1 \dots l$ the equations
$$\alpha_m\Bigg(\big((N-1) - (n_m - 1)\big)d_m - \sum_{\substack{p=1 \\ p\neq m}}^{l} d_p n_p\Bigg) = \lambda d_m \ \Leftrightarrow\ \alpha_m\Bigg(N d_m - \sum_{p=1}^{l} d_p n_p\Bigg) = \lambda d_m. \quad (2.64)$$

Note that this relation holds even for l = N .

In the special case of $n_1 = 1$, $l = 2$, $\alpha_1 = \alpha$ and $\alpha_2 = 1$, so that only one weight is different, (2.64) is solved by $\lambda = n_1 + \alpha(N - n_1)$ and $\frac{d_1}{d_2} = -\frac{\alpha(N - n_1)}{n_1}$. In this case there is only one eigenvector with a non-zero entry as the first element, and this eigenvector has eigenvalue $1 + \alpha(N-1)$. There is one eigenvalue at zero and the other eigenvalues are N.
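This spectrum is easy to confirm numerically; the sketch below uses the arbitrary example value α = 0.4 for the single deviating weight:

```python
import numpy as np

# Complete graph on N = 6 nodes: L = N*I - 11^T. One agent has weight
# alpha, the other N - 1 have weight 1 (the special case n1 = 1, l = 2).
N, alpha = 6, 0.4
L = N * np.eye(N) - np.ones((N, N))
B = np.diag([alpha] + [1.0] * (N - 1))

eig = np.sort(np.linalg.eigvals(B @ L).real)
# Expected: one 0, one 1 + alpha*(N - 1), and N - 2 eigenvalues equal to N.
expected = np.sort([0.0, 1 + alpha * (N - 1)] + [float(N)] * (N - 2))
assert np.allclose(eig, expected)
```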

2.4 Example Applications to Orbit Design

In this section it is demonstrated by a couple of examples how the results from section 2.3.2 can be used to design orbits when Γ = 0, which means that all eigenvalues lie on the imaginary axis and the system neither diverges nor converges. These kinds of orbits could be used in applications where a certain area needs to be covered, for instance in surveillance of an area with multiple agents.

2.4.1 Centering the Trajectories Around Zero

First of all the trajectories should be centered around zero which means that the two modes with eigenvalue zero should not be present in the solution. To achieve this for any initial condition x0 a translation in the first and the second dimension can be calculated. Let these translations be ∆x1 and ∆x2.


Let $\bar x_0$ be the translated initial condition
$$\bar x_0 = x_0 + [\Delta x_1\ \Delta x_2\ \Delta x_1\ \Delta x_2\ \dots\ \Delta x_1\ \Delta x_2]^T = x_0 + \Delta x \quad (2.65)$$
such that $a_1 = a_2 = 0$.

Rearranging (2.9) for $\bar x_0$ and combining with (2.65) gives
$$a = V^{-1}(x_0 + \Delta x) \quad (2.66)$$
where the first two entries, corresponding to $a_1$ and $a_2$, have to be zero. This gives a linear system for $\Delta x_1$ and $\Delta x_2$. Let $v_{inv,i}$ be the ith row vector of $V^{-1}$. With that,
$$b_i = v_{inv,i}x_0, \quad i = 1, 2 \quad (2.67)$$
and
$$-v_{inv,i}\Delta x = c_{1i}\Delta x_1 + c_{2i}\Delta x_2, \quad i = 1, 2 \quad (2.68)$$
which leads to the linear system
$$\begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix}\begin{pmatrix} \Delta x_1 \\ \Delta x_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}. \quad (2.69)$$

That this system of equations has a solution is intuitive, as the modes with eigenvalue zero correspond to a constant term in the solution. It is easy to verify that any vector
$$\bar v = [\alpha_1\ \alpha_2\ \alpha_1\ \alpha_2\ \dots\ \alpha_1\ \alpha_2]^T \quad (2.70)$$
with $\alpha_1, \alpha_2 \in \mathbb{R}$ is an eigenvector of $A_x$ with eigenvalue zero, since $\mathbf{1}$ is an eigenvector of L with eigenvalue 0, as writing out $A_x\bar v$ shows. So by translating the initial conditions this constant term can be eliminated.
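The centering procedure can be sketched in a few lines. The ring graph, B = I, Γ = 0, and S taken as the rotation matrix [[0, −1], [1, 0]] are assumptions made here for a concrete example (this excerpt does not fix S); the sign convention of (2.68) is absorbed into the solve:

```python
import numpy as np

# System matrix for Gamma = 0, B = I: Ax = L (x) S, with S an assumed
# 90-degree rotation. Its kernel is spanned by the two translation modes
# [dx1 dx2 dx1 dx2 ...]^T.
N = 5
I = np.eye(N)
L = 2 * I - np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)   # ring graph
S = np.array([[0.0, -1.0], [1.0, 0.0]])
Ax = np.kron(L, S)

lam, V = np.linalg.eig(Ax)
Vinv = np.linalg.inv(V)
zero = np.where(np.abs(lam) < 1e-7)[0]     # indices of the two zero modes
assert len(zero) == 2

x0 = np.random.default_rng(0).normal(size=2 * N)
shift = np.kron(np.ones((N, 1)), np.eye(2))  # columns: shift all agents in x1, x2
C = Vinv[zero] @ shift                       # coefficients c_ij, cf. (2.68)
b = Vinv[zero] @ x0                          # b_i as in (2.67)
dx = np.linalg.solve(C, -b)                  # 2x2 system (2.69), sign absorbed

a = Vinv @ (x0 + shift @ dx)                 # modal coefficients of shifted x0
assert np.allclose(a[zero], 0, atol=1e-8)    # zero modes eliminated
```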

2.4.2 Complete Graphs With One Different Weight

In this example $\beta_i = 1$ for $i = 2 \dots N$. G is complete, which means that between every two nodes there is an edge.

According to section 2.2.2 the agents' trajectories are superpositions of circles whose radii are determined by the eigenvectors of BL; the angular frequencies are determined by the corresponding eigenvalues.

For the first example, N is chosen to be N = 6.


Figure 2.4: Trajectories for the complete graph are shown here. The different colors belong to different agents. The dotted lines between the agents marked as crosses at the agents initial positions indicate the edges of G.

All other modes only influence agents 2 … 6, and as they all have the same eigenvalue, i.e. 6, their relative position doesn't change and they rotate around the moving center determined by the third mode. This moving center lies in this case exactly at the average of the positions of agents 2 … 6, as the sum of the elements of these eigenvectors is zero (see section 2.3.2). The center rotates on a circle with radius $r_2$. The radius of the circle on which the first agent moves is $r_1$. According to section 2.3.2, the ratio between $r_1$ and $r_2$ is $\frac{r_1}{r_2} = -5\beta_1$. If for instance $\frac{r_1}{r_2} = 2$ then $\beta_1 = -\frac{2}{5}$. The resulting trajectories are shown in figure 2.4 and the evolution over time in figure 2.5.


Figure 2.6: Trajectories for the complete graph with different radius of the big circle. The different colors belong to different agents. The dotted lines between the agents marked as crosses at the agents initial positions indicate the edges of G.

initial state again. Therefore there are 7 crests in this case.

Now it’s for instance possible to have agents 2 . . . 6 covering the same area but choosing radius 1.5 for the red circle instead. This means r1

r2 = 3 = −5β1 resulting in β1 = − 3 5 and gives λ3 = −2 and so the angle between two crests is 2π4 resulting in 4 crests. In figure 2.6 one can find the resulting trajectories.

It is also possible to pick 7 agents and again have one agent rotating on an outer circle while three pairs of two agents cover rings inside this circle. To achieve this, the first agent is placed at (0, 1). The other agents are lined up with equal spacing on the second axis, top down, such that the distance between them is $\frac{2}{6}$; the second agent also starts at (0, 1) while the fifth agent starts at (0, 0). If now $r_2 = \frac{1}{6}$, agents 2 and 7 will stay at a distance between $\frac{2}{3}$ and 1 from the center. $r_1 = 1$ gives $\frac{r_1}{r_2} = 6 = -6\beta_1 \Leftrightarrow \beta_1 = -1$. The resulting trajectories are shown in figure 2.7.

2.4.3 Circulant Graphs with Alternating Weights


rotation velocity is according to the eigenvalue associated with the mode.

Let $v_{re,i}$ be the eigenvector of BL coming from the real part of $v_i$ and $v_{im,i}$ the one coming from the imaginary part. Recall the structure of $v_i$. As the elements of $v_i$ are exponential functions with imaginary argument, multiplied with two different scalars, plotting $v_{re,i}$ over $v_{im,i}$ puts the points on circles with radii $r_1$ and $r_2$. The angle between two consecutive points is hereby constant for a given i.

From section 2.2.2 it is known that if the agents are placed on the first axis with positions according to an eigenvector v of BL, and if Γ = 0, the agents will move on circles with radii according to the eigenvector. In this case the initial condition is composed as follows:
$$x_0 = \frac{1}{2}\begin{pmatrix} v_1 \\ +jv_1 \\ v_2 \\ +jv_2 \\ \vdots \\ v_N \\ +jv_N \end{pmatrix} + \frac{1}{2}\begin{pmatrix} v_1 \\ -jv_1 \\ v_2 \\ -jv_2 \\ \vdots \\ v_N \\ -jv_N \end{pmatrix} = \begin{pmatrix} v_1 \\ 0 \\ v_2 \\ 0 \\ \vdots \\ v_N \\ 0 \end{pmatrix}. \quad (2.71)$$

Now if the mode corresponding to $v_{im,i}$ is additionally triggered, it will rotate with the same angular velocity as the mode associated with $v_{re,i}$; hence the relative positions of the agents will not change. It is possible to trigger the mode coming from $v_{re,i}$ by choosing the agents' positions in the first dimension according to $v_{re,i}$, and additionally the mode coming from $v_{im,i}$ by choosing the agents' positions in the second dimension according to $v_{im,i}$.

Let $v_{re,i,j}$ be the jth element of $v_{re,i}$ and $v_{im,i,j}$ the jth element of $v_{im,i}$. Then the initial conditions for the described case are composed as:
$$x_0 = \frac{1}{2}\alpha_{re}\begin{pmatrix} v_{re,i,1} \\ +jv_{re,i,1} \\ \vdots \\ v_{re,i,N} \\ +jv_{re,i,N} \end{pmatrix} + \frac{1}{2}\alpha_{re}\begin{pmatrix} v_{re,i,1} \\ -jv_{re,i,1} \\ \vdots \\ v_{re,i,N} \\ -jv_{re,i,N} \end{pmatrix} - \frac{1}{2}j\alpha_{im}\begin{pmatrix} v_{im,i,1} \\ +jv_{im,i,1} \\ \vdots \\ v_{im,i,N} \\ +jv_{im,i,N} \end{pmatrix} + \frac{1}{2}j\alpha_{im}\begin{pmatrix} v_{im,i,1} \\ -jv_{im,i,1} \\ \vdots \\ v_{im,i,N} \\ -jv_{im,i,N} \end{pmatrix} = \begin{pmatrix} \alpha_{re}v_{re,i,1} \\ \alpha_{im}v_{im,i,1} \\ \vdots \\ \alpha_{re}v_{re,i,N} \\ \alpha_{im}v_{im,i,N} \end{pmatrix}. \quad (2.72)$$

With the parameters αre, αim∈ R the axes can be arbitrarily scaled which turns the placement on circles into ellipses. The agents will however still move in circles.


2.5 Discrete Implementation

As most modern control systems are implemented on discrete computers, it is worth looking into the discretization of the suggested control strategy. This shows for example that the results presented in section 2.4 are not as straightforward to implement as it might seem.

Here it is assumed that the input is periodically computed according to (2.4) and then held constant during the sampling period of length h. With (2.6) this leads to the discrete system
$$x(t+h) = x(t) + \int_0^h \dot x(\tau)\,d\tau \ \Leftrightarrow\ x(t+h) = x(t) + \int_0^h A_x x(t)\,d\tau \ \Leftrightarrow\ x(t+h) = (I + hA_x)x(t). \quad (2.73)$$

The two eigenvalues of $A_x$ at zero, corresponding to the translation of the whole formation, are obviously mapped to eigenvalues of $I + hA_x$ at 1, independent of h. The other eigenvalues $\lambda$ of $I + hA_x$ have to satisfy $|\lambda| < 1$ for (2.73) to asymptotically reach consensus, or $|\lambda| \leq 1$ for (2.73) to be stable, meaning that $|z|^2$ does not increase.

As with any zero-order-hold discretization, there is a direct mapping from each eigenvalue $\lambda_c$ of the continuous system matrix to an eigenvalue $\lambda_d$ of the discrete one. In [48] it is shown that for each pair $\lambda_c$ and $\lambda_d$
$$|\lambda_d| < 1 \ \Leftrightarrow\ h < \frac{-2\,\mathrm{Re}(\lambda_c)}{|\lambda_c|^2} \quad (2.74)$$
and
$$|\lambda_d| = 1 \ \Leftrightarrow\ h = \frac{-2\,\mathrm{Re}(\lambda_c)}{|\lambda_c|^2}. \quad (2.75)$$

Hence, for the system to reach consensus, h is upper bounded by the $\lambda_c$ which gives the smallest right-hand side of (2.74), excluding the eigenvalues at $\lambda_c = 0$ that correspond to translation of the system.

The first direct consequence of this result is that if all eigenvalues of $A_x$ are imaginary, which is the case if Γ = 0 as elaborated in section 2.2.2, then stability would require an infinitely small h, which is of course not feasible.

It is also interesting to look a bit deeper into the case of Γ and B proportional, as presented in section 2.2.2. Then
$$\lambda_c = (-\alpha \pm j)\lambda \quad (2.76)$$
with $\Gamma = \alpha B$ and $\lambda$ a non-zero eigenvalue of BL. Inserting (2.76) into (2.74) and assuming $\lambda \in \mathbb{R}$ gives
$$h < \frac{2}{\lambda}\,\frac{\alpha}{1 + \alpha^2}. \quad (2.77)$$
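For a single mode the bound (2.77) can be verified directly (a sketch with arbitrary example values α = 0.5 and λ = 2):

```python
import numpy as np

# For Gamma = alpha*B a continuous eigenvalue lambda_c = (-alpha + 1j)*lam
# maps to the discrete eigenvalue 1 + h*lambda_c under (2.73).
alpha, lam = 0.5, 2.0
lc = (-alpha + 1j) * lam

h_max = -2 * lc.real / abs(lc) ** 2                            # bound (2.74)
assert np.isclose(h_max, (2 / lam) * alpha / (1 + alpha**2))   # equals (2.77)

assert abs(1 + 0.99 * h_max * lc) < 1      # below the bound: |lambda_d| < 1
assert np.isclose(abs(1 + h_max * lc), 1)  # at the bound: |lambda_d| = 1
assert abs(1 + 1.01 * h_max * lc) > 1      # above the bound: unstable mode
```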


Figure 2.9: Effect of α on the sampling time if Γ = αB

on the unit circle by choosing α appropriately and thus keeping the property of sustained oscillations with different modes. Only the modes which give the minimum h according to (2.75) can be sustained; in most cases this will be only one.

Now to the effect of α on h. Figure 2.9 shows a plot of the function $\frac{\alpha}{1+\alpha^2}$. At zero the function is zero, which matches the previous result for the case of Γ = 0. Between 0 and 1 the function rises up to 0.5, which translates into smaller sampling frequencies, and then it falls asymptotically to zero, which means a higher sampling frequency is needed due to a faster system.

2.6 Switching Topologies

In many cases consensus algorithms work under switching topologies as well, i.e. when the set of edges in G changes over time, as long as the graph stays connected (e.g. [34], [18], [36]). This is an important feature in applications where the communication links are unreliable or have only a certain range, such as wireless communication, or where the neighbors are only observed by sensors.


ones |z| increases at every switching instant. In general as the system evolves not all possible edges keep the same length. Hence the next time the switching takes place |z| can be increased again.

That this can lead to instability is motivated here by a simulated counterexample. The parameters of the system are: N = 7, γ = 0.001 · **1**, β = [−1 1 −1 1 −1 1 −1]^T, x0 = [1 1 2 2 3 3 4 4 5 5 6 6 7 7]^T.

The communication graph G is a path graph. At regular intervals of 0.1 the following action is performed if it leads to an increased |z|: the shortest edge in the graph is removed, which gives two path graphs with two end-nodes (nodes with only one neighbor) each. Then one end-node of either graph is connected by a new edge such that the edge has maximal length.

In figure 2.10 one can find a plot which shows |z| over the simulation time. It clearly increases.

As the maximal increase in |z| is limited by the size of the convex hull of x, and the convex hull is limited in size by |z|, stability can still be expected for sufficiently high gains $\gamma_i$ and if switching occurs sufficiently seldom. Then convergence simply outruns the possible increase in |z| due to switching. It might be interesting for future work to come up with more concrete results on which kinds of switching sequences lead to stability.

An interesting effect occurs if the above switching strategy is applied to systems which have a mode such that the two endpoints of the graph stay closer together than any edge in the graph. Then it can be observed in simulation that the system converges to that mode.

In figure 2.11 one can see several snapshots of trajectories after t = 80 with an interval between two snapshots of 0.2. The system creating these trajectories is the same as in the example above except for the weight–vector β which was changed to β = 1. The initial conditions are the same as well.

One can clearly see that the end-nodes of the path are at all times closer together than any other edge in the graph. The motion pattern is close to a fixed configuration which rotates, i.e. a mode of the system.

In figure 2.12 one can see |z| over time. It increases a lot during the first 10 time units, just as in figure 2.10. Then it flattens out, and at t = 30 it decreases; there are only a few jumps upwards due to switching, and they are rather small.

Figure 2.10: |z| over the simulation time t.


Figure 2.12: |z| over the time t.


3 Double Integrator Dynamics

In this section the main idea of the novel control scheme presented in [44], [45] is applied to double integrator plants, another important class of dynamics used to describe a number of systems.

In section 3.1 the feedback law is simply transferred from feedback of the position to feedback of the velocity; the position is then simply the integral of the velocity. In section 3.2 both position and velocity feedback are applied.

So consider, instead of single integrator dynamics for every agent as in section 2.1, double integrator dynamics, i.e. for i = 1 … N
$$\dot x_{x,i} = x_{v,i}, \qquad \dot x_{v,i} = u_i \quad (3.1)$$
with $x_{x,i}, x_{v,i}, u_i \in \mathbb{R}^2$. $x_{x,i}$ is the position of the ith agent and $x_{v,i}$ its velocity.

Let us define the stack vectors $x_x = [x_{x,1}^T\ x_{x,2}^T \dots x_{x,N}^T]^T$, $x_v = [x_{v,1}^T\ x_{v,2}^T \dots x_{v,N}^T]^T$, $u = [u_1^T\ u_2^T \dots u_N^T]^T$ and $x = [x_x^T\ x_v^T]^T$. Then, similar to section 2.1, define
$$z = (D^T \otimes I_2)x_x \quad (3.2)$$
$$\dot z = (D^T \otimes I_2)\dot x_x = (D^T \otimes I_2)x_v. \quad (3.3)$$
Now consider a feedback law similar to the one proposed in section 2.2:

u = −(ΓxD ⊗ I2)z + (BxD ⊗ S)z − (ΓvD ⊗ I2) ˙z + (BvD ⊗ S) ˙z (3.4) with Γx = diag(γx), Γv = diag(γv), Bx = diag(βx), Bv = diag(βv) where Γx, Γv, Bx, Bv ∈ RN ×N and γx, γv, βx, βv ∈ RN.

Furthermore we have the abbreviations for the position and velocity feedback matrices:
$$A_{f,x} = -(\Gamma_x L \otimes I_2) + (B_x L \otimes S)$$
$$A_{f,v} = -(\Gamma_v L \otimes I_2) + (B_v L \otimes S). \quad (3.5)$$


3.1 Velocity Feedback

If $A_{f,x} = 0$ then the dynamics of $x_v$ (the velocity) are exactly the same as the dynamics of the position in the single integrator case. Hence all results for the single integrator case can be applied. $x_x$ is then simply the integral of $x_v$. There is no feedback from the position, so the initial position of an agent will only translate its trajectory but not qualitatively change it.

This implies also that if all elements of γv are positive the agents converge to the same speed.

Example. Consider G a circulant graph and $\Gamma_v = 0$, $B_v = I$. This is the setup discussed in section 2.3.2. Distributing the initial values of the velocities evenly on a ring will trigger one mode with eigenvalue $\lambda = k - 2\sum_{p\in M}\cos\!\left(\frac{2\pi}{N}p\right)$. That means the initial conditions are
$$x_{v,i,0} = r\begin{pmatrix} \sin(\frac{2\pi}{N}i) \\ \cos(\frac{2\pi}{N}i) \end{pmatrix} \quad (3.7)$$
where r is the radius of the circle.

Then each agent's velocity will rotate on a circle with radius r with angular velocity $\lambda$, i.e.
$$x_{v,i}(t) = r\begin{pmatrix} \sin(\lambda t + \frac{2\pi}{N}i) \\ \cos(\lambda t + \frac{2\pi}{N}i) \end{pmatrix}. \quad (3.8)$$

And for the positions:
$$\dot x_{x,i} = x_{v,i} \ \Leftrightarrow\ x_{x,i} = \int_0^t x_{v,i}\,d\tau + x_{x,i,0} = \frac{r}{\lambda}\begin{pmatrix} -\cos(\lambda t + \frac{2\pi}{N}i) + \cos(\frac{2\pi}{N}i) \\ \sin(\lambda t + \frac{2\pi}{N}i) - \sin(\frac{2\pi}{N}i) \end{pmatrix} + x_{x,i,0}. \quad (3.9)$$
So the agents move on circles with radius $\frac{r}{\lambda}$ and center $\frac{r}{\lambda}[\cos(\frac{2\pi}{N}i),\ -\sin(\frac{2\pi}{N}i)]^T + x_{x,i,0}$.

In figure 3.1 one can see the trajectories for N = 9 agents with M = {1}, i.e. a ring graph. The initial positions of the agents were chosen such that the centers of the circles lie on a circle. The initial positions are indicated in the figure as crosses and the links between the agents as dashed lines.

The eigenvalue of the mode is $\lambda = 2 - 2\cos(\frac{2\pi}{9}) = 0.4679$; r was set to 1, and hence the radius of the circles on which the agents move is $\frac{1}{\lambda} = 2.1372$.
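This example can be reproduced numerically. The sketch below assumes S = [[0, −1], [1, 0]] (the rotation convention is not fixed in this excerpt) and propagates $\dot x_v = (L \otimes S)x_v$ exactly through the eigendecomposition:

```python
import numpy as np

# Ring graph on N = 9 nodes, velocity feedback only (Gamma_v = 0, B_v = I).
N, r = 9, 1.0
I = np.eye(N)
L = 2 * I - np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)
S = np.array([[0.0, -1.0], [1.0, 0.0]])       # assumed rotation convention
A = np.kron(L, S)                             # xv' = (L (x) S) xv
lam = 2 - 2 * np.cos(2 * np.pi / N)           # ~0.4679, eigenvalue of the mode

phi = 2 * np.pi * np.arange(N) / N
xv0 = (r * np.column_stack([np.sin(phi), np.cos(phi)])).ravel()   # (3.7)

w, V = np.linalg.eig(A)
c = np.linalg.solve(V, xv0)
xv = lambda t: (V @ (np.exp(w * t) * c)).real  # exact solution exp(At) xv0

for t in (0.7, 3.0):
    speeds = xv(t).reshape(N, 2)
    # every speed vector keeps magnitude r ...
    assert np.allclose(np.linalg.norm(speeds, axis=1), r)
    # ... and has rotated by the angle lam*t, cf. (3.8)
    assert np.isclose(speeds[0] @ xv0[:2] / r**2, np.cos(lam * t))
```

Integrating these velocities reproduces the position circles of radius $r/\lambda \approx 2.14$ from (3.9).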

3.2 Proportional Damping

Figure 3.1: Trajectories of the N = 9 agents with M = {1}.

Let $\tilde x = [\tilde x_x^T\ \tilde x_v^T]^T$ be an eigenvector of the system matrix in (3.6) with eigenvalue $\tilde\lambda$. This leads to the requirements:
$$\tilde x_v = \tilde\lambda \tilde x_x \quad (3.10)$$
$$A_{f,x}\tilde x_x + \alpha A_{f,x}\tilde x_v = \tilde\lambda \tilde x_v. \quad (3.11)$$
Combining (3.10) and (3.11) leads to the equation
$$A_{f,x}\tilde x_x + \alpha A_{f,x}\tilde\lambda\tilde x_x = \tilde\lambda^2\tilde x_x \ \Leftrightarrow\ (1 + \alpha\tilde\lambda)A_{f,x}\tilde x_x = \tilde\lambda^2\tilde x_x. \quad (3.12)$$

Assume now that $A_{f,x}\tilde x_x = \lambda\tilde x_x$, i.e. that $\tilde x_x$ is an eigenvector of the matrix $A_{f,x}$ with eigenvalue $\lambda$. Properties of eigenvectors and eigenvalues of this kind of matrix are extensively discussed in section 2.3. Then with (3.12)
$$\lambda(1 + \alpha\tilde\lambda)\tilde x_x = \tilde\lambda^2\tilde x_x \ \Leftrightarrow\ \lambda(1 + \alpha\tilde\lambda) = \tilde\lambda^2 \ \Leftrightarrow\ \tilde\lambda^2 - \alpha\lambda\tilde\lambda - \lambda = 0 \quad (3.13)$$
which is a quadratic equation for $\tilde\lambda$ with solutions
$$\tilde\lambda = \frac{\alpha\lambda \pm \sqrt{\alpha^2\lambda^2 + 4\lambda}}{2}. \quad (3.14)$$

As λ ∈ C the connection between the parameter α and ˜λ is not trivial.

For every $\lambda$ there are two different solutions. For stability of the system $\mathrm{Re}(\tilde\lambda) < 0$ is needed. Therefore it is of interest to study the effect of α on the $\tilde\lambda$ with the larger real part. This real part will be referred to as
$$r_{max} = \max\left(\mathrm{Re}\left(\frac{\alpha\lambda + \sqrt{\alpha^2\lambda^2 + 4\lambda}}{2}\right),\ \mathrm{Re}\left(\frac{\alpha\lambda - \sqrt{\alpha^2\lambda^2 + 4\lambda}}{2}\right)\right). \quad (3.15)$$

3.2.1 Sufficient Condition for α

In [40] the following condition on α, based on equation (3.14), is presented to ensure $\mathrm{Re}(\tilde\lambda) < 0$ if $\mathrm{Re}(\lambda) < 0$:

Proposition 15: If
$$\alpha > \sqrt{\frac{2}{|\lambda|\cos\left(\frac{\pi}{2} - \tan^{-1}\left(\frac{-\mathrm{Re}(\lambda)}{\mathrm{Im}(\lambda)}\right)\right)}} = \alpha_0 \quad (3.16)$$
and $\mathrm{Re}(\lambda) < 0$, then $\mathrm{Re}(\tilde\lambda) < 0$.


Proof. In figure 3.2 the important elements of (3.14) are drawn in the complex plane. The elements in black correspond to $\alpha = \alpha_0$, the elements in gray to some $\alpha > \alpha_0$.

For the proof only $\lambda$ in the second quadrant are considered. This suffices because the complex conjugate $\lambda^*$ of $\lambda$ gives the complex conjugate solution $\tilde\lambda^*$.

Assume that $\tilde\lambda$ is a solution of (3.13), i.e.
$$\tilde\lambda^2 - \alpha\lambda\tilde\lambda - \lambda = 0. \quad (3.17)$$
Taking the complex conjugate of both sides of the above equation yields
$$\left(\tilde\lambda^2 - \alpha\lambda\tilde\lambda - \lambda\right)^* = 0 \ \Leftrightarrow\ (\tilde\lambda^*)^2 - \alpha\lambda^*\tilde\lambda^* - \lambda^* = 0. \quad (3.18)$$
Hence $\tilde\lambda^*$ solves (3.13) for $\lambda^*$.

Now back to figure 3.2. The angle between the imaginary axis and $\lambda$ is $\gamma = \tan^{-1}\!\left(\frac{-\mathrm{Re}(\lambda)}{\mathrm{Im}(\lambda)}\right)$. The argument of $\lambda$ is then $\gamma + \frac{\pi}{2}$, and hence the argument of $\lambda^2$ is $2(\gamma + \frac{\pi}{2}) = 2\gamma + \pi$. As $\alpha \in \mathbb{R}$, it doesn't change the argument when multiplied with a complex number but simply scales the norm, i.e. the length of the vector in the complex plane. So the angle between the negative real axis and $(\alpha\lambda)^2$ is $2\gamma$, as indicated in figure 3.2.

The angle between the imaginary axis and $4\lambda$ is obviously also $\gamma$, and therefore the angle to the right (A1 → A2 → A0) with the real axis is $\frac{\pi}{2} - \gamma$. As the angles in a triangle sum up to $\pi$, the angle between $(\alpha\lambda)^2$ and $4\lambda$ (the angle at A1 in the triangle A0, A1, A2) is $\pi - 2\gamma - (\frac{\pi}{2} - \gamma) = \frac{\pi}{2} - \gamma$.

Therefore the distance A0A2 is the same as the distance A0A1. Hence $(\alpha\lambda)^2 + 4\lambda$ is always shorter than $(\alpha\lambda)^2$, as long as $(\alpha\lambda)^2 + 4\lambda$ is in the third quadrant. As $0 < \gamma < \frac{\pi}{2}$ by assumption, the phase angle of $(\alpha\lambda)^2 + 4\lambda$ is always smaller than that of $(\alpha\lambda)^2$. Therefore the length and phase angle of $\sqrt{(\alpha\lambda)^2 + 4\lambda}$ are smaller than those of $\alpha\lambda$. Because $(\alpha\lambda)^2 + 4\lambda$ is in the third quadrant, $\sqrt{(\alpha\lambda)^2 + 4\lambda}$ is in the second quadrant. This means that $|\mathrm{Re}(\sqrt{(\alpha\lambda)^2 + 4\lambda})| < |\mathrm{Re}(\alpha\lambda)|$ and hence both solutions of (3.14) have negative real part.

So what is the α for which $\sqrt{(\alpha\lambda)^2 + 4\lambda}$ lies exactly on the imaginary axis, hence for smaller α in the second quadrant and for larger α in the third? This case is illustrated in black in figure 3.2.

Then $|(\alpha\lambda)^2 + 4\lambda| = |\alpha\lambda|^2$. Applying the law of cosines in the triangle A0, A1, A2 at the corner A1 gives
$$|\alpha\lambda|^4 = |\alpha\lambda|^4 + |4\lambda|^2 - 2|\alpha\lambda|^2|4\lambda|\cos\!\left(\frac{\pi}{2} - \gamma\right) \ \Leftrightarrow\ 2|\lambda|^2 = \alpha^2|\lambda|^3\cos\!\left(\frac{\pi}{2} - \gamma\right) \ \Leftrightarrow\ 2 = \alpha^2|\lambda|\cos\!\left(\frac{\pi}{2} - \gamma\right) \ \Leftrightarrow\ \alpha = \sqrt{\frac{2}{|\lambda|\cos\left(\frac{\pi}{2} - \gamma\right)}} = \alpha_0 \quad (3.19)$$
with $\alpha_0$ as in (3.16).


Figure 3.2: Geometric proof of the sufficient condition on α

Proposition 16: The minimal damping coefficient $\alpha_0$ can as well be calculated as
$$\alpha_0 = \sqrt{\frac{2}{-\mathrm{Re}(\lambda)}}. \quad (3.20)$$

Proof. The real part of $\lambda$ is
$$\mathrm{Re}(\lambda) = -|\lambda|\sin(\gamma) = -|\lambda|\cos\!\left(\frac{\pi}{2} - \gamma\right). \quad (3.21)$$
Recall that $\gamma = \tan^{-1}\!\left(\frac{-\mathrm{Re}(\lambda)}{\mathrm{Im}(\lambda)}\right)$. Inserting this and (3.21) into (3.16) gives (3.20). This condition can as well be derived by requiring $\mathrm{Im}\!\left((\alpha\lambda)^2 + 4\lambda\right) = 0$.
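Both propositions can be checked numerically (a sketch with an arbitrary test eigenvalue λ = −1 + 3j in the second quadrant):

```python
import numpy as np

lam = -1.0 + 3.0j                       # an eigenvalue of A_fx with Re < 0
alpha0 = np.sqrt(2 / -lam.real)         # minimal damping from (3.20)

# (3.20) agrees with the trigonometric form (3.16)
gamma = np.arctan(-lam.real / lam.imag)
assert np.isclose(alpha0,
                  np.sqrt(2 / (abs(lam) * np.cos(np.pi / 2 - gamma))))

def exponents(alpha):
    # the two solutions (3.14) of lt^2 - alpha*lam*lt - lam = 0, cf. (3.13)
    return np.roots([1.0, -alpha * lam, -lam])

assert max(e.real for e in exponents(0.05)) > 0        # underdamped: unstable
assert max(e.real for e in exponents(alpha0)) < 0      # alpha >= alpha0: stable
assert max(e.real for e in exponents(2 * alpha0)) < 0
```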

3.2.2 Further Discussion on the Effect of α

If α is further decreased, the real part of $\alpha\lambda$ decreases linearly with α while the real part of $\sqrt{(\alpha\lambda)^2 + 4\lambda}$ increases again as the phase angle gets smaller than π
