Event-based Control for Multi-Agent Systems

November 25, 2010

GEORG SEBASTIAN SEYBOTH

Master’s Degree Project Stockholm, Sweden 2010

XR-EE-RT 2010:023


May to November 2010

Event-based Control for Multi-Agent Systems

— Diploma Thesis —

Georg Sebastian Seyboth

Automatic Control Laboratory

School of Electrical Engineering, KTH Royal Institute of Technology, Sweden

&

Institute for Systems Theory and Automatic Control, Universität Stuttgart, Germany

Supervisor

Dr. D. V. Dimarogonas, KTH Stockholm

Examiner

Prof. Dr. K. H. Johansson, KTH Stockholm

Examiner

Prof. Dr.-Ing. F. Allgöwer, Universität Stuttgart

Stockholm, November 25, 2010

Contact: g.seyboth@gmx.de


Abstract

In this thesis, a novel approach to the average consensus problem for multi-agent systems is followed. A new event-based control strategy is proposed, which incorporates event-based scheduling of state measurement broadcasts over the network. The control-laws are based on the resulting piecewise constant functions of these measurement values. This facilitates implementation on digital platforms such as microprocessors and reduces the number of inter-agent communications over the network. Starting from a basic problem setup with single-integrator agents, fixed undirected connected communication topologies, and no time-delays, the novel strategy is developed. Different triggering conditions are discussed, guaranteeing convergence to an adjustable region around the average consensus point or asymptotic convergence to this point, respectively. Numerical simulations show the effectiveness of this approach, outperforming classical time-scheduled implementations of the consensus protocol in terms of load on the communication medium. Furthermore, the problem class is extended to networks with directed communication links, switching topologies, and time-delays in the communication, as well as to agents with double-integrator dynamics. As an illustrative example, the novel strategy is applied to a formation control problem of non-holonomic mobile robots in the plane.


Acknowledgements

I would like to thank everybody who made it possible for me to work on my thesis project at the Automatic Control Department of the Royal Institute of Technology (KTH), and who made my stay in Stockholm such a great pleasure.

My first thanks go to Prof. Dr. Karl Henrik Johansson for accepting me as an exchange student and supervising my thesis. He gave me a warm welcome and created a friendly, inspiring, and productive atmosphere for my thesis work. I would like to thank Dr. Dimos Dimarogonas for supervising my thesis, and Dr. Guodong Shi as well as the whole Networked Control Systems group for very interesting and fruitful discussions. I very much enjoyed my visit at the Automatic Control Department.

Second, I want to thank Prof. Dr.-Ing. Frank Allgöwer for giving me the opportunity to write my diploma thesis abroad and for introducing me to Prof. Dr. Karl Henrik Johansson. I also thank my supervisors Dr. Peter Wieland and Rainer Blind of the Institute for Systems Theory and Automatic Control at the University of Stuttgart.

Finally, I want to thank my family and friends for supporting me throughout my visit in Stockholm.

Georg Seyboth
Stockholm, November 2010


Contents

Notation

List of Figures

1 Introduction
1.1 Cooperative Control
1.2 Time-scheduled vs. Event-scheduled Control
1.3 Event-based Cooperative Control
1.4 Goals of this Project
1.5 Outline

2 Background and Problem Formulation
2.1 Algebraic Graph Theory
2.2 System Model
2.3 Consensus Protocol and Time-scheduled Implementation
2.4 Prior Event-based Implementation
2.5 Problem Statement

3 Novel Event-based Control Strategy
3.1 Adaptive Trigger Functions
3.2 Static Trigger Functions
3.3 Time-dependent Trigger Functions
3.3.1 Exponentially Decreasing Threshold
3.3.2 Exponentially Decreasing Threshold with Offset
3.3.3 General Time-dependent Threshold
3.3.4 Knowledge of the Algebraic Connectivity
3.4 Comparison to Time-scheduled Control

4 Extensions regarding Network Model
4.1 Directed Communication Graphs
4.2 Switching Communication Topologies
4.2.1 Always Connected Topologies
4.2.2 Uniformly Connected Topologies
4.3 Time-delays in the Communication
4.3.1 Comparison to Time-scheduled Control

5 Extensions regarding Agent Dynamics
5.1 Double-Integrator Agents
5.1.1 Comparison to Time-scheduled Control
5.2 Formation Control for Mobile Robots
5.2.1 Model of the Non-holonomic Mobile Robot
5.2.2 Problem Setup
5.2.3 Feedback Linearization and Event-based Control
5.2.4 Simulation Results

6 Conclusions and Future Work
6.1 Conclusions
6.2 Future Work

References


Notation

0        The scalar zero or square matrix of zeros
α        A parameter in the trigger functions
∆        The time-delay
δ(t)     The disagreement vector
δξ(t)    The disagreement corresponding to ξ(t)
δζ(t)    The disagreement corresponding to ζ(t)
Γ        The system matrix of a double-integrator multi-agent system
ξ̂i(t)    The latest broadcasted position of double-integrator agent i
ζ̂i(t)    The latest broadcasted velocity of double-integrator agent i
x̂i(t)    The latest broadcasted state of agent i
λ2(G)    The algebraic connectivity of graph G
λ3(Γ)    The non-zero eigenvalue of Γ with largest real part
λj(G)    The j-th smallest eigenvalue of L
‖·‖      The Euclidean norm for vectors or the induced norm for matrices, respectively
R        The real numbers
R+       The positive real numbers
R+0      The positive real numbers including zero
0        The column vector of zeros
1        The column vector of ones
E        The edge set
V        The vertex set
µ        The gain in the double-integrator consensus protocol
σ(t)     The topology switching signal
τ        The lower bound on inter-event intervals
τD       The lower bound on the dwell times
τs       The constant sampling period for time-scheduled control
ξi(t)    The position of double-integrator agent i
ζi(t)    The velocity of double-integrator agent i
A        The adjacency matrix
a        The initial average of all agents' positions
b        The average of all agents' initial velocities
c0       A parameter in the trigger functions
c1       A parameter in the trigger functions
D        The degree matrix
di       The degree of vertex i
eξ(t)    The measurement error corresponding to ξ(t)
eζ(t)    The measurement error corresponding to ζ(t)
ei(t)    The measurement error of agent i
fi(·)    The trigger function of agent i
G        The communication graph
hi(t)    The general time-dependent threshold of agent i
I        The identity matrix
L        The Laplacian matrix
N        The number of agents
Ni       The neighbor set of agent i
tik      The k-th event time of agent i
Tk       The topology switching times
ui(t)    The control input of agent i
xi(t)    The state of single-integrator agent i


List of Figures

1.1 The big picture
2.1 Example graph G
2.2 Setup for event-based communication
3.1 Algorithm running in agent i's microprocessor
3.2 Simulation results for trigger functions (3.7)
3.3 Simulation results for trigger functions (3.10)
3.4 Simulation results in detail
3.5 Solution of the implicit equation for τ
3.6 Simulation results for trigger functions (3.20)
3.7 Simulation results for trigger functions (3.22)
3.8 Simulation results for trigger functions (3.23)
3.9 Lower bound on λ2(G) in terms of N
3.10 Communication graphs
3.11 Comparison for graph G1
3.12 Comparison for graph G2
3.13 Comparison for graph G3
4.1 Directed graph G and corresponding Laplacian L
4.2 Simulation results for the directed graph
4.3 Inconsistent state values in case of switching topologies
4.4 Time axis for switching topologies
4.5 Periodically switching communication topology
4.6 Simulation results for periodically switching communication topology
4.7 Simulation results for trigger functions (3.22) and time-delay ∆ = 0.1
5.1 Zero-order hold vs. first-order hold for the position
5.2 Simulation results for double-integrator agents and trigger functions (5.8)
5.3 Comparison of time- and event-scheduled control for double-integrator agents
5.4 Mobile robot
5.5 Formation control for mobile robots: simulation results
5.6 Formation control for mobile robots: snapshot at t = 1.736 s
5.7 Formation control for mobile robots: snapshot at t = 18.926 s


1 Introduction

The focus of this thesis project is on the intersection of cooperative control and event-based control, two active research areas in modern control theory. This chapter provides an introduction to both fields and motivates the present work.

Figure 1.1: The big picture (the overlap of cooperative control and event-based control)

1.1 Cooperative Control

Recent years have witnessed a growing interest in coordination and cooperative control of groups of autonomous vehicles. Significant developments in the fields of communication technology, wireless technology, embedded devices, and many more enable the development of autonomous air, ground, or underwater vehicles. Groups of such vehicles, referred to as agents, can be utilized to solve a variety of problems very efficiently, for example exploration and monitoring tasks.

It is appealing to find distributed control mechanisms for such multi-agent systems, since in various applications there might be no centralized entity which has information about all group members and coordinates their actions. Moreover, in order to cope with agents failing or joining, obstacles in the operating environment, or other external influences, neighbor-based coordination strategies are favorable. Each agent is supposed to decide on its actions based only on its own and its direct neighbors' information, while guaranteeing that the whole group fulfills a given group objective.

The development and investigation of such distributed coordination strategies is a very active field of research in modern control theory. In order to make such complex problems theoretically tractable, it is necessary to abstract from reality. In particular, both the agent dynamics and the communication mechanisms have to be modeled. A common and widely used problem setup is the following: The agents are modeled by single-integrators. The communication networks are modeled by graphs, where the nodes correspond to agents and


edges between nodes correspond to communication links between agents. One example of a group objective for such multi-agent systems is state agreement or consensus, i.e., all agents are supposed to converge to a common point or state value. If this value is required to be the average of all agents' initial states, then this problem is referred to as the average consensus problem. Such consensus problems have a variety of applications in formation control, flocking, attitude synchronization in satellite swarms, distributed sensor networks, or congestion control in communication networks, just to name a few.

The assumptions on the communication network play a crucial role in the analysis of such consensus problems. The most restrictive assumptions on the network, which means the most rigorous abstraction from reality, are the following: the communication topology is fixed and communication links are undirected; the network is connected, which means there are no subgroups of agents without communication links between them; moreover, there are no time-delays due to communication and there are no packet losses or other external disturbances. This reduced system model captures the core of the multi-agent coordination problem. A solution to this consensus problem is presented in [1]. The proposed distributed consensus protocol is a control-law that takes only information of direct neighbors into account, but guarantees that the overall system converges to average consensus asymptotically.

These results are extended to a larger class of networks; directed communication topologies, time-varying topologies, and communication subject to time-delays are discussed.

The case of time-varying topologies is also addressed in [2] and [3]. The survey papers [4], [5], [6] provide a comprehensive overview of consensus protocols and convergence results under different assumptions on the network. In [5], a variety of applications for consensus problems is discussed, which illustrates the relevance of this problem. Recently, much effort has been put into relaxing assumptions on the network connectivity [7], [8], [9]. Furthermore, the consensus protocol has been extended to agents with double-integrator dynamics in [10]. This extension is of great importance, since agents like holonomic mobile robots require second-order dynamic models. The book [11] provides a comprehensive summary of consensus problems and their application to distributed multi-vehicle coordination. It includes illustrative examples with real mobile robots which coordinate their movement in the plane, demonstrating the applicability of such distributed coordination strategies to real autonomous vehicles.

1.2 Time-scheduled vs. Event-scheduled Control

In practice, controllers are generally implemented on digital platforms like microprocessors. Therefore, control-laws can only be updated at discrete instances of time. The classical approach is periodic scheduling of measurement samplings and control updates. Both measurements and control updates are determined by a constant sampling period, and between updates the signals are held constant with zero-order hold techniques. This strategy is referred to as time-scheduled control.

However, time-scheduled control might be conservative in terms of the number of control updates, since the constant sampling period has to guarantee stability in the worst-case scenario. It is desirable to reduce the number of control updates as well as the amount of communication over the network in order to decrease traffic and energy consumption. In order to overcome the conservatism of time-scheduled control, a novel scheduling technique has been proposed, where the sampling times are not periodic but determined by certain events that are triggered depending on the plant's behavior.

This approach is called event-based control in [12], [13], where it is shown that this approach is favorable in some cases. In this setup, events are triggered whenever the output value of the system deviates from the reference and crosses a predefined threshold. In data acquisition applications, this idea is known as the send-on-delta concept [14]. Event-based scheduling has been further developed for control applications in [15], where the author proposes a deterministic strategy which guarantees asymptotic stability of the closed-loop system. Events are triggered whenever a threshold depending on the plant's actual state is crossed. Note that closed-loop systems consisting of continuous-time plants and event-based controllers are hybrid systems [16], which makes the analysis challenging. In particular, such control strategies have to guarantee not only stability, but also exclude Zeno-behavior of the hybrid system, cf. [17], [18], [19]. In the present context, Zeno-behavior means that there is an accumulation point of events on the time axis. The concept of event-based control is further developed in [20], [21] for control applications over wireless networks. In wireless applications, it is desirable to minimize the number of data transmissions in order to reduce the load on the communication medium and decrease the energy consumption of sending nodes. In [22], [23], [24], the authors use event-based scheduling techniques in networked control systems. In their setup, events do not correspond to control updates but to measurement broadcasts over the network, which provides the possibility to effectively reduce the number of measurement transmissions over the network.

Moreover, there is a scheduling strategy named self-triggered control, cf. [25], [26], [27], which can be seen as a natural extension of event-triggered control. The idea of self-triggered control is the following: if an event is triggered, the control-law is updated and, at the same time, the next sampling time instance is computed based on the plant's actual state. Thus, it is not necessary to monitor the state of the system continuously, in contrast to the event-triggered case. The sensing devices can be put into power-saving mode between the triggering times, which reduces the energy consumption. However, the system runs in open-loop between the triggering times, which might be critical in terms of robustness.

1.3 Event-based Cooperative Control

In practice, agents like mobile robots are equipped with digital microprocessors which coordinate measurement acquisition, communication with other agents, and control actuation. Thus, it is necessary to implement consensus protocols on a digital platform. Important results on time-scheduled implementations of consensus protocols are presented in [28] for single-integrator agents and in [29], [30], [31] for double-integrator agents. In particular, necessary and sufficient conditions on the sampling period for asymptotic stability of the closed-loop system are given.

Event-based control strategies seem to be suitable for cooperative control tasks of multi-agent systems, since it can be expected that the number of control updates or measurement broadcasts decreases significantly. For this reason, an event-based implementation of the consensus protocol is developed in [32], [33], [34]. Following the ideas of [15], the authors present a decentralized event-based strategy to determine the control updates. The overall system reaches average consensus asymptotically, while Zeno-behavior is excluded. In [32], it is necessary that each agent knows the average of all agents' states beforehand, which makes the proposed strategy impractical. This strong requirement is relaxed in [33]. However, in [33] it is still necessary that each agent updates its control-law not only at its own event-times, but also whenever one of its neighbors triggers an event. In [34] it is shown that this requirement can be relaxed, but then the overall system does not necessarily converge to the average value of all initial states. The most serious limitation of the proposed control strategy is the fact that each agent has to monitor the states of its neighbors continuously in order to evaluate the triggering condition. Even though the control signals are rendered piecewise constant, continuous communication between agents is required. Thus, the main advantage of event-based control, namely reducing the number of samples, does not take effect.

Lately, a distributed self-triggered implementation of the multi-agent consensus protocol has been proposed in [35]. Self-triggered control overcomes the drawback that the neighbors' states have to be monitored continuously in order to evaluate the trigger condition. The proposed strategy guarantees asymptotic convergence to average consensus for the overall system, while Zeno-executions are excluded. However, this strategy still has some limitations. The computational effort at each triggering time instance is higher than in the event-based case, since the computations for the next triggering time are much more involved. These computations have to be done not only at the agent's own triggering time instances, but also whenever one of its neighbors triggers. In order to compute the next triggering time, each agent needs not only information of its direct neighbors, but also of its neighbors' neighbors.

1.4 Goals of this Project

The starting point of the present work is the latest results on event-based cooperative control discussed above [32], [33], [34]. The focus is on further development of such event-triggered strategies; the self-triggered approach is out of scope. The primary goal of this project is to develop a novel control strategy for multi-agent coordination, such that measurement broadcasts over the network are scheduled in an event-based fashion. A similar strategy is used for stabilization of networked control systems in [22], [23]. It can be expected that such strategies allow for an effective reduction of the load on the communication medium in multi-agent coordination problems, compared to time-scheduled control [28]. Furthermore, event-based controllers are suitable for digital implementation on embedded devices.

Starting from a basic problem setup with strict assumptions on both the communication network and the agent dynamics, the goal is to extend the novel event-based strategies to a large class of multi-agent systems by relaxing restrictive assumptions. Furthermore, the control strategy has to be evaluated in comparison with time-scheduled control in order to verify the benefits.

1.5 Outline

The remainder of this thesis is organized as follows. In Chapter 2, the main problem setup of this work is presented. After some mathematical preliminaries, the model of the multi-agent system is given. Then the consensus protocol, which solves the average consensus problem in continuous-time, is introduced and its time-scheduled implementation is discussed. The problem formulation for the event-based control strategy closes this chapter. Chapter 3 presents the novel event-based control strategy. Various triggering conditions are proposed, and all results are illustrated by numerical simulations. In the end, the event-based control strategy is compared to time-scheduled implementations of the consensus protocol. Chapter 4 presents


various extensions to the main problem setup regarding the communication network, and the corresponding results for the proposed event-based control strategy. In particular, directed communication graphs, switching network topologies, and time-delays in the communication links are discussed. In Chapter 5, the event-based control strategy is extended to networks of double-integrator agents. As an illustrative example, Section 5.2 presents the application of the proposed control strategy to formation control of a group of non-holonomic mobile robots. Chapter 6 summarizes the main results and shows directions of future research.


2 Background and Problem Formulation

This chapter introduces the main problem setup for this work, which is the average consensus problem of single-integrator agents with undirected, fixed, connected communication topology and no time-delays. Once an event-based strategy for this setup is found, cf. Chapter 3, these requirements can be relaxed step by step, cf. Chapters 4 and 5, thus extending the novel strategy to a large class of multi-agent systems.

An important tool for the analysis of multi-agent systems is algebraic graph theory; therefore, Section 2.1 gives a brief introduction and reviews some important results. Section 2.2 introduces the model of the multi-agent system. Section 2.3 presents the consensus protocol which solves the average consensus problem in continuous-time, as well as its time-scheduled implementation. Recent event-based implementations are discussed in Section 2.4. Section 2.5 concludes this chapter with a formal problem formulation for the novel event-based control strategy.

2.1 Algebraic Graph Theory

The communication topologies of multi-agent systems can be modeled by graphs, where nodes correspond to agents and edges between nodes to communication links between agents. Therefore, this section is dedicated to reviewing some facts from algebraic graph theory [36]. Figure 2.1 shows an example graph which will be used to illustrate the definitions and results.

Figure 2.1: Example graph G

The graph is denoted by G and consists of vertices (or nodes) V and edges E. For a graph with N vertices, V = {1, . . . , N} is called the vertex set. If there is an edge (i, j) between nodes i and j, then i and j are called adjacent, and the edge set E of graph G is given by E = {(i, j) ∈ V × V : i, j adjacent}. Note that edges are in general directed, while for undirected graphs it holds (i, j) ∈ E ⇔ (j, i) ∈ E. The adjacency matrix A of a graph is defined element-wise with aij = 1 if i and j are adjacent and aij = 0 otherwise. A path from node i to node j is a sequence of distinct vertices, starting from i and ending with j, such that each pair of consecutive nodes is adjacent. Moreover, if there is a path from node i to node


j, then i and j are called connected. If all pairs of nodes in graph G are connected, then the graph G is called connected. Furthermore, the degree matrix D of graph G is the diagonal matrix with diagonal elements di equal to the number of vertices which are adjacent to node i, i.e., the degree di is the cardinality of node i's neighbor set Ni = {j ∈ V : (i, j) ∈ E}. The Laplacian matrix L of graph G is defined as

L = D − A.

For undirected graphs, the Laplacian matrix L is symmetric and positive semi-definite, i.e., L = L^T ≥ 0. Note that the row sums of L are zero by definition. Thus, the vector of ones, 1, is an eigenvector of L corresponding to the eigenvalue λ1(G) = 0, i.e., L1 = 0. For connected graphs, L has exactly one zero eigenvalue, i.e., the eigenvalues of L can be denoted in increasing order

0 = λ1(G) < λ2(G) ≤ · · · ≤ λN(G),

and the smallest non-zero eigenvalue λ2(G) is called algebraic connectivity.

The example graph in Figure 2.1 is undirected and connected. The vertex and edge sets are given by

V = {1, . . . , 5}, E = {(1, 2), (2, 1), (1, 3), (3, 1), (2, 3), (3, 2), (3, 4), (4, 3), (4, 5), (5, 4)},

and it holds

\[
A = \begin{bmatrix}
0 & 1 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 \\
1 & 1 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 1 \\
0 & 0 & 0 & 1 & 0
\end{bmatrix},
\qquad
D = \begin{bmatrix}
2 & 0 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 & 0 \\
0 & 0 & 3 & 0 & 0 \\
0 & 0 & 0 & 2 & 0 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}.
\]

The Laplacian matrix is given by

\[
L = \begin{bmatrix}
2 & -1 & -1 & 0 & 0 \\
-1 & 2 & -1 & 0 & 0 \\
-1 & -1 & 3 & -1 & 0 \\
0 & 0 & -1 & 2 & -1 \\
0 & 0 & 0 & -1 & 1
\end{bmatrix} \tag{2.1}
\]

and the algebraic connectivity of this graph is λ2(G) = 0.5188. It is easy to see that the row sums of L are zero and L = L^T. This graph is used in the numerical examples throughout this work.
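As a quick numerical check of these definitions, the Laplacian of the example graph and its spectrum can be computed with a few lines of NumPy (a sketch only; the thesis does not prescribe any particular software):

```python
import numpy as np

# Adjacency matrix of the example graph G (Figure 2.1).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # Laplacian L = D - A, cf. (2.1)

eigvals = np.sort(np.linalg.eigvalsh(L))  # L is symmetric, so eigvalsh applies
lambda2 = eigvals[1]                      # algebraic connectivity

print(np.round(eigvals, 4))  # smallest eigenvalue is 0, lambda2 ≈ 0.5188
```

The row sums of L vanish and the smallest eigenvalue is zero with eigenvector 1, exactly as stated above.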

2.2 System Model

The multi-agent system under consideration consists of N agents with single-integrator dynamics, i.e.,

\[
\dot{x}_i(t) = u_i(t)
\]


for i ∈ V = {1, ..., N}, where ui(t) denotes the control input of agent i. According to the associated communication graph G, each agent is assigned a neighbor set Ni ⊂ V consisting of all agents that communicate with agent i. With stack vectors x = [x1, ..., xN]^T and u = [u1, ..., uN]^T, the overall system dynamics and initial conditions are given by

\[
\dot{x}(t) = u(t), \qquad x(0) = x_0. \tag{2.2}
\]

At first, the communication graphs G are assumed to be undirected and connected. Extensions of the problem class are made in Chapters 4 and 5.

2.3 Consensus Protocol and Time-scheduled Implementation

The standard distributed consensus protocol introduced in [1], which solves the average consensus problem for multi-agent system (2.2), is given by

\[
u_i(t) = -\sum_{j \in N_i} \left( x_i(t) - x_j(t) \right). \tag{2.3}
\]

For undirected, connected graphs G, the control-law (2.3) globally asymptotically solves the average consensus problem, i.e., the average of all agents' states remains constant over time and for all i ∈ V it holds

\[
x_i(t) \xrightarrow{t \to \infty} \frac{1}{N} \sum_{i \in V} x_i(0).
\]

Control-law (2.3) is intuitive since it steers each agent towards the center of its neighbors. Thus, the overall system contracts and reaches consensus asymptotically. This contraction property of consensus protocols is discussed in [2]. Note that consensus reaching is guaranteed for a larger class of multi-agent systems including directed graphs, switching topologies, or time-delayed communication, cf. [1] among others. These cases are discussed in Chapter 4.

With (2.2), (2.3), and the definition of L, the closed-loop system can be written in stack vector notation as

\[
\dot{x}(t) = -Lx(t). \tag{2.4}
\]

As discussed in Chapter 1, in practical applications it is necessary to implement the continuous-time control-law (2.3) on digital platforms. The classical method is time-scheduled control, i.e., measurements are acquired at discrete time instances, signals are held constant between updates via zero-order hold techniques, and control-laws are updated periodically according to a constant sampling period τs, i.e.,

\[
u(t) = -Lx(t_k), \qquad t \in [t_k, t_{k+1}[, \tag{2.5}
\]

where t_{k+1} = t_k + τs and t_0 = 0. The control signal u(t) is consequently piecewise constant.

The stability properties of multi-agent systems under time-scheduled control are analyzed in [28]. The authors prove that control-law (2.5), updated with constant sampling period τs, globally asymptotically solves the average consensus problem if and only if the sampling period satisfies

\[
0 < \tau_s < \frac{2}{\lambda_N(G)}. \tag{2.6}
\]

This result serves as benchmark for the event-based control strategy proposed in the present work. Comparison and evaluation can be found in Section 3.4.
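Bound (2.6) is easy to reproduce numerically: since the agents are integrators and u is constant between samples, the sampled closed loop (2.5) evolves exactly as x_{k+1} = (I − τs L) x_k. The sketch below (an illustration with the example graph of Section 2.1; the initial condition and the factors 0.9 and 1.1 are arbitrary choices) shows convergence just inside the bound and divergence just outside it:

```python
import numpy as np

# Example graph of Section 2.1 and its Laplacian.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
lamN = np.linalg.eigvalsh(L)[-1]          # largest Laplacian eigenvalue

def run(tau_s, steps, x0):
    """Iterate the exact sampled closed loop x <- (I - tau_s L) x."""
    P = np.eye(5) - tau_s * L
    x = x0.copy()
    for _ in range(steps):
        x = P @ x
    return x

x0 = np.array([1.0, -1.0, 2.0, 0.5, -2.0])
x_ok  = run(0.9 * 2 / lamN, 2000, x0)     # inside bound (2.6): consensus
x_bad = run(1.1 * 2 / lamN, 2000, x0)     # outside bound (2.6): divergence
print(x_ok)   # ~ x0.mean() in every component
```

Note that the average 1ᵀx is preserved by every iteration (since 1ᵀL = 0), so the convergent run ends exactly at the initial average.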


2.4 Prior Event-based Implementation

As discussed in Chapter 1, event-based scheduling is a promising approach to distributed multi-agent coordination over networks. The groundwork has been laid in [32], [33]. These prior event-based implementations of consensus protocol (2.3) are described in Section 1.3. In short, the idea is to update the controller (2.3) in an event-based fashion. Each agent decides, based on local information like the last update values and the neighbors' states, when to recompute the control-law. The main limitation of the event-based control strategies presented in [32], [33], and [34] is the fact that they require continuous communication between neighboring agents; the savings due to the event-based strategy are only in terms of fewer control updates.

2.5 Problem Statement

The goal of this work is the development of an event-based control strategy which reduces the load on the communication medium, along with the agents' energy consumption, compared to previous approaches. In order to develop the control strategy incorporating event-based measurement broadcasts, the following setup is considered. Each agent consists of a digital microprocessor and dynamics as shown in Figure 2.2.

Figure 2.2: Setup for event-based communication (the microprocessor of agent i receives the broadcasted values x̂j(t), j ∈ Ni, broadcasts x̂i(t), and drives the dynamics with ui(t))

The microprocessor of agent i monitors its own measurement value xi(t) continuously. Based on local information, the microprocessor decides when to broadcast the actual measurement value over the network. Therefore, the latest broadcasted value of agent i can be described by the piecewise constant function

\[
\hat{x}_i(t) = x_i(t_k^i), \qquad t \in [t_k^i, t_{k+1}^i[,
\]

where t_0^i, t_1^i, ... is the sequence of event-times of agent i. Whenever one agent sends or receives a new measurement value, it updates its control-law immediately, thus rendering the control signal piecewise constant. Analogously to the continuous-time protocol (2.3), the control-law based on the broadcasted state values is defined as

\[
u_i(t) = -\sum_{j \in N_i} \left( \hat{x}_i(t) - \hat{x}_j(t) \right),
\]

or, with stack vector \hat{x} = [\hat{x}_1, ..., \hat{x}_N]^T,

\[
u(t) = -L\hat{x}(t). \tag{2.7}
\]

The problem is now to find a rule which determines, based on local information, when agent i has to trigger an event and broadcast a new measurement value to its neighbors.


3 Novel Event-based Control Strategy

This section presents solutions to the problem posed in Section 2.5. Multi-agent systems consisting of single-integrator agents and undirected connected fixed communication graphs are addressed. Various extensions of the problem class are presented in Chapters 4 and 5.

The following triggering mechanism is proposed. Define trigger functions fi(·) which depend on local information of agent i only. For agent i, the states xj(t), j ≠ i, are unknown; only xi(t), x̂i(t), and the broadcasted values x̂j(t) of the neighbors j ∈ Ni are available. The trigger functions take values in R. An event for agent i is triggered as soon as the trigger condition

fi( xi(t), x̂i(t), ∪_{j∈Ni} x̂j(t) ) > 0   (3.1)

is fulfilled. The sequence of event-times for agent i is thus defined iteratively by

t_{k+1}^i = inf{ t : t > t_k^i, fi(t) > 0 },

where t_0^i ≥ 0 is the first instance of time when condition (3.1) is fulfilled. Therefore, for each agent there is a monotonically increasing sequence of events

0 ≤ t_0^i ≤ t_1^i ≤ t_2^i ≤ · · · .

The problem is now to derive suitable trigger functions fi(·), such that the closed-loop system reaches average consensus with guaranteed performance and as few events as possible. In order to exclude Zeno-behavior, it is necessary to prove that there are no accumulation points in the event sequences.

The event-based control algorithm can be illustrated as follows. In each agent's microprocessor runs the loop shown in Figure 3.1, which is initialized at time t = 0 by setting x̂(0) = [0, . . . , 0]^T. Then, all agents compute their trigger functions and check the trigger conditions (3.1). If the trigger condition is fulfilled for one agent, this particular agent broadcasts its actual measurement value over the network. After that, each agent receives measurement updates of its neighbors, in case one of them sent a new value. Note that for now it is assumed that there are no communication time-delays. Finally, the control-law ui(t) is updated and the loop starts from the beginning with the evaluation of the trigger function.

Before suitable trigger functions are presented in Sections 3.1, 3.2, and 3.3, some useful variables for the following computations are introduced. Define for each agent i ∈ V and t ≥ 0 the measurement errors

ei(t) = x̂i(t) − xi(t)   (3.2)


initialize x̂(0) = 0 → compute trigger function fi(t) → check trigger condition fi(t) > 0 → if fulfilled, broadcast new measurement x̂i(t) = xi(t) → receive updates x̂j(t), j ∈ Ni, if any → compute control-law ui(t) based on x̂i(t) and x̂j(t), j ∈ Ni → (repeat)

Figure 3.1: Algorithm running in agent i's microprocessor
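The loop of Figure 3.1 can be sketched as a simple discrete-time simulation. The following is a minimal, illustrative sketch only: the path graph, the threshold value, and the names `simulate` and `trigger` are assumptions for illustration, not part of the thesis' setup.

```python
import numpy as np

def simulate(A, x0, trigger, dt=1e-3, T=10.0):
    """Euler simulation of the loop in Figure 3.1 for all agents."""
    L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
    N = len(x0)
    x = np.array(x0, dtype=float)
    xhat = np.zeros(N)                       # initialization: xhat(0) = 0
    events = [[] for _ in range(N)]
    for k in range(int(T / dt)):
        for i in range(N):                   # check trigger conditions (3.1)
            if trigger(i, x, xhat, np.flatnonzero(A[i])):
                xhat[i] = x[i]               # broadcast new measurement
                events[i].append(k * dt)
        x += dt * (-L @ xhat)                # control-law (2.7): u = -L xhat
    return x, events

# a static trigger (Section 3.2) with an illustrative threshold c0
c0 = 0.05
trigger = lambda i, x, xhat, nbrs: abs(xhat[i] - x[i]) > c0

A = np.array([[0, 1, 0],                     # assumed path graph, 3 agents
              [1, 0, 1],
              [0, 1, 0]])
x, events = simulate(A, [1.0, 0.0, -0.4], trigger)
print(np.round(x, 2))   # all states end up near the initial average 0.2
```

Note that the trigger callable only uses information available to agent i, as required by (3.1).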

and denote the stack vector e = [e1, . . . , eN]^T. With this notation, the control-law (2.7) can be written as u(t) = −L(x(t) + e(t)) and the closed-loop system is given by

ẋ(t) = −L x̂(t) = −L(x(t) + e(t)).   (3.3)

This notation is motivated by [15] and will be of great importance for the synthesis of the event-based control strategy. As in [33], define the average value

a(t) = (1/N) Σ_{i∈V} xi(t).

The initial average is a(0). The time derivative of a(t) is given by

ȧ(t) = (1/N) Σ_{i∈V} ẋi(t) = (1/N) Σ_{i∈V} ui(t) = (1/N) 1^T u(t) = −(1/N) 1^T L (x(t) + e(t)) ≡ 0

since 1^T L = 0^T. Therefore it holds

a(t) = a(0) = a   ∀ t ≥ 0.

Thus the state x(t) can be decomposed according to

x(t) = a1 + δ(t) (3.4)

with disagreement vector δ(t), following the notation of [1].

By definition, the disagreement vector has zero average, i.e., 1^T δ(t) ≡ 0. Moreover, it is easy to see that Lx(t) = Lδ(t), which will be exploited in the following.

These computations show that the average value a of all agents’ states remains constant over time under the proposed event-based control strategy. The disagreement vector δ(t) is useful for stability analysis since average consensus is reached if and only if the consensus point δ = 0 is an asymptotically stable equilibrium point.
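The invariance of the average can be verified numerically; a quick sketch, assuming an arbitrary 4-cycle graph (not the example graph of Figure 2.1):

```python
import numpy as np

# Laplacian of an illustrative undirected graph: a cycle with 4 agents
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
L = np.diag(A.sum(axis=1)) - A

ones = np.ones(4)
print(ones @ L)                      # 1^T L = 0^T: column sums vanish

# hence a_dot = -(1/N) 1^T L (x + e) = 0 for ANY state x and error e
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
e = rng.standard_normal(4)
a_dot = -(ones @ L) @ (x + e) / 4
print(a_dot)                         # zero: the average a(t) is invariant
```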


The following sections contain the main results of this work. Different trigger functions are discussed, which exclude Zeno-behavior of the overall system, while guaranteeing desired convergence properties. In Section 3.1, a trigger function is proposed, which depends on measurement error and actual control input of the corresponding agent. The closed-loop system converges to average consensus asymptotically. In Section 3.2, very intuitive trigger functions are discussed, which bound the measurement error of each agent by a constant. This corresponds to the ideas of [12] and [14]. It is shown that the overall system converges to a ball around the average consensus point, which scales with the threshold on the measurement error. In Section 3.3, trigger functions with time-dependent threshold on the measurement error are presented. In particular, exponentially decreasing thresholds guarantee asymptotic convergence to the consensus point, while Zeno-behavior can still be excluded under some assumptions. Finally in Section 3.4, the novel event-based control strategy is compared to time-scheduled implementations of the consensus protocol. The evaluation shows that event-based control is significantly better in terms of the number of necessary inter-agent communications.

3.1 Adaptive Trigger Functions

The first approach to the problem is based on Lyapunov methods according to [1], [15], [33].

From (3.4) it can be seen that the disagreement dynamics are given by

δ̇(t) = ẋ(t) = −Lx(t) − Le(t) = −Lδ(t) − Le(t)   (3.5)

with initial condition δ(0) = x(0) − a1. According to [1] and [33], the Lyapunov function candidate

V(x) = (1/2) x^T L x

is chosen. In [1] it is shown that this is a valid Lyapunov function for the disagreement dynamics (3.5), since

V(x) = (1/2) x^T L x = (1/2) (δ + 1a)^T L (δ + 1a) = (1/2) δ^T L δ ≥ (1/2) λ2(G) ‖δ‖² > 0   ∀ δ ≠ 0

and V̇ is negative definite with respect to the equilibrium point δ = 0 for e ≡ 0. For non-zero measurement errors e(t), the derivative along trajectories of (3.3) is given by

V̇(x) = x^T L ẋ = −x^T L L (x + e) = −x^T LLx − x^T LLe
≤ −‖Lx‖² + ‖Lx‖ ‖L‖ ‖e‖
= −(1 − σ)‖Lx‖² − σ‖Lx‖² + ‖Lx‖ ‖L‖ ‖e‖.

Consequently, it holds

V̇(x) ≤ −(1 − σ)‖Lx‖²,

which is negative definite for 0 < σ < 1, if

‖e‖ ≤ (σ/‖L‖) ‖Lx‖.   (3.6)


This result can be interpreted in terms of input-to-state stability (ISS), cf. [37], [38]. The foregoing analysis shows that the closed-loop system (3.5) is ISS with respect to the state measurement error e(t), i.e., there exist a class KL function β and a class K function γ such that

‖δ(t)‖ ≤ β( ‖δ(t0)‖, t − t0 ) + γ( sup_{t0 ≤ s ≤ t} ‖e(s)‖ )   ∀ t ≥ t0.

Note that for linear systems ISS is equivalent to asymptotic stability of the unforced system. However, the notion of ISS will be very useful for the analysis of the time-delayed case in Section 4.3. Connections between consensus algorithms and ISS are discussed in [39].

If inequality (3.6) holds for all t ≥ 0, then asymptotic stability is guaranteed. In order to get a decentralized control strategy, we derive local conditions which imply this global inequality.

Note that trigger functions in (3.1) can only depend on agent i's local information, i.e., the states xj(t), j ≠ i, are unknown; only xi(t), x̂i(t), and the broadcasted values x̂j(t) of the neighbors j ∈ Ni are available. Therefore the local conditions must only depend on these quantities. Observe that

‖L x̂‖ = ‖L(x + e)‖ ≤ ‖Lx‖ + ‖L‖ ‖e‖,

and thus,

‖L x̂‖ − ‖L‖ ‖e‖ ≤ ‖Lx‖.

From there it can be seen that the inequality

‖e‖ ≤ (σ/‖L‖) ( ‖L x̂‖ − ‖L‖ ‖e‖ )

implies (3.6). This inequality can be rewritten as

‖e‖ ≤ (σ/((1 + σ)‖L‖)) ‖L x̂‖,

which is equivalent to

‖e‖² ≤ ( σ/((1 + σ)‖L‖) )² ‖L x̂‖².

Since ‖e(t)‖² = Σ_{i∈V} |ei(t)|², this global inequality is implied by the set of local inequalities

|ei|² ≤ ( σ/((1 + σ)‖L‖) )² | Σ_{j∈Ni} (x̂i(t) − x̂j(t)) |²   ∀ i ∈ V,

which is equivalent to

|ei| ≤ (σ/((1 + σ)‖L‖)) | Σ_{j∈Ni} (x̂i(t) − x̂j(t)) |   ∀ i ∈ V.


These local inequalities can be enforced for all t ≥ 0 by the event-based control strategy. In particular, this is achieved by trigger functions

fi( xi(t), x̂i(t), ∪_{j∈Ni} x̂j(t) ) = |ei| − (σ/((1 + σ)‖L‖)) | Σ_{j∈Ni} (x̂i(t) − x̂j(t)) |.

Consequently, event-based control with these trigger functions guarantees asymptotic convergence to average consensus of the overall system.

However, due to the hybrid nature of the system, it is necessary to investigate the sequences of event times in detail and exclude Zeno-behavior [18], [19]. It has to be shown that these sequences are well defined and have no accumulation points. The main result is stated in Theorem 3.1. It includes the convergence result derived above and describes the triggering behavior of the overall system.

Theorem 3.1. Consider the multi-agent system (2.2) with control law (2.7) and assume the communication graph G is undirected and connected. Define the trigger functions

fi(ei(t), ui(t)) = |ei(t)| − (σ/((1 + σ)‖L‖)) |ui(t)|   (3.7)

where 0 < σ < 1. Then, the closed-loop system reaches average consensus asymptotically. Furthermore, all agents trigger synchronously and with constant inter-event time

τ = σ/((1 + σ)‖L‖).   (3.8)

If for one agent i the control input ui(kτ ), k ∈ N, is equal to zero, then this agent will not trigger at time t = (k + 1)τ .

Proof. The triggering function (3.7) is equivalent to the one derived above by the definition of ui(t). Therefore, the stability result for the closed-loop system holds if there are no Zeno-executions in the hybrid system.

The following analysis of the triggering times shows that this is true. First of all, note that no time-delays due to communication or computation in the microprocessors are assumed; all information exchange happens instantaneously. At time t = 0, all agents initialize with x̂i(0) = 0 and the assumption x̂j(0) = 0 for all j ∈ Ni. Then, according to Figure 3.1, they compute their trigger functions and check the trigger conditions. The state measurement errors at time t = 0 are given by

e(0) = x̂(0) − x(0) = −x(0),

and since u(0) = 0, the trigger conditions are fulfilled for all agents j ∈ V with non-zero initial condition xj(0) ≠ 0. These agents broadcast their first measurement value at t = 0 and reset their measurement errors ej(0) to zero. After the measurement broadcasts it holds

x̂i(0⁺) = xi(0⁺),   ei(0⁺) = 0

for all i ∈ V, and each agent i updates its control-law ui(0⁺) such that u(0⁺) = −L x̂(0⁺).


Since the measurement errors are now zero for all agents, no more events are triggered at this instance of time and the system starts evolving.

The next step is to investigate the time after which the next events are triggered. Denote t̄ = 0⁺. For t > t̄, the state measurement errors ei(t) start increasing according to

ei(t) = −ui(t̄)(t − t̄),

since, according to (3.2), the time derivative of ei(t) is given by ėi(t) = −ui(t) for all t where no event happens. From there it follows that

|ei(t)| = |ui(t̄)| (t − t̄).

The trigger condition is fulfilled when function (3.7) crosses zero, which happens after time

τ = (t − t̄) = σ/((1 + σ)‖L‖).

Since this holds for each agent i, all agents will broadcast their next measurement value at time t̄ + τ synchronously. If ui(t̄) happens to be zero for some agent i, then the measurement error of this particular agent will not increase and no event will be triggered at t̄ + τ. However, if ui changes after an update of one of the neighbors, agent i will trigger the next event again after time τ, synchronously with the other agents. The same reasoning applies for t̄ = τ, then for t̄ = 2τ, and so on. Consequently, the events happen synchronously for all times t ≥ 0; the sequences of event times are well-defined for all agents and contain no accumulation points.
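Because each ui is constant between events, the synchronous triggering of Theorem 3.1 can be reproduced by exact integration over intervals of length τ. A sketch under the assumption of a path graph with three agents (the actual graph of Figure 2.1 differs):

```python
import numpy as np

A = np.array([[0, 1, 0],              # assumed path graph, 3 agents
              [1, 0, 1],
              [0, 1, 0]])
L = np.diag(A.sum(axis=1)) - A

sigma = 0.9
tau = sigma / ((1 + sigma) * np.linalg.norm(L, 2))   # inter-event time (3.8)

x = np.array([1.0, 0.0, -0.4])        # initial average a = 0.2
for k in range(200):
    xhat = x.copy()                   # synchronous broadcast at t = k*tau
    u = -L @ xhat                     # control-law (2.7)
    # |e_i(t)| = |u_i|(t - k*tau) hits the threshold in (3.7) exactly at
    # t = (k+1)*tau, so integrating with constant u over tau is exact
    x = x + tau * u

print(np.round(x, 6))                 # all states at the average 0.2
```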

Example 3.1. This example illustrates the results of Theorem 3.1. Consider the multi-agent system (2.2) with communication graph G given in Figure 2.1 and Laplacian matrix (2.1). The parameter σ is set to σ = 0.9 and the initial conditions x(0) are chosen such that all modes of the system are excited, i.e., if v1, ..., vN are the normalized eigenvectors corresponding to the eigenvalues λ1(G), ..., λN(G),

x(0) = (v2 + · · · + vN) / ‖v2 + · · · + vN‖.   (3.9)

The simulation results for this example are shown in Figure 3.2. The first plot shows the evolution of all agents' states xi(t). The second and third plot show the piecewise constant control signals ui(t) and broadcasted states x̂i(t). The evolution of the measurement errors ei(t) is shown in the fourth plot. In the last plot, the events of each agent are marked. The closed-loop system converges to average consensus; the results are consistent with Theorem 3.1.

In particular it can be seen that all agents trigger synchronously with constant inter-event time τ .

Remark. Consider the special case that all agents start with equal initial conditions, which means they are already in consensus. Then it follows from Theorem 3.1 that each agent sends its actual measurement value only once, at time t = 0. Moreover, if all initial states are equal to zero, no trigger will happen at all.
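This remark can be checked in simulation; the sketch below uses a tiny static threshold as a proxy for the trigger functions (3.7), and the graph and values are illustrative assumptions:

```python
import numpy as np

A = np.array([[0, 1, 0],              # assumed path graph, 3 agents
              [1, 0, 1],
              [0, 1, 0]])
L = np.diag(A.sum(axis=1)) - A

def count_events(x0, steps=1000, dt=1e-2, tol=1e-12):
    x = np.array(x0, dtype=float)
    xhat = np.zeros_like(x)           # initialization xhat(0) = 0
    n = 0
    for _ in range(steps):
        trig = np.abs(xhat - x) > tol # any nonzero error triggers
        n += int(trig.sum())
        xhat[trig] = x[trig]
        x = x + dt * (-L @ xhat)      # control-law (2.7)
    return n

print(count_events([0.7, 0.7, 0.7]))  # 3: one broadcast per agent at t = 0
print(count_events([0.0, 0.0, 0.0]))  # 0: no events at all
```

After the single broadcast at t = 0, u = −L x̂ vanishes identically, so the errors never grow again.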

Event-based control according to Theorem 3.1 solves the problem stated in Section 2.5.

However, there are some issues that have to be discussed. Firstly, the proposed trigger functions (3.7) lead to the same behavior as time-scheduled control. The inter-event interval (3.8),


Figure 3.2: Simulation results for trigger functions (3.7) with fi(t) = |ei(t)| − 0.114|ui(t)|: states xi(t), control inputs ui(t), broadcasted states x̂i(t), measurement errors |ei(t)|, and the events of agents 1–5 over t ∈ [0, 10].


along with the condition 0 < σ < 1, yields the maximum sampling time τ = 1/(2‖L‖), which is four times smaller than the upper bound (2.6) for time-scheduled control. Consequently, there are no benefits from the event-based implementation.

Secondly, functions (3.7) contain the parameter ‖L‖, which depends on the network topology. In fact, each agent has to be aware of the network topology in order to evaluate this parameter, which is undesirable in the context of distributed control.

Thirdly, the assumption that computations and communications happen instantaneously plays a crucial role in the analysis. It is not obvious whether the results extend qualitatively to more realistic setups; in particular, the property of synchronous triggering is lost in case of communication delays.

In order to overcome these issues, the problem is tackled by another approach in the next sections.

3.2 Static Trigger Functions

In Section 3.1, the proposed trigger functions are motivated by a straightforward Lyapunov stability analysis. In this section, the problem is approached in reverse: a trigger function is proposed by intuition, and then the resulting properties of the closed-loop system are analyzed. The original idea of event-based control proposed in [12] is to trigger an event whenever the state deviates from the equilibrium point and crosses a defined threshold. The same idea is used in [14] to schedule samples in sensing applications. This approach applied to the present problem yields the condition

|ei(t)| < c0,

which can easily be implemented using trigger functions fi(ei(t)) = |ei(t)| − c0. The intuition of this triggering strategy is that each agent broadcasts a new value over the network as soon as the difference between its current state and its latest broadcasted state crosses the threshold c0. Before the corresponding convergence result is presented in Theorem 3.3, the following Lemma is introduced.

Lemma 3.2. Suppose L is the Laplacian of an undirected connected graph G. Then, for all t ≥ 0 and all vectors v ∈ R^N with zero average, 1^T v = 0, it holds

‖e^{−Lt} v‖ ≤ e^{−λ2(G) t} ‖v‖.

Proof. Since the graph G is undirected, the Laplacian L is symmetric, i.e., L = L^T. The eigenvalues of L are denoted by 0 = λ1 < λ2 ≤ λ3 ≤ · · · ≤ λN, and the corresponding eigenvectors by v1, v2, . . . , vN. Since L is symmetric, these eigenvectors can always be chosen such that they form an orthonormal basis. Let

T = [v1  v2  · · ·  vN]

be the matrix of such eigenvectors; then it holds T T^T = I and L = T diag(0, λ2, . . . , λN) T^T. Hence,

e^{−Lt} = T diag(1, e^{−λ2 t}, . . . , e^{−λN t}) T^T
= T diag(1, 0, . . . , 0) T^T + T diag(0, e^{−λ2 t}, . . . , e^{−λN t}) T^T.


We know that the eigenvector v1 corresponding to eigenvalue λ1 = 0 is the normalized vector of ones, and thus

T diag(1, 0, . . . , 0) T^T = (1/N) 1 1^T.

Now, for v ∈ R^N, it holds

e^{−Lt} v = (1/N) 1 1^T v + T diag(0, e^{−λ2 t}, . . . , e^{−λN t}) T^T v.

Since v has zero average by assumption, the first term is equal to zero and consequently

‖e^{−Lt} v‖ = ‖T diag(0, e^{−λ2 t}, . . . , e^{−λN t}) T^T v‖
≤ ‖T‖ ‖diag(0, e^{−λ2 t}, . . . , e^{−λN t})‖ ‖T^T‖ ‖v‖.

With the norm of the transformation matrix ‖T‖ = √(λmax(T T^T)) = √(λmax(I)) = 1, it follows

‖e^{−Lt} v‖ ≤ ‖diag(0, e^{−λ2 t}, . . . , e^{−λN t})‖ ‖v‖ = e^{−λ2 t} ‖v‖.

Remark. From Lemma 3.2, it follows that the speed of convergence for consensus in the continuous feedback case (2.4) is at least λ2(G), i.e.,

‖δ(t)‖ = ‖e^{−Lt} δ(0)‖ ≤ e^{−λ2(G) t} ‖δ(0)‖.

This result was already presented in [1], where the proof is based on Lyapunov methods.

This lemma can be applied to prove the following theorem, which is the main result of this section.

Theorem 3.3. Consider the multi-agent system (2.2) with control-law (2.7) and assume the communication graph G is undirected and connected. Define the static trigger functions

fi(ei(t)) = |ei(t)| − c0   (3.10)

with positive constant c0. Then, for all initial conditions x0 ∈ R^N and all t ≥ 0, it holds

‖δ(t)‖ ≤ (‖L‖/λ2(G)) √N c0 + e^{−λ2(G) t} ( ‖δ(0)‖ − (‖L‖/λ2(G)) √N c0 ).   (3.11)

Furthermore the closed-loop system does not exhibit Zeno-behavior.

Realizing that the disagreement dynamics are ISS with respect to e(t), this result is quite intuitive. Qualitatively it states that the system state converges to a region around the consensus point if the errors e(t) are bounded and the size of the region depends on this bound. Theorem 3.3 gives an explicit bound on this region in terms of c0 and shows that the convergence happens exponentially fast with speed λ2(G).


Proof. The analytical solution δ(t) of the disagreement dynamics (3.5) is given by

δ(t) = e^{−Lt} δ(0) − ∫_0^t e^{−L(t−s)} L e(s) ds.   (3.12)

Thus, the disagreement is bounded by

‖δ(t)‖ ≤ ‖e^{−Lt} δ(0)‖ + ‖ ∫_0^t e^{−L(t−s)} L e(s) ds ‖ ≤ ‖e^{−Lt} δ(0)‖ + ∫_0^t ‖e^{−L(t−s)} L e(s)‖ ds.

The vector L e(t) has zero average since L is symmetric and L1 = 0, so Lemma 3.2 can be applied. This yields

‖δ(t)‖ ≤ e^{−λ2(G) t} ‖δ(0)‖ + ∫_0^t e^{−λ2(G)(t−s)} ‖L e(s)‖ ds.   (3.13)

Since ‖L e(t)‖ ≤ ‖L‖ ‖e(t)‖ and since the trigger conditions fi(ei(t)) = |ei(t)| − c0 > 0 enforce

|ei(t)| ≤ c0   ∀ t ≥ 0, i ∈ V,

it holds

‖L e(t)‖ ≤ ‖L‖ √N c0.

Thus, the disagreement vector is bounded by

‖δ(t)‖ ≤ e^{−λ2(G) t} ‖δ(0)‖ + ‖L‖ √N c0 ∫_0^t e^{−λ2(G)(t−s)} ds
= e^{−λ2(G) t} ‖δ(0)‖ + ‖L‖ √N c0 [ (1/λ2(G)) e^{−λ2(G)(t−s)} ]_{s=0}^{s=t}
= e^{−λ2(G) t} ‖δ(0)‖ + (‖L‖/λ2(G)) √N c0 (1 − e^{−λ2(G) t})
= (‖L‖/λ2(G)) √N c0 + e^{−λ2(G) t} ( ‖δ(0)‖ − (‖L‖/λ2(G)) √N c0 ).

It remains to show that the inter-event times are lower-bounded by a positive constant τ. Assume that agent i triggers at time t̄ ≥ 0. Then it holds ei(t̄) = 0. Note that fi(0) = −c0 < 0, and therefore agent i cannot trigger again at the same instance of time. From (3.2) it follows that

ėi(t) = −ẋi(t) = −ui(t)   (3.14)

between the trigger events. Since ui(t) is bounded and due to continuity of solutions, we conclude that the next inter-event time is strictly positive through the following argumentation. Observe that

|ui(t)| ≤ ‖u(t)‖ = ‖L(x(t) + e(t))‖ ≤ ‖Lx(t)‖ + ‖L‖ ‖e(t)‖ ≤ ‖Lx(t)‖ + ‖L‖ √N c0


and with ‖Lx(t)‖ = ‖Lδ(t)‖ ≤ ‖L‖ ‖δ(t)‖,

|ui(t)| ≤ ‖L‖ ‖δ(t)‖ + ‖L‖ √N c0   (3.15)

for all i ∈ V and t ≥ 0. From (3.11) it follows that

‖δ(t)‖ ≤ ‖δ(0)‖ + (‖L‖/λ2(G)) √N c0

for all t ≥ 0, and with (3.15),

|ui(t)| ≤ ‖L‖ ( √N c0 + ‖δ(0)‖ + (‖L‖/λ2(G)) √N c0 ),

where the right-hand side is independent of t. Along with (3.14), this yields

|ei(t)| ≤ ∫_{t̄}^t |ui(s)| ds ≤ ‖L‖ ( √N c0 + ‖δ(0)‖ + (‖L‖/λ2(G)) √N c0 ) (t − t̄)   (3.16)

for all t ≥ t̄. The next event is triggered as soon as trigger function (3.10) crosses zero, i.e., fi(ei(t)) = |ei(t)| − c0 > 0. From (3.16) it can be concluded that this trigger condition is not fulfilled before

‖L‖ ( √N c0 + ‖δ(0)‖ + (‖L‖/λ2(G)) √N c0 ) (t − t̄) = c0.

Thus, a lower bound on the inter-event times is given by the positive value

τ = c0 / ( ‖L‖ ( √N c0 + ‖δ(0)‖ + (‖L‖/λ2(G)) √N c0 ) ).   (3.17)

This bound holds for all times t and all agents i; therefore, there are no accumulation points in the sequences of event-times, and Zeno-behavior of the closed-loop system is excluded.
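The guarantees of Theorem 3.3 can be illustrated with a simple Euler simulation; the graph, the threshold c0, and the initial condition below are illustrative assumptions, and the asymptotic bound used is the t → ∞ limit of (3.11):

```python
import numpy as np

# Euler simulation of the static-trigger strategy on an assumed 4-cycle
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
L = np.diag(A.sum(axis=1)) - A
N, c0, dt = 4, 0.1, 1e-3
lam2 = np.sort(np.linalg.eigvalsh(L))[1]     # lambda_2(G)

x = np.array([0.8, -0.5, 0.3, 0.2])
a = x.mean()                          # invariant average
xhat = np.zeros(N)
for _ in range(int(20 / dt)):
    trig = np.abs(xhat - x) > c0      # trigger functions (3.10)
    xhat[trig] = x[trig]              # broadcast on events
    x = x + dt * (-L @ xhat)          # control-law (2.7)

delta = np.linalg.norm(x - a)         # final disagreement ||delta(t)||
bound = np.linalg.norm(L, 2) / lam2 * np.sqrt(N) * c0  # t -> inf in (3.11)
print(delta <= bound)                 # True, with room to spare
```

In simulations of this kind, the final disagreement is typically well below the bound, which is consistent with the conservatism discussed in the remark below.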

Remark. Consider the multi-agent system (2.2) with control-law (2.7) and trigger functions (3.10), and assume the communication graph G is undirected and connected. Then, for t → ∞, it holds according to Theorem 3.3

‖δ(t)‖ ≤ (‖L‖/λ2(G)) √N c0.   (3.18)

However, this estimate is conservative. It can be improved by the following observations. From (3.12) in the proof of Theorem 3.3 it follows

‖δ(t)‖ ≤ e^{−λ2(G) t} ‖δ(0)‖ + ∫_0^t ‖e^{−L(t−s)} L e(s)‖ ds,

and analogously to the proof of Lemma 3.2 it can be seen that

e^{−Lt} L = T diag(1, e^{−λ2 t}, . . . , e^{−λN t}) T^T T diag(0, λ2, . . . , λN) T^T
= T diag(0, λ2 e^{−λ2 t}, . . . , λN e^{−λN t}) T^T
