
Stabilization and Collision Avoidance of Non-point Agents in Dynamic Environments: A Potential Field Approach.

MICHELE COLLEDANCHISE

Master’s Degree Project Stockholm, Sweden December 2012

XR-EE-RT 2012:036


Stabilization and Collision Avoidance of Non-point Agents in Dynamic Environments: A Potential Field Approach.

Candidate: Michele Colledanchise

Examiner: Dr. Dimos Dimarogonas

Supervisor: Meng Guo

Stockholm December 2012

Automatic Control

School of Electrical Engineering

Kungliga Tekniska Högskolan


Abstract

Mobile robot navigation has attracted interest during the last few years in civilian and military research areas, due to its main benefit of substituting human presence in dangerous tasks. Examples include bomb deactivation, fire suppression and aerial surveillance.

In the literature, different definitions of the environment where the vehicles' motion takes place are considered, and the main challenge for mobile robots is to navigate in such environments guaranteeing collision avoidance and convergence towards a desired target. Firstly, the thesis extends these classical environments by considering a more general scenario characterized by workspace bounds and an arbitrary number of objects within it. The main contribution is to develop control strategies able to perform several tasks using artificial potential function methodologies. In particular, the first studied problem consists of a single spherical-shaped agent navigating among both fixed and moving obstacles. The case study is then extended to the multi-agent navigation problem, proposing both centralized and decentralized policies. In these problems, single integrator kinematic models are considered.

The methodologies are then extended to the more realistic case of mobile robots described by a double integrator dynamic model.

In the thesis, theoretical results are developed using tools from Lyapunov stability and LaSalle's invariance principle. The derivations are then illustrated by a set of representative numerical examples.


Acknowledgements

I want to thank Dr. Dimos V. Dimarogonas from the Royal Institute of Technology for accepting me as a student for the master thesis project, for introducing me to the autonomous navigation problem and for his support during this work. I also want to thank Meng Guo (KTH) for his feedback and tips and for his effort in sharing his experience on the autonomous navigation problem. My sincere thanks go also to Professor Tommaso Leo, from Polytechnic University of Marche (UNIVPM), for his support and guidance during the last years and for providing me with the opportunity to do the master thesis project at the Royal Institute of Technology, a fundamental opportunity in my academic career. I also want to thank my co-advisor Dr. Giuseppe Orlando (UNIVPM) for his collaboration and for introducing me to nonlinear systems analysis, passing on to me his own enthusiasm for this subject.


Unless otherwise noted, figures and contents are courtesy of the author.


Contents

Contents i

List of Figures iii

1 Introduction 1

1.1 Path planning problem . . . 1

1.2 Applications . . . 2

1.2.1 Driverless cars . . . 2

1.2.2 Exploration . . . 3

1.2.3 Aerial Surveillance . . . 7

1.2.4 Coordination . . . 8

1.2.5 Safety . . . 10

1.3 Control structure . . . 10

1.4 Motivation . . . 11

1.5 Outline . . . 12

2 Literature overview 13

2.1 Single point agent in a dynamic environment . . . 14

2.2 Multiple non-point agent using a distributed control . . . 15

2.3 Multiple non-point agent using a centralized control . . . 18

3 Navigation Function 21

3.1 Sphere world, workspace, free space . . . 21

3.2 Construction . . . 22

3.2.1 Modification of φ̂ . . . 25

3.3 Proof of correctness . . . 27

3.4 Definition of feasible path . . . 29


4 Single Agent Navigation 33

4.1 Single agent in static environment . . . 33

4.1.1 Controller synthesis . . . 34

4.2 Single agent in dynamic environment . . . 35

4.2.1 Controller Synthesis . . . 37

5 Multi-agent Navigation 43

5.1 Centralized Control . . . 43

5.1.1 Controller synthesis . . . 45

5.2 Decentralized Control . . . 47

5.2.1 Controller synthesis . . . 50

6 Double Integrator dynamic 53

6.1 Single Agent among Static Obstacles . . . 53

6.1.1 Controller synthesis . . . 54

6.2 Single agent among moving obstacles . . . 56

6.2.1 Controller Synthesis . . . 58

7 Simulation results 59

7.1 Single agent in static environment . . . 59

7.1.1 Single integrator . . . 59

7.1.2 Double integrator . . . 62

7.2 Single agent in dynamic environment . . . 65

7.2.1 Single Integrator . . . 65

7.2.2 Double integrator . . . 71

7.3 Multiple agent in dynamic environment . . . 76

7.3.1 Centralized policy . . . 76

7.3.2 Decentralized policy . . . 81

8 Conclusions and future works 87

A Mathematical preliminaries 89

B Mathematical derivations 93

C Nomenclature 97

References 101


List of Figures

1.1 Example of path planning . . . 1

1.2 Lexus hybrid modified as autonomous vehicle for the project Google Car [picture by Henry Fountain published at http://www.nytimes.com] . . . 3

1.3 Examples of explorers . . . 4

1.4 3D reconstruction of "Messenger" spacecraft approaching Mercury [picture by Johns Hopkins published at http://messenger.jhuapl.edu] . . . 6

1.5 French military surveillance drones over Mali [picture published at http://www.telegraph.co.uk] . . . 7

1.6 Drones formation task (http://eth.de). . . 9

1.7 U.S. Marine Corps’ IED DETONATOR (http://www.defendamerica.mil). . . 10

2.1 Example of three agent conflict [7] . . . 16

2.2 Proximity situations with respect to the agent A [7] . . . 17

2.3 Examples of relation tree [7] . . . 20

3.1 Scenario’s example . . . 23

3.2 Workspace example . . . 25

3.3 Examples of surface and contour of the derived navigation function . . . 28

3.4 Example of vector field related to the navigation function . . . 28

3.5 Scenario’s example . . . 29

3.6 Scenario’s example . . . 30

3.7 Unfeasible path due to a non connected freespace . . . 31

3.8 Unfeasible path due to a saddle point . . . 32

4.1 Control scheme used . . . 35

4.2 Control scheme used . . . 37

4.3 Example of saddle points in a navigation function . . . 40

4.4 Modification of navigation function's surface due to the moving obstacles . . . 41


5.1 Control scheme used . . . 45

5.2 Control scheme used . . . 49

6.1 Control scheme used . . . 55

6.2 Phase diagram plots of q in two different situations . . . 55

6.3 Control scheme used . . . 57

7.1 Simulation A . . . 60

7.2 Simulation B . . . 61

7.3 Simulation C . . . 61

7.4 Simulation A . . . 63

7.5 Simulation B . . . 63

7.6 Simulation C . . . 64

7.7 Simulation A . . . 68

7.8 Simulation B . . . 69

7.9 Simulation C . . . 70

7.10 Simulation A . . . 73

7.11 Simulation B . . . 74

7.12 Simulation C . . . 75

7.13 Simulation A . . . 78

7.14 Simulation B . . . 79

7.15 Simulation C . . . 80

7.16 Simulation A . . . 84

7.17 Simulation B . . . 85

7.18 Simulation C . . . 86


Chapter 1

Introduction

The deployment of autonomous vehicles has been a subject of scientific research over the past few decades, encouraged by interests in space, land and submarine exploration. For mobile robots, the ability to navigate in an environment with a desired behavior is one of the main challenges. Typically, an environment is characterized by its structure and the objects in it.

The goal of autonomous navigation is to define a control approach that leads a vehicle to its destination while avoiding collisions within the environment. Navigation function methodologies are a field of increasing research in recent years, with applications in multi-robot systems, guidance of autonomous vehicles and traffic management. Since the use of several agents instead of a single one can perform a task in less time, control strategies for multiple agents have also evolved. Distributed policies have recently gained prominence in the field of robotics due to computational factors, easier communication, the absence of a central control station and flexibility. These policies are based on closed loop strategies, which are preferable to open loop ones in terms of robustness, disturbance suppression and convergence in a time-variant environment.

However, these methodologies have often been studied only in the limited cases where the environment is stationary and the agent is described by a single point.

1.1 Path planning problem

The path planning problem is devoted to finding a path for a controlled agent, i.e., a continuous trajectory that guides the agent from its initial position to its destination while avoiding collisions with any other entities. The agent is assumed to be an omnidirectional moving object; the only restrictions to its movement are given by the workspace's bound and the obstacles. A feasible path has to take into account the geometric constraints given by the obstacles and the agent's shape; velocity and acceleration constraints are classified as holonomic and non-holonomic constraints.



Figure 1.1: Example of path planning

A holonomic constraint is a constraint which can be integrated, resulting in a position constraint; a non-holonomic constraint, on the other hand, is not integrable. Under a non-holonomic constraint, the final configuration depends on the path taken to reach it, not only on the elapsed time.
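As a standard textbook illustration (not taken from this thesis; the variables x, y, θ and r are generic), the two constraint types can be written as:

```latex
% Holonomic: the velocity-level constraint integrates to a position
% constraint. Example: a particle confined to a circle of radius r.
x\dot{x} + y\dot{y} = 0
\;\Longrightarrow\;
x^2 + y^2 = r^2 .
% Non-holonomic: a unicycle with heading \theta that rolls without
% slipping cannot translate sideways; the constraint
\dot{x}\sin\theta - \dot{y}\cos\theta = 0
% involves velocities only and admits no equivalent position constraint.
```

The unicycle constraint is the classic reason a car cannot park by sliding sideways, even though the sideways position itself is not forbidden.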

Another interesting challenge is autonomous navigation in a dynamic environment; such a situation occurs when several agents share the same environment or when moving obstacles lie within it.

1.2 Applications

Autonomous mobile systems have found usage in both military and civilian applications thanks to the absence of a human driver; this characteristic is often exploited to simplify human-machine interaction or to substitute human presence in dangerous tasks with autonomous devices. So far, these applications are still in a beta-testing phase and typically follow a computer science approach.

1.2.1 Driverless cars

Autopilot systems have found recent interest in autonomous vehicles: typically cars that can drive themselves without assistance from a human driver, often called driverless cars. The usage of driverless vehicles can potentially transform the car industry by reducing crashes, improving traffic management and providing the use of vehicles even to people who cannot drive themselves, such as the elderly or the disabled. Moreover, with a social change, the concept of "private cars" could turn into the use of public services: self-driving public transport vehicles that take each customer's path into account, trading a private car for a public vehicle, shared by a group of people, whose route covers the customers' desired paths; this would also reduce the car parking problem and have a positive impact on the environment.

Figure 1.2: Lexus hybrid modified as autonomous vehicle for the project Google Car [picture by Henry Fountain published at http://www.nytimes.com].

Google is developing a driverless car to be used in its project "Google Street View"; the various driverless car prototypes are, so far, tested in safe areas and free open spaces. A Google driverless car has also cruised the California highways, though still monitored by a human driver in case of system failure. To deploy Google's final project, a completely autonomous car in traffic, government authorization is needed: the Google project has received the green light from the state of California to use its cars on the state's streets. Figure 1.2 shows a prototype Lexus hybrid car which will be used in 2014 in California as a driverless car.

Moreover, the most prominent research agency of the United States Department of Defense, the Defense Advanced Research Projects Agency (DARPA), is organizing a new prize competition for driverless vehicles, the DARPA Grand Challenge 2013; its cash prize awards are allocated in order to fill the gap between research and military applications.

1.2.2 Exploration

Land

Land exploration by autonomous vehicles typically takes place on astronomical bodies and other celestial surfaces away from the Earth; such explorers are commonly called planetary rovers, or simply rovers.

The rovers are often designed to navigate broken and dangerous lands, and to pick up ground, air and, whenever possible, water samples, which are analyzed for astronomical studies. Some semi-autonomous rovers are designed to carry members of the human space crew.

(a) Rover "Sojourner" in cruise configuration (http://seds.org). (b) The "SeaWolf 5" hydraulic manipulator (http://www.nauticexpo.com).

Figure 1.3: Examples of explorers

So far, studies on rover development have focused on mechanical reliability in terms of compactness and energy consumption, and on withstanding high levels of gravity, temperature, pressure, dust, corrosion, cosmic rays, solar emissions and electro-magnetic noise, designing a fault-tolerant system with a failure time as long as possible. A real example is the Mars rover "Sojourner", shown in figure 1.3(a), designed to acquire data from the Martian surface; unfortunately it still requires human assistance for identifying target directions and, in some cases, even the obstacles. These rovers receive remote commands from a human driver at the space headquarters on Earth or on a satellite. In the last decade, the study of a complete rover that can navigate, collect and acquire data autonomously has found interest from several space agencies around the world, avoiding remote driving characterized by lags and delays: for example, sending a signal from Mars to Earth takes between 3 and 21 minutes.

Sea

Besides land exploration, sea exploration, in particular deep sea exploration, has gained interest for commercial, scientific and military purposes. Robotic submersibles are deployed since autonomous vehicles can reach sea beds, hulks and other submerged objects that are out of the human diver's reach due to the extreme conditions of the depths, such as freezing temperatures, crushing pressure and other physical conditions not suitable for the human body. Deep sea exploration typically investigates physical, chemical and biological properties of the sea bed and is a growing area of geophysical research, since some ocean depths are still unknown and undiscovered; note that more than half of the Earth is covered by the darkest regions of the ocean, where, due to the blackness, no plant can grow.

Until recently, deep sea research was considered quite unproductive due to the limited applications, but lately biologists, archaeologists, engineers and other scientists are beginning to discover the wide variety of unknown flora and fauna of the depths. So far, unmanned undersea robot applications are limited to the use of "remotely operated vehicles" (ROVs) with several tools depending on their use; those vehicles are, as their name suggests, remotely controlled by a human driver. They are usually fitted with cameras and mechanical arms (as the "SeaWolf" in figure 1.3(b)) and are often used for monitoring underwater equipment of oil platforms or performing particularly difficult routine maintenance operations. A new challenge is to provide completely autonomous vehicles that can roam on their own, send data back to laboratories on land and return to the surface when their mission is completed.

Another challenge is the energy consumption of these vehicles: the ROV industry is developing vehicles with electric thrusters that allow tricky maneuvers and ensure efficiency in terms of "ecomobility". It should not be underestimated that the vehicle is equipped with a number of power-consuming elements, as mentioned above; energy consumption is taken into account due to the limited energy supply (typically batteries installed on board) and environmental sustainability.

Thus, the ROVs currently in use still lack a complete autonomous control system, due to the unknown environment and the unpredictable motion of moving objects under the sea; with further scientific research, new tools and systems could be developed to attain those goals that, for now, remain ideals.

Space

Human exploration of space entails many problems, ranging from travel time to the survival of the crew in an extreme environment, surrounded by outer space and facing problems that are new or perhaps not even imagined. Asteroids, cosmic rays, dark matter, black holes: these are just some of the problems that a human space crew encounters when exploring the cosmos.

In the last decades, space agencies have understood that the best way to explore space is the use of robots. They do not have the strict needs of a human: they can survive for several years on super batteries or solar energy, and they have no problems with loss of muscle tone or space sickness. Moreover, a robot does not need physical training, psychological tests or all the other preventive measures that must be taken with a human crew; furthermore, if a space mission fails, a robot crew can simply be replaced with other robots, which cannot be claimed for a human crew.

The ideal method would be to take advantage of self-replicating machines, able to build perfect copies of themselves once they reach their goals. This process would generate a cloud of robots that travel through space in different directions at the same time, shortening the time for exploration and providing a wealth of data to the scientists, who remain on Earth to analyze the results of the probes.

Figure 1.4: 3D reconstruction of the "Messenger" spacecraft approaching Mercury [picture by Johns Hopkins published at http://messenger.jhuapl.edu].

For now, however, we must settle for small robot travelers carrying miniature chemical laboratories. This certainly does not deprive automated space exploration of its charm, but it makes it less extensive, especially if we want to move beyond the limits of our Solar System.

Having considered the advantages of a vehicle traveling in space instead of a fragile human crew, the use of robots for space exploration is already in the program of many space agencies worldwide. The main problem, here too, is the poor decision-making capability of the robot, so far at least, especially for missions that push to the edge of the Solar System. A completely autonomous space explorer has not been developed yet: an autonomous spacecraft with the ability to adapt, just like a living being, would no longer need hundreds of people at headquarters to control and monitor it; such a robot should reach its goals while avoiding collisions with asteroids and other celestial moving bodies, considered as obstacles to all intents and purposes.

1.2.3 Aerial Surveillance

The term aerial surveillance refers to image and video acquisition from an aerial vehicle such as a flying drone, an unmanned plane or a spy helicopter, used for military purposes like battlefield monitoring or for civilian security like street patrolling. Surveillance, which until the twentieth century was conducted only on land or sea, has since the mid-1900s also used aircraft to perform military tasks such as monitoring enemy activity; such vehicles usually do not carry weapons. These aircraft are also employed in the civil field, for traffic monitoring, control of territory and clandestine immigration.


Figure 1.5: French military surveillance drones over Mali [picture published at http://www.telegraph.co.uk].

Several government security departments are testing different unmanned aerial vehicles with the intention of developing an autonomous aircraft to patrol the skies over their homeland for the monitoring of critical infrastructure, border control and general surveillance. For example, the Department of Homeland Security of the United States of America is testing UAVs planned to be employed in SWAT operations; moreover, the department of defense and security of the United Kingdom is allocating funds for the development of micro aerial vehicles to be used as assistance to the police forces.

Moreover, defense programs have recently focused their attention on aerial surveillance. Up to now, it has been carried out by drones, or teams of drones, controlled remotely by human operators; the challenge now is to design and produce a completely autonomous vehicle, able to recognize suspicious behavior, decide which area to monitor, coordinate with neighboring drones and inform operators at headquarters only in cases of concern, thus increasing the area monitored while requiring a smaller amount of human resources.

Aerial surveillance is also used for rescue tasks such as mountain and sea rescue. Rescue operations normally carry out search, rescue and first aid of missing, wounded and sick people in mountain or marine environments; these environments are hostile to "classic" air rescue, such as human-driven air ambulances or wheeled rescue vehicles, and the worst thing for a rescuer is to become another victim in the rescue process. The first goal of the human rescuers is to find where the victim actually is; with the deployment of a team of autonomous vehicles, it will be easier and faster to send a first aid squad to the correct position, and in these cases rapidity of action is crucial.


1.2.4 Coordination

Swarms

One of the multi-agent tasks is the coordination of several robots that form a swarm; these robots typically have a simple mechanical structure. Their interaction with each other produces a collective behavior that achieves goals more efficiently than a single robot, as in repairing and bio-inspired tasks; in fact, swarm intelligence has recently gained interest in the biological study of insects, bacteria and other fields where a "smart" coordination occurs.

Hence a swarm has to be controlled in two layers: a high layer in which the "swarm task" is defined, in particular which goal the whole multi-agent system has to achieve, and a low layer where the goal and features of each agent are defined. Such approaches emphasize a large number of agents and promote scalability, thus the number of constituent elements has less importance and a decentralized control policy is often required to improve these features. Moreover, these swarms also require complex and synchronized communication between their components and, sometimes, with a central core such as a main hub or a monitoring station for real-time feedback or commands.

A swarm has a dynamic behavior both in terms of the whole system and in terms of individual cooperation and interaction between its members. From simple control laws on a single element, a large set of complex swarm behaviors can be produced.

Further studies of bio-inspired swarm robotics also aim to predict and understand behaviors of biological swarms that are still unexplored, or for which only the features of the individual members are known.

Recent technological progress in microelectronics and miniature mechanics has made possible potential applications in nanorobotics, microrobotics and biomedical research, like micromachinery in the human body for medical or surgical tasks.

A swarm is typically made up of robots with the same characteristics and without any rigid formation constraint; tasks of multiple robots under a formation constraint are described in the next section.

Formations

When a robot has to take into account the position of the others to move in a workspace, there is a formation constraint among these elements; moreover, with a rigid formation, collision avoidance between individual members is intrinsically achieved.

Robots under a formation constraint generally share information and behaviors to work together in exploring, transporting, surveillance and monitoring tasks; their formation can also change, breaking the formation rules for a while in order to achieve a new desired formation.

Formation tasks typically require the communication of a large quantity of information with a central node; due to this feature, a centralized control policy is adopted, defining the absolute position of each member. In a distributed policy, each member has its own controller, and a leader has to be defined in such a way that the other members, acting as followers, can attain a relative position from the leader; a follower can serve as leader for other members, achieving the desired formation by solving the problem of who follows whom. In a distributed control policy, each follower follows a single "formation leader", directly or indirectly.

Figure 1.6: Drones formation task (http://eth.de).

1.2.5 Safety

Due to its physical structure, a robot can handle its tasks even in extreme condi- tions whom are impracticable for a human operator it also can substitutes a human operator in dangerous tasks, as bomb deactivation or fire suppression or other tasks where the human presence on site is considered dangerous. For example the Marine Corps technician of the United States are using tele-operated robots to detonate a unexposed devices in Fallujah, Iraq (see figure 1.7).

Figure 1.7: U.S. Marine Corps’ IED DETONATOR (http://www.defendamerica.mil).

So far, these tele-operated robots are still remotely operated by a human operator from a distance, so the operation is rather the following of a sequence of movements; an autonomous robot for these tasks has not been developed yet.

Another example comes from a research center of the U.S. Navy, which in recent years has worked on a firefighter robot specialized in handling and extinguishing fires on ships; indeed, fire on board is one of the events most feared by those who navigate the sea, difficult to control and tame, often at a high cost in terms of damage and human lives.

1.3 Control structure

In this work the control synthesis can be schematized in three different layers:

• Scenario planning:

In this layer a scenario is defined by knowing the number of obstacles and agents, the obstacles' geometric characteristics, the agents' shape, the workspace bound and the destination point. In addition to the scenario, the agent's dealing with the obstacles is defined: how much distance an agent has to keep from an obstacle or from another agent. The aim of this layer is to derive a navigation function which contains all this information and is used in the navigation planning.

• Navigation planning:

This is the core of the controller synthesis; once the scenario is defined, a control signal has to be derived that leads the agent from every suitable initial point to the destination while avoiding collisions with obstacles. This planning has to take into account, at each instant of time, the positions and velocities of the other entities that belong to the agent's scenario. The navigation defined here is based on the kinematic model of the agent; it thus prescribes a behavior in position, velocity and acceleration that a real agent, like an autonomous vehicle or a robot, has to follow in order to satisfy the requirements described above, independently of its physical characteristics.

• Motion control:

This is the lowest layer of the control structure; it takes into account the dynamical model of the agent, its physical properties, its sensing system and the actuators. It aims to develop a control law for the actuators in order to achieve the kinematics obtained in the navigation planning layer; typically this is the solution of a classic tracking problem.

This study is focused on the first two layers, due to the open challenges regarding these subjects and because, by disregarding the dynamic constraints of a specific robot or autonomous vehicle, it develops a controlled motion suitable for different applications.
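As a rough illustration of this layered structure, the loop below sketches one planning-and-tracking step in Python. This is a hypothetical skeleton with illustrative names and numbers, not the controller derived in this thesis; a crude attractive/repulsive field stands in for the navigation function of chapter 3.

```python
import numpy as np

# Scenario planning layer: the environment data from which a navigation
# function would be built (all values are illustrative).
scenario = {
    "destination": np.array([2.0, 0.0]),
    "obstacles": [(np.array([1.0, 1.0]), 0.5)],  # (centre, radius)
}

def navigation_planning(q, scenario):
    """Navigation layer: return a desired velocity for the kinematic model.
    A simple attractive term to the goal plus a repulsive term near
    obstacles stands in for the gradient of a navigation function."""
    v = scenario["destination"] - q                  # attraction to goal
    for c, r in scenario["obstacles"]:
        d = q - c
        dist = np.linalg.norm(d)
        if dist < 3 * r:                             # repulsion near obstacles
            v += d / dist * (3 * r - dist)
    return v

def motion_control(v_desired, v_actual, gain=2.0):
    """Motion layer: a proportional tracker turning the desired velocity
    into an actuator command (here, a commanded acceleration)."""
    return gain * (v_desired - v_actual)

# One step of the loop: plan a velocity, then track it.
q, v = np.array([0.0, 0.0]), np.array([0.0, 0.0])
u = motion_control(navigation_planning(q, scenario), v)
```

In a real system the two functions would run at different rates: the navigation layer replans as the scenario changes, while the motion layer tracks at the actuator sampling rate.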


1.4 Motivation

The main motivation for this work is to propose solutions to navigation tasks, in particular:

• Single non-point agent among static non-point obstacles.

• Single non-point agent among moving non-point obstacles.

• Multiple non-point agents among moving non-point obstacles using a central- ized policy.

• Multiple non-point agents among moving non-point obstacles using a decen- tralized policy.

• Double integrator model

The presence of several agents sharing the same workspace is based on the assumption that each moving entity in the workspace is controlled and its aim is, at least, to avoid collisions with other entities. A moving obstacle might have an unpredictable motion; a control strategy that takes into account the position of the obstacles at each instant of time is therefore necessary, and the path planning has to be redefined in real time. The presence of moving obstacles is insidious because uncontrolled entities could interfere with the agent's goals, in some cases making their satisfaction impossible; moreover, only the agent can change its position in order to avoid an incoming collision.

In this work both single-agent and multi-agent tasks are considered; the main motivation for considering multi-agent scenarios is to propose a solution that is less conservative than treating the other agents as obstacles.

The decentralized control of complex tasks aims to increase the efficiency of the system by avoiding the use of a central hub; besides, it is less susceptible to a single agent's fault.

Another motivation for considering a double integrator is that most controlled vehicles have dynamics such that the acceleration is accessible by the actuators; a control law defined this way will simplify the motion control described in section 1.3.
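Schematically, the two agent models and the corresponding potential-field control laws take the following forms. This is a sketch assuming a navigation function φ and positive gains K, K_d; the precise laws are the subject of the controller syntheses in chapters 4-6.

```latex
% Single integrator (kinematic) model: velocity is the control input.
\dot{q} = u, \qquad u = -K\,\nabla\varphi(q)
% Double integrator model: acceleration is the control input; a
% damping term -K_d\,\dot{q} dissipates kinetic energy.
\ddot{q} = u, \qquad u = -\nabla\varphi(q) - K_d\,\dot{q}
```

In the first case the agent's velocity is commanded directly; in the second, only the acceleration is, which matches vehicles whose actuators apply forces.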

1.5 Outline

This work presents different solutions to the autonomous navigation problem and is organized as follows. In chapter 2, the state of the art is presented to give the reader a brief overview of the artificial potential field approach to the navigation problem, highlighting differences with this work. Chapter 3 presents the construction of the navigation function used to derive the control logic and defines the requirements in terms of collision avoidance and convergence. From chapter 4 onwards, different problem statements are presented and for each of them a controller synthesis is carried out, proving convergence and collision avoidance. Finally, chapter 7 presents the simulation results of non-trivial cases. To improve readability, mathematical derivations are collected in an appendix; mathematical preliminaries are presented in an appendix as well, in order to make results and proofs clearer to the reader.


Chapter 2

Literature overview

Mobile robot navigation is a field with a long research history due to the mathematical complexity of the problem. In the scientific literature different approaches and solutions have been proposed: open loop and closed loop, decentralized and centralized. Open loop approaches develop an off-line path planning, from the initial point to the destination. Such an approach is not suitable for tasks with moving obstacles, since the obstacles' motion is not known a priori and an on-line planning is required. Open loop approaches typically use an optimal resolution among given waypoints, and a decentralized implementation is introduced in order to resolve conflict situations by imposing constraints on the vehicle motion (see [2]). Closed loop methodologies define a control law updated on the knowledge of the system configuration at each time instant; these approaches use Lyapunov stability analysis to prove the convergence of the system, with either a single agent or multiple agents. Examples of these methodologies are given by Jadbabaie et alii [19], Ren et alii [33] and Smith et alii [36].

Another task that has gained interest is platooning control, where the vehicles must converge to a desired configuration defined by constraints on their relative positions; different solutions were proposed by Dimarogonas et al. [6] [8], Eren et al. [12], Saber et al. [35], Stipanovic et al. [38] and Tanner et al. [39].

The mathematical theories behind the various approaches differ, ranging from the use of biharmonic functions (see Masoud et al. [30] and Laureline et al. [17]) to the use of formal methods (see Fainekos [13], Karaman et al. [20], Loizou et al. [27]).

This work is focused on potential field theory. The potential field approach was introduced by Rimon and Koditschek [23], who proposed the use of navigation functions as a solution to the path planning problem. Their aim was to lead a single point from an initial position to a destination while avoiding collisions with spherical obstacles in the workspace. Navigation functions are polar expressions of potential functions; since they are bounded, by the Weierstrass extreme value theorem they attain a global minimum, placed at the destination point, and a maximum on the obstacles and workspace edges. The


absence of local minima or maxima in the interior of the workspace was proved in the work of Rimon and Koditschek itself (see section 3.3 of [23]). Moreover, navigation functions are Morse [31], that is, smooth mappings on a bounded domain whose interior critical points all have a non-degenerate Hessian.

From the above description, it is intuitive that an agent moving in the direction of the negative gradient of a navigation function reaches the destination while avoiding the obstacles. This chapter gives a short overview of the navigation function methodologies used for agent navigation tasks; as mentioned in chapter 1, the proposed solutions focus either on multi-non-point-agent systems without obstacles or on a single non-point agent among static obstacles.

This chapter also focuses on recent work on autonomous navigation; the methodologies are explained in order to point out the differences with this work, as mentioned in section 1.4.

2.1 Single point agent in a dynamic environment

So far, the literature offers a solution to the navigation problem among dynamic obstacles only for the single point agent, as proposed by Loizou et al. [28]. Their approach employs a non-smooth navigation function with a set of measure zero on which the gradient is not defined.

Defining a smooth function σ : [0, ∞) → [0, ∞) with the following properties:

• σ(0) = 0

• σ̇(0) = 0

• 0 < σ̇(x) < µ, with x ∈ R⁺ and µ ∈ R⁺

and a smooth function γ : (0, δ] → [0, ∞) with the following properties:

• γ(δ) = 0

• lim_{x→0} γ(x) = ∞

• −∞ < γ̇(x) < −µ − θ, ∀x ∈ (0, δ]

where θ is a positive parameter.

By defining the following function:

F(q) = { V(q)             if d > 0
       { V(q) + b_j(q)    otherwise

where V(q) = σ(||q||), b_j(q) = γ(||q − q_j|| − r_j), d = ||q − q_j|| − r_j, q is the robot's position, q_j the center of the j-th obstacle, r_j its radius and j the index of the nearest obstacle, and defining the differential equation:

ẋ = −∇F(q)

then, according to [15], the Filippov solutions of (2.1) are absolutely continuous.

Assuming now that the robot kinematics are described by:

q̇ = u

and letting the following equation describe the motion of the obstacles:

q̇_j = h_j(t),   j = 1, 2, ..., n_O

where n_O represents the number of obstacles.

Let the switch ζ be defined as follows:

ζ = { 1   if d ≤ 0
    { 0   otherwise

and η by:

η(f(x)) = { 1   if f ≠ [0 0]^T
          { 0   otherwise

Then the system (2.1), under the control law:

u = ζ·(q̇_i + λ (x − q_i)/||x − q_i||) + η(−∇F(q)) + (1 − η)·q̇_T

is globally asymptotically stable.

The proof of stability is carried out by choosing (2.1) as the Lyapunov function candidate.
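As an illustration, the switched potential and a gradient-descent step on it can be sketched numerically. The shaping functions below are hypothetical choices satisfying the listed properties of σ and γ (they are not the ones used in [28]), and the repulsive term is activated inside an assumed influence range δ:

```python
import numpy as np

MU, THETA, DELTA = 1.0, 0.5, 1.0  # assumed parameters

def sigma(x):
    """Attractive shaping: sigma(0)=0, sigma'(0)=0, 0 < sigma'(x) < MU."""
    return MU * (np.sqrt(1.0 + x**2) - 1.0)

def gamma(x):
    """Repulsive shaping on (0, DELTA]: gamma(DELTA)=0, gamma(x)->inf as x->0,
    gamma'(x) < -MU - THETA everywhere on the domain."""
    return (MU + THETA + 1.0) * (DELTA - x) + DELTA / x - 1.0

def F(q, obstacles):
    """Switched potential: attractive term (destination at the origin) plus
    the repulsive term of the nearest obstacle when inside range DELTA."""
    V = sigma(np.linalg.norm(q))
    d = min(np.linalg.norm(q - c) - r for c, r in obstacles)
    return V + gamma(d) if d < DELTA else V

def grad_F(q, obstacles, h=1e-6):
    """Central-difference gradient of F."""
    g = np.zeros_like(q)
    for i in range(q.size):
        e = np.zeros_like(q); e[i] = h
        g[i] = (F(q + e, obstacles) - F(q - e, obstacles)) / (2 * h)
    return g

obstacles = [(np.array([1.5, 0.0]), 0.4)]   # made-up obstacle (center, radius)
q = np.array([2.5, 0.5])
u = -grad_F(q, obstacles)                   # gradient-descent control input
q_next = q + 0.02 * u                       # small Euler step
```

A step along −∇F simultaneously lowers the attractive term and moves the robot away from a nearby obstacle, which is exactly the qualitative behavior the Lyapunov argument exploits.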

2.2 Multiple non-point agent using a distributed control

A solution to the multiple non-point agent task was proposed by Dimarogonas et al. [7]. In their scenario each moving entity is controllable, and the controllers are independent of each other. The controllers are based on the potential field approach, using a navigation function with the same structure for each agent.

In this work the following assumptions are made:

• Each agent has global knowledge of the position of the others at each time instant.


• Each agent has knowledge only of its own desired destination but not of the others.

• Spherical agents are considered.

• The workspace is bounded and spherical.

• The dynamics of each agent are holonomic.

In this work, N agents operating in the same workspace W ⊂ R² are considered. Each agent occupies a disc R_i = {q ∈ R² : ||q − q_i|| ≤ r_i} in the workspace, where q_i ∈ R² is the center of the disc and r_i is the radius of the agent.

The configuration space is spanned by q = [q_1, q_2, ..., q_N]^T. The motion of each agent is described by:

q̇_i = u_i,   i = 1, 2, ..., N

The desired destinations of the agents are denoted by the index d: q_d = [q_d1, q_d2, ..., q_dN]^T. A three-agent conflict scenario is shown in figure 2.1.

Figure 2.1: Example of three agent conflict [7]

The proposed navigation function is:

φ_i(q) = (γ_di + f_i) / ((γ_di + f_i)^k + G_i)^{1/k}    (2.1)

where γ_di = ||q_i − q_di||² is the squared distance between the current i-th agent's position q_i and its final destination q_di.

The f function

The f function used in (2.1) is the key difference with respect to the centralized policy: here the control law of each agent disregards the destinations of the others. Without the f function, the navigation function of the i-th agent would be zero when its initial condition coincides with its final destination; the f function therefore ensures that the destination point is a non-degenerate local minimum of φ_i in this situation.

The f function is defined by:

f_i(G_i) = { a_0 + ∑_{j=1}^{3} a_j G_i^j    if G_i ≤ X
           { 0                              otherwise

where X = f_i(0) > 0 is a positive parameter.

The G function

Each agent has its own G_i function, which represents its relative position with respect to all the other agents. Each G_i is minimized on the boundary of the i-th agent's free space. To construct the G_i function, the proximity situations with respect to agent i must be studied; several of these situations are shown in figure 2.2.

Figure 2.2: Proximity situations with respect to the agent A [7]

The proximity function between agent i and agent j is defined as:

β_{i,j} = ||q_i − q_j||² − (r_i + r_j)²

This function is used to define a specific measure of the distance between agent i and the other agents involved in a relation; this measure is called the relation proximity function, constructed as follows:

(b_{R_k})_l = ∑_{j∈(R_k)_l} β_{R,j}

where j ∈ (R_k)_l identifies which agents are in the specific relation of agent R.

A relation verification function is defined by:

(g_{R_k})_l = (b_{R_k})_l + λ (b_{R_k})_l / ((b_{R_k})_l + (B_{R_k^C})_l^{1/h})

where λ and h are positive parameters, (B_{R_k^C})_l = ∏_{m∈(R_k^C)_l} (b_m)_l, and (R_k^C)_l is the complementary set of relations of level l.


Finally, the G_i function is constructed as follows:

G_i = ∏_{l=1}^{n_L^i} ∏_{j=1}^{n_{R_l}^i} (g_{R_j})_l

where n_L^i is the number of levels and n_{R_l}^i is the number of relations in level l with respect to agent i.

By using (2.1) as navigation function and the feedback control strategy:

u_i = −K_i ∂φ_i/∂q_i

where K_i is a positive parameter and φ_i is the navigation function (2.1), convergence and collision avoidance are satisfied, as proved in section 3.7 of [7].
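For concreteness, the decentralized gradient control can be sketched for the simplest two-agent case, taking G_i equal to the single pairwise proximity function and f_i ≡ 0 (a simplification valid away from agent i's destination); the gain, exponent, radii and positions below are made-up values, not taken from [7]:

```python
import numpy as np

K_GAIN, K_EXP = 1.0, 3.0
R = 0.5  # common agent radius (assumed)

def beta_pair(qi, qj, ri=R, rj=R):
    """Pairwise proximity function: positive iff agents i and j do not overlap."""
    return np.dot(qi - qj, qi - qj) - (ri + rj) ** 2

def phi_i(qi, qj, qdi, k=K_EXP):
    """Navigation function of agent i, two-agent case with f_i = 0."""
    gamma_di = np.dot(qi - qdi, qi - qdi)
    G_i = beta_pair(qi, qj)
    return gamma_di / (gamma_di ** k + G_i) ** (1.0 / k)

def control_i(qi, qj, qdi, h=1e-6):
    """u_i = -K_i * d(phi_i)/d(q_i), via a central-difference gradient."""
    g = np.zeros_like(qi)
    for m in range(qi.size):
        e = np.zeros_like(qi); e[m] = h
        g[m] = (phi_i(qi + e, qj, qdi) - phi_i(qi - e, qj, qdi)) / (2 * h)
    return -K_GAIN * g

qi, qj, qdi = np.array([0.0, 0.0]), np.array([2.0, 0.1]), np.array([4.0, 0.0])
ui = control_i(qi, qj, qdi)
```

With the neighbor slightly above the straight line to the destination, the resulting input both advances agent i toward q_di and deflects it downward, away from the neighbor, which is the qualitative behavior the navigation function is designed to produce.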

2.3 Multiple non-point agent using a centralized control

A centralized policy for mobile robot navigation was proposed by Loizou et al. [28]; their work focuses on tasks for multiple holonomic vehicles. A central hub, which has global knowledge of the workspace and of the agents' positions, communicates with the agents and defines the control law for each of them, driving the agents from any initial configuration (the positions of the individual agents) to the desired configuration while avoiding collisions. The following assumptions are made:

• A central hub has a global knowledge about workspace, agents’ positions and their destinations.

• Each agent communicates only with the hub and receives from it the control signal at each time instant.

• Spherical agents are considered.

• The workspace is bounded and spherical.

• The dynamics of each agent are holonomic.

The scenario is the following: m mobile robots and their workspace W ⊂ R². Each robot is represented by a disk in the workspace, R_i = {q ∈ R² : ||q − q_i|| ≤ r_i}, where q_i ∈ R² is the center of the disk and r_i is its radius.

Each robot's position is represented by q_i, and the configuration of the whole system is spanned by q = [q_1^T, q_2^T, ..., q_m^T]^T. The motion of each agent is described by:

q̇_i = u_i,   i = 1, 2, ..., m

The navigation function used is:

φ = (γ^k / (γ^k + G))^{1/k}

where γ = ||q − q_d||² is the squared distance between the current system configuration q and the goal configuration q_d, and k is a positive parameter.

The G function

Similarly to the distributed policy [7], in the centralized control the proximity function, relation proximity function and relation verification function used in 2.3 have to be defined; since the agents' positions are collected in a single vector q, the definitions of these functions differ.

First, the concept of relation level has to be explained. A relation is a collision scheme between agents: a binary relation involves two colliding agents, and so on; a "relation tree" is the set of agents that share some relation. The number of binary relations in a relation is called the relation level. Figure 2.3 shows examples of relation trees.

Figure 2.3: Examples of relation trees [7]: (a) one-tree relation; (b) two-tree relation.

The relation proximity function gives a measure of the distance between the robots involved in a single relation and is defined as follows:

b_R = q^T P_R q − ∑_{{i,j}∈R} (r_i + r_j)²


where R is the set of binary relations and P_R = ∑_{{i,j}∈R} R_{i,j}, with the relation matrix R_{i,j} being the 2N × 2N block matrix with I_{2×2} at block positions (i, i) and (j, j), −I_{2×2} at block positions (i, j) and (j, i), and O_{2×2} elsewhere, so that q^T R_{i,j} q = ||q_i − q_j||².

The relation verification function is defined by:

g_{R_j} = b_{R_j} + λ b_{R_j} / (b_{R_j} + B_{R_j^C}^{1/h})

where

B_{R_j^C} = ∏_{k∈R_j^C} b_k

and R_j^C is the complementary set of R_j, the set of relations of level j. It is now possible to define G as:

G = ∏_{L=1}^{n_L} ∏_{j=1}^{n_{R,L}} (g_{R_j})_L

With this approach, convergence to the goal configuration is achieved by using the control law for the whole system:

u = −K∇φ

where K is a positive parameter and φ is (2.3).
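The algebraic identity behind b_R can be checked in a few lines. The block structure of R_{i,j} below follows the description above (identity blocks on the diagonal positions, minus identity on the off-diagonal ones); indices are 0-based here, and the positions, radii and relation set are invented for the example:

```python
import numpy as np

def relation_matrix(i, j, N):
    """2N x 2N block matrix R_{i,j} with q^T R q = ||q_i - q_j||^2:
    I at blocks (i,i),(j,j), -I at blocks (i,j),(j,i), zeros elsewhere."""
    R = np.zeros((2 * N, 2 * N))
    I2 = np.eye(2)
    for a, b, s in [(i, i, 1), (j, j, 1), (i, j, -1), (j, i, -1)]:
        R[2*a:2*a+2, 2*b:2*b+2] = s * I2
    return R

def b_R(q, relation, radii):
    """Relation proximity function b_R = q^T P_R q - sum (r_i + r_j)^2."""
    N = len(radii)
    P = sum(relation_matrix(i, j, N) for i, j in relation)
    qf = q.reshape(-1)
    return qf @ P @ qf - sum((radii[i] + radii[j]) ** 2 for i, j in relation)

q = np.array([[0.0, 0.0], [3.0, 0.0], [3.0, 4.0]])   # stacked robot positions
radii = [0.5, 0.5, 0.5]
val = b_R(q, [(0, 1), (1, 2)], radii)   # ||q0-q1||^2 + ||q1-q2||^2 - 2*(1)^2
```

Here the 3-4-5 geometry makes the check easy by hand: 9 + 16 − 2 = 23.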


Chapter 3

Navigation Function

This chapter refers to the construction of navigation functions on analytic manifolds with boundary which resemble Euclidean n-space, for a non-point agent among non-point obstacles; it is an extension of the potential field approach for a single point agent among static obstacles developed by Rimon and Koditschek in [23].

3.1 Sphere world, workspace, free space

A sphere world is an arbitrary compact connected subset of Euclidean n-space whose boundary is the disjoint union of a finite number of (n−1)-spheres [23].

Formally, a topological space S is compact if each of its open covers has a finite subcover. Hence, for every arbitrary collection {C_α}_{α∈Ψ} of open subsets of S such that

S = ∪_{α∈Ψ} C_α

there is a finite subset Υ of Ψ such that

S = ∪_{β∈Υ} C_β.

Since a disjoint set of (n−1)-spheres bounds the sphere world, and since it is compact, a single sphere forms the outer boundary of the sphere world. This (n−1)-sphere bounds the workspace, a compact space without obstacles in which an agent can lie, which can be defined as follows:

W ≜ {q ∈ E^n : ||q − q_0||² ≤ (ρ_0 − r)²}


where:

q is the center of the agent's position,
q_0 is the center of the workspace,
r is the agent radius¹,
ρ_0 is the world radius.

The set of obstacles is defined as:

O_j ≜ {q ∈ E^n : ||q − q_j||² ≤ (r + ρ_j)²}

where:

q is the agent's position,
r is the agent radius,
q_j is the center of the circle representing obstacle j,
ρ_j is the j-th obstacle radius.

Note that the sphere world bound essentially describes an obstacle: in fact E^n \ W is an obstacle to all intents and purposes, and it is possible to define the "zero-th" obstacle as:

O_0 ≜ E^n \ W

The free space is defined as the subset of Euclidean n-space bounded by the M + 1 obstacles (the M objects in the workspace plus the workspace bound); it is the space in which an agent can lie without any collision. Formally:

F ≜ W \ ∪_{j=1}^{M} O_j

where M is the number of obstacles.

3.2 Construction

Definition 1 (Koditschek and Rimon [23]) Let F ⊂ E^n be a compact connected analytic manifold with boundary. A map φ : F → [0, 1] is a navigation function if it is:

i. Analytic on F;

ii. Polar on F: there exists a unique minimum at d ∈ F;

iii. Morse on F²;

¹The study can be extended to a point agent by setting r = 0.

²A scalar-valued function is called a Morse function if all its critical points are non-degenerate.

iv. Admissible on F (i.e., lim_{q→∂F} φ(q) = 1).

The basic idea of the potential field approach is to define a cost function whose negated gradient field is attractive toward the destination and repulsive from the workspace boundary: it induces a vector field on E^n with negative gradient in a neighborhood of the destination and positive gradient near the workspace boundary.

The requirement of analyticity is due to the fact that an analytic function has a closed-form expression, which makes the computation of its gradient straightforward. Having φ polar means that it has a unique global minimum on its domain F. The condition on the gradient is a control requirement and will be described further. Such a cost function implicitly defines high-cost and low-cost regions: high cost is related to obstacle avoidance, low cost to reaching the destination. For this purpose a "destination function" and an "obstacle function" are defined.

Figure 3.1: Scenario's example: (a) scenario's properties; (b) free space boundary ∂F.

Destination function

The destination function γ : F → [0, ∞) gives an idea of how close the agent is to its destination and is defined by:

γ(q) ≜ γ_d(q)^k

where k ∈ N is a tuning parameter that regulates the shape of the trajectory and γ_d is the squared Euclidean norm of the difference between the current agent position and the desired one:

γ_d(q) ≜ ||q − d||²

where:

q is the center of the agent's position,
d is the final position.

The function γ is related to the "low cost": γ vanishes exactly at the destination, γ(d) = 0, and the agent achieves the configuration d by minimizing γ_d.

The tuning parameter k describes how much the agent weighs the destination function with respect to the obstacle function. Roughly speaking, a higher value of k may imply a path closer to the obstacles.

Obstacle Function

In this study, the obstacles are modeled as n-sphere objects lying in the workspace. In real tasks not all obstacles are sphere shaped; nevertheless it is possible to extend the study to non-spherical obstacles by finding a diffeomorphism that turns the obstacle shape into an n-sphere object, in which case the original shape must be homeomorphic to the n-sphere; for more details see section 2.5 of [23].

The obstacle function β : F → [0, ∞) defines a measure of how close the agent is to an obstacle or to the world bound. Since non-point entities, agent and obstacles, are considered in this work, a function that takes into account the distance between the agent and each obstacle, considering their radii, should be used. To this purpose a function β_j is defined: it gives a measure of how far the agent is from the j-th obstacle and is negative when a collision occurs. This function is defined by:

β_j ≜ ||q − q_j||² − (r + ρ_j)²

where:

q is the center of the agent's position,
r is the agent radius,
q_j is the center of the circle representing obstacle j,
ρ_j is the j-th obstacle radius.

By knowing the characteristic and geometric properties of the agent, the obstacles and the workspace, a scenario can be built as illustrated in figure 3.1.

Now a function that gives a measure of the distance between the agent and the bound of the sphere world is needed; β_0 is defined to this purpose by:

β_0 ≜ (ρ_0 − r)² − ||q − q_0||²

where:

ρ_0 is the world radius,
q_0 is the workspace center.

Note that β_0 gives a measure of how far the boundary of the world sphere is from the agent and is negative if the agent is outside.

The obstacle function β is defined by:

β ≜ ∏_{j=0}^{M} β_j

The agent achieves collision avoidance by maximizing β; in fact the zero level of this function describes the boundary of the free space ∂F. β therefore describes which positions the agent may attain in order to avoid a collision; thus β gives information about the free space.

The free space can also be defined by {q | β_j(q) > 0, ∀j}; an example is shown in figure 3.2. Note that this definition refers to the point q (the black dot in the figure), which has to keep a distance of at least the agent radius r from the edges of the obstacles and from the bound of the sphere world.
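A direct membership test for the free space follows from these inequalities; a minimal sketch, with made-up workspace and obstacle parameters:

```python
import numpy as np

def in_free_space(q, q0, rho0, obstacles, r):
    """True iff the agent of radius r centered at q lies in the free space F:
    inside the workspace bound and outside every enlarged obstacle."""
    beta0 = (rho0 - r) ** 2 - np.dot(q - q0, q - q0)          # workspace term
    betas = [np.dot(q - qj, q - qj) - (r + rhoj) ** 2         # one per obstacle
             for qj, rhoj in obstacles]
    return beta0 > 0 and all(b > 0 for b in betas)

q0 = np.zeros(2)                                 # workspace center (assumed)
obstacles = [(np.array([2.0, 0.0]), 0.5)]        # one obstacle (center, radius)
inside = in_free_space(np.array([0.0, 1.0]), q0, 5.0, obstacles, 0.3)
colliding = in_free_space(np.array([2.2, 0.0]), q0, 5.0, obstacles, 0.3)
```

The second query places the agent center within r + ρ_1 of the obstacle center, so β_1 is negative and the test fails.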

Then, according to the definitions of γ and β, a polar function φ̂ : F → [0, ∞) is defined as follows:

φ̂ ≜ γ/β    (3.1)

From this equation, a valid navigation function will be defined.

3.2.1 Modification of φ̂

Since a valid navigation function has codomain [0, 1], the image of (3.1) must be squashed into [0, 1] by the diffeomorphism σ, where

σ(x) ≜ x/(1 + x).

Since σ(·) is a diffeomorphism, it preserves the properties of the navigation function, as proved in [23].

Note that the destination is a degenerate critical point of φ̂; in fact:

∇φ̂(d) = (k γ_d^{k−1} ∇γ_d β − γ_d^k ∇β)/β² |_{q=d} = 0

and


Figure 3.2: Workspace example (forbidden area and free space).

∇²φ̂(d) = [(k(k−1)γ_d^{k−2} ∇γ_d ∇γ_d^T β + k γ_d^{k−1} ∇²γ_d β + k γ_d^{k−1} ∇γ_d ∇β^T − ∇²β γ_d^k − k γ_d^{k−1} ∇β ∇γ_d^T) β² − 2β ∇β (k γ_d^{k−1} ∇γ_d β − γ_d^k ∇β)^T] / β⁴ |_{q=d} = 0.

Since d ∈ F, φ̂ is analytic on F and γ_d(d) = 0, so both expressions vanish at q = d. Hence φ̂ is not a Morse function; to make it Morse, the distortion σ_d(·) is applied:

σ_d(x) ≜ x^{1/k},   k ∈ N

When the agent reaches its destination, γ_d vanishes by definition and φ̂ becomes zero; thus, if a moving obstacle is heading toward the agent, the collision cannot be avoided, because the agent sits at the minimum and the function value cannot decrease further. To counteract this limitation, a function f : R → R is added, in such a way that the navigation function has a non-zero value when a collision is about to occur at the agent's final destination.

f ≜ { f̄(1 − 3β²/ε² + 2β³/ε³)    if β ≤ ε
    { 0                          if β > ε

where:

f̄ ∈ R⁺ is a tuning parameter and is the value of f when β equals 0,
ε > 0 is small enough.

The proposed navigation function is a combination of the functions described above:

φ = σ_d ∘ σ ∘ φ̂ ∘ (f + γ_d) ≜ ((f + γ_d)^k / ((f + γ_d)^k + β))^{1/k}    (3.2)

Figure 3.3 shows an example of the navigation function surface and its contour on R². It is clearly visible that there is only one global minimum, which corresponds to the destination point. Since there are no local minima, a path toward the destination can be obtained by following the negative gradient of the navigation function. Intuitively, the gradient is a vector field that is repulsive with respect to the obstacles and attractive with respect to the destination.
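The whole construction (3.2) can be exercised numerically. The sketch below uses a finite-difference gradient, sets f = 0 (no moving obstacle near the destination), and invents a small scenario; in particular k = 4 is just an assumed value, not a computed lower bound:

```python
import numpy as np

K = 4  # tuning exponent (assumed large enough for this scenario)

def beta(q, q0, rho0, obstacles, r):
    """Obstacle function: product of the workspace term and one term per obstacle."""
    b = (rho0 - r) ** 2 - np.dot(q - q0, q - q0)
    for qj, rhoj in obstacles:
        b *= np.dot(q - qj, q - qj) - (r + rhoj) ** 2
    return b

def phi(q, d, q0, rho0, obstacles, r, k=K):
    """Navigation function (3.2) with f = 0."""
    g = np.dot(q - d, q - d)                     # gamma_d
    return (g ** k / (g ** k + beta(q, q0, rho0, obstacles, r))) ** (1.0 / k)

def grad_phi(q, *args, h=1e-6):
    """Central-difference gradient of phi."""
    g = np.zeros_like(q)
    for m in range(q.size):
        e = np.zeros_like(q); e[m] = h
        g[m] = (phi(q + e, *args) - phi(q - e, *args)) / (2 * h)
    return g

# One normalized gradient-descent step from an interior configuration:
q0, rho0, r = np.zeros(2), 5.0, 0.3              # made-up workspace and radius
obstacles = [(np.array([1.5, 1.0]), 0.5)]        # one made-up obstacle
d = np.array([3.0, 0.0])                         # destination
q = np.array([-2.0, 0.5])
u = -grad_phi(q, d, q0, rho0, obstacles, r)
q_next = q + 0.1 * u / np.linalg.norm(u)
```

By construction φ evaluates to exactly zero at the destination and stays in [0, 1] on the free space, and each descent step strictly decreases it away from critical points.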

Lower bound of k. As proved in [5], to have a valid navigation function the parameter k has to be greater than a lower bound. Intuitively, since a lower value of k increases the value of the navigation function near the obstacles, if k is too small an undesired critical point may appear between the obstacles, and the agent can get trapped there.

Let Φ_ε be the set "ε-away from a collision", Φ_ε ≜ {q ∈ F \ {∂F ∪ B(ε) ∪ d}}, where B(ε) is the open annulus {q ∈ F : β ∈ (0, ε)}. A critical point of φ̂ in Φ_ε satisfies:

φ̂ = γ/β ⟹ ∇φ̂ = (β∇γ − γ∇β)/β² = 0 ⟹ β∇γ = γ∇β

⟹ β ∇(γ_d + f)^k = (γ_d + f)^k ∇β

Taking the magnitude of both sides yields:

kβ ||∇(γ_d + f)|| = (γ_d + f) ||∇β||

A sufficient condition for this equality not to hold is:

k > (γ_d + f) ||∇β|| / (β ||∇(γ_d + f)||)

which gives a lower bound on k.
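The right-hand side of this bound can be evaluated pointwise; a sketch for a single obstacle, with f = 0 and invented geometry (the actual bound is the supremum of this quantity over the set Φ_ε, and with several obstacles β is the full product):

```python
import numpy as np

def k_lower_bound(q, d, qj, rhoj, r):
    """Pointwise value of (gamma_d + f) ||grad beta|| / (beta ||grad(gamma_d + f)||)
    for a single obstacle and f = 0."""
    gamma_d = np.dot(q - d, q - d)
    beta_j = np.dot(q - qj, q - qj) - (r + rhoj) ** 2
    grad_gamma = 2.0 * (q - d)          # gradient of gamma_d
    grad_beta = 2.0 * (q - qj)          # gradient of beta_j
    return gamma_d * np.linalg.norm(grad_beta) / (beta_j * np.linalg.norm(grad_gamma))

# Made-up sample point between start and destination:
bound = k_lower_bound(q=np.array([0.0, 0.0]), d=np.array([3.0, 0.0]),
                      qj=np.array([1.5, 1.0]), rhoj=0.5, r=0.3)
```

Note how the bound grows as β_j shrinks, i.e. closer to the obstacle a larger k is needed for the equality above to fail.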

An example of this vector field is shown in figure 3.4
