Kandidatexjobb i elektroteknik 2018
This book presents the results of the bachelor thesis course (kandidatexjobbskursen) for the Electrical and Systems Engineering degree programme at KTH. The work was done in groups of two and ran throughout the entire spring term of 2018; the scope of the course corresponds to 10 weeks of full-time studies. This year, students from four different programmes participated: 52 course participants studied Electrical Engineering, 21 Engineering Physics, 17 Vehicle Engineering, 11 Energy and Environment, and two course participants were exchange students. This year, projects were offered within the framework of 14 larger contexts:
SMART AUTONOMOUS SYSTEMS IN ROBOTICS | Responsible: Dimos Dimarogonas
SMART AUTONOMOUS SYSTEMS IN TRANSPORTATION | Responsible: Jonas Mårtensson
LEARNING IN DYNAMICAL SYSTEMS | Responsible: Cristian Rojas
SMART CITIES: INFRASTRUCTURES | Responsible: Carlo Fischione
SMART CITIES: CYBERSECURITY | Responsible: Carlo Fischione
BIG DATA & AI | Responsible: Tobias Oechtering
FUSION - THE SUN’S ENERGY SOURCE ON EARTH | Responsible: Tomas Jonsson
EXPLORING JUPITER AND THE GALILEAN MOONS | Responsible: Nathaniel Cunningham
SPACE SYSTEMS | Responsible: Nickolay Ivchenko
INNOVATIVE ANTENNAS FOR WIRELESS SYSTEMS | Responsible: Oscar Quevedo Teruel
WAVE ENERGY | Responsible: Anders Hagnestål
WIND POWER INTEGRATION | Responsible: Mikael Amelin
HVDC GRIDS | Responsible: Staffan Norrga
POWER SYSTEM CONTROL | Responsible: Muhammad Almas

To learn how to carry out a technical or scientific project, the students have: followed a seminar series, participated in a workshop, drawn up a work plan, learned to cite and reference using reference management software, summarised the project results in reports that follow the standard of the scientific journal IEEE, given written and oral feedback on each other's reports, and, at the end of the course, presented their work at a joint presentation day.
The KEX book is organised by context. The results of joint reflections on each context's significance for society and the environment can be found as the introduction to each context in this book. These introductions also include short popular science texts. The individual project reports within each context then follow. The numbering reflects that some projects were not chosen this year, while other projects were carried out by up to three different project groups. Depending on the language of the report, the report titles in the table of contents are either in English or in Swedish.
A big thank you to everyone involved who made the bachelor thesis course a success. I would like to begin by mentioning all the kind and enthusiastic students, their tireless supervisors, the teachers who held the seminars (Joakim Lilliesköld, Hans Sohlström, Gabriella Hernqvist) and the reference management labs (Lorenz Roth and Gabriel Giono with assistants). Last but not least, the almost invisible but vital help provided by Kristin Linngård, who took care of the course administration. I myself was responsible for organising the course and held a couple of the seminars.
Anita Kullen
Course coordinator for the bachelor thesis course in electrical engineering 2018
Stockholm, 13 June 2018
CONTEXT A: SMART AUTONOMOUS SYSTEMS IN ROBOTICS
A1. Autonomous Trajectory Tracking for a Differential Drive Vehicle
A2. High Level Motion Planning for a Robot
A3. Motion Planning and Control of Unmanned Aerial Vehicles
CONTEXT B: SMART AUTONOMOUS SYSTEMS IN TRANSPORTATION
B1a. Self-organizing Buses - Headway-based Approach
B1b. Implementation of an Automatic Control Strategy to Minimize Headway Variance
B2a. Model Predictive Control Design for Vehicle Platooning
B3a. Autonomous Overtaking Using Reachability Analysis and MPC
B3b. Autonomous Highway Overtaking With Model Predictive Control
CONTEXT C1: LEARNING IN DYNAMICAL SYSTEMS
C1a. Evaluation of Independent Component Analysis Algorithms for Separation of Voices
C1b. Change Point Detection and Kernel Ridge Regression for Trend Analysis on Financial Data
C1c. Evaluating Different Algorithms for Detecting Change-points in Time Series
CONTEXT C2: LEARNING IN DYNAMICAL SYSTEMS
C2a. Portfolio Inversion: Finding Market State Probabilities From Optimal Portfolios
C2b. Portfolio Optimization with Market State Analysis
C3a. Incrementally Expanding Environment in Deep Reinforcement Learning
C3b. Deep Reinforcement Learning for Snake
C3c. Reinforcement Learning for Video Games
CONTEXT C3: LEARNING IN DYNAMICAL SYSTEMS
C4a. A Study of Reinforcement Learning in Multi-Agent Systems
C4b. Deep Reinforcement Learning in Distributed Optimization
C4c. Exploring Deep Reinforcement Learning Algorithms for Homogeneous Multi-Agent Systems
C5a. Real-time System Control With Deep Reinforcement Learning
C5b. Generalizing Deep Deterministic Policy Gradient
CONTEXT D: SMART CITIES: INFRASTRUCTURES
D1. Message Prioritization for Autonomous Vehicle Communication
D3a. Modelling Communication Networks for Active Traffic Management
D3b. Modeling Communication Network in Electrical Grids
CONTEXT E: SMART CITIES: CYBERSECURITY
E2a. Internet of Things Hacking
E2b. Security Testing of an OBD-II Connected IoT Device
E3. Modelling and Security Analysis of Internet Connected Cars
CONTEXT F1: BIG DATA AND AI
F1. Mobile Network Traffic Prediction Based on Machine Learning
F2a. Source Localization by Inverse Diffusion and Convex Optimization
F2b. Inverse Diffusion by Proximal Optimization with TensorFlow
CONTEXT F2: BIG DATA ANALYSIS
F3a. Automatic Sleep Scoring Using Keras
F3b. Power Spectral Density Based Sleep Scoring Using Artificial Neural Networks
F3c. Machine Learning for Sleep Scoring
F4a. Testing a MIMO Channel for Stationarity
F4b. Estimation of Statistical Properties in a Mobile MIMO System
CONTEXT F3: CLASSIFICATION ALGORITHMS AND RECOMMENDER SYSTEMS
F5a. Recommender Systems for Movie Recommendations
F5b. Recommender Systems for Movies Using a Class of Neural Networks
CONTEXT G: FUSION - THE SUN’S ENERGY SOURCE ON EARTH
G2. The Mapping and Visualization of Deuterium in Surfaces of Plasma Facing Components
G3. Modeling of Current Drive with Radio Waves on DEMO
CONTEXT H: EXPLORING JUPITER AND THE GALILEAN MOONS
H2. Analysis of the Ion Composition in the Io Plasma Torus From Observations by the New Horizons Mission
H3. Modeling Far Ultraviolet Auroral Ovals at Ganymede
CONTEXT I: SPACE SYSTEMS
I4. Recovery System for a Free Falling Unit
I5. Ejection System for REXUS PRIME
CONTEXT K: INNOVATIVE ANTENNAS FOR THE NEW GENERATION OF WIRELESS SYSTEMS
K1. Helical Waveguides With Higher Symmetries
K2. Study of Periodic Structures with Higher Symmetries
CONTEXT L: WAVE ENERGY
L1. Consequences of Magnetic Properties in Stainless Steel for a High-efficiency Wave Power Generator
L3. Linear Ferrite Generator Prototype for Wave Power
CONTEXT M: WIND POWER INTEGRATION
M1. Grid Capacity and Upgrade Costs
M2. Koordinering av vindkraft och vattenkraft (Coordination of Wind Power and Hydropower)
CONTEXT N: HVDC GRIDS
N1. Operation and Control of HVDC Grids
CONTEXT O: POWER SYSTEM CONTROL
O1. Rapid Prototyping of Microgrid Controllers for Autonomous and Grid-Connected Operation
O2. Adaptive Protection Scheme for Microgrid
THE AUTONOMOUS FUTURE OF FOOD DELIVERY
As you enter the kitchen, the smell of burnt food reaches your nostrils. In five minutes, your date will arrive. What if you somehow could save this evening? With the click of a button, a drone delivers a lovely dinner to your back door, just before your date rings the bell.
The portrayed scenario is only one application where a drone can improve your everyday life. Drones, whether aerial or ground vehicles, undoubtedly have a place in our society, and not only because they will be able to deliver your mail on weekends. They will aid humans by being quicker than any car, and by reaching their destination regardless of terrain, traffic and gas prices, while making roads safer and less polluted. This scenario isn't as far off in the future as one might think. Pizzas, for example, are already being delivered by drones in New Zealand.
Ground-based delivery robots are also on the rise. With a rigid security system, robots like those from Starship Technologies will be tough to steal from and will likely form a new standard of delivery practices going forward. They hold several benefits compared to aerial drones in that they can be a lot more sturdy and heavy, which makes them perfect for non-emergency deliveries.
Today, further research is being done on how to obtain more useful information from the sensors onboard
autonomous vehicles, such as smart cameras and distance detectors to scan the surroundings. In many ways
this is similar to how a bat detects obstacles through sonar. Another area of research is how to optimise the
path planning algorithms to reach higher efficiency and reliability without disturbing the environment the
robots operate in.
Autonomous vehicles have already proven themselves to be a superior delivery system in many ways. The extent of drone use in the immediate future will depend on how efficient and robust the system can be compared to traditional human delivery services in the environment it is deployed in. No matter what, it will play a major role in the future quality of life and contribute to a more advanced society.
The emerging autonomous industry has recently allowed ground and aerial vehicles to be easy to produce while reducing their cost. Because of these advances, autonomous robots have already started to be introduced into our society. With further research in this field, there will be many more
implementations in the future, performing tasks that
we still cannot imagine. One of the more demanding
challenges right now is making autonomous vehicles
move around safely in dynamic environments and
optimising the path to follow.
In context A we look at this problem from several
points of view. For project A2 the purpose is to study
high level motion planning for a robot. This is done
by constructing a framework which returns a plan
for a given task expressed in linear temporal
logic (LTL). The task could, for example, be to order a
robot to visit specified rooms and avoid obstacles or
some specific areas.
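To make such a task concrete, it can be written as an LTL formula; the following specification is an illustrative example of our own, not one taken from the project:

```latex
% Hypothetical task: eventually visit room r1 and, after that, room r2,
% while always avoiding the forbidden region o.
\varphi \;=\; \lozenge\bigl(r_1 \wedge \lozenge r_2\bigr) \;\wedge\; \square\,\lnot o
```

Here ◇ ("eventually") and □ ("always") are the standard temporal operators; a motion planner then searches for a trajectory whose sequence of visited regions satisfies φ.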
An intuitive way to make an autonomous robot
navigate in space is trajectory planning through potential fields. This mathematical way of modelling the environment lets the robot interact with obstacles as if they exerted virtual forces on the robot. This method works for both ground and aerial vehicles, which have been used in projects A1 and A3 respectively. Each project has implemented the mathematical model in a way that specifically suits its own vehicle model.
Project A1 details trajectory planning for autonomous differential drive ground vehicles (AGVs) and tests suitable methods for path planning, obstacle avoidance and formation control using potential fields and control theory. The proposed implementation of potential field-based movement is tested for ground-based robots, and the simulations are verified in Matlab.
In project A3, potential fields are applied to track and navigate multiple autonomous aerial vehicles in an environment filled with obstacles. A mathematical model and a linearized state feedback controller are implemented in order to control the individual drones. The main goal of this project is to get all the drones to fly safely to their respective goals, using potential fields, without colliding or getting stuck.
Our mathematical models can help autonomous vehicles operate in any environment, regardless of their previous knowledge of their surroundings. Making simple mathematical models for autonomous path planning and system control will provide a tool for others to develop and find new applications for autonomous vehicles.
Future projects inspired by project A2 could use
more complex environments and focus more on the
transitions in the environment. One could also use
a different mathematical model of the environment
and compare the results to ours. A more ambitious
project would be to include the transition time
between different states of the framework.
The emergence of drones, possibly causing congested air traffic in specific areas, is something that has to be addressed in future research. Another problem that hasn't been touched upon in this context is how to provide the control system with good enough data from the sensors. This is an area which requires more research. Furthermore, our mathematical models do not take winds, road conditions and other real-world variables into account.
The projects in this context have focused mainly on developing mathematical models and simulations for autonomous vehicles. A suggestion for future projects would be to implement these models on a physical vehicle and verify the results of the simulations.
IMPACT ON SOCIETY AND ENVIRONMENT
There are many different views on how autonomous
systems will impact humanity. An obvious benefit of
the autonomous systems that are emerging around
the world today is automated emergency response.
Instead of a traditional ambulance setting out to save
someone’s life, a specialised ambulance drone can
reach its destination much quicker than any car. In
the same way, police drones could be dispatched and
bypass any obstructions that a perpetrator might use
to get away, such as traffic jams or bad ground
visibility. Another use of such drones is disaster surveying, where drone swarms can collectively map out huge areas in a matter of seconds with a single click on a map. The flying nature of drones makes this especially useful after a tsunami, earthquake or flood, when ground accessibility may be very limited.
The use of autonomous vehicles will improve people's daily lives, but human coexistence with robots will also bring up privacy concerns. Today's most advanced autonomous vehicles utilise smart cameras in order to move around or perform surveillance tasks, and these images could be recorded by the manufacturer. Automated systems can also be hacked and leak information about individuals' personal lives. On top of that, dictatorial regimes could use automated systems to monitor their citizens. The dilemma is whether this privacy is more important than the security that a surveillance drone can offer by recognising a burglary or an accident and sending that information to the emergency services.
As more tasks can be automated with robots and carried out from the comfort of our own homes, society might become more disconnected. If people don't have a reason to go outside for shopping and other everyday tasks, we might not socialise and interact in person as much. By not being as active as before, the risk of certain diseases could also increase.
Automating the industry and transportation sectors allows us to use our planet's resources more efficiently. Automation can reduce industrial material consumption by optimising resource management using, for example, automated precision tools. In transportation, automation can be used to optimise path planning and traffic flow, thus reducing emissions. This is beneficial for the environment, but also economically beneficial for the transportation sector. On the other hand, introducing autonomous systems will increase the demand for lithium-ion batteries, and the process of extracting lithium has a highly negative impact on the environment.
Without any doubt, the weaponisation of autonomous systems will be a considerable concern in the future. Letting an algorithm choose how a weapon should deploy distances the user both physically and emotionally from the consequences of the act. This could increase the damage done to civilians and infrastructure through algorithmic miscalculation and abuse. If all you need to kill a target is to provide an image of the target to a computer program, our society can suddenly become incredibly unsafe. Even more terrifying is the ability to order killings based on human characteristics, such as skin colour, rather than targeting a specific person. On the contrary, one might argue that using autonomous weapons will decrease the death toll by not risking soldiers' lives, safely letting the weapon operate on its own. Autonomous systems can also reduce collateral damage because of their superior precision. We do, however, consider the dangers of autonomous weapons to far outweigh the potential benefits, even if the systems can prove useful in some cases.
In conclusion, many ethical questions remain to be discussed when it comes to autonomous vehicles and robots, but as of now we still think that the benefits of automation will overall outweigh the problems that come with it. We ultimately consider the automation of robots to be an improvement for the world as a whole.
Autonomous Trajectory Tracking for a
Differential Drive Vehicle
Mikael Glamheden and Simon Eriksson
Abstract—This paper explores controlling a two-wheeled differential drive vehicle using path planning algorithms and potential fields in order to track a target area while avoiding obstacles. Additionally, formation control was investigated using potential fields and a virtual structure approach separately. Finally, communication constraints in the form of sampling, disturbances and quantization are taken into account, and theoretical analysis results are given. It was concluded that the potential fields method results in an intuitive and dynamic controller that can be used to navigate within a large-scale and dynamic environment, as well as for formation control. The virtual structure approach is more robust when dealing with formation control, but it does not consider obstacle avoidance on its own.
I. INTRODUCTION
Autonomous ground vehicles (AGVs) form a field which has caught the attention of the public and the industry alike over the past several decades. It is a field where a lot of investments are being made, not only in basic research but also in numerous applications within industry and government institutions. It is estimated that more than $80 billion will be invested in autonomous vehicles in 2018 alone [1].
Most of these investments are fueled by the prospect of self-driving cars. Newcomers like Waymo and Tesla are competing head to head with established automakers like Ford and GM in the race to reach full autonomy. The systems are complex and are dependent on advancements in multiple fields such as sensor technology, machine learning and control theory [2], [3].
AGV technology, however, has applications in many other areas as well, and the vehicles appear in many shapes and forms. AGVs are especially advantageous when doing tasks that are either tedious, dangerous or impossible to do with a man-operated vehicle. Among such tasks are farming and surveillance, military operations and construction work. AGVs also have many applications in space exploration where small and versatile robots could explore celestial bodies without needing to be closely monitored by humans.
This paper will approach the topic of autonomous ground vehicles from the aspect of control and communication between vehicles. The purpose of this project is to study an autonomous non-holonomic differential-drive vehicle with two wheels. The vehicle should be able to perform trajectory tracking and obstacle avoidance as well as drive in specified formations. Simulations are done to test how well the vehicle performs. The performance of the vehicle will also be evaluated while it is subject to certain communication constraints such as sampling, disturbance and quantization. The ambition of this report is to sum up a number of problems from the
Fig. 1. Model of differential drive vehicle. Figure from [4].
area of control theory that arise when dealing with this vehicle model, and to present them in a language that is at the level of a bachelor student.
The structure of the report is as follows. In Section II the mathematical notation used in the report is briefly explained. Then in Section III, the vehicle model that will be used throughout the report is presented and shown to be controllable. Further, in Sections IV-VII a number of problems from the area of control theory are introduced. These are all introduced with the application of the model from Section III in mind. Then in Section VIII we present a number of simulations with the intention to test the theory introduced in Sections IV-VII. We then go on to analyze the results in Section IX. The same section includes our reflections on the subject as a whole. Finally, our conclusions are found in Section X.
II. MATHEMATICAL NOTATION
This section describes the mathematical notation that is going to be used throughout the paper.
A capitalized letter, such as M, typically denotes a matrix. A boldface lowercase letter denotes a vector. I_n is the identity matrix in R^{n×n}. ||·|| denotes the Euclidean norm. λ_i(A) denotes the eigenvalues of the matrix A. det(A) is the determinant of the matrix A. Newton's notation is used for time derivatives; that is, ẋ(t) is the first derivative of x with respect to t.
III. DIFFERENTIAL DRIVE MODEL
The vehicle that is to be studied in this project is a differential drive vehicle. A model of the vehicle can be seen in Fig. 1. Let the center of the vehicle be the guide-point whose
generalized coordinate is given by the vector q = [x, y, θ]^T. It is assumed that the wheels roll without slipping, which is equivalent to the vehicle being subject to the following non-holonomic constraint:

ẋ sin θ − ẏ cos θ = 0    (1)

Fig. 2. Block diagram of the vehicle model in chained form as described by (5).
According to [4], the kinematic model for a differential drive vehicle is then

\[
\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix}
=
\begin{bmatrix} \cos\theta \\ \sin\theta \\ 0 \end{bmatrix} v_1
+
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} v_2
\tag{2}
\]

where v_1 is the driving velocity and v_2 is the steering velocity of the vehicle.
A. Tracking Control Design
Since the system is not a linear system of the form

ẋ = Ax + Bu    (3)

it is difficult to do direct analysis on the system. "Given a mechanical system subject to non-holonomic constraints, it is often possible to convert its constraints into the chained form either locally or globally by using a coordinate transformation and a control mapping" [4]. This can be done with (2), which will aid with the design of the tracking controller. In the special case of n variables and 2 inputs, the chained form is defined as [4]

\[
\dot{x}_1 = u_1,\quad \dot{x}_2 = u_1 x_3,\quad \dots,\quad \dot{x}_{n-1} = u_1 x_n,\quad \dot{x}_n = u_2 \tag{4}
\]

where x = [x_1, …, x_n]^T is the state, u = [u_1, u_2]^T is the control, and y = x is the output. The model (2) can be transformed to a system of the form (4) in the following way. The coordinates are transformed into
\[
z_1 = \theta,\qquad z_2 = x\sin\theta - y\cos\theta,\qquad z_3 = x\cos\theta + y\sin\theta
\]

and the control variables are transformed into

\[
u_1 = v_2,\qquad u_2 = v_1 - z_2 v_2
\]

This results in the following equations for the kinematic model from (2):

\[
\dot{z}_1 = v_2 = u_1,\qquad
\dot{z}_2 = z_3 v_2 = z_3 u_1,\qquad
\dot{z}_3 = v_1 - z_2 v_2 = u_2 \tag{5}
\]
As a block diagram the system would look like Fig. 2. We see that the system can be split up into two parts where the second part is dependent on the state of the first part.
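As a sanity check on the transformation above, the following Python snippet (our own sketch; the paper's simulations were done in Matlab) differentiates z1, z2 and z3 along the kinematics (2) and confirms that the chained-form equations (5) hold at an arbitrary state:

```python
import math

def chained_state(x, y, theta):
    """Coordinate transformation used to reach chained form."""
    z1 = theta
    z2 = x * math.sin(theta) - y * math.cos(theta)
    z3 = x * math.cos(theta) + y * math.sin(theta)
    return z1, z2, z3

def check(x, y, theta, v1, v2):
    """Verify (5) at one state: z1' = u1, z2' = z3*u1, z3' = u2."""
    # Kinematic model (2)
    xd = v1 * math.cos(theta)
    yd = v1 * math.sin(theta)
    td = v2
    _, z2, z3 = chained_state(x, y, theta)
    # Transformed inputs: u1 = v2, u2 = v1 - z2*v2
    u1, u2 = v2, v1 - z2 * v2
    # Time derivatives of z1, z2, z3 by the chain rule
    z1dot = td
    z2dot = xd * math.sin(theta) + x * math.cos(theta) * td \
        - yd * math.cos(theta) + y * math.sin(theta) * td
    z3dot = xd * math.cos(theta) - x * math.sin(theta) * td \
        + yd * math.sin(theta) + y * math.cos(theta) * td
    assert abs(z1dot - u1) < 1e-12
    assert abs(z2dot - z3 * u1) < 1e-12
    assert abs(z3dot - u2) < 1e-12

check(1.3, -0.7, 0.9, v1=0.5, v2=0.2)
```

The assertions pass for any state and input, since the identity sin²θ + cos²θ = 1 cancels the remaining terms exactly.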
Now that the system is in chained form, a linear model for the tracking error can be described. We get a cascade structure made out of two systems:

\[
\dot{z}_{1e} = w_1,\qquad y_{1e} = z_{1e} \tag{6}
\]

\[
\begin{bmatrix} \dot{z}_{2e} \\ \dot{z}_{3e} \end{bmatrix}
= \begin{bmatrix} 0 & u_{1d}(t) \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} z_{2e} \\ z_{3e} \end{bmatrix}
+ \begin{bmatrix} 0 \\ 1 \end{bmatrix} w_2
+ \begin{bmatrix} z_3 \\ 0 \end{bmatrix} w_1,
\qquad y_{2e} = z_{2e} \tag{7}
\]

where z_e = [z_{1e}, z_{2e}, z_{3e}]^T = z − z_d is the state tracking error, y_e = [y_{1e}, y_{2e}, y_{3e}]^T = y − y_d the output tracking error, and w = [w_1, w_2]^T = u − u_d is the feedback control to be designed. z_d, y_d and u_d are the desired state trajectory, the desired output trajectory and the open-loop steering control, respectively. Since the first sub-system described by (6) is of first order, it is trivial to design a stabilizing control w_1 for it. A simple negative feedback controller with constant gain can be used. The second sub-system described by (7) then simplifies to

\[
\begin{bmatrix} \dot{z}_{2e} \\ \dot{z}_{3e} \end{bmatrix}
= \begin{bmatrix} 0 & u_{1d}(t) \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} z_{2e} \\ z_{3e} \end{bmatrix}
+ \begin{bmatrix} 0 \\ 1 \end{bmatrix} w_2,
\qquad y_{2e} = z_{2e} \tag{8}
\]
since w_1 is stabilizing and assumed vanishing. It is now possible to apply the pole placement method to find a feedback control for the system described by (8). This method is described in [5]. The feedback law will be designed as

\[
w_2(t) = -l(t)\begin{bmatrix} z_{2e} \\ z_{3e} \end{bmatrix}
= -\begin{bmatrix} l_1(t) & l_2(t) \end{bmatrix}\begin{bmatrix} z_{2e} \\ z_{3e} \end{bmatrix}
\]

The poles of the system are given by its eigenvalues, which are described by the following determinant:

\[
\det\left(sI_2 - \left(\begin{bmatrix} 0 & u_{1d}(t) \\ 0 & 0 \end{bmatrix} - \begin{bmatrix} 0 \\ 1 \end{bmatrix} l(t)\right)\right)
= \det\begin{bmatrix} s & -u_{1d}(t) \\ l_1(t) & s + l_2(t) \end{bmatrix}
= s^2 + l_2(t)\,s + u_{1d}(t)\,l_1(t) \tag{9}
\]

The system (8) is stable if and only if all the poles lie in the left half-plane. From (9), one can see that this is always achievable by tuning the control gains l_1 and l_2. For example, if we want the poles to be (−3, −3), this results in the following stabilizing feedback law:

\[
l_1(t) = \frac{9}{u_{1d}(t)},\qquad l_2(t) = 6,\qquad
w_2(t) = -\begin{bmatrix} l_1(t) & l_2(t) \end{bmatrix}\begin{bmatrix} z_{2e} \\ z_{3e} \end{bmatrix} \tag{10}
\]

This finally results in the following feedback system:

\[
\begin{bmatrix} \dot{z}_{2e} \\ \dot{z}_{3e} \end{bmatrix}
= \begin{bmatrix} 0 & u_{1d}(t) \\ -\dfrac{9}{u_{1d}(t)} & -6 \end{bmatrix}
\begin{bmatrix} z_{2e} \\ z_{3e} \end{bmatrix} \tag{11}
\]
We see that the tracking controller works as long as u_{1d}(t) is non-vanishing. For the case that u_{1d}(t) is vanishing, other tracking controllers, such as an exponential time-function-based controller, can be used [4].
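The pole placement above is easy to verify numerically. The short Python check below (our own; the paper's simulations were done in Matlab) confirms that, for a few nonzero constant values of u_{1d}, the closed-loop matrix in (11) has a double pole at −3:

```python
import cmath

def closed_loop_poles(u1d):
    """Eigenvalues of the 2x2 closed-loop matrix from (11):
    [[0, u1d], [-9/u1d, -6]], computed via the trace/determinant formula."""
    a, b = 0.0, u1d
    c, d = -9.0 / u1d, -6.0
    tr = a + d
    det = a * d - b * c        # = 9 for every nonzero u1d
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

for u1d in (0.5, 1.0, -2.0):
    p1, p2 = closed_loop_poles(u1d)
    assert abs(p1 - (-3.0)) < 1e-9 and abs(p2 - (-3.0)) < 1e-9
```

The characteristic polynomial is s² + 6s + 9 = (s + 3)² regardless of the (nonzero) value of u_{1d}, matching the gains chosen in (10).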
IV. OBSTACLE AVOIDANCE & PATH PLANNING

In this section two different methods are proposed for making a robot move from point A to point B while avoiding any obstacles it detects. The first method is a programming-based approach where the robot's movements are decided by a set of conditional clauses. The second method is a mathematical approach which uses potential fields.
A. Obstacle Avoidance Using an Algorithm
The first type of path planning involves an algorithm that samples unobstructed points using the robot's sensors in order to proceed towards the goal. For this project, the sensors were assumed to be a camera with a wide field of view and a proximity sensor with a range longer than the robot's size by at least a factor of 2. The length of the robot will be denoted l below, and the scanning range is thus assumed to be at least 2l. The general task was to design the algorithm such that the robot would be able to avoid obstacles of any size and shape, assuming it was aware of its own placement in the environment. Of the several algorithms proposed, only one was fully completed, but no algorithm was used in the final robot controller, as discussed in Section IX.
The proposed algorithm is as follows:

Pre-processing (do this once):
1) Pick a point that the robot needs to get to; call this node p_end.
2) Connect the starting node p_start with p_end through a line.
3) Along this line, create nodes l cm apart, where each node has data pointing to the next node. Call these nodes "path nodes".

Query processing (repeat until the goal is reached):
1) If the robot is at a path node and no obstacle is detected, move to the next path node.
2) Else, if an obstacle has been detected:
   a) Turn α° to the left and move l cm forward.
   b) If there is still an obstacle, go to a). Else, turn α° to the right.
      i) If there is no obstacle now, move l cm forward.
      ii) If there is an obstacle, go to b).
      iii) If the robot can move to a path node without an obstacle in between, move to that node.
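As an illustration, the pre-processing step can be sketched in a few lines of Python. This is our own sketch with our own function and variable names; it assumes p_start and p_end are distinct 2D points, and is not part of the original controller:

```python
import math

def path_nodes(p_start, p_end, l):
    """Place path nodes every l units along the straight line from
    p_start to p_end (assumes p_start != p_end), ending at the goal."""
    (x0, y0), (x1, y1) = p_start, p_end
    dist = math.hypot(x1 - x0, y1 - y0)
    n = int(dist // l)  # number of full steps of length l that fit
    nodes = [(x0 + (x1 - x0) * k * l / dist,
              y0 + (y1 - y0) * k * l / dist) for k in range(n + 1)]
    if nodes[-1] != (x1, y1):
        nodes.append((x1, y1))  # make sure the goal itself is a node
    return nodes
```

Each node implicitly points to the next one by list order; a real implementation would also store the obstacle-check state per node.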
B. Obstacle Avoidance Using Potential Fields
A more dynamic way to make a robot drive to a predetermined goal while avoiding obstacles is to model the robot as a point mass in a potential field. In this field the robot will always try to move to a spot with lower potential. The goal is modeled as a point of low potential and the obstacles in turn are modeled as points of high potential. The negative gradient of the potential field will give the virtual forces acting on the robot. Points of high potential will create a repelling force on the robot as it approaches them, while the robot is drawn to
points of lower potential. The control inputs to the robot are then described as functions of these forces, which in turn will make the robot drive away from obstacles as well as move towards the goal. The methods that will be used here can be found in [6].
A simple and commonly used function for attractive potential is

\[
U_{att}(q) = \tfrac{1}{2}\,\xi\,\|q - q_{goal}\|^2 \tag{12}
\]

where q = [x, y]^T are the Cartesian coordinates of the vehicle, q_goal are the Cartesian coordinates of the goal, and ξ is a positive scaling factor. This is the function that has been used in this thesis. The negative gradient of (12) gives the attracting force on the robot:

\[
F_{att}(q) = -\nabla U_{att}(q) = \xi\,(q_{goal} - q) \tag{13}
\]

That is, the attracting force is simply the difference between the goal's position and the robot's position, scaled by some positive real number ξ.
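A quick finite-difference check (our own Python sketch, with an arbitrarily chosen ξ and goal) confirms that (13) is indeed the negative gradient of the quadratic potential (12):

```python
XI = 0.7            # positive scaling factor (our choice)
GOAL = (4.0, -1.0)  # goal coordinates (our choice)

def u_att(x, y):
    """Attractive potential (12): 0.5 * xi * ||q - q_goal||^2."""
    return 0.5 * XI * ((x - GOAL[0]) ** 2 + (y - GOAL[1]) ** 2)

def f_att(x, y):
    """Analytic attractive force (13): xi * (q_goal - q)."""
    return XI * (GOAL[0] - x), XI * (GOAL[1] - y)

def grad_check(x, y, h=1e-6):
    """Compare -grad U (central differences) against (13)."""
    fx = -(u_att(x + h, y) - u_att(x - h, y)) / (2.0 * h)
    fy = -(u_att(x, y + h) - u_att(x, y - h)) / (2.0 * h)
    ax, ay = f_att(x, y)
    assert abs(fx - ax) < 1e-6 and abs(fy - ay) < 1e-6

grad_check(0.0, 0.0)
grad_check(-2.5, 3.0)
```

For a quadratic potential the central difference is exact up to rounding, so the tolerance is comfortably met.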
The potential field function for the repelling force from the obstacles can be chosen as
U_{rep}(q) = \begin{cases} \frac{1}{2}\eta\left(\frac{1}{\|q-q_{obs}\|} - \frac{1}{\rho_0}\right)^2, & \|q - q_{obs}\| \le \rho_0 \\ 0, & \|q - q_{obs}\| > \rho_0 \end{cases}   (14)
where q is the same as in (12), q_obs are the coordinates of the obstacle, and ρ0 is the radius of influence of the obstacle (seen as the detection distance of the robot's sensors). η is a positive real number. The function U_rep(q) is non-negative for all q, vanishes outside the radius of influence, and grows without bound as ||q − q_obs|| approaches zero.
With the potential field given in (14) the repelling force F_rep(q) can be formulated as the negative gradient of the potential field

F_{rep}(q) = -\nabla U_{rep}(q) = \begin{cases} \eta\left(\frac{1}{\|q-q_{obs}\|} - \frac{1}{\rho_0}\right)\frac{q - q_{obs}}{\|q-q_{obs}\|^3}, & \|q - q_{obs}\| \le \rho_0 \\ 0, & \|q - q_{obs}\| > \rho_0 \end{cases}   (15)

From [6] we get that the control input u for the vehicle can be related to the desired trajectory in the following way
u = G^{\#}(x)\dot{x}_d = [G^T(x)G(x)]^{-1}G^T(x)\dot{x}_d   (16)

where G(x) is the matrix of the kinematic model and x_d is the desired trajectory of the vehicle. G^{#}(x) is the pseudo-inverse of the matrix G(x). In the case where the kinematic model is as in (2) and the coordinate system is the one of Fig. 1, G(x) and x_d will be the following

G(x) = \begin{bmatrix} \cos\theta & 0 \\ \sin\theta & 0 \\ 0 & 1 \end{bmatrix}, \quad x_d = \begin{bmatrix} x_d \\ y_d \\ \theta_d \end{bmatrix}   (17)
Using (16) and combining it with (17) results in the following relation between control inputs and desired trajectory

u = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} \dot{x}_d\cos\theta + \dot{y}_d\sin\theta \\ \dot{\theta}_d \end{bmatrix}   (18)
The repelling force F_rep(q) can be related to the desired trajectory variables as shown in the following two equations

\begin{bmatrix} \dot{x}_d \\ \dot{y}_d \end{bmatrix} = k_1 F_{rep}(q) = k_1 \begin{bmatrix} F_x(q) \\ F_y(q) \end{bmatrix}   (19)

and

\dot{\theta}_d = k_2\left(\theta - \arctan\frac{F_y(q)}{F_x(q)}\right)   (20)
where k_1, k_2 \in \mathbb{R} are gains. Combining Equations (18), (19) and (20) results in

u = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} k_1(F_x(q)\cos\theta + F_y(q)\sin\theta) \\ k_2\left(\theta - \arctan\frac{F_y(q)}{F_x(q)}\right) \end{bmatrix}   (21)
which describes the control input to the robot from a single obstacle. The total control input to the robot as a result of all obstacles will be the sum

u_{tot} = u^{(1)} + u^{(2)} + \dots + u^{(m)}

where u^{(j)} denotes the contribution (21) from obstacle j, and the sum runs over all m obstacles that are closer to the robot than the distance ρ0.
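As a concrete illustration, the forces (13) and (15) and the control law (21) can be evaluated numerically. The sketch below is not the thesis's simulation code; the gain values mirror those chosen later in (55), and `atan2` is used instead of a plain arctangent to keep the angle well-defined when F_x is small:

```python
import math

XI, ETA, RHO0 = 1.0, 8.0, 3.0   # xi, eta and influence radius rho0, as in (55)
K1, K2 = 1.0, 4.0               # control gains k1, k2 from (21)

def f_att(q, q_goal):
    """Attractive force (13): xi * (q_goal - q)."""
    return [XI * (g - p) for p, g in zip(q, q_goal)]

def f_rep(q, q_obs):
    """Repulsive force (15); zero outside the influence radius rho0."""
    d = math.dist(q, q_obs)
    if d > RHO0:
        return [0.0, 0.0]
    scale = ETA * (1.0 / d - 1.0 / RHO0) / d**3
    return [scale * (p - o) for p, o in zip(q, q_obs)]

def control_input(q, theta, force):
    """Control law (21) for one total force vector [Fx, Fy]."""
    fx, fy = force
    u1 = K1 * (fx * math.cos(theta) + fy * math.sin(theta))
    u2 = K2 * (theta - math.atan2(fy, fx))
    return u1, u2

# Robot at the origin heading along +x, goal ahead, one obstacle nearby
q, theta = [0.0, 0.0], 0.0
goal, obstacle = [10.0, 0.0], [2.0, 0.5]
ftot = [a + r for a, r in zip(f_att(q, goal), f_rep(q, obstacle))]
print(control_input(q, theta, ftot))
```

The resulting u1 is positive (the net force points roughly toward the goal), while u2 is small because the obstacle only slightly deflects the force direction.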
V. SAMPLING
So far the control inputs to the system have been able to update in real time. In a real-world application, however, the control input is implemented on a digital platform and will thus be subject to sampling. Therefore, we will now consider how to maintain stability for the system in the case where the control inputs to the robot can only be updated at multiples of a time interval T. The argumentation in this section is derived from [7].
Consider the system in (5). Let

x_0 = z_1, \quad x = \begin{bmatrix} z_2 \\ z_3 \end{bmatrix}

so that the system described by (5) becomes

\dot{x}_0 = u_1   (22)

\dot{x} = Ax + bu_2 = \begin{bmatrix} 0 & u_1 \\ 0 & 0 \end{bmatrix}x + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u_2   (23)

A. Definition of Stability
To say that the system is stable under sampling we first have to define stability. In this case stability means that for any given initial state x(t_0), one has

\lim_{t\to\infty} \|x(t)\| = 0,   (24)

which we call asymptotic stability. During the sampling period [kT, (k+1)T], u_1 and u_2 are assumed constant. That is, the values of u_1 and u_2 are assumed to update instantaneously, and this happens at the sampling instants.
B. Design of Feedback
Consider now the second system, described by (23). Its solution over a sampling interval can be written as

x(t) = e^{A(t-kT)}x(kT) + \left(\int_0^{t-kT} e^{A\sigma}b\, d\sigma\right)u_2(kT)   (25)

Evaluated at t = (k+1)T the expression can be rewritten as

x((k+1)T) = \hat{A}x(kT) + \hat{b}u_2(kT)   (26)

where

\hat{A} = e^{AT} = \begin{bmatrix} 1 & u_1(kT)T \\ 0 & 1 \end{bmatrix}   (27)

\hat{b} = \int_0^T e^{A\sigma}b\, d\sigma = \begin{bmatrix} \frac{T^2}{2}u_1(kT) \\ T \end{bmatrix}   (28)

Now introduce the transformation
H(k) = u1(kT ) 0 0 1 , J = 0 1 0 0 (29) The transformation is introduced so that we get a system that is time-invariant during the sampling period and thus rendering it simpler to find the conditions that make up a stable feedback law. This is shown now.
Equations (27) and (28) can be rewritten as

\hat{A} = H(k)e^{JT}H(k)^{-1}   (30)

\hat{b} = H(k)\int_0^T e^{J\sigma}b\, d\sigma   (31)

and the coordinates can be transformed into

\bar{x}(kT) = H(k)^{-1}x(kT)   (32)

The whole system can then be rewritten as

\bar{x}((k+1)T) = \bar{A}\bar{x}(kT) + \bar{b}u_2(kT)   (33)

where

\bar{A} = \begin{bmatrix} \frac{u_1(kT)}{u_1((k+1)T)} & 0 \\ 0 & 1 \end{bmatrix} e^{JT} = H(k+1)^{-1}\hat{A}H(k)   (34)

\bar{b} = \begin{bmatrix} \frac{u_1(kT)}{u_1((k+1)T)} & 0 \\ 0 & 1 \end{bmatrix}\int_0^T e^{J\sigma}b\, d\sigma = H(k+1)^{-1}\hat{b}   (35)

The system described by (33) is time-invariant under the assumption that u_1(kT)/u_1((k+1)T) is constant, which holds in the interval [kT, (k+1)T] according to the assumptions made previously.
We are now ready to design the feedback laws. The first system, described by (22), is a simple first order linear system. The feedback law is simply given as

u_1(kT) = -k_1x_0(kT), \quad k_1 > 0   (36)

Using the above expression and the assumption that u_1, u_2 are constant during a sampling period, we can formulate the following relation:

x_0((k+1)T) - x_0(kT) = Tu_1(kT) = -k_1Tx_0(kT)

\Rightarrow x_0((k+1)T) = (1 - k_1T)x_0(kT)

for which |1 - k_1T| < 1 has to hold for stability, that is, 0 < T < 2/k_1. For example, k_1 = 1, T = 1/2 would be a stable scenario.
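The scalar stability condition can be checked numerically. This short sketch, using illustrative values, iterates the closed-loop map x0((k+1)T) = (1 − k1T)x0(kT):

```python
def simulate_x0(x0, k1, T, steps=50):
    """Iterate the sampled closed loop x0 <- (1 - k1*T) * x0."""
    for _ in range(steps):
        x0 = (1 - k1 * T) * x0
    return x0

print(simulate_x0(10.0, k1=1.0, T=0.5))   # |1 - k1*T| = 0.5 < 1: decays
print(simulate_x0(10.0, k1=1.0, T=2.5))   # |1 - k1*T| = 1.5 > 1: diverges
```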
For the second system, described by (33), we design the following feedback law

u_2(kT) = -k_2\bar{x}(kT) = -k_2H(k)^{-1}x(kT)   (37)

where k_2 is a row vector of gains. Combining Equations (33) and (37) results in the following equation

\bar{x}((k+1)T) = (\bar{A} - \bar{b}k_2)\bar{x}(kT)   (38)

For this to be a stable system, the condition

|\lambda_i(\bar{A} - \bar{b}k_2)| < 1

has to hold for all eigenvalues \lambda_i. For example, if k_1 = 1, T = 1/2 then

k_2 = \begin{bmatrix} 3 & 2 \end{bmatrix}

is one stable choice for k_2. The results above are summarized as follows.
Given a differential drive vehicle described by

\dot{x}_0 = u_1   (39)

\dot{x} = Ax + bu_2 = \begin{bmatrix} 0 & u_1 \\ 0 & 0 \end{bmatrix}x + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u_2,   (40)

the sampled data controller is given by

u_1(kT) = -k_1x_0(kT)

u_2(kT) = -k_2H(k)^{-1}x(kT).   (41)

If the control gains k_1, k_2 and the sampling interval T satisfy

k_1 > 0, \quad |1 - k_1T| < 1, \quad |\lambda_i(\bar{A} - \bar{b}k_2)| < 1,   (42)

then the system (39)-(40) is asymptotically stable. Here \bar{A} and \bar{b} are the results of the transform

H(k) = \begin{bmatrix} u_1(kT) & 0 \\ 0 & 1 \end{bmatrix}, \quad J = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}   (43)

\bar{A} = \begin{bmatrix} \frac{u_1(kT)}{u_1((k+1)T)} & 0 \\ 0 & 1 \end{bmatrix} e^{JT} = H(k+1)^{-1}e^{AT}H(k)   (44)

\bar{b} = \begin{bmatrix} \frac{u_1(kT)}{u_1((k+1)T)} & 0 \\ 0 & 1 \end{bmatrix}\int_0^T e^{J\sigma}b\, d\sigma = H(k+1)^{-1}\int_0^T e^{A\sigma}b\, d\sigma.   (45)
C. Sampled Controller in Original Coordinate System
The controller (41) is written in terms of the chained form transformation (5). We can easily rewrite the controller in terms of the original control inputs v_1, v_2 and the original coordinate system with variables x, y, θ, that is, the system in (2). Applying the transformation (41) backwards, we end up with

v_1 = \left(\frac{k_{21}}{k_1\theta} - k_1\theta\right)(x\sin\theta - y\cos\theta) - k_{22}(x\cos\theta + y\sin\theta)

v_2 = -k_1\theta   (46)

where x, y, θ all are functions of time and k_{21}, k_{22} are the components of the gain vector k_2.
VI. DISTURBANCES AND QUANTIZATION
This part of the project was meant to investigate the effects of disturbances and quantization and to evaluate how robust the designed controllers are in the presence of communication constraints. The first method tested involved randomly detecting obstacles for one detection cycle with a pre-defined probability. Next, the robot's margin of error was tested by adding a constant error, oriented in different directions, to the position of detected obstacles. A range error was also introduced so that the obstacle detection range was severely shortened. All the disturbances were tested simultaneously as well as independently.
Quantization is the issue of a digital measurement rounding analog inputs. This simulation was intended to recreate what would happen if the computational capabilities of the robot were severely limited, and to measure how much of an error such a disturbance would induce. Quantization was applied in two instances: by rounding the detected distance between the robot and an obstacle, and by rounding the precision of the force calculations. Both of these obstructions limit the robot's ability to compute precisely.
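A minimal way to emulate both quantization effects is to round every measurement to a fixed step size; the step values below (a decimeter for distances, one decimal for forces) follow the final test case described in Section VIII-C, and the sample inputs are illustrative:

```python
def quantize(value, step):
    """Round a measurement to the nearest multiple of `step`."""
    return round(value / step) * step

# Distance reading quantized to the nearest decimeter (0.1 m)
print(quantize(1.234, 0.1))
# Force component quantized to one decimal
print(quantize(-0.276, 0.1))
```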
VII. FORMATION CONTROL
The problem of making a group of robots follow a desired trajectory, while also keeping a specified geometric formation, is called formation control. In this section two different methods are proposed for formation control. The first method uses potential fields, which have already been discussed earlier in the paper. The second method models the formation as a virtual structure.
A. Formation Control Using Potential Fields
Implementation of this version of formation control is straightforward and an iteration of previous work. By utilizing the potential field controller, it is possible to manipulate the way the robots navigate the track by adding a non-stationary goal. Instead of placing the attracting goal at the end of the track, the goal is placed in relation to another robot. Additionally, the robots are modeled as moving obstacles to each other, so that robots that get too close to each other are repelled. This enables the robots to hold a formation specified by the goal placement. Two types of formations were tested: a platooning approach, where the robots moved in a straight line, each following the robot in front of it, and a leader-follower approach, where two robots attempted to form a triangle in relation to a leader robot. Obstacle avoidance was still active for all robots, and could therefore occur in tandem with this type of formation control.
B. Formation Control Using Virtual Structure
The idea of the virtual structure approach for formation control is to introduce a virtual robot with the same kinematics as the actual robots. This virtual robot is set to follow a desired trajectory. The robots are then related to the position of the virtual one to form a geometric shape as they follow the desired trajectory. This is arguably a better approach than making one of the robots the leader of the formation, since such an approach makes every robot completely dependent on the robots ahead of it. If a leader robot were to fail, the whole formation would fail, while the virtual structure approach allows for greater robustness and reliability.

Fig. 3. Desired position of robots in relation to the virtual structure for a given formation shape.
In this paper we will present the more general case of formation control using the virtual structure approach for n vehicles with the kinematic model (2), presented in Section III. Then a kinematic control law originally proposed in [8] is presented. For a proof of the stability of this control law we refer to the same paper, [8]. We will then simulate a special case using a formation of three robots in an equilateral triangle. The results of these simulations are shown in Section VIII.
1) Virtual Structure for Differential Drive Vehicle: Consider n identical differential drive robots with kinematic model (2). The equation for the kinematic model is repeated here for clarity:

\dot{q}_i = \begin{bmatrix} \dot{x}_i \\ \dot{y}_i \\ \dot{\theta}_i \end{bmatrix} = \begin{bmatrix} \cos\theta_i \\ \sin\theta_i \\ 0 \end{bmatrix}v_i + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\omega_i   (47)

In (47), q_i = [x_i, y_i, \theta_i]^T are the state variables, and v_i and \omega_i are the forward velocity and the angular velocity, respectively, of robot i. We now introduce an additional virtual robot with the same kinematics as the other ones. This one will form the virtual center of the formation. The desired position and orientation of the other robots are then related to the coordinates of the virtual center using vectors p_i = (p_{xi}, p_{yi})^T. See Fig. 3 for an example case. The desired trajectories (x_i^d, y_i^d) of the robots are therefore

x_i^d = x_{vs}^d + p_{xi}\cos\theta_{vs}^d - p_{yi}\sin\theta_{vs}^d

y_i^d = y_{vs}^d + p_{xi}\sin\theta_{vs}^d + p_{yi}\cos\theta_{vs}^d   (48)

where (x_{vs}^d(t), y_{vs}^d(t)) is the desired trajectory of the virtual structure. \theta_{vs}^d and \theta_i^d follow from the non-slip constraint (1) and (x_{vs}^d, y_{vs}^d) or (x_i^d, y_i^d) respectively. To clarify, (1) can be rewritten as

\theta = \arctan\frac{\dot{y}}{\dot{x}}   (49)
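Equation (48) is a rotation of the offset vector p_i by the virtual structure's heading plus a translation by its position. A small sketch, using the equilateral-triangle offsets later defined for the simulations and an arbitrary virtual-structure pose:

```python
import math

def desired_position(q_vs, p):
    """Eq. (48): rotate offset p by the virtual structure's heading and
    translate by its position. q_vs = (x, y, theta), p = (px, py)."""
    x, y, th = q_vs
    px, py = p
    return (x + px * math.cos(th) - py * math.sin(th),
            y + px * math.sin(th) + py * math.cos(th))

# Equilateral-triangle offsets with side 0.3 m, as used in Section VIII-E
p = [(-0.15, -0.15 / math.sqrt(3)),
     (0.15, -0.15 / math.sqrt(3)),
     (0.0, 0.3 / math.sqrt(3))]

# Virtual structure at (1, 2) heading along +y
targets = [desired_position((1.0, 2.0, math.pi / 2), pi_) for pi_ in p]
print(targets)
```

Because (48) is a rigid-body transform, the pairwise distances between the desired positions stay 0.3 m regardless of the virtual structure's pose.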
2) Definition of Control Problem: For the formation control problem to be solved, the following needs to hold asymptotically:

\begin{bmatrix} x_i(t) \\ y_i(t) \end{bmatrix} - \begin{bmatrix} x_i^d(t) \\ y_i^d(t) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad i = 1, \dots, n   (50)
This stability problem can be reformulated in terms of error variables [x_i^e, y_i^e, \theta_i^e]^T. According to [8] this can be done in the following way. First we formulate equations for the desired velocities of the individual robots

v_i^d = \sqrt{(\dot{x}_i^d)^2 + (\dot{y}_i^d)^2}, \quad \omega_i^d = \frac{\ddot{y}_i^d\dot{x}_i^d - \ddot{x}_i^d\dot{y}_i^d}{(\dot{x}_i^d)^2 + (\dot{y}_i^d)^2}   (51)

The components are obtained by differentiating (48). The desired velocities for the virtual structure, v_{vs}^d and \omega_{vs}^d, are calculated using the same formula, but in that case it is (x_{vs}^d(t), y_{vs}^d(t)) that should be differentiated. For stability to be achieved we assume that v_{vs}^d and \omega_{vs}^d are bounded. Then [8] defines the error variables of robot i as

\begin{bmatrix} x_i^e \\ y_i^e \\ \theta_i^e \end{bmatrix} = \begin{bmatrix} \cos\theta_i & \sin\theta_i & 0 \\ -\sin\theta_i & \cos\theta_i & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_i^d - x_i \\ y_i^d - y_i \\ \theta_i^d - \theta_i \end{bmatrix}   (52)
Lastly, the time derivative of (52) can be shown to be

\dot{x}_i^e = \omega_i y_i^e - v_i + v_i^d\cos\theta_i^e

\dot{y}_i^e = -\omega_i x_i^e + v_i^d\sin\theta_i^e \qquad i = 1, \dots, n   (53)

\dot{\theta}_i^e = \omega_i^d - \omega_i
The formation control problem is formulated as the requirement to render the error dynamics (53) globally asymptotically stable.
3) Control Law: One kinematic control law that satisfies the control problem above is the following one, presented in [8]:

v_i = v_i^d + c_i^x x_i^e - c_i^y\omega_i^d y_i^e + \sum_{j\in N_i}\tilde{c}_{ij}^x(x_i^e - x_j^e) - \sum_{j\in N_i}\tilde{c}_{ij}^y\omega_i^d(y_i^e - y_j^e)   (54)

\omega_i = \omega_i^d + c_i^\theta\theta_i^e + \sum_{j\in N_i}\tilde{c}_{ij}^\theta(\theta_i^e - \theta_j^e)

where c_i^x, c_i^y and c_i^\theta are positive gain constants. It is the coupling terms inside the summation signs that make the robots aware of each other. Removing these terms would simply make the robots track the virtual center. The function of the coupling terms is to put the robots back in formation when some of them are moved out of position. We will now explain these coupling terms in detail.
Suppose we have an adjacency matrix a_{ij} corresponding to a communication graph for the formation. For example, if the formation is the one in Fig. 3, the communication graph indicating the neighbors of each robot could look like the left picture in Fig. 4, and its adjacency matrix would be the table to the right in the same figure. Note that it is not necessary, or even desirable, to connect all the robots to each other in the graph; connections may be left out. Continuing, N_i in (54) denotes the set of indices of the neighbors of robot i ∈ {1, . . . , n}. In the case shown in Fig. 4, N_1 = {2, 3}. Finally, the gains \tilde{c}_{ij}^x, \tilde{c}_{ij}^y, \tilde{c}_{ij}^\theta > 0 are chosen so that \tilde{c}_{ij}^x = \tilde{c}_{ji}^x and \tilde{c}_{ij}^y = \tilde{c}_{ji}^y. If the control law in (54) is used and the gains are chosen in this way, the formation control problem is satisfied. This is proven in [8].

Fig. 4. A communication graph and its corresponding adjacency matrix a_{ij}.
VIII. SIMULATIONS
In this section, simulations related to the theory discussed in Sections IV-VII are presented, in the same order as their corresponding theoretical sections. For each subsection the simulations are explained and their connection to the theory is pointed out. At the end the results are presented.
A. Simulations Related to Potential Fields
Here, simulations related to the use of potential fields are presented. The simulations use the kinematic vehicle model (2) in combination with the control law (21) given in Section IV-B, as well as the attracting force (13) and the repelling force (15). For these simulations, the constants introduced in Equations (13), (15) and (21) were set to
\xi = 1, \quad \eta = 8, \quad k_1 = 1, \quad k_2 = 4   (55)
The robot is assumed to occupy a circular area of radius 0.5. The obstacles are simulated as boxes of side length 0.6. The robot is assumed to have sensors which can detect obstacles up to a radius of 3 away from the robot, that is, ρ0 = 3 in (15). In all simulations the robot travels from left to right on a path of width 8; the length of the track, the number of obstacles and the positions of the obstacles have been varied.
First, a simulation without any obstacles was made, to verify the trajectory tracking controller. The robot was made to follow a sinusoidal path. The path following was done by updating the goal coordinates q_goal in (13) as soon as the robot came within a distance of 1 of the current goal. The desired path is given by y = 2 sin(x). The results can be seen in Fig. 5.
Next, obstacles were added. In this simulation two obstacles are added at (x, y) coordinates [4, −1]^T and [6, 2]^T. The track is of length 10 and the final goal is at [10, 0]^T. Fig. 6 shows the result of this simulation.
B. Simulations Related to Sampling
In this subsection, simulations related to the sampled data controller discussed in Section V are presented. The simulations are intended to verify the theory presented there. This
Fig. 5. Simulation of robot following trajectory of a sinusoidal function. Dashed line shows function f(x) = 2 sin x.
Fig. 6. Simulation of robot finding its way through a course with obstacles, using potential fields.
is done by simulating a differential drive vehicle with the sampled data controller (46), using a few different values for the gains and the sampling period. For comparison, the same controller with real-time feedback is also tested and plotted in the same graph as its corresponding sampled version.
1) Stable Case: In this case the parameters are set to the same example values given in Section V, that is

T = \frac{1}{2}, \quad k_1 = 1, \quad k_2 = \begin{bmatrix} 3 & 2 \end{bmatrix}   (56)
The starting position of the vehicle is set to q_0 = [10, 1, π/2]^T. In Fig. 7, x, y and θ have been graphed separately as functions of time for this case. We see that the robot is stable since all states (x, y, θ) tend towards zero.
2) Unstable Case: In this case the parameters are only slightly changed from the previous case, to render a scenario which, according to the discussion in Section V-B, should be unstable.
Fig. 7. x(t), y(t), θ(t) for the sampled controller with parameters according to (56), as well as the real-time version of the controller. q0= [10, 1, π/2]T.
The parameters are set to

T = \frac{1}{2}, \quad k_1 = 1, \quad k_2 = \begin{bmatrix} 5 & 2 \end{bmatrix}   (57)

where one of the components of k_2 has been amplified.
Like before, the starting position of the vehicle is set to q_0 = [10, 1, π/2]^T. In Fig. 8, (x, y, θ) have been graphed separately as functions of time for this case. We see that the robot is unstable since not all of the states (x, y, θ) tend towards zero; in this case the robot's x-coordinate oscillates uncontrollably.
C. Simulations Related to Disturbances And Quantization
Randomly detecting fake obstacles for one detection cycle did not change the robot's performance at all, because the robot's deceleration was not strong enough: if the robot was instructed to brake due to the obstacle, the obstacle would essentially be gone before the next cycle. The random disturbance only became noticeable when the random chance approached 20%, which resulted in the vehicle stopping and turning unpredictably. The robot performed remarkably well when testing the margin of error and cleared the test courses with up to 20 cm of error in a random direction. The range error did not impact the robot much at all; because the potential function is strong when the robot gets close to an obstacle, it avoided any unwanted collisions.
In the two instances of quantization, limiting the detection range and limiting how detailed the force calculations were, the tests were done in a linearly escalating manner. In the final test case, the robot's detection range was rounded to the nearest decimeter and the force was rounded to one decimal. This averaged to around 2.5% error in virtual forces and 3 cm of distance error when the vehicle was close to an obstacle. As previously noted, the robot was assumed to be 10 cm in length.
D. Simulations of Formation Control Using Potential Fields
The potential field controller was used as described in Section VII. Two types of formations were tested: platooning, where each robot attempts to follow the previous robot, and a triangle shape, where the robots strive to form a triangle in relation to the first robot. For platooning, the goal of each robot was placed two decimeters to the left of the previous robot to form a chain of three robots. For the triangle formation, the goals of the two following robots were placed two decimeters diagonally behind the leader robot, on each side.
The platooning tests worked well with obstacles, but the triangle formation was not able to navigate obstacles while keeping the formation. Neither controller retains the original formation very well when changing trajectory, due to the way the goal is placed, but the robots traverse the courses with remarkable reliability. Fig. 9 shows a snapshot of how each robot tries to navigate to a point near the robot in front of it while still avoiding obstacles. Fig. 10 shows how the triangle formation assembled in the simulation and highlights an issue with this method which will be discussed in Section IX.
Fig. 8. x(t), y(t), θ(t) for the sampled controller with parameters according to (57), as well as the real-time version of the controller. q0= [10, 1, π/2]T.
Fig. 9. Straight line formation control using goal placement. Each X shows the goal of a following vehicle. The axes represent length in decimeters.
Fig. 10. Transition to triangle formation control using potential field goal placement. Each X shows the goal of a following vehicle. The displacement between robots and goals is explained in Section IX.
During the simulations, it was also noted that this system can have a long response time due to the slow nature of moving potential fields. To mitigate this problem, the attracting force of any moving goal was amplified by a factor ranging from 1.5 to 6. Modifying the force resulted in a tighter formation up to the maximum value of 6, but beyond this the robots became more erratic and unpredictable in their behavior.
E. Simulations of Formation Control Using Virtual Structure
Here, simulations related to the virtual structure approach to formation control are presented. The purpose of these simulations is to verify that the controller (54) fulfills the control problem stated in Section VII-B, that is, that it renders (53) asymptotically stable. Furthermore, it should also perform better at keeping the formation when the formation is subject to perturbation. In these simulations the control parameters were set to the values in Table I. The desired trajectory of the virtual structure was set to

x_{vs}^d(t) = 0.05t - 1.2

TABLE I
CONTROL PARAMETERS USED IN SIMULATIONS
c_i^x = 1   c_i^y = 30   c_i^θ = 0.5
\tilde{c}_{ij}^x = 5   \tilde{c}_{ij}^y = 30   \tilde{c}_{ij}^θ = 0.1

x_{vs}^d(t), y_{vs}^d(t) are assumed to be in meters and t in seconds. The desired shape of the formation is set to be an equilateral triangle, where the vectors indicating the coordinates of the corners of the triangle are set to

p_1 = [-0.15, -0.15/\sqrt{3}]^T, \quad p_2 = [0.15, -0.15/\sqrt{3}]^T, \quad p_3 = [0, 0.3/\sqrt{3}]^T

where the units are in meters. The sides of the triangle then have length 0.3 m. All the simulations are run for 30 s. The initial positions of the vehicles were set to the following coordinates for all simulations

q_0^{R1} = \begin{bmatrix} -1.3 \\ -0.6 \\ 0.3 \end{bmatrix}, \quad q_0^{R2} = \begin{bmatrix} -1.2 \\ -0.7 \\ 0 \end{bmatrix}, \quad q_0^{R3} = \begin{bmatrix} -1.1 \\ -0.8 \\ -0.5 \end{bmatrix}   (59)
In these simulations, the graph shown in Fig. 4 was used. In the disconnected case, a fully disconnected graph was used, where no communication at all between the robots occurred.

Results from two different simulations are shown here. In the first simulation a connected communication graph was used. One of the robots was perturbed by moving it −0.2 m in the y-direction at t = 5 s. In Fig. 11 the position of each robot is shown at four different time instances: at t_0; slightly before the perturbation, at t = 4 s; at the time of the perturbation, t = 5 s; and finally at t = 30 s. At the time of the perturbation, the state the formation would have had if it had not been disturbed is indicated with a red dashed line. Fig. 13 shows the paths of the robots in the plane for the same simulation.

The second simulation shown in this section is the case of a disconnected communication graph. All parameters are exactly the same as in the previous simulation. Fig. 12 shows the position of each robot at four different time instances.
IX. DISCUSSION
The implemented simulations performed as the theoretical material suggested. Virtual forces and potential fields are a simple yet powerful way of letting a robot autonomously calculate how to approach a goal without any detailed knowledge of what the operating environment looks like. Potential fields also appeared to be vastly superior to sequential path planning, since the robot moves more smoothly with this method and it can handle moving obstacles.
The realistic application of formation control may not seem obvious at first glance. As most robots are small and can only observe objects from one angle, formation control can enable much quicker and more reliable 3D analysis of objects. A great application for this would be planet exploration, where a formation of robots can quickly analyze and scan an area or formation instead of letting one vehicle drive around it. Applications closer to Earth might involve surveillance robots that monitor changes to an area, or synchronized search and rescue missions where a single robot would simply take too much time.

Fig. 11. Formation using virtual structure approach, connected communication graph. Robot 3 is disturbed at t = 5. Shape of formation shown at four different time instances.

Fig. 12. Formation using virtual structure approach, disconnected communication graph. Robot 3 is disturbed at t = 5. Shape of formation shown at four different time instances.
A. Path Planning Algorithm
When evaluating the path planning algorithm, it may be noted that the proposed algorithm is very similar to that of autonomous vacuum cleaners, such as the Roomba. This was a subconscious design decision, yet not unexpected, since that algorithm has the same goal of being able to get around any obstacle. The Roomba-style movement is also easy to apply to a two-wheeled robot. To improve the algorithm further, it should be noted that the angle α can be adjusted to the robot's turn rate: if the robot turns slowly, the angle can be decreased to perform fewer turning checks, and the driving length between turning checks can be increased to 2l. Another improvement is possible if we assume the robot can see obstacles to its side, in which case it could