
Institutionen för systemteknik

Department of Electrical Engineering

Examensarbete

A Path Following Method with Obstacle Avoidance

for UGVs

Examensarbete utfört i Reglerteknik vid Tekniska högskolan i Linköping

av

Anna Lindefelt and Anders Nordlund

LITH-ISY-EX--08/4101--SE

Linköping 2008

Department of Electrical Engineering, Linköpings tekniska högskola, Linköpings universitet



Handledare: Per Skoglar

FOI

Examinator: Thomas Schön

ISY, Linköpings universitet

Linköping, 5 March, 2008


Avdelning, Institution / Division, Department: Division of Automatic Control, Department of Electrical Engineering, Linköpings universitet, SE-581 83 Linköping, Sweden

Datum / Date: 2008-03-05

Språk / Language: Engelska / English

Rapporttyp / Report category: Examensarbete

URL för elektronisk version: http://www.control.isy.liu.se http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11242

ISRN: LITH-ISY-EX--08/4101--SE

Titel / Title: En metod för banföljning med kollisionsundvikning för UGV:er / A Path Following Method with Obstacle Avoidance for UGVs

Författare / Author: Anna Lindefelt and Anders Nordlund

Nyckelord / Keywords: obstacle avoidance, path planning, odometry calibration, unmanned ground vehicle (UGV)


Abstract

The goal of this thesis is to make an unmanned ground vehicle (UGV) follow a given reference trajectory, without colliding with obstacles in its way. This thesis will especially focus on modeling and controlling the UGV, which is based on the power wheelchair Trax from Permobil.

In order to make the UGV follow a given reference trajectory without colliding, it is crucial to know the position of the UGV at all times. Odometry is used to estimate the position of the UGV relative to a starting point. For the odometry to work in a satisfying way, parameters such as the wheel radii and the wheel base have to be calibrated. Two control signals are used to control the motion of the UGV: one to control the speed and one to control the steering angles of the two front wheels. By modeling the motion of the UGV as a function of the control signals, the motion can be predicted. A path following algorithm is developed in order to make the UGV navigate by maps. The maps are given in advance and do not contain any obstacles. A method to handle obstacles that come in the way is presented.

Sammanfattning

Målet med examensarbetet är att göra ett obemannat markfordon (UGV) som kan följa en given referensbana utan att kollidera med hinder som kommer i dess väg. Detta examensarbete fokuserar på modellering och styrning av UGV:n, som är baserad på elrullstolen Trax från Permobil.

För att kunna få UGV:n att följa en given referensbana utan att kollidera är det viktigt att hela tiden känna till UGV:ns position. Odometri används för att estimera UGV:ns position relativt en startpunkt. För att odometrin skall fungera på ett tillfredsställande sätt måste parametrar som hjulradier och hjulbas kalibreras. Två styrsignaler används för att styra UGV:ns rörelse, en för att styra hastigheten och en för att styra styrvinklarna på de två framhjulen. Genom att modellera UGV:ns rörelse som en funktion av styrsignalerna kan rörelsen predikteras. En banföljningsalgoritm utvecklas för att UGV:n skall kunna navigera med hjälp av kartor. Kartorna är givna i förväg och innehåller inga hinder. En metod för att hantera hinder som kommer i vägen presenteras.


Acknowledgments

There are a number of people we would like to thank for their help and support during this thesis. First we would like to thank our supervisor Per Skoglar at the Swedish Defence Research Agency (FOI) for making this thesis possible, and for his support and guidance during the work.

We would like to thank Jonas Nygårds at FOI for all his help and ideas, without which we would not have come so far, and Peter Nordin at IEI, Linköping University, for helping us when we had problems with the UGV.

We would like to thank our examiner Dr Thomas Schön for being helpful and available.

Anders would especially like to thank Charlie for her love and support, and Marol for always being there as a great friend and lab partner.

Anna would especially like to thank “Onsdagsfikagänget” for inspiring conversations and great support.

Last but not least, we would like to thank all our friends for making the years here in Linköping a memorable time.

Anna Lindefelt and Anders Nordlund

Linköping, March 2008


Contents

1 Introduction
1.1 Background
1.2 Objectives
1.3 Limitations
1.4 Hardware and Software
1.5 Outline

2 Vehicle Model
2.1 2D Car Model
2.2 Tricycle Model

3 Odometry Calibration
3.1 Laser Data
3.2 Estimating the Wheel Radii and Calibrating the Wheel Base
3.2.1 Estimating the Wheel Radii
3.2.2 Error in Wheel Radii Estimation
3.2.3 Calibrating the Wheel Base
3.2.4 Validating the Wheel Radii and the Wheel Base
3.2.5 Error Margin in the Odometry

4 Modeling the UGV Motion Dynamics
4.1 Motion Dynamics and Control System of the Power Wheelchair
4.2 Modeling the Speed
4.2.1 The Speed as a Function of the Control Signal u1
4.2.2 The Acceleration Phase
4.3 Modeling the Steering Angle
4.3.1 The Steering Angle as a Function of the Control Signal u2

5 Path Following Algorithms
5.1 Introduction
5.2 The Global Path Plan
5.3 Base Trajectories
5.4 The Feedback Rule
5.4.1 How the Feedback Rule is Used
5.5 Collision Detection
5.6 Choosing a Criterion
5.7 The Algorithm
5.8 Simulation of the Path Following Algorithm
5.9 Narrow Passages
5.10 Experimental Results
5.11 Advantages and Disadvantages with this Algorithm

6 Conclusions and Future Work
6.1 Conclusions
6.1.1 Identification of the Models
6.1.2 The Path Following Algorithm
6.2 Future Work

Bibliography


Chapter 1

Introduction

This chapter will give a brief introduction to the thesis. It will also present the structure of this report.

1.1 Background

During the last decades autonomous robotics has been available for airborne and submersed systems, but no comparable autopilots existed for ground vehicles until a couple of years ago [15]. There are many problems to be solved when working with autonomous robots, e.g. path planning, collision avoidance, localization and mapping. A robot that uses a “stop and go” method is unacceptable [6]. The path finding algorithm has to be efficient, but at the same time a number of computationally demanding decisions have to be considered. A car-like robot is subject to nonholonomic constraints [12]. The robot in this thesis is a car-like robot, and the nonholonomic constraints imply, for example, that the robot cannot rotate around its own axis. There are two types of path planning, global and local [9]. With global path planning, a path from a starting point to an end point is obtained. The assumptions regarding obstacles made when planning the global path may no longer hold when the robot travels the path; for example, obstacles can be moved into the desired path. A local path planner can be used for collision avoidance, i.e. to plan a path around such obstacles. There are a number of methods to handle collision avoidance, see e.g. [3], [7], [16].

Depending on which sensors the robot is equipped with, the localization problem can be solved with sensor fusion [5], [16]. Using only dead reckoning is another way to handle the localization problem [4], [13]. Simultaneous localization and mapping (SLAM) is a way to handle the localization and mapping problems at the same time [14].

At the department of Sensor Systems at the Swedish Defence Research Agency (FOI), Linköping, Sweden, the development of an unmanned ground vehicle (UGV) is in progress. The purpose is to have a mobile sensor platform as a resource for different projects at Sensor Systems. The UGV will function as a general sensor platform on which desired sensors (EO/IR, laser, radar, etc.) can easily be mounted.

Figure 1.1. The UGV, based on the power wheelchair Trax from Permobil.

1.2 Objectives

The goal of this master's thesis is to develop a UGV that can follow a given reference trajectory without colliding. This thesis will especially focus on modeling and controlling the UGV. In order to reach the goal, four tasks have been defined:

1. To identify a model for the odometry, i.e. develop a system based on dead reckoning to estimate the position of the robot relative to a starting point.

2. To build a model describing the relationship between the control signals and the motion of the UGV.

3. To make the UGV follow a given reference trajectory, provided that no obstacles exist.

4. To improve the third task so that the UGV can avoid obstacles while following a given reference trajectory.

1.3 Limitations

It is assumed that there is no slippage when estimating the robot's motion. Detection of obstacles with the laser has not been implemented on the computer that controls the UGV.


Figure 1.2. Two SICK-lasers mounted in the front of the UGV.

1.4 Hardware and Software

The UGV is based on the power wheelchair Trax from Permobil and can be seen in Figure 1.1. It is driven on the rear wheels by two electrical motors, one for each wheel. The two motors driving the rear wheels are equipped with encoders for the odometry, i.e. for the estimation of the robot's position. When the robot is turning, the rear wheels will have different speeds, i.e. it is differentially driven. There is also an electrical motor for controlling the steering angles of the two front wheels. The computer which controls the robot is a PC/104 computer, on which the free software tool Player/Stage 2.0 is used. Player/Stage is a robot control interface that allows the robot control program to be written in any programming language [1]. It can be run on any computer that has a network connection with the robot. The computer controls the robot with two control signals, the speed and the steering angle. The platform is also equipped with two SICK-lasers which are mounted in the front of the robot, see Figure 1.2. In this thesis only one SICK-laser will be used. The SICK-laser measures the range to objects in a 180 degree interval with a resolution of 1 degree. The power wheelchair can be controlled by a joystick that is provided by Permobil. The joystick has been modified with a second mode in order to allow the computer to control the robot; in this mode the computer controls the robot via the joystick. This also means that the original control system of the Trax sets the limitations, e.g. how fast the robot can go when it is turning. How the original control system works is unknown. Presumably the control system has two important tasks: to make sure the robot does not tip over, and to make the ride with the power wheelchair pleasant.


1.5 Outline

In Chapter 2 a vehicle model for estimating the position of the robot will be derived. A model from the speed and steering angle to the robot’s motion will be presented. The thesis continues in Chapter 3 with a way of calibrating the parameters in the odometry model given in Chapter 2. A discussion about the difficulties in modeling the speed and the steering angle will be given in Chapter 4. In Chapter 5 algorithms for path following are discussed. Results from Matlab simulations of the chosen algorithm will be shown. Results from implementations in the hardware will also be given. A summary of the results in the previous chapters and suggestions for future work will be presented in Chapter 6.


Chapter 2

Vehicle Model

In this chapter a model of the odometry will be given. A model from the speed and steering angle to predict the robot’s motion will also be presented.

The rotations of the rear wheels are measured with encoders placed at the end of the two motor shafts. With the measurements from the encoders, the robot’s position relative to a starting point can be estimated, so-called dead reckoning. Another word is odometry, which will be used in this thesis. In order to be able to estimate the robot’s position given the encoder measurements, a vehicle model has to be derived.

The speed and steering angles are the two control signals that affect the motion of the robot. In order to predict the motion of the robot, a model with these control signals as input signals is needed. The steering angles of the two front wheels will differ when the robot is turning, in order to fulfill the Ackermann steering geometry [17]. Since there are no measurements on the two front wheels, it is impossible to estimate the steering angles individually. By approximating the two front wheels with a single wheel, the vehicle becomes a tricycle. The distance between the front and the rear wheels is a constant value. With the tricycle model it is possible to estimate the steering angle of the front wheel.

2.1 2D Car Model

Let $x$ and $y$ denote the robot's coordinates in a global coordinate system and let $\theta$ denote the orientation (heading direction) of the robot. The position and orientation at time $k$ are given by $(x_k, y_k, \theta_k)$ and at time $k+1$ by $(x_{k+1}, y_{k+1}, \theta_{k+1})$. $\Delta d_k$ denotes the distance between $(x_k, y_k)$ and $(x_{k+1}, y_{k+1})$. Let the sample time be $T_s = t(k+1) - t(k)$. Assuming no slippage, the traveled distance for the right wheel during the time interval $T_s$ is

$$\Delta d_{r,k} = n r_r \Delta e_{r,k}, \quad (2.1)$$


Figure 2.1. Geometric discrete-time model of a car-like robot. The position and speed of the middle point of the rear axle at time $k$ are given by $(x_k, y_k)$ and $V_k$. $\Delta d_k$ is the distance between $(x_k, y_k)$ and $(x_{k+1}, y_{k+1})$. $V_k/\omega_k$ is the radius of the circle segment on which the robot is currently traveling.

where $\Delta e_{r,k}$ is the right encoder count, $n$ is the gear ratio from the motor shaft to the wheel and $r_r$ is the radius of the right wheel. For the left wheel the traveled distance is

$$\Delta d_{l,k} = n r_l \Delta e_{l,k}. \quad (2.2)$$

Assuming that the linear velocity $V_k$ and the rotational velocity $\omega_k$ are kept constant during the time interval $T_s$, the robot moves on a circle segment with radius

$$R = \frac{V_k}{\omega_k}. \quad (2.3)$$

When $\omega_k = 0$ the robot moves on a straight line, which can be seen as a circle segment with infinite radius. The traveled distance of the middle point of the rear axle is

$$\Delta d_k = \frac{\Delta d_{r,k} + \Delta d_{l,k}}{2} = V_k T_s. \quad (2.4)$$

The rotational velocity of the robot is

$$\omega_k = \frac{\Delta v_{r,k} - \Delta v_{l,k}}{B} = \frac{\Delta d_{r,k} - \Delta d_{l,k}}{B T_s}, \quad (2.5)$$

where $B$ is the wheel base. Combining (2.4) and (2.5) yields

$$\frac{V_k}{\omega_k} = \frac{B}{2} \cdot \frac{\Delta d_{r,k} + \Delta d_{l,k}}{\Delta d_{r,k} - \Delta d_{l,k}}. \quad (2.6)$$

The parameters that are unknown are the wheel base $B$ and the wheel radii $r_r$ and $r_l$. The trigonometry in Figure 2.1 gives that when the robot is turning, half of its traveled distance is calculated as

$$\frac{\Delta d_k}{2} = \frac{V_k}{\omega_k} \sin\!\left(\frac{\omega_k T_s}{2}\right). \quad (2.7)$$

When $\omega_k = 0$ the robot moves on a straight line and the whole traveled distance is $V_k T_s$. A discrete-time model of the robot is now easily derived from Figure 2.1 as

$$\begin{pmatrix} x_{k+1} \\ y_{k+1} \\ \theta_{k+1} \end{pmatrix} = \begin{pmatrix} x_k \\ y_k \\ \theta_k \end{pmatrix} + \begin{pmatrix} \Delta d_k \cos(\theta_k + \frac{\omega_k T_s}{2}) \\ \Delta d_k \sin(\theta_k + \frac{\omega_k T_s}{2}) \\ \omega_k T_s \end{pmatrix} \quad (2.8)$$

where

$$\Delta d_k = \begin{cases} \dfrac{2 V_k}{\omega_k} \sin\!\left(\dfrac{\omega_k T_s}{2}\right), & \omega_k \neq 0 \\[2mm] V_k T_s, & \omega_k = 0 \end{cases} \quad (2.9)$$
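As an illustration, the odometry update (2.8)-(2.9) together with (2.1)-(2.5) can be sketched in a few lines of Python. This is not the thesis code; the function and parameter names are our own, and the numeric guard against $\omega_k = 0$ is an implementation choice:

```python
import math

def odometry_update(pose, d_enc_r, d_enc_l, n, r_r, r_l, B):
    """One step of the discrete-time odometry model (2.8)-(2.9).

    pose        -- (x, y, theta) at time k
    d_enc_r/l   -- right/left encoder counts during the sample interval
    n           -- gear ratio from motor shaft to wheel
    r_r, r_l    -- wheel radii
    B           -- wheel base
    """
    x, y, theta = pose
    dd_r = n * r_r * d_enc_r      # (2.1): traveled distance, right wheel
    dd_l = n * r_l * d_enc_l      # (2.2): traveled distance, left wheel
    w_Ts = (dd_r - dd_l) / B      # omega_k * T_s, from (2.5)
    v_Ts = (dd_r + dd_l) / 2.0    # V_k * T_s, from (2.4)
    if abs(w_Ts) > 1e-9:          # turning: chord of the circle segment, (2.9)
        dd = 2.0 * (v_Ts / w_Ts) * math.sin(w_Ts / 2.0)
    else:                         # straight line
        dd = v_Ts
    return (x + dd * math.cos(theta + w_Ts / 2.0),
            y + dd * math.sin(theta + w_Ts / 2.0),
            theta + w_Ts)
```

Note that only the products $\omega_k T_s$ and $V_k T_s$ are needed, so the sample time never has to appear explicitly.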

2.2 Tricycle Model

In Section 2.1 a model for the odometry was derived, see (2.8) and (2.9). This model does not cover the fact that the robot is controlled by both the velocity and the steering angle. By extending (2.8) with the steering angle $\phi$, the model can be used to predict the robot's motion. Trigonometry in Figure 2.2 gives

$$\tan(\phi) = \frac{L}{R}, \quad (2.10)$$

where $\phi$ is the steering angle, $L$ is the length of the robot and $R$ is the radius of the circle segment on which the robot is currently traveling. The circle radius can be calculated as

$$R = \frac{V}{\omega}, \quad (2.11)$$

where $V$ is the velocity and $\omega$ is the rotational velocity. Combining (2.10) and (2.11) yields

$$\omega = \frac{V}{L} \tan(\phi). \quad (2.12)$$

Figure 2.2. Geometric model of a tricycle. The orientation of the robot is given by $\theta$ and the direction of the steering wheel is given by $\phi$.

Discretization of (2.12) using the Euler approximation gives

$$\theta_{k+1} = \theta_k + \frac{T_s V_k}{L} \tan(\phi_k). \quad (2.13)$$

At every time $k$, the rotational velocity $\omega_k$ is given by

$$\omega_k = \frac{V_k}{L} \tan(\phi_k). \quad (2.14)$$

In order to derive a model that can predict the motion of the robot for a given steering angle $\phi_k$ and velocity $V_k$, $\theta_{k+1}$ and $\omega_k$ in (2.8) and (2.9) are substituted with (2.13) and (2.14). This gives the following model

$$\begin{pmatrix} x_{k+1} \\ y_{k+1} \\ \theta_{k+1} \end{pmatrix} = \begin{pmatrix} x_k \\ y_k \\ \theta_k \end{pmatrix} + \begin{pmatrix} \Delta d_k \cos(\theta_k + \frac{V_k T_s}{2L}\tan(\phi_k)) \\ \Delta d_k \sin(\theta_k + \frac{V_k T_s}{2L}\tan(\phi_k)) \\ \frac{V_k T_s}{L}\tan(\phi_k) \end{pmatrix} \quad (2.15)$$

where

$$\Delta d_k = \begin{cases} \dfrac{2L}{\tan(\phi_k)} \sin\!\left(\dfrac{V_k T_s}{2L}\tan(\phi_k)\right), & \phi_k \neq 0 \\[2mm] V_k T_s, & \phi_k = 0 \end{cases} \quad (2.16)$$
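A sketch of how the prediction model (2.15)-(2.16) could be implemented (our own illustration, not the authors' code; the robot length L has to be measured on the actual vehicle):

```python
import math

def tricycle_predict(pose, V, phi, Ts, L):
    """Predict the next pose from speed V and steering angle phi,
    following (2.15)-(2.16). L is the distance between the front
    wheel and the rear axle of the tricycle model."""
    x, y, theta = pose
    w_Ts = (V * Ts / L) * math.tan(phi)   # omega_k * T_s, from (2.14)
    if abs(math.tan(phi)) > 1e-9:
        dd = (2.0 * L / math.tan(phi)) * math.sin(w_Ts / 2.0)  # (2.16)
    else:                                 # phi = 0: straight line
        dd = V * Ts
    return (x + dd * math.cos(theta + w_Ts / 2.0),
            y + dd * math.sin(theta + w_Ts / 2.0),
            theta + w_Ts)
```

For $\phi_k \to 0$ the turning branch tends to $V_k T_s$, so the two cases join continuously.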


Chapter 3

Odometry Calibration

This chapter will give a brief review of how the laser data is used. A discussion about calibration and validation of the parameters in the odometry model will be given.

By measuring the rotation of the rear wheels with encoders, an estimate of the position of the robot relative to a starting point can be made. This dead reckoning is called odometry. Two types of errors can occur when using odometry: systematic and non-systematic. Systematic errors stem from errors in the wheel radii and uncertainty about the effective wheel base. Non-systematic errors can be due to slippage and uneven terrain [4]. By calibrating the odometry, the effect of systematic errors is minimized.

3.1 Laser Data

The SICK-laser measures the range from the laser to objects in a 180 degree interval with a resolution of 1 degree. The objects can be anything from walls to a person walking by. The distances are transformed into global coordinates, i.e. the positions of the objects are expressed relative to the starting point of the robot. This makes it easy to plot the collected laser measurements for a drive.
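The polar-to-global transformation described above can be sketched as follows (a hedged illustration; the laser is assumed to sit at the robot's reference point and to scan from -90° to +90°, and any mounting offset is ignored):

```python
import math

def scan_to_global(pose, ranges, angle_min=-math.pi / 2,
                   angle_step=math.radians(1)):
    """Transform one SICK scan (180 deg, 1 deg resolution) into global
    coordinates, given the robot pose (x, y, theta) from odometry."""
    x, y, theta = pose
    points = []
    for i, r in enumerate(ranges):
        bearing = angle_min + i * angle_step   # beam angle in the robot frame
        points.append((x + r * math.cos(theta + bearing),
                       y + r * math.sin(theta + bearing)))
    return points
```

Plotting the returned points for every sample of a drive yields maps like the one in Figure 3.1.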

When the laser sees through e.g. a window, it is not exactly known what will happen due to reflections. This can lead to the appearance of phantom obstacles, i.e. the robot thinks there is an obstacle where in fact there is none. In Figure 3.1 the robot is driven 64 meters by hand. Here it looks like the driver has driven the robot through an obstacle. This obstacle does not exist in reality; it is a phantom obstacle.


Figure 3.1. Map of the robot's motion during a 64 meter drive. The line shows the route the robot traveled and the dots are walls and other obstacles. The starting and end points are marked with a circle and a big dot. The obstacle that the robot travels through is a phantom obstacle.

3.2 Estimating the Wheel Radii and Calibrating the Wheel Base

In [13] the wheel radii and the wheel base are calibrated by first commanding the robot to drive straight and then in two circles, clockwise and counterclockwise respectively.

3.2.1 Estimating the Wheel Radii

When the robot is traveling on a straight line, the traveled distances of the right wheel $\Delta d_r$ and the left wheel $\Delta d_l$ are equal to the traveled distance of the middle point of the rear axle $\Delta d_k$. When commanding the robot to drive toward a wall, its traveled distance is measured with e.g. a laser. The wheel radii are calculated from (2.1) and (2.2):

$$\underbrace{\begin{pmatrix} n\Delta e_{r,k} & 0 \\ 0 & n\Delta e_{l,k} \end{pmatrix}}_{A_k} \underbrace{\begin{pmatrix} r_r \\ r_l \end{pmatrix}}_{r} = \underbrace{\begin{pmatrix} \Delta d_{r,k} \\ \Delta d_{l,k} \end{pmatrix}}_{b_k} \quad (3.1)$$

where $\Delta e$ is the encoder count, $n$ is the gear ratio, $r$ holds the radii of the wheels, and $\Delta d_{r,k}$ and $\Delta d_{l,k}$ are equal to the distance measured with the laser. The estimate of the wheel radii $\hat{r}$ is calculated using the least squares method

$$\hat{r} = \arg\min_{r} \sum_{k=1}^{N} \| A_k r - b_k \|^2. \quad (3.2)$$

In Section 3.2.4 a validation of the estimated wheel radii is made.
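Since each $A_k$ is linear in $r$, (3.2) is an ordinary linear least squares problem. A sketch (not the thesis code; the way the per-run blocks are stacked is our own choice):

```python
# Stack the blocks of (3.1) over N straight-line runs and solve (3.2)
# with ordinary least squares.
import numpy as np

def estimate_wheel_radii(enc_r, enc_l, dist, n):
    """enc_r, enc_l: encoder counts per run; dist: laser-measured distance
    per run (equal for both wheels on a straight line); n: gear ratio."""
    A = np.zeros((2 * len(dist), 2))
    b = np.zeros(2 * len(dist))
    for k, (er, el, d) in enumerate(zip(enc_r, enc_l, dist)):
        A[2 * k, 0] = n * er      # right-wheel row of A_k
        A[2 * k + 1, 1] = n * el  # left-wheel row of A_k
        b[2 * k] = d
        b[2 * k + 1] = d
    r_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    return r_hat  # [r_r, r_l]
```

With noise-free data this recovers the radii exactly; with measured data it averages out the per-run errors.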

3.2.2 Error in Wheel Radii Estimation

There are at least two types of errors that can occur in this kind of estimation of the wheel radii, human and systematic. These errors affect how good the odometry will be. When the robot is commanded to drive straight toward a wall it is hard to line it up so that it will drive exactly straight toward the wall. This is a human error. According to Section 4.3.1 the robot does not always drive in a perfectly straight line, which is a “stochastic” systematic error in the hardware. These errors are hard to eliminate.

3.2.3 Calibrating the Wheel Base

The robot is commanded to travel two laps in a circle with a given radius, both clockwise and counterclockwise. By plotting the position of the robot, a comparison between the actual circle radius and the circle radius estimated using odometry can be made. Combining (2.1), (2.2), (2.3) and (2.6) yields

$$R = \frac{B}{2} \cdot \frac{r_r \Delta e_{r,k} + r_l \Delta e_{l,k}}{r_r \Delta e_{r,k} - r_l \Delta e_{l,k}}. \quad (3.3)$$

When the wheel radii have been estimated they are kept constant, so the only parameter left that affects the estimated circle radius is the wheel base. By changing $B$, the radius of the estimated circle will change. When the estimated circle radius is equal to the actual radius, the calibration of the wheel base is done.
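Because (3.3) is linear in B, the tuning described above can be reduced to a single scaling step once the radii are fixed. A sketch based on that observation (our own illustration; the thesis procedure compares plotted circles instead):

```python
def odometry_radius(enc_r, enc_l, r_r, r_l, B):
    """Circle radius implied by one encoder sample, from (3.3)."""
    num = r_r * enc_r + r_l * enc_l
    den = r_r * enc_r - r_l * enc_l
    return (B / 2.0) * num / den

def calibrate_wheel_base(enc_r, enc_l, r_r, r_l, B0, R_actual):
    """Scale an initial wheel base B0 so the implied radius matches
    the measured circle radius. Since (3.3) is linear in B, one
    correction factor suffices."""
    R0 = odometry_radius(enc_r, enc_l, r_r, r_l, B0)
    return B0 * R_actual / R0
```

In practice the encoder counts would be accumulated over the two commanded laps, and the clockwise and counterclockwise results averaged.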

3.2.4 Validating the Wheel Radii and the Wheel Base

To validate that proper wheel radii have been estimated, the robot is commanded to drive on a straight line again. The difference between the traveled distance according to the odometry and according to the laser is shown in Figure 3.2. The small bias that can be seen in Figure 3.2 is probably caused by a human error, or by the fact that the robot did not travel perfectly straight. Since the bias is so small, the estimation of the wheel radii is sufficient. To validate that a proper wheel base has been calibrated, the robot is once again commanded to travel two laps in a circle, both clockwise and counterclockwise. By plotting the laser data in a global coordinate system, a map of the room appears, see Section 3.1. If the room twists during the drive, the calibration of either the wheel radii or the wheel base is wrong. If the room is twisted in different directions when the robot travels clockwise and


Figure 3.2. Difference in travel distance between odometry and laser data.

counterclockwise, the calibration of the wheel base is wrong. An example of this can be seen in Figure 3.3: the room is twisted clockwise when the robot travels counterclockwise, and counterclockwise when the robot travels clockwise. If the estimation of the wheel radii is wrong, the room will be twisted in the same direction whether the robot travels clockwise or counterclockwise. An example of this can be seen in Figure 3.4.

If the pressure in the tires changes, the wheel base and the wheel radii will change. This affects how good the odometry is. If the effect is too large, the calibration has to be redone.

3.2.5 Error Margin in the Odometry

How do we know whether the odometry works in a satisfying way? One way to find out is to command the robot to drive around and come back to the starting point. If the odometry is not good enough, this is probably not the best way, since the robot relies on the odometry to locate itself. A better way is probably to drive the robot randomly by hand. Stopping the robot in exactly the same position as it started can be hard, so the distance between the starting and end points most likely has to be measured by hand, e.g. using a ruler. If the distance between the physical start and end points equals the distance according to the odometry, the odometry is perfect. However, due to e.g. slip, the distances will probably differ. A decision has to be made, depending on the application of the robot, how much the odometry is allowed to differ from reality. In Figure 3.1 the robot is driven 64 meters by hand. The starting and


Figure 3.3. The room is twisted in different directions when the wheel base is calibrated wrong. (a) The robot travels counterclockwise. (b) The robot travels clockwise.

Figure 3.4. The room is twisted in the same direction when the wheel radii are estimated wrong. (a) The robot travels counterclockwise. (b) The robot travels clockwise.

end points are marked with a circle and a big dot. Here it is easy to see that the end point does not coincide with the starting point. The difference between the odometry and reality is approximately 0.15 meters, which is really good for this application.


Chapter 4

Modeling the UGV Motion Dynamics

This chapter will discuss the difficulties of modeling the motion of the robot from the control signals.

When commanding the robot to drive around, two control signals are used. These control signals do not have any units of measurement. The first control signal u1 is used to command the robot to drive forward or backward, i.e. it controls the speed of the robot. The second control signal u2 is used to command the robot to make turns, i.e. it controls the steering angles of the two front wheels. When these control signals are set, they are sent to Permobil's control system. Exactly how this control system works is unknown. The control system produces voltages to the motors; different values of the control signals give different voltages to the motors, which give the robot different speeds and steering angles.

4.1 Motion Dynamics and Control System of the Power Wheelchair

In Section 1.4 a short description of the hardware and software is given. Figure 4.1 shows a block diagram from the two control signals to the sampled speed and steering angle. The control signals u1 and u2 are sent by the computer, via the joystick, to the control system of the power wheelchair. The control system converts the control signals into voltages and sends them to the three motors. How this conversion is made is unknown. One of the primary goals of the control system of the power wheelchair is to make sure that the power wheelchair does not tip over when it is turning; the speed is therefore decreased during a turn. The control system also seems to have some type of memory of what has happened in the past, i.e. what the outputs to the motors have previously been. This memory is probably used to make sure that the voltages do not change too fast. For example, if a step in the control signal u1 is made, there will be different outputs depending on the size of the step and the old value of the control signal. The three motors are elements in the motion dynamics of the power wheelchair, which is also unknown. The encoders placed at the end of the two motor shafts measure the difference in angle of rotation of the rear wheels between two points in time. The measurements made by the encoders will contain noise. The odometry uses the encoder measurements to estimate the position of the robot. The sampled speed and steering angle are also estimates based on the encoder measurements. Since both the control system and the motion dynamics of the power wheelchair are unknown, they will be included in the motion model.

Figure 4.1. A block diagram from the two control signals to the sampled speed and steering angle.

4.2 Modeling the Speed

In order to predict the motion of the robot, a model of the speed has to be derived. The control signal u1 is used to command the robot to drive forward and backward, i.e. it controls the speed. The speed will therefore be modeled as a function of the control signal u1. This model will include the control system and motion dynamics of the power wheelchair.

4.2.1 The Speed as a Function of the Control Signal u1

By setting the control signal u1 to different positive values while the second control signal u2 is set to zero, the robot will drive straight forward at different speeds. When u1 < 0.10 the speed of the robot is so small that the robot is barely moving. The speeds for these values of u1 are therefore considered not interesting. From now on the speed of the robot will be viewed as zero when u1 < 0.10.

The robot is commanded to drive forward with u1 = 0.10. The control signal u1 is then increased in small steps, which increases the speed of the robot. By commanding the robot to drive straight forward while increasing the control signal u1, the speed can be given as a function of the control signal,

Figure 4.2. (a) The speed as a function of the control signal. (b) The logarithm of the speed.

i.e. the dynamics in the acceleration phase is ignored. This plot indicates that the speed of the robot is a nonlinear function of the control signal u1. In fact, the nonlinearity in the speed seems to resemble an exponential function. In Figure 4.2(b) the logarithm of the sampled speed is plotted; it can be approximated with a linear function. In order to find a model of the speed as a function of the control signal u1, the speed is assumed to follow the exponential function

$$V = e^{A_1 + A_2 u_1}, \quad (4.1)$$

where $A_1$ and $A_2$ are constants. To identify these constants, a nonlinear least squares method is used to fit (4.1) to the sampled speed. This is done using the command fsolve in Matlab. In Figure 4.3(a) the model can be seen along with the sampled speed, and in Figure 4.3(b) the logarithm of the model along with the logarithm of the sampled speed. In Figures 4.2 and 4.3 it can be seen that the sampled speed reaches its maximum value when the control signal is 0.29. In order to simplify the model, the modelled speed is from now on viewed as constant for control signals larger than 0.29:

$$V = \begin{cases} 0, & u_1 < 0.10 \\ e^{A_1 + A_2 u_1}, & 0.10 \le u_1 \le 0.29 \\ e^{A_1 + A_2 \cdot 0.29}, & u_1 > 0.29 \end{cases} \quad (4.2)$$
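The piecewise model (4.2) is straightforward to implement. In the sketch below the constants A1 and A2 are placeholder values chosen for illustration only; the thesis identifies them from sampled speed data:

```python
import math

A1, A2 = -3.0, 10.0   # hypothetical constants, for illustration only

def speed_model(u1):
    """Stationary speed predicted from control signal u1 by (4.2)."""
    if u1 < 0.10:
        return 0.0           # the robot barely moves below u1 = 0.10
    u1 = min(u1, 0.29)       # the speed saturates above u1 = 0.29
    return math.exp(A1 + A2 * u1)
```

The model is zero below the dead zone, exponential in the identified range, and clamped at the saturation value above it.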

When driving the robot several times with the same value of u1, the speed will sometimes differ between drives. This difference in speed becomes clearer when u1 is large. Figure 4.4 shows the speed of the robot on five different occasions using the same value of u1. This plot clearly shows that the speed can differ between drives even though the same value of u1 is used. For this value of u1 the speed can differ by at least 0.12 m/s, which is a rather big relative difference. This “stochastic” behavior makes it difficult to find a suitable model of the speed as a function of the control signal u1. The reason for this


Figure 4.3. (a) The model of the speed as a function of the control signal together with the sampled speed. (b) The logarithm of the model together with the logarithm of the sampled speed.

Figure 4.4. Variations in the speed for different drives when the same value of u1 is used.

“stochastic” behavior is unclear and is left to future work. A validation of the model against several measurements will not be done. A discussion about how to use the speed to predict the motion of the robot is given in Section 4.2.2.

4.2.2 The Acceleration Phase

In order to derive a proper model of the speed as a function of the control signal u1, the acceleration of the robot has to be taken into account. In Figure 4.5 the speed is plotted for some of the control signals from the previous experiment. At first sight the three largest and the three smallest control signals have similar appearances during the acceleration phase. If the acceleration phases for u1 = 0.10 and u1 = 0.14 were approximated with straight lines, they would be similar, while the line for u1 = 0.18 would have a


Figure 4.5. The speed of the robot for different values of the control signal u1. The control signals are set at different points in time.

Figure 4.6. The speed of the robot for different values of the control signal u1, where the control signals are set at the same point in time. (a) u1 = 0.22, u1 = 0.26 and u1 = 0.30. (b) u1 = 0.10, u1 = 0.14 and u1 = 0.18.

slightly steeper gradient than for the two smaller control signals. The acceleration phases for u1 = 0.22, u1 = 0.26 and u1 = 0.30 all approximately resemble a first order system. Figure 4.6(a) shows the speed for the three largest control signals and Figure 4.6(b) for the three smallest. In Figure 4.5 the control signals are set at different points in time, while in Figure 4.6 they are set at the same time in each plot. As discussed in Section 4.2.1 the speed can differ between drives even when the same value of u1 is used. An example of this was shown in Figure 4.4. That plot also shows that the acceleration phase can differ for the same value of u1. Consequently, the acceleration phase does not have the same appearance for either the same or different values of u1. This seemingly “stochastic” behavior makes the acceleration phase hard to model.


In spite of this, it is better to make approximations of the acceleration phase than not taking it into consideration at all. One way to approximate the acceleration phase is to use a linear function. A problem with a linear approximation is to decide how long the acceleration phase is, or at which speed a constant value is reached, i.e. when the acceleration is zero. Another way of approximating the acceleration phase is to use a first order system. The same problem occurs with a first order system as with a linear function: the length of the acceleration phase and the final value of the speed are unknown due to the “stochastic” behavior.

A third option is to use the mean value of the speed. Before this idea is presented an understanding of how the path planner works is needed. The path planner predicts how the robot is going to move based on a sequence of control signals. To predict the robot's motion the speed is needed. When a prediction has been made based on a sequence of control signals, the first part of the sequence is executed. This means that the robot drives the first part of the planned path. After a certain time, or after a certain traveled distance, the path planner plans a new path based on a new sequence of control signals. This is repeated until the robot has reached its end point. How many control signals in the sequence are executed before this is done has to be adjusted so that a smooth trajectory is obtained. A more detailed description of the path planner can be found in Chapter 5.

The idea is to use the mean value of the speed the robot had during the last executed travel distance in order to predict the robot's motion when planning the next path. This distance does not need to be the whole travel distance since the last path planning. In fact it can be an advantage to only use the very last part of the travel distance.
Since it is unknown how the acceleration phase will appear, how long it will last and which speed the robot will reach, the mean value of the speed the robot just had is a reasonable estimate of the speed the robot is trying to reach. If the robot plans a path frequently using the calculated mean value of the speed, the predicted speed will be close enough to the speed the robot actually drives at, even if it is accelerating. When the first path is planned there is no mean value of the speed available, since the robot has not moved. By setting the mean value of the speed to the speed the robot is meant to reach, a path can still be planned. As a result, the robot will not move exactly according to the planned path. It is therefore important to make the restriction that the robot starts by going straight forward and that there are no obstacles near the starting point. The sooner a new path is planned, the sooner the robot can be allowed to turn. This is only a temporary ad hoc solution to the problem with the “stochastic” behavior; once that behavior has been explained this solution will not be needed.
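The mean-value idea could be sketched as a small helper like the one below (a hypothetical implementation, not code from the thesis): the planner keeps the speed samples from the last executed travel distance and falls back to the intended speed before the robot has moved.

```python
from collections import deque

class MeanSpeedEstimator:
    """Mean of the speed samples from the last executed travel distance,
    used as the predicted speed when planning the next path."""

    def __init__(self, maxlen, intended_speed):
        # Before the robot has moved there are no samples, so fall back
        # to the speed the robot is meant to reach.
        self.samples = deque(maxlen=maxlen)
        self.intended_speed = intended_speed

    def add(self, v):
        self.samples.append(v)          # old samples drop out automatically

    def predicted_speed(self):
        if not self.samples:
            return self.intended_speed
        return sum(self.samples) / len(self.samples)
```

The bounded `deque` realizes the remark above that only the very last part of the travel distance needs to be used.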

4.3 Modeling the Steering Angle

In order to be able to predict the motion of the robot, a model of the steering angle has to be derived. The control signal u2 is used to command the robot to turn. The steering angle will


be modelled as a function of the control signal u2. This model will include the

control system and motion dynamics of the power wheelchair.

4.3.1 The Steering Angle as a Function of the Control Signal u2

The steering angle φ at each time k can be calculated by rewriting (2.14):

φ_k = \arctan\left(\frac{ω_k L}{V_k}\right).     (4.3)

In order to find out how φ depends on u2, the robot is commanded to drive straight forward by setting the control signal u1 to a constant value. When the robot reaches a constant speed, the second control signal u2 is increased in small steps until the maximum of u2 is reached. After this is done, the robot is commanded to drive straight forward with the same value of u1 while u2 is decreased in small steps. When the robot starts turning the speed will decrease. This decrease in speed is made by the control system of the power wheelchair in order to make sure that the power wheelchair does not tip over. In Figure 4.7(a) the speed is plotted for a constant value of u1 while u2 is increased. In Figure 4.7(b) the speed is plotted together with u2, where u2 has been inverted, scaled and moved up in order to make the connection between u2 and the speed easier to see.

In Figure 4.8 the speed during a turn is plotted for u1 = 0.10 and u1 = 0.15 together with u2, where u2 has been inverted, scaled and moved up in order to make the connection between u2 and the speed easier to see. Here it is obvious that the decrease in speed is not the same for different values of u1. This makes it hard to derive a model of how the speed decreases during a turn. When studying Figure 4.7(b) closely it can be seen that there is a non-constant time delay from a change in the control signal u2 until a change in the speed occurs. This makes it even harder to derive a model of how the speed decreases during a turn. In Section 4.2.2 there is a discussion about using the mean value of the speed while the robot is driving to predict the robot's motion. When this is done, a decrease in speed will quickly be discovered and taken into consideration when planning the next travel distance. The decrease in speed when the robot is turning can therefore be disregarded.

In order to find out if the steering angle depends on u1, the robot is commanded to drive with different values of u1 while u2, just as before, is increased and decreased. In Figure 4.9 the steering angle is plotted for right and left turns while u1 = 0.10 and u1 = 0.15. Here it can be seen that it takes more steps in u2 to reach the maximum steering angle for u1 = 0.15 than for u1 = 0.10, i.e. the steering angle depends on u1. This means that there is a cross-coupling between the speed and the steering angle. It can also be seen that the appearance of the steering angle is not the same for u1 = 0.10 and u1 = 0.15. This means that in order to find the steering angle as a function of the control signal u2, a model has to be


Figure 4.7. The speed of the robot when it is turning and the relation between the speed and u2. (a) The speed for u1 = 0.15 when u2 increases. (b) The speed for u1 = 0.15 together with the increasing control signal u2, where u2 has been inverted, scaled and moved up (see legend).


Figure 4.8. The speed of the robot for u1 = 0.10 and u1 = 0.15 together with the increasing control signals u2, where u2 has been inverted, scaled and moved up (see legend).

Figure 4.9. The steering angle of the robot for right and left turns when u1 = 0.10 and u1 = 0.15. (a) The steering angle for decreasing u2. (b) The steering angle for increasing u2.


about 0.30 m/s. This is a pleasant speed to drive the robot at indoors, which is why the steering angle as a function of the control signal u2 is modelled for this value of u1.

By commanding the robot to drive several times with a constant value of u1 while u2 is increased and decreased, plots of the steering angle as a function of the control signal u2 can be made. When the robot is commanded to drive straight forward it sometimes drifts to the right and sometimes to the left. Since this behavior is hard to predict, the steering angle for −0.01 < u2 < 0.01 will from now on be set to zero. Figure 4.10 shows the steering angle as a function of the control signal u2 for several drives when u1 = 0.15. In Figure 4.10(a) the

Figure 4.10. The steering angle of the robot for right and left turns when u1 = 0.15. (a) The steering angle for decreasing u2. (b) The steering angle for increasing u2.

steering angle is plotted for right turns and in Figure 4.10(b) for left turns. For u2 ≤ −0.11 and u2 ≥ 0.09 the steering angle can be approximated with a constant value. When −0.11 < u2 ≤ −0.01 and 0.01 ≤ u2 < 0.09, the appearance of the steering angle as a function of u2 has some resemblance to a tansig function [2].

A plot of the tansig function can be seen in Figure 4.11. The tansig function is mathematically described as

\mathrm{tansig}(h) = \frac{2}{1 + e^{-2h}} - 1.     (4.4)

Since the steering angle as a function of u2 does not have a perfect fit to a tansig function, (4.4) needs to be modified. This modification results in

φ = \frac{2C_1}{1 + e^{-2C_2(u_2 - C_3)}} + C_4,     (4.5)

where C1, C2, C3 and C4 are constants. To identify these constants a nonlinear least-squares method is used to fit (4.5) to the measurements. This is done using the command fsolve in Matlab. The identification will not give the same


Figure 4.11. The tansig function.

values for the constants for the right and the left turn. Equation (4.5) together with the approximations for u2 ≤ −0.11 and u2 ≥ 0.09 gives the model of the steering angle as a function of u2 when u1 = 0.15:

φ = \begin{cases} C_{r,max} & u_2 \le -0.11 \\ \frac{2C_{r,1}}{1 + e^{-2C_{r,2}(u_2 - C_{r,3})}} + C_{r,4} & -0.11 < u_2 \le -0.01 \\ 0 & -0.01 < u_2 < 0.01 \\ \frac{2C_{l,1}}{1 + e^{-2C_{l,2}(u_2 - C_{l,3})}} + C_{l,4} & 0.01 \le u_2 < 0.09 \\ C_{l,max} & u_2 \ge 0.09 \end{cases}     (4.6)
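A sketch of the piecewise model (4.6) in code (Python; the constants in the test are placeholders, not the identified thesis values, and for simplicity the saturation values C_{r,max} and C_{l,max} are taken as the modified tansig evaluated at the breakpoints rather than separately identified constants):

```python
import math

def shifted_tansig(u, C1, C2, C3, C4):
    """The modified tansig function (4.5)."""
    return 2.0 * C1 / (1.0 + math.exp(-2.0 * C2 * (u - C3))) + C4

def steering_angle(u2, C_r, C_l):
    """Piecewise steering model (4.6), valid for u1 = 0.15.  C_r and C_l
    are (C1, C2, C3, C4) tuples for right and left turns; the saturated
    branches reuse the tansig value at the breakpoints for simplicity."""
    if u2 <= -0.11:
        return shifted_tansig(-0.11, *C_r)   # saturated right turn
    if u2 <= -0.01:
        return shifted_tansig(u2, *C_r)
    if u2 < 0.01:
        return 0.0                           # dead zone: drive straight
    if u2 < 0.09:
        return shifted_tansig(u2, *C_l)
    return shifted_tansig(0.09, *C_l)        # saturated left turn
```

Separate constant tuples for the right and the left branch mirror the fact that the identification does not give the same values for both turn directions.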

The model can be seen together with the sampled data in Figure 4.12. Figure 4.12(a) shows the model when u2 ≤ −0.01 and Figure 4.12(b) shows the model when u2 ≥ 0.01.


Figure 4.12. The model of the steering angle as a function of u2 for right and left turns when u1 = 0.15. (a) The model for decreasing u2. (b) The model for increasing u2.


Chapter 5

Path Following Algorithms

This chapter will discuss path following algorithms. An implementation of one path following algorithm will also be presented.

The path to follow is given in advance and consists of several line segments with different orientations. Maps of the surrounding area will also be given in advance. There can be movable obstacles, such as chairs and tables, which have been moved into the area and are therefore not visible in the maps. To follow a path, different forms of feedback rules can be used, see e.g. [10], [11] and [12]. A feedback rule requires that there is a path to follow with no obstacles in the way. One advantage of using a feedback rule instead of open-loop control is that some wheel slippage can be handled [12]. Open-loop control means that there is no feedback of what is happening to the robot. Different probabilistic techniques to predict and control robots have become popular in the last couple of years [14]. Techniques with neural networks can be used for predictive control of car-like robots [3], [8]. With predictive control, obstacles that come in the way can be handled. This is called obstacle avoidance. The robot Stanley in the DARPA Grand Challenge 2005 used a local planner for collision avoidance, which used a number of base trajectories to decide how to drive [16].

5.1 Introduction

The algorithm that will be presented here uses a map and a global path plan. The global path plan can be made by hand or by some other path planning algorithm. The map is a grid map, i.e. a two-dimensional map with an appropriate resolution. A grid point can be marked as drivable or not, e.g. an obstacle is marked as not drivable. The goal is to get the robot to follow the global plan in a satisfying way and to avoid collisions. This means that the goal is not to reach exact positions along the way but to get to the final position. A global path planner searches for the optimal path from a starting point to an end point. When planning the path it is assumed that the robot cannot go backward. One reason for this is that there are no sensors directed backward, and the safety of the robot


can therefore not be guaranteed. There will be a local path planner for obstacle avoidance. There will be two types of obstacles: those in the map and those that are not. The obstacles that are not in the map can for example be chairs that have been moved into the desired path. These obstacles will be discovered by the laser, but they will not be added to the map, i.e. a simultaneous localization and mapping (SLAM) problem will not be solved.

The basic idea of the path following algorithm is to run a number of control signal sequences, referred to as base trajectories, through a model of the robot and then to evaluate the predicted positions of the robot compared to the given plan. These base trajectories are predetermined. A feedback rule will also be used to calculate a sequence of control signals. These control signals result in a base trajectory, which will change according to the feedback rule. Since the base trajectories are run through the model, it is guaranteed that the robot can execute them. The model that is used is derived in Chapter 2 and Chapter 4, and is used to predict how the robot is going to move. The base trajectory that gives the best trajectory according to a certain criterion is the one that will be used. The length of the prediction horizon is denoted N.

Since this is a local path planner the algorithm has to be evaluated at appropriate intervals. If the algorithm is only evaluated when the robot reaches the prediction horizon, the safety of the robot cannot be guaranteed: there could be an obstacle immediately after the prediction horizon ends, in which case the robot may not have time to stop and will crash. It must be decided how often the algorithm should be evaluated, either at a specific time interval or after the robot has traveled a specific distance. The speed will be kept constant during the drive, i.e. the control signal u1 is constant.

5.2 The Global Path Plan

The global plan consists of a number of checkpoints that the robot should pass. A checkpoint is a circle with a given radius and the checkpoint coordinates are denoted (xref, yref, θref). The coordinate θref describes the orientation of the line between two consecutive checkpoints. The reason for choosing the checkpoint as a circle instead of a point is that in reality it is very hard to get the robot to an exact position. If there is a 90 degree turn in the global plan, the robot will travel on a circle segment rather than making a 90 degree turn on the spot. The reason for this is that the robot cannot rotate around its own axis, due to nonholonomic constraints. The radius of the circle segment will be equal to the smallest turning radius of the robot. The rule for switching line segment is that the robot should be within the checkpoint to switch to the next line segment. If there is an obstacle in front of the checkpoint, the robot has to go around and behind the obstacle in order to clear the checkpoint. In order to go around an obstacle the robot may get closer to the next line segment than to the segment it is supposed to follow, see Figure 5.1. Since the primary goal is to follow the path in a satisfying way and not to reach exact positions, the solution will in this case be to switch line segment. This


Figure 5.1. An obstacle in front of a checkpoint. The dashed lines are the line segments to follow, the circle is the checkpoint, the black area is walls and the gray area is the obstacle. The robot is supposed to travel on the dotted line in order to clear the checkpoint. With the additional rule the robot will travel on the dash-dotted line.

will lead to an additional rule for switching line segment. The robot will switch to the next line segment when the distance to the next line segment is less than the distance to the present line segment. With only the second rule, the robot would sometimes have to pass the next line segment in order to switch to it. This is the case if there is 90 degrees between the two line segments, i.e. a sharp turn, and the robot is on the current line segment. The robot would make the turn too late in these situations. This is one of the reasons for having both rules.
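The two switching rules could be sketched as follows (a hypothetical helper in Python, with segments given as pairs of 2-D endpoints; not code from the thesis):

```python
import math

def dist_to_segment(p, a, b):
    """Distance from point p to the line segment a-b (2-tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection onto the segment to [0, 1]
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def switch_segment(p, checkpoint, radius, current_seg, next_seg):
    """Rule 1: the robot is inside the checkpoint circle.
    Rule 2: the robot is closer to the next segment than to the current one."""
    inside = math.hypot(p[0] - checkpoint[0], p[1] - checkpoint[1]) <= radius
    closer = dist_to_segment(p, *next_seg) < dist_to_segment(p, *current_seg)
    return inside or closer
```

Either rule alone fails in the situations described above; the disjunction of the two handles both the obstacle-in-front-of-checkpoint case and the sharp-turn case.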

5.3 Base Trajectories

A base trajectory is a path that the robot can travel, provided that there are no obstacles in the way. Depending on how long the prediction horizon is, the length of the base trajectories will differ. There is a time delay from changing the control signals until the robot responds, see Section 4.3.1. There is also a delay in the hardware; it takes some time for the front wheels to go from one position to another. Because of these delays there is no point in changing the control signal u2 too often. The base trajectories consist of a number of piecewise constant control signals. How many piecewise constant control signals the base trajectories consist of depends on how often there is a point in changing the control signal, due to the latency in the system, and on how long the prediction horizon is. Testing the base trajectories will result in a tree with a number of paths that are possible for the robot to travel if there are no obstacles in the way, see Figure 5.2 for an illustration. How many base trajectories can be tested depends on the time it takes for the computer to evaluate them; a stop and go movement is not desirable. The base trajectories in this thesis are constructed in an ad hoc manner. Since the robot does not travel the whole last predicted path, there is a sequence of control signals that has not been executed. These were a part of the best base trajectory the last time. They will therefore be tested again with an additional set of control signals to make a complete base trajectory. All other


Figure 5.2. A path tree consists of base trajectories, i.e. predicted paths that are possible for the robot to travel if there are no obstacles in the way.

predetermined base trajectories will also be tested.
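If, for illustration, the predetermined base trajectories are taken as all sequences of piecewise constant u2 commands drawn from a small discrete set (an assumption for this sketch; the thesis constructs its base trajectories in an ad hoc manner), the path tree can be enumerated like this:

```python
from itertools import product

def base_trajectories(u2_levels, n_pieces):
    """Every sequence of n_pieces piecewise constant u2 commands drawn
    from the discrete set u2_levels - the branches of the path tree."""
    return [list(seq) for seq in product(u2_levels, repeat=n_pieces)]
```

With three steering levels and two pieces this yields nine branches; the unexecuted tail of the previously chosen trajectory, extended with an additional set of control signals, would be appended to this list separately.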

5.4 The Feedback Rule

Depending on how the model and the path are represented, there are many different ways to choose the feedback term, see e.g. [10], [11], [12]. The feedback rule will be used to calculate control signals composing a base trajectory that takes the robot closer to the line segment it is following. When deriving the 2D car model in Section 2.1, the assumption was that the robot moves on a circle segment with radius R. The curvature of a circle segment is defined as

κ = \frac{1}{R}.     (5.1)

The feedback rule that will be presented here is taken from [11]. The idea is to calculate the derivative of the curvature of the path that the robot should follow in order to follow the line between two checkpoints. The feedback rule from [11] is

\frac{dκ}{ds} = -aκ - b(θ - θ_{ref}) - c\Delta l,     (5.2)

where s is the path length, a, b and c are positive constants and ∆l is the signed orthogonal distance from the robot's position to the line segment, see Figure 5.3. If the robot is on the left side of the line ∆l > 0, on the right side ∆l < 0, and ∆l = 0 when it is on the line. The representation of the global path


Figure 5.3. The principle and geometry of the feedback rule.

described in Section 5.2 suits this feedback rule. According to [11], the feedback rule will be uniformly asymptotically stable if a, b and c are chosen as positive constants with ab − c > 0.
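The signed distance Δl used in (5.2) can be computed with a 2-D cross product; a minimal sketch (a hypothetical helper, with the line given by the two points a and b, not code from the thesis):

```python
def signed_lateral_distance(p, a, b):
    """Signed orthogonal distance Δl from point p to the line through a
    and b: positive to the left of the direction a -> b, negative to the
    right, zero on the line."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    length = (dx * dx + dy * dy) ** 0.5
    # 2-D cross product of the line direction and the vector a -> p
    return (dx * (py - ay) - dy * (px - ax)) / length
```

The sign convention matches the text above: Δl > 0 on the left side of the line and Δl < 0 on the right side.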

5.4.1 How the Feedback Rule is used

When the robot is traveling with constant control signals, the curvature of the circle segment along which the robot is traveling should be constant. In reality the measurements of the curvature are not constant for constant control signals, see Figure 5.4. This is probably due to measurement noise and the limited resolution of the encoders. A way to handle this is to use the mean curvature of a number of samples. The mean curvature will then be more or less constant, see Figure 5.4. Another reason to use the mean curvature is that the curvature is sampled at 250 Hz and there is no point in changing the control signal this often. Instead of a fixed time for calculating the mean curvature, a fixed path length s is chosen. The control signals will be constant during the time it takes the robot to travel the distance s. This means that s will be the length of the circle segment. If s is chosen small, the feedback rule will be evaluated sufficiently often to fulfill its purpose. Let κk denote the mean curvature over the path length sk. According

to (5.2) the desired change of the curvature per path length will be

\frac{∆κ_k}{∆s_k} = -aκ_k - b(θ_k - θ_{ref}) - c∆l_k.     (5.3)

In order to calculate the next desired steering angle, the next desired curvature has to be calculated:

κ_{k+1} = κ_k + ∆κ_k.     (5.4)

The curvature in (5.1) can, using (2.3), be rewritten as

κ_k = \frac{ω_k}{V_k},     (5.5)


Figure 5.4. The curvature of a circle segment when the robot is traveling with constant control signals.

where ωk is the rotational velocity and Vk is the linear velocity. Combining (4.3) and (5.5) yields

φ_{k+1} = \arctan(Lκ_{k+1}),     (5.6)

where L is the length of the robot.
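One step of the discretized feedback rule, combining (5.3), (5.4) and (5.6), could be sketched as follows (Python with a hypothetical signature; a sketch, not the thesis implementation):

```python
import math

def next_steering_angle(kappa_k, theta_k, theta_ref, dl_k, ds_k, a, b, c, L):
    """One evaluation of the feedback rule: (5.3) gives the curvature
    change per path length, (5.4) updates the curvature and (5.6) maps
    it to a steering angle.  The gains must satisfy a, b, c > 0 and
    a*b - c > 0 for stability."""
    dkappa_ds = -a * kappa_k - b * (theta_k - theta_ref) - c * dl_k   # (5.3)
    kappa_next = kappa_k + dkappa_ds * ds_k                           # (5.4)
    phi_next = math.atan(L * kappa_next)                              # (5.6)
    return phi_next, kappa_next
```

With the robot on the line and aligned with it, all three terms vanish and the steering angle stays at zero; a positive Δl (robot on the left side) produces a negative curvature, steering the robot back toward the line.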

5.5 Collision Detection

In order to guarantee that the robot does not collide, a way to check whether it will collide or not is needed. The grid map has an appropriate resolution where every grid point is marked as drivable or not. The robot will occupy a number of grid points; how many depends on the resolution of the map and the rotation and size of the robot. It would be computationally demanding to check whether all grid points the robot occupies are drivable after every predicted movement. The approach in this thesis is to check whether certain points of the robot are colliding or not. The points that have been chosen are the corners of the robot, the middle point of each long side and the middle point of the front side. One obvious disadvantage with this is that an obstacle must be larger than half the width of the robot; if the obstacles are smaller, there is a risk that a collision will not be detected. If the robot is turning around a corner, there is a possibility that the corner touches the side of the robot without a collision being detected. If one of the corner points or the middle points of the long sides touches the corner, the collision is detected. If the robot instead were scaled up a bit when testing whether it collides or not, the risk of a corner touching the side would be reduced. A disadvantage with scaling the robot is that obstacles have to be at least half the width of the scaled robot to be detected, which is even larger than before. Another disadvantage with scaling the


robot is that it will be harder to go through narrow passages, i.e. a collision is more likely to be predicted for these passages with a larger robot. An advantage with scaling the robot is that if the robot is predicted to pass through a narrow passage, it is more likely that it makes it in reality.
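The seven-point collision test could be sketched as follows (a hypothetical implementation; `drivable` is an assumed callback into the grid map, and rounding world coordinates to grid indices is a simplification of the map lookup):

```python
import math

def robot_check_points(x, y, theta, length, width):
    """The seven test points of Section 5.5: the four corners, the middle
    of each long side and the middle of the front side."""
    hl, hw = length / 2.0, width / 2.0
    local = [(hl, hw), (hl, -hw),        # front corners
             (-hl, hw), (-hl, -hw),      # rear corners
             (0.0, hw), (0.0, -hw),      # middle of the long sides
             (hl, 0.0)]                  # middle of the front side
    c, s = math.cos(theta), math.sin(theta)
    # Rotate by theta and translate to the robot's position
    return [(x + c * lx - s * ly, y + s * lx + c * ly) for lx, ly in local]

def collides(x, y, theta, length, width, drivable, resolution):
    """drivable(i, j) -> bool for grid indices; resolution in metres/cell."""
    return any(not drivable(int(round(px / resolution)), int(round(py / resolution)))
               for px, py in robot_check_points(x, y, theta, length, width))
```

Scaling the robot, as discussed above, would amount to passing a slightly larger `length` and `width` to this check.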

5.6 Choosing a Criterion

The criterion for choosing a base trajectory will differ depending on the purpose of the robot. If the goal is that the robot's turn rate should be small, a factor depending on the turn rate is needed. Another example: if the speed is not constant during the drive, i.e. u1 is allowed to change, a factor depending on the speed could be used to give base trajectories with higher speed a larger value. The larger the value a base trajectory gets, the better the base trajectory is according to the criterion. The task of the robot presented in this thesis is to follow a given path in a satisfying way without colliding with obstacles. The criterion will therefore have a factor depending on whether the robot collides or not. If a collision is predicted for a certain base trajectory, that base trajectory will not be chosen. The distance from the robot to the given path will also be a factor in the criterion, in order to make the robot travel close to the given path. Another factor in the criterion will be the difference in the direction of the robot compared to the given path at the end of the prediction horizon, i.e. the angle between the robot and the line segment to follow. Let γ denote this angle:

γ = \|θ_N - θ_{ref}\|_{rad},     (5.7)

where ‖·‖_rad normalizes the angle to the interval [−π, π]. To clarify why γ is needed, two examples will now be given. The starting point of the robot in both examples is close to the given line segment. The robot travels parallel to the line segment while approaching a 90 degree turn; if the robot travels straight forward, the prediction horizon will end near the 90 degree turn. In the first example, the robot travels straight forward the whole time. The distance to the given path will then be small the whole time, but the direction of the robot will be wrong at the end of the prediction horizon, i.e. the robot starts turning too late. In the second example, the robot first travels close to the given path but after a while it starts turning. The direction of the robot at the end of the prediction horizon will probably be better in the second example than in the first. A natural desire is that the difference in the direction of the robot compared to the given path should be small. How small it should be is hard to say; this will depend on the distance to the given path. If the goal is that the difference in direction should be zero as often as possible, there is a risk that the robot travels parallel to the given path. This is all right if the distance to the given path is small. If the distance is not small, the robot will not get closer to the path as long as it travels parallel to it. This is not desirable since it could lead to problems in narrow passages, see Section 5.9. The effects that the distance to the given path and the difference in direction have on the criterion will have to be a compromise.


An optimal compromise is hard to find. A way to handle the problems in the two examples is as follows. Small differences in the direction of the robot compared to the given path are the most desired; the effect on the criterion will therefore be largest when γ ∈ [−α, α], where α is chosen small. When γ ∈ [−β, β], where β > α, the effect on the criterion will not be as large. The reason for this second interval is to reward the robot for making a turn even if it is not the best turn. If the difference in direction is too big, it means that the robot did not start the turn in time; this is not desired at all, and the effect on the criterion will therefore be zero. The goal is to follow the path and not to reach exact positions; it is therefore better if the robot is predicted to switch line segment than if it is not. A factor depending on whether the robot is predicted to switch line segment or not will also be in the criterion. The criterion used here is chosen as

J = J_1 f(γ) + \frac{J_2}{\sum_{i=1}^{N} |∆l_i|} + J_3 S + C,     (5.8)

where J_1, J_2 and J_3 are user defined weights and

f(γ) = \begin{cases} 1 & -α < γ < α \\ 0.5 & -β < γ < β \\ 0 & \text{otherwise} \end{cases}     (5.9)

S = \begin{cases} 1 & \text{if predicted to switch line segment} \\ 0 & \text{otherwise} \end{cases}     (5.10)

C = \begin{cases} -∞ & \text{if a collision is predicted} \\ 0 & \text{otherwise} \end{cases}     (5.11)

where β > α.
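The criterion (5.8) with the factors (5.9)-(5.11) could be computed as follows (a sketch; the guard against a zero distance sum is an added assumption, not part of the thesis formulation):

```python
def criterion(gamma, dl_abs_sum, switched, collided, J1, J2, J3, alpha, beta):
    """The criterion (5.8) with the factors f, S and C from (5.9)-(5.11).
    Requires beta > alpha; the small epsilon guards against a zero
    distance sum and is an added assumption."""
    if collided:
        return float('-inf')              # C in (5.11): never choose it
    if -alpha < gamma < alpha:
        f = 1.0                           # best: small direction error
    elif -beta < gamma < beta:
        f = 0.5                           # acceptable direction error
    else:
        f = 0.0                           # turned too late
    S = 1.0 if switched else 0.0          # (5.10)
    return J1 * f + J2 / max(dl_abs_sum, 1e-9) + J3 * S
```

Returning negative infinity for a predicted collision guarantees that a colliding base trajectory can never have the best value.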

5.7 The Algorithm

How often the algorithm should be evaluated must be decided, i.e. whether it should be evaluated after a certain time or after the robot has traveled a certain distance. A decision on how long the prediction horizon N should be must also be made, as well as how many piecewise constant control signals a base trajectory should consist of. After this, an appropriate number of base trajectories has to be decided upon. The criterion must also be chosen, i.e. which factors should affect it and the values of their weights.


Algorithm 1

1. Run one of the base trajectories through the model.
2. Check if the robot hits an obstacle. If so, jump to step 1, otherwise continue.
3. Evaluate the predicted path according to the criterion (5.8).
4. Repeat steps 1, 2 and 3 until all base trajectories have been considered.
5. Choose the base trajectory that gives the best value according to the criterion. If there is no clear path, stop and call for help, otherwise continue.
6. Execute the chosen sequence of control signals.
7. Jump to step 1 when it is time to choose a new path.
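Steps 1-5 of Algorithm 1 could be sketched as follows (hypothetical callbacks `simulate`, `collides` and `evaluate` stand in for the motion model, the collision check of Section 5.5 and the criterion (5.8); a sketch, not the thesis implementation):

```python
def plan_step(base_trajs, simulate, collides, evaluate):
    """Steps 1-5 of Algorithm 1: run every base trajectory through the
    model, discard the colliding ones and return the control sequence
    with the best criterion value, or None if no path is clear."""
    best_value, best_traj = float('-inf'), None
    for traj in base_trajs:            # step 1: predict with the model
        path = simulate(traj)
        if collides(path):             # step 2: skip colliding paths
            continue
        value = evaluate(path)         # step 3: criterion (5.8)
        if value > best_value:         # steps 4-5: keep the best one
            best_value, best_traj = value, traj
    return best_traj                   # step 6 executes this; None => stop
```

Returning `None` corresponds to step 5's "stop and call for help" branch; the caller repeats this planning step whenever it is time to choose a new path (step 7).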

5.8 Simulation of the Path Following Algorithm

To validate that the algorithm works in a satisfying way, it has been implemented and simulated in Matlab. The robot follows the path in a satisfying way and avoids the gray obstacles in Figure 5.5. The problem with an obstacle in front of a checkpoint, described in Section 5.2, is handled. Since no complete motion model of the robot is available, the simulation cannot guarantee that the obstacle avoidance part will work in reality.

5.9 Narrow Passages

If there is little space around the robot and it is supposed to go through a narrow passage according to the global path plan, this can be tricky. Consider an example where the robot is traveling in a tight corridor and the global path plan tells it to make a sharp turn and go through a door. In Figure 5.5 this problem is simulated. Depending on the global path and the incoming position of the robot, the result will differ. The better the global path plan is, the easier it will be for the robot to complete the path. The tighter the corridor and the narrower the doorway, the more crucial the incoming position will be. In Figure 5.5(b) there are two sharp right turns shortly after each other, from the long corridor into the short corridor and then into the large room. Sometimes the robot completes the path, and other times it stops before entering the large room. The map used in the simulation is an approximate map of an area where the robot has been driven by hand. Making these sharp turns when driving the robot by hand is hard, and it is not always successful. A major reason why narrow passages are hard to go through is that the robot cannot turn around its own axis, due to nonholonomic constraints. When the robot is going in the other direction through the door, it can go more or less straight toward the door,
