DEGREE PROJECT IN MECHANICAL ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2017

Localisation and mapping in smoke filled environments

A study of robot perception in harsh conditions using practical experiments.

AGNES JERNSTRÖM
Master of Science Thesis MMK 2017:180 MDA 620

Localisation and mapping in smoke filled environments

Agnes Jernström
Martin Zakardissnehf

Approved: 2017-10-16
Examiner: Martin Törngren
Supervisor: Lei Feng
Commissioner: Realisator Robotics
Contact person: Tommy Andersson

Abstract

This thesis was carried out together with Realisator Robotics, which is currently developing a fire-fighting assistant robot, FUMO. The aim of the thesis is to implement autonomous path planning and SLAM (Simultaneous Localisation and Mapping) functionality on the existing FUMO prototype, as well as to test how robust these functions are to smoke.

After an initial literature study on different ways of achieving robot perception in smoke, it was decided to use a multi-echo LIDAR as the main sensor for these tasks. All implementations were done in ROS (Robot Operating System) and open source code was used for some functions. Testing of the system was first performed in a simulated environment, where smoke was approximated using Gaussian noise. It was later concluded that Gaussian noise does not accurately portray all effects of real smoke, although it shares some similarities. The final tests were performed at a testing facility for smoke divers, where the algorithms were tested in different levels of smoke. The tests showed that the multi-echo LIDAR managed to see through light to medium smoke, in other words smoke which the naked eye can see through up to a few meters. In those conditions the SLAM algorithm was able to create usable maps. When new smoke was continuously added to the already smoke filled environment, the maps became fragmented and unreadable. The autonomous path planning was not tested in smoke due to safety concerns. However, the localisation on which the path planning is based was tested by driving the robot manually through the smoke. The results hint at a possibility of successfully using the path planning in these conditions.


Sammanfattning

This thesis was carried out in cooperation with Realisator Robotics, who are currently developing a robot, FUMO, intended to assist in fire fighting. The goal of the thesis is to implement autonomous navigation from one point to another as well as SLAM (Simultaneous Localisation and Mapping) functionality. The ability of these functions to handle smoke is also to be tested.

After an initial literature study on different ways of solving a robot's perception in smoke, it was decided to use a so-called multi-echo LIDAR as the main sensor. All implementations were made in the robot operating system ROS, and open source code was used for some functions. The first tests of the system were made in a simulated environment, in which smoke was approximated by Gaussian noise. It was later established that this does not represent all effects of real smoke, although they share some behaviour. The final tests were carried out at a testing facility for smoke divers, where the algorithms were tested in different levels of smoke. These tests showed that the multi-echo LIDAR managed to see through light to medium-thick smoke, that is, smoke that the naked eye can see through up to a couple of meters. The SLAM algorithm could in those cases generate usable maps. When new smoke was continuously added to the test area, the maps became fragmented and unreadable. The autonomous navigation was not tested in smoke due to safety risks. However, the localisation on which it is based was tested by manually driving the robot through the smoke. The results from this indicate that it is possible to use the autonomous navigation in smoke filled conditions.


Work distribution

Martin Zakardissnehf

The navigation and localisation parts in theory, simulations, implementations and tests.

Agnes Jernström

The mapping (SLAM) and odometry parts in theory, simulations, implementations and tests.

Both


Acknowledgements

We would like to thank our supervisor Lei Feng for his guidance during this thesis. We would also like to thank Thomas Eriksson, Tommy Andersson, Daniel Adolfsson and Jonathan Grimtjärn for their support with the hardware and the existing software on FUMO. Further, we want to thank Södertörns Brandförsvarsförbund for letting us use their facilities for the tests in smoke, and lastly we thank SICK for lending us a LIDAR to use in the thesis.


Contents

1 Introduction
  1.1 Background
  1.2 Problem statement
  1.3 Research questions
  1.4 Expected outcome
  1.5 Limitations
  1.6 Method

2 Prior knowledge
  2.1 Odometry
  2.2 Simultaneous Localisation and Mapping
  2.3 Localisation
  2.4 Navigation

3 Demonstrator
  3.1 Hardware
  3.2 Software

4 Tests
  4.1 Simulations
  4.2 Hardware tests
  4.3 Test in smoke

5 Discussion and conclusions

6 Recommendations and future work

Bibliography


Chapter 1

Introduction

1.1 Background

Together with six of the larger Swedish rescue services, the Swedish Civil Contingencies Agency (MSB) and the Swedish Association of Local Authorities and Regions (SKL), AB Realisator Robotics conducts the development of the fire-fighting assistant robot FUMO, which is intended as a complement to the fire brigades in difficult and complex operations. Examples of such situations are fires in parking garages, tunnels and business parks. The idea is that FUMO, among other things, will assist in smoke diving and withstand the same environment as a smoke diver. It is also possible that FUMO will be used at construction sites to assist the workers. Development of a previous model of the robot has been done by students in the higher course in mechatronics at KTH. In its current state the robot is remote controlled and equipped with an IR camera for vision. It has caterpillar tracks which give it the ability to climb staircases. A model of the current prototype can be seen in Fig. 1.1. Market launch is scheduled for 2017.


For future applications there is an interest in developing a follow-path functionality for the robot. This means that FUMO should be able to go autonomously between a starting point and a number of selected locations using a certain predefined path. Due to the environments where FUMO will be used it needs to be able to navigate in smoke, dust and in the dark.

Similar projects

To get an idea of how the task of navigating in smoke filled environments has been solved in other projects, a background study was performed. The focus was to find out which sensors could be used to achieve smoke resistant perception. Four different types of sensors have been used in the examined projects: radar, IR cameras, ultrasonic range finders and LIDAR. Some projects use a combination of these sensors and some use only one.

Radar measures distance or velocity by sending out a signal and analysing the reflected return signal. If a short wavelength is used it can see through smoke [19]. There are two main types of radar: FMCW (Frequency Modulated Continuous Wave) radar and pulse radar. FMCW radar transmits and receives continuously, which makes it possible to detect objects close to the robot. It has a measurement resolution of up to 2 cm in a range of 20 to 30 meters, which is shorter than the usual range of a pulsed radar. Due to its frequency range it is more susceptible to interference from other electronic systems than other radars, and some frequency ranges require special permits to operate in [19]. Pulse radar sends out signals in short pulses and receives the echo signal in the pauses between pulses. It is a proven technology that works well; the disadvantage is that it generally has trouble detecting objects that are close to it [19], which is not optimal in indoor environments. One type of pulse radar is UWB (Ultra Wide-Band) radar, which is suitable for short range. However, echoes may cause problems in indoor use. Also, while scattering patterns exist for flat walls, corners and edges, the scattering from more complex objects may be interpreted as misreadings.

Ultrasonic sensors send out a pulse of ultrasonic sound and measure the time of flight between the transmitted pulse and the reflected echo to determine the distance to objects [2]. They usually have a range of a few meters, at most 10 meters. Variations in air temperature affect the speed of sound and may thus cause misreadings; however, there exist sensors which compensate for this.


LIDARs are laser scanners which send out a short laser pulse and measure the distance to objects via the time of flight until the signal returns [20]. Some LIDARs have a single laser beam which is swept to get more data points, but there are also LIDARs which use multiple beams. There exist both 2D and 3D LIDARs and they come with different ranges and fields of view. The data from the most basic LIDAR sensors will be corrupted in smoke filled environments, but there are sensors which can compensate for this. The Velodyne LIDAR sends out millions of laser beams at different angles in order to find "holes" between particles [20]. There also exist so-called multi-echo LIDARs which can analyse two echo signals from each measuring beam and thus filter out echoes of lower intensity; see Fig. 1.2 for an example.

Figure 1.2: The intensity of return signals for a multi-echo LIDAR in fog [1]

Castro et al. [14] chose to combine a 3D laser scanner with a 94 GHz FMCW radar. The radar setup was custom built with a 360 degree FOV (field of view), 2 degree angular resolution and an accuracy of 0.2 meters. The setup was developed for a UGV (Unmanned Ground Vehicle) intended for outdoor use. Data from the laser scanner which was inconsistent with the points detected by the radar was removed. Data points which were close to an object that had been confirmed by the radar were left unchanged, since it could not be confirmed whether they were part of the object or dust particles. During a test with smoke, the map generated using this approach was 84 percent consistent with the ground truth map generated under smoke free conditions.

Radar technology has also been implemented for indoor SLAM by Marck et al. in [22]. They used a 24 GHz FMCW radar with a rotating antenna which was developed by the research team. The rig was carried by a human when testing the system. Their system managed to map brick walls and metal objects with a fair accuracy but struggled with walls with windows and cluttered areas.


A related radar-based system decides whether each detected landmark has been mapped before or not, differentiating between three types of landmarks: walls, corners and point scatterers. The position of the system and the landmarks is then determined using EKF (Extended Kalman Filter) SLAM. Using this approach, most of the key features in the test environment were mapped. However, the algorithm also maps false landmarks if the EKF has been wrongly initialised, and this error cannot be corrected once it has occurred. By optimising the thresholds of the algorithm false landmarks can be avoided, but this also results in actual landmarks remaining unmapped.

Another approach is to combine the laser data with data from ultrasonic range finders, as done by Santos et al. in [24]. They use data from a sensor which detects smoke to decide which data to use. The data from the LRF (laser range finder) is not trusted in certain levels of smoke and when the standard deviation of the readings is above 0.8. The ultrasonic sensors are only trusted for readings of less than 2.5 meters, to filter out old echoed signals. In their setup they use 9 ultrasonic sensors which each have a FOV of 20 degrees, giving the setup a total FOV of 180 degrees. The addition of the sensors greatly improved the system's robustness to smoke compared to only using the LRF. However, in high densities of smoke, where the robot had to rely more on the ultrasonic sensors, it was not enough to fully map the area.

SLAM using only ultrasonic range finders has been achieved by Wu et al. [28]. They used a setup of eight sensors in a circle, and to avoid acoustic interference each sensor was operated separately from the others. No such measure was taken by Santos et al. [24], which may account for their poor results. To improve the resolution of the mapping, Wu et al. set the robot to rotate by a small amount, with its location unchanged, after each set of measurements. The robot was not tested in smoke, but the ultrasonic sensors should remain unaffected in such conditions.

Another approach to SLAM in smoke filled environments is visual SLAM using thermal infrared cameras. This has been done by Emilsson et al. [9] and Brunner et al. [5] with fair results. However, a problem with this approach is that images without any thermal contrast can occur, especially in narrow environments, making it impossible to detect any features in the image.


Conclusions

To make the setup as simple as possible, a LIDAR with multi-echo functionality will be used. Many of the existing open source tools for SLAM are made specifically for LIDARs, and being able to use these will greatly improve the chances of creating a functioning prototype. Industrial LIDARs with multi-echo functionality can come with pre-made software to filter out fog, for example the SICK LMS1xx line [1]. A multi-echo LIDAR is also the approach that is likely to give the best results in a scenario without smoke among the discussed solutions, which will result in a more reliable ground truth. The LIDAR will be mounted close to the ground and should therefore be free from the thickest smoke. No navigation was performed in the research by Shamsudin et al. [25] and it is therefore deemed to be of interest to try the multi-echo LIDAR technology for autonomous driving in smoke.

1.2 Problem statement

FUMO should be able to go autonomously between a starting point and a number of selected locations using a certain predefined path. The path decisions should be based on a 2D map of the indoor environment. The map should be previously built using a SLAM algorithm. This SLAM algorithm should be based on existing algorithms. Due to the environments where FUMO will be used it needs to be able to navigate in smoke, dust and in the dark.

1.3 Research questions

What smoke levels can a multi-echo LIDAR handle?

How does a map generated under smoke free conditions differ from a map generated in a smoke filled environment with our chosen approach?

How much does the chosen path in a smoke filled environment deviate from the path chosen under perfect conditions?

1.4 Expected outcome


All algorithms will be initially evaluated and improved in a simulation environment with simplified robot dynamics. The algorithms will then be implemented in the existing prototype of FUMO.

1.5 Limitations

The robot will only be tested indoors in a known, controlled environment. Tests without smoke will always be performed before those with smoke. The robot will be tested with multiple start and end points which will be unknown to the robot. If the robot detects an obstacle that is not present in the previously built map it does not need to add it to the map. The robot uses ROS, so we will not have full control of the robot's behaviour, such as scheduling, in a way that would be possible if a real-time operating system had been used. Due to the limited time frame of the project, only one approach will be tested for SLAM and path planning respectively.

1.6 Method


Chapter 2

Prior knowledge

2.1 Odometry

The odometry describes where the robot is located and how it has travelled to get there. This is done using wheel encoders and an IMU (Inertial Measurement Unit). The calculation of the robot position using the encoders is done using the wheel radius and the wheelbase of the robot. The number of rotations done by each wheel can be read from the encoders. This data is used to calculate the distance $\Delta D$ travelled by each wheel during a time period as

\Delta D = 2\pi (n_{now} - n_{past})    (2.1)

where $n_{now}$ is the current number of rotations done by a wheel and $n_{past}$ is the previous number. This is done for one of the wheels on the left and one of the wheels on the right. If the wheels on both sides have roughly the same speed, the robot is assumed to be going straight forward or straight backwards. Otherwise the change in angle from the last measurement is calculated as

\Delta\theta = \frac{-R(\Delta D_{left} - \Delta D_{right})}{b}    (2.2)

where R is the wheel radius, $\Delta D$ is the distance travelled by the wheel since the last measurement (described in Eq. 2.1), and b is the length of the wheelbase. This is then added to the total angle. If the two wheels have travelled approximately the same distance in opposite directions, the robot is assumed to be turning on the spot. In other cases the robot's new position in the xy-plane is calculated using

\Delta x = R \cdot \Delta D_{left} \cdot \cos(\theta)    (2.3)

\Delta y = R \cdot \Delta D_{left} \cdot \sin(\theta)    (2.4)
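As a concrete illustration, the update in Eqs. (2.1)-(2.4) can be written as a short function. This is a minimal sketch, not the code running on FUMO; the wheel radius and wheelbase defaults are placeholders:

```python
import math

def odometry_update(pose, n_now, n_past, R=0.09, b=1.2):
    """One encoder-odometry step following Eqs. (2.1)-(2.4).

    pose   : (x, y, theta) in meters and radians
    n_now  : (left, right) cumulative wheel rotations at this step
    n_past : (left, right) cumulative wheel rotations at the last step
    R, b   : wheel radius and wheelbase (placeholder values)
    """
    x, y, theta = pose
    # Eq. (2.1): angle turned by each wheel since the last measurement
    dD_left = 2.0 * math.pi * (n_now[0] - n_past[0])
    dD_right = 2.0 * math.pi * (n_now[1] - n_past[1])
    # Eq. (2.2): change in heading from the difference between the sides
    theta += -R * (dD_left - dD_right) / b
    # Eqs. (2.3)-(2.4): translation in the xy-plane
    x += R * dD_left * math.cos(theta)
    y += R * dD_left * math.sin(theta)
    return (x, y, theta)
```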


The wheel odometry is combined with the IMU data using an EKF (Extended Kalman Filter), implemented in the ROS package robot_pose_ekf. The filter is updated with the relative pose difference between the sensors when a measurement from each sensor has been received. Kalman filters represent the belief $bel(x_t)$ (in our case the robot pose) at a time t by the mean $\mu_t$ and the covariance $\Sigma_t$, where $\mu_t$ is a vector and $\Sigma_t$ is a symmetric, positive-semidefinite matrix. The EKF assumes that the state transition probability $p(x_t \mid u_t, x_{t-1})$ and the measurement probability $p(z_t \mid x_t)$ are governed by the nonlinear functions

x_t = g(u_t, x_{t-1}) + \varepsilon_t    (2.5)

z_t = h(x_t) + \delta_t    (2.6)

where $x_t$ and $x_{t-1}$ are state vectors and $u_t$ is the control vector at time t. $z_t$ contains the measurements and $\varepsilon_t$ is a Gaussian random vector which models the uncertainty and has the same dimension as the state vector. The vector $\delta_t$ describes the measurement noise. The nonlinear functions g and h are approximated by linear functions using Taylor expansion. For g the expansion is developed around the mean of the posterior $\mu_{t-1}$, while for h it is done around the predicted belief $\bar{\mu}_t$. The EKF is computationally efficient and therefore suitable for state estimation in robotics, where the robot pose needs to be continually updated without interfering with other necessary calculations. The estimation works best when the uncertainty of the state estimate is small. The improvement of using robot_pose_ekf instead of the wheel odometry alone is shown in Fig. 2.1, which shows the results obtained by the developers of the package when driving a robot in a loop. The odometry based on robot_pose_ekf manages to close the loop while the wheel odometry is slightly off [23].
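For reference, one generic EKF iteration built from Eqs. (2.5)-(2.6) might look as follows. This is an illustrative sketch, not the robot_pose_ekf implementation; all function and parameter names are assumptions:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, h, G, H, R_noise, Q_noise):
    """One generic EKF iteration.

    mu, Sigma : previous belief (mean vector, covariance matrix)
    u, z      : control input and measurement at time t
    g, h      : nonlinear motion and measurement models (Eqs. 2.5-2.6)
    G, H      : functions returning the Jacobians of g and h
    R_noise   : covariance of the process noise epsilon_t
    Q_noise   : covariance of the measurement noise delta_t
    """
    # Predict: propagate the belief through the linearised motion model
    mu_bar = g(u, mu)
    G_t = G(u, mu)
    Sigma_bar = G_t @ Sigma @ G_t.T + R_noise
    # Update: correct the prediction with the measurement
    H_t = H(mu_bar)
    S = H_t @ Sigma_bar @ H_t.T + Q_noise
    K = Sigma_bar @ H_t.T @ np.linalg.inv(S)      # Kalman gain
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H_t) @ Sigma_bar
    return mu_new, Sigma_new
```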


2.2 Simultaneous Localisation and Mapping

SLAM (Simultaneous Localisation and Mapping) concerns the problem of mapping an unknown environment while at the same time localising yourself in the map. There are many different approaches to solving the SLAM problem. This project uses the method by Grisetti et al. described in [18]. This method is used in several of the projects described in the background ([22], [24], [28]) and has been proven effective. It is based on grid mapping using Rao-Blackwellized particle filters and is implemented in ROS under the name "gmapping" (see [15]). Grid mapping builds a map by estimating the probability that each cell of a grid is occupied. Cells which are likely to contain an object are visualised darker than cells which are likely to be open. See Fig. 2.2 for an example of how it may look.

Figure 2.2: Example of grid mapping. Black cells are most likely occupied and white cells are most likely free. Grey cells have not been observed [27].

The Rao-Blackwellized particle filter estimates the joint posterior

p(x_{1:t}, m \mid z_{1:t}, u_{1:t-1})    (2.7)

over the map m and the trajectory $x_{1:t}$ of the robot, using the sensor observations $z_{1:t}$ and the odometry measurements $u_{1:t-1}$. The joint posterior can be factorised as

p(x_{1:t}, m \mid z_{1:t}, u_{1:t-1}) = p(m \mid x_{1:t}, z_{1:t}) \cdot p(x_{1:t} \mid z_{1:t}, u_{1:t-1})    (2.8)


In gmapping the standard number of particles for this filter is 30 [15]. The samples are taken from the so-called proposal distribution $\pi$, which is based on the odometry [18]. After sampling, each sample is given an importance weight which signifies the probability that the particle is "true". The weights are calculated as

\omega_t^{(i)} = \frac{p(x_{1:t}^{(i)} \mid z_{1:t}, u_{1:t-1})}{\pi(x_{1:t}^{(i)} \mid z_{1:t}, u_{1:t-1})}    (2.9)

Since there is only a finite number of particles they need to be re-sampled. During the re-sampling step new particles are drawn based on the weights of the old particles, and all new particles are initially given the same weight. One thing that sets the work by Grisetti et al. apart from previous solutions to the SLAM problem is the use of an adaptive re-sampling technique. This makes it less likely that accurate particles are eliminated during the re-sampling step, a problem commonly referred to as particle depletion. The accuracy of the posterior represented by a particle set is estimated by the so-called effective sample size, calculated as

N_{eff} = \frac{1}{\sum_{i=1}^{N} (\tilde{\omega}^{(i)})^2}    (2.10)

where $\tilde{\omega}^{(i)}$ is the normalised weight of particle i and N is the total number of particles. Each time $N_{eff}$ drops below a certain threshold the re-sampling is performed; in gmapping this threshold is equal to half the number of particles, N/2 (a code sketch of this step follows after the algorithm summary below). The overall algorithm can be summarised as follows:

1. An initial guess of the robot's pose is obtained from the particles representing the previous pose and the odometry measurements collected since then.

2. A scan matching is executed against the map, starting from the initial guess. The scan-matcher aims to find the most likely pose by matching the current observation against the map generated so far. The scan-matching is done on a per-particle basis.

3. A set of sampling points is selected in an interval around the pose reported by the scan-matcher. Based on these, the mean and covariance matrix of the proposal are computed.

4. The new pose of the particle is drawn from a Gaussian approximation of the improved proposal distribution.

5. The importance weights are updated. The variance of the weights is higher the worse the approximation of the target distribution is.
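The adaptive re-sampling step referenced above can be sketched as below. This is a minimal illustration, not the gmapping source; the systematic re-sampling scheme and the function names are assumptions:

```python
import numpy as np

def maybe_resample(particles, weights):
    """Adaptive re-sampling: only re-sample when the effective sample
    size N_eff (Eq. 2.10) drops below half the number of particles."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                               # normalised weights
    n = len(particles)
    n_eff = 1.0 / np.sum(w ** 2)               # effective sample size
    if n_eff >= n / 2.0:                       # particle set still healthy
        return particles, w
    # Systematic re-sampling: draw new particles in proportion to weight
    positions = (np.arange(n) + np.random.uniform()) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    new_particles = [particles[i] for i in idx]
    return new_particles, np.full(n, 1.0 / n)  # equal initial weights
```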


2.3 Localisation

Overview

Since the path planning is limited to the case when the robot is in a known environment with a "perfect" map, the main task of the navigation will be to locate where in the map the robot is. To do this, AMCL (Adaptive Monte Carlo Localisation) will be used. AMCL is a type of Monte Carlo Localisation where the number of particles in the particle filter changes using KLD-sampling (Kullback-Leibler Divergence). In the beginning all particles are given a random starting point in the world, as shown in Fig. 2.3, and when the robot moves all particles move accordingly. When the filter updates, it takes information from the LIDAR and the particles start to converge, forming clusters (Fig. 2.4) where the robot believes it is located. After a while the particles have converged to only one cluster (Fig. 2.5). If AMCL converges to the wrong position, it is possible to recover by adding particles at random locations in the map. AMCL uses a motion model and a measurement model [27]. To implement AMCL, the ROS package amcl [16] was used.

Figure 2.3: The initial particle distribution: (a) particles, (b) actual position

Figure 2.4: The particles after the robot has started moving: (a) particles, (b) actual position

Figure 2.5: The particles when AMCL has converged to one point: (a) particles, (b) actual position

A flowchart of the algorithm is shown in Fig. 2.6. At the start, AMCL is given a rough starting point if the robot's position is known, and it will create a filter covering this area. Otherwise, if global localisation is used, the filter covers the entire map. The inner loop continues until a statistical bound has been satisfied ($M > M_x$) and the new filter is completed; its position is then published.


Motion model

The motion model describes how the robot moves.

Figure 2.7: Movement between two points

As inputs, an initial pose $x_{t-1} = (x\ y\ \theta)$ and an odometry reading $u_t = (x_{t-1}\ x_t)$, where $x_t = (x'\ y'\ \theta')$, are used. $x_{t-1}$ and $x_t$ are in the robot's internal coordinates, whose relation to the global coordinates is hidden. As seen in Fig. 2.7 the movement between two points consists of three steps: first a rotation $\delta_{rot1}$, followed by a translation $\delta_{trans}$ and finally another rotation $\delta_{rot2}$. They are calculated according to

\delta_{rot1} = \text{atan2}(y' - y,\ x' - x) - \theta    (2.11)

\delta_{trans} = \sqrt{(x - x')^2 + (y - y')^2}    (2.12)

\delta_{rot2} = \theta' - \theta - \delta_{rot1}    (2.13)

The next step is to convert the movement from the internal to the global coordinates; this is done using sampling:

\hat{\delta}_{rot1} = \delta_{rot1} - \text{sample}(\alpha_1\delta_{rot1}^2 + \alpha_2\delta_{trans}^2)    (2.14)

\hat{\delta}_{trans} = \delta_{trans} - \text{sample}(\alpha_3\delta_{trans}^2 + \alpha_4\delta_{rot1}^2 + \alpha_4\delta_{rot2}^2)    (2.15)

\hat{\delta}_{rot2} = \delta_{rot2} - \text{sample}(\alpha_1\delta_{rot2}^2 + \alpha_2\delta_{trans}^2)    (2.16)

where $\alpha_1$, $\alpha_2$, $\alpha_3$ and $\alpha_4$ are independent noise parameters. sample(b) draws a random value from a zero-centred distribution with variance b.


The final step is then to calculate the new pose $x_t$:

x' = x + \hat{\delta}_{trans}\cos(\theta + \hat{\delta}_{rot1})    (2.17)

y' = y + \hat{\delta}_{trans}\sin(\theta + \hat{\delta}_{rot1})    (2.18)

\theta' = \theta + \hat{\delta}_{rot1} + \hat{\delta}_{rot2}    (2.19)
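The sampling procedure of Eqs. (2.11)-(2.19) can be summarised in code. The sketch below follows the standard odometry motion model; the noise parameters are illustrative, not values used in the thesis:

```python
import math
import random

def sample(variance):
    # Draw from a zero-centred Gaussian with the given variance
    return random.gauss(0.0, math.sqrt(variance))

def sample_motion_model_odometry(pose, odom_prev, odom_now, alphas):
    """Sample a new particle pose per Eqs. (2.11)-(2.19).
    alphas = (a1, a2, a3, a4) are tuning parameters."""
    x, y, theta = pose
    xo, yo, to = odom_prev
    xn, yn, tn = odom_now
    a1, a2, a3, a4 = alphas
    # Decompose the odometry reading into rotation-translation-rotation
    d_rot1 = math.atan2(yn - yo, xn - xo) - to
    d_trans = math.hypot(xn - xo, yn - yo)
    d_rot2 = tn - to - d_rot1
    # Perturb each component with sampled noise (Eqs. 2.14-2.16)
    h_rot1 = d_rot1 - sample(a1 * d_rot1**2 + a2 * d_trans**2)
    h_trans = d_trans - sample(a3 * d_trans**2
                               + a4 * d_rot1**2 + a4 * d_rot2**2)
    h_rot2 = d_rot2 - sample(a1 * d_rot2**2 + a2 * d_trans**2)
    # Apply the noisy motion to the particle's pose (Eqs. 2.17-2.19)
    return (x + h_trans * math.cos(theta + h_rot1),
            y + h_trans * math.sin(theta + h_rot1),
            theta + h_rot1 + h_rot2)
```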

Measurement model

The measurement model used is the likelihood field model for range finders. The basic principle of the model is to take a laser scan $z_t$ and project it onto the map. To calculate the position of each laser point $z_t^k$ in the global map, the following equations are used, with the LIDAR's position in the local coordinates denoted $(x_{k,sens}\ y_{k,sens}\ \theta_{k,sens})$:

x_{z_t^k} = x + \cos(\theta)\, x_{k,sens} - \sin(\theta)\, y_{k,sens} + \cos(\theta + \theta_{k,sens})\, z_t^k    (2.20)

y_{z_t^k} = y + \cos(\theta)\, y_{k,sens} + \sin(\theta)\, x_{k,sens} + \sin(\theta + \theta_{k,sens})\, z_t^k    (2.21)

To find the nearest neighbour of the measured point, the Euclidean distance between $(x_{z_t^k}, y_{z_t^k})$ and each point in the map that is occupied by an object is calculated. The likelihood is then updated according to

\omega = \omega \cdot \left( z_{hit} \cdot \text{prob}(dist^2, \sigma_{hit}^2) + \frac{z_{rand}}{z_{max}} \right)    (2.22)

where $\text{prob}(dist^2, \sigma_{hit}^2)$ computes the probability of the Euclidean distance to the nearest object under a zero-centred Gaussian distribution with variance $\sigma_{hit}^2$. $z_{hit}$ and $z_{rand}$ are mixing weights and $z_{max}$ is the maximum distance that the LIDAR can see.
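A sketch of how one particle could be scored with the likelihood field model (Eqs. 2.20-2.22) is shown below. The precomputed distance field, the grid indexing and all parameter values are assumptions, and the LIDAR is assumed to be mounted at the robot centre so that the sensor-offset terms vanish:

```python
import math

def likelihood_field_weight(pose, scan, angles, dist_field, resolution,
                            z_hit=0.95, z_rand=0.05, z_max=20.0,
                            sigma_hit=0.2):
    """Score one particle. dist_field[i][j] holds the distance (m) from
    cell (i, j) to the nearest occupied cell; parameter values are
    illustrative, not amcl defaults."""
    x, y, theta = pose
    w = 1.0
    for r, a in zip(scan, angles):
        if r >= z_max:                       # max-range returns carry no info
            continue
        # Project the beam endpoint into the map (Eqs. 2.20-2.21),
        # with the sensor offset assumed to be zero
        ex = x + r * math.cos(theta + a)
        ey = y + r * math.sin(theta + a)
        i, j = int(ey / resolution), int(ex / resolution)
        if 0 <= i < len(dist_field) and 0 <= j < len(dist_field[0]):
            d = dist_field[i][j]             # distance to nearest obstacle
            p_hit = math.exp(-d**2 / (2 * sigma_hit**2))
            w *= z_hit * p_hit + z_rand / z_max   # Eq. (2.22)
        else:
            w *= z_rand / z_max              # endpoint outside the map
    return w
```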

Monte Carlo Localisation

MCL (Monte Carlo Localisation) takes a particle filter of a given size, runs the motion model on each particle in the filter, and uses this "guess" in the measurement model to give the particle a weight. The position $x_t$ and weight $\omega_t$ are then stored in a new temporary dataset of the same size as the filter. This temporary set is then used to create the new filter by generating the new particles based on the weights of the old ones according to

\text{draw } i \text{ with probability} \propto \omega_t^{(i)}    (2.23)


KLD-sampling is based on the Kullback-Leibler divergence, which measures the difference between two probability distributions [21]. The basic idea of KLD-sampling is to add particles to the filter until the error between the true posterior and the sample-based approximation is less than $\epsilon$ with probability $1 - \delta$ [12]. This means that particles will be added to the filter until a statistical bound is satisfied. Fox describes in [12] how to calculate this bound, and by using the Wilson-Hilferty transformation the following approximation is obtained:

M_x = \frac{k-1}{2\epsilon}\left(1 - \frac{2}{9(k-1)} + \sqrt{\frac{2}{9(k-1)}}\ z_{1-\delta}\right)^3    (2.24)

$M_x$ is recalculated each time a new particle falls into an empty bin of a histogram. This histogram starts out empty, and as the number of non-empty bins increases, k increases by the same amount. $z_{1-\delta}$ depends on the probability $1-\delta$ and can be found in standard statistical tables.
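The bound of Eq. (2.24) is straightforward to compute; a small helper, with illustrative values for $\epsilon$ and the quantile $z_{1-\delta}$ (here for $\delta$ = 0.01), could look like this:

```python
import math

def kld_bound(k, epsilon=0.05, z_one_minus_delta=2.326):
    """Number of particles required by KLD-sampling (Eq. 2.24).
    k is the number of non-empty histogram bins; epsilon and the
    quantile are illustrative, not the values tuned in the thesis."""
    if k < 2:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return int(math.ceil((k - 1) / (2.0 * epsilon)
                         * (1.0 - a + math.sqrt(a) * z_one_minus_delta) ** 3))
```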

Recover from failure

If the particles converge to the wrong position it will be difficult for the robot to find its actual position. This can be solved by adding random particles to the map. To add particles, two exponential filters are needed: one short-term, $\omega_{fast}$, and one long-term, $\omega_{slow}$. New particles are then added with probability

\max\{0.0,\ 1.0 - \omega_{fast}/\omega_{slow}\}    (2.25)

The exponential filters are updated according to

\omega_{fast} = \omega_{fast} + \alpha_{fast}(\omega_{avg} - \omega_{fast})    (2.26)

\omega_{slow} = \omega_{slow} + \alpha_{slow}(\omega_{avg} - \omega_{slow})    (2.27)

where $\alpha_{slow}$ is the gain for the long-term filter and $\alpha_{fast}$ for the short-term filter, with the relation $0 \le \alpha_{slow} \ll \alpha_{fast}$ [27]. $\omega_{avg}$ is the average weight of all the particles in the filter.
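A minimal sketch of this recovery mechanism (Eqs. 2.25-2.27) follows; the gains are illustrative, not the values used on FUMO:

```python
import random

def update_recovery_filters(w_avg, w_fast, w_slow,
                            a_fast=0.1, a_slow=0.001):
    """Exponential filters of Eqs. (2.26)-(2.27); gains are illustrative."""
    w_fast += a_fast * (w_avg - w_fast)
    w_slow += a_slow * (w_avg - w_slow)
    return w_fast, w_slow

def inject_random_particle(w_fast, w_slow):
    """Decide whether to replace a particle with a random one (Eq. 2.25)."""
    p = max(0.0, 1.0 - w_fast / w_slow)
    return random.random() < p
```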

2.4 Navigation


Costmap

A costmap in ROS consists of three map layers: static, obstacles and inflation. The static map layer contains the map generated using gmapping. The obstacle map layer contains the discrepancies between the static map and the sensor data that the robot receives when moving around. The inflation layer inflates the obstacles to generate a cost between 0 and 255 for each cell. The objects detected by the LIDAR are given the maximum cost and are considered "lethal". The next zone consists of the cells where, if the centre of the robot were placed there, the robot would collide with the object. Where a collision is not guaranteed, the cost declines until it reaches zero. The global planner uses this inflation to calculate the path to the goal with the lowest cost. The local planner also incorporates the costmap in its calculations, alongside distance measurements.

Global planner

The global planner used is global_planner [8], which is based on NF1 by Brock et al. [4]. global_planner can use both Dijkstra's algorithm and A* to find the path; for this project Dijkstra's algorithm will be used. The map can be described as a grid where the robot can move to every cell that is not occupied by another object. Dijkstra's algorithm will then try to find the shortest way to the goal by first checking the cells next to the start, see Fig. 2.8a. It then checks the cells surrounding the cells from 2.8a; the result is seen in 2.8b. This procedure continues until the goal is found, as seen in 2.8c. The final step is to back-trace the path from the goal to the start, as in 2.8d.

The difference between Dijkstra's algorithm and A* is that the latter uses a heuristic to "guide" the search towards the goal; the difference between the two algorithms can be seen in Fig. 2.9. A* checks fewer cells since it is guided, which makes it possible to find a solution faster than with Dijkstra's algorithm, but the solution cannot be guaranteed to be the best one. With this in mind, Dijkstra's algorithm is chosen, since the time saved with A* would not mean anything due to the low speed of the robot. In Fig. 2.9a the shortest path is the one with the lowest cost, but to be able to use Dijkstra's algorithm in an environment with walls and other obstacles, the values from the costmap are used instead of only the distance to the goal. This might result in a longer path, but the path will be safer. The cost for each cell is calculated as

cost_{tot} = cost_{neutral} + c_f \cdot cost_{costmap}    (2.28)

where $cost_{neutral}$ is the default cost of the cell, $cost_{costmap}$ is the cost from the costmap and $c_f$ is a scaling factor.


Figure 2.8: The black square is the start position, blue is the goal, red marks the cells that the robot has checked and the green path is the shortest path from start to goal

Figure 2.9: The difference between Dijkstra's algorithm and A*
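To make the search concrete, a minimal grid Dijkstra over costmap values (Eq. 2.28) is sketched below; the costmap representation and helper names are assumptions, not the global_planner source:

```python
import heapq

def dijkstra_grid(costmap, start, goal):
    """Grid Dijkstra as used conceptually by the global planner.
    costmap[i][j] holds the cell cost of Eq. (2.28); None marks a
    lethal (occupied) cell. start and goal are (row, col) tuples."""
    rows, cols = len(costmap), len(costmap[0])
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist[cell]:
            continue                          # stale queue entry
        i, j = cell
        for nxt in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            ni, nj = nxt
            if not (0 <= ni < rows and 0 <= nj < cols):
                continue
            if costmap[ni][nj] is None:       # lethal obstacle
                continue
            nd = d + costmap[ni][nj]
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = cell
                heapq.heappush(queue, (nd, nxt))
    if goal not in prev and goal != start:
        return []                             # goal unreachable
    path, cell = [], goal                     # back-trace (Fig. 2.8d)
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    path.append(start)
    return list(reversed(path))
```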


Local planner

For the local planner, base_local_planner [7] is used, which can use trajectory rollout [17] or DWA (Dynamic Window Approach) [13] together with the local costmap. DWA and trajectory rollout work by sampling dx, dy and dθ in the robot's control space and then, for each sample, performing a short forward simulation to see where the robot would end up. After that, the different trajectories are scored based on proximity to obstacles, proximity to the goal, proximity to the global path, and speed. Trajectories that would cause a collision are discarded. The trajectory with the best score is chosen and the process is repeated.

The difference between the two algorithms is that DWA samples only for one step instead of the entire forward-simulation period like trajectory rollout. Trajectory rollout is recommended for robots with low acceleration and was therefore chosen for this application. The only downside is that it is more computationally heavy than DWA [7].

The score that is used to evaluate the trajectories is

cost = \omega_{path} \cdot d_{path} + \omega_{goal} \cdot d_{goal} + \omega_{obstacles} \cdot cell_{cost}    (2.29)

$d_{path}$ is the distance in meters from the endpoint of the trajectory to the global path, and $\omega_{path}$ is its weight; increasing it makes the local planner prefer trajectories that stay close to the global plan. $d_{goal}$ is the distance in meters from the endpoint of the trajectory to the goal, with weight $\omega_{goal}$; increasing it makes the planner strive harder towards its goal, possibly taking riskier paths that deviate from the global plan. $cell_{cost}$ is the highest obstacle cost (0 to 254) encountered by the robot along the trajectory, and $\omega_{obstacles}$ controls how much the planner tries to avoid obstacles.
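The scoring loop can be sketched as follows. This is an illustrative simplification of trajectory rollout for a differential-drive robot, not the base_local_planner source; all helper functions, sample sets and weights are assumptions, and the best trajectory is here the one with the lowest combined cost of Eq. (2.29):

```python
import math

def trajectory_rollout(pose, v_samples, w_samples, sim_time, dt,
                       cost_at, d_path, d_goal,
                       w_path=0.6, w_goal=0.8, w_obst=0.01):
    """Pick a (v, w) command. cost_at(x, y) returns the costmap value
    (None if lethal); d_path/d_goal return the distances used in the
    score. Weights and sample sets are illustrative."""
    best, best_cmd = float("inf"), None
    for v in v_samples:                        # candidate forward speeds
        for w in w_samples:                    # candidate turning speeds
            x, y, th = pose
            cell_cost, collided = 0.0, False
            for _ in range(int(sim_time / dt)):    # forward simulation
                x += v * math.cos(th) * dt
                y += v * math.sin(th) * dt
                th += w * dt
                c = cost_at(x, y)
                if c is None:                  # trajectory hits an obstacle
                    collided = True
                    break
                cell_cost = max(cell_cost, c)
            if collided:
                continue                       # discard colliding trajectories
            score = (w_path * d_path(x, y) + w_goal * d_goal(x, y)
                     + w_obst * cell_cost)     # Eq. (2.29)
            if score < best:
                best, best_cmd = score, (v, w)
    return best_cmd
```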


Chapter 3

Demonstrator

3.1 Hardware

Realisator Robotics has a prototype of FUMO which can be controlled remotely. It is equipped with caterpillar tracks with wheel encoders, and several cameras (including one IR camera). The prototype is shown in Fig. 3.1. Tests will be performed using this prototype. For this project some additional sensors had to be added, as well as an on-board computer to run the necessary software. The on-board computer has an Intel i7-4790S processor and 16 GB of memory. The sensors that have been used in this project are an IMU (Inertial Measurement Unit) (MTi-28A53G35), a LIDAR (SICK LMS 100-10000), a Wi-Fi module for communication between the on-board computer and a host computer, and the encoders.


Structure

The sensor data from the encoders (serial), IMU (USB) and LIDAR (Ethernet) is sent to the robot's computer, which is running Ubuntu with ROS. The motors, however, are not controlled by this computer but by a separate system from National Instruments which is connected to the motor drivers. The reason for this is that the main function of the robot is to be driven using a remote controller, and in those cases the functionality that the ROS computer brings is unnecessary. Further, ROS is not stable enough for commercial use. The connection between ROS and LabView (the software on the National Instruments computer) is done over Ethernet, where a node in ROS takes the velocities produced by the planner and converts them to the corresponding values in LabView. For the Ethernet communication a switch is used, to which the Ethernet-based sensors and the computers are connected. There is also a router connected to the switch that enables a remote computer to connect wirelessly to the robot.

3.2 Software

The software is written for ROS (Robot Operating System), which is an open source framework for writing robot software. ROS provides tools and libraries which simplify the development of complex robot operations [10]. As previously mentioned, the SLAM algorithm used in this project has already been implemented as a package in ROS by other researchers [15]. For a list of all third party ROS packages used in this project, see Appendix A. Software for controlling the motors and receiving raw data from the encoders had been implemented in the FUMO prototype by Realisator Robotics before the start of this project.

Figure 3.2: The node structure when mapping

The robot_pose_ekf node implements the EKF described in the prior knowledge chapter, and the Scan node takes in data from the LIDAR. Since the sensors are physically located in different places on the robot, a translation between the different coordinate frames is needed. This is handled by the tf node. All frames in this tree are shown in Fig. 3.3. The global frame is the map frame. This is connected to the odom frame, which keeps track of the robot's estimated position according to robot_pose_ekf. The base_link is a static frame located at the centre of the robot, and the transformations of the sensor frames are done with respect to this. The base_footprint is a frame located at ground level directly under the base_link, and this is the frame on which the encoder data is received. The robot base frame is in turn connected to frames tied to each wheel of the robot; these are used for giving drive commands. When mapping, the robot_pose_ekf node and the gmapping node take in sensor data and information about the transforms and use these to create new transforms (between odom and base_link, and between map and odom, respectively). The final map is distributed by the gmapping node.

Figure 3.3: The connections of all frames. The connection between the different sensors and the base_link is static, while the connection between the map frame and the odometry frame (odom) is created by gmapping or AMCL depending on the mode of operation, and the connection between odom and base_link is created by robot_pose_ekf.


The node structure for the path planning mode is shown in Fig. 3.4. The relation between the robot_pose_ekf node and the others is the same as when mapping. However, in this case the link between the global map frame and the odom frame (in Fig. 3.3) is created by the AMCL node. When using path planning a pre-made map is needed; this is distributed by the Map node. The move_base node is the node that outputs the final path.


Chapter 4

Tests

4.1 Simulations

Before any software was implemented on the FUMO prototype using real sensors, the robot behaviour was simulated in Gazebo.

Gazebo is an open source simulation software created for simulating robots under various conditions and is compatible with ROS. The simulated sensors can thus transmit data directly to ROS topics and the robot can be controlled using ROS tools [11].

The simulations were made based on a simple model of the FUMO prototype provided by Realisator Robotics. To enable the tests necessary for this project, a skid steer plugin was added and the model was modified to contain the sensors mentioned in the hardware section.


Mapping

The SLAM algorithm was tested using the simulated sensor data generated by driving the robot in Gazebo. An example of a map generated in this way is shown in Fig. 4.1. It was not possible to simulate smoke in Gazebo, but Gaussian noise could be added to the simulated LIDAR. This gave an idea of how well the mapping algorithms work in environments with disturbances. The mapping was tested using standard deviations of 0.001, 0.01, 0.1 and 1 for the Gaussian noise in the LIDAR, where a noise of 0.001 can be considered normal under perfect conditions. All tests were performed in the same simulated environment. The robot was driven manually in each test, since Gazebo does not provide tools for recording a path and driving it again while collecting new sensor data. Because of this, some variations in the resulting maps were the result of small alterations in the path of the robot. In all tests the robot was driven forward at a speed of 0.2 m/s and the turning speed varied between 0.1 and 0.3 rad/s. The maps created are shown in Fig. 4.2. To make it easier to determine whether differences between the maps are due to noise or the path, each image also shows the path taken by the robot.

The map generated with a standard deviation of 0.01 is fairly similar to the one with 0.001. With a standard deviation of 0.1 the walls start to become fuzzy, but it is still possible to determine the main features of the map. When the standard deviation is 0.2 the inner walls of the map start to disappear; at 0.5 they are barely visible, and at 1 the map is completely unreadable, with none of the inner walls shown. One way of determining the accuracy of the proportions in the map is to calculate the mapping error. This is done by choosing a set of corners in the ground truth environment and comparing their relative locations to those in the constructed map. The chosen corners are shown in Fig. 4.3.

The ground truth locations of the corners are calculated using the known dimensions of the simulated environment. The positions of the corners in the map are read in rviz, a tool for viewing data sent over ROS topics. The results for the generated maps presented in Fig. 4.2 are shown in Table 4.1. The lower the normalised difference $N_k$ is, the closer the map is to having the same proportions as the ground truth.


Figure 4.2: Maps generated with Gaussian noise of standard deviation (a) 0.001, (b) 0.01, (c) 0.1, (d) 0.2, (e) 0.5, (f) 1


Figure 4.3: The corners used for calculating the mapping error are marked in red

Gaussian noise   N_k      σ_Nk     max      min
0.001            0.0102   0.0101   0.0497   0.0036
0.01             0.0097   0.0105   0.0500   0.00003
0.1              0.0106   0.0080   0.0348   0.0028
0.2              0.0188   0.0063   0.0340   0.0100

Table 4.1: Mapping error for different standard deviations of Gaussian noise in the LIDAR, where N_k is the mean value of the normalised differences, σ_Nk is the standard deviation of the normalised differences, max the largest difference and min the smallest difference

To get a better idea of how the noise affects the maps, each map is compared pixel by pixel with the best case map (standard deviation 0.001). The normalised difference to this map is calculated as

\frac{n_{gt} - n_{map}}{n_{gt}}    (4.1)

where $n_{gt}$ is the number of occupied respectively empty pixels in the "ground truth" (in this case the map generated with a standard deviation of 0.001) and $n_{map}$ is the same for the compared map.
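A pixel-count comparison like Eq. (4.1) is simple to implement; the sketch below assumes maps stored as integer grids with a ROS-style cell encoding, which is an assumption rather than the thesis's actual representation:

```python
import numpy as np

def normalised_difference(ground_truth, test_map, value):
    """Pixel-count comparison of Eq. (4.1) for one pixel class.
    `value` selects the class to count (e.g. 0 = empty, 100 = occupied,
    -1 = unknown, as in ROS occupancy grids -- an assumed encoding)."""
    n_gt = np.count_nonzero(ground_truth == value)
    n_map = np.count_nonzero(test_map == value)
    return (n_gt - n_map) / n_gt
```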


Figure 4.4: Normalised difference to the map generated with a standard deviation of 0.001, for each of the other tests.

Gaussian noise   Unknown pixels [%]   Occupied pixels [%]   Empty pixels [%]
0.001            68.4                 1.1                   30.4
0.01             68.4                 1.2                   30.5
0.1              67.8                 1.7                   30.5
0.2              67.1                 1.4                   31.5
0.5              64.8                 1.3                   32.9
1.0              61.5                 1.1                   37.5

Table 4.2: Amount of the map which is unknown, occupied and empty


Figure 4.5: Average distance to nearest neighbour in centimetres for the different maps


Localisation

To tune the AMCL algorithm, different parameters were tested on three different paths in the simulated world: a short, a medium and a long path. To make sure that the same path was used for all tests, the robot was first manually driven around in the world while all the data from the sensors was recorded in a rosbag, a tool for saving the information sent over ROS topics so that it can be replayed later. This data was then used for all tests.

To evaluate the different combinations, the mean error is calculated from the time when the Euclidean distance between the AMCL belief and the ground truth first falls below 0.5 meters. This removes the large difference between them in the beginning, when AMCL believes the robot could be anywhere in the map, and also provides a way to see how fast each combination finds the right solution.

The AMCL and ground truth data sets contain different numbers of data points, so to be able to compare them, each time stamp from AMCL is compared to the ones in the ground truth to see if the times match. If there is no match, the ground truth value is approximated using spline interpolation.

Figure 4.7: The three paths, short (green), medium (red) and long (blue) where the circle indicates the start and the cross indicates the goal


name   Min      Max      t_s    t_m   t_l   e_s       e_m       e_l
a      500      40 000   22.9   7.5   0.0   0.18848   0.1733    0.11896
b      10 000   40 000   21.0   5.4   2.9   0.16562   0.17896   0.11992
c      20 000   40 000   29.0   5.3   2.7   0.13116   0.19639   0.14856

Table 4.3: The top three performing combinations, where e is the mean error and t is the time it took to find where the robot is, with s = short, m = medium and l = long path

Figure 4.8: The AMCL belief (red) and ground truth (blue) on the short path for the three combinations (a-c). The "o" marks the starting position of the robot and the "x" marks the end position.

Figure 4.9: The AMCL belief (red) and ground truth (blue) on the medium path for the three combinations (a-c).

Figure 4.10: The AMCL belief (red) and ground truth (blue) on the long path for the three combinations (a-c). The "o" marks the starting position of the robot and the "x" marks the end position.

The different AMCL beliefs in Fig. 4.8 to 4.10 look similar to each other, which is reasonable since they start with the same number of particles and thus should find the target in roughly the same time.

To find the recovery parameters $\alpha_{slow}$ and $\alpha_{fast}$, the same test as above is used, except that instead of using global localisation, AMCL is given a starting pose that differs from the robot's actual starting pose, to simulate what would happen if it converged to the wrong solution. The result is shown in Fig. 4.11.

Figure 4.11: The mean error (a) and the path (b) with $\alpha_{slow}$ = 0.1, $\alpha_{fast}$ = 2, max = 40 000 and min = 10 000

The error varies a lot during the first 100 seconds. This is due to AMCL trying to find where the robot is by adding large amounts of particles; once it finds the correct position, only a few particles are added for robustness. With $\alpha_{slow}$ = 0.1 and $\alpha_{fast}$ = 2 the adding of random particles is quite aggressive. This works in simulated environments, where there is no noise and it is easy to get a perfect map, so for the real tests $\alpha_{slow}$ and $\alpha_{fast}$ will probably be smaller, otherwise the algorithm would add random particles too aggressively.


Navigation

To see how robust the navigation is to smoke, Gaussian noise was added to the LIDAR in a similar way as for the mapping tests. Adding noise with a standard deviation of 0.5 and a mean of 0.5 produced a smoke-like disturbance.

As seen in Fig. 4.12, the smoke does not behave in the same way as Gaussian noise, but the simulation still gives an estimate of how the system handles smoke. For example, the Gaussian noise comes from the walls, so the robot has a clear path to the goal most of the time, but the noise can suddenly add a disturbance right in front of the robot that can confuse it. The real smoke, on the other hand, does block the path but does not move as close to the robot.

Figure 4.12: (a) The real smoke during testing, (b) the Gaussian noise during simulation

The estimated position lags behind, and thus the error in X increases while the error in Y remains small.

Figure 4.13: The test with Gaussian noise (red) and without (blue). The ”o” marks the starting position of the robot and the ”x” marks the end position.


4.2 Hardware tests

This section covers the initial tests performed using the prototype robot.

Odometry

In order to calculate the robot's position using the data from the encoders, one needs to know the wheel diameters and the wheelbase of the robot. If either of these is wrongly estimated, the resulting odometry will be wrong. The first odometry test aimed to check whether the wheel diameters were estimated correctly. The robot was driven in a 4 meter long straight line and the odometry data from the encoders was stored in a rosbag. The test was performed three times to ensure repeatability. The distance calculated from the encoders was in all cases within centimeters of 4 meters. There were some slight variations in the results, but since they were not constant between the test rounds they were attributed to the uncertainty of the test method.

The odometry was also tested using the bi-directional square path experiment described by Borenstein in [3]. The robot drives in a 4 by 4 meter square, making four 90 degree turns on the spot. After each run, the robot's absolute position is measured and compared to the position calculated by the odometry. Since there are two major error sources in the calculations (wheel diameter and the distance between left and right wheels), the test is performed both in the clockwise and the counterclockwise direction. A total of four runs per direction was performed. If the wrong wheel-to-wheel distance is used, the robot will make turns of too few degrees (if the distance is set too long) or too many degrees (if it is set too short) in both the clockwise and the counterclockwise runs; this error has no effect on the result when the robot drives in a straight line. If the robot has unequal wheel diameters, it will instead turn slightly while driving in a straight line. Borenstein recommends pre-programming the path into the robot; however, this was not possible for the prototype used in this project, so there may be differences between the tests caused by human error. When the tests were performed, the odometry data was saved using rosbag and analysed using Matlab. The results from the first test are shown in Fig. 4.15 and 4.16.


Figure 4.15: The result of the first odometry test in the clockwise direction. The "o" marks the starting position of the robot and the "x" marks the end position

Figure 4.16: The result of the first odometry test in the counterclockwise direction

Instead, an iterative approach was used where the wheelbase was increased in increments of 0.2 meters starting at 0.8 meters, which was approximately the distance suggested by the calculations described in [3]. To speed up this process, each new wheelbase value was only tested once, in the clockwise direction. The results of a few attempts are shown in Fig. 4.17.

Figure 4.17: The result of odometry tests with increased wheelbase in the clockwise direction. The "o" marks the starting position of the robot and the "x" marks the end position. The wheelbase was set to a) 0.8 meters, b) 1.0 meters, c) 1.2 meters.

As seen in Fig. 4.17, the odometry is greatly improved by the increased wheelbase. At 1.2 meters the result is good enough to be considered accurate. The angles are still not precisely 90 degrees, but it is uncertain whether that is because of the odometry or due to human error while driving or drawing the test track. The start and stop positions differed slightly in the ground truth due to the robot being manually controlled.


Figure 4.18: A map of the testing facility mapped using the robot

4.3 Test in smoke

The aim of these tests was to compare the robot's performance in a smoke filled environment to how it performed under smoke free conditions. The tests were performed at a testing facility for smoke divers. Both mapping and localisation were tested. To make the comparison possible, all tests were first performed in the same environment without smoke.


Figure 4.19: The parts (a)-(e) of the testing facility used in the tests


Sensor test

The first test aimed to see how the LIDAR was affected by the smoke. The robot was placed in a hallway (part a in Fig. 4.19) and remained stationary during the entire test. Smoke was added through a vent near the floor, approximately two meters away from the robot. The LIDAR was set to the standard angular resolution of 0.5° and a frequency of 50 Hz.


Figure 4.20: LIDAR readings as smoke was added, at (a) 0 s, (b) 40.7 s, (c) 44.0 s, (d) 47.1 s, (e) 59.6 s, (f) 69.9 s


Mapping

The mapping was tested by manually driving the robot at a slow speed up and down a hallway inside the test facility. The same course was used both for the test with smoke and for the one without, but since the robot was manually controlled there were some slight variations in the path taken. The test without smoke is considered the ground truth in these tests. Besides the ground truth map, two mapping attempts in smoke were performed: one where the mapping process was initialised in a smoke free condition, and one where the mapping was started shortly after smoke had been added to the test site. Smoke was only added once, which means that the conditions were worse in the beginning of the map building, when most of the smoke was at ground level. The resulting maps are shown in Fig. 4.21.

(a) (b) (c)

Figure 4.21: a) Map generated without smoke, b) map where mapping is started before smoke is added, c) map where mapping process starts after the smoke is added

When mapping in smoke, the detected smoke first appears as walls in the map, but as the robot moves forward many of these are discarded due to new conflicting information. A good example of this can be found in Fig. 4.22, where the building process of map b) in Fig. 4.21 is shown. When the mapping process is started in a smoke free environment, the LIDAR will initially see a lot further than when the process is started in an already smoke filled environment. This makes the mapping algorithm more certain about the overall appearance of the room.


Figure 4.22: Building of the map when smoke is added

When the mapping is instead started in an already smoke filled room, the "walls" are not placed in front of each other the way they are in Fig. 4.21; instead, the "walls" are placed on top of each other and the robot's global pose is "corrected" (moved back behind the "wall").

Figure 4.23: Building of the map in a smoke filled room


In an attempt to improve the maps, the settings were changed: the resolution of the map was increased and the number of particles used by gmapping was raised from 30 to 80. These maps are shown in Fig. 4.24, and it was decided to keep this setting for the other tests as well. Another thing that was attempted was the addition of a filter which removed LIDAR data with an intensity lower than 200. This was done since the sensor tests had shown that the returns from smoke generally had a lower intensity than those from walls and other objects, and it was believed that this could diminish the number of false positives caused by the smoke. The resulting maps from this setting are shown in Fig. 4.26. Lastly, it was also attempted to assume that all unknown areas in the map consist of free space. This was done to replicate the phenomenon seen in Fig. 4.22, where the false positives due to the smoke were removed after the location had been visited according to the odometry. The result of this is shown in Fig. 4.25.

The amount of smoke in the testing facility is hard to control, since its behaviour is not constant and some smoke stays in the facility after each test. In general, these tests were first done in what we have decided to classify as a low level of smoke; smoke is in this case added to the corridor for two seconds before the mapping is started. A medium smoke level is achieved by adding smoke for six seconds, and for a high smoke level smoke is first added for six seconds and then additional smoke is added when the robot has reached the other end of the hallway. Due to limited time at the testing site, some tests could only be performed at two of these levels of smoke. The original tests (with maps shown in Fig. 4.21) can be considered as having low and medium smoke levels.


Figure 4.24: Maps generated with higher resolution: (a) no smoke, (b) low smoke level, (c) medium smoke level, (d) high smoke level

The maps generated in smoke are compared with their ground truth using the normalised difference, calculated as described in the following equation:

\frac{n_{gt} - n_{smoke}}{n_{gt}}    (4.2)

where $n_{gt}$ is the number of occupied respectively empty pixels in the ground truth and $n_{smoke}$ is the same for the map generated in smoke. The result of this is shown in Fig. 4.27.


Figure 4.25: Maps generated when assuming that unknown space is unoccupied: (a) no smoke, (b) low smoke level, (c) medium smoke level, (d) high smoke level

Figure 4.26: Maps generated with the intensity filter: (a) low smoke level, (b) high smoke level


Figure 4.27: Amount of shared features between each map and their ground truth. Note that the original settings were not tested on a high level of smoke and the filter was not tested in medium smoke level.

As a further comparison, the nearest neighbours in the ground truth map are calculated for each occupied pixel. This is done by calculating the Euclidean distance from a point in the map to the points in the ground truth map and observing which distance is the shortest. This is used as a replacement for the mapping error, to estimate the distortion of the map. It is not certain that the nearest neighbour represents the same map feature, but large distances still hint at a distorted map. It is also worth noting that the total number of occupied points is lower in the original maps than in the other ones, due to the increased resolution. The average distance is shown in Fig. 4.29 and the median distance is shown in Fig. 4.30.
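The nearest-neighbour comparison can be sketched as below, assuming the occupied pixel coordinates have already been extracted from both maps; the use of scipy's cKDTree and the resolution value are assumptions, not the thesis's actual analysis code:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_nn_distance(map_pixels, gt_pixels, resolution_cm=5.0):
    """Average distance (in centimetres) from each occupied pixel in a
    test map to its nearest occupied pixel in the ground truth map.
    map_pixels and gt_pixels are arrays of (row, col) coordinates;
    the resolution value is a placeholder."""
    tree = cKDTree(gt_pixels)             # index the ground truth pixels
    dists, _ = tree.query(map_pixels)     # nearest-neighbour distances
    return float(np.mean(dists)) * resolution_cm
```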


Figure 4.29: Average distance to the nearest neighbour in centimetres. Note that the original settings were not tested at a high level of smoke and the filter was not tested at a medium smoke level.


Localisation

To test the localisation, the robot was driven manually through the smoke in one of the corridors (A and B in Fig. 4.18). First it was driven with the LIDAR facing the direction of travel; when it had reached the end of the corridor it reversed back to the starting point.

Three tests were done with smoke levels varying between low and medium. The data from the robot was recorded into rosbags so that multiple parameter combinations could be tested on the same data. For these tests the resolution of the LIDAR was 0.25° and the frequency was 25 Hz.
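Rosbags make it possible to rerun exactly the same sensor data against different parameter sets. As a small example, a recorded run can also be inspected offline; the bag file name corridor_run1.bag and the topic /scan below are assumptions:

    import math
    import rosbag

    # Verify the angular resolution and scan rate of a recorded run.
    stamps = []
    with rosbag.Bag('corridor_run1.bag') as bag:
        for _, scan, t in bag.read_messages(topics=['/scan']):
            stamps.append(t.to_sec())
            resolution_deg = math.degrees(scan.angle_increment)

    if len(stamps) > 1:
        rate = (len(stamps) - 1) / (stamps[-1] - stamps[0])
        print('resolution: %.2f deg, rate: %.1f Hz' % (resolution_deg, rate))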

In the simulations the ground truth was the positions that Gazebo outputted. For the real tests this is not possible since the robot's exact position is unavailable. The performance is therefore evaluated using the standard deviation of the particle filter: the lower the standard deviation, the closer the particles are to each other, which indicates how confident the filter is. A confident filter does not necessarily mean that the robot's estimated position is correct, so the runs are monitored to make sure that the particles converge to the correct solution. For these first tests AMCL is given a starting pose that is roughly the same as the actual starting position.
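The thesis does not spell out exactly how the standard deviation was extracted; one plausible way is to compute the spread of AMCL's particlecloud topic, as sketched below (the spread here is the root mean square distance of the particles from their mean position).

    import numpy as np
    import rospy
    from geometry_msgs.msg import PoseArray

    # Log the spread of the AMCL particle cloud as a confidence measure.
    def callback(cloud):
        xy = np.array([[p.position.x, p.position.y] for p in cloud.poses])
        spread = np.sqrt(np.mean(np.sum((xy - xy.mean(axis=0)) ** 2, axis=1)))
        rospy.loginfo('%d particles, std dev %.3f m', len(cloud.poses), spread)

    rospy.init_node('particle_spread')
    rospy.Subscriber('particlecloud', PoseArray, callback)
    rospy.spin()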

    test  laser beams  min particles  z_rand  mean   min    max
    1     1080         500            0.5     0.163  0.065  0.372
    2     270          500            0.5     0.119  0.065  0.191
    3     27           500            0.5     0.144  0.071  0.260
    4     1080         10000          0.5     0.138  0.069  0.209
    5     270          10000          0.5     0.148  0.068  0.326
    6     27           10000          0.5     0.145  0.081  0.246
    7     1080         500            0.95    0.139  0.067  0.305
    8     1080         10000          0.95    0.142  0.068  0.219
    9     270          500            0.95    0.120  0.061  0.178
    10    270          10000          0.95    0.142  0.069  0.231

Table 4.4: Standard deviation for the first test run


    test  laser beams  min particles  z_rand  mean   min    max
    1     1080         500            0.5     0.116  0.060  0.237
    2     270          500            0.5     0.122  0.056  0.226
    3     27           500            0.5     0.152  0.062  0.296
    4     1080         10000          0.5     0.161  0.061  0.365
    5     270          10000          0.5     0.138  0.061  0.269
    6     27           10000          0.5     0.172  0.072  0.292
    7     1080         500            0.95    0.156  0.058  0.395
    8     1080         10000          0.95    0.155  0.061  0.374
    9     270          500            0.95    0.129  0.063  0.275
    10    270          10000          0.95    0.151  0.063  0.372

Table 4.5: Standard deviation for the second test run

    test  laser beams  min particles  z_rand  mean   min    max
    1     1080         500            0.5     0.137  0.057  0.216
    2     270          500            0.5     0.129  0.056  0.206
    3     27           500            0.5     0.160  0.070  0.033
    4     1080         10000          0.5     0.173  0.083  0.269
    5     270          10000          0.5     0.141  0.063  0.212
    6     27           10000          0.5     0.162  0.074  0.229
    7     1080         500            0.95    0.145  0.058  0.266
    8     1080         10000          0.95    0.176  0.079  0.293
    9     270          500            0.95    0.142  0.055  0.234
    10    270          10000          0.95    0.143  0.067  0.232

Table 4.6: Standard deviation for the third test run


Figure 4.31: The standard deviation over time on the first test run. (a) Test 1, (b) Test 2, (c) Test 5, (d) Test 9.

Figure 4.32: The standard deviation over time on the second test run. (a) Test 1, (b) Test 2, (c) Test 5, (d) Test 9.


Figure 4.33: The standard deviation over time on the third test run. (a) Test 1, (b) Test 2, (c) Test 5, (d) Test 9.


    test  laser beams  min particles  max particles  z_rand  mean  min    max
    1₁    1080         500            40 000         0.5     1.12  0.118  4.308
    2₁    1080         500            20 000         0.5     0.76  0.084  4.667
    3₁    1080         500            10 000         0.5     0.99  0.078  5.165
    4₁    1080         500            5 000          0.5     1.35  0.138  5.147
    1₂    270          500            40 000         0.5     0.85  0.068  4.434
    2₂    270          500            20 000         0.5     0.83  0.078  4.785
    3₂    270          500            10 000         0.5     0.96  0.082  4.998
    4₂    270          500            5 000          0.5     1.31  0.143  5.306
    1₉    270          500            40 000         0.95    0.68  0.071  4.065
    2₉    270          500            20 000         0.95    0.86  0.075  4.847
    3₉    270          500            10 000         0.95    1.27  0.106  4.719
    4₉    270          500            5 000          0.95    1.66  0.116  5.949

Table 4.7: Standard deviation using global localisation

The tests that perform best overall on the three runs are 1, 2, 5 and 9. Most of these tests have a minimum at 40 to 60 seconds into the test, which is when the robot reaches the end of the corridor and sees the wall. When the robot reverses, the standard deviation starts to increase again. This is expected: until the robot can see the wall at the end, every part of the corridor looks the same.

Global localisation tests were done with the same parameters as tests 1, 2 and 9. Since the filter never reached the maximum number of particles, maxima of 5 000, 10 000 and 20 000 were also tested. The data from tests 1, 2 and 9 were used as a "ground truth" for the global localisation tests.
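In AMCL, global localisation is started by calling the global_localization service, which spreads the particles uniformly over the free space of the map. A minimal call could look like this (the node name in the script is arbitrary; the service name is AMCL's own):

    import rospy
    from std_srvs.srv import Empty

    # Scatter AMCL's particles over the whole map before a
    # global localisation run.
    rospy.init_node('trigger_global_localisation')
    rospy.wait_for_service('global_localization')
    rospy.ServiceProxy('global_localization', Empty)()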

The subscript of the test numbers in Tables 4.7 and 4.8 indicates which test from Table 4.5 was used as ground truth; the only parameter that differed between a test and its ground truth was the maximum number of particles.


    test  laser beams  min particles  max particles  z_rand  mean  min    max
    1₁    1080         500            40 000         0.5     0.81  0.012  5.682
    2₁    1080         500            20 000         0.5     0.83  0.018  5.687
    3₁    1080         500            10 000         0.5     2.41  0.019  9.901
    4₁    1080         500            5 000          0.5     5.35  0.780  9.551
    1₂    270          500            40 000         0.5     0.97  0.111  5.463
    2₂    270          500            20 000         0.5     0.40  0.002  5.631
    3₂    270          500            10 000         0.5     1.18  0.029  10.203
    4₂    270          500            5 000          0.5     4.55  0.761  10.533
    1₉    270          500            40 000         0.95    0.95  0.035  5.654
    2₉    270          500            20 000         0.95    0.37  0.003  5.632
    3₉    270          500            10 000         0.95    2.15  0.197  10.169
    4₉    270          500            5 000          0.95    5.93  0.859  12.572

Table 4.8: Euclidean distance between the tests and "ground truth"

Figure 4.34: Euclidean distance to the "ground truth" over time for the global localisation tests. (a) Test 1₂, (b) Test 2₂, (c) Test 3₂, (d) Test 4₂.


Figure 4.35: Standard deviation over time for the global localisation tests. (a) Test 1₂, (b) Test 2₂, (c) Test 3₂, (d) Test 4₂.

Figure 4.36: Number of particles over time for the global localisation tests. (a) Test 1₂, (b) Test 2₂, (c) Test 3₂, (d) Test 4₂.


Figure 4.37: How the path varies in the different global localisation tests. (a) Test 1₂, (b) Test 2₂, (c) Test 3₂, (d) Test 4₂.

In Fig. 4.34c the Euclidean distance peaks at 10 to 20 seconds; this is when the filter had converged to the wrong position, and the rapid decline is when it "snaps" back to the correct position. Fig. 4.34d shows that test 4₂ never finds the correct position, but the Euclidean distance is shaped like a sine wave. This is because the filter thought the robot was in the right corridor but moved it in the wrong direction, so the dips are where the estimate and the "ground truth" pass each other. In Fig. 4.34a and Fig. 4.34b we can see that both 1₂ and 2₂ manage to find the right location.

Fig. 4.35d illustrates the problem of using the standard deviation to judge performance: it declines similarly to the others, but towards the wrong solution. Fig. 4.36 shows that when the robot believes it has found its location the number of particles declines rapidly, and that when the robot reverses it becomes more uncertain and starts adding particles again. In Fig. 4.36a the curve declines much more slowly than the others; the large number of particles makes every update take longer, so it takes longer for the decline to start. The settings shown in Fig. 4.36c are the fastest, even though they converge to the wrong solution at first.


    test  max particles  alpha_slow  alpha_fast  mean  min    max
    1     5 000          0.001       0.5         5.20  0.838  9.63
    2     5 000          0.01        0.5         1.80  0.147  9.31
    3     5 000          0.005       0.5         1.04  0.005  9.41
    4     10 000         0.005       0.5         2.65  0.205  9.85
    5     20 000         0.005       0.5         0.80  0.126  5.71
    6     40 000         0.005       0.5         2.27  0.074  5.66

Table 4.9: Euclidean distance between the tests and "ground truth" from the second bag

In Fig. 4.37, the particles are at first spread evenly across the map. Test 1₂ follows the ground truth closely after it finds the location. Test 2₂ makes a small detour but otherwise follows the path in the same way as 1₂, and locates the robot faster. As described earlier, 3₂ gives the wrong pose at the start but manages to switch to the correct location. Test 4₂, after jumping around a bit at the start, believes that the robot is in the right hallway but moves in the opposite direction to what it is actually doing.

To see if it is possible to use 5 000 particles, the recovery parameters were increased; Table 4.9 shows the Euclidean distance to the ground truth (test 1 in Table 4.5). The number of laser beams is 1080, the minimum number of particles is 500 and z_rand is 0.5.
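The recovery behaviour is controlled by AMCL's recovery_alpha_slow and recovery_alpha_fast parameters: random particles are injected when the short-term average of the measurement likelihood (alpha_fast) drops below the long-term one (alpha_slow). The values from test 3 in Table 4.9 could, for instance, be set as below before launching the node; the parameter names are AMCL's own, while the /amcl namespace is an assumption about the node name.

    import rospy

    # AMCL reads these parameters at startup, so set them before
    # the amcl node is launched.
    rospy.init_node('configure_amcl')
    rospy.set_param('/amcl/max_particles', 5000)
    rospy.set_param('/amcl/recovery_alpha_slow', 0.005)
    rospy.set_param('/amcl/recovery_alpha_fast', 0.5)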


    test  max particles  alpha_slow  alpha_fast  mean  min   max
    1     5 000          0.01        0.5         9.44  3.02  14.39
    2     5 000          0.005       0.5         8.26  1.26  14.41

Table 4.10: Euclidean distance between the tests and "ground truth" from the first bag

    test  max particles  alpha_slow  alpha_fast  mean  min    max
    1     5 000          0.01        0.5         1.80  0.054  8.69
    2     5 000          0.005       0.5         2.18  0.101  8.57

Table 4.11: Euclidean distance between the tests and "ground truth" from the third bag


Chapter 5

Discussion and conclusions

Mapping

It is not possible to guarantee that the levels of smoke are the same between the different mapping tests, or that the robot is driven at exactly the same speed and along exactly the same path. It is therefore not certain that the tests would generate exactly the same results if repeated. However, since the same environment, smoke machine and general path were used in all tests, the results are deemed certain enough for a fair comparison of the different settings that were attempted.

The multi-echo functionality does not succeed in filtering out all smoke. When the smoke is at a level where it is not possible to see more than around half a metre ahead with your bare eyes, the LIDAR is generally also unable to see through it. Adding further filtering of low-intensity data also results in maps of lower quality. Even though the smoke generally gives lower-intensity returns, some of these correspond to "real" features which are thus lost. And when no features behind the smoke are visible, filtering out the smoke data still leaves that area unknown. It can therefore be concluded that observing the intensity of the return data is not enough to successfully filter out smoke.
