
DEGREE THESIS

BSc program in Computer Science and Engineering, 180 credits

RONAS

Robot Navigation System

Johannes van Esch

Degree project in Computer Science and Engineering, 15 credits

Halmstad, 17 June 2016


Abstract

More and more self-driving vehicles are being introduced into society, varying from autonomous vacuum cleaners to cars which drive people to their destinations without the need for human interaction. There will probably be even more such applications in the future. For this reason Halmstad University is interested in ways in which robots can localize themselves in a known environment; this knowledge is useful for possible future courses. This report describes one possible way to write the software needed to control robot movement and to determine the position of a robot on a known map. A laser scanner (LIDAR xv-11) is used by the robot to create a map of the environment. This map is passed on to a particle filter which compares it with a reference map using the iterative closest point scan matching algorithm. The software controls the robot through commands entered in a command shell. The same command shell is used to configure the robot and to retrieve the current robot position on a map.

More and more self-driving vehicles are being introduced into society, ranging from autonomous vacuum cleaners to cars that transport people without the need for human interaction. In the future such vehicles will probably become more common. For this reason Halmstad University is interested in ways in which robots can localize themselves in a known environment; this technology is useful for possible future courses. This report describes one possible way to program a robot built by Halmstad University. To determine the robot's position in a known environment the robot uses a laser scanner (LIDAR xv-11). This information about the environment is needed for the particle filter calculation. The particle filter determines the robot's position with the help of the iterative closest point scan matching algorithm. The software can also control the robot through commands entered in a user interface. The same user interface is also used to configure the robot and to show the current robot position on a map.


Revision History

Revision   Date           Author(s)           Description
0.1        10 May 2016    Johannes van Esch   Initial version
0.2        12 May 2016    Johannes van Esch   Changes based on comments of supervisor
0.3        13 May 2016    Johannes van Esch   Preliminary version for examination
0.4        5 June 2016    Johannes van Esch   Changes based on comments of examiner and opponents
1.0        17 June 2016   Johannes van Esch   Final version


Contents

1 Introduction
  1.1 Purpose and goal
  1.2 Project limits

2 Background
  2.1 Moving
  2.2 Mapping
  2.3 Localization
  2.4 Summary

3 Method
  3.1 Development method
  3.2 Development environment
  3.3 Application model
  3.4 Common driver model
  3.5 User interface
  3.6 Position calculation
  3.7 Navigation calculation

4 Results
  4.1 Software structure
  4.2 Robot movement
    4.2.1 Robot speed command accuracy
    4.2.2 Track correction
  4.3 Robot localization
    4.3.1 Particle filter visualisation
    4.3.2 Particle filter accuracy test
    4.3.3 Simultaneous robot movement and scanning

5 Discussion

6 Conclusion

A Time plan

B UML class diagrams
  B.1 User Interface
  B.2 Common Driver Model
  B.3 Position Calculator
  B.4 Navigation Calculator
  B.5 All classes

C Speed commands vs encoder pulses

Bibliography


Chapter 1 Introduction

Halmstad University offers several master level courses in the field of robotics. One course, Design of Embedded and Intelligent Systems, uses a small two wheeled robot for its practical exercises, and the university would like to explore the possibility of more extended use of this robot.

The RONAS project expands the robot's hardware with a LIDAR (light radar) sensor, together with software that enables the robot to localize itself on a known map. The knowledge about possible pitfalls and limits of the hardware and software is valuable when new practical exercises for robotics courses are being developed.

1.1 Purpose and goal

The purpose of the project is to develop a software system that

• is able to collect data from several robot mounted sensors

• can move the robot from one position to another

• is able to determine the robot's position in an environment

The software system shall be modular in such a way that it is relatively easy to remove, replace or add sensors. The project provides software drivers for two sensors, the LIDAR sensor and the robot wheel encoders.

A LIDAR sensor is a sensor which is able to create a map of the environment using a laser scanner. The encoder mounted on each robot wheel is a sensor that counts pulses when a wheel moves, one pulse stands for a certain distance.

Software drivers for other sensors are not provided but can and should be written according to the driver model provided by the project software.

The system provides a plain but efficient user interface through which it is possible to read positioning information and to control rudimentary robot movement.


1.2 Project limits

Besides some rudimentary robot movement, the software shall not include any trajectory planning. Only if the project planning and time allow it will a more advanced trajectory planning be developed, but this feature has a low priority.


Chapter 2 Background

A well known product which uses the same technology as the RONAS project is Google's self-driving car [2]. This self-driving car uses a LIDAR sensor that scans the area and uses this data to localize itself. Besides a similar goal and a similar technique, the two projects differ in budget. For example, the LIDAR sensor that the Google car uses costs 80,000 dollars [2]; the project's LIDAR sensor costs about 150 dollars. Google's self-driving car is closed source, which means that their solution to the localization problem is not publicly available and therefore not usable for this project.

Due to the lack of basic robotics knowledge and common solutions for robot movement, localization, navigation and mapping, most of the basic knowledge about those subjects has been acquired through the book Probabilistic Robotics by Thrun et al. [17] and by following the SLAM lectures of Cyrill Stachniss [16] of the University of Freiburg.

The largest problem with moving robots is the noise, and therefore uncertainty, of the data one reads from the robot's sensors and models. There is also the fact that we deal with mechanics and its inherent limits of accuracy: for example, when one wheel of a two wheel robot has worn down by one tenth of a millimeter it will affect the accuracy of the whole system. An additional problem with this inaccuracy is that it accumulates over time, because the software is unable to correct an error when it is unable to detect one.

With the use of probability theory one can overcome this uncertainty problem inherent to moving robots. The fundamental idea of this probabilistic approach is that everything, from the robot's point of view, is represented by a probability distribution. By combining those probability distributions the robot is able to calculate the probability of its current state.

To illustrate this probabilistic approach one can imagine a straight hallway with three doors and a moving robot which is unaware of its current position. In figure 2.1(a), the robot does not know where it is, therefore its belief curve bel(x) is flat, meaning that the robot can be anywhere along the hallway. When the robot moves in 2.1(b), it senses the first door. There are now three possible places where the robot might be in the hallway: at the first, the second or the third door. This is represented by the three Gaussian curves. When the robot moves further in 2.1(c), the three Gaussian curves become lower because the robot movement introduces extra uncertainty. When the robot senses the second door in 2.1(d) the probability of its location state becomes quite large; the robot has located itself along the hallway. When the robot moves on in 2.1(e) it is still aware of its position, but because the robot moves, it becomes less certain of that position.

Figure 2.1: Illustration of Markov localization (image courtesy of [17])

The robot that the RONAS project uses is a small two wheel robot equipped with wheel encoders and a LIDAR sensor mounted on top. The LIDAR sensor continuously rotates a laser scanner which measures the distance between the sensor and an object in the path of the laser beam. For every degree of rotation a distance in millimeters to the measured object is returned. The minimum distance which can be measured is 15 centimeters while the maximum distance equals 6 meters [8]. The sensor rotates at about 260 rotations per minute, which results in slightly more than four 360 degree scans every second. When the rotation speed is too high or too low, the accuracy of the scan decreases.

There are three main problem fields to be solved to reach the project's goal:

• The robot has to be able to move from one point to another point in its environment,

• it has to be able to scan its environment and compare the scan results with a known map of the environment,

• and by using the results of the previous two steps it should be able to localize itself in the environment.

The following sections describe each problem field in more detail.

2.1 Moving

To calculate the robot's movement one needs a kinematic model. This model describes the robot's measurements together with the formulas needed to calculate the robot's angular speed and its speed along the x and y axes, all based on the speed of each wheel. The foundation of the kinematic model of the project's robot is created as described by Armah et al. [5].

The parameters of the kinematic model are illustrated in figure 2.2 below with the following definitions:

V, the robot's velocity in the x direction
V_L / V_R, the velocity of the left / right wheel
C, the center of the robot from which its state is calculated
W, the angular velocity
R, the radius of the wheel
L, the distance between the centers of the wheels

The robot's state is represented by (x, y, θ) where (x, y) equals the position and θ is the heading, all measured from the robot's center C.


Figure 2.2: Visualization of the parameters of the kinematic model

The kinematic model of the robot is given by:

\dot{x} = \frac{R}{2} (V_r + V_l) \cos\theta \quad (2.1)

\dot{y} = \frac{R}{2} (V_r + V_l) \sin\theta \quad (2.2)

\dot{\theta} = \frac{R}{L} (V_r - V_l) \quad (2.3)
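As an illustration of equations 2.1 to 2.3, the kinematic model can be written as a short function. This is a minimal sketch and not part of the RONAS source; the struct and function names are hypothetical.

#include <cmath>

// Rate of change of the robot state according to equations 2.1 - 2.3.
// R is the wheel radius, L the distance between the wheel centers,
// vl and vr the velocities of the left and right wheel and theta the heading.
struct PoseRate { double dx, dy, dtheta; };

PoseRate kinematicModel(double R, double L, double vl, double vr, double theta)
{
    PoseRate rate;
    rate.dx     = (R / 2.0) * (vr + vl) * std::cos(theta);  // eq. 2.1
    rate.dy     = (R / 2.0) * (vr + vl) * std::sin(theta);  // eq. 2.2
    rate.dtheta = (R / L)   * (vr - vl);                    // eq. 2.3
    return rate;
}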

A way to control robot movement is the odometry based model (chapter 5 in [17]). This model uses the wheel encoders as described in section 1.1 to measure the travelled distance and the angular changes of the robot. Moving the robot from (x, y, θ) to (x′, y′, θ′) takes three steps, illustrated by figure 2.3. First the robot rotates δ_rot1 in the direction of the destination, then the robot translates δ_trans to the destination and finally the robot rotates δ_rot2 at the destination to match the new position's angle.

Figure 2.3: Odometry based motion model (image courtesy of [17])

The equations 2.4 to 2.6 are used to calculate δ_trans, δ_rot1 and δ_rot2 when the robot moves from its current position (x, y, θ) to a destination (x′, y′, θ′).

\delta_{trans} = \sqrt{(x' - x)^2 + (y' - y)^2} \quad (2.4)

\delta_{rot1} = \operatorname{atan2}(y' - y,\; x' - x) - \theta \quad (2.5)

\delta_{rot2} = \theta' - \theta - \delta_{rot1} \quad (2.6)

To calculate the robot's position based on the generated pulses of the wheel encoders, equation 2.7 from Hellström [13] can be used. In this equation, ICC stands for Instantaneous Center of Curvature, the common center point of the arcs each wheel is rolling along.

\begin{bmatrix} x' \\ y' \\ \theta' \end{bmatrix} =
\begin{bmatrix} \cos(\omega \delta t) & -\sin(\omega \delta t) & 0 \\ \sin(\omega \delta t) & \cos(\omega \delta t) & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x - ICC_x \\ y - ICC_y \\ \theta \end{bmatrix} +
\begin{bmatrix} ICC_x \\ ICC_y \\ \omega \delta t \end{bmatrix} \quad (2.7)


where

R = \frac{l}{2} \, \frac{n_l + n_r}{n_r - n_l} \quad (2.8)

\omega \delta t = \frac{(n_r - n_l)\,\mathrm{step}}{l} \quad (2.9)

ICC = \left[\, x - R\sin\theta,\; y + R\cos\theta \,\right] \quad (2.10)

where n_l and n_r are the left and right wheel encoder pulse counts and step equals the distance a wheel of the robot travels for one encoder pulse.
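As a sketch of how equations 2.7 to 2.10 can be turned into code, the following function updates the pose from the encoder pulse counts. It is not taken from the RONAS source; the straight-line special case (n_l = n_r, where the ICC lies at infinity) is added here to avoid a division by zero.

#include <cmath>

struct Pose { double x, y, theta; };

// Pose update from wheel encoder pulses, following equations 2.7 - 2.10.
// nl/nr are the pulse counts since the last update, step the distance per
// pulse and l the distance between the wheel centers.
Pose odometryUpdate(const Pose& p, long nl, long nr, double step, double l)
{
    if (nl == nr) {                          // straight motion, ICC at infinity
        double d = nl * step;
        return { p.x + d * std::cos(p.theta),
                 p.y + d * std::sin(p.theta),
                 p.theta };
    }
    double wdt  = (nr - nl) * step / l;                      // eq. 2.9
    double R    = (l / 2.0) * (nl + nr) / double(nr - nl);   // eq. 2.8
    double iccX = p.x - R * std::sin(p.theta);               // eq. 2.10
    double iccY = p.y + R * std::cos(p.theta);

    Pose out;                                                // eq. 2.7
    out.x = std::cos(wdt) * (p.x - iccX) - std::sin(wdt) * (p.y - iccY) + iccX;
    out.y = std::sin(wdt) * (p.x - iccX) + std::cos(wdt) * (p.y - iccY) + iccY;
    out.theta = p.theta + wdt;
    return out;
}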

If the software needs a path tracking component to compensate for any deviations, then there are several path tracking algorithms available such as follow the carrot, pure pursuit and vector pursuit which are all described by Lundgren [14].

One relatively easy to implement algorithm is called "pure pursuit". This algorithm constantly looks ahead at a certain distance l and calculates the curvature it needs to follow to reach that lookahead point. The curvature is defined as κ = 1/r, the inverse of the radius of a circle. Equation 2.11 is used to calculate the curvature to reach the lookahead point and takes only two parameters. Note that the l used in this equation stands for the lookahead distance and not for the kinematic parameter l which represents the distance between the centers of the robot wheels.

\kappa = \frac{2\,\theta_{err}}{l} \quad (2.11)

where l equals the lookahead distance and θ_err is the error angle; figure 2.4 explains how this error angle is derived.

Figure 2.4: Geometry of the Pure Pursuit algorithm (image courtesy of [12])

The pure pursuit algorithm only has one configurable parameter, the lookahead distance l. A large value of l gives a smaller curvature, which results in a smoother path correction, while a smaller value of l gives a larger curvature, which corrects more aggressively. Figure 2.5 illustrates the working of the pure pursuit path tracking algorithm.

Figure 2.5: The Pure Pursuit algorithm in action (image courtesy of [12])
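A minimal sketch of the pure pursuit calculation from equation 2.11, together with the split of the desired arc into left and right wheel radii that is used later in section 4.2.2. The names are illustrative; this is not the RONAS implementation.

// Curvature needed to reach the lookahead point (equation 2.11).
double purePursuitCurvature(double thetaErr, double lookahead)
{
    return 2.0 * thetaErr / lookahead;
}

// Split the desired arc radius r = 1 / curvature into the radii the left
// and right wheel have to follow, given the wheel base L (cf. section 4.2.2).
void wheelArcRadii(double curvature, double L, double& rLeft, double& rRight)
{
    double r = 1.0 / curvature;
    rLeft  = r - L / 2.0;
    rRight = r + L / 2.0;
}

For example, an error angle of 0.0873 rad and a lookahead distance of 0.1 m give a curvature of 1.746, i.e. an arc radius of about 0.573 m.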

2.2 Mapping

The collected LIDAR data represents a 360 degree map of the environment. This map has to be matched with a known map of the environment to determine the robot's position. The technique to accomplish this is called scan matching. There are several algorithms for scan matching; two commonly used ones are the Iterative Closest Point (ICP) algorithm [11], which tries to match two sets of points, and a variant of the ICP algorithm from Cox [7], which tries to match a set of points with a set of lines.

A scan match returns two matrices: a rotation matrix and a translation matrix. When both are applied to the scanned map, this map should closely fit the reference map, and with that we have found our location on the reference map. Equation 2.12 shows how to apply the obtained rotation and translation matrix to each point (x, y) of the scanned map to get a new point (x′, y′) on the reference map.

\begin{bmatrix} x' \\ y' \end{bmatrix} =
\underbrace{\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}}_{\text{rotation matrix}}
\begin{bmatrix} x \\ y \end{bmatrix} +
\underbrace{\begin{bmatrix} x_{trans} \\ y_{trans} \end{bmatrix}}_{\text{translation matrix}} \quad (2.12)
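Applying equation 2.12 to a whole scan is a simple loop; the sketch below illustrates it with hypothetical names and is not taken from the RONAS source.

#include <cmath>
#include <vector>

struct Point2D { double x, y; };

// Apply the rotation (angle phi) and translation (tx, ty) returned by a
// scan match to every point of the scanned map (equation 2.12), giving the
// corresponding points on the reference map.
std::vector<Point2D> applyScanMatch(const std::vector<Point2D>& scan,
                                    double phi, double tx, double ty)
{
    std::vector<Point2D> out;
    out.reserve(scan.size());
    for (const Point2D& p : scan) {
        out.push_back({ std::cos(phi) * p.x - std::sin(phi) * p.y + tx,
                        std::sin(phi) * p.x + std::cos(phi) * p.y + ty });
    }
    return out;
}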


There are several ICP libraries available; the following were considered for this project:

• The mobile robotics programming toolkit [3]

• Point Cloud Library [15]

• ICP library [11]

• C(anonical) Scan Matcher (CSM) library [1]

The mobile robotics programming toolkit [3], the Point Cloud Library [15] and the C Scan Matcher library [1] require additional support libraries to be installed to work properly.

2.3 Localization

The localization part of the project is the part where the current location of the robot is calculated. The sensor data is used to calculate the probable location of the robot.

Bayesian filters are used to solve this problem using the probabilistic paradigm as described in the beginning of this chapter.

The theory behind Bayesian filters (taken from [16] and [17]) is explained briefly below. To calculate our belief of the state x_t we use:

bel(x_t) = p(x_t \mid z_{1:t}, u_{1:t}) \quad (2.13)

Where:
x_t is our state,
z_{1:t} are our observations up to time t (data from sensors),
u_{1:t} are the control commands up to time t to move the robot (i.e. move forward 1 cm).

When we apply Bayes rule we get:

bel(x_t) = \eta \, p(z_t \mid x_t, z_{1:t-1}, u_{1:t}) \, p(x_t \mid z_{1:t-1}, u_{1:t}) \quad (2.14)

where η is the normalization factor. After applying the Markov assumption (if we know the current state of the world we can forget the previous state) we get:

bel(x_t) = \eta \, p(z_t \mid x_t) \, p(x_t \mid z_{1:t-1}, u_{1:t}) \quad (2.15)

After applying the law of total probability to introduce a new variable x_{t-1} (the previous state) we get:

bel(x_t) = \eta \, p(z_t \mid x_t) \int_{x_{t-1}} p(x_t \mid x_{t-1}, z_{1:t-1}, u_{1:t}) \, p(x_{t-1} \mid z_{1:t-1}, u_{1:t}) \, dx_{t-1} \quad (2.16)

We apply the Markov assumption again to remove everything before t-1:

bel(x_t) = \eta \, p(z_t \mid x_t) \int_{x_{t-1}} p(x_t \mid x_{t-1}, u_t) \, p(x_{t-1} \mid z_{1:t-1}, u_{1:t}) \, dx_{t-1} \quad (2.17)

Yet another Markov assumption gives us:

bel(x_t) = \eta \, p(z_t \mid x_t) \int_{x_{t-1}} p(x_t \mid x_{t-1}, u_t) \, p(x_{t-1} \mid z_{1:t-1}, u_{1:t-1}) \, dx_{t-1} \quad (2.18)

Now we have a recursive term we can replace with bel(x_{t-1}), and we have our recursive Bayes filter:

bel(x_t) = \eta \, p(z_t \mid x_t) \int_{x_{t-1}} p(x_t \mid x_{t-1}, u_t) \, bel(x_{t-1}) \, dx_{t-1} \quad (2.19)

The Bayes filter can be written as a two step process, the prediction step:

\overline{bel}(x_t) = \int_{x_{t-1}} \underbrace{p(x_t \mid x_{t-1}, u_t)}_{\text{motion model}} \, bel(x_{t-1}) \, dx_{t-1} \quad (2.20)

and the correction step:

bel(x_t) = \eta \, \underbrace{p(z_t \mid x_t)}_{\text{sensor or observation model}} \, \overline{bel}(x_t) \quad (2.21)

This leads to the following pseudocode for a general Bayes filter (taken from [17]):

1: BayesFilter(bel(x_{t-1}), u_t, z_t)
2:   for all x_t do
3:     \overline{bel}(x_t) = \int p(x_t \mid x_{t-1}, u_t) \, bel(x_{t-1}) \, dx_{t-1}
4:     bel(x_t) = \eta \, p(z_t \mid x_t) \, \overline{bel}(x_t)
5:   end for
6:   return bel(x_t)
7: endBayesFilter
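To make the framework concrete, the sketch below implements the prediction and correction steps for a discrete (histogram) state space, where the integral becomes a sum. It is only an illustration and not part of the RONAS software; the motion and observation models are placeholders passed in as tables.

#include <cstddef>
#include <vector>

// One pass of a discrete Bayes filter over a finite set of states.
// motion[i][j] = p(x_t = i | x_{t-1} = j, u_t), likelihood[i] = p(z_t | x_t = i).
std::vector<double> bayesFilterStep(const std::vector<double>& belPrev,
                                    const std::vector<std::vector<double>>& motion,
                                    const std::vector<double>& likelihood)
{
    const std::size_t n = belPrev.size();
    std::vector<double> bel(n, 0.0);

    // Prediction step (equation 2.20): integrate the motion model over the
    // previous belief -- a sum in the discrete case.
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            bel[i] += motion[i][j] * belPrev[j];

    // Correction step (equation 2.21): weight by the observation model and
    // normalize (the factor eta).
    double eta = 0.0;
    for (std::size_t i = 0; i < n; ++i) { bel[i] *= likelihood[i]; eta += bel[i]; }
    if (eta > 0.0)
        for (double& b : bel) b /= eta;
    return bel;
}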

A Bayes filter is only a framework for recursive state estimation; there are different realizations with different properties, for example:

• Linear vs non-linear models for motion and observation.

• Gaussian vs non-Gaussian distributions.

• Parametric vs non-parametric filters.

There are many implementations of Bayes filters, each combining the properties mentioned above in a different way. The RONAS project looked into the Kalman filter and the particle filter.

The Kalman filter expects a linear transition model and observation model; it also expects that all of its probability distributions are Gaussian. The pseudocode below shows the algorithm of the Kalman filter: rows two and three implement the prediction step, and rows four, five and six the correction step.


1: KalmanFilter(\mu_{t-1}, \Sigma_{t-1}, u_t, z_t)
2:   \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t
3:   \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t
4:   K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}
5:   \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)
6:   \Sigma_t = (I - K_t C_t) \bar{\Sigma}_t
7:   return \mu_t, \Sigma_t
8: endKalmanFilter

In the above Kalman filter pseudocode, the rows are read as follows:

Row 2 Predict the state:
\bar{\mu}_t is the predicted state (n x 1),
A_t is the model (n x n) that predicts the new state when no controls are applied,
\mu_{t-1} is the previous state,
B_t is the model (n x l) that predicts what changes are made to the state based on command inputs,
u_t are the command inputs (l x 1) (i.e. move forward with a velocity of 1 cm/s).

Row 3 Predict the error covariance:
\bar{\Sigma}_t is the predicted error covariance (n x n),
\Sigma_{t-1} is the previous error covariance (n x n),
R_t is the covariance of the process noise (n x n).

Row 4 Calculate the Kalman gain; this step calculates how much each sensor contributes to the final estimated state:
K_t is the calculated Kalman gain,
C_t is the sensor model (how the sensor reflects the vehicle state) (n x n),
Q_t describes the sensor noise (n x n).

Row 5 Update the estimate with the measurements of the sensor:
\mu_t is the new estimated state (the output of the Kalman filter),
z_t are the measurements of the sensor (n x 1).

Row 6 Update the error covariance:
I is the identity matrix.
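A scalar version of the pseudocode above, with every matrix reduced to a single number, can make the two steps easier to follow. This is only an illustration with hypothetical names; the real robot state is a vector and the matrices keep their dimensions.

// Scalar Kalman filter step following the pseudocode above, with all
// matrices (A, B, C, R, Q) reduced to scalars.
struct Kalman1D {
    double mu;      // state estimate
    double sigma;   // error covariance
};

Kalman1D kalmanStep(Kalman1D prev, double u, double z,
                    double A, double B, double C, double R, double Q)
{
    // Prediction (rows 2 and 3)
    double muBar    = A * prev.mu + B * u;
    double sigmaBar = A * prev.sigma * A + R;

    // Correction (rows 4, 5 and 6)
    double K     = sigmaBar * C / (C * sigmaBar * C + Q);   // Kalman gain
    double mu    = muBar + K * (z - C * muBar);
    double sigma = (1.0 - K * C) * sigmaBar;
    return { mu, sigma };
}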


A particle filter is a Bayes filter that in contrast to the Kalman filter can work with non-linear transition and observation models (Arulampalam et al. [6]). Basically one starts with a large set (depending on the area to cover) of random uniformly distributed samples and after each filter pass, the samples gather more and more around the predicted location, depending on their importance. The most probable state is calculated by the weighted mean value of the returned sample set. The pseudocode below shows the algorithm of a particle filter.

1: ParticleFilter(X_{t-1}, u_t, z_t)
2:   \bar{X}_t = X_t = \emptyset
3:   for j = 1 to J do
4:     sample x_t^{[j]} \sim p(x_t \mid u_t, x_{t-1}^{[j]})
5:     w_t^{[j]} = p(z_t \mid x_t^{[j]})
6:     \bar{X}_t = \bar{X}_t + \langle x_t^{[j]}, w_t^{[j]} \rangle
7:   endfor
8:   for j = 1 to J do
9:     draw i \in \{1, \dots, J\} with probability \propto w_t^{[i]}
10:    add x_t^{[i]} to X_t
11:  endfor
12:  return X_t
13: endParticleFilter

In the above particle filter pseudocode, the rows are read as follows:

Row 2 Creates two empty sample sets:
\bar{X}_t is the predicted set,
X_t is the corrected set.

Row 3 Loops through the number of samples in the set:
J contains the number of samples in the set.

Row 4 Adjusts each sample of the set according to the command inputs:
x_t^{[j]} is sample j generated from the motion model,
u_t equals the control input.

Row 5 Assigns a weight to each sample; the higher the weight, the higher the probability:
w_t^{[j]} is the weight of sample j,
z_t equals the observation (sensor readings).

Row 6 Adds the new sample with its weight to the predicted set \bar{X}_t.

Rows 8-11 Draw J samples randomly in such a way that the samples with the highest weight are drawn more often. Add those samples to the new set X_t.

Row 12 Return the new sample set.
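The sketch below follows the pseudocode above for one filter iteration, with the motion and observation models passed in as callables since they depend on the robot and its sensors. It is an illustration with hypothetical names, not the RONAS implementation.

#include <cstddef>
#include <functional>
#include <random>
#include <vector>

struct Particle { double x, y, theta; };

// One iteration of the particle filter: predict, weight, resample.
std::vector<Particle> particleFilterStep(
    const std::vector<Particle>& prev,
    const std::function<Particle(const Particle&)>& sampleMotion,   // row 4
    const std::function<double(const Particle&)>& weight)           // row 5
{
    const std::size_t J = prev.size();
    std::vector<Particle> predicted;
    std::vector<double> w;
    predicted.reserve(J);
    w.reserve(J);

    for (std::size_t j = 0; j < J; ++j) {                 // rows 3 - 7
        predicted.push_back(sampleMotion(prev[j]));
        w.push_back(weight(predicted.back()));
    }

    // Rows 8 - 11: draw J particles with probability proportional to weight.
    static std::mt19937 gen(std::random_device{}());
    std::discrete_distribution<std::size_t> draw(w.begin(), w.end());
    std::vector<Particle> resampled;
    resampled.reserve(J);
    for (std::size_t j = 0; j < J; ++j)
        resampled.push_back(predicted[draw(gen)]);
    return resampled;                                     // row 12
}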

2.4 Summary

Three main problems have to be solved to reach the project's goal.

• One has to be able to move the robot; therefore a kinematic model of the robot is needed, and an odometry based motion model can be used to accomplish robot movement. A path tracking component can be programmed using the follow the carrot, pure pursuit or vector pursuit algorithm.

• Sensor data that builds a map shall be compared with a known map of the environment using a technique called scan matching. Two possible algorithms are point to point matching and point to line matching, and there are several libraries available which can be used.

• The robot shall be able to localize itself; probability theory with the use of Bayesian filters can be used for this task. There are several implementations of Bayesian filters; for this project the Kalman filter and the particle filter are considered.


Chapter 3 Method

3.1 Development method

At the start of the project all the tasks needed to solve the project's problems were identified and roughly planned in time. The tasks and time planning were adjusted later on, based on the knowledge gained from the background research described in the previous chapter. This final planning was used during the remaining time of the project and can be found in appendix A.

3.2 Development environment

A small two wheel robot was provided by Halmstad University. This robot is equipped with a Raspberry Pi model 2 embedded system on which the software shall be installed and executed. The robot is also equipped with an Arduino Nano subsystem which acts as an interface for the wheel speed and the wheel encoders. The Raspberry Pi and the Arduino Nano communicate with each other over a 115200 bps serial connection.

The robot is powered by a rechargeable battery of the lithium polymer (LiPo) type. Three batteries and a USB charging adapter were provided so that batteries could be charged during development of the software.

The university also provided a LIDAR sensor. This sensor comes from a Neato robotic vacuum cleaner [8] and communicates with the Raspberry Pi using a 115200 bps serial interface.

In order to mount the LIDAR sensor on top of the robot, the Raspberry Pi had to be lowered and an extra mounting layer had to be added to hold the sensor. The extra mounting layer is designed in such a way that the center of the LIDAR sensor matches the center of the robot.

Figure 3.1 shows a picture of the final robot equipped with the LIDAR sensor on top.


Figure 3.1: The robot hardware

Due to its inherent modularity, speed and available support libraries, the programming language of choice is C++. The project uses UML [9] as the design language and OOP as the development method for the software. The project delivers a UML class diagram of the system together with UML sequence diagrams to explain the program flow where needed.

The Raspberry Pi supports a wide range of C libraries which the developer can use.

For the RONAS project the WiringPi library [4] is used to communicate with the GPIO and the serial ports. Because of its simplicity and its ability to function without any additional software installations, the ICP library [11] is used for performing the scan match between the LIDAR generated map and a reference map.

The Raspberry Pi is equipped with the default Raspbian operating system, on which the graphical user interface is disabled to save resources.

The project does not use an Integrated Development Environment (IDE) to develop its software. The reason for not using an IDE is that when the software is going to be used in future course practicals, there should be no extra hurdles for the student to learn, configure and use a complex IDE running a cross platform compiler. The software is written using a regular text editor. Compilation of the source takes place on the Raspberry Pi itself using a general makefile which checks all the changes and dependencies and compiles only the sources that need a (re)compile.

3.3 Application model

The software system is divided into four parts or components; each part is designed and programmed separately into a set of coherent classes. Communication between the different components is implemented using the observer design pattern as described by Gamma et al. [10]. The observer pattern notifies observer classes of changes in the class they observe: when, for example, the LIDAR sensor has performed a complete scan, it notifies the particle filter that new data is available to process. This way there is no need to poll continuously to check whether new data is available, which relieves the CPU and saves the robot's battery.
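A minimal sketch of this pattern is shown below; the class and method names are illustrative and not the ones used in the RONAS source.

#include <vector>

// A subject (for example a sensor driver) notifies its observers when new
// data is available, so no component has to poll.
class Observer {
public:
    virtual ~Observer() = default;
    virtual void notify() = 0;          // called when the subject has news
};

class Subject {
public:
    void attach(Observer* o) { observers.push_back(o); }
    void notifyObservers()   { for (Observer* o : observers) o->notify(); }
private:
    std::vector<Observer*> observers;
};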

The software is multi threaded to make maximum use of the Raspberry Pi's four CPU cores and to enhance the system's performance and responsiveness.

Figure 3.2 below shows the different components of the software system; everything within the dotted rectangle is part of the software of the RONAS project. In the following sections each software component is described in more detail.


Figure 3.2: The software components

3.4 Common driver model

The common driver model acts as a base for all the drivers needed for the robot's positioning sensors. A driver written according to this model can be plugged into the system, after which the sensor data can be used by the position calculation component (section 3.6) to calculate the robot's position. For this project, only two drivers will be developed: a driver for the LIDAR sensor and a driver for the robot wheel encoders.

The driver model has scan matching support. That means that if a two dimensional map (as described later in section 3.6) of the robot's environment is available, the map is shared with the driver. The driver is then able to map its sensor readings onto the supplied map for enhanced accuracy. Driver classes are capable of notifying their observers when new position data becomes available.
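A hypothetical outline of such a driver base class is sketched below; the actual class layout of the RONAS software is given in the UML diagram in appendix B.2, so the names here are illustrative only.

#include <vector>

struct Pose2D  { double x, y, theta; };
struct Point2D { double x, y; };

// Observer interface for components interested in new position data.
class PositionObserver {
public:
    virtual ~PositionObserver() = default;
    virtual void onNewPosition(const Pose2D& pose) = 0;
};

// Base class a positioning sensor driver could derive from: it can receive
// a reference map for scan matching and notifies its observers when a new
// position estimate is available.
class SensorDriver {
public:
    virtual ~SensorDriver() = default;
    virtual void start() = 0;                           // driver runs as its own thread
    void setReferenceMap(const std::vector<Point2D>& map) { refMap = map; }
    void attach(PositionObserver* observer) { observers.push_back(observer); }

protected:
    void publish(const Pose2D& pose) {                  // notify all observers
        for (PositionObserver* o : observers) o->onNewPosition(pose);
    }
    std::vector<Point2D> refMap;

private:
    std::vector<PositionObserver*> observers;
};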


3.5 User interface

During the development, testing and delivery of the project it is necessary to be able to interact with the system. For this purpose the system must have a user interface through which the end user can observe and control the system.

The user interface consists of a command shell from which one is able to control and configure the robot. To run the software, one has to log in on the Raspberry Pi and start the RONAS software, which presents the command shell. Table 3.1 lists all the commands which are available at the command prompt.

Command Action

filter list Lists the available filters.

filter show Shows the current active filter.

filter use Sets the active filter.

driver list Lists available drivers.

driver activate Activates a driver.

driver deactivate Deactivates a driver.

showpos Shows the robot's actual positions from the drivers and the filter.

move Moves the robot forward or backward.

rotate Rotates the robot.

reset Resets the RONAS system to startup values.

exit Shuts down the RONAS system and exits the command shell.

Table 3.1: Available RONAS commands from the command shell

To aid in debugging the software and to visualize data generated by the LIDAR sensor during development, there is also a rudimentary graphical user interface available. This interface is written in Java and receives data sent over TCP by the LIDAR driver class and the particle filter class.

3.6 Position calculation

The position calculation component is an observer class that observes the sensor drivers and the filters for changes of positioning data. The position calculator calculates the robot's position and shares this position with its observers.

The calculated position is the absolute position on the given reference map. Because the robot shall drive in a constrained environment at the electronics laboratory of the university, the reference map can and will be generated by the software. Given the fact that the project uses ICP point to point scan matching (see section 3.2), the generated map is represented as a set of points. The map generator takes the size of the constrained environment and the number of points as parameters to generate a reference map.

To calculate the robot's position, the project uses Bayesian filtering as described in section 2.3. The project uses a particle filter, mainly because the particle filter is more flexible than the Kalman filter: the models used by the particle filter do not have to be linear or Gaussian shaped (chapter 4.3 in [17]). The particle filter is also easier to program than the Kalman filter. The downside of the particle filter is that it is slower and more CPU intensive, depending on the number of samples used. But considering the hardware used (Raspberry Pi 2) and the constrained environment where the robot shall drive, this should not be a problem.

The particle filter needs three parameters: the number of particles, the position deviation and the theta deviation. The position and theta deviation parameters are used to keep the particles spread out and to prevent them from eventually all converging to one single particle.
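As a sketch of how the two deviation parameters could be applied (an illustration of the idea, not the RONAS code): after each iteration, Gaussian noise with the configured deviations is added to every particle so the set does not collapse onto a single particle.

#include <random>

struct Particle { double x, y, theta; };

// Add Gaussian noise with the configured position and theta deviation to a
// particle, keeping the particle set spread out between iterations.
void jitterParticle(Particle& p, double posDeviation, double thetaDeviation,
                    std::mt19937& gen)
{
    std::normal_distribution<double> posNoise(0.0, posDeviation);
    std::normal_distribution<double> thetaNoise(0.0, thetaDeviation);
    p.x     += posNoise(gen);
    p.y     += posNoise(gen);
    p.theta += thetaNoise(gen);
}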

3.7 Navigation calculation

The navigation calculation part is responsible for the robot movement. This component executes the move and rotate commands and calculates, with the help of the kinematic model described in section 2.1, how long and how fast to turn each wheel to reach the desired end position. The robot uses an acceleration/deceleration model to ramp up to full speed and back down to standstill, to minimize wheel slippage and therefore positioning errors. This component also provides changes in control information to its observers; control information consists of the current left and right wheel speed.
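A minimal sketch of such an acceleration/deceleration ramp is shown below; the exact ramp shape and step size used in the RONAS software are not specified here, so the names and values are placeholders.

#include <algorithm>

// Move the current wheel speed one bounded step towards the target speed.
// Called once per clock tick; limiting the step per tick gives a gradual
// ramp up to full speed and back down to standstill.
double rampSpeed(double current, double target, double maxStepPerTick)
{
    double step = std::clamp(target - current, -maxStepPerTick, maxStepPerTick);
    return current + step;
}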

If the planning and time allow writing a path correction algorithm, the Pure Pursuit path tracking algorithm as described by Giesbrecht et al. [12] and explained in section 2.1 shall be used. The reason for this choice is that the project's time is limited and the Pure Pursuit algorithm is relatively easy to implement.

The robot shall use the odometry based motion model described in section 2.1 as base to calculate its movement.

In table 3.2 below the relevant kinematic parameters of the robot used by the navigation calculation are listed.

Parameter Description Value

r wheel radius 1.6 cm

l distance between the center of the two robot wheels 8.98 cm

step wheel encoder pulse distance 0.1675 mm

Table 3.2: Kinematic model parameters for the project robot with their values.


Chapter 4 Results

4.1 Software structure

The software structure is visualized in the UML class diagram B.5 and the classes fit the structure described in section 3.3. The ronas class is the main thread of the program, which instantiates the command interpreter, navigation calculator, position calculator, clock, wheel encoder driver and the LIDAR sensor driver, each as a separate thread.

The command interpreter waits for user input and passes the command validation and execution to the command executioner; for simplicity the ronas class has been defined as the command executioner.

Both the position calculator and the navigation calculator receive a clock tick from the clock, which by default runs at 50 Hz. All robot control commands, such as a change of wheel speed, run at this clock frequency.

The navigation calculator works with a command queue: commands are added to the queue and then processed in first-in, first-out order. The queue structure has the advantage that control is returned directly to the end user after a command is entered in the command shell. The message flow between objects when a move or rotate command is entered is illustrated in the UML sequence diagram 4.1.

Figure 4.1: UML sequence diagram of the move and rotate command

The sensor drivers run as separate threads and notify their observers when new position data is available. To avoid too many observer calls, the wheel encoder sensor driver samples the wheel encoders at a rate of once every 100 milliseconds. Due to the low scan rate of the LIDAR sensor of approximately four Hertz, this driver is not limited in its sampling speed.

When new position data from a sensor driver becomes available and the sensor is marked as active, the position calculator passes this data, together with the current robot controls (left and right wheel speed), to the active filter. The filter then calculates the robot's position based on the data it receives and returns this position to the position calculator. The position calculator notifies its observers of the newly calculated position.

The UML sequence diagram 4.2 shows the message flow for a position update.

Figure 4.2: UML sequence diagram of a position update

4.2 Robot movement

The robot uses an Arduino Nano board to control the speed of both wheels and to keep track of the wheel encoder pulses. Communication with the Raspberry Pi takes place over a 115200 bps serial line. The robot moves when speed commands are sent to the left and right wheel of the robot; the Arduino uses a PI controller which keeps the wheels running at the desired speed.

4.2.1 Robot speed command accuracy

While trying to find the correlation between the speed commands and the actual wheel speed, some inconsistencies were found. A test was conducted by performing each of the steps below for each speed setting.

• Reset the wheel encoder values to zero.

• Send the desired motor speed command to each wheel

• Wait one second for the wheels to settle

• Read the wheel encoders

• Wait one second

• Read the wheel encoders


• Calculate the delta between the first and the second wheel encoder reading

The first test showed that at higher speeds the two wheels do not run at the same speed; the test results are shown in table 4.1 below.

Wheel speed command code   Left encoder pulses   Right encoder pulses   Difference

1 27 27 0

8 114 115 -1

16 227 227 0

32 457 456 1

64 886 877 9

96 1136 1128 8

100 1167 1159 8

127 1387 1379 8

Table 4.1: Motor speed vs wheel encoder pulses per second

Because the result of this test has some impact on the accuracy of the robot movement, another test was performed using the whole range of speed commands. This test was performed on the left wheel encoder to test the linearity of the speed function and uses the same steps as described in the previous test. The results of this test are shown in the graph of figure 4.3.

Figure 4.3: Wheel speed command code vs wheel encoder pulses per second. The y-axis represents the speed command code and the x-axis represents the actual wheel speed in encoder pulses per second.


From the graph above one can see that when the wheel speed command passes the value of 60, the speed function ceases to be linear. One also notices that above this value there is an increasing number of unexplained dips in the speed.

With the results of the above tests in mind, and the fact that the current release of the software compensates for neither the wheel encoder inconsistencies nor the non-linearity of the speed function, robot moves based on motor speed commands are not accurate.

4.2.2 Track correction

Despite the fact that a path track correction algorithm should be programmed only when time and planning allow it, some time was spent anyway on implementing the pure pursuit algorithm to check whether it was able to compensate for the robot speed command inaccuracies. The pure pursuit path track correction algorithm described in section 2.1 has been programmed into the navigation calculator and adds small speed corrections to either the left or the right wheel to make the robot follow the error correcting curvature.

The algorithm tries to slow down one wheel so the robot drives the desired curvature to its lookahead point. However, the resolution of the motor speed commands versus the actual speed is too low to compensate smoothly for larger curvatures, which resulted in erratic robot movement, especially at lower speeds, as the example below shows.

This example uses a lookahead distance of 10 centimeters (0.1 meter), an error theta of 0.0873 radians (about 5 degrees) and a wheel speed of 5 (73 pulses per second, see table C.1).

\kappa = \frac{2\,\theta_{err}}{l} = \frac{2 \cdot 0.0873}{0.1} = 1.746 \quad (4.1)

r = \frac{1}{\kappa} = \frac{1}{1.746} = 0.573 \quad (4.2)

To make the robot follow an arc with a radius of 0.573 meter, the left and the right wheel should each follow their own arc, with the respective radii

r_{left} = r - \frac{l}{2} = 0.573 - \frac{0.0898}{2} = 0.5281 \quad (4.3)

r_{right} = r + \frac{l}{2} = 0.573 + \frac{0.0898}{2} = 0.6179 \quad (4.4)

If the right wheel keeps running at a speed of 73 encoder pulses per second, the left wheel speed in pulses per second is calculated as follows:

v_{left} = v_{right}\,\frac{r_{left}}{r_{right}} = 73 \cdot \frac{0.5281}{0.6179} = 62.3908 \quad (4.5)

When the speed of the left wheel is lowered by one step to a value of 4, the wheel runs at a speed of 56 pulses per second, where it should run at about 62.4 pulses per second; this results in a steeper driving curve. This steeper curve is compensated on the next clock tick with yet another steep curve in the opposite direction, which gives the robot its erratic behavior when trying to correct the path.


Because the implementation of the pure pursuit algorithm did not immediately give the desired correction results, and given the time constraints of the project, the path correction was disabled in the software.

4.3 Robot localization

Localization of the robot by just using the wheel encoder pulse count is done by using the equations 2.7 to 2.10 of Hellstr¨om [13]. The equation takes the current position and uses the left- and right wheel encoder pulses to obtain the new position. Because the position is not defined when the system is started, the calculated position is relative to (0, 0, 0).

The encoder position can be synchronized with the position obtained from the position calculator by entering the debug sync command from the command shell.

The LIDAR sensor is the other sensor the robot uses to localize itself. This sensor continuously rotates a laser scanner which measures the distance between the sensor and an object in the path of the laser beam. For every degree of rotation a distance in millimeters to the measured object is returned. The minimum distance which can be measured is 15 centimeters while the maximum distance equals 6 meters [8]. The sensor rotates at about 260 rotations per minute, which results in slightly more than four 360 degree scans every second. When the rotation speed is too high or too low, the accuracy of the scan decreases.

To locate the robot on a reference map using the scanned LIDAR map, a scan match is performed using the iterative closest point (ICP) library of Geiger et al. [11]. To prevent the ICP scan match algorithm from getting stuck in a local minimum, the approximate angle of the robot's theta position must be provided. In practice this means that the robot's x-axis should be aligned with the reference map x-axis before the software is started.

To avoid having to align the initial position of the robot with the reference map, the particle filter was programmed to perform a scan match for each particle. Each particle is then weighted based on how little translation and rotation is needed to match the reference map. The result is that eventually all the particles converge to the position of the robot on the reference map. The robot's position is calculated as the weighted average of all the particles.

4.3.1 Particle filter visualisation

To illustrate the working of the particle filter, the following set of figures shows the different states of the filter during the process. Figures 4.4 to 4.8 are actual screenshots taken from the graphical Java user interface described in section 3.5, which was used to visualize map data during programming of the software.

During recording of this series of filter states the robot was at a stationary position.


Figure 4.4 shows the map generated by the LIDAR sensor; the circle with the arrow marks the actual position and heading of the robot.

Figure 4.4: The view from the LIDAR sensor

In figure 4.5 the reference map is shown with all the random particles generated by the particle filter at startup.

Figure 4.5: Particle filter particles initial state

Figures 4.6 and 4.7 show the particle positions as they converge towards the final position.

Figure 4.6: Particle filter particles after one iteration


Figure 4.7: Particle filter particles towards final converge

The final figure 4.8 shows the particles converged around the robot position; the actual position of the robot is determined by calculating the weighted average of all the particles. This test was performed using 200 particles and it took 50 iterations for the particle filter to converge around the actual robot position.

Figure 4.8: Particle filter particles are converged around the actual robots position

4.3.2 Particle filter accuracy test

To test the accuracy of the particle filter localization, the test above was performed three times with the robot stationary in the same position for every test run. After 50 iterations of the particle filter, the position from the particle filter was read using the showpos command and written down in table 4.3. The angle of the actual robot position is approximate due to the lack of accurate measurement tools. The particle filter parameters as described in section 3.6 were set according to table 4.2.

Particle filter parameter   Value   Unit
Number of particles         200     each
Position deviation          8.0     millimeter
Theta deviation             0.1     degree

Table 4.2: Particle filter parameter values during the test

The test results are written down in table 4.3 below.


Test run   Actual robot position (x mm, y mm, θ)   Position from particle filter (x mm, y mm, θ)

1          (456, 194, ±70°)                        (454, 202, 75.5°)

2          (456, 194, ±70°)                        Did not converge after 50 iterations

3          (456, 194, ±70°)                        (463, 191, 74.2°)

Table 4.3: Particle filter accuracy test results

Although this implementation of a particle filter is accurate and works as predicted, it will not be workable in real-life situations because of the amount of time it takes for the ICP algorithm to scan match each particle. The first iteration takes 2.5 seconds per particle to scan match; every subsequent iteration takes about 150 milliseconds to scan match a single particle. The test performed above therefore took

2.5 \cdot 200 + 49 \cdot 200 \cdot 0.150 = 1970 \text{ seconds} \quad (4.6)

4.3.3 Simultaneous robot movement and scanning

The tests above were performed with the robot at a stationary position, but when the robot is moving (especially rotating) and scanning at the same time, the scanned map becomes distorted, as in the example in figure 4.9, because the LIDAR sensor scan speed is too low to keep up with the robot movement. Depending on the level of distortion, it is hard or even impossible for the ICP scan match algorithm to find a satisfying match with the reference map. Since the software does not compensate the LIDAR map data for the robot movement, a reliable scan match could only be obtained when the robot was stationary or moving at a very low speed.

Figure 4.9: Example of skewed LIDAR map


Chapter 5 Discussion

The project goals are met. The whole system is built from 36 C++ classes, many of which are derived from a base class and did not need much programming to get the required functionality. The Thread class makes it easy to run an object in a separate thread to make optimum use of all the CPU cores.

The command shell is easy to work with: a list of available commands with their parameters is shown when an unknown or wrong command is entered. The graphical Java user interface is only used for testing and debugging, but the TCP classes of the RONAS software provide a solid base for writing a more advanced user interface. With those classes one can define a TCP interface over the network to control the robot with a user interface of choice.

The robot is able to move accurately at lower speeds. At higher speeds the Arduino Nano subsystem does not keep the wheels running at the same speed; one has to look into the subsystem's software to solve this problem. It should also be possible to compensate for this problem, but then the resolution of the speed commands would have to be higher: it is not possible right now to give small enough speed corrections for each wheel to smoothly correct deviations. A solution would be to control the motor speed by sending the motor control command as wheel encoder pulses per second. This way the RONAS software would have full control over the wheel speed and would be capable of performing precise navigation and path correction.

The programmed particle filter works, but one iteration takes a long time. The reason for this is that the ICP scan match algorithm takes a lot of time to scan match each particle. In the performed particle filter tests, 200 particles were used, which is too few to be reliable. One can see that test 2 in table 4.3 did not converge within 50 iterations. It is likely that it will converge eventually, but using only 200 particles it may take many more than 50 iterations. It would be interesting to test the particle filter more thoroughly, using more particles (more than 1000) and different parameter settings, to tune accuracy and convergence speed. But with the time one filter iteration currently takes, this is a very tedious job which would take weeks or even months depending on the tests.

A solution to the particle filter speed problem might be to write a custom ICP scan match algorithm, to use another scan match library, or even to use a completely different algorithm. That different algorithm might be the point to line scan match algorithm from Cox [7]. After a quick study of this algorithm, it looks like it is more efficient than the currently used point to point algorithm. It would be interesting to compare the performance of the two algorithms.

Another possible solution is to let the scan match be performed on another computer. The data needed to perform a scan match is just a couple of bytes and can easily be sent to a more powerful computer over a TCP connection. One can even think of a scan match server which is able to serve several clients.

Regarding the LIDAR sensor, as mentioned in chapter 2 the scan accuracy decreases when the sensor rotates too slowly or too fast. It would be easy to control the sensor rotation using a PWM signal and use the reported rotation speed to correct any deviation, maintaining a constant speed and thereby enhancing accuracy. The WiringPi library [4] is however limited when one wants to use PWM; some study might be required to get this to work properly.

It should also be possible to compensate for the robot movement in the map the LIDAR sensor produces, so we do not end up with a distorted map. We know exactly when and how fast the robot is moving and we know exactly when the LIDAR sensor is scanning a specific point on the map. With this information it should be possible to produce an undistorted map even when the robot moves.

Although the product created by the project is only a prototype and mainly consists of software, some parts of this software are written with energy saving in mind. To prevent the batteries from draining too fast, the software contains no unnecessary tight loops; where possible the usleep function is called to release the CPU. The observer model (discussed in section 3.3) used by the software eliminates polling for status, which also saves CPU usage and therefore power consumption. The whole system without the LIDAR sensor draws about 800 milliamperes at a CPU utilization of 20 percent.

The project's results might also be interesting for future society and the environment. The knowledge gained from the project is going to be used to train or aid future students in vehicle localization. Given that many car manufacturers are building or planning to produce self-driving cars, the demand for this expertise will grow in the future. Self-driving cars are going to have a large impact on society and the environment: they drive more safely, so car accidents (and injuries) will be reduced, and they drive more efficiently, which will have a positive impact on the environment.


Chapter 6 Conclusion

The project goals are met: the robot is able to move, collect data from its sensors and localize itself in a known environment. The robot can be controlled through a command shell user interface, which becomes available directly after the software starts.

The software makes optimum use of all four CPU cores of the Raspberry Pi 2 because all the CPU intensive and response sensitive classes run as separate threads.

The robot is able to move by entering move and rotate commands in the command shell. The robot movement speed increases to maximum speed and decreases to zero gradually, to minimize wheel slippage for maximum accuracy. At speeds above 886 wheel encoder pulses per second the system becomes less accurate, because the Arduino Nano subsystem is not able to keep both wheels running at the same speed. The project's software is unable to compensate for the speed difference between the wheels because the resolution of the speed command is too low.

The same low resolution of the speed command prevents the project's implementation of the pure pursuit correction algorithm from working properly. Activating the track correction results in erratic robot movement caused by continuously overcorrecting the robot's driving path.

The particle filter and the scan matching against the reference map work as predicted. All the particles eventually gather around the robot's actual position on the reference map. In the performed tests, it took the particle filter about half an hour to determine the actual position of the robot in the robot's constrained environment. The most time consuming part of the particle filter is the ICP scan match algorithm: for every particle filter iteration, every single particle is scan matched against the map data to determine its relevance for the next iteration.

When the robot is moving and scanning at the same time, the map produced by the LIDAR sensor can be distorted, especially when the robot is rotating. It is harder to get a good scan match result from a distorted map; for that reason a reliable position from a moving robot can only be obtained when the robot is moving slowly and not rotating.


Appendix A Time plan

1) Writing paper
2) Project plan
   2.1) Writing project plan
   2.2) Finished project plan
   2.3) Preparing project plan seminar
   2.4) Project plan seminar
3) Preparing development environment
   3.1) UML design software
   3.2) Raspberry Pi environment
   3.3) C++ environment
   3.4) Raspberry Pi file sharing
4) Determine hardware needed
   4.1) SD card
   4.2) USB serial converter
5) Mechanical construction
   5.1) LIDAR mounting construction
6) Determine robot's kinematic model
7) Preparing theory (reading articles and algorithms)
8) Study sensor datasheets
9) Design of software
   9.1) Common driver model
   9.2) User interface
   9.3) Position calculation
   9.4) Navigation calculation
10) Building software
   10.1) LIDAR driver
   10.2) Encoder driver
   10.3) Motor control
   10.4) Position calculation
   10.5) User interface
   10.6) Navigation calculation
11) Testing software
   11.1) Testing software

The tasks were planned over the period January to June 2016.

Figure A.1: Project tasks and time planning


Appendix B

UML class diagrams

B.1 User Interface

Figure B.1: UML class diagram of the User Interface


B.2 Common Driver Model

Figure B.2: UML class diagram of the Common Driver Model


B.3 Position Calculator

Figure B.3: UML class diagram of the Position Calculator


B.4 Navigation Calculator

Figure B.4: UML class diagram of the Navigation Calculator


B.5 All classes

Figure B.5: Complete UML class diagram of the software


Appendix C

Speed commands vs encoder pulses

Speed Encoder Speed Encoder Speed Encoder Speed Encoder

1 27 33 473 65 890 97 1148

2 28 34 478 66 899 98 1156

3 43 35 503 67 908 99 1140

4 56 36 517 68 917 100 1171

5 73 37 530 69 924 101 1179

6 87 38 545 70 933 102 1189

7 98 39 560 71 922 103 1195

8 115 40 574 72 949 104 1203

9 130 41 587 73 956 105 1211

10 143 42 604 74 965 106 1219

11 158 43 604 75 973 107 1226

12 173 44 632 76 962 108 1209

13 182 45 645 77 989 109 1243

14 202 46 660 78 998 110 1252

15 215 47 661 79 1005 111 1258

16 229 48 688 80 1013 112 1241

17 239 49 701 81 1002 113 1273

18 258 50 716 82 1031 114 1280

19 273 51 729 83 1037 115 1289

20 286 52 743 84 1044 116 1297

21 301 53 758 85 1053 117 1280

22 316 54 769 86 1041 118 1314

23 331 55 782 87 1069 119 1320

24 344 56 797 88 1076 120 1328

25 359 57 809 89 1084 121 1310

26 365 58 820 90 1093 122 1344

27 388 59 832 91 1101 123 1352

28 401 60 845 92 1109 124 1361

29 417 61 852 93 1117 125 1367

30 422 62 861 94 1126 126 1375

31 445 63 874 95 1110 127 1387

32 459 64 884 96 1140

Table C.1: Motor speed vs left wheel encoder pulses per second


Bibliography

[1] C(anonical) Scan Matcher (CSM), May 2010. URL http://censi.mit.edu/software/csm/.

[2] Google car and LIDAR sensor, March 2015. URL http://www.wired.com/2015/09/laser-breakthrough-speed-rise-self-driving-cars/.

[3] C++ mobile robots programming toolkit library, February 2016. URL http://www.mrpt.org.

[4] Peripherals library for the Raspberry Pi, February 2016. URL https://projects.drogon.net/raspberry-pi/wiringpi/.

[5] Stephen Armah, Sun Yi, and Taher Abu-Lebdeh. Implementation of autonomous navigation algorithms on two-wheeled ground mobile robot. American Journal of Engineering and Applied Sciences, 7(1):149-164, 2014.

[6] M. Sanjeev Arulampalam, Simon Maskell, Neil Gordon, and Tim Clapp. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing, 50(2):174-188, February 2002.

[7] Ingmar J. Cox. Blanche - an experiment in guidance and navigation of an autonomous robot vehicle. IEEE Transactions on Robotics and Automation, 7(2), April 1991.

[8] LIDAR sensor datasheet, October 2014. URL http://xv11hacking.wikispaces.com/LIDAR+Sensor.

[9] Martin Fowler and Kendall Scott. UML Distilled: Applying the Standard Object Modeling Language. Addison Wesley, December 1997. ISBN 0-201-32563-2.

[10] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison Wesley, 1995. ISBN 0-201-63361-2.

[11] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR), 2012. URL http://www.cvlibs.net/software/libicp/.

[12] J. Giesbrecht, D. Mackay, J. Collier, and S. Verret. Path tracking for unmanned ground vehicle navigation. Technical report, Defence Research and Development Canada, 2005.

[13] Thomas Hellström. Kinematics equations for differential drive and articulated steering. Technical Report ISSN-0348-0542, Umeå University, December 2011.

[14] Martin Lundgren. Path tracking for a miniature robot. Master's thesis, Umeå University, 2003.

[15] Radu Bogdan Rusu and Steve Cousins. 3D is here: Point Cloud Library (PCL). In IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9-13 2011.

[16] Cyrill Stachniss. Freiburg University SLAM course, October 2013. URL https://www.youtube.com/watch?v=wVsfCnyt5jA.

[17] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics. MIT Press, 2005. ISBN 9780262201629.


PO Box 823, SE-301 18 Halmstad
Phone: +46 35 16 71 00
E-mail: registrator@hh.se

My name is Johannes van Esch. After having worked for 25 years in the field of information technology, I wanted to work on more technically challenging projects. Electronics and embedded systems have always had my interest and now they will be my future work.
