
Institutionen för systemteknik

Department of Electrical Engineering

Master's thesis

Estimation of Local Map from Radar Data

Master's thesis carried out in Automatic Control
at the Institute of Technology at Linköping University

by

Malte Moritz and Anton Pettersson
LiTH-ISY-EX–14/4806–SE

Linköping 2014

Department of Electrical Engineering
Linköpings universitet


Estimation of Local Map from Radar Data

Master's thesis carried out in Automatic Control
at the Institute of Technology at Linköping University

by

Malte Moritz and Anton Pettersson
LiTH-ISY-EX–14/4806–SE

Supervisors: Niclas Evestedt

ISY, Linköpings universitet

Lars Hjorth

Scania CV AB

Examiner: Daniel Axehill

ISY, Linköpings universitet


Division, Department: Division of Automatic Control, Department of Electrical Engineering, SE-581 83 Linköping
Date: 2014-11-04
Language: English
Report category: Examensarbete (Master's thesis)
URL for electronic version: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-111916
ISBN: —
ISRN: LiTH-ISY-EX–14/4806–SE
Title of series, numbering: —
ISSN: —
Title: Skattning av lokal karta från radardata (Estimation of Local Map from Radar Data)
Author: Malte Moritz and Anton Pettersson


Keywords: Truck, Sensor Fusion, Radar, 2D Mapping, SLAM, Occupancy Grid Map, Scan Match, Extended Kalman Filter, Range Rate Filter, Online SLAM


Abstract

Autonomous features in vehicles are already a big part of the automotive industry, and many companies are now looking for ways to make vehicles fully autonomous. Autonomous vehicles need information about the surrounding environment. The information is extracted from exteroceptive sensors, and today vehicles often use laser scanners for this purpose. Laser scanners are very expensive and fragile; it is therefore interesting to investigate whether cheaper radar sensors could be used. One big challenge for autonomous vehicles is to use the exteroceptive sensors to extract a position of the vehicle and at the same time build a map of the environment. Simultaneous Localization and Mapping (SLAM) is well explored when using laser scanners but much less so when using radars.

This thesis investigates whether it is possible to use radar sensors on a truck to create a map of the area where the truck drives. The truck has been equipped with ego-motion sensors and radars, and the data from them has been fused together to get a position of the truck and a map of the surrounding environment, i.e. a SLAM algorithm has been implemented.

The map is represented by an Occupancy Grid Map (OGM) which should only contain static objects. The OGM is updated probabilistically using a binary Bayes filter. To localize the truck with the help of motion sensors, an Extended Kalman Filter (EKF) is used together with the map and a scan match method. All these methods are put together to create a SLAM algorithm. A range rate filter is used to remove noise and non-static measurements from the radar.

The results of this thesis show that it is possible to use radar sensors to create a map of a truck's surroundings. The quality of the map is considered good, and details such as the space between parked trucks, signs, and light posts can be distinguished. It has also been shown that methods with low performance on their own can work very well together with other methods in the SLAM algorithm. Overall the SLAM algorithm works well, but when driving in unexplored areas with few objects, positioning problems might occur. A real-time system has also been implemented, and the map can be viewed while the truck is manoeuvred.


Sammanfattning

Autonomous functions in vehicles are already a big area for the vehicle industry, and many companies are now looking into possibilities of making vehicles fully self-driving. Autonomous vehicles need information about the environment they operate in. The information is gathered from exteroceptive sensors, and today laser scanners are often used for this purpose. Laser scanners are very expensive and fragile; it is therefore interesting to investigate whether cheaper radar sensors can be used. A big challenge for autonomous vehicles is how the exteroceptive sensors should be used to extract a position of the vehicle while simultaneously mapping the surroundings. Simultaneous Localization and Mapping (SLAM) is a well explored area when laser scanners are used, whereas much less research is available for radar sensors.

This work investigates whether it is possible to use radar sensors on a truck to create a map of the area the truck drives in. The truck has been equipped with motion sensors and radars whose data have been combined to estimate both the truck's position and a map of the surrounding environment; in other words, a SLAM algorithm has been implemented. The map is represented by an Occupancy Grid Map (OGM) which only contains static objects. The OGM is updated probabilistically using a binary Bayes filter. To localize the truck with the help of motion sensors, an Extended Kalman Filter (EKF) has been used together with the map and a scan matching method. Together, all these methods form a SLAM algorithm. To use only static measurements and to filter out noise from the radar, a range rate filter is used.

The results of this work show that it is possible to use radar sensors to create a map of the truck's surroundings. The quality of the map is considered good, and details such as gaps between trucks, signs, and light posts can be distinguished. It has also been shown that methods which perform poorly on their own can perform very well when combined in a SLAM algorithm. Overall the SLAM algorithm works very well, but when driving in unexplored areas with few objects, positioning difficulties can occur. A real-time implementation has also been carried out, and a map can be displayed while the truck is being driven.


Acknowledgments

First off we would like to thank our supervisor at Scania, Lars Hjorth, who has given us great support during the whole thesis. He has always shown a great interest in our work and has helped whenever we felt stuck and lost. It was very inspiring to work with a person who showed such great commitment. Other Scania employees at the REPA department, such as Kristian Lundh and Marco Trincavelli, have been of great help and have always taken the time to help us. We would also like to thank our fellow thesis colleagues at REPA who made the days go much faster, and for all the fun lunches during our stay.

We would like to thank our examiner Daniel Axehill and our academic supervisor Niclas Evestedt for all the great input and support. Thanks for the support even though you were short of time.

A big thanks to all our friends and fellow students whom we have met during our five years at Linköping University. It feels good to be able to say that we have found friends for life.

Anton would like to give a special thanks to Tove for traveling all over southern Sweden and supporting him during these 5 years of studies. A big thanks to my family as well for all their love and support.

Finally Malte would like to thank his family for all their encouragement and love. A big round of applause to my sister Matilda who finally won the race of graduating first.

Linköping, August 2014
Malte Moritz and Anton Pettersson


Contents

Notation

1 Introduction
   1.1 Background
      1.1.1 Simultaneous Localization and Mapping
   1.2 Problem formulation
   1.3 Limitations
   1.4 Goals
   1.5 Thesis outline

2 Theory
   2.1 Transformation between coordinate systems
   2.2 Bayes' rule
   2.3 Gaussian blur
   2.4 Gradient descent
   2.5 Doppler effect
      2.5.1 Radar Doppler shift
   2.6 State space forms

3 System overview
   3.1 Test platform
      3.1.1 Sensors
   3.2 Geometry
   3.3 Kinematics
   3.4 Test track

4 Methods
   4.1 Occupancy Grid Map
      4.1.1 Binary Bayes filter
   4.2 Ray Tracing
      4.2.1 Sensor model
   4.3 Extended Kalman Filter
      4.3.1 Measurement model
      4.3.2 Motion model
      4.3.3 Validation
   4.4 Radar range rate filtering
      4.4.1 Classification
      4.4.2 Validation
   4.5 Occupancy Grid Map scan match
      4.5.1 Gaussian blur
      4.5.2 Gradient ascent
      4.5.3 Validation

5 Algorithms
   5.1 Mapping
   5.2 SLAM
      5.2.1 Relation between methods
      5.2.2 Online Implementation

6 Result
   6.1 Mapping Results
      6.1.1 Tuning
      6.1.2 Mapping results
   6.2 SLAM results
      6.2.1 Offline simulations
      6.2.2 Online test

7 Conclusions and further work

Appendix
   A EKF Figures
   B SLAM Figures

Notation

Abbreviations

Abbreviation | Meaning
2-D   | 2-dimensional
CTRA  | Constant Turn Rate and Acceleration
EKF   | Extended Kalman Filter
FOV   | Field Of View
GPS   | Global Positioning System
HPIG  | High Precision IMU GPS
IMU   | Inertial Measurement Unit
LCM   | Lightweight Communications and Marshalling
NDT   | Normal Distribution Transform
OGM   | Occupancy Grid Map
UTM   | Universal Transverse Mercator
SLAM  | Simultaneous Localization and Mapping
SRR   | Short Range Radar

1 Introduction

The purpose of this master's thesis is to investigate whether it is possible to use radar sensors on a truck to create a map of the area where the truck drives. The thesis work was carried out at Scania CV AB in Södertälje during the spring of 2014. This chapter covers the background, problem formulation, limitations, goals, and outline of the thesis.

1.1 Background

Companies in the truck and automobile industry are starting to look at different ways to make vehicles drive autonomously. The goal is to have commercial vehicles that are self-driving and that base their decisions on sensors in the vehicle, with as little human interaction as possible. Autonomous trucks are interesting for the mining industry since they could increase efficiency and safety. Autonomous vehicles are also interesting in urban areas, mostly from an environmental and safety perspective.

A big challenge in autonomous driving is for the vehicle to know what the landscape and its surroundings look like. It is therefore important to have detailed maps available or to create maps with the help of sensors. The maps are useful for knowing where the vehicle can drive and for creating a virtual view for the autonomous system to base decisions upon. The problem of creating maps for a vehicle can be summarized into two problems: how the data in the map should be represented, and how the map should be created with the help of the sensors. A map is not only useful for getting information about the vehicle's surroundings; it can also be used for navigation or to localize the vehicle. Most sensors in a vehicle only provide information about the vehicle's ego motion, but it is also important to have sensors that provide information about the surrounding environment, i.e. exteroceptive sensors. Examples of exteroceptive sensors are laser scanners, sonars, cameras, and radars.

If the position and heading (pose) of the vehicle are considered known, the only challenge is to create a map of the environment; this is referred to as the mapping problem. When both mapping and localization have to be done at the same time with the help of the sensors in the vehicle, the problem becomes more complex. This problem is often referred to as Simultaneous Localization and Mapping (SLAM), which is the standard term for creating a map from sensor data, since the position often needs to be derived with the help of some kind of map.

1.1.1 Simultaneous Localization and Mapping

SLAM is the problem where the vehicle should map the surrounding objects and at the same time localize itself in the map, i.e. estimate its pose. It is crucial to get both the mapping and the localization part to work since they affect each other: the position is needed when mapping, and the map is used for positioning. A SLAM algorithm often uses combinations of different sensors to get a good perception of the vehicle's motion and its surroundings. Thrun et al. [2005] have published a book where many methods for localization, map representations, and SLAM are presented and discussed.

The occupancy grid map (OGM) presented by Elfes [1989] is one of the most common data structures used to represent map information. When using an OGM in SLAM algorithms it is common to use laser scanners. Laser scanners produce many accurate measurements every second, which makes them well suited for creating high-precision maps, and many methods are therefore based on laser scanners, e.g. Levinson and Thrun [2010] and Wang et al. [2013]. Since radars are used instead of laser scanners in this thesis, many methods cannot be applied due to the, compared to a laser scanner, low resolution and accuracy. It is however possible to use radars for mapping, and some research has been made in the field. There are two major advantages of radars: they are robust against vibrations and most weather conditions, and they are much cheaper than laser scanners. Rouveure et al. [2009] have shown that it is possible to create a high-resolution map of an outdoor environment from radar data. Gerossier et al. [2009] also provide a solution for outdoor high-resolution mapping with radar; the derived methods are however computationally heavy and better suited for offline mapping. The results from Marck et al. [2013] show that radars can be used for indoor near-real-time mapping in harsh environments. Callmer et al. [2011] have also created a solution for SLAM with radars; they use visual radar features to locate a ship in a critical environment such as an archipelago.

Exteroceptive sensors are used both when a map is created and to get the pose of the vehicle. To get the pose, data from the sensor is compared with data collected before or with some known data, for example a map. The result can then be used together with the other sensors to extract the pose and to create a map. This comparison fuses the mapping part and the localization part together, since objects detected by the exteroceptive sensor are used to find the position, which makes it possible to map the detected objects in relation to the vehicle. This technique is often called scan matching, where each new update from the exteroceptive sensor is matched with either a map or earlier scans.

There are many different approaches to scan matching. The scan match can either fit each scan to a map or compare a scan to earlier scans. It can either use all the points from each detection, as in Biber and Strasser [2003] and Lee et al. [2011], or use landmarks and features extracted from the scan, as in Ulas and Temeltas [2011]. Another approach is to estimate the pose directly in an Extended Kalman Filter (EKF) with the help of the exteroceptive sensor, as in Gerossier et al. [2009].

1.2 Problem formulation

The problem formulation is stated as follows: "Estimate a map and its static features from multiple radar sensors while estimating the position of the truck in the created map". The problem can be divided into three main problems:

• Estimate a map from radar sensors when the position is known.

• Estimate the position of the truck with help of the radar sensors when the position is uncertain.

• Implement software to run online where the estimated map can be visualized while driving.

The problem formulation is not very restrictive; the main focus is to use radars for both mapping and localization. Otherwise the thesis has been open regarding the choice of methods, limitations, and representations. The problem can be divided into three main parts: mapping, offline SLAM, and online SLAM.

1.3 Limitations

Reductions in the problem's complexity have been made to make it easier to focus on specific methods and to keep the thesis realistic in scope. The limitations are as follows:

• No moving objects need to be mapped or distinguished in the map, i.e. they should be filtered out.

• Since one of the main tasks of the thesis is to run the algorithm online, the choice of methods depends not only on quality but also on computation time.


• The sensor data used comes from a Scania truck's standard sensors only; no additional sensor data is available.

• The two radars mounted in the front are placed on the sides and cannot detect objects in front of the truck. The truck therefore has no exteroceptive sensor that gives information about the area in front of it.

• All simulation code will be written in Matlab, and Simulink will be used for the online implementation.

1.4 Goals

Since the problem formulation is not very restrictive, some internal goals and hopes for the system were discussed. The limitations also result in some demands on the system, such as handling moving detections. The overall goal is to create algorithms for mapping and SLAM, where the SLAM algorithm should be able to run both offline and online. Other goals were set based on the problem formulation:

• Represent the map in such a way that details like light posts and gaps between trucks are visible.

• Be able to come back to an area and still locate the truck and map accurately.

• Filter out dynamic objects such as moving trucks and pedestrians.

• Use information from other sensors than the radars to estimate the truck’s trajectory and movement.

1.5 Thesis outline

Chapter 2 explains some of the fundamental theory used in the thesis: basic concepts in statistics, coordinate transformations, radar physics, and more. Chapter 3 describes the platform used during the thesis and its different subsystems, such as the sensors, the coordinate systems, and the kinematics of the truck. After these two chapters the reader should have a good background knowledge of the theory and the system used in the thesis.

In Chapter 4 the methods used to solve the mapping and SLAM problems are presented in detail. Each method is thoroughly explained, both how it works and how it was implemented, and some methods are also validated with method-specific results.

In Chapter 5 it is explained how the different methods were put together to form the mapping algorithm, the offline SLAM algorithm, and the online SLAM algorithm. It is also discussed how the methods affect each other in the offline SLAM algorithm.


In Chapter 6 the results from the algorithms will be presented and discussed together with different tuning parameters. The results will mainly be maps which are evaluated visually. In Chapter 7 conclusions from the thesis are drawn and further work is presented.

2 Theory

To provide background for the rest of the thesis, some basic theory is presented in this chapter: transformation between coordinate systems, Bayes' rule, Gaussian blur, gradient descent, the Doppler effect, and state space forms.

2.1 Transformation between coordinate systems

When working with several different coordinate systems it is important to understand the relations between them; the definition of each coordinate system sets the relation between each pair. When two coordinate systems differ by a translation and a rotation according to Figure 2.1, the relation is written as

[x'; y'] = [cos φ, −sin φ; sin φ, cos φ] [x; y] + [t_x; t_y]   (2.1)

Figure 2.1: An example of two different coordinate systems related with a translation and a rotation.
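As a concrete illustration, Equation (2.1) can be written as a small Matlab function; this is a minimal sketch with an assumed helper name, not code from the thesis:

    % Hypothetical helper: apply the 2-D transform in Equation (2.1).
    function p_out = transform2d(p_in, phi, t)
        % p_in: point [x; y], phi: rotation angle [rad], t: translation [tx; ty]
        R = [cos(phi) -sin(phi);
             sin(phi)  cos(phi)];   % rotation matrix
        p_out = R * p_in + t;       % rotate first, then translate
    end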


2.2 Bayes' rule

When using radar sensors to create a map it is important to use the measurements in a probabilistic way to get a trustworthy map. One central rule which is used to relate conditional probabilities is Bayes’ rule.

If A and B are two correlated stochastic events, the two conditional probabilities p(A|B) and p(B|A) exist. Bayes' rule gives a relation between these two and can be used to update or revise probabilities based on new evidence. The conditional probability p(A|B) can be calculated using Bayes' rule, as seen in Equation (2.2):

p(A|B) = p(B|A) p(A) / p(B)   (2.2)

where

• p(A) is the prior probability of A.
• p(A|B) is the conditional probability of A given B, also called the posterior probability.
• p(B|A) is the conditional probability of B given A.
• p(B) is the prior probability of B and is used as a normalizing constant.

To derive Bayes' rule the definition of conditional probability is used. The probability of event A given event B is

p(A|B) = p(A ∩ B) / p(B).   (2.3)

The probability of event B given event A is

p(B|A) = p(A ∩ B) / p(A).   (2.4)

Rearranging and combining Equation (2.3) and Equation (2.4) results in

p(A|B) p(B) = p(A ∩ B) = p(B|A) p(A).   (2.5)

Dividing both sides of Equation (2.5) by p(B) gives Bayes' rule:

p(A|B) = p(B|A) p(A) / p(B).   (2.6)

p(B) can also be rewritten as

p(B) = p(A ∩ B) + p(¬A ∩ B) = p(B|A) p(A) + p(B|¬A) p(¬A)   (2.7)

where ¬A is the complementary event of A. By using Equation (2.7) an alternative

form of Bayes' rule can be stated as

p(A|B) = p(B|A) p(A) / [p(B|A) p(A) + p(B|¬A) p(¬A)].   (2.8)

2.3 Gaussian blur

A common way to blur an image is to convolve it with a Gaussian filter, i.e. a filter with a Gaussian impulse response. A Gaussian blur reduces the image's high-frequency components and is therefore a low-pass filter. The 2-D Gaussian impulse response is

g(x, y) = 1/(2πσ²) e^{−(x² + y²)/(2σ²)}   (2.9)

where σ is the standard deviation, x is the distance from the origin along the horizontal axis, and y is the distance from the origin along the vertical axis. When σ increases the image gets more blurred, i.e. the impulse response gets wider with a less distinct top. The continuous Gaussian function must be discretized to be represented in pixels. A distribution with filter size n = 5 and σ = 1 gives the filter mask

(1/273) [1  4  7  4  1;
         4 16 26 16  4;
         7 26 41 26  7;
         4 16 26 16  4;
         1  4  7  4  1]

which would contribute to a blurred image as in Figure 2.2.

Figure 2.2: Comparison between two images, where one is convolved with a Gaussian filter with n = 5 and σ = 1. (a) Original image. (b) Blurred image.
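To make the discretization concrete, here is a minimal Matlab sketch (not from the thesis; it assumes a grayscale image img of class double) that builds the mask above from Equation (2.9) and blurs an image with plain convolution:

    % Build an n-by-n Gaussian mask from Equation (2.9) and normalize it.
    sigma = 1; n = 5;
    [x, y] = meshgrid(-(n-1)/2:(n-1)/2);     % pixel offsets from the center
    g = exp(-(x.^2 + y.^2) / (2*sigma^2));   % unnormalized Gaussian
    g = g / sum(g(:));                       % weights sum to 1 (the 1/273 factor)
    blurred = conv2(img, g, 'same');         % low-pass filter the image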

2.4 Gradient descent

Gradient descent is a first-order iterative optimization algorithm, sometimes called steepest descent or the method of steepest descent. It finds a local minimum by taking steps in the direction of the negative gradient of a function. Gradient ascent works in the same way but finds a local maximum by taking steps in the gradient direction.

A multivariable function f(x) that is defined and differentiable near a point x0 decreases fastest in the direction of the negative gradient −∇f(x). The method starts with an initial guess x0, and a new point is then calculated in the negative gradient direction with a certain step size. The search continues until the point is close enough to the minimum, which is where the gradient is the zero vector. The iterative procedure follows

x_{k+1} = x_k − γ_k ∇f(x_k)   (2.10)

where γ_k is the step size and k = 0, 1, 2, ... until ∇f(x_k) is small enough. The step size is allowed to change in each step. If γ is not constant, the step size can be chosen via a line search that satisfies the Wolfe conditions, which guarantees convergence, Jiao et al. [2011].
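As a minimal Matlab sketch of Equation (2.10) with a fixed step size (the objective function here is a hypothetical example, not from the thesis):

    gradf = @(x) [2*x(1); 4*x(2)];   % gradient of f(x) = x1^2 + 2*x2^2
    x     = [3; -2];                 % initial guess x0
    gamma = 0.1;                     % fixed step size
    while norm(gradf(x)) > 1e-6      % stop when the gradient is small enough
        x = x - gamma*gradf(x);      % step in the negative gradient direction
    end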

2.5 Doppler effect

The Doppler effect, or Doppler shift, is a physical phenomenon where the frequency of a signal changes when the source of the signal moves relative to the observer. If the source is moving away from the observer the frequency decreases, and it increases when the source is moving towards the observer. With the speed c of the emitted wave in the medium, the frequency at the observer is described by Equation (2.11), where f_s is the emitted frequency of the source and Δv the velocity of the observer relative to the source, positive when source and observer move towards each other.

f_o = (1 + Δv/c) f_s   (2.11)

2.5.1 Radar Doppler shift

The radar Doppler shift appears when the emitted radar signal, a radio wave, is reflected off a detected object with a relative speed v_r to the radar. The reflected signal then has a frequency change according to

f_r = [(1 + v_r/c) / (1 − v_r/c)] f_s   (2.12)

where f_s is the emitted frequency.

2.6 State space forms

When working with large systems with many inputs and outputs it is important to have a good structure to keep track of the system dynamics. The dynamics of a system are often very complex but can be simplified by making a model of the system. The model dynamics can be described by the general discrete state space model in Equation (2.13):

x_{k+1} = f(x_k, u_k)
y_k = h(x_k, u_k)   (2.13)

where

• x_k ∈ R^n is the system state at time k.
• x_{k+1} ∈ R^n is the system state at time k+1.
• y_k is the system output.
• u_k is the control signal.
• f is a function describing how the system behaves given x_k and u_k.
• h is a function describing how the system outputs are related to the states and the control signal.

A special case of this model is when the functions in Equation (2.13) are linear. The system dynamics can then be described by the linear state space model in Equation (2.14):

x_{k+1} = A x_k + B u_k
y_k = C x_k + D u_k   (2.14)

where A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n}, and D ∈ R^{p×m}. The linear state space form is simpler to work with but does not always give a good representation of the system dynamics if they are nonlinear.

3 System overview

In this chapter all the different parts of the system are described: the test platform with the hardware in the truck, the geometry and kinematics of the truck, and the test track where tests and data collection have been performed.

3.1 Test platform

The truck used during this thesis is called Astator, which is one of Scania's more high-tech test trucks. Compared to Scania's standard trucks, Astator has four radars, a High Precision Inertial Measurement Unit (IMU) Global Positioning System (GPS) (HPIG) sensor, and more computing power; the sensors are described in Section 3.1.1. With the extra computing power it is possible to run online tests in the truck. A picture of the truck can be seen in Figure 3.1.

3.1.1 Sensors

The truck is equipped with several sensors to locate its position and to sense objects around the vehicle. This section describes the sensors used in the thesis. The most important signals generated by the sensors are presented in Table 3.1.

Radar

The truck is equipped with four Delphi Short Range Radars (SRR), placed around the truck to cover both sides and the rear. The radar detects up to 64 reflected objects at a range of up to 80 m in a span of 150° with a sample time of 50 ms; more specifications are presented in Table 3.2. The output from the radar is the distance to the detection r_det, the angle from the detection to the center of the radar φ_det, and the relative radial speed (range rate) ṙ_det of the detected object.

Figure 3.1: The truck Astator from Scania used in the thesis.

Signal                    | Notation                         | Unit          | Sensor
Truck position            | [X_HPIG,global, Y_HPIG,global]   | [m, m]        | HPIG
Lateral speed             | v_HPIG,y                         | [m/s]         | HPIG
Longitudinal speed        | v_HPIG,x                         | [m/s]         | HPIG
Heading                   | θ_HPIG,head                      | [rad]         | HPIG
Radar detection           | [r_det, ṙ_det, φ_det]            | [m, m/s, rad] | Radar
Lateral acceleration      | a_IMU,y                          | [m/s²]        | IMU
Longitudinal acceleration | a_IMU,x                          | [m/s²]        | IMU
Yaw rate                  | ω_HPIG, ω_IMU                    | [rad/s]       | HPIG, IMU
GPS coordinates           | [X_GPS,global, Y_GPS,global]     | [m, m]        | GPS
Rear axle speed           | v_tacho,x                        | [m/s]         | Tachometer

Table 3.1: Signals from the truck and its sensors.

An illustration of the truck and its radar sensors can be seen in Figure 3.2. The radar has a delay of around 75 ms compared to the Scania standard sensors.

Delphi SRR Mid-Range Specifications

Frequency           | 76-77 GHz
Bandwidth           | 250 MHz
Range               | 0.5-80 m
Field Of View (FOV) | ±75°
Update Rate         | 50 ms

Table 3.2: Mid-range specifications for the Delphi SRR.

Figure 3.2: Illustration of the radar sensors' placement and their FOV.

The Delphi SRR is a Pulse-Doppler radar, which means that it determines the range by measuring the elapsed time from when a pulse of radio energy is emitted to when its reflection is detected. If t_elapsed is the elapsed time and c the speed of light (a radio wave is electromagnetic), then

r_det = t_elapsed c / 2   (3.1)

which is calculated for emitted pulses in the whole FOV, so each detection is associated with an angle φ_det. The Pulse-Doppler radar uses the Doppler effect to calculate the relative radial speed ṙ_det of the detection. If the detected object has a relative radial speed with respect to the radar, the reflected signal will have a shifted frequency caused by the Doppler shift. The frequency shift of the received signal is then

f_received = 2 f_transmitted ṙ_det / c   (3.2)

which can be rewritten as

ṙ_det = f_received c / (2 f_transmitted).   (3.3)
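The two relations can be checked numerically; a minimal Matlab sketch with hypothetical numbers (the thesis does not specify these values):

    c   = 299792458;                   % speed of light [m/s]
    f_t = 76.5e9;                      % transmitted frequency [Hz], mid-band value
    r_dot   = 3.0;                     % relative radial speed [m/s]
    f_shift = 2*f_t*r_dot/c;           % frequency shift of the echo, Equation (3.2)
    r_hat   = f_shift*c/(2*f_t);       % recovered range rate, Equation (3.3)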

The data from the radar is processed in the Delphi hardware before it is received. The processing restricts the number of detections to at most 64 by removing false detections and detections assumed to come from the same object. Still, some false detections appear in the data received from the radar. Some false detections are caused by double bounces. Double bounces occur when the reflected pulse hits the truck instead of being detected by the radar and then bounces back off the same object again. These detections often have a distance of n times the distance of some true detection, where n is an integer. Another cause of noise is when pulses from one radar are detected by another radar. Since the radars emit different frequencies, the interfering detections will have a false range rate, since ṙ_det is calculated with the wrong f_transmitted. This phenomenon is called radar interference.

High precision IMU GPS

The HPIG sensor is equipped with an IMU and a GPS with very high precision, and it is used to get both position and velocity data. The signals received from this sensor are preprocessed in the sensor software, which introduces a small delay compared to other sensors: the sensor has a sample time of 10 ms and the delay is around 30 ms longer than for the Scania standard sensors. The position data has an uncertainty of 0.03 m for the position coordinates, which is very good precision for positioning a moving vehicle. This sensor is very expensive and therefore not considered an option for commercial products, but it is used as a reference sensor and its signals are treated as absolute true values.

Inertial Measurement Unit

The IMU is one of the standard Scania sensors and is equipped with gyroscopes and accelerometers. It measures the yaw rate and the accelerations in the longitudinal and lateral directions. The sample time of the IMU is 20 ms.


GPS

The truck has a GPS to measure the position, and it is a standard Scania sensor. The only signal from the GPS is the position in Universal Transverse Mercator (UTM) coordinates, so no information about the heading is provided. Information about heading could, however, be calculated from consecutive GPS coordinates. One drawback of the GPS is that its sample time is 1 s, compared to 20 ms for the standard sensors. Another drawback is that no raw GPS data is available; the data is heavily preprocessed, which makes it hard to use the signal in a positioning filter since the noise cannot be considered Gaussian. The GPS has a time delay of around 1 s, which is large compared to the other sensors, and the uncertainty in position is up to 5 m.

Tachometer

The tachometer is one of the standard Scania sensors, and it measures how fast the two rear axles of the truck turn. The signal from the tachometer is the speed estimated from the two measured rear axle speeds. There is some uncertainty in the rear axle speed, and compared to the HPIG the result can differ considerably. The sample time of the tachometer is 20 ms.

3.2 Geometry

Different parts of the system are represented in different coordinate systems. There is one global world coordinate system, defined by the road or parking lot where the truck is driving; its origin is usually chosen as the spot where the truck starts. The truck has its own local coordinate system, which is related to the global world system through the position and heading of the truck. The origin of the truck's system is between the two rear axles, in the lateral center of the truck. The radars mounted on the truck have their own coordinate systems, which are related to the truck coordinate system according to the position and rotation of the radar mounts. The coordinate systems are explained in Figure 3.3, where the relations between them are visualized.

It is interesting to know how the polar coordinates [r_det; φ_det] from a radar are transformed into global world coordinates. Since all coordinate systems are right-handed and related to each other by a translation and a rotation, the transformations are quite similar. First the polar coordinates are converted into Cartesian coordinates x_radar,det = [x_radar,det; y_radar,det] according to

x_radar,det = [r_det cos(φ_det); r_det sin(φ_det)]   (3.4)

To transform the detection from the radar's Cartesian coordinates into the truck's coordinates, a translation by the radar's mounting position r_mnt = [r_mnt,x; r_mnt,y] and a rotation by the radar's mounting angle α_mnt are applied to the radar coordinates. The coordinates in the truck's coordinate system x_truck,det = [x_truck,det; y_truck,det] are then given by Equation (3.5).

Figure 3.3: An explanation of the coordinate systems and their relations between each other, together with a radar detection (the dot).

x_truck,det = R_radar x_radar,det + r_mnt   (3.5)

where

R_radar = [cos(α_mnt), −sin(α_mnt); sin(α_mnt), cos(α_mnt)]   (3.6)

To transform the coordinates into the global world system, a translation by the position of the truck X_global,pos = [X_global,pos; Y_global,pos] and a rotation by the heading of the truck θ_head are applied to the truck coordinates according to

X_global,det = R_truck x_truck,det + X_global,pos   (3.7)

where

R_truck = [cos(θ_head), −sin(θ_head); sin(θ_head), cos(θ_head)]   (3.8)
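Chaining Equations (3.4)-(3.8) gives the full transform from a radar detection to global coordinates; a minimal Matlab sketch with an assumed function name, not code from the thesis:

    % Hypothetical helper: place a radar detection in the global system.
    function X_det = radar_to_global(r_det, phi_det, r_mnt, alpha_mnt, X_pos, theta_head)
        p_radar = [r_det*cos(phi_det); r_det*sin(phi_det)];    % (3.4)
        R_radar = [cos(alpha_mnt) -sin(alpha_mnt);
                   sin(alpha_mnt)  cos(alpha_mnt)];            % (3.6)
        p_truck = R_radar*p_radar + r_mnt;                     % (3.5)
        R_truck = [cos(theta_head) -sin(theta_head);
                   sin(theta_head)  cos(theta_head)];          % (3.8)
        X_det   = R_truck*p_truck + X_pos;                     % (3.7)
    end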

3.3 Kinematics

To simplify the truck's motion, it is assumed that the truck drives on a planar surface; no height, roll, or pitch information is taken into account. The 2D motion of the truck is illustrated in Figure 3.4.

Figure 3.4: An illustration of the truck's 2D motion.

Speed and acceleration in the global system

Since the truck's sensors only describe the truck's ego motion, some transformation equations for motion in the global system are needed. In Equation (3.9) and Equation (3.10) the truck's ego motion is used to calculate the truck's speed and acceleration in the global system.

Ẋ_global = v_x cos(θ_head) − v_y sin(θ_head)
Ẏ_global = v_x sin(θ_head) + v_y cos(θ_head)   (3.9)

Ẍ_global = a_x cos(θ_head) − a_y sin(θ_head)
Ÿ_global = a_x sin(θ_head) + a_y cos(θ_head)   (3.10)

The lateral speed and acceleration are often considered small and therefore unnecessary in Equation (3.9) and Equation (3.10) when modeling a truck's motion. The equations are therefore approximated as Equation (3.11) and Equation (3.12).

Ẋ_global = v_x cos(θ_head)
Ẏ_global = v_x sin(θ_head)   (3.11)

Ẍ_global = a_x cos(θ_head)
Ÿ_global = a_x sin(θ_head)   (3.12)

Radar kinematics

Since the radars are mounted on the truck they can be seen as part of the truck, and the kinematic laws for a rigid body can be applied, Hibbeler [2010]. If the truck has an angular velocity ω and a velocity v_O at the origin of the truck, then a point P on the truck at a distance r_P from O has the velocity

v_P = v_O + ω × r_P   (3.13)

where ω only has a z component. The velocity at the point P can then be expressed as

v_P,x = v_O,x − ω r_P,y   (3.14)
v_P,y = v_O,y + ω r_P,x   (3.15)

which can be used to calculate the velocity of each radar.
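A minimal numeric Matlab sketch of Equations (3.14)-(3.15); the mounting position and motion values are hypothetical:

    v_O   = [15.0; 0.2];               % truck velocity at the origin O [m/s]
    omega = 0.05;                      % yaw rate [rad/s]
    r_P   = [3.2; -1.1];               % radar position relative to O [m]
    v_P   = [v_O(1) - omega*r_P(2);    % (3.14)
             v_O(2) + omega*r_P(1)];   % (3.15)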

3.4 Test track

Simulations during this thesis are made from data logged at a parking lot in the Scania facilities in Södertälje, with trucks, buses, and trailers parked. A satellite view of the parking lot is seen in Figure 3.5. Other small objects which can be hard to see in the satellite view, such as light posts, fences, and signs, are also present in the test environment. The picture was not taken the same day as the tests, so some vehicles might be parked differently. As can be seen in Figure 3.5, two areas mark where the data were logged. The logged data used for validations and results in this thesis come from one drive where the truck first drove in Area 1 and then in Area 2.

Area 1

Area 1 lies in the upper left corner of the map and covers a small parking lot with a lot of trucks and buses. Inside this area the radar will be able to detect the same objects many times and the SLAM algorithm will have a lot of map information available which will make it easier to locate the truck in the map.

Area 2

Area 2 covers the middle of the map and continues diagonally down to the lower right corner. Inside this area the truck has to explore unknown terrain where no map information is available. When the truck drives in this kind of environment it is harder for the SLAM algorithm to locate the truck.


Figure 3.5: A satellite view of the parking lot where the tests were made. The areas marked in the picture are two different test cases.

4 Methods

In this chapter all methods used in the mapping and SLAM algorithms are presented. The first part of the chapter introduces the map representation, the statistical updates, ray tracing, and the sensor model. The second part explains the extended Kalman filter with its motion and measurement models, and introduces the range rate filter. The last part goes through the scan match method. Some of the methods are also validated individually.

4.1 Occupancy Grid Map

The OGM is a data structure that represents map information in a probabilistic way. The advantage of using an OGM is that it is easy to comprehend and well suited for probabilistic updates. In an OGM the environment is divided into an evenly spaced grid, where every grid cell c has its own probability of being occupied. The value in each cell, which represents the probability, satisfies 0 ≤ p(c) ≤ 1. In an unexplored area every cell has the probability 0.5 of being occupied; a cell which is occupied gets a value close to 1 and a cell which is unoccupied gets a value close to 0.

An OGM is especially good when the exteroceptive sensors, in this case radars, have a relatively large uncertainty in contrast to the cell size. This means that a measured detection has a variance whose area covers many cells. Since one detection does not give certain enough information, the map needs to be represented in a probabilistic way. The map's structure can be described probabilistically as

p(m(c_{i:j}) | z_{1:t}, x_{truck,1:t}, y_{truck,1:t}, θ_{head,1:t})

where m is a 2-D map represented by a matrix consisting of many cells c_{i:j}, z_{1:t} are all radar measurements from sample 1 to t, and x_{truck,1:t}, y_{truck,1:t}, θ_{head,1:t} are all the truck's poses from sample 1 to t. If all these components are available, the grid cells will converge towards being either occupied or unoccupied and a map is obtained.

One of the many advantages of this method is that the grid's cell size can be chosen so that the map is detailed enough for the environment. In this thesis the environment is traffic related, and the objects which are interesting to map are often quite large, such as trucks, cars, buildings, and light posts. Large objects can be represented by many small grid cells, and the environment becomes more detailed with a smaller cell size. The cell size can however not be too small if a large environment is to be represented; the algorithm then becomes computationally heavy. This thesis only covers mapping in a static environment, so the cells do not need a forgetting factor; a cell should stay the same when the truck has left an area.
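In Matlab an OGM is naturally stored as a matrix; a minimal sketch where the cell size and map extent are assumed values, not the thesis' tuning:

    cell_size = 0.1;                % [m] per cell, assumed value
    map = 0.5*ones(2000, 2000);     % every cell starts at p(c) = 0.5, i.e. unexplored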

4.1.1 Binary Bayes filter

An OGM cannot say much about the surroundings of a vehicle from just one time instance; it has to be built up over time from many measurements that give information about the environment. A method which can be used to update a cell's probability over time is the binary Bayes filter, see Thrun et al. [2005]. The update equation for the binary Bayes filter is given by Equation (4.1):

p(c | z_{1:t}) = p(z_t | c, z_{1:t−1}) p(c | z_{1:t−1}) / p(z_t | z_{1:t−1}) = p(z_t | c) p(c | z_{1:t−1}) / p(z_t | z_{1:t−1})   (4.1)

where

• c is the grid cell being updated.
• z is a radar measurement.
• p(c | z_{1:t}) is the probability that grid cell c is occupied based on all radar information from sample 1 to t.
• p(z_t | z_{1:t−1}) is the probability that the new radar measurement indicates occupancy given all previous radar measurements.
• p(z_t | c) is the probability that the new radar measurement indicates occupancy given the cell's probability of being occupied.

New radar measurements are available in every update; therefore all terms where the probability of z is calculated should be rewritten so that z is instead used to update the probability of c. Bayes' rule, p(z_t | c) = p(c | z_t) p(z_t) / p(c), is applied to Equation (4.1) and Equation (4.2) is obtained:

p(c | z_{1:t}) = p(c | z_t) p(z_t) p(c | z_{1:t−1}) / [p(c) p(z_t | z_{1:t−1})]   (4.2)


To get rid of p(z_t | z_{1:t−1}), the opposite event ¬c is used, and Equation (4.3) can be derived:

p(c | z_{1:t}) / p(¬c | z_{1:t}) = [p(c | z_t) / (1 − p(c | z_t))] · [p(c | z_{1:t−1}) / (1 − p(c | z_{1:t−1}))] · [(1 − p(c)) / p(c)]   (4.3)

To get an even simpler form, the logarithmic laws are used to obtain Equation (4.4):

l_t(c) = log[p(c | z_t) / (1 − p(c | z_t))] + log[p(c | z_{1:t−1}) / (1 − p(c | z_{1:t−1}))] + log[(1 − p(c)) / p(c)]
       = log[p(c | z_t) / (1 − p(c | z_t))] − log[p(c) / (1 − p(c))] + l_{t−1}(c)   (4.4)

To get p(c | z_{1:t}) from Equation (4.4), Equation (4.5) is applied:

p(c | z_{1:t}) = 1 − 1 / (1 + e^{l_t(c)})   (4.5)

This result is used to update every grid cell independently whenever new information about a cell is available. This means that m(c_{n:m}) can be split up into many separate cells c_{n,m}, and only the ones with new information available need to be updated. If a cell should get a higher probability of being occupied, p(c | z_t) should satisfy 0.5 < p(c | z_t) ≤ 1, where 0.5 gives no new information about the grid cell and 1 corresponds to the cell being occupied without a doubt. Vice versa for an update where 0 ≤ p(c | z_t) < 0.5. This is illustrated in Figure 4.1.

Figure 4.1: Two update steps with a binary Bayes filter for four different cells. Black represents unoccupied and white represents occupied.
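A minimal Matlab sketch of Equations (4.4)-(4.5) for a single cell, keeping the cell in log-odds form (the function name is assumed; p_meas is the inverse sensor model probability p(c | z_t)):

    % Hypothetical helper: one binary Bayes update of a cell in log-odds form.
    function [L_new, p_new] = bayes_update(L_old, p_meas)
        p0    = 0.5;                              % prior p(c), uninformative
        L_new = log(p_meas/(1 - p_meas)) ...      % measurement term
              - log(p0/(1 - p0)) + L_old;         % prior term (zero here) + recursion, (4.4)
        p_new = 1 - 1/(1 + exp(L_new));           % back to probability, (4.5)
    end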


4.2 Ray Tracing

To extract more information from a radar measurement, ray tracing can be used, see Elfes [1989]. The idea of ray tracing is to use not only the information about where the detection is, but also the ray's direction to see where it has passed, which should be unoccupied. This is an extremely simplified model of a radar pulse, but even so it works very well for this kind of application. An illustration of how the rays go from the radars to the detections can be seen in Figure 4.2.

Figure 4.2: An illustration of ray tracing from the radars on the truck to their detections.

One detection has an angle and a distance relative to the truck according to Figure 3.3 in Section 3.2. The ray from the radar to the detection gives new information to each grid cell the ray passes through. The radial distances r_grid = [0, ..., r_det] along a ray in a radar's coordinate system are translated by Equation (4.6), which calculates the coordinates X_grids and Y_grids in the global coordinate system for all the cells the ray has passed through. Figure 4.3 illustrates how the grid cells are passed by a ray from the radar.

X_grids = x_radar + r_grid cos(α_radar + φ_det)
Y_grids = y_radar + r_grid sin(α_radar + φ_det)   (4.6)

Figure 4.3: An example of how ray tracing can be used to add information to an OGM. All cells in between the radar and an object will be updated as unoccupied cells; vice versa for cells close to an object.
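A minimal Matlab sketch of the ray update: sample cells along the ray with Equation (4.6), mark the traversed cells as slightly free, and mark the end cell as occupied. The radar pose (x_radar, y_radar, alpha_radar), a map origin at (0,0), the illustrative probabilities 0.45/0.60, and the log-odds map L are all assumptions of this sketch; bayes_update is the helper sketched in Section 4.1.1:

    r_grid = 0:cell_size:r_det;                        % radial samples along the ray
    xs = x_radar + r_grid*cos(alpha_radar + phi_det);  % Equation (4.6)
    ys = y_radar + r_grid*sin(alpha_radar + phi_det);
    ci = round(xs/cell_size) + 1;                      % column indices in the grid
    ri = round(ys/cell_size) + 1;                      % row indices in the grid
    for k = 1:numel(r_grid)-1
        L(ri(k), ci(k)) = bayes_update(L(ri(k), ci(k)), 0.45);     % passed-through: free
    end
    L(ri(end), ci(end)) = bayes_update(L(ri(end), ci(end)), 0.60); % hit: occupied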

4.2.1 Sensor model

When the binary Bayes filter is used together with ray tracing it is important to add the new information about occupancy in a correct manner. This is done by using a probability distribution along every ray. As mentioned in Section 4.1.1, it is important to make every cell probability converge towards being occupied or unoccupied. There are many different ways to represent the probability a ray should have along its path. An ideal ray probability for a 60 m measurement can be seen in Figure 4.4.

Figure 4.4: Ideal probability for a 60 m measurement.

The ideal ray will not work well as an input to the binary Bayes filter since the probabilities along the ray are too close to the max/min values. When this is the case the grid cells will not converge towards 0 or 1; instead each cell takes the same value as the input, which is either 0 or 1. A ray is modeled from a radar measurement, and all radar measurements have uncertainty in both distance and angle. Therefore the ray should theoretically be modeled with uncertainties in both distance and angle, Elfes [1989]. A ray with these properties can be described by a normal distribution. The probability density function for a 2-D normal distribution can be seen in Equation (4.7):

P(R | µ_r, µ_φdet) = [1 / (2π σ_r σ_φdet)] e^{−[(r − µ_r)² / (2σ_r²) + (φ − µ_φdet)² / (2σ_φdet²)]}   (4.7)

where

• R is the radar measurement.
• µ_r is the mean of the ray's distance.
• µ_φdet is the mean of the ray's angle.
• σ_r is the standard deviation of the ray's distance.
• σ_φdet is the standard deviation of the ray's angle.

Sometimes, however, a simpler sensor model can be better, especially in terms of computing power. It is then important that the simplification still gives a good enough representation of the system. An example of a ray probability where only the uncertainty in distance is applied can be seen in Figure 4.5.

Figure 4.5: Example of a probability for a 60 m measurement with only uncertainty in distance.

The probability in Figure 4.5 has a short distance span where the detection uncertainty is taken into account; it also does not give as strong indications of occupancy as the ideal ray probability in Figure 4.4 and Equation (4.7). Instead these probabilities are used in the binary Bayes filter: many updates indicating occupancy in one cell will make it converge towards the value 1, and vice versa for many indications of unoccupancy. Therefore the probabilities should not consist of too large or too small values. This makes the map more resistant to radar interference and faulty detections. If the probability along all rays looked like the ideal probability, a binary Bayes filter update would give either the value 0 or 1, which is not desired. An example of an OGM updated with a binary Bayes filter with inputs from ray tracing with uncertainty in distance can be seen in Figure 4.6, where two OGMs are compared. It shows how the map converges from one update, Figure 4.6a, into a map with clear objects when several updates are put together, Figure 4.6b. Different sensor models and other results from mapping with the HPIG are discussed and shown in Chapter 6.
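The distance-only profile of Figure 4.5 can be sketched in a few lines of Matlab; the constants below are read off the figure's rough shape and are not the thesis' tuned values:

    r = 0:0.1:80;                     % cells along an 80 m ray [m]
    r_det = 60; sigma_r = 1.5;        % detection distance and assumed std. deviation
    p = 0.5 - 0.02*(r < r_det - 2*sigma_r) ...          % slightly free before the hit
      + 0.12*exp(-(r - r_det).^2/(2*sigma_r^2));        % occupancy bump at the hit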

Figure 4.6: OGM updated with one measurement from all radars compared to one after 400 samples. (a) OGM after 1 sample. (b) OGM after 400 samples; the blue line is the trajectory of the truck's position. In the OGM white and black represent 0 respectively 1 in probability of being occupied. Orange represents 0.5 in probability.

4.3 Extended Kalman Filter

To get a fully working SLAM algorithm, a good estimate of the truck's pose is needed. The sensors presented in Section 3.1.1 are used to get a good estimate of the truck's motion and pose. When using information from many sensors to get an optimal estimate, the linear Kalman filter is usually applied. For the linear Kalman filter to give an optimal estimate of the system, two criteria have to be fulfilled: the state dynamics have to be linear, see Section 2.6, and all system noise has to be Gaussian. No verification of the system noise has been made, but it is assumed to be Gaussian. However, the state dynamics are nonlinear, and therefore the linear Kalman filter cannot be applied. In Equation (4.8) the nonlinear state dynamics are represented by a state space model.

x_{k+1} = f(x_k, u_k, w_k)
y_k = h(x_k, u_k, e_k)   (4.8)

where w_k is the Gaussian process noise and e_k is the measurement noise. A common way to make the nonlinear system work in the Kalman filter is to linearize the state dynamics around the current state using a first-order Taylor expansion. The result of the combined methods is the extended Kalman filter (EKF). A general algorithm for the EKF can be seen in Algorithm 4.1.

Algorithm 4.1 EKF loop

Derive the Jacobian of h:
  H_k = ∂h/∂x evaluated at x̂_{k|k−1}
Measurement update:
  K_k = P_{k|k−1} H_k^T (H_k P_{k|k−1} H_k^T + R_k)^{−1}
  x̂_{k|k} = x̂_{k|k−1} + K_k (y_k − H_k x̂_{k|k−1} − D_k u_k)
  P_{k|k} = P_{k|k−1} − K_k H_k P_{k|k−1}
Derive the Jacobian of f:
  F_k = ∂f/∂x evaluated at x̂_{k|k}, u_k
Time update:
  x̂_{k+1|k} = f(x̂_{k|k}, û_k)
  P_{k+1|k} = F_k P_{k|k} F_k^T + Q_k

where

• x̂_k is the estimated state vector.
• y_k is the output, in this case the measurements.
• u_k is the control signal; since the system does not have any control input, u_k = 0.
• f is a function describing how the system behaves based on the estimated states.
• F_k is the Jacobian of f.
• h is a function describing how the states are correlated to the measurements.
• H_k is the Jacobian of h.
• K_k is the Kalman gain.
• D_k is the control matrix.
• P_k is the state estimate covariance matrix.
• R_k is the measurement covariance.
• Q_k is the process noise covariance.

As can be seen in Algorithm 4.1, the EKF can be divided into two different update steps: the measurement update, where all new measurements are taken in and weighted, and the time update, where future states are predicted from a model of how the system behaves.

There is a trade-off between these two updates, where different measurements and estimates can be trusted more or less. For example, the filter should trust a sensor with very accurate measurements, but if the sensor's values are very noisy, the predicted value from a motion model of the truck might give a better representation of the system. In Table 4.1 all components of the state vector are explained.
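One iteration of Algorithm 4.1 in Matlab; a minimal sketch where f, h and their Jacobians jac_f, jac_h are supplied as function handles and the control input is zero, as in this system:

    % Hypothetical helper: one EKF iteration (measurement update + time update).
    function [x, P] = ekf_step(x, P, y, f, h, jac_f, jac_h, Q, R)
        H = jac_h(x);                      % Jacobian of h at the prediction
        K = P*H' / (H*P*H' + R);           % Kalman gain
        x = x + K*(y - h(x));              % correct the state with the innovation
        P = P - K*H*P;                     % measurement-updated covariance
        F = jac_f(x);                      % Jacobian of f at the new estimate
        x = f(x);                          % predict the next state
        P = F*P*F' + Q;                    % propagate the covariance
    end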

State                              | Notation | Measured | Explanation
X position                         | X_global | Yes      | The truck's X position in the global map.
Y position                         | Y_global | Yes      | The truck's Y position in the global map.
Heading                            | θ_head   | No       | Heading of the truck in the global map.
Longitudinal speed                 | v_x      | Yes      | The truck's speed in the longitudinal direction.
Lateral speed                      | v_y      | No       | The truck's speed in the lateral direction.
Longitudinal acceleration          | a_x      | Yes      | The truck's acceleration in the longitudinal direction.
Lateral acceleration               | a_y      | Yes      | The truck's acceleration in the lateral direction.
Yaw rate                           | ω        | Yes      | The truck's yaw rate around the rear axis.
Bias for longitudinal acceleration | b_ax     | No       | The offset error of the measured longitudinal acceleration.
Bias for lateral acceleration      | b_ay     | No       | The offset error of the measured lateral acceleration.
Bias for yaw rate                  | b_ω      | No       | The offset error of the measured yaw rate.
Gain for yaw rate                  | g_ω      | No       | The scalar error of the measured yaw rate.

Table 4.1: All states in the implemented EKF.

4.3.1 Measurement model

As mentioned in Section 4.3, the EKF has two main parts, one of which is the measurement update, where all useful information from the sensors is taken in and weighted. The implemented EKF uses the following measurement model:

y_k = [X_GPS,global; Y_GPS,global; v_tacho,x; a_IMU,x − b̂_ax; a_IMU,y − b̂_ay; ω_IMU ĝ_ω − b̂_ω]   (4.9)


A sensor never gives a perfect representation of the measured value, and many sensors' measurements are far from correct. The measurement model is used to modify the measurements to fit better with reality. In some cases an error might not result in any major fault, but sensor values which are integrated over time, such as speed, acceleration, and yaw rate, might result in a large error over time even if the instantaneous error is very small. It is therefore important to understand how these measurements can be used to get more realistic values. Common sensor errors can be represented by a bias and/or a gain. Bias and gain can both be estimated in the EKF and then be used to correct the sensor measurements. To be able to estimate these values, information from other sensors is needed; without it there are no reference measurements to compare with. How bias and gain are used to improve the sensor measurements is seen in Equation (4.9).

Measurement noise

If the EKF is to work well the measurement noise needs to be represented in a realistic way. The measurement noise vector is defined in Equation 4.10.

e_k =
\begin{bmatrix}
e_{X_{GPS}} \\
e_{Y_{GPS}} \\
e_{v_{tacho,x}} \\
e_{a_{IMU,x}} \\
e_{a_{IMU,y}} \\
e_{\omega_{IMU}}
\end{bmatrix}. \quad (4.10)

In the EKF algorithm the measurement noise is taken into account by tuning the Rk matrix. The Rk component for a sensor measurement tells how much the EKF should trust that measurement. For example, if a sensor gives values which are not trustworthy, the Rk matrix is tuned so that the measurements from this sensor have a small influence in the EKF algorithm.
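In practice Rk is often chosen diagonal, with one variance per channel of the measurement noise vector in Equation 4.10. The numbers below are placeholders to show the structure, not the tuning used in the thesis.

```python
import numpy as np

# Placeholder standard deviations per measurement channel (Equation 4.10):
# GPS position, tachometer speed, IMU accelerations, IMU yaw rate.
# A noisy channel gets a large variance and hence little influence
# in the measurement update.
R = np.diag(np.array([5.0, 5.0, 0.1, 0.5, 0.5, 0.01]) ** 2)
```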

4.3.2 Motion model

To get a good prediction of the truck's pose in the time update a well suited motion model is necessary. A motion model often gives a good prediction of the near future but is not so useful for longer predictions. This is not so odd, since the sensor measurements do not change much over a short period of time.

A motion model which describes a vehicle's movement well under these conditions is the Constant Turn Rate and Acceleration (CTRA) model, Schubert et al. [2008]. In contrast to Schubert et al. [2008], the CTRA model has been simplified, and lateral velocity, lateral acceleration, biases for the accelerations, a bias for the turn rate, and a gain for the turn rate have been added to the model. The motion model for the system can be seen in Equation 4.11, where T is the sample time.

\hat{x}_{k+1} =
\begin{bmatrix}
\hat{X}_{global} + T\,\hat{v}_x\cos(\hat{\theta}_{head}) + \frac{T^2}{2}(\hat{a}_x - \hat{b}_{ax})\cos(\hat{\theta}_{head}) \\
\hat{Y}_{global} + T\,\hat{v}_x\sin(\hat{\theta}_{head}) + \frac{T^2}{2}(\hat{a}_x - \hat{b}_{ax})\sin(\hat{\theta}_{head}) \\
\hat{\theta}_{head} + T(\hat{g}_{\omega}\hat{\omega} - \hat{b}_{\omega}) \\
\hat{v}_x + T(\hat{a}_x - \hat{b}_{ax}) \\
\hat{v}_y + T(\hat{a}_y - \hat{b}_{ay}) \\
\hat{a}_x - \hat{b}_{ax} \\
\hat{a}_y - \hat{b}_{ay} \\
\hat{g}_{\omega}\hat{\omega} - \hat{b}_{\omega} \\
\hat{b}_{ax} \\
\hat{b}_{ay} \\
\hat{b}_{\omega} \\
\hat{g}_{\omega}
\end{bmatrix} \quad (4.11)

The CTRA model assumes that both the accelerations and the yaw rate are constant from the current sample to the next. States directly related to these quantities are predicted according to this assumption. For example, the predicted θ_head is the current value plus the integrated ω value. This can be seen in Equation 4.12, where the relation between ω and θ_head is given for a discrete-time system with no correction of bias and gain.

\begin{bmatrix} \omega_{k+1} \\ \theta_{head,k+1} \end{bmatrix} =
\begin{bmatrix} \omega \\ \theta_{head} + T\,\omega \end{bmatrix} \quad (4.12)

Process noise

When using different states to predict the new pose, process noise will have a large effect on the estimation. In the CTRA model the accelerations and the yaw rate will have a large effect on the lower-order derivatives, Gustafsson [2012]. Since bias and gain affect the acceleration and turn rate, they are also included in the process noise. The process noise vector has therefore been chosen as

w_k =
\begin{bmatrix}
w_{ax} \\
w_{ay} \\
w_{b_{ax}} \\
w_{b_{ay}} \\
w_{b_{\omega}} \\
w_{g_{\omega}}
\end{bmatrix}. \quad (4.13)

In the EKF algorithm the process noise is taken into account by tuning the Qk matrix. The values in Qk tell how much the process noise affects the prediction. If the tuned values are a bad representation of the truth, the EKF will weight the predictions according to this representation and the estimates might become poor.
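A sketch of how the process noise vector in Equation 4.13 can be mapped onto the full state covariance; only the driven states get non-zero diagonal entries, and the numbers are placeholders, not the thesis tuning. The index constants are the ones introduced after Table 4.1.

```python
import numpy as np

# Placeholder standard deviations for w_k (Equation 4.13): accelerations,
# acceleration biases, yaw rate bias, and yaw rate gain.
std = {AX: 0.5, AY: 0.5, B_AX: 1e-3, B_AY: 1e-3, B_OMEGA: 1e-4, G_OMEGA: 1e-5}

# Build a diagonal Q over the 12 states; undriven states get zero noise.
Q = np.zeros((12, 12))
for idx, s in std.items():
    Q[idx, idx] = s ** 2
```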



4.3.3 Validation

To test the EKF algorithm the logged data from the test track, see Section 3.4, has been used. To validate the result from this test the HPIG sensor has been used as a reference. There are many factors which affect the overall performance of the EKF, but the validation focuses on a comparison between the pose estimation and the HPIG pose. Since the estimated yaw rate affects the heading directly, it is also compared with the HPIG.
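One practical detail when comparing headings: if the estimate and the reference are wrapped to different intervals, a naive difference shows 2π jumps that are not real drift. A minimal helper (our own, not from the thesis) that wraps the error to [-π, π):

```python
import numpy as np

def heading_error(theta_est, theta_ref):
    """Smallest signed angle between estimate and reference, in [-pi, pi)."""
    return (theta_est - theta_ref + np.pi) % (2 * np.pi) - np.pi
```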

[Figure 4.7: The result from the estimated heading compared to the HPIG. (a) Estimated heading (blue) and the HPIG heading (green), Time [s] vs. Heading [rad]. (b) The difference between estimated heading and HPIG heading, Time [s] vs. ε_head [rad].]

In Figure 4.7 the heading values from the EKF and the HPIG are compared, which shows how the heading state drifts away from the HPIG heading. The trajectories from the EKF and the HPIG sensor can be seen in Figure 4.8, which shows the same behaviour as Figure 4.7: an initially good position that drifts over time.


[Figure 4.8: Estimated trajectory from the EKF (blue) and the HPIG trajectory (green), X_global [1 dm] vs. Y_global [1 dm].]

As shown in Equation 4.9 the heading is not a part of the measurement model; it is therefore interesting to also look at the estimated yaw rate, since it plays a big part in the estimated heading. The yaw rate is integrated into a heading over time, and drift in heading is therefore most likely caused by bias or gain errors in the yaw rate estimation. In Figure 4.9 the yaw rate values from the EKF and the HPIG are compared. How accurate the yaw rate is also depends on the estimates of b_ω and g_ω, but since the only measurement information about the heading comes from the yaw rate measurements, it is hard to estimate and evaluate these values. As the results show, the EKF does not work perfectly, and if the EKF alone were used for localization the algorithm would not work well. It is therefore important to note that the main purpose of the EKF is to give a good initial pose to the scan match, see Section 4.5, which can, if it works well, compensate for small drifts in both heading and position. As mentioned before there are many factors which affect the EKF estimates. To get a better understanding of the EKF results, more figures of the EKF estimates can be found in Appendix A.


[Figure 4.9: The result from the estimated yaw rate compared to the HPIG. (a) Estimated yaw rate (blue) and the HPIG yaw rate (green), Time [s] vs. Yaw rate [rad/s]. (b) The difference between estimated yaw rate and HPIG yaw rate, Time [s] vs. ε_yaw rate [rad/s].]



4.4 Radar range rate filtering

The radar detections contain several false detections, caused by interference from other radars on the truck and by general noise, together with detections from moving, non-static objects; each radar scan must therefore be filtered. Since the radar measures the range rate, it is possible to compare the measured range rate to an estimated range rate.

To estimate the range rate of a static object one must calculate the radial speed of the detection relative to the radar. A static object that is detected by a radar with speed vradar, visualized in Figure 4.10, will then have a ṙdet according to Equation 4.14, where k = 1, . . . , 64 is the index over all detections.

\dot{r}_{det} = -v_{radar}\cos(\varphi_{det,k} + \alpha_{mount} - \psi) \quad (4.14)

[Figure 4.10: One detection with range rate ṙ_det relative to the radar with the speed v_radar; the sketch shows the radar axes x_radar and y_radar, the mounting angle α_mnt, the velocity direction ψ, and the detection quantities r_det and φ_det.]

Since the truck is assumed to move as a rigid body in 2-D, the x and y components, i.e. the longitudinal and lateral components, of vradar are

v_{x,radar} = v_{x,truck} - r_{y,radar}\,\omega \quad (4.15)
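Putting Equations 4.14 and 4.15 together, the following is a sketch of the per-scan gating. The lateral component v_y,radar = v_y,truck + r_x,radar·ω is assumed here from the same rigid-body kinematics, and the threshold is an assumed tuning parameter, not a value from the thesis.

```python
import numpy as np

def filter_static_detections(det_r_rate, det_phi, v_x_truck, v_y_truck, omega,
                             r_x_radar, r_y_radar, alpha_mount, threshold=0.5):
    """Keep detections whose measured range rate matches a static object.

    det_r_rate, det_phi : measured range rates and azimuths (arrays, k = 1..64)
    v_*_truck, omega    : truck velocity components and yaw rate
    r_*_radar           : radar mounting position relative to the rear axis
    alpha_mount         : radar mounting angle
    threshold           : assumed gate [m/s] on the range rate residual
    """
    # Rigid-body velocity of the radar itself (Equation 4.15 and its
    # assumed lateral counterpart).
    v_x_radar = v_x_truck - r_y_radar * omega
    v_y_radar = v_y_truck + r_x_radar * omega
    v_radar = np.hypot(v_x_radar, v_y_radar)
    psi = np.arctan2(v_y_radar, v_x_radar)  # direction of v_radar

    # Expected range rate of a static object (Equation 4.14).
    r_rate_static = -v_radar * np.cos(det_phi + alpha_mount - psi)

    # Gate: discard detections that move relative to the ground.
    keep = np.abs(det_r_rate - r_rate_static) < threshold
    return keep
```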
