STOCKHOLM, SWEDEN 2020
Map-aided localization for autonomous driving using a particle filter
KTH Thesis Report
Simon Eriksson (sime@kth.se)
Department of Intelligent Systems, KTH Royal Institute of Technology
Stockholm, Sweden
Examiner: Patric Jensfelt, KTH Royal Institute of Technology
Supervisor: John Folkesson, KTH Royal Institute of Technology
Abstract
Vehicles losing their GPS signal is a considerable issue for autonomous vehicles and can be a danger to people in their vicinity. To circumvent this issue, a particle filter localization technique using pre-generated offline Open Street Map (OSM) maps was investigated in a software simulation of Scania's heavy-duty trucks. The localization technique runs in real time and provides a way to localize the vehicle safely if the starting position is known. Although access to global localization was limited, the particle filter still succeeded in localizing the vehicle in the vicinity of the correct road segment by creating a graph of the map information and matching the trajectory to the vehicle's sensor data. The mean error of the particle filter localization technique in optimal conditions is 16 m, which is 20% less than that of an optimally tuned dead reckoning solution.
The mean error is about 50% larger than that of a Global Positioning System. The final product shows potential for expansion but requires more investigation before real-world deployment.
Keywords
Master thesis, Particle filter, Monte Carlo filter, Gaussian noise, Bayesian estimation,
probabilistic localization, autonomous vehicles, Scania, OpenStreetMap
Swedish abstract (translated)
That vehicles can lose their GPS signal is a major problem for autonomous vehicles and can be a danger to people in their vicinity. To avoid this problem, a non-global localization technique is proposed that uses Open Street Map (OSM) maps and a particle filter to localize the vehicle in a software simulation. The implementation runs in real time and reports the vehicle's position accurately enough to pose no danger, provided its starting position is known. Global localization capabilities were limited, and the particle filter succeeded in localizing the vehicle to the correct road segment by constructing a graph of the loaded map information and matching the vehicle's current trajectory against it. The result is a solution with an optimal mean error of 16 m, which is 20% less than the mean error of optimized dead reckoning.
The solution has about 50% larger mean error than GPS positioning. The final result shows potential for use in real situations but requires more investigation.
Keywords
Master thesis, Particle filter, Monte Carlo filter, Gaussian noise, Bayesian estimation, probabilistic localization, autonomous vehicles, Scania, OpenStreetMap
I'd like to thank my supervisor John Folkesson for inspiring me to do this project and guiding me through it, as well as my examiner Patric Jensfelt for his comments. They both take on an astounding amount of work for us students, which I'm grateful for.
I'd also like to thank the Localization (EADL) team at Scania for giving me their time, helping me with my requests and entrusting me with this task. Finally, I'd like to thank my family for supporting me and understanding my need to spend a lot of time on this.
Abbreviations
PDF Probability Distribution Function
MSE Mean Square Error
OSM Open Street Map
PF Particle Filter
KF Kalman Filter
DR Dead Reckoning
SLAM Simultaneous Localization and Mapping
IMU Inertial Measurement Unit
OOB Out of Bounds
GPS Global Positioning System
RSA Road Segment Association
RS Road Snap
1 Introduction 1
1.1 Explicit problem specification . . . . 3
1.2 Purpose of project . . . . 3
1.3 Goal of project . . . . 3
1.4 Research questions . . . . 4
1.5 Ethics and Sustainability . . . . 4
1.6 Societal aspects . . . . 4
1.7 Delimitations . . . . 5
1.8 Outline . . . . 5
2 Background 6
2.1 The problem of localization . . . . 6
2.2 Basics of localization . . . . 6
2.3 Estimating uncertainties . . . . 7
2.3.1 Gaussian noise . . . . 7
2.3.2 Bayesian inference . . . . 7
2.3.3 Estimation algorithm: Kalman filter . . . . 8
2.3.4 Estimation algorithm: Particle filter . . . . 8
2.4 Sensing environments . . . . 9
2.4.1 Feature matching . . . . 10
2.4.2 Trajectory matching . . . . 10
2.4.3 Issues with matching algorithms . . . . 11
2.5 Related work . . . . 11
3 Method choice 15
3.1 Background of a particle filter . . . . 15
3.1.1 State definition . . . . 15
3.1.2 Initialization . . . . 15
3.1.3 Prediction . . . . 16
3.1.4 Weighting . . . . 16
3.1.5 Resampling . . . . 17
3.2 Implementation and algorithms . . . . 17
3.2.1 Basic assumptions . . . . 17
3.2.2 Initialization . . . . 18
3.2.3 Sensor data . . . . 19
3.2.4 Road segment association . . . . 20
3.2.5 State prediction . . . . 21
3.2.6 Weighting . . . . 22
3.2.7 Resampling . . . . 27
3.2.8 Particle count . . . . 28
3.2.9 Particle deprivation . . . . 28
3.2.10 Particle dispersion . . . . 29
3.2.11 Map projection . . . . 29
3.2.12 Final location estimation . . . . 30
4 Implementation of work 31
4.1 Final product . . . . 32
5 Results 34
5.1 Baseline . . . . 34
5.2 Algorithms . . . . 35
5.3 Road snap . . . . 35
5.4 Test cases . . . . 36
5.4.1 Test case 1: Long track with interchangeable map details . . . . 36
5.4.2 Test case 2: Long track with interchangeable map details and random particle initialization . . . . 36
5.4.3 Test case 3: Long road with unique map details . . . . 39
5.4.4 Test case 4: Truck at standstill . . . . 40
5.5 Error values . . . . 40
6 Discussion 45
6.1 Dead reckoning . . . . 45
6.2 Particle Filter (PF) precision using weighting . . . . 46
6.3 PF precision without weighting . . . . 46
6.4 Overall PF performance . . . . 46
6.5 Road Snap localization smoothing . . . . 49
6.6 Multi-modal properties . . . . 49
6.7 Comparison to expectations . . . . 49
6.8 Comparison to Kalman Filter . . . . 50
6.9 Limits of the PF and OSM trajectory matching . . . . 50
7 Conclusions 52
7.1 Future Work . . . . 53
7.1.1 Fully utilizing and optimizing OSM data . . . . 53
7.1.2 Weighting with more advanced sensors . . . . 54
7.1.3 Improving dead reckoning sensor drift and noise models . . . . 54
7.2 Final Words . . . . 54
References 56
Introduction
The field of autonomous vehicles faces several major challenges. One of them is localization: the problem of determining where in a given world space an agent is located. While it sounds like a simple task, localization can never be done perfectly;
it can only be estimated by processing whatever data the agent's sensors record [8] [23]. To make the estimate as accurate as possible, localization algorithms are implemented that attempt to compute the agent's most probable position.
When it comes to autonomous vehicles, localization is of great importance for making safe decisions that don't endanger a vehicle's surroundings. Smart vehicles already have several ways in which their internal software control systems can be disrupted and exploited by external actors [2] [9], but even if one designed a vehicle with software that can't be exploited, external factors that alter the perceived location of the vehicle can cause it to make unwanted decisions. Most of the time, vehicles have access to a global localization technique such as the Global Positioning System (GPS) [3], which estimates the global position of the vehicle by trilateration of unencrypted satellite signals. Such a system has considerable drawbacks, however: if the signal is ever lost, it becomes very hard to accurately estimate the current position, and using the public GPS system, as most vehicles do, opens up vulnerabilities [12] that could severely endanger the vehicle's surroundings. In some cases, the GPS signal can be lost simply due to nearby "urban canyons" or complete GPS blackouts [14].
Figure 1.0.1: A display of how an OSM map can look. The light-blue roads are sequences of OSM Nodes, strung together into a "Way", ordered in one direction. The yellow irregular spots are open areas.
To overcome this issue, a localization technique utilizing map-aided localization is proposed. By estimating the current trajectory and matching that to an offline-generated map, a localization independent from external influences can in theory be achieved. In addition, the technique should be as dynamically applicable as possible, meaning it should not depend on rare or expensive sensors. The map format should also be easy to handle and access, so the OSM map format is a natural choice [24]. OSM maps have two main components that will be used: Nodes and Ways. Nodes are points on the map that may have additional features attached to them, such as speed limit and elevation. Ways describe how those nodes are strung together into a road. There are also "open areas", but those contain no navigable points of interest. Pure dead reckoning could be used in open areas, but if the area is large, an error would quickly accumulate. Using the GPS in those areas would be optimal, but since GPS is not to be used in this project, such areas are excluded from the project scope.
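To make the Node/Way structure concrete, the following is a minimal sketch of reading both components from an OSM XML file using only Python's standard library. The file path, the returned data layout, and the filtering are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch: read OSM Nodes and Ways with Python's standard library.
# The function name and return format are illustrative, not from the thesis.
import xml.etree.ElementTree as ET

def load_osm(path):
    """Return node coordinates and the ordered node lists of each way."""
    root = ET.parse(path).getroot()
    # Nodes: id -> (lat, lon); extra tags (speed limit etc.) are ignored here.
    nodes = {n.get("id"): (float(n.get("lat")), float(n.get("lon")))
             for n in root.iter("node")}
    # Ways: id -> ordered list of node references (the direction of travel).
    ways = {}
    for way in root.iter("way"):
        refs = [nd.get("ref") for nd in way.iter("nd")]
        # keep only ways whose nodes we actually have coordinates for
        if refs and all(r in nodes for r in refs):
            ways[way.get("id")] = refs
    return nodes, ways
```

Because a Way stores its node references in order, the same structure later gives the flow direction used by the trajectory matching.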
Choosing a localization algorithm for the task at hand can be difficult, as all algorithms
have their advantages and disadvantages, but two algorithms in particular are regarded
as the most useful localization algorithms: The Kalman Filter and the Particle Filter.
The most common to encounter today is the Kalman Filter, which estimates the single most optimal location given sufficient information to update the data. The Kalman Filter does, however, have a few structural flaws, and to address those, a localization algorithm with a completely different approach needs to be implemented.
In particular, the ability to represent multimodal distributions and non-Gaussian noise models could be of great advantage. This thesis focuses on a localization approach that grants both of those abilities: the particle filter. A particle filter simulates several potential locations instead of one and infers the most probable state from how likely those locations are. The particle filter is described in more detail in Section 2.3.4.
1.1 Explicit problem specification
Given a map of roads and recorded sensor data from a truck, construct a probabilistic localization algorithm suitable for autonomous vehicles. The algorithm should be able to complement an already implemented localization technique using a GPS with a Kalman Filter by covering for the GPS when it becomes unavailable or inaccurate.
Evaluate how well the particle filter compares to the existing Kalman Filter solution as well as pure dead reckoning.
1.2 Purpose of project
This project was requested by Scania Sweden with the purpose of investigating a probabilistic localization approach using map information, more specifically a particle filter-based option, for their autonomous trucks. It's intended to be usable when global localization such as GPS is not available, navigating using OSM maps instead.
1.3 Goal of project
The goal is a robust PF solution that accurately estimates the vehicle's position on a map. The solution should be usable within Scania's existing software system and be optimized to run in real time.
1.4 Research questions
âą Is it possible to achieve robust localization for autonomous trucks using a particle filter and two-dimensional OSM maps?
âą How does the accuracy and robustness compare to a global localization implementation using a Kalman Filter?
1.5 Ethics and Sustainability
To assess any sustainability impact of this thesis, one would have to investigate the potential future uses of this technology, which are nearly limitless.
The nature of the particle filter algorithm means it can be deployed on just about any autonomous agent with sufficient computational capability. For the intended usage, this algorithm will improve the localization of autonomous trucks.
Compared to a case with worse localization, better localization can mean less fuel is used, which yields small sustainability gains. One can also note that autonomous vehicles are almost exclusively electrified, so if the localization increases the advantage of an autonomous vehicle over a gas-powered truck, it contributes to the more environmentally friendly electrified option being selected. Scania itself is also pushing for sustainability [22], which means the company utilizing these findings has elaborate plans for sustainable development, which a better localization algorithm will accelerate.
1.6 Societal aspects
This thesis leaves too many ends unexplored to draw conclusions about potential impacts on society as a whole; a real impact would require more investigation. In the long run, the findings could lead to more theses and research that eventually yield more precise localization techniques and help improve the future of autonomous vehicles. Using OSM maps, a completely open-source and widely used format, also has the additional benefit of being adaptable to any region of the world, scalable to any type of product, and usable by just about anyone regardless of education level.
1.7 Delimitations
The probabilistic localization approach proposed in this thesis is map-aided, meaning it combines a map with sensors that are not dependent on one; if the influence of the map were removed, the algorithm should still work in some manner. The project also has no guaranteed access to a global localization method, such as GPS, but works from the assumption that the starting position is known. Finally, all programming and evaluation takes place in a software environment provided by Scania, intended to be readily deployable on real-life trucks.
1.8 Outline
Section 2 introduces the problem of localization and gives a comprehensive
background of the behaviour and usage of a PF and a Kalman Filter (KF), as well as
a review of related work. Section 3 focuses on the mechanical background and the
practical implementation of the PF used in this project. Section 4 describes what
work was part of the final project version. Section 5 contains all recorded results and
explains the test cases that were used. Section 6 examines the results and explains
any inconsistencies or features that didn't follow expectations. Finally, Section 7 summarizes the findings and connects the results to the research questions of Section 1.
Background
2.1 The problem of localization
Localization is the practice of estimating an agentâs position through the use of algorithms and probabilistic distributions. This problem has no perfect solution;
everything that is measured is always in relation to something else, and in addition all measurements are noisy, which results in an unavoidable uncertainty [8]. With this assumption as a base, all localization algorithms have to handle this uncertainty in one way or another. This section discusses how this uncertainty works and why a particle filter is investigated over other localization methods for this thesis in particular. Here, I define the problem as:
Given the complete history of agent state estimates and measurements, define a probabilistic method that maximizes the probability that the state estimate x is accurate at time t.
2.2 Basics of localization
Localizing an agent typically requires a valid input in the form of sensors. Sensor inputs
can either be directly used for global localization, like with a GPS, or they can give data
that can be processed to yield a proposed position, like a radar. However, sensors are
never perfect [8] and for each measurement we will have uncertainty, or noise, that
makes the measurement deviate from the true value [23]. When using a GPS, this
deviation is simply modeled as a confidence bound, leading to the estimated position
being spread over an area with varying probabilities. When the sensor has to match
features to a map to get a position, estimation gets harder and we have uncertainties in what feature we are matching with as well. This is where localization algorithms really shine, as they can keep track of the probabilistic qualities of the measurements in a comprehensible manner.
2.3 Estimating uncertainties
The most common type of uncertainty to deal with is Gaussian noise, so it is usually reasonable to assume Gaussian noise when interpreting sensor data.
Sometimes, however, part of the error is predictable, such as the Doppler shift of a radar return. Using our knowledge about the noise, we can estimate these uncertainties with a number of techniques.
2.3.1 Gaussian noise
Gaussian noise [21] [15] is a random variable drawn from a Gaussian distribution. Gaussian noise can never be predicted exactly, but its probability distribution can be determined by observing its outcomes over time. It's also possible to compensate for the noise by averaging it over time, since the noise has a constant mean value.
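The averaging-out property can be demonstrated with a short simulation; the true value and noise level below are arbitrary illustrative choices, not numbers from the thesis.

```python
# Illustrative sketch: constant-mean Gaussian noise averages out over time.
# The true value and standard deviation are arbitrary, not from the thesis.
import random

random.seed(0)
true_value = 10.0
sigma = 2.0  # standard deviation of the simulated sensor noise

# A single reading can be off by several sigma, but the mean of many
# readings converges toward the true value (law of large numbers).
measurements = [true_value + random.gauss(0.0, sigma) for _ in range(10000)]
estimate = sum(measurements) / len(measurements)
```

With 10,000 samples, the standard error of the mean is sigma / sqrt(10000) = 0.02 m, two orders of magnitude below the per-reading noise.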
2.3.2 Bayesian inference
Bayes' theorem [6] is a well-known theorem in the field of probability that is very useful in estimation. Assume we have the measurement history Z_t and our estimated state x_t. Bayes' theorem gives a direct relationship between the prior state probability p(x_t | Z_{t-1}), the probability of a measurement z_t, and our posterior state probability p(x_t | Z_t) as follows:

p(x_t | Z_t) = p(z_t | x_t) p(x_t | Z_{t-1}) / p(z_t | Z_{t-1})    (2.1)
Through Bayesian inference, one can thus label a measurement as an outlier if it appears improbable given a position estimate we trust strongly, or label a proposed position as unlikely if the measurement doesn't line up with it.
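The update in Eq. (2.1) can be sketched numerically for a discrete case with two position hypotheses. All numbers, names, and the Gaussian measurement model here are invented for illustration only.

```python
# Hedged sketch of Eq. (2.1): update the probability of two position
# hypotheses from one measurement. All values are invented for illustration.
import math

def gaussian_likelihood(z, mean, sigma):
    """p(z | x): Gaussian measurement model around the predicted reading."""
    return math.exp(-0.5 * ((z - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Prior p(x_t | Z_{t-1}): two equally likely position hypotheses.
prior = {"pos_a": 0.5, "pos_b": 0.5}
# Predicted sensor reading for each hypothesis (hypothetical values).
expected = {"pos_a": 4.0, "pos_b": 9.0}

z_t = 4.5  # the actual measurement
unnormalized = {x: gaussian_likelihood(z_t, expected[x], 1.0) * p
                for x, p in prior.items()}
evidence = sum(unnormalized.values())          # p(z_t | Z_{t-1})
posterior = {x: w / evidence for x, w in unnormalized.items()}
# pos_a, whose predicted reading matches the measurement, now dominates.
```

The hypothesis whose prediction disagrees with the measurement by 4.5 standard deviations ends up with negligible posterior probability, which is exactly the outlier-labeling behaviour described above.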
2.3.3 Estimation algorithm: Kalman filter
The most reliable way of localizing an agent is through having a global localization method. A GPS is a prime example, where it can pinpoint the agentâs location anywhere on the globe with an inaccuracy estimate, if given time to converge. Other cases might be based on Aruco markers [7], where a known non-interchangeable landmark in the environment reports exactly where our current position is in relation to the landmark.
In many cases of localization, such as with vehicles, the agent is equipped with enough sensors to get a fairly believable global localization as well as complement that with real-time Dead Reckoning (DR) in order to accurately estimate a real-time position [27]. Under such circumstances, it makes sense to estimate the position using a Kalman Filter.
A Kalman Filter is the optimal estimator for a single position given that the noise is Gaussian [23] [16]. Since it gives a single optimal estimate, it cannot account for multimodal distributions, and as such keeps only one hypothesis of the location. Since the physical agent has only one true position, in many cases it makes the most sense to use a Kalman Filter to estimate that position.
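As a reference point for the later particle filter discussion, here is a minimal one-dimensional Kalman filter cycle. This is a generic textbook sketch, not Scania's implementation; the noise variances are placeholder values.

```python
# Minimal 1-D Kalman filter sketch (generic, not the thesis implementation).
# State: position along a road; motion and measurement noise assumed Gaussian.

def kalman_step(x, p, u, z, q=0.1, r=1.0):
    """One predict + update cycle.
    x, p : prior state estimate and its variance
    u    : odometry motion since the last step
    z    : position measurement
    q, r : process and measurement noise variances (illustrative values)
    """
    # Predict: move the estimate, grow the uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

Note that x and p describe a single Gaussian: the filter always maintains exactly one hypothesis, which is precisely the limitation the particle filter removes.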
2.3.4 Estimation algorithm: Particle filter
A particle filter estimates the pose of an agent using Bayesian logic and Monte Carlo methods [5] [23]. Its biggest advantage is that it is not limited to a single hypothesis: it can maintain as many hypotheses as desired, keeping track of them through what are called particles. Particle filters are also not limited to Gaussian probability distributions; they can represent any probability distribution, completely independent of any closed-form mathematical function. A demonstration can be seen in Figure 2.3.1.
A particle filter starts out with a set of M particles, where each particle is a unique hypothesis of the agent's pose. Then, depending on input data from the sensors and the agent's motion predictions, these particles are moved to represent the prior probability of each hypothesis, giving a better estimate of the position. To ensure that all hypotheses stay unique, noise is injected into the sensor measurement as well.
After that, the particles are weighted according to how likely each hypothesis is, yielding the posterior probability. Finally, the particle filter makes copies of the particles with high weights and discards the particles with low weights according to a set resampling policy. This will eventually lead to probable positions having many particles and improbable positions having few or no particles. To make sure particles spread out properly, noise is also directly added to the sensor readings, following the noise models in Section 3.2.5.

Figure 2.3.1: An illustration of how particles and their weights can represent a completely independent two-dimensional probability distribution. From: [19]
As mentioned previously, one of the largest advantages of using a particle filter is that they can assume multi-modal distributions, meaning it can account for several poses being possible at once. This makes the localization more robust when navigating in environments where many landmarks and features are repeated and no single position can be estimated to be correct.
Particle filters do encounter problems, however, such as if they can not get enough information to weigh the particles. In those cases, the particles will all eventually converge to one region that is not necessarily correct, and have a hard time spreading from that region due to their motion being limited to our prediction of the agentâs movements. This problem is called particle deprivation and should be avoided at all costs, more details can be found in Section 3.2.9.
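The cycle described above (initialize, predict, weight, resample) can be sketched for a one-dimensional position as follows. The particle count, noise levels, and Gaussian measurement model are illustrative assumptions; the thesis implementation in Chapter 3 differs in its weighting and map handling.

```python
# Schematic particle filter for a 1-D position, following the four steps
# described above. All noise levels and models are illustrative only.
import math
import random

random.seed(1)
M = 500                  # number of particles
true_x = 0.0
# Initialize: hypotheses spread around a known starting region.
particles = [random.gauss(0.0, 1.0) for _ in range(M)]

def likelihood(z, x, sigma=1.0):
    """Unnormalized Gaussian weight for hypothesis x given measurement z."""
    return math.exp(-0.5 * ((z - x) / sigma) ** 2)

for step in range(20):
    u = 1.0                                    # odometry: 1 m per step
    true_x += u
    # Predict: move every particle, injecting noise to keep hypotheses unique.
    particles = [x + u + random.gauss(0.0, 0.2) for x in particles]
    # Weight: compare each hypothesis against a noisy position measurement.
    z = true_x + random.gauss(0.0, 0.5)
    weights = [likelihood(z, x) for x in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights.
    particles = random.choices(particles, weights=weights, k=M)

estimate = sum(particles) / M  # final location estimate: mean particle pose
```

Because resampling keeps only particles consistent with recent measurements, the cloud concentrates near the true position; with uninformative measurements it would instead collapse arbitrarily, which is the particle deprivation problem mentioned above.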
2.4 Sensing environments
Any vehicle with a sufficient Inertial Measurement Unit (IMU) can deploy a dead-reckoning solution that integrates measurements of the physical forces acting on the vehicle. With wheel sensors or accelerometers, one can read the forward momentum, and with gyro sensors, one can determine a yaw rate. This implementation has a major flaw, however: it is very susceptible to noise and systematic errors, and it also requires a highly accurate starting point. To improve on a simple dead-reckoning system, the sensor readings have to be matched to a map, which can be done in several ways.
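The integration step, and its sensitivity to systematic error, can be shown in a few lines. The speeds, time step, and gyro bias below are invented for illustration; a real implementation would consume an actual IMU and wheel-sensor feed.

```python
# Dead-reckoning sketch: integrate forward speed and yaw rate into a 2-D pose.
# All sensor values are invented; a real IMU feed would replace them.
import math

def dead_reckon(pose, v, yaw_rate, dt):
    """Advance a pose (x, y, heading) by one step of speed v and yaw rate."""
    x, y, theta = pose
    theta += yaw_rate * dt
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    return (x, y, theta)

# Drive "east" for 10 s at 5 m/s. A small constant gyro bias (a systematic
# error of 0.01 rad/s) bends the whole estimated track sideways.
pose_ideal = (0.0, 0.0, 0.0)
pose_biased = (0.0, 0.0, 0.0)
for _ in range(100):
    pose_ideal = dead_reckon(pose_ideal, v=5.0, yaw_rate=0.0, dt=0.1)
    pose_biased = dead_reckon(pose_biased, v=5.0, yaw_rate=0.01, dt=0.1)
```

After only 50 m of driving, the biased track has already drifted several meters sideways, which is why pure dead reckoning degrades quickly without map corrections.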
2.4.1 Feature matching
A common way of globally localizing an agent is through feature matching [23].
It works by creating a map of features that the sensors can detect, such as the aforementioned Aruco markers [7]. Another common application is using a camera or a radar to build a map whose features can be directly correlated with the same detections on subsequent runs. These features become landmarks that are central to how the agent navigates. For this project, no suitable sensor that could create landmarks was available, and mapping an area before driving in it is outside the scope of this thesis, so this technique wasn't used.
2.4.2 Trajectory matching
Trajectory matching [17] is a map-matching technique and is the one used in this thesis. It requires a map with data points recorded in such a way that a trajectory can be traced. One such map type is OSM, where points are grouped into "ways", or roads. By taking the estimated trajectory and fitting it to a given map, it is possible to acquire a collection of potential positions. Since it is impossible to perfectly determine the position from a single instant, as a road can have several similar segments, a history of possible positions needs to be recorded. These requirements, a multimodal distribution and history-dependent localization, mean that a particle filter is perfect for the job.
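The geometric core of fitting a trajectory to a way reduces to point-to-segment distances. The sketch below scores a trajectory against one way in a local metric plane; this is a simplified stand-in for the thesis's weighting, with all names and the cost function being illustrative assumptions.

```python
# Sketch of the core geometric test in trajectory matching: distance from
# trajectory points to road segments (pairs of consecutive OSM nodes).
# Coordinates are treated as a local metric plane for simplicity.
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the segment a-b; all args are (x, y) tuples."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:                       # degenerate segment
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def trajectory_cost(trajectory, way):
    """Sum of distances from each trajectory point to its nearest way segment.
    Lower cost means the trajectory fits the road better."""
    segments = list(zip(way, way[1:]))
    return sum(min(point_segment_distance(p, a, b) for a, b in segments)
               for p in trajectory)
```

A particle whose trajectory yields a low cost against some way is a plausible hypothesis; comparing costs across candidate ways is what produces the multimodal set of potential positions.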
A particle filter implicitly implements history-dependent localization through its resampling. Every particle comes from a previous time step, so the history of each particle can be traced back to the very initialization of the particle filter if the data is saved. If a particle's trajectory doesn't match up with the map, it will likely not get resampled. As such, the localization algorithm doesn't need any explicit modification to take historic data into account; it will do so automatically as long as the weighting algorithm rewards particles whose trajectories match the road as closely as possible. At the end of a run, one could therefore trace the heritage of a particle back to its initialization, as there is a specific sequence of resampled particle states that creates the trajectory from start to finish, albeit with the injected noise.
Another great advantage comes from the use of OpenStreetMap in particular, whose "ways" define the flow direction of the roads and not only the positions of the road points. This means that where a normal curve-matching algorithm would find two potential approaches through a node, a trajectory-matching algorithm utilizing the ways of an OSM file finds only one.
A potential major drawback of such an algorithm is its computational complexity. A particle filter in itself already requires a considerable amount of computing power and optimization to work efficiently, and adding another heavy algorithm on top of that could defeat the entire localization objective in real-life scenarios. It is therefore very important to investigate whether the system can actually run in real time, or whether further optimizations and improvements are required, and to record whether any pre-processing of the map is necessary.
2.4.3 Issues with matching algorithms
There are two areas in particular where all forms of feature matching perform poorly:
uniformly shaped environments, and environments yielding no usable sensor inputs.
A uniform environment could be a square room where all four walls look the same, which makes it impossible to distinguish which wall is being observed.
The next case is very similar but more common. It could, for example, occur on a long, straight stretch of road where everything looks the same and no unique features can be found. This forces the localization to weight all particles equally and thus rely completely on noise to update the believed location. In the case of a Kalman Filter, this manifests as the uncertainty constantly increasing until a new valid input is found. In the case of a particle filter, it can lead to particle deprivation, detailed in Section 3.2.9.
2.5 Related work
"Particle filter" is a relatively new term, but the method is based on fluid dynamics simulations from the 1960s [4]. Since the term was coined in 1996, a lot of work has been done on particle filters, but this thesis investigates particle filter localization under a rare combination of constraints. Most work has been done with completely different types of map data containing landmarks or features, while OSM files were originally designed as map files, not for localization. Very little information is readily available in them, and the papers found that use the OSM format usually rely on a GPS.
A good starting point for researching particle filters is [5], which describes the benefits and abilities of a particle filter as well as a practical implementation. The same author wrote the book "Probabilistic Robotics" [23], which provides a great basis for designing the particle filter itself. The book takes a very general approach to particle filters and defines the four steps used in the PF structure of this thesis: initialization, prediction, weighting, and resampling (note that the book also defines an update step, which is combined with the prediction step in this thesis). Since every step is generalized, however, the usage context of every step has to be adapted to the target of this thesis, and each step has to be checked individually against other sources. The largest issue is that it's not possible to evaluate the particle weights instantaneously, which is discussed further below. In terms of initialization, it was agreed with the company that the GPS signal may be used to set a correct initial position; initialization was therefore not a topic investigated in related work.
Thesis [10] had an objective similar to this thesis. It achieves its goal using two methods: point-mass filtering and grid-based Bayesian estimation. There are similarities with this thesis, especially in the way it utilizes sensor data to formulate the prediction step. It uses wheel speed sensors and an IMU equivalent in an almost identical manner, but with a map structure different from OSM. The author also proposes an assumption that ended up being used in this thesis as well: that the vehicle is always on a road. This is simply because without a road, there is no map information to match against.
To get a better idea of how to design a weighting algorithm, papers using Simultaneous Localization and Mapping (SLAM) [14] [18] are quite relevant but proved hard to adapt to the current target. SLAM is a very popular localization approach in which the agent constructs the map around itself while locating itself within that map, and it can yield very accurate estimation. Papers like [18] did propose an attractive data association algorithm for use with ambiguous landmarks, accurately estimating position from nearby trees by taking the vehicle position into account. This approach differs from the OSM approach of this thesis, as the paper itself explains best: SLAM relies on knowledge of the mapping between observations and landmarks. If one attempted to construct landmarks in the form of road segments from the OSM map and roads, the observations would be the current positions of the particles and the features would be the nodes. Therefore, given a single particle at any point in time, it is not possible to evaluate which map feature or landmark it should be associated with, if any. The paper also notes that only a few hundred landmarks can be processed efficiently, which means that treating each node as a landmark, even with an abstracted map, would be an inefficient implementation that does not scale.
Paper [14] proposes a method that is very similar to the target of this thesis, although it uses a Kalman filter. It also relies on a camera, another common sensor in SLAM algorithms. In the paper, road markings are used as landmarks with the Iterated Closest Point (ICP) algorithm, which shows promise and is also suggested in [25].
Their initial location estimation appears very similar to the dead reckoning approach of this thesis and requires post-processing. After performing a procedure called loop closure, their error drops dramatically to an inaccuracy of a few meters. The issue with using ICP for this thesis lies in the nature of the particles: the particles do not log any history other than their current location, and even if they did, ICP would have to be conducted for each individual particle at a large computational cost. The paper also brings up an interesting drawback: cameras are subject to lighting conditions, and the suggested method would perform very differently in the dark compared to optimal light conditions. This highlights an advantage of the approach of this thesis, lightly discussed previously: a localization algorithm dependent on fewer sensors is less prone to outside disturbances. In [25], another ICP SLAM approach is proposed. Since it is SLAM, it is assumed that the vehicle can map the area by itself beforehand, this time constructing the map from the sensor data itself.
The achieved accuracy is impressive, and the sensor setup matches the sensors used in this project, but this thesis instead aims for a completely offline solution using maps from a source other than the truck itself.
Shifting the focus back to more PF-oriented papers, [20] is particularly interesting. A central part of this thesis is based on their implementation, in which they propose a simple yet elegant way to reduce complexity: add a non-physical state to the particle filter that records which road segment the particle is closest to. This non-physical state makes it possible to keep track of the most probable road segment the vehicle is traversing and to assume the vehicle moves in its vicinity. It can also help when assigning weights: if the road segment is a low-speed zone, particles there receive a low weight when the vehicle itself is moving quickly, or the probability that a particle belongs to a road segment can be assessed from the properties of the road itself. Doing this on an OSM map requires some pre-processing of the OSM data. Their achieved localization error is similar to that of the GPS used as one of the important baselines in this thesis, but this may be because [20] itself utilizes the GPS for localization.
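As an illustration of the idea (this is a sketch, not the thesis code, and all names are hypothetical), the non-physical road-segment state can simply be carried as one more field on each particle:

```python
from dataclasses import dataclass

@dataclass
class Particle:
    lon: float      # longitude in degrees (physical state)
    lat: float      # latitude in degrees (physical state)
    heading: float  # clockwise angle from geographic north (physical state)
    segment: int    # non-physical state: id of the associated road-segment node
    weight: float = 1.0
```

The extra field costs almost nothing but lets the weighting step consult road properties (speed limits, tangent direction) through a single lookup per particle.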
Resampling is also an important part of particle filters, although not as impactful as weighting. [11] provides an excellent overview of different types of resampling. It compares the most popular resampling algorithms in a very comprehensible way with experiments and data to back up the claims. Despite being a thesis, it is more readable than certain scientific papers and is definitely worth the read.
To summarize, most related work assumes that some sort of feature map either exists or can be generated using the vehicle's own sensors. It is common to use cameras and GPS for localization rather than low-level motion sensors, and a very popular localization method is SLAM, which approaches localization differently from the target of this thesis. Most of the papers have relevant ideas, but their context of use is in general so far from the specific target of this thesis that any methods have to be adapted carefully. To stay within the scope and time frame of the thesis, the related work used was narrowed down to a single derived approach.
Method choice
3.1 Background of a particle filter
A particle filter is highly adaptable and can take many forms. This project in particular focuses on map-aided localization, which is a 2-dimensional localization problem.
3.1.1 State definition
The vehicle is defined to have three physical states and one abstract state. The physical states are longitude, latitude, and heading. Longitude and latitude are defined according to the geographic coordinate system [13], and the heading is defined as the clockwise angle between a line from the vehicle to the geographic North Pole and the vehicle's forward vector.
3.1.2 Initialization
Starting any localization algorithm can be problematic because the agent has no idea where it is before localization starts. Because nearly all localization algorithms work by probability-based estimation, an erroneous initialization can cause an algorithm to perform very poorly afterwards and even prevent it from ever locating itself, especially if the location is randomly initialized.
A particle filter generally performs better here because it can represent multi-modal distributions, meaning it can keep track of several likely hypotheses of the current pose estimate. If a location on the map is considered plausible, particles will stay there and keep it as a possible location, rather than being removed for not being the most probable one. The filter does, however, have a major issue: at least one particle has to start in a nearly correct location, otherwise the subsequent update steps will never be able to correct the trajectory. Since the noise is also probability-based, having no particles in the starting location can lead to an algorithm that is not guaranteed to converge.
3.1.3 Prediction
The next step is to move the particles according to a constructed motion model of the agent. In the case of a car or truck, one would take into account the movement in a specific direction, which can be estimated with an IMU, gyros, wheel speed sensors, and compasses. On top of this movement, random noise is injected to account for imperfections in the sensors. The general equation looks like this:
Δx = y · Δt + G(0, σ)    (3.1)
where x is the state, y is the sensor data in a compatible unit of measurement, and G is noise, here assumed to be Gaussian. Since the update algorithm depends on the implementation and the states used, no definitive algorithm can be provided.
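A minimal sketch of Equation 3.1 in Python, assuming a scalar state and Gaussian noise (the function name and signature are illustrative, not from the thesis software):

```python
import random

def predict_state(x, y, dt, sigma):
    """One prediction step per Equation 3.1: the state x is advanced by the
    sensor reading y over the time step dt, plus Gaussian noise G(0, sigma)."""
    return x + y * dt + random.gauss(0.0, sigma)
```

With sigma = 0 the step reduces to plain dead reckoning, which is used as the baseline later in this thesis.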
3.1.4 Weighting
The most important step in a particle filter is weighting: the step where the importance, or weight, of a particle is decided. The weights are then used in the resampling step to accumulate more particles in more probable locations. Weighting is done by checking the probability of a particle's hypothesis against the collected data.
Sometimes this is straightforward, such as checking how likely it is that a detected landmark was seen from the particle's current location; at other times it can be very hard if not enough measurement data is available. If no global localization, such as GPS, is accessible, an agent has to rely completely on observations such as detected landmarks associated with a map of landmarks. Alternatively, an agent can use dead-reckoning approaches and work with very limited data; as long as the weighting and resampling algorithms are well-tuned, the trajectory of the particles can still be properly estimated.
3.1.5 Resampling
After all particles have been weighted, we only want to keep the most likely location hypotheses, through a process called resampling. Resampling a particle means that we keep that specific particle for the next iteration. However, we do not want to keep just the single most likely one, as a Kalman filter would. Instead, we evaluate every particle's individual probability of being correct. When resampling, the particle filter considers only the weight: the higher the weight, the more likely a particle is to be resampled. This usually results in particles with high weights being resampled several times and accumulating in more probable locations, while particles with low weights might not be resampled at all.
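As an illustration, the systematic scheme adopted in Section 3.2.7 can be sketched as follows; this is a minimal, illustrative implementation, not the thesis code:

```python
import random

def systematic_resample(particles, weights):
    """Systematic resampling sketch: N evenly spaced pointers sharing one
    random offset select particles in proportion to their weights."""
    n = len(particles)
    step = sum(weights) / n
    u = random.uniform(0.0, step)   # single random offset shared by all pointers
    resampled = []
    cum = weights[0]                # running cumulative weight
    i = 0
    for k in range(n):
        target = u + k * step
        while cum < target:         # advance to the particle covering this pointer
            i += 1
            cum += weights[i]
        resampled.append(particles[i])
    return resampled
```

Because the pointers are evenly spaced, any particle whose weight exceeds the mean weight is guaranteed at least one copy in the output.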
3.2 Implementation and algorithms
3.2.1 Basic assumptions
To make the PF perform as expected, some basic assumptions were put in place.
• The autonomous car will only navigate on roads. This follows from the limitations of the project: in areas that have no roads, there is no usable OSM information that our trajectory can be matched to. Keeping the vehicle on the road is doable with cameras alone, as seen in [1].
• The autonomous car will be kept on the road by already existing software. The point of a PF is localization; however precise this localization is, it can never be better than an algorithm that extracts the side of the road from a physical camera and stays on the road that way. An autonomous vehicle would also never go off the road on its own. As such, we can overlook the "robot kidnapping problem" for now [26].
• Roads are one-way only. Like the rest of the world, autonomous vehicles tend to travel on roads that go in one direction to avoid collisions. This allows us to use the tangent of a road segment to weight particles and thus avoid ending up heading the wrong way on a road. If a road has two neighbouring lanes that each go separate ways, that counts as two roads.
3.2.2 Initialization
For regular particle filter initialization, particles are spread out evenly over the map. When using OpenStreetMap, we can optimize this further: if we assume that the truck is near a node or open area, we only initialize the particles near known road nodes. In the particular scope of this project, we can optimize even further. As this system is supposed to complement the GPS and work in cases where the GPS becomes unavailable, we can initialize the particle filter with the latest GPS signal, or the latest known location, as we can assume the truck does not move while it is not powered. For the initial time step, a larger noise variance was used to ensure that a nearby road segment is associated.

Since the truck does not always start on a road, a separate method was implemented to only initialize the filter once the truck gets near a road segment according to the initial GPS signal, that is, once any node in N has a distance of less than l_max to the GPS position. This is not a realistic scenario, as one cannot manually decide when a GPS signal is lost, but here it was required to make it easier to evaluate how well the core of the system worked.
Algorithm 1: Particle initialization
Result: Particles are initialized according to GPS data.
M is the set of all particles, N is the set of all nodes, N_m is the closest node to particle m, l_max is the maximum distance a particle can be from a road segment, d is a function calculating the absolute distance between two points. G is here used to describe the Gaussian (Normal) distribution;
foreach m ∈ M do
    m_lon = GPS_lon + G(0, σ_position_init)
    m_lat = GPS_lat + G(0, σ_position_init)
    m_heading = GPS_heading + G(0, σ_heading_init)
    foreach n ∈ N do
        if d(m, n) < l_max then
            m_node = n
        else
            keep going;
        end
    end
end
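The initialization step above can be sketched in Python as follows; this is a minimal illustration of Algorithm 1 under the simplifying assumption of planar Euclidean distances, and all names are hypothetical:

```python
import math
import random

def init_particles(gps, nodes, n_particles, sigma_pos, sigma_head, l_max):
    """Sketch of Algorithm 1: scatter particles around the last GPS fix and
    associate each with a road node closer than l_max (names illustrative)."""
    particles = []
    for _ in range(n_particles):
        p = {
            "lon": gps["lon"] + random.gauss(0.0, sigma_pos),
            "lat": gps["lat"] + random.gauss(0.0, sigma_pos),
            "heading": gps["heading"] + random.gauss(0.0, sigma_head),
            "node": None,
        }
        for node_id, (nlon, nlat) in nodes.items():
            if math.hypot(p["lon"] - nlon, p["lat"] - nlat) < l_max:
                p["node"] = node_id   # associate the particle with this node
        particles.append(p)
    return particles
```

A larger sigma_pos for this first step, as described above, makes it more likely that every particle ends up within l_max of some road node.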
3.2.3 Sensor data
There is a lot of sensor data available from Scania's trucks; three sensors were considered the most useful: the IMU, the wheel speed sensors, and the GPS. They were each picked for usability in different parts of the particle filter algorithm. The radar was also planned to be used but was later dropped because it was not very useful and the logs did not always contain radar data.
There are two main stages where sensors can affect the localization: the prediction step and the weighting step. The prediction step uses the estimated current pose and the derivatives read from the sensors, and the weighting is performed with regard to the parsed OSM map.
Incorporating IMU data
By reading the IMU's data, the linear acceleration can be acquired as well as the angular velocity. With an assumed vehicle starting point, this is easily integrated into the prediction step of the algorithm by analyzing the estimated bearing and comparing the believed movement with the linear and angular motion of the vehicle. Such movement estimation is in general called dead reckoning and will be used as the localization baseline.
Incorporating wheel speed sensors data
Wheel speed sensors give a very good estimate of the current vehicle velocity and can be integrated into the prediction step with only minimal noise. Since they record the current change in position, they are used for the prediction step. While this does not account for wheel slip, the PF noise usually compensates sufficiently. As such, this measurement is likely more accurate than the linear velocity derived from the IMU.
Incorporating GPS data
The GPS data is straightforward: estimated coordinates are obtained from a GPS on the vehicle, and the proximity to a particle is used to calculate its weight. It is also possible to improve the accuracy of the speed and bearing estimates using interpolation between the predicted position and the GPS position. Unfortunately, since using a GPS would defeat the purpose of a particle filter in this project, it was ultimately decided that the GPS would not be used, apart from potential initialization.
Incorporating radar data
The radar data in this project was supposed to give a better estimate of the velocity and yaw rate of the vehicle. This is a straightforward measurement that can easily be integrated into the prediction step. The radar ended up not being used, as not all vehicle logs provided radar data, and dynamically adjusting the prediction step to account for when data is available distracts from the focus of this project. Additionally, incorporating the radar data does not necessarily improve the prediction step.
3.2.4 Road segment association
A considerable drawback of a particle filter is its potentially expensive computational complexity and memory use. Every particle has to be updated with a fitting noise sample, and depending on how quickly the system needs to update, it can become computationally inefficient very quickly. Coupled with a map matching algorithm that has to look for matching map segments, this can render the system completely unusable for real-world deployment. For a primitive implementation matching each node and particle manually, the computational complexity becomes O(n²) and scales with both the number of particles and the number of nodes. In this project, optimization was given lower priority, as it was judged more important to have a working system that can later be optimized; the final algorithms, 4 and 6, nevertheless run very quickly. To achieve the target of real-time execution, a concept dubbed Road Segment Association (RSA) was used to keep track of the nearest node to a particle, inspired by [20]. Please note that nodes and road segments are used interchangeably in this thesis: the path between one node and the next is considered a road segment associated with that node, as they are so close together, but to simplify calculations the road segment is treated as having an absolute location. In Section 6, "road segment" instead refers to the visual stretch of the road that the node belongs to.
RSA reduces the computational complexity of the solution by creating a graph of the OSM nodes rather than keeping them all in one long OSM way. When the system is initialized and the OSM map is read, each node is assigned neighbours depending on which nodes are next to it in the parsed OSM way, and each node also adds itself as a neighbour. This results in each node having references to 3 nodes in total as its "neighbours". When evaluating proximity to an OSM node, the neighbours of a particle's associated node are iterated over instead of every node in the OSM way. This reduces the computational complexity of the final product to O(m), scaling only with the number of particles.
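The RSA graph construction can be sketched as follows; this is an illustrative simplification that assumes a single ordered OSM way given as a list of node ids (the function name is hypothetical):

```python
def build_neighbour_graph(way):
    """Build the RSA lookup from an ordered OSM way (a list of node ids):
    each node's neighbour set holds its predecessor, successor and itself."""
    graph = {}
    for i, node in enumerate(way):
        neighbours = {node}              # a node is its own neighbour
        if i > 0:
            neighbours.add(way[i - 1])   # previous node along the way
        if i + 1 < len(way):
            neighbours.add(way[i + 1])   # next node along the way
        graph[node] = neighbours
    return graph
```

A proximity check then only iterates graph[current_node], at most three entries, instead of the whole way, which is what gives the O(m) behaviour described above.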
3.2.5 State prediction
The particles had four states: longitude, latitude, heading, and closest road segment. The road segment state handling was inspired by [20] and was only used in Algorithms 4 and 6. The pure state prediction can be found in Algorithm 2. In the software system, this prediction ran 100 iterations per simulated second. This value is more important than it seems, as it is also the rate at which the RSA updates. If the state prediction ran so slowly that the vehicle passed two OSM nodes during one iteration, the RSA would lag behind, but this can be fixed by modifying the code.
Noise model
Noise was assumed to be Gaussian across the board for both sensors and position noise. This proved sufficient for keeping track of the road, as long as there were enough particles. Four types of noise were tested: GPS noise, IMU yaw rate noise, wheel speed noise, and absolute position noise. Absolute position noise was not used in the prediction step of the final product, as motivated in Section 6.4.
Algorithm 2: Particle prediction step
Result: Particles are moved according to a vehicular motion model.
M is the set of all particles, N is the set of applicable nodes, m_node is the closest node to particle m, v is the wheel speed measurement, ΔΞ is the IMU yaw rate in degrees, l_max is the maximum distance a particle can be from a road segment, l is the lowest distance to a node so far, d is a function calculating the absolute distance between two points. G is here used to describe the Gaussian (Normal) distribution. If using pure DR, the standard deviation of the Gaussian distribution is 0 and will not affect the calculation at all;
foreach m ∈ M do
    m_lon = m_lon + v · cos(m_heading) + G(0, σ_position)
    m_lat = m_lat + v · sin(m_heading) + G(0, σ_position)
    m_heading = m_heading + ΔΞ + G(0, σ_heading)
    N = neighbours(m_node) ∪ m_node
    l = l_max
    foreach n ∈ N do
        if d(m, n) < l_max && d(m, n) < l then
            m_node = n
            l = d(m, n)
        else
            keep going;
        end
    end
end
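A compact Python sketch of Algorithm 2 for a single particle is shown below. It assumes planar coordinates, a particle stored as a dict, and the RSA graph from Section 3.2.4; all names are illustrative, not the thesis implementation:

```python
import math
import random

def predict(p, v, d_theta, graph, node_pos, sigma_pos, sigma_head, l_max):
    """Sketch of Algorithm 2 for one particle p (a dict; names illustrative).
    v is the wheel-speed displacement this step, d_theta the IMU yaw change."""
    # Motion model plus Gaussian noise, as in the prediction equations above.
    p["lon"] += v * math.cos(p["heading"]) + random.gauss(0.0, sigma_pos)
    p["lat"] += v * math.sin(p["heading"]) + random.gauss(0.0, sigma_pos)
    p["heading"] += d_theta + random.gauss(0.0, sigma_head)
    # Road segment association: re-check only the current node and its neighbours.
    best = l_max
    for n in graph[p["node"]]:
        nlon, nlat = node_pos[n]
        dist = math.hypot(p["lon"] - nlon, p["lat"] - nlat)
        if dist < best:
            best = dist
            p["node"] = n
    return p
```

With both sigmas set to 0 the same routine doubles as the pure dead-reckoning baseline mentioned above.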
3.2.6 Weighting
Weight function
The weight function had two variations with incremental functionality. The first weighs the particle according to its probability of belonging to a nearby road segment, following Equation 3.2, where Δd is the distance between a position and a node. The second, Equation 3.3, also includes a factor where the difference in heading, ΔΞ, compared to the road segment influences the weight according to the probability that they are aligned. Both functions use the Gaussian probability density function (PDF).
w_n = 1 / (σ_weighting_position · √(2π)) · exp( −(Δd)² / (2 · σ²_weighting_position) )    (3.2)

w_n = 1 / (σ_weighting_position · σ_weighting_Ξ · 2π) · exp( −(Δd)² / (2 · σ²_weighting_position) − (ΔΞ)² / (2 · σ²_weighting_Ξ) )    (3.3)
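The two weight functions translate directly into code. The sketch below is an illustrative implementation of Equations 3.2 and 3.3 (parameter names shortened; sigma_p and sigma_t stand for the position and heading standard deviations):

```python
import math

def weight_position(dd, sigma_p):
    """Equation 3.2: Gaussian PDF of the particle-to-node distance dd."""
    return math.exp(-dd ** 2 / (2 * sigma_p ** 2)) / (sigma_p * math.sqrt(2 * math.pi))

def weight_position_heading(dd, d_theta, sigma_p, sigma_t):
    """Equation 3.3: joint Gaussian over distance dd and heading error d_theta,
    assuming the two errors are independent."""
    exponent = -dd ** 2 / (2 * sigma_p ** 2) - d_theta ** 2 / (2 * sigma_t ** 2)
    return math.exp(exponent) / (sigma_p * sigma_t * 2 * math.pi)
```

Since only relative weights matter after resampling normalizes them, the constant factors could be dropped, but keeping the full PDFs matches the equations above.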
Probability-based weighting
The weighting-based approach was grounded in the mathematics of Bayesian inference. Each particle was assigned a weight depending on the likelihood that it belonged to a nearby road segment. The algorithm has a computational complexity of O(n²), since every particle has to be checked against every road segment. This makes it very slow and, unlike Algorithm 4, unusable in real time.
Algorithm 3: Probability-based weighting
Result: Particles are assigned a weight according to their proximity to a road segment.
Initialize according to Section 3.2.2, where M is the set of all particles, N is the set of all nodes, l_max is the maximum distance a particle can be from a road segment, d is a function calculating the absolute distance between two points. Equation 3.3 was used to determine the weight w_n;
foreach m ∈ M do
    w_n = 0
    w_m = 0
    foreach n ∈ N do
        compute w_n according to Equation 3.3
        if d(m, n) < l_max && w_n > w_m then
            w_m = w_n
        else
            keep going;
        end
    end
end
Probability-based weighting with road segment association
This approach is an augmentation of the one above. Since that algorithm runs very slowly, an alternative similar to what was done in [20] was tested: instead of finding the closest node by iterating over all nodes, only the last closest node and its neighbours are checked. This reduces the complexity from O(n²) to O(m), which depends only on the maximum number of neighbours a node has. A requirement for this algorithm is to keep track of all the neighbours of each node in a graph-like structure.
Algorithm 4: Probability-based weighting with road segment association
Result: Particles are assigned a weight according to their proximity to a neighbouring road segment.
Initialize according to Section 3.2.2, where M is the set of all particles, N is the set of all the neighbouring nodes of the associated road segment N_m, l_max is the maximum distance a particle can be from a road segment, d is a function calculating the absolute distance between two points;
foreach m ∈ M do
    w_n = 0
    w_m = 0
    N = neighbours(N_m) ∪ N_m
    foreach n ∈ N do
        compute w_n according to Equation 3.3
        if d(m, n) < l_max && w_n > w_m then
            w_m = w_n
        else
            keep going;
        end
    end
end
Proximity-based weighting
Testing an approach that does not use weighting was also inspired by [20]. Particles were not resampled based on their weight; instead they were discarded if they strayed too far from a node, and set to a constant weight if they were close enough to any node. This check also included the difference between the particle heading and the node's road tangent, so that two roads running in opposite directions close to each other do not let particles move over to the other road. After a particle had been discarded, the resampling step resampled a valid particle instead in a randomized fashion, since all valid particles have equal weight and are unordered by design. This solution can be described as using negative information: we integrate knowledge of where the vehicle certainly is not by assigning zero weight when the hypothesized location is not near a road.
Algorithm 5: Proximity-based weighting
Result: Only particles near a road are resampled.
Initialize according to Section 3.2.2, where M is the set of all particles, N is the set of all nodes, l_max is the maximum distance a particle can be from a road segment, d is a function calculating the absolute distance between two points;
foreach m ∈ M do
    w_m = 0
    foreach n ∈ N do
        if d(m, n) < l_max then
            w_m = 1
            break;
        else
            keep going;
        end
    end
end
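The binary weighting of Algorithm 5 is short enough to express directly. The sketch below is illustrative (it omits the heading-versus-road-tangent check described above and assumes planar coordinates):

```python
import math

def proximity_weight(lon, lat, nodes, l_max):
    """Sketch of Algorithm 5: a particle gets weight 1 if any road node lies
    within l_max of it, otherwise 0 ("negative information")."""
    for nlon, nlat in nodes:
        if math.hypot(lon - nlon, lat - nlat) < l_max:
            return 1.0   # near some road segment: eligible for resampling
    return 0.0           # certainly not on a road: never resampled
```

Because all surviving particles share the same weight, the subsequent resampling step picks among them uniformly at random, exactly as described above.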
Proximity-based weighting with road segment association
Finally, the last variant tested is an augmentation of Algorithm 5, similar to how Algorithm 4 augments Algorithm 3. Still no weighting is done, but proximity is only checked against the last associated node's neighbours.
Algorithm 6: Proximity-based weighting with road segment association
Result: Particles are assigned a weight according to their proximity to a neighbouring road segment.
Initialize according to Section 3.2.2, where M is the set of all particles, N is the set of all the neighbouring nodes of the associated road segment N_m, l_max is the maximum distance a particle can be from a road segment;
foreach m ∈ M do
    w_m = 0
    N = neighbours(N_m) ∪ N_m
    foreach n ∈ N do
        if d(m, n) < l_max then
            w_m = 1
            break;
        else
            keep going;
        end
    end
end
3.2.7 Resampling
Since reliability is very important in autonomous driving, it is a high priority that heavily weighted particles are resampled in a predictable manner. For that reason, systematic resampling is used. In systematic resampling, particles are chosen in linear relation to their weight, and particles that weigh more than the average particle weight are guaranteed to be resampled. An illustration of the algorithm can be seen in Figure 3.2.1. The algorithm also has a lower computational complexity [11] compared to the common alternatives. The explicit algorithm is found below.
Algorithm 7: Systematic resampling algorithm, using [11].
- Draw N ordered numbers: u_k = ((k−1) + ũ) / N, ũ ∌ U[0, 1)
- Allocate n_i copies of the particle x_i to the new distribution: u_k ∈ [ ∑_{s=1}^{i−1}