

Master of Science Thesis in Electrical Engineering

Department of Electrical Engineering, Linköping University, 2019

Estimating position and velocity of traffic participants using non-causal offline algorithms

Casper Johansson
LiTH-ISY-EX--19/5235--SE

Supervisor: Gustav Lindmark, ISY, Linköpings universitet; Daniel Ankelhed, Veoneer
Examiner: Gustaf Hendeby, ISY, Linköpings universitet

Division of Automatic Control, Department of Electrical Engineering
Linköping University, SE-581 83 Linköping, Sweden

Copyright © 2019 Casper Johansson


Abstract

In this thesis, several non-causal offline algorithms are developed and evaluated for a vision system used to track pedestrians and vehicles in traffic. The aim is to investigate whether the performance increase from non-causal offline algorithms alone is enough to evaluate the performance of the vision system.

In recent years, vision systems have become one of the most important sensors for the active safety systems of modern vehicles. Active safety systems are becoming increasingly important, and for them to work, good detection and tracking of objects in the vicinity of the vehicle is needed. Thus, the vision system needs to be properly evaluated. The problem is that current evaluation techniques are limited to a few object scenarios, and a more versatile evaluation technique is therefore desired for the vision system.

The focus of this thesis is to research non-causal offline techniques that increase the tracking performance without increasing the number of sensors. The Unscented Kalman Filter is used for state estimation, and an unscented Rauch-Tung-Striebel smoother is used to propagate information backwards in time. Different motion models, such as constant velocity and coordinated turn, are evaluated. Further assumptions and techniques, such as tracking vehicles using a fixed width and estimating the topography and using it as a measurement, are also evaluated.

The evaluation shows that the errors in velocity and the uncertainty of all states are significantly reduced using an unscented Rauch-Tung-Striebel smoother. For the evaluated scenarios it can be concluded that the choice of motion model depends on the scenario and the motion of the tracked vehicle, but the models perform roughly the same. Further, the results show that assuming a fixed vehicle width does not work, and that measurements using non-causal estimation of the topography can significantly reduce the position error, but further studies are recommended to verify this.


Acknowledgments

Firstly, I want to thank Veoneer for letting me do this thesis and providing the means to carry it out. In particular, I would like to thank my supervisor at Veoneer, Daniel Ankelhed, for his valuable insight, discussions and creative ideas throughout the thesis.

Furthermore, I would like to thank my supervisor at Linköping University, Gustav Lindmark, and my examiner, Gustaf Hendeby, for providing new ideas and valuable input for the thesis. The report would not be as tidy and structured as it is without Gustav, Gustaf and Daniel's proofreading.

Finally, I want to thank all my friends and family, for always being there when needed. You have a bigger part in the making of this thesis than you might think.

Linköping, May 2019 Casper Johansson


Contents

Notation

1 Introduction
  1.1 Background
  1.2 Objectives and problem formulation
  1.3 Limitation
  1.4 Company and product
  1.5 Related work
  1.6 Outline

2 Theoretical background
  2.1 Target tracking
  2.2 Measurements
    2.2.1 Classification measurements
    2.2.2 Non-classification measurements
    2.2.3 DGPS
  2.3 Fusion of measurements
  2.4 Filtering and smoothing algorithms
    2.4.1 Unscented Kalman Filter
    2.4.2 Unscented Rauch-Tung-Striebel smoother
  2.5 Coordinate systems
  2.6 Performance evaluation

3 Modelling
  3.1 Motion models
    3.1.1 Movement between frames
    3.1.2 Constant velocity with relative position
    3.1.3 Coordinated turn with relative position
    3.1.4 Constant velocity with relative position without vz
  3.2 Measurement equations
    3.2.1 Classification measurements
    3.2.2 Line segments
    3.2.3 L-shape segments
    3.2.4 Estimate relative topography
      3.2.4.1 Topography measurements using px estimate
      3.2.4.2 Topography measurements using pz and px
  3.3 Assume fixed width of object

4 Results
  4.1 Evaluation sequences
  4.2 Performance comparison
    4.2.1 The difference with and without the RTS smoother
    4.2.2 The effect of the vz state
    4.2.3 The difference between a CV and a CT motion model
    4.2.4 The effect of line segments and L-shape measurements
    4.2.5 The effect of using fix width
    4.2.6 The effect of topography measurements using px
    4.2.7 The effect of topography measurements using pz and px
    4.2.8 RMSE for Sequence 1 and 2
  4.3 Summary of results

5 Conclusion and future work
  5.1 Conclusion
  5.2 Future Work

A Pinhole camera


Notation

Abbreviations

ADAS    Advanced Driver Assistance Systems
AD      Automated Driving
CCS     Camera Coordinate System
PCS     Plane Coordinate System
IMU     Inertial Measurement Unit
UKF     Unscented Kalman Filter
URTSS   Unscented Rauch-Tung-Striebel Smoother
RTS     Rauch-Tung-Striebel
UT      Unscented Transform
CT      Cubature Transform
ROI     Region of Interest
RMSE    Root Mean Square Error
AE      Absolute Error

Defined parameters

X            State vector
R_{k+1}      Rotation of track between frame k and k+1
t_{k+1}      Translation of track between frame k and k+1
T_s          Sampling time
n            Number of states
R_PCS2CCS    Rotation from PCS to CCS


1 Introduction

In modern vehicles, active safety systems which prevent traffic incidents are becoming more important. In recent years, vision systems have become one of the most important sensors for this task and thus need to be properly evaluated. The objective of this thesis is to research whether using a vision system together with non-causal offline algorithms alone is enough to evaluate the vision system. This chapter begins with a short background on why performance evaluation of the vision system is so important. This is followed by a formulation and limitation of the researched problem. Further, there is a brief introduction of the collaboration partner and of research related to the thesis, and an outline of the thesis is presented.

1.1 Background

According to the World Health Organisation [12], approximately 1.35 million people die in road accidents each year and between 20 and 50 million are injured. The European Commission [6] states that more than 90% of traffic accidents are caused by human error. Thus, active safety such as automated driving (AD) and advanced driver assistance systems (ADAS) could significantly reduce the number of deaths and injuries caused in traffic.

The fast development of AD and ADAS functionality has been possible due to the large technological advancements of the past years, which have reduced the energy consumption and price of processors and sensors while increasing their performance. Common ADAS functions in today's vehicles are forward collision warning, automatic emergency braking, lane departure warning, lane keeping assistance and blind spot monitoring.


One key to ADAS functionality is to detect objects in the vicinity of the vehicle and react to dangerous situations. The primary sensors used in today's vehicles are vision systems, lidar and radar, and modern vehicles such as the Tesla Model S [19] can have up to eight cameras to monitor the situation around the vehicle. Thus, the vision system is one of the most important sensors for detecting and estimating objects relative to the ego vehicle. Since the vision system is so important, it needs to be thoroughly evaluated. In this thesis it is investigated whether the vision system can be evaluated without using additional distance measurement sensors, which is done by using non-causal offline algorithms.

The algorithms developed in the thesis are compared against ground truth, which in this thesis is considered to be provided by the Differential Global Positioning System (DGPS). The DGPS is explained further in Section 2.2.3. The motivation for this thesis is that the DGPS is limited to single-object traffic scenarios and is expensive. Thus, a more versatile evaluation technique is desired for the vision system.

1.2 Objectives and problem formulation

The objective of this thesis is to evaluate how much performance is gained by using non-causal offline algorithms with a stereo camera as distance sensor. The questions answered throughout the thesis are:

• Can a vision system be reliably evaluated without increasing the number of distance sensors?

• What additional assumptions can be made using non-causal offline algorithms?

• Is performance gained by making additional assumptions?

• What is the performance impact of changing motion model?

These questions are evaluated as follows:

• The result should be compared to the DGPS.

• The non-causal algorithms should be compared to a causal target tracking solution.

• The implemented algorithms will be compared to each other in more than one scenario to evaluate if the performance difference is consistent.

1.3 Limitation



• Only a stereo camera is used to measure distance.

• The motion of the ego vehicle is predetermined.

• All data association is predetermined.

• Since DGPS only measures longitudinal and lateral position and velocity, these quantities are evaluated; the other quantities are only compared and discussed.

• Since DGPS is only available for single-target sequences, the performance on multi-target sequences is discussed rather than evaluated.

• All measurements are considered independent, and it is stated where this is an approximation.

• The camera distortion is assumed to be perfectly compensated for.

1.4 Company and product

The thesis is performed in collaboration with Veoneer. Veoneer was founded in 2018, when Autoliv was split into two companies, and has a long history of automotive safety development. Veoneer is one of the largest companies that focuses solely on ADAS and AD. They develop hardware and software with a primary focus on vision systems. Veoneer has offices in 13 countries with 8700 employees, of which around 4700 work in research and development. Veoneer develops vision, night vision, radar, lidar and driver monitoring systems for vehicles. The main focus of Veoneer's office in Linköping is development of vision systems. The main functions of the vision system are e.g. sign recognition, automatic emergency braking, lane departure warning, pedestrian warning and full beam automation.

1.5 Related work

There has been a lot of research on target tracking for vehicle and pedestrian traffic, but most of it is focused on causal systems. A lot of research has also been done on non-causal systems, just not related to target tracking in traffic.

In [2], different coordinate systems for target tracking are evaluated in different traffic scenarios. They concluded that using a global coordinate system with GPS measurements would result in an unacceptably large position error for the ego vehicle, since the accuracy of the GPS according to [2] is only around ±15 m. Therefore, it is proposed that a relative coordinate system is used, e.g. ego-vehicle-fixed coordinates. They propose two different coordinate systems: one which solely uses relative coordinates, and one which uses relative position but absolute velocity and acceleration in the ego coordinate system (mixed coordinates).


The conclusion of [2] was that the mixed coordinates were the best option. Further, they concluded that both the mixed and relative coordinates are superior to the global coordinate system.

A real-time tracker including multiple motion models for the target, ego motion modelling, calibration and calculation of the distance using a stereo camera and radar was presented in [3]. Several filter techniques, such as an interacting multiple models filter and a process model with adaptive noise, were evaluated.

One of the main objectives of [11] was to estimate and evaluate road curvature in a causal way. It also included ego motion modelling and estimation of target shape using non-classification measurements.

Many of the references regarding non-causal estimation do not have an automotive focus but a more general focus on non-causal filtering, such as a general forward-backward smoother or RTS smoother. An example is [13], where a sigma-point forward-backward smoother was proposed, using pseudo-linearized dynamics obtained by weighted statistical linear regression. Another example is [16], where the focus was on creating a non-causal data association algorithm using already existing smoothing solutions.

1.6 Outline

Below is an outline with a short explanation of the content of the coming chapters:

Chapter 2 explains the relevant theory and algorithms needed to solve and understand the thesis.

Chapter 3 describes the proposed and evaluated models and algorithms used in this thesis.

Chapter 4 quantitatively evaluates and compares the models and algorithms used in this thesis for multiple scenarios.

Chapter 5 summarizes and concludes the results from the previous chapters and discusses some extensions and future work of the thesis.


2 Theoretical background

In this chapter the background needed to understand the developed algorithms and the method used in the next chapter is explained. This includes an introduction to target tracking and how it is done in this thesis, an introduction to the used measurements and how they are fused, and a definition of the Unscented Kalman Filter (UKF) and the smoother algorithm implemented in this thesis. The coordinate systems and evaluation methods are then defined.

2.1 Target tracking

The goal of target tracking is to estimate the states of one or several targets using measurements from one or more sensors. In this thesis a target is an object, such as a vehicle or pedestrian, in the vicinity of the ego vehicle, and only a stereo camera is used to detect and measure the distance to a target. A general tracking process is illustrated in Figure 2.1, where the steps are explained as:

• Sensor data and measurements

Here the measurements for the system are received. The measurements could measure different quantities such as distance, velocity and image coordinates.

• Gating and measurement association

Gating is a way to reduce computational complexity by reducing the number of measurements that could be assigned to a track. This is often done with an ellipsoidal or rectangular window around the estimate, where the size of the window is related to the variance of the states. Data association is then done by assigning a measurement to a track; the most common algorithms are described in [4].


• Track management

Track management is about starting new tracks, determining which tracks should still be active, and erasing old tracks. There are several methods for doing this, as further described in [4]. In general, new measurements need to be continuously assigned to keep a track alive.

• Filtering and prediction

In this stage a track is updated with an associated measurement, and a prediction of the states of the target at the next received measurement is made.

Figure 2.1: A flowchart of a target tracking system.

The processes of single-target and multi-target tracking are similar in general; the large difference is the complexity of the algorithms considered. In this thesis the gating, measurement association and track management steps are considered predetermined. The target tracking system used in this thesis is illustrated in Figure 2.2, where each track has its own UKF and the ego motion is the same for all tracks. Each track has a target, for which the tracker estimates the position relative to the ego vehicle and the velocity of the target. Since the track management is predetermined, no track management is done in the active tracks module in Figure 2.2; it simply records when a track is active. Further, the measurements in Figure 2.2 are introduced in Section 2.2.
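The ellipsoidal gating step described above can be illustrated with a Mahalanobis-distance test. This is a generic sketch (not Veoneer's implementation), assuming numpy and a chi-square threshold for the gate size:

```python
import numpy as np

def in_gate(y, y_pred, S, gate=9.21):
    """Ellipsoidal gate: accept a measurement y for a track if its squared
    Mahalanobis distance to the predicted measurement y_pred, under the
    innovation covariance S, is below the gate threshold.
    9.21 is the 99% chi-square level for 2 degrees of freedom."""
    d = y - y_pred
    return float(d @ np.linalg.inv(S) @ d) <= gate
```

The gate threshold is chosen from the chi-square distribution so that a true measurement is accepted with a known probability, which ties the window size to the state variance as described above.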



Figure 2.2: Overview of the system.

2.2 Measurements

In this section an overview of the used measurements is provided; the measurements are further described in Section 3.2. The distance measurements are all derived from the stereo camera, but in slightly different ways. There are two types of distance measurements: classification and non-classification measurements.

2.2.1 Classification measurements

The classification measurements are automatically generated for each frame using an algorithm developed by Veoneer. The algorithm automatically identifies objects such as vehicles and pedestrians in each frame. Classification measurements are received as a region of interest (ROI) in pixel coordinates in the frame together with a stereo distance measurement, and are further explained in Section 3.2.1.

2.2.2 Non-classification measurements

Non-classification measurements are measurements derived through image-processing methods other than classification, and their representation in the image is unknown. For example, a measurement could be a line segment for the backside of a vehicle. These measurements are referred to as line segments and L-shaped segments and are further explained in Sections 3.2.2 and 3.2.3. Notice that a tracked object can only receive one line segment or L-shape at a given time.

2.2.3 DGPS

DGPS is in this thesis considered as ground truth and is used solely for evaluation. It is considered as ground truth because its accuracy is high. According to [10], DGPS uses a base station with known absolute position that calculates the error of the satellite measurements and sends the error correction to the target, as illustrated in Figure 2.3. The positioning errors are reduced by more than 95 percent according to [10], and the accuracy should thus be less than a meter.

Figure 2.3: Overview of a DGPS system.

2.3 Fusion of measurements

Since all measurements are assumed to be independent, the fusion formula for independent measurements in Algorithm 1 is used. For the distance measurements this assumption is an approximation, since they are all derived from the same images; if the image is distorted in some way, all the measurements will be distorted. The distance measurements are, however, derived in different ways, and thus the independence approximation is justifiable.



Algorithm 1 Fusion formula [7]

Given N independent state estimates and covariances, the fused estimate and covariance are computed as:

\[
P = \left( P_1^{-1} + P_2^{-1} + \dots + P_N^{-1} \right)^{-1} \tag{2.1}
\]
\[
x = P \left( P_1^{-1} x_1 + P_2^{-1} x_2 + \dots + P_N^{-1} x_N \right) \tag{2.2}
\]
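As a concrete illustration, the fusion formula of Algorithm 1 can be written in a few lines of Python. This is a sketch assuming numpy; a production implementation would prefer numerically robust solves over explicit matrix inverses:

```python
import numpy as np

def fuse_independent(estimates, covariances):
    """Fuse N independent estimates via (2.1)-(2.2): the fused information
    matrix is the sum of the individual information matrices, and the fused
    state is the information-weighted mean."""
    P_fused = np.linalg.inv(sum(np.linalg.inv(P) for P in covariances))
    x_fused = P_fused @ sum(np.linalg.inv(P) @ x
                            for x, P in zip(estimates, covariances))
    return x_fused, P_fused
```

For example, fusing two scalar estimates 0 and 2 with unit variance gives the mean 1 with variance 0.5, matching the intuition that two equally good sensors halve the uncertainty.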

2.4 Filtering and smoothing algorithms

A system description with additive noise is used and can be written on the form
\[
x_{k+1} = f(x_k, u_k) + w_k, \tag{2.3}
\]
where $f(x_k, u_k)$ is the process model and $w_k$ is the process noise. The measurement equations also use additive noise and are described as
\[
y_k = h(x_k, u_k) + e_k, \tag{2.4}
\]
where $h(x_k, u_k)$ is the measurement model and $e_k$ is the measurement noise.

2.4.1 Unscented Kalman Filter

The Unscented Kalman Filter is described in [9], [1], [17] and [7]. Algorithm 2 describes the UKF implemented for this thesis. The UKF uses the unscented transform (UT) to approximate the nonlinear transformation; an example of the UT is portrayed in Figure 2.4. As illustrated in Figure 2.4, the UT spreads out a number of sigma points around the mean using the covariance matrix and then transforms the sigma points through the nonlinearity. All sigma points are then used to calculate a new mean and covariance, and each sigma point has a weight. The UKF has several tuning parameters, and depending on the choice of α, β, κ, λ and W⁰ₘ, different characteristics of the UKF are achieved. Commonly used values are stated in Table 2.1, where β = 2 is optimal for Gaussian distributions according to [1]. The tuning parameters used in this thesis for Algorithm 2 are the ones for UT2 in Table 2.1.


Figure 2.4: Comparison between the actual system and the UT mean and covariance propagation.

Table 2.1: Commonly used values for the different parameters of the UT.

Parameter   UT1              UT2              CT
α           √(3/n)           10⁻³             1
β           α² − 1           2                0
κ           0                0                0
λ           α²(n + κ) − n    α²(n + κ) − n    0
W⁰ₘ         1 − 1/α²         1 − 1/α²         0



Algorithm 2 Unscented Kalman Filter [9] [1] [17] [7]

Given an initial state $x_0$ and covariance $P_0$, a process model $f(x_k, u_k) + w_k$ with $w_k \sim N(0, Q_k)$, and a measurement model $h(x_k, u_k) + e_k$ with $e_k \sim N(0, R_k)$:

1. Create sigma points:
\[
\chi^0_{k-1|k-1} = x_{k-1|k-1}, \quad
\chi^i_{k-1|k-1} = x_{k-1|k-1} + \sqrt{n+\lambda}\,\big[\sqrt{P_{k-1|k-1}}\big]_i, \quad
\chi^{i+n}_{k-1|k-1} = x_{k-1|k-1} - \sqrt{n+\lambda}\,\big[\sqrt{P_{k-1|k-1}}\big]_i, \quad i = 1, \dots, n, \tag{2.5}
\]
where [7] proposes singular value decomposition to calculate $\sqrt{P}$ and [18] proposes Cholesky factorization. The weights associated with the sigma points are calculated as
\[
W^0_m = \frac{\lambda}{n+\lambda}, \quad
W^0_c = \frac{\lambda}{n+\lambda} + (1 - \alpha^2 + \beta), \quad
W^i_m = W^i_c = \frac{1}{2(n+\lambda)}, \quad i = 1, \dots, 2n, \tag{2.6}
\]
where $\lambda$ is a scaling parameter defined in Table 2.1.

2. Transform the sigma points through the process model:
\[
\hat\chi^i_{k|k-1} = f(\chi^i_{k-1|k-1}), \quad i = 0, \dots, 2n. \tag{2.7}
\]

3. Compute the mean and covariance of the prediction:
\[
x_{k|k-1} = \sum_{i=0}^{2n} W^i_m \hat\chi^i_{k|k-1}, \quad
P_{k|k-1} = \sum_{i=0}^{2n} W^i_c \big(\hat\chi^i_{k|k-1} - x_{k|k-1}\big)\big(\hat\chi^i_{k|k-1} - x_{k|k-1}\big)^T + Q_{k-1}. \tag{2.8}
\]

4. Recalculate the sigma points:
\[
\chi^0_{k|k-1} = x_{k|k-1}, \quad
\chi^i_{k|k-1} = x_{k|k-1} + \sqrt{n+\lambda}\,\big[\sqrt{P_{k|k-1}}\big]_i, \quad
\chi^{i+n}_{k|k-1} = x_{k|k-1} - \sqrt{n+\lambda}\,\big[\sqrt{P_{k|k-1}}\big]_i, \quad i = 1, \dots, n, \tag{2.9}
\]
or use the following approximation:
\[
\chi^i_{k|k-1} \approx \hat\chi^i_{k|k-1}, \quad i = 0, \dots, 2n. \tag{2.10}
\]

5. Transform the sigma points through the measurement model:
\[
\mathcal{Y}^i_{k|k-1} = h(\chi^i_{k|k-1}), \quad i = 0, \dots, 2n. \tag{2.11}
\]

6. Compute the predicted mean and covariance of the measurement and the cross-covariance between the states and measurements:
\[
y_{k|k-1} = \sum_{i=0}^{2n} W^i_m \mathcal{Y}^i_{k|k-1}, \quad
P_{y_{k|k-1}} = \sum_{i=0}^{2n} W^i_c \big(\mathcal{Y}^i_{k|k-1} - y_{k|k-1}\big)\big(\mathcal{Y}^i_{k|k-1} - y_{k|k-1}\big)^T + R_k,
\]
\[
P_{x_{k|k-1} y_{k|k-1}} = \sum_{i=0}^{2n} W^i_c \big(\chi^i_{k|k-1} - x_{k|k-1}\big)\big(\mathcal{Y}^i_{k|k-1} - y_{k|k-1}\big)^T. \tag{2.12}
\]

7. Compute the filter gain and the filtered state and covariance:
\[
K_k = P_{x_{k|k-1} y_{k|k-1}} P_{y_{k|k-1}}^{-1}, \quad
x_{k|k} = x_{k|k-1} + K_k \big(y_k - y_{k|k-1}\big), \quad
P_{k|k} = P_{k|k-1} - K_k P_{y_{k|k-1}} K_k^T, \tag{2.13}
\]
where $y_k$ is the received measurement.

It is common to use 2n + 1 sigma points, which is done in Algorithm 2, where n is the number of states. In [1] the old transformed sigma points are kept in step 4 of Algorithm 2 and new ones are calculated as in step 1, but excluding χ⁰, thus increasing the number of sigma points to 4n + 1.
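Steps 1-3 of Algorithm 2 (sigma-point generation, propagation and prediction moments) can be sketched in Python. This is an illustrative sketch, not the thesis implementation; the function names are assumptions, a Cholesky factorization is used for the matrix square root, and the UT2 parameters of Table 2.1 are the defaults:

```python
import numpy as np

def ut_weights(n, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaling parameter and mean/covariance weights from (2.6)."""
    lam = alpha**2 * (n + kappa) - n
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return lam, wm, wc

def sigma_points(x, P, lam):
    """2n+1 sigma points around x, using a Cholesky square root of P (2.5)."""
    n = len(x)
    S = np.linalg.cholesky(P)
    pts = [x]
    for i in range(n):
        col = np.sqrt(n + lam) * S[:, i]
        pts.append(x + col)
        pts.append(x - col)
    return np.array(pts)

def ukf_predict(x, P, f, Q, lam, wm, wc):
    """Time update: propagate sigma points through f and recombine (2.7)-(2.8)."""
    chi = sigma_points(x, P, lam)
    chi_f = np.array([f(c) for c in chi])
    x_pred = wm @ chi_f
    d = chi_f - x_pred
    P_pred = d.T @ (wc[:, None] * d) + Q
    return x_pred, P_pred
```

A useful sanity check is that for a linear propagation f(x) = x the prediction reproduces the prior mean exactly and the covariance P + Q.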

2.4.2 Unscented Rauch-Tung-Striebel smoother

An Unscented Rauch-Tung-Striebel Smoother (URTSS) is described in [17] and is a Gaussian-noise-based smoother, where the unscented transform is used to approximate the nonlinearity. The algorithm for the smoother is stated in Algorithm 3. Algorithm 2 is evaluated before the smoother, and a modification of step 3 in Algorithm 2 is needed. The modification is
\[
D_k = \sum_{i=0}^{2n} W^i_c \big(\chi^i_{k-1|k-1} - x_{k-1|k-1}\big)\big(\hat\chi^i_{k|k-1} - x_{k|k-1}\big)^T. \tag{2.14}
\]



Algorithm 3 Unscented Rauch-Tung-Striebel Smoother [17]

First evaluate Algorithm 2, then set $x^s_{T|T} = x_{T|T}$ and $P^s_T = P_{T|T}$ and run the recursion backwards for $k = T-1, \dots, 0$. The smoother gain, mean and covariance are calculated as:
\[
G_k = D_{k+1} P_{k+1|k}^{-1}, \quad
x^s_{k|k} = x_{k|k} + G_k \big(x^s_{k+1|k+1} - x_{k+1|k}\big), \quad
P^s_{k|k} = P_{k|k} + G_k \big(P^s_{k+1|k+1} - P_{k+1|k}\big) G_k^T. \tag{2.15}
\]
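The backward pass of Algorithm 3 can be sketched as follows, assuming the forward UKF pass has stored the filtered moments, the one-step predictions and the cross-covariances D_k from (2.14). The function name and storage layout are illustrative:

```python
import numpy as np

def rts_backward(xs_filt, Ps_filt, xs_pred, Ps_pred, Ds):
    """Backward recursion of Algorithm 3.

    xs_filt[k], Ps_filt[k] : filtered moments x_{k|k}, P_{k|k}
    xs_pred[k], Ps_pred[k] : predictions x_{k|k-1}, P_{k|k-1}
    Ds[k]                  : cross-covariance D_k from (2.14)
    """
    T = len(xs_filt) - 1
    xs = [None] * (T + 1)
    Ps = [None] * (T + 1)
    xs[T], Ps[T] = xs_filt[T], Ps_filt[T]   # initialise at the final filtered state
    for k in range(T - 1, -1, -1):
        G = Ds[k + 1] @ np.linalg.inv(Ps_pred[k + 1])          # smoother gain
        xs[k] = xs_filt[k] + G @ (xs[k + 1] - xs_pred[k + 1])  # smoothed mean
        Ps[k] = Ps_filt[k] + G @ (Ps[k + 1] - Ps_pred[k + 1]) @ G.T
    return xs, Ps
```

Note that the recursion only reuses quantities already computed by the forward filter, which is why the smoother adds little computational cost on top of the UKF.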

2.5 Coordinate systems

Here the different coordinate systems used in this thesis are explained. The camera used for measurements is mounted at the top of the windshield of the vehicle and is slightly rotated around the pitch angle compared to the vehicle/road. This coordinate system is called the camera coordinate system (CCS) and is portrayed in Figure 2.5. To make more valid assumptions and to increase the accuracy of the 2D motion model in the 3D world, the plane coordinate system (PCS) is introduced, which is slightly pitched compared to the CCS. The pitch makes the xy-plane of the PCS parallel to the road when effects from the vehicle dynamics are disregarded, as illustrated in Figure 2.5. The rotation between the PCS and the CCS is approximated as
\[
R_{PCS2CCS} =
\begin{pmatrix}
\cos\alpha & 0 & \sin\alpha \\
0 & 1 & 0 \\
-\sin\alpha & 0 & \cos\alpha
\end{pmatrix}, \tag{2.16}
\]
where α is a constant defined in Figure 2.5. Thus, the PCS states are rotated to the CCS states as
\[
x_{CCS} = R_{PCS2CCS}\, x_{PCS}.
\]


Figure 2.5: Overview of the coordinate systems.

2.6 Performance evaluation

Since the DGPS sequences only contain one target, there is no ground truth for multi-target sequences. Therefore, the states of the multi-target sequences are only compared and not evaluated.

The RMSE is used to evaluate the performance difference between the truth $p^0$ and the estimate $p$ for a single target, where the RMSE is calculated as
\[
\mathrm{RMSE}(p) = \sqrt{\frac{1}{N} \sum_{k=1}^{N} \big\| p^0_k - p_k \big\|_2^2} \tag{2.18}
\]



and is further discussed in [5]. Since the RMSE only gives a single value for a whole sequence, the AE is also used to display the error over time. It is the norm of the position error at each time instant,
\[
\mathrm{AE}(p_k) = \big\| p^0_k - p_k \big\|_2.
\]
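Both metrics are straightforward to compute; below is a sketch assuming numpy arrays with one row per frame (the function names are illustrative):

```python
import numpy as np

def rmse(p_true, p_est):
    """(2.18): root mean square of the per-frame Euclidean position errors."""
    err = np.linalg.norm(p_true - p_est, axis=1)
    return np.sqrt(np.mean(err ** 2))

def abs_error(p_true, p_est):
    """Per-frame absolute error, suitable for plotting the error over time."""
    return np.linalg.norm(p_true - p_est, axis=1)
```

The RMSE condenses a whole sequence into one number, while the AE array keeps the time dimension, which is why both are reported in Chapter 4.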


3 Modelling

In this chapter the models and assumptions that are evaluated are explained. This includes definitions of all the measurement equations and motion models used, such as the topography measurements.

3.1 Motion models

The motion models used in the thesis are defined in this section. As stated in Section 1.5, a coordinate system which uses absolute position is not preferred for target tracking in traffic, since the ego vehicle is moving. Thus, a relative-position motion model using the moving PCS is used, as illustrated in Figure 3.1.

3.1.1 Movement between frames

Since the ego vehicle moves, the origin of the PCS is constantly moving. For each new frame the movement of the ego vehicle needs to be considered, as illustrated in Figure 3.2.


Figure 3.1: Illustration of the relative distance between ego and target vehicle.

Figure 3.2: Illustration of R_{k+1} and t_{k+1}.

To compensate for the ego vehicle rotation, the following rotation matrix is used:
\[
R = R_x(\alpha) R_y(\beta) R_z(\gamma), \tag{3.1}
\]



where $R_x(\alpha)$, $R_y(\beta)$ and $R_z(\gamma)$ are defined as
\[
R_x =
\begin{pmatrix}
1 & 0 & 0 \\
0 & \cos\alpha & \sin\alpha \\
0 & -\sin\alpha & \cos\alpha
\end{pmatrix}, \quad
R_y =
\begin{pmatrix}
\cos\beta & 0 & -\sin\beta \\
0 & 1 & 0 \\
\sin\beta & 0 & \cos\beta
\end{pmatrix}, \quad
R_z =
\begin{pmatrix}
\cos\gamma & \sin\gamma & 0 \\
-\sin\gamma & \cos\gamma & 0 \\
0 & 0 & 1
\end{pmatrix} \tag{3.2}
\]

and it is the inverse of the standard rotation matrix. The translation vector t is defined as the negative motion, i.e. the motion from frame k + 1 to k of the ego vehicle, as illustrated in Figure 3.2. Thus, the movement of the coordinate system between two frames is
\[
p_{k+1} = R_{k+1} p_k + t_{k+1}. \tag{3.3}
\]
For some motion models the velocity states are only estimated along the x- and y-axes, and the rotation to compensate for the ego vehicle motion then becomes
\[
R_{xy} =
\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0
\end{pmatrix} R. \tag{3.4}
\]
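Equations (3.1)-(3.3) can be sketched as follows, assuming numpy; the function names are illustrative:

```python
import numpy as np

def rot_x(a):
    """R_x from (3.2): inverse (transposed) standard rotation about x."""
    return np.array([[1, 0, 0],
                     [0,  np.cos(a), np.sin(a)],
                     [0, -np.sin(a), np.cos(a)]])

def rot_y(b):
    return np.array([[np.cos(b), 0, -np.sin(b)],
                     [0, 1, 0],
                     [np.sin(b), 0,  np.cos(b)]])

def rot_z(g):
    return np.array([[ np.cos(g), np.sin(g), 0],
                     [-np.sin(g), np.cos(g), 0],
                     [0, 0, 1]])

def move_frame(p, alpha, beta, gamma, t):
    """Apply (3.1) and (3.3): p_{k+1} = R_{k+1} p_k + t_{k+1}."""
    R = rot_x(alpha) @ rot_y(beta) @ rot_z(gamma)
    return R @ p + t
```

For example, a pure 90° yaw of the ego vehicle maps a point straight ahead, [1, 0, 0], to [0, −1, 0] in the new frame, i.e. to the side of the vehicle, as expected for a relative coordinate system.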

3.1.2 Constant velocity with relative position

This is a modified version of the 2D constant velocity model described in [14]. The states used in this model are
\[
X = [p_x\ p_y\ p_z\ v_x\ v_y\ v_z\ w]^T. \tag{3.5}
\]
In (3.5), $p_x, p_y, p_z$ are the relative position in the PCS, $v_x, v_y, v_z$ are the absolute velocity in the PCS, and $w$ is the width of the tracked object. The noise vector is
\[
w = (w_{\ddot x}\ w_{\ddot y}\ w_{\ddot z}\ w_{\dot w}). \tag{3.6}
\]
Going from the constant velocity model explained in [14], the ego movement between frames has to be considered. To do this the motion model is divided into two sub-stages: first the origin of the PCS is moved from time k to k + 1, and then a motion update is done in the new coordinate system.

1. First the states are moved from the PCS for frame k to the PCS for frame k + 1:
\[
\begin{pmatrix}
\hat p_{k+1|k} \\
\hat v_{k+1|k} \\
\hat w_{k+1|k}
\end{pmatrix}
=
\begin{pmatrix}
R_{k+1}\, p_{k|k} + t_{k+1} \\
R_{k+1}\, v_{k|k} \\
w_{k|k}
\end{pmatrix},
\quad
p = \begin{pmatrix} p_x \\ p_y \\ p_z \end{pmatrix}, \;
v = \begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix}, \tag{3.7}
\]


where $R_{k+1}$ and $t_{k+1}$ are described in Section 3.1.1.

2. Then a motion update is done for frame k + 1:
\[
X_{k+1|k} =
\begin{pmatrix}
\hat p_{k+1|k} + T_s\, \hat v_{k+1|k} + \frac{T_s^2}{2} \big(w_{\ddot x}, w_{\ddot y}, w_{\ddot z}\big)^T \\
\hat v_{k+1|k} + T_s \big(w_{\ddot x}, w_{\ddot y}, w_{\ddot z}\big)^T \\
\hat w_{k+1|k} + T_s\, w_{\dot w}
\end{pmatrix}, \tag{3.8}
\]
where $T_s$ is the sample time.

The noise model is defined as
\[
Q = G(T_s)\,\mathrm{Cov}(w)\,G^T(T_s), \tag{3.9}
\]
where $\mathrm{Cov}(w)$ is the covariance of the noise defined in (3.6) and
\[
G(T_s) =
\begin{pmatrix}
T_s^2/2 & 0 & 0 & 0 \\
0 & T_s^2/2 & 0 & 0 \\
0 & 0 & T_s^2/2 & 0 \\
T_s & 0 & 0 & 0 \\
0 & T_s & 0 & 0 \\
0 & 0 & T_s & 0 \\
0 & 0 & 0 & T_s
\end{pmatrix}. \tag{3.10}
\]

The noise of the rotation and translation of the PCS between two frames is neglected in the noise model, since the rotation and translation are considered highly accurate compared to the motion model.
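The noise-free mean propagation of the two-stage CV update (3.7)-(3.8) can be sketched as below; the function name and state layout are illustrative:

```python
import numpy as np

def cv_predict(x, R, t, Ts):
    """Two-stage CV prediction (3.7)-(3.8), mean only (process noise omitted).

    State x = [px, py, pz, vx, vy, vz, w], with R and t the frame-to-frame
    rotation and translation from Section 3.1.1 and Ts the sample time.
    """
    p, v, w = x[:3], x[3:6], x[6]
    p = R @ p + t        # stage 1: move the state into the PCS of frame k+1
    v = R @ v            # rotate the absolute velocity into the new frame
    p = p + Ts * v       # stage 2: constant-velocity motion update
    return np.concatenate([p, v, [w]])
```

Separating the frame change from the motion update mirrors the structure of (3.7) and (3.8) and makes it easy to swap in other motion models that share the same stage 1.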

3.1.3 Coordinated turn with relative position

Here a modified version of the 2D coordinated turn model with Cartesian velocity explained in [15] is used, with the states
\[
X = [p_x\ p_y\ p_z\ v_x\ v_y\ \omega\ w]^T. \tag{3.11}
\]

1. First the states are moved from the PCS for frame k to the PCS for frame k + 1:
\[
\begin{pmatrix}
\hat p_{k+1|k} \\
\hat v_{x,k+1|k} \\
\hat v_{y,k+1|k} \\
\hat\omega_{k+1|k} \\
\hat w_{k+1|k}
\end{pmatrix}
=
\begin{pmatrix}
R_{k+1}\, p_{k|k} + t_{k+1} \\
R_{xy,k+1} \big(v_{x,k|k},\ v_{y,k|k},\ 0\big)^T \\
\omega_{k|k} \\
w_{k|k}
\end{pmatrix}, \tag{3.12}
\]



where $R_{k+1}$ and $t_{k+1}$ are described in Section 3.1.1. Since the CT model has no state for the velocity $v_z$, $v_z$ is set to 0 for the rotation, and the velocity needs to be rotated with $R_{xy,k+1}$ instead.

2. Then a motion update is done for frame k + 1:
\[
k_1 = \sin\!\big(\hat\omega_{k+1|k} T_s + w_{\dot\omega} T_s^2\big), \qquad
k_2 = \cos\!\big(\hat\omega_{k+1|k} T_s + w_{\dot\omega} T_s^2\big),
\]
\[
X_{k+1|k} =
\begin{pmatrix}
\hat p_{x,k+1|k} + \frac{T_s^2}{2} w_{\ddot x}
  + \frac{\hat v_{x,k+1|k} + T_s w_{\ddot x}}{\hat\omega_{k+1|k} + T_s w_{\dot\omega}} k_1
  - \frac{\hat v_{y,k+1|k} + T_s w_{\ddot y}}{\hat\omega_{k+1|k} + T_s w_{\dot\omega}} (1 - k_2) \\
\hat p_{y,k+1|k} + \frac{T_s^2}{2} w_{\ddot y}
  + \frac{\hat v_{x,k+1|k} + T_s w_{\ddot x}}{\hat\omega_{k+1|k} + T_s w_{\dot\omega}} (1 - k_2)
  + \frac{\hat v_{y,k+1|k} + T_s w_{\ddot y}}{\hat\omega_{k+1|k} + T_s w_{\dot\omega}} k_1 \\
\hat p_{z,k+1|k} + T_s w_{\dot z} \\
\big(\hat v_{x,k+1|k} + T_s w_{\ddot x}\big) k_2 - \big(\hat v_{y,k+1|k} + T_s w_{\ddot y}\big) k_1 \\
\big(\hat v_{x,k+1|k} + T_s w_{\ddot x}\big) k_1 + \big(\hat v_{y,k+1|k} + T_s w_{\ddot y}\big) k_2 \\
\hat\omega_{k+1|k} + T_s w_{\dot\omega} \\
\hat w_{k+1|k} + T_s w_{\dot w}
\end{pmatrix} \tag{3.13}
\]

The noise parameters are defined as

w = (w_{\ddot x}\; w_{\ddot y}\; w_{\dot z}\; w_{\dot\omega}\; w_{\dot w}) \tag{3.14}

and the process noise is defined as

Q = G(T_s, X, w)\,\mathrm{Cov}(w)\,G^T(T_s, X, w), \tag{3.15}

where the noise model is approximated by calculating the first-order Jacobian of (3.13) with respect to the noise. The result is

k_1 = \sin(\omega T_s + w_{\dot\omega} T_s^2)
k_2 = \cos(\omega T_s + w_{\dot\omega} T_s^2)
\alpha = T_s^2 \frac{v_x + T_s w_{\ddot x}}{\omega + T_s w_{\dot\omega}} k_2 - T_s^2 \frac{v_y + T_s w_{\ddot y}}{\omega + T_s w_{\dot\omega}} k_1 - T_s \frac{v_x + T_s w_{\ddot x}}{(\omega + T_s w_{\dot\omega})^2} k_1 + T_s \frac{v_y + T_s w_{\ddot y}}{(\omega + T_s w_{\dot\omega})^2} (1 - k_2)
\beta = T_s^2 \frac{v_x + T_s w_{\ddot x}}{\omega + T_s w_{\dot\omega}} k_1 - T_s^2 \frac{v_y + T_s w_{\ddot y}}{\omega + T_s w_{\dot\omega}} k_2 - T_s \frac{v_x + T_s w_{\ddot x}}{(\omega + T_s w_{\dot\omega})^2} (1 - k_2) - T_s \frac{v_y + T_s w_{\ddot y}}{(\omega + T_s w_{\dot\omega})^2} k_1
\delta = -T_s^2 (v_x + T_s w_{\ddot x}) k_1 - T_s^2 (v_y + T_s w_{\ddot y}) k_2
\gamma = T_s^2 (v_x + T_s w_{\ddot x}) k_2 - T_s^2 (v_y + T_s w_{\ddot y}) k_1

G(T_s, X, w) = \begin{bmatrix} \frac{T_s^2}{2} + \frac{T_s}{\omega + T_s w_{\dot\omega}} k_1 & -\frac{T_s}{\omega + T_s w_{\dot\omega}} (1 - k_2) & 0 & \alpha & 0 \\ \frac{T_s}{\omega + T_s w_{\dot\omega}} (1 - k_2) & \frac{T_s^2}{2} + \frac{T_s}{\omega + T_s w_{\dot\omega}} k_1 & 0 & \beta & 0 \\ 0 & 0 & T_s & 0 & 0 \\ T_s k_2 & -T_s k_1 & 0 & \delta & 0 \\ T_s k_1 & T_s k_2 & 0 & \gamma & 0 \\ 0 & 0 & 0 & T_s & 0 \\ 0 & 0 & 0 & 0 & T_s \end{bmatrix}. \tag{3.16}

3.1.4 Constant velocity with relative position without v_z

In order to compare the CV and the CT models, a CV model without the v_z state,

X = [p_x\; p_y\; p_z\; v_x\; v_y\; w]^T, \tag{3.17}

was implemented. This was done to be able to fairly compare the CV and the CT models and to discuss the impact of the v_z state. The noise vector is defined as

w = (w_{\ddot x}\; w_{\ddot y}\; w_{\dot z}\; w_{\dot w}), \tag{3.18}

otherwise the model is as the CV model described in Section 3.1.2.

3.2 Measurement equations

In this section the measurement equations for the different types of measurements are derived. All the measurement equations are defined in the image plane and CCS. Since all tracking is done in the PCS, the states need to be rotated to the CCS as described in Section 2.5 before evaluating the measurement equations.

3.2.1 Classification measurements

The measurement vector for classification measurements is

y_{\text{Classification},k} = [o_{c,k}\; o_{b,k}\; o_{w,k}]^T, \tag{3.19}

where o_c is the object's horizontal center pixel, o_b is the vertical bottom pixel of the object and o_w is the pixel width. The equations relating o_c, o_b and o_w to the states p_x, p_y and p_z are

o_c = -f_x \frac{p_y}{p_x} + c_x, \quad o_b = f_y \frac{p_z}{p_x} + c_y, \quad o_w = f_x \frac{w}{p_x}, \tag{3.20}

where f_x, f_y are the focal lengths of the camera and c_x and c_y are pixel offsets, as explained in Figure 3.3. The equations in (3.20) relate the CCS to the image plane and are derived using a pinhole camera model as further explained in Appendix A.

3.2.2 Line segments

Line segments are given as start and end points as in

y_{\text{Line},k} = [p_{x,l,k}\; p_{y,l,k}\; p_{z,l,k}\; p_{x,r,k}\; p_{y,r,k}\; p_{z,r,k}]^T, \tag{3.21}

where l stands for left and r stands for right. The points are mapped to the states as p_{x,l} = p_x, p_{y,l} = p_y - w/2, p_{z,l} = p_z, p_{x,r} = p_x, p_{y,r} = p_y + w/2 and p_{z,r} = p_z. Since line segments can only be received when an L-shape segment is not received, it is assumed that the relative angle of the target vehicle is small and thus a small-angle approximation is used. The line segments are discarded if they deviate too much in distance or size compared to the estimate. A line segment would represent the line between the left (l) and middle (m) points in Figure 3.4.

Figure 3.3: Different variables for the classification measurements.

3.2.3 L-shape segments

L-shape segments are given as left (l), middle (m) and right (r) points, are formed as an L, thus the name, and are received as

y_{\text{L-shape},k} = [p_{x,l,k}\; p_{y,l,k}\; p_{z,l,k}\; p_{x,m,k}\; p_{y,m,k}\; p_{z,m,k}\; p_{x,r,k}\; p_{y,r,k}\; p_{z,r,k}]^T \tag{3.22}

and an example is shown in Figure 3.4. Since the length of the vehicle is not estimated, only two of the three points are used. Thus, it is assumed that the length of a vehicle is always larger than the width, and the shortest side l is

l = \min(\|p_l - p_m\|, \|p_r - p_m\|), \tag{3.23}

where p_l, p_m and p_r are the points in Figure 3.4 and l is assumed to be the front or back of the vehicle.

Figure 3.4: Example of an L-shape measurement.

The two points of the shortest side are then mapped to the states as

y_k = \begin{cases} \begin{bmatrix} p_{x,k} + w_k \sin(\alpha)/2 \\ p_{y,k} + w_k \cos(\alpha)/2 \\ p_{z,k} \\ p_{x,k} - w_k \sin(\alpha)/2 \\ p_{y,k} - w_k \cos(\alpha)/2 \\ p_{z,k} \end{bmatrix}, & \alpha > 0 \\[2ex] \begin{bmatrix} p_{x,k} - w_k \sin(\alpha)/2 \\ p_{y,k} + w_k \cos(\alpha)/2 \\ p_{z,k} \\ p_{x,k} + w_k \sin(\alpha)/2 \\ p_{y,k} - w_k \cos(\alpha)/2 \\ p_{z,k} \end{bmatrix}, & \alpha \le 0. \end{cases} \tag{3.24}

Since the equations require -\pi/2 \le \alpha \le \pi/2, the angle v = \arctan2(v_{y,k}, v_{x,k}), which is defined on the interval ]-\pi, \pi], may need to be adjusted. This happens when the target is traveling towards the ego vehicle: v is adjusted with \pi if v is in ]-\pi, -\pi/2[ and with -\pi if v is in ]\pi/2, \pi].


3.2.4 Estimated relative topography

The estimated topography is received as point estimates containing information on the relative topography with respect to the car, given as p_x, p_y and p_z positions relative to the current CCS and PCS. The points represent the surface of the road and can be both in front of and behind the ego vehicle. Only the points in front of the ego vehicle are considered relevant, since no tracking is done behind the ego vehicle. How the estimation is done cannot be described further due to a pending patent, but the relative topography estimation is regarded as highly accurate.

3.2.4.1 Topography measurements using the p_x estimate

The first topography measurement uses the state p_x as input and calculates a p_z from it, as illustrated in Figure 3.5, and is calculated in the PCS. The measurement equation is

y_k = p_z(p_x), \tag{3.25}

where interpolation between the received topography samples is done to obtain a continuous function, and the measurement corresponds to the p_z state. p_x is used since there is no way to know the exact distance to the target and it is the single largest uncertainty. The noise is modelled by evaluating the topography one step away from p_x, using the covariance P_x as step length. The maximum deviation is then chosen as the noise:

e_z = \max(|p_z(p_x - P_x) - p_z(p_x)|, |p_z(p_x + P_x) - p_z(p_x)|). \tag{3.26}
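A minimal sketch of (3.25)–(3.26), assuming linear interpolation between the received topography points (the thesis does not state the interpolation type) and an illustrative function name:

```python
import numpy as np

def topo_measurement_px(px, Px, topo_x, topo_z):
    """Topography measurement using the px estimate.

    topo_x, topo_z: received topography samples (sorted in x).
    Returns the interpolated pz(px) of (3.25) and the worst-case
    deviation over one step Px as the measurement noise, (3.26)."""
    pz = np.interp(px, topo_x, topo_z)            # continuous pz(px)
    lo = np.interp(px - Px, topo_x, topo_z)
    hi = np.interp(px + Px, topo_x, topo_z)
    ez = max(abs(lo - pz), abs(hi - pz))
    return pz, ez
```

The noise grows automatically where the topography is steep, which matches the intent of (3.26): an uncertain distance matters more on a slope than on flat ground.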

Figure 3.5: Image for describing topography measurement.

3.2.4.2 Topography measurements using p_z and p_x

Instead of using p_x alone, the ratio p_z/p_x is used. The ratio is obtained from the bottom coordinate o_b of the classification measurements by calculating

\frac{p_z}{p_x} = \frac{o_b - c_y}{f_y} \tag{3.27}

and an example scenario is portrayed in Figure 3.6. Since o_b is related to the CCS, this measurement is defined in the CCS.

Figure 3.6: Image for describing topography measurement using image projection.

Since the relative topography is estimated for all three dimensions of the PCS, the ratio p_{\text{topo},z}/p_{\text{topo},x} is calculated for all estimated topography positions. Then the point which minimizes the error between the two ratios at a given time k is chosen as a measurement:

i = \arg\min_n \left| \frac{p_{\text{topo},z,n}}{p_{\text{topo},x,n}} - \frac{p_{z,k}}{p_{x,k}} \right|, \quad n = 0, \ldots, N-1,
y_k = \begin{bmatrix} p_{\text{topo},x,i} \\ p_{\text{topo},z,i} \end{bmatrix}, \tag{3.28}

where p_{\text{topo},x,i} corresponds to the p_x state and p_{\text{topo},z,i} to the p_z state. The uncertainty of o_b is considerably larger than the uncertainty of the topography estimates and therefore the noise of the topography estimates is neglected. The noise is modelled by calculating the change in y_k for the points o_b \pm P_b, where P_b is the uncertainty of o_b. The noise is defined as

e_z = \begin{bmatrix} \max(|p_{\text{topo},x,i}(o_b - P_b) - p_{\text{topo},x,i}|, |p_{\text{topo},x,i}(o_b + P_b) - p_{\text{topo},x,i}|) \\ \max(|p_{\text{topo},z,i}(o_b - P_b) - p_{\text{topo},z,i}|, |p_{\text{topo},z,i}(o_b + P_b) - p_{\text{topo},z,i}|) \end{bmatrix}. \tag{3.29}
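The point selection in (3.28) can be sketched as below; the function name is illustrative, and it assumes the topography points are in front of the ego vehicle (p_{topo,x} > 0) as stated in Section 3.2.4.

```python
import numpy as np

def topo_measurement_ratio(pz_px_ratio, topo_x, topo_z):
    """Select the topography point whose pz/px ratio best matches the
    image-derived ratio (o_b - c_y)/f_y of (3.27), as in (3.28).

    topo_x, topo_z: estimated topography positions, topo_x > 0.
    Returns the measurement y_k = [p_topo_x_i, p_topo_z_i]."""
    topo_x = np.asarray(topo_x)
    topo_z = np.asarray(topo_z)
    ratios = topo_z / topo_x
    i = int(np.argmin(np.abs(ratios - pz_px_ratio)))
    return np.array([topo_x[i], topo_z[i]])
```

Repeating the selection for the perturbed ratios obtained from o_b ± P_b then gives the noise vector of (3.29).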

3.3 Assume fixed width of object

First the filter and smoother are evaluated. Then all width estimates with an estimated distance between 5 and 50 m are extracted; if there are no estimates in that interval, all estimates are used. A weighted average of the width is then calculated over all estimates, where the covariance is used as weight. The equations used for the calculation of the weighted average are the same as explained in Algorithm 1. The filter and smoother are then evaluated once again with a fix width, and the states are

X = [p_x\; p_y\; p_z\; v_x\; v_y\; v_z]^T \tag{3.30}

and the noise is defined as

w = (w_{\ddot x}\; w_{\ddot y}\; w_{\ddot z}). \tag{3.31}

The motion model used is a modified version of the CV model described in Section 3.1.2, adapted to the states in (3.30) and the noise in (3.31).
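Algorithm 1 is not reproduced here, but a covariance-weighted average of the kind described above is commonly implemented as inverse-variance weighting; the sketch below assumes that form and an illustrative function name.

```python
import numpy as np

def weighted_width(widths, variances):
    """Covariance-weighted average of the width estimates, used to fix
    the width before the second filter/smoother pass. Each estimate is
    weighted by the inverse of its variance, so uncertain estimates
    (typically at long range) contribute less."""
    wgt = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(wgt * np.asarray(widths, dtype=float)) / np.sum(wgt))
```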


4 Results

The results of the thesis are presented in this chapter and there are mainly two types of figures in this section. The first is a comparison of states between two trackers. The second type of figure is a comparison of the AE over time between two trackers, where DGPS is used as ground truth. Since the DGPS only measures p_x, v_x, p_y and v_y, these are the states compared in the figures of AE. Lastly, a table of the RMSE values for Sequences 1 and 2 is presented. All tracking was done in metric units, but all states, AE and RMSE values have been normalized. This was done by dividing by the maximum value of the default tracker for that quantity; thus all quantities are comparable within a sequence. The default tracker is a non-causal tracker that uses the RTS smoother, the CV model described in Section 3.1.2 and classification measurements, and is further described in Table 4.2. The evaluated scenarios are:

• The difference with and without the RTS smoother.
• The effect of the v_z state.
• The difference between a CV and a CT motion model.
• The effect of line segments and L-shape measurements.
• The effect of using fix width.
• The effect of topography measurements using p_x.
• The effect of topography measurements using p_z and p_x.
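The normalization described above can be sketched as below. The function name is illustrative, and using the maximum absolute value is an assumption about how "maximum value of the default tracker" is taken for signed quantities.

```python
import numpy as np

def normalize_by_default(values, default_values):
    """Normalize a quantity (state, AE or RMSE trace) by the maximum
    value of the default tracker for the same quantity and sequence,
    so all trackers are comparable per quantity."""
    return np.asarray(values, dtype=float) / np.max(np.abs(default_values))
```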

The filters are tuned on a couple of sequences in order to get a tracker that is able to handle many different sequences and not only the evaluated ones. In all figures displaying the states there is a normalized confidence bound of 3 standard deviations, that is 99.7%. In all state figures it should be noted that the p_z state has an offset. This is because the camera is mounted in the upper part of the windshield and the bottom of the target vehicle is estimated.

4.1 Evaluation sequences

The sequences used for evaluating the models and filters described in Chapters 2 and 3 are described here.

Sequence 1

In Sequence 1 a vehicle drives towards the ego vehicle in the oncoming lane while the ego vehicle drives straight forward, as illustrated in Figure 4.1.

Figure 4.1: Images of evaluation Sequence 1.

Sequence 2

In Sequence 2 a vehicle drives in an S-shape away from the ego vehicle, while the ego vehicle drives straight forward. To get a general idea of the sequence, some frames are portrayed in Figure 4.2.

Sequence 3

The last sequence used for evaluation is a real-world recording. This sequence has no DGPS data and can thus not be validated, only compared. The sequence is interesting because of a large change in topography, of which a general idea can be obtained by studying the frames in Figure 4.3. The states displayed for this sequence are from the white vehicle driving in front of the ego vehicle.


Figure 4.2: Images of evaluation Sequence 2.

Figure 4.3: Images of evaluation Sequence 3.

4.2 Performance comparison

Each of the evaluated scenarios is compared in the following sections, where the scenarios are explained in Table 4.1. The different trackers and what types of measurements and motion models they use are explained in Table 4.2.

Table 4.1: The evaluated scenarios

Section | Comparison | Motion model | Measurement | Sequence
4.2.1 | The RTS smoother | CV | Classification measurements | Sequence 1
4.2.2 | The v_z state | CV | Classification measurements | Sequences 2 and 3
4.2.3 | CV and CT motion model | CV and CT | Classification measurements | Sequence 2
4.2.4 | Line segments and L-shape measurements | CV | Classification, line segments and L-shape measurements | Sequence 1
4.2.5 | Fix width | CV | Classification measurements | Sequence 1
4.2.6 | Topography measurements using p_x | CV | Classification and topography measurements using p_x | Sequences 1 and 3
4.2.7 | Topography measurements using p_z and p_x | CV | Classification and topography measurements using p_z and p_x | Sequences 1 and 3

Table 4.2: The evaluated trackers

Name | Motion model | Measurement | Uses RTS
Default tracker | CV | Classification measurements | Yes
Causal tracker | CV | Classification measurements | No
CV without v_z tracker | CV without v_z state | Classification measurements | Yes
CT tracker | CT | Classification measurements | Yes
Multi measurement tracker | CV | Classification, line segments and L-shape measurements | Yes
Fix width tracker | CV without w state | Classification measurements | Yes
Topography using p_x tracker | CV | Classification and topography measurements using p_x | Yes
Topography using p_z and p_x tracker | CV | Classification and topography measurements using the ratio between p_z and p_x | Yes

4.2.1 The difference with and without the RTS smoother

The result of comparing the default tracker and the causal tracker is presented here for Sequence 1. The initial states of the causal and non-causal trackers are identical in order to make a fair comparison of the RTS smoother. In Figure 4.4 the normalized states and covariances are illustrated and in Figure 4.5 the normalized AE of the two trackers is compared. When studying the confidence interval in Figure 4.4 it is seen that the uncertainty is significantly reduced for the non-causal tracker compared to the causal tracker. When studying the AE, it is seen that only the error for v_x is reduced. Notice that the RTS smoother gradually lowers the covariance when iterating, which is most clear for the p_y state in Figure 4.4. At some point the RTS smoother cannot propagate any more information backwards and therefore the decrease in covariance levels off, which in this case is around six seconds.

Figure 4.4: Comparing the normalized states of the default tracker (red) and the causal tracker (blue) for Sequence 1.

Figure 4.5: Comparing the normalized AE of the default tracker (red) and the causal tracker (blue) for Sequence 1.

4.2.2 The effect of the v_z state

Here the default tracker is compared with the CV without v_z tracker, so that the impact of the v_z state can be evaluated. The impact on the normalized AE of removing the v_z state is highlighted in Figure 4.7 for Sequence 2. It is seen that the removal of v_z does have a negative impact on the performance of p_x and v_x, but the effect is small. In fact, the performance was increased for the p_x state between six and twelve seconds, but the error was much larger at the end of the sequence. As seen in Figure 4.6, the covariances of the states other than p_z have barely changed at all. The p_z state seems to deviate in the beginning and end of the sequence compared to the model with a v_z state. The same behaviour is seen more clearly in Figure 4.8, which is from the third evaluation sequence. No explanation for the deviation was found. For the states in Sequence 3, a small difference in the p_x, p_y and w states is observed.

Figure 4.6: Comparing the normalized states of the default tracker (blue) and the CV without v_z tracker (red) for Sequence 2.

Figure 4.7: Comparing the normalized AE of the default tracker (blue) and the CV without v_z tracker (red) for Sequence 2.

Figure 4.8: Comparing the normalized states for the default tracker (blue) and the CV without v_z tracker (red) for Sequence 3.

4.2.3 The difference between a CV and a CT motion model

In this section the CV without v_z tracker is compared to the CT tracker. Sequence 2 is used for comparing the normalized AE of the two trackers, which is portrayed in Figure 4.10. The performance of the two motion models is similar, although the CV model seems to perform slightly better in this scenario. As seen in Figure 4.9, the uncertainties of the v_x and v_y states grow faster for the CT model at the end of the sequence, which is at a larger distance, and as a result v_x starts to rapidly deviate from the true velocity at the end, as presented in Figure 4.10.

Figure 4.9: Comparing the normalized states of the CV without v_z tracker (blue) and the CT tracker (red) for Sequence 2.

Figure 4.10: Comparing the normalized AE of the CV without v_z tracker (blue) and the CT tracker (red) for Sequence 2.

4.2.4 The effect of line segments and L-shape measurements

In this section the default tracker is compared with the multi measurement tracker. In Figure 4.11 the normalized AE of the default tracker and the multi measurement tracker are displayed. It is seen that the normalized AE for v_x is generally lowered for the multi measurement tracker, but the error for p_x is increased. For p_y and v_y the normalized AE is basically the same. It can thus be concluded that the additional measurements do not contribute much new information about the tracked object and as a result the performance increase is negligible.

Figure 4.11: Comparing the normalized AE of the default tracker (blue) and the multi measurement tracker (red) for Sequence 1.

4.2.5 The effect of using fix width

In this section the default tracker is compared with the fix width tracker. As seen in Figure 4.12, the normalized AE for the fix width tracker has increased significantly for the p_x and v_x states.

Figure 4.12: Comparing the normalized AE of the default tracker (blue) and the fix width tracker (red) for Sequence 1.

4.2.6 The effect of topography measurements using p_x

The default tracker is compared with the topography using p_x tracker. The normalized states of the two trackers are portrayed for Sequence 1 in Figure 4.13 and the normalized AE is compared in Figure 4.14. There is a clear difference in the p_z state between the two trackers and the variance of the p_z state is clearly lowered, so the certainty of the state is increased. The increased certainty of p_z also affects the estimates of the other states. Due to the information gained on p_z there is a clear improvement in the normalized AE for the p_x and v_x states. Thus, it can be stated that the topography measurements using p_x add new information to the system and the performance is therefore increased. For Sequence 3, the real-world sequence, the topography is more dynamic and the states are seen in Figure 4.15. There is a clear difference in the states. The problem is that some of the topography measurements are rejected as outliers, causing drastic changes in p_z and v_z when the measurements are rejected, and thus the default tracker is probably more reliable in this case.

Figure 4.13: Comparing the normalized states for the default tracker (blue) and the topography using p_x tracker (red) for Sequence 1.

Figure 4.14: Comparing the normalized AE for the default tracker (blue) and the topography using p_x tracker (red) for Sequence 1.

Figure 4.15: Comparing the normalized states for the default tracker (blue) and the topography using p_x tracker (red) for Sequence 3.

4.2.7 The effect of topography measurements using p_z and p_x

The results of comparing the default tracker with the topography using p_z and p_x tracker for Sequence 1 are presented in this section. The differences between the normalized states for Sequence 1 are illustrated in Figure 4.16 and the normalized AE is shown in Figure 4.17. For this type of topography measurement, the difference in the p_z state is smaller than for the other type, but the variance is generally decreased. The impact on the other states has also decreased, but the AE has still decreased noticeably for the p_x and v_x states, so new information about the system is still gained. When analyzing Sequence 3, the states have changed quite a lot, as seen in Figure 4.18. Here the topography measurements are received during the whole sequence. It is impossible to say which tracker is more correct, but the distance measurement is generally underestimated at long range, which gives credibility to the tracker using topography measurements.

Figure 4.16: Comparing the normalized states for the default tracker (blue) and the topography using p_z and p_x tracker (red) for Sequence 1.

Figure 4.17: Comparing the normalized AE for the default tracker (blue) and the topography using p_z and p_x tracker (red) for Sequence 1.

Figure 4.18: Comparing the normalized states for the default tracker (blue) and the topography using p_z and p_x tracker (red) for Sequence 3.

4.2.8 RMSE for Sequences 1 and 2

In this section the normalized RMSE is calculated for each of the evaluated trackers for Sequence 1 and Sequence 2. The results are presented in Tables 4.3 and 4.4 and generally confirm what has already been presented in the separate sections of this chapter, with the trackers using topography performing best. For Sequence 2 it is instead the topography tracker using both p_z and p_x that performs best. Also, the performance loss of removing the v_z state is considerably larger in Sequence 1 than in Sequence 2, and the CT motion model performs better than the CV without v_z for Sequence 1. A keen eye sees that the performance of the p_y and v_y states changes a lot between the different trackers in Sequence 1; however, the errors of the p_y and v_y states are much smaller than the errors in p_x and v_x. This is seen when comparing the normalized combined RMSE for position and velocity with the errors of each quantity for Sequence 1.

Table 4.3: Relative RMSE over time for the evaluated scenarios in Sequence 1

Scenario | p | v | p_x | v_x | p_y | v_y
Causal tracker | 0.9654 | 1.8876 | 0.9654 | 1.9224 | 0.9213 | 1.0812
Default tracker | 1 | 1 | 1 | 1 | 1 | 1
CV without v_z tracker | 1.3526 | 1.0552 | 1.3520 | 1.0604 | 1.8214 | 0.9585
CT tracker | 0.9440 | 0.8900 | 0.9438 | 0.8693 | 1.1266 | 1.2043
Multiple measurement tracker | 1.1129 | 0.8321 | 1.1122 | 0.8247 | 1.6599 | 0.9566
Fix width tracker | 1.8744 | 1.6400 | 1.8749 | 1.6661 | 1.2403 | 1.0676
Topography using p_x tracker | 0.7328 | 0.5873 | 0.7312 | 0.5600 | 1.6932 | 0.9541
Topography using p_z and p_x tracker | 0.8294 | 0.7588 | 0.8288 | 0.7456 | 1.3190 | 0.9677

Table 4.4: Relative RMSE over time for the evaluated scenarios in Sequence 2

Scenario | p | v | p_x | v_x | p_y | v_y
Causal tracker | 1.6814 | 1.6434 | 1.7374 | 1.6697 | 1.1431 | 1.6258
Default tracker | 1 | 1 | 1 | 1 | 1 | 1
CV without v_z tracker | 1.2393 | 1.1443 | 1.2711 | 1.2823 | 0.9497 | 1.0448
CT tracker | 1.7206 | 1.4719 | 1.7877 | 1.9821 | 1.0450 | 1.0107
Multiple measurement tracker | 1.1190 | 1.0724 | 1.1350 | 1.1374 | 0.9698 | 1.0160
Fix width tracker | 4.4766 | 3.5345 | 4.7377 | 5.3539 | 0.9873 | 1.4074
Topography using p_x tracker | 0.7451 | 1.1574 | 0.7016 | 1.3143 | 1.0271 | 1.0429
Topography using p_z and p_x tracker | 0.6425 | 0.9155 | 0.5778 | 0.7565 | 1.0193 | 1.0052

4.3 Summary of results

In this thesis non-causal algorithms and concepts have been implemented and evaluated. Different techniques, measurements and motion models have been tested in order to obtain as accurate a non-causal method as possible. The different algorithms are evaluated using normalized RMSE and AE, where DGPS was

References
