Stereo Camera Calibration Accuracy in Real-time Car Angles Estimation for Vision Driver Assistance and Autonomous Driving


Björn Algers June 20, 2018


Björn Algers

Supervisors: Ove Andersson, Dep. of Physics, Umeå Universitet

Simon Blomstrand, Veoneer AB

Examiner: Alexandr Talyzin

Dep. of Physics, Umeå Universitet

Department of Physics, Umeå University, SE-90187 Umeå, Sweden

Abstract

The automotive safety company Veoneer is a producer of high-end vision-based driver assistance systems, but knowledge about the absolute accuracy of its dynamic calibration algorithms, which estimate the vehicle's orientation, is limited.

In this thesis, a novel measurement system is proposed for gathering reference data on a vehicle's orientation while it is in motion, more specifically the pitch and roll angles of the vehicle. The focus has been on estimating how the uncertainty of the measurement system is affected by errors introduced during its construction, and on evaluating its potential as a viable tool for gathering reference data for algorithm performance evaluation. The system consisted of three laser distance sensors mounted on the body of the vehicle, and a range of data acquisition sequences with different perturbations was performed by driving along a stretch of road in Linköping with weights loaded in the vehicle. The reference data were compared to camera system data, where the bias of the calculated angles was estimated, along with the dynamic behaviour of the camera system algorithms.

The experimental results showed that the accuracy of the system exceeded 0.1° for both pitch and roll, but no conclusions about the bias of the algorithms could be drawn, as there were systematic errors present in the measurements.

Sammanfattning (Swedish abstract)

The automotive safety company Veoneer develops advanced camera systems for driver assistance, but knowledge about the absolute accuracy of its dynamic calibration algorithms, which estimate the vehicle's orientation, is limited.

In this thesis, a new measurement system is developed and tested for gathering reference data on a vehicle's orientation while it is in motion, more specifically its pitch and roll angles. The focus has been on estimating how the uncertainty of the measurement system is affected by errors introduced during its construction, and on investigating its potential as a viable option for gathering reference data for evaluating the performance of the algorithms.

The system consisted of three laser distance sensors mounted on the body of the vehicle. A series of measurement trials with different perturbations was performed by driving along a stretch of road in Linköping with weights loaded in the vehicle. The collected reference data were compared to data from the camera system, where the bias of the computed angles was estimated and the dynamic properties of the camera system algorithms were evaluated. The results of the trials showed that the accuracy of the measurement system exceeded 0.1° for both the pitch and roll angles, but no conclusions about any bias in the algorithms could be drawn, as systematic errors arose in the measurement results.

Acknowledgements

I would like to thank first and foremost Veoneer for the chance to prove my worth as an engineer at their expense; it has been a good learning experience. I would also like to thank Simon Blomstrand for all the constructive comments and help throughout the entire process. A huge thanks to the statistics master Xijia Liu of Umeå University for his insight on the statistical part of the error analysis. A big thank you to friends and family who took the time to glance through the thesis in search of any mishaps and shenanigans.

I would also like to thank Ove Andersson for his kind support and insightful comments on the report during the writing process, and also Alexandr Talyzin for committing to examine the project.


Contents

1 Introduction
1.1 Problem statement
1.2 Project goals
1.2.1 Car orientation measurement
1.2.2 Post data processing
1.2.3 Uncertainty estimation
1.2.4 Accuracy evaluation
1.3 Limitations
1.4 Nomenclature

2 Background and theory
2.1 Pinhole camera model
2.2 Camera calibration
2.2.1 Intrinsic parameters
2.2.2 Extrinsic parameters
2.2.3 SECAL - Static Extrinsic Calibration
2.2.4 DECAL - Dynamic Extrinsic Calibration
2.3 Stereo camera setup
2.4 Coordinate systems
2.5 Uncertainty estimation
2.5.1 Monte-Carlo simulations
2.5.2 Directional statistics
2.5.3 Resolution uncertainty
2.6 Signal filtering
2.6.1 Low pass filter
2.6.2 Band stop/notch filter
2.7 Cross correlation
2.8 Related work

3 Method
3.1 Experimental setup and general procedure
3.1.1 Laser measurement rig
3.1.2 Car and camera system
3.1.3 Mathematical models
3.2 System calibration
3.3 Data collection
3.4 Data post-processing
3.5 Laser data filtering
3.6 Uncertainty estimation

4 Results
4.1 System simulations
4.2 Calibration results
4.3 SECAL - Static Extrinsic Calibration
4.4 Road measurements

5 Discussion
5.1 Simulation results
5.2 Laser system calibration
5.3 Experimental results

6 Concluding remarks
6.1 Conclusions
6.2 Further work
6.2.1 Relative orientation measurement
6.2.2 System construction
6.2.3 Additional sensors

References


1 Introduction

Using computer vision to solve complex problems has become common practice in many fields, where the goal is often to identify objects in an image or to calculate the position and orientation of a system attached to the camera generating the images. In the mid sixties, it was thought that a short summer project would be sufficient to solve this type of problem [1], but researchers are still struggling with the same problems today, although more successfully than ever. In 2015, a team of computer scientists managed to correctly identify 96.5% of the contents of images in a competition using their software based on deep neural networks [2], and the accuracy achievable when calculating position and orientation is indeed very high - under certain circumstances it is possible to determine the translation of a camera with an accuracy exceeding 5 mm [3].

Veoneer is a company based in Linköping operating at the very edge of this field, and it is where this thesis has taken place. Their camera systems are implemented in the safety systems of several large automotive companies, where the camera aids in tasks like object detection, emergency braking and lane keeping. The long term goal is to produce systems which are fully autonomous. The performance of these systems depends heavily on the data used to make decisions, especially when it comes to altering the velocity of a vehicle moving at 90 km/h, where faulty decisions may have catastrophic results. Accurate calibration of the camera systems is therefore essential, and gathering information about their performance in order to improve it presents a challenge.

1.1 Problem statement

Today's camera algorithms developed at Veoneer, which estimate the car's orientation (roll, pitch and yaw) relative to the road, only have known relative performance; in other words, the precision and repeatability of the algorithms are well known and understood. There is, however, no well defined reference method which enables establishing ground truth information for absolute accuracy estimations.

This Master's thesis aims to establish a reference method for measuring ground truth information about the road geometry, where the goal is to be able to correlate the dynamic calibration results from the algorithms with the reference method, such that any bias can be estimated and removed.


1.2 Project goals

The goals of this Master’s thesis can be stated as follows:

• Establish, implement and evaluate a method for measuring the orientation of a moving vehicle with high accuracy. The pitch and roll angles are to be measured with an accuracy higher than 0.1 degrees.

• Use this measurement method to gather ground truth data of the vehicle’s orien-tation, which in turn is used to evaluate the performance of the following dynamic calibration algorithms:

– The previously deployed algorithm which estimates the pitch angle of the camera system using Visual Horizon Recognition.

– The concept algorithm currently under development, which utilizes visual odometry to estimate the pitch and roll angles of the camera system.

Below follows a more detailed description of the problem that is to be solved, and the general approach to reach the goals set for this thesis.

1.2.1 Car orientation measurement

The orientation of the vehicle is to be measured, and this constitutes quite a challenge, since the aim is an accuracy better than 0.1 degrees for both the pitch and roll angles. A car is a rather large and bulky object with almost no flat reference surfaces, and adding to this challenge, the measurements are to be made as the vehicle travels the long winding roads of Linköping. The measurement system first has to be calibrated with the vehicle standing still to validate its accuracy, and then a sequence of measurements in differing situations will be performed to evaluate how it behaves and whether the gathered data is meaningful. In parallel with the car orientation reference measurements, data will be gathered using Veoneer's camera systems and stored for later comparison and analysis.

1.2.2 Post data processing

After the measurements have been made, the reference system data will be in raw form and need to be processed in different ways before they can be analyzed. The data will be registered using the on-board computer systems of the vehicle, with all their specific formats, and will have to be transformed from raw data into a MATLAB compatible format, as MATLAB is the software that will be used to perform the analysis.


1.2.3 Uncertainty estimation

A metric has to have an accuracy measure attached to it for it to be meaningful, and the error estimation of the measurements to be made poses quite a challenge as well. As the setup is experimental, there are presently no well established methods for estimating the error of this particular configuration. The sources of error are many, such as misalignment, mistakes made in measuring the dimensions, and vibrations, so the analysis will be carried out by simulating the measurement system while introducing errors.

1.2.4 Accuracy evaluation

After gathering information about the orientation of the car in different situations using the measurement system, the results from the measurements are to be compared to the corresponding results from the algorithms. These algorithms calculate the orientation of the car using the video stream from the on-board vehicle camera. Statistical tests will be performed to evaluate if any significant differences can be found, as well as qualitative analysis of the distributions of the angles.

1.3 Limitations

Using the results of the measurements to evaluate algorithms that estimate the ego motion of the moving vehicle, i.e. highly dynamic orientation estimation, is outside the scope of this thesis, although the system could most certainly be used to gather such information. The results will only be used to evaluate the two mentioned algorithms in their present state. The number of measurement runs will also have to be limited, as data collection is a time consuming activity.


1.4 Nomenclature

Below in table 1 follow the abbreviations commonly used in this thesis.

Table 1 – Common abbreviations used in this thesis.

Abbreviation   Explanation
SECAL          Static Extrinsic Calibration
DECAL          Dynamic Extrinsic Calibration
LTIC           Linear Time-Invariant Continuous Time (System)
LTID           Linear Time-Invariant Discrete (System)
CAN-bus        Controller Area Network - automotive communications standard
FIR            Finite Impulse Response
IIR            Infinite Impulse Response
IQR            Interquartile Range (statistical variability measure)

In table 2 follows a list of regularly occurring mathematical notation used in this thesis.

Table 2 – Common mathematical notation used in this thesis.

Item                 Notation example
Vector               x
Matrix               X, Σ
Likelihood           P(A), P(A|B)
Partial derivative   ∂/∂x
Random variable      X
Expected value       E[X]


2 Background and theory

This section discusses some background knowledge that is useful for reading and understanding this thesis, as well as some theory behind the methods used to derive the results in later sections. Lastly, some related work is discussed, along with how it relates to the methods implemented in this thesis.

2.1 Pinhole camera model

In a real pinhole camera, all rays pass through a small hole in the front end of the camera, and the image is projected upside down on the back end of the camera chamber. A common camera, however, consists of an intricate system of optics, and it is therefore convenient to model the camera system as a pinhole camera where all the complex optics are approximated. The model in this setting describes the process by which points in the real world are projected onto the image plane. A very good introduction to this concept can be found in the book Computer Vision: Models, Learning and Inference [7], which covers a large portion of this field, ranging from the basics to complex 3D reconstruction applications. It is mathematically equivalent, and easier to visualize, to picture the image plane on which the real world 3D points are projected as lying in front of the optical center of the camera. Fig.(1) shows a conceptual drawing of the pinhole camera system.


Figure 1 – The basic concept of the pinhole camera, with the optical center, the optical axis, principal point and the image plane where the real world point is projected. A line is drawn from the real world 3D point w through the optical center of the camera, and x shows the intersection point with the image plane. The point which the optical axis intersects with the image plane is called the principal point. Here two coordinate systems are depicted, where the real world coordinate system is centered on the optical center, and the image plane coordinate system is centered on the corner of the image. Image from [7].

The problem can mathematically be described as finding the position x = [x, y]^T on the image plane where the 3D point w = [u, v, w]^T is imaged [7]. The pinhole camera model is, however, not a deterministic model, as it is subject to noise; rather, it describes the likelihood P(x|w) of observing a point x in the image plane given that it is the projection of a 3D point w in the real world [7].

As this thesis does not make use of the theoretical underpinnings of the pinhole camera so much as the applications it is used in, the full pinhole model is stated here in its abbreviated form [7],

x = pinhole[w, Λ, Ω, τ]. (1)

Here x is the position on the image plane where the real world point w is projected. Ω is a 3 × 3 rotation matrix describing the orientation of the camera system, and τ is a 3 × 1 translation vector describing the position of the camera relative to an arbitrary world coordinate system. Finally, we have Λ, the intrinsic parameter matrix, describing the focal length of the camera, the center of the image plane onto which the real world points are projected, and the skew of the image [7].
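As an illustration, the projection in eq.(1) can be sketched in a few lines of code. The analysis in this thesis was done in MATLAB, but a minimal Python sketch is given here; all numbers (focal length, principal point, the 3D point) are made up for illustration.

```python
import numpy as np

def pinhole_project(w, Lam, Omega, tau):
    """Project a 3D world point w onto the image plane, as in eq. (1).

    Lam   : 3x3 intrinsic matrix (focal lengths, principal point, skew)
    Omega : 3x3 rotation matrix (camera orientation)
    tau   : length-3 translation vector (camera position)
    """
    w_cam = Omega @ w + tau      # world -> camera coordinates
    x_hom = Lam @ w_cam          # apply the intrinsic parameters
    return x_hom[:2] / x_hom[2]  # perspective division -> pixel coordinates

# A camera at the origin looking down the z-axis, focal length 1000 px,
# principal point (640, 360), no skew (illustrative values only).
Lam = np.array([[1000.0, 0.0, 640.0],
                [0.0, 1000.0, 360.0],
                [0.0, 0.0, 1.0]])
Omega = np.eye(3)
tau = np.zeros(3)
x = pinhole_project(np.array([0.5, 0.2, 10.0]), Lam, Omega, tau)
```

For this point the projection lands slightly right of and above the principal point, as expected for a point up and to the right of the optical axis.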


However, this model does not deal with the issue of radial distortion, which must be addressed if accurate measurements are to be made with the system. This is commonly modeled as a polynomial function of the radial distance from the center of the image [7], where the corrected image plane positions [x′, y′] are expressed as functions of the original positions [x, y] by [7]

x′ = x(1 + β₁r² + β₂r⁴)
y′ = y(1 + β₁r² + β₂r⁴). (2)

Here β₁ and β₂ are parameters which control the amount of distortion. In a practical application, these distortions are applied after the perspective projection but before the intrinsic parameterization, such that the distortion is relative to the optical axis of the camera system rather than to the arbitrary world coordinate system.

2.2 Camera calibration

There are a few different methods used to determine the properties of the camera system. Here the intrinsic parameters are defined as the internal specifications of the camera, namely Λ as mentioned in eq.(1), while the extrinsic parameters Ω and τ describe the external properties of the camera - its orientation and translation in an arbitrary coordinate system. In a stereo camera setup, Veoneer uses the convention that the relative orientation and translation between the master and slave cameras are defined as intrinsic parameters, rather than extrinsic.

2.2.1 Intrinsic parameters

In this thesis, the camera calibration toolbox provided with MATLAB's image processing toolbox is used to calibrate a DSLR camera, which in turn is used when calibrating the measurement system built to solve the problem. The calibration functions in MATLAB are based on the techniques derived and described in [8], where initially a closed form analytic solution is presented and then refined using a non-linear optimization technique based on the maximum likelihood criterion. The theory of solving these problems is beyond the scope of this thesis, but further reading on the subject can be found in [8] and [9].

The general approach when calibrating a camera is to take a photograph of an object with known geometry, typically a checkerboard pattern printed on a flat surface, as shown in fig.(2). The known geometry of the object can then be used to infer the parameters of the camera using the techniques described in [8] or [9].


Figure 2 – A typical checkerboard pattern mounted on a flat surface. Detecting the corners of the squares is done using simple image processing, and given the known size of the squares and assuming a flat surface the information can be used to determine the intrinsic parameters of the camera system.

As the parameters of the camera are approximated, it is inevitable that some errors are introduced. One way of quantifying this is by defining the reprojection error [7], which is a measure of how close the estimate of a 3D point is to its true projection [7]. This is done by projecting the points on the checkerboard from world coordinates into pixel coordinates in the image; the reprojection error is essentially the Euclidean distance, measured in pixels on the image plane, between the estimated point and the true point [7]. The mean reprojection error over a set of images provides a good estimate of the error after the calibration sequence.
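The metric itself is simple. A Python sketch with made-up residuals (the corner coordinates below are invented, not measured data from the thesis):

```python
import math

def mean_reprojection_error(projected, observed):
    """Mean Euclidean pixel distance between reprojected and observed points."""
    dists = [math.hypot(px - ox, py - oy)
             for (px, py), (ox, oy) in zip(projected, observed)]
    return sum(dists) / len(dists)

# Three checkerboard corners reprojected with small residuals (made-up numbers).
proj = [(100.2, 200.1), (150.0, 200.0), (99.8, 250.3)]
obs  = [(100.0, 200.0), (150.1, 199.9), (100.0, 250.0)]
err = mean_reprojection_error(proj, obs)  # a fraction of a pixel here
```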

2.2.2 Extrinsic parameters

Estimating the extrinsic parameters of a camera system is done by considering its orientation and translation relative to a known real world object. This problem is often referred to as the perspective-n-point (PnP) problem, and it is an area of vast research with many papers readily available on the web. It is beyond the scope of this thesis to dive into the theory behind this indeed very interesting field, but it is worth mentioning the general idea and problem solving approach. For further reading, references [7] and [10] are recommended.

Given a set of I 3D points {w_i}, i = 1, ..., I, with their corresponding projections {x_i}, i = 1, ..., I, on the image plane, the orientation matrix Ω and translation vector τ are to be estimated such that the predicted model agrees well with the observed 2D points [7]. Formally, this can be stated mathematically as

Ω̂, τ̂ = argmax_{Ω,τ} Σ_{i=1..I} log P(x_i | w_i, Λ, Ω, τ), (3)

which is a maximum likelihood learning problem [7]. Essentially this means maximizing the result of eq.(3) by finding the optimal orientation matrix Ω and translation vector τ.

2.2.3 SECAL - Static Extrinsic Calibration

The particular application of the camera system at Veoneer is to gather information about the world surrounding a vehicle, and one of the first steps in achieving this is to perform a so-called Static Extrinsic Calibration. Here it is assumed that the camera has already been intrinsically calibrated and that the parameter Λ is known. This procedure serves the purpose of generating initial values for the extrinsic orientation parameter Ω of the camera system, which will be further refined in the subsequent Dynamic Extrinsic Calibration procedure. The translational parameter τ is defined by the operator performing the procedure, i.e. it is not approximated by the algorithms.

Generally described, a special calibration target is placed at a known position in the camera's field of view, and the operator initializes the calibration sequence, which then automatically computes the roll, pitch and yaw angles of the camera based on the intrinsic parameter Λ and the previously defined translational parameter τ. In fig.(3), a basic sketch of the calibration situation is visualized.


Figure 3 – A schematic sketch of the extrinsic relations between the camera and the target. The target is aligned with the vehicle and the camera using line lasers of the kind commonly used in construction, which are assumed to be completely orthogonal and aligned with the target. Deviations from this assumption are then detected by the calibration and used to generate an initial estimate of the orientation of the camera relative to the vehicle.

2.2.4 DECAL - Dynamic Extrinsic Calibration

At the core of this thesis is the Dynamic Extrinsic Calibration, whose absolute accuracy is what we seek to evaluate. This is a procedure that is continuously and automatically performed throughout the lifespan of the system, and its purpose is both to refine the initial values for the orientation Ω of the system and, even more importantly, to compensate for any changes in the orientation of the car that in turn alter the orientation of the camera system. This might for instance be asymmetric loading of the vehicle or long term degradation of the camera mounting brackets. The main goal of this calibration is to maintain a precise and accurate measure of the camera orientation Ω throughout its lifespan, to ensure that algorithms further down the software chain have correct input data.

There are several ways of estimating these parameters, and at Veoneer multiple methods were employed at the time of this project. The two algorithms that are to be evaluated in this thesis make use of two different methods. The previously deployed algorithm makes use of Visual Horizon Recognition, which essentially estimates the roll and pitch of the camera by identifying the horizon in the image; a similar implementation is described in [11]. The specific implementation of this algorithm is, however, beyond the scope of this thesis. At Veoneer, this algorithm is only used to estimate the pitch of the vehicle, and it is the performance of this estimate that will be evaluated.


The concept algorithm currently under development uses visual odometry to estimate a road surface plane in front of the car, where the normal vector of this plane can be used to estimate the camera pitch and roll angles relative to the road surface plane. A similar implementation is performed in [12], but the implementation of this algorithm is beyond the scope of this thesis. The general idea of visual odometry is to track the movement of points in 3D space between consecutive images in a video sequence, thus determining their movement relative to the camera providing the images [12]. In fig.(4), an example of the resulting point tracks from the visual odometry algorithm is presented.

Figure 4 – The green lines represent point tracks where single points have been tracked over several consecutive images. The movement of these points is then used to estimate the movement of the camera, as well as the road plane in front of the vehicle. Image generated using Veoneer’s in house software.

At Veoneer, this algorithm is used to estimate both the roll and pitch angles of the vehicle, and it is the performance of these estimations that will be evaluated.

For further reference, the abbreviations stated in table 3 will be used in the rest of the thesis for the two algorithms.

Table 3 – Abbreviations for the two algorithms.

Full name                              Short name
Road Plane Estimation Algorithm        RP
Visual Horizon Recognition Algorithm   HZ

2.3 Stereo camera setup

One of the main applications of the pinhole camera model in computer vision is to determine the position of points in 3D space using two cameras [7]. Given that the two cameras are calibrated and their positions known - the parameters Λ, Ω, τ are known for both cameras - and that the two cameras are observing the same point w, where the respective projections x1 and x2 on the image planes are known, the position of w in the world can be determined by triangulation [7]. This boils down to an optimization problem that can formally be written as

ŵ = argmax_w Σ_{i=1..2} log P(x_i | w, Λ_i, Ω_i, τ_i), (4)

where we manipulate the variable w until a predefined convergence criterion is met [7]. In fig.(5), a schematic is shown describing the situation with two cameras observing the same point w in 3D space.

Figure 5 – Here two situations are presented. In a) there is a discrepancy between the observed points x1, x2 and the world point w, meaning the estimate is not optimal. In b) w has been estimated differently, corresponding better with x1 and x2. Image from [7].

2.4 Coordinate systems

For clarity, it is important to define the coordinate system to which the orientation of the camera system and the measurement system relate. The convention used at Veoneer, and in this thesis, is ACLC (AUTOSAR Camera Located Coordinate System): the origin of the coordinate system is centered at the camera position (ISO 8855). The X-axis points in the forward direction of the car, parallel to the ground. The Y-axis points to the left when looking in the direction of the car, also parallel to the ground. The Z-axis points up, orthogonal to the ground. Fig.(6) shows a visualization of the coordinate system, and additional clarifications are found in fig.(7) and fig.(8).


Figure 6 – The coordinate system with the camera positioned arbitrarily somewhere in the car. The roll angle ω is positive when rotating the car clockwise when looking in the direction of the x-axis, the pitch angle θ is positive when the front end of the car is pointing downwards, and the yaw angle φ is positive when the vehicle is turning left.

Figure 7 – A clarification over the sign convention for the pitch angle θ, where the angle increases as the front of the car points downward.

Figure 8 – The roll angle ω is positive when the car is rotating clockwise looking in the direction which the car is driving.

2.5 Uncertainty estimation

An important part of evaluating a measurement system is to assess the uncertainty that affects its result. When dealing with systems consisting of few interacting parts that add to the total uncertainty, one can easily estimate it using Gauss' law of uncertainty propagation.

Given N independent random variables {x_i}, i = 1, ..., N, each with its respective uncertainty measure s_{x_i}, which are combined in the function f, the uncertainty s_f of the function f can be expressed as

s_f = √( Σ_{i=1..N} (∂f/∂x_i)² s_{x_i}² ), (5)

according to [13]. However, when a system becomes too complex for analytic calculation of the uncertainty estimate to be feasible, Monte Carlo simulation methods are readily applied to estimate the parameters of interest.
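As a toy illustration of eq.(5), consider a pitch-like angle computed from two distance readings a baseline apart; the geometry and all numbers below are made up and greatly simplified compared to the actual laser rig.

```python
import math

# f(l1, l2) = atan((l1 - l2) / d): an angle from two distance readings
# l1, l2 taken a baseline d apart (simplified, hypothetical geometry).
l1, l2, d = 0.512, 0.507, 2.0   # metres, illustrative values
s_l = 0.001                     # 1 mm standard uncertainty per sensor

# Partial derivatives of atan((l1 - l2)/d) w.r.t. l1 and l2 have the
# same magnitude, (1/d) / (1 + u^2), with u = (l1 - l2)/d.
u = (l1 - l2) / d
dfdl = (1.0 / d) / (1.0 + u * u)

# Eq. (5): two independent, equal contributions combine in quadrature.
s_f = math.sqrt(2.0) * dfdl * s_l       # radians
s_f_deg = math.degrees(s_f)             # roughly 0.04 degrees here
```

Even this simple case shows how the 0.1° accuracy goal translates into millimetre-level demands on the distance sensors.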

2.5.1 Monte-Carlo simulations

Monte Carlo is a method to obtain statistical properties of a phenomenon when analytic solutions are not feasible [13].

We define a random variable X with unknown expected value E[X] = θ and unknown probability distribution F_X, which depends on the J random variables {Y_i}, i = 1, ..., J, each with its respective expected value E[Y_i] = χ_i and probability distribution F_{Y_i}. We let θ̂ be the estimator of θ and F̂_X the estimator of F_X, and the goal is to make an inference about the unknown parameters θ and F_X.

The general approach is as follows [13]:

1. Define a reasonable random model for the problem.
2. For r = 1, 2, ..., R:
   - Generate random samples y_1*, y_2*, ..., y_J* ~ F_{Y_i}.
   - Compute θ̂_r* using y_1*, y_2*, ..., y_J*.
3. Calculate θ̂ as the mean of θ̂_1*, θ̂_2*, ..., θ̂_R*.
4. Calculate F̂_X from the empirical cumulative distribution of θ̂_1*, θ̂_2*, ..., θ̂_R*.
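The procedure above can be sketched in a few lines. The thesis analysis used MATLAB; here is a Python sketch applied to a hypothetical two-sensor angle model with made-up numbers (the same kind of toy geometry as in the uncertainty propagation example, not the actual rig).

```python
import math
import random
import statistics

random.seed(1)  # reproducible draws

def angle(l1, l2, d=2.0):
    """Angle from two distance readings a baseline d apart (toy model)."""
    return math.degrees(math.atan((l1 - l2) / d))

# Step 1: random model - each sensor reads its true distance plus
# Gaussian noise with 1 mm standard deviation (illustrative numbers).
R = 20000
samples = [angle(random.gauss(0.512, 0.001), random.gauss(0.507, 0.001))
           for _ in range(R)]

# Steps 3-4: the sample mean estimates theta, the spread estimates the
# uncertainty, and the sorted samples give the empirical distribution.
theta_hat = statistics.mean(samples)   # close to atan(0.005/2) in degrees
s_theta = statistics.stdev(samples)    # close to the propagated value
```

For this nearly linear model the Monte Carlo spread agrees with Gauss' propagation law, which is a useful sanity check before trusting the simulation on the full, non-linear system.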

The law of large numbers states that when performing a trial a large number of times, the average of the results will converge towards the true expectation value [13]. One challenge is to find a balance between computing power and time, so the number of trials is rather arbitrary - it has to be sufficiently large [13]. A related example of the application of Monte Carlo simulations can be found in [14], where the authors estimate the uncertainty of dating sediment records and peat cores. An uncertainty model already in regular use in their field, developed in the seventies, is the basis for the process, and the simulations were implemented in basic calculation software [14]. When it comes to combining the results from the simulations with actual measurements, one has to consider the implications of the combination of probabilities that occur. This is formally stated in Bayes' theorem as [13]

P(A|B) = P(B|A) P(A) / P(B), (6)

which describes the probability of occurrence A being true, given that B is true.

2.5.2 Directional statistics

Random variables defined as directions, where the total probability exists on the unit circle, are said to have a circular distribution [15]. The domain of the circular random variable φ is [0, 2π), and in the continuous case there exists a probability density function f(φ) with the following properties [15]:

1. f(φ) ≥ 0.
2. ∫₀^{2π} f(φ) dφ = 1.
3. f(φ) = f(φ + 2πk) for any integer k.

A circular random variable φ is said to have a circular normal distribution, or a von Mises distribution, if it has the following probability density function [15]:

f(φ; µ, κ) = e^{κ cos(φ − µ)} / (2π I₀(κ)), (7)

where µ is the expected value of φ, κ is a concentration parameter analogous to the inverse variance of a standard normal distribution, σ² = 1/κ, and I₀ is the modified Bessel function of order zero [15]. In fig.(9), a visualization of the circular normal distribution is shown for µ = 0 and a few different κ.


Figure 9 – Different values of κ generate different distributions. When κ is small, the angular deviation is large, and the tails of the distribution "leak over" to the opposite side, as the distribution is periodic. When κ is larger, the distribution rapidly converges to zero closer to π and −π. Image from [15].

The application of this field of statistics comes to use if, e.g., one wants to generate random angles described by a circular distribution.
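Python's standard library can draw such angles directly. The sketch below samples a strongly concentrated von Mises distribution and checks that its spread matches the σ ≈ 1/√κ relation; µ and κ are arbitrary illustrative values.

```python
import math
import random
import statistics

random.seed(0)  # reproducible draws

# Draw circular-normally distributed angles (eq. 7) around mu = 0 with
# concentration kappa = 100, i.e. sigma ~ 1/sqrt(kappa) = 0.1 rad.
kappa = 100.0
angles = [random.vonmisesvariate(0.0, kappa) for _ in range(10000)]

# vonmisesvariate returns values in [0, 2*pi); wrap to (-pi, pi] so that
# samples just below 2*pi count as small negative angles.
angles = [a - 2 * math.pi if a > math.pi else a for a in angles]

spread = statistics.stdev(angles)  # close to 1/sqrt(kappa) for large kappa
```

For large κ the von Mises distribution is well approximated by a normal distribution with variance 1/κ, which is why the wrapped sample spread lands near 0.1 rad here.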

2.5.3 Resolution uncertainty

A manufacturer of a measurement device typically specifies the resolution of the equipment, which is the smallest change in the input that can be detected by the instrument. The distribution of the resolution uncertainty can be modeled as a uniform distribution [5], as seen in fig.(10).

Figure 10 – The uncertainty of the resolution modeled as a uniform distribution. Here the resolution of the device is specified as 2a, and it is equally probable that the true value lies anywhere within the limits. The total area of the function needs to be 1, which gives the function a height of 1/(2a).

To be able to use this uncertainty in simulations, it first has to be converted to a standard uncertainty. The population variance σ² of a continuous random variable X is defined as [4]

$$\mathrm{Var}(X) = \sigma^2 = \int (x - \mu)^2 f(x)\,dx, \tag{8}$$

where µ is the expected value of X, and f(x) is the probability density function of the random variable. As we are interested in modeling the error of a measurement with mean zero and a uniform probability density function as described in fig.(10), the variance of the resolution uncertainty can be expressed as

$$\sigma^2 = \int_{-a}^{a} x^2\, \frac{1}{2a}\,dx, \tag{9}$$

which after some simple calculations leads us to the population standard deviation

$$\sigma = \frac{a}{\sqrt{3}}. \tag{10}$$

This information can then be used if one wants to simulate the uncertainty of the measurement device in question.
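To make the conversion concrete, the sketch below (Python, illustrative only) compares eq.(10) with the empirical standard deviation of uniformly distributed resolution errors, using the laser resolution of 0.2 mm, i.e. a = 0.1 mm:

```python
import numpy as np

def resolution_standard_uncertainty(resolution):
    """Standard uncertainty for a device resolution of 2a, eq.(10): sigma = a/sqrt(3)."""
    a = resolution / 2.0
    return a / np.sqrt(3.0)

sigma_analytic = resolution_standard_uncertainty(0.2)  # mm

# Empirical check: draw errors uniformly from [-a, a] and compare.
rng = np.random.default_rng(seed=0)
errors = rng.uniform(-0.1, 0.1, size=1_000_000)
sigma_empirical = errors.std()
print(sigma_analytic, sigma_empirical)  # both close to 0.0577 mm
```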

2.6 Signal filtering

One area of importance when it comes to signal analysis is filtering, as a recorded signal of interest is in most cases exposed to interference which introduces error. By analyzing the signal in a correct manner, one can identify the interference and remove it by applying the proper filter.

As stated in [16], the output y(t) of a Linear Time Invariant Continuous Time (LTIC) system can be calculated as the convolution between its impulse response h(t) and the input signal x(t) as

y(t) = h(t) ∗ x(t). (11)

Performing a Laplace transform on eq.(11), one can use the convolution property of said transform [16], and with a little rearranging we get

$$Y(s) = H(s)X(s) \;\rightarrow\; H(s) = \frac{Y(s)}{X(s)}, \tag{12}$$

where Y(s) and X(s) are the Laplace transforms of y(t) and x(t), and H(s) is the Laplace transform of h(t). When eq.(12) is rearranged as the ratio between the Laplace transforms of the output and the input, one gets the so-called transfer function, which linearly maps the Laplace transform of the input to the Laplace transform of the output [16]. One can essentially characterize the output for a given input by modifying the transfer function H(s), i.e. create a filter. By performing a Z-transform, the digital equivalent of eq.(12) for a linear time-invariant filter can be defined as

$$H(z) = \frac{B(z)}{A(z)} = \frac{b_0 + \sum_{i=1}^{N} b_i z^{-i}}{1 + \sum_{j=1}^{M} a_j z^{-j}}, \tag{13}$$

where the greater of M and N defines the order of the filter, and the parameters a₁, ..., a_M and b₀, b₁, ..., b_N define the properties of the filter, and hence the output signal [16].

Two important properties of a filter are its gain response and its phase response as functions of input frequency, where the gain is defined as the magnitude of the transfer function |H(ω)| at input frequency ω [16]. The phase spectrum of the system is defined as ∠H(ω), and both of these properties are typically inspected graphically to assess the properties of the system [16].

2.6.1 Low pass filter

In case of interference at frequencies above those of interest, a common solution is to use a low pass filter. The ideal transfer function for the discrete time case is given by [16]

$$H_{ilp}(\Omega) = \begin{cases} 1 & |\Omega| \le \Omega_c \\ 0 & \Omega_c < |\Omega| \le \pi, \end{cases} \tag{14}$$

where Ωc is the cut-off frequency of the filter, and Ω is confined to the range −π ≤ Ω ≤ π.

The impulse response hilp[k] of the ideal discrete time low pass filter is obtained by calculating the inverse Discrete Time Fourier Transform of eq.(14) as

$$h_{ilp}[k] = \frac{\Omega_c}{\pi}\,\mathrm{sinc}\!\left(\frac{k\,\Omega_c}{\pi}\right), \tag{15}$$

where k is the discrete time step. A graphical representation of the magnitude response can be seen in figure 11.

Figure 11 – The magnitude response of an ideal discrete time low pass filter with the cut-off frequency Ωc.

2.6.2 Band stop/notch filter

Another common filter application is the so-called notch filter, which is a band stop filter with a very narrow frequency band that is attenuated [16]. It is generally a combination of high pass and low pass filters, and its ideal transfer function can be defined as [16]

$$H_{ibs}(\Omega) = \begin{cases} 0 & \Omega_{c1} \le |\Omega| \le \Omega_{c2} \\ 1 & |\Omega| < \Omega_{c1} \ \text{and} \ \Omega_{c2} < |\Omega| \le \pi. \end{cases} \tag{16}$$

Here Ωc1 and Ωc2 are the cut-off frequencies of the band stop filter. The impulse response of eq.(16) is calculated in the same manner as eq.(15), and is defined as

$$h_{ibs}[k] = \delta[k] - \frac{\Omega_{c2}}{\pi}\,\mathrm{sinc}\!\left(\frac{k\,\Omega_{c2}}{\pi}\right) + \frac{\Omega_{c1}}{\pi}\,\mathrm{sinc}\!\left(\frac{k\,\Omega_{c1}}{\pi}\right), \tag{17}$$

where δ[k] is defined as the discrete Kronecker delta function [16]. A visualization of the magnitude response of the ideal band stop filter is presented in fig.(12).


Figure 12 – The magnitude response of an ideal discrete time band stop filter, with the cut-off frequencies Ωc1 and Ωc2.

2.7 Cross correlation

The cross correlation is a standard mathematical method to assess the degree of correlation between two data sequences [5]. In the continuous case, for two functions f(t) and g(t), it is defined as

$$R_{fg}(\tau) = (f \star g)(\tau) = \int_{-\infty}^{\infty} f^*(t)\, g(t+\tau)\,dt. \tag{18}$$

Here τ denotes the time shift between the two functions f(t) and g(t), and f*(t) is the complex conjugate of f(t). In the discrete time case, for sequences u[n] and w[n], it is similarly expressed as

$$R_{uw}[n] = (u \star w)[n] = \sum_{m=-\infty}^{\infty} u^*[m]\, w[m+n]. \tag{19}$$

Here n denotes the time shift between the two signals [5]. One essentially "slides" one signal over the other, summing up the products of the function values for each time shift n. A high positive value indicates a high correlation at that particular time shift n, as the two functions behave similarly over the whole sequence. This is particularly useful if one wants to search for a shorter pattern in a longer sequence, or find the time delay between related signals [5].
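As a minimal sketch of the second use case, the following Python example (illustrative, with made-up signals) estimates the delay between two related signals by locating the peak of their discrete cross correlation, eq.(19):

```python
import numpy as np

def estimate_delay(u, w):
    """Return the lag n at which the cross correlation R_uw[n] peaks,
    i.e. the delay of w relative to u (eq.(19) for real-valued signals)."""
    c = np.correlate(w, u, mode="full")          # all overlapping lags
    lags = np.arange(-(len(u) - 1), len(w))      # lag value for each index of c
    return lags[np.argmax(c)]

rng = np.random.default_rng(seed=2)
u = rng.normal(size=500)                  # reference signal
w = np.concatenate([np.zeros(25), u])     # the same signal delayed by 25 samples

print(estimate_delay(u, w))  # 25
```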

2.8 Related work

The science of using computer vision to solve complex problems is well studied, and the number of articles touching the subject is extensive. However, no exact match was found in the effort to locate work considering the specific type of measurements that this thesis concerns, though a number of papers were identified as related, from which inspiration and good ideas could be gathered.


The work done by Raphael Labayarade and Didier Aubert in [17] concerns the same type of problem, where they evaluate the accuracy of vehicle roll, pitch and yaw estimations in a stereoscopic camera application. Algorithms are presented for estimating a sparse disparity map, which is used to estimate a road plane in front of the vehicle; from this disparity map the three variables of interest are calculated, followed by an experimental evaluation. The authors evaluate their algorithms on synthetic images with predefined translation and orientation of the camera system, and an accuracy exceeding 0.2° for roll and 0.3° for pitch was achieved. Practical tests were also performed by gathering video sequences in a real vehicle, but as no reference data of the orientation of the car were available, these results were only qualitative and could not be used to evaluate the accuracy further.

Similarly, in the results presented by Sappa et al. [18], an algorithm for stereoscopic camera pose estimation is evaluated. The algorithm's intended use is for both urban areas and less complex environments, where the camera pose is estimated in relation to the environment's dominant surface, the road. An effort is made to evaluate the algorithm's ability to calculate the pitch of the camera system, which is estimated from the horizon line position on the image plane; this is computed by back projecting the horizon line onto the image plane for every frame. The horizon lines in the same sequences were marked by a human and used as a reference for the algorithm's performance. It was concluded that for half of the frames in the image set being tested, the line was estimated with an error smaller than or equal to one pixel, and for 90% of the images the error was within 4 pixels. This accuracy measure is, however, not defined absolutely, which is what this thesis aims to evaluate.

The work [21] presented by Schneider et al. discusses the method of combining the measurements from two sensors mounted on a vehicle to estimate their relative position and orientation. A so-called unscented Kalman filter is utilized to perform an online extrinsic calibration, and the results are validated both by simulation and real world data. Here a stereoscopic camera of type Bumblebee XB3 is combined with an inertial measurement unit of type OxTS RT3003, and the results are indeed promising. The estimated relative translational standard deviation is in the range of approximately 1 cm, while the angular standard deviation is around 0.2°–0.3°. This way of determining the roll, pitch and yaw could have been applicable in this thesis, but again, the angular drift of the IMU would have posed a problem. The method is however interesting and could pose an area for further work, as more sensors could be added to increase the accuracy and minimize the drift.

Götz et al. [19] present an interesting method for very accurate translational accuracy evaluation of camera pose estimation, where they place industrial cameras on a rail mounted sled which can be precisely positioned. The sled is then moved while gathering images from the cameras, which are pointed at a range of different calibration targets, while registering the movement using laser interferometry. This of course yields incredibly accurate results, but it is not a feasible method for the application considered in this thesis, since the aim is to measure the orientation of a moving vehicle. It is however a method which indeed could be of interest if further investigations of the camera systems needed to be taken to the extreme.

In [20], the authors Tan et al. propose a sensor fusion algorithm to minimize the drift in rotational measurements. The measurements from a laser gyroscope are combined with the acquired vision tracking data, which both greatly reduces the computational cost of the vision tracking algorithm and, as experimental results show, increases the accuracy. Using a laser gyroscope would indeed have been feasible for the application in this thesis and it was considered, but gyroscopes have a tendency to drift over time, and even the most high end gyroscopes intended for the automotive industry have a specified drift that exceeds the accuracy this thesis was aiming for. The fusion of several sensors could however be the answer to this problem, but this is an area for further work.


3 Method

3.1 Experimental setup and general procedure

This section covers the general approach to solving the problem at hand. Initially the practical method is discussed, covering the measurement system principle, the specific tools used in constructing the system, and the way the data were collected. This is followed by a discussion of the mathematical model used to calculate the orientation of the system, how the measurement error was estimated using Monte Carlo simulations, and lastly the ways in which the results were presented and analyzed.

3.1.1 Laser measurement rig

A pre-existing measurement rig, used to collect ground truth data for validation of a road surface preview algorithm, was conveniently available at the time of the project. The rig consisted of two laser triangulation based distance sensors mounted on an extruded aluminum rod, which could be mounted on the car's hitch.

The mounted laser sensors were of the type OPTImess CCD, version OMS 4140 [22], with the most critical specifications being a measurement range spanning 100-500 mm and a resolution of 0.2 mm, where the output voltage from the sensor ranges from 0-5V. In fig.(13) the sensor can be seen, and in fig.(14) the extruded aluminum rig can be seen mounted on the back of the car.

Figure 13 – The OPTImess laser sensor, consisting of a rugged metal casing protecting the delicate electronics and optics inside, is designed for harsh environments and industrial applications. Image from [22].

Figure 14 – The measurement rig can be seen here mounted on the hitch of the car. In an effort to minimize vibrations when in motion, two extra supports were mounted on the sides, increasing the overall stability of the rig. The sensors are contained in the two black boxes, aimed downwards.


To be able to measure not only the roll angle but also the pitch angle of the car, the pre-existing measurement rig was expanded by adding a third laser sensor of the same type, mounted on the front end of the vehicle, which can be seen in fig.(15). A supply voltage of 12 V was connected to the three sensors, and the analog output was registered using an analog to digital converter of type Mx-SENS2-8 from the manufacturer IPEtronik. The specified accuracy was stated as ±0.10% of the currently measured voltage for unipolar measurements at an ambient temperature of 25 °C [23]. The data acquisition rate was set to 1 kHz throughout the measurement series.

Figure 15 – A laser was mounted on the front end of the car using the mount for the towing hook.

Figure 16 – The analog to digital converter used in the measurement system. Image from [23].

The measurement system was used in two constellations, one for the calibration measurements and one for the road sequence measurements. For the calibration measurements, the CAN stream output from the A/D converter was registered using a Vector box of type VN7640 [24] and saved to a local computer using the software CANoe (v.11), as the measurements were done in the garage at Veoneer's premises.

When performing the measurements on the road, however, the CAN output from the A/D converter was connected directly to the internal data registration framework of the data acquisition car. The online data acquisition was controlled via the on-board computer software named IRIS, and the debug data was saved in the in-house .dat file format. When continuously recording data, the convention is to split it into 30-second intervals, as the file sizes tend to be rather large; 30 seconds of data occupies around 3.7 gigabytes of hard-drive space. The IRIS software automatically splits and names all files with timestamps, so one must keep track of the start time of a sequence for later reference.

For further reference, the measurement system will be named Reference Orientation System, or ROS for short.


3.1.2 Car and camera system

The test car available at the time of the data collection was a modern Mercedes S-class modified to record data using different prototype camera systems. One important feature of this car was its air suspension, meaning that it could automatically adjust the height of the wheels when the on-board sensors registered changes in the orientation of the car. This system is typical for this type of high-end car, and it is supposed to compensate for asymmetrical loads and increase comfort.

The camera system mounted in the car at the time of the data acquisition was a stereoscopic camera prototype loaded with an up-to-date deployment of the target software being developed at Veoneer. Important to note here is that of the two algorithms to be evaluated in this thesis, one, using visual horizon recognition, is already deployed on the camera system, whilst the one currently being developed, still in its conceptual stage, needs to be run offline. That is, one needs to run the recordings of the data collection sequences through the algorithm on a desktop computer at a later time.

3.1.3 Mathematical models

The principle for the angle equations is fairly basic, as it is only based on trigonometric relations between the positions of the lasers and the respective measured distances. In fig.(17) a simple 3D model of the system is presented with all relevant dimensions marked for clarity.


Figure 17 – The basic model of the measurement system. Here Ls is the full length of the system running along the axis of roll rotation, Ld and Lf are the perpendicular distances from the center of the roll axis to the two lasers mounted on the back end of the vehicle and to the front laser respectively, and dF, dL and dR are the distances measured using the laser sensors.

Given the distance measurements from the three laser sensors, we calculate the roll angle ω of the measurement system relative to the ground as

$$\omega = \tan^{-1}\!\left(\frac{d_R - d_L}{2 L_d}\right), \tag{20}$$

where dR and dL are the distances to the ground plane measured by laser sensors R and L, placed on the right and left back end of the car, and Ld is the distance from the center of the measurement rig to each of the lasers, assuming they are mounted symmetrically. A schematic overview of the setup can be seen in fig.(18).


Figure 18 – Here the relations are shown for clarity. The distances dL and dR are registered by the lasers, and the distance Ld is measured from the center of the rig to the laser sensors. Using these measurements, ω can be calculated using eq.(20).

As the placement of the front laser is shifted sideways from the axis of roll rotation, the distance measurement of the front laser dF has to be adjusted according to

$$d_{F,adj} = d_F - \tan(\omega)\, L_f, \tag{21}$$

for the following equations to hold. A schematic overview can be seen in fig.(19).

Figure 19 – The adjusted distance dF,adj is calculated by using the roll angle ω previously calculated using eq.(20), and the known displacement distance Lf.

This leads to the equation for the pitch angle θ of the measurement system, which is based on the same distance relation as in eq.(20) but with the mean distance of the two lasers on the back of the car and the adjusted front distance, as

$$\theta = \tan^{-1}\!\left(\frac{\cos(\omega)\,(d_R + d_L)}{2 L_s} - \frac{\cos(\omega)\, d_{F,adj}}{L_s}\right). \tag{22}$$

Here dF,adj is the adjusted distance to the ground plane measured by laser sensor F, placed on the front end of the car, and Ls is the perpendicular distance between laser F and the axis formed between lasers R and L. Thus, knowing the dimensions of the system and the distances measured to the ground plane, the pitch and roll can be calculated accordingly.

3.2 System calibration

Upon the construction of the system, great care was taken to mount all parts as orthogonally as reasonably possible. It was however inevitable to introduce some errors in this step, as it is rather difficult to mount something on a car that has no planar surfaces to take reference measurements from, in addition to the rushed time frame at the time of the data collection, as the data acquisition car was only available during a limited period of time. In the fine tuning of the measurement rig, a digital spirit level with an accuracy of 0.05° was used in combination with a slide gauge with an accuracy of ±0.05 mm and a laser distance device with an accuracy of ±0.5 mm. First the laser sensors were adjusted to be square to the bottom of the aluminum extrusion to which they were attached, which in turn was mounted on the car's hitch, where the orientation of the rig was fine tuned to be planar with the ground using the laser distance device and the digital spirit level. The orientation of the front laser sensor was also fine tuned. As the dimensions of the system needed to be known, a pair of line lasers was used to find adequate reference points to measure from on the garage floor, where the laser sensor dots were projected, and the laser distance device was used to measure the relevant distances, which can be seen in table 4. The whole calibration measurement procedure was performed on a specially built section of the garage with a particularly flat floor, with a stated maximum deviation of ±0.1 mm for any given point on the surface; thus the reference floor was assumed to be perfectly flat.

Table 4 – The measurement system dimensions. For clarity, the respective length definition from fig.(17) is provided. The uncertainties of the measurements were increased to compensate for any parallax reading error from the unorthodox measurement technique.

Dimension                        Distance
Back laser displacement (Ld)     838 ± 1 [mm]
System length (Ls)               5361 ± 2 [mm]
Front laser displacement (Lf)    463 ± 1 [mm]

The measurement system had to be calibrated in some manner, and this was solved by using basic image analysis. The idea was to take photographs of the car with two identical checkerboard patterns mounted on the side facing the camera, while recording the distances measured by the three lasers. By rotating the car in small increments while taking a photograph and recording the laser distances at each step, the rotation of the checkerboard patterns could be related to the angles calculated from the laser distances in the mathematical model. As the car is a rigid system, the relative positions of the checkerboard corners will not change as the orientation of the car does, and assuming the axis of rotation is parallel to the optical axis of the camera and that the position of the camera is static, the pixel positions of the checkerboard corners on the image plane can be used to correlate the rotations of the car in consecutive images. In fig.(20) and fig.(21) an example of the initial and final roll orientation of the car is shown, together with how the checkerboard corners are related to one another.

Figure 20 – The initial state of the car, with no added wooden blocks under the wheels on the left side of the car. Lines are superimposed on the image for clarification.

Figure 21 – The last state of this particular calibration sequence, with all five wooden blocks placed under the wheels on the left side.

The still camera used in the calibration sequence was a Sony A77, but before any image analysis was done, an intrinsic camera calibration was performed in an effort to enhance the quality of the results, as described in section 2.2.1. A sequence of calibration images was taken of a calibration target, which in turn was processed in the Camera Calibrator app in MATLAB. A few images were discarded to reduce the reprojection error.

Following the camera calibration, a simple script was constructed in MATLAB where all checkerboard corners in an image were identified using the detectCheckerboardPoints() function supplied with the image processing toolbox, and the resulting angle between the checkerboard patterns was calculated from the pixel difference between each pair of corners as

$$\theta = \tan^{-1}\!\left(\frac{y_{iB} - y_{iA}}{x_{iB} - x_{iA}}\right), \tag{23}$$

where x_{iA}, x_{iB} and y_{iA}, y_{iB} are defined as the i:th corner pixel positions on checkerboards A and B. This was repeated for all calibration images of the measurement system. The uncertainty of the angles calculated using the images was estimated using eq.(5), and after abbreviating the differences in eq.(23) as Δx and Δy, the uncertainty for an arbitrary angle φ can be expressed as

$$s_\varphi = \sqrt{\left(\frac{1}{\Delta_x}\right)^{\!2} \sigma_{repr}^2 + \left(\frac{\Delta_y}{\Delta_x^2}\right)^{\!2} \sigma_{repr}^2}. \tag{24}$$

Here σrepr represents the mean reprojection error as defined in section 2.2.1, which was estimated from the camera parameters in the intrinsic camera parameter calibration step. A total of 8 calibration measurement sequences were performed, 2 clockwise and 2 counter clockwise for both pitch and roll, and the car was jacked up a total of 80 times.

3.3 Data collection

The idea was to perform a number of driving sequences while simultaneously recording the debug data registered by the camera system and the actual orientation of the vehicle using the reference measurement system. The purpose of the project was to assess the accuracy of the pitch and roll angles of the dynamic calibration algorithms, and a good way to find any discrepancies between the two systems is to gather data of the system in both unperturbed and perturbed states. That is, record data whilst driving without any additional cargo, and to record data while loading the car asymmetrically using weights. A total of 6 measurement sequences were performed, according to table 5.

Table 5 – The six measurement sequences performed in this project.

Sequence # System state

1 Unperturbed

2 Unperturbed

3 Right side weighted

4 Left side weighted

5 Trunk weighted A

6 Trunk weighted B

The perturbations were performed by loading the car with as many barbell weights and car batteries as could fit without posing a safety hazard, with a total weight of approximately 150 kg. The exact total mass was not that relevant, as it was only important to make the car shift sufficiently for the change to be measurable by both the laser measurement system and the camera system.


The data recording sequence was initialized by performing a SECAL as described in section 2.2.3, with the addition of registering the distances measured by each of the three lasers simultaneously. Five consecutive calibrations were performed, yielding five values for the initial pitch and roll angles which could then be averaged to estimate the true value as close as possible. The laser data were also averaged, yielding an initial offset for each of the three lasers. This information could then be used to define the relative orientation between the reference orientation system and the camera system.

3.4 Data post-processing

Proceeding from the data collection, the raw data needed to be processed to enable result analysis. Regarding the calibration procedure, the laser distances were registered directly on a laptop, which simplified things, as only the raw CAN stream had to be converted to a MATLAB compatible file format; this was done directly in the CANoe software.

For the road data measurements, however, more steps had to be taken. A total of around 700 GB of raw data was recorded, where initially the .dat files had to be processed separately for each of the two algorithms being evaluated. For the already deployed algorithm, an in-house MATLAB parser could be applied directly to each separate 30 second sequence to extract the debug data, where the pitch angle of interest was stored. For the road plane estimation algorithm, a local executable runner was applied to the sequences. This generated new files, which could then be parsed by another MATLAB script that extracted the information of interest from each separate 30 second sequence. All files from both the deployed algorithm and the concept algorithm could then be concatenated into their respective road measurement sequences, which enabled the data analysis to proceed.

3.5 Laser data filtering

An initial analysis of the laser signals showed some noise, which interestingly differed between the lasers. A power spectral density estimate (PSD) was generated using the MATLAB function pwelch() provided with the signal processing toolbox, and the result is presented in fig.(22). For the PSD, a segment length of 500 samples was used together with 300 overlapped samples and 500 DFT points.
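The same kind of spectral estimate can be reproduced outside MATLAB; the Python sketch below (illustrative, using SciPy's welch with the segment parameters quoted above and a synthetic 1 kHz test signal in place of the laser data) recovers a known oscillation frequency from the PSD peak:

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                         # laser sampling rate, 1 kHz
t = np.arange(10_000) / fs
# Synthetic stand-in for a laser signal: 50 Hz oscillation plus noise.
rng = np.random.default_rng(seed=3)
x = 0.2 * np.sin(2 * np.pi * 50.0 * t) + 0.01 * rng.normal(size=t.size)

# Same settings as in the thesis: 500-sample segments, 300 overlapped, 500 DFT points.
f, psd = welch(x, fs=fs, nperseg=500, noverlap=300, nfft=500)

print(f[np.argmax(psd)])  # 50.0, the dominant frequency
```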


Figure 22 – The power spectral density estimate for all three laser sensor signals (right, left and front), where one can see a range of peaks, with the highest intensities at around 450 Hz and below 25 Hz.

In fig.(23) one finds a region of interest of the PSD, and in fig.(24) a visualization of the laser distances measured over a short time interval.

Figure 23 – The power spectral density estimate over a limited frequency range, where one finds distinct intensity peaks below 10 Hz.

Figure 24 – The three lasers show a distinct oscillation around the measured mean, which is removed here such that they are centered around zero. Note that the amplitudes of the oscillations are around 0.2 mm, which is the resolution specified by the manufacturer.


at 2.5 Hz, and for the front laser at 9 Hz. Three filters were constructed in an effort to minimize the noise: two band stop filters in the form of Infinite Impulse Response (IIR) notch filters and one low pass filter in the form of a moving average filter.

The transfer function for the IIR notch filter was constructed as [16]

$$H(z) = \frac{1 - 2z^{-1}\cos(\omega_0) + z^{-2}}{1 - 2rz^{-1}\cos(\omega_0) + r^2 z^{-2}}, \tag{25}$$

where r is defined as the radius of the filter poles in the complex z-plane, and ω0 is the normalized notch filter frequency. The notch filter angular frequency is calculated using the relation [16]

$$\omega_0 = 2\pi \frac{f_0}{f_s}, \tag{26}$$

where f0 is the desired notch frequency in Hz and fs is the sample frequency.

For the left and right lasers, a notch filter with pole radius r = 0.98 and filter frequency f0 = 9 Hz was created. For the front laser, a notch filter with pole radius r = 0.98 and filter frequency f0 = 2.5 Hz was created.

The transfer function for the Finite Impulse Response (FIR) moving average filter was constructed with a window size of τ = 101 samples as [16]

$$H(z) = \frac{1}{\tau}\sum_{i=0}^{\tau-1} z^{-i}. \tag{27}$$

As the moving average filter introduces a delay of (τ − 1)/2 samples in the filtered signal, care was taken to adjust for this in the results [16]. The corresponding magnitude and phase responses for the three filters can be seen in fig.(25), fig.(26) and fig.(27). The filtered result of the static signals can be seen in fig.(28).


Figure 25 – The magnitude and phase response of the notch filter for the front mounted laser.

Figure 26 – The magnitude and phase response of the notch filter for the back mounted lasers.

Figure 27 – The magnitude and phase response of the low pass filter.

Figure 28 – The static signals of all three lasers before and after filtering, where all major oscillations are removed.

The three filters were then applied using the MATLAB function filter(), which takes the signal being filtered as input together with the z-transform numerator and denominator of the particular filter being applied, in this case from eqs.(25) and (27), and outputs the filtered signal. The resulting PSD estimate is presented in fig.(29).


Figure 29 – The power spectral density estimate for all three laser sensor signals, before and after filtering.

3.6 Uncertainty estimation

As mentioned in section 2.5.1, when the calculations of a problem become intractable, a common approach is to perform Monte Carlo simulations to investigate the statistical properties of a parameter. In this thesis, we are interested in the uncertainty propagation in the measurement system and how it depends on the orientation of the car, or, based on eq.(6), more formally stated as

$$P\big((\omega_t, \theta_t)\,|\,(\omega_m, \theta_m)\big) = \frac{P\big((\omega_m, \theta_m)\,|\,(\omega_t, \theta_t)\big)\, P(\omega_t, \theta_t)}{P(\omega_m, \theta_m)}. \tag{28}$$

Here ωm and θm are the measured roll and pitch angles, and ωt and θt are the true angles of the system. In words, eq.(28) states: given a measured roll and pitch angle ωm and θm, what is the probability that the true roll and pitch angles are ωt and θt?

If one assumes the relation

P(ω_t, θ_t) = C P(ω_m, θ_m), (29)

the distribution of the measured angles is the same as the distribution of the true angles up to a factor C. Further assumptions about the distributions have to be made for the relation in eq. (29) to hold. Assuming both distributions are multivariate normal, and that the angles ω and θ are uncorrelated for both the true angles and the measured angles, we have

µ_t = µ_m,   Σ_t = C Σ_m,
Σ_t Σ_m^{-1} = C. (30)

Eq. (28) can then be rearranged as

P((ω_t, θ_t) | (ω_m, θ_m)) = P((ω_m, θ_m) | (ω_t, θ_t)) C. (31)

This states that, given the assumptions made, the distribution of the true roll and pitch angles ω_t and θ_t given a measured roll and pitch angle ω_m and θ_m is the same as the distribution of the measured angles given the true angles, multiplied by a constant C. The right hand side of eq. (31) is something that can be simulated, which is the goal of this passage.

The main idea is then to use the mathematical model presented in section 3.1.3, positioning it in a range of orientations covering all plausible pitch and roll angles. Using the projected laser distances, the orientation of the model could be calculated using eq. (20), eq. (21) and eq. (22). By introducing a range of perturbations to the system, and repeating this process a large number of times, distributions of the angular error could be estimated. For eq. (30) to hold in this context, the errors introduced have to be sufficiently small, since the trigonometric functions in eq. (20) and eq. (22) would otherwise violate the normality assumption in eq. (30). The purpose of the process is to estimate the effects of errors introduced when building the system and mounting it on the car; the perturbations introduced in the system are listed in table 6.

Table 6 – The metrics considered when simulating the system, with adjusted error estimations for transformation to standard deviations, as derived in section 2.5.3.

Metric                                  Estimated error
Laser misalignment (pitch and yaw)      ±0.5/3 [deg]
System length measurement               ±0.02/3 [m]
Laser displacement measurement          ±0.0025/3 [m]
Laser resolution                        ±0.0002/3 [m]

These errors are estimated somewhat arbitrarily, within reason, except for the laser resolution error, which was supplied by the manufacturer.
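As a sketch of the perturbation procedure (not the thesis geometry: the two-laser baseline of 2 m, the true pitch of 2° and the simple arctangent pitch model are illustrative assumptions), the error magnitudes in table 6 can be propagated through a Monte Carlo loop like this:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Hypothetical geometry: two lasers a baseline L apart along the car;
# pitch is recovered as atan((d_front - d_back) / L).
L_true = 2.0                         # system length [m] (assumed)
theta_true = np.deg2rad(2.0)         # true pitch angle
d_diff_true = L_true * np.tan(theta_true)

# One-sigma errors from table 6 (uniform bounds divided by 3).
sigma_L = 0.02 / 3                   # system length measurement [m]
sigma_d = (0.0025 + 0.0002) / 3      # displacement + resolution [m]

# Perturb the inputs N times and recompute the angle each time.
L_pert = L_true + rng.normal(0.0, sigma_L, N)
d_pert = d_diff_true + rng.normal(0.0, np.sqrt(2) * sigma_d, N)
theta_est = np.arctan2(d_pert, L_pert)

# 2.5th/97.5th percentiles give one point of the 95% confidence surfaces.
err_deg = np.rad2deg(theta_est - theta_true)
lo, hi = np.percentile(err_deg, [2.5, 97.5])
```

Repeating this at every grid orientation yields the confidence interval surfaces described below.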


Table 7 – The settings used in the simulation procedure.

Setting                          Value
Roll range                       [−5°, 5°]
Pitch range                      [−5°, 5°]
Step length                      0.5°
Perturbations per orientation    N = 10000

This yielded a total of 441 orientations of the system, and at each orientation 10000 perturbations were performed to estimate how the distribution was affected by the errors defined in table 6. For perturbing the system length measurement, laser displacement measurement and laser resolution, MATLAB's normal distribution random number generator was used. For the pitch and yaw angles of the laser misalignment, however, a different approach was used. As discussed in section 2.5.2, random variables whose probability distribution exists on the unit circle are said to have a circular distribution. In [25] the authors present a method of generating circular normally distributed random numbers using the ratio-of-uniforms method, which was implemented in MATLAB in this thesis. The algorithm to generate a circular normally distributed number Θ is as follows:

1. If κ > 1.3, set s := 1/√κ; otherwise set s := πe^(−κ).

2. Generate R1, R2 ∼ U(0, 1) and set Θ := s(2R2 − 1)/R1.

3. If |Θ| > π, go to step 2.

4. If κΘ² < 4 − 4R1, deliver Θ; else if κ cos(Θ) < 2 ln(R1) + κ, go to step 2.

5. Deliver Θ.
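A direct transcription of these steps (Python rather than the thesis MATLAB; the 1/√κ in step 1 assumes a square root lost in typesetting):

```python
import math
import random

def circular_normal(kappa, rng=random.random):
    """Draw one von Mises (circular normal) angle centred at 0 using the
    ratio-of-uniforms steps listed above."""
    # Step 1: envelope half-width.
    s = 1.0 / math.sqrt(kappa) if kappa > 1.3 else math.pi * math.exp(-kappa)
    while True:
        # Step 2: candidate angle from two uniforms.
        r1, r2 = rng(), rng()
        theta = s * (2.0 * r2 - 1.0) / r1
        # Step 3: reject candidates outside the circle.
        if abs(theta) > math.pi:
            continue
        # Step 4: quick acceptance, then the exact log test.
        if kappa * theta * theta < 4.0 - 4.0 * r1:
            return theta
        if kappa * math.cos(theta) < 2.0 * math.log(r1) + kappa:
            continue
        # Step 5: deliver.
        return theta

random.seed(0)
samples = [circular_normal(5.0) for _ in range(2000)]
```

For large κ the resulting distribution approaches a wrapped normal with variance 1/κ, which is the regime relevant for small misalignment angles.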

After running the simulations, a set of N = 10000 values for both pitch and roll was obtained for each of the 441 orientations, and each set was sorted and examined using histograms to assess the distribution. As a measure of spread, the 2.5th and 97.5th percentiles were extracted from the sorted dataset of each orientation to construct 95% confidence interval surfaces, from which a measure of uncertainty could be obtained for any orientation within the simulated ranges. That is, for any given ω_t, θ_t we get a distribution measure for the errors.

These confidence interval surfaces could then be used to estimate the uncertainty of an arbitrary angle calculated using the measured laser distances, where the calculated angle value was interpolated on the confidence interval surface to construct a lower and upper 95% confidence bound.
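The interpolation step can be sketched with SciPy's RegularGridInterpolator; the surfaces below are fabricated stand-ins for the simulated percentile surfaces (the grid matches table 7, but the error values do not come from the thesis):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# 21 x 21 grid over [-5, 5] degrees in roll and pitch, 0.5 degree steps.
roll = np.arange(-5.0, 5.5, 0.5)
pitch = np.arange(-5.0, 5.5, 0.5)
R, P = np.meshgrid(roll, pitch, indexing="ij")

# Fabricated percentile surfaces: error bounds growing with angle magnitude.
hi_surface = 0.05 + 0.002 * (np.abs(R) + np.abs(P))
lo_surface = -hi_surface

interp_hi = RegularGridInterpolator((roll, pitch), hi_surface)
interp_lo = RegularGridInterpolator((roll, pitch), lo_surface)

# 95% confidence bounds for a measured orientation between grid points.
omega_m, theta_m = 1.3, -2.7
upper = float(interp_hi([[omega_m, theta_m]])[0])
lower = float(interp_lo([[omega_m, theta_m]])[0])
```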


3.7 Data analysis

The data analysis part of this thesis was straightforward. Initially, the calibration sequences were evaluated to assess whether the measurement system was working properly. To evaluate the statistical properties of the calibration results, a Lilliefors normality test for the difference between the image and laser datasets was performed. The test can be summarized as

H0: θ_a − θ_b ∼ N
H1: θ_a − θ_b ≁ N,   α = 0.05

The null hypothesis is that the difference between the two sets of data follows a normal distribution; the alternative hypothesis is that it does not. The test is performed at the 5% level of significance.

Depending on the result from the normality test, a suitable statistical test for paired samples was performed to assess any significant differences in the result. The statistical test can be stated as

H0: µ_a − µ_b = 0
H1: µ_a − µ_b ≠ 0,   α = 0.05

The null hypothesis is that there is no difference between the two datasets, whilst the alternative hypothesis is that there is. The test is performed at the 5% level of significance.
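The two-stage decision (normality test, then a paired test) can be sketched in Python with fabricated data. Note that SciPy ships Shapiro–Wilk rather than Lilliefors (the latter lives in statsmodels), so Shapiro–Wilk stands in for the normality test here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Fabricated paired angle samples: the "camera" angles carry a small
# constant bias relative to the "laser" reference (not thesis data).
laser = rng.normal(1.0, 0.05, 200)
camera = laser + 0.03 + rng.normal(0.0, 0.01, 200)
diff = laser - camera

# Normality of the differences decides the paired test:
# paired t-test if normal, Wilcoxon signed-rank otherwise.
_, p_norm = stats.shapiro(diff)
if p_norm > 0.05:
    _, p_value = stats.ttest_rel(laser, camera)
else:
    _, p_value = stats.wilcoxon(laser, camera)

biased = bool(p_value < 0.05)  # reject H0: no difference between datasets
```

With the built-in 0.03 bias, either branch rejects the null hypothesis, mirroring how a camera algorithm bias would show up against the laser reference.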

Proceeding, each of the road measurement sequences was loaded into MATLAB to first evaluate the time series. This was done by visually comparing the angles calculated using the expressions derived in section 3.1.3 to the pitch and roll angles generated by the concept algorithm and the pitch angle generated by the deployed algorithm. The route of the data collection included one long straight stretch of highway, where the algorithms would hopefully work optimally and converge quickly. To quantify the results, histograms of the laser angles and the algorithm angles were produced, which gave a good visualization of the distribution of the angles. The median of each time series was taken as a central estimate of the orientation of the car. Both the full time series and the cropped time series, where only the highway portion of the data collection was analyzed, were considered.

To assess whether any statistically significant difference could be established between the results of the laser measurements and the algorithms, Wilcoxon signed-rank non-parametric tests were
