
Simultaneous Localization and Mapping of a small-scale vehicle using low-cost IMU, optical wheel encoders and LiDAR.

NIKLAS BRUSÉN

KTH

SCHOOL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE


Abstract

This thesis explores and investigates the use of a low-cost six Degrees of Freedom Inertial Measurement Unit, low-cost photo-electric wheel encoders and one of the cheapest available single-band Light Detection and Ranging sensors to estimate the orientation and position of a small-scale vehicle. Furthermore, we develop a Simultaneous Localization and Mapping approach for our sensor suite. The Inertial Measurement Unit provides the orientation of the vehicle based on an Attitude and Heading Reference System algorithm, using Gradient Descent and Kalman Filter design. The position of the vehicle is estimated separately by the Inertial Measurement Unit and the wheel encoders, and the estimates are then fused in an Extended Kalman Filter. The final odometry results are later matched with Light Detection and Ranging scans using gmapping.

Gmapping is a ROS-package that utilizes a Rao-Blackwellized Particle Filter for Simultaneous Localization and Mapping.

The thesis presents the low-cost system in detail and illustrates the performance of the system with empirical results. Each real-time experiment is compared and evaluated against a Motion Capture System providing sub-millimetre accuracy, and presented with numerical and graphical results along with error propagation.


Sammanfattning

This thesis has investigated and explored the use of wheel encoders, a six-degrees-of-freedom inertial measurement unit and a LiDAR to estimate the orientation and position of an unmanned ground vehicle. Furthermore, a concept is developed that simultaneously maps the surroundings and localizes the vehicle within the resulting map, known as Simultaneous Localization and Mapping. The inertial measurement unit is used to estimate the orientation of the vehicle based on an Attitude and Heading Reference System, which employs Gradient Descent and Kalman Filter design. The optical wheel encoders and the inertial measurement unit are then evaluated separately for estimating the position of the vehicle. The results from the inertial measurement unit and the wheel encoders are subsequently fused in an Extended Kalman Filter. The output of this filter is then matched with the LiDAR data in the ROS package gmapping, which results in Simultaneous Localization and Mapping.

The thesis presents the different sensor setups in detail and evaluates their characteristics. Three real-time experiments are designed and run on each sensor setup. All experiments are compared against and evaluated with a Motion Capture System installed in the Smart Mobility Lab at KTH Royal Institute of Technology, Stockholm.


Acknowledgements

This thesis has been conducted at Syntronic AB in Kista, Stockholm, and at the Royal Institute of Technology's Smart Mobility Lab under the supervision of Frank Jiang, Anders Gabrielsson and Jonas Mårtensson, who is also my examiner.

First and foremost, I would like to express my gratitude to Frank Jiang for continuously assisting me during this thesis work, mostly regarding troubleshooting, giving feedback and providing technical support with the SVEA design and the MOCAP system.

I thank Anders Gabrielsson at Syntronic and Jonas Mårtensson at the Royal Institute of Technology for providing support and thoughtful ideas along the way.

Stockholm, August 2019 Niklas Brusén


Contents

1 Introduction
  1.1 Background
  1.2 Problem Formulation
  1.3 Aim and Objectives
  1.4 Delimitations
  1.5 Disposition

2 Frame of Reference
  2.1 Light Detection and Ranging
  2.2 Inertial Measurement Unit
    2.2.1 Accelerometer
    2.2.2 Gyroscope
    2.2.3 Magnetometer
  2.3 Extended Kalman Filter
  2.4 Euler angles
  2.5 Quaternion
  2.6 Orientation Estimation (AHRS)
    2.6.1 Gradient Descent
  2.7 Position Estimation
    2.7.1 Wheel Encoders
    2.7.2 Inertial Measurement Unit
    2.7.3 Simultaneous Localization and Mapping
  2.8 Error Estimation
    2.8.1 Mean Square Error
    2.8.2 Mean Absolute Error
    2.8.3 Root Mean Square Error
    2.8.4 Maximum Distance Error

3 Method and Implementation
  3.1 Vehicle Design
    3.1.1 NVIDIA Jetson TX2
    3.1.2 ROS
    3.1.3 Sensor Setup
  3.2 Motion Capture System
  3.3 System Setup
    3.3.1 Filtering
  3.4 Simultaneous Localization and Mapping
  3.5 Experiment Setup
    3.5.1 Orientation Estimation
    3.5.2 Position Estimation
    3.5.3 Evaluation

4 Results
  4.1 Orientation Estimation
    4.1.1 Dynamic Performance
    4.1.2 Static Performance
  4.2 Position Estimation
    4.2.1 Wheel Encoders
    4.2.2 Inertial Measurement Unit
    4.2.3 Extended Kalman Filter
  4.3 Simultaneous Localization and Mapping
    4.3.1 Extended Localization and Mapping

5 Analysis & Discussion
  5.1 Orientation Estimation (AHRS)
  5.2 Position Estimation
    5.2.1 Wheel Encoders
    5.2.2 Inertial Measurement Unit
    5.2.3 Extended Kalman Filter
    5.2.4 Simultaneous Localization and Mapping

6 Conclusions and Further Work
  6.1 Orientation Estimation (AHRS)
  6.2 Position Estimation

References

List of Figures

2.1 Illustration of how an accelerometer works in R³. x, y and z are positive in the directions the arrows imply.
2.2 Algorithm for the Extended Kalman Filter [1].
2.3 Representation of the Euler angle rotation sequence of a coordinate frame.
2.4 "The orientation of frame n is achieved by a rotation, from alignment with frame b, of angle θ around the axis r_a" [2].
2.5 Block diagram of the AHRS algorithm.
2.6 Transform tree required by gmapping.
2.7 Extended transform tree.
3.1 Block diagram of the sensor setup.
3.2 Breakout board of the optical encoder (TCRT5000).
3.3 Optical encoder interfacing with the Arduino.
3.4 Illustration of the optical sensors attached to the rear axle.
3.5 Illustration of the optical sensors and encoder disk attached to the rear axle.
3.6 MPU-6050 triple-axis accelerometer and gyroscope.
3.7 MPU-6050 interfacing with the Arduino.
3.8 Placement of the IMU and Arduino on the SVEA vehicle.
3.9 Illustration of the final setup, including all sensors and hardware used in the following experiments.
3.10 Schematic of the system.
3.11 Illustration of the navigation route for Experiment (C).
4.1 Result of roll using the proposed AHRS algorithm. The black dashed lines represent ±10°, ±20° and ±45° for plot a), and ±90° for plot b).
4.2 Result of pitch using the proposed AHRS algorithm. The black dashed lines represent ±10°, ±20° and ±45° for plot a), and ±90° for plot b).
4.3 Result of yaw using the proposed AHRS algorithm. The black dashed lines represent ±10°, ±20°, ±30° and ±60° for plot a), and ±180° for plot b).
4.4 Drift of roll, pitch and yaw during a stationary period of ≈ 70 s.
4.5 Results from experiment Encoder (A).
4.6 Results from experiment Encoder (B).
4.7 Results from experiment Encoder (C).
4.8 Results from experiment IMU (A).
4.9 Results from experiment IMU (B).
4.10 Results from experiment IMU (C).
4.11 Results from experiment EKF (A).
4.12 Propagation of error for experiment EKF (A).
4.13 Results from experiment EKF (B).
4.14 Propagation of error for experiment EKF (B).
4.15 Results from experiment EKF (C).
4.16 Propagation of error for experiment EKF (C).
4.17 Results from experiment SLAM (A) using gmapping.
4.18 A screenshot of real-time visualization in rviz from experiment SLAM (A), run a), using gmapping.
4.19 Propagation of error for experiment SLAM (A), runs a) and b), using gmapping.
4.20 Propagation of error for experiment SLAM (A), runs c) and d), using gmapping.
4.21 Results from experiment SLAM (B) using gmapping.
4.22 Propagation of error for experiment SLAM (B) using gmapping and turning left.
4.23 Propagation of error for experiment SLAM (B) using gmapping and turning right.
4.24 Results from experiment SLAM (C) using gmapping.
4.25 Propagation of error for experiment SLAM (C) driving the presented track using gmapping.
4.26 Propagation of error for experiment SLAM (C), circle, using gmapping.
4.27 Plotted results of the proposed EKF and gmapping for run one.
4.28 Result of the estimated map and pose for run one visualized in rviz.
4.29 Plotted results of the proposed EKF and gmapping for run three.
4.30 Result of the estimated map and position for run three visualized in rviz. Note that gmapping's end state is aligned with the path driven from the initial state, and is therefore not identifiable.

List of Tables

4.1 Resulting error between IMU measurements and ground truth. The data utilized originates from fig. 4.1 a), 4.2 a) and 4.3 a).
4.2 Result of estimated error when stationary during a period of ≈ 70 s.
4.3 Result of estimated error in experiment Encoder (A).
4.4 Result of estimated error in experiment Encoder (B).
4.5 Result of estimated error in experiment Encoder (C).
4.6 Result of estimated error in experiment IMU (A).
4.7 Result of estimated error in experiment IMU (B).
4.8 Result of estimated error in experiment IMU (C).
4.9 Result of estimated error in experiment EKF (A), when fusing IMU and wheel encoder data in an EKF.
4.10 Result of estimated error in experiment EKF (B), when fusing IMU and wheel encoder data.
4.11 Result of estimated error in experiment EKF (C), when fusing IMU and wheel encoder data.
4.12 Result of estimated error in experiment SLAM (A).
4.13 Result of estimated error in experiment SLAM (B).
4.14 Result of estimated error in experiment SLAM (C) using gmapping.
4.15 Measured distance between start and end point using the proposed filter and gmapping, when navigating in the SML facility. The data is based on fig. 4.27 and 4.29.


Abbreviations and definitions

UGV Unmanned Ground Vehicle

UAV Unmanned Aerial Vehicle

KF Kalman filter

EKF Extended Kalman filter

SLAM Simultaneous Localization and Mapping

GD Gradient Descent

IMU Inertial Measurement Unit

INS Inertial Navigation System

DMI Distance Measurement Instrument

LiDAR Light Detection and Ranging

VBL Vision Based Localization

MEMS Micro-Electro-Mechanical System

MOCAP Motion Capture

DoF Degrees of Freedom

ToF Time of Flight

RMSE Root-Mean-Square-Error

MSE Mean-Square-Error

MAE Mean-Absolute-Error

ME Maximum Error

Pose Position and orientation of the vehicle
Orientation Angle relative to the reference frame
Navigation frame Refers to the global reference frame
Body frame Refers to the vehicle's reference frame
/tf Transform between coordinate frames in ROS
Ground Truth Refers to the MOCAP measurement


Introduction

This chapter provides an insight into current research related to the thesis objective. Drawbacks and advantages of different approaches will be discussed briefly. Moreover, the purpose of this thesis will be presented along with delimitations and the disposition.

1.1 Background

The use of Unmanned Ground Vehicles (UGVs) for autonomous driving has increased noticeably over the last decades among commercial and non-commercial users. To ease and lower the cost of research, small-scale mobile vehicle platforms are often used, equipped with the sensors needed for automated, or semi-automated, navigation. The Royal Institute of Technology (KTH) Smart Mobility Lab (SML), Stockholm, recently developed a vehicle platform using a Traxxas TRX-4 as a base, along with the embedded computer NVIDIA Jetson TX2, for student use when conducting research [3].

This Small-Vehicle-for-Autonomy (SVEA) platform is designed to be flexible with respect to the attachment of various sensors. Furthermore, the idea is to make the development of the vehicle and sensors open source, with the vision of creating a community that drives progress in developing more advanced autonomous vehicles. Moreover, SML focuses on the development and testing of intelligent transportation solutions in different traffic scenarios using the remote-controlled vehicles. The SML facility is therefore equipped with an indoor wireless communication network and a Motion Capture (MOCAP) positioning system.

Mobile UGV systems are often highly flexible robot platforms, developed to provide multipurpose mobility support [4]. Depending on the design, UGVs can be constructed to cope with various tasks. However, one critical necessity that all UGVs depend on is knowledge of their pose. Vehicles rely heavily on information about the environment and their surroundings in order to navigate semi-autonomously or fully autonomously. A fully automated car, with no human interaction, requires a huge amount of information about the vehicle's surroundings and its internal power and control systems, provided by various sensors, and is a very complex undertaking. Using a mobile vehicle platform with human interaction reduces the complexity. This can be desirable when, for example, exploring an unknown environment such as a building, cave or similar. It can also be highly necessary when certain safety precautions are needed to avoid human injuries; with semi-automation, the task of automated navigation remains less complex.

Using sensor fusion for dead reckoning and Simultaneous Localization and Mapping (SLAM) has proven to be a good methodology when estimating a UGV's pose. Dead reckoning is generally achieved using Distance Measurement Instruments (DMI) and Inertial Navigation Systems (INS), such as Inertial Measurement Units (IMU), ultrasonic sensors, wheel encoders, etc. SLAM, which builds a map of the environment while navigating, can be achieved using Light Detection and Ranging (LiDAR), or monocular and stereo cameras.

To further improve the pose estimate, these measurements can be fused with various sensors using assorted filters.

In this thesis, we assemble a vehicle based on the SML design to create a multipurpose mobile UGV platform. The thesis focuses on fusing data from a low-cost system to achieve dead reckoning using an Extended Kalman Filter (EKF) design, and furthermore on fusing the output of the filter with additional sensors to create a SLAM concept.

1.2 Problem Formulation

Dynamic positioning with high accuracy is of great importance when navigating UGVs [5]. Autonomous vehicles need to provide as accurate a position estimate as possible, and even if the vehicle receives exact information about its environment, the problem remains complex. One drawback of the information acquired from sensors is that the data is never noise free [6]. Gyroscopes in Inertial Measurement Units are, for example, prone to drift over time due to integration. Magnetometers are sensitive to ferrous materials in the near environment, and accelerometers to vibrations and external forces [7], which will cause significant drift. This is particularly true for low-cost IMUs.

Distance measurement instruments, like wheel encoders, suffer from errors originating from wheel spin and slip, or undulations in the floor, which cause errors to accumulate [6]. Thus, fusion of multiple sensors is often desirable, combining the sensor data received from various sources such that the resulting position estimate contains less uncertainty and compensates for these individual errors.

Indoor localization can be achieved through different approaches. Popular sensing techniques are, for example, Wi-Fi networks, Vision-Based Localization (VBL) or LiDAR. Using Wi-Fi signals, a fingerprinting localization technique that calculates the position using trilateration and base stations, does however require pre-installed wireless hardware devices at the given site. In addition, this technique suffers from signal attenuation during diffusion, which often results in a large offset when performing the localization [8]. Vision-based localization is becoming more popular as advancements in computer vision and image processing are established. Here, features can be extracted under the condition of sufficient image matching. Furthermore, using depth sensors based on the Time of Flight (ToF) principle, robust image recognition can be accomplished. These sensors can provide a high density of 3D data, which also results in more computational effort and is not required for 2D localization [9]. Cameras moreover face the challenge of poor and varying lighting [10].

An ultra-low-cost SLAM approach is to consider onboard ultrasonic sensors. However, this technique yields sparse measurements of the surroundings. Hence, without information about the robot kinetics, the vehicle's pose might be lost while navigating in complicated environments [9].

One method to achieve pose estimation is to use only a LiDAR mounted on the UGV, as in [11]. In addition to offering a high sampling rate, LiDAR also offers high angular resolution, good range detection and high robustness against a varying environment [9]. In [5], it is shown that the LiDAR scan can provide a more accurate position estimate when stationary than when fusing LiDAR scans with IMU data in an EKF. However, in dynamic use, the fused approach provided a better position and heading angle estimate than using LiDAR alone.


Previous research has shown that the EKF is a sufficient and relatively robust and efficient estimation framework that is highly suitable for dynamic motion systems [12]. As described in [5], the EKF is widely used in SLAM and navigation systems. SLAM is a process that enables the ability to build a map of the vehicle's surroundings and to estimate the UGV's position within that map. Hence, it is a highly desirable method when navigating in an unknown environment, such as underground, underwater or indoors [13].

In [14], a 9DoF Orientus IMU from Advanced Navigation is fused in an EKF with measurements from an optical shaft encoder from US Digital mounted on the wheel, to estimate the position of an electric wheelchair. For dynamic testing, the Orientus IMU provides an accuracy of 0.6° for roll and pitch, and 1.0° for the heading angle. The static accuracy is 0.2° for roll and pitch and 0.5° for heading. For indoor position testing, the error between starting and end pose using wheel encoders and IMU is 0.5–1.7 m, and 2.1–4.2 m when only using the wheel encoders.

1.3 Aim and Objectives

This study investigates and evaluates the performance of a low-cost system, consisting of a 6DoF IMU and photo-electric wheel encoders, for pose estimation.

Using an EKF algorithm, the measurements are fused to improve the position and heading angle estimates. LiDAR scans are later introduced with the existing ROS package gmapping, providing a SLAM approach. This creates a more robust and reliable estimation of the vehicle's pose, usable in a wide range of indoor environments. The IMU data will be processed by an Attitude and Heading Reference System (AHRS) to describe the orientation of the vehicle, based on the approach stated in [15] and [16]. Here, the gyroscope measurements are fused with the accelerometer measurements using the Gradient Descent (GD) algorithm and quaternion representation. Furthermore, the IMU will provide a position estimate based on the approach in [17], tuned with stationary detection to prevent significant drift. If the result of the IMU position estimation is satisfactory, the result will be fused with the wheel encoder measurements in an EKF.

Thus, the thesis is a quantitative study where the performance of the aforementioned sensors is evaluated. The orientation and position estimates will be compared to a MOCAP system in terms of precision and accuracy using Root-Mean-Square-Error (RMSE), Mean-Square-Error (MSE) and Maximum Error (ME). The MOCAP system provides sub-millimetre accuracy and is therefore considered as ground truth in this thesis.

1.4 Delimitations

This thesis work consists of constructing a UGV with an already existing design, namely the SVEA design developed at KTH. Therefore, little to no effort will be put into re-designing the vehicle or constructing one with other characteristics than what the existing design already offers. However, two Arduinos, wheel encoders and an IMU will be added to the platform to enhance the ability to estimate the vehicle's pose. This will require some hardware design to mount these sensors.

For simplicity, one KF will be presented for the orientation estimate, and one EKF for the position estimate. This approach will most likely not keep the yaw angle from drifting, since the wheel encoder measurements are kept separate.

This is considered a reasonable approach for short durations, such as the experiments conducted in this thesis. For a more sophisticated and durable approach, a single filter would be more suitable, letting the encoders act as a measurement update for the IMU. Little to no tuning will be performed for the proposed filters or the SLAM algorithm provided by gmapping.

Only the vehicle position will be evaluated against the MOCAP system using the proposed sensor setups. The velocity and heading angle will, however, be examined as the project progresses, and might be mentioned to strengthen some assumptions. The position estimation is only applicable when driving forward. To include reversing, a proposed solution is given in chapter 6.

1.5 Disposition

First, useful theory regarding the vehicle setup will be presented in Frame of Reference. This chapter will also include sensor information, models for position and orientation estimation and Kalman Filter necessities. Thus, it provides an insight into the theory used in this work.

Secondly, the chapter Method and Implementation will explain the course of action used for implementation, calibration, filter design, manufacturing of necessary mounts, the hardware used and how the experiments are performed.


The chapter Results will solely present the results from the experiments described in Method and Implementation. The presentation will be short and concise, consisting of plots, tables and error propagation.

In Analysis and Discussion, the experiments will be interpreted and evaluated based on the numerical and graphical results. Insights, thoughts and reflections about the outcome will also be given.

Finally, conclusions about the thesis work will be presented, regarding the results, methods and approaches used in this work. Ideas about further work, alternative approaches and suggested improvements will also be given.


Frame of Reference

This chapter focuses on information relevant to the objective of the project, to establish the context of the work to be achieved. Hence, this chapter presents theory for pose estimation using various sensors. The Kalman filter algorithm will be presented, along with orientation representations using quaternions and Euler angles.

2.1 Light Detection and Ranging

LiDAR is an optical instrument that measures the characteristics of reflected light to find the distance or other properties of a distant object. The measurement is based on the ToF method, which means that the LiDAR emits a short laser pulse that bounces off an object, and measures the time that elapses before the sensor receives the laser pulse again. Since the sensor often offers a 360° view, it can be used for SLAM and is hence very suitable for autonomous driving [18]. One drawback, however, is the computational cost of the algorithms treating the LiDAR data for SLAM.

2.2 Inertial Measurement Unit

Inertial Measurement Units are widely used for orientation estimation or as an INS, providing odometry for UGVs or Unmanned Aerial Vehicles (UAVs). A common setup is a 6DoF configuration, which consists of an accelerometer and a gyroscope. Adding a magnetometer results in a 9DoF unit. To derive an AHRS, a 6DoF or 9DoF IMU can be utilized according to [16]. However, using a 9DoF IMU will also provide a complete measurement of the orientation relative to the earth's magnetic field [16]. Nevertheless, magnetometers are very susceptible to noise from metallic objects in the near environment, which will interfere with the data collection and hence may not be suitable for indoor use [19]. Thus, a suitable low-cost 6DoF unit will be utilized in the following proposal.

2.2.1 Accelerometer

Accelerometers are designed to measure external forces acting on the sensor. The purpose is to measure the difference between any linear acceleration in the accelerometer's reference frame and the earth's gravitational field vector [20]. Hence, the output is a field vector in three-dimensional space, ∈ R³. If no force is acting on the sensor, the output should be zero. However, this is often not the case; for example, the gravitational force is always present. A simple case in which the output would be zero is when the sensor is in free fall at exactly 9.81 m/s², or in space.

For easy understanding, fig. 2.1 shows an illustration of how the sensor can be thought of. If no force is present, the ball would be placed in the centre of the cube in all three directions. If a force is applied in the positive x-direction, the ball would move and press with the same amount of force in the negative x-direction. This is known as the notional inertial force [7].

Figure 2.1: Illustration of how an accelerometer works in R³. x, y and z are positive in the directions the arrows imply.

Since the sensor measures force in all directions, and the sensor will rotate in different applications, the ball will act on multiple axes at the same time instant. The sensed acceleration force is then divided among all the walls the ball is in contact with [21]. Furthermore, accelerometers are insensitive to rotation around the earth's gravitational field vector. Yet, they are sensitive to vibrations, mechanical noise and temperature change [21].

2.2.2 Gyroscope

Gyroscope sensors are developed to measure the angular rate about the x, y and z axes of the sensor frame, denoted ω_x, ω_y and ω_z, respectively. Generally, a gyroscope measures the angle between a spinning object, tilted perpendicular to the spin, and the reference surface. However, the sensor used in this work is a Micro-Electro-Mechanical System (MEMS) based sensor. The raw data output is three-dimensional and can be represented in vector form

\omega = [\omega_x \;\; \omega_y \;\; \omega_z],    (2.1)

and is given in either [rad/s] or [deg/s]. The sensor used in this work outputs [deg/s]. By simple integration over time, the absolute angle

\theta = [\theta_x \;\; \theta_y \;\; \theta_z]    (2.2)

can be acquired according to

\theta = \int_0^t \omega(\tau)\, d\tau \approx \sum_{k=0}^{t} \omega_k \,\Delta t, \qquad \theta \in \mathbb{R}^3,    (2.3)

where \omega = d\theta/dt \in \mathbb{R}^3 and \Delta t represents the sampling time.

Since the measurements are given in degrees per second, a simple integration can be performed for Euler angles. In this case, however, the measurements are transformed into quaternions, which are not as easily integrated [7]. Hence, the Ω matrix in eq. 2.4 is used to convert the angular rates from the gyroscope into quaternion rates:

\Omega = \begin{bmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{bmatrix}    (2.4)

Once the Ω matrix is constructed, and letting the system be sampled with Δt, the quaternion rates can be integrated into a quaternion using

q_k = q_{k-1} \exp(\Omega\,\Delta t).    (2.5)

This represents the quaternion transformation from navigation frame to body frame [15], and will be the Jacobian of the state transition process model of the AHRS algorithm.
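To make the conversion from gyroscope rates to a quaternion update concrete, the following is a minimal numpy/scipy sketch of eqs. 2.4–2.5, not the code used in the thesis; it includes the 1/2 factor from the continuous-time model q̇ = 0.5 Ω q of eq. 2.16, and the sample rate values are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

def omega_matrix(w):
    """4x4 skew-symmetric rate matrix of eq. (2.4) from gyro rates w = [wx, wy, wz] in rad/s."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [ wx, 0.0,  wz, -wy],
                     [ wy, -wz, 0.0,  wx],
                     [ wz,  wy, -wx, 0.0]])

def propagate_quaternion(q_prev, w, dt):
    """Discrete propagation q_k = exp(0.5 * Omega * dt) q_{k-1}, cf. eqs. (2.5) and (2.16)."""
    q = expm(0.5 * omega_matrix(w) * dt) @ q_prev
    return q / np.linalg.norm(q)  # re-normalize to keep a unit quaternion

# Example: start at identity and integrate a 90 deg/s yaw rate over one 10 ms sample
q = propagate_quaternion(np.array([1.0, 0.0, 0.0, 0.0]),
                         np.radians([0.0, 0.0, 90.0]), 0.01)
```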

The noise of a gyroscope is typically Gaussian in nature. However, the noise that the sensor actually outputs can vary due to environmental factors such as temperature or the voltage source. Moreover, the gyroscope is less sensitive than an accelerometer and is normally not affected by mechanical accelerations and vibrations [7].

2.2.3 Magnetometer

A magnetometer is used to measure the local magnetic field, and hence senses the rotation about the vertical axis, which can be used to estimate the yaw angle. The magnetometer output

m = [m_x \;\; m_y \;\; m_z],    (2.6)

given in Gauss or Tesla, will not be included in the EKF nor represented by quaternions. Instead, yaw can be computed using the Euler angles of roll and pitch along with the output from the magnetometer, as described in [22]. Before estimating the yaw angle, the magnetometer output should be compensated if the pitch or roll angle is not zero, or close to zero. This can be done using

\begin{bmatrix} m_x \\ m_y \\ m_z \end{bmatrix} = \begin{bmatrix} m_x\cos\phi - m_z\sin\phi \\ m_x\sin\theta\sin\phi + m_y\cos\theta + m_z\sin\theta\cos\phi \\ m_x\cos\theta\sin\phi - m_y\sin\theta + m_z\cos\theta\cos\phi \end{bmatrix},    (2.7)

where θ and φ are the Euler angles from (10). Further on, the yaw angle can be obtained by

\psi = \operatorname{atan}\!\left(\frac{m_y}{m_x}\right).    (2.8)
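As an illustration of eqs. 2.7–2.8, the sketch below applies the tilt compensation and computes yaw; it is not part of the thesis implementation (no magnetometer is mounted on the vehicle), and np.arctan2 is used in place of the plain arctangent so the full ±180° range is covered.

```python
import numpy as np

def tilt_compensated_yaw(m, roll, pitch):
    """Compensate the magnetometer vector m = [mx, my, mz] for roll/pitch (rad)
    following eq. (2.7), then compute yaw as in eq. (2.8)."""
    mx, my, mz = m
    phi, theta = roll, pitch
    mx_c = mx * np.cos(phi) - mz * np.sin(phi)
    my_c = (mx * np.sin(theta) * np.sin(phi) + my * np.cos(theta)
            + mz * np.sin(theta) * np.cos(phi))
    # atan2 keeps the correct quadrant; eq. (2.8) writes this as atan(my/mx)
    return np.arctan2(my_c, mx_c)
```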

Using this approach, the pitch and roll angles would not be affected by magnetic distortion, as they would be if they were incorporated in the EKF, which is considered the standard approach explained in [23]. Magnetometers are very susceptible to distortions in the magnetic field, which can originate from ferrous materials on the UGV or other perturbations that exist in the near environment [7]. Perturbations can be divided into two categories, hard iron and soft iron.

Hard iron is basically a constant additive perturbation in the magnetic field that originates from ferrous materials. Soft iron refers to materials that do not necessarily generate their own magnetic field, but rather create a distortion in the magnetic field vector. These perturbations, or offsets, need to be compensated for by calibrating the magnetometer, centering the ellipse/sphere around the origin. A more in-depth explanation is given in [7].

2.3 Extended Kalman Filter

The Kalman Filter is a basic Gaussian filter algorithm [1], used for tracking objects, navigation, economics and computer vision applications, among others [24]. The Kalman filter is a popular optimal estimator based on a prediction made using the inputs u_{t-1} and u_t at times t-1 and t, treated as the previous and current input, respectively. However, the normal Kalman filter is limited to linear problems only, whereas the Extended Kalman Filter is extended to handle non-linear models by introducing linearization. The EKF follows the same basic principle as the Kalman Filter algorithm, and can be seen in fig. 2.2 [1].

Figure 2.2: Algorithm for the Extended Kalman Filter [1].

An intuitive approach is to consider the EKF as a process based on two steps: a prediction step and an update or correction step. The prediction step is based on the model of our process, which can be a mathematical model of a UAV, a ground-based vehicle or another agent. Here, the prediction step is calculated using information from the wheel encoders and the motion model in eq. 2.24. This corresponds to rows two and three in fig. 2.2. The state g(u_t, µ_{t-1}) represents the predicted non-linear transition state of the process, G_t is the Jacobian of g, R is the process noise covariance and Σ is the state covariance.

In the update step, the Kalman gain K is calculated according to line four in fig. 2.2. This is done using the predicted error covariance matrix Σ̄_t, the linearized measurement prediction matrix H_t and the measurement noise covariance matrix Q. Subsequently, a correction of the state is performed by including the measurement vector z, in accordance with line five.
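A generic sketch of the prediction and update steps in fig. 2.2 is given below; the function arguments g, G, h and H are placeholders for the process model, its Jacobian, the measurement model and its Jacobian, and this is not the filter implementation used in the thesis.

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G, h, H, R, Q):
    """One EKF iteration following fig. 2.2 [1]: predict with the motion model, then correct with z."""
    # Prediction step (rows 2-3): propagate mean and covariance through the process model
    mu_bar = g(u, mu)
    G_t = G(u, mu)
    Sigma_bar = G_t @ Sigma @ G_t.T + R
    # Update step (rows 4-6): Kalman gain, state correction and covariance update
    H_t = H(mu_bar)
    K = Sigma_bar @ H_t.T @ np.linalg.inv(H_t @ Sigma_bar @ H_t.T + Q)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H_t) @ Sigma_bar
    return mu_new, Sigma_new
```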

2.4 Euler angles

Euler angles are represented by roll, pitch and yaw, denoted by the parameterization φ, θ, ψ, respectively. Euler angles are fairly easy to understand. In fig. 2.3 one can see that roll (φ) represents the rotation around the x-axis and pitch (θ) represents the rotation around the y-axis. Thus, roll and pitch can be considered the incline/decline or tilt of the vehicle. Lastly, yaw (ψ) represents the rotation about the z-axis [7], acting as the heading angle of the vehicle.

Figure 2.3: Representation of the Euler angle rotation sequence of a coordinate frame.

The rotation order of the representation is roll, pitch and yaw, since this is the order in which the attitude is perceived [7]. The range of roll is ±180°, which allows pitch to have a range of ±90°. Lastly, yaw is represented using a range of ±180°. The roll, pitch and yaw rotation matrices, describing the transform between two coordinate frames (e.g. navigation to body frame), are given by

R_x(\phi) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{bmatrix}    (2.9)

R_y(\theta) = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix}    (2.10)

R_z(\psi) = \begin{bmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}.    (2.11)

The complete rotation sequence then forms the matrix

R_{xyz} = R_x(\phi)R_y(\theta)R_z(\psi) = \begin{bmatrix} c(\theta)c(\psi) & c(\theta)s(\psi) & -s(\theta) \\ c(\psi)s(\theta)s(\phi) - c(\phi)s(\psi) & c(\phi)c(\psi) + s(\theta)s(\phi)s(\psi) & c(\theta)s(\phi) \\ c(\phi)c(\psi)s(\theta) + s(\phi)s(\psi) & c(\phi)s(\theta)s(\psi) - c(\psi)s(\phi) & c(\theta)c(\phi) \end{bmatrix},    (2.12)

where c = cos and s = sin.

However, since Euler angles act in a system of gimbals, they are limited by a phenomenon called gimbal lock. Gimbal lock occurs whenever two axes align with each other. The rotation (0, 0, 0) is for example equal to (0, 0, 2πk) for any integer k, and letting θ = π/2 leads to

R_{xyz} = \begin{bmatrix} 0 & 0 & -1 \\ \sin(\phi-\psi) & \cos(\phi-\psi) & 0 \\ \cos(\phi-\psi) & -\sin(\phi-\psi) & 0 \end{bmatrix},    (2.13)

meaning that only the rotation φ − ψ is observable [17]. As described in [7]: "Gimbal lock occurs when two axes are aligned, such as when the plane is pitched straight up, the pitch and the yaw axis are aligned, and as the roll gimbal is rotated, the pitch and the yaw angle are both affected the same, therefore losing orientation". Hence, the approach of using quaternions is adopted. This contains the singularity issue, analogous to adding another gimbal, by using a different representation.

2.5 Quaternion

Quaternions have been applied in many different areas, such as life science, physics and engineering [25], and were first described by the Irish mathematician William Rowan Hamilton in 1843. In this thesis, quaternions are used to represent the rotation relationship between the navigation frame and the body frame. Hence, they provide the orientation of the body frame relative to the navigation frame.


Quaternions are hyper-complex 4-vectors, denoted q = [q_0 \; q_1 \; q_2 \; q_3]^T ∈ R⁴, with q = q_0 + q_1 i + q_2 j + q_3 k. Here q_0, q_1, q_2 and q_3 are real numbers and i, j, k satisfy the multiplication rules

i^2 = j^2 = k^2 = ijk = -1,    (2.14)

ij = -ji = k, \quad ki = -ik = j, \quad jk = -kj = i.    (2.15)

The components (q_1, q_2, q_3) ∈ R³ represent the vector part of the quaternion, and q_0 ∈ R the scalar that specifies the amount of rotation the vector part should perform [25]. One imaginary component is used for the axis of the rotation and the other two imaginary parts are used for the orientation of the platform [7]. For best accuracy of the rotation, it is necessary to normalize the quaternion to keep the error from accumulating; hence, the quaternion is of unit length. The normalization of the quaternion is performed in the same way as for any vector [7].

As for the discrete case described by eq. 2.5, the transformation from navigation frame to body frame is described by the differential equation

\dot{q} = 0.5\,\Omega\,q,    (2.16)

where Ω is of skew-symmetric form, the same as in eq. 2.4. To understand the rotation and its relationship to Euler angles, the same approach as in [2] will be presented. Let the vectors X_n, Y_n and Z_n, and X_b, Y_b and Z_b, which are mutually orthogonal unit vectors, represent the axes of the navigation coordinate frame and the body coordinate frame, respectively. The quaternion q, given by eq. 2.17, describes the orientation of navigation frame n relative to body frame b. This is illustrated in fig. 2.4, where r_a is a vector in b.

q = [q_0 \;\; q_1 \;\; q_2 \;\; q_3] = \left[\cos\tfrac{\alpha}{2} \;\;\; r_x\sin\tfrac{\alpha}{2} \;\;\; r_y\sin\tfrac{\alpha}{2} \;\;\; r_z\sin\tfrac{\alpha}{2}\right],    (2.17)

where r_x, r_y and r_z describe the components of the unit vector r_a in the x, y and z axes of the navigation coordinate frame.


Figure 2.4: "The orientation of frame n is achieved by a rotation, from alignment with frame b, of angle θ around the axis r_a" [2].

A rotation relationship can also be expressed with the Direction Cosine Matrix (DCM) according to

C(q) = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2-q_0q_3) & 2(q_1q_3+q_0q_2) \\ 2(q_1q_2+q_0q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2q_3-q_0q_1) \\ 2(q_1q_3-q_0q_2) & 2(q_2q_3+q_0q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}.    (2.18)

From quaternions, the Euler angles can be obtained according to

\begin{bmatrix} \phi \\ \theta \\ \psi \end{bmatrix} = \begin{bmatrix} \operatorname{atan2}(2q_2q_3 + 2q_0q_1,\; q_3^2 - q_2^2 - q_1^2 + q_0^2) \\ -\operatorname{asin}(2q_1q_3 - 2q_0q_2) \\ \operatorname{atan2}(2q_1q_2 + 2q_0q_3,\; q_1^2 + q_0^2 - q_3^2 - q_2^2) \end{bmatrix}.    (2.19)

2.6 Orientation Estimation (AHRS)

To obtain the orientation of the vehicle, an AHRS algorithm using quaternion representation, GD and KF design will be utilized. Generally, to get a good estimate of the yaw angle, magnetometer measurements are desired. One can then incorporate the magnetometer measurement in the update step of the filter according to eq. 2.7, and the yaw angle is obtained using eq. 2.8. The approach is based on the work conducted in [23] and separates the magnetometer output from the GD algorithm. One advantage of this method is that the level attitude is not affected by magnetic distortion. However, since no magnetometer is used in this work, the yaw angle will be estimated from the prediction step of the EKF, using only accelerometer and gyroscope data. Consequently, the heading angle will suffer from a noticeable drift. An outline of the AHRS algorithm can be seen in fig. 2.5.

Figure 2.5: Block diagram of AHRS algorithm.

2.6.1 Gradient Descent

Gradient descent is a first-order iterative optimization algorithm used to find a minimum of a function. GD will in this sense be used to formulate the observation model for the filter, and is adopted in order to avoid the need for linearization. One approach could be to use the Adaptive Step Gradient Descent (ASGD) algorithm, which uses an adaptive step-size, so that the convergence of the GD algorithm is subject to the motion rate of the vehicle. In other words, when the vehicle is under fast-moving conditions the step-size increases, and when the vehicle is under slow-moving conditions the step-size decreases [23]. This is however not covered in this report; here, a fixed step-size is used.

GD produces a computed quaternion measurement for the filter. This is achieved using the normalized accelerometer data acc = [0 \; a_x \; a_y \; a_z]^T in the body frame, and the gravitational vector g = [0 \; 0 \; 0 \; 1]^T. The objective function is then described by

f(q, g, acc) = C(q)^T g - acc,    (2.20)

where C(q) is the rotation matrix from eq. 2.18. The computed quaternion is then given by

q_{t+1} = q_t - \mu\,\frac{\nabla f(q, g, acc)}{\|\nabla f(q, g, acc)\|},    (2.21)

where \mu represents the fixed step-size and \nabla f(q, g, acc) is the gradient of eq. 2.20,

\nabla f(q, g, acc) = J^T(q)\, f(q, g, acc),    (2.22)

and J(q) is the Jacobian matrix of eq. 2.20:

J(q) = \frac{\partial f(q, g, acc)}{\partial q} = 2\begin{bmatrix} -q_2 & q_3 & -q_0 & q_1 \\ q_1 & q_0 & q_3 & q_2 \\ q_0 & -q_1 & -q_2 & q_3 \end{bmatrix}.    (2.23)

2.7 Position Estimation

Measuring distance and pose can be achieved in various ways, for example by the use of wheel encoders, a Global Positioning System (GPS), laser range sensors, or others [26]. As the vehicle is designed for indoor use only, some sensors will not be considered, e.g. GPS, since they tend to have poor performance indoors due to blocked satellite signals and the small displacements involved.

2.7.1 Wheel Encoders

Measuring a wheel's rotational velocity and position can be achieved using different sensors. For example, Hall sensors, which sense the magnitude of magnetic fields, can be placed on the rear axle near the wheels. For every rotation, the wheel passes magnets that the sensor detects. In general, these sensors are slightly more expensive than, e.g., the photo-electric sensors used in this report.

To represent the rotational speed and distance traveled of the vehicle, a motion model of the vehicle is needed. Various models exist that are applicable to this matter, based on both odometry and velocity. In [1], the authors state: "Practical experience suggests that odometry, while still erroneous, is usually more accurate than velocity. Both suffer from drift and slippage, but velocity additionally suffers from the mismatch between the actual motion". Consequently, to compute the odometry of the vehicle, a two-wheel differential drive model is utilized. The model is expressed as

u_t = \begin{bmatrix} v_t\,\Delta t\,\cos\mu_{t-1,\theta} \\ v_t\,\Delta t\,\sin\mu_{t-1,\theta} \\ \omega_t\,\Delta t \end{bmatrix},    (2.24)

where \omega_t is the angular velocity computed from

\omega_t = \frac{\omega_t^R R_R - \omega_t^L R_L}{B}    (2.25)

and v_t is the translational velocity computed as

v_t = \frac{\omega_t^R R_R + \omega_t^L R_L}{2}.    (2.26)

Here, B represents the wheel base of the vehicle, and R_R and R_L the radii of the right and left wheel, respectively. Finally, \omega_t^R and \omega_t^L are the individual angular velocities of the right and left wheel, computed from

\omega_t^R = \frac{2\pi E_t^R}{E_T\,\Delta t}    (2.27)

\omega_t^L = \frac{2\pi E_t^L}{E_T\,\Delta t},    (2.28)

where E_T is the number of encoder ticks per wheel revolution, and E_t^R and E_t^L are the encoder ticks for the right and left wheel in the t-th time step.
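Putting eqs. 2.24–2.28 together, a minimal odometry integration could look like the sketch below. The wheel radii and wheel base are placeholder values, not the calibrated parameters of the SVEA vehicle; the tick count of 36 corresponds to the encoder disk described in chapter 3.

```python
import numpy as np

E_T = 36            # encoder ticks per wheel revolution (36-slit encoder disk)
R_R = R_L = 0.06    # wheel radii in metres (placeholder values)
B = 0.25            # wheel base in metres (placeholder value)

def integrate_odometry(pose, ticks_right, ticks_left, dt):
    """One odometry step from encoder ticks, eqs. (2.24)-(2.28); pose = (x, y, theta)."""
    x, y, theta = pose
    w_r = 2.0 * np.pi * ticks_right / (E_T * dt)   # eq. (2.27)
    w_l = 2.0 * np.pi * ticks_left  / (E_T * dt)   # eq. (2.28)
    w = (w_r * R_R - w_l * R_L) / B                # angular velocity, eq. (2.25)
    v = (w_r * R_R + w_l * R_L) / 2.0              # translational velocity, eq. (2.26)
    # Motion model of eq. (2.24): the displacement is expressed in the previous heading
    return (x + v * dt * np.cos(theta),
            y + v * dt * np.sin(theta),
            theta + w * dt)
```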

2.7.2 Inertial Measurement Unit

The accelerometer and gyroscope data are modelled as input to the dynamics.

The position and velocity can then be obtained by

\begin{bmatrix} v_{t+1} \\ p_{t+1} \end{bmatrix} = \begin{bmatrix} v_t + y_{a,t}\,\Delta t \\ p_t + v_t\,\Delta t + \frac{y_{a,t}}{2}\,\Delta t^2 \end{bmatrix},    (2.29)

where p ∈ R³ represents the position p = [x, y, z] and v ∈ R³ the velocity. The system sampling time is denoted Δt, and y_{a,t} is the measured force in the navigation frame provided by the accelerometer, described by

y_{a,t}^n = C(q)\,(acc_t^b - \delta) - g^n.    (2.30)

Here, g^n is the nominal gravity vector, C(q) represents the quaternion rotation matrix presented in eq. 2.18, δ represents the sensor bias and acc_t^b is the accelerometer output in the body frame at time t.
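A sketch of the strapdown integration in eqs. 2.29–2.30, reusing the dcm helper from the gradient-descent sketch above; this follows the equations as written and is not the thesis code.

```python
import numpy as np

G_N = np.array([0.0, 0.0, 9.81])  # nominal gravity vector g^n in m/s^2

def imu_position_step(p, v, q, acc_body, bias, dt):
    """Position/velocity update of eq. (2.29), with the navigation-frame
    acceleration computed as in eq. (2.30) via C(q) (the dcm helper above)."""
    y_a = dcm(q) @ (np.asarray(acc_body) - bias) - G_N   # eq. (2.30)
    p_next = p + v * dt + 0.5 * y_a * dt**2              # eq. (2.29), position row
    v_next = v + y_a * dt                                # eq. (2.29), velocity row
    return p_next, v_next
```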


2.7.3 Simultaneous Localization and Mapping

Simultaneous Localization and Mapping is a concept that allows a mobile robot to use sensor data to generate a map of an unknown environment while driving, and at the same time localize its position within the map [27]. Thus, localization of the vehicle and map generation are the two main purposes of SLAM, both performed while driving. No prior knowledge is needed about the map or the environment the mobile robot is navigating in. Hence, it is considered one of the most important and complex topics to master when developing autonomous robots [1].

Gmapping

The ROS package gmapping utilizes a Rao-Blackwellized particle filter and accepts odometry information along with LiDAR scans. It requires a transform from the frame in which the scan is published to the Base_link of the vehicle. Also, a transform from Base_link to the odometry frame, provided by a ROS transform (tf) or a static_transform_publisher [28], is required. The requested tf tree is illustrated in fig. 2.6.

Figure 2.6: Required transform tree by gmapping.

If one wants to offset the LiDAR to represent its actual placement on the vehicle, and to project the accurate position of the vehicle onto the floor, a /tf publisher is required between Base_scan and Base_link and between Base_link and Base_footprint. Base_footprint represents the projection onto the floor, while Base_scan and Base_link represent the LiDAR placement. Gmapping uses the data published on the /odom and /scan topics, along with the /tf transforms, to estimate the pose of the vehicle in the given map. This follows the logic illustrated in fig. 2.7. Here, map acts as a "parent" to Odom, and Odom as a parent for Base_footprint, and so forth.

Figure 2.7: Extended transform tree.
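As an example of how such a transform can be provided in ROS, the snippet below publishes a static base_link → base_scan transform with tf2_ros; the frame names and the 0.10 m offset are illustrative assumptions, not the actual SVEA configuration, and a static_transform_publisher command line achieves the same thing.

```python
#!/usr/bin/env python
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

if __name__ == "__main__":
    rospy.init_node("lidar_static_tf")
    broadcaster = tf2_ros.StaticTransformBroadcaster()

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = "base_link"   # parent frame on the vehicle
    t.child_frame_id = "base_scan"    # LiDAR frame
    t.transform.translation.z = 0.10  # assumed sensor height above base_link [m]
    t.transform.rotation.w = 1.0      # identity rotation

    broadcaster.sendTransform(t)
    rospy.spin()
```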


2.8 Error Estimation

To evaluate the performance of the four setups in terms of accuracy and precision, the error will be estimated according to the following methods. All methods presented provide positive values, and a smaller value is considered better.

2.8.1 Mean Square Error

Mean Square Error is a simple and common error metric for regression analysis, and is described by

MSE = \frac{1}{N}\sum_{i=1}^{N}(y_i - \hat{y}_i)^2,    (2.31)

where y_i is the measured value and \hat{y}_i is the actual value. Using this, the error between two coordinates can be described by

MSE = \frac{1}{N}\sum_{i=1}^{N}\left[(x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2\right].    (2.32)

Here, x_i and y_i describe the coordinates of the sensor measurement, and \hat{x}_i and \hat{y}_i correspond to the actual measurement provided by the MOCAP. MSE treats the errors quadratically, meaning that an error of 2 m is considered four times as bad as an error of 1 m, making this method sensitive to outliers.

2.8.2 Mean Absolute Error

Mean Absolute Error is widely used in finance because all individual differences are weighted equally in the average, and is described by

MAE = \frac{1}{N}\sum_{i=1}^{N}|y_i - \hat{y}_i|,    (2.33)

where y_i is the measured value and \hat{y}_i is the actual value. MAE is less sensitive to outliers than MSE, since MAE provides a linear score of the errors, i.e., an error of 2 m is considered twice as bad as an error of 1 m. The error between two coordinates then becomes

MAE = \frac{1}{N}\sum_{i=1}^{N}\left(|x_i - \hat{x}_i| + |y_i - \hat{y}_i|\right),    (2.34)

where x_i and y_i describe the coordinates of the sensor measurement, and \hat{x}_i and \hat{y}_i correspond to the MOCAP measurement.

2.8.3 Root Mean Square Error

Root Mean Square Error is the square root of MSE, making them somewhat similar. RMSE is perhaps a more intuitive method to use, since it describes the average distance between two data set points. RMSE is described by

RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(y_i - \hat{y}_i)^2},    (2.35)

and for two coordinates

RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[(x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2\right]}.    (2.36)

2.8.4 Maximum Distance Error

The distance between two coordinates can be described by

\sqrt{(x - \hat{x})^2 + (y - \hat{y})^2}.    (2.37)

Thus, the Maximum Error for each measurement vector is defined as

ME = \max_{n} \sqrt{(x_n - \hat{x}_n)^2 + (y_n - \hat{y}_n)^2},    (2.38)

where the maximum is taken over all samples n of the measurement vector.
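The four error measures can be computed directly from the logged trajectories; a minimal numpy sketch (the arrays shown are dummy data, not thesis results):

```python
import numpy as np

def error_metrics(xy_est, xy_gt):
    """MSE, MAE, RMSE and maximum distance error (eqs. 2.31-2.38) between
    estimated 2D positions and MOCAP ground truth, both arrays of shape (N, 2)."""
    dx = xy_est[:, 0] - xy_gt[:, 0]
    dy = xy_est[:, 1] - xy_gt[:, 1]
    sq = dx**2 + dy**2
    return {"MSE":  np.mean(sq),                       # eq. (2.32)
            "MAE":  np.mean(np.abs(dx) + np.abs(dy)),  # eq. (2.34)
            "RMSE": np.sqrt(np.mean(sq)),              # eq. (2.36)
            "ME":   np.max(np.sqrt(sq))}               # eq. (2.38)

# Example with dummy data
print(error_metrics(np.array([[0.0, 0.0], [1.0, 1.1]]),
                    np.array([[0.1, 0.0], [1.0, 1.0]])))
```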


Method and Implementation

This chapter explains the implementation and the proposed structure of the filters, along with hurdles and limitations. The vehicle design and chosen hardware will be briefly presented. Moreover, the experiments to be conducted will be described in detail.

3.1 Vehicle Design

The vehicle platform is based on the design developed at KTH SML, which uses a Traxxas TRX-4 as chassis and an NVIDIA Jetson TX2 as CPU/GPU. The original servo for steering and the Electronic Speed Control (ESC) for controlling the motor are kept. The RC receiver is removed and replaced with a micro-controller that serves as a low-level controller, interacting with the ESC, servo, gear and differentials. The Arduino receives and converts values sent from the programming environment on the NVIDIA Jetson TX2, and outputs appropriate control signals.

The vehicle is equipped with various sensors to create a platform for multipurpose use, and to improve flexibility regarding autonomous driving. A LiDAR sensor is mounted on the top. The IMU will be placed at an arbitrary location, while the wheel encoders will be mounted on the rear axle. The NVIDIA Jetson TX2 is powered by a portable power bank, which also powers the attached sensors.


3.1.1 NVIDIA Jetson TX2

The NVIDIA Jetson TX2 is a powerful embedded platform designed to run autonomous machine software, and is a system-on-module with CPU, GPU, PMIC, DRAM and flash storage. With its broad multipurpose use, the device is an ideal platform for the end-to-end AI pipeline of autonomous machines. It is designed to simultaneously cope with data from various sensors and to perform media decoding/encoding, networking and low-level command and control protocols after the GPU has processed the data. The TX2 offers several I/O connections including USB, HDMI, Ethernet, GPIOs, I2C, I2S and SPI, which makes it highly suitable for handling multiple micro-controllers or various sensors [29]. The NVIDIA Jetson TX2 Developer Kit also comes with a pre-flashed Ubuntu 16.04 image that is compatible with ROS Kinetic.

3.1.2 ROS

ROS was developed at multiple institutions, with its core at the Stanford Artificial Intelligence Laboratory, for many different robot applications, and was initially released in 2007. ROS has many ancestors and contributors who helped create its fundamental ideas and software packages. ROS is, however, not an operating system but a robotics middleware, which makes ROS a flexible framework for writing robot software, and it aims to simplify the challenging task of creating complex and robust robot solutions. This is achieved through various libraries and collections of tools, created by merging the knowledge and expertise of people with different backgrounds.

The main ROS client libraries, C++, Python and Lisp, are geared toward Unix-like systems, mainly due to the dependence on large collections of open-source software. Furthermore, ROS is built up of, and communicates through, nodes, messages, topics and bags, among other things. Nodes are written using a ROS client library, such as roscpp (ROS in C++) or rospy (ROS in Python), and are processes that perform computation. Therefore, a complete robot control system often consists of many nodes. One node can, for example, control a laser range-finder or wheel motors, or perform localization. Nodes communicate through Messages, which are data structures; for example, integer, floating point or boolean. A Topic is a name used to identify the content of messages. Basically, a node sends out messages by publishing to a given topic. Therefore, if one is interested in a certain type of data, one subscribes to the appropriate topic. This allows anyone to subscribe or publish to any given topic. Bags are used to store data from sensors, the robot's driving path, etc. Hence, it is a format that allows saving and playing back ROS message data [30]. Using the rosserial_arduino package allows one to directly use an Arduino with ROS and communicate through the Arduino's UART. Hence, the Arduino can act as a fully fledged ROS node that can publish, subscribe and access ROS system time [31].

3.1.3 Sensor Setup

Since ROS offers a rosserial protocol for Arduino micro-controllers, where the communication between host and slave is easily set up, the IMU and the wheel encoders will be connected to Arduinos. One drawback of using an Arduino is however the low flash memory and the limited number of interrupt ports offered. Hence, the wheel encoders and the IMU will be connected to separate units: the wheel encoders to an Arduino UNO and the IMU to an Arduino Nano. The final setup can be seen in fig. 3.1.

Figure 3.1: Block diagram for sensor setup.

LiDAR

SLAMTEC offers LiDARs that are fairly cheap and suited for indoor and outdoor applications. In this thesis, the RPLiDAR A2 will be utilized. The sensor offers the following specifications:

• 360° scanning view
• Up to 18 m range radius
• 8000 samples/second
• Angular resolution 1°
• Scan rate: 10–15 Hz
• Compatible with ROS, communication interface TTL UART

Photo Electric Sensor

The TCRT5000 is a Reflective Optical Sensor with Transistor Output [32], where the light-emitting source and the detector are arranged in the same direction, sensing the presence of an object by using the IR beam reflected from the object [33]. The IR transmitter and receiver, which are integrated in the same head, can not only detect an obstacle but also the distance, through the analog output. The breakout board has a very compact construction and is designed with an adjustable potentiometer, which is used to determine the detection distance (1–50) mm. The peak operating distance is 2.5 mm [32]. The breakout board is shown in fig. 3.2 and the specifications are as follows [33]:

• Detection distance: (1–50) mm, applied focal distance of 2.5 mm
• Multi-turn precision potentiometer for adjustable sensitivity
• Working voltage of 3.3–5 V
• Output format: digital switching output
• Fixed bolt holes for easy installation
• Small PCB size: 3.2 × 1.4 cm
• Wide-voltage LM393 comparator


Figure 3.2: Breakout board of optical encoder - TCRT5000

The photo-electric sensors will be connected to an Arduino UNO. As mentioned, the working voltage for the sensor is 3.3 V to 5 V, but for the encoder to send a proper signal without interference, the VCC pin of the encoder must be connected to the 3.3 V supply of the Arduino [34]. The Arduino will also be programmed to read each tick followed by a 20 µs delay, to reduce the possibility of reading rebounds [34]. The digital D0 output offered by the breakout board will be connected to the external interrupt ports offered by the Arduino UNO, ports two and three. A schematic diagram of the wiring can be seen in fig. 3.3. In the proposed filter, the Arduino will send each tick read by the encoders to the TX2 as a ROS message using the Arduino rosserial protocol.

Hence, the computation of the translational and angular velocity, described in eqs. 2.26 and 2.25, will be performed on the TX2.
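A hedged sketch of what the TX2-side node could look like: it subscribes to per-wheel tick counts coming from the Arduino over rosserial and publishes v and ω according to eqs. 2.25–2.26. The topic names, message type and vehicle parameters are assumptions for illustration, not the thesis configuration.

```python
#!/usr/bin/env python
import math
import rospy
from std_msgs.msg import Int16MultiArray
from geometry_msgs.msg import TwistStamped

E_T = 36          # encoder ticks per wheel revolution
R_WHEEL = 0.06    # assumed wheel radius [m]
B = 0.25          # assumed wheel base [m]

class EncoderVelocity(object):
    def __init__(self):
        self.pub = rospy.Publisher("encoder/twist", TwistStamped, queue_size=10)
        self.last_time = rospy.Time.now()
        rospy.Subscriber("encoder/ticks", Int16MultiArray, self.on_ticks)

    def on_ticks(self, msg):
        # msg.data = [right_ticks, left_ticks] accumulated since the previous message (assumption)
        now = rospy.Time.now()
        dt = (now - self.last_time).to_sec()
        self.last_time = now
        if dt <= 0.0:
            return
        w_r = 2.0 * math.pi * msg.data[0] / (E_T * dt)      # eq. (2.27)
        w_l = 2.0 * math.pi * msg.data[1] / (E_T * dt)      # eq. (2.28)
        out = TwistStamped()
        out.header.stamp = now
        out.twist.linear.x = (w_r + w_l) * R_WHEEL / 2.0    # eq. (2.26), equal radii
        out.twist.angular.z = (w_r - w_l) * R_WHEEL / B     # eq. (2.25), equal radii
        self.pub.publish(out)

if __name__ == "__main__":
    rospy.init_node("encoder_velocity")
    EncoderVelocity()
    rospy.spin()
```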


Figure 3.3: Optical encoder interfacing with the arduino.

To mount the photo-electric sensors on the vehicle, simple mounts will be 3D-printed and attached to the rear axle. Since the vehicle does not have stiff suspension, the sensors must follow the rear wheels and axle as they move up and down during driving. The final attachment can be seen in fig. 3.4, fig. 3.5a and fig. 3.5b. The encoder disk, consisting of 36 slits, is fitted inside the rim for protection against external objects.

Figure 3.4: Illustration of the optical sensors attached to the rear axle.

Figure 3.5: Illustration of the optical sensors and encoder disk attached to the rear axle.

The width of the car and the radius of the wheels are measured by hand with a caliper, and slightly adjusted before the execution of the experiments, in order to better estimate the position. The wheel radius is measured while the vehicle is standing on the ground, since the tires are relatively soft. This means that the radius decreases when the vehicle is placed on the ground, which will most likely give rise to small variations when driving.

Inertial Measurement Unit

The IMU used in this work is a triple-axis accelerometer and gyroscope breakout board, the MPU-6050, developed by InvenSense and provided by SparkFun. This IMU is MEMS-based and provides a sensing range of ±2g, ±4g, ±8g and ±16g for the accelerometer and ±250°/s, ±500°/s, ±1000°/s and ±2000°/s for the gyroscope. If needed, the MPU-6050 allows easy connection of an additional sensor, such as a magnetometer.

The output is raw data y ∈ R³ for both the accelerometer and the gyroscope, and I2C is used as the communication interface [35]. Hence, it is compatible with the NVIDIA Jetson TX2 or an Arduino. The MPU-6050 sensor is shown in fig. 3.6.


Figure 3.6: MPU-6050 triple axis accelerometer and gyroscope.

A schematic diagram of the connection to the Arduino can be seen in fig. 3.7. The VDD and VIO pins of the breakout board are connected to the 3.3 V power supply. Further on, the Serial Data (SDA) pin is connected to port A5, the Serial Clock (SCL) to A4, and lastly the interrupt pin (INT) is connected to digital port D2 of the Arduino.

Figure 3.7: MPU-6050 interfacing with the arduino.

Inertial measurement units are in general calibrated when delivered from the manufacturer. They do however suffer from an offset, which is known as a sensor bias. The sensor bias has to be removed before a decent orientation or position estimate can be performed. The approach used in this work is simple and straightforward. Data from the accelerometer and the gyroscope are collected while the IMU is held still and perpendicular to the earth's surface.

This means that a = [0, 0, 9.81] m/s² and the gyroscope reads [0, 0, 0] in the ideal case.

To calculate the sensor bias, or mean value of the output, the IMU is placed on a table while 1000 samples are collected. The mean value is calculated for each degree of freedom and subsequently removed from each axis. This is performed once at the beginning of the work. Moreover, the accelerometer and gyroscope measurements will be filtered through a digital low-pass filter, according to eq. 3.1, to reduce the influence of outliers.

y_{t+1} = y_t + \alpha(u - y_t)    (3.1)

Here, α represents the smoothing factor and u the sensor output. Due to limitations in the Arduino flash memory, the raw values are published as an Int16 array.
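The bias removal and the low-pass filter of eq. 3.1 can be sketched as follows; the smoothing factor and the dummy data are illustrative, not the values used on the vehicle.

```python
import numpy as np

def estimate_bias(stationary_samples):
    """Sensor bias as the per-axis mean of N stationary samples, shape (N, 3)."""
    return np.mean(stationary_samples, axis=0)

def low_pass(y_prev, u, alpha=0.2):
    """Digital low-pass filter of eq. (3.1); alpha is the smoothing factor (assumed value)."""
    return y_prev + alpha * (u - y_prev)

# Example: calibrate on 1000 dummy stationary readings, then filter a live stream
bias = estimate_bias(np.random.normal(0.02, 0.01, size=(1000, 3)))
y = np.zeros(3)
for raw in np.random.normal(0.0, 0.05, size=(10, 3)):
    y = low_pass(y, raw - bias)
```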

The placement of the IMU depends on free space and suitable options on the vehicle. Possible disturbances caused by the motor etc. will also be considered. The final attachment can be seen in fig. 3.8.

Figure 3.8: Placement of IMU and arduino on the SVEA vehicle.


3.2 Motion Capture System

The MOCAP system mounted in the SML facility consists of infra-red cameras that can localize and provide 6DoF recognition of a rigid body. The system provides sub-millimetre precision and can be utilized to validate the accuracy of LiDAR, IMU, radar and other sensors.

Further on, the system can easily be synchronized with other external time-based systems. Therefore, the MOCAP is set up to publish the pose of a specified object using ROS at approximately 80 Hz. Hence, it is easy to record and obtain the desired orientation or position of the vehicle using the same internal time stamps. The various sensor setups are subsequently compared to these measurements. The field of range of the system in the SML lab is approximately (x, y) = (±2.5, ±2.5) m.

3.3 System Setup

A final setup of the car can be seen in fig. 3.9.

Figure 3.9: Illustration of final setup, including all sensor and hardware used in the following experiments.


3.3.1 Filtering

First, the orientation of the vehicle will be calculated according to the AHRS algorithm. Secondly, the position is obtained by fusing the IMU heading angle and the odometry from the wheel encoders. A final schematic of the system is shown in fig. 3.10. Note that the position estimation from the IMU is not included in this approach, due to its poor performance, as can be seen in chapter 4. By filtering the yaw angle from the AHRS algorithm in the EKF, eq. 2.18 will be defined by the previous time step. This should not affect the result noticeably, due to the high sampling rate of 80–110 Hz.

Figure 3.10: Schematic of the system.

Orientation Estimation (AHRS)

The initial condition for our state is µ = [1 0 0 0]^T, which represents that the body frame is aligned with the navigation frame in all axes. The matrix Σ can be set arbitrarily as an initial condition; however, Σ has to be positive definite and of the same dimensions as eq. 2.4. Hence, Σ is set to

\Sigma = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

The process noise R and the measurement noise Q are arbitrarily set to:

R = \begin{bmatrix} 10^{-4} & 0 & 0 & 0 \\ 0 & 10^{-4} & 0 & 0 \\ 0 & 0 & 10^{-4} & 0 \\ 0 & 0 & 0 & 10^{-4} \end{bmatrix}
