
UPTEC F15 043

Degree project, 30 credits. June 2015

UAV Pose Estimation using Sensor Fusion of Inertial, Sonar and Satellite Signals

Kenneth Gustavsson



Abstract

UAV Pose Estimation using Sensor Fusion of Inertial, Sonar and Satellite Signals

Kenneth Gustavsson

In this thesis a system for pose estimation of a quadcopter is developed. The implemented approach uses a complementary Kalman filter to combine the information from an IMU and sonar sensors on board the quadcopter with GPS. This approach does not require a dynamic model of the quadcopter, which makes it easier to modify the current quadcopter or switch to another one.

Experiments indicate that the system provides accurate pose estimates and is robust in the sense that it can handle loss of GPS signals during shorter time periods.

Provided that further work is spent on testing and improving the stability of the estimation, especially the heading estimation, the system may be used to enable autonomous navigation of a quadcopter.

The system is implemented using ROS (Robot Operating System) and tested on the low-cost quadcopter Parrot AR.Drone 2.0.

ISSN: 1401-5757, UPTEC F15 043. Examiner: Tomas Nyberg. Subject reader: Dave Zachariah. Supervisor: Winston Garcia-Gabin


Popular Science Summary

A quadcopter can be described as a helicopter with four rotors and is a type of unmanned aerial vehicle that today can be made both small and agile. These properties, together with the fact that they can be equipped with different types of sensors and cameras, make them suitable for a variety of missions, from delivering packages to assisting in rescue operations. Among the advantages of quadcopters is that they can reach places and spaces that ground-based robots cannot. They can also be used in environments where a human would risk injury, for example buildings that are at risk of collapsing or are filled with toxic gases, or at great heights.

To navigate in these environments, out of sight of a pilot, techniques are needed that provide information about, among other things, the position and heading of the quadcopter. For positioning, GPS can be used. With the help of a number of satellites in orbit around the Earth, the position of the user can be determined. This technique works all over the world and in all weather, but only outdoors.

In urban environments and other less open areas, GPS is not as useful, since the position of the user cannot be determined as accurately.

As a complement to GPS, sensors on board the quadcopter that measure its motion can be used, so-called accelerometers and gyroscopes. By measuring how the quadcopter accelerates and rotates, its velocity, position and heading can be computed over time. The advantage of on-board accelerometers and gyroscopes is that they do not rely on any external system such as satellites, and they can therefore be used even when GPS is unreliable. A drawback, however, is that these sensors do not measure perfectly at all times, and thus, just as with GPS, the information they provide cannot be trusted on its own.

This work focuses on combining information from GPS, accelerometers and gyroscopes to determine the position and heading of a quadcopter. Although these sensors are not ideal individually, the strengths of each sensor can be exploited so that together they provide better and more reliable information about where the quadcopter is and in which direction it is heading. The result of the work is a computer program that receives data from the quadcopter's sensors, combines the data using so-called sensor fusion, and computes its position and heading continuously while it flies.

In the future, this may lead to autonomous unmanned vehicles that can navigate and carry out various missions fully or partly on their own. They can simplify and streamline tasks that today are time-consuming, difficult or dangerous for humans to perform.


Acknowledgements

First of all, I would like to thank my supervisor at ABB Corporate Research, Winston Garcia-Gabin, for his encouragement and support during this work. I would also like to thank ABB for giving me the opportunity to perform my thesis work at an inspiring and welcoming workplace.

Thanks also to my subject reviewer at Uppsala University, Dave Zachariah, for constantly providing me with help, feedback and, most of all, knowledge.

The time at ABB would not have been half as fun if it were not for the group of thesis workers that I have been sharing office space with during the spring. Filip, Olov, Luca, Miguel, Sara, Therese, Angelica and of course Mikael have all hosted great dinners and provided many laughs, a lot of random enjoyment and discussions throughout our time together at ABB. Furthermore, my family deserve a great deal of thanks for their support and understanding.

Finally, I wish to express my deepest gratitude and love to Lina for her endless support and care.

Kenneth Gustavsson Uppsala, June 2015


Contents

Acknowledgements
List of Figures
List of Tables
Abbreviations and symbols
  Abbreviations
  List of symbols

1 Introduction
  1.1 Related work
  1.2 Objectives
  1.3 Outline of thesis

2 Navigation
  2.1 Introduction
  2.2 Reference frames for navigation
    2.2.1 Inertial frame
    2.2.2 ECEF frame
    2.2.3 Local navigation frame
    2.2.4 The body frame
    2.2.5 Attitude representations and coordinate frame rotations
  2.3 Systems and sensors for navigation purposes
    2.3.1 GPS
    2.3.2 SONAR
    2.3.3 IMU
  2.4 INS
    2.4.1 Inertial navigation system equations
    2.4.2 INS error equations

3 Sensor Fusion
  3.1 Kalman filter
  3.2 The complementary filter
    3.2.1 Feedback and feedforward configuration
  3.3 Measurement model

4 Experimental Platform and Implementation
  4.1 The Parrot AR.Drone 2.0
    4.1.1 The sensors of the AR.Drone 2.0
    4.1.2 Control and communication
  4.2 Robot Operating System
    4.2.1 ROS driver for the Parrot AR.Drone
  4.3 System overview
  4.4 Sensor fusion implementation
    4.4.1 Choice of Q and R

5 Experiments and Results
  5.1 General setup of the experiments
    5.1.1 Initial conditions
    5.1.2 Accuracy measurements
  5.2 Experiment 1, a rectangular path
    5.2.1 Setup of test 1
    5.2.2 Results of test 1
  5.3 Experiment 2, a circular path
    5.3.1 Setup of test 2
    5.3.2 Results of test 2
  5.4 Experiment 3, a robustness test
    5.4.1 Setup and results of test 3
  5.5 Summary and discussion

6 Concluding Remarks
  6.1 Conclusions
  6.2 Further work

A Skew-symmetrical Matrices
B Transformations between Reference Frames
  B.1 Transformation from geodetic to rectangular coordinates in the ECEF frame
  B.2 Transformation from rectangular coordinates in the ECEF frame to the local navigation frame
C Confidence ellipses


List of Figures

2.1 Three different navigation frames
2.2 The {b} frame
2.3 The {n} and {b} frames
2.4 A block diagram of an INS
3.1 Complementary filter in feedback configuration
3.2 Complementary filter in feedforward configuration
4.1 The Parrot AR.Drone 2.0
4.2 ROS graph architecture
4.3 Overview of the implemented system
4.4 Implemented complementary filter
4.5 Results from IMU test
5.1 Path of experiment no. 1
5.2 Path of experiment no. 2
5.3 Estimated position in 3D, test no. 1
5.4 Estimated and measured position in 2D, test no. 1
5.5 Estimated altitude vs time, test no. 1
5.6 Estimated attitude vs time, test no. 1
5.7 Estimated position in 3D, test no. 2
5.8 Estimated and measured position in 2D, test no. 2
5.9 Estimated altitude vs time, test no. 2
5.10 Estimated attitude vs time, test no. 2
5.11 Position estimates with GPS loss, rectangular path
5.12 Position estimates with few GPS measurements, rectangular path
5.13 Position estimates with GPS loss, circular path
5.14 Position estimates with few GPS measurements, circular path
5.15 Attitude estimates with GPS loss, rectangular path
5.16 Attitude estimates with few GPS measurements, rectangular path
5.17 Attitude estimates with GPS loss, circular path
5.18 Attitude estimates with few GPS measurements, circular path


List of Tables

4.1 Sensor noise parameters. The values are given on a per-sample basis.
5.1 Initial values for the error covariance matrix P
5.2 Error statistics of estimations

Abbreviations and symbols

Abbreviations

ECEF Earth-centered, Earth-fixed.

EKF extended Kalman filter.

ESKF error state Kalman filter.

GPS Global Positioning System.

IMU Inertial Measurement Unit.

INS Inertial Navigation System.

MAV Micro Unmanned Aerial Vehicle.

MEMS Microelectromechanical Systems.

MSE Mean square error.

RMSE root mean square error.

ROS Robot Operating System.

SINS Strapdown Inertial Navigation System.

SLAM Simultaneous Localization And Mapping.

UAV Unmanned Aerial Vehicle.

List of symbols

x column vector.

X matrix.

xᵀ transpose of vector x.

Xᵀ transpose of matrix X.

X⁻¹ inverse of matrix X.

diag(x), diag(X) diagonal matrix with the elements of x or X as the main diagonal.


Chapter 1

Introduction

In recent years, remote controlled and autonomous MAVs, especially quadcopters, have become immensely popular among both researchers and hobbyists. For a hobbyist, they may be high-tech toys for flying around and shooting videos, while for researchers they provide a low-cost platform for embedded control applications and research within the areas of sensor fusion and autonomous systems.

Quadcopters are very agile and maneuverable; they have the ability to take off and land vertically, move in any direction and hover at a fixed position. They can also be made considerably smaller than conventional helicopters, since they use four small rotors instead of a single large one to obtain lift. These abilities make quadcopters suited to many applications, for example reconnaissance missions in areas dangerous for humans and inaccessible to other aerial vehicles and robots, like collapsed buildings and hazardous environments.

In order to know their position and navigate in an unknown environment, quadcopters need to rely on sensors that provide information about position, velocity and orientation. Sensors may provide this either directly, like a GPS receiver that provides position, or indirectly, like inertial sensors providing velocity and orientation through integration. A lot of research also focuses on using onboard cameras for navigation purposes.

Regardless of which combination of sensors is used, the key to a good navigation system is to combine the data from the different sensors in a favorable way. This is referred to as sensor fusion, i.e. fusing data or information from different sources so that the resulting information is better than if each sensor were considered individually.

The idea of this thesis is to explore the area of sensor fusion applied to quadcopters for navigation purposes; more specifically, to fuse the information from inertial, satellite and sonar sensors to provide position and orientation estimates for a low-cost quadcopter. Combining GPS and inertial sensors is preferable since they have complementary characteristics and should therefore yield a robust and reliable system.

1.1 Related work

The problem of estimating the position and orientation of an aerial vehicle has been a topic of research for many years, and the literature provides different solutions to the problem. The most frequent approach, however, is to fuse the information from inertial sensors and one or more complementary sensors using variants of the Kalman filter. There is no universal choice of filter variant and complementary sensors. In general, the fusion problem is solved in two different ways. The first solution uses a nonlinear version of the Kalman filter (the extended or unscented Kalman filter) to estimate the full navigation state (direct filtering) using a dynamic model of the system. The second solution lets inertial sensors provide a nominal navigation solution and then estimates the error of this solution (indirect filtering) with a nonlinear or linear Kalman filter that is fed with information from the complementary sensors.

The general idea and motivations for the second approach are described in [1]. It is stated that the second solution is preferred over the first one, since cumbersome and approximate dynamic modeling is circumvented and an error model is better suited for estimation with a Kalman filter than a full dynamic model. An integration of GPS and inertial sensors based on indirect filtering with a linear Kalman filter is successfully applied in [2], [3], [4] and [5]. Simulations made in [3] and [5] show that a loosely coupled integration of GPS and inertial sensors, i.e. an integration where the GPS measurement is treated as a position sensor y_k = p_k for the position p_k, provides a more reliable and accurate navigation solution than each sensor individually. This is also concluded in [4], where the approach is applied on a fixed-wing UAV. The same approach is further developed in [2], where a magnetometer is incorporated to provide a reliable heading estimate for a helicopter UAV during hover.

[6] and [7] apply the first-mentioned approach. In both these works, the use of a monocular camera onboard a quadcopter and visual SLAM is evaluated, with the simple motivation that it provides a solution to the positioning problem in GPS-denied environments. This is important since it opens up for usage in indoor areas. The conclusions drawn in [6] are that the accuracy of the pose estimation depends on the volume of the operational area: a larger volume results in lower accuracy. It is further concluded that their approach imposes requirements on the environment; a sufficient amount of structure/texture at a relatively close distance (2-5 m) from the camera is needed. On the other hand, the use of an approximate dynamic model of the quadcopter performs very well in practice despite many physical assumptions. In [7] it is concluded that a monocular camera together with inertial sensors can provide a basis for reliable pose estimates in GPS-denied environments.

The combination of a monocular camera and inertial sensors is also used in [8]. Instead of using visual SLAM, markers called tags, placed at known positions and detected by the camera, regularly provide position information within reasonable time intervals. A Sigma-Point Kalman filter in feedback form is used to correct the nominal inertial navigation solution given by the inertial sensors. The results indicate that this is a possible solution for indoor navigation. However, the approach requires an infrastructure of tags and needs to be further evaluated for longer trajectories.

1.2 Objectives

The main objective of this thesis was to develop a system able to provide position and attitude information for a quadcopter. This has been done by implementing a sensor fusion algorithm that combines information from the GPS receiver, IMU and ultrasonic range sensors of the AR.Drone 2.0 quadcopter. The implementation was done in C++ using ROS, a framework for the development of robotic applications.

The chosen sensor fusion implementation is a so-called complementary filter, where a nominal navigation solution is provided via the IMU measurements and corrected with the help of a Kalman filter and information from the GPS and ultrasonic range sensors. This approach was chosen since it makes it possible to use the same filter for another quadcopter; its performance mainly depends on the sensors and not on a dynamic model of the quadcopter.


1.3 Outline of thesis

The content of this thesis is organized as follows:

• Chapter 2 introduces useful navigation theory and commonly used sensors for navigation. The chapter ends with a presentation of the INS equations and how sensor deficiencies introduce the need for sensor fusion.

• In chapter 3, it is shown how sensor fusion can be used to combine the information from inertial sensors and other sensors like GPS to provide a reliable, high-rate navigation system.

• A description of the experimental platform and details on the implementation of the theory from previous chapters can be found in chapter 4.

• How the experiments were performed and the results of them are presented in chapter 5. The chapter also contains a summary and discussion of the results.

• At the end of the thesis, in chapter 6, conclusions based on the work are drawn. There are also some suggestions for future work.


Chapter 2

Navigation

2.1 Introduction

Navigation systems in different forms are, and have been, present in our daily lives throughout history. They have enabled us to discover new parts of the world and the universe, and to find the way to the supermarket. Two of today's standard techniques for navigation are INS and GPS. GPS is a satellite-based positioning system continuously providing users with position information all around the globe, in all kinds of weather. An INS is mounted on a moving object and uses measurements from inertial sensors, i.e. accelerometers and gyroscopes, together with navigation dynamics to compute the position, velocity and orientation of the object. Since it is mounted on the moving object itself, an INS does not rely on external signals for navigation. INS and GPS are preferably used together due to their complementary characteristics, which will be explained later in the chapter. Sonar sensors are a type of ranging sensor; a UAV can for example use sonar to detect obstacles or measure the distance to the ground.

The rest of this chapter is devoted to presenting different coordinate frames used in navigation and giving more details about inertial sensors, GPS and sonar sensors. At the end of the chapter, the INS equations are presented and it is shown how sensor deficiencies affect them. This demonstrates the need for sensor fusion to improve the performance of inertial sensors by combining them with information from other sensors such as GPS and sonar.

2.2 Reference frames for navigation

This section describes four coordinate frames used in the navigation of aerial vehicles: the inertial frame, the ECEF frame, the local navigation frame and the body frame. They are shown in figures 2.1 and 2.3.

2.2.1 Inertial frame

The inertial frame in this case is a frame with its origin at the Earth's center of mass. The z-axis is aligned with the Earth's spin axis, and the x- and y-axes lie in the equatorial plane so that a right-handed coordinate system is completed. This system does not rotate with the Earth and is from here on denoted the {i}-frame.

2.2.2 ECEF frame

The ECEF frame is similar to the {i}-frame with the difference that it rotates with the Earth. Its z-axis always points from the Earth's center of mass up through the true north pole. The x-axis passes through the intersection of the prime meridian and the equator, and the y-axis completes a right-handed coordinate system. Using these axes for the ECEF frame is usually referred to as the ECEF rectangular coordinate system [9], but the ECEF frame can also be described with the ECEF geodetic coordinates. These coordinates are referred to as longitude, latitude and altitude (λ, φ, h) and are used to describe the location of a point relative to the WGS-84 defined ellipsoid [9].

2.2.3 Local navigation frame

The local navigation frame, shown in figure 2.3, has x- and y-axes tangential to the Earth's surface, with the x-axis pointing north and the y-axis pointing west. The z-axis points upwards to complete a right-handed system. This system is often used for applications where the navigation is done in a local and bounded space, and its origin is then placed at the starting point of the vehicle.

Figure 2.1: Three different navigation frames: the inertial frame {i}, the Earth frame {e} and the local navigation frame {n}.

Figure 2.2: The {b} frame, with roll, pitch and yaw about its axes. x_b points towards the front of the vehicle.


2.2.4 The body frame

When body-fixed sensors measure quantities relative to the vehicle's body, it is convenient to have a body-fixed frame {b}, shown in figure 2.2. The vehicle's center of mass is chosen as the origin; the z-axis points to the top of the vehicle, the x-axis to the front, and the y-axis completes a right-handed system (i.e. points to the left side of the vehicle). The rotations around the x-, y- and z-axes are referred to as roll, pitch and yaw respectively, denoted φ, θ and ψ. These should not be mistaken for the Euler angles that will be mentioned later, which are usually denoted in the same way.

Figure 2.3: The local navigation frame {n} (north, west, up) and the body-fixed frame {b}.

2.2.5 Attitude representations and coordinate frame rotations

Consider a vehicle measuring motions relative to its own body-fixed frame, and an external system measuring the position, velocity, etc. of the same vehicle in the {e} frame. Additionally, the actual navigation is performed relative to the local navigation frame {n}. This shows the need for tools for describing the orientation of a rigid body with respect to a reference frame and for rotating reference frames. Three of the most common ways to represent orientations and rotations in ℝ³ are Euler angles, direction-cosine matrices and quaternions.

Euler angles. With Euler angles, the rotation from one frame to another is described by three successive planar rotations about different axes. If the rotation from {b} to {n} is parameterized by the three Euler angles θ = [φ, θ, ψ]ᵀ, the complete rotation from {b} to {n} is achieved by [10]

1. rotating through angle ψ about z_b,

2. rotating through angle θ about the new y-axis,

3. rotating through angle φ about the new x-axis.

Note that the Euler angles φ, θ, ψ are not the same as the angles used to describe the rotation of the axes of the {b}-frame.

Direction-cosine matrix. A direction-cosine matrix R_a^b(θ) ∈ SO(3)¹, rotating frame {a} to {b}, is a 3×3 matrix enabling a vector v^a in frame {a} to be rotated into frame {b} with the vector-matrix multiplication v^b = R_a^b v^a. The rotation matrix from frame {b} to frame {n} can be constructed by multiplying the rotation matrices for each of the above-mentioned Euler angle rotations. First, the three separate rotations may be expressed as the rotation matrices:


\[
R_1 = \begin{bmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}, \tag{2.1}
\]

\[
R_2 = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix} \tag{2.2}
\]

and

\[
R_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{bmatrix} \tag{2.3}
\]

where R_1 is the rotation around the axis z_b, R_2 the rotation around the new y-axis, and R_3 the rotation that finally aligns frame {b} with frame {n}. The full rotation from {b} to {n} is then given as the direction-cosine matrix

\[
R_b^n(\theta) = R_3 R_2 R_1 =
\begin{bmatrix}
c(\psi)c(\theta) & s(\psi)c(\theta) & -s(\theta) \\
-s(\psi)c(\phi)+c(\psi)s(\theta)s(\phi) & c(\psi)c(\phi)+s(\psi)s(\theta)s(\phi) & c(\theta)s(\phi) \\
s(\psi)s(\phi)+c(\psi)s(\theta)c(\phi) & -c(\psi)s(\phi)+s(\psi)s(\theta)c(\phi) & c(\theta)c(\phi)
\end{bmatrix} \tag{2.4}
\]

where s(x) and c(x) are used to denote sin(x) and cos(x) respectively.

¹SO(3) is the group of all rotations about the origin of ℝ³.
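As a concrete illustration of (2.1)-(2.4), the following sketch builds the direction-cosine matrix from the three planar rotations. It assumes the Eigen linear-algebra library (an assumption shared by the other sketches in this document) and is illustrative code, not the thesis implementation.

#include <Eigen/Dense>
#include <cmath>

using Eigen::Matrix3d;

// Planar rotations (2.1)-(2.3).
Matrix3d R1(double psi) {
  Matrix3d R;
  R <<  std::cos(psi), std::sin(psi), 0.0,
       -std::sin(psi), std::cos(psi), 0.0,
        0.0,           0.0,           1.0;
  return R;
}

Matrix3d R2(double theta) {
  Matrix3d R;
  R << std::cos(theta), 0.0, -std::sin(theta),
       0.0,             1.0,  0.0,
       std::sin(theta), 0.0,  std::cos(theta);
  return R;
}

Matrix3d R3(double phi) {
  Matrix3d R;
  R << 1.0,  0.0,            0.0,
       0.0,  std::cos(phi),  std::sin(phi),
       0.0, -std::sin(phi),  std::cos(phi);
  return R;
}

// Direction-cosine matrix (2.4) composed from the planar rotations.
Matrix3d dcm(double phi, double theta, double psi) {
  return R3(phi) * R2(theta) * R1(psi);
}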

Quaternions. A quaternion is basically a four-element vector q = [q₁, q₂, q₃, q₄]ᵀ where the elements are real numbers. Using quaternions for rotations in ℝ³ is less intuitive than Euler angles and direction cosines, but they provide a more stable and efficient attitude representation; they are therefore often used as internal states for orientation representation in computer implementations of navigation applications [11]. Even though quaternions were used in the implementation part of this work, understanding them is not necessary for the rest of this thesis, and they will therefore not be further explained.

It has now been shown how the orientation, or attitude, of a vehicle can be described, as well as different ways to rotate the {b} frame into the {n} frame. This will later be used when the inertial navigation equations are given in section 2.4.1. For transformations from geodetic and rectangular coordinates of the {e} frame to the local navigation frame, see Appendix B.

2.3 Systems and sensors for navigation purposes

2.3.1 GPS

GPS is a satellite-based navigation system with worldwide coverage, available in all weather, day or night. A set of satellites orbiting the Earth ensures that a GPS receiver anywhere on the globe can be located, provided that the receiver is within line-of-sight of at least four of the satellites. The satellites broadcast messages containing the precise time at which a message was sent, the identity of the satellite and its current position. A GPS receiver determines the distance to a single satellite by comparing the arrival and sending times of a message, knowing that the signals travel with the speed of light. Knowing the distance to at least four satellites and their positions relative to the Earth, the receiver determines its position in the ECEF geodetic coordinates (λ, φ, h) using trilateration.

GPS is considered to be a reliable system for outdoor localization. However, GPS data is generally too noisy and provided at too low a sampling rate to give any information about position derivatives, especially for high-dynamic applications like quadcopters. Other drawbacks are that it suffers from short-term signal loss due to blockage, jamming or interference.

Figure 2.4: Block diagram of an Inertial Navigation System. The navigation dynamics described by (2.7) relate the inertial measurements to position, velocity and attitude.

2.3.2 SONAR

A SONAR (SOund Navigation and Ranging) sensor uses either infrasonic or ultrasonic sound to detect and measure the range to an obstacle. An active ultrasonic sonar emits a pulse s(t) in a certain direction θ and then detects eventual echoes y(t) to determine the range to an obstacle. The received signal y(t) is related to the range R to the obstacle as y(t) = h(t) ∗ s(t − 2R/c), where ∗ denotes convolution, h(t) is the dynamic distortion of the signal (impulse response), c the speed of sound in air and t the time of flight for the signal [11].

Standard ultrasonic rangefinders used for robot applications are able to provide tens of distance measurements per second. Their angular resolution is usually one or two orders of magnitude worse than that of laser ranging sensors, which have sub-degree angular resolution. Nonetheless, sonar sensors are still appealing thanks to lower prices and power consumption [12].
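For intuition, the echo's time of flight directly gives the range: the pulse travels to the obstacle and back, a distance of 2R. A minimal sketch (the speed-of-sound constant is an approximate room-temperature value, not a sensor-specific calibration):

// Range from ultrasonic time of flight: the pulse travels 2R, so R = c*t/2.
double sonarRange(double timeOfFlightSeconds) {
  const double c = 343.0;  // approx. speed of sound in air at 20 C, m/s
  return c * timeOfFlightSeconds / 2.0;
}
// Example: an echo arriving after about 5.8 ms corresponds to roughly 1 m.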

2.3.3 IMU

A 6 DoF IMU consists of a three-axis accelerometer and a three-axis gyroscope. These are inertial sensors and measure the specific force and angular velocity of the IMU, respectively.

The axes of both the gyroscope and the accelerometer are ideally orthogonal and aligned with the axes of the platform they are mounted on. Today, MEMS sensors are the most common type of IMU thanks to their low price and small size, which make them useful in all kinds of devices.

The error sources for accelerometers and gyroscopes can be divided into random errors (measurement noise) and systematic errors (offset or bias, scale factors and misalignment errors) [9]. If the scale factor and misalignment errors are neglected, assuming they are compensated for by calibration, the output of the sensors can be modeled as

\[
\tilde{f}^b = f^b + \delta f^b + n_f, \qquad
\tilde{\omega}^b = \omega^b + \delta\omega^b + n_\omega \tag{2.5}
\]

where f^b and ω^b ∈ ℝ³ are the true specific force and angular velocity respectively, n_f and n_ω represent Gaussian white measurement noise, and δf^b, δω^b are slowly varying biases [8], [2].

An IMU usually provides high-rate (100-1000 Hz) information about linear accelerations and angular velocities. Even though position, linear velocity and orientation information can be obtained through integration of the measurements, the aforementioned errors, and the integration of them, result in useless solutions within seconds for consumer-grade MEMS sensors.
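To make the sensor model (2.5) concrete, the sketch below generates synthetic gyroscope readings as truth plus a slowly varying (random-walk) bias plus white noise. The noise magnitudes are made-up placeholders, not the AR.Drone's actual parameters.

#include <Eigen/Dense>
#include <random>

using Eigen::Vector3d;

// Synthetic gyroscope output per (2.5): omega_meas = omega + bias + noise.
struct GyroSim {
  Vector3d bias = Vector3d::Zero();
  std::mt19937 rng{42};
  std::normal_distribution<double> gauss{0.0, 1.0};
  double sigmaNoise = 0.01;  // white-noise std dev per sample (placeholder)
  double sigmaWalk  = 1e-4;  // bias random-walk std dev per sample (placeholder)

  Vector3d randn() { return Vector3d(gauss(rng), gauss(rng), gauss(rng)); }

  Vector3d measure(const Vector3d& omegaTrue) {
    bias += sigmaWalk * randn();  // slowly varying bias (Brownian motion)
    return omegaTrue + bias + sigmaNoise * randn();
  }
};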

2.4 INS

As mentioned in the introduction to this chapter, an INS is a system that provides navigation information using inertial sensors and navigation dynamics. A block diagram of an INS is shown in figure 2.4. The specific INS depicted in figure 2.4 is a SINS, a system where the inertial sensors are strapped directly onto the vehicle frame. The measurements of the sensors then describe motions relative to the body frame.

Since errors in the inertial measurements may result in a useless navigation solution within seconds, external aiding sensors can be used to periodically correct the navigation information from the INS. A common choice of external aiding is GPS, since an INS and GPS together have the ability to provide information about position, position derivatives and orientation with a high sampling rate and good accuracy [11].

2.4.1 Inertial navigation system equations

As mentioned in the previous section, when using a strapdown INS on a flying (or any) vehicle, the IMU measures the specific force f^b and angular velocity ω^b relative to the body frame {b} of the vehicle. When the navigation is performed in the local navigation frame {n}, the IMU measurements need to be rotated into the {n} frame. Let the position and velocity of an aerial vehicle in the {n} frame be denoted p^n and v^n. Furthermore, let the attitude of the aircraft be parameterized by the Euler angles θ = [φ, θ, ψ]ᵀ or the rotation matrix R_b^n(θ). If the specific force f^b and angular velocity ω^b given by the sensors are ideal, the evolution of p^n, v^n and R_b^n(θ) is given by the INS equations

\[
\begin{aligned}
\dot{p}^n &= v^n \\
\dot{v}^n &= R_b^n f^b + g^n - 2\left[\omega_{ie}^n \times\right] v^n \\
\dot{R}_b^n &= R_b^n\left[\omega^b \times\right] - R_b^n\left[\left(R_n^b \omega_{ie}^n\right) \times\right]
\end{aligned}
\tag{2.6}
\]

where ω_ie^n and g^n are the local angular velocity of the Earth and the gravitational acceleration, and [a×] is the skew-symmetric matrix representation of the vector cross product. Some of the terms in these state equations might be trivial, like the derivative of position being equal to the velocity, or how the gravitational acceleration and specific force affect the velocity derivative. Other terms might not be as trivial. For a full derivation of the INS equations, see for example [8].
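A minimal sketch of how (2.6) could be discretized with a forward-Euler step at the IMU rate. The Earth-rate term is dropped, as the implementation in chapter 4 also does, and skew() implements the [a×] operator of appendix A; this is an illustration, not the thesis code.

#include <Eigen/Dense>

using Eigen::Matrix3d;
using Eigen::Vector3d;

// Skew-symmetric matrix [a x], so that skew(a) * b = a.cross(b).
Matrix3d skew(const Vector3d& a) {
  Matrix3d S;
  S <<    0.0, -a.z(),  a.y(),
        a.z(),    0.0, -a.x(),
       -a.y(),  a.x(),    0.0;
  return S;
}

struct InsState {
  Vector3d p = Vector3d::Zero();      // position in {n}
  Vector3d v = Vector3d::Zero();      // velocity in {n}
  Matrix3d R = Matrix3d::Identity();  // attitude, {b} to {n}
};

// One forward-Euler step of (2.6) with the Earth-rate term omitted.
void propagate(InsState& s, const Vector3d& f_b, const Vector3d& w_b, double dt) {
  const Vector3d g_n(0.0, 0.0, -9.81);  // gravity in {n}, z pointing up
  s.p += dt * s.v;
  s.v += dt * (s.R * f_b + g_n);
  s.R += dt * s.R * skew(w_b);          // first-order attitude update
  // In practice the attitude is re-orthogonalized, or represented with a
  // quaternion as in chapter 4, to keep R a valid rotation matrix.
}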

2.4.2 INS error equations

The INS navigation equations given in (2.6), which describe the evolution of the navigation states, assume ideal IMU measurements. Therefore, the sensor errors described in section 2.3.3 cause deviations from the actual states. Substituting the ideal values of the IMU measurements in (2.6) with the sensor models from (2.5) gives the equations

\[
\begin{aligned}
\dot{\hat{p}}^n &= \hat{v}^n \\
\dot{\hat{v}}^n &= \hat{R}_b^n\left(f^b + \delta f^b + n_f\right) + \hat{g}^n - 2\left[\hat{\omega}_{ie}^n \times\right]\hat{v}^n \\
\dot{\hat{R}}_b^n &= \hat{R}_b^n\left[\left(\omega^b + \delta\omega^b + n_\omega\right) \times\right] - \hat{R}_b^n\left[\left(\hat{R}_n^b \hat{\omega}_{ie}^n\right) \times\right]
\end{aligned}
\tag{2.7}
\]

which describe the actual propagation of the navigation states subject to the sensor errors.

What is obtained are just estimates of the actual position, velocity and attitude, which contain deviations from the true states. However, if these deviations could be estimated, it would be possible to correct for them and obtain a solution closer to the true one.

Define the deviation states as δpⁿ ≜ p̂ⁿ − pⁿ, δvⁿ ≜ v̂ⁿ − vⁿ and δR(δθ) ≜ R̂_b^n (R_b^n)ᵀ. By taking the difference between (2.6) and (2.7), the following equations for the propagation of the deviation states are obtained:

\[
\begin{aligned}
\delta\dot{p}^n &= \delta v^n \\
\delta\dot{v}^n &= -\left[\hat{R}_b^n f^b \times\right]\delta\theta + \hat{R}_b^n\,\delta f^b + \hat{R}_b^n n_f \\
\delta\dot{\theta}^n &= -\hat{R}_b^n\,\delta\omega^b - \hat{R}_b^n n_\omega
\end{aligned}
\tag{2.8}
\]


where it is assumed that the deviations are sufficiently small [8]. By modeling the slowly varying biases of the inertial sensors as Brownian motion, i.e. integrated white noise, δḟ^b = w_{δf}, δω̇^b = w_{δω}, they can be incorporated into the system state, and a total state vector for the deviations may be written as

\[
\delta x = \begin{bmatrix} (\delta p^n)^\top & (\delta v^n)^\top & (\delta\theta)^\top & (\delta f^b)^\top & (\delta\omega^b)^\top \end{bmatrix}^\top. \tag{2.9}
\]

Then, by using a first-order approximation of the derivatives, the propagation of the deviations can be written as

\[
\begin{aligned}
\delta p^n_{k+1} &= \delta p^n_k + dt_k\,\delta v^n_k \\
\delta v^n_{k+1} &= \delta v^n_k - dt_k\left[\hat{R}_{b,k}^n \hat{f}_k^b \times\right]\delta\theta_k + dt_k\,\hat{R}_{b,k}^n\,\delta f_k^b + dt_k\,\hat{R}_{b,k}^n n_{f,k} \\
\delta\theta_{k+1} &= \delta\theta_k - dt_k\,\hat{R}_{b,k}^n\,\delta\omega_k^b - dt_k\,\hat{R}_{b,k}^n n_{\omega,k} \\
\delta f^b_{k+1} &= \delta f^b_k + dt_k\,w_{\delta f,k} \\
\delta\omega^b_{k+1} &= \delta\omega^b_k + dt_k\,w_{\delta\omega,k}
\end{aligned}
\tag{2.10}
\]

or in a more compact way

\[
\delta x_{k+1} = F_k\,\delta x_k + G_k w_k \tag{2.11}
\]

using the noise vector w_k = [(n_{f,k})ᵀ (n_{ω,k})ᵀ (w_{δf,k})ᵀ (w_{δω,k})ᵀ]ᵀ and the system matrices

\[
F_k = \begin{bmatrix}
I_3 & dt_k I_3 & 0_3 & 0_3 & 0_3 \\
0_3 & I_3 & -dt_k\left[\hat{R}_{b,k}^n \hat{f}_k^b \times\right] & dt_k\,\hat{R}_{b,k}^n & 0_3 \\
0_3 & 0_3 & I_3 & 0_3 & -dt_k\,\hat{R}_{b,k}^n \\
0_3 & 0_3 & 0_3 & I_3 & 0_3 \\
0_3 & 0_3 & 0_3 & 0_3 & I_3
\end{bmatrix}
\]

and

\[
G_k = \begin{bmatrix}
0_3 & 0_3 & 0_3 & 0_3 \\
dt_k\,\hat{R}_{b,k}^n & 0_3 & 0_3 & 0_3 \\
0_3 & -dt_k\,\hat{R}_{b,k}^n & 0_3 & 0_3 \\
0_3 & 0_3 & dt_k I_3 & 0_3 \\
0_3 & 0_3 & 0_3 & dt_k I_3
\end{bmatrix}.
\]

By having this description of the deviation state, it is possible to use an estimation algorithm to estimate the deviations and correct for them. According to the definition of the deviation states, the true navigation states p^n, v^n and R_b^n are computed as

\[
p^n = \hat{p}^n - \delta p^n, \qquad
v^n = \hat{v}^n - \delta v^n, \qquad
R_b^n = R^\top(\delta\theta)\,\hat{R}_b^n \tag{2.12}
\]

where R(δθ) is the rotation matrix in (2.4) evaluated at δθ = [δφ, δθ, δψ]ᵀ instead of θ = [φ, θ, ψ]ᵀ. The next chapter describes how the deviation states can be estimated using sensor fusion, and thereby how an INS together with complementary sensors can be used to provide high-rate and reliable navigation information.
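To illustrate, F_k and G_k could be assembled block by block as below; this is a sketch matching the block layout above, with skew() being the [a×] helper from the earlier INS sketch.

#include <Eigen/Dense>

using Eigen::Matrix3d;
using Eigen::MatrixXd;
using Eigen::Vector3d;

Matrix3d skew(const Vector3d& a);  // [a x], as in the INS sketch above

// Assemble the 15x15 F_k and 15x12 G_k of (2.11) from the time step dt,
// the nominal attitude R ({b} to {n}) and the bias-corrected specific force f_b.
void deviationModel(double dt, const Matrix3d& R, const Vector3d& f_b,
                    MatrixXd& F, MatrixXd& G) {
  const Matrix3d I = Matrix3d::Identity();

  F = MatrixXd::Identity(15, 15);
  F.block<3,3>(0, 3)  =  dt * I;              // dp   <- dv
  F.block<3,3>(3, 6)  = -dt * skew(R * f_b);  // dv   <- dtheta
  F.block<3,3>(3, 9)  =  dt * R;              // dv   <- accelerometer bias
  F.block<3,3>(6, 12) = -dt * R;              // dth  <- gyroscope bias

  G = MatrixXd::Zero(15, 12);
  G.block<3,3>(3, 0)  =  dt * R;              // accelerometer noise
  G.block<3,3>(6, 3)  = -dt * R;              // gyroscope noise
  G.block<3,3>(9, 6)  =  dt * I;              // accelerometer bias random walk
  G.block<3,3>(12, 9) =  dt * I;              // gyroscope bias random walk
}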


Chapter 3

Sensor Fusion

This chapter provides some background on sensor fusion and how it can be used to improve the pose estimation of a UAV equipped with an IMU and GPS. First, the Kalman filter equations for linear estimation are presented. Then, different approaches for fusion of an INS and external position aiding are described, especially the feedback and feedforward versions of the error-state Kalman filter. At the end of the chapter, a measurement model for using GPS as external position aiding is derived.

3.1 Kalman filter

A general discrete-time linear system can be described with the state-space model

\[
\begin{aligned}
x_{k+1} &= F_k x_k + G_k v_k, \quad k \ge 0 \\
y_k &= H_k x_k + e_k, \quad k \ge 0
\end{aligned}
\tag{3.1}
\]

where x_k is the state vector and y_k the measurement vector. The matrices F_k, G_k and H_k can be time-variant or time-invariant. The process v_k is called the process noise, while e_k is called the measurement noise. They are assumed to have the known covariances COV(v) = Q and COV(e) = R.

Assuming that an initial estimate x̂_{1|0} of the state and its covariance P_{1|0} = COV(x_0) are known, the Kalman filter finds the MSE-optimal estimate¹ x̂_{k|k} at time k, given measurements up to time k. This is done in a two-step manner. The first step is the measurement update, in which the state x̂_{k|k} and covariance P_{k|k} are computed by correcting the previous estimates x̂_{k|k−1}, P_{k|k−1} using the measurement y_k taken at time k. In the second step, called the time update, the process model and noise properties are exploited to predict the state and its covariance at the next time step. The details of the discrete-time Kalman filter are given by [11] and can be seen in Algorithm 1.

¹Minimizes MSE(x̂) = E‖x̂ − x‖² = Trace E[(x̂ − x)(x̂ − x)ᵀ] = Trace(P).


Algorithm 1 The Kalman filter

The Kalman filter can be divided into the following recursions, initialized with x̂_{1|0} = E(x_0) and P_{1|0} = COV(x_0):

1. Measurement update:
   x̂_{k|k} = x̂_{k|k−1} + K_k ε_k
   P_{k|k} = (I − K_k H_k) P_{k|k−1}

2. Time update:
   x̂_{k+1|k} = F_k x̂_{k|k}
   P_{k+1|k} = F_k P_{k|k} F_kᵀ + G_k Q_k G_kᵀ

The measurement update in Algorithm 1 is written in a compact way using the following definitions. The innovation ε_k is defined as

\[
\varepsilon_k = y_k - H_k \hat{x}_{k|k-1},
\]

i.e. the difference between the measurement y_k and the prediction ŷ_{k|k−1} = H_k x̂_{k|k−1}. The covariance of the innovation is computed as

\[
S_k = H_k P_{k|k-1} H_k^\top + R.
\]

Knowing the innovation and its covariance, the Kalman gain can then be obtained as

\[
K_k = P_{k|k-1} H_k^\top S_k^{-1}.
\]

Structuring the steps of the measurement update this way is preferred since repeated computations of the same quantity are avoided.
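Below is a compact sketch of Algorithm 1 in the innovation form above, assuming Eigen for the matrix algebra; the resymmetrization of equation (3.2) below is included as an extra safeguard.

#include <Eigen/Dense>

using Eigen::MatrixXd;
using Eigen::VectorXd;

struct KalmanFilter {
  VectorXd x;  // state estimate
  MatrixXd P;  // estimate covariance

  // Measurement update driven by the innovation e_k = y_k - H x.
  void update(const VectorXd& y, const MatrixXd& H, const MatrixXd& R) {
    const VectorXd e = y - H * x;                        // innovation
    const MatrixXd S = H * P * H.transpose() + R;        // innovation covariance
    const MatrixXd K = P * H.transpose() * S.inverse();  // Kalman gain
    x += K * e;
    P = (MatrixXd::Identity(P.rows(), P.cols()) - K * H) * P;
    P = 0.5 * (P + P.transpose());  // resymmetrize, cf. (3.2) below
  }

  // Time update for x_{k+1} = F x_k + G v_k with COV(v) = Q.
  void predict(const MatrixXd& F, const MatrixXd& G, const MatrixXd& Q) {
    x = F * x;
    P = F * P * F.transpose() + G * Q * G.transpose();
  }
};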

The covariance matrix P is by definition symmetric and positive definite. However, in computer implementations of the filter, numerical errors can cause P to lose these properties [9]. An often-used approach to keep the covariance matrix from losing its symmetry is to resymmetrize it at regular intervals with the equation

\[
P = \tfrac{1}{2}\left(P + P^\top\right) \tag{3.2}
\]

This is a relatively cheap operation and only affects P if it is not symmetric; if the covariance matrix is already symmetric, it is returned unchanged, since A = Aᵀ for a symmetric matrix A. Another way to ensure that P stays symmetric is to use a square-root implementation, which would also ensure positive definiteness of P. However, since square-root implementations are more complex to program and require more computations [9], this may be avoided until found necessary.

3.2 The complementary filter

The complementary filter, also known as the ESKF or the indirect Kalman filter, is a filtering approach used in navigation that provides high-rate information about the navigation state, is robust to sensor unavailability and does not rely on dynamic modeling. The general idea is to fuse information from a high-rate, but perhaps not long-term accurate, IMU with measurements from one or more other sensors. These sensors may provide direct measurements of a subset of the navigation state, often position or velocity, at a lower rate than the IMU, but are stable over longer time periods.


As shown in figure 3.1, the inertial measurements are processed by an INS, and an aiding sensor provides extra information, for example position. Whenever an observation from the aiding sensor is available, the observation is predicted from the INS output, and the difference between the predicted and actual observations is fed to a Kalman filter, which then estimates the INS deviation state. The estimated INS deviation state is then used to correct the nominal INS solution. It is because the Kalman filter operates on the deviation state instead of the full state that it is often referred to as the error-state Kalman filter or the indirect Kalman filter.

Figure 3.1: Complementary filter in feedback configuration. The filter estimates the INS deviations with help from the aiding sensor; h(·) predicts the output of the aiding sensor from the nominal INS estimate. The estimated deviations are used to correct the INS.

As mentioned in section 1.1, there exist several benefits of using a complementary filter to estimate the INS deviations instead of using a direct filter operating on the full state [9], [1]. Some of them are:

• A dynamic model is not needed. Hence cumbersome work that may have to be redone when the vehicle is modified is avoided.

• The INS can follow rapid changes in the navigation state, and the Kalman filter can be designed to have low bandwidth since it only estimates the slowly varying deviations.

• The low-frequency deviations are better approximated by a linear process than the high-frequency full state.

3.2.1 Feedback and feedforward configuration

A complementary filter can be implemented in either feedforward or feedback form [1], [9]. The differences between the two forms are explained in [1], where the use of the feedback form is motivated. The difference between the two choices is how the estimated errors are used to correct the nominal INS estimate. Figure 3.2 shows the feedforward version, where the error estimates are fed forward to correct the current INS estimate. In the feedback version, the estimated errors are not only used to correct the current INS output, but also fed back to the INS to correct the starting point of the next iteration, see figure 3.1. Because of this, the next INS iteration starts from the corrected state, which limits the size of the errors. In the feedforward version, on the other hand, the errors are free to drift unbounded [1].

It can be shown that the feedforward configuration of the complementary filter is equivalent to the linearized Kalman filter with the INS estimate as the reference trajectory for the linearization. In the same way, it can be shown that the feedback configuration is equivalent to the EKF, where the INS provides the estimate of the nominal trajectory and the filter estimates and propagates the deviations from this nominal trajectory [11].


Figure 3.2: Complementary filter in feedforward configuration. The estimated deviations are only used to correct the current INS estimate.

3.3 Measurement model

A measurement model will now be derived for use in a complementary filter based on an INS aided by GPS and sonar. For the integration of GPS in a complementary filter there exist different methods, usually categorized as loose, tight and deep integration [11]. The approaches are distinguished by the flow of information in the system. Roughly speaking, a loose integration requires the least processing of the GPS observables, while a tight integration requires the most [9]. Here, the loose integration is considered, both for the simplicity of the integration and since the used GPS receiver does not provide the necessary information for the other approaches.

In a loose integration, the GPS is seen as a sensor that provides measurements of position p^{n,GPS} = p^n + e^{GPS}, where p^n = [p^n_x, p^n_y, p^n_z]ᵀ is the true position. The noise characteristics are unknown but are assumed to be Gaussian white noise, an assumption also made in [2] and [3]. The position is given in geodetic ECEF coordinates but can be transformed into the {n} frame using the transformations in appendix B. Altitude measurements p^{n,Alt}_z = p^n_z + e^{Alt} are given by the two sonar sensors. Together, the GPS and altimeter measurements form p^{n,GPS,Alt} = p^n + e, where

\[
p^{n,GPS,Alt} = \begin{bmatrix} p_x^{n,GPS} & p_y^{n,GPS} & p_z^{n,Alt} \end{bmatrix}^\top
\quad\text{and}\quad
e = \begin{bmatrix} e_x^{GPS} & e_y^{GPS} & e_z^{Alt} \end{bmatrix}^\top.
\]

Furthermore, in a complementary filter the Kalman filter is driven by the difference between the predicted and actual position measurements. The INS state contains the position estimate p̂^{n,INS} = p^n + δp^n, which serves as the predicted position measurement. The difference between the measurements and predicted measurements is then given by

\[
\begin{aligned}
\delta y &= p^{n,GPS,Alt} - \hat{p}^{n,INS} \\
&= (p^n + e) - (p^n + \delta p^n) \\
&= -\delta p^n + e \\
&= H\,\delta x + e
\end{aligned}
\tag{3.3}
\]

where δx is the state vector in (2.9) and H = [−I₃ 0] ∈ ℝ^{3×15}. This linear measurement model δy = H δx + e, together with the linear process model of the deviation state from (2.11), can be used with a linear Kalman filter to estimate the INS deviations and feed them back to the INS to improve its performance, as shown in figure 3.1.
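As an illustration, the loose-integration quantities in (3.3) could be formed as below; the function names are illustrative, not from the thesis code.

#include <Eigen/Dense>

using Eigen::Matrix3d;
using Eigen::MatrixXd;
using Eigen::Vector3d;

// Measurement matrix H = [-I3 0] in R^{3x15} from (3.3).
MatrixXd measurementMatrix() {
  MatrixXd H = MatrixXd::Zero(3, 15);
  H.block<3,3>(0, 0) = -Matrix3d::Identity();
  return H;
}

// Innovation fed to the Kalman filter: the measured position (GPS x, y and
// sonar z, already transformed into {n}) minus the INS-predicted position.
Vector3d positionInnovation(const Vector3d& p_meas, const Vector3d& p_ins) {
  return p_meas - p_ins;
}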


Chapter 4

Experimental Platform and Implementation

This chapter first gives some more details about the Parrot AR.Drone and ROS, which were used in the implementation of the system. After that, an overview of the system is presented before going into the details of the implementation.

4.1 The Parrot AR.Drone 2.0

The Parrot AR.Drone 2.0 is a commercially available quadcopter produced by the French company Parrot. It is a low-cost (≈ 300 €), off-the-shelf quadcopter; the user only needs to charge the batteries before it is ready to fly. It can be controlled with a smartphone application developed by Parrot that is freely available in different application stores. Parrot has also released a Software Development Kit (SDK), called AR.Drone SDK 2.0, to enable users to develop their own applications for both smartphones and personal computers.

The design of the AR.Drone follows the classic quadcopter design. The four rotors are attached to the ends of four arms made of carbon fiber. In the middle, where the four arms cross, the battery and other hardware are placed inside a foam body that dampens vibrations from the motors. The drone can be equipped with either an indoor or an outdoor hull; the indoor hull (figure 4.1a) has extra protection for the rotors, while the outdoor hull (figure 4.1b) is more aerodynamic.

Figure 4.1: The Parrot AR.Drone 2.0. (a) Indoor hull. (b) Outdoor hull.

The onboard electronics consist of a motherboard, a navigation board, a battery, sensors and four brushless motors, each controlled by a MIPS AVR processor. The navigation board serves as an interface between the sensors and the motherboard. The main processor is a 32-bit ARM Cortex-A8 running at 1 GHz. The system runs a real-time embedded Linux-based operating system, which handles threads for data acquisition, video processing, state estimation, closed-loop control and Wi-Fi communication.

4.1.1 The sensors of the AR.Drone 2.0

The AR.Drone has two video cameras: one mounted vertically on the motherboard to enable ground-speed measurement (using visual odometry), and one mounted horizontally in the front of the body, pointing forward.

Furthermore, the AR.Drone has a low-cost IMU, consisting of a 3-axis accelerometer and a 3-axis gyroscope, that provides inertial measurements relative to the body frame. The update frequency of the IMU is 200 Hz, and the gyroscope can measure rotation rates up to 2000°/s. The inertial sensors are calibrated to compensate for misalignment and scale-factor errors, both at the factory and during use [13]. A magnetometer with a 6° precision can be used as a compass.

Two downward-facing sonar sensors provide ground-distance measurements and are used to determine the depth of field of the downward-facing camera. The sonars emit ultrasound at a frequency of 40 kHz and can measure distances between 0.2 and 6 m at 25 Hz. There is also an air-pressure sensor with ±10 Pa precision to enable altitude measurements and stability at heights where the sonars are not sufficient.

An external GPS receiver produced for the Parrot AR.Drone, called Flight Recorder, can be mounted on top of the battery and connected via USB. According to Parrot, it measures the position with an accuracy of 2 m at 5 Hz.

Exact information about which sensors are used is not available from Parrot. However, much information about the first version of the AR.Drone can be found in [13]. The IMU was updated in the second version, and the pressure sensor, GPS receiver and magnetometer were not available at all in the first version.

4.1.2 Control and communication

Easy flying of the AR.Drone is made possible by onboard algorithms for attitude and velocity estimation and stability control. When the drone is turned on, it sets up an ad-hoc Wi-Fi network to which the user can connect a smartphone or PC. All the available navigation data can then be received at either 15 Hz or 200 Hz. The available navigation data includes sensor readings, states estimated internally onboard the drone, and high-level information such as whether the drone is flying or not [14], [15]. The drone is controlled by sending high-level commands for take-off and landing, or a control command u = [φ_d, θ_d, ż_d, ψ̇_d]ᵀ with desired roll, pitch, vertical velocity and yaw rate. The internal controller of the drone then uses closed-loop control to change the speeds of the rotors.

Using Wi-Fi for the data link introduces some problems, interference from other Wi-Fi devices being one of them. Problems with delays are mentioned in [6], where it is stated that the delay for receiving data from the quadcopter is ≈ 130 ms, and ≈ 60 ms for sending control commands.

4.2 Robot Operating System

For the software development it was chosen to use ROS [16], an open-source framework for developing robotic software. ROS was originally developed by the Stanford Artificial Intelligence Laboratory but is nowadays maintained by the Open Source Robotics Foundation.

The aim of ROS is to simplify the task of creating robotic software by providing a base of libraries, tools and conventions for doing just this. Users can build on this base to develop their own libraries for their specific application or robot. These application-specific libraries, usually written in C++ or Python, are referred to as packages and can be distributed to other users via the ROS website. Since all user-developed software is based on the same core components and conventions, sharing and collaborating with other users is simple, which is a key to the widespread user base.

During runtime, a set of ROS-based processes is represented as a graph, where each node represents a specific process. Nodes communicate with each other using a publish-subscribe message-passing system, publishing and subscribing to different topics. Figure 4.2 shows an example of the graph structure of a ROS system during runtime: nodes are represented by ellipses and topics by rectangles, and the arrows between nodes and topics show the flow of data in the system, i.e. whether a node publishes or subscribes to a topic. The rates at which publishing and subscribing take place can be individual for each node and chosen to suit the specific case.

Figure 4.2: An example of the graph-architecture of ROS.

4.2.1 ROS driver for the Parrot AR.Drone

The ROS package ardrone_autonomy includes a ROS driver for the Parrot AR.Drone. The driver is based on the official Parrot SDK and makes it possible to develop ROS applications for the drone. It is developed and shared by the Autonomy Lab of Simon Fraser University.

The driver publishes the data sent from the drone to several different ROS topics; the ones considered in this work are

• ardrone/navdata_altitude - with messages containing altitude information.

• ardrone/imu - with messages containing IMU data.

• ardrone/navdata_gps - with messages containing GPS position.

The user can specify whether the data should be sent from the drone at 5 Hz or 200 Hz, and whether the driver should publish the incoming data directly or at a fixed rate. Reading the GPS data is not supported in the current version of the driver or in the Parrot SDK; therefore, the driver and SDK had to be modified in this work to include support for GPS. Since the drone sends data at a faster rate (200 Hz) than the GPS update frequency (5 Hz), the same GPS data may be received several times in a row. This can be compensated for by comparing the timestamps set by the drone, which are identical if the data is the same, and by knowing that the GPS only updates five times per second.
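As an illustration of how such a node could consume the driver topics and discard duplicated GPS data, consider the sketch below. The GPS message type is an assumption (sensor_msgs/NavSatFix is used as a stand-in), since the stock driver does not publish ardrone/navdata_gps and the thesis relies on a modified driver.

#include <ros/ros.h>
#include <sensor_msgs/Imu.h>
#include <sensor_msgs/NavSatFix.h>

ros::Time lastGpsStamp;

void imuCallback(const sensor_msgs::Imu::ConstPtr& msg) {
  // Arrives at up to 200 Hz: propagate the INS here.
}

void gpsCallback(const sensor_msgs::NavSatFix::ConstPtr& msg) {
  // The drone streams at 200 Hz while the GPS updates at 5 Hz, so the same
  // fix can arrive repeatedly; drop duplicates by comparing timestamps.
  if (msg->header.stamp == lastGpsStamp) return;
  lastGpsStamp = msg->header.stamp;
  // A genuinely new fix: run the Kalman measurement update here.
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "state_estimator");
  ros::NodeHandle nh;
  ros::Subscriber imuSub = nh.subscribe("ardrone/imu", 100, imuCallback);
  ros::Subscriber gpsSub = nh.subscribe("ardrone/navdata_gps", 10, gpsCallback);
  ros::spin();
  return 0;
}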


4.3 System overview

A system was implemented in which a pilot can control the AR.Drone while continuously getting updated information about the drone's pose. This was done in C++ using the ROS framework. The previously mentioned ardrone_autonomy package was used for communication with the drone, while a new ROS package for state estimation was developed. An overview of the system is shown in figure 4.3.

The state estimator uses a complementary filter in feedback configuration to fuse infor- mation from the IMU, GPS and sonar sensors of the drone.

Figure 4.3: Overview of the implemented system. The blocks represent different ROS packages; the state estimator was implemented in this thesis.

4.4 Sensor fusion implementation

Figure 4.4: The implemented complementary filter in feedback configuration. The aiding-sensor block of figure 3.1 has been replaced by the GPS and sonars; h(·) predicts the positions given by the GPS and sonars.

The propagation of the navigation state (p̂^n, v̂^n, R̂_b^n) is driven by the IMU measurements according to the inertial navigation equations (2.6). A quaternion representation of the attitude is used for stability reasons [11], [9]. The angular velocity of the Earth, ω_ie, is set to zero in the INS mechanization equations (2.6), assuming that it is negligible in the presence of sensor noise and bias. The update frequency of the INS is 200 Hz.

The GPS and sonar sensors provide measurements of the position, p^{n,GPS,Alt}, modeled using the linear model presented in section 3.3. Whenever a position measurement is available from both the GPS and the sonars, a linear Kalman filter compares the measurements with the predicted measurement according to the INS state and estimates the deviations (δp^n, δv^n, δθ^n) and the biases δf^b, δω^b. The estimated deviations are then used to correct the current INS output according to equation (2.12). The estimated biases are added to the previously known bias estimates, f̂^b_k ← f̂^b_{k−1} + δf̂^b_k and ω̂^b_k ← ω̂^b_{k−1} + δω̂^b_k. Since the deviations are corrected for, δx̂ ≡ 0 after each measurement update. This means that the predicted deviations are also equal to zero, and only the covariance matrix P has to be updated in the prediction step. The algorithm for the implemented state estimation can be seen in Algorithm 2.

The turn-on biases of the IMU are estimated by taking an average of the measurements over a short time period during which the quadcopter remains stationary (a sketch of this initialization is given after Algorithm 2). The same procedure is used to determine the initial GPS position, which is necessary for the transformation of GPS measurements from geodetic coordinates in the {e}-frame to the {n}-frame.

Algorithm 2 Algorithm for the implemented state estimation.

1: Initialize P_{0|−1} and δx̂_{0|−1}
2: Compute turn-on biases for the IMU, δf̂^b_0, δω̂^b_0
3: Compute the GPS reference position.
4: for k = 0, ... do
5:   Get latest IMU measurements, f̃^b_k, ω̃^b_k.
6:   // Correct for biases
7:   f̂^b_k = f̃^b_k − δf̂^b_k
8:   ω̂^b_k = ω̃^b_k − δω̂^b_k
9:   Propagate INS estimates, p̂^n_k, v̂^n_k, R̂^n_{b,k}
10:  if (new position measurement available) then
11:    Transform the GPS measurement to the {n}-frame
12:    Form the measurement p^{n,GPS,Alt}_k
13:    // Measurement update:
14:    ε_k = p^{n,GPS,Alt}_k − p̂^{n,INS}_k
15:    S_k = H P_{k|k−1} Hᵀ + R
16:    K_k = P_{k|k−1} Hᵀ S_k⁻¹
17:    δx̂_k = K_k ε_k
18:    P_{k|k} = (I − K_k H) P_{k|k−1}
19:    Correct the nominal INS estimates using δx̂_k.
20:    P ← P_{k|k}
21:  else
22:    P ← P_{k|k−1}
23:  end if
24:  // Time update:
25:  Form F_k and G_k.
26:  P_{k+1|k} = F_k P F_kᵀ + G_k Q G_kᵀ
27: end for
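The turn-on bias initialization in step 2 can be as simple as averaging stationary samples, as sketched below; the expected stationary specific-force reading assumes the z-up sign convention used here and should be adapted to the actual IMU conventions.

#include <Eigen/Dense>
#include <vector>

using Eigen::Vector3d;

// Estimate turn-on biases from samples collected while the quadcopter is
// stationary: the mean gyro output is the gyro bias, and the mean
// accelerometer output minus the expected gravity reading is the accel bias.
void estimateTurnOnBiases(const std::vector<Vector3d>& fSamples,
                          const std::vector<Vector3d>& wSamples,
                          Vector3d& fBias, Vector3d& wBias) {
  Vector3d fMean = Vector3d::Zero();
  Vector3d wMean = Vector3d::Zero();
  for (const Vector3d& f : fSamples) fMean += f;
  for (const Vector3d& w : wSamples) wMean += w;
  fMean /= static_cast<double>(fSamples.size());
  wMean /= static_cast<double>(wSamples.size());

  const Vector3d fExpected(0.0, 0.0, 9.81);  // stationary specific force, z up
  fBias = fMean - fExpected;
  wBias = wMean;                             // expected angular rate is zero
}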

4.4.1 Choice of Q and R

The noise parameters for the Q and R matrices were chosen with the help of experiments and some manual tuning based on observed performance. Figure 4.5 shows the results of an IMU test where the quadcopter was placed on a flat surface and remained stationary for 45 minutes. The expected outputs of the accelerometer would be 0 m/s² for the horizontal axes and 9.81 m/s² in the z-direction due to the Earth's gravitation, and the expected outputs of the gyroscope would be 0 rad/s around all three axes. However, both of the inertial sensors are subject to noise and bias. The noise is represented by the fluctuations in figures 4.5a and 4.5b, and the accelerometer biases are visible in figure 4.5a as offsets from the expected outputs. In figure 4.5b the gyroscope biases are not as easy to see, but they exist. For both of the inertial sensors, the biases also vary with time. The values for the diagonal elements of Q, i.e. the variances of the process noise vector w_k, were chosen based on the noise levels observed in these stationary measurements.
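One way per-sample variances could be extracted from such a stationary recording is a plain sample-variance computation per sensor axis, as sketched below; this illustrates the procedure, not the exact tuning used in the thesis.

#include <vector>

// Unbiased sample variance of one sensor axis from a stationary recording,
// usable as a starting point for the corresponding diagonal element of Q or R.
double sampleVariance(const std::vector<double>& samples) {
  double mean = 0.0;
  for (double v : samples) mean += v;
  mean /= static_cast<double>(samples.size());
  double ss = 0.0;
  for (double v : samples) ss += (v - mean) * (v - mean);
  return ss / static_cast<double>(samples.size() - 1);
}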
