Institutionen för systemteknik
Department of Electrical Engineering

Master's Thesis (Examensarbete)

Positioning and Trajectory Control of an Autonomous Helicopter using Real Time Kinematic GPS

Master's thesis carried out in Automatic Control at the Institute of Technology at Linköping University

by

Josef Högberg and Staffan Sjöqvist

LiTH-ISY-EX--13/4691--SE

Linköping 2013
Supervisor: Patrik Axelsson
    ISY, Linköping University
    Torkel Danielsson
    Intuitive Aerial AB
Examiner: David Törnqvist
    ISY, Linköping University
Avdelning, Institution (Division, Department):
Division of Automatic Control, Department of Electrical Engineering, SE-581 83 Linköping
Datum (Date): 2013-06-13
Språk (Language): English
Rapporttyp (Report category): Examensarbete (Master's thesis)
URL för elektronisk version (URL for electronic version):
http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-XXXXX
ISRN: LiTH-ISY-EX--13/4691--SE
Titel (Title):
Positionering och banföljning av en autonom helikopter med hjälp av Real Time Kinematic GPS
Positioning and Trajectory Control of an Autonomous Helicopter using Real Time Kinematic GPS
Författare (Authors): Josef Högberg and Staffan Sjöqvist
Sammanfattning

During this master's thesis the control system for a quadrocopter has been developed. A quadrocopter is a small helicopter with four rotors that is steered by an operator positioned on the ground using a hand-held controller.

Three main subjects are treated in this thesis. First, a Real Time Kinematic Global Positioning System (RTK-GPS) is evaluated. Then a smooth motion path is defined, which the vehicle in the future will follow autonomously. Finally, the control system of the quadrocopter is treated, in order to obtain good flight characteristics in the manual case and, for example, altitude hold in the autonomous case.

The evaluation of the RTK-GPS system showed that it is possible to achieve centimetre accuracy in the position estimate. The report also discusses what the RTK-GPS system requires in order to work as well as possible. A smooth motion path can be defined in different ways. The desire is that the path should have continuous first and second derivatives, properties that cubic spline functions possess. The control performance was improved by transforming forces and angular velocities between the two coordinate systems used. In this way the steering of the quadrocopter became smoother.
Abstract
During this master's thesis project a control system for a quadrocopter has been developed. A quadrocopter is a small helicopter with four rotors which is controlled by a hand-held controller by an operator positioned on the ground. This master's thesis report treats three different main subjects. The first one is about evaluating a Real Time Kinematic Global Positioning System (RTK-GPS). The second one is about defining smooth motion paths which the aircraft autonomously shall be able to fly along. The third one is about improving the control system in the aircraft to obtain better flight performance under manual control and to enable autonomous features such as altitude hold.
The result is that it is possible to achieve centimetre accuracy in estimated position with the evaluated RTK-GPS system. It is also discussed how the performance of the RTK-GPS system can be maximized.
A smooth path can be defined in different ways. It is desirable that the path has continuous first and second derivatives. Therefore cubic spline curves are used to represent the path.
The control system was improved by transforming forces and angular velocities between the two different coordinate systems. This resulted in improved abilities by making the steering and control of the aircraft smoother.
Acknowledgments
During the thesis work we have had great help from our supervisor Patrik Axelsson, who contributed many good tips and comments, both regarding this report and on automatic control questions. We have also had great help from Mårten Svanfeldt and Torkel Danielsson at Intuitive Aerial AB, who helped us solve many practical problems and contributed valuable ideas and opinions. Finally, we want to thank our examiner David Törnqvist, who helped us with automatic control questions.

Thank you!

Linköping, June 2013

Josef Högberg and Staffan Sjöqvist
Contents

Notation

1 Introduction
  1.1 Background
  1.2 Goals
  1.3 Previous Work
  1.4 Thesis Outline

2 Background Mathematics Material
  2.1 Matrix and Vector Notation
  2.2 Coordinate Systems
    2.2.1 Earth Fixed Coordinate System
    2.2.2 Body Fixed Coordinate System
  2.3 Transformations
    2.3.1 Rotational Kinematics
    2.3.2 Transforming Angular Velocity
  2.4 Transformation of Forces
    2.4.1 Transformation of Forces from Earth Frame to Body Frame
    2.4.2 Translate Body Fixed Forces to the Earth Frame
  2.5 Translation from Latitude and Longitude to Earth Frame
    2.5.1 Translation from Latitude to Distance
    2.5.2 Conversion from Longitude to Distance

3 Models
  3.1 Quadrocopter Model
  3.2 Motion Path - A Smooth Trajectory in 3D
    3.2.1 Properties of Spline Curves
    3.2.2 Definition of Spline Curves
    3.2.3 Finding the Closest Point on a Spline Curve
  3.3 AHRS Algorithm
    3.3.1 Pros and Cons with Madgwick's Orientation Filter

4 Sensors - IMU, Magnetometer, Barometer and Internal GPS
  4.1 IMU
    4.1.1 Gyroscopes
    4.1.2 Accelerometers
  4.2 Magnetometer
    4.2.1 Hard Iron and Soft Iron Distortion
    4.2.2 Automatic Hard Iron Calibration Algorithm
  4.3 Barometer
  4.4 Internal GPS
  4.5 Sensors and AHRS Filter Performance
    4.5.1 Yaw Rotation
    4.5.2 Roll, Pitch and Yaw Rotation

5 Evaluation of RTK-GPS System
  5.1 Evaluation of Trimble's RTK-GPS
    5.1.1 Experience and Usages of RTK-GPS
    5.1.2 Tests
  5.2 Error Detection
    5.2.1 Maximum Velocity
    5.2.2 Maximum Acceleration
    5.2.3 Extrapolated Position

6 Control System
  6.1 State Estimation
    6.1.1 Extended Kalman Filter
  6.2 PID-Controllers
    6.2.1 Instability Issues
    6.2.2 Feedback Controller
  6.3 Controllers
    6.3.1 Altitude Hold Controller
    6.3.2 Yaw Hold Controller
    6.3.3 Position Hold Controller
    6.3.4 Motion Path Controller

7 Discussions and Conclusions
  7.1 Models
    7.1.1 Simulation Environment
    7.1.2 Motion Path
  7.2 Sensors - IMU, Magnetometer, Barometer and Internal GPS
  7.3 Evaluation of RTK-GPS System
    7.3.1 Successful Tests
    7.3.2 Glitches
  7.4 Future Development
  7.5 Results
  7.6 Experiences

A Appendix
  A.1 Equipment Borrowed from Trimble
Notation

Abbreviations

Abbreviation  Definition
DC            Direct Current
ESC           Electronic Speed Controller
FC            Flight Controller
GPS           Global Positioning System
IA            Intuitive Aerial
IMU           Inertial Measurement Unit
NED           North East Down
PID           Proportional, Integral, Derivative (controller)
RTK           Real Time Kinematic
Notations

Notation           Definition
EF                 Force F expressed in the Earth coordinate system (Earth frame)
BF                 Force F expressed in the Body coordinate system (Body frame)
(Eξ, Eη)^T         State vector
Eξ = (x y z)^T     Position of the aircraft in the Earth frame
Eη = (φ θ ψ)^T     Euler angles of the aircraft
Bυ = (u v w)^T     Velocity of the aircraft in the Body frame
Bω = (p q r)^T     Angular velocities of the aircraft
R                  Rotation matrix, transforming vectors from Earth frame to Body frame
W                  Rotation matrix, transforming angular velocities from Earth frame to Body frame
KP                 Proportional gain
KI                 Integral gain
KD                 Derivative gain
K                  Kalman gain
e(t)               Control error
u(t)               Input signal
r(t)               Desired output signal, reference signal
y(t)               Actual output signal
p(t | c; t)        Spline segment
f(t)               Spline curve
c                  Control point in a spline curve
ti                 Knot in a spline curve
x                  State vector
1 Introduction
This chapter contains the background and goals of this master’s thesis. The thesis was performed at Intuitive Aerial AB.
1.1 Background
Intuitive Aerial (IA) is developing a flying camera platform for advanced film cameras. The camera platform consists of a multi-rotor helicopter with 12 rotors and a three-axis stabilized camera gimbal. Currently it is adapted to be operated manually, but in the future IA wants it to be able to fly along a path autonomously. There are autopilot systems available that can fly along a path defined by discrete waypoints, but that solution provides too little flexibility and control of the trajectory to be usable in a movie recording.
This master's thesis is about the development of the control system and the positioning methods needed to be able to fly along a smooth motion path with high precision. The aircraft used as a development platform is a quadrocopter, see Figure 1.1, with four rotors that are controlled by a control unit (Flight Controller) which is mounted in the centre of the aircraft. The results of this master's thesis will be used for the further development of autonomous unmanned aerial vehicles at Intuitive Aerial.
1.2 Goals

The goals of the master's thesis are the following:

• Improve the flight performance of the aircraft
• Develop a method to define a smooth motion path
• Evaluate the performance of an RTK-GPS system

Figure 1.1: The aircraft used during the project was this quadrocopter.
1.3 Previous Work
The office of Flying Machine Arena (FMA) is located at Eidgenössische Technische Hochschule Zürich (ETH Zürich). They describe themselves as "A space where flying robots live and learn". Quadrocopters are used in their research in adaptation and learning because of the agility, hover ability, mechanical simplicity and robustness of these aircraft. Their research is also about trajectory planning and cooperation between multiple quadrocopters. For example they solve planning problems such as constructing small structures of blocks with the help of each other and with different responsibilities during the construction phase. They use a high-precision motion capture system to keep track of the quadrocopters. Models of the system and other data, such as gyroscope and accelerometer measurements, are used in a sensor fusion framework to predict the future state of the object. For more information see [FMA].

The similarities between FMA's work and this thesis are that quadrocopters are used and that we are interested in the physical model of them. Both are also interested in solving the path planning and control issues. To our help, both have IMUs with three gyroscopes and three accelerometers, but there the similarities end. They use a high-precision motion capture system which gives them a good overview of where their quadrocopters actually are - they know exactly at which point in space their quadrocopters are. Thus they are limited to a confined volume of approximately 10 × 10 × 10 m³. Instead of a motion-capture system we use a GPS, a barometer and a magnetometer to get position in space, altitude and heading. The quadrocopter has not been limited to a confined volume indoors during this master's thesis.
1.4 Thesis Outline

Chapter 1 gives an introduction to the master's thesis. Chapter 2 gives the fundamental mathematics that are needed to follow the text. Chapter 3 contains a theoretical background that gives a solid ground for the models and algorithms used in the aircraft. After that the sensors used in the aircraft are introduced in Chapter 4 and their pros and cons are discussed. The background and evaluation of the RTK-GPS system from Trimble are presented in Chapter 5. In Chapter 6 the control systems are presented. Chapter 7 is dedicated to discussions and conclusions.
2 Background Mathematics Material

This chapter contains an overview of the basic mathematics that are needed to follow the work in the following chapters. The chapter starts with a section about matrix and vector notation and continues with a section about the coordinate systems used. It ends with some transformation mathematics including rotation matrices and conversions between the different coordinate systems.
2.1 Matrix and Vector Notation

Matrices and vectors are written with boldface letters, for example F. Left sub indices, as in EF, mark which coordinate system a geometric vector is represented in. The elements will differ but the vector is the same independently of which coordinate system it is described in. The left sub index E stands for the Earth fixed frame (Earth frame) and the left sub index B stands for the Body fixed frame (Body frame).

An example of the above described notation system is given by

  ${}_E\mathbf{F} = \begin{pmatrix} {}_EF_x & {}_EF_y & {}_EF_z \end{pmatrix}^T.$   (2.1)
The state vector is given by

  $\begin{pmatrix} {}_E\xi \\ {}_E\eta \end{pmatrix}$   (2.2)

where

  ${}_E\xi = \begin{pmatrix} x & y & z \end{pmatrix}^T$   (2.3)

is the position of the body in the Earth frame and

  ${}_E\eta = \begin{pmatrix} \phi & \theta & \psi \end{pmatrix}^T$   (2.4)

are the Euler angles of the body expressed in the Earth frame. The velocity of the aircraft expressed in the Body frame is

  ${}_B\upsilon = \begin{pmatrix} u & v & w \end{pmatrix}^T.$   (2.5)

The angular velocities of the aircraft in the Body frame are

  ${}_B\omega = \begin{pmatrix} p & q & r \end{pmatrix}^T.$   (2.6)

In flight mechanics the motion components in the BXBZ-plane, u, w and q, are called longitudinal and v, p and r are called lateral.
Table 2.1 defines the different units that are used in this report and Table 2.2 gives a summary of Equations (2.4), (2.5) and (2.6). Table 2.2 is borrowed from Gustafsson [2012].

Table 2.1: Units used in the report.

Measure    Unit     Symbol   In base units
Angle      radian   rad      −
Frequency  hertz    Hz       s⁻¹
Force      newton   N        m·kg·s⁻²
Voltage    volt     V        m²·kg·s⁻³·A⁻¹

Table 2.2: Coordinate notation for the rotational kinematics of a body expressed in its Body fixed coordinate system with respect to the Earth fixed coordinate system.

Axis  Motion component         Rotation  Euler angle  Angular velocity
BX    Longitudinal motion, u   Roll      φ            p
BY    Lateral motion, v        Pitch     θ            q
BZ    Normal motion, w         Yaw       ψ            r
2.2 Coordinate Systems
Two coordinate systems are used in this report. Both are NED-systems where NED is the acronym for North, East, Down. The first is fixed to a reference point on Earth and the other is fixed to the aircraft. In Figure 2.1 the two different coordinate systems are shown.
Figure 2.1: The Earth fixed coordinate system together with the Body fixed coordinate system.
2.2.1 Earth Fixed Coordinate System

The Earth fixed coordinate system has its origin EO at the surface of the Earth, near the operational area of the aircraft. It may for example be the start position of the aircraft. The EX-axis points northward and the EY-axis points eastward. The EXEY-plane is parallel with an idealized surface segment of the Earth around the chosen origin EO. Vectors expressed in the Earth fixed coordinate system have left sub index E.
2.2.2 Body Fixed Coordinate System

The Body fixed coordinate system is fixed to the aircraft and rotates with the aircraft. Its origin BO is fixed to the mass centre of the aircraft. The BX-axis points in the forward direction and the BY-axis points out to the right of the aircraft. Vectors expressed in the Body fixed coordinate system have left sub index B.
2.3 Transformations

In this section some useful transformations between the Earth fixed and Body fixed coordinate systems are introduced and explained.
2.3.1 Rotational Kinematics

A vector represented in one coordinate system can be transformed to another system by multiplying it with a rotation matrix that is defined by the Euler angles. The Euler angles are represented by

  ${}_E\eta = \begin{pmatrix} \phi & \theta & \psi \end{pmatrix}^T.$   (2.7)

A rotation from Earth frame to Body frame is performed in the following order:

1. Rotation ψ around the BZ-axis, see rotation matrix Rz(ψ) in (2.8)
2. Rotation θ around the BY-axis, see rotation matrix Ry(θ) in (2.9)
3. Rotation φ around the BX-axis, see rotation matrix Rx(φ) in (2.10)

The rotation matrix Rz(ψ) is given by

  $R_z(\psi) = \begin{pmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{pmatrix},$   (2.8)

the rotation matrix Ry(θ) is given by

  $R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{pmatrix}$   (2.9)

and the rotation matrix Rx(φ) is given by

  $R_x(\phi) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{pmatrix}.$   (2.10)

The above rotation matrices are the standard in flight mechanics, see Nelson [1998]. To get the total rotation matrix R, equations (2.8) to (2.10) are multiplied together, from the left, in the order mentioned above:

  $R = R_x(\phi)R_y(\theta)R_z(\psi) = \begin{pmatrix} \cos\theta\cos\psi & \cos\theta\sin\psi & -\sin\theta \\ -\cos\phi\sin\psi + \sin\phi\sin\theta\cos\psi & \cos\phi\cos\psi + \sin\phi\sin\theta\sin\psi & \sin\phi\cos\theta \\ \sin\phi\sin\psi + \cos\phi\sin\theta\cos\psi & -\sin\phi\cos\psi + \cos\phi\sin\theta\sin\psi & \cos\phi\cos\theta \end{pmatrix}$   (2.11)
The transformation from the Earth fixed coordinate system EXEYEZ to the Body fixed frame BXBYBZ is expressed as

  ${}_B\mathbf{F} = R\,{}_E\mathbf{F}$   (2.12)

where BF stands for a vector in the Body fixed coordinate system and EF for a vector in the Earth fixed coordinate system. The rotation matrix R in (2.11) is orthogonal, which means that its inverse is given by its transpose,

  $R^{-1} = R^T,$   (2.13)

hence it is easy to transform a vector in the Body fixed coordinate system to the Earth fixed coordinate system;

  ${}_E\mathbf{F} = R^{-1}\,{}_B\mathbf{F} = R^T\,{}_B\mathbf{F}$   (2.14)

by using the transpose of R.
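As a numerical sanity check of equations (2.8) to (2.14), the rotation matrix can be built and applied in a few lines. This is an illustrative sketch, not code from the thesis; the function name `rotation_matrix` is ours:

```python
import numpy as np

def rotation_matrix(phi, theta, psi):
    """R = Rx(phi) @ Ry(theta) @ Rz(psi), transforming Earth-frame vectors
    to the Body frame, following equations (2.8)-(2.11)."""
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    Rz = np.array([[cp, sp, 0.0], [-sp, cp, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[ct, 0.0, -st], [0.0, 1.0, 0.0], [st, 0.0, ct]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cf, sf], [0.0, -sf, cf]])
    return Rx @ Ry @ Rz

# Transform a vector from the Earth frame to the Body frame and back.
R = rotation_matrix(0.1, 0.2, 0.3)
F_earth = np.array([0.0, 0.0, 9.82])   # e.g. gravity along the down axis
F_body = R @ F_earth                   # equation (2.12)
F_back = R.T @ F_body                  # equation (2.14), since R^-1 = R^T
```

Since R is orthogonal with determinant one, transforming a vector to the Body frame and back with the transpose recovers the original vector.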
2.3.2 Transforming Angular Velocity

The transformation of angular velocities ${}_E\dot\eta$ in the Earth frame to angular velocities ${}_B\nu$ in the Body fixed frame is given by

  ${}_B\nu = W\,{}_E\dot\eta$   (2.15)

which can be written as

  $\begin{pmatrix} p \\ q \\ r \end{pmatrix} = \begin{pmatrix} 1 & 0 & -\sin\theta \\ 0 & \cos\phi & \cos\theta\sin\phi \\ 0 & -\sin\phi & \cos\theta\cos\phi \end{pmatrix} \begin{pmatrix} \dot\phi \\ \dot\theta \\ \dot\psi \end{pmatrix}.$   (2.16)

The inverse of W is used to transform the body's angular velocities to inertial angular velocities,

  $\begin{pmatrix} \dot\phi \\ \dot\theta \\ \dot\psi \end{pmatrix} = \begin{pmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi/\cos\theta & \cos\phi/\cos\theta \end{pmatrix} \begin{pmatrix} p \\ q \\ r \end{pmatrix},$   (2.17)

expressed in the Earth frame. Observe the singularities that appear when θ = ±π/2, because the denominator cos(θ) becomes zero. With a quaternion representation these singularities can be avoided.

Derivation of W
This derivation is taken from a lecture with Lars Johansson in Flight Dynamics and is reproduced here. Assume that the angular velocities are non-zero, that is

  $\dot\psi \neq 0,$   (2.18)
  $\dot\theta \neq 0,$   (2.19)
  $\dot\phi \neq 0,$   (2.20)

and introduce two help coordinate systems x′y′z′ and x″y″z″ where the rotation can take place, see Figure 2.2. The first rotation becomes

  ${}_{x'y'z'}\tilde W_{\dot\psi} = \begin{pmatrix} 0 & 0 & \dot\psi \end{pmatrix}^T = {}_E\tilde W_{\dot\psi}$   (2.21)

where x′y′z′ rotates relative to the Earth frame E, expressed in x′y′z′ components. The second rotation is

  ${}_{x''y''z''}\tilde W_{\dot\theta} = \begin{pmatrix} 0 & \dot\theta & 0 \end{pmatrix}^T = {}_{x'y'z'}\tilde W_{\dot\theta}$   (2.22)

where x″y″z″ rotates relative to x′y′z′, expressed in x″y″z″ components. The third and last rotation is

  ${}_B\tilde W_{\dot\phi} = \begin{pmatrix} \dot\phi & 0 & 0 \end{pmatrix}^T = {}_{x''y''z''}\tilde W_{\dot\phi}$   (2.23)

where B rotates relative to x″y″z″, expressed in B components. B is the new orientation of the aircraft and E expresses the original orientation of the aircraft, before the rotation. The aircraft follows all three rotations, (2.21) to (2.23), and gets the angular velocity

  $\tilde W = {}_{x'y'z'}\tilde W_{\dot\psi} + {}_{x''y''z''}\tilde W_{\dot\theta} + {}_B\tilde W_{\dot\phi}.$   (2.24)

To add the components in (2.24) a common coordinate system for the component contributions needs to be chosen. With B as the common coordinate system, equations (2.9) to (2.10) can be used to transform equations (2.21) to (2.23), which gives

  $W = R_x(\phi)R_y(\theta)\,{}_{x'y'z'}\tilde W_{\dot\psi} + R_x(\phi)\,{}_{x''y''z''}\tilde W_{\dot\theta} + {}_B\tilde W_{\dot\phi}.$   (2.25)

A summary of these steps is shown in Figure 2.2. The inverse of W can be calculated as long as θ ≠ ±90°.
Figure 2.2: The help coordinate systems to transform the angular velocities from the Earth frame to the Body frame.
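The mapping (2.16) and its inverse (2.17) can also be checked numerically. A minimal sketch (function names are ours, not from the thesis), valid away from the singularity θ = ±π/2:

```python
import numpy as np

def W(phi, theta):
    """Maps Euler angle rates (phi_dot, theta_dot, psi_dot) to body rates
    (p, q, r), equation (2.16)."""
    return np.array([
        [1.0, 0.0, -np.sin(theta)],
        [0.0, np.cos(phi), np.cos(theta) * np.sin(phi)],
        [0.0, -np.sin(phi), np.cos(theta) * np.cos(phi)],
    ])

def W_inv(phi, theta):
    """Maps body rates back to Euler angle rates, equation (2.17).
    Undefined at theta = +-pi/2 where cos(theta) = 0."""
    t, c = np.tan(theta), np.cos(theta)
    return np.array([
        [1.0, np.sin(phi) * t, np.cos(phi) * t],
        [0.0, np.cos(phi), -np.sin(phi)],
        [0.0, np.sin(phi) / c, np.cos(phi) / c],
    ])

phi, theta = 0.3, -0.4
eta_dot = np.array([0.1, 0.2, 0.5])       # (phi_dot, theta_dot, psi_dot)
body_rates = W(phi, theta) @ eta_dot      # (p, q, r), equation (2.15)
recovered = W_inv(phi, theta) @ body_rates
```

Multiplying the two matrices should give the identity for any θ away from ±π/2, which is an easy self-test when implementing the transform on the flight controller.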
2.4 Transformation of Forces
In this section two examples are given of how the rotation matrix R from equation (2.11) is used.
2.4.1 Transformation of Forces from Earth Frame to Body Frame

Figure 2.3 shows an aircraft that flies in the positive EY-direction in the Earth fixed coordinate system. Its goal is to go from point A to point B. To obtain this, the gravitational acceleration Eag [m/s²] is transformed to the Body fixed coordinate system by using Bag = R·Eag. Then the force from the four rotors can be calculated to get enough lift force EFl and also a force EFa in the right direction towards B. The vector sum of EFl and EFa gives the force BF, which is the force needed from the propellers to achieve the right thrust.
2.4.2 Translate Body Fixed Forces to the Earth Frame

The thrust force BF can also be translated to the Earth fixed frame by the equation

  ${}_E\mathbf{F} = R^{-1}\,{}_B\mathbf{F};$   (2.26)

then the component of the thrust force in the EZ-direction gives the lift force of the aircraft.
Example of Rotational Kinematics in 2D

Above, a general rotation matrix R was derived. In this section a rotation in 2D is given as an example of rotational kinematics. The thrust force BF in Figure 2.3 is transformed to the Earth fixed coordinate system with

  ${}_E\mathbf{F} = \begin{pmatrix} 0 \\ {}_EF_a \\ -{}_EF_l \end{pmatrix} = \begin{pmatrix} 0 \\ F_T\sin\theta \\ -F_T\cos\theta \end{pmatrix},$   (2.27)

assuming that the aircraft only moves in the yz-plane. Then the accelerations in the y-direction and z-direction are calculated by

  ${}_Ea_y = \frac{{}_EF_a}{m} = \frac{F_T\sin\theta}{m} \;\Leftrightarrow\; \frac{{}_Ea_y\, m}{F_T} = \sin\theta \;\Leftrightarrow\; \theta = \arcsin\!\left(\frac{{}_Ea_y\, m}{F_T}\right)$   (2.28a)

  ${}_Ea_z = \frac{m\,{}_Ea_g - (-F_T\cos\theta)}{m} \;\Leftrightarrow\; \frac{m\,{}_Ea_z - m\,{}_Ea_g}{F_T} = \cos\theta \;\Leftrightarrow\; F_T = \frac{m({}_Ea_z - {}_Ea_g)}{\cos\theta}$   (2.28b)
Figure 2.3: A model of the aircraft when moving from point A to point B.
which finally are used to find θ according to

  $\theta = \arcsin\!\left(\frac{{}_Ea_y\, m}{\frac{m({}_Ea_z - {}_Ea_g)}{\cos\theta}}\right) \;\Leftrightarrow\; \sin\theta = \frac{{}_Ea_y\cos\theta}{{}_Ea_z - {}_Ea_g}$   (2.29a)

  $\frac{\sin\theta}{\cos\theta} = \frac{{}_Ea_y}{{}_Ea_z - {}_Ea_g} \;\Leftrightarrow\; \tan\theta = \frac{{}_Ea_y}{{}_Ea_z - {}_Ea_g}$   (2.29b)

  $\theta = \arctan\!\left(\frac{{}_Ea_y}{{}_Ea_z - {}_Ea_g}\right)$   (2.29c)

In this 2D case only one Euler angle, θ, is unknown; in the 3D case all three Euler angles are unknown and the rotation matrix R is needed to make the transformation possible. See Luukkonen [2012] for more information about transformations between different coordinate systems.
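Equations (2.28) and (2.29) can be turned into a small computation. This is an illustrative sketch under the sign conventions printed above; the function name and the numerical values are ours, not from the thesis:

```python
import math

def thrust_and_pitch(a_y, a_z, a_g, m):
    """Solve the 2D case: given desired accelerations a_y, a_z in the Earth
    frame, gravitational acceleration a_g and mass m, return the pitch angle
    theta from (2.29c) and the thrust F_T from (2.28b)."""
    theta = math.atan2(a_y, a_z - a_g)       # tan(theta) = a_y / (a_z - a_g)
    F_T = m * (a_z - a_g) / math.cos(theta)  # from (2.28b)
    return theta, F_T

# Illustrative numbers only (here a_g is taken negative so that the
# thrust comes out positive with the sign chain above).
theta, F_T = thrust_and_pitch(a_y=1.0, a_z=-2.0, a_g=-9.82, m=1.5)
```

By construction the returned pair satisfies both projections of (2.27): F_T·sin θ = m·a_y and F_T·cos θ = m·(a_z − a_g), which is a convenient consistency check.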
Table 2.3: Constants used in the transformation from latitudes and longitudes to meters.

Constant       Value               Meaning
r_equator      6 378 137 [m]       Earth radius at the equator
r_north pole   6 356 752 [m]       Earth radius at the north pole
f              298.2572235631 [−]  Inverse flattening constant
2.5 Translation from Latitude and Longitude to Earth Frame
The GPS provides the position in latitude and longitude in the WGS84 datum, see Stanaway [2007]. WGS84 is a global standard reference coordinate frame for the Earth that defines a spheroidal surface for altitude and a gravitational equipotential surface for the sea level. The latitude measures the angle from the equator towards the north and south, see Figure 2.5. The north pole lies on the geodetic latitude of 90° north, thus all lines of longitude converge at the north pole and at the south pole. The equator lies on latitude 0° and the south pole on −90°. The longitude measures the angle from Greenwich in the eastward and westward directions. Its values go from −180° to 180°, or equivalently from 180° west to 180° east.
2.5.1 Translation from Latitude to Distance
To translate from latitude to meters an origin is needed; in Figure 2.4 the test area is displayed and the origin is marked with a red cross. The constants used in the translation are collected in Table 2.3. At different latitudes a slice of the Earth, parallel to the equator, will have different radii, see Figure 2.5. At the north and the south pole the slice will only cut out a dot with zero meters in radius. The Earth is not a perfect sphere; it is a bit flattened. From the centre of the Earth to the equator the mean radius is 6 378 137 m and from the centre to the north pole the radius is 6 356 752 m. The angle between the current latitude φ_body and the latitude of the above chosen origin φ_origin is labelled ∆φ,

  $\Delta\phi = \phi_{body} - \phi_{origin}.$   (2.30)

The tests were performed in the area near latitude Φ_origin = 58.3934° and longitude Λ_origin = 15.5617°, which expressed in radians are φ_origin = 1.0192 rad and λ_origin = 0.2716 rad.
Figure 2.4: The test area is displayed above. The origin for the transformation from latitudes/longitudes to meters is located at the edge of the wall, marked with a red cross.

Figure 2.5: The blue dotted circle is on latitude 58°. The distance x in the EX-direction is given above.

The distance in the northward direction is

  $x = \underbrace{r_{equator}\left(1 - \frac{\phi_{origin}}{f\,\pi/2}\right)}_{r}\,\Delta\phi$   (2.31)

where the flattening constant f is used to account for the Earth's different radii at the equator and at the north pole. Furthermore, r gives the Earth's radius at latitude φ_origin and ∆φ is the difference in latitude between the origin and the aircraft. The distance in meters, following the circle path between the two points, is then the product of the Earth radius at the current latitude and the angle difference.
Figure 2.6: The Earth seen from above the north pole.
2.5.2 Conversion from Longitude to Distance

Two different methods are used to calculate the orthogonal radius from the rotational axis of the Earth to the origin. The first one uses cosine and the radius at the equator,

  $r_1 = \cos(\phi_{origin}) \cdot r_{equator},$   (2.32)

whereas the second one uses sine and the radius at the north pole,

  $r_2 = \sin\!\left(\frac{\pi}{2} - \phi_{origin}\right) \cdot r_{north\ pole},$   (2.33)

to calculate the radius r from the centre of the Earth to the origin. The resulting radii from (2.32) and (2.33) are weighted together. The weight for the equator radius was set to

  $w_a = \frac{90 - \Phi_{origin}}{90}$   (2.34)

and the weight for the north pole radius was set to

  $w_b = \frac{\Phi_{origin}}{90}.$   (2.35)

With the above mentioned origin, r_2 gets the higher weight because that radius is calculated with the help of the radius at the north pole, which is closer to the above mentioned origin than the equator.
In Figure 2.6 an outline of the Earth seen from above the north pole is shown. The radius r_3 is displayed, where

  $r_3 = w_a \cdot r_1 + w_b \cdot r_2,$   (2.36)

and the difference in angle is calculated according to

  $\Delta\lambda = \lambda_{body} - \lambda_{origin}.$   (2.37)

Finally the distance between the longitude of the origin and the position of the aircraft is calculated as

  $y = r_3 \cdot \Delta\lambda,$   (2.38)

which gives how far away the aircraft is in the eastward direction from the origin, compare with Figure 2.6.

With (2.31) and (2.38) the latitude and longitude values can be translated to meters.
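The whole conversion (2.30)-(2.38) fits in a few lines. This is a sketch using the constants from Table 2.3; the function name is ours, not from the thesis:

```python
import math

R_EQUATOR = 6378137.0     # [m] Earth radius at the equator
R_NORTH_POLE = 6356752.0  # [m] Earth radius at the north pole
F_INV = 298.2572235631    # inverse flattening constant

def latlon_to_meters(lat_deg, lon_deg, lat0_deg, lon0_deg):
    """Translate (lat, lon) in degrees to local (x, y) in meters relative
    to the origin (lat0, lon0), following equations (2.30)-(2.38)."""
    phi, phi0 = math.radians(lat_deg), math.radians(lat0_deg)
    lam, lam0 = math.radians(lon_deg), math.radians(lon0_deg)

    # Northward distance, equation (2.31): radius linearized at the origin.
    r = R_EQUATOR * (1.0 - phi0 / (F_INV * math.pi / 2.0))
    x = r * (phi - phi0)

    # Eastward distance, equations (2.32)-(2.38): weighted orthogonal radius.
    r1 = math.cos(phi0) * R_EQUATOR
    r2 = math.sin(math.pi / 2.0 - phi0) * R_NORTH_POLE
    wa = (90.0 - lat0_deg) / 90.0
    wb = lat0_deg / 90.0
    r3 = wa * r1 + wb * r2
    y = r3 * (lam - lam0)
    return x, y

# One ten-thousandth of a degree of latitude is roughly 11 m at this origin.
x, y = latlon_to_meters(58.3935, 15.5618, 58.3934, 15.5617)
```

Note that this local linearization is only valid near the chosen origin, which matches how the thesis uses it around the test area.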
3 Models

This chapter describes the dynamic model of the aircraft that has been used during the project.
3.1 Quadrocopter Model

In Figure 3.1 the forces from the propellers and the gravitational force are shown. The gravitational acceleration Eag is expressed in the Earth fixed coordinate system and the force BFtot from the propellers is expressed in the Body fixed coordinate system. To understand how the gravitational acceleration acts on the body, in the Body fixed frame, it needs to be transformed into the Body fixed coordinate system. This is done by using rotational kinematics from Section 2.3.1. The Euler angles are represented by

  ${}_E\eta = \begin{pmatrix} \phi & \theta & \psi \end{pmatrix}^T$   (3.1)

where roll φ, pitch θ and yaw ψ are the rotations around the BX-, BY- and BZ-axes, respectively. In a +-configured model, forward is along one of the booms, which is shown in Figure 3.1.
The thrust produced by a propeller is proportional to the square of its angular velocity. The revolutions result in a particular torque with respect to the propeller dimension and shape. The motors are voltage controlled and the control system uses this to control the force from the rotors. More expensive electronic speed controllers (ESCs) are controlled with respect to the angular velocity.
A model for the quadrocopter and the direct current motor is presented below.
Figure 3.1: Earth fixed coordinate system according to the North, East, Down model. In the Body fixed coordinate system roll (φ) is around the BX-axis, pitch (θ) around the BY-axis and yaw (ψ) around the BZ-axis.
The forces from the four rotors are summed together,

  ${}_B\mathbf{F}_{tot} = {}_B\mathbf{F}_1 + {}_B\mathbf{F}_2 + {}_B\mathbf{F}_3 + {}_B\mathbf{F}_4,$   (3.2)

where BFtot is the total force from the four rotors and BF1...4 are the individual forces from each rotor. The angular acceleration calculated around the BX-axis is

  ${}_B\dot\omega_x = \frac{({}_BF_4 - {}_BF_2)\,l}{I_x},$   (3.3a)

around the BY-axis

  ${}_B\dot\omega_y = \frac{({}_BF_1 - {}_BF_3)\,l}{I_y}$   (3.3b)

and finally around the BZ-axis

  ${}_B\dot\omega_z = \frac{{}_BL_2 + {}_BL_4 - {}_BL_1 - {}_BL_3}{I_z},$   (3.3c)

where l is the distance from the centre to the motor axis. The parameters ${}_B\dot\omega_{x,y,z}$ have the SI unit rad/s², $I_{x,y,z}$ have the SI unit kg·m² and ${}_BL_{1...4}$ have the SI unit N·m, see Table 3.1. Each of the propellers gives a thrust force upward,

  ${}_BF_n = k_t\,{}_B\omega_{rotor}^2,$   (3.4)
and a torque around the motor axis
  ${}_BL_n = k_d\,{}_B\omega_{rotor}^2,$   (3.5)
Table 3.1: Variables used in the dynamic model of the aircraft.

Variable  SI-unit        Description
BτMi      kg·m²·rad/s²   Angular momentum around one rotor axis
BIM       kg·m²          Moment of inertia around one rotor axis
b         kg·m²/rad      Drag constant
Bω        rad/s          Angular velocity
k         kg·m/rad²      Lift constant
Bfi       kg·m/s²        Lift force for motor i
l         m              Length from the centre of the aircraft to the rotor axis
where Bω_rotor stands for the angular velocity around the rotor axis for each motor n. Both (3.4) and (3.5) are proportional to the angular velocity squared. The parameter kt has the SI unit kg·m and kd has the SI unit kg·m². For an electric motor, the two differential equations

  $V = IR + k_e\,{}_B\omega + L\frac{dI}{dt}$   (3.6)

  $M\,{}_B\dot\omega_z = k_T I - v\,{}_B\omega - \tau$   (3.7)

are useful. In (3.6) V stands for the voltage applied to the motor, I the current, R the resistance and L for the inductance of the motor. In (3.7) M stands for the rotor's moment of inertia and τ is the torque applied to the rotor. The constants ke, kT and v are the motor's back electromotive force constant, torque constant and viscous friction constant, respectively. Usually the dynamics depending on the inductance of the motor are fast enough to be neglected, but the motors still have a certain time constant because of the motor and propeller inertia, see Movellan [2010].
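At steady state (dI/dt = 0 and ω̇ = 0) equations (3.6)-(3.7) reduce to a linear system in I and ω that can be solved by substitution. A sketch with illustrative parameter values of our own choosing:

```python
def steady_state(V, R, ke, kT, v, tau):
    """Steady-state current I and rotor speed w for the DC motor model:
        V = I*R + ke*w          (3.6) with dI/dt = 0
        0 = kT*I - v*w - tau    (3.7) with w_dot = 0
    From (3.7): I = (v*w + tau)/kT; inserted into (3.6) and solved for w."""
    w = (V * kT - R * tau) / (R * v + ke * kT)
    I = (v * w + tau) / kT
    return I, w

# With no load torque and no viscous friction, the no-load speed is V/ke.
I, w = steady_state(V=11.1, R=0.2, ke=0.01, kT=0.01, v=0.0, tau=0.0)
```

The no-load case is a quick plausibility check: the back-EMF alone balances the applied voltage, so ω = V/ke and the current is zero.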
The inertia of the aircraft is assumed to be symmetric, the same assumption as in Luukkonen [2012]. That means that Ixx is equal to Iyy and that the inertia matrix

  ${}_BI = \begin{pmatrix} I_{xx} & 0 & 0 \\ 0 & I_{yy} & 0 \\ 0 & 0 & I_{zz} \end{pmatrix}$   (3.8)

is a diagonal matrix. The angular momentum BτMi around the rotor axis is given by

  ${}_B\tau_{Mi} = b\,{}_B\omega_i^2 + {}_BI_M\,{}_B\dot\omega_i,$   (3.9)

where the used variables are explained in Table 3.1.
The total torque vector Bτ consists of the torques around the three body axes and is given by

Bτ = [ τ_φ   =   [ l k (−Bω_2² + Bω_4²)
       τ_θ         l k (−Bω_1² + Bω_3²)
       τ_ψ ]       −Σ_{i=1}^{4} Bτ_Mi ]   (3.10)

where k is the lift constant.
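The rotor-to-body mapping above can be sketched in code. This is a minimal illustration of the total thrust (sum of (3.4)) and the torque vector (3.10); the function name and all numbers are assumptions, and the lift constants k in (3.10) and k_t in (3.4) are treated as the same constant for simplicity.

```python
import numpy as np

# Sketch of the rotor-to-body mapping: total thrust from (3.4) summed
# over the motors and the torque vector from (3.10). kt, l and the
# rotor speeds are made-up illustrative numbers; tau_M holds the
# rotor reaction torques from (3.9).

def body_wrench(omega_rotor, tau_M, kt=1e-5, l=0.25):
    """Total thrust and body torques [tau_phi, tau_theta, tau_psi]."""
    w1, w2, w3, w4 = omega_rotor
    thrust = kt * (w1**2 + w2**2 + w3**2 + w4**2)   # sum of (3.4)
    tau_phi = l * kt * (-w2**2 + w4**2)             # roll, rotors 2 and 4
    tau_theta = l * kt * (-w1**2 + w3**2)           # pitch, rotors 1 and 3
    tau_psi = -sum(tau_M)                           # yaw, reaction torques
    return thrust, np.array([tau_phi, tau_theta, tau_psi])

# Equal rotor speeds: pure thrust, no roll or pitch torque.
thrust, tau = body_wrench([400.0, 400.0, 400.0, 400.0], tau_M=[0.0] * 4)
```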
3.2 Motion Path - A Smooth Trajectory in 3D
Here, the term motion path is used to describe a path through 3D space that defines the motion of the aircraft. It also has a time stamp related to different sections of the path, which means that at any given time the system shall be at a certain point along the path. One way to obtain a 3D path is to record an earlier flight; such a path is displayed in Figure 3.2. This approach, however, offers no way to evaluate the path before flight. Smooth motion paths can also be defined using, for example, time-coded 3D Bezier curves or splines, see Lyche and Morken [2008].
A fundamental problem is how the path should be defined and what properties it should have to make it suitable as a camera trajectory. Desirable properties for a motion path are:

• Possibility to define smooth paths of any shape
• Ability to control the velocity and/or flight time along the path
• Intuitive for an operator to define and modify the path
• An efficient way of calculating the closest point on the path from the current position, and the tangent vector at any point on the path
A smooth path in this sense is a path that does not contain any sudden changes in direction or velocity.
There are several ways to define a path through the air:

• Discrete waypoints
• Straight lines connected by circle segments of constant radii
• Spline curves
Using discrete waypoints provides a simple and intuitive way of defining the path of the aircraft. Straight lines connected by circle segments provide more flexibility than discrete waypoints, as smooth turns can be defined. Still, neither of the first two approaches is good enough: discrete waypoints result in a discontinuous motion, and circle segments with constant radii limit the possible shapes of the path. Spline curves are defined by polynomial interpolation between
[Figure 3.2: An earlier flight log can be used as parameters for an autonomous flight. (a) An example of a path that would be possible to fly after. (b) The same path seen from the side; the path does not cross itself, although that is difficult to see from this view.]
a set of control points. Curves of any shape can be constructed by using spline curves.
3.2.1 Properties of Spline Curves
A spline curve is a polynomial interpolation of a set of control points. Splines can be defined as linear splines, quadratic splines, cubic splines and so forth. The simplest case, linear splines, connects the points with straight lines, like waypoints. A quadratic spline connects the control points with polynomials of degree two and has a continuous first derivative. In the same way, a cubic spline has continuous first and second derivatives. Figure 3.3 and Figure 3.4 show a quadratic and a cubic spline, respectively.
3.2.2 Definition of Spline Curves
Splines are defined by interpolation between a set of control points. The simplest form of spline is the linear spline, which connects a set of control points by straight lines. A point p on a linear spline segment is defined by

p(t | c_1, c_2; t_1, t_2) = (t_2 − t)/(t_2 − t_1) · c_1 + (t − t_1)/(t_2 − t_1) · c_2   (3.11)

where c_i are the control points, t_i are the knots, and t ∈ [t_1, t_2]. The knots t_i give the value of the parameter t at the corresponding control points. Equation (3.11) computes a point on the line connecting c_1 and c_2. The computation is called a
convex combination of points c_1 and c_2. Convex combinations have the property that the computed point always lies between the two given points, see Lyche and Morken [2008].

[Figure 3.3: A non-uniform quadratic spline in two dimensions with control points (0, 0), (1, 2), (2, 0.5) and knots (0, 0, 1, 1). The points P1, P2 and P are evaluated for t = 0.4.]
Quadratic Spline Curves
To compose a quadratic spline curve, three control points c_1, c_2 and c_3 are needed together with four knots t_1, t_2, t_3 and t_4, where t_1 ≤ t_2 ≤ t_3 ≤ t_4. A quadratic spline can be constructed by forming a convex combination of two linear splines according to

p(t | c_1, c_2, c_3; t_1, t_2, t_3, t_4) = (t_3 − t)/(t_3 − t_2) · p(t | c_1, c_2; t_1, t_3) + (t − t_2)/(t_3 − t_2) · p(t | c_2, c_3; t_2, t_4)   (3.12)

where t ∈ [t_2, t_3). The line connecting c_1 and c_2 is denoted p(t | c_1, c_2; t_1, t_3) and the line connecting c_2 and c_3 is denoted p(t | c_2, c_3; t_2, t_4).
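The construction in (3.11)-(3.12) can be sketched directly in code. The function names are illustrative, and the example evaluates the x-coordinates of the Figure 3.3 control points with its knots (0, 0, 1, 1).

```python
# Sketch of (3.11)-(3.12): a linear spline segment as a convex
# combination, and a quadratic segment as a convex combination of
# two linear ones. Names and the example data are illustrative.

def linear(t, c1, c2, t1, t2):
    """Point on the line through c1 and c2, eq. (3.11)."""
    return (t2 - t) / (t2 - t1) * c1 + (t - t1) / (t2 - t1) * c2

def quadratic(t, c1, c2, c3, t1, t2, t3, t4):
    """Quadratic segment, eq. (3.12), valid for t in [t2, t3)."""
    p12 = linear(t, c1, c2, t1, t3)
    p23 = linear(t, c2, c3, t2, t4)
    return (t3 - t) / (t3 - t2) * p12 + (t - t2) / (t3 - t2) * p23

# x-coordinates 0, 1, 2 of the Figure 3.3 control points with
# knots (0, 0, 1, 1): the segment covers t in [0, 1).
p = quadratic(0.4, 0.0, 1.0, 2.0, 0.0, 0.0, 1.0, 1.0)
```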
In the case with n control points (c_i)_{i=1}^{n} and n + 1 knots (t_i)_{i=2}^{n+2}, where the knots are nondecreasing and the two first and the two last knots satisfy t_2 = t_3 and t_{n+1} = t_{n+2}, a quadratic spline curve f(t) can be denoted

f(t) = { p_{d+1,d}(t),  t ∈ [t_{d+1}, t_{d+2}]
         p_{d+2,d}(t),  t ∈ [t_{d+2}, t_{d+3}]
         ...
         p_{n,d}(t),    t ∈ [t_n, t_{n+1}] }   (3.13)

[Figure 3.4: A non-uniform cubic spline in two dimensions with control points (0, 0), (1, 2), (2, 2), (3, 0) and knots (0, 0, 0, 0, 1, 1, 1).]
Another way to express f(t) is as a sum over all segments,

f(t) = Σ_{i=d+1}^{n} p_{i,d}(t) B_{i,0}(t)   (3.14)

where B_{i,0}(t) are piecewise constant functions defined by

B_{i,0}(t) = { 1,  t_i ≤ t ≤ t_{i+1}
               0,  otherwise }   (3.15)
The difference between two adjacent values in the knot vector can be seen as the time difference between two control points, although the spline generally does not pass through the control points.
Cubic Splines
The calculation of points on linear and quadratic splines is easily generalized to higher polynomial degrees by using more averages. Calculating a point on an arbitrary spline segment of polynomial degree d requires n = d + 1 control points c_1, c_2, ..., c_d, c_{d+1} and 2d knots (t_i)_{i=1}^{2d}. A spline segment of degree d is then given as the average of two spline segments of degree d − 1
p(t | c_{i−d}, c_{i−(d−1)}, ..., c_i; t_{i−(d−1)}, ..., t_i, t_{i+1}, ..., t_{i+(d−1)}, t_{i+d}) =
  (t_{i+1} − t)/(t_{i+1} − t_i) · p(t | c_{i−d}, c_{i−(d−1)}, ..., c_{i−1}; t_{i−(d−1)}, ..., t_{i+(d−2)}, t_{i+(d−1)})
+ (t − t_i)/(t_{i+1} − t_i) · p(t | c_{i−(d−1)}, c_{i−(d−2)}, ..., c_i; t_{i−(d−2)}, ..., t_i, t_{i+(d−1)}, t_{i+d})   (3.16)
where i starts at d + 1 for the first segment of the spline curve. The spline curve of degree d with n control points (c_i)_{i=1}^{n} and 2d knots (t_i)_{i=2}^{2d} is given by (3.13). The spline segment in equation (3.16) can be shortened to p^s_{i,d}, where i stands for the ingoing control points and knots, d stands for the polynomial degree and s gives the gap between the knots in the computation of the weight (t − t_i)/(t_{i+s} − t_i), which is explained on page 24 in Lyche and Morken [2008].
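The recursion (3.16) amounts to repeated convex combinations (de Boor's algorithm, in the spirit of Lyche and Morken [2008]). The sketch below evaluates one cubic (d = 3) segment from its four active control points and six local knots; the function name and example data are illustrative, not taken from the thesis implementation.

```python
# De Boor-style evaluation of one cubic spline segment by repeated
# convex combinations, i.e. the d = 3 case of the recursion (3.16).
# ctrl holds the four active control points, knots the six local
# knots; t must lie in [knots[2], knots[3]].

def cubic_segment_point(t, ctrl, knots):
    d = list(ctrl)
    for r in range(1, 4):               # three averaging levels
        for j in range(3, r - 1, -1):   # update in place, right to left
            alpha = (t - knots[j - 1]) / (knots[j + 3 - r] - knots[j - 1])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[3]

# With control points placed at the knot averages (1, 2, 3, 4 for
# knots 0..5) the spline reproduces the straight line f(t) = t.
val = cubic_segment_point(2.5, [1.0, 2.0, 3.0, 4.0], [0, 1, 2, 3, 4, 5])
```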
A cubic spline segment requires four control points (n = 4), c_1, c_2, c_3 and c_4, and six knots. The cubic spline segment is given by

p(t | c_{i−3}, c_{i−2}, c_{i−1}, c_i; t_{i−2}, t_{i−1}, t_i, t_{i+1}, t_{i+2}, t_{i+3}) =
  (t_{i+1} − t)/(t_{i+1} − t_i) · p(t | c_{i−3}, c_{i−2}, c_{i−1}; t_{i−2}, t_{i−1}, t_{i+1}, t_{i+2})
+ (t − t_i)/(t_{i+1} − t_i) · p(t | c_{i−2}, c_{i−1}, c_i; t_{i−1}, t_i, t_{i+2}, t_{i+3})   (3.17)
which is a combination of quadratic splines.

Matrix Representation of Cubic Spline
In order to simplify the evaluation of points on a spline and enable simple calculation of the derivatives of the spline curve, the function can be represented in matrix form. As shown in Lyche and Morken [2008][p. 46], a cubic spline can be written on the form
f(t) = R_1(t) R_2(t) R_3(t) [c_{μ−3}  c_{μ−2}  c_{μ−1}  c_μ]^T,   (3.18)

where c_i, i = μ − 3, ..., μ, are the control points that are active on segment μ and the R_i are defined by the knots t_i according to
R_1(t) = [ (t_{μ+1} − t)/(t_{μ+1} − t_μ)   (t − t_μ)/(t_{μ+1} − t_μ) ] = A_1(T_1 t + B_1),   (3.19)

R_2(t) = [ (t_{μ+1} − t)/(t_{μ+1} − t_{μ−1})   (t − t_{μ−1})/(t_{μ+1} − t_{μ−1})   0
           0   (t_{μ+2} − t)/(t_{μ+2} − t_μ)   (t − t_μ)/(t_{μ+2} − t_μ) ] = A_2(T_2 t + B_2),   (3.20)

R_3(t) = [ (t_{μ+1} − t)/(t_{μ+1} − t_{μ−2})   (t − t_{μ−2})/(t_{μ+1} − t_{μ−2})   0   0
           0   (t_{μ+2} − t)/(t_{μ+2} − t_{μ−1})   (t − t_{μ−1})/(t_{μ+2} − t_{μ−1})   0
           0   0   (t_{μ+3} − t)/(t_{μ+3} − t_μ)   (t − t_μ)/(t_{μ+3} − t_μ) ] = A_3(T_3 t + B_3).   (3.21)

Each R_i(t) can thus be written as the matrix product

R_i = A_i(T_i t + B_i)   (3.22)

where

A_1 = [ 1/(t_{μ+1} − t_μ) ]   (3.23a)
A_2 = diag( 1/(t_{μ+1} − t_{μ−1}),  1/(t_{μ+2} − t_μ) )   (3.23b)
A_3 = diag( 1/(t_{μ+1} − t_{μ−2}),  1/(t_{μ+2} − t_{μ−1}),  1/(t_{μ+3} − t_μ) )   (3.23c)
B_1 = [ t_{μ+1}   −t_μ ]   (3.23d)
B_2 = [ t_{μ+1}   −t_{μ−1}   0
        0   t_{μ+2}   −t_μ ]   (3.23e)
B_3 = [ t_{μ+1}   −t_{μ−2}   0   0
        0   t_{μ+2}   −t_{μ−1}   0
        0   0   t_{μ+3}   −t_μ ]   (3.23f)
T_1 = [ −1   1 ]   (3.23g)
T_2 = [ −1   1   0
        0   −1   1 ]   (3.23h)
T_3 = [ −1   1   0   0
        0   −1   1   0
        0   0   −1   1 ]   (3.23i)

Using the relationship (3.22), the product of the R-matrices can be written as

R_1(t)R_2(t)R_3(t) = (A_1(T_1 t + B_1))(A_2(T_2 t + B_2))(A_3(T_3 t + B_3))
= (A_1T_1 t + A_1B_1)(A_2T_2 t + A_2B_2)(A_3T_3 t + A_3B_3)
= t³ A_1T_1A_2T_2A_3T_3
+ t² (A_1T_1A_2T_2A_3B_3 + A_1T_1A_2B_2A_3T_3 + A_1B_1A_2T_2A_3T_3)
+ t (A_1T_1A_2B_2A_3B_3 + A_1B_1A_2T_2A_3B_3 + A_1B_1A_2B_2A_3T_3)
+ (A_1B_1A_2B_2A_3B_3)
= [t³  t²  t  1] R̃   (3.24)

where

R̃ = [ A_1T_1A_2T_2A_3T_3
      A_1T_1A_2T_2A_3B_3 + A_1T_1A_2B_2A_3T_3 + A_1B_1A_2T_2A_3T_3
      A_1T_1A_2B_2A_3B_3 + A_1B_1A_2T_2A_3B_3 + A_1B_1A_2B_2A_3T_3
      A_1B_1A_2B_2A_3B_3 ]   (3.25)
This makes it possible to separate the parameter t from the matrix R̃ (called the blend matrix), which depends only on the knots t_{μ−2}, ..., t_{μ+3} that are active on segment μ. The blend matrix can therefore be pre-calculated for every segment, and the function value at t can be efficiently calculated as

f(t) = [t³  t²  t  1] R̃ [c_{μ−3}  c_{μ−2}  c_{μ−1}  c_μ]^T   (3.26)

This also makes it easy to calculate the first and second derivatives (velocity and acceleration) at any point on the curve, by simply differentiating f(t) with respect to t:

f′(t) = [3t²  2t  1  0] R̃ [c_{μ−3}  c_{μ−2}  c_{μ−1}  c_μ]^T   (3.27)

f′′(t) = [6t  2  0  0] R̃ [c_{μ−3}  c_{μ−2}  c_{μ−1}  c_μ]^T   (3.28)
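The blend-matrix construction (3.22)-(3.28) can be sketched as follows. The local knot array u = [t_{μ−2}, ..., t_{μ+3}] and the function names are assumptions of this sketch; with control points at the knot averages the curve reproduces f(t) = t, which gives a convenient sanity check.

```python
import numpy as np

# Sketch of the blend-matrix construction for one cubic segment,
# with u = [t_{mu-2}, ..., t_{mu+3}] the six active knots and t in
# [u[2], u[3]]. Function names are illustrative.

def blend_matrix(u):
    """The 4x4 matrix R~ of (3.25), built from R_i = A_i (T_i t + B_i)."""
    P1 = np.array([[-1.0, 1.0]]) / (u[3] - u[2])            # A1 T1
    Q1 = np.array([[u[3], -u[2]]], dtype=float) / (u[3] - u[2])  # A1 B1
    A2 = np.diag([1.0 / (u[3] - u[1]), 1.0 / (u[4] - u[2])])
    P2 = A2 @ np.array([[-1.0, 1.0, 0.0], [0.0, -1.0, 1.0]])
    Q2 = A2 @ np.array([[u[3], -u[1], 0.0], [0.0, u[4], -u[2]]], dtype=float)
    A3 = np.diag([1.0 / (u[3] - u[0]), 1.0 / (u[4] - u[1]), 1.0 / (u[5] - u[2])])
    P3 = A3 @ np.array([[-1.0, 1, 0, 0], [0, -1, 1, 0], [0, 0, -1, 1]])
    Q3 = A3 @ np.array([[u[3], -u[0], 0, 0],
                        [0, u[4], -u[1], 0],
                        [0, 0, u[5], -u[2]]], dtype=float)
    return np.vstack([P1 @ P2 @ P3,                          # t^3 row
                      P1 @ P2 @ Q3 + P1 @ Q2 @ P3 + Q1 @ P2 @ P3,
                      P1 @ Q2 @ Q3 + Q1 @ P2 @ Q3 + Q1 @ Q2 @ P3,
                      Q1 @ Q2 @ Q3])                         # constant row

def spline_eval(t, ctrl, u):
    """f, f' and f'' at t via (3.26)-(3.28)."""
    Rt = blend_matrix(u)
    c = np.asarray(ctrl, dtype=float)
    f = np.array([t**3, t**2, t, 1.0]) @ Rt @ c
    df = np.array([3 * t**2, 2 * t, 1.0, 0.0]) @ Rt @ c
    ddf = np.array([6 * t, 2.0, 0.0, 0.0]) @ Rt @ c
    return f, df, ddf

# Knot-average control points: the segment reproduces f(t) = t.
f, df, ddf = spline_eval(2.5, [1.0, 2.0, 3.0, 4.0], [0, 1, 2, 3, 4, 5])
```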
Continuous Derivatives
3.1 Theorem. Suppose that the number t_{i+1} occurs m times among the knots (t_j)_{j=i−d}^{m+d}, with m an integer bounded by 1 ≤ m ≤ d + 1, that is

t_i ≤ t_{i+1} = · · · = t_{i+m} ≤ t_{i+m+1}   (3.29)

Then the spline function f(t) = p_{i,d,1}(t) B_{i,0}(t) + p_{i+m,d,1}(t) B_{i+m,0}(t) has continuous derivatives up to order d − m at the joint t_{i+1}.
For a more detailed derivation of spline curves, see page 27 in Lyche and Morken [2008], from which this theorem is taken.
3.2.3 Finding the Closest Point on a Spline Curve
The latest known visited position on the curve gives an allowed section of the curve that the aircraft may be controlled towards. Since the aim is to follow the path as closely as possible, it is sensible to control towards the nearest point on the curve, while also weighting in the desired point p(t) given by the time stamp.
In Wang et al. [2003] the problem of finding the closest point on a spline curve is treated. They suggest a combination of the Quadratic Minimization Method and Newton's Method, see Algorithm 1 and Algorithm 2, respectively. The strength of Wang et al. [2003] is that they use the result of the Quadratic Minimization Method as an initial guess for Newton's Method. The problem is not convex and there are often local minima; it is therefore important that the initial guesses are in the right region of the spline curve, otherwise the Quadratic Minimization Method will find a point that is suboptimal. After four iterations with the Quadratic Minimization Method the result is often very near the optimal solution, and continued searching with the method converges at a superlinear rate, which roughly means that few iterations are needed before the difference is small enough. Newton's Method, on the other hand, converges quadratically, which is faster than superlinear, but can run into problems in the beginning if the initial guess is poor. Hence it is a good idea to combine these two methods and utilize the advantages of both.
To find the point on the curve that lies nearest the aircraft, the squared distance between the aircraft's position p_a and a point p(t) along the spline curve is calculated:

D(t) = (Ex(t) − Ex_a)² + (Ey(t) − Ey_a)² + (Ez(t) − Ez_a)²   (3.30)

The goal is to find the value t* that minimizes D(t) and hence gives the nearest point p(t*) = [Ex(t*), Ey(t*), Ez(t*)]^T on the curve.

Algorithm 1 Quadratic Minimization Method
1. Initial estimates: guess three values of t*,

t̃_1, t̃_2 and t̃_3   (3.31)

2. Calculate a fourth t value

t*,k = (1/2) · [q_23 D(t̃_1) + q_31 D(t̃_2) + q_12 D(t̃_3)] / [t_23 D(t̃_1) + t_31 D(t̃_2) + t_12 D(t̃_3)],  k = 1, 2, 3, ...   (3.32a)

t_ij = t̃_i − t̃_j,  i, j ∈ {1, 2, 3}   (3.32b)

q_ij = t̃_i² − t̃_j²,  i, j ∈ {1, 2, 3}   (3.32c)
3. Calculate the quadratic polynomial P(t) that interpolates D(t) at t̃_1, t̃_2 and t̃_3, and then minimize it:

P(t) = [(t − t̃_2)(t − t̃_3)] / [(t̃_1 − t̃_2)(t̃_1 − t̃_3)] · D(t̃_1) + [(t − t̃_1)(t − t̃_3)] / [(t̃_2 − t̃_1)(t̃_2 − t̃_3)] · D(t̃_2) + [(t − t̃_1)(t − t̃_2)] / [(t̃_3 − t̃_1)(t̃_3 − t̃_2)] · D(t̃_3)   (3.33)
4. Pick the three values from t̃_1, t̃_2, t̃_3 and t*,k that give the smallest values of P(t), restart at step 3 with these estimates, and repeat until the difference between the currently calculated P(t) and the previous one is small enough.

The three initial estimates of t* in step 1 of Algorithm 1 can be initialized in different ways. One choice is to use the previously visited point on the curve, the point that the aircraft should be at according to its velocity, and the point it should be at according to the time. Another way is to utilize that the spline curve is divided into segments: the three initial guesses can then be chosen as t_i, (t_i + t_{i+1})/2 and t_{i+1}, where i denotes the last segment the aircraft was in and i + 1 the next segment that the aircraft will enter.

[Figure 3.5: A spline curve and the iterations of the Quadratic Minimization Method. In this case nine iterations were needed (only the first five and the last one are shown) to find the closest point on the curve. The start guesses of t were 0, 0.5 and 3.75.]
Figure 3.5 shows a graph of a motion path in the EX-EY plane and the functionality of the Quadratic Minimization Method. The iterations are marked with red stars, the closest point with a circle and the aircraft's position with a plus sign. During this test the initial t-values, 0, 0.5 and 3.75, were chosen in two different segments of the curve. This choice was made to illustrate the algorithm, and it took nine iterations to find the closest point on the curve with respect to the allowed error. With initial t-values closer together, fewer iterations were needed, and the iterated points often ended up very near each other, which resulted in a poor plot for the purpose of illustrating the principle of the algorithm.

Algorithm 2 Newton's Method

1. Initial estimate

t*,0   (3.34a)

2. Iteration

t*,m+1 = t*,m − D′(t*,m) / D′′(t*,m),  m = 0, 1, 2, ...   (3.35a)

In Figure 3.6 a graph of a motion path created with a cubic spline is displayed, together with the closest point to the aircraft and the iterations toward it. Newton's Method uses the tangent of the curve to find a better value. In Algorithm 2, the notations D′ and D′′ stand for the first and second derivative, respectively, of the squared distance D in (3.30) between the aircraft and a point on the curve.
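A compact sketch of the combined search follows. Two simplifications relative to the algorithms above are made and should be noted: the surviving candidates in step 4 are ranked by D(t) directly instead of P(t), and the derivatives for Newton's Method are approximated by finite differences. The toy curve p(t) = (t, t²) with the aircraft at (0, 1) is an assumption of this sketch.

```python
# Sketch of the combined closest-point search: a few Quadratic
# Minimization steps (3.32) followed by Newton polishing (Algorithm 2).

def quad_min_step(D, t1, t2, t3):
    """The minimizer of the parabola interpolating D at t1, t2, t3."""
    q23, q31, q12 = t2**2 - t3**2, t3**2 - t1**2, t1**2 - t2**2
    t23, t31, t12 = t2 - t3, t3 - t1, t1 - t2
    num = q23 * D(t1) + q31 * D(t2) + q12 * D(t3)
    den = t23 * D(t1) + t31 * D(t2) + t12 * D(t3)
    return 0.5 * num / den

def newton_step(D, t, h=1e-5):
    """One Newton iteration on D'(t) = 0 with difference quotients."""
    d1 = (D(t + h) - D(t - h)) / (2 * h)
    d2 = (D(t + h) - 2 * D(t) + D(t - h)) / h**2
    return t - d1 / d2

D = lambda t: t**2 + (t**2 - 1.0)**2   # squared distance (3.30), toy curve
t1, t2, t3 = 0.2, 0.6, 1.2             # initial guesses, step 1
for _ in range(4):                     # quadratic minimization steps
    cand = sorted([t1, t2, t3, quad_min_step(D, t1, t2, t3)], key=D)
    t1, t2, t3 = cand[:3]              # keep the three best candidates
t_star = min([t1, t2, t3], key=D)
for _ in range(5):                     # Newton polishing
    t_star = newton_step(D, t_star)
# For this D the true minimizer is t = 1/sqrt(2).
```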
3.3 AHRS Algorithm
AHRS stands for Attitude and Heading Reference System and provides an orientation of the aircraft relative to the direction of gravity and the Earth's magnetic field. In the orientation filter developed by Madgwick [2010] the tri-axis gyroscopes and accelerometers in the IMU are used together with the tri-axis magnetometer to calculate the orientation of the aircraft. This so called MARG implementation makes it possible to compensate for magnetic distortion and gyroscopic bias drift. MARG stands for Magnetic, Angular Rate and Gravity and is a combination of an IMU and a tri-axis magnetometer. MARG is thus the same as AHRS.
The orientation filter uses a quaternion representation to avoid the risk of gimbal lock, and uses an optimized gradient-descent algorithm to compute the direction of the measurement error in the gyroscope data. Madgwick's orientation filter is computationally cheap and effective at sampling rates down to 10 Hz. Algorithm 3 gives a brief overview of the algorithm; a thorough derivation can be found in Madgwick [2010].
Algorithm 3 AHRS Filter
1. Integrate angular velocity measurements
If a measurement of the angular velocity in the body frame is expressed in the form

sω = [0  ω_x  ω_y  ω_z],   (3.36)
the quaternion derivative that represents the same rotation can be calculated as

q̇_{ω,k} = (1/2) q̂_{k−1} ⊗ sω,   (3.37)

where ⊗ denotes quaternion multiplication and q̂_{k−1} is the previous quaternion estimate. The new estimated quaternion q̂_{ω,k} is then calculated by integrating according to

q̂_{ω,k} = q̂_{ω,k−1} + q̇_{ω,k} Δt,   (3.38)

where Δt is the sampling time.
2. Calculate correction based on vector measurements
An estimate of the current quaternion can be calculated using vector measurements such as gravity and magnetic field according to

f(q̂_{k−1}, Ed, Bs) = q̂*_{k−1} ⊗ Ed ⊗ q̂_{k−1} − Bs   (3.39)

q̂_{∇,k−1} = q̂_{k−1} − μ ∇f(q̂_{k−1}, Ed, Bs) / ||∇f(q̂_{k−1}, Ed, Bs)||   (3.40)

where Ed is the reference vector in the earth frame and Bs is the measurement in the body frame. ∇f is the gradient of f with respect to the quaternion. Equations (3.39) and (3.40) describe a gradient descent algorithm that uses the previous quaternion estimate as a starting point and makes one step of length μ in the direction of the negative gradient.
3. Combine the two estimates
The two estimates are then weighted together to give the final estimate according to

q̂_k = γ q̂_{∇,k} + (1 − γ) q̂_{ω,k},  0 ≤ γ ≤ 1   (3.41)
where γ controls the influence of the vector measurements.

4. Normalize quaternion
Only quaternions of magnitude 1 represent orientations; therefore the quaternion needs to be renormalized every iteration according to

q̂_k = q̂_k / ||q̂_k||   (3.42)
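Steps 1 and 4 of the algorithm can be sketched as follows. This is a minimal gyro-only integration with renormalization, following (3.36)-(3.38) and (3.42); the quaternion-multiplication helper and the test trajectory are assumptions of this sketch, not the thesis implementation.

```python
import numpy as np

# Sketch of steps 1 and 4 of the AHRS filter: integrate a gyro
# measurement via the quaternion derivative (3.37)-(3.38) and
# renormalize (3.42). Quaternions are stored as [w, x, y, z].

def quat_mult(p, q):
    """Hamilton product p ⊗ q."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def gyro_update(q_prev, omega_body, dt):
    """One gyro integration step: q_dot = 0.5 * q ⊗ (0, ω)."""
    s_omega = np.array([0.0, *omega_body])          # (3.36)
    q_dot = 0.5 * quat_mult(q_prev, s_omega)        # (3.37)
    q = q_prev + q_dot * dt                         # (3.38)
    return q / np.linalg.norm(q)                    # (3.42)

# Rotating about the body z-axis at 1 rad/s for pi seconds should
# yield approximately a 180 degree yaw rotation.
q = np.array([1.0, 0.0, 0.0, 0.0])
dt = 1e-3
for _ in range(int(np.pi / dt)):
    q = gyro_update(q, [0.0, 0.0, 1.0], dt)
```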
3.3.1 Pros and Cons with Madgwick's Orientation Filter
Madgwick's orientation filter has several advantages:

• Good performance at low sampling rates
• A single adjustable parameter
• Gyroscope bias drift compensation
• The use of quaternions avoids the singularities associated with the Euler angle representation
• Simple to implement and tune
The quaternions used in the filter need to be transformed to Euler angle representation to be able to interpret the orientation of the aircraft. This is done by
ψ = arctan2(2 q_2 q_3 − 2 q_1 q_4,  2 q_1² + 2 q_2² − 1)   (3.43)

θ = −arcsin(2 q_2 q_4 + 2 q_1 q_3)   (3.44)

φ = arctan2(2 q_3 q_4 − 2 q_1 q_2,  2 q_1² + 2 q_4² − 1)   (3.45)

where arctan2(y, x) is the principal value of arg(x + iy),   (3.46)

which can be written as

arctan2(y, x) = { arctan(y/x)        x > 0
                  arctan(y/x) + π    y > 0, x < 0
                  arctan(y/x) − π    y < 0, x < 0
                  +π/2               y > 0, x = 0
                  −π/2               y < 0, x = 0
                  undefined          y = 0, x = 0 }   (3.47)
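The conversion (3.43)-(3.45) maps directly onto the two-argument arctangent available in most languages. A minimal sketch, with the quaternion ordered as (q1, q2, q3, q4) = (w, x, y, z) as assumed above:

```python
import math

# Sketch of the quaternion-to-Euler conversion (3.43)-(3.45).
# math.atan2 implements the arctan2 of (3.46)-(3.47).

def quat_to_euler(q):
    q1, q2, q3, q4 = q
    psi = math.atan2(2*q2*q3 - 2*q1*q4, 2*q1**2 + 2*q2**2 - 1)    # yaw (3.43)
    theta = -math.asin(2*q2*q4 + 2*q1*q3)                         # pitch (3.44)
    phi = math.atan2(2*q3*q4 - 2*q1*q2, 2*q1**2 + 2*q4**2 - 1)    # roll (3.45)
    return phi, theta, psi

# Identity quaternion -> zero roll, pitch and yaw.
phi, theta, psi = quat_to_euler((1.0, 0.0, 0.0, 0.0))
```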
4 Sensors - IMU, Magnetometer, Barometer and Internal GPS
This chapter describes the different sensors that are used in the flight controller. Their abilities and limitations are discussed under each section. The aircraft uses gyroscopes, accelerometers and magnetometers to measure its orientation and angular velocity relative to the Earth. None of the sensors alone can provide a complete representation of the attitude of the aircraft as they all suffer from different limitations.
4.1 IMU
The IMU consists of gyroscopes and accelerometers which are discussed next.
4.1.1 Gyroscopes
The aircraft is equipped with three gyroscopes that each measure the angular velocity along one of the axes in the Body fixed coordinate system. The gyroscopes suffer from two main drawbacks, bias and noise. If it were not for these drawbacks, the measured angular velocities alone could be integrated to provide a complete representation of the attitude of the aircraft, provided the initial orientation is known.
4.1.2 Accelerometers
The three accelerometers in the IMU are used to measure the direction of the gravitational acceleration, which is assumed to always point down (along the positive z-axis) in the Earth frame. They are mounted in three mutually orthogonal directions to be able to measure in three dimensions. One problem with the accelerometers is that they also measure the linear acceleration of the aircraft and can therefore not be used alone for attitude determination of a non-stationary system.
4.2 Magnetometer
A three axis magnetometer is used to provide an absolute reference of heading, something that cannot be achieved with only accelerometers and rate gyroscopes. The magnetometer measures the magnetic field in the Body fixed frame.
4.2.1 Hard Iron and Soft Iron Distortion
The magnetic measurements are distorted by magnetic objects in the vicinity of the sensor. The distortion can be divided into two categories, hard iron and soft iron distortion. Hard iron distortion can be described as an offset that is constant in the Body frame and not affected by the orientation of the sensor with respect to the Earth’s magnetic field.
The hard iron offset is caused by permanent magnets or electric currents close to the sensor. Soft iron distortion is caused by the interaction between the external magnetic field and magnetic materials in the Body frame. The soft iron offset is therefore dependent on the orientation of the sensor relative to the external magnetic field. Hard iron distortion can be compensated for by subtracting the offset from the measurements directly in the Body frame, while compensating for soft iron effects is more complicated, see Vasconcelos et al. [2011].
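Since hard iron compensation is a plain offset subtraction in the Body frame, it reduces to one line of code once a bias estimate is available. The bias and measurement values below are made-up illustrative numbers:

```python
import numpy as np

# Hard-iron compensation: subtract a pre-estimated constant bias b0
# from each magnetometer sample, directly in the Body frame. The
# numbers are illustrative, not calibration data from the aircraft.

def compensate_hard_iron(b_meas, b0):
    """Remove the hard-iron offset from a magnetometer sample."""
    return np.asarray(b_meas) - np.asarray(b0)

corrected = compensate_hard_iron([0.32, -0.11, 0.48], [0.05, -0.01, 0.03])
```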
4.2.2 Automatic Hard Iron Calibration Algorithm
The problem of magnetometer bias (hard iron) is addressed by an automatic algorithm that continuously estimates the bias of all three axes during flight. The algorithm is based on the work by Premerlani [2011].
A measurement of the magnetic field is described by
Bb − Bb_0 = R Eb   (4.1)

where Bb is the measurement in the Body frame, Bb_0 is the measurement bias, R is the rotation matrix from the Earth frame to the Body frame and Eb is the Earth's magnetic field. The square of the magnitude of both sides of (4.1) is given by

|Bb − Bb_0|² = |R Eb|²   (4.2)
Because taking the squared magnitude of a vector is equivalent to taking the dot product of the vector with itself and because R is a rotation matrix and thus preserves vector magnitudes, (4.2) can be rewritten as
(Bb − Bb_0) · (Bb − Bb_0) = |Eb|²,   (4.3)

where the left side expands to