### Autonomous Following and Filming of a Test Vehicle

Teodor Johnsson
LiTH-ISY-EX--16/4963--SE

Supervisor: Clas Veibäck, ISY, Linköpings universitet
Pierre Pettersson, BorgWarner AB

Examiner: Daniel Axehill, ISY, Linköpings universitet

Division of Automatic Control Department of Electrical Engineering

Linköping University SE-581 83 Linköping, Sweden

Quadcopters have been used as filming platforms for years and are mainly manually controlled, with the inherent difficulties this entails. The objective of this thesis project was to eliminate the human factor by developing and constructing a prototype for an autonomous filming platform at BorgWarner AB. Different methods for following and filming the test vehicles were investigated using different sensors, such as Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). These were evaluated in Simulink simulations, where it was shown that improved reference following could be achieved with a complementary filter. Further development focused on writing a control system for the prototype, in which the following algorithms and the prototype could be evaluated. Final testing resulted in the conclusion that the desired objective is feasible with the current platform. There were, however, some limitations in the desired change of direction of the quadcopter, first discovered during simulations and confirmed during experimental testing.

This master’s thesis would not have been possible without the support of my supervisor, Pierre Pettersson at BorgWarner AB, and the TTC division, for their cheerful support.

I would also like to thank Clas Veibäck and Daniel Axehill at Linköping University for their feedback and support.

Further acknowledgement goes to David Tjskog for great support and help with troublesome software.

*Linköping, February 2016*
*Teodor Johnsson*

Notation

1 Introduction
  1.1 Purpose
  1.2 Problem Formulation
  1.3 Limitations
  1.4 Literature Study
  1.5 Software
    1.5.1 Matlab and Simulink
    1.5.2 Visual Studio
    1.5.3 CanAlyzer

2 Preliminaries
  2.1 Modelling
    2.1.1 Actuators
    2.1.2 Coordinate Systems
    2.1.3 Motors
    2.1.4 Thrust
    2.1.5 Torque
    2.1.6 Gyroscopic Effects
    2.1.7 Aerodynamic Forces
    2.1.8 Inertia
  2.2 Simulations
    2.2.1 Equations of Motion
    2.2.2 Rotational Conversion
    2.2.3 Controllers
    2.2.4 Global Navigation Satellite Systems
    2.2.5 Complementary Filter

3 Modelling
  3.1 Thrust and Torque
  3.2 Aerodynamic Forces
  3.3 Inertia and Mass

4 Simulations
  4.1 Coordinate Systems
  4.2 Dynamic Model
    4.2.1 Calculations
  4.3 Control Systems
    4.3.1 Altitude Control
    4.3.2 Euler Angles
    4.3.3 Position Controller Design
    4.3.4 Position Controller
    4.3.5 Slow and Fast Mode
    4.3.6 Tuning
  4.4 Trajectory
    4.4.1 Logged Data
    4.4.2 Shortest Path Trajectory
  4.5 Sensor Fusion

5 Prototype
  5.1 Flight Controller
  5.2 Communication
    5.2.1 xBee
    5.2.2 Mavlink
    5.2.3 Ground Station
    5.2.4 Quadcopter Specification
  5.3 Gimbal
    5.3.1 Gimbal Modelling and Control
    5.3.2 Global Navigation Satellite Systems

6 Results
  6.1 Modelling
    6.1.1 Model Validation
    6.1.2 Control Validation
  6.2 Simulations
    6.2.1 Trajectory Evaluation
    6.2.2 Slow and Fast Mode
  6.3 Prototype
    6.3.1 Heading Control
    6.3.2 Altitude Control
    6.3.3 Position Control
    6.3.4 Gimbal Control

7 Discussion
  7.1 Modelling
  7.2 Simulations
  7.3 Prototype
    7.3.1 Video Capturing

  8.1 Conclusion
  8.2 Future Work

Variables

Notation   Meaning

θ          Pitch
ϕ          Roll
ψ          Yaw
p          Angular velocity, x-axis
q          Angular velocity, y-axis
r          Angular velocity, z-axis
u          B-frame longitudinal velocity
v          B-frame lateral velocity
w          B-frame vertical velocity
X          Local E-frame position
Y          Local E-frame position
Z          Local E-frame position
v          Voltage
i          Current
A          Area
C_T        Thrust coefficient
C_D        Drag coefficient
g          Gravity
m          Mass
I          Inertial constant
ρ          Density
ω          Angular velocity
T_s        Sample time
α_h        Weight factor for heading filter
α_p        Weight factor for position filter

Abbreviations

Abbreviation   Meaning

IDE        Integrated Development Environment
VS         Visual Studio
E-Frame    Earth Frame
B-Frame    Body Frame
BLDC       Brushless Direct Current
DC         Direct Current
DoF        Degrees of Freedom
MCU        Micro Control Unit
CoG        Center of Gravity
INS        Inertial Navigation System
GNSS       Global Navigation Satellite Systems
LiPO       Lithium Polymer
PWM        Pulse Width Modulation
ESC        Electronic Speed Controller
FC         Flight Controller
CAN        Controller Area Network
CEP        Circular Error Probability
UART       Universal Asynchronous Receiver/Transmitter
C#         Programming Language
PID        Proportional Integral Derivative
CF         Complementary Filter

**1 Introduction**

The following master’s thesis was conducted at BorgWarner AB in Landskrona during the autumn of 2015.

BorgWarner AB is a global manufacturer of automotive components for drivetrains in cars. Their main product in Sweden is their All-Wheel Drive (AWD) system, developed in Landskrona. During testing in northern Sweden, there is a desire to film with an aerial platform.

The technology within these platforms has primarily been associated with expensive military uses, but with a reduction in price it has become more available to the public and commercial businesses.

The drawback with this type of vehicle is primarily the human factor, because control relies on eye contact and manual input. This method results in difficulties under deteriorating conditions such as distance, fog and rain, which, in combination with a moving car as the objective to follow, constitutes the main challenge. By introducing a control system which autonomously carries out this operation with faster and more precise control, the results have the potential to improve considerably. This is done with the use of different sensors, such as Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS), which is investigated in this master’s thesis.

**1.1 Purpose**

The objective of this thesis project is to construct and evaluate the performance of an autonomous quadcopter for filming a car in motion around a test track at various speeds.

**1.2 Problem Formulation**

Within the frame of the master’s thesis, an autonomous control system will be investigated for its capability in "*following and filming a test vehicle without human input*". This shall be done with simulations and gathered data from a constructed prototype.

**1.3 Limitations**

• The thesis will not focus on the closed-loop stability of the quadcopter.

• The use of image processing for video feedback will not be considered.

• The flight controller and quadcopter hardware will be bought, not developed, in this project.

**1.4 Literature Study**

In the master’s thesis project a literature study was carried out to investigate the topic. Based on the content, it can be divided into different stages. The first stage consists of modelling and simulation of the quadcopter. For this purpose mainly literature from [18], [5] and [20] is used, covering methods for modelling the physical attributes of a quadcopter. Their work is primarily used for setting up the model later used for simulations in Matlab and Simulink. Further information regarding the modelling of motors, such as [16], was used for fine-tuning the model. The second stage of the thesis project consists of controlling the quadcopter. The focus is mainly on sensor fusion and trajectory generation for different scenarios, at both high and low speeds. For this purpose there are several interesting publications that have contributed to the area, such as [22], [14], [13]. These primarily investigate the use of sensor fusion for various vehicles, mainly with focus on GNSS/INS. Combining these is most commonly done with a Kalman filtering technique or a complementary filter, enabling an enhanced position estimate. Further studies were carried out on various simulation methods for the quadcopter. One can use pure Matlab calculations as in [10], or the most common method, which is the use of Simulink [7].

**1.5 Software**

To achieve the objective, a wide range of programs was utilized. A short explanation of the programs used follows.

**1.5.1 Matlab and Simulink**

Matlab is a computing environment commonly used for numerical computations and simulations, and is widely used in industry. It is very commonly paired with Simulink, a graphical block-based simulation environment built on top of Matlab.

**1.5.2 Visual Studio**

Visual Studio (VS) is an IDE by Microsoft for the use of building applications for Windows. VS was used because of its wide range of features and compatibility with third party applications mainly used in vehicle software.

**1.5.3 CanAlyzer**

CanAlyzer is a program specialized in data management for the CAN bus found in most vehicles. Together with a CAN-Case XL, it can be connected to several different CAN networks at a time and be configured to work with third-party programs developed in VS.

**2 Preliminaries**

This chapter summarizes the theory used in the thesis. It starts off with an explanation of what a quadcopter is and how it works by introducing its mechanics. It then explores how to implement a model for simulations of a quadcopter.

**2.1 Modelling**

A multirotor is an ultralight aircraft: a simple and effective platform for multiple purposes, mainly because it is affordable and easy to customize. Depending on its size a multirotor has different capabilities, but the principal layout consists of actuators in a, most commonly symmetrical, configuration around a center hub. These actuators are primarily Brushless Direct Current (BLDC) motors, which in recent years have become more efficient and deliver great performance in a small, lightweight package. An example of such a platform is shown in Figure 2.1: a quadcopter in a cross configuration, named for its defined forward orientation.

The on-board power source is most commonly a Lithium Polymer (LiPO) battery, powering the motors through Electronic Speed Controllers (ESC).

The Flight Controller (FC) is the brain of the multirotor and consists of a Micro Control Unit (MCU) and sensors. The MCU computes the control signals to the actuators, with the purpose of stability on an otherwise inherently unstable platform.

The investigated multirotor is a quadcopter, using four motors to control the platform. Its size is very suitable for small camera equipment such as the GoPro Hero 4 used in this thesis project.

Developing a simulation environment for a quadcopter begins with the acknowledgement of its six Degrees of Freedom (DoF), meaning it can translate in u, v, w and rotate around its axes roll, pitch and yaw, further denoted ϕ, θ and ψ respectively.

Figure 2.1: Illustration of the quadcopter coordinate system and rotational notations.

A quadcopter has its motors mounted a distance d_m from the center hub, with all the motors producing lift along the z-axis. A steady state is achieved by having two of the motors counter-rotating in a symmetrical pattern, as can be seen in Figure 2.3.

Figure 2.2: Thrust is defined as T_1–T_4, acting at a distance d_m from the origin.

Figure 2.3: The rotational direction of each propeller around its axis.

**2.1.1 Actuators**

Controlling the quadcopter’s motion is done by varying the motors’ angular velocities. An example of this is pitch, which requires the torque around the v-axis to be increased. This is done by increasing the angular velocity of motors (3, 4), or decreasing that of motors (1, 2), relative to the opposite pair, as shown in Figure 2.4. An increase in angular velocity of the motors produces more lift; hence, the quadcopter will tilt forward or backwards depending on which motor pair is actuated.

Roll is controlled with the same method as pitch: torque is induced by increasing or decreasing the relative angular velocity of motor pairs (1, 4) and (2, 3).

Figure 2.4: Increasing the thrust on motors (3, 4) induces a forward pitching torque.

Figure 2.5: Inducing a rolling torque by increasing the thrust on motors (1, 4).

Figure 2.6: A positive change in yaw is achieved by increasing the angular velocity of motors (2, 4).

Initializing a change in yaw is possible because two of the motors are counter-rotating. Increasing or decreasing the relative angular velocity of either motor pair (2, 4) or (1, 3) creates torque around the z-axis, rotating the airframe, as shown in Figure 2.6.

**2.1.2 Coordinate Systems**

Within the mechanical model there are two coordinate systems describing the motions of the quadcopter, as can be seen in Figure 2.7: the body frame and the navigation frame, further called the B-frame and N-frame [21]. Depending on the desired control objective, either the B-frame or the N-frame is used in the controller design.

Figure 2.7: The B-frame in reference to the N-frame.

The distance travelled by the quadcopter can be considered relatively short; hence a locally based navigation frame is used, where the N-frame’s origin is the quadcopter’s starting position.

These coordinate systems define different quantities, such as the quadcopter’s translation in the N-frame (x, y, z) and the Euler angles (ϕ, θ, ψ). The B-frame is used when calculating the linear velocities (u, v, w) and the angular velocities (p, q, r), shown in Figure 2.7.

When developing a model for simulations one can preferably use the most common method, Euler-Lagrange formalism [20]. This method assumes that the following physical statements are fulfilled or that any deviation is negligible:

• Rigid structure.

• Symmetrical structure.

• The Center of Gravity (CoG) and the B-frame origin coincide.

Simulating the quadcopter’s movement relative to the car requires a good position estimate in the N-frame. To achieve this, the altitude and attitude of the quadcopter must first be calculated. The simulation model does this by computing the change in attitude and altitude from the angular velocities of the motors. With knowledge of these motions, one can then estimate the translation in the N-frame.

**2.1.3 Motors**

The actuators are commonly known as Brushless Direct Current (BLDC) motors, which convert electrical energy to mechanical energy through an electromagnetic circuit. In the center, a rotor with magnets spins around the stator, where Direct Current (DC) flows through the coils, inducing a magnetic force that further accelerates the rotor [16]. This is modelled by the commonly used differential equations for a BLDC motor,

    L di/dt = v − R_m i − k_e ω_m
    I_m dω_m/dt = τ_m − τ_d                                        (2.1)

according to [16]. The inputs to the motor are the voltage v and current i, resulting in a change of rotation ω_m. Influencing the increase or decrease of ω_m is the torque τ_m, which is the applied torque on the motor, and τ_d, which is the load caused by the motor spinning in the air. Further constants are the specific internal resistance R_m, the motor constant k_e and the inertia I_m. For a small optimized motor the inductance is considered negligible, L ≈ 0, resulting in

    I_m dω_m/dt = −(k_m²/R_m) ω_m − τ_d + (k_m/R_m) v.             (2.2)

This is the non-linear model of a BLDC motor commonly used for modelling the motor angular velocity ω_m. In this model the motor constant k_e has been substituted with a lumped constant for the motor torque, resulting in k_m. The second motor constant k_τ = k_m, because τ_m = k_τ i, which is used in the transformation to (2.2).
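The lumped model (2.2) is straightforward to simulate numerically. The sketch below integrates it with forward Euler in Python (the thesis itself uses Matlab/Simulink); all numerical constants are illustrative assumptions, not identified parameters from the thesis.

```python
# Forward-Euler integration of the lumped BLDC motor model (2.2):
#   Im * dwm/dt = -(km^2/Rm) * wm - tau_d + (km/Rm) * v
# All numerical values below are illustrative assumptions.
Im = 3.4e-5      # rotor inertia [kg m^2] (assumed)
Rm = 0.10        # internal resistance [ohm] (assumed)
km = 0.01        # lumped motor constant (assumed)
k_load = 1e-9    # simple aerodynamic load model: tau_d = k_load * wm^2 (assumed)

def motor_step(wm, v, dt):
    """Advance the motor angular velocity wm one Euler step under voltage v."""
    tau_d = k_load * wm**2
    dwm_dt = (-(km**2 / Rm) * wm - tau_d + (km / Rm) * v) / Im
    return wm + dt * dwm_dt

wm, dt = 0.0, 1e-4
for _ in range(20000):               # 2 s of simulated time
    wm = motor_step(wm, 11.1, dt)    # constant 11.1 V input (3S LiPO nominal)
```

With these assumed constants the speed settles where the applied and load torques balance, illustrating the first-order behaviour later exploited in (3.1).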

**2.1.4 Thrust**

The thrust from the motors is the main force controlling the quadcopter, and can be derived using the equations from [16]. It is concluded that the thrust for a single motor is equal to

    T = C_t ρ A r² ω_m²,                                           (2.3)

where C_t is the thrust coefficient and ρ is the density of the ambient air. The propeller geometry is represented by the area A and radius r of the rotor. These constants can, however, be lumped together for system identification of the constant C_T, as

    T = C_T ω_m².                                                  (2.4)

**2.1.5 Torque**

The motors mainly produce thrust along the z-axis. When deviating from the steady state of uniform thrust, the motors will induce a torque τ on the quadcopter, as can be seen in Figure 2.8. This torque can be calculated from

    τ = C_q ρ A ω_m²                                               (2.5)

[10, 5], where C_q is the motor-specific torque constant, ρ is the density of the ambient air and A is the cross-section area of the propeller. This results in an equation with the motor angular velocity ω_m as input, which can be further simplified, with the constants lumped together and estimated from system identification, as

    τ = C_Q ω_m².                                                  (2.6)

Given the airframe seen in Figure 2.8 the motors will induce torque on the body, which is used for controlling the quadcopter’s roll, pitch and yaw.

Figure 2.8: Illustration of the torque acting on the quadcopter.

The magnitude of the torque applied to the airframe is based on the relative difference in thrust and torque from the motors, which is linked to the constants C_T and C_Q. C_T is explained in (2.4) and C_Q is defined in (2.6). Taking these parameters and the quadcopter’s cross-configuration into consideration, the motors’ angular velocities apply torque in the roll, pitch and yaw (ϕ, θ, ψ) axes of the airframe with a lever distance d_m from the body origin, as

    [τ_ϕ]   [ d_m C_T   −d_m C_T   −d_m C_T    d_m C_T ] [ω²_{m,1}]
    [τ_θ] = [ d_m C_T    d_m C_T   −d_m C_T   −d_m C_T ] [ω²_{m,2}]    (2.7)
    [τ_ψ]   [ −C_Q       C_Q       −C_Q        C_Q     ] [ω²_{m,3}]
                                                         [ω²_{m,4}]

where ω_{m,n} is the angular velocity of motor n ∈ [1, 4].
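The mixing in (2.7) can be checked numerically. The Python sketch below (the thesis uses Matlab/Simulink) evaluates the matrix for assumed values of d_m, C_T and C_Q; note how raising motors 3 and 4 produces a pure pitching torque, as described in Section 2.1.1.

```python
# Motor mixing from (2.7): squared motor speeds -> body torques
# (tau_phi, tau_theta, tau_psi). dm, CT and CQ are assumed values.
dm, CT, CQ = 0.15, 3.0e-5, 1.1e-6

M = [
    [ dm*CT, -dm*CT, -dm*CT,  dm*CT],   # roll row
    [ dm*CT,  dm*CT, -dm*CT, -dm*CT],   # pitch row
    [-CQ,     CQ,    -CQ,     CQ   ],   # yaw row
]

def mix(w):
    """Torques for motor angular velocities w = [w1, w2, w3, w4] in rad/s."""
    w_sq = [wi**2 for wi in w]
    return [sum(m*x for m, x in zip(row, w_sq)) for row in M]

# Raising motors 3 and 4 relative to 1 and 2 should pitch, not roll or yaw.
tau_phi, tau_theta, tau_psi = mix([400.0, 400.0, 420.0, 420.0])
```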

**2.1.6 Gyroscopic Effects**

The second torque effect acting on the body is the gyroscopic effect [20], due to the rotating propellers and motors. This torque counteracts a change of motion in the rolling and pitching directions and is proportional to the angular velocity of the rotating mass of the propeller and motor:

    [τ_{ϕ,gyro}]         [  q   −q    q   −q ] [ω_{m,1}]
    [τ_{θ,gyro}] = I_m   [ −p    p   −p    p ] [ω_{m,2}]              (2.8)
    [τ_{ψ,gyro}]         [  0    0    0    0 ] [ω_{m,3}]
                                               [ω_{m,4}]

**2.1.7 Aerodynamic Forces**

There are several aerodynamic forces acting on the body [4]. These are strongly dependent on the layout of the quadcopter and its geometric design, and can be hard to quantify in the general case. The most important factor to take into consideration is the aerodynamic drag, which acts on all objects within the atmosphere and can be defined as

    F_d = (1/2) ρ u² C_d A,                                        (2.9)

where the drag force F_d mainly depends on the object’s velocity u and is proportional to the cross-section area A of the quadcopter and its drag coefficient C_d. The drag coefficient is a measure of the aerodynamic efficiency of the object moving in the ambient air, with density ρ. Most of these quantities are physical parameters, hence one commonly lumps them together and writes the drag as

    F_d = k_d u²,                                                  (2.10)

where k_d is the lumped drag coefficient.

**2.1.8 Inertia**

The quadcopter is considered to be symmetrical around all axes, resulting in a diagonal inertia matrix [19]

    I = [ I_xx   0     0   ]
        [ 0      I_yy  0   ]                                       (2.11)
        [ 0      0     I_zz].

**2.2 Simulations**

Setting up a simulation model requires more than the physical properties of the quadcopter. Estimating its movement requires a set of equations describing how the quadcopter moves in space, described in this section.

**2.2.1 Equations of Motion**

Simulating the quadcopter requires a six-DoF problem to be solved. The base for the simulations is the rigid body dynamics of the quadcopter [19]. For a rigid body, the rotational acceleration α is driven by the applied torque τ and the body’s momentum term ω × Iω, where ω is the angular velocity of the quadcopter and I its inertia:

    τ = I α + ω × Iω.                                              (2.12)

This can further be expanded to the three axes used for the quadcopter, resulting in

    [τ_u]     [ϕ̈]   [ϕ̇]       [ϕ̇]
    [τ_v] = I [θ̈] + [θ̇] × I  [θ̇]                                  (2.13)
    [τ_w]     [ψ̈]   [ψ̇]       [ψ̇]

where τ is the total torque applied around each axis, resulting in a change in rotational velocity. Expanding (2.13), one can extract ϕ̈, θ̈ and ψ̈ and add all the external torque components from (2.8) and (2.7):

    I_xx ϕ̈ = θ̇ ψ̇ (I_yy − I_zz) + τ_ϕ + τ_{ϕ,gyro}
    I_yy θ̈ = ϕ̇ ψ̇ (I_zz − I_xx) + τ_θ + τ_{θ,gyro}                  (2.14)
    I_zz ψ̈ = ϕ̇ θ̇ (I_xx − I_yy) + τ_ψ.
This set of equations estimates the rotation in reference to the N-frame, when
torque is applied on the body. There is however no information regarding the
translation of the body. This needs a second set of equations to be solved.

Newton’s second law, ma = F, can be used to determine the translational movement of the body in reference to the N-frame [19].

Studying Newton’s second law one can describe the linear dynamics of the body. The rotational matrix R_NB (2.19) is used to convert the gravity from the N-frame to the B-frame. The forces taken into consideration are the gravity acting in the z-direction, [0, 0, −g]′, and the thrust [0, 0, Σ_{i=1}^4 T_i]′ in the B-frame counteracting this force. Further input is the aerodynamic drag [k_d u², k_d v², k_d w²]′ from (2.10). The centripetal force ω_B × m υ_B, which depends on the body’s linear and angular velocities υ_B = [u, v, w]′ and ω_B = [p, q, r]′ and its mass m, is also acting on the body, resulting in

      [u̇]          [  0 ]   [       0       ]   [p]     [u]   [k_d u²]
    m [v̇] = R_NB  [  0 ] + [       0       ] − [q] × m [v] − [k_d v²]    (2.15)
      [ẇ]          [−mg ]   [ Σ_{i=1}^4 T_i ]   [r]     [w]   [k_d w²]
Newton’s second law (2.15) can be rewritten in the N-frame by neglecting the centrifugal force acting on the airframe and converting with the rotational matrix. This results in (2.16), where the acceleration depends on the gravity, the thrust from the motors and the drag:

    [ẍ]   [ 0 ]                       [C_ψ S_θ C_ϕ + S_ψ S_ϕ]          [k_d ẋ²]
    [ÿ] = [ 0 ] + (1/m) Σ_{i=1}^4 T_i [S_ψ S_θ C_ϕ − C_ψ S_ϕ] − (1/m) [k_d ẏ²]    (2.16)
    [z̈]   [−g ]                       [C_θ C_ϕ              ]          [k_d ż²]

where C_x = cos(x) and S_x = sin(x).
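As a sanity check of (2.16), the Python sketch below evaluates the N-frame acceleration for a level hover; the mass and drag coefficient values are assumed purely for illustration.

```python
from math import cos, sin

# N-frame translational acceleration from (2.16).
# m (mass) and kd (lumped drag coefficient) are assumed values.
m, g, kd = 1.2, 9.81, 0.02

def accel(vel, euler, thrust_total):
    """Return (x.., y.., z..) from (2.16) for N-frame velocity vel = (x., y., z.)."""
    phi, theta, psi = euler
    xd, yd, zd = vel
    Cp, Sp = cos(phi), sin(phi)
    Ct, St = cos(theta), sin(theta)
    Cy, Sy = cos(psi), sin(psi)
    # Third column of R_NB: direction of the body z-axis (thrust) in the N-frame.
    ez = (Cy*St*Cp + Sy*Sp, Sy*St*Cp - Cy*Sp, Ct*Cp)
    return (thrust_total/m*ez[0] - kd*xd**2/m,
            thrust_total/m*ez[1] - kd*yd**2/m,
            -g + thrust_total/m*ez[2] - kd*zd**2/m)

# Level attitude with total thrust m*g should give zero acceleration (hover).
ax, ay, az = accel((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), m*g)
```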

The acceleration of the body is calculated in the N-frame, and one can convert the acceleration using the rotational matrix, as

    [ẍ]          [u̇]
    [ÿ] = R_BN  [v̇]                                               (2.17)
    [z̈]          [ẇ]

The last resulting translational equation is the velocity of the airframe relative to the N-frame, which is

    [u]          [ẋ]
    [v] = R_NB  [ẏ]                                               (2.18)
    [w]          [ż]

With this set of equations the six states of the body can be converted to transla-tional motion in the N-frame for use in simulations.

**2.2.2 Rotational Conversion**

Simulating the quadcopter requires conversion from the N-frame to the B-frame and vice versa. This is done by the rotational matrix

    R_NB = [ C_ψ C_θ    C_ψ S_θ S_ϕ − S_ψ C_ϕ    C_ψ S_θ C_ϕ + S_ψ S_ϕ ]
           [ S_ψ C_θ    S_ψ S_θ S_ϕ + C_ψ C_ϕ    S_ψ S_θ C_ϕ − C_ψ S_ϕ ]    (2.19)
           [ −S_θ       C_θ S_ϕ                  C_θ C_ϕ               ]

according to [21], where ϕ, θ and ψ are the attitude angles roll, pitch and yaw, and C_x = cos(x), S_x = sin(x). The method uses the rotational matrix to convert vectors from one coordinate system to the other; an example of its use is

    [ẋ]          [u]
    [ẏ] = R_BN  [v]                                               (2.20)
    [ż]          [w]

The rotational matrix can also be used in the inverse direction, from the N-frame to the B-frame. This is done by inverting the matrix; due to the matrix’s orthogonality this is easily done as R_NB = R_BN⁻¹ = R_BNᵀ.

Further parameters requiring transformation are the angular velocities ϕ̇, θ̇ and ψ̇. Transforming these from the B-frame to the N-frame is done by

    [ϕ̇]          [p]
    [θ̇] = H_BN  [q]                                               (2.21)
    [ψ̇]          [r]

where p, q, r are the angular velocities in the B-frame. The conversion uses the Euler kinematic equation matrix H_BN [9], which gives the rate of change of the Euler angles. This method is far more accurate for non-infinitesimal changes: simply put, changes in roll and pitch influence the final yaw axis. Hence H_BN is derived by studying each axis separately and determining its effect on the rotational axes, resulting in

    H_BN = [ 1    S_ϕ T_θ    C_ϕ T_θ ]
           [ 0    C_ϕ        −S_ϕ    ]                             (2.22)
           [ 0    S_ϕ/C_θ    C_ϕ/C_θ ]

where C_x = cos(x), S_x = sin(x) and T_x = tan(x).
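Both transformation matrices are simple to implement directly from (2.19) and (2.22). The Python sketch below writes them out and verifies the orthogonality property R_NB R_NBᵀ = I used above.

```python
from math import cos, sin, tan

def R_NB(phi, theta, psi):
    """Rotation matrix from (2.19)."""
    Cp, Sp = cos(phi), sin(phi)
    Ct, St = cos(theta), sin(theta)
    Cy, Sy = cos(psi), sin(psi)
    return [
        [Cy*Ct, Cy*St*Sp - Sy*Cp, Cy*St*Cp + Sy*Sp],
        [Sy*Ct, Sy*St*Sp + Cy*Cp, Sy*St*Cp - Cy*Sp],
        [-St,   Ct*Sp,            Ct*Cp           ],
    ]

def H_BN(phi, theta):
    """Euler kinematic matrix from (2.22); note that yaw does not appear."""
    Cp, Sp = cos(phi), sin(phi)
    Ct, Tt = cos(theta), tan(theta)
    return [
        [1.0, Sp*Tt,  Cp*Tt],
        [0.0, Cp,    -Sp   ],
        [0.0, Sp/Ct,  Cp/Ct],
    ]

# Orthogonality check: R * R^T should be the identity matrix.
R = R_NB(0.3, -0.2, 1.0)
RRt = [[sum(R[i][k]*R[j][k] for k in range(3)) for j in range(3)]
       for i in range(3)]
```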

**2.2.3 Controllers**

The quadcopter is inherently unstable in open loop. Control of such a system is enabled by manipulating the motor outputs through a controller, which is the basic idea behind the control system in a quadcopter. The system makes tiny adjustments to the motors at a significant rate, stabilizing the platform. If the pilot requests a change in attitude, the control system calculates the correct control signals and increases/decreases the angular velocities of the motors accordingly. There are several ways of doing this, but the most common is the use of three controllers, in the most basic case one for each axis: roll, pitch and yaw [18], [1].

**PID Controllers**

The most common controller is the PID controller, which is a simple yet effective way of controlling systems [11]. It consists of three parts: the proportional (P), integral (I) and derivative (D) components, each playing a different role in improving the control of the system:

    u(t) = K_P e(t) + K_I ∫₀ᵗ e(τ) dτ + K_D de(t)/dt.              (2.23)

The most basic controller is the P controller, which only corrects the error by a proportional constant, with its inherent drawbacks: it tends to leave a static error between the desired reference and the output. With the use of an integral part this is significantly reduced, but it often increases the oscillations in the system. This is, in turn, reduced with the introduction of a derivative part in the controller, enabling faster change in the control signal.
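A discrete-time version of (2.23) takes only a few lines. The Python sketch below is a minimal implementation (no anti-windup or derivative filtering) and closes the loop around an assumed first-order plant to illustrate the point made above: a PI controller removes the static error a pure P controller would leave.

```python
class PID:
    """Minimal discrete PID implementing (2.23) with sample time Ts."""
    def __init__(self, KP, KI, KD, Ts):
        self.KP, self.KI, self.KD, self.Ts = KP, KI, KD, Ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.Ts
        derivative = (error - self.prev_error) / self.Ts
        self.prev_error = error
        return self.KP*error + self.KI*self.integral + self.KD*derivative

# Close the loop around an assumed first-order plant y' = -y + u.
pid = PID(KP=2.0, KI=1.0, KD=0.0, Ts=0.01)
y = 0.0
for _ in range(5000):            # 50 s of simulated time
    u = pid.update(1.0 - y)      # unit step reference
    y += 0.01 * (-y + u)         # Euler step of the plant
```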

Tuning controllers can be done by different methods. One example is the Ziegler-Nichols method [8], based on bringing the system into oscillation. This is done by adding a P controller and incrementally increasing the gain until the system becomes unstable. At that point the system has reached its critical gain K_u, with oscillation period T_u, from which the parameters K_P, K_I, K_D of the controller (2.23) can be designated. Often, however, K_I and K_D are instead written as T_I = K_P/K_I and T_D = K_D/K_P, as shown in Table 2.1. The Ziegler-Nichols method cannot be considered an optimal solution, but is a method to quickly obtain a functional controller.

Table 2.1: Tuning parameters according to the Ziegler-Nichols method.

    Controller   K_P       T_I       T_D
    P            0.5 K_u
    PI           0.4 K_u   0.8 T_u
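Applying Table 2.1 is mechanical once K_u and T_u have been measured. A small Python helper for the PI row, using the relations T_I = K_P/K_I from above (the example values of K_u and T_u are purely illustrative):

```python
def ziegler_nichols_pi(Ku, Tu):
    """PI parameters from Table 2.1: KP = 0.4*Ku, TI = 0.8*Tu."""
    KP = 0.4 * Ku
    TI = 0.8 * Tu
    KI = KP / TI          # from TI = KP / KI
    return KP, KI

# Example with an assumed critical gain and period.
KP, KI = ziegler_nichols_pi(Ku=5.0, Tu=2.0)
```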

**Cascade Control**

A common method for controlling quadcopters is the cascade control principle [1], [3], as shown in Figure 2.9.

Figure 2.9: The general principle of cascade control.

It is based on different loops of control, where the inner loop is significantly faster than the outer [8], both with regard to the dynamics of the system and its update frequency.

This is most easily explained with the use of transfer functions [11]. The inner loop of the system can be written as

    Y₂(s) = F₂(s) G₂(s) / (1 + F₂(s) G₂(s)) · R₂(s),               (2.24)

where Y₂(s) is the output of the inner loop, F₂(s) is the controller, G₂(s) is the inner system and R₂(s) is the reference signal. The inner closed loop is further written as

    G_inner(s) = F₂(s) G₂(s) / (1 + F₂(s) G₂(s)).                  (2.25)

The overall system can then be written by the same method as

    G_outer(s) = F₁(s) G_inner(s) G₁(s) / (1 + F₁(s) G_inner(s) G₁(s)),    (2.26)

where one can see that the inner control loop is a major part of the outer. If, however, it is possible to tune the inner controller F₂(s) such that G_inner(s) ≈ 1, the outer system becomes

    G_outer(s) = F₁(s) G₁(s) / (1 + F₁(s) G₁(s)).                  (2.27)

Tuning these kinds of controllers can be done by conventional methods, but one always starts by tuning the inner loop to fulfill G_inner(s) ≈ 1.
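The collapse of (2.26) into (2.27) can be illustrated numerically by evaluating the transfer functions at a frequency well below the inner-loop bandwidth. The plants and gains in this Python sketch are assumed for illustration only.

```python
# Frequency-domain check of (2.24)-(2.27) using complex arithmetic.
# All plants and gains below are illustrative assumptions.
def G2(s): return 10.0 / (s + 10.0)   # fast inner plant
def F2(s): return 100.0               # high-gain inner P controller
def G1(s): return 1.0 / (s + 1.0)     # slow outer plant
def F1(s): return 2.0                 # outer P controller

def G_inner(s):
    """Closed inner loop, equation (2.25)."""
    L = F2(s) * G2(s)
    return L / (1 + L)

s = 0.5j   # a frequency well below the inner-loop bandwidth
L_full = F1(s) * G_inner(s) * G1(s)
full = L_full / (1 + L_full)              # exact outer loop, (2.26)
L_approx = F1(s) * G1(s)
approx = L_approx / (1 + L_approx)        # approximation (2.27)
```

With the high inner-loop gain, G_inner(0.5j) is within about one percent of unity, so (2.26) and (2.27) agree closely at this frequency.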

**2.2.4 Global Navigation Satellite Systems**

In the majority of autonomous projects conducted, Global Navigation Satellite Systems (GNSS) are used for position measurements.

It is a satellite-based system, which measures the time it takes for a signal to travel from several satellites to the receiver. By triangulating these signals one can obtain a fairly accurate position estimate of the receiver [6] in good conditions.

Due to the earth being approximately spherical, these values are returned in the spherical coordinate system WGS84, in which latitude and longitude are its angles. Longitude is referenced to the zero meridian through Greenwich, and latitude is referenced to the equator. When calculating the difference between two GNSS points, the earth’s curvature needs to be taken into consideration, as can be seen in (2.28)-(2.33). This is done using the haversine formula [2], calculating the distance between two points on a sphere, as can be seen in Figure 2.10.

Figure 2.10: Illustration of the haversine distance between two spherical coordinates.

    ϕ₁ = X_Lat,1                                                   (2.28)
    ϕ₂ = X_Lat,2                                                   (2.29)
    Δϕ = X_Lat,2 − X_Lat,1                                         (2.30)
    Δλ = Y_Lon,2 − Y_Lon,1                                         (2.31)

The positions in latitude and longitude are shortened as X and Y. By calculating the difference between the points, Δϕ and Δλ are obtained. These are further used in the haversine formula (2.32)-(2.33), which uses basic trigonometric functions to derive the distance between the two points:

    a = sin²(Δϕ/2) + cos(ϕ₁) cos(ϕ₂) sin²(Δλ/2)                    (2.32)
    d = 2R · atan2(√a, √(1 − a))                                   (2.33)

The distance d between the two points is calculated for a given earth radius R of about 6371 km. atan2 is the trigonometric arctan function with compensation for the signs of its inputs.
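The computation (2.28)-(2.33) translates directly into code. A Python sketch (the ground station in the thesis was written in C#; this version is only illustrative):

```python
from math import radians, sin, cos, atan2, sqrt

R = 6371e3   # mean earth radius in metres

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points in degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)          # delta phi, (2.30)
    dlmb = radians(lon2 - lon1)          # delta lambda, (2.31)
    a = sin(dphi/2)**2 + cos(phi1)*cos(phi2)*sin(dlmb/2)**2   # (2.32)
    return 2 * R * atan2(sqrt(a), sqrt(1 - a))                # (2.33)

# One degree of latitude is roughly 111 km anywhere on the sphere.
d = haversine(58.0, 15.0, 59.0, 15.0)
```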

**Limitations**

The system has, however, a few limitations. The most noticeable is the lack of satellite signal indoors, which makes it an outdoor-only system, with rapidly decreasing reception or major distortion when tall objects interfere with the signal.

Figure 2.11: Interference of tall objects with the signal, causing faulty position measurements.

If the system is within range of tall objects, something called multipathing [6] can occur: the signal reflects off nearby tall objects and thereby travels a greater distance between the satellite and the receiver, as can be seen in Figure 2.11, resulting in a faulty position fix. This weakness makes satellite navigation unsuitable for accurate city navigation, where multipathing between buildings can result in severe deviations.

**2.2.5 Complementary Filter**

A complementary filter is a lightweight, non-model-based sensor fusion filter designed for easy use of different sensors, such as low-rate GNSS and high-rate sensors, e.g. gyroscopes. The theory behind the filter is based on the available sensors; for an application with two sensors the formulation is [17], [12]

    y₁(t) = s(t) + n₁(t)                                           (2.34)
    y₂(t) = s(t) + n₂(t),                                          (2.35)

where yᵢ is the sensor output, s(t) is the signal of interest and nᵢ(t) is the measurement noise.

Figure 2.12: The principle layout of the complementary filter, with two sensor inputs y₁, y₂, two filters G₁, G₂ and one output ŝ.

With a problem formulation as in Figure 2.12, one can write the problem as

\hat{S}(s) = G_1(s)Y_1(s) + G_2(s)Y_2(s). \quad (2.36)

The task is then to find the two filters G_1 and G_2 that extract \hat{S}(s), the estimated filter output, i.e. the position of a car. This signal can however be distorted, since the two filters G_1, G_2 can amplify or weaken the signal. This can be corrected by introducing

*G*1(*s) + G*2(*s) = 1,* (2.37)

ensuring the total gain of the filter to be constant with the new setup. The new filter can be seen in Figure 2.13.

Figure 2.13: The improved version, with the two filters connected as G_2 and 1 - G_2.
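The structure of Figure 2.13 can be sketched in discrete time. The snippet below is an illustrative Python version (not code from the thesis, which was implemented in Matlab/Simulink): a first-order low-pass plays the role of G_2 on the slow, absolute sensor, and its complement 1 - G_2 acts on the fast, drifting sensor, so the total gain is one as required by (2.37).

```python
def complementary_filter(y1, y2, alpha):
    """Blend a fast sensor y1 with a slow absolute sensor y2.

    G2 is a first-order low-pass with coefficient alpha (0 < alpha < 1);
    G1 = 1 - G2 is the matching high-pass, so G1 + G2 = 1 as in (2.37).
    """
    s_hat = []
    lp2 = y2[0]   # low-pass state for y2  (G2 * y2)
    lp1 = y1[0]   # low-pass state for y1  (used to form (1 - G2) * y1)
    for a, b in zip(y1, y2):
        lp2 = alpha * lp2 + (1 - alpha) * b
        lp1 = alpha * lp1 + (1 - alpha) * a
        hp1 = a - lp1                      # high-pass portion of y1
        s_hat.append(lp2 + hp1)
    return s_hat
```

A larger alpha trusts the fast sensor over longer horizons before the slow sensor pulls the estimate back.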

**3**

**Modelling**

In the following chapter, the development of a simulation environment for a quadcopter is explained. This was done by creating a dynamic model of the quadcopter from the physical models in Chapter 2, with the use of Matlab and Simulink. This model was thereafter used in the development of different control algorithms and state estimations of the car.

The first stage of this thesis project was to introduce a model for simulations, in which testing and evaluation of algorithms could be performed on the quadcopter in its desired application. The final task would be to validate the algorithms on a prototype, but since the quadcopter was not yet specified, the only option was to estimate and find the required physical constants from others' work.

**3.1**

**Thrust and Torque**

The specific dynamics of the motor could not be derived from a mathematical approach, because of the system identification data required, as explained in Sections 2.1.4-2.1.5. One can however consider the dynamics of the motor to be generic for similarly sized BLDC motors and propellers with the same K_v, the specific rotation rate per volt. Hence a dynamic model of a motor was obtained from [20], where the model is a first-order system described in (3.1) and Figure 3.1.

G(s) = \frac{0.9}{0.178s + 1} \quad (3.1)
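The step response of (3.1) can be reproduced numerically. The snippet below is an illustrative Python sketch (the thesis used Matlab/Simulink) that forward-Euler integrates the first-order model under a unit step:

```python
def motor_step(t_end=1.6, dt=0.001, K=0.9, tau=0.178):
    """Forward-Euler simulation of the first-order motor model (3.1),
    G(s) = K / (tau*s + 1), driven by a unit step input."""
    y, out = 0.0, []
    n = round(t_end / dt)
    for _ in range(n):
        # dy/dt = (K*u - y) / tau  with u = 1
        y += dt * (K * 1.0 - y) / tau
        out.append(y)
    return out
```

After one time constant (t = 0.178 s) the output has reached about 63 % of the final value 0.9, matching the shape of Figure 3.1.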


Figure 3.1: The dynamics of the motor in a step response.

The model and the dynamics of the motor were identified from the previous study [20]. Further required data were the motor constants C_T and C_Q. These parameters should also be obtained through system identification, but this was not possible. They were instead calculated by introducing

\omega_{m,max} = \frac{2\pi}{60} K_v v_{Battery}, \quad (3.2)

estimating the maximum angular velocity by using the constant K_v, the specific RPM per volt of the motor. With the known battery voltage v_{Battery}, one can estimate the maximum angular velocity of the motor, \omega_{m,max}.

Since the quadcopter was not specified, the system had to be estimated for its desired application. When dimensioning a quadcopter's motors, a rule of thumb is a thrust-to-weight ratio of about 2, since the quadcopter needs enough margin both for lift and for maintaining stabilization. From an approximated mass of the system, a desired steady state at \omega_{m,50\%}, i.e. 50 % throttle, could be designed. A steady state is achieved when mg = 4T, where m is the total mass of the system, g is the gravity acting on the body and T is the thrust from each motor. Since T is a function of the unknown motor constant C_T and the angular velocity \omega_m from (2.4), one can extract C_T from the mass and the angular velocity of the motors at half throttle, resulting in

C_T = \frac{mg}{4\omega_{50\%}^2}. \quad (3.3)

By solving this equation, a C_T was calculated for the specific motor and system. Estimating C_Q for the motor from (2.6) was even further complicated without measurements; this variable was instead estimated with the use of existing tests according to [20]. The determined motor constants are given in Table 3.1.
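The chain (3.2)-(3.3) can be sketched numerically. The function below is illustrative Python (not thesis code), and the K_v and battery voltage in the usage example are assumed values for a 3S setup, not measured thesis data; the resulting C_T depends entirely on those assumptions.

```python
import math

def estimate_ct(mass, kv_rpm_per_volt, v_battery, g=9.81):
    """Estimate the thrust constant C_T from the hover condition
    m*g = 4*T at 50 % throttle, following (3.2)-(3.3)."""
    # (3.2): maximum angular velocity from Kv and battery voltage
    w_max = 2 * math.pi / 60 * kv_rpm_per_volt * v_battery
    w_50 = 0.5 * w_max                     # hover designed at half throttle
    # (3.3): thrust constant from the steady-state force balance
    return mass * g / (4 * w_50 ** 2)
```

For example, estimate_ct(1.2, 750, 11.1) uses the thesis mass with an assumed 750 KV motor on an 11.1 V battery; the thesis's own Table 3.1 value corresponds to its own estimated hover speed.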

Table 3.1: Motor constants.

Constant  Value
C_T       1.5e-6
C_Q       2.9e-8

Figure 3.2: Step response of a motor and its effective lift force.

Table 3.2: The thrust T and total lift generated from the quadcopter.

Quantity  Value
ω_max     800 rad/s
T_max     40 N

**3.2**

**Aerodynamic Forces**

The aerodynamic forces acting on the body are based on the square of the velocity (2.9). For the simulations it was required that the quadcopter be in a steady state at maximum forward propulsion, resulting in a terminal forward velocity. Since this could not be tested without a finished prototype, an estimate of the drag coefficient was calculated at a chosen top speed from the dynamics explained in Section 2.1.7. The estimated top speed was set to approximately 20 m/s, performed at a 45° forward pitch. With the known maximum forward velocity \dot{x}, one can estimate the drag force F_d acting on the body, since the quadcopter is in a steady state at full throttle, \omega_{max}. With this knowledge of the system one can estimate the lumped drag coefficient k_d shown in

F_d = k_d u^2 \Leftrightarrow k_d = \frac{F_d}{u^2} = \frac{\sin(\phi)\sum_{n=1}^{4} C_T \omega_{max,n}^2}{u^2}. \quad (3.4)

This resulted in k_d = 976, which was then implemented in the simulations.

Table 3.3: Aerodynamic constants used in the dynamic model.

Name   Value  Unit
k_d    976    N s²/m²
u_max  20     m/s
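The computation in (3.4) can be sketched as below (illustrative Python, not thesis code). Note that the numerical result depends entirely on the constants inserted, so the value in the test is only what follows from the illustrative inputs, not the thesis's tabulated k_d.

```python
import math

def lumped_drag(ct, w_max, u_max, pitch_deg=45.0, n_motors=4):
    """Lumped drag coefficient from the steady-state balance (3.4):
    at terminal velocity, the horizontal thrust component
    sin(phi) * sum(C_T * w_max^2) equals the drag k_d * u^2."""
    f_d = math.sin(math.radians(pitch_deg)) * n_motors * ct * w_max ** 2
    return f_d / u_max ** 2
```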

**3.3**

**Inertia And Mass**

The quadcopter's inertia is considered to be symmetrical with reference to the CoG. To increase the accuracy of the inertia, a model of the quadcopter was examined in CAD software for simplified calculation, as can be seen in Figure 3.3.

Table 3.4: Inertia constants (kg m²).

I_xx  0.0083
I_yy  0.0083
I_zz  0.0163

The mass is coupled with the inertia and is required for the inertia matrix to be calculated. The mass of the quadcopter was estimated to m = 1.2 kg.

Figure 3.3: The CAD model used for inertia calculations.

The total mass of the quadcopter is one of the more important factors in the model. It is proportional to the inertia, which affects fast cornering, and it influences the required thrust for steady hovering. Hence, with a lower mass a significantly more efficient propulsion of the vehicle is achieved, resulting in longer flight time and a more agile platform.

**4**

**Simulations**

In the following chapter the simulation environment is explained.

**4.1**

**Coordinate Systems**

The control system needed a unified coordinate system for the car and the quadcopter. Two options were taken into consideration: the first was a body-fixed coordinate system of the car, and the second was an earth-fixed coordinate system in terms of longitude and latitude.

It was concluded that an earth-fixed coordinate system would be more reliable and easily implemented. The origin of the coordinates was based on the quadcopter's starting position.

Figure 4.1: Coordinate system.

**4.2**

**Dynamic Model**

The dynamic model was set up in Simulink with the physical properties investigated in Chapter 3. The calculations conducted in the model used the angular velocities of the motors as input to estimate the movement of the quadcopter.

Table 4.1: The calculated states of the quadcopter and their initial conditions.

State  Value  Unit   State  Value  Unit
x      -10    m      u      0      m/s
y      -10    m      v      0      m/s
z      0      m      w      0      m/s
ϕ      0      °      p      0      °/s
θ      0      °      q      0      °/s
ψ      0      °      r      0      °/s

In the simulations all twelve states shown in Table 4.1 are calculated. This was done by implementing (2.16)-(2.17) into Simulink. The method requires the initial states for the quadcopter to begin calculations, also shown in Table 4.1.

**4.2.1**

**Calculations**

The main calculations were derived from the equations of motion (2.14). In the used method one can substitute the N-frame angular rates \dot{\phi}, \dot{\theta}, \dot{\psi} with the B-frame angular rates p, q, r because of the small change per time step. The change of angular rate in the B-frame is calculated from the external influence of the motors \tau_\phi, \tau_\theta, \tau_\psi and the gyroscopic effects \tau_{gyro}. For an infinitesimal time step one can use

\begin{pmatrix} \dot{p} \\ \dot{q} \\ \dot{r} \end{pmatrix} = I^{-1}\left[\begin{pmatrix} \tau_\phi + \tau_{gyro} \\ \tau_\theta + \tau_{gyro} \\ \tau_\psi \end{pmatrix} + \begin{pmatrix} qr(I_{yy} - I_{zz}) \\ pr(I_{zz} - I_{xx}) \\ pq(I_{xx} - I_{yy}) \end{pmatrix}\right]. \quad (4.1)

Since all of these calculations are done during very small time steps, one can make the approximation \dot{p} \approx \Delta p / \Delta t, and multiplying by \Delta t gives the angular velocity state p. Once the angular rates are known, one can calculate the Euler angles in the N-frame by using (2.20).
\begin{pmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{pmatrix} = H_{BN}\begin{pmatrix} p \\ q \\ r \end{pmatrix} \quad (4.2)

This uses the same kind of approximation as (4.1). The last set of equations solved comes from Newton's second law, converted to the B-frame according to (2.15).

\begin{pmatrix} \dot{u} \\ \dot{v} \\ \dot{w} \end{pmatrix} = \frac{1}{m}\begin{pmatrix} 0 \\ 0 \\ \sum_{i=1}^{4} C_T\omega_i^2 \end{pmatrix} + R_{NB}\begin{pmatrix} 0 \\ 0 \\ -g \end{pmatrix} - \begin{pmatrix} p \\ q \\ r \end{pmatrix}\times\begin{pmatrix} u \\ v \\ w \end{pmatrix} - \begin{pmatrix} k_d u^2 \\ k_d v^2 \\ k_d w^2 \end{pmatrix} \quad (4.3)
The equation was approximated by the same principle as (4.1). The first term represents the total lift force from the motors. The second term is the gravity in the N-frame, converted to the B-frame by R_{NB}. The third term represents the centripetal effect acting on the body, and the last term adds the aerodynamic forces as a function of the quadcopter's B-frame velocities u, v and w.
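One forward-Euler step of the rotational part, (4.1), can be sketched as follows. This is an illustrative Python sketch, not the Simulink implementation; it assumes a diagonal inertia matrix and folds \tau_{gyro} into the torque inputs.

```python
def body_rate_step(p, q, r, tau, I, dt):
    """One forward-Euler step of the B-frame rotational dynamics (4.1).

    I   = (Ixx, Iyy, Izz), the diagonal inertia from Table 3.4.
    tau = (tau_phi + tau_gyro, tau_theta + tau_gyro, tau_psi).
    """
    Ixx, Iyy, Izz = I
    p_dot = (tau[0] + q * r * (Iyy - Izz)) / Ixx
    q_dot = (tau[1] + p * r * (Izz - Ixx)) / Iyy
    r_dot = (tau[2] + p * q * (Ixx - Iyy)) / Izz
    # dt * rate_dot approximates the delta-p discussed after (4.1)
    return p + dt * p_dot, q + dt * q_dot, r + dt * r_dot
```

Repeating this step at the simulation rate, then mapping p, q, r through H_BN as in (4.2), reproduces the integration chain described above.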

**4.3**

**Control Systems**

The objective of the thesis is primarily about the position control of the quad-copter. With this in mind the control system was heavily influenced by the exist-ing cascade control from the Pixhawk flight controller [1].

The quadcopter has a number of physical properties which must be controlled for a functional, stabilized vehicle. These are:

• Altitude
• Pitch
• Roll
• Yaw
• Position

**4.3.1**

**Altitude Control**

The altitude control consists of three controllers in a cascade configuration, as can be seen in Figure 4.2.

Figure 4.2: The altitude control system.

The outer loop is a P-controller acting on the error between the desired altitude and the actual altitude of the quadcopter, with a saturation. The second controller is the rate controller, a PD controller limited to 1.5 m/s². The last controller saturation is set at 2.5 m/s, preventing the quadcopter from climbing faster than its desired threshold.
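The cascade described above can be sketched as follows. This is an illustrative Python class, not the Simulink implementation; the gains are placeholders rather than the tuned thesis values, and only the saturation limits (2.5 m/s and 1.5 m/s²) come from the text.

```python
class AltitudeCascade:
    """Sketch of the altitude cascade of Figure 4.2: an outer P loop on
    the altitude error produces a saturated climb-rate command, and a PD
    loop on the rate error produces a saturated acceleration command."""

    def __init__(self, kp_pos=1.0, kp_rate=3.0, kd_rate=0.5, dt=0.01):
        self.kp_pos, self.kp_rate, self.kd_rate = kp_pos, kp_rate, kd_rate
        self.dt, self.prev_rate_err = dt, 0.0

    @staticmethod
    def _sat(x, limit):
        return max(-limit, min(limit, x))

    def step(self, z_ref, z, z_rate):
        # Outer P loop: altitude error -> climb-rate command, capped at 2.5 m/s
        v_ref = self._sat(self.kp_pos * (z_ref - z), 2.5)
        # Inner PD loop: rate error -> acceleration command, capped at 1.5 m/s^2
        rate_err = v_ref - z_rate
        d_err = (rate_err - self.prev_rate_err) / self.dt
        self.prev_rate_err = rate_err
        return self._sat(self.kp_rate * rate_err + self.kd_rate * d_err, 1.5)
```

The saturations make the closed loop behave predictably even for large altitude errors: a step of many metres still commands at most a 2.5 m/s climb.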

**4.3.2**

**Euler Angles**

Controlling roll and pitch was done in a similar way to the altitude. The cascade loop has however one controller less and does not use the accelerometer, as seen in Figure 4.3. It uses a P-controller for the angle of the quadcopter and an inner PID controller for the angular rate.


Figure 4.3: The attitude control system modelled in Simulink.

**4.3.3**

**Position Controller Design**

In solving the main objective of the thesis, to *autonomously follow a car on a racetrack*, most effort was put into developing the position controller. This was done by implementing a position controller in Simulink, where algorithms could easily be tested and validated, as shown in Figure 4.4.

The position controller used the position from the control command block, generating the desired trajectory from the car’s estimated position.

As pointed out in Section 4.1, the raw data from the GNSS sensor are given in the N-frame and thereby need conversion to the B-frame for use in the control system, as distance errors \Delta U, \Delta V in the B-frame from \Delta x, \Delta y in the N-frame. This was done with the two-dimensional version of the rotation matrix (2.19), where \psi is the heading of the quadcopter in the N-frame.

\Delta U = \Delta x\cos\psi + \Delta y\sin\psi \quad (4.4)

\Delta V = \Delta y\cos\psi - \Delta x\sin\psi \quad (4.5)

During this conversion it was considered that only the planar translation was required, since no major change in z-direction would be seen on a frozen lake.
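Equations (4.4)-(4.5) amount to a planar rotation by the heading \psi. A minimal Python sketch (illustrative, not thesis code):

```python
import math

def n_to_b(dx, dy, psi):
    """Rotate the N-frame position error into the B-frame per (4.4)-(4.5).

    psi is the quadcopter heading in radians; returns (dU, dV).
    """
    du = dx * math.cos(psi) + dy * math.sin(psi)
    dv = dy * math.cos(psi) - dx * math.sin(psi)
    return du, dv
```

With zero heading the frames coincide; at psi = 90° a northward N-frame error maps entirely onto the lateral B-frame axis.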


Figure 4.5: Data transfer between the different modules.

**4.3.4**

**Position Controller**

The simplest method of controlling the desired velocity of the quadcopter uses two controllers, one for each direction, u and v in the B-frame. These controllers convert the desired velocity into desired pitch and roll.

The controllers are in a cascade setup, as can be seen in Figure 4.6, with an outer P-controller converting the position error into a desired velocity. The inner loop controls the rate with a PID controller.


Figure 4.6: The Simulink model of the position control system.

As a protective measure, a limit was set so that the desired angle of the quadcopter could be no greater than 40°. This was done to ensure the safety of the quadcopter, which otherwise could behave unpredictably at greater angles.

Table 4.2: Parameters of the controllers used in the position control block.

Mode       P     I     D     Yaw
Slow Mode  0.12  0.05  0.06  1
Fast Mode  0.2   0.01  0.12  0.3

**4.3.5**

**Slow and Fast Mode**

The issue with performing a change in heading and accelerating at the same time is the contradiction in the desired signals. To achieve a large change in heading, the relative angular velocity between the motor pairs must be high, hence two motors must have a low angular velocity. This inhibits the acceleration, which requires all four motors to run at maximum angular velocity. To prevent this contradiction, a fast and a slow mode were used, giving lower priority to yaw than to roll and pitch, as shown in Table 4.2.

**4.3.6**

**Tuning**

The control systems on the quadcopter were primarily tuned according to previous knowledge of the control loops, from [1]. This gives a good rough tuning of the parameters of the cascade control, which otherwise can be a time-consuming task.

The position control system was however made from scratch, and no previous knowledge of the system was available. Here the Ziegler-Nichols method, explained in Section 2.2.3, was used to get a rough estimate of the controller parameters. Further analysis and testing of different parameters resulted in the parameters shown in Table 4.2.
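The classic Ziegler-Nichols rule maps the ultimate gain K_u and oscillation period T_u, found at the stability limit, to PID parameters. A minimal sketch (illustrative Python; the 0.6, T_u/2, T_u/8 constants are the textbook classic-PID row, not the thesis's final tuned values):

```python
def ziegler_nichols_pid(ku, tu):
    """Classic Ziegler-Nichols PID tuning from the ultimate gain Ku and
    the oscillation period Tu. Returns (Kp, Ki, Kd) for the parallel
    form Kp + Ki/s + Kd*s."""
    kp = 0.6 * ku
    ti, td = tu / 2.0, tu / 8.0
    return kp, kp / ti, kp * td
```

The resulting gains are only a starting point; as noted above, further testing refined them to the values in Table 4.2.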

**4.4**

**Trajectory**

Before simulations were carried out, data was gathered from one of the BorgWarner test vehicles. The GNSS module was placed on the roof of the test vehicle for best reception. Different test scenarios were logged from the Volvo S60 shown in Figure 4.7, and these were used for evaluation of the objective.

**4.4.1**

**Logged Data**

Data logging was done using the RaceLogic VB10SPS GPS sensor, which outputs the raw navigation data onto a CAN bus. This was connected to a CANCase XL, which decodes the CAN bus. The CANCase XL was also connected to the internal CAN of the car for sensor logging. The car's dataset consisted of accelerometers, gyroscopes and wheel speeds.


Figure 4.8: Schematic of the data logging setup.

The data collected for trajectory analysis can be seen in Figure 4.9. The different sectors are based on various characteristics, such as a slow twisty section and a fast section with fewer corners.

• The first sector is a low-speed sweeping trajectory.
• The second sector is a medium-velocity sector with corners at low velocity.
• The third sector is a high-velocity stage with hard cornering.


Figure 4.9: The sectors used for algorithm evaluation.

The logging system was improved with the help of a C# program sending and receiving data from the Pixhawk flight controller, henceforth called the ground station. This was done according to the schematic in Figure 4.10. The car's GNSS module sends its coordinates to the CANCase, which forwards them to the ground station via Universal Asynchronous Receiver/Transmitter (UART). The package is converted into a preset CAN message for the logging system (CanAlyzer), which receives and timestamps all data from all devices for easy data analysis.


Figure 4.10: Overview of the overall system.

**4.4.2**

**Shortest Path Trajectory**

Generating a trajectory which *cuts corners* could help the quadcopter by smoothing out hard cornering and maintaining momentum. A simple method for generating such a reference trajectory is given in Algorithm 1.

Algorithm 1: Shortest Path Algorithm.

∆x ← x_Quad − x_Car
∆y ← y_Quad − y_Car
Ψ ← atan2(∆y, ∆x)
x_Ref ← x_Car + d_distance · cos(Ψ)
y_Ref ← y_Car + d_distance · sin(Ψ)

The algorithm calculates the shortest way from the quadcopter's previous position and then estimates where it is supposed to be at a given distance d_distance from the target. The result is then low-pass filtered using the filter in (4.6), smoothing the data and creating a flowing trajectory from the measured data.

F(z) = \frac{0.01867}{z - 0.9813} \quad (4.6)
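Algorithm 1 and the smoothing filter (4.6) can be sketched together in a few lines of Python (illustrative, not thesis code; the atan2 argument order follows the usual (∆y, ∆x) convention):

```python
import math

def shortest_path_ref(x_quad, y_quad, x_car, y_car, d):
    """Algorithm 1: place the reference at distance d from the car,
    on the line towards the quadcopter's current position."""
    psi = math.atan2(y_quad - y_car, x_quad - x_car)
    return x_car + d * math.cos(psi), y_car + d * math.sin(psi)

def lowpass(samples, b=0.01867, a=0.9813):
    """First-order discrete low-pass F(z) = b / (z - a) from (4.6),
    i.e. the difference equation y[k] = a*y[k-1] + b*u[k-1]."""
    y, prev_u, out = 0.0, 0.0, []
    for u in samples:
        y = a * y + b * prev_u
        prev_u = u
        out.append(y)
    return out
```

The filter's DC gain b/(1 − a) ≈ 0.998 is close to one, so the smoothed reference converges to the raw reference without a steady-state offset.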

**4.5**

**Sensor Fusion**

The configuration of the car and the quadcopter is a setup with a wide range of systems and sensors. The car has sensors primarily used by different safety systems such as ABS and ESP. The sensors these systems use can be accessed from the CAN bus, as can be seen in Table 4.3.

Table 4.3: Available signals for use from the car.

Signal            Car  RaceLogic VB10SPS
Accelerometer     x
Gyro              x
GNSS Coordinates       x
Heading                x

The quadcopter also uses a wide range of sensors which are being used for navigation. These are listed in Table 4.4.

Table 4.4: Available signals on the Pixhawk flight controller.

Signal         PixHawk
Accelerometer  x
Gyro           x
GNSS           x
Magnetometer   x
Barometer      x

By sensor fusion one could use the advantages of each sensor and minimize the error of the estimated position of either the car or the quadcopter.

Position IMU(Yawrate) GPS(Heading) GPS(Position,Velocity) Filter Filter

Figure 4.11:Complementary filter for a position estimate of a car.

There are several ways of using the filters shown in Section 2.2.5; the usage depends heavily on the specific application and conditions. In this thesis project, a complementary filter was used for heading and position estimation. An example of this complementary filter is shown in Figure 4.11. This setup of the filter is however only functional if the vehicle maintains near-zero roll and pitch, which is assumed for a car running on a normal road.

What the filter accomplishes is an improved navigation estimate, with fast estimates from the INS sensors corrected by the GNSS sensor over time.

For the car to be used as a reference signal one needs to study the signal, in this case the GNSS position. One of the major problems with the GNSS is its 10 Hz update frequency, which limits the control loops running at 100 Hz. The simplest way to boost the update frequency was to use the angular rate sensor, combined with the velocity of the car, to complement the GNSS.

As shown in Figure 4.11, the finished filter actually contains two complementary filters: one which estimates the heading of the car, and one which ultimately calculates the new position of the car.

The first stage of the filter estimates the heading \hat{\psi}_{k|k-1} by integrating the yaw rate \dot{\psi}_k from the gyroscope over the sample time T_s and adding it to the current heading value \hat{\psi}_{k-1|k-1}:

\hat{\psi}_{k|k-1} = \hat{\psi}_{k-1|k-1} + \dot{\psi}_k T_s \quad (4.7)

This is then fed into the filter, which corrects the estimated heading with the one from the GNSS sensor, \psi_k^{GNSS}. The filter is tuned with the parameter \alpha_h, resulting in

\hat{\psi}_{k|k} = \hat{\psi}_{k|k-1}\alpha_h + \psi_k^{GNSS}(1 - \alpha_h) \quad (4.8)

when a new GNSS heading is available; otherwise

\hat{\psi}_{k|k} = \hat{\psi}_{k|k-1}. \quad (4.9)
The second stage of the filter estimates the new coordinates of the car, x and y. This was done with the same method as above: first by integrating the position with the known speed v_k and the estimated heading \hat{\psi}_{k|k} from (4.8),

\hat{x}_{k|k-1} = \hat{x}_{k-1|k-1} + v_k T_s \cos(\hat{\psi}_{k|k}) \quad (4.10)

\hat{y}_{k|k-1} = \hat{y}_{k-1|k-1} + v_k T_s \sin(\hat{\psi}_{k|k}). \quad (4.11)

Combining these with the measured values (x_k^{GNSS}, y_k^{GNSS}) from the GNSS, one gets an improved position estimate (\hat{x}_{k|k}, \hat{y}_{k|k}) when the parameter \alpha_p is tuned properly, resulting in (4.12) when a new GNSS position is available,

(\hat{x}_{k|k}, \hat{y}_{k|k}) = (\hat{x}_{k|k-1}, \hat{y}_{k|k-1})\alpha_p + (x_k^{GNSS}, y_k^{GNSS})(1 - \alpha_p). \quad (4.12)

Otherwise, when no GNSS reading is available, the filter is

(\hat{x}_{k|k}, \hat{y}_{k|k}) = (\hat{x}_{k|k-1}, \hat{y}_{k|k-1}). \quad (4.13)
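The two-stage filter (4.7)-(4.13) can be sketched as below. This is an illustrative Python sketch, not thesis code; the \alpha values and the representation of GNSS availability (None when no fix arrived in a sample) are assumptions of this sketch.

```python
import math

def cf_position(psi0, x0, y0, yaw_rates, speeds, gnss, Ts=0.01,
                alpha_h=0.98, alpha_p=0.98):
    """Discrete complementary filter of (4.7)-(4.13): integrate yaw rate
    and speed at the fast rate, correct with GNSS (heading, x, y) when a
    measurement is available (gnss[k] is None otherwise)."""
    psi, x, y = psi0, x0, y0
    track = []
    for k, (rate, v) in enumerate(zip(yaw_rates, speeds)):
        psi = psi + rate * Ts                              # (4.7)
        if gnss[k] is not None:
            psi_g, x_g, y_g = gnss[k]
            psi = alpha_h * psi + (1 - alpha_h) * psi_g    # (4.8)
        x = x + v * Ts * math.cos(psi)                     # (4.10)
        y = y + v * Ts * math.sin(psi)                     # (4.11)
        if gnss[k] is not None:
            x = alpha_p * x + (1 - alpha_p) * x_g          # (4.12)
            y = alpha_p * y + (1 - alpha_p) * y_g
        track.append((x, y))
    return track
```

Between GNSS samples the dead-reckoning terms carry the estimate at the fast rate, which is exactly the up-sampling effect shown in Figure 4.12.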
The result of this method can be seen in Figure 4.12, where simulations were carried out to validate the improvement. Several different scenarios, such as low and high velocity, were tested.

Figure 4.12: GNSS measured data as circles, up-sampled data as a dotted line.

**5**

**Prototype**

The thesis project required a prototype to be built for evaluation of the filming method. This was done with mostly open-source software and hardware, for a user-friendly environment. The following chapter introduces the components used.

**5.1**

**Flight Controller**

The Pixhawk flight controller is an advanced autopilot based on an open-source hardware project, with the ability to control a diverse set of vehicles. Consisting of a 32-bit microcontroller and sensors from ST Microelectronics, this powerful hardware was chosen for its flexibility and open-source availability.

Figure 5.1: The Pixhawk flight controller.

**5.2**

**Communication**

The finished communication schematic, with the wireless connection between the Pixhawk and the C# program running on the computer, is shown in Figure 5.2.


Figure 5.2: The principle layout of the different components in the system.

**5.2.1**

**xBee**

The xBee modules are connected to the Pixhawk by UART, for a simple interface between the components. The baud rate was set to 57600 as a compromise between bandwidth and robustness of the protocol.

Figure 5.3: Xbee modules used for communication between the ground station and the quadcopter.

**5.2.2**

**Mavlink**

With the wireless communication between the Pixhawk and the ground station, a communication protocol was required. Such a protocol already exists: *MAVLink*, a generic data-management protocol for a variety of Micro Air Vehicles (MAVs). The general protocol works as follows:

Table 5.1:Message layout of the MAVLink protocol.

Byte Number Meaning Variables

1 Message Header 0xFE

2 Message Length 9-254

3 Sequence Number 0-255

4 System ID e.g. 0

5 Component ID e.g. 0

6 Message ID e.g. 0(Heartbeat)

n Payload Data

n+1 Checksum Data

n+2 Checksum Data

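A minimal parser for the byte layout of Table 5.1 might look as follows. This is an illustrative Python sketch, not the ground station's C# code; it only splits the fields and does not verify the checksum.

```python
def parse_mavlink_v1(frame):
    """Split a MAVLink 1.0 frame according to the layout of Table 5.1."""
    if frame[0] != 0xFE:                 # byte 1: message header
        raise ValueError("bad header")
    length = frame[1]                    # byte 2: payload length
    return {
        "length": length,
        "seq": frame[2],                 # byte 3: sequence number
        "sysid": frame[3],               # byte 4: system ID
        "compid": frame[4],              # byte 5: component ID
        "msgid": frame[5],               # byte 6: message ID (0 = heartbeat)
        "payload": bytes(frame[6:6 + length]),
        "checksum": bytes(frame[6 + length:8 + length]),
    }
```

The sequence number lets the receiver detect dropped frames on the lossy xBee link, and the message ID selects how the payload bytes are decoded.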

**5.2.3**

**Ground Station**

The ground station and its GUI, seen in Figure 5.4, were written in Visual Studio, a Microsoft IDE with a wide range of options. The chosen language was C#, an object-oriented language suitable for a variety of applications.

The main layout of the program consists of three different modules:

• CAN
• Data
• Control

These modules handle each of the specific areas of the program with every task running a separate thread.

The CAN module is specified to handle the data transfer from the vehicle to the main program. This is done by the use of the CANCase XL, decoding the data from the vehicles and making it available for the program.

The CAN module also contains the data logging system, which outputs all the data from the quadcopter onto a virtual CAN connection, synced with CanAlyzer, which then timestamps all the data from both the quadcopter and the car/GNSS. This results in a single robust data logging system used by all units in the system.

The data module is simply a data storage unit for all the devices, where the CAN and Control tasks are allowed to get and set the data variables used in the control algorithm.

The Control module is the heart of the program. It consists of several procedures and commands that make the program work according to the user's wishes. The program can be divided into three parts:

• Starting Procedure
• Mission
• Landing Procedure

Figure 5.4: The GUI developed for the ground station.

**5.2.4**

**Quadcopter Specification**

The quadcopter used for testing was chosen to be a small, lightweight quadcopter with enough power to carry a GoPro camera. With these specifications, a platform was chosen according to Table 5.3.

Table 5.3: Quadcopter components used for the prototype.

Component  ID
Frame      REPTILE500-V3
Motors     AX-2810Q-750KV
ESC        Afro ESC 30Amp OPTO
Battery    Zippy Flightmax 3300mAh 30C
Gimbal     FeiYu Tech Mini 3D

The general parts of the quadcopter were assembled according to the schematics in [15]. An option to increase the flight time with two batteries was available.

**Quadcopter Design**

Figure 5.5: Final quadcopter design.

The fully assembled quadcopter, which carried out the validation tests, has a total mass of 1.4 kg. It uses a single battery, with the option to add an additional battery for increased flight time at a decrease in efficiency.

**5.3**

**Gimbal**

The gimbal used in the project is the FeiYu Tech Mini 3D, a three-axis gimbal with motors mounted on the pitch, roll and yaw axes. This enables the gimbal to stabilize the camera about the roll axis and allows the user to control the yaw and pitch angles.

Table 5.4: Gimbal specifications; the roll angle is always controlled by the gimbal itself.

Feature       Value
Pitch Angle   ±150°
Roll Angle    ±45°
Yaw Angle     ±120°
Heading Rate  75°/s
Pitch Rate    25°/s

**5.3.1**

**Gimbal Modelling And Control**

A third-party gimbal was acquired which did not have its raw sensor values readable. However, the motion specifications in Table 5.4 indicate that the dynamics of the gimbal are sufficiently quick for the desired objective.

The control system for the gimbal is an open-loop system based on the estimates of the car and the quadcopter: since the positions of both vehicles are known, the direction in which the camera should point can be calculated. Because there is no image processing of the camera feed, a closed-loop system is not possible with the current hardware.


Figure 5.6: Gimbal coordinate system in reference to the B-frame.

The desired gimbal angles in pitch, \psi_{Pitch}, and heading, \psi_{Heading}, are calculated from the difference in position \Delta x, \Delta y and altitude \Delta z between the quadcopter and the car as

\psi_{Pitch} = \arctan\frac{\Delta z}{\sqrt{\Delta x^2 + \Delta y^2}} \quad (5.1)

in the E-frame. The secondary angle \psi_{Heading} is calculated from the difference between the quadcopter heading \psi_Q and the gimbal heading \psi_G as \psi_{Heading} = \psi_Q - \psi_G.

**Implementation**

Implementation of the gimbal with the flight controller was fairly simple. Each axis receives a PWM signal which specifies a variable set point for the gimbal. The neutral state of 0° corresponds to a PWM signal of 1500. The min/max PWM is (1000, 2000), so a heading angle of −120° corresponds to a PWM set point of 1000, and vice versa for the positive angle.
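The pitch formula (5.1) and the angle-to-PWM mapping can be sketched together. This is an illustrative Python sketch, not the flight controller's code; the linear mapping and the clamping behaviour are assumptions based on the description above.

```python
import math

def gimbal_pitch(dx, dy, dz):
    """Desired gimbal pitch (radians) from the relative position, per (5.1)."""
    return math.atan2(dz, math.hypot(dx, dy))

def angle_to_pwm(angle_deg, max_angle_deg):
    """Map an angle in [-max, +max] degrees to a PWM set point in
    [1000, 2000], with 0 degrees at 1500, as described above."""
    angle_deg = max(-max_angle_deg, min(max_angle_deg, angle_deg))
    return int(round(1500 + 500 * angle_deg / max_angle_deg))
```

For the heading axis, max_angle_deg would be 120 per Table 5.4, so −120° maps to 1000 and +120° to 2000.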

**5.3.2**

**Global Navigation Satellite Systems**

Controlling the quadcopter relative to the car was done using two different GNSS sensors: a RaceLogic VB10SPS for the car and a U-Blox NEO-7 for the quadcopter. Both sensors acquire a 3D fix using a minimum of four satellites. The differences between the two sensors lie in the update frequency and the accuracy, measured as Circular Error Probability (CEP), as can be seen in Table 5.5.

Table 5.5: Circular Error Probability (CEP).

GNSS               Update Frequency [Hz]  Accuracy [m]   Channels
RaceLogic VB10SPS  10                     2.5 [95% CEP]  8
U-Blox NEO-7       5                      2.5 [50% CEP]  56

It was important to validate the data and measure the relative offset of the two sensors, since this could influence the final result. A test was conducted by placing the two sensors in the same location for 20 minutes.


Figure 5.7: A measurement sequence of 20 minutes of both GNSS sensors.

The relative difference between the two sensors was calculated from the mean values using (2.33). The absolute measured deviation was 1.23 m, which is considered a slight difference and not a major concern at the velocities used.

Figure 5.8: The used U-Blox NEO-7 GPS.

Figure 5.9: The used RaceLogic VB10SPS (external receiver).