Autonomous Following and Filming of a Test Vehicle



Teodor Johnsson
LiTH-ISY-EX–16/4963–SE

Supervisor: Clas Veibäck
ISY, Linköping University

Pierre Pettersson

BorgWarner AB

Examiner: Daniel Axehill

ISY, Linköping University

Division of Automatic Control Department of Electrical Engineering

Linköping University SE-581 83 Linköping, Sweden


Abstract

Quadcopters have been used as filming platforms for years and are mainly manually controlled, with the inherent difficulties that entails. The objective of this thesis project was to eliminate the human factor by developing and constructing a prototype for an autonomous filming platform at BorgWarner AB. Different methods for following and filming the test vehicles were investigated with the use of different sensors, such as Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). These were evaluated by simulations in Simulink, where it was shown that improved reference following could be achieved by the use of a complementary filter. Further development was focused on writing a control system for the prototype, in which the following algorithms and the prototype could be evaluated. Final testing resulted in the conclusion that the desired objective was feasible with the current platform. There were, however, some limitations in the desired change in direction of the quadcopter, first discovered during simulations and proven during experimental testing.


Acknowledgements

This master's thesis would not have been possible without the support of my supervisor, Pierre Pettersson at BorgWarner AB, and the TTC division for their cheerful support.

I would also like to thank Clas Veibäck and Daniel Axehill at Linköping University for their feedback and support.

Further acknowledgement should go to David Tjskog for great support and help with troubling software.

Linköping, February 2016
Teodor Johnsson


Contents

Notation

1 Introduction
  1.1 Purpose
  1.2 Problem Formulation
  1.3 Limitations
  1.4 Literature Study
  1.5 Software
    1.5.1 Matlab and Simulink
    1.5.2 Visual Studio
    1.5.3 CanAlyzer

2 Preliminaries
  2.1 Modelling
    2.1.1 Actuators
    2.1.2 Coordinate Systems
    2.1.3 Motors
    2.1.4 Thrust
    2.1.5 Torque
    2.1.6 Gyroscopic Effects
    2.1.7 Aerodynamic Forces
    2.1.8 Inertia
  2.2 Simulations
    2.2.1 Equations of Motion
    2.2.2 Rotational Conversion
    2.2.3 Controllers
    2.2.4 Global Navigation Satellite Systems
    2.2.5 Complementary Filter

3 Modelling
  3.1 Thrust and Torque
  3.2 Aerodynamic Forces
  3.3 Inertia and Mass

4 Simulations
  4.1 Coordinate Systems
  4.2 Dynamic Model
    4.2.1 Calculations
  4.3 Control Systems
    4.3.1 Altitude Control
    4.3.2 Euler Angles
    4.3.3 Position Controller Design
    4.3.4 Position Controller
    4.3.5 Slow and Fast Mode
    4.3.6 Tuning
  4.4 Trajectory
    4.4.1 Logged Data
    4.4.2 Shortest Path Trajectory
  4.5 Sensor Fusion

5 Prototype
  5.1 Flight Controller
  5.2 Communication
    5.2.1 xBee
    5.2.2 Mavlink
    5.2.3 Ground Station
    5.2.4 Quadcopter Specification
  5.3 Gimbal
    5.3.1 Gimbal Modelling and Control
    5.3.2 Global Navigation Satellite Systems

6 Results
  6.1 Modelling
    6.1.1 Model Validation
    6.1.2 Control Validation
  6.2 Simulations
    6.2.1 Trajectory Evaluation
    6.2.2 Slow and Fast Mode
  6.3 Prototype
    6.3.1 Heading Control
    6.3.2 Altitude Control
    6.3.3 Position Control
    6.3.4 Gimbal Control

7 Discussion
  7.1 Modelling
  7.2 Simulations
  7.3 Prototype
    7.3.1 Video Capturing

8 Conclusions
  8.1 Conclusion
  8.2 Future Work



Notation    Meaning

θ           Pitch
ϕ           Roll
ψ           Yaw
p           Angular velocity, x axis
q           Angular velocity, y axis
r           Angular velocity, z axis
u           B-frame longitudinal velocity
v           B-frame lateral velocity
w           B-frame vertical velocity
X           Local E-frame position
Y           Local E-frame position
Z           Local E-frame position
v           Voltage
i           Current
A           Area
C_T         Thrust coefficient
C_D         Drag coefficient
g           Gravity
m           Mass
I           Inertial constant
ρ           Density
ω           Angular velocity
T_s         Sample time
α_h         Weight factor for heading filter
α_p         Weight factor for position filter



Abbreviation    Meaning

IDE       Integrated Development Environment
VS        Visual Studio
E-Frame   Earth Frame
B-Frame   Body Frame
BLDC      Brushless Direct Current
DC        Direct Current
DoF       Degrees of Freedom
MCU       Micro Control Unit
CoG       Center of Gravity
INS       Inertial Navigation System
GNSS      Global Navigation Satellite Systems
LiPO      Lithium Polymer
PWM       Pulse Width Modulation
ESC       Electronic Speed Controller
FC        Flight Controller
CAN       Controller Area Network
CEP       Circular Error Probability
UART      Universal Asynchronous Receiver/Transmitter
C#        Programming Language
PID       Proportional Integral Derivative
CF        Complementary Filter




The following master’s thesis was conducted at BorgWarner AB in Landskrona, autumn of 2015.

BorgWarner AB is a global manufacturer of automotive components for drivetrains in cars. Their main product in Sweden is their All-Wheel Drive (AWD) system, developed in Landskrona. During testing in northern Sweden, there is a desire for filming with an aerial platform.

The technology within these platforms has primarily been associated with expensive military uses, but with a reduction in price it has become more available to the public and to commercial businesses.

The drawback with this type of vehicle is primarily the human factor, because control is done by eye contact and manual control. This method results in difficulties under deteriorating conditions such as distance, fog and rain, and following a moving car compounds these difficulties. By introducing a control system that autonomously carries out this operation with faster and more precise control, the results have the potential to improve considerably. This is done with the use of different sensors, such as Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS), which is investigated in this master's thesis.



Purpose

The objective of this thesis project is to construct and evaluate the performance of an autonomous quadcopter for filming a car in motion around a test track at various speeds.



Problem Formulation

Within the frame of this master's thesis, an autonomous control system will be investigated for its capability of "following and filming a test vehicle without human input". This shall be done with simulations and with data gathered from a constructed prototype.




Limitations

• The thesis will not be focused on closed-loop stability of the quadcopter.

• The use of image processing for video feedback will not be considered.

• The flight controller and quadcopter hardware will be bought, not developed, in this project.


Literature Study

In the master's thesis project a literature study was carried out to investigate the topic. Due to the content one could divide it into different stages. The first stage consists of modelling and simulation of the quadcopter. For this purpose mainly literature from [18], [5] and [20] is used, which covers methods for modelling the physical attributes of a quadcopter. Their work is primarily used for setting up the model later used for simulations in Matlab and Simulink. Further information regarding modelling of motors, such as [16], was used for fine tuning the model. The second stage of the thesis project consists of controlling the quadcopter. The focus is mainly on sensor fusion and trajectory generation for different scenarios, at both high and low speeds. For this purpose there are several interesting publications that have contributed to the area, such as [22], [14], [13]. These primarily investigate the use of sensor fusion for various vehicles, mainly with focus on GNSS/INS. Combining these is most commonly done with a Kalman filtering technique or a complementary filter, enabling an enhanced position estimate. Further studies were carried out on various simulation methods for the quadcopter. One could use Matlab calculations, as in [10], or the most common method, which is the use of Simulink [7].



Software

To achieve the objective a wide range of programs were utilized. A short explanation of the programs used follows.


Matlab and Simulink

Matlab is a computing environment commonly used for numerical computations and simulations and is widely used in the industry. It is very commonly paired with Simulink, a graphical block-diagram simulation environment built on Matlab.


Visual Studio

Visual Studio (VS) is an IDE from Microsoft for building Windows applications. VS was used because of its wide range of features and its compatibility with the third-party applications mainly used in vehicle software.



CanAlyzer

CanAlyzer is a program specialized in managing data from the CAN bus found in most vehicles. Together with a CAN-Case XL interface, it can be connected to several CAN networks at a time and configured to work with third-party programs developed in VS.




This chapter summarizes the theory used in the thesis. It starts with an explanation of what a quadcopter is and how it works by introducing its mechanics, and then explores how to implement a model for simulations of a quadcopter.



Modelling

A multirotor is an ultralight aircraft, a simple and effective platform for multiple purposes, mainly because of its affordability and ease of customization. Depending on its size a multirotor has different capabilities, but the principal layout consists of actuators in, most commonly, a symmetrical configuration around a center hub. These actuators are primarily Brushless Direct Current (BLDC) motors, which in recent years have become more efficient and deliver great performance in a small, light-weight package. An example of such a platform is shown in Figure 2.1, which is a quadcopter in a cross configuration, so called due to its defined forward orientation.

The on-board power source is most commonly a Lithium Polymer (LiPO) bat-tery, powering the motors through Electronic Speed Controllers (ESC).

The Flight Controller (FC) is the brain of the multirotor and consists of a Micro Control Unit (MCU) and sensors. The MCU computes the control signals to the actuators, with the purpose of stabilizing an otherwise inherently unstable platform.

The layout of the investigated multirotor is a quadcopter, consisting of four motors to control the platform. Its size is very suitable for small camera equipment such as the GoPro Hero 4 used in this thesis project.

Developing a simulation environment for a quadcopter begins with acknowledging its six Degrees of Freedom (DoF), meaning it can translate in u, v, w and rotate around its roll, pitch and yaw axes, further denoted ϕ, θ and ψ respectively.

Figure 2.1: Illustration of the quadcopter coordinate system and rotational directions.


A quadcopter has its motors mounted a distance d_m from the center hub, with all the motors producing lift in the z-axis. A steady state is achieved by having two of the motors counter-rotating in a symmetrical pattern, as can be seen in Figure 2.3.

Figure 2.2: Thrust is defined as T1-T4, with a distance d_m from the origin.

Figure 2.3: The rotational direction of each propeller around its axis.



Controlling the quadcopter's motion is done by varying the motors' angular velocities. An example of this is the pitch, which requires torque to be increased around the v-axis. This is done by increasing the angular velocity of motors (3, 4), or decreasing that of motors (1, 2), relative to the opposite pair, as shown in Figure 2.4. An increase in the angular velocity of the motors will produce more lift; hence, the quadcopter will tilt forward or backward depending on which motor pair is actuated.

Roll is controlled with the same method as pitch. Torque is induced by increasing or decreasing the relative angular velocity of motor pairs (1, 4) and (2, 3).


Figure 2.4: Increasing the thrust on motors (3, 4) induces a forward pitching torque.

Figure 2.5: Inducing a rolling torque by increasing the thrust on motors (1, 4).



Figure 2.6: Positive change in yaw is achieved by increasing the angular velocity of motors (2, 4).

Initiating a change in yaw is possible because two of the motors are counter-rotating. Increasing or decreasing the relative angular velocity of either motor pair (2, 4) or (1, 3) creates torque around the z-axis, resulting in the airframe rotating, as shown in Figure 2.6.


Coordinate Systems

Within the mechanical model there are two coordinate systems describing the motions of the quadcopter, as can be seen in Figure 2.7. These are the body frame and the navigation frame, further called the B-frame and the N-frame [21]. Depending on the desired control objective, either the B-frame or the N-frame is used in the controller design.


Figure 2.7: The B-frame in reference to the N-frame.

The distance travelled by the quadcopter can be considered to be relatively short, hence a locally based navigation frame is used, where the N-frame’s origin is the quadcopter’s starting position.

These coordinate systems define different quantities, such as the quadcopter's translation in the N-frame (x, y, z) and the Euler angles (ϕ, θ, ψ). The B-frame is used when calculating the linear velocities (u, v, w) and the angular velocities (p, q, r), shown in Figure 2.7.

When developing a model for simulation, one can preferably use the most common method, the Euler-Lagrange formalism [20]. This method assumes that the following physical statements are fulfilled, or that any deviation is negligible:

• Rigid structure.

• Symmetrical structure.

• The Center of Gravity (CoG) and the B-frame origin coincide.

Simulating the quadcopter's movement relative to the car requires a good position estimate in the N-frame. To achieve this, the altitude and attitude of the quadcopter must first be calculated. The simulation model does this by computing the change in attitude and altitude from the angular velocities of the motors. With knowledge of these motions one can then estimate the translation in the N-frame.



Motors

The actuators are commonly known as Brushless Direct Current (BLDC) motors, which convert electrical energy to mechanical energy. This is done by an electromagnetic circuit: in the center there is a rotor with magnets spinning around the stator, where Direct Current (DC) flows through the coils, inducing a magnetic force that further accelerates the rotor [16]. This is modelled using the common differential equations for a BLDC motor,

$$L\frac{di}{dt} = v - R_m i - k_e \omega_m, \qquad I_m\frac{d\omega_m}{dt} = \tau_m - \tau_d, \tag{2.1}$$

according to [16]. The inputs to the motor are the voltage v and current i, resulting in a change of rotation ω_m. Influencing the increase or decrease of ω_m are the torque τ_m, which is the torque applied by the motor, and τ_d, which is the load caused by the motor spinning in the air. Further constants are the specific internal resistance R_m, the motor constant k_e and the inertia I_m. With the use of a small optimized motor the inductance is considered negligible, L ≈ 0, resulting in

$$I_m\frac{d\omega_m}{dt} = -\frac{k_m^2}{R_m}\omega_m - \tau_d + \frac{k_m}{R_m}v. \tag{2.2}$$

This is the non-linear model of a BLDC motor commonly used for modelling the motor angular velocity ω_m. In this model the motor constant k_e has been substituted with a lumped constant for the motor torque, k_m. The second motor constant k_τ = k_m, because τ_m = k_τ i, which is used during the transformation to (2.2).
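The lumped model (2.2) is easy to sanity-check numerically. The sketch below integrates it with a forward-Euler scheme in Python; the constants km, Rm, Im and the quadratic load model tau_d = kd_load * w^2 are illustrative placeholders, not identified parameters from the thesis.

```python
def simulate_motor(v, km=0.01, Rm=0.1, Im=5e-5, kd_load=1e-7, Ts=1e-3, steps=2000):
    """Forward-Euler integration of the lumped BLDC model (2.2):
    Im * dw/dt = -(km^2/Rm)*w - tau_d + (km/Rm)*v,
    with the aerodynamic load modelled as tau_d = kd_load * w^2."""
    w = 0.0  # motor angular velocity (rad/s)
    for _ in range(steps):
        tau_d = kd_load * w * w                          # propeller load torque
        dw = (-(km * km / Rm) * w - tau_d + (km / Rm) * v) / Im
        w += Ts * dw
    return w
```

At steady state the torque balance (km^2/Rm)*w + kd_load*w^2 = (km/Rm)*v holds, which gives a quick sanity check on the integration.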



Thrust

The thrust from the motors is the main force controlling the quadcopter, and it can be derived using the equations from [16]. It is concluded that the thrust from a single motor is equal to

$$T = C_t \rho A r^2 \omega_m^2, \tag{2.3}$$

where C_t is the thrust coefficient and ρ is the density of the ambient air. The propeller geometry is represented by the area A and radius r of the rotor. These constants can however be lumped together, for system identification of the constant C_T, as

$$T = C_T \omega_m^2. \tag{2.4}$$



Torque

The motors mainly produce thrust in the z-axis. When deviating from the steady state of uniform thrust, the motors induce a torque τ on the quadcopter, as can be seen in Figure 2.8. This torque can be calculated from

$$\tau = C_q \rho A \omega_m^2 \tag{2.5}$$

[10, 5], where C_q is the motor-specific torque constant, ρ is the density of the ambient air and A is the cross-section area of the propeller. This results in an equation with the motor angular velocity ω_m as input, which can further be simplified, lumped together and estimated from system identification as

$$\tau = C_Q \omega_m^2. \tag{2.6}$$


Given the airframe seen in Figure 2.8 the motors will induce torque on the body, which is used for controlling the quadcopter’s roll, pitch and yaw.

Figure 2.8: Illustration of the torque acting on the quadcopter.

The magnitude of the torque applied on the airframe is based on the relative difference in thrust and torque from the motors, which is linked to the constants C_T and C_Q: C_T is explained in (2.4) and C_Q is defined in (2.6). Taking these parameters into consideration, together with the quadcopter's cross configuration, the motors' angular velocities will apply torque in the roll, pitch and yaw (ϕ, θ, ψ) axes of the airframe with a lever distance d_m from the body origin, as

$$\begin{bmatrix}\tau_\phi\\ \tau_\theta\\ \tau_\psi\end{bmatrix} = \begin{bmatrix} d_m C_T & -d_m C_T & -d_m C_T & d_m C_T\\ d_m C_T & d_m C_T & -d_m C_T & -d_m C_T\\ -C_Q & C_Q & -C_Q & C_Q \end{bmatrix} \begin{bmatrix}\omega_{m,1}^2\\ \omega_{m,2}^2\\ \omega_{m,3}^2\\ \omega_{m,4}^2\end{bmatrix}, \tag{2.7}$$

where ω_{m,n} is the angular velocity of motor n ∈ [1, 4].
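As a small illustration of (2.7), the mixing matrix can be tabulated and applied to the squared motor speeds. The values of d_m, C_T and C_Q below are placeholder assumptions, not the thesis's identified constants.

```python
# Mixing matrix from (2.7) for a cross configuration.
# dm, CT and CQ are illustrative placeholder values.
dm, CT, CQ = 0.2, 1e-5, 1e-7

MIX = [
    [ dm*CT, -dm*CT, -dm*CT,  dm*CT],   # roll torque  (tau_phi)
    [ dm*CT,  dm*CT, -dm*CT, -dm*CT],   # pitch torque (tau_theta)
    [   -CQ,     CQ,    -CQ,     CQ],   # yaw torque   (tau_psi)
]

def body_torques(omega):
    """Map the four motor angular velocities to body torques per (2.7)."""
    sq = [w * w for w in omega]
    return [sum(m * s for m, s in zip(row, sq)) for row in MIX]
```

Equal motor speeds give zero net torque (the hover steady state); raising one pair of motors while keeping the cross-configuration symmetry changes only the intended axis.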


Gyroscopic Effects

The second torque effect acting on the body is the gyroscopic effect [20], due to the rotating propellers and motors. This force counteracts a change of motion in the rolling and pitching directions and is proportional to the angular velocity of the rotating mass of the propeller and motor:

$$\begin{bmatrix}\tau_\phi^{gyro}\\ \tau_\theta^{gyro}\\ \tau_\psi^{gyro}\end{bmatrix} = I_m \begin{bmatrix} q & -q & q & -q\\ -p & p & -p & p\\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix}\omega_{m,1}\\ \omega_{m,2}\\ \omega_{m,3}\\ \omega_{m,4}\end{bmatrix}. \tag{2.8}$$


Aerodynamic Forces

There are several aerodynamic forces acting on the body [4]. These are strongly dependent on the layout of the quadcopter and its geometric design, and can be hard to quantify for a general case. The most important factor to take into consideration is the aerodynamic drag, which acts on all objects moving within the atmosphere and can be defined as

$$F_d = \frac{1}{2}\rho u^2 C_d A, \tag{2.9}$$

where the drag force F_d mainly depends on the object's velocity u and is proportional to the cross-section area A of the quadcopter and its drag coefficient C_d. The drag coefficient is a measure of the aerodynamic efficiency of the object moving in the ambient air, with density ρ. Most of these constants are physical parameters, hence one commonly writes the drag force as

$$F_d = k_d u^2, \tag{2.10}$$

where k_d is the lumped drag coefficient.



Inertia

The quadcopter is considered to be symmetrical around all axes, resulting in a diagonal inertia matrix [19]

$$I = \begin{bmatrix} I_{xx} & 0 & 0\\ 0 & I_{yy} & 0\\ 0 & 0 & I_{zz} \end{bmatrix}. \tag{2.11}$$



Simulations

Setting up a simulation model requires more than the physical properties of the quadcopter. Estimating the movement of the quadcopter also requires a set of equations describing how the quadcopter moves in space, described in this section.


Equations of Motion

Simulating the quadcopter requires a six-DoF problem to be solved. The basis for the simulations is the rigid-body dynamics of the quadcopter [19]. For a rigid body, the rotational acceleration α is based on the applied torque τ and the body's momentum ω × Iω, where ω is the angular velocity of the quadcopter and I its inertia:

$$\tau = I\alpha + \omega\times I\omega. \tag{2.12}$$

This can further be expanded with the three axes used for the quadcopter, resulting in

$$\begin{bmatrix}\tau_u\\ \tau_v\\ \tau_w\end{bmatrix} = I\begin{bmatrix}\ddot\phi\\ \ddot\theta\\ \ddot\psi\end{bmatrix} + \begin{bmatrix}\dot\phi\\ \dot\theta\\ \dot\psi\end{bmatrix}\times I\begin{bmatrix}\dot\phi\\ \dot\theta\\ \dot\psi\end{bmatrix}, \tag{2.13}$$

where τ is the total torque applied in each axis, resulting in a change in rotational velocity. Expanding (2.13), one can extract the angular accelerations and add all the external torque components from (2.8) and (2.7):

$$\begin{cases} I_{xx}\ddot\phi = \dot\theta\dot\psi(I_{yy}-I_{zz}) + \tau_\phi + \tau_{u,gyro}\\ I_{yy}\ddot\theta = \dot\phi\dot\psi(I_{zz}-I_{xx}) + \tau_\theta + \tau_{v,gyro}\\ I_{zz}\ddot\psi = \dot\phi\dot\theta(I_{xx}-I_{yy}) + \tau_\psi. \end{cases} \tag{2.14}$$

This set of equations estimates the rotation in reference to the N-frame when torque is applied to the body. There is, however, no information regarding the translation of the body; this requires a second set of equations.
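The three scalar equations in (2.14) translate directly into code. The sketch below evaluates the angular accelerations for given body rates and torques; the inertia values are illustrative assumptions and the gyroscopic motor terms are omitted for brevity.

```python
# Angular accelerations from (2.14). Inertias are illustrative placeholders,
# not the thesis's identified parameters; gyroscopic motor terms set to zero.
IXX, IYY, IZZ = 8.1e-3, 8.1e-3, 1.4e-2  # kg*m^2

def angular_accel(rates, torques):
    """rates = (phi_dot, theta_dot, psi_dot),
    torques = (tau_phi, tau_theta, tau_psi)."""
    dphi, dtheta, dpsi = rates
    tphi, ttheta, tpsi = torques
    return (
        (dtheta * dpsi * (IYY - IZZ) + tphi) / IXX,
        (dphi * dpsi * (IZZ - IXX) + ttheta) / IYY,
        (dphi * dtheta * (IXX - IYY) + tpsi) / IZZ,
    )
```

With zero body rates the coupling terms vanish and each axis reduces to τ/I, and with I_xx = I_yy the yaw axis sees no rate coupling at all.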

Newton's second law, ma = F, can be used to determine the translational movement of the body in reference to the N-frame [19].

Studying Newton's second law one can describe the linear dynamics of the body. The rotational matrix R_NB (2.19) is used to convert the gravity from the N-frame to the B-frame. The forces taken into consideration are the gravity [0, 0, −g]ᵀ acting in the z-direction, and the thrust [0, 0, Σ Tᵢ]ᵀ in the B-frame counteracting this force. Further input is the aerodynamic drag [k_d u², k_d v², k_d w²]ᵀ from (2.10). The centripetal force ω_B × mυ_B, which depends on the body's linear and angular velocities υ_B = [u, v, w]ᵀ and ω_B = [p, q, r]ᵀ and its mass m, also acts on the body, resulting in

$$m\begin{bmatrix}\dot u\\ \dot v\\ \dot w\end{bmatrix} = R_{NB}\begin{bmatrix}0\\ 0\\ -mg\end{bmatrix} + \begin{bmatrix}0\\ 0\\ \sum_{i=1}^{4} T_i\end{bmatrix} + \begin{bmatrix}p\\ q\\ r\end{bmatrix}\times m\begin{bmatrix}u\\ v\\ w\end{bmatrix} - \begin{bmatrix}k_d u^2\\ k_d v^2\\ k_d w^2\end{bmatrix}. \tag{2.15}$$

Newton's second law (2.15) can be rewritten in the N-frame by neglecting the centripetal force acting on the airframe and converting with the rotational matrix. This gives (2.16), where the acceleration in the N-frame depends on the gravity and the thrust from the motors:

$$\begin{bmatrix}\ddot x\\ \ddot y\\ \ddot z\end{bmatrix} = \begin{bmatrix}0\\ 0\\ -g\end{bmatrix} + \frac{1}{m}\left(\sum_{i=1}^{4} T_i\right)\begin{bmatrix}C_\psi S_\theta C_\phi + S_\psi S_\phi\\ S_\psi S_\theta C_\phi - C_\psi S_\phi\\ C_\theta C_\phi\end{bmatrix} - \frac{1}{m}\begin{bmatrix}k_d \dot x^2\\ k_d \dot y^2\\ k_d \dot z^2\end{bmatrix}, \tag{2.16}$$

where C_x = cos(x) and S_x = sin(x).
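A minimal sketch of (2.16): gravity plus the rotated total thrust, minus the quadratic drag, evaluated in the N-frame. The mass, gravity and drag constants are illustrative assumptions, and the drag is written exactly as in (2.16), proportional to the squared velocity components.

```python
import math

# N-frame acceleration per (2.16); M_KG, G and KD are placeholder values.
M_KG, G, KD = 1.2, 9.81, 0.1

def nframe_accel(phi, theta, psi, thrust_total, vel=(0.0, 0.0, 0.0)):
    """Gravity + rotated total thrust - quadratic drag, in the N-frame."""
    c, s = math.cos, math.sin
    # third column of the rotation matrix: thrust direction in the N-frame
    tdir = (
        c(psi) * s(theta) * c(phi) + s(psi) * s(phi),
        s(psi) * s(theta) * c(phi) - c(psi) * s(phi),
        c(theta) * c(phi),
    )
    grav = (0.0, 0.0, -G)
    return tuple(
        g + thrust_total / M_KG * t - KD * v * v / M_KG
        for g, t, v in zip(grav, tdir, vel)
    )
```

A hover check: with level attitude and total thrust equal to m·g, all three acceleration components are zero.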

The acceleration of the body is calculated in the B-frame, and one can convert it to the N-frame using the rotational matrix as

$$\begin{bmatrix}\ddot x\\ \ddot y\\ \ddot z\end{bmatrix} = R_{BN}\begin{bmatrix}\dot u\\ \dot v\\ \dot w\end{bmatrix}. \tag{2.17}$$

The last resulting translational equation is the velocity of the airframe relative to the N-frame, which is

$$\begin{bmatrix}u\\ v\\ w\end{bmatrix} = R_{NB}\begin{bmatrix}\dot x\\ \dot y\\ \dot z\end{bmatrix}. \tag{2.18}$$


With this set of equations the six states of the body can be converted to transla-tional motion in the N-frame for use in simulations.


Rotational Conversion

Simulating the quadcopter requires conversion from the N-frame to the B-frame and vice versa. This is done by the rotational matrix

$$R_{NB} = \begin{bmatrix} C_\psi C_\theta & C_\psi S_\theta S_\phi - S_\psi C_\phi & C_\psi S_\theta C_\phi + S_\psi S_\phi\\ S_\psi C_\theta & S_\psi S_\theta S_\phi + C_\psi C_\phi & S_\psi S_\theta C_\phi - C_\psi S_\phi\\ -S_\theta & C_\theta S_\phi & C_\theta C_\phi \end{bmatrix} \tag{2.19}$$

according to [21], where ϕ, θ and ψ are the attitude angles roll, pitch and yaw, and C_x = cos(x), S_x = sin(x). The rotational matrix converts vectors from one coordinate system to the other; an example of its use is

$$\begin{bmatrix}\dot x\\ \dot y\\ \dot z\end{bmatrix} = R_{BN}\begin{bmatrix}u\\ v\\ w\end{bmatrix}. \tag{2.20}$$

The rotational matrix can also be used in the inverse direction, from the N-frame to the B-frame. This is done by inverting the matrix; due to the matrix's orthogonality this is easily done via R_NB = R_BN⁻¹ = R_BNᵀ.
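The orthogonality property, inverse equals transpose, is easy to verify numerically. The sketch below builds the matrix of (2.19) and checks that RᵀR is the identity; the helper functions are hypothetical names introduced here, not code from the thesis.

```python
import math

def rot_matrix(phi, theta, psi):
    """The rotation matrix of (2.19), C_x = cos(x), S_x = sin(x)."""
    c, s = math.cos, math.sin
    return [
        [c(psi)*c(theta), c(psi)*s(theta)*s(phi) - s(psi)*c(phi), c(psi)*s(theta)*c(phi) + s(psi)*s(phi)],
        [s(psi)*c(theta), s(psi)*s(theta)*s(phi) + c(psi)*c(phi), s(psi)*s(theta)*c(phi) - c(psi)*s(phi)],
        [-s(theta),       c(theta)*s(phi),                        c(theta)*c(phi)],
    ]

def transpose(m):
    return [list(col) for col in zip(*m)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

Because the matrix is orthogonal, `matmul(transpose(R), R)` should return the 3×3 identity up to floating-point error, which is exactly the R⁻¹ = Rᵀ property used in the text.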

Further parameters requiring transformation are the Euler angle rates. Transforming these from the B-frame to the N-frame is done by

$$\begin{bmatrix}\dot\phi\\ \dot\theta\\ \dot\psi\end{bmatrix} = H_{BN}\begin{bmatrix}p\\ q\\ r\end{bmatrix}, \tag{2.21}$$

where p, q, r are the angular velocities in the B-frame. The conversion uses the Euler kinematic equation matrix H_BN [9], which gives the rate of change of the Euler angles. This is needed because changes in roll and pitch influence the final yaw axis, so the mapping is only trivial for infinitesimally small changes. H_BN is derived by studying each axis separately and determining its effect on the rotational axes, resulting in

$$H_{BN} = \begin{bmatrix} 1 & S_\phi T_\theta & C_\phi T_\theta\\ 0 & C_\phi & -S_\phi\\ 0 & S_\phi/C_\theta & C_\phi/C_\theta \end{bmatrix}, \tag{2.22}$$

where C_x = cos(x), S_x = sin(x) and T_x = tan(x).
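The mapping (2.21)-(2.22) can be sketched as follows. Note the divisions by cos θ, which make the matrix singular at θ = ±90° (gimbal lock); this sketch does not guard against that case.

```python
import math

def euler_rate_matrix(phi, theta):
    """The Euler kinematic matrix H_BN of (2.22); singular at theta = +-90 deg."""
    sp, cp = math.sin(phi), math.cos(phi)
    tt, ct = math.tan(theta), math.cos(theta)
    return [
        [1.0, sp * tt, cp * tt],
        [0.0, cp, -sp],
        [0.0, sp / ct, cp / ct],
    ]

def body_to_euler_rates(p, q, r, phi, theta):
    """(phi_dot, theta_dot, psi_dot) = H_BN * (p, q, r), per (2.21)."""
    H = euler_rate_matrix(phi, theta)
    return tuple(H[i][0] * p + H[i][1] * q + H[i][2] * r for i in range(3))
```

At level attitude (ϕ = θ = 0) the matrix reduces to the identity, so body rates and Euler rates coincide, which matches the intuition that the coupling only appears once the airframe tilts.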



Controllers

The quadcopter is inherently unstable in open loop. Control of such a system is enabled by manipulating the motor outputs through a controller, which is the basic idea behind the control system in a quadcopter. The system makes tiny adjustments to the motors at a significant rate, stabilizing the platform. If the pilot requests a change in attitude, the control system calculates the correct control signals and increases or decreases the angular velocities of the motors accordingly. There are several ways of doing this, but the most common is with the use of three controllers, one for each axis: roll, pitch and yaw [18], [1].

PID Controllers

The most common controller is the PID controller, which is a simple yet effective way of controlling systems [11]. It consists of three parts, the proportional (P), integral (I) and derivative (D) components, each playing a different role in improving the control of the system:

$$u(t) = K_P e(t) + K_I \int_0^t e(\tau)\,d\tau + K_D \frac{d}{dt}e(t). \tag{2.23}$$

The most basic controller is the P controller, which only corrects the error by a proportional constant, with its inherent drawbacks: it tends to leave a static error between the desired reference and the output. With the use of an integral part this is significantly reduced, but it often increases the oscillations in the system. This is, however, reduced with the introduction of a derivative part in the controller, enabling faster change in the control signal.

Tuning controllers can be done by different methods. One example is the Ziegler-Nichols method [8], based on bringing the system into oscillation. This is done by adding a P controller and incrementally increasing the gain until the system becomes unstable. At that point the system has reached its critical gain K_u, with oscillation period T_u, from which the parameters K_P, K_I, K_D of the controller (2.23) can be designated. Often, however, K_I and K_D are instead expressed through T_I = K_P/K_I and T_D = K_D/K_P, as shown in Table 2.1. The Ziegler-Nichols method should not be considered an optimal solution, but a method to quickly obtain a functional controller.

Table 2.1: Tuning parameters according to the Ziegler-Nichols method.

Controller | K_P    | T_I    | T_D
P          | 0.5K_u |        |
PI         | 0.4K_u | 0.8T_u |
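A minimal discrete-time PID in the form of (2.23), together with the PI row of Table 2.1. This is a generic sketch, not the controller implemented in the thesis, and the plain Euler integration and backward differencing are assumptions.

```python
class PID:
    """Discrete-time PID per (2.23), with sample time ts."""

    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """One control update for the current error sample."""
        self.integral += error * self.ts                   # I: Euler integration
        derivative = (error - self.prev_error) / self.ts   # D: backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def ziegler_nichols_pi(ku, tu):
    """PI gains from Table 2.1: KP = 0.4*Ku, TI = 0.8*Tu, KI = KP/TI."""
    kp = 0.4 * ku
    return kp, kp / (0.8 * tu)
```

Note that `prev_error` starts at zero, so the derivative term gives a kick on the first sample; real implementations usually filter or initialize it.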


Cascade Control

A common method for controlling quadcopters is the cascade control principle [1], [3], as shown in Figure 2.9.

Figure 2.9: The general principle of cascade control.

It is based on different loops of control, where the inner loop is significantly faster than the outer [8], both with regard to the dynamics of the system and its update frequency.

This is most easily explained with the use of transfer functions [11]. The inner loop of the system can be written as

$$Y_2(s) = \frac{F_2(s)G_2(s)}{1 + F_2(s)G_2(s)} R_2(s), \tag{2.24}$$

where Y_2(s) is the output of the inner loop, F_2(s) is the controller, G_2(s) is the inner system and R_2(s) is the reference signal. The closed inner loop is further written as

$$G_{inner}(s) = \frac{F_2(s)G_2(s)}{1 + F_2(s)G_2(s)}. \tag{2.25}$$

The overall system can then be written by the same method as

$$G_{outer}(s) = \frac{F_1(s)G_{inner}(s)G_1(s)}{1 + F_1(s)G_{inner}(s)G_1(s)}, \tag{2.26}$$

where one can see that the inner control loop is a major part of the outer. If, however, it is possible to tune the inner controller F_2(s) such that G_inner(s) ≈ 1, the outer system becomes

$$G_{outer}(s) = \frac{F_1(s)G_1(s)}{1 + F_1(s)G_1(s)}. \tag{2.27}$$

Tuning these kinds of controllers can be done by conventional methods, but one always starts by tuning the inner loop to fulfil G_inner(s) ≈ 1.


Global Navigation Satellite Systems

In the majority of autonomous projects conducted, Global Navigation Satellite Systems (GNSS) is used for position measurements.


It is a satellite-based system which measures the time it takes for signals to travel from several satellites to a receiver. By triangulating these signals one can obtain a fairly accurate position estimate of the receiver [6] in good conditions.

Because the earth is approximately spherical, these values are returned in the spherical coordinate system WGS84, in which latitude and longitude are its angles. Longitude is referenced to the zero meridian through Greenwich, and latitude is referenced to the equator. When calculating the difference between two GNSS points, the earth's curvature needs to be taken into consideration, as can be seen in (2.28)-(2.33). This is done by using the haversine formula [2], which calculates the distance between two points on a sphere, as illustrated in Figure 2.10.

Figure 2.10: Illustration of the haversine distance d between two spherical coordinates.


$$\phi_1 = X_{Lat,1} \tag{2.28}$$
$$\phi_2 = X_{Lat,2} \tag{2.29}$$
$$\Delta\phi = X_{Lat,2} - X_{Lat,1} \tag{2.30}$$
$$\Delta\lambda = Y_{Lon,2} - Y_{Lon,1} \tag{2.31}$$

The positions in latitude and longitude are shortened as X and Y. By calculating the difference between the points, Δϕ and Δλ are obtained. These are further used in the haversine formula in (2.32)-(2.33), which uses basic trigonometric functions to derive the distance between the two points:

$$a = \sin^2\!\left(\frac{\Delta\phi}{2}\right) + \cos(\phi_1)\cos(\phi_2)\sin^2\!\left(\frac{\Delta\lambda}{2}\right) \tag{2.32}$$

$$d = 2R \cdot \operatorname{atan2}\!\left(\sqrt{a}, \sqrt{1-a}\right). \tag{2.33}$$

The calculated distance d between the two points is obtained for a given earth radius R of about 6371 km, where atan2 is the arctangent function with compensation for the signs of its inputs.
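The chain (2.28)-(2.33) can be collected into one small function. This is a standard haversine implementation, assuming inputs in degrees and a mean earth radius of 6371 km.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean earth radius in metres

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points, (2.28)-(2.33).
    Inputs in degrees; converted to radians before the trigonometry."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.atan2(math.sqrt(a), math.sqrt(1 - a))
```

A convenient sanity check: along a meridian (Δλ = 0) the formula collapses to the arc length R·Δϕ, so one degree of latitude is roughly 111.2 km.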



The system has, however, a few limitations. The most noticeable is the lack of satellite signal indoors, which makes it an outdoor-only system, with rapidly decreasing reception or major distortion when tall objects interfere with the signal.

Figure 2.11: Interference of tall objects with the signal, causing faulty position measurements.

If the system is within range of tall objects, something called multipathing [6] can occur. The signal reflects off nearby tall objects and thereby travels a greater distance between the satellite and the receiver, as can be seen in Figure 2.11, resulting in a faulty position acquired from the satellites. This weakness makes satellite navigation unsuitable for accurate city navigation, where multipathing between buildings can result in severe deviation.


Complementary Filter

A complementary filter is a lightweight, non-model-based sensor fusion filter designed for easy use of different sensors, such as low-rate GNSS together with high-rate sensors, e.g. gyroscopes. The theory behind the filter is based on the availability of sensors; for an application with two sensors the formulation will be [17], [12]

$$y_1(t) = s(t) + n_1(t) \tag{2.34}$$
$$y_2(t) = s(t) + n_2(t), \tag{2.35}$$

where y_i(t) is the sensor output, s(t) is the signal portion and n_i(t) is the sensor noise.


Figure 2.12: The principle layout of the complementary filter, with two sensor inputs y_1, y_2, two filters G_1, G_2 and one output ŝ.

With a problem formulation as in Figure 2.12, one can write the problem as

$$\hat S(s) = G_1(s)Y_1(s) + G_2(s)Y_2(s). \tag{2.36}$$

The task is then to find the two filters G_1 and G_2 that extract Ŝ(s), which is the estimated filter output, e.g. the position of a car. This signal can however be distorted, since the two filters G_1, G_2 can amplify or weaken the signal. This is corrected by introducing

$$G_1(s) + G_2(s) = 1, \tag{2.37}$$

ensuring that the total gain of the filter is constant with the new setup. The new filter can be seen in Figure 2.13.

+ + 1-G2 G2 y2 s^ y1

Figure 2.13: The improved version of the filter, with G1 replaced by 1 − G2.
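As a sketch of the principle (not code from the thesis), a discrete two-sensor complementary filter can be written in a few lines. Here G2 is chosen as a first-order low-pass with gain (1 − α), so G1 = 1 − G2 and the unity-gain condition (2.37) holds; the signals and the value of α are illustrative.

```python
def complementary_filter(y1, y2, alpha=0.98):
    """Fuse two measurements of the same signal s(t).

    y1: high-rate sensor samples (accurate over short horizons, may drift),
    y2: low-rate/absolute sensor samples (noisy but unbiased).
    G2 acts as a low-pass with gain (1 - alpha), so G1 = 1 - G2
    and the total gain stays unity, as required by (2.37).
    """
    s_hat = [y2[0]]
    for k in range(1, len(y1)):
        # propagate with the increment of y1, slowly correct towards y2
        s_hat.append(alpha * (s_hat[-1] + y1[k] - y1[k - 1])
                     + (1 - alpha) * y2[k])
    return s_hat
```

The drift of y1 is suppressed at low frequencies while the noise of y2 is suppressed at high frequencies, which is exactly the division of labour between G1 and G2.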




In the following chapter, the method used to develop a simulation environment for the quadcopter is explained. This was done by creating a dynamic model of the quadcopter from the physical models in Chapter 2, with the use of Matlab and Simulink. This model was thereafter used in the development of different control algorithms and state estimations of the car.

The first stage of this thesis project was to introduce a model for simulations, in which testing and evaluation of algorithms could be performed on the quadcopter in its desired application. The final task of the project would be to validate the algorithms on a prototype, but since the quadcopter was not yet specified, the only option was to estimate the required physical constants from others' work.


Thrust and Torque

The specific dynamics of the motor could not be derived from a mathematical approach, because the system identification data explained in Sections 2.1.4–2.1.5 were not available. One can however consider the dynamics of the motor to be generic for similarly sized BLDC motors and propellers with the same Kv, the specific rotation rate per volt. Hence a dynamic model of a motor was obtained from [20], where the model is the first-order system described in (3.1) and Figure 3.1.

G(s) = 0.9 / (0.178s + 1) (3.1)



Figure 3.1: The dynamics of the motor in a step response.
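The first-order model (3.1) is easy to reproduce numerically; the sketch below integrates G(s) = 0.9/(0.178s + 1) with forward Euler for a unit step input (the step size and horizon are chosen for illustration).

```python
def motor_step_response(K=0.9, tau=0.178, dt=0.001, t_end=1.6):
    """Unit step response of G(s) = K / (tau*s + 1),
    integrated with forward Euler: y' = (K*u - y) / tau, u = 1."""
    y, out = 0.0, []
    for _ in range(int(t_end / dt)):
        out.append(y)
        y += dt * (K - y) / tau
    return out
```

The response settles at the static gain 0.9 and reaches about 63 % of it after one time constant (0.178 s), matching the shape of Figure 3.1.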

The model and the dynamics of the motor were identified from the previous study [20]. The remaining required data were the motor constants CT and CQ. These parameters should also have been obtained through system identification, but this was not possible. They were instead calculated by introducing



ωm,max = (2π/60) Kv vBattery, (3.2)

which estimates the maximum angular velocity of the motor, ωm,max, from the constant Kv, the specific RPM per volt of the motor, and the known battery voltage vBattery.

Since the quadcopter was not specified, the system had to be estimated for its desired application. When dimensioning a quadcopter's motors, a rule of thumb is a power-to-weight ratio of ≈ 2, since the quadcopter needs enough margin for both lift and stabilization. With an approximated mass of the system, a desired steady state at ωm,50%, i.e. 50 % throttle, could be designed. A steady state is achieved when mg = 4T, where m is the total mass of the system, g is the gravity acting on the body and T is the thrust from each motor. Since T is a function of the unknown motor constant CT and the angular velocity ωm from (2.4), one can extract CT from the mass and the angular velocity of the motors at half throttle, resulting in

CT = mg / (4ω²50%). (3.3)

By solving this equation, CT was calculated for the specific motor and system. Estimating CQ for the motor from (2.6) was even further complicated without measurements; it was instead estimated from existing tests according to [20]. The determined motor constants are given in Table 3.1.
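The two estimates above can be sketched as follows. The Kv and battery-voltage values in the test are assumptions for illustration (the AX-2810Q motor in Table 5.3 is rated 750 KV, and a 3-cell 11.1 V battery is assumed), not measured data from the thesis.

```python
import math

def max_motor_speed(kv_rpm_per_volt, v_battery):
    """Eq. (3.2): convert Kv [RPM/V] and battery voltage to rad/s."""
    return 2.0 * math.pi / 60.0 * kv_rpm_per_volt * v_battery

def thrust_constant(m, omega_50, g=9.81):
    """Eq. (3.3): solve the hover condition m*g = 4*C_T*omega_50**2."""
    return m * g / (4.0 * omega_50 ** 2)
```

For example, max_motor_speed(750, 11.1) gives roughly 870 rad/s, in the same range as the ωmax of Table 3.2.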


Table 3.1: Motor constants.

Constant  Value
CT        1.5e-6
CQ        2.9e-8

Figure 3.2: Step response of a motor and its effective lift force.

Table 3.2: The maximum angular velocity and the thrust generated by the quadcopter.

ωmax  800 rad/s
Tmax  40 N


Aerodynamic Forces

The aerodynamic forces acting on the body depend on the square of the velocity (2.9). For the simulations, the quadcopter was required to be in a steady state at maximum forward propulsion, resulting in a terminal forward velocity. Since this could not be tested without a finished prototype, the drag coefficient was estimated at a chosen top speed from the dynamics explained in Section 2.1.7. The estimated top speed was set to approximately 20 m/s at a 45° forward pitch. With the known maximum forward velocity ẋ, one can estimate the drag force Fd acting on the body, since the quadcopter is in a steady state at full throttle, ωmax. With this knowledge of the system one can estimate the lumped drag coefficient kd shown in

Fd = kd u² ⇔ kd = Fd / u² = sin(ϕ) Σₙ₌₁⁴ CT ω²max,n / u². (3.4)

This resulted in kd = 976, which was then implemented in the simulations.


Table 3.3: Aerodynamic constants used in the dynamic model.

Name  Value  Unit
kd    976    N s²/m²
umax  20     m/s


Inertia And Mass

The quadcopter's inertia is considered symmetrical with reference to the CoG. To increase the accuracy of the inertia, a model of the quadcopter was examined in CAD software for simplified calculation, as can be seen in Figure 3.3.

Table 3.4: Inertia constants [kg m²].

Ixx  0.0083
Iyy  0.0083
Izz  0.0163

The mass is coupled with the inertia and is required for the inertia matrix to be calculated. The mass of the quadcopter was estimated to m = 1.2 kg.


Figure 3.3: The CAD model used for inertia calculations.

The total mass of the quadcopter is one of the more important factors of the model. It is proportional to the inertia, which affects fast cornering, and it influences the required thrust for steady hovering. Hence, a lower mass gives significantly more efficient propulsion, resulting in longer flight time and a more agile platform.




In the following chapter the simulation environment is explained.


Coordinate Systems

The control system needed a unified coordinate system for the car and the quadcopter. Two options were taken into consideration: a body-fixed coordinate system of the car, and an earth-fixed coordinate system in terms of longitude and latitude.

It was concluded that an earth-fixed coordinate system would be more reliable and more easily implemented. The origin of the coordinates was set at the quadcopter's starting position.

Figure 4.1: Coordinate system.



Dynamic Model

The dynamic model was set up in Simulink with the physical properties investigated in Chapter 3. The calculations in the model use the angular velocities of the motors as input to estimate the movement of the quadcopter.

Table 4.1: The calculated states of the quadcopter and their initial values.

State  Value  Unit   State  Value  Unit
x      -10    m      u      0      m/s
y      -10    m      v      0      m/s
z      0      m      w      0      m/s
ϕ      0      °      p      0      °/s
θ      0      °      q      0      °/s
ψ      0      °      r      0      °/s

In the simulations, all twelve states shown in Table 4.1 are calculated. This was done by implementing (2.16)–(2.17) in Simulink. The method requires initial states for the quadcopter to begin calculations, also shown in Table 4.1.



The main calculations were derived from the equations of motion (2.14). In the used method, one can substitute the N-frame angular rates ϕ̇, θ̇, ψ̇ with the B-frame angular rates p, q, r, since the difference is small for small angles. The change of angular rate in the B-frame is calculated from the external influence of the motors τϕ, τθ, τψ and the gyroscopic effects τgyro. For an infinitesimal time step one can use

[ṗ, q̇, ṙ]ᵀ = I⁻¹ ( [τϕ + τgyro, τθ + τgyro, τψ]ᵀ + [qr(Iyy − Izz), pr(Izz − Ixx), pq(Ixx − Iyy)]ᵀ ). (4.1)

Since all of these calculations are done over very small time steps, one can use the approximation ṗ ≈ ∆p/∆t; multiplying by ∆t gives the angular velocity state p. Once the angular rates are known, one can calculate the Euler angle rates in the N-frame by using (2.20),

[ϕ̇, θ̇, ψ̇]ᵀ = HBN [p, q, r]ᵀ, (4.2)


which uses the same kind of approximation as (4.1). The last set of equations solved comes from Newton's second law, converted to the B-frame according to (2.15):

[u̇, v̇, ẇ]ᵀ = (1/m) [0, 0, Σᵢ₌₁⁴ CT ωᵢ²]ᵀ + RNB [0, 0, −g]ᵀ − [p, q, r]ᵀ × [u, v, w]ᵀ − [kd u², kd v², kd w²]ᵀ (4.3)

The equation was approximated by the same principle as (4.1). The first term represents the total lift force from the motors. The second term is the gravity in the N-frame, converted to the B-frame by RNB. The third term represents the centripetal force acting on the body, and the last term adds the aerodynamic forces as a function of the quadcopter's B-frame velocity u, v and w.
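A minimal numeric sketch of one integration step of the rotational dynamics (4.1), assuming the diagonal inertia values of Table 3.4; the torque inputs are placeholders for whatever the motor mixing produces.

```python
def angular_rate_step(pqr, tau, inertia=(0.0083, 0.0083, 0.0163), dt=0.001):
    """One forward-Euler step of the B-frame rotational dynamics (4.1).

    pqr: current angular rates (p, q, r) [rad/s],
    tau: applied torques (tau_phi + tau_gyro, tau_theta + tau_gyro, tau_psi).
    """
    p, q, r = pqr
    Ixx, Iyy, Izz = inertia
    p_dot = (tau[0] + q * r * (Iyy - Izz)) / Ixx
    q_dot = (tau[1] + p * r * (Izz - Ixx)) / Iyy
    r_dot = (tau[2] + p * q * (Ixx - Iyy)) / Izz
    return (p + dt * p_dot, q + dt * q_dot, r + dt * r_dot)
```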


Control Systems

The objective of the thesis is primarily the position control of the quadcopter. With this in mind, the control system was heavily influenced by the existing cascade control of the Pixhawk flight controller [1].

The quadcopter has a number of physical properties which need to be controlled for a functional, stabilized vehicle:

• Altitude
• Pitch
• Roll
• Yaw
• Position


Altitude Control

The altitude control consists of three controllers in a cascade configuration, as can be seen in Figure 4.2.

Figure 4.2: The altitude control system modelled in Simulink.


The outer loop is a P controller calculating the error between the desired altitude and the actual altitude of the quadcopter, with a saturation. The second controller is the rate controller, a PD controller whose output is limited to 1.5 m/s². The last controller's saturation is set at 2.5 m/s, limiting the quadcopter from going faster than the desired threshold.
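A sketch of the cascade structure (outer P on altitude, inner PD on climb rate) with the saturations stated above; the gains here are illustrative placeholders, not the tuned values from the Simulink model.

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

class AltitudeCascade:
    """Outer P controller -> climb-rate setpoint (saturated at 2.5 m/s),
    inner PD controller -> acceleration command (limited to 1.5 m/s^2)."""

    def __init__(self, kp_pos=1.0, kp_rate=3.0, kd_rate=0.5, dt=0.01):
        self.kp_pos, self.kp_rate, self.kd_rate = kp_pos, kp_rate, kd_rate
        self.dt = dt
        self.prev_err = 0.0

    def step(self, z_ref, z, z_rate):
        rate_ref = clamp(self.kp_pos * (z_ref - z), -2.5, 2.5)
        err = rate_ref - z_rate
        d_err = (err - self.prev_err) / self.dt
        self.prev_err = err
        return clamp(self.kp_rate * err + self.kd_rate * d_err, -1.5, 1.5)
```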


Euler Angles

Controlling roll and pitch was done in a similar manner to the altitude. The cascade loop has however one controller less and does not use the accelerometer, as seen in Figure 4.3. It uses a P controller for the angle of the quadcopter and an inner PID controller for the angular rate.


Figure 4.3: The attitude control system modelled in Simulink.


Position Controller Design

To solve the main objective of the thesis, autonomously following a car on a racetrack, the most effort was put into developing the position controller. This was done by implementing a position controller in Simulink, where algorithms could easily be tested and validated, as shown in Figure 4.4.


The position controller used the position from the control command block, generating the desired trajectory from the car’s estimated position.

As pointed out in Section 4.1, the raw data from the GNSS sensor are given in the N-frame and thereby need conversion to the B-frame for use in the control system, i.e. the distance errors ∆U, ∆V in the B-frame are obtained from ∆x, ∆y in the N-frame. This was done with the two-dimensional version of the rotation matrix (2.19), where ψ is the heading of the quadcopter in the N-frame.

∆U = ∆x Cψ + ∆y Sψ (4.4)

∆V = ∆y Cψ − ∆x Sψ (4.5)

During this conversion it was considered that only the planar translation was required, since no major change in the z-direction would be seen on a frozen lake.
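Equations (4.4)–(4.5) amount to a planar rotation by the heading ψ; a minimal sketch:

```python
import math

def n_to_b_error(dx, dy, psi):
    """Rotate the N-frame position error (dx, dy) into B-frame
    errors (dU, dV) using (4.4)-(4.5); psi is the heading in radians."""
    du = dx * math.cos(psi) + dy * math.sin(psi)
    dv = dy * math.cos(psi) - dx * math.sin(psi)
    return du, dv
```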


Figure 4.5: Data transfer between the different modules.


Position Controller

The simplest method of controlling the desired velocity of the quadcopter uses two controllers, one for each direction u and v in the B-frame. These controllers convert the desired velocity into desired pitch and roll.

The controllers are in a cascade setup, as can be seen in Figure 4.6, with an outer P controller converting the position error into a desired velocity. The inner loop controls the rate with a PID controller.


Figure 4.6: The Simulink model of the position control system.

As a protective measure, a limit was set to the desired angle of the quadcopter to be no greater than 40◦. This was done to ensure the safety of the quadcopter which otherwise could behave unpredictably for greater angles.


Table 4.2: Parameters of the controller used in the position control block.

Mode       P     I     D     Yaw
Slow Mode  0.12  0.05  0.06  1
Fast Mode  0.2   0.01  0.12  0.3


Slow and Fast Mode

The issue with performing a change in heading while accelerating is that the desired signals contradict each other. To achieve a large change in heading, the relative angular velocity between the motor pairs must be high, hence two motors must run at low angular velocity. This inhibits the acceleration, which requires all four motors to run at maximum angular velocity. To prevent this contradiction, a fast and a slow mode were used, giving yaw a lower priority than roll and pitch, as shown in Table 4.2.



The control systems on the quadcopter were primarily tuned according to previous knowledge of the control loops from [1]. This is useful for a rough tuning of the parameters of the cascade control, which otherwise can be a time-consuming task.

The position control system was however made from scratch, and no previous knowledge of it was available. For this, the Ziegler–Nichols method explained in Section 2.2.3 was used to get a rough estimate of the controller parameters. Further analysis and testing of different parameters led to the parameters shown in Table 4.2.



Before simulations were carried out, data was gathered from one of the BorgWarner test vehicles. The GNSS module was placed on the roof of the test vehicle for best reception. Different test scenarios were logged from the Volvo S60 shown in Figure 4.7 and used for evaluation of the objective.



Logged Data

Data logging was done using the RaceLogic VB10SPS GPS sensor, which outputs the raw navigation data onto a CAN bus. This was connected to the CANCase XL, which decodes the CAN bus. The CANCase XL was also connected to the internal CAN of the car for sensor logging. The car's dataset consisted of accelerometer, gyroscope and wheel speed signals.


Figure 4.8: Schematic of the data logging setup.

The data collected for trajectory analysis can be seen in Figure 4.9. The different sectors are based on various characteristics, such as a slow twisty section and a fast section with fewer corners.

• The first sector is a low-speed sweeping trajectory.
• The second sector is a medium-velocity sector with corners at low velocity.
• The third sector is a high-velocity stage with hard cornering.



Figure 4.9: The sectors used for algorithm evaluation.

The logging system was improved with a C# program sending and receiving data from the Pixhawk flight controller, further called the ground station. This was done according to the schematic in Figure 4.10. The car's GNSS module sends its coordinates to the CANCase, which forwards them to the ground station via a Universal Asynchronous Receiver/Transmitter (UART). The package is converted to a preset CAN message for the logging system (CANalyzer), which receives and timestamps all data from all devices for easy data analysis.


Figure 4.10: Overview of the overall system.


Shortest Path Trajectory

Generating a trajectory which cuts corners could help the quadcopter by smoothing out hard cornering and maintaining momentum. A simple method for generating such a reference trajectory is given in Algorithm 1.


Algorithm 1 Shortest Path Algorithm.
∆x ← xQuad − xCar
∆y ← yQuad − yCar
Ψ ← atan2(∆x, ∆y)
xRef ← xCar + ddistance · cos(Ψ)
yRef ← yCar + ddistance · sin(Ψ)

The algorithm calculates the shortest-path direction from its previous position and then estimates where the quadcopter is supposed to be, at a given distance ddistance from the target. This is then low-pass filtered using the filter in (4.6), smoothing the data and creating a flowing trajectory from the measured data.

F(z) = 0.01867 / (z − 0.9813) (4.6)
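Algorithm 1 and the filter (4.6) can be sketched as below. Note one assumption: the printed algorithm uses Ψ ← atan2(∆x, ∆y), while this sketch uses the conventional atan2(∆y, ∆x) so that cos(Ψ) pairs with the x-component.

```python
import math

def shortest_path_ref(x_quad, y_quad, x_car, y_car, d):
    """Place the reference d metres from the car, on the line
    from the car towards the quadcopter (Algorithm 1)."""
    psi = math.atan2(y_quad - y_car, x_quad - x_car)
    return x_car + d * math.cos(psi), y_car + d * math.sin(psi)

class LowPass:
    """First-order smoother F(z) = 0.01867 / (z - 0.9813) from (4.6);
    its DC gain 0.01867 / (1 - 0.9813) is approximately 1."""

    def __init__(self, b=0.01867, a=0.9813):
        self.b, self.a, self.y = b, a, 0.0

    def step(self, u):
        self.y = self.a * self.y + self.b * u
        return self.y
```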


Sensor Fusion

The configuration of the car and the quadcopter is a setup with a wide range of systems and sensors. The car carries sensors primarily used by safety systems such as ABS and ESP. The signals these systems use could be accessed from the CAN bus, as can be seen in Table 4.3.

Table 4.3: Available signals for use from the car.

Signal            Source
Accelerometer     Car
Gyro              Car
GNSS Coordinates  RaceLogic VB10SPS
Heading           RaceLogic VB10SPS

The quadcopter also uses a wide range of sensors which are being used for navigation. These are listed in Table 4.4.

Table 4.4: Available signals on the Pixhawk flight controller.

Accelerometer  x
Gyro           x
GNSS           x
Magnetometer   x
Barometer      x

By sensor fusion one could use the advantages of each sensor and minimize the error of the estimated position of either the car or the quadcopter.



Figure 4.11: Complementary filter for a position estimate of a car.

There are several ways of using the filters described in Section 2.2.5, depending heavily on the specific application and conditions. In this thesis project a complementary filter was used for heading and position estimation; an example is shown in Figure 4.11. This setup of the filter is however only functional if the vehicle maintains near-zero roll and pitch, which is assumed for a car running on a normal road.

The filter accomplishes an improved navigation estimate with the help of fast estimates from the INS sensors, corrected by the GNSS sensor over time.

For the car to be used as a reference signal, one needs to study the signal, in this case the GNSS position. One of the major problems with the GNSS is its 10 Hz update frequency, which limits the control loops running at 100 Hz. The simplest way to boost the update frequency was to use the angular rate sensor combined with the velocity of the car to complement the GNSS.

As shown in Figure 4.11, the finished filter actually contains two complementary filters: one which estimates the heading of the car, and one which ultimately calculates the new position of the car.

The first stage of the filter estimates the heading ψ̂k|k−1 by integrating the yaw rate ψ̇k from the gyroscope over the sample time Ts and adding it to the current heading value ψ̂k−1|k−1:

ψ̂k|k−1 = ψ̂k−1|k−1 + ψ̇k Ts (4.7)

This is then fed into the filter, correcting the estimated heading with the one from the GNSS sensor, ψGNSS,k. The filter is tuned with the parameter αh, resulting in

ψ̂k|k = ψ̂k|k−1 αh + ψGNSS,k (1 − αh) (4.8)

when a new GNSS heading is available, and otherwise

ψ̂k|k = ψ̂k|k−1. (4.9)

The second stage of the filter estimates the new coordinates of the car, x and y.

This was done with the same method as above, first propagating the position with the known speed vk and the estimated heading ψ̂k|k from (4.8):

x̂k|k−1 = x̂k−1|k−1 + vk Ts cos(ψ̂k|k) (4.10)
ŷk|k−1 = ŷk−1|k−1 + vk Ts sin(ψ̂k|k). (4.11)

Combining these with the measured values (xGNSS,k, yGNSS,k) from the GNSS, one gets an improved position estimate (x̂k|k, ŷk|k) when the parameter αp is tuned properly, resulting in (4.12) when a new GNSS position is available:

(x̂k|k, ŷk|k) = (x̂k|k−1, ŷk|k−1) αp + (xGNSS,k, yGNSS,k)(1 − αp). (4.12)

Otherwise, when no GNSS reading is available, the filter is

(x̂k|k, ŷk|k) = (x̂k|k−1, ŷk|k−1). (4.13)

The result of this method can be seen in Figure 4.12, where simulations were carried out to validate the improvement. Several different scenarios, such as low and high velocity, were tested.
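The two-stage filter (4.7)–(4.13) can be sketched as a small predict/correct class; the α values here are illustrative tuning parameters, not the ones used in the thesis.

```python
import math

class CarEstimator:
    """Complementary filter of (4.7)-(4.13): the gyro and the car's
    speed propagate heading and position at the fast loop rate, and
    the (slower) GNSS measurements pull the estimate back."""

    def __init__(self, x0=0.0, y0=0.0, psi0=0.0, alpha_h=0.98, alpha_p=0.95):
        self.x, self.y, self.psi = x0, y0, psi0
        self.ah, self.ap = alpha_h, alpha_p

    def predict(self, yaw_rate, speed, Ts):
        self.psi += yaw_rate * Ts                   # (4.7)
        self.x += speed * Ts * math.cos(self.psi)   # (4.10)
        self.y += speed * Ts * math.sin(self.psi)   # (4.11)

    def correct(self, psi_gnss=None, xy_gnss=None):
        if psi_gnss is not None:                    # (4.8)
            self.psi = self.ah * self.psi + (1 - self.ah) * psi_gnss
        if xy_gnss is not None:                     # (4.12)
            self.x = self.ap * self.x + (1 - self.ap) * xy_gnss[0]
            self.y = self.ap * self.y + (1 - self.ap) * xy_gnss[1]
```

Between GNSS samples only predict() runs, which is what raises the effective update rate from 10 Hz towards the 100 Hz control loop.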

Figure 4.12: GNSS measured data in circles, up-sampled data as a dotted line.




The thesis project required a prototype to be built for evaluation of the filming method. This was done with mostly open-source software and hardware for a user-friendly environment. The following chapter introduces the components used.


Flight Controller

The Pixhawk flight controller is an advanced autopilot built on an open-source hardware project, able to control a diverse range of vehicles. Consisting of a 32-bit microcontroller and sensors from STMicroelectronics, this hardware was chosen for its flexibility and open-source availability.

Figure 5.1: The Pixhawk flight controller.



The finished communication schematic, with the wireless connection between the Pixhawk and the C# program running on the computer, is shown in Figure 5.2.




Figure 5.2: The principle layout of the different components in the system.



The XBee modules are connected to the Pixhawk by UART, providing a simple interface between the components. The baud rate was set to 57600 as a compromise between the bandwidth and the robustness of the link.

Figure 5.3: XBee modules used for communication between the ground station and the quadcopter.



With the wireless communication between the Pixhawk and the ground station, a communication protocol was required. Such a protocol already exists: MAVLink, a generic data-management protocol for a variety of Micro Air Vehicles (MAVs). The general message layout is shown in Table 5.1:


Table 5.1:Message layout of the MAVLink protocol.

Byte Number Meaning Variables

1 Message Header 0xFE

2 Message Length 9-254

3 Sequence Number 0-255

4 System ID e.g. 0

5 Component ID e.g. 0

6 Message ID e.g. 0(Heartbeat)

n Payload Data

n+1 Checksum Data
n+2 Checksum Data

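As a sketch, the byte layout of Table 5.1 can be read from a received buffer as below. The checksum is only extracted, not verified (MAVLink v1 uses a CRC-16 with a per-message extra byte, which is omitted here).

```python
import struct

def parse_mavlink_frame(frame: bytes) -> dict:
    """Split a MAVLink v1 frame into the fields of Table 5.1."""
    if frame[0] != 0xFE:
        raise ValueError("not a MAVLink v1 frame")
    length = frame[1]
    return {
        "length": length,
        "seq": frame[2],
        "sysid": frame[3],
        "compid": frame[4],
        "msgid": frame[5],
        "payload": frame[6:6 + length],
        # two checksum bytes, little-endian
        "checksum": struct.unpack("<H", frame[6 + length:8 + length])[0],
    }
```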


Ground Station

The ground station and its GUI, seen in Figure 5.4, were written in Visual Studio, a Microsoft development environment with a wide range of options. The chosen language was C#, an object-oriented language suitable for a variety of applications.

The main layout of the program consists of three different modules:

• CAN
• Data
• Control

These modules each handle a specific area of the program, with every task running in a separate thread.

The CAN module handles the data transfer from the vehicle to the main program. This is done by the use of the CANCase XL, which decodes the data from the vehicles and makes it available to the program.

The CAN module also contains the data logging system, which outputs all the data from the quadcopter onto a virtual CAN connection. It syncs with CANalyzer, which then timestamps all the data, both from the quadcopter and the car/GNSS. This results in a single robust data logging system used for all units in the system.

The data module is simply a data storage unit for all the devices, where the CAN and Control tasks can get/set the data variables used in the control algorithm.

The Control module is the heart of the program. It consists of several procedures and commands making the program work according to the user's wishes. One can divide the program into three parts:

• Starting Procedure
• Mission
• Landing Procedure

Figure 5.4: The GUI developed for the ground station.


Quadcopter Specification

The quadcopter used for testing was chosen to be a small, lightweight quadcopter with enough power to carry a GoPro camera. With these specifications a platform was chosen according to Table 5.3.

Table 5.3: Quadcopter components used for the prototype.

Component  ID
Frame      REPTILE500-V3
Motors     AX-2810Q-750KV
ESC        Afro ESC 30Amp OPTO
Battery    Zippy Flightmax 3300mAh 30C
Gimbal     FeiYu Tech Mini 3D

The general parts for the quadcopter were assembled as the schematics in [15]. An option to increase the flight time with two batteries was available.

Quadcopter Design


Figure 5.5: Final quadcopter design.

The fully assembled quadcopter, which carried out the validation tests, has a total mass of 1.4 kg. It uses a single battery, with the option to add an additional battery for increased flight time at a decrease in efficiency.



The gimbal used in the project is the FeiYu Tech Mini 3D, a three-axis gimbal with motors mounted on the pitch, roll and yaw axes. This enables the gimbal to stabilize the camera about the roll axis and lets the user control the yaw and pitch angles.

Table 5.4: Gimbal specifications; the roll angle is always controlled by the gimbal itself.

Feature       Value
Pitch Angle   ±150°
Roll Angle    ±45°
Yaw Angle     ±120°
Heading Rate  75°/s
Pitch Rate    25°/s


Gimbal Modelling And Control

A third-party gimbal was acquired, which does not expose its raw sensor values. However, the motion specifications in Table 5.4 show that the dynamics of the gimbal are sufficiently quick for the desired objective.

The control system for the gimbal is an open-loop system based on the estimated states of the car and the quadcopter. Since the positions of both vehicles are known, the direction in which the camera should point can be computed. Because there is no image processing of the camera feed, a closed-loop system is not possible with the current hardware.


Figure 5.6: Gimbal coordinate system in reference to the B-frame.

The desired pitch angle ψPitch and heading ψHeading of the gimbal are calculated from the difference in position ∆x, ∆y and altitude ∆z between the quadcopter and the car as

ψPitch = arctan( ∆z / √(∆x² + ∆y²) ) (5.1)

in the E-frame. The secondary angle ψHeading is calculated from the difference between the quadcopter heading ψQ and the gimbal heading ψG as ψHeading = ψQ − ψG.


Integrating the gimbal with the flight controller was fairly simple. Each axis receives a PWM signal which sets a variable set point for the gimbal. The neutral state of 0° corresponds to a PWM value of 1500, and the min/max PWM range is (1000, 2000), so a heading angle of −120° corresponds to a PWM set point of 1000, and vice versa for the positive angle.
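The pitch command (5.1) and the linear angle-to-PWM mapping can be sketched as below; the clamping to the (1000, 2000) range is the only behaviour assumed beyond the text.

```python
import math

def gimbal_pitch_deg(dx, dy, dz):
    """Eq. (5.1): camera pitch from the position difference, in degrees."""
    return math.degrees(math.atan2(dz, math.hypot(dx, dy)))

def angle_to_pwm(angle_deg, limit_deg):
    """Map 0 deg -> 1500 and +-limit_deg -> 2000/1000, clamped."""
    pwm = 1500.0 + 500.0 * angle_deg / limit_deg
    return int(round(max(1000.0, min(2000.0, pwm))))
```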


Global Navigation Satellite Systems

Controlling the quadcopter relative to the car was done using two different GNSS sensors: a RaceLogic VB10SPS for the car and a U-Blox NEO-7 for the quadcopter. Both sensors acquire a 3D fix with a minimum of four visible satellites. The differences between the two sensors lie in the update frequency and the accuracy, measured as Circular Error Probability (CEP), as can be seen in Table 5.5.

Table 5.5: GNSS sensor specifications, with accuracy given as Circular Error Probability (CEP).

GNSS               Update Frequency [Hz]  Accuracy [m]    Channels
RaceLogic VB10SPS  10                     2.5 [95% CEP]   8
U-Blox NEO-7       5                      2.5 [50% CEP]   56

It was important to validate the data and measure the relative offset of the two sensors, since this could influence the final result. A test was conducted by placing the two sensors in the same location for 20 minutes.



Figure 5.7: A 20-minute measurement sequence from both GNSS sensors.

The relative difference between the two sensors was calculated from the mean values with (2.33). The absolute measured deviation was 1.23 m, which is considered a slight difference and not a major concern at the velocities used.
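The offset check can be reproduced by comparing the mean position of each sensor's sample set; the exact form of (2.33) is assumed here to be the Euclidean distance between the two means.

```python
import math

def mean_offset(samples_a, samples_b):
    """Distance between the mean positions of two (x, y) sample sets."""
    ax = sum(p[0] for p in samples_a) / len(samples_a)
    ay = sum(p[1] for p in samples_a) / len(samples_a)
    bx = sum(p[0] for p in samples_b) / len(samples_b)
    by = sum(p[1] for p in samples_b) / len(samples_b)
    return math.hypot(ax - bx, ay - by)
```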

Figure 5.8: The used U-Blox NEO-7 GPS.

Figure 5.9: The used RaceLogic VB10SPS (external receiver).



