
DEGREE PROJECT IN COMPUTER SCIENCE AND ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2019

Autonomous Recharging System

for Drones: Detection and Landing

on the Charging Platform

MARÍA ÁLVAREZ CUSTODIO


KTH Royal Institute of Technology

Degree program in Electrical Engineering

MASTER THESIS

Autonomous Recharging System for Drones:

Detection and Landing on the Charging Platform

María Álvarez Custodio

Supervisor: Patric Jensfelt

Examiner: Joakim Gustafson


Abstract

In recent years, the use of indoor drones has increased significantly in many different areas. However, one of the main limitations of the potential of these drones is the battery life. This is because the battery size has to be limited, since the drones have a maximum payload in order to be able to take off and maintain flight. Therefore, a recharging process needs to be performed frequently, involving human intervention and thus limiting the drones' applications.

In order to solve this problem, this master thesis presents an autonomous recharging system for a nano drone, the Crazyflie 2.0 by Bitcraze AB. By automating the battery recharging process, no human intervention is needed, and thus the overall mission time of the drone can be considerably increased, broadening the possible applications.

The main goal of this thesis is the design and implementation of a control system for the indoor nano drone, in order to control it towards a landing platform and accurately land on it. The design and implementation of an actual recharging system is carried out as well, so that in the end a complete, fully autonomous system exists.

Before this controller and system are designed and presented, a research study is first carried out to obtain a background and analyze existing solutions for the autonomous landing problem.

A camera is integrated together with the Crazyflie 2.0 to detect the landing station and control the drone with respect to this station position. A visual system is designed and implemented for detecting the landing station. For this purpose, a marker from the ArUco library is used to identify the station and estimate the distance to the marker and the camera orientation with respect to it.

Finally, some tests are carried out to evaluate the system. The flight time obtained is 4.6 minutes and the landing performance (the rate of correct landings) is 80%.

Keywords

Nano drone, Crazyflie 2.0, controller, autonomous recharging system, autonomous landing, ArUco.


Sammanfattning

In recent years, the use of indoor drones has increased significantly in many different areas. One of the biggest limitations of these drones is the battery life. This is because the battery size has to be limited, since the drones have a very limited maximum payload in order to be able to fly. They therefore need to be recharged often, which involves human intervention and thus limits the drone applications.

To solve this problem, this thesis presents an autonomous recharging system for a nano drone, the Crazyflie 2.0. By automating the battery recharging process no human intervention is needed, and the mission time of the drone can thereby be increased considerably, broadening the possible applications.

The main goal of this thesis is the design and implementation of a control system for an indoor drone, in order to steer it towards a landing platform and land correctly on it. The work also includes the actual recharging system, so that the end result is a completely autonomous system.

Before the controller and the system are designed and presented, a review of background material and an analysis of existing solutions for the autonomous landing problem are given.

A camera is mounted on the Crazyflie 2.0 to detect and locate the landing station and to control the drone with respect to it. For the detection, markers from the ArUco library are used, which also make it possible to compute the camera's position and orientation with respect to the marker and thereby the charging station.

Finally, tests are carried out to evaluate the system. The flight time obtained is 4.6 minutes and the landing performance (the rate of correct landings on the first attempt) is 80%.

Nyckelord

Nano drone, Crazyflie 2.0, controller, autonomous recharging system, autonomous landing, ArUco.


Contents

1 Introduction and goals
  1.1 Introduction
  1.2 Goals
  1.3 Assumptions
  1.4 Outline
2 Research
  2.1 Basic concepts
  2.2 Research study results
3 Platform description
  3.1 Crazyflie 2.0
    3.1.1 Hardware
    3.1.2 Flow deck
  3.2 ROS driver for Crazyflie
  3.3 Camera, transmitter and receiver
4 System design
  4.1 Proposed solution
  4.2 Position controller design
  4.3 Marker detector design
  4.4 Charging platform design
  4.5 System architecture design
5 Visual system
  5.1 Marker creation with ArUco module
  5.2 Marker detection with ArUco module
  5.3 Camera calibration with ArUco module
  5.4 Camera pose estimation using ArUco module
  5.5 Implementation
6 Drone controller
  6.1 Pose message transformation
  6.2 Estimate drone distance to landing point
  6.3 PID controller
  6.4 Landing
  6.5 Implementation
7 Landing and recharging platform
  7.1 Landing station
  7.2 Battery and battery charger
  7.3 Recharging system
    7.3.1 Reverse voltage protection
    7.3.2 System design and implementation
8 Evaluation and testing
  8.1 Battery life characterization
  8.2 Detection performance
  8.3 Landing performance
9 Conclusions and future work lines
  9.1 Conclusions
  9.2 Future work lines
A Annex I: Ethical, economic, social and environmental aspects
  A.1 Introduction
  A.2 Description of relevant impacts related to the project
  A.3 Detailed analysis of some of the main impacts
  A.4 Conclusions
B Annex II: Economic budget

List of Figures

1 Direction of rotation for each quadcopter's rotor
2 Quadcopter axes
3 Asctec Pelican quadcopter
4 3D model of Asctec Pelican base with landing foot installed inside the landing platform
5 Crazyflie 2.0
6 Crazyflie 2.0 expansion port pinout
7 Crazyflie 2.0 system architecture
8 Flow deck expansion board for Crazyflie 2.0
9 Module of camera and transmitter VM275T
10 Crazyflie 2.0 with the camera on board
11 Feature tracking over time
12 Overall system architecture
13 ArUco marker to identify the landing platform
14 Pinhole camera model
15 ChArUco board used to calibrate the camera
16 PnP problem formulation
17 Flow diagram of the visual system
18 Camera and Crazyflie 2.0 coordinate system
19 Problem formulation for estimating the distance to the landing point
20 PID controller
21 Flow diagram of the controller node
22 Landing platform
23 Alternatives for the protection circuit
24 Landing pads design
25 Overall landing and recharging platform
26 Battery life characterization with and without camera
27 Marker deformation problem

List of Tables

1 Effects of independently increasing the value of the PID parameters
2 Parameters of the four PID controllers
3 Marker detection performance
4 Dead angle estimation

1 Introduction and goals

1.1 Introduction

During the last years, the use of Unmanned Aerial Vehicles (UAVs), or drones, has increased considerably in many different applications and areas, such as industry and the military. Surveillance, transportation, delivery or even search and rescue tasks are just some examples of applications that drones have nowadays.

However, the potential of indoor drones is limited by the flight time due to limited battery capacity, which in many cases allows a flight of only a few minutes. This is because the batteries cannot be very heavy, as the drones have a limited payload in order to be able to take off and fly. Therefore, a recharging process needs to be performed frequently. Usually, these recharging processes involve direct human intervention, thus interrupting the autonomous operation of the drones and limiting their use and applications. By automating the battery recharging process of UAVs, no human intervention is needed, and thus the overall mission time of the drone can be significantly increased.

For this reason, one of the research topics in drones is focused on autonomous recharging systems. In fact, there are still many open challenges regarding the detection of and landing on the platform, as will be explained further in chapter 2.

The main difference with many of the already existing solutions for autonomous flight, landing and recharging is that in this thesis a complete system is provided, including the ability to autonomously fly towards the charging station, accurately land on it and also recharge its battery. This is done in a real system without human intervention to give any commands, such as starting the landing maneuver.

1.2 Goals

In order to solve the autonomous charging problem, the main goal of this thesis is the design and implementation of a control system for an indoor nano drone, in order to control it towards a landing platform whose position is roughly known, and also accurately land on it. Another goal is the design and implementation of an actual recharging system, so that in the end a complete, fully autonomous system exists.

1.3 Assumptions

The type of drone used in this project is a nano quadcopter. A quadcopter or quadrotor is a type of UAV that has become very popular [1], and it is widely used in research laboratories and aerial vehicle events.

Therefore, the proposed solution involves a nano quadrotor, specifically the Crazyflie 2.0 by Bitcraze. It has been decided to use the Crazyflie 2.0 because it is a low-cost nano quadcopter, and it is small and lightweight, which makes it safe to perform the flights and test the solution. Moreover, it is an open source project, with source code and hardware design both documented and available.

1.4 Outline

The document is divided into several chapters covering the different tasks carried out. First, chapter 2 covers the results obtained from the research study. In chapter 3 the platform used in the thesis is detailed, including the drone and all the hardware and software. Chapter 4 explains the proposed solution and the system design. Chapters 5, 6 and 7 cover the visual system, the position controller and the recharging system, respectively. The evaluation results are collected in chapter 8. Finally, in chapter 9 the conclusions and future research and work lines are presented.


2 Research

In this chapter the research work carried out in the thesis is presented. First, a little background is given in section 2.1, covering basic concepts of quadcopters in order to understand the terminology and the explanations that follow. In section 2.2 the results of the research study are explained.

2.1 Basic concepts

A quadcopter is a flying mechanical vehicle with four arms, each of which has a rotor attached to a propeller. Two of these rotors turn clockwise (CW) while the other two turn counter clockwise (CCW). Figure 1 shows the rotation direction for each rotor and propeller.

Figure 1: Direction of rotation for each quadcopter’s rotor. (Source: Bitcraze)

Each rotating propeller produces two different forces. One is an upward thrust. The other is an opposing rotating torque: a rotor turning CW produces a torque which causes the body of the drone to spin in the CCW direction, and vice versa. Therefore, while hovering, the moments from the two CW rotors and the two CCW rotors compensate each other, preventing the quadcopter from spinning around its body axis. By varying the speed of the four rotors, different movements are possible.


It is worth noting that when flying a quadcopter there are four main dimensions of control: roll, pitch, yaw and thrust. Roll is the rotation around the horizontal axis going through the quadcopter from back to front. By controlling this parameter we can control the movement to the left or right. Pitch is the rotation around a horizontal axis going through the quadcopter from left to right. This tilts the drone and moves it forwards or backwards. Yaw is the rotation around a vertical axis, which rotates the quadcopter left or right, thus changing the direction the drone is facing. Finally, the thrust adjusts the altitude of the quadcopter. Figure 2 shows the different axes of the quadcopter that define the roll, pitch and yaw angles and the thrust. The yaw, pitch and roll angles define the drone attitude, which is its angular position.

Figure 2: Quadcopter axes. (Source: Bitcraze)

2.2 Research study results

As already explained in chapter 1, the research in the autonomy of UAVs is of great interest in order to improve mission time and expand their applications. This is the reason why there are many projects and contributions in this area, suggesting different ways to address the problem and proposing different solutions.

In this section some of these projects and solutions will be explained. This research effort has been mainly focused on autonomous landing. It is worth noting that some of the solutions found have inspired this thesis, as will be detailed in chapter 4.


In [2], a vision-based control system is presented for a quadcopter, with the aim of its stabilization and autonomous landing. The robot is fully autonomous, using just internal sensors and processing. The method for landing is based on landing tag recognition by computer vision, then estimating the quadcopter's relative pose with respect to this target and controlling it to approach the landing station. The camera is installed on the quadcopter's bottom side and the tag (formed by several markers obtained from the ArUco library [3]) is placed horizontally on the landing platform. In order to obtain the camera pose, they formulate the problem as the Perspective-n-Point (PnP) problem. This consists of obtaining the camera pose from the camera matrix and a set of correspondences between image points (2D points) and real world points (3D points). For the control they use a set of decoupled PID controllers (x, y, z and yaw). However, in order to evaluate the system they rely on simulations. They use a physics simulator called Gazebo, which operates on the Robot Operating System (ROS). The implementation is not done on a real quadcopter.

In [4] they implement a controller for the AR.drone for fully autonomous landing on a given visual pattern and hovering above the pattern. However, they do not charge the drone autonomously. The landing point is located on a horizontal plane (with respect to the floor) and it is identified by a blob detection algorithm (a method that aims at detecting regions in an image that differ in properties compared to the surrounding regions; in this case these regions are two different circles, red and blue). The camera is on the quadrotor's bottom as in [2]. To measure the distance to the landing point they analyze the position of this point in the captured image and apply trigonometry (making some approximations), also using the altitude measured by the quadcopter's ultrasound sensor. As in [2], they use four decoupled PID controllers. The experiments in [5] show that the success rate is around 80%, considering a landing successful if it is at most 30 cm from the point. This is quite a high error considering the drone dimensions (52.5 x 51.5 cm). Thus, we consider that with this error it would be hard to use this landing system for recharging the drone using the landing platform.

In [6] a complete on-board recharging solution for the drone Asctec Pelican is given: autonomous take-off, navigation and landing, and recharging of the quadrotor's battery by using their custom-designed landing platform. Figures 3 and 4 show, respectively, the quadcopter used and a 3D model of the quadcopter's landing foot inside the landing platform. They use computer vision techniques to detect the landing station as in [2] and [4], by using a tag. Apart from this vision system, they have another onboard vision system for navigation when the target is not visible, which is based on optical flow to estimate the position of the drone. For the control system, PIDs are used to define the drone position and yaw angle. Regarding the landing and charging platform, it is based on slip-rings installed on the platform (on the bottom of the cones that can be observed in figure 4) and on the bottom of the drone feet, connected respectively to the battery charger and the UAV battery. A fuse is used in order to protect the system in case of a short circuit. The designed system allows a landing error of 5 cm along the X and Y axes and a rotation of 10 degrees.

Figure 3: Asctec Pelican used in [6]

Figure 4: 3D model of Asctec Pelican base with landing foot installed, once landed inside the landing platform, from [6]

Finally, in [7] they implement a vision-based tracking and landing approach for a quadrotor (AR.drone). The image processing and position control are performed on a ground station. The quadrotor is controlled with PIDs as in [2], [4] and [6], the output being the attitude angle commands. The vision algorithm uses enhancement of red, green and blue (RGB) color information, making it robust under different lighting conditions. The camera is installed in the lower part as in [2], [4] and [6], and in this case to estimate the drone position they use the relative position in the X and Y axes from the image. The commands need to be manually sent from the ground station; that is, once the drone is in the right position it does not land autonomously until the command giving the order to land has been sent.

Compared to the project presented in this thesis, one important difference with many of the already existing solutions for autonomous flight, landing and recharging is that in this thesis the system is implemented with a nano quadcopter. There are many projects and research works focused on autonomous flight control of nano quadcopters, such as [8], [9], [10] or [11]. Some projects also consider tracking capabilities, such as [12] or [13]. However, they do not focus on the autonomous charging process. Because of their size, nano quadcopters usually have an even shorter flight time than normal-size quadcopters; the battery life is usually around 10 minutes. Therefore, in this thesis we develop a full solution so that this kind of drone can be used in cases where normal-size drones cannot be used (for instance in tasks that involve being near people), even if these applications require a long mission time.


3 Platform description

In this chapter the platform used will be described, including the Crazyflie 2.0 and the ROS library for the Crazyflie. Moreover it will be explained how the camera is installed on the quadcopter so that it can send images to the computer (ground station) for further processing.

3.1 Crazyflie 2.0

The Crazyflie 2.0 is a light and small quadcopter: it weighs 27 grams and fits in the palm of the hand [14]. It is quickly assembled by simply attaching the motors to the circuit-board frame; no soldering is needed. In spite of its size, it has a very durable design. The exact size of the quadrotor is 92 x 92 x 29 mm (motor-to-motor and including motor mount feet). Figure 5 shows an image of the Crazyflie 2.0 assembled.

Figure 5: Crazyflie 2.0. (Source: Bitcraze)

It can communicate with a computer through the Crazyradio PA, which is a 2.4 GHz radio USB dongle with a 20 dBm power amplifier. According to the specification, the communication range between the quadrotor and the Crazyradio PA is about 1 km line-of-sight (LOS). The Crazyflie 2.0 can be identified and accessed through a Uniform Resource Identifier (URI), for instance: radio://0/80/250k.

The flight time is around 7 minutes and the charging time is 40 minutes, both considering the stock battery, which is the one used in this project. The maximum recommended payload weight is 15 g, which needs to be considered when choosing the camera or mounting additional pieces or systems on the Crazyflie 2.0.


3.1.1 Hardware

The Crazyflie 2.0 has two microcontrollers, one for the main application and the other in charge of the radio and the power management. The microcontroller for the main application is an STM32F405, with a Cortex-M4 core, 168 MHz, 192 KB SRAM and 1 MB flash. Its functions are sensor reading and motor control, flight control, telemetry and additional user development. The radio and power management microcontroller is an nRF51822, with a Cortex-M0 core, 32 MHz, 16 KB SRAM and 128 KB flash. The functions of this microcontroller are enabling power to the rest of the system (STM32, sensors and expansion board), battery charging management and voltage measurement, master radio bootloader, radio and BLE communication, and detecting and checking installed expansion boards.

It also has an on-board Lithium Polymer (LiPo) charger that can be accessed through a microUSB connector. The memory unit is an 8 KB EEPROM. The Inertial Measurement Unit (IMU) is formed by a 3-axis gyroscope (MPU-9250), a 3-axis accelerometer (MPU-9250), a 3-axis magnetometer (MPU-9250) and a high precision pressure sensor (LPS25H).

There is an expansion port (figure 6) with the following connections:

• VCC (3.0V, max 100mA)
• GND
• VCOM (unregulated VBAT or VUSB, max 1A)
• VUSB (both for input and output)
• I2C (400kHz)
• SPI
• 2 x UART
• 4 x GPIO/CS for SPI
• 1-wire bus for expansion identification
• 2 x GPIO connected to nRF51

Figure 7 shows the Crazyflie 2.0 system architecture with all the elements described above.


Figure 6: Crazyflie 2.0 expansion port pinout. (Source: Bitcraze wiki)


3.1.2 Flow deck

The Crazyflie 2.0 has several expansion decks also designed by Bitcraze that can be used to extend the quadrotor’s functionalities and that are easy to include thanks to the 1-wire automatic detection implemented on the Crazyflie 2.0 board.

In our case, we use the Flow deck, which gives the Crazyflie 2.0 the ability to detect its motion along the X, Y and Z axes. It is mounted under the quadrotor and it has two sensors: the VL53L0x Time of Flight (ToF) sensor measures the distance to the ground with high precision (up to 2 meters, within a few millimeters) and the PMW3901 optical flow sensor measures movements of the ground. Figure 8 shows the flow deck board.

Figure 8: Flow deck expansion board for Crazyflie 2.0. (Source: Bitcraze wiki)

With this deck we achieve a very stable quadrotor, and we can easily control it by giving velocity commands in each axis or even the desired flight height. How and why the flow deck is used for the position control is further explained in chapter 4.

It only weighs 1.6 g so, regarding the payload limitation, there is still enough margin to install the camera or other necessary elements.

3.2 ROS driver for Crazyflie

The Robot Operating System (ROS) is a flexible and open source framework for writing robot software. It is formed by a collection of tools, libraries, and conventions to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms.

It has been decided to use ROS because it facilitates a modular design structure, where the software can be separated into different components and processes while it remains easy to establish communication among these different processes. Moreover, there already exist several ROS drivers for the Crazyflie, thus making the development easier.

For this thesis the ROS Crazyflie driver introduced in [15] is used (available at https://github.com/whoenig/crazyflie_ros). It has been decided to use this one instead of other similar projects because it is the most complete. The features included that are relevant for this thesis are:

• Support for Crazyflies 1.0 and 2.0 using the stock firmware, which makes it easy to use.

• It publishes on-board sensors in ROS standard message formats.

• A tutorial [16] which, in spite of being for a slightly older version, is useful for the overall understanding of the different packages.

Other projects including ROS drivers for the Crazyflie do not include documentation, are based on custom firmware or are not complete.

The main unit for organizing software in ROS is the package, which may contain ROS runtime processes (nodes), a ROS-dependent library, datasets, configuration files, or anything else that is usefully organized together (see http://wiki.ros.org/ROS/Concepts). The Crazyflie driver is formed by six packages:

• Crazyflie Cpp: a package that contains a C++ library for the Crazyradio and Crazyflie. It can be used independently of ROS.

• Crazyflie driver: a driver package that contains a server (communicating with one or more Crazyflies) and a script to add Crazyflies to be able to communicate with them through this server.

• Crazyflie tools: this package contains tools which are helpful although not required for normal operation, such as a tool for scanning for a Crazyflie.

• Crazyflie description: this package contains a 3D model of the Crazyflie.

• Crazyflie controller: this package contains a simple PID controller for hovering or waypoint navigation. It can be used with external motion capture systems.

• Crazyflie demo: this package contains a wide set of examples to get quickly started with the Crazyflie.



The most important package for this thesis is the driver package, since it provides a simple interface to communicate with the Crazyflie, making it possible to send all the required commands as well as to read different parameters from the Crazyflie, such as the battery voltage, which is very relevant for the recharging part. Moreover, the demo package provides many useful scripts to learn how to actually send the commands and communicate through the server.

3.3 Camera, transmitter and receiver

The most limiting requirements when selecting the camera to install on the drone are the weight and the power consumption. The Crazyflie payload is 15 g, but taking into account the flow deck weight (1.6 g), this payload is reduced to 13.4 g. However, this limit should not be reached, in order to be able to fly the quadcopter without using much extra power and thus without reducing the flight time too much. Therefore the camera should be as light as possible. Regarding the power consumption, the logic is the same: low consumption is needed so that the flight time is reduced as little as possible. The requirements for the transmitter that sends the video obtained from the camera are the same.

For the reasons explained above, it has been decided to use a combined camera and transmitter module, the VM275T v1.3 with a cloverleaf antenna. This module meets the requirements. Figure 9 shows the module.

Figure 9: Module of camera and transmitter VM275T. (Source: HobbyKing)


The module dimensions are 14.5 x 12.2 x 12 mm and the weight (without the antenna) is 3.35 g. The antenna weight has not been measured, but it does not increase the total weight significantly compared to the maximum payload.

The transmitter frequency is in the 5.8 GHz band, with 48 channels and FM modulation. The camera resolution is 600 television lines (TVL) and it supports both NTSC and PAL video standards. The lens field of view is 120° horizontally and 100° vertically.

In order to supply power to this module, the VCOM pin from the Crazyflie expansion port (figure 6) can be used. The voltage on this pin is either the unregulated battery voltage or the USB voltage when USB is connected. This means that the range is 3.0 - 5.5 V, so it meets the camera module requirements. Therefore the camera and the transmitter are powered from the VCOM pin and the ground (GND) pin, which are respectively pin 9 right and pin 10 left of the expansion port (figure 6). Figure 10 shows the Crazyflie 2.0 equipped with the camera.

Regarding the receiver, a 5.8 GHz receiver that can be connected through USB to the computer is used.


4 System design

In this chapter the proposed solution will be described and, based on this, the system design will be explained.

4.1 Proposed solution

As already mentioned, the goal of the thesis is the design and implementation of a control system for an indoor nano drone, in order to control it towards a landing platform, accurately land on it and recharge the drone using this platform, so that in the end we have a fully autonomous system. With this, we address the autonomy problem of drones.

Considering the different goals and in order to achieve them, the project can be divided into different areas where a solution needs to be found:

· Landing platform design and implementation
· Landing platform detection
· Controller design and implementation

Inspired by [2], [4], [6] and [7] the solution proposed in this thesis uses a camera installed on the Crazyflie 2.0 to detect the landing station by computer vision techniques, and the control of the landing maneuver is based on estimating the position of a tag on the landing platform. A ground station (computer) will be used for processing the images and carrying out the computations.

In the following sections the different parts of the system, which are designed to give solutions to the areas mentioned above, will be described.

4.2 Position controller design

Regarding the controller for the quadrotor, all the solutions presented in chapter 2 have in common the use of PID controllers, as they are simple to implement yet effective and accurate for controlling normal-size quadrotors. However, nano quadcopters are very sensitive to external disturbances, and it can be difficult to control their position with PID controllers alone [8].

In our case we want to control the quadcopter's position with respect to the tag. To achieve this, it is necessary to estimate the position of the tag very accurately, in order to make sure that this position can be used as the input to the controller. Therefore, we propose to address this position problem by using two cameras. One camera, placed vertically on top of the drone, is used to determine the target location, that is, where to land. Another camera pointing down is used to stabilize the drone by applying an optical flow algorithm. With this combination of cameras a robust solution is obtained, because the drone remains stable even if the tag goes out of view.

In order to make the system robust, the quadcopter's inner control loop (the one in charge of stabilization control), which uses the downward camera, should be accurate and have the least possible delay. For this reason, it has been decided to close the inner control loop onboard, which implies that the implementation must require low computation power. This is where the flow deck, explained in the previous chapter, comes into play, since its optical flow sensor can be used as the downward-pointing camera to measure the movement of the drone, helped by the ToF sensor to measure the distance to the ground. The optical flow sensor included in the flow deck provides X-Y motion information with a wide working range of 80 mm to infinity.

The flow deck consists of a camera (optical flow sensor) to identify features in the surface below it and track their motion between frames (basically it tracks how patterns are moving). Figure 11 shows a feature being tracked over time. The distance sensor is then needed to know the distance to these features and thus obtain the real dimensions of the movement. This approach is similar to the one in [11], where a nano quadrotor is fully controlled based on optical flow with a downward-looking camera. It is also similar to the solutions implemented for state estimation of drones using Semidirect Visual Odometry (SVO), which consists of obtaining motion by combining direct visual odometry (motion extracted from intensity values in the image) and feature tracking, resulting in a lightweight algorithm that can be implemented onboard. An example of this is developed in [17], where they implement SVO on a quadcopter with the camera pointing down, an approach similar to the one chosen in this thesis.

It has been decided to use the flow deck because it meets the requirements already stated and it is easily integrated with the Crazyflie 2.0, making it possible to control the quadcopter just by sending the desired velocity commands from the ground station.


Figure 11: Feature tracking over time (Source: Bitcraze blog)

Therefore we have two control layers, needed to ensure completely autonomous flight. The inner control layer, implemented onboard, ensures the stability of the drone attitude and movement. The outer control layer (offboard) provides the stabilization of the quadcopter position in space with respect to the tag. Since we have control onboard, it is now possible to use PID controllers for the high-level control (offboard) and guide the drone towards the landing platform.

4.3 Marker detector design

The marker to identify the landing platform will be obtained from the ArUco library, as in [2]. ArUco is an open source library for camera pose estimation using square markers. It has been decided to use this library because it is a fast fiducial marker system especially appropriate for camera localization in applications such as augmented reality or robotics.

The library includes a complete set of functions for detecting markers and estimating the camera pose. For these reasons, instead of implementing our own algorithm as in the other solutions presented in chapter 2, the functions of the library will be used for calibrating the vertical camera, detecting the marker and estimating the camera pose, because good results are achieved in a simple way, with low error and high speed [3]. All these computations will be done on the ground station.


4.4 Charging platform design

The concept of the charging platform designed and implemented in this thesis is similar to the one proposed in [6]. Our system has been designed to include contacts on the platform and on the quadrotor's legs, and protection against the reverse voltage problem is also included. However, in order to keep the system simple, it has been decided to design and implement a system with no landing error correction, but with error tolerance. Therefore our design is enough for a complete autonomous recharging system, with some margin for the landing and without having to make big modifications to the Crazyflie 2.0. The design and implementation will be further detailed in chapter 7.

4.5 System architecture design

After having described the different parts of the system, it is worth summarising the system architecture to have an overall idea of how everything is integrated and how the communication is done.

The architecture of the overall system can be divided into the quadcopter Crazyflie 2.0, with the flow deck, camera and transmitter installed on-board; the charging platform with the tag; and the ground station, where the video processing for the tag recognition and the computation for the position control are done. The computer acting as ground station includes the receiver for obtaining the video from the camera and the Crazyradio PA in order to communicate with the Crazyflie 2.0. In this ground station the video is processed and the camera pose is estimated to obtain the quadcopter pose with respect to the landing station; this pose is then used as the input of several PID controllers, which determine the commands that need to be sent to the quadcopter in order to approach the station and land on it. Figure 12 shows the overall architecture of the system.

It would have been ideal to do all the computations on-board, as is done in [6]. This would avoid having to communicate with the ground station, thus obtaining a completely autonomous system. However, our quadcopter is small and does not have enough computation power for implementing complex algorithms and adding additional features, as explained in [12], where an algorithm for recognizing and detecting a pattern is implemented on a Crazyflie (on-board), obtaining a mean position error of 32 cm in the horizontal axis and 57 cm in the vertical axis. On-board computation would have implied a landing error high enough to make the recharging impossible. Therefore any useful application using the Crazyflie would require an off-board computer anyway, due to the low computation power.

Figure 12: Overall system architecture

The main reason to have the recharging functionality onboard would be that it could be part of the low-level "safety" functionality of the system, which ideally works even if the radio connection with the ground station is lost. However, this would most likely require a custom hardware solution, thus increasing the complexity of the solution.

These are the reasons why it has been decided to split the control problem into two, as already mentioned: one high-bandwidth controller onboard that stabilizes the drone position and one controller offboard that is used mainly to specify the goal location. Nevertheless, the solution developed here can be implemented in the future using bigger drones which allow on-board computation.


5 Visual system

This chapter explains the recognition system designed to detect the marker that identifies the landing station, as well as to estimate the camera (and drone) pose with respect to this marker.

5.1 Marker creation with ArUco module

As already stated in previous chapters, in order to identify the platform the ArUco library is used. This library has binary square fiducial markers that can be used for camera pose estimation. Their main benefit is that their detection is robust, fast and simple.

In our case, the ArUco module for OpenCV (Open Source Computer Vision Library) is used. It has both C++ and Python interfaces; in this project the Python interface is used. This ArUco module includes the detection of ArUco markers and the tools to employ them for pose estimation and camera calibration.

An ArUco marker is a square marker formed by a wide black border and an inner binary matrix which determines its identifier. The black border contributes to its fast detection, while the binary codification allows its identification and, moreover, the application of error detection and correction techniques. The size of the internal matrix determines the marker size, that is, a marker of size 4x4 is composed of 16 bits.

The ArUco markers can be grouped in dictionaries, defined by the dictionary size, which is the number of markers that compose the dictionary, and by the marker size of these markers. The ArUco module used includes some predefined dictionaries covering a range of different dictionary sizes and marker sizes. It is worth noting that a marker ID is the marker index inside the dictionary it belongs to.

In order to detect the marker, first we have to create it and print it. To create the marker, a function provided by the ArUco library can be used. In this project a python script has been written with the required steps to create and save the marker image.

First, the dictionary object is created by choosing one of the predefined dictionaries in the ArUco module. In our case we use a dictionary of 50 different 4x4-bit markers. The reason for using this marker size is that, for the same printed size, a 4x4 marker can be detected from farther away than larger markers, since it is formed by fewer bits and thus each bit is bigger.

In order to create the marker, the function needs the following parameters:

· The dictionary object, which is the one explained above.

· Marker ID. In our case, the range of valid IDs goes from 0 to 49 (50 markers) and the one chosen is the marker with ID 1.

· The size of the marker image. In our case we choose 200, which means that the image size is 200x200 pixels. In order to avoid deformations, this parameter should be proportional to the number of bits plus border size, or at least much higher than the marker size (like 200 here), so that deformations are insignificant.

· There is an optional parameter to specify the width of the marker black border. In our case we use the default value, which is 1.

The marker created is shown in figure 13.
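As a concrete illustration of the parameters just listed, the following is a minimal sketch of the marker creation step, assuming the classic cv2.aruco Python API (OpenCV 4.6 and earlier naming); the output file name is only an example.

```python
# Sketch of the marker creation described above (classic cv2.aruco API).
import cv2
import cv2.aruco as aruco

# Predefined dictionary of 50 markers of 4x4 bits, as chosen in this project
dictionary = aruco.Dictionary_get(aruco.DICT_4X4_50)

marker_id = 1        # valid IDs range from 0 to 49; ID 1 is the one used here
side_pixels = 200    # output image of 200x200 pixels
# The optional black border width is left at its default value of 1.
marker_img = aruco.drawMarker(dictionary, marker_id, side_pixels)

# Save the marker image so it can be printed and placed on the platform
cv2.imwrite("aruco_marker_id1.png", marker_img)
```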


5.2 Marker detection with ArUco module

The detection process returns a list of detected markers. Each detected marker includes the position of its four corners in the image and the marker ID. In the ArUco module there is a function that performs this detection, but before detailing the parameters used by this function, the concepts behind the detection process will be briefly explained.

The process can be divided into two steps. The first is the detection of marker candidates. In this step the image is analysed to find square shapes that are candidates to be markers. An adaptive thresholding is applied to segment the markers, and then the contours are extracted to discard those that are not convex or do not approximate a square shape. Furthermore, some filtering is applied to remove candidates that do not meet certain requirements. After this candidate detection, the inner codification is analysed to determine the true markers, by extracting the bits from each marker. To do this, first, the markers in their canonical form are obtained by applying a perspective transformation. Then, the canonical image is thresholded using Otsu's method to separate the white and black bits. The image is divided into different cells according to the marker size and the border size, and the number of black or white pixels in each cell is counted to determine whether it is a black or a white bit. Finally, the bits are analysed to check whether the marker belongs to the specific dictionary. Moreover, error correction techniques are employed when necessary.

The parameters needed by the function that performs the detection are the following:

· The image where the markers will be detected.

· The dictionary object that contains the marker(s) used.

· There is an optional parameter, an object that includes all the parameters that can be customized during the detection process.

The function returns a list with the corners of the detected markers in their original order (clockwise starting with the top left), and a list with the IDs of each detected marker. Moreover, it also returns a list with the rejected candidates, that is, squares that were found but do not present a valid codification.
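The following short sketch illustrates this detection call and its return values, again assuming the classic cv2.aruco Python API; the input image path is illustrative.

```python
# Sketch of the marker detection step described above.
import cv2
import cv2.aruco as aruco

dictionary = aruco.Dictionary_get(aruco.DICT_4X4_50)
parameters = aruco.DetectorParameters_create()   # optional, customizable detection parameters

frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# corners: four corners of each detected marker (clockwise from top left)
# ids: IDs of the detected markers; rejected: candidates without a valid codification
corners, ids, rejected = aruco.detectMarkers(gray, dictionary, parameters=parameters)

if ids is not None and 1 in ids.flatten():
    print("Landing station marker (ID 1) detected")
```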


5.3 Camera calibration with ArUco module

After detecting the marker, we want to estimate the camera pose using this marker. However, to perform the camera pose estimation it is first necessary to know the calibration parameters of the camera, which are the camera matrix and the distortion coefficients. It is worth noting that, in order to use the functions provided in the ArUco module and OpenCV, the model considered for the camera is the pinhole camera model, which is shown in figure 14. It is important to take into account the camera coordinate axes, since they differ from the drone axes and this needs to be considered.

Figure 14: Pinhole camera model (Source: OpenCV documentation)

The camera matrix and distortion coefficients expressions are:

$$\text{Camera matrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$

$$\text{Distortion coefficients} = \begin{bmatrix} k_1 & k_2 & p_1 & p_2 & k_3 \end{bmatrix}$$

In the camera matrix, the parameters $f_x$ and $f_y$ represent the focal length in pixel units, while $c_x$ and $c_y$ represent the optical center coordinates (principal point). The distortion coefficients vector is formed by five parameters that model the distortion produced by the camera.

The ArUco module is also used to calibrate the camera. It is worth noting that the camera parameters remain fixed unless the camera optics are modified, so the camera calibration needs to be done only once.

Performing the calibration requires some correspondences between environment points and their projections in the camera image from different viewpoints. In general, these correspondences are obtained from the corners of chessboard patterns when using the OpenCV function for calibrating cameras. In this project, instead of this OpenCV function, a function included in the ArUco module is used, because the calibration is more versatile than with traditional chessboard patterns, since it allows occlusions or partial views.

Using the ArUco module, we have two options for calibration: based on ArUco marker corners or on ChArUco corners. However, the ChArUco corners approach is recommended, since the provided corners are much more accurate in comparison to the marker corners. A ChArUco board is similar to a chessboard pattern but uses ArUco markers instead of the white squares. An example board, which has been created in order to calibrate the camera used in this project, is shown in figure 15.


Therefore, to calibrate the camera using this ChArUco board, it is necessary to detect the board from different viewpoints. However, as already mentioned, occlusions and partial views are allowed, and not all the corners need to be visible in all the viewpoints.

A full working example is available in the ArUco module, so it has been used to calibrate the camera and obtain the required parameters. First, the ChArUco board was created, also by using functions in the module and providing reasonable parameters such as the number of horizontal and vertical squares or the marker size, and then the camera was calibrated using this board. As a result, a yaml file is obtained with the camera matrix, the distortion coefficients vector and the average reprojection error.
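A simplified sketch of this calibration procedure is shown below, assuming the classic cv2.aruco API; the board dimensions, square and marker sizes and file paths are illustrative values, not necessarily the ones used for the actual calibration.

```python
# Sketch of ChArUco-based camera calibration.
import glob
import cv2
import cv2.aruco as aruco

dictionary = aruco.Dictionary_get(aruco.DICT_4X4_50)
# Example board: 5x7 squares, 4 cm squares with 3 cm markers (sizes in meters)
board = aruco.CharucoBoard_create(5, 7, 0.04, 0.03, dictionary)

all_corners, all_ids, img_size = [], [], None
for path in glob.glob("calib_images/*.png"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    img_size = gray.shape[::-1]
    corners, ids, _ = aruco.detectMarkers(gray, dictionary)
    if ids is None:
        continue
    # Interpolate the ChArUco (chessboard) corners from the detected markers
    n, ch_corners, ch_ids = aruco.interpolateCornersCharuco(corners, ids, gray, board)
    if n is not None and n > 3:
        all_corners.append(ch_corners)
        all_ids.append(ch_ids)

# Returns the average reprojection error, camera matrix and distortion coefficients
err, camera_matrix, dist_coeffs, rvecs, tvecs = aruco.calibrateCameraCharuco(
    all_corners, all_ids, board, img_size, None, None)
print("Average reprojection error:", err)
```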

In this calibration process, the average reprojection error obtained was 0.38 and the camera parameters are the following:

$$\text{Camera matrix} = \begin{bmatrix} 223.32 & 0 & 333.08 \\ 0 & 221.82 & 244.42 \\ 0 & 0 & 1 \end{bmatrix}$$

$$\text{Distortion coefficients} = \begin{bmatrix} 2.01\mathrm{e}{-01} & 2.25\mathrm{e}{-01} & 9.29\mathrm{e}{-04} & 6.99\mathrm{e}{-04} & 5.56\mathrm{e}{-02} \end{bmatrix}$$

5.4 Camera pose estimation using ArUco module

Once we have the camera calibrated and we have detected the marker, the next step is estimating the camera pose from this detected marker.

The approach used to estimate the camera pose is the already mentioned PnP problem, which is a common simplification of the general version and assumes known calibration parameters; in our case this is correct, since the camera has been calibrated and the parameters are known. An overall view of the problem formulation is shown in figure 16. This problem formulation consists of retrieving the pose (rotation R and translation t) of the camera with respect to the world reference frame, and the focal length, given a set of correspondences between 3D points $p_i$, expressed in the world reference frame, and their 2D projections $u_i$ onto the image.


Figure 16: PnP problem formulation (Source: OpenCV tutorials)

In our case, however, the camera pose is estimated with respect to the marker and not with respect to the world coordinate system. This facilitates the drone control, since we actually want to control the drone with respect to the marker position, because the landing station position is related to this marker position. Therefore the camera pose is the 3D transformation from the marker coordinate system to the camera coordinate system, and it is also specified by a rotation and a translation vector.

The ArUco function takes as parameters the vector of marker corners already detected, the size of the marker side in meters (the translation vectors of the estimated poses will be in this same unit) and the camera calibration parameters that have been obtained after the calibration process. It returns the rotation and translation vectors for each of the markers in corners. The marker coordinate system that is assumed by this function is placed at the center of the marker with the Z axis pointing out.

The translation vector obtained is the (x, y, z) coordinates of the marker in the coordinate system of the camera (figure 14), and the rotation vector is the rotation of the marker with respect to the camera. However, we need the rotation of the camera with respect to the marker so that the rotation of the drone with respect to the marker is directly known.


The rotation vector is a more compact representation of a rotation matrix (since any rotation matrix has just 3 degrees of freedom). Rodrigues' rotation formula allows obtaining the rotation matrix from the rotation vector (and vice versa). There is a function in the OpenCV library to apply Rodrigues' formula (Rodrigues()), and it is the one used here. Once the rotation matrix is obtained, the Euler angles can be obtained by applying an OpenCV function (RQDecomp3x3()) that computes an RQ decomposition of 3x3 matrices (the decomposition of a matrix into a product of an orthogonal matrix Q and an upper triangular matrix R) and returns the Euler angles in degrees (yaw, pitch and roll).

Therefore, once all the steps explained in this section have been applied, the distance to the marker in each axis and the rotation of the camera with respect to the marker are obtained, and thus they can be used to control the drone with respect to the marker.
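Putting the steps of this section together, a minimal sketch of the pose estimation could look as follows (classic cv2.aruco API; the marker side length of 0.10 m is an illustrative value, and camera_matrix and dist_coeffs come from the calibration step).

```python
# Sketch of the pose estimation chain: marker corners -> rvec/tvec -> Euler angles.
import cv2
import cv2.aruco as aruco

def estimate_camera_pose(corners, camera_matrix, dist_coeffs, marker_length=0.10):
    # One rotation and one translation vector per detected marker
    rvecs, tvecs, _ = aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)
    rvec, tvec = rvecs[0][0], tvecs[0][0]

    # Rodrigues' formula: rotation vector -> 3x3 rotation matrix
    rot_matrix, _ = cv2.Rodrigues(rvec)
    # (The transpose of rot_matrix gives the rotation of the camera
    #  with respect to the marker.)

    # RQ decomposition returns the three Euler angles in degrees
    euler_angles = cv2.RQDecomp3x3(rot_matrix)[0]

    # tvec holds the (x, y, z) coordinates of the marker in the camera frame
    return tvec, euler_angles
```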

5.5 Implementation

This section explains how all the steps described in the previous sections have been implemented. Some of the tasks need to be done only once (marker creation and camera calibration), but others need to be done continuously so that the controller loop can work correctly.

A ROS package called camera_aruco has been created in order to implement, in a ROS node, all the tasks related to the visual system. A script has been created to do the processing for detecting the marker and computing the camera pose. In this script, after performing the initial configuration, there is a loop to analyze the image, detect the marker, compute the camera pose and send this pose in a message so that the controller can work with it, as will be explained in chapter 6.

The camera configuration is the following: channel frequency 5.74 GHz (group A, channel 1) and video format NTSC, whose video frequency is 60 Hz. For this reason, the frequency of the loop is also chosen to be 60Hz. This way, every frame received is analyzed to check if the marker appears.

The flow diagram in figure 17 shows the different steps that are carried out to perform the tasks already explained. First, after the initialization required by ROS, the yaml file is read to obtain the camera calibration parameters (camera matrix and distortion coefficients), and then the message used to send the camera pose is initialized. A ROS publisher is created, and through it the message will be sent to the controller, which is subscribed to this publisher. After these configurations, the 60 Hz loop is executed to read a frame and analyze the image to detect markers; if the marker ID coincides with the marker of our system, the camera pose is obtained and the message is sent.
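A simplified sketch of the structure of this camera_aruco node is given below. The topic name, the message type (geometry_msgs/PoseStamped) and the capture source are assumptions made for illustration; the pose computation itself is the one sketched in the previous sections.

```python
#!/usr/bin/env python
# Simplified sketch of the visual-system node loop (60 Hz, NTSC video).
import rospy
import cv2
import cv2.aruco as aruco
from geometry_msgs.msg import PoseStamped

def main():
    rospy.init_node("camera_aruco")
    pose_pub = rospy.Publisher("marker_pose", PoseStamped, queue_size=1)

    dictionary = aruco.Dictionary_get(aruco.DICT_4X4_50)
    capture = cv2.VideoCapture(0)   # USB video receiver
    rate = rospy.Rate(60)           # one iteration per received frame

    while not rospy.is_shutdown():
        ok, frame = capture.read()
        if ok:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            corners, ids, _ = aruco.detectMarkers(gray, dictionary)
            if ids is not None and 1 in ids.flatten():
                msg = PoseStamped()
                msg.header.stamp = rospy.Time.now()
                # ... fill msg.pose from the estimated camera pose here ...
                pose_pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```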


6 Drone controller

In this chapter, the ROS node created for the position controller of the quadcopter will be explained in detail, including all the steps and tasks needed to compute the distance to the landing point and send the control commands to the Crazyflie.

6.1 Pose message transformation

Once the message from the image processing node is received, the camera pose needs to be saved and transformed from the camera coordinate system to the drone coordinate system. Both the camera and the drone coordinate systems are shown in figure 18.

Figure 18: Camera and Crazyflie 2.0 coordinate system

It can be seen that in order to obtain the camera coordinate system given the drone coordinate system, we need to perform a rotation of π/2 about the Y axis (of the drone coordinate system) and then a rotation of π/2 about the Z axis (of the new coordinate system obtained after the first rotation).

Although the origins of both coordinate systems do not coincide exactly, it has been decided to consider both origins in the same position since the difference is not significant, because the camera is installed close to the drone center.


The rotation matrix about the Y axis is:

$$R_Y(\theta = \pi/2) = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{bmatrix} \quad (1)$$

And the rotation matrix about the Z axis is:

$$R_Z(\theta = \pi/2) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (2)$$

Therefore, in order to obtain the rotation about both axes, the rotation matrix is:

$$R = R_Y R_Z = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \quad (3)$$

And thus, the coordinates of a point in the Crazyflie coordinate system given its coordinates in the camera system can be obtained by solving the equation:

$$\begin{bmatrix} X_{drone} \\ Y_{drone} \\ Z_{drone} \end{bmatrix} = R \begin{bmatrix} X_{camera} \\ Y_{camera} \\ Z_{camera} \end{bmatrix} \quad (4)$$

Therefore we have the following equivalences:

$$X_{drone} = Z_{camera}, \qquad Y_{drone} = X_{camera}, \qquad Z_{drone} = Y_{camera} \quad (5)$$

Due to the transformations performed during the image processing phase to obtain the Euler angles (pitch, roll and yaw) no further transformations are needed for obtaining these angles in the drone reference system.

The ROS interface allows controlling the Crazyflie 2.0 (with the flow deck included) by indicating the velocity in the X and Y axes (m/s), the height in Z (meters) and the yaw rate (degrees/s), and thus this will be the control command used. Therefore, we need to know the distance from the marker center in the X, Y and Z axes plus the yaw angle. Thus, when a pose message is received (at a frequency of 60 Hz), the parameters in (5) are saved, as well as the yaw angle, for further processing to estimate the velocity, height or yaw rate needed.
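For illustration, a hedged sketch of sending this control command through the ROS driver is given below. The topic name cmd_hover and the Hover message fields are based on the crazyflie_ros driver mentioned in chapter 3, but they should be treated as assumptions and checked against the driver version actually used.

```python
# Sketch: command velocity in X/Y, flight height in Z and yaw rate (assumed Hover message).
import rospy
from crazyflie_driver.msg import Hover

rospy.init_node("send_hover_example")
pub = rospy.Publisher("/crazyflie/cmd_hover", Hover, queue_size=1)

cmd = Hover()
cmd.vx = 0.1          # forward velocity (m/s)
cmd.vy = 0.0          # lateral velocity (m/s)
cmd.yawrate = 0.0     # yaw rate (degrees/s)
cmd.zDistance = 0.5   # desired flight height (m)

rate = rospy.Rate(60)
while not rospy.is_shutdown():
    cmd.header.stamp = rospy.Time.now()
    pub.publish(cmd)
    rate.sleep()
```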

6.2 Estimate drone distance to landing point

After applying the rotations explained in the previous section, the distance from the drone to the marker center is known. However, in order to control the drone we need to know the distance to the landing point, which is not the same as the marker central point. The landing point is at 40 cm from the marker as shown in figure 19. This figure shows the problem formulation of the distance estimation from the quadcopter to the landing point.

Figure 19: Problem formulation for estimating the distance to the landing point

First we have the distance to the marker center in the drone coordinate system (the point $(x_{marker}, y_{marker})$ in the figure). We know that the landing point is 40 cm from the marker in the X axis, considering a coordinate system with no rotation, that is, with the X axis perpendicular to the marker plane and the Y axis parallel to it (coordinate system X'Y' in the figure). Therefore, since we know the Crazyflie rotation with respect to this system, because it is given by the yaw angle, we can compute a rotation to obtain the marker center coordinates expressed in the X'Y' system, which can be named $(x'_{marker}, y'_{marker})$. After this rotation, we can obtain the point $(x'_{landing\,point}, y'_{landing\,point})$ in the figure by subtracting 40 cm from the x coordinate of the point $(x'_{marker}, y'_{marker})$. Finally, in order to obtain the distance to the landing point in the actual coordinate system XY, the rotation needs to be performed again, but in this case it should be the reverse rotation, that is, with rotation angle $-yaw$. This way we know the distance $(x_{landing\,point}, y_{landing\,point})$ with respect to the real drone pose.

It is worth noting that this problem does not affect the drone Z axis, because the rotation affecting the distance change is in the XY plane, the drone height being independent of the yaw angle. The height estimation would be affected by large pitch and roll angles, but this is not the case here, since these angles only vary to move the drone in one direction or another, the variation being too small to considerably affect the Z distance measured in the image processing phase.

Therefore, starting from the measured point $(x_{marker}, y_{marker})$, the computations that need to be done to obtain the true distance to the landing point are detailed below.

First, the rotation matrix is:

\[
R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \quad (6)
\]

And thus:

\[
\begin{bmatrix} x'_{marker} \\ y'_{marker} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_{marker} \\ y_{marker} \end{bmatrix} \quad (7)
\]

where $\theta$ is the yaw angle.

Then:

\[
\begin{bmatrix} x'_{landing\ point} \\ y'_{landing\ point} \end{bmatrix} = \begin{bmatrix} x'_{marker} \\ y'_{marker} \end{bmatrix} - \begin{bmatrix} x_{setpoint} \\ y_{setpoint} \end{bmatrix} \quad (8)
\]

In this case, as already mentioned:

\[
\begin{bmatrix} x_{setpoint} \\ y_{setpoint} \end{bmatrix} = \begin{bmatrix} 40\ \text{cm} \\ 0\ \text{cm} \end{bmatrix} \quad (9)
\]

Finally, the reverse rotation needs to be computed. In this case, the rotation matrix is:

\[
R^{-1} = R^{T} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \quad (10)
\]

So therefore:

\[
\begin{bmatrix} x_{landing\ point} \\ y_{landing\ point} \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x'_{landing\ point} \\ y'_{landing\ point} \end{bmatrix} \quad (11)
\]

After this process, we have obtained the distance to the landing point in the drone coordinate system.
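The following Python sketch puts equations (6)-(11) together; the function name, argument names and default setpoint values are chosen here for illustration and are not taken from the actual implementation.

```python
import math

# Hypothetical helper implementing equations (6)-(11); a minimal sketch only.
# Inputs are the measured marker position in the drone frame (meters) and the yaw angle (radians).
def distance_to_landing_point(x_marker, y_marker, yaw, x_setpoint=0.40, y_setpoint=0.0):
    cos_t, sin_t = math.cos(yaw), math.sin(yaw)

    # Equation (7): rotate the measurement into the marker-aligned frame X'Y'
    x_m = cos_t * x_marker - sin_t * y_marker
    y_m = sin_t * x_marker + cos_t * y_marker

    # Equations (8)-(9): the landing point is 40 cm in front of the marker
    x_lp = x_m - x_setpoint
    y_lp = y_m - y_setpoint

    # Equations (10)-(11): rotate back with the inverse (transpose) rotation
    x_out = cos_t * x_lp + sin_t * y_lp
    y_out = -sin_t * x_lp + cos_t * y_lp
    return x_out, y_out
```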

6.3 PID controller

As already explained in chapter 4, it has been decided to use PID controllers for the off-board control, that is, the control that guides the drone to the landing point given the distance to this point (obtained as explained in the previous section).

There are four different components in the control command (velocity in X, velocity in Y, height in Z and yaw rate). Therefore, four decoupled PIDs have been designed to control each of these four parameters.

A PID (Proportional Integral Derivative) controller is a control loop feedback mechanism. It is widely used in industrial control systems, robotics and a variety of other applications requiring continuously modulated control. This controller continuously computes an error value as the difference between a desired setpoint and a measured variable; then a correction is applied based on proportional, integral, and derivative terms. Figure 20 shows the block diagram of a typical PID controller.

In our case, we compute the error as the difference between the actual distance (or yaw angle) to the landing point and the distance (or yaw angle) at which it is desired to be (setpoint). Therefore, the PID input is the distance or yaw angle to the landing point, and the PID output is the velocity in the X and Y axes, the height in the Z axis or the yaw rate, depending on the parameter.

Figure 20: PID controller

Each PID term affects the control signal in a different way [18]:

· Proportional: this component depends only on the error. The output is directly proportional to the error, depending on the gain Kp. In general, increasing this proportional gain will increase the speed of the control system response. However, if the proportional gain is too large, the process variable will begin to oscillate. If Kp is increased further, the oscillations will become larger and the system will become unstable and may even oscillate out of control.

· Integral: this component sums the error term over time. Therefore the result is that even a small error term causes the integral component to increase slowly. The integral response will continually increase over time unless the error is zero, so the effect is to drive the steady-state error to zero. This steady-state error is the final difference between the process variable and the setpoint.

· Derivative: this component causes the output to decrease if the process variable is increasing rapidly. The derivative response is proportional to the rate of change of the process variable. Increasing the Kd parameter will cause the control system to react more strongly to changes in the error term and will increase the speed of the overall control system response. However, this parameter is usually small since the derivative response is highly sensitive to noise in the process variable signal. If the sensor feedback signal is noisy or if the control loop rate is too slow, the derivative response can make the control system unstable.
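As an illustration of how these three terms combine, a minimal discrete-time PID update is sketched below; the class and variable names are hypothetical and do not correspond to the actual implementation.

```python
class PID:
    """Minimal discrete PID: output = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """error: difference between the measured value and the setpoint."""
        self.integral += error * self.dt                   # integral term accumulates the error
        derivative = (error - self.prev_error) / self.dt   # derivative term reacts to error changes
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```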


The PIDs have been tuned experimentally and good performance is obtained. In order to tune them properly, the effects of independently increasing the value of each PID parameter have been considered [19], which are included in table 1.

Gain | Rise time | Overshoot | Convergence time | Steady-state error | Stability
Kp   | decrease  | increase  | small change     | decrease           | degrade
Ki   | decrease  | increase  | increase         | eliminate          | degrade
Kd   | increase  | decrease  | decrease         | no effect          | improve (if Kd small)

Table 1: Effects of independently increasing the value of the PID parameters

In our case, the setpoint is 40 cm in the X axis, 0 cm in the Y axis, 0 cm in the Z axis and a yaw of 0° (with respect to the center of the marker). This setpoint is above the landing point; that is, the landing point is at the same position but at height 0 in the Z axis.

The velocity in X and Y and the yaw rate are obtained directly from the controller outputs. A limit has been set so that the drone does not move too fast: a maximum velocity of 0.5 m/s in the X and Y axes and a maximum yaw rate of 45 degrees/s. The height in the Z axis works differently, since the height needs to be maintained; thus, the controller output is added to (or subtracted from) the current height value. The initial height is set to 0.5 m, which is the height chosen for taking off.

As already explained in chapter 4, the Crazyflie 2.0 performs on-board attitude and velocity control, thanks to the flow deck used. Therefore, any bias in the quadcopter, caused for example by unbalanced motors or external sources affecting the flight, is compensated by the on-board controller, and thus the integral term of the velocity controllers (X and Y axes) is not required, resulting in PD controllers.

The PID parameters of each of the four controllers are summarized in table 2:

Output        | Input                                | Error          | Kp   | Ki     | Kd
Velocity in X | Distance to landing point in X, dx   | dx - 40 cm     | 0.4  | 0.0    | 0.005
Velocity in Y | Distance to landing point in Y, dy   | dy - 0 cm      | 0.4  | 0.0    | 0.005
Height in Z   | Distance to landing point in Z, dz   | dz - (-10 cm)  | 0.05 | 0.0002 | 0.01
Yaw rate      | Yaw angle, yaw                       | yaw - 0°       | 0.5  | 0.005  | 0.005

Table 2: PID parameters of the four controllers

The controllers are executed at the same frequency at which the images are received and the camera pose is estimated.
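As a concrete illustration, the sketch below instantiates the four decoupled controllers with the gains of table 2, reusing the hypothetical PID class sketched above, and applies the output limits described earlier (0.5 m/s and 45 degrees/s). The error definitions follow table 2, with the Z offset omitted for simplicity; this is not the actual controller code.

```python
DT = 1.0 / 60.0  # one controller update per received camera pose (60 Hz)

# Gains taken from table 2.
pid_vx  = PID(kp=0.4,  ki=0.0,    kd=0.005, dt=DT)   # output: velocity in X (m/s)
pid_vy  = PID(kp=0.4,  ki=0.0,    kd=0.005, dt=DT)   # output: velocity in Y (m/s)
pid_z   = PID(kp=0.05, ki=0.0002, kd=0.01,  dt=DT)   # output: height correction (m)
pid_yaw = PID(kp=0.5,  ki=0.005,  kd=0.005, dt=DT)   # output: yaw rate (deg/s)

def clamp(value, limit):
    """Symmetric saturation of a controller output."""
    return max(-limit, min(limit, value))

def control_step(dx, dy, dz, yaw, height):
    """One 60 Hz control step; dx, dy, dz are distances to the landing point (m), yaw in degrees."""
    vx = clamp(pid_vx.update(dx - 0.40), 0.5)    # limited to 0.5 m/s
    vy = clamp(pid_vy.update(dy), 0.5)           # limited to 0.5 m/s
    yaw_rate = clamp(pid_yaw.update(yaw), 45.0)  # limited to 45 deg/s
    height += pid_z.update(dz)                   # the Z output adjusts the maintained height
    return vx, vy, yaw_rate, height
```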

6.4 Landing

Once the drone is flying and the landing process has to start, the quadcopter is rotated around the Z axis (yaw) in order to find the marker.

Once the setpoint is reached and the Crazyflie 2.0 is above the landing point, the drone position must be stable for some time before the landing starts. In order to measure this stability, a margin region has been defined, and if the Crazyflie position stays within this area for a given time the landing can start. The margin of error is ±3 cm in the X axis, ±4 cm in the Y axis and ±8 cm in the Z axis. These values have been chosen according to the size of the pads where the drone needs to land in order to be able to charge the battery, as will be explained in the next chapter. The margin in the Z axis is less critical than the others, since this axis is perpendicular to the landing plane and does not affect the landing in terms of lateral movement, so a larger margin is chosen. If the drone stays within this region for 70 consecutive iterations of the 60 Hz loop (about 1.2 seconds), it starts landing. This value has been chosen experimentally and has been verified to be enough for a good landing. On the one hand, if this time is very long, it can take a long time to start landing, since the count is reset whenever the drone leaves the region for one iteration. On the other hand, if the time is very short, it cannot be guaranteed that the drone is stable enough for landing. The chosen time results in an accurate landing, although similar values could be used.

The landing process takes 4 seconds and the goal is to gradually decrease the height until 8 cm is reached, at which point the motors are shut down. The reason for this 8 cm height is that it is the minimum altitude at which the optical flow sensor can work; moreover, when the drone is close to the floor its dynamics change, the propellers cause turbulence and the drone becomes harder to control. The step size used to gradually reduce the height is computed from the initial altitude and the landing time, and this step size is subtracted from the current height, also in a 60 Hz loop.
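A minimal sketch of this landing trigger and descent ramp is shown below; the constants mirror the values given above, while the function and variable names are hypothetical.

```python
LOOP_RATE = 60               # Hz
STABLE_ITERATIONS = 70       # consecutive in-margin iterations required before landing
MARGIN_X, MARGIN_Y, MARGIN_Z = 0.03, 0.04, 0.08   # +/- margins (m)
LANDING_TIME = 4.0           # s
MIN_HEIGHT = 0.08            # m, minimum altitude for the optical flow sensor

stable_count = 0

def ready_to_land(ex, ey, ez):
    """Count consecutive iterations inside the margin region; reset on any excursion."""
    global stable_count
    if abs(ex) <= MARGIN_X and abs(ey) <= MARGIN_Y and abs(ez) <= MARGIN_Z:
        stable_count += 1
    else:
        stable_count = 0
    return stable_count >= STABLE_ITERATIONS

def descent_step(initial_height):
    """Per-iteration height decrement so that MIN_HEIGHT is reached in LANDING_TIME."""
    return (initial_height - MIN_HEIGHT) / (LANDING_TIME * LOOP_RATE)
```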


6.5 Implementation

Throughout this chapter we have covered the different tasks needed to control the quadcopter: pose estimation, calculation of the distance to the landing point, PID controller design, and design of the take-off and landing processes. After explaining all of them, this section covers how the implementation and integration of the different tasks have been done.

Different ROS nodes are needed in order to communicate with the Crazyflie and control it. The driver already described in chapter 3 is used. This driver contains the server, whose functions can be used to send commands to the Crazyflie (like the control command specifying the velocities, height and yaw rate) or to receive parameters and measurements from the Crazyflie (like the battery level).

A ROS launch file, which makes it possible to launch several ROS nodes at the same time, has been created. With this file we launch the server, the node that adds the Crazyflie and enables communication with it (also included in the driver), and finally the node that implements the controller.

It has been decided to start the landing maneuver once the battery voltage drops below a certain level. This level will be decided after carrying out some tests to characterize the battery life in chapter 8. In the meantime, the drone is simply in hovering mode, and when that battery level is reached the rotating process for finding the marker starts.

Moreover, once the approaching process has started, if the marker is lost the Crazyflie is landed for security reasons. This can happen because the marker goes out of the image or because it is not well detected. This safety landing is triggered if the marker is lost during 20 iterations, which corresponds to approximately 0.3 seconds since the loop frequency is 60 Hz. During the time the marker is not detected, the previous command is continuously sent (velocities, height and yaw rate are maintained).
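A minimal sketch of this fallback logic is shown below, reusing the hypothetical send_command helper from the earlier example; names and structure are illustrative only.

```python
LOST_LIMIT = 20            # iterations (~0.3 s at 60 Hz) without a detection before aborting

lost_count = 0
last_command = (0.0, 0.0, 0.0, 0.5)   # (vx, vy, yaw_rate, height), initial hover at 0.5 m

def marker_iteration(marker_detected, new_command):
    """Resend the previous command while the marker is missing; land after LOST_LIMIT misses."""
    global lost_count, last_command
    if marker_detected:
        lost_count = 0
        last_command = new_command
    else:
        lost_count += 1
        if lost_count >= LOST_LIMIT:
            return 'land'             # safety landing: marker lost for too long
    send_command(*last_command)       # send_command as sketched in the earlier example
    return 'continue'
```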

As already said in the previous chapter, the controller node is subscribed to the camera pose publisher. These messages are received at a 60 Hz frequency, and this is the reason why the controller is also executed at this rate. Each message is processed at this rate to estimate the drone pose as already explained. Furthermore, it is worth noting that the node is also subscribed to the battery level publisher (included in the driver server), so that the battery level can be read and the landing maneuver can be triggered when it drops below the chosen threshold.

In order to clarify the different steps followed, figure 21 shows the flow diagram for the node that implements the drone controller.


7 Landing and recharging platform

In this chapter the recharging system design and implementation will be explained in detail, as well as the integration of this system together with a landing platform.

7.1 Landing station

In order to have a landing station where both the marker and the recharging system can be included, it has been decided to build an L-shaped platform. The marker is attached to the vertical wall, while the charging system is implemented on the horizontal floor, where the drone lands. Figure 22 shows the landing station, including the corresponding dimensions.


The station is built using wooden planks. The floor has been covered with a pattern to facilitate the flow deck operation, since for the flow sensor to work well the floor should not be uniform in color. This way, variations on the floor can be detected and tracked to estimate the movement.

7.2 Battery and battery charger

Before designing the recharging system, an analysis of the battery and the charger included in the Crazyflie 2.0 is done in order to know the charge and discharge characteristics, and thus the voltage and current needed.

The battery used is the one included with the Crazyflie 2.0, which is of the LiPo (Lithium-Polymer) type. This battery type is one of the most popular since it has among the best power-to-weight ratios and discharge currents. It is worth noting that the Crazyflie 2.0 has a Protection Circuit Module (PCM) that prevents the user from under/over charging the battery.

The battery voltage is 3.7 V and it has a capacity of 240 mAh. The charge rate is 2C, which means that it is charged at 480 mA. The discharge rate is 15C, which means that the current is 3600 mA. The battery charging time is about 40 minutes and the battery life about 7 minutes.
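As a quick sanity check of these figures, the charge and discharge currents follow directly from the C-rates and the 240 mAh capacity:

\[
I_{charge} = 2\,\text{C} \times 240\ \text{mAh} = 480\ \text{mA}, \qquad
I_{discharge} = 15\,\text{C} \times 240\ \text{mAh} = 3600\ \text{mA}
\]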

The Crazyflie 2.0 includes an integrated circuit (IC) which is an integrated Li-Ion linear charger and system power path management device. In this case, it operates from the USB port and supports charge currents up to 1.5 A. The chip features dynamic power path management (DPPM) that powers the system while simultaneously and independently charging the battery. This feature reduces the number of charge and discharge cycles on the battery, allows for proper charge termination and enables the system to run with a defective or absent battery pack [20].

Therefore, thanks to this charger, the battery can be charged directly by connecting the power supply to the USB pin and the ground pin (pins 10 right and left in figure 6, respectively). The USB standard voltage is 5.0 V.

References
