
http://www.diva-portal.org

This is the published version of a paper presented at IROS 2019 - Workshop on Challenges in Vision-based Drones Navigation, Macau, China, November 8, 2019.

Citation for the original published paper:

David, J., Mostowski, W., Aramrattna, M., Fan, Y., Varshosaz, M. et al. (2019). Design and Development of a Hexacopter for the Search and Rescue of a Lost Drone. In: IROS 2019 - Workshop on Challenges in Vision-based Drones Navigation.

N.B. When citing this work, cite the original published paper.

Permanent link to this version:

http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-40830


Design and Development of a Hexacopter for the Search and Rescue of a Lost Drone

Jennifer David, Wojciech Mostowski, Maytheewat Aramrattna, Yuantao Fan, Mahsa Varshosaz, Patrick Karlsson, Marcus Rodén, Anders Bogga, Jakob Carlsén, Emil Johansson and Emil Andersson

School of Information Technology (ITE), Halmstad University, Sweden. Email: firstname.lastname@hh.se

Abstract— Search and rescue with an autonomous robot is an attractive and challenging task within the research community. This paper presents the development of an autonomous hexacopter designed to retrieve a lost object, such as a drone, from a vast open space, such as a desert area. Navigating with a proposed coverage path planning strategy, the hexacopter can efficiently search for a lost target and locate it using an image-based object detection algorithm. Moreover, after the target is located, our hexacopter can grasp it with a customised gripper and transport it back to a designated location. It is also capable of avoiding static obstacles and dynamic objects. The proposed system was realised in simulation before being implemented on real hardware, i.e. assembly of the drone, crafting of the gripper, software implementation and testing under real-world scenarios. The designed hexacopter won the best UAV design award at the CPS-VO 2018 Competition held in Arizona, USA.

I. INTRODUCTION

The use of Unmanned Aerial Vehicles (UAVs) for search and rescue missions is a challenging application area within the Robotics and Autonomous Systems community. It involves integrating various tasks from different disciplines, such as robotic mapping, path planning, object detection, manipulation mechanisms and power management, into UAVs.

In this context, the Cyber-Physical Systems Virtual Organization (CPS-VO) 2018 competition was organised, which requires a UAV to scan an area for a lost aircraft using a camera and other sensors and recover it safely back to the base in a desert environment. The state of the art in UAVs used for search missions has been widely explored, both in known and in unknown environments [1], [2], [3], [4], [5], [6], [7], [8]. However, the main challenges here are: the manipulation design to grab the target, fast object detection, and maintaining battery power while searching for the target in an unknown dynamic environment.

In this work, we address the problem of design and development of a UAV and its manipulation mechanism, together with object detection and localization, coverage planning and control, with a focus on these challenges. The contributions of this work are threefold: i) the design of the UAV and its gripper mechanism; ii) the use of an open-source machine learning platform, TensorFlow [9], whose very fast object detection helps keep the coverage planning energy efficient, reducing battery power consumption and extending flight time; iii) the drone is built on an open-source physics engine and robot simulator, ROS-Gazebo, with MAVLink communication (the messaging protocol for communicating with drones), making it easy for the research community to replicate and reuse.

Fig. 1: The UAV picking up the target

The complete end-to-end system design, CAD models, circuit design, algorithms, and programs are made open-source and can be easily accessed¹. This paper documents the simulation and experimental results under different scenario conditions. Finally, we also briefly report on the CPS-VO 2018² competition, in which the UAV won the best design award.

The paper is organised as follows. Section II details the UAV design and its major components, hardware and software, along with the gripper mechanism. Section III discusses our methodology for object detection, coverage planning, state estimation and control. Sections IV and V detail the simulation and experimental results, and finally Section VI concludes the paper.

II. UAV DESIGN

The UAV was designed by taking into account some of the main requirements of the CPS-VO Challenge 2018.

The UAV should be able to pick up an object shaped like a drone and weighing approximately 10-20% of the UAV's own weight. The total mission time should be about 15 minutes.

Also, the UAV should be compatible with the PX4 autopilot flight controller³ and should have an on-board computer

¹ CPS-VO 2018 source code, Halmstad University: https://github.com/orgs/CPS2018/
² Cyber Physical System - Virtual Organisation 2018 Challenge: https://cps-vo.org/group/CPSchallenge
³ http://px4.io/


Fig. 2: Overview of the System

and sensors that support ROS⁴ integration. The complete simulation of the platform was to be tested under Open-UAV⁵ (a cloud-enabled UAV testbed) before participating in the final outdoor challenge in Arizona.

A. The Hexacopter

There are many commercially available UAVs on the market with predefined and typically non-modifiable flight parameters. The gripper design affects the lift capacity of the UAV, which in turn affects the energy budget of the UAV and its flight time. Thus, building a UAV from scratch rather than picking an existing design allowed for a heavier payload, more powerful hardware and the freedom to design a more complex gripping device. Gatti [10] listed a number of factors that should be considered when building a multicopter for search and rescue missions. Accordingly, a hexacopter DJI F550⁶ frame was chosen over a quadcopter frame because, in case of a motor failure, the hexacopter can shut down the propeller paired with the failed one and continue to fly as a quadcopter. On the other hand, the hexacopter is more cost effective than an octocopter. The design was kept as symmetric as possible to avoid uneven weight distribution and wind flow, while aerodynamic efficiency itself was not in focus. All custom parts were designed in SolidWorks and fabricated using 3D printers because of the short prototyping time and the low cost of plastic filament. The estimated cost of the UAV was about 6000 EUR, including spare parts, chargers, batteries, hard cases for transportation and miscellaneous items.

The hexacopter consists of a DJI F550 hexacopter frame, sensors (one Matrix Vision⁷ camera for detecting the object, three MaxBotix⁸ SONARs for avoiding obstacles, and a Garmin⁹ LIDAR for altitude control), a sensor controller board, an on-board PC, the gripper module, the PX4 autopilot, and

⁴ http://wiki.ros.org/
⁵ https://github.com/Open-UAV
⁶ https://www.dji.com/flame-wheel-arf/feature
⁷ https://www.matrix-vision.com/USB2.0-single-board-camera-mvbluefox-mlc.html
⁸ https://www.maxbotix.com/Ultrasonic_Sensors/MB1242.htm
⁹ https://buy.garmin.com/en-US/US/p/557294

Fig. 3: Schematic Diagram of the Gripper motors

batteries. The on-board computer is an Nvidia Jetson TX2¹⁰ with a 256-core Pascal GPU. The sensor controller board was custom-made so that it can connect to four I2C components at the same time on the same board. It is built on an ATMega328p Arduino Nano base communicating over UART. The GPS module is a HERE GNSS (Global Navigation Satellite System) receiver with RTK (Real Time Kinematic) support. The PX4 autopilot is equipped with an Inertial Measurement Unit (IMU), telemetry and a manual remote control. The telemetry link streams live data from the autopilot during autonomous operations to the QGroundControl Ground Control Station (GCS) software.

A 22.2 V Lithium-ion Polymer (LiPo) battery powers the hexacopter, giving an estimated flight time of 15 minutes. The six propellers are driven by Sunnysky 460 KV motors¹¹, each with a maximum thrust of 2520 g. The battery sensor is a Mauch 076¹² with a maximum current of 200 A and support for up to 14 cells (58.8 V). When the battery power drops to 30% of its full capacity, the flight mode is set to change to "LAND" mode [11]. The MAVLink communication protocol is used for all communication between the PX4 and the other systems. The complete system (hexacopter and gripper) is developed on ROS and MAVROS and simulated in a Gazebo simulation environment. The total weight of the hexacopter is 7.3 kg.
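The low-battery behaviour described above can be expressed as a small ROS node. The following is a minimal sketch, not the authors' flight code: the MAVROS topic and service names are the standard ones, the 30% threshold comes from the text, and everything else (node name, guard flag, logging) is illustrative.

```python
#!/usr/bin/env python
# Minimal sketch: switch the PX4 flight mode to LAND when the battery drops
# below 30% of its capacity, using the standard MAVROS battery topic and
# set_mode service. Error handling is omitted.
import math
import rospy
from sensor_msgs.msg import BatteryState
from mavros_msgs.srv import SetMode

LAND_THRESHOLD = 0.30  # 30% of full capacity, as stated in the text
landed = False

def battery_callback(msg):
    global landed
    # BatteryState.percentage is reported in the range 0.0 - 1.0 (NaN if unknown)
    if not landed and not math.isnan(msg.percentage) and msg.percentage <= LAND_THRESHOLD:
        rospy.logwarn("Battery at %.0f%%, switching to LAND", msg.percentage * 100)
        set_mode(custom_mode="AUTO.LAND")  # PX4 land mode via MAVROS
        landed = True

if __name__ == "__main__":
    rospy.init_node("battery_failsafe")
    rospy.wait_for_service("/mavros/set_mode")
    set_mode = rospy.ServiceProxy("/mavros/set_mode", SetMode)
    rospy.Subscriber("/mavros/battery", BatteryState, battery_callback)
    rospy.spin()
```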

B. Gripper Design

Previous work in [12], [13] focuses on grasping an object with adhesion-controlled friction, which lowers the squeezing force and makes it possible to lift fragile and deformable objects. In [14], the authors take the gripper into consideration when creating the mathematical model of the UAV. In our gripper design [15], we treat the gripper as a disturbance to the PID controller. There are two different types of claws working together: squeezing claws and gripping claws. Each type of claw has three legs, as illustrated in Fig. 4. Figure 3 is a simple schematic diagram of the

¹⁰ https://developer.nvidia.com/embedded/buy/jetson-tx2
¹¹ http://www.rcsunnysky.com/content/34.html
¹² https://www.mauch-electronic.com/apps/webstore/products/show/6626809


Fig. 4: The Claws used in the Gripper System: (a) squeezing claws; (b) gripping claws

circuit connection of the gripper system with the onboard computer.

Squeezing Claws: The squeezing claws have two arms: one that is moved by a servo motor and one that provides a second rotational point on the part that is rotated onto the object, see Fig. 4a. The idea is that the rubber part is pushed onto the object to be held, so that the rubber is squeezed onto and around any uneven parts.

Gripping Claws: The second triplet of claws, shown in Fig. 4b, is longer and serves two functions: the claws act as landing legs, and they grasp around and, if possible, underneath the object. Before landing, the arms open fully and can be used as legs; after take-off, with the squeezing claws holding an object, the gripping claws close to secure the recovered object. Since the gripping claws are subjected to the most force during landing, a Finite Element Analysis (FEA) was performed on the 3D-printed part using SolidWorks. Figure 5 shows that the claws can withstand a force of 100 N (approx. 10 kg) and that the part breaks when the force is increased to 300 N. All six claws are driven by six Blue-Bird¹³ servomotors controlled by an Arduino Nano board with a switching regulator.

Fig. 5: FEA model of the Grippers

¹³ https://www.blue-bird-model.com/products_detail/106.htm

III. SYSTEM FRAMEWORK

A. System Model

The hexacopter was first developed from scratch and modelled in SolidWorks¹⁴. The mass and inertia characteristics of the Typhoon H480 hexacopter model were modified according to the specifications of our UAV. Based on the arm length and motor shaft, the largest possible propellers were about 352.12 mm and the frame size was 651.36 mm. The CAD drawings were imported into Gazebo as an SDF model file, and plugins were added so that the hexacopter could be simulated.

B. Motion Control

Our hexacopter uses the ENU (East North Up) convention, which means the positive x-axis points east, the y-axis north and the z-axis up. State estimation is done by the sensor controller, which reads data from the LIDAR, SONARs and GPS and filters noise with a single-variable Kalman filter (max. sampling frequency 10 Hz). The altitude h of the hexacopter is measured using the LIDAR sensor and the IMU data. The pitch θ and roll φ angles are obtained from the IMU quaternions using Euler transformations, and the LIDAR sensor measures the current altitude h_curr of the hexacopter, corrected by an offset accommodating the displacement of the IMU from the LIDAR. The actual altitude is then calculated from these parameters and controlled by means of a PID controller.
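As an illustration of this pipeline, the sketch below shows a single-variable Kalman filter, tilt compensation of the LIDAR range using the IMU roll/pitch angles, and one PID step on the altitude error. It is a minimal sketch under the description above; the gains, noise parameters and IMU-LIDAR offset are placeholders, not the values used on the real UAV.

```python
import math

class ScalarKalman:
    """Single-variable Kalman filter used to smooth a noisy range reading."""
    def __init__(self, q=0.01, r=0.25, x0=0.0, p0=1.0):
        self.q, self.r, self.x, self.p = q, r, x0, p0
    def update(self, z):
        self.p += self.q                  # predict (constant-state model)
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct with measurement z
        self.p *= (1.0 - k)
        return self.x

def corrected_altitude(lidar_range, roll, pitch, imu_lidar_offset=0.05):
    """Project the slanted LIDAR range onto the vertical axis using the IMU
    roll/pitch angles (radians) and remove the mounting offset (illustrative)."""
    return lidar_range * math.cos(roll) * math.cos(pitch) - imu_lidar_offset

def altitude_pid(error, state, kp=1.2, ki=0.05, kd=0.4, dt=0.1):
    """One PID step on the altitude error; returns the vertical velocity
    command and the updated (integral, previous error) state."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, (integral, error)
```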

C. Coverage Path Planning

A zigzag (lawn-mower) path planner [16] is used here for coverage planning, after analysing [17], [18]. The planner requires the centre GPS coordinates of the search area. This central point is expanded into a rectangular search space for coverage planning, depending on the given search area size. The search area is then divided into a finite number of small cells. The dimensions of each cell are calculated according to the hexacopter's height above the ground and the camera's field of view, as in Fig. 7. This is a grid-based decomposition method in which the workspace is directly represented as a distance graph and a zigzag path is generated using a simple path-search algorithm. The centres of the small cells are given as the list of waypoints to the hexacopter.

The hexacopter navigates from its current location through the list of predefined waypoints using GPS-based position control and PID-regulated velocity control. Hence, in this method, a minimum number of waypoints is processed while fully covering the search area. This leads to improved execution time and lower power consumption. The approach always gives complete coverage and fails only when there are GPS errors.
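A minimal sketch of this grid-based zigzag planner is given below: the cell size is derived from the flight altitude and the camera's angles of view, and the cell centres are visited row by row in alternating directions. This is an illustration of the idea rather than the authors' implementation; the function names and the optional overlap parameter are ours, and the example values are taken from the simulation section.

```python
import math

def camera_footprint(height, fov_w_deg, fov_h_deg):
    """Ground footprint (width, height) of the camera at a given altitude."""
    w = 2.0 * height * math.tan(math.radians(fov_w_deg) / 2.0)
    h = 2.0 * height * math.tan(math.radians(fov_h_deg) / 2.0)
    return w, h

def zigzag_waypoints(center, area_w, area_h, altitude, fov_w_deg, fov_h_deg, overlap=0.0):
    """Cell-centre waypoints of a lawn-mower path over a rectangular search
    area centred at `center` (local x, y in metres)."""
    cell_w, cell_h = camera_footprint(altitude, fov_w_deg, fov_h_deg)
    cell_w *= (1.0 - overlap)   # optional overlap to tolerate GPS drift
    cell_h *= (1.0 - overlap)
    cols = max(1, int(math.ceil(area_w / cell_w)))
    rows = max(1, int(math.ceil(area_h / cell_h)))
    x0 = center[0] - area_w / 2.0 + cell_w / 2.0
    y0 = center[1] - area_h / 2.0 + cell_h / 2.0
    waypoints = []
    for r in range(rows):                                        # sweep row by row
        cols_range = range(cols) if r % 2 == 0 else reversed(range(cols))
        for c in cols_range:                                     # alternate direction per row
            waypoints.append((x0 + c * cell_w, y0 + r * cell_h, altitude))
    return waypoints

# Example: a 40 m x 40 m area centred at (10, 10), flown at 10 m with 37/47 degree view angles
wps = zigzag_waypoints((10.0, 10.0), 40.0, 40.0, 10.0, 37.0, 47.0)
```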

D. Object Detection

The hexacopter needs to detect an object with a shape similar to its own frame. Hence, a suitable camera-based object detection method had to be selected from the literature [19], [20], [21].

¹⁴ Halmstad University UAV CAD drawings: https://github.com/CPS2018/CAD_drawings


Fig. 6: Sample images used for training the CNN

First, we compared three methods: SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features) and CNN (Convolutional Neural Network) [22], [23], and the best method was chosen based on a selection of parameters. Accuracy, frames per second (FPS) and energy efficiency were used as parameters, since they play a vital role in fast object detection in the challenge.

After a series of test experiments [24], it was decided to use three different Convolutional Neural Network models [25], [26], implemented using the Google TensorFlow Object Detection API¹⁵ with TensorBoard 4.5. All three models were retrained with a custom data set made from 3105 images of the object to be detected. Retraining a model with a custom data set involves three steps:

a) collecting data; b) labelling the data; and c) training the network.

Collecting Data: To obtain a significant diversity of images, both aerial and non-aerial images under different settings were captured, as shown in Fig. 6. Multiple videos and images taken from different distances, environments and angles, and captured with different cameras, were used. The data set was then divided into two subsets: 15% of the images were randomly selected for evaluation and the remaining set was used for actual training.

Labelling Data: The LabelImg tool is used to label the data manually; it saves the annotations in the Pascal Visual Object Classes (Pascal VOC) format, which is supported by both LabelImg and TensorFlow.

Training the model: All three models are trained on the data set using an Nvidia GTX 960M GPU for 20000 global steps. The TensorFlow Object Detection API is then used to evaluate the performance of each model using TensorBoard. To test the accuracy of the retrained models,

¹⁵ https://github.com/tensorflow

another data set was also used. It contains 400 images of the object to be detected, taken from distances of 1.5 m, 5 m, 10 m and 15 m. At every distance, 50 images in each of two different background environments were evaluated. This was done to test the effect of the background environment in the training set.
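For reference, running a retrained model on board follows the usual TensorFlow 1.x Object Detection API pattern. The sketch below assumes a frozen inference graph exported by that API; the file name and the dummy input frame are placeholders, not artifacts from this project.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x, as used by the original Object Detection API

# Load a frozen inference graph exported by the Object Detection API
# (the file name is a placeholder).
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    # A camera frame would normally be fed here; a dummy image is used for illustration.
    frame = np.zeros((1, 300, 300, 3), dtype=np.uint8)
    boxes, scores, classes = sess.run(
        [graph.get_tensor_by_name("detection_boxes:0"),
         graph.get_tensor_by_name("detection_scores:0"),
         graph.get_tensor_by_name("detection_classes:0")],
        feed_dict={graph.get_tensor_by_name("image_tensor:0"): frame})
    # boxes are normalised [ymin, xmin, ymax, xmax]; scores give the detection confidence
```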

E. Object Localization

Positioning above the object: Once the object is detected, the hexacopter has to position itself on top of it in order to descend and pick it up. The problem of estimating the pose of the camera relative to a specific object is called the Perspective-n-Point (PnP) problem in computer vision. Our positioning method is similar to the approaches used in [6], [8]. The object detection module gives the detection boxes and images as input to the system. We use the OpenCV function solvePnP to get the X and Y offsets in world coordinates from the camera to the detected object's center.
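A minimal sketch of such a PnP-based offset computation with OpenCV is shown below. It is not the exact formulation used on the UAV: it assumes the four corners of the detection box approximate the outline of the (planar) object, and the camera matrix, distortion coefficients and object size are supplied by the caller.

```python
import cv2
import numpy as np

def object_offset(bbox, object_size, camera_matrix, dist_coeffs):
    """Estimate the lateral (X, Y) offset from the camera to the detected
    object's center with cv2.solvePnP. `bbox` = (xmin, ymin, xmax, ymax) in
    pixels; `object_size` = (width, height) of the object in metres."""
    xmin, ymin, xmax, ymax = bbox
    w, h = object_size
    # 3D corners of the object in its own (planar) frame
    object_points = np.array([[-w / 2, -h / 2, 0], [w / 2, -h / 2, 0],
                              [w / 2,  h / 2, 0], [-w / 2,  h / 2, 0]], np.float32)
    # Matching 2D corners of the detection box in the image
    image_points = np.array([[xmin, ymin], [xmax, ymin],
                             [xmax, ymax], [xmin, ymax]], np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    tx, ty, _ = tvec.ravel()
    # tvec is the object's position in the camera frame; its x/y components
    # are used to re-position the UAV above the object.
    return float(tx), float(ty)
```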

Descending on the detected object: The descending method repeatedly checks whether the UAV is centered above the object before continuing to descend. There are also two different detection classes for the object: the whole object and the center of the object. The centering is done depending on the altitude of the UAV and these classes.

At a higher altitude, the whole object is used, and at a lower altitude, the center of the object is used to center the UAV. This partitioning between detected classes is due to the characteristics of the camera and the size of the object to be detected, and it is done to minimize the offset from the center point to the actual pick-up point.
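The switching between the two detection classes and the centre-then-descend loop can be summarised as follows; the thresholds and step size are illustrative, not the tuned values from the real system.

```python
def target_class(altitude, switch_altitude=3.0):
    """Pick which detection class to center on: the whole object when the UAV
    is high, the object's center when it is low (threshold is illustrative)."""
    return "whole_object" if altitude > switch_altitude else "object_center"

def descend_step(offset_xy, center_tolerance=0.1, descend_step_m=0.5):
    """One decision step of the descending procedure: re-center laterally until
    within tolerance, otherwise descend by one step. Returns a relative
    setpoint (dx, dy, dz) in metres."""
    dx, dy = offset_xy
    if abs(dx) > center_tolerance or abs(dy) > center_tolerance:
        return (-dx, -dy, 0.0)     # re-center first, hold altitude
    return (0.0, 0.0, -descend_step_m)  # centered: descend by one step
```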

F. Obstacle Avoidance and Heading Control

Though the challenge does not require obstacle avoidance, this feature was incorporated for future use of the UAV [27]. The UAV has thus been fitted with three additional SONARs: one pointing forward in the heading direction and the other two on either side of the UAV [28]. We use the bug algorithm [29] for avoiding obstacles. Because of this feature, heading direction control is required, as the UAV has to steer towards the goal position after avoiding an obstacle. The yaw orientation of the UAV is important because the forward sensor always has to point in the heading direction of the UAV. Based on the orientation data of the IMU sensor, the heading direction is calculated as a quaternion and sent to the velocity controller.
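The following sketch illustrates how the three SONAR ranges, the goal bearing and the yaw quaternion could be combined into a velocity and heading command. It is a simplified bug-style rule for illustration only, not the authors' implementation of [29]; the proximity threshold and speed are placeholders.

```python
import math

def heading_quaternion(yaw):
    """Quaternion (x, y, z, w) for a pure yaw rotation, used to keep the
    forward SONAR pointing along the heading direction."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def avoidance_command(front, left, right, yaw_to_goal, proximity=2.0, speed=1.0):
    """Simplified bug-style rule: head towards the goal unless the forward
    SONAR reports an obstacle closer than `proximity` metres, in which case
    sidestep towards the freer side until the front is clear again."""
    if front > proximity:
        yaw = yaw_to_goal                      # path to goal is clear
    elif left > right:
        yaw = yaw_to_goal + math.pi / 2.0      # sidestep to the left
    else:
        yaw = yaw_to_goal - math.pi / 2.0      # sidestep to the right
    vx, vy = speed * math.cos(yaw), speed * math.sin(yaw)
    return (vx, vy), heading_quaternion(yaw)
```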

IV. SIMULATION

In this section, we present the simulation results for the major tasks, as well as for the whole competition run, using the ROS-Gazebo simulation environment.

1) Coverage Planning: The simulation of the coverage algorithm is shown in Fig. 7, which shows 100% coverage of the search area. The camera footprints visualise how much area the camera covers and are based on the camera's angles of view. The red dots are the waypoints of the UAV and the rectangles are the camera footprints in the xy-plane. The search space is 40 × 40 × 10 m with a centre at (10, 10). The camera view angle parameters are 37° in width and 47° in height.


Fig. 7: Simulation of Coverage Planning

             Normal Mode   Performance Mode
FPS          10.5          12.5
Energy       5.88 W        5.88 W

Height                                1.5 m    5 m       10 m     15 m
Avg detection % on true positives     81.9%    83.85%    75.07%   -
True Positives                        50%      86.89%    79%      0%
False Positives                       0%       0%        0%       0%
False Negatives                       50%      13.11%    21%      100%

TABLE I: CNN model 2 - Object Detection

2) Object Detection: Three CNN models [24] were used for object detection, implemented using TensorFlow. They were chosen with respect to the performance of our hardware and tested on three parameters under both normal and performance modes. Under performance mode, both the CPU clock (2.0 GHz) and the GPU clock (1.3 GHz) are set to maximum. The tested parameters are accuracy, FPS and energy consumption. The accuracy of object detection is measured by the percentage of true positives (correct object detected), false positives (wrong object detected), false negatives (object not detected), and the average detection percentage on true positives (the certainty with which the model detected the correct object). The energy efficiency test is done by measuring the difference in power consumption between idle and performance modes.
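The bookkeeping behind these figures can be written down compactly. The sketch below assumes one boolean outcome and an optional confidence value per evaluation image; the exact evaluation scripts in [24] may differ.

```python
def detection_metrics(results):
    """Compute the accuracy figures used in Table I from per-image results.
    Each entry is (detected_correct: bool, detected_wrong: bool,
    confidence: float or None)."""
    n = len(results)
    tp = sum(1 for ok, wrong, _ in results if ok)            # true positives
    fp = sum(1 for ok, wrong, _ in results if wrong and not ok)  # false positives
    fn = n - tp - fp                                          # object not detected
    avg_conf = (sum(c for ok, _, c in results if ok and c is not None) / tp) if tp else None
    return {"true_pos_%": 100.0 * tp / n,
            "false_pos_%": 100.0 * fp / n,
            "false_neg_%": 100.0 * fn / n,
            "avg_detection_on_tp_%": 100.0 * avg_conf if avg_conf is not None else None}
```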

Table I shows the results for CNN model 2 on the three parameters; model 2 could successfully detect objects up to 10 m away with 75.07% average detection on true positives. It was concluded that model 2 is preferred for the challenge, as it does not require powerful hardware. However, model 1 and model 3 could be used after upgrading the hardware and in known environments [25], respectively.

3) Positioning the Object and Descending: The simulation test was performed 30 times to test the detection, the positioning over the object and the descending onto it, without coverage planning. The results of positioning the UAV on top of the object, averaged over the 30 experiments, are shown in Table II. Here, the UAV lands with its center about 2 cm from the object's center. It was found that the estimated location of the object relative to the UAV is not precise, but the positioning is more stable when the object is closer to the camera center.

     Position of the object (in m)   UAV position on top of object (in m)
X    -105.000129                     -105.027609
Y       0.000018                       -0.007006
Z       0.010908                        0.178237

TABLE II: Descending on the object - tests

Proximity   Target Reached   Travelled Distance
1 m         Crash             40.8 m
2 m         Success          176.7 m
3 m         Success          181.0 m
4 m         Success          193.6 m
5 m         Success          184.2 m

TABLE III: Obstacle avoidance experiments

4) Altitude Control: The altitude control was tested by creating a Gazebo world environment with different obstacles placed along the flight path.

The obstacles are three barricades and a ramp, and the goal position of the UAV was set on top of the ramp, as shown in Fig. 8a. The altitude of the UAV with respect to the ground and the obstacles is plotted in Fig. 8b. The results show that the UAV keeps a fixed altitude above the ground, following the ground shape, and that it compensates for the roll and pitch of the UAV better than the barometer in the autopilot. The system was tested to measure the distance even at angles larger than 45° and was found to work up to a maximum altitude of 20 m with a precision of 5 cm.

Fig. 8: Altitude Control tests: (a) Gazebo environment for testing the altitude; (b) altitude plots

5) Obstacle Avoidance: To test this additional feature, a Gazebo environment was created with two obstacle walls of height 10 m and 20 m, respectively. The flight altitude is set to 4 m and the test examines the behaviour changes of the UAV for different proximity settings of the SONARs, as shown in Table III; the obstacle avoidance feature is tested for each scenario. As seen in the table, the UAV failed for the shortest proximity distance of 1 m but could successfully avoid obstacles for larger ranges.

Figure 9 shows all the steps of the competition (detection, pickup and dropoff) except coverage planning (search)


simulated in Gazebo; the video can be found online¹⁶.

Fig. 9: Gazebo simulation of the task

V. EXPERIMENTS

A. Results

1) Coverage Planning: The UAV was tested under real flight conditions in a search space of 20 m × 20 m × 10 m. The view angle parameters are the same as in the simulation. The plots in Fig. 10 show that there is less than 100% coverage of the area, with some missed white spots. This is due to GPS errors that make the positions drift from their original values. This was not a practical problem for the challenge. However, it could be addressed by a drift buffer that overlaps neighbouring camera footprints to create a shared area, at the cost of a trade-off between speed/energy consumption and missed spots. This was also tested in simulation by adding a 10% overlap, which made the algorithm calculate 42 waypoints instead of 30 and increased the total length travelled from 233.4 m to 272.3 m.

Fig. 10: Real Experiments of Coverage Planning

2) Altitude Control: A real flight test was conducted to test the altitude control¹⁷. In the test environment, a table of height 85 cm was placed to check how the UAV maintains its altitude with respect to the ground, and to check its landing control.

Figure 11 shows the table and the landing part of the whole test and compares the altitude system of the PX4 autopilot with our altitude control system. It shows that both altitude systems have the same performance.

B. Challenge Results

The challenge took place at Timpa Airfield, Tucson, in the Arizona desert¹⁸, see Fig. 12. A UAV is declared the

¹⁶ Video of the Gazebo simulation: https://youtu.be/Utye0OgjiB0
¹⁷ Real-world experiment: https://youtu.be/m4P72LMfe9c
¹⁸ CPS challenge 2018: https://youtu.be/BAfED_XtjM8

Fig. 11: Altitude Control

Fig. 12: Hexacopter at the CPS-VO Challenge at Timpa airfield, Arizona

winner if it is able to scan, search and find the lost drone frame, pick it up, and drop it off at the predestined location in an unknown desert environment. Our UAV performed well in terms of coverage, positioning, descending and altitude control, matching the simulation results. The desert wind did not affect the positioning or detection, and the UAV remained under control. However, our gripper system did not have a feedback signal indicating when the claws had grasped the object. This resulted in a completely empty run of the UAV after an unsuccessful pickup.

When this was corrected, another problem emerged: the UAV identified its own shadow as the lost drone frame, since the target was a replica of the UAV and the shadow lay right underneath it. Nevertheless, thanks to the completed empty run, the UAV managed to secure second place in the challenge and a special mention award for the best UAV design for its rugged performance.

VI. CONCLUSION

This paper presents an open-source framework and a durable model of an autonomous hexacopter, with its complete design and development along with the gripper system.

The system was designed to withstand unknown environmental conditions such as the desert. The framework involves many complex tasks, like object detection, coverage planning, positioning over the object, and object grasping, and our UAV performs well under different scenarios in simulation and real-world environments. One of the possible improvements


to the existing system is to use a gimbal for the camera to avoid errors due to camera movement under extreme wind conditions. The drone could also make use of other localisation approaches, besides GPS, for tasks operated in indoor areas.

REFERENCES

[1] P. Doherty and P. Rudol, "A UAV search and rescue scenario with human body detection and geolocalization," in AI 2007: Advances in Artificial Intelligence, pp. 1–13, Springer Berlin Heidelberg, 2007.

[2] A. W. N. Ibrahim, P. W. Ching, G. L. G. Seet, W. S. M. Lau, and W. Czajewski, "Moving objects detection and tracking framework for UAV-based surveillance," in 2010 Fourth Pacific-Rim Symposium on Image and Video Technology, pp. 456–461, Nov 2010.

[3] P. Sadeghi-Tehran, C. Clarke, and P. Angelov, "A real-time approach for autonomous detection and tracking of moving objects from UAV," in 2014 IEEE Symposium on Evolving and Autonomous Learning Systems (EALS), pp. 43–49, Dec 2014.

[4] P. Ramon Soria, R. Bevec, B. C. Arrue, A. Ude, and A. Ollero, "Extracting objects for aerial manipulation on UAVs using low cost stereo sensors," Sensors, vol. 16, no. 5, 2016.

[5] J. Weaver, D. Frank, E. Schwartz, and A. A. Arroyo, "UAV performing autonomous landing on USV utilizing the robot operating system," in ASME District F - Early Career Technical Conference (ECTC 2013), 2013.

[6] P. Ramon Soria, B. C. Arrue, and A. Ollero, "Detection, location and grasping objects using a stereo sensor on UAV in outdoor environments," Sensors, vol. 17, no. 1, 2017.

[7] P. Smyczyński, Ł. Starzec, and G. Granosik, "Autonomous drone control system for object tracking: Flexible system design with implementation example," in Methods and Models in Automation and Robotics (MMAR), 2017 22nd International Conference on, pp. 734–738, IEEE, 2017.

[8] K. E. Wenzel, A. Masselli, and A. Zell, "Automatic take off, tracking and landing of a miniature UAV on a moving carrier vehicle."

[9] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., "TensorFlow: A system for large-scale machine learning," in OSDI, vol. 16, pp. 265–283, 2016.

[10] M. Gatti, "Design and prototyping high endurance multi-rotor." http://amsdottorato.unibo.it/7124/1/gatti_mauro_tesi.pdf, 2015. [Online; accessed 01-May-2018].

[11] F. Baronti, G. Fantechi, L. Fanucci, E. Leonardi, R. Roncella, R. Saletti, and S. Saponara, "State-of-charge estimation enhancing of lithium batteries through a temperature-dependent cell model," in 2011 International Conference on Applied Electronics, pp. 1–5, Sept 2011.

[12] E. W. Hawkes, H. Jiang, D. L. Christensen, A. K. Han, and M. R. Cutkosky, "Grasping without squeezing: Design and modeling of shear-activated grippers." http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8239707, 2017. [Online; accessed 02-Mar-2018].

[13] B. Kvæstad, "Autonomous drone with object pickup capabilities," Master's thesis, NTNU, 2016.

[14] S. Kim, S. Choi, and H. J. Kim, "Aerial manipulation using a quadrotor with a two DOF robotic arm." http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6697077, 2013. [Online; accessed 02-Mar-2018].

[15] E. Andersson and A. Bogga, "Hexacopter with gripping module," Bachelor Thesis, Halmstad University, Sweden, 2018.

[16] J. Valente, D. Sanz, J. Del Cerro, A. Barrientos, and M. Á. de Frutos, "Near-optimal coverage trajectories for image mosaicing using a mini quad-rotor over irregular-shaped fields," Precision Agriculture, vol. 14, no. 1, pp. 115–132, 2013.

[17] C. Di Franco and G. Buttazzo, "Energy-aware coverage path planning of UAVs," in Autonomous Robot Systems and Competitions (ICARSC), 2015 IEEE International Conference on, pp. 111–117, IEEE, 2015.

[18] E. Galceran and M. Carreras, "A survey on coverage path planning for robotics," Robotics and Autonomous Systems, vol. 61, no. 12, pp. 1258–1276, 2013.

[19] I. Daramouskas, I. Perikos, and I. Hatzilygeroudis, "A method for performing efficient real-time object tracing for drones," Advances in Smart Systems Research, vol. 6, no. 2, p. 55, 2012.

[20] Y. Wu, Y. Sui, and G. Wang, "Vision-based real-time aerial object localization and tracking for UAV sensing system," IEEE Access, vol. 5, pp. 23969–23978, 2017.

[21] S. Saripalli, J. F. Montgomery, and G. S. Sukhatme, "Visually guided landing of an unmanned aerial vehicle," IEEE Transactions on Robotics and Automation, vol. 19, no. 3, pp. 371–380, 2003.

[22] L. Juan and O. Gwun, "A comparison of SIFT, PCA-SIFT and SURF," International Journal of Image Processing (IJIP), vol. 3, no. 4, pp. 143–152, 2009.

[23] P. Fischer, A. Dosovitskiy, and T. Brox, "Descriptor matching with convolutional neural networks: a comparison to SIFT," arXiv preprint arXiv:1405.5769, 2014.

[24] P. Karlsson and E. Johansson, "Object identifier system for autonomous UAV: A subsystem providing methods for detecting and descending to an object located in a specified area with a coverage algorithm," Bachelor Thesis, Halmstad University, Sweden, 2018.

[25] J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, et al., "Speed/accuracy trade-offs for modern convolutional object detectors," in IEEE CVPR, 2017.

[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.

[27] J. Carlsén Stenström and M. Rodén, "Obstacle avoidance and altitude control for autonomous UAV," Bachelor Thesis, Halmstad University, Sweden, 2018.

[28] O. Khatib, "Real-time obstacle avoidance for manipulators and mobile robots," IEEE International Conference on Robotics and Automation, vol. 2, pp. 500–505, 1985.

[29] V. J. Lumelsky and T. Skewis, "Incorporating range sensing in the robot navigation function," IEEE Transactions on Systems, Man, and Cybernetics, vol. 20, no. 5, pp. 1058–1069, 1990.
