
DEGREE PROJECT IN COMPUTER SCIENCE AND ENGINEERING, SECOND CYCLE, 30 CREDITS

STOCKHOLM, SWEDEN 2017

Autonomous Exploration and Data Gathering with a Drone

ABHISHEK CHOUDHARY

KTH ROYAL INSTITUTE OF TECHNOLOGY


Autonomous Exploration and Data Gathering with a Drone

ABHISHEK CHOUDHARY

Master’s Thesis at RPL, KTH

Supervisor: Patric Jensfelt

Examiner: Joakim Gustafson


Abstract

Unmanned Aerial Vehicles (UAVs) are agile and are able to fly in and out of areas that are either dangerous for humans or have complex terrains making ground robots unsuitable. For their autonomous operation, the ability to explore unmapped areas is imperative. This has applications in data gathering tasks, search and rescue etc.

The objective of this thesis is to ascertain that it is, in fact, possible and feasible to use UAVs equipped with 2D laser scanners to perform autonomous exploration tasks in indoor environments.

The system is evaluated by testing it in different simulated and real environments. The results presented show that the system is capable of completely and safely exploring unmapped and/or unexplored regions.


Referat

Unmanned aerial vehicles (UAVs) are agile and can fly in and out of areas that are dangerous for humans or difficult for ground robots to reach. To achieve high levels of autonomy, a UAV must be able to explore and map an unknown area on its own.

There are several applications for this, such as rescue missions and data gathering. The goal of this thesis is to show that it is possible to use a UAV equipped with 2D laser scanners to perform autonomous mapping missions in indoor environments.

The system is evaluated by testing it in different simulated and real environments. The presented results show that the system can explore unknown areas in a safe manner.


Contents

List of Figures
List of Tables

1 Introduction
  1.1 Research Question
  1.2 Hypothesis and Objective

2 Background
  2.1 Exploration
  2.2 Planning

3 Related Work
  3.1 Exploration
    3.1.1 Frontier Based Technique
    3.1.2 Weighted Frontiers
    3.1.3 Multi-Criteria Decision Making
    3.1.4 3D Frontiers
    3.1.5 Next Best View Planner
    3.1.6 Receding Horizon ’Next Best View’ Planner
  3.2 Planning
    3.2.1 Refined Bellman-Ford
    3.2.2 Rapidly Exploring Random Tree (RRT)
  3.3 3D Reconstruction

4 Method
  4.1 SLAM
  4.2 Pose Fusion
  4.3 Frontier Exploration

5 Implementation
  5.1 Hardware Setup
    5.1.1 UAV Design
  5.2 Software Setup
    5.2.1 Simulation Setup
  5.3 Frontier Implementation
  5.4 Navigation Implementation

6 Experiments
  6.1 Exploration Test
    6.1.1 Results
    6.1.2 Result Analysis
  6.2 Data Gathering Test
    6.2.1 Results
    6.2.2 Result Analysis
  6.3 Real Life Test
    6.3.1 Setup Details
    6.3.2 Results
    6.3.3 Result Analysis

7 Conclusion
  7.1 Future Work

Appendices
A Societal Impacts and Ethics
Bibliography


List of Figures

3.1 Frontier Region Detection
3.2 Line Detection in scene for NBV Planner
3.3 Path smoothing for Bellman-Ford path planner
4.1 Frontier Exploration Process
5.1 Front (left) and Top (right) view of the quadcopter
5.2 Wing Configuration [1]
5.3 Grid Map Cell Connection
5.4 Pose and Goal Points
6.1 Test environment for varying starting regions - Room 1
6.2 Exploration Results for varying starting regions
6.3 Test environment for varying FOV
6.4 Exploration Results for varying FOV
6.5 End State - Exploration with limited laser FOV
6.6 Test environment for real experiments
6.7 Front (left) and Top (right) view of the Hexacopter
6.8 Exploration Results for real life experiment
6.9 3D model of test environment from captured depth data


List of Tables

5.1 List of electronics on board the UAV
6.1 Experimental Setup - Single Room Environment
6.2 Results - Single Room - Starting region varied
6.3 Results - Single Room - FOV varied


Chapter 1

Introduction

The increasing popularity of UAVs in mobile robotics research is fueled by a combination of increased research interest in the area and growing commercial applications of UAVs. The latter has traditionally been a big factor in driving down cost and encouraging more research in several robotics areas; a perfect example is the extensive use of depth cameras after the Microsoft Kinect made them popular and cheap. For robotic solutions to become widely accepted, they must cross the threshold from primarily being used for research or in industrial/controlled environments to being able to smartly and safely work in complex and unpredictable environments. For several of these applications, like surveillance and data acquisition, autonomy is a prerequisite.

Robots are most suited for tasks that are dull, dirty, dumb or dangerous [2]. For instance, in the construction industry, it is often prudent to have detailed information about the progress of work. Gathering meaningful data can not only serve as a tracking mechanism to meet deadlines but can also aid in identifying areas of improvement through careful analysis. Dense indoor 3D models of the site, in theory, would be an ideal way to track said progress by comparing incremental changes. It is possible to do 3D model reconstruction from depth images [3, 4, 5], however they all have one common drawback: the data gathering phase is either a depth camera mounted on a wheeled robot or alternatively a hand-held depth camera roughly pointed in the direction of motion [6]. Given the nature of buildings under construction, UGVs (Unmanned Ground Vehicles) are not the ideal choice as the terrain could be hard to navigate and motion would affect the quality of data acquired.

A camera mounted on a UAV platform could be a sensible alternative to explore unknown areas and capture depth images. UAVs provide greater agility and access as compared to wheeled robots. This means that the data captured and consequentially the 3D model constructed would be superior.


The need to automate this data gathering process and the premise that UAVs are a suitable option for a vehicle are the inspiration for this thesis project. Work done as part of this project can further be used for applications like surveillance, monitoring tasks, search and rescue etc.

This chapter details the research question that the thesis project aims to answer. It also limits the scope of the project.

1.1 Research Question

Can a UAV perform autonomous exploration based on a 2D laser scanner with motion confined to a plane while capturing data from a RGB-D camera?

1.2 Hypothesis and Objective

The hypothesis being tested here is that a UAV with a 2D laser scanner is a suitable platform to perform autonomous exploration in unmapped areas while simultaneously gathering data from other perception sensors.

A possible application of this system could be to use an on-board depth camera to capture RGB-D images for tasks like 3D mapping/reconstruction.

A requirement of this nature demands consideration of the manner in which the UAV navigates in the environment, as this influences the data being captured by the camera.

The 3D mapping/reconstruction itself is not in the scope of this project. The goal is only to build a system that can gather meaningful and suitable data, not to actually process it.


Chapter 2

Background

UAVs are becoming more popular in a variety of application areas including surveillance, military operations, photography etc. Many of these applications require little or no human intervention, and the UAVs must therefore have the ability to operate autonomously. Often with incomplete or, worse, no maps at all, the UAVs must explore the environment in a manner suitable to fulfill the required objectives.

There are several subsystems that are involved in achieving the mentioned level of autonomy in mobile robots, namely localization, mapping, planning, control, and navigation. This chapter deals with these areas that form the basic foundation upon which the work of this project is done.

2.1 Exploration

For mobile robots to be autonomous while operating in unexplored environments, a key functionality is the ability to make decisions on navigation targets. This decision making can have several approaches based on factors like cost of movement, potential information gain, application requirement etc. Strategies employed by the robot to consider these metrics and compute the necessary trade-offs fall under the purview of exploration.

These algorithms work under the assumption that the robot is capable of mapping and gathering information from its surroundings through on-board sensors like laser scanners, cameras etc. The strategy must use this information to compute what the best next goal should be. Typically the two most common trade-offs are between the information gain from a target and the cost (path length or complexity), as explained by Stachniss[7].


2.2 Planning

As mentioned before, the UAV being used for this project is equipped with a 2D laser scanner and as a result the map generated, the goals from exploration etc are all in the 2D plane. Since the system is designed for operation in indoor environments and there is no 3D data available, path planning and navigation are challenging. To mitigate this uncertainty, the environment is assumed to be 2.5D formed by the horizontal ground plane, and vertical walls and doors.

In cases of a cluttered environment, however, it is possible that obstacles exist just above/below the laser scanner plane; this would violate the 2.5D assumption. Therefore, in addition to a 2.5D environment it is also assumed that the environment is reasonably uncluttered. This means it is possible to fly at a certain constant altitude above the ground plane such that there exist no obstacles that are not in the FOV (field of view) of the laser scanner. The authors in [8] tested the validity of the aforementioned assumptions on a MAV (Micro Aerial Vehicle) and found experimental flight performance to be safe and acceptable.

The exploration algorithms produce high level goals on a map of the world, a cell in an occupancy grid¹ for example. The next step would be to plan a path to the goal. This path should take into consideration the motion constraints of the UAV. A few of these constraints are limits on the turning manoeuvres that can safely be executed, how close to walls/obstacles it is safe to fly, and what the optimum range (distance from object) of operation is for on-board sensors that gather data during flight, since this range can vary.

¹ used to represent a robot workspace in a discrete grid representation; each cell in the grid is either ’free’, ’occupied’ or ’unknown’


Chapter 3

Related Work

This chapter includes details about prior work that has been done on different exploration and planning techniques for mobile robots, specifically UAVs.

3.1 Exploration

Most exploration techniques rely on one of two major types of perception sensors: laser scanners and RGB-D cameras. Since the hypothesis (Sec 1.2) restricts the type of perception sensors to be used, the primary focus is on techniques that rely on 2D laser scanners. Exploration techniques can be deconstructed into two major tasks: candidate identification and candidate selection. The two steps differ based on the type of environment, robot motion constraints, navigation requirement, perception sensors on-board etc.

The subsequent subsections tackle the strengths and weaknesses of a few popular methods that have been tested on UAVs for a variety of applications, indoor and outdoor exploration for instance.

3.1.1 Frontier Based Technique

Initial investigation points to seminal work by Yamauchi[9] where he introduced a frontier based technique. A ’frontier’ is defined as the boundary on the map between explored and unexplored regions. This algorithm aims to exploit this knowledge on the premise that navigating to these targets will constantly increase the information gathered by pushing back the boundary as the robot explores new territories.

The process of frontier detection is analogous to an edge detection algorithm commonly used in computer vision applications. The robot constructs an evidence grid with each cell holding the value of the occupancy probability of that particular cell. One implementation of such an evidence map is an Occupancy Grid[10] that updates itself as the robot moves around to keep track of the explored space. The frontier search algorithm detects, labels, and lists all the edges between the explored and unexplored regions on the map as frontier regions, as shown in Fig 3.1. Based on a minimum size threshold, the smaller frontiers are ignored and the remaining make up the list of possible frontiers for the robot to navigate to from its current pose. The decision on which frontier to visit can be adapted based on different criteria like size of frontier, distance from frontiers etc. As the robot moves and updates the occupancy grid, the entire process (searching for frontiers, picking targets, navigating) repeats till there are no more frontiers for the robot to explore.

Figure 3.1: Frontier Detection: Evidence Grids (Left), Frontier Edge Segments (Middle), Frontier Regions (Right)

The greatest strengths of this technique are the ease of implementation and the fact that it has been extensively tested on several different platforms. The one obvious shortcoming, however, is that the frontier selection process is not very sophisticated, as it only considers the size and distance of a frontier. The next proposed method builds upon this to incorporate a more sophisticated approach to select frontiers.
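To make the frontier test concrete, the sketch below shows one way a cell of an occupancy grid could be classified as a frontier cell. It is a minimal illustration, not the implementation used in this thesis; the grid structure, function names and the value convention (0 = free, 100 = occupied, -1 = unknown, following the common ROS convention) are assumptions.

#include <cstdint>
#include <vector>

// Minimal occupancy grid: row-major storage, ROS-style cell values
// (0 = free, 100 = occupied, -1 = unknown). Illustrative only.
struct Grid {
  int width, height;
  std::vector<int8_t> cells;
  int8_t at(int x, int y) const { return cells[y * width + x]; }
  bool inside(int x, int y) const {
    return x >= 0 && x < width && y >= 0 && y < height;
  }
};

// A frontier cell here is a known free cell with at least one unknown
// 8-connected neighbour, i.e. it lies on the boundary between explored
// and unexplored space.
bool isFrontierCell(const Grid& g, int x, int y) {
  if (g.at(x, y) != 0) return false;            // must be known free space
  for (int dy = -1; dy <= 1; ++dy) {
    for (int dx = -1; dx <= 1; ++dx) {
      if (dx == 0 && dy == 0) continue;
      if (g.inside(x + dx, y + dy) && g.at(x + dx, y + dy) == -1)
        return true;                             // borders unknown space
    }
  }
  return false;
}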

3.1.2 Weighted Frontiers

Authors in [11] introduce an augmented definition of frontiers where each frontier region also includes a frontier pose which maximizes the amount of unexplored space and the robot's ability to localize itself. Searching through all poses in each region would be computationally expensive; instead the frontier poses are sampled in each region. The samples are all assigned normalized weights I_UR(x), a ratio of the total unexplored region to the total sweep region of the laser scanner. To avoid border cases where the robot might fail to localize, another parameter, Sensor Uncertainty, is introduced to quantify the ability of the UAV to localize itself in different areas of the map. This uncertainty value is used to calculate another set of normalized weights I_SU(x), a difference in the entropies of the prior and posterior distribution of the robot state (pose of the robot) x given measurements z from the laser scanner.

I_SU(x) = H(p(x)) − H(p(x|z)) (3.1)

I(x) = I_UR(x) + I_SU(x) (3.2)

A large difference implies a low posterior entropy and higher information gain for that robot state (pose). For each frontier selection, goals are selected that maximize the weighted sum of the two metrics given by Eq 3.2.

The traditional approach from Yamauchi[9] does not have a sophisticated frontier selection method. The weighted frontier approach improves upon this by including sensor uncertainty in the combined weight assigned to each frontier. This is done in order to avoid a situation where the robot might pick and navigate to a frontier where it cannot localize due to insufficient structure in the environment for a feature matching algorithm to estimate the pose of the robot.
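As a small illustration of Eq 3.1, the sketch below computes the entropy of a discretized pose belief before and after incorporating a measurement and takes their difference as the sensor-uncertainty weight. The discretization and function names are assumptions made only for this example, not part of the method in [11].

#include <cmath>
#include <vector>

// Shannon entropy H(p) = -sum_i p_i * log(p_i) of a discretized belief.
double entropy(const std::vector<double>& p) {
  double h = 0.0;
  for (double pi : p)
    if (pi > 0.0) h -= pi * std::log(pi);
  return h;
}

// I_SU = H(p(x)) - H(p(x|z)): how much a laser measurement z is expected
// to sharpen the pose belief at a candidate frontier pose (Eq 3.1).
double sensorUncertaintyWeight(const std::vector<double>& prior,
                               const std::vector<double>& posterior) {
  return entropy(prior) - entropy(posterior);
}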

3.1.3 Multi-Criteria Decision Making

A novel approach introduced by Nicola & Francesco [12] is largely focused on the candidate selection step of exploration techniques. The authors argue that most common techniques use "ad-hoc" utility functions tailored for, and thus heavily dependent on, the criteria (for example, distance from goal, information gain, ease of navigation etc.) being considered. Instead, the authors propose an approach based on MCDM (Multi-Criteria Decision Making) wherein the method treats the evaluation of the criteria as a multi-dimensional optimization problem.

This approach is different from the other traditional methods mentioned above in that it takes into account the dependency between different criteria, which is often not considered. For criteria that estimate the same or similar features with different methods (travel cost to a waypoint with Euclidean and Manhattan distance), their net effect on the utility function should be less than the sum of their individual contributions. Conversely, criteria that are different and cannot be optimized together have a net effect higher than their individual sums. An example is information gain and overlap: here the MCDM approach will favour those candidates that satisfy both metrics reasonably well, as opposed to candidates that satisfy the metrics in an unbalanced manner.


An advantage of this framework is also that the optimization problem can theoretically be scaled to include any number of criteria as optimization variables. This enables analyses of the impact of adding or removing criteria on the overall results. Another property of the MCDM approach is scale independence: all utilities/metrics in the function are normalized to the same scale, making comparisons between fundamentally different types of criteria easier and more effective.

3.1.4 3D Frontiers

So far, the discussed methods involve laser scan data from 2D scanners; therefore the frontiers and navigation goals are all present in the plane. This is not ideal if the UAV is required to fly in a 3D environment (above and below obstacles, not just around them). Shen, Michael & Kumar[13] devised a strategy for exploration using UAVs in the 3D environment. They propose simulating particles that are dispersed through the known and unknown spaces on the map with the dynamics of ideal gas molecules that collide with explored areas of the map. These particles are then used to compute frontiers by observing their varying density in regions to obtain high level navigation goals.

This approach requires sampling the free space as if the robot were emitting particles as it perceives its surroundings. Each individual particle x_i that represents a point in space is a member of a set X. As the robot moves around, particles are emitted and thus the set X will grow. By limiting the rate of emittance and resampling the particles when a prescribed upper limit is reached, the total number can be restricted to a fixed range to increase efficiency and reduce the computational load.

All particles are modelled with particle dynamics in order to mimic the behaviour of ideal gas molecules. Analogous to an ideal gas, it is assumed that apart from elastic collisions that last for a negligible duration of time there are no other forces acting on the particles, including but not limited to inter-molecule interactions. The areas of the map where the density of these particles is low would be unexplored regions, and high density areas would be explored.

The extraction process involves tracking different particle sets to find areas that have a large volumetric expansion rate with time, i.e. the particles that dissipate and occupy larger volumes with time, as these point to unexplored space. The particles are clustered together based on time of emission and their separation from neighbouring particles is tracked over time. Particle sets that have moved away from neighbours further than a maximum threshold distance over a fixed period of time are tagged as frontier particles. The threshold values for distance and observation time can be tuned as per requirement in order to have the minimum number of false positive frontiers.

3.1.5 Next Best View Planner

So far the exploration methods that have been discussed focus on information gain and the ability to navigate and localize as the primary metrics based on which to make decisions on the next targets. The information gain, however, is limited to exploring previously unexplored areas. Nüchter, Surmann & Hertzberg [14] discuss an NBV (Next Best View) technique, and augment the definition of information gain to include the quality of data retrieved from the next target. The result is a 2D pose (x, y, θ) for a 3D laser scanner that maximizes the information gain from the next frame.

The algorithm is largely divided into two parts: generating a list of seen and unseen polygon edges in the map, and next-pose selection. The initial laser scan data in the unexplored map are used to detect lines in the scene using the Hough transformation[15]. These lines are then sorted based on the angle between the end points of the lines and the scanner position in the map. Fig 3.2 includes illustrations of the ground plane of a map used by the authors originally. The second image describes how to connect two lines, when α_1 < α_2 and no further α exists between them. These lines are labeled as seen or unseen and are then connected to form polygons.

Figure 3.2: Left: The detected lines. Second: Lines are sorted by using the smallest angle between the lines and the scanner position. Third: The polygon. Right: Two polygons with edge labels prior to merging.[14]

For the next part, a 2D laser scan is simulated from randomly generated points (x, y) inside the polygon. Of these, the point which has maximum intersections with unseen lines will provide the most information gain. The orientation is decided by the limits on the max sweep angle of the laser scanner and a penalty on rotation.

Although this method would guarantee high quality data acquisition for offline 3D reconstruction, an obvious limitation is the use of a 3D laser scanner. The UAV being used in this project is equipped with only a 2D laser scanner. There exist methods [16][17] that mitigate this limitation by installing two 2D scanners mounted orthogonally. The scans from the horizontally mounted laser are used for localization, whereas the vertically mounted laser is used to detect the facade of the environment to build 3D maps/models. In experiments, these methods have been tested on wheeled robots or even cars and trucks. In contrast to UAVs, payload and weight are not huge concerns for wheeled robots; for this reason this alternative, although successful, is not ideal for this project.

3.1.6 Receding Horizon ’Next Best View’ Planner

Bircher & Kamel [18] introduce a novel path planning strategy developed to explore an unknown 3D space with a UAV platform. Similar to Receding Horizon control strategies, this method samples the current state (map, sensor information, robot pose etc.) to make predictions on the next best view and computes a path by rewarding information gain and penalizing distance. The perception sensors provide constant feedback which is used to update the map and make improvements to the path. This feedback also helps attenuate tracking and perception errors, like a receding horizon controller.

The map used to represent the environment is a 3D occupancy grid, where the entire volume that can be perceived is divided into cubical volumes, forming what is called a voxel grid. To ensure efficient computation, the voxels are saved in an octree, which makes querying or updating voxel information faster. Paths are only planned through known free space, thus making them collision free.

The robot is treated as a point-model and the obstacles in the map are inflated to accommodate this. From the starting configuration of the robot and the map, a tree is built using the RRT [19] algorithm. The nodes on the tree denote waypoints and the edges are collision free paths. The sampling process takes into account the information gain at each possible node; this is quantified by the number of unexplored voxels that will be seen by the perception sensor at the node. After each planning step, the first node of the path is executed and the tree is reinitialized to plan again, this time with updated map information.

The authors tested this method and compared the performance to a traditional frontier based approach, and found that performance was better in terms of scaling, i.e. exploration of larger maps was more efficient. Since the environment is assumed to be uncluttered (Sec 2.2) in the context of this project and the perception sensor used was a stereo camera, this method is not suitable in spite of better experimental results when compared to an implementation of frontier exploration.

3.2 Planning

All exploration methods provide, as output, high level navigation goals; they do not provide a trajectory for the robot to follow. Often, even in uncluttered environments there might be a requirement for the UAV to avoid flying into obstacles like walls or doors. This is achieved by providing these high level navigation goals to a path planner which has access to the map and obstacle information. This planner must then generate a set of waypoints that is free from obstacles and safe to fly through. The following sections detail a few methods that have either been adapted to or designed from scratch to find paths for a UAV given a map and a goal.

3.2.1 Refined Bellman-Ford

Jun & D’Andrea [20] describe a path planning algorithm that uses an implementation of the Bellman-Ford algorithm to find the shortest path and subsequently applies path refinement techniques to make the path suitable specifically for UAVs. In the worst case, Dijkstra's algorithm is O(n^2) while Bellman-Ford is O(n^3); the advantage of the latter is its ability to tolerate changes in link lengths and probability distributions during execution. This means that the path planner can operate dynamically as and when the map updates, without the need to stop and restart execution.

The method involves searching through all the vertices in the graph to find the shortest paths from a single source vertex to all other vertices. Bellman-Ford is based on a relaxation process where the initial distance estimate for each vertex in the graph is gradually replaced by better values until they converge to an optimum solution. This initial distance is always an overestimate (usually ∞).

The relax function in Algorithm 1 is iterated over every edge (u, v) in the graph, |V|−1 times. It continuously shortens the calculated distance by comparing it with distances through other nodes; this process is repeated till the shortest paths are found.

The path so generated from the Bellman-Ford algorithm is a set of line segments joining nodes on a graph. Since there is no involvement of orientation in the path planning process, to follow it the UAV at each waypoint must either make sharp turns or alternatively stop and orient itself towards the next waypoint.


Algorithm 1: Bellman-Ford Algorithm

 1: function BellmanFord():
 2:   for (v in V) do   {V: all vertices in the graph}
 3:     v.distance = ∞   {set distance to all vertices to infinity}
        v.p = Nothing
 4:   end for
 5:   source.distance = 0
 6:   for (i from 1 to |V|-1) do
 7:     for ((u,v) in E) do
 8:       relax(u,v)
 9:     end for
10:   end for

11: function relax(u,v):
12:   if (v.distance > u.distance + weight(u,v)) then
13:     v.distance = u.distance + weight(u,v)
14:     v.p = u
15:   end if

For this reason, the shortest path is then refined to make motion smoother and easier to execute. This is done by converting turning vertices on the path into arcs that the UAV can navigate through.

Figure 3.3: Smoothing the vertex of a preliminary path with an arc of a circle of radius r_min [20]

In Fig 3.3, r_min is the minimum radius of the circle along whose circumference the UAV can navigate, and θ is decided by the shape of the cells of the occupancy map. For hexagonal cells the minimum value of θ is π/6, for square cells it is π/4 etc. Let the distance between the centers of the cells be d_c,

d = r_min · tan(θ) (3.3)

d_c ≥ d (3.4)

This method guarantees a smoother path which does not cross over into cells that are potentially obstructed if Eq 3.4 is satisfied. With regards to this project however, the metric for ideal performance is not path length but paths suitable for data gathering.
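For illustration (values assumed only for this example, not taken from [20]): with square grid cells, θ = π/4, so Eq 3.3 gives d = r_min · tan(π/4) = r_min. With a minimum turning radius of r_min = 0.5 m, Eq 3.4 then requires the cell centers to be spaced at least d_c ≥ 0.5 m apart for the smoothed arc to stay within safe cells.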

3.2.2 Rapidly Exploring Random Tree (RRT)

This section details two techniques, RRT and an optimal variant RRT*. They are both popular techniques and have been widely used (see Section 3.1.6) and tested.

RRT is an incremental sampling-based single-query algorithm developed for real-time applications (See Algorithm 2 [21]). The algorithm incrementally builds a tree starting from the initial configuration (of the robot) by searching through the configuration space.

At each iteration the space is sampled and a random point x_rand is chosen; the algorithm then attempts to connect the nearest vertex x_nearest to a new vertex x_new that is close (within a given threshold radius) to x_rand. If the connection is successful, i.e. the edge is collision free, x_new is added to the tree. Since the method is developed for real-time applications, the focus is on finding a feasible solution quickly; the quality of the solution found is not given due attention.

Authors in [19] introduce a novel algorithm, RRT*, proven to be asymptotically optimal, i.e. it almost always converges to an optimal solution as the number of samples tends to infinity. This is achieved by an additional re-wiring step after each x_new is connected to the tree. For each step, the algorithm searches for an existing node on the tree that incurs minimum accumulated cost to x_new and re-wires the tree.

In operation, a distinguishing feature of RRT* is the ability to improve over time. In a cluttered environment, for example, RRT tends to get stuck with the first solution it finds irrespective of whether or not it is a good one. The starting bias often results in a bad path.


Algorithm 2: RRT

 1: function RRT():
 2:   T ← x_init;   {T: current tree}
 3:   for i = 1, 2, ..., n do
 4:     x_rand ← SampleFree(i);
 5:     x_nearest ← Nearest(T, x_rand);
 6:     x_new ← Steer(x_nearest, x_rand);
 7:     if CollisionFree(x_nearest, x_new) then
 8:       T ← T ∪ {x_new}
 9:     end if
10:   end for

RRT*, however, may also start with an initial cost that is not optimal, but the re-wiring step enables the solution to improve over time. In [19] the authors compared Monte-Carlo simulations of both RRT and RRT* and found that over time (a few thousand iterations) the variance of the path cost for RRT was fairly high, as opposed to the negligible variance seen in RRT*.
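As a concrete illustration of the extension RRT* makes to Algorithm 2, the sketch below shows the re-wiring step in simplified 2D form: after x_new is added, a better parent is chosen for it, and nearby nodes are re-wired through it if that lowers their cost. The data structures, radius parameter and collision check are assumptions made only for this example and do not come from the thesis or [19].

#include <cmath>
#include <vector>

struct Node {
  double x, y;
  int parent;    // index of parent node, -1 for the root
  double cost;   // accumulated path cost from the root
};

double dist(const Node& a, const Node& b) {
  return std::hypot(a.x - b.x, a.y - b.y);
}

// Placeholder collision check; a real planner would query the map here.
bool collisionFree(const Node&, const Node&) { return true; }

// Simplified RRT* re-wiring around the newly inserted node tree[newIdx].
void rewire(std::vector<Node>& tree, int newIdx, double radius) {
  Node& xn = tree[newIdx];
  // Pass 1: choose the lowest-cost parent for x_new among nearby nodes.
  for (int i = 0; i < static_cast<int>(tree.size()); ++i) {
    if (i == newIdx) continue;
    double d = dist(tree[i], xn);
    if (d <= radius && collisionFree(tree[i], xn) &&
        tree[i].cost + d < xn.cost) {
      xn.parent = i;
      xn.cost = tree[i].cost + d;
    }
  }
  // Pass 2: re-wire nearby nodes through x_new if that lowers their cost.
  for (int i = 0; i < static_cast<int>(tree.size()); ++i) {
    if (i == newIdx) continue;
    double d = dist(tree[i], xn);
    if (d <= radius && collisionFree(xn, tree[i]) &&
        xn.cost + d < tree[i].cost) {
      tree[i].parent = newIdx;
      tree[i].cost = xn.cost + d;
    }
  }
}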

3.3 3D Reconstruction

A possible application of an autonomous exploration system, as mentioned in Section 1.2, is to do 3D reconstruction. This is the process of using data gathered from RGB-D cameras to build dense 3D maps/models of indoor environments. Most reconstruction algorithms have three main components: spatial alignment of frames, loop closure detection and globally consistent alignment of the data sequence. Spatial alignment refers to the process of geometrically aligning multiple images of the same scene acquired at different times, or with different sensors, or from different viewpoints[22]. Loop closure detection, in this context, is the ability to assert that the robot has returned to a previously visited location by matching features from individual local scenes[23] to previously observed scenes.

Henry [24] explains techniques that are employed, often successfully, to tackle each of the aforementioned components. These differ based on the allowable trade-offs between factors like computational cost, time, need for additional information etc. and the type of data being gathered. For spatial alignment, an implementation of an ICP[25] method modified to utilize 3D data from RGB-D cameras is used. For the loop-closure problem, most recent approaches to 3D mapping rely on fast image matching techniques [26]. For globally consistent data alignment, Henry [24] points to supporting research like bundle adjustment techniques.


Existing research on 3D reconstruction [3][24][27] with RGB-D data primarily focuses on strategies to process gathered data for robust performance. The data gathering step is often given little consideration. A common solution is walking around test areas while holding the camera unit steady and pointed in the direction of motion to gather the required data. This lack of insight affects design considerations of the path planner and controller for the UAV.

The authors in [6] propose a clever workaround to this lack of information about the data gathering process. They use a hand-held setup (camera + computer) the user can walk around the room with and see the 3D reconstruction in real time. This allows the system to prompt the user to rewind or return to specific locations if the data was not suitable. However, this method requires higher computational capacity than the off-line alternative.


Chapter 4

Method

This chapter deals with the design of an exploration system based on ideas and inspirations from Chapter 3. It also explains the practical differences between testing only in simulation and testing the actual hardware operation.

4.1 SLAM

The UAV must be designed to operate autonomously even with no prior information about the environment. To explore in such a scenario, it is crucial for the robot to be capable of SLAM (Simultaneous Localisation and Mapping) as it moves. This is needed to safely navigate through free space and to make informed decisions about exploration targets, i.e. to minimize instances where the UAV returns to previously explored areas. The method used is GMapping [28, 29, 30], which creates a 2D occupancy grid map from laser and pose data from a mobile robot. The primary reason for this choice was that a combination of GMapping and Frontier Exploration implementations has been successfully tested, albeit on wheeled robots. Thus, there is rich content to learn from and potentially even aid with development issues in the future.

GMapping employs a Rao-Blackwellized particle filter where the re-sampling step is adaptive in order to reduce the number of particles while ensuring good particles are not dropped. For resampling, each particle set is assigned an importance weight given by [28]

w^(i) = p(x_t^(i) | z_1:t, u_0:t) / π(x_t^(i) | z_1:t, u_0:t)

where p(x_t | z_1:t, u_0:t) represents the posterior over the robot pose x_t, given the observations (sensor measurements) z_t and odometry u_t. The denominator π(x_t^(i) | z_1:t, u_0:t) represents the entire distribution of particles from which the i-th set was sampled. Particles with a low importance weight are more likely to get dropped in the sampling process, thereby reducing the number of particles and making the process more computationally effective. In order to avoid particle depletion¹ during the resampling process, a trigger variable

N_eff = 1 / Σ_{i=1}^{N} (w^(i))^2

is used, which is the effective number of particles. Resampling is only done each time N_eff drops below N/2, where N is the number of particles.

GMapping also uses a scan-matching algorithm to estimate a pose distribution for the robot from the last laser scan; this helps limit the selection of sampling points to an interval around the current pose, thereby reducing the number of particles required.
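As a small illustration of the resampling trigger above, the sketch below normalizes a set of importance weights and computes N_eff; it is a minimal example with assumed variable and function names, not code from GMapping.

#include <vector>

// Compute the effective number of particles N_eff = 1 / sum_i w_i^2
// from unnormalized importance weights.
double effectiveParticles(const std::vector<double>& w) {
  if (w.empty()) return 0.0;
  double sum = 0.0;
  for (double wi : w) sum += wi;
  double sumSq = 0.0;
  for (double wi : w) {
    double wn = wi / sum;   // normalize the weights
    sumSq += wn * wn;
  }
  return 1.0 / sumSq;
}

// Resampling is triggered only when N_eff falls below N/2.
bool shouldResample(const std::vector<double>& w) {
  return effectiveParticles(w) < 0.5 * static_cast<double>(w.size());
}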

4.2 Pose Fusion

Providing an accurate pose estimate is instrumental in order to perform certain tasks. Orienting the UAV in the best position to grab a frame, tagging each grabbed frame with its pose, and the pose required by the robot in order to safely navigate are all examples where the robot's full pose (x, y, z, roll, pitch, yaw) is required. Certain other cases, like providing the robot pose to the frontier exploration node, require only the 2D pose in the plane (x, y, θ).

The on-board Flight Control Unit (FCU) uses the Inertial Measurement Unit (IMU) data and Global Positioning System (GPS) signals to provide an estimate of the full pose of the UAV, and an on-board 1D laser scanner is used to measure the absolute distance to the ground. During outdoor flight or in simulation, the FCU is capable of fusing IMU data with a GPS signal, therefore the pose estimate is quite reliable in those two scenarios. In the absence of a GPS signal however, like while flying the real UAV indoors, this pose cannot be trusted because IMU data alone drifts over time as the error accumulates.

Additionally, since GMapping was originally designed for wheeled robots, it requires odometry² information from the robot in order to provide a transform relation between the map it builds and the robot. For these two reasons, a laser scan matching technique, RF2O[31], is employed. This method is used to estimate the motion of a lidar from consecutive laser scans and is popular for mobile robots with inaccurate base odometry. Since it relies only on laser scans, this technique is independent of the type of robot and locomotion.

¹ random resampling eliminates all particles near the true state

² estimate of the change in position of a robot over time by using motion sensors mounted on the motors


4.3 Frontier Exploration

With a robot capable of SLAM, the next step is to compute exploration targets for the UAV to navigate to in order to explore as much of the environment as possible. Based on investigation of existing solutions for exploration in Section 3.1, Frontier Exploration[9] was chosen to test the hypothesis (Section 1.2).

Most of the exploration algorithms discussed are, to some extent, built around the concept of frontiers and the idea that pushing these boundaries is the best way to explore an environment. These methods, however, differ in how the frontiers are identified and in the scheme designed to select them for navigation; some are more sophisticated and complex than others. Frontier exploration is straightforward to implement, it was designed for operation on 2D occupancy grid maps, and it has been widely tested.

The exploration process has two modes of operation: with or without SLAM. In the first case, the exploration process uses the global costmap from the SLAM system to update costs based on laser scans and decide on exploration targets; using the map generated by an existing SLAM system to keep track of explored regions reduces the computational load on the system. In the second case, if a reasonable pose estimate is available, the exploration node can build and update its own costmap in order to select exploration goals. A drawback in the latter case is that the map is non-static, i.e. the map data is lost if the robot moves outside the prescribed map area. A high-level overview of the expected operation of the process is shown in Fig 4.1.

Figure 4.1: Frontier Exploration Process (flowchart: wait → trigger exploration → select point in unexplored region → pass goal to navigation stack → when the goal is reached, check for new frontiers; done when none remain)


Chapter 5

Implementation

This chapter describes the details of hardware and software required to perform both simulation and real life experiments. The setup of the environment for simulation is also explained in the chapter.

5.1 Hardware Setup

Hardware involvement in the project can broadly be categorized into the UAV itself (Fig 5.1) and the on-board sensors. Since the design of the UAV itself was not in the scope of work of this project, this section only briefly describes the design without delving into the details or motivations of design considerations made.

Figure 5.1: Front(left) and Top(right) view of the quadcopter


5.1.1 UAV Design

The UAV platform is a widely used X-wing quadrotor design with its forward direction between its legs, as shown in Fig 5.2.

Figure 5.2: Wing Configuration [1]

Advantages of this design include better agility compared to its plus-shaped counterpart, since the available thrust for roll and pitch comes from two rotors. Another advantage of this configuration is that camera(s) can be mounted on the front of the vehicle without the Field Of View (FOV) being obstructed by the rotors.

The UAV was designed to meet all requirements discussed in Chapter 4. Following is the list of sensors, controllers and computers on board the UAV.

Intel NUC - On-board computer

Pixhawk - Flight computer

Hokuyo UST-30LX - 2D Laser Scanner

TeraRanger - Laser range finder

Intel RealSense - RGB-D Camera

Table 5.1: List of electronics on board the UAV


5.2 Software Setup

The on-board computer is the Intel NUC, running Ubuntu 14.04. The robot software is developed using the ROS (Robot Operating System) framework [32], which comes bundled with device drivers, visualization tools, debugging tools, communication protocols etc.

Programs (nodes) running on the UAV are written in C++; they are capable of two-way communication, making monitoring, preempting, information flow between different nodes etc. possible. Each node can broadcast messages¹ over custom topics².

The on-board FCU is the Pixhawk Pixracer running the PX4[33] firmware; it comes equipped with an IMU and controllers that can be tuned as per requirement. For UAVs specifically, Mavros is a package that enables communication between on-board computers running ROS and MAVLink, a communication library that supports various autopilot systems like the PX4. Since Mavros is a ROS wrapper for MAVLink, it enables the use of traditional ROS functionality to communicate with and send commands to the autopilot system on the UAV.

5.2.1 Simulation Setup

In order to perform software tests, it was necessary to have a simulation model of the UAV. Through several open source plug-ins available for Gazebo, a simulation tool used with ROS, a simulation environment was created consisting of an X-wing configuration UAV model equipped with a PX4 based FCU and a 2D laser scanner. The next chapter contains examples of some test environments that were created and used within Gazebo to run simulation based tests.

5.3 Frontier Implementation

The implementation of the exploration node, as stated earlier, is done within the ROS framework, and its operation, in simulation and otherwise, is tied to ROS tools like RVIZ (a visualization tool). There are two major parts to the design: a search algorithm to find frontiers in a prescribed RoI³, and another node to trigger the search, provide the selected frontier to the navigation system, maintain and update the occupancy map, display progress in RVIZ etc. The process is given in Algorithm 3.

¹ Description language for describing the data values that ROS nodes publish[32]

² Named buses over which nodes exchange messages[32]

³ Region of Interest


Algorithm 3: Frontier Exploration Algorithm

 1: M ← Current Map
 2: U ← User defined search area
 3: while (!area_covered & !frontiers) do
 4:   P ← Robot Pose
 5:   F = FrontierSearch(M, P, U)
 6:   G = SelectFrontier(F);
 7:   SendGoal(G);
 8:   UpdateMap(M, P);
 9:   Visualise();
10: end while

A list of frontier cells is created by searching outward from the source cell in the occupancy grid in an 8-cell neighborhood; each cell in the list must be free/unknown and have at least 1 free cell to navigate to. This list is then fed into a Breadth First Search implementation for the 4-cell neighborhood case to connect frontier cells within distance thresholds in order to group them into clusters.

The neighborhood cell layout is shown in Fig 5.3.

Figure 5.3: (Left) 4-connected and (Right) 8-connected cells in a gridmap

This entire process is FrontierSearch. For the largest frontiers found, based on a chosen metric (closest frontier point to the robot, centroid of all the found frontier points etc.), a navigation goal is chosen with each iteration. The pose of this goal is supplied to Mavros, a communication wrapper which accepts a goal pose in a ROS message format and commands the FCU on the UAV to navigate to it.
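To illustrate the clustering step described above, the sketch below groups frontier cells of a row-major occupancy grid into connected regions using a breadth-first search over a 4-connected neighborhood. The grid layout, cell value convention (0 free, 100 occupied, -1 unknown) and function names are assumptions made for this example, not the actual node's code.

#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

// Group frontier cells into 4-connected clusters (one BFS per cluster).
std::vector<std::vector<std::pair<int, int>>> clusterFrontiers(
    const std::vector<int8_t>& cells, int width, int height) {
  auto at = [&](int x, int y) { return cells[y * width + x]; };
  auto inside = [&](int x, int y) {
    return x >= 0 && x < width && y >= 0 && y < height;
  };
  // Frontier cell: free cell with at least one unknown 8-connected neighbour.
  auto isFrontier = [&](int x, int y) {
    if (at(x, y) != 0) return false;
    for (int dy = -1; dy <= 1; ++dy)
      for (int dx = -1; dx <= 1; ++dx)
        if ((dx != 0 || dy != 0) && inside(x + dx, y + dy) &&
            at(x + dx, y + dy) == -1)
          return true;
    return false;
  };

  std::vector<std::vector<std::pair<int, int>>> clusters;
  std::vector<char> visited(cells.size(), 0);
  const int dx4[4] = {1, -1, 0, 0};
  const int dy4[4] = {0, 0, 1, -1};
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      if (visited[y * width + x] || !isFrontier(x, y)) continue;
      std::vector<std::pair<int, int>> cluster;
      std::queue<std::pair<int, int>> q;
      q.push({x, y});
      visited[y * width + x] = 1;
      while (!q.empty()) {
        auto [cx, cy] = q.front();
        q.pop();
        cluster.push_back({cx, cy});
        for (int k = 0; k < 4; ++k) {
          int nx = cx + dx4[k], ny = cy + dy4[k];
          if (inside(nx, ny) && !visited[ny * width + nx] &&
              isFrontier(nx, ny)) {
            visited[ny * width + nx] = 1;
            q.push({nx, ny});
          }
        }
      }
      clusters.push_back(cluster);  // one connected frontier region
    }
  }
  return clusters;
}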


5.4 Navigation Implementation

The capabilities explained in the previous sections result in high-level navigation goals for the UAV to fly to. This means that the exploration node will only provide a goal pose (x, y, z, θ). For the UAV to navigate, another node must translate these goal poses into commands to send to the FCU.

This is achieved by setting the goal pose in the NED (North-East-Down) frame as the desired position of the UAV; the on-board FCU position controller then uses PID control to navigate to this waypoint. In the implementation, the Mavros library handles the conversion of the waypoints to the NED coordinate frame and subsequently passes them to the FCU.
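For illustration, a minimal ROS node that forwards a goal pose to the FCU through Mavros could look like the sketch below; it publishes geometry_msgs/PoseStamped messages on the standard /mavros/setpoint_position/local topic, but the node name, rate, frame handling and goal values are simplified assumptions and not the thesis implementation.

#include <geometry_msgs/PoseStamped.h>
#include <ros/ros.h>

// Minimal sketch: stream a single goal pose as a Mavros position setpoint.
// PX4 expects setpoints to be published continuously, hence the loop.
int main(int argc, char** argv) {
  ros::init(argc, argv, "setpoint_forwarder");
  ros::NodeHandle nh;
  ros::Publisher setpoint_pub = nh.advertise<geometry_msgs::PoseStamped>(
      "/mavros/setpoint_position/local", 10);

  geometry_msgs::PoseStamped goal;
  goal.header.frame_id = "map";   // assumed frame for this example
  goal.pose.position.x = 2.0;     // example goal selected by exploration
  goal.pose.position.y = 1.0;
  goal.pose.position.z = 1.5;     // constant flight altitude
  goal.pose.orientation.w = 1.0;  // keep current heading

  ros::Rate rate(20.0);           // setpoints must be sent at a steady rate
  while (ros::ok()) {
    goal.header.stamp = ros::Time::now();
    setpoint_pub.publish(goal);
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}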

Fig 5.4 shows a trace of the pose of the UAV using this navigation method in each of the runs; the setpoints are the goal poses selected by the exploration node. The method, as can be seen, performs reasonably well in simulation, with the UAV in most cases getting to within a threshold distance of each setpoint.

Figure 5.4: Plot of Pose (x, y) and goal points provided to the Mavros setpoint follower


Chapter 6

Experiments

This chapter includes details of the experimental test cases designed to test the exploration algorithm. The experiments include testing with varying room size/shape, laser scanner ranges, goal selection thresholds to avoid overlap etc. The performance of the entire system and the effect of changing these variables are also discussed here. Test parameters that remain unchanged for different runs of the experiments are listed in Table 6.1.

Simulation Setup
  UAV Size: 40 cm radius
Exploration Setup
  Explore Clear Space: Yes
  Map Update Freq: 40 Hz
  Pose: Mavros pose estimate

Table 6.1: Experimental Setup - Single Room Environment

Setting the "Explore Clear Space" option makes the frontier selection process also pick areas of the map that are already known to be free according to the occupancy grid map being updated. In cases where the goal is just to update a map, as opposed to gathering information, there is no need for the UAV to visit areas of the map as long as the laser scanner can see them. However, this option is especially useful, for instance, in cases where the goal is data gathering with sensors like depth cameras that typically have shorter ranges than high quality laser scanners. For such applications the ability to get close enough (within a threshold range) to all areas of the map is crucial.

Since these tests are in simulation, the pose information is obtained from the local position estimator from Mavros, which is highly reliable since it uses a simulated GPS signal, as mentioned in Sec 4.2.


6.1 Exploration Test

Since the system was built for an uncluttered environment (Sec 2.2), a test environment with a single empty room is used. The motivation behind this choice is to be able to test the hypothesis (Sec 1.2) without the need for an additional obstacle avoidance system, the implementation of which was not in the scope of this work. The design of the room is shown in Fig 6.1 and the room size is roughly 10m by 10m.

Figure 6.1: Test environment for varying starting regions - Room 1 (isometric and top views)

6.1.1 Results

The UAV starts at the center of the map and navigates to each new setpoint; this stops either in cases of error recovery, i.e. crashes or loss of communication with the nodes commanding the FCU, or upon completion of exploration. The system is designed such that the exploration can be limited to a user defined region to limit the search area for new frontiers. The absence of an obstacle avoidance feature means it is important that the starting area given to the exploration node is conservative enough to not pick targets further away than the laser scan FOV.

This section details the results from 3 runs in the same test environment where the only variable is the starting region. The first two runs listed in Table 6.2 both resulted in crashes and thus did not completely explore the map.


Figure 6.2: Exploration test results for a single room environment with varying starting regions (intermediate and end states for the 1st, 2nd and 3rd runs)


6.1.2 Result Analysis

Fig 6.2 shows the status of the occupancy grid map for the three runs in the single room environment (Section 6.1). The yellow edges of the map signify the obstacles seen by the laser scanner; the blue region is a representation of the obstacles being inflated in size in order to ensure collision free navigation. In the 1st and 2nd runs, the UAV was unable to explore the entire area available to it. The End State map shows the mapped (turquoise blue) vs the unmapped (green) walls in the room.

Metrics         Run 1   Run 2   Run 3
Run Time        100s    163s    198s
% Room Mapped   ~30%    ~60%    100%

Table 6.2: Results - Single Room - Starting region varied

As can be seen from the End State figures for the 1st and 2nd runs, the crashes occurred in cases where the starting regions were large. In both cases, the largest frontiers found were in areas where the room had walls. Since the laser scanner range was not large enough in these cases, the goals picked were on or outside walls, resulting in crashes.

In run 3 however, the prescribed starting region was conservatively selected. Therefore the goals selected were further away from the walls and thus the UAV was able to complete the exploration without crashing or bumping into walls.


6.2 Data Gathering Test

As mentioned before (Sec 1.2), a possible application of a UAV based exploration system is data gathering for 3D reconstruction of environments. The purpose of the following experimental setup is to test if this exploration system is at all suitable for such applications. Most RGB-D cameras have a FOV much smaller (sweep angle and range) than the laser scanner in use on this platform.

To simulate a scenario where only information from such a camera was being used to explore, the FOV of the laser scanner is limited. Since the goal is only to check completeness of exploration with varying FOVs and not the navigational ability of the system, the environment, shown in Fig 6.3, again contains only a single room where the UAV has free space to fly safely. The room size is roughly 7m by 10m.

Figure 6.3: Test environment for varying FOV (isometric and top views)

6.2.1 Results

The UAV again starts at the center of the map and navigates to each new setpoint, stopping only in cases of error recovery or upon completion of exploration. This experiment had two runs in the same map environment, where each run had a drastically different FOV for the laser scanner.

The 1st run had the same FOV as the previous experiment, i.e. a 270° angle and a range of 5m. For the 2nd run however, to mimic the FOV of an RGB-D camera, the FOV is a 120° angle and a range of 1m. This is done so that the exploration system only considers those areas explored that a camera would have seen, without having to parse and process the camera feed, which is a computationally intensive process.


Figure 6.4: Exploration test results: (Left) wide FOV laser scanner (270°, 5m) and (Right) laser scanner FOV limited (120°, 1m) to resemble an RGB-D camera for the data gathering experiments


6.2.2 Result Analysis

The design of the room used in this experiment is similar to the room described in Section 6.1, in that there are no protrusions or occlusions in the map that would require an obstacle avoidance system to navigate through safely. With the drastic reduction in FOV, an obvious difference is the time taken to explore. As shown in Table 6.3, near complete exploration took roughly 3 times longer with the reduced FOV.

Metrics    Run 1      Run 2
FOV        270°, 5m   120°, 1m
Run Time   110s       278s

Table 6.3: Results - Single Room - FOV varied

The UAV used has a radius of 40cm (Table 6.1), and with the range limited to 1m the UAV must get to within 0.5m of a waypoint to consider it explored, in order to ensure that the camera is at an optimum distance from the scene to capture images. The result of this is that the UAV constantly gets closer to the walls than is safe. In Fig 6.5 the resulting map has several discontinuities and errors; these include the erroneous data that lies outside the walls of the room. The reason for this was that during exploration the UAV scraped the walls on several occasions but managed to continue flying anyway. This, in part, was possible because the tests were only in simulation and because the UAV never picked waypoints in or outside the walls, only points too close to them. In reality, data from such a flight would most likely either be unusable or require significant post processing.

Figure 6.5: End State- Exploration with limited laser FOV


6.3 Real Life Test

From the first two experiments, it is clear that the performance of the system would benefit from a navigation planning step. This would involve a path planner to ensure that the UAV follows a collision free path around obstacles. A system capable of this could also enable exploration in multiple room environments where the UAV might, for example, need to fly through doorways. To test this, the exploration method was evaluated on such a system. The following section details the setup of the platform itself and the environment in which it was tested.

The experiment in this environment was structured similarly to the data gathering experiment in Section 6.2: the exploration system gets information to update the costmap only for the FOV of the PrimeSense camera. During this experiment, the depth camera was also used to gather depth images during flight; this data was then used to perform 3D model reconstruction of the room, and the results are also shown and discussed.

6.3.1 Setup Details

In terms of hardware, the platform was equipped with the same perception sensors as the original UAV and differed only in that it was a hexacopter configuration, for increased stability during flight. Since the UAV is controlled via command velocities or goal setpoints, the FCU handles the differences between commanding 6 rotors instead of 4. In terms of the software setup, the platform was nearly identical to the original UAV, running ROS Indigo on an Ubuntu distribution. The two differences were the added path planning capability and a different SLAM technique than what was used on the previous UAV. The path planning strategy on the platform is an implementation of the popular A-star[34] algorithm. The SLAM technique is an open-source package available for ROS called Hector SLAM[35]. It is different from GMapping in that this method does not require odometry information from the robot. Instead it encapsulates 3D attitude estimation from the IMU and a fast laser scan matching algorithm into one robust method capable of SLAM on grid based maps.

The system plans a path on the occupancy grid map; this path is converted into velocity commands and supplied to the FCU to navigate to the goal. Upon reaching the target, the exploration system provides a new goal and the entire process is repeated. The path itself is a collection of points that the UAV must navigate through to reach the final goal. The velocity command computation is done by first calculating a yaw velocity to turn and orient the UAV towards the next waypoint, after which a straight-line velocity towards that waypoint is computed.
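A minimal sketch of this turn-then-translate scheme is shown below, assuming a planar pose (x, y, yaw) and a 2D waypoint; the gains, limits and yaw tolerance are illustrative values, not those used on the real platform.

```python
import math

# Illustrative gains and limits -- not the values used on the real platform.
YAW_GAIN, LIN_GAIN = 1.0, 0.5
YAW_TOLERANCE = 0.1            # rad; only translate once roughly facing the waypoint
MAX_YAW_RATE, MAX_SPEED = 0.8, 0.4

def velocity_command(pose, waypoint):
    """pose = (x, y, yaw) in the map frame, waypoint = (x, y).

    Returns (forward_speed, yaw_rate) to be sent to the FCU."""
    x, y, yaw = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    bearing = math.atan2(dy, dx)
    # Smallest signed angle between the current heading and the bearing to the waypoint.
    yaw_error = math.atan2(math.sin(bearing - yaw), math.cos(bearing - yaw))

    yaw_rate = max(-MAX_YAW_RATE, min(MAX_YAW_RATE, YAW_GAIN * yaw_error))
    if abs(yaw_error) > YAW_TOLERANCE:
        return 0.0, yaw_rate                    # first turn in place towards the waypoint
    speed = min(MAX_SPEED, LIN_GAIN * math.hypot(dx, dy))
    return speed, yaw_rate                      # then fly a straight line towards it
```

Commanding the yaw first has the side benefit that the forward-facing sensors look along the direction of travel while the UAV moves between waypoints.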


Figure 6.6: The two rooms that the exploration was tested inside: the Birdcage (left), with safety nets and padded flooring, and the Drone Lab (right)

Figure 6.7: Front (left) and top (right) views of the hexacopter

In order to test the improvement in performance of the overall system, the test environment had occlusions and obstacles such as desks and chairs. The environments shown in Figure 6.6 are part of the Drone Lab at RPL, KTH. The Birdcage is uncluttered, has padded flooring, and has safety nets around it, whereas the rest of the Drone Lab has desks, chairs, etc. that the UAV must detect and avoid. For both these cases, the UAV platform used is shown in Figure 6.7; in addition to the perception sensors on the original platform, this UAV also has a PrimeSense RGB-D camera that is used to capture depth data while exploring the rooms.


6.3.2 Results

The entire exploration process for the two rooms took a total of ~200 seconds. The UAV started exploring at the center of the outer room in the Drone Lab and explored this room first. Upon completion, it picked a target inside the second room and navigated through the narrow opening to enter the Birdcage, where it stopped and landed once the entire space had been explored.

As mentioned before, the on-board RGB-D camera was used to record data during the exploration process in order to see whether this exploration strategy would be feasible for applications like 3D model reconstruction of unknown areas. The results of the reconstruction process are shown in Figure 6.9; the design and implementation of the reconstruction method itself is not within the scope of this project. It uses ORB-SLAM2 [36] for camera pose estimation and loop closure detection, and the resulting trajectory is used to generate the mesh model with ElasticFusion [37] that can be seen in the result figures.
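As a generic illustration of the geometry such a reconstruction pipeline relies on, the sketch below back-projects one depth image into a camera-frame point cloud using the standard pinhole model. The intrinsics are assumed to come from calibration; this is not code from the ORB-SLAM2 or ElasticFusion pipeline used here.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project a depth image (H x W, raw sensor units) into an N x 3 point
    cloud in the camera frame using the pinhole model.

    fx, fy, cx, cy are camera intrinsics from calibration; depth_scale converts
    raw depth units to metres (e.g. millimetres -> metres)."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.astype(np.float32) * depth_scale
    valid = z > 0                                      # zero depth means no sensor return
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)
```

Per-frame clouds of this kind, transformed with the corresponding estimated camera poses, end up in a common frame before being fused into the final mesh.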

6.3.3 Result Analysis

In Figure 6.8, the progress of the exploration process can be seen. The images show the state of the occupancy grid map at 5 different time instants during flight, and the red arrow depicts the pose of the UAV at each instant. Figure 6.8 (iv) shows the instant when the UAV navigates through the doorway-like opening and flies safely into the other room. Overall, the entire process was completed without any crashes or instances where the UAV got too close to or scraped the obstacles. The path planner enabled exploration of areas to which there was no straight-line path, making multi-room flight possible.

The completeness of the exploration process can also be judged from the 3D model of the room in Figure 6.9. While there are irregularities, such as slightly misaligned walls and certain segments with sparse point density, the model overall is a reasonably good representation of what the test environment looks like. Details like desks and chairs that are captured by the camera can also be seen in the costmap built only from laser data. The comparisons in Figure 6.8 (v) and the top view in Figure 6.9 show similar outlines of the two rooms. There is, however, still scope for improvement in both the method and the platform used to test it. For instance, there are certain isolated points floating in empty space, often a result of erroneous data from the depth images. This could possibly be mitigated by flying more slowly in order to reduce camera motion.


Figure 6.8: Real life exploration test results for a multi-room environment with path planning. Panels (i)-(v) show the state of the occupancy grid map at successive time instants; panel (v) shows the finished exploration.


Figure 6.9: Top (up) and isometric (bottom) views of the 3D model reconstructed from depth data captured during autonomous exploration


Chapter 7

Conclusion

The purpose of this thesis was to test whether UAVs equipped with 2D laser scanners can be used to perform exploration in unmapped areas, in order to decouple the data gathering process from localization and exploration. For this, a system was developed on a quadrotor UAV platform capable of autonomous flight, localization and mapping. A frontier exploration strategy was implemented, capable of using data from the 2D laser to update costmaps of the environment and generate high-level navigation goals for the UAV to follow.

The system was then tested in different simulated single-room environments with varying capabilities. The navigation system used did not have obstacle avoidance capability; therefore, the best results were obtained when the starting search area was conservative, i.e. when the starting exploration targets were not beyond the laser FOV, and when the FOV was larger so that more area was covered with fewer waypoints. For data gathering applications (capturing RGB-D images, for instance), an additional 'explore clear space' feature was tested, which picks goals in areas that have been seen by the perception sensor but not visited. This feature is particularly useful for data gathering with RGB-D cameras, which have a shorter range than most laser scanners. The tests showed that while a limited-FOV laser can be used to completely explore unknown spaces for such applications, the performance would be significantly better with a global path planner and a local obstacle avoidance system. To validate this, the method was tested on a real hexacopter platform capable of path planning. The results showed that the UAV successfully manages to explore even multi-room environments, gathering depth data in the process. The model reconstructed from this data, while it leaves room for improvement, shows that the exploration process forces the UAV to visit or see all possible areas when placed in previously unknown environments.

The work done through the course of this project shows that it is possible to use a UAV platform to explore areas with only a 2D laser scanner, and that, if coupled with path planning capability, the platform is capable of gathering data for applications such as 3D reconstruction of scenes. This successfully answers the initial research question (Section 1.1).


7.1 Future Work

The currently implemented exploration strategy has a simplified approach to sampling and selecting frontiers. This performs well for use cases where covering the entire map is the sole objective. However, there is scope to improve the frontier search process so that it is adapted for targeted applications, for instance by using weighted frontiers, as mentioned in [11], to add cost functions to the selection process and penalize goals that are not optimal for capturing images.
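A possible shape of such a weighted selection is sketched below; the frontier attributes, weights and terms are hypothetical and only illustrate trading travel cost against expected information gain and image-capture suitability.

```python
import math

def frontier_utility(frontier, uav_pose, w_dist=1.0, w_size=0.5, w_view=2.0):
    """Score a frontier candidate; higher is better.

    The frontier object is assumed to expose a centroid, its size in cells and an
    application-specific viewing score (e.g. expected image-capture quality); these
    attributes and the weights are illustrative, not part of the implementation."""
    dx = frontier.centroid[0] - uav_pose[0]
    dy = frontier.centroid[1] - uav_pose[1]
    travel_cost = math.hypot(dx, dy)            # penalise goals that are far away
    info_gain = frontier.size                   # larger frontiers promise more new area
    return w_size * info_gain + w_view * frontier.view_quality - w_dist * travel_cost

def pick_frontier(frontiers, uav_pose):
    # Select the candidate with the highest utility, if any candidates remain.
    return max(frontiers, key=lambda f: frontier_utility(f, uav_pose), default=None)
```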

Since the current platform does not prioritise smooth flight, the data gathered was seen to be less than perfect, judging by the reconstructed model. With improvements to the control and path planning, the flight can be made smoother in order to reduce sudden movements of the camera, thus improving the data gathered.

The current system has no way to assess the quality of the data gathered while still in flight; the reconstruction is done offline. This means that the quality is not known until after the flight, limiting the ability to get better data, for instance by returning to certain areas of the map where the data was not optimal. A feedback loop where the data is analysed could provide information to, for instance, clear the costmap in areas with poor data in order to return to these locations.
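A rough sketch of this idea is given below. It assumes a hypothetical per-cell quality estimate, which the current system does not compute, and simply resets poorly covered cells to unknown so that the frontier search generates goals there again.

```python
import numpy as np

UNKNOWN = -1   # value used here for unexplored cells in the occupancy grid

def reopen_poor_cells(costmap, quality, quality_threshold=0.3):
    """Reset poorly covered explored cells to UNKNOWN so that the frontier search
    generates goals in those areas again.

    costmap and quality are same-shaped numpy arrays; quality holds a hypothetical
    per-cell reconstruction-quality estimate in [0, 1]."""
    reopened = costmap.copy()
    poor = (quality < quality_threshold) & (reopened != UNKNOWN)
    reopened[poor] = UNKNOWN
    return reopened
```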

References
