
Autonomous Aerial Void Exploration

Emil Vidmark

Computer Science and Engineering, master's level 2020

Luleå University of Technology

Autonomous Aerial Void Exploration

Emil Vidmark

Department of Computer Science, Electrical and Space Engineering

Luleå University of Technology

Luleå, Sweden

Supervisors: George Nikolakopoulos, Christoforos Kanellakis


Abstract

Deploying robots in unknown and complex areas for inspection tasks is becoming a real need in various application scenarios. Recently, there has been increasing interest in developing and using autonomous aerial robots in environments such as urban voids and subterranean mine tunnels, aiming to decrease the human presence in dangerous or inaccessible areas. These areas are characterized by complete darkness and narrow tunnels, where the ground can often be rough and not traversable for mobile vehicles; the developments therefore focus on Micro Aerial Vehicles (MAVs). MAVs are mechanically simple and agile platforms that can navigate through cluttered areas and have the potential to perform complex exploration tasks when equipped with proper onboard sensors.

One of the key milestones in the development of autonomous robots is self-exploration. The definition of self-exploration according to [7] is "the act of moving through an unknown environment while building a map that can be used for subsequent navigation". By reaching this milestone, robots would be freed from the limitation of requiring already existing maps for navigation.

In this thesis, a frontier-based exploration algorithm is established and evaluated to understand how such a method could be used to reach the self-exploration milestone. By marking the border between what is known and unknown, the method is able to determine the next desired position for the robot in order to expand the map.


Contents

Chapter 1 – Thesis Introduction
1.1 Background
1.2 Motivation
1.3 Problem Definition
1.4 Related Works
1.5 Scope and Delimitation
1.6 Methodology
1.7 Thesis Structure

Chapter 2 – Theory
2.1 Introduction
2.2 Mapping
2.3 Frontier-Based Exploration
2.4 Quadcopter
2.5 Path planning
2.6 Next Best View Planning

Chapter 3 – Simulation Setup
3.1 Measurements And Evaluations
3.2 Simulated Environments
3.3 Settings and Equipment

Chapter 4 – Simulation results
4.1 The Void
4.2 The Mine

Chapter 5 – Discussion
5.1 Simulations
5.2 Conclusions and Future Work


Acknowledgments

I would like to express my deep gratitude to Professor George Nikolakopoulos, my research supervisor, for his patient guidance, motivational speeches and useful critiques of this research work.


Chapter 1

Thesis Introduction

1.1 Background

1.1.1 Unmanned Aerial Vehicles

UAVs have become a popular research field in recent years. The research and development of these vehicles was primarily motivated by military use [13]; as the cost of both materials and reliable sensors has been reduced, other usage areas have been found. Agriculture, exploration and wildfire surveillance are just some of the applications where these vehicles have proven useful outside of the military [6].

The definition of a UAV differs depending on the selected field; they are variously called "Unarmed Aerial Vehicles", "Unpiloted Aerial Vehicles" or "Unmanned Aerial Vehicles", as the vehicle does not need an onboard pilot to navigate it. During recent years, the interest in UAVs has grown and various aircraft platforms have been developed to suit different kinds of tasks [10]. The platforms can be placed in four major groups:

• Multi rotors
• Fixed wings
• Single rotors
• Fixed wing hybrid VTOL

Each of these aircraft types has its advantages and disadvantages [10]. To give a better understanding of these, a brief explanation of each group is given below.

Multi rotors - Multirotors are the most common group of UAVs and the easiest platform to get started with. The common properties of this group are that they can take off and land vertically and move freely without the need of airflow over the blades; instead the blades themselves generate the needed airflow. This property allows the multirotor to hover, which essentially is to stand still at a specific point. Other advantages of this group of platforms are that they are agile, fast and can perform complex movements. One downside, though, is that they have a limited flight time and cannot carry much load.

Fixed wing - Fixed wings have a totally different design than multirotors. A fixed wing uses a rigid wing like a conventional airplane and flies thanks to the lift generated by airflow over the wing, with forward thrust provided by a propeller. Like an airplane, it has ailerons, an elevator and a rudder to navigate in the air. Fixed wings have the advantages that they can carry heavier payloads than multirotors and can also travel longer distances. Just like airplanes, though, a fixed wing needs a runway for takeoff and landing and cannot hover, as it constantly needs to be moving. As a result, they are not as agile as multirotors.

Single rotors - Single rotors look very similar in both structure and design to helicopters. Unlike multirotors, a single rotor uses one large main blade plus a small blade on its tail to prevent it from spinning and to control its heading. Single rotors, just like multirotors, take off and land vertically and have the ability to hover, as they produce the required airflow themselves. The advantage of single rotors is that they can carry heavy loads and are more efficient than multirotors, and therefore have a longer flight time. The disadvantages are that they are less stable than multirotors and have a much higher mechanical complexity, which also makes them more expensive.

Fixed wing hybrid with VTOL - This group combines the benefits of fixed wings and the rotor-based models. The hybrids use rotor blades to generate airflow, which gives them the ability to hover and to take off and land vertically. Hybrids are quite experimental and are not very common on the market today.

All the different UAV types have their own advantages and disadvantages; no single model is suited for all kinds of situations. In figure 1.1 the different types of UAVs are shown.

1.1.2 The dangers of mines and abandoned mines

Mines, especially abandoned mines, are very dangerous environments for humans to visit, due to low oxygen levels, flooding, moving rocks and exposure to methane, a highly explosive gas which frequently accumulates when mines are not well ventilated.

In March 1976 two such gas explosions occurred in the Scotia Mine in Kentucky, killing 26 people working in the mine. The source of the first explosion was believed to be ignition from one of the controllers in the locomotive, while the second was believed to be ignition from an arc coming from some electrical equipment [3].


Figure 1.1: Various types of UAVs.

in the accident [3].

These are just two of many disasters which have occurred in mines during the last 50 years. These accidents make the point that mines are highly dangerous areas where mistakes can have a deadly outcome.


Figure 1.3: Quadrotor flying inside a Swedish mine. As no light source exists inside the tunnel, a light source is placed on the quadrotor.

1.2 Motivation

As exploration of both active and abandoned mines can be a dangerous task for humans, it is a good task to hand over to robots instead. UAVs are good candidates for exploration in mines as they are not limited by the surface of the ground. Such limitations exist for ground robots, where the ground surface can be too cluttered and messy to traverse without damaging the robot or its equipment. UAVs are usually remotely operated, and given the characteristics of a mine environment the task of remote operation is demanding, and the costs are high as a trained pilot is needed. The reach of signals also needs to be taken into consideration, as thick walls and curvy tunnels can prevent signals from reaching the remotely controlled vehicle. Such limits could cause loss of the video signal, leaving the pilot without vision during a flight.

By moving to autonomous exploration, the costs of using UAVs would be lowered. Costs for training sessions, human pilots and eventual crashes with replacement of equipment could be minimized or removed entirely. Human factors, such as fatigue [6], would also be removed if autonomous control could be used for the task instead of a human pilot.

As seen with the disaster in the Scotia Mine, an ignition can cause an incredible amount of damage. Not all of the different types of UAVs are suitable for a mission in these kinds of environments. A need for vertical takeoff and landing exists, as no runway for takeoff and landing can be assumed to exist. As high agility might be needed, a multirotor is suitable, which also keeps the costs low. This requires a highly accurate and efficient exploration algorithm, though, as the flight time is quite low and the environment can be narrow and messy, which demands advanced flight trajectories and movements.

1.3 Problem Definition

A reliable and robust exploration method is one of the core pieces of autonomous exploration, as it provides the information about where the robot should go next to expand its knowledge of the environment. The advantage of using an aerial platform for exploration is that it is not limited by obstacles on the ground, which a ground exploration robot might be.

In this thesis an algorithm for exploring unknown environments with micro aerial vehicles will be developed and evaluated. The problem to solve is to come up with an exploration algorithm and method which can be used to autonomously explore an unknown environment in a safe and reliable manner. While there already exist algorithms for autonomous exploration, many of them are designed only for ground robots and consider only a 2-dimensional world, and others which are designed for 3-dimensional worlds, a requirement for flight exploration, are usually inefficient and have never been tested in the field.

The designed algorithm will be implemented and tested on a quadrotor equipped with a 3-dimensional LiDAR. Thanks to the high agility and simple construction of the quadrotor, it is a valid platform for autonomous exploration, especially in mine-like environments where its agility is highly suitable.

In addition to the problem of developing an autonomous exploration algorithm, no scientific article could be found on autonomous exploration with a 3-dimensional LiDAR at the time of writing. The most common sensor used for the task was an RGB-D camera, which has a limited field of view and uses a different approach to measure the distance to an object.

To be able to measure and evaluate the performance of the algorithm, a path planner and a mapping strategy have to be implemented together with the algorithm to create a fully autonomous exploration system. While there exist different approaches to achieve such properties, they need to be highly reliable and robust in order to achieve a trustworthy and valid evaluation of the designed algorithm.

1.4 Related Works

Autonomous exploration has been researched in many fields, especially in robotics. While autonomous exploration is used for both ground and air vehicles, the methods can vary largely between the two due to the added complexity of air vehicles.

For ground vehicles the world is usually represented in 2 dimensions when it comes to autonomous exploration; as ground vehicles are stuck on the ground, they lack the ability to change altitude compared to air vehicles. While 2-dimensional exploration methods work for aerial exploration too, which is proven in the work of [1] where a solution enabling a quadrotor to explore indoor environments using a 2-dimensional method was presented, the drawback is that the altitude is fixed.

In the work of [19] from 1997, a method called frontier exploration was presented, which has been a frequently used exploration strategy for both 2-dimensional and 3-dimensional exploration. The method marks the border between the known and the unknown and constantly tries to push it away until the border no longer exists. Many works, such as [1], [20] and [5] among others, build upon this strategy and validate that it is time and energy efficient compared to other exploration methods.

Another method used to achieve autonomous exploration is called NBV, "Next Best View" planning. Instead of marking a border between the known and unknown, the method generates a random tree of paths in the known space, where the path which will give the most new information about the environment is selected. Such a method is used in the work [2] for 3-dimensional exploration with quadrotors. While the method proved to be capable of exploration with micro aerial vehicles and shows improved scaling properties compared to a frontier-based approach, it is computationally heavy and does not make use of energy in an efficient way.

Efficiency is one of the core pieces of autonomous exploration for MAVs, as the flight time is highly limited. In the work [5] a highly efficient method based on frontier-based exploration is shown. By limiting the number of changes in direction, the method reaches a high level of efficiency as the time spent hovering and planning is minimized. The exploration method presented in this thesis will use a frontier-based exploration approach with inspiration from the work of [5]. While the work of [5] uses an RGB-D camera for vision, the approach in this thesis will use a 3D-LiDAR to increase the efficiency, as it can cover a larger area and gain more information about the environment.

1.5 Scope and Delimitation

Implementing a fully autonomous system requires many different research fields to be put together, such as control theory, electronics, robotics and algorithms. The scope of this thesis has been to implement and evaluate an exploration algorithm which can be used for autonomous exploration in unknown environments. To achieve this autonomy, systems such as obstacle avoidance and path planning need to be used and implemented. These systems lie outside the scope of this thesis, and therefore already existing packages are used.

The evaluation has been limited to simulations instead of real-world experiments, as time was running out at the end of the thesis and the COVID-19 virus limited the use of the laboratory where the real-world experiments were planned to be done.

1.6 Methodology

To solve the problem presented in section 1.3, theory and simulations have been combined. Inspiration has been taken from the work in [5] and [20], which use a frontier-based exploration approach first presented in the work [19]. The frontier exploration has been divided into two different parts in this thesis, generation of frontiers and selection of frontiers, and each unit has been developed one at a time to convert theory to practice. A continuous dialog has been held with Christoforos Kanellakis at Luleå University of Technology to discuss the performance of the different units and ideas for how the method and strategy could be further developed to raise the performance.

The research was divided into a numbered sequence of stages, each with goals to be achieved before the next stage could be started. These stages were:

1. Literature study
2. Generation of frontiers
3. Selection of frontiers
4. Trajectory generation with obstacle avoidance
5. Simulations

The workflow also followed a certain pattern to achieve as high performance as possible. Earlier research and new ideas were discussed and compared. Once a certain idea was chosen, it was implemented in code, followed by a simulation. After the simulation, the performance was evaluated and the idea was either kept, where it was either finished or iterated on again to optimize it, or it was discarded. An overview of the workflow can be seen in figure 1.3.

1.7 Thesis Structure


Chapter 2

Theory

2.1 Introduction

According to [19], the central question in exploration is: "Given what you know about the world, where should you move to gain as much new information as possible?". At the start of an exploration, the only existing information about the environment is what can be seen from the starting position. The task of the exploration is then to build a map that describes as much of the world as possible in a reasonable amount of time.

In the frontier exploration strategy presented in the work [19], the entire space is classified into three different categories: unknown space, open known space and occupied space. Unknown space is territory which has not yet been discovered and mapped. Open known space is space that has been explored and inserted into the map; this space is considered free from obstacles and space where the robot can navigate safely. Occupied space is a region which contains obstacles and where the robot cannot travel and navigate.

In frontier exploration the central idea is to move to the boundary between open known space and unknown space to gain as much new information as possible. Frontiers mark this border between open known space and unknown space; figure 2.1 shows an example of what frontier exploration in 2D can look like. The assumption is that if the robot moves to one of these frontiers, it will discover more of the unknown space and the map will be expanded. As the robot continues to navigate to the frontiers, the border between the open known and unknown space is pushed back until no frontiers exist any longer. When no frontiers exist, the environment is considered fully explored.

As stated by [19], frontier-based exploration will eventually explore all of the accessible space in the world, assuming perfect sensors and motor control. Frontier-based exploration is therefore suitable in a restricted area, as the exploration will then eventually reach an end.

In the original work the map was 2-dimensional; in later research, [20] and [5], 3-dimensional maps have been created and used for frontier exploration. 3-dimensional maps are required for aerial vehicles to explore safely while allowing changes in altitude, though they are more complex and require more data resources.

Figure 2.1: A 2-dimensional case of frontier exploration. The red circle marks the robot. The open known space is colored green and unknown space is colored grey. The frontiers and the border are marked as orange.

2.2 Mapping

To use 3-dimensional maps for frontier exploration, some method to represent such a map is required. The map is needed not only for the frontier-based exploration and the ability to find new areas to discover, but also for the ability to navigate through the environment. In order to use and generate such a map, the robot needs to be able to localize itself inside the map by using some type of vision which gives it a sense of the environment. A common sensor for humans to get vision from a remotely operated robot is a camera. While a camera is suitable for humans, autonomous vehicles might require a more advanced sensor to perceive the environment.

2.2.1 Vision and representation

As mentioned above, for a robot to collect information about its surroundings, some type of vision is required, which is achieved by employing vision sensors. LiDARs and RGB-D cameras make good sensors for such use, each with its own advantages and disadvantages. These sensors are suitable compared to other types because they can collect a large amount of data in each sensor reading and provide information about the distance to obstacles in sight. The ability to measure distance to an object is one of the core pieces, as this gives the robot a perception of depth.

The two sensors, LiDAR and RGB-D camera, are briefly presented below:

LiDAR - LiDAR, short for Light Detection and Ranging, can be considered a laser distance sensor. Using light waves, it measures the distance to a target.

The sensor emits a laser pulse which hits a surface and bounces back to the LiDAR. By knowing the constant speed of light in air and the time from when the signal was emitted until it bounced back, the distance can be calculated. The distance calculation is shown in equation (2.1), where c is the constant speed of light in air and t the time for the laser to return to the sensor.

d = \frac{c \, t}{2} \qquad (2.1)
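As a minimal sketch of equation (2.1), the Python snippet below computes the time-of-flight distance; the helper name and the example round-trip time are illustrative, not from the thesis.

```python
# Minimal sketch of the time-of-flight distance in equation (2.1);
# the constant and example value are illustrative.

SPEED_OF_LIGHT_AIR = 299_702_547.0  # m/s, approximate speed of light in air

def tof_distance(round_trip_time_s: float) -> float:
    """The pulse travels out and back, so divide the path length by 2."""
    return SPEED_OF_LIGHT_AIR * round_trip_time_s / 2.0

# A ~66.7 ns round trip corresponds to a target roughly 10 m away.
print(tof_distance(66.7e-9))  # ~10.0 (meters)
```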

The advantages of LiDAR technology are that it has a high measurement range and accuracy, can measure 3D structures and has a fast update rate which gives the robot real-time vision. As it uses light, it can also be used in dark environments. Another important advantage of the LiDAR is that it can have a horizontal field of view of 360°.

The disadvantages of these sensors are that they are quite expensive and that transparent surfaces can cause faulty measurements. The vertical view is also limited on a 3D LiDAR, which gives it some blind spots above and under the sensor.

RGB-D cameras - RGB-D cameras work like a regular camera but use two or more lenses to achieve human-like binocular vision. This lets the robot capture 3D images, and the images that RGB-D cameras provide contain the distance from the camera to each and every pixel in the frame.

The advantages of these cameras are that they are fairly cheap and can provide a good amount of data. The disadvantages are that they need a light source and have a limited field of view.

What is important to remember is that these sensors are subject to errors, often referred to as measurement noise. To deal with such noise, different software solutions are usually implemented. In this thesis such noise is handled by the mapping framework used, discussed in the next section.

As each sensor reading provides not just one measurement point but many, a method for representing the data received from the sensor is needed. One existing representation of such data is called a point cloud. A point cloud is a data structure used to represent a collection of multi-dimensional points. As the data coming from the sensor is a collection of points, a point cloud is a good representation for it. Each point in the point cloud is represented by geometric coordinates, X, Y and Z, of a sampled surface relative to the sensor [15]. In figure 2.2 an incoming point cloud from a simulated 3D-LiDAR is displayed. All the red dots together make up a point cloud from one sensor reading, and each dot marks a physical surface. Analyzing the figure further, the strengths and weaknesses of the LiDAR can be seen: while the points on the walls are quite dense, there are almost no points above and underneath the quadrotor, because the 3D-LiDAR has a limited vertical field of view.

2.2.2 Constructing a map


Figure 2.2: A point cloud received from a 3D-LiDAR.

As each point is represented by a geometric coordinate, the map to be constructed can be represented as an X, Y, Z frame. This frame, which is used to store all the incoming points, is referred to as the world frame. The world frame is not the only existing frame, though; there exist frames for the robot and the sensor as well.

Another requirement for constructing a map is knowing the robot's start position in the world frame. Knowing this, and using proper sensors which estimate the movement of the robot, an estimation of the position and localization of the robot inside the world frame is possible as the robot moves. The last requirement before a map can be constructed is to know the location of the sensor on the robot and represent this position in the robot frame, relative to the robot. When this is achieved, a map can be constructed using transformations between the frames.

When a point cloud is received, each geometric point is relative to the sensor frame and not the world frame, our map. In order to store the points in the world frame, the points need to be transformed into the world frame.

To clarify the transformation, an example is shown in figure 2.3. In the figure, a point P is discovered and described in the sensor frame by the vector v. Vector v does not describe the point in the world frame, though; in order to represent point P in the world frame, the task is to find the vector t which describes the point in the world frame. To achieve this, a transformation from the sensor frame to the world frame is needed.


Figure 2.3: A point described by the vector t in the O frame and by the vector v in the R frame.

The transformation between two frames A and B has the form shown in equation (2.2),

\vec{p}_A = R \, \vec{p}_B + \vec{t} \qquad (2.2)

where t ∈ R³ is a vector defining the origin of frame B with respect to frame A, and R is the 3 × 3 rotation matrix, which for a point that does not need to be rotated is the identity matrix. By storing all the incoming points in the world frame, a map of the environment can be achieved.
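As a small illustrative sketch (not the thesis implementation), the following Python code applies equation (2.2) to transform sensor-frame points into the world frame; the rotation and translation values are made up for the example.

```python
import numpy as np

def sensor_to_world(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) array of sensor-frame points into the world
    frame using p_world = R @ p_sensor + t (equation (2.2))."""
    return points @ R.T + t

# Example: sensor yawed 90 degrees about the world Z axis, mounted 0.1 m
# above a robot standing at (2, 1, 0.5) in the world frame.
yaw = np.pi / 2
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([2.0, 1.0, 0.6])  # robot position plus sensor offset

cloud_sensor = np.array([[1.0, 0.0, 0.0],   # one meter ahead of the sensor
                         [0.0, 2.0, 0.0]])
print(sensor_to_world(cloud_sensor, R, t))
```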

It is possible to construct a map by only using this strategy, storing all the incoming points in the world frame and using them to represent the map. However, this is not a very good approach, since storing all incoming points would be very memory inefficient, as sensors can deliver up to 300 000 points per second [18].

This is not the only disadvantage of using raw point clouds to build a map. Problems in robotic mapping include statistically dependent measurement noise, the high dimensionality of the entities being mapped, and the correspondence problem of determining whether sensor measurements taken at different points in time correspond to the same physical object in the world [16].

To overcome these issues, researchers have successfully developed mapping algorithms which are able to deal with such problems. Instead of inserting the points directly into a map, the points from the point cloud can be used to calculate probabilities and construct a map in other ways. In this thesis, occupancy grid maps have been used as they are very reliable and easy to implement.

2.2.3 Occupancy Grid Maps

Occupancy grid mapping is an algorithm which divides the map into a grid, either 2-dimensional or 3-dimensional, and generates probabilistic maps. By using Bayes filters [4], the posterior over the occupancy of each grid cell can be calculated. Occupancy is described by a binary variable: either the cell is free or it is occupied. Let < x, y > be the coordinates of a grid cell and m_{x,y} its occupancy. The problem is to calculate a posterior over a set of binary variables, each of which is a single numerical probability p(m_{x,y} | z^t, s^t). Bayes filters can be used to calculate these posteriors.

The standard occupancy grid mapping algorithm with known poses s^t works as follows:

\frac{p(m_{x,y} \mid z^t, s^t)}{1 - p(m_{x,y} \mid z^t, s^t)} = \frac{p(m_{x,y} \mid z_t, s_t)}{1 - p(m_{x,y} \mid z_t, s_t)} \cdot \frac{1 - p(m_{x,y})}{p(m_{x,y})} \cdot \frac{p(m_{x,y} \mid z^{t-1}, s^{t-1})}{1 - p(m_{x,y} \mid z^{t-1}, s^{t-1})} \qquad (2.3)

To gain some computational advantage, the update equation is usually implemented in logarithmic form:

\log \frac{p(m_{x,y} \mid z^t, s^t)}{1 - p(m_{x,y} \mid z^t, s^t)} = \log \frac{p(m_{x,y} \mid z_t, s_t)}{1 - p(m_{x,y} \mid z_t, s_t)} + \log \frac{1 - p(m_{x,y})}{p(m_{x,y})} + \log \frac{p(m_{x,y} \mid z^{t-1}, s^{t-1})}{1 - p(m_{x,y} \mid z^{t-1}, s^{t-1})} \qquad (2.4)

The occupancy grid mapping algorithm is recursive, which allows for incrementally updating the individual grid cells when new sensor data arrives. Two probability densities are required: p(m_{x,y} | z_t, s_t) and the prior for occupancy p(m_{x,y}). p(m_{x,y} | z_t, s_t) is called the inverse sensor model and specifies the probability that a grid cell m_{x,y} is occupied based on a single measurement z_t taken at location s_t. An example of an occupancy grid map is shown in figure 2.4.

Figure 2.4: Example of a 2D occupancy grid map. Black squares are occupied space, grey is unknown space and white is free space.
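To make the log-odds update in equation (2.4) concrete, here is a minimal Python sketch for a single cell; the inverse sensor model probabilities are assumed values for illustration, not the thesis's.

```python
import math

# A minimal sketch of the per-cell log-odds update in equation (2.4);
# the inverse sensor model values here are illustrative.

def log_odds(p: float) -> float:
    return math.log(p / (1.0 - p))

def prob(l: float) -> float:
    return 1.0 - 1.0 / (1.0 + math.exp(l))

PRIOR = 0.5              # uniform prior: log_odds(0.5) == 0
P_HIT, P_MISS = 0.7, 0.4  # assumed inverse sensor model probabilities

cell_logodds = log_odds(PRIOR)
for measurement_says_occupied in [True, True, False, True]:
    p = P_HIT if measurement_says_occupied else P_MISS
    # Equation (2.4): add the new evidence, subtract the prior term.
    cell_logodds += log_odds(p) - log_odds(PRIOR)

print(prob(cell_logodds))  # posterior occupancy probability of the cell
```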

2.2.4 OctoMap


OctoMap uses an octree, which is a hierarchical data structure for spatial subdivision in 3D. An octree is a tree of nodes, where each node represents the space contained in a cubic volume, usually called a voxel. This cube, the voxel, is subdivided recursively into eight sub-volumes until a given minimum volume is reached. This minimum voxel size determines the resolution of the octree, which in turn determines how detailed the map will be.
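The recursive subdivision can be sketched as below; this is a simplified, illustrative data structure, not the OctoMap library's actual API.

```python
# A simplified sketch of octree subdivision for OctoMap-style maps.

class OctreeNode:
    def __init__(self, center, size):
        self.center = center    # (x, y, z) centre of this cubic voxel
        self.size = size        # edge length of the cube
        self.children = None    # eight sub-volumes once subdivided
        self.occupied = None    # None = unknown, True/False once measured

    def insert(self, point, occupied, min_size):
        """Descend towards `point`, subdividing recursively until the
        minimum voxel size (the map resolution) is reached."""
        if self.size <= min_size:
            self.occupied = occupied
            return
        if self.children is None:
            half, quarter = self.size / 2.0, self.size / 4.0
            self.children = [
                OctreeNode((self.center[0] + dx * quarter,
                            self.center[1] + dy * quarter,
                            self.center[2] + dz * quarter), half)
                for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
        # Index of the child octant that contains the point.
        ix = 1 if point[0] >= self.center[0] else 0
        iy = 1 if point[1] >= self.center[1] else 0
        iz = 1 if point[2] >= self.center[2] else 0
        self.children[ix * 4 + iy * 2 + iz].insert(point, occupied, min_size)

# A 16 m cube subdivided down to 0.5 m voxels: nodes are only created
# along the branch that actually contains the measured point.
root = OctreeNode((0.0, 0.0, 0.0), 16.0)
root.insert((3.2, -1.5, 0.7), occupied=True, min_size=0.5)
```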

In the context of robotics, the octree can mark the occupancy of a voxel. If a certain volume of the voxel is measured as occupied, the corresponding node is initialized and marked as occupied. By using ray casting, the nodes on a ray between the sensor and the occupied node can be initialized and marked as free. Areas which are not initialized are therefore implicitly modeled as unknown space.

The points from the point cloud are integrated using the occupancy grid mapping introduced in [12]. P(n | z_{1:t}), the probability of a leaf node n being occupied given the sensor measurements z_{1:t}, is estimated according to

P(n \mid z_{1:t}) = \left[ 1 + \frac{1 - P(n \mid z_t)}{P(n \mid z_t)} \cdot \frac{1 - P(n \mid z_{1:t-1})}{P(n \mid z_{1:t-1})} \cdot \frac{P(n)}{1 - P(n)} \right]^{-1} \qquad (2.5)

Equation (2.5) is the update formula, dependent on the current measurement z_t, a prior probability P(n), and the previous estimate P(n | z_{1:t-1}). P(n | z_t) denotes the probability of voxel n being occupied given the measurement z_t, which is specific to the sensor model generating it.

Assuming a uniform prior probability leads to P(n) = 0.5. Using log-odds notation, equation (2.5) can be rewritten as

L(n \mid z_{1:t}) = L(n \mid z_{1:t-1}) + L(n \mid z_t) \qquad (2.6)

with

L(n) = \log \frac{P(n)}{1 - P(n)} \qquad (2.7)

This allows for faster updates, since additions are used instead of multiplications. Log-odds can be converted to probabilities and vice versa, so OctoMap stores this value for each voxel instead of the occupancy probability.

2.3 Frontier-Based Exploration

The frontier-based exploration in this thesis is based on the frontier exploration theory introduced in section 2.1. In this work the frontier exploration is divided into three subsystems. The first system is the frontier generation, whose task is to generate frontiers based on the OctoMap from the previous section, where the voxel grid V is defined as V = \{\vec{x}\} with voxel dimensions s³ and each voxel represented by its centroid. This system is described in more detail in section 2.3.1.


These points are then sent to the second system, the frontier selection. This system analyses all the newly generated frontiers and selects the one which seems to be the next best frontier to go to. This system is described in more detail in section 2.3.2.

The third and final system is the frontier publisher, which generates and publishes a trajectory to the quadcopter. This system also handles the commands for starting and stopping the simulation. While the other systems are described in more detail, this one is not, as it only adds supporting functionality to the algorithm and system.

Figure 2.5: Overview of the exploration system.

2.3.1 Definition and Generation of Frontiers

The definition of a frontier is that it marks the border between free space and unknown space. As OctoMap, which generates an occupancy grid map, is selected for the map representation, the map is divided into a 3-dimensional grid where each voxel is either marked as free, occupied, or not marked at all (unknown). The full set of voxels in the OctoMap can be described as V = V_{free} ∪ V_{occupied} ∪ V_{unknown}. OctoMap gives no possibility to mark the border between the cells in the grid, as such a border is not defined in the framework; instead a cell in the grid itself must be marked as a frontier.

To extract such a cell, a definition similar to the one in the work of [20] has been used to define a voxel as a frontier. There, a frontier cell is a free cell that has at least one unknown cell as its neighbor. In this work the same principle has been used with some changes: the number of unknown neighbors a voxel needs in order to be a frontier is defined by the user before the exploration, and no neighbor is allowed to be occupied.

By raising the number of unknown neighbors a voxel needs in order to be a frontier, a tradeoff can be made between how dense the map will be and how long the exploration will take. If a single voxel in a wall is missed due to a bad sensor reading, the robot then does not need to return later to fill in that one-voxel gap. The other change to the frontier requirements is that no neighbor is allowed to be occupied. By not allowing this, frontiers are avoided in narrow, tiny spaces and inside walls due to measurement noise.

In this thesis a 3D-LiDAR has been used. The 3D-LiDAR has a limited field of view above and under the sensor. This limited field of view becomes a problem if a frontier is generated above or under the UAV, such that the UAV can never get it in view and explore it. To overcome this, frontiers within a range r from the UAV are deleted and marked as free. The range r is determined before the exploration starts.

Each voxel has 26 neighbours in the 3-dimensional grid, as shown in figure 2.6. In algorithms 1 and 2, pseudo code for how frontiers are extracted and validated is shown.

Figure 2.6: A voxel (yellow), and its 26 neighbours (blue).

Frontiers are extracted incrementally as the map is continuously extended. When a new point cloud arrives, the OctoMap framework extends the map and the states of the cells in the map are updated. To reduce the number of cells to examine, only the cells whose state has been updated are examined to determine whether they are frontiers. Pseudo code for the extraction of frontiers is shown in algorithm 1.

To make sure that the list always contains valid frontiers, i.e. frontiers with at least one unknown neighbor, algorithm 2 runs before each update of the list. If a frontier no longer has any unknown neighbor, it is removed from the list of frontiers.

Using the same strategy as in [5], the frontiers in the frontiers list can be defined as a global set as in equation (2.8). This set is called the global set of frontiers as it contains all the frontiers. In the next section the strategy for selecting which frontier to explore is explained.

F_{global} := \{ \vec{x}_i \in V_{free} : \exists \vec{x}_j \in V_{unknown} : |\vec{x}_i - \vec{x}_j| = s \} \qquad (2.8)

2.3.2 Selection of Frontiers


Algorithm 1: Frontier extraction
input : m_octree : current octree,
        n : number of neighbours required to be a frontier,
        r : the minimum distance to the UAV to be a frontier
output : frontiers : list containing frontiers

foreach cell ∈ m_octree.getChangedCells() do
    if cell.isFree() then
        if cell.distance() > r then
            i = 0
            foreach neighbour ∈ cell.getNeighbours() do
                if neighbour.isOccupied() then
                    i = 0
                    break
                else
                    i = i + 1
                end
            end
            if i >= n then
                frontiers.add(cell)
            end
        end
    end
end

However, while considering the full set of frontiers works for smaller environments, this type of method does not scale very well. By considering the full set of frontiers, a large amount of data needs to be processed, as each and every frontier needs to be evaluated and compared before a selection can be made. To overcome this issue, a set of local frontiers is created, a subset of the global frontiers which can be described as

F_{local} \subseteq F_{global} \qquad (2.9)

The frontiers belonging to the subset are the frontiers present in the field of view. This field of view is centered on the quadrotor's heading direction and has a horizontal and vertical extent of a certain number of degrees set by the user. Figure 2.7 clarifies this subset of local frontiers.

The selection algorithm is designed to select local frontiers if at least one frontier exists in the field of view. If no frontier exists in the field of view, a frontier is instead selected from the set of global frontiers.


Algorithm 2: Frontier validation
input : frontiers : list from last update,
        n : number of neighbors required to be a frontier,
        r : the minimum distance to the UAV to be a frontier
output : frontiers : list containing only valid frontiers

foreach cell ∈ frontiers do
    if cell.distance() > r then
        i = 0
        foreach neighbor ∈ cell.getNeighbors() do
            if neighbor.isOccupied() then
                i = 0
                break
            else
                i = i + 1
            end
        end
        if i < n then
            frontiers.remove(cell)
        end
    end
end

Figure 2.7: Both squares are frontiers. The green square is considered a local frontier as it lies inside the field of view of 60 degrees, and the yellow square is considered a global frontier as it lies outside the field of view.

\vec{x}_f := \{ \vec{x}_f \in F_{local} : \vec{x}_f = \arg\min(\alpha) \} \qquad (2.10)

where α is the angle between the direction to the frontier \vec{x}_i and the current moving direction \vec{v}_d, computed as

\alpha = \cos^{-1}\!\left( \frac{\vec{x}_i \cdot \vec{v}_d}{|\vec{x}_i| \, |\vec{v}_d|} \right) \qquad (2.11)

By selecting the frontier with the lowest angle to the moving direction, a higher velocity can be maintained, as the quadrotor does not have to change direction. In the work of [5], a similar method was presented and proved to use the flight time efficiently, as the quadrotor could fly at a more constant velocity for longer periods of time. As the flight time of quadrotors is limited, this efficiency is something to strive for. To select a global frontier, the full set of global frontiers is considered instead, as no frontiers exist in the set of local frontiers. This means that the robot must make a larger change of direction in order to continue the exploration, as no frontier exists in the field of view. How a global frontier is selected differs from the selection of local frontiers. While a local frontier should be as close to the moving direction as possible, in the global selection case a weighting function is used, as shown in equation (2.12), which can be tuned to suit different types of environments. The global frontier selected is the one with the lowest score C.

C = w_a \alpha + w_d \Delta D + w_h \Delta H \qquad (2.12)

In equation (2.12), w_a, w_d and w_h are all weights set by the user before starting the exploration. By configuring the weights differently, the exploration behavior of the quadrotor can be changed. The term α is the angle from the moving direction, just as in the selection of local frontiers, ΔD is the distance between the frontier and the robot, and ΔH is the difference in altitude between the frontier and the robot. By raising one of the weights w_a, w_d or w_h, the penalty of that term rises. Raising w_h, for example, raises the score on altitude difference, so that frontiers at a different height are less likely to be selected.

When the set of global frontiers is empty, the exploration is considered done, as there is nothing more to explore.
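To make the two selection rules concrete, the sketch below is a simplified stand-in for the thesis's implementation: it picks a local frontier by the smallest angle α from equation (2.11) and falls back to the weighted global score C from equation (2.12). The field-of-view test (using half the FOV angle) and all parameter values are assumptions for the example.

```python
import numpy as np

def angle_to(frontier, position, heading):
    """Equation (2.11): angle between the direction to a frontier and the
    current moving direction `heading`."""
    direction = np.asarray(frontier, dtype=float) - position
    cosine = direction @ heading / (np.linalg.norm(direction) * np.linalg.norm(heading))
    return np.arccos(np.clip(cosine, -1.0, 1.0))

def select_frontier(frontiers, position, heading, fov=np.radians(60),
                    wa=1.0, wd=5.0, wh=20.0):
    """Prefer the local frontier with the smallest angle; otherwise fall
    back to the weighted global score of equation (2.12)."""
    position = np.asarray(position, dtype=float)
    if not frontiers:
        return None  # global set empty: exploration is done
    angles = {f: angle_to(f, position, heading) for f in frontiers}
    local = [f for f in frontiers if angles[f] <= fov / 2]
    if local:
        return min(local, key=lambda f: angles[f])

    def score(f):
        delta_d = np.linalg.norm(np.asarray(f) - position)
        delta_h = abs(f[2] - position[2])
        return wa * angles[f] + wd * delta_d + wh * delta_h

    return min(frontiers, key=score)

goal = select_frontier([(5.0, 1.0, 2.0), (-3.0, 0.0, 2.0)],
                       position=(0.0, 0.0, 2.0),
                       heading=np.array([1.0, 0.0, 0.0]))
print(goal)  # (5.0, 1.0, 2.0): roughly straight ahead, so a local frontier
```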

2.4 Quadcopter

A quadcopter, or quadrotor, has been selected for the exploration. A quadcopter is a multirotor helicopter which has four vertically oriented propeller motors placed in a square formation at equal distance from the quadcopter's center of mass. Two of the motors spin clockwise and two spin counter-clockwise, to generate lift without inducing a constant yaw (spin) on the quadcopter. The configuration of the propellers is shown in figure 2.8.


Figure 2.8: The arrows show the spin direction of the propeller motors on the quadcopter.

For the quadcopter to hover, the angular velocity of all motors needs to be the same and they need to generate a total thrust which is equal in magnitude and opposite in direction to the gravitational force, as in equation (2.13); an example is shown in figure 2.9.

F_{motor1} + F_{motor2} + F_{motor3} + F_{motor4} = mg \qquad (2.13)

Figure 2.9: For the quadcopter to hover, the total thrust must equal the gravitational force.

When the quadcopter is hovering, it can move up and down by raising or lowering the angular velocity of the motors. By raising the angular velocity of all four motors, more thrust is gained and the quadcopter lifts upwards. If the angular velocity of the four motors is lowered, the quadcopter descends.

To move forwards or backwards the quadcopter must pitch, which is achieved by lowering the angular velocity of the two motors in the forward or backward direction. To move left or right the same principle is used, but with roll instead of pitch, changing the angular velocity of the two motors facing the left or right side.
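A minimal sketch of how such commands could map to the four motors of a plus-configuration quadcopter is shown below; the signs and scaling are illustrative assumptions, and real controllers handle yaw through the rotors' drag torques.

```python
# Illustrative motor mixing for a plus-configuration quadcopter.

def motor_mix(thrust, pitch_cmd, roll_cmd, yaw_cmd):
    """Return per-motor commands (front, back, left, right)."""
    front = thrust - pitch_cmd - yaw_cmd
    back  = thrust + pitch_cmd - yaw_cmd
    left  = thrust + roll_cmd  + yaw_cmd
    right = thrust - roll_cmd  + yaw_cmd
    return front, back, left, right

# Pitch forward slightly while holding total thrust: the front motor
# slows down and the back motor speeds up, tilting the body forward.
print(motor_mix(thrust=0.5, pitch_cmd=0.05, roll_cmd=0.0, yaw_cmd=0.0))
```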

A barometer measures the air pressure, which can be converted to the altitude of the quadcopter. All of these sensors are essential for a UAV to be able to stay stable in the air.

2.5 Path planning

Path planning is a core piece of robotics, as it is a requirement for the ability to navigate autonomously without manual inputs. In autonomous exploration, path planning is required for the ability to explore the environment, which in the scope of this thesis means moving between selected frontiers.

Achieving an autonomous path planner is not a trivial task and is a major research field within robotics. Not only is the most efficient path to the goal requested, but also a safe path, free from any static or dynamic obstacles, found in a reasonable amount of time, and the complexity rises with the degrees of freedom. The goal of path planning is therefore to find the optimal path between two points, optimal in the sense that the path consumes a minimum amount of energy and has a minimal computational time.

A planned path can be described as having a starting point S and an ending point T. The path between S and T consists of successive segments which, added together, make up the full path. A path p can therefore be described as having a starting point p_s and an end point p_t with n consecutive points in between, as in equation (2.14); a segment is generated between each pair of consecutive points [11].

p = p_s + p_0 + p_1 + \ldots + p_n + p_t \qquad (2.14)

To solve the path planning problem in this thesis, a strategy of splitting the path planning into two planners has been used: a global path planner and a local path planner. Using different resources to plan a safe path between two points, together they make a robust and reliable path planner.

A global path planner is used to plan a path between the start point and the end point using information from a map. By using a map to plan the path, a path avoiding known static obstacles can be found. How the path is selected depends on what kind of map is used. Using an occupancy map, described earlier in this chapter, one example would be to plan a path through free space which traverses as few cells in the grid as possible. In this thesis, the global planner has been implemented using the simplest approach possible: it simply plans the shortest possible path to the goal point in the map without considering the obstacles in the map at all. The path planner in this thesis therefore relies heavily on the local path planner rather than the global path planner.


The local planner also does not plan a complete path to the goal point; instead it plans paths to sub-goals along the global path in order to fly as close to it as possible.

To achieve a reliable and robust local path planner in this thesis, a local planner called Ewok [17] has been implemented, which is especially designed for micro aerial vehicles (MAVs).

Before diving into how Ewok plans the path, it is important to understand how it detects obstacles. Ewok does not use the full map of the environment to detect obstacles, as it is a local path planner; instead it stores its own local map of the surroundings to navigate by. This local map only covers a set radius around the quadrotor, and as the quadrotor moves, the mapping outside the set radius is forgotten. The advantages of this strategy are that the robot can plan around dynamic obstacles, as the local map is constantly updated even when the quadrotor travels through already known space, and that it is very memory efficient.

In Ewok, this strategy is implemented using an occupancy map to store information about obstacles, and Ewok uses the same sensor information as is used to build the global map. Just like the global map, it divides the world into a 3-dimensional grid where each cell can be free, occupied or unknown. The path planner can then achieve safe paths by only travelling through cells which are marked as free.

While the task of the global path planner is to plan a path to a goal point, the local path planner is used to plan a path that follows, or is as close as possible to, the global path. As the quadrotor follows the global path, Ewok checks whether any obstacles are blocking the path. This is made possible by creating a polynomial spline between waypoints on the global path. If any segment between the waypoints passes through an obstacle, the waypoint is moved such that the spline no longer passes through the obstacle.

The advantage of using B-splines for this purpose is that they generate a smooth trajectory and that moving a control point only affects the trajectory locally around that control point. This means that the full trajectory does not need to be recalculated, only a smaller part of it.
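A small sketch of this local-support property, using the standard uniform cubic B-spline basis (generic B-spline math, not Ewok's actual code): each sample depends on only four control points, so moving one control point changes only nearby spline segments.

```python
import numpy as np

# Uniform cubic B-spline basis matrix: a point on one segment depends
# only on four consecutive control points (local support).
M = np.array([[-1,  3, -3, 1],
              [ 3, -6,  3, 0],
              [-3,  0,  3, 0],
              [ 1,  4,  1, 0]], dtype=float) / 6.0

def bspline_point(ctrl4, u):
    """Evaluate one spline segment at u in [0, 1] from its 4 control points."""
    U = np.array([u**3, u**2, u, 1.0])
    return U @ M @ ctrl4

ctrl = np.array([[0, 0, 1], [1, 0, 1], [2, 1, 1], [3, 1, 1], [4, 0, 1]], float)
print(bspline_point(ctrl[0:4], 0.5))  # sample on segment 0

# Moving the last control point affects segment 1 (ctrl[1:5]) but
# leaves segment 0 untouched, so segment 0 needs no recalculation.
ctrl[4] += [0.0, 2.0, 0.0]
print(bspline_point(ctrl[0:4], 0.5))  # identical to before the move
```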

2.6 Next Best View Planning

In the following chapters the simulation setup and simulation results will be presented. There, our implementation is compared to another exploration algorithm, Next Best View [2], which takes a different approach to exploration than frontier exploration. To compare the two methods it is important to know their key differences. As our approach has already been presented in this chapter, this section briefly presents the Next Best View planner.


Chapter 3

Simulation Setup

3.1 Measurements And Evaluations

In order to evaluate the performance of the exploration method together with the mapping and the path planner, simulations have been performed using the Gazebo simulator [7]. Gazebo is a computer software used to build, design and simulate different environments. In this thesis, different environments have been developed, see section 3.2, to test and evaluate how the exploration algorithm behaves in different scenarios. While simulations can give a good prediction of real-world scenarios, it is important to understand that the simulation is a perfect world, in the sense that unpredictable effects like wind do not happen without intention.

To test and evaluate the performance of the method proposed in this thesis, it is compared to a Next Best View (NBV) planner. It is important to note that at the time of writing, no method using a 3-dimensional LiDAR could be found. The method proposed in this thesis uses a 3D-LiDAR while the Next Best View planner uses an RGB-D camera, so a fair comparison between the methods is hard to achieve. To overcome this unfairness as much as possible, the multirotors have been configured to fly at the same velocities and the sensors have been given the same maximum range. The settings that have been configured equally are shown in table 3.1.

Parameter          Value
Sensor Distance    10 m
Max Velocity       0.5, 1.0, 1.5 m/s
Voxel Resolution   0.5 m³
Safety Distance    0.5 m

Table 3.1: Settings configured equally for both systems.


While a direct comparison cannot be made between the two exploration algorithms because of their different sensor setups, a comparison of the complete exploration systems can be made instead. As a system, the full robot is considered, with the path planning, mapping and exploration method together. This comparison makes sense, since the Next Best View planner is a well-known, working exploration method.

To compare the two systems, data must be collected from both while they perform an exploration of the same environment. The data that has been collected and compared is the following:

• Time taken for exploration
• Flight distance during exploration
• Total map coverage

The time for the exploration is measured from when the start command is sent to the quadrotor until the robot reports that the map is fully explored. To measure the total flight distance, a point is recorded every time the inertial measurement unit (IMU) updates the quadrotor's position. By calculating the vector between the current position and the point from the last update, the distance between two updates is obtained; summing all these distances gives the total distance. The total map coverage is measured in percent and shows how much of the full map has been explored. As the mapping divides the world into a 3-dimensional grid, the total coverage can be calculated by comparing the number of voxels discovered against the total number of voxels in the environment. The total number of voxels is easily calculated if the total volume of the environment is known. Worth noticing, though, is that 100% coverage is extremely hard, if not impossible, to achieve, as corner pieces might be concealed and therefore not discoverable.
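As a small sketch of these metrics (illustrative helper names, not the thesis's measurement code), the flight distance is a running sum of position deltas and the coverage is a ratio of voxel counts:

```python
import numpy as np

def total_distance(positions):
    """Sum of distances between consecutive IMU position updates (N x 3)."""
    deltas = np.diff(np.asarray(positions, dtype=float), axis=0)
    return float(np.linalg.norm(deltas, axis=1).sum())

def coverage_percent(discovered_voxels, environment_volume, voxel_size=0.5):
    """Discovered voxels versus the total voxels that fit in the volume."""
    total_voxels = environment_volume / voxel_size**3
    return 100.0 * discovered_voxels / total_voxels

path = [(0, 0, 1), (1, 0, 1), (1, 2, 1), (1, 2, 3)]
print(total_distance(path))                      # 1 + 2 + 2 = 5.0 m
print(coverage_percent(31744, 20 * 20 * 10))     # 99.2 % of a 4000 m^3 space
```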

3.2 Simulated Environments

To test the exploration algorithm and system properly, two different types of environments have been examined, named as follows:

• The Void
• The Mine


Figure 3.1: The void with its dimensions. The left image shows the void from above and the other from the side.

The second simulated environment is designed in a more advanced manner and is meant to illustrate a smaller part of a real mine. As a mine does not have straight walls, can contain height differences, and obstacles might be present along the path in the form of stalactites and stalagmites, the total volume of the environment cannot be calculated as it can in the void. All exits in the mine are blocked, making it, just like the void, a closed environment.

This type of environment was selected to see how the algorithm and system behave and fly in a possible real-world scenario. The environment is seen as a 'highly advanced' scenario, as the tunnels are quite narrow and contain obstacles, which requires a very precise and accurate path planner to avoid the risk of hitting any physical object. Figure 3.2 shows a picture taken inside the mine, and figure 3.3 shows a top view of it.


Figure 3.3: The mine shown from above. The mine contains three tunnels, all of which are blocked.

3.3 Settings and Equipment

The simulated multirotor which runs the exploration algorithm and system presented in this thesis is equipped with a 3D-LiDAR. The 3D-LiDAR is placed on top of the quadrotor, see figure 3.4.

Figure 3.4: The quadrotor with the Velodyne VLP-16 placed on top.

The 3D sensor used in the simulation, shown in figure 3.4, is a Velodyne VLP-16 [18]. To get an accurate simulation of the sensor, the latest version of the Velodyne VLP-16 sensor simulation files has been used. The VLP-16 has a horizontal field of view of 360°, a vertical field of view of ±15°, and a total range of 100 m, though in this thesis the range is limited to 10 m during the exploration.

The multirotor running the Next Best View planner is equipped with a VI-Sensor [14]. While the VI-Sensor contains more than one type of sensor, one of them is an RGB-D camera with a 132° field of view.

During the simulations, the settings shown in table 3.2 were used for the exploration algorithm. As mentioned in chapter 2, w_a, w_d and w_h are the weights for the global frontier selection, n is the number of unknown neighbors a voxel needs to be a frontier, and r is the radius within which frontiers are deleted, used to prevent frontiers above or under the sensor from getting the exploration stuck.

Parameter   Value
w_a         1
w_d         5
w_h         20
n           1
r           4 m

Table 3.2: Settings used for the exploration algorithm.


Chapter 4

Simulation results

4.1 The Void

In tables 4.1 and 4.2 below, the total time of the two methods is presented together with the path length and the total coverage in percent. Below the tables, in figures 4.1 and 4.2, the data from the tables is plotted into two graphs to visualize the differences more clearly.

Method   Speed (m/s)   Time (mm:ss)   Distance (m)   Coverage (%)
NBV      0.5           10:31          257            99.22
NBV      1.0           4:33           225            99.45
NBV      1.5           4:31           296            99.34

Table 4.1: The result table using NBV in the void.

Method   Speed (m/s)   Time (mm:ss)   Distance (m)   Coverage (%)
F.E      0.5           6:45           133            98.77
F.E      1.0           3:00           124            99.50
F.E      1.5           2:21           141            99.37

Table 4.2: The result table using FE in the void.


Figure 4.1: Plot of the results presented in tables 4.1 and 4.2, distance against velocity. The NBV is marked with blue squares and the FE with red ones.

Figure 4.2: Plot of the results presented in tables 4.1 and 4.2, time of exploration against velocity. The NBV is marked with blue squares and the FE with red ones.


Figure 4.3: The trajectory generated by the frontier exploration method.

Figure 4.4: The trajectory generated by the NextBestView method.

As the Next Best View planner always selects the branch expected to give the most new information about the environment, it causes the quadrotor to fly back and forth through known space from time to time. This strategy is good from a theoretical point of view, while in reality it can be highly inefficient and result in a large waste of energy. The frontier exploration algorithm also has to pass through known space, as seen in figure 4.3, but only once, which makes a huge difference in the flight distance.

From the high coverage percentages of both methods we can draw the conclusion that the full map has been explored. As mentioned in the previous chapter, 100% coverage is almost impossible to reach, as some voxels can be concealed behind corners.

From these results, the algorithm proposed in this thesis is more suitable than a Next Best View planner if the power source is small and efficiency is prioritized. It is worth noticing, though, that the cost of the frontier-based exploration setup is higher, as the 3D-LiDAR sensor is more expensive than the VI-Sensor.

4.2 The Mine

Method   Speed (m/s)   Time (mm:ss)   Distance (m)   Coverage (No. Voxels)
NBV      0.5           6:40           137            9127
NBV      1.0           6:39           176            9829
NBV      1.5           8:12           391            13017

Method   Speed (m/s)   Time (mm:ss)   Distance (m)   Coverage (No. Voxels)
F.E      0.5           8:20           191            13007
F.E      1.0           4:36           226            12936
F.E      1.5           3:10           130            11376

Table 4.3: The result table using FE in the mine.

The results from the mine were very diverse compared to the results from the simulations in the void. As mentioned in the previous chapter, the environment is highly advanced due to the narrow tunnels and the presence of obstacles in them. While many attempts were made with different configurations, we never got the Next Best View planner to make a successful flight inside the mine. Throughout all the simulations the method was never able to plan a path through the full tunnel, which meant the exploration stopped before the full environment was explored. This is the reason why the Next Best View planner has a lower total time and total distance travelled for the exploration compared to our proposed method.

The best run for the Next Best View planner was the one at a velocity of 1.5 m/s. Even though the flown distance is high, it only made it through two of the three tunnels in the mine.

In comparison to the Next Best View planner, the frontier exploration method succeeded in exploring all the tunnels fully at velocities of 0.5 m/s and 1.0 m/s. At 1.5 m/s it failed to explore the full mine, as it planned a route through one of the walls and ran into it.

The frontier exploration system is clearly more efficient with regard to both the total flight time and the total distance travelled. While the Next Best View planner never finished a full exploration of the environment, its total distance travelled is much higher when comparing its best result, at a velocity of 1.5 m/s, to the successful explorations made by the frontier exploration system. The same applies to the time: the time taken by the NBV at 1.5 m/s is almost the same as the frontier exploration time at 0.5 m/s, and the NBV never even explored the environment to its full extent.


Chapter 5

Discussion

5.1 Simulations

5.1.1 The Void

As mentioned in chapter 3, the void was designed to test the ability to search an open area and to show how the method generates different paths during the exploration.

As could be seen from the results, the exploration algorithm and system developed in this thesis outperform the Next Best View system in total distance travelled and time of exploration in an open environment such as the void. The reason is that the Next Best View algorithm always picks the branch which will give as much information as possible, which may lead to it travelling back and forth through known space to explore more of the environment. This strategy is inefficient, as a lot of the flown distance might go to travelling between different branches without gaining any new information. A trajectory of this exploration is shown in figure 4.4.

The frontier exploration algorithm instead always tries to pick a local frontier in front of the multirotor to explore more of the environment. The advantage of this is that the multirotor can keep a higher velocity for longer periods, as it does not have to change direction as often. Also, when no local frontier exists, the method, depending on the weights, selects as close a frontier as possible if it must change direction, which is not the case with the Next Best View method. In comparison to the Next Best View method, this strategy avoids going back and forth in known space, as seen in the trajectory in figure 4.3.
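A minimal sketch of such a weighted selection is given below; the weights W_DIR and W_DIST and the heading cone that defines a "local" frontier are illustrative assumptions about the strategy described above, not the exact cost function of the thesis implementation:

import math

W_DIR, W_DIST = 1.0, 0.5        # illustrative weights
LOCAL_CONE = math.radians(45)   # frontiers inside this cone count as local

def select_frontier(position, heading, frontiers):
    # Prefer a frontier roughly in front of the vehicle so it can keep
    # its velocity; otherwise pick the cheapest combination of a small
    # turn and a short travel distance. Positions are (x, y) tuples.
    def turn_angle(f):
        bearing = math.atan2(f[1] - position[1], f[0] - position[0])
        return abs(math.atan2(math.sin(bearing - heading),
                              math.cos(bearing - heading)))
    local = [f for f in frontiers if turn_angle(f) < LOCAL_CONE]
    candidates = local if local else frontiers
    return min(candidates,
               key=lambda f: W_DIR * turn_angle(f)
                             + W_DIST * math.dist(position, f))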

5.1.2 The Mine

As shown in the results, the Next Best View planner was never able to plan a path through the full environment. During the simulations, many different settings were tested to achieve a successful flight inside the mine, but in none of the attempts did the Next Best View planner manage to explore the full mine without a collision. An interesting aspect of the Next Best View attempts, though, is that in one of the tests it reached almost as high a coverage as the frontier exploration, even though it never explored the full mine.

5.2 Conclusions and Future Work

The exploration algorithm developed in this thesis proves to be efficient and safe according to the presented results. During the exploration, none of the selected frontiers were generated at an unsafe place which would cause a collision. Even though the exploration in the mine did not succeed at the highest speed, this is not assumed to be a problem with the algorithm developed in this thesis, as the issue had more to do with the path planning system than with the algorithm itself. Looking at table 4.2, it can be seen that the frontier exploration algorithm delivered a similar exploration coverage of the full map as the already established Next Best View algorithm.
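A sketch of the kind of safety filter implied here, which discards frontier candidates that lie too close to occupied space; the clearance radius and the brute-force distance check are illustrative assumptions, not the thesis implementation:

import math

SAFETY_RADIUS = 0.6  # metres; illustrative clearance around the platform

def is_safe(frontier, occupied_voxel_centres):
    # Accept a frontier only if no occupied voxel centre lies within
    # the clearance radius, so the goal itself cannot cause a collision.
    return all(math.dist(frontier, v) > SAFETY_RADIUS
               for v in occupied_voxel_centres)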

As mentioned earlier in the thesis, a totally fair comparison cannot be made between the two algorithms and systems, as they are using different sensors. What can and should be mentioned, though, is that the frontier exploration algorithm together with the 3D-LiDAR seems to be a more efficient setup for exploring an unknown environment, judging by the results in Chapter 4. As flight time is one of the most critical weaknesses of multirotors, such an exploration setup should be kept in mind if an efficient exploration is targeted. What is important to point out, though, is that the frontier exploration might not be able to generate a map with the same density as a Next Best View planner. This is important to take into consideration when choosing between the two systems and their sensors.

When this thesis was started, it was planned to perform real world experiments to test the performance outside a simulated environment. This was not possible, mainly for two reasons: the spread of the COVID-19 pandemic, but also because such experiments would introduce conditions which are not possible to achieve in a simulated environment, such as drift in the positioning.

One way to further improve the work of this thesis is to find a more efficient way of using the 3D-LiDAR. Using the 3D-LiDAR for autonomous exploration adds some complex problems to the path planning and obstacle avoidance, as the 3D-LiDAR has a limited field of view above and below the sensor. This can cause the multirotor to collide with the ceiling or ground when flying, as it might not be able to recognize obstacles close to the robot. One way of taking this further would be to have the 3D-LiDAR tilt back and forth to collect more points above and below the robot.
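To illustrate the geometry, assume the roughly ±15° vertical field of view of the VLP-16 [18]; the tilt parameter below stands in for such a leaning mechanism and the numbers are illustrative:

import math

VFOV_HALF = math.radians(15.0)  # VLP-16 vertical half field of view [18]

def max_visible_height(distance, tilt=0.0):
    # Maximum height above the sensor that is still inside the field of
    # view at a given horizontal distance; pitching the sensor upwards
    # by 'tilt' raises this ceiling.
    return distance * math.tan(VFOV_HALF + tilt)

# Example: 2 m ahead, a level sensor only covers about 0.54 m above
# itself, while a 20-degree upward tilt extends this to about 1.40 m.

Oscillating the tilt angle would sweep this limit up and down, letting the robot detect obstacles near the ceiling and the floor without additional sensors.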


Bibliography

[1] Abraham Bachrach, Ruijie He, and Nicholas Roy. “Autonomous flight in unknown indoor environments”. In: International Journal of Micro Air Vehicles 1.4 (2009), pp. 217–228.

[2] Andreas Bircher et al. “Receding horizon ‘next-best-view’ planner for 3D exploration”. In: 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE. 2016, pp. 1462–1468.

[3] MJ Brnich, Kathleen M Kowalski-Trakofler, and J Brune. “Underground coal mine disasters 1900-2010: Events, responses, and a look to the future”. In: Extracting the science: a century of mining research (2010), p. 363.

[4] Zhe Chen et al. “Bayesian filtering: From Kalman filters to particle filters, and beyond”. In: Statistics 182.1 (2003), pp. 1–69.

[5] Titus Cieslewski, Elia Kaufmann, and Davide Scaramuzza. “Rapid exploration with multi-rotors: A frontier selection method for high speed flight”. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE. 2017, pp. 2135–2142.

[6] Nancy J Cooke. “Human factors of remotely operated vehicles”. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Vol. 50. 1. SAGE Publications Sage CA: Los Angeles, CA. 2006, pp. 166–169.

[7] Gazebo Simulator. http://gazebosim.org/. Accessed: 2020-04-01.

[8] Armin Hornung et al. “OctoMap: An efficient probabilistic 3D mapping framework based on octrees”. In: Autonomous robots 34.3 (2013), pp. 189–206.

[9] Miguel Juliá, Arturo Gil, and Oscar Reinoso. “A comparison of path planning strategies for autonomous exploration and mapping of unknown environments”. In: Autonomous Robots 33.4 (2012), pp. 427–444.

[10] Christoforos Kanellakis. On Visual Perception for an Aerial Robotic Worker. Licentiate thesis, Luleå University of Technology. 2017. isbn: 978-91-7583-956-1.

[11] Pablo Marin-Plaza et al. “Global and local path planning study in a ROS-based research platform for autonomous vehicles”. In: Journal of Advanced Transportation 2018 (2018).

[12] Hans Moravec and Alberto Elfes. “High resolution maps from wide angle sonar”. In: Proceedings. 1985 IEEE international conference on robotics and automation. Vol. 2. IEEE. 1985, pp. 116–121.

[13] Francesco Nex and Fabio Remondino. “UAV for 3D mapping applications: a re-view”. In: Applied geomatics 6.1 (2014), pp. 1–15.

[14] Janosch Nikolic et al. “A synchronized visual-inertial sensor system with FPGA pre-processing for accurate real-time SLAM”. In: Robotics and Automation (ICRA), 2014 IEEE International Conference on. IEEE. 2014, pp. 431–437.

[15] Radu Bogdan Rusu and Steve Cousins. “3D is here: Point Cloud Library (PCL)”. In: IEEE International Conference on Robotics and Automation (ICRA). Shanghai, China, May 2011.

[16] Sebastian Thrun et al. “Robotic mapping: A survey”. In: Exploring artificial intelligence in the new millennium 1.1-35 (2002), p. 1.

[17] Vladyslav Usenko et al. “Real-time trajectory replanning for MAVs using uniform B-splines and a 3D circular buffer”. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE. 2017, pp. 215–222.

[18] Velodyne Puck VLP-16. https://velodynelidar.com/products/puck/. Accessed: 2020-03-19.

[19] Brian Yamauchi. “A frontier-based approach for autonomous exploration”. In: Proceedings 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation CIRA’97. ‘Towards New Computational Principles for Robotics and Automation’. IEEE. 1997, pp. 146–151.
