
DEGREE PROJECT IN COMPUTER ENGINEERING,
FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2018

Improving path planning of autonomous vacuum cleaners using obstacle classification

Förbättrad vägplanering för självgående robotdammsugare genom hinderklassificering

NOA SPELE


Improving path planning of autonomous vacuum cleaners using obstacle classification

Förbättrad vägplanering för självgående robotdammsugare genom hinderklassificering

Noa Spele, Tomas Andersson

Degree project in Computer Science, DD142X
Supervisor: Jana Tumova

Examiner: Örjan Ekeberg

School of Electrical Engineering and Computer Science
KTH Royal Institute of Technology


Abstract

To effectively plan their movement, robotic vacuum cleaners require information about their surrounding environment. This thesis presents a mapping algorithm which uses collision input from a moving robot to create a grid map of the environment. The algorithm also classifies each obstacle cell in the grid map as either mobile or immobile.

Methods for creating maps of unknown environments already exist. However, they often rely on information from expensive sensory equipment, and the resulting maps do not include information about the mobility of the mapped objects. Hence, the aim of this thesis was to investigate if the map could be created using only limited information from more accessible sensors.

A testing environment was created using the Unity 3D game engine, in which the robot was represented by a circular object. The robot had access to perfect information about its current position in relation to its starting position, the direction in which it was heading, and any incoming collisions. Three test scenes were created in the simulation environment, representing different common operating spaces for a robotic vacuum cleaner. The scenes contained different kinds of mobile and immobile obstacles that the robot collided with.

A series of tests were then conducted in the test scenes. The tests examined the performance of the created algorithm. The results indicated that the algorithm can create a grid map representation of an unknown environment and classify objects within it with an average correctness of around 80%. However, it is hard to say whether the algorithm would be effective in a real situation, due to the inconsistent results and unrealistic assumptions.


Sammanfattning

Autonomous robotic vacuum cleaners need information about the surrounding environment in order to plan their route efficiently. This report presents a mapping algorithm that uses collision information to create a grid map of the surrounding environment. The algorithm also classifies the obstacles in the grid map as mobile or immobile.

Methods for mapping unknown environments already exist. However, many of these methods use information from more advanced and expensive sensors, and the maps contain no information about the mobility of the obstacles in the environment. The purpose of this report is therefore to investigate whether such a map could be created with the help of simpler and more accessible sensors.

A test environment was created using the Unity 3D game engine, and in this environment the robot was represented by a circular object. The robot had access to error-free information about its position in relation to its starting position, the direction in which it was moving, and all incoming collisions. Three different test rooms were created, representing common environments in which autonomous robotic vacuum cleaners are typically used. The rooms contained different types of mobile and immobile obstacles that the robot collided with.

A set of tests was then performed in these test rooms. The tests examined the performance of the created algorithm. The results showed that the algorithm can create a grid map representing an unknown environment and classify the mobility of the obstacles in it with an average correctness of around 80%. However, it is hard to determine whether the algorithm would perform equally well in a real environment, due to the inconsistent results and the presumably unrealistic assumptions.


Contents

1 Introduction
  1.1 Purpose
  1.2 Problem statement
  1.3 Scope
  1.4 Outline

2 Background
  2.1 Global and local navigation
  2.2 Simple exploration algorithms
    2.2.1 Random motion
    2.2.2 Contour following
  2.3 Path planning
    2.3.1 Path planning in static environments
    2.3.2 Path planning in dynamic environments
    2.3.3 The A* and D* Algorithms
  2.4 Map building
    2.4.1 Mapping with ray-cast data (SLAM)

3 Method
  3.1 Simulation and testing environment
    3.1.1 Obstacle movement
    3.1.2 Test scenes
    3.1.3 Scene 1: regular room
    3.1.4 Scene 2: messy room
    3.1.5 Scene 3: hallway
  3.2 Robot sensory limitations
  3.3 Exploration strategy
  3.4 Mapping algorithm
    3.4.1 Pseudo-code
  3.5 Simulation parameters
  3.6 Verification

4 Results
  4.1 Algorithm visualization
  4.2 Result set 1: Classifying mobile and immobile obstacles
  4.3 Result set 2: Classifying specific obstacles

5 Discussion
  5.1 Result analysis
    5.1.1 Discretization problems
    5.1.2 Exploration effectiveness
    5.1.3 Mobility limit
  5.2 Known limitations
    5.2.1 Assumed perfect localization
    5.2.2 Reliance on chance
    5.2.3 Search algorithm for finding coherent obstacles
    5.2.4 Potentially unrealistic robot movement
    5.2.5 Mobility estimation outside the simulation
    5.2.6 Verification method

6 Conclusion

7 References

8 Appendix
  8.1 Program code
    8.1.1 DiscreteMap.cs
    8.1.2 MapGenerator.cs
    8.1.3 MapObstacle.cs
    8.1.4 MapPoint.cs
    8.1.5 ObstacleBehaviour.cs
    8.1.6 RandomMove.cs
    8.1.7 Verificator.cs


1 Introduction

The use of robotic vacuum cleaners, robotic lawn mowers, and other autonomous robots for performing basic everyday tasks is becoming more and more common in modern homes. In the field of autonomous robotics, a core problem is to determine and understand the unknown operating space in which the robot is working. This includes estimating where the robot is in the environment and identifying the surrounding geometry. In order to find a time-efficient and space-optimized path, it is important for the robot to understand the different types of obstacles it can encounter. It can also be beneficial for the robot to be able to adjust to potential changes in the operating space, without losing track of its localization and the information it has gathered.

The complexity and pricing of robotic vacuum cleaners vary greatly depending on their sensory capabilities. Although advanced and effective coverage and mapping algorithms exist, many of them rely on expensive sensory input, such as sonar sensors or computer vision. While these methods work in theory and in some professional applications, they make any robot that uses them too expensive for the average private household. As a result, there is a great disparity between the simple algorithms currently used by most private robotic vacuum cleaners and the much more sophisticated theoretical algorithms.

1.1 Purpose

The purpose of this report is to bridge the gap between current practical and theoretical coverage strategies. This will be done by investigating how simple sensors (such as bumper sensors) can be used to generate a map of a dynamic domestic environment. The idea is that this map could then be used for more advanced path planning algorithms.

1.2 Problem statement

The thesis will investigate the following:

• How can obstacles in a dynamic domestic environment be mapped, using the limited sensory capabilities of a simple robotic vacuum cleaner?

• Is it possible to accurately classify these obstacles based on their mobility?

1.3 Scope

Today's robotic vacuum cleaners aimed at private households usually do not have access to advanced visual perception devices, because such devices would drastically increase the price of the product. Most devices today have access to instruments which indicate whether the unit has collided with an obstacle, and an IR laser which can be used to follow objects and measure the distance to surrounding objects. These instruments are the sources of the information we are going to use.


This study is therefore focused on analyzing the possibility of using the existing technology in autonomous robotic devices to map the surrounding environment. Because of this limitation, we are not taking into account the possibilities of more advanced sensors, and we are not trying to achieve the most precise and detailed performance; instead, we focus on improving robotic vacuum cleaners with limited sensory capabilities. We do not intend to utilize advanced hardware or computer vision.

Also note that while some simple path planning algorithms will be presented and used in the report, the main focus of the study is to investigate environment mapping. Although the intended end purpose of the map is to be used by more sophisticated path planning algorithms, this application is not within the scope of this report.

1.4 Outline

In section two, the most commonly used strategies for exploring unknown environments, path planning, and map creation will be presented.

In section three, the developed mapping algorithm will be presented and explained. Information about the testing environment and how the tests were conducted is also presented in this section.

In section four, the results from the conducted tests are presented. The following section, section five, is devoted to discussing the produced results.

In section six, conclusions are made based on the discussion in section five, and the problem statement is addressed.

Finally, the references used in the thesis are presented, followed by an appendix including the complete source code used to generate the results.


2 Background

In order to create an efficient and well-performing algorithm, it is important to understand the existing work in the field and the target devices for the map which the algorithm produces. In this section, different common path-coverage strategies will be presented so that the reader can understand the main components of the algorithm. After that, different simple path planning algorithms will be presented, in order to create an understanding of where the algorithm's output will be used. The section ends with a subsection discussing existing strategies for map creation.

2.1 Global and local navigation

The navigation problem of autonomous robots concerns finding the position of the robot, finding the goal position we aim to move it to, and calculating a collision-free path from the robot to the goal position. Although the coverage problem is slightly different from finding a path from point A to point B, the navigation problem is still relevant for more advanced coverage algorithms. The navigation problem can be divided into two sub-problems: global navigation and local navigation [1]. In global navigation, the environment's geometry is known beforehand, and the problem revolves around finding a collision-free path from the robot's current position to a specific goal. In local navigation, the environment is not known beforehand and the aim is to utilize sensory input to detect obstacles and react to them.

2.2 Simple exploration algorithms

In order to create a map based on encountered collisions, the robot has to start by exploring the environment, using one or several simple exploration algorithms. To minimize the robot's hardware requirements, the algorithms must not require more sensory input than short-distance obstacle detectors, such as bumper sensors and IR lasers. As the robot collects information about the environment, it could potentially switch to more sophisticated algorithms later on. However, this is not within the scope of this report. Some examples of common simple exploration and path-coverage algorithms are random motion and contour following. The details of their implementation may vary, but here follows a description of each algorithm as it is implemented in this report:

• Random motion: The robot travels straight forward until it hits an obstacle. Any time it hits an obstacle, it turns to a random direction away from the obstacle and resumes the forward motion.

• Contour following: The robot travels forward while turning slightly left until it hits an obstacle. It then stops, turns right and resumes the forward-left-turning motion. The aim of this motion is to follow the contour of a large obstacle, such as a wall, on the left side of the robot.


2.2.1 Random motion

As the name suggests, the random motion algorithm is based on randomization. The central idea of the algorithm is that a robot moving randomly in a closed environment will most likely cover the whole environment, given enough time.

The random motion based algorithm has three main advantages:

• It does not rely on any prior information about the environment, which suits real-world applications, where the environment is often initially unknown.

• It does not get stuck in a repeating pattern, where the robot only covers a few specific parts of the environment repeatedly.

• It is not dependent on any advanced sensors, and only requires the robot to have access to bumper sensors.

The random motion algorithm can be implemented in various ways, but the core structure of the algorithm is the same in each approach. The core structure can be described as follows [4]:

1. Move forward until a collision is detected.

2. When a collision is detected, analyze the information gathered from the bumper sensor regarding the collision.

3. From the collected data, determine an interval for the new direction which the robot is supposed to follow.

4. Generate a random number that determines which direction to choose from the interval.

5. Repeat the process until a certain time has elapsed, or another constraint is reached.

The benefit of using an implementation similar to the one described above is that the algorithm is computationally cheap, which means that the robot does not need an advanced CPU. Another benefit of using random motion is its unpredictable behavior, which can be utilized as a tool for exploration: the robot will eventually try areas that might otherwise be falsely marked as non-reachable. Figure 1 depicts the general motion of the random motion algorithm.


Figure 1: General motion of the random motion algorithm.
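To make this concrete, below is a minimal sketch of the random-motion step in the style of the Unity scripts in the appendix. The class name, the field value and the ±90 degree turning interval are illustrative assumptions; the thesis's actual implementation is RandomMove.cs in the appendix.

using UnityEngine;

public class RandomMotionSketch : MonoBehaviour
{
    [SerializeField] float speed = 8f; // forward speed, distance units per second

    void Update()
    {
        // Step 1: move straight forward until a collision interrupts us.
        transform.Translate(Vector2.up * speed * Time.deltaTime);
    }

    void OnCollisionEnter2D(Collision2D collision)
    {
        // Steps 2-4: read the surface normal from the collision (the bumper
        // sensor data) and pick a random heading that points away from the
        // obstacle, i.e. within 90 degrees of the normal on either side.
        Vector2 normal = collision.contacts[0].normal;
        float normalAngle = Mathf.Atan2(normal.y, normal.x) * Mathf.Rad2Deg;
        float newHeading = normalAngle + Random.Range(-90f, 90f);
        // The robot's forward axis is its local "up", which points at
        // (z rotation + 90) degrees in world space.
        transform.rotation = Quaternion.Euler(0f, 0f, newHeading - 90f);
    }
}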

2.2.2 Contour following

The contour following algorithm, also known as the wall follow algorithm [4], is used in situations where it is important to follow a wall or go around an object in an environment. The main benefit of the contour following algorithm, compared to the random motion algorithm, is that it generates grouped collision points which can be treated as a coherent object. However, unlike the random motion algorithm, the contour following algorithm will only move in a repeated pattern around a single object or a cluster of objects. The contour following algorithm, like the random motion algorithm, does not need any knowledge about the environment in which the robot is supposed to operate, other than collision input. The general flow of the algorithm is as follows:

1. Go forward and turn left or right until a collision is detected.

2. Turn in the opposite direction until the robot faces α degrees away from the contact point.

3. Go to step 1.

Just like the random motion algorithm, the contour following algorithm is computationally cheap. The algorithm does not need to save any large data sets about the environment in which it is operating, and it is therefore suited for use on appliances with a low-capacity integrated CPU. The main benefit of this algorithm in the field of autonomous robotic vacuum cleaners is that much of the dust in a room is typically located around objects and along the walls of a room, which is exactly what this algorithm covers. Figure 2 depicts the general motion of the contour following algorithm.


Figure 2: General motion of the contour following algorithm.
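A corresponding sketch of the contour-following step, under the same assumptions as the random-motion sketch above. The turn geometry (heading along the wall tangent with the wall on the left, tilted alpha degrees outwards) is our reading of steps 1-3, not code taken from the thesis.

using UnityEngine;

public class ContourFollowSketch : MonoBehaviour
{
    [SerializeField] float speed = 8f;       // forward speed
    [SerializeField] float turnSpeed = 140f; // left turn, in degrees per second
    [SerializeField] float alpha = 0f;       // angle away from the contact surface

    void Update()
    {
        // Step 1: move forward while turning slightly left, so the robot
        // drifts back towards the obstacle it is following on its left side.
        transform.Rotate(0f, 0f, turnSpeed * Time.deltaTime);
        transform.Translate(Vector2.up * speed * Time.deltaTime);
    }

    void OnCollisionEnter2D(Collision2D collision)
    {
        // Step 2: on contact, turn right so the robot heads along the wall
        // tangent (wall on its left), tilted alpha degrees away from the wall.
        Vector2 normal = collision.contacts[0].normal;
        float normalAngle = Mathf.Atan2(normal.y, normal.x) * Mathf.Rad2Deg;
        transform.rotation = Quaternion.Euler(0f, 0f, normalAngle - alpha);
    }
}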

2.3 Path planning

Path planning, in contrast to the previously mentioned exploration algorithms, relies on information about the surroundings, and uses that information to find an optimal path, such that the whole area is visited during the traversal along the chosen path and the operating time is as short as possible. Path planning is used in order to reach a goal as fast as possible or, more commonly in the case of robotic vacuum cleaners, to clean every part of a room as fast as possible.

2.3.1 Path planning in static environments

Static environments are the simplest abstraction of the real-world geometry, where all obstacles are assumed to be static. This is useful since it allows the robot to proceed through the environment without the risk of any collisions.

2.3.2 Path planning in dynamic environments

Since objects often are moved around in homes, a static approach is often not suitable for a robotic vacuum cleaner operating in a generic household. A dynamic approach is often necessary in order to achieve sophisticated movement behaviour. The general idea of path planning in dynamic environments is to start with a simple static approach and, in the case of an obstacle in the chosen path, alter the path according to the information gathered from the collision with or detection of the obstructing object.


2.3.3 The A* and D* Algorithms

The A* algorithm is commonly used for path planning in autonomous robots, such as robotic vacuum cleaners. It is an information-based algorithm that relies on a static, grid-based representation of the environment in which the appliance is intended to travel. The grid map consists of small parts called nodes; each node represents a part of the environment and is classed as either free space or an obstacle. The algorithm uses a heuristic-guided version of Dijkstra's algorithm to find a path from a start node to an end node. The path is chosen in such a way that it incurs the smallest cost, which can be based on time, length or another factor. The benefits of this algorithm are its performance and accuracy, but in order to achieve this the algorithm requires a complete map of the environment, which often means unnecessary data and large data sets.

The A* algorithm can be expanded for use in a dynamic environment and is then often referred to as the D* algorithm. The algorithm can be implemented in multiple ways, but the core idea is common: find a path in the environment from a start node to an end node, assuming the environment to be static during this process in order to find a first path. The appliance then starts to move along the found path, until any abnormality is detected. The initial static map is then updated with the new information, and a new path is generated from the updated map. This process is repeated until the end node is reached or the algorithm determines that it is impossible to reach the end node. [3] and [10] are some examples of path planning algorithms which use the A*/D* algorithms.
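As an illustration of the grid search these planners build on, here is a compact A* implementation over a 4-connected occupancy grid. The grid format (true = obstacle), the unit step cost and the Manhattan-distance heuristic are our own assumptions; they are not taken from [3] or [10].

using System;
using System.Collections.Generic;

static class AStarGridSketch
{
    // Returns a start-to-goal path as a list of cells, or an empty list if
    // the goal is unreachable. Cost is the number of steps; the heuristic is
    // the Manhattan distance, which never overestimates on a 4-connected grid.
    public static List<(int x, int y)> FindPath(bool[,] obstacle, (int x, int y) start, (int x, int y) goal)
    {
        int w = obstacle.GetLength(0), h = obstacle.GetLength(1);
        int H((int x, int y) p) => Math.Abs(p.x - goal.x) + Math.Abs(p.y - goal.y);

        var open = new SortedSet<(int f, int x, int y)>(); // priority queue ordered by f = g + h
        var g = new Dictionary<(int x, int y), int> { [start] = 0 };
        var parent = new Dictionary<(int x, int y), (int x, int y)>();
        var dirs = new (int dx, int dy)[] { (1, 0), (-1, 0), (0, 1), (0, -1) };

        open.Add((H(start), start.x, start.y));
        while (open.Count > 0)
        {
            var (_, cx, cy) = open.Min;
            open.Remove(open.Min);
            if (cx == goal.x && cy == goal.y) break;
            foreach (var (dx, dy) in dirs)
            {
                var n = (x: cx + dx, y: cy + dy);
                if (n.x < 0 || n.y < 0 || n.x >= w || n.y >= h || obstacle[n.x, n.y]) continue;
                int cost = g[(cx, cy)] + 1;
                if (!g.TryGetValue(n, out int old) || cost < old)
                {
                    g[n] = cost;
                    parent[n] = (cx, cy);
                    open.Add((cost + H(n), n.x, n.y));
                }
            }
        }

        // Walk back from the goal to reconstruct the path.
        var path = new List<(int x, int y)>();
        var cur = goal;
        while (true)
        {
            path.Insert(0, cur);
            if (cur.Equals(start)) break;
            if (!parent.TryGetValue(cur, out cur)) return new List<(int x, int y)>();
        }
        return path;
    }
}

A D*-style planner additionally keeps the search state and repairs the path when a cell's obstacle status changes, instead of re-running the whole search from scratch.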

2.4 Map building

The central idea of mapping is to translate a complex and often large environment into a computer-readable file or data set. A common method is to store every object in the mapped environment as a set of vertices; the line segments defined by these vertices form the outline of the object. In robotics, the vertices are often extracted using on-board sensors and cameras during the exploration of the unknown environment. Another approach is to use this input to construct a discretized representation of the environment, which can improve the efficiency of some algorithms and allow faster access to geometry data at specified positions, at the cost of precision.

2.4.1 Mapping with ray-cast data (SLAM)

Simultaneous Localization and Mapping (SLAM) is one of the most common approaches for mapping an unknown environment with an autonomous robot. The core idea of the approach is to use a ray-cast scanning device to create an occupancy grid map of an unknown environment. The general structure of the approach is as follows:

1. Perform a ray-cast scan in all directions from the robot's current location.

2. Identify all frontier cells (explained below) using the information gathered from the ray-cast and all earlier information gathered.

3. Find a path from the current location to one of the frontier cells.

4. Travel along the path to the frontier cell and repeat the process until there are no more frontier cells left in the occupancy grid map.

In the occupancy grid map, each node is marked as either free space, an obstacle, or unexplored. Each node is initially marked as unexplored [2]. After each ray-cast measurement, the distance which the ray traveled is measured in each direction, and all nodes which the rays traveled through are marked as free space. The nodes containing the points where the rays bounced back are marked as obstacles [11]. Frontier cells are nodes in the occupancy grid map marked as free space, with one or more adjacent nodes marked as unexplored. Which frontier cell to travel to in each iteration can be chosen at random or by trying to find the most beneficial one. [5], [9] and [8] are some examples of how this method can be used.
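The frontier-cell definition above translates directly into code. The sketch below scans an occupancy grid for frontier cells; the Cell enum, the method name and the 4-neighbour adjacency are illustrative choices, not taken from the cited papers.

using System.Collections.Generic;

enum Cell { Unexplored, Free, Obstacle }

static class FrontierSketch
{
    // A frontier cell is a free cell with at least one of its four
    // axis-aligned neighbours still marked as unexplored.
    public static List<(int x, int y)> FindFrontiers(Cell[,] map)
    {
        var frontiers = new List<(int x, int y)>();
        int w = map.GetLength(0), h = map.GetLength(1);
        for (int x = 0; x < w; x++)
        {
            for (int y = 0; y < h; y++)
            {
                if (map[x, y] != Cell.Free) continue;
                bool frontier =
                    (x > 0 && map[x - 1, y] == Cell.Unexplored) ||
                    (x < w - 1 && map[x + 1, y] == Cell.Unexplored) ||
                    (y > 0 && map[x, y - 1] == Cell.Unexplored) ||
                    (y < h - 1 && map[x, y + 1] == Cell.Unexplored);
                if (frontier) frontiers.Add((x, y));
            }
        }
        return frontiers;
    }
}

When this scan returns no cells, the map contains no reachable unexplored space and the exploration loop above terminates.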


3 Method

In this section we will present a new mapping algorithm, constructed to solve the mapping and obstacle mobility classification problem.

A simple simulation environment was created in the Unity 3D game engine. The environment contained a mobile robot and obstacles which it collided with. The movement of the robot was simulated in real time. It explored the environment by switching between the previously mentioned random motion and contour-following exploration strategies.

During the simulation, incoming collision events were used to send input to a mapping algorithm. In the mapping algorithm, the environment space was discretized into small cells. In the end result of the algorithm, each cell in the grid map was classified as either obstacle, empty space or unexplored, depending on the recorded collision events at that cell. Additionally, any cell classified as obstacle was also classified as either mobile or immobile.

3.1 Simulation and testing environment

To investigate the problem we implemented a graphical simulation environment using Unity 3D version 2017.3.0f3. Documentation for this version of Unity can be found at [7]. Associated scripts were written in the C# programming language. The simulation environment consisted of multiple 2D scenarios depicting example domestic environments. Each scene contained multiple obstacles, such as walls and furniture, that triggered collisions with the robot.

3.1.1 Obstacle movement

Each obstacle in the scenes had an attached ObstacleBehaviour script. This script controlled the movement of the obstacle and contained methods for checking whether or not the obstacle is mobile. Each obstacle used one of the following three movement patterns:

• Stationary: The obstacle does not move around during the simulation.

• Random: Given two intervals I1 and I2, and a radius r, move the obstacle to a new random location within r distance units of the obstacle's starting position. The movement takes I1 seconds and is repeated after an additional I2 seconds.

• Back-and-forth: Given two intervals I1 and I2, and a vector v, move the obstacle to its starting position offset by v distance units over I1 seconds. After an additional I2 seconds, move back to the obstacle's starting position over I1 seconds. Wait for I2 seconds before repeating the pattern.

Obstacles using the Stationary movement pattern were considered immobile in the verification algorithm. Obstacles using any of the other patterns were considered mobile. Note that this correct set of classifications was only used to verify the results of the mapping algorithm. The mapping algorithm itself did not have any access to the ObstacleBehaviour class.
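As an illustration, the Back-and-forth pattern could be implemented as a Unity coroutine roughly as follows. The class and field names are illustrative; the thesis's actual ObstacleBehaviour script is listed in the appendix.

using System.Collections;
using UnityEngine;

public class BackAndForthSketch : MonoBehaviour
{
    [SerializeField] Vector2 offset = new Vector2(1f, 1f); // the vector v
    [SerializeField] float moveTime = 1f;                  // I1, seconds per move
    [SerializeField] float waitTime = 10f;                 // I2, pause between moves

    IEnumerator Start()
    {
        Vector3 home = transform.position;
        Vector3 away = home + (Vector3)offset;
        while (true)
        {
            yield return StartCoroutine(Move(home, away));  // move out over I1 seconds
            yield return new WaitForSeconds(waitTime);      // rest for I2 seconds
            yield return StartCoroutine(Move(away, home));  // move back over I1 seconds
            yield return new WaitForSeconds(waitTime);
        }
    }

    IEnumerator Move(Vector3 from, Vector3 to)
    {
        for (float t = 0f; t < moveTime; t += Time.deltaTime)
        {
            transform.position = Vector3.Lerp(from, to, t / moveTime);
            yield return null;
        }
        transform.position = to;
    }
}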

3.1.2 Test scenes

3.1.3 Scene 1: regular room

The first scene consisted of a single room containing a chair, a very small stationary object and two large rectangular objects. There was a small wall at the top of the room. The purpose of the scene was to test the algorithm's performance in an environment where most of the obstacles were perfectly aligned with the map matrix's coordinates.

Figure 3: Overview of the regular room scene.

Obstacle mobility:

1. Wall up: stationary
2. Wall down: stationary
3. Wall left: stationary
4. Wall right: stationary
5. Wall mid: stationary
6. Small object: stationary
7. Rectangle: stationary

8. Moving chair: random movement (1.0 distance units over 1 second with 10 second breaks)

9. Moving rectangle: random movement (1.83 distance units over 1 second with 15 second breaks)


3.1.4 Scene 2: messy room

The second scene consisted of two rooms divided by a wall with an open doorway. It contained five chairs and two rectangular objects. The moving objects moved relatively rarely. The purpose of the scene was to test the algorithm's performance in an unorganized environment filled with many moving obstacles.

Figure 4: Overview of the messy room scene.

Obstacle mobility:

1. Wall up: stationary
2. Wall down: stationary
3. Wall left: stationary
4. Wall right: stationary
5. Wall mid 1: stationary
6. Wall mid 2: stationary
7. Chair 1: stationary
8. Chair 2: stationary
9. Cube 1: stationary
10. Cube 2: stationary

11. Moving chair 1: random movement (1.0 distance units over 1 second with 30 second breaks)


12. Moving chair 2: random movement (1.0 distance units over 1 second with 30 second breaks)

13. Moving chair 3: random movement (1.0 distance units over 1 second with 30 second breaks)

3.1.5 Scene 3: hallway

The third scene consisted of a tight hallway with two closed doorways. The hallway contained a chair and a moving rectangular object. The purpose of the scene was to test a common room layout, as well as testing the special case where an object moves along a stationary wall.

Figure 5: Overview of the hallway scene.

Obstacle mobility:

1. Wall up 1: stationary
2. Wall up 2: stationary
3. Wall up 3: stationary
4. Wall down 1: stationary
5. Wall down 2: stationary
6. Wall down 3: stationary
7. Wall left: stationary
8. Wall right: stationary

9. Chair: stationary

10. Moving rectangle: back-and-forth (1.414 distance units over 1 second with 10 second breaks)

3.2 Robot sensory limitations

During the simulation, the robot object had access to its current rotation and position in global plane coordinates. The robot was also able to perfectly detect any incoming collisions with obstacles in 2D space. From each collision event, the robot was able to extract information about the location of the collision's contact point (in global coordinates) as well as the colliding surface's normal vector.

3.3 Exploration strategy

The robot was placed in an unknown environment and did not have any initial information about the environment's geometry. The only information that was present at the start of the exploration was the robot's starting position and rotation in global coordinates. The robot used the previously mentioned random motion and contour following motion patterns to explore the environment.

The robot started by using the random motion mode. The robot switched from random motion to contour-following mode when it collided with an obstacle after running random motion mode for a certain duration. Similarly, the robot switched from contour-following back to random motion mode after running contour-following mode for a certain duration. If the robot did not encounter any obstacles during this time, it switched algorithms sooner.
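A sketch of this mode-switching logic, using the timing parameters listed in section 3.5 (minimum random-mode time 2 s, maximum contour-mode time 3 s, or 1 s without collisions); the class structure and names are our own illustration.

using UnityEngine;

public class ModeSwitchSketch : MonoBehaviour
{
    enum Mode { Random, ContourFollowing }

    [SerializeField] float minRandomTime = 2f;       // switch on the next collision after this
    [SerializeField] float maxContourTime = 3f;      // hard limit for contour-following mode
    [SerializeField] float maxContourTimeNoHit = 1f; // earlier switch if nothing is hit

    Mode mode = Mode.Random;
    float modeTime;           // time spent in the current mode
    float timeSinceCollision;

    void Update()
    {
        modeTime += Time.deltaTime;
        timeSinceCollision += Time.deltaTime;
        // Contour-following ends after its time budget, or sooner if the
        // robot has stopped encountering obstacles.
        if (mode == Mode.ContourFollowing &&
            (modeTime > maxContourTime || timeSinceCollision > maxContourTimeNoHit))
            Switch(Mode.Random);
        // The movement behaviour of the active mode would be driven here.
    }

    void OnCollisionEnter2D(Collision2D collision)
    {
        timeSinceCollision = 0f;
        // Random motion ends on the first collision after its minimum duration.
        if (mode == Mode.Random && modeTime > minRandomTime)
            Switch(Mode.ContourFollowing);
    }

    void Switch(Mode next)
    {
        mode = next;
        modeTime = 0f;
    }
}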

During the simulation, the position of the robot was continuously sent to the mapping algorithm. Also, any time a collision occurred, information about the position of the collision was sent to the mapping algorithm.

3.4 Mapping algorithm

The map generation program in the simulation used the following key classes:

• MapObstacle: Object representing what the mapping algorithm interprets as a coherent obstacle in the simulation environment.

• MapPoint: Object representing a single discretized cell in the generated grid map. The object stores the cell's current obstacle classification (obstacle, empty space or unexplored), the cell's current local mobility estimate, as well as a pointer to the MapObstacle it is considered to be part of.

• DiscreteMap: The generated collision-based grid map. It represents a discretized section of the global coordinate plane and stores a quadratic matrix of MapPoints.
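A minimal skeleton of the first two classes, with members inferred from the descriptions above and from the pseudo-code in section 3.4.1; the full implementations are listed in the appendix, and DiscreteMap itself, which stores the quadratic MapPoint matrix, is reproduced in full in appendix 8.1.1.

using System.Collections.Generic;

public class MapObstacle
{
    // Final mobility estimate for the whole coherent obstacle.
    public float mobilityEstimate;
}

public class MapPoint
{
    public enum PointType { unexplored, empty, obstacle }

    public PointType type = PointType.unexplored;   // current classification of the cell
    public float lastHitTime;                       // last time the mobility data was modified
    public MapObstacle group;                       // coherent obstacle the cell is assigned to
    public Queue<bool> history = new Queue<bool>(); // most recent collision statuses
}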

The program stored information about previously encountered collisions as MapPoints in a quadratic matrix m in the DiscreteMap. The matrix was a discretized representation of a limited section of the global coordinate plane.


A collision event at the position (x, y) in continuous global space updated the matrix data at m[round(s · x), round(s · y)], where s is the specific scaling of m relative to global coordinates and round(a) returns a rounded off to the closest integer.
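This is the conversion performed by DiscreteMap.VectorToMapCoordinates in the appendix: the continuous position is scaled first and then rounded to the nearest cell index. A standalone sketch (the helper name is illustrative):

using UnityEngine;

static class GridMappingSketch
{
    // Convert a continuous world position to a cell index in the map matrix,
    // i.e. m[round(s · x), round(s · y)] with s = the map's distance scaling.
    public static Vector2Int ToCell(Vector2 worldPosition, float scale)
    {
        return new Vector2Int(
            Mathf.RoundToInt(worldPosition.x * scale),
            Mathf.RoundToInt(worldPosition.y * scale));
    }
}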

Each MapPoint object in the matrix contained a queue of boolean values and a pointer to a MapObstacle. The queue stored a constant number of the most recent collision statuses at the point. Every time a new value was added to the queue, the oldest value was removed. When a collision was registered at the point, true was added to the queue. When the robot moved over the point without registering a collision, the queue was completely filled with false. A built-in timer prevented the queue from being modified more than once every few seconds, meaning that the robot had to return to the position at a later time to modify the collision data again. The local mobility estimate of the point was calculated as the percentage of values in the queue that were false.

After the robot had explored the environment for a specific duration, the generated map was post-processed. The program iterated through each MapPoint that was registered as an obstacle. From each MapPoint that had not been previously visited during this process, a breadth-first search (BFS) was performed to find the coherent obstacle containing that point. Edges in the BFS were drawn to any point that was registered as an obstacle and was horizontally, vertically or diagonally adjacent to the current point. For each search, a new MapObstacle was created and every MapPoint that was found by the search was assigned to that obstacle. Every MapObstacle that was created during this process was assigned a final mobility estimate equal to the average mobility estimate of the points in the obstacle. Any MapObstacle with a mobility estimate that exceeded a specific constant, referred to as MobilityLimit in this report, was considered mobile.

3.4.1 Pseudo-code

The following pseudo-code segments present the key components of the mapping algorithm. Note that the actual implementation was done using an object-oriented approach in C#. The program code can be found in the appendix. In the pseudo-code, the classes used have been simplified to only be containers of data. The class property x of an object o is denoted o.x.

A MapPoint-object has the following properties:

• type: specifies whether the MapPoint is currently recognized as an obstacle, empty space, or if it is unexplored.

• lastHitTime: stores the last time the mobility estimate or type of the MapPoint was modified.

• group: the MapObstacle object pointer that the MapPoint is considered part of.

• history: a queue storing a fixed number of the most recent boolean collision statuses registered at the point.

A MapObstacle-object only has a single property:

• mobilityEstimate: final mobility estimate of the MapObstacle, and thereby all MapPoints assigned to that obstacle.

MarkObstacle MarkObstacle is called when the robot detects a collision. It marks the MapPoint at the collision’s contact point (x, y) in the matrix P as an obstacle. If the MapPoint was already marked as obstacle, the mobility estimate of the MapPoint is decreased.

Input: MapPoint-matrix P, integers (x, y) within the boundaries of P, current time t.

procedure MarkObstacle(P, (x, y), t)
    if P[x][y].type = obstacle then
        AddHitValue(P[x][y], true, t)
    else
        P[x][y].type ← obstacle
        P[x][y].lastHitTime ← t

VisitCircle VisitCircle is called regularly during exploration to mark any MapPoints under the robot as empty space. This is done by finding all MapPoints in P in a circle with radius r around the robot's position (x, y) and setting their type to empty.

Input: MapPoint-matrix P, integers (x, y) within the boundaries of P, radius r, current time t.

procedure VisitCircle(P, (x, y), r, t)
    for i ← x − r to x + r do
        for j ← y − r to y + r do
            d ← distance between (x, y) and (i, j)
            if d ≤ r then
                AddHitValue(P[i][j], false, t)
                P[i][j].type ← empty
                P[i][j].group ← none

AssignGroups AssignGroups finds coherent obstacles in P by grouping together any adjacent MapPoints with the type obstacle. MapPoints in the same adjacency group are assigned the same MapObstacle. The mobility estimate of a MapObstacle is the average local mobility estimate of the MapPoints assigned to that MapObstacle. The final mobility estimate of a MapPoint is simply the mobility estimate of its MapObstacle.

Input: MapPoint-matrix P with n rows and columns.

procedure AssignGroups(P, n)
    R ← all MapPoints in P
    for i ← 0 to n − 1 do
        for j ← 0 to n − 1 do
            if P[i][j].type = obstacle and R contains P[i][j] then
                G ← MatrixBFS(P, (i, j))
                o ← new MapObstacle
                foreach p in G do
                    p.group ← o
                    o.mobilityEstimate ← o.mobilityEstimate + GetMobilityEstimate(p)/G.size
                    remove p from R

MatrixBFS MatrixBFS uses a variant of BFS to find a coherent group of obstacle points and return it. Edges in the BFS consist of any points that are horizontally, vertically or diagonally adjacent to the current point.

Input: MapPoint-matrix P, integers (xstart, ystart) within the boundaries of P.

procedure MatrixBFS(P, (xstart, ystart))
    V ← {P[xstart][ystart]}
    Q ← empty queue
    Enqueue(Q, (xstart, ystart))
    while Q is not empty do
        (cx, cy) ← Dequeue(Q)
        for i ← −1 to 1 do
            for j ← −1 to 1 do
                x ← cx + i
                y ← cy + j
                if (x, y) is within the boundaries of P and P[x][y].type = obstacle and V does not contain P[x][y] then
                    Enqueue(Q, (x, y))
                    add P[x][y] to V
    return V

AddHitValue AddHitValue is used to change the local mobility estimate of a MapPoint p. If a collision was triggered at p, b should be true. If p was visited without triggering a collision, b should be false. The function prevents the mobility estimate from being modified more than once every c seconds.

Input: MapPoint-instance p, boolean value b, current time t.

procedure AddHitValue(p, b, t)
    if t − p.lastHitTime < c then
        return
    p.lastHitTime ← t
    if p.type = obstacle then
        Dequeue(p.history)
        Enqueue(p.history, b)
    else
        fill p.history with false

GetMobilityEstimate GetMobilityEstimate calculates and returns the local mobility estimate of a MapPoint.

Input: MapPoint-instance p.

procedure GetMobilityEstimate(p)
    count ← 0
    foreach b in p.history do
        if b = false then
            count ← count + 1
    return count / p.history.size

3.5 Simulation parameters

This section contains a list of robot movement and environment mapping parameters that could be changed in the simulation. All results were generated using the same values for all parameters except for Mobility limit, where multiple values were tested. The values in parentheses after the parameter names are the values used to generate the data for the result section.

Robot movement parameters:

• Radius (0.32): Radius of the robot's circular collision component.

• Speed (8): Movement speed of the robot, in Unity distance units per second.

• Rotation speed (140): Angular speed of the robot when using the contour-following movement mode, in degrees per second.

• Contour-following angle (0): Angle outwards from the edge of the wall that the robot rotates to, upon colliding with an obstacle in the contour-following mode.

• Minimum random-mode time (2): Minimum time that the robot uses the random motion mode before switching mode on the next collision.

• Maximum contour-mode time (3): Maximum time that the robot uses the contour-following mode before switching mode.

• Maximum contour-mode time without collisions (1): Maximum time that the robot uses the contour-following mode before switching mode, if no collisions are detected.


Mapping algorithm parameters:

• Rows and columns (128): Number of rows and columns in the mapping algorithm's quadratic MapPoint-matrix.

• Distance scaling (6): Map scale in relation to Unity distance units.

• Robot radius factor (0.72): Factor to scale the robot radius with when marking empty space under the robot. Note that the radius is also scaled with Distance scaling.

• History length (2): Number of values in each MapPoint's history queue.

• Minimum time between updates (3): The minimum time that has to pass between modifications of the mobility data of a MapPoint.

• Mobility limit (0.15, 0.3, 0.45, 0.6, 0.85): The limit where an obstacle cell is considered mobile. If the final mobility estimate of a cell exceeds this limit, the obstacle is classified as mobile. Otherwise, it is classified as immobile.

3.6 Verification

To verify the correctness of a set of obstacle classifications, the simulation program kept track of the actual last encountered obstacle object at each point. This was done by storing the obstacle's attached ObstacleBehaviour object each time a collision occurred.

The ObstacleBehaviour objects were stored in a matrix that corresponded to the DiscreteMap's MapPoint-matrix. The object's position in the matrix was based on the collision's position in global coordinates, and the same ObstacleBehaviour object pointer could be stored at multiple positions in the matrix. When the verification was executed, the verification program iterated through each MapPoint in the DiscreteMap that was considered an obstacle. It compared the mobility classification of the MapPoint's assigned MapObstacle to the actual obstacle's mobility classification by fetching the ObstacleBehaviour object at the same position in the verificator's ObstacleBehaviour-matrix. If these were equal, the point was deemed correctly classified. Note that the classification of an obstacle could be deemed partially correct if only some of the corresponding MapPoints were correctly classified.

To estimate the overall correctness of the algorithm, the average correctness of all obstacles was calculated. It was also useful to compile the average correctness for mobile and immobile obstacles separately, since the most common category could otherwise skew the data. Note that the verification process only took into account the average correctness of each obstacle classification. It did not account for different sizes of obstacles or how well explored each obstacle was.
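The correctness measure described above amounts to comparing, for each explored obstacle cell, the predicted mobility class against the recorded ground truth, and then averaging per obstacle. A minimal sketch with illustrative names, reduced to plain values instead of the Verificator's MapPoint and ObstacleBehaviour lookups:

using System.Collections.Generic;

static class VerificationSketch
{
    // True if the predicted mobility class of one obstacle cell matches the
    // ground truth recorded by the simulation for that cell.
    public static bool IsCellCorrect(float groupMobilityEstimate, float mobilityLimit, bool actuallyMobile)
    {
        bool predictedMobile = groupMobilityEstimate > mobilityLimit;
        return predictedMobile == actuallyMobile;
    }

    // Average correctness over the explored cells of a single obstacle; an
    // obstacle can therefore be deemed partially correctly classified.
    public static float ObstacleCorrectness(
        IReadOnlyList<(float estimate, bool actuallyMobile)> cells, float mobilityLimit)
    {
        if (cells.Count == 0) return 0f;
        int correct = 0;
        foreach (var (estimate, actuallyMobile) in cells)
            if (IsCellCorrect(estimate, mobilityLimit, actuallyMobile)) correct++;
        return (float)correct / cells.Count;
    }
}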


4 Results

The purpose of this section is to evaluate the effectiveness of the mapping algorithm presented in the previous section. The results were generated by running the simulation in real time and printing logs containing output from the previously mentioned verification algorithm. The tests were generated using different MobilityLimit values, which specify the limit above which an obstacle is considered mobile.

The result generation was divided into two parts:

• Result set 1 compared the accuracy of classifying mobile and immobile obstacles, respectively, in each scene, using different MobilityLimit values.

• Result set 2 compared the accuracy of classifying the mobility of each specific obstacle in every scene, using different MobilityLimit values.

4.1 Algorithm visualization

Figure 6 shows an example visualization of the obstacle point mobility estimation in scene 2. In the simulation GUI, points with a high mobility estimate are visualized with a bright pink color, while points with a low mobility estimate are blue. Points with a mobility estimate near 0.5 are purple. Note that the mobility estimates are not the final classifications; the mobility estimates vary between 0.0 and 1.0, and an obstacle point is classified as mobile if and only if its mobility estimate exceeds a specific MobilityLimit value.

Figure 6: Screenshot showcasing the algorithm’s obstacle mobility estimation in scene 2.


4.2 Result set 1: Classifying mobile and immobile obstacles

Result set 1 aimed to evaluate the accuracy of classifying mobile and immobile obstacles respectively, using different MobilityLimit values. A low limit improves the accuracy of classifying mobile obstacles at the cost of accuracy when classifying immobile obstacles, and vice versa. It is therefore relevant to find a balanced MobilityLimit value that provides a high success rate when classifying both types of obstacles.

The results were generated for each of the three test scenes, using MobilityLimit values 0.15, 0.30, 0.45, 0.6 and 0.85. Figures 7, 9 and 11 show the average correct mobile and immobile classification respectively for each scene, using different limits. The y-axis is the average correctness using the specified limit and the x-axis is the verification iteration number. One verification was performed every 10 seconds, meaning that verification 30 occurred after 300 seconds.

The graphs were created using Gnuplot version 5.0 [6]. The curves were fitted using the built-in fit function in Gnuplot, which uses the nonlinear least-squares Marquardt-Levenberg algorithm. The curves were fitted to an x/(x + a) function, chosen because this function passes through the origin and asymptotically approaches 1. The fit function used a sample of 200 simulation values for each scene and MobilityLimit.

Figure 7: Average correct (a) immobile and (b) mobile classification in scene 1.

Figure 8: Average correct classification of all obstacles in scene 1.

Figure 9: Average correct (a) immobile and (b) mobile classification in scene 2.

Figure 10: Average correct classification of all obstacles in scene 2.

Figure 11: Average correct (a) immobile and (b) mobile classification in scene 3.

Figure 12: Average correct classification of all obstacles in scene 3.

Figure 13: Average correct (a) immobile and (b) mobile classification in all scenes together.

Figure 14: Average correct classification of all obstacles in all scenes together.

4.3 Result set 2: Classifying specific obstacles

Result set 2 aimed to evaluate the accuracy of classifying each specific obstacle in the scenes. This was useful to find what sorts of obstacles the algorithm had trouble classifying.

The results were generated for each of the three test scenes, using MobilityLimit values 0.15, 0.30, 0.45, 0.6 and 0.85. Figures 15, 16 and 17 show the average correct classification for each specific obstacle in each scene after running the simulation for 300 seconds, using different limits. Each average value in the graphs was calculated from a sample of 100 simulation values.

Figure 15: Chart showing the obstacle classification performance of scene 1.

Figure 16: Chart showing the obstacle classification performance of scene 2.

Figure 17: Chart showing the obstacle classification performance of scene 3.

5 Discussion

In this section we will discuss the accuracy of the results and the problems they reveal. We will also discuss some known limitations that affect the validity of the results and the practical usefulness of the presented mapping algorithm.

5.1 Result analysis

5.1.1 Discretization problems

The results show that the algorithm, given enough time, could produce quite accurate mobility classifications of the obstacles in the given test scenes. However, there were special cases where the algorithm had problems classifying the obstacles. Most of these problems were side effects of using a discretization of world space to represent the map. For example, the results show that the algorithm had trouble classifying the obstacle Right wall in scene 1 (see figure 15). This was likely because the wall was perfectly aligned with the y-axis of the discretized map, combined with the colliding edge being positioned midway between two discretized points on the x-axis. This caused the program to first trigger a collision at the points closest to the wall, marking them as obstacles, and then include those same points in the space-visiting procedure, marking them as empty space. The result was that Right wall in many cases was classified as mobile. A similar issue occurred when exploring many angled obstacle geometries. However, that problem was reduced by setting the final mobility estimate of each point to the average local mobility estimate of all points in the same coherent obstacle.

A highly inaccurate obstacle classification can be found in figure 17, where Moving rectangle was mostly incorrectly classified for mobility limits above 0.15. This obstacle moved closely along a stationary wall. Due to its close proximity to the wall, the mapping algorithm sometimes considered Moving rectangle to be part of the same obstacle as the wall, which caused it to be incorrectly classified. Another contributing factor to this incorrect classification was that the obstacle moved such a short distance that it permanently overlapped some of the discretized cells. Consequently, these cells were categorized as immobile.

5.1.2 Exploration effectiveness

The algorithm correctly classified most walls, but had less success classifying small stationary objects in the middle of the scenes, as can be seen in figures 15 and 16. Examples include Rectangle in scene 1 and Chair 1, Chair 2, Cube 1 and Cube 2 in scene 2. This may be because the robot uses very simple exploration strategies that are better suited for finding and exploring walls. Random motion has a high chance of finding walls due to their large collision boundaries, and contour following is well suited both for exploring walls and for finding adjacent walls. The robot is significantly less likely to find a small isolated obstacle in the middle of a room. One idea for finding such obstacles was to use a spiral motion. This was an idea which we discarded due to being more suitable for path coverage rather than exploration, but the spiral motion could compensate for the lack of long-range sensors. For further studies it could be interesting to find how different exploration strategies impact the effectiveness of the mapping algorithm.

5.1.3 Mobility limit

Looking at the average correct classification in figure 13, it seems that a very high MobilityLimit value, such as 0.85, provides better classifications. However, that result is greatly influenced by the fact that the immobile obstacles in the example scenes greatly outnumber the mobile obstacles. Looking at the results from scene 2 (figure 9), where the obstacles move less frequently than in the other scenes, the accuracy of classifying mobile obstacles decreases significantly with higher limits. In that specific scene, a suitable mobility limit for achieving accurate classifications of both types of obstacles seems to be between 0.30 and 0.45. A suitable MobilityLimit for achieving a good average result for both categories in all scenes seems to be 0.6, as can be seen in figure 13. This results in an average correctness of around 80% for both categories.

It should also be noted that the accuracy of classifying immobile obstacles increased noticeably during the entire simulation runs. This is because newly detected obstacle points are classified as mobile by default, and their mobility estimates decrease when they are repeatedly bumped into. As a consequence, longer simulation runs provide better immobile classifications. Figure 13 (a) indicates that the average correct immobile classification was not yet close to a maximum when the simulations were terminated, especially for lower mobility limits. For further studies it would be interesting to find the average simulation time until the accuracy stops increasing.

5.2 Known limitations

5.2.1 Assumed perfect localization

One of the greatest limitations of the presented solution is that it is dependent on perfect localization of the robot. The algorithm requires information about the exact position of the robot in relation to its starting point. It also requires the exact positions of any collisions that are registered. One way that localization could be implemented is by continuously monitoring the distance the robot travels from its starting position and tracking its rotation. The problem with using that localization method for this algorithm is that the positional and rotational error would most likely grow during the exploration process, which would cause the mapping algorithm to detect small movements from all detected obstacles. This would significantly worsen the algorithm's ability to classify obstacle mobility.


5.2.2 Reliance on chance

While the mapping algorithm itself is deterministic, the movement of the robot in the simulation relied on random number generation. Since the robot movement is essential for the effectiveness of the mapping algorithm, this means that the generated results were greatly influenced by random variation. To reduce this problem, most of the results were presented using the average of at least 100 simulation result values. However, more reliable results could have been generated using a larger sample size.

5.2.3 Search algorithm for finding coherent obstacles

The algorithm uses a variant of breadth-first-search to find coherent obstacles in the discretized point matrix. A problem with using this method to find coherent obstacles is that the minimum distance for two points to be considered connected becomes heavily dependent on the resolution of the discretized map. For example, if the matrix size 256 × 256 is used instead of 128 × 128, not only will the level of detail in the map geometry be greater, but some points that were previously interpreted as being part of the same obstacle will be split. This makes it very difficult for the algorithm to find coherent obstacles in high-resolution maps. Ideally it would be possible to configure a limit on how far apart two points can be and still be considered part of the same obstacle. However, this is not possible using the current BFS-based solution without making it significantly slower.

5.2.4 Potentially unrealistic robot movement

During the test runs, the simulation used a slightly unrealistic value for the contour-following angle parameter. This caused the robot to always follow the wall perfectly without really "bouncing off" the surface. It allowed the simulated robot to find contours very effectively, which greatly increased the speed of the map generation compared to using a rougher contour-following pattern.

5.2.5 Mobility estimation outside the simulation

One of the main problems encountered when constructing the algorithm and test cases was which degrees of mobility the robot should be able to detect. Another uncertainty was over what period of time the map should be generated. In the end result of the algorithm, each discretized obstacle point is classified as either mobile or stationary. The result is verified by checking if the obstacle in the simulation environment has a behavior that causes it to move. However, in a real world situation, the limit where a human would perceive an object as mobile would most likely depend on the period of time over which it is observed. An object such as a chair may be immobile over the course of a minute or an hour, but mobile over the course of a day.


5.2.6 Verification method

Another limitation was that the verification algorithm did not keep track of how well an obstacle had been explored. If only one point of the obstacle had been explored, and that specific point was correctly classified, that obstacle was deemed 100% correctly classified. Since all newly encountered obstacles were classified as mobile by default, this can make the results a bit misleading. Also note that an obstacle’s impact on the results in the verification algorithm was not weighted by its size. A small obstacle being incorrectly classified had the exact same impact on the average correctness as a large obstacle in the same room.


6 Conclusion

The results show that obstacles in a dynamic domestic environment can be mapped and classified based on mobility with an average correctness of around 80%, using only localization data and collision detection data from a randomly moving circular robot as input. This can be done using a discretized map of the environment which records collision events at each cell in the map. The accuracy of the mobility classification in this study varied and was tied to the obstacle's shape, position in relation to other obstacles, and alignment to the map's coordinate axes. Mobile obstacles that moved less frequently were generally slightly harder to accurately classify than frequently moving obstacles.


7 References

[1] F. Cherni, Y. Boutereaa, C. Rekik, and N. Derbel. Autonomous mobile robot navigation algorithm for planning collision-free path designed in dynamic environments. In Electrical and Electronics Engineering Conference (JIEEEC), 2015 9th Jordanian International, pages 1–6. IEEE, 2015.

[2] E. Galceran and M. Carreras. A survey on coverage path planning for robotics. Robotics and Autonomous Systems, 61(12):1258–1276, 2013.

[3] M. Ganeshmurthy and G. Suresh. Path planning algorithm for autonomous mobile robot in dynamic environment. In Signal Processing, Communication and Networking (ICSCN), 2015 3rd International Conference on, pages 1–6. IEEE, 2015.

[4] K. M. Hasan, K. J. Reza, et al. Path planning algorithm development for autonomous vacuum cleaner robots. In Informatics, Electronics & Vision (ICIEV), 2014 International Conference on, pages 1–6. IEEE, 2014.

[5] K. Lenac, A. Kitanov, I. Maurović, M. Dakulović, and I. Petrović. Fast active SLAM for accurate and complete coverage mapping of unknown environments. In Intelligent Autonomous Systems 13, pages 415–428. Springer, 2016.

[6] T. Williams, C. Kelley, and many others. Gnuplot 5.0: an interactive plotting program, January 2015.

[7] Unity 2017.4 documentation, 2018.

[8] R. Valencia and J. Andrade-Cetto. Active pose SLAM. In Mapping, Planning and Exploration with Pose SLAM, pages 89–108. Springer, 2018.

[9] J. Vallvé and J. Andrade-Cetto. Active pose SLAM with RRT. In Robotics and Automation (ICRA), 2015 IEEE International Conference on, pages 2167–2173. IEEE, 2015.

[10] J. van den Berg, D. Ferguson, and J. Kuffner. Anytime path planning and replanning in dynamic environments. In Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on, pages 2366–2371. IEEE, 2006.

[11] Y. Yan and Y. Li. Mobile robot autonomous path planning based on fuzzy logic and filter smoothing in dynamic environment. In Intelligent Control and Automation (WCICA), 2016 12th World Congress on, pages 1479–1484. IEEE, 2016.


8 Appendix

8.1 Program code

8.1.1 DiscreteMap.cs

/*
    The DiscreteMap class contains a matrix of MapPoints which represents a discretized
    section of the global coordinate plane. It contains methods used for retrieving data
    and registering collision events at specific positions in the map.
*/
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.Text;
using System;
using System.Linq;

public class DiscreteMap
{
    private MapPoint[][] points;

    public int side { get; private set; }    // Note: the matrix is quadratic; columns = rows.
    public float scale { get; private set; } // The map is [scale] times larger than world space coordinates.

    public DiscreteMap(int rows, float scale)
    {
        this.side = rows;
        this.scale = scale;
        points = new MapPoint[rows][];
        for (int i = 0; i < points.Length; i++)
        {
            points[i] = new MapPoint[rows];
        }
        for (int i = 0; i < rows; i++)
        {
            for (int j = 0; j < rows; j++)
            {
                points[i][j] = new MapPoint();
            }
        }
    }

    /// Get the MapPoint data at a specific position.
    public MapPoint this[int x, int y]
    {
        get { return points[x][y]; }
    }

    /// Mark points under the robot as empty space.
    public void VisitCircle(Vector2Int center, float scaledRadius, float time)
    {
        var dRadius = Round(scaledRadius);
        for (int x = center.x - dRadius; x <= center.x + dRadius; x++)
        {
            for (int y = center.y - dRadius; y <= center.y + dRadius; y++)
            {
                if (IsWithinBounds(x, y))
                {
                    float xDiff = x - center.x;
                    float yDiff = y - center.y;
                    // If the point is within the circle, mark it as empty space.
                    float distance = Mathf.Sqrt(xDiff * xDiff + yDiff * yDiff);
                    if (distance <= scaledRadius)
                    {
                        points[x][y].AddHitValue(false, time);
                        points[x][y].type = MapPoint.PointType.empty;
                        points[x][y].group = null;
                    }
                }
            }
        }
    }

    /// Register a collision at a position.
    public void MarkObstacle(Vector2Int position, float time)
    {
        if (IsWithinBounds(position.x, position.y))
        {
            if (points[position.x][position.y].IsObstacle())
            {
                // Reconstructed across a page break in the source: a repeated hit on a
                // known obstacle point presumably updates its mobility history.
                points[position.x][position.y].AddHitValue(true, time);
            }
            else
            {
                points[position.x][position.y].type = MapPoint.PointType.obstacle;
                points[position.x][position.y].lastHitTime = time;
                points[position.x][position.y].RefreshData();
            }
        }
    }

    /// Check if a discretized position is within the map bounds.
    public bool IsWithinBounds(int x, int y)
    {
        return x >= 0 && y >= 0 && x < side && y < side;
    }

    /// Find a coherent obstacle, starting from points[x][y], using BFS.
    /// Assumes that (x, y) is an obstacle point.
    List<MapPoint> MatrixBFS(int x, int y)
    {
        Queue<Vector2Int> q = new Queue<Vector2Int>();
        List<MapPoint> visited = new List<MapPoint>();
        visited.Add(points[x][y]);
        q.Enqueue(new Vector2Int(x, y));
        while (!(q.Count == 0))
        {
            var current = q.Dequeue();
            // Expand to all eight neighbouring cells.
            for (int i = -1; i <= 1; i++)
            {
                for (int j = -1; j <= 1; j++)
                {
                    var cx = current.x + i;
                    var cy = current.y + j;
                    if (IsWithinBounds(cx, cy) && points[cx][cy].IsObstacle() && !visited.Contains(points[cx][cy]))
                    {
                        q.Enqueue(new Vector2Int(cx, cy));
                        visited.Add(points[cx][cy]);
                    }
                }
            }
        }
        return visited;
    }

    /// AssignGroups modifies the map by finding coherent obstacles and grouping together
    /// any adjacent MapPoints with the type "obstacle".
    public void AssignGroups()
    {
        List<MapPoint> remainingPoints = new List<MapPoint>();
        for (int i = 0; i < side; i++)
        {
            remainingPoints.AddRange(points[i]);
        }
        for (int i = 0; i < side; i++)
        {
            for (int j = 0; j < side; j++)
            {
                if (points[i][j].IsObstacle() && remainingPoints.Contains(points[i][j]))
                {
                    var group = MatrixBFS(i, j);
                    MapObstacle obstacle = new MapObstacle();
                    foreach (var item in group)
                    {
                        item.group = obstacle;
                        // The obstacle's estimate is the mean of its points' local estimates.
                        obstacle.mobilityEstimate += item.GetMobilityEstimate() / group.Count;
                        remainingPoints.Remove(item);
                    }
                }
            }
        }
        for (int i = 0; i < side; i++)
        {
            for (int j = 0; j < side; j++)
            {
                points[i][j].RefreshData();
            }
        }
    }

    public static int Round(float value)
    {
        return (int)Mathf.Round(value);
    }

    public Vector2Int VectorToMapCoordinates(Vector2 value)
    {
        return Round(value * scale);
    }

    public static Vector2Int Round(Vector2 value)
    {
        return new Vector2Int(Round(value.x), Round(value.y));
    }

}
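The listing above can be exercised on its own. The following short sketch is not part of the original program; it only illustrates the intended call sequence inside a MonoBehaviour during play mode, using the same map dimensions as MapGenerator.cs below and otherwise illustrative coordinates and threshold.

    // Minimal usage sketch for DiscreteMap (illustrative values only).
    var map = new DiscreteMap(128, 6);               // 128x128 cells, 6 cells per world unit

    // The robot reports a collision at world position (2.5, 3.0)...
    var hit = map.VectorToMapCoordinates(new Vector2(2.5f, 3.0f));
    map.MarkObstacle(hit, Time.time);

    // ...and later drives over world position (5, 5) without colliding.
    map.VisitCircle(map.VectorToMapCoordinates(new Vector2(5f, 5f)), 0.3f * map.scale, Time.time);

    // Cluster adjacent obstacle cells into MapObstacles and read the classification.
    map.AssignGroups();
    if (map[hit.x, hit.y].IsObstacle())
    {
        bool mobile = map[hit.x, hit.y].group.IsMobile(0.3f); // 0.3 = mobilityLimitSingle below
    }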

8.1.2 MapGenerator.cs

/*
    MapGenerator is a MonoBehaviour which should be attached to the robot object in the
    scene. It is used to pass data from the robot to the DiscreteMap and the Verificator.
*/
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.SceneManagement;

public class MapGenerator : MonoBehaviour
{
    enum ResultMode { result1, result2 }

    [SerializeField] ResultMode resultMode;
    float mobilityLimitSingle = 0.3f;
    float[] mobilityLimitMultiple = new float[] { 0.15f, 0.3f, 0.45f, 0.6f, 0.85f };
    [SerializeField] bool autoVerify;
    [SerializeField] float autoVerifyInterval = 10;
    [SerializeField] bool runApplicationInBackground;
    [SerializeField] int verificationsPerRun = 10;
    int currentVerification;

    public DiscreteMap dMap = new DiscreteMap(128, 6);
    public Verificator verificator;
    float timer;

    /// Start is called at the beginning of the simulation.
    void Start()
    {
        verificator = new Verificator(dMap);
        Application.runInBackground = runApplicationInBackground;
    }

    /// Update is called every frame during the simulation.
    void Update()
    {
        // Check input. Mainly used for debugging.
        if (Input.GetKeyDown("r")) { dMap.AssignGroups(); }
        if (Input.GetKeyDown("v")) { verificator.RunVerification(mobilityLimitSingle); }
        if (Input.GetKeyDown("a")) { dMap.AssignGroups(); verificator.RunVerification(mobilityLimitSingle); }

        if (autoVerify && autoVerifyInterval >= 5)
        {
            timer += Time.deltaTime;
            if (timer >= autoVerifyInterval)
            {
                timer = 0;
                dMap.AssignGroups();
                if (resultMode == ResultMode.result1)
                {
                    verificator.RunVerification(mobilityLimitMultiple);
                    verificator.PrintResult1();
                    currentVerification++;
                    if (currentVerification >= verificationsPerRun)
                    {
                        SceneManager.LoadScene(SceneManager.GetActiveScene().name);
                    }
                }
                else if (resultMode == ResultMode.result2)
                {
                    // The start of this branch was lost across a page break in the source;
                    // judging by the brace structure it presumably mirrored the result1
                    // branch before reloading the scene.
                    currentVerification++;
                    if (currentVerification >= verificationsPerRun)
                    {
                        SceneManager.LoadScene(SceneManager.GetActiveScene().name);
                    }
                }
            }
        }
    }

    public void VisitPosition(Vector2 center, float radius)
    {
        dMap.VisitCircle(dMap.VectorToMapCoordinates(center), radius * dMap.scale, Time.deltaTime);
    }

    public void TriggerCollision(ContactPoint2D point)
    {
        dMap.MarkObstacle(dMap.VectorToMapCoordinates(point.point), Time.time);
        verificator.MarkObstacle(dMap.VectorToMapCoordinates(point.point), point.collider.gameObject);
    }
}

8.1.3 MapObstacle.cs

/*
    MapObstacle represents a coherent obstacle in the DiscreteMap. Note that a MapObstacle
    does not contain pointers to the MapPoints it consists of. Instead, each MapPoint has
    a pointer to its MapObstacle.
*/
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class MapObstacle
{
    public float mobilityEstimate = 0;

    public bool IsMobile(float mobilityLimit)
    {
        return mobilityEstimate > mobilityLimit;
    }

}
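Since the estimate is the mean of the points' local estimates, the same obstacle can flip between classes depending on the limit. A small sketch, not part of the original program, using the thresholds from MapGenerator.cs above:

    // An obstacle whose cells were, on average, free half of the observed time:
    var obstacle = new MapObstacle { mobilityEstimate = 0.5f };
    foreach (float limit in new float[] { 0.15f, 0.3f, 0.45f, 0.6f, 0.85f })
    {
        Debug.Log(limit + " -> " + obstacle.IsMobile(limit)); // true only for limits below 0.5
    }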

8.1.4 MapPoint.cs

/*
    MapPoint represents a discretized location point in the DiscreteMap. Each MapPoint
    has a local, temporary mobility estimate and is given its final mobility estimate
    by its MapObstacle.
*/
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System;

public class MapPoint
{
    public enum PointType { unexplored, empty, obstacle }

    public PointType type = PointType.unexplored;
    const int historyLength = 2;
    const bool immobileByDefault = false;
    const float minimumTimeBetweenUpdates = 3;
    public event Action OnUpdateData;
    Queue<bool> history = new Queue<bool>();
    public float lastHitTime;

    private MapObstacle _group;
    public MapObstacle group
    {
        get { return _group; }
        set
        {
            _group = value;
            if (OnUpdateData != null) { OnUpdateData(); }
        }
    }

    public MapPoint(MapObstacle group = null)
    {
        this.group = group;
        for (int i = 0; i < historyLength; i++)
        {
            history.Enqueue(immobileByDefault);
        }
    }

    public bool IsObstacle()
    {
        return type == PointType.obstacle;
    }

    public bool IsEmptySpace()
    {
        return type == PointType.empty;
    }

    public bool IsUnexplored()
    {
        return type == PointType.unexplored;
    }

    /// AddHitValue is used to change the local mobility estimate of a MapPoint.
    /// If an obstacle was detected at the point, use isObstacle = true. If the
    /// point was visited without triggering a collision, use isObstacle = false.
    public void AddHitValue(bool isObstacle, float currentTime)
    {
        if (currentTime - lastHitTime > minimumTimeBetweenUpdates)
        {
            lastHitTime = currentTime;
            if (isObstacle)
            {
                history.Enqueue(isObstacle);
                if (history.Count > historyLength) { history.Dequeue(); }
            }
            else
            {
                FillHistory(false);
            }
            if (OnUpdateData != null) { OnUpdateData(); }
        }
    }

    void FillHistory(bool value)
    {
        for (int i = 0; i < historyLength; i++)
        {
            history.Enqueue(value);
            history.Dequeue();
        }
    }

    /// Returns the point's local mobility estimate. This is not the final mobility
    /// estimate, which should be retrieved from the MapObstacle.
    public float GetMobilityEstimate()
    {
        float value = 0;
        if (history.Count > 0)
        {
            foreach (var item in history)
            {
                if (item == false) { value++; }
            }
            return value / history.Count;
        }
        return 0;
    }

    /// Trigger a data update event, used to notify any subscribers that the point's data changed.
    public void RefreshData()
    {
        if (OnUpdateData != null) { OnUpdateData(); }
    }
}
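The interplay between the history queue and the update interval can be seen in a short worked example (not part of the original program; the timestamps are chosen to pass the three-second minimumTimeBetweenUpdates check):

    var p = new MapPoint();             // history starts as { false, false }
    float m0 = p.GetMobilityEstimate(); // 1.0f - an unhit point looks fully mobile
    p.AddHitValue(true, 5f);            // collision: history becomes { false, true }
    float m1 = p.GetMobilityEstimate(); // 0.5f
    p.AddHitValue(true, 10f);           // repeated collision: { true, true }
    float m2 = p.GetMobilityEstimate(); // 0.0f - consistently hit, looks immobile
    p.AddHitValue(false, 15f);          // visited freely: history refilled with false
    float m3 = p.GetMobilityEstimate(); // 1.0f - the point looks mobile again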

8.1.5 ObstacleBehaviour.cs

/*
    ObstacleBehaviour is a MonoBehaviour attached to the obstacle objects in the scene.
    It moves mobile obstacles according to their configured movement pattern
    (stationary, random or two-point). The original header comment was lost across a
    page break in the source; this summary is inferred from the code below.
*/
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class ObstacleBehaviour : MonoBehaviour
{
    enum MovementPattern { stationary, random, twoPoint }

    [SerializeField] MovementPattern movementPattern;
    [SerializeField] Vector2 twoPointMoveDistance;
    [SerializeField] float randomMoveRadius;
    [SerializeField] float interval = 1;
    [SerializeField] float moveTime = 1;
    float timer;
    bool moving = false;
    int direction = 1;
    Vector3 originalPosition;
    Vector3 targetPosition;

    /// Start is called at the beginning of the simulation.
    void Start()
    {
        originalPosition = transform.position;
    }

    /// Update is called every frame during the simulation.
    void Update()
    {
        if (IsMobile())
        {
            timer += Time.deltaTime;
            if (moving)
            {
                transform.position += (targetPosition - transform.position) / moveTime * Time.deltaTime;
                if (timer > moveTime)
                {
                    timer = 0;
                    moving = false;
                    direction *= -1;
                }
            }
            else
            {
                if (timer > interval)
                {
                    timer = 0;
                    moving = true;
                    if (movementPattern == MovementPattern.random)
                    {
                        // Pick a random target on a circle around the original position.
                        var angle = Random.Range(0, 2 * Mathf.PI);
                        var x = randomMoveRadius * Mathf.Cos(angle);
                        var y = randomMoveRadius * Mathf.Sin(angle);
                        targetPosition = originalPosition + new Vector3(x, y, 0);
                    }
                    else if (movementPattern == MovementPattern.twoPoint)
                    {
                        targetPosition = transform.position + (Vector3)twoPointMoveDistance * direction;
                    }
                }
            }
        }
    }

    /// Returns true if the obstacle has a mobile movement pattern.
    public bool IsMobile()
    {
        return movementPattern != MovementPattern.stationary;
    }

}

8.1.6 RandomMove.cs

/*
    RandomMove is a MonoBehaviour which should be attached to the robot object in the
    scene. It is responsible for moving the robot, detecting input and sending that
    input to the MapGenerator.
*/
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.Text;

public class RandomMove : MonoBehaviour
{
    enum MovementPattern { random, contourFollow }

    [SerializeField] MovementPattern pattern = MovementPattern.random;

    const float speed = 8;
    const float rotationSpeed = 140;
    const float contourFollowAngle = 0;
    const float collisionRotationRange = 89;
    const float radiusFactor = 0.72f;
    const float MaxRandomTime = 2;
    const float MaxContourTime = 3;
    const float MaxContourTimeWithoutCollision = 1;

    float timeSinceLastCollision;
    float timeSincePatternSwitch;
    Rigidbody2D body;
    List<Collision2D> collisions = new List<Collision2D>();
    MapGenerator mapGenerator;
    int contourFollowDirection = 1;
    float radius;

    /// Start is called at the beginning of the simulation.
    void Start()
    {
        body = GetComponent<Rigidbody2D>();
        mapGenerator = GetComponent<MapGenerator>();
        radius = transform.lossyScale.y * GetComponent<CircleCollider2D>().radius;
    }

    /// FixedUpdate is called 50 times per second during the simulation.
    void FixedUpdate()
    {
        timeSinceLastCollision += Time.fixedDeltaTime;
        timeSincePatternSwitch += Time.fixedDeltaTime;
        mapGenerator.VisitPosition(transform.position, radius * radiusFactor);
        if (pattern == MovementPattern.random) { RandomMoveUpdate(); }
        if (pattern == MovementPattern.contourFollow) { contourFollowUpdate(); }
        if (collisions.Count > 0) { timeSinceLastCollision = 0; }
        collisions.Clear();
    }

    void RandomMoveUpdate()
    {
        body.velocity = speed * transform.right;
        body.angularVelocity = 0;
        if (collisions.Count != 0)
        {
            // Pick a new heading away from the collision normals.
            float lowerBound = 0;
            float upperBound = float.MaxValue;
            foreach (var item in collisions)
            {
                var a = VectorToAngle(item.contacts[0].normal);
                if (a - collisionRotationRange > lowerBound) { lowerBound = a - collisionRotationRange; }
                if (a + collisionRotationRange < upperBound) { upperBound = a + collisionRotationRange; }
                if (upperBound < lowerBound) { upperBound += 360; }
            }
            var angle = Random.Range(lowerBound, upperBound);
            transform.rotation = Quaternion.Euler(0, 0, angle);
            if (timeSincePatternSwitch > MaxRandomTime)
            {
                // Reconstructed across a page break in the source; presumably the robot
                // switches to contour following here, mirroring contourFollowUpdate below.
                pattern = MovementPattern.contourFollow;
                timeSincePatternSwitch = 0;
            }
        }
    }

    void contourFollowUpdate()
    {
        body.velocity = speed * transform.right;
        body.angularVelocity = 0;
        if (collisions.Count != 0)
        {
            float a = LimitAngle(VectorToAngle(collisions[0].contacts[0].normal));
            if (contourFollowDirection == -1)
            {
                foreach (var item in collisions)
                {
                    float b = LimitAngle(VectorToAngle(item.contacts[0].normal));
                    if ((b > a && b < a + 180) || (b + 360 > a && b + 360 < a + 180)) { a = b; }
                }
            }
            else
            {
                foreach (var item in collisions)
                {
                    float b = LimitAngle(VectorToAngle(item.contacts[0].normal));
                    if (!((b > a && b < a + 180) || (b + 360 > a && b + 360 < a + 180))) { a = b; }
                }
            }
            a += (90 - contourFollowAngle) * contourFollowDirection;
            transform.rotation = Quaternion.Euler(0, 0, a);
        }
        else
        {
            transform.rotation = Quaternion.Euler(0, 0,
                transform.rotation.eulerAngles.z + rotationSpeed * Time.fixedDeltaTime * speed * contourFollowDirection);
        }
        if (timeSinceLastCollision > MaxContourTimeWithoutCollision || timeSincePatternSwitch > MaxContourTime)
        {
            pattern = MovementPattern.random;
            timeSincePatternSwitch = 0;
        }
    }

    float LimitAngle(float angle)
    {
        while (angle < 0) { angle += 360; }
        return angle % 360;
    }

    void PrintCollisions()
    {
        StringBuilder stringBuilder = new StringBuilder();
        foreach (var item in collisions)
        {
            stringBuilder.Append("Object: " + item.gameObject + ", Normal: " + item.contacts[0].normal + "\n");
        }
        if (stringBuilder.Length != 0) { print(stringBuilder.ToString()); }
    }

    void OnCollisionStay2D(Collision2D other)
    {
        CollisionTriggered(other);
    }

    void OnCollisionEnter2D(Collision2D other)
    {
        CollisionTriggered(other);
    }

    void CollisionTriggered(Collision2D other)
    {
        if (!collisions.Contains(other))
        {
            collisions.Add(other);
            mapGenerator.TriggerCollision(other.contacts[0]);
        }
    }

    float VectorToAngle(Vector2 v)
    {
        // The end of this listing is cut off in the source; the standard conversion to
        // degrees is assumed here, consistent with how the result feeds Quaternion.Euler.
        return Mathf.Atan2(v.y, v.x) * Mathf.Rad2Deg;
    }
}
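The scene setup itself is not included in the appendix. Assuming the Verificator class (referenced above but not reproduced here) is present in the project, the robot object could be assembled roughly as in the following sketch; the values are illustrative, not the ones used in the experiments:

    var robot = new GameObject("Robot");
    var body = robot.AddComponent<Rigidbody2D>();
    body.gravityScale = 0;                                // top-down scene: no gravity
    robot.AddComponent<CircleCollider2D>().radius = 0.5f; // circular robot body
    robot.AddComponent<MapGenerator>();                   // builds the DiscreteMap from reported events
    robot.AddComponent<RandomMove>();                     // moves the robot and forwards collisions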
