
Combining adjustable autonomy and shared control

as a new platform for controlling robotic systems

with ROS on TurtleBot

Alexander Biro

Computer Science


Studies from the Department of Technology at Örebro University

Alexander Biro

Combining adjustable autonomy and shared control as a new platform for controlling robotic systems with ROS on TurtleBot


© Alexander Biro, 2018

Title: Combining adjustable autonomy and shared control as a new platform for controlling robotic systems with ROS on TurtleBot


Abstract

Fully autonomous robotic systems can fulfill their functionality without human interaction, but their efficiency is much lower than that of a robotic system which is teleoperated by a specialist. The teleoperation of robotic systems requires continuously high attention from the operator; if this attention is taken away or reduced, the efficiency drops heavily. The combination of Adjustable Autonomy and Shared Control represents a promising approach for how great efficiency could be maintained in a robotic system with a minimum of human interaction.

The goal of this project is the re-implementation of the utilitarian voting scheme for navigation for usage with modern robotic platforms, as proposed in the publication “Experiments in Adjustable Autonomy” by Jacob W. Crandall and Michael A. Goodrich. This voting scheme combines a proposed direction, which is given by a human operator, with environmental sensor data to determine the best direction for a robot’s next movement.

The prototype implemented in this project was developed with ROS on TurtleBot and processes the sensor data and calculates the best direction for the robot’s movement in the same way as the original prototype. Since the original setup consists of a Nomad SuperScout robot with sixteen sonar range finders, adjustments needed to be made to run the same algorithm on a different setup. The correct processing of the input data and the estimation of the best direction were verified by pen and paper calculations. Finally, further ideas for improving the implemented prototype and using it in other scenarios were presented.


Acknowledgements

I am grateful for the continuous help of my supervisor Dr. Andrey Kiselev. He was always available for my questions, even during his holidays, and provided me with everything I needed for my work. Whenever I needed guidance for staying on the right path, Andrey was there to steer me in the right direction. Working with Andrey was a pleasure and I wish him all the best for the future.

Furthermore, I am also grateful that Prof. Amy Loutfi has agreed to be the examiner for this project work.

During my project work, I worked with modern hard- and software in an innovative environment. I had many possibilities to get in contact with inspiring people and gained new insights about my subject.

Spending my semester abroad at Örebro University was the best decision I could possibly make. Hopefully, Örebro University and Sweden are going to continue welcoming students from other countries, so that everybody can have this great experience.


Contents

1 Introduction 1
1.1 Autonomous and non-autonomous robot control . . . 1
1.2 Term definitions . . . 3
1.3 Brief overview of ROS . . . 4
1.4 Goal of the thesis . . . 6

2 Related Work 9
2.1 Review of important related work . . . 9
2.1.1 Related work on teleoperation . . . 9
2.1.2 Related work on autonomy of robotic systems . . . 11
2.1.3 Related work on adjustable autonomy and shared control . . . 13
2.2 Review of the paper “Experiments in Adjustable Autonomy” . . . 16
2.2.1 The goal-achieving and obstacle-avoiding behaviors . . . 16
2.2.2 The vetoing behavior . . . 18

3 Presentation of the implemented Prototype 21
3.1 Replacement of sonar range finders through a laser scanner . . . 22
3.2 Access of the laser scan data . . . 23
3.3 Overview of the prototype’s architecture . . . 23
3.4 Description of the prototype’s nodes . . . 25
3.4.1 Split_Laser_Scan_Data . . . 25
3.4.2 Initial_Vector . . . 27
3.4.3 Adjust_Robot_Orientation . . . 28
3.4.4 Calculate_Best_Direction . . . 29
3.4.5 Vetoing_Behavior . . . 33
3.4.6 Move_Robot . . . 34
3.5 Description of the prototype’s topics . . . 34
3.5.1 /scan . . . 36
3.5.2 /sections . . . 36
3.5.3 /initialVector . . . 37
3.5.5 /robotPosition . . . 37

4 Evaluation by pen and paper calculations 39
4.1 Pen and paper verification in the first situation . . . 40
4.1.1 Calculation with pen and paper (first situation) . . . 40
4.1.2 Calculation with the prototype (first situation) . . . 41
4.1.3 Comparison of the results (first situation) . . . 43
4.2 Pen and paper verification in the second situation . . . 43
4.2.1 Calculation with pen and paper (second situation) . . . 43
4.2.2 Calculation with the prototype (second situation) . . . 45
4.2.3 Comparison of the results (second situation) . . . 46

5 Future work and conclusions 47
5.1 Future work for improving the implemented prototype . . . 47
5.2 Final Conclusions . . . 48


List of Figures

1.1 Scheme of the sense-plan-act architecture. . . 1

1.2 The neglect curve. . . 3

1.3 Scheme of the ROS architecture . . . 5

2.1 Teleoperation of mobile robotic systems . . . 9

2.2 Representation of the three determinants of presence . . . 12

2.3 Exemplary architecture for adjustable autonomy and shared control . . . 14

2.4 Selection of the corresponding sonars . . . 18

3.1 Overview of the prototype’s architecture. . . 24

3.2 Flowchart for the node Split_Laser_Scan_Data. . . 26

3.3 Flowchart for the node Set_Initial_Vector. . . 27

3.4 Flowchart for the node Adjust_Robot_Orientation. . . 28

3.5 Flowchart for the node Calculate_Best_Direction. . . 30

3.6 Determination of the opening angle theta . . . 33

3.7 Flowchart of the node Vetoing_Behavior . . . 34

3.8 Flowchart of the node Move_Robot . . . 35

4.1 Visualization of the first situation . . . 42


List of Tables

3.1 Initial sonar angles . . . 31
3.2 Overview of all topics used in the prototype for the voting scheme . . . 36
4.1 Sensor data for the first situation . . . 40
4.2 Sensor data for the second situation . . . 43


List of Algorithms

1 Algorithm for calculating the “best” direction for the robot’s motion. . . 32


List of Acronyms

IV Initial Vector

OSRF Open Source Robotics Foundation

P2P Peer-to-Peer

ROS Robot Operating System


Chapter 1

Introduction

1.1 Autonomous and non-autonomous robot control

A traditional approach for controlling fully autonomous robot systems is called the “sense-plan-act” architecture. The essence of this approach is to use different sensors to build a complete internal model of the robot’s environment, plan the actions accordingly and execute the result [19]. These three steps, as shown in figure 1.1, are executed in a closed loop.

Figure 1.1: Scheme of the sense-plan-act architecture [4]. The three steps of this architecture are executed in each iteration inside a closed loop. In the first step a detailed model of the entire environment is built, which is evaluated in the second step. The results of the evaluation are then executed in the last step. Due to the order of execution, this system is not able to react to changes in the environment between the first and the third step. Changes to the environment are registered within the first step in the next iteration.

One big drawback of this approach is that it is only capable of controlling a robot in a perfectly static environment. During the execution of the planned action, the robot is “blind” and cannot react to changes in the environment. Changes to the environment will be noticed in the next iteration of the algorithm, when the data about the environment is acquired again. So, this architecture cannot be used for robot control in a dynamic environment.

Controlling robotic systems in dynamic environments requires quick recognition of changes to the environment and appropriate actions. Since autonomous systems based on the sense-plan-act architecture cannot react to unexpected changes, non-autonomous systems must be used.

A simple solution for controlling a non-autonomous system in a dynamic environment is called “teleoperation”. Teleoperation is a way of controlling the actions of a system remotely through instructions given by an operator. The operator can be a human, but also software, e.g. a high-level control system.

Human operators need to invest a lot of attention in controlling systems, so they can only control a limited number of different systems at the same time. Using human operators for controlling robotic systems is very expensive and can be dangerous due to human factors.

A robot’s effectiveness under teleoperation with a human operator is highly dependent on the operator’s attention to it. If the human invests a lot of attention, the robot’s effectiveness is rather high, and vice versa. Consequently, the effectiveness declines if the system is neglected. On the other hand, a fully autonomous robot system always provides the same effectiveness at any amount of attention or neglect by a human operator.

The connection between a robot’s effectiveness and the amount of neglect is discussed in the publication “Characterizing efficiency of human robot interaction” by Michael A. Goodrich and Jacob W. Crandall [12]. The effectiveness of a robotic system based on teleoperation with low neglect can be very high, but it drops rapidly with increasing neglect. On the other hand, a fully autonomous system is often less effective in dynamic environments, but at least it is not affected by neglect of an operator. A promising compromise between effectiveness and neglect can be provided by combining adjustable autonomy and shared control (see Fig. 1.2).

This new solution, called “adjustable autonomy”, can maintain a certain rate of the robot’s effectiveness even under neglect from a human operator. Adjustable autonomy is an approach which allows a system to adjust its level of autonomy according to its current environmental situation. The different levels of autonomy define certain shares of teleoperation and autonomy. This mode of control, when an operator and a control system provide instructions for a system at the same time, is called “shared control”.


Figure 1.2: The so-called “neglect curve” describes the connection between a robot’s effectiveness and the amount of neglect in different modes of operation [12]. The y-axis represents the robot’s effectiveness and the x-axis the amount of neglect. The described modes of operation are teleoperation (blue curve), fully autonomous systems (red curve) and systems with adjustable autonomy (green dashed curve). Adjustable autonomy is able to maintain a certain rate of effectiveness, even under neglect from a human operator.

1.2 Term definitions

In robot control and autonomous systems, many different terms are used in the literature. To describe the approach of “adjustable autonomy” with “shared control” as a new solution for controlling a robotic system in a dynamic environment, clear definitions of technical terms are essential. This section provides an overview of the most important terms and their definitions.

• Autonomy

Autonomy is the ability of a system to fulfill its function without any kind of human intervention. The opposite of an autonomous system, a non-autonomous system, directly follows the commands given by a human operator. Human interaction is necessary for a non-autonomous system to operate effectively.

• Adjustable Autonomy

The property of an autonomous system to change its level of autonomy while it operates is called “Adjustable Autonomy” [15]. The level of autonomy can be described as the amount of attention from a human operator needed by a system to fulfill its purpose effectively. The levels of autonomy range from manual control towards fully autonomous systems. Adapting the level of autonomy can, e.g., be used to optimize the performance of a system under dynamic conditions.

• Shared Control

Shared control describes the fact that a system is controlled by at least two different entities. The entities in this case can be human operators or a software system. A common example of shared control is controlling an aircraft. An aircraft is usually controlled by two human entities, one pilot and one copilot. They both control the actions of the aircraft and check each other.

• Teleoperation

Teleoperation is the act of operating a non-autonomous system remotely. That means that the operator is not at the same location as the system that is controlled. Teleoperation can be realized in many ways, e.g. with human control or through control software [10]. The provided instructions for the system to be controlled are usually transmitted via computer networks.

• Guarded Teleoperation

Guarded teleoperation is a special type of teleoperation. It verifies the safety of the provided instructions before passing them to the system to be controlled. A common example of guarded (tele-)operation are smart cars with brake assist systems. The brake assist system can, e.g., start stopping the car immediately in case of emergency and prevent collisions. So even if a driver wanted to steer a smart car willingly into an obstacle, he couldn’t. All in all, guarded teleoperation provides another layer of safety for robotic systems.

1.3 Brief overview of ROS

The Robot Operating System (ROS) was selected as the technological foundation for this project because it is broadly used throughout the research community in robotics and in the robotics industry. ROS contains many libraries and “ready-to-use” software solutions for different robot platforms and many common tasks, like navigation, path following and Simultaneous Localization and Mapping (SLAM). Thanks to ROS, many technical challenges were already solved, so I could focus on the core of my project.

ROS is a free open source software platform which provides an extensive middleware for operating different kinds of robots, like industrial robots, mobile robots and even aerial drones. It was developed from 2007 until 2013 at Willow Garage. Since 2013, the further development of ROS is mostly done at the Open Source Robotics Foundation (OSRF) [3]. Since ROS is open source software, it is also further developed by a broad community throughout the world.

ROS itself is programming language independent, which means that a developer can implement software for ROS in different languages. However, the most used programming languages are C++ and Python. ROS is intended for operation on top of a Linux operating system, with the Debian or Ubuntu distribution. There are also efforts being made by the ROS community to support Java and Android [6].

The architecture of ROS is at its core a centralized Peer-to-Peer (P2P) system. It is made of a large number of small programs which continuously exchange data via messages [20]. These small programs represent the “peers” in the P2P system and are called “nodes” in ROS terminology. Every new node needs to be registered at the ROS master node. The ROS master is the core of the P2P system and is needed for establishing a communication channel between all of the system’s nodes. After the registration, all messages between the nodes can be exchanged directly, without interaction of the master node. Figure 1.3 shows the ROS architecture abstractly and how the nodes communicate among themselves.

Figure 1.3: Scheme of the ROS architecture [5]. The ROS architecture is at its core a centralized P2P system. The nodes are small programs which solve a specific task. The nodes can communicate with each other by sending messages. Every new node needs to be registered at the ROS master.


By solving a very specific problem, each node makes a small contribution to the functionality of the whole system. Since the supported robot platforms vary a lot in their features and characteristics, the nodes for the different platforms and tasks can differ greatly from each other. All nodes can be modified or replaced very easily, without many adjustments to the remaining system.

Every node can select which messages to receive and which not. Every message is published to a specific “topic”. If a node wants to receive messages published to a specific topic, it must “subscribe” to it. The act of sending messages to a certain topic is called “publishing”. Every node can publish and subscribe to every topic of the system; all topics are publicly accessible. A node can subscribe and publish to multiple topics at the same time, which is necessary if a node needs to process data from multiple origins.

Altogether the nodes and topics form the ROS network, which offers many advantages. One advantage is the great possibility for further changes to specific parts of the system, without the need to change other parts. Furthermore, the functionality of the system can easily be expanded by adding new nodes and topics. Another advantage is that an error in one of the nodes will most probably not make the whole system crash. A part of the system can still be working fine, even if some parts stopped working. Finally, the different nodes do not have to run on only one single computer. The network and the computing load can be spread over many computers, to maintain reliability and robustness against disturbances.
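
To illustrate this publish/subscribe mechanism, the following minimal Python (rospy) sketch shows a node that subscribes to one topic and republishes a processed value to another. The node name, the topic names and the message type are chosen only for this illustration and are not part of the prototype presented later.

    #!/usr/bin/env python
    # Minimal publish/subscribe sketch (illustrative only, not part of the prototype).
    import rospy
    from std_msgs.msg import Float32

    def callback(msg):
        # Process the received value and republish the result on another topic.
        pub.publish(Float32(data=msg.data * 2.0))

    if __name__ == '__main__':
        rospy.init_node('example_node')            # registers the node at the ROS master
        pub = rospy.Publisher('/output', Float32, queue_size=10)
        rospy.Subscriber('/input', Float32, callback)
        rospy.spin()                               # keep the node alive and process callbacks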

1.4 Goal of the thesis

In this student project, the Utilitarian Voting Scheme for Navigation, as proposed by Jacob W. Crandall and Michael A. Goodrich [18], was re-implemented for execution on a TurtleBot with ROS. The idea of this voting scheme is to combine sensor readings with the direction and magnitude of a given Initial Vector (IV). This information is combined to calculate the “best” direction for the robot’s motion. The IV can be provided in many ways. The origin of the IV defines the level of autonomy of the system.

Originally the utilitarian voting scheme for navigation was proposed for the Nomad SuperScout robot. The SuperScout robot has 16 sonar range sensors, which can altogether measure 360° around the robot. The TurtleBot robot used in this project features only one laser range finder, which can measure 240°. Therefore, some adjustments for measuring the environment had to be made, but the core of the voting scheme, the calculation of the best direction, stayed the same.


Finally, the results of the re-implementation for the TurtleBot robot were verified and analyzed with example situations and scenarios. Furthermore, some ideas for future work and further development of the proposed prototype were introduced.


Chapter 2

Related Work

2.1 Review of important related work

2.1.1 Related work on teleoperation

Teleoperation is, as described in section 1.2, the act of controlling a non-autonomous robotic system without being at the same location (refer to Fig. 2.1). It should be distinguished from remote control, which usually means radio-based driving in line-of-sight [16]. The area in which teleoperation is required is often unknown. So unlike remote control, teleoperation furthermore requires reliable navigation in unknown or changing environments.

Figure 2.1: Overview of teleoperation for mobile robotic systems. The operator and the telerobot are divided by a barrier (e.g. environment, distance, time). The operator receives information about the telerobot’s environment through an interface and returns reliable commands for operation to the telerobot (local loop). The telerobot receives the given commands and transforms them into motion during execution. The changed state of the environment is measured by sensors and sent to the operator (remote loop) [16].


In the publication “Vehicle Teleoperation Interfaces”, the authors Terrence Fong and Charles Thorpe provide an overview of teleoperation and present a variety of interfaces for human interaction [16]. The underlying claim in the publication from Fong and Thorpe is that the effectiveness of a teleoperated robotic system directly depends on the efficiency and capability of the corresponding human interface [16]. While the level of autonomy increases, the main function of the interface shifts from controlling to monitoring. The most common type of interface in teleoperation for mobile robots is direct control. Here the operator directly controls the robot via hand-controllers (e.g. a joystick), while watching a video from a robot-mounted camera. The idea is that the operator gets the feeling of sitting inside the mobile robot. Other types of interfaces are multi-modal, supervisory and novel interfaces1 [16].

On the other hand, the publication “The Human Role in Telerobotics” by Rafael Aracil et al. presents telepresence as one of the key factors to enhance the performance of a telerobotic system [11]. Telepresence in the context of teleoperation means that the information about the robot’s environment is communicated to the operator in a natural manner. Natural manner means that multiple senses are addressed in the way the information is displayed through a multi-modal human interface [11].

Additionally, different approaches to address multiple human senses at once in teleoperation interfaces are discussed in the paper of Aracil et al. [11]. For example, the sense of vision informs a human about the environment by giving information about the distance, color and shapes of the objects in the line-of-sight. To use that sense in an interface, cameras need to be mounted on a telerobot, which observe the remote environment. The data from those cameras is displayed in the human interface through stereoscopic devices. Stereoscopic devices are capable of creating the illusion of depth in flat images [9]. Stereoscopic devices can be divided into three different categories: binocular, auto-stereoscopic and immersive. The difference between binocular and auto-stereoscopic devices is that binocular devices need an additional component (e.g. glasses, a helmet) to show different images to each eye and auto-stereoscopic devices don’t. Immersive devices cover the whole visual area to create the illusion of depth. The illusion of depth is necessary to estimate distances correctly in teleoperation scenarios [11]. Just like with the sense of vision, other multi-modal devices try to address every function within all our senses (sight, hearing, taste, smell, touch). In that way, telepresence can be created, and the operator gets the feeling that he is really in the remote environment and can directly interact with it in a natural manner to achieve a certain goal.

1The paper “Vehicle Teleoperation Interfaces” from Terrence Fong et al. was released in 2001, therefore the group of so called “novel” interfaces may not be that novel and innovative anymore when compared to current interfaces.


To measure the amount of “presence” in telepresence, an operational measure is necessary. Maximizing the amount of presence would lead to better efficiency and performance in any teleoperation scenario. In order to achieve that goal and provide a greater amount of presence, a better understanding of what presence is and what factors influence it is mandatory.

The paper “Musings on Telepresence and Virtual Presence” by Thomas B. Sheridan, describes three determinants to specify the level of presence [22]:

1. Extent of sensory information

Number of transmitted bits of information from the sensors concerning the environment to the observer.

2. Control of relation of sensors to environment

Ability of the observer to change the measurable area of the sensors, e.g. to change its point of view.

3. Ability to modify physical environment

Extent of motor control to interact with the physical environment.

The three determinants can also be represented as three orthogonal axes (refer to Fig. 2.2). In that context, “perfect presence” is represented by the maximum of all three determinants.

2.1.2 Related work on autonomy of robotic systems

The ‘sense-plan-act’ architecture is a traditional approach for designing autonomous robotic systems, as described in section 1.1. The most significant feature of this architecture is the unidirectional flow of control. The information is always propagated in the following direction: from sensors to world model to plan and finally to effectors [17].

The unidirectional flow of control is also the biggest issue with the sense-plan-act architecture. The sensing and planning steps require a significant amount of time, and in the meantime the actual world could change; in that case the internal plan and the calculated actions would no longer match the actual world.

A promising approach to solve the presented issue of the sense-plan-act architecture is presented in the publication “On Three-Layer Architectures” by Erann Gat [17]. In his publication, Erann Gat describes that effective control algorithms can be divided into three categories:


Figure 2.2: Three-dimensional representation of the determinants of presence. The three axes represent the different determinants for measuring the amount of presence. “Perfect presence” is defined by the maximum of those three determinants [22].

1. Reactive control algorithms.

Reactive control algorithms contain no internal state, like an internal world model in the sense-plan-act architecture, and map sensor readings directly to corresponding actions.

2. Algorithms with an internal state, but without search.

These algorithms rely heavily on an internal model of the real environment and try to estimate the optimal action in that virtual environment.

3. Search-based algorithms.

Executing a search to find the optimal solution requires a significant amount of time, relative to the rate of change of the environment [17].

Erann Gat introduces the “three-layer architecture” as a combination of all three presented categories of control algorithms. The idea of this approach is to combine the advantages of each approach (fast reaction, internal state and search/planning) into a new one. The three-layer architecture takes advantage of the effective computational abstractions which are provided by the algorithms of the first and second type to construct interfaces to algorithms of the third type. The presented three-layer architecture is no final solution to designing autonomous systems, but it offers a good starting point for further research.


Another important aspect in autonomous robotics is the field of application of those systems. Autonomous robotic systems have constantly been a big and evolving research area. But nowadays, much effort is being put into integrating robotic systems into other areas, such as the military (e.g. Unmanned Aerial Vehicles (UAV)), clinical purposes (e.g. the da Vinci surgery robot [1]) and many others.

One of the biggest areas of application for robotics has always been industry. Technical developments changed the industry many times, from steam-operated machines in the 20th century to intelligent industrial robots for Industry 4.0 [7].

Programmable industrial robots have shaped the industry from the beginning of the 1970s until today [21]. There is a big need for designing effective robotic systems, since many production facilities heavily rely on such systems. In the publication “Opportunities for increased intelligence and autonomy in robotic systems for manufacturing” by Alfred A. Rizzi and Ralph L. Hollis, a new architecture for agile assembly (AAA) was introduced. The goal of the AAA is to provide an approach for creating assembly lines which cover dramatically less floor space, using modular robotic and computational components. Another advantage of the AAA is that the individual components can be exchanged or expanded easily for adapting to new processes [21].

2.1.3 Related work on adjustable autonomy and shared control

The level of autonomy of robot control can be measured by the amount that the operator can neglect a system and still achieve great effectiveness [13]. The various degrees of autonomy reach from manual teleoperation up to full autonomy. Adjustable autonomy is the property of autonomous systems to change the level of autonomy during execution. Any robotic system is operated in shared control if the level of autonomy is in between teleoperation and full autonomy. Refer to section 1.2 for more information about important terms.

Munjal Desai and Holly A. Yanco introduce an architecture for adjustable autonomy and shared control in their publication “Blending Human and Robot Inputs for Sliding Scale Autonomy”, which can create new levels of autonomy and switch between those during execution time [13].


The predefined modes of operation include teleoperation, shared control and fully autonomous operation. The four levels of autonomy are internally defined by certain variables. The system, as presented by Munjal Desai and Holly A. Yanco, can change these variables and mix the behavior of the different modes to create new levels of autonomy. Human input is provided by using a game pad with six degrees of movement and six buttons on it [13]. After receiving the input from the user, the selected speed is given to the speed limiter function. The speed limiter function ensures, with sensor readings, that the selected speed is safe enough for the current environment [13].

In shared control mode, both the user and the robot suggest values for direction and speed. Both inputs are afterwards forwarded to the behavior arbitrator, which blends both suggestions based on the selected level of autonomy, determines the result and passes it to the motors of the robot to execute the calculated action (see Fig. 2.3).
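
To make the blending idea concrete, the following sketch shows one possible way to combine the two suggestions. The linear weighting, the function names and the simple speed limiter are assumptions chosen for illustration only and are not taken from Desai and Yanco’s implementation.

    def limit_speed(requested_speed, min_obstacle_distance, warning_distance):
        # Speed limiter sketch: refuse to move if an obstacle is closer than the
        # warning distance, otherwise pass the requested speed through unchanged.
        if min_obstacle_distance < warning_distance:
            return 0.0
        return requested_speed

    def blend_commands(user_speed, user_direction, robot_speed, robot_direction, alpha):
        # Behavior arbitrator sketch: alpha = 0.0 corresponds to pure teleoperation,
        # alpha = 1.0 to full autonomy; values in between define blended levels.
        speed = (1.0 - alpha) * user_speed + alpha * robot_speed
        direction = (1.0 - alpha) * user_direction + alpha * robot_direction
        return speed, direction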

Figure 2.3: Overview of the architecture of the adjustable autonomy and shared control system as introduced by Munjal Desai and Holly A. Yanco [13]. Human and robot suggestions for actions are blended together according to a certain level of autonomy. The suggestion from the human operator is first given to the speed limiter function, to ensure the safety of the selected speed in the current environment. The final action is determined by the behavior arbitrator and sent to the motors.


While previous work mostly focused on establishing adjustable autonomy and shared control with one single user and one single robot, another important challenge is working in a multi-user and multi-robot context. M. Bernardine Dias et al. present their investigations on this challenge in their publication “Sliding Autonomy for Peer-To-Peer Human-Robot Teams” [14]. The work of M. Bernardine Dias et al. identified six key capabilities for overcoming the challenges of establishing adjustable autonomy in peer-to-peer human-robot teams. The importance of those capabilities is demonstrated by discussing the results from a peer-to-peer human-robot team working together in a treasure hunt context [14]. The humans and robots which are working together as a team can furthermore assign tasks to each other, or even to some entity outside the team [14].

The capabilities, as identified by the work of M. Bernardine Dias et al., can be divided into two different categories: human awareness in multi-agent teams and robots in mixed-initiative teams. All six capabilities from both categories are shown in the following [14]:

• Human awareness in multi-agent teams
  – Ability to request help
  – Ability to maintain team coordination during interventions
  – Ability to provide situational awareness

• Robots in mixed-initiative teams
  – Ability to interact at different granularities
  – Ability to prioritize team members
  – Ability to learn from interactions

Finally, the results from the treasure hunt scenario show that adjustable autonomy can enhance the performance of human-robot teams [14]. For more details, please refer to the publication “Sliding Autonomy for Peer-To-Peer Human-Robot Teams” by M. Bernardine Dias et al.

All in all, the combination of adjustable autonomy and shared control provides a promising approach for robot control with human interaction for various applications and scenarios. Human intervention is often critical for a robot’s effectiveness, because the human can provide great help in dangerous situations or at least monitor the robot’s behavior and interfere when it is necessary.


2.2 Review of the paper “Experiments in Adjustable Autonomy”

The publication “Experiments in Adjustable Autonomy” by Michael A. Goodrich, Jacob W. Crandall et al. is of great importance for this project, because in it the authors proposed a promising new navigation system using adjustable autonomy for mobile robots [18]. The core of the navigation system is the “Utilitarian Voting Scheme for Navigation”, which was also introduced in this publication. This voting scheme provides a robotic system with the ability to adjust its level of autonomy to the corresponding environmental situation, in order to maintain high effectiveness up to a certain amount of neglect from an operator. In the publication the authors developed an exemplary prototype for the Nomad SuperScout robot.

The goal of this project is to re-implement the navigation system from this paper for usage with ROS on TurtleBot and to analyze its capabilities for further use in modern robotic platforms and systems. See section 1.4 for further information about the goal of this project.

The utilitarian voting scheme for navigation uses three different behaviors: a goal-achieving behavior, an obstacle-avoiding behavior, and a vetoing behavior [18].

2.2.1 The goal-achieving and obstacle-avoiding behaviors

The goal-achieving and the obstacle-avoiding behaviors combine a given initial vector (IV) with sensor readings of the robot’s environment to determine the best direction for the robot’s next movement. An IV proposes a certain direction and a magnitude for the robot’s movement. Additionally, there is a value (IVweight) which specifies the priority of the IV in comparison with the environmental sensor readings. The higher this value, the more the robot will follow the proposed IV. The lower this value is, the more the system’s actions are going to be defined by the sensor readings.

The IV can be provided from different origins. In the paper, three possibilities to obtain an IV are discussed.

1. Teleoperation

The IV is provided directly through a human operator by using an input device, like a joystick or a controller. Since the IV needs to be provided frequently, this requires much attention and workload from the operator. Whether the given IV leads to efficient robot behavior or not depends strongly on the knowledge and practice of the operator. A “bad” IV, e.g. if the operator steers the robot towards an obstacle, can be rejected by the vetoing behavior. The vetoing behavior is presented at the end of this section.

2. Way points and Heuristics

A human operator marks important locations or paths in an internal map for the robot with specific icons. The icons can represent goals, way points or locations to avoid. A high-level navigation system can calculate the optimal path with the information from the internal map using heuristics and obtain an IV out of it. This possibility to obtain an IV requires human interaction and attention only during the marking of the map with icons. Afterwards the robot can periodically generate an IV out of the map and provide it to the voting scheme.

3. Autonomy

The IV always points straight ahead. In this mode, the robot becomes autonomous and doesn’t rely on a human operator anymore. The behavior of the robot most probably results in random exploration of the environment.

The various ways to provide an IV represent the different levels of autonomy for the system. Each source of the IV requires either more or less human interaction and attention.

After the retrieval of the IV, the next step is to select seven out of the sixteen sonars, which also propose a direction for movement. The SuperScout robot is equipped with sixteen sonars, each of which can measure 22.5°. Furthermore, each sonar has an assigned sonar angle, that is, the angle the sonar forms with the front of the robot. Sonar 12 corresponds to 0° and sonar 8 to −90° or 270° (see Fig. 2.4) [18]. The sonar angle of each sonar is compared to the given IV and the one with the smallest absolute difference is chosen as a starting point. The three sonars adjacent to the left and right of the starting point are also selected. Each one of the seven sonars returns the distance to the closest obstacle in its measurable area. Together, all seven distance values form the array S.

After building the array S, priorities for the different distances must be calculated. If the distance value S_i is greater than the defined safe distance, a goal-achieving behavior is cast, and if the distance is smaller than the warning distance, an obstacle-avoiding behavior. The magnitude of the behavior depends on the obtained priority: the bigger the distance S_i, the more the robot will execute the goal-achieving behavior, and vice versa.


Figure 2.4: Selection of the corresponding seven sonars [18]. The direction of the IV is compared to the sonar angle of each sonar and the one with the smallest absolute difference is chosen as a starting point. The chosen sonar and the three adjacent to it on the left and right side are considered for calculating the best direction. Based on the measured distance to the closest obstacle for each sonar and the predefined distances SafeDist (safe distance) and WarningDist (warning distance), priorities for movement in that direction can be determined.

From the weighted votes, the x and y coordinate components of the best direction for the robot’s movement can be calculated. The inverse tangent of both components, θ = tan⁻¹(y/x), gives the opening angle for the movement. All details about the calculation of the vote array V and both coordinate components can be found in the description of the calculation node in section 3.4.4.
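
The following sketch illustrates the overall idea of the voting step. The exact weighting used by Crandall and Goodrich is not reproduced here (the prototype’s implementation is described in section 3.4.4); the vote values, the handling of distances between WarningDist and SafeDist and the way the IV is added are simplifying assumptions made only for this illustration.

    import math

    def best_direction(S, sonar_angles, iv_angle, iv_weight, pull, rejection,
                       safe_dist, warning_dist):
        # Start the vote sum with the IV, weighted by IVweight.
        x = iv_weight * math.cos(iv_angle)
        y = iv_weight * math.sin(iv_angle)
        for i in range(len(S)):
            if S[i] > safe_dist:            # far from obstacles: vote for this angle
                vote = pull[i]
            elif S[i] < warning_dist:       # too close: vote against this angle
                vote = -rejection[i]
            else:                           # in between: neutral (assumption)
                vote = 0.0
            x += vote * math.cos(sonar_angles[i])
            y += vote * math.sin(sonar_angles[i])
        # The opening angle theta is the direction of the summed vote vector.
        return math.atan2(y, x)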

2.2.2 The vetoing behavior

The vetoing behavior represents an added layer of safety when executing the utilitarian voting scheme for navigation. The voting scheme does not guarantee that moving the robot in the “best” calculated direction θ will not result in a collision. In most cases, the voting scheme prevents collisions, since it moves the robot in a very foresighted way.

To always prevent collisions, the vetoing behavior implements a kind of ‘guarded motion’ [18]. The guarded motion ensures that the robot never moves in the direction θ if the direction is not ‘safe’. The direction is not safe if the robot will leave the ‘safe zone’ at some future time t when it keeps moving on the same course. Safe zone means that there is no obstacle closer to the robot than the defined safe distance SafeDist.

If it turns out that the calculated direction θ is not safe, the robot will discard all results and end that iteration without movement. A new IV needs to be provided to continue the execution of the navigation system.
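
A minimal sketch of such a safety check is shown below. It only compares the distance measured around the chosen direction θ against SafeDist; the projection of the motion to a future time t, which the original vetoing behavior performs, is omitted here, and the function name is an assumption.

    def is_direction_safe(theta, sonar_angles, distances, safe_dist):
        # Find the section whose sonar angle is closest to the chosen direction
        # theta and verify that no obstacle in it is closer than SafeDist.
        closest = min(range(len(sonar_angles)),
                      key=lambda i: abs(sonar_angles[i] - theta))
        return distances[closest] > safe_dist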


Chapter 3

Presentation of the implemented Prototype

The utilitarian voting scheme for navigation, as proposed in the publication “Experiments in adjustable autonomy” from Jacob W. Crandall and Michael A. Goodrich [18], was re-implemented in this project for usage with ROS on TurtleBot.

In this section, the functionality and the architecture of the newly implemented prototype for executing the voting scheme are presented. Since the TurtleBot does not share the same features as the robot for which the voting scheme was originally designed, some adjustments to the algorithm needed to be made. The adjustments in the algorithm were mostly about the usage of a laser sensor instead of sonar range finders. Which adjustments needed to be made for the exchange of sensors and how the mapping from the sonar range finders to the laser scanner was implemented is described in section 3.1. Additionally, the integration of the laser scanner into ROS and how the data is accessed by the implemented prototype is shown in section 3.2.

The re-implementation of the voting scheme is designed for usage with ROS. Since ROS is a centralized P2P message-based system, the architecture of the re-implemented prototype needs to be specifically designed for it. The architecture of ROS and what nodes and topics are is presented in section 1.3. The new prototype consists of six ROS nodes, which exchange data by sending messages. Each node solves a certain task and provides data which is used by the following nodes.

The core of the new prototype is the Calculate_Best_Direction node. This node is an exact representation of the voting scheme as presented in section 2.2. The voting scheme gets an initial vector and the corresponding sensor data as input and provides the best direction θ as a result. All details about the functionality of this node are shown in section 3.4.4.

An overview of the architecture with all nodes and topics of the implemented prototype is shown in section 3.3. In the following sections each one of the nodes and topics is also described and presented in detail.

3.1 Replacement of sonar range finders through a laser scanner

Originally, the voting scheme was designed to work with sixteen sonar sensors on a Nomad SuperScout robot. Since we are working with a laser sensor on the TurtleBot 2, we have to adapt the original approach to our robot system. We chose this setup because the TurtleBot is a low-cost open source robot platform with support for various sensors and other additional hardware [8]. The combination of TurtleBot and ROS is furthermore used in many research scenarios about human-robot interaction.

Initially, the algorithm selects the sonar which corresponds best to the direction proposed by the IV. The selection is based on the sonar angle which each sonar forms with the center of the robot. After selecting the best sonar, the three adjacent neighbors on both sides are also selected. All seven selected sonar sensors are evaluated to obtain the best direction.

The laser scanner used with our TurtleBot can only detect 240°, whereas the robot from the original setup can measure 360° with its sixteen sonar range finders. Each of the sonar range finders can measure 22.5°. Since in every iteration seven sonar sensors are selected, the total area which needs to be evaluated covers 157.5°. The data obtained by the laser scanner is split equally into sections which cover exactly 22.5°, like the sonar range finders. The splitting of the data is done by the node called Split_Laser_Scan_Data, which is described in section 3.4.1.

Because of the limited measurable angle of the laser scanner, the needed data can be outside of the measured area. If the needed data, which is specified by the selection of the sonars, is outside of the measurable area, the TurtleBot needs to turn according to the given IV. For simplification, the implemented prototype always adjusts the TurtleBot’s orientation in every iteration exactly to the given IV. The turning of the TurtleBot is implemented by the Adjust_Robot_Orientation node, which is described in section 3.4.3.


3.2 Access of the laser scan data

The laser scanner used in this project is the “Hokuyo URG-04LX-UG01”1. It is fixed at the base of the robot and connected via a USB cable to the notebook of the TurtleBot. The laser sensor provides its raw sensor data at a fixed rate of 10 Hz, within a range from 20 mm to 5600 mm and a maximum angle of 240°. The laser scanner publishes its raw sensor data to the topic /scan. The raw sensor data is provided in a defined LaserScan message format2.

To use this laser scanner in ROS, we installed the “hokuyo-node” driver package and edited the TurtleBot’s description as proposed in various tutorials on the ROS homepage3.

To access the data within a ROS node, the node must import the LaserScan message and subscribe to the /scan topic. The sensor data can be accessed under the data structure ranges inside the LaserScan message. ranges is a float array with 512 values4, which represent the different measured distances. The hokuyo-node driver automatically filters the data and exchanges values out of range with “inf” and error readings with “nan”.
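
As an illustration of this access pattern, the following minimal rospy sketch subscribes to /scan and reads the ranges array. The node name and the example processing are assumptions and not part of the prototype’s actual nodes.

    #!/usr/bin/env python
    # Minimal sketch of accessing the laser scan data (illustrative only).
    import math
    import rospy
    from sensor_msgs.msg import LaserScan

    def scan_callback(msg):
        # msg.ranges is the float array with the measured distances.
        valid = [r for r in msg.ranges if not math.isinf(r) and not math.isnan(r)]
        if valid:
            rospy.loginfo("closest obstacle: %.2f m", min(valid))

    if __name__ == '__main__':
        rospy.init_node('scan_listener')
        rospy.Subscriber('/scan', LaserScan, scan_callback)
        rospy.spin()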

3.3 Overview of the prototype’s architecture

The architecture of this prototype is based on ROS and consists of different nodes, which exchange data by publishing and subscribing to different topics. Every node fulfills one specific task and together they represent an exemplary solution of how adjustable autonomy and shared control can be developed in ROS. Refer to Fig. 3.1 to get an overview of the whole architecture. In the following, every node of the architecture is described briefly in the order of execution.

1The specification of this laser scanner can be found on the official website of Hokuyo. https://www.hokuyo-aut.jp

2The LaserScan message is defined in the documentation of ROS: http://docs.ros.org/api/sensor_msgs/html/msg/LaserScan.html


Figure 3.1: Overview of the prototype’s architecture. The architecture of the prototype is based on iterative execution. At first an IV is provided with the Set_Initial_Vector node. The direction of the IV is received by the Adjust_Robot_Orientation node, which turns the robot accordingly. After the robot has reached its goal orientation, the information from the IV and the sensor data is taken to calculate the best direction θ. The calculated direction is sent to the Adjust_Robot_Orientation node, which turns the robot again. When the robot has reached its position, the Vetoing_Behavior node verifies the direction by evaluating the sensor data. If there is no obstacle close to the robot, the direction is assumed to be safe and the Move_Robot node can move the robot in that direction. The robot moves until either a new iteration is initiated by providing a new IV, or an obstacle is registered.

The architecture of the prototype is based on iterative execution. The execution of the algorithm is initiated by setting a specific IV through the Set_Initial_Vector node. To maintain a safe and effective operation of the navigation system, a new IV needs to be provided regularly. The Set_Initial_Vector node is a simple example of how the IV could be provided. In this case the IV is given directly by a human operator.

During the whole execution of the navigation system, the node Split_Laser_Scan_Data stays active and regularly publishes the necessary sensor data to the topic /sections. To access the laser scan data, this node subscribes to /scan, the topic where the raw sensor data from the laser scanner is published.


The received data is filtered, adjusted and split equally into sections of 22.5°. Only the minimum distance in each of the sections is added to the sections array and published to the topic /sections.

The node Adjust_Robot_Orientation receives the given IV and turns the robot exactly as proposed by it. When the robot has reached the goal orientation, the calculation begins. The calculations are done by the Calculate_Best_Direction node. This node receives the given IV and the relevant sensor data from the /sections topic. The background knowledge needed to understand how the calculations are done is described in section 2.2; all details about how the calculations are implemented in this prototype are presented in section 3.4.4. The best direction is published to the /theta topic.

After the determination of the best direction, this information is taken and validated with the Vetoing_Behavior node. Before the direction can be verified, the robot needs to adjust its orientation again. Now the Adjust_Robot_Orientation node takes the calculated direction as published to the /theta topic and turns the robot, just like it did with the IV.

When the robot has reached its position, the Vetoing_Behavior node verifies the direction by evaluating the sensor data from the /sections topic. If moving according to the selected course would result in a collision, a new IV has to be obtained in the next iteration and the system starts over with new calculations.

If the direction could be verified successfully, the results can be sent to the engines and the robot starts moving. The movement of the robot is interrupted by setting a new IV, which also initiates a new iteration of the algorithm.

3.4 Description of the prototype’s nodes

As shown in figure 3.1, the architecture of this prototype consists of many different nodes. All of these are working together and exchanging data to fulfill the functionality of sliding autonomy and shared control for the TurtleBot robot. In this section, the implementation of all nodes in the system is presented, in the order of execution.

3.4.1 Split_Laser_Scan_Data

The task of this node is to retrieve the raw sensor data from the laser scanner, to filter it and to split it equally into several sections. Refer to Fig. 3.2 for an overview of the Split_Laser_Scan_Data node. The sensor data is accessed by subscribing to the topic /scan.


Figure 3.2: Flowchart for the node Split_Laser_Scan_Data. The task of this node is to read the sensor data, split it into equal sections of 22.5° and publish this information to the topic /sections. The sensor data is published at a fixed rate of 10 Hz, until the node is stopped.

The raw laser data is published to this topic by the hokuyo-node driver at a fixed rate of 10 Hz in the LaserScan message type. More information about how the sensor data is accessed can be found in section 3.2.

Every time a new LaserScan message is published to the /scan topic, the node Split_Laser_Scan_Data receives this message and processes the data in it. The range data from the sensor is stored in the ranges array inside the message.

At first, the received data must be filtered for better data handling. All values smaller than the minimum or greater than the maximum measurable distance are removed. Furthermore, all special values for errors during the measurement, like “inf” or “nan”, are removed as well.

Afterwards, this filtered data needs to be split into equal sections. Since the laser scanner measures 240° and one section covers 22.5°, ten sections can be measured at once. The measured area is represented by the ranges array from the LaserScan message, which contains 512 distance values. Each section is assigned 48 of these values. The remaining 32 values (approx. 7.5° on each end) are located at both ends of the scannable area and are neglected in the calculations.

Finally, the minimum distance for each section is selected and added to the sections array. The sections array is published to the topic /sections.
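
A compact sketch of this splitting step is shown below. It assumes that invalid readings have already been replaced by the maximum measurable distance, so that the array keeps its full length of 512 values; the function and parameter names are chosen only for this illustration.

    def split_into_sections(ranges, values_per_section=48, num_sections=10, margin=16):
        # Drop the 16 outermost readings on each end (approx. 7.5 degrees per side),
        # cut the remaining 480 values into ten sections of 48 values (22.5 degrees
        # each) and keep only the minimum distance per section.
        usable = ranges[margin:margin + num_sections * values_per_section]
        sections = []
        for i in range(num_sections):
            section = usable[i * values_per_section:(i + 1) * values_per_section]
            sections.append(min(section))
        return sections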


Figure 3.3:Flowchart for the node Set_Initial_Vector. The argument for the IV is given to this node directly by the command line. The node publishes the given information to the topic /initialVector and also sets the value in the topic /robotPosition to False.

3.4.2 Initial_Vector

The IV is a proposal for the best direction. It can be provided in different ways: either directly through a human operator by using an input device, via specifying a goal inside an internal map by using heuristics, or by teleoperation through a control system. Fig. 3.3 gives an overview of the different execution steps of the node Set_Initial_Vector.

The task of this node is to receive the IV and to publish it to the topic /initialVector. In this case, the IV is provided by a human user through a command line interface. When the node is executed, a float value needs to be provided; this value is taken as the proposed direction.

The proposed direction is published to the topic /initialVector, in the special InitialVector message. This message contains a float value for the direction and a Boolean value. The Boolean value is used to tell the Calculate_Best_Direction node that a new initial vector has been sent and the system is ready for calculating the new best direction.

Additionally, this node also publishes False to the topic /robotPosition. This topic represents the robot’s orientation. If the robot faces in the same direction as the IV proposes, the data in this topic becomes True, otherwise it stays False. When a new IV is published and the proposed direction is not 0°, the robot’s orientation is always wrong and should be adjusted first.


Originally, in the work from Goodrich and Crandall, in addition to the direction, an IV also holds a magnitude Mag and a weight IVweight [18]. Refer to section 2.2 for more information.

In this prototype, for the sake of simplification, we use a fixed magnitude, which makes our robot always move with the same speed. Furthermore, we also use IVweight = 1.4 as a fixed value in our prototype5.
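
A possible shape of this node is sketched below, assuming a custom message definition with a float32 field for the direction and a bool field named calculations (the latter name is taken from the reference to /initialVector.calculations in section 3.4.4; the field name direction, the package name prototype_msgs and the latched publishers are assumptions for this sketch only).

    #!/usr/bin/env python
    # Sketch of the Set_Initial_Vector node; several names are assumptions (see text).
    import sys
    import rospy
    from std_msgs.msg import Bool
    from prototype_msgs.msg import InitialVector   # assumed package for the custom message

    if __name__ == '__main__':
        rospy.init_node('set_initial_vector')
        iv_pub = rospy.Publisher('/initialVector', InitialVector, queue_size=1, latch=True)
        pos_pub = rospy.Publisher('/robotPosition', Bool, queue_size=1, latch=True)
        rospy.sleep(0.5)                            # give the publishers time to connect

        direction = float(sys.argv[1])              # proposed direction from the command line
        iv_pub.publish(InitialVector(direction=direction, calculations=True))
        pos_pub.publish(Bool(data=False))           # robot is not yet facing the IV direction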

3.4.3 Adjust_Robot_Orientation

Figure 3.4: Flowchart for the node Adjust_Robot_Orientation. The purpose of the node is to adjust the orientation of the robot according to a given direction. The direction can be provided either with the IV or as the calculated best direction θ. When the robot has reached the desired orientation, robotPosition = True is published to /robotPosition, to inform the other nodes in the system about the robot’s orientation.

The purpose of the node Adjust_Robot_Orientation is to always turn the robot exactly to the direction of the IV or the best direction θ. Since the laser scanner on the robot can only measure 240°, the robot needs to turn to measure the right area.

The node stays active until it is shut down. If the node receives a new direction from either the IV or θ, it turns the robot accordingly and afterwards waits for new messages. Fig. 3.4 shows how the adjustment of the robot’s orientation is implemented.


To receive the necessary messages, this node subscribes to several topics. The subscribed topics are: /robotPosition, /initialVector, /odom and /theta. The topic /odom contains helpful information about the odometry of the robot; it always returns the current pose of the robot with position and orientation.

After the start of this node, it waits for new messages published to the topics /initialVector and /theta. These messages contain a desired orientation which the robot should adjust to. The robot only rotates while robotPosition == False, so when a new message in one of these topics is received, the node also checks the value in the topic /robotPosition.

Before starting to turn, the odometry information in the topic /odom needs to be reset. Afterwards the received direction is converted from degrees to radians. That is necessary because the odometry of the robot measures its orientation in radians. During the rotation, the proposed direction in radians is continuously compared with the orientation of the robot from the odometry. When the goal orientation is reached, the robot stops.

After reaching the goal position, robotPosition = True is published to the topic /robotPosition, to inform the other nodes in the system about the orientation of the robot. Now the node starts to wait again for incoming messages and adjusts the orientation of the robot whenever needed.
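
The rotation step can be sketched as follows. The command topic /cmd_vel, the fixed angular speed and the tolerance are assumptions for this illustration; the odometry reset described above is omitted and the angle difference is not wrapped, so the sketch only covers the simple case.

    #!/usr/bin/env python
    # Sketch of turning the robot to a target orientation (names partly assumed).
    import math
    import rospy
    from nav_msgs.msg import Odometry
    from geometry_msgs.msg import Twist
    from tf.transformations import euler_from_quaternion

    current_yaw = 0.0

    def odom_callback(msg):
        global current_yaw
        q = msg.pose.pose.orientation
        # Convert the quaternion from the odometry into a yaw angle in radians.
        _, _, current_yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])

    def rotate_to(target_deg, cmd_pub, tolerance=0.05):
        target = math.radians(target_deg)        # the comparison is done in radians
        rate = rospy.Rate(10)
        twist = Twist()
        while not rospy.is_shutdown() and abs(target - current_yaw) > tolerance:
            twist.angular.z = 0.3 if target > current_yaw else -0.3
            cmd_pub.publish(twist)
            rate.sleep()
        cmd_pub.publish(Twist())                  # stop turning

    if __name__ == '__main__':
        rospy.init_node('adjust_robot_orientation_sketch')
        rospy.Subscriber('/odom', Odometry, odom_callback)
        cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
        rospy.sleep(1.0)                          # wait for the first odometry message
        rotate_to(45.0, cmd_pub)                  # example: turn to 45 degrees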

3.4.4 Calculate_Best_Direction

The Calculate_Best_Direction node represents the core of the system, because it is the place where all the input data is processed. The data is processed in the same way as proposed in the paper “Experiments in Adjustable Autonomy” by Goodrich and Crandall [18].

The algorithm starts by defining the variables which are used for the calculations:

• Rejection array R with R = [0.1, 0.4, 0.7, 0.8, 0.7, 0.4, 0.1]

• Pull array P with P = [0.1, 0.45, 1.0, 1.0, 1.0, 0.45, 0.1]

• Weight of the IV with IVweight = 1.4

• Magnitude of the IV Mag with Mag = 1.0


Figure 3.5: Flowchart for the node Calculate_Best_Direction. This node only starts its calculations if robotPosition == True. If that is the case, the proposed IV and the sensor data are evaluated and the best direction for the next movement is determined. The result is afterwards published to the topic /theta. Before the robot moves in the calculated direction, the direction needs to be verified by the vetoing behavior.

All variables, except the magnitude, are defined with exactly the same values as in the paper. We chose a fixed value for the magnitude in our prototype, so that the robot always moves with the same speed. Originally, the IV also proposes a certain speed value.

After initializing all the variables, the algorithm only starts if the Boolean values in the topics /robotPosition and /initialVector (the calculation flag) are both True. This means that the robot is in position, a new IV was proposed and the laser scanner measured the right area. The data from the laser scanner is accessed by receiving the messages published to the topic /sections.

Calculation of the sonar angles

Every section covers exactly 22.5° and forms an angle between the middle of the section and the center of the robot. These angles are called, like in the original paper, sonar angles.

The sonar angles are initialized with fixed values for the ten sections which can be measured by the laser scanner (see Tab. 3.1).

If the robot rotates by a given angle, the sonar angle of each section changes depending on how far the robot rotated. In each execution of this node, the new sonar angles therefore need to be calculated for every section.


Section      Sonar angle (in radians)
Section 0    −1.325
Section 1    −1.030
Section 2    −0.736
Section 3    −0.441
Section 4    −0.147
Section 5     0.147
Section 6     0.441
Section 7     0.736
Section 8     1.030
Section 9     1.325

Table 3.1: Initial sonar angles. The sonar angles change accordingly with the rotation of the robot. Negative angles are located on the left side of the robot and positive ones on the right.

The new sonar angles are determined by adding the angle of the rotation to the initially defined sonar angles and afterwards shifting the values into the range from −π to π. The values are stored in the array SonarAngles.
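A possible implementation of this update step is sketched below; the function name is chosen for illustration, and wrapping the values into [−π, π] with atan2 is only one of several ways to perform the shift.

import math

# Initial sonar angles from Tab. 3.1 (radians, section 0 to section 9).
INITIAL_SONAR_ANGLES = [-1.325, -1.030, -0.736, -0.441, -0.147,
                         0.147,  0.441,  0.736,  1.030,  1.325]

def update_sonar_angles(rotation_rad):
    """Shift every sonar angle by the robot's rotation and wrap it into [-pi, pi]."""
    updated = []
    for angle in INITIAL_SONAR_ANGLES:
        shifted = angle + rotation_rad
        updated.append(math.atan2(math.sin(shifted), math.cos(shifted)))
    return updated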

Determination of the best direction

After the calculation of the SonarAngles, the next step is calculating the weighted votes Vi. These votes Vi represent the priority for the algorithm to choose the sonar angle of section i as the best direction for movement.

In the original paper, only seven votes are used to calculate the best direction θ. Therefore, seven sections need to be selected for the calculations out of the ten sections available from the laser scanner. The selected sections are always the same: section[1] to section[7]. The other sections 0, 8 and 9 are not used for the calculations. The selection of the sections could be done in a more sophisticated way in a future version of this prototype.

To calculate the weighted votes, the algorithm works on the sections array S, which contains the distances to the closest obstacle in each section. Si stands for the distance value in section i. The following algorithm 1 shows how to determine the votes Vi [18].


Algorithm 1: This algorithm builds the array V, containing the priority values for moving in each of the directions, from the sections array. The sections array S contains the distance values to the closest obstacle in each section. R and P are the predefined rejection and pull arrays. WarningDist and SafeDist are also predefined values for two different distances. The definition of the predefined values is presented in section 3.4.4.

forall Si ∈ S do
    if Si <= WarningDist then
        Vi = ((Si − WarningDist) / WarningDist) ∗ Ri
    else if Si > SafeDist then
        Vi = Pi
    else
        Vi = 0
    end
end

At first, the measured distance value for each section Si is compared with the warning and the safe distance. Distances smaller than the warning distance receive a negative priority value, which gets multiplied with the corresponding value from the rejection array Ri. Distances bigger than the safe distance, on the other hand, receive the corresponding value from the pull array Pi. A neutral value is assigned if the distance Si lies between the warning and the safe distance. The weighted votes Vi are stored in the votes array V.
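For illustration, a small Python sketch of Algorithm 1 is given below. It expects the seven selected section distances (sections 1 to 7); the concrete values for the warning and the safe distance are placeholders, since only the rejection and pull arrays and the weights are restated in this section.

# Sketch of Algorithm 1: build the votes array V from the selected sections.
R = [0.1, 0.4, 0.7, 0.8, 0.7, 0.4, 0.1]    # rejection array
P = [0.1, 0.45, 1.0, 1.0, 1.0, 0.45, 0.1]  # pull array
WARNING_DIST = 0.3  # assumed warning distance in metres (placeholder)
SAFE_DIST = 0.6     # assumed safe distance in metres (placeholder)

def weighted_votes(sections):
    """sections: the seven selected distance values S1..S7 in metres."""
    votes = []
    for i, s_i in enumerate(sections):
        if s_i <= WARNING_DIST:
            # Obstacle too close: negative vote, scaled with the rejection array.
            votes.append(((s_i - WARNING_DIST) / WARNING_DIST) * R[i])
        elif s_i > SAFE_DIST:
            # Direction is clear: positive vote taken from the pull array.
            votes.append(P[i])
        else:
            # Neutral value in between the two distances.
            votes.append(0.0)
    return votes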

The votes array V is used in the next step to calculate the values x and y. x and y are coordinates which mark the best direction for the robot's movement in a coordinate system. The coordinates are calculated using the following formulas [18]:

x = IVweight ∗ Mag ∗ cos(IV) + Σ_{i=0..6} Vi ∗ cos(SonarAnglei)

y = IVweight ∗ Mag ∗ sin(IV) + Σ_{i=0..6} Vi ∗ sin(SonarAnglei)

With the help of the inverse tangent function, the opening angle θ can be determined from the calculated location in the coordinate system: θ = tan⁻¹(y/x). The angle θ represents the best direction for the robot's movement.

Figure 3.6 visualizes the relation between the values x and y and how θ can be obtained from them.
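The following sketch combines the IV with the weighted votes and determines θ. It uses math.atan2, which behaves like tan⁻¹(y/x) but also handles x = 0 and the correct quadrant; the function and parameter names are chosen for illustration.

import math

IV_WEIGHT = 1.4  # weight of the IV (see section 3.4.4)
MAG = 1.0        # fixed magnitude used in this prototype

def best_direction(iv_rad, votes, sonar_angles):
    """Return theta in radians for the given IV (radians), votes V and sonar angles."""
    x = IV_WEIGHT * MAG * math.cos(iv_rad)
    y = IV_WEIGHT * MAG * math.sin(iv_rad)
    for v_i, angle_i in zip(votes, sonar_angles):
        x += v_i * math.cos(angle_i)
        y += v_i * math.sin(angle_i)
    return math.atan2(y, x)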

After finishing the calculations, the direction θ is published to the topic /theta. The published message additionally contains the Boolean value Checked, which is set to False. This value shows whether the direction has already been verified by the vetoing behavior or not. Before the direction can be verified, the robot needs to adjust its orientation again, according to the calculated angle θ.


Figure 3.6: Determination of the opening angle θ. θ represents the best direction for the robot's movement and is calculated by applying the inverse tangent function to the two coordinates x and y.


3.4.5 Vetoing_Behavior

The node Vetoing_Behavior is responsible for the safety of the navigation system. It verifies the calculated direction by evaluating the sensor data from the laser scanner. The vetoing behavior starts when the robot has reached the orientation θ, as calculated by the Calculate_Best_Direction node. Additionally, the value Checked inside the messages published to the topic /theta should be False, which means that the direction has not been verified yet.

The verification is simple: if there is an obstacle closer than the defined safe distance, the direction is not safe and the robot cannot move in that direction. If no obstacle is registered, the direction is assumed to be safe and the node Move_Robot has the permission to move the robot.


Figure 3.7: Flowchart of the node Vetoing_Behavior. The purpose of this node is to verify the calculated best direction. That is achieved by evaluating the sensor data. If there is an obstacle closer than the defined safe distance to the robot, the direction is not safe. If the direction is safe, the robot is free to move in that direction, as long as the condition robotPosition == True and theta.checked == True evaluates to True.

The sensor data received from the topic /sections is compared to the defined safe distance. The predefined safe distance is set to 0.30 m (30 cm) in the prototype. If the minimal distance from the sensor data is smaller than the safe distance, the direction could not be verified and the Checked value stays False. But if there is no obstacle and the direction is safe, the Checked value is set to True. Afterwards, the direction θ together with the Checked value is published as a new message to the topic /theta.
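A minimal sketch of this check, assuming the node receives the minimum distances of the measured sections as a plain list; the function name is chosen for illustration.

SAFE_DIST = 0.30  # metres, as defined in the prototype

def direction_is_safe(section_distances):
    """The direction is safe only if no obstacle is closer than the safe distance."""
    return min(section_distances) > SAFE_DIST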

3.4.6 Move_Robot

The task of the Move_Robot node is very simple: it regularly checks if both robotPosition and theta.checked are True. If they are, the robot moves with a fixed speed in the calculated and verified best direction θ.

When a new IV is provided through the Set_Initial_Vector node, robotPosition is also set to False. That causes the check at the beginning of this node to evaluate to false and therefore the robot stops its movement.
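A minimal sketch of this decision is given below; the fixed speed value and the assumption that the motion command is sent as a Twist message are placeholders, since the actual values of the prototype are not restated here.

# Minimal sketch of the Move_Robot decision: drive only if both flags are True.
from geometry_msgs.msg import Twist

FIXED_SPEED = 0.2  # m/s, placeholder for the prototype's fixed speed

def move_if_allowed(robot_position, theta_checked, cmd_pub):
    """Publish a forward motion command only if orientation and direction are confirmed."""
    twist = Twist()
    if robot_position and theta_checked:
        # The robot already faces theta, so driving straight follows the best direction.
        twist.linear.x = FIXED_SPEED
    # Otherwise the zero Twist stops the robot.
    cmd_pub.publish(twist)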

3.5 Description of the prototype's topics

The prototype for the utilitarian voting scheme for navigation is made of various small nodes which continuously need to communicate and exchange data to function properly.


Figure 3.8: Flowchart of the node Move_Robot. If the calculated direction θ is verified by the vetoing behavior, the robot is free to move in that direction. The movement of the robot is interrupted by initiating a new iteration of the algorithm through providing a new IV.

The nodes establish their connection through messages. The messages are posted in a certain format to the corresponding topic, and every message posted to a topic needs to be in the right format for that specific topic. A node which subscribes to a topic receives all messages posted to that topic. A node can publish and subscribe to any number of topics.

In this section, every custom message used in this prototype, which is not already predefined by the ROS libraries, is presented. Refer to Tab. 3.2 for an overview of all used topics and further information about the exchanged messages.


Topic            Data structure                           Description
/scan            LaserScan message                        Official ROS format for publishing sensor data
/sections        float32[] "sections"                     Minimum distance values for each section as measured by the laser scanner
/initialVector   float32 "direction", bool "calculation"  Used for proposing the IV and indicating whether the system should calculate a new direction θ
/theta           float32 "direction", bool "checked"      Contains the calculated best direction θ; the Boolean shows if the direction was already verified
/robotPosition   bool "data"                              Shows if the robot reached the goal orientation or needs further rotation

Table 3.2: Overview of all topics used in the prototype for the utilitarian voting scheme for navigation. All messages published to one of the topics need to be in the format shown in the data structure column.

3.5.1 /scan

The Hokuyo node device driver for the mounted Hokuyo URG-04LX-UG01 laser scanner publishes the raw data acquired by the laser scanner to the topic /scan. The messages are in the LaserScan format, which is predefined in the ROS documentation.

The LaserScan message provides not only the sensor data, but also much information about the used hardware, such as the minimum and maximum measuring distance and angle, the angular distance between measurements and the time between scans [2].

3.5.2 /sections

The raw data from the laser scanner is accessed and processed by the Split_Laser_Scan_Data node. This node filters the raw data and splits it equally into sections.

Afterwards, the minimum distance value for each section is determined and added to the sections array. The sections array contains the minimum distances from all ten sections as float32 values.
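A sketch of this processing step is shown below, assuming the ranges array of the LaserScan message is split into ten equally sized chunks; invalid readings (inf or NaN) are ignored when taking the minimum. The helper name and the handling of empty chunks are assumptions.

import math

def split_into_sections(ranges, num_sections=10):
    """Return the minimum valid distance of each section as a list of floats."""
    section_size = len(ranges) // num_sections
    sections = []
    for i in range(num_sections):
        chunk = ranges[i * section_size:(i + 1) * section_size]
        valid = [r for r in chunk if not math.isinf(r) and not math.isnan(r)]
        sections.append(min(valid) if valid else float('inf'))
    return sections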

The sections message is used to provide only the necessary data to the other nodes, so that the amount of exchanged data is lower. Furthermore, the other nodes do not need to work with the raw data or transform the data into another format themselves.
