Semi-Autonomous, Teleoperated
Search and Rescue Robot

Kristoffer Cavallin and Peter Svensson

February 3, 2009

Master's Thesis in Computing Science, 2*30 ECTS credits
Supervisor at CS-UmU: Thomas Hellström

Examiner: Per Lindström

Umeå University

Department of Computing Science
SE-901 87 UMEÅ

SWEDEN

Abstract

The interest in robots in the urban search and rescue (USAR) field has increased over the last two decades. The idea is to let robots move into places where human rescue workers cannot, or, due to high personal risks, should not enter.

In this thesis project, an application is constructed with the purpose of teleoperating a simple robot. This application contains a user interface that utilizes both autonomous and semi-autonomous functions, such as search, explore and point-and-go behaviours.

The purpose of the application is to work with USAR principles in a refined and simplified environment, and thereby increase the understanding of these principles and how they interact with each other.

Furthermore, the thesis project reviews the recent history and current status of robots in USAR applications, and the use of teleoperation and semi-autonomous robots in general.

One of the conclusions drawn towards the end of the thesis is that the use of robots, especially in USAR situations, will continue to increase. As robots and support technology become more advanced and cheaper by the day, teleoperation and semi-autonomous robots will also be seen in more and more places.

Key Words: Robotics, Urban Search and Rescue, Path Planning, Semi-autonomy, Teleoperation.

Contents

1 Introduction 1

1.1 Goal . . . . 2

1.2 Disposition . . . . 3

2 Urban Search and Rescue 5

2.1 Background . . . . 5

2.2 USAR Robotics . . . . 7

2.3 Robotic USAR in practice . . . . 9

2.4 Robot design . . . . 10

2.4.1 Requirements . . . . 11

2.4.2 Choosing Sensors . . . . 13

2.4.3 Related Work . . . . 17

3 Human-Robot Interaction 21

3.1 Teleoperation . . . . 21

3.1.1 Time delays . . . . 23

3.1.2 Planetary Exploration . . . . 23

3.1.3 Unmanned Aerial Vehicles . . . . 24

3.1.4 Urban Search and Rescue . . . . 24

3.1.5 Other examples of teleoperation . . . . 24

3.2 Semi-autonomous Control . . . . 25

3.2.1 Shared Control . . . . 25

3.2.2 Traded Control . . . . 25

3.2.3 Safeguarded teleoperation . . . . 26

3.2.4 Adjustable autonomy . . . . 26

3.3 Common ground and situation awareness . . . . 28

3.4 User Interface . . . . 29

3.4.1 Background . . . . 30

3.4.2 Telepresence . . . . 30

3.4.3 Sensor fusion . . . . 33

3.4.4 Visual feedback . . . . 35


3.4.5 Interactive maps and virtual obstacles . . . . 35

3.4.6 USAR User Interface Examples . . . . 36

4 Implementation 41

4.1 Hardware set-up . . . . 42

4.1.1 Amigobot and the Amigobot Sonar . . . . 42

4.1.2 Swissranger SR3000 camera . . . . 43

4.2 Software . . . . 47

4.2.1 System overview . . . . 47

4.2.2 Coordinate transformation . . . . 52

4.2.3 Obstacle detection . . . . 54

4.2.4 Map Building . . . . 55

4.2.5 Sensor fusion . . . . 58

4.2.6 Path planning . . . . 59

4.2.7 Autonomous behaviours . . . . 61

4.2.8 Control . . . . 67

5 Results 71

5.1 Obstacle Detection . . . . 71

5.2 Path planning . . . . 74

5.3 Human Detection . . . . 74

5.4 Exploration behaviour . . . . 75

5.5 Manual control . . . . 76

5.6 Mapping . . . . 79

6 Conclusions 83

6.1 Discussion . . . . 83

6.2 Future work . . . . 84

7 Acknowledgments 87

References 89

List of Figures

1.1 The Packbot USAR-robot facing a potential USAR scenario. . . . . 1
1.2 System overview, showing the laptop, the robot, the sonars, the 3D-camera and the connections between them. . . . . 3
2.1 Oklahoma City, OK, April 26, 1995 - Search and Rescue crews work to save those trapped beneath the debris, following the Oklahoma City bombing (FEMA News Photo). . . . . 6
2.2 Hierarchy of a FEMA USAR task force that includes four robotic elements[8]. . . . . 7
2.3 A destroyed PackBot, made by iRobot, displayed at the 2007 Association for Unmanned Vehicles International (AUVSI) show. The robot was destroyed while surveying an explosive device in Iraq. . . . . 8
2.4 iRobot's PackBot is an example of a robot that can operate even when it is flipped upside down[27]. . . . . 12
2.5 An example of an image created by using data from a 3D-camera[1]. . . . . 15
2.6 The Solem robot, which was used in the World Trade Center rescue operations. . . . . 17
2.7 The operator control unit for controlling iRobot's PackBot. This portable device is used to control the robot from a distance. . . . . 18
2.8 A prototype image of a robot swarm using a world embedded interface. The arrows point towards a potential victim in a USAR situation. . . . . 19
3.1 The principle of teleoperation. The operator (or "master") is connected to the robot (or "slave") via an arbitrary connection medium. The operator controls the robot based on the feedback received from the robot's remote environment[17]. . . . . 22
3.2 The Sojourner Mars rover, which was sent by NASA to Mars in 1997 to explore the planet. (Image by NASA) . . . . 23
3.3 A Joint Service Explosive Ordnance Disposal robot 'Red Fire' prepared to recover a mine on February 9, 2007 in Stanley, Falkland Islands (photo by Peter Macdiarmid/Getty Images). . . . . 25
3.4 The neglect curve containing teleoperation and full autonomy. The x-axis represents the amount of neglect that a robot receives, and the y-axis represents the effectiveness of the robot. The dashed curve represents intermediate types of semi-autonomous robots, such as a robot that uses waypoints[16]. . . . . 27
3.5 Autonomy modes as a function of neglect. The x-axis represents the amount of neglect that a robot receives, and the y-axis represents the effectiveness of the robot[16]. . . . . 28
3.6 An implementation of the scripts concept, which is an attempt at improving common ground between a robot and an operator[6]. . . . . 29
3.7 An example of Virtual Reality equipment, including headgear and motion sensing gloves. (Image by NASA) . . . . 31
3.8 A prototype image of a world embedded interface, which is a mix of reality and an interface. . . . . 32
3.9 A traditional teleoperated system (top), and a system which utilizes a predictive display to reduce the problems caused by latency (bottom). . . . . 33
3.10 The CASTER user interface. In addition to the main camera, three other cameras are used. At the top of the interface there is a rear-view camera. In the lower corners of the screen, auxiliary cameras show images of the robot's tracks. Various feedback data is superimposed on the main camera's image[19]. . . . . 36
3.11 The Idaho National Laboratory User Interface. The most distinctive feature is the mixture between real world images and a virtual perspective[32]. . . . . 38
4.1 Block scheme of the main components in the system. Every oval represents a component of the system, and the lines show how they relate to each other. . . . . 41
4.2 The main hardware components used for this project: the Amigobot robot with the SR3000 3D-camera attached. . . . . 42
4.3 The SR3000 Swissranger 3D-camera[1]. . . . . 44
4.4 4-times sampled incoming light signal. The figure was taken from the SR3000 manual[1]. . . . . 44
4.5 Example of an application utilizing the SR3000 camera, taken from the SR3000 manual[1]. . . . . 45
4.6 Schematic drawing illustrating the problem with light scattering artifacts. This occurs when the camera observes nearby objects, or objects that reflect extraordinarily much of the transmitted light and shine so brightly back to the sensor that all light cannot be absorbed by the imager. This in turn results in the light being reflected to the lens, and back to the imager again. . . . . 46
4.7 Illustration of a problematic scenario for the SR3000 camera featuring multiple reflections. . . . . 46
4.8 The graphical user interface of the system. Parts A and D are the 3D-camera displays, part B is the map display, part C is the command bar and part E is the output console. . . . . 48
4.9 The sensor tab (top) and the map tab (bottom) in the settings menu. The sensor tab contains settings for the 3D-camera (mounting parameters, obstacle detection parameters, etc.) and the sonar (the coverage angle). The map tab contains settings for the robot's map (update frequencies, display modes, etc.). . . . . 51
4.10 The architecture of the software as implemented in Java. . . . . 53
4.11 The SR3000 mounted on the Amigobot. There are two different coordinate systems available, which is the reason that a transformation is used from the 3D-camera's coordinate system (y, z) to the robot's coordinate system (y′, z′). . . . . 54
4.12 The angular sensor coverage of the robot. The dark cones are covered by ultrasonic sonars, and the light cone is covered by the 3D-camera. White space denotes dead, uncovered, angles. . . . . 55
4.13 The base of the sensor model used for updating the map with the help of a sonar. The cone represents a sonar reading, and the dark areas represent the parts of the reading that can provide the occupancy grid with new information. Nothing can be determined about areas C and D. . . . . 56
4.14 Wave-front propagation in a map grid[23]. The starting point is seen in the top left corner of (a). The first wave of the wave-front is shown as a light-colored area around the starting point in (b). The next wave is shown in (c), with the old wave now being shown in a darker color, etc. More waves are added until the goal point is found. . . . . 60
4.15 The left flowchart describes the wave-front matrix generation, which is the process of creating the matrix that describes the wave-front propagation between the starting point and the desired goal point. The right flowchart describes the way the wave-front propagation matrix is used to find the best path between the points. . . . . 62
4.16 The simplification of "human beings" used in the application. Two cylindrical wooden blocks fitted with reflective tape, easily detected by the SR3000 camera. . . . . 63
4.17 A big red triangle moving in to encircle the location of a suspected "human being" in the sensor view section of the user interface. This is done to clearly indicate this (possibly) important discovery. . . . . 63
4.18 A big red triangle moving in to encircle the location of a suspected "human being" in the map section of the user interface. This is done to clearly indicate this (possibly) important discovery. . . . . 64
4.19 The two grids used for the exploration behaviour. The map grid (left) and the frontier grid (right). Gray areas represent obstacles and F's represent frontiers. . . . . 64
4.20 Target selection for the exploration behaviour. Blank cells are unoccupied, gray cells are occupied and numbered cells are frontiers[21]. . . . . 65
4.21 Exploration behaviour flowchart, showing the process of the autonomous exploration behaviour of the robot. . . . . 66
4.22 Navigation with the help of named locations: in this picture a blue dotted path from the robot to the location "Treasure" can be seen. This is one of the results after the activation of the command "Go to location: Treasure". The other result is the initiation of the robot's journey towards this location. . . . . 68
4.23 Navigation with the help of waypoints: as seen in both pictures, the three waypoints wp0, wp1 and wp2 are already added. The left picture shows the menu with various options. After the activation of the command "Follow waypoint path" the view changes into the one visible to the right and the robot starts to follow the dotted blue line, moving to all waypoints in order. . . . . 69
5.1 An overview of the setup and the environment used for testing the system. . . . . 72
5.2 A map of the testing area with all important features marked, such as the starting position of the robot, the obstacles and the "humans" to be found. . . . . 72
5.3 A case when the 3D-camera obstacle detection provides good results. Both the pillar and the Amigobot twin are detected without problems. . . . . 73
5.4 A case when the 3D-camera obstacle detection provides bad results. The segmented shape of the chair poses problems. It is only partially detected. . . . . 73
5.5 A visualization of the robot's chosen path (the dotted line) after a goto command has been processed. . . . . 74
5.6 A third person perspective of an encounter between the robot and a "human being". . . . . 75
5.7 The system provides visual feedback (a triangle shape zooms in on the detected "human") whenever the robot detects "humans". . . . . 76
5.8 The test-case environment with the chosen path of the robot's exploration behaviour from test-case number one. None of the "humans" were found. . . . . 77
5.9 The resulting sensor-fused map of test-case number one. No human label is included, since no human object was found. . . . . 77
5.10 The test-case environment with the chosen path of the robot's exploration behaviour in test-case number two. One "human" was found; an "x" and an arrow indicate where that "human" was found. . . . . 78
5.11 The resulting sensor-fused map of test-case number two. One human label, "human0", can be seen in this picture. The other "human" was not found. . . . . 78
5.12 A third person view of the test arena for the exploration test-cases, showing the robot on a mission to find the two "humans" (encircled). The depicted path is the one the robot chose in test-case two. . . . . 79
5.13 The resulting map of the manually controlled test-case, using only the 3D-camera data for mapping. There are many unexplored areas left (gray areas), because of the low resolution and restrained coverage area of the 3D-camera. . . . . 80
5.14 The resulting map of the manually controlled test-case, using only the sonar data for mapping. There is clutter all over the picture, and some walls are missing. This is mostly due to specular reflections. . . . . 81
5.15 The resulting map of the manually controlled test-case. The map was constructed by fusing both the sonar and 3D-camera data. It contains fewer holes than the 3D-camera map, as well as less clutter and more consistent walls than the sonar map. . . . . 81

List of Tables

3.1 Sensor failure situations of a common sensor suite[22]. . . . . 34

1 Introduction

Humans are well adapted to handling a huge variety of tasks. After a short training period, we can handle almost any assignment (even if sometimes in an inadequate or inefficient way). However, we have several serious disadvantages.

First of all, we have problems concentrating on repetitive work and we are prone to making mistakes. Most people also agree that humans are not expendable, and that excessive human suffering is intolerable. As a consequence of this, and also of the fact that we are fragile and vulnerable to things such as poison, smoke, radiation, heat, cold, explosions and corrosive materials, we would love to have some sort of non-human stunt-men to assign our hazardous and dull tasks to.

The vision and the dream come in the form of super-robots that can substitute for a human in any task with great ease. Unfortunately, current technology has a long way to go before it satisfies these ambitious wishes.

Figure 1.1: The Packbot USAR-robot facing a potential USAR scenario.

The best solution at hand is a compromise: a compromise in which humans and robots team up, where the robot does the repetitive, dull and hazardous parts of the work, while the human concentrates on finding patterns, drawing conclusions and helping the robot understand what part of a task to concentrate on and in what order to execute its sub-goals. The general idea is to let the robot do the things it does better than the human, and vice versa. Each part of the system compensates for the other part's shortcomings, while utilizing each part's strengths to their full potential. This area of intelligent robotics is called teleoperation.

Teleoperation and semi-autonomy are hot topics and their potential is being explored not only by industry and the military, but also by the service sector.

A specific sub-area of semi-autonomous robotics is urban search and rescue (USAR) robotics. The goal of USAR is to rescue people from hazardous sites where disasters such as hurricanes, bombings or floods have occurred. Working at such a site, for example finding people in a collapsed building, can be extremely dangerous for rescue workers considering the risks of being crushed by debris, suffocation, etc. It would be a much better situation if these risks could be transferred to robots instead. Another reason is size, since a small robot can go places that a human cannot.

1.1 Goal

The goal of this project is to program a simple semi-autonomous, teleoperated search and rescue robot that is capable of working together with an operator as a team, solving problems such as exploring its surroundings and performing object detection, which form the basis of a successful USAR-robot.

The work is divided into two distinct parts. The first part is to create a graphical user interface with which the operator can easily get an intuitive overview of what is going on around the robot, while also allowing him or her to access the control functions of the robot. The other part consists mainly of semi-autonomous, or autonomous, functions that allow the robot to carry out high-level tasks, such as navigating from one location to another, thereby minimizing the cognitive load that is put on the operator.

In addition to a number of sensors, with which it observes the world, a successful USAR robot also needs an intelligent user interface (UI). The UI between the human operator and the robot should be able to transfer commands from the human to the robot and information from the robot to the human operator. The UI should maximize this information transfer and at the same time minimize cognitive load on the operator.

The software should make it possible for the operator to easily control the robot with the help of, for example, a regular keyboard and a mouse. The operator should also have quick access to several options, making it possible to choose which sensors to receive information from, how the robot should respond to various commands, etc. The operator should also be able to instruct the robot to move to some location with a click on the map in the user interface.

The map, depicting the robot's surroundings, should be constructed with the help of the robot's sensor readings. This map should be good enough to be of use for both the robot's navigation and the operator's situational awareness later on.

The operator should also be able to command the robot to execute some autonomous behaviours, such as "explore" and "find humans". Things to consider here include how the robot is supposed to navigate around obstacles without the help of the operator, and how it will identify interesting objects.

Figure 1.2: System overview, showing the laptop, the robot, the sonars, the 3D-camera and the connections between them.

As for how the sensor data should be presented, the UI should have some sort of map and a sensor view showing the data received from the sensors in an intuitive way for the operator. It should also have some sort of transparency, allowing the user insight into the robot's autonomous functions, perhaps through a console where the operator can choose different levels of output, ranging from high-level messages such as "Robot is now exploring its surroundings." to low-level information such as "Robot initiating partial movement to (x = 77, y = 18) as a sub-goal toward (x = 89, y = 20)."
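The level-of-output idea can be captured by a very small filter on the operator console. The sketch below is only an illustration of the idea, written in Java like the rest of the project; the class name, the level names and the threshold mechanism are our own and are not taken from the implementation described in Chapter 4.

```java
// Minimal sketch of an operator console with selectable verbosity.
// Names and levels are illustrative only, not the thesis implementation.
public class OperatorConsole {

    public enum Level { BEHAVIOUR, NAVIGATION, DEBUG }

    private Level threshold = Level.BEHAVIOUR; // only high-level messages by default

    public void setThreshold(Level level) {
        this.threshold = level;
    }

    // Print the message only if it is at least as important as the threshold.
    public void report(Level level, String message) {
        if (level.ordinal() <= threshold.ordinal()) {
            System.out.println("[" + level + "] " + message);
        }
    }

    public static void main(String[] args) {
        OperatorConsole console = new OperatorConsole();
        console.report(Level.BEHAVIOUR, "Robot is now exploring its surroundings.");
        console.report(Level.NAVIGATION, "Robot initiating partial movement to (x=77, y=18) "
                + "as a sub-goal toward (x=89, y=20)."); // filtered out at default threshold

        console.setThreshold(Level.NAVIGATION);          // operator asks for more detail
        console.report(Level.NAVIGATION, "Robot initiating partial movement to (x=77, y=18).");
    }
}
```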

When the robot is navigating from one point to another, it should perform obstacle detection, meaning that it should detect where obstacles are and avoid them. In order to truly detect humans autonomously, the robot would require expensive equipment not available for this project; therefore, a simplified version of detection will be implemented.

The hardware that was to be used in this project was the Amigobot from Mobile Robots Inc, equipped with a ring of sonars combined with a Swissranger SR3000 3D-camera. The purpose of the 3D-camera was to supply a 3D-image of the world and to determine distances to objects. This would help with obstacle detection and map building, and it would simplify the process of delivering a clear overview of the robot's situation to the operator. The sonar would supply information about the robot's close surroundings, and would compensate for the few shortcomings of the 3D-camera. The sonar would also contribute to map building. The software parts of the project were to be implemented in the programming language Java.

In summary, the goal is to program a robot with the following capabilities:

– Manual navigation.

– Semi-autonomous navigation.

– A human-finding behaviour.

– An autonomous exploration behaviour.

A graphical user interface should be constructed, with the capability of controlling the robot and observing its environment.
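To make the division of responsibilities concrete, the listed capabilities can be thought of as one small control interface. The sketch below is our own illustrative shorthand in Java (the project language); the interface and method names are invented here and do not correspond to the actual class structure presented in Chapter 4.

```java
// Illustrative sketch only: the four target capabilities expressed as a Java
// interface. Names are our own shorthand, not classes from the implementation.
public interface RescueRobotControl {

    /** Manual navigation: direct drive commands from e.g. keyboard or mouse. */
    void drive(double translationalSpeed, double rotationalSpeed);

    /** Semi-autonomous navigation: plan and follow a path to a position on the map. */
    void goTo(double x, double y);

    /** Human-finding behaviour: report map positions of suspected "humans". */
    java.util.List<double[]> findHumans();

    /** Autonomous exploration behaviour: extend the map until no frontiers remain. */
    void explore();
}
```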

1.2 Disposition

Chapter 1, Introduction

This chapter provides an introduction to this thesis project, describes the problems, introduces our preliminary approach and loosely forms the context for the rest of the project.

Chapter 2, Urban Search and Rescue

This chapter brings attention to some Urban Search and Rescue (USAR) background and history. It discusses the role of robots in this field, and reviews some real-world examples. Later in the chapter, the design of a USAR-robot is examined thoroughly.

Chapter 3, Human-Robot Interaction

This chapter reviews the aspects of Human-Robot Interaction relevant to this project: teleoperation, common ground, situational awareness, telepresence and more. It brings forth examples from each field, and discusses both problems and strengths of the different aspects.

Chapter 4, Implementation

This chapter presents our software, what parts we included in our project, what sensors we used, and how we used them. Principles and algorithms we included in the project are explained and discussed.

Chapter 5, Results

This chapter presents the results of the thesis. Various test-cases are conducted with the intention of showing the different aspects of the system.

Chapter 6, Conclusions

This chapter presents a discussion about the results of the project. It also contains suggestions for future work on the project.


2 Urban Search and Rescue

The purpose of the Urban Search And Rescue (USAR) field is to find humans, or other valuable assets, in distress during catastrophes (search) and then to relocate them to safety (rescue). Although the "Urban" part of the name indicates that this activity usually takes place in cities, USAR-missions also take place in more lightly populated areas.

A few examples of the disasters that require the expertise of USAR-teams are earthquakes, tornadoes and explosions. Figure 2.1 shows a USAR-team in action trying to save victims trapped under the debris of a collapsed building following the 1995 Oklahoma City bombing.

Being a USAR-worker is a very tough and dangerous occupation, which is the main reason that a lot of effort has been directed towards designing robots to assist the USAR-crews with their missions.

This chapter will briefly describe the background of USAR, the use of robotics within this field, and lastly it will discuss the appropriate robot hardware that is required to complete actual USAR-tasks.

2.1 Background

Urban search and rescue teams are required to handle situations that are classed as "an emergency involving the collapse of a building or other structure"[14].

When a disaster strikes, such as an earthquake, hurricane, typhoon, flood, dam failure, technological accident, terrorist activity or the release of hazardous materials, rescue workers have to step in, in order to prevent, or at least minimize, the loss of human lives. While the situation can vary a lot, the main objective of the rescue workers is to find humans in need of assistance (Search) and to move them out of harm's way (Rescue). They also have two secondary objectives. The first is to provide structural engineering expertise in order to evaluate the structural integrity of a collapsing building, so that it can be established which parts of the building are stable and thus safe to enter. The other secondary objective is to provide the evacuated victims with medical care.

In most countries, the task of search and rescue is divided among various institutions such as fire-fighters, the police and the military. But in addition to this, many countries have established special departments for this purpose. The Federal Emergency Management Agency (FEMA), which was created in 1979, is an example of such an agency in the United States.

Figure 2.1: Oklahoma City, OK, April 26, 1995 - Search and Rescue crews work to save those trapped beneath the debris, following the Oklahoma City bombing (FEMA News Photo).

USAR is a fairly new field. It started to emerge during the early eighties with the formation of the Fairfax County Fire & Rescue and Metro-Dade County Fire Department elite USAR teams. These teams provided support in places such as Mexico City, the Philippines and Armenia. The concept of a specialized team of USAR operatives became more and more popular, and while FEMA itself started as a general disaster response agency, in 1989 it initiated a framework called the National Urban Search and Rescue Response System, which then became the leading USAR task force in the United States and is still active as of 2008.

Within FEMA, USAR task forces can consist of up to 70 highly trained personnel, and depending on the classification of the team, up to 140 people can stand in readiness per team at one time. A team consists of emergency service personnel, medics, engineers and search dog pairs, as well as specialized equipment (which may include robots, see Section 2.2). As of 2008, there are 28 national USAR task forces prepared for deployment[3].

The list of past missions that the FEMA USAR task forces have performed includes[2]:

– Hurricane Iniki – Kauai, Hawaii; 1992

– Northridge Earthquake – Los Angeles, California; 1994

– Murrah Federal Building, Oklahoma City Bombing – Oklahoma, 1995

– Hurricane Opal – Ft. Walton Beach, Florida; 1995

– Humberto Vidal Building Explosion – Puerto Rico, 1996

– DeBruce Grain elevator explosion – Wichita, Kansas; 1998

– Tornadoes – Oklahoma, 1999

– Earthquakes – Turkey, 1999

– Hurricane Floyd – North Carolina, 1999

– World Trade Center and Pentagon Disaster – New York & Washington, D.C.; 2001

– Olympic Games – Utah, 2002

Figure 2.2: Hierarchy of a FEMA USAR task force that includes four robotic elements[8].

2.2 USAR Robotics

The introduction of USAR-robots into the arsenal of some USAR-teams is a fairly recent development. See Figure 2.2 for an example of how a USAR task force that contains robotic elements can be organized.

Being a USAR-worker is an exposed occupation. There are a variety of risks involved when working in a USAR task force[26]:

– Risk of physical injury, such as cuts, scrapes, burns, and broken bones.

– Risk of respiratory injuries due to hazardous materials, fumes, dust, and carbon monoxide.

– Risk of diseases such as diphtheria, tetanus and pneumonia.

– Risk of psychological and emotional trauma caused by gruesome scenes.


Figure 2.3: A destroyed PackBot, made by iRobot, displayed at the 2007 Association for Unmanned Vehicles International (AUVSI) show. The robot was destroyed while surveying an explosive device in Iraq.

Robots, on the other hand, are insusceptible to all these things. Robots cannot catch diseases, and they don't breathe. Nor do they suffer psychological or emotional traumas. They can break, but parts of the robot that break can be replaced. It is therefore easy to see why it would be preferable to have robots perform such a hazardous job instead of risking humans. See Figure 2.3 for an example of such a situation. Even though that particular robot was destroyed when inspecting an improvised explosive device (IED) in Iraq, it is still a good example of a situation where it is advantageous to have a robot take the lead in a dangerous situation. It is unlikely that a human would have survived such an explosion.

A fact of USAR is that "the manner in which large structures collapse often prevents heroic rescue workers from searching buildings due to the unacceptable personal risk from further collapse"[26]. Robots are expendable; humans are not. A robot can be sent into an unstable building that has a risk of crashing down, but a rescue team, on the other hand, would not go in because of the risk to themselves. Fewer problems arise when it is a replaceable robot that will take the damage. When a robot is crushed under debris, only some economic value is lost. After an excavation the robot can most likely be repaired, or at least some of the more valuable parts can be salvaged. And most importantly, no human life is lost in the process. But the physical failings of humans are not the only motivation for involving robots in the field of USAR. There are plenty of other reasons.

Another fact of USAR is that "collapsed structures create confined spaces which are frequently too small for both people and dogs to enter"[26]. This has two consequences.

One, robots can be constructed to have access to places people cannot reach, which could result in finding victims that would otherwise be lost. Two, this allows for a faster search of an area, since the task force can use paths that would otherwise be blocked. This is a very important advantage that USAR-robotics can provide, since time is a highly valuable commodity in rescue scenarios. A fast search is vital: the lack of food, water and medical treatment greatly diminishes the likelihood of finding a person alive once 48 hours have passed since the incident.

One more valuable asset that robots bring to USAR is the potential ability to survey a disaster area better than a human alone would be able to. Robots equipped with the right kinds of sensors can be of much use thanks to “superhuman” senses. With things like IR-sensors, carbon-dioxide-sensors and various other sensors, robots can be far superior to humans when it comes to detecting victims in USAR missions, especially under difficult circumstances, such as searching rooms that are filled with smoke or dust.

In summary, robots will have a bright future in USAR because of four key reasons [26]:

1. They can “reduce personal risk to workers by entering unstable structures”

2. They can “increase speed of response by accessing ordinarily inaccessible voids”

3. They can “increase efficiency and reliability by methodically searching areas with multiple sensors using algorithms guaranteed to provide a complete search in three dimensions”

4. They can “extend the reach of USAR specialists to go places that were otherwise inaccessible.”

2.3 Robotic USAR in practice

“One of the first uses of robots in search and rescue operation was during the World Trade Center disaster in New York.”[26]

The World Trade Center incident can be seen as the breakthrough for USAR-robotics and, although the robots did not perform as well as most people were hoping for, many valuable lessons were learned.

The Center of Robot-Assisted Search and Rescue (CRASAR) responded to the catastrophe almost immediately, and within six hours, six robots from four different teams were in place to help FEMA and the Fire Department of New York in the recovery of victims.

The Foster-Miller team brought the robots “Talon” and “Solem”. More information about these robots can be found in Section 2.4.3.

Other robots on the scene included the "micro-VGTV" and "MicroTracs" by Inuktun. iRobot had brought their "Packbot" and SPAWAR had the "UrBot". All the robots were of different sizes and weights, and they had different kinds of mobility, tethers, vision, lighting, communication, speed, power supplies and sensing capabilities.

During the rescue period, which lasted from September 11th to the 20th, robot teams were sent into the restricted zone that surrounded the rubble of the disaster area eight times. A total of eight "drops" (a drop being defined as "an individual void/space robots are dropped into") were performed. The average time that such a drop lasted was 6 minutes and 44 seconds[8]. Within the first 10 days, the robot teams found at least five bodies in places inaccessible to humans and dogs.

The robots faced tough problems. Extreme heat sources deep within the debris caused softening of the robot tracks, the software on some of the robots would not accept new sensors, and almost all robots had problems with poor user interface designs which made them hard to control. The robots lacked image processing capabilities and most of them had a hard time with communications. The robots with tethers often found themselves stuck as the tether got caught in the debris, and the wireless ones had problems with noisy and clogged communications since so many people were trying to use walkie-talkies, radios and mobile phones. In fact, about 25% of the wireless communication was useless[26].

A study was conducted in 2002, which examined about 11 hours of tape that was recorded during the operations, as well as various field notes by the people involved.

The article concluded that the priorities of further studies should be “reducing both the transport and operator human-robot ratios, intelligent and assistive interfaces, and dedicated user studies to further identify issues in the social niche”[8].

In summary, the lessons that were learned according to the article were:

– Transportation of the robots needs to be taken into account. The robot teams had to carry their robots over 75 feet of rubble, and some of the robots required several people to carry them. If a robot could be carried by a single person, the teams could have carried several robots to the scene at the same time, which would have created redundancy if a robot was damaged in some way.

– The robot to operator ratio should be reduced to 1:1, meaning that there should only have to be one operator per robot. At the disaster scene, a second person had to stand near the void where the robot was sent in order to hold the rope or the tether. This was a very dangerous position to be in for the person holding the rope, as the fall was quite high.

– The response performance needs to be maximized. This can be approached from several angles. One thing that should be done is to improve team organization. Training standards need to be developed, with regard to both USAR and robots. Research also needs to be conducted on how robots can be integrated into regular USAR task forces.

– Cognitive fatigue needs to be minimized. Better team organization will contribute to this, but work also needs to be done on creating better user interfaces. If USAR-workers don't become comfortable with the user interfaces of the robots, they will not use them, as rescue workers most often use "tried and true" methods during disasters.

– Better robot communication needs to be developed. The user confidence of USAR professionals will be affected if the communication with a remote robot only works intermittently.

– Lastly, researchers who want to actively help in real USAR situations must acquire USAR training certification, and they should also work on establishing a relationship with a real USAR team beforehand.

2.4 Robot design

When it comes to the design of USAR-robots, scientists have yet to agree on a standardization. A wide variety of different designs have been tried both in artificial test environments and in actual USAR situations, with varying results.

A successful USAR-robot should have excellent mobility, a balanced composition of sensors, and it needs to be robust. There are many things to consider when designing a USAR-robot, since most decisions have both pros and cons. This section will mention some of the things that need to be taken into consideration, as well as some of the different designs that have been tried already.

2.4.1 Requirements

When a USAR-situation occurs, time is in short supply. Rescue workers have to act quickly and correctly. There is no time for time-consuming errors. Therefore every aspect of USAR-robotics needs to be reliable. If some shortcomings of a robot are known, that is no big problem; the rescue workers can simply work around them. What cannot happen, though, is that a robot fails miserably at some task it is supposed to handle without problems.

The requirements that are essential to USAR-robots can be classified into three categories:

1. Awareness. How well both the robot and the operator can form a mental image of the environment the robot is currently in.

2. Mobility. The amount of time and the distance over which the robot can operate, and how well the robot can traverse a particular area.

3. Robustness. How consistent and durable the robot is.

The interesting thing is that these requirements are all entangled in a web. They are all dependent on each other; if one of them fails, the others cannot necessarily compensate for that loss.

Awareness

Awareness, meaning the possibility for the robot and the user to get a good understanding of the environment, is an important requirement. Good situational awareness leads to both increased work effectiveness and a reduced risk of cognitive fatigue (see Section 3.4.2). Good victim detection and mapping are also important parts of the awareness concept.

The awareness of both the operator and the robot comes mainly from the robot's sensors. Different sensors have different strengths and weaknesses. A thing to consider when choosing a sensor is not only the use it has, but also the cost it brings. There is not only an economic cost for each sensor, but also a cost in terms of mobility and robustness to keep in mind. Even if a sensor gives an excellent view of the world, it is useless if it is too big, breaks too easily or if it draws so much power that the battery time will be too limited. See Section 2.4.2 for more information about sensor selection.

If the operator and the robot can get a clear view of the surroundings, crucial mistakes can be avoided. But if the sensing is so poor that it is easy for the robot to run off a cliff by mistake, and then fall and break, then it does not matter how mobile or how robust it is constructed. It would still be useless.


Figure 2.4: iRobot’s PackBot is an example of a robot that can operate even when it is flipped upside down[27].

Mobility

Mobility is another important consideration. A small and flexible robot will be able to explore areas that rescue workers cannot. Things such as working duration and the range of the robot also fall under this category.

Imagine a rock-solid robot: one that is able to withstand even explosions, and one that can sense everything in its surroundings with crystal clear resolution, without ambiguities. That robot is still totally useless in a USAR scenario if it is either too large or too clumsy to navigate through the disaster scene. The same problem arises if its battery duration is too limited, or if it cannot receive orders from the operator further than 1 meter from a transmitter. The point is that even if a robot can handle two of the three categories well, it would still be a poor USAR-robot if it could not handle all three.

The terrain the robot will move around in varies greatly. It might contain rocks, piles of debris and concrete, among other things. A successful USAR-robot must have robust and reliable locomotion to be able to move across such terrain. Most USAR-robots have a tank-like design, with track-wheels. A common design philosophy within USAR-robotics is to make the track-wheels in a triangular shape. This allows the robot to drive over obstacles that would otherwise have been impassable for a robot with traditional, oval tracks. Being able to transform between oval and triangular tracks during a mission can also be a very efficient way to increase mobility.

As shown in Figure 2.4, some robots have the ability to flip themselves back onto the right keel if they happen to find themselves upside-down, which is an invaluable asset for a robot striving to achieve good mobility. A rescue robot is totally useless if it finds itself helplessly turtled upside down halfway through a mission, unable to right itself.


The "PackBot" approach to this problem is far from the only one. Some robots work equally well upside-down as they do normally, making flipping unnecessary. Some robots solve this problem with specially designed "flippers", and others solve it with standard arms.

Choosing between wireless communication with a battery, or communication and power via a tether, is an important choice to make when designing a USAR-robot. The use of a tether can give the robot a reliable source of energy and a channel for noiseless communication, and it can also reduce the weight of the robot drastically by removing the need to carry a battery along. But a tether also greatly limits the mobility of the robot, since the tether will be of a limited length. The tether will also tend to get stuck in objects. Another drawback is that it may require an additional person to operate the cable full time, keeping it out of harm's way[8].

Other interesting approaches striving for mobility include the construction of robots with feet, robots that crawl around like snakes, and robots that are polymorphic (robots that can change shape during missions). See Section 2.4.3 for more detailed information.

Robustness

The robot should be able to achieve its goal every time, even if it is working under less than ideal conditions. It should also be resistant to hardware failure. If a robot breaks down in any way during a stressful USAR mission, it is not only the work that the robot could have done that is lost, but also the would-be work of the operators and engineers who then have to repair it. It may even be impossible to repair in a realistic time frame, and lives may be lost because of it.

Super senses and excellent mobility are useless if the robot breaks down at every turn, pointing again to the fact that all three requirement categories need to be satisfied.

Since a USAR-robot's main task is to move around in dangerous areas with the risk of collapses, falling objects, sharp edges and intense heat, the robot must be very reliable and robust.

Many sensors available on the market are not designed with rough conditions in mind, and specially designed variations of such equipment can be very costly. A cost-effective compromise can be found by reinforcing and protecting weaker equipment. Amongst others, the "Caster" team solves this problem by adding the following to their robot: "two 1cm thick polycarbonate plastic roll cages to protect the additional equipment"[19].

2.4.2 Choosing Sensors

"The sensor is a device that measures some attribute of the world"[23].

A sensor suite is a set of sensors for a particular robot. The selection of a sensor suite is a very important part of the USAR-robot design process. There are eight attributes to consider when choosing a sensor for a sensor suite[13]:

1. Field of view and range. The amount of space that is covered by a sensor. For example, a 70 degree wide-lens camera might be selected for its superior field of view compared to a regular 27 degree camera, in order to provide a less constrained view of the world.

2. Accuracy, repeatability and resolution. Accuracy refers to the correctness of the sensor reading, repeatability refers to how often a reading is correct given the same circumstances, and resolution is how finely grained the sensor readings are.

3. Responsiveness in the target domain. How well a sensor works in the intended environment. Some sensors work well in some situations, but are useless in others. A sonar, for example, will produce low-quality results if used in an area that contains a lot of glass panels, since the sound waves will reflect unpredictably.

4. Power consumption. The amount of drain a sensor has on the robot's battery. If a sensor has a high power consumption, it limits the number of other sensors that could be added to the robot, as well as lowering the robot's mobility by reducing the time that it can operate without recharging its battery.

5. Hardware reliability. The physical limitations of a sensor. For example, some sensors might be unreliable under certain temperature and moisture conditions.

6. Size. Size and weight considerations.

7. Computational complexity. The amount of computational power that is needed to process the different algorithms. This problem has become less critical as processors have become more powerful, but it can still remain a problem for smaller robots with less advanced CPUs.

8. Interpretation reliability. The reliability of the algorithms that interpret the sensor data. The algorithms must be able to correctly handle any mistakes that the sensor makes, and not make bad choices because of bad information. In other words, they should know when the sensor "hallucinates", and when it is working correctly.

The points that tie most closely into the requirements mentioned earlier are the following (a small scoring sketch follows the list):

– Awareness: 1, 2 and 8.

– Mobility: 4 and 6.

– Robustness: 2, 3, 5 and 8.
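As a back-of-the-envelope illustration of how the eight attributes and the grouping above can be used when comparing candidate sensors, the sketch below averages per-attribute scores into one score per requirement category. The scoring scheme and all numbers are invented for illustration only; they are not taken from the thesis or from [13].

```java
import java.util.Map;

// Back-of-the-envelope sketch: combining per-attribute ratings (1-5, invented
// values) into per-requirement scores using the attribute groupings listed
// above. Nothing here is from the thesis; it only illustrates the comparison.
public class SensorSuiteScoring {

    // Attribute indices 1..8 as in the list above.
    static double score(Map<Integer, Integer> attributeScores, int... relevantAttributes) {
        double sum = 0;
        for (int attribute : relevantAttributes) {
            sum += attributeScores.getOrDefault(attribute, 0);
        }
        return sum / relevantAttributes.length; // average score for the category
    }

    public static void main(String[] args) {
        // Hypothetical ratings for a sonar ring (1 = poor, 5 = excellent).
        Map<Integer, Integer> sonar = Map.of(1, 3, 2, 2, 3, 2, 4, 4, 5, 4, 6, 5, 7, 5, 8, 3);

        System.out.printf("Awareness:  %.1f%n", score(sonar, 1, 2, 8));    // attributes 1, 2, 8
        System.out.printf("Mobility:   %.1f%n", score(sonar, 4, 6));       // attributes 4, 6
        System.out.printf("Robustness: %.1f%n", score(sonar, 2, 3, 5, 8)); // attributes 2, 3, 5, 8
    }
}
```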

Here is a list of examples of sensors that often are chosen for use in USAR-robots:

CCD Cameras

CCD cameras are one of the most common types of sensors when it comes to USAR-robots. Several CCD cameras are often placed in different directions on the same robot to give a wider view of the environment. The main argument for using this kind of camera is that it gives an image in the RGB-spectrum, which is very similar to the view of the world that humans have. It can also be used for movement detection, which is useful for finding victims. It is a very established technology, and it is generally cheap to acquire[23].

Laser range imaging

A laser range imager is a sensor that has the purpose of providing obstacle detection. Laser rangers function by sending out a laser beam and measuring how long it takes before the beam's reflection returns, in order to calculate the distance to an object. While lasers that can cover an entire 3D area are technically possible, they are very expensive, with costs on the order of $30,000 to $100,000. The more commonly available laser range sensors only cover a 180 degree horizontal plane. The strength of the laser is its high resolution and relatively long range. There are several downsides, however. Not every type of material reflects light well enough for it to be read; specular reflection might occur ("light hitting corners gets reflected away from the receiver"); the point may be out of range, all of which could lead to an incorrect depth image. Another issue is that even if the depth image of the horizontal plane is correct, obstacles still might exist beneath or above the plane. This problem has been combated by researchers recently, by mounting two laser range devices, one tilted slightly upwards and one slightly downwards[23].

Figure 2.5: An example of an image created by using data from a 3D-camera[1].

3D-cameras

A 3D-camera, or time-of-flight camera, is similar to laser range imaging in many ways, with the difference that the 3D-camera sends rays of IR light in a rectangular shape rather than in a plane. The 3D-camera calculates the distance to an object with the help of the phase of the returned signal (see Section 4.1.2 for details on the process). See Figure 2.5 for an example of an application displaying data from a 3D-camera. The advantage of a 3D-camera is that it is relatively cheap compared to similar alternatives, while still providing a reasonably accurate depth image. The downside, besides the same ones that also affect laser range imaging, is that it provides a rather low-resolution image[1].
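As a rough illustration of the two distance-measurement principles discussed above, the sketch below computes a distance from a pulse round-trip time (laser or sonar style) and from a measured phase shift (time-of-flight camera style). The 20 MHz modulation frequency is only an assumed example value, not a quoted SR3000 specification.

```java
// Illustrative sketch of the two ranging principles discussed above.
// The 20 MHz modulation frequency is an assumed example value.
public class RangeMath {

    static final double SPEED_OF_LIGHT = 299_792_458.0; // m/s

    /** Pulse time-of-flight (laser ranger style): half the round-trip distance. */
    static double distanceFromRoundTrip(double roundTripSeconds) {
        return SPEED_OF_LIGHT * roundTripSeconds / 2.0;
    }

    /** Phase-shift time-of-flight (3D-camera style): d = c * phi / (4 * pi * f_mod). */
    static double distanceFromPhase(double phaseRadians, double modulationFrequencyHz) {
        return SPEED_OF_LIGHT * phaseRadians / (4.0 * Math.PI * modulationFrequencyHz);
    }

    public static void main(String[] args) {
        // A 20 ns round trip corresponds to roughly 3 m.
        System.out.printf("Pulse ToF: %.2f m%n", distanceFromRoundTrip(20e-9));

        // With 20 MHz modulation the unambiguous range is c / (2 * f_mod) = 7.5 m,
        // so a phase shift of pi radians corresponds to about 3.75 m.
        System.out.printf("Phase ToF: %.2f m%n", distanceFromPhase(Math.PI, 20e6));
    }
}
```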

Carbon dioxide sensors

Carbon dioxide sensors are mostly specific to USAR-robots, at least in the area of robotics. They measure the carbon dioxide content of the air around them, with the purpose of finding spaces occupied by humans (since such spaces will have a higher concentration of carbon dioxide because of humans breathing the air). This can obviously be very helpful when searching for victims in a collapsed building[26].

IR sensors

An active proximity sensor that is quite cheap. A near-infrared signal is emitted and then a measurement is taken of how much light is returned. This technique often fails in practice because the light gets "washed out" by bright ambient lighting or is absorbed by dark material[23].

Thermal cameras

Thermal cameras acquire a heat image of the environment. They are very useful for USAR because humans emit more heat than the building itself. They can also be used to spot dangerous areas: for example, when a door emits very high temperatures, there is very likely a fire raging on the other side of it[23].

Sound sensors

Sound sensors are basically just microphones placed on the robot. They are useful both for victim detection and for increased situational awareness for the operator (see Section 3.3). Victim detection is enhanced by allowing the operator to hear the voices of people that are trapped. The victims can also speak to the operator via the robot, in order to reveal vital information regarding other nearby victims. Having sound from the robot can also increase the situational awareness of the operator by letting him or her hear what the robot hears. This might reveal information that would otherwise be lost, such as hearing the wheels of the robot skidding, and thus realizing why the robot is stuck[19].

Sonar

The sonar is one of the most common robotic sensors. It sends out an acoustic signal and measures the time it takes for it to reflect back. It has the same problems as the laser range finder, with the additional weakness of having a very poor resolution. It performs poorly in noisy environments and is more suitable for controlled research situations. The low price and the relatively high gain are its main strengths[23].

This list is by no means exhaustive; the number of possible sensors is far too large for all of them to be mentioned in this paper. But the above are some of the most commonly used ones, and ones that have already proven their worth.

Choosing what sensors should be included in a USAR-robot project is only part of the sensor design problem. Another important part is making sure that they are used efficiently.

Due to limited bandwidth, it can be of great use to have some sort of preprocessing of the collected data on-board the robot. This way, only useful information is sent to the operator, preventing clogging of the communication medium.

A prerequisite for this is good sensors and smart algorithms for sensor refinement and sensor fusion (see Section 3.4.3).
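A minimal way to realize such on-board preprocessing is to forward a reading only when it differs noticeably from the last value that was sent. The sketch below shows this idea; the class name and the 5 cm threshold are invented for illustration and are not part of the system described in Chapter 4.

```java
// Minimal sketch of on-board preprocessing: a reading is forwarded to the
// operator only when it differs enough from the last value that was sent.
// The threshold is an invented example value.
public class ChangeFilter {

    private final double threshold;        // minimum change worth reporting (metres)
    private double lastSent = Double.NaN;  // last value forwarded to the operator

    public ChangeFilter(double threshold) {
        this.threshold = threshold;
    }

    /** Returns true if the reading should be transmitted over the limited link. */
    public boolean shouldSend(double reading) {
        if (Double.isNaN(lastSent) || Math.abs(reading - lastSent) >= threshold) {
            lastSent = reading;
            return true;
        }
        return false; // suppress near-duplicate readings to save bandwidth
    }

    public static void main(String[] args) {
        ChangeFilter sonarFilter = new ChangeFilter(0.05); // 5 cm, example only
        double[] readings = {1.20, 1.21, 1.22, 1.40, 1.41};
        for (double r : readings) {
            System.out.println(r + " -> " + (sonarFilter.shouldSend(r) ? "send" : "drop"));
        }
    }
}
```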


Figure 2.6: The Solem robot, which was used in the World Trade Center rescue operations.

2.4.3 Related Work

Many different groups of researchers are trying different angles for tackling the problems of constructing robust and mobile robots that are aware of their environment. This section brings forth only a few of the many different creative ways to construct ever more effective USAR-robots.

Basic Search and Rescue approach

The robots "Talon" and "Solem" from Foster-Miller will serve as examples of what can be considered "standard" urban search and rescue robots, as they use many techniques that are ubiquitous in USAR robot design. They were two of the robots that were used in the rescue operation following the World Trade Center attacks in 2001; see Section 2.3 for more information.

“Talon” is a wireless, suitcase-sized, tracked robot. Its tracks are built to handle heavy brush and soft surfaces such as sand, mud or even water. It is equipped with several cameras, including a zoom thermal camera and a night-vision camera. It also has a two-stage arm that can be used to move pieces of debris or to pick up small objects.

Talon is large enough to tackle mobility problems such as stairs very easily. It can carry a payload of 138 kg, and it can pull a 91 kg load. However, its size makes it unable to enter small spaces, which, along with a rather short battery time of around one hour, accounts for Talon's major weaknesses.

The "Solem" robot (shown in Figure 2.6) is much lighter than the Talon robot, but also a lot slower. It is an amphibious, all-weather, day-night robot. Just like the "Talon", it uses radio for communication. "Solem" is equipped with a 4 mm wide-lens camera, and additional sensors, such as night vision or thermal cameras, can be attached to its arm. It carries four Nickel-Metal batteries that allow it to operate for one hour at full capacity when moving through rough terrain.


Figure 2.7: The operator control unit for controlling iRobot's PackBot. This portable device is used to control the robot from a distance.

Both robots can be controlled with an Operator Control Unit (OCU), which is a portable computer that can be used to control every aspect of the robot. See Figure 2.7 for an example of a common OCU.

The images from the robots' sensors can either be sent to the screen of the OCU or to a pair of higher-resolution virtual reality goggles[26].

Robot swarms

The article “World Embedded Interfaces for Human-Robot Interaction” by Mike Daily, et al.[10] discusses the idea of robot swarms.

The authors of the article have a vision in which an operator arrives at a scene, quickly programs an army of mini-robots (approximately the size of small rodents) for a certain task, and then lets the swarm loose.

This swarm then starts to scout around, with no single robot communicating directly with the operator; instead, each robot only talks with the closest robots in its surroundings (much like a mesh network).

In a hypothetical USAR scenario, these robots would enter the debris; some of the robots would stay put and act as beacons propagating information from the robots further inside, while others would enter further into the debris. Suddenly, some of the robots start to blink with a red light. This means that a robot far down in the debris has found a human. The detection has been communicated only between the closest robots, and the end result is a breadcrumb-like path formed from the operator down to the suspected victim.


Figure 2.8: A prototype image of a robot swarm using a world embedded interface. The arrows point towards a potential victim in a USAR situation.

Now a bigger robot (or an operator) can be dispatched to confirm the sighting of a victim, following the path indicated by the red-blinking robots.

These types of robots are far from reality at this point, but the idea certainly has some potential, as emergent systems have proven their worth many times. See Figure 2.8 for an image of a prototype of such a system.
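To make the breadcrumb idea a little more concrete, the sketch below floods a "victim found" message hop by hop between locally connected robots; each robot only remembers how many hops away the detection is, so following strictly decreasing hop counts leads toward the victim. This is our own toy illustration of the concept, not an algorithm taken from the cited article.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Toy illustration of the breadcrumb idea: a detection is flooded hop by hop
// between neighbouring robots, and each robot stores its hop distance to the
// detection. Following strictly decreasing hop counts leads to the victim.
public class SwarmBreadcrumb {

    static class Robot {
        final String id;
        final List<Robot> neighbours = new ArrayList<>(); // robots within radio range
        int hopsToVictim = Integer.MAX_VALUE;              // "not yet informed"

        Robot(String id) { this.id = id; }
    }

    /** Breadth-first flooding from the robot that detected the victim. */
    static void propagateDetection(Robot detector) {
        detector.hopsToVictim = 0;
        Queue<Robot> queue = new ArrayDeque<>();
        queue.add(detector);
        while (!queue.isEmpty()) {
            Robot current = queue.poll();
            for (Robot neighbour : current.neighbours) {
                if (neighbour.hopsToVictim > current.hopsToVictim + 1) {
                    neighbour.hopsToVictim = current.hopsToVictim + 1; // relay the message
                    queue.add(neighbour);
                }
            }
        }
    }

    public static void main(String[] args) {
        // A small chain: operator-side robot r0 ... r3 deep inside the debris.
        Robot r0 = new Robot("r0"), r1 = new Robot("r1"), r2 = new Robot("r2"), r3 = new Robot("r3");
        r0.neighbours.add(r1); r1.neighbours.add(r0);
        r1.neighbours.add(r2); r2.neighbours.add(r1);
        r2.neighbours.add(r3); r3.neighbours.add(r2);

        propagateDetection(r3); // r3 found a suspected victim
        for (Robot r : List.of(r0, r1, r2, r3)) {
            System.out.println(r.id + " blinks with hop distance " + r.hopsToVictim);
        }
    }
}
```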

Snake and snake hybrid

The article “Survey on Urban Search and Rescue Robotics”[26] makes the following statement regarding the design of a good robot base: “The base should be able to drive on wet surfaces possibly contaminated with corrosives and it needs to be heat resistant, water proof and fire proof. Without such a base, a robot cannot explore a disaster site making all its sophisticated sensors and software useless. These strict requirements stem from the extreme environments which they need to explore”.

The same article then goes on to highlight some of the suggestions that have been made for such a solid base. One of the most interesting ones is a robot with a snakelike body as a foundation for this base. A snake-shaped robot is extremely well suited to navigate through confined spaces, and it has little difficulty moving in any direction in a three-dimensional space.

The article also points out that the advantages of a snake robot come at a price: "The many degrees of freedom that furnish the snake with its wide range of capabilities also prove its major challenges: mechanism design and path planning." After this conclusion, they continue and present some of the work of a researcher named Shigeo Hirose, namely the work on a 3-degree-of-freedom, 116 cm long crawler named Souryo, also known as the blue dragon. This robot is a sort of hybrid between a snake and a crawler robot, and it tries to utilize the advantages of each side. Its design and effectiveness are presented in the article "Development of Souryo-I: Connected Crawler Vehicle for Inspection of narrow and Winding Space" by T. Takayama and S. Hirose[28].


Human-Robot Interaction

Human-robot interaction (HRI) is a research field concerned with the interactions between a human user, or operator, and a robot. The subject is a very important part of urban search and rescue (USAR) robotics because the operator needs a good overview of the situation and needs to operate the robot effectively under stressful conditions.

The objective of HRI is to simplify the potentially complex interactions between the operator and the robot. The problem lies in providing the operator with an interface to the robot that is powerful enough that all necessary tasks can be executed successfully, while at the same time being so simple and intuitive that it is easy, quick and painless to use.

This chapter will specifically deal with teleoperated robots, and how humans interact with them. It will also mention some real-world applications of teleoperation and touch on various principles that are useful to teleoperation in general, such as semi-autonomy, common ground and graphical user interfaces.

3.1 Teleoperation

There are several negative aspects associated with the strictly autonomous approach that has traditionally existed within the robotics field. Robots have been found to lack both the perception and the decision-making capabilities required to operate in real-world environments. Therefore, a lot of research effort has been directed towards teleoperation (i.e. robots and humans working together), rather than towards robots that work entirely on their own. Teleoperation is often seen as an interim solution, with fully autonomous robots being the ideal long-term goal.

Teleoperation is a type of control where a human operator controls a robot from a distance. In most cases the human (or “master”) directs the robot (or “slave”) via some sort of workstation interface located out of viewing distance from the robot (see Figure 3.1). The human operator is required to have a user interface (UI) consisting of a display and a control mechanism, and the robot is required to have power, effectors and sensors. For more detailed information regarding user interfaces see Section 3.4.

The robot's sensors are required because the operator generally cannot see the robot directly, so the robot needs to collect data about its nearby environment to send back to the operator. The display enables the operator to see a representation of the robot's sensor data, with the goal of making more informed decisions regarding the robot's situation. The control mechanism of teleoperation normally consists of a joystick, or a keyboard and a mouse, but a wide range of innovative control devices, such as virtual reality headgear, can also be used.

Figure 3.1: The principle of teleoperation. The operator (or “master”) is connected to the robot (or “slave”) via an arbitrary connection medium. The operator controls the robot based on the feedback received from the robot's remote environment[17].

Teleoperation is a popular approach because it tries to evade some of the problems of purely autonomous robotics by letting a human be part of the decision process. The goal is then to create a synergy between the strengths of a robot and the strengths of a human in order to minimize the limitations of both[23].
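To make the master/slave arrangement concrete, the sketch below shows a minimal command/feedback loop. The message types, the motion model and the class names are hypothetical and are not the interface used in the implementation part of this thesis; in a real system the call to execute() would go over a radio or network link:

from dataclasses import dataclass

@dataclass
class Command:            # master -> slave
    linear: float         # forward speed (m/s)
    angular: float        # turn rate (rad/s)

@dataclass
class Feedback:           # slave -> master
    pose: tuple           # (x, y, heading) estimated by the robot
    ranges: list          # latest range sensor readings (m)

class RemoteRobot:
    def __init__(self):
        self.pose = (0.0, 0.0, 0.0)

    def execute(self, cmd: Command, dt: float = 0.1) -> Feedback:
        x, y, th = self.pose
        x += cmd.linear * dt                  # very crude motion model
        self.pose = (x, y, th + cmd.angular * dt)
        return Feedback(pose=self.pose, ranges=self.read_sonar())

    def read_sonar(self):
        return [5.0] * 8                      # placeholder: no obstacles detected

def operator_loop(robot: RemoteRobot, commands):
    """The 'master' side: send a command, receive feedback, display it."""
    for cmd in commands:
        fb = robot.execute(cmd)               # in reality sent over the connection medium
        print("pose=%s  closest obstacle=%.1f m" % (str(fb.pose), min(fb.ranges)))

operator_loop(RemoteRobot(), [Command(0.3, 0.0), Command(0.3, 0.5)])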

According to Wampler[31], teleoperation is best suited to applications where the following points apply:

1. The tasks are unstructured and not repetitive.

2. The task workspace cannot be engineered to permit the use of industrial manipu- lators.

3. Key portions of the task intermittently require dexterous manipulation, especially hand-eye coordination.

4. Key portions of the task require object recognition, situational awareness, or other advanced perception.

5. The needs of the display technology do not exceed the limitations of the commu- nication link (bandwidth, time delays).

6. The availability of trained personnel is not an issue.


Figure 3.2: The Sojourner Mars rover, which was sent by NASA to Mars in 1997 to explore the planet. (Image by NASA)

3.1.1 Time delays

A significant disadvantage when teleoperating robots, especially over huge distances, is that radio communication between the local operator and the remote robot takes a long time. For example, a task under direct control that takes operators just one minute on Earth takes two and a half minutes on the Moon and 140 minutes on Mars. The increase in time is caused by transmission delays between the robot and the operator, so the operator has to wait for the results after every action[23]. This time delay can cause a lot of problems, since the operator has to predict what the robot's status will be several minutes into the future. This can often lead to situations where the operator unknowingly puts the robot in danger.
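As a rough illustration of where these delays come from, the following sketch computes one-way signal travel times at the speed of light, assuming straight-line vacuum propagation and the approximate distances listed in the code; actual mission delays also include relay and processing overhead:

SPEED_OF_LIGHT_KM_S = 299_792.458

distances_km = {
    "Moon": 384_400,                 # average Earth-Moon distance
    "Mars (closest)": 54_600_000,
    "Mars (farthest)": 401_000_000,
}

for body, d in distances_km.items():
    one_way = d / SPEED_OF_LIGHT_KM_S
    print("%s: one-way %.1f s (round trip %.0f s)" % (body, one_way, 2 * one_way))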

There are several proposed solutions to this problem, including predictive displays (see Section 3.4.2) and some of the various approaches to semi-autonomous control, such as Traded control and Shared control (see Section 3.2).

3.1.2 Planetary Exploration

A practical example of a situation where teleoperation has been found to be useful is planetary exploration. The reasons why robots are preferred over humans for space missions are plentiful:

– It is cheaper

– A robot does not need any biological support systems, such as oxygen and food

– A robot can theoretically stay forever, so no return journey is required

– A robot can handle harsh environmental situations

– There is no risk of loss of life


Planetary exploration is an area that has been tried in practice several times in recent years by the National Aeronautics and Space Administration (NASA). The first teleoperated robot to have been successfully sent to another planet was the Sojourner robot, shown in Figure 3.2, which was sent to explore the surface of Mars in the summer of 1997. Sojourner remained operational for almost three months before radio contact was lost permanently on the 27th of September[23]. In 2003 NASA sent two more exploration robots to Mars, named Spirit and Opportunity, to investigate the Martian geology and topography. While the original mission was planned to last 90 days, the robots were so successful that the mission was extended to operate all through 2009[24].

3.1.3 Unmanned Aerial Vehicles

An unmanned aerial vehicle (UAV) is a robotic aircraft that can be either entirely automated or teleoperated. The advantages of not having a pilot are that the plane can be made a lot smaller, and therefore use less fuel, and that it can be sent into dangerous situations, such as flying into war zones or natural disaster areas, without fear for a pilot's life. A disadvantage is that UAVs currently require several people to operate, as the plane's sensors and controls are often manned by different people. The work requires a high degree of skill, and the training of an operator takes about a year to complete[23].

While teleoperated robots have various potential military applications, the UAV is one that has been tested in real-life situations. The United States Air Force has created several UAVs in the last decades, for example the Darkstar and the Predator.

The Darkstar UAV was an advanced prototype that could fly autonomously, but it was teleoperated by a human during take-off and landing, since those are the most critical moments of a mission. Unfortunately, transmission latency was not taken into account when constructing the first working prototype, and the seven-second delay induced by the satellite communications link caused Darkstar no. 1 to crash on take-off, since the operator's commands took too long to arrive (see Section 3.1.1 for more on the effects of latency).

The Predator was used successfully in Bosnia, where it took surveillance photos in order to verify that the Dayton Accords were being honored[23].

3.1.4 Urban Search and Rescue

Urban search and rescue robotics is an area of teleoperation that is currently the focus of a lot of research effort; see Chapter 2 for more information.

3.1.5 Other examples of teleoperation

Other similarly dangerous environments where teleoperated robots have been applied include:

– Underwater missions (such as the exploration of the wreck of the Titanic[23])

– Volcanic missions[5]

– Explosive Ordnance Disposal[20] (such as the robot seen in Figure 3.3)


Figure 3.3: A Joint Service Explosive Ordnance Disposal robot ’Red Fire’ prepared to recover a mine on February 9, 2007 in Stanley, Falkland Islands (photo by Peter Macdiarmid/Getty Images).

3.2 Semi-autonomous Control

Semi-autonomous control, or supervisory control, is when the operator gives the robot a task that it can more or less perform on its own. The two major kinds of semi-autonomous control are Shared control (or continuous assistance) and Traded control. While the concepts are separate, both paradigms can be present in the same system, as was the case with the previously mentioned Sojourner rover (see Section 3.1.2).

Safeguarded teleoperation is an attempt to alleviate the effects of time delays by giving the robot the power to override or ignore commands if it judges that a command would put it in danger. Another approach to semi-autonomy is adjustable autonomy, where the autonomy level of the robot is set dynamically, depending on the situation.
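A minimal sketch of the safeguarding idea is given below: the robot scales down or rejects a forward command when its own range readings indicate that executing it would bring it too close to an obstacle. The thresholds and the function are illustrative assumptions, not a description of any particular system:

STOP_DISTANCE = 0.3    # m: refuse to drive forward at all
SLOW_DISTANCE = 1.0    # m: start scaling down the commanded speed

def safeguard(commanded_speed: float, front_range: float) -> float:
    """Return the speed the robot actually executes."""
    if commanded_speed <= 0.0:
        return commanded_speed                # backing up is always allowed here
    if front_range < STOP_DISTANCE:
        return 0.0                            # override: ignore the command entirely
    if front_range < SLOW_DISTANCE:
        scale = (front_range - STOP_DISTANCE) / (SLOW_DISTANCE - STOP_DISTANCE)
        return commanded_speed * scale        # partial override: slow down
    return commanded_speed

print(safeguard(0.5, front_range=2.0))   # 0.5   -> command passes unchanged
print(safeguard(0.5, front_range=0.6))   # ~0.21 -> scaled down near an obstacle
print(safeguard(0.5, front_range=0.2))   # 0.0   -> overridden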

3.2.1 Shared Control

Shared control is a type of teleoperation where the human operator and the robot share the control over the robot's actions. The operator can choose between delegating a task to the robot or accomplishing it him- or herself via direct control. If the task is delegated, the operator takes the role of a supervisor and monitors the robot to check if any problems arise. This relationship enables the operator to hand simple, boring or repetitive tasks over to the robot, while personally handling tasks that require, for example, hand-eye coordination. This helps to reduce the issue of cognitive fatigue (see Section 3.4.2), but the communication bandwidth demanded by the direct control is potentially high, since a lot of sensor data needs to be sent to the operator[23].
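The following sketch illustrates the delegation pattern described above: while a task is delegated the operator only supervises, and control falls back to direct teleoperation when the task finishes or reports a problem. The task name and the behaviour interface are hypothetical:

class SharedController:
    def __init__(self):
        self.delegated_task = None        # None means the operator drives directly

    def delegate(self, task):
        self.delegated_task = task

    def take_over(self):
        self.delegated_task = None        # operator resumes direct control

    def step(self, operator_command=None):
        if self.delegated_task is not None:
            status = self.delegated_task.step()   # the robot acts on its own
            if status in ("done", "problem"):
                self.take_over()                  # hand control back to the operator
            return "autonomous: " + status
        return "direct: executing " + str(operator_command)

class FollowCorridor:
    """Stand-in for a simple autonomous behaviour the operator can delegate."""
    def __init__(self, steps):
        self.remaining = steps

    def step(self):
        self.remaining -= 1
        return "done" if self.remaining == 0 else "running"

ctrl = SharedController()
print(ctrl.step("forward 0.3 m/s"))       # direct control
ctrl.delegate(FollowCorridor(steps=2))
print(ctrl.step())                        # autonomous: running
print(ctrl.step())                        # autonomous: done
print(ctrl.step("turn left"))             # direct control again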

3.2.2 Traded Control

Traded control is an attempt to avoid the problem of high demands on bandwidth and operator attention. The idea is for the operator to just initiate the robot's actions and then let the robot carry them out autonomously, with control being traded back to the operator once the task is completed or a problem arises.

References
