
Cooperative Anchoring


Örebro Studies in Technology 39

Kevin LeBlanc

Cooperative Anchoring

Sharing Information About Objects

in Multi-Robot Systems


© Kevin LeBlanc, 2010

Title: Cooperative Anchoring – Sharing Information About Objects in Multi-Robot Systems.

Publisher: Örebro University 2010
www.publications.oru.se

trycksaker@oru.se

Printer: Intellecta Infolog, Kållered 09/2010

ISSN 1650-8580
ISBN 978-91-7668-754-3

This research has been supported by:


Abstract

In order to perform most tasks, robots must perceive or interact with physical objects in their environment; often, they must also communicate and reason about objects and their properties. Information about objects is typically produced, represented and used in different ways in various robotic sub-systems. In particular, high-level sub-systems often reason with object names and descriptions, while low-level sub-systems often use representations based on sensor data. In multi-robot systems, object representations are also distributed across robots. Matters are further complicated by the fact that the sets of objects considered by each robot and each sub-system often differ.

Anchoring is the process of creating and maintaining associations between descriptions and perceptual information corresponding to the same physical objects. To illustrate, imagine you are asked to fetch “the large blue book from the bookshelf”. To accomplish this task, you must somehow associate the description of the book you have in your mind with the visual representation of the appropriate book. Cooperative anchoring deals with associations between descriptions and perceptual information which are distributed across multiple agents. Unlike humans, robots can exchange both descriptions and perceptual information; in a sense, they are able to “see the world through each other’s eyes”. Again, imagine you are asked to fetch a particular book, this time from the library. But now, in addition to your own visual representations, you also have access to information about books observed by others. This can allow you to find the correct book without searching through the entire library yourself.

This thesis proposes an anchoring framework for both single-robot and cooperative anchoring that addresses a number of limitations in existing approaches. The framework represents information using conceptual spaces, allowing various types of object descriptions to be associated with uncertain and heterogeneous perceptual information. An implementation is described which uses fuzzy logic to represent, compare and combine information. The implementation also includes a cooperative object localisation method which takes uncertainty in both observations and self-localisation into account. Experiments using simulated and real robots are used to validate the proposed framework and the cooperative object localisation method.


Acknowledgements

I’ve often said that the acknowledgements section is probably one of the most frequently read parts of a thesis, and that as such, it should be written with particular care. However, like most Ph.D. students before me, I wrote this section at the last minute, and I apologise if in the final rush to get everything finished I’ve overlooked anyone. When you spend as much time doing anything as I’ve spent preparing this thesis, a lot of people are bound to have had the opportunity to help.

First and foremost, I would like to thank my supervisor, Alessandro Saffiotti, for giving me the opportunity to carry out my Ph.D. studies at Örebro University. The breadth and depth of his knowledge have been an immensely valuable resource over the years, and I am extremely grateful for his guidance, encouragement, and patience throughout my studies.

A number of people have helped the development of the ideas and methods contained in this thesis. In particular, I would like to thank Silvia Coradeschi, Amy Loutfi, and Mathias Broxvall for their involvement and interest in this work. I would also like to thank Mathias for help with some of the algorithms in this thesis, and for his work on the Peis middleware, which was an invaluable tool during the experimental phase of this work. The presented experiments would also not have been possible without help from Per Sporrong and Bo-Lennart Silfverdal, who somehow managed to keep the robots and other hardware at AASS up and running despite my software.

For always having answers to administrative questions, I would like to thank Barbro Alvin, Anne Moe, Kicki Ekberg, and Jenny Tiberg; I would also like to thank the countless others who have helped me with all sorts of administrative issues over the years.

This work was partially funded by CUGS (the National Graduate School in Computer Science, Sweden), and I would like to thank the lecturers and my fellow CUGS students for making CUGS courses and seminars both rewarding and entertaining. I would also like to thank the Swedish Knowledge Foundation for financial support, and ETRI (Electronics and Telecommunications Research Institute, Korea) for funding the Peis-Ecology project.


And of course, Ph.D. studies involve more than just taking courses and writing a thesis. I would like to thank everyone at AASS, past and present, for making the work environment so enjoyable. In particular, I would like to thank Robert Lundh for many interesting conversations while travelling to and from CUGS courses, and for listening to my numerous complaints about the oddities of the Swedish language. I would also like to thank the Italians for dragging me out of my office when I needed it most. And thank you to everyone who went skiing, biking, swimming, or running with me over the years. The fact that we chose to torture ourselves the way we did rather than work on our theses shows just how powerful the procrastination instinct is with Ph.D. students in general (and this one in particular).

A special thank you goes to my parents and my brother, for their love and support, and for providing me with everything I needed in order to get where I am today. I would also like to thank my extended family and friends, both near and far, for supporting me throughout my studies. And last but not least, I would like to express my loving thanks to Anita for her encouragement, support, patience, and help during the writing of this thesis.


Contents

1 Introduction
  1.1 Motivation
  1.2 Illustration
  1.3 Objectives
  1.4 Challenges
  1.5 Contributions
  1.6 Outline
  1.7 Publications

2 Related Work
  2.1 Anchoring
    2.1.1 Single-Robot Anchoring
    2.1.2 Cooperative Anchoring
    2.1.3 Overcoming the Limitations of Existing Approaches
  2.2 Related Challenges
    2.2.1 Symbol Grounding
    2.2.2 Binding
    2.2.3 Perception Management
    2.2.4 Tracking
    2.2.5 Data Association
    2.2.6 Information Fusion
  2.3 Discussion

3 Problem Formalisation
  3.1 Ingredients
    3.1.1 Information Sources
    3.1.2 Anchors
    3.1.3 Descriptions
  3.2 Problem Definition
    3.2.1 Data Association
    3.2.2 Information Fusion
    3.2.3 Prediction
  3.3 Illustration

4 Anchoring Framework
  4.1 Framework Overview
    4.1.1 Local and Global Anchoring
    4.1.2 A Decentralised Approach
    4.1.3 Illustration
  4.2 Conceptual Spaces
    4.2.1 Interpretations
    4.2.2 Similarity
    4.2.3 Anchor Spaces
  4.3 Local Anchor Management
    4.3.1 Self-anchors
    4.3.2 Local Data Association
    4.3.3 Local Information Fusion
    4.3.4 Local Prediction
    4.3.5 Local Anchor Deletion
    4.3.6 Illustration
  4.4 Global Anchor Management
    4.4.1 Global Data Association
    4.4.2 Global Information Fusion
    4.4.3 Global Prediction
    4.4.4 Global Anchor Deletion
    4.4.5 Illustration
  4.5 Descriptions
    4.5.1 Descriptions and Anchoring
    4.5.2 Descriptions and Interest Filtering
  4.6 Names
    4.6.1 Assigning Names
    4.6.2 Associating Names and Anchors
  4.7 Framework Summary
  4.8 Discussion

5 Framework Realisation Part 1: Representations
  5.1 Implementation Overview
    5.1.1 Representations
    5.1.2 Processes
    5.1.3 Experimental Tool
  5.2 Information Representation
    5.2.1 Fuzzy Sets
    5.2.2 Implementing Fuzzy Sets
    5.2.3 Operations On Fuzzy Sets
  5.3 Domain Choices
    5.3.1 Common Local and Global Anchor Spaces
    5.3.2 Dimensions and Coordinate Systems
  5.4 Descriptions
  5.5 Grounding Functions
  5.6 Conceptual Sensor Models
    5.6.1 Symbolic Conceptual Sensor Models
    5.6.2 Numeric Conceptual Sensor Models
    5.6.3 Negative Information
  5.7 Summary

6 Framework Realisation Part 2: Processes
  6.1 Self-Localisation
    6.1.1 Representation
    6.1.2 Landmark-Based Self-Localisation
    6.1.3 Adaptive Monte-Carlo Localisation
  6.2 Object Localisation
    6.2.1 Relevant Information
    6.2.2 Coordinate Transformation Process
    6.2.3 Approximate Coordinate Transformation
    6.2.4 Coordinate Transformation Complexity
  6.3 Data Association
    6.3.1 Local Data Association
    6.3.2 Global Data Association
    6.3.3 Data Association Algorithm
    6.3.4 Bounded Data Association Algorithm
    6.3.5 Data Association Complexity
  6.4 Information Fusion
    6.4.1 Local Information Fusion
    6.4.2 Global Information Fusion
    6.4.3 Approximating Local Anchors
    6.4.4 Information Fusion Complexity
  6.5 Prediction
    6.5.1 Local Prediction
    6.5.2 Global Prediction
    6.5.3 Anchor Deletion
  6.6 Illustration
    6.6.1 Robot 1: Local Anchor Management
    6.6.2 Robot 2: Local Anchor Management
    6.6.3 Global Anchor Management
  6.7 Summary

7 Cooperative Object Localisation Experiments
  7.1 Methodology
  7.2 Experimental setup
    7.2.1 Robots
    7.2.2 Environment
    7.2.3 Ground truth
    7.2.4 Performance Metrics
    7.2.5 Software Setup
  7.3 Evaluated Methods
  7.4 Exploring The Input-Error Landscape
  7.5 Results
    7.5.1 Artificial Errors on Target Observations
    7.5.2 Artificial Errors on Landmark Observation
    7.5.3 Unaltered Data
  7.6 Discussion

8 Anchoring Experiments
  8.1 Objectives
  8.2 Methodology
  8.3 Common Experimental Setup
    8.3.1 Environment
    8.3.2 Fixed Cameras
    8.3.3 Mobile Robots
    8.3.4 Software Configuration
  8.4 Experiment 1: Find a Parcel (Simulation)
    8.4.1 Goal
    8.4.2 Setup
    8.4.3 Execution
    8.4.4 Results
    8.4.5 Discussion
  8.5 Experiment 2: Find a Parcel (Real Robots)
    8.5.1 Goal
    8.5.2 Setup
    8.5.3 Execution
    8.5.4 Results
    8.5.5 Discussion
  8.6 Experiment 3: Find Multiple Parcels
    8.6.1 Goal
    8.6.2 Setup
    8.6.3 Execution
    8.6.4 Results
    8.6.5 Discussion
  8.7 Experiment 4: Anchoring in a Full Robotic System
    8.7.1 Goal
    8.7.2 Setup
    8.7.3 Approach
    8.7.4 Execution
    8.7.5 Results
    8.7.6 Discussion
  8.8 Summary

9 Conclusions
  9.1 Summary
    9.1.1 Problem Definition
    9.1.2 Framework
    9.1.3 Realisation
    9.1.4 Experiments
  9.2 Limitations and Future Work
    9.2.1 Framework Improvements and Extensions
    9.2.2 Implementation Improvements and Extensions
  9.3 Conclusions

References


List of Figures

1.1 Problem illustration
3.1 Illustration of the problem formalisation
4.1 Framework Overview
4.2 Conceptual space
4.3 Local anchor management
4.4 Global anchor management
4.5 Framework summary
5.1 Uncertainty in fuzzy sets
5.2 Fuzzy sets implemented using bin models
5.3 Parametric ramp membership functions
5.4 Parametric 2D ramp membership functions
5.5 Parametric trapezoidal membership functions
5.6 Parametric 2D trapezoidal membership functions
5.7 Multi-modal parametric membership functions
5.8 Hybrid 2.5D grid
5.9 Matching fuzzy sets
5.10 Fusing fuzzy sets to reach a consensus
5.11 Fusing unreliable information
5.12 Trapezoidal envelope
5.13 Region information
5.14 Near self information
5.15 Symbolic colour information
5.16 Near position information
5.17 Numeric colour information
6.1 Landmark-based self-localisation
6.2 AMCL self-localisation
6.3 Coordinate transformation
6.4 Full versus approximate coordinate transformation
6.5 Table of entities for robot 1
6.6 Local data association search for robot 1
6.7 Table of entities for robot 2
6.8 Local data association search for robot 2
6.9 Global matching example
6.10 Global matching search
6.11 Global anchors
7.1 AIBO robot
7.2 Experimental environment
7.3 Experimental layouts
7.4 Tracker error versus distance from reference
7.5 Systematic bearing errors
7.6 Random bearing errors
7.7 Systematic range errors
7.8 Random range errors
7.9 False positives
7.10 Fused error versus self-localisation errors
7.11 Fused error versus self-orientation errors
7.12 Results for each method
7.13 Self and ball position estimates
7.14 Orientation estimates
7.15 Bearing errors cause averaging to perform poorly
7.16 Range errors cause the proposed method to perform poorly
8.1 The Peis-Home
8.2 Simulation of the Peis-Home
8.3 Fixed cameras and mobile robots
8.4 Images from the fixed cameras in the Peis-Home
8.5 Anchoring monitor tool
8.6 Software configuration using simulator
8.7 Software configuration using real robots
8.8 Experiment 1: domains
8.9 Experiment 1: initial configuration
8.10 Experiment 1: anchors
8.11 Photos of setup for experiments 2 and 3
8.12 Initial configuration for experiments 2 and 3
8.13 Experiment 2 run 1: observations and trajectories
8.14 Experiment 2 run 2: observations and trajectories
8.15 Experiment 2 run 1: observation error versus time
8.16 Experiment 2 run 2: observation error versus time
8.17 Experiment 2 run 1: anchors
8.18 Experiment 2 run 2: anchors
8.19 Experiment 2 run 1: global anchors
8.20 Experiment 2 run 2: global anchors
8.21 Experiment 2 run 1: global anchor error versus time
8.22 Experiment 2 run 2: global anchor error versus time
8.23 Experiment 2 run 3: observations
8.24 Experiment 2 run 5: observations
8.25 Experiment 2 run 3: observation error versus time
8.26 Experiment 2 run 4: observation error versus time
8.27 Experiment 2 run 3: global anchors
8.28 Experiment 2 run 4: global anchors
8.29 Experiment 2 run 3: global anchor error versus time
8.30 Experiment 2 run 4: global anchor error versus time
8.31 Experiment 2 run 3: global anchors (bounded)
8.32 Experiment 2 run 4: global anchors (bounded)
8.33 Experiment 2 run 3: global anchor error versus time (bounded)
8.34 Experiment 2 run 4: global anchor error versus time (bounded)
8.35 Experiment 3 run 1: local timing
8.36 Experiment 3 run 2: local timing
8.37 Experiment 3 run 1: global timing
8.38 Experiment 3 run 2: global timing
8.39 Experiment 3: computation time versus number of associations
8.40 Experiment 4: field of view of the fixed cameras
8.41 Experiment 4: objects
8.42 Experiment 4: shape and SURF signatures
8.43 Experiment 4: range and bearing trapezoids
8.44 Experiment 4: possible object positions


List of Tables

1.1 Examples of various types of information
2.1 Limitations of existing anchoring approaches
5.1 Positive and negative descriptions
6.1 Entities used for local data association
6.2 Entities used for global data association
6.3 Associations of entities
6.4 Hypotheses
6.5 Local associations for robot 1
6.6 Local associations for robot 2
6.7 Associations for global matching
8.1 Full and bounded data association results


List of Algorithms

1 Fuzzy coordinate transformation
2 Approximate fuzzy coordinate transformation
3 Data association algorithm
4 Bounded data association algorithm
5 Analysis of the input-error landscape


Chapter 1

Introduction

1.1 Motivation

Robotic systems are used in an increasing number of application areas today [65]. This trend is supported by numerous advances which allow more useful and complex tasks to be performed. In particular, advances in multi-robot systems [33, 4, 55], and more recently, network robot systems [141, 139], allow a wide range of interesting problems to be addressed. For many of these problems, single-robot systems are either inadequate or inefficient. The advantages of multi-robot and network robot systems arise mainly from their ability to exploit parallelism, heterogeneity, and cooperation [74].

The vast majority of autonomous robot applications require that robots perceive or interact with physical objects in some way. Simply obtaining object properties is the goal of many tasks; this is true for most surveillance and detection tasks, for instance. In many other tasks, knowledge of object properties is required in order to enable identification and meaningful physical interaction; this is the case for tasks such as foraging and manipulation. Information about object positions, in particular, is crucial for most common tasks.

Information about objects is typically produced, represented and used in different ways in the various sub-systems of robotic architectures. In particular, cognitive sub-systems often reason with names and descriptions of objects, while perception and control sub-systems often deal with object representations based on sensor data. In multi-robot systems, object representations are also distributed across robots. Matters are further complicated by the fact that the sets of objects considered by each robot and each sub-system often differ.

Roughly stated, anchoring is the process of creating and maintaining associations between descriptions and perceptual information corresponding to the same physical objects [135, 38]. To illustrate, imagine you are asked to fetch “the large blue book from the bookshelf”. To accomplish this task, you must somehow associate the description of the book you have in your mind with the visual representation of the appropriate book.


When descriptions and perceptual information are distributed across multiple agents, the process is called cooperative anchoring. Unlike humans, robots can exchange both descriptions and perceptual information; in a sense, they are able to “see the world through each other’s eyes”. So not only can they extract and exchange object descriptions, such as “the large blue book”, but they can also directly exchange representations based on perceptual data. Again, imagine you are asked to fetch a particular book, this time from the library. But now, in addition to your own visual representations, you also have access to information about books observed by others. This can allow you to find the correct book without searching through the entire library yourself.

1.2 Illustration

Figure 1.1 illustrates the cooperative anchoring problem. In the depicted scenario, a mobile robot called Astrid is told to fetch “parcel-21” from the entrance of an apartment containing a number of sensors and robots. In order to perform this task, Astrid can use information obtained from a number of different sources.

• The task contains a description of the parcel of interest; Astrid might store this information in a knowledge base, for instance:

position[parcel-21] = {entrance}

• Astrid’s vision system can detect the colour and approximate positions of two observed objects:

position ≈ (3.1, 1.5), colour = (0.3, 0.9, 0.8)
position ≈ (2.9, 1.5), colour = (0.0, 1.0, 0.7)

• An RFID reader called Reader-01, located near the entrance, can detect one RFID-tagged object:

<object>
  <id>parcel-21</id>
  <texture>striped</texture>
</object>

• Another robot, called PeopleBoy, is equipped with a vision system capable of detecting the colour and texture of objects; however, due to poor self-localisation, PeopleBoy is, for the moment, unable to compute position estimates for detected objects:

colour = (0.4, 0.9, 0.8), texture = {striped}

colour = (0.0, 1.0, 0.8), texture = {none}


Figure 1.1: Illustration of the cooperative anchoring problem. Astrid is tasked with finding “parcel-21”, which is located near the entrance of the apartment. In order to identify the correct parcel, information from various sources must be considered.

• A black and white security camera called Camera-01, mounted on the ceiling, can detect object positions; due to its fixed position and elevated perspective, position estimates from the security camera are particularly accurate and precise:

position = (3.11, 1.58)
position = (2.82, 1.48)

In the presented scenario, the cooperative anchoring problem with which Astrid is faced involves associating the provided description of “parcel-21” with corresponding perceptual information arriving from a number of heterogeneous and distributed sources.

The given description contains both a name (“parcel-21”), and symbolic position information (“near the entrance”). Descriptions often contain names and symbolic information, since they typically originate from cognitive processes which reason with such representations. However, the description could just as easily have contained numeric information – for instance, the task could have been to fetch the parcel located at position (3.1, 1.6).

The available perceptual information originates from a number of different sources. Although perceptual information is often numeric, sources can provide perceptual information at a symbolic level. In the above scenario, for instance, information from the RFID reader, as well as texture information from PeopleBoy’s vision system, is symbolic.
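The heterogeneity in this scenario is easier to appreciate when the items of information are written down as records. The sketch below (Python, purely illustrative: the Description and Percept types, their field names, and the attribute keys are assumptions made for this example, not structures from the thesis) collects the task description and the percepts from the four sources:

```python
# Hypothetical records for the scenario above; names and types are
# illustrative assumptions, not the thesis's representations.
from dataclasses import dataclass, field

@dataclass
class Description:
    """Non-perceptual information about an object of interest."""
    name: str                                     # e.g. "parcel-21"
    symbolic: dict = field(default_factory=dict)  # e.g. {"position": "entrance"}

@dataclass
class Percept:
    """One item of perceptual information from a single source."""
    source: str
    attrs: dict  # heterogeneous: numeric and/or symbolic attributes

# The task description given to Astrid.
task = Description(name="parcel-21", symbolic={"position": "entrance"})

# Perceptual information from the distributed, heterogeneous sources.
percepts = [
    Percept("astrid-vision", {"position": (3.1, 1.5), "colour": (0.3, 0.9, 0.8)}),
    Percept("astrid-vision", {"position": (2.9, 1.5), "colour": (0.0, 1.0, 0.7)}),
    Percept("Reader-01",     {"id": "parcel-21", "texture": "striped"}),
    Percept("PeopleBoy",     {"colour": (0.4, 0.9, 0.8), "texture": "striped"}),
    Percept("PeopleBoy",     {"colour": (0.0, 1.0, 0.8), "texture": "none"}),
    Percept("Camera-01",     {"position": (3.11, 1.58)}),
    Percept("Camera-01",     {"position": (2.82, 1.48)}),
]

# No single attribute is reported by every source, so no fixed-format
# comparison can decide which records describe the same parcel.
attrs_seen = {key for p in percepts for key in p.attrs}
print(sorted(attrs_seen))  # → ['colour', 'id', 'position', 'texture']
```

Even in this toy form the core difficulty of cooperative anchoring is visible: the records overlap only partially in the attributes they carry, and each attribute may be numeric, symbolic, and uncertain.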


1.3 Objectives

Although most robotic systems must address the anchoring problem in some way, few explicit approaches exist; most systems use application- or system-specific solutions, which fail to address the general problem. Moreover, existing approaches have a number of limitations. In particular, they do not adequately consider uncertainty and heterogeneity in object descriptions and perceptual information; also, cooperative aspects are often ignored.

The main objective of this thesis is to propose a complete and novel anchoring framework which addresses these limitations. The framework should address both the single-robot and cooperative anchoring problems, and it should be able to associate various types of object descriptions with uncertain and heterogeneous perceptual information arriving from distributed sources. The thesis also aims to experimentally validate the proposed framework.

1.4 Challenges

Anchoring involves a number of challenges. One important aspect is the creation and maintenance of object representations based on perceptual information; these will then be associated with descriptions of objects of interest. To address this challenge, the following sub-problems must be addressed.

1. Perceptual information should be associated with appropriate object representations. This is data association [12, 129, 60, 8], an important and well-studied problem in robotics. By addressing data association, anchoring ensures that perceptual information about a specific object is correctly associated with the appropriate internal representation of that object.

2. Perceptual information arriving from different sources at different times should be gathered and fused. Gathering properties is related to the binding problem [152, 18]; combining them is related to the information fusion problem [154, 76]. Binding brings the various names, descriptions, and representations of an object together. This allows sub-systems and other robots to easily access all available information about a particular object. Information fusion ensures that estimates of object properties are as complete and accurate as possible. As will be discussed in section 2.2.6, anchoring is mainly concerned with fusion at levels 0 and 1 of the JDL data fusion process model [158, 145, 100].

3. Estimates of object properties should be maintained in time via prediction; prediction, data association, and information fusion are used to perform tracking [8, 10]. Prediction allows items of information arriving at different times to be meaningfully compared and combined. It also provides persistent estimates, which are useful when dealing with occlusions, sensor errors, and scarcity of perceptual resources.
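Taken together, the three sub-problems form a maintenance loop over object estimates: predict forward, associate incoming percepts, then fuse. The following minimal sketch illustrates the shape of that loop; the nearest-neighbour gating and weighted-average fusion used here are deliberately simplistic stand-ins chosen for brevity, not the methods developed in this thesis:

```python
# Minimal predict-associate-fuse loop; all function names and the chosen
# gating/fusion rules are illustrative assumptions.
import math

def predict(tracks, dt):
    # Sub-problem 3 (prediction): advance each estimate by its velocity.
    for t in tracks:
        t["pos"] = (t["pos"][0] + t["vel"][0] * dt,
                    t["pos"][1] + t["vel"][1] * dt)

def associate(tracks, obs, gate=1.0):
    # Sub-problem 1 (data association): nearest predicted track within a gate.
    best, best_d = None, gate
    for t in tracks:
        d = math.dist(t["pos"], obs)
        if d < best_d:
            best, best_d = t, d
    return best

def fuse(track, obs, w=0.5):
    # Sub-problem 2 (fusion): blend the prediction with the measurement.
    track["pos"] = tuple((1 - w) * p + w * o for p, o in zip(track["pos"], obs))

tracks = [{"pos": (3.0, 1.5), "vel": (0.0, 0.0)}]
obs = (3.1, 1.5)

predict(tracks, dt=0.1)
match = associate(tracks, obs)
if match is not None:
    fuse(match, obs)
print(tracks[0]["pos"])  # approximately (3.05, 1.5)
```

In a real system each step is far richer: prediction uses motion models, association must handle heterogeneous attributes and uncertainty, and fusion must weigh the reliability of each source. The control flow, however, remains this loop.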


Solutions to these sub-problems exist for many robotic applications. However, most of these consider only a few different types of information, often selected based on the task at hand or available sensors. Anchoring requires a more general approach, able to deal with the different types of information present in robotic systems. Some approaches to the sub-problems in question consider heterogeneous information; however, these approaches are rarely used in robotic applications. The anchoring approach proposed in this thesis allows the above sub-problems to be addressed despite information heterogeneity.

There are many possible ways to categorise and describe information. Some of the different types of information used in robotic systems include: information from various domains (e.g. colour, position, or shape), information represented in different ways (e.g. grids, samples, or parametric functions), and information with different characteristics (e.g. noisy or unreliable). Uncertainty, in particular, is an important characteristic for robotic systems. Robots often deal with information characterised by various types and amounts of uncertainty.

Table 1.1 proposes a categorisation of information types which is particularly useful for describing the anchoring problem. The table describes information using the following three dimensions.

• Perceptual versus Non-perceptual: Perceptual information is measured; such information is often, but not always, numeric. Many sensors act as virtual sensors, which “measure” numeric values but produce symbolic abstractions of these values. Non-perceptual information is modelled; for instance, a priori information is non-perceptual. Anchoring can be seen as the problem of associating non-perceptual object descriptions (right side of table 1.1) with representations of objects based on perceptual information (left side of table 1.1).

• Symbolic versus Numeric: Typically, higher levels in robotic architectures rely mainly on symbolic information. Symbolic labels, or names, are often used to denote objects, and symbolic predicates are often used to describe them. Some sensors, virtual sensors in particular, may also provide information at a symbolic level. Numeric information is used in many different ways throughout robotic architectures; in particular, lower levels in robotic architectures often produce and use numeric information for perception and control.

• Interoceptive versus Exteroceptive: Information about oneself is interoceptive; this includes proprioception (e.g. feeling the position of one’s arm) and egoreception (e.g. seeing one’s arm at a given position). Exteroceptive information is about external entities, such as physical objects. This distinction is particularly important in systems such as network robot systems [141, 139], in which observed objects can communicate their properties to observing robots.


Table 1.1: Examples of various types of information. Anchoring associates non-perceptual object descriptions (right) with representations of objects based on perceptual information (left). This can be particularly challenging given the diversity of the relevant information (rows).

                                 Perceptual (measured)         Non-perceptual (modelled)

Symbolic       Interoceptive     {battery-low},                {my_colour=red},
(qualitative)  (about self)      {pan-joint-stuck},            {my_weight=heavy}
                                 {grasper-open}

               Exteroceptive     {obstacle-near},              {colour=green},
               (about world)     {door-open},                  {texture=striped},
                                 {lights-on}                   topological map

Numeric        Interoceptive     battery_voltage=11.3,         my_colour=(0, 249, 88),
(quantitative) (about self)      pan_position=0.23             my_weight=3.2

               Exteroceptive     range_bearing=(2, 9),         cup_volume=0.20,
               (about world)     blob_colour=(68, 99, 84),     cup_position=(2, 8),
                                 wall_position=(3, 23)         geometric map
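The three dimensions of table 1.1 can be made operational as a simple tagging scheme, as in the sketch below; the enum names and the tagged examples are illustrative assumptions, not notation from the thesis:

```python
# Tagging items of information along the three dimensions of table 1.1.
from enum import Enum

class Origin(Enum):
    PERCEPTUAL = "measured"
    NON_PERCEPTUAL = "modelled"

class Form(Enum):
    SYMBOLIC = "qualitative"
    NUMERIC = "quantitative"

class Subject(Enum):
    INTEROCEPTIVE = "about self"
    EXTEROCEPTIVE = "about world"

# A few of the table's examples, tagged along all three dimensions.
items = [
    ("battery-low",          Origin.PERCEPTUAL,     Form.SYMBOLIC, Subject.INTEROCEPTIVE),
    ("colour=green",         Origin.NON_PERCEPTUAL, Form.SYMBOLIC, Subject.EXTEROCEPTIVE),
    ("battery_voltage=11.3", Origin.PERCEPTUAL,     Form.NUMERIC,  Subject.INTEROCEPTIVE),
    ("cup_position=(2, 8)",  Origin.NON_PERCEPTUAL, Form.NUMERIC,  Subject.EXTEROCEPTIVE),
]

# Anchoring associates the non-perceptual descriptions (right side of the
# table) with perceptual object representations (left side).
descriptions = [i for i in items if i[1] is Origin.NON_PERCEPTUAL]
observations = [i for i in items if i[1] is Origin.PERCEPTUAL]
print(len(descriptions), len(observations))  # → 2 2
```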

Many common problems involve only a small subset of the types of information discussed here. For example:

• self-localisation normally relies on numeric perceptual exteroceptive information (e.g. there is a wall 0.92m away) and numeric non-perceptual exteroceptive information (e.g. a geometric map);

• topological self-localisation might use symbolic perceptual exteroceptive information (e.g. “room-10” detected) and symbolic non-perceptual exteroceptive information (e.g. a topological map);

• cooperative self-localisation might use perceptual interoceptive information (own properties) and perceptual exteroceptive information (properties of perceived robots);

• traditional sensor fusion and tracking approaches often consider only numeric perceptual exteroceptive information (e.g. there are objects at positions (12, 35) and (12, 39)).

Most existing works on anchoring consider only symbolic non-perceptual exteroceptive information (e.g. the green box-shaped object) and numeric perceptual exteroceptive information (e.g. an object with colour (67, 200, 177) was observed at position (21, 32)). However, as has been discussed, the anchoring problem can involve all of the types of information discussed here.
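To make these distinctions concrete, each item of information can be tagged along the three axes of Table 1.1. The following sketch (with invented field names; it does not correspond to any cited framework) shows how such tags separate non-perceptual descriptions from perceptual items:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class InfoItem:
    """One item of information, tagged along the three axes of Table 1.1."""
    value: Any           # e.g. "door-open" or (68, 99, 84)
    symbolic: bool       # qualitative (True) vs. quantitative (False)
    interoceptive: bool  # about self (True) vs. about the world (False)
    perceptual: bool     # measured (True) vs. modelled (False)

# Examples drawn from Table 1.1:
battery = InfoItem("battery-low", symbolic=True, interoceptive=True, perceptual=True)
blob = InfoItem((68, 99, 84), symbolic=False, interoceptive=False, perceptual=True)
cup_pos = InfoItem((2, 8), symbolic=False, interoceptive=False, perceptual=False)

# Anchoring associates the non-perceptual descriptions (right column of the
# table) with the perceptual items (left column):
items = [battery, blob, cup_pos]
descriptions = [i for i in items if not i.perceptual]
percepts = [i for i in items if i.perceptual]
```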


1.5 Contributions

Although anchoring can be vital for even the most trivial tasks, humans typically perform anchoring without even thinking about it. Perhaps correspondingly, the problem is often overlooked in the robotics literature. In many robotic architectures, anchoring is performed in an ad hoc manner, where the approach to anchoring is hidden within the implementation. For a number of years, however, the anchoring problem has been gaining recognition as an important challenge in robotics, and several works explicitly address the problem. Despite this, many approaches suffer from a number of important limitations, in particular with respect to the types of information considered. The cooperative anchoring problem has received little attention in the literature, and only a few works consider anchoring from a multi-robot perspective. Given this, the contributions of this thesis are the following.

1. The main contribution of this thesis is the proposal of a complete and novel anchoring framework for robotic systems, which addresses both single-robot anchoring and cooperative anchoring. The proposed framework addresses a number of limitations in current approaches, and provides a unified approach which transparently extends from single-robot to multi-robot scenarios.

2. The thesis presents a “proof of concept” realisation of the proposed framework, which is used to validate its applicability to the anchoring problem. The implementation uses fuzzy logic as a primary tool for representing, comparing, and combining information. The implementation is able to consider various types of information about objects originating from multiple robots. A number of experiments are described which illustrate how the framework addresses the anchoring problem.

3. The thesis proposes a novel data association algorithm, used within the realised anchoring framework, which considers various types of information from various domains. The algorithm allows heterogeneous items of information to be matched and associated, for both single-robot and cooperative anchoring. The algorithm is validated through experiments performed using the presented implementation of the framework.

4. The thesis proposes a novel information fusion algorithm for cooperative object localisation, used within the realised anchoring framework. The approach is based on fuzzy logic, and it fully considers uncertainty both in observations and self-localisation. A set of experiments validates the fusion algorithm using an experimental methodology which systematically tests the algorithm's robustness with respect to various types of errors on each of the method's inputs.


1.6 Outline

The rest of this thesis is organised as follows.

Chapter 2 discusses existing approaches to the anchoring and cooperative anchoring problems, and gives an overview of works which address a number of important related problems.

Chapter 3 describes the systems considered in this work, and provides a formal definition of the anchoring and cooperative anchoring problems, in terms of a number of key system components.

Chapter 4 presents the proposed computational framework for single-robot and cooperative anchoring. The chapter also briefly discusses conceptual spaces, which inspired the approach to information representation used in the framework.

Chapter 5 discusses how fuzzy sets are used to represent information in the presented implementation of the proposed framework. A number of transformations are also presented, which are used to convert various types of information into representations within the same conceptual space.

Chapter 6 describes the various processes used in the implementation of the proposed framework. The chapter describes how self-localisation and object localisation are performed, and detailed descriptions of the implemented data association and information fusion algorithms are given.

Chapter 7 presents a number of experiments which examine the performance of the proposed information fusion algorithm, applied to the cooperative object localisation problem. The experiments include an “input-error landscape” analysis, which characterises the performance of the fusion algorithm in response to various types of systematic and random errors applied to the method's inputs.

Chapter 8 presents a number of experiments which illustrate the applicability of the proposed framework to the anchoring problem. The first experiment was performed in a mid-fidelity simulator; the other three were performed using real robots.

Chapter 9 concludes the thesis with a summary of the work and its contributions, a discussion of the limitations of the proposed framework and the presented implementation of it, and an overview of possible directions for future work.


1.7 Publications

Some of the work presented in this thesis has been published in a number of journal and conference papers, available at http://aass.oru.se.

• D. Herrero-Pérez, H. Martínez-Barberá, K. LeBlanc, and A. Saffiotti. Fuzzy Uncertainty Modeling for Grid Based Localization of Mobile Robots. Int Journal of Approximate Reasoning, 51(8):912–932, October 2010.

• K. LeBlanc and A. Saffiotti. Multirobot object localization: A fuzzy fusion approach. IEEE Trans on Systems, Man and Cybernetics B, 39(5):1259–1276, 2009.

• A. Saffiotti, M. Broxvall, M. Gritti, K. LeBlanc, R. Lundh, J. Rashid, B. S. Seo, and Y. J. Cho. The PEIS-ecology project: vision and results. In Procs of the IEEE Int Conf on Intelligent Robots and Systems (IROS), pages 2329–2335, Nice, France, 2008.

• K. LeBlanc and A. Saffiotti. Cooperative anchoring in heterogeneous multi-robot systems. In Procs of the IEEE Int Conf on Robotics and Automation (ICRA), Pasadena, CA, USA, 2008.

• K. LeBlanc and A. Saffiotti. Issues of perceptual anchoring in ubiquitous robotic systems. In Procs of the ICRA-07 Workshop on Omniscient Space, Rome, Italy, 2007.

• K. LeBlanc and A. Saffiotti. Cooperative information fusion in a network robot system. In Proc of the Int Conf on Robot Communication and Coordination (RoboComm), Athens, Greece, 2007.

• J.-P. Cánovas, K. LeBlanc, and A. Saffiotti. Robust multi-robot object localization using fuzzy logic. In D. Nardi, M. Riedmiller, and C. Sammut, editors, RoboCup 2004: Robot Soccer World Cup VIII, LNCS, pages 247–261. Springer, 2005.

• J.-P. Cánovas, K. LeBlanc, and A. Saffiotti. Cooperative object localization using fuzzy logic. In Procs of the IEEE Int Conf on Methods and Models in Automation and Robotics (MMAR), pages 773–778, 2003.

• A. Saffiotti and K. LeBlanc. Active perceptual anchoring of robot behavior in a dynamic environment. In Procs of the IEEE Int Conf on Robotics and Automation (ICRA), pages 3796–3802, San Francisco, CA, 2000.


Chapter 2

Related Work

In this chapter a discussion of related work is presented. Existing single-robot and cooperative approaches to the anchoring problem are first described, and a number of their limitations are discussed. The anchoring problem is then situated with respect to a number of important related problems. Specifically, the relationships between anchoring and symbol grounding, binding, perception management, tracking, data association, and information fusion are discussed.

2.1 Anchoring

In chapter 1, anchoring was described as the process of creating and maintaining associations between descriptions and perceptual information corresponding to the same physical objects. The problem of performing anchoring in robotic systems was originally acknowledged by Saffiotti [135], and a detailed formalisation was first proposed by Coradeschi and Saffiotti [38]. The term “anchor” was borrowed from the field of situation semantics [14], where the term is used to refer to the assignment of variables to individuals, relations, and locations. Although previous works had examined the problem of linking descriptions to their referents from philosophical and linguistic standpoints [63, 134], the relevant concepts had yet to be applied to the corresponding computational problem facing artificial systems equipped with sensors.

The anchoring problem has often been overlooked in robotics, and it is often addressed using ad hoc approaches. Anchoring has, however, gradually gained recognition as an important challenge for robotic systems, as is evidenced by a number of workshops, special issues, and surveys which address the topic [39, 41, 42, 102].


2.1.1 Single-Robot Anchoring

Anchoring Foundations

A number of approaches to single-robot anchoring have been proposed over the years, and many of these were inspired by the formalisation proposed by Coradeschi and Saffiotti [38]. Their work has been extended and studied in a number of subsequent works [40, 104]. The later versions of their framework include the following components.

• A symbol system which contains: a set of symbols which denote objects (e.g. “cup-22”); a set of unary predicate symbols, which describe symbolic properties of objects (e.g. “green”); and an inference mechanism which uses these components.

• A perceptual system which contains: a set of percepts (collections of measurements assumed to have originated from the same object) and a set of attributes (measurable properties of percepts).

• A predicate grounding relation, which embodies the correspondence between the unary predicates in the symbol system and the attributes in the perceptual system.

The symbol system assigns unary predicates, such as {green}, to symbols which denote objects, such as “cup-22”. The perceptual system continuously generates percepts, such as regions in images, and associates them with measurable attributes of corresponding objects, such as HSV colour values. The associations between symbols and percepts are reified via structures called anchors. Each anchor contains one symbol, one percept, and estimates of one object's properties. Anchors are time indexed, since their contents can change over time.

Anchors are managed using the following three steps.

• Anchor creation can occur in both bottom-up and top-down manners; both are event-based. Bottom-up anchor creation occurs when the perceptual system generates a percept which matches an a priori description of interesting objects, and which does not match any existing anchors. An arbitrary symbol is assigned to such anchors. Top-down anchor creation occurs when the symbol system provides a symbol and corresponding symbolic description it wants to anchor, and this description matches an existing percept but no existing anchors. Top-down anchoring occurs only when a provided symbolic description does not match the a priori description used to trigger bottom-up anchor creation.

• Anchor maintenance involves periodically assigning newly received percepts to appropriate anchors, as well as updating the object property estimates stored in anchors. These updates can include predictions as well as updates based on newly received percepts.


• Anchor deletion occurs when an anchor has not been updated with perceptual information within a certain time limit. The time to deletion can be decreased if an expected observation did not occur.
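The three management steps above can be sketched as a simple update cycle. The code below is only illustrative: the symbol format, matching predicate, and time-out policy are placeholders, not the actual mechanisms of the cited framework.

```python
class Anchor:
    """Associates one symbol with one percept; anchors are time indexed."""
    def __init__(self, symbol, percept, stamp):
        self.symbol = symbol
        self.percept = percept     # latest associated perceptual information
        self.last_update = stamp

def anchoring_step(anchors, percepts, interesting, matches, now, ttl=2.0):
    """One cycle of anchor maintenance, bottom-up creation, and deletion."""
    for p in percepts:
        for a in anchors:
            if matches(a.percept, p):          # maintenance: refresh the anchor
                a.percept, a.last_update = p, now
                break
        else:                                  # no existing anchor matched
            if interesting(p):                 # bottom-up creation
                anchors.append(Anchor(f"obj-{len(anchors)}", p, now))
    # deletion: drop anchors not updated within the time limit
    return [a for a in anchors if now - a.last_update <= ttl]

# Example: anchor red percepts; percepts within 1.0 m belong to the same object.
red = lambda p: p["colour"] == "red"
near = lambda a, b: abs(a["x"] - b["x"]) < 1.0
anchors = anchoring_step([], [{"colour": "red", "x": 0.0}], red, near, now=0.0)
anchors = anchoring_step(anchors, [{"colour": "red", "x": 0.2}], red, near, now=1.0)
# One anchor remains, refreshed at t=1.0; without further percepts it is
# deleted once the time limit expires.
```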

Anchoring and Concepts

Chella et al [35] also extend the work by Coradeschi and Saffiotti [38]; their work uses conceptual spaces to represent information. Gärdenfors proposed conceptual spaces as a means to bridge the gap between symbolic and sub-symbolic representations [66]. This makes them well-suited for anchoring, which also deals with both symbolic and sub-symbolic information. Conceptual spaces will be discussed in more detail in section 4.2. In the work by Chella et al, a conceptual space is defined which includes all dimensions of interest for anchoring in the given application (e.g. hue, saturation, value, x-position and y-position). This space is used to represent both predicates and percepts. The work by Chella et al provides three main advantages compared to the previously described framework.

1. Operations performed on anchors and descriptions can be generalised, since information is always represented in the same type of conceptual space. This can avoid some application or configuration specific operations within the anchoring process itself; it can also make it easier to add new predicates and new sensors to the system in a modular fashion.

2. The integration of symbols and percepts is clarified, since both descriptions and perceptual information are represented in the same conceptual space. Symbols are therefore perceptually grounded, and perceptual information from multiple sensors can be conveniently represented and used by cognitive processes. The common representation also simplifies matching and fusion operations; these are fundamental for anchoring, as will be discussed in later chapters.

3. The temporal representation of anchors is clarified, since each anchor can be represented as a trajectory in a conceptual space. This facilitates prediction.
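As a rough illustration of these ideas (the domain and dimension names below are assumed for the example, not taken from Chella et al's system), a conceptual space can be modelled as a set of named domains, each with its own dimensions, and an anchor's temporal evolution as a time-indexed trajectory of points in that space:

```python
# A conceptual space: domains map to tuples of dimension names (assumed here).
SPACE = {
    "colour": ("hue", "saturation", "value"),
    "position": ("x", "y"),
}

def point(**values):
    """A point in the conceptual space, keyed by domain; a domain is included
    only when all of its dimensions have been supplied."""
    return {dom: tuple(values[dim] for dim in dims)
            for dom, dims in SPACE.items()
            if all(dim in values for dim in dims)}

# An anchor as a trajectory: time-indexed points, which facilitates prediction.
trajectory = {
    0.0: point(hue=120, saturation=0.8, value=0.9, x=2.0, y=8.0),
    1.0: point(hue=120, saturation=0.8, value=0.9, x=2.1, y=8.3),
}
```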

The framework by Coradeschi and Saffiotti [38] has also been extended by Daoutis et al [45] to incorporate high-level conceptual reasoning about perceived objects. This is achieved by combining anchoring with methods from knowledge representation and reasoning (KRR) [114, 103]. A KRR system is used which reasons using ontologies of concepts which describe “common sense” knowledge [111, 112, 97, 96, 95]. This type of knowledge can be used together with abstractions of perceptual information to allow artificial systems to access contextual and conceptual information about perceived objects. This is particularly useful for communicating with humans, as well as with other artificial systems which reason at a symbolic level. Work by Melchert et al [115] examined the use of spatial relations, in particular, to facilitate interaction with a human user.

This work has been combined with a full KRR system by Daoutis et al [45], resulting in a system which is able to reason about perceived objects at a conceptual level. The system can also communicate its knowledge about perceived objects and their properties via a natural language interface.

Bonarini et al [23, 24] propose a framework in which anchoring involves the bottom-up identification of perceived objects as instances of known concepts. Instances are similar to the anchors used in the previously described frameworks. Concepts are effectively structured a priori descriptions of objects of interest. As in the conceptual spaces framework proposed by Gärdenfors, concepts are defined as sets of properties which can be specialisations and generalisations of one another. Instances include estimates of object properties based on parent concepts as well as perceptual information.

Several instances of the same concept may exist, and each instance is associated with exactly one concept: the most “specific” concept which matches the perceived properties of the object. A distinction is made between substantial properties, which are unchanging and inherent to a concept (e.g. a ball's shape is round), and accidental properties, which are dynamic properties associated with a particular instance of a concept (e.g. the ball is at a particular position). This distinction is similar to the distinction between matching and action properties proposed by Saffiotti [135]. Only substantial properties are considered when comparing concepts with observed objects.

Bonarini et al consider two main types of uncertainty. First, uncertainty is considered during the transformation of raw sensor data into single-valued features (similar to attributes in the previously described frameworks). This transformation can include low-level filtering as well as compensation for certain types of sensor errors. In later works features are also associated with a reliability measure [24]. The second type of uncertainty considered is uncertainty in the match between concepts and their instances. A reliability measure is associated with each instance, which represents the degree of matching between the concept and the instance. This measure takes into consideration the number of domains which match as well as how well they match. As in the conceptual spaces framework by Gärdenfors, the proximity of features and properties can be used as a measure of similarity.
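Such a reliability measure might be sketched as follows. This is only a guess at the spirit of the approach, not Bonarini et al's actual formula: per-domain similarities based on proximity are combined into a single score that reflects both whether all substantial properties match and how well they do.

```python
def domain_similarity(feature, prop, scale):
    """Similarity in one domain: 1 at zero distance, decreasing with distance."""
    return max(0.0, 1.0 - abs(feature - prop) / scale)

def instance_reliability(features, concept, scales, threshold=0.5):
    """Combine per-domain similarities over the concept's substantial
    properties; accidental properties in `features` are simply ignored."""
    sims = [domain_similarity(features[d], concept[d], scales[d]) for d in concept]
    if any(s < threshold for s in sims):
        return 0.0                    # a substantial property failed to match
    return sum(sims) / len(sims)      # overall quality of the match

concept_ball = {"diameter": 0.22, "hue": 30}   # substantial properties only
obs = {"diameter": 0.20, "hue": 35, "x": 2.0}  # x is accidental: not compared
r = instance_reliability(obs, concept_ball, {"diameter": 0.1, "hue": 60})
```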

The structured way in which Bonarini et al use concepts allows object types to be conveniently identified, and useful properties can be inferred from this classification. However, the approach is limited in how it represents objects of interest. In particular, accidental properties cannot be used to describe objects of interest. Position information is usually an accidental property, and it is often one of the most salient object properties; as such, it is often used to describe and recognise objects.


Applied Anchoring

Anchoring is also addressed in a number of works which focus on specific applications, as opposed to the previously discussed works which focus on the anchoring problem itself. Such works often provide few details regarding how anchoring is performed, although the anchoring problem is nonetheless addressed for the given application. For instance, work by Heintz et al [79] lists anchoring as one of the key components in a knowledge processing middleware used to process perceptual information and make it available to cognitive layers. Shapiro and Ismail [143] describe a general robotic architecture in which perceived objects are represented using tuples of values. High-level object descriptions are aligned with perceived objects, via application specific alignment functions, in order to allow various objects in the environment to be detected and described. Modayil and Kuipers [117] detect clusters of raw sensor values which correspond to objects in the environment; information extracted from these clusters is used to track and categorise perceived objects.

These tracked objects can then be associated with symbols used for high-level reasoning. Fritsch et al [64] perform people-tracking using an extended version of the anchoring framework proposed by Coradeschi and Saffiotti [38].

The extended framework combines individually detected parts of humans into composite objects, which can be detected and tracked. A number of works focus on grounding linguistic terms to perceptual data in order to facilitate human-robot interaction [101, 160, 133]. In these works, symbols used to refer to objects are associated with some form of perceptual representation; this representation often depends on the sensor configuration used.

2.1.2 Cooperative Anchoring

In chapter 1, cooperative anchoring was described as the process of performing anchoring in systems which have descriptions and perceptual information distributed across multiple agents. Few works have attempted to address this problem explicitly.

Bonarini et al [23, 24] briefly describe an extension of their framework which allows instances of concepts to be exchanged between robots. The approach assumes that all robots share the same set of concepts; exchanged instances are then compared and combined using similar methods to those used when associating existing instances with new perceptual information. The resulting instances are then matched against the set of known concepts. These global instances do not replace exchanged local instances; instead, they complement the locally created instances with information received from other robots.

Some single-robot anchoring approaches are applied in multi-robot systems by treating robots as sensors which belong to the same overall system. In these approaches, perceptual information is collected from distributed sensors, but descriptions of objects of interest, associations with perceptual information, and the anchoring process itself are not distributed. One example of such an approach is that proposed by Daoutis et al [45]; in their approach, anchoring is performed in a robot ecology, where both robots and fixed sensors are used to detect properties of objects of interest. Another example is an approach by Mastrogiovanni et al [108], which performs symbolic data fusion in an ambient intelligence scenario. The approach allows multiple sensors to contribute to the knowledge of the overall system; sensor data is then associated with entities in a centralised knowledge base.

2.1.3 Overcoming the Limitations of Existing Approaches

In the above discussion, four main approaches which explicitly address the anchoring problem have been described: Coradeschi and Saffiotti [38], Chella et al [35], Daoutis et al [45], and Bonarini et al [23]. Although some of these approaches have been extended and refined in other works, the fundamental characteristics of the approaches remain the same. Table 2.1 provides a summary of some of the limitations of these approaches.

This thesis proposes a novel anchoring framework which addresses these limitations. The proposed framework addresses both the single-robot and cooperative anchoring problems, and it allows heterogeneous and uncertain information to be considered. It also provides a flexible strategy for managing descriptions of objects of interest.

Earlier versions of the framework have been described by LeBlanc and Saffiotti [92, 93]. The approach was inspired by several of the previously described approaches, and it inherits advantages from a number of these. In particular, the advantages resulting from the use of conceptual spaces for anchoring, as first proposed by Chella et al [35], are exploited and extended in a number of ways. The key differences between the proposed framework and previous approaches are described here.

Uncertainty

The proposed framework addresses uncertainty more comprehensively than previous approaches. In particular, descriptions of objects, information obtained from sensors, as well as estimates of object properties, are all represented as generic regions in conceptual spaces. These regions can be multimodal and complex, as opposed to the points and crisp regions used in previous approaches. This allows various types of uncertainty to be represented and considered both while tracking perceived objects and when comparing descriptions and perceptual information.
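To give a flavour of what such regions can look like (a deliberately simplified sketch; the representation actually used in this thesis is developed in later chapters), consider fuzzy regions over a discretised one-dimensional position domain. A region can be multimodal, and the degree to which a description and an observation may refer to the same object can be taken as the height of their intersection:

```python
# Fuzzy regions over a discretised 1-D position domain (cells 0..9), with
# membership grades in [0, 1]; note that a region may be multimodal.
observed = [0.0, 0.2, 1.0, 0.3, 0.0, 0.0, 0.4, 1.0, 0.4, 0.0]   # two peaks
described = [0.0, 0.0, 0.0, 0.0, 0.2, 0.6, 1.0, 1.0, 0.6, 0.2]  # "around cells 6-7"

def intersect(a, b):
    """Pointwise minimum: the fuzzy intersection of two regions."""
    return [min(x, y) for x, y in zip(a, b)]

def match_degree(a, b):
    """Height of the intersection: degree to which the regions are compatible."""
    return max(intersect(a, b))

m = match_degree(observed, described)  # 1.0: the second peak fits the description
```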


Table 2.1: Limitations of the main approaches to anchoring found in the literature. Limitations marked with an X apply to the method in the corresponding column.

Columns: Coradeschi and Saffiotti | Chella et al | Daoutis et al | Bonarini et al | Proposed Approach

Uncertainty in perceptual information is not fully represented; only crisp values are used (possibly with an associated reliability measure).

X X X X

Uncertainty in matches between descriptions and perceptual information is ignored, or computed using only the number of domains in which entities match.

X X

Representations of descriptions and estimates of object properties are in sensor, domain, or application specific formats, making it difficult to generalise operations and add new sensors, domains, concepts and descriptions.

X X

Representations of perceived information consider only numeric sensor data; symbolic and abstracted perceptual information are not considered.

X X X

Descriptions of objects of interest are represented both as a priori sensor calibrations (for bottom-up anchoring) and symbolic predicates (for top-down anchoring), making them difficult to manage and update.

X

Descriptions are only symbolic; support for numeric descriptions of objects of interest is not provided.

X X

Descriptions only contain substantial properties, used to define concepts; support for descriptions of particular object instances is not provided.

X

Multiple descriptions cannot match the same observed object.

X X X X

Domains in which information can be represented are not treated separately, resulting in increased computational complexity.

X

Names are only used to denote objects; names are not used for data association or for associating descriptions and perceptual representations.

X X X X

Cooperative anchoring is either not addressed, or only briefly discussed.

X X X X


Heterogeneity

The proposed framework exploits the richness of conceptual spaces to allow various types of information to be used as descriptions and perceptual information. In particular, both descriptions and perceptual information can be symbolic or numeric. Previous approaches typically assume that descriptions are symbolic, and perceptual information is numeric. These assumptions are particularly limiting for systems in which many heterogeneous devices are used, such as network robot systems [141] and robot ecologies [138, 139].

Descriptions

The proposed framework is more general and flexible than previous approaches when it comes to representing descriptions of objects of interest. This allows a broader range of applications to be addressed. Descriptions can be symbolic or numeric, and they can span multiple domains. They are represented using generic regions in conceptual spaces, thus avoiding the need for sensor-specific representations or calibrations. The framework also allows groups of descriptions to be activated, deactivated, and stored for particular applications. Only bottom-up anchoring is needed: anchors are created when new objects are detected which match active descriptions of objects of interest.

Descriptions can also be named, and names can be used to constrain associations between descriptions and perceptual information. The proposed approach also allows multiple descriptions to be associated with a single anchor, and a single description to be associated with multiple anchors. None of the previous frameworks support the same object matching two descriptions, for instance: “the green cup in the kitchen” and “objects on the table”. In the proposed approach, associations are based solely on the characteristics of descriptions and perceived information, without any artificial constraints.

The proposed treatment of descriptions replaces the a priori information used for bottom-up anchoring in Loutfi et al [104]; it also avoids the need for top-down anchoring, since top-down requests can be handled by creating and activating new descriptions. The proposed treatment of descriptions could also be used to represent the concepts used by Bonarini et al [23, 24] – their approach to representing concepts is similar to the conceptual spaces approach in a number of ways. But unlike the work by Bonarini et al, the proposed framework also allows all object properties, including “accidental” properties (such as object positions), to be used to describe objects of interest.

Note that although the conceptual spaces approach does lend itself to the representation of concepts and high-level knowledge, this thesis does not focus on high-level reasoning using concepts and object classes. This is left as a potential direction for future work.


Domains

In the proposed approach, dimensions in conceptual spaces are grouped into domains (such as colour, position, and shape). This grouping is included in the conceptual spaces framework [66], but it was not included in the anchoring framework proposed by Chella et al [35]. Considering domains separately reduces the complexity of matching and fusion operations, and it allows domains to be treated independently with respect to information representation and processing. The separation also makes it easier to add, remove, and update the domains considered by the system.
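Treating domains separately can be sketched as computing a match per domain and combining the results conjunctively; each domain then keeps its own representation and its own matching cost. The interval representation below is assumed purely for illustration:

```python
def match_in_domain(a, b):
    """Per-domain match on interval representations: 1.0 if the intervals overlap."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    return 1.0 if a_lo <= b_hi and b_lo <= a_hi else 0.0

def overall_match(item1, item2):
    """Conjunctive (min) combination over the domains both items represent;
    each domain is evaluated independently."""
    shared = item1.keys() & item2.keys()
    return min(match_in_domain(item1[d], item2[d]) for d in shared)

desc = {"hue": (100, 140), "x": (0.0, 5.0)}     # description of objects of interest
percept = {"hue": (118, 122), "x": (2.0, 2.4)}  # a perceived object
m = overall_match(desc, percept)  # 1.0: the items match in every shared domain
```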

Names

In previous approaches, names are used only to denote objects. In the proposed framework, names are also used in the anchoring process itself. In particular, the framework allows object names to be perceived (e.g. by an RFID reader in a robot ecology). This allows names to constrain data association when applicable. Object descriptions can also be named, which means that names can also constrain matches between descriptions and perceived information about objects of interest.

Cooperative Anchoring

The proposed framework fully addresses both the single-robot and cooperative anchoring problems. It allows all available information about objects to be accessed transparently, regardless of whether the information was produced locally or received from other robots. This is achieved by having local anchors, which contain only information produced locally, and global anchors, which contain information from all sources. Both local and global representations are always available, and global anchors are created even if information about a given object is based solely on locally obtained information. Local anchors can be reliably maintained independently of other robots, making them particularly useful for task execution. Global anchors improve information completeness and robustness, and they can be particularly useful for building shared representations, which are useful for reasoning and coordination [24, 72].
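The local/global split can be sketched as two stores per robot. The fusion step below (simple averaging of position estimates) is a placeholder; the actual framework fuses regions in conceptual spaces:

```python
def fuse(estimates):
    """Placeholder fusion: average a list of (x, y) position estimates."""
    xs, ys = zip(*estimates)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def update_global(local_anchors, received):
    """Build global anchors from local anchors plus information from other robots.

    A global anchor is created even when only local information is available,
    so local and global views can both always be accessed transparently."""
    return {name: fuse([pos] + received.get(name, []))
            for name, pos in local_anchors.items()}

local = {"cup-22": (2.0, 8.0)}          # maintained independently of other robots
from_others = {"cup-22": [(2.4, 8.4)]}  # estimates shared over the network
global_view = update_global(local, from_others)  # cup-22 at roughly (2.2, 8.2)
```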

2.2 Related Challenges

The anchoring problem is broad, and it borders on a number of other important problems in robotics. In this section a number of these problems are briefly discussed. The intention of this section is mainly to situate anchoring with respect to neighbouring problems, rather than to provide a full discussion of existing work in these fields.


2.2.1 Symbol Grounding

Anchoring is sometimes described as a subset of the symbol grounding problem [78], which is the problem of associating symbols with their referents. Object descriptions are indeed often associated with symbols; for instance, the symbol “cup-22” might be used to denote an object described by a particular colour, size, and shape. However, unlike symbol grounding, which considers all types of symbols, anchoring only aims to ground symbols which denote physical objects.

2.2.2 Binding

Another problem which is closely related to anchoring is the binding problem [152, 18], which is the problem of gathering various object properties into one coherent entity. Anchoring allows object properties associated with an object's description, as well as properties obtained from perception, to be linked. Properties in this sense can span various domains (such as position, colour, and shape), and they can have widely varying characteristics.

2.2.3 Perception Management

Anchoring can be used to facilitate perception management [132], which is an extension of sensor management [2]. Sensor management involves low-level allocation and control of perceptual resources; perception management extends this to include the use of high-level information to guide perception. In particular, perception management can involve the use of active perception [6] to select sensing actions which are expected to maximise information gain. For instance, in anchoring, this might mean that objects about which available information is thought to be unreliable (for instance, objects which have not been observed for some time) should be examined first. Saffiotti and LeBlanc [140] use anchoring to assist in controlling gaze in a dynamic environment with multiple objects of interest. Guirnaldo et al [74] propose a similar approach, in which anchoring is used to assist in the allocation of sensing resources in a multi-robot system. Perceptual actions can also be triggered by errors or ambiguity in the anchoring process itself. When such problems are communicated to an action planner, corresponding sensing actions or recovery procedures can be initiated [30, 88].

2.2.4 Tracking

In order to ensure that associations between object descriptions and perceptual representations are up to date, new perceptual information must be taken into account; this is accomplished by tracking [8, 10] objects over time. Tracking uses state estimation [109, 67, 73, 11] techniques to maintain persistent esti-
