
DEGREE PROJECT IN COMPUTER SCIENCE, SECOND LEVEL
STOCKHOLM, SWEDEN 2015

Exploiting Cloud Resources For Semantic Scene Understanding On Mobile Robots

ANDREAS BRUSE

KTH ROYAL INSTITUTE OF TECHNOLOGY


Exploiting Cloud Resources For Semantic Scene Understanding On Mobile Robots

Användning av molnresurser för semantisk förståelse av omgivningar på mobila robotar

ANDREAS BRUSE bruse@kth.se

Master's Thesis in Computer Science
Supervisors: Andrzej Pronobis, John Folkesson
Examiner: Danica Kragic
2015-06-01


Abstract

Modern-day mobile robots are constrained in the resources available to them. Only so much hardware can be fitted onto the robotic frame, and at the same time they are required to perform tasks that demand substantial computational resources, access to massive amounts of data and the ability to share knowledge with other robots around them.

This thesis explores the cloud robotics approach, in which complex computations can be offloaded to a cloud service with access to vast computational resources and massive data sets. The Robot Operating System, ROS, is extended to allow the robot to communicate with a high-powered cluster, and this system is used to test our approach on a task as complex as semantic scene understanding. The benefits of the cloud approach are exploited to connect to a cloud-based object detection system and to build a categorization system relying on large-scale datasets and a parallel computation model. Finally a method is proposed for building a consistent scene description by exploiting semantic relationships between objects.


Sammanfattning

Modern mobile robots have limited resources. Only so much hardware fits on the robot, and yet they are expected to perform tasks that demand extreme amounts of computing power and access to enormous amounts of data, while also communicating with other robots around them.

This thesis explores cloud robotics, where complex computations can be offloaded to a cloud service that has access to this large amount of computing power and room for the large data sets that are needed. The Robot Operating System, or ROS, is extended to support communication with a cloud service, and this system is then used to test our solution on a problem as complex as understanding a scene or environment on a semantic level. The benefits of a cloud-based solution are exploited by connecting to a cloud-based object recognition system and by building an object categorization system that relies on large-scale data sets and parallel computation models. Finally, a method is proposed for building a reliable scene description by exploiting semantic relationships between objects.


Acknowledgments

I would like to thank the following people for their help, for allowing me to distract them from their regular duties with questions, and for their support and guidance.

Andrzej Pronobis
Babak Rasolzadeh
John Folkesson
Thorbiörn Fritzon
Giuliano Manno


Contents

1 Introduction
   1.1 Contributions
   1.2 System Overview
      1.2.1 The Robot
      1.2.2 Object Detection
      1.2.3 Exploiting Semantics
   1.3 Related Work
   1.4 Organization Of Thesis
2 Infrastructure
   2.1 The Robot
      2.1.1 The Platform
      2.1.2 ROS
   2.2 The Cloud
      2.2.1 The Cluster
      2.2.2 BigBrain
   2.3 Extending ROS for cloud robotics
      2.3.1 Communication Using ROSTCPTransport
      2.3.2 Communication Using HTTP
   2.4 Visualization Tool
3 Object detection
   3.1 GIST Based Object Detection
      3.1.1 Introduction To GIST
      3.1.2 Distributed GIST Search
   3.2 Sliding Window
   3.3 Training Data
      3.3.1 LabelMe
      3.3.2 Preprocessing
4 Exploiting Semantics
   4.1 Algorithm
   4.2 Construction of P-matrix
   4.3 Intuitive Correctness
   4.4 Convergence Times
   4.5 Random Initial Weights
5 Evaluation
   5.1 Sliding Window Parameters
   5.2 Object Detection Parallelization
   5.3 Semantic Filtering
      5.3.1 Random Noise
      5.3.2 Random Noise Adjusted For Frequency
      5.3.3 Qualitative Investigation Of Results
6 Discussion And Conclusions
7 Future Work
Bibliography


Chapter 1

Introduction

Computation on mobile robots is limited by the physical space available to carry hardware. At the same time the tasks they are required to perform are getting more and more complex, requiring more and more resources. For example, humans tend to use context, a body of semantic information, to parse a scene: we exclude hypotheses about what objects we are looking at based on the context given by other objects, locations, smells and sounds. The computational resources required for executing these tasks seem to increase faster than the rate of growth in performance of hardware that fits on a mobile robot.

Meanwhile, projects like Hadoop (Shvachko et al., 2010) and GraphLab (Low et al., 2010) provide infrastructure for creating large distributed platforms across computer clusters that would be well suited for many of the more computationally intensive robotic tasks. These tasks also require access to massive amounts of data that can be queried in a random fashion faster than single-machine systems can currently accomplish. Finally, some tasks would benefit from multiple robots being able to communicate with each other to exchange knowledge gained during their lifetime.

This thesis takes on the cloud robotics approach, in which complex computations can be offloaded to a cloud service. Specifically, the BigBrain cluster developed at OculusAI is leveraged for its computational resources and massive data storage. With the extra computational resources, an object detection and categorization system which uses large-scale datasets and parallel computation is built. The cloud robotics approach is then tested by using this system to accomplish semantic scene understanding.

1.1 Contributions

This thesis explores and implements two different ways of extending ROS (the Robot Operating System) so that a robot can communicate with a cloud service. This is used to communicate with the BigBrain cluster at OculusAI.

It further presents an object detection and categorization system built on a parallel computation model designed to use large scale datasets and to be run on a cluster, scaling horizontally.


Figure 1.1: Functional overview of the system: (1) data acquisition, where a picture is taken on the robot; (2) detection, where the image is split into windows and distributed GIST searches run on several nodes before the results are merged; (3) semantic filtering of the merged detections.

Additionally this thesis presents a novel approach to leveraging semantic information to improve object detection, with an algorithm that can filter out false positive object detections by using prior information about how likely two objects are to appear together in the same picture, based on object occurrences in a training set.

1.2 System Overview

The system has three major stages of operation (Figure 1.1). Images of the scene are first acquired by sensors on the robot. Those images are then transferred to the BigBrain cluster using a custom-built extension for ROS. Once an image has been transferred, a distributed object detection job is run on subregions of the image.

The results are then merged and weighted using pre-computed semantic relations between objects. Finally the results are sent back to the robot which can act on the information discovered by the system.

1.2.1 The Robot

A TurtleBot (Willow Garage) clone was built using the iRobot Create as a base, various off-the-shelf parts for the trays, and a Microsoft Kinect as the only sensor. A stock laptop was used for all the on-robot processing and for communication with off-robot systems. As the robot drives around, it collects images using the Kinect sensor and sends them to an off-robot cluster using a custom-built extension to ROS.

1.2.2 Object Detection

Once an image has been transferred from the robot to the cluster it is passed to a distributed object detection and classification system. The object detection runs a sliding window search (Viola and Jones, 2001) using GIST (Oliva and Torralba, 2001) descriptors and uses simple euclidean distances as the distance measure. The images in the training set that most closely resemble the input window are returned and the best matches for all the windows are passed on to the next stage.

1.2.3 Exploiting Semantics

The final step in the off robot process is to use prior semantic knowledge about the objects detected to be able to filter out false positives. Specifically, prior knowledge about how often object a appears in pictures together with object b is used. The idea is to filter out objects if they do not seem to fit in with the other objects in a scene.

1.3 Related Work

Arumugam et al. (2010) build a cloud computing platform for collaborative robotics and show that it can reduce the computation time of Simultaneous Localization and Mapping (Thrun, 2008). A distributed version of FastSLAM (Thrun et al., 2005) is developed using a Map/Reduce (Dean and Ghemawat, 2004) architecture which is run on top of a Hadoop (Shvachko et al., 2010) cluster. Instead of seeing the cluster as an independent service which the robot communicates with to augment its abilities, the system uses a ROS setup where the master node is placed on the remote cluster itself. All necessary sensor data is then sent to the cluster to be used as input to the SLAM algorithm.

White et al. (2010) introduce several ideas on how to use the Map/Reduce (Dean and Ghemawat, 2004) distributed computation architecture for different computer vision techniques and algorithms. They show implementations for, among other things, sliding windows for detection of a single object in an image, classifier training, background subtraction, and clustering. All of their experiments are built using Hadoop (Shvachko et al., 2010), while a custom system built on top of BigBrain (see chapter 2.2.2) was created for the purposes of this thesis.

Perko and Leonardis (2010) explore using co-occurrences with spatial information in a voting scheme to improve detection results. For every picture in their training set and every object in those images, a two-dimensional probability-distribution-like function is collected by calculating the relative offset to all other objects in the image. This means that, given a known object, the other object detections in the image can be weighted using this probability-distribution-like function. At the detection stage all of these distributions are then used in a voting manner. Each detection in an image votes for all other detections. If the other detection is in a region of the image that the first detection would imply is likely, it gets a higher vote.

Galleguillos et al. (2008) also use co-occurrence as one of the factors to improve object detection. In addition to co-occurrence they use the spatial relationship between objects. In this case they have four pre-defined spatial relationships available: below, above, inside and around. Frequency matrices are built using annotated training data for both spatial relationships and co-occurrences. Monte Carlo integration is then used to maximize the log likelihood of the occurrences of the observed labels. The likelihood is a function of the number of images n, and the full object frequency matrix which is n × n. This means that the running time at detection time would increase when adding more object labels to the training data.

1.4 Organization Of Thesis

This thesis starts off in Infrastructure by describing the infrastructure used during the work. This chapter includes a description of the robot built, the software stack running on top of it and the cloud platform and cloud service that the robot was communicating with.

In Object detection the system for detecting and classifying images is described. There is an introduction to the image descriptor used, and the system for parallelizing the object detection in the cloud service is explained, along with a description of the training data that was used.

Exploiting Semantics describes a novel solution that takes the semantic relationships between the objects into account to reduce the number of false positives from the object detection system.

The evaluation of the techniques used in the thesis is presented in Evaluation and in Discussion And Conclusions the results and findings of the thesis are discussed.

Finally, Future Work goes through some possible next steps for taking the work of this thesis further.


Chapter 2

Infrastructure

Modern robots are almost never built entirely from scratch. They have a myriad of sensors, actuators, different onboard computers, batteries, and complex software taking care of everything from talking to the hardware to sending data wirelessly to other robots to planning how to get from point A to point B in the most clever way.

A number of companies have sprung up offering various robot parts and platforms, and this thesis takes its inspiration from some of them. Mostly off-the-shelf parts were used to put together a mobile robot that runs open source robotic software developed by researchers from all over the world. On top of that, a way to communicate with a cloud service is built, letting us run far more complex computations than we could have on the robot itself.

This chapter starts off in 2.1 describing the hardware and the software that is available to everyone and then in 2.2 describes the cloud that was used. In 2.3 it is explained how the communication between the robot and the cloud was solved and finally in 2.4 a tool that was used to visualize the result of the cloud computation is presented.

2.1 The Robot

2.1.1 The Platform

A mobile robot was built with inspiration from Willow Garage's TurtleBot. The TurtleBot is a mobile robotics platform designed to be low cost and intended for personal use. The idea is to use only easily accessible hardware and open source software. The hardware structure on top of the base has blueprints available for download, and the robot uses the open source ROS software as its software stack. As a base it uses the iRobot Create, which has a lot of similarities to the iRobot Roomba vacuum cleaner robot, only with the vacuuming module taken out. The base provides sensors for odometry and collision detection. It also powers the external sensors on the robot.

On top of the iRobot Create base, the TurtleBot adds three layers of platforms, or trays, designed for easy mounting of external sensors. This is also where the on-board computer is designed to be positioned.

The TurtleBot comes with two sensors external to the base: a Microsoft Kinect used for vision (both 2D and 3D) and an accelerometer used for additional odometric information.

The robot designed for this project was similar to the TurtleBot in most ways. Just like the TurtleBot, it uses the iRobot Create as a base. The blueprints for the structure were not used; instead a similar structure made out of plexiglass and steel ribs was created. The Microsoft Kinect is used, but the accelerometer was left out as it was not needed for navigation.

2.1.2 ROS

The TurtleBot was designed to run on ROS, and the same option was chosen for the robot in this project. ROS, or "the Robot Operating System", is despite its name not actually an operating system in the classic sense. It is a software stack designed to provide a framework for developing robot software components that can talk to each other. The aim is to provide a set of core components that any additional robot software can build around and on top of, so that code from one robot researcher can easily be reused by another. To achieve this, an architecture of separate modules with a strict communication protocol has been developed.

ROS is more than a set of tools to help separate modules communicate with each other, though. It is also a build system, a configuration manager and a software repository. For this thesis, however, we focus on how to work with ROS modules that communicate with robot sensors, since we are interested in how to set up communication between the robot and the cloud.

ROS architecture

The modules in ROS are called nodes and together they form a peer-to-peer network which in ROS terms is called the Computation Graph. ROS provides two ways for these nodes to communicate with each other. The first is through topics. Topics provide publish/subscribe semantics where one node can send a message by publishing it to a topic. Another node can then receive these messages by subscribing to the same topic. Typically a node publishes to a topic with a specific name for a specific type of data, and nodes that are interested in consuming such data subscribe to the topic with the same name.

The other way of communicating is through request/reply semantics. One node can set up a service which another node can then request data from. With this type of communication a node can request data from a service when it requires it, instead of passively listening to a stream of data as in the publish/subscribe scenario.

The message data structure used for this communication is a collection of typed fields. Every type of node can define its own variation of different fields as long as the node that is meant to consume the message adheres to the same definition.


Figure 2.1: Negotiating publishing and subscribing with the master node:

1. A tells the master node it wants to publish on a topic.

2. B tells the master node it wants to subscribe to the same topic.

3. The master node tells A that B wants to subscribe to the given topic.

4. A will now send all messages for the topic directly to B.

The types allowed are simple primitive types like integers, floats, booleans, etc., with the addition of arrays consisting of the same primitive types, or of other arrays.
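As a minimal sketch of the publish/subscribe mechanism described above (assuming the rospy client library and the standard std_msgs package; the topic and node names are only illustrative):

import rospy
from std_msgs.msg import String

def talker():
    # Publish a String message on the 'chatter' topic once per second.
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker')
    rate = rospy.Rate(1)
    while not rospy.is_shutdown():
        pub.publish(String(data='hello'))
        rate.sleep()

def callback(msg):
    # Called for every message received on the subscribed topic.
    rospy.loginfo('received: %s', msg.data)

def listener():
    rospy.init_node('listener')
    rospy.Subscriber('chatter', String, callback)
    rospy.spin()

The publisher and subscriber never talk to each other directly at startup; the master node (described below) brokers the connection between them.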

ROS runtime

ROS has three mandatory parts: the Master Node, the Parameter Server and rosout. The Master Node is the centralized communication center for any running ROS system. It keeps track of all nodes in the current system and negotiates communication channels between them. Any time a node wants to send a message to another node, it first has to ask the master node to look up the name and to set up a direct connection between them (see Figure 2.1). The Parameter Server keeps track of all the configuration parameters for the different nodes. This can include things such as what USB port a laser scanner is connected to or what the starting angle for a pan-tilt-zoom unit is. rosout is the name of the console logging system in ROS. It consists of a node that subscribes to the /rosout topic. Any message that gets published to this topic is picked up by the rosout node, recorded to a text file and rebroadcast to the /rosout_agg topic. Any optional log viewer can then subscribe to /rosout_agg to display the messages to the users.


2.2 The Cloud

2.2.1 The Cluster

The off-robot computer cluster used was provided by OculusAI, and designing and building it was not part of this project. Its purpose outside of this project was to create a platform where modular parts of computer vision systems can be reused and repurposed in new projects. For the duration of this project it was still under development; this was the first project to use it, and it in many ways influenced the architecture and the design process of the cluster itself.

The cluster was built on top of a number of x86 Linux servers. They each had a multicore CPU and an nVidia GPU intended for CUDA processes.

2.2.2 BigBrain

The vision for BigBrain was to provide Artificial Intelligence as a Service, with a focus on computer vision. To do this a number of services were to be built, including user management, automatic resource allocation, and a number of state-of-the-art algorithm implementations. At the time of this thesis work, however, the project was in its infancy and was mostly a stack of software surrounding the Map/Reduce (Dean and Ghemawat, 2004) concept.

Each server had a number of virtual machines running on top. To reduce overhead, Linux Containers were used instead of truly virtualized machines. All of the software written for this project that was not run on the robot was run on these containers. All the virtual machines were able to communicate with each other using Unix sockets or using a distributed file system shared across all virtual machines. The file system used, GlusterFS, is designed for cluster environments and is by its nature a distributed file system.

2.3 Extending ROS for cloud robotics

In order to run the scene understanding code on BigBrain we needed to send sensor data from the robot to the cluster. We did not want to set up a normal ROS node on BigBrain, however, since that requires a master node to be present, and since only one master node is allowed in any ROS system it would conflict with the master node running on the robot. One solution would be to run the master node on the cluster, forcing all robots to use it as their master node. The other solution would be keeping it on the robot. This would mean we would have to spawn a new ROS node for every robot connecting to the service, since that robot's master node would need to communicate with it as if it were its child node. Neither option was viable for a system with which we wanted multiple robots to be able to communicate independently.

We ended up investigating two different approaches to extending ROS. The first used internal ROS infrastructure to communicate over the network, while the second used a classic REST-style HTTP connection (see Figure 2.2).

Figure 2.2: The left side of the figure shows the ROSTCPTransport implementation and the right side shows the RESTful HTTP transport, with a proprietary BigBrain transport used inside the cluster.

2.3.1 Communication Using ROSTCPTransport

Since a normal node could not be used, we decided to still use as much of the ROS networking infrastructure as possible. The publish/subscribe and request/reply types of communication in ROS happen over protocols derived from the TCPROSTransportProtocol class in the tcpros_base package. This class defines a base protocol for communication over TCP that we can use to define our own ROS communication protocol.

ROS also has a class called TCPServer which is a wrapper around platform-specific socket objects. This class is what actually sends the data over the network. Lastly, ROS has the TCPROSTransport class, which deals with writing and parsing ROS-specific header messages for every message that is sent using the given TCPROSTransportProtocol.

Using these classes together, a new communication channel was created. It allowed the robot to transfer messages to BigBrain using native ROS messages and the network code that already exists in ROS. While functional, this approach made use of several internal and undocumented ROS APIs. This came with the risk of the system breaking when new versions of ROS were released. A decision was made to abandon this approach for a more general HTTP-based client-server communication.


2.3.2 Communication Using HTTP

The HTTP server was set up as a classic REST (Fielding, 2000) style HTTP server implemented in Python. This allowed the use of well-understood, documented libraries and also allowed other non-ROS subsystems, such as the visualization tool (section 2.4), to interface with the server. A local ROS node was then set up to provide a normal ROS service to the rest of the ROS nodes on the robot.
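As an illustration only, the robot-side node could forward an image to such a REST server with the Python requests library; the endpoint path and field name below are assumptions, not the actual BigBrain API:

import requests

def submit_image(server_url, image_path):
    # POST the image to the (hypothetical) detection endpoint and return the parsed JSON result.
    with open(image_path, 'rb') as f:
        response = requests.post(server_url + '/detect', files={'image': f})
    response.raise_for_status()
    return response.json()

# Example: detections = submit_image('http://cluster.example.org:8080', 'scene.jpg')

Because the interface is plain HTTP, the same endpoint can be exercised from a browser-based tool or test script without involving ROS at all.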

2.4 Visualization Tool

A custom visualization tool was designed to speed up development and testing of the object detection system. It allowed an image of a scene to be submitted to the cluster. Once the results were returned, the object detections were overlaid on top of the scene image and could be interacted with. Clicking on an object detection gave information about that detection, including the corresponding image that was matched from the database, the distance between the two descriptors and the label of the object. It was also possible to click and drag in the scene image to select many object detections, which then show up in a sorted list displaying the same information as if each hit had been clicked individually (see Figure 2.3).

The tool was developed as a web application using the D3 library for visualization and jQuery for network communication.


Figure 2.3: The top image shows the visualization tool visualizing the position for every detected object in the scene. In the middle image the user has clicked on one of the objects, making the tool display further information about that detection on the right. In the lower image the user is clicking and dragging the mouse, selecting many objects at once. This is displayed on the right as a list of objects.


Chapter 3

Object detection

Detecting objects in images is something that humans often find easy while computers have a very hard time doing it. Object detection can be used to refer to the act of simply detecting that there is an object in an image, but more often it refers to the process of both detecting that an image contains an object (or several objects) and also figuring out their locations and what types of objects they are. Some systems go as far as to differentiate between different instances of objects (bike A is different from bike B), but in this thesis we settle for knowing what type of object we have detected and where it is.

A wide variety of approaches have been devised to try and solve this problem, but most of them represent the image in some sort of mathematical representation called a descriptor, which can then be compared to an object database of images that have been stored with the same type of representation. Often this representation is a multidimensional vector. A descriptor generated from an input image can then be compared to a set of already known image descriptors using some sort of distance function, such as the Euclidean distance.

This chapter starts off in 3.1 by describing the type of representation used for this thesis. It then goes on in 3.2 to explain how we go from a system that identifies objects when they take up a whole image to a system that can find multiple objects of almost any size in an image that depicts an entire scene. Finally, the method used to build up an object database that we can compare our input images to is described in 3.3.

3.1 GIST Based Object Detection

A choice was made early on to use GIST (Oliva and Torralba, 2001) as the image descriptor for object detection and possibly extend this if time was available later in the project. A big reason for this decision was the availability of a finished implementation of a GIST search system already being used at OculusAI.


3.1.1 Introduction To GIST

The GIST descriptor was originally developed for the description of scenes. It is a global descriptor that aims to model what the authors call the perceptual dimensions of a scene. These dimensions are based on how humans are thought to quickly differentiate between different types of scenes. This is a more high-level representation of a scene, as opposed to a detailed representation where you would take the objects in the scene into account to determine what category the scene belongs to.

The following is a description of the perceptual dimensions as stated in the GIST paper (Oliva and Torralba, 2001).

Degree of naturalness The structure of a scene strongly differs between man-made and natural environments. Straight horizontal and vertical lines dominate man-made structures whereas most natural landscapes have textured zones and undulating contours. Therefore, scenes having a distribution of edges commonly found in natural landscapes would have a high degree of naturalness whereas scenes with edges biased toward vertical and horizontal orientations would have a low degree of naturalness.

Degree of openness A second major attribute of the scene spatial envelope is its sense of Enclosure. A scene can have a closed spatial envelope full of visual references (e.g., a forest, a mountain, a city center), or it can be vast and open to infinity (e.g., a coast, a highway). The existence of a horizon line and the lack of visual references confer to the scene a high degree of Openness. Degree of Openness of a scene decreases when the number of boundary elements increases.

Degree of roughness Roughness of a scene refers principally to the size of its major components. It depends upon the size of elements at each spatial scale, their abilities to build complex elements and their relations between elements that are also assembled to build other structures, and so on. Roughness is correlated with the fractal dimension of the scene and thus, its complexity.

Degree of expansion Man-made structures are mainly composed of vertical and horizontal structures. However, according to the observer's point of view, structures can be seen under different perspectives. The convergence of parallel lines gives the perception of the depth gradient of the space. A flat view of a building would have a low degree of Expansion. On the contrary, a street with long vanishing lines would have a high degree of Expansion.

Degree of ruggedness Ruggedness refers to the deviation of the ground with respect to the horizon (e.g., from open environments with a flat horizontal ground level to mountainous landscapes with a rugged ground). A rugged environment produces oblique contours in the picture and hides the horizon line. Most of the man-made environments are built on a flat ground. Therefore, rugged environments are mostly natural.


The idea is that if a descriptor can capture these high-level, human-like concepts, it could be used to distinguish one scene from another in very much the same way humans are thought to differentiate between different types of scenes. For example, two different images depicting forests both have a high degree of naturalness and a low degree of openness. The two images will be highly correlated over these two dimensions. The other dimensions may vary a bit. One forest image might depict a hill or a mountain within the forest while the other is on completely flat ground, making their ruggedness differ. This means that a descriptor that captures this would also be able to cluster slightly different types of forest scenes together. Two images of forests with a lot of bushes and low-hanging branches would be more correlated than an image of a forest common in the north, with only straight pine trees and little other vegetation, and an image of a rain forest with plenty of foliage.

To achieve this, the GIST descriptor is designed to be a multidimensional descriptor and is constructed using spectral analysis and coarsely localized information on the input images. The authors show that semantically close images (e.g., two pictures of faces of houses, two pictures where the camera is looking down streets) are located close to each other in this multidimensional space according to the Euclidean distance.

Although the GIST descriptor was designed to differentiate between scenes, it was postulated that the same intrinsic ability of the GIST descriptor to detect the difference between scenes with a lot of straight lines, parallel structures etc. would make it a decent fit as an object descriptor for this thesis.

3.1.2 Distributed GIST Search

The distributed GIST search system was designed so that for every processor available in the BigBrain cluster there is a GIST search process that is always waiting for jobs to appear in a job queue. When a series of images is put onto the job queue, these search processes each take an image, try to match it to the images in the database and return the best matches for their respective input images. After they have finished they take another image from the queue, if one exists, and process it.

The individual search process itself works by extracting the GIST descriptor from the image received on the queue and then, using the Euclidean distance, comparing it to the GIST descriptors computed from the training set.
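A sketch of this matching step, assuming the pre-computed training descriptors are stored as rows of a NumPy array (the GIST extraction itself is omitted; this is not the exact implementation used on the cluster):

import numpy as np

def best_matches(query_descriptor, train_descriptors, train_labels, k=10):
    # Euclidean distance between the query GIST descriptor and every training descriptor.
    distances = np.linalg.norm(train_descriptors - query_descriptor, axis=1)
    order = np.argsort(distances)[:k]          # indices of the k closest training images
    return [(train_labels[i], float(distances[i])) for i in order]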

All the results are sent back to the same process that split the windows into individual images and are joined together into one list. Similar results caused by two windows of similar scale and position are then merged. If there are two object detections from the same object in the database where the ratio of the areas of the two windows satisfies 0.5 < ratio < 1.5, and the center points of the windows are no further apart than min(o_hyp1, o_hyp2)/2, where o_hypx is the distance from the center of object x's window to one of its corners, the hits will be merged. This ensures that two adjacent overlapping windows covering the same object will not get reported as two separate objects, by merging hits that are of similar size and position.
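A sketch of the merging test for two hits of the same database object, assuming each window is given as an (x, y, size) square with (x, y) as its top-left corner:

import math

def should_merge(win_a, win_b):
    # Merge hits of similar size and position, following the criterion described above.
    xa, ya, sa = win_a
    xb, yb, sb = win_b
    ratio = (sa * sa) / (sb * sb)                  # ratio of the two window areas
    if not (0.5 < ratio < 1.5):
        return False
    # o_hyp: distance from a window's center to one of its corners (half the diagonal).
    hyp_a = sa * math.sqrt(2) / 2
    hyp_b = sb * math.sqrt(2) / 2
    center_dist = math.hypot((xa + sa / 2) - (xb + sb / 2),
                             (ya + sa / 2) - (yb + sb / 2))
    return center_dist <= min(hyp_a, hyp_b) / 2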


3.2 Sliding Window

For finding object matches in any position and with a wide variety of sizes with GIST, a simple sliding window technique (Viola and Jones, 2001) was implemented.

The scene image is iteratively explored, part by part, to find objects in specific areas (as opposed to a single object consisting of the whole image). A square frame is slid over the image and for every iteration a search is done over all of the stored GIST descriptors to see which images in the database are closest to that of the image region in question.

The implementation used extracts overlapping images of various scales and positions from the input image. Starting from a window covering the whole image, it chooses smaller and smaller window sizes based on a pre-defined scale factor. For every window size, the window is slid over the image in an iterative fashion. The distance in each dimension between the windows is defined as a function of the window size.

def sliding_windows(image_width, image_height, start_size, min_window_size,
                    step_size, scale_factor):
    # Slide a shrinking square window over the image and yield every
    # (x, y, window_size) region; the caller saves each region as its own image.
    window_size = start_size
    while window_size > min_window_size:
        y = 0
        while y < image_height - window_size:
            x = 0
            while x < image_width - window_size:
                yield x, y, window_size
                x += step_size * window_size
            y += step_size * window_size
        window_size *= scale_factor

All the windows are saved as individual images to disk to be processed by the object detection step.

3.3 Training Data

The actual matching system in the object detection takes a GIST descriptor as input, and a list of possible object matches is expected as output. These possible objects have to come from somewhere: a model of what objects the robot is expected to be able to detect is needed. What is commonly referred to as training data is used to build this model. The training data in this case is a set of images of objects together with some metadata describing each object. The training data goes through a preprocessing stage where the data is cleaned up to suit the needs of the application in question and is then processed into what we call the model. In this case the final processing was simply to extract GIST descriptors for each of the preprocessed images.


3.3.1 LabelMe

LabelMe (Russell et al., 2008) is a dataset consisting of scene images with manually annotated objects. Russell et al. set up a publicly available web-based tool where anyone can contribute by annotating images. A user is presented with an image and is expected to draw a polygon surrounding the objects in the image. They are also expected to provide a description of each object. Users of the system are also allowed to upload their own photos to get annotated. These photos will then be amongst the images that get presented to the annotators.

The LabelMe dataset was selected for this thesis because of its large number of annotated indoor images.

3.3.2 Preprocessing

Of the 281 folders of image sets in the LabelMe dataset, 87 were manually selected on the basis that they had a large set of indoor images. This resulted in a total of 23405 images and a total of 78618 annotated objects. 7261 objects were filtered out because they were smaller than 40x40 pixels. This minimum size was selected experimentally after seeing poor results when matching what were small regions in the training dataset to large regions in the test dataset. 3580 objects were skipped because their annotations were so large that it was not possible to surround them with a square and still keep the square inside the picture. Square regions were enforced because that is the only aspect ratio the GIST descriptor implementation used supports. In total 13% of all objects were skipped.

These objects were extracted and saved as individual images with a unique identifier together with their respective metadata. The metadata consists of a number of fields, including the object name (as given by the LabelMe annotation), the size of the object, what other objects were in the scene image the object was extracted from and so on. Some grouping of similar object labels was done. A "sanitized description" field was added where the original label was converted to all lower case letters and everything outside of the English alphabet was removed. This brought the number of unique object labels down from 4813 to 3833.
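For example, the sanitization step could be as simple as the following sketch:

import re

def sanitize(label):
    # Lower-case the label and strip everything outside the English alphabet.
    return re.sub(r'[^a-z]', '', label.lower())

# sanitize('Coffee-cup (small)') -> 'coffeecupsmall'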

A GIST descriptor was pre-calculated for every extracted image and saved to an in-memory database.


Chapter 4

Exploiting Semantics

Classically, object detection identifies the probability of an area of an image containing a certain object. A threshold value is then used to filter out areas with a low probability of containing the given object (Papageorgiou et al., 1998). The semantic filtering method implemented in this thesis is based on the idea that this probability should be adjusted before the thresholding, based on the other object detections in the image. Using prior information about the likelihood of two objects appearing in the same image, false positive detections caused by noise can be reduced or completely filtered out. For example, if the detections in an image contain a plate, a knife, a fork, a table cloth and a moose, it is likely that the moose is a false positive and should be filtered out.

This can be thought of as a layer below the object detections that connects them to each other (figure 4.1). From this underlying layer each object gets a weight that, in combination with the object detection score, can be used with a new threshold value to filter out objects that either have a low detection score or do not belong in the resulting object composition.

This chapter starts off in 4.1 with an explanation of the algorithm used. It then gives an explanation of how to build the underlying model in 4.2. It finally explores how the algorithm converges (4.4) and how that is affected by different starting criteria (4.5).

4.1 Algorithm

A matrix is first constructed where every cell P_{i,j} represents the frequency with which object i and object j appeared in the same image in the training set. A fully connected graph is then constructed, where every node represents a detected object and every edge has a value given by P. An iterative approach loosely inspired by Google PageRank (Page et al., 1999) is then taken, where in every iteration the weights of the nodes are updated based on the weights of their neighbors and the values of the edges connecting them (equation 4.1).

Figure 4.1: Object detections A through D with the underlying graph structure that connects them. P_{ij} represents the frequency with which object i appeared in the same image as object j, and w_x is the resulting weight to be applied to the object detection score of x.

w_i^{t+1} = w_i^t + \sum_{j \in N_i} w_j^t \cdot P_{i,j}    (4.1)

where w_i is the weight of the specific detected object and N_i is the set of its neighboring objects. In each iteration the weights are also normalized so that \sum_i w_i = 1. The weights w are initialized either with w_i = 1/dim(w) or with random weights that sum to 1; see the section Random Initial Weights.
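As an illustration, the iteration in equation 4.1 can be sketched as follows (assuming NumPy, with P restricted to the detected objects and a zero diagonal so that the sum only runs over neighbors; this is a sketch, not the exact implementation used):

import numpy as np

def semantic_weights(P, tol=1e-8, max_iter=100000):
    # Iteratively re-weight the detections: w_i <- w_i + sum_j w_j * P_ij, then normalize.
    n = P.shape[0]
    w = np.full(n, 1.0 / n)            # equal initial weights
    for _ in range(max_iter):
        w_new = w + P.dot(w)
        w_new /= w_new.sum()           # normalize so the weights sum to 1
        if np.max(np.abs(w_new - w)) < tol:
            break
        w = w_new
    return w_new

The stopping criterion mirrors the one used in the convergence experiments below: iterate until no weight changes by more than 10^-8 between two iterations.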

4.2 Construction of P-matrix

The P-matrix was constructed using data from the preprocessed LabelMe dataset (see section 3.3.2). The matrix itself is N × N where N is the total number of objects in the dataset. Two relationship matrices were created. For the first matrix, every element P_{i,j} indicated how many times object i was seen in the same image as object j. In the second matrix each element was also normalized by the number of times i and j had been seen in total, that is

P_{i,j} = \frac{\| i \text{ appears with } j \|}{\| i \| + \| j \|}

where \| \cdot \| denotes the number of occurrences in the training set.

4.3 Intuitive Correctness

The purpose of this algorithm is to give objects that do not "fit in" a lower weight than those that do. Figure 4.2 shows a graph with five objects. Four of them form a coherent group while there is one outlier which is not strongly connected to any other object. The four objects in this coherent group should get higher weights, and the results show that they do.

Figure 4.2: A graph of five detected objects where nodes 1-4 are all connected to each other by edges of value 0.1 and the outlier node 5 is connected by edges of value 0.001. Resulting weights: nodes 1-4 each get 0.2496 while node 5 gets 0.0017.

Figure 4.3: Three graphs where nodes 1-3 form a tightly connected group (edge values 0.1) and node 4 is an outlier connected to node 3 by an edge of value 0.1 (a), 0.15 (b) and 0.2 (c). Resulting weights for objects 1-4: (a) 0.2696, 0.2696, 0.3154, 0.1454; (b) 0.2349, 0.2349, 0.3256, 0.2047; (c) 0.2026, 0.2026, 0.3407, 0.2541.

Figure 4.3 shows three different graphs, each of which has a group of three nodes that are all tightly connected. One of these three nodes is also connected to a fourth node, an outlier. This can be seen as an example where the highly connected group of nodes represents objects that often appear in the same image, and where one of the objects in that group also appears in images with the fourth, outlier object. When the value of the edge connecting the outlier to the group is the same as the value connecting the nodes in the group, the outlier gets a lower weight than the nodes in the group. This means that the occurrence of many objects that appeared in the same images in the training set will outweigh single objects that appeared as many times.


Figure 4.4: A graph of six detected objects forming two tightly connected groups of three (edges of value 0.1 within each group), with a single edge of value 0.001 connecting the two groups through nodes 3 and 4. Resulting weights: nodes 3 and 4 get 0.1670 and the remaining nodes get 0.1665.

If the value of the edge between the outlier and the object in the group is increased (that is, if we assume the object in the group and the outlier appeared in more images in the training set), the weight for the outlier starts going up (figure 4.3 b), until it surpasses the other weights for the nodes in the group (figure 4.3 c). Two objects appearing often in the same image in the training set will outweigh many objects that did not appear as often together. Note that object 3 gets a slightly higher weight than objects 1 and 2 in the first and second case, and higher than object 4 in the last case. This is expected, since it also appeared in some images together with another object (object 4 at first and objects 1 and 2 later), increasing its total suitability for this composition of objects.

Figure 4.4 shows an example of an object composition with six objects. There are two clear groups of objects, with one low-valued edge connecting them. Intuitively the weights of the nodes in the two groups should be the same, since neither group outweighs the other, and that is what the algorithm gives us as well. The two nodes connecting the groups together both get a slightly higher weight, as expected.

4.4 Convergence Times

The time to convergence for the examples in the previous section is small. In the table below, the algorithm runs with uniform initial weights until the maximum change of any weight between two iterations is less than 10^-8.


Graph     Iterations
4.2       61
4.3 (a)   85
4.3 (b)   37
4.3 (c)   79
4.4       33

A test framework was constructed for testing the convergence times of graphs with explicit clusterings. Clustered graphs are generated where the cluster sizes are equal. All the edges in each cluster get a value of 1/dim(G), where dim(G) is the number of nodes in the graph. In addition, every edge value in the graph is distorted by adding random noise of a specified magnitude to it. Since the noise is added uniformly to all edges, including those with zero values (edges between clusters), it affects how well separated the clusters are. More noise means the individual clusters have higher values for the edges that connect them, while low noise gives a lower connectivity between clusters. The noise also affects the values of edges within clusters, making the clusters themselves less uniform in relation to each other.
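A rough sketch of such a generator, assuming NumPy and uniformly distributed noise (the exact noise distribution is not specified in the text):

import numpy as np

def clustered_graph(n_nodes, n_clusters, noise):
    # Equal-sized clusters; intra-cluster edges get 1/n_nodes, then uniform noise is added everywhere.
    P = np.zeros((n_nodes, n_nodes))
    size = n_nodes // n_clusters
    for c in range(n_clusters):
        lo, hi = c * size, (c + 1) * size
        P[lo:hi, lo:hi] = 1.0 / n_nodes
    P += np.random.uniform(0.0, noise, (n_nodes, n_nodes))
    P = (P + P.T) / 2            # keep the edge values symmetric
    np.fill_diagonal(P, 0.0)     # no self-edges
    return P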

Different configurations of clusters from this framework were tested with different amounts of noise added. The tests were run 100 times for each configuration and noise combination. Figure 4.5 and figure 4.6 show the mean convergence time and the standard deviation of the convergence time over these 100 runs.

In the low noise situation, the convergence is slower, since the clusters will be more uniform. The updates in each iteration will be small since there is no clear winner. When the noise is increased the clusters get less and less uniform, producing a situation where one cluster will invariably become the dominant one and convergence will be faster and faster.

As can be seen in figure 4.5, it is the noise between clusters, and not the size of the clusters, that has the most impact on the convergence time of the algorithm. The standard deviation also increases as the noise increases, as shown in figure 4.6.

4.5 Random Initial Weights

All the tests in the previous sections were made with the initial weights set to w_i = 1/dim(G) where dim(G) is the number of nodes in the graph. Another option is to set the weights to random values that still sum to 1. For all experiments run up to this point in this thesis, both options were tested and they always converged to the same answer.

In most cases the convergence times were comparable, as in figure 4.7 which depicts the convergence times of the graph shown in figure 4.3 (a). As can be seen, the convergence when the initial weights are random and when the initial weights are equal are about the same.

When there is no clear answer as to which cluster to favor, the random initial weights seem to do a lot worse. Figure 4.8 shows the convergence times for the graph shown in figure 4.4.


Figure 4.5: Convergence time (iterations) for different cluster compositions (4, 8 and 16 nodes divided into 2, 4 or 8 clusters) with different amounts of noise added. Note that the axes are logarithmic.

Figure 4.6: Standard deviation of the convergence time for different cluster compositions with different amounts of noise added. Note that the axes are logarithmic.


Figure 4.7: Weights over time and the sum of the differences of the weights over time, for random and for equal initial weights, for the graph shown in figure 4.3 (a).

Figure 4.8: Weights over time and the sum of the differences of the weights over time, for random and for equal initial weights, for the graph shown in figure 4.4.


Note the difference in the number of iterations and the difference in the sum of the weight changes in every iteration. Because of this, the rest of the experiments in this thesis use equal initial weights.


Chapter 5

Evaluation

Because of time constraints the robot with its cloud components and object detection was never tested in conjunction with the semantic filtering algorithm. While some of the object detection testing was done by the robot sending images to the cloud, most of it was done using LabelMe images that were not part of the training data. The semantic filtering was only run in this manner.

5.1 Sliding Window Parameters

A small experiment was run to find optimal parameters for the sliding window process. An image from the training set was selected and used as the query image for object detection. Since all of the annotated objects in this image exist in the database, the object detection mechanism should be able to find all of the objects, given the right parameters for the sliding window process. Table 5.1 shows the results of running object detection on this image with various parameters. The last column shows the percentage of the known objects that the object detection algorithm found in the list of the 100 best matches.

Scale factor   Step size   Windows   Result
0.5            1/4         3304      23.5%
0.5            1/8         13216     23.5%
0.5            1/16        52864     41.2%
0.75           1/4         4794      23.5%
0.75           1/8         19042     64.7%
0.75           1/16        75773     70.6%
0.9            1/4         10688     41.2%
0.9            1/8         41832     76.5%
0.9            1/16        166047    88.2%

Table 5.1: Percentage of known objects found in one image and the number of windows produced, for various sliding window parameters


Figure 5.1: Time in seconds to complete a GIST search with different numbers of processors available, given scale factor = 0.5 and step size = 1/16

5.2 Object Detection Parallelization

As an evaluation, 6 searches were made with varying numbers of processors available to the GIST search system. Figure 5.1 shows the average of those 6 searches. It can be seen that doubling the number of cores approximately halves the execution time. There is some overhead as the number of cores increases, and for a high number of cores it is unlikely that increasing the number further would improve the results.

5.3 Semantic Filtering

A set of 66 annotated indoor scene images that had not been seen during the construction of the two relationship matrices was selected as a test set. In each test, artificial noise was added to simulate noise from a computer vision object detection system. Noise in this case means random extra objects, not actually annotated in the image, added to the scene. These added objects can be thought of as false positives, while the original set of annotated objects can be thought of as the true positives generated by any computer vision object detection algorithm. The goal of the semantic filtering algorithm is to identify the true positive object detections and give them a higher weight, while also identifying the false positives and giving them a lower weight. Two different noise models were tested.


Figure 5.2: ROC curves when 50%, 100% and 200% noise was added using the random noise model and the unnormalized P-matrix

Figure 5.3: ROC curves when 50%, 100% and 200% noise was added using the random noise model and the normalized P-matrix

5.3.1 Random Noise

The objects added to the set of annotated objects using the first noise model were simply a sample of the set of unique object labels seen during construction of the P-matrix.

Figures 5.2 and 5.3 show ROC curves (Bradley, 1997) describing how well the algorithm performs with different amounts of noise added. 100% in this case means that for every object in the original set of annotated objects, another object was added as noise. That is, if an image had n annotated objects, another n were added.

5.3.2 Random Noise Adjusted For Frequency

The second noise model makes a correction for the number of times an object appears in the training set. The idea is to reflect an object detection system where objects seen more frequently during training also appear more frequently as false positives in the results. For this noise model, the objects added to the set of annotated objects were also a sample from the set of unique object labels, but this time the probability of selecting an object o was proportional to how many times it appeared in the training set.
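Both noise models amount to sampling extra labels, either uniformly or weighted by training-set frequency. A sketch, where the label list and per-label counts are assumed inputs:

import random

def add_noise(annotated, labels, counts=None, fraction=1.0):
    # Add fraction * len(annotated) extra labels as simulated false positives.
    k = int(round(fraction * len(annotated)))
    if counts is None:
        noise = random.choices(labels, k=k)                   # uniform noise model
    else:
        noise = random.choices(labels, weights=counts, k=k)   # frequency-adjusted model
    return list(annotated) + noise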

Figures 5.4 and 5.5 show how well the algorithm performs with different amounts of noise added and when using the noise model that adjusts for the frequency of the objects.


Figure 5.4: ROC curves when 50%, 100% and 200% noise was added using the random noise model adjusted for frequency and the unnormalized P-matrix

Figure 5.5: ROC curves when 50%, 100% and 200% noise was added using the random noise model adjusted for frequency and the normalized P-matrix

5.3.3 Qualitative Investigation Of Results

Looking at what the algorithm actually does on a case-by-case basis shows where the algorithm gets things right and where it gets things wrong. All results shown in this section were produced using the first noise model (which does not correct for the frequency of objects in the training set) and the unnormalized P-matrix.

Table 5.2 shows the result for an image of a kitchen. As can be seen, the only object that remains from the set of added noise is wineglass. This is obviously an object that is not unexpected to see in a kitchen, so the fact that the algorithm weighs it up seems to make sense. There are however a few objects from the original set (the ground truth) that are weighted below the mean. Recessed light, paperpile and doorway could arguably be considered as "non kitcheny", but cooktop and produce would make sense to find in a kitchen.

Table 5.3 shows the result for an image of a bathroom with very few annotated objects. The algorithm seems to have a hard time realizing this is a bathroom, and instead weighs up a bed from the added noise set. This could indicate that there is too little information in the scene for the algorithm to work correctly.

Table 5.4 shows the result for an image of a bedroom. Some objects that could be very useful in the case of scene classification are weighted down from the original set. This might not be ideal depending on the use case of the algorithm. It can be noted however that those objects still have a significantly higher weight than those from the added noise set.

Table 5.5 shows the results for an image depicting a bedroom. In this case the


Object          Weight
cabinet         0.236956341238
hood            0.0170226228685
refrigerator    0.0478596629292
countertop      0.212691810224
cooktop         0.0105942163117
produce         0.00160067789696
sink            0.201455367885
oven            0.131764391841
recessedlights  0.00219422018316
doorway         0.0281277547171
paperpile       0.0
tablelight      0.000585463668997
bracket         0.0
wineglass       0.0717706609561
bow             0.0
plots           0.0
redwinebottle   0.00396852936456
watertub        0.0
ovenmitt        0.00217212778862
dummy           0.0
gluestick       0.0
view            0.0
chaiselounge    0.0
trashbin        0.0
bowls           0.0312361521269

Table 5.2: Results for the image static_web_submitted_noa_ofen_jenny_chai_ef/indoor053 depicting a kitchen, from the LabelMe dataset. The first set of results is for objects originally present in the image while the second set is for objects that have been added as noise. Bold text represents a "correct" result (a weight above the mean for the original set and below the mean for the added noise).


Object     Weight
tub        0.0043014999442
window     0.37470332013
curtain    0.347349707983
tiles      0.0
sofa       0.228869246531
televison  0.00207226861253
curtains   0.0427039568001
tvscreen   0.0

Table 5.3: Results for the image static_web_submitted_noa_ofen_jenny_chai_ef/indoor286 depicting a bathroom, from the LabelMe dataset.

Object          Weight
picture         0.103534649136
bed             0.0166983993245
pillow          0.0577815976247
chair           0.278323138384
table           0.249197720145
flowers         0.0348845794142
window          0.144916086002
lamp            0.102346224258
fish            0.000861215001137
shovel          0.00113792941231
tree            0.00882906535251
pieceofbread    0.000501483579222
motionsensor    0.000691307201409
roadblock       0.0
paperprotector  0.0
glassware       0.000296605164525

Table 5.4: Results for the image static_web_submitted_noa_ofen_jenny_chai_ef/indoor30 depicting a bedroom, from the LabelMe dataset.
