DEGREE PROJECT IN MECHANICAL ENGINEERING, FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2018

Home Assistant Navigation - Smart Optical and Laser Orientation

H.A.N.S.O.L.O

PHILIP ANDERSSON

EDVIN KUGELBERG


Home Assistant Navigation - Smart Optical and Laser Orientation

H.A.N.S.O.L.O

PHILIP ANDERSSON, EDVIN KUGELBERG

Bachelor’s Thesis at ITM

Supervisor: Nihad Subasic

Examiner: Nihad Subasic


Abstract

With the market for autonomous home assistants steadily growing, reasons for the development of new ways of communicating with robot assistants have emerged. This thesis aims to develop a system wherein a human can draw instructions for the robot to interpret and follow or perform. An autonomous robot based on an Arduino board was built. Software for pattern recognition through a camera was developed and tested for precision and efficiency.

The device was able to read a command drawn on a sheet of paper by the user, analyze it to extract a route for the four-wheeled robot to follow, and transmit operations wirelessly via Bluetooth. The intent was not to create a marketable product, but rather to build a proof of concept for a viable alternative to existing communication systems for home assistants.

The resulting product was able to interact with a human operator, ask them to circle an area to sweep and successfully do as commanded. Once the camera had captured what had been drawn, the system needed 70 seconds to sweep an area of 0.8 m². Errors for forward motion and rotation were measured at 2-3.5% and 0.98-0.99% respectively.


Referat

With a growing market for domestic robots, there is room to develop a new way of communicating with robot assistants. This thesis covers the development of a system in which a user has the possibility to instruct, with graphical commands, what they want the robot to do.

An autonomous Arduino-based robot was built, and software for pattern recognition with a camera was developed and tested for precision and time efficiency.

The unit could read an instruction that the user had drawn on a sheet of paper, analyze it and extract a route for the robot to follow. The communication between the camera unit and the robot was wireless via Bluetooth. The aim was not to build a commercially finished product; the intention was to construct a system that could be presented as an alternative to already existing communication systems for domestic robots.

The result was a product that could interact with a user, ask them to circle an area to sweep over and successfully carry this out. After the camera had captured the drawn instruction, the system needed a total of 70 seconds to execute it and cover an area of 0.8 m².

Error margins for straight-line driving and rotation were measured at 2-3.5% and 0.98-0.99% respectively.


Acknowledgements

First of all, we would like to thank our examiner and supervisor Nihad Subasic for useful lectures, guidance and feedback throughout the project. We would also like to thank Staffan Qvarnström and all the assistants for help with components and tips on solutions to problems. Finally, we would like to thank our classmates in the mechatronics group. In particular, the support and feedback of Douglas Eriksson and Joel Greberg have been much appreciated and found very helpful.


Contents

1 Introduction
1.1 Background
1.2 Purpose
1.3 Scope
1.4 Method

2 Theoretical background
2.1 Image processing
2.1.1 Color theory
2.1.2 Camera
2.1.3 OpenCV
2.2 Electronics and Hardware
2.2.1 Microcontroller and the Arduino
2.2.2 Stepper motor
2.2.3 H-bridge
2.2.4 Bluetooth
2.2.5 Steering mechanism

3 Demonstrator
3.1 Problem Formulation
3.2 Hardware and Electronics
3.2.1 Camera Unit
3.2.2 Arduino
3.2.3 Bluetooth Module
3.2.4 Stepper Motor Driver
3.2.5 Stepper Motors
3.2.6 Mechanical Parts
3.3 Software
3.3.1 Reading graphical commands
3.3.2 Finding contours
3.3.3 Extracting coordinates
3.3.4 On-board Software

4 Experiments
4.1 Experiment 1: Precision of travel
4.1.1 Experiment description
4.1.2 Experiment result
4.1.3 Experiment discussion
4.2 Experiment 2: Precision of capture
4.2.1 Experiment description
4.2.2 Experiment result
4.2.3 Experiment discussion
4.3 Experiment 3: Time efficiency
4.3.1 Experiment description
4.3.2 Experiment result
4.3.3 Experiment discussion

5 Discussion and conclusions
5.1 Discussion
5.2 Conclusion

6 Recommendations and future work
6.1 Recommendations
6.2 Future work

Bibliography

Appendices
A Python Code
B Arduino Code
C On-board software flowchart
D User interaction flowchart
E Datasheet Stepper Motor


List of Figures

2.1 Graphical representation of an RGB color space [13]

2.2 Current flowing through an H-bridge [8]

2.3 Properties of differential steering [23]

3.1 Hardware overview of the system. Made in Draw.io

3.2 Exploded sketch of the chassis with motor houses. Made in Solid Edge ST9

3.3 Circuit board with all connections. Made in fritzing

3.4 Image before and after applying Adaptive Gaussian Threshold. Made in Python 2.7

3.5 Simulated image of how coordinates are found. Made in Adobe Photoshop

3.6 The complete route the robot will follow in blue, laid over the input image. Made in Python 2.7

4.1 Results from Experiment 1.1. Made in Python 2.7

4.2 Results from the 90 degrees rotation tests. Made in MATLAB

4.3 Results from the 180 and 360 degrees rotation tests. Made in MATLAB

4.4 Mean error value for the measured angles. Made in MATLAB

4.5 Examples of pattern recognition software failing to interpret commands. Made in Python 2.7

4.6 The input image for Experiment 3. The route marked in blue. Made in Python 2.7


List of Tables

3.1 Resolution table for controlling steps [16]

4.1 Result from experiment 1.1

4.2 Results from experiment 1.2

4.3 Operation list for Experiment 3

4.4 Mean values over the measured time


Abbreviations

CW - Clockwise

CCW - Counter Clockwise

MS - Microstep

RGB - Red Green Blue color system

OCV - OpenCV

SPD - Motor steps per angular degree

SPM - Motor steps per millimeter


Chapter 1

Introduction

This thesis focuses on the development of a communication system between autonomous home assistants and human operators. It includes the necessary components, algorithms and other implemented software, and the entire process of constructing such a product.

The introductory chapter describes the background to this process, the research questions the project aims to answer and how it was accomplished.

1.1 Background

A study on attitudes towards intelligent service robots [15] found a willingness among its interviewees to have robots in their homes to avoid tedious everyday chores, such as polishing windows, wiping surfaces and vacuum cleaning. The same study also noted the increasing demands placed on these robots and how human-to-robot communication is of utmost concern. Another study [9] states that it is especially important for elderly people that the robot be easy to use with low effort.

With an increasing number of robots in homes and industries, the importance of having an efficient and easy way to control a robot with high accuracy is growing. There are many different ways to interact with a robot; this report studies the accuracy and efficiency of controlling a robot by drawing instructions on a blueprint of the area, having these commands captured by a camera, and translating them with software.

1.2 Purpose

The purpose of this thesis was to develop and investigate a system able to take orientational commands from an operator and successfully execute them. Developing both hardware and software for this project aimed to answer the following research question:


How can a reliable guide system for an autonomous robot be developed based on graphical commands from a human operator?

To be able to evaluate this question it was further split into subquestions:

• What is the final accuracy of the system and how large are the errors?

• What are the largest factors to these errors?

• How time-efficient is the system?

1.3 Scope

This project was performed as a part of a bachelor thesis at KTH Royal Institute of Technology in Stockholm. Budget and time were the main limitations of the course. Therefore, the project resources (materials, equipment and machines etc.) were limited to those available to students at the university.

The scope of this thesis was to build and test a down-scaled demonstration unit. The focus was to evaluate the research questions presented in 1.2 Purpose, not to construct a fully functional commercial product. Therefore, the result is not fully representative of an end product with all its final features, and it has only been tested in a controlled testing environment.

The demonstration unit can be replicated utilizing this thesis and commercially available hardware refined either by manual tools or more advanced machinery. The software for the demonstrator was developed for a microcontroller and a commercial computer.

To answer the research questions, a fully functional product had to be designed. The product had to be able to take in graphical instructions drawn by the user and have a robot execute them autonomously. The environment for testing was built for experimental purposes, with an even floor and lighting.

The final product was aimed to correctly interpret the given commands and be able to execute them. This execution must be carried out with enough precision to be a viable proof of concept of an alternative to existing commercially available guiding systems.

1.4 Method

To answer the research questions previously stated in 1.2 Purpose, research on controlling robots and pattern recognition software was conducted. After studying other similar projects, a demonstrator was constructed consisting of an autonomous vehicle, a camera unit with image processing, and wireless communication between the two. The user of the demonstrator draws out instructions for the camera to register. The picture is analyzed with image processing and the extracted route is then sent via wireless communication to the demonstration vehicle.


The project was evaluated on time efficiency and accuracy of execution. The robot's precision was assessed by its deviation from the given route. The trade-off between time and precision was also discussed.


Chapter 2

Theoretical background

This project includes the use of image processing, microcontrollers, motors and wireless communication among other things. This chapter will outline the theoretical introductions needed.

The foundation of this thesis was built on sources such as previous mechatronics thesis projects at KTH Royal Institute of Technology as well as literature and online sources.

2.1 Image processing

2.1.1 Color theory

Light can be described in terms of several physical properties [28], but for all intents and purposes of this thesis, light will be modeled only by its associated wave properties.

The recorded wavelength of a light ray determines its color, and digital imaging translates these wavelengths into data. There are multiple ways of digitally describing light, one of which will be presented.

The RGB system is widely used in electronic displays and in devices for capturing images, like cameras and photocopiers. The name comes from the three recorded values, namely red, green and blue. Each pixel in the image is represented by three color values from 0 to 255, and the combination of these gives the resulting color [13]. See Figure 2.1 for a graphical representation of RGB.


Figure 2.1: Graphical representation of an RGB color space [13]
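
As a minimal illustration of this representation (not taken from the thesis), an 8-bit RGB raster image can be held as a small NumPy array in which every pixel carries three values between 0 and 255:

import numpy as np

# A 2x2 raster image in 8-bit RGB: each pixel holds three channel values in 0-255.
img = np.array([[[255, 0, 0], [0, 255, 0]],      # pure red, pure green
                [[0, 0, 255], [255, 255, 0]]],   # pure blue, red + green = yellow
               dtype=np.uint8)

print(img.shape)   # (2, 2, 3): rows, columns, color channels
print(img[0, 0])   # [255 0 0] -> the red pixel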

2.1.2 Camera

A digital image is a way of representing a picture using digital information, often in either vector or raster format. Vector images are built so that every field of color is defined by one or more vectors, each with a point, direction, length and color assigned to it. Raster images are matrices with a specified value in each of their elements [3]. All images in this thesis are expected to be raster images.

The camera is used for registering the instructions given by the user. The picture quality needs to be clear enough to allow for an effective pattern recognition process, but still be economically viable for this project.

2.1.3 OpenCV

OpenCV (OCV) is one of the largest open-source computer vision libraries, compatible with both Python and C. It is aimed at real-time computer vision, and its applications vary from interactive art and mine inspection to, most importantly for this purpose, robotics [26]. Some of the more useful functions for the intents of this thesis include edge detection, contour approximation and geometric analysis.

2.2 Electronics and Hardware

2.2.1 Microcontroller and the Arduino

A microcontroller is a complete processing unit with integrated circuits such as a Central Processing Unit (CPU), Random Access Memory (RAM), Read Only Memory (ROM) and other peripherals for task specific implementations. It will read the code stored in the ROM and execute any tasks therein. Depending on both the processing power and the built-in circuits the controllers vary in both size and cost [7].

Microcontrollers come in many different varieties, ranging from industrial-grade platforms managing only one task to versatile educational boards like the Arduino. Since the Arduino is built with learning in mind [5], it is user friendly, with a pre-built coding environment as well as multiple online forums where users share their knowledge and expertise with the rest of the community. The code is simple to alter or change over a USB cable, making it easy to operate [6].

2.2.2 Stepper motor

A stepper motor is a brushless electric motor which operates off multiple electromagnetic coils, called phases, arranged around a shaft. It has two motor windings; by letting an electrical pulse flow through each phase, a magnetic field is generated around the coil, making the shaft rotate in discrete steps [10]. The direction of the rotation is controlled by the order and direction of the electrical pulses sent to the motor. The speed of the shaft's rotation is proportional to the frequency of the pulses.

Because of this, high precision can be achieved by controlling the electrical pulses given from a microcontroller without needing closed loop feedback [11].

It is possible to feed both of the motor's windings with a pulse at the same time, which will make the motor take a so-called "half-step" due to the rotor settling in between the windings. It is also possible to control the current amplitude and direction by shaping the current as a sine wave [17]. This will make the motor take even smaller steps, microsteps, which further improves accuracy.

2.2.3 H-bridge

Control over the current through a motor can be achieved with an H-bridge circuit. To change the direction of the current in an H-bridge, four transistors are used to control four switches in pairs. When the transistors over switches 2 and 3 receive a digital input they will close the switches, and the current will flow through the motor in one direction. If the transistors over switches 1 and 4 receive a signal, the opposite occurs, making the current flow in the opposite direction [10].


2.2.4 Bluetooth

To establish a wireless communication between two devices for sending and receiving data, Bluetooth technology can be used. A Bluetooth device communicates at short distances and sends signals in all directions. The signal can travel through obstacles so no line of sight is needed for this type of communication.

A Bluetooth device communicates in the 2.4 GHz to 2.4835 GHz radio frequency band, which is reserved for industrial, scientific and medical use worldwide. A Bluetooth module is able to use up to 80 channels, and jumps between these in steps of 1 MHz or 2 MHz to avoid channel collisions in the radio band [2].

2.2.5 Steering mechanism

Differential steering is the technique of having two wheels on separate axles and adjusting the speed of each wheel individually. When the motors rotate in opposite directions with the same speed the vehicle will make a pivot turn, thus rotating without any turning radius. This approach to driving can be seen on military tanks and other vehicles utilizing tank treads [18]. The possibility of not having a turning radius gives the vehicle superior control in tight turns in comparison to other steering mechanisms such as Ackermann steering found in cars [14].
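
To make the pivot-turn geometry concrete, the short sketch below (not part of the thesis; the 180 mm track width is an assumed value) computes how far each wheel must roll, in opposite directions, to rotate the vehicle by a given angle:

import math

def pivot_turn_wheel_travel(track_width_mm, turn_deg):
    # During a pivot turn both wheels trace a circle of radius track_width / 2,
    # so the arc length each wheel must roll is (track_width / 2) * angle_in_radians.
    return (track_width_mm / 2.0) * math.radians(turn_deg)

# Example with an assumed 180 mm distance between the wheels:
print(pivot_turn_wheel_travel(180.0, 90))   # ~141.4 mm per wheel for a 90 degree pivot turn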


Chapter 3

Demonstrator

To provide answers to the purpose of the report, which was presented in 1.2 Purpose, a demonstrator was built. This chapter is divided into two major parts: software and hardware. It describes how the demonstrator is constructed and how its components communicate to enable the system to work as intended.

3.1 Problem Formulation

The goal of the project was to construct a robot that can perform tasks with high precision. In order to reach the goal, the following requirements had to be met:

• The image processing computer needed to extract the drawn commands captured by the camera with acceptable accuracy and speed

• The communication between computer and robot had to be robust and efficient

• The robot had to be able to execute commands with high precision

The initial idea was to let the operator circle areas on the floor with the use of a laser pointer and allow the user to draw different shapes to get different behaviors from the robot. Both of these plans were revised to instead use pen and paper (with the paper being a blueprint of the room, with possible pre-drawn features such as furniture and other landmarks to make for a more intuitive experience) and to only let the user draw circles and ovals, with the robot having a sweeping behavior like an automatic vacuum cleaner. These changes were made due to limitations in time; this will be revisited in Chapter 6.

It was decided early to use stepper motors and a stepper motor driver so the robot could orient itself by nothing other than counting steps. This decision will also be revisited in Chapter 6.


3.2 Hardware and Electronics

A hardware overview is shown in Figure 3.1. The signals are represented by arrows and power distribution by double lines.

Figure 3.1: Hardware overview of the system. Made in Draw.io

3.2.1 Camera Unit

The camera used for this project was the Creative Live! Cam Chat HD. It is a 5.7 megapixel web camera with a 720p video image sensor and a fixed-focus lens [27]. This model was assumed capable due to its relatively low cost compared to its image output and the fact that it was compatible with both Microsoft Windows and Mac OS.

The camera used to capture instructions drawn by the user was placed on an arm above a vertical sheet of paper. A plastic folder was placed over the paper for the user to draw on, allowing easy erasing and reducing paper waste. The camera was then connected to the computer via a USB cable.

3.2.2 Arduino

The microcontroller used in this project was the Arduino UNO, an entry-level educational board based on the ATmega328p chip. It has 14 digital input/output pins [5]. The first two are used for serial communication [4], and two sets of five pins each are used for sending the signals controlling the wheels of the robot.

Since the route calculation was done on an external computer, the microcontroller needed little processing power to execute the packages given to it. This, together with the fact that only a total of 12 pins were needed, deemed the Arduino UNO sufficient to handle the task.

3.2.3 Bluetooth Module

Since the robot is mobile and the camera unit perceiving the user's commands is stationary, wireless communication between the computer and the Arduino was needed. Bluetooth was deemed suitable for this application. An inexpensive HC-06 Bluetooth module for the Arduino was mounted on the robot. This module has a built-in 2.4 GHz antenna, so no extra antenna is needed, keeping the construction as compact as possible [12].

The Bluetooth module's RX pin can only handle 3.3 V, while the Arduino's transmitter (TX) pin outputs 5 V; a voltage divider was therefore needed for the Bluetooth module. See Figure 3.3 for connections.
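
As an illustration of this level shifting (the resistor values below are an assumption, not read off Figure 3.3), a resistive divider scales the 5 V TX signal down to a level the HC-06 RX pin tolerates:

def divider_output(v_in, r1, r2):
    # Output of a resistive voltage divider: v_out = v_in * r2 / (r1 + r2)
    return v_in * r2 / (r1 + r2)

# A commonly used 1 kOhm / 2 kOhm pair gives roughly 3.3 V from a 5 V signal:
print(divider_output(5.0, 1000.0, 2000.0))   # ~3.33 V, within the HC-06 RX limit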

3.2.4 Stepper Motor Driver

Each motor needs one dual H-bridge to allow it to run in either direction, since each motor has two windings. The H-bridge circuit used on the robot was the Allegro A3967 [16], the driver chip of the Luxorparts EasyDriver [1], a microstepping motor controller with a built-in translator. It is designed for easy control over stepper motors with a user-friendly interface. Declaring direction is done by setting the DIR input to either high or low, and sending one pulse from the Arduino to the logic input STEP will make the motor take one step in the declared direction.

Enabling microstepping is done with the logic inputs MS1 and MS2; see Table 3.1 for the enabling signals. The stepper controller circuit is also able to minimize energy consumption by deactivating all outputs. This is controlled by setting the SLP pin low, which is also necessary to prevent unnecessary heat when the robot is stationary.

Table 3.1: Resolution table for controlling steps [16]

Resolution MS1 MS2
Full step Low Low
Half step High Low
1/4 step Low High
1/8 step High High

Because the Arduino cannot supply the peripherals with enough voltage and current through its logic pins, the two stepper motor drivers are fed with 12 V externally. The drivers' inputs were connected to the Arduino's output ports, while the outputs of the drivers were connected to the motors, making the robot able to run as desired.


3.2.5 Stepper Motors

The DC stepper motors used on the robot are Lin Engineering 4118-03S-02RO (the data sheet is found in Appendix E), rated at 1.2 A and fed current through the stepper motor drivers. At first, two motors rated at 0.25 A were used; however, these motors could not deliver enough torque for the robot to move as desired, so the more powerful motors had to be used.

The motors have 200 steps per revolution, which means the axle rotates 1.8 degrees in one full step. The stepper motor driver's microstepping capability made it possible for the motor to move in eighth-, quarter- or half-steps. The motor is therefore able to take a maximum of 1600 steps per revolution, achieving higher accuracy and a smoother motion with a higher torque, but at a lower speed.
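
The resulting resolution can be worked out directly from these figures. The sketch below is not from the thesis, and the wheel diameter used for the steps-per-millimeter estimate is a hypothetical value; the thesis instead calibrated SPM and SPD experimentally, as described in Section 3.3.4.

import math

FULL_STEPS_PER_REV = 200      # 1.8 degrees per full step
MICROSTEP_FACTOR = 8          # eighth-stepping, MS1 and MS2 both high

steps_per_rev = FULL_STEPS_PER_REV * MICROSTEP_FACTOR
print(steps_per_rev)              # 1600 microsteps per revolution
print(360.0 / steps_per_rev)      # 0.225 degrees of shaft rotation per microstep

wheel_diameter_mm = 65.0          # hypothetical wheel size, not stated in the thesis
spm_estimate = steps_per_rev / (math.pi * wheel_diameter_mm)
print(round(spm_estimate, 2))     # microsteps per millimeter of travel for that wheel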

3.2.6 Mechanical Parts

The demonstrator robot unit was designed to be accessible and to allow for replacement of parts. The most important feature of the design is the placement of the wheels, due to the differential steering. Both wheels must be placed symmetrically on either side of the frame to allow for efficient control of the robot.

The two wheels were mounted on the stepper motors' shafts, which in turn were fitted in 3D-printed motor houses made for horizontal mounting on the frame. The chassis itself was laser-cut from 3 mm acrylic plastic, with holes made for installing the motor houses, two supporting swivel wheels at the front and back, and the possibility of fitting a second story for extra space.

The acrylic chassis was found to be too thin to fully support the weight of the robot, so an aluminum L-beam was fitted underneath for stabilization. This corrected some errors caused by the plastic bending and giving the construction a U-shape.

The Arduino, the motor drivers and a battery pack supplying 12 V were all fitted on the chassis with double-sided tape, and the cables between these components were fastened so the chance of tangling was kept to a minimum. The components needing 12 V were connected in parallel with the battery pack supplying the necessary voltage. The Bluetooth module was also connected to the circuit, coupled to the voltage divider. See Figure 3.3 for connections.


Figure 3.2: Exploded sketch of the chassis with motor houses. Made in Solid Edge ST9


3.3 Software

Software developed for this thesis was written in Python and C. The flowchart for user interaction can be found in Appendix D, and the following sections dive into the pattern recognition software as well as the code loaded onto the Arduino.

3.3.1 Reading graphical commands

The picture of the command drawn by the human operator is captured by the camera; it is then loaded into Python 2.7 together with the OCV library.

To limit the data processed in each algorithm, the 8-bit RGB image is first converted to grayscale. This reduces the data in each pixel of the image from the red R, green G and blue B channels to a single channel called luminance, Y. The conversion made in OCV is a weighted sum of the color channels [19]:

Y = 0.299 · R + 0.587 · G + 0.114 · B   (3.1)

The grayscaled picture is then slightly blurred to even out high-contrast noise with a median filter [21], where each pixel is replaced with the median of its neighboring pixels [24].

The drawn feature was then extracted from the background with the OCV function Adaptive Gaussian Threshold. Each pixel is assigned either 1 or 0 based on a weighted sum of its neighboring pixels [20]. This method is superior to other threshold algorithms as it analyzes pixels locally and therefore counteracts uneven lighting conditions and other larger disturbances. See Figure 3.4 for before and after.

(a) Grayscaled blurred image (b) Binary image of input image

Figure 3.4: Image before and after applying Adaptive Gaussian Threshold. Made in Python 2.7

The command is thereby isolated from the background and the user input can be analyzed without any disturbances from the background or lighting conditions.
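
A condensed sketch of this preprocessing chain is shown below. It reuses the block size of 7 and subtraction constant of 2 found among the variables in Appendix A, as well as the input filename used by the camera script; the rest is a minimal reconstruction rather than the thesis code itself.

import cv2

img = cv2.imread('operation.png')                    # image captured by the camera unit
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)         # single luminance channel, Eq. (3.1)
blurred = cv2.medianBlur(gray, 5)                    # median filter against high-contrast noise
binary = cv2.adaptiveThreshold(blurred, 255,
                               cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY,
                               7,   # block size of the local neighborhood
                               2)   # constant subtracted from the weighted sum
cv2.imwrite('binary.png', binary)                    # result comparable to Figure 3.4(b)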


3.3.2 Finding contours

The built-in OCV function findContours is used for detecting the edges of the isolated command. The function takes in an 8-bit single channel image and returns all contours found as a vector of points [22] by utilizing an algorithm called border following [25].

To increase the robustness of the system, some faulty contours must be removed. Some unprocessed noise can still cause small edges and contours to appear. findContours will register these imperfections as contours; they can be singled out and deleted based on size with the function contourArea.
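
A minimal sketch of this filtering step is given below; the 100-pixel area limit is an illustrative value, and the three-value return signature of findContours assumes OpenCV 3.x as used in Appendix A.

import cv2

binary = cv2.imread('binary.png', cv2.IMREAD_GRAYSCALE)      # thresholded image from above

# OpenCV 3.x returns (image, contours, hierarchy); newer versions drop the first value.
_, contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

MIN_AREA = 100.0                                             # illustrative noise threshold
contours = [c for c in contours if cv2.contourArea(c) > MIN_AREA]
print('kept %d contours' % len(contours))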

3.3.3 Extracting coordinates

The coordinates, or points, for the robot to follow are found by searching each detected contour at specific heights: the contour vector is searched linearly for specific y-values, and the associated x-values are saved in a new vector. See Figure 3.5 for a simulated result of this operation.

The points are then rearranged in sequence to create a sweeping motion across the floor. A start and end point are also appended to the route so that the robot will start and end in the same place, the lower left corner. The final generated route can be seen in blue in Figure 3.6.

Figure 3.5: Simulated image of how coordi-nates are found. Made in Adobe Photoshop

Figure 3.6: The complete route the robot will follow in blue, laid over the input image. Made in Python 2.7
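
One way this scan could look is sketched below; the choice of scan heights and of keeping the leftmost and rightmost crossings per height are assumptions made for illustration, not details taken from the thesis code.

import numpy as np

def points_at_heights(contour, y_values):
    # 'contour' is an OpenCV contour, an array of shape (N, 1, 2) holding (x, y) points.
    pts = contour.reshape(-1, 2)
    route_points = []
    for y in y_values:
        xs = pts[pts[:, 1] == y][:, 0]        # every x-value the contour has at this height
        if len(xs) >= 2:
            route_points.append((int(xs.min()), int(y)))   # leftmost crossing
            route_points.append((int(xs.max()), int(y)))   # rightmost crossing
    return route_points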

Once all points are found, a route and operation list are created by calculating the angle and distance between consecutive points. The operations are then formatted as a letter (D for drive, and L or R for left and right turns) followed by a value in either millimeters or degrees, depending on the motion.
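
A sketch of this conversion is shown below; the starting heading of 90 degrees matches the start angle defined in Appendix A, while the scaling of coordinates to millimeters is assumed.

import math

def route_to_operations(points, start_angle_deg=90.0):
    # Turn an ordered list of (x, y) points, in millimeters, into D/L/R operations.
    ops = []
    heading = start_angle_deg
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        target = math.degrees(math.atan2(y1 - y0, x1 - x0))
        turn = (target - heading + 180.0) % 360.0 - 180.0   # signed turn wrapped to [-180, 180)
        if turn > 0:
            ops.append('L%d#' % round(turn))                # positive turn = left (CCW)
        elif turn < 0:
            ops.append('R%d#' % round(-turn))
        heading = target
        ops.append('D%d#' % round(math.hypot(x1 - x0, y1 - y0)))
    return ops

print(route_to_operations([(0, 0), (0, 2000), (1000, 2000)]))
# ['D2000#', 'R90#', 'D1000#']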


3.3.4 On-board Software

Serial communication is used for the Arduino board to maintain communication between the robot and the computer. The Arduino UNO board has two serial pins (RX and TX) to receive and transmit serial data [4]. The communication works in such a way that data is sent one bit at a time back and forth between the devices.

The Bluetooth module waits for a connection with the computer. After the connection is established, it waits to receive data to pass on to the Arduino. The data packages received are short operations starting with a letter declaring whether the robot is to move forward or turn. Following the letter is a value giving the amount of movement required. The operation ends with a '#' sign so the Arduino knows when to stop reading the data and start executing the operation.

For example, if the Arduino receives 'D2000#' or 'L90#', it will read this as 'Drive forward 2000 millimeters' or 'Turn left 90 degrees' respectively. It will multiply the value with either a steps-per-millimeter constant (SPM) or a steps-per-degree constant (SPD) to translate the data into the equivalent number of steps. Through a process of iteratively changing these values and measuring the movement, the final values were decided to be SPM = 1.014 and SPD = 8.53.

It will then wake the stepper motor drivers, set the directional inputs to correspond to the desired movement and finally send a number of pulses equal to the number of steps.

As soon as the operation is finished the Bluetooth module will send back ’Done’ to the computer and the next operation is sent. This loop will continue until it has exhausted the operation list and therefore executed the entire route.

If the Arduino receives an operation that does not start with D, L or R, it will tell the operator that the command is invalid and wait for a new one.
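
The computer-side counterpart of this loop is not reproduced in this chapter; the sketch below shows what such a transmit loop could look like using pyserial, where the serial port name, baud rate and exact reply handling are assumptions rather than values from the thesis.

import serial   # pyserial

def send_operations(operations, port='/dev/rfcomm0', baudrate=9600):
    # Send one operation at a time and wait for the robot's 'Done' before the next.
    link = serial.Serial(port, baudrate, timeout=60)
    try:
        for op in operations:                      # e.g. 'D2000#' or 'L90#'
            link.write(op.encode('ascii'))
            reply = link.readline().decode('ascii', 'ignore').strip()
            if reply != 'Done':
                print('unexpected reply: %r' % reply)
                break
    finally:
        link.close()

send_operations(['D2000#', 'L90#', 'D500#'])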


Chapter 4

Experiments

4.1 Experiment 1: Precision of travel

The first experiment was designed to find the errors and error margin of the moving robot. This would in part give the answer to the questions What is the final accuracy of the system and how large are the errors? and What are the largest factors to these errors? The experiment was split up into two parts, both individually described below.

4.1.1 Experiment description

The experiment was designed to measure how the robot moved when fed hard-coded instructions from the computer. The robot was fitted with three laser pointers, directed right, left and forward perpendicular to each other, to help with the positioning before each test and also for triangulating the final position after each executed test.

The robot was placed indoors on a linoleum floor and was sent hard-coded commands as distances in millimeters and angles in degrees. After the robot had executed said commands, the data was recorded, the position triangulated and then compared to what had been sent to it. Two sub-experiments were set up to test two sources of errors.

Experiment 1.1 was made to measure straight motion. The robot was given a straight path to follow; commands were sent telling it to move forward 1, 2 and 3 meters. Measurements were taken to see how well the robot performed when moving in a straight line during ten trials for each distance.

Experiment 1.2 was made to measure rotation. The robot was given instructions to turn around its own axis, much like it does in a live sequence. Six tests were performed with ten trials each, making the robot turn clockwise (CW) and counter-clockwise (CCW) 90, 180 and 360 degrees. This was to see how consistently the construction would turn.


4.1.2 Experiment result

During Experiment 1.1 the robot was tested on moving in a straight line. The results from these three tests (ten trials each) are shown in Figure 4.1.

Figure 4.1: Results from Experiment 1.1. Made in Python 2.7

The circle points in the graphs are the robot's final positions after the executed command. The cyan marks the goal to hit and the stars are the mean values of the final positions. Mean values and deviations were calculated and are shown in Table 4.1.

Table 4.1: Result from experiment 1.1

Length Mean error Mean error in %

1m 23 mm 2.3 %

2m 40 mm 2 %


In Experiment 1.2 the robot's turning accuracy was measured. The six test results are shown in Figure 4.2 and Figure 4.3. The red diamonds are results from CCW rotation and the blue from CW rotation. The goal is marked with a magenta circle.

Figure 4.2: Results from the 90 degrees rotation tests. Made in MATLAB

Figure 4.3: Results from the 180 and 360 degrees rotation tests. Made in MATLAB

The mean errors for the robot's rotation at the different angles are presented in Table 4.2 and Figure 4.4, where the orange line marks the goal to hit.


Table 4.2: Results from experiment 1.2

Angle Mean angle in degrees Mean error in degrees Mean error in %

90°CW 88.6° 1.4° 0.98 %
90°CCW 88.7° 1.3° 0.99 %
180°CW 178.1° 1.9° 0.99 %
180°CCW 178.2° 1.8° 0.99 %
360°CW 355.7° 4.3° 0.99 %
360°CCW 355.7° 4.3° 0.99 %

Figure 4.4: Mean error value for the measured angles. Made in MATLAB

4.1.3 Experiment discussion

The experiments show inaccuracy and shine a light on the problem of only counting motor steps and not measuring the actual movement of the robot. The robot believes its position to be on the mark on each try, and with multiple operations during one route these errors add up and increase the total error. One option to counteract this would be to change the SPM and SPD constants on the Arduino, although these might change with respect to surface, friction or obstacles, so calibration would always be needed in a new environment. These are some sources of error:

• Human factor while placing the vehicle and measuring its position.

• Varying friction on the wheels during the tests. The floor was not cleaned prior to the tests and the rubber tires could have gotten progressively dustier. Other variations in surface quality are also a factor.

• Variations in the motor. For calculating the number of steps needed each step is assumed to be of equal length. This may not be the case as the torque from the motor differs depending on the voltage from the batteries and imperfections in the motors.

• Calibration of the SPM/SPD. The previous calibration may have been off due to initial calibration made on a surface with different qualities and different voltages from the batteries.

• The two passive swivel wheels on the front and back may cause error due to their initial position and the friction in their bearings.

4.2 Experiment 2: Precision of capture

The second experiment was designed to observe the reliability of the pattern recognition software and its efficiency in understanding input commands from the user. It aimed to answer the question of the system's reliability.

4.2.1 Experiment description

Multiple images of varying qualities were input to the pattern recognition software. The resulting routes were examined for possible deviations from the ideal result. The robustness of the algorithm used was then assessed based on the results given, and possible problems were iteratively collected.

4.2.2 Experiment result

The algorithm had problems with the following image qualities:

• The command was drawn with a thin pen or with a pale color

• Uneven lighting

• The circle drawn was either not closed or overlapped itself

• The image captured some of the table the paper was placed on

• A very small command was drawn


(a) Command drawn with a thin pen (b) Overlap

(c) Area outside of paper showing

Figure 4.5: Examples of pattern recognition software failing to interpret commands. Made in Python 2.7

4.2.3 Experiment discussion

The results show limitations in the command-capturing subsystem. This would be a problem in a real-life implementation, as strictly ideal conditions would have to be met to get a functioning system. Problems related to contrast and lighting were due to the camera automatically trying to compensate for contrast by self-adjusting the brightness of the image, so trying to achieve higher contrast by shining a light on the surface made no difference. A better camera could be used for better image capture quality.

Problems relating to the software not understanding the instructions could be corrected with some more advanced algorithms.

4.3 Experiment 3: Time efficiency

The final experiment was to time the system and thereby answer the question How time-efficient is the system? The tests timed individual parts of the sequence and assessed their efficiency by isolating and analyzing how much time the different steps needed.


4.3.1 Experiment description

The experiment was made to simulate a live command sent to the robot, and to time its execution. A route was extracted by giving the pattern recognition software a circular command. The route consisted of 19 operations of turning and forward movement, with a total of 7.4 meters of forward motion, 568 degrees of CW turning, and 210 degrees of CCW turning. The entire route can be seen in Figure 4.6 and the individual operations in Table 4.3.

Figure 4.6: The input image for Experiment 3. The route marked in blue. Made in Python 2.7

Table 4.3: Operation list for Experiment 3

Operation Amount
CW rotation 31°
Forward 1540 mm
CW rotation 58°
Forward 948 mm
CW rotation 69°
Forward 260 mm
CW rotation 110°
Forward 1188 mm
CCW rotation 95°
Forward 245 mm
CCW rotation 84°
Forward 1148 mm
CW rotation 137°
Forward 356 mm
CW rotation 42°
Forward 620 mm
CCW rotation 31°
Forward 1113 mm
CW rotation 121°

Ten trials with the same route were made, and the time for the robot to execute the command was measured: five trials where the robot always moved by full steps, and another five where ramping (taking fourth- and half-steps at the beginning and end of each operation) was activated for accuracy.

The time for the computer to find coordinates and calculate a path was also measured. The same procedure was used to time the computer connecting to the robot via Bluetooth.


4.3.2 Experiment result

The results from the experiments are shown in Table 4.4; the individual results of each measurement are presented in Appendix F.

Table 4.4: Mean values over the measured time

Full speed Ramping Bluetooth Connection Pattern Recognition

39.9s 65.4s 4.2s 0.18s

4.3.3 Experiment discussion

The experiment clearly shows that the biggest contributor to the total time is the robot going through the operations it has been sent. Ramping through microstepping is evidently slower than going by full steps, but has a much higher accuracy due to keeping slipping to a minimum. When the robot speeds through the course it slips uncontrollably, so restricting the robot to only move by full steps was not a viable option.


Chapter 5

Discussion and conclusions

This chapter is designated for discussing the entire project and the final version of the product as well as some of the construction solutions made. Answers to the research questions are also presented, along with the issues leading up to answering them.

5.1 Discussion

The finished demonstrator and its hardware proved to be sturdy and somewhat reliable. The chassis of the robot and the quick fixes made to stabilize the construction helped with accuracy by aligning the wheels straight. The largest source of error was inconsistent friction between floor and wheels. Constant calibration of the SPM and SPD had to be made outside of the research environment due to slipping, grabbing and unevenly distributed friction between the two wheels.

The trade-off between reliability and time was examined, and precision was favored. The biggest impact on the time needed for the system to perform one loop came from the robot going through its route; even with the robot sped up to its maximum, the mere seconds needed for the rest of the sequence were negligible in comparison. The first iteration of the robot would only do full steps, making each end and start of an operation go from full speed to a full stop or vice versa. This created issues with the robot slipping off its course. The solution was forcing the robot to ramp up and ramp down in speed with microstepping. This was a clear trade-off between time and accuracy, as it reduced the speed but greatly increased the reliability of the demonstrator.

Another factor affecting efficiency was the quality of the camera. Because of its poor quality, the pictures taken had a lot of noise in them, requiring new pictures to be taken. The self-adjusting brightness feature also made it hard to control the contrast and lighting.


5.2 Conclusion

This paper aimed to answer the following question: How can a reliable guide system for an autonomous robot be developed based on graphical commands from a human operator? The final product showed that an inexpensive and easily constructed demonstrator can fulfill this function. The robot chassis was made from commercially available materials. Although the materials were processed in more high-end machines (i.e. a laser cutter and a 3D printer), the same result can be achieved with manual tools. All electronics can be bought and soldered together easily, and all software is freely available online.

The specific questions stated to evaluate the broader one were presented as follows:

• What is the final accuracy of the system and how large are the errors?

• What are the largest factors to these errors?

• How time-efficient is the system?

Experiment 1 answered the first two subquestions. Its results show that the mean errors for its accuracy are systematic. This was expected due to the fact that the steering algorithm only counts steps; the robot has no feedback on its position in the room.

Errors when traveling in a straight line ranged from 2% to 3.5%, and maintained accuracy for rotation ranged from 0.98% to 0.99%. The largest causes of errors are the wheels slipping on the floor and other inconsistencies in the motors and tires. These errors could be eliminated with a more sophisticated way of controlling motion rather than only counting steps.

Experiments 2 and 3 answered the last subquestion, How time-efficient is the system? The demonstrator completed an entire route, covering an area of about 0.8 m² in 19 operations, in 1 minute and 5.4 seconds, with pattern recognition taking 0.18 s and connection time 4.2 s.

The biggest factor in the time needed came from the robot executing the instructions. However, since speeding up the robot meant lowering its precision, a trade-off had to be made. The autonomous robot is designed for a sweeping motion much like a vacuum robot, so precision was favored over time. This decision was made due to the authors' belief that the user will not actually wait for the robot to finish but rather tell the robot to sweep and then let it complete the task in the background.

The algorithms for path calculation and establishing a connection were fast. Yet there could be some problems with the camera, as it sometimes added noise to the instructions, forcing the operator to take several new pictures or even redraw the command. This problem occurred because the camera self-adjusted the brightness. A better camera could be used to allow less noise through and also to achieve better control over image quality.


Chapter 6

Recommendations and future work

This project aimed to construct and program a functional system wherein an operator would communicate with an autonomous vehicle by graphical inputs. The final iteration shows promising results but also room for improvement and further development to increase performance and usability. Hardware improvement might require more advanced materials, but expanding the software for more functionality is simply a question of adding more of the same.

6.1 Recommendations

Keeping the wheels straight and centered should be the focus when constructing the chassis, as this would remove some of the quick fixes and clunkiness of the robot, resulting in a more consistent and accurate system. It would also be wise to ensure a simpler way of calibrating the SPM and SPD instead of having to change them by updating the software on the Arduino over cable.

A more precise set of wheels would also greatly improve performance. The robot in this paper had poorly cast plastic wheels, where all six in a set had varying diameters. Better wheels would also somewhat limit the impact of imperfect friction.

6.2 Future work

This project could be expanded by adding more instructions for the software to recognize, giving the robot different behaviors depending on the sizes and shapes of the given command. To make the system even more intuitive and robust, a mobile application could be developed where the user draws the commands digitally on a blueprint of the room, meaning little image recognition would be needed and noise would not be an issue.

The communication could also be over Wi-Fi instead of Bluetooth, making the connection more permanent. Some sort of feedback regarding the robot's position would also increase its accuracy, allowing it to always know where it is. This could be done with encoders on the wheels or some type of triangulation so that more precise movement could be ensured.

A possible future feature could also be to map out areas of interest directly on the floor with a laser pointer, using pattern recognition with existing surveillance cameras in the home. This would greatly increase the intuitiveness of the product by limiting the number of peripherals.

Collision avoidance using ultrasonic sensors could be added to the robot, and allowing the batteries to be charged in a home base could also be of interest. Other traits commonly found in, for example, autonomous vacuum cleaners or lawnmowers would also make for a more commercially appealing product.


Bibliography

[1] Luxorparts Easydriver V4 A3967 stegmotordrivare. Kjell & Company, 2018. Available: https://www.kjell.com/se/sortiment/el-verktyg/arduino/tillbehor/luxorparts-easydriver-v4-a3967-stegmotordrivare-p90787 [Online; accessed 4-April-2018].

[2] Mohit Kumar Agnihotri. Energy efficient topology formation for Bluetooth mesh networks using heterogeneous devices. School of Information and Communication Technology, KTH, Stockholm, Sweden, 2015. Available: http://www.diva-portal.org/smash/record.jsf?pid=diva2:928559.

[3] Gabriel Andersson Santiago and Martin Favre. Desinobot: The construction of a color tracking turret. School of Industrial Engineering and Management, KTH, Stockholm, Sweden, 2015. Available: http://www.diva-portal.org/smash/record.jsf?pid=diva2:915902.

[4] Arduino. Arduino serial. Arduino, 2018. Available: https://www.arduino.cc/reference/en/language/functions/communication/serial/ [Online; accessed 5-May-2018].

[5] Arduino. Arduino Uno Rev3 overview. Arduino, 2018. Available: https://store.arduino.cc/arduino-uno-rev3 [Online; accessed 3-April-2018].

[6] Yusuf Abdullahi Badamasi. The working principle of an Arduino. In Electronics, Computer and Computation (ICECCO), 2014 11th International Conference on Electronics. School of Industrial Engineering and Management, KTH, Stockholm, Sweden, 2014. Available: http://kth.diva-portal.org/smash/record.jsf?pid=diva2:550278.

[7] Dimosthenis E. Bolanakis. Microcontroller education: Do it yourself, reinvent the wheel, code to learn. In Synthesis Lectures on Mechanical Engineering, volume 1. Morgan & Claypool Publishers, 2017. Available: https://www.morganclaypool.com/doi/abs/10.2200/S00802ED1V01Y201709MEC009.

[8] Image Of Electrician. Electrical dc motor unique dc motor control circuit diagram. IOE, 2016. Available: http://5736718.net/fresh-electrical-dc-motor-picture-vfm/electrical-dc-motor-unique-dc-motor-control-circuit-diagram-wiring-diagram-ponents-wallpaper-mlp/ [Online; accessed 3-April-2018].

[9] Neta Ezer, Arthur D. Fisk, and Wendy A. Rogers. More than a servant: Self-reported willingness of younger and older adults to having a robot perform interactive and critical tasks in the home. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 53. SAGE Publications, Los Angeles, CA, USA, 2009.

[10] Adam Forsgren, Anton Forss, Fredrik Isaksson, Johan Malgerud, and Johan Tideman. Självbalanserande robot–balansering av tvåhjulig robot med propellrar. School of Industrial Engineering and Management, KTH, Stockholm, Sweden, 2009. Available: http://kth.diva-portal.org/smash/record.jsf?pid=diva2:915465.

[11] Linus Gidlöf Örnerfors. Testautomatisering av linjärmotorer. School of Technology and Health (STH), Medical Engineering, Computer and Electronic Engineering, KTH, Sweden, 2017. Available: http://www.diva-portal.org/smash/record.jsf?pid=diva2:1080331.

[12] Guangzhou HC. HC-06 data sheet. Guangzhou HC Information Technology Co. Ltd, Tianhe, China, 2011. Available: http://www.sgbotic.com/products/datasheets/wireless/hc06_datasheet.pdf [Online; accessed 3-April-2018].

[13] George H. Joblove and Donald Greenberg. Color spaces for computer graphics. Volume 11. ACM, New York, NY, USA, 1978. ISSN: 0099-8930.

[14] Pontus Jonasson and Viktor Lassila. Styrtransmission för bandfordon. School of Industrial Engineering and Management, KTH, Sweden, 2006. Available: http://kth.diva-portal.org/smash/record.jsf?pid=diva2:550278.

[15] Zayera Khan. Attitudes towards intelligent service robots. Volume 17. Department of Numerical Analysis and Computing Science, KTH, Stockholm, 1998. Available: http://wproj.nada.kth.se/midhistoric/ftp.nada.kth.se/IPLab/TechReports/IPLab-154.pdf.

[16] Allegro MicroSystems. A3967 datasheet. Allegro MicroSystems Inc., Worcester, Massachusetts, USA, 2008. Available: http://www.alldatasheet.com/datasheet-pdf/pdf/480197/ALLEGRO/A3967.html [Online; accessed 3-April-2018].

[17] Alexandru Morar. Microstepping system for bipolar hybrid stepper motor control. In The International Conference Interdisciplinarity in Engineering INTER-ENG. Editura Universitatii Petru Maior din Tirgu Mures, 2011.

[18] Olof Noréus. Improving a six-wheeler's performance both on- and off-road. KTH, School of Engineering Sciences (SCI), Stockholm, Sweden, 2010. Available: http://kth.diva-portal.org/smash/record.jsf?pid=diva2:358889.

[19] OpenCV. Color conversions. Open Source Computer Vision, 2018. Available: https://docs.opencv.org/3.1.0/de/d25/imgproc_color_conversions.html [Online; accessed 3-April-2018].

[20] OpenCV. Image thresholding. Open Source Computer Vision, 2018. Available: https://docs.opencv.org/3.4/d7/d4d/tutorial_py_thresholding.html [Online; accessed 14-May-2018].

[21] OpenCV. Smoothing images theory. Open Source Computer Vision, 2018. Available: https://docs.opencv.org/2.4/doc/tutorials/imgproc/gausian_median_blur_bilateral_filter/gausian_median_blur_bilateral_filter.html [Online; accessed 3-April-2018].

[22] OpenCV. Structural analysis and shape descriptors. Open Source Computer Vision, 2018. Available: https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours#findcontours [Online; accessed 3-April-2018].

[23] Stan. Differential drive with continuous rotation servos and Arduino. 42 Bots, 2014. Available: http://42bots.com/tutorials/differential-steering-with-continuous-rotation-servos-and-arduino/ [Online; accessed 4-April-2018].

[24] Tong Sun and Yrjö Neuvo. Detail-preserving median based filters in image processing. In Pattern Recognition Letters, volume 15. Elsevier Publishing, Amsterdam, Netherlands, 1994. Available: https://www.sciencedirect.com/science/article/pii/0167865594900825.

[25] Satoshi Suzuki et al. Topological structural analysis of digitized binary images by border following. In Computer Vision, Graphics, and Image Processing, volume 30. Elsevier Publishing, Amsterdam, Netherlands, 1985. Available: https://www.sciencedirect.com/science/article/pii/0734189X85900167.

[26] OpenCV Developers Team. About OpenCV. Open Source Computer Vision, 2017. Available: http://opencv.org/about.html [Online; accessed 3-April-2018].

[27] Creative Technology. Creative Live! Cam Chat HD. Creative Technology Ltd., 2018. Available: us.creative.com/p/web-cameras/live-cam-chat-hd [Online; accessed 3-April-2018].

[28] Hugh D. Young, Roger A. Freedman, and Albert Lewis Ford. Sears and Zemansky's University Physics. Volume 1. Pearson Education, New York, NY, USA, 2006.


Appendix A

Python Code

# Algorithms for the H.A.N.S.O.L.O. project
# User interface and main file
# By Edvin Kugelberg & Philip Andersson
# KTH Mechatronics, 19 May 2018
# TRITA-ITM-EX 2018:67
# Script: HANSOLO.py

# import all other scripts
from cameraread import *
from imageprocessing import *
from transmit import *
import os


def read():  # get commands from user
    ### START ###
    print('----- Welcome to H.A.N.S.O.L.O. -----')  # Graphics for user
    print('\n')
    print('Please draw your command on the paper.')
    print('Press SPACE when finished.')
    print('\n')
    print('--------------------------------------')
    startcamera()
    # Start camera and wait for user to draw
    getcontours('operation.png', roomheight, robotheight)
    # Start image process algorithm
    os.system('clear')  # clear terminal
    print('----------- H.A.N.S.O.L.O. -----------')  # Graphics for user
    print('\n')
    print('Examine the result.')
    print('Please close the window to continue')
    print('\n')
    print('--------------------------------------')
    showcontours()
    # Show contours for user
    global userinput
    os.system('clear')  # clear terminal
    print('----------- H.A.N.S.O.L.O. -----------')  # Graphics for user
    print('\n')
    print('Press "Enter" to initiate robot.')
    print('OR')
    print('Input "E" then "Enter"')
    userinput = raw_input('to escape and try again: ')
    # Loop if user wants to correct command
    print('--------------------------------------')


def write():  # write operations to robot
    os.system('clear')  # clear terminal
    print('----------- H.A.N.S.O.L.O. -----------')  # Graphics for user
    print('Connecting to robot...')
    getoperations()
    # calculate route
    send()


if __name__ == '__main__':
    os.system('clear')  # clear terminal
    read()
    # start collecting command
    while userinput == 'E':  # loop for redrawing
        os.system('clear')
        read()
    else:
        write()

# Algorithms f o r the H.A.N. S .O. L .O. p r o j e c t #

# Global v a r i a b l e s #

# By Edvin Kugelberg & P h i l i p Andersson # # KTH Mechatronics , 19 May 2018 # # TRITA≠ITM≠EX 2018:67 # # S c r i p t : v a r i a b l e s . py g l o b a l wantedimgsize

# make a l l v a r i a b l e s g l o b a l f o r use in multiple s c r i p t s g l o b a l ConcentricCircleThresh g l o b a l RoomHeight g l o b a l RobotHeight g l o b a l S t a r t a n g l e g l o b a l printContours g l o b a l plotCoordinates g l o b a l b l o c k S i z e g l o b a l adaptiveThreshconst wantedimgsize = 600.0 # s i z e of processed image ConcentricCircleThresh = 5 # t hres hol d f o r t i g h t c i r c l e s roomheight = 2

# height of the rooms robotheight = 0.27

(48)

APPENDIX A. PYTHON CODE

# width of the robot S t a r t a n g l e = 90 # s t a r t angle printContours = 1 # s t a t e v a r i a b l e f o r d i s p l a y i n g contours plotCoordninates = 1 # s t a t e v a r i a b l e f o r p l o t t i n g route b l o c k S i z e = 7

# adaptive t hres hol d block s i z e adaptiveThreshconst = 2

# Algorithms for the H.A.N.S.O.L.O. project #
# Start camera and take picture of command  #
# By Edvin Kugelberg & Philip Andersson     #
# KTH Mechatronics, 19 May 2018             #
# TRITA-ITM-EX 2018:67                      #
# Script: cameraread.py                     #

import cv2


def startcamera():  # start camera and take picture when told
    cam = cv2.VideoCapture(1)  # initiate camera
    cv2.namedWindow("Live Feed. Press SPACE to take picture.")  # title of the window

    while True:
        ret, frame = cam.read()  # capture frames
        cv2.imshow("Live Feed. Press SPACE to take picture.", frame)  # title of the window

        if not ret:  # break argument
            break

        keypress = cv2.waitKey(30)  # listen for keypresses
        if keypress % 256 == 27:  # if ESC is hit
            print("Escape hit, closing...")
            break  # break loop
        elif keypress % 256 == 32:  # if SPACE is hit
            img_name = "operation.png"  # save current image
            cv2.imwrite(img_name, frame)
            break  # break loop

    cam.release()  # stop camera
    cv2.destroyAllWindows()  # close windows
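cv2.VideoCapture(1) opens the second video device on the system. On a machine with only a built-in webcam that index will fail to open; a small sketch of an index fallback, assuming at most two cameras are attached, is shown below as an illustration.

# Illustration only: fall back to the default camera if index 1 cannot be opened.
import cv2

cam = cv2.VideoCapture(1)
if not cam.isOpened():         # no second camera found
    cam = cv2.VideoCapture(0)  # use the built-in webcam instead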

# Algorithms for the H.A.N.S.O.L.O. project  #
# Pattern recognition and route calculation  #
# By Edvin Kugelberg & Philip Andersson      #
# KTH Mechatronics, 19 May 2018              #
# TRITA-ITM-EX 2018:67                       #
# Script: imageprocessing.py                 #

import cv2
import numpy as np
from matplotlib import pyplot as plt
import copy
import math
import csv
from variables import *  # shared settings (blockSize, adaptiveThreshconst, ConcentricCircleThresh, plotCoordninates)

colors = [(255, 0, 0), (0, 255, 0), (255, 0, 0), \
          (255, 0, 255), (0, 255, 255), (255, 255, 0), ]  # colors for displaying contours


def dispImg(img, name=None):  # function for displaying images
    cv2.imshow(name, img)    # display image
    cv2.waitKey(0)           # if any key is pressed
    cv2.destroyAllWindows()  # close window


class imageprocessing:  # pattern recognition class

    def __init__(self, filename):
        self.img = cv2.imread(filename)  # opens image
        self.imgheight, self.imgwidth = self.img.shape[0:2]  # saves sizes

    def resizeimage(self, wantedimgwidth):  # resize the image
        SizeRatio = wantedimgwidth / self.imgwidth  # sets an image ratio from the input width
        ImgDim = (int(wantedimgwidth), int(self.imgheight * SizeRatio))  # sets the dimensions for the new size
        self.img = cv2.resize(self.img, ImgDim,
                              interpolation=cv2.INTER_AREA)  # resizes the image
        self.imgheight, self.imgwidth = self.img.shape[0:2]  # corrects image dimension variables

    def threshold(self):  # apply adaptive threshold
        self.grayscale = cv2.cvtColor(self.img, cv2.COLOR_BGR2GRAY)  # convert image to grayscale
        blurimg = cv2.medianBlur(self.grayscale, 5)  # slight blur
        self.thresh = cv2.adaptiveThreshold \
            (blurimg, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
             cv2.THRESH_BINARY, blockSize, adaptiveThreshconst)  # apply adaptive threshold

    def contours(self):  # find all contours
        self.printimgcontours = copy.copy(self.img)  # copy image for applying contours
        im2, self.imgcontours, hierarchy = \
            cv2.findContours(self.thresh,
                             cv2.RETR_TREE,
                             cv2.CHAIN_APPROX_NONE)  # find contours
        self.imgcontours = self.imgcontours[1:-1]  # remove outer rim contour
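        # Note (illustration, not part of the original listing): the three return values
        # above match OpenCV 3.x; under OpenCV 4.x cv2.findContours returns only
        # (contours, hierarchy), so the call would become
        #     self.imgcontours, hierarchy = cv2.findContours(...)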

    def definecontours(self):  # delete all faulty contours
        cont_area = []    # initiate vectors for contour areas and their centerpoints
        cont_center = []

        for contour in range(len(self.imgcontours)):
            cont = self.imgcontours[contour]
            cont_area.append(cv2.contourArea(cont))  # get area and centerpoint for each contour
            (x, y), radius = cv2.minEnclosingCircle(cont)
            cont_center.append((x, y))

            if cont_area[contour] <= 300:
                # if area is too small, set it to zero for later removal
                cont_area[contour] = cont_center[contour] = (0, 0)
                self.imgcontours[contour] = 0

            if not type(self.imgcontours[contour]) is int:
                # if not already set to zero,
                # remove all with short circumferences
                if len(self.imgcontours[contour]) < 30:
                    cont_area[contour] = (0, 0)
                    cont_center[contour] = (0, 0)
                    self.imgcontours[contour] = 0

            comp = 0
            while comp < contour:
                # check current contour against all previous ones
                # to see if they are concentric
                if abs(cont_center[contour][0] - cont_center[comp][0]) < \
                        ConcentricCircleThresh \
                        and abs(cont_center[contour][1] -
                                cont_center[comp][1]) < ConcentricCircleThresh:
                    if cont_area[comp] < cont_area[contour]:
                        # remove the smaller one
                        cont_area[comp] = (0, 0)
                        cont_center[comp] = (0, 0)
                        self.imgcontours[comp] = 0
                    else:
                        cont_area[contour] = (0, 0)
                        cont_center[contour] = (0, 0)
                        self.imgcontours[contour] = 0
                comp += 1

        cont_area[:] = [x for x in cont_area if x != (0, 0)]  # remove all contours set to zero
        cont_center[:] = [x for x in cont_center if x != (0, 0)]
        self.imgcontours[:] = \
            [x for x in self.imgcontours if not type(x) is int]
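    # Note (illustration, not part of the original listing): two contours are treated as
    # the same drawn circle when the x- and y-distances between their
    # minimum-enclosing-circle centres are both below ConcentricCircleThresh = 5 px;
    # e.g. centres (100, 120) and (103, 117) are merged and the smaller contour dropped,
    # while (100, 120) and (106, 120) are kept as two separate shapes.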

    def getcoordinates(self, roomheight, robotheight):  # calculate coordinates to match robot
        coordspacing = self.imgheight * robotheight / roomheight
        # width between each found coordinate
        # for the sweeping motion, in pixels
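        # Note (illustration, not part of the original listing): with the defaults in
        # variables.py, roomheight = 2 m and robotheight = 0.27 m, a resized image that is
        # for example 450 px high gives coordspacing = 450 * 0.27 / 2 = 60.75 px,
        # i.e. roughly one sweep row per robot width.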

        yvalues = []  # get list of all y-values
        i = 0
        while i < self.imgheight:
            yvalues.append(i)
            i += coordspacing

        contourYmax = []  # initiate lists
        contourYmin = []
        wantedYvalues = []
        foundcoordinates = []
        self.pylistimgcontours = []

        for Contour in range(len(self.imgcontours)):  # for each set of contours
            self.pylistimgcontours.append([])
            # make it a python list instead of a numpy array
            self.pylistimgcontours[Contour] = \
                self.imgcontours[Contour].tolist()
            for o in range(len(self.pylistimgcontours[Contour])):
                self.pylistimgcontours[Contour][o] = \
                    self.pylistimgcontours[Contour][o][0]

            contourYmax.append(0)  # initiate max/min variables
            contourYmin.append(self.imgheight)

            for a in range(len(self.imgcontours[Contour])):
                # linear search for the highest and lowest
                # y-value of each contour
                if self.pylistimgcontours[Contour][a][1] > \
                        contourYmax[Contour]:
                    contourYmax[Contour] = \
                        self.pylistimgcontours[Contour][a][1]
                if self.pylistimgcontours[Contour][a][1] < \
                        contourYmin[Contour]:
                    contourYmin[Contour] = \
                        self.pylistimgcontours[Contour][a][1]

            wantedYvalues.append(copy.copy(yvalues))
            # remove all searched-for y-values outside the contour
            for b in range(len(wantedYvalues[Contour])):
                if wantedYvalues[Contour][b] > contourYmax[Contour]:
                    wantedYvalues[Contour][b] = 0
                if wantedYvalues[Contour][b] < contourYmin[Contour]:
                    wantedYvalues[Contour][b] = 0
            wantedYvalues[Contour] = \
                filter(lambda a: a != 0, wantedYvalues[Contour])

            foundcoordinates.append([])
            # get all crossings of wanted y-values and contours by
            # going through the contour and
            # saving the closest points of intersection
            for c in range(len(wantedYvalues[Contour])):
                crossings = []
                for d in range(len(self.pylistimgcontours[Contour]) - 1):
                    curr = self.pylistimgcontours[Contour][d]
                    next = self.pylistimgcontours[Contour][d + 1]
                    yv = wantedYvalues[Contour][c]
                    if yv >= curr[1] and yv < next[1]:  # find crossing y-values
                        if yv > (curr[1] + next[1]) / 2:  # save closest
                            crossings.append(next)
                        else:
                            crossings.append(curr)
                    if yv <= curr[1] and yv > next[1]:  # find crossing y-values
                        if yv < (curr[1] + next[1]) / 2:  # save closest
                            crossings.append(next)
                        else:
                            crossings.append(curr)

                mostleft = [self.imgwidth, self.imgwidth]
                mostright = [0, 0]
                # if multiple points of intersection were found,
                # save only the leftmost and rightmost point of intersection
                for item in range(len(crossings)):
                    if crossings[item][0] > mostright[0]:
                        mostright = crossings[item]
                    elif crossings[item][0] < mostleft[0]:
                        mostleft = crossings[item]
                foundcoordinates[Contour].append(mostleft)
                foundcoordinates[Contour].append(mostright)

            for g in range(len(foundcoordinates[Contour])):
                # change the order of points so the robot will travel in a
                # sweeping motion instead of zigzag
                if g % 4 == 0:
                    foundcoordinates[Contour][g - 1], \
                        foundcoordinates[Contour][g - 2] = \
                        foundcoordinates[Contour][g - 2], \
                        foundcoordinates[Contour][g - 1]

        self.coordinates = [[0, 0]]
        for COntour in range(len(foundcoordinates)):
            # mirror the y-axis and collapse the nestled list of
            # coordinates to one flat list
            for point in range(len(foundcoordinates[COntour])):
                foundcoordinates[COntour][point][1] = \
                    abs(self.img.shape[0] - foundcoordinates[COntour][point][1])
                self.coordinates.append(foundcoordinates[COntour][point])
            self.coordinates.append([0, 0])

    def plotroute(self):  # plot coordinates as a route
        xplot = []
        yplot = []
        for Point in range(len(self.coordinates)):
            # arrange all coordinates for plotting
            xplot.append(self.coordinates[Point][0])
            yplot.append(self.coordinates[Point][1])

        if plotCoordninates == 1:  # check state variable
            # plot course
            plt.plot(xplot, yplot)
            plt.axis([0, self.imgwidth, 0, self.imgheight])
            plt.imshow(cv2.flip(self.printimgcontours, 0))
            plt.show(block=True)

    def getoperations(self, roomheight):
        resolution = (roomheight * 1000) / self.imgheight  # get the resolution of mm/pixel in the image
        self.operations = []
        startangle = 90  # initiate start angle
        currangle = startangle

        for points in range(len(self.coordinates) - 1):
            curr = self.coordinates[points]      # current position
            next = self.coordinates[points + 1]  # next position
            vectorwidth = next[0] - curr[0]      # width of operation triangle
            vectorheight = next[1] - curr[1]     # height of operation triangle

            # angle of operation triangle obtained by arctan; if height or width
            # is 0 (in 90, 180, 270 degree turns) division by zero must first be countered
            if vectorheight == 0 and vectorwidth < 0:
                vectorangle = math.pi
            elif vectorheight == 0 and vectorwidth > 0:
                vectorangle = 0
            elif vectorwidth == 0 and vectorheight < 0:
                vectorangle = math.pi * 3 / 2
            elif vectorwidth == 0 and vectorheight > 0:
                vectorangle = math.pi / 2
            else:
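The listing above derives the heading of each operation from arctan, handling the axis-aligned cases (90, 180 and 270 degree turns) separately to avoid division by zero. A minimal standalone sketch of the same computation using math.atan2, which covers those cases directly, is shown below; the names vectorwidth and vectorheight follow the listing, and the sketch is an illustration rather than the project's implementation.

# Illustration only: equivalent heading calculation with math.atan2.
import math

def vectorangle_atan2(vectorwidth, vectorheight):
    angle = math.atan2(vectorheight, vectorwidth)  # angle in (-pi, pi]
    if angle < 0:
        angle += 2 * math.pi  # map to [0, 2*pi) to match the listing's convention
    return angle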
