DEGREE PROJECT IN COMPUTER SCIENCE AND ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2020

AN AUTONOMOUS SELF-DOCKING ROBOT ON PASSIVE SYSTEMS

GAULTIER MATHIEU


This thesis has been performed in partnership with

AN AUTONOMOUS SELF-DOCKING ROBOT ON PASSIVE SYSTEMS

EN AUTONOM SJÄLVDOCKNINGSROBOT FÖR PASSIVA SYSTEM

Mathieu GAULTIER

DEGREE PROJECT SUBJECT: Computer Science
PROGRAMME: Master's programme in Computer Science
SUPERVISOR IN KTH: Patric Jensfelt
EXAMINER: Joakim Gustafson
SUPERVISOR IN RI: Léo Escande

February 28, 2020


Abstract

The design of a robot is tightly linked to its assigned task. A robot does not have the same adaptability as humans do; therefore every robot is designed to achieve one specific task.

An increasingly popular alternative to this issue is modular robots. Since a single module cannot achieve the task alone, it requires the help of others. This thesis studies the possibility of having one single ground robot manage the displacement task instead of implementing sensors and algorithms on each independent autonomous system. The displacement robot should have the ability to plug itself to other modules so that it can then drive them to their desired places. This thesis focuses especially on the docking sequence between the displacement robot and the other module. The achievement of this docking sequence relies on the use of fiducial markers to extract the relative poses of the two systems. Furthermore, the noisiness of the obtained measures has been dealt with using Extended Kalman Filters and Monte Carlo Localisation algorithms. The results obtained through the simulations showed that an MCL algorithm would provide better results for such systems. However, the output relative poses were not precise enough for the system to actually plug the connectors and ensure the communication between the two systems.


Sammanfattning

The design of a robot is closely tied to its assigned task. A robot does not have the same adaptability as humans, so every robot is designed to solve one specific task. An alternative to this is modular robots, which are growing in popularity. A robot requires the help of others since it cannot accomplish the task alone. This degree project investigates the possibility of having a single ground robot handle the displacement task instead of implementing sensors and algorithms on every independent system. The displacement robot should have the ability to connect itself to certain other modules so that it can then drive them to the desired places. This degree project focuses in particular on the docking sequence between the displacement robot and the other module. To achieve this docking sequence, visual markers are used to extract the relative positions of the two systems. In addition, Kalman filters and Monte Carlo localisation algorithms have been investigated. The results obtained through the simulations are correct, but the computed relative positions were not accurate enough for the systems to actually connect and ensure communication between them.


Contents

1 Introduction
1.1 Problem statement
1.2 Thesis outline

2 Background
2.1 Vocabulary over the Ground Modular Drone (GMD)
2.2 Presentation of the GMD functionalities
2.3 Some specificities of the GMD
2.3.1 The communication between the PODs and the INT
2.3.2 The bodies' states
2.3.3 Description of the whole docking sequence
2.4 The kinematics of the GMD
2.4.1 The two wheel robot kinematics
2.4.2 The multiple wheel robot kinematics
2.5 The wheel odometry
2.6 Sensor Fusion filters
2.6.1 The Extended Kalman Filter (EKF)
2.6.2 The Particle Filter (PF)

3 Related Work
3.1 The binaural sonar
3.2 The Infrared based pose estimation
3.3 The vision based pose estimation
3.4 The laser range scanner (Lidar)

4 Method
4.1 The sensor
4.1.1 The choice of the sensor
4.1.2 Optimisation of the sensor
4.1.3 The sensor accuracy and frequency
4.2 The pose estimation algorithm
4.2.1 Equation of the odometry
4.2.2 Computation of the matrices for the EKF algorithm
4.2.3 Computation of the weights for the PF algorithm
4.3 The docking algorithm
4.4 The differential control

5 Simulations
5.1 The GMD simulation model
5.1.1 The robot model
5.2 The results
5.2.1 The angle correction
5.2.2 The lateral position correction
5.2.3 The backing sequence
5.2.4 The full sequence

6 Conclusion
6.1 Future Work

Appendix

Appendix 1: a tool for evaluating ArUco sensor accuracy
1 Presentation of the geometry of the tool
2 Presentation of the tool
2.1 The 3D design
2.2 The 3D printed tool

Appendix 2: influence of the marker pose over the sensor accuracy
1 Influence of the marker pose over the deepness measure
1.1 Influence of the deepness x
1.2 Influence of the angle θ
1.3 Influence of the lateral position y
2 Influence of the marker pose over the lateral position measure
2.1 Influence of the deepness x
2.2 Influence of the angle θ
2.3 Influence of the lateral position y
3 Influence of the marker pose over the angle measure
3.1 Influence of the deepness x
3.2 Influence of the angle θ
3.3 Influence of the lateral position y

Appendix 3: datasheet of the motor and gearbox used for simulations
1 The motor datasheet
2 The gearbox datasheet

Bibliography


List of Figures

2.1 Vocabulary over the Ground Modular Drone
2.2 Modularity of the GMD
2.3 A fully assembled robot based on the GMD
2.4 Possibility of changing the main body of the GMD
2.5 Wire communication within the GMD
2.6 Schematic of the full docking sequence
2.7 A two wheel robot schematic
2.8 A multiple steering wheel robot schematic
2.9 Impact of the pivot placement on the oscillations of the rear of the vehicle
2.10 Algorithm scheme of the Extended Kalman Filter method
3.1 Robot relative pose estimated with the binaural sonar method
3.2 The homing system and the regions divided by infrared beams
3.3 Example of existing fiducial markers developed until now
3.4 The binary code system of ArUco markers
3.5 Example of the measure output of a laser range scanner sensor
3.6 Lidar based relative pose estimation of two robots
3.7 Pose estimation with a partial obstruction of the dock
4.1 A wide detection range fiducial marker (figure extracted from [19])
4.2 Final fiducial marker board used for the GMD
4.3 The band analysis of the camera frame for fast marker pose estimation
4.4 Time delay between two ArUco measures
4.5 Accuracy and precision of the x measure
4.6 Accuracy and precision of the y measure
4.7 Accuracy and precision of the θ measure
4.8 Enhancement of a robot circular trajectory estimation based on an EKF
4.9 Sample of possible initial poses for the robot before it starts the docking sequence
4.10 Decomposition of the full docking sequence
4.11 Pose error of the robot regarding the reference path to follow
5.1 The full Simulink schematic used for the simulation of the docking sequence
5.2 The Simulink model for a motor
5.3 The Simulink model for an encoder
5.4 The motor control
5.5 Picture of the starting pose and the final pose of the docking sequence
5.6 Angle control for both EKF and PF
5.7 Influence of the ArUco sensor data loss over the angle accuracy for the PF
5.8 Evolution of the GMD pose during the lateral position correction step
5.9 Zoom over the GMD pose during the lateral position correction step
5.10 Evolution of the x_GMD depth during the backing sequence
5.11 Evolution of the GMD pose during the backing sequence
5.12 Evolution of the velocities and the data loss during a full sequence of docking
5.13 Evolution of the full robot state during a complete docking sequence
5.14 Evolution of the GMD rear pose during a complete docking sequence
5.15 Zoom over the GMD rear pose during a complete docking sequence
6.1 Scheme of the real position of both marker and camera in a 2D plane
6.2 Scheme of the position of the marker and camera in a 1D plane
6.3 3D model of the support of both marker and camera
6.4 Top view of the angle adjustable support in three different positions
6.5 3D model of the supports
6.6 3D model of the whole sensor measuring tool
6.7 3D printed support based on the pre-designed 3D model
6.8 Influence of the x marker pose over the x measure accuracy and repeatability
6.9 Influence of the θ marker pose over the x measure accuracy and repeatability
6.10 Influence of the y marker pose over the x measure accuracy and repeatability
6.11 Influence of the x marker pose over the y measure accuracy and repeatability
6.12 Influence of the θ marker pose over the y measure accuracy and repeatability
6.13 Influence of the y marker pose over the y measure accuracy and repeatability
6.14 Influence of the x marker pose over the θ measure accuracy and repeatability
6.15 Influence of the θ marker pose over the θ measure accuracy and repeatability
6.16 Influence of the y marker pose over the θ measure accuracy and repeatability
6.17 Table of the inertia for the motor
6.18 Table of the inertia for the gearbox


List of Tables

3.1 The binaural sonar sensor accuracy regarding its close environment
4.1 Ranges of detection for different marker sizes
4.2 Ranges of detection for the newly defined marker
4.3 Equations used for the ArUco based sensor simulations
5.1 The initial pose and the final pose of the docking sequence


Chapter 1 Introduction

The increasing presence of robots in society is obvious. More and more are being developed every day, in a wide range of areas. Tasks that are either dangerous or impossible for humans can now be achieved by these robots without any risk to human beings. Besides, the accuracy with which they handle their assigned tasks is well beyond human level.

All these robots are initially composed of some hardware components but need brains to be able to move appropriately: they need code. Each of these robots handles its own specific task, distinct from the others. Therefore each of them should have its own code; Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs), for instance, will have quite different code for their tasks.

Even though each robot handles distinct tasks, most of them need the ability to move around in space. With all the recently developed obstacle avoidance and SLAM based displacement algorithms, moving around is not an issue anymore in most cases. Nevertheless, implementing a fully autonomous mobile robot is getting more and more complicated. Hence a lot of time is spent on this specific part, while more energy should be focused on the main tasks of the robot.

The core idea of this project is to remove the necessity of implementing those algorithms every single time a new robot is developed. The focus is put on UGVs.

1.1 Problem statement

This thesis is built along with a project called the GMD (presented in Chapter 2). To sum up, the GMD is a wheeled robot that has the ability to get plugged to and unplugged from another adapted wheeled module. Once plugged to a module, the GMD is able to carry it to any accessible place and then drop it.

The ability to get plugged to a module is divided into two main functionalities:

• Find the module: called the approach step in section 2.3.3

• Get plugged to the previously found module

An essential point of this docking sequence is the passiveness of the module to dock to. Since one would not wish to waste the module's battery while it is not moving, i.e. not plugged to the main robot, the module is turned off. It can thus no longer communicate with the robot.


The boundaries of this thesis are set to the second part. It focuses on the docking step, when the robot has already reached the close space of the module to dock to and approximately knows its position. The full approach step is assumed to be functional and to give a position of the body with at least the accuracy stated in section 2.3.3.

Due to time constraints, some experiments could not be realized and simulations have been performed instead. For simplicity, the ground is supposed to be flat between the module to dock and the robot. Besides, the space between the robot and the module to dock to is supposed to be obstacle free. Finally, no data losses are taken into account: all sensor data are assumed to be correctly received by the robot and usable by the docking algorithms. This thesis focuses on the final accuracy reachable with the docking sequence. The wire connections require a precision close to a millimetre, so an accuracy of about a centimetre is not tolerable.

More specifically, this thesis tests the feasibility of a high precision docking sequence between a robot and a passive system.

1.2 Thesis outline

The rest of the thesis is structured in five chapters as noted below:

• Chapter 2: details several concepts useful for the comprehension of the whole thesis

• Chapter 3: lists different sensors and methods that already exist for achieving a docking sequence for a robot

• Chapter 4: presents the method developed within this thesis to achieve a proper docking sequence

• Chapter 5: shows the results obtained when running simulations based on the presented method

• Chapter 6: concludes the thesis and presents future work


Chapter 2 Background

This chapter covers the knowledge required to understand the rest of the thesis. It gives some explanations about both the GMD and some theory used within the docking sequence.

2.1 Vocabulary over the Ground Modular Drone (GMD)

This thesis is linked to a project called the Ground Modular Drone. Figure 2.1 introduces some specific vocabulary concerning the GMD.

Figure 2.1: Vocabulary over the Ground Modular Drone

The GMD is composed of two different modules:

• the INT (standing for INTelligence), which is the brain of the GMD. It does not have any type of actuator and is essentially composed of some sensors and the main algorithms.

• the POD, which is the core actuator of the GMD. It has some sensors for obstacle detection and a simple program which mainly communicates its state to the INT and executes the orders coming from it.

2.2 Presentation of the GMD functionalities

The GMD has two main functionalities:

• modularity: it is possible to assemble it in the way that best answers the problem at hand

• self-docking sequence: once all the modules have been assembled, the GMD can bring them autonomously from one place to another

The modularity

On one hand, the main idea of this project is to avoid writing a new displacement algorithm each time a new robot needs to be designed to achieve a specific task. On the other hand, every single robot has its own size adapted to its tasks. Indeed, one carrying heavy loads should not have the same dimensions as one monitoring a defined area.

Therefore the GMD is a modular robot. Thanks to this modularity, it can be assembled in different sizes and with a varying number of wheels. As shown in Figure 2.2, the main body of the GMD can be adapted according to the requirements.

Figure 2.2: Modularity of the GMD

Once the main body is assembled, the INT can be plugged onto it so that the newly defined GMD gets the ability to move around. Once the GMD is assembled, the rest of the robot can be dropped on top of it. Figure 2.3 shows one among all the possible configurations of the GMD, with a robot arm on top that can perform other tasks without taking care of the displacement part.

Figure 2.3: A fully assembled robot based on the GMD

The final system is thus composed of:

• An autonomous moving platform (the GMD) that deals with path planning, path following and obstacle avoidance.

• Another entity dropped on top of it: the main part handling the core task of the overall system.

The autonomous self-docking sequence

The GMD's modularity presents a great advantage concerning time saving when it comes to integration. The GMD is only able to safely move from a point A to a point B (provided the environment allows it). Whatever automaton stands on top of the GMD, the GMD will necessarily stop at some point so that the automaton can handle its task. Nevertheless, if this task is time-consuming, the GMD will waste time standing still while some other automaton might need to be carried from one place to another. Therefore, the second main functionality of the GMD is its ability to get freely plugged to and unplugged from several different main bodies. In Figure 2.4, the GMD is dropping one main body and getting plugged to another one in order to carry it somewhere else.


Figure 2.4: Possibility of changing the main body of the GMD

2.3 Some specificities of the GMD

2.3.1 The communication between the PODs and the INT

All displacement algorithms are based on several parameters, some of the main ones concerning the robot mechanics itself. Each specific mechanical design is coupled with a specific behaviour. An example concerns the robot's close environment: a larger robot does not have the same movement possibilities as a small one. Whenever a path is chosen for the robot to drive along, the robot size is taken into account. It does not make sense to ask the robot to drive through a one-metre-wide door if its own width is one and a half metres.

On the other hand, the GMD is designed in such a way that it can take care of the displacement of several autonomous systems of different sizes. This implies that the GMD knows the exact position of each of the PODs it is composed of, so that it gives the correct orders to the corresponding PODs. This communication is entirely wired (no wireless communication within one GMD). The main wires visible in Figure 2.5 are:

• one CAN bus: it allows a fast and direct communication from the PODs to the INT and from the INT to the PODs. There is no POD-to-POD communication on the CAN bus.

• some UARTs: they allow communication between two side-by-side PODs.

• some GPIOs: they allow to get some specific information, like the distance between two biPODs. Three of the GPIOs are used as a binary code encoding the size of the linking beam. Two connected PODs have wires linking each of the corresponding pins of their connectors. The GPIO pins used for the beam size are all set high on one of the two PODs (the writer POD) while the other POD (the reader POD) reads the values of its GPIOs. If a cable connects the corresponding GPIO pins of the two PODs, then the signal is correctly transmitted and the reader POD reads high on this specific GPIO (high stands for a 1 in binary code). On the other hand, a missing cable will not transmit the signal, leading to a low value on the corresponding GPIO of the reader POD (standing for 0 in binary code). Therefore, in this specific case, the beam size can take $2^3 = 8$ different values, as sketched in the snippet after this list.

• power wires: all the PODs have their own inner batteries, so they are energetically independent. However, this is not the case for the INT, which needs to be plugged onto a POD to get its power supply.
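As an illustration of this 3-bit encoding, the sketch below reads three GPIO pins on the reader POD and decodes them into a beam length. It is a minimal sketch only: the pin numbers, the choice of the RPi.GPIO library and the beam-length lookup table are assumptions, not taken from the GMD firmware.

```python
# Hypothetical sketch of the 3-bit beam-size decoding on the reader POD.
# Pin numbers and beam lengths are illustrative assumptions.
import RPi.GPIO as GPIO

BEAM_SIZE_PINS = [17, 27, 22]  # three GPIOs wired (or not) to the writer POD
BEAM_LENGTHS_MM = [200, 300, 400, 500, 600, 700, 800, 900]  # 2^3 = 8 values

def read_beam_size() -> int:
    """Read the three beam-size GPIOs and return the beam length in mm."""
    GPIO.setmode(GPIO.BCM)
    for pin in BEAM_SIZE_PINS:
        # Pull-down so that a missing cable reads low (binary 0).
        GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
    # Assemble the binary code: first pin is the most significant bit.
    code = 0
    for pin in BEAM_SIZE_PINS:
        code = (code << 1) | GPIO.input(pin)
    return BEAM_LENGTHS_MM[code]
```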

Figure 2.5: Wire communication within the GMD

2.3.2 The bodies' states

The bodies of the GMD have three possible states:

• charging: the body is plugged onto a charging station directly linked to the electrical network. In this mode, the body does not execute any action. It waits until an INT gets plugged in and sends commands.

• plugged: the body is plugged to an INT. It receives commands to execute and sends all the sensor data back to the INT.

• unplugged: the body is left unplugged (no INT and no charging station). In order to save as much energy as possible, it is completely turned off. Thus no communication between an unplugged body and the INT is possible (even wireless). The body is turned back on whenever an INT is plugged in and it receives a signal on a certain GPIO.

These modes are a significant part of this thesis, since the choice of the sensors for the autonomous self-docking sequence is highly affected by this lack of communication between the body and the INT during the docking sequence.


2.3.3 Description of the whole docking sequence

The full docking sequence starts at the moment when the GMD receives the order to get a specific body and ends when it brings the body back to the right spot. This sequence can be divided into several steps:

1. The GMD receives the order to get a specific body

2. The approach and identification step:

(a) The GMD goes into the area where it is most likely to find the expected body.

(b) The GMD looks within this area for the body, finds it and identifies it. If the body corresponds to the expectations, it moves to the next step. Otherwise it goes one step backward and keeps looking for the body.

3. The docking step:

(a) The GMD gets aligned with the identified body

(b) The GMD drives backward until it gets docked to the body. If this step fails, it goes back to the alignment step and starts over again.

4. The GMD drives to where the body is required

Figure 2.6: Schematic of the full docking sequence

2.4 The kinematics of the GMD

Achieving accurate control of a wheeled robot requires a clear-cut knowledge of its kinematics. Since the GMD is a modular robot, it has several possible configurations. Thus it is necessary to provide adapted kinematics and an appropriate control for every configuration. The configurations of the GMD can be divided into two categories: the two wheel GMD and the multiple (four or more) wheel GMD.


2.4.1 The two wheel robot kinematics

The two wheel robot, or differential drive robot, is among the most common configurations. It is composed of two motorised wheels located on two opposite sides of the robot, as shown in Figure 2.7. The whole motion control is based on the differential speed of the two wheels.

Figure 2.7: A two wheel robot schematic

In order to simplify the kinematics equations, two hypotheses are made. The contact between a wheel and the ground is reduced to a point. Furthermore, it is assumed that there is no wheel slippage.

Thanks to these two hypotheses, it is possible to relate the wheel speeds to the angular and linear velocities of the whole robot:

$$\omega_{GMD} = \frac{r \, (\omega_L - \omega_R)}{L}, \qquad v_{GMD} = \frac{r \, (\omega_L + \omega_R)}{2} \tag{2.1}$$

Where:

$\omega_{GMD}$ and $v_{GMD}$ are respectively the angular and linear speeds of the GMD
$\omega_L$ and $\omega_R$ are respectively the angular speeds of the left and right wheels
$r$ is the wheel radius
$L$ is the wheel base
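To make equation 2.1 concrete, here is a minimal Python sketch of this forward kinematics, mapping wheel speeds to body velocities and back; the function names and the example wheel radius and wheel base are illustrative assumptions, not the GMD's actual dimensions.

```python
import numpy as np

def forward_kinematics(omega_l, omega_r, r, L):
    """Wheel angular speeds (rad/s) -> body angular and linear speed (eq. 2.1)."""
    omega = r * (omega_l - omega_r) / L
    v = r * (omega_l + omega_r) / 2.0
    return omega, v

def inverse_kinematics(omega, v, r, L):
    """Body angular and linear speed -> wheel angular speeds (rad/s)."""
    omega_l = (v + omega * L / 2.0) / r
    omega_r = (v - omega * L / 2.0) / r
    return omega_l, omega_r

# Example with assumed dimensions: r = 5 cm wheels, L = 30 cm wheel base.
omega, v = forward_kinematics(10.0, 8.0, r=0.05, L=0.30)
```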

2.4.2 The multiple wheel robot kinematics

The multiple wheel kinematics is more complex and requires a deeper study. The multiple wheel steering can be extended from the four wheel steering methods explained in [1]. Being able to steer all the wheels implies the possibility to choose the pivot point of the robot. The pivot point corresponds to the center point of the turn, i.e. the point closest to the planned path when it comes to path following. In a classic car with only the two front wheels steering, this pivot point is located right in between the two rear wheels.

A generalization from four wheel steering up to multiple wheel steering is schematized in Figure 2.8.

Figure 2.8: A multiple steering wheel robot schematic

Where:

$\omega_{i,l}$ and $\omega_{i,r}$ are respectively the angular velocities of the left and right wheels of set number $i$
$\theta_{i,l}$ and $\theta_{i,r}$ are respectively the steering angles of the left and right wheels of set number $i$
$x_i$ is the distance from the front of the GMD of the set of wheels number $i$
$x_R$ is the distance from the front of the GMD of the pivot point
$R$ is the radius of the turn curve followed by the GMD
$L$ is the wheel base
$r$ is the wheel radius

In order to minimize tire friction, each wheel is steered with a different angle and has its own angular velocity. Both the angular velocity and the steering angle of a wheel are linked to its position relative to the pivot point and to the radius of the turn. According to some trigonometric relationships, the expressions of both variables are given by:

$$\theta_{i,*} = \delta_{turn} \, \delta_{pod} \, \mathrm{sign}(x_R - x_i) \left[ \frac{\pi}{2} - \operatorname{atan}\!\left( \frac{2R - \delta_{turn} \, \delta_{pod} \, L}{2 \, |x_R - x_i|} \right) \right]$$

$$\omega_{i,pod} = \frac{v_{GMD}}{R \, r} \sqrt{ \left( R - \delta_{turn} \, \delta_{pod} \, \frac{L}{2} \right)^2 + (x_R - x_i)^2 } \tag{2.2}$$


Where:

$\delta_{turn}$ is equal to 1 if it is a left turn and to −1 otherwise
$\delta_{pod}$ is equal to 1 for a left wheel and −1 otherwise
$v_{GMD}$ is the linear velocity of the GMD expressed at the pivot point
$\omega_{GMD}$ is the angular velocity of the GMD expressed at the pivot point

The choice of the pivot point has two main impacts on the fully steered wheel robot. It first determines how tight a turn can be. Since the steering angle is limited by some mechanical constraints, the tightest allowed turn is thus also subject to constraints. Basically, the limiting set of wheels is the one furthest from the pivot point. This constraint is given by the relation:

$$R_S = \frac{L}{2} + \frac{\max\left(|x_R - x_0|, \, |x_R - x_N|\right)}{\tan\left(\theta_{steer}^{max}\right)} \tag{2.3}$$

The second impact is the possibility to choose the part of the robot with the least oscillations during the movement. Indeed, as stated above, the pivot point corresponds to the point where both linear and angular velocities are defined for the robot. It implies no sideways velocity at this specific point. As shown in Figure 2.9, the oscillations at the back of the vehicle are smaller with a pivot point located at the back than right in the middle.

Figure 2.9: Impact of the pivot placement on the oscillations of the rear of the vehicle

The goal of this thesis is to dock two biPODs together. Since the docking sequence will mainly be a control over both angular and linear velocities, the pivot point should be chosen so that the docking part has the smallest possible oscillations. Thus it should be set at the back of the vehicle, in the same vertical plane as the connectors.

2.5 The wheel odometry

Odometry is a method to estimate the position of the robot relative to its previous position. The most common one is wheel odometry, based on the encoder values of each wheel.

The estimation of the displacement of the robot is given by the inverse kinematics of the robot, obtained by inverting equations 2.1.

Over a short time period $dt$, these speeds can be considered constant and thus it is possible to estimate the next position $x((k+1)dt)$ of the robot based on the previous position $x(k \cdot dt)$. For simplicity, $x(k \cdot dt)$ is noted $x(k)$.

$$\begin{aligned} x_{GMD}(k+1) &= x_{GMD}(k) + v_{GMD} \cdot dt \cdot \cos(\theta_{GMD}(k)) \\ y_{GMD}(k+1) &= y_{GMD}(k) + v_{GMD} \cdot dt \cdot \sin(\theta_{GMD}(k)) \\ \theta_{GMD}(k+1) &= \theta_{GMD}(k) + \omega_{GMD} \cdot dt \end{aligned} \tag{2.4}$$

Where:

• $x_{GMD}(k)$ and $y_{GMD}(k)$ are respectively the x and y coordinates of the robot at time step k

• $\theta_{GMD}(k)$ is the orientation of the robot at time step k

• $v_{GMD}$ and $\omega_{GMD}$ are respectively the linear and angular velocities of the robot over the short incremental time step dt

Odometry is an accurate method for short distance estimation but it drifts on longer paths. Due to the several hypotheses made, it does not take into account the slippage of the wheels or the dimensional errors of the robot over the wheel base or radius. Therefore, some noise is added to the odometry values to take these errors into account in the position estimation. Equations 2.4 thus become:

$$\begin{aligned} x_{GMD}(k+1) &= x_{GMD}(k) + (v_{GMD} \cdot dt + \vartheta_D) \cdot \cos(\theta_{GMD}(k)) \\ y_{GMD}(k+1) &= y_{GMD}(k) + (v_{GMD} \cdot dt + \vartheta_D) \cdot \sin(\theta_{GMD}(k)) \\ \theta_{GMD}(k+1) &= \theta_{GMD}(k) + \omega_{GMD} \cdot dt + \vartheta_\theta \end{aligned} \tag{2.5}$$

Where:

• $\vartheta_D$ and $\vartheta_\theta$ are respectively the errors over the distance and the angle of the odometry estimation

Both $\vartheta_D$ and $\vartheta_\theta$ are zero-mean Gaussian. Their standard deviations are directly linked to the linear and angular velocities through linear coefficients which estimate the error:

$$\vartheta_D \sim \mathcal{N}\left(0, (v \cdot dt \cdot k_D)^2\right), \qquad \vartheta_\theta \sim \mathcal{N}\left(0, (\omega \cdot dt \cdot k_\theta)^2\right) \tag{2.6}$$
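A minimal Python sketch of this noisy odometry propagation (equations 2.5 and 2.6) is shown below; the coefficient defaults for k_D and k_theta are placeholders, not the values used in the thesis simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_odometry(pose, v, omega, dt, k_d=0.05, k_theta=0.05):
    """One odometry step with additive zero-mean Gaussian noise (eqs. 2.5-2.6)."""
    x, y, theta = pose
    # Noise standard deviations scale with the commanded displacement.
    noise_d = rng.normal(0.0, abs(v * dt * k_d))
    noise_theta = rng.normal(0.0, abs(omega * dt * k_theta))
    x += (v * dt + noise_d) * np.cos(theta)
    y += (v * dt + noise_d) * np.sin(theta)
    theta += omega * dt + noise_theta
    return np.array([x, y, theta])
```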

2.6 Sensor Fusion filters

Sensor fusion methods are often used in robotics. A sensor is never perfect for every single situation; each sensor is adapted to a specific case. Distance sensors, for example, have two main parameters: their accuracy and their range of measurement. A long distance sensor typically has a worse accuracy than a short distance sensor. Thus many sensors are complementary, and by using sensor fusion one can define a better sensor composed of the best parts of the two sub-sensors.

Dead reckoning is usually coupled with other sensors to increase its accuracy and limit the drifting problem. Two algorithms usable for this task are the Extended Kalman Filter and the Particle Filter.

2.6.1 The Extended Kalman Filter (EKF)

The Kalman filtering process was first published in 1960 by R.E. Kalman. This method relies on a set of mathematical equations to estimate the mean of a state by minimizing the mean of the squared error. This method is powerful because it takes into account the past, present and future states of the system [2].

Since most systems are non-linear, there also exists an extended version of the Kalman Filter, the Extended Kalman Filter (EKF), which linearizes both the mean and the covariance of the system. An EKF has two stages: a prediction and an update. The prediction estimates the next state of the system based on the previous states, the inputs and the model of the system. Then the update corrects the estimation. Let us consider the non-linear model given by the set of equations:

$$x_k = f(x_{k-1}, u_{k-1}, w_{k-1}), \qquad z_k = h(x_k, v_k) \tag{2.7}$$

Where:

$x_k$ and $x_{k-1}$ correspond to two consecutive state estimates at steps k and k − 1
$u_k$ corresponds to the control inputs
$z_k$ corresponds to the measurement
$w_k$ and $v_k$ are the noise over the estimation and the measurement

In the case where either f or h is a non-linear function, it is possible to linearize it using a first order Taylor series expansion. Therefore four different matrices can be defined based on the following equations:

$$\begin{aligned} x_k &\approx \tilde{x}_k + A_k (x_{k-1} - \hat{x}_{k-1}) + W_k w_{k-1} \\ z_k &\approx \tilde{z}_k + H_k (x_k - \tilde{x}_k) + V_k v_k \end{aligned} \tag{2.8}$$

Where (extracted from [2]):

• $x_k$ and $z_k$ are the actual state and measurement vectors

• $\tilde{x}_k$ and $\tilde{z}_k$ are the approximate state and measurement vectors

• $\hat{x}_k$ is an a posteriori estimate of the state at step k

• the random variables $w_k$ and $v_k$ represent the process and measurement noise

• $A_k$ is the Jacobian matrix of partial derivatives of f with respect to x at step k:

$$A_{k,[i,j]} = \frac{\partial f_{[i]}}{\partial x_{[j]}}(\hat{x}_{k-1}, u_{k-1}, 0) \tag{2.9}$$

• $W_k$ is the Jacobian matrix of partial derivatives of f with respect to w at step k:

$$W_{k,[i,j]} = \frac{\partial f_{[i]}}{\partial w_{[j]}}(\hat{x}_{k-1}, u_{k-1}, 0) \tag{2.10}$$

• $H_k$ is the Jacobian matrix of partial derivatives of h with respect to x at step k:

$$H_{k,[i,j]} = \frac{\partial h_{[i]}}{\partial x_{[j]}}(\tilde{x}_k, 0) \tag{2.11}$$

• $V_k$ is the Jacobian matrix of partial derivatives of h with respect to v at step k:

$$V_{k,[i,j]} = \frac{\partial h_{[i]}}{\partial v_{[j]}}(\tilde{x}_k, 0) \tag{2.12}$$

The complete iteration of the discrete EKF is then given in Figure 2.10, where $K_k$ is the Kalman gain matrix.

Figure 2.10: The complete algorithm of the Extended Kalman Filter with both the prediction and the correction steps (picture extracted from [2])

The two matrices R and Q are respectively the measurement and process noise covariance matrices.
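To make the prediction/correction cycle concrete, here is a compact, generic EKF iteration in Python. It is a sketch of the standard equations summarised in Figure 2.10, not code from the thesis; the function and variable names are assumptions.

```python
import numpy as np

def ekf_step(x_est, P, u, z, f, h, jacobians, Q, R):
    """One discrete EKF iteration: prediction then measurement correction.

    f(x, u): process model; h(x): measurement model;
    jacobians(x, u) -> (A, W, H, V) evaluated at the current estimate.
    """
    A, W, H, V = jacobians(x_est, u)
    # Prediction step: propagate state and covariance.
    x_pred = f(x_est, u)
    P_pred = A @ P @ A.T + W @ Q @ W.T
    # Correction step: Kalman gain and measurement update.
    S = H @ P_pred @ H.T + V @ R @ V.T
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new
```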

2.6.2 The Particle Filter (PF)

The Particle Filter or, when applied to localization, the so-called Monte Carlo Localization (MCL) [3], approximates the pose using a set of samples called particles. Each particle represents a possible state of the system associated with a weight. The PF estimates the next state of each particle based on its previous state, using the dynamic model of the system. As there is some uncertainty within the dynamic model prediction (i.e. the odometry of section 2.5), the particles will spread. The state estimation equations of a differential drive ground vehicle are given in equations 2.5.


Therefore, an external sensor is used to handle a measurement update over the set of particles. The measurement update consists in multiplying the weight of each particle by the measurement likelihood given by the sensor model:

$$w_k^i \propto p\left(z_k \mid x_k^i\right) \tag{2.13}$$

Where:

• $x_k^i$ is the $i$-th particle at time step k

• $z_k$ is the measurement at time step k

• $w_k^i$ is the weight of the $i$-th particle at time step k

Once all the particle weights are estimated, a normalization is done to ensure the weights sum to one. Finally, the resulting estimate of the system state is computed from the weighted states:

$$x_k = \sum_i w_k^i \, x_k^i \tag{2.14}$$

Since the particles spread due to the odometry error, fewer particles are concentrated where $p(z_k \mid x_k^i)$ is high. As time goes on, more and more particle weights will either be high (accurate particles) or really low (inaccurate particles). To deal with this issue, a resampling is performed to remove the low-weighted particles. The resampling consists in deleting the particles with a weight lower than a threshold and creating copies of the highest-weighted particles. However, resampling too often will result in particle depletion due to an overfitting of the latest measurements; that is, the particles will gather in a few states which might not even be the true state of the system. A typical method for the resampling decision is to check the condition stated in equation 2.15; whenever it becomes true, a resampling is performed.

$$\frac{1}{\sum_i \left(w_t^i\right)^2} < \frac{N}{2} \tag{2.15}$$

To ensure a high accuracy of the PF, the number of particles should be chosen with regard to the posterior distribution. Too small a sample would not be able to properly represent the distribution and would thus result in a decrease of accuracy. On the other hand, taking a larger sample than required would not increase the accuracy but only the computational cost of the algorithm.
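The following Python sketch summarises one MCL iteration with the effective-sample-size test of equation 2.15; the motion and likelihood functions are placeholders for the models described above, and the names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_step(particles, weights, motion_model, likelihood, z):
    """One particle filter iteration: predict, weight, normalize, resample."""
    # Prediction: propagate every particle through the noisy motion model.
    particles = np.array([motion_model(p) for p in particles])
    # Measurement update (eq. 2.13) and normalization.
    weights = weights * np.array([likelihood(z, p) for p in particles])
    weights /= weights.sum()
    # Resample when the effective sample size drops below N/2 (eq. 2.15).
    n = len(particles)
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    # State estimate as the weighted mean of the particles (eq. 2.14).
    x_est = np.average(particles, axis=0, weights=weights)
    return particles, weights, x_est
```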


Chapter 3

Related Work

The docking sequence of an autonomous system relies on the knowledge of the pose of the dock relative to the robot. This pose can be estimated by using numerous different sensors.

3.1 The binaural sonar

The binaural sonar method is based on ultrasonic transmitters and receivers [4]. Both the robot and the system to dock to are equipped with one transmitter and two receivers, as shown in Figure 3.1. The two systems communicate in order to alternately send ultrasonic signals to one another. The receivers estimate the distance gap to the other robot's transmitter based on phase difference computations. Then the four measurements provided by the receivers of the two autonomous systems are merged to deliver a relative pose estimation relying on geometrical equations.

Figure 3.1: Two independent autonomous systems equipped with one emitter (red) and two receivers (blue) for estimating their relative poses (figure extracted from [4])

The relative pose estimation provided by the binaural sonar method is accurate within its specified range of detection. As long as the sideways offset is smaller than 0.25 m for a distance of 1.5 m between the robots, the error of the overall pose estimation is less than 0.05° in angle and 1 cm in lateral position. On the other hand, the sensor has a high sensitivity to its environment. As reported in Table 3.1, a partial obstruction of the line of sight between transmitters and receivers results in a non negligible error over both orientation and position.

Scenario | Average orientation error (deg) | Average lateral error (m)
No object obstruction | 0.038 | 0.0023
Objects in proximity (no direct obstruction of line-of-sight) | 0.036 | 0.0021
Object partially obscures line-of-sight between transmitter and one receiver | 3.65 | 0.019
Object fully obscures line-of-sight between transmitter and one receiver | 21.38 | 0.137
Object partially obscures both receivers | - | -

Table 3.1: The binaural sonar sensor accuracy regarding its close environment (table extracted from [4])

3.2 The Infrared based pose estimation

The infrared based relative pose estimation is a really common method. Currently it is the most widespread technology used for docking sequences. It can be found in everyday robots such as the autonomous vacuum cleaners [5] [6] present in our houses, when they drive back to their charging dock.

The method for estimating the relative pose of the two systems relies on the light intensity difference between the receivers placed on one system. As shown in Figure 3.2, extracted from [5], several infrared beams are emitted in complementary sections so that the robot knows whereabouts the dock is according to the beam signature.


Figure 3.2: The homing system and the regions divided by infrared beams (figure extracted from [5])

This method has already been proven to work. However, the accuracy of this technology is low: the lateral offset climbs up to 5 cm and the angle error to a few degrees. These errors are not an issue, though, since the charging station itself is able to compensate these offsets through its mechanical design. Besides, since the sensor communication is based on beams, any partial obstruction of the sensors leads to a loss of data from the whole system.

This technology has also been used for a modular robot so that its modules can autonomously get docked together [7]. The use of several infrared receivers and transmitters placed at predetermined positions on top of the robot makes it possible to estimate the relative pose of the two systems. Thus it manages to properly execute a docking sequence.

3.3 The vision based pose estimation

With the enhancement of camera resolutions, vision based pose estimation is an increasingly promising technology. The extraction of several specific patterns out of the camera frame makes it possible to estimate their poses relative to the camera lens. If the geometrical shape of a specific target is specified, then a pattern extraction [8] [9] can be performed to estimate the pose of this specific object.

A promising technology is the fiducial marker. These markers are of several types, as observable in Figure 3.3. Among them is the ArUco marker shown in Figure 3.3b. This marker has been developed by Rafael Munoz and Sergio Garrido [10] as an OpenCV library.


(a) reacTIVision (b) ArUco (c) Chroma tag (d) Fourier Tag

Figure 3.3: Example of existing fiducial markers developed until now

The ArUco method relies on the geometrically specific shape of the marker. The algorithm first extracts the corners out of the camera frame. Then it verifies the validity of the marker by checking the binary code extracted from the virtual grid drawn over the marker. Each of the sub-cells of the virtual grid in Figure 3.4 is associated with a 1 (black cell) or a 0 (white cell). Thus it is possible to extract a binary code out of the marker. If the corresponding binary code is present in the ArUco library, it is then possible to estimate its pose.

Depending on the desired application, several dictionaries are available in the ArUco library. Each marker's binary code is coded on a defined number of cells, so markers can be classified by this number of cells, which varies from 9 (i.e. 3x3) up to 36 (i.e. 6x6). Figure 3.4 shows a marker extracted from the 5x5 bit library, since the binary code is coded over a 5x5 cell square. The cell size is defined by the marker size itself; all the inner cells have the same size. Furthermore, the binary code cells are surrounded by another line of black cells used for the corner extraction. Therefore a cell size depends on both the whole marker size and the library the marker belongs to. A marker extracted from the NxN library (N ∈ {3, 4, 5, 6}) with an edge size of $a_{marker}$ has a cell size of:

$$a_{cell} = \frac{a_{marker}}{N + 2} \tag{3.1}$$

Figure 3.4: Picture of the virtual grid represented on top of an ArUco marker to extract the corresponding binary code (figure extracted from [10])

The pose estimation of the ArUco marker is computed with matrix products. These matrices are defined from the intrinsic parameters of the camera lens and the four corners of the marker. So the accuracy of the corners extracted within the camera frame significantly influences the final accuracy of the marker pose. Furthermore, the relative size of the marker within the camera frame impacts the accuracy: a one pixel error over the corner position in the frame will have less impact for a thousand-pixel-wide marker than for a dozen-pixel-wide one. Thus both the size of the marker and the distance to the lens influence the accuracy.

The overall accuracy of the ArUco pose estimation thus depends on:

• the hardware: a higher camera resolution results in a better pose estimation of a marker (see [11])

• the marker size: a bigger marker results in a more accurate pose estimation

• the distance gap between the lens and the marker: the closer it is, the more accurate it is

Several figures about this accuracy are given in section 4.1.3. Since the accuracy of the ArUco based sensor is correlated with the hardware, it can provide inaccurate measures. In order to enhance it, one can complete the sensor with extra sensors [12] and merge them with Kalman filters.

The constraints of this sensor essentially concern its field of view. In order to estimate the marker pose, the OpenCV algorithm requires the marker to be fully captured within the camera frame. Any situation with a part of the marker out of the camera field, or a partial obstruction of it, prevents the algorithm from estimating the pose of the marker.
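As an illustration of the detection pipeline described above, the snippet below detects ArUco markers in a frame and estimates their poses with OpenCV. It is a generic sketch: the dictionary choice, marker length and calibration values are assumptions, and the exact API varies slightly between OpenCV versions (this uses the pre-4.7 aruco module).

```python
# Minimal ArUco detection and pose estimation sketch (OpenCV < 4.7 aruco API).
# camera_matrix / dist_coeffs are placeholder calibration values.
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_250)
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
MARKER_LENGTH = 0.08  # marker edge size in metres (assumed)

frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
if ids is not None:
    # One rotation and one translation vector per detected marker.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_LENGTH, camera_matrix, dist_coeffs)
    for marker_id, tvec in zip(ids.flatten(), tvecs):
        print(marker_id, tvec)  # translation of the marker in the camera frame
```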

3.4 The laser range scanner (Lidar)

The laser range scanner is a sensor that gives distance measurements for a set of different angles, as seen in Figure 3.5. It has many advantages: compared to the other sensors it has a wide range of detection that can reach up to 360° for some specific lidars. This wide detection range makes it possible to constantly keep track of the dock pose.

Figure 3.5: Example of the measure output of a laser range scanner sensor

The pose estimation of the dock relies on a profile extraction out of the measures. Thanks to algorithms such as the Iterative Closest Point (ICP) algorithm, it is possible to make the measured sample converge onto a specific pattern. Briefly, the ICP algorithm is a local convergence method that minimizes the squared distance between two point clouds.

If the geometric profile of the dock is known with a high enough precision, then the convergence algorithm will provide a pose estimation with a high accuracy [13], [14], [15]. The pose estimation of the dock is thus extracted from the output of the ICP execution, as shown in Figure 3.6.

(a) Geometric shape of the dock (b) Lidar point cloud measurements

(c) Pose estimation using the ICP method

Figure 3.6: Pose estimation of the dock based on a lidar point cloud sensor and an Iterative Closest Point method
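For reference, a minimal 2D ICP iteration under the usual assumptions (nearest-neighbour correspondences, SVD-based rigid alignment) could look like the sketch below; it is illustrative only and omits the pre-processing and outlier filters mentioned later.

```python
import numpy as np

def icp_step(source, target):
    """One ICP iteration: match each source point to its nearest target point,
    then compute the best rigid transform (R, t) by SVD."""
    # Nearest-neighbour correspondences (brute force, fine for small clouds).
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[np.argmin(d, axis=1)]
    # Optimal rotation/translation between centred point sets (Kabsch method).
    mu_s, mu_m = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return source @ R.T + t, R, t
```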

Furthermore, the ICP algorithm prevents the loss of data when a partial obstruction of the dock occurs. A partial obstruction of the dock results in the presence of outliers closer to the lidar (Figure 3.7a). The ICP algorithm rejects the outliers and keeps an accurate pose estimation of the dock (Figure 3.7b).

Even if this method mostly relies on an ICP execution, it cannot be used as is: some pre-processing and extra filters are added to make it work conveniently. However, these will not be explicitly listed in this thesis.

(a) Lidar profile with a partial obstruction of the dock (b) Pose estimation using the ICP method

Figure 3.7: Pose estimation with a partial obstruction of the dock

The pose estimation of the dock is thus highly accurate, not affected by partial obstructions, and has a wide range of detection. The main remaining drawback is its price compared to the other sensors.


Chapter 4 Method

This chapter describes the method implemented for the docking sequence. It concerns both the hardware technology choices and the software algorithm implementations. As the results are provided by a set of simulations, a section of this chapter is dedicated to the simulation of the sensor. One section concerns the algorithms used for the localization of the mobile robot and the final one focuses on the control applied to it.

4.1 The sensor

The chosen sensor for the docking sequence is the ArUco vision based method. This section justifies this choice and characterises the sensor's accuracy.

4.1.1 The choice of the sensor

The choice of the sensor is influenced by both the feasibility of the docking sequence and the constraints of the overall project. Concerning the feasibility itself, most of the technologies presented in chapter 3 have a great potential to handle this task. Thus the decision has mainly been driven by the project's purpose.

As a reminder, the project's docking sequence is meant to plug the GMD to some extra modules that are initially turned off. The GMD is unable to communicate with those modules in any visual, audio or wireless form. Therefore, the only remaining sensors are the lidar and the vision based sensors.

Another major point of the project is the ability to choose the right module out of all the modules present in the area. Since the lidar merely gives distance measures, it cannot be used for the module identification part. Therefore, the sensor retained for the docking sequence is the ArUco vision based method.

Section 3.3 underlines that the accuracy of this method is directly linked to the hardware. The whole following sensor study is based on this hardware:

• Raspberry Pi Zero W v1.1, 1GHz, 512MB RAM

• Raspberry Camera v2.1, 8MP 1080p


4.1.2 Optimisation of the sensor

Appendix 2 gives curves showing the influence of the different parameters on the pose estimation. The conclusion of this preliminary study is that only the marker's relative size in the camera frame influences the pose estimation accuracy: the bigger it is, the more accurate the estimation.

On the other hand, the ArUco detection algorithm needs the marker to be fully captured in the frame to be able to output a measure. Therefore, it is relevant to choose the marker as large as possible but within the expected detection range. Table 4.1 gives the detection ranges for different marker sizes.

Marker size (cm x cm) | Minimum distance detected (m) | Maximum distance detected (m)
2x2 | 0.04 | 1.2
3x3 | 0.06 | 1.8
4x4 | 0.08 | 2.6
5x5 | 0.10 | 3.5
6x6 | 0.12 | 4.2
7x7 | 0.14 | 5.5
8x8 | 0.16 | > 6
9x9 | 0.18 | > 6
10x10 | 0.20 | > 6

Table 4.1: Ranges of detection for different marker sizes extracted from the 4x4 bit library

Since the same marker is used for far away module identification and close module pose estimation, there is no single marker size answering both conditions:

• A far away identification up to 5 meters

• A close pose estimation down to 10 centimeters

A solution is to couple several marker sizes and merge them, as explained in [16], [17] and [18]. Figure 4.1 shows a fractal marker (extracted from [16]) composed of several markers nested in one another. The outside marker is used for far away detection since its size makes it detectable further than the others. On the other hand, when the distance between the marker and the camera gets too small, this outside marker does not fit within the camera frame anymore, and the pose estimation of the fiducial marker cannot rely on it. The close pose estimation thus relies on a smaller marker located in the center of the outside marker, which is still within the camera frame. This fractal marker thus allows a wider range of detection than the markers seen in Figure 3.3.

Figure 4.1: A wide detection range fiducial marker (figure extracted from [19])

Furthermore, the accuracy of the marker pose estimation is essential for the docking sequence: a higher accuracy should obviously lead to a more accurate docking position. Article [19] explains a method to increase the marker accuracy. In order to estimate the position of a marker within 3D space, 4 points are needed. In the actual ArUco markers, these four points correspond to the square corners. In order to increase the accuracy of this pose estimation, [19] suggests multiplying the number of points used to estimate the 3D pose. To increase this number of points, two options are available:

• Increase the number of pose estimation points within the marker, as shown in Figure 4.1

• Increase the number of markers used for the pose estimation. The ChArUco boards presented in [20] have the particularity of being composed of several ArUco markers.

All in all, the final marker board used for the docking sequence is shown in Figure 4.2. The top marker is used for both module identification and far pose estimation. The bottom ChArUco board (a set of three markers) is then used for the close pose estimation. This set of three markers does not change from one module to another, since the identification has already been performed with the bigger marker.

Figure 4.2: Final fiducial marker board used for the GMD. The top marker has an edge size of 8 cm and the bottom markers' edge size is 2.5 cm

The algorithm is then coded such that it switches from one marker to the other whenever the remaining distance crosses a defined threshold: 50 cm for this specific marker board.

Another crucial point of a sensor is the frequency of its measures. Vision based algorithms are usually slow because of their complexity, so using them as sensors for control algorithms can lead to instability of the whole system. The corner extraction presented in section 3.3 is the slowest part of the ArUco algorithm. Analysing the whole picture requires significant resources and is largely wasted effort. The GMD is a UGV deployed in a flat ground environment, so its displacements lie within the x and y dimensions, in other words in a horizontal plane. Since both the camera and the marker are integrated at the same height, the marker's displacements are also located in the horizontal plane of the camera frame. Hence the ArUco algorithm does not need to analyse the whole camera frame but simply a horizontal band of it, centered around the previous location of the fiducial marker. Taking margins around the previously found marker ensures that the marker is still located within the same band on the next iteration, as Figure 4.3 shows.

Figure 4.3: The band analysis of the camera frame for fast marker pose estimation

The initial time delay between two consecutive measures was about half a second up to one second. Thanks to this method, it was possible to greatly shorten this delay, as shown in Figure 4.4.
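A sketch of this band optimisation is given below: only a horizontal strip around the last known marker position is passed to the detector, and the detected corners are shifted back to full-frame coordinates. The margin value and function names are assumptions.

```python
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_250)

def detect_in_band(gray, last_y_center, band_half_height=80):
    """Run ArUco detection on a horizontal band around the previous marker row."""
    top = max(0, int(last_y_center) - band_half_height)
    bottom = min(gray.shape[0], int(last_y_center) + band_half_height)
    band = gray[top:bottom, :]  # analyse only this strip, not the full frame
    corners, ids, _ = cv2.aruco.detectMarkers(band, dictionary)
    # Shift corner coordinates back into the full-frame reference.
    corners = [c + np.array([0.0, top]) for c in corners]
    return corners, ids
```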

4.1.3 The sensor accuracy and frequency

The sensor testing tool presented in Appendix 1 made it possible to run several test measurements on the defined marker. As outlined in Appendix 2, the accuracy of the measure depends exclusively on the distance between the marker and the camera. Therefore only the influence of the depth is investigated in this section.

The frequency

As stated in section 4.1.2, the further the marker, the higher the frequency. Figure 4.4 presents the period between two measures as a function of the distance along the x axis. The steps appearing on the graph are due to intrinsic parameters of the ArUco algorithm. The two isolated points around 500 and 600 mm correspond to the switch from the identification marker to the close pose estimation board. Furthermore, the distance between the marker and the camera never gets lower than 100 mm due to the mechanical design of the GMD. Thus the frequency of the sensor should always stay above 5 Hz.


Figure 4.4: Evolution of the time delay between two consecutive measures as a function of the depth x between the fiducial marker and the camera (two curves: robot driving toward the module to dock, robot driving away from it)

The measure accuracy

The ArUco pose estimation is able to output the full pose of the marker (position and orientation). Since the robot is a ground vehicle, only three of these variables matter: the distances along the x and y axes and the angle θ around the z axis. In order to have the complete datasheet of the sensor, both the accuracy and the precision of each measure are needed. Figures 4.5, 4.6 and 4.7 present the accuracy and precision of respectively the x, y and θ measures.


Figure 4.5: Accuracy (a) and repeatability (b) of the depth x measure as a function of the depth x between the camera and the marker (fitted curves: accuracy $0.9225 + 0.0118\,x + 9 \times 10^{-6}\,x^2$, repeatability $0.0528 + 8 \times 10^{-5}\,x + 5 \times 10^{-7}\,x^2$)

Figure 4.6: Accuracy (a) and repeatability (b) of the lateral position y measure as a function of the depth x between the camera and the marker (fitted curves: accuracy $-0.8186 + 0.0049\,x$, repeatability $-0.0481 + 0.0021\,x$)


Figure 4.7: Accuracy (a) and repeatability (b) of the angle θ measure as a function of the depth x between the camera and the marker (fitted curves: accuracy $0.25 + 8.667 \times 10^{-5}\,x$, repeatability $0.18 + 6.8 \times 10^{-5}\,x + 1.23 \times 10^{-4}\,x^2$)

The range of detection

The range of detection is influenced by two factors: the size of the marker and the camera's field of view. The marker's size determines how far and how close it can be detected. On the other hand, the camera's field of view limits the detection on the sides: if the marker gets too far from the optical axis, it ends up out of the field of view and no measure is possible. Both of these detection ranges are given in Table 4.2.

Parameter | Minimum | Maximum
distance to the marker $\sqrt{x^2 + y^2}$ | 0.04 m | > 6 m
angle of view of the camera | −27 deg | +27 deg

Table 4.2: Ranges of detection for the newly defined marker

Conclusion for the sensor

The full sensor parameters are summarised in Table 4.3.

Parameter | Accuracy | Repeatability
$\theta_{GMD}$ | $0.25 + 8.667 \times 10^{-5}\,x$ | $0.18 + 6.8 \times 10^{-5}\,x + 1.23 \times 10^{-4}\,x^2$
$x_{GMD}$ | $0.9225 + 0.0118\,x + 9 \times 10^{-6}\,x^2$ | $0.0528 + 8 \times 10^{-5}\,x + 5 \times 10^{-7}\,x^2$
$y_{GMD}$ | $-0.8186 + 0.0049\,x$ | $-0.0481 + 0.0021\,x$

Table 4.3: Curve approximation of the standard deviation of each parameter of the ArUco sensor pose estimation
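Table 4.3 is effectively the sensor model used by the simulations; a sketch of how a simulated ArUco measurement could be drawn from it is shown below, where the polynomial coefficients come from the table and everything else (function name, use of the accuracy column as a standard deviation) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulated_aruco_measure(x, y, theta):
    """Noisy ArUco pose measure using the standard-deviation curves of Table 4.3.

    x is the depth in mm; the polynomials give one std deviation per variable."""
    std_x = 0.9225 + 0.0118 * x + 9e-6 * x**2
    std_y = -0.8186 + 0.0049 * x
    std_theta = 0.25 + 8.667e-5 * x
    # Clamp to zero: some fitted curves go negative at very small depths.
    return (x + rng.normal(0.0, max(std_x, 0.0)),
            y + rng.normal(0.0, max(std_y, 0.0)),
            theta + rng.normal(0.0, max(std_theta, 0.0)))
```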

4.2 The pose estimation algorithm

When the two wheel robot initializes its docking sequence, two data sources are available to estimate its pose. These sources are based on the sensors listed below:

• The ArUco based sensor previously described in section 4.1.

• Wheel odometry: each wheel is equipped with a 1200 ticks-per-revolution incremental encoder coupled with a 20:1 gearbox, which corresponds to $ticRev = 1200 \times 20 = 24000$ ticks per wheel revolution. Since the robot has not been built so far, it was not possible to measure the drift of the odometry. As a consequence, a typical value for the drift has been taken for the simulations (see [21]): the drift, for both the angular and linear components, is 5% of the displacement of the two wheel robot.

Having two sources of data for the pose estimation leads to the use of sensor fusion algorithms. These methods provide an enhanced value of the robot pose based on the several available data sources. The algorithms used for this sensor fusion are of two types: the Extended Kalman Filter and the Particle Filter.

As explained in sections 2.6.1 and 2.6.2, these filters are iterative algorithms: a first prediction step based on odometry is applied, and then a correction step rectifies the pose error based on an absolute pose measure. In our case, the prediction step is given by the wheel odometry while the correction is provided by the ArUco based measures.

In [22], [23] and [24] are provided results for both the EKF and the PF. Both algorithms clearly increase the accuracy and reliability of the robot pose estimate in its environment. Figure 4.8 compares the robot trajectory based on the raw measures given by the sensor with the one given by an EKF enhancement. It appears clearly that the EKF based robot trajectory is much closer to the desired model.


Figure 4.8: Enhancement of a robot circular trajectory estimation based on an EKF algorithm (picture extracted from [25])

Since both the PF and the EKF have a great potential for this specific application, both of them are tested to determine which one provides the best results for the docking sequence. This section details the matrices of both algorithms based on the sensors used in this project.

4.2.1 Equation of the odometry

The wheel based odometry is estimated from the encoder values given at each step. Based on equations 2.1 and 2.5, it is possible to deduce the set of equations 4.1, which corresponds to the linear and angular displacements that occurred during the sample time dt.

$$\begin{aligned} \Delta l_{GMD}(k) &= v_{GMD}(k) \cdot dt = \frac{2\pi}{ticRev} \cdot \frac{r \, (e_l + e_r)}{2} \\ \Delta \theta_{GMD}(k) &= \omega_{GMD}(k) \cdot dt = \frac{2\pi}{ticRev} \cdot \frac{r \, (e_l - e_r)}{L} \end{aligned} \tag{4.1}$$

Where:

• $\Delta l_{GMD}(k)$ corresponds to the linear displacement of the robot at step k

• $\Delta \theta_{GMD}(k)$ corresponds to the angular displacement of the robot at step k

• $e_l$ and $e_r$ respectively correspond to the numbers of encoder ticks on the left and right motors that occurred between steps k − 1 and k
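As a sketch, equation 4.1 translates into a few lines of Python; the tick count mirrors the encoder and gearbox values stated above, while the function name is an assumption.

```python
import math

TIC_REV = 1200 * 20  # encoder ticks per wheel revolution (1200 cpr, 20:1 gearbox)

def odometry_increment(e_l, e_r, r, L):
    """Encoder tick counts over one sample -> (delta_l, delta_theta), eq. 4.1."""
    scale = 2.0 * math.pi / TIC_REV
    delta_l = scale * r * (e_l + e_r) / 2.0
    delta_theta = scale * r * (e_l - e_r) / L
    return delta_l, delta_theta
```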


4.2.2 Computation of the matrices for the EKF algorithm

As explained in section 2.6.1, the Extended Kalman Filter relies on four matrices which correspond to the linearization of the overall robot dynamics and error estimation. Since the robot evolves in a 2D space, its overall state can be described with the three parameters $\theta_{GMD}$, $x_{GMD}$ and $y_{GMD}$. The full state of the system is given in equation 4.2.

$$X_{GMD} = \begin{bmatrix} x_{GMD} \\ y_{GMD} \\ \theta_{GMD} \end{bmatrix} \tag{4.2}$$

Furthermore, based on equations 2.5 and 4.1, the EKF process equations can be deduced as in equation 4.3. The ArUco based sensor output corresponds directly to the state of the system, hence equation 4.4.

$$\begin{bmatrix} x_{GMD}(k+1) \\ y_{GMD}(k+1) \\ \theta_{GMD}(k+1) \end{bmatrix} = \begin{bmatrix} x_{GMD}(k) + (\Delta l_{GMD}(k) + \vartheta_D) \cdot \cos(\theta_{GMD}(k)) \\ y_{GMD}(k) + (\Delta l_{GMD}(k) + \vartheta_D) \cdot \sin(\theta_{GMD}(k)) \\ \theta_{GMD}(k) + \Delta\theta_{GMD}(k) + \vartheta_\theta \end{bmatrix} \tag{4.3}$$

$$Z_{GMD}(k) = \begin{bmatrix} x_Z(k) \\ y_Z(k) \\ \theta_Z(k) \end{bmatrix} = \begin{bmatrix} x_{GMD}(k) + \vartheta_{xZ}(x_{GMD}(k)) \\ y_{GMD}(k) + \vartheta_{yZ}(x_{GMD}(k)) \\ \theta_{GMD}(k) + \vartheta_{\theta Z}(x_{GMD}(k)) \end{bmatrix} \tag{4.4}$$

Where:

• $Z_{GMD}(k)$ corresponds to the measure of the state of the robot at step k given by the ArUco based sensor

• $x_Z(k)$, $y_Z(k)$ and $\theta_Z(k)$ are the corresponding values of the ArUco based measures of the three states

• $\vartheta_{xZ}$, $\vartheta_{yZ}$ and $\vartheta_{\theta Z}$ are the errors committed over each of the states by the ArUco based sensor. These errors vary with the distance to the marker, as formulated in Table 4.3

Transforming the sets of equations 4.3 and 4.4 into the form of the EKF equations 2.8 gives the set of matrices in equation 4.5.

$$A(k) = \begin{bmatrix} 1 & 0 & -\Delta l_{GMD}(k) \sin\theta_{GMD}(k) \\ 0 & 1 & \Delta l_{GMD}(k) \cos\theta_{GMD}(k) \\ 0 & 0 & 1 \end{bmatrix} \qquad W(k) = \begin{bmatrix} \cos\theta_{GMD}(k) & 0 \\ \sin\theta_{GMD}(k) & 0 \\ 0 & 1 \end{bmatrix}$$

$$H(k) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad V(k) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

$$Q(k) = \begin{bmatrix} \vartheta_D & 0 \\ 0 & \vartheta_\theta \end{bmatrix} \qquad R(k) = \begin{bmatrix} \vartheta_{xZ}(x_{GMD}(k)) & 0 & 0 \\ 0 & \vartheta_{yZ}(x_{GMD}(k)) & 0 \\ 0 & 0 & \vartheta_{\theta Z}(x_{GMD}(k)) \end{bmatrix} \tag{4.5}$$
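A sketch of these matrices in Python, usable with the generic EKF step shown in section 2.6.1, is given below; the function name is an assumption and the noise terms are passed in directly as the diagonal entries of equation 4.5.

```python
import numpy as np

def gmd_ekf_matrices(theta, delta_l, var_d, var_theta, var_xz, var_yz, var_tz):
    """Build A, W, H, V, Q, R of equation 4.5 at the current estimate."""
    A = np.array([[1.0, 0.0, -delta_l * np.sin(theta)],
                  [0.0, 1.0,  delta_l * np.cos(theta)],
                  [0.0, 0.0,  1.0]])
    W = np.array([[np.cos(theta), 0.0],
                  [np.sin(theta), 0.0],
                  [0.0,           1.0]])
    H = np.eye(3)                    # the ArUco sensor measures the state directly
    V = np.eye(3)
    Q = np.diag([var_d, var_theta])  # odometry noise (distance, angle)
    R = np.diag([var_xz, var_yz, var_tz])  # ArUco noise, depth dependent (Table 4.3)
    return A, W, H, V, Q, R
```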

References
