UPTEC E 20013

Degree project, 30 credits (Examensarbete 30 hp)

January 2020

Positional calibration methods for linear pipetting robot


Abstract

Positional calibration methods for linear pipetting robot

Oscar Uudelepp

This thesis aims to investigate and develop two positional calibration methods that can be applied to a linear pipetting robot. The goal of the calibration is to detect displacements of objects located in the robot's reference system and to estimate their new positions. One of the methods utilizes the pressure system that is mounted on the robot's arm. The pressure system is able to detect surfaces by blowing air through a pipette against a desired surface. Positional information about targeted objects is acquired by using this surface detection feature on an extruded square landmark, which acts as a reference for estimating displacements.

The other method uses a barcode scanning camera, using its images to detect and retrieve positional information from Aruco markers. The position of the targeted object is estimated by tracking the movement of the Aruco marker's position and orientation.

Tests were made in order to analyse the performance of both methods and to verify that the requirement of 0.1 mm accuracy and precision could be obtained.

The tests were limited to analysing the methods' performance on stationary targets, in order to verify that the methods did not detect incorrect displacements.

It was found that the camera method could fulfill the requirement for estimating XY-coordinates by using multiple images and placing the Aruco marker within a reasonable distance of the targeted object. However, the camera method was not accurate when it came to estimating the Z-coordinates of objects. The pressure method, on the other hand, was able to fulfill the requirement when estimating Z-coordinates, but its ability to estimate the XY-coordinates of an object was not sufficient. A recommendation is to combine both methods so that they complement each other, using the camera method for estimating the XY-coordinates and the pressure method for estimating the Z-coordinates.

Populärvetenskaplig sammanfattning (Popular science summary)

Prevas AB is a consulting company that operates in several branches of industry. Their office in Sundbyberg works mainly within the life science industry, which includes biotechnology, medical devices and pharmaceuticals. One of their assignments is to design a system that transfers liquid from test tubes to containers holding reactive substances used to analyse the liquid. An essential part of the system is a linear pipetting robot that is responsible for the actual transfer of the liquid. The pipetting robot collects the liquid by using a pressure system that can draw liquid into a pipette and then dispense it into the container. The robot can move quickly and precisely with high repeatability. Over time, objects that the robot moves to can shift from their previously known positions, which means that the positions of these objects must be found again for the robot to move correctly. The known positions of the objects therefore need to be calibrated when required.

The purpose of this thesis is to investigate and develop two positional calibration methods that can be applied to the linear pipetting robot. The goal of the calibration is to detect displacements of objects located in the robot's reference system and then try to estimate their new positions. One of the methods uses a pressure system mounted on the robot's arm. The pressure system can detect surfaces by blowing air through a pipette against desired surfaces. With the surface-detecting capability of the pressure system, positional information about measured objects can be found. The other method uses a barcode-scanning camera, using its images to detect and retrieve information about the positions of desired objects. The estimation of the objects is done by tracking the movement of markers that have previously been placed next to the objects.

Contents

1 Introduction
1.1 Background
1.2 Work Description
1.3 Limitations
2 Background
2.1 Linear robots
2.2 Communication System and Hardware
2.2.1 Hardware
2.2.2 Communication system
2.2.3 Rotary Encoders
2.2.4 Home sensors
2.3 Pressure System
2.3.1 Surface Detection
2.4 Homogeneous Coordinates
2.5 The Pinhole Camera model
2.6 Fiducial markers
3 Method
3.1 Linear robot coordinate system
3.2 Camera Method
3.2.1 Camera
3.2.2 Camera coordinate system
3.2.3 Calibration
3.2.4 Aruco markers
3.2.5 Camera coordinates in respect to tip position
3.2.6 Initial position
3.2.7 New position
3.3 Pressure Method
3.3.1 Surface Detection
3.3.2 Edge detection
3.3.3 Edge detection algorithm
3.4 Position estimation of target
3.5 Marker estimation
4 Results
4.1 Estimation of Accuracy
4.2 Camera method
4.2.1 Marker position
4.2.2 Marker orientation
4.2.3 Estimation of target position
4.2.4 Estimation with multiple images
4.2.5 Z-coordinate estimation
4.3 Pressure method
4.3.1 Landmark position estimation
4.3.2 Target Estimation
4.3.3 Z estimation
4.4 Distance between markers and target
5 Discussion
6 Conclusion

1. Introduction

1.1 Background

Automation of industrial processes has been on the rise ever since the industrial revolution, and with the further advancement of technology during the last decades, automation has become a dominating factor in industry today. With automation, one seeks to perform a task with minimal human assistance, and one popular form of automation is the use of industrial robots. The earliest known industrial robot dates back to 1937, when Bill Griffith P. Taylor published a magazine article about a crane-like device that he had built almost entirely out of Meccano parts and powered by an electric motor. The robot could perform movement on five axes and its task was to pick and place blocks [6]. The first patent on industrial robots was filed in 1956 by George Devol and Joseph F. Engelberger, who together founded the company Unimation. The popularity of industrial robots began to increase rapidly when both Japan and Europe started to produce commercially available industrial robots.

Prevas AB is a consulting company that provides solutions for many sectors of industry. Their office in Sundbyberg focuses mainly on the life science sector. One of their major projects is to design a life science machine that will automate the process of analyzing liquid samples with high speed and reliability. An essential part of the machine is a linear industrial robot, whose purpose is to automate the analysis process by dispensing liquid samples into test capsules that contain reactive substances. In order to perform this task reliably, high demands on accuracy and precision must be met.

While the robot moves between objects and performs its task, the combined weight of the robot's moving parts and its high speed can cause stress on the supporting frame and its surroundings. Over time, this stress could affect the geometrical structure of the entire system. This could lead to a displacement of vital objects and their known positions relative to the robot's reference system, which in turn could cause major and unwanted malfunctions during operation. Other factors such as temperature and humidity could also affect the position of objects in the operating area. One solution to this problem is to identify potential displacements of objects and calibrate their known positions on a periodic basis.

1.2 Work Description

To meet the demands, the methods should achieve an accuracy and precision of 0.1 mm. As the linear robot used in the life science machine was unavailable during this project, an identical robot was used without the machine. Thus, a simplified support frame for the linear robot had to be built to simulate the robot's operating environment inside the machine.

1.3 Limitations

2. Background

This chapter will cover the background behind the hardware and software in order to give insight into how they are utilized by the calibration methods. The hardware that was used was all part of the linear robot, which includes servo motors, encoders, one barcode-reading camera, a pressure system and the PCB boards that control the hardware and the communication of the system. How the hardware of the Prevas robot interacts was learnt by studying the robot's firmware design documentation, and this knowledge built the foundation of the calibration methods by showing how to control the hardware as desired.

2.1 Linear robots

A linear robot, also called a Cartesian coordinate robot, is a type of industrial robot that can move in straight lines along two or three principal axes simultaneously, where the axes are perpendicular to each other. The limited movement of a linear robot allows for a simpler robot control solution and provides high reliability and precision when moving within its dimensional coordinate system. The linear robot used by Prevas has three principal axes, and its movement is controlled by rotating threaded rods with the help of servo motors. The rotary motion of a threaded rod translates into linear movement of the robot arm that is attached to the rod's threads. Figure 2.1 shows the linear robot coordinate system in which it can operate.

2.2 Communication System and Hardware

2.2.1 Hardware

There are a total of four circuit boards in the robot system. Three of them are identical, are mounted on each axis, and are called servo controller boards. Their main purpose is to control the position of the servo motors on each axis by using external optic sensors and encoders. The fourth board is located on the robotic arm of the Z-axis and is called the pressure board. The pressure board controls the pressure system and the camera system by using an external stepper motor, an optic sensor and a pressure sensor. The boards can communicate with each other through a com port in and a com port out on each board. These ports are connected in a so-called daisy chain, a connection made in sequential order where the X-board is the direct connection to the computer and the pressure board is the end of the chain. Figure 2.2 shows an illustration of the daisy chain connection. The communication between these boards uses the RS422 bus protocol.

Figure 2.2: Shows how the hardware is connected in the daisy chain.

2.2.2 Communication system

The commands and data exchanged in the system are formatted as JSON (JavaScript Object Notation), a text-based format that makes the data easily readable to its users. JSON is most commonly used in web interfaces, but it is also applicable in other areas.

2.2.3 Rotary Encoders

A rotary encoder is a device that can be externally mounted on the shaft of a motor. The purpose of the encoder is to convert angular position into analog or digital signals, which can be used to monitor the motion of the motor shaft. There exist two main types of encoders, called absolute and incremental encoders. In this robotic system, an incremental type is used for the servo motors and stepper motors. The incremental encoder continuously monitors and outputs the occurrence and direction of movement of the motor's rotating shaft. The encoder does this by sending two output signals as pulses. These two output signals are quadrature-encoded, which means that the duty cycle of each pulse is 50% and the phase difference between them is 90 degrees when the rotating velocity is constant. The phase difference will be either 90 degrees or -90 degrees depending on whether the rotation is clockwise or counterclockwise. By using external circuitry to incrementally count the occurrences of the pulses, one can obtain the velocity, direction, acceleration and position of the rotating shaft.

The continuous positional feedback from the incremental encoder makes it ideal for use in robotics, as it provides almost real-time position monitoring and precise measurements that can be used in feedback control systems. However, the positional information perceived by the encoder is not an absolute position, as the encoder lacks a reference point that relates to an actual position; information from external sensors is required to obtain a reference point.
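To make the quadrature encoding concrete, the sketch below shows one way two 90-degree phase-shifted channels (A and B) can be decoded into a signed step count in Python. It is an illustration only, assuming a simple polled sampling interface; the robot itself decodes its encoder pulses in firmware and external circuitry.

```python
# Quadrature decoding sketch: each valid transition of the two-bit state (A, B)
# corresponds to one step clockwise (+1) or counterclockwise (-1).
TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def update_count(count, prev_state, a, b):
    """Update the incremental step count from one new sample of channels A and B."""
    state = (a << 1) | b
    count += TRANSITIONS.get((prev_state, state), 0)  # 0: no change or invalid jump
    return count, state
```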

2.2.4 Home sensors

2.3 Pressure System

The purpose of the pressure system is to extract fluid from sample containers and then dispense the samples into other containers containing reactive substances. The process is done by utilizing a step motor that regulates the volume inside a chamber. The chamber is connected to a long metal pipe with an open end at its tip. At the tip, a pipette can be attached and used to extract and dispense the fluid. Figure 2.3 shows an image of the pressure system that is located on the robot arm.

Figure 2.3: Picture of the pressure system. The image has been edited because some parts are confidential.

Depending on the direction of the rotation made by the step motor, a threaded rod will move up or down, and the volume inside the chamber will either increase or decrease.

By changing the volume, the pressure will also change, as it is inversely proportional to the volume. The relationship between the pressure and the volume inside the chamber can be explained using the Ideal Gas Law as a simplified model [14]:

\( P = \dfrac{nRT}{V} \)  (2.1)

2.3.1 Surface Detection

The air pressure method utilizes the pressure system to detect surfaces by reducing the volume inside the chamber. This causes air to be pushed out of the chamber as the pressure constantly increases. While the pressure is changing at this constant rate, the robot arm is moved towards a surface. As the robot arm moves downwards towards the surface, the distance between the tip of the pipette and the surface diminishes. When a certain distance has been reached, the air that is being blown towards the surface is affected as the air molecules are reflected by the surface. The pressure sensor then reads a sudden spike of change compared to the constant change it had been measuring before. A feature built into the system stops the movement of both the step motor in the pressure system and the servo motor moving the robot arm downwards when a spike that exceeds a specifiable threshold occurs. An example of the constant pressure change is shown in figure 2.4, and figure 2.5 shows the spike that occurs when the robot has moved close to a surface.

Figure 2.4: Moving average of pressure changes when no surface is detected.
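To make the detection criterion concrete, the sketch below shows one way such a trigger could be implemented in Python: the newest pressure change is compared against a moving average of recent changes, and a spike beyond a threshold stops the downward motion. The window size, threshold value and the sensor and motion callbacks are assumptions, not the robot's actual firmware behaviour.

```python
from collections import deque

def detect_surface(read_pressure, step_down, window=20, threshold=5.0):
    """Move the arm down in small steps while air is blown out of the pipette.

    Stops when the latest pressure change deviates from the moving average of
    recent changes by more than `threshold`. `read_pressure` and `step_down`
    are hypothetical callbacks for the pressure sensor and the Z-axis motion.
    """
    changes = deque(maxlen=window)
    previous = read_pressure()
    while True:
        step_down()                      # one small Z increment towards the surface
        current = read_pressure()
        change = current - previous
        previous = current
        if len(changes) == changes.maxlen:
            baseline = sum(changes) / len(changes)
            if abs(change - baseline) > threshold:
                return True              # spike detected: surface found
        changes.append(change)
```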

2.4 Homogeneous Coordinates

As opposed to Cartesian coordinates, which are used to describe Euclidean geometry, homogeneous coordinates are used for projective geometry, which is frequently used in computer vision [4]. The reason they are often used over Cartesian coordinates is that homogeneous coordinates allow for fewer matrix operations when it comes to operations such as translation, rotation, perspective projection and scaling of matrices. A point in two-dimensional Cartesian coordinates is represented by (x, y)^T when defined as a matrix, whereas in homogeneous coordinates it is represented with an additional dimension w, such as (x, y, w)^T. Equation 2.2 shows how a point in homogeneous coordinates can be represented in Cartesian coordinates.

\[ \begin{pmatrix} x \\ y \\ w \end{pmatrix} \Leftrightarrow \begin{pmatrix} x/w \\ y/w \end{pmatrix} \]  (2.2)

By using the relation described in equation (2.2), a point in Cartesian coordinates can be described in infinitely many ways in homogeneous coordinates. For example, the Cartesian coordinate (1, 2) can be defined in homogeneous coordinates as (1, 2, 1)^T, (2, 4, 2)^T, (4, 8, 4)^T and so forth. Another feature of homogeneous coordinates is that they can define a Cartesian coordinate located at infinity with finite coordinates, such as (1, 1, 0)^T. Another illustration of how useful homogeneous coordinates can be is to compare the matrix operations needed by both coordinate systems for a translation and rotation from a point (x, y) to (x', y'). With Cartesian coordinates this can be shown as:

\[ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix} \]  (2.3)

As for homogeneous coordinates, the following single operation yields the same result:

\[ \begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} \cos\varphi & -\sin\varphi & t_x \\ \sin\varphi & \cos\varphi & t_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \]  (2.4)

Just as when dealing with three dimensions in Cartesian coordinates, simply adding another dimension to both coordinate systems gives the corresponding relationship.
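As a small worked example of equation (2.4), the sketch below applies a rotation and a translation to the Cartesian point (1, 2) with a single homogeneous matrix multiplication; the angle and translation values are arbitrary.

```python
import numpy as np

phi, tx, ty = np.deg2rad(30), 2.0, -1.0
H = np.array([[np.cos(phi), -np.sin(phi), tx],
              [np.sin(phi),  np.cos(phi), ty],
              [0.0,          0.0,         1.0]])

p = np.array([1.0, 2.0, 1.0])     # the point (1, 2) in homogeneous form
p_new = H @ p                     # rotation and translation in one product
x, y = p_new[:2] / p_new[2]       # back to Cartesian coordinates
```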

2.5 The Pinhole Camera model

The pinhole camera model is a simple model that is commonly used in computer vision. The model treats the camera as a single focal point instead of a lens. Light that has been reflected off an object in the real world passes through the point and projects an inverted image on the opposite side of the camera, as seen in figure 2.6.

Figure 2.6: Projection of a 3-D object onto the image plane based on the pinhole model. [2]

Based on the pinhole model, a 3D point in real-world coordinates can be projected into the 2D virtual image coordinates of a picture taken by a camera. This can be explained by first defining a point at an arbitrary location P = (Xw, Yw, Zw) in a real-world coordinate system whose origin is fixed. Let another arbitrary point tw = (tx, ty, tz) in the same coordinate system represent a camera that will take the 2D image. Let us also say that the camera has its own reference system whose origin is at the location tw in real-world coordinates. The origin of the camera's coordinate system is then translated with respect to the world coordinates. This translation does not take into account what direction the camera is facing. If the camera is facing an arbitrary direction in the real-world coordinates, then the camera's coordinate system will also have a rotation R with respect to the real-world coordinate system. Figure 2.7 illustrates how a point P can be seen from the two coordinate systems.

However, the point P would have a different projection in the camera coordinate system, as its origin is not the same as that of the real-world coordinate system; instead it would have the coordinates (Xc, Yc, Zc). By using homogeneous coordinates, the relationship between (Xw, Yw, Zw) and (Xc, Yc, Zc) is shown by the following equation:

\[ \begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix} = [R \mid t] \begin{pmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{pmatrix} \]  (2.6)

where R is the rotation matrix and t is the translation matrix. Together they are referred to as the extrinsic matrix. With the point P given in camera coordinates, this point can then be projected onto the virtual 2D image plane of the image taken by the camera.

The image plane is located at Zc = f in camera coordinates, where f is the camera's focal length, and the line that goes from the center of the camera to the image plane is called the principal axis. The point where the principal axis meets the image plane is called the principal point [1] [4]. Figure 2.9 shows how the image plane is located relative to the other coordinate systems.

Figure 2.8: Depicts the projection of P in 2D-coordinates. f is the focal length, C is the camera centre, p is the principal point [4]

Figure 2.8 illustrates how the point P is projected onto the image plane and by using similar triangles, the point P can be expressed in image coordinates as the following:

\( x = f\,\dfrac{X_c}{Z_c}, \qquad y = f\,\dfrac{Y_c}{Z_c} \)  (2.7)

where the camera matrix, also called the intrinsic matrix K, is the following:

\[ K = \begin{pmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{pmatrix} \]  (2.9)

Equation 2.7 assumes that the image plane has its origin at the principal point. In reality, however, the origin of the image plane can have an offset from the principal point, which means that the principal point may be located at (c_x, c_y) in the image plane. The pixels may also not be square, which means that there can be two different focal lengths f_x and f_y. Additionally, there may be a small skew γ between the x and y axes of the camera sensor. With this taken into account, the camera matrix can be expressed as:

\[ K = \begin{pmatrix} f_x & \gamma & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \]  (2.10)

\[ \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} f_x & \gamma & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix} \]  (2.11)
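The chain from world coordinates to pixel coordinates in equations (2.6) and (2.11) can be summarised in a few lines of Python. The intrinsic and extrinsic values below are made-up placeholders, not the calibrated parameters of the project's camera.

```python
import numpy as np

def project_point(K, R, t, world_point):
    """Project a 3D world point onto 2D pixel coordinates with the pinhole model."""
    p_cam = R @ world_point + t          # world -> camera coordinates (extrinsics)
    p_img = K @ p_cam                    # camera -> homogeneous image coordinates
    return p_img[:2] / p_img[2]          # divide by Z to obtain pixel coordinates

K = np.array([[800.0,   0.0, 422.0],     # assumed focal lengths and principal point
              [  0.0, 800.0, 320.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                            # camera looking straight down, no rotation
t = np.array([0.0, 0.0, 300.0])          # assumed camera 300 mm above the plane
print(project_point(K, R, t, np.array([10.0, -5.0, 0.0])))
```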

2.6 Fiducial markers

Camera pose estimation is frequently used in robot navigation, augmented reality and computer vision. When an image is taken, the pose estimation of either the camera or an object in the image is based on finding correspondences between 3D points in real-world coordinates and their projections onto the image's 2D coordinates. The 3D points are often found by using objects with known dimensions as landmarks. However, lighting conditions, rotations and partial occlusion of the reference object make it difficult to consistently find the number of 3D points that is needed in order to estimate positions. A popular approach to counter this is to use fiducial markers, as their geometric figures can be detected with high speed, precision and robustness [13] [3]. A list of fiducial markers can be seen in figure 2.10.

Figure 2.10: List of different fiducial markers

3. Method

This chapter will cover the procedure of both calibration methods. Both methods follow a similar approach when it comes to acquiring the position of a desired target. The general outline of the methods is to first locate a reference point of the marker, M0, and a reference position of the target, T0, then find a new marker point M1, compare it to the reference, and finally estimate the new target position T1. The flowchart in figure 3.1 shows the general procedure of both methods.

Figure 3.1: Flowchart describing the general procedure for both methods

In order to implement, test and make an application of the methods, proper control of the motors, camera and pressure system had to be implemented. Several test scripts and the firmware design documentation of the linear robot, provided by Prevas, showed basic examples of how to send commands and read data with the specific hardware in the linear robot system. The test scripts served as a basis for the functions that were used to control the linear robot as desired.

3.1 Linear robot coordinate system

A three-dimensional coordinate system was established in order to have an orientation that is relative to the steps measured by the linear robot's encoders. Each step in a direction measured by an encoder indicates a linear movement along its corresponding axis. The resolution for one full rotation of the servo motors is defined as 512 steps per rotation. When sending a command to the servo motors, a specific number of steps is used as input in order to move to a location along one of the axes in the linear robot system. This means that the unit of each axis in the linear robot coordinate system is defined in steps.

3.2 Camera Method

The following section will cover the procedure behind the Camera method. The camera method uses the OpenCV library in Python.

3.2.1 Camera

The camera that is used for the camera method is a barcode reading camera that is attached to the robotic arm. The purpose of this camera is mainly to scan bar codes on objects and read certain patterns. The camera's default behaviour is to scan bar codes at 60 frames per second and then send information about the scanned bar code back to the linear robot system. Instead of using the camera to read bar codes, it can also be used to take images and send them to the PC/host system as JPEG files. It is also possible to specify the gain and exposure of the camera when it takes an image, and the images sent to the PC are grayscale with a resolution of 844x640 pixels. By default, the camera uses auto-exposure when taking images. However, it was found early on in the project that auto-exposure would cause each image to have a different exposure time, and therefore the pixels that define objects in the image could differ between images. The camera has a setting that allows the user to specify the exposure time, and it was set to an arbitrary value of 254 ms in order to have a constant exposure time.

3.2.2 Camera coordinate system

As seen in figure 3.2, the three-dimensional camera coordinate system is located at the sensor inside the camera, and the image plane is located with an offset in the Z direction in camera coordinates. This offset is defined as the distance of the camera's focal length.

3.2.3 Calibration

Information about the camera's extrinsic and intrinsic parameters needs to be acquired in order to project a three-dimensional point in world coordinates onto the two-dimensional image plane. Both sets of parameters can be estimated with camera calibration [1] [15]. Camera calibration consists mainly of two steps. The first step is to find the extrinsic parameters that give the relation between real-world coordinates and camera coordinates. The second step is to find the intrinsic parameters that are needed to project the 3D point onto the image plane. To find the extrinsic parameters, one popular calibration method in computer vision is to use a calibration pattern. The method is to take several images of a pattern, such as a checkerboard or circular pattern, where the geometric properties of the pattern are known. The pattern used in this project was the checkerboard pattern, because it is simple to estimate its 3D coordinates and to extract the positions of its corners.

(a) Image of the square calibration pattern that is used for the calibration.

(b) Image of the square calibration pattern when corner detection is applied.

Figure 3.3: Shows the calibration pattern that is used to find the intrinsic parameters of the camera.

The checkerboard represents the origin of the world coordinate system, where X and Y go along the flat surface of the board and Z is perpendicular to the surface. This means that all the 3D points that can be found on the checkerboard are defined only in the XY-plane in real-world coordinates, as Z is 0. The corners of the squares act as 3D points in the calibration, and the distance between the corners is equal, with a known length defined in millimeters. This fulfills the first step of the camera calibration. The second step is to find the 2D coordinates of the corners on the checkerboard. Figure 3.3 shows the calibration pattern with and without the corners detected. In order to get a good estimation of the extrinsic parameters, 30 images were taken of the checkerboard, where in each image the position of the camera was fixed and the pose of the checkerboard was altered in all directions. The reason to use different poses of the checkerboard is to provide more inputs to the camera calibration method. The found coordinates are then finally used to estimate the intrinsic parameters [15]. Equation 3.1 shows the intrinsic parameters that were found. This matrix is later used with Aruco markers to estimate the rotation and translation vector that describes the pose of the marker.
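For reference, a typical OpenCV checkerboard calibration follows the steps above roughly as in the sketch below; the image folder, board size and square size are assumptions, not the exact values used in this project.

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)            # assumed number of inner corners per row and column
SQUARE_MM = 5.0           # assumed side length of one square in millimetres

# 3D corner positions on the board (the Z = 0 plane), scaled to millimetres
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calibration_images/*.jpg"):      # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

# K is the intrinsic matrix and dist the distortion coefficients
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None)
```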

3.2.4 Aruco markers

Information about a known object or shape in real-world coordinates is needed in order to estimate the pose of the camera. As explained in the fiducial marker section of the background, fiducial markers are often used in robot navigation to get a good pose estimation. Aruco markers were chosen for this task as they provide robustness, accuracy and precision when estimating the pose. Figure 3.4 shows the Aruco marker that was used for the camera method. The pose of the Aruco marker can be found by using the intrinsic matrix and the distortion matrix acquired from the camera calibration and by knowing the real size of the marker. The Aruco marker detection first decodes the binary identification of the found marker and returns a true statement if the correct marker is identified, rejecting false markers; this acts as a fail-safe in case the wrong marker is found or the image taken was not sufficient. [12] [3]

Figure 3.4: Before the Aruco marker is detected

Figure 3.5: Aruco marker when it is detected.
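A minimal sketch of the detection and pose estimation step with OpenCV's aruco module (the classic API of opencv-contrib releases before 4.7) is shown below; the dictionary, the marker size and the calibration inputs are assumptions rather than the project's actual values.

```python
import cv2

MARKER_SIZE_MM = 30.0                                          # assumed physical marker size
DICTIONARY = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)   # assumed marker family

def estimate_marker_pose(gray, K, dist):
    """Detect one Aruco marker and return its rotation and translation vectors."""
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, DICTIONARY)
    if ids is None:
        return None                                 # fail-safe: no valid marker found
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE_MM, K, dist)
    return rvecs[0].ravel(), tvecs[0].ravel()       # pose of the first detected marker
```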

3.2.5 Camera coordinates in respect to tip position

As the camera is assumed to be perfectly mounted horizontally within the linear robot's coordinate system, the XY-coordinates of the camera's origin in linear robot coordinates can be translated to the position of the pipette's tip. This can be done by measuring the distance in X and Y coordinates from the lens of the camera to the tip of the pipette. However, the distance to the tip of the actual pipette was hard to measure accurately, as it is located far from the camera in the Z-direction. Instead, the robot arm was moved so that the tip of the attached pipette was close to the same surface as the marker, and its X and Y linear robot coordinates were stored. Then an image of the marker was taken and its position was estimated and stored. To get the distance from the tip of the pipette to the center of the camera, the robot was then moved until the tip of the pipette was located directly on the center of the marker. The new X and Y linear robot coordinates were compared to the previous position where the image was taken, and the stored marker position in camera coordinates was subtracted from the new position. The X and Y distance from the tip of the pipette to the origin of the camera coordinates can then be expressed in units of steps and mm.

\[ \begin{pmatrix} X_d \\ Y_d \end{pmatrix} = \begin{pmatrix} -948.577 \\ -12.637 \end{pmatrix}\text{ steps} = \begin{pmatrix} -0.494 \\ -37.054 \end{pmatrix}\text{ mm} \]  (3.2)

3.2.6 Initial position

Two initial points that describe the relation between the position of a targeted object and the position of the marker need to be determined before a potential displacement can be detected. The reference point T0, which indicates the position of the target, is determined by moving the robot arm to a desired location and then storing the linear robot coordinates of that point. The second initial point M0 is the point found at the center of the marker, and it is defined by the following equation.

where Xr and Yr are the X and Y location of the current robot arm position, and tx and ty are the components of the translation vector that describes the position of the marker.

Figure 3.6: Initial estimation that is used as a reference

The angle θ0 in figure 3.6 is the Euclidean angle of the marker, and it can be found by converting the marker's rotation about its Z-axis to an Euler angle [8].

3.2.7 New position

If a displacement of the target has occurred, the position of the marker has also changed, as the marker is assumed to be attached to the same rigid body as the target. Let the new location of the marker be M1 and let the new rotation of the marker be estimated as θ1. The target is then no longer located at T0 but has moved to the location T1 in the linear robot's coordinate system. T1 can be found by using the stored vector ~v at the point M0. However, this only solves the X and Y displacement of the marker itself and does not take the rotation into consideration. This is solved by taking the difference in angular displacement of the markers, rotating the vector v by the angular difference and then adding it to the point M1. The estimation of the new target is illustrated in figure 3.7.

The rotation of the vector is done by using the dot product of the two-dimensional rotation matrix and the vector. The camera method estimates the Z-coordinate of the target by using the difference between the marker's new Z-coordinate and its initial Z-coordinate, where the marker's Z-coordinates are found from the translation vector that is estimated for the Aruco marker. This difference is then added to the stored Z-position of the desired target.

\[ T_1 = M_1 + \begin{pmatrix} \cos(\theta_1 - \theta_0) & -\sin(\theta_1 - \theta_0) \\ \sin(\theta_1 - \theta_0) & \cos(\theta_1 - \theta_0) \end{pmatrix} \vec{v} \]  (3.4)


Figure 3.7: Target estimation by using the camera method.
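A compact Python sketch of the estimation summarised in equation (3.4) could look as follows; the function and argument names are illustrative only.

```python
import numpy as np

def estimate_new_target(M0, T0, theta0, M1, theta1):
    """Estimate the new target position T1 from the marker displacement.

    M0, T0 and M1 are 2D points (in robot steps or mm); theta0 and theta1 are
    the marker angles in radians. Implements T1 = M1 + R(theta1 - theta0) (T0 - M0).
    """
    v = np.asarray(T0, dtype=float) - np.asarray(M0, dtype=float)
    d = theta1 - theta0
    R = np.array([[np.cos(d), -np.sin(d)],
                  [np.sin(d),  np.cos(d)]])
    return np.asarray(M1, dtype=float) + R @ v
```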

3.3 Pressure Method

The following section will cover the procedure behind the Pressure method. The pressure method utilizes the existing pressure system located on the robotic arm. The fundamental part of the pressure method is the surface detection that was described in the Background chapter, and it will be explained further in this section. The surface detection feature was found to also be able to detect edges, and therefore a reference point could be found by using a geometrical landmark.

3.3.1 Surface Detection

The pressure system, as described in the background, is able to detect surfaces by regulating the pressure at a constant speed while simultaneously moving the robot arm towards a flat surface until the pressure sensor detects a large change in the pressure inside the chamber. When testing the feature, it was found that attaching a pipette was more efficient than just using the metal pipe to blow air against a surface. An additional advantage of the pipette was that it was expendable, as it could be attached and removed easily by hand. Because of this, the risk of damaging the robot if the arm was moved too far and collided with a surface was removed, as the pipette would take all the damage.


Figure 3.8: Flowchart of surface detection algorithm

3.3.2 Edge detection

The surface detection is able to provide information about an object's position along the Z-axis of the linear robot coordinate system. However, more information is needed in order to know an object's position in XY-coordinates. The approach was to find coordinate points located on the edges of a geometrical figure by utilizing the surface detection. Similarly to the camera method's Aruco marker, a square landmark is used in order to have a reference that can be related to a desired target. The landmark used for the pressure method was an extruded square figure with sharp edges. The smallest step that the robot can make in XY-coordinates was defined in the linear robot coordinate section, where one step corresponds to A mm. The idea was that an edge can be found if the robot goes from one position where the surface detection says that a surface has been found, to another position A mm away where the surface detection determines that no surface has been found. This means that an edge could be found with a potential accuracy of A mm.

However, the accuracy of the edge detection is also limited by the area of the hole at the tip of the pipette, as part of the hole can be over the surface and part over no surface at the same time. This is another reason why the pipette was used, as the area of its exit hole is very small.

3.3.3 Edge detection algorithm

To find coordinate points on the edge of the square, the robot arm's pipette is first moved to an initial point inside the square. Then the robot is moved in a straight line along either the X-axis or the Y-axis in linear robot coordinates to find one of the sides of the square. The first step size taken is half the side length of the square marker, in order to be sure that the first step has either gone past the edge or is close to it. When a step has been taken, a surface detection is performed to determine if a surface has been found. Depending on the answer from the surface detection, the robot will either keep going in the same direction or go in the reverse direction. The step size is halved every time a new step has been taken. This is repeated until the step size reaches the smallest possible step that can be made by the robot. When the smallest step size has been reached, the edge detection algorithm stops and stores the current position as a point located on the edge of the landmark. The flowchart in figure 3.9 describes the edge detection algorithm.


The algorithm is also illustrated in figure 3.10, which shows an example of how the steps are taken from the initial position P0 to the final position Pi.

Figure 3.10: Shows the order and the size of the steps taken towards the edge.

The edge detection is automated by an algorithm, which performs the detection and then returns the point located on the edge. Two variations of the algorithm exist, where each variation handles edge detection along one axis of the XY-plane, as in the sketch below.
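As an illustration of the halving-step search described above, the sketch below walks along one axis until the smallest step size is reached; the callback that moves the arm and runs the surface detection is a hypothetical stand-in for the robot's real functions.

```python
def find_edge(start, direction, half_side, min_step, surface_found):
    """Locate a coordinate on the marker's edge with a halving step search.

    `start` lies inside the square, `direction` is +1 or -1 along the axis,
    `half_side` is half the marker's side length and `min_step` the smallest
    move the robot can make. `surface_found(pos)` is a hypothetical callback
    that moves the pipette to `pos` and returns True if a surface is detected.
    """
    pos, step = start, half_side
    on_surface = True                      # the initial point is on the marker
    while step >= min_step:
        pos += direction * step if on_surface else -direction * step
        on_surface = surface_found(pos)
        step /= 2.0                        # halve the step after every move
    return pos                             # stored as a point on the edge
```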

3.4 Position estimation of target

The pressure method uses the same estimation technique as the camera method when it comes to estimating the position of the target. First, knowledge about the initial position of the marker and the target needs to be acquired, so that it can be used as a reference to be compared with future estimations. The target point T0 is acquired by moving the robot to the desired position that will represent the target position. As it takes the edge detection a considerable time to find just one data point on the edge of the marker, the marker's corner is used as the marker's reference position M0, as it requires fewer data points to estimate compared to the center of the marker. Figure 3.11 shows an illustration of how the new target is estimated. The angle of the marker θ0 is found by calculating the angle between one of the marker's vertical sides and a vertical vector. When a new position of the target, referred to as T1, is to be estimated, the pressure method first estimates the marker's new position M1 and new angle θ1. The estimation of T1 is then done by using equation 3.4, which was also used for the camera method.


Figure 3.11: Shows how the target is estimated.

3.5 Marker estimation

To estimate the marker's reference position M0, three points are located on one of the vertical sides of the marker and an additional three points on a horizontal side, as seen in figure 3.12a. These points are found by using the edge detection function three times for each axis. The length of the square's sides is used as a reference when picking the initial position for the edge detection algorithm, in order to avoid picking initial positions that are located outside the marker.

Based on the three points for each side, a one-dimensional polynomial curve fitting function is applied in order to estimate each side as a line in linear robot coordinates. With the polyfit function, two functions are generated that describe the two lines. Two vectors are then drawn by plotting two points from each function. The intersection of the two vectors is then found, and the intersection point is assumed to be the corner. The intersection of two non-parallel vectors can be found by using equation 3.5. [5][11]
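A sketch of this corner estimation with numpy's polyfit is shown below. Fitting the vertical side as x(y) and the horizontal side as y(x) is an assumption made here to keep the fits well conditioned; the thesis itself uses equation 3.5 for the intersection.

```python
import numpy as np

def estimate_corner(vertical_pts, horizontal_pts):
    """Estimate the marker corner and angle from three edge points per side."""
    vx, vy = np.asarray(vertical_pts, dtype=float).T
    hx, hy = np.asarray(horizontal_pts, dtype=float).T
    a, b = np.polyfit(vy, vx, 1)          # vertical side fitted as x = a*y + b
    c, d = np.polyfit(hx, hy, 1)          # horizontal side fitted as y = c*x + d
    y = (c * b + d) / (1.0 - a * c)       # intersection of the two fitted lines
    x = a * y + b
    theta = np.arctan(a)                  # angle between the vertical side and the Y-axis
    return x, y, theta
```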


(a) 6 points are found on the sides of the square by utilizing the edge detection function.

(b) The lines represent the two vectors that are used to estimate the marker's corner.

4. Results

This chapter will cover the results of the measurements that were made in order to evaluate the performance of both methods. The tests were conducted on a sample board, which had an extruded square shape with sharp edges that acted as a landmark for the pressure method. An Aruco marker was attached to the surface of the sample board in order to test the camera method. The target for the methods was a point located 128mm away from the Aruco marker and 84mm away from the square landmark. This specific target was picked in order to simulate an environment where the markers would be placed on surfaces with a limited area and the distance between markers and targets would differ by 1 to 20 cm. The performance of the methods was analysed by repeatedly testing them on the markers, where the position of the markers was stationary during the entire testing period. This test was done in order to analyse whether the methods were able to determine that no displacement had occurred between measurements. The plots in the results use the linear robot's X-axis as the Y-axis and the robot's Y-axis as the X-axis, due to the way the methods depict the 2-D positions of the robot's XY-axes. A red circle with a radius of 0.1mm was drawn in the estimated target plots in order to visualize how close the estimated target points were to the actual target.

4.1 Estimation of Accuracy

An accuracy of 0.1mm was a requirement for the methods. This means that the target positions estimated by the methods need to be within a distance of 0.1mm from the actual position of the target. A red circle with a radius of 0.1mm was drawn in all the following plots that show the estimated XY target coordinates.

In order to analyse the dispersion of the points, a center point was determined by calculating the average position of all points. The Euclidean distance, which is the distance between two points in multidimensional space, was used to get the distance from each estimated point to the center point. The Euclidean distance between the center point and an estimated point is calculated with the following equation:

\( d_i = \sqrt{(x_i - c_x)^2 + (y_i - c_y)^2} \)  (4.1)

where (x_i, y_i) are the coordinates of one of the estimated points and (c_x, c_y) is the average point, i.e. the center point. By analysing the Euclidean distances of multiple estimations, an average value and standard deviation could be determined. This was done for the results that display the estimations made in the XY-plane. For one-dimensional results, such as the estimation of angles and Z-positions, the average value and standard deviation were instead calculated directly.
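A small numpy sketch of this dispersion analysis is shown below; the example points are made-up values, not measured data.

```python
import numpy as np

def dispersion_stats(points):
    """Mean and standard deviation of the Euclidean distances to the centre point.

    `points` is an (N, 2) array of estimated XY positions; equation (4.1) is
    evaluated against the average position of all points.
    """
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    distances = np.linalg.norm(pts - center, axis=1)
    return distances.mean(), distances.std()

# Example with made-up estimates, checked against the 0.1 mm requirement:
mu, sigma = dispersion_stats([[0.02, -0.01], [0.05, 0.03], [-0.04, 0.02]])
print(mu + 3 * sigma < 0.1)
```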

4.2 Camera method

This section will cover the results of how accurately the marker and target positions could be estimated. The camera method was tested by taking 200 images from four different positions along the Z-axis, in order to see how the estimation of the marker and target position would be affected by different distances between the camera and the marker. First, the spread of the estimated target points was analysed when only one image was used for the camera method. The four different heights from which the images were taken were 218mm, 296mm, 335mm and 374mm. The reference point M0 and reference angle θ0 described in the camera method were determined by calculating the average position and average angle of the marker based on all 200 images.

4.2.1 Marker position

The estimation of the marker's position was consistent, with a 99.7% certainty that the marker point was within a deviation of 0.0123mm to 0.0207mm, depending on the position of the camera. By using equation 4.1, the Euclidean distance between all 200 points and the average point was calculated and analysed. Table 4.1 shows how well the marker could be estimated.

Table 4.1: Shows the XY-coordinate estimations of the marker's reference point M, where the images were taken from four different distances between camera and marker. µ is the average Euclidean distance. σ, 2σ and 3σ show the deviation of the Euclidean distance with the corresponding certainty of 68%, 95% and 99.7% of all the estimations.

Distance [mm]   µ [mm]   σ        2σ       3σ
218             0.0089   0.0044   0.0088   0.0132
296             0.086    0.0049   0.0098   0.0147
335             0.0076   0.0041   0.0082   0.0123
374             0.0117   0.0069   0.0138   0.0207

4.2.2 Marker orientation

The estimation of the marker's angle also differed between photos, and this could potentially affect the estimation of the target position, as the camera method relies on the angular difference between the average angle and the new angle obtained from each image. Table 4.2 shows the results of the estimation of the marker orientation.


The variation of the angle between photos was randomly spread at all heights. When looking again at the position that yielded the best results regarding the disparity of the target points, the angle perceived in each image appeared to be normally distributed. This can be shown by sorting all the angle values in ascending order, dividing the values into 25 bins and plotting the probability density, as shown in figure 4.1a, which shows the angle distribution for images taken from 335mm. Three of the four camera positions showed a similar normal distribution. The position closest to the marker showed a different result that did not appear to be normally distributed, as seen in figure 4.1b.

(a) Shows distribution of the marker’s estimated angle θ for images taken with a distance of 335mm between camera and marker.

(b) Shows distribution of the marker's estimated angle θ for images taken from the closest position (218mm between camera and marker).

Figure 4.1: Shows how the distribution of angles when taking images from the closest position differs from images taken at a distance of 335mm.

4.2.3 Estimation of target position

(a) Estimation of T1 based on images taken from 218mm away from the marker.
(b) Estimation of T1 based on images taken from 297mm away from the marker.
(c) Estimation of T1 based on images taken from 335mm away from the marker.
(d) Estimation of T1 based on images taken from 374mm away from the marker.


In figure 4.2, the spread in all plots appears to be distributed linearly. The points were also found to be randomly distributed in time, verified by giving each point a number corresponding to the chronological order in which each image was taken. The two factors that could affect the estimation of the target points are the position of the marker and the angle of the marker, on which the camera method bases its estimate of the target position, see figure 3.7.

Table 4.3: Shows the XY-coordinate estimations of the target point T, where the images were taken from four different distances between camera and marker. µ is the average Euclidean distance. σ, 2σ and 3σ show the deviation of the Euclidean distance with the corresponding certainty of 68%, 95% and 99.7% of all the estimations.

Position [mm]   µ [mm]   σ        2σ       3σ
218             0.4220   0.1680   0.3360   0.504
296             0.1804   0.1351   0.2702   0.4053
335             0.1346   0.0925   0.1850   0.2775
374             0.2936   0.2077   0.4154   0.6231

4.2.4 Estimation with multiple images

The results have so far only been analysed when the camera method estimates a position based on one image. This section will cover how the results are affected by using more images for the estimations. The camera method takes the average marker location and the average angle over the total number of images used for the estimation. By increasing the number of images per estimation, the idea was that the new estimation would yield better results, as it would suppress outlying data. The numbers of images per estimation that were analysed were 5, 10, 15 and 20, giving a corresponding total of 40, 20, 13 and 10 estimated target points.

However, as the number of images used for the estimation increases, the data becomes less conclusive, since the number of data points is reduced if the same total number of images is used. Additionally, the estimation uses the same set of images as the reference point, which is the average point of the 200 images. This means that there will be a bias that does not truly reflect the real scenario when the camera method is implemented. In reality, the reference point will be based on an isolated set of images, and the new images used for the estimation will not be part of the reference images. However, as the target points appear to be normally distributed, this section should still illustrate how using more images improves the results.
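A sketch of the per-estimation averaging could look as follows. Averaging the angle as a circular mean is a refinement assumed here to avoid wrap-around issues; a plain arithmetic mean behaves the same for the small angle spreads reported in this chapter.

```python
import numpy as np

def average_marker_pose(positions, angles_rad):
    """Average the marker pose over several images.

    `positions` is an (N, 2) array of marker XY estimates and `angles_rad` the
    corresponding marker angles in radians.
    """
    mean_pos = np.mean(np.asarray(positions, dtype=float), axis=0)
    mean_angle = np.arctan2(np.mean(np.sin(angles_rad)), np.mean(np.cos(angles_rad)))
    return mean_pos, mean_angle
```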


(a) Estimation of T1 based on the average of 5 images.
(b) Estimation of T1 based on the average of 10 images.
(c) Estimation of T1 based on the average of 15 images.
(d) Estimation of T1 based on the average of 20 images.

Figure 4.3: Shows the disparity between the estimations of the target point T1 where four different distances between …


The results show that using more images reduces the spread of the estimations and increases the accuracy of the camera method, which can be seen in table 4.4, where 99.7% of the estimations fulfill the 0.1mm requirement when using 10, 15 or 20 images.

Table 4.4: Shows the XY-coordinate estimations of the target point T, where the estimations are based on different numbers of images taken from a distance of 335mm. µ is the average Euclidean distance. σ, 2σ and 3σ show the deviation of the Euclidean distance with the corresponding certainty of 68%, 95% and 99.7% of all the estimations.

Images   µ [mm]   σ        2σ        3σ
5        0.0553   0.0419   0.08138   0.1257
10       0.0416   0.0302   0.0604    0.0906
15       0.0253   0.0208   0.0416    0.0624
20       0.0269   0.0171   0.0342    0.0513

Figure 4.4 shows the disparity for all four positions when the estimation is based on 20 images, and table 4.5 shows the mean value, standard deviation and variance based on the Euclidean distance between the target and the estimated points.

Table 4.5: Shows the XY-coordinate estimations of the target point T, where the estimations are based on 20 images taken from different distances. µ is the average Euclidean distance. σ, 2σ and 3σ show the deviation of the Euclidean distance with the corresponding certainty of 68%, 95% and 99.7% of all the estimations.


(a) Estimation of T1 based on 20 images taken from 218mm away from the marker.
(b) Estimation of T1 based on 20 images taken from 297mm away from the marker.
(c) Estimation of T1 based on 20 images taken from 335mm away from the marker.
(d) Estimation of T1 based on 20 images taken from 374mm away from the marker.

4.2.5 Z-coordinate estimation

This section shows how consistent the camera method was when it came to estimating the same distance between the camera and the marker, which is the distance along the Z-axis of the linear robot coordinate system. The distance estimate comes directly from the translation vector that is estimated by the Aruco marker functions. Table 4.6 shows how consistent the distance estimation was from different distances when using 20 images per estimation. The result was that the estimations had a large deviation even though multiple images were used.

Table 4.6: Shows the Z-coordinate estimations of the target point T, where the estimations are based on 20 images taken from different distances. µ is the average estimated distance. σ, 2σ and 3σ show the deviation with the corresponding certainty of 68%, 95% and 99.7% of all the estimations.

Position [mm]   µ [mm]     σ        2σ       3σ
218             218.5179   0.2290   0.4580   0.6870
296             298.8981   0.4463   0.8926   1.3389
335             340.5732   0.4120   0.8240   1.2360
374             380.5225   0.7749   1.5498   2.3247

As seen in table 4.6, the deviation of the Z-coordinate estimation becomes smaller as the distance between the marker and the camera becomes shorter. This could be because the number of pixels that define the marker is reduced when the camera is further away from the marker, which could make error factors such as changing lighting conditions and camera sensor noise have a larger impact on the estimation of the marker.

4.3 Pressure method

Figure 4.5: Shows 30 iterations of the pressure method, where in each iteration, 6 data points on the marker are found and the reference point M is estimated.

4.3.1 Landmark position estimation

When looking closer at the estimation of the reference points, the disparity of the reference points appears to be quite large when compared to the 0.1mm precision requirement. The disparity of the points was calculated again by using the Euclidean distance from every point to the average point of all 30 estimations. Table 4.7 shows the mean value, standard deviation and variance based on the Euclidean distance to the average reference point.

Table 4.7: Shows the XY-coordinate estimations of the reference point M, which is the corner of the marker. µ is the average Euclidean distance. σ, 2σ and 3σ show the deviation of the Euclidean distance with the corresponding certainty of 68%, 95% and 99.7% of all the estimations.

           µ             σ        2σ       3σ
Distance   0.2111 [mm]   0.0938   0.1876   0.2814
Angle      4.5020 [°]    0.3978   0.7956   1.1934

4.3.2 Target Estimation

Table 4.8: Shows the XY-coordinate estimations of the target point T. µ is the average Euclidean distance. σ, 2σ and 3σ show the deviation of the Euclidean distance with the corresponding certainty of 68%, 95% and 99.7% of all the estimations.

µ        σ        2σ       3σ
0.5682   0.3010   0.6020   0.9030

Figure 4.6: Shows the estimations of the target point T when doing 30 iterations with the pressure method.

4.3.3 Z estimation

The surface searching function is the first function that the pressure method performs when searching for the six points of the square marker. The function continuously searches for the landmark's surface until the pressure system receives a trigger value indicating that a surface has been found. The location of this surface was stored in all 30 tests. The results for the Z-coordinate estimation are shown in table 4.9.

Table 4.9: Shows the Z-coordinate estimations of the marker when using the pressure method. µ is the average Z-coordinate of the marker's location. σ, 2σ and 3σ show the deviation with the corresponding certainty of 68%, 95% and 99.7% of all the estimations.

µ σ 2σ 3σ

4.4 Distance between markers and target

This section shows how the distance between the target and the markers could affect the estimation of the target point. Figure 4.7 shows an illustration of how desired targets can be located. T1 was the target in the previous measurements; here, new targets were tested that were located closer to and further away from T1.

Figure 4.7: Example of how targets can be placed away from the marker

An analysis was made to see how the spread changed when the desired target was moved closer to the marker and further away. This was done by having a start target T0 located close to the marker and then incrementally increasing the distance to the target by 19.53mm every iteration. The graph in figure 4.8 shows that the average Euclidean distance between the estimations is proportional to the distance between the target and the marker, which means that the marker should be placed as close as possible to the target.


The best position at 335mm was used to analyse the typical maximum distance of 200mm that the marker would be away from the target if it was placed on a sample board. Figure 4.9 shows the disparity of the estimated target points when one image was used.

Table 4.10 shows how the spread is reduced by increasing the number of images used for the estimation.

Table 4.10: Shows the XY-coordinate estimations of the target point T that is located 200mm away from the marker. The estimations are based on different numbers of images taken from a distance of 335mm. µ is the average Euclidean distance. σ, 2σ and 3σ show the deviation of the Euclidean distance with the corresponding certainty of 68%, 95% and 99.7% of all the estimations.


(a) Estimation of T1 based on 5 images taken from 218mm away from the marker.
(b) Estimation of T1 based on 10 images taken from 297mm away from the marker.
(c) Estimation of T1 based on 15 images taken from 335mm away from the marker.
(d) Estimation of T1 based on 20 images taken from 374mm away from the marker.

5. Discussion

This chapter contains a discussion about the results of the two calibration methods and the limitations of the results.

The result was that the camera method was better than the pressure method when it came to the speed and accuracy of estimating the position of the stationary target in XY-coordinates. The camera method could estimate the position of the stationary target with a 99.7% certainty of being within a deviation of 0.2275mm to 0.5040mm, depending on the distance between the marker and the camera. The pressure method could estimate the target position with a 99.7% certainty of being within a deviation of 0.2814mm. What made the camera method even better was that it could base its estimations on more than one image, which would reduce the spread of the estimated target points even further. This is illustrated in figure 4.3, where 5-20 images are used for the estimation. With enough images, the goal of an accuracy of 0.1mm could potentially be reached. However, when using 15 and 20 images per estimation, the number of data points acquired is only 13 and 10. From a statistical point of view, this could be seen as insufficient for determining accuracy and precision. More images need to be taken in order to get results that truly reflect the performance of using more images.

The accuracy of the pressure method could also be increased if the number of data points for its landmark were increased, or if more iterations were done so that an average could be taken. However, it currently takes the pressure method over 11 minutes to locate the 6 data points that are needed to determine the position of its square marker. The camera method, in contrast, only takes around 6 seconds to capture one image and estimate the position of the Aruco marker. This means that the camera could potentially take 110 images before the pressure method has completed one iteration. It is worth pointing out that the code for the pressure method was not written with speed in mind, as the main focus was to determine how accurate it could be and to make the method less prone to errors, since its entire procedure was automated.
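
For reference, one camera-based estimate essentially amounts to detecting the Aruco marker in an image and estimating its pose. The sketch below uses OpenCV's Aruco module; the file names, dictionary choice and marker size are assumptions, and the exact function names vary between OpenCV versions (the calls shown match the pre-4.7 contrib API).

    import cv2
    import numpy as np

    # Assumed inputs: one image of the marker and the intrinsic calibration
    # (camera matrix and distortion coefficients) obtained beforehand.
    image = cv2.imread("marker_view.png", cv2.IMREAD_GRAYSCALE)
    camera_matrix = np.load("camera_matrix.npy")
    dist_coeffs = np.load("dist_coeffs.npy")
    marker_length_mm = 40.0  # hypothetical printed marker side length

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(image, dictionary)

    if ids is not None:
        # One rotation vector and one translation vector per detected marker,
        # both expressed in the camera coordinate frame.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, marker_length_mm, camera_matrix, dist_coeffs)
        print("marker", ids[0][0], "at", tvecs[0].ravel(), "(camera frame)")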

When it comes to estimating the Z-position of the target, the pressure method was considerably more consistent than the camera method. The standard deviation of the Z-position was 0.0118mm, while the best result for the camera method was a standard deviation of 0.2290mm. The pressure method also has the advantage that it can directly store the position of the target or marker in relation to the tip of the robot arm.

It was initially thought that the camera method would improve as the distance between the marker and the camera became smaller, since the objects in the images would be defined by more pixels. However, this only appeared to be true when estimating the Z-position, based on table 4.7. When looking at the results for the target XY-position, the camera position second furthest away from the marker turned out to be the most accurate, while the closest camera position, 218mm, had the worst accuracy. This was most likely due to the variation in the marker’s angle estimation.


It was also examined whether the anomaly started to occur only after several images had been taken, but the target points seem to be spread out randomly among the 200 images, which means that no accidental displacements were made while the images were taken. It is unclear what may have caused this. It could be that the camera calibration was not done in an optimal way, which could have affected the marker estimations. The lighting conditions could also have been momentarily altered in between the images, which would have affected the images since the exposure time is fixed. Alterations in lighting conditions could have been caused by people walking by and blocking the light from reaching the camera. Additional images may have to be taken under constant lighting conditions in order to determine whether the anomaly still occurs at this camera position.

The results in section 4.4 show that the distance between the marker and the target also has an impact on the accuracy of both methods. This is most likely because of the estimation of the angular difference between the initial marker position and the new marker position. The estimated error of the marker angle was found to be the largest factor contributing to errors in the target estimations made by both methods. Based on figure 4.10, where different distances between the target and the marker were analysed, the error in the angular difference causes the spread of the estimated target points to increase proportionally to the distance. This means that the accuracy of the methods would increase if the target were closer to the marker and decrease if the target were further away from the marker.

Table 4.10 shows the result of estimations made of a target located 200mm away from the marker, instead of the 128mm used earlier in the results. The images used were the ones taken 335mm away from the marker, and the results showed that the deviation increased significantly. The deviation 3σ of the estimations based on 15 and 20 images would barely fulfill the requirement of 0.1mm, which means that it is vital to place the marker at a position that yields results with a sufficient safety margin. The angular difference error could be reduced by using more than one marker and finding the angle between the two markers instead of estimating an angle from only one marker.
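
A minimal sketch of the two-marker idea (the function name and values are illustrative assumptions): the in-plane angle is taken from the line between the two marker centres, so its error depends on how well the centres are located and how far apart the markers are, rather than on a single marker's orientation estimate.

    import numpy as np

    def angle_from_two_markers(center_a_xy, center_b_xy):
        # In-plane angle of the line from marker A to marker B, in radians.
        delta = np.asarray(center_b_xy, dtype=float) - np.asarray(center_a_xy, dtype=float)
        return np.arctan2(delta[1], delta[0])

    # The angular difference between the initial setup and a later calibration
    # run is then the difference between the two line angles.
    initial_angle = angle_from_two_markers((0.0, 0.0), (150.0, 0.0))
    new_angle = angle_from_two_markers((0.3, 0.1), (150.2, 0.4))
    print(np.rad2deg(new_angle - initial_angle), "degrees of rotation")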

As the camera method was found to be better at estimating XY-coordinates and the pressure method was better at estimating Z-coordinates, the two methods could be used to complement each other. Together, the requirement of 0.1mm could be fulfilled with a 99.7% certainty when it comes to estimating stationary targets.

The camera method and the pressure method could potentially be used in other areas, such as detecting positional errors in the homing positions, where markers can be placed close to the homing positions of all three axes. These markers can then be compared to their previously stored positions, and a potential drift of the homing positions could be detected. Similarly, multiple markers could be spread out in the linear robot’s coordinate system in order to detect potential displacement of the robot’s mechanical axes or support frame. A simple sketch of such drift detection is given below.
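
The sketch below compares newly measured marker positions to stored reference positions; the marker names, positions and tolerance are illustrative assumptions, not values from this work.

    import numpy as np

    def detect_drift(stored_positions, measured_positions, tolerance_mm=0.1):
        # Report every marker that has moved more than the tolerance from
        # its stored reference position.
        drifted = {}
        for name, stored in stored_positions.items():
            offset = np.asarray(measured_positions[name], dtype=float) - np.asarray(stored, dtype=float)
            if np.linalg.norm(offset) > tolerance_mm:
                drifted[name] = offset
        return drifted

    # Illustrative values only.
    stored = {"home_x": (0.0, 0.0, 0.0), "home_y": (500.0, 0.0, 0.0)}
    measured = {"home_x": (0.02, -0.01, 0.0), "home_y": (500.18, 0.05, 0.0)}
    print(detect_drift(stored, measured))  # only home_y exceeds 0.1 mm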


6. Conclusion

Two positional calibration methods were successfully developed for the linear pipetting robot. One method used a camera and an Aruco marker to estimate displacements, and the other used a pressure system combined with an extruded landmark. The accuracy of the methods was tested on a stationary target in order to evaluate whether the methods could determine if a displacement had occurred. The camera method was found to be superior when it came to estimating displacements in XY-coordinates, and the pressure method was better when it came to estimating displacements in Z-coordinates. In order to do estimations in all three dimensions, a recommendation would be to implement a combination of the two methods, where the camera method estimates the XY-coordinates and the pressure method estimates the Z-coordinates. The goal of fulfilling the requirement of an accuracy of 0.1mm could potentially be achieved if the camera method used 5 or more images when estimating the position of a stationary target. This means that the methods could be viable to use as calibration methods.


