Mälardalen University, Västerås, Sweden

Thesis for the Degree of Master of Science in Engineering - Robotics

30.0 credits

IMPROVING THE ACCURACY OF AN INDUSTRIAL ROBOTIC ARM USING ITERATIVE LEARNING CONTROL WITH DATA FUSION OF MOTOR ANGLES AND IMU SENSORS

Peter Andersson

pan15005@student.mdh.se

Staffan Brickman

sbn14007@student.mdh.se

Examiner: Prof. Ning Xiong

Mälardalen University, Västerås, Sweden

Supervisors: Dr. Anas Fattouh

Mälardalen University, Västerås, Sweden

Company supervisor: Dr. Stig Moberg,

ABB Robotics, Västerås, Sweden


Abstract

The estimated position of an industrial robot's end-effector is crucial for its performance. Contemporary methods for doing the estimation are limited in certain aspects, and alternative methods are in high demand. This work builds on a previously introduced method where an Inertial Measurement Unit (IMU) device is combined with the robot's system via sensor fusion. The IMU must be calibrated before its signal is used in sensor fusion, and this work implements and builds on current cutting-edge methods for calibration. Sensor fusion is a crucial part of the method and here a complementary filter is used. The finished estimation is then used with Iterative Learning Control (ILC) to investigate if the accuracy can be further improved and also to test its viability. Results from ILC show that the IMU can indeed be used to estimate the end-effector's trajectory, but that sensor fusion is mandatory. Further research could be done to allow the estimation to be done online instead of offline, and ILC could be tested more extensively.


Acknowledgements

We, the authors of this report, would like to thank Dr. Stig Moberg, Prof. Mikael Norrlöf and Engr. Johan Norén at ABB Robotics for their continued support and patient encouragement throughout the project. Furthermore, we would also like to thank Dr. Anas Fattouh and Prof. Ning Xiong at Mälardalen University for their constant backing and encouragement. A special thanks to team Sigma and the rest of ABB Robotics for a very good and professional reception.

You all made this possible and for that you have our eternal gratitude.

Peter Andersson & Staffan Brickman May 2020


Acronyms

CCF Conventional Complementary Filter
CF Complementary Filter
CMAC Cerebellar Model Articulation Controller
DOF Degrees Of Freedom
EKF Extended Kalman Filter
FK Forward Kinematics
FIR Finite Impulse Response
ILC Iterative Learning Control
IMU Inertial Measurement Unit
IIR Infinite Impulse Response
LM Levenberg-Marquardt
MEMS Micro-Electro-Mechanical Systems
RMSE Root Mean Square Error
STD Standard Deviation
TCP Tool Center Point
UKF Unscented Kalman Filter


Table of Contents

1 Introduction
2 Background
  2.1 Robot anatomy
  2.2 Calibration of the IMU
  2.3 Basic filter theory
  2.4 Forward kinematics
  2.5 Iterative Learning Control
3 Related Work
  3.1 Calibration of the IMU
  3.2 Filter for sensor fusion
  3.3 Offline correction of the estimated path
4 Problem formulation
  4.1 Hypothesis
  4.2 Research questions
5 Method
  5.1 Data acquisition
  5.2 Calibrate the IMU
  5.3 Sensor fusion
  5.4 ILC algorithm
6 Implementation
  6.1 IMU calibration
  6.2 Integration
  6.3 Sensor fusion
  6.4 ILC implementation
7 Results
  7.1 IMU performance
  7.2 IMU calibration
  7.3 Sensor fusion
  7.4 ILC results
8 Conclusions
  8.1 Discussion
  8.2 Future Works
References


1 Introduction

The fourth industrial revolution is in full swing and in many factories robots do most of the work. Industrial automation can be an integral part of many measurable manufacturing processes [1]. Today an industrial robot can be used in a plethora of ways [2]. However, limitations linger and some operations are still not possible for the robot to complete [3]. Operations that require high precision and accuracy, for example, can sometimes be challenging, especially at high speeds [4]. One of the problems when dealing with robotic manipulators is vibrations due to elasticity in the robot. This can result in estimation errors when the software tries to calculate the position of the robot [5]. Another problem is that many of today's robots use kinematic models based only on joint-motor angles to estimate the position of the manipulator's tool, also known as the end-effector [6]. These estimations, computed using Forward Kinematics (FK), are very important since this is the only way for the robot to know the position of its tool [7]. Because of this, complementary ways of estimating the end-effector's position are in high demand [8]. This is the problem this project set out to solve. By fusing signals from an Inertial Measurement Unit (IMU) device and the robot's sensors, the estimation of the end-effector's position is improved. The new estimation based on sensor fusion is then used with Iterative Learning Control (ILC) to further improve the estimated results.

This project is done in collaboration with ABB Robotics, and the skill of the engineers there has benefited the project greatly. Apart from knowledgeable support provided during the project, the collaboration partner also provides access to industrial robots and an extensive simulation environment. Furthermore, this project is inspired by, and is an indirect continuation of, the work done by Norén [9]. As part of the collaboration team, Norén has been available for support throughout the project.

Early IMUs were large and bulky, but even so, they were used in aircraft navigation [10]. Jiménez et al. [11] claim that small Micro-Electro-Mechanical Systems (MEMS) sensors are becoming increasingly popular. For this project, an ultra-high performance MEMS IMU device from Sensonor is used to collect data. An IMU is a device that contains multiple sensors. In this case, the IMU contains an accelerometer and a gyroscope. The actual IMU used contains additional sensors, but throughout this report and for this project the device is considered to have only two. The accelerometer is used to estimate the trajectory of the robot's end-effector while the gyroscope estimates the rotation of the IMU device, i.e., roll, pitch and yaw velocities. An IMU of this type is said to have six Degrees Of Freedom (DOF): the accelerometer provides three dimensions and the gyroscope another three.

Several challenges had to be met and conquered before the IMU could complement the existing system [12]. One major problem with the accelerometer sensor is the noisy data it produces, as described by the works of Mihelj [13] and Alam et al. [14]. Because of this, claim Corrales et al. [15], the IMU can not be used to calculate the trajectory estimations by itself; therefore, as already mentioned, it acts complementary to the joint-motor angles when calculating the end-effector's estimated trajectory. This means that sensor fusion is required to fuse the joint-motor angles and the IMU data. During this project, the sensor fusion is performed by a filter, which fuses the signals used to calculate the estimations and rotation. According to work done by Norén [9] and Hedberg et al. [6], the well-known Extended Kalman Filter (EKF) or the Complementary Filter (CF) could be considered for this task. For this project, a CF is used for sensor fusion. The reasons for this are mainly ease of use, but this also enables direct comparisons to the work of Norén [9], where a CF is used. When configuring the filters, many important parameters must be tuned. First of all, the filter should be accurate and robust, i.e., have as low Root Mean Square Error (RMSE) and Standard Deviation (STD) values as possible. Other considerations are execution time and ease of use. Even though the filter is used to estimate the end-effector's trajectory offline, real-time usage of the estimation pipeline is implied. This means that execution time can indeed be a major factor. To find good filter parameters, they were iterated upon in two steps, where the resolution is increased in the second step. The result is a CF that fuses the desired signals.


When the IMU signal is collected it has to be processed. The acceleration signal will contribute to the position estimation, while the gyroscope signal will act complementary to the orientation of the same estimation. Before the IMU is used in sensor fusion, the position of the device relative to the rest of the robot must be known. To determine the relative position of the end-effector and the IMU, the latter needs to be calibrated. Within this project, the calibration methods are based on methods developed by Axelsson and Norrlöf [16] and Tedaldi et al. [17]. There have, however, been modifications that improve the methods, at least in a simulated environment. During this work, the IMU calibration has not been evaluated on a physical robot due to the lack of a measuring instrument. Furthermore, this thesis will guide the reader through the concept of calibration and its parameters. For a thorough understanding of orientation calculations, including both rotation matrices and quaternions, which are used extensively in this project, the reader is referred to Section 2.

Accurate estimations can be useful in several ways, as made obvious by several previously cited works. Diagnostics, optimization and real-time measurements are just a few areas where an accurate estimation of the end-effector's trajectory can be useful. To test the calculated estimation, and as a proof of concept, offline correction of the estimated trajectory will be employed throughout the project. One method for offline correction is ILC, as explained by Gunnarsson et al. [18]. In that paper, the work of Arimoto et al. [19] is cited as an inspiration for ILC. In fact, Arimoto and coauthors are almost always cited in works on ILC, for example in [20], [21] and [22]. ILC utilises the fact that robot manipulators often perform the same operation repeatedly. This enables ILC to use a calculated error signal to alter the input to the system, as explained in an article by Bristow et al. [23]. By altering the input, the output will converge closer to the reference signal for each iteration. This can be done both in open and closed loop systems, as explained by Gunnarsson and Norrlöf [18]. This project utilises this established method to test and improve the estimated path.

Three big concepts have been evaluated through this work: IMU calibration, trajectory estimation through sensor fusion, and offline correction of the estimated trajectory through estimation-based ILC. This report shows an improvement of an already existing, state-of-the-art calibration method on a simulated robot. The error calculation for the calibration methods shows an improvement of around 37 per cent. This result holds only for the particular experimental robot used in this project, where bias became a large factor. It should be said that, while the bias vector was improved, the orientation got worse. Moving on to the estimated results, dynamic motions were observed by the IMU device, which was its main purpose. Both the fused signals as well as the integrations of the IMU signals have been evaluated with different methods. Simulated results show an RMSE lower than 0.5 mm, with a maximum error around 0.9 mm, on an ISO 9283 trajectory. Results on a physical experimental robot are worse, with an RMSE above 1 mm and a maximum error of more than double that. All estimations are in 3-D. Furthermore, estimation-based ILC results show an improvement by correcting only the reference joint-motor positions of a simulated robot.

This report details the project described above. It is also the master thesis of the authors and represents the culmination of four months of work. As such, the intended audience is of course staff and students at Mälardalen University as well as employees of the collaboration partner ABB Robotics. But the topics discussed in this report could be interesting to anyone working with sensor fusion, trajectory estimations, IMU calibration or offline correction done by ILC. The report is divided into eight major sections, where the first one is this introduction.

The introduction is followed by Section 2, where several major concepts needed to be understood by the reader are addressed. Concepts here are explained at a basic level and provided to help readers understand the rest of the report. Section 3 aims to put this project in the context of similar works. Also, brief discussions on and comparisons of the cited works are provided. Next, Section 4 provides a problem formulation. Here the three research questions this project aims to investigate are presented, and a hypothesis is stated. In Section 5 the major methods used during the project are discussed. In this section, the chosen methods are stated and the reasons for them explained. The methods here act as blueprints for the implemented software and procedures presented next in Section 6. Here the implementations are explained in detail and the estimation pipeline is introduced. The implementations produce results, and they are presented next in Section 7. The results are presented in detail with figures and tables. Finally, in Section 8, conclusions are drawn from the results. Also in this section, future works are proposed.


2 Background

This section of the report introduces and briefly explains five central concepts needed to fully appreciate this project. First, the basic concepts surrounding an industrial robot are introduced in Section 2.1. In Section 2.2, calibration of the IMU is explained. The device needs to be calibrated before any readings are recorded, and the motivation for, and theory about, the procedure is described. The sensor fusion is dependent upon a filter, and in Section 2.3 basic filter theory is explained, where filters in general and the complementary filter in particular are described. When the robot's motors are used to estimate the position of its end-effector, forward kinematics is used. This mathematical technique is briefly explained in Section 2.4. ILC is a technique that is used to optimise a robot's end-effector's trajectory, and in this project it is used to test the calculated estimation. The theory regarding this technique is explained in Section 2.5.

2.1 Robot anatomy

Numerous different applications or machines could be called a robot [24]. During this project, the word "robot" will refer to an industrial robot with 6 DOF and 6 revolute joints. Working with, and analysing data from, a robot of this type requires knowledge of how the robot observes its surroundings. Also, the terminology surrounding the robot is explained here.

Source: Image captured from ABB Robotics model documentation, edited by hand

Figure 1: An industrial robot with its joints numbered 1 to 6. The link between joints 2 and 3 is labeled. Also labeled are the robot’s base and its end-effector.

An industrial robot is made up of joints and links attached to each other [25]. One side of the robot is attached to a flat surface and the other end to an end-effector. Every joint has a motor that steers the position of that joint; these are called joint-motors. Furthermore, between every pair of joints there is a link; the links are the visible arms of a robot. Throughout the report the joints and links are numbered from 1 to 6, where link 1 is the one nearest the flat surface and link 6 is the one adjoining the end-effector. A robot of this kind has 6 DOF. Furthermore, a tool may be mounted on the end-effector. This tool will have a point called Tool Center Point (TCP), which is a point in space that is controlled by the robot's movement [26]. A TCP could be, for example, the center of a gripper or the tip of a laser cutter. A visual summary of the robot's parts and concepts is presented in Figure 1.

An industrial robot has several different coordinate systems [27]. It has one coordinate system for
• every link,
• its TCP, and
• its base.

There could also be another coordinate system called “world” [28], which has the same initial position as the base but may be rotated in a different way. It is important to understand which coordinate system every action or conclusion is made in, since the coordinates in one system are different from coordinates in another. Every coordinate system does follow the right-hand rule, but they are rotated differently. For example, the base coordinate system has its z-axis pointing along the gravitational force vector and its x-axis and y-axis perpendicular to the surface it is mounted upon. However, another perspective is that the z-axis is pointing in the link direction, which is the case of the other link’s coordinate systems. For example, the end-effector in Figure 1 has its z-axis pointing in the base’s x-axis direction and the end-effector has its x-axis pointing in the base’s negative z-axis direction. This is, in this report, referred to as the tool coordinate system. The tool coordinate system is used when, e.g., setting a TCP or an IMU position on the tool.

In this project, rotations have mostly been calculated using quaternions. However, when rotating signals, they have been mapped to rotation matrices. The relation between a rotation matrix and a quaternion is

$$R(q) = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2q_1q_2-2q_0q_3 & 2q_1q_3+2q_0q_2 \\ 2q_1q_2+2q_0q_3 & q_0^2-q_1^2+q_2^2-q_3^2 & 2q_2q_3-2q_0q_1 \\ 2q_1q_3-2q_0q_2 & 2q_2q_3+2q_0q_1 & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}, \quad (1)$$

where q is a quaternion vector [29]. In this project the scalar part of the quaternion is denoted q_0, while q_1, q_2 and q_3 correspond to rotations around the x-, y- and z-axes respectively. For a quaternion vector to represent a pure rotation, the condition

$$\|q\| = 1 \quad (2)$$

needs to be true. Therefore, the quaternions throughout this work follow

$$q = \frac{\hat{q}}{\|\hat{q}\|}. \quad (3)$$

Furthermore, to convert a rotation matrix to a quaternion,

$$q(R) = \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix}, \quad (4) \qquad \text{where} \qquad R = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}, \quad (5)$$

the following formula [30] has been used:

$$q(R) = \begin{bmatrix} \frac{\sqrt{1+a_{11}+a_{22}+a_{33}}}{2} \\ \frac{a_{32}-a_{23}}{4q_0} \\ \frac{a_{13}-a_{31}}{4q_0} \\ \frac{a_{21}-a_{12}}{4q_0} \end{bmatrix}. \quad (6)$$
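To make the conversions above concrete, the following is a minimal sketch (not taken from the thesis) of Equations (1), (3) and (6) in Python/NumPy, assuming the scalar-first quaternion convention q = [q0, q1, q2, q3] used in the report.

```python
# Minimal sketch: quaternion <-> rotation matrix conversions, Equations (1), (3), (6).
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a quaternion, Equation (1)."""
    q = np.asarray(q, dtype=float)
    q = q / np.linalg.norm(q)            # enforce ||q|| = 1, Equation (3)
    q0, q1, q2, q3 = q
    return np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*q1*q2 - 2*q0*q3,             2*q1*q3 + 2*q0*q2],
        [2*q1*q2 + 2*q0*q3,             q0**2 - q1**2 + q2**2 - q3**2, 2*q2*q3 - 2*q0*q1],
        [2*q1*q3 - 2*q0*q2,             2*q2*q3 + 2*q0*q1,             q0**2 - q1**2 - q2**2 + q3**2],
    ])

def rot_to_quat(R):
    """Quaternion from a rotation matrix, Equation (6) (assumes trace(R) > -1)."""
    q0 = np.sqrt(1.0 + R[0, 0] + R[1, 1] + R[2, 2]) / 2.0
    return np.array([
        q0,
        (R[2, 1] - R[1, 2]) / (4.0 * q0),
        (R[0, 2] - R[2, 0]) / (4.0 * q0),
        (R[1, 0] - R[0, 1]) / (4.0 * q0),
    ])
```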


2.2 Calibration of the IMU

Since this work is about estimating positions partly based on IMU signals, the device needs to be calibrated. IMUs do not necessarily have perfectly orthogonal 3D axes [17]. Axis misalignment, inaccurate scaling and incorrect bias can give unsatisfactory outputs from the IMU. Faulty readings from the IMU will cause the double-integrated acceleration, i.e. the estimated position, to drift [31]. This is of course also a problem for the rotations from the gyroscope, which suffers equally from an incorrectly calibrated IMU. Calibration is needed to take care of these drifting problems. By measuring readings in different known positions, parameters can be modified to minimize the unwanted results.

Regarding this thesis work, it is also important to know the position and rotation of the IMU relative to another known coordinate system. Such a relative coordinate system could be any system of the robot, but the base or the end-effector's system is often utilised due to simplicity. Finding a relationship between the IMU's coordinate system and another system is associated with calibration [32]. However, not all calibration methods utilise that relationship, due to different application areas.

Utilising a system of different sensors, including an IMU, requires that the sensors speak the same language. This work is about cooperation between the robot and the IMU. When the IMU has something to tell about the movements, the robot system has to understand what it says [9]. This is why calibration is needed. By first calibrating the IMU such that its readings are as reliable as possible, and then calibrating the pose of the IMU relative to the robot, they will better understand each other. Once the IMU's pose in the robot's work-space is established, the IMU data can be put to use.

2.3 Basic filter theory

In this work, a signal is a sequence of readings that describes a phenomenon. This phenomenon could be anything measurable, but in the context of this work, it is readings that describe acceleration, positions and so on of a certain part of the robot. According to Jean-Baptiste Joseph Fourier [33], a continuous function can be expressed as the sum of a series of sine or cosine terms. These terms all have different frequencies and another way of looking at a signal is that it is constructed by a sum of different frequencies with different amplitudes.

A filter is an entity that dampens certain frequencies of a signal [34]. When applying a filter to a signal, it is commonly described as filtering the signal. When a signal is filtered, certain frequencies in the signal are suppressed, while others are amplified. Depending on which frequencies are suppressed and amplified, filters are often categorized into different types. There are many different filter types, but the most common are low-pass and high-pass filters. When applying a low-pass filter, frequencies above an arbitrary threshold frequency are dampened in the signal. The threshold frequency of a filter is commonly known as the cut-off frequency. All frequencies below the cut-off frequency of a low-pass filter are known as the pass-band, while all frequencies above it are known as the stop-band. While a low-pass filter lets low frequencies pass and stops higher frequencies, the opposite is true for a high-pass filter. A high-pass filter dampens any frequency under the cut-off frequency and keeps frequencies above it. A graphical representation of this is shown in Figure 2, where the parts of a low-pass filter are marked.

Source: Image generated in MATLAB R2019b, edited by hand

Figure 2: A visual representation of a low-pass filter. The x-axis is the frequency in Hz and the y-axis is the amplification in dB. The filter's pass-band, stop-band and cut-off frequency are labeled.
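As an illustration of the cut-off frequency and the pass-/stop-band concepts, the following is a minimal sketch (not from the thesis) that designs and applies a low-pass and a high-pass filter with SciPy; the sample rate, cut-off and test signal are arbitrary example values.

```python
# Illustrative sketch: low-pass and high-pass filtering with a shared cut-off.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000.0        # sample rate [Hz] (example value)
fc = 10.0          # cut-off frequency [Hz] (example value)

# Second-order Butterworth filters; the cut-off is given relative to the
# Nyquist frequency fs / 2.
b_lp, a_lp = butter(2, fc / (fs / 2), btype="low")
b_hp, a_hp = butter(2, fc / (fs / 2), btype="high")

t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.sin(2 * np.pi * 200.0 * t)

low_part = filtfilt(b_lp, a_lp, signal)    # keeps mainly the 1 Hz component
high_part = filtfilt(b_hp, a_hp, signal)   # keeps mainly the 200 Hz component
```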

Sensor fusion

Often, two signals measure the same phenomenon. When this happens, it is often desirable to fuse the two signals into one [35]. The new fused signal should ideally have the good parts from both original signals while the bad parts are left out. The goal is to describe the phenomenon in more detail and let the two signals complement each other. Say for example that you have two devices that measure the same phenomenon. One of the devices is very accurate in darkness, and the other in stark light. Also assume you are doing the measuring on a rainy and cloudy day. Neither of the devices will perform perfectly since it is neither pitch black nor bright sunlight. But together they may complement each other and create a signal that is greater than its individual parts. This is called sensor fusion. There are many techniques for doing sensor fusion and it is often a challenging part of any project. In this work, several signals describe the same phenomenon. For example, the resolver and the accelerometer readings are both used to calculate and estimate positions independently. The resolver is good at capturing low-frequency changes in the position, while the accelerometer is more proficient at detecting high-frequency changes [36]. Because of this, these two signals are fused to create an estimation where both the high and low-frequency changes are described with high accuracy. For this fusion, a complementary filter is used.

Complementary filters

The complementary filter is a filter that is used for sensor fusion and is built from a low-pass and a high-pass filter. The two filters share a common cut-off frequency [37]. The filter pair will, therefore, let all frequencies through. But when the filter is used for sensor fusion, two signals are fed into the filter. One is fed into the low-pass filter while the other passes through the high-pass filter. In this way, it is possible to retain the low-frequency information of the first signal while damping its high-frequency parts, while the other signal only retains its high-frequency information. The filtered signals are then added together to create a fused signal. This results in a signal that preserves the best of both inputs [37].
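The following is a minimal first-order complementary-filter sketch (illustrative, not the thesis implementation). One input carries trustworthy low-frequency information (for example a resolver-based estimate) and the other trustworthy high-frequency information (for example an IMU-based estimate); the low-pass and high-pass branches share the same cut-off and sum to unity at every frequency.

```python
# Illustrative first-order complementary filter fusing two 1-D signals.
import numpy as np

def complementary_filter(low_freq_src, high_freq_src, fs, fc):
    """Fuse two equally long signals with a first-order CF, cut-off fc [Hz]."""
    dt = 1.0 / fs
    tau = 1.0 / (2.0 * np.pi * fc)
    alpha = tau / (tau + dt)              # weight of the high-pass branch
    fused = np.empty_like(low_freq_src, dtype=float)
    fused[0] = low_freq_src[0]
    for i in range(1, len(fused)):
        # high-pass branch follows fast changes in high_freq_src,
        # low-pass branch pulls the estimate toward low_freq_src
        fused[i] = alpha * (fused[i - 1] + high_freq_src[i] - high_freq_src[i - 1]) \
                   + (1.0 - alpha) * low_freq_src[i]
    return fused
```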

2.4 Forward kinematics

Without sensors, a robot knows nothing of the world or its place in it. But for the industrial robot to be useful, it needs to know a lot about the world, most importantly its position in it. To this end, an industrial robot uses many different sensors, and one commonly used for positioning is the resolver. The resolver is an electronic device that measures degrees of rotation. In the industrial robot, the resolver is frequently used to measure rotation in the joints of the robot. This is illustrated in Figure 1, where each joint contains a resolver. These angles are then used to calculate the position of the robot's most important part, its end-effector. The mathematical technique used for this is known as FK [38]. To do this, the robot is split into links, and the point where two links meet is called a joint. The dimensions of the links need to be known, as well as the angles of the joints. The position of the end-effector is then calculated by using the rigid body transformation of each link relative to the other links. Consider, for example, a robot with two connected links. Imagine that the first link moves one meter and the second link moves the same distance, in the same direction, within one time step. So if both links move one meter in the same direction, the end-effector, positioned on the second link, will move two meters.

Robotic motions are referred to as rotations and translations. Rotations are simply rotations, while translations are linear motions. An end-effector position may have both its orientation rotated and its position translated at the same time. When a robot reads a line of code, telling it to move to a certain point, it calculates the trajectory in terms of joint angles. These joint angles are determined based on the robot’s FK model. An example of a transformation matrix in a 3-dimensional space can be represented as

$$T = \begin{bmatrix} R_{3\times 3} & t_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix}, \quad (7)$$

where R is a rotation and t is a translation [39].

When calculating the FK for a robot, and thereby getting knowledge about the end-effector position, or even the TCP, transformations are made. The transformations are rotations and translations through the robot. Each joint, including its respective link, has a transformation matrix, based on the angle of the joint and the length of the link. These transformations are often represented as Denavit–Hartenberg parameters [40], which describe the pose of that specific link. No single transformation along the chain has information about the previous transformations, i.e., each transformation gives a new pose relative to the start of the link.

Each transformation matrix depends on the joint angle corresponding to that link. When the joint angles are known, the matrix calculations are straightforward. Since the transformation matrices are chained to each other, they can be multiplied together [39]. This means that a transformation over two links can be described as

$$T_2^0 = T_1^0 T_2^1, \quad (8)$$

where T_2^0 is a transformation from frame 2 to frame 0. This extends directly to a serial robot with 6 DOF when calculating its end-effector position. For example, joint 6 knows its orientation, which is propagated back through the robot such that its position and orientation are known in the base frame. To get the position of the TCP in the base frame the formula is

$$p^0 = T_6^0 p^6, \quad (9)$$

where p^0 is the position in the base frame and p^6 is the position of the TCP in the sixth frame [41].
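To illustrate the chaining in Equations (8)-(9), the sketch below uses a hypothetical planar two-link arm (not the thesis robot model) where each joint rotates about the z-axis and each link translates along the rotated x-axis.

```python
# Illustrative FK sketch: chaining homogeneous transforms, T_n^0 = T_1^0 T_2^1 ... T_n^(n-1).
import numpy as np

def transform(theta, link_length):
    """Homogeneous transform of one revolute joint rotating about z,
    followed by a translation of link_length along the rotated x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [link_length * c, link_length * s, 0.0]
    return T

def forward_kinematics(joint_angles, link_lengths):
    """Multiply the per-link transforms to get the end-effector pose T_n^0."""
    T = np.eye(4)
    for theta, L in zip(joint_angles, link_lengths):
        T = T @ transform(theta, L)
    return T

# Position of a TCP expressed in the last frame, mapped to the base frame (Equation (9)):
T_total = forward_kinematics([np.pi / 4, -np.pi / 6], [0.5, 0.4])
p_tcp_last = np.array([0.1, 0.0, 0.0, 1.0])     # homogeneous coordinates
p_tcp_base = T_total @ p_tcp_last
```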

Inverse kinematics

While forward kinematics is used to calculate the position of the end-effector given the angles of the joints, inverse kinematics is used to calculate the joint angles given the end-effector position [42]. This is useful when the goal is to move the end-effector of the robot to a given position, in other words, when planning the trajectory.

The inverse kinematics problem is not as straightforward as the forward kinematics problem. For a six DOF robot, there will almost always be at least two solutions to one problem. One example of this is an elbow-up versus an elbow-down robot posture.


2.5 Iterative Learning Control

Most of the time, an industrial robot will be programmed to let its tool follow a pre-determined trajectory. The robot will then do its best to follow the pre-programmed trajectory, but as evidenced by this work, the actual tool trajectory of the robot almost always deviates from the desired one. So the input into the robot control system will result in an output that differs from the desired one. This is, of course, a problem and one that ILC tries to solve.

Frequently an industrial robot will be configured to perform the same task repeatedly. If the robot initiates the repeated task from the same position, in each iteration, the errors in the trajectory will also be repeated. This assumption is what ILC uses to improve the accuracy of each iteration of the task [23]. After each iteration of the trajectory, an error vector will be calculated by using an external measuring tool. This error vector contains information for each sample on how the actual position of the tool differs from the desired one. The vector will then be used to calculate new input or reference signals for the same trajectory for the robot. After the new input is calculated, another iteration commences and another error vector is calculated. Each iteration will decrease the difference between the actual trajectory and the one originally desired. This means that each iteration will have a slightly different input that tries to compensate for the output error measured in the previous iteration [18]. Pseudo mathematically the input of the next iteration will be calculated as follows:

$$U_{k+1} = U_k + \gamma e_k, \quad (10)$$

where k is the iteration number, U is the input of the system and γ is the gain of the ILC algorithm [20]. Furthermore, the variable e in Equation (10) is a vector that contains the errors from the previous iteration. This will result in the error converging towards zero, and the output thus gets closer and closer to the desired trajectory for each iteration.
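The sketch below is a schematic first-order ILC loop following Equation (10). It is illustrative only; run_trajectory is a hypothetical stand-in for executing the robot or simulator with a given input and returning the measured output, and the gain and iteration count are arbitrary example values.

```python
# Schematic first-order ILC sketch, Equation (10): U_{k+1} = U_k + gamma * e_k.
import numpy as np

def ilc(reference, run_trajectory, gain=0.5, iterations=10):
    """Iteratively update the input so the measured output approaches the reference."""
    u = reference.copy()                 # first iteration: input = reference
    for k in range(iterations):
        y = run_trajectory(u)            # execute the repeated task
        e = reference - y                # error vector for this iteration
        u = u + gain * e                 # Equation (10)
    return u

# Toy usage: a "plant" that scales and delays the input slightly.
ref = np.sin(np.linspace(0, 2 * np.pi, 500))
plant = lambda u: 0.9 * np.roll(u, 3)
u_final = ilc(ref, plant)
```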


3 Related Work

This section introduces methods, comparisons and experiments done in other works related to the project. The section is divided into three parts, where the first one discusses works associated with calibrating the IMU. This part is followed by another containing comparisons and discussions about filter design and position estimation in the current context. The last part brings up works about correcting the estimated trajectory by utilising ILC.

3.1 Calibration of the IMU

This work is about estimating the pose of a robot's end-effector utilising an IMU. Because of this, calibration of the IMU is very important. By utilising a robust calibration method, reliable readings may be acquired from the IMU. In the context of this project, there are two different areas of calibration: to find a relationship between the IMU's coordinate system and another system, and to calibrate the IMU itself, e.g., its axis misalignments and bias. Several different methods for calibration are briefly discussed below.

Tedaldi et al. [17] propose a method for the calibration of an IMU. The method does not produce a relationship between the IMU's pose and another coordinate system. However, it can be used to calculate the scaling, offset, axes misalignment and bias of the sensor, i.e., most of the properties mentioned in the data-sheet of the IMU. Part of the algorithm explained in Tedaldi et al. makes use of the Allan variance method to compute the stability of the IMU's noise. This is done by placing the IMU in a static position and measuring its gyroscope data. When the measurement is made, the Allan variance shows the stability of the device. This is done to retrieve a threshold value, ζ_init, which is used to decide when the IMU is static and when it is not:

$$\zeta_{init} = \sqrt{[\mathrm{var}_{t_w}(a^t_x)]^2 + [\mathrm{var}_{t_w}(a^t_y)]^2 + [\mathrm{var}_{t_w}(a^t_z)]^2}, \quad (11)$$

where t_w is the length of the measured variance, i.e., the time measured in seconds until the static gyroscope noise is considered stable. The operator var_{t_w}(a^t) in Equation (11) is the variance of signal a centered in t. Also, by measuring the mean value of the gyroscope readings over a time T_init, until the gyroscope is considered stable, the bias vector for the gyroscope can be acquired.

Furthermore, Tedaldi et al. use the information about the gyroscope stability to measure N static positions, which the authors propose as 36 ≤ N ≤ 50 to get a robust calibration. By calculating the Allan variance threshold value, the authors claim, no manual pre-processing of the data has to be done, i.e., the algorithm will by itself decide when the signal is in a new static position.
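The following is a hedged sketch of such a variance-based static detector (illustrative only, not the reference implementation of Tedaldi et al.); the window lengths and the multiplier applied to the initial threshold are assumptions made for the example.

```python
# Illustrative static/moving classification of accelerometer data, cf. Equation (11).
import numpy as np

def variance_magnitude(acc):
    """sqrt(var_x^2 + var_y^2 + var_z^2) for a block of 3-axis accelerometer samples."""
    v = np.var(acc, axis=0)
    return np.sqrt(np.sum(v ** 2))

def detect_static(acc, fs, t_init=4.0, t_window=1.0, k=3.0):
    """Boolean mask marking samples whose surrounding window looks static."""
    zeta_init = variance_magnitude(acc[: int(t_init * fs)])   # threshold from a rest period
    half = int(t_window * fs / 2)
    mask = np.zeros(len(acc), dtype=bool)
    for i in range(half, len(acc) - half):
        mask[i] = variance_magnitude(acc[i - half:i + half]) < k * zeta_init
    return mask
```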

The work of Axelsson and Norrlöf [16] offers a calibration procedure for an IMU mounted on an industrial robot. This procedure is divided into two parts, where the first part estimates the orientation of the IMU and the second part estimates the device's position. Regarding orientation, the authors show a relationship between measurements in the sensor's internal coordinate system and a desired coordinate system. In this case, the desired system is the base system of the robot. Mathematically the relationship is

$$\rho_s = \kappa R \rho_a + \rho_0. \quad (12)$$

In Equation (12), ρ_s represents measurements in the base coordinate system and ρ_a represents measurements in the sensor's coordinate system. Furthermore, R is a rotation matrix that describes the rotation from the sensor's coordinate system to that of the base. Meanwhile κ and ρ_0 are the accelerometer's sensitivity and bias respectively, where the sensitivity is assumed to be equal on all three accelerometer axes.

The second part of the calibration procedure, developed by Axelsson and Norrlöf, estimates the IMU's position in the base coordinate system, and for this the authors present a method with good accuracy. This method requires that the robot can move at a constant and known speed, and that the robot's arm lengths and the angles between them are known and can be measured. The results from the paper are a mean estimate of the translation vector of (26.2, 4.57, 12.77)^T cm, with a mean standard deviation of (1.13, 0.4, 0.4)^T cm. Meanwhile, errors in the rotations are between 1 and 2 degrees.

Olofsson et al. [43] propose another method for determining the IMU's pose relative to the base of the robot. This is done through a rotation matrix and translation vector from the end-effector to the IMU. The method presented by Olofsson et al. differs in the parameters used in the calculations compared to the work of Axelsson and Norrlöf, as well as in the algorithms used to calculate the transformation from the end-effector to the IMU. When determining the rotation of the IMU, the authors base their calculations on the fact that a rigid body has the same rotational velocity in all points. Furthermore, the rotation matrix from the end-effector's coordinate system to that of the IMU is obtained via information about the rotational velocity of the end-effector and the IMU. The mean estimate of the translation vector is (−0.0544, 0.0295, 0.125)^T m, with a standard deviation of (2.2, 0.79, 2.5)^T mm.

Another calibration method is proposed by Du et al. [44]. The method uses a neural network called Cerebellar Model Articulation Controller (CMAC) to do the calibration online. Du et al. estimate the error of the kinematic parameters in a serial robot with the help of an IMU and a laser tracker system. The idea is to produce a Jacobian matrix based on the changes in the robot's joint-motor angles. By knowing the true and estimated position, the algorithm optimises the Jacobian with the help of an Unscented Kalman Filter (UKF). The algorithm shows good results, with a maximum position error of 0.15 mm and a maximum orientation error of 0.24 deg.

3.2 Filter for sensor fusion

The filter in this project is mainly used for sensor fusion. Because of this, works describing sensor fusion through filtering are of particular interest. Jo et al. [45] claim that the well-known EKF is a common choice for sensor fusion, and Rahim Jassemi-Zargani and Dan Necsulescu [46], Zhang et al. [47] and Li et al. [48] use the EKF for sensor fusion in their respective works. In their work on object tracking, Yunqiang Chen and Yong Rui [49] propose the use of a particle filter for sensor fusion. Furthermore, Jo et al. [45] claim that, although heavily dependent on process models, the Bayesian filter is another good choice for sensor fusion. Another filter choice for sensor fusion is the CF, used in the works by Yi et al. [50] and Hedberg et al. [6].

In the work done by Norén [9], two types of filters were considered: the EKF and the CF were compared. Since this project is a successor to the work done by Norén in 2014, the same two filters are considered for this work. When choosing a filter for sensor fusion there are several parameters to take into account. First of all, the filter should produce results that preserve the strengths of the fused signals. In this project, for example, the filter is used for fusing the accelerometer signal with a signal from the robot's resolvers. The valuable information of the accelerometers is mainly represented in higher frequencies, while the most important information from the resolvers is low-frequency, as explained by Hedberg et al. [6]. In this case the filter should ensure that both signals can contribute their most valuable information to the fused signal. Furthermore, while the filtering and estimation calculations are done offline in this work, the application implies online usage. For this reason, the filter chosen should be computationally fast. Several papers claim that the EKF indeed produces good results when used for sensor fusion, but with slow execution time. The CF is a simpler filter with fast and good results [9][6][51]. Both the work of Hedberg et al. [6] and Norén [9] propose the use of a CF over the EKF due to a big difference in execution time, while there is no big disparity in terms of performance. Furthermore, the work of Duong et al. [51] compares two variants of the CF to the EKF. The authors of that work show that variants of the CF, along with the EKF, have higher performance than the basic CF. Again, the basic CF has the fastest execution time compared to the others. This is also confirmed by Gui et al. [52]. However, the work of Norén [9] also claims that an implementation of an EKF that is fast enough to run in real-time should be possible. Thus the EKF cannot be excluded solely based on execution time.

The CF consists of two basic filter types. To make the CF more versatile, Yang et al. [53] and Karunarathne et al. [54] propose a variant of the conventional CF that uses an adaptive gain. The two filters in the CF are weighted based on external circumstances. Another variant of the filter is mentioned in the work by Vihonen et al. [55], who use a Time-Varying CF when fusing rotation data. This filter variant moves the cut-off frequency of the CF based on the filtered signals.

3.3 Offline correction of the estimated path

Estimations of the position of a manipulator’s end-effector can be used in several ways. One im-portant area of use is diagnostics where estimations can be used to help identify weaknesses.

Furthermore, in the work of Wallén [20], the author describes estimation-based ILC to improve the performance of a robot. The author continues to say that the ILC algorithm, modifying the joint angles, is an alternative choice for TCP accuracy improvement when estimating the TCP position. There are first-order and high-order ILC algorithms. The first-order algorithm takes only the previous iteration into account, while the high-order algorithm takes an arbitrary number of iterations into account. Wallén shows that a first-order ILC algorithm can improve a robot's performance.

There are several different variants of the ILC algorithm. The work by Zeungnam Bien and Kyung M. Huh [22] presents a higher-order ILC algorithm where the correction is not only dependent on the last iteration but on multiple previous ones. In the work by Tayebi [56], an adaptive ILC method for trajectory tracking is presented. This variant only uses two variables, instead of the classic ILC where the number of variables is equal to the number of system inputs. This, the author claims, saves memory space.


4 Problem formulation

The impact of robots on contemporary industry cannot be disputed. Today robots can successfully perform almost any task they are programmed to do. But for certain tasks, the accuracy of the robot is not high enough. This can be handled by executing sensitive tasks at low velocity, and small errors can sometimes be acceptable. But a more accurate industrial robot is of course desired.

The pose of an industrial robot’s end-effector has to be estimated by using sensors. This is fre-quently done by using FK and the angels in the robot’s joint measured by resolver sensors. The result is the pose of the robot’s end-effector. FK is described in Section 2.4. Since today’s industry have an ever-increasing demand for accuracy from robots, the accuracy of the estimation also has to be increased. In this project, an IMU is mounted on an industrial robot to investigate if the device can improve the estimations.

Section 2.2 briefly explains the concept of integrating an IMU into a robot system. Before signals from the IMU can be used to calculate estimations, the pose of the device must be known. Also, the device's pose relative to the rest of the robot must be determined. This is done by calibrating the IMU before it is used to acquire data. In this project, several calibration techniques have been investigated and implemented. This is a crucial step since data from an uncalibrated IMU is useless. Also, the IMU itself might give inaccurate readings. If the readings are not reliable, knowledge about its pose might not be enough to improve the robot's accuracy.

As mentioned in previous sections, the noisy signal from an IMU, even when perfectly calibrated, can not be used alone to estimate the pose of an industrial robot's end-effector. Therefore, calculations based on data from the sensors of the robot are used to complete the estimations based on the IMU signal. This is done by fusing the pose estimations from both sources through a filter. Sensor fusion enables each of the sensors to contribute its key features to the calculated estimation. This is important since estimations from the current system are accurate during certain motions, while the IMU is more accurate during others. In this way, the IMU can support the current system when needed. But for the sensor fusion to work as intended, a good filter is needed. This means that to improve the estimation, the filter-based sensor fusion must work and produce reliable signals. Thus the filter type and parameters must be thoroughly investigated and experimented upon.

An improved estimation, through sensor fusion, of the pose of the robot's end-effector can result in improvements of the manipulator's accuracy. To test the viability, the calculated estimations need to be tested and used for further gains. To this end, this project includes an implementation of the ILC algorithm. By comparing the estimated results with the reference trajectory, an error vector is calculated. This vector is then used by the ILC algorithm when it iteratively changes the control input to the system to match the reference output. It is important to test the viability of the calculated estimations outside the experimental context since this will indicate their usefulness. Also, the ILC algorithm improves the estimated results even further and therefore also their usefulness.

4.1 Hypothesis

The hypothesis for this project is presented below. It contains the predicted result based on the previous works presented in Section 3. The IMU calibration and sensor fusion can be seen as the most important independent variables for the project, while the trajectory estimation is the dependent variable. These three variables are all represented in the hypothesis.

Hypothesis By carefully calibrating an IMU, mounted on an industrial robot, its filtered data can be used to estimate the trajectory of the robot’s end-effector.


4.2 Research questions

With the discussions from the previous section in mind, the following three research questions are answered in this report.

RQ1 How can an IMU, mounted near an industrial robot’s end-effector, be calibrated, and its pose relative to the end-effector be determined?

RQ2 What is an appropriate filter for sensor fusion between motor angles and IMU data, and how can it be used to estimate the trajectory of an industrial robot’s end-effector?

RQ3 How can the estimated trajectory be used with iterative learning control to improve the position tracking performance?


5 Method

As made obvious by Section 3 there is much to contemplate when selecting methods for this project. The overall goal of the project is to estimate the trajectory of an industrial robot’s end-effector by merging signals acquired from an IMU and the robot’s joint-motors. The estimated trajectory is then used in the ILC algorithm to iteratively improve the actual path tracking performance. To achieve this goal, topics such as IMU calibration, filter design, and ILC have to be researched. In this section, major considerations around the trajectory estimation are discussed and the methods are presented.

For easy reference, the section is split into subsections where each subsection discusses a major method consideration. The trajectory can not be estimated without data, and in Section 5.1 the data acquisition process is discussed. Furthermore, the difference between live and simulated data sets is explained, as well as how the signals are re-sampled. After this, in Section 5.2, IMU calibration is discussed. Here the calibration method used during the project is discussed and equations related to the topic are explained. This project relies heavily on sensor fusion, and in Section 5.3 this is explained together with the filter used. Finally, in Section 5.4 the method chosen for ILC is explained.

5.1 Data acquisition

Data for this project are collected in sets. The sets are collections of signals acquired from different sources, where each signal is the readings from a sensor. These signals are furthermore broken down into sub-signals representing readings for individual axes. For this project, two types of sets are created. The first type is the simulated sets and the other type is the live sets. For the simulated sets, the robot is constructed in a virtual environment and a simulated IMU is mounted on the robot. The simulated robot is then programmed to follow predetermined trajectories, and measurements from the simulated environment are recorded and collected into sets. One major advantage of using a simulated environment is that anything can be measured. Only the level of detail of the simulation limits the data available for acquisition. For this project, numerous signals were recorded from the simulated environment, but the most important are the virtual IMU signal, the virtual robot's joint-motor angles and a reference signal.

Live sets, on the other hand, are recorded in an actual robot cell, in a real environment. Here the data is recorded from three different physical sources: from the IMU mounted on the robot, the accelerometer and the gyroscope are recorded; data on motor angles are recorded from the robot system itself; and the actual trajectory is recorded from the laser tracking system. A major difference between the simulated and the live environment is important to understand. In the simulation, a true reference signal exists. This means that the true position of the TCP is always known. In the live environment, on the other hand, no true reference exists. A reference signal is recorded using a laser tracking system, but that system produces sensor noise and reading errors. This also means that the true location of the mounted IMU is known in simulation, but not in the live environment. For this reason, most software development is done using the simulated sets, but for results, the live sets are used.

In the simulated environment, all data is acquired at the same sample rate. This makes it easy to compare and work with different signals, which is not the case for data recorded in the live environment. The IMU mounted on the live robot samples data at 2000 Hz, the robot system itself has a sample rate of 1984 Hz, and the laser tracker system reads at 1000 Hz. To be able to combine data and use it for sensor fusion, all data needs to be at the same sample rate. Since the IMU has the highest sample rate, the other two signals are resampled at a higher frequency so that no data is lost by removing samples. This means that all three signals are synced at 2000 Hz. For this purpose, a basic linear interpolation method is used on the data from the robot system and the laser tracking system, thus bringing both up to the sample rate of 2000 Hz.
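The following is a minimal sketch (illustrative only; variable names are not from the thesis) of bringing a signal onto a common 2000 Hz time base with linear interpolation.

```python
# Illustrative resampling of a signal to a uniform 2000 Hz grid via linear interpolation.
import numpy as np

def resample_to(t_src, x_src, fs_target=2000.0):
    """Linearly interpolate a signal recorded at times t_src onto a uniform grid."""
    t_new = np.arange(t_src[0], t_src[-1], 1.0 / fs_target)
    return t_new, np.interp(t_new, t_src, x_src)

# Example: a robot joint-angle channel recorded at 1984 Hz is interpolated onto
# the 2000 Hz grid of the IMU; the laser-tracker data (1000 Hz) is handled the same way.
t_robot = np.arange(0, 5, 1 / 1984.0)
q1 = np.sin(t_robot)                       # stand-in for one joint-angle channel
t_sync, q1_sync = resample_to(t_robot, q1)
```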


5.2 Calibrate the IMU

As explained in Section 2.2, calibration is basically split into two different areas: to find the correct parameters for the IMU, so that proper readings are obtained, and to find the relation of the IMU's coordinate system relative to a known coordinate system. Methods developed for utilising an IMU attached to an industrial manipulator often combine these problems in the calibration since they are related to each other. This work compares different methods to find the one best suited for this problem.

Section 3.1 gives a short explanation of two methods, developed by Axelsson and Norrlöf [16] and Olofsson et al. [43], to calculate the pose of the IMU relative to the robot. The mentioned methods calculate all the information needed for calibration of an IMU mounted on an industrial manipulator, with similar results. Therefore, since the objective of this thesis work is also to improve the results from Norén [9], which uses the former method, a further comparison between the two was considered unnecessary. This choice also opens up for the evaluation of other methods.

Referring to the above motivation, the first method to be evaluated is the one developed by Axelsson and Norrlöf [16]. To calculate the parameters of the IMU, as well as the rotation relative to a known coordinate system, a relation between accelerations in the different coordinate systems must be known. The relation is shown in Equation (13),

$$\rho_s = \kappa R \rho_a + \rho_0, \quad (13)$$

where ρ_s is the acceleration vector in the robot's end-effector coordinate system and ρ_a is the measured acceleration vector from the accelerometer. Furthermore, κ, R and ρ_0 are the parameters, namely the scaling, rotation and bias respectively, that orient the IMU as the end-effector. One thing to mention about this algorithm is that the first three joints are fixed in every calculation. The authors of the method propose to set joint two to 0 deg and joint three to 90 deg, i.e., link two pointing straight up from the base and link three orthogonal to link two, in six different orientations. The six proposed robot configurations are easiest described as vectors of joint angles in degrees, which are shown in Table 1.

Configuration   Joint 1   Joint 2   Joint 3   Joint 4   Joint 5   Joint 6
1               0         0         90        0         0         0
2               0         0         90        0         0         90
3               0         0         90        0         0         180
4               0         0         90        0         0         -90
5               0         0         90        0         90        0
6               0         0         90        0         -90       0

Table 1: Joint angles in degrees for the six proposed robot configurations.

The six proposed robot configurations, seen in Table 1, will represent a general orientation estimation of the IMU, relative to the robot, for the system to act upon. The measured acceleration vector for each configuration will then be used to minimise

$$\sum_{k=1}^{N} \|e_k\|^2, \quad (14)$$

where

$$e_k = \rho_{s,k} - \kappa R \rho_{a,k} - \rho_0. \quad (15)$$
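As an illustration, the minimisation of Equations (14)-(15) can be posed as a standard nonlinear least-squares problem. The sketch below is illustrative only and not the thesis implementation: it parameterises the rotation with three Euler angles via SciPy's Rotation helper, and assumes ρ_s and ρ_a are given as (N, 3) arrays of expected and measured accelerations for the N static configurations.

```python
# Illustrative least-squares fit of kappa, R and rho_0 from Equations (14)-(15).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, rho_s, rho_a):
    kappa = params[0]
    R = Rotation.from_euler("xyz", params[1:4]).as_matrix()
    rho_0 = params[4:7]
    # e_k = rho_s,k - kappa * R * rho_a,k - rho_0, Equation (15)
    return (rho_s - kappa * rho_a @ R.T - rho_0).ravel()

def calibrate_orientation(rho_s, rho_a):
    x0 = np.array([1.0, 0, 0, 0, 0, 0, 0])          # kappa=1, no rotation, no bias
    sol = least_squares(residuals, x0, args=(rho_s, rho_a))
    return sol.x
```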

Estimation of the IMU’s position in the base coordinate system is made in the second part of the calibration procedure. Axelsson and Norrl¨of have proposed a method, where the key is to measure the IMUs acceleration, over an arbitrarily time, within a constant angular velocity. These measurements are made with three different orientation configurations for the robot. Regarding

(22)

the constant angular velocity, the robot will be positioned as in the previous static part, but set with a certain speed for joint 1. The three different configurations are shown in Table 2, which shows the joint angle in degrees for the orientation axes.

Configuration   Joint 4   Joint 5   Joint 6
1               0         0         0
2               0         90        0
3               0         0         -90

Table 2: Joint angles in degrees for the three proposed robot configurations for estimation of the IMU position.

Three runs are enough for the algorithm to calculate the desired parameters. The following equation is used to find the position of the IMU in the world frame:

$$[r_s]_b = [Q_{bf/b}]_b \left( [r_w]_{bf} + [r_{s/w}]_{bf} \right). \quad (16)$$

Equation (16) has two unknowns, namely [r_s]_b and [r_{s/w}]_{bf}. [r_s]_b is the vector from the base to the IMU, in the world coordinate system. Meanwhile [r_{s/w}]_{bf} is the vector from the end-effector to the IMU, relative to the robot's base coordinate system, i.e., relative to the current angle of the first joint-motor. Furthermore, the known parameters of Equation (16) are the rotation of the base relative to the world frame and the vector from the base to the end-effector in the base frame, specified as [Q_{bf/b}]_b and [r_w]_{bf} respectively. The final equation to find the relation of the IMU and the end-effector is

$$\begin{bmatrix} 0 & 0 & -\dot{\theta}^2_{c1} \\ 0 & \dot{\theta}^2_{c1} & 0 \\ \dot{\theta}^2_{c2} & 0 & 0 \\ 0 & \dot{\theta}^2_{c2} & 0 \\ 0 & 0 & -\dot{\theta}^2_{c3} \\ \dot{\theta}^2_{c3} & 0 & 0 \end{bmatrix} \begin{bmatrix} l_1 \\ l_2 \\ l_3 \end{bmatrix} = \begin{bmatrix} a^M_{s,x,c1} + \dot{\theta}^2_{c1} L_1 \\ a^M_{s,y,c1} \\ a^M_{s,z,c2} + \dot{\theta}^2_{c2} L_3 \\ a^M_{s,y,c2} \\ a^M_{s,x,c3} + \dot{\theta}^2_{c3} L_1 \\ a^M_{s,z,c3} \end{bmatrix}. \quad (17)$$

There are three unknowns and nine known variables in Equation (17). The unknowns are l_1, l_2 and l_3, which form the position vector of the IMU relative to the end-effector. The known variables are the rest, where θ is the angle between the base and the world frame. Meanwhile L_1 and L_3 are position vectors to the end-effector relative to the base, in the x-direction. Lastly, a^M_s is the measured acceleration in the rotated IMU coordinate system, i.e. the new coordinate system retrieved from the first part of the algorithm, where e.g. a^M_{s,x,c1} is the acceleration in the x-direction of the base frame in the first configuration. This relation is obtained through the information that there is no rotation between the base and the corrected IMU frame obtained from the rotation part.
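Once assembled, Equation (17) is an overdetermined linear system in the lever arm l = (l_1, l_2, l_3) and can be solved with an ordinary least-squares solve. The sketch below is illustrative only and assumes the layout of Equation (17) as reconstructed above; the squared joint-1 speeds, measured accelerations and the L_1, L_3 vectors must be supplied from the three configurations.

```python
# Illustrative least-squares solve of the 6x3 system in Equation (17).
import numpy as np

def build_system(theta_dot, a_meas, L1, L3):
    """Assemble A and y; theta_dot = (joint-1 speed in c1, c2, c3),
    a_meas[c] = measured acceleration (x, y, z) in configuration c."""
    w1, w2, w3 = np.asarray(theta_dot, dtype=float) ** 2
    A = np.array([
        [0,  0, -w1],
        [0,  w1, 0],
        [w2, 0,  0],
        [0,  w2, 0],
        [0,  0, -w3],
        [w3, 0,  0],
    ])
    y = np.array([
        a_meas[0][0] + w1 * L1,
        a_meas[0][1],
        a_meas[1][2] + w2 * L3,
        a_meas[1][1],
        a_meas[2][0] + w3 * L1,
        a_meas[2][2],
    ])
    return A, y

def solve_lever_arm(A, y):
    """Least-squares solution of A @ l = y for l = (l1, l2, l3)."""
    l, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
    return l
```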

Secondly, a method developed by Tedaldi et al. [17] will be evaluated. This method does not define the IMU's coordinate system relative to another system. However, the method will give a scaling, misalignment, offset and bias for the accelerometer and the gyroscope. Those parameters have already been calibrated by the manufacturer, but they could perhaps be improved. Moreover, the IMU has been used plenty of times since it was bought, possibly introducing property changes over time. When this method has been utilised, those parameters can be used with another method to later retrieve the relations between the coordinate systems.

The calibration method proposed by Tedaldi et al. uses 9 calibration parameters for the accelerometer:

$$\theta^{acc} = [\alpha_{yz}, \alpha_{zy}, \alpha_{zx}, s^a_x, s^a_y, s^a_z, b^a_x, b^a_y, b^a_z], \quad (18)$$

and another 9 for the gyroscope:

$$\theta^{gyro} = [\gamma_{yz}, \gamma_{zy}, \gamma_{xz}, \gamma_{zx}, \gamma_{xy}, \gamma_{yx}, s^g_x, s^g_y, s^g_z]. \quad (19)$$

These parameters are obtained through two optimization equations solved with the Levenberg-Marquardt (LM) algorithm. The first equation is

L(θacc) = N X n=1 (||g||2− ||h(aS n, θ acc )||2)2, (20) to obtain θacc where h(aS

n, θ

acc) = TaKa(a + ba), shown in Equation(22,23,24). The second

equation is L(θgyro) = N X n=2 ||ua,n− ug,n||2, (21)

to obtain $\theta^{gyro}$, where $N$ is the number of intervals utilised in the method. Equation (21) minimises the difference between calibrated accelerometer readings, obtained from Equation (20). Furthermore, $u_{a,n}$ is the calibrated accelerometer vector at interval $n$, while $u_{g,n}$ is the rotated accelerometer vector from interval $n-1$, whose rotation is based on the gyroscope readings during the transition period from interval $n-1$ to $n$. The gyroscope readings during the transition are integrated to acquire an orientation, where the gyroscope readings $\omega$ are first transformed as $T^g K^g (\omega + b^g)$ for the optimisation algorithm to solve. The matrices used in these equations are

$$T^a = \begin{bmatrix} 1 & -\alpha_{yz} & \alpha_{zy} \\ 0 & 1 & -\alpha_{zx} \\ 0 & 0 & 1 \end{bmatrix}, \qquad T^g = \begin{bmatrix} 1 & -\gamma_{yz} & \gamma_{zy} \\ -\gamma_{xz} & 1 & -\gamma_{zx} \\ -\gamma_{xy} & -\gamma_{yx} & 1 \end{bmatrix} \qquad (22)$$

for the misalignment,

$$K^a = \begin{bmatrix} s^a_x & 0 & 0 \\ 0 & s^a_y & 0 \\ 0 & 0 & s^a_z \end{bmatrix}, \qquad K^g = \begin{bmatrix} s^g_x & 0 & 0 \\ 0 & s^g_y & 0 \\ 0 & 0 & s^g_z \end{bmatrix} \qquad (23)$$

for the scaling and

$$b^a = \begin{bmatrix} b^a_x \\ b^a_y \\ b^a_z \end{bmatrix} \qquad (24)$$

for the acceleration bias. Furthermore, Tedaldi et al. state that the gyroscope bias vector is retrieved directly from the Allan variance results when classifying the motion state of the IMU.
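To make the accelerometer part of the optimisation concrete, the sketch below formulates the residuals of Equation (20) for MATLAB's lsqnonlin with its Levenberg-Marquardt algorithm. It is only an illustration under assumed variable names (e.g. acc_static), not the calibration code used in this work.

```matlab
% Minimal sketch (assumed variable names): estimating the nine accelerometer
% parameters of Equation (18) by minimising the cost in Equation (20).
% acc_static is assumed to be an N-by-3 matrix of mean accelerometer readings,
% one row per static interval.

g_mag  = 9.81;                      % local gravity magnitude [m/s^2]
theta0 = [0 0 0 1 1 1 0 0 0];       % [alpha_yz alpha_zy alpha_zx s_x s_y s_z b_x b_y b_z]

% One residual per static interval: ||g||^2 - ||h(a_n, theta)||^2
residuals = @(th) arrayfun(@(n) ...
    g_mag^2 - norm(calibrated_acc(acc_static(n,:)', th))^2, ...
    (1:size(acc_static,1))');

opts  = optimoptions('lsqnonlin', 'Algorithm', 'levenberg-marquardt');
theta = lsqnonlin(residuals, theta0, [], [], opts);

function a_cal = calibrated_acc(a, th)
% h(a, theta_acc) = Ta * Ka * (a + ba), as in Equations (22)-(24)
% (local function: place at the end of the script or in its own file)
Ta = [1 -th(1)  th(2);
      0   1    -th(3);
      0   0      1   ];
Ka = diag(th(4:6));
ba = th(7:9)';
a_cal = Ta * Ka * (a + ba);
end
```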

5.3 Sensor fusion

As mentioned in several previous sections, the estimations in this project are based on data acquired from sensor fusion. In the estimation procedure, sensor fusion is performed for two different purposes. The first sensor fusion calculates the orientation of the end-effector throughout the trajectory. For orientation, the gyroscope signal and data from the robot's joint angles are fused. The fused data is then used to recalculate the accelerometer signal based on how the IMU is rotated. The second occurrence of sensor fusion in the procedure is between the acceleration acquired from the IMU's accelerometer and data from the robot's joint angles. This fused data is then used to finalise the estimation. Since sensor fusion is an integral part of the estimation process, designing the filter used for sensor fusion is an important part of the software development.

Previous works have compared different filters for sensor fusion, as described in Section 3.2, but for this project a CF is used. There are several reasons for this. First and most important, its sensor fusion results are on par with other filter variants; according to several referenced works, the results from a CF can rival those of the popular EKF as well as other filters. Furthermore, Norén [9] uses a CF with good results in his work, and using the same filter makes result comparison easy. The filter implementation done by Norén also provides a good baseline filter for this project. Although execution time is not a hard requirement for this project, it matters for further development, and the low execution time of the CF is a big asset when bringing the estimation pipeline online. Lastly, the CF is a simple filter, both regarding implementation and optimisation, which enables more and faster experiments.

To fuse the IMU data and the joint-motor angles with a CF, they first have to be expressed in the same units and in the same coordinate system. When the signals are acquired, they are first filtered and then fused. This process is easiest shown as:

1. $\hat{q} = G_1(q_{res}) + G_2(q_{gyro})$

2. $\hat{p} = G_1(p_{res}) + G_2(p_{acc})$,

where $G_1$ is the low-pass filter and $G_2$ the high-pass filter of the common CF. Furthermore, the information coming from the joint-motors is the baseline for the estimation, while the IMU contributes more as an offset, starting at 0. This means that the initial estimation will only be based on the FK model.
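A minimal MATLAB sketch of these two fusion steps is given below, under the assumption that both signal pairs have already been converted to the same units, coordinate system and sample rate; the variable names are placeholders and not the project code.

```matlab
% Minimal sketch (assumed variable names): applying the CF of steps 1 and 2.
% q_res/q_gyro are orientation estimates from resolvers and gyroscope,
% p_res/p_acc are position estimates from FK and integrated acceleration.
% b_lp/a_lp and b_hp/a_hp hold the low-pass and high-pass sub-filter coefficients.

q_hat = filtfilt(b_lp, a_lp, q_res) + filtfilt(b_hp, a_hp, q_gyro);   % fused orientation
p_hat = filtfilt(b_lp, a_lp, p_res) + filtfilt(b_hp, a_hp, p_acc);    % fused position

% filtfilt gives zero-phase filtering, which suits the offline estimation used
% here; a causal filter() call would be needed in an online implementation.
```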

Integration

To be able to fuse the data from the IMU as positions, both the accelerometer and the gyroscope signals have to be integrated. The gyroscope measures angular velocity, which means its signal has to be integrated once, while the accelerometer data has to be integrated twice. Two methods have been evaluated for each of the IMU signals used in this project.

For the orientation integration, the first evaluated method is from the work of Norén [9]. This method is denoted as
$$q_k = q_0, \quad \text{for } k = 1 \qquad (25)$$
$$q_{k+1} = \left( \cos\!\left(\frac{\Delta t \|\omega_k\|}{2}\right) I_4 + \frac{\Delta t}{2}\,\mathrm{sinc}\!\left(\frac{\Delta t \|\omega_{k+1}\|}{2}\right) S(\omega_{k+1}) \right) q_k, \quad \text{for } k > 1 \qquad (26)$$
where
$$S(\omega) = \begin{bmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{bmatrix}. \qquad (27)$$

The second method is the 4th-order Runge-Kutta method [17]. Both orientation integration algorithms operate on quaternions, and the quaternion is divided by its own length at the end of every time step to make sure its length stays equal to 1. The second method calculates the quaternion for every time step $\Delta t$ as

$$q_k = q_0, \quad \text{for } k = 1 \qquad (28)$$
$$q_{k+1} = q_k + \Delta t \left( \frac{k_1}{6} + \frac{k_2}{3} + \frac{k_3}{3} + \frac{k_4}{6} \right), \quad \text{for } k > 1 \qquad (29)$$
which estimates the orientation $q$ by estimating how the angular velocity behaves during one time step. Equations (30)-(33) show the calculations of the variables $k_1$-$k_4$.

$$k_1 = \tfrac{1}{2} S(\omega(t_k))\, q_k \qquad (30)$$
$$k_2 = \tfrac{1}{2} S\!\left(\omega\!\left(t_k + \tfrac{\Delta t}{2}\right)\right) \left(q_k + \tfrac{\Delta t}{2} k_1\right) \qquad (31)$$
$$k_3 = \tfrac{1}{2} S\!\left(\omega\!\left(t_k + \tfrac{\Delta t}{2}\right)\right) \left(q_k + \tfrac{\Delta t}{2} k_2\right) \qquad (32)$$
$$k_4 = \tfrac{1}{2} S(\omega(t_{k+1})) \left(q_k + \tfrac{\Delta t}{2} k_3\right) \qquad (33)$$

Furthermore, S(ω(t)) is solved with the same skew symmetric matrix as the other method, seen in Equation (27).
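A minimal MATLAB sketch of the Runge-Kutta orientation integration of Equations (28)-(33) is given below. It is an illustration, not the project implementation; the angular velocity at the midpoint $t_k + \Delta t/2$ is assumed to be approximated by the average of two consecutive gyroscope samples.

```matlab
% Minimal sketch (not the thesis code) of the 4th-order Runge-Kutta quaternion
% integration of Equations (28)-(33). omega is an N-by-3 matrix of gyroscope
% readings [rad/s], dt the sample time and q0 the initial unit quaternion.
function q = integrate_gyro_rk4(omega, dt, q0)
N = size(omega, 1);
q = zeros(N, 4);
q(1, :) = q0(:)';
for k = 1:N-1
    wk  = omega(k, :);
    wk1 = omega(k+1, :);
    wm  = 0.5 * (wk + wk1);            % assumed approximation of omega(t_k + dt/2)
    qk  = q(k, :)';

    k1 = 0.5 * S(wk)  * qk;
    k2 = 0.5 * S(wm)  * (qk + dt/2 * k1);
    k3 = 0.5 * S(wm)  * (qk + dt/2 * k2);
    k4 = 0.5 * S(wk1) * (qk + dt/2 * k3);  % dt/2 as written in Equation (33)

    qn = qk + dt * (k1/6 + k2/3 + k3/3 + k4/6);
    q(k+1, :) = (qn / norm(qn))';          % normalise to unit length
end
end

function M = S(w)
% Skew-symmetric quaternion rate matrix of Equation (27)
M = [ 0    -w(1) -w(2) -w(3);
      w(1)  0     w(3) -w(2);
      w(2) -w(3)  0     w(1);
      w(3)  w(2) -w(1)  0   ];
end
```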

For the acceleration integration, the first method to evaluate is a cumulative sum of the accelerations for each time step $\Delta t$, such as
$$p_t = \mathrm{cumsum}(\mathrm{cumsum}(a_t \cdot \Delta t) \cdot \Delta t) \qquad (34)$$
in MATLAB.

The cumulative sum method relies only on the acceleration readings at each time step and misses what happens in between the time steps. Therefore, another method was also implemented to integrate the estimated trajectory between the time steps and thereby keep the estimation correct in time. The integration algorithm is

$$p_t = p_{t-1} + v_{t-1}\,dt + a_{t-1}\,\frac{dt^2}{2}, \qquad (35)$$
where $v$ is the integrated acceleration signal between two time steps, i.e. $v_t = v_{t-1} + (a_t + a_{t-1})\frac{dt}{2}$.
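A minimal MATLAB sketch of Equation (35), with the trapezoidal velocity update, could look as follows; the variable names are assumptions, and acc is taken to be the reoriented, gravity-compensated acceleration.

```matlab
% Minimal sketch (assumed variable names) of the position integration in
% Equation (35). acc is an N-by-3 matrix of accelerations [m/s^2], dt the
% sample time; velocity is integrated with the trapezoidal rule.

N   = size(acc, 1);
vel = zeros(N, 3);
pos = zeros(N, 3);
for t = 2:N
    vel(t, :) = vel(t-1, :) + (acc(t, :) + acc(t-1, :)) * dt / 2;       % v_t
    pos(t, :) = pos(t-1, :) + vel(t-1, :) * dt + acc(t-1, :) * dt^2 / 2; % p_t
end
```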

Filter design

This work employs a CF to do sensor fusion. A CF is a filtering entity that contains two sub-filters, one high-pass filter and one low-pass filter, that share a common cut-off frequency. The sensor fusion by a CF of two discrete signals, $x_1(s)$ and $x_2(s)$, into one signal, $y(s)$, is shown in Equation (36), where $G_{lpf}(s)$ and $G_{hpf}(s)$ represent the low-pass and high-pass sub-filters respectively.
$$y(s) = G_{lpf}(s)\,x_1(s) + G_{hpf}(s)\,x_2(s) \qquad (36)$$

The sub-filters use information about the frequency content of the signals to attenuate undesired frequency bands, and the filtered signals are then added together to form one signal. The gain of the sub-filters is 0 dB in the pass-band. As a result, one signal dominates the low-frequency band and the other the high-frequency band. Since the sub-filters share a common cut-off frequency, the transition bands of the two sub-filters overlap, which means that both signals influence the output signal in the transition band. Because of this, the cut-off frequency and the filter order are the most important filter parameters to consider when designing a CF. Other filter parameters considered are the pass-band ripple and whether the filter should be an Infinite Impulse Response (IIR) or a Finite Impulse Response (FIR) filter.

For filter design MATLAB is used and the design process is implemented as an iterative approach. During the process, the filter order and cut-off frequency are iterated upon in set intervals. The iterative approach is performed twice, once for an IIR filter and once for an FIR filter. During the process, parameters for a low-pass filter are obtained, and from the low-pass filter parameters a complementary high-pass filter is calculated. By combining the two filters, the iteratively constructed low-pass filter and the calculated high-pass filter, the CF candidate is completed. Another parameter considered during filter design was the pass-band ripple of the sub-filters. The difference between filters with pass-band ripple and filters without was barely noticeable during evaluation; where a difference was noticeable, the filter without pass-band ripple performed better. Because of this, no further tests with pass-band ripple were performed or considered. A sketch of how one such complementary filter pair can be constructed is shown below.
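As an illustration of how a complementary low-pass/high-pass pair can be constructed in MATLAB, the sketch below designs an FIR low-pass filter with fir1 and derives its complementary high-pass filter; the cut-off frequency and filter order are placeholder values representing one iteration of the design loop, not the final filter used in this work.

```matlab
% Minimal sketch (assumed parameter values) of one CF candidate: an FIR
% low-pass filter from fir1 and its complementary high-pass, obtained by
% subtracting the low-pass from a pure delay of half the filter order.

fs    = 1984;              % assumed common sample rate [Hz]
fc    = 4;                 % candidate cut-off frequency [Hz], iterated in the design loop
order = 200;               % candidate filter order (even), iterated in the design loop

b_lp = fir1(order, fc/(fs/2));               % linear-phase FIR low-pass
b_hp = -b_lp;
b_hp(order/2 + 1) = b_hp(order/2 + 1) + 1;   % delta delayed by order/2 samples minus the low-pass

% The two sub-filters now satisfy Glpf(z) + Ghpf(z) = z^(-order/2), so fusing
% two copies of the same signal with them reproduces it with a pure delay.
```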

To evaluate filter parameters, the standard deviation, root-mean-square error and maximum error were used. When evaluating an iterated filter candidate during the design process, a raw simulated IMU signal and a resolver signal are used. The filter candidate is employed to filter calculated rotation data from both the gyroscope and the resolvers. The filtered information on rotations is then used to reorient the accelerometer signal. The reoriented accelerometer signal is integrated, and the resolver signal is transformed by FK to produce position coordinates. Next, the integrated signal from the accelerometer and the transformed resolver signal are filtered through the same candidate filter. These coordinates are then added together to form a single estimation signal. The estimation signal is then compared to a reference trajectory. This reference trajectory was, during filter design, recorded from the simulated environment and represents the ground truth of the robot's end-effector's trajectory. Equation (37) shows the calculation of the error signal, $e$, where $\hat{y}$ is the estimated signal and $y_{ref}$ is the reference signal.
$$e = \hat{y} - y_{ref} \qquad (37)$$

By using the error signal to calculate STD, RMSE and max error, filter candidates are compared. The best filter candidate produced during the iterative process is then used to create the CF.

Trajectory evaluation

To measure the performance of the estimated trajectory, different evaluation parameters have been used. Those are the RMSE, STD and the maximum error along a trajectory.
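As an illustration, the sketch below computes these three measures from an estimated and a reference trajectory; the variable names are assumptions, and the error is taken as the per-sample Euclidean distance between the two trajectories.

```matlab
% Minimal sketch (assumed variable names) of the evaluation metrics. p_hat and
% p_ref are N-by-3 matrices of estimated and reference TCP positions.

e_vec  = p_hat - p_ref;                 % error per axis, as in Equation (37)
e_dist = sqrt(sum(e_vec.^2, 2));        % Euclidean error along the trajectory

rmse    = sqrt(mean(e_dist.^2));        % root mean square error
stddev  = std(e_dist);                  % standard deviation of the error
max_err = max(e_dist);                  % maximum error along the trajectory
```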

5.4 ILC algorithm

One use of an estimated trajectory is to adjust it closer to the reference trajectory, which can be accomplished using ILC. The idea behind the ILC method in this project is to modify the reference joint-motor positions and speeds for a certain trajectory. The method followed in this project is the first-order ILC algorithm

$$u_{k+1}(t) = Q(q)\left(u_k(t) + \gamma \, e_k(t + \delta \, T_s)\right), \qquad (38)$$

from Wallén [20]. The variables in the algorithm are a low-pass filter, $Q(q)$, the reference, $u(t)$, the learning gain, $\gamma$, and the error, $e$. Furthermore, $\delta$ is how many time steps, $T_s$, into the future the algorithm should adjust the error. This means that the algorithm starts to adjust errors in the joint space in advance of an error in the Cartesian coordinate system. However, adjusting an error in the joint space to compensate for an error in the Cartesian coordinate system is not straightforward; computations are needed for such a transformation. For this purpose, a Jacobian matrix was obtained as an estimated transformation between the two coordinate systems. This Jacobian is thoroughly explained in Section 6.4. A sketch of the update in Equation (38) for one joint is shown below.
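The sketch assumes the error has already been transformed to the joint space via the Jacobian; the gain, the lead $\delta$ and the filter coefficients are placeholder assumptions, not the values used in the experiments.

```matlab
% Minimal sketch (assumed variable names) of the first-order ILC update in
% Equation (38) for one joint. u_k and e_k are column vectors with the
% reference and the joint-space error of iteration k, and b_q/a_q hold the
% coefficients of the low-pass filter Q(q).

gamma = 0.5;                               % assumed learning gain
delta = 3;                                 % assumed lead, in samples

% e_k(t + delta*Ts): shift the error forward and pad the end with its last value
e_shifted = [e_k(1+delta:end); repmat(e_k(end), delta, 1)];

% Q(q)(u_k + gamma * e_k(t + delta*Ts)); filtfilt gives zero-phase filtering,
% which is possible since the whole iteration is processed offline
u_next = filtfilt(b_q, a_q, u_k + gamma * e_shifted);
```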


6 Implementation

This section concretely describes the implementation of the methods discussed in Section 5, based on the related works presented in Section 3. Estimating the end-effector's trajectory is a procedure done in several steps. The procedure can be visualized as a pipeline where the raw signals from the IMU and the robot's resolvers are fed into one end, and estimations of the end-effector's trajectory are obtained as output at the other. The pipeline is a sequence of implemented methods where each step takes the raw signals closer to the final estimation. An overview of the steps of the pipeline is presented in Table 3.

Step  Procedure
1     Calibrate the IMU
2     Calculate estimated rotation from gyroscope signal
3     Calculate estimated rotation from motor angles
4     Filter and fuse estimated rotations from steps 2 and 3
5     Reorient accelerometer signal by utilising data from step 4
6     Obtain positions from motor angles through FK
7     Obtain positions from accelerometer signal through integration
8     Filter and fuse estimated positions from steps 6 and 7

Table 3: An overview of the estimation pipeline. The steps are presented in order, from the first one on top to the finished estimation at the bottom.
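As an illustration of how the steps in Table 3 chain together, the fragment below applies the integration sketch from Section 5.3 to one recorded gyroscope signal. The variable names and numerical values are placeholder assumptions, not the project data.

```matlab
% Minimal usage sketch (assumed variable names): step 2 of the pipeline,
% integrating a recorded gyroscope signal into an orientation trajectory
% using the RK4 routine sketched earlier (integrate_gyro_rk4).

dt   = 1/2000;                 % IMU sample time [s], 2000 Hz
q0   = [1 0 0 0];              % assumed initial orientation (identity quaternion)
gyro = randn(10000, 3) * 1e-3; % placeholder gyroscope data [rad/s]; replace with a recording

q_gyro = integrate_gyro_rk4(gyro, dt, q0);   % N-by-4 unit quaternions, input to step 4
```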

All software development, if not stated otherwise, is implemented and tested using simulated data. The major benefit of developing with simulated data is that the true position of the end-effector, or any other part of the robot, is always known. This is not realistic, and the software can't be developed using only the true values, but they are very useful when debugging and comparing results. Most of the software development is done in MATLAB, and especially the filter design relies heavily on built-in functions. The robot is simulated using the Algoryx physics engine. The simulated robot enabled much more and faster experimenting, which benefited this project immensely. Live data sets, although not used in software development, are recorded on a physical robot at the premises of ABB Robotics. The robot model for the live data sets is represented by a virtual twin in the simulation environment. This particular robot model is older and more prone to vibrations than newer models. This choice was intentional, as the older model's trajectories are easier to improve than those of its newer counterpart, and thus results can be visualized more clearly.

As mentioned above, the source of the estimation is raw signals from the IMU and the robot. The IMU signal contains sub-signals from both an accelerometer and a gyroscope. Both sensors provide data in three axes and are sampled at 2000 Hz. The resolver signal contains angles read from the robot's six joints, sampled at 1984 Hz. Signals directly from these two sources are from here on referred to as raw signals. Before the raw signals can be used together, they need to be presented at the same sample rate. The synchronization and acquisition of the two sets of data were previously described in Section 5.1.

The first step in the pipeline is to calibrate the IMU. This is important because the true pose of the IMU mounted on the robot is not known. The calibration is a process that will, among other things, determine the IMU's pose relative to that of the end-effector. The implementation of the calibration process is described in Section 6.1. When the calibration has finished, the offset of the IMU is known and the device's pose on the robot's hull can be calculated.

The next step is to determine how the IMU's axes are rotated relative to the base coordinate system of the robot. Since the accelerometer measures acceleration, gravity will be part of the sensor's readings and will contribute to the measured value of at least one axis. The goal is not only to remove gravity from the raw signal but also to rotate the accelerometer signal based on how the device is oriented. This must be done for each sample, since the IMU mounted on the robot can and will rotate, and thus the acceleration from gravity will alternate between the axes. Therefore calculations

