
International Master’s Thesis

Robot Assisted Video Microscopy for Free-form

Surface Inspection

Zillehussnain

Technology

Studies from the Department of Technology at Örebro University 11


Robot Assisted Video Microscopy for Free-form

Surface Inspection


Studies from the Department of Technology

at Örebro University 11

Zillehussnain

Robot Assisted Video Microscopy for

Free-form Surface Inspection

Supervisor: Prof. Ivan Kalaykov

Examiners: Dr. Boyko Iliev, Krzysztof Charusta


© Zillehussnain, 2011

Title: Robot Assisted Video Microscopy for Free-form Surface Inspection


Abstract

Automation of free-form surface inspection is one of the most important techniques for future industrial product quality control. Small workpieces are important but difficult to inspect; to assure their quality, human inspection is often the only choice. Smaller parts with complex curvature are difficult to examine, and the probability of inspection error is high. Such tiny free-form workpieces can be examined with the automatic visual inspection technique developed in this thesis, which delivers a higher and more consistent quality rate than manual human inspection. The technique uses video microscopy of the free-form surface for inspection and places no limitation on workpiece shape. Issues related to registration and localization (after grasping) of the free-form workpiece, robot inspection planning and video microscopy are covered in this thesis. The experiments with the system demonstrate a complete working automatic visual inspection system for free-form workpiece quality control.

Keywords:

free-form surface inspection, video microscopy, micrography, robot assisted inspection, photomicrography, visual inspection.


Acknowledgements

During my graduate studies at Örebro University, several persons and institutions collaborated directly and indirectly with my research. Without their support it would have been impossible for me to finish my work.

This is the reason I wish to dedicate this section to acknowledging their support. I want to start by expressing sincere gratitude to my advisor, Prof. Ivan Kalaykov, because he gave me the opportunity to do research under his guidance and supervision. I received motivation, encouragement, support and independence from him during the whole research period.

I also want to thank Dr. Dimitar Dimitrov and Dr. Boyko Iliev; to these two persons I am completely grateful for providing me with very good research resources, and they transmitted crucial knowledge necessary for the completion of my work. Without their interest, previous work and unconditional help, it would not have been possible to deliver this finished software product. I also appreciate the help provided by Krzysztof Charusta in the form of practical tutorials on the robot.

Special thanks to Dr. Federico Pecora and Dr. Amy Loutfi for their thoughtful consultation and support throughout my studies. My obligatory acknowledgement goes to the people of Sweden, the Swedish government and Örebro University, who gave me the opportunity to pursue my study program and provided the platform for the development of this research.

Most importantly, I would like to thank my family for their unconditional support, inspiration and love.


Contents

1 Introduction 1

1.1 Background . . . 2

1.1.1 Robot assisted inspection . . . 4

1.2 Research goal . . . 8

1.3 Thesis organization . . . 8

2 Preliminary study 9

2.1 Related study . . . 9

2.1.1 Automated visual inspection . . . 9

2.1.2 Video microscope design . . . 10

2.1.3 Workpiece registration . . . 14

2.1.4 Robot Control . . . 15

2.2 Hardware Description . . . 17

2.2.1 Robot used . . . 17

2.2.2 Video microscope . . . 20

2.3 Summary . . . 23

3 Developed System 25

3.1 System architecture . . . 25

3.1.1 Basic steps of proposed system. . . 25

3.2 Experimental setup . . . 29

3.2.1 Workpiece used . . . 29

3.2.2 Reference markers . . . 29

3.2.3 Old setup . . . 30

3.2.4 Video microscope. . . 31

3.2.5 Light diffusion for experimentation . . . 38

3.2.6 Camera XY frame . . . 40

3.3 Camera focus plane position estimation. . . 43

3.3.1 Tuner program . . . 43

3.3.2 TCP position to reference marker position calculation. . 45

3.3.3 Least erroneous camera focus plane points calculation. . 47


3.3.4 Camera CP, CPf and CPs calculation . . . 49

3.4 Video microscopy and inspection . . . 50

3.4.1 Free-form surface P, Pf, Ps calculation . . . 50

3.4.2 Path planning . . . 51

3.4.3 TCP orientation relative to focus plane . . . 55

3.4.4 Inverse kinematics calculation for TCP orientation. . . . 58

3.4.5 Trajectory file handling. . . 59

3.4.6 Visual inspection . . . 61

3.4.7 Limitations . . . 63

4 Experimentation 65

4.1 Inspection setup . . . 65

4.2 Training of system . . . 66

4.3 Testing of the system . . . 70

4.4 General test for microscope . . . 77

4.5 Analysis . . . 80

5 Conclusions 81

5.1 Summary . . . 81

5.2 Future work . . . 81

References 83

A Appendix 87

A.0.1 Robot Singularity . . . 88

A.0.2 Robot Configuration . . . 89


List of Figures

1.1 Micrography using IRB 140 robot. . . 2

1.2 Results inspection through 3D laser scanner. . . 4

1.3 Free-form object surface inspection using touch-probe. . . 5

1.4 Automatic Optical Inspection System (AOIS) . . . 6

1.5 AOI inspection environment model . . . 7

2.1 concept of WD, DOF, FOV and resolution in systems. . . 12

2.2 FOV of lens vs size of image sensor. . . 13

2.3 Primary principle and secondary principle points. . . 13

2.4 Kinematics model of the ABB IRB 140 robot . . . 17

2.5 Robot wrist coordinate system. . . 18

2.6 Axis rotation with respect to joints. . . 18

2.7 ABB IRB 140 robot work space . . . 18

2.8 Robot and robot controller architecture. . . 19

2.9 Robot coordinate system. . . 20

2.10 Video microscope. . . 20

2.11 Camera used during project development. . . 22

2.12 Camera used for final experiments. . . 22

3.1 Camera focus plane position estimation. . . 27

3.2 Video microscopy. . . 28

3.3 Basic Workpiece. . . 29

3.4 Reference Markers. . . 30

3.5 Old setup for microscope development. . . 31

3.6 Old system light diffusion technique. . . 32

3.7 Relation between object size and image. . . 33

3.8 Ball pointer roller-ball. . . 35

3.9 Millimeter scale under microscope. . . 35

3.10 20mm single extension tube. . . 36

3.11 All macro rings together. . . 37

3.12 50mm macro lens . . . 37


3.13 Microscope depth of field . . . 38

3.14 Light reflection from workpiece surface. . . 39

3.15 Light diffusion during experiments. . . 39

3.16 Light diffusion setup side view. . . 40

3.17 Micrograph after light diffusion. . . 40

3.18 A Human inspecting workpiece. . . 41

3.19 The illustration of XY frame. . . 42

3.20 Developed XY frame . . . 42

3.21 Inconsistent camera focus plane position measurement. . . 44

3.22 On-line TCP tuner Key mapping. . . 45

3.23 TCP basic measurements . . . 47

3.24 Vector V for TCP to reference marker position calculation . . . 47

3.25 The blueprint of vector V on the original workpiece attached to TCP . . . 48

3.26 Camera CP, CPf, CPs points . . . 50

3.27 support vector on a single surface patch. . . 52

3.28 Appearance of support vectors on the free form surface.. . . 53

3.29 Support vector on workpiece used for experiments. . . 53

3.30 P, Pf, Ps calculation and transformation . . . 54

3.31 microscopy of free form surface.. . . 56

3.32 Free-form surface after path planing. . . 56

3.33 Angle between different surface patches. . . 57

3.34 Optimal path planing on mapped grid. . . 57

3.35 Workpiece transformation from one frame of reference to another . . . 59

3.36 Control of the TCP relative to workpiece . . . 60

3.37 Different postures of robot TCP control relative to surface normal . . . 61

3.38 Reverse TCP calculation . . . 62

3.39 Two simulation steps, initial orientation to target orientation.. . 63

4.1 Testing setup. . . 66

4.2 The MODE of X values of all reference markers.. . . 68

4.3 The MODE of Y values of all reference markers.. . . 68

4.4 The MODE of Z values of all reference markers.. . . 69

4.5 Micrograph captured by tuner.. . . 69

4.6 Illustration of test reference marker . . . 72

4.7 Test result of the video micrography. . . 72

4.8 The original micrograph . . . 73

4.9 RGB equalization. . . 73

4.10 Bilateral filtering. . . 74

4.11 Gaussian convolution. . . 74

4.12 Magnitude of gradient vector . . . 75

4.13 Canny edge detection result . . . 75

4.14 Binary operation. . . 76

4.15 Fault highlight on original micrograph. . . 76


4.17 LCD panel under focus under microscope. . . 78

4.18 VLSI chip under focus under microscope. . . 78

4.19 Human hair on circuit board. . . 79

A.1 Singularity caused by axis 5 and axis 1. . . 88

A.2 Singularity caused by axis 5 and axis 6. . . 88

A.3 Singularity caused by axis 2, 3 and 5. . . 89

A.4 Arm configuration problem. . . 89


List of Tables

A.1 Factors in (HEART) . . . 87

A.2 Marlin F-033C (Allied Vision Technologies) camera specifications 90

A.3 DFK 41BF02 FireWire (The Imaging Source) camera specifications 91


List of Algorithms

1 Free-form surface P, Pf and Ps calculation . . . 51


Chapter 1

Introduction

This thesis presents an Automated Optical Inspection (AOI), or automated visual inspection, of free-form surfaces. The main goal of this research is to capture micrographs¹ of a free-form surface, as shown in Fig. 1.1, and detect defects from these micrographs using a video microscope, where the free-form surface is typically an irregularly shaped workpiece grasped in the hand of a high-precision articulated² robot.

The main motivation behind this research was to develop a complete automatic task that demonstrates AOI. The system should work for every newly introduced component with little effort. Completing all phases of such an inspection to validate the research sometimes goes beyond the scope of the proposed research. The developed system is based on a single-flow scenario, and all sub-tasks are either assumed or developed for demonstration of the automatic system.

¹ A micrograph is a fully focused photograph showing object contents at magnification > 1; see Section 3.2.4 for magnification.

² A robot consisting of links connected by two or more rotary joints in sequence.


Figure 1.1: The IRB 140 robot during optical inspection

1.1 Background

Free-form workpieces are components used in every industry. These components are essential parts of all home appliances and industrial machines. The most important parts are small, have complex shapes, are difficult to produce, and need to be highly accurate. During the production of these parts it is necessary to control their quality. A common inspection method


to ensure the quality of these workpieces is manual inspection by human inspectors with the bare eye or using magnifying lenses. This is costly and time consuming, and at the same time 100% correctness of the inspected object is not guaranteed. Ultra-small components are also practically difficult to inspect manually on a commercial basis. According to the Human Error Assessment and Reduction Technique (HEART), the nominal likelihood³ of failure

of a human agent during inspection is 0.27 [27]. Another problem with human inspection is that the experience of human inspectors varies from one person to another, which may lead to different levels of quality in the finished product. The two categories of automatic inspection methods are contact and non-contact. The touch probe shown in Fig. 1.3 is an example of a contact inspection system, which can only determine the product deviation from its given set of specifications [30]. Non-contact inspection systems include laser scanning, shown in Fig. 1.2, and visual inspection. This thesis lies in the domain of visual inspection [36].

Robotics technology has been changing the trends in inspection methods, and numerous efforts have been reported in the automotive industry. These systems mainly find the dissimilarity between a free-form part and its 3D model using a 3D laser scanner [1]. This technique has some similarity with the proposed research because both require a 3D model to inspect the workpiece; however, the two techniques are totally different in terms of inspection results. Another research effort, similar in some aspects, is a robot-based visual inspection method that mainly covers detection of defects on the surface or skin of an aircraft [20]. To our knowledge, none of the literature provides inspection techniques based on video microscopy⁴ of free-form workpieces.

Even though no direct comparison is possible between the previously discussed techniques and the proposed technique, most of the currently used inspection methods are limited to regular shapes and require a lot of effort for newly introduced manufacturing parts [37], whereas the proposed research is especially designed to work with free-form workpieces.

The introduction of video microscopy for inspection purposes makes our technique different from previous research. It is expected that the developed video microscopy technique might find unforeseen applications, because it is generic and can be used for objects of any shape. The field of optics covered by this thesis provides a good base for future research in this area.

The Automatic Optical Inspection System (AOIS) proposed in this research ensures the same level of quality for each object [33], so the inspection quality

³ See Appendix A.

⁴ Video microscopy is the technique of using a digital microscope to grab on-line frames of a focused subject.


Figure 1.2: Measurement results produced by a 3D laser scanner for a free-form part. The inspection points out areas of wear and tear, with a color range from red to blue (+1 to −1 error relative to the original model), on the 3D model of the tool surface that may require reconditioning [1].

could meet the standard of expert (human-eye) inspection. The AOI system comes at the end of the assembly line to distinguish between good and bad workpieces. AOI eliminates the need to appoint a human inspection expert in the related field [8]; instead, the best inspection rules can be added to a visual inspection expert system for a stable and higher level of quality in each finished product [14]. This could reduce total product cost and manufacturing time.

1.1.1 Robot assisted inspection

Industrial inspection of manufactured parts is divided into four main classes on the basis of workpiece size: ultra-small, small, medium and large workpieces. There is no sharp boundary between these classes, and a general rule is to make


Figure 1.3: A free-form manufactured workpiece inspected by a touch probe for surface error in terms of variation from the CAD model. This is also called measurement modeling [30].

this classification relative to the robot work-space⁵ and handling capabilities. For example, if a robot can only handle an object weighing up to 6 kg, then more than 6 kg on the TCP⁶ would lead to inaccurate results; it is better to put the workpiece on the ground and let the robot inspect it by moving the vision system around it. Similarly, a robot with a small working space cannot inspect a big fixed workpiece.

In this research we restrict ourselves to ultra-small and small components. Small workpieces have a high rate of change of surface angle, which leads to a very small surface-patch⁷ area that can only be well focused with a microscope [39].

The microscope is fixed on an XY table⁸ to achieve a better, adjustable position relative to the robot work space. The object is mounted on the TCP to move under the microscope. When a particular surface-patch is well focused under the microscope, a micrograph is grabbed and image processing operations are performed to detect defects on the surface. Fig. 1.5 shows the setup for robot-assisted inspection.

⁵ The work-space is the set of positions and orientations which the robot TCP can reach; see Fig. 2.7.
⁶ Tool center point of the robot; see Fig. 3.23.

⁷ A surface-patch is a unit area in the discretized representation of a 3D mesh.
⁸ See Chapter 3 for the camera fixed on the XY table.


1.2 Research goal

The main objective of this thesis is the demonstration of an Automatic Optical Inspection (AOI) system for the inspection of free-form workpieces using a video microscope. The work comprises:

• Stable video microscope.

• Accurate localization of the free-form surface.

• Accurate camera focus plane point estimation.

• Free-form surface support vector calculation.

• Optimal path-planning for the free-form surface.

• Video microscopy of highly reflective surfaces.

• Fault detection.

1.3 Thesis organization

The remaining chapters of this thesis are organized as follows. Chapter 2 provides an overview of the relevant background concepts needed for Chapter 3, along with a survey of existing techniques. Chapter 3 proposes our solution to the problem and provides a detailed description of the implementation procedure. Chapter 4 describes the test scenario of the developed system. Chapter 5 presents conclusions on the basis of the results and suggestions for future work.


Chapter 2

Preliminary study

This chapter provides the theoretical background and information about the methods and equipment used in this thesis, and should be read before proceeding to Chapter 3.

2.1 Related study

2.1.1 Automated visual inspection

Defect detection applications range from automotive parts to printed circuit board (PCB) inspection, because each application has specific requirements. Most of these inspection applications have been using a fixed camera and fixed product placement in front of the camera at a particular time. These techniques use hard-coded configuration data for an easy and quick start of inspection. Such customized techniques will not work for products different in shape, because the applications are designed for a specific product [12]. The manufacturer then needs to develop a completely new vision system for new products rather than reusing the previous one, which would require a lot of customization if used for new products. A product usually varies greatly from another in shape and size, which is why manufacturers are forced to design a complete new vision system for every new product rather than using the old vision system [40]. In applications like package bar-code reading, it does not matter if we change the package color, size or even shape; in the worst case we may need to manually change the camera orientation. However, it does matter for automotive parts inspection: the vision system in visual inspection of an automotive part is highly customized to a particular complex-shaped part, and new parts may have a design incompatible with the old vision system.

There has been a lot of research on automatic methods of product inspection. As discussed in Section 1.1, the automotive industry uses currently developed inspection techniques to detect part variation from the original design.


Automotive manufacturers are forced to develop this technique because car designs change very frequently and it is costly to change the whole inspection setup for newly introduced parts. Automated visual inspection frees manufacturers from worrying about the inspection of newly developed parts despite variation in shape.

Automated visual inspection guarantees that no special arrangements and settings are needed for products different in shape [23]. The main goal of robot-assisted visual inspection is to accommodate the dynamic maneuvers required by either the camera or the object itself for free-form object inspection. The optics and image sensor play a pivotal role in most computer vision applications, and their importance is unavoidable for applications involving image analysis. It is true that today's image processing techniques can enhance images distorted by a low-grade vision system, but when designing a new vision system all problems and their trade-offs should be considered. Without a solid understanding of optics and image sensors it is nearly impossible to assess the design feasibility of a new vision system.

2.1.2 Video microscope design

A video microscope is a vision system whose magnification power is greater than or equal to 1.

In this section, the different features of a video microscope are investigated for the development of the video microscope design, and it is discussed why each particular feature needs to be considered.

Light is a physical environmental factor for a vision system which is neglected in many applications. It may be possible to ignore lighting for some applications, but it is very important for a video microscope. A vision system involved in robot-assisted surgery must deal carefully with the design of the optical system, where magnification and the quality of image content are very important. The same is the case with our project, which has no choice other than meeting appropriate requirements. In the design process of a good vision system, light is the first thing to start from, rather than considering optics and image sensor first; without knowing the exact intensity of light, it is vague to work on the design of a microscope's optics and image sensor.

When it is required to focus on a particular surface under the video microscope, all features shown in Fig. 2.1 are important. The relationship between the total field of view (FOV) provided by the optics and the final field of view perceived by the image sensor is shown in Fig. 2.2. The video microscope can only produce a quality image if both optics and image sensor are compatible with each other;


good optics alone or a good image sensor alone cannot produce good results. Light plays a pivotal role because at the microscopic level the total surface area reflecting light is very small, which leads to a low-light condition in the photograph. It is possible to increase the aperture size to make a larger bundle of light rays available to the sensor, but aperture size is inversely related to the depth of field (DOF) shown in Fig. 2.1: the smaller the aperture, the greater the DOF. A small DOF makes it difficult to focus on a subject, and very high accuracy is required from the robot to focus the subject under the video microscope.

A spot light can be used to illuminate the subject surface, but the spot light will reflect directly from the surface, which makes it difficult to identify defects from the micrograph. In Section 3.2.5 it is discussed how a light diffusion method is used to overcome the problem of light reflection caused by the spot light. It is also possible to lengthen the exposure (slower digital shutter) to increase the light collected by the image sensor, and there are image sensors available which work better in low light, but a long exposure can drastically decrease the frame rate and ultimately increase the total time required for video microscopy.

It is important to know which image sensor is better for our application. Overall, CCD sensors are better in low-light conditions than their CMOS counterparts. Another issue is the WD² and the FOV³; these two are directly proportional to each other. If a small FOV is required, as in our case, then the WD will also decrease and extra macro rings will be required, as mentioned in Section 3.2.4. The working distance is important because we cannot perform microscopy very near to the object: the object has to move continuously under the microscope, and with a small working distance the microscope or robot can collide. It is obvious that there are many trade-offs between all these features of video-microscope-type vision systems.
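To make the DOF/aperture trade-off concrete, the following sketch evaluates a standard close-up depth-of-field approximation. The formula and the numeric values (f-number, circle of confusion, magnification) are illustrative assumptions and are not taken from this thesis.

```python
# A standard close-up depth-of-field approximation (not from the thesis):
# total DOF ~ 2 * N * c * (1 + m) / m^2, with N the f-number, c the acceptable
# circle of confusion and m the magnification. Values are illustrative only;
# they show why the DOF stays well under a millimeter at this magnification.
def total_dof_mm(f_number: float, coc_mm: float, magnification: float) -> float:
    return 2.0 * f_number * coc_mm * (1.0 + magnification) / magnification ** 2

# f/8, circle of confusion equal to one 4.65 um pixel, magnification about 1.7
print(f"{total_dof_mm(8.0, 0.00465, 1.7):.3f} mm")  # roughly 0.07 mm
```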

Compound lens system

A compound lens is a sequential arrangement of multiple single lenses on a common axis; a single lens is also called a thin lens. Interestingly, most computer vision systems use camera lenses which are combinations of many thin lenses [?], except for optical detection, which physically uses one thin lens; the details of these systems are outside the scope of our discussion. It is nearly impossible to find the parameters of each thin lens because they are not provided by the manufacturer. However, if the exact parameters of each thin lens in the compound system are given, then it is trivial to find the focal point. For that purpose we can use the simple optics formula:

² It is explained in Section 2.1.2 under the subheading Working distance.
³ See Figure 2.2.


Figure 2.1: The basic concepts of field of view (FOV), depth of field (DOF), resolution, working distance, sensor size, camera position and optics position in the system (Edmund Optics, 2010).

1/f = 1/S1 + 1/S2    (2.1)

where f is the focal length of the lens, S1 is the distance from the primary principal point to the object, and S2 is the distance from the secondary principal point to the image sensor. The object is well focused on the image sensor if equation 2.1 is satisfied. The concepts of the primary and secondary principal points can be seen in Fig. 2.3. Fortunately, a compound lens can be modeled as a single thin lens [2]. Fig. 2.3 shows that a compound lens contains two principal points: one is called the primary principal point and the other the secondary principal point. The focal length of a compound lens is the distance


Figure 2.2: The total field of view gained by the vision system depends on the size of the image sensor. Usually it is recommended that the image sensor be smaller than the lens field of view, but a very small image sensor in the vision system indicates an incompatible vision system design, because lenses with a large FOV are more expensive than those with a small FOV.

from the secondary principal point to the image sensor. The WD of a compound lens is the distance from the secondary focal point to the object.
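As a minimal illustration of equation 2.1, the sketch below solves the thin-lens relation for the image-side distance S2 and the resulting lateral magnification. The numeric values are hypothetical and are not measurements from this setup.

```python
# Illustrative sketch (not from the thesis): solving 1/f = 1/S1 + 1/S2 for the
# image-side distance S2 and computing the lateral magnification S2/S1.
def image_distance(f_mm: float, s1_mm: float) -> float:
    """Return S2 from 1/f = 1/S1 + 1/S2 (all distances in mm)."""
    return 1.0 / (1.0 / f_mm - 1.0 / s1_mm)

def magnification(f_mm: float, s1_mm: float) -> float:
    """Lateral magnification of a thin lens, |S2 / S1|."""
    return abs(image_distance(f_mm, s1_mm) / s1_mm)

if __name__ == "__main__":
    f, s1 = 50.0, 80.0          # hypothetical 50 mm lens, object at 80 mm
    print(f"S2 = {image_distance(f, s1):.1f} mm, M = {magnification(f, s1):.2f}")
```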

Figure 2.3: The primary and secondary principal points, which are important for understanding compound lens systems.

Working distance

The working distance (WD) is simply the distance between the object and the lens, or the distance between the objective lens and the focus plane. Working distance is inversely related to magnification: the smaller the working distance, the larger


the magnification [?]. In this thesis we would ideally like a large working distance together with a large magnification; since high magnification is unavoidable, an optical system with a smaller WD is selected. In some applications a small WD is not acceptable, and lenses with high magnification and a reasonable working distance are required. The solution is to use modern lenses, but these are more costly than regular lenses. For the prototype stage it is justified to use the developed microscope, with its small WD, for small workpieces, even though a larger WD is usually preferred for microscopy.

The working distance is an important factor when designing the optical system, because without knowing the available space between the camera and the object it is not possible to model the features necessary for the optical design. In machine vision applications, the working distance matters for two reasons: first, a secure distance between the object and the lens is needed for safe maneuvering; second, the lighting condition suffers if the lens is very near to the object, since it may block light rays from the light source to the object surface, which can produce poor⁴ image results. In this thesis the working distance is very important

because the object has to move very near to the lens due to the limitations of the current optical system. The WD for the experiments is approximately 65 mm; the working distance can be increased by sacrificing magnification.

2.1.3 Workpiece registration

Workpiece registration is an important concept for this project; without a registration algorithm it is not possible to localize the workpiece in the robot coordinate system [15]. The Least Square Fit of Point Clouds (LSFPC) algorithm [43] is used for the localization of the workpiece [6]. The Iterative Closest Point (ICP) algorithm can also be used for registration. Although ICP is very popular in the robotics community for localization of 3D point clouds, it is best suited to fitting two point clouds and finding the transformation when prior information about the correspondence order of the 3D scans is unknown [13], and when the number of points and covered area can vary between the two scans. ICP provides extra functionality which is not required by the current problem [16]; it uses a k-nearest-neighbour (kNN) search which is unnecessary and time-costly here.

In our case, from source to target, both sides have the same number of points and the order of the points is also known, so ICP is not a good choice for our problem [7]. In fact, the problem requires a much simpler algorithm, the Least Square Fit of Point Clouds [19], which has proven its strength in the area of rigid


body localization [32]. In this project we use the concept of reference markers⁵, which are similar to the point cloud of an object; the only difference is that these are point positions at well-known locations on the surface of the workpiece. They could be replaced by a 3D laser scanner⁶ to get the point cloud of the object for localization purposes [9]. Our main purpose is to localize the workpiece accurately in the robot coordinate system, regardless of which technique we use.

Let us suppose that we have the positions of well-dispersed points on the surface of the workpiece 3D mesh model. The details of the algorithm can be consulted in [43], pages 344-346.

x_1, ..., x_m, x_i ∈ R^n, 1 ≤ n ≤ 3

Now we measure the same points, in the same order, on the surface of the real workpiece:

ξ_1, ..., ξ_m, ξ_i ∈ R^n, 1 ≤ n ≤ 3

We want to find the frame transformation between the ideal CAD-model points and the measured points of the real workpiece: a translation vector t and an orthogonal matrix Q with det(Q) = 1 and Q^T Q = I [43], such that

ξ_i = Q x_i + t,  for i = 1, ..., m.

The above is an overdetermined system of equations for m > 6, and in our case the values measured by the machine inevitably contain error. So it becomes the least squares problem

ξ_i ≈ Q x_i + t.

In return, the LSFPC algorithm provides us with the rotation and translation, which can be applied to the problem in Section 3.4.3 for successful registration.
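The thesis does not list the LSFPC implementation; the following is a minimal sketch of a least-squares rigid fit between corresponding point sets in the same spirit, assuming NumPy and using the common SVD-based closed-form solution. Variable and function names are ours, not the thesis's.

```python
# Illustrative sketch of a least-squares rigid fit between corresponding point
# sets, in the spirit of the LSFPC step described above (not the thesis code).
# x: ideal CAD-model marker positions, xi: measured positions, both (m, 3).
import numpy as np

def fit_rigid_transform(x: np.ndarray, xi: np.ndarray):
    """Return (Q, t) minimizing sum ||Q @ x_i + t - xi_i||^2 with det(Q) = 1."""
    cx, cxi = x.mean(axis=0), xi.mean(axis=0)          # centroids
    H = (x - cx).T @ (xi - cxi)                        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    Q = Vt.T @ U.T
    if np.linalg.det(Q) < 0:                           # enforce a proper rotation
        Vt[-1, :] *= -1
        Q = Vt.T @ U.T
    t = cxi - Q @ cx
    return Q, t
```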

2.1.4 Robot Control

Robot control here means controlling the motion of the TCP of the ABB IRB 140 robot. There are two motion commands available to control the motion of the robot TCP using RAPID: MOVEL, which moves the robot TCP linearly, and MOVEJ, which moves the robot joints irrespective of the TCP path. The practical difference between the two commands is that with MOVEL the 3D

⁵ See Section 3.2.2 for more detail on reference markers.
⁶ Which provides an accuracy of 0.6 mm.


position in the robot coordinate system is provided and the robot controller calculates the joint angles to achieve that position, whereas in the case of the MOVEJ command we have to use our own code to calculate the joint angles for a particular movement of the robot TCP.

It does not matter which command we use; what matters is the feasibility of using such commands. The easy choice to control the robot is the MOVEL command, but when used from RAPID code it can cause singularity and configuration problems. Because of the singularity⁷ problem stated in Appendix A.0.1 and the configuration problem

stated in Appendix A.0.2, the MOVEJ command is used in the trajectory file to avoid these problems. If a singularity occurs with MOVEL, the robot raises an exception on the FlexPendant and the controller stops moving the robot TCP. The robot configuration is important because the robot TCP can reach the same position with a number of arm configurations, as shown in Figure A.4. When a singularity occurs, the robot controller cannot just randomly choose one configuration and continue, but stops due to safety measures. It is possible to remove the configuration safety constraints, but this might be dangerous in our situation and could cause damage to the robot or equipment. Although moving the robot linearly is trivial, to avoid singularity problems we preferred to work in joint space⁸; for that

purpose a simulator especially developed for the ABB IRB 140 robot arm by Dr. Dimitar Dimitrov of Örebro University is used.

Our main routine runs from Matlab, and it is not possible to move the robot directly from Matlab. The trajectory file mechanism is therefore used to integrate the Matlab code with the robot control code. A trajectory file is a file containing a large number of MOVE commands, executed together to move the robot continuously. More detail about the development and operation of the trajectory file can be found in Section 3.4.5. It is naturally easier to work on the simulator than on the real robot, especially to avoid any damage to the robot or to the camera, which is very close to the TCP and the object. Fig. 2.4 shows the kinematics model of the ABB IRB 140 robot used to calculate the joint angles for a particular TCP movement. The algorithm used in the simulator can be seen on the official web page⁹ of Dr. Dimitar Dimitrov.

The algorithm developed by Dr. Dimitar Dimitrov is used to get all values of the joint angles q ∈ R^n when the TCP is required to move from an initial position to a target. Let F(q) denote the forward geometric model (forward kinematics

⁷ A singularity is a specific orientation of the robot axes which stops the robot from moving further once reached. See Figure 2.6 for the concept of robot axes.

⁸ Also called forward kinematics.


for positions) of the ABB IRB 140 robot. Let p be a desired position for the TCP. Consider the following equality constraint:

T(q) = F(q) − p = 0,    (2.2)

where T(q) = 0 defines the target position of the TCP.

Let T_k(q) = 0 be the k-th constraint:

T_k(q) = 0,  k = 1, ..., K.    (2.3)

Once the joint angles q are calculated by the robot simulator, they are formatted as strings ready to put into the trajectory file.
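The simulator's actual algorithm is not reproduced here; the following is only a generic sketch of how the constraint T(q) = F(q) − p = 0 can be driven to zero numerically with a damped least-squares iteration. The forward_kinematics argument is a placeholder for F(q), and all names and parameter values are our assumptions.

```python
# Generic sketch (not the simulator's algorithm): solving the position
# constraint T(q) = F(q) - p = 0 by damped least-squares iteration, with a
# numerically estimated Jacobian. `forward_kinematics` stands in for F(q).
import numpy as np

def numerical_jacobian(f, q, eps=1e-6):
    f0 = f(q)
    J = np.zeros((len(f0), len(q)))
    for i in range(len(q)):
        dq = q.copy(); dq[i] += eps
        J[:, i] = (f(dq) - f0) / eps
    return J

def solve_ik(forward_kinematics, q0, p, damping=1e-2, tol=1e-6, max_iter=200):
    q = np.asarray(q0, dtype=float).copy()
    for _ in range(max_iter):
        err = p - forward_kinematics(q)            # residual, i.e. -T(q)
        if np.linalg.norm(err) < tol:
            break
        J = numerical_jacobian(forward_kinematics, q)
        dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(J.shape[0]), err)
        q += dq
    return q
```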

Figure 2.4: Kinematics model of the ABB IRB 140 robot (ABB, 2004).

2.2 Hardware Description

2.2.1 Robot used

The robot used for this project is the ABB IRB 140, which has six rotary joints, shown in Figure 2.6. The TCP of the robot moves according to the wrist coordinate system shown in Figure 2.5. The robot working space, shown in Figure 2.7, is also an important concept: it allows us to understand the maneuvers the robot can produce to achieve the given tasks, and it is also important for the design of the XY frame described in Section 3.2.6. The robot moves its TCP with a positional tolerance of ±0.03 mm, which may be affected if the weight on the TCP exceeds 6 kg.


Figure 2.5: Wrist coordinate system of the ABB IRB 140 robot (ABB, 2004).

Figure 2.6: Rotation of robot axis with respect to their joints (ABB, 2004).

Figure 2.7: Work space of ABB IRB 140 robot arm (ABB, 2004).

Robot controller

Like every advanced robot, the IRB 140 has its own controller, called IRC5. Fig. 2.8 shows the interaction with the IRC5. The RAPID language is specially designed to work with the family of ABB robots. The standard way to write RAPID


code is RobotStudio. RobotStudio is software provided by ABB for RAPID program development. RobotStudio has to be installed on a computer directly connected to the robot controller with a LAN cable; that computer directly attached to the robot controller is also called the robot server. RAPID code can also be written using the FlexPendant, which is an important part of the robot controller, but this is more difficult than using RobotStudio because of the better HCI¹⁰ provided by RobotStudio. It is also important to know that the final

run of the code is only possible with the help of the FlexPendant. However, the RAPID code itself has many other options for running the instruction set. The tuner program described in Section 3.3.1 uses an open socket available from the robot server to send MOVE commands from the node to the controller. Another way is to upload the trajectory file to the robot controller and then run that file from RAPID code, as described in Section 3.4.5.

Figure 2.8: Robot and robot controller interaction architecture.

Robot coordinate system

Throughout this thesis we use the important term robot coordinate system, shown in Figure 2.9. The robot coordinate system is the robot's reference for TCP movement: every time the robot moves the TCP, it uses the robot coordinate system as reference. RAPID supports many coordinate systems, but the MOVEL and MOVEJ commands work in the robot coordinate system, so it is important to consider the robot coordinate system while writing MOVE instructions.


Figure 2.9: Coordinate system of the ABB IRB 140 robot arm (ABB, 2004).

2.2.2 Video microscope

The video microscope is a combination of a compound lens, shown in Fig. 3.12, a FireWire digital camera, shown in Fig. 2.11 and Fig. 2.12, and extension tubes or macro rings, shown in Figure 3.10. The video microscope is mounted on the XY frame mentioned in Section 3.2.6.

Figure 2.10: Video microscope assembled with one of the cameras used for experiments.


Cameras

A total of two cameras were used during project development and the experiments. The camera used for project development is the Marlin F-033C from AVT, shown in Figure 2.11. Only this camera was available for the initial experiments and testing, and it was soon realized that a better camera was needed to capture more detail. As discussed in Section 2.1.2, good optics alone are not enough for the video microscope; the image sensor was the main issue with this camera. In fact, the Marlin F-033C is not designed for our project: its pixel size is 9.9 µm × 9.9 µm. For microscopic imaging we need a finer image sensor resolution, meaning a smaller pixel size, to capture more detail of the subject. So for the final experiments the DFK 41BF02 camera, shown in Figure 2.12, is used. Its pixel size is 4.65 µm × 4.65 µm, which is better than that of the previous camera.
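A rough way to see why the smaller pixels matter is to divide the pixel pitch by the magnification, which approximates the object detail covered by one pixel. The sketch below uses the magnification of about 1.7 derived in Section 3.2.4 and is only a first-order approximation of ours (it ignores lens resolution and colour sampling).

```python
# Rough sketch (not from the thesis): object detail covered by one pixel is
# approximately pixel pitch / magnification, so the 4.65 um pixels resolve
# finer detail on the workpiece than the 9.9 um pixels at the same optics.
def object_sampling_um(pixel_pitch_um: float, magnification: float) -> float:
    return pixel_pitch_um / magnification

for pitch in (9.9, 4.65):  # Marlin F-033C vs DFK 41BF02 pixel pitches
    print(f"{pitch} um pixel -> {object_sampling_um(pitch, 1.7):.1f} um on the object")
```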


Figure 2.11: Camera used during the development of the project.


2.3 Summary

This chapter discussed the techniques which are prerequisites for this thesis research. The important concepts for the design of the video microscope, concerning optics and the image sensor, were presented. The most important theory about robot control and workpiece registration was discussed, with the necessary formulation and algorithms. A brief introduction was given to the hardware available for the development of the video microscope and the inspection system.


Chapter 3

Developed System

In this chapter we propose a solution for automatic visual inspection of free-form workpieces. First, the steps of a complete inspection system and a quick introduction to the developed system are presented; then, after explaining the experimental setup, all steps of the developed system are explained in their place in the project. Video microscopy of a free-form surface requires position estimation of the camera focus plane points, so before discussing the developed video microscopy technique we first explain the position estimation of the camera focus plane points. Then the video microscopy technique and visual inspection are explained.

3.1 System architecture

The main goal of the inspection system is to capture well-focused micrographs of the free-form surface. For that reason the exact position of the camera focus plane is required: the camera focus plane is used as the target to which every surface normal of the free-form workpiece is aligned for focusing. The micrograph of each particular surface patch is then used by the image processing module to detect defects according to user-defined criteria.

3.1.1 Basic steps of proposed system

It is very important to understand the place of the proposed research within the complete inspection task. Nowadays, 3D models of industrial workpieces are developed before the workpieces themselves; in this situation the first step, obtaining the 3D model of the workpiece, can be skipped and workpiece video microscopy is possible by following the remaining steps. The first step was developed by Robin Reicher in his master's thesis "Robot based 3D Scanning and Recognition of Workpieces". The technique developed in that thesis mainly constructs a 3D mesh of the free-form workpiece [31] and returns the surface normal vectors of the discretized facets of the constructed 3D mesh


as output [28]. This can be seen in Figure 3.1 and Figure 3.2, which represent the two main modules of our thesis; both require the surface normal vectors from the technique developed by Robin Reicher.

• 3D mesh construction from the free-form workpiece (Robin Reicher's master's thesis).

• The proposed research assumes that the object is grasped¹ in the hand of the robot; however, the orientation of the workpiece is unknown.

• Camera focus plane position estimation, shown in Figure 3.1 and explained in Section 3.3.

• Video microscopy, shown in Figure 3.2 and explained in Section 3.4, comprising:

  – Free-form surface P, Pf and Ps calculation, defined in Section 3.4.1.

  – Generation of the shortest motion path for the robot trajectory, discussed in Section 3.4.2.

  – TCP position calculation for the paths in the motion plan list, defined in Section 3.4.3.

  – Calculation of inverse kinematics for all calculated TCP positions, mentioned in Section 3.4.4.

  – Trajectory file generation and uploading to the controller, described in Section 3.4.5.

  – Visual inspection, including image acquisition and fault detection, given in Section 3.4.6.


Figure 3.1: Architectural flow diagram of the camera focus plane position estimation process.



3.2 Experimental setup

3.2.1 Workpiece used

For the demonstration of the developed system a free-form workpiece is used, shown in Figure 3.3. Even though the workpiece looks relatively simple, its shape fulfills the criteria required to test the developed technique. The simple workpiece is selected to minimize the error that occurs when obtaining a 3D mesh from the real object; as mentioned in Section 2.1.2, the proposed research requires a highly accurate workpiece because of the limited depth of field provided by the vision system. The model of the workpiece is developed in SolidWorks and then given as input to the technique developed by Robin Reicher, which in return provides the surface normal vectors of the workpiece mesh model.

Figure 3.3: The model of the workpiece developed in SolidWorks for experimentation.

3.2.2 Reference markers

Reference markers are small squares of paper of size 0.6 × 0.6 × 0.1 mm, each printed with a unique numeric digit. They are attached to the surface of the real workpiece at well-dispersed locations; there are 8 reference markers on the workpiece surface. In ideal conditions only 3 positions on the surface of the workpiece are required to estimate the exact orientation of the workpiece in the robot coordinate system shown in Figure 2.9, but in reality more than 3 positions are required to compensate for


the error in the exact position calculation of the reference markers described in Section 3.3. The reference markers are used to estimate the precise position of the camera focus plane and the exact position of the workpiece on axis 6, which is the axis of the TCP. The position of the workpiece on the TCP is used to reorient the TCP for video microscopy, as discussed in Section 3.4.3.

After putting physical markers on the workpiece for registration purposes, as shown in Fig. 3.4, the things we already know are:

1. The position of each reference marker in the workpiece coordinate system, from the 3D model.

2. The roughly measured (imprecise) position of the workpiece attachment on the TCP.

Figure 3.4: Reference markers attached on the surface of the workpiece.

3.2.3 Old setup used for video microscope design and development

Figure 3.5 shows the experimental setup used for the design and development of the video microscope. This setup was developed to avoid involving the robot in the initial stage of the project. A stand in the setup holds the video microscope in a fixed position at a fixed distance from the workpiece. Once the required results of the microscope were accomplished, the use of this setup was discontinued. Later the camera was placed on the XY frame defined in Section 3.2.6 and the workpiece was fixed on the robot TCP; the reason why the camera is fixed and the workpiece is mounted on the robot TCP is also given in Section 3.2.6. It was realized from the initial experiments that the video microscope requires diffused light. In the old experimental setup a simple 80 gsm A4 paper sheet was


used to diffuse the light, as shown in Fig. 3.6. The light diffusion paper was later replaced with a light diffusion cloth because of the movement required by the robot TCP, as given in Section 3.2.5.

Figure 3.5: Old setup used for video microscope development.

3.2.4 Video microscope

Magnification of microscope

Magnification is the relationship between image size and object size; Fig. 3.7 illustrates this relation.


Figure 3.6: Light diffusion in the old experimental setup using an 80 gsm A4 paper sheet around the fixed object.


Figure 3.7: The relation between object size and image size in terms of magnification.

When we calculate the magnification of a lens we have the following parameters:

Height of image (vertical size of image) = H
Width of image (horizontal size of image) = W
Height of object (vertical size of object) = h
Width of object (horizontal size of object) = w

The magnification can be defined as:

Magnification (vertical) = size of image (vertical) / size of object (vertical)    (3.1)

Magnification (horizontal) = size of image (horizontal) / size of object (horizontal)    (3.2)

Let us take the example of our system. We are currently using an image sensor of size 1/2″ with a resolution of 1280 × 960, where each pixel is of size 4.65 µm × 4.65 µm. The horizontal size of the sensor can be expressed as

1280 × 4.65 µm = 5.952 mm (horizontal size of sensor)

and the vertical size as

960 × 4.65 µm = 4.464 mm (vertical size of sensor).


The maximum size of object that can be seen with the 4.464 mm × 5.952 mm sensor using the current lens is the following:

Height of object (vertical size of object) = 2.6 mm
Width of object (horizontal size of object) = 3.55 mm

Using equations 3.1 and 3.2, the magnification of the video microscope is

Magnification (horizontal) = 5.95 mm / 3.55 mm = 1.67605634
Magnification (vertical) = 4.557 mm / 2.6 mm = 1.75269231

Theoretically both magnifications should be the same, but there is a difference between them because of the difference between the horizontal and vertical placement of the sensor pixels; the details are outside the scope of this topic. The resulting magnification achieved by the video microscope can be appreciated against a real scale in Fig. 3.9, and Fig. 3.8 shows the roller-ball of a 0.7 mm ballpoint pen.
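The arithmetic above can be reproduced in a few lines; the sketch below simply applies equations 3.1 and 3.2 to the values quoted in the text and is only an illustration.

```python
# Small sketch reproducing the magnification arithmetic above (values taken
# from the text; the functions are just equations 3.1 and 3.2).
def sensor_size_mm(pixels: int, pixel_pitch_um: float) -> float:
    """Physical sensor extent in mm for a given pixel count and pitch."""
    return pixels * pixel_pitch_um / 1000.0

def magnification(image_size_mm: float, object_size_mm: float) -> float:
    """Magnification = image size / object size."""
    return image_size_mm / object_size_mm

print(sensor_size_mm(1280, 4.65))           # ~5.952 mm horizontal sensor size
print(magnification(5.95, 3.55))            # ~1.676 horizontal magnification
print(magnification(4.557, 2.6))            # ~1.753 vertical magnification
```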


Figure 3.8: Ballpoint pen roller-ball focused under the video microscope.

Figure 3.9: Millimeter scale focused under the microscope.

Macro rings or extension tube

The extension tube or macro ring, shown in Fig. 3.10, is an important component of the video microscope. Recall the focal length from Section 2.1.2 and


the magnification from Section 3.2.4: there is a direct relationship between the focal length and the magnification of the macro lens. The macro lens used is described in Section 3.2.4; unlike normal camera lenses, the magnification of a macro lens can be increased by increasing the focal length. Most applications with high magnification requirements insert macro rings between the lens and the camera to increase the magnification; however, such magnification is not achievable with ordinary camera lenses, only with macro lenses. To assemble the current microscope we used 85 mm of extension, which is a combination of several sub macro rings, as shown in Fig. 3.11.

Figure 3.10: 20mm extension tube.

Macro lens

The lens used for the development of the video microscope is a 50 mm TV lens that is macro-enabled. Macro-enabled means that it is possible to increase the magnification by increasing the focal length. There is a manufacturing design difference between a simple lens and a macro lens, but this discussion is outside the scope of this thesis. The lens used in the video microscope is presented in Fig. 3.12. The real focal length of the lens is 50 mm, which is increased by adding the extra macro rings, as discussed in Section 3.2.4.
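The thesis does not quantify how much magnification the rings add. A common thin-lens rule of thumb, stated here as our assumption (it holds roughly when the lens is focused near infinity), is that the added magnification is the extension divided by the focal length, as sketched below. Interestingly, 85 mm of extension on a 50 mm lens gives about 1.7, the same order as the magnification measured in Section 3.2.4.

```python
# Hedged rule-of-thumb sketch (not stated in the thesis): magnification added
# by an extension tube is roughly extension / focal_length for a lens focused
# near infinity.
def added_magnification(extension_mm: float, focal_length_mm: float) -> float:
    return extension_mm / focal_length_mm

print(added_magnification(85.0, 50.0))   # 85 mm of rings on the 50 mm lens -> 1.7
```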


Figure 3.11: 4×5 mm and 3×20 mm macro rings provide a total of 85 mm extension in focal length.

Figure 3.12: 50mm macro lens used in project development and experiments.

Microscope DOF

The depth of field provided by the video microscope can be seen in Fig. 3.13; it is less than a millimeter. The scale shown in the figure may suggest a


DOF larger than the actual one, because the scale was not placed perpendicular to the lens axis, so the angle of the scale makes the DOF appear larger than it actually is. Once the macro rings are used they eliminate the infinity focus that is usually part of a normal camera lens; only a limited DOF is then available from the lens.

Figure 3.13: The area of the scale in focus roughly shows the video microscope DOF.

3.2.5 Light diffusion for experimentation

Light diffusion is a method to diffuse the light rays before they reflect from the surface. Direct light was used to tackle the low-light condition described in Section 2.1.2, but light reflects directly from highly reflective surfaces, and this reflection disturbs the visual content of the image. It was revealed during the experiments with the old setup that the diffusion of the light should be done near the workpiece: light diffused close to the light source is not well diffused, and ambient light also becomes part of the reflection. If diffusion occurs very near to the surface, the results are appreciably better. The setup used for light diffusion is presented in Fig. 3.15 and Fig. 3.16. The diffusion cloth diffuses the direct light reflecting from the surface. A cloth is used for diffusion because the TCP needs to move continuously in front of the camera while the camera is fixed; the diffusion cloth, connected to both the fixed camera and the moving robot arm, provides flexibility for robot movement while acting as a barrier for direct light rays.

It is difficult to find faults in the micrograph shown in Fig. 3.14 because of the direct light reflection and ambient light reflection. The reflective


surface in fact reflects everything from the surroundings, including the faults on the surface, and it is difficult to distinguish the defects from the environment reflections. The result for the same surface after light diffusion can be seen in Fig. 3.17.

Figure 3.14: Light reflection from workpiece surface.


Figure 3.16: Light diffusion setup from side view.

Figure 3.17: Micrograph of workpiece surface after light diffusion.

3.2.6 Camera XY frame

The developed technique is inspired by the human inspection model shown in Fig. 3.18. In this method we use a fixed camera because of its resemblance to human inspection and the best utilization of the robot workspace. Imagine a human inspecting a fixed object by walking around it. In another situation,


consider that the same object is in the hand of the human inspector and can be inspected easily by just rotating it in the hand. The technique that inspired this thesis uses the latter situation. Now consider a real example: suppose a metallic cube is fixed on a table in front of the robot and needs to be scanned from four sides. If the camera is mounted on the TCP, then in order to focus the camera the robot must position it perpendicular to each of the four faces of the cube, and this requires a large robot work space. Now consider that the object is mounted on the TCP and the camera is fixed; then in order to focus each face of the cube we only need to rotate the cube in the same position, which is trivial compared to the previous scenario.

It is clear from both examples that the camera should be fixed and the workpiece should be moved by the robot. For the current articulated robot it is difficult to estimate the best work space, which is the obstacle to finding a fixed position for the camera: the camera should be fixed in the best place in terms of utilization of the robot work space. The solution to this problem is the XY table. We had a rough idea about the region where the robot work space can be best utilized; the target is then to bring the camera node of the XY frame into this best work space of the robot.

Figure 3.18: A human inspecting workpiece.

It is then possible to adjust the camera easily to a very accurate position. The XY table used for the camera and its workspace is shown in Figure 3.19; the workspace in the figure is referred to as the accessible area of the camera node. Figure 3.20 shows the developed XY table in the inspection system.


Figure 3.19: The illustration of XY frame.

(63)

3.3. CAMERA FOCUS PLANE POSITION ESTIMATION 43

3.3 Camera focus plane position estimation

Any surface on the camera focus plane should be clearly visible through the camera. The camera focus plane is centered on the lens axis and perpendicular to the camera focus plane vectors shown in Fig. 3.26. The position of that plane is required to focus any given surface under the video microscope [17], so an exact position estimate of that plane is indispensable. This whole section is dedicated to the accurate estimation of the position of the camera focus plane; the complete architectural diagram of the process can be seen in Fig. 3.1. A manual way to measure the camera focus plane position is shown in Fig. 3.21, but the depth of field of the video microscope is very small and a manually measured position can lead to an error of more than 1 mm, whereas the system cannot tolerate errors of millimeters. This technique is therefore developed for accurate measurement of the camera focus plane position.

3.3.1 Tuner program

The tuner program is a software module which provides on-line robot interaction to the user for moving the workpiece attached to the TCP. The operator moves the robot TCP with the tuner program to bring a given reference marker into focus under the video microscope, as shown in Fig. 3.1. To focus a particular reference marker, the robot TCP must be moved gradually so that the whole movement occurs on-line while the live video stream is displayed in the visualizer. The visualizer is an open-source program developed by Allied Vision Technologies for the Marlin F-033C camera; it also works fine with the DFK 41BF02 camera and provides a live video stream from the camera in a GUI. When the reference marker is well focused, the operator stops moving the TCP from the tuner program and records the position for that reference marker for further use, as described in Section 3.3.2.

There are many reasons to develop the tuner program. One reason is the unavailability of any built-in RAPID routine that can be used to change the robot position in this way. At the same time, the tuner program can reorient the TCP both linearly and in joint space. Fig. 3.22 shows the keys used for robot TCP movement linearly and in joint space: the keys (W, A, S, D, Q, Z) and (w, a, s, d, q, z) are used to move linearly, while (I, J, K, L) and (i, j, k, l) are used to move the TCP in joint-angle space. There are two step modes: one mode moves the TCP 1 mm for every key press, while the other moves the TCP 0.1 mm for fine movement. The key mapping is listed below.

S, s = Move TCP in +ve x-axis, 1 mm per step (0.1 mm per step with shift).
W, w = Move TCP in -ve x-axis, 1 mm per step (0.1 mm per step with shift).
D, d = Move TCP in +ve y-axis, 1 mm per step (0.1 mm per step with shift).
A, a = Move TCP in -ve y-axis, 1 mm per step (0.1 mm per step with shift).


Figure 3.21: Inconsistent camera focus plane position measurement in robot coordinate system.

Q, q = Move TCP in +ve z-axis, 1 mm per step (0.1 mm per step with shift).
Z, z = Move TCP in -ve z-axis, 1 mm per step (0.1 mm per step with shift).

I, i = Rotate axis 5 in the +ve direction, one increment per step (0.1 increment with shift).
K, k = Rotate axis 5 in the -ve direction, one increment per step (0.1 increment with shift).
J, j = Rotate axis 6 in the +ve direction, one increment per step (0.1 increment with shift).
L, l = Rotate axis 6 in the -ve direction, one increment per step (0.1 increment with shift).
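
To make the key mapping concrete, the following is a minimal sketch of how a tuner-style key dispatch could be written in C++. The functions moveLinear and rotateJoint are hypothetical placeholders for the real communication layer between the tuner program and the robot controller, and the assignment of shifted keys to the fine 0.1 step mirrors the two modes described above.

#include <cctype>
#include <iostream>

// Hypothetical stand-ins for the real robot communication layer.
void moveLinear(char axis, double offsetMm) {
    std::cout << "jog " << axis << " by " << offsetMm << " mm\n";
}
void rotateJoint(int joint, double offset) {
    std::cout << "rotate axis " << joint << " by " << offset << "\n";
}

// Dispatch one key press: shifted (upper-case) keys use the fine 0.1 step,
// plain (lower-case) keys the coarse 1.0 step.
void handleKey(char key) {
    const double step = std::isupper(static_cast<unsigned char>(key)) ? 0.1 : 1.0;
    switch (std::tolower(static_cast<unsigned char>(key))) {
        case 's': moveLinear('x', +step); break;
        case 'w': moveLinear('x', -step); break;
        case 'd': moveLinear('y', +step); break;
        case 'a': moveLinear('y', -step); break;
        case 'q': moveLinear('z', +step); break;
        case 'z': moveLinear('z', -step); break;
        case 'i': rotateJoint(5, +step);  break;
        case 'k': rotateJoint(5, -step);  break;
        case 'j': rotateJoint(6, +step);  break;
        case 'l': rotateJoint(6, -step);  break;
        default:  break; // unmapped key: ignore
    }
}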

It is possible to use the joystick on the FlexPendant to move the TCP, but the joystick is not a good instrument for very fine movement. The microscope used in the current setup has a very small field of view, and accurate positioning


of a reference marker is very difficult with the high-speed joystick. One problem is the speed error reported by the robot controller if the joystick is used to move the robot below a certain speed limit. Another problem is the limited functionality of the joystick: it cannot move the TCP linearly and in joint space at the same time, so it is necessary to switch manually between linear and joint-angle movement. Moving the TCP with the joystick may still position the reference marker in focus under the microscope, but the coordinate value of the TCP would then have to be saved by hand, because during manual operation of the joystick it is not possible to access the TCP coordinates from C++ code. The tuner program, in contrast, can save the coordinate values of the TCP position in real time.

Figure 3.22: Keyboard key mapping to control the TCP tuner on-line.

3.3.2 TCP position to reference marker position calculation

This technique defines the translation required to calculate the position of a reference marker in the robot coordinate system when that reference marker is in focus under the video microscope. The TCP position provided by the tuner program for the given reference marker is translated to the location of that reference marker. The position where the workpiece attaches to the TCP is referred to as the screw hole. The attachment position of the workpiece on the TCP is shown in Fig. 3.23, and the same position on the TCP is shown in Fig. 3.24 as the screw position. Fig. 3.25 shows the workpiece attached to the TCP as well as the translation vector of the attachment. The screw position is known in the local coordinate system of the workpiece, while the TCP position is known only in the robot coordinate system. A rough relation, measured by hand, is that the length of the vector is


d = 20 mm and the angle is θ = 45°. It is assumed that the screw point lies in the plane of axis 6, which reduces the problem to two dimensions. In the first step the TCP position is translated along the vector V to the screw position, and in the second step the result is translated from the screw hole to the reference marker. Let the position of the TCP be

P_t = (X_t, Y_t, Z_t)

The two components of the vector V, shown in Fig. 3.24, follow from d = 20 and θ = 45°. The vector V from the TCP position to the screw position can be described as

V = [20 × cos(45°), 20 × sin(45°)]

The first translation, from the TCP position along the vector V in the robot coordinate system, can be expressed as

P_screw = (X_t, Y_t + 20 × cos(45°), Z_t + 20 × sin(45°))

The screw point in the workpiece coordinate system is

P_{w-s} = (X_{w-s}, Y_{w-s}, Z_{w-s})

Now translating P_screw by P_{w-s}:

P_{screw-w-s} = (X_t − X_{w-s}, Y_t + 20 × cos(45°) − Y_{w-s}, Z_t + 20 × sin(45°) − Z_{w-s})

The position of marker 5 in the workpiece (3D model) coordinate system is

P_{m5} = (X_{m5}, Y_{m5}, Z_{m5})

Translating P_{screw-w-s} by P_{m5}:

P_{screw-w-s-m5} = (X_t − X_{w-s} + X_{m5}, Y_t + 20 × cos(45°) − Y_{w-s} + Y_{m5}, Z_t + 20 × sin(45°) − Z_{w-s} + Z_{m5})

P_{screw-w-s-m5} is the absolute ideal position of marker 5 in the robot coordinate system.
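
As a numeric cross-check of the chain above, the following sketch computes P_{screw-w-s-m} for one marker. The Point3 struct and the function name are illustrative only; d = 20 mm and θ = 45° are the hand-measured values given earlier.

#include <cmath>

struct Point3 { double x, y, z; };

// Expected absolute marker position in the robot frame, following
//   P_screw        = P_t + V,  V = [0, d*cos(theta), d*sin(theta)]
//   P_screw-w-s    = P_screw - P_{w-s}   (screw hole in the workpiece frame)
//   P_screw-w-s-m  = P_screw-w-s + P_m   (marker position from the 3D model)
Point3 markerInRobotFrame(const Point3& tcp,       // P_t from the tuner program
                          const Point3& screwWs,   // screw hole, workpiece frame
                          const Point3& markerWs,  // marker, workpiece frame
                          double d = 20.0, double thetaDeg = 45.0) {
    const double th = thetaDeg * 3.14159265358979323846 / 180.0;
    const Point3 screw { tcp.x, tcp.y + d * std::cos(th), tcp.z + d * std::sin(th) };
    return { screw.x - screwWs.x + markerWs.x,
             screw.y - screwWs.y + markerWs.y,
             screw.z - screwWs.z + markerWs.z };
}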


Figure 3.23: The basic measurements of the TCP.

Figure 3.24: Calculation of the vector V for TCP position translation to reference marker.

3.3.3 Least erroneous camera focus plane points calculation

The values of the reference markers calculated in section 3.3.2 are erroneous. Suppose that reference marker 5 is fully focused under the microscope using the tuner program, and its position is calculated using the technique explained in section 3.3.2. If the location of marker 5 deviates from the given


Figure 3.25: The blueprint of the vector V on the original workpiece attached to the TCP.

3D model, then the obtained value of reference marker 5 will be erroneous. In fact, it is unknown which reference markers are accurate with respect to the given 3D model. Due to the high accuracy constraint, 8 reference markers are used to compensate for even small errors caused by inaccuracies or tolerances in real-world measurement, and thereby to minimize the probability of error. The arithmetic mode is used to find the most frequent values of X, Y and Z among all the calculated positions of the reference markers [24]. The resultant value can be used for the camera focus plane points.

P_{screw-w-s-mi} is the calculated absolute position of the i-th reference marker. The least erroneous value of the reference marker position can then be defined as P_MODE:

P_MODE = MODE[P_{screw-w-s-m1}, P_{screw-w-s-m2}, P_{screw-w-s-m3}, P_{screw-w-s-m4}, P_{screw-w-s-m5}, P_{screw-w-s-m6}, P_{screw-w-s-m7}, P_{screw-w-s-m8}]
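
A sketch of the per-coordinate mode over the eight calculated marker positions is shown below. Because the coordinates are floating-point values, the sketch first rounds them to a fixed resolution so that "most frequent value" is well defined; that rounding step is an assumption of this example, not something specified in the text.

#include <cmath>
#include <map>
#include <vector>

struct Point3 { double x, y, z; };

// Most frequent value in a list, after rounding to the given resolution (mm).
double modeOf(const std::vector<double>& values, double resolution = 0.1) {
    if (values.empty()) return 0.0;
    std::map<long long, int> counts;
    for (double v : values)
        ++counts[std::llround(v / resolution)];
    long long best = counts.begin()->first;
    for (const auto& kv : counts)
        if (kv.second > counts.at(best)) best = kv.first;
    return best * resolution;
}

// P_MODE: component-wise mode over all calculated reference marker positions.
Point3 leastErroneousPoint(const std::vector<Point3>& markers) {
    std::vector<double> xs, ys, zs;
    for (const auto& m : markers) {
        xs.push_back(m.x); ys.push_back(m.y); zs.push_back(m.z);
    }
    return { modeOf(xs), modeOf(ys), modeOf(zs) };
}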


3.3.4 Camera CP, CPf and CPs calculation

The camera focus plane points are the three points shown in Fig. 3.26 as CP, CPf and CPs. These three points can also be imagined as connected by two support vectors: the points CP and CPf make the first support vector, of length L1, and the points CPf and CPs make the second support vector, of length L2, also shown in Fig. 3.26.

The following formulas calculate these points from P_MODE:

CP = P_MODE

Since the camera's first support vector is parallel to the z-axis,

CPf = (X_MODE, Y_MODE, Z_MODE + L1)

CPs = (X_MODE + L2, Y_MODE, Z_MODE + L1)

where L1 and L2 are the lengths of the first and second support vectors.
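
The three camera points follow directly from P_MODE, as the small sketch below shows; the struct and function names are illustrative, and L1 and L2 are the fixed support-vector lengths of the setup.

struct Point3 { double x, y, z; };
struct CameraFocusPoints { Point3 CP, CPf, CPs; };

// CP  = P_MODE
// CPf = CP shifted by L1 along z (the first support vector is parallel to the z-axis)
// CPs = CPf shifted by L2 along x (the second support vector)
CameraFocusPoints cameraFocusPoints(const Point3& pMode, double L1, double L2) {
    const Point3 cp  = pMode;
    const Point3 cpf { pMode.x,      pMode.y, pMode.z + L1 };
    const Point3 cps { pMode.x + L2, pMode.y, pMode.z + L1 };
    return { cp, cpf, cps };
}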


Figure 3.26: Calculation of the camera CP, CPf, CPs points.

3.4 Video microscopy and inspection

3.4.1 Free-form surface P, Pf, Ps calculation

Similar to the calculation of the camera focus points, this section describes the calculation of P, Pf and Ps for each surface normal of the free-form workpiece.

As described earlier in section 3.1.1, the technique developed by Robin Reicher provides the normal vectors of the 3D mesh. His technique first discretizes the original mesh model and then provides a normal vector for each subdivided surface patch. If the surface of the workpiece is sharply curved, it is not possible to focus the whole area in a single micrograph; in that case smaller surface patches are required [11]. The technique he developed takes as input a


maximum allowed size for a surface patch and divides large surface patches into smaller ones on the basis of that factor.
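
The following sketch is not Robin Reicher's implementation; it only illustrates the idea of such a size-driven subdivision, under the simplest possible assumption of recursively splitting any triangle whose longest edge exceeds the given maximum patch size.

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Triangle { Vec3 a, b, c; };

static double dist(const Vec3& p, const Vec3& q) {
    return std::sqrt((p.x - q.x) * (p.x - q.x) +
                     (p.y - q.y) * (p.y - q.y) +
                     (p.z - q.z) * (p.z - q.z));
}
static Vec3 midpoint(const Vec3& p, const Vec3& q) {
    return { (p.x + q.x) / 2, (p.y + q.y) / 2, (p.z + q.z) / 2 };
}

// Recursively split a triangle until no edge is longer than maxSize,
// so that every resulting patch can be focused in a single micrograph.
void subdivide(const Triangle& t, double maxSize, std::vector<Triangle>& out) {
    const double eab = dist(t.a, t.b), ebc = dist(t.b, t.c), eca = dist(t.c, t.a);
    const double longest = std::max({eab, ebc, eca});
    if (longest <= maxSize) { out.push_back(t); return; }
    if (longest == eab) {                       // split the a-b edge
        const Vec3 m = midpoint(t.a, t.b);
        subdivide({t.a, m, t.c}, maxSize, out);
        subdivide({m, t.b, t.c}, maxSize, out);
    } else if (longest == ebc) {                // split the b-c edge
        const Vec3 m = midpoint(t.b, t.c);
        subdivide({t.a, t.b, m}, maxSize, out);
        subdivide({t.a, m, t.c}, maxSize, out);
    } else {                                    // split the c-a edge
        const Vec3 m = midpoint(t.c, t.a);
        subdivide({t.a, t.b, m}, maxSize, out);
        subdivide({m, t.b, t.c}, maxSize, out);
    }
}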

These support vectors are created with the same length values L1 and L2 described in section 3.3.4. The lengths L1 and L2 match those of the camera support vectors because these vectors must later be registered with the camera support vectors. Fig. 3.27 shows a typical surface patch and the support vectors on it, Fig. 3.28 shows the support vectors on a free-form surface, and, more specifically, Fig. 3.29 shows the support vectors on marker 5 of the free-form workpiece.

Algorithm 1 shows the steps of the P, Pf, Ps point calculation; these steps can also be seen in Fig. 3.30.

Algorithm 1: Free-form surface P, Pf and Ps calculation.
input : P, CP, surface normal, L1, L2
output: Pf, Ps
(0) Find the point Pf on the surface normal at distance L1 from P.
(1) Find the projection vector VP by projecting the vector (P − CP) onto the current surface plane.
(2) Find the point on the vector VP at distance L2 from P.
(3) Find the translation from P to Pf.
(4) Apply the translation (calculated in step 3) to the point found in step (2), which gives Ps.
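
Algorithm 1 can be expressed directly with basic vector arithmetic. The sketch below is only an illustration of the listed steps; the small Vec3 helpers are assumptions of the example.

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)      { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b)      { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 a, double s)  { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 a)        { return scale(a, 1.0 / std::sqrt(dot(a, a))); }

// Algorithm 1: support points Pf and Ps for one surface patch.
//   P       centre point of the patch, lying on the surface
//   CP      camera focus plane point
//   normal  surface normal of the patch (need not be unit length)
void supportPoints(Vec3 P, Vec3 CP, Vec3 normal, double L1, double L2,
                   Vec3& Pf, Vec3& Ps) {
    const Vec3 n = normalize(normal);
    // (0) Pf lies on the surface normal at distance L1 from P.
    Pf = add(P, scale(n, L1));
    // (1) Project (P - CP) onto the surface plane to obtain VP.
    const Vec3 d  = sub(P, CP);
    const Vec3 VP = sub(d, scale(n, dot(d, n)));
    // (2) Point on VP at distance L2 from P.
    const Vec3 onPlane = add(P, scale(normalize(VP), L2));
    // (3)+(4) Apply the P -> Pf translation (L1 along n) to that point, giving Ps.
    Ps = add(onPlane, scale(n, L1));
}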

3.4.2 Path planning

The purpose of finding an optimal path for the video microscopy is to decrease the effort and time taken by the robot manipulator to move the free-form surface [4]. The simplest approach would be to have the robot target the surface normals in the order in which they appear in the file containing all surface normals of the 3D mesh; however, this can lead to an unexpectedly long inspection time [5]. An unpredictable inspection time is not acceptable because, as discussed in chapter 1, the automatic inspection technique should reduce the total time required for inspection [29].

Figure 3.31 shows a conceptual movement of the robot along a planned path. This illustration was made because of its similarity to the TCP movement during the video microscopy. It is important to note that Fig. 3.31 does not correspond to the actual system, because in the actual system the camera is not mounted on the robot; the real movement of the TCP during path following is difficult to illustrate. The figure nevertheless gives an easy understanding of the planned path and remains valid


Figure 3.27: The support vectors on a single surface patch.

because, conceptually, there is no difference between mounting the camera on the TCP with the workpiece fixed and the other way around. The decision to fix the camera on the XY frame and to put the workpiece on the TCP was made because of the small workspace of the ABB IRB 140. This view works well for understanding the problem and for finding the shortest motion path for the robot.

Figure 3.32 shows an illustration of a simple free-form surface, though a free-form surface can be far more complex than that. All triangles in a free-form surface are geometrically attached to their neighboring triangles [44] [34]. The free-form surface can be stretched down to a flat surface, as shown in illustration 3.34, simply to conceptualize the shortest path planning.

In the developed system, two criteria are used to assign weights to each surface patch. One is the difference in intersection angle between the current surface patch and its neighbors, as shown in Fig. 3.33; the other is the Euclidean distance between the Pf point of the current surface normal and those of the neighboring surface normals.
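
As a rough illustration of how the two criteria could be turned into a single edge weight between neighbouring patches, the sketch below blends the angle between the surface normals with the Euclidean distance between the Pf points. The blending factor alpha and the linear combination itself are assumptions of this example; the thesis only names the two criteria.

#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double norm(Vec3 a)        { return std::sqrt(dot(a, a)); }

// Weight of moving from the current patch to a neighbouring patch, combining
//  - the angle between the two surface normals (radians), and
//  - the Euclidean distance between their Pf points (mm).
// The units differ, so in practice both terms would be normalized; alpha is
// an illustrative blending factor only.
double patchWeight(Vec3 normalCur, Vec3 normalNb,
                   Vec3 pfCur, Vec3 pfNb,
                   double alpha = 0.5) {
    double cosAng = dot(normalCur, normalNb) / (norm(normalCur) * norm(normalNb));
    if (cosAng >  1.0) cosAng =  1.0;   // guard against rounding errors
    if (cosAng < -1.0) cosAng = -1.0;
    const double angle = std::acos(cosAng);
    const Vec3 d { pfNb.x - pfCur.x, pfNb.y - pfCur.y, pfNb.z - pfCur.z };
    const double distance = norm(d);
    return alpha * angle + (1.0 - alpha) * distance;
}

Given such weights, the inspection order can then be chosen, for example, by a greedy nearest-neighbour traversal or a standard shortest-path search over the patch adjacency graph.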
