
Naresh Marturi

Technology

Studies from the Department of Technology at Örebro University


Naresh Marturi

Vision Based Grasp Planning for

Robot Assembly

Title: Vision Based Grasp Planning for Robot Assembly

achieved through the integration of vision sensing.

For this work, we prototype an assembly cell that has one ABB IRB140 robot equipped with a flexible gripper, a flexible fixture and a camera fixed in the midpoint of the gripper. The flexibility of the assembly cell is provided by its main components - the gripper and the fixture, which were already designed and prototyped at the AASS IC Laboratory, and the vision system, developed during this project. The image information from the camera is used to perceive the robot's task space and to recognize the workpieces. In turn, this information is used to compute the spatial position and orientation of the workpieces. Based on this information, an automatic assembly grasp planner was designed and developed to compute the possible stable grasps and to select and execute the posture of the entire robot arm plus gripper. In order to recognize the workpieces, different low-level object recognition algorithms were developed based on their geometrical models. Once the workpieces are identified and grasped by the robot, the vision system is no longer in use and the robot executes the predefined sequence of assembly operations. In this system, the assembly process of every product is described as an assembly tree, e.g. a precedence graph, for all parts in the product.

The entire work was assessed by evaluating the individual modules of the project against a set of goal-based criteria and using those results to determine the project's overall importance. The tests conducted on the developed system showed that the system is capable of grasping and assembling workpieces regardless of their initial position and orientation. Apart from this, a simple and reliable communication scheme was developed in order to connect the components of the assembly cell and to provide a flexible process execution.

Keywords: Flexible assembly cell, Grasp planner, Object identification


years I stayed in Örebro.

My final words go to my family. I want to thank my mom, dad, sister and bujji, whose love and guidance is with me in whatever I pursue.


1.3 Contributions . . . 20

1.4 Thesis structure . . . 21

2 Previous study 23

2.1 Related study . . . 23

2.1.1 Visual servoing . . . 23

2.1.2 Grasping with visual servoing . . . 24

2.1.3 Vision system . . . 26

2.1.4 Grasping module . . . 28

2.2 Resource description . . . 30

2.2.1 Robot . . . 30

2.2.2 Grasping system . . . 34

2.2.3 Camera . . . 39

2.3 Functions in FMS . . . 40

2.4 Summary . . . 41

3 Developed system 43

3.1 Experimental setup . . . 43

3.2 System architecture . . . 44

3.2.1 Work flow . . . 45

3.3 Object identification . . . 48

3.3.1 Shaft recognition . . . 48

3.3.2 Gear recognition . . . 51

3.4 Automatic planner for arm posture control . . . 54

3.5 Synchronizing arm and gripper motions . . . 55

3.5.1 Arm motion control . . . 55


3.5.2 Gripper motion control . . . 57

3.5.3 Interface to synchronize motions . . . 58

3.6 Limitations . . . 61

4 Experiments with the system 63

4.1 Test environment . . . 63

4.2 Assembly representation . . . 64

4.2.1 Assembly sequence selection . . . 64

4.3 Test scenario . . . 66

4.3.1 Special cases . . . 67

4.3.2 Test results . . . 68

4.4 Analysis . . . 75

5 Conclusions 77

5.1 Summary . . . 77

5.2 Future work . . . 78

A Source code for shaft identification 83

B Source code for gear identification 85

C Camera calibration procedure 91

C.1 Camera intrinsic parameters . . . 91

C.2 Camera extrinsic parameters . . . 92

D Source code for client and server 95


2.2 Grasping control architecture . . . 25

2.3 Pinhole camera model . . . 27

2.4 Contour image . . . 29

2.5 Contour image . . . 30

2.6 ABB IRB 140B robotic manipulator . . . 31

2.7 Robot geometry . . . 31

2.8 Robot mechanical structure . . . 31

2.9 Robot base and tool coordinate systems . . . 32

2.10 Robot wrist coordinate system . . . 33

2.11 Flexible gripper . . . 35

2.12 Finger configuration 1 . . . 36

2.13 Finger configuration 2 . . . 36

2.14 Finger configuration 3 . . . 37

2.15 Flexible fixture . . . 37

2.16 Galil motion controller . . . 39

2.17 Camera . . . 39

2.18 Simulated view of robot-centered FMS . . . 41

3.1 Test assembly . . . 43

3.2 Test assembly parts sequence . . . 44

3.3 System architecture . . . 44

3.4 Work flow diagram . . . 47

3.5 (A) Background image (B) Current frame . . . 49

3.6 Subtracted image . . . 49

3.7 Curve pixels of the shaft . . . 49

3.8 (A) Original image (B) Edge image . . . 51


3.9 Recognized gears . . . 52

3.10 Sample code for robot motion control . . . 56

3.11 Robot zone illustration diagram . . . 57

3.12 Robot controller communication architecture . . . 58

3.13 Sequential diagram for client – server model . . . 59

4.1 Test environment . . . 63

4.2 Graph structure of test assembly . . . 64

4.3 Precedence diagram of test assembly . . . 65

4.4 (A) Initialized gripper (B) Arm at home position . . . 68

4.5 (A) Arm at position 1 (B) Arm at position 2 . . . 68

4.6 Arm at position 3 . . . 69

4.7 Screen-shot of execution window . . . 69

4.8 Workpieces in robot task space . . . 69

4.9 Recognized shaft . . . 70

4.10 Grasping shaft . . . 70

4.11 (A) Robot Fixing shaft (B) Fixed shaft . . . 71

4.12 (A) Searching at POS2 (B) Searching at POS3 . . . 71

4.13 Recognized small gear . . . 72

4.14 Grasping small gear . . . 72

4.15 (A) Robot fixing gear (B) Assembled gear . . . 72

4.16 (A) Searching for the gear (B) Robot grasping the gear . . . 73

4.17 (A) Fixing the gear (B) Assembled big gear . . . 73

4.18 (A) Robot grasping the pin (B) Fixing the pin . . . 74

4.19 Assembled pin . . . 74

4.20 Final assembled product . . . 74


2.6 Camera specifications . . . 40

3.1 Pseudo code for shaft recognition . . . 50

3.2 Pseudo code for gear recognition . . . 53

3.3 Robot positioning instructions . . . 56

4.1 Test assembly task description . . . 66


system that is integrated with a high precision articulated robotic manipulator¹. The main goal of this work is to develop a successful grasp planning methodology using the vision information for a flexible assembly cell.

Figure 1.1: FMS schematic diagram

In the past two decades, a number of technological advancements have been made in the development of flexible manufacturing systems (FMS). An FMS can be described as a system consisting of one or more handling devices, such as robotic manipulators, along with robot controllers and machine tools, arranged so that it can handle the different families of parts for which it has been designed and developed [Rezaie et al., 2009].

¹ A robot with rotary joints and a fixed base is called an articulated robot.


Figure 1.1 shows an example of an FMS. Today, these systems play a vital role in industrial applications such as welding and assembly operations. A simple FMS used for assembly operations, also termed a flexible assembly cell, is an arrangement of one or more Computer Numerically Controlled (CNC) machines along with a robot manipulator and a cell computer. The main tasks of the CNC machines include workload balancing and task scheduling, and they are also responsible for handling machine breakdowns and tool breakage. The cell computer is responsible for the supervision and coordination of the various operations in the manufacturing cell. The end effector of the robot manipulator is fitted with specific machine tools (e.g. two- or three-fingered grippers) depending on the task to be performed. The functions of these manufacturing cells and the technical specifications of the tools used are described in more detail in the preparation study chapter of this thesis.

Due to the lack of sensory capabilities, most assembly cells cannot act intelligently in recognizing the workpieces and perceiving the task space. For example, a general robotic assembly cell requires that the workpieces presented to the robot be placed at predefined, precise locations and with a known orientation, which is fixed to complete the overall task. These types of systems lack the flexibility and the capability to automatically modify their trajectories to accommodate changes in the task. Such flexibility for assembly cells is achieved through the integration of vision sensing, as visual sensors can provide richer and more complete information about the task space than any other sensing device. With the help of the information received from integrated vision, robots can act intelligently, deal with imprecisely positioned workpieces, and handle uncertainties and variations in the work environment. This vision information is also useful in enhancing the capability of the robot by continuously updating its view of the world. This type of architecture is termed an eye-in-hand or eye-to-hand configuration. The basic building blocks of the whole system are shown in Figure 1.2 and can be described as follows:

Figure 1.2: System architecture of robotic vision based control


For the last couple of decades, eye-in-hand visual servoing has been studied extensively because of its importance in industrial assembly operations. An eye-in-hand system can be described as a robot end effector equipped with a close-range camera, as shown in Figure 1.3. The camera selection is based on the task complexity. An illumination source is attached to the gripper along with the camera in order to capture good images in dim lighting conditions and also to compensate for lighting changes in some areas of the real scene of view. The camera has a lens that can be adjusted for proper focus to minimize the positioning error [Eye].

These types of systems are mainly employed to guide the robot end effectors and the grippers in performing a specific task. The images acquired by the camera are processed using specific algorithms in a computer system in order to recognize the object of interest and to find its spatial information. This information can then be used to guide the robot movement in a specific workspace.


1.2 Project goal

The main objective of this project is to develop and demonstrate a pilot robotic system implementing new basic functionalities in an assembly cell. These functionalities include:

• Investigating and implementing an eye-in-hand vision system for perceiving the working environment of the robot and recognizing the workpieces to be manipulated.

• Developing and implementing an automatic grasp planner, which computes possible stable grasps and sets the gripper posture accordingly.

• Developing and implementing an automatic grasping planner, which selects and executes the posture of the entire robot arm plus a flexible gripper.

• All of the above functionalities have to be implemented together with the respective visual perception system.

1.2.1 Project evaluation

The general approach for evaluating this thesis work involves evaluating the individual modules of the project against a set of goal-based criteria and using those results to determine the project's overall importance. These individual modules include eye-in-hand visual servoing, the grasp planner, motion control and an interface to integrate all of these modules. The overall process time is not considered as a research topic in this thesis.

1.3 Contributions

With the completion of this thesis, the overall goal has been achieved and a grasp planner for robot assembly operations has been successfully developed. The main contributions of this thesis are:

• Developing a methodology of autonomous planning and replanning of assembly operations.

• Intelligent use of the Flexible Gripper.

• Integration and demonstration of the above two functions in a heterogeneous environment involving many particular limitations and problems.


Chapter 3 Proposes a solution to the problem introduced in the previous chapter. This chapter also provides a detailed description of the implementation procedure.

Chapter 4 Describes the test scenario used to demonstrate the capabilities of the developed system.


functionalities of FMS.

2.1 Related study

2.1.1 Visual servoing

Visual servoing mainly refers to the use of vision data to control the motion of the robot. It can be described as a closed-loop control algorithm for which the error is defined in terms of visual measurements. The main goal of this control scheme is to reduce the error and drive the robot joint angles as a function of the pose error [Taylor and Kleeman, 2006]. This type of control scheme is mainly applied in object manipulation tasks, which require object detection, servoing, alignment and grasping. Figure 2.1 shows the basic building blocks of the visual servoing control scheme. Visual servoing systems are generally classified into two types: position based visual servoing (PBVS) and image based visual servoing (IBVS).

Figure 2.1: Visual servoing control schema


In the former model, the error is calculated after constructing the pose of the gripper from visual measurements, whereas in the latter model the error is formulated directly as the difference between the observed and desired locations of the image features. In both cases, image information is used to compute the error.
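To make this distinction concrete, a standard formulation (following the common visual servoing notation of Hutchinson et al. [1996], assumed here rather than taken from later chapters of this thesis) writes the error and a simple proportional control law as

\[
e(t) = s(t) - s^{*}, \qquad v_c = -\lambda\,\widehat{L_s^{+}}\,e(t),
\]

where in IBVS s(t) is the vector of measured image features and s* their desired values, while in PBVS s(t) is a pose reconstructed from the image; v_c is the commanded camera (end effector) velocity, λ is a gain and \widehat{L_s^{+}} is an estimate of the pseudo-inverse of the feature Jacobian (interaction) matrix.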

2.1.2 Grasping with visual servoing

Grasping by multi-fingered grippers has been an important topic of research in the field of manipulation for many years. Reaching a particular position and grasping an object is a complex task which requires many sensing activities. These activities need to be performed in the right sequence and at the right time in order to make a smooth and stable grasp. Most studies state that, in order to provide a stable grasp, the grasping system requires complete information regarding the robot kinematics, gripper capabilities, sensor capabilities and also the workspace where the objects are placed [Diankov et al., 2009]. Comparatively little work, however, has been done on integrating vision sensors into grasping and manipulation tasks. For grasping stationary objects, which is the focus of this thesis, the object's pose can be computed from the available image frames and the motion of the arm can be planned accordingly. Such an approach, in which a robotic arm picks up a stationary object and places it in a specified location, is described by Chiu et al. [1986]. Figure 2.2 shows a theoretical control program by Arbib [1981] for grasping a stationary object using vision data. According to this control program, in moving the arm towards the stationary target object, the spatial location of the target needs to be known in advance. The required spatial information, i.e. the location, size and orientation of the object, is provided by the vision system. As the arm approaches the target, it needs to correct its orientation towards the target. At the point of grasping, the arm should be aligned with the target in such a way that it grasps around the longest axis in order to make a stable hold of the object.

Generally, for any grasping model using the common manipulation framework there are three phases, namely the specification, planning and execution phases. The specification phase is responsible for supplying enough information to the robot in order to perform a specific task. The planning phase is responsible for producing a global plan based on the information received from the specification phase; a collision-free path for the robot end effector that achieves the final goal is produced in this phase. Finally, the execution phase is responsible for continuously updating the information from the planning phase and for executing the right grasping action. The performance of the overall model mainly depends on these three phases. For planning a grasp based on visual servoing, the final goal of the robot is defined with respect to the task frame, i.e. the information acquired by the camera. The overall plan of the robot is updated every time the task frame updates [Prats et al., 2007]. One fundamental observation with robotic systems using vision for grasping is that a robot cannot determine the accurate spatial location of the object if the camera is located far away. Therefore, the robot should move closer towards the object for better accuracy.

Figure 2.2: Grasping control architecture

Eye-hand coordination

Eye-hand coordination is a typical scenario that links perception with action [Xie, 1997]. It is the integration of the visual perception system with the arm manipulation system; the overall performance of the model depends on how these two systems are integrated. The simplest approach to eye-hand coordination is to make use of the vision system to compute the pose of the target object in the robot workspace and pass this information to the robotic manipulator to plan and execute the specific actions required for the task. In order to make this coordination model more reliable, an incremental mapping principle which maps the visual sensory data to the hand's motions should be established [Xie, 2002]. Most of the recent studies on eye-hand coordination focus on the estimation of the feature Jacobian matrix¹ for mapping the motion of the robot arm to changes in the image frame. To model a reliable incremental mapping principle, the camera should be calibrated using a high precision calibration rig.
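A minimal way to write the mapping that the feature Jacobian provides (standard notation assumed here, not symbols defined elsewhere in this thesis) is

\[
\dot{s} = L_s\, v_c,
\]

where s is the vector of image features, v_c is the spatial velocity of the camera (or hand), and L_s is the image feature Jacobian; the incremental mapping used for eye-hand coordination then follows by accumulating small steps Δs ≈ L_s Δx of the hand motion.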

¹ The image feature Jacobian matrix is a linear transformation matrix that maps the task space to the image space [Hutchinson et al., 1996].


2.1.3 Vision system

The vision system mainly comprises a normal pinhole camera along with an illumination source. In order to integrate this vision system with a manipulation framework, it should be capable of tracking the multiple objects that need to be manipulated. Functionally, this vision system can be decomposed into two subsystems:

1. Object tracking

2. Pose² estimation

Object tracking

Multiple object tracking and shape representation is a vital task when assisting a robotic manipulator in assembly operations using a vision system. Extensive research has been carried out in this field for many years and various algorithms have been developed. The common approach used in most object tracking applications is to train the vision system with an image dataset containing the object, and to match the object in the real scene with the trained object [Baştanlar et al., 2010]. But this type of tracking system is inapplicable for vision based manipulation as it consumes too much time in performing the overall task. Another approach is to find different characterizing features of the objects from the image frames; the most preferred feature of this kind is the object centroid. Koivo and Houshangi [1991] used this type of feature in their work for tracking different objects. Hunt and Sanderson [1982] proposed various algorithms for object tracking based on mathematical predictions of the centroid locations of an object from the image frames. Huang et al. [2002] proposed an image warping and Kalman filtering based object tracking method. H. Yoshimi and Allen [1997] used fiducial marks³ and Allen et al. [1993] used a snakes⁴ approach to trace various objects in a given workspace. In order to reduce the computational time for object tracking, many algorithms have been developed based on image background subtraction. One such approach was used by Wiklund and Granlund [1987] for tracking multiple objects. A variety of techniques based on blob detection, contour fitting, image segmentation and object feature extraction are in practice for low-level object identification and geometrical shape description. One such approach is used in this thesis.

² Pose can be described as a combination of the position and orientation of an object.
³ A fiducial mark is a black dot on a white background.
⁴ A snake is an energy-minimizing spline which can detect objects in an image and track non-occluded objects in a sequence of images [Tabb et al., 2000].

Pose estimation

Figure 2.3: Pinhole camera model

After detecting the object to be manipulated, it is necessary to compute its pose in order to find a set of valid grasping points. The pose of an object can be computed either from a single image or from a stereo pair of images. Most vision based manipulation applications use the POSIT algorithm [Dementhon and Davis, 1995] to estimate the pose of an object using a single pinhole camera. In general, vision based pose estimation algorithms mainly rely on a set of image features like corner points, edge lines and curves. In order to estimate the pose of an object from an image, prior knowledge about the 3D location of the object features is necessary. The pose of an object is a combination of its rotation R (3 × 3) and its translation T (3 × 1) with respect to the camera, so the pose of an object can be mathematically written as [R|T], which is a 3 × 4 matrix. For a given 3D point on an object, its corresponding 2D projection (perspective projection⁵) in the image, the camera's focal length and its principal point of focus are used to compute the pose. The camera focal length and principal point of focus can be computed by following a standard calibration technique proposed by Chen and He [2007]. Figure 2.3 shows the normal pinhole camera model and the related coordinate systems. The projection of a 3D point M = [X, Y, Z]^T onto the image plane at a point m = [x, y]^T can be represented in homogeneous matrix form as

\[
\lambda \underbrace{\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}}_{m}
= \underbrace{\begin{bmatrix} f & 0 & \alpha_x & 0 \\ 0 & f & \alpha_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}}_{K}
\underbrace{\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}}_{M}
\qquad (2.1)
\]

where K is the intrinsic parameter matrix of the camera, f is the camera focal length, λ = Z is the homogeneous scaling factor, and α_x and α_y represent the principal point of the camera. These parameters are used to find the point projection. The complete pixel position of a point M can now be written as


\[
\lambda m = \begin{bmatrix} K & 0_3 \end{bmatrix}
\begin{bmatrix} R & 0_3 \\ 0_3^{T} & 1 \end{bmatrix}
\begin{bmatrix} I_3 & -T \\ 0_3^{T} & 1 \end{bmatrix} M
\qquad (2.2)
\]

Alternatively after combining matrices, equation 2.2 can be written as

\[
\lambda m = \begin{bmatrix} K & 0_3 \end{bmatrix}
\begin{bmatrix} R & -RT \\ 0_3^{T} & 1 \end{bmatrix} M
\qquad (2.3)
\]

From the corresponding mapping of 3D feature points to 2D image points, the pose is estimated.
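As a small numerical illustration of equations (2.1)–(2.3), the C++ sketch below codes the projection of a 3D point into pixel coordinates. The intrinsic and extrinsic values used here are made up for the example and are not the calibration results of this thesis.

#include <array>
#include <cstdio>

// Minimal 3-vector / 3x3-matrix helpers for the projection example.
using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

static Vec3 mul(const Mat3& A, const Vec3& x) {
    Vec3 y{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

int main() {
    // Illustrative intrinsics: focal length f and principal point (ax, ay) in pixels,
    // i.e. the 3x3 part of the matrix K in equation (2.1).
    const double f = 800.0, ax = 320.0, ay = 240.0;
    const Mat3 K = {{{f, 0, ax}, {0, f, ay}, {0, 0, 1}}};

    // Illustrative extrinsics: camera aligned with the world frame (R = I) and
    // camera centre T given in world coordinates, as in equation (2.3).
    const Mat3 R = {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};
    const Vec3 T = {0.10, 0.05, -0.50};

    // A 3D point M in world coordinates.
    const Vec3 M = {0.20, 0.10, 0.40};

    // Equation (2.3) without the homogeneous padding: lambda * m = K (R M - R T).
    const Vec3 RM = mul(R, M);
    const Vec3 RT = mul(R, T);
    const Vec3 p  = mul(K, {RM[0] - RT[0], RM[1] - RT[1], RM[2] - RT[2]});

    // Dividing by the homogeneous scale lambda = Z gives the pixel coordinates.
    std::printf("u = %.1f px, v = %.1f px\n", p[0] / p[2], p[1] / p[2]);
    return 0;
}

The same computation applies to real images once the calibration of Appendix C is available; only the numbers differ.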

2.1.4 Grasping module

The grasping module used for robotic manipulation tasks mainly comprises the following two sub-modules:

1. Grasp planning

2. Grasp execution

Grasp planning has been an important research topic in the field of dexterous manipulation for many years. Okamura et al. [2000] presented a survey of existing techniques in dexterous manipulation along with a comparison between robotic and human manipulation. Rao et al. [1989] proposed 8 possible grasps using a three-fingered gripper. Most of the research assumes that the geometry of the object is known in advance, before starting the grasping process. A very important concept in grasp planning is the way this information is used to filter out unstable grasp points such as corners of the object. The main task is to select a particular type of grasp, i.e. to take a final decision on using two or three fingers for a particular object based on its geometry.

In general, grasping an object means building a relationship between the manipulator and the object model. Sometimes the complete information about the object model is hardly known; in that case grasp planning is imprecise and not reliable. So, instead of using the information about the object model directly, the object features obtained by the vision system and a binary image containing the contours of the objects can be used. The contour images serve as input to the grasp computation block inside the planning module, and a corresponding output, a database of grasps, is produced, which serves as input to the planning module. The database of grasps contains the following list of grasps:

1. Valid grasps – those that satisfy a particular grasping criterion and can be used for grasping, but for which stable grasping is not ensured and needs verification.

2. Best grasps – these are a subset of the valid grasps and ensure stable grasping.


Grasp region determination

“The contiguous regions on the contour that comply with the finger adaptation criterion are called grasp regions [Morales et al., 2006].”

The preliminary step in grasp point determination is the extraction of grasping regions from a binary image containing object contours. An example contour image is shown in Figure 2.4. The contour based approach is one of the most commonly used techniques for shape description and grasping region selection. The main idea behind this approach is to extract meaningful information from curves, such as finding the vertices and line segments in the image which are highly significant for grasping point selection. Some other research works prefer polygonal approximation using linking and merging algorithms [Rosin, 1997].

Figure 2.4: Contour image

A curvature function along with a fixed threshold (τ1) is used to process all the contour points [Morales et al., 2006]. τ1 helps in finding the continuous edge points for which a line segment can be approximated. The final outcome of this step is a set of line segments. These line segments are processed further using another threshold, τ2 (selected based on the finger properties), such that all segments below τ2 are rejected. The remaining edge line segments are the valid grasping regions. However, this type of approach fails for some objects, such as the one shown in Figure 2.5, for which most of the edge points lie below τ1 and an approximation of edge line segments is therefore not possible. For such objects the longer regions are broken down into small pieces, and these pieces can be approximated as straight lines.

Figure 2.5: Contour image

Once the grasp regions are selected, the next step is to compute the grasp points where the robot can take hold of the object. In order to find good grasp points, we need to find compatible regions out of all the available valid regions. For a two finger grasp, finding the compatible grasp regions can be performed iteratively by selecting two regions at a time and validating them. The validation procedure is performed by finding the normal vectors of the selected regions and projecting the regions in the direction of the normal vectors. If there exists any intersection between the projected regions, the regions are said to be compatible for grasping. Once the compatible regions are selected, the midpoints of these regions serve as grasping points for the gripper. For a three finger grasp, a similar approach is used for finding the compatible regions as well as the grasp points, as explained by Morales et al. [2006].
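To make the compatibility test concrete, the C++ sketch below checks whether two straight grasp regions roughly face each other and overlap when projected along their normals, and returns their midpoints as candidate two-finger grasp points. It is a simplified approximation of the procedure described above, not the exact algorithm of Morales et al. [2006], and the coordinates and angle tolerance are illustrative.

#include <cmath>
#include <cstdio>
#include <optional>
#include <utility>

struct Point { double x, y; };
struct Segment { Point a, b; };   // a straight grasp region extracted from the contour

static Point midpoint(const Segment& s) { return {(s.a.x + s.b.x) / 2.0, (s.a.y + s.b.y) / 2.0}; }

// Unit normal of a segment (its direction vector rotated by 90 degrees).
static Point unitNormal(const Segment& s) {
    const double dx = s.b.x - s.a.x, dy = s.b.y - s.a.y;
    const double len = std::hypot(dx, dy);
    return {-dy / len, dx / len};
}

// Two regions are treated as compatible if (1) their normals are roughly opposed,
// i.e. the surfaces face each other, and (2) the midpoint of the second region,
// projected onto the first region's direction, falls within the first region's
// extent, so that the projections of the regions intersect.
static std::optional<std::pair<Point, Point>>
compatibleGraspPoints(const Segment& r1, const Segment& r2, double angleTolDeg = 30.0) {
    const double pi = std::acos(-1.0);
    const Point n1 = unitNormal(r1), n2 = unitNormal(r2);
    const double opposition = n1.x * n2.x + n1.y * n2.y;          // dot product
    if (opposition > -std::cos(angleTolDeg * pi / 180.0)) return std::nullopt;

    const double dx = r1.b.x - r1.a.x, dy = r1.b.y - r1.a.y;
    const double len = std::hypot(dx, dy);
    const double ux = dx / len, uy = dy / len;                    // r1 direction
    const Point m2 = midpoint(r2);
    const double t = (m2.x - r1.a.x) * ux + (m2.y - r1.a.y) * uy;
    if (t < 0.0 || t > len) return std::nullopt;

    return std::make_pair(midpoint(r1), m2);                      // candidate grasp points
}

int main() {
    // Two illustrative regions on opposite sides of an object (pixel coordinates).
    const Segment left  = {{10, 10}, {10, 60}};
    const Segment right = {{40, 60}, {40, 10}};
    if (auto gp = compatibleGraspPoints(left, right))
        std::printf("grasp points: (%.1f, %.1f) and (%.1f, %.1f)\n",
                    gp->first.x, gp->first.y, gp->second.x, gp->second.y);
    else
        std::printf("regions are not compatible\n");
    return 0;
}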

2.2 Resource description

2.2.1 Robot

The robot used in this thesis is the ABB IRB 140B, which is shown in Figure 2.6. This is a six-axis articulated robotic manipulator which allows arbitrary positioning and orientation within the robot's workspace. The geometrical and mechanical structures of the robot are shown in Figures 2.7 and 2.8 respectively. The accuracy of this robot is very high, with a position repeatability of ±0.03 mm. The maximum payload handling capacity of this robot is 6 kg. The axis specifications and joint velocities of the robot are shown in Table 2.1.


Figure 2.6: ABB IRB 140B robotic manipulator [ABB, 2004]

Figure 2.7: Robot geometry [ABB, 2004]


Axis No. | Axis        | Range                     | Velocity
1        | C, Rotation | 360°                      | 200°/s
2        | B, Arm      | 200°                      | 200°/s
3        | A, Arm      | 280°                      | 260°/s
4        | D, Wrist    | Unlimited (400° default)  | 360°/s
5        | E, Bend     | 240°                      | 360°/s
6        | P, Turn     | Unlimited (800° default)  | 450°/s

Table 2.1: Robot axis specifications

Robot coordinate systems

Figure 2.9: Robot base and tool coordinate systems

The position and motion of the robot are always related to the robot Tool Center Point (TCP), which is located in the middle of the defined tool. For any application we can define several tools, but only one tool or TCP is active at a particular point of time or point of a move. The coordinates of the TCP are defined with respect to the robot base coordinate system, or the tool can use its own coordinates. Figure 2.9 shows the robot base and tool coordinate systems. For many applications the TCP coordinates are recorded with respect to the robot's base coordinate system. The base coordinate system of the robot can be described as follows:

• It is located on the base of the robot.

• The origin is located at the intersection point of axis 1 and the robot's base mounting surface.

• The xy- plane coincides with base mounting surface such that x-axis points forward from the base and y-axis points to the left (From the robot’s perspective).


Figure 2.10: Robot wrist coordinate system [ABB, 2004]

• The wrist coordinate system always remains the same as the mounting flange of the robot.

• The origin (TCP) is located at the center of the mounting flange.

• The z-axis points outwards from the mounting flange.

• At the point of calibration, the x-axis points in the opposite direction, towards the origin of the base coordinate system.

• The y-axis points to the left and can be seen as a parallel axis to the y-axis of base coordinate system.

Robot motion controller

The motion of the ABB IRB 140 is controlled by a special purpose fifth-generation robot controller, the IRC5, designed by ABB Robotics. The IRC5 controller is embedded with all the functions and controls needed to move and control the robot. This controller combines motion control, flexibility and safety with PC tool support, and optimizes the robot performance for short cycle times and precise movements. Because of its MultiMove function, it is capable of synchronizing up to four robot controls. The standard IRC5 controller supports the high-level robot programming language RAPID and also features a well designed hand-held interface unit called the FlexPendant (or teach pendant), which is connected to the controller by an integrated cable and connector.

Robot software and programming

RobotStudio is a computer application for the offline creation, programming and simulation of robot cells. This application is used to simulate the robot in offline mode and to work on the controller directly in online mode, as a complement to the FlexPendant. Both the FlexPendant and RobotStudio are used for programming: the FlexPendant is best suited for modifying positions and path sequences in the program, whereas RobotStudio is used for more complex programming (e.g. socket programming).

2.2.2 Grasping system

The grasping system used in this thesis consists of two components, namely:

1. A flexible gripper for grasping objects.

2. A flexible fixture for fixing the grasped objects.

Both of these components were designed and prototyped at the AASS research laboratory.

Flexible gripper

The gripper prototype shown in Figure 2.11 has three identical single-joint fingers, providing a balance between functionality and an increased level of dexterity compared to standard industrial grippers. The base (palm) increases the functionality of the gripper by providing the possibility of different relative orientations of the fingers. One of the fingers is fixed to the base, while the other two can symmetrically rotate up to 90° each. Driven by four motors, the gripper is capable of:

• Grasping workpieces of various shapes and sizes (4 – 300 mm).

• Holding the part rigidly after grasping.


Figure 2.11: Flexible gripper

Flexible Gripper – FG 2.0

Type                             | 3 fingered, servo driven
Gripping method                  | Outside
Payload                          | 2.5 kg
Gripping force for every finger  | max. 30 N
Course for every finger          | 145 mm
Time for total finger course     | 10 s
Rotation speed of the finger     | max. 14.5 mm/s
Rotation degree of moved fingers | max. 90°
Dimensions                       | 397 × 546 × 580

Table 2.2: Flexible gripper technical configuration [Ananiev, 2009]

Sensory system

The sensory system equipping the gripper prototype provides vital feedback information to implement a closed-loop control of the gripper (finger) movement. Mainly two types of sensors are used in this system:

1. Tactile sensors for touch information.

2. Limit switches⁶ for position information.

⁶ Limit switches are switching devices designed to cut off power automatically at the limit of travel of a moving object.


With the latest prototype of the gripper, the contact surfaces of the fingers are covered with tactile sensing pads to enable tactile feedback during grasping movements. These sensors are based on force sensing resistors, whose resistance value changes whenever the finger comes in contact with an object.

Limit switches, which are commonly used to control the movement of mechanical parts, are the second type of sensors used in this system. The limit switches fitted to the three fingers of the gripper are used to determine the coarse finger positions, and the attached optical shaft encoders estimate the precise position information of the fingers.

Finger configurations

The three available configurations for the finger movement are shown in Figures 2.12, 2.13 and 2.14. Table 2.3 shows the number of fingers used and the suitable type of grasping for each finger configuration.

Figure 2.12: Finger configuration 1


Figure 2.14: Finger configuration 3

Configuration | No. of fingers used | Type
1             | 3                   | Grasp long objects
2             | 3                   | Grasp circular objects
3             | 2                   | Grasp small objects

Table 2.3: Flexible gripper finger configuration

Flexible fixture

Figure 2.15: Flexible fixture

The fixture prototype shown in Figure 2.15 features 4 DOF and consists of one driving module and two pairs of connecting and grasping modules. Table 2.4 provides the complete technical configuration of the fixture. The driving module consists of two identical parts connected by an orienting couple. The two types of movement provided by the driving module are linear and rotary. The linear movement is accomplished using a gear motor, a ball screw pair converting the rotary motion of the nut into a reciprocating one, and a sliding block connected to it, whereas the rotary movement is accomplished using a gear motor passing rotary motion via a toothed-belt connection to the ball-linear pairs. The connecting modules are responsible for holding the grasping modules containing the finger pairs. The fixture is designed in such a way that it can open and close the finger pairs independently of each other. The main benefits of this fixture architecture are:

• Grasping a wide range of objects with different shapes and sizes (see Table 2.5 for allowed dimensions).

• Holding the objects firmly.

• Self-centering of the objects.

• Precise control over the horizontal movement of holding modules and the vertical movement of fingers.

Type                                | 4 fingered flexible fixture
Type of grasp                       | Inside and outside
Driving                             | Electromechanical, 24 V
Run of every holder                 | 50 mm
Force of grasping                   | 2500 N
Time for full travel of the holders | 2.5 s
Operational time for grasping       | 0.5 s
Maximum rotational speed            | 360°/s
Maximum torque                      | 7.5 N·m
Maximum angle of rotation           | 210°
Positioning accuracy                | ±0.05
Max. weight of the grasped detail   | 10 kg
Dimensions                          | 550 mm × 343 mm × 150 mm
Weight                              | 22 kg

Table 2.4: Flexible fixture technical configuration

Motion control

Figure 2.16: Galil motion controller

The motions of both the gripper and the fixture are controlled by two separate Galil DMC-21x3 Ethernet motion controllers. Figure 2.16 shows the Galil DMC-21x3 motion controller. With a 32-bit microcomputer, the DMC-21x3 provides advanced features like PID compensation with velocity and acceleration, program memory with multitasking, and uncommitted I/O for synchronizing motion with external events. The encoder and limit switch information can be accessed through Galil's special purpose software tools (GalilTools) for motion controllers. This tool is also used for sending and receiving Galil commands. The integrated Watch Tool is used to monitor the controller status, such as I/O and motion, throughout the operation. The GalilTools C++ communication library (the Galil class is compatible with the g++ compiler in Linux) provides various methods for communicating with a Galil motion controller over Ethernet [gal].
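As an illustration of how such a library can be driven from the cell computer, the sketch below connects to a controller, downloads a tiny motion program and sends a few commands. The IP address and the DMC program are placeholders, and the method names (command, programDownload) and the string-exception convention are assumed from the GalilTools communication library rather than taken from this thesis; they may differ between library versions.

#include <Galil.h>     // GalilTools C++ communication library
#include <iostream>
#include <string>

int main() {
    try {
        // Placeholder controller address on the cell LAN.
        Galil g("192.168.1.10");

        // Download a tiny illustrative DMC program and execute it
        // (PR = relative move, BG = begin motion, AM = after motion, EN = end).
        g.programDownload("#GRASP\nPR 1000\nBGA\nAMA\nEN\n");
        std::string reply = g.command("XQ #GRASP");
        std::cout << "controller reply: " << reply << std::endl;

        // Poll an encoder position (TPA = Tell Position of axis A, a standard Galil command).
        std::cout << "finger A position: " << g.command("TPA") << std::endl;
    } catch (const std::string& err) {   // error-reporting convention assumed here
        std::cerr << "Galil error: " << err << std::endl;
    }
    return 0;
}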

2.2.3 Camera

Figure 2.17: Camera

The camera used in this thesis is a Logitech Webcam Pro 9000 (see Figure 2.17). This camera is fixed in the midpoint of the flexible gripper. A source of illumination is integrated with the camera in order to provide a uniform light distribution over the scene of view. Table 2.6 provides the technical specifications of the camera.


Focal length     | 3.7 mm
Lens iris        | F/2.0
Megapixels       | 2 (enhanced up to 8)
Focus adjustment | Automatic
Video resolution | 1600 × 1200
Image resolution | 1600 × 1200
Optical sensor   | CMOS
Frame rate       | Up to 30 FPS
Communication    | USB 2.0

Table 2.6: Camera specifications

2.3 Functions in FMS

As explained in Section 1.1, a flexible manufacturing system is a

“highly automated group technology (GT) machine cell, consisting of a group of workstations, interconnected by an automated ma-terial handling and storage system and controlled by a distributed computer system [Groover, 2000].”

These types of systems are mainly designed to produce various parts which are defined within a range of different sizes and shapes. They are a group of computer guided machines used to produce various products based on the controller (CNC machine) instructions [Mitchell]. The important characteristic of these systems is flexibility, which comes from their capability of handling different families of parts and their adaptability to task changes. Generally, these FMSs can be distinguished into various types, such as the single machine cell, the flexible manufacturing cell and the flexible manufacturing system, based on the number of CNC machines used for operation [Leicester, 2009]. A single machine cell consists of only one CNC machine, while a flexible manufacturing cell consists of two or three CNC-controlled workstations along with automated handling tools.

The flexible robot assembly cell used for this thesis consists of:

• A CNC machine tool responsible for controlling the overall assembly operation.

• An automated robot whose movements can be controlled by programming.

• A cell computer which is used to control the program flow and to coordinate the activities of the workstation by communicating with the CNC. It is also used to monitor the work status.


Figure 2.18: Simulated view of robot-centered FMS [Festo solution center]

These types of systems are also called robot-centered FMS. A simulated view of one such system is shown in Figure 2.18. All the required instructions for the flexible assembly cell are programmed and ported into the controllers before starting the assembly operation. All the components in the flexible assembly cell are connected under a common network, such that the communication between the different machines and the cell computer is performed over a high speed intranet. One main advantage of this model is that all the instructions required for the task can be programmed offline, tested in a simulation environment and then executed online directly on the robot. The basic functionalities provided by this flexible assembly cell are:

• Sensorial perception and status identification.

• Interactive planning in order to assure stable performance under process changes.

• On-line decision making capabilities such as process monitoring, error recovery and handling tool breakage.

2.4 Summary

This chapter has provided an overview of the previous work done in the field of vision based grasping, together with a survey of existing techniques relevant to grasp planning methodologies, eye-in-hand control and the extraction of workpiece geometrical information from images. Technical details of the various resources used in this thesis, such as the robot, CNC controller, camera, flexible gripper and flexible fixture, along with a short description of the flexible assembly cell and its functionalities, are also provided in this chapter. The concepts mentioned in this chapter are relevant to the main contributions of this thesis.


3.1 Experimental setup

In order to demonstrate the capabilities of the developed system, a test assembly containing four different workpieces has been designed, as shown in Figure 3.1. These four objects differ from each other in size and shape and are initially placed at different locations in the robot workspace. The main goal of the system is to identify the workpieces separately using the information received from the camera and to develop a planning methodology for controlling the entire posture of the arm and gripper in order to assemble them. The parts identification sequence is shown in Figure 3.2.

Figure 3.1: Test assembly


Figure 3.2: Test assembly parts sequence

3.2 System architecture

Figure 3.3: System architecture

A conceptual diagram of the proposed system is shown in Figure 3.3. The complete architecture is divided into four different subsystems, described as follows:


produces an output containing the object feature information. This output serves as the input for the shape and position identification block, and a corresponding output containing the object spatial information is produced. This information serves as input to the grasp planning block and an overall plan for grasping is developed. The two interface blocks, for the arm and for the Galil motion controllers, are used to communicate with the ABB IRB 140 robot arm and with the flexible gripper and fixture respectively. These two interface blocks serve as a bridge between the software and the hardware.

Robot arm subsystem is responsible for controlling the arm actions based on the information received from the controller subsystem. The task of this subsystem is to send/receive the position information of the robot end effector to/from the controller. This subsystem was developed using ABB's RAPID programming language.

Fixture and gripper subsystems are responsible for controlling the actions of the flexible fixture and gripper respectively, based on the information received from the controller subsystem. These subsystems were developed using the special purpose Galil programming language for motion controllers.

3.2.1 Work flow

The flow chart shown in Figure 3.4 provides a step-by-step solution to the problem of vision based assembly operation. As the system operates in the same manner for all the parts, a single cycle is displayed in the diagram. The system runs this cycle every time the variable PN (part number) increments. The work flow starts by initializing the robot, gripper and fixture. In this step, all three devices are moved to a predefined position and orientation. In the next step, the camera device and the MATLAB engine are initialized; if any problem occurs during this step, the system execution is stopped. Even though the fixture and the gripper operate in a similar way, their execution steps are shown separately because the fixture is responsible only for holding the shaft (PN = 1), and the remaining parts are mounted on this shaft. As the camera is fixed inside the gripper, it cannot cover the total workspace in a single frame. Therefore, the total region is divided into several subregions, for which the predefined positions POS1, POS2 and POS3 are specified. All workpieces should be found in these three subregions.

If the initialization process is successful, the system starts executing the main cycle by setting the variable PN to 1. As the primary step of this cycle, the robot arm is moved to the predefined position POS1 in order to search for a particular workpiece in its field of view. If the workpiece is identified at this position, the system starts executing the next steps; otherwise the arm moves to the next predefined position, POS2, to find the part. Once the part is identified, its position and shape details are computed and the arm is commanded to move to a pre-grasping pose. At this point, the gripper's fingers are adjusted depending on the received shape information. In the next step, the arm is commanded to move near the object and the gripper grasps the workpiece. Once the grasping operation is successful, the camera is no longer in use and the arm is moved to a predefined mid-point position in order to avoid a collision with the fixture while fixing the part. At this point of the operation, the fixture jaws are positioned in order to hold the object. As the final step in this cycle, the arm moves over the fixture and delivers the grasped part. The whole cycle is executed repeatedly for all the parts, and the system execution stops once the assembly operation is finished.
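The cycle described above can be summarised as a small control loop. The C++ sketch below is only an illustration of the flow; the function names and their stub bodies are invented placeholders standing in for the real robot, gripper, fixture and vision interfaces of the cell, not the code actually used in this thesis.

#include <array>
#include <cstdio>

// Invented placeholder interfaces; each stub just logs what it would do.
enum Position { POS1, POS2, POS3, MID_POINT };

static bool initialiseDevices()            { std::puts("init robot/gripper/fixture/camera"); return true; }
static void moveArmTo(Position p)          { std::printf("move arm to position %d\n", static_cast<int>(p)); }
static bool recognisePart(int pn, double* x, double* y, double* th) {
    *x = 100.0; *y = 200.0; *th = 30.0;    // pretend the part was seen here
    std::printf("part %d recognised\n", pn);
    return true;
}
static void setGripperPregrasp(int pn, double th) { std::printf("pre-shape fingers for part %d (%.1f deg)\n", pn, th); }
static void moveArmToGrasp(double x, double y, double th) { std::printf("approach part at (%.1f, %.1f), %.1f deg\n", x, y, th); }
static bool graspPart()                    { std::puts("close fingers until tactile contact"); return true; }
static void fixPartOnFixture(int pn)       { std::printf("deliver part %d to the fixture\n", pn); }

int main() {
    if (!initialiseDevices()) return -1;                    // stop on initialisation failure

    const std::array<Position, 3> searchOrder = {POS1, POS2, POS3};
    const int numParts = 4;
    for (int pn = 1; pn <= numParts; ++pn) {                // PN = part number
        bool found = false;
        double x = 0, y = 0, theta = 0;
        for (Position pos : searchOrder) {                  // search the predefined subregions
            moveArmTo(pos);
            if (recognisePart(pn, &x, &y, &theta)) { found = true; break; }
        }
        if (!found) { std::printf("assembly error: part %d missing\n", pn); return -1; }

        setGripperPregrasp(pn, theta);                      // finger pre-shape from shape/orientation
        moveArmToGrasp(x, y, theta);
        if (!graspPart()) return -1;
        moveArmTo(MID_POINT);                               // avoid collision with the fixture
        fixPartOnFixture(pn);                               // part 1 is held by the fixture, the rest are mounted on it
    }
    std::puts("assembly finished");
    return 0;
}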


3.3 Object identification

This section describes the methods developed to identify the workpieces from images. The workpieces in this assembly operation are a cylindrical shaft and two circular gears, which were already shown in the test assembly setup. As stated earlier in Section 3.1, these workpieces have to be identified in a specific order during assembly execution. The implementation procedures for part identification are explained below.

3.3.1 Shaft recognition

The procedure used to recognize the shaft was implemented in MATLAB based on the image background subtraction and boundary extraction algorithms. The pseudo code in Table 3.1 presents the basic steps of the procedure.

Initially, a background image of the workspace is captured without any workpieces. For every new frame from the camera, the proposed algorithm for shaft recognition subtracts this new frame from the background image. The result of this step is a binary image containing the regions of interest (ROI) of the different objects. Figures 3.5 and 3.6 show the background image, the original image and the resulting binary image after image subtraction. The area of an ROI has to be greater than a certain threshold in order to avoid the recognition of small objects; this threshold is chosen manually by a trial-and-error method. Next, in order to determine the type of the workpiece, the curves that compose the boundaries are found in a systematic way. A threshold is obtained to discriminate between the shaft and the remaining work objects: the boundary region whose total number of curves is less than this threshold is taken as the region containing the shaft. Figure 3.7 shows the extracted curve pixels of a boundary region. As a cross-checking approach, another threshold is obtained based on the difference between the major axis and the minor axis of the found region. This step is performed in order to eliminate regions misjudged in the previous step. The region remaining after this step is taken as the shaft region. The object orientation is determined from the angle between the major axis of the detected region and the X-axis. Source code for shaft identification is given in Appendix A.
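The shaft detector itself was written in MATLAB (Appendix A); purely as an illustration of the same background-subtraction idea, the following C++/OpenCV sketch performs the subtraction, area filtering and major/minor-axis test. The file names and threshold values are placeholders, and an ellipse fit stands in for the curve-pixel counting of Table 3.1.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    const cv::Mat background = cv::imread("background.png", cv::IMREAD_GRAYSCALE);
    const cv::Mat frame      = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    if (background.empty() || frame.empty()) return -1;

    // |frame - background| followed by a fixed threshold gives the ROI mask.
    cv::Mat diff, mask;
    cv::absdiff(frame, background, diff);
    cv::threshold(diff, mask, 40, 255, cv::THRESH_BINARY);

    // Boundaries of the candidate regions.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    const double areaThreshold  = 500.0;   // reject small spurious regions
    const double shaftThreshold = 60.0;    // elongation test: the shaft is long and thin
    for (const auto& c : contours) {
        if (cv::contourArea(c) < areaThreshold || c.size() < 5) continue;

        // Fit an ellipse to obtain major/minor axis lengths and orientation,
        // mirroring the feature vector [M1, M2, O, A] of Table 3.1.
        const cv::RotatedRect e = cv::fitEllipse(c);
        const double M1 = std::max(e.size.width, e.size.height);
        const double M2 = std::min(e.size.width, e.size.height);

        if (M1 - M2 > shaftThreshold)
            std::printf("shaft candidate at (%.1f, %.1f), orientation %.1f deg\n",
                        e.center.x, e.center.y, e.angle);
    }
    return 0;
}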


Figure 3.5: (A) Background image (B) Current frame

Figure 3.6: Subtracted image


Table 3.1: Pseudo code for shaft recognition

1:  Capture the background image BG
2:  Subtract each new frame from BG
3:  for each resulting region
4:      if (area > area_threshold)
5:          add this region to ROI
6:      end if
7:  end for
8:  for each resulting ROI
9:      extract the boundaries
10:     for each resulting boundary
11:         extract the curves
12:     end for
13:     for each extracted curve
14:         find the number of curve pixels NCP
15:     end for
16:     if (NCP < curve_threshold)
17:         find FV^T = [M1, M2, O, A]
18:             where FV is a vector containing object features
19:             M1 is the major axis of the region
20:             M2 is the minor axis of the region
21:             O is the orientation of the major axis
22:             A is the area of the region
23:         find the difference D = M1 - M2
24:         set the value of found to one
25:     end if
26: end for
27: if (found = 1 and D > shaft_threshold)
28:     compute the centroid
29:     return shaft found
30: else
31:     return shaft not found
32:     Go To step 2


with the exact search for circles, these captured frames require preprocessing. For this, each captured frame is converted into a gray scale image and undergoes histogram equalization in order to normalize the intensity levels. A median filter of size 11 × 11 is then applied to the equalized image to reduce the noise level. Once the preprocessing stage is completed, edge features are extracted from the image using the Canny edge detection algorithm. The Canny edge detection algorithm applies two thresholds, which are used for edge linking and for finding the initial segments of strong edges respectively. These two thresholds are obtained using a trial-and-error method. The result of this step is a binary image containing edges. Figure 3.8 shows the original image along with its computed edges.

Figure 3.8: (A) Original image (B) Edge image

The proposed algorithm uses this edge image to search for the presence of Hough circle candidates with a specific radius. If one or more circle candidates are found, the captured image is considered for the rest of the recognition process; otherwise it is discarded and a new frame is captured. If more than one circle candidate is found, a filter is applied to the found circle candidates in order to limit their total count to two (assuming that both gears are placed in the same subregion). This filter is designed based on the manually measured radii of the two gears. As a next step, the radii of these circles are computed: the circle with the smallest radius satisfying the small-gear radius condition is taken as the smaller gear, and the circle with the largest radius satisfying the big-gear radius condition is taken as the bigger gear. Figure 3.9 shows the recognized gears. On the other hand, if the gears are placed in different subregions, i.e. if only one circle candidate is found, two different thresholds are used (based on their radii) to recognize the gears. At a particular position, the time to search for and recognize the gears is fixed; if the gears are not recognized within this time, the robot is commanded to move to one of the other predefined positions POS1, POS2 or POS3, as mentioned earlier. If either of the two gears is found to be missing from the robot workspace, the system produces an assembly error message and the program execution is stopped. Source code for gear identification is given in Appendix B.
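The gear detector was likewise written in MATLAB (Appendix B); as an illustration only, a C++/OpenCV version of the same preprocessing and Hough-circle search could look as follows. The radius bands, Hough parameters and file name are invented placeholder values, not the thesis measurements.

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    const cv::Mat gray = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return -1;

    // Pre-processing: histogram equalisation followed by an 11 x 11 median filter.
    cv::Mat eq, smooth;
    cv::equalizeHist(gray, eq);
    cv::medianBlur(eq, smooth, 11);

    // Hough circle transform (a Canny edge detector is applied internally;
    // param1 is the upper Canny threshold, param2 the accumulator threshold).
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(smooth, circles, cv::HOUGH_GRADIENT, 1, 40, 120, 40, 15, 90);

    // Illustrative radius bands (in pixels) for the two gears.
    const float smallMin = 20, smallMax = 35, bigMin = 45, bigMax = 70;
    for (const cv::Vec3f& c : circles) {
        const float x = c[0], y = c[1], r = c[2];
        if (r > smallMin && r < smallMax)
            std::printf("small gear at (%.1f, %.1f), r = %.1f px\n", x, y, r);
        else if (r > bigMin && r < bigMax)
            std::printf("big gear at (%.1f, %.1f), r = %.1f px\n", x, y, r);
    }
    return 0;
}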


Table 3.2: Pseudo code for gear recognition

9:  for (circle_candidate = 1 to 2)
10:     compute the radii R1 and R2
11:     if (SMALL_GEAR)
12:         if (R1 < R2) and (small_min < R1 < small_max)
13:             compute the centroid of the circle
14:             return the centroid pixel coordinates
15:             found = true
16:             break the for loop
17:         else if (R2 < R1) and (small_min < R2 < small_max)
18:             do steps 13 to 16
19:         end if
20:     else if (BIG_GEAR)
21:         if (R1 > R2) and (big_min < R1 < big_max)
22:             do steps 13 to 16
23:         else if (R2 > R1) and (big_min < R2 < big_max)
24:             do steps 13 to 16
25:         end if
26:     end for
27:     break the while loop
28: end if
29: else if (circles_found == 1)
30:     compute the radius R1
31:     if (SMALL_GEAR)
32:         if (small_min < R1 < small_max)
33:             do steps 13 to 16
34:     else if (BIG_GEAR)
35:         if (big_min < R1 < big_max)
36:             do steps 13 to 16
37:     end if
38: else
39:     Go To step 2
40: end while


3.4 Automatic planner for arm posture control

Once a workpiece is identified in the robot workspace, the next step is to compute its spatial location with respect to the robot and its possible stable grasping state depending on its orientation. Based on this information, the arm's posture can be controlled automatically. This is performed in several steps.

1. Initially the robot is moved to a predefined position in the robot's workspace. As the robot is calibrated, its TCP position coordinates are available for further computation.

2. The next step is to find the camera location in the robot world coordinates. This is performed by calibrating the camera with respect to the robot. The camera calibration procedure used in this thesis is described in Appendix C.

3. From the above step, a final transformation matrix containing the camera rotation R (3 × 3) and translation T (3 × 1) with respect to the robot coordinate system is computed. This transformation matrix, along with the camera intrinsic parameter matrix K, is used to compute the object's location in the robot frame. This can be explained as follows. Let us consider a 2D point m(u, v) in the image which corresponds to the centroid of the recognized object. For a 2D point m in an image, there exists a collection of 3D points that are mapped onto the same point m. This collection of 3D points constitutes a ray P(λ) connecting the camera center Oc(x, y, z)^T and m(x, y, 1)^T, where λ is a positive scaling factor that defines the position of the 3D point on the ray. The value of λ is taken as the average back-projection error of a set of known points in 3D. This value is used to obtain the X and Y coordinates of the 3D point using

\[
M = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
= T + \lambda R^{-1} K^{-1} m
\qquad (3.1)
\]

As the vision system used is monocular, it is not possible to compute the value of Z, i.e. the distance between the object and the camera, from this relation; instead it is computed by using the robot TCP coordinates and the object model (a small numeric sketch of this back-projection is given after this list).

4. The next step is to compute the orientation of the object in the robot workspace.

This is performed by fitting the detected object region (in the image) to an ellipse. The orientation of the object then corresponds to the orientation of the major axis of the ellipse with respect to its x-axis. Based on this orientation, a final rotation matrix, also called a direction cosine matrix, is computed using the current camera rotation. This rotation matrix is

6. Once the robot has moved to the object grasping location, a suitable grasping type (2- or 3-fingered) is selected automatically based on the object's orientation.
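The following C++ sketch illustrates the back-projection of equation (3.1) numerically. All calibration values are made up (R is taken as the identity so that R^-1 is trivial), and, as noted in step 3, in the real system the Z value comes from the TCP coordinates and the object model rather than from this computation.

#include <array>
#include <cstdio>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

static Vec3 mul(const Mat3& A, const Vec3& x) {
    Vec3 y{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

int main() {
    // Illustrative intrinsics (pixels) and their inverse; not the calibration
    // values obtained in Appendix C.
    const double f = 800.0, ax = 320.0, ay = 240.0;
    const Mat3 Kinv = {{{1.0 / f, 0, -ax / f}, {0, 1.0 / f, -ay / f}, {0, 0, 1}}};

    // Camera pose w.r.t. the robot: identity rotation (so R^-1 = I) and a translation T.
    const Mat3 Rinv = {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};
    const Vec3 T = {0.40, 0.10, 0.60};

    // Centroid of the recognised object in homogeneous pixel coordinates, and the
    // scale factor lambda fixing the point along the viewing ray.
    const Vec3 m = {368.0, 256.0, 1.0};
    const double lambda = 0.45;

    // Equation (3.1): M = T + lambda * R^-1 * K^-1 * m
    const Vec3 ray = mul(Rinv, mul(Kinv, m));
    const Vec3 M = {T[0] + lambda * ray[0], T[1] + lambda * ray[1], T[2] + lambda * ray[2]};

    std::printf("object at X = %.3f, Y = %.3f, Z = %.3f (robot frame)\n", M[0], M[1], M[2]);
    return 0;
}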

3.5 Synchronizing arm and gripper motions

The motions of the robot, the gripper and the fixture are controlled independently by a set of predefined functions that are defined in their motion controllers. As both the gripper and fixture motions are controlled in a similar manner, the fixture is not discussed separately in this section.

3.5.1 Arm motion control

As explained in Section 2.2, the robot's motion is controlled by the ABB IRC5 controller. The software embedded in this controller has libraries containing predefined functions and instructions, developed in RAPID, for arm motion control. For this project, a software program was developed in RAPID containing all the required instructions to control the robot's motion autonomously and to communicate with the cell computer. The movements of the robot are programmed as pose-to-pose movements, i.e. move from the current position to a new position, and the robot automatically calculates the path between these two positions. The basic motion characteristics (e.g. the type of path) are specified by choosing appropriate positioning instructions. Both the robot and the external axes are positioned by the same instructions. Some of the positioning instructions used in this thesis are shown in Table 3.3 and an example program is shown in Figure 3.10.


Instruction | Type of movement (TCP)
MoveC       | Moves along a circular path
MoveJ       | Joint movement
MoveL       | Moves along a linear path
MoveAbsJ    | Absolute joint movement

Table 3.3: Robot positioning instructions

Figure 3.10: Sample code for robot motion control

The syntax of a basic positioning instruction is

MoveL p1, v500, z20, tool1

This instruction requires the following parameters in order to move the robot:

• Type of path: linear (MoveL), joint motion (MoveJ) or circular (MoveC).

• The destination position, p1.

• Motion velocity, v500 (Velocity in mm/s).

• Zone size (accuracy) of the robot destination position, z20.

• Tool data currently in use, tool1.

The zone size defines how close the robot TCP has to move to the destination position. If this is defined as fine, the robot moves to the exact position. Figure 3.11 illustrates this.


Figure 3.11: Robot zone illustration diagram

3.5.2 Gripper motion control

Gripper motion refers to the finger movements. In order to control these movements, a Galil motion controller is used with the gripper prototype (see Section 2.2). The main task of this Galil controller is to control the speed of the motor associated with each finger. Each motor in the prototype is associated with a different axis encoder in order to record the finger positions. Generally, the motor speeds can be controlled by executing specific commands (Galil commands) on the controller using the GalilTools software. For this thesis, a low-level control program containing the required Galil commands was developed and downloaded to the controller to command the finger movements autonomously during the different stages of the process execution. The finger movements are programmed in different steps, and all these steps constitute a grasp cycle. These steps are described below.

Initialization is the primary step performed by the gripper fingers before starting the assembly process. During this step all three fingers are commanded to move to their home positions. The information provided by the limit switches (fixed at the home positions) is used to stop the fingers once they have reached home. This step is mainly required to give the camera a clear view of the workspace.

Pregrasping is executed after a workpiece has been identified by the vision system. During this step the fingers are commanded to move to a pregrasping pose, which is computed from the workpiece size, shape and orientation. This step is mainly required to avoid collisions of the gripper fingers with other workpieces. After this step the finger motors are turned off.

Grasping and holding are executed when the arm reaches the workpiece and is ready to grasp. During these steps the fingers start closing, and their movement is stopped based on the tactile information received from the force sensors fixed on the finger tips. After this step the motors are kept on in order to hold the workpiece until it is released.

Releasing is the final and simple step executed at the end of every grasp cycle. During this step the finger motors are turned off in order to release the workpiece.
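The sketch below makes the ordering of these steps explicit. The GripperController type and its methods are hypothetical stand-ins for the Galil routines of the real prototype; only the sequencing reflects the description above.

// Hypothetical sketch of one grasp cycle; the method bodies are stubs.
#include <iostream>

struct GripperController {
    void moveFingersHome()        { std::cout << "fingers -> home\n"; }
    void moveFingersTo(double mm) { std::cout << "fingers -> " << mm << " mm\n"; }
    void closeUntilContact()      { std::cout << "closing until contact\n"; }
    void motorsOff()              { std::cout << "motors off\n"; }
};

// One grasp cycle: initialization, pregrasping, grasping/holding, releasing.
void runGraspCycle(GripperController& g, double pregraspAperture_mm)
{
    g.moveFingersHome();                  // Initialization: clears the camera view
    g.moveFingersTo(pregraspAperture_mm); // Pregrasping: opening from object size/shape
    g.motorsOff();                        // finger motors off after pregrasping
    g.closeUntilContact();                // Grasping: stop on finger-tip force sensors
    // Holding: motors stay energized while the arm performs the assembly step.
    g.motorsOff();                        // Releasing: de-energize to let go
}

int main()
{
    GripperController gripper;
    runGraspCycle(gripper, 60.0);         // 60 mm pregrasp opening is arbitrary
    return 0;
}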

3.5.3 Interface to synchronize motions

As the previous subsections show, the control programs for the individual devices are written in different programming languages tied to their motion controllers, which makes them difficult to integrate in a single program. For a flexible process execution it is therefore necessary to develop a software interface that can interact with all devices simultaneously. For this thesis, such an interface program was developed in C++ to communicate with the robot controller as well as to download control programs to the Galil controllers and to execute Galil commands. The interface runs on the cell computer, a normal PC running Ubuntu. The cell computer is also responsible for:

• Controlling the overall process execution.

• Monitoring the process status.

• Communicating with the various devices in the cell for activity coordination.

• Handling assembly errors.

• Executing the object detection programs.

As all the devices and the cell computer are connected to a common LAN with distinct IP addresses, the interface program can interact with each of them over Ethernet by connecting to the corresponding IP address.

Interface interaction with robot controller


Figure 3.13: Sequential diagram for the client-server model

• Initially, sockets are created on both the server and the client (by default, a server socket is created when the server program is executed).

• The client requests a socket connection with the server on a specific port.

• If the requested port is free, the server establishes the connection and is ready to communicate with the client.

• Once the connection is established, the client program sends position information to, and receives position information from, the server program.

• Both sockets are closed when the data transfer is complete.

The server program is mainly responsible for receiving the position information from the client and executing it on the robot. This position information is transferred from the client in the form of strings. An example line of code is

char pos[256] = "[50.9,39,-10.7,-34.5,18.5,-14.9]1";

These strings are preprocessed in the server and converted to float/double values. The movement type, i.e. linear or joint, is determined by the value after the closing bracket in the passed string (e.g. 1 in the code above): 1 and 2 request a joint movement using MoveJ, and 3 requests a linear movement using MoveL. Once the instruction has been executed on the robot, the server sends the current position of the TCP back to the client in the form

[X,Y,Z][Q1,Q2,Q3,Q4][cf1,cf4,cf6,cfx]

where [X, Y, Z] are the TCP position coordinates, [Q1, Q2, Q3, Q4] is the TCP orientation quaternion and [cf1, cf4, cf6, cfx] is the robot axis configuration. The source code of the server and the client is given in Appendix D.
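A stripped-down client following this exchange might look as below. This is only a sketch assuming a plain TCP socket: the IP address, port number and buffer size are placeholders, not the values used by the actual interface in Appendix D.

// Minimal client sketch: send one joint target string, read back the TCP pose.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <iostream>
#include <string>

int main()
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { std::perror("socket"); return 1; }

    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port = htons(5000);                         // placeholder port
    inet_pton(AF_INET, "192.168.1.10", &server.sin_addr);  // placeholder robot IP

    if (connect(sock, reinterpret_cast<sockaddr*>(&server), sizeof(server)) < 0) {
        std::perror("connect");
        return 1;
    }

    // Joint target in degrees; the trailing '1' requests a MoveJ-type motion.
    std::string cmd = "[50.9,39,-10.7,-34.5,18.5,-14.9]1";
    send(sock, cmd.c_str(), cmd.size(), 0);

    // Reply of the form "[X,Y,Z][Q1,Q2,Q3,Q4][cf1,cf4,cf6,cfx]".
    char reply[256] = {0};
    ssize_t n = recv(sock, reply, sizeof(reply) - 1, 0);
    if (n > 0)
        std::cout << "current TCP pose: " << reply << std::endl;

    close(sock);
    return 0;
}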

Interface interaction with Galil controller

The interface for the Galil controller is developed using predefined functions of the Galil communication library. These functions are used to connect to the controller, to download the control program, and to read specific digital outputs. The basic functionalities provided by the communication library are:

• Connecting to and disconnecting from a controller.

• Basic communication.

• Downloading and uploading embedded programs.

• Downloading and uploading array data.

• Executing Galil commands.

• Access to the data record in both synchronous and asynchronous modes.

It is also possible to connect to and communicate with more than one Galil controller at the same time. The source code is given in Appendix E, and an illustrative connection sketch follows below.
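As a rough sketch of how such a connection could look, the snippet below downloads an embedded program and executes a single command. The class and method names (Galil, programDownload, command) follow my reading of the GalilTools C++ library and should be checked against its documentation; the header name, controller address and DMC program are placeholders.

// Rough sketch only; verify class/method names against the GalilTools manual.
#include <Galil.h>      // assumed header of the GalilTools communication library
#include <iostream>
#include <string>

int main()
{
    try {
        Galil g("192.168.1.20");                  // placeholder controller address

        // Download a (placeholder) embedded finger-control program.
        std::string program = "#HOME\nEN\n";
        g.programDownload(program);

        // Execute a Galil command, e.g. run the #HOME routine.
        std::string reply = g.command("XQ #HOME");
        std::cout << "controller replied: " << reply << std::endl;
    } catch (std::string& e) {                    // library is assumed to throw strings
        std::cerr << "Galil error: " << e << std::endl;
        return 1;
    }
    return 0;
}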


So a fixed assembly plan is used for assembling the pin.

• The developed system cannot identify the pin hole present on the shaft.

• Even though the background subtraction technique produces good results, it is sensitive to illumination changes and to noise present in the background.


The test environment used for evaluating the developed system is shown below.

Figure 4.1: Test environment


4.2 Assembly representation

As stated in Section 3.1, the test assembly consists of a cylindrical shaft, two circular gears and one pin. In order to generate an assembly plan for this test assembly, a computer representation of the mechanical assembly is required. In general, an assembly of parts can be represented by an individual description of each component together with their relationships in the assembly. The knowledge base contains all the related information: the geometric models of the components, their spatial orientations and the assembly relationships between them. Based on this knowledge, the assembly is represented by a graph structure, as shown in Figure 4.2, in which each node represents an individual component of the assembly and the connecting links represent the relationships among them.

Figure 4.2: Graph structure of test assembly
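As a small aside to Figure 4.2, such a graph can be stored as a plain adjacency structure. The sketch below is illustrative only; the component names and relations mirror the test assembly loosely and are not taken from the thesis knowledge base.

// Illustrative adjacency-list representation of an assembly graph.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct AssemblyGraph {
    // component name -> components it is directly related to in the assembly
    std::map<std::string, std::vector<std::string>> relations;

    void addRelation(const std::string& a, const std::string& b) {
        relations[a].push_back(b);
        relations[b].push_back(a);   // assembly relations are treated as undirected
    }
};

int main()
{
    AssemblyGraph g;
    g.addRelation("shaft", "small_gear");  // small gear mounts on the shaft
    g.addRelation("shaft", "big_gear");    // big gear mounts on the shaft
    g.addRelation("shaft", "pin");         // pin is installed in the shaft's hole

    for (const auto& node : g.relations) {
        std::cout << node.first << ":";
        for (const auto& neighbour : node.second)
            std::cout << " " << neighbour;
        std::cout << "\n";
    }
    return 0;
}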

4.2.1 Assembly sequence selection

Any assembly should follow a predefined assembly sequence, with content specific to the respective components. In order to find a correct and reliable assembly sequence, one should evaluate all the possible assembly lines of a given product. This task can be accomplished by using precedence diagrams1 [Y. Nof et al., 1997; Prenting and Battaglin, 1964]. These diagrams are designed based on the assembly knowledge base. Figure 4.3 shows the precedence diagram for our test assembly; it is described below.

1Precedence diagrams are the graphical representation of a set of precedence constraints or precedence relations [Lambert, 2006].

(65)

Figure 4.3: Precedence diagram of test assembly

Usually such a diagram is organized into columns: all assembly operations that can be carried out first are placed in the first column, and so on. Each individual operation is assigned a number and is represented by a circle, and the connecting arrows show the precedence relations. Now consider the geometrical design of our test assembly components: the shaft has a pin hole on its top and the small gear has a step on one side. These components can be assembled only if they have a proper orientation. Based on these geometrical constraints, two assembly sequences are derived, as shown in the diagram. In the first sequence the shaft is fixed first and the small gear is mounted on it such that the base of the shaft is fixed and the step side of the gear faces towards the shaft's base. In the second sequence, the small gear is fixed first with the step side facing up and the shaft is fixed to it such that the base of the shaft remains above the gear, which is exactly the opposite of the first sequence. Next, in order to fix the big gear, the first option is a direct assembly sequence where the big gear can be mounted directly on the existing components, whereas in the second sequence the complete subassembly needs to be turned upside down and re-fixed. All precedence relations are restricted to simple AND relations. A simple AND relation for a specific component means that it can be assembled only if it has a proper orientation and if the other required operations have been performed beforehand, e.g. the small gear can be assembled only if it has a proper orientation and the shaft has been assembled properly. Once the precedence diagram has been designed, the cost of each action is estimated and the correct assembly sequence is selected by applying mixed integer programming2 [Lambert and M. Gupta, 2002].

2A mixed integer program is the minimization or maximization of a linear function subject to linear constraints.

After evaluating both sequences, the first sequence was selected for our test assembly, mainly for the following reasons:

• It is a direct assembly sequence.

• Task complexity is reduced.

• Overall assembly time is reduced.

The final assembly sequence, along with its task descriptions, is given in Table 4.1.

Table 4.1: Test assembly task description

Task No.   Description
1          Fix shaft's base
2          Mount small gear (step towards shaft's base)
3          Mount big gear
4          Install pin
5          Remove complete assembly

4.3 Test scenario

In order to test the developed system, the following test scenario has been developed:

1. The demonstrator starts the assembly process by executing the main control program on the cell computer.

2. Subsequently the hardware components, i.e. the robot, the gripper and the fixture, start initializing.

3. Once the hardware initialization has succeeded, the software components, i.e. the vision system and the MATLAB engine, are initialized and the process status is displayed in the cell computer's execution window (see Figure 4.7).

4. After all the system components have been initialized, the robot moves to the three predefined positions (POS1, POS2, POS3; see Section 3.2.1) in order to capture the background images of the workspace.

5. The arm then moves to its home position and an assembly message stating "submit the workpieces" is displayed in the execution window. At this point the process execution is paused and continues only after the demonstrator gives a command.
